authors: [ "Fang Liu", "Guoming Tang", "Youhuizi Li", "Zhiping Cai", "Xingzhou Zhang", "Tongqing Zhou" ]
survey_title: A Survey on Edge Computing Systems and Tools
year: 2019
date: 2019-11-07T08:16:40Z
category: cs.DC
abstract:
Driven by the visions of the Internet of Things and 5G communications, edge computing systems integrate computing, storage and network resources at the edge of the network to provide computing infrastructure, enabling developers to quickly develop and deploy edge applications. Edge computing systems have now received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey paper provides a comprehensive overview of the existing edge computing systems and introduces representative projects. A comparison of open source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization of edge computing systems. Open issues for analyzing and designing an edge computing system are also studied in this survey.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "0055f39f-b56b-4a38-8ce6-f52d60700ea8", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ] ], "subsections": [ "e3477cb3-f2f8-4d37-bfaf-3408b8c60cb3", "2d1d8056-04f7-4532-9d27-b8a8f91e3db7", "b72e7b0d-2142-415a-8bed-1b1f88160a2d", "e695a48d-424b-476b-aca3-58d54c720c98", "a3a2618a-548a-4cd2-95bc-06a60c44ffb0", "478327fb-c0c5-49d5-ae91-7fa642949abc", "aa0b39df-82ae-4dab-a3c4-17e99ed6d2be", "df3878a1-598b-4a96-97d4-ce6abf0afee7" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:intro}\nIn the post-Cloud era, the proliferation of the Internet of Things (IoT) and the popularization of 4G/5G are gradually changing the public's habits of accessing and processing data, and challenging the linearly increasing capability of cloud computing. Edge computing is a new computing paradigm with data processed at the edge of the network. Promoted by the fast-growing demand and interest in this area, edge computing systems and tools are blooming, even though some of them are not yet widely used. \nThere are many classification perspectives to distinguish different edge computing systems. To understand why edge computing emerged, as well as its necessity, we pay more attention to the basic motivations. Specifically, based on different design demands, existing edge computing systems can roughly be classified into three categories, together yielding innovations on system architecture, programming models and various applications, as shown in Fig.~\\ref{fig:systemCategory}. \n\\begin{figure}[!tb]\n\\begin{center}\n\\includegraphics[width=0.42\\textwidth]{systemsCategoryAll.pdf}\n\\caption{Categorization of edge computing systems.}\\label{fig:systemCategory}\n\\end{center}\n\\end{figure}\n\\begin{itemize}\n\\item Push from cloud. 
In this category, cloud providers push services and computation to the edge in order to leverage locality, reduce response time and improve user experience. Representative systems include Cloudlet, Cachier, AirBox, and CloudPath. Many traditional cloud computing service providers are actively pushing cloud services closer to users, shortening the distance between customers and cloud computing, so as not to lose market to mobile edge computing. For example, Microsoft launched AzureStack in 2017, which allows cloud computing capabilities to be integrated into the terminal, and data can be processed and analyzed on the terminal device.\n\\item Pull from IoT. Internet of Things (IoT) applications pull services and computation from the faraway cloud to the near edge to handle the huge amount of data generated by IoT devices. Representative systems include PCloud, ParaDrop, FocusStack and SpanEdge. Advances in embedded Systems-on-a-Chip (SoCs) have given rise to many IoT devices that are powerful enough to run embedded operating systems and complex algorithms. Many manufacturers integrate machine learning and even deep learning capabilities into IoT devices. Utilizing edge computing systems and tools, IoT devices can effectively share computing, storage, and network resources while maintaining a certain degree of independence.\n\\item Hybrid cloud-edge analytics. The integration of advantages of cloud and edge provides a solution to facilitate both global optimal results and minimum response time in modern advanced services and applications. Representative systems include Firework and Cloud-Sea Computing Systems\\nop{ and AWS Greengrass}. Such edge computing systems utilize the processing power of IoT devices to filter, pre-process, and aggregate IoT data, while employing the power and flexibility of cloud services to run complex analytics on those data. 
For example, Alibaba Cloud launched its first IoT edge computing product, LinkEdge, in 2018, which expands its advantages in cloud computing, big data and artificial intelligence to the edge to build a cloud/edge integrated collaborative computing system; Amazon released AWS Greengrass in 2017, which can extend AWS seamlessly to devices so that devices can perform local operations on the data they generate, while data is transferred to the cloud for management, analysis, and storage.\n\\end{itemize}\nFrom a research point of view, this paper gives a detailed introduction to the distinctive ideas and model abstractions of the aforementioned edge computing systems. Note that the three categories are presented to clearly explain the necessity of edge computing, and the classification is not the main line of this paper. Specifically, we review systems designed for architecture innovation first, then introduce those for programming models and applications (in Sec.~\\ref{sec:edge-computing-systems}). Besides, some recent efforts for specific application scenarios are also studied.\n\\nop{This paper is from a research point of view, focusing on distinctive ideas and model abstractions of different systems. It introduces representative systems in the categories discussed above, and some recently proposed edge systems are outlined.}\nWhile we can find a lot of systems using edge computing as the building block, such a paradigm still lacks standardization. Therefore, a comprehensive and coordinated set of foundational open-source systems/tools is also needed to accelerate the deployment of IoT and edge computing solutions. Some open source edge computing projects have been launched recently (e.g., CORD). As shown in Fig.~\\ref{fig:systemCategory}, these systems can support the design of both architecture and programming models with useful APIs. 
We review these open source systems with a comparative study on their characteristics (in Sec.~\\ref{sec:open-source-projects}).\n\\nop{This paper studies current work on open source edge computing tools, which is challenging and has attracted a lot of attentions.}\nWhen designing the edge computing systems described above, energy efficiency is always considered one of the major concerns, as the edge hardware is energy-restricted. Meanwhile, the increasing number of IoT devices is driving the growth of energy-hungry services. Therefore, we also review the energy efficiency enhancing mechanisms adopted by the state-of-the-art edge computing systems from the three-layer paradigm of edge computing (in Sec.~\\ref{sec:energy-efficiency}).\n\\nop{This paper reviews the energy efficiency enhancing mechanisms adopted by the state-of-the-art edge computing systems from three Layer of the edge computing paradigm (cloud, edge server and device).}\n\\begin{figure}[!tb]\n\\begin{center}\n\\includegraphics[width=0.42\\textwidth]{framework-intro.pdf}\n\\caption{Major building blocks and organization of this survey paper.}\\label{fig:framework-intro}\n\\vspace{-0.20in}\n\\end{center}\n\\end{figure}\nIn addition to investigations from the system view, we also look into the emerging techniques in edge computing systems from the application view. Recently, deep learning based Artificial Intelligence (AI) applications have become widely used, and offloading AI functions from the cloud to the edge is becoming a trend. However, deep learning models are known for being large and computationally expensive. Traditionally, many systems and tools are designed to run deep learning models efficiently on the cloud. Owing to the multi-layer structure of deep learning, it is well suited to the edge computing paradigm, and more of its functions can be offloaded to the edge. 
Accordingly, this paper also studies the new techniques recently proposed to support the deep learning models at the edge (in Sec.~\\ref{sec:deep-learning-opt}).\n\\nop{This paper studies new techniques recently proposed to support the deep learning models at the edge; these techniques are categorized into three types: systems and toolkits, deep learning packages, and hardware.}\nOur main contributions in this work are as follows:\n\\begin{itemize}\n\\item Reviewing existing systems and open source projects for edge computing by categorizing them from their design demands and innovations. We study the targets, architecture, characteristics, and limitations of the systems in a comparative way.\n\\item Investigating the energy efficiency enhancing mechanisms for edge computing from the view of the cloud, the edge servers, and the battery-powered devices.\n\\item Studying the technological innovations dedicated to deploying deep learning models on the edge, including systems and toolkits, packages, and hardware.\n\\item Identifying challenges and open research issues of edge computing systems, such as mobility support, multi-user fairness, and privacy protection.\n\\end{itemize}\nWe hope this effort will inspire further research on edge computing systems. The contents and their organization in this paper are shown in Fig.~\\ref{fig:framework-intro}. Besides the four major building blocks (Sec.~\\ref{sec:edge-computing-systems}$\\sim$Sec.~\\ref{sec:deep-learning-opt}), we also give a list of open issues for analyzing and designing an edge computing system in Sec.~\\ref{sec:key-design-issues}. The paper is concluded in Sec.~\\ref{sec:conclusion}.\n\\nop{The remaining parts of this paper are organized as follows. Section II introduces ten representative edge computing systems in academia, and outline some recently proposed edge systems. Section III presents some open source edge computing projects. 
Section IV discusses the energy efficiency enhancing mechanisms of edge computing systems. Section V presents new techniques that support the deep learning models at the edge. Open issues for analyzing and designing an edge computing system are studied in section VI. Finally, the paper is concluded in section VII.}", "id": "e3477cb3-f2f8-4d37-bfaf-3408b8c60cb3", "level": "section", "origin_cites_number": 0, "parent_id": "0055f39f-b56b-4a38-8ce6-f52d60700ea8", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:edge-computing-systems}\nIn this section, we review edge computing systems and tools presenting architecture innovations, programming models, and applications, respectively. For each part, we introduce work under the ``push\", ``pull\", and ``hybrid\" demand in turn.", "id": "2d1d8056-04f7-4532-9d27-b8a8f91e3db7", "level": "section", "origin_cites_number": 0, "parent_id": "0055f39f-b56b-4a38-8ce6-f52d60700ea8", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Edge Computing Systems and Tools" ] ], "subsections": [ "41515cac-6ba7-49e0-b96b-0ef0d264a0ca", "de742ea0-6e5a-4f78-8cc0-f5db79790794", "44b0c252-9c6d-4438-9d77-129644ea6df0", "26e36cce-0199-4af9-92e8-65f39c1250d6", "13fec1ef-1af3-4617-8a90-614c3d31824e", "94a01773-4152-4ca2-a525-839a27b31c3b", "d41a4e11-d85b-40f4-b891-834279d27134", "87078893-bde3-463c-947c-f7dc4133bca0", "111f14e4-0ffb-419b-80b8-09b438ff1935", "49d0417a-3b91-46c8-8267-e0938c8f0aa3", "22a62c69-a6b7-4429-8620-50170782843d" ], "title": "Edge Computing Systems and Tools" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{figure}[!tb]\n\\begin{center}\n\\includegraphics[width=0.5\\textwidth]{Cloudlet.png}\n\\caption{Cloudlet Component Overview and Functions that support application mobility. 
\nA: Cloudlet Discovery, B: VM Provisioning, C: VM Handoff.}\\label{fig:Cloudlet}\n\\end{center}\n\\end{figure}\nIn 2009, Carnegie Mellon University proposed the concept of Cloudlet~, and the Open Edge Computing initiative also evolved from the Cloudlet project~. Cloudlet is a trusted, resource-rich computer or cluster of computers that is well-connected to the Internet and available to nearby mobile devices. It upgrades the original two-tier architecture ``Mobile Device-Cloud'' of mobile cloud computing to a three-tier architecture ``Mobile Device-Cloudlet-Cloud''. Meanwhile, Cloudlet can also serve users like an independent cloud, making it a ``small cloud'' or ``data center in a box''. Although the Cloudlet project was not proposed and launched in the name of edge computing, its architecture and ideas fit those of edge computing, and thus it can be regarded as an edge computing system. \nThe Cloudlet is in the middle layer of the three-tier edge computing architecture and can be implemented on a personal computer, a low-cost server, or a small cluster consisting of multiple machines. Like WiFi service access points, a Cloudlet can be deployed at a convenient location (such as a restaurant, a cafe, or a library). Multiple Cloudlets may form a distributed computing platform, which can further extend the available resources for mobile devices~. 
As the Cloudlet is just one hop away from the users' mobile devices, it improves the QoS with low communication delay and high bandwidth utilization.\nIn detail, Cloudlet has three main features as follows.", "id": "41515cac-6ba7-49e0-b96b-0ef0d264a0ca", "level": "subsection", "origin_cites_number": 3, "parent_id": "2d1d8056-04f7-4532-9d27-b8a8f91e3db7", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Edge Computing Systems and Tools" ], [ "subsection", "Cloudlet" ] ], "subsections": [ "52718c0a-d1e8-43e1-ab9c-697977382c63", "7177bd8d-5697-4a4d-b003-8e9ba628a902", "3937f91e-955f-4a8a-9d5c-45cfe802f164" ], "title": "Cloudlet" }, { "cite_extract_rate": 0, "cites": [], "content": "Cloudlet can be regarded as a small cloud computing center located at the edge of the network. Therefore, as the server end of the application, the Cloudlet generally needs to maintain state information for interacting with the client. However, unlike Cloud, Cloudlet does not maintain long-term state information for interactions, but only temporarily caches some state information. This reduces much of the burden of Cloudlet as a lightweight cloud.", "id": "52718c0a-d1e8-43e1-ab9c-697977382c63", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "41515cac-6ba7-49e0-b96b-0ef0d264a0ca", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Edge Computing Systems and Tools" ], [ "subsection", "Cloudlet" ], [ "subsubsection", "Soft State" ] ], "subsections": [], "title": "Soft State" }, { "cite_extract_rate": 0, "cites": [], "content": "Cloudlet has sufficient computing resources to enable multiple mobile users to offload computing tasks to it. 
Besides, Cloudlet has a stable power supply, so energy exhaustion is not a concern.", "id": "7177bd8d-5697-4a4d-b003-8e9ba628a902", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "41515cac-6ba7-49e0-b96b-0ef0d264a0ca", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Edge Computing Systems and Tools" ], [ "subsection", "Cloudlet" ], [ "subsubsection", "Rich Resources" ] ], "subsections": [], "title": "Rich Resources" }, { "cite_extract_rate": 0, "cites": [], "content": "Cloudlets are deployed at places where both the network distance and the physical distance to the end user are short, making it easy to control network bandwidth, delay and jitter. Besides, the physical proximity ensures that the Cloudlet and the user are in the same context (e.g., the same location), based on which customized services (e.g., location-based services) could be provided.\nTo further promote Cloudlet, CMU built up an open edge computing alliance, with Intel, Huawei and other companies~, to develop standardized APIs for Cloudlet-based edge computing platforms. Currently, the alliance has transplanted OpenStack to the edge computing platform, which enables distributed Cloudlet control and management via the standard OpenStack APIs~. With the recent development of edge computing, the Cloudlet paradigm has been widely adopted in various applications, e.g., cognitive assistance systems~, Internet of Things data analysis~, and hostile environments~. \nUnlike the cloud, cloudlets are deployed on the edge of the network and serve only nearby users. Cloudlet supports application mobility, allowing devices to switch service requests to the nearest cloudlet as they move. 
As shown in Fig.~\\ref{fig:Cloudlet}, Cloudlet's support for application mobility relies on three key steps.\n\\textit{1) Cloudlet Discovery:} \nMobile devices can quickly discover the available Cloudlets around them, and choose the most suitable one to offload tasks.\n\\textit{2) VM Provisioning:}\nConfiguring and deploying the service VM that contains the server code on the cloudlet so that it is ready to be used by the client.\n\\textit{3) VM Handoff:}\nMigrating the VM running the application to another cloudlet.", "id": "3937f91e-955f-4a8a-9d5c-45cfe802f164", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "41515cac-6ba7-49e0-b96b-0ef0d264a0ca", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Edge Computing Systems and Tools" ], [ "subsection", "Cloudlet" ], [ "subsubsection", "Close to Users" ] ], "subsections": [], "title": "Close to Users" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{figure}[!tb]\n\\begin{center}\n\\includegraphics[width=0.4\\textwidth]{CloudPath.png}\n\\caption{CloudPath Architecture~.}\\label{fig:CloudPath}\n\\end{center}\n\\end{figure}\nCloudPath~ is an edge computing system proposed by the University of Toronto. In such a system, diverse resources like computing and storage are provided along the path from the user device to the cloud data center. It supports on-demand allocation and dynamic deployment of the multi-level architecture. The main idea of CloudPath is to implement the so-called ``path computing'', such that it can reduce the response time and improve the bandwidth utilization, compared with the conventional cloud computing. \nAs illustrated in Fig.~\\ref{fig:CloudPath}, the bottom layer of CloudPath is user devices, and the top layer is the cloud computing data center. 
The system reassigns the tasks of the data centers along the path (for path computing) to support different types of applications, such as IoT data aggregation, data caching services, and data processing services. Developers can select an optimal hierarchical deployment plan for their services by considering factors such as cost, delay, resource availability and geographic coverage. Path computing builds a multi-tiered architecture, and from the top (traditional data center) to the bottom (user terminal equipment), the device capability becomes weaker, while the number of devices gets larger. On the premise of a clear separation of computation and state, CloudPath extends the abstract shared storage layer to all data center nodes along the path, which reduces the complexity of third-party application development and deployment, and meanwhile keeps the RESTful development style.\nThe CloudPath application consists of a set of short-cycle and stateless functions that can be quickly instantiated at any level of the CloudPath framework. Developers either tag functions to specify where (such as edges, cores, clouds, etc.) their code runs, or tag performance requirements (such as response latency) to estimate the running location. \nCloudPath does not migrate a running function/module, but supports service mobility by stopping the current instance and re-starting a new one at the expected location.\nEach CloudPath node usually consists of the following six modules.\n\\textit{1) PathExecute:} implements a serverless cloud container architecture that supports lightweight stateless application functions/modules.\n\\textit{2) PathStore:} provides a distributed eventually consistent storage system that transparently manages application data across nodes. 
\\nop{PathStore is also used in PathDeploy and PathRoute to get the application program and routing information.}\n\\textit{3) PathRoute:} transmits the request to the most appropriate CloudPath node, according to the information such as user's location in the network, application preferences, or system status.\n\\textit{4) PathDeploy:} dynamically deploys and removes applications on CloudPath nodes based on application preferences and system policies.\n\\textit{5) PathMonitor:} provides real-time monitoring and historical data analysis function to applications and CloudPath nodes. It collects the metrics of other CloudPath modules on each node through the PathStore and presents the data to users using web pages.\n\\textit{6) PathInit:} is an initialization module in the top-level data center node and can be used to upload applications/services to the CloudPath.", "id": "de742ea0-6e5a-4f78-8cc0-f5db79790794", "level": "subsection", "origin_cites_number": 1, "parent_id": "2d1d8056-04f7-4532-9d27-b8a8f91e3db7", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Edge Computing Systems and Tools" ], [ "subsection", "CloudPath" ] ], "subsections": [], "title": "CloudPath" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{figure}[!tb]\n\\begin{center}\n\\includegraphics[width=0.4\\textwidth]{PCloud.pdf}\n\\caption{PCloud Architecture~.}\\label{fig:PCloud}\n\\end{center}\n\\end{figure}\nPCloud~ integrates the edge computing and storage resources with those at the cloud to support seamless mobile services. The architecture of PCloud is shown in Fig.~\\ref{fig:PCloud}. Specifically, these resources are virtualized through a special virtualization layer named STRATUS~, and form a distributed resource pool that can discover new resources and monitor resource changes. With the resource pool, the runtime mechanism is responsible for resource application and allocation. 
Through a resource description interface, the runtime mechanism selects and combines appropriate resources based on the requirements of specified applications. After the resources are combined, it generates a new instance to provide corresponding services for external applications, according to the resource access control policy. (Note that, the computing resources of the newly generated instance may come from multiple devices, which is equivalent to one integrated computing device for the external applications.) Thus, an application program is actually a combination of services running on the PCloud instance. For example, a media player application can be a combination of the storage, decoding and playing services. These services could be local or remote but are transparent to the applications. Furthermore, the PCloud system also provides basic system services, such as permission management and user data aggregation, to control the resources access of other users.\nIn the actual operation, the mobile application describes the required resources to the PCloud through interfaces. The PCloud will find out the optimal resource configuration by analyzing the description and the currently available resources, and then generates an instance to provide corresponding services for the application. \nPCloud integrates edge resources with cloud resources so that they can complement each other. The abundant resources from the cloud can make up for the lack of computing and storage capabilities at the edge; meanwhile, due to the physical proximity, edge devices can provide low-latency services to the user that cloud cannot offer. 
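As a rough illustration, the resource selection and combination process described above can be sketched as follows; the data types and the greedy selection policy are our own assumptions for illustration, not PCloud's actual interface:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    device: str      # device contributing the resource
    kind: str        # e.g. cpu, storage, display
    capacity: int

def compose_instance(pool, demands):
    # Greedily pick one resource per demanded kind; the resulting
    # instance may span several physical devices, transparently to
    # the application, mirroring the multi-device instances above.
    instance = {}
    for kind, need in demands.items():
        candidates = [r for r in pool if r.kind == kind and r.capacity >= need]
        if not candidates:
            return None   # the current pool cannot satisfy this demand
        # prefer the candidate with the largest spare capacity
        instance[kind] = max(candidates, key=lambda r: r.capacity)
    return instance

pool = [Resource('phone', 'cpu', 2), Resource('edge-box', 'cpu', 8),
        Resource('edge-box', 'storage', 64), Resource('cloud', 'storage', 1024)]
inst = compose_instance(pool, {'cpu': 4, 'storage': 32})
```

In this toy run the composed instance draws its CPU from the edge box and its storage from the cloud, illustrating how edge and cloud resources complement each other.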
In addition, PCloud also enhances the availability of the entire system and can choose alternate resources when encountering network and equipment failures.", "id": "44b0c252-9c6d-4438-9d77-129644ea6df0", "level": "subsection", "origin_cites_number": 2, "parent_id": "2d1d8056-04f7-4532-9d27-b8a8f91e3db7", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Edge Computing Systems and Tools" ], [ "subsection", "PCloud" ] ], "subsections": [], "title": "PCloud" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{figure}[!tb]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{ParaDrop.png}\n\\caption{ParaDrop System~.}\\label{fig:ParaDrop}\n\\end{center}\n\\end{figure}\nParaDrop~ is developed by the WiNGS Laboratory at the University of Wisconsin-Madison. It is an edge computing framework that makes the computing/storage resources close to mobile devices and data sources available to the third party developers. Its goal is to bring intelligence to the network edge in a friendly way. \nParaDrop upgrades the existing access point to an edge computing system, which supports applications and services like a normal server. To isolate applications under the multi-tenancy scenario, ParaDrop leverages the lightweight container virtualization technique. As Fig.~\\ref{fig:ParaDrop} shows, the ParaDrop server (in the cloud) controls the deployment, starting and deletion of the applications. It provides a group of APIs, via which the developer can monitor and manage the system resources and configure the running environment. The web UI is also provided, through which the user can directly interact with the applications.\nThe design goals of ParaDrop include three aspects: multi-tenancy, efficient resource utilization and dynamic application management. 
To achieve these goals, the container technology is applied to manage the multi-tenancy resources separately.\\nop{Besides, the flexible APIs helps developers allocate specific resources to each application.} As the resources of the edge devices are very limited, compared with the virtual machine, the container consumes fewer resources and is more suitable for delay-sensitive and high-I/O applications. Moreover, as applications run in containers, ParaDrop can easily control their startup and revocation.\nParaDrop is mainly used for IoT applications, especially IoT data analysis. Its advantages over traditional cloud systems can be summarized as follows: a) since sensitive data can be processed locally, it protects the users’ privacy; b) the WiFi access point is only one hop away from the data source, leading to low network delay and stable connection; c) only user requested data are transmitted to the equipment through the Internet, thus cutting down the total traffic amount and saving the bandwidth of the backbone network; d) the gateway can obtain the location information of the edge devices through radio signals (e.g., the distance between devices, and the location of the specific device), which facilitates the location-aware services; e) when edge devices cannot be connected to the Internet, the edge service can still work. 
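To make the management model above concrete, the following minimal sketch mimics ParaDrop-style dynamic application management on a resource-limited gateway; all class and method names are illustrative assumptions, and real ParaDrop runs apps in lightweight Linux containers rather than the plain records modeled here:

```python
class EdgeGateway:
    # Toy model of an access point managing multi-tenant edge apps.
    def __init__(self, cpu_limit):
        self.cpu_limit = cpu_limit
        self.apps = {}   # app name -> record with owner, cpu share, state

    def install(self, name, owner, cpu):
        used = sum(a['cpu'] for a in self.apps.values())
        if used + cpu > self.cpu_limit:   # edge resources are scarce
            raise RuntimeError('insufficient resources on the gateway')
        self.apps[name] = {'owner': owner, 'cpu': cpu, 'running': False}

    def start(self, name):
        self.apps[name]['running'] = True

    def stop(self, name):
        self.apps[name]['running'] = False

    def revoke(self, name):   # dynamic application management
        self.apps.pop(name, None)

gw = EdgeGateway(cpu_limit=4)
gw.install('camera-analytics', owner='tenant-a', cpu=2)
gw.install('sensor-logger', owner='tenant-b', cpu=1)
gw.start('camera-analytics')
```

The explicit admission check on install reflects why containers, with their lower overhead, suit such constrained multi-tenant hardware better than virtual machines.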
\\nop{In a word, ParaDrop is well developed, the software system is fully open source, and the hardware devices that support ParaDrop system are also on the market.}", "id": "26e36cce-0199-4af9-92e8-65f39c1250d6", "level": "subsection", "origin_cites_number": 1, "parent_id": "2d1d8056-04f7-4532-9d27-b8a8f91e3db7", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Edge Computing Systems and Tools" ], [ "subsection", "ParaDrop" ] ], "subsections": [], "title": "ParaDrop" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{figure}[!tb]\n\\begin{center}\n\\includegraphics[width=0.5\\textwidth]{SpanEdge.pdf}\n\\caption{SpanEdge Architecture~.}\\label{fig:SpanEdge}\n\\end{center}\n\\end{figure}\nStream processing is an important type of application in edge computing, where data is generated by various data sources in different geographical locations and continuously transmitted in the form of streams. Traditionally, all raw data is transmitted over the WAN to the data center server, and stream processing systems, such as Apache Spark and Flink, are also designed and optimized for one centralized data center. However, this approach cannot effectively handle the huge amount of data generated by numerous devices at the edge of the network, and the situation is even worse when the applications require low latency and predictability. SpanEdge~ is a research project of the Royal Institute of Technology in Sweden. It unifies the cloud central node and the near-edge central node, reduces network latency in WAN connections, and provides a programming environment that allows the program to run near the data source. Developers can focus on developing streaming applications without considering where the data sources are located and distributed.\nThe data center in SpanEdge is composed of two levels: the cloud data center is the first level and the edge data center (such as the operator's cloud, Cloudlet, or Fog) is the second level. 
Partial streaming processing tasks run on the edge central nodes to reduce latency and boost performance. SpanEdge uses the master-worker architecture (as shown in Fig.~\\ref{fig:SpanEdge}) with one manager and multiple workers. The manager collects the streaming processing requests and assigns tasks to the workers. Workers mainly consist of cluster nodes whose primary responsibility is to execute tasks. There are two types of workers: hub-worker (first level) and spoke-worker (second level). \\nop{They have no functional differences.} The network transmission overhead and latency are related to the geographical location and network connection status between workers. The communication in SpanEdge is also divided into a system management communication (worker-manager) and a data transmission communication (worker-worker). System management communication aims to schedule tasks between managers and workers, and data transmission communication takes care of the data flow in each task. Each worker has an agent that handles system management operations, such as sending and receiving management information, monitoring compute nodes to ensure that they are running normally, and periodically sending heartbeat messages to the manager to ensure immediate recovery when the task fails.\nSpanEdge allows developers to divide the tasks into local ones and global ones. Local tasks should run on the node near the data source and provide only part of the required data; global tasks are responsible for further processing the results of local tasks and aggregating all results. SpanEdge creates a copy of the local task on each spoke-worker which has all corresponding data sources. If the data is insufficient, the task is dispatched to the hub-worker. The global task runs on a separate hub-worker, and the scheduler selects the optimal hub-worker based on the network delay (i.e. 
the distance from the spoke-worker in the network topology).", "id": "13fec1ef-1af3-4617-8a90-614c3d31824e", "level": "subsection", "origin_cites_number": 1, "parent_id": "2d1d8056-04f7-4532-9d27-b8a8f91e3db7", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Edge Computing Systems and Tools" ], [ "subsection", "SpanEdge" ] ], "subsections": [], "title": "SpanEdge" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{figure}[!tp]\n\t\\centering\n\t\\includegraphics[width=0.9\\linewidth]{Cloud-Sea.pdf}\n\t\\caption{Cloud-Sea Computing Model.}\n\t\\label{fig:Cloud-Sea}\n\\end{figure}\nCloud-Sea Computing Systems project~ is a main research thrust of the Next Generation Information and Communication Technology initiative (the NICT initiative) and a 10-year strategic priority research initiative, launched by the Chinese Academy of Science in 2012. The NICT initiative aims to address the three major technology challenges in the coming Zettabyte era: i) improving the performance per watt by $1000$ times, ii) supporting more applications from the human-cyber-physical ternary computing, and iii) enabling transformative innovations in devices, systems and applications, while without polluting beneficial IT ecosystems.\nIn the cloud-sea computing system, ``cloud'' refers to the datacenters and ``sea'' refers to the terminal side (the client devices, e.g., human-facing and physical world facing subsystems). The design of the project can be depicted from three levels: the overall systems architecture level, the datacenter server and storage system level, and the processor chip level. 
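To make these placement rules concrete, the following minimal Python sketch (with a hypothetical worker and latency model, not SpanEdge's actual implementation) shows how a local task is replicated on every qualifying spoke-worker and how the global task falls to the lowest-latency hub-worker:

```python
# Illustrative sketch of SpanEdge-style task placement (hypothetical model,
# not the actual SpanEdge scheduler).

def place_local_task(sources, spoke_workers, hub_worker):
    """Replicate a local task on every spoke-worker that hosts ALL of its
    data sources; if none qualifies, dispatch it to the hub-worker."""
    placements = []
    for worker, hosted in spoke_workers.items():
        if set(sources) <= hosted:
            placements.append(worker)
    return placements if placements else [hub_worker]

def place_global_task(hub_latency):
    """Run the global task on the hub-worker with the lowest latency
    (a stand-in for distance in the network topology)."""
    return min(hub_latency, key=hub_latency.get)

# Example: two spoke-workers near different data sources, two hub-workers.
spokes = {"spoke-A": {"cam1", "cam2"}, "spoke-B": {"cam3"}}
print(place_local_task(["cam1", "cam2"], spokes, "hub-1"))  # ['spoke-A']
print(place_global_task({"hub-1": 40, "hub-2": 25}))        # hub-2
```

A task needing both `cam1` and `cam3` would find no single spoke-worker with all its sources and fall back to the hub-worker, mirroring the "insufficient data" rule above.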
The project contains four research components: a computing model called REST 2.0 which extends the representational state transfer (REST)~ architectural style of Web computing to cloud-sea computing, a three-tier storage system architecture capable of managing ZBs of data, a billion-thread datacenter server with high energy efficiency, and an elastic processor aiming at an energy efficiency of one trillion operations per second per watt.
As shown in Fig.~\ref{fig:Cloud-Sea}, the cloud-sea computing model includes sea-side functions and cloud-side functions. The sea zone is expanded from the traditional cloud client, e.g., a home, an office, or a factory manufacturing pipeline. There can be multiple client devices inside a sea zone, and each device can be human-facing or physical-world facing. In a sea zone, there is a special device (like a home datacenter or a smart TV set) designated as the seaport for three purposes: i) a gateway interfacing the sea zone to the cloud, ii) a gathering point of information and functionalities inside the sea zone, and iii) a shield protecting the security and privacy of the sea zone. A device inside a sea zone does not communicate with the cloud directly, but through the seaport, either implicitly or explicitly. SeaHTTP, a variant of HTTP $2.0$, is the protocol widely used in sea zones of the cloud-sea system to connect with the cloud.
The cloud-sea computing model has four distinct features. 
\textit{1) Ternary computing via sea devices:} 
Human and physical-world entities interface and collaborate with cyberspace through the sea side. For example, users can leverage a smartphone application to read and control a sensor device in a home through an Internet application service.
\textit{2) Cooperation with locality:} 
A specific network computing system will partition its functions between the sea side and the cloud side. 
Sea-side functions include sensing, interaction, and local processing, while cloud-side functions include aggregation, request-response, and big data processing.
\textit{3) Scalability to ZB and trillion devices:} 
This future network will collectively need to support trillions of sea devices and handle ZBs of data.
\textit{4) Minimal extension to existing ecosystems:} 
The REST $2.0$ cloud-sea computing architecture attempts to utilize existing Web computing ecosystems as much as possible. 
Overall, the cloud-sea computing model migrates cloud computing functions toward the sea side; it focuses on the devices at the ``sea'' side and the data at the ``cloud'' side. Typical edge computing is more general and may involve any intermediate computing and network resources between the ``sea'' and the ``cloud''. The research outcomes of cloud-sea computing (e.g., energy-efficient computing and elastic processor design) are instructive for edge computing.
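The seaport's three roles (gateway, gathering point, privacy shield) can be illustrated with a small, purely conceptual sketch. The class and field names below are invented for illustration and do not reflect SeaHTTP or the real seaport implementation:

```python
# Conceptual sketch of seaport-mediated cloud access in the cloud-sea model
# (hypothetical classes; the real seaport and SeaHTTP are far richer).

class Seaport:
    """Gateway for a sea zone: devices never talk to the cloud directly.
    The seaport gathers readings and shields privacy-sensitive fields."""
    def __init__(self, private_fields=("owner", "location")):
        self.private_fields = set(private_fields)
        self.buffer = []

    def collect(self, device_reading: dict):
        # Gathering point: strip private fields, then buffer locally.
        shielded = {k: v for k, v in device_reading.items()
                    if k not in self.private_fields}
        self.buffer.append(shielded)

    def flush_to_cloud(self):
        # Gateway role: only the seaport uploads, and it does so in batches.
        batch, self.buffer = self.buffer, []
        return batch

port = Seaport()
port.collect({"sensor": "temp-1", "value": 21.5, "owner": "alice"})
port.collect({"sensor": "hum-2", "value": 0.4, "location": "bedroom"})
uploaded = port.flush_to_cloud()
# Private fields ("owner", "location") never leave the sea zone.
```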
With edge computing, the computing resources of the edge nodes can be leveraged to perform the matching, so the network delay can be reduced.
Since edge nodes mainly serve nearby users, the spatio-temporal characteristics of service requests can be exploited. From the caching perspective, Cachier proposes to use edge nodes as ``computable cache devices'' that cache recognition results, reduce the size of the matching dataset, and improve response time. By analyzing the response delay model and the characteristics of the requests, the system can dynamically adjust the size of the dataset cached on the edge nodes according to environmental factors, thus keeping the response time optimal. 
As Fig.~\ref{fig:Cachier} shows, Cachier consists of the Recognizer Module, the Optimizer Module, and the Offline Analysis Module. 
The Recognizer Module analyzes and matches the received images against the cached training data and model. If there is a match, the result is returned directly to the user; otherwise the image is transmitted to the cloud for recognition. 
The distribution estimator in the Optimizer Module uses maximum a posteriori estimation to predict the request distribution. Given the classification algorithm and training dataset, the cache lookup delay and the cache accuracy can be calculated by the Offline Analysis Module. Finally, the Profiler sub-module estimates the network delay and cloud latency incurred by cache misses: it measures and records the delay at the corresponding distance in real time, and uses a moving average filter to remove noise. By feeding such information into the expected-delay model, Cachier can calculate the optimal cache size and then adjust the cache on the edge nodes accordingly. 
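The delay-expectation reasoning can be made concrete with a toy model: the edge always pays a lookup cost that grows with the cache size, and a miss additionally pays the cloud round trip; the optimizer picks the cache size minimizing the sum. The cost curves below are invented for illustration and are not Cachier's profiled values:

```python
# Toy version of Cachier-style expected-latency optimization
# (hypothetical cost functions, not the actual Cachier model).

def expected_latency(k, hit_rate, lookup_cost, cloud_rtt):
    """E[T](k): the edge lookup always happens; on a miss
    (probability 1 - hit_rate(k)) we also pay the cloud round trip."""
    return lookup_cost(k) + (1.0 - hit_rate(k)) * cloud_rtt

def optimal_cache_size(sizes, hit_rate, lookup_cost, cloud_rtt):
    """Pick the cache size minimizing expected latency."""
    return min(sizes, key=lambda k: expected_latency(k, hit_rate,
                                                     lookup_cost, cloud_rtt))

# Hypothetical profile: hit rate saturates at 1000 items,
# lookup cost grows linearly with cached items (in ms).
hit = lambda k: min(1.0, k / 1000.0)
cost = lambda k: 0.01 * k
best = optimal_cache_size(range(0, 2001, 100), hit, cost, cloud_rtt=120.0)
print(best)  # 1000: caching more only adds lookup cost without new hits
```

The same structure explains why Cachier shrinks the cache when the cloud round trip is cheap and grows it when misses are expensive.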
\nop{In addition, the Cachier system also provides other optimizations, such as hot start and feature request waiting queues.}
\begin{figure}[!tb]
\begin{center}
\includegraphics[width=0.5\textwidth]{Precon.png}
\caption{Precog System~.}\label{fig:Precon}
\end{center}
\end{figure}
Precog is an extension of Cachier. Cached data resides not only on the edge nodes but also on the end devices, where it is used for selective computation to reduce image transfers between the edge nodes and the cloud. Based on the prediction model, Precog prefetches some of the trained classifiers, uses them to recognize images, and cooperates with the edge nodes to complete the tasks efficiently. 
Precog pushes computing capability to the end device and leverages the locality and selectivity of user requests to reduce the last-mile delay. If the end device used the same cache replacement policy as the edge node, it would suffer a large number of forced misses: a single device only sees its own user's requests, and a user rarely identifies the same object multiple times in the near future. Therefore, Precog constructs a Markov model from the request information aggregated at the edge node to describe the relationships among the identified objects, and then predicts the potential future requests. 
Precog improves the expected-delay model by considering the network state, device capabilities, and the prediction information from edge nodes. 
The system architecture of Precog is illustrated in Fig.~\ref{fig:Precon}. It mainly consists of the feature extraction module, the offline analysis module, the optimizer module, and the profiler module. The edge node is mainly responsible for the construction of the Markov model. 
Based on the offline analysis results, the Markov model prediction results, and the network information provided by the profiler module, the optimizer module determines the number of features to be extracted as well as the data to be cached on the end device.
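The Markov-based prefetching can be sketched as follows; the first-order model and example data are illustrative only, not the actual Precog implementation:

```python
from collections import defaultdict, Counter

# Sketch of a first-order Markov predictor for Precog-style prefetching
# (illustrative; Precog's real model is built from aggregated edge data).
class MarkovPrefetcher:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, requests):
        """Build transition counts from a sequence of recognized objects
        (e.g., the request stream seen at the edge node)."""
        for prev, nxt in zip(requests, requests[1:]):
            self.transitions[prev][nxt] += 1

    def prefetch(self, current, k=2):
        """Classifiers to push to the end device: the k objects most
        likely to be requested after `current`."""
        return [obj for obj, _ in self.transitions[current].most_common(k)]

m = MarkovPrefetcher()
m.observe(["mona_lisa", "venus", "mona_lisa", "venus", "david"])
print(m.prefetch("mona_lisa", k=1))  # ['venus']
```

Because the model is trained on the edge node's aggregated requests rather than a single user's history, it avoids the forced misses a per-device replacement policy would cause.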
When a user initiates a cloud operation through the FocusStack API (such as instantiating a container), the LSA subsystem analyzes the scope of the request based on the Geocast route, sends a geographically scoped resource request to the target area, and waits for responses from (online) edge devices that can satisfy the requirements. Then, the selected edge device runs the corresponding OpenStack operations with the help of the conductor module.
The situational awareness subsystem enables a group of devices to monitor each other's liveness, and each device can update the sensing information of the other devices. 
The geo-processing projection service is primarily intended to pass request and reply messages between areas of interest, send out device control information (e.g., to drones), and broadcast location-based information. The service is based on a two-layer network, and data can be transmitted over a dedicated WiFi network or the Internet. The sender transmits the data packet to the geo-router server, which tracks the location and metadata of each edge device; the data packet is then sent to each device (including edge devices and a cloud device running a SAMonitor instance) in the specified area based on the address. Location and connection statuses are maintained by the geo-routing database (GRDB). GCLib is a software framework that provides data acquisition services, and it runs on edge devices and cloud applications that use SAMonitor. 
Any application or service that needs the state of the devices in an area uses a SAMonitor component to communicate with the LSA subsystem. The application server makes a request to SAMonitor through the FocusStack API, and then SAMonitor builds a real-time graph of the current area and returns a list of available edge devices. 
The regional graph is sent to the conductor in the OSE, which is responsible for checking whether the devices are capable of running the tasks, whether the pre-defined policy rules are met, and so on. After that, the list of available edge devices is submitted to the application server, and the server selects the devices to be used. OSE manages and deploys the program through the OpenStack Nova API. The edge devices run a custom version of Nova Compute that interacts with the local Docker to manage the containers. The containers on the edge devices support all OpenStack services, including access to virtual networks and application-granularity configuration.
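The end-to-end discovery flow can be approximated by a small filter pipeline: scope the request to an area of interest, keep devices that are alive and have sufficient resources, then apply the conductor's policy checks. The device records, fields, and coordinates below are hypothetical, not FocusStack's actual data model:

```python
# Simplified sketch of FocusStack-style scoped device discovery
# (hypothetical data model; the real system uses geocast + OpenStack).

def in_area(dev, area):
    """Area of interest modeled as a latitude/longitude bounding box."""
    (lat0, lat1), (lon0, lon1) = area
    return lat0 <= dev["lat"] <= lat1 and lon0 <= dev["lon"] <= lon1

def discover(devices, area, needed_cpu, policy=lambda d: True):
    """Return edge devices in the area that are online, have enough
    spare CPU, and pass the conductor's policy checks."""
    return [d["id"] for d in devices
            if in_area(d, area) and d["online"]
            and d["free_cpu"] >= needed_cpu and policy(d)]

fleet = [
    {"id": "drone-1", "lat": 40.1, "lon": -75.2, "online": True,  "free_cpu": 2},
    {"id": "drone-2", "lat": 40.1, "lon": -75.2, "online": False, "free_cpu": 4},
    {"id": "cam-7",   "lat": 51.5, "lon": -0.1,  "online": True,  "free_cpu": 8},
]
area = ((40.0, 41.0), (-76.0, -75.0))
print(discover(fleet, area, needed_cpu=1))  # ['drone-1']
```

In the real system the area filter is performed by the geocast layer and the capability/policy checks by the conductor; here they are collapsed into one function for clarity.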
The edge functions are implemented through system-level containers, imposing minimal constraints on developers. Security is enhanced by using hardware security mechanisms such as Intel SGX. AirBox provides centrally controlled backend services for discovering and registering edge nodes. The AB console is a web-based management system that activates the Docker start-up process on the edge node via the AB provisioner. 
To ensure security, each EF consists of a trusted part and an untrusted part; the untrusted part is responsible for all network and storage interactions. Based on OpenSGX APIs, AirBox provides four extensible APIs to implement secure communication and storage: Remote Attestation, Remote Authentication, Sealed Storage, and EF Defined Interface. AirBox supports a variety of edge features such as aggregation, buffering, and caching.
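The trusted/untrusted split and the Sealed Storage idea can be illustrated conceptually: the trusted part holds a sealing key that the untrusted part never sees, so the untrusted part can only store and move opaque blobs. The sketch below uses an HMAC to stand in for SGX sealing and is illustrative only, not AirBox's actual implementation:

```python
import hmac, hashlib, os

# Conceptual sketch of an AirBox-style trusted/untrusted EF split
# (illustrative only; the real system relies on Intel SGX via OpenSGX).

class TrustedPart:
    """Models the enclave: holds a sealing key the untrusted side never sees."""
    def __init__(self):
        self._seal_key = os.urandom(32)

    def seal(self, data: bytes) -> bytes:
        # Integrity tag binds the data to the enclave's key.
        tag = hmac.new(self._seal_key, data, hashlib.sha256).digest()
        return tag + data

    def unseal(self, blob: bytes) -> bytes:
        tag, data = blob[:32], blob[32:]
        expect = hmac.new(self._seal_key, data, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            raise ValueError("sealed blob was tampered with")
        return data

class UntrustedPart:
    """Models the host side: performs storage I/O on opaque sealed blobs."""
    def __init__(self):
        self.store = {}
    def put(self, key, blob): self.store[key] = blob
    def get(self, key): return self.store[key]

enclave, host = TrustedPart(), UntrustedPart()
host.put("cache", enclave.seal(b"user data"))
```

Real SGX sealing also encrypts the data and derives the key from the CPU and enclave identity; the HMAC here only demonstrates the trust boundary.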
Along the path of data transmission, the edge nodes perform a series of calculations upon the data, thus forming a ``computational flow''. The beginning of the computational flow could be user terminals, edge servers close to the user, edge servers close to the cloud, or cloud nodes. \nIn the Firework system, the data processing service is split into multiple sub-services, and the scheduling of the data processing is performed at the following two layers.\n\\textit{1) Same sub-service layer scheduling:} \nA Firework node can cooperate with another for the same sub-services in the surrounding area to achieve optimal response time. For idle Firework nodes, the system can schedule sub-service programs onto them, such that a cluster providing specific sub-service is formed dynamically and complete the service faster.\n\\textit{2) Computational flow layer scheduling:} \nFirework nodes along the computational flow can cooperate with each other and dynamically schedule execution nodes to achieve an optimal solution. For example, depending on the states of the network, the system can choose Firework nodes for service providing based on the nodes' locations (e.g., selecting those closest to the users).\n\\begin{figure}[!tb]\n\\begin{center}\n\\includegraphics[width=0.35\\textwidth]{Firework.jpg}\n\\caption{An example of a Firework instance that consists of heterogeneous computing platforms~.}\\label{fig:Firework}\n\\end{center}\n\\end{figure}\nAs shown in Fig.~\\ref{fig:Firework}, Firework divides the nodes into computing nodes and managers, depending on the type of service those nodes provide. 
In general, each node with the Firework model has three modules: the job management module, the actuator management module, and the service management module.\n\\begin{table*}[htb]\n\\centering\n\\caption{Summary of edge computing systems} \n\\label{tab:otherEdgeSystems}\n\\begin{tabular}{cccccc}\n\\hline\n\\textbf{\\begin{tabular}[c]{@{}c@{}}Application\\\\ Scenarios\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}c@{}}Edge\\\\ Computing\\\\ Systems\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}c@{}}End\\\\ Devices\\end{tabular}} & \\textbf{Edge Nodes} & \\textbf{\\begin{tabular}[c]{@{}c@{}}Computation\\\\ Architecture\\end{tabular}} & \\textbf{Features/Targets} \n \\\\ \\hline\n\\multirow{10}{*}{\\begin{tabular}[c]{@{}c@{}} General \\\\ Usage \\\\ Scenario \\end{tabular}} & Cloudlet & Mobile devices & Cloudlet & \\begin{tabular}[c]{@{}c@{}}Hybrid\\\\ (3-tier)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Lightweight\\\\ VM migration\\end{tabular} \n\\\\ \\cline{2-6}\n & PCloud & Mobile devices & \\begin{tabular}[c]{@{}c@{}}Mobile devices,\\\\ local server, PC\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Hybrid\\\\ (3-tier)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Resource integration, \\\\ dynamic allocation\\end{tabular} \n\\\\ \\cline{2-6}\n & ParaDrop & IoT devices & Home gateway & \\begin{tabular}[c]{@{}c@{}}Hybrid\\\\ (3-tier)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Hardware, \\\\ developer support\\end{tabular}\n\\\\ \\cline{2-6}\n & Cachier \\& Precog & Mobile devices & \\begin{tabular}[c]{@{}c@{}}Mobile devices,\\\\ local server, PC\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Hybrid\\\\ (3-tier)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Figure recognition, \\\\ identification \\end{tabular}\n\\\\ \\cline{2-6}\n & FocusStack & IoT devices & Router, server & \\begin{tabular}[c]{@{}c@{}}Hybrid\\\\ (3-tier)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Location-based info, \\\\ OpenStack extension \\end{tabular}\n\\\\ \\cline{2-6}\n & SpanEdge & IoT 
devices & \\begin{tabular}[c]{@{}c@{}}Local cluster,\\\\ Cloudlet, Fog \\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Hybrid\\\\ (2-tier)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Streaming processing, \\\\ local/global task \\end{tabular}\n\\\\ \\cline{2-6}\n & AirBox & IoT devices & \\begin{tabular}[c]{@{}c@{}}Mobile devices,\\\\ local server, PC\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Hybrid\\\\ (3-tier)\\end{tabular} & Security\n\\\\ \\cline{2-6}\n & CloudPath & Mobile devices & \n \\begin{tabular}[c]{@{}c@{}}Multi-level \\\\data centers\\end{tabular}& \\begin{tabular}[c]{@{}c@{}}Hybrid\\\\ (multi-tier)\\end{tabular} & Path computing\n\\\\ \\cline{2-6}\n & Firework & Firework.Node & \n Firework.Node & \\begin{tabular}[c]{@{}c@{}}Two-layer\\\\ scheduling\\end{tabular} & Programming model\n\\\\ \\cline{2-6}\n & Cloud-Sea & Sea & \n Seaport & \\begin{tabular}[c]{@{}c@{}}Hybrid\\\\ (3-tier)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Minimal extension,\\\\transparency\\end{tabular}\n\\\\ \\hline\n\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Vehicular\\\\ Data\\\\ Analytics\\end{tabular}} & OpenVDAP & CAVs & XEdge & \\begin{tabular}[c]{@{}c@{}}Hybrid\\\\ (2-tier)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}General\\\\ platform\\end{tabular} \\\\ \\cline{2-6}\n & SafeShareRide & \\begin{tabular}[c]{@{}c@{}}Smartphones\\\\ and vehicles\\end{tabular} & Smartphones & \\begin{tabular}[c]{@{}c@{}}Hybrid\\\\ (2-tier)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}In-vehicle\\\\ security\\end{tabular} \\\\ \\hline\n\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Smart\\\\ Home\\end{tabular}} & Vigilia & \\begin{tabular}[c]{@{}c@{}}Smart\\\\ home devices\\end{tabular} & Hubs & \\begin{tabular}[c]{@{}c@{}}Edge\\\\ only\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Smart\\\\ Home Security\\end{tabular} \\\\ \\cline{2-6}\n & HomePad & \\begin{tabular}[c]{@{}c@{}}Smart\\\\ home devices\\end{tabular} & Routers & \\begin{tabular}[c]{@{}c@{}}Edge\\\\ only\\end{tabular} & 
\\begin{tabular}[c]{@{}c@{}}Smart\\\\ Home Security\\end{tabular} \\\\ \\hline\n\\multirow{3}{*}{\\begin{tabular}[c]{@{}c@{}}Video\\\\ Stream\\\\ Analytics\\end{tabular}} & LAVEA & $\\sim$ & $\\sim$ & \\begin{tabular}[c]{@{}c@{}}Edge\\\\ or cloud\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Low\\\\ latency response\\end{tabular} \\\\ \\cline{2-6}\n & VideoEdge & Cameras & \\begin{tabular}[c]{@{}c@{}}Cameras and\\\\ private\\\\ clusters\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Hybrid\\\\ (3-tier)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Resource-accuracy\\\\ tradeoff\\end{tabular} \\\\ \\cline{2-6}\n & \\begin{tabular}[c]{@{}c@{}}Video\\\\ on drones\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Autonomous\\\\ drones\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Portable\\\\ edge computers\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Edge\\\\ only\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Bandwidth\\\\ saving\\end{tabular} \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Virtual\\\\ Reality\\end{tabular} & MUVR & Smartphones & \\begin{tabular}[c]{@{}c@{}}Individual\\\\ households\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Edge\\\\ only\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Resource utilization\\\\ efficiency\\\\ optimization\\end{tabular} \\\\ \\hline\n\\end{tabular}\n\\end{table*}\n\\textit{1) Service management module:} \nThis type of module is designed for the management of data views. It provides interfaces to update the data view, as well as relevant programs for data processing.\n\\textit{2) Job management module:}\nThis type of module is responsible for the scheduling, monitoring, and evaluation of the task executions. When the local computing resources are insufficient, the module can look into the node list and the data view\\nop{ (of the Firework manager)} and make resource re-scheduling at the same sub-service layer. 
When a sub-service is running, the module also provides the necessary monitoring information and gives feedback to upstream and downstream nodes for computational flow layer scheduling.
\textit{3) Actuator management module:}
This type of module is mainly responsible for managing all hardware resources and hosting the execution processes of different tasks. With the help of this module, the devices, runtime environments, and upper-layer functions are decoupled, so that the nodes of a Firework system are not limited to a certain type of device, and the data processing environment is not limited to a certain type of computing platform.
\nop{
Firework manager has two sub-modules, and Firework computing node has three sub-modules. For Firework manager, it includes service management module that provides the registration and viewing capabilities of the view and the status of the associated Firework compute nodes, and job management module, which is responsible for scheduling the service. Firework computing node refers to a node that provides computing resources, which may have its own database and provide a corresponding view of the data but may also simply provide computing resources. 
}
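The same-sub-service-layer re-scheduling can be sketched as follows; the node model, load threshold, and service names are hypothetical, not Firework's actual logic:

```python
# Illustrative sketch of Firework's same-sub-service-layer scheduling
# (hypothetical node model, not the actual Firework implementation).

def schedule(sub_service, nodes, max_load=0.8):
    """Pick the lowest-latency node already offering `sub_service` that is
    not overloaded; otherwise deploy the sub-service on the idlest node,
    dynamically forming a cluster that provides the sub-service."""
    candidates = [n for n in nodes
                  if sub_service in n["services"] and n["load"] < max_load]
    if candidates:
        return min(candidates, key=lambda n: n["latency_ms"])["id"], False
    # No capacity among current providers: re-schedule onto an idle node.
    idle = min(nodes, key=lambda n: n["load"])
    return idle["id"], True  # True => the sub-service must be deployed first

nodes = [
    {"id": "edge-1", "services": {"plate-ocr"}, "load": 0.9, "latency_ms": 5},
    {"id": "edge-2", "services": {"plate-ocr"}, "load": 0.3, "latency_ms": 12},
    {"id": "edge-3", "services": set(),         "load": 0.1, "latency_ms": 8},
]
print(schedule("plate-ocr", nodes))   # ('edge-2', False)
print(schedule("face-blur", nodes))   # ('edge-3', True)
```

Computational flow layer scheduling would then chain such decisions along the path from data source to cloud, which this sketch does not model.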
We highlight that, in addition to these efforts, there are many other edge systems tuned for a variety of application scenarios. In Table~\ref{tab:otherEdgeSystems}, we briefly summarize the general and application-specific edge systems, which leverage different kinds of edge nodes to serve diverse end devices using either a hybrid or edge-only computation architecture.
Compared to both on-board computation and cloud-based computation, edge computing can provide more effective data analytics with lower latency for moving vehicles~. In~, an open full-stack edge computing platform, OpenVDAP, is proposed for the data analytics of connected and autonomous vehicles (CAVs). OpenVDAP proposes systematic mechanisms, including varied wireless interfaces, to utilize the heterogeneous computation resources of nearby CAVs, edge nodes, and the cloud. For optimal utilization of the resources, a dynamic scheduling interface is also provided to sense the status of available resources and to offload divided tasks in a distributed way for computation efficiency. SafeShareRide is an edge-based attack detection system addressing in-vehicle security for ridesharing services~. Its three detection stages leverage the smartphones of both drivers and passengers as the edge computing platform to collect multimedia information in vehicles. Specifically, the speech recognition and driving behavior detection stages are first carried out independently to capture in-vehicle danger, and the video capture and uploading stage is activated when abnormal keywords or dangerous behaviors are detected, collecting videos for cloud-based analysis. With such an edge-cloud collaborative architecture, SafeShareRide can accurately detect in-vehicle attacks with low bandwidth demand.
Another scenario where edge computing can play an important role is IoT device management in the smart home environment, where the privacy of the wide range of home devices is a major concern. 
In~, the Vigilia system is proposed to harden smart home systems by restricting the network access of devices. A default deny access policy and an API-granularity device access mechanism for applications are adopted to enforce access control at the network level. Run-time checking implemented in the routers only permits the declared communications, thus helping users secure their home devices. Similarly, the HomePad system in~ also proposes to execute IoT applications at the edge and introduces a privacy-aware hub to mitigate security concerns. HomePad allows users to specify privacy policies that regulate how applications access and process their data. By requiring applications to use explicit information flows, HomePad can use Prolog rules to verify at install time whether applications are able to violate the defined privacy policy.
Edge computing has also been widely used in video stream analytics. LAVEA is an edge-based system built for latency-aware video analytics near the end users~. To minimize the response time, LAVEA formulates an optimization problem to determine which tasks to offload to the edge computer and uses a task queue prioritizer to minimize the makespan. It also proposes several task placement schemes to enable collaboration among nearby edge nodes, which can further reduce the overall task completion time. VideoEdge is a system that finds the most promising video analytics implementation across a hierarchy of clusters in the city environment~. A 3-tier computation architecture is considered, with deployed cameras and private clusters as the edge and a remote server as the cloud. The hierarchical edge architecture is also adopted in~ and is believed to be promising for processing live video streams at scale. 
Technically, VideoEdge searches thousands of combinations of computer vision component implementations, knobs, and placements, and uses an efficient heuristic to find a configuration that balances accuracy and resource demands. In~, a video analytics system for autonomous drones is proposed, where edge computing is introduced to save bandwidth. Portable edge computers are required here to support dynamic transportation during a mission. In total, four different video transmission strategies are presented to build an adaptive and efficient computer vision pipeline. In addition to the analytics work (e.g., object recognition), the edge nodes also train filters for the drones to avoid uploading uninteresting video frames.
To provide flexible virtual reality (VR) on untethered smartphones, edge computing can be used to offload the heavy workload from smartphones to their nearby edge cloud~. However, rendering the panoramic VR frames (i.e., 2GB per second) can saturate individual households, which commonly serve as the edge in the house. In~, the MUVR system is designed to support multi-user VR with efficient utilization of bandwidth and computation resources. MUVR is built on the basic observation that the VR frames rendered and transmitted to different users are highly redundant. For computation efficiency, MUVR maintains a two-level hierarchical cache of invariant backgrounds at the edge and the user end to reuse frames whenever necessary. 
Meanwhile, MUVR transmits only a subset of the frames in full and delivers just the distinct portions of the remaining frames to further reduce transmission costs.
\nop{
gateways) through the backend Cloud Controller and the developer API. There can be multiple chutes on one edge node; isolation among them is guaranteed by container technology. In Fig.~\\ref{fig:ParaDrop_2}, two services are used as examples to illustrate the usage scenarios. SecCam is a wireless security monitoring service that collects video data within its range and analyzes activities locally. EnvSense is an environmental monitoring service that monitors temperature and humidity in buildings.\n\\begin{figure}[!tb]\n\\begin{center}\n\\includegraphics[width=0.5\\textwidth]{ParaDrop_2.png}\n\\caption{ParaDrop System}\\label{fig:ParaDrop_2}\n\\end{center}\n\\end{figure}\nThe ParaDrop Cloud Controller manages ParaDrop system resources, maintains information about edge compute nodes and users, and provides the Chute Store (like the Google Play Store and App Store) for software distribution. From the programming viewpoint, the ParaDrop Cloud Controller supports two important interfaces: the WAMP API and the HTTP RESTful API. The WAMP (Web Application Messaging Protocol) API is used to communicate with the edge compute nodes, send control information, and receive replies and status reports in real time. The HTTP RESTful API is used to communicate with users, developers and administrators. Specifically, the backend aggregates the data from all authenticated edge compute nodes to provide a graphical display to the users. From the functionality viewpoint, the ParaDrop Cloud Controller saves the deployment information of the ParaDrop system, such as the location and configuration of edge compute nodes, and transfers messages between the user and the edge compute node. 
Note that, because of the highly distributed nature of edge computing, the central Cloud Controller is not strictly required: each edge compute node has a publicly documented local API and can be directly managed using command line tools.\nThe ParaDrop Edge Compute Node is the specific computing platform that provides a virtualized resource environment for services/chutes, including CPU, memory and network resources. The software platform is entirely built from open source components; each chute is deployed as a Docker container so that hardware heterogeneity is shielded. A ParaDrop Daemon runs on the edge devices and is responsible for: 1) registering the current edge device at the ParaDrop Cloud Controller; 2) monitoring the status of the edge device and reporting it to the Cloud Controller; 3) managing local resources and processes, including virtual wireless devices, firewalls, DHCP, and so on; 4) managing the running containers, including the installation, start, stop and uninstallation of chutes based on the information received from the Cloud Controller.\nThe ParaDrop API exports the system capabilities to developers so that they can monitor and control the ParaDrop system. The API is composed of two parts: the cloud API and the edge API. The cloud API provides fixed status information (such as user type, user authorization, chute description, chute resource configuration, and edge compute node configuration) and real-time status information (such as chute and edge compute node running status). Besides, the cloud API can also be used to publish/delete chutes and register/revoke edge compute nodes. The edge API exposes local context information of the edge compute node to chutes, such as network connection status and peripheral device information, so that services can process and analyze the data locally. 
As a result, it brings intelligence to the edge of the network.", "id": "b08f7313-052a-475e-91fc-ecf9b1117462", "level": "subsection", "origin_cites_number": 0, "parent_id": "b72e7b0d-2142-415a-8bed-1b1f88160a2d", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "ParaDrop" ], [ "subsection", "System Architecture" ] ], "subsections": [], "title": "System Architecture" }, { "cite_extract_rate": 0, "cites": [], "content": "The chute is the crucial component of the ParaDrop system; it refers to the service package running on the ParaDrop edge compute nodes. Multiple chutes can be deployed on one edge node, and they are isolated from each other by container technology. Traditional cloud services can be packaged into chutes to improve the user experience. For complex services containing multiple microservices, some of the microservices can be transformed into chutes while the rest can still run in the cloud. Chutes thus also provide good support for cloud-edge cooperation.\nA chute can be seen as a Docker image containing ParaDrop- and service-related files. Each chute has a configuration file that defines its requirements for various resources. The ParaDrop edge compute node allocates appropriate resources according to these requirements and guarantees fairness among multiple tenants. For example, we can set the CPU share value to 1024; that is, when other chutes compete for CPU resources in the system, the relative amount of resources this chute should be allocated is 1024. The maximum memory configuration is a hard constraint: if the value is set too low, the chute may not fully boot due to insufficient memory.\nAside from resource information, the chute configuration file also includes a runtime domain and a traffic domain. The runtime domain describes the operations that the chute can complete on its own. 
Taking the Chute.runtime information of SecCam as an example, the web hosting entry can create a uhttpd instance based on the configured parameters, and the DHCP server entry establishes a default configuration for connecting to security cameras. In many cases, the devices that a chute wants to interact with are probably not directly connected to the chute's network. For security reasons, ParaDrop allows developers to design and implement communication traffic rules, which are written in the traffic domain. These rules can be implemented on top of the firewall rules of the host network stack, allowing devices on the host LAN to connect to a specific interface to interact with the chute. If users in the LAN want to obtain the data stored on a chute, they can reach the web server running on the chute according to the configured rules, or connect through the default ParaDrop SSID via SSH to obtain the webpage. Compared with the traditional cloud computing mode, users can obtain data more quickly and avoid the transmission from the edge node to the cloud server.", "id": "adac1e95-6fb5-4cca-b7a1-c16f16b2589a", "level": "subsection", "origin_cites_number": 0, "parent_id": "b72e7b0d-2142-415a-8bed-1b1f88160a2d", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "ParaDrop" ], [ "subsection", "ParaDrop Chute" ] ], "subsections": [], "title": "ParaDrop Chute" }, { "cite_extract_rate": 0, "cites": [], "content": "To design a ParaDrop program, developers need to focus on two components (as shown in Fig.~\\ref{fig:ParaDrop_3}): 1) build tools, which are responsible for communicating with the rest of the ParaDrop system; 2) ParaDrop daemon instance tools (instance tools), which are responsible for running the application on various edge devices. Developers can create and manage chutes through these tools. 
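As a concrete illustration of the chute configuration discussed above, the resource, runtime and traffic settings might be sketched roughly as follows. The field names here are hypothetical, chosen for readability; they do not reproduce ParaDrop's exact configuration schema.

```python
# Illustrative chute configuration (field names are hypothetical, not ParaDrop's exact schema).
chute_config = {
    "name": "seccam",
    "resources": {
        "cpu_shares": 1024,   # relative CPU weight when chutes compete for the CPU
        "memory_mb": 256,     # hard upper bound; too low and the chute may not boot
    },
    "runtime": {
        "web_hosting": {"server": "uhttpd", "port": 8000},
        "dhcp": {"default_pool": "192.168.128.0/24"},
    },
    "traffic": [
        # firewall-style rule: let LAN devices reach the chute's web interface
        {"allow": "lan", "to_port": 8000},
    ],
}

def check_config(cfg, min_memory_mb=64):
    """Reject configurations whose memory limit is too low for the chute to boot."""
    mem = cfg["resources"]["memory_mb"]
    if mem < min_memory_mb:
        raise ValueError(f"memory_mb={mem} is below the {min_memory_mb} MB boot minimum")
    return True

print(check_config(chute_config))  # True
```

Such a check mirrors the hard memory constraint described above: a value below the boot minimum is rejected before deployment rather than failing on the edge node.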
The detailed implementation is done by the ParaDrop system and is transparent, allowing developers to focus on the application/service itself.\n\\begin{figure}[!tb]\n\\begin{center}\n\\includegraphics[width=0.5\\textwidth]{ParaDrop_3.png}\n\\caption{ParaDrop developer system}\\label{fig:ParaDrop_3}\n\\end{center}\n\\end{figure}\nThe basic steps to develop a chute include:\n\\begin{enumerate}\n\\item Evaluate the application's functionality and performance locally;\n\\item Use the build tool to package binary files, scripts, configuration files and Docker files as a chute;\n\\item Test the chute at the local ParaDrop edge compute node;\n\\item Publish the chute in the Chute Store and push it to one or more edge nodes;\n\\item Use the instance tool on the edge node to open the chute package and set up the running environment based on the requirements;\n\\item The Docker engine creates a chute container on the edge node, including downloading the base image, installing the package, and downloading the executable file and resource files;\n\\item Finally, start the developed chute.\n\\end{enumerate}\nThe ParaDrop system also offers a light chute development method, which leverages pre-compiled base images that are optimized for different programming languages rather than building from scratch. Its compilation and installation are similar to those of general chutes. Although most of the functions are similar, compared with general chutes, the advantages of light chutes include higher security protection, fast installation and improved portability. 
Since the base image is most likely already cached on the edge device, a light chute can be installed in a short time.\n}", "id": "e332c996-588d-441c-bafd-1d99b5403e4d", "level": "subsection", "origin_cites_number": 0, "parent_id": "b72e7b0d-2142-415a-8bed-1b1f88160a2d", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "ParaDrop" ], [ "subsection", "ParaDrop Developer Workflow" ] ], "subsections": [], "title": "ParaDrop Developer Workflow" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:open-source-projects}\nBesides the edge computing systems designed for specific purposes, some open source edge computing projects have also been launched recently. The Linux Foundation published two projects, EdgeX Foundry in 2017 and Akraino Edge Stack~ in 2018. The Open Network Foundation (ONF) launched a project named CORD (Central Office Re-architected as a Datacenter)~. The Apache Software Foundation published Apache Edgent. Microsoft published Azure IoT Edge in 2017 and announced it as open source in 2018.\nAmong them, CORD and Akraino Edge Stack focus on providing edge cloud services; EdgeX Foundry and Apache Edgent focus on IoT and aim to solve problems that hinder the practical application of edge computing in IoT; Azure IoT Edge provides hybrid cloud-edge analytics, which helps to migrate cloud solutions to IoT devices.", "id": "e695a48d-424b-476b-aca3-58d54c720c98", "level": "section", "origin_cites_number": 2, "parent_id": "0055f39f-b56b-4a38-8ce6-f52d60700ea8", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Open Source Edge Computing Projects" ] ], "subsections": [ "1446b4fa-7c0c-42d1-a70f-09fde4a71f2c", "fb60c39e-d592-4b9c-a844-9af05b80d85f", "5f68758b-531e-45bb-9d03-537df0f852c4", "62b19b9e-8f3e-4206-80e6-13afc62367cb", "af81b2b6-3749-41ab-8e6a-cce258d013d8", "e0298129-29bd-4b69-9e8c-2c1905e898d2" ], "title": "Open Source Edge Computing Projects" }, {
"cite_extract_rate": 0.2, "cites": [ 8836 ], "content": "\\begin{figure}[!tb]\n\\begin{center}\n\\includegraphics[width=0.5\\textwidth]{CORD.pdf}\n\\caption{Hardware Architecture of CORD.}\\label{fig:CORD}\n\\end{center}\n\\end{figure} \nCORD is an open source project of ONF initiated by AT\\&T and is designed for network operators. Current network infrastructure is built with closed proprietary integrated systems provided by network equipment providers. Due to the closed property, the network capability cannot scale up and down dynamically. And the lack of flexibility results in inefficient utilization of the computing and networking resources. CORD plans to reconstruct the edge network infrastructure to build datacenters with SDN~, NFV~ and Cloud technologies. It attempts to slice the computing, storage and network resources so that these datacenters can act as clouds at the edge, providing agile services for end users.\n\\begin{figure}[!tb]\n\\begin{center}\n\\includegraphics[width=0.5\\textwidth]{CORD_SW.pdf}\n\\caption{Software Architecture of CORD.}\\label{fig:CORD_SW}\n\\end{center}\n\\end{figure}\nCORD is an integrated system built from commodity hardware and open source software. Fig.~\\ref{fig:CORD} shows the hardware architecture of CORD~. It uses commodity servers that are interconnected by a Fabric of White-box switches. White-box switch~ is a component of SDN switch, which is responsible to regulate the flow of data according to SDN controller. These commodity servers provide computing, storage resources, and the fabric of switches are used to build the network. This switching fabric is organized to a Spine-Leaf topology , a kind of flat network topology structure which adds a horizontal network structure parallel to the trunk longitudinal network structure, and then adds corresponding switching network on the horizontal structure. 
Compared to the traditional three-tier network topology, it can provide scalable throughput for the growing east-west network traffic, that is, server-to-server traffic, so named because network diagram drawings usually depict local area network (LAN) traffic horizontally. In addition, specialized access hardware is required to connect subscribers. The subscribers can be divided into three categories for different use cases: mobile subscribers, enterprise subscribers and residential subscribers. Each category demands different access hardware due to different access technologies. In terms of software, Fig.~\\ref{fig:CORD_SW} shows the software architecture of CORD~. Based on the servers and the fabric of switches, OpenStack provides the IaaS capability for CORD: it manages the compute, storage and networking resources as well as creating virtual machines and virtual networks. Docker is used to run services in containers for isolation. ONOS (Open Network Operating System) is a network operating system which is used to manage network components like the switching fabric and provide communication services to end-users. XOS provides a control plane to assemble and compose services. Other software projects provide component capabilities; for example, vRouter (Virtual Router) provides virtual routing functionality.\nThe edge of the operator network is a sweet spot for edge computing because it connects customers with operators and is close to customers' applications as data sources. CORD takes edge computing into consideration and moves to support edge computing as a platform to provide edge cloud services (since release $4.1$). CORD can be deployed as three solutions, M-CORD (Mobile CORD), R-CORD (Residential CORD) and E-CORD (Enterprise CORD), for different use cases. M-CORD focuses on the mobile network, especially the 5G network, and it plans to disaggregate and virtualize cellular network functions to enable services to be created and scaled dynamically. 
This agility helps to provide multi-access edge services for mobile applications. For use cases like driverless cars or drones, users can rent the edge service to run their edge applications. Similarly, R-CORD and E-CORD are designed to be agile service delivery platforms but for different users, residential and enterprise users respectively. \nSo far, the deployment of CORD is still under test among network operators, and more research is needed to combine CORD with various edge applications.", "id": "1446b4fa-7c0c-42d1-a70f-09fde4a71f2c", "level": "subsection", "origin_cites_number": 5, "parent_id": "e695a48d-424b-476b-aca3-58d54c720c98", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Open Source Edge Computing Projects" ], [ "subsection", "CORD" ] ], "subsections": [], "title": "CORD" }, { "cite_extract_rate": 0, "cites": [], "content": "Akraino Edge Stack, initiated by AT\\&T and now hosted by the Linux Foundation, is a project to develop a holistic solution for edge infrastructure so as to support high-availability edge cloud services~. An open source software stack, as the software part of this solution, is developed for network carriers to facilitate optimal networking and workload orchestration for the underlying infrastructure in order to meet the needs of edge computing, such as low latency, high performance, high availability, scalability and so on.\nTo provide a holistic solution, Akraino Edge Stack has a wide scope, from the infrastructure layer to the application layer. Fig.~\\ref{fig:Akraino}~ shows the scope with three layers. In the application layer, Akraino Edge Stack wants to create a virtual network function (VNF) ecosystem and calls for edge applications. The second layer consists of middleware which supports applications in the top layer. In this layer, Akraino Edge Stack plans to develop an Edge API and framework for interoperability with third-party edge projects such as EdgeX Foundry. 
At the bottom layer, Akraino Edge Stack intends to develop an open source software stack for the edge infrastructure in collaboration with upstream communities. It interfaces with and maximizes the use of existing open source projects such as Kubernetes, OpenStack and so on. Akraino Edge Stack provides different edge use cases with blueprints, which are declarative configurations of the entire stack including hardware, software, point of delivery, etc~. The application domains of these blueprints start from the telco industry and are expected to be applied in more domains like enterprise and industrial IoT. Akraino Edge Stack has now put forward several blueprints such as Micro-MEC and Edge Media Processing. Micro-MEC intends to develop a new service infrastructure for smart cities, which enables developing services for smart cities and provides high data capacity for citizens. Edge Media Processing intends to develop a network cloud to enable real-time media processing and edge media AI analytics with low latency.\n\\begin{figure}[!tb]\n\\begin{center}\n\\includegraphics[width=0.5\\textwidth]{Akraino.png}\n\\caption{Akraino Edge Stack's Scope.}\\label{fig:Akraino}\n\\end{center}\n\\end{figure}\nAs an emerging project, Akraino Edge Stack has been in execution since August 2018. Thus more research needs to be done as the project develops.", "id": "fb60c39e-d592-4b9c-a844-9af05b80d85f", "level": "subsection", "origin_cites_number": 1, "parent_id": "e695a48d-424b-476b-aca3-58d54c720c98", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Open Source Edge Computing Projects" ], [ "subsection", "Akraino Edge Stack" ] ], "subsections": [], "title": "Akraino Edge Stack" }, { "cite_extract_rate": 0.5, "cites": [ 8836 ], "content": "EdgeX Foundry is a standardized interoperability framework for IoT edge computing, whose sweet spots are edge nodes such as gateways, hubs, and routers~. 
It can connect with various sensors and devices via different protocols, manage them and collect data from them, and export the data to a local application at the edge or to the cloud for further processing. EdgeX is designed to be agnostic to hardware, CPU, operating system, and application environment. It can run natively or in Docker containers.\nFig.~\\ref{fig:EdgeX}~ shows the architecture of EdgeX Foundry. The ``south side'' at the bottom of the figure includes all IoT objects, and the edge of the network that communicates directly with those devices, sensors, actuators and other IoT objects to collect data from them. Correspondingly, the ``north side'' at the top of the figure includes the Cloud (or Enterprise system) where data are collected, stored, aggregated, analyzed, and turned into information, and the part of the network that communicates with the Cloud. EdgeX Foundry connects these two sides regardless of the differences in hardware, software and network. EdgeX tries to unify the manipulation of the south-side IoT objects into a common API, so that those objects can be manipulated in the same way by the applications of the north side.\nEdgeX uses a Device Profile to describe a south-side object. A Device Profile defines the type of the object, the format of the data that the object provides, the format of the data to be stored in EdgeX, and the commands used to manipulate this object. Each Device Profile involves a Device Service, which is a service that converts the format of the data and translates the commands into instructions that IoT objects know how to execute. 
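As a rough illustration of the Device Profile idea, the sketch below describes a device's readable data and its commands in a simplified structure. The field names are chosen for clarity and do not reproduce EdgeX's exact YAML schema.

```python
# Simplified sketch of an EdgeX-style Device Profile (illustrative, not the exact schema).
device_profile = {
    "name": "room-thermostat",
    "deviceResources": [
        # data the device provides and how it should be stored
        {"name": "temperature", "valueType": "Float64", "units": "C", "readWrite": "R"},
        {"name": "setpoint", "valueType": "Float64", "units": "C", "readWrite": "RW"},
    ],
    "deviceCommands": [
        # commands used to manipulate the device
        {"name": "ReadTemperature", "resource": "temperature", "operation": "get"},
        {"name": "SetSetpoint", "resource": "setpoint", "operation": "set"},
    ],
}

def readable_resources(profile):
    """List the resources a north-side application may read through the common API."""
    return [r["name"] for r in profile["deviceResources"] if "R" in r["readWrite"]]

print(readable_resources(device_profile))  # ['temperature', 'setpoint']
```

A Device Service would then use such a profile to translate generic read/write requests into the protocol-specific instructions the physical device understands.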
EdgeX provides an SDK for developers to create Device Services, so that any combination of device interfaces and protocols can be supported by programming.\n\\begin{figure*}[!tb]\n\\begin{center}\n\\includegraphics[width=0.65\\textwidth]{EdgeX2.png}\n\\caption{Architecture of EdgeX Foundry.}\\label{fig:EdgeX}\n\\end{center}\n\\end{figure*}\nEdgeX consists of a collection of microservices, which allows services to scale up and down based on device capability. These microservices can be grouped into four service layers and two underlying augmenting system services, as depicted in Fig.~\\ref{fig:EdgeX}. The four service layers are the Device Services Layer, Core Services Layer, Supporting Services Layer and Export Services Layer; the two underlying augmenting system services are System Management and Security. Each of the six layers consists of several components, and all components use a common RESTful API for configuration.\n\\textit{1) Device Services Layer:}\nThis layer consists of Device Services. According to the Device Profiles, the Device Services Layer converts the format of the data, sends it to the Core Services Layer, and translates the command requests from the Core Services Layer.\n\\textit{2) Core Services Layer:}\nThis layer consists of four components: Core Data, Command, Metadata, and Registry \\& Configuration. Core Data is a persistence repository as well as a management service. It stores and manages the data collected from the south-side objects. Command is a service that offers the API for command requests from the north side to Device Services. Metadata is a repository and management service for metadata about IoT objects. For example, the Device Profiles are uploaded and stored in Metadata. Registry \\& Configuration provides centralized management of configuration and operating parameters for the other microservices. \n\\textit{3) Supporting Services Layer:}\nThis layer is designed to provide edge analytics and intelligence~. 
Currently the Rules Engine, Alerting and Notification, Scheduling and Logging microservices are implemented. A target range of data can be set as a rule to trigger a specific device actuation, and the Rules Engine realizes the rule by monitoring the incoming data. Alerting and Notification can send notifications or alerts to another system or person by email, REST callback or other methods when an urgent actuation or a service malfunction happens. The Scheduling microservice can set up a timer to regularly clean up stale data. Logging is used to record the running information of EdgeX.\n\\textit{4) Export Services Layer:}\nThis layer connects EdgeX with the north side and consists of Client Registration and Export Distribution. Client Registration enables clients like a specific cloud or a local application to register as recipients of data from Core Data. Export Distribution distributes the data to the clients registered in Client Registration. \n\\textit{5) System Management and Security:}\nSystem Management provides management operations including installation, upgrade, starting, stopping and monitoring, as EdgeX is scalable and can be deployed dynamically. Security is designed to protect the data and commands of IoT objects connected with EdgeX Foundry.\nEdgeX is designed for use cases dealing with multitudes of sensors or devices, such as automated factories, machinery systems and lots of other cases in IoT. Now EdgeX Foundry is in a rapid upgrading phase, and more features will be added in future releases. 
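Returning to the Supporting Services Layer, the range-based behaviour described there, firing a device actuation when incoming data leaves a target range, can be sketched as follows. The rule format and names are hypothetical, not EdgeX's actual rule syntax.

```python
# Hypothetical sketch of a range-based rule, in the spirit of the EdgeX Rules Engine.
def make_range_rule(low, high, actuation):
    """Return a rule that fires the actuation when a reading leaves [low, high]."""
    def rule(reading):
        if reading < low or reading > high:
            return actuation          # command to forward to a Device Service
        return None                   # reading is in range, do nothing
    return rule

overheat_rule = make_range_rule(10.0, 30.0, actuation="fan:on")

# monitor a stream of incoming readings and collect the ones that trigger the rule
triggered = [r for r in (22.5, 31.2, 9.0) if overheat_rule(r)]
print(triggered)  # [31.2, 9.0]
```

In EdgeX, such a triggered actuation would be delivered as a command through the Command service to the relevant Device Service.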
An EdgeX UI is in development as a web-based interface to add and manage the devices.", "id": "5f68758b-531e-45bb-9d03-537df0f852c4", "level": "subsection", "origin_cites_number": 2, "parent_id": "e695a48d-424b-476b-aca3-58d54c720c98", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Open Source Edge Computing Projects" ], [ "subsection", "EdgeX Foundry" ] ], "subsections": [], "title": "EdgeX Foundry" }, { "cite_extract_rate": 0, "cites": [], "content": "Apache Edgent, previously known as Apache Quarks, is an Apache Incubator project at present. It is an open source programming model and lightweight runtime for data analytics, used in small devices such as routers and gateways at the edge. Apache Edgent focuses on data analytics at the edge, aiming to accelerate the development of data analysis. \n\\begin{figure}[!tb]\n\\begin{center}\n\\includegraphics[width=0.5\\textwidth]{Edgent.pdf}\n\\caption{Model of the Edgent Applications.}\\label{fig:Edgent}\n\\end{center}\n\\end{figure}\nAs a programming model, Edgent provides APIs to build edge applications. Fig.~\\ref{fig:Edgent} illustrates the model of the Edgent applications. Edgent uses a topology as a graph to represent the processing transformations of streams of data, which are abstracted as the TStream class. A connector is used to get streams of data from external entities such as sensors and devices in the physical world, or to send streams of data to back-end systems like a cloud. The primary API of Edgent is responsible for data analysis. The streams of data can be filtered, split, transformed or processed by other operations in a topology. Edgent uses a provider to act as a factory to create and execute topologies. To build an Edgent application, users should first get a provider, then create a topology and add the processing flow to deal with the streams of data, and finally submit the topology. 
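Edgent's own API is Java; purely to illustrate the source-filter-sink pattern behind the provider/topology workflow above, the following Python sketch applies the same idea with hypothetical names (it is not the Edgent API).

```python
# Python sketch of an Edgent-style topology (Edgent's real API is Java; names are illustrative).
class Topology:
    def __init__(self, source):
        self.source = source          # callable producing an iterable stream

    def run(self, *stages):
        stream = self.source()
        for stage in stages:          # apply filter/map stages in order
            stream = stage(stream)
        return list(stream)

def tfilter(pred):
    return lambda stream: (x for x in stream if pred(x))

def tmap(fn):
    return lambda stream: (fn(x) for x in stream)

# connector: sensor readings entering from the physical world
readings = lambda: iter([18.2, 25.7, 31.4, 29.9])

topo = Topology(readings)
# keep only out-of-range readings and round them before sending to the back-end
essential = topo.run(tfilter(lambda t: t > 28.0), tmap(round))
print(essential)  # [31, 30]
```

Filtering at the edge in this way is what lets an Edgent application send only the essential data to the back-end system, reducing transmission cost.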
The deployment environments of Edgent are Java 8, Java 7 and Android. \nEdgent provides APIs for sending data to back-end systems and currently supports MQTT, the IBM Watson IoT Platform, Apache Kafka and custom message hubs. Edgent applications analyze the data from sensors and devices, and send the essential data to the back-end system for further analysis. For IoT use cases, Edgent helps reduce the cost of transmitting data and provides local feedback. \nEdgent is suitable for IoT use cases such as intelligent transportation, automated factories and so on. In addition, the data in Edgent applications are not limited to sensor readings; they can also be files or logs. Therefore, Edgent can be applied to other use cases. For example, it can perform local data analysis when embedded in application servers, where it can analyze error logs without impacting network traffic~.", "id": "62b19b9e-8f3e-4206-80e6-13afc62367cb", "level": "subsection", "origin_cites_number": 1, "parent_id": "e695a48d-424b-476b-aca3-58d54c720c98", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Open Source Edge Computing Projects" ], [ "subsection", "Apache Edgent" ] ], "subsections": [], "title": "Apache Edgent" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec:AzureIoT}\n\\begin{figure}[!tb]\n\\begin{center}\n\\includegraphics[width=0.5\\textwidth]{AzureIoT.pdf}\n\\caption{Diagram of Azure IoT Edge.}\\label{fig:AzureIoT}\n\\end{center}\n\\end{figure}\nAzure IoT Edge, provided by Microsoft Azure as a cloud service provider, tries to move cloud analytics to edge devices. These edge devices can be routers, gateways or other devices which can provide computing resources. The programming model of Azure IoT Edge is the same as that of the other Azure IoT services~ in the cloud, which enables users to move their existing applications from Azure to the edge devices for lower latency. This convenience simplifies the development of edge applications. 
In addition, Azure services like Azure Functions, Azure Machine Learning and Azure Stream Analytics can be used to deploy complex tasks, such as machine learning, image recognition and other artificial intelligence tasks, on the edge devices.\nAzure IoT Edge consists of three components: IoT Edge modules, the IoT Edge runtime and a cloud-based interface, as depicted in Fig.~\\ref{fig:AzureIoT}. The first two components run on edge devices, and the last one is an interface in the cloud. IoT Edge modules are containerized instances running the customer code or Azure services. The IoT Edge runtime manages these modules. The cloud-based interface is used to monitor and manage the former two components, in other words, to monitor and manage the edge devices.\nIoT Edge modules are the units of execution that run specific applications. A module image is a Docker image containing the user code. A module instance, as a Docker container, is a unit of computation running the module image. If the resources at the edge devices are sufficient, these modules can run the same Azure services or custom applications as in the cloud because of the same programming model. In addition, these modules can be deployed dynamically, as Azure IoT Edge is scalable.\nThe IoT Edge runtime acts as a manager on the edge devices. It consists of two modules: IoT Edge hub and IoT Edge agent. IoT Edge hub acts as a local proxy for IoT Hub, which is a managed service and a central message hub in the cloud. As a message broker, IoT Edge hub helps modules communicate with each other and transports data to IoT Hub. IoT Edge agent is used to deploy and monitor the IoT Edge modules. It receives the deployment information about modules from IoT Hub, instantiates these modules, and ensures they are running, for example, by restarting crashed modules. In addition, it reports the status of the modules to IoT Hub.\nThe IoT Edge cloud interface is provided for device management. 
Through this interface, users can create edge applications, send these applications to the device, and monitor the running status of the device. This monitoring function is useful for use cases with massive numbers of devices, where users can deploy applications to devices on a large scale and monitor these devices.\nA simple deployment procedure for an application is as follows: users choose an Azure service or write their own code as an application, build it as an IoT Edge module image, and deploy this module image to the edge device with the help of the IoT Edge interface. Then the IoT Edge runtime receives the deployment information, pulls the module image, and instantiates the module instance.\nAzure IoT Edge has wide application areas. It already has application cases in intelligent manufacturing, irrigation systems, drone management systems and so on. It is worth noting that Azure IoT Edge is open source but the Azure services like Azure Functions, Azure Machine Learning and Azure Stream Analytics are charged.", "id": "af81b2b6-3749-41ab-8e6a-cce258d013d8", "level": "subsection", "origin_cites_number": 1, "parent_id": "e695a48d-424b-476b-aca3-58d54c720c98", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Open Source Edge Computing Projects" ], [ "subsection", "Azure IoT Edge" ] ], "subsections": [], "title": "Azure IoT Edge" }, { "cite_extract_rate": 0, "cites": [], "content": "We summarize the features of the above open source edge computing systems and compare them from different aspects in Table~\\ref{tab:characterEdge}, including the main purpose of the systems, application area, deployment, target user, virtualization technology, system characteristics, limitations, scalability and mobility. We believe such comparisons give a better understanding of current open source edge computing systems. 
\n\\begin{table*}[htb]\n\\centering\n\\caption{Comparison of Open Edge System Characteristics}\n\\label{tab:characterEdge}\n\\begin{tabular}{cccccc}\n\\hline\n\\textbf{Aspect} & \\textbf{EdgeX Foundry} & \\begin{tabular}[c]{@{}c@{}}\\textbf{Azure IoT}\\\\ \\textbf{Edge}\\end{tabular} & \\textbf{Apache Edgent} & \\textbf{CORD} & \\begin{tabular}[c]{@{}c@{}}\\textbf{Akraino}\\\\ \\textbf{Edge Stack}\\end{tabular}\n \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}User access\\\\ interface\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Restful API\\\\ or EdgeX UI\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Web service,\\\\ Command-line\\end{tabular} & API & \\begin{tabular}[c]{@{}c@{}}API or\\\\ XOS-GUI\\end{tabular} & N/A \\\\ \\hline\nOS support & Various OS & Various OS & Various OS & Ubuntu & Linux \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Programming\\\\ framework\\end{tabular} & Not provided & \\begin{tabular}[c]{@{}c@{}}Java, .NET, C,\\\\ Python, etc.\\end{tabular} & Java & \\begin{tabular}[c]{@{}c@{}}Shell script,\\\\ Python\\end{tabular} &N/A\n\\\\ \\hline\nMain purpose & \\begin{tabular}[c]{@{}c@{}}Provide with\\\\ Interoperability\\\\ for IoT edge\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Support hybrid\\\\ cloud-edge\\\\ analytics\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Accelerate the \\\\ development\\\\ process of\\\\ data analysis\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Transform edge of\\\\ the operator network\\\\ into agile service\\\\ delivery platforms\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Support edge\\\\ clouds with an\\\\ open source\\\\ software stack\\end{tabular} \\\\ \\hline\nApplication area & IoT & Unrestricted & IoT & Unrestricted & Unrestricted \\\\ \\hline\nDeployment & Dynamic & Dynamic & Static & Dynamic & Dynamic \\\\ \\hline\nTarget user & General users & General users & General users & Network operators & \\begin{tabular}[c]{@{}c@{}}Network\\\\ operators\\end{tabular} \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Virtualization 
\\\\ technology\\end{tabular} & Container & Container & JVM & \\begin{tabular}[c]{@{}c@{}}Virtual Machine\\\\ and Container\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Virtual Machine\\\\ and Container\\end{tabular} \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}System\\\\ characteristics\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}A common API for\\\\ device management\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Powerful\\\\ Azure services\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}APIs for\\\\ data analytics\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Widespread\\\\ edge clouds\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Widespread\\\\ edge clouds\\end{tabular} \\\\ \\hline\nLimitation & \\begin{tabular}[c]{@{}c@{}}Lack of\\\\ programmable\\\\ interface\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Azure services\\\\ are chargeable\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Limited to\\\\ data analytics\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Unable to\\\\ work offline\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Unable to\\\\ work offline\\end{tabular} \n \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Scalability\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Scalable\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Scalable\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Not scalable\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Scalable\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Scalable\\end{tabular}\n \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Mobility\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Not supported\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Not supported\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Not supported\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Supported\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Supported\\end{tabular}\n\\\\ \\hline\n\\end{tabular}\n\\end{table*}", "id": "e0298129-29bd-4b69-9e8c-2c1905e898d2", "level": "subsection", "origin_cites_number": 0, "parent_id": "e695a48d-424b-476b-aca3-58d54c720c98", "prefix_titles": [ [ "title", "A Survey on Edge Computing 
Systems and Tools" ], [ "section", "Open Source Edge Computing Projects" ], [ "subsection", "Comparative Study" ] ], "subsections": [ "1bf4941a-3b2d-427d-a60c-eafb5647903b", "9a51feef-b020-4190-b5d7-5786064a0299", "e06a57cf-a435-43b6-82dc-5bead50804ba", "b98f232b-467f-4515-9bce-538e319df917", "6dec77a6-4f15-4759-b0bb-ba5175fd3c7a", "ff5a04f4-5ec2-4c3a-b42f-e112c6f3b1e2", "7fd26e57-2cf5-4496-a04b-c492c08696f5", "85fc4e16-f456-41a3-9214-86f76e219c7c", "365dab20-e711-47f8-a7e6-b7d0607b797d", "49fdae92-d420-4706-96a2-36269bfa7536" ], "title": "Comparative Study" }, { "cite_extract_rate": 0, "cites": [], "content": "The main purpose shows the target problem that a system tries to solve, and it is a key factor in choosing a suitable system to run edge applications. As an interoperability framework, EdgeX Foundry aims to communicate with any sensor or device in IoT. This ability is necessary for edge applications with data from various sensors and devices. Azure IoT Edge offers an efficient solution to move existing applications from the cloud to the edge, and to develop edge applications in the same way as cloud applications. Apache Edgent helps to accelerate the development process of data analysis in IoT use cases. CORD aims to reconstruct the current edge network infrastructure to build datacenters so as to provide agile network services for end-user customers. From the view of edge computing, CORD provides multi-access edge services. 
Akraino Edge Stack provides an open source software stack to support high-availability edge clouds.", "id": "1bf4941a-3b2d-427d-a60c-eafb5647903b", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "e0298129-29bd-4b69-9e8c-2c1905e898d2", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Open Source Edge Computing Projects" ], [ "subsection", "Comparative Study" ], [ "subsubsection", "Main purpose" ] ], "subsections": [], "title": "Main purpose" }, { "cite_extract_rate": 0, "cites": [], "content": "EdgeX Foundry and Apache Edgent both focus on the IoT edge: EdgeX Foundry is geared toward communication with various sensors and devices, while Edgent is geared toward data analysis. They are suitable for intelligent manufacturing, intelligent transportation and smart city scenarios, where various sensors and devices generate data all the time. Azure IoT Edge can be thought of as an extension of the Azure cloud. It has an extensive application area but depends on the compute resources of edge devices. Besides, with the help of Azure services, it is very convenient to deploy artificial intelligence edge applications, such as machine learning and image recognition, to Azure IoT Edge. CORD and Akraino Edge Stack support edge cloud services, which have no restriction on application area. 
If the edge devices of users do not have sufficient computing capability, these two systems allow users to run resource-intensive and interactive applications by connecting to the operator network.", "id": "9a51feef-b020-4190-b5d7-5786064a0299", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "e0298129-29bd-4b69-9e8c-2c1905e898d2", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Open Source Edge Computing Projects" ], [ "subsection", "Comparative Study" ], [ "subsubsection", "Application area" ] ], "subsections": [], "title": "Application area" }, { "cite_extract_rate": 0, "cites": [], "content": "As for the deployment requirements, EdgeX Foundry, Apache Edgent and Azure IoT Edge are deployed on edge devices such as routers, gateways and switches. Users can deploy EdgeX Foundry by themselves, add or remove microservices dynamically, and run their own edge applications. In contrast, users need the help of a cloud-based interface to deploy Azure IoT Edge and develop their edge applications. CORD and Akraino Edge Stack are designed for network operators, who need fabric switches, access devices, network cards and other related hardware apart from compute machines. 
Customers need not consider the hardware requirements or the hardware management process; they simply rent the services provided by the network operators, much like renting a cloud service instead of managing a physical server.", "id": "e06a57cf-a435-43b6-82dc-5bead50804ba", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "e0298129-29bd-4b69-9e8c-2c1905e898d2", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Open Source Edge Computing Projects" ], [ "subsection", "Comparative Study" ], [ "subsubsection", "Deployment" ] ], "subsections": [], "title": "Deployment" }, { "cite_extract_rate": 0, "cites": [], "content": "Though these open source systems all focus on edge computing, their target users are not the same. EdgeX Foundry, Azure IoT Edge and Apache Edgent have no restriction on target users. Therefore, every developer can deploy them on local edge devices such as gateways, routers and hubs. In contrast, CORD and Akraino Edge Stack are created for network operators because they focus on edge infrastructure.", "id": "b98f232b-467f-4515-9bce-538e319df917", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "e0298129-29bd-4b69-9e8c-2c1905e898d2", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Open Source Edge Computing Projects" ], [ "subsection", "Comparative Study" ], [ "subsubsection", "Target user" ] ], "subsections": [], "title": "Target user" }, { "cite_extract_rate": 0, "cites": [], "content": "Virtualization technologies are widely used nowadays. Virtual machine technology can provide better management and higher utilization of resources, stability, scalability and other advantages. Container technology can provide services with isolation and agility at negligible overhead, which makes it usable on edge devices~. 
Using OpenStack and Docker as software components, CORD and Akraino Edge Stack adopt both of these technologies to support the edge cloud. Different edge devices may have different hardware and software environments. For those edge systems deployed on edge devices, containers are a good way for services to stay independent of the environment. Therefore, EdgeX Foundry and Azure IoT Edge choose to run as Docker containers, while Edgent applications run on the JVM.", "id": "6dec77a6-4f15-4759-b0bb-ba5175fd3c7a", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "e0298129-29bd-4b69-9e8c-2c1905e898d2", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Open Source Edge Computing Projects" ], [ "subsection", "Comparative Study" ], [ "subsubsection", "Virtualization technology" ] ], "subsections": [], "title": "Virtualization technology" }, { "cite_extract_rate": 0, "cites": [], "content": "System characteristics show the unique features of each system, which may help users to develop, deploy or monitor their edge applications. Making good use of these characteristics saves considerable workload and time. EdgeX Foundry provides a common API to manage devices, which brings great convenience to deploying and monitoring edge applications at large scale. Azure IoT Edge provides powerful Azure services to accelerate the development of edge applications. Apache Edgent provides a series of functional APIs for data analytics, which lowers the difficulty and reduces the time of developing edge analytic applications. CORD and Akraino Edge Stack provide multi-access edge services on the edge cloud. 
Users only need to stay connected to the operator network to apply for these services, without deploying an edge computing system on edge devices by themselves.", "id": "ff5a04f4-5ec2-4c3a-b42f-e112c6f3b1e2", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "e0298129-29bd-4b69-9e8c-2c1905e898d2", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Open Source Edge Computing Projects" ], [ "subsection", "Comparative Study" ], [ "subsubsection", "System characteristic" ] ], "subsections": [], "title": "System characteristic" }, { "cite_extract_rate": 0, "cites": [], "content": "This subsection discusses the limitations of the latest versions of these systems when deploying edge applications. The latest version of EdgeX Foundry does not provide a programmable interface in its architecture for developers to write their own applications. Although EdgeX allows us to add custom implementations, this demands extra workload and time. As for Azure IoT Edge, though it is open source and free, the Azure services are chargeable as commercial software. Apache Edgent is lightweight but limited to data analytics. As for CORD and Akraino Edge Stack, these two systems demand a stable network between the data sources and the operators, because the edge applications run at the edge of the operator network rather than on local devices.", "id": "7fd26e57-2cf5-4496-a04b-c492c08696f5", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "e0298129-29bd-4b69-9e8c-2c1905e898d2", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Open Source Edge Computing Projects" ], [ "subsection", "Comparative Study" ], [ "subsubsection", "Limitation" ] ], "subsections": [], "title": "Limitation" }, { "cite_extract_rate": 0, "cites": [], "content": "The increasing number of applications at the edge makes the network architecture more complex and application management more difficult. 
Scalability is one major concern in edge computing. Among these edge computing systems, Azure IoT Edge, CORD and Akraino Edge Stack apply Docker or virtual machine technology, which allows users to scale their applications up or down efficiently by adding or deleting module images. EdgeX Foundry is also a scalable platform that enables users to dynamically add or remove microservices to adapt to actual needs. However, Apache Edgent is not scalable enough, because every Edgent application is a single Java application whose capacity cannot be changed dynamically.", "id": "85fc4e16-f456-41a3-9214-86f76e219c7c", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "e0298129-29bd-4b69-9e8c-2c1905e898d2", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Open Source Edge Computing Projects" ], [ "subsection", "Comparative Study" ], [ "subsubsection", "Scalability" ] ], "subsections": [], "title": "Scalability" }, { "cite_extract_rate": 0, "cites": [], "content": "For EdgeX Foundry, Apache Edgent and Azure IoT Edge, once the applications are executed on some edge devices, they cannot be dynamically migrated to other devices. CORD and Akraino Edge Stack, deployed in the telecom infrastructure, support mobile edge services through mobile access networks such as 4G/5G. 
The mobility support of these systems meets the needs of use cases such as unmanned vehicles and drones.", "id": "365dab20-e711-47f8-a7e6-b7d0607b797d", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "e0298129-29bd-4b69-9e8c-2c1905e898d2", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Open Source Edge Computing Projects" ], [ "subsection", "Comparative Study" ], [ "subsubsection", "Mobility" ] ], "subsections": [], "title": "Mobility" }, { "cite_extract_rate": 0, "cites": [], "content": "We discuss the choice of open-source tools from the perspective of the following three scenarios, as shown in Fig.~\\ref{fig:scenarios}.\n\\begin{figure}[!tb]\n\\begin{center}\n\\includegraphics[width=0.48\\textwidth]{scenarios.pdf}\n\\caption{The choice of open-source tools in different scenarios.}\\label{fig:scenarios}\n\\end{center}\n\\end{figure}\nIn the first scenario, suppose IoT edge applications are running on a local area network (LAN), and the local enterprise system is regarded as the back-end system with no need for third-party clouds. In this case, EdgeX Foundry or Apache Edgent are favorable because they enable users to build and control their own back-end system without being bound to any specific cloud platform. Furthermore, for the sake of managing and controlling edge devices on a large scale, EdgeX Foundry is a better choice for its good device management capability. If data analysis is the main concern, Apache Edgent is preferable to EdgeX Foundry: it provides a programming model to accelerate the development of edge analytic applications and a set of powerful APIs for data processing. \nIn the second scenario, suppose cloud services are pushed to the edge of the network to improve service quality. In this case, Azure IoT Edge provides a convenient way for developers to migrate applications from the cloud to the edge devices, and to leverage third-party high-value services or functions when developing edge systems. 
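To make this cloud-to-edge deployment path concrete, the following Python sketch assembles a minimal IoT Edge-style deployment manifest. It is an illustrative sketch only: the module name and image below are invented placeholders, and in practice a manifest is usually generated from a template by the Azure tooling rather than written by hand.

```python
import json

def build_deployment_manifest(module_name, image):
    """Return a minimal IoT Edge-style deployment manifest as a JSON string.

    Only the parts needed to declare one custom module are sketched here;
    module_name and image are illustrative placeholders.
    """
    manifest = {
        "modulesContent": {
            "$edgeAgent": {
                "properties.desired": {
                    "schemaVersion": "1.1",
                    "runtime": {
                        "type": "docker",
                        "settings": {"minDockerVersion": "v1.25"},
                    },
                    # The two system modules that every IoT Edge device runs.
                    "systemModules": {
                        "edgeAgent": {
                            "type": "docker",
                            "settings": {"image": "mcr.microsoft.com/azureiotedge-agent:1.4"},
                        },
                        "edgeHub": {
                            "type": "docker",
                            "status": "running",
                            "restartPolicy": "always",
                            "settings": {"image": "mcr.microsoft.com/azureiotedge-hub:1.4"},
                        },
                    },
                    # The user's own application, packaged as a module image.
                    "modules": {
                        module_name: {
                            "version": "1.0",
                            "type": "docker",
                            "status": "running",
                            "restartPolicy": "always",
                            "settings": {"image": image},
                        }
                    },
                }
            }
        }
    }
    return json.dumps(manifest, indent=2)

# The manifest would then be pushed to a device, e.g. with the Azure CLI:
#   az iot edge set-modules --hub-name <hub> --device-id <device> --content manifest.json
print(build_deployment_manifest("sensor-filter", "myregistry.example/sensor-filter:1.0"))
```

Once the manifest is applied, the IoT Edge runtime on the device pulls the listed images and instantiates the modules, matching the deployment procedure described above.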
Besides, AWS IoT Greengrass and Link IoT Edge, published by Amazon and Alibaba Cloud respectively, are good alternatives as competing projects. More specifically, Azure IoT Edge provides Azure services such as Azure Functions, Azure Stream Analytics, and Azure Machine Learning. AWS IoT Greengrass can run AWS Lambda functions and machine learning models that are created, trained, and optimized in the cloud. Link IoT Edge provides Function Compute and other functions. Based on the application requirements, a suitable system can be chosen among them by taking into account the functions they provide.\nIn the third scenario, suppose the users expect to directly leverage third-party services to deploy edge applications without any hardware or software system locally. In this case, edge systems such as CORD and Akraino Edge Stack are suitable, and users can choose one of them depending on their application requirements. These systems are deployed by telecom operators, who are also the providers of network services; thus, when edge applications have special network requirements, these systems can satisfy them. For example, edge computing applications on unmanned vehicles or drones need the support of wireless telecom networks (such as 4G/5G); in this case, a mobile edge computing service provided by these systems is a good choice.\nIn addition to the systems described above, there are other emerging open source projects. Device Management Edge, as part of the Mbed IoT Platform published by ARM, is responsible for edge computing and provides the ability to access, manage and control edge devices. 
KubeEdge, released by Huawei, provides edge nodes with native containerized application orchestration capabilities.", "id": "49fdae92-d420-4706-96a2-36269bfa7536", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "e0298129-29bd-4b69-9e8c-2c1905e898d2", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Open Source Edge Computing Projects" ], [ "subsection", "Comparative Study" ], [ "subsubsection", "Scenarios" ] ], "subsections": [], "title": "Scenarios" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:energy-efficiency}\nIn the design of an edge computing system, energy consumption is always considered one of the major concerns and evaluated as one of the key performance metrics. In this section, we review the energy efficiency enhancing mechanisms adopted by state-of-the-art edge computing systems, from the views of the \\emph{top cloud layer}, the \\emph{middle edge server layer} and the \\emph{bottom device layer}, respectively (the three-layer paradigm is illustrated in Fig.~\\ref{fig:three-layer-paradigm}).\n\\begin{figure}[!tb]\n\\begin{center}\n\\includegraphics[width=0.32\\textwidth]{three-layer-paradigm.pdf}\n\\caption{The Three-layer Edge Computing Paradigm from a Power Source View.}\\label{fig:three-layer-paradigm}\n\\end{center}\n\\end{figure}", "id": "a3a2618a-548a-4cd2-95bc-06a60c44ffb0", "level": "section", "origin_cites_number": 0, "parent_id": "0055f39f-b56b-4a38-8ce6-f52d60700ea8", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Enhancing Energy Efficiency of Edge Computing Systems" ] ], "subsections": [ "568161be-6e40-4554-b2de-39750eac4eb0", "cef0da1f-0af4-4654-af0c-b8d0a6a63c4a", "92004c3a-1e93-4cfc-9226-2ae148fcfafd" ], "title": "Enhancing Energy Efficiency of Edge Computing Systems" }, { "cite_extract_rate": 0, "cites": [], "content": "For cloud computing, a centralized datacenter (DC) can comprise thousands of servers and thus consume 
enormous energy~. As an alternative to cloud computing, does the edge/fog computing paradigm consume more or less energy? Different points of view have been given: some claim that decentralized data storage and processing supported by the edge computing architecture are more energy efficient~, while others show that such distributed content delivery may consume more energy than the centralized way~.\nThe authors in~ give a thorough energy analysis for applications running over a centralized DC (i.e., under cloud mode) and decentralized nano DCs (i.e., under fog mode), respectively. The results indicate that the fog mode may achieve higher energy efficiency, depending on several system design factors (e.g., type of application, type of access network, and ratio of active time), and that applications generating and distributing a large amount of data in end-user premises achieve the best energy savings under the fog mode.\nNote that the decentralized nano DCs or fog nodes in our context are different from traditional CDN datacenters~: the latter are also designed in a decentralized way but usually have much more powerful computing/communication/storage capacities.", "id": "568161be-6e40-4554-b2de-39750eac4eb0", "level": "subsection", "origin_cites_number": 8, "parent_id": "a3a2618a-548a-4cd2-95bc-06a60c44ffb0", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Enhancing Energy Efficiency of Edge Computing Systems" ], [ "subsection", "At the Top Cloud Layer" ] ], "subsections": [], "title": "At the Top Cloud Layer" }, { "cite_extract_rate": 0, "cites": [], "content": "At the middle layer of the edge computing paradigm, energy is also regarded as an important aspect, as the edge servers can be deployed in a domestic environment or powered by battery (e.g., a desktop or a portable WiFi router, as shown in Fig.~\\ref{fig:three-layer-paradigm}). 
Thus, to provide higher availability, many power management techniques have been applied to limit the energy consumption of edge servers while still ensuring their performance. We give a review of two major strategies used at the edge server layer in recent edge computing systems.", "id": "cef0da1f-0af4-4654-af0c-b8d0a6a63c4a", "level": "subsection", "origin_cites_number": 0, "parent_id": "a3a2618a-548a-4cd2-95bc-06a60c44ffb0", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Enhancing Energy Efficiency of Edge Computing Systems" ], [ "subsection", "At the Middle Edge Server Layer" ] ], "subsections": [ "e29f4cf4-8bb2-43ab-9414-a3c4becd1890", "0886e58c-6c6e-466f-b652-ef84a543e8d9" ], "title": "At the Middle Edge Server Layer" }, { "cite_extract_rate": 0.25, "cites": [ 4754 ], "content": "In~, the tactical cloudlet is presented, and its energy consumption when performing VM synthesis is evaluated in particular under different cloudlet provisioning mechanisms. The results show that the largest amount of energy is consumed by i) VM synthesis, due to the large payload size, and ii) on-demand VM provisioning, due to the long application-ready time. Such results lead to a high-energy-efficiency policy: combining cached VMs with cloudlet push for cloudlet provisioning.\nA service-oriented architecture for fog/edge computing, Fog Data, is proposed and evaluated in~. It is implemented with an embedded computer system and performs data mining and data analytics on the raw data collected from wearable sensors (in telehealth applications). With Fog Data, the amount of data for transmission is reduced by orders of magnitude, leading to enormous energy savings. 
Furthermore, Fog Data adopts a low-power architecture design and consumes even less energy than a Raspberry Pi.\nIn~, a performance-aware orchestrator for Docker containers, named DockerCap, is developed to meet the power consumption constraints of the edge server (fog node). Following the observe-decide-act loop structure, DockerCap is able to manage container resources at run-time and provide soft-level power capping strategies. The experiments demonstrate that the results obtained with DockerCap are comparable to those from the power capping solution provided by the hardware (Intel RAPL).\nAn energy-aware edge computer architecture, designed to be portable and usable in fieldwork scenarios, is presented in~. Based on the architecture, a high-density cluster prototype is built using compact general-purpose commodity hardware. Power management policies are implemented in the prototype to enable real-time energy awareness. Various experiments show that both the load balancing strategies and the cluster configurations have big impacts on the system energy consumption and responsiveness.", "id": "e29f4cf4-8bb2-43ab-9414-a3c4becd1890", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "cef0da1f-0af4-4654-af0c-b8d0a6a63c4a", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Enhancing Energy Efficiency of Edge Computing Systems" ], [ "subsection", "At the Middle Edge Server Layer" ], [ "subsubsection", "Low-power system design and power management" ] ], "subsections": [], "title": "Low-power system design and power management" }, { "cite_extract_rate": 0, "cites": [], "content": "Dual energy sources are employed to support the running of a fog computing based system in~, where solar power is utilized as the primary energy supply of the fog nodes. A comprehensive analytic framework is presented to minimize the long-term cost of energy consumption. 
Meanwhile, the framework also enables an energy-efficient data offloading mechanism (from fog nodes to the cloud) to help provide a high quality of service.\nIn~, a rack-scale green-energy powered edge infrastructure, InSURE (in-situ server system using renewable energy), is implemented for data pre-processing at the edge. InSURE can be powered by standalone (solar/wind) power, with batteries as the energy backup. Meanwhile, an energy buffering mechanism and a joint spatio-temporal power management scheme are applied to enable efficient energy flow control from the power supply to the edge server.", "id": "0886e58c-6c6e-466f-b652-ef84a543e8d9", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "cef0da1f-0af4-4654-af0c-b8d0a6a63c4a", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Enhancing Energy Efficiency of Edge Computing Systems" ], [ "subsection", "At the Middle Edge Server Layer" ], [ "subsubsection", "Green-energy powered sustainable computing" ] ], "subsections": [], "title": "Green-energy powered sustainable computing" }, { "cite_extract_rate": 1, "cites": [ 3357 ], "content": "As a well-recognized fact, the IoT devices in edge computing usually have strict energy constraints, e.g., limited battery life and energy storage. Thus, it remains a key challenge to power a great number (up to tens of billions) of IoT devices at the edge, especially for resource-intensive applications or services~. We review the energy saving strategies adopted at the device layer of the edge computing paradigm. 
Specifically, we go through two major approaches to achieving high energy efficiency in different edge/fog computing systems.", "id": "92004c3a-1e93-4cfc-9226-2ae148fcfafd", "level": "subsection", "origin_cites_number": 1, "parent_id": "a3a2618a-548a-4cd2-95bc-06a60c44ffb0", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Enhancing Energy Efficiency of Edge Computing Systems" ], [ "subsection", "At the Bottom Device Layer" ] ], "subsections": [ "b5cb6b35-d6f7-494a-b718-94bbbaeaf422", "225a8b24-31ce-4561-a161-bff22095366d" ], "title": "At the Bottom Device Layer" }, { "cite_extract_rate": 0, "cites": [], "content": "As a natural idea to solve the energy poverty problem, computation offloading from IoT devices to edge servers or the cloud has long been investigated~. It has also been demonstrated that, for some particular applications or services, offloading tasks from IoT devices to more powerful ends can reduce the total energy consumption of the system, since the task execution time on powerful servers or the cloud can be much shortened~. Although offloading increases the energy consumption of (wireless) data transmission, the tradeoff favors the offloading option as the computational demand increases~.\nHaving realized that the battery life is the primary bottleneck of handheld mobile devices, the authors in~ present MAUI, an architecture for mobile code offloading and remote execution. To reduce the energy consumption of smartphone programs, MAUI adopts a fine-grained program partitioning mechanism and minimizes the code changes required at the remote server or cloud. The ability of MAUI in energy reduction is validated by various experiments upon macro-benchmarks and micro-benchmarks. 
The results show that MAUI's energy saving for a resource-intensive mobile application is up to one order of magnitude, along with a significant performance improvement.\nLike MAUI~, the authors in~ design and implement CloneCloud, a system that helps partition mobile application programs and performs strategic offloading for fast and elastic execution at the cloud end. As the major difference from MAUI, CloneCloud involves less programmer help during the whole process and only offloads particular program partitions on demand during execution, which further speeds up program execution. Evaluation shows that CloneCloud can improve the energy efficiency of mobile applications (along with their execution efficiency) by 20 times. Similarly, in~, by continuously updating software clones in the cloud with a reasonable overhead, the offloading service can lead to a considerable energy reduction at the mobile end.\nComputation-intensive applications on resource-constrained edge devices usually need to be offloaded to the cloud. To reduce the response latency of image recognition applications, Precog, which has been introduced in Sec.~\\ref{subsec:Cachier_and_Precog}, is presented. 
With the on-device recognition caches, Precog greatly reduces the number of images offloaded to the edge server or cloud, by predicting and prefetching the future images to be recognized.", "id": "b5cb6b35-d6f7-494a-b718-94bbbaeaf422", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "92004c3a-1e93-4cfc-9226-2ae148fcfafd", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Enhancing Energy Efficiency of Edge Computing Systems" ], [ "subsection", "At the Bottom Device Layer" ], [ "subsubsection", "Computation offloading to edge servers or cloud" ] ], "subsections": [], "title": "Computation offloading to edge servers or cloud" }, { "cite_extract_rate": 0, "cites": [], "content": "For energy saving of the massive devices at the edge, besides offloading their computational tasks to more powerful ends, there is also great potential in sophisticated collaboration and cooperation among the devices themselves. Particularly, when the remote resources from the edge server or cloud are unavailable, it is critical and non-trivial to complete the edge tasks without violating the energy constraints.\nPCloud is presented in~ to enhance the capability of individual mobile devices at the edge. By seamlessly using available resources from nearby devices, PCloud forms a ‘personal cloud’ to serve end users whenever the cloud resources are difficult to access, where device participation is guided in a privacy-preserving manner. The authors show that, by leveraging multiple nearby device resources, PCloud can greatly reduce the task execution time as well as the energy consumption. For example, in the case study of ‘neighborhood watch with face recognition’, the results show a $74\\%$ reduction in energy consumption on a PCloud vs. on a single edge device. Similar to PCloud, the concept of mobile device cloud (MDC) is proposed in~, where computational offloading is also adopted among the mobile devices. 
It shows that the energy efficiency (gain) is increased by $26\\%$ via offloading in MDC. The authors of~ propose an adaptive method to dynamically discover available nearby resources in heterogeneous networks, and to switch automatically between centralized and flooding strategies to save energy.\nAs the current LTE standard is not optimized to support large simultaneous access of IoT devices, the authors in~ propose an improved memory replication architecture and protocol, REPLISOM, for computation offloading of the massive IoT devices at the edge. REPLISOM improves the memory replication performance through an LTE-optimized protocol, where Device-to-Device (D2D) communication is applied as an important supporting technology (to pull memory replicas from IoT devices). The total energy consumption of REPLISOM is generally higher than in the conventional LTE scenario, as it needs more active devices; however, the energy consumed per device during a single replica transmission is much less. Further evaluation results show that REPLISOM has an energy advantage over the conventional LTE scenarios as long as the size of the replica is sufficiently small.\nFor IoT devices distributed at the edge, the authors in~ leverage the software agents running on the IoT devices to establish an integrated multi-agent system (MAS). By sharing data and information among the mobile agents, edge devices are able to collaborate with each other and improve the system energy efficiency in executing distributed opportunistic applications. Upon an experimental platform with 100 sensor nodes and 20 smartphones as edge devices, the authors show the great potential of data transmission reduction with MAS, which leads to a significant energy saving, from $15\\%$ to $66\\%$, under different edge computing scenarios. 
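The data-reduction idea shared by these device-level schemes can be illustrated with a toy change-detection filter (a simplified sketch, not the actual algorithms used in the cited works; the threshold is an assumed parameter): a device transmits a reading only when it deviates sufficiently from the last transmitted value, suppressing redundant radio transmissions.

```python
def change_detection_filter(readings, threshold):
    """Return only the readings that deviate from the last transmitted
    value by more than `threshold`; everything else is suppressed."""
    transmitted = []
    last_sent = None
    for r in readings:
        if last_sent is None or abs(r - last_sent) > threshold:
            transmitted.append(r)
            last_sent = r
    return transmitted

# A slowly drifting temperature trace: most samples are suppressed,
# which directly cuts radio transmissions and hence energy use.
trace = [20.0, 20.1, 20.05, 20.2, 21.5, 21.6, 21.55, 23.0]
sent = change_detection_filter(trace, threshold=1.0)
# sent == [20.0, 21.5, 23.0]; 5 of the 8 transmissions are suppressed
```

Since the radio is typically a dominant energy consumer on a sensor node, suppressing a large fraction of transmissions in this way translates fairly directly into energy savings, at the cost of bounded data inaccuracy controlled by the threshold.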
As another work applying data reduction for energy saving, CAROMM~ employs a change detection technique (the LWC algorithm) to control the data transmission of IoT devices while maintaining data accuracy.
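The data-reduction idea behind these systems can be illustrated with a minimal change-detection filter (a simplified stand-in for the LWC algorithm, whose details the survey does not cover): a device transmits a reading only when it deviates from the last transmitted value by more than a threshold.

```python
def change_detection_filter(readings, threshold):
    """Transmit a reading only when it differs from the last
    transmitted value by more than `threshold`. Returns the
    (index, value) pairs that would actually be sent."""
    sent = []
    last = None
    for i, v in enumerate(readings):
        if last is None or abs(v - last) > threshold:
            sent.append((i, v))
            last = v  # remember the last value the sink has seen
    return sent

# Example: a slowly drifting temperature signal.
readings = [20.0, 20.1, 20.1, 20.9, 21.0, 21.0, 22.5]
sent = change_detection_filter(readings, threshold=0.5)
print(f"transmitted {len(sent)} of {len(readings)} readings")
# prints: transmitted 3 of 7 readings
```

The fraction of suppressed transmissions translates almost directly into radio energy savings, which is where the $15\%$–$66\%$ range reported above comes from in practice.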
This section classifies these technologies into three categories: systems and toolkits, deep learning packages, and hardware.", "id": "478327fb-c0c5-49d5-ae91-7fa642949abc", "level": "section", "origin_cites_number": 0, "parent_id": "0055f39f-b56b-4a38-8ce6-f52d60700ea8", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Deep Learning Optimization at the Edge" ] ], "subsections": [ "98b9a8e8-5e84-4189-aa8d-48851fdccc02", "083ff676-0bf0-44e0-859d-63889cdc7ccb", "e71e3ef2-27ab-4b8d-8ebe-b33c2376117c" ], "title": "Deep Learning Optimization at the Edge" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 7842 ], "content": "Building systems to support deep learning at the edge is currently a hot topic for both industry and academy. There are several challenges when offloading state-of-the-art AI techniques on the edge directly, including computing power limitation, data sharing and collaborating, and mismatch between edge platform and AI algorithms. To address these challenges, OpenEI is proposed as an Open Framework for Edge Intelligence~. OpenEI is a lightweight software platform to equip edges with intelligent processing and data sharing capability. OpenEI consists of three components: a package manager to execute the real-time deep learning task and train the model locally, a model selector to select the most suitable model for different edge hardware, and a library including a RESTFul API for data sharing. The goal of OpenEI is that any edge hardware will has the intelligent capability after deploying it. \nIn the industry, some top-leading tech-giants have published several projects to move the deep learning functions from the cloud to the edge. 
Besides Microsoft's Azure IoT Edge, which has been introduced in Sec.~\ref{subsec:AzureIoT}, Amazon and Google have also built services to support deep learning at the edge.
Table~\ref{table: Comparison of Deep learning Systems on Edge} summarizes the features of the systems discussed below.
Amazon Web Services (AWS) published IoT Greengrass ML Inference~ after IoT Greengrass. AWS IoT Greengrass ML Inference is software that supports machine learning inference on local devices. With it, connected IoT devices can run AWS Lambda functions and flexibly execute predictions based on deep learning models created, trained, and optimized in the cloud. AWS IoT Greengrass consists of three software distributions: AWS IoT Greengrass Core, AWS IoT Device SDK, and AWS IoT Greengrass SDK. Greengrass is flexible for users: it includes pre-built TensorFlow, Apache MXNet, and Chainer packages, and it can also work with Caffe2 and Microsoft Cognitive Toolkit.
Cloud IoT Edge~ extends Google Cloud's data processing and machine learning to edge devices by taking advantage of Google AI products, such as TensorFlow Lite and Edge TPU. Cloud IoT Edge runs on either Android or Linux-based operating systems. It is made up of three components: Edge Connect ensures the connection to the cloud and the updates of software and firmware, Edge ML runs ML inference with TensorFlow Lite, and Edge TPU is specifically designed to run TensorFlow Lite ML models.
Cloud IoT Edge can satisfy the real-time requirements of mission-critical IoT applications, as it takes advantage of Google AI products (such as TensorFlow Lite and Edge TPU) and optimizes their performance collaboratively.
\begin{table*}[!htp]
	\caption{Comparison of Deep learning Systems on Edge}
	\label{table: Comparison of Deep learning Systems on Edge}
	\begin{tabular}{ l l l l }
		\hline
		\textbf{Features} & \textbf{AWS IoT Greengrass} & \textbf{Azure IoT Edge} & \textbf{Cloud IoT Edge} \\ \hline
		Developer & Amazon & Microsoft & Google \\ \hline
		Components & \begin{tabular}[c]{@{}l@{}}IoT Greengrass Core, IoT Device SDK, \\ IoT Greengrass SDK\end{tabular} & \begin{tabular}[c]{@{}l@{}}IoT Edge modules, IoT Edge runtime, \\ Cloud-based interface\end{tabular} & \begin{tabular}[c]{@{}l@{}}Edge Connect, Edge ML, Edge TPU\end{tabular} \\ \hline
		OS & Linux, macOS, Windows & Windows, Linux, macOS & Linux, macOS, Windows, Android \\ \hline
		Target device & Multiple platforms (GPU-based, Raspberry Pi) & Multiple platforms & TPU\\ \hline
		Characteristic & Flexible & Windows friendly & Real-time \\ \hline
	\end{tabular}
\end{table*}
Due to the limited computing resources at the edge, packages designed for the cloud are not directly suitable for edge devices. Thus, to support data processing with deep learning models at the edge, several edge-oriented deep learning frameworks and tools have been released. In this section, we introduce TensorFlow Lite, Caffe2, PyTorch, MXNet, CoreML~, and TensorRT~, whose features are summarized in Table~\ref{table: Comparison of Deep Learning Frameworks on Edge}.
TensorFlow Lite~ is TensorFlow's lightweight solution designed for mobile and edge devices. TensorFlow was open-sourced by Google in 2015 and has become one of the most widely used deep learning frameworks in cloud data centers. To enable low-latency inference of on-device deep learning models, TensorFlow Lite leverages many optimization techniques, including kernels optimized for mobile apps, pre-fused activations, and quantized kernels that allow smaller and faster (fixed-point math) models.
Facebook published Caffe2~ as a lightweight, modular, and scalable deep learning framework in 2017. Caffe2 is a new version of Caffe, which was originally developed by UC Berkeley AI Research (BAIR) and community contributors. Caffe2 provides an easy and straightforward way to experiment with deep learning and leverages community contributions of new models and algorithms. Compared with the original Caffe framework, Caffe2 incorporates many new computation patterns, including distributed computation, mobile deployment, reduced-precision computation, and more non-vision use cases. Caffe2 supports multiple platforms, enabling developers to use the power of GPUs in the cloud or at the edge with cross-platform libraries.
PyTorch~ is published by Facebook. It is a Python package that provides two high-level features: tensor computation with strong GPU acceleration and deep neural networks built on a tape-based autograd system. Although maintained by the same company, PyTorch and Caffe2 have their own advantages.
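The "quantized kernels" mentioned above rest on a simple idea that can be sketched without any framework: map 32-bit float weights to 8-bit integers plus a scale factor, shrinking storage roughly 4x at the cost of a bounded rounding error. This is a simplified symmetric scheme in NumPy for illustration, not TensorFlow Lite's actual implementation.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization of float32 weights to int8.
    Returns the int8 tensor and the scale needed to dequantize."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
print(q.nbytes / w.nbytes)                              # 0.25: 4x smaller
print(np.abs(dequantize(q, scale) - w).max() <= scale)  # True: bounded error
```

Beyond the memory saving, int8 arithmetic lets mobile CPUs and DSPs use fast fixed-point instructions, which is where the latency gains of the quantized kernels come from.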
PyTorch is geared toward research, experimentation, and trying out exotic neural networks, while Caffe2 supports more industrial-strength applications with a heavy focus on mobile. In 2018, the Caffe2 and PyTorch projects merged into a new one named PyTorch 1.0, which combines the user experience of the PyTorch frontend with the scaling, deployment, and embedding capabilities of the Caffe2 backend.
MXNet~ is a flexible and efficient library for deep learning. It was initially developed by the University of Washington and Carnegie Mellon University to support CNNs and long short-term memory (LSTM) networks. In 2017, Amazon announced MXNet as its deep learning framework of choice. MXNet places a special emphasis on speeding up the development and deployment of large-scale deep neural networks. It is designed to support multiple platforms (both cloud and edge) and can execute both training and inference tasks. Furthermore, besides devices running Windows, Linux, and OSX, it also supports the Ubuntu AArch64 and Raspbian ARM-based operating systems.
CoreML~ is a deep learning framework optimized for on-device performance in terms of memory footprint and power consumption. Published by Apple, it lets users integrate trained machine learning models into Apple products, such as Siri, Camera, and QuickType. CoreML supports not only deep learning models but also some standard models such as tree ensembles, SVMs, and generalized linear models. Built on top of low-level technologies, CoreML aims to make full use of CPU and GPU capabilities and to ensure the performance and efficiency of data processing.
TensorRT~ is a deep learning inference platform that runs models trained with TensorFlow, Caffe, and other frameworks. Developed by NVIDIA, it is designed to reduce latency and increase throughput when executing inference tasks on NVIDIA GPUs.
To achieve computing acceleration, TensorRT leverages several techniques, including weight and activation precision calibration, layer and tensor fusion, kernel auto-tuning, dynamic tensor memory, and multi-stream execution.
Considering the different performance characteristics of these packages and the diversity of edge hardware, it is challenging to choose a suitable package for building edge computing systems. To evaluate deep learning frameworks at the edge and provide a reference for selecting appropriate combinations of package and edge hardware, pCAMP~ has been proposed. It compares the packages' performance (w.r.t. latency, memory footprint, and energy) on five edge devices and observes that no framework wins over all the others in all aspects.
This indicates that there is much room to improve the frameworks at the edge. Developing a lightweight, efficient, and highly scalable framework to support diverse deep learning models at the edge is thus both important and urgent.
In addition to these single-device frameworks, researchers increasingly focus on distributing deep learning models over the cloud and edge. DDNN~ is a distributed deep neural network architecture spanning the cloud, the edge, and end devices. DDNN maps sections of a deep neural network onto different computing devices to minimize communication and resource usage for devices and to maximize the usefulness of features extracted in the cloud.
Neurosurgeon~ is a lightweight scheduler that automatically partitions DNN computation between mobile devices and datacenters at the granularity of neural network layers. By effectively leveraging resources in the cloud and at the edge, Neurosurgeon achieves low computing latency, low energy consumption, and high traffic throughput.
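The layer-granularity partitioning that Neurosurgeon performs can be sketched as a search over split points: for each candidate split, end-to-end latency is the device-side compute up to the split, plus uploading that layer's output, plus server-side compute for the rest. The per-layer profile numbers below are made up for illustration; Neurosurgeon derives them from per-layer regression models.

```python
def best_split(device_ms, server_ms, out_mb, bandwidth_mbps):
    """Choose the DNN layer after which to hand off to the server.
    Split k means layers [0, k) run on the device and [k, n) on the
    server; k = 0 is full offload, k = n is fully local execution.
    out_mb[k] is the size of the tensor crossing the link at split k
    (out_mb[0] being the raw input)."""
    n = len(device_ms)
    def latency(k):
        upload = out_mb[k] * 8 / bandwidth_mbps * 1000  # ms
        return sum(device_ms[:k]) + upload + sum(server_ms[k:])
    return min(range(n + 1), key=latency)

# Hypothetical 4-layer CNN profile: early layers shrink the data,
# so splitting after them avoids shipping the raw input.
device_ms = [30, 25, 60, 40]          # per-layer latency on the device
server_ms = [3, 2, 5, 4]              # per-layer latency on the server
out_mb = [6.0, 1.5, 0.4, 0.3, 0.01]   # tensor sizes at each boundary
print(best_split(device_ms, server_ms, out_mb, bandwidth_mbps=40))
# prints 2: run two layers locally, then offload the 0.4 MB tensor
```

The same search naturally shifts toward full offload when bandwidth is plentiful and toward fully local execution when the link is slow, which is exactly the adaptivity Neurosurgeon demonstrates.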
\n\\begin{table*}[!htp]\n\t\\centering\n\t\\caption{Comparison of Deep Learning Packages on Edge}\n\t\\label{table: Comparison of Deep Learning Frameworks on Edge}\n\t\\begin{tabular}{ l l l l l l l }\n\t\t\\hline\n\t\t\\textbf{Features} & \\textbf{TensorFlow Lite} & \\textbf{Caffe2} & \\textbf{PyTorch} & \\textbf{MXNet} & \\textbf{CoreML} & \\textbf{TensorRT} \\\\ \\hline\n\t\tDeveloper & Google & Facebook & Facebook & DMLC, Amazon & Apple & NVIDIA \\\\ \\hline\n\t\t\\begin{tabular}[c]{@{}l@{}}Open Source \\\\ License\\end{tabular} & Apache-2.0 & Apache-2.0 & BSD & Apache-2.0 & Not open source & Not open source \\\\ \\hline\n\t\tTask & Inference & Training, Inference & Training, Inference & Training, Inference & Inference & Inference \\\\ \\hline\n\t\tTarget Device & \\begin{tabular}[c]{@{}l@{}}Mobile and \\\\ embedded device\\end{tabular} & Multiple platform & Multiple platform & Multiple platform & Apple devices & NVIDIA GPU \\\\ \\hline\n\t\tCharacteristic & Latency & \\begin{tabular}[c]{@{}l@{}}Lightweight, modular, \\\\ and scalable\\end{tabular} & Research & \\begin{tabular}[c]{@{}l@{}}Large-scale \\\\ deep neural networks\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Memory footprint and \\\\ power consumption.\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Latency and\\\\ throughput\\end{tabular} \\\\ \\hline\n\t\\end{tabular}\n\\end{table*}", "id": "083ff676-0bf0-44e0-859d-63889cdc7ccb", "level": "subsection", "origin_cites_number": 11, "parent_id": "478327fb-c0c5-49d5-ae91-7fa642949abc", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Deep Learning Optimization at the Edge" ], [ "subsection", "Deep Learning Packages" ] ], "subsections": [], "title": "Deep Learning Packages" }, { "cite_extract_rate": 0, "cites": [], "content": "The hardware designed specifically for deep learning can strongly support edge computing. 
Thus, we further review relevant hardware systems and classify them into three categories: FPGA-based hardware, GPU-based hardware, and ASIC.", "id": "e71e3ef2-27ab-4b8d-8ebe-b33c2376117c", "level": "subsection", "origin_cites_number": 0, "parent_id": "478327fb-c0c5-49d5-ae91-7fa642949abc", "prefix_titles": [ [ "title", "A Survey on Edge Computing Systems and Tools" ], [ "section", "Deep Learning Optimization at the Edge" ], [ "subsection", "Hardware System" ] ], "subsections": [ "bb0c2551-78bb-4b6f-b8fd-cff2ffbb6a59", "8cf9bef2-b337-4bcf-b88f-1c2fb48cab5e", "5cde47d9-ebf3-4868-b0ed-359a6b64f5ef" ], "title": "Hardware System" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 4756 ], "content": "A field-programmable gate array (FPGA) is an integrated circuit and can be configured by the customer or designer after manufacturing. FPGA based accelerators can achieve high performance computing with low energy, high parallelism, high flexibility, and high security~.\n implements a CNN accelerator on a VC707 FPGA board. The accelerator focuses on solving the problem that the computation throughput does not match the memory bandwidth well. By quantitatively analyzing the two factors using various optimization techniques, the authors provide a solution with better performance and lower FPGA resource requirement, and their solution achieves a peak performance of $61.62$ GOPS under a $100MHz$ working frequency. \nFollowing the above work, Qiu et al.~ propose a CNN accelerator designed upon the embedded FPGA, Xilinx Zynq ZC706, for large-scale image classification. It presents an in-depth analysis of state-of-the-art CNN models and shows that Convolutional layers are computational-centric and Fully-Connected layers are memory-centric. 
The average performances of the CNN accelerator on the convolutional layers and on the full CNN are $187.8$ GOPS and $137.0$ GOPS under a $150MHz$ working frequency, respectively, which significantly outperforms previous approaches.
An efficient speech recognition engine (ESE) is designed to speed up predictions and save energy when applying LSTM-based deep learning models. ESE is implemented on a Xilinx XCKU060 FPGA operating at $200MHz$. For the sparse LSTM network, it achieves $282$ GOPS, corresponding to $2.52$ TOPS on the dense LSTM network. Besides, energy efficiency improvements of $40x$ and $11.5x$ are achieved compared with CPU- and GPU-based solutions, respectively.
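The compute-throughput-versus-memory-bandwidth matching problem these accelerators analyze can be summarized with the roofline model: attainable performance is bounded either by the compute roof or by the memory roof (bandwidth times the kernel's operational intensity), whichever is lower. A minimal sketch with illustrative numbers:

```python
def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    """Roofline model: performance is capped by the compute roof
    or by the memory roof, whichever is lower."""
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

# A low-intensity layer is bandwidth-bound; a high-intensity one
# hits the compute roof instead.
print(attainable_gflops(100.0, 4.5, 5.0))   # 22.5 (memory-bound)
print(attainable_gflops(100.0, 4.5, 40.0))  # 100.0 (compute-bound)
```

Loop tiling and data reuse, as applied in the VC707 design, effectively raise a layer's operational intensity and move it from the memory-bound to the compute-bound region.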
It also supports the NVIDIA JetPack SDK, which includes libraries for deep learning, computer vision, GPU computing, and multimedia processing.
NVIDIA DRIVE PX~ is designed as an AI supercomputer for autonomous driving. The architecture is available in a variety of configurations, from a mobile processor operating at 10 watts to a multi-chip AI processor delivering 320 TOPS. It can fuse data from multiple cameras, as well as lidar, radar, and ultrasonic sensors.
For CNN-based deep learning models, it provides a computing speed $30x$ faster than that of the NVIDIA K20M GPU, with a die area of $4.86 mm^2$ and a power of $320 mW$.
Edge TPU~ is Google's purpose-built ASIC for edge computing. It augments Google's Cloud TPU and Cloud IoT to provide an end-to-end infrastructure and facilitates the deployment of customers' AI-based solutions. In addition, Edge TPU combines custom hardware, open software, and state-of-the-art AI algorithms to achieve high performance within a small physical area and at low power consumption.
User mobility refers to how to automatically migrate the current program state and necessary data when the user moves from one edge node's coverage domain to another, so that the service handover is seamless. Currently, Cloudlet and CloudPath provide service migration by terminating/finishing the existing task and starting a new VM/instance on the target edge node. Fine-grained migration and target edge node prediction are not supported. Resource mobility refers to i) how to dynamically discover and manage available resources, including both long-term and short-term ones, and ii) how the system can be resumed as soon as possible with replacements when some edge devices are damaged. For example, PCloud and FocusStack support edge devices dynamically joining and leaving when no task is running. Intelligent dynamic resource management is still required for edge computing systems.
When resource competition is intense and the requests come from different, unrelated tasks, it is hard to decide who should use the resource based only on local information. Besides, unfairness can be exploited as an attack strategy to starve a critical task of resources, which may lead to main task failure. Existing edge computing systems do not pay much attention to multi-user fairness; the bottom-line solution is to provide resource isolation, so that users get exactly what the edge node promised when it accepted the request. More elaborate fairness strategies require system support, such as updates on the status of related tasks.
Enhancing resource isolation and setting up privilege management and access control policies are potential directions for solving the privacy problem.
For example, smart home gateways and sensors belong to the house owner, network resources and base stations to Internet service providers (ISPs), and traffic cameras to the government. How to access these resources and organize them according to the requirements of applications and services, especially in emergency situations, is a problem that an edge computing system needs to consider. Existing research assumes we have permission to use the various devices belonging to different owners. In the initial stage, systems focus on functionality rather than on deployment issues such as pricing models. As edge computing develops, these real deployment issues should be solved before we can enjoy all the benefits.
Generally, edge computing systems focus on how to satisfy the resource requirements of various services and applications; some of them pay more attention to energy consumption due to the nature of mobile nodes, while more complex overhead models are not yet considered.
Specifically, how to adapt traditional applications to the new architecture and realize their basic functions so that they can run successfully deserves further exploration and development.
Fang Liu and\nZhiping Cai are the corresponding authors.\n}\n\\bibliographystyle{IEEEtran}\n\\bibliography{reference}\n\\nop{\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/FangLiu.jpg}}]{Fang Liu}\nis an associate professor at the School of Data and Computer Science, Sun Yat-Sen University (SYSU), China. She received the B.S. and Ph.D. degrees in computer science from National University of Defense Technology, Changsha, China in 1999 and 2005, respectively. \nHer main research interests include computer architecture, edge computing and storage systems.\n\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/GuomingTang.png}}]{Guoming Tang}\nis an assistant professor at the Key Laboratory of Science and Technology on Information System Engineering, National University of Defense Technology (NUDT), China. He received the Bachelor's and Master's degrees from NUDT in 2010 and 2012, respectively, and the Ph.D. degree in Computer Science from the University of Victoria, Canada, in 2017. Aided by machine learning and optimization techniques, his research focuses on computational sustainability in edge/cloud computing systems.\n\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/YouhuiziLi.jpg}}]{Youhuizi Li}\nis currently an assistant professor at the Key Laboratory of Complex Systems Modeling and Simulation, Ministry of Education, Hangzhou. She is also with the School of Computer Science and Technology, Hangzhou Dianzi University, China. She received her B.E. and Ph.D. both in Computer Science from Xidian University in 2010, and Wayne State University in 2016, respectively. Her research interests include energy efficiency, edge computing and mobile systems.
She is a member of IEEE and CCF.\n\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/Zhiping.jpg}}]{Zhiping Cai}\nis a full professor in the College of Computer, NUDT. He received the B.Eng., M.A.Sc., and Ph.D. degrees in computer science and technology from the National University of Defense Technology (NUDT), China, in 1996, 2002, and 2005, respectively. His current research interests include network security and big data. He is a senior member of the CCF and a member of the IEEE. His doctoral dissertation was awarded the Outstanding Dissertation Award of the Chinese PLA.\n\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/XingzhouZhang.jpg}}]{Xingzhou Zhang}\nis currently a Ph.D. candidate at State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, China. He received the Bachelor's degree from Shandong University in 2014. His main research interests include edge computing, machine learning, and computer systems.\n\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/Tongqing.jpg}}]{Tongqing Zhou}\nreceived the bachelor’s and master’s degrees in Computer Science and Technology from National University of Defense Technology (NUDT), Changsha in 2012 and 2014, respectively. He is currently working toward the Ph.D. degree in the College of Computer, NUDT. His main research interests include wireless networks, mobile sensing, and network measurement.\n\\end{IEEEbiography}\n}\n\\end{document}
0
[ 8836, 4754, 3357, 7842, 3416, 2672, 2671, 7199, 4755, 4756 ]
1.586956
[ "David Ahmedt-Aristizabal", "Mohammad Ali Armin", "Simon Denman", "Clinton Fookes", "Lars Petersson" ]
A Survey on Graph-Based Deep Learning \\ for Computational Histopathology
2021
2021-07-01T07:50:35Z
cs.LG
With the remarkable success of representation learning for prediction problems, we have witnessed a rapid expansion of the use of machine learning and deep learning for the analysis of digital pathology and biopsy image patches. However, learning over patch-wise features using convolutional neural networks limits the ability of the model to capture global contextual information and comprehensively model tissue composition. The phenotypical and topological distribution of constituent histological entities play a critical role in tissue diagnosis. As such, graph data representations and deep learning have attracted significant attention for encoding tissue representations, and capturing intra- and inter- entity level interactions. In this review, we provide a conceptual grounding for graph analytics in digital pathology, including entity-graph construction and graph architectures, and present their current success for tumor localization and classification, tumor invasion and staging, image retrieval, and survival prediction. We provide an overview of these methods in a systematic manner organized by the graph representation of the input image, scale, and organ on which they operate. We also outline the limitations of existing techniques, and suggest potential future research directions in this domain.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "01a5d650-4933-4e6e-b664-6a6a8f2896ea", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ] ], "subsections": [ "1107ff3b-7f8f-45af-bf5c-5ef3d043677e", "71961fe0-d880-47a5-944e-136229bc22c0", "ec40cece-9cde-4272-b0f4-123fed08d299", "9ba94eb4-186e-4b6e-b41c-1ce1e1f42bab", "b4c39f0e-0871-4d3e-90ce-279f08670306", "7ace2f45-c6f4-4f16-b76a-f57923b4e6a9" ], "title": "root" }, { "cite_extract_rate": 0.416666666666666, "cites": [ 8845, 6936, 262, 553, 885, 6938, 6935, 8305, 550, 6937 ], "content": "\\IEEEPARstart{R}{ecent} advances in deep learning techniques have rapidly transformed these approaches into the methodology of choice for analyzing medical images, and in particular for histology image classification problems~. Because of the increasing availability of large scale high-resolution whole-slide images (WSI) of tissue specimens, digital pathology and microscopy have become appealing application areas for deep learning algorithms.\nGiven wide variations in pathology and the often time-consuming diagnosis process, clinical experts have begun to benefit from computer-aided detection and diagnosis methods capable of learning features that optimally represent the data~.\nThis thorough survey serves as an accurate guide to biomedical engineering and clinical research communities interested in discovering the tissue composition-to-functionality relationship using image-to-graph translation and deep learning.\nThere are several review papers available that analyse the benefits of deep learning for providing reliable support for microscopic and digital pathology diagnosis and treatment decisions~, and specifically for cancer diagnosis~.\nCompared to other medical fields such as dermatology, ophthalmology, neurology, cardiology, and radiology, digital pathology and microscopy is one of the most 
dominant medical applications of deep learning. One driving force behind innovation in computational pathology has been the introduction of grand challenges\n(\\textit{e.g.} NuCLS~, BACH~, MoNuSeg~). Developed techniques that offer decision support to human pathologists have shown bright prospects for detecting, segmenting, and classifying cells and nuclei, and for detecting and classifying diseases such as cancer.\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=0.95\\linewidth]{images/Fig1a.png}\n\\caption{\nTraditional CNNs excel at modelling local relations in grid representation, where the topology of the neighborhood is constant (Left).\nGCNs can take into account different neighbouring relations (global relation) by going beyond the local pixel neighbourhoods used by convolutions. On a graph, the neighbours of a node are unordered and variable in size (Right).}\n\\label{fig:Fig1a}\n\\vspace{-10pt}\n\\end{figure}\nDeep learning techniques such as convolutional neural networks (CNNs) have demonstrated success in extracting image-level representations; however, they are inefficient when dealing with relation-aware representations. Modern deep learning variations of graph neural networks (GNNs) have made a significant impact in many technological domains for describing relationships.\nGraphs, by definition, capture relationships between entities and can thus be used to encode relational information between variables~. As a result, special emphasis has been placed on the generalisation of GNNs into non-structured and structured scenarios.\nTraditional CNNs analyse local areas based on fixed connectivity (determined by the convolutional kernel), leading to limited performance and difficulty in interpreting the structures being modeled.\nGraphs, on the other hand, offer more flexibility to analyse unordered data by preserving neighboring relations.
This difference is illustrated in Fig.{~\\ref{fig:Fig1a}}.\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=1\\linewidth]{images/Fig1.png}\n\\vspace{-15pt}\n\\caption{\n\\textbf{Top:} \nGraph-based representation of images for relation-aware human-object interaction, image segmentation, and human pose estimation (left-to-right). Images adapted from~.\n\\textbf{Bottom:} \n\\textbf{A.} Cell-graph representation for prostate cancer.\n\\textbf{B.} Tissue-graph representation for colorectal cancer.\n\\textbf{C.} Hierarchical cell-to-tissue graph representation for breast cancer.\nImages adapted from~.}\n\\label{fig:Fig1}\n\\vspace{-10pt}\n\\end{figure}\nThe adaptation of deep learning from images to graphs has received increased attention, leading to a new cross-domain field of graph-based deep learning which seeks to learn informative representations of graphs in an end-to-end manner. This field has exhibited remarkable success for various tasks as discussed by recent surveys on graph deep learning frameworks and their applications~. \nGraph embeddings have appeared in computer vision tasks where graphs can efficiently define relationships between objects, or for the purpose of graph-structured image analysis. \nInteresting results have been obtained for object detection, semantic segmentation, skeleton-based action recognition, image classification and human-object interaction tasks as illustrated in Fig.~\\ref{fig:Fig1} (Top). \nMedical applications have benefited from rapid progress in the field of computer vision and GNNs. \nThe development of GNNs has seen the application of deep learning methods to GNNs, such as graph convolutional networks (GCNs). These models have been proposed as a powerful tool to model functional and anatomical structures, brain electrical activity, and segmentation of the vasculature system and organs~. 
\nHistological images depict the micro-anatomy of a tissue sample, and pathologists use histological images to make diagnoses based on morphological changes in tissues, the spatial relationship between cells, cell density, and other factors. Graph-based methods, which can capture geometrical and topological properties, are able to model cell-level information and overall tissue micro-architecture. \nPrior to the advent of deep learning, numerous approaches for processing histopathological images as graphs were investigated~. \nThese methods used classical machine learning approaches, which are less accurate for graph classification compared to GCNs. \nThe capabilities of graph-based deep learning, which bridges the gap between deep learning methods and traditional cell graphs for disease diagnosis, are yet to be sufficiently investigated.\nIn this survey, we analyse how graph embeddings are employed in histopathology diagnosis and analysis. \nWhile graphs are not directly expressed within this data, they can efficiently describe relationships between tissue regions and cells. 
\nThis setting offers a very different task for GNNs in comparison to analysis of unstructured data such as electrophysiological and neuroimaging recordings where the data can be directly mapped to a graph{~}.\nSelected samples of graph representations in digital pathology (cell-graph, patch-graph, tissue-graph and cell-tissue representation) used to capture and learn relevant morphological regions that will be covered in this review are illustrated in Fig.{~\\ref{fig:Fig1}} (Bottom).\nThis survey offers a comprehensive overview of preprocessing, graph models and explainability tools used in computational pathology, highlighting the capability of GNNs to detect and associate key tissue architectures, regions of interest, and their interdependence.\nAlthough some papers have surveyed conventional cell graphs with handcrafted features to characterize the entities~, and others have briefly touched upon the benefits of GCNs in biology and medicine{~}, to the best of our knowledge, no systematic review exists that presents and discusses all relevant works concerning graph-based representations and deep learning models for computational pathology.\n\\vspace{-6pt}", "id": "1107ff3b-7f8f-45af-bf5c-5ef3d043677e", "level": "section", "origin_cites_number": 24, "parent_id": "01a5d650-4933-4e6e-b664-6a6a8f2896ea", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Introduction" ] ], "subsections": [ "1866f73b-494a-47fb-ae56-18b7dff9b9ca", "45bdb02e-6313-44fd-a72e-51199db5900d" ], "title": "Introduction" }, { "cite_extract_rate": 0.36363636363636304, "cites": [ 6940, 6939, 553, 8305 ], "content": "Deep learning has increased the potential of medical image analysis by enabling the discovery of morphological and textural representations in images solely from the data. 
Although CNNs have shown impressive performance in the field of histopathology analysis, they are unable to capture complex neighborhood information as they analyse local areas determined by the convolutional kernel. To extract interaction information between objects, a CNN needs to reach sufficient depth by stacking multiple convolutional layers, which is inefficient. This leads to limitations in the performance and interpretability of the analysis of anatomical structures and microscopic samples.\nGraph convolutional networks (GCNs) are deep learning-based models that operate over graphs, and are becoming increasingly useful for medical diagnosis and analysis~. GCNs can better exploit irregular relationships and preserve neighboring relations compared with CNN-based models~. \nBelow we outline the reasons why current research in histopathology has shifted the analytical paradigm from pixel to entity-graph processing:\n\\begin{enumerate}\n \\item The potential correlations among images are ignored during traditional CNN feature learning; however, a GCN can be introduced to estimate the dependencies between images and enhance the discriminative ability of CNN features~.\n \\item CNNs have been commonly used for the analysis of whole slide images (WSI) by classifying fixed-sized biopsy image patches using fixed fusion rules such as averaging features or class scores, or weighted averaging with learnable weights to obtain an image-level classification score.
\n Aggregation using a CNN also includes excessive whitespace, putting undue reliance on the orientation and location of the tissue segment.\n Even though CNN-based models have practical merits through considering important patches for prediction, they disregard the spatial relationships between patches and thus the global contextual information.\n Architectures are required to be capable of dealing with size and shape variation in regions of interest (ROIs), and must encode the spatial context of individual patches and their collective contribution to the diagnosis, which can be addressed with graph-based representations~.\n \\item A robust computer-aided detection system should be able to capture multi-scale contextual features in tissues, which can be difficult with traditional CNN-based models. A pathological image can be transformed into a graph representation to capture the cellular morphology and topology (cell-graph)~, and the attributes of the tissue parts and their spatial relationships (tissue-graph)~.\n \\item Graph representations can enhance the interpretation of the final representation by modeling relations among different regions of interest. Graph-based models offer a new way to verify existing observations in pathology. \n Attention mechanisms with GCNs, for example, highlight informative nuclei and inter-nuclear interactions, allowing the production of interpretable maps of tissue images displaying the contribution of each nucleus and its surroundings to the final diagnosis~.\n \\item By incorporating any task-specific prior pathological information, an entity-graph can be customized in various ways.
As a result, pathology-specific interpretability and human-machine co-learning are enabled by the graph format~.\n \\item GCNs are a complementary method to CNNs for morphological feature extraction, and they can be employed instead of, or in addition to, CNNs during multimodal fusion for fine-grained patient stratification~.\n\\end{enumerate}\n\\vspace{-6pt}", "id": "1866f73b-494a-47fb-ae56-18b7dff9b9ca", "level": "subsection", "origin_cites_number": 11, "parent_id": "1107ff3b-7f8f-45af-bf5c-5ef3d043677e", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Introduction" ], [ "subsection", "Why graph-based deep learning for characterizing diseases through histopathology slides?" ] ], "subsections": [], "title": "Why graph-based deep learning for characterizing diseases through histopathology slides?" }, { "cite_extract_rate": 0.30769230769230704, "cites": [ 6943, 6945, 6936, 6942, 6939, 6941, 6944, 6940 ], "content": "Compared to other recent reviews on traditional deep learning in histopathology slides, our manuscript captures the current efforts relating to entity-graphs and recent advancements in GCNs for characterizing diseases and pathology tasks.\nPapers included in the survey are obtained from various journals, conference proceedings and open-access repositories.\nTable~\\ref{table:Applications} outlines the applications that were addressed across all reviewed publications.
It is noted that breast cancer analysis constitutes the major application in digital pathology that has been analyzed using graph-based deep learning techniques.\nThis review is divided into three major sections.\nIn Section~\\ref{sec:sec2} we provide a technical overview of the prevailing tools for entity-graph representation and graph architectures used in accelerating digital pathology research.\nIn Section~\\ref{sec:sec3} we introduce the current applications of deep graph representation learning and cluster these proposals based on the graph construction (cell-graph, patch-graph, tissue-graph, hierarchical graph) and feature level fusion methods followed by the task or organ on which they operate.\nFinally, Section~\\ref{sec:sec4} highlights open problems and perspectives regarding the shifting analytical paradigm from pixel to entity-based processing. Specifically, we discuss the topics of graph construction, embedding expert knowledge, complexity of graph models, training paradigms, and graph model interpretability.\n\\begin{table}[t!]\n\\caption{Summary of applications of graph-based deep learning in histopathology covered in this survey.}\n\\vspace{-2pt}\n\\centering\n\\label{table:Applications}\n\\resizebox{0.44\\textwidth}{!}{\n\\begin{tabular}{\nlcl\n}\n\\toprule\n\\textbf{Application} & \\textbf{\\#Applications} & \\textbf{Reference} \\\\\n\\midrule\nBreast cancer & 11\n& \\\\\nColorectal cancer & 6\n& \\\\\nProstate cancer & 3 \n& \\\\\nLung cancer & 3 \n& \\\\\nCervical cancer & 2 \n& \\\\\nLymphoma & 1 \n& \\\\\nSkin cancer & 1 \n& \\\\\nRenal cancer & 1 \n& \\\\\n\\midrule\n\\textbf{Total} & \\textbf{28} \n& \\\\\n\\bottomrule\n\\end{tabular}}\n\\vspace{-9pt}\n\\end{table}\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=1\\linewidth]{images/Fig2-updated.png}\n\\caption{\nOverview of a standard graph-based workflow in computational pathology.\nThe WSI image is first transformed into one or more graphs.\n\\textbf{1.} The entities can be 
nuclei, patches or tissue regions. \n\\textbf{2.} Node features comprise handcrafted or deep learning features to characterize the entities.\n\\textbf{3.} The edges encode intrinsic relationships (spatial or semantic) among the entities.\n\\textbf{4.} Graph encoding and classification (node-level or graph-level prediction): the graph representation is processed using GNNs and their variants such as ChebNet, GCN, GraphSAGE, GAT, and GIN, including different graph pooling strategies (global or hierarchical pooling).\n\\textbf{5.} Graph interpretations: a set of GNN model interpretability tools such as graph attentions or post-hoc graph explainers (\\textit{e.g.}~GNNExplainer and GraphGrad-CAM).}\n\\label{fig:Fig2}\n\\vspace{-8pt}\n\\end{figure*}", "id": "45bdb02e-6313-44fd-a72e-51199db5900d", "level": "subsection", "origin_cites_number": 26, "parent_id": "1107ff3b-7f8f-45af-bf5c-5ef3d043677e", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Introduction" ], [ "subsection", "Contribution and organisation" ] ], "subsections": [], "title": "Contribution and organisation" }, { "cite_extract_rate": 1, "cites": [ 553, 550 ], "content": "\\label{sec:sec2}\nTranslating patient histopathological images into graphs to encode the spatial context of cells and tissues for a given patient has been used to improve prediction accuracy of various pathology tasks. Graph representations followed by GNN-based models and interpretability approaches allow pathologists to directly comprehend and reason about the outcomes.
GNNs can also serve a variety of prediction purposes by adapting different designs, such as performing node-level and graph-level predictions.\nA standard entity-graph based pathological workflow requires several phases, such as node and graph topology definition, as well as the choice of GNN architecture.\nIn this section, we provide technical insights into these phases that are required for graph analytics in computational pathology: \n(1) Graph representation (entity, embedding and edge definition);\n(2) Graph models (graph structures for processing graph-structured data);\nand (3) Explainability (a set of interpretation methodologies such as model-based and post-hoc interpretability).\nA traditional framework with the aforementioned phases is illustrated in Fig.{~\\ref{fig:Fig2}}.\nA deep analysis of each GNN model can be found in survey papers that deal with graph architectures~.\n\\vspace{-6pt}", "id": "71961fe0-d880-47a5-944e-136229bc22c0", "level": "section", "origin_cites_number": 2, "parent_id": "01a5d650-4933-4e6e-b664-6a6a8f2896ea", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ] ], "subsections": [ "e0b23ea9-2f62-4439-87d8-347ef71f4911", "773fcdc6-0414-4c43-9a26-ec3ad24224ca", "10f6e9a7-6d01-437b-8683-eef537fbe22e", "b0482895-606f-4df3-8a91-6704a57c5554" ], "title": "Graph representation learning \\\\ in digital pathology: Background" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "e0b23ea9-2f62-4439-87d8-347ef71f4911", "level": "subsection", "origin_cites_number": 0, "parent_id": "71961fe0-d880-47a5-944e-136229bc22c0", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Histopathology graph representation" ] ], "subsections": [
"1007d2ae-bd5f-406a-bd10-dc9fdec695dc", "44b2f3c3-1a14-491e-8d42-de4b79515c0f", "df50a9a4-5365-476c-b956-826102762942" ], "title": "Histopathology graph representation" }, { "cite_extract_rate": 0, "cites": [], "content": "A graph can be represented by $G=(\\mathcal{V},\\mathcal{E},W)$, where $\\mathcal{V}$ is a vertex set with $|\\mathcal{V}|=n$ nodes and $\\mathcal{E}$ denotes the set of edges connecting these nodes.\nData in $\\mathcal{V}$ can be represented by a feature matrix $\\mathrm{X} \\in \\mathbb{R}^{n \\times d}$, where $d$ denotes the input feature dimension.\n$W \\in \\mathbb{R}^{n \\times n}$ is a binary or weighted adjacency matrix describing the connections between any two nodes in $\\mathcal{V}$, in which the importance of the connection between the \\textit{i}-th and the \\textit{j}-th nodes is measured by the entry of $W$ in the \\textit{i}-th row and \\textit{j}-th column, denoted $w_{ij}$.\nCommonly used methods to determine the entries, $w_{ij}$, of $W$ include the Pearson correlation-based graph, the K-nearest neighbor (KNN) method, and the distance-based graph~. \nIn general, GNNs learn a feature transformation function for $\\mathrm{X}$ and produce output $Z \\in \\mathbb{R}^{n \\times d^{'}}$, where $d^{'}$ denotes the output feature dimension.\nThe graph methods presented in digital pathology typically use data in one of two forms. Whole slide images (WSI), also known as virtual microscopy, are high-resolution images generated by combining many smaller image tiles or strips and tiling them to form a single image. Tissue microarrays (TMAs) consist of paraffin blocks produced by extracting cylindrical tissue cores and inserting them into a single recipient block (microarray) in a precisely spaced pattern.
With this technique, up to 1000 tissue cores can be assembled in a single paraffin block to allow multiplex histological analysis.", "id": "1007d2ae-bd5f-406a-bd10-dc9fdec695dc", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "e0b23ea9-2f62-4439-87d8-347ef71f4911", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Histopathology graph representation" ], [ "subsubsection", "Preliminaries" ] ], "subsections": [], "title": "Preliminaries" }, { "cite_extract_rate": 0, "cites": [], "content": "Graph representations have been used in digital pathology for multiple tasks where a histology image is described as an entity-graph, and nodes and edges of a graph denote biological entities and inter-entity interactions respectively. \nThe entities can be biologically-defined such as nuclei and tissue regions, or can be defined patch-wise. \nTherefore, constructing an entity-graph for graph analytics in computational pathology demands the following pre-processing steps.", "id": "44b2f3c3-1a14-491e-8d42-de4b79515c0f", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "e0b23ea9-2f62-4439-87d8-347ef71f4911", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Histopathology graph representation" ], [ "subsubsection", "Graph construction" ] ], "subsections": [ "d68d0b84-4119-48cb-97c3-a58a87699858", "0e528c80-31df-44a1-b9be-77ac1fbb2150", "7ebdf5da-766b-458c-9aaa-7dd14e66692e" ], "title": "Graph construction" }, { "cite_extract_rate": 0.5454545454545451, "cites": [ 8460, 6948, 6946, 6949, 6947, 825 ], "content": "WSI usually includes significant non-tissue regions. 
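One common way to discard such non-tissue regions is the Otsu thresholding mentioned in this survey's graph-construction pipelines. As a hedged illustration only (the 8-bit histogram, synthetic image, and function name are our assumptions, not any surveyed system's implementation), a minimal numpy sketch of Otsu's between-class-variance criterion:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the 8-bit threshold maximising between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class empty: variance undefined
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Synthetic slide: bright background (230) with a darker tissue blob (90).
img = np.full((64, 64), 230, dtype=np.uint8)
img[16:48, 16:48] = 90
t = otsu_threshold(img)
mask = img < t        # foreground (tissue) pixels
```

In a real pipeline this would run on a downsampled grayscale WSI, usually after Gaussian smoothing, before any entity detection.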
To identify tissue regions, the foreground is segmented with Gaussian smoothing and Otsu thresholding{~}.\nOne of the most common graph representations, the cell-graph, requires model training and fine-tuning for cell detection or segmentation. To detect nuclei, several methods have been used, such as Hover-Net{~}, CIA-Net{~}, \nUNet{~} and cGANs{~}, which are trained on multi-organ nuclei segmentation datasets (MoNuSeg{~}, PanNuke{~}, CoNSep{~}). The entities can also be calculated using agglomerative clustering{~} of detected cells.\nThe nodes in a graph can also be represented by fixed-sized patches (patch-graphs) randomly sampled from the raw WSI or by using a patch selection method where non-tissue regions are removed{~}. \nImportant patches can be sampled from segmented tissues using color thresholds where patches with similar features (tissue cluster) are modeled as a node.\nPre-trained deep learning models on tissue datasets (\\textit{e.g.} NCT-CRC-HE-100{~}) have also been used to detect the tumor region of the specific pathological task.\nMeaningful tissue regions have also been used as nodes to capture the tissue distribution (tissue-graphs).
To separate tissue structures, superpixels{~} obtained using unsupervised algorithms such as simple linear iterative clustering (SLIC){~} become nodes.", "id": "d68d0b84-4119-48cb-97c3-a58a87699858", "level": "paragraph", "origin_cites_number": 11, "parent_id": "44b2f3c3-1a14-491e-8d42-de4b79515c0f", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Histopathology graph representation" ], [ "subsubsection", "Graph construction" ], [ "paragraph", "Node definition" ] ], "subsections": [], "title": "Node definition" }, { "cite_extract_rate": 1, "cites": [ 97, 825 ], "content": "Node features can comprise hand-crafted features including morphological and topological properties (\\textit{e.g.} shape, size, orientation, nuclei intensity, and the chromaticity using the gray-level co-occurrence matrix).\nFor cell-graph representations, some works include learned features extracted from the trained model used to localise the nuclei.\nIn patch-graph methods, deep neural networks are used to automatically learn a feature representation from patches around the centroids of the nuclei and tissue regions. If the entity is larger than the specified patch size, multiple patches inside the entity are processed, and the final feature is computed as the mean of the patch-level deep features.\nSome works have aggregated features from neighboring patches and combined them to obtain a central node representation to increase feature learning performance.\nAuthors have adopted CNNs (MobileNetV2, DenseNet, ResNet-18 or ResNet-50{~}), and encoder-decoder segmentation models (UNet{~}) for the purpose of deep feature extraction.
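The mean-of-patch-features rule described above for entities larger than one patch can be sketched as follows; the feature dimension and the toy arrays standing in for CNN patch features are illustrative assumptions, not a specific surveyed pipeline:

```python
import numpy as np

def node_embedding(patch_features):
    """Mean-pool per-patch deep features into one node embedding.

    patch_features: (num_patches, d) array, e.g. CNN features of all
    patches that fall inside one tissue entity.
    """
    return patch_features.mean(axis=0)

# Toy stand-in for CNN outputs: three 4-d patch features inside one entity.
feats = np.array([[1.0, 0.0, 2.0, 4.0],
                  [3.0, 2.0, 0.0, 0.0],
                  [2.0, 4.0, 4.0, 2.0]])
emb = node_embedding(feats)   # one d-dimensional node feature vector
```

The same helper applies unchanged whether the entity is a nucleus-centred patch set or a superpixel tiled into multiple patches.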
To generate patch-level embeddings, an ImageNet-pretrained CNN, as well as a CNN pretrained for a tissue sub-compartment classification task, have been used.", "id": "0e528c80-31df-44a1-b9be-77ac1fbb2150", "level": "paragraph", "origin_cites_number": 2, "parent_id": "44b2f3c3-1a14-491e-8d42-de4b79515c0f", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Histopathology graph representation" ], [ "subsubsection", "Graph construction" ], [ "paragraph", "Node embeddings" ] ], "subsections": [], "title": "Node embeddings" }, { "cite_extract_rate": 0, "cites": [], "content": "The edge configuration encodes the cellular or tissue interactions, \\textit{i.e.} how likely two nearby entities will interact and consequently form an edge. \nThis topology is often defined heuristically using a pre-defined proximity threshold, a nearest neighbor rule, a probabilistic model, or a Waxman model{~}.\nThe graph topology can also be constructed as a region adjacency graph (RAG){~} using the spatial centroids of superpixels.", "id": "7ebdf5da-766b-458c-9aaa-7dd14e66692e", "level": "paragraph", "origin_cites_number": 2, "parent_id": "44b2f3c3-1a14-491e-8d42-de4b79515c0f", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Histopathology graph representation" ], [ "subsubsection", "Graph construction" ], [ "paragraph", "Edge definition" ] ], "subsections": [], "title": "Edge definition" }, { "cite_extract_rate": 1, "cites": [ 134, 5680, 6950 ], "content": "From the perspective of supervision, we can categorize graph learning tasks into different training settings.
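The proximity-threshold edge rule described above (and, with a small change, the KNN rule from the preliminaries) can be sketched in numpy; the threshold value and toy centroids are illustrative assumptions only:

```python
import numpy as np

def threshold_adjacency(coords, max_dist=2.0):
    """Connect two entities iff their centroids are closer than max_dist.

    coords: (n, 2) array of entity centroids; returns a symmetric
    binary adjacency matrix with no self-loops.
    """
    # Pairwise Euclidean distances between all centroids.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    W = (d < max_dist).astype(float)
    np.fill_diagonal(W, 0.0)   # no self-loops
    return W

# Toy example: 4 nuclei centroids; the last one is spatially isolated.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [9.0, 9.0]])
W = threshold_adjacency(coords, max_dist=2.0)
```

Replacing the distance test with an argsort over each row of `d` would give the KNN variant instead.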
Such approaches have also been used to extract effective representations from data.\n\\begin{itemize}\n \\item The \\textit{Supervised} learning setting provides labeled data for training.\n \\item \\textit{Weakly or partially supervised learning} refers to models that are trained using examples that are only partially annotated.\n \\item \\textit{Semi-supervised learning} trains a model using a small set of annotated samples, then generates pseudo-labels for a large set of samples without annotations, and learns a final model by mixing both sets of samples.\n \\item \\textit{Self-supervised learning} is a form of unsupervised learning in which the data provides supervisory signals when learning a representation via a proxy task. Annotated data is used to fine-tune the representation once it has been learned.\n Some self-supervised approaches adopted as feature extractors include contrastive predictive coding (CPC){~}, texture auto encoder (Deep Ten){~}, and variational autoencoders (VAE){~}.\n\\end{itemize}\n\\vspace{-6pt}", "id": "df50a9a4-5365-476c-b956-826102762942", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "e0b23ea9-2f62-4439-87d8-347ef71f4911", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Histopathology graph representation" ], [ "subsubsection", "Training paradigms" ] ], "subsections": [], "title": "Training paradigms" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 242, 8313 ], "content": "Following graph building, the entity graph is processed using a graph-based deep learning model that works with graph-structured data to perform analysis.\nGCNs can be broadly categorised as spectral-based~ and spatial-based~.\nSpectral-based GCNs use spectral convolutional neural networks, that build upon the graph Fourier transform and the normalized Laplacian matrix of 
the graph. Spatial-based GCNs define a graph convolution operation based on spatial relationships that exist among graph nodes. 
Graph convolutional networks, similar to CNNs, learn an abstract feature representation for each node via message passing, in which nodes successively aggregate feature vectors from their neighborhood to compute a new feature vector at the next hidden layer in the network.
A basic GNN consists of two components: the \textit{AGGREGATE} operation aggregates the neighboring node representations of the center node, whereas the \textit{COMBINE} operation combines the aggregated neighborhood representation with the center node representation to generate the updated center node representation.
The AGGREGATE and COMBINE operations at the $l$-th layer of the GNN can be defined as follows:
\vspace{-2pt}
\begin{equation}
h_{\mathcal{N}_v}^{(l)} = \text{AGGREGATE}^{(l)} \left( \big\{ h_u^{(l-1)}, \forall u \in \mathcal{N}_v \big\} \right) ,
\end{equation}
where $h_{\mathcal{N}_v}^{(l)}$ is the aggregated feature of the neighbourhood, and $h_u^{(l-1)}$ is the feature of node $u$ in the neighbourhood $\mathcal{N}_v$ of node $v$.
\vspace{-2pt}
\begin{equation}
\resizebox{0.43\textwidth}{!}{$h_v^{(l)} = \text{COMBINE}^{(l)} \left( h_v^{(l-1)}, h_{\mathcal{N}_v}^{(l)} \right) = \sigma (W^{(l)} \cdot [ h_v^{(l-1)} \| h_{\mathcal{N}_v}^{(l)} ] ) $}, 
\end{equation}
where $h_v^{(l)}$ is the node representation at the $l$-th layer, $h_v^{(0)} = x_{v}$ is the initial feature vector for the node, $\sigma$ denotes the logistic sigmoid function, and $\|$ denotes vector concatenation. 
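To make the AGGREGATE/COMBINE mechanics concrete, the following is a minimal NumPy sketch of one such layer (sum aggregation over neighbours, combine by concatenation plus a sigmoid-activated linear map). The toy path graph, feature sizes, and random weights are illustrative assumptions, not part of any surveyed method.

```python
import numpy as np

def message_passing_layer(A, H, W):
    """One GNN layer: sum-AGGREGATE each node's neighbourhood, then
    COMBINE by concatenating the node state with its aggregate and
    applying a sigmoid-activated linear projection."""
    H_neigh = A @ H                           # AGGREGATE: (n, d)
    Z = np.concatenate([H, H_neigh], axis=1)  # [h_v || h_N(v)]: (n, 2d)
    return 1.0 / (1.0 + np.exp(-(Z @ W)))     # sigmoid combine: (n, d_out)

# Toy 4-node path graph (adjacency) and random features -- illustrative only.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))   # initial node features x_v
W = rng.standard_normal((6, 5))   # learned projection (2d -> d_out)
H1 = message_passing_layer(A, H, W)
print(H1.shape)  # (4, 5)
```

Stacking such layers grows each node's receptive field by one hop per layer, which is what the iterated AGGREGATE/COMBINE equations express.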
\nWith the network structure and node content information as inputs, the outputs of GNNs can focus on various graph analytic tasks using one of the processes listed below:\n\\begin{itemize}\n \\item \\textit{Node-level prediction}: A GNN operating at the node-level computes values for each node in the graph and is thus useful for node classification and regression purposes.\n In node classification, the task is to predict the node label for every node in a graph. To compute the node-level predictions, the node embedding is input to a Multi-Layer Perceptron (MLP) (See Fig.{~\\ref{fig:Fig2-1}}).\n \\item \\textit{Graph-level prediction}: Refers to GNNs that predict a single value for an entire graph. This is mostly used to classify entire graphs, or compute similarities between graphs.\n To compute graph-level predictions, the same node embedding used in node-level prediction is input to a pooling process followed by a separate MLP (See Fig.{~\\ref{fig:Fig2-2}}). \n\\end{itemize}\nIn the following subsections, we describe in more detail the GNN architectures considered in digital pathology analysis methods. Different GNN variants employ different aggregators to acquire information from each node's neighbors, as well as different techniques to update the nodes' hidden states. 
In GNNs, the number of parameters is dependent on the number of node and edge features, as their aggregation is learned.", "id": "773fcdc6-0414-4c43-9a26-ec3ad24224ca", "level": "subsection", "origin_cites_number": 3, "parent_id": "71961fe0-d880-47a5-944e-136229bc22c0", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Graph neural networks models" ] ], "subsections": [ "b8b2f114-15a2-44ed-b36d-1322c72aed62", "138dc01f-1d58-421e-9aaf-52a8356fe628", "cf270f9c-dc5c-4359-a525-8a4fba3901c2", "0028c1f2-3513-42c7-a618-91bea4922763", "b99bda05-8cf2-4f84-ab99-7cfa8a62def6", "04d76a71-f83f-4b41-bcbc-4e67b8ff20f9" ], "title": "Graph neural networks models" }, { "cite_extract_rate": 1, "cites": [ 213, 553, 8313 ], "content": "The convolution operation for spectral-based GCNs is defined in the Fourier domain by determining the eigen decomposition of the graph Laplacian~. \nThe normalized graph Laplacian is defined as $L=I_N-D^{-1/2}AD^{-1/2}=U \\Lambda U^T$ ($D$ is the degree matrix and $A$ is the adjacency matrix of the graph), where the columns of $U$ are the matrix of eigenvectors and $\\Lambda$ is a diagonal matrix of its eigenvalues. The operation can be defined as the multiplication of a signal $x \\in \\mathbb{R}^N$ (a scalar for each node) with a filter $g_{\\theta}=\\text{diag}(\\theta)$, parameterized by $\\theta \\in \\mathbb{R}^N$,\n\\begin{equation}\ng_{\\theta} \\star x = U g_\\theta (\\Lambda) U^Tx.\n\\end{equation}\nDefferrard et al.~ proposed a Chebyshev spectral CNN (ChebNet), which approximates the spectral filters by truncated Chebyshev polynomials, avoiding the calculation of the eigenvectors of the Laplacian matrix, and thus reducing the computational cost.\nA Chebyshev polynomial $T_m(x)$ of order $m$ evaluated at $\\tilde{L}$ is used. 
Thus the operation is defined as,
\vspace{-2pt}
\begin{equation}
g_{\theta} \star x \approx \sum_{m=0}^{M-1} \theta_m T_m (\tilde{L})x ,
\label{eq:eq2}
\end{equation}
where $\tilde{L}$ is the rescaled graph Laplacian, defined as $\tilde{L}=\nicefrac{2L}{\lambda_{\text{max}}}-I_N$, and $\lambda_{\text{max}}$ denotes the largest eigenvalue of $L$. The Chebyshev polynomials are defined recursively as $T_m(x)=2xT_{m-1}(x)-T_{m-2}(x)$ with $T_0(x)=1$ and $T_1(x)=x$.
\begin{figure}[!t]
\centering
\includegraphics[width=1\linewidth]{images/Fig2-1.png}
\caption{
Representation of graph architectures for node-level classification. Recreated from{~}.
}
\label{fig:Fig2-1}
\vspace{-8pt}
\end{figure}", "id": "b8b2f114-15a2-44ed-b36d-1322c72aed62", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "773fcdc6-0414-4c43-9a26-ec3ad24224ca", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Graph neural networks models" ], [ "subsubsection", "ChebNet" ] ], "subsections": [], "title": "ChebNet" }, { "cite_extract_rate": 0.5, "cites": [ 216 ], "content": "A GCN is a spectral-based GNN with mean pooling aggregation. Kipf and Welling~ presented the GCN using a localized first-order approximation of ChebNet.
It limits the layer-wise convolution filter to $K=1$ and uses the further approximation $\lambda_{\text{max}} \approx 2$, to avoid overfitting and limit the number of parameters. Thus, Equation{~\ref{eq:eq2}} can be simplified to,
\begin{equation}
g_{\theta} \star x \approx \theta_0^{'}x + \theta_1^{'} (L-I_N)x = \theta_0^{'}x - \theta_1^{'} D^{-1/2}AD^{-1/2}x .
\end{equation}
Here, $\theta_0^{'}, \theta_1^{'}$ are two unconstrained variables. 
\nA GCN further assumes that $ \theta = \theta_0^{'} = -\theta_1^{'} $, leading to the
following definition of a graph convolution:
\vspace{-2pt}
\begin{equation}
g_{\theta} \star x \approx \theta (I_N + D^{-1/2} A D^{-1/2} ) x .
\end{equation}
The definition is generalized to a signal $X \in \mathbb{R}^{N \times C}$ with $C$ input channels and $F$ filters for feature maps as follows,
\vspace{-2pt}
\begin{equation}
Z = \tilde{D}^{-1/2} \tilde{A}\tilde{D}^{-1/2} X \Theta ,
\label{eq:eq3}
\end{equation}
where $\Theta \in \mathbb{R}^{C \times F}$ is the matrix formed by the filter bank parameters, and $Z \in \mathbb{R}^{N \times F}$ is the signal matrix obtained by convolution.
From a spatial-based perspective, Equation{~\ref{eq:eq3}} is reformulated in{~} as a message passing layer which updates the node's representation $x_i^{k}$ as follows:
\vspace{-2pt}
\begin{equation}
\begin{split}
m_i^{k+1} = \sum_{j \in N(i) \cup i } \frac{x_j^k}{\sqrt{ |N(j)| |N(i)| }} , \\
x_i^{k+1} = \sigma (W^k m_i^{k+1}), 
\label{eq:eq4}
\end{split}
\end{equation}
where $m_i^k$ is the output of a message passing iteration, $|N(j)|$ and $|N(i)|$ denote the node degrees of nodes $j$ and $i$ respectively, $W^k$ denotes a layer-specific trainable weight matrix and $\sigma$ is a non-linearity function.", "id": "138dc01f-1d58-421e-9aaf-52a8356fe628", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "773fcdc6-0414-4c43-9a26-ec3ad24224ca", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Graph neural networks models" ], [ "subsubsection", "GCN" ] ], "subsections": [], "title": "GCN" }, { "cite_extract_rate": 0.5, "cites": [ 242 ], "content": "GraphSAGE is a spatial-GCN which uses a node embedding with max-pooling aggregation. 
Hamilton et al.~ offer an extension of GCNs for inductive unsupervised representation learning with trainable aggregation functions instead of simple convolutions applied to neighborhoods as in a GCN. The authors propose a batch-training algorithm for GCNs to save memory at the cost of sacrificing time efficiency.\nIn~ three aggregating functions are proposed: the element-wise mean, an LSTM, and max-pooling. \nThe mean aggregator is an approximation of the convolutional operation from the transductive GCN framework~. An LSTM is adapted to operate on an unordered set by permuting the neighbors of the node. In the pooling aggregator, each neighbor's hidden state is fed through a fully-connected layer, and then a max-pooling operation is applied to the set of the node’s neighbors.\nThese aggregator functions are denoted as,\n\\begin{equation}\nh_{\\mathcal{N}_v}^t = \\text{max} \\big\\{ \\sigma ( W_{\\text{pool}} h_u^{t-1} + b_{\\text{pool}}), \\forall u \\in \\mathcal{N}_v \\big\\} , \n\\end{equation}\nwhere $\\mathcal{N}_v$ is the neighborhood set of node $v$, $W_{\\text{pool}}$ and $b_{\\text{pool}}$ are the parameters to be learned, and $\\text{max}\\{ \\cdot \\}$ is the element-wise maximum.\nHence, following the message passing formulation in Equation{~\\ref{eq:eq4}}, the node representation is updated according to, \n\\vspace{-2pt}\n\\begin{equation}\n\\begin{split}\nm_i^{k+1} = MEAN_{j \\in N(i) \\cup i } ( x_j^k ) , \\\\\nx_i^{k+1} = \\sigma (W^k m_i^{k+1}), \n\\end{split}\n\\end{equation}", "id": "cf270f9c-dc5c-4359-a525-8a4fba3901c2", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "773fcdc6-0414-4c43-9a26-ec3ad24224ca", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Graph neural networks models" ], [ "subsubsection", "GraphSAGE" ] ], "subsections": [], "title": "GraphSAGE" }, 
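As a concrete illustration of the GraphSAGE max-pooling aggregator described above, the following NumPy sketch passes each neighbour state through a shared fully-connected layer with a sigmoid and takes the element-wise maximum over the neighbourhood. All shapes and weights here are toy assumptions, not values from any surveyed implementation.

```python
import numpy as np

def sage_maxpool_aggregate(H_neigh, W_pool, b_pool):
    """GraphSAGE pooling aggregator: pass each neighbour's hidden state
    through a shared fully-connected layer with sigmoid activation,
    then take the element-wise maximum over the neighbourhood."""
    z = 1.0 / (1.0 + np.exp(-(H_neigh @ W_pool + b_pool)))  # (k, d)
    return z.max(axis=0)                                    # (d,)

rng = np.random.default_rng(1)
H_neigh = rng.standard_normal((3, 4))  # 3 neighbours, 4 features (toy)
W_pool = rng.standard_normal((4, 4))   # shared layer weights (toy)
b_pool = np.zeros(4)                   # shared layer bias (toy)
agg = sage_maxpool_aggregate(H_neigh, W_pool, b_pool)
print(agg.shape)  # (4,)
```

Because the maximum is taken element-wise over an unordered set, the aggregate is invariant to neighbour ordering, which is the property the pooling aggregator is designed to provide.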
{ "cite_extract_rate": 1, "cites": [ 180, 38 ], "content": "Inspired by the self-attention mechanism{~}, graph attention networks (GAT){~} incorporate the attention mechanism into the propagation steps by modifying the convolution operation. 
GAT is a spatial-GCN model that incorporates masked self-attention layers into graph convolutions and uses a neural network architecture to learn neighbor-specific weights.
Veli{\v{c}}kovi{\'c} et al.~ constructed a graph attention network by stacking graph attention layers. The shared attention mechanism, $a$, is a single-layer feed-forward neural network, parametrized by a weight vector $\vec{a} \in \mathbb{R}^{2F^{'}}$. The layer computes the attention coefficients of the node pair $(i,j)$ by,
\begin{equation}
\alpha_{i,j} = \frac{ \text{exp} (\text{LeakyReLU} ( \vec{a}^T [W\vec{h}_i \mathbin\Vert W\vec{h}_j] ) ) }
{ \sum_{k \in \mathcal{N}_i } \text{exp} (\text{LeakyReLU} ( \vec{a}^T [W\vec{h}_i \mathbin\Vert W\vec{h}_k] ) ) } ,
\end{equation}
where $\mathbin\Vert$ represents the concatenation operation. The attention layer takes as input a set of node features $h=\{\vec{h_1},\vec{h_2},...,\vec{h_N}\}, \vec{h_i} \in \mathbb{R}^F$, where $N$ is the number of nodes of the input graph and $F$ the number of features for each node, and produces a new set of node features $h^{'}=\{\vec{h_1}^{'},\vec{h_2}^{'},...,\vec{h_N}^{'}\}, \vec{h_i}^{'} \in \mathbb{R}^{F^{'}}$ as its output.
To generate higher-level features, as an initial step a shared linear transformation, parametrized by a weight matrix $W \in \mathbb{R}^{F^{'} \times F}$, is applied to every node, and subsequently a masked attention mechanism is applied to every node pair, resulting in the following scores,
\begin{equation}
e_{ij} = a ( W \vec{h_i}, W \vec{h_j} ),
\end{equation}
which indicate the importance of node $j$'s features to node $i$. 
The final output feature of each node can be obtained by applying a non-linearity, $\sigma$,
\begin{equation}
h_i^{'} = \sigma ( \sum_{j \in N_i} \alpha_{ij} Wh_j ).
\end{equation}
The layer also uses multi-head attention to stabilise the learning process. $K$ different attention heads are applied to compute mutually independent features in parallel, and then their features are concatenated.
The attention coefficients are used to update the node representation according to the following message passing formulation,
\begin{equation}
\begin{split}
m_i^{k+1} = \sum_{j \in N(i) \cup i} \alpha_{i,j}^k W^k x_j^k , \\
x_i^{k+1} = \sigma ( m_i^{k+1} ),
\end{split}
\end{equation}", "id": "0028c1f2-3513-42c7-a618-91bea4922763", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "773fcdc6-0414-4c43-9a26-ec3ad24224ca", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Graph neural networks models" ], [ "subsubsection", "GAT" ] ], "subsections": [], "title": "GAT" }, { "cite_extract_rate": 1, "cites": [ 231 ], "content": "The graph isomorphism network (GIN)~ is a spatial-GCN that aggregates neighborhood information by summing the representations of neighboring nodes. Isomorphism graph-based models are designed to handle graphs with differing numbers of nodes and edges.
The representation of node $i$ itself is then updated using an MLP,
\begin{equation}
\begin{split}
m_i^{k+1} = \sum_{j \in N(i)} x_j^k , \\
x_i^{k+1} = F((1+\epsilon) \cdot x_i^k+m_i^{k+1}), 
\end{split}
\end{equation}
where $F$ is the MLP and $\epsilon$ is either a learnable parameter or fixed. 
GIN’s aggregation and readout functions are injective, and thus are designed to achieve maximum discriminative power~.", "id": "b99bda05-8cf2-4f84-ab99-7cfa8a62def6", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "773fcdc6-0414-4c43-9a26-ec3ad24224ca", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Graph neural networks models" ], [ "subsubsection", "GIN" ] ], "subsections": [], "title": "GIN" }, { "cite_extract_rate": 0.6153846153846151, "cites": [ 6941, 6951, 231, 246, 6940, 6939, 4005, 553 ], "content": "Other GNN architectures considered for entity-graph evaluation in digital pathology by the surveyed works include:
\begin{itemize}
 \item \textit{Edge graph neural network (EGNN)}~:
 Edge features are included when leveraging the graph structure in the network. 
 \item \textit{Robust spatial filtering (RSF)}~: 
 These spatial-based models are more flexible when dealing with heterogeneous graphs, as the graph inputs can be easily incorporated into the aggregation function.
 \item \textit{Adaptive GraphSAGE}~:
 Graph networks that learn the embedding features between nodes more effectively, by using a learnable pattern to adaptively aggregate multi-level embedding features for each node.
 \item \textit{Jumping Knowledge Network (JK-Net)}:
 Xu et al.~ proposed the Jumping Knowledge (JK) approach to adaptively leverage, for each node, different neighborhood ranges to better represent features.
 \item \textit{Feature-enhanced spatial-GCN (FENet)}~:
 This model is proposed to analyse non-isomorphic graphs, distinct from isomorphic graphs which strictly share the same adjacency neighborhood matrix. The feature-enhance mechanism adaptively selects the node representation from different graph convolution layers. 
The model adopts sum-pooling to capture the full structural information of the entire graph representation.\n \\item \\textit{Multi-scale graph wavelet neural network (MS-GWNN)} ~:\n This spectral model leverages the localization property of graph wavelets to perform multi-scale analysis with a variety of scaling parameters in parallel, offering high efficiency and good interpretability for graph convolution.\n\\end{itemize}\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=1\\linewidth]{images/Fig2-2.png}\n\\caption{\nRepresentation of graph models for graph-level classification. Recreated from{~}.\n}\n\\label{fig:Fig2-2}\n\\vspace{-8pt}\n\\end{figure}\n\\vspace{-6pt}", "id": "04d76a71-f83f-4b41-bcbc-4e67b8ff20f9", "level": "subsubsection", "origin_cites_number": 13, "parent_id": "773fcdc6-0414-4c43-9a26-ec3ad24224ca", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Graph neural networks models" ], [ "subsubsection", "Other GNN architectures in histopathology" ] ], "subsections": [], "title": "Other GNN architectures in histopathology" }, { "cite_extract_rate": 0, "cites": [], "content": "Different graph pooling strategies have been developed to minimise the graph size in order to learn hierarchical features for improved graph-level classification, and reduce computational complexity.", "id": "10f6e9a7-6d01-437b-8683-eef537fbe22e", "level": "subsection", "origin_cites_number": 0, "parent_id": "71961fe0-d880-47a5-944e-136229bc22c0", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Graph pooling" ] ], "subsections": [ "6aa515ff-8dca-4383-b0d9-dc0bce9d84df" ], "title": "Graph pooling" }, { "cite_extract_rate": 1, "cites": [ 211 ], 
"content": "The most fundamental type of signal pooling on a graph is global pooling. It is also referred to as a readout layer in the literature. Similar to CNNs, mean, max, and sum functions are often utilized as basic pooling methods.\nOther approaches, instead of employing these simple aggregators, transform the vertex representation to a permutation invariant graph-level representation or embedding. In particular, Li et al.{~} proposed a \\textit{global attention pooling} system that uses a soft attention mechanism to determine which nodes are relevant to the present graph-level task and returns the pooled feature vector from all nodes.", "id": "6aa515ff-8dca-4383-b0d9-dc0bce9d84df", "level": "paragraph", "origin_cites_number": 1, "parent_id": "10f6e9a7-6d01-437b-8683-eef537fbe22e", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Graph pooling" ], [ "paragraph", "Global pooling" ] ], "subsections": [ "91034012-b7fc-4dc9-87bf-bb18f75a0e59" ], "title": "Global pooling" }, { "cite_extract_rate": 1, "cites": [ 224, 254 ], "content": "A graph pooling layer in the GCN pools information from multiple vertices to one vertex, to reduce graph size and expand the receptive field of the graph filters. Many graph classification methods use hierarchical pooling in conjunction with a final global pooling or readout layer to represent the graph as illustrated in Fig.{~\\ref{fig:Fig2-2}}\nBelow we outline the most common hierarchical pooling techniques used in digital pathology. 
\n\\begin{itemize}\n \\item \\textit{DiffPool:} Ying et al.{~} introduced the differentiable graph pooling operator (DiffPool) which uses another graph convolution layer to generate the assignment matrix for each node (\\textit{i.e.} DiffPool does not simply cluster the nodes in a graph, but learns a cluster assignment matrix).\n \\item \\textit{SAGPool} The self-attention graph pooling (SAGPool) introduced by Lee et al.{~} is a hierarchical pooling method that performs local pooling operations over node embeddings in a graph. The pooling module considers both node features and graph topology and learns to pool features via a self-attention mechanism, which can reduce computational complexity.\n\\end{itemize}\n\\vspace{-6pt}", "id": "91034012-b7fc-4dc9-87bf-bb18f75a0e59", "level": "paragraph", "origin_cites_number": 2, "parent_id": "6aa515ff-8dca-4383-b0d9-dc0bce9d84df", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Graph pooling" ], [ "paragraph", "Global pooling" ], [ "paragraph", "Hierarchical pooling" ] ], "subsections": [], "title": "Hierarchical pooling" }, { "cite_extract_rate": 0, "cites": [], "content": "Graph representations embed biological entities and their interactions, but their explainability for digital pathology is less explored. While cells and their spatial interactions are visible in great detail, identifying relevant visual features is difficult. To undertake due diligence on model outputs and improve understanding of disease mechanisms and therapies, the medical community requires interpretable models.\nThe two most popular types of interpretation methodologies are model-based and post-hoc interpretability.\nThe former constrains the model so that it can quickly deliver meaningful details about the relationships that have been discovered (such as sparsity, modularity, etc). 
Here, internal model information such as weights or structural information can be accessed and used to infer group-level patterns across training instances.\nThe latter seeks to extract information about the learnt relationships in the model. These post-hoc methods are typically used to analyze individual feature input and output pairs, limiting their explainability to the individual sample level.\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{images/Fig3-new.png}\n\\caption{\nCell-graph based representation. Nucleus detection is conducted using fully convolutional networks. Then, edge and vertex features are computed to obtain an entity-graph representation as input to a GCN for cancer classification. Recreated from~.\n}\n\\label{fig:Fig3}\n\\vspace{-8pt}\n\\end{figure*}", "id": "b0482895-606f-4df3-8a91-6704a57c5554", "level": "subsection", "origin_cites_number": 1, "parent_id": "71961fe0-d880-47a5-944e-136229bc22c0", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Graph interpretations" ] ], "subsections": [ "d61bb7b2-2341-4db9-92ea-4aabd5b1f2c4", "bad0ed20-5151-4dfe-9b0f-600b52e992a3" ], "title": "Graph interpretations" }, { "cite_extract_rate": 0.5, "cites": [ 6952 ], "content": "Graph-structured data can be both massive and noisy, and not all portions of the graph are equally important. As such, attention mechanisms can direct a network to focus on the most relevant parts of the input, suppressing uninformative features, reducing computational cost and enhancing accuracy. 
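As a toy illustration of such a soft-attention readout over node states, the following NumPy sketch scores each hidden state with a tanh-transformed projection, softmax-normalises the scores against a context vector, and returns the attention-weighted combination. The shapes, weights, and the names `H`, `W`, `b`, and `u_w` are toy assumptions for illustration only.

```python
import numpy as np

def soft_attention(H, W, b, u_w):
    """Soft attention over a set of hidden states H (n, d): score each
    state via u = tanh(H W + b) (row-vector convention), softmax the
    scores against a context vector u_w, and return the weighted sum."""
    U = np.tanh(H @ W + b)                 # transformed states: (n, d)
    scores = U @ u_w                       # similarity to context: (n,)
    alpha = np.exp(scores - scores.max())  # numerically stable softmax
    alpha /= alpha.sum()                   # attention weights sum to 1
    s = alpha @ H                          # attention-weighted combination
    return s, alpha

rng = np.random.default_rng(2)
H = rng.standard_normal((5, 3))   # 5 hidden states of size 3 (toy)
W = rng.standard_normal((3, 3))
b = np.zeros(3)
u_w = rng.standard_normal(3)      # randomly initialized context vector
s, alpha = soft_attention(H, W, b, u_w)
print(round(float(alpha.sum()), 6))  # 1.0
```

The learned weights `alpha` double as a per-node importance ranking, which is what makes attention useful as an explanation signal.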
\nA gate-based attention mechanism{~} controls, for example, the expressiveness of each feature.
Attention has also been used as an explanation technique where the attention weights highlight the nodes and edges in their relative order of importance, and can be used for discovering the underlying dependencies that have been learnt.
The activation map and gradient sensitivity of GAT models are used to interpret the salient input features at both the group and individual levels.
In a graph model with attention, selected layers of the graph are connected to an attention layer, and all attention layers are jointly trained with the network. A traditional attention mechanism that can be learned by gradient-based methods{~} can be formulated as,
\vspace{-4pt}
\begin{equation}
\begin{split}
u_t = \tanh ( W h_t + b), \\
\alpha_t = \dfrac{\exp(u_t^Tu_w)}{\sum_{j=1}^{n} \exp(u_j^Tu_w)} , \\
s_t = \sum_{t} \alpha_t h_t ,
\end{split}
\end{equation}
where $h_t$ is the output of a layer; and $W$, $u_w$ and $b$ are trainable weights and bias. The importance of each element in $h_t$ is measured by estimating the similarity between $u_t$ and the context vector $u_w$, which is randomly initialized. $\alpha_t$ is the resulting softmax-normalised attention weight. 
The scores are multiplied by the hidden states to calculate the weighted combination, $s_t$ (the attention-weighted final output).", "id": "d61bb7b2-2341-4db9-92ea-4aabd5b1f2c4", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "b0482895-606f-4df3-8a91-6704a57c5554", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Graph interpretations" ], [ "subsubsection", "Attention mechanisms" ] ], "subsections": [], "title": "Attention mechanisms" }, { "cite_extract_rate": 0.27083333333333304, "cites": [ 6943, 569, 6945, 6936, 8708, 6942, 6939, 6954, 6935, 6953, 6941, 6944, 6940 ], "content": "Several post-hoc feature attribution graph explainers have been presented in the literature including excitation backpropagation{~}, a node pruning-based explainer (GNNExplainer){~}, gradient-based explainers (GraphGrad-CAM{~} and GraphGrad-CAM++{~}), a layerwise relevance propagation explainer (GraphLRP)~, and deep graph mapper~.\n\\begin{table*}[t]\n\\caption{Summary of applications and graphs models in computational pathology.}\n\\vspace{-5pt}\n\\centering\n\\label{table:pathology}\n\\resizebox{1\\textwidth}{!}{\n\\begin{tabular}{\nl\n>{\\raggedright\\arraybackslash}p{1.7cm}\n>{\\raggedright\\arraybackslash}p{2.2cm}\nc\n>{\\raggedright\\arraybackslash}p{3cm} \n>{\\raggedright\\arraybackslash}p{10cm}}\n\\toprule\n\\textbf{Authors} &\n\\textbf{Topic} &\n\\textbf{Application} & \n\\textbf{Entity-graph} & \n\\textbf{GNN Model + Explainer} & \n\\textbf{Input; Training (Node detection/embeddings); Training (GNN model/pathology task); Datasets; Additional remarks} \\\\\n\\midrule\nJaume et al. (2021)~ & Classification & Breast cancer & CG &\nGIN + Post-hoc explainers & WSI; Supervised; Supervised; BRACS~ (5 classes); Post-hoc explainers: GNNExplainer, GraphGrad-CAM, GraphGrad-CAM++, GraphLRP. \\\\ \nJaume et al. 
(2020)~ & Classification & Breast cancer & CG &
GIN + CGExplainer & WSI; Supervised; Supervised; BRACS~ (5 classes); Customized cell-graph explainer based on GNNExplainer. \\ 
Sureka et al. (2020)~ & Classification & Breast cancer / Prostate cancer & CG &
GCN, RSF + Attention/Node occlusion & WSI, TMAs; Supervised; Supervised; Breast cancer: BACH~ (2 classes), Prostate cancer: TM~ (2 classes); Gleason grade. \\
Anand et al. (2020)~ & Classification & Breast cancer & CG &
GCN, RSF & WSI; Supervised; Supervised; BACH~ (4 classes). \newline \\
Studer et al. (2021)~ & Classification & Colorectal cancer & CG &
GCN, GraphSAGE, GAT, GIN, ENN, JK-Net & WSI; Supervised; Supervised; pT1-Gland Graph~ (2 classes); Graph-level output. Concatenation of global add, mean and max pooling. Dysplasia of intestinal glands. \\ 
Zhou et al. (2019)~ & Classification & Colorectal cancer & CG &
Adaptive GraphSAGE, JK-Net, Graph clustering & WSI; Supervised; Supervised; CRC dataset~ (3 classes); Graph-level output. Hierarchical representation of cells based on the graph clustering method from DiffPool. \\ 
Wang et al. (2020)~ & Classification & Prostate cancer & CG &
GraphSAGE, SAGPool & TMA; Self-supervised; Weakly-supervised; UZH prostate TMAs~ (2 classes); Graph-level output. Grade classification (low and high-risk). \\
\midrule
Ozen et al. (2020)~ & ROI Retrieval & Breast cancer & PG &
GCN, DiffPool & WSI; Supervised; Self-Supervised; Department of Pathology at Hacettepe University (private) (4 classes); Histopathological image retrieval (slide-level and ROI-level). \\
Lu et al. (2020)~ & Classification & Breast cancer (HER2, PR) & TG &
GIN & WSI; Supervised; Supervised; TCGA-BRCA~ (2 classes); Graph-level. Status of Human epidermal growth factor receptor 2 (HER2) and Progesterone receptor (PR). \\
Ayg{\"u}ne{\c{s}} et al. 
(2020)~ & Classification & Breast cancer & PG &\nGCN & WSI; Supervised; Weakly-supervised; Department of Pathology at Hacettepe University (private) (4 classes). ROI-level classification. \\\\\nYe et al. (2019)~ & Classification & Breast cancer & PG &\nGCN & WSI; Supervised; Supervised; BACH~ (4 classes); Graph construction based on the ROI segmentation map. \\\\\nZhao et al. (2020)~ & Classification & Colorectal cancer & PG &\nChebNet, SAGPool & WSI; Self-Supervised; Weakly-supervised; TCGA-COAD~ (2 classes); Multiple instance learning. Graph-level output. \\\\\nRaju et al. (2020)~ & Classification & Colorectal cancer & TG &\nAdaptive GraphSage + Attention & WSI; Self-Supervised; Weakly-supervised; MCO~ (4 classes); Multiple instance learning. Cluster embedding (Siamese architecture); Tumor node metastasis staging. \\\\\nDing et al. (2020)~ & Classification & Colorectal cancer & PG &\nSpatial-GCN (FENet) & WSI; Supervised; Supervised; TCGA-COAD and TCGA-READ~ (2 classes); Genetic mutational prediction. \\\\\nAdnan et al. (2020)~ & Classification & Lung cancer & PG &\nChebNet, GraphSAGE + Global attention pooling & WSI; Supervised; Supervised; TCGA-LUSC~ (2 classes), MUSK1~; Adjacency learning layer. Multiple instance learning. \\\\\nZheng et al. (2019)~ & Retrieval & Lung cancer & PG &\nGNN, DiffPool (GNN-Hash) & WSI; Supervised; Similarity (Hamming distance); ACDC-LungHP~; Hashing methods and binary encoding. Histopathological image retrieval. \\\\\nLi et al. (2018)~ & Classification & Lung cancer & PG &\nChebNet + Attention & WSI; Self-Supervised; Supervised; TCGA-LUSC~ (2 classes), NLST~ (2 classes); Survival prediction. \\\\\nWu et al. (2019)~ & Classification & Skin cancer & PG &\nGCN & WSI; Supervised; Weakly- and Semi-supervised; BCC data collected from 2 different hospitals (private) (4 classes). \\\\\nAnklin et al. 
(2021)~ & Segmentation / Classification & Prostate cancer & TG &\nGIN (SegGini) + GraphGrad-CAM & TMA, WSI; Supervised; Weakly-supervised; UZH prostate TMAs~ (4 classes), SICAPv2~ (4 classes); Gleason grade, Post-hoc interpretability. \\\\ \n\midrule\nPati et al. (2021)~ & Classification & Breast cancer & CG, TG, HR &\nGIN-PNA (HACT-Net) + GraphGrad-CAM & WSI; Supervised; Supervised; BRACS~ (7 classes), BACH~ (4 classes); Cell-to-Tissue Hierarchies. \\\\\nPati et al. (2020)~ & Classification & Breast cancer & CG, TG, HR &\nGIN (HACT-Net) & WSI; Supervised; Supervised; BRACS~ (5 classes); Cell-to-Tissue Hierarchies. \newline\\\\ \nZhang and Li (2020)~ & Classification & Breast cancer & PG, HR &\nMS-GWNN & WSI; Supervised; Supervised; BACH~ (4 classes), BreakHis~ (2 classes); Multi-scale graph feature learning (node-level and graph-level prediction). \\\\\nLevy et al. (2021)~ & Regression & Colorectal cancer / lymphoma & PG, HR &\nGAT, TDA + Graph Mapper & WSI; Supervised; Supervised; Dartmouth Hitchcock Medical Center (private): colon (9 classes), lymph (4 classes); Hierarchical representation. Tumor invasion score and staging. \\\\\n\midrule\nShi et al. (2020)~ & Classification & Cervical cancer & CCG &\nFusion CNN-GCN & RGB; Supervised; Semi-supervised; SIPaKMed~ (5 classes), Motic~ (7 classes); Population analysis of isolated cell images. \\\\\nShi et al. (2019)~ & Classification & Cervical cancer & CCG &\nFusion CNN-GCN & RGB; Supervised; Supervised; SIPaKMed~ (5 classes), Motic~ (7 classes); Population analysis of isolated cell images. \\\\\nChen et al. (2020)~ & Classification & Renal cancer & CG &\nGraphSAGE, SAGPool + Attention & Fusion: WSI+Genome; Self-Supervised; Self-Supervised; TCGA-GBMLGG, TCGA-KIRC~; Survival outcome, Integrated gradient method. 
\\\\\n\\bottomrule\n\\multicolumn{6}{p{500pt}}\n{ \nGraph representation: Cell-Graph (CG); Patch-Graph (PG); Tissue-Graph (TG); Hierarchical Representation (HR); Cluster-Centroids-Graph (CCG) \n}\n\\end{tabular}}\n\\vspace{-4pt}\n\\end{table*}\n\\vspace{-3pt}", "id": "bad0ed20-5151-4dfe-9b0f-600b52e992a3", "level": "subsubsection", "origin_cites_number": 48, "parent_id": "b0482895-606f-4df3-8a91-6704a57c5554", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Graph representation learning \\\\ in digital pathology: Background" ], [ "subsection", "Graph interpretations" ], [ "subsubsection", "Graph explainers" ] ], "subsections": [], "title": "Graph explainers" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:sec3}\nThe case studies presented in this section are organised according to the methodology adopted for the graph representation and the clinical application. The graph model, training paradigm, and datasets used in all applications are detailed in Table~\\ref{table:pathology}. \nRather than providing an exhaustive review of the literature, we present prominent highlights concerning the pre-processing, graph construction and graph models adopted, and their benefits in addressing various pathology tasks.\nWith the development of TMAs and WSI scanning techniques, as well as access to massive digital datasets of tissue images, deep learning methods for tumor localization, survival prediction and cancer recurrence prediction have made substantial progress~. \nBoth the spatial arrangement of cells of various types (macro features), and the details of specific cells (micro features) are important for detecting and characterizing cancers. 
Thus, a valuable representation of histopathology data must capture micro features and macro spatial relationships.\nGraphs are powerful representational data structures, and have attracted significant attention in analysis of histopathological images~ due to their ability to represent tissue architectures. The paradigm change from pixel-based to entity-based research has the potential to improve deep learning techniques' interpretability in digital pathology, which is relevant for diagnostics.\n\\vspace{-6pt}", "id": "ec40cece-9cde-4272-b0f4-123fed08d299", "level": "section", "origin_cites_number": 2, "parent_id": "01a5d650-4933-4e6e-b664-6a6a8f2896ea", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Applications of graph deep learning \\\\ in digital pathology" ] ], "subsections": [ "622ca37f-363b-4e07-84a5-63ab6cf1d0ef", "c4273fb9-7544-4380-8214-44083e361bfa", "636f71e7-362b-4881-bc1e-1ab1bd4357a9", "788e261b-9bd3-43b4-96be-8ae8e8bb6470" ], "title": "Applications of graph deep learning \\\\ in digital pathology" }, { "cite_extract_rate": 0, "cites": [], "content": "Most of these works follow a similar framework where a cell-graphs is introduced using cells as the entities to capture the cell micro-environment. The image is converted into a graph representation with the locations of identified cells serving as graph vertices and edges constructed depending on spatial distance. Cell-level features are extracted as the initial node embedding. 
The cell-graph is fed to a GCN to perform image-wise classification.", "id": "622ca37f-363b-4e07-84a5-63ab6cf1d0ef", "level": "subsection", "origin_cites_number": 0, "parent_id": "ec40cece-9cde-4272-b0f4-123fed08d299", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Applications of graph deep learning \\\\ in digital pathology" ], [ "subsection", "Cell-graph representation" ] ], "subsections": [ "1a4e6a83-5ba4-4bcc-9fff-4b074e4f5651", "f2db0578-688d-405d-8d0e-25ef66270619", "ffea6be3-eb6e-4828-a044-7a5435490a4e" ], "title": "Cell-graph representation" }, { "cite_extract_rate": 0.6153846153846151, "cites": [ 6944, 6942, 246, 569, 6939, 9084, 6935, 6953 ], "content": "Breast cancer is the most commonly diagnosed cancer and registers the highest number of cancer deaths among women. A majority of breast lesions are diagnosed along a spectrum of cancer classes that ranges from benign to invasive.\nCancer diagnosis and the detection of breast cancer is one of the most common applications of machine learning and computer vision within digital pathology analysis. CNNs have been used for various digital pathology tasks in breast cancer diagnosis such as nucleus segmentation and classification, and tumor detection and staging. However, these patch-wise approaches do not explicitly capture the inter-nuclear relationships and limit access to global information.\nAnand et al.~ proposed the use of GCNs to classify WSIs represented by graphs of their constituent cells. 
\nMicro-level features (nuclear morphology) were incorporated as vertex features using local image descriptors, while macro-level features (gland formation) were included as edge attributes based on a mapping of Euclidean distances between nearby nuclei.\nThe vertex features are represented by the average RGB intensity, morphological features and learned features extracted from a pre-trained CNN applied to a window around the nucleus centroid. Finally, each tissue image is classified by giving its cell-graph as an input to the GCN which is trained in a supervised manner. The authors adopted a spatial GCN known as robust spatial filtering (RSF)~, which can take heterogeneous graphs as input. This framework is depicted in Fig.~\\ref{fig:Fig3}.\nThe authors demonstrate competitive performance compared to conventional patch-based CNN approaches to classify patients into cancerous or non-cancerous groups using the Breast Cancer Histology Challenge (BACH) dataset~.\nSureka et al.~ modeled histology tissue as a graph of nuclei and employed an RSF-based GCN~ with attention mechanisms and node occlusion to highlight the relative cell contributions in the image, which fits the mental model used by pathologists. \nIn the first approach, the authors occluded nuclei clusters to assess the drop in the probability of the correct class, while also including a method based on~ to learn enhanced vertex and edge features. \nIn a second approach, an attention layer is introduced before the first pooling operation for visualization of important nuclei for the binary classification of breast cancer on the BACH dataset and Gleason grade classification on a prostate cancer~ dataset. \n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=1\\linewidth]{images/Fig4-new.png}\n\\caption{\nFor a ductal carcinoma, examples of explanations given by graph-based (Left) and pixel-based (Right) explainability algorithms. 
Recreated from~.\n}\n\\label{fig:Fig4}\n\\vspace{-10pt}\n\\end{figure}\nSeveral explainers have been applied in digital pathology, inspired by explainability techniques for CNN model predictions on images. However, pixel-level explanations fail to encode tumor macro-environment information, and result in ill-defined visual heatmaps of important locations as illustrated in Fig.~\\ref{fig:Fig4}. Thus, graph representations are relevant for both diagnostics and interpretation. Generating intuitive explanations for pathologists is critical to quantify the quality of the explanation.\nTo address this, Jaume et al.~ introduced a framework using entity-based graph analysis to provide pathologically-understandable concepts (\\textit{i.e.} to make the graph decisions understandable to pathologists).\nThe authors proposed a set of quantitative metrics based on pathologically measurable cellular properties to characterize explainability techniques in cell-graph representations for breast cancer sub-typing. \nIn~, the authors first transform the histology image into a cell-graph, and a GIN model is used to map the corresponding class level. Then, a post-hoc graph explainer generates an explanation per entity graph. Finally, the proposed metrics are used to assess explanation quality in identifying the nuclei driving the prediction (nuclei importance maps). \nFour graph explainers were considered in this analysis: \nGNNExplainer~, GraphGrad-CAM~, GraphGrad-CAM++~, and GraphLRP~.\nThe results on the Breast Carcinoma Subtyping (BRACS) dataset~ confirm that GraphGrad-CAM++ produces the best overall agreement with pathologists. 
The proposed metrics, which include domain-specific user-understandable terminology, could be useful for quantitative evaluation of graph explainability.\nJaume et al.~ focused on the analysis of cells and cellular interactions in breast cancer sub-typing classification, and introduced an instance-level post-hoc graph-pruning explainer to identify decisive cells and interactions from the input graph in the BRACS dataset~.\nTo create the cell-graph, nuclei are detected with segmentation algorithms and hand-crafted features including shape, texture and color attributes are extracted to represent each nucleus. The cell-graph topology uses the KNN algorithm and is based on the assumption that spatially close cells encode biological relationships and, as a result, should share an edge. \nThe cell-graph is processed by a GIN model, followed by a MLP to predict the cancer stages.\nJaume et al.~ designed a cell-graph explainer (CGExplainer), based on the GNNExplainer, to remove redundant and uninformative graph components, so that the resulting sub-graph captures the class-specific patterns that aid disease comprehension. \nThis module aims to learn a mask at the node-level that activates or deactivates parts of the graph. Fig.~\\ref{fig:Fig5} provides an overview of the explainer module. The proposed explainer was shown to prune a substantial percentage of nodes and edges to extract valuable information while retaining prediction accuracy (\\textit{e.g.} the explanations retain relevant tumor epithelial nuclei for cancer diagnosis).\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=0.95\\linewidth]{images/Fig5-new.png}\n\\caption{\nCell-graph explainer (CGExplainer): a customized post-hoc graph explainer based on graph pruning optimization. 
Recreated from~.\n}\n\\label{fig:Fig5}\n\\vspace{-10pt}\n\\end{figure}", "id": "1a4e6a83-5ba4-4bcc-9fff-4b074e4f5651", "level": "subsubsection", "origin_cites_number": 13, "parent_id": "622ca37f-363b-4e07-84a5-63ab6cf1d0ef", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Applications of graph deep learning \\\\ in digital pathology" ], [ "subsection", "Cell-graph representation" ], [ "subsubsection", "Breast cancer" ] ], "subsections": [], "title": "Breast cancer" }, { "cite_extract_rate": 0.6153846153846151, "cites": [ 6941, 231, 5733, 6936, 180, 242, 4005, 224 ], "content": "Colorectal cancer (CRC) grading is a critical task since it plays a key role in determining the appropriate follow-up treatment, and is also indicative of overall patient outcome. The grade of a cancer is determined, for example, by assessing the degree of glandular formation in the tumour. Nevertheless, automatic CNN-based methods for grading CRC typically use image patches which fail to include information on the micro-architecture of the entire tissue sample, and do not capture correspondence between the tissue morphology and glandular structure. \nTo model nuclear features along with their cellular interactions, Zhou et al.~ proposed a cell-graph model for grading CRC, in which each node is represented by a nucleus within the original image, and cellular interactions are captured as graph edges based on node similarity.\nA nuclear instance segmentation model is used to detect the nucleus and to extract accurate node features including nucleus shape and appearance features. Spatial features such as centroid coordinates, nuclei intensity and dissimilarity extracted from the grey level co-occurrence matrix were used as descriptors for predicting the grade of cancer. 
To reduce the number of nodes and edges based on the relative inter-node distance, an additional sampling strategy was used.\nTo conduct the graph-level classification, the authors in~ proposed the Adaptive GraphSAGE model, which is inspired by GraphSAGE~ and JK-Net~, to obtain multi-level features (\\textit{i.e.} capturing the gland structure at various scales). \nTo achieve multi-scale feature fusion, Adaptive GraphSAGE employs an attention technique which allows the network to adaptively generate an effective node representation. \nA graph clustering operation, which can be considered an extension of DiffPool~, is used to group cells according to their appearance and tissue type, and to extract more abstract features for hierarchical representation. However, since the tissue hierarchy is inaccessible via this approach, the representation does not include high-level tissue features.\nBased on the degree of gland differentiation, the graph model categorises each image as normal, low-grade, or high-grade. In comparison with a traditional CNN, the proposed model achieves better accuracy by incorporating both nuclear and graph-level features.\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=1\\linewidth]{images/Fig6-new.png}\n\\vspace{-10pt}\n\\caption{\nThe nuclei that have been detected are segmented, and a graph is constructed using the centroid of each nucleus. For each node, morphological, texture and contrastive predictive coding features are extracted, and GCNs are used as the graph representation. Recreated from~.\n}\n\\label{fig:Fig6}\n\\vspace{-8pt}\n\\end{figure*}\nDysplasia of intestinal glands is especially important in pT1 colorectal cancer, the earliest stage of invasive colorectal cancer. \nStuder et al.~ introduced the pT1 Gland graph (pT1-GG) dataset that consists of cell-graphs of healthy and dysplastic intestinal glands. 
In this work, the authors established a baseline for gland classification using labelled cell-graphs and the graph edit distance (GED), which is an error-tolerant measurement of similarity between two graphs. This technique is an improved version of the bipartite graph-matching method (BP2)~ combined with a KNN algorithm to perform classification.\nLater, the same authors investigated different graph-based architectures~ to classify healthy gland tissue and dysplastic glandular areas on the pT1-GG dataset.\nThe GNN architectures evaluated for cell-graph classification are GCN~, GraphSAGE~, GAT~, GIN~, EGNN~ and a 1-dimensional GNN~. All models are trained using three graph convolution layers where GraphSAGE and GCN are also trained with jumping knowledge (JK)~ to allow for an adaptive neighborhood range by aggregating representations across different layers. A concatenation of global sum-pooling, global mean-pooling and global max-pooling is used to get the graph-level output, followed by a MLP to classify an input graph.\nThe results demonstrated that graph-based deep learning methods outperformed classical graph-based and CNN-based methods. 
\nIt should be emphasised, however, that each node is only linked to its two spatially closest neighbors, resulting in very restricted information sharing during message passing.", "id": "f2db0578-688d-405d-8d0e-25ef66270619", "level": "subsubsection", "origin_cites_number": 13, "parent_id": "622ca37f-363b-4e07-84a5-63ab6cf1d0ef", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Applications of graph deep learning \\\\ in digital pathology" ], [ "subsection", "Cell-graph representation" ], [ "subsubsection", "Colorectal cancer" ] ], "subsections": [], "title": "Colorectal cancer" }, { "cite_extract_rate": 1, "cites": [ 6936, 134, 254 ], "content": "The commonly used Gleason score, which is based on the architectural pattern of tumor tissues and the distribution of glands, determines the aggressiveness of prostate cancer. CNNs have been used for histology image classification including Gleason score assignment, but CNNs are unable to capture the dense spatial relationships between cells and require detailed pixel level annotations for training.\nTo analyse the spatial distribution of the glands in prostate TMAs, Wang et al.~ proposed a weakly-supervised approach for grade classification and to stratify low and high-risk cases (Gleason score $<6$ is normal tissue; Gleason score $\\geq 6$ is abnormal tissue or high-risk).\nThe authors segmented the nuclei and construct a cell-graph for each image with nuclei as the nodes, and the distance between neighboring nuclei as the edges, as illustrated in Fig.~\\ref{fig:Fig6}.\nUsing prostate TMAs with only image-level labels rather than pixel-level labels, a GCN is used to identify high-risk patients via a self-supervised technique known as contrastive predictive coding (CPC)~.\nFeatures for each node are generated by extracting morphological (area, roundness) and texture features (dissimilarity, homogeneity) as well as features from CPC-based 
learning.\nA GraphSAGE convolution and a self-attention graph pooling (SAGPool)~ are applied to the graph representation to learn from the global distribution of cell nuclei, cell morphology and spatial features. The proposed method can calculate attention scores, focus on the more significant node attributes, and aggregate information at different levels.\n\\vspace{-6pt}", "id": "ffea6be3-eb6e-4828-a044-7a5435490a4e", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "622ca37f-363b-4e07-84a5-63ab6cf1d0ef", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Applications of graph deep learning \\\\ in digital pathology" ], [ "subsection", "Cell-graph representation" ], [ "subsubsection", "Prostate cancer" ] ], "subsections": [], "title": "Prostate cancer" }, { "cite_extract_rate": 0, "cites": [], "content": "The majority of the following works transform pathological images into patch-graphs, where nodes are important patches, and edges encode the intrinsic relationships between these patches.\nThese patches are sampled using methods such as color-based, cell density or attention mechanisms. Then, CNNs are used to extract features from these patches to generate a feature vector for the node embedding of the graph representation. Given the constructed graph, a graph deep learning model is used to conduct node or graph classification.\nIt is important to make the distinction between tissue-graphs, which are biologically-defined and capture relevant morphological regions; while patch-graphs connect patches of interest, where each patch can contain multiple biological entities, with each other.\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{images/Fig8-new.png}\n\\caption{\nSlide Graph Model which constructs a graph from the nuclei level to the entire WSI-level. 
The main steps are as follows: segmenting and classification of nuclei; clustering; constructing the graph and graph classification. Recreated from~.\n}\n\\label{fig:Fig8}\n\\vspace{-6pt}\n\\end{figure*}", "id": "c4273fb9-7544-4380-8214-44083e361bfa", "level": "subsection", "origin_cites_number": 1, "parent_id": "ec40cece-9cde-4272-b0f4-123fed08d299", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Applications of graph deep learning \\\\ in digital pathology" ], [ "subsection", "Patch-graphs and Tissue-graphs representations" ] ], "subsections": [ "6487bd68-30b6-4a63-b364-d724bdfbee76", "b197c302-3a1a-441b-877e-b85e29b84410", "e1d897fa-bb2b-4568-a21d-f416df3c0e79", "e24e167e-5424-490e-8c21-7da495bc83ca", "fb8c0443-99ea-4928-9bef-1419db88edfc" ], "title": "Patch-graphs and Tissue-graphs representations" }, { "cite_extract_rate": 0.384615384615384, "cites": [ 8460, 6949, 6935, 7000, 6955 ], "content": "Multi-class classification of arbitrarily sized ROIs is an important problem that serves as a necessary step in the diagnostic process for breast cancer.\nAyg{\\\"u}ne{\\c{s}} et al.~ proposed to incorporate local context through a graph-based ROI representation over a variable number of patches (nodes) and their spatial proximity relationships (edges).\nA CNN is used to extract a feature vector for each node represented by fixed-sized patches of the ROI.\nThen, to propagate information across patches and incorporate local contextual information, two consecutive GCNs are used, which also aggregate the patch representation to classify the whole ROI into a diagnostic class. 
\nThe classification is conducted in a weakly-supervised manner over the patches and ROI-level annotations, without having access to patch-level labels.\nResults on a private dataset collected from the Department of Pathology at Hacettepe University outperformed CNN-based models that incorporated majority-voting, learned-fusion and base-penultimate methods.\nSome traditional CNN-based models jointly segment a ROI of an image and classify the WSI, which enables the classifier to better predict the image class~.\nYe et al.~ captured the topological structure of a ROI image through a GCN where a graph is constructed with segmentation masks of image patches that contain high levels of semantic information.\nThe segmentation mask for each image patch is obtained using an encoder-decoder semantic segmentation framework where each pixel is classified as one of the four classes of tissue samples (normal, benign, in situ, and invasive) of the BACH~ dataset.\nThe combined segmentation masks of the image patches yield the total ROI segmentation mask. The area ratio of each lesion is calculated as the value of the unit node in each image patch. Then, a graph is constructed to capture the spatial dependencies using the features of the image patch segmentation masks. Finally, the ROI image is classified based on the features learned by the GCNs. \nOne limitation of previous works is that they construct graphs using small patches of the WSI. Lu et al.~ overcame this challenge by introducing a pipeline to construct a graph from the entire WSI using nuclei-level information, including geometry and cellular organization in tissue slides (termed the histology landscape). 
\nAfter building the graph, the authors used a GIN model to predict the status (positive or negative) of human epidermal growth factor receptor 2 (HER2) and progesterone receptor (PR), which are two valuable biomarkers for breast cancer prognosis.\nThe proposed method in~ consists of four steps as illustrated in Fig.~\\ref{fig:Fig8}. \nThis work first used Hover-Net~ to simultaneously segment and classify the individual nuclei and extract their features. Then, agglomerative clustering~ is used to group spatially neighboring nuclei into clusters, which results in reduced computational cost for downstream analysis. Using these clusters, a graph is generated by assigning the tissue clusters to nodes, and the edges of the graph encode the cellular topology of the WSI. Lastly, the graph generated from the entire WSI is used as an input to a GCN to predict HER2 or PR status at the WSI-level.\nThe performance of this method is evaluated on the hematoxylin and eosin (H\\&E) stained WSI images from the TCGA-BRCA~ dataset, which consists of 608 HER2 negative and 101 HER2 positive, and 452 PR positive and 256 PR negative samples. \nContent-based histopathological image retrieval has also been investigated for decision support in digital pathology. This system scans a pre-existing WSI database for regions that the pathologist is interested in and returns related regions to the pathologists for comparison. These methods can provide valuable information including diagnosis reports from experts for similar regions. Retrieval methods can also be used for classification by considering the most likely diagnosis~. However, the amount of manually labelled training data limits their power. \nOzen et al.~ suggested a generic method that combines GNNs with a self-supervised training method that employs a contrastive loss function without requiring labeled data. \nIn this framework, fixed-size patches and their spatial proximity relations are represented by undirected graphs. 
\nThe simple framework for contrastive learning of visual representations (SimCLR)~ is adopted for learning representations of ROIs.\nUsing the contrastive loss, the GNN encoder and MLP projection head are trained to maximise the agreement between the representations. A GCN followed by a DiffPool operation is selected as the model configuration. \nFor content-based retrieval tasks, this GNN is trained in a self-supervised setting and is used to extract ROI representations, where the Euclidean distance between the extracted representations is used to determine how similar two ROIs are. \nQuantitative results demonstrated that contrastive learning can improve the quality of learned representations and, despite not utilizing class labels, could outperform supervised classification methods.\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=1\\linewidth]{images/Fig9-new.png}\n\\caption{\nRepresentation of a GCN-based MIL method. Once the bag of patches is extracted, instance-level feature extraction and selection is conducted, followed by a bag-level classification. Recreated from~.\n}\n\\label{fig:Fig9}\n\\vspace{-3pt}\n\\end{figure*}\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{images/Fig10-new.png}\n\\vspace{-2pt}\n\\caption{\nThe proposed FENet architecture. For each WSI, patches are randomly selected. Each patch corresponds to a node in each non-isomorphic subgraph, where a CNN is used to extract node attributes. A feature-enhanced mechanism is adopted to consider all topological structural information. An ensemble approach uses majority voting to aggregate all subgraphs' prediction outcomes. 
Recreated from~.\n}\n\\label{fig:Fig10}\n\\vspace{-6pt}\n\\end{figure*}", "id": "6487bd68-30b6-4a63-b364-d724bdfbee76", "level": "subsubsection", "origin_cites_number": 13, "parent_id": "c4273fb9-7544-4380-8214-44083e361bfa", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Applications of graph deep learning \\\\ in digital pathology" ], [ "subsection", "Patch-graphs and Tissue-graphs representations" ], [ "subsubsection", "Breast cancer" ] ], "subsections": [], "title": "Breast cancer" }, { "cite_extract_rate": 0.5, "cites": [ 6941, 254, 6950, 6937, 6956, 6957, 5680, 8313 ], "content": "Although CNN-based approaches have practical merits when identifying important patches for predicting CRC, they do not take into account the spatial relationships between patches, which is important for determining the stage of the tumor. The size and the relative location of the tumor in relation to other tissue partitions are used for tumor node metastasis staging estimation. Furthermore, traditional approaches require the presence of expert pathologists to annotate each WSI.\nWeakly-supervised learning is an important and potentially viable solution to dealing with sparse annotations in medical imagery. Multiple instance learning (MIL) is well-suited to histology slide classification, as it is designed to operate on weakly-labeled data~.\nRaju et al.~ considered the spatial relationship between tumor and other tissue partitions with a graph attention multi-instance learning framework to predict colorectal tumor node metastasis staging. Each graph with nodes representing different tissues serves as an instance, and the multiple instances for a WSI form a bag that aids in tumour stage prediction.\nIn~, given a WSI, a texture autoencoder~ is used to encode the texture from random sample patches. 
Then, a cluster embedding network based on a Siamese architecture~ is trained on a binary classification task to group similar texture features into multiple graphs.\nEach WSI is divided into multiple graphs and each graph has features from all cluster labels.\nThe authors used a tissue-wise annotated CRC dataset~ to assign cluster labels for similar image patches.\nThe authors consider the multiple graphs as multiple instances in a bag which are used to predict the tumor staging using an attention MIL method~. The authors adopted an Adaptive GraphSAGE~ approach with learnable attention weights to assign more importance to instances which contain more information towards predicting the tumor stage.\nThe authors demonstrated that graph attention multi-instance learning can perform better than a GCN on the Molecular and Cellular Oncology (MCO)~ dataset.\nColorectal cancer lymph node metastasis (LNM) is a crucial factor in patient management and prognosis, and its identification suggests the need for dissection to avoid further spread.\nZhao et al.~ introduced a GCN-based multiple instance learning method combined with a feature selection strategy to predict LNM in the colon adenocarcinoma (COAD) cohort of the Cancer Genome Atlas (TCGA) project~.\nFollowing the MIL approach, the training dataset is composed of bags where each bag contains a set of instances. The goal of this work is to teach a model to predict the bag label, where only the bag-level label is available.\nThe overall framework has three major components: instance-level feature extraction, instance-level feature selection, and bag-level classification, as illustrated in Fig.~\\ref{fig:Fig9}. \nFirst, non-overlapping patches are extracted from a WSI which is represented as a bag of patches. 
Since instance labels are unavailable, the authors introduced a combination of a variational autoencoder (VAE)~ and a generative adversarial network (GAN) for fine-tuning the encoder component as an instance-level feature extractor in a self-supervised manner. In this VAE-GAN model, the decoder of the VAE and the generator of the GAN share the same network.\nThen, a feature selection component is incorporated to remove redundant and unhelpful features to alleviate the workload when generating the bag representation. The maximum mean discrepancy is used to evaluate the feature importance. Finally, the authors employed ChebNet~ followed by SAGPool~ to generate the bag representation and perform the bag-level classification.\nThe authors demonstrated that the proposed model outperformed CNN-based and attention-based MIL models.\nColon adenoma and carcinoma may occur as a result of a series of histopathological changes due to key genetic alterations. Thus, the ability to predict genetic mutations is important for the diagnosis of colon cancer.\nDing et al.~ proposed a feature-enhanced graph network (FENet) using spatial GCNs based on GIN to predict gene mutations across all three key mutational prediction tasks (APC, KRAS, and TP53) that are associated with colon cancer evolution. In this approach, multiple spatial graphs are created using randomly selected image patches from each patient's WSI. \nThe feature-enhanced mechanism aggregates features from neighboring patches and combines them into the central node representation to increase feature learning performance. \nThe authors introduced GlobalAddPooling as a READOUT function to convert the node representations into a graph representation. The prediction outcome for each sub-graph is classified by fully-connected layers.\nFinally, an ensemble strategy combines the prediction results of all sub-graphs to predict mutated and non-mutated classes. 
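The GIN-style, feature-enhanced neighborhood aggregation described above can be sketched in a few lines. This is an illustration only, not the authors' FENet implementation: the toy patch-graph, feature vectors, and the fixed `eps` scalar are assumptions, and the learned MLP that normally follows the aggregation is omitted.

```python
# Sketch of a GIN-style feature-enhanced update (illustrative, not FENet's
# actual code): each node combines its own feature vector, scaled by
# (1 + eps), with the sum of its neighbours' feature vectors.
def gin_update(features, adjacency, eps=0.0):
    """One aggregation step: h'_v = (1 + eps) * h_v + sum_{u in N(v)} h_u.

    features:  dict node -> feature vector (list of floats)
    adjacency: dict node -> list of neighbour node ids
    """
    updated = {}
    for v, h_v in features.items():
        h_new = [(1.0 + eps) * x for x in h_v]
        for u in adjacency.get(v, []):
            h_new = [a + b for a, b in zip(h_new, features[u])]
        updated[v] = h_new
    return updated

# Toy patch-graph: three patches with 2-d node attributes (made up).
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
adj = {0: [1], 1: [0, 2], 2: [1]}
print(gin_update(feats, adj))  # node 1 aggregates both of its neighbours
```

In the pipeline described above, such updated representations would then pass through learned layers, a GlobalAddPooling readout, and finally a majority vote over the sub-graph predictions.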
\nFig.~\\ref{fig:Fig10} illustrates the proposed FENet networks.\nThe authors demonstrated that the integration of multiple sub-graph outcomes in the proposed model leads to a significant improvement in prediction performance on the Cancer Genome Atlas Colon Adenocarcinoma dataset~, outperforming graph-based baseline models such as ChebNet, GraphSAGE and GAT.\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{images/Fig12-new.png}\n\\caption{\nRepresentation of the retrieval framework. Patch-graphs are constructed based on spatial relationships and feature distances between patches, and are fed into the developed GNN-Hash model for graph encoding. \nWhen retrieving, the query region is converted into a patch-graph and a binary code for similarity comparison with samples in the database. Recreated from~.\n}\n\\label{fig:Fig12}\n\\vspace{-6pt}\n\\end{figure*}", "id": "b197c302-3a1a-441b-877e-b85e29b84410", "level": "subsubsection", "origin_cites_number": 16, "parent_id": "c4273fb9-7544-4380-8214-44083e361bfa", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Applications of graph deep learning \\\\ in digital pathology" ], [ "subsection", "Patch-graphs and Tissue-graphs representations" ], [ "subsubsection", "Colorectal cancer" ] ], "subsections": [], "title": "Colorectal cancer" }, { "cite_extract_rate": 0.45454545454545403, "cites": [ 6943, 6948, 6945, 224, 211 ], "content": "Lung adenocarcinoma and lung squamous cell carcinoma are the most common subtypes of lung cancer, and distinguishing between them requires a visual examination by an experienced pathologist.\nEfficient mining of survival-related structural features on a WSI is a promising way to improve survival analysis. 
\nLi et al.~ introduced a GCN-based survival prediction model that integrated local patch features with global topological structures (patch-graph) through spectral graph convolution operators (ChebNet) using the TCGA-LUSC~ and NLST~ datasets.\nThe model utilized a survival-specific graph trained under supervision using survival labels.\nA parallel graph attention mechanism is used to learn attention node features to improve model robustness by reducing the randomness of patch sampling (\\textit{i.e.} an adaptive patch selection by learning the importance of individual patches). \nThis attention network is trained jointly with the prediction network. The authors demonstrated that topological features fine-tuned with survival-specific labels outperformed CNN-based models. \nAdnan et al.~ explored the application of GNNs for MIL. The authors sampled important patches from a WSI and model them as a fully-connected graph where the graph is converted to a vector representation for classification. Each instance is treated as a node of the graph in order to learn end-to-end relationships between nodes.\nIn this approach, a DenseNet is used to extract features from all important patches sampled from a segmented tissue using color thresholds~. \nThen, an adjacency learning layer which uses global information about the patches is adopted to define the connections within nodes in an end-to-end manner. The adjacency matrix is calculated by an adjacency learning block using a series of dense layers and cross-correlation.\nThe constructed graph is passed through two types of graph models (ChebNet and GraphSAGE), followed by a graph pooling layer to get a single feature vector to compare the discrimination of sub-types of lung cancer on the TCGA{~} and MUSK1{~} datasets. 
\nWith the adopted global attention pooling{~} which uses a soft attention mechanism, it is possible to visualise the importance that the network places on each patch when making the prediction.\nThe pooled representation is fed to two fully connected dense layers to achieve the final classification between lung adenocarcinoma and lung squamous cell carcinoma. The proposed model outperformed CNN-based models that use attention-MIL.\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{images/Fig13-new.png}\n\\caption{\nRepresentation of the proposed SegGini methodology.\na) Tissue graph construction with tissue superpixels as nodes, and edges computed using a region adjacency graph from the spatial connectivity of superpixels. A GNN is used to learn discriminative\nnode embeddings to perform semantic segmentation.\nb) Graph-head: graph classification and feature atribution based on GraphGrad-CAM.\nc) Node-head: node classification.\nRecreated from~.\n}\n\\label{fig:Fig13}\n\\vspace{-6pt}\n\\end{figure*}\nAs discussed previously, content-based image retrieval seeks to find images that have morphological characteristics that are most similar to a query image.\nBinary encoding and hashing techniques have been successfully adopted to speed up the retrieval process in order to satisfy efficiency requirements~. However, WSI are commonly divided into small patches to index WSIs for region-level retrieval. This process does not consider the contextual information from a broad region surrounding the nuclei and the adjacency relationships that exist for different types of biopsy.\nZheng et al.~ proposed a retrieval framework for a large-scale WSI database based on GNNs and hashing, which is illustrated in Fig.~\\ref{fig:Fig12}.\nPatch-graphs are first built in an offline stage based on patch spatial adjacency, and feature similarity extracted with a pre-trained CNN. 
Then, the patch-graphs are processed by a GNN-Hash model designed to encode graphs, and stored in the retrieval database. The GNN-Hash structure was created by stacking GNN modules and a DiffPool module~. The final graph embedding layer of the hierarchical GNN-Hash is modified with a binary encoding layer. Finally, the queried region is converted to a binary code, and the most relevant regions are retrieved and returned to the pathologist. The similarities between the query code and those in the database are measured using Hamming distance.
Experiments to estimate the adjacency relationships between local regions in WSIs and the similarities with query regions were conducted using the lung cancer ACDC-LungHP~ dataset.
The results demonstrated that the proposed retrieval model is scalable to different query region sizes and shapes, and returns tissue samples with similar content and structure.
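The binary encoding and Hamming-distance retrieval at the core of such frameworks can be sketched as follows; sign-thresholding is used as an illustrative binarisation, and the GNN-Hash graph encoder itself is omitted.

```python
import numpy as np

def binarise(embedding):
    """Sign-threshold a real-valued graph embedding into a binary code."""
    return (np.asarray(embedding) > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.sum(a != b))

def retrieve(query_code, database_codes, k=2):
    """Return the indices of the k database codes closest to the query."""
    dists = [hamming(query_code, c) for c in database_codes]
    return list(np.argsort(dists, kind="stable")[:k])
```

Because comparisons reduce to bit counts, retrieval over a large database of region codes remains efficient, which is what motivates hashing in this setting.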
A sample of this prior knowledge is the fact that a dense patch with predictive cancer cells is more likely to have a cluster of cancer cells, and that more patches with high cancer likelihoods increase the overall likelihood of an image being positive.
The framework consists of two modules: a GCN that propagates supervisory information over patches to learn patch-aware interpretability in the form of a probability score; and an aggregation function that connects patch-level and image-level predictions using prior knowledge. 
The proposed model makes full use of different levels of supervision, using a mix of weak supervision from image-level labels and available pixel-wise segmentation labels as a semi-supervised signal.
By incorporating prior knowledge and structure information, both image-level classification and patch-level interpretation are significantly improved.
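An aggregation function consistent with this prior can be illustrated with a hypothetical rule, the mean of the top-k patch probabilities, which rises as more patches have a high cancer likelihood; this is an assumed rule for illustration, not the authors' exact aggregation.

```python
def image_level_score(patch_probs, k=3):
    """Illustrative aggregation consistent with the stated prior: more
    patches with high cancer likelihood raise the image-level score.
    Top-k mean is a hypothetical choice, not the authors' exact rule."""
    top = sorted(patch_probs, reverse=True)[:k]
    return sum(top) / len(top)
```

Under this rule an image with several confident patch predictions scores higher than one with a single confident patch, matching the expert observation above.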
As a result, a semantic segmentation method should be able to learn from inexact, coarse, and image-level annotations without complex task-specific post-processing steps.\nTo this end, Anklin et al.~ proposed a weakly-supervised semantic segmentation method based on graphs (SegGini) that incorporates both local and global inter-tissue-region relations to perform contextualized segmentation using inexact and incomplete labels.\nThe model is evaluated on the UZH (TMAs)~ and SICAPv2 (WSI)~ prostate cancer datasets for Gleason pattern segmentation and Gleason grade classification.\nFig.~\\ref{fig:Fig13} depicts the proposed SegGini methodology.\nA tissue-graph representation for an input histology image is constructed as proposed in~, where the graph nodes depict tissue superpixels. As the rectangular patches can span multiple distinct structures, superpixels are used~. \nTo characterize the nodes, morphological and spatial features are extracted, and the graph topology is computed with a region adjacency graph (RAG)~, using the spatial connectivity of superpixels.\nGiven a tissue graph, a GIN model learns contextualized features from the tissue microenvironment and inter-tissue interactions to perform semantic segmentation, where the proposed SegGini model assigns a class label to each node.\nThe resulting node features are processed by a graph-head (image label), a node-head (node label), or both, based on the type of weak supervision.\nThe graph-head consists of a graph classification and a feature attribution technique. 
The authors employed GraphGrad-CAM to measure importance scores towards the classification of each class, where the node attribution maps determine the node labels.
Further, the authors in~ found that the node-head simplifies image segmentation into node classification, where the node labels are obtained by assigning the most prevalent class within each node.
For inexact image labels and incomplete scribbles, both heads are jointly trained to improve the individual classification tasks. 
The outcomes of the heads are used to segment Gleason patterns.
Finally, to identify image-level Gleason grades from the segmentation map, a classification approach~ is used.
SegGini outperforms prior models such as HistoSegNet~ in terms of per-class and average segmentation, as well as classification metrics. This model also provides comparable segmentation performance for both inexact and complete supervision, and can be applied to a variety of tissues, organs, and histology tasks.
\vspace{-6pt}
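The region adjacency graph construction underlying the tissue-graph topology can be sketched as follows: two superpixels are connected whenever any of their pixels are 4-neighbours in the label map. This is a minimal illustration of the RAG idea, not the exact implementation used above.

```python
import numpy as np

def region_adjacency_graph(labels):
    """Edges of a RAG from a superpixel label map: two superpixels are
    connected when any of their pixels are 4-neighbours in the image."""
    edges = set()
    h, w = labels.shape
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):   # right and down neighbours
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and labels[y, x] != labels[ny, nx]:
                    edges.add(tuple(sorted((int(labels[y, x]), int(labels[ny, nx])))))
    return edges
```

Each returned edge pairs two superpixel identifiers; node features (morphological and spatial) would be attached separately, as described above.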
A tissue-graph made up of a collection of tissue areas, on the other hand, is unable to portray the cell microenvironment.\nThus, to learn the intrinsic characteristics of cancerous tissue it is necessary to aggregate multilevel structural information, which seeks to replicate the tissue diagnostic process followed by a pathologist when analyzing images at different magnification levels.", "id": "636f71e7-362b-4881-bc1e-1ab1bd4357a9", "level": "subsection", "origin_cites_number": 0, "parent_id": "ec40cece-9cde-4272-b0f4-123fed08d299", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Applications of graph deep learning \\\\ in digital pathology" ], [ "subsection", "Hierarchical graph representation (macro and micro architectures)" ] ], "subsections": [ "3ee1aba6-cab9-40ae-8893-af6c9c7ea724", "86a3816d-947a-4ba7-9a5a-273100e318eb" ], "title": "Hierarchical graph representation (macro and micro architectures)" }, { "cite_extract_rate": 0.5, "cites": [ 6941, 6951, 231, 6958, 6936, 6940, 8251, 6935 ], "content": "Early detection of cancer can significantly reduce the mortality rate of breast cancer, where it is crucial to capture multi-scale contextual features in cancerous tissue. \nCombinations of CNNs have been used to encode multi-scale information in pathology images via multi-scale feature fusion, where scale is often associated with spatial location.\nZhang and Li~ introduced a multi-scale graph wavelet neural network (MS-GWNN) that uses graph wavelets with different scaling parameters in parallel to obtain multilevel tissue structural information in a graph topology. The graph wavelet neural network (GWNN)~ replaces the graph convolution in a spectral GCN with the wavelet transform which has an excellent localization capability.\nFor breast cancer classification, the authors first transformed pathological images into graph structures where nodes are non-overlapping patches. 
Then, node classification is performed via a GWNN at different scales in parallel (node-level prediction). After that, multi-level node representations are incorporated to perform graph-level classification.\nThe results and the visualization of the learned node embeddings demonstrated the strong capacity of the model to encode different structural information on two public datasets: BACH~ and BreakHis~. However, this approach is limited by the manual selection of the appropriate scaling parameter.\nA hierarchy defined from the cells with learned pooling layers~ does not include high-level tissue features and approaches that concatenate cell-level and tissue-level information~ cannot leverage the hierarchy between the levels of the tissue representation.\nTo address these issues, Pati et al.~ proposed a hierarchical-cell-to-tissue (HACT) representation that utilizes both nuclei and tissue distribution properties for breast cancer subtype classification.\nThe HACT representation consists of a low-level cell-graph (CG) that captures the cellular morphology and topology; a tissue-graph (TG) at a high-level that captures the properties of the tissue sections as well as their spatial distribution; and the hierarchy between the cell-graph and the tissue-graph that captures the cells' relative distribution within the tissue.\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=1\\linewidth]{images/Fig14.png}\n\\vspace{-6pt}\n\\caption{\nRepresentation of a) CG, b) TG, and c) Hierarchical-cell-to-tissue. Image adapted from~.\n}\n\\label{fig:Fig14}\n\\vspace{-12pt}\n\\end{figure}\nFig.~\\ref{fig:Fig14} illustrates samples of the CG, the TG and the hierarchical cell-to-tissue representation. \nTo construct a CG, each node represents a cell and edges encode cellular interactions, where for each nucleus hand-crafted features such as shape, texture and spatial location are extracted. 
Then, a KNN algorithm is adopted to build the initial topology, based on the assumption that nearby cells should be connected and distant cells should remain disconnected. The Euclidean distances between nuclei centroids in the image space are used to quantify cellular distances.
The TG is constructed by first identifying tissue regions (\textit{e.g.}, epithelium, stroma, lumen, necrosis) by detecting non-overlapping homogeneous superpixels of the tissue and iteratively merging neighboring superpixels that have similar colour attributes.
The TG topology is generated assuming that adjacent tissue parts should be connected, by constructing a region adjacency graph~ with the spatial centroids of the superpixels.
The HACT representation, which jointly represents the low-level (CG) and high-level (TG) relationships, is processed with a hierarchical model (HACT-Net) that employs two GIN models~. The learned cell-node embeddings are combined with the corresponding tissue-node embeddings to predict the classes.
To demonstrate the hierarchical-learning, the authors introduce the BRACS dataset to classify five breast cancer subtypes: normal, benign, atypical, ductal carcinoma in situ, and invasive. The authors also evaluate the generalizability to unseen data by splitting the data at the WSI-level (two images from the same slide never appear in different splits), in contrast to previous approaches that split at the image-level~.
The enriched multi-level HACT representation for classification outperformed CNN-based models and standalone cell-graph and tissue-graph models, confirming that for better structure-function mapping, the link between low-level and high-level information must be modelled at the local node level rather than at the graph level.
Later, Pati et al.~ exploited hierarchical modeling for interpretability in digital pathology, aiming to map the tissue structure to tissue functionality. 
The authors adopt the hierarchical entity-graph representation of a tissue which is processed via a hierarchical GNN to learn the mapping from tissue compositions to respective tissue categories.\nIn this work, Pati et al.~ improved the HACT representation and the HACT-Net model. HACT-Net is modeled using principal neighborhood aggregation (PNA)~ layers, which use a combination of aggregators to replace the sum operation in GIN and adopt degree-scalers to amplify or dampen neighboring aggregated messages based on the degree of a node.\nGraph normalization followed by batch normalization is incorporated after each PNA layer~, which aids the network in learning discriminative topological patterns when the number of nodes within a class varies dramatically.\nTo further assess the quality of the methodology, a comparison with independent pathologists is conducted. Three board-certified pathologists were recruited to annotate the BRACS test set without having access to the respective WSIs. The results indicate that the model outperforms the domain experts in the 7-class classification task.\nThe authors employed the GraphGrad-CAM to highlight the nuclei and tissue region nodes to show what the HACT-Net focuses on while classifying the tumor regions-of-interest.\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=1\\linewidth]{images/Fig16-new.png}\n\\vspace{-10pt}\n\\caption{\nA-C. Patch-level embeddings, graph representation and classification via a GCN. A refinement phase is incorporated through estimation of uncertainty.\nD-E. The Graph Mapper summarizes high-order relationships over a WSI as a graph, where meaningful histology regions are captured.\nF-G. 
Tumor invasion scores are used in the prediction model to form an interpretable staging score.\nImage adapted from~.\n}\n\\label{fig:Fig16}\n\\vspace{-8pt}\n\\end{figure}\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=0.72\\linewidth]{images/Fig17.png}\n\\caption{\nClassification framework for cervical cell images. Features are extracted with a CNN pre-trained on a cervical cell classification task. K-means clustering is performed on these CNN features. A graph of cluster centroid correlations is built based on intrinsic similarities, and is the input to a GCN model. The encoded representations are incorporated into the CNN features for classification. Image reproduced from~.\n}\n\\label{fig:Fig17}\n\\vspace{-6pt}\n\\end{figure*}", "id": "3ee1aba6-cab9-40ae-8893-af6c9c7ea724", "level": "subsubsection", "origin_cites_number": 16, "parent_id": "636f71e7-362b-4881-bc1e-1ab1bd4357a9", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Applications of graph deep learning \\\\ in digital pathology" ], [ "subsection", "Hierarchical graph representation (macro and micro architectures)" ], [ "subsubsection", "Breast cancer" ] ], "subsections": [], "title": "Breast cancer" }, { "cite_extract_rate": 0.5, "cites": [ 6959, 6954 ], "content": "Tumor staging includes both tissue and nodal stages, with higher numbers indicating a greater depth of invasion and a greater number of lymph nodes implicated in the tumor, respectively.\nLevy et al.~ introduced a framework that used varied levels of structure to learn both local and global patterns from histological images for determining the degree of tumor invasion. 
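The KNN-based cell-graph construction used in the hierarchical representations above, connecting each nucleus to its nearest neighbours by Euclidean distance between centroids, can be sketched as:

```python
import numpy as np

def knn_cell_graph(centroids, k=2):
    """Symmetrised k-nearest-neighbour graph over nuclei centroids,
    using Euclidean distance in image space (illustrating the CG
    topology construction described above)."""
    centroids = np.asarray(centroids, dtype=float)
    n = len(centroids)
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        nearest = np.argsort(d[i])[1:k + 1]   # index 0 is the node itself
        A[i, nearest] = 1
    return np.maximum(A, A.T)                 # make the graph undirected
```

In practice the adjacency is often additionally pruned by a maximum-distance threshold so that isolated nuclei remain disconnected; that refinement is omitted here.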
Fig.~\\ref{fig:Fig16} illustrates the proposed framework where the authors combined GCNs to explain the mechanisms by which tissue regions interact, and topological feature extraction methods~ to extract essential contextual information.\nPatch-level classification of colon sub-compartments was conducted via a GCN as well as a refinement of patch-level predictions, in which nodes with high uncertainty were deleted, and the remaining class labels were propagated to unlabeled patches.\nA topological data analysis (TDA) tool for graphs known as Graph Mapper~ was adopted as a post-hoc model explanation technique to elucidate the high-level topology of the WSI. \nThe mapper generates a graph in which each node represents a cluster of WSI patches and each edge represents the degree of shared patches between the clusters. This tool can offer higher level information flow descriptors in a GNN model, substantially simplifying analysis. \nWith the regions of interest (collection of patches) extracted with the mapper, the authors compute tumor invasion scores that measure the degree of overlap between the tumor and adjacent tissue region.\nFinally, cancer staging is predicted via derived invasion scores using a private colon and lymph node dataset collected from the Dartmouth Hitchcock Medical Center, where the results demonstrated the potential of topological methods in the analysis of GNN models.\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=0.72\\linewidth]{images/Fig18.png}\n\\caption{\nAn integrated framework for multi-modal fusion of histology and genomics features for survival outcome prediction. Image-based features using CNNs, and graph-based features using GCNs. 
Recreated from~.
}
\label{fig:Fig18}
\vspace{-6pt}
\end{figure*}
\vspace{-6pt}
In this subsection we introduce works that have used fusion techniques to extract and combine multiple rich visual representations of the same input data (unimodal fusion), or to integrate information from various input modalities (multi-modal fusion) to enable more accurate and robust decisions.
The former involves integrating several feature sets acquired from different networks into a single vector, which is then used for classification. This fusion occurs in two stages: feature normalization and feature selection.
The latter seeks to correlate and combine disparate heterogeneous modalities, such that the model can learn pairwise feature interactions and control the expressiveness of each modality. 
The main challenges in multi-modal data fusion are the dissimilarity of the data types being fused, and the interpretation of the results.
Cervical cancer is one of the most common causes of cancer death in women, and screening for abnormal cells from a cervical cytology slide is a common procedure for early detection of cervical cancer. 
In contrast with conventional CNNs which learn multi-level features through hierarchical deep architectures, Shi et al.{~} combined a GCN output with deep CNN features to classify images of isolated cervical cells into five and seven classes using the SIPakMeD~ and Motic (liquid-based cytology image)~ datasets, respectively. 
First, a CNN model pre-trained on a cervical cell classification task is used to extract features from each individual cervical cell image. Then, K-means clustering is computed on the extracted features from all images to construct a graph where the centre of each cluster represents a node. The constructed graph of intrinsic similarities can be used to further investigate the potential relationships between images.
Subsequently, a stacked two-layer GCN generates a relation-aware representation which is encoded into CNN features for classification, as illustrated in Fig.~\ref{fig:Fig17}. 
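A minimal sketch of the centroid-graph construction and a single GCN propagation step follows; cosine similarity is assumed as the centroid-correlation measure for illustration, and the weight matrix `W` stands in for a learned parameter.

```python
import numpy as np

def centroid_graph(features, labels, n_clusters):
    """Cluster centroids as nodes, with edge weights from the cosine
    similarity between centroids (an illustrative stand-in for the
    cluster-correlation graph described above)."""
    C = np.stack([features[labels == c].mean(axis=0) for c in range(n_clusters)])
    Cn = C / np.linalg.norm(C, axis=1, keepdims=True)
    return C, Cn @ Cn.T

def gcn_layer(A, X, W):
    """One GCN propagation step: row-normalise A, then linear map + ReLU."""
    A_hat = A / A.sum(axis=1, keepdims=True)
    return np.maximum(A_hat @ X @ W, 0.0)
```

The encoded centroid representations would then be concatenated with the per-image CNN features before the final classifier, mirroring the pipeline described above.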
\nThe authors demonstrated that the relation-aware representation generated by the GCN greatly enhances the classification performance.\nExtensive experiments to validate the performance of cervical cytology classification with a GCN were also published by the same authors in~.", "id": "2730f9b6-3f75-4b35-9f72-8ddc6d6207f5", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "788e261b-9bd3-43b4-96be-8ae8e8bb6470", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Applications of graph deep learning \\\\ in digital pathology" ], [ "subsection", "Unimodal and multi-modal feature level fusion" ], [ "subsubsection", "Unimodal fusion (Cervical cancer)" ] ], "subsections": [], "title": "Unimodal fusion (Cervical cancer)" }, { "cite_extract_rate": 0.25, "cites": [ 254 ], "content": "To predict clinical outcomes, oncologists often use both quantitative and qualitative information from genomics and histology{~}. However, current automated histology methods do not take genomic details into account. The following work exploits the complementary knowledge within morphological information and molecular information from genomics to better quantify tumors using graph-based methods.\nRenal cell carcinoma is the most common malignant tumor of the kidney, and it is a diverse category of tumor with varying histology, clinical outcomes, and therapeutic responses. \nRenal cell carcinoma subtypes can be automatically classified through Deep learning frameworks. These algorithms can also identify features that predict survival outcomes from digital histopathological images. Several authors have used GCNs for cancer histology classification, however, its application to survival outcome prediction is less explored. 
\nChen et al.~ proposed a framework for multi-modal fusion of histology and genomic features for renal cancer survival outcome prediction on the TCGA datasets (glioma and clear cell renal cell carcinoma)~, which contains paired whole slide images, genotype, and transcriptome data.\nTheir model fuses the histology image (patch features), cell-graph and genomic features into a multi-modal tensor that models interactions between the different modalities and outperforms deep learning-based feature fusion for survival outcome prediction.\nThis framework is illustrated in Fig.~\\ref{fig:Fig18}.\nThe authors first extract morphological features from image-based features using CNNs, and graph-based features using GCNs, to learn cell-to-cell interactions in WSI. Cells are represented as nodes in a graph, with cells segregated using a nuclei segmentation method and connections established using KNN. \nCPC is also adopted as a self-supervised method for cell feature extraction.\nThe authors adopted the aggregating functions of the GraphSAGE architecture. 
The hierarchical self-attention pooling strategy, SAGPool~, is adopted to encode the hierarchical structure of cell graphs.
Then, to control the expressiveness of each modality, a gating-based attention mechanism is used to perform uni-modal feature fusion.
Multi-modal interpretability was considered by adopting an integrated gradient method for visualizing feature importance via image saliency maps.
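The gating-based attention fusion can be illustrated with the following simplified sketch, in which the gate weights `W` are a hypothetical learned parameter and sigmoid gates scale each modality before concatenation; the actual framework applies per-modality gating networks and a subsequent multi-modal tensor fusion, which are omitted here.

```python
import numpy as np

def gated_fusion(h_img, h_graph, h_gene, W):
    """Gating-based attention over modalities (illustrative): a gate per
    modality is computed from the concatenated features, each modality is
    scaled by its gate, and the gated vectors are concatenated for the
    downstream survival predictor. W is a hypothetical learned weight."""
    joint = np.concatenate([h_img, h_graph, h_gene])
    gates = 1.0 / (1.0 + np.exp(-(W @ joint)))   # sigmoid, one gate per modality
    return np.concatenate([gates[0] * h_img,
                           gates[1] * h_graph,
                           gates[2] * h_gene])
```

The gates let the model suppress a weakly informative modality for a given patient, which is the expressiveness control described above.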
However, these applications are still in their nascent stages compared to existing research concerning conventional deep learning methods. There are challenges associated with the adoption of GNNs, and there are graph approaches yet to be explored in this domain that could enable a more robust and comprehensive analysis of complex biological processes, and these merit further investigation.
In this section, we discuss several future research directions that need to be addressed to unlock the full power of graph deep learning in digital pathology: 
1) Entity graph construction;
2) Embedding expert knowledge and clinical adoption of graph analytics;
3) Complexity of graph models;
4) Training paradigms; and
5) Explainability of graph models.
\vspace{-6pt}
However, in the majority of methods discussed in this survey, graph structures are designed manually.", "id": "2951ed7d-2491-48df-adf6-afcd3145da2d", "level": "subsection", "origin_cites_number": 0, "parent_id": "9ba94eb4-186e-4b6e-b41c-1ce1e1f42bab", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Discussion and open challenges" ], [ "subsection", "Entity-graph construction" ] ], "subsections": [ "9d65cfa8-0fba-44e8-a9e9-2427a48e8533", "b93c4ca3-1e4b-4713-933c-41ab751be71b" ], "title": "Entity-graph construction" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "9d65cfa8-0fba-44e8-a9e9-2427a48e8533", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "2951ed7d-2491-48df-adf6-afcd3145da2d", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Discussion and open challenges" ], [ "subsection", "Entity-graph construction" ], [ "subsubsection", "Pros and cons of current preprocessing steps for entity-graph construction" ] ], "subsections": [ "e0210c85-0a3c-4709-aa9c-2001c956e3a0", "3db899c7-70e7-4074-8047-1cf246aa7c6b", "d0e34ea0-3261-48a8-8cdb-78b306c4dd02" ], "title": "Pros and cons of current preprocessing steps for entity-graph construction" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 6943, 6945, 6936, 6942, 6939, 6941, 6944, 6940 ], "content": "Cell-graphs have been one of the most popular graph representations, where cells are the entities used to encode cell microenvironments, including morphology of cells and cellular interactions. Such cell-graph representations were proposed in~. However, modeling a WSI as a cell-graph is non-trivial due to the large number of cells and the many possibly isolated cells and weak nuclear boundaries. This representation relies heavily on cell detection or segmentation methods. 
Although some works have used representative node sampling{~} or agglomerative clustering{~} to remove redundancy in the graph and reduce computation cost, the majority of cell-graph based proposals assume that cell-cell interactions are the most salient sources of information. Cell-graphs do not exploit tissue macro-architectural structures, or the hierarchical nature of the tissue.
Another traditional technique for analysing WSIs that includes contextual information about ROIs is the patch-graph. Although patch-graph representations have been adopted in a number of studies~, not all entities are biologically defined and methods are limited by the patch definition. The resolution and optimal size of each image patch and the level of context offered are traded off against one another, and are determined by the data. For example, variations in glandular morphology and size make determining an acceptable image patch size problematic. Operating at lower magnification levels may not capture cell-level features, while higher resolutions limit the ability to capture the tissue micro-environment. Thus, an automated technique that determines these patch regions and an appropriate scaling parameter from the input data is vital.
To improve the tissue structure-function mapping, graph representations based on tissue regions have been proposed, which also address one of the limitations of cell-graphs, as important regions need not contain only cells~. Tissue-graphs represent well-defined tissue regions and are used to propagate information across neighboring nodes in a progressive manner at a gland or region level. Although superpixel-based approaches have been proposed to address patch-graph limitations, a tissue-graph alone cannot capture local cellular information.
A combination of cell-level and patch-level features was proposed to capture local and global patterns from histological images{~}.
However, this fusion approach cannot take advantage of the hierarchy between levels.
Hierarchical graph representations were proposed as an adequate tissue representation, as histological structures cannot be fully represented by cellular or tissue interactions alone. It has been shown that cell-graphs and tissue-graphs provide valuable complementary information (cellular and tissue interactions) to learn the intrinsic characteristics of cancerous tissues.
Such hierarchical analysis, capturing multivariate tissue information at multiple levels, has been addressed only by~.
Nevertheless, this approach is still dependent on the construction of a cell-centered graph, which itself is limited by cell detection accuracy and is subject to the complexity constraints of the model driven by the number of nodes.
Other works have dealt with cell detection limitations by exploiting graph wavelets with different scaling parameters{~} to obtain multi-level tissue structural information in a tissue-graph. Further, in{~} the micro- and macro-architectures of histology images were captured by combining a topological data analysis tool (cell-level) with a GCN (tissue-level).
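As a concrete illustration of the cell-graph construction discussed above, the following sketch builds a thresholded k-nearest-neighbour graph over detected nucleus centroids; `k` and `max_dist` stand in for the hand-tuned hyper-parameters the text refers to:

```python
import numpy as np

def knn_cell_graph(centroids, k=5, max_dist=50.0):
    """Build a symmetric adjacency matrix over nucleus centroids.

    Each nucleus is linked to its k nearest neighbours, and edges longer
    than `max_dist` (in pixels) are pruned -- the 'thresholded KNN'
    construction used by many cell-graph papers.
    """
    n = len(centroids)
    diff = centroids[:, None, :] - centroids[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))           # pairwise Euclidean distances
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        order = np.argsort(dist[i])
        neighbours = order[1:k + 1]               # skip self (distance zero)
        keep = neighbours[dist[i, neighbours] <= max_dist]
        adj[i, keep] = True
    return adj | adj.T                            # symmetrise

rng = np.random.default_rng(1)
pts = rng.uniform(0, 200, size=(30, 2))           # simulated nucleus centroids
A = knn_cell_graph(pts, k=4, max_dist=60.0)
print(A.sum())  # total number of directed edges after symmetrisation
```

The fixed `k` and `max_dist` are exactly the kind of per-task, pre-defined parameters criticised later in this section: a value tuned for one tissue density may over- or under-connect a cell-graph built from another organ.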
However, the performance of these methods is often compromised: due to a lack of patch labels with which to fine-tune the network, the authors usually rely on a model pre-trained on natural scene images (\textit{e.g.}, ImageNet), and thus suffer from the domain gap between natural and histopathological images. To address this limitation, a small number of works trained a feature extractor using self-supervised approaches such as CPC, VAE-GAN and autoencoders in~.
In the MEGNet{~} model, vertices are updated by an aggregation of features from adjacent edges.", "id": "d0e34ea0-3261-48a8-8cdb-78b306c4dd02", "level": "paragraph", "origin_cites_number": 4, "parent_id": "9d65cfa8-0fba-44e8-a9e9-2427a48e8533", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Discussion and open challenges" ], [ "subsection", "Entity-graph construction" ], [ "subsubsection", "Pros and cons of current preprocessing steps for entity-graph construction" ], [ "paragraph", "Graph topology" ] ], "subsections": [], "title": "Graph topology" }, { "cite_extract_rate": 0.75, "cites": [ 222, 264, 8305 ], "content": "Automated graph structure estimation aims to find a suitable graph to represent the data as input to the GNN model. \nBy modeling graph generation as a sequential process, the graph representation (nodes, edges and embeddings) can be inferred directly from data which would be especially useful when representing tissues with a variety of complex micro- and macro environments. \nHowever, the majority of methods surveyed follow a standard sequential workflow which is highly dependent on the individual performance of each preprocessing step, including tissue mask detection, nuclei detection, super-pixel detection, deep feature extraction, and graph building. \nThe use of neural networks to build generative graph models is gaining popularity to capture both their topology and their attributes, which can in turn lead to more robust algorithms and help to provide more accurate results. However, the effectiveness of such algorithms have not been investigated for histopathology images. 
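An edge-informed vertex update of the kind described above can be sketched as follows; mean aggregation of incident-edge features followed by a single linear-plus-tanh update is an illustrative choice, not MEGNet's exact formulation:

```python
import numpy as np

def node_update_with_edges(node_feats, edge_index, edge_feats, W):
    """One message-passing step in which each vertex aggregates the
    features of its incident edges (mean) before a linear update.

    edge_index: (2, E) array of (source, target) node ids.
    Shapes and mean-aggregation are illustrative assumptions.
    """
    n, d_e = node_feats.shape[0], edge_feats.shape[1]
    agg = np.zeros((n, d_e))
    count = np.zeros(n)
    for (src, dst), e in zip(edge_index.T, edge_feats):
        agg[dst] += e                              # edge message to the target node
        count[dst] += 1
    agg /= np.maximum(count, 1)[:, None]           # mean over incident edges
    return np.tanh(np.concatenate([node_feats, agg], axis=1) @ W)

rng = np.random.default_rng(2)
X = rng.standard_normal((5, 4))                    # 5 nodes, 4-dim features
edges = np.array([[0, 1, 2, 3], [1, 2, 3, 4]])     # simple chain graph
E = rng.standard_normal((4, 3))                    # 3-dim edge attributes
W = rng.standard_normal((7, 4))                    # (4 node + 3 edge) -> 4
H = node_update_with_edges(X, edges, E, W)
print(H.shape)  # (5, 4)
```

In a cell-graph, `edge_feats` could encode inter-nuclear distance or boundary contact, so the update lets edge attributes directly shape the vertex representation rather than serving only as auxiliary information.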
Therefore, several requirements must still be met to enable such a generation process.
Several works that have adopted GCNs for brain electrical activity analysis tasks{~} have demonstrated that learning the graph structure from data improves classification performance in comparison to approaches where a pre-defined graph topology is used.
In digital pathology, the predefined parameters per histology task include: a fixed threshold to differentiate non-tissue pixels; the patch size and number of patches for nuclei detection and for nuclei and tissue feature extraction; the sample ratio of representative nuclei; the thresholded KNN and distance that define topology and edges; the number of superpixels and the downsampling factor per image; and the selection of handcrafted features and of the CNN layer from which deep features are extracted. Such definitions limit the generalization of entity-graphs to different tissues, organs, and histology tasks.
Some graph generation approaches that are worthy of exploration within histopathology diagnosis are GraphGAN~, DGMG~, and GCPN~. For instance, DGMG{~} can be used to generate one node at a time from each histopathology patch and then create edges one by one, connecting each node to the existing partial graph using probabilistic dependencies among nodes.
In summary, the preceding discussion exemplifies the difficulties in estimating a graph structure with the desired properties from data. While there is emerging work in this field, it is ripe for further investigation.
In digital pathology, automated graph generation, in which a graph model infers structural content from data, and the integration of domain knowledge, are also underutilised.\n\\vspace{-6pt}", "id": "b93c4ca3-1e4b-4713-933c-41ab751be71b", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "2951ed7d-2491-48df-adf6-afcd3145da2d", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Discussion and open challenges" ], [ "subsection", "Entity-graph construction" ], [ "subsubsection", "Automated graph generation" ] ], "subsections": [], "title": "Automated graph generation" }, { "cite_extract_rate": 0.44444444444444403, "cites": [ 279, 6944, 6960, 6672 ], "content": "Incorporating domain knowledge into the model has emerged as a promising method for improving medical image analysis~. \nThe use of graph-based mappings with label representations (word embeddings) have been investigated to guide information propagation among nodes~. For example, in basal cell carcinoma classification{~}, the embedding knowledge is represented by encoding patches based on prior expert knowledge, which bridges the gap between patch-level and image-level predictions and results in better performance.\nFurther, pathologists' feedback can help to improve the graph representation in terms of how to best mirror the biological relationship between cells and tissues. Thus, graph-based analysis motivates exploring the inclusion of task-specific pathological prior knowledge in the construction of the graph representations~.\nAnother open research question is how to incorporate interdisciplinary knowledge in a principled way, rather than on a case-by-case basis.\nIntegrating electronic health records for personalized medicine can also boost the diagnostic power of digital pathology. 
The hierarchical information inherent in medical ontologies naturally lends itself to creating a rich network of medical knowledge, along with other data types such as symptoms and genomics~. Thus, by integrating patient records into the graph representation learning environment, tailored predictions can be generated for individual patients.
Among AI techniques, graph-based tissue image analysis has demonstrated performance superior or comparable to that of domain experts in breast cancer analysis{~}.
These results, combined with studies examining the effect of explanations on clinical end-user decisions{~}, are generally positive regarding the translation of this technology into diagnostic pathology.
Such translation will require the considered integration of standardised technologies into digital pathology workflows, resulting in an integrated approach to diagnosis and offering pathologists new tools that accelerate their workflow, increase diagnostic consistency, and reduce errors.
While there is considerable promise for graph analytics in digital pathology, there are some challenges ahead.
These include, for example, the ability to generalize a diagnostic technique to a large population of patients which contains outliers; and to develop problem-solving skills that demand complex interactions with other medical disciplines.
Thus, more work should be conducted to investigate how a pathologist could refine a graph model decision via a human-in-the-loop system~.
Such approaches provide an important safety mechanism for detecting and correcting algorithmic errors that may occur.
A remaining challenge here is to provide frameworks with the above functionalities with reduced complexity, to lower the barriers between the systems and clinicians and help facilitate system uptake.
Entity-graph analysis has the ability to transform pathology by providing applications that speed up workflow, improve diagnosis, and improve patient clinical outcomes.
However, there is still a gap between research studies and the effort required to deliver reliable graph analytics that incorporate expert knowledge into the system, and can be integrated into existing clinical workflows.\n\\vspace{-6pt}", "id": "73fa0ac6-b74f-4ce3-92ad-ee14a635304d", "level": "subsection", "origin_cites_number": 9, "parent_id": "9ba94eb4-186e-4b6e-b41c-1ce1e1f42bab", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Discussion and open challenges" ], [ "subsection", "Embedding expert knowledge and clinical adoption of graph analytics" ] ], "subsections": [], "title": "Embedding expert knowledge and clinical adoption of graph analytics" }, { "cite_extract_rate": 0.7368421052631571, "cites": [ 8306, 231, 242, 8313, 2618, 240, 237, 6961, 246, 180, 6962, 6941, 230, 6940 ], "content": "Graph-based approaches for histology analysis have a high representational power, and can describe topological and geometric properties of multiple types of cancers. When compared to pixel-based approaches, the graph representation can more seamlessly describe a large tissue region. However, classical graph-based models have a high computational complexity. As a result, in the suggested learning approach, the choice of GNN architecture should be handled as a hyper-parameter.\nThe most common GNNs used by methods in this survey include ChebNet~, GCN~, GraphSAGE~, GAT~, GIN~, and variants such as Adaptive GraphSAGE~, RSF~, MS-GWNN~ and FENet~.\nSpatial-GCNs such as GraphSAGE and GIN demonstrated their learning ability using max-, mean-, or sum-pooling aggregators. GIN has been particularly effective in computational pathology with a provably strong expressive power to learn fixed-size discriminative graph embeddings from cellular and tissue architectures in WSIs, which demonstrate translation and rotation invariance. 
However, it is noted that these GNN models inherit considerable complexity from their deep learning lineage, which can be burdensome when scaling and deploying GNNs. This is likely one of the reasons why patch-based approaches remain popular for many problems.
The training of GNNs remains one of the most difficult tasks due to their high memory consumption and inference latency compared to patch-based deep learning approaches.
GNNs usually require the whole graph and the intermediate states of all nodes to be saved in memory. However, the adoption of an efficient training approach is uncommon in the applications surveyed.
Various graph sampling approaches have been proposed as a way to alleviate the cost of training GNNs. Rather than training over the full graph, each iteration is run over a sampled sub-graph, whether sampled node-wise (GraphSAGE~), layer-wise (FastGCN~, $L^2$-GCN~), or by clustering (Cluster-GCN~).
Some works have proposed more efficient and simple architectures that deserve attention for their potential adoption in computational histopathology.
The simple graph convolution (SGC)~ reduces the complexity of GCNs by removing the non-linearities between GCN layers and collapsing the multiple weight matrices into a single linear transformation.
This model was adopted for emotion recognition and improved inference speed while achieving classification accuracy comparable to other networks~.
The simple scalable inception GNN (SIGN)~ is explicitly designed as a shallow architecture that combines graph convolutional filters of different sizes and allows efficient pre-computation.
The efficient graph convolution (EGC)~ method does not require trading accuracy for runtime memory or latency reductions, based on an adaptive filtering approach.
GNNs can also deliver high performance for feature matching across images~, which can be incorporated for content-based histopathological image retrieval.
It is also important to highlight that some works exploit the cell-graph representation without the complexity of GCN processing.
The tissue classification problem was posed in~ as cellular community detection, based on cell detection and classification into distinct cellular components (cell-graphs), and clustering of image patches (patch-level graphs) into biologically meaningful communities (specific tissue phenotypes).
The concept of constructing a graph and then using geodesic distance for community detection has outperformed deep neural networks and graph-based deep learning methods such as ChebNet, GCNs and deep graph infomax learning (DGI)~.
In the coming years, a key research topic will be how to effectively learn and compute GNNs in order to realise their full potential. Deep learning on graphs is inherently difficult due to graphs' complex topological structure, which can comprise many different types of entities and interactions.
\nAs such, the appropriate selection of key parameters of a model prior to representation learning is essential to capture the structural information of the histopathology slides.\n\\vspace{-6pt}", "id": "d861f337-c37b-4f04-9357-81caa768e47a", "level": "subsection", "origin_cites_number": 19, "parent_id": "9ba94eb4-186e-4b6e-b41c-1ce1e1f42bab", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Discussion and open challenges" ], [ "subsection", "Complexity of graph models" ] ], "subsections": [], "title": "Complexity of graph models" }, { "cite_extract_rate": 0, "cites": [], "content": "As stated in previous sections, training paradigms can be divided into two main categories: training a network to learn the node embeddings used in the graph representation; and the training of the GNN model.", "id": "e596faee-7b32-43f7-aa51-281221369c0c", "level": "subsection", "origin_cites_number": 0, "parent_id": "9ba94eb4-186e-4b6e-b41c-1ce1e1f42bab", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Discussion and open challenges" ], [ "subsection", "Training paradigms" ] ], "subsections": [ "41f3c05a-0835-499d-a5b8-6ce13c052c52", "bd2e9e68-3745-492a-911f-5ab75e723bbd" ], "title": "Training paradigms" }, { "cite_extract_rate": 0.28, "cites": [ 6943, 6936, 6942, 6939, 6941, 6944, 6940 ], "content": "The node embeddings are the features that are learned to represent the defined node (\\textit{e.g.} cells, nucleus, patches, super-pixels). 
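As an illustration of the complexity reduction discussed above, SGC's collapsed propagation can be written in a few lines of NumPy: after pre-computing the smoothed features $S^{K}X$, only a single linear classifier remains to be trained.

```python
import numpy as np

def sgc_features(A, X, k=2):
    """Pre-compute the SGC filter S^k X, where S is the symmetrically
    normalised adjacency with self-loops, D^{-1/2} (A + I) D^{-1/2}.

    With the non-linearities removed, all k GCN layers reduce to this
    single repeated matrix product followed by one linear classifier.
    """
    A_hat = A + np.eye(len(A))                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))
    S = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    for _ in range(k):
        X = S @ X                                 # k propagation steps
    return X  # train e.g. logistic regression on these smoothed features

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)            # 3-node path graph
X = np.eye(3)                                     # one-hot node features
Z = sgc_features(A, X, k=2)
print(Z.shape)  # (3, 3)
```

Because the expensive part is a fixed pre-computation, the per-epoch training cost is that of a linear model, which is one way to mitigate the memory and latency issues noted above when scaling to large tissue graphs.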
Some of the embedding features extracted through attribute networks require labeled datasets and need to be trained in a supervised manner as explained in~.\nHowever, one of the main challenges in deep learning is the lack of large corpora of manually labeled data for training, which often imposes a limitation on problems in the medical domain.\nThus, self-supervised methods are gaining interest to improve the quality of learned node embeddings~ by learning embedding features directly from histopathology images, rather than relying on extracting features using transfer learning, which is discussed in Subsection~\\ref{subsec:sec4a}.", "id": "41f3c05a-0835-499d-a5b8-6ce13c052c52", "level": "subsubsection", "origin_cites_number": 25, "parent_id": "e596faee-7b32-43f7-aa51-281221369c0c", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Discussion and open challenges" ], [ "subsection", "Training paradigms" ], [ "subsubsection", "Node embeddings" ] ], "subsections": [], "title": "Node embeddings" }, { "cite_extract_rate": 0.41935483870967705, "cites": [ 6943, 6965, 6945, 7000, 6936, 6942, 6939, 6963, 6941, 6944, 6940, 6937, 6964 ], "content": "Training a GCN for node or graph level classification can be performed in supervised, semi-supervised or even in a self-supervised manner. \nIf sufficient labels are available for nodes or graph data, the common practice is a supervised training approach, such as the methods of~. \nThough supervised methods can achieve high performance, they can place limitations on model complexity and can suffer when annotations are inconsistent or imprecise. \nIn the absence of sufficient labeled data, weakly-supervised or semi-supervised frameworks are proposed to better capture the structure of histopathology data and reduce the human annotation workload. 
Although the issue of missing labels is not specific to the graph domain, only a few works have adopted such frameworks (pixel- or patch-level labels).
In semi- or weakly-supervised approaches, the node embeddings are learnt from a few labeled samples per class~.
For example, in a weakly supervised learning approach, the contributions of the individual patches to the ROI-level diagnosis are not known during training{~}.
In addition to the above, extensive research in deep learning over the past years~ has shown that a decision classifier based on Multiple Instance Learning (MIL) can boost performance in classifying cancer by aggregating instance-level predictions.
MIL only requires labels for the bag of instances rather than individual instances, which makes it well-suited to histology slide classification. One example is CLAM (Clustering-constrained attention multiple instance learning{~}). Even though these approaches have practical merits and can identify the patches most important for predicting the stage, they do not consider the spatial relationships between patches.
Current multiple instance learning approaches using deep graphs~ follow this line of research.
They can seamlessly scale to arbitrary tissue dimensions by incorporating an arbitrary number of entities and interactions, thus offering an alternative to traditional MIL{~}.
MIL methods can be combined with a GCN to take advantage of the structural information among instances{~}. For example, the SegGini model{~} outperforms several traditional state-of-the-art methods, such as CLAM{~} and the Context-Aware CNN (CACNN){~}, for weakly-supervised classification of prostate cancer.
Self-supervised methods have also been successfully deployed as a training paradigm for GCNs.
For example, Ozen et al.~ adopted a SimCLR framework~ along with a contrastive loss to learn a representation of ROIs and perform classification.
Although the aforementioned training paradigms demonstrate remarkable performance, few works~ have considered end-to-end training and the challenges it brings, such as dealing with the complexity of constructing a graph and the need for labeled data; this requires investigation in future work.
Training paradigms are dependent on the availability of manually labeled data. In medical imaging, and specifically histopathology, obtaining a large set of labeled data is a tedious process, and so weakly- and self-supervised algorithms are receiving increasing interest for learning node embeddings and performing graph classification. It is expected that future research will analyse histopathology data with GCNs at a large scale in a weakly- or self-supervised manner.
\vspace{-8pt}
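The attention-based MIL pooling referred to above can be sketched as follows. This is a minimal form in the spirit of attention-MIL aggregators such as CLAM; the gated sigmoid branch used by some variants is omitted, and the weight names are assumptions:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(H, V, w):
    """Aggregate a bag of instance embeddings H (n_instances x d) into
    one bag-level embedding using learned attention weights:

        a_i = softmax_i( w^T tanh(V h_i) )

    Only the bag label is needed for training; the attention weights a_i
    indicate which instances (patches or nodes) drove the prediction.
    """
    scores = np.tanh(H @ V.T) @ w          # one scalar score per instance
    a = softmax(scores)                    # normalised attention over instances
    return a @ H, a                        # bag embedding and attention weights

rng = np.random.default_rng(3)
H = rng.standard_normal((12, 16))          # 12 patch embeddings, 16-dim each
V = rng.standard_normal((8, 16))
w = rng.standard_normal(8)
z, a = attention_mil_pool(H, V, w)
print(z.shape)  # (16,)
```

Note that the pooling treats the bag as an unordered set: `a` depends only on each instance's own features, which is precisely the lack of spatial (inter-patch) modelling that graph-based MIL variants address.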
Understanding model behaviour beyond traditional performance indicators has thus become an important part of machine learning research, particularly in healthcare{~}.
Explainability in deep models has focused on providing input-dependent explanations and understanding model behaviour from different perspectives, including visual explanations and highlighting salient regions. We can examine the sensitivity between the input features and the predictions, for example, by looking at the gradients or weights. We can also highlight important features or regions of an image by incorporating attention mechanisms{~}.
Nevertheless, compared with traditional image domains, explainability and visualization of deep learning for graphs are less explored{~}, yet explainability is critical to highlight informative structural compositions of tissue and inter-nuclear relationships, as is desired for computational histopathology.
While interpretability approaches are generally lacking within most graph network methods, it is worth noting that a few methods exist and incorporate such explanations in digital pathology, as illustrated in Table~\ref{table:pathology}:
i) In~ a GCN propagated supervisory information over patches to learn patch-aware interpretability in the form of a probability score.
ii) A robust spatial filtering with an attention-based architecture and node occlusion was used to capture the contribution of each nucleus and its neighborhood to the prediction~.
iii) The Graph Mapper, a topological data analysis tool, was adopted to compress histological information to its essential structures, where meaningful histology regions are captured~.
iv) In~, an integrated gradient method was used to visualise image saliency feature importance.
v) A graph clustering visualization was used in~ to group cells with similar tissue structures.
vi) A post-hoc graph-pruning explainer, GCExplainer, was designed to identify decisive cells and interactions from the input graph~.
\nvii) The gradient-based saliency method, GraphGrad-CAM, was adopted in~ and~ to measure importance scores and regions that contributed towards the classification of each class.\nThe majority of approaches that have incorporated explainers are limited to cell-graph analysis. Considering the pathologically aligned multi-level hierarchical tissue attributes{~}, the interpretability can reveal crucial entities such as nuclei, tissue parts and interactions which can mimic the pathologist's assessment and therefore, increase the level of trust between experts and AI frameworks.\nExisting works however lack the definition of objectives to validate a model in terms of effective explainability, and only a single work has looked at the quality and utility of the proposed explanation methods for the intended audience (\\textit{i.e.} clinicians). In~, the authors evaluated several graph explainers (GNNExplainer, GraphGrad-CAM, GraphGrad-CAM++, GraphLRP) to provide domain-understandable quantitative metrics based on pathologically measurable cellular properties, to make graph decisions understandable to pathologists. The authors found that at the concept-level, GraphGrad-CAM++ has the highest overall agreement with the pathologists, followed by GraphGrad-CAM and GNNExplainer.\nOther methods not investigated in this survey that focus on instance-level interpretation of deep graph models that deserve attention in digital pathology for explainability at the node, edge, or node feature levels are: excitation BP~, PGM-explainer~, GraphMask~, Graphlime~, and Relex~.\nOther methods such as SubgraphX~ provide subgraph-level explanations which may be more intuitive and human-intelligible for digital pathology.\nKnowing the subset of features from which the model outcome is derived is critical. This allows clinicians to compare model decisions with clinical judgement, which is especially useful when there is a discrepancy. 
It is also worth noting that clinicians expect variation in the importance of inputs to exist both across patients and populations~. \nHowever, the explanations provided by methods discussed in this survey using gradient-based (GraphGrad-CAM) and perturbation-based methods (GNNExplainer) are limited to single instances. To verify and understand a deep model, pathologists need to check explanations for all input graphs, which is time-consuming and impractical. \nModels that interpret each instance independently, as previously stated, are insufficient to provide a global understanding of the trained model~. Thus, methods to provide GNN predictions on a group of instances collectively (\\textit{i.e.} a population) and provide a global understanding of GNN predictions is less explored in the literature.\nInstance-level methods explain GNNs with respect to each input graph, whereas model-level methods explain GNNs without regard for any specific input example. The latter specifically investigates what input graph patterns can lead to a specific GNN behaviour, such as maximising a target prediction. However, no research on interpreting GNNs at the model-level exists in digital pathology.\nXGNN~ provides model-level explanations by training a graph generator to build graph patterns that optimize a specific model prediction. The authors formulated graph creation as a reinforcement learning problem, with the graph generator predicting how to add an edge to a given graph and build a new graph at each step. The generator is then trained using a policy gradient based on feedback from the trained graph models. Several graph rules are also used to ensure that the explanations are both valid and human-readable. 
\nPGExplainer~ can also provide an explanation for each instance with a global view of the GNN model by incorporating a generative probabilistic model.\nNonetheless, it is unknown whether XGNN and PGExplainer can be used to perform node classification tasks for histopathology analysis, which is an important area for future research.\nGiven the trend of graph-based processing for a variety of applications in computational pathology, graph explainability and quantitative evaluation with a focus on clinician usability are critical. Interpretability is essential because it can aid, for example, in informed decision-making during cancer diagnosis and treatment planning. However, interpretability of GNNs within digital pathology has received insufficient attention to date.\n\\vspace{-6pt}", "id": "197b05bf-f415-41b3-a1a4-e5f425458a5c", "level": "subsection", "origin_cites_number": 21, "parent_id": "9ba94eb4-186e-4b6e-b41c-1ce1e1f42bab", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Discussion and open challenges" ], [ "subsection", "Explainability of graph models" ] ], "subsections": [], "title": "Explainability of graph models" }, { "cite_extract_rate": 0, "cites": [], "content": "Through the use of whole-slide images (WSIs) and tissue microarrays (TMAs), digital pathology has transformed pathology diagnosis. The growing use of this data has also given rise to a new field of study known as computational pathology, which aims to develop machine learning techniques to provide more objective and reproducible results. \nDeep learning, in particular Convolutional Neural Networks (CNNs), have demonstrated efficacy in visual representation learning in digital pathology. To obtain image-level representations, mainstream CNN architectures typically aggregate feature representations over fixed-sized patches of the WSI. 
However, the patch-wise and pixel-based processing used by CNNs lacks the ability to capture global contextual information relating to meaningful entities such as cells, glands, and tissue types.\nAs demonstrated throughout this review, histopathology knowledge graphs enable the capture of more comprehensive and interpretable information relating to the underlying mechanisms of a disease.\nSeveral works have attempted to adopt graph-based deep learning models to learn both local and global patterns. Entity-based analysis has the potential to improve the interpretability of deep learning techniques by identifying decisive nuclei, tissue regions and interactions. This can also potentially replicate holistic and context-aware parts of a pathologist's assessment.\nOur survey has provided a detailed overview of a new, rapidly growing field of representation learning for computational histopathology. The enriched graph representation and learning in digital pathology has resulted in superior performance for diverse types of cancer analysis. Nevertheless, we highlight open research directions concerning the adoption of graph-based deep learning, including the explainability of graph representation learning, methods of graph construction, and the complexity of graph models and their limited training efficiency. 
\n\\vspace{-6pt}", "id": "b4c39f0e-0871-4d3e-90ce-279f08670306", "level": "section", "origin_cites_number": 0, "parent_id": "01a5d650-4933-4e6e-b664-6a6a8f2896ea", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" }, { "cite_extract_rate": 0, "cites": [], "content": "The authors report no conflicts of interest.\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n{\\small\n \\bibliographystyle{IEEEtran}\n \\bibliography{IEEEabrv,ref}\n}\n\\end{document}", "id": "7ace2f45-c6f4-4f16-b76a-f57923b4e6a9", "level": "section", "origin_cites_number": 0, "parent_id": "01a5d650-4933-4e6e-b664-6a6a8f2896ea", "prefix_titles": [ [ "title", "A Survey on Graph-Based Deep Learning \\\\ for Computational Histopathology" ], [ "section", "Conflict of interest statement" ] ], "subsections": [], "title": "Conflict of interest statement" } ]
1
[ 8845, 6936, 262, 553, 885, 6938, 6935, 8305, 550, 6937, 6940, 6939, 6943, 6945, 6942, 6941, 6944, 8460, 6948, 6946, 6949, 6947, 825, 97, 134, 5680, 6950, 242, 8313, 213, 216, 180, 38, 231, 6951, 246, 4005, 211, 224, 254, 6952, 569, 8708, 6954, 6953, 9084, 5733, 7000, 6955, 6956, 6957, 6958, 8251, 6959, 9085, 222, 264, 279, 6960, 6672, 8306, 2618, 240, 237, 6961, 6962, 230, 6965, 6963, 6964, 6973, 6966, 6971, 6967, 6969, 6972, 6970, 6968 ]
1.185043
[ "Jakob Gawlikowski", "Cedrique Rovile Njieutcheu Tassi", "Mohsin Ali", "Jongseok Lee", "Matthias Humt", "Jianxiang Feng", "Anna Kruspe", "Rudolph Triebel", "Peter Jung", "Ribana Roscher", "Muhammad Shahzad", "Wen Yang", "Richard Bamler", "Xiao Xiang Zhu" ]
A Survey of Uncertainty in Deep Neural Networks
2021
2021-07-07T16:39:28Z
cs.LG
Over the last decade, neural networks have reached almost every field of science and have become a crucial part of various real-world applications. Due to this increasing spread, confidence in neural network predictions has become more and more important. However, basic neural networks do not deliver certainty estimates or suffer from over- or under-confidence, i.e. are badly calibrated. To overcome this, many researchers have been working on understanding and quantifying uncertainty in a neural network's prediction. As a result, different types and sources of uncertainty have been identified and a variety of approaches to measure and quantify uncertainty in neural networks have been proposed. This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities. It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction, without presupposing prior knowledge in this field. For that, a comprehensive introduction to the most crucial sources of uncertainty is given and their separation into reducible model uncertainty and irreducible data uncertainty is presented. The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks, ensembles of neural networks, and test-time data augmentation approaches is introduced and different branches of these fields as well as the latest developments are discussed. For a practical application, we discuss different measures of uncertainty, approaches for the calibration of neural networks and give an overview of existing baselines and available implementations. Different examples from the wide spectrum of challenges in the fields of medical image analysis, robotics, and earth observation give an idea of the needs and challenges regarding uncertainties in practical applications of neural networks. 
Additionally, the practical limitations of uncertainty quantification methods in neural networks for mission- and safety-critical real world applications are discussed and an outlook on the next steps towards a broader usage of such methods is given.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "039eb0dd-a257-4d85-a714-ad5fc65c6f0b", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ] ], "subsections": [ "18ea1659-9afd-4cf0-b7ab-6d8859dc5af6", "695bfab1-de93-493c-976a-2aa81c0f0d91", "33e781ef-a127-43ed-8702-88e09976b718", "07c7f2da-ef2c-475e-823a-a86caf9c3b67", "71582559-f2d3-45a1-a741-a40d9e53b709", "2dbc07cb-8a4d-4558-872e-32e013a2582a", "56922a8f-84a8-4d62-aa6d-86c2e59e8e40", "4fde619f-e57d-42f5-974b-6ca984c28e03" ], "title": "root" }, { "cite_extract_rate": 0.706896551724137, "cites": [ 4601, 4598, 4606, 4612, 4617, 8806, 7755, 4596, 4590, 1828, 3289, 2441, 4594, 759, 8807, 4610, 4600, 4607, 1044, 3288, 4616, 4603, 4593, 4025, 4597, 4599, 4615, 4592, 4605, 4609, 8805, 8633, 4602, 4595, 4591, 4608, 4614, 4611, 3234, 4604, 4613 ], "content": "Within the last decade enormous advances on deep neural networks (DNNs) have been realized, encouraging their adaptation in a variety of research fields, where complex systems have to be modeled or understood, as for example earth observation, medical image analysis or robotics. Although DNNs have become attractive in high risk fields such as medical image analysis or autonomous vehicle control , their deployment in mission- and safety-critical real world applications remains limited. 
The main factors responsible for this limitation are\n\begin{itemize}\n \setlength\itemsep{0.5em}\n \item the lack of expressiveness and transparency of a deep neural network's inference model, which makes it difficult to trust its outcomes ,\n \item the inability to distinguish between in-domain and out-of-domain samples and the sensitivity to domain shifts ,\n \item the inability to provide reliable uncertainty estimates for a deep neural network's decision and frequently occurring overconfident predictions ,\n \item the sensitivity to adversarial attacks that make deep neural networks vulnerable to sabotage .\n\end{itemize}\nThese factors are mainly based on an uncertainty already included in the data (data uncertainty) or a lack of knowledge of the neural network (model uncertainty). To overcome these limitations, it is essential to provide uncertainty estimates, such that uncertain predictions can be ignored or passed to human experts . Providing uncertainty estimates is not only important for safe decision-making in high-risk fields, but also crucial in fields where the data sources are highly inhomogeneous and labeled data is rare, such as in remote sensing . Also for fields where uncertainties form a crucial part of the learning techniques, such as active learning or reinforcement learning , uncertainty estimates are highly important.\nIn recent years, researchers have shown an increased interest in estimating uncertainty in DNNs . The most common way to estimate the uncertainty on a prediction (the predictive uncertainty) is based on separately modelling the uncertainty caused by the model (epistemic or model uncertainty) and the uncertainty caused by the data (aleatoric or data uncertainty). While the former is reducible by improving the model learned by the DNN, the latter is not. 
The most important approaches for modeling this separation are Bayesian inference , ensemble approaches , test-time data augmentation approaches , or single deterministic networks containing explicit components to represent the model and the data uncertainty . Estimating the predictive uncertainty alone is, however, not sufficient for safe decision-making; it is also crucial to ensure that the uncertainty estimates are reliable. To this end, the calibration property (the degree of reliability) of DNNs has been investigated and re-calibration methods have been proposed to obtain reliable (well-calibrated) uncertainty estimates.\nThere are several works that give an introduction and overview of uncertainty in statistical modelling. Ghanem et al. published a handbook about uncertainty quantification, which includes a detailed and broad description of different concepts of uncertainty quantification, but without explicitly focusing on the application of neural networks. The theses of Gal and Kendall contain a good overview of Bayesian neural networks, with a special focus on the Monte Carlo (MC) Dropout approach and its application in computer vision tasks. The thesis of Malinin also contains a very good introduction and additional insights into Prior networks. Wang et al. contributed two surveys on Bayesian deep learning . They introduced a general framework and the conceptual description of Bayesian neural networks (BNNs), followed by an updated presentation of Bayesian approaches for uncertainty quantification in neural networks with special focus on recommender systems, topic models, and control. An evaluation of uncertainty quantification in deep learning is given in , presenting and comparing uncertainty quantification based on the softmax output, ensembles of networks, Bayesian neural networks, and autoencoders on the MNIST data set. 
Regarding the practicability of uncertainty quantification approaches for real-life mission- and safety-critical applications, Gustafsson et al. introduced a framework to test the robustness required in real-world computer vision applications and delivered a comparison of two popular approaches, namely MC Dropout and Ensemble methods.\nHüllermeier et al. presented the concepts of aleatoric and epistemic uncertainty in neural networks and discussed different concepts to model and quantify them. In contrast to this, Abdar et al. presented an overview of uncertainty quantification methodologies in neural networks and provided an extensive list of references for different application fields and a discussion of open challenges.\\\\\nIn this work, we present an extensive overview of all concepts that have to be taken into account when working with uncertainty in neural networks, while keeping applicability to real-world applications in mind. Our goal is to provide the reader with a clear thread from the sources of uncertainty to the applications where uncertainty estimates are needed. Furthermore, we point out the limitations of current approaches and discuss further challenges to be tackled in the future. For that, we provide a broad introduction and comparison of different approaches and fundamental concepts. The survey is mainly designed for people who are already familiar with deep learning concepts and are planning to incorporate uncertainty estimation into their predictions. 
But also for people already familiar with the topic, this review provides a useful overview of the whole concept of uncertainty in neural networks and their applications in different fields.\\\\\nIn summary, we comprehensively discuss\n\\begin{itemize}\n \\setlength\\itemsep{0.5em}\n \\item Sources and types of uncertainty (Section \\ref{sec:uncertainty_types_and_sources}),\n \\item Recent studies and approaches for estimating uncertainty in DNNs (Section \\ref{sec:uncertainty_quantification_methods}),\n \\item Uncertainty measures and methods for assessing the quality and impact of uncertainty estimates (Section \\ref{sec:uncertainty_measures}),\n \\item Recent studies and approaches for calibrating DNNs (Section \\ref{sec:calibration}),\n \\item An overview over frequently used evaluation data sets, available benchmarks and implementations\\footnote{The list of available implementations can be found in Section \\ref{sec:data_sets_and_baselines} as well as within an additional GitHub repository under \\href{https://github.com/JakobCode/UncertaintyInNeuralNetworks\\_Resources}{github.com/JakobCode/UncertaintyInNeuralNetworks\\_Resources}.} (Section \\ref{sec:data_sets_and_baselines}),\n \\item An overview over real-world applications using uncertainty estimates (Section \\ref{sec:application_fields}),\n \\item A discussion on current challenges and further directions of research in the future (Section \\ref{sec:con}).\n\\end{itemize}\nIn general, the principles and methods for estimating uncertainty and calibrating DNNs can be applied to all regression, classification, and segmentation problems, if not stated differently. 
In order to get a deeper dive into explicit applications of the methods, we refer to the section on applications and to further readings in the referenced literature.", "id": "18ea1659-9afd-4cf0-b7ab-6d8859dc5af6", "level": "section", "origin_cites_number": 58, "parent_id": "039eb0dd-a257-4d85-a714-ad5fc65c6f0b", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:uncertainty_types_and_sources}\nA neural network is a non-linear function $f_\\theta$ parameterized by model parameters $\\theta$ (i.e. the network weights) that maps from a measurable input set $\\Xspace$ to a measurable output set $\\Yspace$, i.e.\n\\begin{equation}\\label{eq:inference_model}\nf_\\theta: \\Xspace\\rightarrow \\Yspace \\qquad f_\\theta(x)=y~.\n\\end{equation}\nFor a supervised setting, we further have a finite set of training data $\\D\\subseteq\\Dspace=\\Xspace\\times\\Yspace$ containing $N$ data samples and corresponding targets, i.e.\n\\begin{align}\n \\D=(\\X,\\Y)=\\{x_n,y_n\\}_{n=1}^N\\subseteq\\Dspace~.\n\\end{align}\nFor a new data sample $x^*\\in\\Xspace$, a neural network trained on $\\D$ can be used to predict a corresponding target $f_\\theta(x^*) = y^*$.\nWe consider four different steps from the raw information in the environment to a prediction by a neural network with quantified uncertainties, namely\n\\begin{enumerate}\n \\vspace{0.1em}\n \\setlength\\itemsep{0.5em}\n \\item the \\textit{data acquisition} process: \\\\\n The occurrence of some information in the environment (e.g. a bird's singing) and a measured observation of this information (e.g. an audio record).\n \\item the \\textit{DNN building} process: \\\\\n The design and training of a neural network. \n \\item the \\textit{applied inference} model:\n The model which is applied for inference (e.g. 
a Bayesian neural network or an ensemble of neural networks).\n \item the \textit{prediction's uncertainty} model:\n The modelling of the uncertainties caused by the neural network and by the data. \n\end{enumerate} \nIn practice, these four steps contain several potential sources of uncertainty and errors, which again affect the final prediction of a neural network. The five factors that we consider the most vital causes of uncertainty in a DNN's predictions are\n\begin{itemize}\n \vspace{0.1em}\n \setlength\itemsep{0.5em}\n \item the variability in real world situations,\n \item the errors inherent to the measurement systems,\n \item the errors in the architecture specification of the DNN,\n \item the errors in the training procedure of the DNN,\n \item the errors caused by unknown data. \\\n\end{itemize}\nIn the following, the four steps leading from raw information to uncertainty quantification on a DNN's prediction are described in more detail. Within this, we highlight the sources of uncertainty that are related to the individual steps and explain how the uncertainties are propagated through the process. Finally, we introduce a model for the uncertainty on a neural network's prediction and introduce the main types of uncertainty considered in neural networks. \\\nThe goal of this section is to give a comprehensible idea of the uncertainties in neural networks. 
Hence, for the sake of simplicity we only describe and discuss the mathematical properties, which are relevant for understanding the approaches and applying the methodology in different fields.", "id": "695bfab1-de93-493c-976a-2aa81c0f0d91", "level": "section", "origin_cites_number": 0, "parent_id": "039eb0dd-a257-4d85-a714-ad5fc65c6f0b", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty in Deep Neural Networks" ] ], "subsections": [ "f0e101b2-aeb1-45fc-b6c1-236618d43ee9", "b9af7cc5-b075-4669-8b41-eed3b73255d2", "df986699-2a9c-491a-9db7-8c7cb9c07072", "ae1d7700-2122-494c-9572-78ac85447ec9", "8f3aceee-639b-4bb8-b299-e1edd450ebe7" ], "title": "Uncertainty in Deep Neural Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "In the context of supervised learning, the data acquisition describes the process where measurements $x$ and target variables $y$ are generated in order to represent a (real world) situation $\\omega$ from some space $\\Omega$. In the real world, a realization of $\\omega$ could for example be a bird, $x$ a picture of this bird, and $y$ a label stating \\textit{'bird'}. During the measurement, random noise can occur and information may get lost. We model this randomness in $x$ by\n \\begin{equation}\n x\\vert\\omega \\sim p_{x\\vert \\omega}~.\n \\end{equation}\n Equivalently, the corresponding target variable $y$ is derived, where the description is either based on another measurement or is the result of a labeling process\\footnote{In many cases one can model the labeling process as a mapping from $\\Xspace$ to $\\Yspace$, e.g. for speech recognition or various computer vision tasks. For other tasks, as for example earth observation, this is not always the case. Data is often labeled based on high resolutional data while low resolutional data is utilized for the prediction task.}. 
For both cases the description can be affected by noise and errors and we state it as\n \begin{equation}\n y\vert\omega \sim p_{y\vert \omega}~.\n \end{equation}\n A neural network is trained on a finite data set of realizations of $x|\omega_i$ and $y|\omega_i$ based on $N$ real world situations $\omega_1,...,\omega_N$,\n \begin{align}\n \D=\{x_i, y_i\}_{i=1}^N~.\n \end{align}\n When collecting the training data, two factors can cause uncertainty in a neural network trained on this data. First, the sample space $\Omega$ should be sufficiently covered by the training data $x_1,...,x_N$ for $\omega_1,...,\omega_N$. For that, one has to take into account that for a new sample $x^*$ it in general holds that $x^*\neq x_i$ for all training situations $x_i$. Consequently, the target has to be estimated based on the trained neural network model, which directly leads to the first factor of uncertainty:\n \vspace{0.1em}\n \begin{mdframed}\n \textbf{Factor I: Variability in Real World Situations}\\\n Most real world environments are highly variable and almost constantly affected by changes. These changes affect parameters such as temperature, illumination, clutter, and physical objects' size and shape. Changes in the environment can also affect the expression of objects; for example, plants after rain look very different from plants after a drought. \n When real world situations change compared to the training set, this is called a distribution shift. Neural networks are sensitive to distribution shifts, which can lead to significant changes in the performance of a neural network.\n \end{mdframed}\vspace{0.8em}\n The second case is based on the measurement system, which has a direct effect on the correlation between the samples and the corresponding targets. The measurement system generates information $x_i$ and $y_i$ that describe $\omega_i$ but might not contain enough information to learn a direct mapping from $x_i$ to $y_i$. 
This means that there might be highly different real world information $\\omega_i$ and $\\omega_j$ (e.g. city and forest) resulting in very similar corresponding measurements $x_i$ and $x_j$ (e.g. temperature) or similar corresponding targets $y_i$ and $y_j$ (e.g. label noise that labels both samples as forest). This directly leads to our second factor of uncertainty:\n \\vspace{0.8em}\n \\begin{mdframed}\n \\textbf{Factor II: Error and Noise in Measurement Systems}\\\\\n The measurements themselves can be a source of uncertainty on the neural network's prediction. This can be caused by limited information in the measurements, as for example the image resolution, or by not measuring false or insufficiently available information modalities. Moreover, it can be caused by noise, for example sensor noise, by motion, or mechanical stress leading to imprecise measures. Furthermore, false labeling is also a source of uncertainty that can be seen as error and noise in the measurement system. It is referenced as label noise and affects the model by reducing the confidence on the true class prediction during training.\n \\end{mdframed}", "id": "f0e101b2-aeb1-45fc-b6c1-236618d43ee9", "level": "subsection", "origin_cites_number": 0, "parent_id": "695bfab1-de93-493c-976a-2aa81c0f0d91", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty in Deep Neural Networks" ], [ "subsection", "Data Acquisition" ] ], "subsections": [], "title": "Data Acquisition" }, { "cite_extract_rate": 1, "cites": [ 208, 759, 3288 ], "content": "The design of a DNN covers the explicit modeling of the neural network and its stochastic training process. The assumptions on the problem structure induced by the design and training of the neural network are called inductive bias . We summarize all decisions of the modeler on the network's structure (e.g. the number of parameters, the layers, the activation functions, etc.) and training process (e.g. 
optimization algorithm, regularization, augmentation, etc.) in a structure configuration $s$.\n The defined network structure gives the third factor of uncertainty in a neural network's predictions:\n \vspace{0.8em}\n \begin{mdframed}\n \textbf{Factor III: Errors in the Model Structure}\\\n The structure of a neural network has a direct effect on its performance and therefore also on the uncertainty of its prediction. For instance, the number of parameters affects the memorization capacity, which can lead to under- or over-fitting on the training data. Regarding uncertainty in neural networks, it is known that deeper networks tend to be overconfident in their softmax output, meaning that they assign too much probability to the class with the highest score .\n \end{mdframed}\n \vspace{0.8em}\n For a given network structure $s$ and a training data set $\D$, the training of a neural network is a stochastic process and therefore the resulting neural network $f_\theta$ is based on a random variable,\n \begin{align}\n \theta\vert D, s \sim p_{\theta|D,s}.\n \end{align}\n The process is stochastic due to random decisions such as the order of the data, random initialization, or random regularization such as augmentation or dropout. The loss landscape of a neural network is highly non-linear and the randomness in the training process in general leads to different local optima $\theta^*$ resulting in different models . Also, parameters such as batch size, learning rate, and the number of training epochs affect the training and result in different models. Depending on the underlying task, these models can significantly differ in their predictions for single samples, even leading to a difference in the overall model performance. 
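This stochasticity can be demonstrated with a minimal experiment: training the same model twice on the same data, but with different random initializations and data orderings, yields different parameterizations. A sketch with toy data and a plain linear model standing in for a DNN (an illustration only, not taken from the survey):

```python
import numpy as np

def train(seed, X, y, lr=0.05, epochs=5):
    """A few epochs of SGD on a linear model; the seed controls both
    the weight initialization and the shuffling of the data order."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])          # random initialization
    for _ in range(epochs):
        order = rng.permutation(len(X))      # random sample order
        for i in order:
            grad = 2 * (X[i] @ w - y[i]) * X[i]   # gradient of squared error
            w -= lr * grad
    return w

# Toy regression data with a known weight vector plus noise.
rng = np.random.default_rng(42)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

w_a = train(seed=0, X=X, y=y)
w_b = train(seed=1, X=X, y=y)
print("run A:", np.round(w_a, 3))
print("run B:", np.round(w_b, 3))
print("parameter difference:", np.linalg.norm(w_a - w_b))
```

Both runs approach the same region of the loss landscape here because the toy problem is convex; for a deep network with a non-convex loss, the two runs can end up in entirely different local optima, and the resulting predictions can differ accordingly.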
This sensitivity to the training process directly leads to the fourth factor for uncertainties in neural network predictions: \n \\vspace{0.8em}\n \\begin{mdframed}\n \\textbf{Factor IV: Errors in the Training Procedure} \\\\\n The training process of a neural network includes many parameters that have to be defined (batch size, optimizer, learning rate, stopping criteria, regularization, etc.) and also stochastic decisions within the training process (batch generation and weight initialization) take place. All these decisions affect the local optima and it is therefore very unlikely that two training processes deliver the same model parameterization. A training data set that suffers from imbalance or low coverage of single regions in the data distribution also introduces uncertainties on the network's learned parameters, as already described in the data acquisition. This might be softened by applying augmentation to increase the variety or by balancing the impact of single classes or regions on the loss function.\n \\end{mdframed}\n Since the training process is based on the given training data set $\\D$, errors in the data acquisition process (e.g. label noise) can result in errors in the training process.", "id": "b9af7cc5-b075-4669-8b41-eed3b73255d2", "level": "subsection", "origin_cites_number": 3, "parent_id": "695bfab1-de93-493c-976a-2aa81c0f0d91", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty in Deep Neural Networks" ], [ "subsection", "Deep Neural Network Design and Training" ] ], "subsections": [], "title": "Deep Neural Network Design and Training" }, { "cite_extract_rate": 0, "cites": [], "content": "The inference describes the prediction of an output $y^*$ for a new data sample $x^*$ by the neural network. At this time, the network is trained for a specific task. 
Thus, samples which are not inputs for this task cause errors and are therefore also a source of uncertainty: \n \\vspace{0.8em}\n \\begin{mdframed}\n \\textbf{Factor V: Errors Caused by Unknown Data}\\\\\n Especially in classification tasks, a neural network that is trained on samples derived from a world $\\W_1$ can also be capable of processing samples derived from a completely different world $\\W_2$. This is for example the case, when a network trained on images of cats and dogs receives a sample showing a bird. Here, the source of uncertainty does not lie in the data acquisition process, since we assume a world to contain only feasible inputs for a prediction task. Even though the practical result might be equal to too much noise on a sensor or complete failure of a sensor, the data considered here represents a valid sample, but for a different task or domain.\n \\end{mdframed}", "id": "df986699-2a9c-491a-9db7-8c7cb9c07072", "level": "subsection", "origin_cites_number": 0, "parent_id": "695bfab1-de93-493c-976a-2aa81c0f0d91", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty in Deep Neural Networks" ], [ "subsection", "Inference" ] ], "subsections": [], "title": "Inference" }, { "cite_extract_rate": 1, "cites": [ 8633, 4604 ], "content": "As a modeller, one is mainly interested in the uncertainty that is propagated onto a prediction $y^*$, the so-called \\textit{predictive uncertainty}. 
Within the data acquisition model, the probability distribution for a prediction $y^*$ based on some sample $x^*$ is given by\n\begin{equation}\label{eq:real_world_distribution}\n p(y^*|x^*) = \int_\Omega p(y^*|\omega)p(\omega|x^*)d\omega\n\end{equation}\nand a maximum a posteriori (MAP) estimation is given by\n\begin{equation}\label{eq:real_world_max_likelihood}\n y^* = \arg \max_y p(y | x^*)~.\n\end{equation}\nSince the modeling is based on the unavailable latent variable $\omega$, one uses an approximate representation based on a sampled training data set $\mathcal{D}=\{x_i,y_i\}_{i=1}^N$ containing $N$ samples and corresponding targets. The distribution and MAP estimator in \eqref{eq:real_world_distribution} and \eqref{eq:real_world_max_likelihood} for a new sample $x^*$ are then approximated based on the known examples by\n\begin{equation}\label{eq:data_set_distribution}\n p(y^*\vert x^*) \approx p(y^*\vert \D, x^*)\n\end{equation}\nand\n\begin{equation}\label{eq:data_set_max_likelihood}\n y^* = \arg \max_y p(y | \D,x^*)~.\n\end{equation}\nIn general, the distribution given in \eqref{eq:data_set_distribution} is unknown and can only be estimated based on the given data in $D$. For this estimation, neural networks form a very powerful tool for many tasks and applications. \n\par\nThe prediction of a neural network is subject to both model-dependent and input data-dependent errors, and therefore the predictive uncertainty associated with $y^*$ is in general separated into \textit{data uncertainty} (also statistical or aleatoric uncertainty ) and \textit{model uncertainty} (also systemic or epistemic uncertainty ). 
Depending on the underlying approach, an additional explicit modeling of \\textit{distributional uncertainty} is used to model the uncertainty, which is caused by examples from a region not covered by the training data.", "id": "ae1d7700-2122-494c-9572-78ac85447ec9", "level": "subsection", "origin_cites_number": 2, "parent_id": "695bfab1-de93-493c-976a-2aa81c0f0d91", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty in Deep Neural Networks" ], [ "subsection", "Predictive Uncertainty Model" ] ], "subsections": [ "897710f8-9cf6-4254-9319-d5f427b2ded5", "cf1e789d-36cb-4385-a33d-3ae68f07aef2" ], "title": "Predictive Uncertainty Model" }, { "cite_extract_rate": 0.4, "cites": [ 4618, 3288 ], "content": "The model uncertainty covers the uncertainty that is caused by shortcomings in the model, either by errors in the training procedure, an insufficient model structure, or lack of knowledge due to unknown samples or a bad coverage of the training data set. \nIn contrast to this, data uncertainty is related to uncertainty that directly stems from the data. Data uncertainty is caused by information loss when representing the real world within a data sample and represents the distribution stated in \\eqref{eq:real_world_distribution}. For example, in regression tasks noise in the input and target measurements causes data uncertainty that the network cannot learn to correct. In classification tasks, samples which do not contain enough information in order to identify one class with 100\\% certainty cause data uncertainty on the prediction. The information loss is a result of the measurement system, e.g. by representing real world information by image pixels with a specific resolution, or by errors in the labelling process. \\\\\nConsidering the five presented factors for uncertainties on a neural network's prediction, model uncertainty covers Factors I, III, IV, and V and data uncertainty is related to Factor II. 
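For the regression case, a standard construction from the literature (not specific to this section) lets the network predict both a mean and a variance per input and trains it with the Gaussian negative log-likelihood, so that the predicted variance captures the data uncertainty. A minimal numpy sketch of this loss, with hypothetical numbers and the constant term dropped:

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Per-sample negative log-likelihood of y under N(mu, exp(log_var)),
    constant term dropped. Predicting the log-variance keeps the variance
    positive; a large predicted variance down-weights the squared error
    but is itself penalized, so large variances only pay off for
    genuinely noisy inputs (high data uncertainty)."""
    return 0.5 * (log_var + (y - mu) ** 2 / np.exp(log_var))

# Hypothetical network outputs (mean, log-variance) for three samples.
y_true = np.array([1.0, 2.0, 3.0])
mu = np.array([1.1, 1.9, 2.0])
log_var = np.array([-2.0, -2.0, 1.0])   # last sample flagged as uncertain

loss = gaussian_nll(y_true, mu, log_var)
print("per-sample NLL:", np.round(loss, 3))

# The same miss of 1.0 costs less when a high variance is predicted:
print(gaussian_nll(3.0, 2.0, 1.0) < gaussian_nll(3.0, 2.0, -2.0))  # prints: True
```

Note that the variance learned this way only reflects the data uncertainty; the model uncertainty requires the distribution over $\theta$ discussed next.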
While model uncertainty can be (theoretically) reduced by improving the architecture, the learning process, or the training data set, the data uncertainties cannot be explained away . Therefore, DNNs that are capable of handling uncertain inputs and that are able to remove or quantify the model uncertainty and give a correct prediction of the data uncertainty are of paramount importance for a variety of real world mission- and safety-critical applications.\n\\begin{figure*}\n\t\\centering\n\t\\begin{tikzpicture}\n\t\t\\node[inner sep=0pt] at (0,0)\n\t\t{\\includegraphics[clip, trim=7.5cm 2.8cm 10.5cm 1.2cm, width=.316\\textwidth]{figures/data_regression.pdf}};\n\t\t\\node[inner sep=0pt] at (6,0)\n\t\t{\\includegraphics[clip, trim=7.5cm 2.8cm 10.5cm 1.2cm, width=.316\\textwidth]{figures/model_regression.pdf}};\n\t\t\\node[inner sep=0pt] at (12,0)\n\t\t{\\includegraphics[clip, trim=7.5cm 2.8cm 10.5cm 1.2cm, width=.316\\textwidth]{figures/distributional_regression.pdf}};\n\t\t\\node[inner sep=0pt] at (0,-5.5)\n\t\t{\\includegraphics[clip, trim=7.5cm 2.8cm 10.5cm 1.2cm, width=.316\\textwidth]{figures/data_classification.pdf}};\n\t\t\\node[inner sep=0pt] at (6,-5.5)\n\t\t{\\includegraphics[clip, trim=7.8cm 2.8cm 10.2cm 1.2cm, width=.316\\textwidth]{figures/model_classification.pdf}};\n\t\t\\node[inner sep=0pt] at (12,-5.5)\n\t\t{\\includegraphics[clip, trim=7.5cm 2.8cm 10.5cm 1.2cm, width=.316\\textwidth]{figures/distributional_classification.pdf}};\n\t\t\\draw (-2.75,-2.75) -- (15,-2.75);\n\t\t\\draw (3,3) -- (3,-8.5);\n\t\t\\draw (9,3) -- (9,-8.5);\n\t\\end{tikzpicture}\n\t\\caption{Visualization of the data, the model, and the distributional uncertainty for classification and regression models.}\n\t\\label{fig:uncertainty_types}\n\\end{figure*}\nThe Bayesian framework offers a practical tool to reason about uncertainty in deep learning . 
In Bayesian modeling, the model uncertainty is formalized as a probability distribution over the model parameters $\theta$, while the data uncertainty is formalized as a probability distribution over the model outputs $y^*$, given a parameterized model $f_\theta$. The distribution over a prediction $y^*$, the predictive distribution, is then given by 
\begin{align}\label{eq:predictive_uncertainty_intro} 
  	 p(y^*|x^*, D)&=\int\underbrace{p(y^*\vert x^*,\theta)}_{\text{Data}}\underbrace{p(\theta\vert D)}_{\text{Model}}d\theta~.
\end{align}
The term $p(\theta | D)$ is referred to as the posterior distribution on the model parameters and describes the uncertainty on the model parameters given a training data set $D$. The posterior distribution is in general not tractable. While ensemble approaches seek to approximate it by learning several different parameter settings and averaging over the resulting models, Bayesian inference reformulates it using Bayes' theorem 
\begin{equation}\label{eq:bayes_posterior_intro}
    p(\theta|D) = \frac{p(D|\theta)p(\theta)}{p(D)}~.
\end{equation}
The term $p(\theta)$ is called the prior distribution on the model parameters, since it encodes only general knowledge about $\theta$ and does not take the observed data into account. The term $p(D\vert\theta)$ represents the likelihood that the data in $D$ is a realization of the distribution predicted by a model parameterized with $\theta$. Many loss functions are motivated by or can be related to the likelihood function. Loss functions that seek to maximize the log-likelihood for an assumed distribution are, for example, the cross-entropy and the mean squared error. 
Even with the reformulation given in \eqref{eq:bayes_posterior_intro}, the predictive distribution given in \eqref{eq:predictive_uncertainty_intro} is still intractable. To overcome this, several different ways to approximate the predictive distribution have been proposed. 
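One common family of approximations replaces the intractable integral in \eqref{eq:predictive_uncertainty_intro} by a Monte Carlo average over sampled parameter settings. The sketch below uses an assumed toy "network" (a random linear map standing in for samples from $p(\theta\vert D)$) purely to illustrate the averaging step:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy stand-in for drawing theta ~ p(theta | D): here simply a random weight matrix.
def sample_theta():
    return rng.normal(loc=1.0, scale=0.1, size=(2, 3))

x_star = np.array([0.5, -1.0])
S = 1000
# Monte Carlo estimate of p(y* | x*, D) ~= (1/S) * sum_s p(y* | x*, theta_s)
probs = np.mean([softmax(x_star @ sample_theta()) for _ in range(S)], axis=0)
```

The spread of the individual sampled predictions around `probs` can additionally serve as an estimate of the model uncertainty.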
A broad overview on the different concepts and some specific approaches is presented in Section \\ref{sec:uncertainty_quantification_methods}.", "id": "897710f8-9cf6-4254-9319-d5f427b2ded5", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "ae1d7700-2122-494c-9572-78ac85447ec9", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty in Deep Neural Networks" ], [ "subsection", "Predictive Uncertainty Model" ], [ "subsubsection", "Model- and Data Uncertainty" ] ], "subsections": [], "title": "Model- and Data Uncertainty" }, { "cite_extract_rate": 0.5, "cites": [ 8633 ], "content": "Depending on the approaches that are used to quantify the uncertainty in $y^*$, the formulation of the predictive distribution might be further separated into data, distributional, and model parts :\n\\begin{equation}\\label{eq:predictive_uncertainty_split_2_intro}\np(y^*|x^*, D)=\\int\\int \\underbrace{p(y\\vert \\mu)}_{\\text{Data}}\\underbrace{p(\\mu\\vert x^*,\\theta)}_{\\text{Distributional}}\\underbrace{p(\\theta\\vert D)}_{\\text{Model}}d\\mu d\\theta~.\n\\end{equation}\nThe distributional part in \\eqref{eq:predictive_uncertainty_split_2_intro} represents the uncertainty on the actual network output, e.g. for classification tasks this might be a Dirichlet distribution, which is a distribution over the categorical distribution given by the softmax output.\nModeled this way, distributional uncertainty refers to uncertainty that is caused by a change in the input-data distribution, while model uncertainty refers to uncertainty that is caused by the process of building and training the DNN. 
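The role of the Dirichlet distribution as a "distribution over categorical distributions" can be illustrated numerically. In this sketch the two concentration vectors are assumed, hand-picked values: a sharp Dirichlet produces nearly identical categorical distributions (low distributional uncertainty), while a flat one spreads its samples over the whole simplex:

```python
import numpy as np

rng = np.random.default_rng(1)

alpha_sharp = np.array([50.0, 1.0, 1.0])  # high precision alpha_0: low distributional uncertainty
alpha_flat = np.array([1.0, 1.0, 1.0])    # low precision: samples cover the whole simplex

mu_sharp = rng.dirichlet(alpha_sharp, size=5000)  # each row is one categorical distribution
mu_flat = rng.dirichlet(alpha_flat, size=5000)

# The Dirichlet mean is alpha / alpha_0; the spread of the sampled categorical
# distributions reflects the distributional uncertainty.
mean_sharp = mu_sharp.mean(axis=0)
spread_sharp = mu_sharp.std(axis=0)
spread_flat = mu_flat.std(axis=0)
```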
As modeled in \\eqref{eq:predictive_uncertainty_split_2_intro}, the model uncertainty affects the estimation of the distributional uncertainty, which affects the estimation of the data uncertainty.\nWhile most methods presented in this paper only distinguish between model and data uncertainty, approaches specialized on out-of-distribution detection often explicitly aim at representing the distributional uncertainty . A more detailed presentation of different approaches for quantifying uncertainties in neural networks is given in Section \\ref{sec:uncertainty_quantification_methods}. In Section \\ref{sec:uncertainty_measures}, different measures for measuring the different types of uncertainty are presented.", "id": "cf1e789d-36cb-4385-a33d-3ae68f07aef2", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "ae1d7700-2122-494c-9572-78ac85447ec9", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty in Deep Neural Networks" ], [ "subsection", "Predictive Uncertainty Model" ], [ "subsubsection", "Distributional Uncertainty" ] ], "subsections": [], "title": "Distributional Uncertainty" }, { "cite_extract_rate": 0.777777777777777, "cites": [ 4616, 3251, 8319, 8808, 3299, 4604, 1624 ], "content": "On the basis of the input data domain, the predictive uncertainty can also be classified into three main classes: \n\\begin{itemize}\n \\setlength\\itemsep{0.5em}\n \\item \\textit{In-domain uncertainty} \\\\\n In-domain uncertainty represents the uncertainty related to an input drawn from a data distribution assumed to be equal to the training data distribution. The in-domain uncertainty stems from the inability of the deep neural network to explain an in-domain sample due to lack of in-domain knowledge. From a modeler point of view, in-domain uncertainty is caused by design errors (model uncertainty) and the complexity of the problem at hand (data uncertainty). 
Depending on the source of the in-domain uncertainty, it might be reduced by increasing the quality of the training data set or of the training process. 
 \item \textit{Domain-shift uncertainty} \\
 Domain-shift uncertainty denotes the uncertainty related to an input drawn from a shifted version of the training distribution. The distribution shift results from insufficient coverage by the training data and the variability inherent to real-world situations. A domain shift might increase the uncertainty due to the inability of the DNN to explain the domain-shift sample on the basis of the samples seen at training time. Some errors causing domain-shift uncertainty can be modeled and can therefore be reduced. For example, occluded samples can be learned by the deep neural network to reduce the domain-shift uncertainty caused by occlusions. However, it is difficult, if not impossible, to model all errors causing domain-shift uncertainty, e.g., motion noise. From a modeler point of view, domain-shift uncertainty is caused by external or environmental factors but can be reduced by covering the shifted domain in the training data set. 
 \item \textit{Out-of-domain uncertainty} \\
 Out-of-domain uncertainty represents the uncertainty related to an input drawn from the subspace of unknown data. The distribution of unknown data is different and far from the training distribution. While a DNN can extract in-domain knowledge from domain-shift samples, it cannot extract in-domain knowledge from out-of-domain samples. For example, while domain-shift uncertainty describes phenomena like a blurred picture of a dog, out-of-domain uncertainty describes the case when a network that learned to classify cats and dogs is asked to predict a bird.
 The out-of-domain uncertainty stems from the inability of the DNN to explain an out-of-domain sample due to its lack of out-of-domain knowledge. 
From a modeler point of view, out-of-domain uncertainty is caused by input samples that the network is not meant to give a prediction for, or by insufficient training data.
\end{itemize}
Since the model uncertainty captures what the DNN does not know due to lack of in-domain or out-of-domain knowledge, it captures all in-domain, domain-shift, and out-of-domain uncertainties. In contrast, the data uncertainty captures in-domain uncertainty that is caused by the nature of the data the network is trained on, such as overlapping samples and systematic label noise. 
\begin{figure*}[t]
\resizebox{\textwidth}{!}{
  \includegraphics{figures/Overview_fertig_cropped.pdf}}
  \caption{The illustration shows the different steps of a neural network pipeline, based on the earth observation example of land cover classification (here settlement and forest) based on optical images. The different factors that affect the predictive uncertainty are highlighted in the boxes. Factor I is shown as changing environments: cloud-covered trees and different types and colors of trees. Factor II is shown by insufficient measurements that cannot directly be used to separate settlement from forest, and by label noise. In practice, the resolution of such images can be low, which would also be part of Factor II. Factor III and Factor IV represent the uncertainties caused by the network structure and the stochastic training process, respectively. 
Factor V, in contrast, is represented by feeding the trained network with unknown types of images, namely cows and pigs.} 
\label{fig:uncertainty_sources}
\end{figure*}

\label{sec:uncertainty_quantification_methods}
As described in Section \ref{sec:uncertainty_types_and_sources}, several factors may cause model and data uncertainty and affect a DNN's prediction. This variety of sources of uncertainty makes the complete exclusion of uncertainties in a neural network impossible for almost all applications. Especially in practical applications employing real-world data, the training data is only a subset of all possible input data, which means that a mismatch between the DNN domain and the unknown actual data domain is often unavoidable. However, an exact representation of the uncertainty of a DNN prediction is also not possible to compute, since the different uncertainties can in general not be modeled accurately and are most often even unknown.\\
Therefore, the estimation of uncertainty in a DNN prediction is a popular and vital field of research. The data uncertainty part is normally represented in the prediction itself, e.g. in the softmax output of a classification network or in the explicit prediction of a standard deviation in a regression network. In contrast to this, several different approaches which model the model uncertainty and seek to separate it from the data uncertainty, in order to obtain an accurate representation of the data uncertainty, have been introduced. 
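For regression, one widely used way to separate the two parts combines several models that each predict a mean and a variance: by the law of total variance, averaging the predicted variances estimates the data uncertainty, while the spread of the predicted means estimates the model uncertainty. The numbers below are assumed toy values for illustration only:

```python
import numpy as np

# Sketch (assumed setting): an ensemble of five regression models, each
# predicting a mean and a variance for the same input. A common decomposition:
#   data uncertainty  ~ mean of the predicted variances
#   model uncertainty ~ variance of the predicted means
means = np.array([1.00, 1.05, 0.95, 1.10, 0.90])       # hypothetical member means
variances = np.array([0.04, 0.05, 0.04, 0.06, 0.05])   # hypothetical member variances

data_uncertainty = variances.mean()
model_uncertainty = means.var()
total_uncertainty = data_uncertainty + model_uncertainty
```

Under this decomposition, members that agree closely with each other but all predict large variances indicate high data uncertainty; members that disagree strongly indicate high model uncertainty.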
\\
In general, the methods for estimating the uncertainty can be split into four different types, based on the number (single or multiple) and the nature (deterministic or stochastic) of the DNNs used. 
\begin{itemize}
 \setlength\itemsep{0.5em}
 \item \textbf{Single deterministic methods} give the prediction based on one single forward pass within a deterministic network. The uncertainty quantification is either derived by using additional (external) methods or is directly predicted by the network. 
 \item \textbf{Bayesian methods} cover all kinds of stochastic DNNs, i.e. DNNs where two forward passes of the same sample generally lead to different results. 
 \item \textbf{Ensemble methods} combine the predictions of several different deterministic networks at inference. 
 \item \textbf{Test-time augmentation methods} give the prediction based on one single deterministic network but augment the input data at test-time in order to generate several predictions that are used to evaluate the certainty of the prediction.
\end{itemize}
\input{figures/methods_diagram}
\input{tables/methods_overview}
\input{figures/UncertaintyMethods}
In the following, the main ideas and further extensions of the four types are presented and their main properties are discussed. In Figure \ref{fig:methods_diagram}, an overview of the different types and methods is given. In Figure \ref{fig:methods_overview}, the different underlying principles that are used to differentiate between the types of methods are presented. 
Table \ref{tab:types_compare} summarizes the main properties of the methods presented in this work, such as complexity, computational effort, memory consumption, flexibility, and others. 
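As a minimal illustration of the test-time augmentation idea, the sketch below feeds several perturbed copies of one input through a single deterministic model and uses the spread of the resulting predictions as an uncertainty signal. The "model" (a fixed random linear map) and the noise scale are placeholder assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

W = rng.normal(size=(2, 3))              # toy stand-in for a trained network

def predict(x):
    return softmax(x @ W)                # one deterministic forward pass

x = np.array([0.3, -0.7])
# Test-time augmentation: several noisy copies of the same input.
preds = np.stack([predict(x + rng.normal(scale=0.1, size=2)) for _ in range(20)])
mean_pred = preds.mean(axis=0)           # final prediction
spread = preds.std(axis=0)               # disagreement as uncertainty proxy
```

Replacing the input perturbations by several independently trained models would turn the same loop into an ensemble method; replacing them by stochastic forward passes would turn it into a Bayesian method.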
\n\\input{tables/table_deterministic}", "id": "33e781ef-a127-43ed-8702-88e09976b718", "level": "section", "origin_cites_number": 3, "parent_id": "039eb0dd-a257-4d85-a714-ad5fc65c6f0b", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Estimation" ] ], "subsections": [ "51765ff7-d94c-4419-adeb-5f1d5e3b561a", "65bd2854-4c7c-4495-b6b5-340505392445", "4fdfdf37-e014-4b52-b80a-5b1aa7684c7d", "781f73a5-8ad5-4b3b-aeac-5bca88d02c49", "ad900f38-8b0a-4a6b-ab56-457ba95c6ae7" ], "title": "Uncertainty Estimation" }, { "cite_extract_rate": 0.8888888888888881, "cites": [ 4617, 4592, 4622, 4621, 8633, 4620, 4613, 4619 ], "content": "}\\label{sec:deterministic_methods}\nFor deterministic neural networks the parameters are deterministic and each repetition of a forward pass delivers the same result. With single deterministic network methods for uncertainty quantification, we summarize all approaches where the uncertainty on a prediction $y^*$ is computed based on one single forward pass within a deterministic network. In the literature, several such approaches can be found. They can be roughly categorized into approaches where one single network is explicitly modeled and trained in order to quantify uncertainties and approaches that use additional components in order to give an uncertainty estimate on the prediction of a network . While for the first type, the uncertainty quantification affects the training procedure and the predictions of the network, the latter type is in general applied on already trained networks. Since trained networks are not modified by such methods, they have no effect on the network's predictions. 
In the following, we call these two types \textit{internal} and \textit{external} uncertainty quantification approaches.

Many of the internal uncertainty quantification approaches followed the idea of predicting the parameters of a distribution over the predictions instead of a direct pointwise maximum-a-posteriori estimation. Often, the loss function of such networks takes the expected divergence between the true distribution and the predicted distribution into account. The distribution over the outputs can be interpreted as a quantification of the model uncertainty (see Section \ref{sec:uncertainty_types_and_sources}), trying to emulate the behavior of a Bayesian modeling of the network parameters. The prediction is then given as the expected value of the predicted distribution. 
For classification tasks, the output in general represents class probabilities. 
These probabilities are a result of applying the softmax function 
\begin{align}
\begin{split}
&\text{softmax}:\mathbb{R}^K\rightarrow\left\{z\in \mathbb{R}^K\vert z_i \geq 0, \sum_{k=1}^K z_k =1\right\} \\
&\text{softmax}(z)_j = \frac{\exp(z_j)}{\sum_{k=1}^K\exp(z_k)}
\end{split}
\end{align}
for multiclass settings and the sigmoid function 
\begin{align}
\begin{split}
&\text{sigmoid}:\mathbb{R}\rightarrow[0,1] \\
&\text{sigmoid}(z) = \frac{1}{1+\exp(-z)}
\end{split}
\end{align}
for binary classification tasks on the logits $z$. These probabilities can already be interpreted as a prediction of the data uncertainty. However, it is widely discussed that neural networks are often over-confident and the softmax output is often poorly calibrated, leading to inaccurate uncertainty estimates. Furthermore, the softmax output cannot be associated with model uncertainty. But without explicitly taking the model uncertainty into account, out-of-distribution samples could lead to outputs that convey false confidence. For example, a network trained on cats and dogs will very likely not result in 50\% dog and 50\% cat when it is fed with the image of a bird. This is because the network extracts features from the image, and even though the features do not fit the cat class, they might fit even less to the dog class. As a result, the network puts more probability on cat. Furthermore, it was shown that the combination of rectified linear unit (ReLU) networks and the softmax output leads to settings where the network becomes more and more confident as the distance between an out-of-distribution sample and the learned training set becomes larger.
Figure \ref{fig:softmax_probability} shows an example where the rotation of a digit from MNIST leads to false predictions with high softmax values. 
\input{figures/rotating_mnist}
This phenomenon is described and further investigated by Hein et al. 
who proposed a method to avoid this behaviour, based on enforcing a uniform predictive distribution far away from the training data.
Several other classification approaches followed a similar idea of taking the logit magnitude into account, but made use of the Dirichlet distribution. The Dirichlet distribution is the conjugate prior of the categorical distribution and hence can be interpreted as a distribution over categorical distributions. The density of the Dirichlet distribution is defined by 
\begin{equation*}
    \text{Dir}\left(\mu\vert\alpha\right) = \frac{\Gamma(\alpha_0)}{\prod_{c=1}^{K}\Gamma(\alpha_c)}\prod_{c=1}^{K}\mu_c^{\alpha_c-1}, 
     \quad \alpha_c > 0,~\alpha_0=\sum_{c=1}^K\alpha_c \quad,
\end{equation*}
where $\Gamma$ is the gamma function, $\alpha_1, ..., \alpha_K$ are called the concentration parameters, and the scalar $\alpha_0$ is the precision of the distribution. In practice, the concentrations $\alpha_1,...,\alpha_K$ are derived by applying a strictly positive transformation, as for example the exponential function, to the logit values. As visualized in Figure \ref{fig:simplex_uncertainty}, a higher concentration value leads to a sharper Dirichlet distribution. \\
The set of all class probabilities of a categorical distribution over $k$ classes is equivalent to a $(k-1)$-dimensional standard or probability simplex. Each node of this simplex represents a probability vector with the full probability mass on one class, and each convex combination of the nodes represents a categorical distribution with the probability mass distributed over multiple classes. Malinin et al. argued that a high model uncertainty should lead to a lower precision value and therefore to a flat distribution over the whole simplex, since the network is not familiar with the data. 
In contrast to this, data uncertainty should be represented by a sharper but also centered distribution, since the network can handle the data but cannot give a clear class preference. In Figure \ref{fig:simplex_uncertainty}, the different desired behaviors are shown. 
\input{figures/solution_simplex}
The Dirichlet distribution is utilized in several approaches such as \textit{Dirichlet Prior Networks} and \textit{Evidential Neural Networks}. Both of these network types output the parameters of a Dirichlet distribution from which the categorical distribution describing the class probabilities can be derived. The general idea of prior networks is already described above and is visualized in Figure \ref{fig:simplex_uncertainty}. 
Prior networks are trained in a multi-task way with the goal of minimizing the expected Kullback-Leibler (KL) divergence between the predictions of in-distribution data and a sharp Dirichlet distribution, and between a flat Dirichlet distribution and the predictions of out-of-distribution data. Besides the main motivation of a better separation between in-distribution and OOD samples, these approaches also improve the separation between the confidence of correct and incorrect predictions, as has been shown empirically. In a follow-up work, it was discussed that in cases where the data uncertainty is high, the forward definition of the KL-divergence can lead to an undesirable multi-modal target distribution. In order to avoid this, the loss was reformulated using the reverse KL-divergence. The experiments showed improved results in uncertainty estimation as well as in adversarial robustness. 
Zhao et al. extended the Dirichlet network approach by a new loss function that aims at minimizing an upper bound on the expected error based on the $\mathcal{L}_\infty$-norm, i.e. optimizing an expected worst-case upper bound. It was further argued that using a mixture of Dirichlet distributions gives much more flexibility in approximating the posterior distribution. 
Therefore, an approach where the network predicts the parameters for a mixture of $M$ Dirichlet distributions was suggested. For this, the network logits represent the parameters for $M$ Dirichlet distributions, and additionally $M$ weights $\omega_i, i=1,..,M$ with the constraint $\sum_{i=1}^M\omega_i=1$ are optimized. Nandy et al. analytically showed that for in-domain samples with high data uncertainty, the Dirichlet distribution predicted for a false prediction is often flatter than for a correct prediction. They argued that this makes it harder to differentiate between in- and out-of-distribution predictions and suggested a regularization term towards maximizing the gap between in- and out-of-distribution samples. \\
Evidential neural networks also optimize the parameterization of a single Dirichlet network. The loss formulation is derived by using subjective logic and by interpreting the logits as multinomial opinions or beliefs, as introduced in Evidence or Dempster-Shafer theory. Evidential neural networks set the total amount of evidence in relation to the number of classes and conclude a value of uncertainty from this, i.e. effectively receiving an additional "I don't know" class. The loss is formulated as the expected value of a basic loss, as for example the categorical cross-entropy, with respect to a Dirichlet distribution parameterized by the logits. Additionally, a regularization term is added, encouraging the network not to consider features that provide evidence for multiple classes at the same time, as for example a circle does for the digits 6 and 8. Due to this, the networks do not differentiate between data uncertainty and model uncertainty, but learn whether they can give a certain prediction or not. This idea was extended by differentiating between vacuity and dissonance in the collected evidence in order to better separate in- and out-of-distribution samples. 
For that, two explicit data sets containing overlapping classes and out-of-distribution samples are needed to learn a regularization term. Amini et al. transferred the idea of evidential neural networks from classification tasks to regression tasks by learning the parameters of an evidential normal inverse gamma distribution over an underlying Normal distribution. Charpentier et al. avoided the need for OOD data in the training process by using normalizing flows to learn a distribution over a latent space for each class. A new input sample is projected onto this latent space, and a Dirichlet distribution is parameterized based on the class-wise densities of the received latent point. 
Besides the Dirichlet-based approaches described above, several other internal approaches exist. A relatively simple approach, based on small perturbations of the training input data and on temperature scaling calibration, leads to an efficient differentiation of in- and out-of-distribution samples. Mo$\dot{\text{z}}$ejko et al. made use of the inhibited softmax function. It contains an artificial and constant logit that makes the absolute magnitude of the single logits more determining in the softmax output. Van Amersfoort et al. showed that Radial Basis Function (RBF) networks can be used to achieve competitive results in accuracy and very good results regarding uncertainty estimation. RBF networks learn a linear transformation on the logits and classify inputs based on the distance between the transformed logits and the learned class centroids. In their work, a scaled exponentiated $L_2$ distance was used. The data uncertainty can be directly derived from the distances between the centroids. By including penalties on the Jacobian matrix in the loss function, the network was trained to be more sensitive to changes in the input space. As a result, the method reached good performance on out-of-distribution detection. 
In several tests, the approach was compared to a deep ensemble with five members, and it was shown that this single-network approach performs at least equally well on detecting out-of-distribution samples and improves the true-positive rate. \\
For regression tasks, Oala et al. introduced an uncertainty score based on the lower and upper bound outputs of an interval neural network. The interval neural network has the same structure as the underlying deterministic neural network and is initialized with the deterministic network's weights. In contrast to Gaussian representations of uncertainty given by a standard deviation, this approach can give non-symmetric values of uncertainty. Furthermore, the approach is found to be more robust in the presence of noise. Tagasovska and Lopez-Paz presented an approach to estimate data and model uncertainty. A simultaneous quantile regression loss function was introduced in order to generate well-calibrated prediction intervals for the data uncertainty. The model uncertainty is quantified based on a mapping from the training data to zero, based on so-called Orthonormal Certificates. The aim was that out-of-distribution samples, where the model is uncertain, are mapped to a non-zero value and can thus be recognized. Kawashima et al. introduced a method which computes virtual residuals in the training samples of a regression task based on a cross-validation-like pre-training step. With the original training data expanded by the information of these residuals, the actual predictor is trained to give a prediction together with a value of certainty. 
The experiments indicated that the virtual residuals represent a promising tool to avoid overconfident network predictions.

~\\
External uncertainty quantification approaches do not affect the models' predictions, since the evaluation of the uncertainty is separated from the underlying prediction task. Furthermore, several external approaches can be applied to already trained networks at the same time without affecting each other. Raghu et al. argued that when both tasks, the prediction and the uncertainty quantification, are carried out by one single method, the uncertainty estimation is biased by the actual prediction task. Therefore, they recommended a "direct uncertainty prediction" and suggested training two neural networks, one for the actual prediction task and a second one for the prediction of the uncertainty on the first network's predictions. Similarly, Ramalho and Miranda introduced an additional neural network for uncertainty estimation. In contrast, however, the representation space of the training data is considered and the density around a given test sample is evaluated. The additional neural network uses this training data density in order to predict whether the main network's estimate is expected to be correct or false. Hsu et al. 
detected out-of-distribution examples in classification tasks at test time by predicting total probabilities for each class, in addition to the categorical distribution given by the softmax output. The class-wise total probability is predicted by applying the sigmoid function to the network's logits. Based on these total probabilities, OOD examples can be identified as those with low class probabilities for all classes. \\
In contrast to this, Oberdiek et al. took the sensitivity of the model, i.e. the model's slope, into account by using gradient metrics for the uncertainty quantification in classification tasks. Lee et al. applied a similar idea but made use of back-propagated gradients. In their work, they presented state-of-the-art results on out-of-distribution and corrupted input detection.

Compared to many other principles, single deterministic methods are computationally efficient in training and evaluation. For training, only one network has to be trained, and often the approaches can even be applied to pre-trained networks. Depending on the actual approach, only a single or at most two forward passes have to be performed for evaluation. The underlying networks could contain more complex loss functions, which slow down the training process, or external components that have to be trained and evaluated additionally. 
But in general, this is still more efficient than the number of predictions needed for ensemble-based methods (Section \\ref{sec:ensemble.methods}), Bayesian methods (Section \\ref{sec:bayesian_methods}), and test-time data augmentation methods (Section \\ref{sec:test_time_augmentation}).\nA drawback of single deterministic neural network approaches is the fact that they rely on a single opinion and can therefore become very sensitive to the underlying network architecture, training procedure, and the training data.\n\\input{tables/table_bayesian_methods}", "id": "dc290065-ab0a-4da2-9665-8ecf9cd04ce4", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "51765ff7-d94c-4419-adeb-5f1d5e3b561a", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Estimation" ], [ "subsection", "Single Deterministic Methods" ], [ "subsubsection", "Summing Up Single Deterministic Methods" ] ], "subsections": [], "title": "Summing Up Single Deterministic Methods" }, { "cite_extract_rate": 0.5, "cites": [ 4629, 4638, 4630, 4639, 4631, 4636, 4632, 8811, 4634, 8813, 1044, 8812, 4597, 4628, 4637, 8809, 4633, 4641, 4640, 4635, 8810 ], "content": "}\\label{sec:bayesian_methods}\nBayesian Neural Networks (BNNs) have the ability to combine the scalability, expressiveness, and predictive performance of neural networks with Bayesian learning, as opposed to learning via maximum likelihood principles. This is achieved by inferring the probability distribution over the network parameters $\\theta=(w_1,...,w_K)$. 
More specifically, given a training input-target pair $(x, y)$, the posterior distribution over the space of parameters $p(\\theta|x,y)$ is modelled by assuming a prior distribution over the parameters $p(\\theta)$ and applying Bayes' theorem:\n\\begin{equation}\n\\label{1.b.eq1}\np(\\theta\\vert x,y) = \\frac{p(y\\vert x,\\theta)p(\\theta)}{p(y|x)} \\propto p(y\\vert x,\\theta)p(\\theta).\n\\end{equation}\nHere, the normalization constant in \\eqref{1.b.eq1} is called the model evidence $p(y|x)$, which is defined as\n\\begin{equation}\np(y | x) = \\int p(y\\vert x, \\theta)p(\\theta)d\\theta.\n\\end{equation}\nOnce the posterior distribution over the weights has been estimated, the prediction of an output $y^*$ for a new input $x^*$ can be obtained by Bayesian Model Averaging or Full Bayesian Analysis, which involves marginalizing the likelihood $p(y|x,\\theta)$ with the posterior distribution:\n\\begin{equation}\n\\label{eq:bayesian_marginalization}\np(y^*|x^*, x, y) = \\int p(y^*|x^*,\\theta) p(\\theta|x,y)d\\theta.\n\\end{equation}\nThis Bayesian way of prediction is a direct application of the law of total probability and endows the model with the ability to compute principled predictive uncertainty. The integral of \\eqref{eq:bayesian_marginalization} is intractable for the most common prior-posterior pairs, and approximation techniques are therefore typically applied. 
The most widespread approximation, the \\textit{Monte Carlo Approximation}, follows the law of large numbers and approximates the expected value by the mean of $N$ deterministic networks, $f_{\\theta_1}, ...,f_{\\theta_N}$, parameterized by $N$ samples, $\\theta_1, \\theta_2, ..., \\theta_N$, from the posterior distribution of the weights, i.e.\n\\begin{equation}\n \ty^*\\quad \\approx \\quad \\frac{1}{N}\\sum_{i=1}^{N} y_i^* \\quad = \\quad \\frac{1}{N}\\sum_{i=1}^{N} f_{\\theta_i}(x^*).\n\\end{equation} \nWilson and Izmailov argue that a key advantage of BNNs lies in this marginalization step, which particularly can improve both the accuracy and calibration of modern deep neural networks. We note that the use-cases of BNNs are not limited to uncertainty estimation but open up the possibility to bridge the powerful Bayesian tool-boxes within deep learning. Notable examples include Bayesian model selection , model compression , active learning , continual learning , theoretic advances in Bayesian learning and beyond.\\\\\nWhile the formulation is rather simple, there exist several challenges. For example, no closed form solution exists for the posterior inference as conjugate priors do not typically exist for complex models such as neural networks . Hence, approximate Bayesian inference techniques are often needed to compute the posterior probabilities. Yet, directly using approximate Bayesian inference techniques has proven to be difficult as the size of the data and number of parameters are too large for the use-cases of deep neural networks. In other words, the integrals of the above equations are not computationally tractable as the size of the data and the number of parameters grow. 
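For illustration, the Monte Carlo approximation above can be sketched in a few lines of code. The following toy example is our own construction (numpy only; the linear-softmax members and the hand-made posterior samples are placeholders, not one of the surveyed models):

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax
    e = np.exp(logits - logits.max())
    return e / e.sum()

def mc_bayesian_average(x, weight_samples):
    # Monte Carlo approximation of the Bayesian model average:
    # evaluate one deterministic model per posterior sample
    # theta_1, ..., theta_N and average the predictive distributions
    preds = [softmax(x @ w + b) for w, b in weight_samples]
    return np.mean(preds, axis=0)

rng = np.random.default_rng(0)
# N = 20 stand-in posterior samples for a 4-feature, 3-class toy problem
samples = [(rng.normal(size=(4, 3)), rng.normal(size=3)) for _ in range(20)]
p = mc_bayesian_average(rng.normal(size=4), samples)  # averaged distribution
```

Since each member outputs a probability distribution, the average is again a valid distribution over classes, mirroring the marginalization in \\eqref{eq:bayesian_marginalization}.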
Moreover, specifying a meaningful prior for deep neural networks is another challenge that is less understood.\nIn this survey, we classify the BNNs into three different types based on how the posterior distribution is inferred to approximate Bayesian inference:\n\\begin{itemize}\n \\setlength\\itemsep{0.5em}\n \\item \\textit{Variational inference} \\\\\n Variational inference approaches approximate the (in general intractable) posterior distribution by optimizing over a family of tractable distributions. \n \\item \\textit{Sampling approaches} \\\\\n Sampling approaches deliver a representation of the target random variable from which realizations can be sampled. Such methods are based on Markov Chain Monte Carlo and further extensions. \n \\item \\textit{Laplace approximation} \\\\\n Laplace approximation simplifies the target distribution by approximating the log-posterior distribution and then, based on this approximation, deriving a normal distribution over the network weights.\n\\end{itemize}\nWhile limiting our scope to these three categories, we also acknowledge several advances in related domains of BNN research. 
Some examples are (i) approximate inference techniques such as alpha divergence , expectation propagation , assumed density filtering etc., (ii) probabilistic programming to exploit modern Graphical Processing Units (GPUs) , (iii) different types of priors , (iv) advancements in theoretical understandings of BNNs , (v) uncertainty propagation techniques to speed up the marginalization procedures and (vi) computations of aleatoric uncertainty .\\\\", "id": "65bd2854-4c7c-4495-b6b5-340505392445", "level": "subsection", "origin_cites_number": 42, "parent_id": "33e781ef-a127-43ed-8702-88e09976b718", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Estimation" ], [ "subsection", "Bayesian Neural Networks" ] ], "subsections": [ "78daf31e-4286-4b9d-b08a-bb3e05166e37", "e6bfbade-59c4-4a36-9057-34e98611718d", "5640575a-f2c5-44fa-8507-32c600eb808c", "3aa6b1db-f185-4451-a4c0-99067177b029" ], "title": "Bayesian Neural Networks" }, { "cite_extract_rate": 0.6071428571428571, "cites": [ 4601, 8815, 4648, 4649, 4646, 4647, 4643, 4618, 3329, 4644, 8814, 4025, 4642, 4595, 4645, 8816, 3278 ], "content": "The goal of variational inference is to infer the posterior probabilities $p(\\theta|x,y)$ using a pre-specified family of distributions $q(\\theta)$. Here, the so-called variational family $q(\\theta)$ is defined as a parametric distribution. An example is the Multivariate Normal distribution, whose parameters are the mean and the covariance matrix. The main idea of variational inference is to find the settings of these parameters that make $q(\\theta)$ close to the posterior of interest $p(\\theta|x,y)$. 
This measure of closeness between the probability distributions is given by the Kullback-Leibler (KL) divergence\n\\begin{equation}\\label{eq:kl}\n \\text{KL}(q||p) = \\mathbb{E}_q\\left [ \\text{log} \\frac{q(\\theta)}{p(\\theta|x,y)} \\right ].\n\\end{equation}\nSince the posterior $p(\\theta\\vert x, y)$ is not available, the KL-divergence in \\eqref{eq:kl} cannot be minimized directly. Instead, the evidence lower bound (ELBO), a function that is equal to the negative KL divergence up to a constant, is optimized. For a given prior distribution on the parameters $p(\\theta)$, the ELBO is given by\n\\begin{equation}\n L = \\mathbb{E}_q\\left[\\log\\frac{p(y|x,\\theta)p(\\theta)}{q(\\theta)}\\right]\n\\end{equation}\nand for the KL divergence \n\\begin{equation}\n \\text{KL}(q||p) = -L + \\log p(y\\vert x)\n\\end{equation}\nholds.\nVariational methods for BNNs have been pioneered by Hinton and Van Camp, where the authors derived a diagonal Gaussian approximation to the posterior distribution of neural networks (couched in information theory - a minimum description length). Another notable extension in the 1990s has been proposed by Barber and Bishop , in which the full covariance matrix was chosen as the variational family, and the authors demonstrated how the ELBO can be optimized for neural networks. Several modern approaches can be viewed as extensions of these early works with a focus on how to scale the variational inference to modern neural networks. \\\\\nAn evident direction with the current methods is the use of stochastic variational inference (or Monte-Carlo variational inference), where the optimization of the ELBO is performed using mini-batches of data. One of the first connections to stochastic variational inference has been proposed by Graves et al. with Gaussian priors. In 2015, Blundell et al. introduced Bayes By Backprop, a further extension of stochastic variational inference to non-Gaussian priors, and demonstrated how the stochastic gradients can be made unbiased. Notably, Kingma et al. 
introduced the local reparameterization trick to reduce the variance of the stochastic gradients. One of the key concepts is to reformulate the loss function of the neural network as the ELBO. As a result, the intractable posterior distribution is indirectly optimized and variational inference is compatible with back-propagation, with certain modifications to the training procedure. These extensions widely focus on the fragility of stochastic variational inference that arises due to sensitivity to initialization, prior definition and variance of the gradients. These limitations have been addressed recently by Wu et al. , where a hierarchical prior was used and the moments of the variational distribution are approximated deterministically. \\\\\nThe above works commonly assumed mean-field approximations as the variational family, neglecting the correlations between the parameters. In order to make more expressive variational distributions feasible for deep neural networks, several works proposed inference using the matrix normal distribution or more expressive variants where the covariance matrix is decomposed into the Kronecker products of smaller matrices or into a low-rank form plus a positive diagonal matrix. A notable contribution towards expressive posterior distributions has been the use of normalizing flows - a hierarchical probability distribution where a sequence of invertible transformations is applied so that a simple initial density function is transformed into a more complex distribution. Interestingly, Farquhar et al. argue that the mean-field approximation is not a restrictive assumption, and that the layer-wise weight correlations may not be as important as capturing the depth-wise correlations. While the claim of Farquhar et al. may remain an open question, the mean-field approximations have the advantage of smaller computational complexity . For example, Osawa et al. 
demonstrated that variational inference can be scaled up to ImageNet-sized data-sets and architectures using multiple GPUs, and proposed practical tricks such as data augmentation, momentum initialization and learning rate scheduling. \\\\\nOne of the successes in variational methods has been accomplished by casting existing stochastic elements of deep learning as variational inference. A widely known example is Monte Carlo Dropout (MC Dropout), where the dropout layers are formulated as Bernoulli distributed random variables, and training a neural network with dropout layers can be approximated as performing variational inference . A main advantage of MC dropout is that the predictive uncertainty can be computed by activating dropout not only during training, but also at test time. In this way, once the neural network is trained with dropout layers, the implementation efforts can be kept to a minimum and the practitioners do not need expert knowledge to reason about uncertainty - criteria that the authors attribute to its success . The practical value of this method has also been demonstrated in several works and resulted in different extensions (evaluating the usage of different dropout masks, for example for convolutional layers , or changing the representations of the predictive uncertainty into model and data uncertainties ). Approaches that build upon a similar idea but randomly drop incoming activations of a node, instead of dropping an activation for all following nodes, were also proposed within the literature and called drop connect. This was found to be more robust in the uncertainty representation, even though it was shown that a combination of both can lead to higher accuracy and robustness in the test predictions . 
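The test-time procedure of MC dropout can be sketched as follows (a minimal numpy sketch under our own toy setup; the two-layer network, its sizes and the number of passes $T$ are illustrative assumptions, not a surveyed implementation):

```python
import numpy as np

def forward_with_dropout(x, W1, W2, rng, p=0.5):
    # one stochastic forward pass of a toy two-layer network with a
    # Bernoulli dropout mask that stays active at test time
    h = np.maximum(0.0, x @ W1)           # ReLU hidden layer
    mask = rng.random(h.shape) >= p       # drop each unit with probability p
    h = h * mask / (1.0 - p)              # inverted-dropout rescaling
    return h @ W2

def mc_dropout_predict(x, W1, W2, T=100, seed=0):
    # T stochastic passes: the mean acts as the prediction, the
    # spread across passes as a (model) uncertainty estimate
    rng = np.random.default_rng(seed)
    ys = np.stack([forward_with_dropout(x, W1, W2, rng) for _ in range(T)])
    return ys.mean(axis=0), ys.std(axis=0)

rng = np.random.default_rng(1)
W1, W2 = rng.random((4, 8)), rng.normal(size=(8, 2))  # toy weights
mean, std = mc_dropout_predict(np.ones(4), W1, W2)
```

The appeal noted in the text shows up directly here: once the network is trained with dropout, no extra machinery is needed beyond repeating the forward pass.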
Lastly, connections of variational inference to Adam , RMS Prop and batch normalization have been further suggested in the literature.\\\\", "id": "78daf31e-4286-4b9d-b08a-bb3e05166e37", "level": "subsubsection", "origin_cites_number": 28, "parent_id": "65bd2854-4c7c-4495-b6b5-340505392445", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Estimation" ], [ "subsection", "Bayesian Neural Networks" ], [ "subsubsection", "Variational Inference" ] ], "subsections": [], "title": "Variational Inference" }, { "cite_extract_rate": 0.5121951219512191, "cites": [ 4636, 4652, 8819, 4650, 8826, 8818, 8825, 8822, 4657, 4655, 8823, 4654, 4651, 3301, 8817, 4653, 166, 8824, 8821, 4656, 8820 ], "content": "Sampling methods, also often called Monte Carlo methods, are another family of Bayesian inference algorithms that represent uncertainty without a parametric model. Specifically, sampling methods use a set of hypotheses (or samples) drawn from the distribution and offer the advantage that the representation itself is not restricted by the type of distribution (e.g. it can be multi-modal or non-Gaussian) - hence probability distributions are obtained non-parametrically. Popular algorithms within this domain are particle filtering, rejection sampling, importance sampling and Markov Chain Monte Carlo sampling (MCMC) . \\\\ \nIn the case of neural networks, MCMC is often used since alternatives such as rejection and importance sampling are known to be inefficient for such high dimensional problems. The main idea of MCMC is to sample from arbitrary distributions by transitions in state space, where each transition is governed by a record of the current state and the proposal distribution that aims to estimate the target distribution (e.g. the true posterior). 
To explain this, let us start by defining a Markov Chain: a Markov Chain is a distribution over random variables $x_1, \\cdots, x_T$ which follows the state transition rule:\n\\begin{align}\np(x_1,\\cdots, x_T) = p(x_1)\\prod_{t=2}^{T}p(x_t|x_{t-1}),\n\\end{align}\ni.e. the next state only depends on the current state and not on any other former state. \nIn order to draw samples from the true posterior, MCMC sampling methods first generate samples in an iterative, Markov Chain fashion. Then, at each iteration, the algorithm decides to either accept or reject the samples, where the probability of acceptance is determined by certain rules. In this way, as more and more samples are produced, their values can approximate the desired distribution. \\\\\nHamiltonian Monte Carlo or Hybrid Monte Carlo (HMC) is an important variant of MCMC sampling methods (pioneered by Neal for neural networks), and is often known to be the gold standard of Bayesian inference . The algorithm works as follows: (i) start by initializing a set of parameters $\\theta$ (either randomly or in a user-specific manner). 
Then, for a given number of total iterations, (ii) instead of a random walk, a momentum vector - an auxiliary variable $\\rho$ - is sampled, and the current value of the parameters $\\theta$ is updated via the Hamiltonian dynamics:\n\\begin{align}\nH(\\rho, \\theta) = -\\text{log} p(\\rho, \\theta) = -\\text{log} p(\\rho|\\theta) - \\text{log} p(\\theta).\n\\end{align}\nDefining the potential energy $V(\\theta) = -\\text{log} p(\\theta)$ and the kinetic energy $T(\\rho|\\theta) = -\\text{log} p(\\rho|\\theta)$, the update steps via Hamilton's equations are governed by\n\\begin{align}\n\\frac{d\\theta}{dt} &= \\frac{\\partial H}{\\partial \\rho} = \\frac{\\partial T}{\\partial \\rho} \\ \\text{and}\\\\\n\\frac{d\\rho}{dt} &= - \\frac{\\partial H}{\\partial \\theta} = -\\frac{\\partial T}{\\partial \\theta} - \\frac{\\partial V}{\\partial \\theta}.\n\\end{align}\nThe so-called leapfrog integrator is used as a solver . (iii) For each step, a Metropolis acceptance criterion is applied to either reject or accept the samples (similar to MCMC). Unfortunately, HMC requires the processing of the entire data-set per iteration, which is computationally too expensive when the data-set size grows to millions or even billions. Hence, many modern algorithms focus on how to perform the computations in a mini-batch fashion stochastically. In this context, for the first time, Welling and Teh proposed to combine Stochastic Gradient Descent (SGD) with Langevin dynamics (a form of MCMC ) in order to obtain a scalable approximation to the MCMC algorithm based on mini-batch SGD . The work demonstrated that performing Bayesian inference on Deep Neural Networks can be as simple as running a noisy SGD. This method does not include the momentum term of HMC, since it uses first order Langevin dynamics, and opened up a new research area on Stochastic Gradient Markov Chain Monte Carlo (SG-MCMC). 
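The update rule of such a noisy SGD can be illustrated on a one-dimensional toy posterior (a minimal sketch in the spirit of SGLD; the full-batch gradient, the step size and the toy Gaussian target are our own illustrative choices, not a surveyed configuration):

```python
import numpy as np

def sgld_sample(grad_log_post, theta0, eps, n_steps, rng):
    # Stochastic Gradient Langevin Dynamics: a gradient step plus
    # Gaussian noise whose scale is tied to the step size, so that the
    # iterates approximately sample from the posterior
    theta, samples = theta0, []
    for _ in range(n_steps):
        theta = theta + 0.5 * eps * grad_log_post(theta)
        theta = theta + np.sqrt(eps) * rng.normal()
        samples.append(theta)
    return np.array(samples)

# toy target: a Gaussian posterior N(2, 1), i.e. grad log p(t) = -(t - 2)
rng = np.random.default_rng(0)
draws = sgld_sample(lambda t: -(t - 2.0), theta0=0.0,
                    eps=0.1, n_steps=5000, rng=rng)
# after a burn-in, the histogram of the draws approximates N(2, 1)
```

In the mini-batch setting of Welling and Teh, the full gradient above would be replaced by a rescaled stochastic gradient over a mini-batch.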
\\\\\nConsequently, several extensions are available which include the use of 2nd order information such as preconditioning and optimizing with the Fisher Information Matrix (FIM) , the Hessian , adapting a preconditioning diagonal matrix , generating samples from non-isotropic target densities using Fisher scoring , and samplers in the Riemannian manifold using the first order Langevin dynamics and Levy diffusion noise and momentum . Within these methods, the so-called parameter dependent diffusion matrices are incorporated with the intention to offset the stochastic perturbation of the gradient. To do so, the \"thermostat\" ideas are proposed so that a prescribed constant temperature distribution is maintained with the parameter dependent noise. Ahn et al. devised a distributed computing system for SG-MCMC to exploit the modern computing routines, while Wang et al. showed that Generative Adversarial Networks (GANs) can be used to distill the samples for improved memory efficiency, instead of distillation for enhancing the run-time capabilities of computing predictive uncertainty . Lastly, other recent trends are techniques that reduce the variance and bias arising from stochastic gradients. \\\\\nConcurrently, there have been solid advances in the theory of SG-MCMC methods and their applications in practice. Sato and Nakagawa , for the first time, showed that the SGLD algorithm with constant step size weakly converges; Chen et al. showed that faster convergence rates and more accurate invariant measures can be observed for SG-MCMCs with higher order integrators rather than a 1st order Euler integrator, while Teh et al. studied the consistency and fluctuation properties of the SGLD. As a result, verifiable conditions obeying a central limit theorem for which the algorithm is consistent, and how its asymptotic bias-variance decomposition depends on step-size sequences, have been discovered. 
A more detailed review of the SG-MCMC with a focus on supporting theoretical results can be found in Nemeth and Fearnhead . Practically, SG-MCMC techniques have been applied to shape classification and uncertainty quantification , to empirically study and validate the effects of tempered posteriors (also called cold posteriors) , and to train a deep neural network in order to generalize and avoid over-fitting .\\\\", "id": "e6bfbade-59c4-4a36-9057-34e98611718d", "level": "subsubsection", "origin_cites_number": 41, "parent_id": "65bd2854-4c7c-4495-b6b5-340505392445", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Estimation" ], [ "subsection", "Bayesian Neural Networks" ], [ "subsubsection", "Sampling Methods" ] ], "subsections": [], "title": "Sampling Methods" }, { "cite_extract_rate": 0.555555555555555, "cites": [ 4664, 153, 4663, 4660, 8828, 3300, 1562, 4667, 8827, 4659, 4666, 4661, 4665, 4658, 4662 ], "content": "The goal of the Laplace Approximation is to estimate the posterior distribution over the parameters of neural networks $p(\\theta\\mid x,y)$ around a local mode of the loss surface with a Multivariate Normal distribution. The Laplace Approximation to the posterior can be obtained by taking the second-order Taylor series expansion of the log posterior over the weights around the MAP estimate $\\hat \\theta$ given some data $(x, y)$. If we assume a Gaussian prior with a scalar precision value $\\tau>0$, then this corresponds to the commonly used $L_2$-regularization, and the Taylor series expansion results in\n\\begin{align*}\n\\log p(\\theta\\mid x,y) & \\approx \\log p(\\hat \\theta \\mid x,y) \\\\ & - \\frac{1}{2}(\\theta -\\hat \\theta)^T(H + \\tau I)(\\theta -\\hat \\theta),\n\\end{align*}\nwhere the first-order term vanishes because the gradient of the log posterior $\\nabla_\\theta \\log p(\\theta \\mid x,y)$ is zero at the maximum $\\hat \\theta$. 
Taking the exponential on both sides and approximating integrals by reverse engineering densities, the weight posterior is approximately a Gaussian with the mean $\\hat \\theta$ and the covariance matrix $(H+\\tau I)^{-1}$, where $H$ is the Hessian of the negative log likelihood $-\\log p(y\\mid x,\\theta)$. This means that the model uncertainty is represented by the Hessian $H$, resulting in a Multivariate Normal distribution:\n\\begin{align}\n p(\\theta\\mid x,y) \\sim \\mathcal{N}\\left(\\hat \\theta, (H+\\tau I)^{-1}\\right).\n\\end{align}\nIn contrast to the two other methods described, the Laplace approximation can be applied to already trained networks, and is generally applicable when using standard loss functions such as MSE or cross entropy and piece-wise linear activations (e.g. ReLU). Mackay and Denker et al. have pioneered the Laplace approximation for neural networks in the 1990s, and several modern methods provide an extension to deep neural networks . \\\\\nThe core of the Laplace Approximation is the estimation of the Hessian. Unfortunately, due to the enormous number of parameters in modern neural networks, the Hessian matrices cannot be computed in a feasible way, as opposed to the relatively smaller networks in Mackay and Denker et al. . Consequently, several different ways of approximating $H$ have been proposed in the literature. A brief review is as follows. Instead of diagonal approximations (e.g. , ), several researchers have been focusing on including the off-diagonal elements (e.g. , and ). Amongst them, the layer-wise Kronecker Factor approximation of , , and has demonstrated notable scalability . A recent extension can be found in , where the authors propose to re-scale the eigenvalues of the Kronecker factored matrices so that the diagonal variance in its eigenbasis is accurate. The work presents an interesting idea, as one can prove that in terms of the Frobenius norm, the proposed approximation is more accurate than that of . 
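The overall recipe (locate the MAP, estimate the curvature at the mode, fit a Gaussian) can be condensed into a one-dimensional sketch. This is our own toy construction: the gradient-descent MAP search and the finite-difference Hessian are illustrative stand-ins for the approximations discussed in this section, and for an exactly quadratic negative log posterior the fit is exact:

```python
def laplace_approx_1d(neg_log_post, theta0, lr=0.1, steps=500, h=1e-4):
    # 1D Laplace approximation: find the MAP by gradient descent on the
    # negative log posterior, then fit a Gaussian N(theta_hat, H^-1)
    # with H estimated as a finite-difference second derivative
    grad = lambda t: (neg_log_post(t + h) - neg_log_post(t - h)) / (2 * h)
    theta = theta0
    for _ in range(steps):
        theta -= lr * grad(theta)
    hess = (neg_log_post(theta + h) - 2 * neg_log_post(theta)
            + neg_log_post(theta - h)) / h ** 2
    return theta, 1.0 / hess   # mean and variance of the Gaussian fit

# toy Gaussian posterior with mean 3 and variance 0.25:
# neg log p(theta) = (theta - 3)^2 / (2 * 0.25), up to an additive constant
mean, var = laplace_approx_1d(lambda t: (t - 3.0) ** 2 / (2 * 0.25), 0.0)
```

For real networks the second derivative is replaced by the scalable Hessian approximations reviewed above, since neither exact Hessians nor finite differences are feasible in high dimensions.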
However, as this approximation is harmed by inaccurate estimates of the eigenvectors, Lee et al. proposed to further correct the diagonal elements in the parameter space. \\\\\nExisting works obtain the Laplace Approximation using various approximations of the Hessian along the lines of fidelity-complexity trade-offs. In several works, an approximation using the diagonal of the Fisher information matrix or the Gauss-Newton matrix, leading to independently distributed model weights, has been utilized in order to prune weights or perform continual learning to avoid catastrophic forgetting . In Ritter et al. , the Kronecker factorization of the approximate block-diagonal Hessian has been applied to obtain a scalable Laplace Approximation for neural networks. With this, the weights among different layers are still assumed to be independently distributed, but not the correlations within the same layer. Recently, building upon the current understanding of the neural network loss landscape, namely that many eigenvalues of the Hessian tend to be zero, a low-rank approximation was developed that leads to sparse representations of the layers' co-variance matrices. Furthermore, Lee et al. demonstrated that the Laplace Approximation can be scaled to ImageNet-sized data-sets and architectures, and further showed that with the proposed sparsification technique, the memory complexity of modelling correlations can be made similar to that of the diagonal approximation. Lastly, Kristiadi et al. proposed a simple procedure to compute the last-layer Gaussian approximation (neglecting the model uncertainty in all other layers of neural networks), and showed that even such a minimalist solution can mitigate overconfident predictions of ReLU networks.\\\\\nRecent efforts have extended the Laplace Approximation beyond the Hessian approximation. To tackle the widely known limitation that the Laplace Approximation assumes a bell-shaped true posterior, thus resulting in under-fitting behavior , Humt et al. 
proposed to use Bayesian Optimization and showed that hyperparameters of the Laplace Approximation can be efficiently optimized with increased calibration performance. Another work in this domain is by Kristiadi et al. , who proposed uncertainty units - a new type of hidden units that changes the geometry of the loss landscape so that more accurate inference is possible. While Shinde et al. demonstrated the practical effectiveness of the Laplace Approximation in autonomous driving applications, Feng et al. showed the possibility of (i) incorporating contextual information and (ii) performing domain adaptation in a semi-supervised manner within the context of image classification. This is achieved by designing unary potentials within a Conditional Random Field. Several real-time methods also exist that do not require multiple forward passes to compute the predictive uncertainty. The so-called linearized Laplace Approximation has been proposed in using the ideas of Mackay , and has been extended with the Laplace bridge for classification . Within this framework, Daxberger et al. proposed inferring the sub-networks to increase the expressivity of covariance propagation while remaining computationally tractable.\\\\", "id": "5640575a-f2c5-44fa-8507-32c600eb808c", "level": "subsubsection", "origin_cites_number": 27, "parent_id": "65bd2854-4c7c-4495-b6b5-340505392445", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Estimation" ], [ "subsection", "Bayesian Neural Networks" ], [ "subsubsection", "Laplace Approximation" ] ], "subsections": [], "title": "Laplace Approximation" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 4629, 4638, 4630, 4664, 8829, 3302, 4668, 4669, 4661, 3301, 4665, 4640, 3278, 4656 ], "content": "Bayesian methods for deep learning have emerged as a strong research domain by combining principled Bayesian learning with deep neural networks. 
A review of current BNNs has been provided with a focus mostly on how the posterior $p(\\theta\\vert x,y)$ is inferred. As an observation, many of the recent breakthroughs have been achieved by performing approximate Bayesian inference in a mini-batch fashion (stochastically) or by investigating relatively simple but scalable techniques such as MC-dropout or the Laplace Approximation. As a result, several works demonstrated that posterior inference in large scale settings is now possible , and the field has several practical approximation tools to compute more expressive and accurate posteriors since the revival of BNNs beyond the pioneers . There are also emerging challenges on new frontiers beyond accurate inference techniques. Some examples are: (i) how to specify meaningful priors? , (ii) how to efficiently marginalize over the parameters for fast predictive uncertainty? (iii) infrastructures such as new benchmarks, evaluation protocols and software tools , and (iv) better understandings of the current methodologies and their potential applications .", "id": "3aa6b1db-f185-4451-a4c0-99067177b029", "level": "subsubsection", "origin_cites_number": 21, "parent_id": "65bd2854-4c7c-4495-b6b5-340505392445", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Estimation" ], [ "subsection", "Bayesian Neural Networks" ], [ "subsubsection", "Sum Up Bayesian Methods" ] ], "subsections": [], "title": "Sum Up Bayesian Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "}\\label{sec:ensemble.methods}", "id": "4fdfdf37-e014-4b52-b80a-5b1aa7684c7d", "level": "subsection", "origin_cites_number": 0, "parent_id": "33e781ef-a127-43ed-8702-88e09976b718", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Estimation" ], [ "subsection", "Ensemble Methods" ] ], "subsections": [ "7e3b6412-5190-4394-be53-d12c07b2aac7", 
"37e3936a-4b5d-49cd-811b-c73e9b28a890", "9ce35270-ac08-482d-92ab-e17c1f1db144", "c3b26604-2508-4eee-b0cc-935596bd13d0", "7c4dbab1-06f4-4752-955d-84b7181f67aa", "2cc00bbc-fc33-44ef-a304-012b3a168687" ], "title": "Ensemble Methods" }, { "cite_extract_rate": 0.18181818181818102, "cites": [ 4670, 3288 ], "content": "Ensembles derive a prediction based on the predictions received from multiple so-called ensemble members.\nThey target a better generalization by making use of synergy effects among the different models, arguing that a group of decision makers tends to make better decisions than a single decision maker . For an ensemble $f:X \\rightarrow Y$ with members $f_i:X \\rightarrow Y$ for $i \\in \\{1,2,...,M\\}$, this could, for example, be implemented by simply averaging over the members' predictions, \n\\begin{equation*}\n f(x):=\\frac{1}{M} \\sum_{i=1}^M f_i(x)~.\n\\end{equation*}\nBased on this intuitive idea, several works applying ensemble methods to different kinds of practical tasks and approaches, as for example bioinformatics , remote sensing , or reinforcement learning , can be found in the literature. Besides the improvement in accuracy, ensembles give an intuitive way of representing the model uncertainty on a prediction by evaluating the variety among the members' predictions. \\\\\nCompared to Bayesian and single deterministic network approaches, ensemble methods have two major differences. First, the general idea behind ensembles is relatively clear and there are not many groundbreaking differences in the application of different types of ensemble methods and their application in different fields. Hence, this section focuses on different strategies to train an ensemble and on some variations that aim at making ensemble methods more efficient. Second, ensemble methods were originally not introduced to explicitly handle and quantify uncertainties in neural networks. 
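The averaging formula above, together with the member disagreement as an uncertainty signal, can be sketched as follows (a toy numpy construction of our own; the softmax-linear members are placeholders for trained networks):

```python
import numpy as np

def ensemble_predict(x, members):
    # ensemble mean f(x) = (1/M) * sum_i f_i(x), plus the variance
    # across members as a simple disagreement/uncertainty signal
    preds = np.stack([member(x) for member in members])  # (M, classes)
    return preds.mean(axis=0), preds.var(axis=0)

def make_member(w):
    # hypothetical member: a fixed softmax-linear model
    def member(x):
        logits = x @ w
        e = np.exp(logits - logits.max())
        return e / e.sum()
    return member

rng = np.random.default_rng(0)
members = [make_member(rng.normal(size=(4, 3))) for _ in range(5)]
mean_pred, disagreement = ensemble_predict(np.ones(4), members)
```

Low variance indicates that the members agree, while inputs far from the training data typically drive the members apart and hence increase the variance.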
Although the derivation of uncertainty from ensemble predictions is obvious, ensembles were first introduced and discussed in order to improve the accuracy of a prediction, since they actually aim at reducing the model uncertainty . Therefore, many works on ensemble methods do not explicitly take the uncertainty into account. Notwithstanding this, ensembles have been found to be well suited for uncertainty estimations in neural networks .\\\\", "id": "7e3b6412-5190-4394-be53-d12c07b2aac7", "level": "subsubsection", "origin_cites_number": 11, "parent_id": "4fdfdf37-e014-4b52-b80a-5b1aa7684c7d", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Estimation" ], [ "subsection", "Ensemble Methods" ], [ "subsubsection", "Principles of Ensemble Methods" ] ], "subsections": [], "title": "Principles of Ensemble Methods" }, { "cite_extract_rate": 1, "cites": [ 4671 ], "content": "One main point where ensemble methods differ from the other methods presented in this paper is the number of local optima that are considered, i.e. the differentiation into \\textit{single-mode} and \\textit{multi-mode} evaluation. \\\\\nIn order to create synergies and marginalise false predictions of single members, the members of an ensemble have to behave differently in case of an uncertain outcome. The mapping defined by a neural network is highly non-linear and hence the optimized loss function contains many local optima to which a training algorithm could converge. Deterministic neural networks converge to one single local optimum in the solution space . Other approaches, e.g. BNNs, still converge to one single optimum, but additionally take the uncertainty on this local optimum into account . This means that neighbouring points within a certain region around the solution also affect the loss and also influence the prediction of a test sample. 
Since these methods focus on single regions, the evaluation is called \\textit{single-mode} evaluation. In contrast to this, ensemble methods consist of several networks, which should converge to different local optima. This leads to a so called multi-mode evaluation . \n\\input{figures/loss_landscape}\nIn Figure \\ref{fig:loss_landscape}, the considered parameters of a single-mode deterministic, single-mode probabilistic (Bayesian) and multi-mode ensemble approach are visualized. The goal of multi-mode evaluation is that different local optima could lead to models with different strengths and weaknesses in the predictions such that a combination of several such models brings synergy effects improving the overall performance. \\\\", "id": "37e3936a-4b5d-49cd-811b-c73e9b28a890", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "4fdfdf37-e014-4b52-b80a-5b1aa7684c7d", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Estimation" ], [ "subsection", "Ensemble Methods" ], [ "subsubsection", "Single- and Multi-Mode Evaluation" ] ], "subsections": [], "title": "Single- and Multi-Mode Evaluation" }, { "cite_extract_rate": 0.6153846153846151, "cites": [ 4678, 4677, 4672, 4676, 4673, 4675, 4674, 3288 ], "content": "One of the most crucial points when applying ensemble methods is to maximize the variety in the behaviour among the single networks . In order to increase the variety, several different approaches can be applied:\n\\begin{itemize}\n \\setlength\\itemsep{0.5em}\n \\item \\textbf{Random Initialization and Data Shuffle} \\\\\n Due to the very non-linear loss landscape, different initializations of a neural network lead in general to different training results. Since the training is realized on mini-batches, the order of the training data points also affects the final result. 
\n \\item \\textbf{Bagging and Boosting}\\\\\n Bagging (\\textbf{B}ootstrap \\textbf{agg}regat\\textbf{ing}) and Boosting are two strategies that vary the distribution of the used training data sets by sampling new sets of training samples from the original set. Bagging is sampling from the training data uniformly and with replacement . Thanks to the replacement process, ensemble members can see single samples several times in the training set while missing some other training samples. For boosting, the members are trained one after another and the probability of sampling a sample for the next training set is based on the performance of the already trained ensemble . \n \\item \\textbf{Data Augmentation}\\\\\n Augmenting the input data randomly for each ensemble member leads to models trained on different data points and therefore in general to a larger variety among the different members. \n \\item \\textbf{Ensemble of different Network Architecture}\\\\\n The combination of different network architectures leads to different loss landscapes and can therefore also increase the diversity in the resulting predictions .\n\\end{itemize}\nIn several works, it has been shown that the variety induced by random initialization works sufficiently and that bagging could even lead to a weaker performance . Livieris et al. evaluated different bagging and boosting strategies for ensembles of weight constrained neural networks. Interestingly, it is found that bagging performs better for a small number of ensemble members while boosting performs better for a large number.\nNanni et al. evaluated ensembles based on different types of image augmentation for bioimage classification tasks and compared those to each other. Guo and Gould used augmentation methods within in an ensemble approach for object detection. Both works stated that the ensemble approach using augmentations improves the resulting accuracy. 
In contrast to this, it has been stated with respect to uncertainty quantification that image augmentation can harm the calibration of an ensemble and that post-processing calibration methods have to be slightly adapted when using ensemble methods. Other ways of inducing variety for specific tasks have also been introduced. For instance, in , the members are trained with different attention masks in order to focus on different parts of the input data. Other approaches focused on the training process and introduced learning rate schedulers that are designed to discover several local optima within one training process . Subsequently, an ensemble can be built based on local optima found within one single training run. It is important to note that if not explicitly stated, the works and approaches presented so far targeted improvements in predictive accuracy and did not explicitly consider uncertainty quantification. \\\\", "id": "9ce35270-ac08-482d-92ab-e17c1f1db144", "level": "subsubsection", "origin_cites_number": 13, "parent_id": "4fdfdf37-e014-4b52-b80a-5b1aa7684c7d", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Estimation" ], [ "subsection", "Ensemble Methods" ], [ "subsubsection", "Bringing Variety into Ensembles" ] ], "subsections": [], "title": "Bringing Variety into Ensembles" }, { "cite_extract_rate": 0.555555555555555, "cites": [ 4616, 3293, 4600, 4675, 3288 ], "content": "Besides the improvement in the accuracy, ensembles are widely used for modelling uncertainty on predictions of complex models, as for example in climate prediction . Accordingly, ensembles are also used for quantifying the uncertainty on a deep neural network's prediction, and over the last years they have become more and more popular for such tasks . Lakshminarayanan et al. are often referenced as a base work on uncertainty estimations derived from ensembles of neural networks and as a reference for the competitiveness of deep ensembles. 
They introduced an ensemble training pipeline to quantify predictive uncertainty within DNNs. In order to handle data and model uncertainty, the member networks are designed with two heads, representing the prediction and a predicted value of data uncertainty on the prediction. The approach is evaluated with respect to accuracy, calibration, and out-of-distribution detection for classification and regression tasks. In all tests, the method performs at least as well as the BNN approaches used for comparison, namely Monte Carlo Dropout and Probabilistic Backpropagation. Lakshminarayanan et al. also showed that shuffling the training data and a random initialization of the training process induces a sufficient variety in the models in order to predict the uncertainty for the given architectures and data sets. Furthermore, bagging is even found to worsen the predictive uncertainty estimation, extending the findings of Lee et al. , who found bagging to worsen the predictive accuracy of ensemble methods on the investigated tasks. Gustafsson et al. introduced a framework for the comparison of uncertainty quantification methods with a specific focus on real life applications. Based on this framework, they compared ensembles and Monte Carlo dropout and found ensembles to be more reliable and applicable to real life applications. These findings endorse the results reported by Beluch et al. who found ensemble methods to deliver more accurate and better calibrated predictions on active learning tasks than Monte Carlo Dropout. Ovadia et al. evaluated different uncertainty quantification methods based on test sets affected by distribution shifts. The extensive evaluation contains a variety of model types and data modalities. As a takeaway, the authors stated that already for a relatively small ensemble size of five, deep ensembles seem to perform best and are more robust to data set shifts than the compared methods. Vyas et al. 
presented an ensemble method for the improved detection of out-of-distribution samples. For each member, a subset of the training data is considered as out-of-distribution. For the training process, a loss seeking a minimum margin greater than zero between the average entropy of the in-domain and the out-of-distribution subsets is introduced, which leads to a significant improvement in the out-of-distribution detection. \\\\", "id": "c3b26604-2508-4eee-b0cc-935596bd13d0", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "4fdfdf37-e014-4b52-b80a-5b1aa7684c7d", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Estimation" ], [ "subsection", "Ensemble Methods" ], [ "subsubsection", "Ensemble Methods and Uncertainty Quantification" ] ], "subsections": [], "title": "Ensemble Methods and Uncertainty Quantification" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 6989, 8806, 4590, 4680, 681, 4679, 8633, 4594, 4613 ], "content": "Compared to single model methods, ensemble methods come along with a significantly increased computational effort and memory consumption . When deploying an ensemble for a real life application, the available memory and computational power are often limited. Such limitations could easily become a bottleneck and become critical for applications with limited reaction time. Reducing the number of models leads to less memory and computational power consumption.\n\textit{Pruning approaches} reduce the complexity of ensembles by pruning over the members and reducing the redundancy among them. For that, several approaches based on different diversity measures are developed to remove single members without strongly affecting the performance . \nDistillation is another approach where the number of networks is reduced to one single model. It is the procedure of teaching a single network to represent the knowledge of a group of neural networks . 
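A minimal sketch of such a distillation target, assuming the common variant in which the student is trained towards the average of the members' softmax outputs (the teacher logits below are purely illustrative):

```python
import math

def softmax(logits):
    m = max(logits)  # shift for numerical stability
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_target(member_logits):
    # Soft target: average of the ensemble members' softmax outputs.
    probs = [softmax(l) for l in member_logits]
    k = len(probs[0])
    return [sum(p[i] for p in probs) / len(probs) for i in range(k)]

def distillation_loss(student_logits, target):
    # Cross-entropy between the averaged teacher distribution and the
    # student's softmax output, minimized during distillation.
    q = softmax(student_logits)
    return -sum(t * math.log(qi) for t, qi in zip(target, q))

teacher_logits = [[2.0, 0.5, 0.1], [1.5, 1.0, 0.2], [2.2, 0.3, 0.0]]
target = distillation_target(teacher_logits)
loss = distillation_loss([2.0, 0.5, 0.1], target)
```

Note that averaging the members' outputs in this way discards the variety among them, which motivates the distribution-distillation variants discussed next.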
First works on the distillation of neural networks were motivated by restrictions when deploying large scale classification problems . The original classification problem is separated into several sub-problems focusing on single blocks of classes that are difficult to differentiate. Several smaller trainer networks are trained on the sub-problems and then teach one student network to separate all classes at the same time. In contrast to this, \textit{ensemble distillation approaches} capture the behaviour of an ensemble by one single network. First works on ensemble distillation used the average of the softmax outputs of the ensemble members in order to teach a student network the derived predictive uncertainty . Englesson and Azizpour justify the resulting predictive distributions of this approach and additionally cover the handling of out-of-distribution samples. When averaging over the members' outputs, the model uncertainty, which is represented in the variety of ensemble outputs, gets lost. To overcome this drawback, researchers applied the idea of learning higher order distributions, i.e. distributions over a distribution, instead of directly predicting the output . The members are then distilled based on the divergence from the average distribution. The idea is closely related to the prior networks and the evidential neural networks , which are described in Section \ref{sec:deterministic_methods}. One such approach modelled ensemble members and the distilled network as prior networks predicting the parameters of a Dirichlet distribution. The distillation then seeks to minimize the KL divergence between the averaged Dirichlet distributions of the ensemble members and the output of the distilled network. Lindqvist et al. generalized this idea to any other parameterizable distribution. With that, the method is also applicable to regression problems, for example by predicting a mean and standard deviation to describe a normal distribution. 
Within several tests, the distillation models generated by these approaches are able to distinguish between data uncertainty and model uncertainty. Although distillation methods cannot completely capture the behaviour of an underlying ensemble, it has been shown that they are capable of delivering good and for some experiments even comparable results . \\\\\nOther approaches, such as \textit{sub-ensembles} and \textit{batch-ensembles} , seek to reduce the computation effort and memory consumption by sharing parts among the single members. It is important to note that the possibility of using different model architectures for the ensemble members could get lost when parts of the ensembles are shared. Also, the training of the models cannot be run in a completely independent manner. Therefore, the actual time needed for training does not necessarily decrease in the same way as the computational effort does. \\\\\nSub-ensembles divide a neural network architecture into two sub-networks: the trunk network, which extracts general information from the input data, and the task network, which uses this information to fulfill the actual task. In order to train a sub-ensemble, first, the weights of each member's trunk network are fixed based on the resulting parameters of one single model's training process. Subsequently, the parameters of each ensemble member's task network are trained independently from the other members. As a result, the members are built with a common trunk and an individual task sub-network. Since the training and the evaluation of the trunk network have to be done only once, the number of computations needed for training and testing decreases by the factor $\frac{M \cdot N_{\text{task}} + N_{\text{trunk}}}{M\cdot N}$, where $N_{\text{task}}$, $N_{\text{trunk}}$, and $N$ stand for the number of variables in the task networks, the trunk network, and the complete network. 
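The stated reduction factor can be checked numerically; the parameter counts below are purely illustrative:

```python
def subensemble_cost_factor(m, n_trunk, n_task):
    # (M * N_task + N_trunk) / (M * N), with N = N_trunk + N_task
    # being the parameter count of one complete network.
    n = n_trunk + n_task
    return (m * n_task + n_trunk) / (m * n)

# Purely illustrative sizes: a large shared trunk and small task heads.
factor = subensemble_cost_factor(m=5, n_trunk=900_000, n_task=100_000)
```

With these sizes, a sub-ensemble of five members needs only 28% of the evaluations a full ensemble would require.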
Valdenegro-Toro further underlined the usage of a shared trunk network by arguing that the trunk network is in general computationally more costly than the task network. \nIn contrast to this, batch-ensembles connect the member networks with each other at every layer. The ensemble members' weights are described as a Hadamard product of one shared weight matrix $W \in \R^{n\times m}$ and $M$ individual rank one matrices $F_i \in \R^{n \times m}$, each linked with one of the $M$ ensemble members. The rank one matrices can be written as a multiplication $F_i=r_is_i^\text{T}$ of two vectors $r_i\in \R^{n}$ and $s_i\in \R^{m}$ and hence the matrix $F_i$ can be described by $n+m$ parameters. With this approach, each additional ensemble member increases the number of parameters only by the factor $\frac{n+m}{M\cdot(n+m)+n\cdot m} + 1$ instead of $\frac{M+1}{M}=1 + \frac{1}{M}$. On the one hand, with this approach, the members are not independent anymore such that all the members have to be trained in parallel. On the other hand, the authors also showed that the parallelization can be realized similarly to the optimization on mini-batches and on a single unit.\\\\", "id": "7c4dbab1-06f4-4752-955d-84b7181f67aa", "level": "subsubsection", "origin_cites_number": 15, "parent_id": "4fdfdf37-e014-4b52-b80a-5b1aa7684c7d", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Estimation" ], [ "subsection", "Ensemble Methods" ], [ "subsubsection", "Making Ensemble Methods more Efficient" ] ], "subsections": [], "title": "Making Ensemble Methods more Efficient" }, { "cite_extract_rate": 0.75, "cites": [ 4671, 8806, 3288 ], "content": "Ensemble methods are very easy to apply, since no complex implementation or major modification of the standard deterministic model has to be realized. Furthermore, ensemble members are trained independently from each other, which makes the training easily parallelizable. 
Also, trained ensembles can be extended easily, but the needed memory and the computational effort increases linearly with the number of members for training and evaluation. The main challenge when working with ensemble methods is the need of introducing diversity among the ensemble members. For accuracy, uncertainty quantification, and out-of-distribution detection, random initialization, data shuffling, and augmentations have been found to be sufficient for many applications and tasks . Since these methods may be applied anyway, they do not need much additional effort. The independence of the single ensemble members leads to a linear increase in the required memory and computation power with each additional member. This holds for the training as well as for testing. This limits the deployment of ensemble methods in many practical applications where the computation power or memory is limited, the application is time-critical, or very large networks with high inference time are included .\\\\\nMany aspects of ensemble approaches are only investigated with respect to the performance on the predictive accuracy but do not take predictive uncertainty into account. This also holds for the comparison of different training strategies for a broad range of problems and data sets. \nEspecially since the overconfidence from single members can be transferred to the whole ensemble, strategies that encourage the members to deliver different false predictions instead of all delivering the same false prediction should be further investigated. For a better understanding of ensemble behavior, further evaluations of the loss landscape, as done by Fort et al. 
, could offer interesting insights.", "id": "2cc00bbc-fc33-44ef-a304-012b3a168687", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "4fdfdf37-e014-4b52-b80a-5b1aa7684c7d", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Estimation" ], [ "subsection", "Ensemble Methods" ], [ "subsubsection", "Sum Up Ensemble Methods" ] ], "subsections": [], "title": "Sum Up Ensemble Methods" }, { "cite_extract_rate": 0.5, "cites": [ 4682, 4683, 825, 4681 ], "content": "}\label{sec:test_time_augmentation}\nInspired by ensemble methods and adversarial examples , test time data augmentation is one of the simpler predictive uncertainty estimation techniques. The basic method is to create multiple augmented versions of each test sample by applying data augmentation techniques and then to evaluate all of them in order to compute a predictive distribution from which the uncertainty can be measured. The idea behind this method is that the augmented test samples allow the exploration of different views of a sample and therefore make it possible to capture the uncertainty. Mostly, this technique of test time data augmentation has been used in medical image processing . One of the reasons for this is that the field of medical image processing already makes heavy use of data augmentations while using deep learning , so it is quite easy to just apply those same augmentations during test time to calculate the uncertainties. Another reason is that collecting medical images is costly, thus forcing practitioners to rely on data augmentation techniques. \nMoshkov et al. used the test time augmentation technique for cell segmentation tasks. For that, they created multiple variations of the test data before feeding them to a trained UNet or Mask R-CNN architecture. 
Subsequently, they used majority voting to create the final output segmentation mask and discussed the policies of applying different augmentation techniques and how they affect the final predictive results of the deep networks.\\\\\nOverall, test time augmentation is an easy method for estimating uncertainties because it keeps the underlying model unchanged, requires no additional data, and is simple to put into practice with off-the-shelf libraries. Nonetheless, it needs to be kept in mind that when applying this technique, one should only apply valid augmentations to the data, meaning that the augmentations should not generate data from outside the target distribution. It has been reported that test time augmentation can change many correct predictions into incorrect predictions (and vice versa) due to many factors such as the nature of the problem at hand, the size of training data, the deep neural network architecture, and the type of augmentation. To limit the impact of these factors, Shanmugam et al. proposed a learning-based method for test time augmentation that takes these factors into consideration. In particular, the proposed method learns a function that aggregates the predictions from each augmentation of a test sample. Similarly, Molchanov et al. proposed a method, named “greedy Policy Search”, for constructing a test-time augmentation policy by choosing augmentations to be included in a fixed-length policy. Similarly, Kim et al. proposed a method for learning a loss predictor from the training data for instance-aware test-time augmentation selection. The predictor selects test-time augmentations with the lowest predicted loss for a given sample.\nAlthough learnable test time augmentation techniques help to select valid augmentations, one of the major open questions is to find out the effect on uncertainty due to different kinds of augmentations. 
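Such effects can be probed with a minimal test-time-augmentation loop; the model and the augmentations below are hypothetical stand-ins:

```python
# Sketch of a test-time-augmentation loop for one test sample.
# "model" and the augmentations are hypothetical stand-ins; the spread
# of the predictions across augmented copies serves as the uncertainty.

def model(x):
    # Stand-in classifier returning a probability for one class.
    return min(1.0, max(0.0, 0.5 + 0.1 * x))

augmentations = [
    lambda x: x,        # identity
    lambda x: -x,       # reflection
    lambda x: 1.5 * x,  # stretching
]

def tta_uncertainty(x):
    preds = [model(aug(x)) for aug in augmentations]
    mean = sum(preds) / len(preds)
    var = sum((p - mean) ** 2 for p in preds) / len(preds)
    return mean, var

tta_mean, tta_var = tta_uncertainty(1.0)
```

Evaluating such a loop with different augmentation subsets gives a direct, if crude, view of how much uncertainty each kind of augmentation reveals.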
It can for example happen that a simple augmentation like reflection is not able to capture much of the uncertainty while some domain-specialized stretching and shearing captures more uncertainty. It is also important to find out how many augmentations are needed to correctly quantify uncertainties in a given task. This is particularly important in applications like earth observation, where inference might be needed on a global scale with limited resources.", "id": "781f73a5-8ad5-4b3b-aeac-5bca88d02c49", "level": "subsection", "origin_cites_number": 8, "parent_id": "33e781ef-a127-43ed-8702-88e09976b718", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Estimation" ], [ "subsection", "Test Time Augmentation" ] ], "subsections": [], "title": "Test Time Augmentation" }, { "cite_extract_rate": 0.769230769230769, "cites": [ 4664, 3293, 8807, 4665, 3294, 4600, 3232, 8633, 1044, 4613 ], "content": "In order to use the presented methods on real life tasks, several different considerations have to be taken into account. The memory and computational power are often restricted, while many real world tasks may be time-critical . An overview of the main properties is given in Table \ref{tab:types_compare}. \\\\\nThe presented applications all come along with advantages and disadvantages, depending on the properties a user is interested in. While ensemble methods and test-time augmentation methods are relatively easy to apply, Bayesian approaches deliver a clear description of the uncertainty on the model's parameters and also deliver a deeper theoretical basis. The computational effort and memory consumption are a common restriction on real life applications, where single deterministic network approaches perform best, but distillation of ensembles or efficient Bayesian methods can also be taken into consideration. 
Within the different types of Bayesian approaches, the performance, the computational effort, and the implementation effort still vary strongly. Laplace approximations are relatively easy to apply and compared to sampling approaches much less computational effort is needed. Furthermore, there often already exist pretrained networks for an application. In this case, Laplace Approximation and external deterministic single network approaches can in general be applied to already trained networks. \\\\~\\\\\nAnother important aspect that has to be taken into account for uncertainty quantification in real life applications is the source and type of uncertainty. For real life applications, out-of-distribution detection forms the maybe most important challenge in order to avoid unexpected decisions of the network and to be aware of adversarial attacks. Especially since many motivations of uncertainty quantification are given by risk minimization, methods that deliver risk averse predictions are an important field to evaluate.\nMany works already demonstrated the capability of detecting out-of-distribution samples on several tasks and built a strong fundamental tool set for the deployment in real life applications . However, in real life, the tasks are much more difficult than finding out-of-distribution samples among data sets (e.g., MNIST or CIFAR data sets etc.) and the main challenge lies in comparing such approaches on several real-world data sets against each other. The work of Gustafsson et al. forms a first important step towards an evaluation of methods that better suits the demands in real life applications. Interestingly, they found for their tests ensembles to outperform the considered Bayesian approaches. This indicates, that the multi-mode evaluation given by ensembles is a powerful property for real life applications. Nevertheless Bayesian approaches have delivered strong results as well and furthermore come along with a strong theoretical foundation . 
As a way forward, the combination of efficient ensemble strategies and Bayesian approaches could capture the variability in the model parameters while still considering several modes for a prediction. \nAlso, single deterministic approaches such as prior networks deliver comparable results while consuming significantly less computation power. However, this efficiency often comes along with the problem that separate sets of in- and out-of-distribution samples have to be available for the training process . \nIn general, the development of new problem and loss formulations leads to a better understanding and description of the underlying problem and forms an important field of research.", "id": "ad900f38-8b0a-4a6b-ab56-457ba95c6ae7", "level": "subsection", "origin_cites_number": 13, "parent_id": "33e781ef-a127-43ed-8702-88e09976b718", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Estimation" ], [ "subsection", "Neural Network Uncertainty Quantification Approaches for Real Life Applications" ] ], "subsections": [], "title": "Neural Network Uncertainty Quantification Approaches for Real Life Applications" }, { "cite_extract_rate": 1, "cites": [ 4684, 4685, 3288 ], "content": "\label{sec:uncertainty_measures}\nIn Section \ref{sec:uncertainty_quantification_methods}, we presented different methods for modeling and predicting different types of uncertainty in neural networks. In order to evaluate these approaches, measures have to be applied on the derived uncertainties. In the following, we present different measures for quantifying the different predicted types of uncertainty.\nIn general, the correctness and trustworthiness of these uncertainties are not automatically given. 
In fact, there are several reasons why evaluating the quality of the uncertainty estimates is a challenging task.\n\begin{itemize}\n \setlength\itemsep{0.5em}\n \item First, the quality of the uncertainty estimation depends on the underlying method for estimating uncertainty. This is exemplified in the work undertaken by Yao et al. , which shows that different approximations of Bayesian inference (e.g. Gaussian and Laplace approximations) result in different qualities of uncertainty estimates.\n \item Second, there is a lack of ground truth uncertainty estimates and defining ground truth uncertainty estimates is challenging. For instance, if we define the ground truth uncertainty as the uncertainty across human subjects, we still have to answer questions such as "How many subjects do we need?" or "How to choose the subjects?".\n \item Third, there is a lack of a unified quantitative evaluation metric . To be more specific, the uncertainty is defined differently in different machine learning tasks such as classification, segmentation, and regression. For instance, prediction intervals or standard deviations are used to represent uncertainty in regression tasks, while entropy (and other related measures) are used to capture uncertainty in classification and segmentation tasks.\n\end{itemize}", "id": "07c7f2da-ef2c-475e-823a-a86caf9c3b67", "level": "section", "origin_cites_number": 3, "parent_id": "039eb0dd-a257-4d85-a714-ad5fc65c6f0b", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Measures and Quality" ] ], "subsections": [ "09365f96-9cba-420c-afc8-a3c8d6b4d1b6", "d7ef6da1-d088-4266-8f76-706da1adb552", "52da755e-6d99-4ad1-8f6b-287af0fc5c38" ], "title": "Uncertainty Measures and Quality" }, { "cite_extract_rate": 1, "cites": [ 4593, 1624 ], "content": "For classification tasks, the network's softmax output already represents a measure of confidence. 
But since the raw softmax output is neither very reliable nor able to represent all sources of uncertainty , further approaches and corresponding measures were developed.", "id": "09365f96-9cba-420c-afc8-a3c8d6b4d1b6", "level": "subsection", "origin_cites_number": 2, "parent_id": "07c7f2da-ef2c-475e-823a-a86caf9c3b67", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Measures and Quality" ], [ "subsection", "Evaluating Uncertainty in Classification Tasks" ] ], "subsections": [ "1b5886d5-9af7-4c82-b2f2-48edadb5f482", "d93d8bab-5114-41f4-a244-1861fac352b3", "0c83491b-d4ea-49e8-af4d-01614b6386be", "9c2e0b54-8727-4734-918a-7309443b8949" ], "title": "Evaluating Uncertainty in Classification Tasks" }, { "cite_extract_rate": 0, "cites": [], "content": "~\\\\\nConsider a classification task with $K$ different classes and a probability vector network output $p(x)$ for some input sample $x$. In the following, $p$ is used for simplification and $p_k$ stands for the $k$-th entry in the vector. In general, the given prediction $p$ represents a categorical distribution, i.e. it assigns a probability to each class to be the correct prediction. Since the prediction is not given as an explicit class but as a probability distribution, (un)certainty estimates can be directly derived from the prediction. In general this pointwise prediction can be seen as estimated data uncertainty . However, as discussed in Section \ref{sec:uncertainty_types_and_sources}, the model's estimation of the data uncertainty is affected by model uncertainty, which has to be taken into account separately. 
In order to evaluate the amount of predicted data uncertainty, one can for example apply the maximal class probability or the entropy measures:\n\\begin{align}\n &\\text{Maximal probability:} \\quad &p_{\\text{max}} &=\\max\\left\\{p_k\\right\\}_{k=1}^K &\\\\[1em]\n &\\text{Entropy:} &\\text{H}(p) &=-\\sum_{k=1}^Kp_k\\log_2(p_k) &\n\\end{align}\nThe maximal probability represents a direct representation of certainty, while entropy describes the average level of information in a random variable. Even though a softmax output should represent the data uncertainty, one cannot tell from a single prediction how large the amount of model uncertainty is that affects this specific prediction as well.", "id": "1b5886d5-9af7-4c82-b2f2-48edadb5f482", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "09365f96-9cba-420c-afc8-a3c8d6b4d1b6", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Measures and Quality" ], [ "subsection", "Evaluating Uncertainty in Classification Tasks" ], [ "subsubsection", "Measuring Data Uncertainty in Classification Tasks" ] ], "subsections": [], "title": "Measuring Data Uncertainty in Classification Tasks" }, { "cite_extract_rate": 1, "cites": [ 4593 ], "content": "~\\\\\nAs already discussed in Section \\ref{sec:uncertainty_quantification_methods}, a single softmax prediction is not a very reliable way for uncertainty quantification since it is often badly calibrated and does not have any information about the certainty the model itself has on this specific output . An (approximated) posterior distribution $p(\\theta \\vert D)$ on the learned model parameters can help to receive better uncertainty estimates. With such a posterior distribution, the softmax output itself becomes a random variable and one can evaluate its variation, i.e. uncertainty. 
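The maximal probability and entropy measures defined above can be computed directly from a single softmax vector; a plain Python sketch:

```python
import math

def max_probability(p):
    # Maximal class probability: a direct certainty measure.
    return max(p)

def entropy(p):
    # Shannon entropy (base 2) of a categorical prediction; the sum
    # skips zero entries since lim p*log(p) = 0 for p -> 0.
    return -sum(pk * math.log2(pk) for pk in p if pk > 0)

p = [0.7, 0.2, 0.1]
```

A uniform prediction maximizes the entropy, while a one-hot prediction drives it to zero.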
For simplicity, we denote $p(y\\vert \\theta, x)$ also as $p$ and it will be clear from context whether $p$ depends on $\\theta$ or not. The most common measures for this are the mutual information (MI), the expected Kullback-Leibler Divergence (EKL), and the predictive variance. Basically, all these measures compute the expected divergence between the (stochastic) softmax output and the expected softmax output\n\\begin{align}\n \\hat{p} = \\mathbb{E}_{\\theta\\sim p(\\theta\\vert D)}\\left[p(y\\vert x, \\theta)\\right]~.\n\\end{align}\nThe MI uses the entropy to measure the mutual dependence between two variables. In the described case, the difference between the information given in the expected softmax output and the expected information in the softmax output is compared, i.e.\n\\begin{align}\\label{eq:MI}\n \\text{MI}\\left(\\theta, y \\vert x, D\\right) = \\text{H}\\left[\\hat{p}\\right] - \\mathbb{E}_{\\theta\\sim p(\\theta\\vert D)}\\text{H}\\left[p(y \\vert x, \\theta)\\right]~.\n\\end{align}\nSmith and Gal pointed out that the MI is minimal when the knowledge about model parameters does not increase the information in the final prediction. Therefore, the MI can be interpreted as a measure of model uncertainty. \\\\\nThe Kullback-Leibler divergence measures the divergence between two given probability distributions. The EKL can be used to measure the (expected) divergence among the possible softmax outputs,\n\\begin{align}\\label{eq:EKL}\n \\mathbb{E}_{\\theta\\sim p(\\theta \\vert D)}\\left[KL(\\hat{p}~||~p)\\right] =\\mathbb{E}_{\\theta\\sim p(\\theta \\vert D)}\\left[\\sum_{i=1}^K \\hat{p}_i \\log\\left(\\frac{\\hat{p}_i}{p_i}\\right)\\right]~,\n\\end{align}\nwhich can also be interpreted as a measure of uncertainty on the model's output and therefore represents the model uncertainty. 
\\\\\nThe predictive variance evaluates the variance on the (random) softmax outputs, i.e.\n\\begin{align}\\label{eq:pred_sigma}\n \\sigma(p) &= \\mathbb{E}_{\\theta\\sim p(\\theta\\vert D)} \\left[\\left(p - \\hat{p} \\right)^2\\right]~.\n\\end{align}\nAs described in Section \\ref{sec:uncertainty_quantification_methods}, an analytically described posterior distribution $p(\\theta\\vert D)$ is only given for a subset of the Bayesian methods. Even for an analytically described distribution, the propagation of the parameter uncertainty into the prediction is in almost all cases intractable and has to be approximated, for example with Monte Carlo approximation. Similarly, ensemble methods collect predictions from $M$ neural networks, and test-time data augmentation approaches receive $M$ predictions from $M$ different augmentations applied to the original input sample. For all these cases, we receive a set of $M$ samples, $\\left\\{p^i\\right\\}_{i=1}^M$, which can be used to approximate the intractable or even undefined underlying distribution. With these approximations, the measures defined in \\eqref{eq:MI}, \\eqref{eq:EKL}, and \\eqref{eq:pred_sigma} can be applied straightforwardly, with the expectations replaced by averages over the samples. 
For example, the expected softmax output becomes \n\\begin{align*}\n \\hat{p} \\approx \\frac{1}{M}\\sum_{i=1}^M p^i~.\n\\end{align*}\nFor the expectations given in \\eqref{eq:MI}, \\eqref{eq:EKL}, and \\eqref{eq:pred_sigma}, the expectation is approximated similarly.", "id": "d93d8bab-5114-41f4-a244-1861fac352b3", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "09365f96-9cba-420c-afc8-a3c8d6b4d1b6", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Measures and Quality" ], [ "subsection", "Evaluating Uncertainty in Classification Tasks" ], [ "subsubsection", "Measuring Model Uncertainty in Classification Tasks" ] ], "subsections": [], "title": "Measuring Model Uncertainty in Classification Tasks" }, { "cite_extract_rate": 0.7000000000000001, "cites": [ 4592, 7130, 8639, 4619, 8633, 4613, 3288 ], "content": "~\\\\\nAlthough these uncertainty measures are widely used to capture the variability among several predictions derived from Bayesian neural networks , ensemble methods , or test-time data augmentation methods , they cannot capture distributional shifts in the input data or out-of-distribution examples, which could lead to a biased inference process and a falsely stated confidence. If all predictors attribute a high probability mass to the same (false) class label, this induces a low variability among the estimates. Hence, the network seams to be certain about its prediction, while the uncertainty in the prediction itself (given by the softmax probabilities) is also evaluated to be low. To tackle this issue, several approaches described in Section \\ref{sec:uncertainty_quantification_methods} take the magnitude of the logits into account, since a larger logit indicates larger evidence for the corresponding class . 
Thus, the methods either interpret the total sum of the (exponentials of) the logits as the precision value of a Dirichlet distribution (see description of Dirichlet Priors in Section \\ref{sec:deterministic_methods}), or as a collection of evidence that is compared to a defined constant. One can also derive a total class probability for each class individually by applying the sigmoid function to each logit. Based on the class-wise total probabilities, OOD samples might be detected more easily, since all classes can have low probability at the same time. Other methods deliver an explicit measure of how well new data samples fit the training data distribution. Based on this, they also give a measure of whether a sample will be predicted correctly.", "id": "0c83491b-d4ea-49e8-af4d-01614b6386be", "level": "subsubsection", "origin_cites_number": 10, "parent_id": "09365f96-9cba-420c-a86caf9c3b67", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Measures and Quality" ], [ "subsection", "Evaluating Uncertainty in Classification Tasks" ], [ "subsubsection", "Measuring Distributional Uncertainty in Classification Tasks" ] ], "subsections": [], "title": "Measuring Distributional Uncertainty in Classification Tasks" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 8633, 8639, 1624 ], "content": "~\\\\\nWhile the measures described above measure the performance of individual predictions, others evaluate the usage of these measures on a set of samples. Measures of uncertainty can be used to separate correctly from falsely classified samples, or in-domain from out-of-distribution samples. For that, the samples are split into two sets, for example in-domain and out-of-distribution or correctly classified and falsely classified. The two most common approaches are the \\textit{Receiver Operating Characteristic} (ROC) curve and the \\textit{Precision-Recall} (PR) curve. 
Both methods generate curves based on different thresholds of the underlying measure. For each considered threshold, the ROC curve plots the true positive rate against the false positive rate\\footnote{The true positive rate is the number of samples correctly predicted as positive divided by the total number of positive samples. The false positive rate is the number of samples falsely predicted as positive divided by the total number of negative samples.}, and the PR curve plots the precision against the recall\\footnote{The precision is the number of samples correctly predicted as positive divided by the total number of samples predicted as positive. The recall is the number of samples correctly predicted as positive divided by the total number of positive samples.}. While the ROC and PR curves give a visual idea of how well the underlying measures are suited to separate the two considered test cases, they do not give a quantitative measure. To obtain one, the area under the curve (AUC) can be evaluated. Roughly speaking, the AUC gives the probability that a randomly chosen positive sample leads to a higher measure than a randomly chosen negative example. For example, the maximum softmax value tends to rank correctly classified examples higher than falsely classified ones. Hendrycks and Gimpel showed for several application fields that correct predictions have in general a higher predicted certainty in the softmax value than false predictions. Especially for the evaluation of in-domain and out-of-distribution examples, the \\textit{Area Under Receiver Operating Curve} (AUROC) and the \\textit{Area Under Precision Recall Curve} (AUPRC) are commonly used. The clear weakness of these evaluations is the fact that the performance is evaluated, and the optimal threshold computed, based on a given test data set. 
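The probabilistic reading of the AUC given above can be computed directly as a rank statistic. A sketch with hypothetical score lists, e.g. maximum softmax values of correctly and falsely classified samples:

```python
def auroc(pos_scores, neg_scores):
    """AUROC as a rank statistic: the probability that a randomly chosen
    positive sample receives a higher score than a randomly chosen
    negative one (ties counted as 1/2)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

A value of 1.0 means the measure separates the two sets perfectly, while 0.5 corresponds to chance level.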
A distribution shift from the test set distribution can ruin the whole performance and make the derived thresholds impractical.", "id": "9c2e0b54-8727-4734-918a-7309443b8949", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "09365f96-9cba-420c-afc8-a3c8d6b4d1b6", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Measures and Quality" ], [ "subsection", "Evaluating Uncertainty in Classification Tasks" ], [ "subsubsection", "Performance Measure on Complete Data Set" ] ], "subsections": [], "title": "Performance Measure on Complete Data Set" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "d7ef6da1-d088-4266-8f76-706da1adb552", "level": "subsection", "origin_cites_number": 0, "parent_id": "07c7f2da-ef2c-475e-823a-a86caf9c3b67", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Measures and Quality" ], [ "subsection", "Evaluating Uncertainty in Regression Tasks" ] ], "subsections": [ "34d11d6d-7a6b-48a6-83c2-934650e857f9", "374d27da-6f8b-43df-b519-3c6ebef34b2a" ], "title": "Evaluating Uncertainty in Regression Tasks" }, { "cite_extract_rate": 0.75, "cites": [ 8830, 4686, 3288 ], "content": "In contrast to classification tasks, where the network typically outputs a probability distritution over the possible classes, regression tasks only predict a pointwise estimation without any hint of data uncertainty. As already described in Section \\ref{sec:uncertainty_quantification_methods}, a common approach to overcome this is to let the network predict the parameters of a probability distribution, for example a mean vector and a standard deviation for a normally distributed uncertainty . Doing so, a measure of data uncertainty is directly given. The prediction of the standard deviation allows an analytical description that the (unknown) true value is within a specific region. 
The interval that covers the true value with a probability of $\\alpha$ (under the assumption that the predicted distribution is correct) is given by \n\\begin{equation}\n \\left[\\hat{y}-\\Phi^{-1}\\left(\\frac{1+\\alpha}{2}\\right)\\cdot\\sigma;\\quad \\hat{y}+\\Phi^{-1}\\left(\\frac{1+\\alpha}{2}\\right)\\cdot\\sigma\\right]\n\\end{equation}\nwhere $\\Phi^{-1}$ is the quantile function, the inverse of the cumulative probability function. For a given probability value $\\alpha$ the quantile function gives a boundary, such that $100\\cdot\\alpha\\%$ of a standard normal distribution's probability mass is on values smaller than $\\Phi^{-1}(\\alpha)$. The argument $(1+\\alpha)/2$ is used since the remaining probability mass of $1-\\alpha$ is split evenly between the two tails. Quantiles assume some probability distribution and interpret the given prediction as the expected value of the distribution. \\\\\nIn contrast to this, other approaches directly predict a so-called prediction interval (PI) \n\\begin{align}\n PI(x) = \\left[B_l, B_u\\right]\n\\end{align}\nin which the prediction is assumed to lie. Such intervals induce an uncertainty as a uniform distribution without giving a concrete prediction. The certainty of such approaches can, as the name indicates, be directly measured by the size of the predicted interval. The \\textit{Mean Prediction Interval Width} (MPIW) can be used to evaluate the average certainty of the model. In order to evaluate the correctness of the predicted intervals, the \\textit{Prediction Interval Coverage Probability} (PICP) can be applied. 
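Both interval measures reduce to a few lines; a sketch with hypothetical intervals $[B_l, B_u]$ and ground-truth values:

```python
def mpiw(intervals):
    """Mean Prediction Interval Width: the average width of the predicted
    intervals, i.e. the model's average certainty."""
    return sum(hi - lo for lo, hi in intervals) / len(intervals)

def picp(intervals, y_true):
    """Prediction Interval Coverage Probability: the fraction of ground-truth
    values that actually fall into their predicted interval."""
    covered = sum(1 for (lo, hi), y in zip(intervals, y_true) if lo <= y <= hi)
    return covered / len(y_true)
```

A trivially wide interval maximizes the PICP but also the MPIW, so the two measures are typically reported together.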
The PICP represents the percentage of test predictions that fall into a prediction interval and is defined as \n\\begin{equation}\\label{eq:picp}\n \\text{PICP}=\\frac{c}{n}~,\n\\end{equation}\nwhere $n$ is the total number of predictions and $c$ the number of ground truth values that are actually captured by the predicted intervals.", "id": "34d11d6d-7a6b-48a6-83c2-934650e857f9", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "d7ef6da1-d088-4266-8f76-706da1adb552", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Measures and Quality" ], [ "subsection", "Evaluating Uncertainty in Regression Tasks" ], [ "subsubsection", "Measuring Data Uncertainty in Regression Predictions" ] ], "subsections": [], "title": "Measuring Data Uncertainty in Regression Predictions" }, { "cite_extract_rate": 0, "cites": [], "content": "~\\\\\nIn Section \\ref{sec:uncertainty_types_and_sources}, it is described that model uncertainty is mainly caused by the model's architecture, the training process, and underrepresented areas in the training data. Hence, there is no real difference in the causes and effects of model uncertainty between regression and classification tasks, such that model uncertainty in regression tasks can be measured equivalently as already described for classification tasks, i.e. 
in most cases by approximating an average prediction and measuring the divergence among the single predictions.", "id": "374d27da-6f8b-43df-b519-3c6ebef34b2a", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "d7ef6da1-d088-4266-8f76-706da1adb552", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Measures and Quality" ], [ "subsection", "Evaluating Uncertainty in Regression Tasks" ], [ "subsubsection", "Measuring Model Uncertainty in Regression Predictions" ] ], "subsections": [], "title": "Measuring Model Uncertainty in Regression Predictions" }, { "cite_extract_rate": 0.555555555555555, "cites": [ 4606, 4688, 4687, 8831, 4603 ], "content": "The evaluation of uncertainties in segmentation tasks is very similar to the evaluation for classification problems. The uncertainty is estimated in segmentation tasks using approximations of Bayesian inference or test-time data augmentation techniques. In the context of segmentation, the uncertainty in pixel-wise segmentation is measured using confidence intervals, the predictive variance, the predictive entropy, or the mutual information. The uncertainty in structure (volume) estimation is obtained by averaging over all pixel-wise uncertainty estimates. The quality of volume uncertainties is assessed by evaluating the coefficient of variation, the average Dice score, or the intersection over union. These metrics measure the agreement in area overlap between multiple estimates in a pair-wise fashion.\nIdeally, a false segmentation should result in an increase in pixel-wise and structure uncertainty. To evaluate whether this is the case, Nair et al. evaluated the pixel-level true positive rate and false detection rate as well as the ROC curves for the retained pixels at different uncertainty thresholds. Similar to , McClure et al. 
also analyzed the area under the ROC curve.", "id": "52da755e-6d99-4ad1-8f6b-287af0fc5c38", "level": "subsection", "origin_cites_number": 9, "parent_id": "07c7f2da-ef2c-475e-823a-a86caf9c3b67", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Uncertainty Measures and Quality" ], [ "subsection", "Evaluating Uncertainty in Segmentation Tasks" ] ], "subsections": [], "title": "Evaluating Uncertainty in Segmentation Tasks" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 759, 4689, 4690, 3288 ], "content": "\\label{sec:calibration}\nA predictor is called well-calibrated if the derived predictive confidence represents a good approximation of the actual probability of correctness . Therefore, in order to make use of uncertainty quantification methods, one has to be sure that the network is well calibrated. Formally, for classification tasks a neural network $f_\\theta$ is calibrated if it holds that \n\\begin{align}\\label{eq:calibration_classification}\n \\forall p \\in [0,1]:\\quad \\sum_{i=1}^N \\sum_{k=1}^K\\frac{y_{i,k}\\cdot\\mathbb{I}\\{f_\\theta(x_i)_k=p\\}}{\\mathbb{I}\\{f_\\theta(x_i)_k=p\\}} \\xrightarrow[]{N \\to \\infty} p~.\n\\end{align}\nHere, $\\mathbb{I}\\{\\cdot\\}$ is the indicator function that is either 1 if the condition is true or 0 if it is false and $y_{i,k}$ is the $k$-th entry in the one-hot encoded groundtruth vector of a training sample $(x_i,y_i)$. 
This formulation means that for example $30\\%$ of all predictions with a predictive confidence of $70\\%$ should actually be false.\nFor regression tasks the calibration can be defined such that predicted confidence intervals should match the confidence intervals empirically computed from the data set, i.e.\n\\begin{equation}\\label{eq:calibration_regression}\n\\forall p \\in [0,1]:\\quad \\sum_{i=1}^N\\frac{\\mathbb{I}\\left\\{y_i\\in \\text{conf}_{p}(f_\\theta(x_i))\\right\\}}{N} \\xrightarrow[]{N \\to \\infty} p,\n\\end{equation}\nwhere $\\text{conf}_p$ is the confidence interval that covers $p$ percent of a distribution. \nA DNN is called under-confident if the left hand side of \\eqref{eq:calibration_classification} and \\eqref{eq:calibration_regression} is larger than $p$. Equivalently, it is called over-confident if the terms are smaller than $p$. The calibration property of a DNN can be visualized using a \\textit{reliability diagram}, as shown in Figure \\ref{fig:reliability_diagram}.\nIn general, calibration errors are caused by factors related to model uncertainty. This is intuitively clear, since, as discussed in Section \\ref{sec:uncertainty_types_and_sources}, data uncertainty represents the underlying uncertainty that an input $x$ and a target $y$ represent the same real-world information. Consequently, correctly predicted data uncertainty would lead to a perfectly calibrated neural network.\nIn practice, several works pointed out that deeper networks tend to be more overconfident than shallower ones. \\\\\nSeveral methods for uncertainty estimation presented in Section \\ref{sec:uncertainty_quantification_methods} also improve the network's calibration. 
This is clear, since these methods quantify model and data uncertainty separately and aim at reducing the model uncertainty on the predictions.\nBesides the methods that improve the calibration by reducing the model uncertainty, a large and growing body of literature has investigated methods for explicitly reducing calibration errors. These methods are presented in the following, followed by measures to quantify the calibration error. It is important to note that these methods do not reduce the model uncertainty, but propagate the model uncertainty onto the representation of the data uncertainty. For example, if a binary classifier is overfitted and predicts all samples of a test set as class A with probability 1, while half of the test samples are actually class B, the recalibration methods might map the network output to 0.5 in order to have a reliable confidence. This probability of 0.5 is not equivalent to the data uncertainty but represents the model uncertainty propagated onto the predicted data uncertainty.\n\\begin{figure*}[t]\n \\begin{center}\n \\begin{subfigure}{0.25\\textwidth}\\includegraphics[width=\\textwidth]{figures/overconfident.png}\\subcaption[]{Overconfidence}\\end{subfigure}\n \\begin{subfigure}{0.25\\textwidth}\\includegraphics[width=\\textwidth]{figures/underconfident.png}\\subcaption[]{Underconfidence}\\end{subfigure}\n \\begin{subfigure}{0.25\\textwidth}\\includegraphics[width=\\textwidth]{figures/calibrated.png}\\subcaption[]{Calibrated classifier}\\end{subfigure}\n \\end{center}\n \\caption{(a) Reliability diagram showing an overconfident classifier: The bin-wise accuracy is smaller than the corresponding confidence. (b) Reliability diagram of an underconfident classifier: The bin-wise accuracy is larger than the corresponding confidence. 
(c) Reliability diagram of a well calibrated classifier: The confidence fits the actual accuracy for the single bins.}\n \\label{fig:reliability_diagram}\n\\end{figure*}", "id": "71582559-f2d3-45a1-a741-a40d9e53b709", "level": "section", "origin_cites_number": 6, "parent_id": "039eb0dd-a257-4d85-a714-ad5fc65c6f0b", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Calibration" ] ], "subsections": [ "c8944f89-810f-4c7d-ab40-2e0f83834278", "e0dba29a-a672-47b8-bc13-8cb4402e7555" ], "title": "Calibration" }, { "cite_extract_rate": 0.8636363636363631, "cites": [ 4612, 3300, 4141, 8832, 759, 301, 4607, 3254, 4691, 3288, 4597, 4693, 4676, 4694, 4695, 3251, 4692, 3234, 4696 ], "content": "Calibration methods can be classified into three main groups according to the step when they are applied: \n\\begin{itemize}\n \\setlength\\itemsep{0.5em}\n \\item \\textit{Regularization methods applied during the training phase} \\\\\n These methods modify the objective, optimization and/or regularization procedure in order to build DNNs that are inherently calibrated.\n \\item \\textit{Post-processing methods applied after the training process of the DNN} \\\\ \n These methods require a held-out calibration data set to adjust the prediction scores for recalibration. They only work under the assumption that the distribution of the left-out validation set is equivalent to the distribution, on which inference is done.\n Hence, also the size of the validation data set can influence the calibration result. \n \\item \\textit{Neural network uncertainty estimation methods}\\\\\n Approaches, as presented in Section \\ref{sec:uncertainty_quantification_methods}, that reduce the amount of model uncertainty on a neural network's confidence prediction, also lead to a better calibrated predictor. This is because the remaining predicted data uncertainty better represents the actual uncertainty on the prediction. 
Such methods are based for example on Bayesian methods or deep ensembles .\n\\end{itemize}\nIn the following, we present the three types of calibration methods in more detail. \n\\begin{figure*}[h]\n \\centering\n \\resizebox{\\textwidth}{!}{\n \\begin{tikzpicture}[framed, every annotation/.style = {draw,\n fill = white, font = \\large}]\n \\path[mindmap,concept color=black!40,text=black,\n every node/.style={concept,circular drop shadow},\n root/.style = {concept color=black!40,\n font=\\Large\\bfseries,text width=10em},\n level 1 concept/.append style={font=\\normalsize\\bfseries, clockwise from=-15,\n sibling angle=75,text width=7em,\n level distance=15em,inner sep=0pt},\n level 2 concept/.append style={font=\\small\\bfseries,level distance=9em},\n ]\n node[root] {Neural Network Calibration Methods} [clockwise from=-15]\n child[concept color=orange] {\n node[concept] {Regularization Methods}\n [clockwise from=90]\n child { node[concept] (Objective Function Modification)\n {Objective Function Modification\\textsuperscript{11}}}\n child { node[concept] (Data Augmentation)\n {Data Augmentation\\textsuperscript{10}} }\n child { node[concept] (Label Smoothing)\n {Label Smoothing\\textsuperscript{9}}}\n child { node[concept] (Exposure to OOD Examples)\n {Exposure to OOD examples\\textsuperscript{8}}}\n }\n child[concept color=green!40!black] {\n node[concept] {Uncertainty Estimation Approaches}\n [clockwise from=-60]\n child { node[concept] (Bayesian Neural Networks)\n {Bayesian Neural Networks\\textsuperscript{6}}}\n child { node[concept] (Ensemble of Neural Networks)\n {Ensemble of Neural Networks\\textsuperscript{5}}}\n }\n child[concept color=blue!60] {\n node {Post-Processing Methods} [clockwise from=-90]\n child { node (temeprature_scaling){Temperature Scaling\\textsuperscript{4}} }\n child { node (histogram_binning){Histogram Binning\\textsuperscript{3}} }\n child { node (gaussian_process) {Gaussian Processes\\textsuperscript{2}} }\n child { node 
(ensemble_of_post_processing_models) {Ensemble of Post-Processing Models\\textsuperscript{1}} }\n }; \n \\node[draw,text width=18cm, align=left] at (0,-10.5){\n \\textsuperscript{1}~\n ~~~\\textsuperscript{2}~\n ~~~\\textsuperscript{3}~\n ~~~\\textsuperscript{4}\n ~~~\\textsuperscript{5}~\n ~~~\\textsuperscript{6}~\n \\textsuperscript{8}~\n ~~~\\textsuperscript{9}~\n ~~~\\textsuperscript{10}~\n ~~~\\textsuperscript{11}~\n };\n \\end{tikzpicture}\n }\n \\caption{Visualization of the different types of uncertainty calibration methods presented in this paper.}\n \\label{fig:calibration_diagram}\n\\end{figure*}", "id": "c8944f89-810f-4c7d-ab40-2e0f83834278", "level": "subsection", "origin_cites_number": 22, "parent_id": "71582559-f2d3-45a1-a741-a40d9e53b709", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Calibration" ], [ "subsection", "Calibration Methods" ] ], "subsections": [ "886e481a-f19e-4a18-8e0f-c24754d408ef", "14644a47-0f49-48f7-a0d2-b62942954202", "d193f79c-349d-447b-816c-8fa8897f56dd" ], "title": "Calibration Methods" }, { "cite_extract_rate": 0.636363636363636, "cites": [ 4141, 301, 4693, 4676, 3254, 7191, 4691 ], "content": "Regularization methods for calibrating confidences manipulate the training of DNNs by modifying the objective function or by augmenting the training data set. The goal and idea of regularization methods is very similar to the methods presented in Section \\ref{sec:deterministic_methods} where the methods mainly quantify model and data uncertainty separately within a single forward pass. However, the methods in Section \\ref{sec:deterministic_methods} quantify the model and data uncertainty, while these calibration methods are regularized in order to minimize the model uncertainty. Following, at inference, the model uncertainty cannot be obtained anymore. 
This is the main motivation for us to separate the approaches presented below from the approaches presented in Section \\ref{sec:deterministic_methods}. \\\\\nOne popular regularization based calibration method is label smoothing . For label smoothing, the labels of the training examples are modified by taking a small portion $\\alpha$ of the true class' probability mass and assign it uniformly to the false classes. For hard, non-smoothed labels, the optimum cannot be reached in practice, as the gradient of the output with respect to the logit vector $z$,\n\\begin{align}\n \\begin{split}\n \\nabla_z \\text{CE}(y, \\hat y(z)) &= \\text{softmax}(z) - y \\\\\n &= \\frac{\\exp(z)}{\\sum_{i=1}^K \\exp(z_i)}-y~,\n \\end{split}\n\\end{align}\ncan only converge to zero with increasing distance between the true and false classes' logits. As a result, the logits of the correct class are much larger than the logits for the incorrect classes and the logits of the incorrect classes can be very different to each other. Label-smoothing avoids this and while it generally leads to a higher training loss, the calibration error decreases and the accuracy often increases as well . \\\\\nSeo et al. extended the idea of label smoothing and directly aimed at reducing the model uncertainty. For this, they sampled $T$ forward passes of a stochastic neural network already at training time. Based on the $T$ forward passes of a training sample $(x_i,y_i)$, a normalized model variance $\\alpha_i$ is derived as the mean of the Bhattacharyya coefficients between the $T$ individual predictions $\\hat y_1,...,\\hat y_T$ and the average prediction $\\bar y = \\frac{1}{T}\\sum_{t=1}^T\\hat y_t$,\n\\begin{align}\n \\begin{split}\n \\alpha_i &= \\frac{1}{T}\\sum_{t=1}^T BC(\\bar y_i, \\hat y_{i,t}) \\\\\n &=\\frac{1}{T}\\sum_{t=1}^T \\sum_{k=1}^K \\sqrt{\\bar y_{i,k} \\cdot \\hat y_{i,t,k}}~.\n \\end{split}\n\\end{align}\nBased on this $\\alpha_i$, Seo et al. 
introduced the variance-weighted confidence-integrated loss function that is a convex combination of two contradictive loss functions,\n\\begin{align}\n \\begin{split}\n L^{\\text{VWCI}}(\\theta)=-\\sum_{i=1}^N(1-\\alpha_i)L_{\\text{GT}}^{(i)}(\\theta) + \\alpha_i L_{\\text{U}}^{(i)}(\\theta)~,\n \\end{split}\n\\end{align}\nwhere $L_\\text{GT}^{(i)}$ is the mean cross-entropy computed for the training sample $x_i$ with given ground-truth $y_i$. $L_\\text{U}$ represents the mean KL-divergence between a uniform target probability vector and the computed prediction. The adaptive smoothing parameter ${\\alpha}_i$ pushes predictions of training samples with high model uncertainty (given by high variances) towards a uniform distribution while increasing the prediction scores of samples with low model uncertainty. As a result, variances in the predictions of a single sample are reduced and the network can then be applied with a single forward pass at inference.\nPereyra et al. combated the overconfidence issue by adding the negative entropy to the standard loss function and therefore a penalty that increases with the network's predicted confidence. This results in the entropy-based objective function $L^H$, which is defined as\n\\begin{equation}\nL^H(\\theta) = -\\frac{1}{N} \\sum_{i=1}^{N} y_i \\log \\hat{y}_i - \\alpha_i H(\\hat{y}_i)~,\n\\end{equation}\nwhere $H(\\hat{y}_i)$ is the entropy of the output and $\\alpha_i$ a parameter that controls the strength of the entropy-based confidence penalty. The parameter $\\alpha_i$ is computed equivalently as for the VWCI loss. \nInstead of regularizing the training process by modifying the objective function, Thulasidasan et al. regularized it by using a data-agnostic data augmentation technique named mixup . 
In mixup training, the network is not only trained on the training data, but also on virtual training samples $(\\tilde x, \\tilde y)$ generated by a convex combination of two random training pairs $(x_i,y_i)$ and $(x_j,y_j)$, i.e. \n\\begin{equation}\n\\tilde{x} = \\lambda x_i + (1 - \\lambda) x_j\n\\end{equation}\n\\begin{equation}\n\\tilde{y} = \\lambda y_i + (1 - \\lambda) y_j~.\n\\end{equation}\nAccording to , the label smoothing resulting from mixup training can be viewed as a form of entropy-based regularization resulting in inherent calibration of networks trained with mixup. Maro\\~{n}as et al. see mixup training among the most popular data augmentation regularization techniques due to its ability to improve the calibration as well as the accuracy. However, they argued that in mixup training the data uncertainty in mixed inputs affects the calibration and therefore mixup does not necessarily improve the calibration. They also underlined this claim empirically. Similarly, Rahaman and Thiery experimentally showed that the distributional shift induced by data augmentation techniques such as mixup training can negatively affect the confidence calibration. Based on this observation, Maro\\~{n}as et al. proposed a new objective function that explicitly takes the calibration performance on the unmixed input samples into account. Inspired by the expected calibration error (ECE, see Section \\ref{sec:calibration_quality}) introduced by Naeini et al., they measured the calibration performance on the unmixed samples for each batch $b$ by the differentiable squared differences between the batch accuracy and the mean confidence on the batch samples. 
The total loss is given as a weighted combination of the original loss on mixed and unmixed samples and the calibration measure evaluated only on the unmixed samples: \n\\begin{equation}\nL^{ECE}(\\theta) = \\frac{1}{B} \\sum_{b \\in B} L^b(\\theta) + \\beta ECE_b~,\n\\end{equation}\nwhere $L^b(\\theta)$ is the original unregularized loss using training and mixed samples included in batch $b$ and $\\beta$ is a hyperparameter controlling the relative importance given to the batchwise expected calibration error $ECE_b$. By adding the batchwise calibration error for each batch $b \\in B$ to the standard loss function, the miscalibration induced by mixup training is regularized. \nIn the context of data augmentation, Patel et al. improved the calibration of uncertainty estimates by using on-manifold data augmentation. While mixup training combines training samples, on-manifold adversarial training generate out-of-domain samples using adversarial attack. They experimentally showed that on-manifold adversarial training outperforms mixup training in improving the calibration. Similar to , Hendrycks et al. showed that exposing classifiers to out-of-distribution examples at training can help to improve the calibration.", "id": "886e481a-f19e-4a18-8e0f-c24754d408ef", "level": "subsubsection", "origin_cites_number": 11, "parent_id": "c8944f89-810f-4c7d-ab40-2e0f83834278", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Calibration" ], [ "subsection", "Calibration Methods" ], [ "subsubsection", "Regularization Methods" ] ], "subsections": [], "title": "Regularization Methods" }, { "cite_extract_rate": 0.8, "cites": [ 759, 4612, 4607, 4697 ], "content": "Post-processing (or post-hoc) methods are applied after the training process and aim at learning a re-calibration function. For this, a subset of the training data is held-out during the training process and used as a calibration set. 
The re-calibration function is applied to the network's outputs (e.g. the logit vector) and yields an improved calibration learned on the left-out calibration set. Zhang et al. discussed three requirements that should be satisfied by post-hoc calibration methods. They should 
\begin{enumerate}
 \item preserve the accuracy, i.e. should not affect the predictor's performance.
 \item be data efficient, i.e. only a small fraction of the training data set should be left out for the calibration.
 \item be able to approximate the correct re-calibration map as long as there is enough data available for calibration. 
\end{enumerate}
Furthermore, they pointed out that none of the existing approaches fulfills all three requirements. \\
For classification tasks, the most basic but still very efficient way of post-hoc calibration is temperature scaling . For temperature scaling, the temperature $T>0$ of the softmax function
\begin{equation}\label{eq:temp_scaling}
 \text{softmax}(z_i) = \frac{\exp(z_i/T)}{\sum_{j=1}^K\exp(z_j/T)}~,
\end{equation}
is optimized. For $T=1$ the function remains the regular softmax function. For $T>1$ the output changes such that its entropy increases, i.e. the predicted confidence decreases. For $T\in(0,1)$ the entropy decreases and, consequently, the predicted confidence increases. As already mentioned above, a perfectly calibrated neural network outputs MAP estimates. Since the learned transformation can only affect the uncertainty, log-likelihood based losses such as cross-entropy do not have to be replaced by a special calibration loss. While the data efficiency and the preservation of the accuracy are given, the expressiveness of basic temperature scaling is limited . To overcome this, Zhang et al. investigated an ensemble of several temperature scaling models. Doing so, they achieved better calibrated predictions, while preserving the classification accuracy and improving the data efficiency and the expressive power.
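To illustrate, basic temperature scaling can be sketched as fitting $T$ on the held-out calibration set by minimizing the negative log-likelihood. The function names are our own and the simple grid search stands in for the gradient-based optimization typically used in practice:

```python
import numpy as np

def softmax(logits, t=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / t
    z = z - z.max(axis=1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits, labels, grid=None):
    """Fit T > 0 on a held-out calibration set by minimizing the
    negative log-likelihood (grid search for simplicity)."""
    if grid is None:
        grid = np.linspace(0.05, 5.0, 200)
    n = len(labels)
    def nll(t):
        p = softmax(logits, t)
        return -np.log(p[np.arange(n), labels] + 1e-12).mean()
    return float(min(grid, key=nll))
```

Since dividing the logits by a scalar does not change their ordering, the predicted class, and hence the accuracy, is preserved.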
\nKull et al. were motivated by non-neural network calibration methods, where the calibration is performed class-wise as a one-vs-all binary calibration. They showed that this approach can be interpreted as learning a linear transformation of the predicted log-likelihoods followed by a softmax function. This again is equivalent to train a dense layer on the log-probabilities and hence the method is also very easy to implement and apply. Obviously, the original predictions are not guaranteed to be preserved. \nAnalogous to temperature scaling for classification networks, Levi et al. introduced standard deviation scaling (std-scaling) for regression networks. As the name indicates, the method is trained to rescale the predicted standard deviations of a given network. Equivalently to the motivation of optimizing temperature scaling with the cross-entropy loss, std-scaling can be trained using the Gaussian log-likelihood function as loss, which is in general also used for the training of regression networks, which also give a prediction for the data uncertainty.\nWenger et al. proposed a Gaussian process (GP) based method, which can be used to calibrate any multi-class classifier that outputs confidence values and presented their methodology by calibrating neural networks. The main idea behind their work is to learn the calibration map by a Gaussian process that is trained on the networks confidence predictions and the corresponding ground-truths in the left out calibration set. 
For this approach, the preservation of the predictions is also not assured.", "id": "14644a47-0f49-48f7-a0d2-b62942954202", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "c8944f89-810f-4c7d-ab40-2e0f83834278", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Calibration" ], [ "subsection", "Calibration Methods" ], [ "subsubsection", "Post-Processing Methods" ] ], "subsections": [], "title": "Post-Processing Methods" }, { "cite_extract_rate": 1, "cites": [ 3300, 8832, 4658, 4692, 4676, 4695, 4694, 3288 ], "content": "As already discussed above, removing the model uncertainty and receiving an accurate estimation of the data uncertainty leads to a well calibrated predictor. Following several works based on deep ensembles and BNNs, also compared their performance to other methods based on the resulting calibration. Lakshminarayanan et al. and Mehrtash et al. reported an improved calibration by applying deep ensembles compared to single networks. However, Rahaman and Thiery showed that for specific configurations as the usage of mixup-regularization, deep ensembles can even increase the calibration error. On the other side they showed that applying temperature scaling on the averaged predictions can give a significant improvement on the calibration. \\\\\nFor the Bayesian approaches, showed that restricting the Bayesian approximation to the weights of the last fully connected layer of a DNN is already enough to improve the calibration significantly. Zhang et al. and Laves et al. showed that confidence estimates computed with MC dropout can be poorly calibrated. To overcome this, Zhang et al. proposed structured dropout, which consists of dropping channel, blocks or layers, to promote model diversity and reduce calibration errors. 
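For illustration, std-scaling as described above can be sketched as fitting a single scalar $s>0$ that rescales the predicted standard deviations by minimizing the Gaussian negative log-likelihood on the calibration set. The function name is our own, and grid search replaces the gradient-based fitting used in practice:

```python
import numpy as np

def fit_std_scale(mu, sigma, y, grid=None):
    """Fit a scalar s > 0 rescaling predicted standard deviations
    (sigma -> s * sigma) by minimizing the Gaussian negative
    log-likelihood on a held-out calibration set.

    mu, sigma: (N,) predicted means and standard deviations, y: (N,) targets.
    """
    if grid is None:
        grid = np.linspace(0.1, 10.0, 500)
    def nll(s):
        var = (s * sigma) ** 2
        return float(np.mean(0.5 * np.log(2.0 * np.pi * var)
                             + (y - mu) ** 2 / (2.0 * var)))
    return float(min(grid, key=nll))
```

Since only the standard deviations are rescaled, the point predictions $\mu$ and thus the regression accuracy remain untouched.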
\\\\", "id": "d193f79c-349d-447b-816c-8fa8897f56dd", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "c8944f89-810f-4c7d-ab40-2e0f83834278", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Calibration" ], [ "subsection", "Calibration Methods" ], [ "subsubsection", "Calibration with Uncertainty Estimation Approaches" ] ], "subsections": [], "title": "Calibration with Uncertainty Estimation Approaches" }, { "cite_extract_rate": 0.75, "cites": [ 8832, 759, 4693, 4607, 3254, 4699, 4698, 7838, 4694 ], "content": "\\label{sec:calibration_quality}\nEvaluating calibration consists of measuring the statistical consistency between the predictive distributions and the observations . \nFor classification tasks, several calibration measures are based on binning. For that, the predictions are ordered by the predicted confidence $\\hat p_i$ and grouped into $M$ bins $b_1,...,b_M$. Following, the calibration of the single bins is evaluated by setting the average bin confidence \n\\begin{equation}\\label{eq:conf}\n \\text{conf}(b_m)=\\frac{1}{\\vert b_m \\vert} \\sum_{s\\in b_m}\\hat{p}_s\n\\end{equation}\n in relation to the average bin accuracy \n \\begin{equation}\\label{eq:acc}\n \\text{acc}(b_m) = \\frac{1}{\\vert b_m \\vert} \\sum_{s \\in b_m} \\mathbbm{1}(\\hat{y}_s=y_s)~,\n\\end{equation}\nwhere $\\hat{y}_s$, $y_s$ and $\\hat{p}_s$ refer to the predicted and true class label of a sample $s$. As noted in , confidences are well-calibrated when for each bin $\\text{acc}(b_m)=\\text{conf}(b_m)$.\nFor a visual evaluation of a model's calibration, the reliability diagram introduced by is widely used. For a reliability diagram, the $\\text{conf}(b_m)$ is plotted against $\\text{acc}(b_m)$. For a well-calibrated model, the plot should be close to the diagonal, as visualized in Figure \\ref{fig:reliability_diagram}. The basic reliability diagram visualization does not distinguish between different classes. 
In order to do so, and hence to improve the interpretability of the calibration error, Vaicenavicius et al. used an alternative visualization named multidimensional reliability diagram. 
For a quantitative evaluation of a model's calibration, different calibration measures can be considered. \\
The \textit{Expected Calibration Error} (ECE) is a widely used binning-based calibration measure . For the ECE, $M$ equally-spaced bins $b_1,...,b_M$ are considered, where $b_m$ denotes the set of indices of samples whose confidences fall into the interval $I_m=]\frac{m-1}{M},\frac{m}{M}]$. The ECE is then computed as the weighted average of the bin-wise calibration errors, i.e.
\begin{equation}\label{eq:ece}
 \text{ECE} = \sum_{m=1}^{M}\frac{\vert b_m \vert}{N}\vert \text{acc}(b_m)-\text{conf}(b_m)\vert~.
\end{equation}
For the ECE, only the predicted confidence score (top-label) is considered. In contrast to this, the \textit{Static Calibration Error} (SCE) considers the predictions of all classes (all-labels). For each class, the SCE computes the calibration error within the bins and then averages across all the bins, i.e.
\begin{equation}\label{eq:sce}
 \text{SCE} = \frac{1}{K} \sum_{k=1}^{K} \sum_{m=1}^{M} \frac{\vert b_{m_k} \vert}{N} \vert \text{conf}(b_{m_k})-\text{acc}(b_{m_k}) \vert~.
\end{equation}
Here, $\text{conf}(b_{m_k})$ and $\text{acc}(b_{m_k})$ are the confidence and accuracy of bin $b_m$ for class label $k$, respectively. Nixon et al. empirically showed that all-labels calibration measures such as the SCE are more effective in assessing the calibration error than top-label calibration measures such as the ECE.
In contrast to the ECE and SCE, which group predictions into $M$ equally-spaced bins (which in general leads to different numbers of evaluation samples per bin), the adaptive calibration error adaptively groups predictions into $R$ bins of different width but with an equal number of predictions each.
With this adaptive bin size, the \textit{adaptive Expected Calibration Error} (aECE)
\begin{equation}\label{eq:a_ece}
 \text{aECE} = \frac{1}{R}\sum_{r=1}^{R} \vert \text{conf}(b_r) - \text{acc}(b_r) \vert~,
\end{equation}
and the \textit{adaptive Static Calibration Error} (aSCE)
\begin{equation}\label{eq:a_sce}
 \text{aSCE} = \frac{1}{K R} \sum_{k=1}^{K} \sum_{r=1}^{R} \vert \text{conf}(b_{r_k})-\text{acc}(b_{r_k}) \vert
\end{equation}
are defined as extensions of the ECE and the SCE.\\
As has been empirically shown in and , the adaptive binning calibration measures $\text{aECE}$ and $\text{aSCE}$ are more robust to the number of bins than the corresponding equal-width binning calibration measures $\text{ECE}$ and $\text{SCE}$.
It is important to make clear that in a multi-class setting, the calibration measures can suffer from imbalance in the test data. Even when the calibration is computed classwise, the computed errors are weighted by the number of samples in the classes. 
Following, larger classes can shadow the bad calibration on small classes, comparable to accuracy values in classification tasks .", "id": "e0dba29a-a672-47b8-bc13-8cb4402e7555", "level": "subsection", "origin_cites_number": 12, "parent_id": "71582559-f2d3-45a1-a741-a40d9e53b709", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Calibration" ], [ "subsection", "Evaluating Calibration Quality" ] ], "subsections": [], "title": "Evaluating Calibration Quality" }, { "cite_extract_rate": 0.7307692307692301, "cites": [ 4630, 4664, 4617, 4627, 4590, 4648, 4700, 4649, 4679, 4626, 3289, 4594, 4620, 4621, 3293, 759, 3302, 4625, 3288, 8812, 4616, 4619, 4025, 4642, 4592, 4680, 4676, 4674, 8639, 1624, 8830, 4678, 4683, 3267, 3251, 4692, 3278, 4613 ], "content": "\\label{sec:data_sets_and_baselines} \n\\begin{table*}[ht!]\n\t\\centering\n\t\\caption{Overview of frequently compared benchmark approaches, tasks and their data sets among existing works organized according to the taxonomy of this paper.}\n\t\\begin{tabular}{>{\\raggedright\\arraybackslash}p{3.1cm}p{4.5cm}p{3.5cm}p{4cm}}\n\t\t& \\textbf{Tasks}\n\t\t& \\textbf{Task index: Data sets}\n\t\t& \\textbf{Baselines}\n\t\t\\\\ \\hline\n\t\t\\addlinespace[2ex]\n\t\t\\textbf{Bayesian Neural Networks} \n\t\t& 1. Regression\\textsuperscript{1,3,4,7,8,11,12,15-17}\n\t\t\\newline 2. Calibration\\textsuperscript{6,10,13,14}\n\t\t\\newline 3. OOD Detection\\textsuperscript{4,6,8,10,12,13}\n\t\t\\newline 4. Adversarial Attacks\\textsuperscript{4,8,12}\n\t\t\\newline 5. Active Learning\\textsuperscript{7,12,14} \n\t\t\\newline 6. Continual Learning\\textsuperscript{10}\n\t\t\\newline 7. 
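The equal-width binning measures above translate directly into code. A minimal NumPy sketch of the top-label ECE from Equation \ref{eq:ece} (the function name is our own):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Top-label ECE with M equal-width bins ](m-1)/M, m/M].

    confidences: (N,) predicted top-label confidences.
    correct: (N,) 1.0 where the prediction was correct, else 0.0.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()            # acc(b_m)
            avg_conf = confidences[in_bin].mean()   # conf(b_m)
            ece += (in_bin.sum() / n) * abs(acc - avg_conf)
    return float(ece)
```

The adaptive variants differ only in the binning step: instead of fixed edges, the sorted confidences are split into $R$ groups of equal size.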
Reinforcement Learning (Intrinsic Motivation, Contextual Bandits)\\textsuperscript{1,14,15}\n\t\t& 1: UCI\n\t\t\\newline~~2,~~3,~~4:~~(not)MNIST,\n\t\t\\newline~~CIFAR10/100,~~SHVN,\n\t\t\\newline~~ImageNet\n\t\t\\newline 5: UCI\n\t\t\\newline 6: Permuted MNIST\n\t\t& Softmax\\textsuperscript{44},\n\t\t\\newline MCdropout\\textsuperscript{\\hyperlink{gal2016dropout}{1}}, ~~DeepEnsemble\\textsuperscript{\\hyperlink{2}{2}},\n\t\t\\newline BBB\\textsuperscript{3},~~NormalizingFlow\\textsuperscript{4},\n\t\t\\newline PBP\\textsuperscript{7},~~SWAG\\textsuperscript{6},\n\t\t\\newline KFAC\\textsuperscript{8},~~DVI\\textsuperscript{11}\n\t\t\\newline HMC\\textsuperscript{9}, ~~\n\t\t\\newline VOGN\\textsuperscript{10},~~INF\\textsuperscript{12}\n\t\t\\\\ \n\t\t\\addlinespace[2ex]\n\t\t\\multicolumn{4}{p\\linewidth}{\n\t\t\t\t\t\\textsuperscript{\\hypertarget{gal2016dropout}{1}}~\n\t\t\t\t\t\\textsuperscript{3}~\n\t\t\t\t\t\\textsuperscript{4}~\n\t\t\t\t\t\\textsuperscript{5}~\n\t\t\t\t\t\\textsuperscript{6}~\n\t\t\t\t\t\\textsuperscript{7}~\n\t\t\t\t\t\\textsuperscript{8}~\n\t\t\t\t\t\\textsuperscript{9}~\n\t\t\t\t\t\\textsuperscript{10}~\n\t\t\t\t\t\\textsuperscript{11}~\n\t\t\t\t\t\\textsuperscript{12}~\n\t\t\t\t\t\\textsuperscript{13}~\n\t\t\t\t\t\\textsuperscript{14}~\n\t\t\t\t\t\\textsuperscript{15}~\n\t\t\t\t\t\\textsuperscript{16}~\n\t\t\t\t\t\\textsuperscript{17}~\n\t\t\t\t}\n\t\t\t\t\\\\\n\t\t\\addlinespace[2ex]\n\t\t\\hline\n\t\t\\addlinespace[2ex]\n\t\t\\textbf{Ensembles}\n\t\t& 1. Regression\\textsuperscript{2,20}\n\t\t\\newline 2. Calibration\\textsuperscript{2,20,24-32}\n\t\t\\newline 3. OOD Detection\\textsuperscript{2,20,25,27-30,32-34}\n\t\t\\newline 4. 
Active Learning\\textsuperscript{31}\n\t\t& 1: Toy\\textsuperscript{7}, UCI\n\t\t\\newline 2, 3: Toy, (not)MNIST, SVHN, LSUN, CIFAR10/100,\n\t\t\\newline (Tiny)ImageNet, Diabetic~~Retinopathy\n\t\t\\newline 4: MNIST\n\t\t& Softmax\\textsuperscript{44},~~MFVI\\textsuperscript{5},~~SGLD\\textsuperscript{55},\n\t\t\\newline MCdropout\\textsuperscript{\\hyperlink{1}{1}},~~DeepEnsemble\\textsuperscript{2},\n\t\t\\newline BBB\\textsuperscript{3},~~PBP\\textsuperscript{7},~~NormalizingFlow\\textsuperscript{4},\n\t\t\\newline TemperatureScaling\\textsuperscript{38,54}\n\t\t\\\\ \n\t\t\\addlinespace[2ex]\n\t\t\\multicolumn{4}{p\\linewidth}{\n\t\t\t\\textsuperscript{\\hypertarget{2}{2}}~\n\t\t\t\\textsuperscript{20}~\n\t\t\t\\textsuperscript{24}~\n\t\t\t\\textsuperscript{25}~\n\t\t\t\\textsuperscript{26}~\n\t\t\t\\textsuperscript{27}~\n\t\t\t\\textsuperscript{28}~\n\t\t\t\\textsuperscript{29}~\n\t\t\t\\textsuperscript{30}~\n\t\t\t\\textsuperscript{31}~\n\t\t\t\\textsuperscript{32}~\n\t\t\t\\textsuperscript{33}~\n\t\t\t\\textsuperscript{34}~\n\t\t}\n\t\t\\\\%\\hline\n\t\t\\addlinespace[2ex]\n\t\t\\hline\n\t\t\\addlinespace[2ex]\n\t\t\\textbf{Single Deterministic Models}\n\t\t& 1. Regression\\textsuperscript{21-23}\n\t\t\\newline 2. Calibration\\textsuperscript{21-23,39,40,41,43}\n\t\t\\newline 3. OOD Detection\\textsuperscript{21,22,38,39,41-53}\n\t\t\\newline 4. Adversarial Attacks\\textsuperscript{21,41,48}\n\t\t& 1. 
Toy\\textsuperscript{7}, UCI, NYU Depth\n\t\t\\newline~~2,~~3:~~(E/Fashion/not)MNIST,\n\t\t\\newline~~Toy,~~CIFAR10/100,~~SVHN,\n\t\t\\newline~~LSUN,~~(Tiny)ImageNet,~~IMDB,\n\t\t\\newline~~Diabetic~~Retinopathy,~~Omniglot\n\t\t\\newline 4: MNIST, CIFAR10, NYU Depth, Omniglot\n\t\t& Softmax\\textsuperscript{44},~~GAN\\textsuperscript{27},~~Dirichlet\\textsuperscript{48},~~BBB\\textsuperscript{3},\n\t\t\\newline~~MCdropout\\textsuperscript{\\hyperlink{1}{1}},~~DeepEnsemble\\textsuperscript{\\hyperlink{2}{2},57},\n\t\t\\newline~~Mahalanobis\\textsuperscript{56},~~TemperatureScaling\\textsuperscript{38,54}\n\t\t\\newline~~NormalizingFlow\\textsuperscript{4}\n\t\t\\\\%\\hline\n\t\t\\addlinespace[2ex]\n\t\t\\multicolumn{4}{p\\linewidth}{\n\t\t\t\\textsuperscript{21}~\n\t\t\t\\textsuperscript{22}~\n\t\t\t\\textsuperscript{23}~\n\t\t\t\\textsuperscript{38}\n\t\t\t\\textsuperscript{39}\n\t\t\t\\textsuperscript{40}\n\t\t\t\\textsuperscript{41}\n\t\t\t\\textsuperscript{42}\n\t\t\t\\textsuperscript{43}\n\t\t\t\\textsuperscript{44}\n\t\t\t\\textsuperscript{45}\n\t\t\t\\textsuperscript{46}\n\t\t\t\\textsuperscript{47}\n\t\t\t\\textsuperscript{48}\n\t\t\t\\textsuperscript{49}\n\t\t\t\\textsuperscript{50}\n\t\t\t\\textsuperscript{51}\n\t\t\t\\textsuperscript{52}\n\t\t\t\\textsuperscript{53}\n\t\t}\n\t\t\\\\\n\t\t\\addlinespace[2ex]\n\t\t\\hline\n\t\t\\addlinespace[2ex]\n\t\t\\textbf{Test-Time Data Augmentation}\n\t\t& 1. Semantic Segmentation\\textsuperscript{36, 37}\n\t\t\\newline 2. Calibration\\textsuperscript{35}\n\t\t\\newline 3. 
OOD Detection\\textsuperscript{35-37}\n\t\t& 1, 2, 3: Medical data, Diabetic Retinopathy\n\t\t& Softmax\\textsuperscript{44},~~MCdropout\\textsuperscript{\\hyperlink{1}{1}}\n\t\t\\\\%\\hline\n\t\t\\addlinespace[2ex]\n\t\t\\multicolumn{4}{p\\linewidth}{\n\t\t\t\\textsuperscript{35}~\n\t\t\t\\textsuperscript{36}~\n\t\t\t\\textsuperscript{37}~\n\t\t}\n\t\t\\\\\n\t\t\\addlinespace[2ex]\n\t\t\\hline\t\t\n\t\t\\\\\n\t\t\\multicolumn{4}{p\\linewidth}{\n\t\t\t\\textsuperscript{54}\n\t\t\t\\textsuperscript{55}\n\t\t\t\\textsuperscript{56}\n\t\t\t\\textsuperscript{57}\n\t\t}\n\t\\end{tabular}\n\t\\label{tab:datasets_baselines}\n\\end{table*}\nIn this section, we collect commonly used tasks and data sets for evaluating uncertainty estimation among existing works. Besides, a variety of baseline approaches commonly used as comparison against the methods proposed by the researchers are also presented. By providing a review on the relevant information of these experiments, we hope that both researchers and practitioners can benefit from it. While the former can gain a basic understanding of recent benchmarks tasks, data sets and baselines so that they can design appropriate experiments to validate their ideas more efficiently, the latter might use the provided information to select more relevant approaches to start based on a concise overview on the tasks and data sets on which the approach has been validated.\nIn the following, we will introduce the data sets and baselines summarized in table \\ref{tab:datasets_baselines} according to the taxonomy used throughout this review.\nThe structure of the table is designed to organize the main contents of this section concisely, hoping to provide a clear overview of the relevant works. We group the approaches of each category into one of four blocks and extract the most commonly used tasks, data sets and provided baselines for each column respectively. 
The corresponding literature is listed at the bottom of each block to facilitate lookup. Note that we focus on methodological comparison here, not on the choice of architecture for the different methods, which has an impact on performance as well. Due to space limitations and visual density, we only show the most important elements (task, data set, baselines), ranked according to their frequency of use in the literature we have researched.
The main results are as follows. Among the most frequent tasks for evaluating uncertainty estimation methods are regression tasks, where samples close to and far away from the training distribution are studied. Furthermore, the calibration of uncertainty estimates in the case of classification problems is very often investigated. 
Further noteworthy tasks are out-of-distribution (OOD) detection and robustness against adversarial attacks. In the medical domain, calibration of semantic segmentation results is the predominant use case.
The choice of data sets is mostly consistent among all reviewed works. For regression, toy data sets are employed for the visualization of uncertainty intervals, while the UCI data sets are studied in light of (negative) log-likelihood comparison. The most common data sets for calibration and OOD detection are MNIST, CIFAR10 and 100 as well as SVHN, while ImageNet and its tiny variant are also studied frequently. These form distinct pairs when OOD detection is studied: models trained on CIFAR variants are evaluated on SVHN and vice versa, while MNIST is paired with variants of itself like notMNIST and FashionMNIST. Classification data sets are also commonly distorted and corrupted to study the effects on calibration, blurring the line between OOD detection and adversarial attacks.
Finally, the most commonly used baselines by far are Monte Carlo (MC) Dropout and deep ensembles, while the softmax output of deterministic models is almost always employed as a kind of surrogate baseline. 
It is interesting to note that inside each approach--BNNs, Ensembles, Single Deterministic Models and Input Augmentation--some baselines are preferred over others.\nBNNs are most frequently compared against variational inference methods like Bayes' by Backprop (BBB) or Probabilistic Backpropagation (PBP) while for Single Deterministic Models it is more common to compare them against distance-based methods in the case of OOD detection.\nOverall, BNN methods show a more diverse set of tasks considered while being less frequently evaluated on large data sets like ImageNet.\nTo further facilitate access for practitioners, we provide web-links to the authors' official implementations (marked by a star) of all common baselines as identified in the baselines column. Where no official implementation is provided, we instead link to the highest ranked implementations found on \\href{https://github.com/}{GitHub} at the time of this survey. The list can be also found within \\href{https://github.com/JakobCode/UncertaintyInNeuralNetworks\\_Resources}{our GitHub repository on available implementations}\\footnote{\\href{https://github.com/JakobCode/UncertaintyInNeuralNetworks\\_Resources}{https://github.com/JakobCode/UncertaintyInNeuralNetworks\\_Resources}}. 
The relevant baselines are Softmax\\textsuperscript{*} (\\href{https://github.com/hendrycks/error-detection}{TensorFlow}, \\href{https://github.com/hendrycks/outlier-exposure}{PyTorch}), MCdropout (\\href{https://github.com/yaringal/DropoutUncertaintyExps}{TensorFlow\\textsuperscript{*}}; PyTorch: \\href{https://github.com/cpark321/uncertainty-deep-learning}{1}, \\href{https://github.com/JavierAntoran/Bayesian-Neural-Networks}{2}), DeepEnsembles (TensorFlow: \\href{https://github.com/vvanirudh/deep-ensembles-uncertainty}{1}, \\href{https://github.com/Kyushik/Predictive-Uncertainty-Estimation-using-Deep-Ensemble}{2}, \\href{https://github.com/axelbrando/Mixture-Density-Networks-for-distribution-and-uncertainty-estimation}{3}; PyTorch: \\href{https://github.com/bayesgroup/pytorch-ensembles}{1}, \\href{https://github.com/cpark321/uncertainty-deep-learning}{2}), BBB (PyTorch: \\href{https://github.com/ThirstyScholar/bayes-by-backprop}{1}, \\href{https://github.com/nitarshan/bayes-by-backprop}{2}, \\href{https://github.com/cpark321/uncertainty-deep-learning}{3}, \\href{https://github.com/JavierAntoran/Bayesian-Neural-Networks}{4}), NormalizingFlow (\\href{https://github.com/AMLab-Amsterdam/MNF_VBNN}{TensorFlow}, \\href{https://github.com/janosh/torch-mnf}{PyTorch}), \\href{https://github.com/HIPS/Probabilistic-Backpropagation}{PBP}, SWAG (\\href{https://github.com/wjmaddox/swa_gaussian}{1\\textsuperscript{*}}, \\href{https://github.com/bayesgroup/pytorch-ensembles}{2}), KFAC (PyTorch: \\href{https://github.com/DLR-RM/curvature}{1}, \\href{https://github.com/bayesgroup/pytorch-ensembles}{2}, \\href{https://github.com/JavierAntoran/Bayesian-Neural-Networks}{3}; \\href{https://github.com/tensorflow/kfac}{TensorFlow}), DVI (\\href{https://github.com/Microsoft/deterministic-variational-inference}{TensorFlow\\textsuperscript{*}}, \\href{https://github.com/markovalexander/DVI}{PyTorch}), \\href{https://github.com/JavierAntoran/Bayesian-Neural-Networks}{HMC}, 
\\href{https://github.com/team-approx-bayes/dl-with-bayes}{VOGN\\textsuperscript{*}}, \\href{https://github.com/DLR-RM/curvature}{INF\\textsuperscript{*}}, \\href{https://github.com/ctallec/pyvarinf}{MFVI}, \\href{https://github.com/JavierAntoran/Bayesian-Neural-Networks}{SGLD}, TemperatureScaling (\\href{https://github.com/gpleiss/temperature_scaling}{1\\textsuperscript{*}}, \\href{https://github.com/facebookresearch/odin}{2}, \\href{https://github.com/cpark321/uncertainty-deep-learning}{3}), \\href{https://github.com/KaosEngineer/PriorNetworks}{GAN\\textsuperscript{*}}, \\href{https://github.com/dougbrion/pytorch-classification-uncertainty}{Dirichlet} and \\href{https://github.com/pokaxpoka/deep_Mahalanobis_detector}{Mahalanobis\\textsuperscript{*}}.", "id": "2dbc07cb-8a4d-4558-872e-32e013a2582a", "level": "section", "origin_cites_number": 52, "parent_id": "039eb0dd-a257-4d85-a714-ad5fc65c6f0b", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Data Sets and Baselines" ] ], "subsections": [], "title": "Data Sets and Baselines" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:application_fields}\nFrom a practical point of view, the main motivation for quantifying uncertainties in DNNs is to be able to classify the received predictions and to make more confident decisions. This section gives a brief overview and examples of the aforementioned motivations. In the first part, we discuss how uncertainty is used within active learning and reinforcement learning. Subsequently, we discuss the interest of the communities working on domain fields like medical image analysis, robotics, and earth observation. These application fields are used representatively for the large number of domains where the uncertainty quantification plays an important role. 
The challenges and concepts could (and should) be transferred to any application domain of interest.", "id": "56922a8f-84a8-4d62-aa6d-86c2e59e8e40", "level": "section", "origin_cites_number": 0, "parent_id": "039eb0dd-a257-4d85-a714-ad5fc65c6f0b", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Applications of Uncertainty Estimates" ] ], "subsections": [ "77b5da91-1f19-4c15-8640-b5faf69d58e8", "9b15acc6-a237-457d-988d-2dbb131f41f4" ], "title": "Applications of Uncertainty Estimates" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 4599, 4610, 4701, 8805, 1044 ], "content": "The process of collecting labeled data for supervised training of a DNN can be laborious, time-consuming, and costly. To reduce the annotation effort, the active learning framework shown in Figure \\ref{fig:active_learning} trains the DNN sequentially on different labelled data sets increasing in size over time . In particular, given a small labelled data set and a large unlabeled data set, a deep neural network trained in the setting of active learning learns from the small labeled data set and decides based on the acquisition function, which samples to select from the pool of unlabeled data. The selected data are added to the training data set and a new DNN is trained on the updated training data set. This process is then repeated with the training set increasing in size over time. Uncertainty sampling is one most popular criteria used in acquisition functions where predictive uncertainty determines which training samples have the highest uncertainty and should be labelled next. Uncertainty based active learning strategies for deep learning applications have been successfully used in several works . 
\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.38\\textwidth]{figures/active_learning.pdf}\n \\caption{The active learning framework: The acquisition function evaluates the uncertainties on the network's test predictions in order to select unlabelled data. The selected data are labelled and added to the pool of labelled data, which is used to train and improve the performance of the predictor.}\n \\label{fig:active_learning}\n\\end{figure}", "id": "77b5da91-1f19-4c15-8640-b5faf69d58e8", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "56922a8f-84a8-4d62-aa6d-86c2e59e8e40", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Applications of Uncertainty Estimates" ], [ "subsubsection", "Active Learning" ] ], "subsections": [ "f7e0d8b1-9d2d-49ec-9731-26873e781591" ], "title": "Active Learning" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 4596, 4614, 1343 ], "content": "The general framework of deep reinforcement learning is shown in Figure \\ref{fig:reinforcement_learning}. In the context of reinforcement learning, uncertainty estimates can be used to solve the exploration-exploitation dilemma. It says that uncertainty estimates can be used to effectively balance the exploration of unknown environments with the exploitation of existing knowledge extracted from known environments. For example, if a robot interacts with an unknown environment, the robot can safely avoid catastrophic failures by reasoning about its uncertainty. To estimate the uncertainty in this framework, Huang et al. used an ensemble of bootstrapped models (models trained on different data sets sampled with replacement from the original data set), while Gal and Ghahramani approximated Bayesian inference via dropout sampling. Inspired by and , Kahn et al. and Lötjens et al. used a mixture of deep Bayesian networks performing dropout sampling on an ensemble of bootstrapped models. 
For further reading, Ghavamzadeh et al. presented a survey of Bayesian reinforcement learning. 
\begin{figure}
 \centering
 \includegraphics[width=0.38\textwidth]{figures/reinforcement_learning.pdf}
 \caption{The reinforcement learning framework: The agent interacts with the environment by executing a specific action influencing the next state of the agent. The agent observes a reward representing the cost associated with the executed action. The agent chooses actions based on a policy learned by a deep neural network. However, the predicted uncertainty associated with the action predicted by the deep neural network can help the agent to decide whether to execute the predicted action or not.}
 \label{fig:reinforcement_learning}
\end{figure}
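As a simple illustration of how ensemble-based uncertainty can steer exploration, one common pattern is to add an exploration bonus proportional to the disagreement among bootstrapped models. This sketch and its naming are our own and do not reproduce a specific method from the cited works:

```python
import numpy as np

def uncertainty_bonus_action(q_ensemble, beta=1.0):
    """Choose an action by the mean ensemble Q-value plus an exploration
    bonus proportional to the ensemble's disagreement.

    q_ensemble: (M, A) Q-value estimates of M bootstrapped models
    for A actions; beta trades off exploitation vs. exploration.
    """
    mean_q = q_ensemble.mean(axis=0)     # exploitation term
    bonus = q_ensemble.std(axis=0)       # disagreement = epistemic proxy
    return int(np.argmax(mean_q + beta * bonus))
```

High disagreement signals states the ensemble has rarely seen, so the bonus biases the agent towards exploring them; with $\beta=0$ the rule reduces to greedy exploitation.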
In the following, three different fields where uncertainty plays an important role are presented, namely medical image analysis, robotics, and earth observation.
used test time data augmentation to estimate the data-dependent uncertainty in medical image segmentation.", "id": "e330bdbb-a5a2-4a38-b6ec-85fed5ce7a5a", "level": "subsubsection", "origin_cites_number": 15, "parent_id": "9b15acc6-a237-457d-988d-2dbb131f41f4", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Applications of Uncertainty Estimates" ], [ "subsection", "Uncertainty in Real-World Applications" ], [ "subsubsection", "Medical Analysis" ] ], "subsections": [], "title": "Medical Analysis" }, { "cite_extract_rate": 0.40540540540540504, "cites": [ 4711, 4710, 4704, 3881, 8828, 8833, 4705, 4706, 4596, 4712, 4707, 4708, 4709, 4614, 8400 ], "content": "Robots are active agents that perceive, decide, plan, and act in the real world – all based on their incomplete knowledge about the world. As a result, mistakes of the robots not only cause failures of their own mission, but can endanger human lives, e.g. in the case of surgical robotics, self-driving cars, space robotics, etc. Hence, the robotics application of deep learning poses unique research challenges that significantly differ from those often addressed in computer vision and other off-line settings . For example, the assumption that the testing conditions come from the same distribution as training is often invalid in many settings of robotics, resulting in deterioration of the performance of DNNs in uncontrolled and detrimental conditions. This raises the question of how we can quantify the uncertainty in a DNN’s predictions in order to avoid catastrophic failures. Answering such questions is important in robotics, as it might be a lofty goal to expect data-driven approaches (in many aspects from control to perception) to always be accurate. Instead, reasoning about uncertainty can help in leveraging the recent advances in deep learning for robotics. 
\nReasoning about uncertainties and the use of probabilistic representations, as opposed to relying on a single, most-likely estimate, have been central to many domains of robotics research, even before the advent of deep learning . In robot perception, several uncertainty-aware methods have been proposed in the past, ranging from localization methods to simultaneous localization and mapping (SLAM) frameworks . As a result, many probabilistic methods such as factor graphs are now the workhorse of advanced consumer products such as robotic vacuum cleaners and unmanned aerial vehicles. In the case of planning and control, estimation problems are widely treated as Bayesian sequential learning problems, and sequential decision-making frameworks such as POMDPs assume a probabilistic treatment of the underlying planning problems. With probabilistic representations, many reinforcement learning algorithms are backed up by stability guarantees for safe interactions in the real world . Lastly, there have also been several advances ranging from reasoning (semantics to joint reasoning with geometry) and embodiment (e.g. active perception ) to learning (e.g. active learning and identifying unknown objects ). \nSimilarly, with the advent of deep learning, many researchers proposed new methods to quantify the uncertainty in deep learning as well as ways to further exploit such information. In contrast to many generic approaches, we summarize task-specific methods and their application in practice as follows. Notably, proposed to perform novelty detection using auto-encoders, where the reconstructed outputs of auto-encoders were used to decide how much one can trust the network’s predictions. Peretroukhin et al. developed an SO(3) representation and uncertainty estimation framework for rotational learning problems with uncertainty. 
demonstrated uncertainty-aware, real-world application of a reinforcement learning algorithm for robotics, while proposed to leverage spatial information, on top of MC-dropout. developed deep learning-based localization systems along with uncertainty estimates. Other approaches also learn from the robots' past experiences of failures or detect inconsistencies of the predictors . In summary, the robotics community has been both the user and the developer of uncertainty estimation frameworks targeted to specific problems. \nYet, robotics poses several unique challenges to uncertainty estimation methods for DNNs. These are, for example: (i) how to limit the computational burden and build real-time capable methods that can be executed on robots with limited computational capacities (e.g. aerial, space robots, etc.); (ii) how to leverage spatial and temporal information, as robots sense sequentially instead of having a batch of training data for uncertainty estimates; (iii) whether robots can select the most uncertain samples and update their learners online; (iv) whether robots can purposefully manipulate the scene when uncertain. Most of these challenges arise from the fact that robots are physically situated systems.", "id": "d9cfe77c-99d1-4e5a-8976-ae60d7f37ffd", "level": "subsubsection", "origin_cites_number": 37, "parent_id": "9b15acc6-a237-457d-988d-2dbb131f41f4", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Applications of Uncertainty Estimates" ], [ "subsection", "Uncertainty in Real-World Applications" ], [ "subsubsection", "Robotics" ] ], "subsections": [], "title": "Robotics" }, { "cite_extract_rate": 0.14285714285714202, "cites": [ 4609 ], "content": "Earth Observation (EO) systems are increasingly used to make critical decisions related to urban planning , resource management , disaster response , and many more. 
Right now, there are hundreds of EO satellites in space, owned by different space agencies and private companies. Figure \\ref{fig:ESA} shows the satellites owned by the European Space Agency (ESA). As in many other domains, deep learning has shown great initial success in the field of EO over the past few years . These early successes consisted of taking the latest developments of deep learning in computer vision and applying them to small curated earth observation data sets . At the same time, the underlying data is very challenging. Not only is the amount of data huge, but so is the variability in the data. This variability is caused by different sensor types, spatial changes (e.g. different regions and resolutions), and temporal changes (e.g. changing light conditions, weather conditions, seasons). Besides the challenge of efficient uncertainty quantification methods for such large amounts of data, several other challenges that can be tackled with uncertainty quantification exist in the field of EO. All in all, the sensitivity of many EO applications together with the nature of EO systems and the challenging EO data make the quantification of uncertainties very important in this field. Despite hundreds of publications in the last years on DL for EO, the range of literature on measuring uncertainties of these systems is relatively small. \nFurthermore, due to the large variation in the data, a data sample received at test time is often not covered by the training data distribution. For example, while preparing training data for local climate zone classification, the human experts might be presented only with images where there is no obstruction and structures are clearly visible. When a model which is trained on this data set is deployed in the real world, it might see images with clouds obstructing the structures or snow giving them a completely different look. Also, the classes in EO data can have a very wide distribution. 
For example, there are millions of types of houses in the world and no training data set can contain examples of all of them. The question is where the OOD detector will draw the line and which houses it will declare as OOD. Hence, OOD detection is important in earth observation and uncertainty measurements play an important part in this . \nAnother common task in EO, where uncertainties can play an important role, is data fusion. Optical images normally contain only a few channels like RGB. In contrast to this, EO data can contain optical images with up to hundreds of channels, and a variety of different sensors with different spatial, temporal, and semantic properties. Fusing the information from these different sources and channels propagates the uncertainties from different sources onto the prediction. The challenge lies in developing methods that not only quantify uncertainties but also the individual contribution of different channels, and which learn to focus on the trustworthy data source for a given sample .\nUnlike normal computer vision scenarios where the image acquisition equipment is quite near to the subject, EO satellites are hundreds of kilometers away from the subject. The sensitivity of sensors, the atmospheric absorption properties, and surface reflectance properties all contribute to uncertainties in the acquired data. Integrating the knowledge of physical EO systems, which also contain information about uncertainty models in those systems, is another major open issue. However, for several applications in EO, measuring uncertainties is not merely desirable but rather an important requirement of the field. 
E.g., the geo-variables derived from EO data may be assimilated into process models (ocean, hydrological, weather, climate, etc.) and the assimilation requires the probability distribution of the estimated variables.\n\\begin{figure*}[t]\n\\resizebox{\\textwidth}{!}{\n \\includegraphics{figures/ESA_satellites_cropped.jpg}}\n \\caption{European Space Agency (ESA) Developed Earth Observation Missions .}\n \\label{fig:ESA} \n\\end{figure*}", "id": "e98c01dd-e9df-4538-8b11-ca24c3cd697b", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "9b15acc6-a237-457d-988d-2dbb131f41f4", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Applications of Uncertainty Estimates" ], [ "subsection", "Uncertainty in Real-World Applications" ], [ "subsubsection", "Earth Observation(EO)" ] ], "subsections": [], "title": "Earth Observation(EO)" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:con}", "id": "4fde619f-e57d-42f5-974b-6ca984c28e03", "level": "section", "origin_cites_number": 0, "parent_id": "039eb0dd-a257-4d85-a714-ad5fc65c6f0b", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Conclusion and Outlook" ] ], "subsections": [ "964544a8-0c56-446c-8edc-dc3bb28f1f78", "556a7e6a-39ab-43ff-8b98-4411e6832e1c" ], "title": "Conclusion and Outlook" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 4671, 4600, 7839, 8633, 4595, 4668 ], "content": "\\label{ssec:conclusion}\nEven though many advances in uncertainty quantification in neural networks have been made over the last years, their adoption in practical mission- and safety-critical applications is still limited. 
There are several reasons for this, which are discussed one by one as follows:\n\\begin{itemize}\n \\setlength\\itemsep{0.5em}\n \\item \\textbf{Missing Validation of Existing Methods over Real-World Problems}~\\\\\n Although DNNs have become the de facto standard in solving numerous computer vision and medical image processing tasks, the majority of existing models are not able to appropriately quantify the uncertainty that is inherent in their inferences, particularly in real-world applications. This is primarily because the baseline models are mostly developed using standard data sets such as CIFAR-10/100, ImageNet, or well-known regression data sets that are specific to a particular use case and are therefore not readily applicable to complex real-world environments, such as low-resolution satellite data or other data sources affected by noise. Although many researchers from other fields apply uncertainty quantification in their field , a broad and structured evaluation of existing methods based on different real-world applications is not available yet. Works like already represent first steps towards a real-life evaluation.\n \\item \\textbf{Lack of Standardized Evaluation Protocol}~\\\\\n Existing methods for evaluating the estimated uncertainty are better suited to compare uncertainty quantification methods based on measurable quantities such as the calibration or the performance on out-of-distribution detection . As described in Section \\ref{sec:data_sets_and_baselines}, these tests are performed on standardized sets within the machine learning community. Furthermore, the experimental details of these tests might differ from paper to paper . However, a clear standardized protocol of tests that should be performed on uncertainty quantification methods is still not available. 
For researchers from other domains it is difficult to directly find state-of-the-art methods for the field they are interested in, not to mention the hard decision of which sub-field of uncertainty quantification to focus on. This makes the direct comparison of the latest approaches difficult and also limits the acceptance and adoption of currently existing methods for uncertainty quantification.\n \\item \\textbf{Inability to Evaluate Uncertainty Associated with a Single Decision}~\\\\ \n Existing measures for evaluating the estimated uncertainty (e.g., the expected calibration error) are based on the whole testing data set. This means that, as with classification tasks on unbalanced data sets, the uncertainty associated with single samples or small groups of samples may potentially get biased towards the performance on the rest of the data set. But for practical applications, assessing the reliability of a predicted confidence would offer far more possibilities than an aggregated reliability based on some testing data, which is independent of the current situation . Especially for mission- and safety-critical applications, pointwise evaluation measures could be of paramount importance and hence such evaluation approaches are very desirable. \n \\item \\textbf{Lack of Ground Truth Uncertainties}~\\\\\n Current methods are empirically evaluated and the performance is underlined by reasonable and explainable values of uncertainty. A ground truth uncertainty that could be used for validation is in general not available.\n Additionally, even though existing methods are calibrated on given data sets, one cannot simply transfer these results to any other data set, since one has to be aware of shifts in the data distribution and that many fields can only cover a tiny portion of the actual data environment. In application fields such as EO, the preparation of a huge amount of training data is hard and expensive and hence synthetic data can be used to train a model. 
For this artificial data, artificial uncertainties in labels and data should be taken into account to gain a better understanding of the uncertainty quantification performance. The gap between real and synthetic data, or between estimated and real uncertainty, further limits the adoption of currently existing methods for uncertainty quantification.\n \\item \\textbf{Explainability Issue:} ~\\\\\n Existing methods of neural network uncertainty quantification deliver predictions of certainty without any clue about what causes possible uncertainties. Even though those certainty values often look \\textit{reasonable} to a human observer, one does not know whether the uncertainties are actually predicted based on the same observations the human observer made. But without being sure about the reasons and motivations of single uncertainty estimations, a proper transfer from one data set to another, or even a simple domain shift, is much harder to realize with guaranteed performance. Regarding safety-critical real-life applications, the lack of explainability makes the application of the available methods significantly harder. Besides the explainability of neural network decisions, existing methods for uncertainty quantification are not well understood on a higher level. For instance, explaining the behavior of single deterministic approaches, ensembles, or Bayesian methods is a current direction of research and remains difficult to grasp in every detail . 
It is, however, crucial to understand how those methods operate and capture uncertainty in order to identify pathways for refinement and to detect and characterize uncertainty, failures, and important shortcomings .\n\\end{itemize}", "id": "964544a8-0c56-446c-8edc-dc3bb28f1f78", "level": "subsection", "origin_cites_number": 9, "parent_id": "4fde619f-e57d-42f5-974b-6ca984c28e03", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Conclusion and Outlook" ], [ "subsection", "Conclusion - How well do the current uncertainty quantification methods work for real world applications?" ] ], "subsections": [], "title": "Conclusion - How well do the current uncertainty quantification methods work for real world applications?" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 4714, 4713, 7839 ], "content": "\\begin{itemize}\n \\item \\textbf{Generic Evaluation Framework}\\\\\n As already discussed above, there are still problems regarding the evaluation of uncertainty methods, such as the lack of 'ground truth' uncertainties, the inability to test on single instances, and the lack of standardized benchmarking protocols. To cope with such issues, the provision of an evaluation protocol containing various concrete baseline data sets and evaluation metrics that cover all types of uncertainty would undoubtedly help to boost research in uncertainty quantification. Also, the evaluation with regard to risk-averse and worst-case scenarios should be considered there. This means that uncertainty predictions with a very high predicted uncertainty should never fail, as, for example, for the prediction of a red or green traffic light. Such a general protocol would enable researchers to easily compare different types of methods against an established benchmark as well as on real-world data sets. 
The adoption of such a standard evaluation protocol should be encouraged by conferences and journals.\n \\item \\textbf{Expert \\& Systematic Comparison of Baselines}~\\\\\n A broad and structured comparison of existing methods for uncertainty estimation on real-world applications is not available yet. An evaluation on real-world data is not even standard in current machine learning research papers. As a result, given a specific application, it remains unclear which method for uncertainty estimation performs best and whether the latest methods outperform older methods also on real-world examples. This is also partly caused by the fact that researchers from other domains who use uncertainty quantification methods generally present successful applications of single approaches on the specific problem or data set at hand. Considering this, there are several points that could be adopted for a better comparison within the different research domains. For instance, domain experts should also compare different approaches against each other and present the weaknesses of single approaches in their domain. Similarly, for a better comparison among several domains, the works from the different real-world domains could be collected and exchanged on a central platform. Such a platform might also help machine learning researchers by providing an additional source of challenges from the real world and would pave the way to broadly highlight weaknesses in the current state-of-the-art approaches. Google's repository on baselines in uncertainties in neural networks \\footnote{\\href{https://github.com/google/uncertainty-baselines}{https://github.com/google/uncertainty-baselines}} could be such a platform and a step towards achieving this goal. \n \\item \\textbf{Uncertainty Ground Truths} \\\\\n It remains difficult to validate existing methods due to the lack of uncertainty ground truths. 
An actual uncertainty ground truth on which methods can be compared in an ImageNet-like manner would make the evaluation of predictions on single samples possible. To reach this, the data generation process and the sources of uncertainty it introduces, such as the labeling process, might be investigated in more detail.\n \\item \\textbf{Explainability and Physical Models} \\\\\n Knowing the actual reasons for a falsely high certainty or a low certainty makes it much easier to engineer the methods for real-life applications, which again increases people's trust in such methods. Recently, Antorán et al. claimed to have published the first work on explainable uncertainty estimation. Uncertainty estimations, in general, form an important step towards explainable artificial intelligence. Explainable uncertainty estimations would give an even deeper understanding of the decision process of a neural network, which, in practical deployment of DNNs, shall incorporate the desired ability to be risk-averse while staying applicable in the real world (especially in safety-critical applications). Also, the possibility of improving explainability with physically based arguments offers great potential. While DNNs are very flexible and efficient, they do not directly embed the domain-specific expert knowledge that is mostly available and can often be described by mathematical or physical models, as, for example, in earth system science problems . Such physics-guided models offer a variety of possibilities to include explicit knowledge as well as practical uncertainty representations into a deep learning framework . 
\n \\end{itemize}\n\\bibliography{main_bib}\n\\end{document}", "id": "556a7e6a-39ab-43ff-8b98-4411e6832e1c", "level": "subsection", "origin_cites_number": 5, "parent_id": "4fde619f-e57d-42f5-974b-6ca984c28e03", "prefix_titles": [ [ "title", "A Survey of Uncertainty in Deep Neural Networks" ], [ "section", "Conclusion and Outlook" ], [ "subsection", "Outlook" ] ], "subsections": [], "title": "Outlook" } ]
41
[ 4601, 4598, 4606, 4612, 4617, 8806, 7755, 4596, 4590, 1828, 3289, 2441, 4594, 759, 8807, 4610, 4600, 4607, 1044, 3288, 4616, 4603, 4593, 4025, 4597, 4599, 4615, 4592, 4605, 4609, 8805, 8633, 4602, 4595, 4591, 4608, 4614, 4611, 3234, 4604, 4613, 208, 4618, 3251, 8319, 8808, 3299, 1624, 4622, 4621, 4620, 4619, 4627, 4626, 4624, 4625, 4623, 8639, 7130, 4629, 4638, 4630, 4639, 4631, 4636, 4632, 8811, 4634, 8813, 8812, 4628, 4637, 8809, 4633, 4641, 4640, 4635, 8810, 8815, 4648, 4649, 4646, 4647, 4643, 3329, 4644, 8814, 4642, 4645, 8816, 3278, 4652, 8819, 4650, 8826, 8818, 8825, 8822, 4657, 4655, 8823, 4654, 4651, 3301, 8817, 4653, 166, 8824, 8821, 4656, 8820, 4664, 153, 4663, 4660, 8828, 3300, 1562, 4667, 8827, 4659, 4666, 4661, 4665, 4658, 4662, 8829, 3302, 4668, 4669, 4670, 4671, 4678, 4677, 4672, 4676, 4673, 4675, 4674, 3293, 6989, 4680, 681, 4679, 4682, 4683, 825, 4681, 3294, 3232, 4684, 4685, 8830, 4686, 4688, 4687, 8831, 4689, 4690, 4141, 8832, 301, 3254, 4691, 4693, 4694, 4695, 4692, 4696, 7191, 4697, 4699, 4698, 7838, 4700, 3267, 4701, 1343, 4703, 4702, 4711, 4710, 4704, 3881, 8833, 4705, 4706, 4712, 4707, 4708, 4709, 8400, 7839, 4714, 4713 ]
0.976169
[ "Mokanarangan Thayaparan", "Marco Valentino", "André Freitas" ]
A Survey on Explainability in Machine Reading Comprehension
2020
2020-10-01T13:26:58Z
cs.CL
This paper presents a systematic review of benchmarks and approaches for \textit{explainability} in Machine Reading Comprehension (MRC). We present how the representation and inference challenges evolved and the steps which were taken to tackle these challenges. We also present the evaluation methodologies to assess the performance of explainable systems. In addition, we identify persisting open research questions and highlight critical directions for future work.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "03ca5aea-6a90-479f-934f-d9ab95896c40", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ] ], "subsections": [ "4e7495bb-e63c-494e-a5e5-3266c6508430", "f85ddc30-cc46-47dc-98bd-bb29ed2b0a20", "15e67409-9476-47aa-8a47-bfcb4ac8569f", "70bbe41f-8ace-48f8-96e7-be40ad390f2b", "cdcc17cb-f2d2-4dee-a997-023bad38e00d", "42f2c83a-4225-46e7-9b7f-f595d1fda0c2" ], "title": "root" }, { "cite_extract_rate": 1, "cites": [ 456, 444, 1798, 439, 460, 4274, 8761, 4275 ], "content": "\\label{sec:intro}\n\\emph{Machine Reading Comprehension (MRC)} has the long-standing goal of developing machines that can reason with natural language. A typical reading comprehension task consists in answering questions about the background knowledge expressed in a textual corpus. Recent years have seen an explosion of models and architectures due to the release of large-scale benchmarks, ranging from open-domain to commonsense and scientific reading comprehension tasks. Research in MRC is gradually evolving in the direction of abstractive inference capabilities, going beyond what is explicitly stated in the text .\nAs the need to evaluate abstractive reasoning becomes predominant, a crucial requirement emerging in recent years is \\emph{explainability} , intended as the ability of a model to expose the underlying mechanisms adopted to arrive at the final answers. 
Explainability has the potential to tackle some of the current issues in the field:", "id": "4e7495bb-e63c-494e-a5e5-3266c6508430", "level": "section", "origin_cites_number": 8, "parent_id": "03ca5aea-6a90-479f-934f-d9ab95896c40", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Introduction" ] ], "subsections": [ "e67e16ee-5c30-4e37-a53b-e46d5c2962af" ], "title": "Introduction" }, { "cite_extract_rate": 1, "cites": [ 4277, 2396, 4276, 4278 ], "content": "Traditionally, MRC models have been evaluated on end-to-end prediction tasks. In other words, the capability of achieving high accuracy on specific datasets has been considered a proxy for evaluating a desired set of reasoning skills. However, recent work has demonstrated that this is not necessarily true for models based on deep learning, which are particularly capable of exploiting biases in the data . Research in explainability can provide novel evaluation frameworks to investigate and analyse the internal reasoning mechanisms ;", "id": "e67e16ee-5c30-4e37-a53b-e46d5c2962af", "level": "paragraph", "origin_cites_number": 5, "parent_id": "4e7495bb-e63c-494e-a5e5-3266c6508430", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Introduction" ], [ "paragraph", "Evaluation:" ] ], "subsections": [ "97b2bd5c-5cbd-44cb-8a30-f0b5c5f07c99" ], "title": "Evaluation:" }, { "cite_extract_rate": 1, "cites": [ 7802, 7801, 4279, 4275 ], "content": "Despite remarkable performance achieved in specific MRC tasks, machines based on deep learning still suffer from overfitting and lack of generalisation. 
By focusing on explicit reasoning methods, research in explainability can lead to the development of novel models able to perform compositional generalisation and discover abstract inference patterns in the data , favouring few-shot learning and cross-domain transportability ;", "id": "97b2bd5c-5cbd-44cb-8a30-f0b5c5f07c99", "level": "paragraph", "origin_cites_number": 4, "parent_id": "e67e16ee-5c30-4e37-a53b-e46d5c2962af", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Introduction" ], [ "paragraph", "Evaluation:" ], [ "paragraph", "Generalisation:" ] ], "subsections": [ "f4173d5c-3a3b-441f-8811-b06630ceb118" ], "title": "Generalisation:" }, { "cite_extract_rate": 0, "cites": [], "content": "A system capable of delivering explanations is generally more interpretable, meeting some of the requirements for real world applications, such as user trust, confidence and acceptance .\nDespite the potential impact of explainability in MRC, little has been done to provide a unifying and organised view of the field. This paper aims at systematically categorising explanation-supporting benchmarks and models. To this end, we review the work published in the main AI and NLP conferences from 2015 onwards which actively contributed to explainability in MRC, referring also to preprint versions when necessary. 
The survey is organised as follows: (a) Section 2 frames the scope of the survey, stating a definition of explainability in MRC; (b) Section 3 reviews the main benchmarks proposed in recent years for the explicit evaluation and development of explainable MRC models; (c) Section 4 provides a detailed classification of the main architectural patterns and approaches proposed for explanation generation; (d) Section 5 describes quantitative and qualitative metrics for the evaluation of explainability, highlighting some of the issues connected with the development of explanation-supporting benchmarks.", "id": "f4173d5c-3a3b-441f-8811-b06630ceb118", "level": "paragraph", "origin_cites_number": 1, "parent_id": "97b2bd5c-5cbd-44cb-8a30-f0b5c5f07c99", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Introduction" ], [ "paragraph", "Evaluation:" ], [ "paragraph", "Generalisation:" ], [ "paragraph", "Interpretability:" ] ], "subsections": [], "title": "Interpretability:" }, { "cite_extract_rate": 0.5, "cites": [ 1798 ], "content": "\\label{sec:explanation_as_nli}\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.95\\textwidth]{abstraction_2.pdf}\n\\caption{Dimensions of explainability in Machine Reading Comprehension.}\n\\label{fig:approach}\n\\end{figure}\nIn the field of Explainable AI, there is no consensus, in general, on the nature of explanation . As AI embraces a variety of tasks, the resulting definition of explainability is often fragmented and dependent on the specific scenario. 
Here, we frame the scope of the survey by investigating the dimensions of explainability in MRC.", "id": "f85ddc30-cc46-47dc-98bd-bb29ed2b0a20", "level": "section", "origin_cites_number": 2, "parent_id": "03ca5aea-6a90-479f-934f-d9ab95896c40", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainability in Machine Reading Comprehension" ] ], "subsections": [ "329b2d11-acc7-4022-989c-f8e798e6b4af" ], "title": "Explainability in Machine Reading Comprehension" }, { "cite_extract_rate": 1, "cites": [ 7303, 1798 ], "content": "We refer to \\emph{explainability} as a specialisation of the higher level concept of \\emph{interpretability}. In general, interpretability aims at developing tools to understand and investigate the behaviour of an AI system. This definition also includes tools that are external to a black-box model, as in the case of post-hoc interpretability . On the other hand, the goal of explainability is the design of \\emph{inherently interpretable} models, capable of performing transparent inference through the generation of an \\emph{explanation} for the final prediction .\nIn general, an explanation can be seen as an answer to a \\emph{how} question formulated as follows: \\emph{``How did the model arrive at the conclusion $c$ starting from the problem formulation $p$?''}. In the context of MRC, the answer to this question can be addressed by exposing the internal reasoning mechanisms linking $p$ to $c$. 
This goal can be achieved in two different ways: \n\\begin{enumerate}\n \\item \\textbf{Knowledge-based explanation:} exposing part of the relevant background knowledge connecting $p$ and $c$ in terms of supporting facts and/or inference rules; \n \\item \\textbf{Operational explanation:} composing a set of atomic operations through the generation of a symbolic program, whose execution leads to the final answer $c$.\n\\end{enumerate}\nGiven the scope of explainability in MRC, this survey reviews recent developments in \\emph{knowledge-based} and \\emph{operational explanation} (Sec. 4), emphasising the problem of \\emph{explanatory relevance} for the former -- i.e. the identification of relevant information for the construction of explanations, and of \\emph{question decomposition} for the latter -- i.e. casting a problem expressed in natural language into an executable program.", "id": "329b2d11-acc7-4022-989c-f8e798e6b4af", "level": "paragraph", "origin_cites_number": 2, "parent_id": "f85ddc30-cc46-47dc-98bd-bb29ed2b0a20", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainability in Machine Reading Comprehension" ], [ "paragraph", "The scope of explainability." ] ], "subsections": [ "0e6e5551-33c9-4797-b4df-659c6055693e" ], "title": "The scope of explainability." }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 460 ], "content": "\\begin{table}[t]\n\\small\n\\centering\n\\ra{0.9}\n\\begin{tabular}{@{}lp{6.5cm}p{6.5cm}@{}}\n\\toprule\n\\multirow{2}{*}{} &\n\\multirow{2}{*}{\\textbf{Extractive MRC}} &\n\\multirow{2}{*}{\\textbf{Abstractive MRC}}\\\\\\\\\n\\midrule\n\\textbf{Question} & When was Erik Watt's father born? & What is an example of a force producing heat?\\\\\n\\textbf{Answer} & May 5, 1939 & Two sticks getting warm when rubbed together\\\\\n\\midrule\n\\textbf{Explanation} & (1) He (Erik Watt) is the son of WWE Hall of Famer Bill Watts; (2) William F. Watts Jr. 
(born May 5, 1939) is an American former professional wrestler, promoter, and WWE Hall of Fame Inductee (2009). & (1) A stick is a kind of object; (2) To rub together means to move against; (3) Friction is a kind of force; (4) Friction occurs when two object's surfaces move against each other; (5) Friction causes the temperature of an object to increase.\\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Explanations for extractive~\\protect and abstractive~\\protect MRC.}\n\\label{tab:extractive_abstractive}\n\\end{table}\nDepending on the nature of the MRC problem, a complete explanation can include pieces of evidence at different levels of abstraction (Fig. \\ref{fig:approach}.3). Traditionally, the field has been divided into \\emph{extractive} and \\emph{abstractive} tasks (e.g. table \\ref{tab:extractive_abstractive}). In extractive MRC, the reasoning required to solve the task is derivable from the original problem formulation. In other words, the correct decomposition of the problem provides the necessary inference steps for the answer, and the role of the explanation is to fill an information gap at the \\emph{extensional level} -- i.e. identifying the correct arguments for a set of predicates, via paraphrasing and coreference resolution. As a result, explanations for extractive MRC are often expressed in the form of supporting passages retrieved from the contextual paragraphs . On the other hand, abstractive MRC tasks usually require going beyond the surface form of the problem with the inclusion of high level knowledge about abstract concepts. In this case, the explanation typically leverages the use of supporting definitions, including taxonomic relations and essential properties, to perform abstraction from the original context in search of high level rules and inference patterns . 
As the nature of the task impacts explainability, we consider the distinction between extractive and abstractive MRC throughout the survey, categorising the reviewed benchmarks and approaches according to the underlying reasoning capabilities involved in the explanations.", "id": "0e6e5551-33c9-4797-b4df-659c6055693e", "level": "paragraph", "origin_cites_number": 3, "parent_id": "329b2d11-acc7-4022-989c-f8e798e6b4af", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainability in Machine Reading Comprehension" ], [ "paragraph", "The scope of explainability." ], [ "paragraph", "Explanation and abstraction." ] ], "subsections": [], "title": "Explanation and abstraction." }, { "cite_extract_rate": 0.8095238095238091, "cites": [ 1147, 8626, 4281, 7802, 4280, 8763, 4283, 4284, 1098, 6988, 4282, 460, 7801, 8761, 8762, 4275, 4278 ], "content": "In this section we review the benchmarks that have been designed for the development and evaluation of explainable reading comprehension models. Specifically, we classify a benchmark as \\emph{explanation-supporting} if it exhibits the following properties:\n\\begin{enumerate}\n \\item \\textbf{Labelled data for training on explanations:} The benchmark includes gold explanations that can be adopted as an additional training signal for the development of explainable MRC models.\n \\item \\textbf{Design for quantitative explanation evaluation:} The benchmark supports the use of quantitative metrics for evaluating the explainability of MRC systems, or it is explicitly constructed to test explanation-related inference.\n\\end{enumerate}\nWe exclude from the review all the datasets that do not comply with at least one of these requirements. 
For a complete overview of the existing benchmarks in MRC, the reader is referred to the following surveys: .
The resulting classification of the datasets with their highlighted properties is reported in Table \ref{tab:explainable_datasets}. The benchmarks are categorised according to a set of dimensions that depend on the nature of the task -- i.e. domain, format, MRC type, multi-hop inference, and the characteristics of the explanations -- i.e. explanation type, explanation level, format of the background knowledge, and explanation representation.
\begin{table}[t]
\small
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{>{\raggedright}p{3.5cm}p{12.5cm}}
 \toprule
 \textbf{Domain} & The knowledge domain of the MRC task -- i.e. open domain (OD) , science (SCI), or commonsense (CS).\\
 \textbf{Format} & The task format -- i.e. span retrieval (Span), free-form (Free), multiple-choice (MC), textual entailment (TE).\\
 \textbf{MRC Type} & The reasoning capabilities involved -- i.e. Extractive (Extr.), Abstractive (Abstr.).\\
 \textbf{Multi-hop (MH)} & Whether the task requires the explicit composition of multiple facts to infer the answer.\\
 \midrule
 \textbf{Explanation Type (ET)} & The type of explanation -- i.e. knowledge-based (KB) or operational (OP).\\
 \textbf{Explanation Level (EL)} & The abstraction level of the explanations -- i.e. Extensional (E) or Intensional (I).\\
 \textbf{Background Knowledge (BKG)} & The format of the provided background knowledge, if present, from which to extract or construct the explanations -- i.e. single paragraph (SP), multiple paragraphs (MP), sentence corpus (C), table-store (TS), suite of atomic operations (AO).\\
 \textbf{Explanation Representation (ER)} & The explanation representation -- i.e.
single passage (S), multiple passages (M), facts composition (FC), explanation graph (EG), generated sentence (GS), symbolic program (PR).\\\\\n \\bottomrule\\\\\n \\end{tabular}}\n\\small\n\\centering\n\\begin{tabular}{p{5cm}|cccc|cccc|c}\n\\toprule\n\\textbf{Dataset} &\n\\textbf{Domain} &\n\\textbf{Format} &\n\\textbf{Type} &\n\\textbf{MH} &\n\\textbf{ET} &\n\\textbf{EL} &\n\\textbf{BKG} &\n\\textbf{ER}&\n\\textbf{Year}\\\\\n\\midrule\n\\textbf{WikiQA}~ & OD & Span & Extr. & N & KB & E & SP & S & 2015\\\\\n\\textbf{HotpotQA}~ & OD & Span & Extr. & Y & KB & E & MP & M & 2018\\\\\n\\textbf{MultiRC}~ & OD & MC & Abstr. & Y & KB & E & SP & M & 2018 \\\\\n\\textbf{OpenBookQA}~ & SCI & MC & Abstr. & Y & KB & I &C & FC & 2018\\\\\n\\textbf{Worldtree}~ & SCI & MC & Abstr. & Y & KB & I & TS & EG & 2018\\\\\n\\textbf{e-SNLI}~ & CS & TE & Abstr. & N & KB & I & - & GS & 2018\\\\\n\\textbf{Cos-E}~ & CS & MC & Abstr. & N & KB & I & - & GS & 2019\\\\\n\\textbf{WIQA}~ & SCI & MC & Abstr. & Y & KB & I & SP & EG & 2019\\\\\n\\textbf{CosmosQA}~ & CS & MC & Abstr. & N & KB & I & SP & S & 2019\\\\\n\\textbf{CoQA} & OD & Free & Extr. & N & KB & E & SP & S & 2019 \\\\\n\\textbf{Sen-Making}~ & CS & MC & Abstr.& N & KB & I & - & S & 2019\\\\\n\\textbf{ArtDataset}~ & CS & MC & Abstr. & N & KB & I & C & S,GS & 2019\\\\\n\\textbf{QASC}~ & SCI & MC & Abstr. & Y & KB & I & C & FC & 2020\\\\\n\\textbf{Worldtree V2}~ & SCI & MC & Abstr. & Y & KB & I & TS & EG & 2020\\\\\n\\textbf{R$^4$C}~ & OD & Span & Extr. & Y & KB & E & MP & EG & 2020\\\\\n\\textbf{Break}~ & OD & Free, Span & Abstr. & Y & OP & I & AO & PR & 2020\\\\\n\\textbf{R$^3$}~ & OD & Free & Abstr. 
& Y & OP & I & AO & PR & 2020\\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Classification of \\emph{explanation-supporting} benchmarks in MRC.}\n\\label{tab:explainable_datasets}\n\\end{table}", "id": "15e67409-9476-47aa-8a47-bfcb4ac8569f", "level": "section", "origin_cites_number": 21, "parent_id": "03ca5aea-6a90-479f-934f-d9ab95896c40", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explanation-supporting Benchmarks" ] ], "subsections": [ "6d576fba-6b66-4a43-adda-53c2ca4d62f2" ], "title": "Explanation-supporting Benchmarks" }, { "cite_extract_rate": 0.7916666666666661, "cites": [ 4285, 4287, 1147, 1174, 4286, 4281, 7802, 4277, 456, 1098, 6988, 4282, 1142, 460, 7801, 8761, 8762, 4275, 4278 ], "content": "In line with the general research trend in MRC, the development of explanation-supporting benchmarks is evolving towards the evaluation of complex reasoning, testing the models on their ability to go beyond the surface form of the text. \nEarly datasets on open-domain QA have framed explanation as a \\emph{sentence selection} problem , where the evidence necessary to infer the final answer is entirely encoded in a single supporting sentence. Subsequent work has started the transition towards more complex tasks that require the integration of multiple supporting facts. HotpotQA is one of the first \\emph{multi-hop} datasets introducing a leaderboard based on a quantitative evaluation of the explanations produced by the systems\\footnote{\\url{https://hotpotqa.github.io/}}. The nature of HotpotQA is still closer to extractive MRC, where the supporting facts can be derived via paraphrasing from the explicit decomposition of the questions . \nMultiRC combines multi-hop inference with various forms of abstract reasoning such as commonsense, causal relations, spatio-temporal and mathematical operations. 
The gold explanations in these benchmarks are still expressed in terms of supporting passages, leaving implicit a substantial part of the abstract inference rules adopted to derive the answer . Following HotpotQA and MultiRC, several benchmarks on open-domain tasks have gradually refined the supporting facts annotation, whose benefits have been demonstrated in terms of interpretability, bias, and performance . Moreover, recent work has focused on complementing knowledge-based explanation with operational interpretability, introducing explicit annotation for the decomposition of multi-hop and discrete reasoning questions into a sequence of atomic operations . In parallel with open-domain QA, scientific reasoning has been identified as a suitable candidate for the evaluation of explanations at a higher level of abstraction . Explanations in the scientific domain naturally mention facts about underlying regularities that are hidden in the original problem formulation and that refer to knowledge about abstract conceptual categories . The benchmarks in this domain provide gold explanations for multiple-choice science questions or related scientific tasks such as what-if questions on procedural text and explanation via sentence composition . Similarly to the scientific domain, a set of abstractive MRC benchmarks has been proposed for the evaluation of commonsense explanations . Cos-E and e-SNLI augment existing datasets for textual entailment and commonsense QA with crowd-sourced explanations, framing explainability as a natural language generation problem. Other commonsense tasks have been explicitly designed to test explanation-related inference, such as causal and abductive reasoning . Bhagavatula et al.
\shortcite{bhagavatula2019abductive} propose the tasks of Abductive Natural Language Inference ($\alpha$NLI) and Abductive Natural Language Generation ($\alpha$NLG), where MRC models are required to select or generate the hypothesis that best explains a set of observations.", "id": "6d576fba-6b66-4a43-adda-53c2ca4d62f2", "level": "paragraph", "origin_cites_number": 24, "parent_id": "15e67409-9476-47aa-8a47-bfcb4ac8569f", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explanation-supporting Benchmarks" ], [ "paragraph", "Towards abstractive and explainable MRC." ] ], "subsections": [ "0e2bef1b-e40c-4370-9b45-e71c65df80fd" ], "title": "Towards abstractive and explainable MRC." }, { "cite_extract_rate": 0.5, "cites": [ 1143, 460, 1147, 4275 ], "content": "The ability to construct explanations in MRC is typically associated with multi-hop reasoning. However, the nature and the structure of the inference can differ greatly according to the specific task. In extractive MRC , multi-hop reasoning often consists in the identification of bridge entities, or in the extraction and comparison of information encoded in different passages. On the other hand, it has been observed that complete explanations for science questions require an average of 6 facts classified into three main explanatory roles: \emph{grounding facts} and \emph{lexical glues} have the function of connecting the specific concepts in the question with abstract conceptual categories, while \emph{central facts} refer to high-level explanatory knowledge. Similarly, OpenbookQA provides annotations for the core explanatory sentences, which can only be inferred by performing multi-hop reasoning through the integration of external commonsense knowledge. In general, the number of hops needed to construct the explanations is correlated with \emph{semantic drift} -- i.e. the tendency of composing spurious inference chains that lead to wrong conclusions . 
Recent explanation-supporting benchmarks attempt to limit this phenomenon by providing additional signals to learn abstract composition schemes, via the explicit annotation of valid inference chains or the identification of common explanatory patterns .", "id": "0e2bef1b-e40c-4370-9b45-e71c65df80fd", "level": "paragraph", "origin_cites_number": 8, "parent_id": "6d576fba-6b66-4a43-adda-53c2ca4d62f2", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explanation-supporting Benchmarks" ], [ "paragraph", "Towards abstractive and explainable MRC." ], [ "paragraph", "Multi-hop reasoning and explanation." ] ], "subsections": [], "title": "Multi-hop reasoning and explanation." }, { "cite_extract_rate": 1, "cites": [ 8385, 4281, 8762 ], "content": "This section describes the major architectural trends for Explainable MRC (X-MRC). The approaches are broadly classified according to the nature of the MRC task they are applied to -- i.e. extractive or abstractive. 
In order to elicit architectural trends, we further categorise the approaches as described in Table~\\ref{tab:approach_category}.\n\\begin{table}[t]\n \\centering\n \\small\n \\resizebox{\\textwidth}{!}{\n \\begin{tabular}{>{\\raggedright}p{2.5cm}p{13cm}}\n \\toprule\n \\textbf{Explanation Type} & (1) Knowledge-based explanation; (2) Operational-based explanation \\\\\n \\midrule\n \\textbf{Learning method} & (1) Unsupervised (US): Does not require any annotated data; (2) Strongly Supervised (SS): Requires gold explanations for training or inference; (3) Distantly Supervised (DS): Treats explanation as a latent variable training only on problem-solution pairs.\\\\\n \\midrule\n \\textbf{Generated Output} & Denotes whether the explanation is generated or composed from facts retrieved from the background knowledge.\\\\\n \\midrule\n \\textbf{Multi-Hop} & Denotes whether the approach is designed for multi-hop reasoning \\\\\n \\bottomrule\n \\end{tabular}}\n \\caption{Categories adopted for the classification of Explainable MRC approaches.}\n \\label{tab:approach_category}\n\\end{table}\nFigure~\\ref{fig:approaches} illustrates the resulting classification when \nconsidering the underlying architectural components.\nIf an approach employs distinct modules for explanation generation and answer prediction, the latter is marked as $\\bigtriangleup$. For these instances, we only consider the categorization for the explanation extraction module.\nAdmittedly, the boundaries of these categories can be quite fuzzy. For instance, pre-trained embeddings such as ELMo~ are composed of recurrent neural networks, and transformers are composed of attention networks. In cases like these, we only consider the larger component that subsumes the smaller one. If approaches employ both architectures, but as different functional modules, we plot them separately. 
\n\\begin{figure}[t]\n\\centering\n\\subfloat[Explainable Abstractive MRC Approaches]{{\\includegraphics[width=0.46\\textwidth]{images/abstractive.png} }}\n\\qquad\n\\subfloat[Explainable Extractive MRC Approaches]{{\\includegraphics[width=0.46\\textwidth]{images/extractive.png} }}\n\\caption{\\small Explainable Machine Reading Comprehension (MRC) approaches. \\textbf{Operational Explanations}: (\\textbf{O}), \\textbf{Knowledge-based Explanations}: (\\textit{K}), \\textbf{Operational and Knowledge-based Explanations}: (\\textbf{\\textit{K,O}})\n\\textbf{Learning}: Unsupervised (\\tikzcircle{3pt}), Distantly Supervised (\\tikzcircle{3pt,OliveGreen}), Strongly Supervised (\\tikzcircle{3pt,Blue}).\n\\textbf{Generated Output}: (\\tikzcircle[black,fill=white]{4pt}). \\textbf{Multi Hop}: ($\\fbox{$\\phantom{5}$}$). \\textbf{Answer Selection Module:} ($\\bigtriangleup$). \\textbf{Architectures}: \\textsc{Weighting Schemes} (\\underline{WS}): Document and query weighting schemes consist of information retrieval systems that use any form of vector space scoring system, \\textsc{Heuristics} (\\underline{HS}): Hand-coded heuristics and scoring functions, \\textsc{Linear Programming} (\\underline{LP}), \\textsc{Convolutional Neural Network} (\\underline{CNN}), \\textsc{Recurrent Neural Networks} (\\underline{RNN}), \\textsc{Pre-Trained Embeddings} (\\underline{Emb}), \\textsc{Attention Network} (\\underline{Att}), \\textsc{Transformers} (\\underline{TR}), \\textsc{Graph Neural Networks} (\\underline{GN}), \\textsc{Neuro-Symbolic} (\\underline{NS}) and \\textsc{Others}.\n}\n\\label{fig:approaches}\n\\end{figure}\nIn general, we observe an overall shift towards supervised methods over the years for both abstractive and extractive MRC. 
\nWe posit that the advent of explanation-supporting datasets has facilitated the adoption of complex supervised neural architectures.\nMoreover, as shown in the classification, the majority of the approaches are designed for knowledge-based explanation. We attribute this phenomenon to the absence of large-scale datasets for operational interpretability until 2020. However, we note a recent uptake of distantly supervised approaches. We believe that further progress can be made with the introduction of novel datasets supporting symbolic question decomposition such as Break and R$^3$ (See Sec.~\\ref{tab:explainable_datasets}).", "id": "70bbe41f-8ace-48f8-96e7-be40ad390f2b", "level": "section", "origin_cites_number": 3, "parent_id": "03ca5aea-6a90-479f-934f-d9ab95896c40", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainable MRC Architectures" ] ], "subsections": [ "74afb577-8534-43f2-86de-e72f1ead360a", "ffa18066-38ff-4407-9045-6f4b5d08d7a6" ], "title": "Explainable MRC Architectures" }, { "cite_extract_rate": 0, "cites": [], "content": "This section reviews the main approaches adopted for modeling \\emph{explanatory relevance}, namely the problem of identifying the relevant information for the construction of \\emph{knowledge-based explanations}. 
\nWe group the models into three main categories: \\textit{Explicit}, \\textit{Latent}, and \\textit{Hybrid}.", "id": "74afb577-8534-43f2-86de-e72f1ead360a", "level": "subsection", "origin_cites_number": 0, "parent_id": "70bbe41f-8ace-48f8-96e7-be40ad390f2b", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainable MRC Architectures" ], [ "subsection", "Modeling Explanatory Relevance for Knowledge-based Explanations" ] ], "subsections": [ "29ced539-a3f8-4ada-a1f8-42aa91cc793b", "7d111ebf-fbbc-455e-b2a2-e4338e190d91", "5df46897-94e6-4356-bd8b-da416a4fec3b" ], "title": "Modeling Explanatory Relevance for Knowledge-based Explanations" }, { "cite_extract_rate": 0, "cites": [], "content": "Explicit models typically adopt heuristics and hand-crafted constraints to encode high level hypotheses of explanatory relevance. The major architectural patterns are listed below:", "id": "29ced539-a3f8-4ada-a1f8-42aa91cc793b", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "74afb577-8534-43f2-86de-e72f1ead360a", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainable MRC Architectures" ], [ "subsection", "Modeling Explanatory Relevance for Knowledge-based Explanations" ], [ "subsubsection", "Explicit Models" ] ], "subsections": [ "3b5c1596-41e0-45df-bfca-c010f29c69b6", "b13d9377-4770-49f0-b4b8-346de4f1f003", "b53d1662-a252-4737-af63-ae5fc07353c8" ], "title": "Explicit Models" }, { "cite_extract_rate": 1, "cites": [ 4288, 4290, 4289 ], "content": "Linear programming has been used for modeling semantic and structural constrains in an unsupervised fashion.\nEarly LP systems, such as TableILP~, formulate the construction of explanations as an optimal sub-graph selection problem over a set of semi-structured tables. 
Subsequent approaches~ 
have proposed methods to reason over textual corpora via semantic abstraction, leveraging semi-structured representations automatically extracted through Semantic Role Labeling, OpenIE, and Named Entity Recognition. 
Approaches based on LP have been effectively applied for multiple-choice science questions, when no gold explanation is available for strong supervision.", "id": "3b5c1596-41e0-45df-bfca-c010f29c69b6", "level": "paragraph", "origin_cites_number": 3, "parent_id": "29ced539-a3f8-4ada-a1f8-42aa91cc793b", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainable MRC Architectures" ], [ "subsection", "Modeling Explanatory Relevance for Knowledge-based Explanations" ], [ "subsubsection", "Explicit Models" ], [ "paragraph", "Linear Programming (LP):" ] ], "subsections": [], "title": "Linear Programming (LP):" }, { "cite_extract_rate": 0.5, "cites": [ 3154 ], "content": "The integration of heuristics and weighting schemes has been demonstrated to be effective for the implementation of lightweight methods that are inherently scalable to large corpora and knowledge bases. 
In the open-domain, approaches based on lemma overlaps and weighted triplet scoring functions have been proposed~, along with path-based heuristics implemented with the auxiliary use of external knowledge bases ~. 
Similarly, path-based heuristics have been adopted for commonsense tasks, where Lv et al.~\shortcite{lv2019graph} propose a path extraction technique based on question coverage. For scientific and multi-hop MRC, Yadav et al.~\shortcite{yadav2019quick} propose ROCC, an unsupervised method to retrieve multi-hop explanations that maximise relevance and coverage while minimising overlaps between intermediate hops. 
Valentino et al.~\\shortcite{valentino2020unification,valentino2020explainable} present an explanation reconstruction framework for multiple-choice science questions based on the notion of unification in science. The unification-based framework models explanatory relevance using two scoring functions: a relevance score representing lexical similarity, and a unification score denoting the explanatory power of fact, depending on its frequency in explanations for similar cases.", "id": "b13d9377-4770-49f0-b4b8-346de4f1f003", "level": "paragraph", "origin_cites_number": 2, "parent_id": "29ced539-a3f8-4ada-a1f8-42aa91cc793b", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainable MRC Architectures" ], [ "subsection", "Modeling Explanatory Relevance for Knowledge-based Explanations" ], [ "subsubsection", "Explicit Models" ], [ "paragraph", "Weighting schemes with heuristics:" ] ], "subsections": [], "title": "Weighting schemes with heuristics:" }, { "cite_extract_rate": 0.75, "cites": [ 4291, 1684, 1123 ], "content": "Pre-trained embeddings have the advantage of capturing semantic similarity, going beyond the lexical overlaps limitation imposed by the use of weighting schemes. This property has been shown to be useful for multi-hop and abstractive tasks, where approaches based on pre-trained word embeddings, such as GloVe , have been adopted to perform semantic alignment between question, answer and justification sentences . \nSilva et al.~\\shortcite{silva2018recognizing,silva2019exploring} employ word embeddings and semantic similarity scores to perform selective reasoning on commonsense knowledge graphs and construct explanations for textual entailment. 
Similarly, knowledge graph embeddings, such as TransE~, have been adopted for extracting reasoning paths for commonsense QA~.", "id": "b53d1662-a252-4737-af63-ae5fc07353c8", "level": "paragraph", "origin_cites_number": 4, "parent_id": "29ced539-a3f8-4ada-a1f8-42aa91cc793b", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainable MRC Architectures" ], [ "subsection", "Modeling Explanatory Relevance for Knowledge-based Explanations" ], [ "subsubsection", "Explicit Models" ], [ "paragraph", "Pre-trained embeddings with heuristics:" ] ], "subsections": [], "title": "Pre-trained embeddings with heuristics:" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:latent_model}\nLatent models \nlearn the notion of explanatory relevance implicitly through the use of machine learning techniques such as neural embeddings and neural language models. The architectural clusters adopting latent modeling are classified as follows:", "id": "7d111ebf-fbbc-455e-b2a2-e4338e190d91", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "74afb577-8534-43f2-86de-e72f1ead360a", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainable MRC Architectures" ], [ "subsection", "Modeling Explanatory Relevance for Knowledge-based Explanations" ], [ "subsubsection", "Latent Models" ] ], "subsections": [ "5b1db0ca-6ed8-4309-8f6c-767c8273191f", "0cf2d886-f420-4b93-9ba5-619748a9a28d", "718e9d67-658a-4372-9fe2-94b2dca60ce1", "4041fb3c-8c87-469b-b54f-471794259cbf" ], "title": "Latent Models" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 5, 4293, 1139, 4294, 4292 ], "content": "This category refers to a set of neural approaches proposed for the \\emph{answer sentence selection} problem. \nThese approaches typically adopt deep \nlearning architectures, such as RNN, CNN and Attention networks via strong or distant supervision. 
Strongly supervised approaches~ are trained on gold supporting sentences. In contrast, distantly supervised techniques indirectly learn to extract the supporting sentence by training on the final answer. Attention mechanisms have been frequently used for distant supervision~ to highlight the attended explanation sentence in the contextual passage. Other distantly supervised approaches model the sentence selection problem through the use of latent variables~.", "id": "5b1db0ca-6ed8-4309-8f6c-767c8273191f", "level": "paragraph", "origin_cites_number": 6, "parent_id": "7d111ebf-fbbc-455e-b2a2-e4338e190d91", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainable MRC Architectures" ], [ "subsection", "Modeling Explanatory Relevance for Knowledge-based Explanations" ], [ "subsubsection", "Latent Models" ], [ "paragraph", "Neural models for sentence selection:" ] ], "subsections": [], "title": "Neural models for sentence selection:" }, { "cite_extract_rate": 0.777777777777777, "cites": [ 4291, 4295, 4275, 460, 7, 1147, 38 ], "content": "Transformer-based architectures have been successfully applied to learn explanatory relevance in both extractive and abstractive MRC tasks. Banerjee~\shortcite{banerjee2019asu} and Chia et al.~\shortcite{chia2019red} adopt a BERT model to learn to rank explanatory facts in the scientific domain. Shao et al.~\shortcite{shao2020graph} employ transformers with self-attention on multi-hop QA datasets , demonstrating that the attention layers implicitly capture high-level relations in the text. 
The Quartet model has been adopted for reasoning on procedural text and 
producing structured explanations based on qualitative effects and interactions between concepts.
In the distant supervision setting, Niu et al.~\shortcite{niu2020self} address the lack of gold explanations by training a self-supervised evidence extractor with auto-generated labels in an iterative process. Banerjee and Bara \shortcite{banerjee2020knowledge} propose a semantic ranking model based on BERT for QASC and OpenBookQA . Transformers have shown improved performance on downstream answer prediction tasks when applied in combination with explanations constructed through explicit models~.", "id": "0cf2d886-f420-4b93-9ba5-619748a9a28d", "level": "paragraph", "origin_cites_number": 9, "parent_id": "7d111ebf-fbbc-455e-b2a2-e4338e190d91", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainable MRC Architectures" ], [ "subsection", "Modeling Explanatory Relevance for Knowledge-based Explanations" ], [ "subsubsection", "Latent Models" ], [ "paragraph", "Transformers for multi-hop reasoning:" ] ], "subsections": [], "title": "Transformers for multi-hop reasoning:" }, { "cite_extract_rate": 1, "cites": [ 9092 ], "content": "Similar to transformer-based approaches, attention networks have also been employed to extract relevant explanatory facts. However, attention networks are 
usually applied in combination with other neural modules. For HotpotQA, Yang et al.~\shortcite{yang2018hotpotqa} propose a model trained in a multi-task setting on both gold explanations and answers, composed of recurrent neural networks and attention layers. 
Nishida et al.~\shortcite{nishida2019answering} introduce a similarly structured model with a query-focused extractor designed to elicit explanations. 
The distantly supervised MUPPET model~ captures the relevance between question and supporting facts through bi-directional attention on sentence vectors encoded using pre-trained embeddings, CNN, and RNN. In the scientific domain, Trivedi et al.~\shortcite{trivedi2019repurposing} repurpose existing textual entailment datasets to learn the supporting facts relevance for multi-hop QA. Khot et al.~\shortcite{khot2019s} propose a knowledge gap guided framework to construct explanations for OpenBookQA.", "id": "718e9d67-658a-4372-9fe2-94b2dca60ce1", "level": "paragraph", "origin_cites_number": 1, "parent_id": "7d111ebf-fbbc-455e-b2a2-e4338e190d91", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainable MRC Architectures" ], [ "subsection", "Modeling Explanatory Relevance for Knowledge-based Explanations" ], [ "subsubsection", "Latent Models" ], [ "paragraph", "Attention networks for multi-hop reasoning:" ] ], "subsections": [], "title": "Attention networks for multi-hop reasoning:" }, { "cite_extract_rate": 0.75, "cites": [ 4296, 7802, 7801 ], "content": "Recent developments in language modeling along with the creation of explanation-supporting benchmarks, such as e-SNLI and Cos-E , have opened up the possibility of automatically generating semantically plausible and coherent explanation sentences. 
Language models, such as GPT-2~, have been adopted for producing commonsense explanations, whose application has demonstrated benefits in terms of accuracy and zero-shot generalisation . Kumar and Talukdar~\shortcite{kumar2020nile} present a similar approach for natural language inference, generating explanations for entailment, neutral and contradiction labels. e-SNLI~ presents a baseline based on a Bi-LSTM encoder-decoder with attention. 
Lukasiewicz et al.~\\shortcite{lukasiewiczmake} enhance this baseline by proposing an adversarial framework to generate more consistent and plausible explanations.", "id": "4041fb3c-8c87-469b-b54f-471794259cbf", "level": "paragraph", "origin_cites_number": 4, "parent_id": "7d111ebf-fbbc-455e-b2a2-e4338e190d91", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainable MRC Architectures" ], [ "subsection", "Modeling Explanatory Relevance for Knowledge-based Explanations" ], [ "subsubsection", "Latent Models" ], [ "paragraph", "Language generation models:" ] ], "subsections": [], "title": "Language generation models:" }, { "cite_extract_rate": 0, "cites": [], "content": "Hybrid models adopt heuristics and hand-crafted constraints as a pre-processing step to impose an explicit inductive bias for explanatory relevance. The major architectural patterns are listed below:", "id": "5df46897-94e6-4356-bd8b-da416a4fec3b", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "74afb577-8534-43f2-86de-e72f1ead360a", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainable MRC Architectures" ], [ "subsection", "Modeling Explanatory Relevance for Knowledge-based Explanations" ], [ "subsubsection", "Hybrid Models" ] ], "subsections": [ "5c461c34-c0f3-4ab2-8c2e-1055d2e6b266", "65879cf5-b58d-411d-9954-b66029fee02e" ], "title": "Hybrid Models" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 208, 4298, 4297, 1075, 460, 1123 ], "content": "The relational inductive bias encoded in Graph Networks provides a viable support for reasoning and learning over structured representations. This characteristic has been identified as particularly suitable for supporting facts selection in multi-hop MRC tasks. A set of graph-based architectures have been proposed for multi-hop reasoning in HotpotQA . 
Ye et al.~\\shortcite{ye2019multi} build a graph using sentence vectors as nodes \nand edges connecting sentences that share the same named entities. Similarly, Tu et al.~\\shortcite{tu2019select} construct a graph connecting sentences that are part of the same document, share noun phrases, and have named entities or noun phrases in common with the question. \nThayaparan et al.~\\shortcite{thayaparan2019identifying} propose a graph structure including both documents and sentences as nodes. The graph connects documents that mention the same named entities. To improve scalability, the Dynamically Fused Graph Network (DFGN)~ adopts a dynamic construction of the graph, starting from the entities in the question and gradually selecting the supporting facts. Similarly, Ding et al.~\\shortcite{ding2019cognitive} implement a dynamic graph exploration inspired by the dual-process theory~. \nThe Hierarchical Graph Network leverages a hierarchical graph representation of the background knowledge (i.e. question, paragraphs, sentences, and entities). 
In parallel with extractive MRC tasks, Graph Networks are applied for answer selection on commonsense reasoning, where a subset of approaches has started exploring the use of explanation graphs extracted from external knowledge bases through path-based heuristics .", "id": "5c461c34-c0f3-4ab2-8c2e-1055d2e6b266", "level": "paragraph", "origin_cites_number": 9, "parent_id": "5df46897-94e6-4356-bd8b-da416a4fec3b", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainable MRC Architectures" ], [ "subsection", "Modeling Explanatory Relevance for Knowledge-based Explanations" ], [ "subsubsection", "Hybrid Models" ], [ "paragraph", "Graph Networks:" ] ], "subsections": [], "title": "Graph Networks:" }, { "cite_extract_rate": 1, "cites": [ 4300, 4299, 9102, 4301, 8626 ], "content": "A subset of approaches has introduced end-to-end frameworks explicitly designed to emulate the step-by-step reasoning process involved in multi-hop MRC~. The baseline approach proposed for Abductive Natural Language Inference~ builds chains composed of hypotheses and observations, and encodes them using transformers to identify the most plausible explanatory hypothesis. Similarly, Das et al.~\\shortcite{das2019chains} embed the reasoning chains retrieved via TF-IDF and lexical overlaps using a BERT model to identify plausible explanatory patterns for multiple-choice science questions. In the open domain, Asai et al.~\\shortcite{asai2019learning} build a graph structure using entities and hyperlinks and adopt recurrent neural networks to retrieve relevant documents sequentially. Nie et al.~\\shortcite{nie2019revealing} introduce a step-by-step reasoning process that first retrieves the relevant paragraph, then the supporting sentence, and finally, the answer. 
Dhingra et al.~\\shortcite{dhingra2020differentiable} propose an end-to-end differentiable model that uses Maximum Inner Product Search (MIPS)~ to query a virtual knowledge base and extract a set of reasoning chains. Feng et al.~\\shortcite{feng2020learning} propose a cooperative game approach to select the most relevant explanatory chains from a large set of candidates. In contrast to neural-based methods, Weber et al.~\\shortcite{weber2019nlprolog} propose a neuro-symbolic approach for multi-hop reasoning that extends the unification algorithm in Prolog with pre-trained sentence embeddings.", "id": "65879cf5-b58d-411d-9954-b66029fee02e", "level": "paragraph", "origin_cites_number": 5, "parent_id": "5df46897-94e6-4356-bd8b-da416a4fec3b", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainable MRC Architectures" ], [ "subsection", "Modeling Explanatory Relevance for Knowledge-based Explanations" ], [ "subsubsection", "Hybrid Models" ], [ "paragraph", "Explicit inference chains for multi-hop reasoning:" ] ], "subsections": [], "title": "Explicit inference chains for multi-hop reasoning:" }, { "cite_extract_rate": 0, "cites": [], "content": "Operational explanations aim at providing interpretability by exposing the set of operations adopted to arrive at the final answer. 
This section reviews the main architectural patterns for operational interpretability that focus on the problem of casting a question into an executable program.", "id": "ffa18066-38ff-4407-9045-6f4b5d08d7a6", "level": "subsection", "origin_cites_number": 0, "parent_id": "70bbe41f-8ace-48f8-96e7-be40ad390f2b", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainable MRC Architectures" ], [ "subsection", "Operational Explanation" ] ], "subsections": [ "2462188e-cd48-4e02-8a37-bc236260eb93" ], "title": "Operational Explanation" }, { "cite_extract_rate": 1, "cites": [ 8762, 1142, 4302, 4303 ], "content": "Neuro-symbolic approaches combine neural models with symbolic programs.\nLiu and Gardner~\\shortcite{liu2020multi} propose a multi-step inference model with three primary operations: Select, Chain, and Predict. The Select operation retrieves the relevant knowledge; the Chain operation composes the background knowledge together; the Predict operation selects the final answer. Jiang and Bansal~\\shortcite{jiang2019self} propose the adoption of Neural Module Networks~ for multi-hop QA by designing four atomic neural modules (Find, Relocate, Compare, NoOp) that allow for both operational explanation and supporting facts selection. Similarly, Gupta et al.~\\shortcite{gupta2019neural} adopt Neural Module Networks to perform discrete reasoning on DROP . In contrast, Chen et al.~\\shortcite{chen2019neural} propose an architecture based on LSTM, attention modules, and transformers to generate compositional programs. 
While most of the neuro-symbolic approaches are distantly supervised, the recent introduction of question decomposition datasets~ allows for a direct supervision of symbolic program generation .", "id": "2462188e-cd48-4e02-8a37-bc236260eb93", "level": "paragraph", "origin_cites_number": 4, "parent_id": "ffa18066-38ff-4407-9045-6f4b5d08d7a6", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainable MRC Architectures" ], [ "subsection", "Operational Explanation" ], [ "paragraph", "Neuro-Symbolic models:" ] ], "subsections": [ "619931af-64a9-47a4-a532-e696b9675768" ], "title": "Neuro-Symbolic models:" }, { "cite_extract_rate": 0, "cites": [], "content": "The approaches in this category aim at breaking multi-hop questions into single-hop queries that are simpler to solve. The decomposition allows for the application of divide-et-impera methods where the solutions for the single-hop queries are computed individually and subsequently merged to derive the final answer. Perez et al.~\\shortcite{perez2020unsupervised} propose an unsupervised decomposition method for the HotpotQA dataset. Min et al.~\\shortcite{min2019multi} frame question decomposition as a span prediction problem adopting supervised learning with a small set of annotated data. 
Qi et al.~\\shortcite{qi2019answering} propose GOLDEN Retriever, a scalable method to generate search queries for multi-hop QA, enabling the application of off-the-shelf information retrieval systems for the selection of supporting facts.", "id": "619931af-64a9-47a4-a532-e696b9675768", "level": "paragraph", "origin_cites_number": 0, "parent_id": "2462188e-cd48-4e02-8a37-bc236260eb93", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Explainable MRC Architectures" ], [ "subsection", "Operational Explanation" ], [ "paragraph", "Neuro-Symbolic models:" ], [ "paragraph", "Multi-hop question decomposition:" ] ], "subsections": [], "title": "Multi-hop question decomposition:" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 2257, 460, 7801 ], "content": "\\label{sec:evaluation}\nThe development of explanation-supporting benchmarks has allowed for a quantitative evaluation of the explainability in MRC. In open-domain settings, Exact Matching (EM) and F1 score are often employed for evaluating the supporting facts , while explanations for multiple-choice science questions have been evaluated using ranking-based metrics such as Mean Average Precision (MAP) . In contexts where the explanations are produced by language models, natural language generation metrics have been adopted, such as BLEU score and perplexity . 
Human evaluation still plays an important role, especially for distantly supervised approaches applied on benchmarks that do not provide labelled explanations.", "id": "cdcc17cb-f2d2-4dee-a997-023bad38e00d", "level": "section", "origin_cites_number": 5, "parent_id": "03ca5aea-6a90-479f-934f-d9ab95896c40", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Evaluation" ] ], "subsections": [ "c888df75-7c18-4652-a987-b7917036a374" ], "title": "Evaluation" }, { "cite_extract_rate": 1, "cites": [ 4292, 4304, 4299, 4293 ], "content": "Since annotated explanations are expensive and not readily available, approaches automatically curate \\textit{silver} explanations for training. For single-hop Extractive MRC, both and use the oracle sentence (the sentence containing the answer span) as the descriptive explanation. Similarly, for multi-hop Extractive MRC, approaches~ extract explanations by building a path connecting the question to the oracle sentence, linking multiple sentences using an inter-sentence knowledge representation. Since there might be multiple paths connecting question and answer, to determine the best path, ~ uses the shortest path with the highest lexical overlap and ~ employs Integer Linear Programming (ILP) with hand-coded heuristics.", "id": "c888df75-7c18-4652-a987-b7917036a374", "level": "subsection", "origin_cites_number": 4, "parent_id": "cdcc17cb-f2d2-4dee-a997-023bad38e00d", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Evaluation" ], [ "subsection", "Silver Explanations" ] ], "subsections": [ "25af0825-5a1a-4017-88bc-3254bd7e47e7" ], "title": "Silver Explanations" }, { "cite_extract_rate": 0.75, "cites": [ 4307, 4306, 4303, 4285, 8764, 4305 ], "content": "Evaluating explainability through multi-hop reasoning still presents several challenges . 
Recent works have demonstrated that some of the questions in multi-hop QA datasets do not require multi-hop reasoning or can be answered by exploiting statistical shortcuts in the data . In parallel, other works have shown that a consistent part of the expected reasoning capabilities for a proper evaluation of reading comprehension are missing in several benchmarks . A set of possible solutions have been proposed to overcome some of the reported issues, including the creation of evaluation frameworks for the gold standards , the development of novel metrics for multi-hop reasoning , and the adoption of adversarial training techniques . A related research problem concerns the faithfulness of the explanations. Subramanian et al. \\shortcite{subramanian2020obtaining} observe that some of the modules in compositional neural networks , particularly suited for operational interpretability, do not perform their intended behaviour. To improve faithfulness the authors suggest novel architectural design choices and propose the use of auxiliary supervision.", "id": "25af0825-5a1a-4017-88bc-3254bd7e47e7", "level": "paragraph", "origin_cites_number": 8, "parent_id": "c888df75-7c18-4652-a987-b7917036a374", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Evaluation" ], [ "subsection", "Silver Explanations" ], [ "paragraph", "Evaluating multi-hop reasoning" ] ], "subsections": [], "title": "Evaluating multi-hop reasoning" }, { "cite_extract_rate": 0.7746478873239431, "cites": [ 4325, 4291, 4328, 4326, 5, 4329, 4295, 4327, 7803, 4324, 4296, 4323, 3154, 1798, 4318, 4320, 4297, 4278, 4321, 1070, 4315, 4317, 4319, 4294, 4314, 4286, 4301, 1123, 4289, 8626, 4281, 7802, 4300, 4304, 4310, 1139, 4292, 4298, 4293, 4316, 8765, 1075, 4308, 4312, 4288, 4309, 4299, 9092, 460, 4290, 4313, 4322, 7801, 8762, 4311 ], "content": "This survey has proposed a systematic categorisation of benchmarks and approaches for explainability in MRC. 
Lastly, we outline a set of open research questions for future work: \n\\begin{enumerate}\n \\item \\textbf{Contrastive Explanations:} while contrastive and counterfactual explanations are becoming central in Explainable AI , this type of explanation is still under-explored for MRC. We believe that contrastive explanations can lead to the development of novel reasoning paradigms, especially in the context of multiple-choice science and commonsense QA\n \\item \\textbf{Benchmark Design:} to advance research in explainability it is fundamental to develop reliable methods for explanation evaluation, overcoming the issues presented in Section \\ref{sec:evaluation}, and to identify techniques for scaling up the annotation of gold explanations \n \\item \\textbf{Knowledge Representation:} the combination of explicit and latent representations has been useful for explainability in multi-hop, extractive MRC. An open research question is understanding whether a similar paradigm can be beneficial for abstractive tasks to limit the phenomenon of semantic drift observed in many-hop reasoning\n \\item \\textbf{Supervised Program Generation:} large-scale benchmarks for operational explanations have been released just recently . 
We believe that these corpora open up the possibility to explore strongly supervised methods to improve accuracy and faithfulness in compositional neural networks and symbolic program generation.\n\\end{enumerate}\n\\bibliographystyle{acl}\n\\bibliography{coling2020,approach}\n\\end{document}", "id": "42f2c83a-4225-46e7-9b7f-f595d1fda0c2", "level": "section", "origin_cites_number": 71, "parent_id": "03ca5aea-6a90-479f-934f-d9ab95896c40", "prefix_titles": [ [ "title", "A Survey on Explainability in Machine Reading Comprehension" ], [ "section", "Conclusion and Open Research Questions" ] ], "subsections": [], "title": "Conclusion and Open Research Questions" } ]
42
[ 456, 444, 1798, 439, 460, 4274, 8761, 4275, 4277, 2396, 4276, 4278, 7802, 7801, 4279, 7303, 1147, 8626, 4281, 4280, 8763, 4283, 4284, 1098, 6988, 4282, 8762, 4285, 4287, 1174, 4286, 1142, 1143, 8385, 4288, 4290, 4289, 3154, 4291, 1684, 1123, 5, 4293, 1139, 4294, 4292, 4295, 7, 38, 9092, 4296, 208, 4298, 4297, 1075, 4300, 4299, 9102, 4301, 4302, 4303, 2257, 4304, 4307, 4306, 8764, 4305, 4325, 4328, 4326, 4329, 4327, 7803, 4324, 4323, 4318, 4320, 4321, 1070, 4315, 4317, 4319, 4314, 4310, 4316, 8765, 4308, 4312, 4309, 4313, 4322, 4311 ]
0.948755
[ "Saim Ghafoor", "Noureddine Boujnah", "Mubashir Husain Rehmani", "Alan Davy" ]
MAC Protocols for Terahertz Communication: A Comprehensive Survey
2019
2019-04-25T16:34:35Z
cs.NI
Terahertz communication is emerging as a future technology to support Terabit-per-second links, with key features such as high throughput and negligible latency. However, the unique features of the Terahertz band, such as high path loss, scattering, and reflection, pose new challenges and result in short communication distances. Antenna directionality, in turn, is required to extend the communication distance and to overcome the high path loss. Combined, however, these features negate the use of traditional medium access control (MAC) protocols. Therefore, novel MAC protocol designs are required to fully exploit the potential benefits of the band, including efficient channel access, control message exchange, link establishment, mobility management, and line-of-sight blockage mitigation. An in-depth survey of Terahertz MAC protocols is presented in this paper. The paper highlights the key features of the Terahertz band which should be considered while designing an efficient Terahertz MAC protocol, and the decisions which, if taken at the Terahertz MAC layer, can enhance network performance. Different Terahertz applications at macro and nano scales are highlighted, along with the design requirements for their MAC protocols. The MAC protocol design issues and considerations are discussed. Further, the existing MAC protocols are classified based on network topology, channel access mechanisms, and link establishment strategies (transmitter- and receiver-initiated communication). The open challenges and future research directions on Terahertz MAC protocols are also highlighted.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "068ac4ba-0bb0-453c-9e5f-1071f02038e9", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ] ], "subsections": [ "61eb8f42-e6fe-4301-9da9-bfacc3754ecb", "03ea239c-b181-43e3-ae94-81f1373631bc", "f7836041-01d7-468b-aaaf-0b8915fb280f", "e087585c-1918-49f8-acdb-ad6a59eee45b", "b6b2cf16-6437-44c6-9983-ab2c4618a9b4", "531de97c-bbd8-4f31-a238-f8cce366e033", "c64882d2-b2f6-454b-8909-b35d498b09e0", "fe299020-4e95-45fc-964a-5394cc33dc39", "67d10fcc-9af2-4057-9070-d37a680aa748" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:intro}\nThe demand for wireless data traffic has increased significantly since the evolution of the Internet and mobile technology, and is projected to exceed Petabytes by 2021~. Although existing wireless technology is approaching the capacity of wired technology, it still does not meet the demands of future ultra-high-bandwidth communication networks. The spectrum at and below 60 GHz remains orders of magnitude below the targeted Terabits per second (Tbps) link speed. Free-space optical (FSO) communication, which operates at infrared (IR) frequencies, also has several issues that limit the practicality of such systems for personal wireless communications~. In this perspective, the Terahertz (THz) band from 0.1 to 10 THz has the potential to provide up to Tbps link speeds to satisfy beyond fifth-generation (5G) communication requirements such as high throughput and low latency~. 
The Terahertz bands offer much larger bandwidth (up to 1 THz) than the existing millimeter-wave (mmWave) systems (up to 10 GHz)~.\nWhile the technology is rapidly advancing with new transceiver architectures, materials, antenna designs, channel/propagation models, and physical layer techniques, there still exist several research challenges that need to be addressed before Tbps links can be achieved. Among these different fields of interest, Medium Access Control (MAC) is the least explored area of research in Terahertz communication networks. The existing MAC protocols of traditional networks cannot be directly applied, because they do not consider the unique features of the Terahertz band, such as path and molecular absorption losses, multipath, reflection, and scattering. Therefore, novel and efficient MAC protocols are required which consider the features of the Terahertz bands and the antenna requirements. In this paper, a comprehensive survey on Terahertz MAC protocols is presented, covering classification, design issues and considerations, requirements for different application areas, and challenges. 
The acronyms used commonly throughout this survey are shown in Table~\\ref{tab:acronyms}.\n\\begin{table}[htbp]\n \\centering\n \\small\n \\caption{Acronym definitions used throughout this survey.}\n \\begin{tabular}{ll} \\hline \\hline\n \\multicolumn{1}{l}{\\textbf{Acronyms}} & \\multicolumn{1}{l}{\\textbf{Definitions}} \\\\ \\hline\n 5G & Fifth generation \\\\\n ACK & Acknowledgement \\\\\n AP & Access point \\\\\n BER & Bit error rate \\\\\n CA\t & Collision avoidance \\\\\n CSMA & Carrier sense multiple access\\\\\n CTS & Clear to send \\\\\n DMDS & Distributed maximum depth scheduling \\\\\n ESaware & Energy and spectrum aware \\\\\n FSO & Free space optical \\\\\n FTDMA & Frequency and time division multiple access \\\\\n LOS & Line of sight \\\\\n LTE-A & Long term evolution advanced \\\\\n MAC & Medium Access Control \\\\\n MIMO & Multiple input multiple output \\\\\n mmWave & Millimeter wave \\\\\n MRAMAC & Multiradio assisted MAC \\\\\n NLOS & Non line of sight \\\\\n OFDM & Orthogonal frequency division multiplexing \\\\\n PAM & Pulse amplitude modulation \\\\\n PNC & Piconet coordinator \\\\\n PPM & Pulse position modulation \\\\\n QoS & Quality of service \\\\\n RA & Random access \\\\\n RD & Rate division \\\\\n RTDs & Resonant tunnelling diodes \\\\\n RTR & Request to receiver \\\\\n RTS & Request to send \\\\\n SDN & Software defined network \\\\\n SINR & Signal to interference and noise ratio \\\\\n TABMAC & Terahertz assisted beamforming MAC \\\\\n Tbps & Terabits per second \\\\\n TC & Transmission confirmation \\\\\n TCN & Terahertz communication network \\\\\n TDMA & Time division multiple access \\\\\n THz & Terahertz \\\\\n TLAN & Terahertz local area network \\\\\n TR & Transmission request \\\\\n TS-OOK & Time spread On-off Keying \\\\\n TTS & Test to send \\\\\n TPAN & Terahertz personal area network \\\\\n UL & Uplink \\\\\n UTC-PD & Uni-travelling carrier photo diodes \\\\\n UV & Ultraviolet \\\\\n WLAN & Wireless local 
area network \\\\\n WPAN & Wireless personal area network \\\\ \\hline \\hline\n \\end{tabular}\n \\label{tab:acronyms}\n\\end{table}", "id": "61eb8f42-e6fe-4301-9da9-bfacc3754ecb", "level": "section", "origin_cites_number": 0, "parent_id": "068ac4ba-0bb0-453c-9e5f-1071f02038e9", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Introduction" ] ], "subsections": [ "ca4cb50e-885d-4909-8177-1693658ddc5f", "47ba3b14-e2e7-4a7d-bd21-4c9bb98708b4", "ec0599e2-79e1-40db-aca3-320dcabca2ce" ], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "Table~\\ref{tab:thz_general_survey} highlights and summarises the overall survey papers on Terahertz communication that do not cover MAC layer protocols. These survey papers cover different application areas; however, they mostly address device, antenna, channel, and physical layer aspects. These include nano-communication networks~, the Internet of nano-things (IoNT)~, molecular communication networks~, nano-sensor networks~, in-body nanonetworks~, broadband communication~, vehicular networks~, wireless indoor/outdoor communications, which include office/data-center or small-cell deployments~, and Terahertz Communication Networks~. Table~\\ref{tab:thz_relevant_survey} highlights the survey papers which discuss the MAC layer aspects to some extent, and the differences from our work.\nIn~, the joint impact of Ultra-Dense Networks, Multiple Input Multiple Output (MIMO), mmWave, and Terahertz communications is discussed for supporting the demands of mobile broadband services. Particularly, the indoor and outdoor environments are analyzed for noise levels and signal to interference and noise ratio (SINR). In~, the challenges and opportunities are mentioned, but only from the perspective of Terahertz vehicular networks. 
It also briefly describes different aspects, including transceiver design, MIMO antenna arrays, channel modeling and estimation, interference management, MAC layer design, and standardization. For Nano Communication Networks, survey articles are discussed in~, covering physical layer aspects, propagation models, security, in-body communication, biomedical applications, materials and antenna design, and channel modeling. The Internet of nano-things for interconnecting devices at the nanoscale is discussed in~. Although a brief discussion on architecture, channel modeling, and challenges related to the MAC and network layers is included, it covers only a specific scenario of the IoNT. A network architecture for wireless nano-sensor networks is given in~, which briefly discusses the challenges related to channel modeling, information encoding, and protocols for nano-sensor networks. The molecular communication network survey for body area networks is given in~. \nIn~, recent Terahertz progress is reviewed, but only for propagation models, antenna and testbed design, and an implementation roadmap is mentioned. For opportunities beyond the 5G paradigm, an architecture is discussed with possible application areas in~. A survey related to MIMO is given in~. Some standardization-related work is mentioned in~. Some other survey papers on the usage of Graphene material, the weather impact on Terahertz bands, and Terahertz antennas are mentioned in~. A guest editorial on Terahertz communication has also been published recently~. In~, the importance of Terahertz band communication is highlighted in terms of the interaction between sensing, imaging, and localization. 
In~, a tutorial is presented on Terahertz signal processing techniques, highlighting ultra-massive MIMO and reconfigurable intelligent surfaces to overcome the distance problem.\nAlongside these related survey articles, some articles discuss Terahertz MAC protocols, but not in the required detail, as shown in Table~\\ref{tab:thz_relevant_survey}. In~, a survey on MAC schemes for mmWave and Terahertz wireless communication is presented. The MAC protocols related to Terahertz band communication are not fully covered, and mostly MAC strategies related to mmWave are discussed. Only a few challenges and design issues are mentioned. In contrast, this survey paper discusses existing Terahertz MAC protocols in detail, with classifications, band features, design issues and considerations, application requirements, and challenges.", "id": "ca4cb50e-885d-4909-8177-1693658ddc5f", "level": "subsection", "origin_cites_number": 0, "parent_id": "61eb8f42-e6fe-4301-9da9-bfacc3754ecb", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Introduction" ], [ "subsection", "Terahertz Communication: Related survey articles" ] ], "subsections": [], "title": "Terahertz Communication: Related survey articles" }, { "cite_extract_rate": 0, "cites": [], "content": "The main contributions of this survey paper are: \n\\begin{itemize}\n\\item We provide a comprehensive survey of existing Terahertz MAC protocols. \n\\item A classification of existing Terahertz MAC protocols based on network scale and topologies, channel access mechanisms, and transmitter/receiver-initiated communication is presented. \n\\item The unique features of the Terahertz band to be considered for Terahertz MAC protocols are highlighted. 
\n\\item The design issues that should be considered while designing efficient Terahertz MAC protocols are highlighted, together with the decisions that should be taken at the MAC layer for performance enhancement. \n\\item The requirements and design challenges for Terahertz MAC protocols in different application areas are discussed. \n\\item Different challenges and future research directions for THz MAC protocols are also highlighted.\n\\end{itemize}", "id": "47ba3b14-e2e7-4a7d-bd21-4c9bb98708b4", "level": "subsection", "origin_cites_number": 0, "parent_id": "61eb8f42-e6fe-4301-9da9-bfacc3754ecb", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Introduction" ], [ "subsection", "Contributions of this survey" ] ], "subsections": [], "title": "Contributions of this survey" }, { "cite_extract_rate": 0, "cites": [], "content": "The paper is organized as follows: Section~\\ref{sec:intro} presents the introduction and literature review. Section~\\ref{sec:background} presents the background on the Terahertz band, technology, and MAC protocols. In Section~\\ref{sec:applications}, different applications of Terahertz band communication are discussed with respect to macro and nanoscale communication, together with their requirements. Section~\\ref{sec:requirements} highlights the unique features of the THz band and the issues that need to be considered at the physical and MAC layers, with decisions for designing efficient Terahertz MAC protocols. Section~\\ref{sec:topologies} presents the topologies considered so far in Terahertz communication networks. Different channel access mechanisms are discussed in Section~\\ref{sec:ch-access}. Transmitter- and receiver-initiated communication are discussed in Section~\\ref{sec:communication}. Challenges and future research directions are discussed in Section~\\ref{sec:issues-challenges}. 
Finally, in Section~\\ref{sec:conclusion}, the survey paper is concluded.\n\\begin{table*}[htbp]\n \\centering\n \\small\n \\caption{General survey papers on Terahertz bands, devices, and communications.}\n \\begin{tabular}{p{2.4em}p{3.4em}p{15em}p{30em}}\n \\textbf{Year} & \\textbf{Reference} & \\textbf{Network Area/Type} & \\textbf{Brief description of main topics covered} \\\\ \\hline \\hline\n 2004 & & Terahertz Communication Network & An overview of communication and sensing applications is given with sources, detectors, and modulators for practical Terahertz Communication systems. \\\\ \\hline \n 2007 & & Terahertz Communication Network & The developments in the fields like Terahertz quantum cascade lasers, quantum well photodetectors, time-domain spectroscopy system and materials are discussed with measurements of atmospheric propagation. \\\\ \\hline\n 2010 & & Nano Communication Network & Propagation models for molecular and nano-electromagnetic communications are discussed with challenges. \\\\ \\hline\n \\multirow{2}{*}{2011} & & Terahertz Communication Network & Different aspects of Terahertz communication are discussed with transistors, mixers, antennas, and detectors. \\\\\n \t \t\t\t\t & & Terahertz Communication Network \t& Progress on Terahertz wave technologies is discussed. \\\\ \\hline\n \\multirow{5}{*}{2012} \t& & Terahertz Communication Network & Terahertz antenna technologies are discussed with different substrate integrated antennas and beamforming networks.\t \\\\ \n \t\t\t\t\t\t& & Nano Communication Network & An overview on biochemical cryptography is discussed with requirements related to security and challenges. \\\\\n \t\t\t\t\t\t& & Terahertz Communication Network & An overview on demonstration of data transmission is given with standardization activities.\\\\\n \t\t\t\t\t\t& & Terahertz Communication Network & The Terahertz technology is discussed with challenges for spectroscopy and communications. 
\\\\\n \t\t\t\t\t\t& \t& Molecular communication networks & The elementary models for the intra-body molecular communication channel and their extensions are discussed with challenges. \\\\ \\hline \n \\multirow{2}{*}{2013} & & Nano Communication Network & Issues of Nanonetworks are analyzed and discussed with particular focus on communication via microtubules and physical contact. \\\\\n \t\t\t\t\t\t & & Molecular Communication Network & A review on bacterial communication and neuronal networks is given with application areas in body area networks. \\\\ \\hline\n \t\t\t2014\t\t&\t & Terahertz Communication Network & Summarizes the research projects, spectrum regulation, and standardization efforts for the Terahertz band. \\\\ \\hline\n \\multirow{2}{*}{2015} & & Internet of Nano-Things & A survey is presented for connecting body area networks and an external gateway for in-body nano communication. Network architecture, requirements, and simulation-based performance evaluation are also discussed. \\\\ \n \t\t\t\t\t& & Terahertz Communication Network & A survey on Terahertz technology is presented including devices, antennas, and standardization efforts. \\\\ \\hline\n \\multirow{2}{*}{2016} \t\t& \t& Terahertz Communication Network &\t A review is presented for the impact of weather on Terahertz links, attenuation, and channel impairments caused by atmospheric gases, water vapor, dust, fog, and rain. \\\\\n \t\t\t\t\t\t\t& \t& Terahertz Communication Network &\tA survey on Graphene based devices for modulation, detection, and generation of Terahertz waves is discussed. \\\\ \\hline\n \\multirow{3}{*}{2018} \t& & Terahertz Communication Network & A review is presented for channel modelling for the Terahertz band including single antenna and ultra massive MIMO systems. 
\\\\\n \t\t\t\t\t\t& & 5G Femtocell Internet of Things & A survey on low Terahertz band circuit blocks is presented with a focus on energy consumption using the best modulation schemes and optimizing hardware parameters. \\\\\n \t \t\t\t\t\t& & Terahertz Communication Network & A review is presented for deployments of Terahertz wireless links and opportunities to meet future communication requirements. \\\\ \\hline \n\t\\multirow{1}{*}{2019} \t& & Nano Communication Network & A summary of the current status of nano communication networks is presented with applications and the different layers of the protocol stack. \\\\\n\\multirow{1}{*}{2019} \t& & Terahertz Communication Network & Discusses wireless communications and applications above 100 GHz bands with opportunities and challenges. \\\\\n\\multirow{1}{*}{2019} \t& & Terahertz Communication Network & Recent activities on Terahertz development, standardization, applications, and communications are reported. \\\\\n\\multirow{1}{*}{2019} \t& & Terahertz Communication Network & A survey on Terahertz communications, applications, and layers of the protocol stack is presented. \\\\\n\t\\multirow{1}{*}{2019} \t& & Terahertz Communication Network & A review is presented on developments towards Terahertz communications with key technologies. 
\\\\ \\hline \\hline \t \t\t\t\t\n \\end{tabular}\n \\label{tab:thz_general_survey}\n\\end{table*}\n\\begin{table*}[htbp]\n \\centering\n \\tiny\n \\caption{\\mage{Survey papers discussing the Terahertz MAC layer.}}\n \\begin{tabular}{|p{9.78em}|c|c|p{3.39em}|p{1.445em}|p{1.445em}|p{2.61em}|p{2.61em}|p{2.61em}|p{1.445em}|p{1.445em}|p{1.445em}|p{1.445em}|p{1.335em}|p{1.445em}|p{1.445em}|p{1.445em}|p{1.445em}|p{1.445em}|p{1.445em}|}\n \\toprule\n \\multicolumn{6}{|c|}{} & \\multicolumn{3}{p{7.83em}|}{\\textbf{MAC protocols classification}} & \\multicolumn{11}{p{18.395em}|}{\\textbf{MAC functionalities}} \\\\\n \\midrule\n \\textbf{Network type} & \\multicolumn{1}{p{2.11em}|}{\\textbf{Year }} & \\multicolumn{1}{p{5.72em}|}{\\textbf{Reference}} & \\begin{sideways}\\textbf{MAC layer discussed}\\end{sideways} & \\begin{sideways}\\textbf{Application areas}\\end{sideways} & \\begin{sideways}\\textbf{Challenges}\\end{sideways} & \\begin{sideways}\\textbf{Network topologies and scale}\\end{sideways} & \\begin{sideways}\\textbf{Rx/Tx Initiated communication}\\end{sideways} & \\begin{sideways}\\textbf{Channel access/sharing}\\end{sideways} & \\begin{sideways}\\textbf{Interference}\\end{sideways} & \\begin{sideways}\\textbf{Error control}\\end{sideways} & \\begin{sideways}\\textbf{Packet size}\\end{sideways} & \\begin{sideways}\\textbf{Device discovery}\\end{sideways} & \\begin{sideways}\\textbf{Handshaking}\\end{sideways} & \\begin{sideways}\\textbf{Tx distance}\\end{sideways} & \\begin{sideways}\\textbf{Data rate}\\end{sideways} & \\begin{sideways}\\textbf{Antennas}\\end{sideways} & \\begin{sideways}\\textbf{Beamforming}\\end{sideways} & \\begin{sideways}\\textbf{Modulation}\\end{sideways} & \\begin{sideways}\\textbf{Cross layer}\\end{sideways} \\\\\n \\midrule\n \\multirow{9}[2]{*}{\\textbf{Nanonetworks}} & 2008 & \\multicolumn{1}{p{5.72em}|}{} & Partially & \\checkmark & \\checkmark & X & X & X & X & X & X & X & X & X & X & X & X & X & X \\\\\n\\multicolumn{1}{|c|}{} & 
2010 & \\multicolumn{1}{p{5.72em}|}{\\textcolor[rgb]{ .376, .376, .376}{}} & Partially & X & \\checkmark & X & X & \\checkmark & X & X & X & \\checkmark & X & X & X & X & X & X & X \\\\ \n \\multicolumn{1}{|c|}{} & 2010 & \\multicolumn{1}{p{5.72em}|}{\\textcolor[rgb]{ .376, .376, .376}{}} & Partially & \\checkmark & \\checkmark & X & X & \\checkmark & X & X & X & X & X & X & X & X & X & \\checkmark & X \\\\\n \\multicolumn{1}{|c|}{} & 2010 & \\multicolumn{1}{p{5.72em}|}{} & Partially & \\checkmark & \\checkmark & X & X & X & X & X & X & X & X & X & X & X & X & X & X \\\\ \n \\multicolumn{1}{|c|}{} & 2011 & \\multicolumn{1}{p{5.72em}|}{\\textcolor[rgb]{ .376, .376, .376}{}} & Partially & X & \\checkmark & X & X & X & X & X & X & X & X & X & X & X & X & X & X \\\\\n \\multicolumn{1}{|c|}{} & 2012 & \\multicolumn{1}{p{5.72em}|}{} & Partially & X & \\checkmark & X & X & X & X & X & X & \\checkmark & X & \\checkmark & \\checkmark & \\checkmark & X & \\checkmark & \\checkmark \\\\\n \\multicolumn{1}{|c|}{} & 2012 & \\multicolumn{1}{p{5.72em}|}{} & Partially & \\checkmark & \\checkmark & X & X & X & X & X & X & \\checkmark & X & \\checkmark & \\checkmark & \\checkmark & X & \\checkmark & \\checkmark \\\\\n \\multicolumn{1}{|c|}{} & 2016 & \\multicolumn{1}{p{5.72em}|}{} & Partially & \\checkmark & \\checkmark & X & X & X & X & \\checkmark & X & X & X & X & X & X & X & X & X \\\\\n \\multicolumn{1}{|c|}{} & 2017 & \\multicolumn{1}{p{5.72em}|}{} & Partially & X & \\checkmark & X & X & X & X & X & X & X & X & X & X & X & X & X & X \\\\ \n \\midrule\n \\textbf{Vehicular networks} & 2017 & \\multicolumn{1}{p{5.72em}|}{} & Partially & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X \\\\\n \\midrule\n \\multirow{4}[2]{*}{\\textbf{Terahertz networks}} & 2014 & \\multicolumn{1}{p{5.72em}|}{} & Partially & \\checkmark & \\checkmark & X & X & X & \\checkmark & \\checkmark & X & X & X & X & X & X & X & X & X \\\\\n \\multicolumn{1}{|c|}{} & 2014 & 
\\multicolumn{1}{p{5.72em}|}{} & Partially & X & \\checkmark & X & X & X & X & \\checkmark & \\checkmark & X & X & X & X & X & \\checkmark & X & X \\\\\n \\multicolumn{1}{|c|}{} & 2016 & \\multicolumn{1}{p{5.72em}|}{} & Partially & X & \\checkmark & X & X & X & \\checkmark & X & X & X & X & X & X & \\checkmark & \\checkmark & X & X \\\\\n \\multicolumn{1}{|c|}{} & 2016 & \\multicolumn{1}{p{5.72em}|}{} & Partially & \\checkmark & \\checkmark & X & X & X & X & X & X & X & X & X & X & \\checkmark & \\checkmark & X & X \\\\\n \\multicolumn{1}{|c|}{} & 2019 & \\multicolumn{1}{p{5.72em}|}{} & Partially & X & \\checkmark & \\checkmark & X & \\checkmark & X & \\checkmark & X & X & X & X & X & \\checkmark & \\checkmark & X & X \\\\\n \\midrule\n \\textbf{This work} & 2019 & & Detailed & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark \\\\\n \\bottomrule\n \\end{tabular}\n \\label{tab:thz_relevant_survey}\n\\end{table*}", "id": "ec0599e2-79e1-40db-aca3-320dcabca2ce", "level": "subsection", "origin_cites_number": 0, "parent_id": "61eb8f42-e6fe-4301-9da9-bfacc3754ecb", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Introduction" ], [ "subsection", "Organization of survey" ] ], "subsections": [], "title": "Organization of survey" }, { "cite_extract_rate": 0, "cites": [], "content": "}\\label{sec:background}\n\\begin{figure}\n\\centering\n\\includegraphics[width=3.2in,height=1.2in]{figures/frequency-ranges.png}\n\\caption{Terahertz gap in the electromagnetic spectrum.}\n\\label{fig:electromagnetic-spectrum}\n\\end{figure}", "id": "03ea239c-b181-43e3-ae94-81f1373631bc", "level": "section", "origin_cites_number": 0, "parent_id": "068ac4ba-0bb0-453c-9e5f-1071f02038e9", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A 
Comprehensive Survey" ], [ "section", "\\mage{Background on Terahertz band, Technology and MAC protocols" ] ], "subsections": [ "c0dc29c7-785e-4048-b11f-65478f4838bd", "6a1ccc5f-5c86-4fc4-8ed0-d67834f08bd8", "4ea94c9c-3736-4981-957f-9a964c84f6f5" ], "title": "\\mage{Background on Terahertz band, Technology and MAC protocols" }, { "cite_extract_rate": 0, "cites": [], "content": "The term Terahertz can refer to a unit of frequency (one trillion cycles per second, or $10^{12}$ Hz) or to electromagnetic waves within the ITU-designated band of frequencies. The Terahertz frequency range (0.1 - 10 THz) is the last span within the whole electromagnetic wave spectrum and is more commonly referred to as the Terahertz Gap. These waves appear between the Microwave and Infrared bands, as shown in Figure~\\ref{fig:electromagnetic-spectrum}. The wavelength of radiation in the Terahertz band ranges from 1 mm to 0.1 mm (or 100 $\\mu$m). The bands from 100 GHz to 200 GHz are also referred to as the sub-Terahertz band~, as it begins at a wavelength of one millimeter and proceeds into shorter wavelengths. \nFor nearly two decades, the Terahertz bands have been efficiently used for imaging applications because these waves are non-ionizing, can penetrate through materials, and are absorbed by water and organic substances. Their properties allow them to be used in communication networks to provide higher data rates, up to Tbps. The Terahertz Gap is still the least explored band for its potential use in communication networks and to achieve higher data rates. Table~\\ref{tab:bands-features} lists the features of the frequency bands closest to the Terahertz frequency bands. 
Its unique potential motivates its usage for broadband wireless communications.", "id": "c0dc29c7-785e-4048-b11f-65478f4838bd", "level": "subsection", "origin_cites_number": 0, "parent_id": "03ea239c-b181-43e3-ae94-81f1373631bc", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "\\mage{Background on Terahertz band, Technology and MAC protocols" ], [ "subsection", "Terahertz bands" ] ], "subsections": [], "title": "Terahertz bands" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec-bkg:trad-comm}\nA brief comparison between existing wireless communication technologies and Terahertz band communication, including mmWave communication, is presented below and shown in Table~\\ref{tab:bands-features}.", "id": "6a1ccc5f-5c86-4fc4-8ed0-d67834f08bd8", "level": "subsection", "origin_cites_number": 0, "parent_id": "03ea239c-b181-43e3-ae94-81f1373631bc", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "\\mage{Background on Terahertz band, Technology and MAC protocols" ], [ "subsection", "Comparison between Terahertz band and other wireless technologies" ] ], "subsections": [ "6dfa1006-9c6a-497c-a8c5-f8cd79513fc7", "8224efec-6470-41f8-8ecd-0abbdbff530d" ], "title": "Comparison between Terahertz band and other wireless technologies" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nThe mmWave bands comprise frequencies from 30 GHz to 300 GHz. Due to higher frequencies\\blue{,} these bands face severe attenuation due to oxygen absorption. Exceptionally, propagation at 35 GHz, 94 GHz, 140 GHz, and 220 GHz experiences relatively small attenuation, which enables long-distance communications between peers~. Other bands like 60 GHz, 120 GHz, and 180 GHz attenuate up to 15 dB/km and also experience poor diffraction due to blockages~. 
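For intuition (a back-of-the-envelope illustration added here, using the standard free-space formula rather than a value reported in the cited works), the free-space path loss at distance $d$ and carrier frequency $f$ is \n\\[ \\mathrm{FSPL}\\ [\\mathrm{dB}] = 20\\log_{10}\\left(\\frac{4\\pi d f}{c}\\right), \\]\nwhich at $d = 1$ m evaluates to roughly 68 dB at 60 GHz and 82 dB at 300 GHz, i.e., about 14 dB of extra loss from the frequency increase alone, before molecular absorption is taken into account. 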
The range of mmWave and Terahertz can be reduced by path loss, molecular absorption and atmospheric attenuation. The channeling effect can be compensated by using highly directional antennas, for which mmWave is more mature than the Terahertz band. Additional features such as modulation and coding schemes, massive MIMO and phased antennas can enhance the spectral efficiency and transmission range \\blue{for THz}. An advantage of Terahertz frequency over mmWave is that the communication windows for Terahertz waves are wider than for mmWave, and because of \\blue{this} Terahertz frequencies seem to be more suitable for high data rate and low range communication. \n\\saim{The Terahertz and mmWave are neighboring bands but their properties are different. In comparison, the bandwidth at the mmWave band is 10 GHz, which cannot support Tbps link speed, whereas the Terahertz band has distance-varying transmission windows of up to Terahertz bandwidth. To reach a data rate of up to 100 Gbps, the transmission schemes must reach the challenging spectral efficiency of 14 \\blue{bits per second per hertz}~. Moreover\\blue{,} the link capacity required for a few Gbps should be several times higher than the user-required data rate for the timely delivery of data from multiple users. With frequencies increasing to the Terahertz band, Tbps links can attain a moderate and realistic spectral efficiency of a few bits per second per hertz. The Terahertz band can also allow higher link directionality compared to mmWave at the same transmitter aperture due to its lower free-space diffraction and shorter wavelength compared to mmWave. The transmitted power and interference between the antennas can also be reduced by using smaller antennas with good directivity in Terahertz communications~. 
In Terahertz bands, the eavesdropping chances are also lower compared to the mmWave band due to the high directionality of Terahertz beams: unauthorized users must also be on the same narrow beam to intercept messages.}\n\\saim{The differences between the mmWave and Terahertz bands are summarised in Table~\\ref{mmwave-thz-diff}. From a technical point of view, the Terahertz band offers much higher bandwidth than the mmWave band and therefore a higher data rate. mmWave is already deployed in WLAN as well as cellular network systems, in contrast to THz, where researchers are still striving to design new devices with high power generation and low receiving sensitivity. Due to the larger coverage and mobility support, mmWave communications can be used in backhaul communication and cellular communications. Terahertz, in turn, can be used where high throughput and low latency are required, with fixed infrastructure for now until mature devices become available. MmWave antenna technology is much more mature than that for THz, \\blue{thus} it is possible to deploy antenna diversity and beam steering and tracking for mmWave. The channel model for mmWave is well developed, as measurements are carried out for many mmWave windows as well as for different scenarios. For the Terahertz band, few measurement campaigns have been performed, particularly for indoor scenarios around 300 GHz and 100 GHz. The free space attenuation increases as a function of frequency, and molecular absorption loss occurs due to oxygen molecules in mmWave, whereas in the Terahertz band\\blue{,} it occurs due to water vapor. The reflection loss is high for both the mmWave and Terahertz bands, which results in severe loss on the NLOS path \\saim{compared} to the LOS path. The scattering effect also becomes severe when the wavelength decreases below 1 mm, which results in an increase in multipath components, angular spreads, and delay. Due to the much smaller wavelength, many antennas can be packaged together to generate narrower beams. 
However, the stronger directivity increases the difficulties and overhead of beam alignment and tracking but reduces the interference. The differences between mmWave and Terahertz band communication can also be found in~ for further reading. \\blue{A large scale open source testbed for Terahertz and mmWave is presented in~.}}", "id": "6dfa1006-9c6a-497c-a8c5-f8cd79513fc7", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "6a1ccc5f-5c86-4fc4-8ed0-d67834f08bd8", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "\\mage{Background on Terahertz band, Technology and MAC protocols" ], [ "subsection", "Comparison between Terahertz band and other wireless technologies" ], [ "subsubsection", "\\saim{Comparison with mmWave band communications" ] ], "subsections": [], "title": "\\saim{Comparison with mmWave band communications" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nThe traditional 802.11 protocol was mainly designed for 2.4 GHz WiFi and uses frequency-hopping spread spectrum and direct sequence spread spectrum. It provides a simple data rate of up to 2 Mbps. After that, 802.11a and 802.11b were published, operating in the 5 and 2.4 GHz bands, respectively. 802.11a is based on Orthogonal Frequency Division Multiplexing (OFDM) and can provide a data rate of up to 54 Mbps, whereas 802.11b supports only 11 Mbps. 802.11ac aimed at providing data rates of more than 100 Mbps. Besides these, 802.11ad was developed for a carrier frequency of 60 GHz, which belongs to the mmWave frequency bands. The detailed description and comparison are provided in Table~\\ref{tab:bands-features}. \\blue{S}mart technologies like OFDM and communication schemes like large-scale MIMO can be used for frequencies below 5 GHz to achieve higher spectral efficiency. 
In Long-Term Evolution Advanced (LTE-A), a peak data rate of up to 1 Gbps is possible only when a 4x4 MIMO scheme is used over a 100 MHz aggregated bandwidth~.\nThe frequency bands above 10 Terahertz cannot support Tbps links. Although very large bandwidth is available in the FSO communication system, which operates at IR frequencies, it still has some issues that limit its use for personal wireless communication, such as the atmospheric effects on signal propagation (fog, rain, pollution and dust); high reflection loss; misalignment between transmitter and receiver; and a low power link budget due to health safety, which limits both transmission range and achievable data rates for FSO communication. It can support up to 10 Gbps of data rate with a proper line of sight (LOS) for a Wireless Local Area Network (WLAN)~. For non-LOS\\blue{,} much lower data rates have been reported~. For longer distances, an FSO system was demonstrated in~ to support 1.28 Tbps; however, it requires a typical fiber-optic solution to generate and detect high capacity optical signals, which are injected into the optical front-end, and it does not include the signal generation, detection, modulation, and demodulation blocks. These constraints limit the overall feasibility of achieving higher data rates for 5G and beyond networks. A comparison of wireless and optical technologies is presented in~ for wireless indoor environments. It is mentioned in~ that wireless communication has overall better chances of penetrating obstacles in comparison with FSO. \\blue{In~, it is shown that with increasing gain the BER performance of Terahertz links is much better than that of the FSO link.} Further, IR and ultraviolet are not considered safe for humans, and Visible Light communication requires the visibility of light at all times. 
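To put the required spectral efficiencies in perspective (an illustrative calculation added here, using the indicative bandwidths of 7 GHz at 60 GHz and 69 GHz at 300 GHz listed in Table~\\ref{mmwave-thz-diff}), a target rate $R$ over a bandwidth $B$ requires $\\eta = R/B$: \n\\[ \\eta_{\\mathrm{mmWave}} = \\frac{100\\ \\mathrm{Gbps}}{7\\ \\mathrm{GHz}} \\approx 14.3\\ \\mathrm{bit/s/Hz}, \\qquad \\eta_{\\mathrm{THz}} = \\frac{100\\ \\mathrm{Gbps}}{69\\ \\mathrm{GHz}} \\approx 1.45\\ \\mathrm{bit/s/Hz}, \\]\nwhich is why the very wide Terahertz channels can reach 100 Gbps with modest, realistic spectral efficiencies, whereas the same rate over a mmWave channel demands a challenging one. 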
\\saim{A comparison of visible light and sub-Terahertz band communication is discussed in~ for 5G and 6G wireless systems.}\n\\begin{table*}[htbp]\n \\centering\n \\scriptsize\n \\caption{Different types and features of wireless communication technologies\\protect.}\n \\begin{tabular}{p{4.7em}p{4.7em}p{5em}p{12em}p{8em}p{5em}p{4.5em}cp{4.5em}p{2.5em}}\n \\textbf{Technology} & \\textbf{Frequency range} & \\textbf{Wavelength} & \\multicolumn{1}{p{11.335em}}{\\textbf{Data rate}} & \\textbf{Transmission range} & \\textbf{Power consumption} & \\textbf{Topology} & \\multicolumn{1}{p{4.78em}}{\\textbf{LOS/nLOS}} & \\textbf{Noise source} & \\textbf{Weather effect} \\\\ \\hline \\hline\n \\textbf{mmWave} & 30GHz-300GHz & 3cm-1mm & \\multicolumn{1}{p{11.335em}}{\\saim{100 Gbps~}} & \\saim{High} & Medium & PTP, PTmP & \\multicolumn{1}{p{4.78em}}{both} & Thermal & Sensitive \\\\ \\hline\n \\multirow{6}{*}{\\textbf{Terahertz}} & \\multicolumn{1}{p{4.8em}}{100-300 GHz (sub-THz)\\newline{}300GHz-10THz} & \\multirow{6}{*}{1mm-30$\\mu$m} & \\multicolumn{1}{p{12em}}{upto 240 GHZ: 10 Gbps~\\newline{}upto 300 GHz: 64 Gbps~\\newline{}300-500 GHz: 160 Gbps (single channel)~\\newline{}300-500GHz: > 160 Gbps (multiple channels)~ } & \\multirow{6}{*}{Short/Medium (1-10m)} & \\multirow{6}{*}{Medium} & \\multirow{6}{*}{PTP, PTmP} & \\multirow{6}{*}{both} & \\multirow{6}{4em}{Thermal and\\newline{} molecular noise} & \\multirow{6}{*}{Sensitive} \\\\ \\hline\n \\textbf{Infrared} & 10THz-430THz & 30$\\mu$m-3 $\\mu$m & \\multicolumn{1}{p{11.335em}}{2.4 kbps to 1 Gbps} & Short, upto 1 m & Low & PTP & \\multicolumn{1}{p{4.78em}}{LOS} & Sun/Ambient & Sensitive \\\\ \\hline\n \\textbf{Visible Light} & 430THz-790THz & 0.3$\\mu$m & \\multicolumn{1}{p{11.335em}}{100 Mbps to Gbps~} & Short & Low & PTP & \\multicolumn{1}{p{4.78em}}{\\saim{both}} & Sun/Ambient & \\multicolumn{1}{c}{} \\\\ \\hline\n \\textbf{Ultra Violet} & 790THz-30PHz & 100-400 nm & & Short & Low & PTmP & & Sun/Ambient & Sensitive \\\\ 
\\hline \\hline\n \\end{tabular}\n \\label{tab:bands-features}\n\\end{table*}", "id": "8224efec-6470-41f8-8ecd-0abbdbff530d", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "6a1ccc5f-5c86-4fc4-8ed0-d67834f08bd8", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "\\mage{Background on Terahertz band, Technology and MAC protocols" ], [ "subsection", "Comparison between Terahertz band and other wireless technologies" ], [ "subsubsection", "\\saim{Other technologies" ] ], "subsections": [], "title": "\\saim{Other technologies" }, { "cite_extract_rate": 0, "cites": [], "content": "}\n In this Section, \\blue{we present} the definition of MAC layer protocols together with the motivation for the need of different MAC protocols for Terahertz communication networks.", "id": "4ea94c9c-3736-4981-957f-9a964c84f6f5", "level": "subsection", "origin_cites_number": 0, "parent_id": "03ea239c-b181-43e3-ae94-81f1373631bc", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "\\mage{Background on Terahertz band, Technology and MAC protocols" ], [ "subsection", "\\mage{Background and motivation for Terahertz MAC protocols" ] ], "subsections": [ "fd0754cd-9162-4ee1-be95-207bcb8ee140", "6bbb43e5-543f-415c-9298-182ad95a0feb", "675cd177-d4e5-40b9-aec0-07f20cdf7d0f" ], "title": "\\mage{Background and motivation for Terahertz MAC protocols" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nThe MAC layer is mainly responsible for controlling the hardware to interact with the wireless transmission medium through flow control and multiplexing. It serves the interaction between the Physical and Upper layers. It provides a method to identify frame errors and sends the packet to \\blue{the} Physical layer when channel access is permitted or scheduled. It also decides when and how long to wait before sending a packet to avoid collisions among the nodes. 
Different wireless technologies require different MAC protocols to serve the transmission purpose. For example, the MAC protocols for the LTE and GSM standards have different user requirements \\blue{compared to a MAC protocol required for a Wireless Sensor Network.} \n\\mage{As user demands and network requirements grow, efficient MAC protocols are in demand to assist network operations and provide adaptive solutions. Although the MAC is supposed to provide the same functionalities mentioned above, due to different band features and requirements (cf. Section~\\ref{sec:requirements}), new mechanisms are required to facilitate user demands and network requirements (cf. Section~\\ref{sec:applications}).}\n\\begin {figure*}[!hbtp]\n\\centering\n\\scriptsize\n\\begin{adjustbox}{width=\\linewidth}\n\\tikzstyle{block} = [rectangle, draw, text width=4cm, text centered, rounded corners, minimum height=5em,fill=blue!20]\n\\tikzstyle{line} = [draw,thick, -latex']\n\\tikzstyle{cloud} = [draw, ellipse, text width=4.5cm, text centered]\n\\tikzstyle{edge from parent}=[->,thick,draw]\n\\begin{tikzpicture}[auto,edge from parent fork down]\n\\tikzstyle{level 1}=[sibling distance=60mm,level distance=18ex]\n\\tikzstyle{level 2}=[sibling distance=25mm,level distance=15ex]\n\\tikzstyle{level 3}=[sibling distance=30mm,level distance=34ex]\n\\tikzstyle{level 4}=[sibling distance=35mm,level distance=34ex]\n\\node [block,text width=10cm,minimum height=3em,,fill=black!15] (cst) {\\textbf{Terahertz MAC protocols classification }}\n{\nchild{node [block,text width=3.6cm,fill=green!20] (rnm) {\\textbf{Network Topologies \\\\ (Sect.\\ref{sec:topologies}) } }\n\t\tchild{node [block,text width=2cm,fill=orange!10,xshift=-1.3cm, yshift=-1.1cm] (pmt) { \\\\ \n \t\t{\\bf Centralized \\\\ (Sect.\\ref{subsec-topo:centralized}) } \\\\\n \t\t}} \n \t child{node [block,text width=2cm,fill=orange!10,xshift=-1.4cm, yshift=-1.1cm] (pmt) { \\\\ \n \t\t{\\bf Clustered \\\\ 
(Sect.\\ref{subsec-topo:clustered}) } \\\\ \n \t\t}}\n \t\tchild{node [block,text width=2cm,fill=orange!10,xshift=-1.5cm, yshift=-1.1cm] (pmt) { \\\\ \n \t\t{\\bf Distributed \\\\ (Sect.\\ref{subsec-topo:distributed}) } \\\\ \n \t\t}} \n}\nchild{node [block,text width=4cm,fill=green!20] (opm) {\\textbf{Channel Access Mechanisms \\\\ (Sect.\\ref{sec:ch-access}) } }\n\t\tchild{node [block,text width=2cm,fill=orange!10,xshift=-0.2cm, yshift = -1.1cm] (pmt) { \\\\\n\t\t{\\bf Random Channel Access \\\\ (Sect.\\ref{subsubsec-ch:nano-random}, \\ref{subsubsec-ch:macro-random}) } \\\\\n\t\t}}\n\t\tchild{node [block,text width=2cm,fill=orange!10,xshift=-0.3cm, yshift = -1.1cm] (pmt) { \\\\\n\t\t{\\bf Scheduled Channel Access \\\\ (Sect.\\ref{subsubsec-ch:nano-scheduled},\\ref{subsubsec-ch:macro-scheduled}) } \\\\\n\t\t}}\n\t\tchild{node [block,text width=2cm,fill=orange!10,xshift=-0.5cm, yshift = -1.1cm] (pmt) { \\\\\n\t\t{\\bf Hybrid Channel Access \\\\ (Sect.\\ref{subsubsec-ch:macro-hybrid}) } \\\\\n\t\t}}\t\t\n}\nchild{node [block,text width=3.6cm,fill=green!20] (rnm) {\\textbf{Tx/Rx Initiated Communication \\\\ (Sect.\\ref{sec:communication}) } }\n\t\tchild{node [block,text width=2cm,fill=orange!10,xshift=-0.3cm, yshift=-1.1cm] (pmt) { \\\\ \n \t\t{\\bf Tx Initiated \\\\ (Sect.\\ref{subsec-comm:tx}) } \\\\\n \t\t}} \n \t child{node [block,text width=2cm,fill=orange!10,xshift=-0.5cm, yshift=-1.1cm] (pmt) { \\\\ \n \t\t{\\bf Rx Initiated \\\\ (Sect.\\ref{subsec-comm:rx}) } \\\\ \n \t\t}} \n}\n};\n\\end{tikzpicture}\n\\end{adjustbox}\n\\caption{\\protect \\mage{Classification of Terahertz MAC protocols based on network topologies, channel access mechanisms and Tx/Rx initiated communication. 
They are further discussed based on Terahertz applications and network scale as macro and nano.}}\n\\label{fig:classification}\n\\end{figure*}", "id": "fd0754cd-9162-4ee1-be95-207bcb8ee140", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "4ea94c9c-3736-4981-957f-9a964c84f6f5", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "\\mage{Background on Terahertz band, Technology and MAC protocols" ], [ "subsection", "\\mage{Background and motivation for Terahertz MAC protocols" ], [ "subsubsection", "\\mage{Medium Access Control background" ] ], "subsections": [], "title": "\\mage{Medium Access Control background" }, { "cite_extract_rate": 0, "cites": [], "content": "}\n\\blue{The existing THz MAC protocols can be broadly classified, as shown in Figure~\\ref{fig:classification}.} \nThe network topologies \\blue{for} MAC protocol design are discussed in Section~\\ref{sec:topologies}. In centralized networks, a controller or an access point is used to provide coordination among the nodes, whereas in distributed networks each node takes its own decision and coordinates in a distributed manner. In cluster-based networks, nodes form a group to transmit their information via a single group head. \n\\mage{The channel access mechanisms are presented in Section~\\ref{sec:ch-access}; they mainly include random, scheduled and hybrid channel access mechanisms. In random channel access, each node contends for a shared channel to improve channel utilization, which can introduce collisions. The use of directional antennas in the Terahertz frequency band with carrier-sense-based multiple access can provide spatial reuse of the spectrum, which can improve the throughput of the network. In contrast, schedule-based channel access methods like TDMA assign channel access slots in advance without spatial reuse. This provides a contention-free environment at the cost of high synchronization overhead. 
In~, the schedule-based mechanism is shown to provide worse throughput and latency performance compared to random schemes. The hybrid channel access mechanism combines the functionalities of the random and scheduled channel access mechanisms. In this scheme, channel access is performed using a random scheme followed by scheduled data transmission. }\n\\mage{The initial access mechanisms are presented in Section~\\ref{sec:communication}. The unique features of the Terahertz band (discussed in Section~\\ref{sec-req:features}) add different challenges for Terahertz wireless communications, such as directional antennas and high path loss. These unique features, network scale, and application requirements demand new initial access mechanisms. Receiver-initiated communication is mainly followed in networks with severe energy limitations, in which a receiver triggers the communication when it has sufficient energy to receive a packet. In contrast, transmitter-initiated communication is mostly followed in wireless communication; a sender triggers the communication when it has data to transmit. }\n\\mage{Moreover, the Terahertz MAC protocols can also be classified based on applications as nano and macro scale applications. These applications are discussed and their design challenges are identified in Section~\\ref{sec:applications}. The nano applications involve applications with a range up to a few centimeters, whereas applications with a range higher than a meter can be considered as macro scale applications. 
}", "id": "6bbb43e5-543f-415c-9298-182ad95a0feb", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "4ea94c9c-3736-4981-957f-9a964c84f6f5", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "\\mage{Background on Terahertz band, Technology and MAC protocols" ], [ "subsection", "\\mage{Background and motivation for Terahertz MAC protocols" ], [ "subsubsection", "\\mage{Review of various Terahertz MAC protocols" ] ], "subsections": [], "title": "\\mage{Review of various Terahertz MAC protocols" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nTerahertz is characterized by high bandwidth compared to lower frequency bands. \\blue{I}t is also characterized by its high-frequency attenuation. Therefore, traditional concepts of MAC should be modified or extended in most cases to \\blue{consider} application requirements such as ultra high throughput, coverage and low latency. Terahertz communication allows concurrent narrow beam links as compared to traditional networks. The unique features of the Terahertz band and Terahertz MAC protocol design considerations are discussed in detail in Section~\\ref{sec:requirements}. \n\\mage{\\textit{Link preparation for access, ACK and Data: } The traditional WiFi networks use in-band data and ACKs with a short inter-frame spacing of the order of tens of microseconds between the data and the ACK message. In Terahertz, using the same link for ACK and data involves many challenges, including beam switching and steering, and additional synchronization overhead, which can increase the delay.}\n\\mage{\\textit{Deafness issue: } Traditional networks operate in both omni and directional antenna modes, whereas in Terahertz networks directional antennas are required at both the sender and receiver, which can introduce deafness and collision problems. 
This is because the nodes can sense in only one direction at a time, which makes it difficult to capture accurate and timely information about their neighbors. }\n\\textit{Noise and interference: } The Terahertz band is sensitive to thermal and molecular noise; interference generally comes from receiver side lobes, interferers' beam alignment, or multipath reflections. It can be neglected if directional antennas are used, but it can still reduce system performance. mmWave is much more mature regarding interference and noise modeling; techniques to reduce their impact on the system are available using adequate equalizers and filters as well as high-efficiency coding schemes. From a MAC point of view, designing an efficient scheduling algorithm can improve system performance.\n\\textit{Antenna directionality and beam management: } In traditional networks, mostly omni-directional antennas are used. Although directional antennas are used for point-to-point connectivity, they cover large transmission distances. However, using directional antennas in the Terahertz band introduces many challenges, including beam management for initial access and data transmission, more frequently than in traditional networks. Fast beam tracking mechanisms are required mainly in Terahertz band applications involving mobility.\n\\textit{Frame length and duration: } Long frames can increase throughput; however, the frame error rate can increase as well, and an efficient error control mechanism should be used to mitigate frame losses. For a Terahertz system, the frame duration is very low compared to microwave and mmWave systems. The system can accept more users without affecting overall delay\\blue{. F}or example, the scheduling time for LTE at microwave frequency is equal to 1 ms, whereas for Terahertz it can be much lower, and more nodes can join the system within 1 ms, if we consider it as a threshold for network delay. In order to increase the number of nodes in the system, fast beam switching and steering is then required. 
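As a simple numerical sketch of how short these frame durations are (a back-of-the-envelope estimate added here, assuming a conventional 1500-byte frame), the air time of an $L$-bit frame at rate $R$ is $T = L/R$: \n\\[ \\frac{12\\,000\\ \\mathrm{bits}}{100\\ \\mathrm{Gbps}} = 0.12\\ \\mu\\mathrm{s}, \\qquad \\frac{12\\,000\\ \\mathrm{bits}}{54\\ \\mathrm{Mbps}} \\approx 222\\ \\mu\\mathrm{s}\\ \\text{(802.11a)}, \\]\nso several thousand such frames fit within a single 1 ms scheduling interval at Terahertz rates. 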
Regarding frame size, as Terahertz systems are still immature, frames should be kept short, as they are prone to channel errors. Therefore\blue{,} robust channel coding should be implemented to detect and correct bit errors and to reduce retransmissions and delays. It is possible to re-use traditional mechanisms for frame error handling, such as frame retransmission. 
\textit{Channel access and huge bandwidth availability: } \blue{T}he bandwidth available for traditional networks, as discussed in the previous subsection, is a few MHz, or up to a few GHz in the case of mmWave band communication. The huge bandwidth availability at these bands means less contention and collision, with low transmission times. The main functionalities required at the Terahertz band are coordination and scheduling, whereas in traditional networks contention and interference management are the main requirements for channel access. Therefore, TDMA with highly directional antennas can be a good choice for Terahertz networks. 
\mage{\textit{High energy fluctuations: } The heavy energy fluctuations in nanoscale networks require efficient energy harvesting mechanisms in support of the MAC layer to perform handshaking and data transmissions, whereas in traditional networks\blue{,} energy fluctuations are not as high as in nanonetworks. }
\mage{\textit{Scheduling: } For macro networks, MAC is based on time division techniques in most cases, as communication can occur only if beams are aligned between nodes; the scheduling is thus a crucial part of the THz MAC procedure that makes THz macro networks different from traditional MAC techniques.}
\mage{\textit{Throughput and latency requirements: } Traditional networks offer a limited amount of throughput, as discussed in the previous subsection. Higher throughput is possible by using MIMO and higher-order modulation techniques, whereas in Terahertz nanoscale networks, low complexity modulation techniques are required. 
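The frame-length trade-off described earlier can be quantified with a simple illustrative calculation (assuming independent bit errors, an idealization): for a bit error rate $p$ and a frame of $L$ bits, the frame error rate is $\mathrm{FER} = 1 - (1-p)^{L}$. With $p = 10^{-6}$, a 1000-bit frame sees $\mathrm{FER} \approx 0.1\%$, whereas a 100000-bit frame sees $\mathrm{FER} \approx 9.5\%$, which is why short frames with robust channel coding are preferred while Terahertz links remain error-prone.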
The latency requirements are also very tight in Terahertz networks.}
\begin{table*}[htbp]
  \centering
  \footnotesize
  \caption{\protect \mage{Comparison between mmWave and Terahertz wave technologies.}}
\begin{tabular}{|p{3.5cm}|p{5.2cm}|p{5.2cm}|}
\hline
 & {\bf \saim{Millimeter wave technology}} & {\bf \saim{Terahertz wave technology}} \\
\hline
\saim{Transceiver devices} & \saim{High performance mmWave devices available~} & \saim{Available but immature: UTC-PD, RTD and SBD~} \\
\hline
\saim{Modulation and coding} & \saim{High order modulation techniques available~, for example QPSK, LDPC 1/2 (264 Mbps), 16QAM, LDPC 1/2 (528 Mbps)} & \saim{Low order modulation techniques (OOK, QPSK)~, LDPC, Reed-Solomon, Hamming} \\
\hline
 \saim{Antenna} & \saim{Omni and high gain directional, MIMO supported~, antenna gain = 18 dBi when 8x8 antenna array used~} & \saim{Omni and directional, phased array with low number of antenna elements (4x1)~} \\
\hline
 \saim{Bandwidth} & \saim{7 GHz @ 60 GHz~} & \saim{69 GHz @ 300 GHz} \\
\hline
\saim{Channel models~} & \saim{Yes} & \saim{Partially} \\
\hline
 \saim{Data rate} & \saim{Maximum of 100 Gbps~} & \saim{100 Gbps~ to a few Tbps (theoretical)} \\
\hline
 \saim{Standards} & \saim{5G NR, IEEE 802.11ad, IEEE 802.11ay} & \saim{IEEE 802.15.3d} \\
\hline
 \saim{Mobility} & \saim{Supported~} & \saim{Not yet} \\
\hline
\saim{Beam management~} & \saim{Yes} & \saim{No} \\
\hline
\saim{Adaptive beam searching and switching time} & \saim{45 ms~} & \saim{In progress} \\
\hline
\saim{Outdoor deployment} & \saim{Yes} & \saim{No} \\
\hline
\saim{Free space loss} & \saim{Low} & \saim{High} \\
\hline
 \saim{Coverage} & \saim{High~} & \saim{Low} \\
\hline
\saim{Radio measurements~} & \saim{Available for many windows: 28 GHz, 72 GHz, 52 GHz, 60 GHz} & \saim{300 GHz indoor, example measurement carried out in a data center 
environment} \\\\\n\\hline\n\\saim{Device size} & \\saim{Few millimetres} & \\saim{Few micrometers} \\\\\n\\hline\n\\saim{End to end simulators} & \\saim{Available on ns3 for 5G cellular network~} & \\saim{NS3 Terasim~} \\\\\n\\hline\n\\end{tabular} \n\\label{mmwave-thz-diff}\n\\end{table*}", "id": "675cd177-d4e5-40b9-aec0-07f20cdf7d0f", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "4ea94c9c-3736-4981-957f-9a964c84f6f5", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "\\mage{Background on Terahertz band, Technology and MAC protocols" ], [ "subsection", "\\mage{Background and motivation for Terahertz MAC protocols" ], [ "subsubsection", "\\mage{Difference between Terahertz MAC protocols and protocols for other communications" ] ], "subsections": [], "title": "\\mage{Difference between Terahertz MAC protocols and protocols for other communications" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:applications}\n\\blue{The existing applications of Terahertz band are categorized into macro and nano applications.} Typically, these applications include outdoor as well as indoor applications that require speed from Gbps to Tbps. There are applications in which users require Gbps speed like small cells, WLAN, vehicle to vehicle communication and device to device communication. The applications which require Tbps speed and can not be satisfied using traditional bands include applications that use traffic aggregation and nano communication, for example backhaul communication, edge communication within a Data Centre and nanodevices communication which can utilize full bandwidth of Terahertz band due to small distance. The Terahertz link can be used to aggregate 5G traffic including control plane signaling, IoT traffic, \\blue{Internet}, and mobile services at the backhaul and core network side to replace existing optical fiber links. 
It can also be used for traffic augmentation. Tbps links can also be required for inter-chip communication, where chips exchange data at ultra-high rates in a flexible way over short ranges, making THz communication with ultra-high data rates feasible. These applications are also shown in Figures~\ref{fig:macroapp} and~\ref{fig:nanoapp}. Their design requirements are highlighted to emphasize what is needed to make progress in Terahertz MAC protocol design. Their performance target requirements are given in Table~\ref{tab:thz-app-chal}. Table~\ref{tab:paraaware} presents the details of these protocols based on different communication aspects and parameter-aware Terahertz MAC protocols\footnote{1) Channel aware: Nodes are aware of the spectrum information, 2) Physical layer aware: Nodes are aware of physical layer parameters like propagation loss and bit error rate, 3) Memory aware: Nodes are aware of the available memory at each node, 4) Position aware: Nodes are aware of the position of other nodes, 5) Bandwidth adaptive: Nodes are aware of the bandwidth and adapt according to the available bandwidth.}. 
The MAC layer related design requirements, issues and considerations will be discussed in Section IV.\n\\begin{table*}[htbp]\n \\centering\n \\tiny\n \\caption{Terahertz applications features, requirements and some general MAC related challenges.}\n \\begin{tabular}{|p{2em}|p{2em}|p{6.1em}|p{3.0em}|p{4.0em}|p{3.3em}|p{3.485em}|p{5.0em}|p{6.0em}|p{5.07em}|p{4.0em}|c|p{22.715em}|} \\hline\n \\toprule\n \\multicolumn{1}{|p{2em}|}{\\textbf{Network scale}} & \\multicolumn{1}{p{2em}|}{\\textbf{Category}} & \\textbf{Application Areas} & \\textbf{Coverage} & \\textbf{Mobility} & \\textbf{Data rate} & \\textbf{Latency} & \\textbf{No of connections} & \\textbf{Link availability and reliability} & \\textbf{Connectivity} & \\textbf{Energy efficiency} & \\multicolumn{1}{p{3.215em}|}{\\textbf{Target BER}} & \\textbf{\\blue{Terahertz MAC layer challenges \\newline (Sect.~\\ref{sec:applications}, Sect.~\\ref{subsec:req-apps}, and Sect.~\\ref{sec:issues-challenges} ) }} \\\\\n \\midrule\n \\multicolumn{1}{|c|}{\\multirow{11}[22]{*}{\\textbf{Macro}}} & \\multicolumn{1}{c|}{\\multirow{5}[10]{*}{\\textbf{Indoor applications}}} & Data Centre networks~\\saim{} & < 20 m & No & upto 100 Gbps & 0.1-0.2 ms & large & \\multicolumn{1}{c|}{99.99\\%} & P2P,P2MP & \\saim{Low priority} & $10^{-12}$ & \\blue{In DCNs, Top of Rack (ToR) nodes require coverage in all direction to connect to other nearby ToR nodes. Due\nto directional beam requirement at each node, a node cannot cover all direction which requires an efficient beam synchronization mechanism to establish reliable links. } \\\\\n\\cmidrule{3-13} & & Terahertz Local Area Networks~ & < 50 m & Yes & upto 100 Gbps & < 1 ms & medium & High & P2P, P2MP, Adhoc & \\saim{Medium priority} & $10^{-6}$ & \\blue{In a local area scenario, many users can access THz access point placed in indoor environment such as office. 
The medium access layer at the base station should guarantee access to multiple users based on their distance and the available bandwidth. Another challenge that should be addressed is tracking mobile users’ positions and adapting link parameters as users move within the network range. } \\
\cmidrule{3-13} & & KIOSK downloading system~\saim{} & 0.1m & No & 1 Tbps & < 1 ms & small & \multicolumn{1}{c|}{99.99\%} & P2P & \saim{Low priority} & $10^{-6}$ & \blue{The MAC layer should support the transfer of a huge amount of data in a short period. Fast link establishment should also be provided in an efficient way, with an authentication mechanism.} \\
\cmidrule{3-13} & & Terahertz Personal Area Networks~ & < 20 m & Yes & upto 100 Gbps & < 1 ms & medium & High & P2P, P2MP, Adhoc & {Medium priority} & $10^{-6}$ & \blue{The high bandwidth availability at Terahertz bands for WPANs requires efficient distance-aware radio resource management. Another challenge is to discover the nodes and achieve synchronization to inform nodes about the topology distribution and management. } \\
\cmidrule{3-13} & & Information \saim{Broadcast}~ & 0.1-5m & Yes & 1 Tbps & upto few sec & small to medium & \multicolumn{1}{c|}{99.99\%} & P2P,P2MP & \saim{Low priority} & & \blue{This includes applications which support prefetching of data (file transfer, video streaming, etc.). The maximum transfer rate at the Terahertz band and the number of users for an efficient data offload can be handled at the MAC layer. Other challenges will be to accommodate the short range Terahertz links and mobility. 
} \\\\\n\\cmidrule{2-13} & \\multicolumn{1}{c|}{\\multirow{6}[12]{*}{\\textbf{Outdoor applications}}} & Small and ultra dense cell technology~ & 10-15m & Yes & > 100Gbps & upto few ms & medium to large & High & P2MP & \\saim{High priority} & $10^{-10}$ & \\blue{Due to high density of cells, user mobility, and shorter range of Terahertz link, small cells coordination should be addressed while considering pencil beam width for link establishment and transmissions. Additional challenges can be proactive blockage detection and avoidance at MAC layer.} \\\\\n\\cmidrule{3-13} & & Vehicular networks and driver-less cars~ & > 100 m & Yes & > 100Gbps & upto few ms & medium & medium & P2P, P2MP, Adhoc & \\saim{High priority} & & \\blue{In V2X scenarios the Terahertz short range and directional antenna usage includes challenges where mobile vehicles needs to establish and manage the link and transfer huge amount of data (maps and road views etc.). These challenges require efficient MAC layer solutions. Similarly, when a mobile vehicle needs to connect to a static infrastructure, the node needs beam tracking and management techniques to communicate and transfer huge amount of data with intelligent blockage management. } \\\\\n\\cmidrule{3-13} & & Military applications~ & > 100 m & Yes & 10-100Gbps & upto few ms & medium & High & P2P, P2MP, Adhoc & \\saim{For fixed units: Low priority. For sensors: High priority} & & \\blue{In a battlefield and harsh environments, the military vehicles and planes may require secure bulk data transfer to assist other vehicles. This require secure and adaptive mechanisms to transfer data. Additionally, highly directional antennas with long range communication will be required. MAC layer should be designed to provide secure and reliable links with energy efficient mechanisms for these challenges. 
} \\\\\n\\cmidrule{3-13} & & Space Applications~ & kms & Yes & 10-100Gbps & upto few sec & small to medium & High & P2P,P2MP & \\saim{Low priority} & & \\blue{Due to specific Terahertz channel characteristics and noise in space, the inter and intra satellite communication requires strict LOS communication. At MAC layer, adaptive resource allocation and link reliability should be addressed with blockage detection and mitigation. } \\\\\n\\cmidrule{3-13} & & Backhaul connectivity~\\saim{} & 1 km & No & > 100Gbps & upto few ms & small & High & P2P & \\saim{Low priority} & $10^{-12}$ & \\blue{THz wireless backhauling links can replace wired links for a specific environment. At MAC, link parameters can be monitored to achieve reliable high data rate links. Because, the Terahertz channels can be easily affected by environment and blockages which requires highly directional beams. In backhaul networks, nodes switching can be used to avoid blockages and change communication paths. } \\\\\n \\midrule\n \\multicolumn{1}{|c|}{\\multirow{5}[10]{*}{\\textbf{Nano}}} & \\multicolumn{1}{p{5.785em}|}{\\textbf{In/On-body applications}} & Health monitoring~ & 0.5-2.5mm & Yes & 100Gbps & 1 ms & large & High & adhoc, P2MP & \\saim{High priority} & & \\blue{Disease monitoring and early detection is crucial which requires energy efficient mechanisms and nodes mobility to carry data from one place to another where MAC layer should provide energy efficient and interference mitigation mechanisms. } \\\\\n\\cmidrule{2-13} & \\multicolumn{1}{c|}{\\multirow{2}[4]{*}{\\textbf{Outdoor applications}}} & Defence applications~ & \\multicolumn{1}{c|}{} & No & 100Gbps & 1 ms & large & High & adhoc, P2MP & \\saim{High priority} & & \\blue{In defence applications, nano devices can be used to monitor a specific area or on-body communication. 
The nodes will require efficient access and scheduling mechanism to transmit data efficiently while balancing energy consumption and harvesting for massive connectivity. Additionally, discovery and coordination among the nodes will also be required for adaptive network management. \n} \\\\\n\\cmidrule{3-13} & & Agricultural applications~ & 4mm & No & 100Gbps & 1 ms & large & High & adhoc, P2MP & \\saim{High priority} & & \\blue{Application of THz radio communication for agriculture application is challenged by many factors such as a specific channel characteristic including attenuation of leaves, soil, dust and water vapour. These factors should be monitored with efficient data transmission scheduling and frame structure. UAVs can also be used to monitor a field by fetching data from a gateway node which will require beam alignment and efficient resource allocation. } \\\\\n\\cmidrule{2-13} & \\multicolumn{1}{c|}{\\multirow{2}[4]{*}{\\textbf{Indoor applications}}} & Internet of Nano-Things~ & 2m & No & 90Gbps & 1 ms & massive & High & adhoc, P2MP & \\saim{High priority} & & \\blue{Nano nodes will be deployed in high density to perform specific tasks such as sensing and relaying. MAC layer should perform scheduling and channel sensing for link adaptation and efficient resource allocation. Additionally, node buffer should be optimized for this massive connectivity scenario with efficient path selection. \n } \\\\\n\\cmidrule{3-13} & & Intra chip network~\\saim{} & 0.03m & No & 100Gbps & 1 ms & small to medium & High & Ad-hoc & \\saim{Medium priority} & $10^{-12}$ & \\blue{The intra chip communication will require bulk data transmission which requires effective memory synchronization and access. Optimization of number of cores supported for high throughput and low delay is a challenge to address. 
Further, hybrid MAC protocols and multiplexing mechanisms are also required.
 } \\ \hline
 \bottomrule
 \end{tabular}
 \label{tab:thz-app-chal}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=5.3in,height=3.2in]{figures/macro-app.png}
\caption{Terahertz communication applications for macro scale networks.}
\label{fig:macroapp}
\end{figure*}", "id": "f7836041-01d7-468b-aaaf-0b8915fb280f", "level": "section", "origin_cites_number": 0, "parent_id": "068ac4ba-0bb0-453c-9e5f-1071f02038e9", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Terahertz band applications and their requirements" ] ], "subsections": [ "fd56f117-8e9c-4784-8c88-1fef25d864df", "a7c27535-4e26-4f84-a175-e793b24dbbe3", "3a08212b-709c-41dc-ac28-ef910f217b9d" ], "title": "Terahertz band applications and their requirements" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{subsec-app:macro}
\saim{Macro-scale communication involves applications in which the transmission range is higher than 1 meter and up to a kilometer. Although the Terahertz band has huge bandwidth availability, the transmission distance can vary due to high path and absorption loss. Indoor applications differ from outdoor applications mainly due to scattering and reflection effects, and therefore require different channel models for different environments, which should be considered while designing MAC protocols for these applications. These applications are shown in Figure~\ref{fig:macroapp}. Table~\ref{tab:thz-app-chal} shows the technical requirements for different Terahertz applications. It includes fixed point-to-point, point-to-multipoint, and mobility scenarios.}
The applications which require Tbps links include wireless backhaul~ and Data Centres~. 
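The high path and absorption loss noted above can be illustrated with the Friis free-space term alone (a simplified sketch: molecular absorption, antenna gains and reflections are deliberately ignored, and the function name is ours):

```python
import math

def fspl_db(freq_hz, dist_m, c=3e8):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

# Same 100 m link at 60 GHz (mmWave) vs 300 GHz (Terahertz):
print(round(fspl_db(60e9, 100), 1))    # -> 108.0 dB
print(round(fspl_db(300e9, 100), 1))   # -> 122.0 dB, about 14 dB worse
```

The roughly 14 dB gap for a 5x increase in carrier frequency is part of why high antenna directivity is needed to recover link budget at Terahertz frequencies.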
Wireless backhaul typically involves point-to-point connections for information transmission to the base stations of the macrocell, especially where fiber optics are not available. Small cell communication can be another application scenario in which the backhaul part will require Tbps links to transfer a huge amount of data. High power antennas can be used for larger distance coverage, but environmental factors and frequent user association should be considered while designing a MAC protocol. The atmospheric loss is higher but can be compensated by high antenna directivity. For capacity enhancement and larger bandwidth with Tbps transmission, the Terahertz band between 200 and 300 GHz has shown low atmospheric losses~. The wireless fiber extender is also an interesting scenario to extend the communication range and capacity of an existing backhaul communication setup, to provide reliable data communication with Tbps throughput for distances up to 1 km in outdoor environments. A very large antenna array or massive MIMO techniques can be used to transfer information between cells. Massive MIMO arrays can be used in an adaptive manner to modify the transmit and receive beams to accommodate changes in the environment, rather than physically readjusting the arrays~. They can be used to communicate with multiple backhaul stations by electronic beam steering. 
\saim{At the macro scale, another promising application is wireless Data Centres~. In an attempt to increase the user experience and provide high-speed networking, cloud applications have introduced competition between Data Centres. This has resulted in extensive expansion of servers and of the bandwidth required to support numerous applications and to manage bursty high traffic. The Terahertz band can be used to support Tbps links, especially at the edge where the traffic aggregates, rather than using cables with limited capability. 
The Terahertz links can be used in parallel with the existing architecture to provide backup links, failover mechanisms and high-speed edge router links with SDN support~. This can improve the user experience and can also reduce the cost of deployment within Data Centres.}
\saim{Using high capacity wireless Terahertz links can also help in re-designing the Data Centre geometry~. However, this requires careful design of the communication protocols (physical, MAC and network layers) to be efficiently utilized in the Data Centre network. The top-of-rack (ToR) Terahertz devices can connect using point-to-point and multipoint wireless links. Inter-rack communication, however, requires directional antennas for enhanced coverage beyond a meter. Intra-rack communications can also use omni-directional antennas due to the short distance between the routers. A fair and efficient channel access scheme is required for both inter- and intra-rack communication, with scheduling (for directional antennas) and with collision avoidance (CA) techniques (for omni-directional antennas) due to multi-user interference. To connect different ToR devices, link establishment is very important but becomes challenging with directional antennas and the energy minimization constraint.}
\saim{The other point-to-point connectivity applications include the KIOSK downloading system, which can be used for instant transfer of a bulk amount of data, as shown in Figure~\ref{fig:macroapp}. It is a peer-to-peer communication system between a stationary transmitter and a mobile receiver with limited mobility. It can be imagined as a stationary terminal or point, which may be connected with a data center using fiber optics, and can provide a bulk amount of data, typically in GBs, to various users in a few seconds. This type of application can use Tbps links to satisfy user demands. 
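To see why Tbps links fit the KIOSK use case, a back-of-envelope sketch (the file size and the 50\% MAC/PHY efficiency factor are assumptions for illustration, and the function name is ours):

```python
# Hedged sketch: time to transfer a bulk file at the nominal 1 Tbps rate.

def transfer_time_s(size_bytes, rate_bps, efficiency=1.0):
    """Seconds to move size_bytes over a link of rate_bps at a given efficiency."""
    return (size_bytes * 8) / (rate_bps * efficiency)

size = 10e9  # an assumed 10 GB download (e.g., a movie)
print(transfer_time_s(size, 1e12))        # -> 0.08 s at an ideal 1 Tbps
print(transfer_time_s(size, 1e12, 0.5))   # -> 0.16 s at 50% protocol efficiency
```

Even with heavy protocol overhead, the transfer completes in a fraction of a second, so rapid link establishment and authentication, rather than raw capacity, become the limiting factors.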
The challenges include rapid user association and link establishment, beam management, error detection and correction strategies, and secure authentication on public systems. An experimental demonstration of a prototype KIOSK downloading system using the 300 GHz band is presented in~ and~, which includes the channel and LOS analysis with a comparison of error correction algorithms. The approximate beam size is mentioned in~ as 22 cm, with 30 dBi antenna gain, for a 1-meter distance. }
\saim{Vehicle-to-infrastructure and its backhaul network can also utilize Tbps links to improve vehicular communication networks. For example, Google's self-driving cars generate sensor data at the rate of 750 MBps~ and are expected to generate 1 TB of sensor data in a single trip~. The delivery, processing, and optimization of such huge data to assist vehicles can require Tbps links to improve the efficiency and latency of communication. As an alternative solution to fiber-based backhauling, the vehicles can also serve as digital mules to reduce the deployment cost and to migrate the data~. }
\saim{Point-to-point communication can also be utilized in military applications, between airborne and ground machines~. For many years, space agencies such as NASA and ESA have been developing sensors and instruments for space technology~,~. Using a THz link for intra-satellite applications can be feasible outside the atmospheric region, where only free space loss is considered. A possible scenario can be an inter-satellite link within the same constellation or between a low-orbit and a geostationary satellite.}
\saim{The interesting scenarios for TLAN are indoor home networks and LANs~. Mainly, the backhaul part from this home network to the core networks can use Tbps links to transfer aggregated data, which can facilitate multiple users downloading huge data at a time. 
However, currently, the short distance within an indoor home or office environment requires point-to-point communication or efficient beam management strategies. The high speed and instant connectivity between multiple personal devices are possible using Terahertz communication links~. }\n\\begin{figure}\n\\centering\n\\includegraphics[width=3.2in,height=2.2in]{figures/nano.png}\n\\caption{Terahertz communication applications for nanoscale networks.}\n\\label{fig:nanoapp}\n\\end{figure}", "id": "fd56f117-8e9c-4784-8c88-1fef25d864df", "level": "subsection", "origin_cites_number": 0, "parent_id": "f7836041-01d7-468b-aaaf-0b8915fb280f", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Terahertz band applications and their requirements" ], [ "subsection", "Applications for Macro Scale Terahertz Networks" ] ], "subsections": [ "3823ef59-8656-4a3d-870e-fb9f8b1ade29" ], "title": "Applications for Macro Scale Terahertz Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "\\saim{The Terahertz applications range from indoor to outdoor from short to large distances. The applications with very short communication distance like KIOSK downloading system and intra rack communication within a Data Centre does not involve mobility challenge. However, a delay can be involved depending upon the amount of data transfer and channel access schedule. Mainly, they require point-to-point connectivity between the systems. The last mile access and backhaul point-to-point and multipoint are also very interesting scenarios that require high throughput and low latency for longer distances up to a km. Currently, it has reached 10 Gbps and is realized to reach beyond 100 Gbps using Terahertz band with high bandwidth availability~. }\n\\saim{The atmospheric losses affect both indoor and outdoor type of applications differently and therefore the appropriate channel and propagation models should be considered. 
The indoor environments like Data Centres require fixed links between the racks \\blue{with} very limited mobility. The outdoor environments like backhaul links involve fixed point-to-point links, however differ from indoor environments in terms of atmosphere, distance, reliability and link-budget requirements. The scenarios like vehicular and small cells require high mobility and therefore involve frequent handovers and must support high user density and scalability for new MAC protocols. For both scenarios, efficient channel access mechanisms, reliable connections with error detection and correction are required with efficient beam management techniques \\blue{for the MAC design}.}", "id": "3823ef59-8656-4a3d-870e-fb9f8b1ade29", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "fd56f117-8e9c-4784-8c88-1fef25d864df", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Terahertz band applications and their requirements" ], [ "subsection", "Applications for Macro Scale Terahertz Networks" ], [ "subsubsection", "Summary of macro scale applications" ] ], "subsections": [], "title": "Summary of macro scale applications" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec-app:nano}\n\\saim{Nanotechnology enables the nano-components which are able to perform simple specific tasks, like data storage, computation, sensing, and actuation. These nano-components can be integrated into a device with only a few cubic meters in size and can enable the development of more advanced nano devices~. These nanodevices can exchange information to achieve complex tasks in a centralized or a distributed manner~, which enable unique and interesting applications in the field of biomedical, industrial, military, health monitoring, plant monitoring, nanosensor networks, chemical and biological attack prevention and system-on-chip wireless networks~. 
It mainly involves communication up to a few centimeters. This also includes networks using electromagnetic radiation, like nanonetworks, and molecular networks in which transmission occurs using the flow of molecules~. }
\saim{The nano devices, due to their smaller size and communication range, can benefit from larger bandwidth and so can utilize Tbps links. Due to the smaller range, the path loss remains very low, which enables high throughput in nano communication. These applications are shown in Figure~\ref{fig:nanoapp} and their challenges are also mentioned in Table~\ref{tab:thz-app-chal}. Nanosensors can be used for early disease detection by using molecular communication, for example by gathering heart-rate data. The gathered information can then be transmitted over the Internet, using a device or mobile phone, to a healthcare provider. Other applications are a nano-scallop that swims in a biomedical fluid~, a bio-bot which is powered by skeletal muscles~, an on-board drug delivery system~, and a magnetic helical micro-swimmer to deliver single-cell gene therapy to the human embryonic kidney in a wireless way~. \blue{THz} MAC layer protocols should consider the energy consumption and harvesting trade-off, error recovery, scheduling of transmissions among different nodes, and efficient channel access.}
\saim{The Terahertz band can be used for wireless on-chip networks~. It can be used at the very small scale to connect two chips due to its high bandwidth and low area overhead. The MAC should support the maximum number of cores by addressing MAC performance for specified input traffic and interface characteristics~. The tolerable delay should be analysed across different layer architectures, along with the maximum number of cores supported for a given throughput and delay~. The challenges include an efficient channel access mechanism for intra-chip communication with scheduling, efficient inter-core communication, and small-scale antennas to provide high bandwidth and low delay. 
}", "id": "a7c27535-4e26-4f84-a175-e793b24dbbe3", "level": "subsection", "origin_cites_number": 0, "parent_id": "f7836041-01d7-468b-aaaf-0b8915fb280f", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Terahertz band applications and their requirements" ], [ "subsection", "Applications for NanoScale Terahertz Networks" ] ], "subsections": [ "a64107ed-8370-41d8-bc12-6c3f999e5bac" ], "title": "Applications for NanoScale Terahertz Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "\\saim{The nano applications, especially on chip communications can utilize high bandwidth Tbps links. In nano communications\\blue{,} the communication range is very small, due to which the path loss remains at the lowest. Mainly, these applications requires efficient energy consumption and harvesting mechanisms to address limited energy issues while considering nonbatteries/nanogenerators/nontransreceivers architecture and performance enhancements. The antenna technology and new channel/propagation and noise models are required with tools to estimate path loss for different nanoscale network environments. Efficient communication protocols such as modulation and coding techniques, power control, routing and MAC strategies for nanoscale communications. Their MAC protocols also requires to support scalability and connectivity among large number of devices. Due to limited capacity of devices, energy efficient mechanism are required with harvesting and transmission balance for efficient communication. Link establishment and scheduling the communication among devices is also a challenge. 
Some architectures for nanonetworks to handle complex tasks by combining them with current technologies like SDN, IoT, virtual networks and fog computing are presented in~.}", "id": "a64107ed-8370-41d8-bc12-6c3f999e5bac", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "a7c27535-4e26-4f84-a175-e793b24dbbe3", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Terahertz band applications and their requirements" ], [ "subsection", "Applications for NanoScale Terahertz Networks" ], [ "subsubsection", "Summary of Nanoscale applications" ] ], "subsections": [], "title": "Summary of Nanoscale applications" }, { "cite_extract_rate": 0, "cites": [], "content": "}
\saim{The Terahertz band is emerging as the most promising way to realize Gbps links, due to the increase in current traffic demand~. A radio link over the 120 GHz band for data transmission at the rate of 10 Gbps over a range of more than 800 m is presented in~. There are applications, mostly at the end-user side, which require high speed data connections of more than 100 Gbps~. That can be fulfilled by using low Terahertz frequency bands with higher order modulation schemes. These applications include a broadcast system at public places like metro stations, airports, and shopping malls to transfer \blue{a bulk amount of} data \blue{at Gbps speed}. However, such data transmission is currently possible only over short distances. The short distance, mobility and user density per coverage area should be considered while designing a MAC protocol. The benefits of this application are also recognised by IEEE 802.15.3d (TG - 100 Gbps wireless) as one of the use cases for Terahertz communication~. An information broadcast case with area density, number of users and mobility pattern is discussed in~. 
Further, the front ends of applications like Terahertz LAN, vehicle-to-vehicle communication, and small cells serving mobile users are all applications where Terahertz bands can be used for high throughput. However\blue{,} these will require extreme antenna directionality management and rapid user association and disassociation mechanisms. }\n\saim{The fronthaul lies between the base station and the end-user radio equipment, which requires Gbps links. This radio equipment can be mobile, as in small cell and wireless LAN scenarios, or can belong to fixed users. The critical parameter for these applications, other than high data rate (up to 1 Tbps), is the distance, which should range from above a meter up to a kilometre. Future applications will include massive deployment of small cells for cloud radio access networks, which may increase the data rates for end users at the front end and back end and will require Tbps links. The small cell deployment can utilize the huge bandwidth available in the Terahertz band and can free up the lower frequency bands, which leads to several Tbps of data transfer~. One of the possible and upcoming applications of the Terahertz band is small cell communication for mobile cellular networks, in which ultra-high data rates can be provided to mobile users within a transmission range of up to 20 m~. The Terahertz small cell can be a fixed point installed to serve multiple mobile users. The mobility of users with high data volume offloading needs to be supported. Users moving from cell to cell require seamless handover for uninterrupted communication. Terahertz directional antenna usage can increase challenges in user association and tracking with scheduled channel access. Therefore, the Terahertz MAC protocol should consider these requirements and the target performance to ensure user satisfaction.}\n\saim{The virtual reality (VR) device is an interesting application which requires at least 10 Gbps of data traffic transfer. 
However, currently it relies on wired cord and needs to be shifted to wireless transfer with more than 10 Gbps data rate. The VR applications mainly requires ultra high reliability and low latency with high data rates for their services and fast data processing~. Using Terahertz band for VR services requires transmission and processing delay to be low\\blue{~}.}\n\\begin{table*}[htbp]\n \\centering\n \\tiny\n \\caption{Summary of existing Terahertz MAC protocols with MAC aspects and parameter awareness.}\n \\begin{tabular}{|p{1.1em}|p{0.42em}|p{0.33em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{26em}|p{11em}|}\n\\cmidrule{3-24} \\multicolumn{2}{c|}{} & \\multicolumn{5}{|p{8.6em}|}{\\textbf{Parameter aware MAC protocols}} & \\multicolumn{17}{p{23.5em}|}{\\textbf{ MAC layer aspects}} & \\multicolumn{1}{l}{} \\\\\n \\midrule\n \\begin{sideways}\\textbf{Paper}\\end{sideways} & \\multicolumn{1}{p{1em}|}{\\begin{sideways}\\textbf{Year}\\end{sideways}} & \\begin{sideways}\\textbf{Channel}\\end{sideways} & \\begin{sideways}\\textbf{Physical layer}\\end{sideways} & \\begin{sideways}\\textbf{Energy}\\end{sideways} & \\begin{sideways}\\textbf{Memory}\\end{sideways} & \\begin{sideways}\\textbf{Position/Distance}\\end{sideways} & \\begin{sideways}\\textbf{Bandwidth adaptive}\\end{sideways} & \\begin{sideways}\\textbf{Hand shake}\\end{sideways} & \\begin{sideways}\\textbf{Synchronization}\\end{sideways} & \\begin{sideways}\\textbf{Neighbor discovery}\\end{sideways} & \\begin{sideways}\\textbf{Channel access method}\\end{sideways} & \\begin{sideways}\\textbf{channel selection}\\end{sideways} & \\begin{sideways}\\textbf{carier sensing}\\end{sideways} & \\begin{sideways}\\textbf{Scheduling}\\end{sideways} & \\begin{sideways}\\textbf{Cross layer MAC design}\\end{sideways} & \\begin{sideways}\\textbf{Collision and 
congestion}\\end{sideways} & \\begin{sideways}\\textbf{Interference}\\end{sideways} & \\begin{sideways}\\textbf{Packet size and structure}\\end{sideways} & \\begin{sideways}\\textbf{Data transmission}\\end{sideways} & \\begin{sideways}\\textbf{Error Control}\\end{sideways} & \\begin{sideways}\\textbf{Delay and throughput}\\end{sideways} & \\begin{sideways}\\textbf{Multiplexing}\\end{sideways} & \\begin{sideways}\\textbf{Beam forming and scanning}\\end{sideways} & \\textbf{Protocol Description}& \\textbf{\\blue{THz challenge addressed}} \\\\\n \\midrule\n & 2011 & X & X & X & X & X & X & X & X & X & X & X & X & X & X & \\checkmark & X & \\checkmark & X & X & X & X & X & Effects of congestion and traffic generation intensity are analysed for nano-networks through competition among bacteria for conjugation at nano gateways. & \\blue{Traffic congestion at nano gateways and link layer performance analysis.} \\\\ [-0.5em]\n \\midrule\n & 2012 & \\checkmark & X & \\checkmark & X & X & X & \\checkmark & \\checkmark & X & X & \\checkmark & X & X & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark & X & \\checkmark & X & X & The communication and coding schemes are jointly selected to maximise the decoding probability and minimise the interference while considering energy limitations. & \\blue{MAC protocol with low weight coding scheme and pulse-based communication.} \\\\[-0.5em]\n \\midrule\n & 2012 & X & X & X & X & X & X & X & \\checkmark & X & \\checkmark & X & X & \\checkmark & X & \\checkmark & X & \\checkmark & X & X & X & X & X & An energy efficient, scalable and reliable MAC protocol is proposed for dense nanonetworks with control and data packet structures. 
& \blue{MAC protocol for Nanonetworks with scalability.} \\[-0.5em]\n \midrule\n & 2013 & \checkmark & \checkmark & \checkmark & X & X & \checkmark & X & X & X & \checkmark & X & X & \checkmark & \checkmark & \checkmark & X & X & X & X & \checkmark & X & X & An energy and spectrum aware MAC protocol is proposed to achieve fair throughput and optimal channel access by optimising the energy harvesting and consumption in nano-sensors. & \blue{A sub-optimal symbol compression scheduling algorithm with throughput and lifetime balance.} \\[-0.5em] \n \midrule\n & 2013 & X & X & X & X & X & X & X & \checkmark & X & \checkmark & X & X & \checkmark & X & X & X & \checkmark & X & X & \checkmark & X & X & A MAC protocol based on IEEE 802.15.3c is proposed for Terahertz ultra high data rate wireless networks with a super frame structure and timeslot allocation scheme. & \blue{MAC protocol for high data rate support.} \\[-0.5em]\n \midrule\n & 2013 & X & X & X & X & X & X & X & X & X & X & X & X & X & \checkmark & X & X & X & \checkmark & X & X & X & X & A MAC protocol is proposed for health monitoring for nanosensor networks with analysis of node density and Tx range with routing strategies. & \blue{Simulator for nano networks, Node density and Transmission analysis.} \\[-0.5em]\n \midrule\n & 2013 & X & X & X & X & X & X & X & X & X & \checkmark & X & X & X & X & X & X & X & \checkmark & X & X & X & X & A MAC protocol for Terahertz communication is proposed with channel access and data rate analyses with superframe structure. & \blue{MAC protocol with super frame structure.} \\[-0.5em]\n \midrule\n & 2014 & X & X & \checkmark & X & X & X & X & X & \checkmark & X & X & X & \checkmark & X & X & X & X & X & X & X & \checkmark & \checkmark & A MAC design is proposed for macro scale communication at 100 Gbps for pulse-level beam switching and energy control with focus on neighbor discovery and scheduling. 
& \blue{MAC protocol with pulse level beam switching and energy control.} \\[-0.5em]\n \midrule\n & 2014 & X & X & \checkmark & X & X & X & X & X & \checkmark & X & X & X & \checkmark & X & X & \checkmark & X & X & X & X & X & X & A technique to utilize the harvested energy in wireless nano-networks is presented with focus on optimal energy consumption for transmission and reception with packet scheduling. & \blue{Energy harvesting and consumption with low energy storage.} \\[-0.5em]\n \midrule\n & 2014 & X & X & X & X & X & X & X & X & X & X & \checkmark & X & X & X & X & X & X & X & X & X & X & X & A frequency hopping scheme is presented to overcome the problems of molecular absorption. & \blue{Frequency selection based on channel noise and absorption.} \\[-0.5em]\n \midrule\n & 2014 & X & X & \checkmark & X & X & X & \checkmark & X & X & X & X & X & \checkmark & X & X & X & X & X & X & X & X & X & A receiver initiated MAC protocol is proposed based on distributed and probabilistic schemes for adaptive energy harvesting nanonodes with scheduling. & \blue{Rx initiated communication with energy harvesting.} \\[-0.5em]\n \midrule\n & 2015 & X & X & \checkmark & X & X & X & \checkmark & X & X & X & X & X & \checkmark & X & X & X & X & X & X & X & X & X & A distributed receiver initiated MAC protocol is proposed with scheduling scheme to minimize collisions and maximize the utilization of energy harvesting. & \blue{Distributed Scheduling with energy harvesting.} \\[-0.5em]\n \midrule\n & 2015 & X & X & X & X & \checkmark & X & \checkmark & \checkmark & X & X & X & X & X & X & X & X & X & X & X & X & X & X & An Rx initiated handshake based link layer synchronization mechanism is proposed to maximise the channel utilization with analysis of delay, throughput and packet transmission rate. 
& \\blue{Handshaking and synchronization mechanisms.} \\\\[-0.5em]\n \\midrule\n & 2015 & \\checkmark & X & X & \\checkmark & X & X & X & X & X & X & X & X & X & X & \\checkmark & X & X & \\checkmark & X & X & X & X & A scheme with logical channel information is proposed in which information is encoded in timings of channel. It supports low rate communication in an energy efficient and reliable manner. & \\blue{Information encoding over logical communication channels.} \\\\[-0.5em]\n \\midrule\n & 2015 & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & \\checkmark & \\checkmark & X & X & A cross layer analysis of error control strategies is presented for nanonetworks with trade-off between bit error rate, packet error rate, energy consumption and latency. & \\blue{Error control analysis.} \\\\[-0.5em]\n \\midrule\n & 2015 & X & X & \\checkmark & X & X & X & X & X & X & \\checkmark & X & X & X & X & X & X & X & \\checkmark & X & X & X & X & An intra-body disease detection is proposed for wireless nanosensor network using on-off keying and TDMA framework for analysing the data transmission efficiency. & \\blue{Multiple channel access scheme.} \\\\[-0.5em]\n \\midrule\n & 2016 & X & X & \\checkmark & X & X & X & X & X & X & X & X & X & \\checkmark & X & X & X & X & \\checkmark & X & \\checkmark & X & X & A fully distributed low-computation scheduling MAC protocol is proposed for maximising network throughput by jointly considering the energy consumption and harvesting. & \\blue{Scheduling with energy harvesting and consumption.} \\\\[-0.5em]\n \\midrule\n & 2016 & X & X & X & X & \\checkmark & X & X & X & \\checkmark & X & X & X & X & X & X & X & \\checkmark & \\checkmark & X & \\checkmark & X & \\checkmark & An assisted beam-forming and alignment MAC protocol is proposed with neighbor discovery, data transmission, delay and throughput analysis. 
& \\blue{MAC protocol with assisted beam forming.} \\\\ [-0.5em]\n \\midrule\n & 2016 & X & X & \\checkmark & X & X & X & X & \\checkmark & X & X & X & X & X & X & X & X & X & X & X & X & X & X & A synchronization mechanism is proposed for nano sensor network based on TS-OOK with analysis of consumed energy, collision probability, delay and throughput. & \\blue{Synchronization with energy consumption and collision analysis.} \\\\[-0.5em]\n \\midrule\n & 2016 & X & \\checkmark & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & \\checkmark & X & X & X & X & A networking approach for static and dense topologies is presented with flooding, network density, data dissemination and broadcast analysis. & \\blue{Scalability and coverage, flooding based communication.} \\\\[-0.5em]\n \\midrule\n & 2016 & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & \\checkmark & X & X & \\checkmark & X & X & A link throughput maximization problem is discussed. An optimal packet size is derived with combined physical and link layer peculiarities. & \\blue{Link throughput maximization and optimal packet size.} \\\\[-0.5em]\n \\midrule\n & 2016 & X & X & X & X & X & X & X & X & X & X & X & X & X & X & \\checkmark & X & X & \\checkmark & X & X & X & X & Different MAC protocols are compared and analysed in terms of transmission distance, energy consumption and collision probability. & \\blue{Transmission, energy and collision performance analysis.} \\\\[-0.5em]\n \\midrule\n & 2016 & X & X & X & X & X & X & X & X & X & X & \\checkmark & X & X & X & X & X & X & X & X & X & X & X & Four frequency selection schemes are analysed for throughput and energy utilization. 
& \blue{Frequency selection with sensitivity to moisture level.} \\[-0.6em]\n \midrule\n & 2016 & \checkmark & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & \checkmark & X & X & \checkmark & X & X & A high throughput low delay MAC is proposed to reduce the delay with a super-frame structure. & \blue{Low delay MAC with super frame structure.} \\[-0.6em]\n \midrule\n & 2017 & \checkmark & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & \checkmark & X & \checkmark & X & X & X & A hardware processor for 100 Gbps wireless data links is presented. A lightweight FEC engine, BER, and a frame fragmentation and retransmission protocol are also presented. & \blue{Hardware processor for 100 Gbps wireless data link layer.}\\[-0.5em] \n \midrule\n & 2017 & X & X & X & \checkmark & X & X & X & \checkmark & X & X & X & X & X & X & X & \checkmark & X & \checkmark & X & \checkmark & \checkmark & X & A memory assisted MAC protocol with angular division multiplexing is proposed with multiple antennas for service discovery, coordination, data communications and interference. & \blue{Medium access with angular division multiplexing and memory consideration.} \\[-0.5em]\n \bottomrule\n \end{tabular}\n \label{tab:paraaware}\n\end{table*}\n\begin{table*}[htbp]\n \centering\n \tiny\n \caption*{(TABLE VII Continued.) 
Summary of existing Terahertz MAC protocols with MAC aspects and parameter awareness.}\n \\begin{tabular}{|p{1.1em}|p{0.42em}|p{0.33em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{0.34em}|p{26em}|p{11em}|}\n\\cmidrule{3-24} \\multicolumn{2}{c|}{} & \\multicolumn{5}{|p{8.6em}|}{\\textbf{Parameter aware MAC protocols}} & \\multicolumn{17}{p{23.5em}|}{\\textbf{ MAC layer aspects}} & \\multicolumn{1}{l}{} \\\\\n \\midrule\n \\begin{sideways}\\textbf{Paper}\\end{sideways} & \\multicolumn{1}{p{1em}|}{\\begin{sideways}\\textbf{Year}\\end{sideways}} & \\begin{sideways}\\textbf{Channel}\\end{sideways} & \\begin{sideways}\\textbf{Physical layer}\\end{sideways} & \\begin{sideways}\\textbf{Energy}\\end{sideways} & \\begin{sideways}\\textbf{Memory}\\end{sideways} & \\begin{sideways}\\textbf{Position/Distance}\\end{sideways} & \\begin{sideways}\\textbf{Bandwidth adaptive}\\end{sideways} & \\begin{sideways}\\textbf{Hand shake}\\end{sideways} & \\begin{sideways}\\textbf{Synchronization}\\end{sideways} & \\begin{sideways}\\textbf{Neighbor discovery}\\end{sideways} & \\begin{sideways}\\textbf{Channel access method}\\end{sideways} & \\begin{sideways}\\textbf{channel selection}\\end{sideways} & \\begin{sideways}\\textbf{carier sensing}\\end{sideways} & \\begin{sideways}\\textbf{Scheduling}\\end{sideways} & \\begin{sideways}\\textbf{Cross layer MAC design}\\end{sideways} & \\begin{sideways}\\textbf{Collision and congestion}\\end{sideways} & \\begin{sideways}\\textbf{Interference}\\end{sideways} & \\begin{sideways}\\textbf{Packet size and structure}\\end{sideways} & \\begin{sideways}\\textbf{Data transmission}\\end{sideways} & \\begin{sideways}\\textbf{Error Control}\\end{sideways} & \\begin{sideways}\\textbf{Delay and throughput}\\end{sideways} & \\begin{sideways}\\textbf{Multiplexing}\\end{sideways} & 
\\begin{sideways}\\textbf{Beam forming and scanning}\\end{sideways} & \\textbf{Protocol Description}& \\textbf{\\blue{THz challenge addressed}} \\\\\n \\midrule\n & 2017 & X & \\checkmark & X & X & X & X & X & \\checkmark & \\checkmark & X & X & X & X & X & X & X & X & \\checkmark & X & X & X & \\checkmark & A distributed multi radio assisted MAC protocol is proposed with multiple antennas for signal control mechanism with beam-forming. & \\blue{Assisted beamforming for data transmission.} \\\\ [-0.5em]\n \\midrule\n & 2017 & \\checkmark & X & X & X & X & X & \\checkmark & \\checkmark & X & X & X & X & X & X & X & X & X & X & X & X & X & X & A high throughput low delay MAC is proposed with on-demand retransmission mechanism based on verification, reserved timeslots based channel condition and adaptive retransmission mechanism. & \\blue{Retransmissions and reserved timeslots mechanisms based on THz channel conditions.} \\\\[-0.5em] \n \\midrule\n & 2017 & X & X & X & X & X & X & X & X & \\checkmark & X & X & X & X & X & X & X & X & \\checkmark & X & X & X & X & A relay-based MAC is presented which considers communication blockage and facing problem. It further presents a neighbor discovery mechanism and data transmission. & \\blue{Relay based MAC for obstacle effect mitigation.} \\\\[-0.5em]\n \\midrule\n & 2017 & X & \\checkmark & \\checkmark & X & X & \\checkmark & X & X & X & X & X & X & \\checkmark & X & X & X & X & X & X & X & X & X & An adaptive pulse interval scheduling mechanism based on pulse arrival pattern is presented. & \\blue{Pulse level access scheduling.} \\\\[-0.5em]\n \\midrule\n & 2017 & X & X & X & X & \\checkmark & X & X & X & \\checkmark & X & X & X & X & \\checkmark & X & X & X & X & X & \\checkmark & X & \\checkmark & Optimal relaying strategies with cross layer analysis. 
& \\blue{Cross layer design for high throughput.} \\\\[-0.5em]\n \\midrule\n & 2017 & \\checkmark & X & X & X & X & X & X & X & X & X & \\checkmark & X & \\checkmark & X & X & X & X & \\checkmark & X & X & X & X & Channel handoff mechanism for mmWave and Terahertz channels, high bandwidth data transfer, scheduling and channel capacity modelling. & \\blue{Spectrum switching and scheduling for mmWave and THz bands.} \\\\[-0.5em]\n \\midrule\n & 2017 & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & An energy efficient MAC with clustering and TDMA scheduling for mobility and collisions. & \\blue{TDMA scheduling with mobility.} \\\\[-0.5em]\n \\midrule\n & 2018 & X & X & X & X & \\checkmark & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & An autonomous relay algorithm is presented for vehicular networks. & \\blue{Short directional links for autonomous vehicular communication.} \\\\[-0.5em]\n \\midrule\n & 2018 & \\checkmark & X & X & X & X & X & X & X & X & X & \\checkmark & X & X & X & X & X & X & X & X & X & X & X & A secure and intelligent spectrum control strategy is presented with fixed channel. & \\blue{Security in Terahertz band with coding and interference consideration.} \\\\[-0.5em]\n \\midrule\n & 2018 & X & X & X & X & \\checkmark & X & X & X & X & X & \\checkmark & X & X & X & X & X & X & X & \\checkmark & \\checkmark & X & X & Channel switching based on distance, signalling overhead, throughput maximization and error recovery. & \\blue{Spectrum switching with distance, signalling overhead, error and throughput consideration.} \\\\[-0.5em]\n \\midrule\n & 2018 & X & X & \\checkmark & X & X & X & X & X & X & X & X & X & X & \\checkmark & X & X & X & X & X & \\checkmark & X & X & Throughput maximization with molecular absorption, interference, energy harvesting, and link capacity. 
& \blue{Throughput maximization and energy harvesting.} \\[-0.5em]\n \midrule\n & 2018 & X & X & X & X & X & X & X & \checkmark & \checkmark & X & \checkmark & X & \checkmark & X & X & X & \checkmark & \checkmark & X & X & X & X & Performance of energy consumption with dynamic super-frame durations and packet lengths. & \blue{Packet length and energy consumption analysis.} \\[-0.5em]\n \midrule\n & 2018 & \checkmark & \checkmark & X & X & X & X & X & X & X & X & \checkmark & X & X & X & X & X & X & X & X & X & X & \checkmark & A MAC with a Yagi-Uda antenna is presented for frequency and beam direction reconfigurability. & \blue{Reconfigurable antenna and programmable physical layer for MAC protocol.} \\[-0.5em]\n \bottomrule\n \end{tabular}\n \label{tab:paraawaree}\n\end{table*}", "id": "3a08212b-709c-41dc-ac28-ef910f217b9d", "level": "subsection", "origin_cites_number": 0, "parent_id": "f7836041-01d7-468b-aaaf-0b8915fb280f", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Terahertz band applications and their requirements" ], [ "subsection", "\saim{Other applications for Terahertz communications" ] ], "subsections": [], "title": "\saim{Other applications for Terahertz communications" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:requirements}\n\blue{This section discusses the features, design issues, and challenges that need to be considered while designing efficient THz MAC protocols.}", "id": "e087585c-1918-49f8-acdb-ad6a59eee45b", "level": "section", "origin_cites_number": 0, "parent_id": "068ac4ba-0bb0-453c-9e5f-1071f02038e9", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Design issues and considerations for Terahertz MAC protocols" ] ], "subsections": [ "4734ed01-d547-4bff-ae71-d408e63ad680", "0bb9b7d3-b4da-4d86-aa03-d676321bfecb", "2f3a0ac2-f992-43fb-975c-c0827d275048", 
"8823c2f5-f322-4c0f-bbe5-df4591e73e33", "9c8b19bf-b8fe-48ba-bb44-786e530f726b" ], "title": "Design issues and considerations for Terahertz MAC protocols" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec-req:features}\nBy using frequencies above $0.1$ THz, new propagation phenomena can appear such as reflection, wave absorption by some molecules and channel noise generation~. The understanding of the Terahertz band seems to be crucial to design systems exploiting this frequency, hence, researchers are focusing on the behavior of Terahertz wave traveling under different environments and in the presence of items such as walls, concrete or grass. Following are the Terahertz band features which can affect the MAC-layer performance including throughput and delay.", "id": "4734ed01-d547-4bff-ae71-d408e63ad680", "level": "subsection", "origin_cites_number": 0, "parent_id": "e087585c-1918-49f8-acdb-ad6a59eee45b", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Design issues and considerations for Terahertz MAC protocols" ], [ "subsection", "Feature of Terahertz band communication related to Terahertz MAC protocol design" ] ], "subsections": [ "d1782f0c-a620-48db-b19d-a0fb091ea4b9", "b09dc4fe-9059-4fba-a33c-70581b555f0f", "dbaf54ce-3c68-4043-bf8c-ada97aa71850", "8a32d4be-a9b7-43a0-93b3-b788d213a803", "683a5eb8-f050-48c7-bdfa-f489f5da75d0" ], "title": "Feature of Terahertz band communication related to Terahertz MAC protocol design" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec-req:pathloss}\nTo realize the Terahertz band and its characteristics, it is important to understand its propagation phenomenon and to analyze the impact of molecular absorption on the path loss and noise~. The current efforts are mainly focused on channel characterization at 300 GHz band~. 
As a Terahertz wave propagates, it suffers from different types of attenuation due to absorption, scattering, and refraction~. The signal can reach the receiver over different paths, as the sum of non-line-of-sight and line-of-sight components. The path loss includes the spreading as well as the absorption loss. The spreading loss occurs due to the expansion of the wave as it propagates through the medium, whereas the absorption loss occurs when a Terahertz wave suffers from molecular absorption in the medium~. These losses make the Terahertz band frequency selective. The spreading loss can be given as~,\n\begin{equation}\label{eq:ploss1}\na_{1}(f,d)=\left( \frac{c}{4\pi f d}\right)^{2}\n\end{equation}\n\noindent \saim{whereas the absorption loss depends upon parameters such as the temperature, pressure, and distance, and can be expressed as, }\n\begin{equation}\label{eq:ploss2}\na_{2}(f,d)=e^{-K(f)d}\n\end{equation}\n\noindent \saim{where $K(f)$ is the total absorption coefficient and $d$ is the distance between transmitter and receiver~. $K(f)$ can be calculated using the High-Resolution Transmission Molecular Absorption (HITRAN) database~.} \nFor a particular transmission distance, the path loss increases with frequency due to spreading loss. Over a few meters of distance, the path loss can increase up to 100 dB. Further, the molecular absorption defines several transmission windows depending upon the transmission distance. For a few centimeters of distance, the band behaves like a 10 Terahertz wide transmission window due to negligible absorption loss. However, for distances of more than 1 meter the absorption becomes significant, which narrows down the transmission windows. Such extreme path loss results in reduced bandwidth and only a few transmission windows. Different transmission windows are marked as feasible in~, showing less than 10 dB of path loss due to the negligible impact of molecular absorption. 
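As a numeric illustration of the two loss terms above, a minimal sketch; the absorption coefficient value used here is a hypothetical placeholder, since a real $K(f)$ would be derived from the HITRAN database:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def spreading_loss(f_hz: float, d_m: float) -> float:
    """a1(f, d) = (c / (4*pi*f*d))^2 -- free-space spreading term."""
    return (C / (4 * math.pi * f_hz * d_m)) ** 2

def absorption_loss(k_f: float, d_m: float) -> float:
    """a2(f, d) = exp(-K(f)*d) -- Beer-Lambert molecular absorption term."""
    return math.exp(-k_f * d_m)

def path_loss_db(f_hz: float, d_m: float, k_f: float) -> float:
    """Total path loss in dB (spreading and absorption combined)."""
    return -10.0 * math.log10(spreading_loss(f_hz, d_m) * absorption_loss(k_f, d_m))

# 300 GHz link over 1 m with an assumed (hypothetical) K(f) = 0.1 m^-1:
loss_db = path_loss_db(300e9, 1.0, 0.1)  # roughly 82 dB, dominated by spreading
```

Doubling the distance adds about 6 dB of spreading loss, while the absorption term grows linearly (in dB) with distance, which is why the usable transmission windows narrow as range increases.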
However, due to the spreading loss, the path loss remains high, which motivates the usage of highly directional antennas and MIMO techniques~. The Terahertz wave can be absorbed by raindrops, ice, foliage, grass, and any medium containing water molecules~.", "id": "d1782f0c-a620-48db-b19d-a0fb091ea4b9", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "4734ed01-d547-4bff-ae71-d408e63ad680", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Design issues and considerations for Terahertz MAC protocols" ], [ "subsection", "Feature of Terahertz band communication related to Terahertz MAC protocol design" ], [ "subsubsection", "Path loss" ] ], "subsections": [], "title": "Path loss" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec-req:noise}\nWithin the Terahertz band, the molecules present in the medium are excited by \saim{electromagnetic waves} at specific frequencies. These excited molecules vibrate internally: the atoms vibrate in periodic motion while the molecule as a whole undergoes constant translational and rotational motion. Due to the internal vibration, the energy of the propagating waves is partly converted into kinetic energy. From the communication perspective, this can be referred to as a loss of signal. Such molecular vibration at given frequencies can be obtained by solving the Schrodinger equation for a particular molecular structure~. A model for the computation of attenuation by gases in the atmosphere is also described by the International Telecommunication Union, which considers the water vapor and oxygen molecules over the 1-1000 GHz band~. The HITRAN database is also useful for the computation of attenuation due to molecular absorption in the Terahertz band~. 
\nThe molecular absorption is an important issue to consider along with the free space path loss, as it also causes signal loss due to the partial transformation of electromagnetic energy into internal energy~. Such transformation in the Terahertz band can introduce noise, which can be due to atmospheric temperature or to transmission in the radio channel. The noise that occurs due to atmospheric temperature (such as that of the Sun) can be referred to as sky-noise~, \saim{and} the noise introduced due to transmission in the radio channel can be referred to as molecular absorption noise~. A noise model for Terahertz wireless nanosensor networks with individual noise sources that impact intra-body systems is presented in~, with noise contributions of Johnson-Nyquist, black body, and Doppler-shift induced noise. \n\textit{Molecular absorption noise:} The molecular noise is the result of radiation of absorbed Terahertz energy by molecules, which depends on the propagation environment. The fundamental equation of molecular noise under different assumptions, such as medium homogeneity or scattering properties, can be directly derived from radiative transfer theory~. The absorption is generally caused when the transmitted EM wave shifts the medium to higher energy states, where the difference between the higher and lower energy states of a molecule determines the absorption energy, which is drawn from the EM wave. This has a direct impact on the frequency, as the absorbed energy is $E = hf$, where $h$ is Planck's constant and $f$ is frequency~. It can also be described stochastically using the absorption coefficient $K_{a}(f)$, which describes the average effective area of molecules per unit volume and depends upon frequency, due to which the Terahertz band has a unique frequency-selective absorption profile. 
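The absorbed-energy relation $E = hf$ can be evaluated directly; a minimal sketch (the two frequencies are chosen only for illustration):

```python
PLANCK_H = 6.62607015e-34  # Planck's constant (J*s)

def photon_energy_j(f_hz: float) -> float:
    """Energy drawn from the EM wave per absorption event, E = h*f (joules)."""
    return PLANCK_H * f_hz

# Example: photon energies at two frequencies in the Terahertz band
e_300ghz = photon_energy_j(300e9)  # ~2.0e-22 J
e_10thz = photon_energy_j(10e12)   # ~6.6e-21 J
```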
Similarly, the amount of radiation capable of penetrating through the absorbing medium is known as the transmittance, which can also be defined by the Beer-Lambert law~ (\ref{eq:ploss2}). Further details on the calculation of the molecular absorption coefficient and model can be found in~.\n\textit{Sky noise:} The sky-noise is independent of the transmitted signals and can be known as background noise. It is caused by the temperature of the absorbing atmosphere, which can be treated as an effective blackbody radiator, or as a grey body radiator for a non-homogeneous atmospheric medium. Several papers have described the sky-noise, e.g.,~. It is identified in satellite communication and is mostly affected by the antenna temperature, which is an additional temperature that accounts for the radiation from the absorbing atmosphere. The atmosphere can be considered as a dynamic medium with decreasing temperature and pressure as a function of elevation. In general, the sky-noise depends upon the absorption coefficient and the distance, due to the variable temperature and pressure in the atmosphere~. When the distance is small and the atmosphere is more likely homogeneous\blue{,} the absorption coefficient can be \blue{calculated} as $K_{a}(s,f) = K_{a}(f)$ where $s$ represents the distance.\n\textit{Black body noise:} A body with temperature $T$ radiates energy; the energy reaches its maximum value at a given wavelength according to the Wien displacement law~. 
This phenomenon is known as black body radiation and it contributes, for a specific range of temperatures, to the total noise of the Terahertz system~.", "id": "b09dc4fe-9059-4fba-a33c-70581b555f0f", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "4734ed01-d547-4bff-ae71-d408e63ad680", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Design issues and considerations for Terahertz MAC protocols" ], [ "subsection", "Feature of Terahertz band communication related to Terahertz MAC protocol design" ], [ "subsubsection", "Noise" ] ], "subsections": [], "title": "Noise" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec-req:scatt-ref}\nReflection and scattering are two physical properties that characterize electromagnetic waves; the region between transmitter and receiver can contain a large number of scatterers of different sizes, distributed randomly. There are two types of scattering: elastic scattering, in which only the direction of the wave is changed, and inelastic scattering, in which the scatterer introduces a change to the energy. The scattering processes include Rayleigh scattering, which occurs when the scatterer diameter is smaller than the Terahertz wavelength, and Mie scattering otherwise. Mie and Rayleigh scattering can affect the received Terahertz signal~. \n In~, a statistical model for the Terahertz scattering channel is proposed, based on indoor measurements of a point-to-point link in which transmitter and receiver were equipped with directional antennas, at the $300$ GHz window and a bandwidth equal to $10$ GHz. \saim{The scattering power radiated from different surfaces is analyzed in~ for spectrum ranging from 1 GHz to 1 THz. Two scattering models are analyzed: the direct scattering model and the radar cross-section model. 
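The Rayleigh/Mie distinction depends on the scatterer size relative to the wavelength; a sketch using the conventional size parameter $x = 2\pi r/\lambda$ (the regime thresholds below are common rules of thumb, not values taken from this survey):

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def scattering_regime(radius_m: float, f_hz: float) -> str:
    """Classify the scattering regime via the size parameter x = 2*pi*r / lambda."""
    wavelength = C / f_hz
    x = 2 * math.pi * radius_m / wavelength
    if x < 0.1:          # scatterer much smaller than the wavelength
        return "rayleigh"
    if x < 10:           # scatterer size comparable to the wavelength
        return "mie"
    return "geometric"   # much larger: behaves like simple reflection

# At 300 GHz (lambda ~ 1 mm): a 10 um aerosol is a Rayleigh scatterer,
# while a 1 mm raindrop falls in the Mie regime.
```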
The scattering can be an important propagation mechanism, and for frequencies above 100 GHz \\saim{it} can be treated as a simple reflection~.} \n Radio wave reflections occur commonly in indoor scenarios. The reflected ray depends on the electromagnetic properties of the reflector, the surface roughness, and the location of the reflectors with respect to the transmitter and receiver. The received signal at the $R_X$ side is the sum of the direct ray and all reflected rays. In~, a demonstrator is set up for four frequency windows, $100$, $200$, $300$ and $400$ GHz, to characterize reflections in each window. The reflection coefficient is given by:\n \\begin{equation}\n r=\\frac{Z\\cos(\\theta_{i})-Z_{0}\\cos(\\theta_{t})}{Z\\cos(\\theta_{i})+Z_{0}\\cos(\\theta_{t})}\n \\end{equation} \n\\noindent \\saim{where $Z_{0}=377~\\Omega$ is the wave impedance of free space and $Z$ is the impedance of the reflector. $Z$ depends on the frequency, the relative refractive index of the material, and the absorption coefficient. $\\theta_i$ is the angle of incidence and $\\theta_t$ is the angle of transmission. The reflected wave can be reduced using phased-array antennas~.}", "id": "dbaf54ce-3c68-4043-bf8c-ada97aa71850", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "4734ed01-d547-4bff-ae71-d408e63ad680", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Design issues and considerations for Terahertz MAC protocols" ], [ "subsection", "Feature of Terahertz band communication related to Terahertz MAC protocol design" ], [ "subsubsection", "Terahertz scattering and reflection" ] ], "subsections": [], "title": "Terahertz scattering and reflection" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec-req:multipath}\nIn the presence of reflectors and scatterers, non-line-of-sight (NLOS) paths can be generated by the channel for Terahertz waves. 
In Terahertz communication, where line of sight (LOS) and NLOS paths exist together, the NLOS components can interfere with the main LOS signal at the receiving side~. The advantage of the NLOS component is that, when the LOS is obstructed, the receiver can still decode the transmitted signal. The magnitude of the received signal depends on parameters such as the reflector permittivity, which characterizes the material, the reflector roughness coefficient, the incidence angle, the wave polarization, and finally the reflector's position with respect to the transmitter and receiver~. The magnitude of the NLOS signal is also affected by the antenna properties, the distance between the source and receiver, and the plane containing the reflector.\nBoth LOS and NLOS propagation scenarios exist in the indoor environment~, where the presence of NLOS is mainly due to scatterers and reflectors. The channel attenuations and delays can be estimated using the NLOS and LOS components of the channel impulse response $h(f,t)$:\n \\begin{equation}\n h(f,t)=\\sqrt{l(d_{0},f)}\\delta(t-t_{0})+\\sum_{j=1}^{N_{NLOS}}\\sqrt{l^{'}(d_{j},f)}\\delta(t-t_{j})\n \\end{equation}\n\\noindent \\saim{where $N_{NLOS}$ is the number of NLOS paths, $d$ is the distance, $f$ is the frequency, $\\delta$ is the Dirac delta function, and $l$ is the total attenuation, which can be written as }\n \\begin{equation}\n l(d_0,f) = a_1(d_0,f) \\cdot a_2(d_0,f)\n\\end{equation} \n\\noindent \\saim{and}\n\\begin{equation}\n l^{'}(d_j,f) = r^{2}\\, a_1(d_j,f) \\cdot a_2(d_j,f)\n\\end{equation} \n\\noindent \\saim{The delay parameters in Equation 6 affect MAC decisions such as the modulation and coding selection module and the antenna beam-steering module; estimating these delay parameters can therefore help the MAC select, or switch to, the path that gives the lowest attenuation for the link. 
The presence of both NLOS and LOS components also offers an alternative in case of link outage: if the LOS path is blocked, an NLOS path can be used as an alternate route.}", "id": "8a32d4be-a9b7-43a0-93b3-b788d213a803", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "4734ed01-d547-4bff-ae71-d408e63ad680", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Design issues and considerations for Terahertz MAC protocols" ], [ "subsection", "Feature of Terahertz band communication related to Terahertz MAC protocol design" ], [ "subsubsection", "Multi-path" ] ], "subsections": [], "title": "Multi-path" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec-req:trwindow}\nThe path losses which occur in Terahertz wave communication give the band a frequency-selective behavior, in which some chunks of spectrum can provide higher usable bandwidth owing to lower losses. The Terahertz windows used for communication depend on many parameters, such as the communication range and technology requirements. The distance-dependent bandwidth is given by~:\n\\begin{equation}\nB_{3dB}(d)=\\{f|\\frac{a_{1}(f,d)a_{2}(f,d)}{N(f,d)}\\geq \\frac{a_{1}(f_0,d)a_{2}(f_0,d)}{N(f_0,d)}-3~\\mathrm{dB} \\}\n\\end{equation}\n\\noindent \\saim{where $f_{0}$ is the central frequency, $a_{1}$ is the spreading loss, $a_{2}$ is the absorption loss, and $N(f,d)$ is the total molecular noise.}\nTypically, four Terahertz windows can be exploited within the band $[0.1-1]$~THz for a communication range of $1-10$~m. The optimal compact window, with low attenuation and high bandwidth, is the one centered around $0.3$~THz. The $300$~GHz window is characterized by an available bandwidth of $69.12$~GHz, subdivided into separate channels or sub-bands. 
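To make the distance dependence of the usable transmission windows concrete, the following sketch picks, for a given range and loss budget, the widest window whose total loss (free-space spreading plus Beer-Lambert absorption, both in dB) still fits the budget. The window list and absorption coefficients are illustrative assumptions for the sketch only, not measured values:

```python
import math

C = 3e8  # speed of light (m/s)

# Illustrative THz windows: centre frequency (GHz) -> (usable bandwidth in
# GHz, molecular absorption coefficient in 1/m).  Assumed values for this
# sketch -- real coefficients come from spectroscopic databases.
WINDOWS = {
    100: (10.0, 0.0005),
    200: (20.0, 0.0030),
    300: (69.12, 0.0010),
    400: (30.0, 0.0100),
}

def total_loss_db(f_ghz, k_abs, d_m):
    """Spreading loss 20*log10(4*pi*d*f/c) plus absorption 10*k*d*log10(e)."""
    spread = 20 * math.log10(4 * math.pi * d_m * f_ghz * 1e9 / C)
    absorb = 10 * k_abs * d_m * math.log10(math.e)  # Beer-Lambert in dB
    return spread + absorb

def pick_window(d_m, loss_budget_db):
    """Widest window whose total path loss fits the link budget, else None."""
    feasible = [(bw, f) for f, (bw, k) in WINDOWS.items()
                if total_loss_db(f, k, d_m) <= loss_budget_db]
    return max(feasible)[1] if feasible else None
```

With these toy numbers, a 100 dB budget selects the wide 300 GHz window at 1 m but falls back to a narrower, lower-frequency window at 10 m, mirroring the distance-based window selection discussed above.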
The supported channels for Terahertz communication in the frequency range from $F_{min}=252.72$ GHz to $F_{max}=321.84$ GHz were proposed by the IEEE 802.15.3d wireless personal area network (WPAN) working group and are summarized in Table~\\ref{tab:sub-bands}~. \\blue{In~, transmission windows are selected based on the distance between the nodes. This is due to the higher attenuation in the channel impulse response as a result of molecular absorption at longer distances.} \n\\begin{table}[htbp!]\n\t\\centering\n\t\\caption{Bandwidth and maximum achievable data rate for sub-bands of the 0.3 THz window for a single carrier using high-order modulation~\\protect.}\n\t\\label{tab:sub-bands}\n\t\\begin{tabular}{|c||c|c|}\n\t\t\\hline\n\t\t Bandwidth (GHz)& Index range & Data Rate (Gbps)\\\\\n\t\t\\hline\n\t\t\\hline\n\t\t $2.16$&$1-32$ & 9.86 \\\\\n\t\t\\hline\n\t\t $4.32$&$33-48$ & 19.71 \\\\\n\t\t\\hline\n\t\t $8.64$&$49-56$ & 39.42 \\\\\n\t\t\\hline\n\t\t $12.96$&$57-61$ & 59.14 \\\\\n\t\t\\hline\n\t\t $17.28$&$62-65$ & 78.85 \\\\\n\t\t\\hline\n\t\t $25.92$&$66-67$ & 118.27 \\\\\n\t\t\\hline\n\t\t $51.84$&$68$ & 236.54\\\\\n\t\t\\hline\n\t\t $69.12$&$69$ & 315.39\\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{table}", "id": "683a5eb8-f050-48c7-bdfa-f489f5da75d0", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "4734ed01-d547-4bff-ae71-d408e63ad680", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Design issues and considerations for Terahertz MAC protocols" ], [ "subsection", "Feature of Terahertz band communication related to Terahertz MAC protocol design" ], [ "subsubsection", "Terahertz transmission windows" ] ], "subsections": [], "title": "Terahertz transmission windows" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec-req:design-issues}\nThe Terahertz band can provide high bandwidth for future high-speed networks. 
However, it possesses unique features, as discussed in the above section, which can affect communication performance. These features do not affect the MAC performance directly but strongly impact the physical layer design, antenna, and link capacity, which in turn affect the MAC-layer performance, throughput, and latency. The choice of physical layer functionalities, such as antenna technology, modulation and coding scheme, and waveform, can also affect the MAC layer design. MAC functionalities depend on channel characteristics, device technology, and physical layer features. There are several design issues related to physical and MAC layer features which should be considered for designing an efficient MAC protocol for different applications. These issues and considerations are highlighted in Table~\\ref{tab:summ-issue-features}, together with the Terahertz features and the decisions to be taken at the MAC layer. \n\\begin{table*}[htbp]\n \\centering\n \\tiny\n \\caption{Terahertz features and design issues considered in existing Terahertz MAC protocols.}\n \\begin{tabular}{|c|c|p{9.665em}|p{1.335em}p{1.335em}p{1.335em}p{1.335em}p{1.335em}|cccccccc|cccccccc|}\n\\cmidrule{9-24} \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c|}{} & \\multicolumn{16}{p{22.24em}|}{\\textbf{Terahertz MAC design issues and considerations}} \\\\\n\\cmidrule{4-24} \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c|}{} & \\multicolumn{5}{p{6.175em}|}{\\textbf{THz band features}} & \\multicolumn{8}{p{10.28em}|}{\\textbf{Physical layer \\& device related}} & \\multicolumn{8}{p{14.16em}|}{\\textbf{MAC layer related}} \\\\\n \\midrule\n \\multicolumn{1}{|p{3.28em}|}{\\textbf{Network scale}} & \\multicolumn{1}{p{2.11em}|}{\\textbf{Ref.}} & \\textbf{Application area} & \\multicolumn{1}{p{1.135em}|}{\\begin{sideways}\\textbf{Path loss}\\end{sideways}} & 
\\multicolumn{1}{p{1.135em}|}{\\begin{sideways}\\textbf{Noise}\\end{sideways}} & \\multicolumn{1}{p{1.05em}|}{\\begin{sideways}\\textbf{Scattering \\& reflections }\\end{sideways}} & \\multicolumn{1}{p{1.05em}|}{\\begin{sideways}\\textbf{Multi path}\\end{sideways}} & \\begin{sideways}\\textbf{Transmission windows}\\end{sideways} & \\multicolumn{1}{p{1.135em}|}{\\begin{sideways}\\textbf{Channel model}\\end{sideways}} & \\multicolumn{1}{p{1.035em}|}{\\begin{sideways}\\textbf{Antenna}\\end{sideways}} & \\multicolumn{1}{p{1.035em}|}{\\begin{sideways}\\textbf{Interference and noise}\\end{sideways}} & \\multicolumn{1}{p{1.035em}|}{\\begin{sideways}\\textbf{THz device}\\end{sideways}} & \\multicolumn{1}{p{1.035em}|}{\\begin{sideways}\\textbf{Waveform}\\end{sideways}} & \\multicolumn{1}{p{1.035em}|}{\\begin{sideways}\\textbf{Modulation and coding}\\end{sideways}} & \\multicolumn{1}{p{1.035em}|}{\\begin{sideways}\\textbf{Link budget}\\end{sideways}} & \\multicolumn{1}{p{1.035em}|}{\\begin{sideways}\\textbf{Channel capacity}\\end{sideways}} & \\multicolumn{1}{p{1.035em}|}{\\begin{sideways}\\textbf{Channel access}\\end{sideways}} & \\multicolumn{1}{p{1.81em}|}{\\begin{sideways}\\textbf{Neighbor discovery and link establishment}\\end{sideways}} & \\multicolumn{1}{p{2.01em}|}{\\begin{sideways}\\textbf{Mobility management and handovers}\\end{sideways}} & \\multicolumn{1}{p{2.01em}|}{\\begin{sideways}\\textbf{Collisions and multi user interference}\\end{sideways}} & \\multicolumn{1}{p{1.09em}|}{\\begin{sideways}\\textbf{Throughput and latency}\\end{sideways}} & \\multicolumn{1}{p{1.05em}|}{\\begin{sideways}\\textbf{Reliability}\\end{sideways}} & \\multicolumn{1}{p{1.05em}|}{\\begin{sideways}\\textbf{Coverage and connectivity}\\end{sideways}} & \\multicolumn{1}{p{1.05em}|}{\\begin{sideways}\\textbf{Energy considerations}\\end{sideways}} \\\\\n \\midrule\n \\multirow{25}[2]{*}{nano} & & Wireless Nanosensor\\newline{}Networks & \\checkmark & X & X & X & X & X & X & X & X & X & 
\\checkmark & X & X & X & X & X & X & \\checkmark & X & X & \\checkmark \\\\\n & & Wireless Nanosensor\\newline{}Networks & X & X & X & X & X & X & X & X & X & X & \\checkmark & X & X & X & X & X & \\checkmark & \\checkmark & \\checkmark & X & \\checkmark \\\\\n & & Software defined metamaterials & \\checkmark & X & \\checkmark & X & X & \\checkmark & X & \\checkmark & X & X & X & \\checkmark & X & X & X & X & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark \\\\\n & & SDM & X & X & X & X & X & X & X & \\checkmark & X & X & X & X & X & X & X & X & \\checkmark & X & X & X & \\checkmark \\\\\n & & Nanonetworks & \\checkmark & \\checkmark & X & X & X & \\checkmark & X & \\checkmark & X & X & \\checkmark & \\checkmark & X & X & X & X & X & \\checkmark & \\checkmark & X & \\checkmark \\\\\n & & Nanonetworks & X & X & X & X & X & X & X & X & X & X & X & X & X & \\checkmark & \\checkmark & X & \\checkmark & X & \\checkmark & X & \\checkmark \\\\\n & & Nanonetworks & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & \\checkmark \\\\\n & & Nanonetworks & X & X & X & X & X & X & X & X & X & X & \\checkmark & X & X & \\checkmark & \\checkmark & X & \\checkmark & \\checkmark & X & X & \\checkmark \\\\\n & & Nanonetworks & X & X & X & X & X & X & X & \\checkmark & X & X & \\checkmark & X & X & X & X & X & \\checkmark & \\checkmark & \\checkmark & X & \\checkmark \\\\\n & & Health monitoring & X & X & X & X & X & X & \\checkmark & X & \\checkmark & X & \\checkmark & X & X & \\checkmark & \\checkmark & \\checkmark & \\checkmark & X & X & X & \\checkmark \\\\\n & & In-body Nanonetworks & \\checkmark & \\checkmark & X & X & X & X & X & X & X & X & X & X & \\checkmark & X & X & X & X & X & X & X & \\checkmark \\\\\n & & Health monitoring & X & X & X & X & X & X & X & X & X & X & \\checkmark & X & X & \\checkmark & \\checkmark & \\checkmark & \\checkmark & X & X & X & \\checkmark \\\\\n & & Industrial monitoring & X & \\checkmark & X & 
X & X & \\checkmark & X & \\checkmark & X & X & X & X & \\checkmark & \\checkmark & X & X & X & X & X & X & \\checkmark \\\\\n & & Agriculture monitoring & \\checkmark & \\checkmark & X & X & X & \\checkmark & X & \\checkmark & X & X & X & X & X & \\checkmark & X & X & \\checkmark & \\checkmark & X & X & \\checkmark \\\\\n & & Wireless Nanosensor\\newline{}Networks & X & X & X & X & \\checkmark & X & \\checkmark & X & X & X & X & X & X & \\checkmark & X & X & X & X & X & X & X \\\\\n & & Nanonetworks & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & X & \\checkmark & X & X & X \\\\\n & & Nanonetworks & X & X & X & X & X & X & X & X & X & X & X & X & X & \\checkmark & X & X & \\checkmark & \\checkmark & \\checkmark & X & \\checkmark \\\\\n & & Health monitoring & \\multicolumn{1}{c}{X} & \\multicolumn{1}{c}{X} & \\multicolumn{1}{c}{X} & \\multicolumn{1}{c}{X} & \\multicolumn{1}{c|}{X} & X & X & X & X & X & X & X & X & X & X & X & X & \\checkmark & X & X & X \\\\\n & & Nano/Macro-networks & \\checkmark & \\checkmark & X & X & X & \\checkmark & \\checkmark & X & X & X & X & X & X & \\checkmark & X & X & \\checkmark & \\checkmark & X & X & \\checkmark \\\\\n & & Wireless Nanosensor\\newline{}Networks & X & \\checkmark & X & X & X & X & X & X & \\checkmark & X & X & X & X & \\checkmark & \\checkmark & X & \\checkmark & \\checkmark & X & X & \\checkmark \\\\\n & & Internet of Nano-Things & \\checkmark & \\checkmark & X & X & X & X & X & X & \\checkmark & X & X & X & \\checkmark & \\checkmark & \\checkmark & X & X & \\checkmark & \\checkmark & X & \\checkmark \\\\\n & & Health monitoring & \\checkmark & X & X & X & X & X & X & X & \\checkmark & X & X & X & X & \\checkmark & \\checkmark & X & \\checkmark & X & \\checkmark & X & \\checkmark \\\\\n & & Nanonetworks & \\checkmark & \\checkmark & X & \\checkmark & X & X & X & X & X & X & X & X & \\checkmark & \\checkmark & X & X & \\checkmark & \\checkmark & X & \\checkmark & \\checkmark \\\\\n & & Nanonetworks 
& \\checkmark & \\checkmark & X & X & X & X & X & \\checkmark & X & \\checkmark & \\checkmark & X & X & \\checkmark & \\checkmark & X & \\checkmark & \\checkmark & X & X & \\checkmark \\\\\n & & Wireless Nanosensor\\newline{}Networks & \\checkmark & \\checkmark & \\checkmark & X & X & \\checkmark & X & X & X & X & X & X & \\checkmark & \\checkmark & \\checkmark & X & \\checkmark & X & X & X & \\checkmark \\\\\n \\midrule\n \\multirow{10}[2]{*}{macro} & & THz communication network & \\checkmark & \\checkmark & X & X & \\checkmark & \\checkmark & \\checkmark & \\checkmark & X & \\checkmark & X & X & \\checkmark & X & \\checkmark & X & X & \\checkmark & X & \\checkmark & X \\\\\n & & THz Wireless Personal Area Networks & X & X & X & X & X & X & X & X & X & X & X & X & X & \\checkmark & \\checkmark & X & X & \\checkmark & \\checkmark & X & X \\\\\n & & THz Vehicular networks and small cells, SDN & \\checkmark & \\checkmark & X & X & X & \\checkmark & X & \\checkmark & X & X & X & X & \\checkmark & \\checkmark & \\checkmark & \\checkmark & X & \\checkmark & \\checkmark & \\checkmark & X \\\\\n & & THz communication network & X & X & X & X & X & \\checkmark & X & X & \\checkmark & X & \\checkmark & X & X & X & X & X & X & \\checkmark & \\checkmark & X & \\checkmark \\\\\n & & THz communication network & X & \\checkmark & X & X & X & \\checkmark & X & \\checkmark & X & X & X & X & X & X & X & X & X & \\checkmark & X & \\checkmark & \\checkmark \\\\\n & & THz Wireless Personal Area Networks & X & X & X & X & X & X & X & X & X & X & X & X & X & \\checkmark & \\checkmark & X & X & \\checkmark & \\checkmark & X & X \\\\\n & & THz communication network & X & \\checkmark & X & X & X & \\checkmark & \\checkmark & \\checkmark & X & X & X & X & X & X & \\checkmark & X & \\checkmark & \\checkmark & \\checkmark & \\checkmark & X \\\\\n & & THz Vehicular network & X & X & X & X & X & \\checkmark & X & X & X & \\checkmark & X & X & \\checkmark & X & X & \\checkmark & X & \\checkmark & 
\\checkmark & \\checkmark & X \\\\\n & & THz communication network, indoor networks & \\checkmark & \\checkmark & X & X & X & \\checkmark & \\checkmark & X & X & \\checkmark & X & X & X & X & X & X & X & \\checkmark & X & X & X \\\\\n & & THz communication network, indoor networks & \\checkmark & \\checkmark & \\checkmark & X & X & \\checkmark & \\checkmark & X & X & X & \\checkmark & X & X & \\checkmark & \\checkmark & X & X & \\checkmark & \\checkmark & X & \\checkmark \\\\\n \\bottomrule\n \\end{tabular}\n \\label{tab:summ-issue-features}\n\\end{table*}", "id": "0bb9b7d3-b4da-4d86-aa03-d676321bfecb", "level": "subsection", "origin_cites_number": 0, "parent_id": "e087585c-1918-49f8-acdb-ad6a59eee45b", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Design issues and considerations for Terahertz MAC protocols" ], [ "subsection", "Design issues and considerations for Terahertz MAC protocols" ] ], "subsections": [ "57dabfd6-3b8f-4baa-b48d-58089a40fa29", "f9a0d365-b9d8-4092-931f-5812a722cbb9" ], "title": "Design issues and considerations for Terahertz MAC protocols" }, { "cite_extract_rate": 0, "cites": [], "content": "}\\label{sec-req:phylayer} \\hfill\n\\textit{Antenna technology: } The transmitted Terahertz signal undergoes several impairments due to propagation through the medium, ranging from free-space loss caused by the high frequencies to molecular absorption noise and scattering. To cope with this issue, high-gain antennas are required to strengthen the signal in one particular direction and to compensate for the losses~. In communication networks, many nodes try to access the shared channel; it will be challenging to serve all of them simultaneously if the antenna is assumed to be directional. 
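Since a directional antenna cannot serve all contending nodes at once, one simple MAC-side remedy is to serve them sequentially, steering a narrow beam toward one node per time slot. A minimal round-robin sketch (class and method names are hypothetical, and beam angles are assumed known from a prior neighbor-discovery phase):

```python
from collections import deque

class BeamScheduler:
    """Toy round-robin scheduler: one node (one beam direction) per slot."""

    def __init__(self, nodes):
        # nodes: mapping node_id -> beam steering angle in degrees.
        self.directions = dict(nodes)
        self.queue = deque(nodes)

    def next_slot(self):
        """Return (node, steering angle) to serve in the next time slot."""
        node = self.queue.popleft()
        self.queue.append(node)  # rotate so every node is served in turn
        return node, self.directions[node]

# Example: three neighbors at known beam angles.
sched = BeamScheduler({"n1": 30.0, "n2": 120.0, "n3": 250.0})
```

Real Terahertz MACs would combine such a slot schedule with the synchronization and fast beam-switching constraints discussed below; this sketch only illustrates the time-slotted, one-beam-at-a-time access pattern.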
\n The antenna technology, endowed with fast beam-switching capability, can determine the way nodes access the shared channel using narrow beams, with each node granted access to the channel for a given time slot assigned by the MAC layer~. The MAC layer should include an antenna steering module to rapidly steer beams toward the receiver. For example, the switching can be performed at the pulse or packet level. However, the MAC and antennas of different nodes should be well synchronized in order to reduce errors and delays. With an optimized antenna gain, it is possible to reach a high data rate and good signal quality. Massive MIMO antennas are also envisioned for Terahertz applications, \\saim{which} can enhance MAC performance by increasing the data throughput and by serving more nodes in the network using spatial multiplexing techniques~, where the number of small antennas can reach 1024. In macro-scale communication\\blue{,} this remains an open challenge requiring deeper investigation. \n\\saim{In nano communication networks, the inter-node distance is short; therefore, the antenna is assumed to be isotropic and non-complex.} Efforts to design terahertz antennas go back a few years; radiation is possible in this frequency band for antennas made of materials such as InGaAs and graphene, taking advantage of their chemical and electronic properties~. \nAntenna dimensions are on the order of micrometers at THz frequencies. A second issue to be considered is the choice of materials for antenna design and feeding. Some interesting results have been achieved in the nano-antenna industry~; power consumption, radiated power, operating band, and directivity are the main antenna properties~. Mutual coupling is also an important challenge to address, due to the ultra-dense integration of multi-band nanoantenna arrays. 
A frequency-selective approach is proposed in~ to reduce the coupling effects in multi-band ultra-massive MIMO systems.\n\\saim{To mitigate the high attenuation and the multipath problem, fast beam switching and steering techniques can be managed by the MAC; phased-array antennas are the best choice to meet this requirement. Phased arrays can improve the link budget and also increase fairness among users via beam switching and steering\\blue{~}. However\\blue{,} the technology is still developing towards reducing the switching time and increasing the number of antenna elements in order to increase the gain. Mathematically, the antenna gain of a uniform planar array at a specific angular direction $(\\theta,\\phi)$ captures both the beamforming and beam-steering functionalities and can be calculated by~:}\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=3.2in,height=2in]{figures/ant-MAC.png}\n\\caption{MAC scheduler with an antenna array. The figure shows the MAC scheduler module and how it is linked to the antenna system: if the current node $i$ needs to send data to node $j$, the MAC sends a command to the antenna system, coded to steer its beams toward node $j$, using a digital-to-analog module interfacing between the MAC and the physical layer. The beam-steering operation can be repeated in each frame period to schedule a new transmission. }\n\\label{fig:antenna_sched}\n\\end{figure}\n\\begin{equation}\n\\begin{split}\n G(\\theta,\\phi) & = G_{max} \\frac{\\sin(Ma(\\sin(\\theta)\\cos(\\phi)-\\nu_{0}))}{M \\sin(a(\\sin(\\theta)\\cos(\\phi)-\\nu_{0}))} \\\\ \n &\\qquad\\quad \\times \\frac{\\sin(Nb(\\sin(\\theta)\\sin(\\phi)-\\nu_{1}))}{N \\sin(b(\\sin(\\theta)\\sin(\\phi)-\\nu_{1}))} \n \\end{split}\n\\end{equation}\n\\noindent \\saim{where $a$ and $b$ are two parameters related to the vertical and horizontal separation between antenna elements, and are also a function of the Terahertz frequency. $\\nu_{0}$ and $\\nu_{1}$ are the horizontal and vertical steering parameters, respectively. 
The MAC layer should properly select $\\nu_{0}$ and $\\nu_{1}$ to establish a communication link with a node; $G_{max}$ denotes the maximum antenna gain. }\nThe Terahertz MAC scheduler maps traffic data to each antenna beam, as depicted in Figure~\\ref{fig:antenna_sched}; the diagram shows an example of the mapping between user data traffic and antenna beams. Based on traffic requirements, the MAC selects, for each transmission time interval $[(n-1)T, nT]$, a destination node and its associated beam for data transmission. Many scheduling algorithms can be used, such as round-robin, maximum-throughput, or minimum-delay algorithms. The beam-switching operation can be performed at the pulse, symbol, or frame level.\n\\textit{Interference model and SINR: } Interference exists in the Terahertz communication system; it can be generated by nodes using the same frequency band at the same time, or by the signal itself. Interference can also be caused by reflected and scattered signals~, for either fixed or mobile users. Research on interference modeling is still insufficient, as the signal-to-interference ratio depends mainly on the channel model. The interference level affects signal quality and leads to a higher bit error rate (BER). The MAC design should be aware of the interference level, by enhancing node synchronization or by adopting channelization methods to access the channel. Access methods define the way each node transmits its data, so an elaborated interference model will greatly help in selecting the right access technique.\n\\textit{Link budget and capacity: } A communication link is characterized by its link budget and system capacity. The link budget includes the transmit power and all gains and losses. Link quality is good when the link budget value is higher than the receiver signal-to-noise ratio threshold. \\saim{This} threshold characterizes the receiver device as well as the bandwidth. 
The link budget accounts for all the power gains and losses present in a Terahertz link; it depends on many factors such as antenna gains, atmospheric attenuation, available bandwidth, distance, temperature, total noise, and the THz source output power. The link budget should \\saim{be} higher than a fixed threshold \\saim{which} mainly depends on device technology, to guarantee reliable Terahertz communication. Enhancing the link budget increases reachability and reduces data loss. It is used as a reference metric to determine the link range. The Shannon capacity is derived by maximizing the mutual information between sender and receiver for a particular channel model; it indicates how much data can be transmitted for a given bandwidth and SINR in different scenarios.\nMost studies on Terahertz capacity analysis, derived for nanoscale Terahertz networks~ and macro-scale networks~, are based on theoretical assumptions and a deterministic propagation channel. In realistic scenarios, additional properties of the Terahertz wave should be considered, such as scattering, dispersion, atmospheric factors, and a Terahertz statistical model.\n Capacity increases with bandwidth and SINR; as a result, MAC layer design should take into consideration the achievable channel capacity for different channel models, as throughput is bounded by the maximum data rate. \nThe MAC layer should be aware of link quality, i.e., the link budget and capacity; frames should be protected against errors\\blue{. M}oreover, the frame length and transmission duration should be tuned. For MAC design, link requirements should be considered; for example, in the data center use case, the data rate can exceed $100$~Gbps, so efficient tracking of link fluctuations is required.\nKnowledge of the channel capacity and link budget enhances MAC awareness of the channel and the physical layer via frame optimization and transmission scheduling. 
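The link-budget and capacity checks described here can be sketched in a few lines: received power from transmit power, antenna gains, free-space spreading and Beer-Lambert absorption, a feasibility test against the receiver sensitivity threshold, and the Shannon bound on the achievable rate. A simplified illustration under free-space assumptions, not the survey's exact model:

```python
import math

def received_power_dbm(pt_dbm, gt_db, gr_db, f_ghz, d_m, k_abs=0.0):
    """Link budget: Pt + gains - free-space loss - molecular absorption (dB)."""
    fspl = 20 * math.log10(4 * math.pi * d_m * f_ghz * 1e9 / 3e8)
    absorption = 10 * k_abs * d_m * math.log10(math.e)  # Beer-Lambert in dB
    return pt_dbm + gt_db + gr_db - fspl - absorption

def shannon_capacity_gbps(bw_ghz, snr_db):
    """Shannon bound B*log2(1+SNR): an upper limit on the achievable rate."""
    return bw_ghz * math.log2(1 + 10 ** (snr_db / 10))

def link_ok(pr_dbm, sensitivity_dbm):
    """MAC-level feasibility check against the receiver threshold."""
    return pr_dbm >= sensitivity_dbm
```

For instance, a 300 GHz link at 10 m with 0 dBm output and 25 dBi antennas on both ends lands near -52 dBm of received power; whether that suffices depends on the (device-specific) sensitivity threshold passed to `link_ok`.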
In~, the impact of the outdoor channel on fixed link capacity is studied at a 1 km distance. The channel capacity and BER performance for data transmission are analyzed in~.\n\\begin{comment}\n\\textit{Terahertz Channel model: } Terahertz channel model affects physical layer choices, for example, antenna pattern shape and device design. It affects also the MAC layer in terms of transmission time choice and transmission window selection. Terahertz users are assigned short frame size in harsh propagation conditions, therefore, MAC should sense channel fluctuations to transmit data with less error and high throughput. Terahertz channel model is related to the propagation environment, for instance, modeling of indoor Terahertz channel within the data center using ray-tracing simulations is proposed in~ where NLOS and LOS co-exist, Terahertz statistical model for short-range THz D2D communication for randomly distributed scatters in the area between transmitter and receiver is described in~. In-body communication, the nano-network consists of autonomous tiny devices with energy harvesting capabilities, the propagation channel is also different from macroscale communication, as Terahertz medium will be heterogeneous, a channel model is proposed for this particular communication channel~. Performance evaluation for indoor as well as outdoor Terahertz channel was evaluated in~ for reflections.\n A channel measurement study was carried out for LOS and NLOS links in~. The indoor links are observed as robust even with few NLOS reflections, whereas the scattering losses due to surface roughness are relatively low due to diffractive effects. A good BER performance can be achieved by tuning the receiver direction and high-quality links can also be established with an obstruction in the beam path. 
The results were taken with data rate of only 1 Gbps, further measurements for higher data rates still required with multipath effect and ray tracing phenomenon because with increasing data rates requirements the path loss can also increase. Mainly, BER performance is only moderately affected within indoor and outdoor links. However, due to presence of multipath and absorption loss separate MAC mechanisms are required for indoor and outdoor applications. The indoor applications are most likely to be affected by walls, ceiling and other materials, which can cause multipath and scattering effects and can result in wrong estimation of received information about another user's location or transferred data and can also result in hidden node problem. However, the outdoor scenarios can be affected due to external environmental features like rain and humidity, obstacles like trees and buildings, and distances. Although, using reflector NLOS beam can be used, but for higher performance highly directional antennas with narrow beams are mostly required. Whereas, in indoor scenarios, due to short to medium range NLOS communication can also be used for sufficient performance, in which NLOS beam communication can enhance the network performance. In~, an indoor scenario (WPAN) is discussed with implementation phases as Ethernet extension, handovers within multiple Terahertz plugs, interference management and integration with HetNets for B5G networks. 
A three-dimensional time-varying Terahertz channel model using Kalman filtering is presented in~, for modeling and tracking access point to user equipment link in an office scenario to analyze the system capacity.\n\\end{comment}\n \\begin{comment}\n \\begin{figure*}[htbp]\n\\centering\n\\includegraphics[width=6in,height=3.4in]{figures/decision.jpg}\n\\caption{\\textcolor{red}{\\sout{MAC layer decisions to enhance the communication performance, like (a) Modulation scheme selection: MAC can select a modulation scheme suitable to the link requirement, (b) Frequency and bandwidth selection: it select from available frequencies for transmission, and an associated bandwidth to reduce interference and increase throughput, (c) Beam selection: MAC can direct a Terahertz antenna beam to establish a link between two nodes, (d) Power management: it can select the amount of power to transmit, in order to increase SINR in other nodes or reduce power consumption in the current node.}}}\n\\label{fig:decmac}\n\\end{figure*} \n\\end{comment}\n\\saim{\\textit{Modulation and coding: }} \\saim{The Terahertz channel is characterized by high free-space loss, molecular absorption, and noise, as well as by the limitations of transceiver capabilities. To mitigate the signal-quality issue, modulation and coding are the main features envisioned. Modulation guarantees an adaptive data rate over a fluctuating channel; high-order modulation with a low bit-error probability can increase the data rate~\\blue{. In}~\\blue{,} it is possible to reach $100$~Gbps using 16-QAM optical modulation with a BER of $10^{-3}$, using a UTC-PD source for heterodyning; coding helps to reduce errors. The MAC layer can be designed to support variable throughputs and to fit the frame length to channel conditions via information from the physical layer. More work should be devoted to modulation and coding techniques to enhance the main MAC layer performance and to design physical-layer-aware MAC protocols. 
For nanoscale communication, basic modulation techniques were used, such as On-Off Keying (OOK) and pulse position modulation (PPM) together with femtosecond pulses~. For improved spectral efficiency, a spatial modulation technique is proposed in~, using densely packed arrays of nanoantenna sub-arrays while achieving acceptable beamforming performance.}\n\saim{The selection of a modulation scheme depends on the Terahertz device's capabilities, such as output power, bandwidth, and signal sensitivity. For short-range communication such as nano communication, a larger bandwidth is available, and low-complexity schemes such as OOK~ and QPSK~ can be used. For macro communication, where the range is longer, the Terahertz gap is split into functional windows and transmission should be performed using carriers; high-order modulation such as 16-QAM~ can be deployed along with a directional antenna. Channel coding helps to detect and correct errors at the receiving side, hence reducing data loss, but it introduces computational complexity. For nano communication, therefore, simple coding schemes such as Hamming codes can be implemented; for macro communication, Reed-Solomon and Low-Density Parity-Check schemes can be added~.}\n\textit{First generation Terahertz devices: } Due to high attenuation, devices with good performance, i.e., higher output power and low noise levels, are required to optimize the link budget and increase the link data rate. Terahertz transceivers based on electronics have been developed, for example, Silicon-Germanium (SiGe) based heterojunction bipolar transistors and Gallium-Nitride (GaN) based monolithic mmWave integrated circuits (MMIC), as well as photonics-based transceivers such as quantum cascade lasers (QCL) for high-frequency applications.
\n Resonant Tunnelling Diodes (RTD) based on InGaAs/AlAs are also promising for Terahertz applications~; RTDs convert mm-waves to Terahertz waves.\n Two kinds of devices perform conversion to the Terahertz signal: electronic devices such as E-RTDs and photonic devices such as Uni-Travelling Carrier photo-diodes (UTC-PD). Work on both technologies, photonic and electronic, is in progress to choose the best one for each scenario based on the required data rate, distance, and sensitivity. \n For some applications, replacing high-capacity wires, such as optical links, with Terahertz bridges is promising, as it adds flexibility to the network and reduces deployment cost. \n Terahertz devices are responsible for signal emission and reception. They affect link quality metrics such as the link budget and the received power spectral density, and thus also the BER and outage probability. Therefore, to design a MAC protocol for high data rates, devices with low system noise levels and variable output power are required while maintaining the SINR in the network. The network SINR results from the devices' transmitted power, the channel, and the system noise. A MAC layer aware of device technology capabilities can monitor power emission, antenna pattern shape, and beam orientation.
Antenna technology and device performance can also enhance the node discovery functionality and reduce the delay caused by this operation.
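To make the link-budget bookkeeping above concrete, the following minimal sketch computes received power and SINR from transmit power, antenna gains, and free-space spreading loss. All numerical values are hypothetical and molecular absorption is deliberately omitted; this illustrates the kind of accounting a device-aware MAC layer could perform, not a model taken from the cited literature.

```python
import math

def fspl_db(freq_hz, dist_m):
    """Free-space spreading loss in dB (Friis), ignoring molecular absorption."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * freq_hz * dist_m / c)

def rx_power_dbm(tx_dbm, gtx_db, grx_db, freq_hz, dist_m):
    """Link budget: received power = Ptx + Gtx + Grx - path loss."""
    return tx_dbm + gtx_db + grx_db - fspl_db(freq_hz, dist_m)

def sinr_db(prx_dbm, noise_dbm, interf_dbm=None):
    """SINR from noise and optional aggregate interference (powers in dBm)."""
    n_mw = 10 ** (noise_dbm / 10)
    i_mw = 10 ** (interf_dbm / 10) if interf_dbm is not None else 0.0
    p_mw = 10 ** (prx_dbm / 10)
    return 10 * math.log10(p_mw / (n_mw + i_mw))

# Hypothetical 0.3 THz link over 1 m with 25 dBi directional antennas
prx = rx_power_dbm(tx_dbm=0, gtx_db=25, grx_db=25, freq_hz=0.3e12, dist_m=1.0)
print(round(prx, 1), round(sinr_db(prx, noise_dbm=-80), 1))
```

With these assumed numbers, the sketch yields roughly $-32$~dBm received power and about $48$~dB SINR against a $-80$~dBm noise floor, illustrating why high-gain directional antennas are needed to close Terahertz links.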
In centralized networks, a central controller is required to manage beam alignment with scheduled transmissions, while for ad hoc networks TDMA-based approaches can be used to avoid collisions and manage beam alignment, so that each node knows when to transmit, to which node, and in which direction. A shared channel can be used among different nodes, but interference may increase. Alternatively, a separate band can be used to provide synchronization and coordination among nodes for channel access. }\n\textit{Neighbor discovery and Link establishment: }\label{subsubsec-req:design-mac-nbrdisc} For any communication to occur, nodes must first be discovered and a link established, which needs to be considered when designing an efficient MAC protocol for nano and macro-scale networks. For nanoscale networks, due to constrained energy storage and generation, mechanisms with low message overhead are required. For macro-scale networks, antenna directionality and mobility add extra challenges to establishing a link between nodes, requiring a node to track and locate other nodes for stable link maintenance. For seamless communication, a stable link is required at all times. Efficient beam steering mechanisms are needed to reduce the handshake time and the overall neighbor discovery time in ad hoc networks. Further, due to short range and mobility, frequent link association and handovers may be required, which should also be supported in the MAC protocol design. \n\saim{Node discovery occurs before the communication phase, in which each node needs to inform its neighbors about its availability and identity. The discovery phase is constrained by the lack of information about node positions. In a nanonetwork, a node can send identity messages to its neighbors, and any node receiving a discovery message adds the sender to its list.
Generally, in dynamic networks, where new nodes can enter or leave the neighboring subset, the discovery procedure must run more frequently. In static networks, the discovery procedure can be neglected, since each node knows its neighbors from the deployment phase; discovery messages then only need to be exchanged after critical events, for example when a node goes out of service or a new node is introduced. Applying node discovery in a macro-scale network is challenging if mobility is added to the system: synchronized beam turning can be applied during this procedure, and a node receives the discovery message once beams are aligned, but this is time- and energy-consuming, so optimizations should be applied to reduce the time and energy required to discover neighbors. }\n\textit{Mobility management and handovers: } Mobility and coverage are two mutually correlated concepts for a mobile Terahertz system~, in which radio coverage should be guaranteed to decrease link outage probability. The MAC layer should support mobility management functionality to guarantee service continuity. Handover is the mobile-network concept of changing the serving base station without interrupting the traffic flow. \n\saim{One issue arising from node mobility is THz localization. In~, the authors propose THz RFID techniques for device localization using specific channel modeling, with the node's localization correlated to the handover procedure. For example, a handover can be triggered at positions where the received power from the serving node is weak. In THz networks, localization is more feasible for nanonetworks, but becomes more challenging for macro networks where directional antennas are deployed. Solving the localization problem can help to accelerate handover execution.
}\n\\textit{Collision avoidance and interference management: } Due to high bandwidth availability and antenna directionality, it is unlikely that a collision might occur in Terahertz communication. However, it can occur when two node pairs beam directions crosses each other and perform frequent and long transmission. The multi-user interference can also occur in a scenario with large number of nodes with mobility. Therefore, a collision detection and avoidance mechanism should be considered while designing an efficient Terahertz MAC protocol. New interference models are required to capture the effect of Terahertz band features and multi user interference~. The directional communication can decrease the multi-user interference but it requires tight synchronization between Tx and Rx. Further, reduced channel code weights can also result in lower channel error probability and can also help in avoiding the multi-user interference and molecular absorption~. \n\\textit{Reliability: } Most of wireless systems require a reliable communication, where the degree of reliability defers from one application to another. The problem becomes more complicated when the channel conditions changes with time and causing time varying absorption~. For Terahertz systems, mainly low frame loss and high throughput are required. In Terahertz systems, the eror control module is mainly responsible for frame protection and retransmission to reduce frame losses where frame error depends on channel model as well as on frame length. Error control module is required especially for harsh channel conditions such as outdoor channel with dynamic conditions~. For nano sensor network, a cross optimization method is proposed in~, to adapt with the frame transmission and size with the channel conditions. Further, for efficient usage\\blue{,} reliable wireless links and beam tracking should be considered in a MAC design. 
\n\\textit{Throughput and latency: } Terahertz band is endowed with large bandwidth, due to which it is possible to reach a throughput exceeding 100 Gbps. For some applications like Terahertz data center scenario, bandwidth is shared between many nodes and therefore a MAC should support and guarantee high data rate and low delay. Fast scheduling algorithms, appropriate MAC techniques and buffering should be implemented to meet application specific QoS requirements.\n\\saim{\\textit{Energy efficiency and harvesting:} Energy efficiency means using less energy to achieve the required performance. For some scenarios\\blue{,} it is hard to provide energy in a continuous way such as devices using low capacity battery, mobile nodes. Therefore, energy efficiency becomes a priority for certain Terahertz applications such as nano-communication~, body area network, fronthaul communication and THz wireless sensor network used in biomedical and military fields~. To cope with the lack of energy provision, techniques such as energy harvesting, lower modulation order techniques can be used to extend the battery lifecycle. It is also possible to design new data link layer techniques to increase energy efficiency~. For applications such as backhauling, Datacenter and information broadcast where the source of energy is always available, energy efficiency is less prioritized than the previous applications. In Table~\\ref{tab:thz-app-chal}, energy efficiency requirement for different applications is highlighted based on the priority of energy-efficient techniques requirement.}\nIn some applications, the power of nodes is a limiting factor to transmit continuously, a power management module should be implemented on MAC to reduce power consumption without degradation of the system quality of service. For example\\blue{,} switching from active to idle state if the node has no data to transmit and applying the power control strategy depending on channel conditions and target QoS. 
The second alternative to save power and extend battery life is to harvest and manage energy; for example, for nano-sensors, energy harvesting is applied to keep nodes active longer. Due to the low storage capacity of nanodevices, the tradeoff between energy harvesting and utilization must be considered for efficient operation~. \n\textit{Coverage and Connectivity:} Terahertz communication is characterized by short-range connectivity and high available bandwidth. The coverage or range can be optimized using directional antennas, enhanced Terahertz devices, high output power, and optimized sensitivity. The MAC layer can also contribute to coverage and connectivity enhancement through data link relaying, path diversity, and spectrum switching. For example, in a vehicular Terahertz network, a mobile node can coordinate with more than one node; in a nanosensor network, each node can reach nodes outside its short communication range using relaying capabilities. Path diversity is an alternative solution to increase connectivity when the LOS is temporarily unavailable. When no direct LOS link exists, reflectors can also be used to reach distant nodes and support seamless communication with low delay~.
A coverage and achievable-rate performance analysis for a multi-user Terahertz system with a single frequency is presented in~.
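The energy-harvesting tradeoff discussed above, i.e., balancing harvested energy against transmission cost on a storage-constrained nanodevice, can be illustrated with a toy battery model. All rates, costs, and capacities below are hypothetical; the point is only the qualitative behavior.

```python
def simulate_duty_cycle(harvest_per_slot, tx_cost, battery_cap, slots, tx_every):
    """Toy energy model for a harvesting nano-node: the node attempts a
    transmission every `tx_every` slots and otherwise idles; a transmission
    is skipped (counted as lost) if the battery cannot cover its cost."""
    battery, sent, skipped = battery_cap, 0, 0
    for t in range(slots):
        battery = min(battery_cap, battery + harvest_per_slot)  # capped storage
        if t % tx_every == 0:
            if battery >= tx_cost:
                battery -= tx_cost
                sent += 1
            else:
                skipped += 1
    return sent, skipped

# Transmitting faster than the harvest rate can sustain leads to skipped frames:
ok = simulate_duty_cycle(harvest_per_slot=1.0, tx_cost=4.0, battery_cap=10.0,
                         slots=100, tx_every=5)
starved = simulate_duty_cycle(harvest_per_slot=1.0, tx_cost=4.0, battery_cap=10.0,
                              slots=100, tx_every=2)
print(ok, starved)
```

With the assumed rates, transmitting every 5 slots is sustainable (no skipped frames), while transmitting every 2 slots outpaces harvesting and frames start being skipped; a harvesting-aware MAC would pick the duty cycle accordingly.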
Selecting high-order modulation can increase throughput, while low-order modulation is required to reduce the bit error rate.\n \item {\bf Power management:} This module selects the appropriate power to increase coverage as well as to reduce interference when nodes coordinate with each other. Monitoring nodes using power control can reduce interference while maintaining an acceptable energy consumption. The power management module can adapt to its environment; for instance, the mean consumed power in a humid environment will differ from that in a dry one.\n \item {\bf Beam steering:} When using a directional antenna for Terahertz communication, beams should be steered appropriately towards the receiving node. The selection of the beam orientation coordinates can be performed at the MAC layer based on physical layer beam parameters such as the phases between antenna elements. \n \end{itemize}\n Implementing the aforementioned modules in the MAC layer will increase awareness of the physical layer and channel fluctuations, and adapt the Terahertz link to the upper layers. \saim{Different} physical layer functionalities can be monitored at the MAC layer; for instance, it is possible to switch the modulation scheme from high-order 16-QAM to low-order QPSK to reduce the bit error rate, and from QPSK back to 16-QAM to increase data throughput when the channel condition is good, with the switching operation triggered by link quality statistics. The module responsible for beam steering can also be included in the MAC layer, for example using 3 bits to address 8 beams and establish links with 8 neighbors. Monitoring of frequencies and bandwidth can also be included in the MAC layer, for a multi-band wideband antenna, to reduce interference and increase data throughput. Finally, the power management module monitors the transmitter output power to enhance the link budget if the link breaks down or the channel attenuation increases.
The power management module can take decisions based on measurements collected from the physical layer and from other nodes to control the signal-to-interference ratio. The modulation scheme, beam orientation, frequency, and power can be updated at frame level based on statistics collected from the physical layer as well as reports from the networking layer.
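The 16-QAM/QPSK switching described above can be sketched as a small controller driven by link-quality statistics. The SNR thresholds and the hysteresis gap below are hypothetical, not from a cited protocol; the hysteresis prevents the scheme from oscillating when the measured SNR hovers near a single threshold.

```python
# Minimal sketch of a MAC-layer modulation switch driven by link-quality
# statistics. Thresholds are hypothetical; the up/down gap is the hysteresis.

class ModulationController:
    def __init__(self, up_db=20.0, down_db=15.0):
        assert up_db > down_db, "hysteresis requires up > down"
        self.up_db, self.down_db = up_db, down_db
        self.current = "QPSK"  # start with the robust low-order scheme

    def update(self, snr_db):
        """Pick a scheme from the measured SNR; called once per frame."""
        if self.current == "QPSK" and snr_db >= self.up_db:
            self.current = "16-QAM"    # good channel: raise throughput
        elif self.current == "16-QAM" and snr_db < self.down_db:
            self.current = "QPSK"      # degraded channel: lower BER
        return self.current

ctrl = ModulationController()
trace = [ctrl.update(s) for s in (10, 22, 18, 14, 25)]
print(trace)  # → ['QPSK', '16-QAM', '16-QAM', 'QPSK', '16-QAM']
```

Note that the SNR of 18 dB does not trigger a downgrade because it sits inside the hysteresis band; only dropping below 15 dB does.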
MAC protocols covering physical layer functionalities exist, as listed in Table~\ref{tab:summ-issue-features}, but they do not consider the antenna design or the channel, propagation, and interference models. \nFor macro-scale applications, each indoor and outdoor application has different requirements and therefore requires different MAC mechanisms. Due to the short-range constraint, the Terahertz band suits indoor applications like TLAN and TPAN, which involve mobility with communication over short distances. Scenarios like data centers involve static links between racks and thus require point-to-point/point-to-multipoint communication. These scenarios require different channel models, and scattering and multipath phenomena affect communication in different ways. Therefore, when designing an efficient Terahertz MAC protocol, the features and design issues mentioned in Table~\ref{tab:summ-issue-features} should be considered. To enhance the communication range, directional antennas should be used, which requires novel mechanisms for beam management and tracking with MIMO support, as well as reflectors to mitigate blockage and reach beyond one-hop distance. Static-point applications like the KIOSK downloading system need to support quick link establishment and reliability. These applications also require new mechanisms for Terahertz channel access and link establishment, especially when link establishment is frequent and node density is high. \nThe outdoor scenarios like vehicular communication, backhaul, and small cells are interesting, involving both mobile and static settings. However, the channel can be affected by environmental factors like rain, wind, humidity, and dryness. Therefore, new channel and propagation models are required, which should also incorporate blocking factors like trees, humans, and other physical equipment.
Massive MIMO can be used to relay information between cells or nearby networks. Adaptive beam management can be realized using cooperative massive MIMO and electronically steerable beams~. Further, due to the various environmental factors, interference mitigation techniques are required for outdoor applications.
Indoor and outdoor scenarios require different channel, propagation, and interference models, and they need to consider the different physical and MAC layer design issues discussed in this section. \saim{To strengthen the reflected and scattered signals, a metal reflector with good reflection properties can be embedded, and to reduce power absorption, the temperature and humidity of a particular indoor environment can be maintained at a certain level. For outdoor environments, however, novel mechanisms are required to overcome the effect of absorption loss.}
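A common first-order way to model the indoor/outdoor distinction discussed in this section is to split total path loss into Friis spreading loss plus a molecular absorption term that grows linearly with distance. The sketch below uses a placeholder absorption coefficient; real values come from spectroscopic databases and depend on frequency, temperature, and humidity.

```python
import math

C = 3e8  # speed of light, m/s

def spreading_loss_db(freq_hz, dist_m):
    """Free-space spreading loss (Friis) in dB."""
    return 20 * math.log10(4 * math.pi * freq_hz * dist_m / C)

def absorption_loss_db(k_per_m, dist_m):
    """Molecular absorption loss in dB for absorption coefficient k (1/m):
    transmittance exp(-k*d) expressed in dB."""
    return 10 * k_per_m * dist_m * math.log10(math.e)

def total_path_loss_db(freq_hz, dist_m, k_per_m):
    return spreading_loss_db(freq_hz, dist_m) + absorption_loss_db(k_per_m, dist_m)

# Illustrative only: k = 0.01 /m is a placeholder absorption coefficient.
# Absorption is negligible at 1 m but grows linearly with distance, while
# spreading loss grows only logarithmically.
for d in (1, 10, 100):
    print(d, round(total_path_loss_db(0.3e12, d, k_per_m=0.01), 1))
```

Because the absorption term scales linearly in $d$ while spreading loss scales logarithmically, absorption dominates long-range outdoor links but is often negligible over indoor ranges of a few meters, which is one motivation for the separate indoor and outdoor MAC mechanisms discussed above.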
\n\\begin {figure*}\n\\centering\n\\begin{adjustbox}{width=\\linewidth}\n\\tikzstyle{block} = [rectangle, draw, text width=4cm, text centered, rounded corners, minimum height=5em,fill=blue!20]\n\\tikzstyle{line} = [draw,thick, -latex']\n\\tikzstyle{cloud} = [draw, ellipse, text width=4.5cm, text centered]\n\\tikzstyle{edge from parent}=[->,thick,draw]\n\\begin{tikzpicture}[auto,edge from parent fork down]\n\\tikzstyle{level 1}=[sibling distance=60mm,level distance=18ex]\n\\tikzstyle{level 2}=[sibling distance=25mm,level distance=15ex]\n\\tikzstyle{level 3}=[sibling distance=30mm,level distance=34ex]\n\\tikzstyle{level 4}=[sibling distance=35mm,level distance=34ex]\n\\node [block,text width=10cm,minimum height=4em,,fill=black!30] (cst) {\\textbf{THz MAC Protocols for differnt network topologies \\\\ (Sect.\\ref{sec:topologies}) } }\n{\nchild{node [block,text width=4cm,fill=green!20] (opm) {\\textbf{CENTRALIZED \\\\ (Sect.\\ref{subsec-topo:centralized}) } }\n \t\tchild{node [block,text width=3.2cm,fill=orange!20,xshift=-0.65cm, yshift=-1.2cm] (pmt) { \\\\ \n \t\t{\\bf Nanoscale networks \\\\ (Sect.\\ref{subsubsec-topo:centralized-nano}) } \\\\\n \t\tLL-Modelling \\\\\n\t\tCSMA-MAC \\\\\n\t\tDESIGN-WNSN \\\\\n\t\tTCN \\\\\n\t\tCEH-TDMA \\\\\n\t\tSSA-MAC \\\\\n \t }} \n \t child{node [block,text width=3.2cm,fill=orange!20,xshift=0.7cm, yshift=-1.2cm] (pmt) { \\\\ \n \t\t{\\bf Macro scale networks \\\\ (Sect.\\ref{subsubsec-topo:centralized-macro}) } \\\\ \n \t\tMA-ADM \\\\\n \t\tIHTLD-MAC \\\\ \n \t\tMAC-TUDWN \\\\ \n \t\tTRPLE \\\\\n \t\tHLMAC \\\\\n\t MAC-TC \n \t\t}} \n }\nchild{node [block,text width=3.6cm,fill=green!20] (rnm) {\\textbf{CLUSTERED \\\\ (Sect.\\ref{subsec-topo:clustered}) } }\n\t\tchild{node [block,text width=3cm,fill=orange!20,xshift=-0.4cm, yshift = -1.0cm] (pmt) { \\\\ \n\t\t{\\bf Nanoscale networks \\\\ (Sect.\\ref{subsubsec-topo:clustered-nano}) } \\\\\n\t\tES-Aware \\\\\n\t\tEEWNSN \\\\\n\t\tEESR-MAC \\\\\n\t\tDYNAMIC-FH \n\t\t}}\n}\nchild{node 
[block,text width=3.8cm,fill=green!20] (opm) {\\textbf{DISTRIBUTED \\\\ (Sect.\\ref{subsec-topo:distributed}) } }\n\t\tchild{node [block,text width=3.4cm,fill=orange!20,xshift=-1.4cm, yshift = -2.7cm] (pmt) { \\\\\n \t\t{\\bf Nanoscale networks \\\\ (Sect.\\ref{subsubsec-topo:distributed-nano}) } \\\\\n\t\tPHLAME \\\\\n\t\tDRIH-MAC \\\\\n\t\tRIH-MAC \\\\\n\t\tDMDS \\\\\n\t\t2-state MAC \\\\\n\t\tG-MAC \\\\\n\t\tRBMP \\\\\n\t\tAPIS \\\\\n\t\tMGDI \\\\\n\t\tNS-MAC \\\\\n\t\tMDP \\\\\n\t\tTSN \\\\\n\t\tSMART-MAC \\\\\n\t\tSSA-MAC \\\\\n\t\t}}\n\t\tchild{node [block,text width=3.6cm,fill=orange!20,xshift=0.2cm, yshift = -2.0cm] (pmt) { \\\\\n\t\t{\\bf Macro scale networks \\\\ (Sect.\\ref{subsubsec-topo:distributed-macro}) } \\\\\n\t\t\\textbf{Terahertz networks:} \\\\\n\t\t\\begin{itemize}[label={}]\n\t\t\\item LL-Synch \\\\\n\t\t\\item ISCT \\\\\n\t\t\\item TAB-MAC \\\\\n\t\t\\item MRA-MAC \\\\\n\t\t\\item OPT-RS \\\\\n\t\t\\end{itemize}\n\t\t\\hfill\\\\\n\t\t\\textbf{Vehicular network:} \\\\\n\t\t\\begin{itemize}[label={}]\n\t\t\\item ATLR \\\\\t\n\t\t\\item SDN-CONTR \\\\\n\t\t\\item B5G \n\t\t\\end{itemize}\n\t\t}}\n}\n};\n\\end{tikzpicture}\n\\end{adjustbox}\n\\caption {MAC layer classification based on network topologies. 
}\n\\label{fig:nw_topologies}\n\\end{figure*}\n\\begin{table*}[htbp]\n \\centering\n \\tiny\n \\caption{Characterization of existing Terahertz MAC protocols.}\n \\begin{tabular}{p{2em}lp{4em}p{6em}p{3.1em}p{5em}p{4em}p{15em}p{3.1em}p{8em}p{5em}p{5em}p{4em}p{4.1em}}\n \\textbf{Year} & \\multicolumn{1}{p{3.11em}}{\\textbf{Paper}} & \\textbf{Band} & \\textbf{Network type} & \\textbf{Network scale} & \\textbf{Topology} & \\textbf{Simulator} & \\textbf{Simulation parameters} & \\textbf{Analytical Model} & \\textbf{Tx/Rx initiated communication} & \\textbf{Modulation Scheme} & \\multicolumn{1}{p{5.11em}}{\\textbf{Channel access method}} & \\textbf{Antenna} & \\textbf{Complexity}\\\\ \\hline \\hline\n 2011 & & THz bands & Nanonetworks & Nano & centralized & C++ & delay, throughput, performance analysis based on distance, propagation time, packet lifetime & \\checkmark & Transmitter initiated & \\multicolumn{1}{l}{-} & - & nano & - \\\\ \\hline\n \\multirow{2}{*}{2012} & & 0.1 - 10 THz & Nanonetworks & Nano & distributed & custom & Energy consumption, Delay, Throughput & \\checkmark & Transmitter initiated & RD-TS-OOK & TDMA & nano & - \\\\ \n \t& & THz bands & Nanonetworks & Nano & clustered & No & X & X & Transmitter initiated & \\multicolumn{1}{l}{-} & \\multicolumn{1}{p{5.11em}}{TDMA} & nano & - \\\\ \\hline\n \\multirow{4}{*}{2013} & & 0.1 - 10 THz & Wireless Nanosensor Networks & Nano & Centralized & custom & Throughput, optimal channel access, network lifetime, critical transmission ratio & \\checkmark & Transmitter initiated & pulse based,TS-OOK & \\multicolumn{1}{p{5.11em}}{TDMA} & nano & - \\\\\n & & 0.1 - 10 THz & Terahertz Wireless Personal Area Networks & Macro & Distributed & OPNet & data trasmission rate, avg access delay, time for transmitting frames, access success rate & X & Transmitter initiated & \\multicolumn{1}{l}{} & Hybrid & \\multicolumn{1}{l}{-} & - \\\\\n & & THz bands & Nanonetworks & Nano & Distributed & NanoSim - NS3 & packet loss ratio, 
physical transmission & \multicolumn{1}{c}{} & Transmitter initiated & OOK & Random & nano & - \\\n & & 0.1 - 10 THz & Terahertz Communication Network & Macro & Distributed & \multicolumn{1}{l}{} & \multicolumn{1}{l}{-} & \multicolumn{1}{c}{-} & Transmitter initiated & \multicolumn{1}{l}{-} & Hybrid & \multicolumn{1}{l}{-} & - \\ \hline\n \multirow{4}{*}{2014} & & 0.1 - 10 THz & Terahertz networks & Macro & Distributed & custom & Data rate, throughput & \checkmark & Transmitter initiated & TS-OOK & TDMA & directional & - \\\n & & THz bands & Nanonetworks & Nano & Distributed & Matlab & energy efficiency, harvest rate, packet balance & \checkmark & Receiver initiated & OOK & TDMA & nano & - \\\n & & THz bands & Wireless Nanosensor Networks & Nano & Distributed & custom & SNR, BER, capacity & X & Transmitter initiated & pulse based & FTDMA & \multicolumn{1}{l}{-} & - \\\n & & 0.1 - 10 THz & Nanonetworks & Nano & Distributed & NanoSim - NS3 & Collision probability, RTR, fairness & X & Receiver initiated & OOK & TDMA & nano & - \\ \hline\n \multirow{4}{*}{2015} & & 0.1 - 10 THz & Nanonetworks & Nano & Distributed & NanoSim - NS3 & Delay, energy consumption, utilization capacity & X & Receiver initiated & Pulse based modulation & \multicolumn{1}{p{5.11em}}{TDMA} & nano & - \\\n & & 1.04 THz & Terahertz Communication Networks & Macro, Nano & Distributed & NS3 & Delay, throughput, Packet delivery ratio & \checkmark & Receiver initiated & PSK, TS-OOK & \multicolumn{1}{p{5.11em}}{CSMA} & omni and directional & - \\\n & & 0.1 - 10 THz & Nanonetworks & Nano & Distributed & custom & Failure probability, normalized energy per bit, number of retransmissions & \checkmark & Transmitter initiated & \multicolumn{1}{l}{-} & TDMA & nano & - \\ \n & & THz bands & Wireless Nanosensor Networks & Nano & Centralized & custom & Energy consumption, Delay, Throughput & X & Transmitter initiated & OOK & \multicolumn{1}{p{5.11em}}{TDMA} & nano & - \\ \hline\n 
\multirow{9}{*}{2016} & & 0.1 - 10 THz & Nanonetworks & Nano & Distributed & COMSOL multi-physics & BER, PER, energy consumption, latency, throughput & \checkmark & Transmitter initiated & OOK & \multicolumn{1}{p{5.11em}}{-} & nano & - \\
  & & 0.1 - 10 THz & Internet of Nano Things & Nano & Distributed & custom & delivery ratio, debt, throughput & \checkmark & Transmitter initiated & \multicolumn{1}{l}{-} & CSMA & nano-antenna & - \\
  & & 2.4 GHz, 0.1-10 THz & Terahertz Communication Networks & Macro & Distributed & custom & packet delay, throughput, failure probability & \checkmark & Transmitter initiated & \multicolumn{1}{l}{-} & Random, multiple radios & omni and directional & - \\
  & & 0.1 - 10 THz & Wireless Nanosensor Networks & Nano & Distributed & Matlab & delay, throughput, collision probability & \checkmark & Transmitter initiated & TS-OOK & TDMA & nano & - \\
  & & 100 GHz & Nanonetworks & Nano & distributed & Any-Logic platform & Coverage, Packet Transmission rate, classification time, collision & X & Transmitter initiated & OOK & \multicolumn{1}{p{5.11em}}{-} & nano & $O(x)$ \\
  & & 0.1 - 10 THz & Wireless Nanosensor Networks & Nano & Distributed & COMSOL multi-physics & link efficiency, optimal packet length with distance & X & Transmitter initiated & OOK & - & nano-antenna & - \\
  & & THz bands & Wireless Nanosensor Networks & Nano & Distributed & Matlab & collision probability, energy consumption, transmission distance & \multicolumn{1}{c}{} & Transmitter initiated & OOK & - & nano & - \\
  & & 1-2 THz & Nanonetworks & Nano & Distributed & custom & Throughput, Transmission probability & \checkmark & Transmitter initiated & OOK & \multicolumn{1}{p{5.11em}}{FTDMA} & nano-antenna & - \\
  & & 0.1 - 10 THz & Terahertz Communication networks & Macro & distributed & \multicolumn{1}{l}{-} & \multicolumn{1}{l}{-} & \multicolumn{1}{c}{-} & Transmitter initiated & \multicolumn{1}{l}{-} & Hybrid & \multicolumn{1}{l}{-}
& - \\ \hline
  \multirow{9}{*}{2017} & & 340 GHz & Terahertz Wireless Personal Area Networks & Macro & Distributed & OPNet & delay, throughput, success rate, buffer overflow rate & \checkmark & Transmitter initiated & \multicolumn{1}{l}{-} & \multicolumn{1}{p{5.11em}}{Hybrid} & omni & - \\
  & & 0.06 - 10 THz & Terahertz Communication Networks & Macro & Centralized & custom & Throughput, Data rate, Delay, and outage probability & \checkmark & Transmitter initiated & Pulse waveform modulation & Scheduled, multiple radios & omni and directional & - \\
  & & 240 GHz & Terahertz networks & Macro & \multicolumn{1}{l}{} & Matlab & probability of successful frame reception, goodput Gbps, percentage of lost headers, ACK frame size, energy per bit & X & Transmitter initiated & PSSS, PAM-16 & - & \multicolumn{1}{l}{-} & - \\
  & & 2.4 GHz, 0.1-10 THz & Terahertz Communication Networks & Macro & Distributed & Monte Carlo & Delay, Throughput, outage probability & \checkmark & Transmitter initiated & Pulse wave modulation & \multicolumn{1}{p{5.11em}}{Random, multiple radios} & omni and directional & - \\
  & & 2.4 GHz, 0.1-10 THz & Nanonetworks & Nano & Distributed & NS3 & Throughput & X & Transmitter initiated & \multicolumn{1}{l}{-} & \multicolumn{1}{p{5.11em}}{Random, multiple radios} & omni and directional & - \\
  & & 100 GHz & Wireless Nanosensor Networks & Nano & Distributed & NS3 & Bandwidth efficiency, pulse drop ratio, packet delivery ratio, fairness, energy consumption & X & Transmitter initiated & OOK & TDMA & nano & $O(N^{'})$ \\
  & & 1.0345 THz & Terahertz Communication Networks & Macro & Distributed & custom & Throughput, optimal distance & \checkmark & Receiver initiated & - & \multicolumn{1}{p{5.11em}}{CSMA} & directional antenna & - \\
  & & 73 GHz and 0.86 THz & Vehicular Network & Macro & Distributed & custom & data transfer & \checkmark & Transmitter initiated & \multicolumn{1}{l}{-} & Scheduled, multiple radios & directional
& $O(V^{k}) $ \\\\\n & & THz bands & Wireless Nanosensor Networks & Nano & clustering & NS3 & pkt loss ratio, consumed energy, scalability & \\multicolumn{1}{c}{} & Transmitter initiated & \\multicolumn{1}{l}{-} & \\multicolumn{1}{p{5.11em}}{TDMA} & \\multicolumn{1}{l}{-} & - \\\\ \\hline\n \\multirow{6}{*}{2018} & & 0.1 - 10 THz & Vehicular Network & Macro & Distributed & custom & Channel capacity, PSD, number of links & X & Transmitter initiated & \\multicolumn{1}{l}{-} & - & omni & - \\\\\n & & 0.1 - 10 THz & THz Mobile Heterogeneous Network & Macro & Distributed & custom & Uniformity, Randomness, Hamming Correlation, throughput, BER & X & Transmitter initiated & \\multicolumn{1}{l}{} & FTDMA & \\multicolumn{1}{l}{-} & - \\\\\n & & mmWave, 0.1 - 10 THz & Vehicular Network & Macro & Distributed & custom & data transmission rate & X & Transmitter initiated & \\multicolumn{1}{l}{-} & Scheduled, multiple radios & directional & - \\\\\n & & 0.1 - 10 THz & Nanonetworks & Nano & Distributed & custom & Throughput capacity, interference power & \\checkmark & Transmitter initiated & Pulse based modulation (OOK) & TDMA & nano-antenna & - \\\\\n & & 0.1 - 10 THz & Wireless Nanosensor Networks & Nano & Centralized & custom & slot assignment rate, energy consumption & X & Transmitter initiated & TS-OOK & \\multicolumn{1}{p{5.11em}}{CSMA} & nano-antenna & - \\\\\n & & 2.3 THz & Terahertz Communication Networks & Macro, Nano & Centralized & custom & Antenna directivity, antenna controller overhead & X & Transmitter initiated & DAMC & - & omni and directional & - \\\\ \\hline \n \\multirow{2}{*}{\\saim{2019}} & \\saim{} & 0.25 THz & Data Centre & Macro & Distributed & custom & FER, BER, Pkt loss, retransmission & X & Transmitter initiated & 16 QAM & - & directional & - \\\\ \n & \\saim{} & & Wireless Nanosensor Networks & Nano & Centralized & custom & Average delay, remaining energy and transmitted packets & X & Receiver initiated & OOK & - & nano & $O(N), O(N log N)$ \\\\ 
\\hline \\hline\n \\end{tabular}\n \\label{tab:mac-proto-characterization}\n\\end{table*}", "id": "b6b2cf16-6437-44c6-9983-ab2c4618a9b4", "level": "section", "origin_cites_number": 0, "parent_id": "068ac4ba-0bb0-453c-9e5f-1071f02038e9", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Terahertz MAC protocols for different network topologies" ] ], "subsections": [ "e472d192-8845-455c-b2df-a67bc9e02425", "4a05b99f-a937-4798-a9b6-4f3b30116ce4", "8600771c-9e5e-42ca-9cb3-d70388dc325f", "a35ab77e-1b3d-4bdf-ba2f-cdf305614dbd" ], "title": "Terahertz MAC protocols for different network topologies" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec-topo:centralized}\nThe centralized architecture is mainly followed in nanoscale networks due to its limited energy, coverage area, and application in body area networks~.", "id": "e472d192-8845-455c-b2df-a67bc9e02425", "level": "subsection", "origin_cites_number": 0, "parent_id": "b6b2cf16-6437-44c6-9983-ab2c4618a9b4", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Terahertz MAC protocols for different network topologies" ], [ "subsection", "Terahertz MAC protocols for Centralized networks" ] ], "subsections": [ "40169383-50c8-45b2-a1f5-ea239e93a918", "02ff7c44-b0d9-4f7b-bc26-0729a6c8f22d" ], "title": "Terahertz MAC protocols for Centralized networks" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsubsec-topo:centralized-nano}\n\\saim{The nanonetworks include several nanodevices that work together to perform simple tasks. Due to limited energy capacity, centralized topology is used in different applications including In-body networks, air quality monitoring, and industrial applications. In these applications\\blue{,} a nano-controller that is capable of performing heavy computation, scheduling and transmission tasks is used. 
Initially, nanodevices send their information to the controller; the controller can then process and schedule transmissions and forward information to external networks via a gateway device. The works in~ use a centralized network topology for different nano communication applications. A general figure of such an architecture is shown in Figure~\ref{fig:nanobio}. }
\saim{A centralized approach is presented in~ in which nano nodes can be deployed in a specific area to detect problems like defects or pollution; each nano node performs computational tasks with limited memory and transmits small data over a short range to the nano-controller. The gateway collects the information and sends it to the Internet. A single-hop delay-throughput performance is measured in~ for the bio-nano communication network, in which bacteria packets travel towards the nano-gateways following the attractant particles emitted by the conjugate nano-gateway. The centralized network topology is also used in~. A centralized TDMA and energy-harvesting based protocol is presented in~, in which the nano-controller is responsible for channel access and time slot allocation for nano nodes. To cope with energy consumption\blue{,} nodes are also responsible for energy harvesting to extend their life cycle. A centralized approach is then used to free some nodes from performing heavy computational tasks and channel access. Due to the small distances, the path loss discussed in Section~\ref{sec-req:pathloss} remains low. However, interference from nearby devices can affect the transmission opportunities and access to the channel (cf. Section~\ref{sec-req:phylayer}). In a centralized topology, nodes are within single-hop distance from the controller node. }
\mage{Depending on the application requirement, different topologies can be followed in nanonetworks. The works in~ follow a centralised approach. However, for higher node density, multi-hop communication is required.
Although high node density is considered in~, a random arrangement of nodes with mobility is not. }
\begin{figure}[htbp!]
\centering
\includegraphics[width=3.2in,height=2.2in]{figures/topo2human.png}
\caption{Nano body communication network.}
\label{fig:nanobio}
\end{figure}
\subsubsection{Macro scale networks}
\label{subsubsec-topo:centralized-macro}
Besides nanonetworks, the centralized architecture is also used for Terahertz communication in larger, macro-scale networks. In~, a centralized network with Terahertz links is presented, consisting of an AP and multiple nodes, where the AP coordinates and schedules the transmissions among the nodes and can communicate directly with each node. An interesting issue in such networks is synchronization among the nodes: each node using a directional antenna needs to point towards an AP, which also increases the probability of interference or collisions from neighbors. In~, directional antennas are used for node discovery, initial access, and the data transmission phase without creating the disparity problem~.
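The AP-coordinated access described above can be sketched in a few lines: the AP steers its directional beam to one node per slot, so transmissions within the cell are collision-free by construction. The following Python sketch is purely illustrative; the slot length, node names, and helper function are hypothetical and not taken from any cited protocol.

```python
# Hypothetical sketch of AP-side slot scheduling in a centralized
# Terahertz cell: the AP visits each associated node in turn, so every
# node gets exactly one non-overlapping beam-steering slot per round.

def build_ap_schedule(nodes, slot_us=10):
    """Return (start_us, end_us, node) tuples for one scheduling round."""
    schedule = []
    t = 0
    for node in nodes:
        schedule.append((t, t + slot_us, node))
        t += slot_us
    return schedule

round1 = build_ap_schedule(["n1", "n2", "n3"])
# round1 == [(0, 10, 'n1'), (10, 20, 'n2'), (20, 30, 'n3')]
```

Because the AP serves at most one node per slot, collisions are avoided by construction; the cost is that the round length, and hence the worst-case access delay, grows linearly with the number of nodes.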
\nThe problem arises when the directional beams from the AP and the receiving node have to be well aligned, \\saim{beams alignment is time and energy-consuming, a beam management module at MAC layer should be implemented for efficient beam steering.} For centralized macro-scale networks, the AP is assumed to be equipped with directional antennas whereas the other nodes are using the omnidirectional antennas and are assumed to switch to the directional antennas mode after the establishment of initial discovery which can incur more delays. A beam switching access technique is discussed in~, beam alignment is performed periodically for the initial access and transmission period. The centralized topologies are also presented in~ in which a piconet coordinator is assumed to provide time synchronization information to nearby devices and handles the scheduling and access control. A MAC design for macro scale communication at 100 Gbps is discussed in~ which considers an indoor picocell, where an AP communicates with a group of users using LOS and directed non-LOS. \n\\mage{An interesting challenge for centralized topology for distances higher than a meter will be to efficiently manage the beam steering and switching among nodes, which is discussed in Section~\\ref{sec-req:phylayer}. For applications like TPAN and TLAN, mobility adds design challenges for MAC and the channel access and scheduling can be managed by a central controller (cf. Section~\\ref{sec-req:maclayer}). In indoor environments with small distances and mobile nodes, multi-user challenges should be addressed which includes efficient mobility and interference management and resource scheduling. 
}
\subsection{Terahertz MAC protocols for Clustered networks}
\label{subsec-topo:clustered}
\saim{In a clustered architecture, a cluster head is elected by the nodes in each cluster; the cluster head is responsible for data processing and for transmission to the gateway node and to other cluster heads. The clustered architecture has so far been used only in nanonetwork environments, which also incur low path loss due to the short distances. Energy efficiency can be improved in a clustered architecture by relying on the central node. }
\subsubsection{Nanoscale networks}
\label{subsubsec-topo:clustered-nano}
\saim{In a hierarchical architecture, the network is partitioned into a set of clusters, where each cluster is locally coordinated by a nano-controller. The nano-controller is a device that can process complex tasks and has high energy availability.
Since nanosensor nodes are not capable of processing and handling complex tasks, these tasks are pushed to the nano-controllers, which then coordinate them in an efficient manner. The MAC layer of a nano-controller includes more functionalities, such as link establishment and resource allocation. A clustered architecture is followed in~. The nanosensor nodes are battery-limited devices that can only store enough energy to perform a few simple tasks. The clustered approach can be used to manage high node density, where inter-cluster communication enhances network accessibility. In~, the transmission and harvesting slots are assigned among the different nanosensors within each cluster in a way that the harvested and consumed energy are balanced among nodes. }
A cluster-based nano-network is also discussed for dense networks in~, in which inter- and intra-cluster communications are carried out to reach the gateway node. For plant communication, a cluster-based architecture is followed in~, which addresses the frequency selection problem in the Terahertz band, considered a frequency-selective band. The nanodevice clusters monitor the chemical reactions in plants, schedule the transmissions among themselves, and transmit the data to microdevices, which then forward it to the Internet via a gateway device. 
\mage{The clustered architecture requires efficient MAC protocols to handle data relaying within and between clusters. Nanonetworks can utilize very high bandwidth due to low path losses at small distances with basic modulation and coding schemes. In such environments, a challenge related to efficient scheduling and channel access arises from the large number of nodes (cf. Section~\ref{sec-req:maclayer}). For higher data rates, the nano-controller can be pushed towards physical layer synchronization (cf. Section~\ref{sec-req:maclayer}).
In addition, efficient mechanisms are also required for grouping nano nodes, for the dynamic migration of nodes from one cluster to another, and for the placement and selection of cluster heads, constrained by energy consumption and efficient communication. }
\subsection{Terahertz MAC protocols for Distributed networks}
\label{subsec-topo:distributed}
Depending on the application requirements, devices at both the nano and macro scale can perform communication tasks in a distributed manner. The details of the works following distributed management are as follows\blue{:}
\subsubsection{Nanoscale networks}
\label{subsubsec-topo:distributed-nano}
\saim{In a distributed network architecture, nodes perform tasks individually and take independent decisions for communication. Decisions can be easy when nodes are aware of the environment and the physical layer parameters mentioned in Section~\ref{sec-req:phylayer}.
For example, when nodes are aware of the channel access and scheduling states of their neighbors, overall delay can be minimized and throughput can be increased. In nanonetworks\blue{,} modulation schemes and schedules can be negotiated among the nodes to perform communication~. Scalable distributed networks are discussed in~.}
\saim{Due to the absence of a controller in ad-hoc nanonetworks, nodes need to schedule their transmissions and coordinate channel access without generating interference. A distributed scheduling mechanism is proposed in~ for nanosensor networks, in which every node takes decisions locally based on its incoming traffic and channel sensing results. The proposed protocol is shown to be reliable in data delivery with optimal throughput and addresses the fundamental challenge of limited-memory nanodevices. An adaptive pulse interval scheduling scheme is \blue{also} proposed \blue{in this work}, which schedules the arrival pattern of pulses transmitted by a nano sink. }
\blue{In ad-hoc network architectures, since node positions can be random, relaying can be essential to guarantee connectivity between distant nodes.} A set of new functionalities should be taken into consideration, such as updating the neighbor list of each node. A multihop network can emerge, which needs further investigation~. The work in~ addresses the antenna-facing problem, in which nanosensors in direct range communicate using an omnidirectional antenna, whereas relay nodes with directional antennas are used to reach other message stations in the presence of obstacles. The work is shown to improve throughput; however, due to the limited capacity of nano nodes, it can increase the message and energy overhead.
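The local, per-node decision logic described above, where each node acts only on its own queue state and channel sensing result, can be sketched as follows. The thresholds and action names are hypothetical illustrations, not values from the cited mechanism.

```python
# Toy distributed transmit decision: a node consults only local state.
# Thresholds are illustrative; a real protocol would tune them to the
# memory and energy constraints of the nanodevice.

def decide(queue_len, channel_busy, backoff_slots, max_queue=8):
    if queue_len == 0:
        return "idle"                       # nothing to send
    if channel_busy:
        return f"backoff({backoff_slots})"  # defer: channel sensed busy
    if queue_len >= max_queue:
        return "tx-burst"                   # drain before memory overflows
    return "tx-single"
```

Since the decision depends only on local observations, no control messages need to be exchanged, which matches the memory- and energy-limited setting of nanodevices.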
A flooding scheme for high-node-density ad-hoc nanonetworks is proposed in~, in which a message from an external entity is broadcast in a nano-network, with coverage measured as the percentage of receiver nodes. An internal node can also propagate data towards the external entity through a movable gateway. A frequency hopping scheme is modeled in~ to overcome attenuation and noise issues using multiple channels between two nanosensor nodes. For distributed networks, the energy problem is considered in~. 
\mage{Different topologies require different solutions based on the application requirements, band limitations and design criteria. For example, the small transmission distance and node size require the nodes to be close to each other. Depending on the application, the number of nodes can be huge. Although omnidirectional antennas can be used, a high number of nodes can introduce collisions~, which is not discussed in these works. Further, band limitations and high node density combined with energy-efficient solutions are also not addressed in any single work.}
\subsubsection{Macro scale networks}
\label{subsubsec-topo:distributed-macro}
Besides nanoscale networks, the Terahertz band promises ultra-high-speed wireless links for macro-scale networks. However, the free-space path loss affects the throughput and results in a reduced coverage area. To extend the coverage and minimize the path losses, directional antennas are encouraged.
\saim{These antennas, however, require frequent beam switching and steering. An efficient beam scheduling mechanism can help provide continuous transmission with controlled delay. In mobile environments, the ad-hoc nature can bring frequent topology changes that need to be handled by a MAC layer protocol. For example, nodes will require mechanisms to facilitate nodes entering and leaving the network, relaying, handovers, and efficient routes between source and destination with the lowest delay (cf. Section~\ref{sec-req:design-issues}).} 
A Terahertz communication network with high-speed Terahertz wireless links is presented in~ for macro-scale communication. It addresses the problem of handshaking, with antenna speed considered an important factor in designing a MAC protocol. Nodes are placed within a 10 m circular area in a distributed manner. A mobile heterogeneous architecture is presented in~ for ad-hoc connectivity and WLAN, to provide high-speed Terahertz links and broadband access using access points. In~, an intelligent and secure spectrum control strategy is proposed for an indoor network with different access subnets, together with an anti-jamming strategy with adaptive frequency slot number selection. 
\saim{Omnidirectional antennas can be used for initial link establishment, and directional antennas for data transfer to reach farther nodes. This, however, introduces challenges including alignment, synchronization, and extra antenna overhead.} A distributed Terahertz communication network using both directional and omnidirectional antennas is proposed in~, in which anchor nodes are used alongside regular nodes. The anchor nodes are assumed to know their location in advance, while regular nodes are equipped with beamforming antenna arrays.
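As a rough illustration of beam scheduling with directional arrays, a round-robin sector sweep bounds the worst-case neighbor-discovery delay by the number of sectors times the per-sector dwell time. The sketch below is a hypothetical illustration, not a mechanism taken from the cited works.

```python
# Hypothetical round-robin sector sweep for an N-sector antenna array:
# visiting every sector once per sweep bounds worst-case neighbor
# discovery delay to n_sectors * dwell_ms.

def sweep_plan(n_sectors, dwell_ms):
    """(sector index, start time in ms) pairs for one full sweep."""
    return [(s, s * dwell_ms) for s in range(n_sectors)]

def worst_case_discovery_ms(n_sectors, dwell_ms):
    return n_sectors * dwell_ms

plan = sweep_plan(8, dwell_ms=2)   # 8 sectors, 2 ms each -> 16 ms sweep
```

The linear trade-off is the key point: narrower beams (more sectors) buy antenna gain and range at the cost of a proportionally longer sweep, which is exactly the beam-management overhead discussed above.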
Similar work with omnidirectional and directional antennas is described in~, where a 2.4 GHz link carries the control signals for beam alignment and Terahertz links carry the data. Although using the 2.4 GHz band for control signaling reduces the handshaking delay between the nodes, it limits the coverage area and can leave isolated areas in the network, which further requires multihop strategies to increase reachability. A relaying strategy is proposed in~ for a network with randomly distributed nodes. However, only a few dedicated relays are used to transfer data, and nodes are assumed to switch between transmission and receiving modes, which can increase delays. 
A software-defined network (SDN) based vehicular network is considered with distance-dependent spectrum switching, where the mmWave and Terahertz bands are used alternately based on the data transfer usage~. It is argued that universal coverage is not possible using the Terahertz band alone, and therefore a network architecture is proposed that uses the microwave, mmWave, and Terahertz bands together to achieve the design goals of coverage and channel access. Although this can extend the coverage, the switching delay grows with the number of nodes and the traffic between them. Similar work is presented in~, which discusses a handoff and MAC protocol to dynamically switch between the mmWave and Terahertz bands for high-bandwidth data transfers. Although performance is shown to improve, the message overhead and switching delays are high; further, synchronization across multiple vehicles is not addressed. Another relay algorithm for autonomous vehicular communication using Terahertz band links to overcome short-range and unstable links is presented in~. 
\mage{The distributed arrangement of nodes can cause disconnected networks when directional antennas are used. It can also introduce the deafness problem.
Synchronization among the nodes to align antennas and exchange neighbor information will be another challenge. To solve the synchronization issue, the works in~ use multiple bands, which can increase the hardware cost and switching delays. A work presented in~ provides link layer synchronization while considering the Terahertz band features. However, the full-beam sweeping time can increase the synchronization delay when nodes are unaware of other nodes and their beam directions. }
\subsection{Summary and discussion}
Each application has different topology requirements. In nano-communication networks, the nodes are placed at a very small distance from each other, so the path loss is less severe and omnidirectional antennas can be used. Such omnidirectional usage requires the MAC protocol to include collision avoidance methods with an efficient sensing mechanism to detect interference. The path loss increases with distance; therefore, to mitigate the free-space attenuation effect, directional antennas are required. The antenna directionality requirement clearly impacts link establishment and channel access mechanisms. The transmission schedule can be easily managed in a centralized scenario, in which a central controller is responsible for the overall transmission schedules, which also requires energy-efficient mechanisms.
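The distance argument in this summary can be made concrete with the free-space (Friis) path-loss term alone, ignoring the molecular absorption that further penalizes Terahertz links; the two link lengths below are illustrative.

```python
import math

# Free-space path loss: FSPL(dB) = 20*log10(4*pi*d*f / c).
# Compares a centimetre-scale nano link with a 10 m macro link at 1 THz.

def fspl_db(distance_m, freq_hz, c=3e8):
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

loss_nano = fspl_db(0.01, 1e12)    # ~52 dB for a 1 cm link
loss_macro = fspl_db(10.0, 1e12)   # ~112 dB for a 10 m link
# Every decade of distance adds 20 dB, so the 10 m link loses 60 dB more,
# which is why macro-scale links need directional antenna gain.
```

This 60 dB gap between the two regimes is what separates the omnidirectional nano-network designs from the beam-steering macro-scale designs surveyed above.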
However, in distributed networks, scheduling the transmissions and resources is a challenge, especially when directional antennas are in use over short distances. 
At macro scale, for Terahertz communication networks, the topology must account for many practical concerns like scalability, reconfigurability, LOS connectivity due to the antenna directionality requirement, fault tolerance, and the cost-performance index. In indoor scenarios\blue{,} the MAC protocol design should cover the distance between the APs and users with mobility support, while providing fault-free and seamless communication. A distributed topology for a Terahertz MAC protocol must accommodate the dynamic nature of the network while covering the whole network. The Terahertz Data Centre network, for example, requires top-of-rack nodes to transmit data among different racks. The short range limits the connectivity of the nodes; therefore, novel mechanisms are required to reach far-distance nodes within a Data Centre.
\section{Channel Access Mechanism for Terahertz communications}
\label{sec:ch-access}
\begin{figure*}
\centering
\scriptsize
\begin{adjustbox}{width=\linewidth}
\tikzstyle{block} = [rectangle, draw, text width=4cm, text centered, rounded corners, minimum height=5em,fill=blue!20]
\tikzstyle{line} = [draw,thick, -latex']
\tikzstyle{cloud} = [draw, ellipse, text width=4.5cm, text centered]
\tikzstyle{edge from parent}=[->,thick,draw]
\begin{tikzpicture}[auto,edge from parent fork down]
\tikzstyle{level 1}=[sibling distance=60mm,level
distance=18ex]
\tikzstyle{level 2}=[sibling distance=25mm,level distance=15ex]
\tikzstyle{level 3}=[sibling distance=30mm,level distance=34ex]
\tikzstyle{level 4}=[sibling distance=35mm,level distance=34ex]
\node [block,text width=10cm,minimum height=3em,fill=black!30] (cst) {\textbf{Channel Access Mechanisms in THz MAC protocols \\ (Sect.\ref{sec:ch-access}) }}
{
child{node [block,text width=4cm,fill=green!20] (opm) {\textbf{Nanoscale Networks \\ (Sect.\ref{subsec-ch:nano}) } }
 		child{node [block,text width=2.4cm,fill=orange!20,xshift=-1.3cm, yshift=-1.1cm] (pmt) { \\ 
 		{\bf Random Channel Access Mechanisms \\ (Sect.\ref{subsubsec-ch:nano-random}) } \\
 		{\bf CSMA: } \\
 		CSMA-MAC \\
 		SMART-MAC \\
 		DMDS \\
 		{\bf Multiple radios: } \\
 		RBMP \\
 		}} 
 	 child{node [block,text width=2.5cm,fill=orange!20,xshift=-1cm, yshift=-2.6cm] (pmt) { \\ 
 		{\bf Scheduled Channel Access Mechanisms \\ (Sect.\ref{subsubsec-ch:nano-scheduled}) } \\ 
 		{\bf TDMA: } \\
 		DESIGN-WNSN \\
 		TCN \\
 		ES-Aware \\
 		EEWNSN \\
 		EESR-MAC \\
 		PHLAME \\
 		DRIH-MAC \\
		RIH-MAC \\
		2-state MAC \\
		G-MAC \\
		APIS \\
		MDP \\
		SSA-MAC \\
		{\bf FTDMA: } \\
		DYNAMIC-FH \\
		TSN \\
 		}} 
 }
child{node [block,text width=4cm,fill=green!20] (opm) {\textbf{Macro Scale Networks \\ (Sect.\ref{subsec-ch:macro}) } }
		child{node [block,text width=2.4cm,fill=orange!20,xshift=-0.4cm, yshift = -1.03cm] (pmt) { \\
		{\bf Random Channel Access Mechanisms \\ (Sect.\ref{subsubsec-ch:macro-random}) } \\
		{\bf CSMA/CA based:} \\
		LL-Synch \\
		OPT-RS \\
		{\bf Multiple radios:} \\
		TAB-MAC \\ 
		MRA-MAC \\
		}}
		child{node [block,text width=2.4cm,fill=orange!20,xshift=-0.2cm, yshift = -1.15cm] (pmt) { \\
		{\bf Scheduled Channel Access Mechanisms
\\ (Sect.\ref{subsubsec-ch:macro-scheduled}) } \\
		{\bf FTDMA:} \\
		ISCT \\
		{\bf TDMA:} \\
		TRPLE \\
		{\bf Multiple radios:} \\
		SDN-CONTR \\
		B5G \\
		MA-ADM \\
		}}
		child{node [block,text width=2.4cm,fill=orange!20,xshift=0.0cm, yshift = -1cm] (pmt) { \\
		{\bf Hybrid Channel Access Mechanisms \\ (Sect.\ref{subsubsec-ch:macro-hybrid}) } \\
		{\bf CSMA/CA and TDMA:} \\
		HLMAC \\
		IHTLD-MAC \\
		MAC-TUDWN \\
		MAC-TC \\
		}}
}
};
\end{tikzpicture}
\end{adjustbox}
\caption {Terahertz Channel Access Mechanisms classification.}
\label{fig:ch_access_class}
\end{figure*}
\saim{In this section, the existing channel access mechanisms for Terahertz band communications are presented. They are first classified into macro- and nanoscale Terahertz networks, and further into random, scheduled and hybrid channel access mechanisms, as shown in Figure~\ref{fig:ch_access_class} and discussed below. }
\label{subsec-ch:nano}
\saim{Pulse-based communication in nanonetworks transfers information using short pulses, which reduces the chance of collisions (cf. Section~\ref{sec-req:phylayer}). To avoid possible collisions, the duration between two pulses can be increased to allow different users to stream at the same time. 
A nanodevice can transmit a packet as soon as it has data to send, without random waiting, provided that the receiving devices are able to detect such pulses. A node can also be aware of, or predict, the next transmission from a received packet. }
In nanonetworks\blue{,} many nanosensor nodes at random positions can be used to maintain network connectivity for different applications, such as in-body sensing, toxic gas detection and control, and military fields. Their sensing and communication capabilities limit their target area to a few millimeters. \blue{Although smaller antennas can be integrated and massive amounts of data can be exchanged at high data rates, this requires a simple, robust and energy-efficient channel access mechanism for communication among the nano nodes.}
\label{subsubsec-ch:nano-random}
\saim{In random-access mechanisms, different nodes contend for channel access or transmit packets in a random manner. Random access is not suitable for applications with a high number of nano nodes, due to the limited sensing, computation, battery and memory capacity of nanodevices, which allows only a few transmissions before another harvesting phase is required. }
\saim{Carrier-based channel access mechanisms are mostly unsuitable for nano communication due to the extra sensing overhead and energy consumption. 
In~, a slotted CSMA/CA-based channel access mechanism (CSMA-MAC) is proposed, in which nodes contend for the channel, and an energy harvesting model is also presented. Slot usage is found to be higher when the slotted CSMA method is used. In addition, the superframe duration and packet size must also be considered for slotted CSMA protocols. In these networks, a beacon can be used either to synchronize the next transmissions or to directly request the transmission of data packets. With direct beacon-triggered data transmission, collisions can occur when two nodes transmit at the same time, and the energy constraint must be addressed, as frequent packet transmission can drain a nanodevice's energy fairly quickly. Another work that uses carrier sensing is presented in~, where the sensing duration is used to optimize the transmission schedules. }
\saim{A simple ALOHA-based channel access mechanism, Smart MAC, is presented in~, in which nodes perform a handshake before sending a packet to learn about their one-hop neighbors.} \saim{When there is no neighbor to transmit to\blue{,} the node applies a random backoff delay before starting another handshake. The receiving node verifies whether any physical collisions occurred. Collisions can occur when a higher number of nodes is present, which is not addressed, and the retransmission mechanism is not discussed. }
\saim{A random channel access scheme with multiple radios is used for nanonetworks in~. For control signal transmission\blue{,} the 2.4 GHz band is used, and \blue{the THz band is used} for data transmission. The channel is accessed in a random manner in both phases. The scheme also addresses the antenna facing problem in the first phase by synchronizing the antenna directions for the data transmission phases. \blue{This is primarily because} the narrow beams cannot cover the whole search space. 
It mainly overcomes the synchronization problem of using directional antennas in the Terahertz band, at the cost of multiple radios. Alignment between the two phases is required to decide the time spent in each phase of transmission.}
\mage{When the number of nodes is high, random access or packet transmission can pose several challenges, including collisions, transmission delays, and higher energy consumption. Moreover, energy-efficient harvesting mechanisms are not considered in these works. The works in~ do not consider the limited memory and energy of nano nodes. In~, synchronization among the nodes is addressed, but at the cost of multiple radios, which adds its own challenges. Random access mechanisms can increase throughput, but their high message overhead and sensing require sufficient energy availability; scheduled mechanisms solve the energy issue at the cost of throughput. A work in~ provides optimal schedules with random sensing at the beginning of the slot for a distributed environment. It considers the limited memory and energy consumption of nano nodes, but does not maximize the overall throughput, and although it targets distributed environments, only a few nano nodes were used for implementation. }
\label{subsubsec-ch:nano-scheduled}
\saim{Nanodevices require a simple communication and medium access mechanism to effectively collect data from other nanosensor devices. 
Due to the presence of a large number of nanodevices, providing optimal schedules for life-long nano node communication is a challenging task. In a centralized network, a nano controller is mainly responsible for scheduling single-hop nodes; in distributed networks, however, managing channel access schedules for nodes more than one hop away is challenging. An optimal solution should also consider the limited capacity of nano nodes, the huge available bandwidth and energy fluctuations when designing an efficient and optimal channel access scheduling mechanism~ (cf. Section~\ref{sec-req:design-issues}). }
\textit{TDMA based: } \saim{In TDMA based approaches, each node is assigned a time period to transmit its data. A scheduling mechanism based on TDMA is presented in~, in which a nano-controller decides when a nanosensor node transmits its sensing data. In~\blue{,} a timing-based logical channel concept is used, in which information can be encoded in the silence period between two events. Although these logical channels are shown to achieve synchronization among the nodes with energy efficiency, low rate, and collision avoidance, the unique features of the Terahertz band are not considered (cf. Section~\ref{sec-req:features}). }
\saim{A dynamic TDMA-based scheduling scheme, ES-aware, is presented in~, in which variable-length transmission timeslots are assigned dynamically depending upon the amount of data to be transmitted, the distance between the nanosensor and the controller, and the energy of the nanosensor. To balance the trade-off between throughput and lifetime, an optimal scheduling strategy is proposed, which aims to provide an optimal transmission order for the nanosensors to maximize the throughput. 
This algorithm utilizes the inter-symbol spacing of the pulse-based physical layer to allow a large number of nanosensors to transmit packets in parallel without introducing collisions. These works mainly target single-hop networks. }
\saim{A TDMA-based scheduling protocol for multihop WNSN MAC (EEWNSN) is presented in~, which takes advantage of clustering techniques to alleviate mobility effects and transmission collisions. After a nano router is selected, it allocates specific timeslots to the nano nodes according to a systematic allocation pattern. The timeslots are fixed, and because nodes transmit to the closest nano router, energy consumption decreases, which can prolong the network lifetime. Another work following a cluster-based architecture for nanonetworks (EESR-MAC) is \blue{presented} in~, in which a master node is initially selected and then allocates inter- and intra-cluster transmission schedules using a TDMA approach. The master node role is periodically rotated among different nodes to avoid long-distance transmissions and to save energy. }
\saim{In~, spectrum and energy parameters are considered for schedule assignment. In~, nodes select different physical layer parameters, energy, and channel conditions, agreed upon using a handshake process, which can limit performance due to the limited capacity of nano nodes. Although parameters can be negotiated, dynamic and optimal parameter selection is still required in these networks to increase lifetime and performance. A Rate Division Time-Spread On-Off Keying (RD TS-OOK) scheme is also proposed, based on the asynchronous exchange of femtosecond-long pulses spread over time. To minimize the probability of multiple sequential symbol collisions in a packet, the time between symbols $T_s$ and the symbol rate $\beta = T_s / T_p$ are chosen differently for different nanodevices and packets. 
When all nanodevices transmit at the same symbol rate, a catastrophic collision can occur, corrupting all symbols in a packet; orthogonal time hopping sequences can be used to avoid this condition~. Symbol collisions are unlikely because the transmitted symbols are very short ($T_p$) and the time between symbols $T_s$ is much longer than the symbol duration $T_p$. By allowing different nanodevices to transmit at different symbol rates, a collision in a given symbol does not lead to multiple consecutive collisions in the same packet. As an example, an RD TS-OOK illustration is shown in Figure~\ref{fig:rdtsook}, in which two nanodevices transmit to a common receiver with different initial transmission times ${\tau}^1$ and ${\tau}^2$. A short pulse represents a logical $1$ and silence represents a logical $0$. The device 1 plot shows the sequence ``10100'' and the device 2 plot shows the sequence ``11100''. }
\begin{figure*}[htb!]
\centering
\includegraphics[width=4in,height=2.2in]{figures/rdtsook.png}
\caption{An example of the Rate Division Time-Spread On-Off Keying channel access mechanism used for pulse-based communication in Terahertz nano communication networks\protect.}
\label{fig:rdtsook}
\end{figure*}
\saim{A scheduling mechanism for distributed networks, DRIH-MAC, is presented in~ using edge coloring; its centralized version is presented in~ using a probabilistic method. The edge coloring problem is considerably challenging in ad-hoc networks due to the absence of a centralized coordinator. In DRIH-MAC~, medium access control relies on receiver-initiated, distributed scheduling for nano nodes, in which each pair of nano nodes within communication range shares an edge of a distinct color. 
The main objective is to determine the minimum number of colors required to color the edges of the graph such that two edges incident on a common node do not have the same color; each color represents a timeslot in which a nano node can communicate with one of its neighbors. At most $(\delta + 1)$ timeslots are needed to reach an agreement or disagreement on a color with all neighbors through RTR packets, assuming no RTR packet failures. These works are shown to be efficient; however, the limited memory capacity of nano nodes is not considered.}
\saim{Due to limited battery capacity, a nanodevice cannot transmit frequently. When designing a channel access mechanism, energy harvesting must therefore be considered to achieve optimal network performance. A two-state MAC is proposed in~, in which a node alternates between two states: harvesting only, and harvesting with transmission. Another work using harvesting in sleeping and transmission modes is presented in~ for grid-based nanonetworks. }
\saim{To accommodate bursty traffic in a distributed environment, an adaptive pulse interval scheduling (APIS) scheme is presented in~. In this scheme, the arrival pattern of transmitted pulses is scheduled by nano sinks based on the access bandwidth. It has two scheduling steps, transmission shifting and interleaving, based on information collected from short channel sensing. When nano sinks start transmitting pulses, they are first shifted in sequence within an interval of $I_S$, after which multi-user transmissions are interleaved by separating pulses with an interval that evenly shares the bandwidth among the nano sinks, so that the pulses arrive at the gateway in an ideal pattern. }
\mage{The main issues in designing a scheduled access mechanism for nanonetworks are energy harvesting and consumption, limited computational capacity, Terahertz band features, and maintaining performance under high node density. 
The works in~ consider the energy problem; however, they neglect other aspects such as memory, collisions, and Terahertz band features. In~, a channel-parameter and coding-scheme aware protocol is presented, but it does not consider high node density, the balance between energy harvesting and consumption, or the limited computational capacity of nanodevices; further work on distributed nanonetworks that considers the unique aspects of the Terahertz band is still required. In~, a multihop protocol is discussed, but energy efficiency is not considered, and nano nodes require an efficient mechanism to balance energy harvesting and consumption. The work in~ is shown to outperform~ and considers self slot allocation with energy for both centralized and distributed environments, but band features are not considered and the trade-off between energy harvesting and consumption is not discussed. In~, an access scheme is provided that considers this trade-off with fair throughput and optimal lifetime, where nodes are aware of energy and spectrum information; high node density is also considered in its performance evaluation. }
\textit{Frequency and Time Division Multiple Access (FTDMA) based: } In~, a dynamic frequency selection strategy (DYNAMIC-FH) based on FTDMA is presented. FTDMA is initially considered, and for a higher number of nano nodes, multi-frequency operation with timeslot scheduling for different numbers of users is proposed. Each node is assigned different timeslots to avoid collisions in the case of larger packet sizes (such as multimedia traffic). The main objective of the frequency selection strategies is to minimize energy consumption and increase channel capacity. In~, a Markov Decision Process-based frequency hopping scheme (TSN) is proposed, in which the entire band is divided into $K$ frequency sub-channels and the aim is to determine, for each timeslot, which sub-channels should be used. 
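The collision-isolation argument behind RD TS-OOK, discussed earlier in this subsection, can be checked with a short simulation; the pulse duration and the two symbol spacings below are illustrative values, not parameters taken from the cited work.

```python
# Sketch of RD TS-OOK pulse timing: nanodevices send femtosecond pulses
# (duration Tp) separated by much longer symbol times Ts. Giving each
# device a different symbol rate keeps any collision isolated rather than
# corrupting a run of consecutive symbols. All numbers are illustrative.

Tp = 100e-15                       # pulse duration: 100 fs
Ts1, Ts2 = 1000 * Tp, 1300 * Tp    # per-device symbol spacing (beta = Ts / Tp)

def pulse_times(t0, Ts, n_symbols):
    """Start times of the pulse slots of one packet."""
    return [t0 + k * Ts for k in range(n_symbols)]

def collisions(times_a, times_b, Tp):
    """Symbol indices of stream A whose pulse overlaps some pulse of B."""
    return [i for i, ta in enumerate(times_a)
            if any(abs(ta - tb) < Tp for tb in times_b)]

def max_consecutive(indices):
    """Length of the longest run of consecutive indices."""
    best, run, prev = 0, 0, None
    for i in indices:
        run = run + 1 if prev is not None and i == prev + 1 else 1
        best, prev = max(best, run), i
    return best

a = pulse_times(0.0, Ts1, 200)
b = pulse_times(0.35 * Ts1, Ts2, 200)
hits = collisions(a, b, Tp)
assert max_consecutive(hits) <= 1     # distinct rates: no consecutive hits

same = pulse_times(0.0, Ts1, 200)     # equal rates, aligned pulses
assert len(collisions(same, same, Tp)) == len(same)   # catastrophic case
```

Because the relative offset between the two streams drifts by $300\,T_p$ per symbol here, any overlap is necessarily followed by a miss, which is exactly the property the varying symbol rate is meant to provide.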
\\
\label{subsec-ch:macro}
Figure~\ref{fig:ch_access_class} shows the classification of Terahertz band channel access mechanisms for Terahertz macroscale networks. The summary and details of each category are discussed below.
\label{subsubsec-ch:macro-random}
Examples of random mechanisms are the ALOHA and CSMA techniques. Ideally, in a random mechanism, a node should sense the medium before accessing it. Since a large bandwidth is available, the chance of collisions is low; therefore, random access is also used together with message confirmation strategies. However, collisions and interference cannot be ignored completely, as many users may access the same medium and transfer large volumes of data, which could generate collisions between nodes (cf. 
Section~\\ref{sec-req:maclayer}). The collisions avoidance schemes and recovery from collision schemes are very essential. The delay, on the other hand, can be minimized due to random access, however, further research is required for the collisions and delay parameters trade-off. \n\\textit{CSMA based: } \\saim{In carrier sensing based channel access schemes, control packets are exchanged before data transmission to access channel. The high message overhead and directional antennas can create problems. To reduce the message overhead, in~, a one-way handshake based channel access scheme is proposed.} A node in transmission mode listens for messages from other nodes until one is received. As the directional antennas are used, the antenna facing problem can occur, however, it is assumed that the nodes know each other's position. To address the antenna facing problem, the work is extended in~ by focusing on the use of highly directional antennas to overcome high path loss. In~, the relaying distance is studied that maximizes the throughput by considering the cross-layer effects between the channel, the antenna and the communication layers. This work also focuses on control message exchange to establish nodes association and follows the random channel access, as in~. Although, these schemes are shown as workable, but are not considering the high node density, message overhead and unique Terahertz band features discussed in Section~\\ref{sec-req:features}. \n\\textit{CSMA with multiple radios or hybrid system: } \\saim{The limited power of transceiver and high path loss limits the transmission distance as discussed in Section~\\ref{sec-req:features} and can also increase the channel access problem. Therefore, requires directional antennas with beamforming both at the transmission and reception. The beams need to aligned to establish a link and to access a channel before transmitting data packets (cf. Section~\\ref{sec-req:phylayer}). 
Beam alignment takes time as the antenna sweeps to discover neighbors. Therefore, some works, such as~, use multiple radios to separate initial access from data transmission. These approaches can increase message overhead and antenna switching delay at the cost of additional radios. The channel is accessed by sending request/clear-to-send (RTS/CTS) packets that include node positions. However, if nodes are mobile, the discovery phase must be repeated. In~, instead of sending a clear-to-send packet, the receiver estimates the angle of arrival and sends a TTS packet to the transmitter. The transmitter can then switch and adjust its directional antenna to point towards the receiver antenna and start the data transmission. Although the message overhead is shown to be reduced, the uncertainty of packet loss during the user association phase is not considered\blue{. T}he difference is shown in Figure~\ref{fig:tabmra}, in which one scheme uses four transmissions until antenna direction alignment and the other uses two. The high path and absorption losses, which can cause packet loss, are also not discussed in these works. }
\mage{The works in~ use random channel access with a one-way handshake to reduce the control message overhead. However, high node density can cause collision problems that are not considered in these works. Further, synchronization can be a problem when directional antennas are used. To solve the synchronization issues, the works in~ use multi-band antennas with lower frequency bands and Terahertz bands. Although the synchronization problem can be solved, antenna switching and alignment can increase the delay. 
}
\begin{figure}[htbp]
\centering
\includegraphics[width=3.2in,height=2.2in]{figures/mra-tab-mac.png}
\caption{Random channel access and node association with antenna direction alignment: the difference between the TAB-MAC~\protect and MRA-MAC~\protect protocols.}
\label{fig:tabmra}
\end{figure}
\label{subsubsec-ch:macro-scheduled}
\saim{In distributed networks, assigning schedules to Terahertz nodes is an NP-hard problem~. Until device technology matures enough to allow non-LOS communication, scheduled channel access can enhance network performance in the presence of the above-mentioned constraints. In scheduled access, each node is assigned a particular timeslot. Variations of scheduled access, such as FTDMA and TDMA based channel access mechanisms for Terahertz, are discussed below. }
\textit{FTDMA: } \saim{An FTDMA based technique is used in~, in which the available frequency band is divided into sub-bands that are assigned frequency slot numbers, and each frequency is further divided into timeslots. The frequencies used by a user over time can be represented by a sequence $S^k$. To avoid jamming, different sequences or transmission strategies are adopted by each user; for $n$ users transmitting at the same time, the sequences used must be mutually orthogonal. 
Performance is shown to improve in terms of security and throughput; however, unique Terahertz band features such as path loss and noise are not considered. }
\textit{TDMA: } \saim{A TDMA based channel access scheme is used in~ to avoid contention for access. In~, a fully directional MAC protocol for the Terahertz network is presented, which relies on pulse-level beam-switching and energy control. A MAC frame structure is presented, composed of a POLL period, a Downlink (DL) period and an Uplink (UL) period. In the POLL period, the AP learns the traffic demands of the users and schedules the DL/UL transmissions. In the DL and UL periods, each user is assigned a separate timeslot to access the channel. }
\textit{TDMA with Multiple radios: } \saim{In the Terahertz band, narrow directional beams are required to enhance the transmission distance, as discussed in Section~\ref{sec-req:phylayer}, which increases the delay of initial access, handovers and beam tracking. TDMA based approaches are well suited to assigning schedules for beam alignment and channel access, which also requires synchronization among nodes~. The 2.4 GHz band and mmWave with non-LOS can be used to achieve initial synchronization and beam alignment, while the Terahertz band is used to transfer data. In this way, the next time a node enters communication range or requires data, the beams can be aligned in advance for seamless communication. }
A TDMA based channel access scheme is used in~, in which an SDN based controller (SDNC) switches between the mmWave and Terahertz bands for high-bandwidth vehicular data transfer. An optimal time-division procedure at the SDN controller for scheduling multiple vehicles accessing a given small cell tower is also given. 
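A toy, controller-side version of such time-division scheduling might look as follows; the slot length, peak rates, distance threshold and the greedy nearest-first rule are all invented for illustration and are not taken from the cited work.

```python
# Toy sketch of an SDN-style controller assigning one vehicle per timeslot
# under a small cell tower, choosing a band per slot from the vehicle's
# distance (shorter links get the higher-rate band). All numbers invented.

D_TH = 10.0                              # assumed switch-over distance (m)
RATE = {"thz": 100e9, "mmwave": 10e9}    # assumed peak rates (bit/s)
SLOT = 1e-3                              # slot duration (s)

def pick_band(distance):
    return "thz" if distance < D_TH else "mmwave"

def schedule(vehicle_distances, n_slots):
    """Greedy: serve the closest unserved vehicle first (it yields the
    most bits per slot), cycling if slots remain; at least one vehicle
    is scheduled in every slot."""
    order = sorted(vehicle_distances)
    plan, total_bits = [], 0.0
    for s in range(n_slots):
        d = order[s % len(order)]
        band = pick_band(d)
        plan.append((s, d, band))
        total_bits += RATE[band] * SLOT
    return plan, total_bits

plan, bits = schedule([4.2, 25.0, 8.7], n_slots=3)
bands = [band for _, _, band in plan]
assert bands == ["thz", "thz", "mmwave"]   # far vehicle falls back to mmWave
```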
The objective is to maximize the bits exchanged between the cell tower and the vehicles, under the condition that at least one car is scheduled in each timeslot while considering the distance. 
For link switching, it is proposed to use the Terahertz band whenever the distance between a vehicle and the cell tower is less than $d_{th}$, and mmWave otherwise. Another work discussing hybrid usage, switching between the mmWave, $\mu$W and Terahertz bands, is given in~. In this work, a higher-capacity link such as the Terahertz band is used for data transfer and mmWave for ACK transfer. For error recovery, stop-and-wait is followed, with data transmitted over the Terahertz band and ACKs over the mmWave band. However, this alternating band usage can introduce excessive overhead and a higher delay for receiving an ACK, and also introduces beamforming overhead, as the communication must be directional in the Terahertz band. 
A memory assisted angular division multiplexing MAC protocol (MA-ADM) is proposed in~ for centralized architectures, in which an access point is responsible for coordinating and scheduling transmissions to achieve fairness and efficiency. It uses omni-directional antennas to overcome beam alignment and discovery problems, and directional antennas for data transmission. The AP uses memory-guided message transmission: during the network association phase, a node establishes a connection with the AP using an angular slot, which is registered in memory. The AP then steers its narrow beam towards the registered angular slots for data transmission, avoiding empty scans of unregistered angular slots. The initial use of an omnidirectional antenna can limit the service range and thus affect the nodes' connection with the AP, which is not considered. In addition, the switching delay and cost are analyzed. 
To maintain fairness, transmission completion is verified by the reception of an ACK message; upon repeated failures, the service discovery phase is triggered to update the guided transmissions. Scheduling is considered for data transmission but is not discussed in detail.
\mage{In macro-scale networks\blue{,} synchronization with narrow directional beams is a challenge. Scheduled access can provide a contention-free network but increases delay. Synchronization with narrow-beam directional antennas can be used, but beam alignment requires more attention to reduce switching and transmission delay. Although works like~ follow a TDMA approach for channel access, efficient synchronization is still required. To solve the synchronization problem, multiple bands are used in~, but the overhead is high and specific Terahertz band features (cf. Section~\ref{sec-req:features}) are not considered. A memory-assisted approach that recalls beam directions from memory can be useful, as proposed in~, but it does not consider the Terahertz band features. }
\label{subsubsec-ch:macro-hybrid}
Random access mechanisms can cause collisions and higher delay when node density is high, but can enhance throughput; scheduled access schemes reduce the impact of collisions but can degrade throughput performance. Therefore, hybrid mechanisms are required to overcome the limitations of both schemes. 
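The combination can be pictured as a superframe in which devices first contend for slot requests in a contention access period (CAP) and then transmit collision-free in reserved TDMA slots (CTAP); the contention model and slot counts in the sketch below are invented for illustration.

```python
import random

# Sketch of a hybrid CSMA/TDMA superframe: devices contend in the CAP to
# request slots; the coordinator then grants collision-free TDMA slots in
# the CTAP, announced with the next beacon. Model and sizes are invented.

random.seed(7)
CAP_SLOTS = 8     # contention slots per superframe
CTAP_SLOTS = 8    # reserved data slots per superframe

def run_cap(requesters):
    """Each requester picks a random contention slot; only a slot holding
    exactly one request succeeds, the rest collide and retry later."""
    choices = {dev: random.randrange(CAP_SLOTS) for dev in requesters}
    return [dev for dev, slot in choices.items()
            if sum(1 for s in choices.values() if s == slot) == 1]

def run_superframe(requesters):
    winners = run_cap(requesters)
    # Grant CTAP slots in order, collision-free, up to CTAP_SLOTS grants.
    return {dev: slot for slot, dev in enumerate(winners[:CTAP_SLOTS])}

grants = run_superframe(["n%d" % i for i in range(5)])
assert len(set(grants.values())) == len(grants)  # no two devices share a slot
```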
A hybrid channel access mechanism is proposed in~. In hybrid mechanisms, CSMA and TDMA are combined: the channel time is divided into multiple superframes, each consisting of a beacon period, a channel time allocation period (CTAP) and a channel access period (CAP). CSMA/CA is used to compete for the channel in the CAP, in which a device that wants to transmit data sends a channel time request command to the piconet coordinator (PNC). The PNC broadcasts slot assignment information in the next beacon frame according to the request frames received. The devices synchronize themselves based on this information and obtain the CTAP slot allocation. In the CTAP, devices access the channel in TDMA mode, each transmitting data in its allocated slot. An on-demand retransmission mechanism with reserved slots based on channel conditions is also proposed to decrease the message overhead. This work extends~, which also uses a hybrid system and describes a high-throughput, low-delay Terahertz MAC protocol. Throughput is also shown to improve by updating the timeslot request numbers, with reduced latency. 
\mage{Hybrid channel access mechanisms can mitigate the limitations of both random and scheduled channel access schemes. The schemes in~ can improve performance, but poor network conditions are not considered. New schemes should consider the Terahertz band features (cf. Section~\ref{sec-req:features}) and design considerations (cf. 
Section~\\ref{sec-req:design-issues}).}", "id": "1c9cb02b-c5b4-40ac-86ba-3d4bfdc72785", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "cb9c4bfe-0c3c-4877-9d49-d5b78fc60d8c", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Channel Access Mechanism for Terahertz communications" ], [ "subsection", "Macro Scale Networks" ], [ "subsubsection", "Hybrid channel access mechanism" ] ], "subsections": [], "title": "Hybrid channel access mechanism" }, { "cite_extract_rate": 0, "cites": [], "content": "Traditional channel access schemes are based on continuous signals, which cannot be used in nanonetworks due to size and energy constraints. Instead, short pulses (100 fs) can be generated using simple devices (graphene antennas) and transmitted at the nanoscale. Therefore, novel channel access mechanisms are required for nanoscale networks. Mostly, scheduling-based channel access mechanisms are used to avoid the frequent messaging, and hence energy consumption, of contending for the channel. New mechanisms should consider the high network density, the limited energy availability and, in some cases, node mobility. Using short pulses reduces the collision probability; therefore, in high-density networks, random channel access mechanisms can be used by increasing the duration between two transmissions. Still, a collision avoidance mechanism should be considered at the MAC layer to avoid any possible collision and to reduce the message overhead. With an isotropic antenna, a node at nanoscale can communicate with more nodes in its communication range. At the macro scale, directional antennas are preferred due to the high path loss and for coverage enhancement, but they require antenna direction alignment and beam management. To solve the antenna alignment problem, different mechanisms are proposed in the existing literature, including searching the space and adding an additional tuning phase~.
However, these mechanisms can increase the link establishment delay and energy consumption. They can also cause hidden node or deafness problems (cf. Section~\\ref{sec-req:maclayer}), in which a node remains unaware of the existence of a nearby node due to limited coverage or antenna misalignment. \nPulse-based communication is mainly used in nanonetworks, with channel access scheduled by a nano controller. In pulse-based communication, short Terahertz pulses can be generated using specific devices to reach higher throughput. Time-division techniques for Terahertz MAC, in turn, give each node an equal share of channel access and avoid possible collisions. However, TDMA-based approaches require efficient and optimal scheduling schemes that consider antenna direction, beam alignment, multi-user interference, and Terahertz band features. To avoid tight synchronization requirements, energy-efficient asynchronous MAC protocols require further research. Channel access at the macro scale relies more on beam steering; therefore, an alignment phase should be considered at the MAC layer. The antenna pattern is assumed isotropic at nanoscale and directional for macro-scale networks, where the gain requirements are also high. Further, the high interference at nanoscale, due to the isotropic antenna pattern, requires efficient algorithms that consider the channel characteristics, antenna gain and transmission power.
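As a back-of-the-envelope illustration of why short pulses keep the collision probability low, the sketch below estimates by Monte Carlo the chance that two pulse streams with a random relative offset overlap. The spreading ratio `beta` (symbol period divided by pulse width) and all values are illustrative assumptions, not taken from any cited protocol.

```python
import random

def pulse_collision_prob(beta, trials=20000, seed=1):
    """Estimate the probability that two pulse-based (TS-OOK-like)
    streams with equal symbol rates and a uniformly random relative
    offset have overlapping pulses. beta = symbol period / pulse width.
    With equal rates, every pulse pair shares the same relative phase,
    so one offset draw per trial suffices. Illustrative sketch only."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        off = random.uniform(0.0, beta)    # offset in units of pulse width
        if off < 1.0 or off > beta - 1.0:  # pulses (partially) overlap
            hits += 1
    return hits / trials
```

Widening the gap between pulses (larger `beta`) drives the overlap probability towards `2/beta`, which is why random access remains viable even at high node density.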
New realistic measurement studies are also required for modeling of channel behavior for different indoor and outdoor applications at macro scale.", "id": "ce5f09c5-d57d-442f-9222-e5a55ae49079", "level": "subsection", "origin_cites_number": 0, "parent_id": "531de97c-bbd8-4f31-a238-f8cce366e033", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Channel Access Mechanism for Terahertz communications" ], [ "subsection", "Summary and discussion" ] ], "subsections": [], "title": "Summary and discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:communication}\n\\begin {figure*}\n\\centering\n\\begin{adjustbox}{width=\\linewidth}\n\\scriptsize\n\\tikzstyle{block} = [rectangle, draw, text width=4cm, text centered, rounded corners, minimum height=5em,fill=blue!20]\n\\tikzstyle{line} = [draw,thick, -latex']\n\\tikzstyle{cloud} = [draw, ellipse, text width=4.5cm, text centered]\n\\tikzstyle{edge from parent}=[->,thick,draw]\n\\begin{tikzpicture}[auto,edge from parent fork down]\n\\tikzstyle{level 1}=[sibling distance=60mm,level distance=18ex]\n\\tikzstyle{level 2}=[sibling distance=25mm,level distance=15ex]\n\\tikzstyle{level 3}=[sibling distance=30mm,level distance=34ex]\n\\tikzstyle{level 4}=[sibling distance=35mm,level distance=34ex]\n\\node [block,text width=10cm,minimum height=4em,,fill=black!30] (cst) {\\textbf{Transmitter and Receiver Initiated THz MAC Protocols \\\\ (Sect.\\ref{sec:communication})} }\n{\nchild{node [block,text width=5cm,fill=green!20] (opm) {\\textbf{Transmitter Initiated MAC protocols \\\\ (Sect.\\ref{subsec-comm:tx}) } }\n \t\tchild{node [block,text width=2.8cm,fill=orange!20,xshift=-0.1cm, yshift=-2.2cm] (pmt) {\\textbf{Nanoscale networks \\\\ (Sect.\\ref{subsubsec-comm:tx-nano}) } \\\\ \n PHLAME \\\\\n ES-AWARE \\\\\n\t\tTCN \\\\ \n\t\tLL-Modelling \\\\\n\t\tDMDS \\\\ \n\t\t2-state MAC \\\\\n\t\tCSMA-MAC \\\\ \n\t\tG-MAC \\\\\n\t\tAPIS \\\\ \n\t\tERR-CONTROL 
\\\\\n\t\tMGDI \\\\ \n\t\tPS-OPT \\\\\n\t\tNS-MAC \\\\ \n\t\tDYNAMIC-FH \\\\\n\t\tEESR-MAC \\\\ \n\t\tRBMP \\\\% multiple antennas \\\\\n\t\tSSA-MAC \\\\\n \t }}\n child{node [block,text width=2.8cm,fill=orange!20,xshift=0.5cm, yshift=-1.1cm] (pmt) {\\textbf{Macro scale networks \\\\ (Sect.\\ref{subsubsec-comm:tx-macro}) } \\\\\n IHTLD-MAC \\\\\n ATLR \\\\\n \t\tTRPLE \\\\\n \t\tSDNC \\\\\n \t\tB5G \\\\\n \t\tMA-ADM \\\\\n\t\tTAB-MAC \\\\\n\t\tMRA-MAC \\\\\n\t\tMAC-YUGI \\\\ \n } }\n }\nchild{node [block,text width=5cm,fill=green!20] (opm) {\\textbf{Receiver Initiated MAC Protocols \\\\ (Sect.\\ref{subsec-comm:rx}) }}\n\t\tchild{node [block,text width=2.8cm,fill=orange!20,xshift=0.2cm, yshift = -0.4cm] (pmt) {\\textbf{Nanoscale networks \\\\ (Sect.\\ref{subsubsec-comm:rx-nano}) } \\\\\n\t\tDRIH-MAC \\\\\n\t\tMDP \\\\\t\t\n\t\tRIH-MAC \\\\\n\t\tLLsynch \\\\\t\n\t\t}}\n\t\tchild{node [block,text width=2.8cm,fill=orange!20,xshift=0.85cm, yshift = -0.1cm] (pmt) {\\textbf{Macro scale networks \\\\ (Sect.\\ref{subsubsec-comm:rx-nano}) } \\\\\n\t\tOPT-RS \\\\\n\t\tLLsynch \\\\\t\n\t\t} } \n}\n};\n\\end{tikzpicture}\n\\end{adjustbox}\n\\caption {Classifications of Terahertz MAC protocols based on Transmitter and Receiver initiated communications.}\n\\label{fig:communications}\n\\end{figure*}\n\\begin{figure}\n \\centering\n \\subfloat[Tx initiated communication]{{\\includegraphics[width=2.4in,height=1.3in]{figures/TIHS.jpg} }\\label{fig:TIHS}}\\\\\n \\qquad\n \\subfloat[Rx initiated communication]{{\\includegraphics[width=2.4in,height=1.3in]{figures/RIHS.jpg} }\\label{fig:RIHS}}\n \\caption{Message transmission flow for handshake in Terahertz MAC protocols for link establishment. 
a) Transmitter-initiated handshake, in which the transmitter requires confirmation of its sent packet from the receiver before starting data transmission; b) Receiver-initiated handshake, in which the receiver initiates communication when it requires some information or has enough energy to receive a message, mostly used in nano communication networks.}\n \\label{fig:txrx-handshake}\n\\end{figure}\n\\saim{Establishing a link is an essential step before communication can start: it is used to achieve synchronization and to exchange the information needed to set up the network, such as neighbor information, physical-layer parameters and beam alignment (cf. Section~\\ref{sec-req:maclayer}). Handshake mechanisms should be carefully designed to reduce the link establishment delay while considering energy efficiency. Two kinds of handshake mechanisms are generally followed in Terahertz communication to establish links among nodes: receiver-initiated and transmitter-initiated communication. In particular, receiver-initiated MAC aims at reducing the number of transmissions in resource-constrained nano and macro-scale networks, whereas transmitter-initiated communication focuses on the performance efficiency of the network in a traditional way. Typically, directional antennas are used for transmitter-initiated communication in TCNs due to the narrow beam requirement and distance-dependent bandwidth~. Other than directional antenna usage, some proposals use multiple antennas to establish initial coordination between multiple nodes.
The solutions so far on receiver- and transmitter-initiated coordination protocols are discussed below and are shown in Figure~\\ref{fig:communications}.}", "id": "c64882d2-b2f6-454b-8909-b35d498b09e0", "level": "section", "origin_cites_number": 0, "parent_id": "068ac4ba-0bb0-453c-9e5f-1071f02038e9", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Transmitter and receiver initiated Terahertz MAC protocols" ] ], "subsections": [ "6fbccc10-d4a8-4b32-b7f5-4e10f0a54d91", "7e7606a7-ddfa-42ca-b701-fe117a36a7c4", "25753c19-3f1c-4f9e-8a8a-0afcb9dbe3c8" ], "title": "Transmitter and receiver initiated Terahertz MAC protocols" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec-comm:tx}\nTraditionally, the transmitter is mainly responsible for link establishment, data transmission and synchronization of node parameters such as scheduling times and channel information. Most Terahertz MAC protocols follow transmitter-initiated communication due to its simplicity and distributed nature. However, the distance-dependent behavior of the Terahertz band caused by absorption and path loss, the use of directional antennas, and the support of high bandwidth and throughput increase the challenges. These challenges include the antenna facing problem, which arises when the transmitter and receiver are initially unaware of each other's position and antenna direction; the hidden node problem; and the reliability of communication, i.e., packets lost due to path loss or collision. A general example of transmitter-initiated communication is shown in Figure~\\ref{fig:txrx-handshake}: a transmitter that has information to send transmits a packet, the receiver that receives it replies with an ACK or confirmation, and the sender can then send the data packet.
The transmitter and receiver can agree on common physical and MAC layer parameters to reduce the complexity of communication and the delay.", "id": "6fbccc10-d4a8-4b32-b7f5-4e10f0a54d91", "level": "subsection", "origin_cites_number": 0, "parent_id": "c64882d2-b2f6-454b-8909-b35d498b09e0", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Transmitter and receiver initiated Terahertz MAC protocols" ], [ "subsection", "Transmitter initiated MAC protocols" ] ], "subsections": [ "6028a858-5a85-4526-a587-277543b725d5", "d0f5528c-ded7-4f53-ae60-420f13ac01cc" ], "title": "Transmitter initiated MAC protocols" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsubsec-comm:tx-nano}\nIn nanonetworks, a nano controller is mostly used to forward and collect data to/from nanodevices in a centralized network. Transmitter-initiated communication is mostly used to allow nodes to transmit whenever they have data to send: the node that has data initiates the communication and performs the handshake process. \nA transmitter-initiated communication scheme is proposed in~ for nanonetworks, which is built on top of RD TS-OOK and takes advantage of a low-weight channel coding scheme. The main aim is to let the transmitter and receiver negotiate the communication parameters and channel coding scheme so as to minimize interference and maximize the probability of correctly decoding the received information. Communication is established by a node sending a Transmission Request (TR) that carries this information; the node that receives it agrees to the communication parameters and replies with a Transmission Confirmation (TC) message as an ACK. The TR contains the synchronization trailer, the transmission ID, the packet ID, the transmitting data symbol rate and an error detecting code.
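The TR/TC exchange described above can be sketched as a tiny model. The per-message delivery probability, retry limit and return values below are illustrative assumptions for exposition, not parameters of the cited scheme.

```python
import random

def tr_tc_handshake(p_success, max_retries=3, rng=random.random):
    """Toy model of a transmitter-initiated handshake: the sender emits a
    Transmission Request (TR); if it arrives, the receiver replies with a
    Transmission Confirmation (TC) carrying the agreed parameters.
    Each message is delivered independently with probability p_success.
    Returns (established, messages_sent)."""
    msgs = 0
    for _ in range(max_retries):
        msgs += 1                  # TR sent
        if rng() >= p_success:
            continue               # TR lost: sender times out and retries
        msgs += 1                  # TC sent
        if rng() < p_success:
            return True, msgs      # TR and TC both delivered
        # TC lost: sender times out and retries
    return False, msgs
```

Note that a lost TC still costs two messages per attempt, one reason the handshake overhead weighs heavily on energy-constrained nanodevices.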
\\blue{Although it offers benefits in terms of delay and throughput, it also has a few limitations, including the overhead of the handshake process, which limits Terahertz communication performance, and the limited computational power of nanodevices, which makes finding optimal communication parameters difficult.} In~, an energy and spectrum aware MAC protocol is proposed, in which the computational load is shifted towards the nano controllers. The works which also consider energy harvesting and use Tx-initiated communication include~. Other works on Tx-initiated communication are mentioned in Figure~\\ref{fig:communications}.\n\\mage{Transmitter-initiated communication is mainly used in works that follow a distributed architecture and aim at maximum throughput~ rather than energy efficiency. However, Tx-initiated communication can increase the control message overhead. As nanonetworks are limited in computational capacity, handshake mechanisms are required that balance energy harvesting and consumption with minimum link establishment delay. The works which also consider energy are~.
A Tx-initiated communication is also followed in~, which presents an energy model and also considers high node density.}", "id": "6028a858-5a85-4526-a587-277543b725d5", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "6fbccc10-d4a8-4b32-b7f5-4e10f0a54d91", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Transmitter and receiver initiated Terahertz MAC protocols" ], [ "subsection", "Transmitter initiated MAC protocols" ], [ "subsubsection", "Nanoscale networks" ] ], "subsections": [], "title": "Nanoscale networks" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsubsec-comm:tx-macro}\nFor a macro-scale network with directional antennas, steerable narrow beams are essential to overcome the high path loss in the Terahertz band and to extend the communication range. \\saim{Tx-initiated communication is used in many macro-scale applications where the main goal is to increase network throughput.} A Terahertz communication model and an autonomous relaying mechanism for vehicular communication are presented in~, considering channel capacity, autonomous relaying and network establishment. An antenna switching mechanism is proposed in~ for vehicular and small cell deployments in the Terahertz band, in which lower frequency bands are used to achieve synchronization. \nThe approaches in~ use multiple antennas to separate the signaling and data transmission parts. \\saim{The 2.4 GHz band is proposed for signaling and antenna alignment using an omnidirectional antenna, overcoming the antenna facing problem. The initial access and control information exchange are performed using the IEEE 802.11 RTS/CTS mechanism, while directional antennas are used for data transmission, which also follows Tx-initiated communication between the Terahertz nodes}.
Despite the high path loss, the directional antennas in these works are shown to be beneficial in reaching distances beyond 1 meter. In~, pulse-level beam switching is used with energy control, but Terahertz band features are not considered.\nA CSMA/CA-based channel access mechanism with an on-demand retransmission mechanism is used in~ for a TPAN, considering poor link conditions. Beacon frames are used by the piconet coordinator to provide information such as channel access, channel slot assignment, and synchronization information. It is shown that network throughput decreases when the channel conditions are poor, and the proposed MAC protocol performs better than IEEE 802.15.3c and ES-MAC~.\n\\blue{Transmitter-initiated protocols are widely used as they incur less complexity and favor a distributed design; however, they face several challenges due to the use of directional antennas. On one hand, directional antennas increase the transmission distance; on the other hand, they introduce the antenna facing problem.} In outdoor scenarios where mobility is involved, frequent beam switching occurs, which requires novel mechanisms to minimize the synchronization and antenna alignment overhead. WiFi technology is proposed in works like~ to reduce the control message overhead of link establishment and to assist antenna alignment, which overcomes the facing problem.
However, this introduces a high antenna switching overhead and requires an efficient scheduling mechanism for seamless control information and data dissemination.", "id": "d0f5528c-ded7-4f53-ae60-420f13ac01cc", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "6fbccc10-d4a8-4b32-b7f5-4e10f0a54d91", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Transmitter and receiver initiated Terahertz MAC protocols" ], [ "subsection", "Transmitter initiated MAC protocols" ], [ "subsubsection", "Macro scale networks" ] ], "subsections": [], "title": "Macro scale networks" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec-comm:rx}\nA general example of receiver-initiated communication is shown in Figure~\\ref{fig:txrx-handshake}, in which the receiver announces its existence and readiness to receive a packet from the sender. Rx-initiated communication is mainly used in nano and macro scale networks to save energy and reduce excess message overhead. Different existing solutions for both types of networks are discussed below.", "id": "7e7606a7-ddfa-42ca-b701-fe117a36a7c4", "level": "subsection", "origin_cites_number": 0, "parent_id": "c64882d2-b2f6-454b-8909-b35d498b09e0", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Transmitter and receiver initiated Terahertz MAC protocols" ], [ "subsection", "Receiver initiated MAC protocols" ] ], "subsections": [ "c1b965cf-5452-4ee6-ba73-519be2b36715", "a4590751-f333-41b4-b2fa-3b7d410f003e" ], "title": "Receiver initiated MAC protocols" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsubsec-comm:rx-nano}\nIn nanoscale networks, which are severely energy-constrained, every extra transmission or message exchange means more energy consumption. In nanonetworks, the amount of energy stored is often barely enough to transmit a single packet~.
A transmission remains unsuccessful when the receiver receives the request from the sender but has not acquired enough energy to send an ACK or data packet. Therefore, in receiver-initiated protocols, the receiver takes the initiative and first announces its energy status to all senders by sending a Request-to-Receive (RTR) packet. \\saim{Energy efficiency and harvesting are also discussed in Section~\\ref{sec-req:maclayer}.} \nReceiver-initiated communication is used in both centralized and distributed networks~. In centralized nanonetworks, a nano controller is mainly responsible for the major processing and decision making, as it is an energy-rich device. Since the receivers are usually assumed to generate their own energy, they harvest just enough to transmit a packet. Therefore, in solutions like~, the receiver starts the communication by informing the senders when it is ready to receive a packet, and the nodes can exchange information such as coding schemes, error rates, and schedules. This limited energy resource is one of the main reasons receiver-initiated communication is preferred in centralized networks. Problems occur when the receiver is still in the energy harvesting phase and senders start sending packets, which can result in packet loss; scheduling therefore becomes an essential part of such schemes. In~, a receiver-initiated communication model is presented for a centralized topology, in which the receiver announces an RTR packet to nearby nano nodes, which then send a DATA packet or ACK in response in a random-access manner with probability $p$ to establish a handshake. A distributed scheme is presented in~, in which scheduling and harvesting mechanisms work together to enhance the energy utilization of nanonetworks; the proposed scheme uses the receiver-initiated approach to achieve the handshake and schedules.
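The energy-gated RTR behavior described in this subsection can be illustrated with a toy slot-by-slot model: the receiver stays silent until it has harvested enough energy, and senders answer each RTR with probability p. All quantities (harvest rate, reception cost, slot counts) are illustrative assumptions, not values from any cited protocol.

```python
import random

def rtr_slots(harvest, rx_cost, n_slots, n_senders, p, seed=7):
    """Toy slot model of receiver-initiated access: the receiver harvests
    `harvest` energy units per slot and broadcasts a Request-to-Receive
    (RTR) only once it has at least `rx_cost` stored; each sender then
    answers with probability p, and a slot delivers a packet iff exactly
    one sender answers (two or more collide). Returns packets delivered."""
    random.seed(seed)
    energy, delivered = 0.0, 0
    for _ in range(n_slots):
        energy += harvest                 # harvesting phase
        if energy < rx_cost:
            continue                      # not ready: no RTR this slot
        answers = sum(random.random() < p for _ in range(n_senders))
        if answers == 1:
            delivered += 1                # clean reception
        energy -= rx_cost                 # RTR + reception consumed energy
    return delivered
```

With two always-answering senders, every RTR slot collides, which is why such schemes must tune p (or the schedule) to the expected node density.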
\n\\saim{Unsuccessful transmission-reception caused by the node's harvesting phase can increase delay and also cause a hidden node problem.} Therefore, new schemes are required to avoid the hidden node problem in a distributed environment using a receiver-initiated approach. A MAC protocol is discussed in~ in which an optimal energy consumption and allocation problem is formulated with the aim of maximizing the data rate. It is also shown that the amount of energy harvested in one timeslot may not be enough to transmit a packet; it can therefore take multiple timeslots to transmit a single packet when the harvesting rate is lower than the energy consumption rate.\n\\mage{In nanonetworks, receiver-initiated communication gives nodes the flexibility to decide when to receive and transmit. The message overhead in these networks can be reduced by following a one-way handshake, as in~, where energy harvesting is used; in~, the trade-off between energy harvesting and consumption is also discussed.}", "id": "c1b965cf-5452-4ee6-ba73-519be2b36715", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "7e7606a7-ddfa-42ca-b701-fe117a36a7c4", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Transmitter and receiver initiated Terahertz MAC protocols" ], [ "subsection", "Receiver initiated MAC protocols" ], [ "subsubsection", "Nanoscale networks" ] ], "subsections": [], "title": "Nanoscale networks" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsubsec-comm:rx-macro}\nIn these networks, the high path loss at Terahertz frequencies limits the achievable distance between Terahertz devices (cf. Section~\\ref{sec-req:features}). Tight synchronization between transmitter and receiver is also required to overcome the deafness problem~.
A receiver-initiated MAC protocol using directional antennas is discussed in~, which uses a sliding-window flow control mechanism with a one-way handshake that increases channel utilization. High-speed turning directional antennas are used to periodically sweep the space. The main objective is to prevent unnecessary transmissions when the receiver is unavailable due to the antenna facing problem. In this scheme, a node with sufficient resources broadcasts its current status in a CTS message, using a dynamically turning narrow beam that sweeps its entire surrounding space. The CTS frame contains the receiver's sliding window size. On the other side, a transmitter listens for a CTS frame from the intended receiver and then points its beam towards the receiver for the required period. The initial discovery of neighbor nodes is not considered in~. Further, due to bit errors and path losses \\saim{(cf. Section~\\ref{sec-req:phylayer})}, packet reception is not guaranteed. It is also possible that multiple transmitters point at the same receiver at the same time, which can result in collisions. A MAC protocol focused on a cross-layer analysis of relaying strategies is discussed in~, with a distance-dependent throughput maximization analysis considering antenna, physical, link, and network layer effects.
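A back-of-the-envelope view of the sweep-plus-sliding-window idea above: the larger the advertised window, the more the cost of sweeping CTS beams is amortized over data frames. The formula and parameters below are an illustrative simplification, not the analysis of the cited work.

```python
def sweep_utilization(n_sectors, dwell_slots, window, frame_slots=1):
    """Crude utilization estimate for a receiver whose narrow beam sweeps
    n_sectors sectors, dwelling dwell_slots per sector to broadcast a CTS
    that advertises a sliding window of `window` frames; the transmitter
    caught by the CTS then sends the whole window with no per-frame ACK
    (one-way handshake). Utilization = data time / total cycle time."""
    data = window * frame_slots
    cycle = n_sectors * dwell_slots + data
    return data / cycle
```

Under this simplification, utilization grows with the window size and shrinks as more sectors must be swept, matching the intuition that narrow beams trade discovery time for range.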
In particular, receiver-initiated communication is shown to outperform transmitter-initiated communication in~.", "id": "a4590751-f333-41b4-b2fa-3b7d410f003e", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "7e7606a7-ddfa-42ca-b701-fe117a36a7c4", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Transmitter and receiver initiated Terahertz MAC protocols" ], [ "subsection", "Receiver initiated MAC protocols" ], [ "subsubsection", "Macro scale networks" ] ], "subsections": [], "title": "Macro scale networks" }, { "cite_extract_rate": 0, "cites": [], "content": "Nanonetworks are networks with limited energy and resources, in which schemes that minimize energy consumption are generally preferred. Departing from the traditional approach, some receiver-initiated mechanisms are proposed in the literature, in which the receiver initiates the communication by announcing that it has sufficient energy to receive a packet. In transmitter-initiated communication, by contrast, the node that has information to send initiates the communication. Nanoscale networks are mostly centralized, with nano controllers managing the control and data transmissions. In this kind of network, scheduling the transmissions is the main requirement, as pulse-based rather than carrier-sensing-based communication is used. At higher network densities, collisions cannot be ignored, and an efficient mechanism is required to avoid collisions and minimize packet loss.\nReceiver-initiated communication is also used in macro scale networks. Although it minimizes the control message overhead, it increases complexity and is not preferable in distributed scenarios.
In a distributed environment, two nearby nodes performing the same operation, such as energy harvesting, can increase the delay and cause a hidden node problem. Therefore, efficient synchronization and scheduling mechanisms are required for these scenarios.", "id": "25753c19-3f1c-4f9e-8a8a-0afcb9dbe3c8", "level": "subsection", "origin_cites_number": 0, "parent_id": "c64882d2-b2f6-454b-8909-b35d498b09e0", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Transmitter and receiver initiated Terahertz MAC protocols" ], [ "subsection", "Summary and discussion" ] ], "subsections": [], "title": "Summary and discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:issues-challenges}\nDesigning an efficient Terahertz MAC protocol requires addressing several challenges. In this section, these challenges are presented along with future research directions.", "id": "fe299020-4e95-45fc-964a-5394cc33dc39", "level": "section", "origin_cites_number": 0, "parent_id": "068ac4ba-0bb0-453c-9e5f-1071f02038e9", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Challenges and Future Research Directions" ] ], "subsections": [ "3edadfd6-ca9f-4bbe-b797-53f40ca51eea", "42a90c8f-88c5-47b0-9320-0ea4839499d7", "988e71f7-da34-45d5-bcb3-a6234f0f4c56", "43e25cb4-9f6d-48df-94cc-1e4224851011" ], "title": "Challenges and Future Research Directions" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "3edadfd6-ca9f-4bbe-b797-53f40ca51eea", "level": "subsection", "origin_cites_number": 0, "parent_id": "fe299020-4e95-45fc-964a-5394cc33dc39", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "Terahertz communication network topologies" ] ], "subsections": [ "3d6e77b9-8eb5-42ff-95a7-c6aa4e6f4054",
"eef1c9ee-06cd-4091-818b-1abab6dc814f" ], "title": "Terahertz communication network topologies" }, { "cite_extract_rate": 0, "cites": [], "content": "\\saim{The challenges and future research directions for macro scale network topology design are discussed below.}\n\\saim{\\textit{Static and mobile scenarios:} } \\saim{In fixed point-to-point connectivity, it is fairly easy to maintain stable links. However, when the nodes are mobile, the network topology changes frequently, in which frequent neighbor discovery and link establishment will be required. To move further, point-to-multipoint connectivity requires more attention in terms of MAC protocols design. A Data Centre environment is a good example, in which establishing point-to-point and multipoint among the inter and intra rack communication to effectively replace wired connections is still a challenge.}\n\\saim{The Terahertz signal is highly sensitive to the outdoor environment, for example, weather fluctuation, and the presence of blockers between two nodes can affect the communication link. MAC layer for such a system should include fast link re-establishment mechanism and alignment operation in case of sudden miss-alignment between the two nodes by giving alternative paths for transmission.} \n\\saim{\\textit{Blockage and beam steering:} } \\saim{Antenna arrays and reflectors can be used to avoid the chances of blockage and to find a good propagation path. It is critical to find a good beam pointing mechanism to support user mobility with fast and accurate beam tracking, both in LOS and non-LOS conditions~. A mechanically steerable antenna is demonstrated in for 300 GHz band~. Overall, a faster electronic beam steering is more practical and required. }\n\\saim{Unlike lower frequency band such as microwave, Terahertz is very sensitive to blockers. Efficient techniques are required to reduce the effects of blocking in the system. 
Path diversity, using beam diversity and MIMO systems, can be one possible solution. The challenge is to track the presence of blockers and to keep the link available when direct LOS is absent. The integration of Terahertz intelligent surfaces is worth considering: they enable user tracking and add path diversity to the propagation scenario, so users can be reached even in severe blocking scenarios.}\n\\saim{\\textit{Data Centre Geometry:}} \\saim{High-capacity Terahertz links can help in re-designing the Data Centre geometry by moving the racks closer together in a cylindrical architecture. Future work demands flexible Data Centre architectures that support scalability and energy-efficient design while considering the Terahertz features and limitations. This would reduce the deployment delay and cost of future data centres. Full beam scanning can be used for initial access to serve a 360-degree search space; antenna sectors can also be used for a 90-degree search space and communication, at the cost of additional hardware. }\n\\saim{\\textit{Coverage: }} \\saim{The transmission distance achieved so far for Terahertz communication is still limited to a few meters~. Extending this range to 10 m or more in indoor scenarios, using low-power transmission devices and efficient communication protocols, is an active field of research for B5G networks. As a future topic, Terahertz wireless and fiber connectivity for high-data-rate backhaul and fronthaul communication is also gaining attention (cf. Section \\ref{sec:applications}). Directional antennas with narrow beams are encouraged in Terahertz networks to extend the transmission distance. While such directional antennas can limit interference and losses (cf. Section \\ref{sec-req:features}), they must use optimal schedules and tight synchronization.
Phased array and MIMO techniques can be used to further extend the transmission distance.}\nTerahertz communication is characterized by a small coverage zone, as the transmitted signal experiences high path attenuation. A high-gain antenna can be deployed to increase the network range; however, the communication range is still low compared to lower frequency bands. To extend the coverage zone, it is possible to add functionality to nodes at the edge of the coverage area so that they play the role of coverage extension and coordination with nodes out of the coverage zone. Coverage extension is challenging, as more interference occurs and additional time is required for out-of-range node discovery; some related work on coverage extension can be found in~. From the MAC layer point of view, nodes at the edge should coordinate to discover and synchronize with out-of-range nodes. \\blue{A second possibility to further extend the coverage area at macro scale is the deployment of intelligent surfaces: a node in the inner zone near an intelligent reflector can send messages to the surface to be reflected out of the inner zone~.} The MAC layer should be aware of the new devices required for coverage extension. Edge nodes are selected by the node controller for coverage extension if their received power is lower than a threshold and if they are located near an intelligent surface.
This concept is shown in Figure~\\ref{fig:coverage}.\n\\begin{figure}\n\\centering\n\\includegraphics[width=3in,height=1.6in]{figures/coverage.png}\n\\caption{Coverage extension in Terahertz networks.}\n\\label{fig:coverage}\n\\end{figure}", "id": "3d6e77b9-8eb5-42ff-95a7-c6aa4e6f4054", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "3edadfd6-ca9f-4bbe-b797-53f40ca51eea", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "Terahertz communication network topologies" ], [ "subsubsection", "Macro scale network" ] ], "subsections": [], "title": "Macro scale network" }, { "cite_extract_rate": 0, "cites": [], "content": "\\saim{The challenges and future research directions for nanoscale network topology design are discussed below.}\n\\saim{\\textit{Node density: }} In the nano-communication network, thousands of nodes can exist in very small regions which are generally limited in capabilities. These networks require efficient MAC layer techniques to meet network requirements such as high throughput, reliability, and low energy consumption. Models related to node density are used in literature, for example interferers are linearly distributed around the receiver~. The density distribution is used to evaluate the interference in the system and assess the limitation of the network\\blue{. Moreover,} non-homogeneous density of nodes can affect the MAC choice, then MAC protocols should be able to track the fast change in node density in each part of the network. \nWith limited energy storage capacity, establishing links and data transmission will require more energy to facilitate larger number of nodes. The nodes mobility in a distributed environment can also change the topology of the nodes frequently which needs to be handled using new MAC layer protocols. 
In different applications, like agricultural or in-body health monitoring, node scalability is required (cf. \\blue{Section} \\ref{subsec-app:nano}). Managing node discovery and link stability are still challenges when dealing with limited energy resources at nanoscale. Models that quantify resources such as time, storage and amount of communication are required to reduce the computational complexity in centralized as well as distributed networks.\n\\saim{\\textit{Energy harvesting and transmission trade-off: }} The limited energy capacity allows either energy harvesting or transmission at any one time, which must be scheduled efficiently. When a node \\blue{is} busy harvesting\\blue{,} it cannot transmit, and when it transmits\\blue{,} it cannot generate energy for subsequent transmissions\\blue{,} which can affect the overall latency and throughput of a nanonetwork. Therefore, efficient schedules are required to improve the overall throughput and latency while managing the harvesting and transmissions. \n\\saim{\\textit{Computational complexity: }} \\saim{Nano devices have limited computational capacity and power management, and so require simple communication protocols. More functionality can be provided by optimizing the nano device's behavior. Therefore, to manage the nanodevices and process complex computations externally, a hierarchical architecture is required, with decisions taken at devices with higher computational capacity and more powerful communication layers~. Gathering data in a distributed network architecture to perform complex operations is a challenging task. Therefore, new mechanisms are required for synchronization and coordination in distributed networks. Further, nanonetworks can also be combined with current network technologies like software-defined networks, IoT and virtual networks to solve complex problems at those technologies and higher layers~. 
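The harvesting/transmission trade-off noted above can be illustrated with a toy slot schedule. This is a hypothetical sketch (all names and energy values are illustrative): the node greedily transmits whenever its stored energy covers one packet, and harvests otherwise.

```python
# Toy harvest/transmit scheduler for a nanonode: in each slot the node
# either transmits (if stored energy covers one packet) or harvests.
# harvest_per_slot, tx_cost and capacity are illustrative units.
def schedule(slots, harvest_per_slot, tx_cost, capacity):
    energy, actions = 0.0, []
    for _ in range(slots):
        if energy >= tx_cost:
            energy -= tx_cost
            actions.append("tx")
        else:
            energy = min(capacity, energy + harvest_per_slot)
            actions.append("harvest")
    return actions

# Harvesting 1 unit/slot against a 2-unit packet cost forces two
# harvesting slots per transmission, i.e. throughput of 1/3:
print(schedule(6, 1.0, 2.0, 5.0))
# -> ['harvest', 'harvest', 'tx', 'harvest', 'harvest', 'tx']
```

The example shows concretely how the harvest rate caps achievable throughput, which is the scheduling tension the text describes.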
}\n\\saim{\\textit{High capacity controller: }} The nanonetwork topology can be dynamic at times, due to dynamic channel conditions which can affect the reliable transmissions. In response, the nanodevices in a network might not share the same topology information. \\blue{Further, due to limited memory, storage, and computational processing capabilities, the nanodevice also faces difficulty in storing the routing tables and heavy algorithms.} To solve this, a controller with software-defined functionality can be utilized to do the extra computations, which can also change and reconfigure the behavior of nanonetwork.", "id": "eef1c9ee-06cd-4091-818b-1abab6dc814f", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "3edadfd6-ca9f-4bbe-b797-53f40ca51eea", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "Terahertz communication network topologies" ], [ "subsubsection", "Nanoscale network" ] ], "subsections": [], "title": "Nanoscale network" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "42a90c8f-88c5-47b0-9320-0ea4839499d7", "level": "subsection", "origin_cites_number": 0, "parent_id": "fe299020-4e95-45fc-964a-5394cc33dc39", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "Terahertz channel access mechanisms" ] ], "subsections": [ "b061e4a2-6d46-4130-99e5-04bc4399c5ba", "f34fd3c5-9e36-4364-a27d-9e46f6ae2476" ], "title": "Terahertz channel access mechanisms" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, challenges and future research directions are mentioned for efficient Terahertz channel access mechanisms.\n\\saim{\\textit{Hybrid protocols:} } \\saim{The scheduled channel access techniques can reduce the collisions, whereas, random access techniques can 
enhance the throughput and latency. Future research should target hybrid mechanisms that improve fairness and throughput among users using a combination of both techniques. }\n\\saim{\\textit{Adaptive band selection: }} \\saim{The Terahertz transmission bands can be affected by high absorption and path loss. Multiple transmission windows are available in the Terahertz band with different attenuation values. Efficient mechanisms are required to exploit this distance- and bandwidth-dependent Terahertz band allocation. }\n\\saim{\\textit{Multiple band usage:}} \\saim{To solve synchronization and initial access, different bands like microwave, mmWave, and Terahertz are being used together. The microwave band provides wider coverage, which can be used for initial access and antenna alignment. However, deciding when to switch among these bands requires new methods that reduce the switching overhead and delay.}\n\\saim{\\textit{Multiple antennas and beam usage:}} \\saim{MIMO antennas can be used to mitigate the Terahertz channel effects (cf. Section \\ref{sec-req:features}) and can improve path diversity and antenna gain, which improves the overall coverage. } \n\\saim{\\textit{High order modulation: }} New waveforms, and modulation and coding schemes for the Terahertz system are also required to improve the data rate and quality of service for beyond 5G communications~. The use of OOK mapped onto ultra-short pulses can, on the one hand, reduce transceiver complexity; however, it also introduces challenges in antenna array design for ultra-broadband operation. The complexity of generating and processing high order modulated signals at the transmitter and receiver side using Terahertz devices is a challenge. \\blue{Under good channel conditions, throughput can be improved by using high order modulation techniques like 64 QAM~}. Further, the decoding complexity and delay at the receiver also need attention. 
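The channel-dependent choice of modulation order mentioned above can be sketched as a simple lookup: pick the highest-order scheme whose minimum SNR requirement the current channel meets, falling back to OOK otherwise. The SNR thresholds below are illustrative assumptions, not measured values.

```python
# Hypothetical channel-aware modulation selection for a Terahertz link.
# Thresholds (min_snr_db) are made-up placeholders.
SCHEMES = [
    # (name, bits_per_symbol, min_snr_db)
    ("OOK",   1,  0.0),
    ("QPSK",  2,  8.0),
    ("16QAM", 4, 15.0),
    ("64QAM", 6, 22.0),
]

def pick_scheme(snr_db):
    """Highest-order scheme usable at the given SNR, or None."""
    usable = [s for s in SCHEMES if snr_db >= s[2]]
    return max(usable, key=lambda s: s[1]) if usable else None

print(pick_scheme(25.0)[0])   # -> 64QAM under good channel conditions
print(pick_scheme(10.0)[0])   # -> QPSK
```

A real adaptive scheme would also weigh the receiver-side decoding complexity and delay that the text highlights, not only the SNR margin.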
\n\\saim{\\textit{Beam management: }} Fast beam scanning and steering mechanisms are required to obtain channel state information to improve the channel usage decisions at macro-scale networks. The random channel access can solve the problem of antenna alignment, however, the range becomes short due to high path loss. An efficient TDMA based scheduling mechanism is required for both centralized and distributed networks to avoid multi-user interference.", "id": "b061e4a2-6d46-4130-99e5-04bc4399c5ba", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "42a90c8f-88c5-47b0-9320-0ea4839499d7", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "Terahertz channel access mechanisms" ], [ "subsubsection", "Macro scale network" ] ], "subsections": [], "title": "Macro scale network" }, { "cite_extract_rate": 0, "cites": [], "content": "In nanonetworks, due to shorter distance and low path loss\\blue{,} huge bandwidth can be used which can result in very high data rate, short transmission time with low collision probability~. The MAC protocol role is very important in regulating the link behavior, arranging the channel access and coordinating the transmission in a distributed environment. \n\\saim{\\textit{Random access and scheduling: }} \\saim{At the nanoscale, huge bandwidth availability and pulse-based communication, reduces the collision probability. The random access techniques can take benefit out of it but require efficient scheduling of transmission with a larger duration between packet transmissions. 
However, for high node density scenario, this needs to be carefully designed due to energy limitations~.}\n\\textit{Node density and error control: } Although the actual Terahertz nanoscale network is characterized by a basic MAC layer, it is possible to implement efficient error control protocols to reduce packet retransmissions and packet waiting time. Optimized mechanisms are required overall to run communication tasks at the nanoscale, due to energy limitations and to support a large number of nanodevices. Efficient scheduling mechanisms for nanonetworks are required overall to balance the energy harvesting and channel access for data transmissions. They need to be further optimized, to enable fast communication among a larger number of nanodevices. Further, additional functionalities such as scheduling transmissions are required between inter-cluster and TDMA based intracluster communication to avoid collisions and to increase throughput~.", "id": "f34fd3c5-9e36-4364-a27d-9e46f6ae2476", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "42a90c8f-88c5-47b0-9320-0ea4839499d7", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "Terahertz channel access mechanisms" ], [ "subsubsection", "Nanoscale network" ] ], "subsections": [], "title": "Nanoscale network" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "988e71f7-da34-45d5-bcb3-a6234f0f4c56", "level": "subsection", "origin_cites_number": 0, "parent_id": "fe299020-4e95-45fc-964a-5394cc33dc39", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "Terahertz Receiver and Transmitter initiated communication" ] ], "subsections": [ "f87ea341-47db-4a58-91c6-fa402dd056ed", "6a010400-486b-48a8-8046-62471b1d9e62" ], "title": "Terahertz Receiver 
and Transmitter initiated communication" }, { "cite_extract_rate": 0, "cites": [], "content": "\\blue{In this section, challenges and future research directions are mentioned for Transmitter and Receiver initiated communication. }\n\\saim{\\textit{Message overhead:} } \\blue{T}he transmission of control messages in excess causes the message overhead problem, which can easily occur in directional Terahertz communication. Efficient techniques are required to reduce the message overhead, and different windows or bands can be used to utilize the channel resource efficiently. Although energy is not as tight a constraint as in nanonetworks, fast link establishment techniques, especially for distributed Terahertz networks and networks with high mobility (for example, small cells and vehicular communications), require novel solutions. Since the control packets transmitted to establish a link can cause high message overhead, new handshaking mechanisms are required to establish links with reduced message overhead and delay. \n\\saim{\\textit{Memory assisted communication:} } \\saim{Memory assisted communication can enhance the throughput and latency in a Terahertz network~. Node positions, neighbor tables, beam directions, and weights can be used in memory assisted communication to reduce the antenna alignment and training time. Relays can be selected by coordinating the neighbor information, and transmissions can be scheduled in advance while tracking each node's availability.}\n\\textit{Antenna direction and communication: } At macro scale, communication relies on directional antennas. To initiate communication, nodes should have enhanced MAC functionalities such as simultaneous sensing and beam steering capabilities to track the channel and node position jointly. Additional technology can assist in transmission initiation, such as RADAR and LIDAR for mobile nodes, and lower band technologies. 
Radar-based sensing is gaining attention as a way to solve the deafness problem in vehicle-to-vehicle communication by efficiently aligning the antenna direction during communication establishment~. A preamble injection has been shown useful in low SNR scenarios to avoid the deafness problem in~.", "id": "f87ea341-47db-4a58-91c6-fa402dd056ed", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "988e71f7-da34-45d5-bcb3-a6234f0f4c56", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "Terahertz Receiver and Transmitter initiated communication" ], [ "subsubsection", "Macro scale network" ] ], "subsections": [], "title": "Macro scale network" }, { "cite_extract_rate": 0, "cites": [], "content": "For nanoscale networks, nodes with higher energy harvesting capabilities can take the responsibility of link establishment and transmission initiation; however\\blue{,} designing sophisticated algorithms for rapid link initiation still remains an issue. Research is required on distributed MAC protocols for nanosensor networks to reduce message overhead with an energy control mechanism. Receiver initiated transmission suits the link establishment process of low power nanodevices due to its reduced message overhead, but it should consider the energy as well as the data size to be transferred. 
A combination of distributed communication in presence of a controller can help in managing the transmission schedules among nano devices~.", "id": "6a010400-486b-48a8-8046-62471b1d9e62", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "988e71f7-da34-45d5-bcb3-a6234f0f4c56", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "Terahertz Receiver and Transmitter initiated communication" ], [ "subsubsection", "Nanoscale network" ] ], "subsections": [], "title": "Nanoscale network" }, { "cite_extract_rate": 0, "cites": [], "content": "The Terahertz technology is capable of delivering high data rate traffic with an acceptable quality of services such as low delay, and minimized energy consumption. However, many challenges still exist and require further research and attention. Some of them are discussed below related to Terahertz MAC protocols in general.", "id": "43e25cb4-9f6d-48df-94cc-1e4224851011", "level": "subsection", "origin_cites_number": 0, "parent_id": "fe299020-4e95-45fc-964a-5394cc33dc39", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "General challenges and future research directions" ] ], "subsections": [ "d9e3523c-4285-4070-87d7-8723005d4a9e", "880b1d0c-45cd-48e5-bf94-b840c3a5a27c", "ddce99c2-d58e-45a9-9ecf-d12baa6f9ed5", "3de40484-90c9-4c82-92d7-f981a5c54258", "32f6f0b1-91d1-4991-839f-7b1590828914", "37c66c5e-c332-4eb4-8242-4e7c6fb2a6ca", "10ac6807-cec7-45d1-9579-85a804f0ebd3", "4bbb7cd4-efcb-4e49-8f19-f4b2be82ee81", "1c059f8e-fae2-4cf4-a49e-320a73677520", "7f2c25ce-ff93-4121-a171-e168206338f5", "8bb2ee56-96fa-401e-9797-47350157ada6", "93e7092b-39e4-4422-bc6a-c7babae07b0d" ], "title": "General challenges and future research directions" }, { "cite_extract_rate": 0, "cites": [], 
"content": "Although interference can be reduced by using highly directional antenna, it is not considered by many research works~. It can be considered for large network requiring high data rate connection per nodes, such as top of the rack data center network, where nodes should transmit all the time with high data rate and low delay. Interference for dense indoor scenarios should be deeply studied and interference model needs to be established. The MAC layer should be aware of interference in the channel to elaborate further the rapid scheduling and fast channel access and switching based on channel interference information. The new and dynamic channel selection mechanism is also required while considering the Terahertz band's unique characteristics and band-specific interference and achievable distance. \nThe interference management module can track channel status and decides on the transmission time slot as well as the carrier to be used and physical parameters to be set, such as modulation and coding scheme and annulating side lobes in some directions. The additional procedure can be also implemented such as adoption, at the design stage, of a specific frequency plan for network and setting a sub-band spacing strategy for each application. At operational mode, each node can use a fixed frequency pattern at the deployment stage or adopts the frequency hopping strategy to keep an acceptable SINR and to overcome the molecular absorption of the propagation medium~, as the noise generated by molecule depends on frequency. Using frequency hopping scheme is promising as it tracks the channel switching, however, the designing of the frequency hopping algorithm is a challenge. 
The second challenge is to explore the number of frequencies the MAC layer can manage to improve the throughput.", "id": "d9e3523c-4285-4070-87d7-8723005d4a9e", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "43e25cb4-9f6d-48df-94cc-1e4224851011", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "General challenges and future research directions" ], [ "subsubsection", "Interference management" ] ], "subsections": [], "title": "Interference management" }, { "cite_extract_rate": 0, "cites": [], "content": "The link quality depends on the physical layer and the channel\\blue{. F}or Terahertz communication, antenna technology improvement is considered a key factor for link budget enhancement. The MAC layer should monitor the antenna by fast switching beams to serve all users in a fair way. The MAC can also select the antenna carrier frequency and polarization. To efficiently monitor a Terahertz communication, the MAC layer should interface with the antenna system, for example by controlling the steering angles, beam width and switching time, because a good command of the antenna system can increase data throughput and reduce delays due to misalignment errors. Antenna properties should be optimized; the MAC layer can monitor the Terahertz antenna via frequency switching, beamforming, and diversity in order to meet network requirements\\blue{.} \n\\textit{Polarization capabilities: } Using two polarizations (horizontal and vertical) with sophisticated cross-polarization cancellation algorithms is promising to boost Terahertz system performance toward higher throughput and a lower total system signal to interference ratio~. The main challenges with the dual-polarization approach from the MAC point of view are to balance the traffic between the two polarizations and to mitigate errors. 
Moreover, the channel impulse response, and hence the received power, depends on polarization~\\blue{. O}ne challenge is how to efficiently exploit Terahertz wave polarization to increase data throughput by balancing the data flow simultaneously between horizontal and vertical polarization. \n\\textit{Wideband and multi-band antenna: } The design of multi-band and wideband antennas is required to increase MAC efficiency and meet system requirements in terms of high throughput, as more bandwidth will be available to support higher data rates~. Using a multiband antenna can also reduce system latency by deploying separate bands for data transmission and control message exchange.\n\\textit{High antenna gain: } To mitigate channel impairment and extend the communication range, antenna gain should be maximized. Horn antennas, logarithmic antennas and phased arrays are promising for designing high gain Terahertz communication. High antenna gain increases node reachability, but more care should be given to antenna side lobes, as they can generate more interference.\n\\textit{Spatial diversity: } To mitigate channel impairment and increase channel capacity, multiple antennas along with phased arrays, exploiting Terahertz propagation diversity, can be deployed for Terahertz links, such as MIMO and ultra massive MIMO. Using MIMO increases spectral and energy efficiency for the link; however, it requires efficient signal processing to encode and decode MIMO signals and exploit diversity. From the MAC point of view, deployment of ultra massive MIMO will affect resource allocation techniques~.\n\\textit{Fast switching capability: } To increase data throughput and reduce latency per link, the beam switching time should be minimized. Switching can occur at pulse, symbol or frame level.\n\\textit{Adaptive beamforming: } \\blue{Directional antennas are considered an alternative solution to mitigate channel impairment and increase link budget. 
Nevertheless, the antenna pattern can take different shapes, and it should be optimized for a given network use case. By using adaptive weighting of antenna elements monitored at the physical layer and MAC, it is possible to reduce the effect of interference by annulling lobes in some directions.} A MAC module can be implemented to control antenna beamforming and adapt the antenna technology to the network topology. In~, a log-periodic toothed antenna is optimized for beamforming and beam steering for the Terahertz band. A concept of intelligent communication using ultra massive MIMO is proposed in~, to increase the communication distance and data rates at Terahertz band frequencies.", "id": "880b1d0c-45cd-48e5-bf94-b840c3a5a27c", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "43e25cb4-9f6d-48df-94cc-1e4224851011", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "General challenges and future research directions" ], [ "subsubsection", "Antenna design" ] ], "subsections": [], "title": "Antenna design" }, { "cite_extract_rate": 0, "cites": [], "content": "Synchronization adds accuracy to network operations and coordination and reduces frame collisions among nodes; as a result, it contributes to QoS enhancement. However, it adds computational complexity and requires additional time slots before data transmission starts. Node memory and link establishment time are the main costs to pay in order to deploy a synchronous network. \\blue{At the MAC level, the challenge is\nto design algorithms for node and frame synchronization, such that nodes become aware of the transmission time.} Another challenge is to efficiently allocate the radio resources for synchronization procedures, such as frequency, time and power, which can increase the delay. 
To reduce the delay\\blue{,} transceivers with more capabilities such as memory and processing can be used.", "id": "ddce99c2-d58e-45a9-9ecf-d12baa6f9ed5", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "43e25cb4-9f6d-48df-94cc-1e4224851011", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "General challenges and future research directions" ], [ "subsubsection", "Synchronization" ] ], "subsections": [], "title": "Synchronization" }, { "cite_extract_rate": 0, "cites": [], "content": "An efficient transceiver is required to deal with different MAC functionalities, including framing, synchronization, error control, scheduling and buffering. Authors in~ demonstrate that it is possible to optimize the transceiver architecture to sustain high data rates reaching $100$~Gbps using parallelism and optimized memory usage, along with frame length and error control techniques. Using efficient processing techniques at the transceiver and a sufficient memory size, it is possible to implement MAC protocols dealing with fast channel access, efficient scheduling techniques and multi-traffic communication\\blue{.}", "id": "3de40484-90c9-4c82-92d7-f981a5c54258", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "43e25cb4-9f6d-48df-94cc-1e4224851011", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "General challenges and future research directions" ], [ "subsubsection", "Transceiver design" ] ], "subsections": [], "title": "Transceiver design" }, { "cite_extract_rate": 0, "cites": [], "content": "Before any communication starts, an establishment phase should be initiated. Link establishment is the duty of the MAC layer when a node needs to transmit to another node. 
This phase starts with setting up all the required parameters, such as physical layer parameters, timers and the synchronization procedure\\blue{. T}hen, after receiving the \\blue{acknowledgement} from the receiver, a new transmission can begin. The challenges are how fast a Terahertz connection can be established, and how to increase the success probability of the link establishment phase\\blue{.} \nDeafness can complicate neighbor discovery due to transceiver misalignment and prevents control messages from being exchanged in a timely manner (cf. \\blue{Section} \\ref{subsubsec-req:design-mac-nbrdisc}). To avoid antenna \\blue{misalignment} during link establishment, the mmWave standards IEEE 802.15.3c and IEEE 802.11ad use beam training, in which one node operates in a directional mode and the other node searches the space in a sequential directional mode. After a complete search, sector-level training occurs to perfectly align the beams. For neighbor discovery, the challenges are to discover all the nodes with minimum delay, and to search the space for beam alignment in a short time when two nodes are not initially aware of their beam directions. \nParticularly, for Terahertz communication, neighbor discovery is challenging due to unique band characteristics and antenna directionality, and for nanoscale networks due to limited energy resources. Neighbor discovery is required to synchronize nodes within a network and to rapidly incorporate new nodes into the network. As a future research direction, optimizing the discovery time by correctly choosing the reference time for antenna alignment, with timely information exchange, is an important challenge. Discovery can be enhanced by using multibeam antennas with fast switching and coordination among distributed nodes~. \\saim{A neighbor discovery protocol with directional antennas using side lobe information and the full antenna radiation pattern for better detection is proposed in~. 
However, multipath effects and LOS blockage were not considered. }", "id": "32f6f0b1-91d1-4991-839f-7b1590828914", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "43e25cb4-9f6d-48df-94cc-1e4224851011", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "General challenges and future research directions" ], [ "subsubsection", "Link establishment, neighbor discovery and deafness problem" ] ], "subsections": [], "title": "Link establishment, neighbor discovery and deafness problem" }, { "cite_extract_rate": 0, "cites": [], "content": "A blockage is a situation in which an object crosses the main link between two nodes transmitting to each other; it can also be generated by frequency shifting and signals reflected from surrounding objects. Due to the high data rate, a small or temporary blockage can result in very high data loss. Therefore, it is important to propose novel anti-blockage mechanisms to avoid blockage situations and to achieve seamless coverage. At the MAC layer, it is important to identify blocked channels to avoid false detection and correction, and to distinguish between deafness and blockage errors. In the Terahertz band, due to the small wavelength (0.3 mm)\\blue{,} directional links can be easily attenuated by LOS obstacles. In mobility scenarios, these obstacles can occur more frequently and can therefore degrade the Terahertz link performance. Merely increasing the transmission power cannot penetrate the obstacles; therefore, an alternative unblocked channel is required to steer around them. Reflectors can be used to avoid permanent blockage, but new mechanisms with beam steering and management functionalities are required to avoid link blockage. Modeling the blockage phenomena for each use case is required, and the MAC layer should be aware of it. The main challenge is to detect blockage and tackle this issue at the MAC layer. 
One alternative is to defer the transmission until the link is cleared, or to select an alternative path with new parameters to avoid transmission interruption. \\saim{To mitigate temporary blockage of the LOS wireless link, an NLOS wireless link with reflection and scattering over multiple rough surfaces is analyzed in~. The use of NLOS links can broaden the access point options to improve the link performance. However, the use of complex modulation schemes has yet to be analyzed for feasibility under different environments.}", "id": "37c66c5e-c332-4eb4-8242-4e7c6fb2a6ca", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "43e25cb4-9f6d-48df-94cc-1e4224851011", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "General challenges and future research directions" ], [ "subsubsection", "LOS blockage" ] ], "subsections": [], "title": "LOS blockage" }, { "cite_extract_rate": 0, "cites": [], "content": "Short-range communication scenarios, such as indoor and Data Centre scenarios, require new and efficient relaying techniques to increase the reachability of nodes. Node relaying or forwarding capability can be implemented at the MAC layer. It is activated when the signal from one transmitter needs to be regenerated by intermediary nodes to reach its final destination. However, to activate this, each node must have a complete view of its neighbors, which can be exchanged among the nodes as a neighbor table. Due to the antenna directionality and tight beam requirements of Terahertz communication, beam switching techniques can be used, where an antenna can take 0.1 ns to switch its beam direction~, which can increase the overall delay in forwarding packets. Relaying protocols using directional antennas must be designed to reduce this delay and to overcome the channel impairment problem. 
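As a rough illustration of how the delay components of directional relaying compose, the sketch below sums beam-switch time (the 0.1 ns figure quoted above), serialization time and propagation delay per hop. The packet size, rate and distances are illustrative assumptions.

```python
# Back-of-the-envelope per-hop forwarding delay with directional
# antennas: beam switch + serialization + propagation. Values other
# than the 0.1 ns switching time are illustrative.
def hop_delay_ns(packet_bits, rate_gbps, distance_m,
                 beam_switch_ns=0.1, c=3e8):
    tx_ns = packet_bits / rate_gbps        # bits / (Gbit/s) gives ns
    prop_ns = distance_m / c * 1e9
    return beam_switch_ns + tx_ns + prop_ns

# A 1500-byte packet over two 5 m hops at 100 Gbps:
total = 2 * hop_delay_ns(1500 * 8, 100.0, 5.0)
print(round(total, 1))   # -> 273.5 (ns), dominated by serialization
```

Even with sub-nanosecond beam switching, the hop count multiplies all three terms, which is why multihop protocol design under tight latency budgets is flagged as challenging.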
\nIn Terahertz band only short transmission range is achievable until now. Due to which the signal needs to regenerated by an intermediate node to reach the destination. Designing strategies with relaying capabilities by considering the unique band features and environment is a challenging requirement. Work on node relying on Terahertz band was performed at nanoscale communication~, where two modes were considered: amplify and forward, and decode and forward, to strengthen the direct path by maximum ratio combining. \nThe relay node can be selected from the existing neighbors or can be placed especially in a network. Each mechanism needs to address different challenges including link quality, location, and number of relays. In a distributed environment, where nodes communication range is shorter, multihop communication should be enabled. Multihop protocol design can be challenging when high throughput and low latency are the requirements. Therefore, new multihop strategies are required to fulfill Terahertz communication requirements by considering limited capabilities and behavior of communication layers in case of nanoscale networks. For macro-scale networks, the path loss, molecular noise, and antenna directionality should be considered. 
Reflectors can also be used to communicate with and reach nodes whose LOS path is blocked.
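The two relaying modes discussed above can be illustrated with common textbook SNR models (linear scale), where maximum ratio combining (MRC) adds the direct- and relayed-path SNRs at the destination. These are standard approximations, not formulas taken from the surveyed protocols.

```python
# Sketch using common textbook relay models (linear-scale SNRs).
# g_sd: source->destination, g_sr: source->relay, g_rd: relay->destination.

def af_mrc_snr(g_sd, g_sr, g_rd):
    """Amplify-and-forward: relayed-path SNR combined with the direct
    path via MRC (sum of branch SNRs)."""
    relayed = (g_sr * g_rd) / (g_sr + g_rd + 1.0)
    return g_sd + relayed

def df_mrc_snr(g_sd, g_sr, g_rd):
    """Decode-and-forward: end-to-end SNR limited by the source-relay
    hop, with MRC of direct and relay-destination branches."""
    return min(g_sr, g_sd + g_rd)
```

Under either model, combining with the direct path never performs worse than the direct path alone, which is the "strengthening" effect mentioned above.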
MAC decisions, such as band selection and path switching in case of blockage, are also affected by the physical layer and channel status. The transceiver memory should be optimized to support traffic with different QoS profiles.
The scheduler should consider exchanged information on buffer status, channel quality, and the requirements of each traffic flow, as well as the schedules of other nodes.
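A minimal sketch of such a scheduler follows. The combined metric (QoS weight times channel quality times backlog) is an illustrative choice of the authors' general description, not a metric prescribed by any Terahertz MAC proposal.

```python
# Illustrative scheduler sketch: serve the flow maximizing a combined
# metric of QoS weight, channel quality and queue backlog. Empty queues
# score zero and are never picked.

def pick_next_flow(flows):
    """flows: dict flow_id -> (backlog_bytes, channel_quality, qos_weight).
    Returns the id of the flow to serve next, or None if all are empty."""
    best_id, best_metric = None, 0.0
    for fid, (backlog, quality, weight) in flows.items():
        m = weight * quality * backlog
        if m > best_metric:
            best_id, best_metric = fid, m
    return best_id
```

In practice the metric would also incorporate deadlines for latency-sensitive flows and the beam-alignment cost of switching between spatially separated receivers.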
Therefore, the optimal packet size, the trade-off between size and performance, and flow control policies to avoid congestion and buffer overflow require further research.
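The size-versus-error trade-off described above can be made concrete with a standard simplified model: for a payload of L bits, a header of H bits and bit error rate p, the throughput efficiency is eta(L) = L/(L+H) * (1-p)**(L+H). Larger frames amortize the header but are more likely to contain an error; the optimum balances the two. The model and the numeric search below are illustrative, not part of the surveyed protocols.

```python
# Standard simplified frame-efficiency model and a numeric search for the
# payload size maximizing it (assumes independent bit errors, no FEC).

def efficiency(L, H, p):
    """Fraction of channel time carrying correctly received payload."""
    return (L / (L + H)) * (1.0 - p) ** (L + H)

def best_payload(H, p, max_bits=20000, step=8):
    """Byte-aligned payload size (in bits) maximizing efficiency."""
    return max(range(step, max_bits, step), key=lambda L: efficiency(L, H, p))
```

As expected from the model, a noisier channel pushes the optimum toward shorter frames: for a 128-bit header the optimal payload shrinks by roughly a factor of three when the bit error rate rises from 1e-4 to 1e-3.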
A further challenge is how to sample channel conditions and how fast the MAC layer can decide on the handover.
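The neighbor-update mechanism mentioned above, sorting neighbor nodes by availability, can be sketched as follows. The availability score combining link quality with recency of last contact is an assumption made for illustration; real protocols would also track position and speed.

```python
# Illustrative sketch: re-rank neighbors by an availability score so the
# mobility-management module can quickly pick a handover or relay target.
# Scoring rule (quality discounted by staleness) is an assumption.

def rank_neighbors(neighbors, now):
    """neighbors: dict node_id -> (last_seen_time, link_quality in [0, 1]).
    Returns node ids sorted from most to least available."""
    def availability(item):
        _, (last_seen, quality) = item
        age = now - last_seen
        return quality / (1.0 + age)   # fresher, better links rank higher
    ranked = sorted(neighbors.items(), key=availability, reverse=True)
    return [nid for nid, _ in ranked]
```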
To push further the research in this domain, challenges and future research directions are also presented with a cross-layer approach.\n\\section*{Acknowledgment}\nThis project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 761579 (TERAPOD).\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\\bibliographystyle{ieeetran}\n\\begin{thebibliography}{100}\n\\providecommand{\\url}[1]{#1}\n\\csname url@samestyle\\endcsname\n\\providecommand{\\newblock}{\\relax}\n\\providecommand{\\bibinfo}[2]{#2}\n\\providecommand{\\BIBentrySTDinterwordspacing}{\\spaceskip=0pt\\relax}\n\\providecommand{\\BIBentryALTinterwordstretchfactor}{4}\n\\providecommand{\\BIBentryALTinterwordspacing}{\\spaceskip=\\fontdimen2\\font plus\n\\BIBentryALTinterwordstretchfactor\\fontdimen3\\font minus\n \\fontdimen4\\font\\relax}\n\\providecommand{\\BIBforeignlanguage}[2]{{\n\\expandafter\\ifx\\csname l@#1\\endcsname\\relax\n\\typeout{** WARNING: IEEEtran.bst: No hyphenation pattern has been}\n\\typeout{** loaded for the language `#1'. Using the pattern for}\n\\typeout{** the default language instead.}\n\\else\n\\language=\\csname l@#1\\endcsname\n\\fi\n#2}}\n\\providecommand{\\BIBdecl}{\\relax}\n\\BIBdecl\n\\bibitem{ciscodata2018}\n\\BIBentryALTinterwordspacing\nC.~Systems., ``Cisco visual networking index: Global mobile data traffic\n forecast update, 2016–2021 white paper,'' Tech. Rep., 2018. [Online].\n Available:\n \\url{https://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/mobile-white-paper-c11-520862.html}\n\\BIBentrySTDinterwordspacing\n\\bibitem{hamza2016}\nA.~S. Hamza, J.~S. Deogun, and D.~R. Alexander, ``Wireless communication in\n data centers: A survey,'' \\emph{IEEE Communications Surveys \\& Tutorials},\n vol.~18, no.~3, pp. 1572--1595, thirdquarter 2016.\n\\bibitem{Akyildiz2014}\nI.~F. Akyildiz, J.~M. 
Jornet, and C.~Han, ``Terahertz band: Next frontier for\n wireless communications,'' \\emph{Physical Communication}, vol.~12, pp. 16 --\n 32, 2014.\n\\bibitem{teranet2014}\n------, ``Teranets: Ultra-broadband communication networks in the {T}erahertz\n band,'' \\emph{IEEE Wireless Communications}, vol.~21, no.~4, pp. 130--135,\n August 2014.\n\\bibitem{Elayan2018}\nH.~Elayan, O.~Amin, R.~M. Shubair, and M.~S. Alouini, ``Terahertz\n communication: The opportunities of wireless technology beyond 5{G},'' in\n \\emph{International Conference on Advanced Communication Technologies and\n Networking (CommNet)}, April 2018, pp. 1--5.\n\\bibitem{Yilmaz2016}\nT.~Yilmaz and {\\\"{O}}.~B. Akan, ``On the 5{G} wireless communications at the\n low {T}erahertz band,'' \\emph{CoRR}, vol. abs/1605.02606, 2016.\n\\bibitem{zhnagh2010}\nH.~{Zhang}, S.~{Venkateswaran}, and U.~{Madhow}, ``Channel modeling and {MIMO}\n capacity for outdoor millimeter wave links,'' in \\emph{2010 IEEE Wireless\n Communication and Networking Conference}, April 2010, pp. 1--6.\n\\bibitem{Yongsu2006}\nS.~K. Yong and C.-C. Chong, ``An overview of multigigabit wireless through\n millimeter wave technology: Potentials and technical challenges,''\n \\emph{EURASIP Journal on Wireless Communications and Networking}, vol. 2007,\n no.~1, p. 078907, Dec 2006.\n\\bibitem{Kurner2014}\nT.~Kurner and S.~Priebe, ``Towards {T}erahertz communications status in\n research, standardization and regulation,'' \\emph{Journal of Infrared,\n Millimeter, and Terahertz Waves}, vol.~35, no.~1, pp. 53--62, January 2014.\n\\bibitem{Akyildiz2011}\nI.~F. Akyildiz, J.~M. Jornet, and M.~Pierobon, ``Nanonetworks: A new frontier\n in communications,'' \\emph{Communications of the ACM}, vol.~54, no.~11, pp.\n 84--89, 2011.\n\\bibitem{Akyildiz2010}\n------, ``Propagation models for nanocommunication networks,'' in\n \\emph{Proceedings of the Fourth European Conference on Antennas and\n Propagation}, April 2010, pp. 
1--5.\n\\bibitem{Dressler2012}\nF.~Dressler and F.~Kargl, ``Towards security in nano-communication: Challenges\n and opportunities,'' \\emph{Nano Communication Networks}, vol.~3, no.~3, pp.\n 151 -- 160, 2012.\n\\bibitem{Jornet2012}\nJ.~M. Jornet and I.~F. Akyildiz, ``The {I}nternet of multimedia nano-things,''\n \\emph{Nano Communication Networks}, vol.~3, no.~4, pp. 242 -- 251, 2012.\n\\bibitem{Akyildiz2010b}\nI.~F. Akyildiz and J.~M. Jornet, ``The {I}nternet of nano-things,'' \\emph{IEEE\n Wireless Communications}, vol.~17, no.~6, pp. 58--63, December 2010.\n\\bibitem{Darchini2013}\nK.~Darchini and A.~S. Alfa, ``Molecular communication via microtubules and\n physical contact in nanonetworks: A survey,'' \\emph{Nano Communication\n Networks}, vol.~4, no.~2, pp. 73 -- 85, 2013.\n\\bibitem{Malak2012}\nD.~Malak and O.~B. Akan, ``Molecular communication nanonetworks inside human\n body,'' \\emph{Nano Communication Networks}, vol.~3, no.~1, pp. 19 -- 35,\n 2012.\n\\bibitem{Akyildiz2010a}\nI.~F. Akyildiz and J.~M. Jornet, ``Electromagnetic wireless nanosensor\n networks,'' \\emph{Nano Communication Networks}, vol.~1, no.~1, pp. 3 -- 19,\n 2010.\n\\bibitem{Abbasi2016}\nQ.~H. Abbasi, K.~Yang, N.~Chopra, J.~M. Jornet, N.~A. Abuali, K.~A. Qaraqe, and\n A.~Alomainy, ``Nano-communication for biomedical applications: A review on\n the state-of-the-art from physical layers to novel networking concepts,''\n \\emph{IEEE Access}, vol.~4, pp. 3920--3935, 2016.\n\\bibitem{Dressler2015}\nF.~Dressler and S.~Fischer, ``Connecting in-body nano communication with body\n area networks: Challenges and opportunities of the {I}nternet of nano\n things,'' \\emph{Nano Communication Networks}, vol.~6, no.~2, pp. 29 -- 38,\n 2015.\n\\bibitem{Busari2018}\nS.~A. Busari, S.~Mumtaz, S.~Al-Rubaye, and J.~Rodriguez, ``5{G} millimeter-wave\n mobile broadband: performance and challenges,'' \\emph{IEEE Communications\n Magazine}, vol.~56, no.~6, pp. 
137--143, June 2018.\n\\bibitem{Mumtaz2017}\nS.~Mumtaz, J.~M. Jornet, J.~Aulin, W.~H. Gerstacker, X.~Dong, and B.~Ai,\n ``Terahertz communication for vehicular networks,'' \\emph{IEEE Transactions\n on Vehicular Technology}, vol.~66, no.~7, pp. 5617--5625, July 2017.\n\\bibitem{Petrov2018}\nV.~Petrov, J.~Kokkoniemi, D.~Moltchanov, J.~Lehtomaki, Y.~Koucheryavy, and\n M.~Juntti, ``Last meter indoor {T}erahertz wireless access: Performance\n insights and implementation roadmap,'' \\emph{IEEE Communications Magazine},\n vol.~56, no.~6, pp. 158--165, June 2018.\n\\bibitem{Huang2011}\nK.~c.~Huang and Z.~Wang, ``Terahertz terabit wireless communication,''\n \\emph{IEEE Microwave Magazine}, vol.~12, no.~4, pp. 108--116, June 2011.\n\\bibitem{Han2018}\nC.~Han and Y.~Chen, ``Propagation modeling for wireless communications in the\n {T}erahertz band,'' \\emph{IEEE Communications Magazine}, vol.~56, no.~6, pp.\n 96--101, June 2018.\n\\bibitem{Boulogeorgos2018}\nA.~A.~A. Boulogeorgos, A.~Alexiou, T.~Merkle, C.~Schubert, R.~Elschner,\n A.~Katsiotis, P.~Stavrianos, D.~Kritharidis, P.~K. Chartsias, J.~Kokkoniemi,\n M.~Juntti, J.~Lehtomaki, A.~Teixeira, and F.~Rodrigues, ``Terahertz\n technologies to deliver optical network quality of experience in wireless\n systems beyond 5{G},'' \\emph{IEEE Communications Magazine}, vol.~56, no.~6,\n pp. 144--151, June 2018.\n\\bibitem{Dat2017}\nP.~T. Dat, A.~Kanno, T.~Umezawa, N.~Yamamoto, and T.~Kawanishi, ``Millimeter\n and {T}erahertzwave radio-over-fiber for 5{G} and beyond,'' in \\emph{IEEE\n Photonics Society Summer Topical Meeting Series (SUM)}, July 2017, pp.\n 165--166.\n\\bibitem{Petrov2016}\nV.~Petrov, A.~Pyattaev, D.~Moltchanov, and Y.~Koucheryavy, ``Terahertz band\n communications: Applications, research challenges, and standardization\n activities,'' in \\emph{8th International Congress on Ultra Modern\n Telecommunications and Control Systems and Workshops (ICUMT)}, October 2016,\n pp. 183--190.\n\\bibitem{Akyildiz2016}\nI.~F. 
Akyildiz and J.~M. Jornet, ``Realizing ultra-massive {MIMO} (1024×1024)\n communication in the (0.06–10) {T}erahertz band,'' \\emph{Nano Communication\n Networks}, vol.~8, pp. 46 -- 54, 2016.\n\\bibitem{Song2011}\nH.~J. Song and T.~Nagatsuma, ``Present and future of {T}erahertz\n communications,'' \\emph{IEEE Transactions on {T}erahertz Science and\n Technology}, vol.~1, no.~1, pp. 256--263, September 2011.\n\\bibitem{Hosako2007}\nI.~Hosako, N.~Sekine, M.~Patrashin, S.~Saito, K.~Fukunaga, Y.~Kasai, P.~Baron,\n T.~Seta, J.~Mendrok, S.~Ochiai, and H.~Yasuda, ``At the dawn of a new era in\n {T}erahertz technology,'' \\emph{Proceedings of the IEEE}, vol.~95, no.~8, pp.\n 1611--1623, August 2007.\n\\bibitem{Hasan2016}\nM.~Hasan, S.~Arezoomandan, H.~Condori, and B.~Sensale-Rodriguez, ``Graphene\n {T}erahertz devices for communications applications,'' \\emph{Nano\n Communication Networks}, vol.~10, pp. 68 -- 78, 2016.\n\\bibitem{Federici2016}\nJ.~F. Federici, J.~Ma, and L.~Moeller, ``Review of weather impact on outdoor\n {T}erahertz wireless communication links,'' \\emph{Nano Communication\n Networks}, vol.~10, pp. 13 -- 26, 2016.\n\\bibitem{Wu2012}\nK.~Wu, Y.~J. Cheng, T.~Djerafi, and W.~Hong, ``Substrate-integrated\n millimeter-wave and {T}erahertz antenna technology,'' \\emph{Proceedings of\n the IEEE}, vol. 100, no.~7, pp. 2219--2232, July 2012.\n\\bibitem{guest-editoral2018}\nK.~M.~S. Huq, J.~M. Jornet, W.~H. Gerstacker, A.~Al-Dulaimi, Z.~Zhou, and\n J.~Aulin, ``Thz communications for mobile heterogeneous networks,''\n \\emph{IEEE Communications Magazine}, vol.~56, no.~6, pp. 94--95, June 2018.\n\\bibitem{hadi2020sensing}\nH.~Sarieddeen, N.~Saeed, T.~A. Naffouri, and M.~S. Alouini, ``{N}ext\n {G}eneration {T}erahertz communications, {A} rendezvous of sensing, imaging,\n and localization,'' \\emph{IEEE Communications Magazine}, vol.~58, no.~5, pp.\n 69--75, 2020.\n\\bibitem{hadi2020signal}\nH.~Sarieddeen, M.~S. Alouini, and T.~A. 
Naffouri, ``An overview of signal\n processing techniques for {T}erahertz communications,'' June 2020.\n\\bibitem{HAN201967}\nC.~Han, X.~Zhang, and X.~Wang, ``On medium access control schemes for wireless\n networks in the millimeter-wave and terahertz bands,'' \\emph{Nano\n Communication Networks}, vol.~19, pp. 67 -- 80, 2019.\n\\bibitem{fitch2004}\nM.~Fitch and R.~Osiander, ``Terahertz waves for communications and sensing,''\n \\emph{Johns Hopkins APL Technical Digest}, vol.~25, no.~4, pp. 348--355,\n October 2004.\n\\bibitem{kuner2012tftcs}\nT.~Kurner, ``Towards future {T}erahertz communications systems,''\n \\emph{Terahertz Science and Technology}, vol.~5, pp. 11--17, 01 2012.\n\\bibitem{truththz2012}\nC.~M. Armstrong, ``The truth about {T}erahertz,'' \\emph{IEEE Spectrum},\n vol.~49, no.~9, pp. 36--41, September 2012.\n\\bibitem{sasib2013}\nS.~Balasubramaniam, S.~Ben-Yehuda, S.~Pautot, A.~Jesorka, P.~Lio’, and\n Y.~Koucheryavy, ``A review of experimental opportunities for molecular\n communication,'' \\emph{Nano Communication Networks}, vol.~4, no.~2, pp. 43 --\n 52, 2013.\n\\bibitem{ahirata2015}\nA.~Hirata and M.~Yaita, ``Ultrafast {T}erahertz wireless communications\n technologies,'' \\emph{IEEE Transactions on Terahertz Science and Technology},\n vol.~5, no.~6, pp. 1128--1132, November 2015.\n\\bibitem{nabilkhalid2018}\nN.~Khalid, T.~Yilmaz, and O.~B. Akan, ``Energy-efficient modulation and\n physical layer design for low {T}erahertz band communication channel in 5{G}\n femtocell {I}nternet of things,'' \\emph{Ad Hoc Networks}, vol.~79, pp. 63 --\n 71, 2018.\n\\bibitem{lemic2019survey}\n\\BIBentryALTinterwordspacing\nF.~Lemic, S.~Abadal, W.~Tavernier, P.~Stroobant, D.~Colle, E.~Alarcón,\n J.~Marquez-Barja, and J.~Famaey, ``Survey on terahertz nanocommunication and\n networking: A top-down perspective,'' 2019. [Online]. Available:\n \\url{https://arxiv.org/abs/1909.05703}\n\\BIBentrySTDinterwordspacing\n\\bibitem{tsrapaport2019app}\nT.~S. 
{Rappaport}, Y.~{Xing}, O.~{Kanhere}, S.~{Ju}, A.~{Madanayake},\n S.~{Mandal}, A.~{Alkhateeb}, and G.~C. {Trichopoulos}, ``Wireless\n communications and applications above 100 ghz: Opportunities and challenges\n for 6g and beyond,'' \\emph{IEEE Access}, vol.~7, 2019.\n\\bibitem{elayanhad2019}\nH.~{Elayan}, O.~{Amin}, B.~{Shihada}, R.~M. {Shubair}, and M.-S. {Alouini},\n ``Terahertz band: The last piece of rf spectrum puzzle for communication\n systems,'' \\emph{arXiv e-prints}, p. arXiv:1907.05043, Jul 2019.\n\\bibitem{Kursat2019}\nK.~Tekbiyik, A.~R. Ekti, G.~K. Kurt, and A.~Gorcin, ``Terahertz band\n communication systems: Challenges, novelties and standardization efforts,''\n \\emph{Physical Communication}, vol.~35, p. 100700, 2019.\n\\bibitem{kko2019}\nK.~K. {O}, W.~{Choi}, Q.~{Zhong}, N.~{Sharma}, Y.~{Zhang}, R.~{Han},\n Z.~{Ahmad}, D.~{Kim}, S.~{Kshattry}, I.~R. {Medvedev}, D.~J. {Lary},\n H.~{Nam}, P.~{Raskin}, and I.~{Kim}, ``Opening terahertz for everyday\n applications,'' \\emph{IEEE Communications Magazine}, vol.~57, no.~8, pp.\n 70--76, August 2019.\n\\bibitem{kmshuq2019}\nK.~M.~S. {Huq}, S.~A. {Busari}, J.~{Rodriguez}, V.~{Frascolla}, W.~{Bazzi}, and\n D.~C. {Sicker}, ``Terahertz-enabled wireless system for beyond-5g ultra-fast\n networks: A brief survey,'' \\emph{IEEE Network}, vol.~33, no.~4, pp. 89--95,\n July 2019.\n\\bibitem{zchen2019}\nZ.~{Chen}, X.~{Ma}, B.~{Zhang}, Y.~{Zhang}, Z.~{Niu}, N.~{Kuang}, W.~{Chen},\n L.~{Li}, and S.~{Li}, ``A survey on terahertz communications,'' \\emph{China\n Communications}, vol.~16, no.~2, pp. 1--35, February 2019.\n\\bibitem{nanoparadigm2008}\nI.~F. Akyildiz, F.~Brunetti, and C.~Blázquez, ``Nanonetworks: A new\n communication paradigm,'' \\emph{Computer Networks}, vol.~52, no.~12, pp. 2260\n -- 2279, 2008.\n\\bibitem{akyljornnano2010}\nI.~F. Akyildiz and J.~M. Jornet, ``Electromagnetic wireless nanosensor\n networks,'' \\emph{Nano Communication Networks}, vol.~1, no.~1, pp. 
3 -- 19,\n 2010.\n\\bibitem{imntthz2012}\nJ.~M. Jornet and I.~F. Akyildiz, ``The {I}nternet of multimedia nano-things in\n the terahertz band,'' in \\emph{18th European Wireless Conference}, April\n 2012, pp. 1--8.\n\\bibitem{sivapriya2017}\nS.~V. Sivapriya and D.~Sridharan, ``A comprehensive review on {MAC} protocol of\n wireless nanosensor networks for intrabody application,'' \\emph{International\n Journal for Innovative Research in Science and Technology}, vol.~4, no.~2,\n pp. 83--89, July 2017.\n\\bibitem{xwang2018}\nX.~Wang, L.~Kong, F.~Kong, F.~Qiu, M.~Xia, S.~Arnon, and G.~Chen, ``Millimeter\n wave communication: A comprehensive survey,'' \\emph{IEEE Communications\n Surveys \\& Tutorials}, vol.~20, no.~3, pp. 1616--1653, thirdquarter 2018.\n\\bibitem{Ma2015TerahertzWC}\nJ.~Ma, ``Terahertz wireless communication through atmospheric atmospheric\n turbulence and rain,'' 2015.\n\\bibitem{B5G2018}\nA.~S. Cacciapuoti, K.~Sankhe, M.~Caleffi, and K.~R. Chowdhury, ``Beyond 5{G}:\n Terahertz-based {M}edium {A}ccess {P}rotocol for mobile heterogeneous\n networks,'' \\emph{IEEE Communications Magazine}, vol.~56, no.~6, pp.\n 110--115, June 2018.\n\\bibitem{polesetestbed2019}\nM.~Polese, F.~Restuccia, A.~Gosain, J.~Jornet, S.~Bhardwaj, V.~Ariyarathna,\n S.~Mandal, K.~Zheng, A.~Dhananjay, M.~Mezzavilla, J.~Buckwalter, M.~Rodwell,\n X.~Wang, M.~Zorzi, A.~Madanayake, and T.~Melodia, ``\\blue{MillimeTera: Toward\n A Large-Scale Open-Source MmWave and Terahertz Experimental Testbed},'' in\n \\emph{Proceedings of the 3rd ACM Workshop on Millimeter-Wave Networks and\n Sensing Systems}, ser. mmNets’19.\\hskip 1em plus 0.5em minus 0.4em\\relax\n New York, NY, USA: Association for Computing Machinery, 2019, p. 27–32.\n\\bibitem{akylidiz-lte2014}\nI.~F. Akyildiz, D.~M. Gutierrez-Estevez, R.~Balakrishnan, and\n E.~Chavarria-Reyes, ``{LTE}-{A}dvanced and the evolution to {B}eyond {4G\n (B4G)} systems,'' \\emph{Physical Communication}, vol.~10, pp. 
31 -- 60, 2014.\n\\bibitem{Glushko2013}\nB.~Glushko, D.~Kin, and A.~Shar, ``Gigabit optical wireless communication\n system for personal area networking,'' \\emph{Optical Memory and Neural\n Networks}, vol.~22, no.~2, pp. 73--80, April 2013.\n\\bibitem{xli2012}\nX.~Li, J.~Vucic, V.~Jungnickel, and J.~Armstrong, ``On the capacity of\n intensity-modulated direct-detection systems and the information rate of\n {ACO-OFDM} for indoor optical wireless applications,'' \\emph{IEEE\n Transactions on Communications}, vol.~60, no.~3, pp. 799--809, 2012.\n\\bibitem{eciaramella2009}\nE.~Ciaramella, Y.~Arimoto, G.~Contestabile, M.~Presi, A.~D'Errico, V.~Guarino,\n and M.~Matsumoto, ``1.28 terabit/s (32x40 gbit/s) {WDM} transmission system\n for free space optical communications,'' \\emph{IEEE Journal on Selected Areas\n in Communications}, vol.~27, no.~9, pp. 1639--1645, December 2009.\n\\bibitem{Taherkhan2020}\nM.~Taherkhani, Z.~G. Kashani, and R.~A. Sadeghzadeh, ``\\blue{On the performance\n of THz wireless LOS links through random turbulence channels},'' \\emph{Nano\n Communication Networks}, vol.~23, p. 100282, 2020.\n\\bibitem{calvanese2019}\nE.~{Calvanese Strinati}, S.~{Barbarossa}, J.~L. {Gonzalez-Jimenez},\n D.~{Ktenas}, N.~{Cassiau}, L.~{Maret}, and C.~{Dehos}, ``6{G}: The next\n frontier: From holographic messaging to artificial intelligence using\n subterahertz and visible light communication,'' \\emph{IEEE Vehicular\n Technology Magazine}, vol.~14, no.~3, pp. 42--50, Sep. 2019.\n\\bibitem{minoru2015}\nM.~Fujishima, S.~Amakawa, K.~Takano, K.~Katayama, and T.~Yoshida, ``Tehrahertz\n {CMOS} design for low-power and high-speed wireless communication,''\n \\emph{IEICE Transactions}, vol. 98-C, pp. 
1091--1104, 2015.\n\\bibitem{ingmar2015}\nI.~Kallfass, I.~Dan, S.~Rey, P.~Harati, J.~Antes, A.~Tessmann, S.~Wagner,\n M.~Kuri, R.~Weber, H.~Massler, A.~Leuther, T.~Merkle, and T.~K{\\\"u}rner,\n ``Towards {MMIC}-based 300ghz indoor wireless communication systems,''\n \\emph{IEICE Transactions}, vol. 98-C, pp. 1081--1090, 2015.\n\\bibitem{xyu2016}\nX.~Yu, S.~Jia, H.~Hu, P.~Guan, M.~Galili, T.~Morioka, P.~U. Jepsen, and L.~K.\n Oxenløwe, ``Terahertz photonics-wireless transmission of 160 {G}bit/s\n bitrate,'' in \\emph{21st OptoElectronics and Communications Conference (OECC)\n held jointly with International Conference on Photonics in Switching (PS)},\n July 2016, pp. 1--3.\n\\bibitem{xpang2016}\nX.~Pang, S.~Jia, O.~Ozolins, X.~Yu, H.~Hu, L.~Marcon, P.~Guan, F.~D. Ros,\n S.~Popov, G.~Jacobsen, M.~Galili, T.~Morioka, D.~Zibar, and L.~K. Oxenkwe,\n ``260 gbits photonic-wireless link in the {T}erahertz band,'' in \\emph{IEEE\n Photonics Conference (IPC)}, Oct 2016, pp. 1--2.\n\\bibitem{phpathak2015}\nP.~H. Pathak, X.~Feng, P.~Hu, and P.~Mohapatra, ``Visible light communication,\n networking, and sensing: A survey, potential and challenges,'' \\emph{IEEE\n Communications Surveys \\& Tutorials}, vol.~17, no.~4, pp. 2047--2077,\n Fourthquarter 2015.\n\\bibitem{sraja2012}\nS.~Rajagopal, R.~D. Roberts, and S.~Lim, ``{IEEE} 802.15.7 visible light\n communication: modulation schemes and dimming support,'' \\emph{IEEE\n Communications Magazine}, vol.~50, no.~3, pp. 72--82, March 2012.\n\\bibitem{Ghadikolaei2015}\nH.~S. Ghadikolaei and C.~Fischione, ``The transitional behavior of interference\n in millimeter wave networks and its impact on medium access control,''\n \\emph{IEEE Transactions on Communications}, vol.~64, pp. 
723--740, 2015.\n\\bibitem{marnat2017}\nL.~{Marnat}, L.~{Dussopt}, V.~{Puyal}, A.~{Siligaris}, F.~{Hameau}, A.~{Larie},\n and C.~{Dehos}, ``V-band transceiver modules with integrated antennas and\n phased arrays for mmwave access in 5g mobile networks,'' in \\emph{2017 11th\n European Conference on Antennas and Propagation (EUCAP)}, March 2017, pp.\n 2786--2790.\n\\bibitem{roh2014}\nW.~{Roh}, J.~{Seol}, J.~{Park}, B.~{Lee}, J.~{Lee}, Y.~{Kim}, J.~{Cho},\n K.~{Cheun}, and F.~{Aryanfar}, ``Millimeter-wave beamforming as an enabling\n technology for 5g cellular communications: theoretical feasibility and\n prototype results,'' \\emph{IEEE Communications Magazine}, vol.~52, no.~2, pp.\n 106--113, February 2014.\n\\bibitem{jalili2018}\nH.~{Jalili} and O.~{Momeni}, ``Scalable wideband and wide-angle beam steering\n mm-wave/thz radiator and phased arrays in silicon,'' in \\emph{2018\n Asia-Pacific Microwave Conference (APMC)}, Nov 2018, pp. 1157--1159.\n\\bibitem{hemadeh2018}\nI.~A. {Hemadeh}, K.~{Satyanarayana}, M.~{El-Hajjar}, and L.~{Hanzo},\n ``Millimeter-wave communications: Physical channel models, design\n considerations, antenna constructions, and link-budget,'' \\emph{IEEE\n Communications Surveys Tutorials}, vol.~20, no.~2, pp. 870--913,\n Secondquarter 2018.\n\\bibitem{chinni2018}\nV.~K. {Chinni}, M.~{Zezaoui}, C.~{Coinon}, X.~{Wallart}, E.~{Pevtavit}, J.~F.\n {Larnpin}, P.~{Szriftgiser}, M.~{Zaknoune}, and G.~{Ducournau}, ``Indoor 100\n gbit/s thz data link in the 300 ghz band using fast photodiodes,'' in\n \\emph{2018 25th International Conference on Telecommunications (ICT)}, June\n 2018, pp. 
288--290.\n\\bibitem{rupasinghe2016}\nN.~{Rupasinghe}, Y.~{Kakishima}, I.~{Guvenc}, K.~{Kitao}, and T.~{Imai},\n ``Geometry performance for 5g mmwave cellular networks,'' in \\emph{2016\n International Symposium on Antennas and Propagation (ISAP)}, Oct 2016, pp.\n 874--875.\n\\bibitem{giordani2019}\nM.~{Giordani}, M.~{Polese}, A.~{Roy}, D.~{Castor}, and M.~{Zorzi}, ``A tutorial\n on beam management for 3gpp nr at mmwave frequencies,'' \\emph{IEEE\n Communications Surveys Tutorials}, vol.~21, no.~1, pp. 173--196, Firstquarter\n 2019.\n\\bibitem{hwang2019}\nH.~{Wang}, P.~{Zhang}, J.~{Li}, and X.~{You}, ``Radio propagation and wireless\n coverage of lsaa-based 5g millimeter-wave mobile communication systems,''\n \\emph{China Communications}, vol.~16, no.~5, pp. 1--18, May 2019.\n\\bibitem{mezzavillam2018}\nM.~{Mezzavilla}, M.~{Zhang}, M.~{Polese}, R.~{Ford}, S.~{Dutta}, S.~{Rangan},\n and M.~{Zorzi}, ``End-to-end simulation of 5g mmwave networks,'' \\emph{IEEE\n Communications Surveys Tutorials}, vol.~20, no.~3, pp. 2237--2263,\n thirdquarter 2018.\n\\bibitem{openairinter}\nOPENAIRINTERFACE, \\url{https://www.openairinterface.org/?page_id=1831},\n accessed: 26-10-2019.\n\\bibitem{terasimhus2018}\nZ.~Hossain, Q.~Xia, and J.~M. Jornet, ``Terasim: An ns-3 extension to simulate\n {T}erahertz band communication networks,'' \\emph{Nano Communication\n Networks}, vol.~17, pp. 36 -- 44, 2018.\n\\bibitem{TRD2015}\n\\BIBentryALTinterwordspacing\nIEEE, ``{IEEE} 802.15.3d: Technical requirement document,'' techreport, 2015.\n [Online]. Available:\n \\url{https://mentor.ieee.org/802.15/dcn/14/15-14-0309-20-003d-technical-requirements-document.docx}\n\\BIBentrySTDinterwordspacing\n\\bibitem{tyilmaz2014}\nT.~Yilmaz and O.~B. Akan, ``Utilizing terahertz band for local and personal\n area wireless communication systems,'' in \\emph{IEEE 19th International\n Workshop on Computer Aided Modeling and Design of Communication Links and\n Networks (CAMAD)}, December 2014, pp. 
330--334.\n\\bibitem{cwang2014}\nC.~Wang, B.~Lu, C.~Lin, Q.~Chen, L.~Miao, X.~Deng, and J.~Zhang, ``0.34-thz\n wireless link based on high-order modulation for future wireless local area\n network applications,'' \\emph{IEEE Transactions on Terahertz Science and\n Technology}, vol.~4, no.~1, pp. 75--85, January 2014.\n\\bibitem{ieee802153d2017}\n``{IEEE} standard for high data rate wireless multi-media networks--amendment\n 2: 100 gb/s wireless switched point-to-point physical layer,'' \\emph{IEEE Std\n 802.15.3d-2017 (Amendment to IEEE Std 802.15.3-2016 as amended by IEEE Std\n 802.15.3e-2017)}, pp. 1--55, Oct 2017.\n\\bibitem{hsong2018}\nH.~Song, H.~Hamada, and M.~Yaita, ``Prototype of {KIOSK} data downloading\n system at 300 ghz: Design, technical feasibility, and results,'' \\emph{IEEE\n Communications Magazine}, vol.~56, no.~6, pp. 130--136, June 2018.\n\\bibitem{IHTLD-MAC2017}\nL.~You, Z.~Ren, C.~Chen, and Y.-H. Lv, ``An improved high throughput and low\n delay access protocol for {T}erahertz wireless personal area networks,''\n \\emph{Journal of Computers}, vol.~28, no.~3, pp. 147--158, June 2017.\n\\bibitem{vpyk2016}\nV.~Petrov, D.~Moltchanov, and Y.~Koucheryavy, ``Applicability assessment of\n terahertz information showers for next-generation wireless networks,'' in\n \\emph{IEEE International Conference on Communications (ICC)}, May 2016, pp.\n 1--7.\n\\bibitem{ARD2015}\nIEEE, ``{IEEE} 802.15.3d: Application requirement document,'' Tech. Rep., 2015.\n\\bibitem{ptdat2018}\nP.~T. Dat, A.~Kanno, N.~Yamamoto, and T.~Kawanishi, ``Radio-over-fiber-based\n seamless fiber-wireless convergence for small cell and linear cell\n networks,'' in \\emph{2018 Optical Fiber Communications Conference and\n Exposition (OFC)}, March 2018, pp. 1--3.\n\\bibitem{barrosmall2017}\nM.~T. 
Barros, R.~Mullins, and S.~Balasubramaniam, ``Integrated terahertz\n communication with reflectors for 5g small-cell networks,'' \emph{IEEE\n Transactions on Vehicular Technology}, vol.~66, no.~7, pp. 5647--5657, July\n 2017.\n\bibitem{vpetrovsinr2015}\nV.~Petrov, D.~Moltchanov, and Y.~Koucheryavy, ``Interference and sinr in dense\n {T}erahertz networks,'' in \emph{IEEE 82nd Vehicular Technology Conference\n (VTC2015-Fall)}, September 2015, pp. 1--5.\n\bibitem{SDNC2017}\nA.~S. Cacciapuoti, S.~Ramanathan, K.~R. Chowdhury, and M.~Caleffi,\n ``Software-defined network controlled switching between millimeter wave and\n {T}erahertz small cells,'' \emph{CoRR}, vol. abs/1702.02775, 2017.\n\bibitem{Guank2017}\nK.~{Guan}, G.~{Li}, T.~{Kürner}, A.~F. {Molisch}, B.~{Peng}, R.~{He},\n B.~{Hui}, J.~{Kim}, and Z.~{Zhong}, ``On millimeter wave and {TH}z mobile\n radio channel for smart rail mobility,'' \emph{IEEE Transactions on Vehicular\n Technology}, vol.~66, no.~7, pp. 5658--5674, July 2017.\n\bibitem{ATLR2018}\nC.~Zhang, K.~Ota, J.~Jia, and M.~Dong, ``Breaking the blockage for big data\n transmission: Gigabit road communication in autonomous vehicles,'' \emph{IEEE\n Communications Magazine}, vol.~56, no.~6, pp. 152--157, June 2018.\n\bibitem{pkumari2018}\nP.~{Kumari}, J.~{Choi}, N.~{González-Prelcic}, and R.~W. {Heath}, ``{IEEE}\n 802.11ad-based radar: An approach to joint vehicular communication-radar\n system,'' \emph{IEEE Transactions on Vehicular Technology}, vol.~67, no.~4,\n pp. 3012--3027, April 2018.\n\bibitem{vpetrovv2v2019}\n\BIBentryALTinterwordspacing\nV.~Petrov, G.~Fodor, J.~Kokkoniemi, D.~Moltchanov, J.~Lehtom{\"{a}}ki,\n S.~Andreev, Y.~Koucheryavy, M.~J. Juntti, and M.~Valkama, ``On unified\n vehicular communications and radar sensing in millimeter-wave and low\n terahertz bands,'' \emph{CoRR}, vol. abs/1901.06980, 2019. [Online].\n Available: \url{http://arxiv.org/abs/1901.06980}\n\BIBentrySTDinterwordspacing\n\bibitem{woolard2007}\nD.~L.
Woolard, J.~O. Jensen, R.~J. Hwu, and M.~S. Shur, \\emph{Terahertz Science\n and Technology for Military and Security Applications}.\\hskip 1em plus 0.5em\n minus 0.4em\\relax WORLD SCIENTIFIC, 2007.\n\\bibitem{sonmez2015}\nS.~Sonmez and S.~Ergun, ``Terahertz technology for military applications,''\n \\emph{Journal of Management and Information Science}, vol.~3, no.~1, pp. 13\n -- 16, 2015.\n\\bibitem{krzysztof2012}\nK.~Iwaszczuk, P.~Jepsen, and H.~Heiselberg,\n ``\\BIBforeignlanguage{English}{Terahertz technology for defense and\n security-related applications},'' Ph.D. dissertation, 2012.\n\\bibitem{suhwu2013}\nS.~U. Hwu, K.~B. deSilva, and C.~T. Jih, ``Terahertz wireless systems for space\n applications,'' in \\emph{IEEE Sensors Applications Symposium Proceedings},\n February 2013, pp. 171--175.\n\\bibitem{sdong2011}\nS.~Dong, Z.~Zhu, and Y.~Wang, ``Advances of terahertz research and {T}erahertz\n satellite communications,'' in \\emph{International Conference on Electronics,\n Communications and Control (ICECC)}, September 2011, pp. 4122--4125.\n\\bibitem{narytnyk2014author}\nT.~M. Narytnyk, ``Possibilities of using {T}erahertz band radio communication\n channels for super high rate backhaul,'' \\emph{Telecommunications and Radio\n Engineering}, vol.~73, no.~15, pp. 1361--1371, 2014.\n\\bibitem{FELICETTI201627}\nL.~Felicetti, M.~Femminella, G.~Reali, and P.~Liò, ``Applications of molecular\n communications to medicine: A survey,'' \\emph{Nano Communication Networks},\n vol.~7, pp. 27 -- 45, 2016.\n\\bibitem{zarepour2015}\nE.~{Zarepour}, M.~{Hassan}, C.~T. {Chou}, A.~A. {Adesina}, and M.~E.\n {Warkiani}, ``Reliability analysis of time-varying wireless nanoscale sensor\n networks,'' in \\emph{2015 IEEE 15th International Conference on\n Nanotechnology (IEEE-NANO)}, July 2015, pp. 63--68.\n\\bibitem{wang2008}\nZ.~L. Wang, ``Towards self-powered nanosystems: From nanogenerators to\n nanopiezotronics,'' \\emph{Advanced Functional Materials}, vol.~18, no.~22,\n pp. 
3553--3567, 2008.\n\bibitem{mathankar2013}\nS.~Mathanker, ``Terahertz applications in food and agriculture: A review,''\n \emph{Transactions of the ASABE (American Society of Agricultural and\n Biological Engineers)}, vol.~56, pp. 1213--1226, April 2013.\n\bibitem{azahid2018}\nA.~{Zahid}, K.~{Yang}, H.~{Heidari}, C.~{Li}, M.~A. {Imran}, A.~{Alomainy}, and\n Q.~H. {Abbasi}, ``Terahertz characterisation of living plant leaves for\n quality of life assessment applications,'' in \emph{2018 Baltic URSI\n Symposium (URSI)}, May 2018, pp. 117--120.\n\bibitem{dynamic_CH2016}\nA.~Afsharinejad, A.~Davy, and B.~Jennings, ``Dynamic channel allocation in\n electromagnetic nanonetworks for high resolution monitoring of plants,''\n \emph{Nano Communication Networks}, vol.~7, pp. 2 -- 16, 2016.\n\bibitem{nkhalid3002017}\nN.~{Khalid}, N.~A. {Abbasi}, and O.~B. {Akan}, ``300 ghz broadband transceiver\n design for low-thz band wireless communications in indoor internet of\n things,'' in \emph{2017 IEEE International Conference on Internet of Things\n (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE\n Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data\n (SmartData)}, June 2017, pp. 770--775.\n\bibitem{vpetrovIC2017}\nV.~{Petrov}, D.~{Moltchanov}, M.~{Komar}, A.~{Antonov}, P.~{Kustarev},\n S.~{Rakheja}, and Y.~{Koucheryavy}, ``Terahertz band intra-chip\n communications: Can wireless links scale modern x86 cpus?'' \emph{IEEE\n Access}, vol.~5, pp. 6095--6109, 2017.\n\bibitem{oyal2018}\nO.~Yalgashev, ``Towards nanoscale interconnect for system-on-chip,'' Ph.D.\n dissertation, Université de Technologie de Belfort-Montbeliard, 2018.\n\bibitem{skimaz2016}\nS.~{Kim} and A.~{Zajić}, ``Characterization of 300-ghz wireless channel on a\n computer motherboard,'' \emph{IEEE Transactions on Antennas and Propagation},\n vol.~64, no.~12, pp.
5411--5423, Dec 2016.\n\bibitem{wiback2013}\n\BIBentryALTinterwordspacing\n``Requirements on wireless backhauling and fronthauling, IEEE\n 802.15-08-0336-00-0thz,'' November 2013. [Online]. Available:\n \url{https://mentor.ieee.org/802.15/dcn/13/15-13-0636-01-0thz-requirements-for-wireless-backhauling-fronthauling.pdf}\n\BIBentrySTDinterwordspacing\n\bibitem{seandell2019}\nS.~Ahearne, N.~O. Mahony, N.~Boujnah, S.~Ghafoor, L.~G. Guerrero, and\n C.~Renaud, ``Integrating thz wireless communication links in a data centre\n network,'' in \emph{IEEE 5G World Forum}, October 2019.\n\bibitem{koenig2013}\nS.~Koenig, D.~Lopez-Diaz, J.~Antes, F.~Boes, R.~Henneberger, A.~Leuther,\n A.~Tessmann, R.~Schmogrow, D.~Hillerkuss, R.~Palmer, T.~Zwick, C.~Koos,\n W.~Freude, O.~Ambacher, J.~Leuthold, and I.~Kallfass, ``Wireless sub-thz\n communication system with high data rate,'' \emph{Nature Photonics}, vol.~7,\n pp. 977--981, October 2013.\n\bibitem{ykatayama2011}\nY.~Katayama, K.~Takano, Y.~Kohda, N.~Ohba, and D.~Nakano, ``Wireless data\n center networking with steered-beam mmwave links,'' in \emph{IEEE Wireless\n Communications and Networking Conference}, March 2011, pp. 2179--2184.\n\bibitem{mollahasani2016}\nS.~{Mollahasani} and E.~{Onur}, ``Evaluation of terahertz channel in data\n centers,'' in \emph{NOMS 2016 - 2016 IEEE/IFIP Network Operations and\n Management Symposium}, April 2016, pp. 727--730.\n\bibitem{Wukaishun2012}\nK.~Wu, J.~Xiao, and L.~M. Ni, ``Rethinking the architecture design of data\n center networks,'' \emph{Frontiers of Computer Science}, vol.~6, no.~5,\n pp. 596--603, October 2012.\n\bibitem{ntt}\nNTT, \url{http://www.ntt.co.jp/news2016/1605e/160526a.html}, accessed:\n 09-01-2019.\n\bibitem{google2013}\n\BIBentryALTinterwordspacing\nA.~D. Angelica, \emph{Google's self-driving car gathers nearly 1 {GB}/sec},\n 2013. [Online].
Available:\n \url{http://www.kurzweilai.net/googles-self-driving-car-gathers-nearly-1-gbsec.}\n\BIBentrySTDinterwordspacing\n\bibitem{sas}\n\BIBentryALTinterwordspacing\nSAS, \emph{Are you ready for your smart car?} [Online]. Available:\n \url{http://www.sas.com/en\n us/insights/articles/big-data/the-internet-of-things-and-connected-cars.html}\n\BIBentrySTDinterwordspacing\n\bibitem{kanno2014}\nA.~Kanno, ``Fiber-wireless signal transport by {T}erahertz waves,'' in\n \emph{International Conference on Advanced Technologies for Communications\n (ATC 2014)}, Oct 2014, pp. 766--769.\n\bibitem{greenberg2008}\nA.~Greenberg, J.~Hamilton, D.~A. Maltz, and P.~Patel, ``The cost of a cloud:\n Research problems in data center networks,'' \emph{SIGCOMM Comput. Commun.\n Rev.}, vol.~39, no.~1, pp. 68--73, December 2008.\n\bibitem{Ding2016}\nY.-M. Ding, S.~Gao, X.~Shi, and H.~Wu, ``Analysis of inter-satellite terahertz\n communication link,'' in \emph{3rd International Conference on Wireless\n Communication and Sensor Networks (WCSN)}.\hskip 1em plus 0.5em minus\n 0.4em\relax Atlantis Press, 2016.\n\bibitem{wdou2014}\nW.~{Dou}, L.~{Zhang}, H.~{Meng}, and Z.~{Wang}, ``Tracking antennas for\n inter-satellite communications at sub-millimeter wavelengths,'' in\n \emph{Proceedings of 3rd Asia-Pacific Conference on Antennas and\n Propagation}, July 2014, pp. 1149--1152.\n\bibitem{yilmaz22015}\nT.~Yilmaz and O.~B. Akan, ``On the use of low {T}erahertz band for {5G} indoor\n mobile networks,'' \emph{Computers and Electrical Engineering}, vol.~48, pp.\n 164 -- 173, 2015.\n\bibitem{thor}\nTHOR, \url{https://thorproject.eu/}, accessed: 01-04-2019.\n\bibitem{teranovareport2018}\n\BIBentryALTinterwordspacing\nA.~A. Boulogeorgos, A.~Alexiou, D.~Kritharidis, A.~Katsiotis, G.~D. Ntouni,\n J.~Kokkoniemi, J.~Lehtomäki, M.~J.
Juntti, D.~Yankova, A.~Mokhtar, J.~Point,\n J.~Machado, R.~Elschner, C.~Schubert, T.~Merkle, R.~Ferreira, F.~Rodrigues,\n and J.~Lima, ``Wireless terahertz system architectures for networks beyond\n 5g,'' \emph{CoRR}, vol. abs/1810.12260, 2018. [Online]. Available:\n \url{http://arxiv.org/abs/1810.12260}\n\BIBentrySTDinterwordspacing\n\bibitem{drih-mac2015}\nS.~Mohrehkesh, M.~C. Weigle, and S.~K. Das, ``{DRIH-MAC}: A distributed\n receiver-initiated harvesting-aware {MAC} for nanonetworks,'' \emph{IEEE\n Transactions on Molecular, Biological and Multi-Scale Communications},\n vol.~1, no.~1, pp. 97--110, March 2015.\n\bibitem{gkwalia2016}\nG.~K. Walia, D.~K.~K. Randhawa, and K.~S. Malhi, ``A brief survey on molecular\n communications in nanonetworks,'' in \emph{International Conference on\n Computational Techniques in Information and Communication Technologies\n (ICCTICT)}, March 2016, pp. 343--348.\n\bibitem{Qiu2014}\nT.~Qiu, T.-C. Lee, A.~G. Mark, K.~I. Morozov, R.~M{\"u}nster, O.~Mierka,\n S.~Turek, A.~M. Leshansky, and P.~Fischer, ``Swimming by reciprocal motion at\n low {R}eynolds number,'' \emph{Nature Communications}, 2014.\n\bibitem{Cvetkovic2014}\nC.~Cvetkovic, R.~Raman, V.~Chan, B.~J. Williams, M.~Tolish, P.~Bajaj, M.~S.\n Sakar, H.~H. Asada, M.~T.~A. Saif, and R.~Bashir, ``Three-dimensionally\n printed biological machines powered by skeletal muscle,'' \emph{Proceedings\n of the National Academy of Sciences}, vol. 111, no.~28, pp. 10\,125--10\,130,\n 2014.\n\bibitem{syim2012}\nS.~Yim and M.~Sitti, ``Design and rolling locomotion of a magnetically actuated\n soft capsule endoscope,'' \emph{IEEE Transactions on Robotics}, vol.~28,\n no.~1, pp. 183--194, February 2012.\n\bibitem{qiuf2015}\nF.~Qiu, S.~Fujita, R.~Mhanna, L.~Zhang, B.~R. Simona, and B.~J.
Nelson,\n ``Magnetic helical microswimmers functionalized with lipoplexes for targeted\n gene delivery,'' \\emph{Advanced Functional Materials}, vol.~25, no.~11, pp.\n 1666--1671, 2015.\n\\bibitem{sabadal2013}\nS.~Abadal, E.~Alarcón, A.~Cabellos-Aparicio, M.~C. Lemme, and M.~Nemirovsky,\n ``Graphene-enabled wireless communication for massive multicore\n architectures,'' \\emph{IEEE Communications Magazine}, vol.~51, no.~11, pp.\n 137--143, November 2013.\n\\bibitem{amro2012}\nA.~Amro, M.~Ilchenko, V.~Kalinin, and O.~Turabi, ``Sub-terahertz low power uwb\n communication link for wpan,'' in \\emph{IISTE Network and Complex Systems},\n 2012.\n\\bibitem{christina2019}\n\\BIBentryALTinterwordspacing\nC.~Chaccour, R.~Amer, B.~Zhou, and W.~Saad, ``On the reliability of wireless\n virtual reality at terahertz (thz) frequencies,'' \\emph{CoRR}, vol.\n abs/1905.07656, 2019. [Online]. Available:\n \\url{http://arxiv.org/abs/1905.07656}\n\\BIBentrySTDinterwordspacing\n\\bibitem{Chaccour2020CanTP}\nC.~Chaccour, M.~N. Soorki, W.~Saad, M.~Bennis, and P.~Popovski, ``Can terahertz\n provide high-rate reliable low latency communications for wireless vr?''\n 2020.\n\\bibitem{LLModelling2011}\nD.~Arifler, ``Link layer modeling of bio-inspired communication in\n nanonetworks,'' \\emph{Nano Communication Networks}, vol.~2, no.~4, pp. 223 --\n 229, 2011.\n\\bibitem{phlame2012}\nJ.~M. Jornet, J.~C. Pujol, and J.~S. Pareta, ``{PHLAME}: A physical layer aware\n {MAC} protocol for electromagnetic nanonetworks in the {T}erahertz band,''\n \\emph{Nano Communication Networks}, vol.~3, no.~1, pp. 74 -- 81, 2012.\n\\bibitem{EESR-MAC2012}\nV.~Srikanth, S.~Chaluvadi, Sandeep, Vani, and Venkatesh, ``Energy efficient,\n scalable and reliable {MAC} protocol for electromagnetic communication among\n nano devices,'' \\emph{International Journal of Distributed and Parallel\n Systems (IJDPS)}, vol.~03, no.~1, pp. 249--256, January 2012.\n\\bibitem{esaware2013}\nP.~Wang, J.~M. Jornet, M.~A. 
Malik, N.~Akkari, and I.~F. Akyildiz, ``Energy and\n spectrum-aware {MAC} protocol for perpetual wireless nanosensor networks in\n the {T}erahertz band,'' \emph{Ad Hoc Networks}, vol.~11, no.~8, pp. 2541 --\n 2555, 2013.\n\bibitem{MAC-TUDWN2013}\nZ.~Ren, Y.~N. Cao, S.~Peng, and H.~J. Lei, ``A {MAC} protocol for {T}erahertz\n ultra-high data-rate wireless networks,'' in \emph{Mechanical Engineering,\n Industrial Electronics and Information Technology Applications in Industry},\n ser. Applied Mechanics and Materials, vol. 427.\hskip 1em plus 0.5em minus\n 0.4em\relax Trans Tech Publications, December 2013, pp. 2864--2869.\n\bibitem{smart-mac2013}\nG.~Piro, L.~A. Grieco, G.~Boggia, and P.~Camarda, ``Nano-sim: Simulating\n electromagnetic-based nanonetworks in the network simulator 3,'' in\n \emph{Proceedings of the 6th International ICST Conference on Simulation\n Tools and Techniques}, ser. SimuTools '13.\hskip 1em plus 0.5em minus\n 0.4em\relax Brussels, Belgium: ICST (Institute for Computer\n Sciences, Social-Informatics and Telecommunications Engineering), 2013, pp.\n 203--210.\n\bibitem{MAC-TC2013}\nZ.~Ren, Y.-N. Cao, X.~Zhou, Y.~Zheng, and Q.~bin Chen, ``Novel {MAC} protocol\n for {T}erahertz ultra-high data-rate wireless networks,'' \emph{The Journal\n of China Universities of Posts and Telecommunications}, vol.~20, no.~6, pp.\n 69 -- 76, 2013.\n\bibitem{TRPLE2014}\nJ.~Lin and M.~A. Weitnauer, ``Pulse-level beam-switching {MAC} with energy\n control in picocell {T}erahertz networks,'' in \emph{IEEE Global\n Communications Conference (GLOBECOM)}, December 2014, pp. 4460--4465.\n\bibitem{MDP2014}\nS.~Mohrehkesh and M.~C. Weigle, ``Optimizing energy consumption in {T}erahertz\n band nanonetworks,'' \emph{IEEE Journal on Selected Areas in Communications},\n vol.~32, no.~12, pp. 2432--2441, December 2014.\n\bibitem{TSN2014}\nE.~Zarepour, M.~Hassan, C.~T. Chou, and A.~A.
Adesina, ``Frequency hopping\n strategies for improving {T}erahertz sensor network performance over\n composition varying channels,'' in \emph{Proceedings of IEEE International\n Symposium on a World of Wireless, Mobile and Multimedia Networks 2014}, June\n 2014, pp. 1--9.\n\bibitem{RIH-MAC2014}\nS.~Mohrehkesh and M.~C. Weigle, ``{RIH-MAC}: Receiver-initiated\n harvesting-aware {MAC} for nanonetworks,'' in \emph{Proceedings of ACM The\n First Annual International Conference on Nanoscale Computing and\n Communication}, ser. NANOCOM '14.\hskip 1em plus 0.5em minus 0.4em\relax New\n York, NY, USA: ACM, 2014, pp. 1--6.\n\bibitem{LLsynch2015}\nQ.~Xia, Z.~Hossain, M.~Medley, and J.~M. Jornet, ``A link-layer synchronization\n and {M}edium {A}ccess {C}ontrol protocol for {T}erahertz band communication\n networks,'' in \emph{IEEE Global Communications Conference (GLOBECOM)},\n December 2015, pp. 1--7.\n\bibitem{TCN2015}\nS.~D'Oro, L.~Galluccio, G.~Morabito, and S.~Palazzo, ``A timing channel-based\n {MAC} protocol for energy-efficient nanonetworks,'' \emph{Nano Communication\n Networks}, vol.~6, no.~2, pp. 39 -- 50, 2015.\n\bibitem{joint_error2016}\nN.~Akkari, J.~M. Jornet, P.~Wang, E.~Fadel, L.~Elrefaei, M.~G. Malik,\n S.~Almasri, and I.~F. Akyildiz, ``Joint physical and link layer error control\n analysis for nanonetworks in the {T}erahertz band,'' \emph{Wirel. Netw.},\n vol.~22, no.~4, pp. 1221--1233, May 2016.\n\bibitem{Design-WNSN2015}\nS.~J. Lee, C.~A. Jung, K.~Choi, and S.~Kim, ``Design of wireless nanosensor\n networks for intrabody application,'' \emph{International Journal of\n Distributed Sensor Networks}, vol.~11, no.~7, pp. 1--12, 2015.\n\bibitem{DMDS2016}\nN.~Akkari, P.~Wang, J.~M. Jornet, E.~Fadel, L.~Elrefaei, M.~G.~A. Malik,\n S.~Almasri, and I.~F. Akyildiz, ``Distributed timely throughput optimal\n scheduling for the internet of nano-things,'' \emph{IEEE Internet of Things\n Journal}, vol.~3, no.~6, pp.
1202--1212, December 2016.\n\bibitem{TAB-MAC2016}\nX.-W. Yao and J.~M. Jornet, ``{TAB-MAC}: Assisted beamforming {MAC} protocol\n for {T}erahertz communication networks,'' \emph{Nano Communication Networks},\n vol.~9, pp. 36 -- 42, 2016.\n\bibitem{GMAC2016}\nR.~Alsheikh, N.~Akkari, and E.~Fadel, ``Grid based energy-aware {MAC} protocol\n for wireless nanosensor network,'' in \emph{8th IFIP International Conference\n on New Technologies, Mobility and Security (NTMS)}, November 2016, pp. 1--5.\n\bibitem{MGDI2016}\nA.~Tsioliaridou, C.~Liaskos, S.~Ioannidis, and A.~Pitsillides, ``Lightweight,\n self-tuning data dissemination for dense nanonetworks,'' \emph{Nano\n Communication Networks}, vol.~8, pp. 2 -- 15, 2016.\n\bibitem{pkt_size2016}\nP.~Johari and J.~M. Jornet, ``Packet size optimization for wireless nanosensor\n networks in the {T}erahertz band,'' in \emph{IEEE International Conference on\n Communications (ICC)}, May 2016, pp. 1--6.\n\bibitem{NS-MAC2016}\nR.~Alsheikh, N.~Akkari, and E.~Fadel, ``{MAC} protocols for wireless\n nano-sensor networks: Performance analysis and design guidelines,'' in\n \emph{Sixth International Conference on Digital Information Processing and\n Communications (ICDIPC)}, April 2016, pp. 129--134.\n\bibitem{HLMAC2016}\nC.~Jianling, W.~Min, C.~Cong, and R.~Zhi, ``High-throughput low-delay {MAC}\n protocol for {T}erahertz ultra-high data-rate wireless networks,'' \emph{The\n Journal of China Universities of Posts and Telecommunications}, vol.~23,\n no.~4, pp. 17 -- 24, 2016.\n\bibitem{DLLC2017}\nL.~Lopacinski, M.~Brzozowski, and R.~Kraemer, ``Data link layer considerations\n for future 100 {Gbps} {T}erahertz band transceivers,'' \emph{Wireless\n Communications and Mobile Computing}, 2017.\n\bibitem{MAADM2017}\nC.~Han, W.~Tong, and X.-W. Yao, ``{MA-ADM}: A memory-assisted angular-division\n multiplexing {MAC} protocol in {T}erahertz communication networks,''\n \emph{Nano Communication Networks}, vol.~13, pp.
51 -- 59, 2017.\n\\bibitem{MRA-MAC2017}\nW.~Tong and C.~Han, ``{MRA-MAC}: A multi-radio assisted medium access control\n in terahertz communication networks,'' in \\emph{IEEE Global Communications\n Conference (GLOBECOM)}, December 2017, pp. 1--6.\n\\bibitem{RBMP2017}\nQ.~Li, X.-W. Yao, and C.-C. Wang, ``{RBMP}: A relay-based {MAC} protocol for\n nanonetworks in the {T}erahertz band,'' in \\emph{Proceedings of the 4th ACM\n International Conference on Nanoscale Computing and Communication}, ser.\n NanoCom.\\hskip 1em plus 0.5em minus 0.4em\\relax New York, NY, USA: ACM, 2017,\n pp. 1--2.\n\\bibitem{APIS2017}\nH.~Yu, B.~Ng, and W.~K.~G. Seah, ``Pulse arrival scheduling for nanonetworks\n under limited {IoT} access bandwidth,'' in \\emph{IEEE 42nd Conference on\n Local Computer Networks (LCN)}, October 2017, pp. 18--26.\n\\bibitem{OPTRS2017}\nQ.~Xia and J.~M. Jornet, ``Cross-layer analysis of optimal relaying strategies\n for {T}erahertz band communication networks,'' in \\emph{IEEE 13th\n International Conference on Wireless and Mobile Computing, Networking and\n Communications (WiMob)}, October 2017, pp. 1--8.\n\\bibitem{EEWNSN2017}\nN.~Rikhtegar, M.~Keshtgari, and Z.~Ronaghi, ``{EEWNSN}: Energy efficient\n wireless nano sensor network {MAC} protocol for communications in the\n {T}erahertz band,'' \\emph{Wireless Personal Communications}, vol.~97, no.~1,\n pp. 521--537, November 2017.\n\\bibitem{ISCT2018}\nZ.~Li, L.~Guan, C.~Li, and A.~Radwan, ``A secure intelligent spectrum control\n strategy for future {T}erahertz mobile heterogeneous networks,'' \\emph{IEEE\n Communications Magazine}, vol.~56, no.~6, pp. 116--123, June 2018.\n\\bibitem{2stateMAC2018}\nX.~Yao, C.~Wang, W.~Wang, and J.~M. Jornet, ``On the achievable throughput of\n energy-harvesting nanonetworks in the {T}erahertz band,'' \\emph{IEEE Sensors\n Journal}, vol.~18, no.~2, pp. 902--912, January 2018.\n\\bibitem{slottedCSMAMAC2018}\nS.-J. Lee, H.~Choi, and S.~U. 
Kim, ``Slotted {CSMA/CA} based energy efficient\n {MAC} protocol design in nanonetworks,'' \emph{CoRR}, vol. abs/1803.00900,\n 2018.\n\bibitem{MAC-Yugi2018}\nS.~E. Hosseininejad, S.~Abadal, M.~Neshat, R.~Faraji-Dana, M.~C. Lemme,\n C.~Suessmeier, P.~H. Bolívar, E.~Alarcón, and A.~Cabellos-Aparicio,\n ``{MAC}-oriented programmable terahertz {PHY} via graphene-based {Y}agi-{U}da\n antennas,'' in \emph{IEEE Wireless Communications and Networking Conference\n (WCNC)}, April 2018, pp. 1--6.\n\bibitem{Salous2013}\nS.~Salous, \emph{Radio Propagation Measurement and Channel Modelling},\n 1st~ed.\hskip 1em plus 0.5em minus 0.4em\relax Wiley Publishing, 2013.\n\bibitem{chmodel2011}\nJ.~M. Jornet and I.~F. Akyildiz, ``Channel modeling and capacity analysis for\n electromagnetic wireless nanonetworks in the {T}erahertz band,'' \emph{IEEE\n Transactions on Wireless Communications}, vol.~10, no.~10, pp. 3211--3221,\n October 2011.\n\bibitem{GORDON20173}\nI.~Gordon, L.~Rothman, C.~Hill, R.~Kochanov, Y.~Tan, P.~Bernath, M.~Birk,\n V.~Boudon, A.~Campargue, K.~Chance, B.~Drouin, J.-M. Flaud, R.~Gamache,\n J.~Hodges, D.~Jacquemart, V.~Perevalov, A.~Perrin, K.~Shine, M.-A. Smith,\n J.~Tennyson, G.~Toon, H.~Tran, V.~Tyuterev, A.~Barbe, A.~Császár, V.~Devi,\n T.~Furtenbacher, J.~Harrison, J.-M. Hartmann, A.~Jolly, T.~Johnson,\n T.~Karman, I.~Kleiner, A.~Kyuberis, J.~Loos, O.~Lyulin, S.~Massie,\n S.~Mikhailenko, N.~Moazzen-Ahmadi, H.~Müller, O.~Naumenko, A.~Nikitin,\n O.~Polyansky, M.~Rey, M.~Rotger, S.~Sharpe, K.~Sung, E.~Starikova,\n S.~Tashkun, J.~V. Auwera, G.~Wagner, J.~Wilzewski, P.~Wcisło, S.~Yu, and\n E.~Zak, ``The {HITRAN2016} molecular spectroscopic database,'' \emph{Journal\n of Quantitative Spectroscopy and Radiative Transfer}, vol. 203, pp. 3 -- 69,\n 2017, {HITRAN2016} Special Issue.\n\bibitem{haixia2011}\nH.~Cui, J.~Yao, and C.~Wan, ``The study on {T}erahertz wave propagation feature\n in atmosphere,'' \emph{Journal of Physics: Conference Series}, vol.
276,\n no.~1, pp. 1--7, 2011.\n\bibitem{khaledn2019}\nN.~Khalid, N.~A. Abbasi, and O.~B. Akan, ``Statistical characterization\n and analysis of low-THz communication channel for 5G Internet of Things,''\n \emph{Nano Communication Networks}, vol.~22, p. 100258, 2019.\n\bibitem{hsong2010}\nH.~Song, K.~Ajito, A.~Wakatsuki, Y.~Muramoto, N.~Kukutsu, Y.~Kado, and\n T.~Nagatsuma, ``Terahertz wireless communication link at 300 ghz,'' in\n \emph{IEEE International Topical Meeting on Microwave Photonics}, October\n 2010, pp. 42--45.\n\bibitem{spriebe2011}\nS.~Priebe, C.~Jastrow, M.~Jacob, T.~Kleine-Ostmann, T.~Schrader, and T.~Kurner,\n ``Channel and propagation measurements at 300 {GH}z,'' \emph{IEEE\n Transactions on Antennas and Propagation}, vol.~59, no.~5, pp. 1688--1698,\n May 2011.\n\bibitem{spriebe2013}\nS.~Priebe and T.~Kurner, ``Stochastic modeling of {T}erahertz indoor radio\n channels,'' \emph{IEEE Transactions on Wireless Communications}, vol.~12,\n no.~9, pp. 4445--4455, September 2013.\n\bibitem{spriebekannicht2013}\nS.~Priebe, M.~Kannicht, M.~Jacob, and T.~Kürner, ``Ultra broadband indoor\n channel measurements and calibrated ray tracing propagation modeling at\n terahertz frequencies,'' \emph{Journal of Communications and Networks},\n vol.~15, no.~6, pp. 547--558, December 2013.\n\bibitem{tsujimura2018}\nK.~Tsujimura, K.~Umebayashi, J.~Kokkoniemi, J.~Lehtomäki, and Y.~Suzuki, ``A\n causal channel model for the {T}erahertz band,'' \emph{IEEE Transactions on\n Terahertz Science and Technology}, vol.~8, no.~1, pp. 52--62, January 2018.\n\bibitem{aafshari2015}\nA.~Afsharinejad, A.~Davy, B.~Jennings, S.~Rasmann, and C.~Brennan, ``A\n path-loss model incorporating shadowing for {T}erahertz band propagation in\n vegetation,'' in \emph{IEEE Global Communications Conference (GLOBECOM)},\n December 2015, pp.
1--6.\n\\bibitem{joonas2016}\nJ.~Kokkoniemi, J.~Lehtomäki, and M.~Juntti, ``A discussion on molecular\n absorption noise in the {T}erahertz band,'' \\emph{Nano Communication\n Networks}, vol.~8, pp. 35 -- 45, 2016.\n\\bibitem{itufspl2016}\n\\BIBentryALTinterwordspacing\nITU, ``Recommendation {ITU-R P.525-3}: Calculation of free-space attenuation,''\n Tech. Rep., 2016, accessed: 01-04-2019. [Online]. Available:\n \\url{https://www.itu.int/dms_pubrec/itu-r/rec/p/R-REC-P.525-3-201611-I!!PDF-E.pdf}\n\\BIBentrySTDinterwordspacing\n\\bibitem{radiowave2013}\n\\BIBentryALTinterwordspacing\n------, ``Recommendation {ITU-R P.676-10}: Attenuation by atmospheric gases,''\n Tech. Rep., 2013, accessed: 01-04-2019. [Online]. Available:\n \\url{https://www.itu.int/dms_pubrec/itu-r/rec/p/R-REC-P.676-10-201309-S!!PDF-E.pdf}\n\\BIBentrySTDinterwordspacing\n\\bibitem{bhardwaj2013}\nS.~Bhardwaj, N.~Nahar, and J.~L. Volakis, ``Link budget analysis for 350 {GH}z\n communication link,'' in \\emph{US National Committee of URSI National Radio\n Science Meeting (USNC-URSI NRSM)}, January 2013, pp. 1--1.\n\\bibitem{miller2008}\nD.~A.~B. Miller, \\emph{Quantum Mechanics for Scientists and Engineers}.\\hskip\n 1em plus 0.5em minus 0.4em\\relax Cambridge University Press, 2008.\n\\bibitem{ituattenu2016}\n\\BIBentryALTinterwordspacing\nITU, ``Recommendation {ITU-R P.676-11}: Attenuation by atmospheric gases,''\n Tech. Rep., 2017, accessed: 01-04-2019. [Online]. Available:\n \\url{https://www.itu.int/dms_pubrec/itu-r/rec/p/R-REC-P.676-11-201609-I!!PDF-E.pdf}\n\\BIBentrySTDinterwordspacing\n\\bibitem{painescott2018}\n\\BIBentryALTinterwordspacing\nS.~Paine, ``The am atmospheric model,'' March 2018. [Online]. Available:\n \\url{https://doi.org/10.5281/zenodo.1193771}\n\\BIBentrySTDinterwordspacing\n\\bibitem{smithernest1982}\nE.~K. Smith, ``Centimeter and millimeter wave attenuation and brightness\n temperature due to atmospheric oxygen and water vapor,'' \\emph{Radio\n Science}, vol.~17, no.~6, pp. 
1455--1464, 1982.\n\bibitem{itunoise2016}\n\BIBentryALTinterwordspacing\nITU, ``Recommendation {ITU-R P.372-13}: Radio noise,'' Tech. Rep., 2016,\n accessed: 01-04-2019. [Online]. Available:\n \url{https://www.itu.int/dms_pubrec/itu-r/rec/p/R-REC-P.372-13-201609-I!!PDF-E.pdf}\n\BIBentrySTDinterwordspacing\n\bibitem{fbox1986}\nF.~Box, ``Utilization of atmospheric transmission losses for\n interference-resistant communications,'' \emph{IEEE Transactions on\n Communications}, vol.~34, no.~10, pp. 1009--1015, October 1986.\n\bibitem{bronin2014}\nP.~Boronin, V.~Petrov, D.~Moltchanov, Y.~Koucheryavy, and J.~M. Jornet,\n ``Capacity and throughput analysis of nanoscale machine communication through\n transparency windows in the terahertz band,'' \emph{Nano Communication\n Networks}, vol.~5, no.~3, pp. 72 -- 82, 2014.\n\bibitem{Elayan2018Ete}\nH.~W. Elayan, C.~Stefanini, R.~M. Shubair, and J.~M. Jornet, ``End-to-end noise\n model for intra-body terahertz nanoscale communication,'' \emph{IEEE\n Transactions on NanoBioscience}, vol.~17, pp. 464--473, 2018.\n\bibitem{raditationbrown2003}\nE.~R. Brown, ``Fundamentals of terrestrial millimeter-wave and {T}erahertz\n remote sensing,'' \emph{International Journal of High Speed Electronics and\n Systems}, vol.~13, no.~04, pp. 995--1097, 2003.\n\bibitem{boronin2015}\nP.~Boronin, D.~Moltchanov, and Y.~Koucheryavy, ``A molecular noise model for\n {T}erahertz channels,'' in \emph{IEEE International Conference on\n Communications (ICC)}, June 2015, pp. 1286--1291.\n\bibitem{radiative2015}\n\BIBentryALTinterwordspacing\nB.~Carli, ``Basics of radiative transfer.'' [Online]. Available:\n \url{https://earth.esa.int/dragon/D2_L2_Carli.pdf}\n\BIBentrySTDinterwordspacing\n\bibitem{ippolito1985}\nL.~Ippolito, R.~Kaul, and R.~Wallace, \emph{Propagation Effects Handbook for\n Satellite Systems Design: A Summary of Propagation Impairments on 10 to 100\n GHz Satellite Links with Techniques for System Design}, ser. NASA reference\n publication.\hskip 1em plus 0.5em minus 0.4em\relax National Aeronautics and\n Space Administration, Scientific and Technical Information Branch, 1985.\n\bibitem{straiton1975}\nA.~Straiton, ``The absorption and reradiation of radio waves by oxygen and\n water vapor in the atmosphere,'' \emph{IEEE Transactions on Antennas and\n Propagation}, vol.~23, no.~4, pp. 595--597, July 1975.\n\bibitem{feymenbook2015}\nR.~P. Feynman, R.~B. Leighton, and M.~Sands, \emph{The Feynman Lectures on\n Physics}.\hskip 1em plus 0.5em minus 0.4em\relax Ingram Publisher Services\n US, 2015, vol.~1.\n\bibitem{kokko2015}\nJ.~Kokkoniemi, J.~Lehtomäki, and M.~Juntti, ``Frequency domain scattering\n loss in {T}erahertz band,'' in \emph{Global Symposium on Millimeter-Waves\n (GSMM)}, May 2015, pp. 1--3.\n\bibitem{mwu2015}\nF.~M. Wu, B.~F. Wu, X.~Y. Zhao, Z.~H. Zhang, H.~Zhang, T.~Y. Zhang, and\n Y.~Fang, ``Analysis on scattering and relationship with granular size in\n {T}erahertz spectra,'' in \emph{40th International Conference on Infrared,\n Millimeter, and Terahertz waves (IRMMW-THz)}, August 2015, pp. 1--2.\n\bibitem{ywang2016}\nY.~Wang, F.~Zhang, Z.~Dong, and H.~Sun, ``Effects of nonsphericity on\n attenuation characteristics of terahertz atmospheric propagation,'' in\n \emph{41st International Conference on Infrared, Millimeter, and Terahertz\n waves (IRMMW-THz)}, September 2016, pp. 1--2.\n\bibitem{skim2015}\nS.~Kim and A.~Zajić, ``Statistical modeling of {T}erahertz scatter channels,''\n in \emph{9th European Conference on Antennas and Propagation (EuCAP)}, April\n 2015, pp. 1--5.\n\bibitem{Ju2019}\nS.~Ju, S.~H.~A. Shah, M.~A. Javed, J.~Li, G.~Palteru, J.~Robin, Y.~Xing,\n O.~Kanhere, and T.~S.
Rappaport, ``Scattering mechanisms and modeling for\n terahertz wireless communications,'' \\emph{ICC 2019 - 2019 IEEE International\n Conference on Communications (ICC)}, pp. 1--7, 2019.\n\\bibitem{Jianjun2018}\nJ.~Ma, R.~Shrestha, L.~Moeller, and D.~M. Mittleman, ``Invited article: Channel\n performance for indoor and outdoor terahertz wireless links,'' \\emph{APL\n Photonics}, vol.~3, no.~5, p. 051601, 2018.\n\\bibitem{Xiao18}\nZ.~Xiao, Q.~Yang, J.~Huang, Z.~Huang, W.~Zhou, Y.~Gao, R.~Shu, and Z.~He,\n ``Terahertz communication windows and their point-to-point transmission\n verification,'' \\emph{Appl. Opt.}, vol.~57, no.~27, pp. 7673--7680, September\n 2018.\n\\bibitem{Moldovan2014}\nA.~Moldovan, M.~A. Ruder, I.~F. Akyildiz, and W.~H. Gerstacker, ``{LOS} and {NLOS}\n channel modeling for {T}erahertz wireless communication with scattered\n rays,'' \\emph{IEEE Globecom Workshops (GC Wkshps)}, pp. 388--392, 2014.\n\\bibitem{chandtmc2014}\nC.~Han and I.~F. Akyildiz, ``Distance-aware multi-carrier ({DAMC}) modulation\n in terahertz band communication,'' in \\emph{2014 IEEE International\n Conference on Communications (ICC)}, June 2014, pp. 5461--5467.\n\\bibitem{Liaskos2014}\nC.~Liaskos, A.~Tsioliaridou, A.~Pitsillides, N.~Kantartzis, A.~Lalas, X.~A.\n Dimitropoulos, S.~Ioannidis, M.~Kafesaki, and C.~M. Soukoulis, ``Building\n software defined materials with nanonetworks,'' 2014.\n\\bibitem{Mabed2018}\nH.~Mabed and J.~Bourgeois, ``A flexible medium access control protocol for\n dense terahertz nanonetworks,'' in \\emph{Proceedings of the 5th ACM\n International Conference on Nanoscale Computing and Communication}, ser.\n NANOCOM '18.\\hskip 1em plus 0.5em minus 0.4em\\relax New York, NY, USA: ACM,\n 2018, pp. 
1--7.\n\\bibitem{ningzhu2013}\nN.~Zhu and R.~Ziolkowski, ``\\BIBforeignlanguage{English (US)}{Photoconductive\n thz antenna designs with high radiation efficiency, high directivity, and\n high aperture efficiency},'' \\emph{\\BIBforeignlanguage{English (US)}{IEEE\n Transactions on Terahertz Science and Technology}}, vol.~3, no.~6, pp.\n 721--730, November 2013.\n\\bibitem{woongsoo2015}\nW.~Na, L.~Park, and S.~Cho, ``Deafness-aware {MAC} protocol for directional\n antennas in wireless ad hoc networks,'' \\emph{Ad Hoc Networks}, vol.~24, pp.\n 121 -- 134, 2015.\n\\bibitem{chan2018mimo}\nC.~Han, J.~M. Jornet, and I.~Akyildiz, ``Ultra-massive {MIMO} channel modeling\n for graphene-enabled {T}erahertz-band communications,'' in \\emph{IEEE 87th\n Vehicular Technology Conference (VTC Spring)}, June 2018, pp. 1--5.\n\\bibitem{ssdhillon2017}\nS.~Dhillon, M.~Vitiello, E.~Linfield, A.~Davies, M.~Hoffmann, J.~Booske,\n C.~Paoloni, M.~Gensch, P.~Weightman, G.~Williams, E.~Castro-Camus,\n D.~Cumming, F.~Simoens, I.~Escorcia-Carranza, J.~Grant, S.~Lucyszyn,\n M.~Kuwata-Gonokami, K.~Konishi, M.~Koch, C.~Schmuttenmaer, T.~Cocker,\n R.~Huber, A.~Markelz, Z.~Taylor, V.~Wallace, J.~Zeitler, J.~Sibik, T.~Korter,\n B.~Ellison, S.~Rea, P.~Goldsmith, K.~Cooper, R.~Appleby, D.~Pardo,\n P.~Huggard, V.~Krozer, H.~Shams, M.~Fice, C.~Renaud, A.~Seeds, A.~Stohr,\n M.~Naftaly, N.~Ridler, R.~Clarke, J.~Cunningham, and M.~Johnston, ``The 2017\n {T}erahertz science and technology roadmap,'' \\emph{Journal of Physics D:\n Applied Physics}, vol.~50, no.~4, January 2017.\n\\bibitem{jornetcabellos2015}\nJ.~M. Jornet and A.~Cabellos, ``On the feeding mechanisms for graphene-based\n {T}erahertz plasmonic nano-antennas,'' in \\emph{IEEE 15th International\n Conference on Nanotechnology (IEEE-NANO)}, July 2015, pp. 168--171.\n\\bibitem{esquius2014}\nM.~Esquius-Morote, J.~S. 
Gómez-Díaz, and J.~Perruisseau-Carrier,\n ``Sinusoidally modulated graphene leaky-wave antenna for electronic\n beamscanning at {T}erahertz,'' \\emph{IEEE Transactions on Terahertz Science\n and Technology}, vol.~4, no.~1, pp. 116--122, January 2014.\n\\bibitem{graphenejoornakil2010}\nJ.~M. {Jornet} and I.~F. {Akyildiz}, ``Graphene-based nano-antennas for\n electromagnetic nanocommunications in the terahertz band,'' in\n \\emph{Proceedings of the Fourth European Conference on Antennas and\n Propagation}, April 2010, pp. 1--5.\n\\bibitem{elayan2016}\nH.~Elayan, R.~M. Shubair, and A.~Kiourti, ``On graphene-based {T}erahertz\n plasmonic nano-antennas,'' in \\emph{16th Mediterranean Microwave Symposium\n (MMS)}, November 2016, pp. 1--3.\n\\bibitem{jornetakylidiz2013}\nJ.~M. Jornet and I.~F. Akyildiz, ``Fundamentals of electromagnetic nanonetworks\n in the {T}erahertz band,'' \\emph{Found. Trends Netw.}, vol.~7, no. 2-3, pp.\n 77--233, Dec. 2013.\n\\bibitem{mnaftally2017}\nM.~Naftaly, ``Device characterization for {T}erahertz wireless links,'' in\n \\emph{9th International Congress on Ultra Modern Telecommunications and\n Control Systems and Workshops (ICUMT)}, November 2017, pp. 364--369.\n\\bibitem{zhangjornakil2019}\nB.~{Zhang}, J.~M. {Jornet}, I.~F. {Akyildiz}, and Z.~P. {Wu}, ``Mutual coupling\n reduction for ultra-dense multi-band plasmonic nano-antenna arrays using\n graphene-based frequency selective surface,'' \\emph{IEEE Access}, vol.~7, pp.\n 33\\,214--33\\,225, 2019.\n\\bibitem{Kherani2018}\nA.~A. Kherani and R.~Karthik, ``Dynamic beam assignment in narrow beamforming\n and mmwave systems,'' \\emph{2018 Twenty Fourth National Conference on\n Communications (NCC)}, pp. 1--6, 2018.\n\\bibitem{yli2018}\nY.~{Li}, L.~{Su}, T.~{Wei}, Z.~{Zhou}, and N.~{Ge}, ``Location-aware dynamic\n beam scheduling for maritime communication systems,'' in \\emph{2018 10th\n International Conference on Communications, Circuits and Systems (ICCCAS)},\n 2018, pp. 
265--268.\n\\bibitem{khalids2019}\nK.~S. Mohamed, M.~Y. Alias, and M.~Roslee, ``Interference avoidance using\n tdma-beamforming in location aware small cell systems,'' \\emph{Applied\n Sciences}, vol.~9, no.~23, 2019.\n\\bibitem{moshir2016}\nF.~{Moshir} and S.~{Singh}, ``Rate adaptation for terahertz communications,''\n in \\emph{2016 13th IEEE Annual Consumer Communications Networking Conference\n (CCNC)}, 2016, pp. 816--819.\n\\bibitem{phasearr2005}\nH.~Visser, \\emph{\\BIBforeignlanguage{English}{Array and phased array antenna\n basics}}.\\hskip 1em plus 0.5em minus 0.4em\\relax United States: Wiley, 2005.\n\\bibitem{vpetrovmmwave2017}\nV.~Petrov, M.~Komarov, D.~Moltchanov, J.~M. Jornet, and Y.~Koucheryavy,\n ``Interference and {SINR} in millimeter wave and {T}erahertz communication\n systems with blocking and directional antennas,'' \\emph{IEEE Transactions on\n Wireless Communications}, vol.~16, no.~3, pp. 1791--1808, March 2017.\n\\bibitem{jmjor2011}\nJ.~M. Jornet and I.~F. Akyildiz, ``Information capacity of pulse-based wireless\n nanosensor networks,'' in \\emph{8th Annual IEEE Communications Society\n Conference on Sensor, Mesh and Ad Hoc Communications and Networks}, June\n 2011, pp. 80--88.\n\\bibitem{chcapacity2010}\n------, ``Channel capacity of electromagnetic nanonetworks in the {T}erahertz\n band,'' in \\emph{IEEE International Conference on Communications}, May 2010,\n pp. 1--6.\n\\bibitem{fytampanis2017}\nP.~G. Fytampanis, G.~V. Tsoulos, G.~E. Athanasiadou, and D.~A. Zarbouti,\n ``Wireless channel capacity estimation in the {T}erahertz band,'' in\n \\emph{International Workshop on Antenna Technology: Small Antennas,\n Innovative Structures, and Applications (iWAT)}, March 2017, pp. 339--342.\n\\bibitem{Akkas2017}\nM.~A. 
Akkas, ``Terahertz wireless data communication,'' \\emph{Wireless\n Networks}, June 2017.\n\\bibitem{kliui2018}\nK.~Liu, S.~Jia, S.~Wang, X.~Pang, W.~Li, S.~Zheng, H.~Chi, X.~Jin, X.~Zhang,\n and X.~Yu, ``100 gbit/s {T}erahertz photonic wireless transmission in the\n 350-ghz band with extended reach,'' \\emph{IEEE Photonics Technology Letters},\n vol.~30, no.~11, pp. 1064--1067, June 2018.\n\\bibitem{xpang2017}\nX.~Pang, S.~Jia, O.~Ozolins, X.~Yu, H.~Hu, L.~Marcon, P.~Guan, F.~D. Ros,\n S.~Popov, G.~Jacobsen, M.~Galili, T.~Morioka, D.~Zibar, and L.~K. Oxenløwe,\n ``Single channel 106 gbit/s 16{QAM} wireless transmission in the 0.4\n {T}erahertz band,'' in \\emph{Optical Fiber Communications Conference and\n Exhibition (OFC)}, March 2017, pp. 1--3.\n\\bibitem{krishne2014}\nK.~KrishneGowda, A.~Wolf, R.~Kraemer, J.~C. Scheytt, and I.~Kallfass,\n ``Wireless 100 gb/s: {PHY} layer overview and challenges in the {T}erahertz\n frequency band,'' in \\emph{WAMICON}, June 2014, pp. 1--4.\n\\bibitem{jonetakyil2014}\nJ.~M. {Jornet} and I.~F. {Akyildiz}, ``Femtosecond-long pulse-based modulation\n for terahertz band communication in nanonetworks,'' \\emph{IEEE Transactions\n on Communications}, vol.~62, no.~5, pp. 1742--1754, May 2014.\n\\bibitem{vavo2018}\nA.~K. Vavouris, F.~D. Dervisi, V.~K. Papanikolaou, and G.~K. Karagiannidis,\n ``An energy efficient modulation scheme for body-centric nano-communications\n in the {T}erahertz band,'' in \\emph{7th International Conference on Modern\n Circuits and Systems Technologies (MOCAST)}, May 2018, pp. 1--4.\n\\bibitem{sarieddeen2019}\nH.~{Sarieddeen}, M.-S. {Alouini}, and T.~Y. {Al-Naffouri}, ``Terahertz-band\n ultra-massive spatial modulation {MIMO},'' \\emph{arXiv e-prints}, p.\n arXiv:1905.04732, May 2019.\n\\bibitem{cwang2018yu}\nC.~{Wang}, J.~{Yu}, X.~{Li}, P.~{Gou}, and W.~{Zhou}, ``Fiber-{THz}-fiber link\n for {THz} signal transmission,'' \\emph{IEEE Photonics Journal}, vol.~10, no.~2,\n pp. 
1--6, April 2018.\n\\bibitem{sgmutlak2018}\nS.~G. Muttlak, O.~S. Abdulwahid, J.~Sexton, M.~J. Kelly, and M.~Missous,\n ``Ingaas/alas resonant tunneling diodes for {T}erahertz applications: An\n experimental investigation,'' \\emph{IEEE Journal of the Electron Devices\n Society}, vol.~6, pp. 254--262, 2018.\n\\bibitem{railguan2017}\nK.~{Guan}, D.~{He}, A.~{Hrovat}, B.~{Ai}, Z.~{Zhong}, and T.~{Kürner},\n ``Challenges and chances for smart rail mobility at mmwave and thz bands from\n the channels viewpoint,'' in \\emph{2017 15th International Conference on ITS\n Telecommunications (ITST)}, May 2017, pp. 1--5.\n\\bibitem{elabsi2018}\nM.~{El-Absi}, A.~A. {Abbas}, A.~{Abuelhaija}, K.~{Solbach}, and T.~{Kaiser},\n ``Distance and tag aware localization in indoor terahertz systems,'' in\n \\emph{2018 First International Workshop on Mobile Terahertz Systems (IWMTS)},\n July 2018, pp. 1--5.\n\\bibitem{Jornet2016LINKAN}\nJ.~M. Jornet and E.~Einarsson, ``Link and network layers design for ultra-high-\n speed terahertz-band communications networks state university of new york\n (suny) at buffalo,'' 2016.\n\\bibitem{JORNET201435}\nJ.~M. Jornet, ``Low-weight error-prevention codes for electromagnetic\n nanonetworks in the terahertz band,'' \\emph{Nano Communication Networks},\n vol.~5, no.~1, pp. 35 -- 44, 2014.\n\\bibitem{aaboug2020}\nA.~A. {Boulogeorgos} and A.~{Alexiou}, \\emph{IEEE Communications Letters},\n vol.~24, no.~2, pp. 277--281, 2020.\n\\bibitem{joint-energy2012}\nJ.~M. Jornet and I.~F. Akyildiz, ``Joint energy harvesting and communication\n analysis for perpetual wireless nanosensor networks in the terahertz band,''\n \\emph{IEEE Transactions on Nanotechnology}, vol.~11, no.~3, pp. 
570--580, May\n 2012.\n\\bibitem{jxu2018}\nJ.~{Xu}, J.~{Jiang}, Z.~{Wang}, and Y.~{Zhao}, ``Energy harvesting multi-path\n routing for wireless multimedia nanosensor networks in terahertz band,'' in\n \\emph{2018 14th International Wireless Communications Mobile Computing\n Conference (IWCMC)}, June 2018, pp. 1011--1017.\n\\bibitem{sverma2018}\nS.~{Verma}, S.~{Kaur}, G.~{Dhiman}, and A.~{Kaur}, ``Design of a novel energy\n efficient routing framework for wireless nanosensor networks,'' in \\emph{2018\n First International Conference on Secure Cyber Computing and Communication\n (ICSCCC)}, Dec 2018, pp. 532--536.\n\\bibitem{amoldovan2017akil}\nA.~{Moldovan}, P.~{Karunakaran}, I.~F. {Akyildiz}, and W.~H. {Gerstacker},\n ``Coverage and achievable rate analysis for indoor terahertz wireless\n networks,'' in \\emph{IEEE International Conference on Communications (ICC)},\n May 2017, pp. 1--7.\n\\bibitem{xujuan2019}\nJ.~Xu, J.~Kan, and Y.~Zhang, ``Centralized energy harvesting-based tdma\n protocol for terahertz nanosensor networks,'' \\emph{Sensors}, vol.~19,\n no.~20, 2019.\n\\bibitem{wangw2019}\nW.-L. Wang, C.-C. Wang, and X.-W. Yao, ``Slot self-allocation based mac\n protocol for energy harvesting nano-networks,'' \\emph{Sensors}, vol.~19,\n no.~21, 2019.\n\\bibitem{barati2015}\nC.~N. Barati, S.~A. Hosseini, S.~Rangan, P.~Liu, T.~Korakis, S.~S. Panwar, and\n T.~S. Rappaport, ``Directional cell discovery in millimeter wave cellular\n networks,'' \\emph{IEEE Transactions on Wireless Communications}, vol.~14,\n no.~12, pp. 6664--6678, December 2015.\n\\bibitem{qxia2019}\nQ.~{Xia} and J.~M. 
{Jornet}, ``Expedited neighbor discovery in directional\n terahertz communication networks enhanced by antenna side-lobe information,''\n \\emph{IEEE Transactions on Vehicular Technology}, vol.~68, no.~8, pp.\n 7804--7814, Aug 2019.\n\\bibitem{sasiibala2013}\nS.~{Balasubramaniam} and J.~{Kangasharju}, ``Realizing the internet of nano\n things: Challenges, solutions, and applications,'' \\emph{Computer}, vol.~46,\n no.~2, pp. 62--68, Feb 2013.\n\\bibitem{lpan2016}\n``{IEEE} standard for low-rate wireless networks,'' \\emph{IEEE Std\n 802.15.4-2015 (Revision of IEEE Std 802.15.4-2011)}, pp. 1--709, April 2016.\n\\bibitem{ssingh2011}\nS.~Singh, R.~Mudumbai, and U.~Madhow, ``Interference analysis for highly\n directional {60-GHz} mesh networks: The case for rethinking {M}edium {A}ccess\n {C}ontrol,'' \\emph{IEEE/ACM Transactions on Networking}, vol.~19, no.~5, pp.\n 1513--1527, October 2011.\n\\bibitem{bonjour2016}\nR.~{Bonjour}, M.~{Singleton}, S.~A. {Gebrewold}, Y.~{Salamin}, F.~C. {Abrecht},\n B.~{Baeuerle}, A.~{Josten}, P.~{Leuchtmann}, C.~{Hafner}, and J.~{Leuthold},\n ``Ultra-fast millimeter wave beam steering,'' \\emph{IEEE Journal of Quantum\n Electronics}, vol.~52, no.~1, pp. 1--8, Jan 2016.\n\\bibitem{srey2015}\nS.~Rey, ``Terapan: Ultra-high data rate transmission with steerable antennas at\n 300 ghz,'' in \\emph{IEEE 802.15-15-0167-02-0thz}, March 2015.\n\\bibitem{wwithaya2018}\nW.~{Withayachumnankul}, ``Terahertz metasurfaces for beamforming and\n polarization conversion,'' in \\emph{2018 Asia-Pacific Microwave Conference\n (APMC)}, Nov 2018, pp. 112--113.\n\\bibitem{zhosain2019}\nZ.~{Hossain}, C.~N. {Mollica}, J.~F. {Federici}, and J.~M. {Jornet},\n ``Stochastic interference modeling and experimental validation for\n pulse-based terahertz communication,'' \\emph{IEEE Transactions on Wireless\n Communications}, vol.~18, no.~8, pp. 
4103--4115, Aug 2019.\n\\bibitem{Galal2018}\nA.~Galal and X.~Hesselbach, ``Nano-networks communication architecture:\n Modeling and functions,'' \\emph{Nano Communication Networks}, vol.~17, pp. 45\n -- 62, 2018.\n\\bibitem{chanbicen2016}\nC.~{Han}, A.~O. {Bicen}, and I.~F. {Akyildiz}, ``Multi-wideband waveform design\n for distance-adaptive wireless communications in the terahertz band,''\n \\emph{IEEE Transactions on Signal Processing}, vol.~64, no.~4, pp. 910--922,\n Feb 2016.\n\\bibitem{idan2020}\nI.~{Dan}, G.~{Ducournau}, S.~{Hisatake}, P.~{Szriftgiser}, R.~{Braun}, and\n I.~{Kallfass}, ``\\blue{A Terahertz Wireless Communication Link Using a\n Superheterodyne Approach},'' \\emph{IEEE Transactions on Terahertz Science and\n Technology}, vol.~10, no.~1, pp. 32--43, 2020.\n\\bibitem{mehfuz2015}\nM.~S.~I. Mahfuz and S.~Saha, ``Channel sharing based medium access control\n protocol for wireless nano sensing network,'' \\emph{Global Journal of\n Computer Science and Technology}, vol.~15, no.~8, 2015.\n\\bibitem{idemirkol2006}\nI.~Demirkol, C.~Ersoy, and F.~Alagoz, ``{MAC} protocols for wireless sensor\n networks: a survey,'' \\emph{IEEE Communications Magazine}, vol.~44, no.~4,\n pp. 115--121, April 2006.\n\\bibitem{mkolano2018}\nM.~Kolano, O.~Boidol, S.~Weber, D.~Molter, and G.~V. Freymann, ``Single-laser\n polarization-controlled optical sampling system for {T}erahertz - {TDS},'' in\n \\emph{43rd International Conference on Infrared, Millimeter, and Terahertz\n Waves (IRMMW-THz)}, September 2018, pp. 1--3.\n\\bibitem{spriebejacob2011}\nS.~{Priebe}, M.~{Jacob}, and T.~{Kürner}, ``Polarization investigation of\n rough surface scattering for thz propagation modeling,'' in \\emph{Proceedings\n of the 5th European Conference on Antennas and Propagation (EUCAP)}, April\n 2011, pp. 24--28.\n\\bibitem{msrabbani2015}\nM.~S. 
Rabbani and H.~Ghafouri-Shiraz, ``Improvement of microstrip antenna's\n gain, bandwidth and fabrication tolerance at {T}erahertz frequency bands,''\n in \\emph{Wideband and Multi-Band Antennas and Arrays for Civil, Security\n Military Applications}, December 2015, pp. 1--3.\n\\bibitem{plu2017}\nP.~Lu, V.~Rymanov, S.~Dülme, B.~Sievert, A.~Rennings, and A.~Stöhr,\n ``Terahertz beam forming and beam switching using lens-assisted quasi-optical\n thz transmitter,'' in \\emph{International Topical Meeting on Microwave\n Photonics (MWP)}, October 2017, pp. 1--4.\n\\bibitem{snie2019}\nS.~{Nie}, J.~M. {Jornet}, and I.~F. {Akyildiz}, ``Intelligent environments\n based on ultra-massive mimo platforms for wireless communication in\n millimeter wave and terahertz bands,'' in \\emph{IEEE International Conference\n on Acoustics, Speech and Signal Processing (ICASSP)}, May 2019, pp.\n 7849--7853.\n\\bibitem{yzhao2017}\nY.~Zhao, B.~Ai, D.~Fei, Y.~Liu, and N.~Li, ``Adaptive beamforming based on\n subband structure in smart antennas,'' in \\emph{32nd General Assembly and\n Scientific Symposium of the International Union of Radio Science (URSI\n GASS)}, August 2017, pp. 1--2.\n\\bibitem{jma2019}\nJ.~{Ma}, R.~{Shrestha}, W.~{Zhang}, L.~{Moeller}, and D.~M. {Mittleman},\n ``Terahertz wireless links using diffuse scattering from rough surfaces,''\n \\emph{IEEE Transactions on Terahertz Science and Technology}, vol.~9, no.~5,\n pp. 463--470, Sep. 2019.\n\\bibitem{zrong2017}\nZ.~Rong, M.~S. Leeson, and M.~D. Higgins, ``Relay-assisted nanoscale\n communication in the {T}erahertz band,'' \\emph{Micro Nano Letters}, vol.~12,\n no.~6, pp. 373--376, 2017.\n\\bibitem{Park2010}\nM.~Park and H.~K. Pan, ``Effect of device mobility and phased array antennas on\n 60 ghz wireless networks,'' in \\emph{Proceedings of the 2010 ACM\n International Workshop on mmWave Communications: From Circuits to Networks},\n ser. mmCom '10.\\hskip 1em plus 0.5em minus 0.4em\\relax New York, NY, USA:\n ACM, 2010, pp. 
51--56.\n\\end{thebibliography}\n\\end{document}", "id": "67d10fcc-9af2-4057-9070-d37a680aa748", "level": "section", "origin_cites_number": 0, "parent_id": "068ac4ba-0bb0-453c-9e5f-1071f02038e9", "prefix_titles": [ [ "title", "MAC Protocols for Terahertz Communication: A Comprehensive Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
43
[]
null
[ "Liang Zhao" ]
Event Prediction in the Big Data Era: A Systematic Survey
2020
2020-07-19T23:24:52Z
cs.AI
Events are occurrences in specific locations, time, and semantics that nontrivially impact either our society or the nature, such as earthquakes, civil unrest, system failures, pandemics, and crimes. It is highly desirable to be able to anticipate the occurrence of such events in advance in order to reduce the potential social upheaval and damage caused. Event prediction, which has traditionally been prohibitively challenging, is now becoming a viable option in the big data era and is thus experiencing rapid growth, also thanks to advances in high performance computers and new Artificial Intelligence techniques. There is a large amount of existing work that focuses on addressing the challenges involved, including heterogeneous multi-faceted outputs, complex (e.g., spatial, temporal, and semantic) dependencies, and streaming data feeds. Due to the strong interdisciplinary nature of event prediction problems, most existing event prediction methods were initially designed to deal with specific application domains, though the techniques and evaluation procedures utilized are usually generalizable across different domains. However, it is imperative yet difficult to cross-reference the techniques across different domains, given the absence of a comprehensive literature survey for event prediction. This paper aims to provide a systematic and comprehensive survey of the technologies, applications, and evaluations of event prediction in the big data era. First, systematic categorization and summary of existing techniques are presented, which facilitate domain experts' searches for suitable techniques and help model developers consolidate their research at the frontiers. Then, comprehensive categorization and summary of major application domains are provided to introduce wider applications to model developers to help them expand the impacts of their research. 
Evaluation metrics and procedures are summarized and standardized to unify the understanding of model performance among stakeholders, model developers, and domain experts in various application domains. Finally, open problems and future directions for this promising and important domain are elucidated and discussed.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "06b41427-1045-4fd8-b025-95b34ab2f969", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ] ], "subsections": [ "259e3016-29dd-4d94-91c7-f265b734bb7d", "a1fdb3a3-fa2f-4b15-9a7d-81755feafd07", "2f812193-fd1d-4152-99ce-fe63484cff69", "dbd1ff7b-a51a-4b40-b28d-3ec77cefa901", "21838b02-3baa-4e8a-ab7c-6c5bf122bb9f", "0cc2585c-6b4f-4ace-b2f9-61073d4d0fcf" ], "title": "root" }, { "cite_extract_rate": 0.18181818181818102, "cites": [ 340, 339, 341, 8342 ], "content": "An event is a real-world occurrence that takes place in a specific location and time that relates to a particular topic. Events can range from large-scale (e.g., civil unrest events and earthquakes), to medium-scale (e.g., system failures and crime incidents), to small-scale (e.g., authentication events and individual actions) occurrences ~. Event analytics are important in domains as different as healthcare, business, cybersphere, politics, and entertainment, influencing almost every corner of our lives~. The analysis of events has thus been attracting huge attention over the last few decades and can be categorized in terms of their timeliness for various research directions, such as event summarization, detection, and prediction. Unlike retrospective analyses such as event summarization and detection~, event prediction focuses on anticipating events in the future and is the focus of this survey. 
Accurate anticipation of future events enables one to maximize the benefits and minimize the losses associated with some event in the future, bringing huge benefits for both society as a whole and individual members of society in key domains such as disease prevention~, disaster management~, business intelligence~, and economic stability~.\n\\chapquote{``Prediction is very difficult, especially if it's about the future.'' \\ \\ \\ -- Niels Bohr, 1970}{}{}\nEvent prediction has traditionally been prohibitively challenging across different domains, due to the lack or incompleteness of our knowledge regarding the true causes and mechanisms driving event occurrences in most domains. With the advent of the big data era, however, we now enjoy unprecedented opportunities that open up many alternative approaches for dealing with event prediction problems, sidestepping the need to develop a complete understanding of the underlying mechanisms of event occurrence. Based on large amounts of data on historical events and their potential precursors, event prediction methods typically strive to apply predictive mapping to build on these observations to predict future events, utilizing predictive analysis techniques from domains such as machine learning, data mining, pattern recognition, statistics, and other computational models~. 
Event prediction is currently experiencing extremely rapid growth, thanks to advances in sensing techniques (physical sensors and social sensors), prediction techniques (Artificial Intelligence, especially Machine Learning), and high performance computing hardware~.\nEvent prediction in big data is a difficult problem that requires the invention and integration of related techniques to address the serious challenges caused by its unique characteristics, including: \\textbf{1) Heterogeneous multi-output predictions.} Event prediction methods usually need to predict multiple facets of events including their time, location, topic, intensity, and duration, each of which may utilize a different data structure~. This creates unique challenges, including how to jointly predict these heterogeneous yet correlated facets of outputs. Due to the rich information in the outputs, label preparation is usually a highly labor-intensive task performed by human annotators, with automatic methods introducing numerous errors in items such as event coding. So, how can we improve the label quality as well as the model robustness under corrupted labels? The multi-faceted nature of events makes event prediction a multi-objective problem, which raises the question of how to properly unify the prediction performance on different facets. It is also challenging to verify whether a predicted event ``matches'' a real event, given that the various facets are seldom, if ever, 100\\% accurately predicted. So, how can we set up the criteria needed to discriminate between a correct prediction (``true positive'') and a wrong one (``false positive'')? \\textbf{2) Complex dependencies among the prediction outputs.} Beyond conventional isolated tasks in machine learning and predictive analysis, in event prediction the predicted events can correlate to and influence each other~. 
For example, an ongoing traffic incident event could cause congestion on the current road segment in the first 5 minutes but then lead to congestion on other contiguous road segments 10 minutes later. Global climate data might indicate a drought in one location, which could then cause famine in the area and lead to a mass exodus of refugees moving to another location. So, how should we consider the correlations among future events? \\textbf{3) Real-time stream of prediction tasks.} Event prediction usually requires continuous monitoring of the observed input data in order to trigger timely alerts of future potential events~. However, during this process the trained prediction model gradually becomes outdated, as real world events continually change dynamically, concepts are fluid and distribution drifts are inevitable. For example, in September 2008 21\\% of the United States population were social media users, including 2\\% of those over 65. However, by May 2018, 72\\% of the United States population were social media users, including 40\\% of those over 65~. Not only the data distribution but also the number of features and input data sources can vary in real time. Hence, it is imperative to periodically upgrade the models, which raises the further question of how to train models on non-stationary distributions while balancing costs (such as computation and data annotation costs) against timeliness.\nIn addition, event prediction involves many other common yet open challenges, such as imbalanced data (for example data that lacks positive labels in rare event prediction)~, data corruption in inputs~, the uncertainty of predictions~, longer-term predictions (including how to trade-off prediction accuracy and lead time)~, trade-offs between precision and recall~, and how to deal with high-dimensionality~ and sparse data involving many unrelated features~. 
Event prediction problems provide unique testbeds for jointly handling such challenges.\nIn recent years, a considerable amount of research has been devoted to event prediction technique development and applications, in order to address the aforementioned challenges~. Recently, there has been a surge of research that both proposes and applies new approaches in numerous domains, though event prediction techniques are generally still in their infancy. Most existing event prediction methods have been designed for a specific application domain, but their approaches are usually general enough to handle problems in other application domains. Unfortunately, it is difficult to cross-reference these techniques across different application domains serving totally different communities. Moreover, the quality of event prediction results requires sophisticated and specially-designed evaluation strategies due to the subject matter's unique characteristics, for example its multi-objective nature (e.g., accuracy, resolution, efficiency, and lead time) and heterogeneous prediction results (e.g., heterogeneity and multi-output). As yet, however, we lack systematic standardization and comprehensive summarization approaches with which to evaluate the various event prediction methodologies that have been proposed. This absence of a systematic summary and taxonomy of existing techniques and applications in event prediction causes major problems for those working in the field who lack clear information on the existing bottlenecks, traps, open problems, and potentially fruitful future research directions. \nTo overcome these hurdles and facilitate the development of better event prediction methodologies and applications, this survey paper aims to provide a comprehensive and systematic review of the current state of the art for event prediction in the big data era. 
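Challenge 1 above asks how to decide when a predicted event ``matches'' a real one. As an illustrative sketch only (the tolerances, the planar distance, and all names below are assumptions, not a standardized protocol from the literature), predicted events can be greedily paired with true events of the same topic within temporal and spatial tolerances, after which precision, recall, and F1 are computed over the pairing:

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float      # event time (e.g., days since some epoch)
    x: float      # location coordinates (planar, for simplicity)
    y: float
    topic: str

def match_events(predicted, actual, t_tol=1.0, d_tol=5.0):
    """Greedily pair each predicted event with the closest unmatched
    true event of the same topic within the time/distance tolerances,
    then score the pairing with precision, recall, and F1."""
    unmatched = set(range(len(actual)))
    pairs = []
    for i, p in enumerate(predicted):
        best, best_j = None, None
        for j in unmatched:
            a = actual[j]
            if p.topic != a.topic:
                continue
            dt = abs(p.t - a.t)
            dd = ((p.x - a.x) ** 2 + (p.y - a.y) ** 2) ** 0.5
            if dt <= t_tol and dd <= d_tol:
                score = dt / t_tol + dd / d_tol   # smaller = closer
                if best is None or score < best:
                    best, best_j = score, j
        if best_j is not None:
            pairs.append((i, best_j))
            unmatched.discard(best_j)
    tp = len(pairs)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return pairs, precision, recall, f1
```

With `t_tol` and `d_tol` near zero this reduces to exact matching; looser tolerances trade precision against recall, which mirrors the precision/recall and lead-time tensions noted above.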
The paper's major contributions include:\n\\begin{itemize}[leftmargin=*]\n \\item \\textbf{A systematic categorization and summarization of existing techniques.} Existing event prediction methods are categorized according to their event aspects (time, location, and semantics), problem formulation, and corresponding techniques to create the taxonomy of a generic framework. Relationships, advantages, and disadvantages among different subcategories are discussed, along with details of the techniques under each subcategory. The proposed taxonomy is designed to help domain experts locate the most useful techniques for their targeted problem settings.\n \\item \\textbf{A comprehensive categorization and summarization of major application domains.} The first taxonomy of event prediction application domains is provided. The practical significance and problem formulation are elucidated for each application domain or subdomain, enabling it to be easily mapped to the proposed technique taxonomy. This will help data scientists and model developers to search for additional application domains and datasets that they can use to evaluate their newly proposed methods, and at the same time expand their advanced techniques to encompass new application domains.\n \\item \\textbf{Standardized evaluation metrics and procedures.} Due to the nontrivial structure of event prediction outputs, which can contain multiple fields such as time, location, intensity, duration, and topic, this paper proposes a set of standard metrics with which to standardize existing ways to pair predicted events with true events. 
Then additional metrics are introduced and standardized to evaluate the overall accuracy and quality of the predictions to assess how close the predicted events are to the real ones.\n \\item \\textbf{An insightful discussion of the current status of research in this area and future trends.} Based on the comprehensive and systematic survey and investigation of existing event prediction techniques and applications presented here, an overall picture and the shape of the current research frontiers are outlined. The paper concludes by presenting fresh insights into the bottlenecks, traps, and open problems, as well as a discussion of possible future directions.\n\\end{itemize}", "id": "259e3016-29dd-4d94-91c7-f265b734bb7d", "level": "section", "origin_cites_number": 22, "parent_id": "06b41427-1045-4fd8-b025-95b34ab2f969", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Introduction" ] ], "subsections": [ "32ca0d97-ce1e-43c2-a6a2-8b459454e5e4", "13feb38e-f168-4100-aaac-a9303bf823b5" ], "title": "Introduction" }, { "cite_extract_rate": 0.17391304347826, "cites": [ 344, 343, 166, 342 ], "content": "This section briefly outlines previous surveys in various domains that have some relevance to event prediction in big data in three categories, namely: 1. event detection, 2. predictive analytics, and 3. domain-specific event prediction. \nEvent detection has been an extensively explored domain for many years. Its main purpose is to detect historical or ongoing events rather than to predict as yet unseen events in the future~. Event detection typically focuses on pattern recognition~, anomaly detection~, and clustering~, which are very different from those in event prediction. There have been several surveys of research in this domain in the last decade. 
For example, Deng et al.~ and Atefeh and Khreich~ provided overviews of event extraction techniques in social media, while Michelioudakis et al.~ presented a survey of event recognition with uncertainty. Alevizos et al.~ provided a comprehensive literature review of event recognition methods using probabilistic methods.\nPredictive analysis covers the prediction of target variables given a set of dependent variables. These target variables are typically homogeneous scalar or vector data for describing items such as economic indices, housing prices, or sentiments. The target variables may not necessarily be values in the future. Larose~ provides a good tutorial and survey for this domain. Predictive analysis can be broken down into subdomains such as structured prediction~, spatial prediction~, and sequence prediction~, enabling users to handle different types of structure for the target variable. F{\"u}l{\"o}p et al.~ provided a survey and categorization of applications that utilize predictive analytics techniques to perform event processing and detection, while Jiang~ focused on spatial prediction methods that predict the indices that have spatial dependency. Bakır et al.~ summarized the literature on predicting structural data such as geometric objects and networks, and Arias et al.~, Phillips et al.~, and Yu and Kak~ all proposed techniques for predictive analysis using social data.\nAs event prediction methods are typically motivated by specific application domains, there are a number of surveys of event prediction for domains such as flood events~, social unrest~, wind power ramp forecasting~, tornado events~, temporal events without location information~, online failures~, and business failures~. 
However, in spite of its promise and its rapid growth in recent years, the domain of event prediction in big data still suffers from the lack of a comprehensive and systematic literature survey covering all its various aspects, including relevant techniques, applications, evaluations, and open problems.", "id": "32ca0d97-ce1e-43c2-a6a2-8b459454e5e4", "level": "subsection", "origin_cites_number": 23, "parent_id": "259e3016-29dd-4d94-91c7-f265b734bb7d", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Introduction" ], [ "subsection", "Related Surveys" ] ], "subsections": [], "title": "Related Surveys" }, { "cite_extract_rate": 0, "cites": [], "content": "The remainder of this article is organized as follows. Section \\ref{sec:formulation} presents generic problem formulations for event prediction and the evaluation of event prediction results. Section \\ref{sec:techniques} then presents a taxonomy and comprehensive description of event prediction techniques, after which Section \\ref{sec:applications} categorizes and summarizes the various applications of event prediction. 
Section \\ref{sec:discussion} lists the open problems and suggests future research directions and this survey concludes with a brief summary in Section \\ref{sec:conclusions}.", "id": "13feb38e-f168-4100-aaac-a9303bf823b5", "level": "subsection", "origin_cites_number": 0, "parent_id": "259e3016-29dd-4d94-91c7-f265b734bb7d", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Introduction" ], [ "subsection", "Outline" ] ], "subsections": [], "title": "Outline" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:formulation}\nThis section begins by examining the generic denotation and formulation of the event prediction problem (Section \\ref{sec:problem_formulation}) and then considers way to standardize event prediction evaluations (Section \\ref{sec:evaluation}).", "id": "a1fdb3a3-fa2f-4b15-9a7d-81755feafd07", "level": "section", "origin_cites_number": 0, "parent_id": "06b41427-1045-4fd8-b025-95b34ab2f969", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Problem Formulation and Performance Evaluations" ] ], "subsections": [ "4328a3f1-5620-4dc1-a83c-161ef2a0d369", "4dd48b70-fd8e-40e4-94b7-33662c5dd512" ], "title": "Problem Formulation and Performance Evaluations" }, { "cite_extract_rate": 0.2, "cites": [ 341 ], "content": "\\label{sec:problem_formulation}\nAn event refers to a real-world occurrence that happens at some specific time and location with specific semantic topic~. We can use $y=(t,l,s)$ to denote an event where its time $t\\in\\mathcal T$, its location $l\\in\\mathcal{L}$, and its semantic meaning $s\\in\\mathcal{S}$. Here, $\\mathcal T$, $\\mathcal L$, and $\\mathcal S$ represent the time domain, location domain, and semantic domain, respectively. Notice that these domains need to have very general meanings that cover a wide range of types of entities. 
For example, the location $\mathcal L$ can include any features that can be used to locate the place of an event in terms of a point or a neighborhood in either Euclidean space (e.g., coordinate and geospatial region) or non-Euclidean space (e.g., a vertex or subgraph in a network). Similarly, the semantic domain $\mathcal S$ can contain any type of semantic features that are useful when elaborating the semantics of an event's various aspects, including its actors, objects, actions, magnitude, textual descriptions, and other profiling information. For example, \emph{(``11am, June 1, 2019'', ``Hermosillo, Sonora, Mexico'', ``Student Protests'')} and \emph{(``June 1, 2010'', ``Berlin, Germany'', ``Red Cross helps pandemics control'')} denote the time, location, and semantics of two events, respectively.\nAn event prediction system requires inputs that could indicate future events, called event indicators, and these could contain both critical information on events that precede the future event, known as precursors, and irrelevant information~. Event indicator data can be denoted as $X\subseteq\mathcal{T}\times\mathcal{L}\times\mathcal{F}$, where $\mathcal{F}$ is the domain of the features other than location and time. 
If we denote the current time as $t_{\mathrm{now}}$ and define the past time and future time as $\mathcal{T}^{-}\equiv\{t|t\le t_{\mathrm{now}},t\in\mathcal{T}\}$ and $\mathcal{T}^{+}\equiv\{t|t> t_{\mathrm{now}},t\in\mathcal{T}\}$, respectively, the event prediction problem can now be formulated as follows:\n\begin{definition}[Event Prediction] Given the event indicator data $X\subseteq\mathcal{T}^{-}\times\mathcal{L}\times\mathcal{F}$ and historical event data $Y_0\subseteq\mathcal{T}^{-}\times\mathcal{L}\times\mathcal{S}$, event prediction is a process that outputs a set of predicted future events $\hat Y\subseteq\mathcal{T}^{+}\times\mathcal{L}\times\mathcal{S}$, such that each predicted future event $\hat y=(t,l,s)\in \hat Y$ satisfies $t>t_{\mathrm{now}}$.\n\end{definition}\nNot every event prediction method necessarily focuses on predicting all three domains of time, location, and semantics simultaneously, but may instead predict any part of them. For example, when predicting a clinical event such as the recurrence of disease in a patient, the event location might not always be meaningful~, but when predicting outbreaks of seasonal flu, the semantic meaning is already known and the focus is the location and time~, while when predicting political events, the location, time, and semantics (e.g., event type, participant population type, and event scale) are sometimes all necessary~. 
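To make the formulation above concrete, the tuple $y=(t,l,s)$ and the constraint $t>t_{\mathrm{now}}$ can be sketched in a few lines of Python; a minimal sketch, with all class and function names being illustrative assumptions rather than part of any existing system:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class Event:
    """An event y = (t, l, s): time, location, and semantics."""
    time: datetime               # t in the time domain T
    location: Optional[str]      # l in the location domain L (may be absent)
    semantics: Optional[str]     # s in the semantic domain S (may be absent)


def is_future_event(event: Event, t_now: datetime) -> bool:
    """A predicted event must satisfy t > t_now (i.e., t lies in T+)."""
    return event.time > t_now


t_now = datetime(2019, 6, 1, 10, 0)
pred = Event(datetime(2019, 6, 1, 11, 0),
             "Hermosillo, Sonora, Mexico", "Student Protests")
print(is_future_event(pred, t_now))  # True
```

Any of the three fields may be left unset, mirroring the observation that a method may predict only a subset of time, location, and semantics.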
Moreover, due to the intrinsic nature of time, location, and semantic data, their prediction techniques and evaluation metrics necessarily differ, as described in the following.", "id": "4328a3f1-5620-4dc1-a83c-161ef2a0d369", "level": "subsection", "origin_cites_number": 5, "parent_id": "a1fdb3a3-fa2f-4b15-9a7d-81755feafd07", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Problem Formulation and Performance Evaluations" ], [ "subsection", "Problem Formulation" ] ], "subsections": [], "title": "Problem Formulation" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:evaluation}\nEvent prediction evaluation essentially investigates the goodness of fit for a set of predicted events $\hat Y$ against real events $Y$. Unlike the outputs of conventional machine learning models, such as the simple scalar values used to indicate class types in classification or numerical values in regression, the outputs of event prediction are entities with rich information. Before we evaluate the quality of a prediction, we need to first determine the pairs of predictions and labels that will be used for the comparison. 
Hence, we must first optimize the process of matching predictions and real events (Section \\ref{sec:matching}) before evaluating the prediction error and accuracy (Section \\ref{sec:acc_err}).\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.7\\textwidth]{images/time2.png}\\vspace{-0.3cm}\n\\caption{Various types of times in event prediction.\\vspace{-0.3cm}}\n\\label{figure:planned_event}\n\\end{figure}", "id": "4dd48b70-fd8e-40e4-94b7-33662c5dd512", "level": "subsection", "origin_cites_number": 0, "parent_id": "a1fdb3a3-fa2f-4b15-9a7d-81755feafd07", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Problem Formulation and Performance Evaluations" ], [ "subsection", "Event Prediction Evaluation" ] ], "subsections": [ "4bedfae7-0c35-44d7-8abd-d94a4f78bca8", "d3502027-6bbb-4d5b-bdf0-90e57f4a8b2a" ], "title": "Event Prediction Evaluation" }, { "cite_extract_rate": 0.2, "cites": [ 8343, 341 ], "content": "\\label{sec:matching}\nThe following two types of matching are typically used: \n\\begin{itemize}\n\\item\\textbf{Prefixed matching:} The predicted events will be matched with the corresponding ground-true real events if they share some key attributes. For example, for event prediction at a particular time and location point, we can evaluate the prediction against the ground truth for that time and location. This type of matching is most common when each of the prediction results can be uniquely distinguished along the predefined attributes (for example, location and time) that have a limited number of possible values, so that one-on-one matching between the predicted and real events are easily achieved~. 
For example, to evaluate the quality of a predicted event on June 1, 2019 in San Francisco, USA, the true event occurrence on that date in San Francisco can be used for the evaluation.\n\item\textbf{Optimized matching:} In situations where one-on-one matching is not easily achieved for any event attribute, the quality of the match between the set of predicted events and the set of real events may instead need to be assessed via an optimized matching strategy~. For example, consider two predictions, \textbf{Prediction 1}: (``9am, June 4, 2019'', ``Nogales, Sonora, Mexico'', ``Worker Strike''), and \textbf{Prediction 2}: (``11am, June 1, 2019'', ``Hermosillo, Sonora, Mexico'', ``Student Protests''). The two ground truth events that these can usefully be compared with are \textbf{Real Event 1}: (``9am, June 1, 2019'', ``Hermosillo, Sonora, Mexico'', ``Teacher Protests''), and \textbf{Real Event 2}: (``June 4, 2019'', ``Navojoa, Sonora, Mexico'', ``General-population Protest''). Neither prediction is an exact match for any of the attributes of the real events, so we need to find a ``best'' matching among them, which in this case is between \textbf{Prediction 1} and \textbf{Real Event 2}, and between \textbf{Prediction 2} and \textbf{Real Event 1}. This type of matching allows some degree of inaccuracy in the matching process by quantifying the distance between the predicted and real events among all the attribute dimensions. The distance metrics are typically either Euclidean distance~ or some other distance metric~. Some researchers have hired referees to manually check the similarity of semantic meanings~, but another way is to use event coding to code the events into an event type taxonomy and then consider a match to have been achieved if the event type matches~.\nBased on the distance between each pair of predicted and real events, the optimal matching will be the one that results in the smallest average distance~. 
However, if there are $m$ predicted events and $n$ real events, there can be as many as $2^{m\cdot n}$ possible ways of matching, making it prohibitively difficult to find the optimal solution. Moreover, there could be different rules for matching. For example, the ``multiple-to-multiple'' rule shown in Figure \ref{fig:event_matching}(a) allows one predicted (real) event to match multiple real (predicted) events~, while ``Bipartite matching'' only allows one-to-one matching between predicted and real events (Figure \ref{fig:event_matching}(b)). ``Non-crossing matching'' requires that the real events matched by the predicted events follow the same chronological order (Figure \ref{fig:event_matching}(c)). In order to utilize any of these types of matching, researchers have suggested using event matching optimization to learn the optimal set of ``(real event, predicted event)'' pairs~.\n\end{itemize}\n\begin{figure}[htb]\n \centering\n \includegraphics[width=\textwidth]{images/matching_events.jpg}\vspace{-0.3cm}\n\caption{Different rules for matching predicted and real events.\vspace{-0.3cm}}\n\label{fig:event_matching}\n\end{figure}", "id": "4bedfae7-0c35-44d7-8abd-d94a4f78bca8", "level": "subsubsection", "origin_cites_number": 10, "parent_id": "4dd48b70-fd8e-40e4-94b7-33662c5dd512", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Problem Formulation and Performance Evaluations" ], [ "subsection", "Event Prediction Evaluation" ], [ "subsubsection", "Matching predicted events and real events." ] ], "subsections": [], "title": "Matching predicted events and real events." 
}, { "cite_extract_rate": 0.1, "cites": [ 341 ], "content": "\label{sec:acc_err}\nThe effectiveness of the event predictions is evaluated in terms of two indicators: 1) Goodness of Matching, which evaluates performance metrics such as the number and percentage of matched events~, and 2) Quality of Matched Predictions, which evaluates how close the predicted event is to the real event for each pair of matched events~.\n\begin{itemize}\n\item\textbf{Goodness of Matching.} A \emph{true positive} means a real event has been successfully matched by a predicted event; if a real event has not been matched by any predicted event, then it is called a \emph{false negative}; and a \emph{false positive} means a predicted event has failed to match any real event, which is referred to as a \emph{false alarm}. Assume the total number of predictions is $N$, the number of real events is $\hat N$, the number of true positives is $N_{TP}$, the number of false negatives is $N_{FN}$, and the number of false positives is $N_{FP}$. Then, the following key evaluation metrics can be calculated: Precision=$N_{TP}/(N_{TP}+N_{FP})$, Recall=$N_{TP}/(N_{TP}+N_{FN})$, F-measure = $2\cdot\mathrm{Precision}\cdot\mathrm{Recall}/(\mathrm{Precision}+\mathrm{Recall})$. Other measurements such as the area under the ROC curves are also commonly used~. This approach can be extended to include other items such as multi-class precision/recall, and Precision/Recall at Top $K$~.\n\item\textbf{Quality of Matched Predictions.}\nIf a predicted event matches a real one, it is common to go on to evaluate how close they are. This reflects the quality of the matched predictions, in terms of different aspects of the events. Event time is typically a numerical value and hence can be easily measured in terms of metrics such as mean squared error, root mean squared error, and mean absolute error~. 
This is also the case for location in Euclidean space, which can be measured in terms of the Euclidean distance between the predicted point (or region) and the real point (or region). Some researchers consider the administrative unit resolution. For example, a predicted location (``New York City'', ``New York State'', ``USA'') has a distance of 2 from the real location (``Los Angeles'', ``California'', ``USA'')~. Others prefer to measure multi-resolution location prediction quality as follows: $(1/3)(l_{country}+l_{country}\\cdot l_{state}+l_{country}\\cdot l_{state}\\cdot l_{city})$, where $l_{city}$, $l_{state}$, and $l_{country}$ can only be either $0$ (i.e., no match to the truth) or $1$ (i.e., completely matches the truth)~. For a location in non-Euclidean space such as a network~, the quality can be measured in terms of the shortest path length between the predicted node (or subgraph) and the real node (or subgraph), or by the F-measure between the detected subsets of nodes against the real ones, which is similar to the approach for evaluating community detection~. 
For event topics, in addition to conventional ways of evaluating continuous values such as population size, ordinal values such as event scale, and categorical values such as event type, actors, and actions, more complex semantic values such as texts can be evaluated using Natural Language Processing measurements such as edit distance, BLEU score, Top-$K$ precision, and ROUGE~.\n\end{itemize}", "id": "d3502027-6bbb-4d5b-bdf0-90e57f4a8b2a", "level": "subsubsection", "origin_cites_number": 10, "parent_id": "4dd48b70-fd8e-40e4-94b7-33662c5dd512", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Problem Formulation and Performance Evaluations" ], [ "subsection", "Event Prediction Evaluation" ], [ "subsubsection", "Metrics of Effectiveness" ] ], "subsections": [], "title": "Metrics of Effectiveness" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:techniques}\nThis section focuses on the taxonomy and representative techniques utilized for each category and subcategory. Due to the heterogeneity of the prediction output, the technique types depend on the type of output to be predicted, such as time, location, and semantics. As shown in Figure \ref{fig:taxonomy}, all the event prediction methods are classified in terms of their goals, including time, location, semantics, and the various combinations of these three. 
These are then further categorized in terms of the output forms of the goals of each and the corresponding techniques normally used, as elaborated in the following.\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=\\textwidth]{images/taxonomy5.pdf}\\vspace{-0.3cm}\n\\caption{Taxonomy of event prediction problems and techniques.\\vspace{-0.3cm}}\n\\label{fig:taxonomy}\n\\end{figure}", "id": "2f812193-fd1d-4152-99ce-fe63484cff69", "level": "section", "origin_cites_number": 0, "parent_id": "06b41427-1045-4fd8-b025-95b34ab2f969", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Event Prediction Techniques" ] ], "subsections": [ "38bc1c87-73d2-4a36-9803-aac9d2369721", "16b7bc6c-2456-4e9d-850f-7cd33640592e", "0b117367-592e-4a5b-aa08-4c81aba9a952", "a899718d-d44b-4906-ba2a-558ce46075dd" ], "title": "Event Prediction Techniques" }, { "cite_extract_rate": 0, "cites": [], "content": "Event time prediction focuses on predicting when future events will occur. 
Based on their time granularity, time prediction methods can be categorized into three types: 1) occurrence prediction: binary-valued prediction on whether or not an event will occur in a future time period; 2) discrete-time prediction: in which future time slot the event will occur; and 3) continuous-time prediction: at which precise time point the future event will occur.", "id": "38bc1c87-73d2-4a36-9803-aac9d2369721", "level": "subsection", "origin_cites_number": 0, "parent_id": "2f812193-fd1d-4152-99ce-fe63484cff69", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Event Prediction Techniques" ], [ "subsection", "Time Prediction" ] ], "subsections": [ "53f5b3d4-464b-4905-b5fb-aea3535cd2c7", "22760e5e-c276-42d4-af95-1a511b27471d", "25fc629e-0d7b-4dbd-8fa3-37efada755ed" ], "title": "Time Prediction" }, { "cite_extract_rate": 0.043478260869565, "cites": [ 8344 ], "content": "Occurrence prediction is arguably the most extensive, classical, and generally simplest type of event time prediction task~. It focuses on identifying whether there will be an event occurrence (positive class) or not (negative class) in a future time period~. This problem is usually formulated as a binary classification problem, although a handful of other methods instead leverage anomaly detection or regression-based techniques.\n\noindent\textbf{1. Binary classification.} Binary classification methods have been extensively explored for event occurrence prediction. The goal here is essentially to estimate and compare the values of $f(y=``Yes''|X)$ and $f(y=``No''|X)$, where the former denotes the score or likelihood of event occurrence given observation $X$ while the latter corresponds to no event occurrence. If the value of the former is larger than the latter, then a future event occurrence is predicted, but if not, there is no event predicted. 
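As a toy illustration of the simplest, threshold-style flavor of this comparison, the score $f(y=``Yes''|X)$ can be a hand-weighted sum over engineered precursor features; the features, weights, and threshold below are invented for illustration only:

```python
def score_occurrence(x, weights, bias=0.0):
    """f(y='Yes'|X): a simple linear score over hand-engineered precursor features."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias


def predict_occurrence(x, weights, bias=0.0, threshold=0.0):
    """Predict an event occurrence when the 'Yes' score exceeds the 'No' side,
    here collapsed into a single score compared against a threshold."""
    return score_occurrence(x, weights, bias) > threshold


# x = (protest mentions per day, keyword burst ratio) -- hypothetical precursors.
weights = (0.8, 1.5)
print(predict_occurrence((3.0, 0.9), weights, threshold=2.0))  # 0.8*3 + 1.5*0.9 = 3.75 > 2, so True
```

In practice the weights would be learned (e.g., by logistic regression or an SVM) rather than set by hand.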
To implement $f$, the methods typically used rely on discriminative models, where dedicated feature engineering is leveraged to manually extract potential event precursor features to feed into the models. Over the years, researchers have leveraged various binary classification techniques ranging from the simplest threshold-based methods~, to more sophisticated methods such as logistic regression~, Support Vector Machines~, (Convolutional) Neural Networks~, and decision trees~. In addition to discriminative models, generative models~ have also been used to embed human knowledge for classifying event occurrences using Bayesian decision techniques. Specifically, instead of assuming that the input features are independent, prior knowledge can also be directly leveraged to establish Bayesian networks among the observed features and variables based on graphical models such as (semi-)hidden Markov models~ and autoregressive logit models~. The joint probabilities $p(y=``Yes'',X)$ or $p(y=``No'',X)$ can thus be estimated using graphical models, and then utilized to estimate $f(y=``Yes''|X)=p(y=``Yes''|X)$ and $f(y=``No''|X)=p(y=``No''|X)$ using Bayesian rules~.\n\noindent\textbf{2. Anomaly detection.} Alternatively, anomaly detection can also be utilized to learn a ``prototype'' of normal samples (typical values corresponding to the situation of no event occurrence), and then identify if any newly-arriving sample is close to or distant from the normal samples, with distant ones being identified as future event occurrences. Such methods are typically utilized to handle ``rare event'' occurrences, especially when the training data is highly imbalanced with little to no data for ``positive'' samples. Anomaly detection techniques such as one-class classification~ and hypothesis testing~ are often utilized here.\n\noindent\textbf{3. 
Regression.} In addition to simply predicting the occurrence or not, some researchers have sought to extend the binary prediction problem to deal with ordinal and numerical prediction problems, including event count prediction based on (auto)regression~, event size prediction using linear regression~, and event scale prediction using ordinal regression~.", "id": "53f5b3d4-464b-4905-b5fb-aea3535cd2c7", "level": "subsubsection", "origin_cites_number": 23, "parent_id": "38bc1c87-73d2-4a36-9803-aac9d2369721", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Event Prediction Techniques" ], [ "subsection", "Time Prediction" ], [ "subsubsection", "Occurrence Prediction" ] ], "subsections": [], "title": "Occurrence Prediction" }, { "cite_extract_rate": 0, "cites": [], "content": "In many applications, practitioners want to know the approximate time (i.e., the date, week, or month) of future events in addition to just their occurrence. To do this, the time is typically first partitioned into different time slots and the various methods focus on identifying which time slot future events are likely to occur in. Existing research on this problem can be classified into either direct or indirect approaches.\n\noindent\textbf{1. Direct Approaches}. These methods discretize the future time into discrete values, which can take the form of some number of time windows or time scales such as near future, medium future, or distant future. These are then used to directly predict the integer-valued index of future time windows of the event occurrence using (auto)regression methods~, or to predict the ordinal values of future time scales using ordinal regression or classification~.\n\noindent\textbf{2. Indirect Approaches}. 
These methods adopt a two-step approach: the first step places the data into a series of time bins and then performs time series forecasting, using techniques such as autoregressive models~, based on the historical time series $x=\{x_1,\cdots, x_T\}$ to obtain the future time series $\hat x=\{x_{T+1},\cdots, x_{\hat T}\}$. The second step is to identify events in the predicted future time series $\hat x$ using either unsupervised methods such as burstiness detection~ and change detection~, or supervised techniques based on learning an event characterization function. For example, existing works~ first represent the predicted future time series $\hat x\in\mathbb{R}^{\hat T\times D}$ using time-delayed embedding, into $\tilde x\in\mathbb{R}^{\hat T\times D'}$ where each observation at time $t$ can be represented as $\tilde x_t=\{x_{t-(D'-1)\tau},\cdots, x_{t-2\tau},x_{t-\tau},x_t\}$ and $t=T,T+1,\cdots \hat T$. Then an event characterization function $f_c(\tilde x_t)$ is established to map $\tilde x_t$ to the likelihood of an event, which can be fitted based on the event labels provided in the training set. Overall, unsupervised methods require users to assume the type of patterns (e.g., burstiness and change) of future events based on prior knowledge but do not require event label data. 
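The time-delayed embedding and event characterization function $f_c$ of the indirect pipeline can be sketched as follows; the toy series and the hand-written jump detector standing in for $f_c$ are illustrative assumptions (in practice $f_c$ is learned from labeled data):

```python
def time_delay_embed(series, d, tau):
    """Build x~_t = (x_{t-(d-1)tau}, ..., x_{t-tau}, x_t) for every valid t."""
    start = (d - 1) * tau
    return [tuple(series[t - k * tau] for k in range(d - 1, -1, -1))
            for t in range(start, len(series))]


def event_likelihood(window):
    """A toy event characterization function f_c: flags a sudden jump
    within the embedded window."""
    return 1.0 if max(window) - min(window) > 5 else 0.0


series = [1, 1, 2, 2, 9, 9, 2, 1]           # a (predicted) future time series
embedded = time_delay_embed(series, d=3, tau=1)
likelihoods = [event_likelihood(w) for w in embedded]
```

Here the spike around value 9 is flagged, while the flat prefix is not; a learned $f_c$ would replace the hard-coded jump rule.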
However, in cases where the event time series pattern is difficult to assume but the label data is available, supervised learning-based methods are usually used.", "id": "22760e5e-c276-42d4-af95-1a511b27471d", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "38bc1c87-73d2-4a36-9803-aac9d2369721", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Event Prediction Techniques" ], [ "subsection", "Time Prediction" ], [ "subsubsection", "Discrete-time Prediction" ] ], "subsections": [], "title": "Discrete-time Prediction" }, { "cite_extract_rate": 0.23529411764705802, "cites": [ 9104, 345, 346, 9105 ], "content": "Discrete-time prediction methods, although usually simple to establish, also suffer from several issues. First, their time resolution is limited to the discretization granularity; increasing this granularity significantly increases the computational resources required, which means the resolution cannot be arbitrarily high. Moreover, the granularity is itself a hyperparameter to which the prediction accuracy is sensitive, rendering it difficult and time-consuming to tune during training. To address these issues, a number of techniques avoid discretization by directly predicting the continuous-valued event time~, usually by leveraging one of three techniques.\n\noindent\textbf{1. Simple Regression.} The simplest methods directly formalize continuous-event-time prediction as a regression problem~, where the output is the numerical value of the future event time~ and/or its duration~. Common regressors such as linear regression and recurrent neural networks have been utilized for this. Despite their apparent simplicity, this is not straightforward as simple regression typically assumes a Gaussian distribution~, which does not usually reflect the true distribution of event times. 
For example, the future event time needs to be left-bounded (i.e., larger than the current time), as well as being typically non-symmetric and usually periodic, with recurrent events having multiple peaks in the probability density function along the time dimension.\n\noindent\textbf{2. Point Processes.} As they allow more flexibility in fitting true event time distributions, point process methods~ are widely leveraged and have demonstrated their effectiveness for continuous time event prediction tasks. They require a conditional intensity function, defined as follows:\n\begin{align}\n\label{eq:intensity}\n \lambda(t|X)=\mathbb{E}[N(t,t+\mathrm{d}t)/{\mathrm{d}t}|X]=g(t|X)/(1-G(t|X))\n\end{align}\nwhere $g(t|X)$ is the conditional density function of the event occurrence probability at time $t$ given an observation $X$, $G(t|X)$ is its corresponding cumulative distribution function, and $N(t,t+\mathrm{d}t)$ denotes the count of events during the time period between $t$ and $t+\mathrm{d}t$, where $\mathrm{d}t$ is an infinitely-small time period.\nHence, by leveraging the relation between the density and cumulative functions and then rearranging Equation \eqref{eq:intensity}, the following conditional density function is obtained:\n\begin{align}\n g(t|X)=\lambda(t|X)\cdot\mathrm{exp}\left(-\int_{t_0}^t\lambda(u|X)\mathrm{d}u\right)\n\end{align}\n\indent Once the above model has been trained using a technique such as maximum likelihood~, the time of the next event in the future is predicted as:\n\begin{align}\n \hat t=\int_{t_0}^{\infty}t\cdot g(t|X)\mathrm{d}t\n\end{align}\nAlthough existing methods typically share the same workflow as that shown above, they vary in the way they define the conditional intensity function $\lambda(t|X)$. Traditional models typically utilize prescribed distributions such as the Poisson distribution~, Gamma distribution~, Hawkes process~, Weibull process~, and other distributions~. 
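For a prescribed intensity, the prediction $\hat t=\int t\cdot g(t|X)\,\mathrm{d}t$ can be approximated by straightforward numerical integration. A minimal stdlib-only sketch, using a constant (homogeneous Poisson) intensity and a Weibull-style intensity as illustrative choices:

```python
import math


def expected_event_time(intensity, t0=0.0, horizon=50.0, dt=0.001):
    """Numerically evaluate t_hat = integral of t * g(t) dt, with
    g(t) = intensity(t) * exp(-integral of intensity from t0 to t)."""
    cum, t_hat = 0.0, 0.0
    t = t0
    while t < horizon:
        lam = intensity(t)
        g = lam * math.exp(-cum)   # conditional density at t
        t_hat += t * g * dt        # accumulate integral of t * g(t)
        cum += lam * dt            # accumulate integral of the intensity
        t += dt
    return t_hat


# Constant intensity 0.5 (homogeneous Poisson): expected waiting time 1/0.5 = 2.
const = expected_event_time(lambda t: 0.5)
# Weibull-style intensity (k/s)*(t/s)^(k-1) with k=2, s=2 (illustrative).
weib = expected_event_time(lambda t: (2 / 2.0) * (t / 2.0) ** (2 - 1))
```

The first case recovers the familiar exponential mean, and the second the Weibull mean $s\,\Gamma(1+1/k)\approx 1.77$, up to discretization error.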
For example, Damaschke et al.~ utilized a Weibull distribution to model volcano eruption events, while Ertekin et al.~ instead proposed the use of a non-homogeneous Poisson process to fit the conditional intensity function for power system failure events. However, in many other situations where there is no information regarding appropriate prescribed distributions, researchers must start by leveraging nonparametric approaches to learn sophisticated distributions from the data using expressive models such as neural networks. For example, Simma and Jordan~ utilized an RNN to learn a highly nonlinear function of $\lambda(t|X)$.\n\noindent\textbf{3. Survival Analysis}. Survival analysis~ is related to point processes in that it also defines an event intensity or hazard function, but in this case based on survival probability considerations, as follows:\n\begin{align}\n\label{eq:survival_analysis}\n H(t|X)=\left(\xi(t-\mathrm{d}t|X)-\xi(t|X)\right)/\xi(t|X),\ \mathrm{where }\ \xi(t|X)=p(\hat t>t)\n\end{align}\nwhere $H(t|X)$ is the so-called hazard function denoting the hazard of event occurrence between time $(t-\mathrm{d}t)$ and $t$ for a given observation $X$. Either $H(t|X)$ or $\xi(t|X)$ could be utilized for predicting the time of future events. For example, the event occurrence time can be estimated as the time when $\xi(t|X)$ drops below a specific value. Also, one can obtain $\xi(t|X)=\exp\left(-\int_0^t H(u|X)\mathrm{d}u\right)$ according to Equation \eqref{eq:survival_analysis}~. Here $H(t|X)$ can adopt any one of several prescribed models, such as the well-known Cox hazard model~. To learn the model directly from the data, some researchers have recommended enhancing it using deep neural networks~. Vahedian et al.~ suggest learning the survival probability $\xi(t|X)$ and then applying the function $H(\cdot|X)$ to indicate an event at time $t$ if $H(t|X)$ is larger than a predefined threshold value. 
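The relation $\xi(t|X)=\exp(-\int_0^t H(u|X)\mathrm{d}u)$ and the threshold rule on the survival probability can be sketched numerically as follows; the constant hazard and the 0.5 threshold are illustrative choices, not taken from any cited method:

```python
import math


def survival_curve(hazard, horizon, dt=0.01):
    """Evaluate xi(t) = exp(-integral of H(u) du from 0 to t) on a grid."""
    cum, curve = 0.0, []
    t = 0.0
    while t <= horizon:
        curve.append((t, math.exp(-cum)))
        cum += hazard(t) * dt
        t += dt
    return curve


def predicted_event_time(curve, threshold=0.5):
    """First time the survival probability falls below the threshold."""
    for t, xi in curve:
        if xi < threshold:
            return t
    return None


# Constant hazard H = 0.2: xi(t) = exp(-0.2 t), which crosses 0.5 at ln(2)/0.2.
curve = survival_curve(lambda t: 0.2, horizon=10.0)
t_event = predicted_event_time(curve)
```

With a learned, covariate-dependent hazard (e.g., a Cox-style model), only the `hazard` callable would change.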
A classifier can also be utilized.\nInstead of using the raw sequence data, the conditional intensity function can also be projected onto additional continuous-time latent state layers that eventually map to the observations~. These latent states can then be extracted using techniques such as hidden semi-Markov models~, which ensure that the continuous-time patterns are elicited.
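The point-process workflow above (a density derived from the intensity, followed by an expectation) can be sketched numerically as follows; the constant-intensity (homogeneous Poisson) choice and the integration horizon are illustrative assumptions, not part of any cited method:

```python
import numpy as np

def predict_next_event_time(intensity, t0=0.0, horizon=50.0, n=200_000):
    """Predict the expected next-event time of a temporal point process.

    Computes g(t) = lambda(t) * exp(-integral_{t0}^{t} lambda(u) du) on a
    grid and returns the expectation of t under g(t) via trapezoidal
    numerical integration.
    """
    t = np.linspace(t0, t0 + horizon, n)
    lam = intensity(t)
    dt = np.diff(t)
    # Compensator: cumulative integral of the intensity from t0 to t.
    Lam = np.concatenate([[0.0], np.cumsum((lam[1:] + lam[:-1]) / 2 * dt)])
    g = lam * np.exp(-Lam)  # conditional density of the next event time
    integrand = t * g
    return float(np.sum((integrand[1:] + integrand[:-1]) / 2 * dt))

# Homogeneous Poisson process with rate 2.0: the expected waiting time is 0.5.
t_hat = predict_next_event_time(lambda t: np.full_like(t, 2.0))
```

Swapping in a time-varying `intensity` (e.g., a periodic function) yields predictions for non-homogeneous processes under the same workflow.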
This type of representation is most suitable for situations where the spatial size of the event can be neglected, or where event locations are restricted to discrete spaces such as network nodes.\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=\\textwidth]{images/spatial_examples.png}\\vspace{-0.3cm}\n\\caption{Raster-based and Point-based Event Location Predictions.\\vspace{-0.3cm}}\n\\label{fig:event_locations}\n\\end{figure}
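To make the distinction between the two formulations concrete, the following sketch (purely illustrative; the grid size and bounds are assumptions) converts a point-based representation into a raster-based one by counting events per grid cell:

```python
import numpy as np

def rasterize_events(points, bounds, shape):
    """Convert point-based event locations (x, y) into a raster grid of
    per-cell event counts. `bounds` is (xmin, ymin, xmax, ymax) and `shape`
    is (rows, cols); points on the upper boundary fall into the last cell."""
    xmin, ymin, xmax, ymax = bounds
    rows, cols = shape
    grid = np.zeros((rows, cols), dtype=int)
    for x, y in points:
        i = min(int((y - ymin) / (ymax - ymin) * rows), rows - 1)
        j = min(int((x - xmin) / (xmax - xmin) * cols), cols - 1)
        grid[i, j] += 1
    return grid

# Three point events in the unit square, rasterized onto a 4x4 grid.
events = [(0.1, 0.1), (0.15, 0.12), (0.9, 0.9)]
grid = rasterize_events(events, (0.0, 0.0, 1.0, 1.0), (4, 4))
```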
For example, Wang and Ding~ merge neighborhoods if the unified region after merging can maintain the spatially frequent patterns, while Xiong et al.~ chose an alternative approach, sequentially merging spatial neighbor locations into the current locations until the merged region possesses event data that is sufficiently statistically significant. These methods usually run in a greedy style to ensure their time complexity remains smaller than quadratic. After the spatial clustering is completed, each spatial cluster is input into a classifier to determine whether or not there is an event corresponding to it.\n\\noindent\\textbf{2. Spatial interpolation}. Unlike spatial clustering-based methods, spatial interpolation-based methods maintain the original fine granularity of the event location information. The estimated event occurrence probability can then be interpolated for locations with no historical events, thereby achieving spatial smoothness. This can be accomplished using commonly-used methods such as kernel density estimation~ and spatial Kriging~. Kernel density estimation is a popular way to model the geo-statistics in numerous types of events such as crimes~ and terrorism~:\n\\begin{align}\n k(s)=1/(n\\cdot \\gamma)\\sum\\nolimits_{i=1}^n K\\left((s-s_i)/\\gamma\\right)\n\\end{align}\nwhere $k(s)$ denotes the kernel estimation for the location point $s$, $n$ is the number of historical event locations, each $s_i$ is a historical event location, $\\gamma$ is a tunable bandwidth parameter, and $K(\\cdot)$ is a kernel function such as a Gaussian kernel~. \nMore recently, Ristea et al.~ further extended KDE-based techniques by leveraging localized KDE and then applying spatial interpolation techniques to estimate spatial feature values for the cells in the grid. Since each cell is an area rather than a point, the center of each cell is usually leveraged as the representative of this cell.
Finally, a classifier will take this as its input to predict the event occurrence for each grid cell~.\n\\noindent\\textbf{3. Spatial convolution}. In the last few years, Convolutional Neural Networks (CNNs) have demonstrated significant success in learning and representing sophisticated spatial patterns from image and spatial data~. A CNN contains multiple convolutional layers that extract the hierarchical spatial semantics of images. In each convolutional layer, a convolution operation is executed by scanning a feature map with a filter, which results in another smaller feature map with a higher-level semantic. Since raster-based spatial data and images share a similar mathematical form, it is natural to leverage CNNs to process them. \nExisting methods~ in this category typically formulate a spatial map as input to predict another spatial map that denotes future event hotspots. Such a formulation is analogous to the ``image translation'' problem popular in recent years in the computer vision domain~. Specifically, researchers typically leverage an encoder-decoder architecture, where the input images (or spatial maps) are processed by multiple convolutional layers into a higher-level representation, which is then decoded back into an output image with the same size through a reverse convolution operation known as transposed convolution~.\n\\noindent\\textbf{4. Trajectory destination prediction.} This type of method typically focuses on population-based events whose patterns can be interpreted as the collective behaviors of individuals, such as ``gathering events'' and ``dispersal events''. These methods share a unified procedure that typically consists of two steps: 1) predict future locations based on the observed trajectories of individuals, and 2) detect the occurrence of ``future'' events based on the future spatial patterns obtained in Step 1.
The specific methodologies for each step are as follows:\n\\begin{itemize}\n \\item \\textbf{Step 1: } Here, the aim is to predict each location an individual will visit in the future, given a historical sequence of locations visited. This can be formulated as a sequence prediction problem. For example, Wang and Gerber~ sought to predict the probability of the next time point $t+1$'s location $s_{t+1}$ based on all the preceding time points: $p(s_{t+1}|s_{\\le t})=p(s_{t+1}|s_{t},s_{t-1},\\cdots,s_{0})$, based on various strategies including a historical volume-based prior model, Markov models, and multi-class classification models. Vahedian et al.~ adopted Bayesian theory $p(s_{t+1}|s_{\\le t})=p(s_{\\le t}|s_{t+1})\\cdot p(s_{t+1})/p(s_{\\le t})$, which requires the conditional probability $p(s_{\\le t}|s_{t+1})$ to be stored. However, in many situations, there is a huge number of possible trajectories for each destination. For example, with a $128\\times 64$ grid, one needs to store $(128\\times 64)^3\\approx 5.5\\times 10^{11}$ options. To improve the memory efficiency, this can be limited to a consideration of just the source and current locations, leveraging a quad-tree-style architecture to store the historical information. To achieve more efficient storage and speed up $p(s_{\\le t}|s_{t+1})$ queries, Vahedian et al.~ further extended the quad-tree into a new technique called VIGO, which removes duplicate destination locations in different leaves.\n \\item \\textbf{Step 2: } The aim in this step is to forecast future event locations based on the future visiting patterns predicted in Step 1. The most basic strategy here is to consider each grid cell independently. For example, Wang and Gerber~ adopted supervised learning strategies to build a predictive mapping between the visiting patterns and the event occurrence. A more sophisticated approach is to consider the spatial outbreaks composited by multiple grids.
Scalable algorithms have also been proposed to identify regions containing statistically significant hotspots~, such as spatial scan statistics~. Khezerlou et al.~ proposed a greedy-based heuristic tailored for the grid-based data formulation, which extends the original ``seed'' grid containing statistically-large future event densities in four directions until the extended region is no longer a statistically-significant outbreak.\n\\end{itemize}
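As a minimal illustration of Step 1, the sketch below fits a hypothetical first-order Markov model over grid-cell identifiers (a deliberately simplified stand-in for the volume-based, Bayesian, and quad-tree approaches discussed above) and predicts the most likely next location:

```python
from collections import Counter, defaultdict

def fit_markov(trajectories):
    """Estimate first-order transition counts for p(s_{t+1} | s_t) from
    historical trajectories of visited grid-cell identifiers."""
    counts = defaultdict(Counter)
    for traj in trajectories:
        for cur, nxt in zip(traj, traj[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, current):
    """Return the most likely next location given the current cell,
    or None if the cell was never observed in training."""
    if current not in counts:
        return None
    return counts[current].most_common(1)[0][0]

# Hypothetical trajectories over cells labeled A-E.
trajs = [["A", "B", "C"], ["A", "B", "D"], ["E", "B", "C"]]
model = fit_markov(trajs)
```

Step 2 would then aggregate such per-individual predictions into per-cell visit volumes before testing for outbreaks.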
However, given that different locations usually exhibit strong spatial heterogeneity and dependency, further research has sought to jointly model different locations' predictors and outputs, resulting in two research directions: 1) spatial multi-task learning, and 2) spatial auto-regressive methods.\n\\begin{itemize}\n\\item\\textbf{Spatial multi-task learning.} Multi-task learning is a popular learning strategy that can jointly learn the models for different tasks such that the learned models can not only share their knowledge but also preserve some exclusive characteristics of the individual tasks~. This notion coincides very well with spatial event prediction tasks, where combining the outputs of models from different locations needs to consider both their spatial dependency and heterogeneity. Zhao et al.~ proposed a spatial multi-task learning framework as follows:\n\\begin{align}\n \\min_{W}\\sum\\nolimits_i^m\\mathcal{L}(Y_i, f(W_i,X_i))+\\alpha\\cdot\\mathcal{R}(\\{W_i\\}_i^m,M),\\ \\ \\ s.t.,\\ \\mathcal{C}(\\{W_i\\}_i^m,D)\\in\\mathbb{C}\n\\end{align}\nwhere $m$ is the total number of locations (i.e., tasks), and $W_i$ and $Y_i$ are the model parameters and true labels (event occurrences for all time points), respectively, of task $i$. $\\mathcal{L}(\\cdot)$ is the empirical loss, $f(W_i,X_i)$ is the predictor for task $i$, and $\\mathcal{R}(\\cdot)$ is the spatial regularization term based on the spatial dependency information $M\\in\\mathbb{R}^{m\\times m}$, where $M_{i,j}$ records the spatial dependency between locations $i$ and $j$. $\\mathcal{C}(\\cdot)$ represents the spatial constraints imposed over the corresponding models to enforce them to remain within the valid space $\\mathbb{C}$. Over recent years, there have been multiple studies proposing different strategies for $\\mathcal{R}(\\cdot)$ and $\\mathcal{C}(\\cdot)$.
For example, Zhao et al.~ assumed that all the locations would be evenly correlated and enforced similar sparsity patterns among them for feature selection, while Gao et al.~ further extended this to differentiate the strength of the correlation between different locations' tasks according to the spatial distance between them. This approach has been further extended to tree-structured multi-task learning to handle the hierarchical relationship among locations at different administrative levels (e.g., cities, states, and countries)~ in a model that also considers the logical constraints over the predictions from different locations that have hierarchical relationships. Instead of assuming even correlation, Zhao et al.~ further estimated the spatial dependency $D$ utilizing inverse distance with Gaussian kernels, while Ning et al.~ proposed estimating the spatial dependency $D$ based on the event co-occurrence frequency between each pair of locations.\n\\item\\textbf{Spatial auto-regressive methods}. Spatial auto-regressive models have been extensively explored in domains such as geography and econometrics, where they are applied to perform predictions where the i.i.d. assumption is violated due to the strong dependencies among neighboring locations. The generic framework is as follows:\n\\begin{align}\n \\hat Y_{t+1}=\\rho M \\hat Y_{t+1}+X_t\\cdot W+\\varepsilon\n\\end{align}\nwhere $X_t\\in\\mathbb{R}^{m\\times D}$ and $\\hat Y_{t+1}\\in\\mathbb{R}^{m\\times 1}$ are the observations at time $t$ and event predictions at time $t+1$ over all the $m$ locations, and $M\\in\\mathbb{R}^{m\\times m}$ is the spatial dependency matrix with zero-valued diagonals. This means the prediction for each location $\\hat Y_{t+1,i}\\in \\hat Y_{t+1}$ is jointly determined by its input $X_{t,i}$ and its neighbors $\\{j|M_{i,j}\\ne 0\\}$, where $\\rho$ is a positive value to balance these two factors.
Since event occurrence requires discrete predictions, simple threshold-based strategies can be used to discretize $\\hat Y_i$ into $\\hat Y_i'\\in\\{0,1\\}$~. Moreover, due to the complexity of event prediction tasks and the large number of locations, it is sometimes difficult to define the whole of $M$ manually. Zhao et al.~ proposed jointly learning the prediction model and spatial dependency from the data using graphical LASSO techniques. Yi et al.~ took a different approach, leveraging conditional random fields to instantiate the spatial autoregression, where the spatial dependency is measured by Gaussian-kernel-based metrics. Yi et al.~ then went on to propose leveraging a neural network model to learn the locations' dependency.\n\\end{itemize}\n\\noindent\\emph{2. Unsupervised approaches}. Without supervision from labels, unsupervised methods must first identify potential precursors and determinant features in different locations. They can then detect anomalies that are characterized by specific feature selection and location combinatorial patterns (e.g., spatial outbreaks and connected subgraphs) as the future event indicators~. The generic formulation is as follows:\n\\begin{align}\n\\label{eq:scan}\n (F,R)=\\arg\\max\\nolimits_{F,R}\\ q(F,R)\\ \\ \\ s.t.,\\ \\mathrm{supp}(F_i)\\in\\mathbb{M}(\\mathbb{G},\\beta),\\ \\mathrm{supp}(R_i)\\in\\mathbb{C}\n\\end{align}\nwhere $q(\\cdot)$ denotes scan statistics which score the significance of each candidate pattern, represented by both a candidate location combinatorial pattern $R$ and a feature selection pattern $F$. Specifically, $F\\in\\{0,1\\}^{D'\\times n}$ denotes the feature selection results (where ``1'' means selected; ``0'', otherwise) and $R\\in\\{0,1\\}^{m\\times n}$ denotes the $m$ involved locations for the $n$ events. $\\mathbb{M}(\\mathbb{G},\\beta)$ and $\\mathbb{C}$ are the sets of all the feasible solutions of $F$ and $R$, respectively.
$q(\\cdot)$ can be instantiated by scan statistics such as Kulldorff's scan statistic~ and the Berk-Jones statistic~, which can be applied to detect and forecast events such as epidemic outbreaks and civil unrest events~. Depending on whether the embedding space is a Euclidean region (e.g., a geographical region) or a non-Euclidean region (e.g., a network topology), the pattern constraint $\\mathbb{C}$ can restrict patterns either to predefined geometric shapes such as circles, rectangles, or irregular shapes, or to subgraphs such as connected subgraphs, cliques, and k-cliques. The problem in Equation \\eqref{eq:scan} is nonconvex and sometimes even discrete, and hence difficult to solve. A generic way is to optimize $F$ using sparse feature selection, for which a useful survey is provided in~, while $R$ can be identified using the two-step graph-structured matching method detailed in~. More recently, new techniques have been developed that are capable of jointly learning both feature and location selection~.
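To illustrate the scan-statistic formulation, the sketch below (a simplification: candidate regions are enumerated explicitly rather than grown under shape or subgraph constraints, and the counts are invented) instantiates the statistic q(.) with Kulldorff's Poisson log-likelihood ratio and returns the highest-scoring region:

```python
import math

def kulldorff_llr(c, b, C, B):
    """Kulldorff's Poisson log-likelihood ratio for one candidate region:
    c and b are the observed count and baseline inside the region; C and B
    are the totals over the whole study area. Returns 0 for regions whose
    inside rate is not elevated relative to the outside."""
    if b == 0 or B == b or c == 0 or c / b <= (C - c) / (B - b):
        return 0.0
    inside = c * math.log(c / b)
    outside = (C - c) * math.log((C - c) / (B - b)) if C > c else 0.0
    return inside + outside - C * math.log(C / B)

def best_region(counts, baselines, regions):
    """Score each candidate region (a set of cell indices) and return the
    one maximizing the scan statistic, mirroring the argmax in the text."""
    C, B = sum(counts), sum(baselines)
    def score(z):
        return kulldorff_llr(sum(counts[i] for i in z),
                             sum(baselines[i] for i in z), C, B)
    return max(regions, key=score)

counts = [10, 1, 1, 1]      # observed event counts per grid cell (assumed)
baselines = [1, 1, 1, 1]    # expected counts under the null (assumed)
best = best_region(counts, baselines, [{0}, {1}, {2, 3}, {0, 1}])
```

In practice the candidate set is explored greedily or with branch-and-bound rather than exhaustively, and significance is assessed via Monte Carlo replication.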
The first of these categories covers rule-based methods, where future event precursors are extracted by mining association or logical patterns in historical data. The second type is sequence-based, considering event occurrence to be a consequence of temporal event chains. The third type further generalizes event chains into event graphs, where additional cross-chain contexts need to be modeled. These are discussed in turn below.", "id": "0b117367-592e-4a5b-aa08-4c81aba9a952", "level": "subsection", "origin_cites_number": 0, "parent_id": "2f812193-fd1d-4152-99ce-fe63484cff69", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Event Prediction Techniques" ], [ "subsection", "Semantic Prediction" ] ], "subsections": [ "71416ffe-5d15-4401-b4c7-5c2994d19097", "f9095edf-4e9e-4cc4-8180-071c171e0ec4", "3b662200-27f1-431a-8228-57bea3941428" ], "title": "Semantic Prediction" }, { "cite_extract_rate": 0, "cites": [], "content": ".\nAssociation rule-based methods are amongst the most classic approaches in data mining domain for event prediction, typically consisting of two steps: 1) learn the associations between precursors and target events, and then 2) utilize the learned associations to predict future events. For the first step, for example, an association could be $x=\\{$``election'', ``fraud''$\\}\\rightarrow\\ y=$``protest event'', which indicates that serious fraud occurring in an election process could lead to future protest events. To discover all the significant associations from the ocean of candidate rules efficiently, frequent set mining~ can be leveraged. Each discovered rule needs to come with both sufficient \\emph{support} and \\emph{confidence}. Here, \\emph{support} is defined as the number of cases where both ``$x$'' and ``$y$'' co-occur, while \\emph{confidence} means the ratio indicating that ``$y$'' occurs once ``$x$'' happens. 
To better estimate these discriminative rules, further temporal constraints can be added that require the occurrence times of ``$x$'' and ``$y$'' to be sufficiently close for them to be considered ``co-occurrences''. Once the frequent set rules have been discovered, pruning strategies may be applied to retain the most accurate and specific ones, with various strategies for generating final predictions~. Specifically, given each new observation $x'$, one of the simplest strategies is to output the events that are triggered by any of the association rules starting from event $x'$~. Other strategies first rank the predicted results based on their confidence and then predict just the top $r$ events~. More sophisticated and rigorous strategies tend to build a decision list where each element in the list is an association rule mapping, so once a generative model has been built for the decision process, maximum likelihood can be leveraged to optimize the order of the decision list~.
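The support/confidence workflow above can be sketched as follows (a toy implementation with hypothetical data; real systems mine frequent sets with algorithms such as Apriori rather than enumerating all antecedent subsets):

```python
from collections import Counter
from itertools import combinations

def mine_rules(transactions, min_support=2, min_confidence=0.6):
    """Mine precursor -> event association rules from (terms, event) pairs.
    A rule X -> y is kept if support(X, y) >= min_support and
    confidence = support(X, y) / support(X) >= min_confidence."""
    antecedent_counts = Counter()
    rule_counts = Counter()
    for terms, event in transactions:
        for r in range(1, len(terms) + 1):
            for subset in combinations(sorted(terms), r):
                antecedent_counts[subset] += 1
                rule_counts[(subset, event)] += 1
    rules = {}
    for (subset, event), n in rule_counts.items():
        confidence = n / antecedent_counts[subset]
        if n >= min_support and confidence >= min_confidence:
            rules[(subset, event)] = confidence
    return rules

# Toy historical observations: precursor term sets and the events that followed.
data = [({"election", "fraud"}, "protest"),
        ({"election", "fraud"}, "protest"),
        ({"election"}, "speech")]
rules = mine_rules(data)
```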
}This approach typically begins by extracting the events from the target texts using natural language processing techniques such as sanitization, tokenization, POS tag analysis, and named entity recognition. Several types of objects can be extracted to represent the events: i) Noun Phrase-based~, where a noun phrase corresponds to each event (for example, ``2008 Sichuan Earthquake''); ii) Verbs and Nouns~, where an event is represented as a set of noun-verb pairs extracted from news headlines (for example, ``<capture, people>'', ``<escape, prison>'', or ``<send, prison>''); and iii) Tuple-based~, where each event is represented by a tuple consisting of objects (such as actors, instruments, or receptors), a relationship (or property), and time. An RDF-based format has also been leveraged in some works~.\n\\noindent\\textbf{Step 2: Event causality inference. } The goal here is to infer the cause-effect pairs among historical events. Due to its combinatorial nature, narrowing down the number of candidate pairs is crucial. Existing works usually begin by clustering the events into event chains, each of which consists of a sequence of time-ordered events under the relevant semantics, typically the same topics, actors, and/or objects~. The causal relations among the event pairs can then be inferred in various ways. The simplest approach is just to consider the likelihood that $y$ occurs after $x$ has occurred throughout the training data. Other methods utilize NLP techniques to identify causal mentions such as causal connectives, prepositions, and verbs~. Some formulate cause-effect relationship identification as a classification task where the inputs are the cause and effect candidate events, often incorporating contextual information including related background knowledge from web texts. Here, the classifier is built on a multi-column CNN that outputs either ``1'' or ``0'' to indicate whether the candidate has an effect or not~.
In many situations, the cause-effect rules learned directly using the above methods can be too specific and sparse, with low generalizability, so a typical next step is to generalize the learned rules. For example, ``Earthquake hits China'' $\\rightarrow$ ``Red Cross help sent to Beijing'' is a specific rule that can be generalized to ``Earthquake hits [A country]'' $\\rightarrow$ ``Red Cross help sent to [The capital of this country]''. To achieve this, some external ontology or a knowledge base is typically needed in order to establish the underlying relationships among items or provide necessary information on their properties, such as Wikipedia (\\url{https://www.wikipedia.org/}), YAGO~, WordNet~, or ConceptNet~. Based on these resources, the similarity between two cause-effect pairs $(c_i,\\varepsilon_i)$ and $(c_j,\\varepsilon_j)$ can be computed by jointly considering the respective similarity of the putative cause and effect: $\\sigma((c_i,\\varepsilon_i),(c_j,\\varepsilon_j))=(\\sigma(c_i,c_j)+\\sigma(\\varepsilon_i,\\varepsilon_j))/2$. An appropriate algorithm can then be utilized to apply hierarchical agglomerative clustering to group them and hence generate a data structure that can efficiently manage the task of storing and querying them to identify any cause-effect pairs. For example,~ leverage an abstraction tree, where each leaf is an original specific cause-effect pair and each intermediate node is the centroid of a cluster. Instead of using hierarchical clustering,~ directly uses the word ontology to simultaneously generalize cause and effect (e.g., the noun ``violet'' is generalized to ``purple'', the verb ``kill'' is generalized to ``murder-42.1\\footnote{the form of verb class in VerbNet~.}'') and then leverage a hierarchical causal network to organize the generalized rules.\n\\noindent\\textbf{Step 3: Future event Inference. 
} Given an arbitrary query event, two steps are needed to infer the future events it causes, based on the event causality learned above. First, we need to retrieve similar events that match the query event from the historical event pool. This requires the similarity between the query event and all the historical events to be calculated. To achieve this, Lei et al.~ utilized context information, including event time, location, and other environmental and descriptive information. For methods requiring event generalization, the first step is to traverse the abstraction tree starting from the root, which corresponds to the most general event rule. The search frontier then moves across the tree if the child node is more similar, culminating in the retrieval of the nodes that are the least general but still similar to the new event. Similarly,~ proposed another tree structure referred to as a ``circular binary search tree'' to manage the event occurrence pattern. We can now apply the learned predicate rules starting from the retrieved event to obtain the prediction results. Since each cause event can lead to multiple events, a convenient way to determine the final prediction is to calculate the support~ or conditional probability~ of the rules. Radinsky et al.~ took a different approach, ranking the potential future events by a similarity defined by the length of their minimal generalization path. For example, the minimal generalization path for ``London'' and ``Paris'' is ``London''$\\xrightarrow{\\text{capital-of}}$``Great Britain''$\\xrightarrow{\\text{in-continent}}$``Europe''$\\xleftarrow{\\text{in-continent}}$``France''$\\xleftarrow{\\text{capital-of}}$``Paris''.
Alternatively, Zhao et al.~ proposed embedding the event causality network into a continuous vector space and then applying an energy function designed to rank potential events, where true cause-effect pairs are assumed to have low energies.", "id": "f9095edf-4e9e-4cc4-8180-071c171e0ec4", "level": "subsubsection", "origin_cites_number": 17, "parent_id": "0b117367-592e-4a5b-aa08-4c81aba9a952", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Event Prediction Techniques" ], [ "subsection", "Semantic Prediction" ], [ "subsubsection", "Causality-based prediction" ] ], "subsections": [], "title": "Causality-based prediction" }, { "cite_extract_rate": 0.192307692307692, "cites": [ 352, 351, 166, 353, 350 ], "content": "These methods share a very straightforward problem formulation. Given a temporal sequence for a historical event chain, the goal is to predict the semantics of the next event using sequence prediction~. The existing methods can be classified into four major categories: 1) classical sequence prediction; 2) recurrent neural networks; 3) Markov chains; and 4) time series predictions.\n\\noindent\\textbf{Sequence classification-based methods}. These methods formulate event semantic prediction as a multi-class classification problem, where a finite number of candidate events are ranked and the top-ranked event is treated as the future event semantic. The objective is $\\hat C=\\arg\\max_{C_i} u(s_{T+1}=C_i|s_1,\\cdots,s_T)$, where $s_{T+1}$ denotes the event semantic in time slot $T+1$ and $\\hat C$ is the optimal semantic among all the semantic candidates $C_i$ ($i=1,\\cdots$). Multi-class classification problems can be split into events with different topics/semantic meaning. Three types of sequence classification methods have been utilized for this purpose, namely feature-based methods, prototype-based methods, and model-based methods such as Markov models. 
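The retrieve-then-apply procedure can be sketched as below (an illustrative toy: Jaccard token overlap replaces the ontology-based similarity, and the rules are invented examples in the spirit of those above):

```python
def jaccard(a, b):
    """Token-set similarity: a simple stand-in for the ontology-based
    similarity functions discussed above."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def pair_similarity(pair_i, pair_j):
    """Cause-effect pair similarity: the average of the cause similarity
    and the effect similarity, as in the formula of Step 2."""
    (ci, ei), (cj, ej) = pair_i, pair_j
    return (jaccard(ci, cj) + jaccard(ei, ej)) / 2

def predict_effect(query_cause, rules):
    """Retrieve the historical rule whose cause is most similar to the
    query event and return that rule's effect as the predicted future event."""
    best = max(rules, key=lambda rule: jaccard(query_cause, rule[0]))
    return best[1]

# Hypothetical learned cause-effect rules.
rules = [("earthquake hits china", "red cross help sent to beijing"),
         ("election fraud reported", "protest erupts in capital")]
effect = predict_effect("earthquake hits japan", rules)
```

A production system would additionally generalize the retrieved rule (e.g., substituting the matched country and its capital) before emitting the prediction.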
\n\\begin{itemize}\n \\item \\textbf{Feature-based.} One of the simplest methods is to ignore the temporal relationships among the events in the chain, by either aggregating the inputs or the outputs. Tama and Comuzzi~ formulated historical event sequences with multiple attributes for event prediction, testing multiple conventional classifiers. Another type of approach based on this notion utilizes compositional-based methods~ that typically leverage the assumption of independence among the historical input events to simplify the original problem $u(s_{T+1}|s_1,s_2,\\cdots,s_T)=u(s_{T+1}|s_{\\le T})$ into $v(u(s_{T+1}|s_1),u(s_{T+1}|s_2),\\cdots,u(s_{T+1}|s_T))$, where $v(\\cdot)$ is simply an aggregation function that represents a summation operation over all the components. Each component function $u(s_{T+1}|s_i)$ can then be calculated by estimating how likely it is that event semantics $s_{T+1}$ and $s_i$ ($i\\le T$) co-occur in the same event chain. Granroth-Wilding and Clark~ investigated various models ranging from straightforward similarity scoring functions through bigram models and word embeddings combined with similarity scoring functions to newly developed composition neural networks that jointly learn the representations of $s_{T+1}$ and $s_i$ and then calculate their coherence. Some other researchers have gone further to consider the dependency among the historical events.
For example, Letham et al.~ proposed optimizing the correct ordering among the candidate events, based on the following equation:\n\\begin{align}\\small\\nonumber\n \\sum\\nolimits_{i\\in\\mathcal I,\\ j\\in\\mathcal J} \\mathbbm{1}_{[u(s_{T+1}=C_i|s_{\\le T})>u(s_{T+1}=C_j|s_{\\le T})]}\\implies \\sum\\nolimits_{i\\in\\mathcal I,\\ j\\in\\mathcal J} e^{u(s_{T+1}=C_i|s_{\\le T})-u(s_{T+1}=C_j|s_{\\le T})}+\\rho\\|W\\|_2^2\n\\end{align}\\normalsize\nwhere the semantic candidates in the set $\\mathcal I$ should be ranked strictly lower than those in $\\mathcal J$, with the goal being to penalize the ``incorrect ordering''. Here, $\\mathbbm{1}_{[\\cdot]}$ is an indicator function, which is discrete; since $\\mathbbm{1}_{[b\\ge a]}\\le e^{b-a}$, the exponential can be utilized as an upper bound for minimization, as can be seen on the right-hand side of the above equation. $W$ is the set of parameters of the function $u(\\cdot)$. The objective can thus be relaxed to an exponential-based approximation for effective optimization using gradient-based algorithms~. Other methods focus on first transferring the sequential data into sequence embeddings that can encode the latent sequential context. For example, Fronza et al.~ apply random indexing to represent the words in terms of their vector representations, by embedding the information from neighboring words into each word, before utilizing conventional classifiers such as Support Vector Machines (SVMs) to identify the future events.\n\\item\\textbf{Model-based.} Markov-based models have also been leveraged to characterize temporal patterns~. \nThese typically use $E_i$ to denote each event under a specific type and $\\mathcal{E}$ to denote the set of event types. The goal here is to predict the event type of the next event to occur in the future.
In~, the event types are modeled using a Markov model, so that given the current event type, the next event type can be inferred simply by looking up the state with the highest probability in the transition matrix. A tool called Wayeb~ has been developed based on this method. Laxman et al.~ developed a more complicated model based on a mixture of hidden Markov models, introducing new assumptions and the concept of episodes composed of subsequences of event types. They assumed that different event episodes should have different transition patterns, and so started by discovering the frequent episodes for events, each of which they modeled by a specific hidden Markov model over various event types. This made it possible to establish the generative process for each future event type $s$ based on the mixture of the above episode Markov models. When predicting, the likelihood of the currently observed event sequence under each possible generative process, $p(X|\\Lambda_Y)$, is evaluated, after which a future event type is predicted when its likelihood either exceeds some threshold (as in~) or is the largest among all the different $Y$ values (in~). \n\\item\\textbf{Prototype-based.} Adhikari et al.~ took a different approach, utilizing a prototype-based strategy that first clusters the event sequences into different clusters in terms of their temporal patterns. When a new event sequence is observed, its closest cluster's centroid will then be leveraged as a ``reference event sequence'' whose subsequent events will be referred to when predicting future events for this new event sequence.\n\\end{itemize}\n\\noindent\\textbf{Recurrent neural network (RNN)-based methods}. Approaches in this category can be classified into two types: 1. Attribute-based models; and 2. Descriptive-based models. 
The attribute-based models ingest feature representations of events as input, while the descriptive-based models typically ingest unstructured information such as texts to directly predict future events.\n\\begin{itemize}\n\\item\\textbf{Attribute-based Methods. }Here, each event $y=(t,l,s)$ at time $t$ is recast and represented as $e_t=(e_{t,1},e_{t,2},\\cdots,e_{t,k})$, where $e_{t,i}$ is the $i$-th feature of the event at time $t$. The features here can include location and other information such as event topic and semantics. Each sequence $e=(e_1,\\cdots,e_t)$ is then input into the standard RNN architecture for predicting the next event $e_{t+1}$ in the sequence at time point $t+1$~. Various types of RNN components and architectures have been utilized for this purpose~, but a vanilla RNN~ for sequence-based event prediction can be written in the following form:\n\\begin{align}\\nonumber\n a_{i}=b+W\\cdot h_{i-1}+U\\cdot e_i,\\ h_i=\\mathrm{tanh}(a_{i}),\\ o_i=c+V\\cdot h_i,\\ \\psi(i+1)=\\mathrm{softmax}(o_i),\\ i\\le t\n\\end{align}\nwhere $h_i$, $o_i$, and $a_i$ are the latent state, output, and activation for the $i$-th event, respectively, and $W$, $U$, and $V$ are the model parameters for fitting the corresponding mappings. The prediction $e_{t+1}:=\\psi(t+1)$ can then be calculated in a feedforward way from the first event, and the model training can be done by back-propagating the error from the layer of $\\psi(t)$. Existing work typically utilizes variants of the vanilla RNN to handle the vanishing gradient problem, especially when the event chain is not short. The most commonly used methods for event prediction are LSTM and GRU~~. 
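As an illustration, the vanilla-RNN recursion above can be run forward in a few lines. This is a minimal sketch with hypothetical shapes and no training loop; the function name and arguments are our own, not from any cited work.

```python
import numpy as np

def rnn_next_event_probs(events, W, U, V, b, c):
    """Feedforward pass of the vanilla RNN described above:
    a_i = b + W h_{i-1} + U e_i,  h_i = tanh(a_i),  o_i = c + V h_i,
    and psi(i+1) = softmax(o_i) gives the distribution over the next
    event type. `events` is a (t, d_in) array of feature vectors;
    returns the softmax after consuming the whole sequence."""
    h = np.zeros(W.shape[0])                 # h_0 initialized to zeros
    for e in events:
        h = np.tanh(b + W @ h + U @ e)       # a_i then h_i
    o = c + V @ h                            # o_t
    exp_o = np.exp(o - o.max())              # numerically stable softmax
    return exp_o / exp_o.sum()
```

Training would back-propagate the prediction error at $\psi(t)$ through this recursion, as the text notes.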
For example, the architecture and equations for LSTM are as follows:\n\\begin{align}\n &a_i=\\sigma(W_j\\cdot[h_{i-1},e_i]+b_j),\\ \\tilde C_i=\\mathrm{tanh}(W_C\\cdot[h_{i-1},e_i]+b_C),\\ C_i=\\zeta_i C_{i-1}+a_i\\tilde C_i,\\\\\\nonumber\n &\\ \\ \\ \\ \\zeta_i=\\sigma(W_\\zeta\\cdot[h_{i-1},e_i]+b_\\zeta),\\ o_i=\\sigma(W_o\\cdot[h_{i-1},e_i]+b_o),\\ h_i=o_i*\\mathrm{tanh}(C_i)\n\\end{align}\nwhere the additional components $C_{i-1}$ and $\\zeta_i$ are introduced to keep track of the previous ``history'' and to gate the information to be forgotten, in order to handle longer sequences. For example, some researchers opt to leverage a simple LSTM architecture to extend RNN-based sequential event prediction~, while others leverage variants of LSTM, such as bi-directional LSTM~, and yet others prefer to leverage gated recurrent units (GRUs)~.\nMoving beyond considering just the chain relationships among events, Li et al.~ generalized this into graph-structured relationships to better incorporate the event contextual information via the Narrative Event Evolutionary Graph (NEEG). An NEEG is a knowledge graph where each node is an event and each edge denotes the association between a pair of events, enabling the NEEG to be represented by a weighted adjacency matrix $A$. The basic architecture can be denoted by the following, as detailed in the paper~:\n\\begin{align}\n &a_i=A^{\\mathrm{T}}h_{i-1}+b,\\ z_i=\\sigma(W_z a_i+U_z h_{i-1}),\\ r_i=\\sigma(W_r a_i+U_r h_{i-1}),\\\\\\nonumber\n &\\ \\ \\ c_i=\\mathrm{tanh}(W a_i+U(r_i h_{i-1})),\\ h_i=(1-z_i)h_{i-1}+z_i c_i\n\\end{align}\nHere, the current activation $a_i$ is not only dependent on the previous time point but also influenced by its neighbor nodes in the NEEG.\n\\item\\textbf{Descriptive-based Methods. 
}Attribute-based methods require extra effort during pre-processing in order to convert the unstructured raw data into feature vectors, a process which is not only labor intensive but also not always feasible. Therefore, multiple architectures have been proposed to process the raw (textual) event descriptions directly and predict future event semantics or descriptions. These models share a similar generic framework~, which begins by encoding each sequence of words into event representations, utilizing an RNN architecture, as shown in Figure~\\ref{figure:descriptive_RNN_framework}. The sequence of events must then be characterized by another higher-level RNN to predict future events. Under this framework, some works begin by decoding the predicted future candidate events into event embeddings, after which they are compared with each other and the one with the largest confidence score is selected as the predicted event. These methods are usually constrained by the known list of event types, but sometimes we are interested in open-set predictions, where the predicted event type can be a new appearance of a type that has not previously been seen in the training set. To achieve this, other methods focus on directly generating future events' descriptions that characterize event semantics that may or may not have appeared before, by designing an additional sequence decoder that decodes the latent representation of future events into word sequences. More recent research has enhanced the utility and interpretability of the relationships between words and relevant events, and between all the previous events and the relevant future event, by adding hierarchical attention mechanisms. For example, Yu et al.~ and Su and Jiang~ both proposed word-level attention and event-level attention, while Hu~ leveraged word-level attention in the event encoder as well as in the event decoder. 
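The two-level encoding just described (a word-level RNN producing event vectors that feed an event-level RNN) can be sketched as follows. All parameter names and shapes are illustrative, attention mechanisms are omitted, and the candidate-scoring step assumes a known list of candidate events:

```python
import numpy as np

def _rnn_encode(seq, W, U, b):
    """Run a tanh RNN over `seq` (n, d_in) and return the final hidden state."""
    h = np.zeros(W.shape[0])
    for x in seq:
        h = np.tanh(W @ h + U @ x + b)
    return h

def hierarchical_event_scores(events, params):
    """Generic hierarchical framework sketched in the text: a word-level
    RNN turns each event's word embeddings into an event vector, an
    event-level RNN summarizes the event sequence, and each candidate
    future event embedding is scored against the sequence summary.
    Returns a softmax over the candidates."""
    Ww, Uw, bw, We, Ue, be, candidates = params
    event_vecs = [_rnn_encode(ev, Ww, Uw, bw) for ev in events]   # word level
    summary = _rnn_encode(np.stack(event_vecs), We, Ue, be)       # event level
    scores = candidates @ summary                                  # confidence scores
    exp_s = np.exp(scores - scores.max())
    return exp_s / exp_s.sum()
```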
\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{images/descriptive_RNN.png}\\vspace{-0.3cm}\n\\caption{Generic framework for hierarchical RNN-based event forecasting.\\vspace{-0.3cm}}\n\\label{figure:descriptive_RNN_framework}\n\\end{figure}\n\\end{itemize}", "id": "3b662200-27f1-431a-8228-57bea3941428", "level": "subsubsection", "origin_cites_number": 26, "parent_id": "0b117367-592e-4a5b-aa08-4c81aba9a952", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Event Prediction Techniques" ], [ "subsection", "Semantic Prediction" ], [ "subsubsection", "Semantic Sequence" ] ], "subsections": [], "title": "Semantic Sequence" }, { "cite_extract_rate": 0, "cites": [], "content": "This section discusses the research into ways to jointly predict the time, location, and semantics of future events. Existing work in this area can be categorized into three types: 1) joint time and semantics prediction; 2) joint time and location prediction; and 3) joint time, location, and semantic prediction.", "id": "a899718d-d44b-4906-ba2a-558ce46075dd", "level": "subsection", "origin_cites_number": 0, "parent_id": "2f812193-fd1d-4152-99ce-fe63484cff69", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Event Prediction Techniques" ], [ "subsection", "Multi-faceted Prediction" ] ], "subsections": [ "66955a23-5c9e-4abd-8cc9-cb935e343800", "d4d71435-dadd-4cc3-805f-65911ba23fc9", "682b3b7c-cca4-4ecf-8300-2e045311a819" ], "title": "Multi-faceted Prediction" }, { "cite_extract_rate": 0.1, "cites": [ 339 ], "content": "For joint time and semantic prediction, there are three popular types of methods, discussed in turn below.\n\\noindent\\textbf{Temporal association rule}. 
A temporal association rule can be developed from the vanilla association rule $LHS\\rightarrow RHS$ by embedding additional temporal information into either $LHS$, $RHS$, or both, thus redefining the meaning of co-occurrence and association with temporal constraints. For example, Vilalta and Ma~ defined $LHS$ as a tuple $(E_L,\\tau)$, where $\\tau$ is the time window before the target in $RHS$ predefined by the user. Only the events occurring within a time window before the event in $RHS$ will satisfy the $LHS$. Similar techniques have also been leveraged by other researchers~. However, $\\tau$ is difficult to define beforehand, and it is preferable for it to be flexible enough to suit different target events. To handle this challenge, Yang et al.~ proposed a way to automatically identify information on a continuous time interval from the data. Here, each transaction is composed of not only items but also continuous time duration information. $LHS$ is a set of items (e.g., previous events) while $RHS$ is a tuple $(E_R,[t_1,t_2])$ consisting of a future event semantic representation and its time interval of occurrence. To automatically learn the time interval in $RHS$,~ proposed the use of two different methods. The first is called the \\emph{confidence-interval-based} method, which leverages a statistical distribution (e.g., Gaussian and Student's t~) to fit all the observed occurrence times of events in $RHS$, and then treats the statistical confidence interval as the time interval. The second method is known as \\emph{minimal temporal region selection}, which aims to find the smallest temporal region that covers all historical occurrences of the event in $RHS$.
As this type of technique can simultaneously identify time, semantics, and other information such as locations, it is widely used and will be discussed in more detail later as part of the discussion of ``Planned future event detection methods'' in Section \\ref{sec:time_location_semantics}.\n\\noindent\\textbf{Time series forecasting-based methods. }The methods based on time series forecasting can be separated into direct methods and indirect methods. \\emph{Direct methods} typically formulate the event semantic prediction problem as a multi-variate time series forecasting problem, where each variable corresponds to an event type $C_i\\ (i=1,\\cdots)$ and hence the predicted event type at future time $\\hat t$ is calculated as $\\hat s_{\\hat t}=\\arg\\max_{C_i} f(s_{\\hat t}=C_i|X)$. For example, in~, a longitudinal support vector regressor is utilized to predict multi-attribute events, where $n$ support vector regressors, each of which corresponds to an attribute, are built to predict the next time point's attribute values. Weiss and Page~ took a different approach, leveraging multiple point process models to predict multiple event types. To further estimate the confidence of their predictions, Bilo{\\v{s}} et al.~ first leveraged an RNN to learn the historical event representation and then input the result into a Gaussian process model to predict future event types. To better capture the joint dynamics across the multiple variables in the time series, Brandt et al.~ extended this to Bayesian vector autoregression. \\emph{Indirect-style} methods, by contrast, focus on learning a mapping from the observed event semantics down to a low-dimensional latent-topic space using tensor decomposition-based techniques. For example, Matsubara et al.~ proposed a 3-way topic analysis of the original observed event tensor $Y_0\\in\\mathbb{R}^{D_o\\times D_a\\times D_c}$ consisting of three factors, namely actors, objects, and time. 
They then went on to decompose this tensor into latent variables via three corresponding low-rank matrices $P_{o}\\in \\mathbb{R}^{D_k\\times D_o}$, $P_{a}\\in \\mathbb{R}^{D_k\\times D_a}$, and $P_{c}\\in \\mathbb{R}^{D_k\\times D_c}$, respectively, as shown in Figure~\\ref{figure:tensor_decomposition}. Here $D_k$ is the number of latent topics. For the prediction, the time matrix $P_{c}$ is extrapolated into the future as $\\hat P_{c}$ via multi-variate time series forecasting, after which a future event tensor $\\hat Y$ is estimated by multiplying the predicted time matrix $\\hat P_{c}$ with the known actor matrix $P_{a}$ and object matrix $P_{o}$.\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{images/time_series_forecasting.png}\\vspace{-0.3cm}\n\\caption{Tensor decomposition and forecasting for complex time-stamped events.\\vspace{-0.3cm}}\n\\label{figure:tensor_decomposition}\n\\end{figure}", "id": "66955a23-5c9e-4abd-8cc9-cb935e343800", "level": "subsubsection", "origin_cites_number": 10, "parent_id": "a899718d-d44b-4906-ba2a-558ce46075dd", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Event Prediction Techniques" ], [ "subsection", "Multi-faceted Prediction" ], [ "subsubsection", "Time and Semantics" ] ], "subsections": [], "title": "Time and Semantics" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 356, 355, 166, 354 ], "content": "This category of methods focuses on jointly predicting the location and time of future events. These methods can be classified into two subtypes: 1) raster-based, which focus on predictions for individual time slots and location regions; and 2) point-based, which predict continuous time and location points.\n\\noindent\\textbf{Raster-based}. These methods usually formulate data into temporal sequences consisting of spatial snapshots. 
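Rasterizing raw $(t, x, y)$ event records into such a sequence of spatial snapshots can be sketched as follows; the grid sizes, bounds, and function name are hypothetical choices for illustration:

```python
import numpy as np

def rasterize_events(events, t_bins, h_bins, w_bins, bounds):
    """Bin raw events (t, x, y) into a (t_bins, h_bins, w_bins) tensor
    of event counts: one spatial snapshot per time slot. `bounds` is
    ((t0, t1), (x0, x1), (y0, y1)); values on the upper edge are
    clamped into the last bin."""
    grid = np.zeros((t_bins, h_bins, w_bins))
    (t0, t1), (x0, x1), (y0, y1) = bounds
    for t, x, y in events:
        ti = min(int((t - t0) / (t1 - t0) * t_bins), t_bins - 1)
        xi = min(int((x - x0) / (x1 - x0) * h_bins), h_bins - 1)
        yi = min(int((y - y0) / (y1 - y0) * w_bins), w_bins - 1)
        grid[ti, xi, yi] += 1
    return grid
```

Each snapshot `grid[i]` can then serve as one (multi-channel, if attributes are added) image in the CNN+RNN pipelines discussed next.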
Over the last few years, various techniques have been proposed to characterize the spatial and temporal information for event prediction.\nThe simplest way to consider spatial information is to directly treat location information as one of the input features, and then feed it into predictive models such as linear regression~, LSTM~, and Gaussian processes~. During model training, Zhao and Tang~ leveraged the spatiotemporal dependency to regularize their model parameters. \n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{images/RNN_CNN.png}\\vspace{-0.3cm}\n\\caption{Generic framework for spatiotemporal event prediction using a CNN+RNN-based deep learning framework.\\vspace{-0.3cm}}\n\\label{figure:cnn_rnn}\n\\end{figure}\nMost of the methods in this domain aim to jointly consider the spatial and temporal dependencies for predictions~. At present, the most popular framework is the CNN+RNN architecture, which addresses sequence-to-sequence learning problems such as the one illustrated in Figure \\ref{figure:cnn_rnn}. Here, the multi-attributed spatial information for each time point can be organized as a series of multi-channel images, which can be encoded using convolution-based operations. For example, Huang et al.~ proposed the addition of convolutional layers to process the input into vector representations. Other researchers have leveraged variational autoencoders~ and CNN autoencoders~ to learn a low-dimensional embedding of the raw spatial input data. This allows the learned representation of the input to be fed into the temporal sequence learning architecture. Different recurrent units have been investigated, including RNN, LSTM, convLSTM, and stacked convLSTM~. The resulting representation of the input sequence is then passed as input to the output sequence, where another recurrent architecture is established. 
The output of the unit for each time point will be input into a spatial decoder component, which can be implemented using transposed convolutional layers~, transposed convLSTM~, or a spatial decoder in a variational autoencoder~. A conditional random field is another popular technique often used to model the spatial dependency~.\n\\noindent\\textbf{Point-based}. The spatiotemporal point process is an important technique for spatiotemporal event prediction as it models the rate of event occurrence in terms of both spatial and time points. It is defined as:\n\\begin{align}\n\\label{eq:stpp}\n \\lambda(t,l|X)=\\lim\\nolimits_{|\\mathrm{d}t|\\rightarrow 0,|\\mathrm{d}l|\\rightarrow 0}\\mathbb{E}[N(\\mathrm{d}t\\times \\mathrm{d}l)|X]/(|\\mathrm{d}t||\\mathrm{d}l|)\n\\end{align}\nVarious models have been proposed to instantiate the framework illustrated in Equation \\eqref{eq:stpp}. For example, Liu and Brown~ began by assuming conditional independence among spatial and temporal factors and hence achieved the following decomposition:\n\\begin{align}\n \\lambda(t,l|X)=\\lambda(t,l|L,T,F)=\\lambda_1(l|L,T,F,t)\\cdot\\lambda_2(t|T)\n\\end{align}\nwhere $X,L,T,$ and $F$ denote the whole input indicator data as well as its different facets, including location, time, and other semantic features, respectively. Then the term $\\lambda_1(\\cdot)$ can be modeled based on the Markov spatial point process while $\\lambda_2(\\cdot)$ can be characterized using temporal autoregressive models. To handle situations where explicit assumptions about model distributions are difficult to make, several methods have been proposed to incorporate deep architectures into the point process. 
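As a minimal numerical sketch of such a decomposed intensity, one can pair a self-exciting (Hawkes-style) temporal rate with a kernel-density spatial term. The functional forms and parameter values below are illustrative choices, not those of any specific paper:

```python
import math

def intensity(t, loc, past_times, past_locs,
              mu=0.1, alpha=0.5, beta=1.0, bw=1.0):
    """Evaluate lambda(t, l | X) = lambda_1(l | ...) * lambda_2(t | T):
    lambda_2 is a Hawkes-style temporal rate with baseline `mu` and
    exponentially decaying excitation; lambda_1 is an (averaged)
    Gaussian kernel density over past event locations with bandwidth
    `bw`. With no history, the rate reduces to the baseline `mu`."""
    lam_t = mu + alpha * sum(math.exp(-beta * (t - ti))
                             for ti in past_times if ti < t)
    if past_locs:
        lam_l = sum(math.exp(-((loc[0] - x) ** 2 + (loc[1] - y) ** 2)
                             / (2 * bw ** 2))
                    for x, y in past_locs) / (len(past_locs) * 2 * math.pi * bw ** 2)
    else:
        lam_l = 1.0
    return lam_t * lam_l
```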
Most recently, Okawa et al.~ have proposed the following:\n\\begin{align}\n \\lambda(t,l|X)=\\int g_\\theta\\left(t',l',\\mathcal{F}(t',l')\\right)\\cdot\\mathcal{K}((t,l),(t',l'))\\ \\mathrm{d}t'\\mathrm{d}l'\n\\end{align}\nwhere $\\mathcal K(\\cdot,\\cdot)$ is a kernel function such as a Gaussian kernel~ that measures the similarity in the time and location dimensions. $\\mathcal{F}(t',l')\\subseteq F$ denotes the feature values (e.g., event semantics) for the data at location $l'$ and time $t'$. $g_\\theta(\\cdot)$ can be a deep neural network that is parameterized by $\\theta$ and returns a nonnegative scalar. The model selection for $g_\\theta(\\cdot)$ depends on the specific data types. For example, these authors constructed an image attention network by combining a CNN with the spatial attention model proposed by Lu et al.~.", "id": "d4d71435-dadd-4cc3-805f-65911ba23fc9", "level": "subsubsection", "origin_cites_number": 14, "parent_id": "a899718d-d44b-4906-ba2a-558ce46075dd", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Event Prediction Techniques" ], [ "subsection", "Multi-faceted Prediction" ], [ "subsubsection", "Time and Location Prediction" ] ], "subsections": [], "title": "Time and Location Prediction" }, { "cite_extract_rate": 0.058823529411764004, "cites": [ 341 ], "content": "\\label{sec:time_location_semantics}In this section, we introduce the strategies that jointly predict the time, location, and semantics of future events, which can be grouped into either system-based or model-based strategies.\n\\noindent\\textbf{System-based}. The first type of system-based method considered here is the model-fusion system. The most intuitive approach is to leverage and integrate the aforementioned techniques for time, location, and semantics prediction into an event prediction system. 
For example, a system named EMBERS~ is an online warning system for future events that can jointly predict the time, location, and semantics of future events, including their type and population. This system also provides information on the confidence of the predictions obtained. Using an ensemble of predictive models for time~, location, and semantic prediction, this system achieves a significant performance boost in terms of both precision and recall. The trick here is to first prioritize the precision of each individual prediction model by suppressing its recall. Then, due to the diversity and complementary nature of the different models, the fusion of the predictions from the different models will eventually result in a high recall. A Bayesian fusion-based strategy has also been investigated~. Another system named \\emph{Carbon}~ also leverages a similar strategy.\nThe second type of model involves crowd-sourced systems that implement fusion strategies to aggregate the event predictions made by human predictors. For example, in order to handle the heterogeneity and diversity of the human predictors' skill sets and background knowledge under limited human resources, Rostami et al.~ proposed a recommender system for matching event forecasting tasks to human predictors with suitable skills in order to maximize the accuracy of their fused predictions. Li et al.~ took a different approach, designing a prediction market system that operates like a futures market, integrating information from different human predictors to forecast future events. In this system, the predictors can decide whether to buy or sell the ``tokens'' (using virtual dollars, for example) for each specific prediction they have made, according to their confidence in it. They typically make careful decisions, as they will obtain corresponding rewards (for correct predictions) or penalties (for erroneous predictions).\n\\noindent\\textbf{Planned future event detection methods}. 
These methods focus on detecting planned future events, usually from sources such as social media and news, typically relying on NLP techniques and linguistic principles. Existing methods typically follow a workflow similar to the one shown in Figure \\ref{fig:planned_event}, consisting of four main steps: 1) \\emph{Content filtering.} Methods for content filtering are typically leveraged to retain only the texts that are relevant to the topic of interest. Existing works utilize either supervised methods (e.g., textual classifiers~) or unsupervised methods (e.g., querying techniques~); 2) \\emph{Time expression identification} is then utilized to identify future reference expressions and determine the \\emph{time to event}. These methods either leverage existing tools such as the Rosetta text analyzer~ or propose dedicated strategies based on linguistic rules~; 3) \\emph{Future reference sentence extraction} is the core of planned event detection, and is implemented either by designing regular expression-based rules~ or by textual classification~; and 4) \\emph{Location identification}. The expression of locations is typically highly heterogeneous and noisy. Existing works have relied heavily on geocoding techniques that can resolve the event location accurately. In order to infer the event locations, various types of locations are considered by different researchers, such as article locations~, authors' profile locations~, locations mentioned in the articles~, and authors' neighbors' locations~. Multiple locations have been selected using a geometric median~ or fused using logical rules such as probabilistic soft logic~.\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{images/plannedprotest.png}\\vspace{-0.3cm}\n\\caption{Generic framework for planned event detection.\\vspace{-0.3cm}}\n\\label{fig:planned_event}\n\\end{figure}\n\\noindent\\textbf{Tensor-based methods. 
}Some methods formulate the data into tensor-form, with dimensions including location, time, and semantics. Tensor decomposition is then applied to approximate the original tensors as the product of multiple low-rank matrices, each of which is a mapping from latent topics to each dimension. Finally, the tensor is extrapolated towards future time periods by various strategies. For example, Mirtaheri~ extrapolated the time dimension-matrix only, which they then multiplied with the other dimensions' matrices to recover the estimated extrapolated tensor into the future. Zhou et al.~ took a different approach, choosing instead to add ``empty values'' for the entries in future time to the original tensor, and then use tensor completion techniques to infer the missing values corresponding to future events.", "id": "682b3b7c-cca4-4ecf-8300-2e045311a819", "level": "subsubsection", "origin_cites_number": 17, "parent_id": "a899718d-d44b-4906-ba2a-558ce46075dd", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Event Prediction Techniques" ], [ "subsection", "Multi-faceted Prediction" ], [ "subsubsection", "Time, location, and Semantics" ] ], "subsections": [], "title": "Time, location, and Semantics" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:applications}", "id": "dbd1ff7b-a51a-4b40-b28d-3ec77cefa901", "level": "section", "origin_cites_number": 0, "parent_id": "06b41427-1045-4fd8-b025-95b34ab2f969", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ] ], "subsections": [ "c739b27c-4bbc-47e9-bfd0-6197219fa009", "7a5f3a91-6ad2-44d7-8a93-523b51bd39e8", "e8db337d-4767-468d-ba4a-7973cc074286", "52c88885-b2eb-427a-8d0d-f0da918827bb", "ee12ae57-3245-4bbb-bf6b-f7c4ce0ef0bc", "17123e85-3824-4679-954f-0b1a517bede9", "7cdc8da3-a431-4a2a-9c18-89a3e9aac1e7", "1f4383e1-b2d6-40fd-81df-58e4c9fe1627", 
"3657bacf-32f8-4c67-b6fb-98f1705ab73d" ], "title": "Applications of Event Predictions" }, { "cite_extract_rate": 0, "cites": [], "content": "This category generally consists of two types of event prediction: 1) population level, which includes disease epidemics and outbreaks, and 2) individual level, which relates to clinical longitudinal events.", "id": "c739b27c-4bbc-47e9-bfd0-6197219fa009", "level": "subsection", "origin_cites_number": 0, "parent_id": "dbd1ff7b-a51a-4b40-b28d-3ec77cefa901", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Healthcare" ] ], "subsections": [ "9a3e6b94-795e-48bc-b35a-ce827fc037dc", "781dacc0-ad59-4c18-a068-5226bff7982b" ], "title": "Healthcare" }, { "cite_extract_rate": 0, "cites": [], "content": "There has been extensive research on disease outbreaks for many different types of diseases and epidemics, including seasonal flu~, Zika~, H1N1~, Ebola~, and COVID-19~. These predictions target both the location and time of future events, while the disease type is usually fixed to a specific type for each model. Compartmental models such as SIR models are among the classical mathematical tools used to analyze, model, and simulate the epidemic dynamics~. More recently, individual-based computational models have begun to be used to perform network-based epidemiology based on network science and graph-theoretical models, where an epidemic is modeled as a stochastic propagation over an explicit interaction network among people~. Thanks to the availability of high-performance computing resources, another option is to construct a ``digital twin'' of the real world, by considering a realistic representation of a population, including members’ demographic, geographic, behavioral, and social contextual information, and then using individual-based simulations to study the spread of epidemics within each network~. 
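The compartmental SIR dynamics mentioned above can be sketched with a simple forward-Euler integrator; the parameter values and function name below are illustrative, not taken from any cited study:

```python
def sir_simulate(s0, i0, r0, beta, gamma, days, dt=0.1):
    """Forward-Euler integration of the classical SIR model:
    dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I.
    Returns the (S, I, R) compartment sizes after `days` days."""
    s, i, r = float(s0), float(i0), float(r0)
    n = s + i + r                        # total population (conserved)
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt  # S -> I transitions this step
        new_rec = gamma * i * dt         # I -> R transitions this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r
```

Network-based and agent-based epidemiology replaces this homogeneous-mixing assumption with per-individual contact structure, as the text describes.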
The above techniques heavily rely on the model assumptions regarding how the disease progresses individually and is transmitted from person to person~. The rapid growth of large surveillance data and social media data sets such as Twitter and Google flu trends in recent years has led to a massive increase of interest in using data-driven approaches to directly learn the predictive mapping~. These methods are usually both more time-efficient and less dependent on assumptions, while the aforementioned computational models are more powerful for longer-term prediction due to their ability to take into account the specific disease mechanisms~. Finally, there have also been reports of synergistic research that combines both techniques to benefit from their complementary strengths~.", "id": "9a3e6b94-795e-48bc-b35a-ce827fc037dc", "level": "subsubsection", "origin_cites_number": 11, "parent_id": "c739b27c-4bbc-47e9-bfd0-6197219fa009", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Healthcare" ], [ "subsubsection", "Population-level. " ] ], "subsections": [], "title": "Population-level. " }, { "cite_extract_rate": 0, "cites": [], "content": "This research thread focuses on the longitudinal predictive analysis of individual health-related events, including death occurrence~, adverse drug events~, sudden illnesses such as strokes~ and cardiovascular events~, as well as other clinical events~ and life events~ for different groups of people, including elders and people with mental disease. The goal here is usually to predict the time before an event occurs, although some researchers have attempted to predict the type of event. The data sources are essentially the electronic health records of individual patients~. 
Recently, social media, forum, and mobile data has also been utilized for predicting drug adverse events~ and events that arise during chronic disease (e.g., chemical radiation and surgery)~.", "id": "781dacc0-ad59-4c18-a068-5226bff7982b", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "c739b27c-4bbc-47e9-bfd0-6197219fa009", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Healthcare" ], [ "subsubsection", "Individual-level. " ] ], "subsections": [], "title": "Individual-level. " }, { "cite_extract_rate": 0, "cites": [], "content": "This category focuses on predicting events based on information held in various types of media including: video-based, audio-based, and text-based formats. The core issue is to retrieve key information related to future events utilizing semantic pattern recognition from the data.", "id": "7a5f3a91-6ad2-44d7-8a93-523b51bd39e8", "level": "subsection", "origin_cites_number": 0, "parent_id": "dbd1ff7b-a51a-4b40-b28d-3ec77cefa901", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Media" ] ], "subsections": [ "033686c5-99a0-450d-bd05-1ef9b270c0ab", "fd8ad43d-8146-40c6-bea7-0d703ba833da" ], "title": "Media" }, { "cite_extract_rate": 0.4, "cites": [ 9105, 9107 ], "content": ". While event detection has been extensively researched for video data~ and audio mining~, event prediction is more challenging and has been attracting increasing attention in recent years. 
The goal here is usually to predict the future status of the objects in the video, such as the next action of soccer players~ or basketball players~, or the movement of vehicles~.", "id": "033686c5-99a0-450d-bd05-1ef9b270c0ab", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "7a5f3a91-6ad2-44d7-8a93-523b51bd39e8", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Media" ], [ "subsubsection", "Video- and Audio- based" ] ], "subsections": [], "title": "Video- and Audio- based" }, { "cite_extract_rate": 0, "cites": [], "content": "A huge amount of news data has accumulated in recent decades, much of which can be used for big data predictive analytics among news events. A number of researchers have focused on predicting the location, time, and semantics of various events. To achieve this, they usually leverage the immense historical news and knowledge base in order to learn the association and causality among events, which is then applied to forecast events when given current events. Some studies have even directly generated textual descriptions of future events by leveraging NLP techniques such as sequence to sequence learning~.", "id": "fd8ad43d-8146-40c6-bea7-0d703ba833da", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "7a5f3a91-6ad2-44d7-8a93-523b51bd39e8", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Media" ], [ "subsubsection", "Text- and script- based" ] ], "subsections": [], "title": "Text- and script- based" }, { "cite_extract_rate": 0, "cites": [], "content": ". 
This category can be classified into: 1) population-based events, including dispersal events, gathering events, and congestion; and 2) individual-level events, which focus on fine-grained patterns such as human mobility behavior prediction.", "id": "e8db337d-4767-468d-ba4a-7973cc074286", "level": "subsection", "origin_cites_number": 0, "parent_id": "dbd1ff7b-a51a-4b40-b28d-3ec77cefa901", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Transportation" ] ], "subsections": [ "69bc2115-ef52-4f9b-8ce4-250aed770386", "06704e24-ba02-4500-92f1-669db72147b3" ], "title": "Transportation" }, { "cite_extract_rate": 0.4, "cites": [ 357, 346 ], "content": ". Here, researchers typically focus on transportation events such as congestion~, large gatherings~, and dispersal events~. The goal is thus to forecast the future time period~ and location~ of such events. Data from traffic meters, GPS, and mobile devices are usually used to sense real-time human mobility patterns. Transportation and geographical theories are usually considered to determine the spatial and temporal dependencies for predicting these events.", "id": "69bc2115-ef52-4f9b-8ce4-250aed770386", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "e8db337d-4767-468d-ba4a-7973cc074286", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Transportation" ], [ "subsubsection", "Group transportation pattern" ] ], "subsections": [], "title": "Group transportation pattern" }, { "cite_extract_rate": 0.4, "cites": [ 358, 355 ], "content": "Another research thread focuses on individual-level prediction, such as predicting an individual's next location~ or the likelihood or time duration of car accidents~. 
Sequential and trajectory analyses are usually used to process trajectory and traffic flow data.", "id": "06704e24-ba02-4500-92f1-669db72147b3", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "e8db337d-4767-468d-ba4a-7973cc074286", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Transportation" ], [ "subsubsection", "Individual transportation behavior" ] ], "subsections": [], "title": "Individual transportation behavior" }, { "cite_extract_rate": 0.09090909090909001, "cites": [ 359 ], "content": "Different types of engineering systems have begun to routinely apply event forecasting methods, including: 1) civil engineering, 2) electrical engineering, 3) energy engineering, and 4) other engineering domains. Despite the variety of systems in these widely different domains, the goal is essentially to predict future abnormal or failure events in order to support the system's sustainability and robustness. Both the location and time of future events are key factors for these predictions. The input features usually consist of sensing data relevant to specific engineering systems.\n\\begin{itemize}\n\\item\\textbf{Civil engineering.} This covers a wide range of problems in diverse urban systems, such as smart building fault and adverse event prediction~, emergency management equipment failure prediction~, manhole event prediction~, and other events~.\n\\item\\textbf{Electrical engineering}. This includes teleservice system failures~ and unexpected events in wire electrical discharge machining operations~. \n\\item\\textbf{Energy engineering}. Event prediction is also a hot topic in energy engineering, as such systems usually require strong robustness to handle disturbances from the natural environment. 
Active research domains here include wind power ramp prediction~, solar power ramp prediction~, and adverse events in low carbon energy production~.\n\\item\\textbf{Other engineering domains}. There is also active research on event prediction in other domains, such as irrigation event prediction in agricultural engineering~ and mechanical fault prediction in mechanical engineering~.\n\\end{itemize}", "id": "52c88885-b2eb-427a-8d0d-f0da918827bb", "level": "subsection", "origin_cites_number": 11, "parent_id": "dbd1ff7b-a51a-4b40-b28d-3ec77cefa901", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Engineering Systems" ] ], "subsections": [], "title": "Engineering Systems" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 360, 361 ], "content": "Here, the prediction models proposed generally focus on either network-level events or device-level events. For both types, the general goal is essentially to predict the likelihood of future system failure or attacks based on various indicators of system vulnerability. So far these two categories have essentially differed only in their inputs: the former relies on network features, including system specifications, web access logs and search queries, mismanagement symptoms, spam, phishing, and scamming activity, although some researchers are investigating the use of social media text streams to identify semantics indicating future potential attack targets of DDoS~. For device-level events, the features of interest are usually the binary file appearance logs of machines~. 
Some work has been done on micro-architectural attacks~ by proactively analyzing observations of speculative branches, out-of-order executions, and shared last-level caches~.", "id": "ee12ae57-3245-4bbb-bf6b-f7c4ce0ef0bc", "level": "subsection", "origin_cites_number": 6, "parent_id": "dbd1ff7b-a51a-4b40-b28d-3ec77cefa901", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Cyber Systems" ] ], "subsections": [], "title": "Cyber Systems" }, { "cite_extract_rate": 0, "cites": [], "content": "Political event prediction has become a very active research area in recent years, largely thanks to the popularity of social media. The most common research topics can be categorized as: 1) offline events, and 2) online activism.", "id": "17123e85-3824-4679-954f-0b1a517bede9", "level": "subsection", "origin_cites_number": 0, "parent_id": "dbd1ff7b-a51a-4b40-b28d-3ec77cefa901", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Political events" ] ], "subsections": [ "3174d31c-d0af-4f0f-a000-e9b1742e4db3", "d2b28f65-8a74-4bcd-a37a-f6b534d6d2ca" ], "title": "Political events" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 341 ], "content": "This includes civil unrest~, conflicts~, violence~, and riots~. This type of research usually targets the future events' geo-location, time, and topics by leveraging social sensors that indicate public opinions and intentions. Utilization of social media has become a popular approach for these endeavors, as social media is a source of vital information during the event development stage~. 
Specifically, many aspects are clearly visible in social media, including complaints from the public (e.g., toward the government), discussions about their intentions regarding specific political events and targets, as well as advertisements for the planned events. Due to the richness of this information, further information on future events such as the type of event~, the anticipated participant population~, and the event scale~ can also be discovered in advance.", "id": "3174d31c-d0af-4f0f-a000-e9b1742e4db3", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "17123e85-3824-4679-954f-0b1a517bede9", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Political events" ], [ "subsubsection", "Offline events" ] ], "subsections": [], "title": "Offline events" }, { "cite_extract_rate": 0, "cites": [], "content": "Due to the major impact of online media such as online forums and social media, many events such as online activism, petitions, and hoaxes on such online platforms also involve strong motivations for achieving some political purpose~. Beyond simple detection, the prediction of various types of events has been studied in order to enable proactive intervention to sidetrack events such as hoaxes and rumor propagation~. 
Other researchers have sought to foresee the results of future political events in order to benefit a particular group of practitioners, for example by predicting the outcome of online petitions or presidential elections~.", "id": "d2b28f65-8a74-4bcd-a37a-f6b534d6d2ca", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "17123e85-3824-4679-954f-0b1a517bede9", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Political events" ], [ "subsubsection", "Online events" ] ], "subsections": [], "title": "Online events" }, { "cite_extract_rate": 0, "cites": [], "content": "Different types of natural disasters have been the focus of a great deal of research. Typically, these are rare events, but mechanistic models, long historical records (often extending back dozens or hundreds of years), and domain knowledge are usually available. The input data are typically collected by sensors or sensor networks and the output is the risk or hazard of future potential events. 
Since these events are typically rare but very high-stakes, many researchers strive to cover all event occurrences and hence aim to ensure high recall.", "id": "7cdc8da3-a431-4a2a-9c18-89a3e9aac1e7", "level": "subsection", "origin_cites_number": 0, "parent_id": "dbd1ff7b-a51a-4b40-b28d-3ec77cefa901", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Natural disasters" ] ], "subsections": [ "5d333d17-9065-40be-980f-43fa5d914180", "3396d058-409f-4260-9ee6-4b9d6f66ea05", "149a91d1-04db-4048-b148-d80a122d1ca5" ], "title": "Natural disasters" }, { "cite_extract_rate": 0.1, "cites": [ 8345 ], "content": "\\textbf{Earthquakes.} Predictions here typically focus on whether there will be an earthquake with a magnitude larger than a specified threshold in a certain area during a future period of time. To achieve this, the original sensor data is usually processed using geophysical models such as Gutenberg–Richter’s inverse law, the distribution of characteristic earthquake magnitudes, and seismic quiescence~. The processed data are then input into machine learning models that treat them as input features for predicting the output, which can be either binary values of event occurrence or time-to-event values. Some studies are devoted to identifying the time of future earthquakes and their precursors, based on an ensemble of regressors and feature selection techniques~, while others focus on aftershock prediction and the consequences of the earthquake, such as fire prevention~. It is worth noting that social media data has also been used for such tasks, as this often supports early detection of the first-wave earthquake, which can then be used to predict the aftershocks or earthquakes in other locations~.\n\\textbf{Fire events}. Research in this category can be grouped into urban fires and wildfires. 
This type of research often focuses on the time at which a fire will affect a specific location, such as a building. The goal here is to predict the risk of future fire events. To achieve this, both the external environment and the intrinsic properties of the location of interest are important. Therefore, both static input data (e.g., natural conditions and demographics) and time-varying data (e.g., weather, climate, and crowd flow) are usually involved. Shin and Kim~ focus on building fire risk prediction, where the input is the building's profile. Others have studied wildfires, where weather data and satellite data are important inputs. This type of research focuses primarily on predicting both the time and location of future fires~.\nOther researchers have focused on rarer events such as volcanic eruptions. For example, some leverage chemical prior knowledge to build a Bayesian network for prediction~, while others adopt point processes to predict the hazard of future events~.", "id": "5d333d17-9065-40be-980f-43fa5d914180", "level": "subsubsection", "origin_cites_number": 10, "parent_id": "7cdc8da3-a431-4a2a-9c18-89a3e9aac1e7", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Natural disasters" ], [ "subsubsection", "Geophysics-related" ] ], "subsections": [], "title": "Geophysics-related" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 362 ], "content": "\\textbf{Flood events}. Floods may be caused by many different factors, including atmospheric (e.g., snow and rain), hydrological (e.g., ice melting, wind-generated waves, and river flow), and geophysical (e.g., terrain) conditions. This makes the forecasting of floods a highly complicated task that requires multiple diverse predictors~. Flood event prediction has a long history, with the latest research focusing especially on computational and simulation models based on domain knowledge. 
This usually involves using ensemble prediction systems as inputs for hydrological and/or hydraulic models to produce river discharge predictions. For a detailed survey on flood computational models, please refer to~. However, it is prohibitively difficult to comprehensively consider and model all the factors correctly while avoiding all the accumulated errors from upstream predictions (e.g., precipitation prediction). Another direction, based on data-driven models such as statistical and machine learning models for flood prediction, is deemed promising and is expected to be complementary to existing computational models. These newly developed machine learning models are often based solely on historical data, requiring no knowledge of the underlying physical processes. Representative models include SVMs, random forests, and neural networks, together with their variants and hybrids. A detailed recent survey is provided in~.\n\\textbf{Tornado Forecasting}. Tornadoes usually develop within thunderstorms and hence most tornado warning systems are based on the prediction of thunderstorms. For a comprehensive survey, please refer to~. Machine learning models, when applied to tornado forecasting tasks, usually suffer from high-dimensionality issues, which are very common in meteorological data. Some methods have leveraged dimensionality reduction strategies to preprocess the data~ before prediction. 
Research on other atmosphere-related events such as droughts and ozone events has also been conducted~.", "id": "3396d058-409f-4260-9ee6-4b9d6f66ea05", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "7cdc8da3-a431-4a2a-9c18-89a3e9aac1e7", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Natural disasters" ], [ "subsubsection", "Atmospheric science-related" ] ], "subsections": [], "title": "Atmospheric science-related" }, { "cite_extract_rate": 0.5, "cites": [ 9087, 8346 ], "content": "There is a large body of prediction research focusing on events outside the Earth, especially those affecting the star closest to us, the sun. Methods have been proposed to predict various solar events that could impact life on Earth, including solar flares~, solar eruptions~, and high energy particle storms~. The goal here is typically to use satellite imagery data of the sun to predict the time and location of future solar events and the activity strength~.", "id": "149a91d1-04db-4048-b148-d80a122d1ca5", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "7cdc8da3-a431-4a2a-9c18-89a3e9aac1e7", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Natural disasters" ], [ "subsubsection", "Astrophysics-related" ] ], "subsections": [], "title": "Astrophysics-related" }, { "cite_extract_rate": 0, "cites": [], "content": "Business intelligence can be grouped into company-based events and customer-based events.", "id": "1f4383e1-b2d6-40fd-81df-58e4c9fe1627", "level": "subsection", "origin_cites_number": 0, "parent_id": "dbd1ff7b-a51a-4b40-b28d-3ec77cefa901", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Business" ] ], 
"subsections": [ "a8750b3c-1848-4d33-8de1-69d1e05f09c9", "5ce2d265-ac12-4986-806a-403762b90fc6" ], "title": "Business" }, { "cite_extract_rate": 0, "cites": [], "content": "The most important customer activity questions in business are whether a customer will continue doing business with a company and how long a customer will be willing to wait before receiving service. A great deal of research has been devoted to these topics, which can be categorized based on the type of business entity, namely enterprises, social media, and education, which are primarily interested in churn prediction, site migration, and student dropout, respectively. The first of these focuses on predicting whether and when a customer is likely to stop doing business with a profitable enterprise~. The second aims to predict whether a social media user will move from one site, such as Flickr, to another, such as Instagram, a movement known as site migration~. While site migration is not popular, attention migration might actually be much more common, as a user may ``move'' their major activities from one social media site to another. The third type, student dropout, is a critical domain for education data mining, where the goal is to predict the occurrence of absenteeism from school for no good reason for a continuous number of days; a comprehensive survey is available in~. 
For all three types, the procedure is first to collect features of a customer's profile and activities over a period of time, and then conventional or sequential classifiers or regressors are generally used to predict the occurrence or time-to-event of the future targeted activity.", "id": "a8750b3c-1848-4d33-8de1-69d1e05f09c9", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "1f4383e1-b2d6-40fd-81df-58e4c9fe1627", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Business" ], [ "subsubsection", "Customer activity prediction" ] ], "subsections": [], "title": "Customer activity prediction" }, { "cite_extract_rate": 0, "cites": [], "content": "Financial event prediction has been attracting a huge amount of attention for risk management, marketing, investment prediction, and fraud prevention. Multiple information resources, including news, company announcements, and social media data, could be utilized as the input, often taking the form of time series or temporal sequences. These sequential inputs are used for the prediction of the time and occurrence of future high-stakes events such as company distress, suspension, mergers, dividends, layoffs, bankruptcy, and market trends (rises and falls in the company's stock price)~.", "id": "5ce2d265-ac12-4986-806a-403762b90fc6", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "1f4383e1-b2d6-40fd-81df-58e4c9fe1627", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Business" ], [ "subsubsection", "Business process events" ] ], "subsections": [], "title": "Business process events" }, { "cite_extract_rate": 0, "cites": [], "content": "It is difficult to deduce the precise location and time for individual crime incidents. 
Therefore, the focus is instead on estimating the risk and probability of the location, time, and types of future crimes. This field can be naturally categorized based on the various crime types:", "id": "3657bacf-32f8-4c67-b6fb-98f1705ab73d", "level": "subsection", "origin_cites_number": 0, "parent_id": "dbd1ff7b-a51a-4b40-b28d-3ec77cefa901", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Crime" ] ], "subsections": [ "8eb184cd-d495-4d6d-a2df-c0309380f5af", "4b2c5e29-b351-4b89-b367-764b29b6621a", "3964c722-171e-46d4-9dab-9839e9a229fe" ], "title": "Crime" }, { "cite_extract_rate": 0, "cites": [], "content": "This type of crime is typically highly destructive, and hence attracts huge attention in its anticipation and prevention. Terrorist activities are usually aimed at religious, political, iconic, economic or social targets. The attacker typically targets large numbers of people, and the evidence related to such attacks is retained in the long run. Though it is extremely challenging to predict the precise location and time of individual terrorism incidents, numerous studies have shown the potential utility of predicting the regional risks of terrorism attacks based on information gathered from many data sources such as geopolitical data, weather data, and economics data. The Global Terrorism Database is the most widely recognized dataset that records the descriptions of world-wide terrorism events of recent decades. 
In addition to terrorism events, other similar events such as mass killings~ and armed-conflict events~ have also been studied using similar problem formulations.", "id": "8eb184cd-d495-4d6d-a2df-c0309380f5af", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "3657bacf-32f8-4c67-b6fb-98f1705ab73d", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Crime" ], [ "subsubsection", "Political crimes and terrorism" ] ], "subsections": [], "title": "Political crimes and terrorism" }, { "cite_extract_rate": 0, "cites": [], "content": "Most studies on this topic focus on predicting the types, intensity, count, and probability of crime events across defined geo-spatial regions. To date, urban crime has been the most common topic of research due to data availability. The geospatial characteristics of the urban areas, their demographics, and temporal data such as news, weather, economics, and social media data are usually used as inputs. The geospatial dependency and correlation of the crime patterns are usually leveraged during the prediction process using techniques originally developed for spatial predictions, such as kernel density estimation and conditional random fields. 
Some works simplify the task by focusing only on specific types of crimes, such as theft~, robbery, and burglary~.", "id": "4b2c5e29-b351-4b89-b367-764b29b6621a", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "3657bacf-32f8-4c67-b6fb-98f1705ab73d", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Crime" ], [ "subsubsection", "Crime incidents" ] ], "subsections": [], "title": "Crime incidents" }, { "cite_extract_rate": 0.5, "cites": [ 358 ], "content": "Unlike the above research on regional crime risks, some recent studies strive to predict the next incidents of criminal individuals or groups. This is because different offenders may demonstrate different behavioral patterns, such as targeting specific regions (e.g., wealthy neighborhoods), victims (e.g., women), or benefits (e.g., money). The goal here is thus to predict the next crime site and/or time, based on the historical crime event sequence of the targeted criminal individual or group. 
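As an illustrative sketch (a generic self-exciting Hawkes process, rather than the exact model of any single cited work), given the offender's historical incident times $t_1 < t_2 < \dots < t_n$, the conditional intensity of the next incident can be written as \[ \lambda(t) = \mu + \sum_{t_i < t} \alpha\, e^{-\beta (t - t_i)}, \] where $\mu$ is a baseline rate and $\alpha, \beta > 0$ control how strongly, and for how long, each past incident raises the likelihood of the next one; the next incident time can then be sampled from, or its expectation computed under, $\lambda(t)$. 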
Models such as point processes~ or Bayesian networks~ are usually used to address such problems.", "id": "3964c722-171e-46d4-9dab-9839e9a229fe", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "3657bacf-32f8-4c67-b6fb-98f1705ab73d", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Applications of Event Predictions" ], [ "subsection", "Crime" ], [ "subsubsection", "Organized and serial crimes" ] ], "subsections": [], "title": "Organized and serial crimes" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:discussion}\nDespite the major advances in event prediction in recent years, there are still a number of open problems and potentially fruitful directions for future research, as follows:", "id": "21838b02-3baa-4e8a-ab7c-6c5bf122bb9f", "level": "section", "origin_cites_number": 0, "parent_id": "06b41427-1045-4fd8-b025-95b34ab2f969", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Open Challenges and Future Directions" ] ], "subsections": [ "d0f8fb47-3b03-4f42-8861-8f8626eb78f5", "3d957e87-043a-4f17-a938-208fb1fa5e74", "ec99593f-522c-4736-8d34-f85b2b1e7892", "98502f0a-e6bc-4c35-8d52-5388d331f8ae", "e2abe99b-4db7-4795-9b8a-1d285472b45d" ], "title": "Open Challenges and Future Directions" }, { "cite_extract_rate": 0, "cites": [], "content": "Increasingly sophisticated forecasting models have been proposed to improve the prediction accuracy, including those utilizing approaches such as ensemble models, neural networks, and the other complex systems mentioned above. However, although the accuracy can be improved, the event prediction models are rapidly becoming too complex to be interpreted by human operators. 
The need for better model accountability and interpretability is becoming an important issue: as big data and Artificial Intelligence techniques are applied to ever more domains, this can lead to serious consequences for applications such as healthcare and disaster management. Models that are not interpretable by humans will find it hard to build the trust needed if they are to be fully integrated into the workflow of practitioners. A closely related key feature is the accountability of the event prediction system. For example, disaster managers need to thoroughly understand a model's recommendations if they are to be able to explain the reason for a decision to displace people in a court of law. Moreover, an ever-increasing number of laws in countries around the world are beginning to require adequate explanations of decisions reached based on model recommendations. For example, Articles 13-15 in the European Union's General Data Protection Regulation (GDPR)~ require algorithms that make decisions that ``significantly affect'' individuals to provide explanations (``right to explanation'') by May 25, 2018. Similar laws have also been established in countries such as the United States~ and China~.", "id": "d0f8fb47-3b03-4f42-8861-8f8626eb78f5", "level": "subsection", "origin_cites_number": 3, "parent_id": "21838b02-3baa-4e8a-ab7c-6c5bf122bb9f", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Open Challenges and Future Directions" ], [ "subsection", "Model Transparency, Interpretability, and Accountability" ] ], "subsections": [], "title": "Model Transparency, Interpretability, and Accountability" }, { "cite_extract_rate": 0, "cites": [], "content": "The massive popularity of the proposal, development, and deployment of event prediction is stimulating a surge of interest in developing ways to counter-attack these systems. 
It will therefore not be a surprise when we begin to see the introduction of techniques to obfuscate these event prediction methods in the near future. As with many state-of-the-art AI techniques applied in other domains such as object recognition, event prediction methods can also be very vulnerable to noise and adversarial attacks. The famous failure of Google Flu Trends, which missed the peak of the 2013 flu season by 140 percent due to low relevance and high disturbance affecting the input signal, is a vivid memory for practitioners in the field~. Many predictions relying on social media data can also be easily influenced or flipped by injecting scam messages. Event prediction models also tend to over-rely on low-quality input data that can be easily disturbed or manipulated, lacking sufficient robustness to survive noisy signals and adversarial attacks. Similar problems threaten other application domains such as business intelligence, crime, and cyber systems.", "id": "3d957e87-043a-4f17-a938-208fb1fa5e74", "level": "subsection", "origin_cites_number": 1, "parent_id": "21838b02-3baa-4e8a-ab7c-6c5bf122bb9f", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Open Challenges and Future Directions" ], [ "subsection", "Vulnerability to Noise and Adversarial Attacks" ] ], "subsections": [], "title": "Vulnerability to Noise and Adversarial Attacks" }, { "cite_extract_rate": 0, "cites": [], "content": "Over the years, many domains have accumulated a significant amount of knowledge and experience about event development and occurrence mechanisms, which can thus provide important clues for anticipating future events, such as epidemiology models, socio-political models, and earthquake models. All of these models focus on simplifying real-world phenomena into concise principles in order to grasp the core mechanism, discarding many details in the process. 
In contrast, data-driven models strive to ensure the accurate fitting of large historical data sets, based on sufficient model expressiveness, but cannot guarantee that the true underlying principles and causality of event occurrence are modeled accurately. There is thus a clear motivation to combine their complementary strengths, and although this has already attracted a great deal of interest~, most of the models proposed so far are merely ensemble learning-based and simply merge the final predictions from each model. A more thorough integration is needed that can directly embed the core principles to regularize and instruct the training of data-driven event prediction methods. Moreover, existing attempts are typically specific to particular domains and are thus difficult to develop further as they require in-depth collaborations between data scientists and domain experts. A generic framework developed to encompass multiple different domains is imperative and would be highly beneficial for the various domain experts.", "id": "ec99593f-522c-4736-8d34-f85b2b1e7892", "level": "subsection", "origin_cites_number": 2, "parent_id": "21838b02-3baa-4e8a-ab7c-6c5bf122bb9f", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Open Challenges and Future Directions" ], [ "subsection", "Integration of mechanistic knowledge and data-driven models" ] ], "subsections": [], "title": "Integration of mechanistic knowledge and data-driven models" }, { "cite_extract_rate": 0, "cites": [], "content": "The ultimate purpose of event prediction is usually not just to anticipate the future, but to change it, for example by avoiding a system failure and flattening the curve of a disease outbreak. However, it is difficult for practitioners to determine how to act appropriately and implement effective policies in order to achieve the desired results in the future. 
This requires a capability that goes beyond simply predicting future events based on the current situation: the models must instead also take into account the new actions being taken in real time and then predict how these might influence the future. One promising direction is the use of counterfactual event~ prediction that models what would have happened if different circumstances had occurred. Another related direction is prescriptive analysis, where different actions can be merged into the prediction system and future results anticipated or optimized. Related works have been developed in a few domains such as epidemiology. However, many other domains still lack the research that will be needed if we are to develop generic frameworks that can benefit multiple different domains.", "id": "98502f0a-e6bc-4c35-8d52-5388d331f8ae", "level": "subsection", "origin_cites_number": 1, "parent_id": "21838b02-3baa-4e8a-ab7c-6c5bf122bb9f", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Open Challenges and Future Directions" ], [ "subsection", "Prescriptive and counterfactual analysis" ] ], "subsections": [], "title": "Prescriptive and counterfactual analysis" }, { "cite_extract_rate": 0, "cites": [], "content": "Existing event prediction methods focus primarily on accuracy. However, decision makers who utilize these predicted event results usually need much more, including key factors such as event resolution (e.g., time resolution, location resolution, description details), confidence (e.g., the probability a predicted event will occur), efficiency (whether the model can predict per day or per second), lead time (how many days the prediction can be made prior to the event occurring), and event intensity (how serious it is). 
There are typically trade-offs among the above metrics and accuracy, so merely optimizing accuracy during training will inevitably mean the results drift away from the overall optimal event-prediction-based decision. A system that can flexibly balance the trade-offs between these metrics based on decision makers' needs and achieve a multi-objective optimization is the ultimate objective for these models.", "id": "e2abe99b-4db7-4795-9b8a-1d285472b45d", "level": "subsection", "origin_cites_number": 0, "parent_id": "21838b02-3baa-4e8a-ab7c-6c5bf122bb9f", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Open Challenges and Future Directions" ], [ "subsection", "Multi-objective training" ] ], "subsections": [], "title": "Multi-objective training" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:conclusions}\nThis paper has presented a comprehensive survey of existing methodologies developed for event prediction in the big data era. It provides an extensive overview of the event prediction challenges, techniques, applications, evaluation procedures, and future outlook, summarizing the research presented in over 200 publications, most of which were published in the last five years. Event prediction challenges, opportunities, and formulations have been discussed in terms of the event element to be predicted, including the event location, time, and semantics, after which we went on to propose a systematic taxonomy of the existing event prediction techniques according to the formulated problems and types of methodologies designed for the corresponding problems. We have also analyzed the relationships, differences, advantages, and disadvantages of these techniques from various domains, including machine learning, data mining, pattern recognition, natural language processing, information retrieval, statistics, and other computational models. 
In addition, a comprehensive and hierarchical categorization of popular event prediction applications has been provided that covers domains ranging from the natural sciences to the social sciences. Based upon the numerous historical and state-of-the-art works discussed in this survey, the paper concludes by discussing open problems and future trends in this fast-growing domain.\n\\bibliographystyle{ACM-Reference-Format}\n \\bibliography{acmart}\n\\end{document}", "id": "0cc2585c-6b4f-4ace-b2f9-61073d4d0fcf", "level": "section", "origin_cites_number": 0, "parent_id": "06b41427-1045-4fd8-b025-95b34ab2f969", "prefix_titles": [ [ "title", "Event Prediction in the Big Data Era: A Systematic Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
44
[ 340, 339, 341, 8342, 344, 343, 166, 342, 8343, 8344, 9104, 345, 346, 9105, 347, 59, 9106, 348, 349, 352, 351, 353, 350, 356, 355, 354, 9107, 357, 358, 359, 360, 361, 8345, 362, 9087, 8346 ]
1.42435
[ "Rahul Mishra", "Hari Prabhat Gupta", "Tanima Dutta" ]
A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions
2020
2020-10-05T13:12:46Z
cs.LG
Deep Neural Networks (DNNs) have gained unprecedented performance due to their automated feature extraction capability. This high-order performance has led to significant incorporation of DNN models in different Internet of Things (IoT) applications in the past decade. However, the colossal requirement of computation, energy, and storage of DNN models makes their deployment prohibitive on resource-constrained IoT devices. Therefore, several compression techniques were proposed in recent years for reducing the storage and computation requirements of the DNN model. These DNN compression techniques utilize different perspectives for compressing DNNs with minimal accuracy compromise. This encourages us to make a comprehensive overview of the DNN compression techniques. In this paper, we present a comprehensive review of existing literature on compressing DNN models to reduce both storage and computation requirements. We divide the existing approaches into five broad categories, \textit{i.e.,} network pruning, sparse representation, bits precision, knowledge distillation, and miscellaneous, based upon the mechanism incorporated for compressing the DNN model. The paper also discusses the challenges associated with each category of DNN compression techniques. Finally, we provide a quick summary of existing work under each category along with future directions in DNN compression.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "06f26bac-bd30-4095-9869-55d16cc27805", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ] ], "subsections": [ "8bb84bd4-7832-490e-b8d8-39d22dd9073e", "eb363ec4-5c7b-4833-aea6-b742bc82b88c", "32094ad6-d342-49fc-a67e-74352e80d780", "dd035969-ad6f-463a-b04a-bccc9bef6cde", "e76e7226-ba57-47dc-843b-33e8177f17ac", "ba733adc-5211-4a9a-b70a-7d03cf2c8077", "34c631ba-3048-4884-8a07-2d779d5a3a90", "d19e7744-862f-4783-af40-1e0704c0fc6e" ], "title": "root" }, { "cite_extract_rate": 0.1, "cites": [ 6312, 6311 ], "content": "Recent years have witnessed significant growth in machine learning-based Internet of Things (IoT) applications due to the easier and cheaper availability of small computing devices~. Machine learning-based approaches require handcrafted features, which reduces their suitability for a new dataset where information about the data distribution is missing~. Therefore, Deep Neural Network (DNN) based IoT applications have achieved dominance in different application domains, including smart home, agriculture, locomotion mode recognition, \\textit{etc}~. This is because the DNN provides automatic feature extraction capability from a large amount of raw data. DNN models not only eliminate human intervention for calculating domain-related features but also achieve competitive performance~. A DNN model involves the following components: Convolutional Neural Network (CNN) layers, Fully Connected (FC) layers, and Recurrent Neural Network (RNN) layers. A DNN model can have all these components at a time or any combination of them. The CNN extracts spatial features and the RNN identifies temporal features from the dataset. 
These features achieve high-order performance in classification, regression, and prediction across different tasks~.\nDespite these advantages, DNN models require a significant amount of resources, including energy, processing capacity, and storage. This resource requirement reduces the suitability of DNN models for Resource Constraint Devices (RCDs)~. The huge resource requirements of the DNN model also become a bottleneck for real-time inference and for running DNN models in browser-based applications. Therefore, to mitigate these shortcomings of the DNN model, \\textit{i.e.,} energy, processing capacity, and storage, different DNN compression techniques have been proposed in the existing literature.\nWe can achieve several benefits by compressing the DNN model compared with the traditional cumbersome DNN model. Some of these benefits are as follows:\n\\begin{itemize}\n \\item \\textit{Storage capacity:} The DNN model achieves significant accuracy at the cost of a large number of parameters, which require considerable storage~. Therefore, by compressing the DNN model, we can preserve storage, which facilitates the deployment of DNN models on Resource Constraint Devices (RCDs).\n \\item \\textit{Computation requirements:} The large number of FLoating point OPerations (FLOPs) involved in DNN operation can exceed the limited computational capacity of the RCDs~. Thus, it would be beneficial to employ DNN compression for reducing the computation requirement.\n \\item \\textit{Earliness:} The training and inference times of a DNN model are significantly high, which hampers real-time inference performance~. The DNN compression mechanisms, therefore, provide a higher degree of earliness in both the training and inference phases.\n \\item \\textit{Privacy:} Data transmission from the source to a high-end machine can lead to security breaches and privacy compromise~. 
Thus, it would be beneficial to employ in-situ processing using a compressed DNN model on RCDs, which helps in preserving privacy and provides data security.\n \\item \\textit{Energy consumption:} DNN compression also preserves energy for processing the data~. This enhances the suitability of compressed DNN models for deployment on battery-operated IoT devices.\n\\end{itemize}\nThe demand for IoT-based applications is growing day by day, which encourages data scientists to incorporate DNN models in IoT applications. Therefore, it is beneficial to provide a thorough overview of the DNN compression techniques that can meet the limited storage and processing capacity available on resource constraint IoT devices. Thus, we carry out a comprehensive survey of the existing literature on different DNN compression techniques and propose a useful categorization to point out potential research gaps for future work. This paper presents a systematic overview of DNN compression techniques that reduce both the storage and computation requirements of the DNN model. We categorize the different DNN compression techniques into five broad categories based on the type of strategy they follow for compressing the DNN model with minimal accuracy compromise. The five broad categories for DNN compression are network pruning, sparse representation, bits precision, knowledge distillation, and miscellaneous. Fig.~\\ref{category} illustrates the categorization hierarchy with different DNN compression techniques and their subcategories.\nThe next section discusses the categorization of the DNN compression techniques. Section~\\ref{np} presents the overview of network pruning techniques with all sub-categories. The summary of the sparse representation techniques is presented in Section~\\ref{sr}. Further, Section~\\ref{bp} and Section~\\ref{kd} present detailed descriptions of the bits precision and knowledge distillation techniques, respectively. 
The miscellaneous techniques that are not included in the above four categories are discussed in Section~\\ref{mln}. Finally, Section~\\ref{fd} presents the discussions and future directions of the DNN compression techniques.", "id": "8bb84bd4-7832-490e-b8d8-39d22dd9073e", "level": "section", "origin_cites_number": 20, "parent_id": "06f26bac-bd30-4095-9869-55d16cc27805", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "This section classifies the existing work on DNN compression into five broad categories, \\textit{i.e.,} network pruning, sparse representation, bits precision, knowledge distillation, and miscellaneous techniques. These categories were chosen through an extensive literature review on DNN compression techniques. For example, network shrinking yields the network pruning category, and sparsification of the weight matrix yields the sparse representation category. Fig.~\\ref{category} illustrates the overview of the paper with different categories of DNN compression. Table~\\ref{chal1} summarizes the existing work on DNN compression under different categories. 
The broad categories of the DNN compression techniques are defined as follows.\n\\begin{figure*}[ht]\n \\centering\n \\includegraphics[scale=1.0]{Figures/block.eps}\n \\caption{Overview of different categories of compression techniques for deep neural network.}\n \\label{category}\n\\end{figure*}", "id": "eb363ec4-5c7b-4833-aea6-b742bc82b88c", "level": "section", "origin_cites_number": 0, "parent_id": "06f26bac-bd30-4095-9869-55d16cc27805", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Categorization of DNN compression techniques" ] ], "subsections": [ "ff23444d-31f4-46a5-938b-e4c0ee20cf59", "7197febd-2856-4a58-9cd6-01373a6e1b2b", "18c8f778-882e-458c-91dd-209582a90b2d", "ee70a175-c646-46f4-a83a-110f193bde05", "70101213-7acc-48cd-9b1e-1c070f133cb7" ], "title": "Categorization of DNN compression techniques" }, { "cite_extract_rate": 0.625, "cites": [ 6313, 688, 6319, 841, 6315, 504, 4354, 4628, 8150, 7634, 8389, 6318, 6317, 6314, 6316 ], "content": "Deep network pruning is one of the popular techniques to reduce the size of a deep learning model by removing inadequate components such as channels, filters, neurons, or layers to produce a lightweight model~. The resultant lightweight model consumes low power, is memory-efficient, and provides faster inference with minimal accuracy compromise. The adequacy of a component relies on the amount of loss incurred when the component is removed from the model~. Sometimes pruning is also said to be a binary criterion that determines the removal or persistence of a component in the DNN. Pruning is performed on a pre-trained model iteratively, such that only inadequate components are pruned from the model. We further categorize the network pruning techniques into four parts, \\textit{i.e.,} channel pruning, filter pruning, connection pruning, and layer pruning. 
It helps in reducing the storage and computation requirements of the DNN model. The detailed overview of existing network pruning techniques~ for the DNN model is discussed in Section~\\ref{np}.", "id": "ff23444d-31f4-46a5-938b-e4c0ee20cf59", "level": "subsection", "origin_cites_number": 24, "parent_id": "eb363ec4-5c7b-4833-aea6-b742bc82b88c", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Categorization of DNN compression techniques" ], [ "subsection", "Network pruning" ] ], "subsections": [], "title": "Network pruning" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 6315, 7634, 6320, 9139 ], "content": "Sparse representation exploits the sparsity present in the weight matrices of the DNN model~. In sparse representation, the weights that are zero or near zero are removed from the weight matrix, which reduces the storage and computation requirements of the DNN model. In other words, links in the network having similar weights are multiplexed: \\textit{multiple links, each storing its own weight,} are replaced with \\textit{a single shared weight over a multiplexed link}. Sparse representation includes low-rank estimates~, quantization, multiplexing, \\textit{etc.} The principal motive behind sparse representation is to condense the weight matrix without losing the performance of the DNN model. In this work, we illustrate the overview of existing approaches to DNN compression that incorporate sparse representation. Further, we categorize DNN compression using sparse representation into three sub-categories, \\textit{i.e.,} quantization, multiplexing, and weight sharing. 
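To make the weight-sharing idea above concrete, the following minimal sketch (our own illustrative code, not taken from any surveyed paper) clusters a layer's weights into a small shared codebook with a simple 1-D k-means, so the layer stores only the codebook plus one small index per weight:

```python
import numpy as np

def share_weights(w, n_clusters=4, n_iter=10):
    """Illustrative weight sharing: quantize every weight to its nearest
    centroid found by a simple 1-D k-means over the flattened weights."""
    flat = w.ravel()
    # Initialize centroids evenly across the observed weight range.
    centroids = np.linspace(flat.min(), flat.max(), n_clusters)
    for _ in range(n_iter):
        idx = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for c in range(n_clusters):
            members = flat[idx == c]
            if members.size:                 # avoid empty-cluster division
                centroids[c] = members.mean()
    idx = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
    return centroids, idx.reshape(w.shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8)).astype(np.float32)
codebook, idx = share_weights(w, n_clusters=4)
w_shared = codebook[idx]   # reconstructed layer: at most 4 distinct values
```

Storing 64 two-bit indices plus a 4-entry codebook in place of 64 full-precision floats is where the saving comes from; practical schemes additionally fine-tune the shared centroids during retraining.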
Section~\\ref{sr} presents a detailed description of the sparse representation techniques in the existing literature~ on DNN compression.", "id": "7197febd-2856-4a58-9cd6-01373a6e1b2b", "level": "subsection", "origin_cites_number": 12, "parent_id": "eb363ec4-5c7b-4833-aea6-b742bc82b88c", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Categorization of DNN compression techniques" ], [ "subsection", "Sparse representation" ] ], "subsections": [], "title": "Sparse representation" }, { "cite_extract_rate": 0.625, "cites": [ 9139, 4628, 4351, 6321, 685 ], "content": "Bits precision in DNN compression reduces the number of bits required for storing the weights in the weight matrices $\\mathbf{W}$. For example, the FLOPs in the DNN model require $32$ bits, which can be replaced with integer operations that require only $8$ bits. Similarly, we can use binary, $6$-bit, or $16$-bit representations to replace $32$-bit FLOPs in the DNN model~. We categorize the existing literature on DNN compression using bits precision into three sub-categories, namely, estimation using integer, low bits representation, and binarization. 
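As a minimal illustration of the integer-estimation idea (our own sketch, assuming a symmetric per-tensor scheme rather than any specific paper's method), 32-bit weights can be mapped to 8-bit integers with a single scale factor:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization: map the largest-magnitude
    weight to 127 and round everything else onto the int8 grid."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for computation or inspection."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.normal(scale=0.05, size=(64,)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)   # 4x smaller storage; error bounded by scale/2
```

Per-channel scales and activation quantization are usually layered on top of this per-tensor sketch in practice.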
Section~\\ref{bp} highlights the overview of different bits precision techniques~ for DNN compression using a limited number of bits.", "id": "18c8f778-882e-458c-91dd-209582a90b2d", "level": "subsection", "origin_cites_number": 8, "parent_id": "eb363ec4-5c7b-4833-aea6-b742bc82b88c", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Categorization of DNN compression techniques" ], [ "subsection", "Bits precision" ] ], "subsections": [], "title": "Bits precision" }, { "cite_extract_rate": 0.8, "cites": [ 6326, 6322, 6325, 8998, 681, 6324, 4508, 6323 ], "content": "In the DNN context, the term knowledge distillation is defined as the process of transferring the generalization ability of the cumbersome model (teacher) to the compact model (student) to improve its performance~. Knowledge distillation provides a mechanism to overcome the accuracy compromise due to DNN compression. Training the student model using knowledge distillation improves its generalization ability such that it can mimic the behaviour of the teacher model when predicting the class label probabilities. In this work, we cover a detailed overview of three categories of knowledge distillation, \\textit{i.e.,} logits transfer, teacher assistant, and domain adaptation. 
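The teacher-to-student transfer can be sketched with the classic softened-logits objective (a generic illustration in the spirit of logits transfer; the temperature T and weight alpha below are illustrative choices, not values from any surveyed paper):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Weighted sum of (i) KL divergence between temperature-softened
    teacher and student distributions and (ii) cross-entropy on hard labels."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1).mean() * T * T
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels]).mean()
    return alpha * soft + (1 - alpha) * hard

rng = np.random.default_rng(2)
teacher = rng.normal(size=(4, 10))
student = rng.normal(size=(4, 10))
labels = np.array([0, 1, 2, 3])
loss = distillation_loss(student, teacher, labels)  # non-negative scalar
```

The soft term vanishes when the student already matches the teacher, so minimizing it pulls the compact model toward the cumbersome model's output distribution.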
Section~\\ref{kd} provides a detailed description of all subcategories of knowledge distillation covered in the existing literature~.", "id": "ee70a175-c646-46f4-a83a-110f193bde05", "level": "subsection", "origin_cites_number": 10, "parent_id": "eb363ec4-5c7b-4833-aea6-b742bc82b88c", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Categorization of DNN compression techniques" ], [ "subsection", "Knowledge distillation" ] ], "subsections": [], "title": "Knowledge distillation" }, { "cite_extract_rate": 0.5370370370370371, "cites": [ 6326, 6325, 688, 6330, 6319, 6324, 6315, 4508, 6328, 1791, 681, 504, 6322, 4354, 4628, 6329, 8998, 8999, 685, 7634, 8389, 6327, 6318, 6316, 6317, 6314, 4351, 6321, 9139 ], "content": "The DNN compression techniques that do not fit into the above four categories are classified as miscellaneous. They include DNN compression techniques that model the DNN in a manner that can be easily fitted on mobile devices. They adopt parallelization of features or task distribution to reduce the storage and computation requirements of the DNN model. 
The existing literature~ on DNN compression under the miscellaneous category is discussed in Section~\\ref{mln}.\n\\begin{table*}[htbp]\n\\centering\n\\caption{Categorization of existing literature on DNN compression techniques.}\n\\label{chal1}\n\\begin{tabular}{|l|ll}\n\\hline\n\\multirow{17}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}DNN compression\\\\ techniques\\end{tabular}}} \n& \\multicolumn{1}{l|}{\\multirow{4}{*}{Network pruning }} \n& \\multicolumn{1}{l|}{Channel pruning ~} \\\\ \\cline{3-3} \n& \\multicolumn{1}{l|}{}& \\multicolumn{1}{l|}{Filter pruning~} \\\\ \\cline{3-3} \n& \\multicolumn{1}{l|}{}& \\multicolumn{1}{l|}{Connection pruning~} \\\\ \\cline{3-3} \n& \\multicolumn{1}{l|}{}& \\multicolumn{1}{l|}{Layer pruning~} \\\\ \\cline{2-3} \n& & \\\\ \\cline{2-3} \n& \\multicolumn{1}{l|}{\\multirow{3}{*}{Sparse representation}}\n& \\multicolumn{1}{l|}{Quantization ~} \\\\ \\cline{3-3} \n& \\multicolumn{1}{l|}{} & \\multicolumn{1}{l|}{Multiplexing ~} \\\\ \\cline{3-3} \n& \\multicolumn{1}{l|}{}& \\multicolumn{1}{l|}{Weight sharing ~} \\\\ \\cline{2-3} \n& \\multicolumn{2}{l}{} \\\\ \\cline{2-3} \n& \\multicolumn{1}{l|}{\\multirow{3}{*}{Bits precision}}\n& \\multicolumn{1}{l|}{Estimation using integer ~} \\\\ \\cline{3-3} \n& \\multicolumn{1}{l|}{} & \\multicolumn{1}{l|}{Low bits representation ~} \\\\ \\cline{3-3} \n& \\multicolumn{1}{l|}{}& \\multicolumn{1}{l|}{Binarization ~} \\\\ \\cline{2-3} \n& \\multicolumn{2}{l}{} \\\\ \\cline{2-3} \n& \\multicolumn{1}{l|}{\\multirow{3}{*}{Knowledge distillation}}\n& \\multicolumn{1}{l|}{Logits transfer ~} \\\\ \\cline{3-3} \n& \\multicolumn{1}{l|}{} & \\multicolumn{1}{l|}{Teaching assistant ~} \\\\ \\cline{3-3} \n& \\multicolumn{1}{l|}{}& \\multicolumn{1}{l|}{Domain adaptation~} \\\\ \\cline{2-3} \n& \\multicolumn{2}{l}{} \\\\ \\cline{2-3} \n& \\multicolumn{1}{l|}{\\multirow{1}{*}{Miscellaneous}}\n& \\multicolumn{1}{l|}{} \n \\\\ \\hline\n\\end{tabular}\n\\end{table*}", "id": "70101213-7acc-48cd-9b1e-1c070f133cb7", 
"level": "subsection", "origin_cites_number": 54, "parent_id": "eb363ec4-5c7b-4833-aea6-b742bc82b88c", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Categorization of DNN compression techniques" ], [ "subsection", "Miscellaneous" ] ], "subsections": [], "title": "Miscellaneous" }, { "cite_extract_rate": 0.48, "cites": [ 688, 6319, 6315, 504, 4354, 4628, 7634, 8389, 6318, 6316, 6317, 6314 ], "content": "\\label{np}\nThis section discusses the network pruning techniques for DNN compression. The key idea is to remove unimportant components such as layers, filters, channels, \\textit{etc.}, from the original DNN model. The remaining components form a compressed DNN model. The compressed model requires fewer processing resources and consumes less storage than the original DNN model. Further, the compressed model is trained on the existing dataset for a colossal number of epochs, which leads to the fine-tuning of the model. The two major research challenges associated with network pruning are mitigating the accuracy compromise due to network compression and reducing the time spent fine-tuning the DNN model~. We divide network pruning into four categories depending upon the existing work, \\textit{i.e.,} Channel pruning~, Filter pruning~, Connection pruning~, and Layer pruning~. Fig.~\\ref{net_pru} illustrates the categories and steps involved in network pruning. Table~\\ref{table2} summarizes the existing literature on network pruning under different categories. 
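Before the per-paper details, the common core of pruning can be illustrated with a short sketch (our own code; the L1-norm ranking mirrors the magnitude criterion used by several filter pruning works, and the 75% ratio is an arbitrary example):

```python
import numpy as np

def prune_filters(conv_w, fraction=0.5):
    """Rank each output filter of a conv layer (out_ch, in_ch, kh, kw)
    by its L1 norm and keep only the strongest (1 - fraction) share."""
    norms = np.abs(conv_w).sum(axis=(1, 2, 3))      # one score per filter
    n_keep = max(1, int(round(conv_w.shape[0] * (1 - fraction))))
    keep = np.sort(np.argsort(norms)[-n_keep:])     # indices of survivors
    return conv_w[keep], keep

rng = np.random.default_rng(3)
w = rng.normal(size=(16, 3, 3, 3))      # 16 filters over 3 input channels
pruned, kept = prune_filters(w, fraction=0.75)
# pruned.shape == (4, 3, 3, 3): 12 of 16 filters removed; the next layer's
# corresponding input channels shrink too, and fine-tuning then recovers accuracy.
```

This is the "remove, then fine-tune" loop described above in its simplest form; the surveyed papers differ mainly in how the importance scores and pruning thresholds are chosen.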
The detailed descriptions of the different network pruning techniques are as follows.\n\\begin{table*}[h]\n\\caption{Summary of network pruning techniques for DNN compression.}\n\\resizebox{1.0\\textwidth}{!}{\n\\begin{tabular}{|c|c|l|l|l|l|l|l|}\n\\hline\n\\multirow{2}{*}{\\textbf{Paper}} & \\multirow{2}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Abbreviated \\\\ name\\end{tabular}}} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{Address the challenge}}} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{Proposed solution}}} & \\multicolumn{3}{c|}{\\textbf{Suitable for}} & \\multirow{2}{*}{\\textbf{Category}} \\\\ \\cline{5-7} \n & & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\textbf{CNN} & \\textbf{FC} & \\textbf{RNN} & \\\\ \\hline\n~ & \\textemdash & To accelerate deep CNN for faster inference & Efficiently removing redundant channels & \\cmark & \\xmark & \\xmark & \\multicolumn{1}{c|}{\\multirow{10}{*}{\\begin{tabular}[c]{@{}c@{}}Channel\\\\ pruning\\end{tabular}}} \\\\ \\cline{1-7}\n~ & \\textemdash & Improving channel pruning scheme & \\begin{tabular}[c]{@{}l@{}}Determine pruning threshold prior to\\\\ actual pruning\\end{tabular} & \\cmark & \\xmark & \\xmark & \\\\ \\cline{1-7}\n~ & EvoNAS & \\begin{tabular}[c]{@{}l@{}}Resolve dependency between channels and \\\\ neurons\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Combinatorial optimization for network \\\\ pruning\\end{tabular} & \\cmark & \\cmark & \\xmark & \\\\ \\cline{1-7}\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}To decrease model-size, runtime, memory, \\\\ and computation\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Network slimming using channel \\\\ level sparsity\\end{tabular} & \\cmark & \\xmark & \\xmark & \\\\ \\cline{1-7}\n~ & VarGNet & To reduce the operation in CNN & \\begin{tabular}[c]{@{}l@{}}Variable group convolution and angular \\\\ distillation loss\\end{tabular} & \\cmark & \\xmark & \\xmark & \\\\ \\cline{1-7}\n~ & MobileNet & To build a light-weight DNN model & 
\\begin{tabular}[c]{@{}l@{}}Deep-wise and $1\\times1$ convolutions\\end{tabular} & \\cmark & \\xmark & \\xmark & \\\\ \\hline\n~ & ThiNet & \\begin{tabular}[c]{@{}l@{}}Efficient framework for accelerating operations \\\\ in CNN\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Discarding unimportant filters using \\\\ statistical information\\end{tabular} & \\cmark & \\xmark & \\xmark & \\multicolumn{1}{c|}{\\multirow{8}{*}{\\begin{tabular}[c]{@{}c@{}}Filter\\\\ pruning\\end{tabular}}} \\\\ \\cline{1-7}\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}To reduce the storage and memory requirement \\\\ during training and inference\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Sparsification in fully connected layer \\\\ and convolutional filters\\end{tabular} & \\cmark & \\cmark & \\xmark & \\\\ \\cline{1-7}\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}Prune parameters that results in minimal\\\\ performance degradation\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Knee-guided evolutionary algorithm\\\\ for optimal filter pruning.\\end{tabular} & \\cmark & \\xmark & \\xmark & \\\\ \\cline{1-7}\n~ & DeepMon & \\begin{tabular}[c]{@{}l@{}}To provide deep learning inference on mobile \\\\ devices\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}An optimization mechanism for \\\\ convolutional operation on mobile\\end{tabular} & \\cmark & \\xmark & \\xmark & \\\\ \\cline{1-7}\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}To speed up the test-time evaluation of the \\\\ large DNN model\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Finds appropriate low-rank approximation \\\\ for convolutional filters\\end{tabular} & \\cmark & \\xmark & \\xmark & \\\\ \\hline\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}To reduce storage and bandwidth of DNN \\\\ model with minimal accuracy compromise\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Quantization using clustering \\\\ technique, and further Huffman coding\\end{tabular} & \\cmark & \\cmark & \\cmark & 
\\multicolumn{1}{c|}{\\multirow{16}{*}{\\begin{tabular}[c]{@{}c@{}}Connection \\\\ pruning\\end{tabular}}} \\\\ \\cline{1-7}\n~ & DeepIoT & \\begin{tabular}[c]{@{}l@{}}To ensure the deployment of deep learning \\\\ models on IoT devices\\end{tabular} & Optimal dropout estimation & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{} \\\\ \\cline{1-7}\n~ & EIE & \\begin{tabular}[c]{@{}l@{}}To perform DNN based inference on the \\\\ compressed network\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Pruning of redundant connections and \\\\ weight sharing among different elements\\end{tabular} & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{} \\\\ \\cline{1-7}\n~ & MTZ & \\begin{tabular}[c]{@{}l@{}}To induced a minimal change in error upon \\\\ layer sharing\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Layer-wise neuron sharing and subsequent \\\\ weight update\\end{tabular} & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{} \\\\ \\cline{1-7}\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}To provides a better selection mechanism \\\\ for deployment of DNN model\\end{tabular} & Guided deployment for wiser pruning & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{} \\\\ \\cline{1-7}\n~ & SCNN & \\begin{tabular}[c]{@{}l@{}}To improve performance and provides \\\\ energy efficiency\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Zero-valued weight estimation generated \\\\ from ReLU operation\\end{tabular} & \\cmark & \\xmark & \\xmark & \\multicolumn{1}{c|}{} \\\\ \\cline{1-7}\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}A sparse solution for compressing DNN models\\end{tabular} & Using sparse variational dropout & \\cmark & \\xmark & \\xmark & \\multicolumn{1}{c|}{} \\\\ \\cline{1-7}\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}Hierarchical priority to prune the nodes inside\\\\ the DNN model\\end{tabular} & Bayesian compression for Deep learning & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{} \\\\ \\cline{1-7}\n~ & \\textemdash & 
\\begin{tabular}[c]{@{}l@{}}Pruning method for DNN model by removing \\\\ unimportant connections\\end{tabular} & Using connection pruning threshold & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{} \\\\ \\cline{1-6}\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}Solving irretrievable network damage due to \\\\ incorrect pruning\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Splicing into the network compression to \\\\ avoid incorrect pruning\\end{tabular} & \\cmark & \\xmark & \\xmark & \\multicolumn{1}{c|}{} \\\\ \\hline\n~ & FINN & \\begin{tabular}[c]{@{}l@{}}To build a fast and flexible heterogeneous \\\\ DNN architecture\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Using a set of optimization for mapping \\\\ binarized neural networks to hardware\\end{tabular} & \\cmark & \\cmark & \\xmark & \\multicolumn{1}{c|}{\\multirow{5}{*}{\\begin{tabular}[c]{@{}c@{}}Layer\\\\ pruning\\end{tabular}}} \\\\ \\cline{1-7}\n & \\textemdash & \\begin{tabular}[c]{@{}l@{}}Pruning method for deploying DNN model\\\\ on embedded devices\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Singular value decomposition based \\\\ factorization method\\end{tabular} & \\xmark & \\xmark & \\cmark & \\multicolumn{1}{c|}{} \\\\ \\cline{1-7}\n~ & MTZ & \\begin{tabular}[c]{@{}l@{}}Minimal change in error upon layer sharing\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Layer pruning and weight update\\end{tabular} & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{} \\\\ \\hline\n\\end{tabular}\n}\n\\label{table2}\n\\end{table*}\n\\begin{figure}[h]\n \\centering\n \\includegraphics[scale=1.0]{Figures/network_p.eps}\n \\caption{Categories and steps involved in network pruning.}\n \\label{net_pru}\n\\end{figure}", "id": "32094ad6-d342-49fc-a67e-74352e80d780", "level": "section", "origin_cites_number": 25, "parent_id": "06f26bac-bd30-4095-9869-55d16cc27805", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Network 
Pruning" ] ], "subsections": [ "af442160-f337-4d9a-bafa-194be5850a46", "bb6dd69c-7152-412f-8a7c-fea179dfccc6", "6d7c6593-5956-45a5-9b00-96a8116e02cd", "1ab40d8e-8be5-4bf4-96cb-8e36acbf7160" ], "title": "Network Pruning" }, { "cite_extract_rate": 0.42857142857142805, "cites": [ 6318, 688, 504 ], "content": "\\label{cp}\nIn this section, we provide an overview of different channel pruning techniques discussed in the existing literature. Channel pruning is a concept of reducing the number of channels in the input supplied to the intermediate layers of the DNN model~. The data supplied to the DNN model are initially channelized to form an appropriate input, \\textit{e.g.,} an image has $3$ channels (RGB). The output of each layer of the DNN model also contains different channels that improve the model performance but increase the computation and storage. Therefore, it is beneficial to remove unimportant channels to reduce the computation and storage requirements. Authors in~ proposed an inference-time approach to perform channel pruning by effectively removing the redundant channels from the CNN. They claim to accelerate the deep CNN using two sequential steps, including regression-based channel selection followed by estimation of the least reconstruction error. The authors validated their proposed mechanism for accelerating deep CNN operations with a compact architecture by performing experiments on different datasets. The reconstruction error in this approach is formulated as follows. \nLet $k$ represent the number of channels in a feature map, which we have to prune to reduce the number of channels to $k{'}$. Let a convolutional filter $\\mathbf{F}$ of size $o\\times k \\times f_h \\times f_w$ be applied to an input $\\mathbf{I}$ of dimension $N\\times k \\times f_h \\times f_w$ to produce the output matrix $\\mathbf{O}$. Here, $N$ denotes the number of samples in dataset $\\mathcal{D}$ and $f_h\\times f_w$ is the dimension of the convolutional kernel. 
The minimization problem is formed as\n\\begin{align}\\nonumber\n \\arg \\min_{\\delta, \\mathbf{F}} \\frac{1}{N}\\Big\\| \\mathbf{O} - \\sum_{j=1}^{k}\\delta_j \\mathbf{F}_j\\mathbf{I}_j\\Big\\|_F^2,\\\\\n \\text{subject to, } ||\\delta||_0 \\leq k{'},\n\\end{align}\nwhere, $||\\cdot||_F$ denotes the Frobenius norm. $\\mathbf{I}_j$ represents the data input for channel $j, 1\\leq j \\leq k$, and $\\mathbf{F}_j$ is the $j^{th}$ slice of $\\mathbf{F}$. $\\delta$ is a coefficient vector of length $k$ used in channel selection, and $\\delta_j$ is the $j^{th}$ entry of $\\delta$. Liu \\textit{et al.}~ proposed an improved pruning scheme, where, at the initial step, the pruning threshold is determined by analyzing the network connections. Further, those connections and channels whose weights fall below the pruning threshold are removed. The threshold estimation in the proposed scheme preserves the accuracy of the pruned network to some extent. In~ authors claim that the prior assumption that channels and neurons in the deep neural network are independent is wrong, and that there are certain dependencies among channels and neurons. The authors formed a combinatorial optimization problem for network pruning, which is solved using an evolutionary algorithm and an attention mechanism. They named the proposed mechanism Evolutionary NetArchitecture Search (EvoNAS). Authors in~ proposed a novel learning scheme for CNN that decreases model size, runtime memory, and computing operations. They claim to retain the accuracy of the compressed model. The proposed scheme enforces channel-level sparsity in the network and directly applies to CNN architectures; therefore, it introduces minimal overhead during the training of the deep learning model. They named their approach network slimming, as it compresses a large model into a compact model without significant accuracy compromise. 
The main idea behind this approach is to introduce a scaling factor ($\\Delta$) for each channel in the deep learning model. Next, the model and $\\Delta$ are jointly trained using sparsity regularization. The training objective is defined as\n\\begin{equation}\n Loss = \\sum_{\\mathbf{x},y} \\mathcal{L}(f(\\mathbf{x},\\mathbf{W}),y) + \\mu \\sum_{\\Delta} g(\\Delta),\n\\end{equation}\nwhere, $(\\mathbf{x}, y)$ denotes the training input and output, $\\mathbf{W}$ represents the weight matrix, and $\\mathcal{L}(\\cdot)$ is the training loss imposed by the model. $g(\\cdot)$ denotes the penalty function and $\\mu$ balances the two terms. \nYan \\textit{et al.}~ proposed an approach (named VarGNet) that reduces the operations in the CNN model using variable group convolution. The use of variable group convolution helps in solving the conflict between small computational cost and unbalanced computational intensity. Further, they adopt an angular distillation loss to improve the performance of the generated lightweight model. Authors in~ proposed a DNN compression technique based on a streamlined architecture that uses depth-wise separable convolutions to build a lightweight model. The technique, named MobileNets, exposes two hyperparameters, \\textit{i.e.,} the width multiplier and the resolution multiplier. It allows the model developer to choose a small network that matches the resource restrictions (latency, size) of different applications. In MobileNets, the depthwise convolution applies a single filter to each input channel. Further, a $1 \\times 1$ convolution is performed to combine the inputs into a new set of outputs. MobileNets achieve faster inference by exploiting the sparsity of the dataset. The role of the width multiplier $wd$ is to thin a network uniformly at each layer. For a given layer, the number of input channels $M$ becomes $(wd)M$. 
The resolution multiplier $rm$ is applied to the input image, and the same multiplier subsequently reduces the internal representation of every layer.\n\\noindent $\\bullet$ \\textbf{Remarks:} The existing literature on channel pruning, EvoNAS~, VarGNet~, MobileNets~, has covered different mechanisms of channel pruning. However, none of the existing work helps in diminishing the accuracy compromise due to channel pruning. Apart from the accuracy compromise, the existing literature misses the effect of eliminating a channel on the performance of different layers of the DNN model, since the removal of a single channel in one layer can also affect other layers in the DNN model.", "id": "af442160-f337-4d9a-bafa-194be5850a46", "level": "subsection", "origin_cites_number": 7, "parent_id": "32094ad6-d342-49fc-a67e-74352e80d780", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Network Pruning" ], [ "subsection", "Channel pruning" ] ], "subsections": [], "title": "Channel pruning" }, { "cite_extract_rate": 0.5, "cites": [ 6319, 6314, 6331 ], "content": "\\label{fp}\nThe convolutional operation in the CNN model incorporates a large number of filters to improve its performance in classification, regression, and prediction tasks. It is assumed that increasing the number of filters improves the distinguishable characteristics of the spatial features generated from the CNN model~. However, this increase in the number of convolutional filters leads to a significant increase in the number of floating-point operations in the DNN model. Therefore, the elimination of unimportant filters plays a decisive role in reducing the computational requirements of the DNN model. An example scenario of filter level pruning is illustrated in Fig.~\\ref{filter_c}. 
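The filter-level pruning idea just described can be illustrated with a common magnitude heuristic: score each filter by the L1 norm of its weights and drop the lowest-scoring ones. This is a minimal sketch; the array shapes and the keep-ratio are illustrative assumptions, not taken from any of the surveyed papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy convolutional layer: 8 filters over 3 input channels with 3x3 kernels
# (shapes are illustrative assumptions).
filters = rng.normal(size=(8, 3, 3, 3))

# Score each filter by the L1 norm of its weights; low-norm filters are
# treated as unimportant.
scores = np.abs(filters).sum(axis=(1, 2, 3))

# Keep the 4 highest-scoring filters and discard the rest.
keep = np.sort(np.argsort(scores)[-4:])
pruned = filters[keep]

print(pruned.shape)  # (4, 3, 3, 3): half the filters, roughly half the layer's FLOPs
```

Every discarded filter also removes the corresponding channel from the next layer's input, which is why filter pruning and channel pruning are closely related.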
The existing work on filter pruning~ is discussed as follows.\n\\begin{figure}[h]\n \\centering\n \\includegraphics[scale=1.0]{Figures/filter.eps}\n \\caption{An example scenario of the filter level pruning~.}\n \\label{filter_c}\n\\end{figure}\nAuthors in~ proposed an efficient framework, namely ThiNet, for accelerating the operation of the CNN model using compression in both training and testing phases. They have incorporated filter-level pruning, where an unimportant filter is discarded based on the statistical information computed from the next layer. The authors established filter level pruning as an optimization problem to determine the filters that are to be pruned. ThiNet uses a greedy algorithm for solving the optimization problem, which is defined as follows \n\\begin{align}\n \\arg \\min_{E} \\sum_{i=1}^{N}\\Big(\\hat{y}_i-\\sum_{j\\in E}\\hat{\\mathbf{x}}_{ij}\\Big)^2,\\\\ \n \\text{subject to, } |E|=k\\times c_{rate},\\\\\n E \\subset \\{1,2,\\cdots,k\\},\n\\end{align}\nwhere, $N$ is the number of training examples ($\\hat{\\mathbf{x}}_i,\\hat{y}_i$), $|E|$ is the number of elements in the subset $E$, $k$ denotes the number of channels in the CNN model, and $c_{rate}$ is the compression rate that determines the number of channels retained after compression.\nBhattacharya \\textit{et al.}~ proposed a DNN compression technique that sparsifies the fully connected layers and convolutional filters. The main motive was to reduce the storage requirement of the device and processor resources in both training and inference phases. The basic operational principle of this work is the hypothesis that the computational and space complexity of a DNN model can be considerably reduced using sparsification of layers and convolutional filters. \nIn~ authors defined filter pruning as a multi-objective optimization problem followed by a knee-guided evolutionary algorithm for its solution. 
They established a trade-off between the number of parameters and the performance compromise due to model compression. The basic principle of this mechanism is to prune parameters whose removal results in little performance degradation. To estimate the importance of a parameter, they used the criterion of performance loss to identify redundancy. To achieve a compressed model with a small size, the number of filters preserved must be as small as possible while still achieving considerable accuracy. The problem can be treated as finding a compact binary representation that prunes the maximum number of filters while preserving the performance to a greater extent. This pruning mechanism falls under the group of filter pruning in the CNN, as it simultaneously offers the advantages of parameter reduction and reduced computational overhead. This mechanism can be further improved by incorporating low-rank estimation. Authors in~ proposed a mechanism named DeepMon to provide deep learning inference on mobile devices. They claim to run the inference in limited time and provide energy efficiency using the Graphics Processing Unit (GPU) on the mobile device. The authors proposed an optimization mechanism for processing convolutional operations on the mobile GPU. The mechanism utilizes the internal processing structure of the CNN, incorporating filters and the number of connections, to reuse the results, thus removing unimportant filters and connections to provide faster inference.\nAuthors in~ speed up the test-time evaluation of large DNN models designed for object recognition tasks. The authors exploited the redundancy present within the convolutional filters to derive approximations that significantly reduce the required computation. 
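The redundancy-exploiting low-rank idea can be sketched with a truncated SVD of a weight matrix: keep only the top singular components and store two thin factors instead of the full matrix. The matrix size, target rank, and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 64x64 weight matrix with (approximately) rank-4 structure plus small noise,
# imitating the redundancy found in trained filters.
W = rng.normal(size=(64, 4)) @ rng.normal(size=(4, 64)) + 0.01 * rng.normal(size=(64, 64))

# Truncated SVD keeps only the top-r singular components.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 4
W_low = (U[:, :r] * s[:r]) @ Vt[:r]

# 64*64 = 4096 stored values shrink to r*(64 + 64 + 1) = 516, yet the
# approximation stays close because the discarded components carry
# almost no energy.
rel_err = np.linalg.norm(W - W_low) / np.linalg.norm(W)
print(rel_err < 0.05)  # True
```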
They compress each convolutional layer by finding an appropriate low-rank approximation and then fine-tune until the prediction performance is restored.\n\\noindent $\\bullet$ \\textbf{Remarks:} The filter pruning techniques presented in the existing literature~ have successfully reduced the number of floating-point operations in the CNN. However, the fully connected layers and recurrent layers also contribute a major portion of the floating-point operations in the DNN model, which should not be ignored during DNN compression. Additionally, the existing work cannot be employed on DNN models that have a large number of layers with minimal filters in each convolutional layer.", "id": "bb6dd69c-7152-412f-8a7c-fea179dfccc6", "level": "subsection", "origin_cites_number": 6, "parent_id": "32094ad6-d342-49fc-a67e-74352e80d780", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Network Pruning" ], [ "subsection", "Filter pruning" ] ], "subsections": [], "title": "Filter pruning" }, { "cite_extract_rate": 0.461538461538461, "cites": [ 4354, 4628, 6317, 7634, 6316, 6315 ], "content": "The number of input and output connections to a layer of the DNN model determines the number of parameters. These parameters can be used to estimate the storage and computation requirements of the DNN model~. Since the DNN model requires a large number of parameters in its operations, it is convenient to reduce the parameters by eliminating unimportant connections from different layers in the DNN model~. The existing studies on connection pruning~ have attempted to remove the unimportant connections from the DNN models. Fig.~\\ref{connection_p} illustrates an example scenario of connection pruning in the DNN model. 
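The connection-elimination idea can be sketched as masking out the low-magnitude entries of a layer's weight matrix; the matrix size and the 80% pruning ratio below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(128, 128))   # weights of one fully connected layer

# Treat the smallest 80% of weights (by magnitude) as unimportant connections.
threshold = np.quantile(np.abs(W), 0.8)
mask = np.abs(W) >= threshold
W_pruned = W * mask

# The pruned matrix keeps only ~20% of the connections; stored sparsely,
# this directly translates into storage and computation savings.
sparsity = 1.0 - mask.mean()
print(round(float(sparsity), 2))  # 0.8
```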
The overview of different connection pruning techniques discussed in the existing literature is as follows.\n\\begin{figure}[h]\n \\centering\n \\includegraphics[scale=1.00]{Figures/connection.eps}\n \\caption{An illustration of connection pruning in the DNN model.}\n \\label{connection_p}\n\\end{figure}\nAuthors in~ proposed a connection pruning technique that first prunes the DNN model by learning only the important connections using dropout. Next, the learned weights are quantized using a clustering technique, and Huffman coding is further applied to reduce the storage requirement. With this technique, the authors try to fit the model into cache memory rather than main memory. In this way, they encourage running applications on tiny computing devices with limited storage and energy. Here, the goal is to reduce the storage and bandwidth of the DNN model with minimal accuracy compromise. Execution on tiny devices also brings benefits such as better privacy, lower network bandwidth usage, and real-time output. \nIn~ authors proposed an approach, named DeepIoT, that helps in the deployment of deep learning models on IoT devices. The proposed approach is capable of performing compression on commonly used deep learning structures, including CNNs, fully connected layers, and recurrent neural networks. DeepIoT incorporates dropout techniques to remove unimportant connections from the network. Here, the chosen dropout is not a fixed value but is optimally determined for generating optimal compressed networks. In DeepIoT, small dense matrices are obtained by performing compression of the different structures in the network. These dense matrices contain a minimal number of non-redundant elements, such as the input and output dimensions of a layer, filters, \\textit{etc}. The process of obtaining dense matrices must safeguard the network from higher accuracy compromise.\nHan \\textit{et. 
al.}~ proposed an Energy-efficient Inference Engine (EIE) to perform DNN-based inference on the compressed network obtained after pruning redundant connections and sharing weights among different elements. The EIE accelerates the sparse matrix multiplication by adopting weight sharing without losing the accuracy of the model. In EIE, the processing is performed using an array of processing elements. Each element holds a partition of the network in SRAM and simultaneously performs the computation of its respective part. In~ authors proposed a Multi-Tasking Zipping (MTZ) framework that automatically merges multiple correlated DNN models to perform cross-model compression. In MTZ, the compression is performed using layer-wise neuron sharing and subsequent weight updates. Here, the authors induced a minimal change in error upon layer sharing. \nAuthors in~ developed a quantitative characterization approach for a deep learning model to facilitate its deployment on embedded devices. It not only provides a better selection mechanism for the deployment of a DNN model on the embedded device but also guides the development of compression techniques through wiser connection pruning. In~ authors proposed estimating the zero-valued weights that the ReLU operation generates in a network during training. The authors named the approach Sparse CNN (SCNN). It is a DNN compression architecture that improves performance and provides energy efficiency. SCNN uses both weight and activation sparsity, which enhances the power of the compressed DNN. SCNN also employs an effective dataflow mechanism inside the DNN that maintains the sparse weights and activations during DNN compression. It further reduces unimportant data transmission and storage requirements.\nAuthors in~ proposed sparse variational dropout, which is an extension of variational dropout. Here, all possible values of dropout are considered, which leads to a sparse solution for compressing the DNN models. 
The authors provided an approximation technique that incorporates the KL-divergence in variational dropout. They experimentally demonstrate that higher sparsity is achieved in CNN and fully connected layers. Let $N$ denote the number of instances in dataset $\\mathcal{D}$, whose $i^{th}$ instance is denoted as $(\\mathbf{x}_i,y_i)_{i=1}^N$. The goal of a learning mechanism is to estimate the parameter $\\theta$ of a model $p(y/\\mathbf{x},\\theta)$. When we incorporate Bayesian learning after the arrival of data $\\mathcal{D}$, the prior distribution is transformed into a posterior distribution, $p(\\theta/\\mathcal{D})=p(\\mathcal{D}/\\theta)p(\\theta)/p(\\mathcal{D})$. In variational inference the posterior distribution $p(\\theta/\\mathcal{D})$ is approximated using a distribution $r_{\\phi}(\\theta)$. The authors use the KL divergence $D_{KL}(r_{\\phi}(\\theta)||p(\\theta/\\mathcal{D}))$ to estimate the quality of the approximation. The optimal value of parameter $\\phi$ is obtained as follows\n\\begin{align}\\nonumber\n \\mathcal{L}(\\phi) = \\mathcal{L}_D(\\phi) - D_{KL}(r_{\\phi}(\\theta)||p(\\theta)),\\\\\n\\text{where, } \\mathcal{L}_D(\\phi) = \\sum_{i=1}^N \\mathbb{E}_{r_{\\phi}(\\theta)}[\\log p(y_i/\\mathbf{x}_i,\\theta)].\n\\end{align}\nLouizos \\textit{et al.}~ proposed a Bayesian compression for deep learning. They introduced a hierarchical prior to prune the nodes inside the DNN model instead of individual weights. The authors incorporated posterior uncertainties for determining the fixed-point precision used to encode the weights. Further, the authors induce sparsity over hidden neurons rather than individual weights in order to prune entire neurons. In~ authors proposed a network pruning method for the DNN model that removes unimportant connections. The proposed pruning method first identifies the unimportant connections having weights less than a given threshold. Next, these connections are removed from the DNN model. 
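The two steps just described, identifying connections below a threshold and removing them, can be sketched on a toy ReLU network. The layer sizes, threshold, and noise model are illustrative assumptions; the point is that zeroing near-zero weights barely changes the output.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy two-layer ReLU network whose "trained" weights are mostly near zero.
W1 = rng.normal(size=(16, 32)) * (rng.random((16, 32)) < 0.3)
W1 = W1 + 1e-4 * rng.normal(size=(16, 32))        # tiny residual values
W2 = rng.normal(size=(32, 4)) * (rng.random((32, 4)) < 0.3)
W2 = W2 + 1e-4 * rng.normal(size=(32, 4))

def forward(x, w1, w2):
    return np.maximum(x @ w1, 0.0) @ w2

x = rng.normal(size=(8, 16))
before = forward(x, W1, W2)

# Step 1: identify connections with |weight| below the threshold.
# Step 2: remove them by zeroing; the fine-tuning step from the text follows.
tau = 1e-3
W1p = W1 * (np.abs(W1) >= tau)
W2p = W2 * (np.abs(W2) >= tau)
after = forward(x, W1p, W2p)

print(float(np.mean(W1p == 0)) > 0.5)              # most connections removed
print(float(np.abs(before - after).max()) < 0.05)  # output barely changes
```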
Further, fine-tuning is performed with the remaining connections.\nAuthors in~ proposed a network compression technique to perform network pruning. The pruning involves deleting unimportant parameters and retaining the remaining ones. The authors incorporated splicing into the network compression to avoid incorrect pruning. There are two major issues related to network interconnections in the DNN model. The first issue is irretrievable network damage due to incorrect pruning, and the second is the inconsistency of the DNN model due to an inefficient machine. The authors claim to solve both shortcomings of network pruning through continuous pruning and splicing.\n\\noindent $\\bullet$ \\textbf{Remarks:} The connection pruning technique provides a higher order of compression of the DNN model in contrast with the channel pruning and filter pruning discussed in Sections~\\ref{cp} and \\ref{fp}, respectively. The existing studies, DeepIoT~, EIE~, MTZ~, SCNN~, have emphasized connection pruning to perform DNN compression. However, a major concern of connection pruning techniques, the elimination of important connections during DNN compression, needs more emphasis, such as the pruning and splicing discussed in~. Additionally, connection pruning suffers from higher accuracy compromise when the network is dynamically built and rebuilt on RCDs.", "id": "6d7c6593-5956-45a5-9b00-96a8116e02cd", "level": "subsection", "origin_cites_number": 13, "parent_id": "32094ad6-d342-49fc-a67e-74352e80d780", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Network Pruning" ], [ "subsection", "Connection pruning" ] ], "subsections": [], "title": "Connection pruning" }, { "cite_extract_rate": 0.5, "cites": [ 6315, 8389 ], "content": "The last category of the network pruning techniques is layer pruning, where some selected layers of the network are removed to compress the DNN model. 
Layer pruning is highly utilized for deploying DNN models on tiny computing devices, where we need ultra-high compression of the DNN model~. The primary issue with layer pruning is the loss of the semantic structure of the DNN model, which generates low-quality features. These low-quality features result in performance inefficiency. The prior studies on layer pruning~ for DNN compression are elaborated as follows.\nAuthors in~ presented a framework, named FINN, that builds a fast and flexible heterogeneous architecture. They utilized a set of optimizations for mapping binarized neural networks to the hardware. The authors implemented different DNN components, including convolutional, pooling, and fully connected layers. The implementation was conducted in such a manner that it meets the performance demands of the users. They claim to perform millions of classifications per second with microsecond latency on embedded devices. In~ authors compared the performance of different classification approaches and proposed a pruning method for deploying DNN models on embedded devices. They have taken three embedded devices, including a smartphone, a smartwatch, and a Raspberry Pi, for deploying the compressed LSTM model. They claim to achieve significant accuracy while performing compression up to $25\\%$.\nThe authors used the Singular Value Decomposition (SVD) based factorization method that allows the decomposition of the weight matrix ($\\mathbf{W}$) into three sub-matrices, \\textit{i.e.,} $\\mathbf{A}$, $\\mathbf{E}$, and $\\mathbf{B}$ of dimensions $N\\times l$, $l\\times l$, and $l\\times M$, respectively. Here, $N$ denotes the number of instances in dataset $\\mathcal{D}$ having sample length of size $M$. $l$ denotes the number of class labels in the dataset and matrix $\\mathbf{E}$ is a diagonal matrix. 
The decomposition of the weight matrix is represented as \n\\begin{align}\n \\mathbf{W}_{N\\times M} = \\mathbf{A}_{N \\times l} \\mathbf{E}_{l \\times l} \\mathbf{B}_{l\\times M}.\n\\end{align}\nThe weight factorization helps in achieving both a storage gain $\\mathscr{S}_g$ and a computation gain $\\mathscr{C}_g$, defined as follows\n\\begin{align}\n \\mathscr{S}_g = \\frac{N \\times M}{N\\times l + l^2 + l \\times M}.\\\\\n \\mathscr{C}_g = \\frac{N^2 \\times M}{N^2\\times l + l^3 + l^2\\times M}.\n\\end{align}\n\\noindent $\\bullet$ \\textbf{Remarks:} The existing literature on layer pruning, FINN~, has reduced both the storage and computation requirements of the DNN model by providing ultra high-level pruning. However, layer pruning results in higher accuracy compromise due to structural deterioration of the DNN model, which should be mitigated to enhance the utility of layer pruning.", "id": "1ab40d8e-8be5-4bf4-96cb-8e36acbf7160", "level": "subsection", "origin_cites_number": 4, "parent_id": "32094ad6-d342-49fc-a67e-74352e80d780", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Network Pruning" ], [ "subsection", "Layer pruning" ] ], "subsections": [], "title": "Layer pruning" }, { "cite_extract_rate": 0.25, "cites": [ 6315, 7634, 9139 ], "content": "\\label{sr}\nIn the previous section, we covered the techniques for DNN compression that prune unimportant components, including layers, filters, and channels. This section, on the other hand, gives an overview of the DNN compression techniques that preserve the overall structure of the DNN model. Here, the sparsity in the representation of the DNN model is exploited to reduce both the storage and processing requirements of the DNN model~. The sparsity in the DNN model persists in the weight matrices due to the following two reasons\n\\begin{enumerate}\n \\item The value of the stored weights is zero or near zero. 
It could be beneficial to remove these weights to reduce the computation and storage requirements.\n \\item The values of most stored weights are alike, which makes it convenient to replace multiple weights, each having a single connection, with a single weight having multiplexed connections.\n\\end{enumerate}\nThe existing literature on sparse representation~ has incorporated the above two reasons for compressing the DNN models. We categorize the existing literature into three parts, \\textit{i.e.,} Quantization ~, Multiplexing ~, and Weight sharing ~, as illustrated in Table~\\ref{table3}. The overview of the existing literature on these categories is as follows.\n\\begin{table*}[h]\n\\caption{Illustration of DNN compression techniques that incorporate sparse representation.}\n\\resizebox{1.0\\textwidth}{!}{\n\\begin{tabular}{|c|c|l|l|l|l|l|l|}\n\\hline\n\\multirow{2}{*}{\\textbf{Paper}} & \\multirow{2}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Abbreviated \\\\ name\\end{tabular}}} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{Address the challenge}}} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{Proposed solution}}} & \\multicolumn{3}{c|}{\\textbf{Suitable for}} & \\multirow{2}{*}{\\textbf{Category}} \\\\ \\cline{5-7} \n & & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\textbf{CNN} & \\textbf{FC} & \\textbf{RNN} & \\\\ \\hline\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}To compress DNN by reducing the bits\\\\ required for weight matrices representation\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Weight quantization to reduce the \\\\ storage requirement of DNN model\\end{tabular} & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{\\multirow{7}{*}{Quantization}} \\\\ \\cline{1-7}\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}To provide a better selection mechanism \\\\ for deployment of DNN model\\end{tabular} & Guided deployment for wiser pruning & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{} \\\\ \\cline{1-7}\n & \\textemdash & 
\\begin{tabular}[c]{@{}l@{}}Pruning method for deploying DNN model\\\\ on embedded devices\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Singular value decomposition based \\\\ factorization method\\end{tabular} & \\xmark & \\xmark & \\cmark & \\multicolumn{1}{c|}{} \\\\ \\cline{1-7}\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}Inference is performed using integer\\\\ arithmetic\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Quantization technique is an affine \\\\ mapping\\end{tabular} & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{} \\\\ \\hline\n~ & EIE & \\begin{tabular}[c]{@{}l@{}}To perform DNN based inference on the \\\\ compressed network\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Pruning of redundant connections and \\\\ weight sharing among different elements\\end{tabular} & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{} \\\\ \\cline{1-7}\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}To jointly determine the model that should \\\\be called to perform inference\\end{tabular} & Developed lightweight neural multiplexer & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{\\multirow{3}{*}{Multiplexing}} \\\\ \\cline{1-7}\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}Multiplexing different recognition and \\\\prediction task using a single backbone network\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Sequential steps of training a CNN based \\\\ backbone network followed by branches\\end{tabular} & \\cmark & \\xmark & \\xmark & \\multicolumn{1}{c|}{} \\\\ \\cline{1-8}\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}To identify the shared weights of each layer in \\\\ trained network\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}k-means clustering technique is used where,\\\\ the same cluster must share the same weights\\end{tabular} & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{\\multirow{9}{*}{\\begin{tabular}[c]{@{}c@{}}Weight\\\\ sharing\\end{tabular}}} \\\\ \\cline{1-7}\n~ & EIE & \\begin{tabular}[c]{@{}l@{}}To perform DNN based 
inference on the \\\\ compressed network\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Pruning of redundant connections and \\\\ weight sharing among different elements\\end{tabular} & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{} \\\\ \\cline{1-7}\n~ & MTZ & \\begin{tabular}[c]{@{}l@{}}Minimal change in error upon layer sharing\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Layer pruning and weight update\\end{tabular} & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{}\n\\\\ \\cline{1-7}\n~ & \\textemdash & Data sharing during DNN model training & \\begin{tabular}[c]{@{}l@{}}Uses max-margin approach that extracts most \\\\ identifiable training data.\\end{tabular} & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{} \\\\ \\cline{1-7}\n~ & \\textemdash & DNN based modeling framework for RCDs & \\begin{tabular}[c]{@{}l@{}}Framework follows a multitasking learning\\\\ principle for training shared DNN model\\end{tabular} & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{} \\\\ \\hline\n\\end{tabular}\n}\n\\label{table3}\n\\end{table*}", "id": "dd035969-ad6f-463a-b04a-bccc9bef6cde", "level": "section", "origin_cites_number": 12, "parent_id": "06f26bac-bd30-4095-9869-55d16cc27805", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Sparse representation" ] ], "subsections": [ "71ec53d2-5d8e-458a-b31a-f16b29f5f57d", "9cc50350-027e-4da5-b01f-18fb85d12479", "2e617d96-a0a6-4aec-a60c-b88e1fc81ec1" ], "title": "Sparse representation" }, { "cite_extract_rate": 0.25, "cites": [ 9139 ], "content": "This section covers the sparse representation technique that involves weight quantization in the DNN model~. The weight quantization reduces the storage requirement of the DNN model, along with the computation requirement. Authors in~ proposed a weight quantization technique to compress the deep neural network by curtailing the number of bits required for representing the weight matrices. 
Here, the authors try to reduce the number of weights that must be stored in memory. In doing so, identical weights are eliminated, and multiple connections are drawn from a single remaining weight. Let us consider a scenario where a deep learning model has $4$ neurons at the input and output layers. Next, the weights are quantized into $4$ bins, where all weights in the same bin share the same value. Thus, during the deployment of the DNN model on the embedded device, we only have to store a small index for each weight, as illustrated in part (a) of Fig.~$4$. Similarly, the gradients are also grouped during the backpropagation of the DNN model and the shared weights are updated, as illustrated in part (b) of Fig.~$4$.\n\\begin{figure}\n \\centering\n \\includegraphics[scale=1.00]{Figures/quantization.eps}\n \\caption{Illustration of weight quantization using clustering~. Part (a) Forward propagation and part (b) backward propagation.}\n\\end{figure}\nJacob \\textit{et al.}~ proposed a quantization technique where the inference is performed using integer arithmetic. Integer arithmetic provides higher efficiency than floating-point operations and requires a lower number of bits for representation. The authors further design a training step that limits the accuracy compromise caused by replacing floating-point operations with integer operations. The proposed approach thus solves the tradeoff between on-device latency and the accuracy compromise due to integer operations. The authors used integer arithmetic during inference and floating-point operations during training. The quantization technique is an affine mapping of integers $Q$ to the real numbers $R$, \\textit{i.e.,} of the form\n\\begin{equation}\n R=W(Q-T),\n \\label{affine}\n\\end{equation}\nwhere, Eq.~\\ref{affine} is the quantization scheme having quantization parameters $W$ and $T$. For example, for $8$-bit quantization, $Q$ is represented with $8$ bits. 
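A minimal sketch of the affine mapping $R=W(Q-T)$, choosing the scale and zero point from the weight range and round-tripping through 8-bit integers; the variable names and toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
weights = rng.uniform(-1.0, 1.0, size=1000)   # toy real-valued weights R

# Affine quantization R = W*(Q - T): W is the scale, T the (integer) zero point.
W_scale = (weights.max() - weights.min()) / 255.0
T = np.round(-weights.min() / W_scale)

# Map reals to 8-bit integers Q, then map back.
Q = np.clip(np.round(weights / W_scale + T), 0, 255).astype(np.uint8)
recovered = W_scale * (Q.astype(np.float64) - T)

# The round-trip error is bounded by one quantization step.
max_err = np.abs(weights - recovered).max()
print(max_err <= W_scale)  # True
```

Storing `Q` (one byte per weight) plus the pair `(W_scale, T)` replaces one 32-bit float per weight, which is where the 4x storage reduction comes from.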
The quantization parameter $W$ is an arbitrary positive real number and $T$ is of the same type as the variable $Q$.\n\\noindent $\\bullet$ \\textbf{Remarks:} The existing literature~ discusses quantization techniques for compressing the DNN model. These techniques achieve model reduction by providing optimal arrangements of the weight matrices. However, the negative consequences of weight quantization and the complexity of its estimation are missing from the existing work.", "id": "71ec53d2-5d8e-458a-b31a-f16b29f5f57d", "level": "subsection", "origin_cites_number": 4, "parent_id": "dd035969-ad6f-463a-b04a-bccc9bef6cde", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Sparse representation" ], [ "subsection", "Quantization" ] ], "subsections": [], "title": "Quantization" }, { "cite_extract_rate": 0.25, "cites": [ 7634 ], "content": "Multiplexing plays a vital role in reducing the storage requirement of the DNN model. Multiplexing is highly effective when weight matrices have similar weight values~. These weights are replaced with a single weight with multiplexed connections. The overview of the existing literature on multiplexing for DNN compression~ is as follows. Authors in~ proposed a lightweight neural multiplexer. The inputs to the multiplexer are the raw data and the resource budget, which jointly determine the model that should be called to perform inference. Lightweight data with a low resource budget are inferred on the mobile device, whereas hard data with a high resource budget are inferred at the Cloud. In the multiplexing approach, multiple models, from small to large, are multiplexed. The proposed approach outputs a binary vector that indicates whether the inference is performed on the mobile device or the Cloud. 
The authors introduced a contrastive loss function ($Loss_{c}$) to train all the models in the multiplexer by adding $Loss_{c}$ to the main loss function (\textit{e.g.,} cross-entropy loss) of each model. For a pair of models ($M_1$ and $M_2$), three cases are possible, \textit{i.e.,} both $M_1$ and $M_2$ predict the correct output, exactly one of $M_1$ and $M_2$ predicts the correct output, or neither of them predicts the correct output. In the third case, $Loss_{c}$ is not applied and only the cross-entropy loss is used. The contrastive loss function ($Loss_{c}$) is defined as
\begin{align}\nonumber
 Loss_{c}(\hat{y},y) =& \sum_{i=1}^{N}\sum_{j=1,i\ne j}^{N} \log \Big(dist(a_i,a_j)\Big)\\ \nonumber
 &\times \Big((\hat{y}_i==y\,\&\,\hat{y}_j==y)\\ \nonumber
 &-(\hat{y}_i!=y\,\&\,\hat{y}_j==y)\\
 &-(\hat{y}_i==y\,\&\,\hat{y}_j!=y)\Big),
\end{align}
where $y$ and $\hat{y}$ are the correct and predicted labels, respectively, and $dist(\cdot)$ denotes the cosine distance function given by
\begin{equation}
 dist(a_1,a_2)=\frac{a_1^Ta_2}{\sqrt{a_1^Ta_1}\times \sqrt{a_2^T a_2}}.
\end{equation}
In~, the authors proposed a multi-branch CNN architecture that multiplexes different recognition and prediction tasks simultaneously over a single backbone network. The proposed mechanism involves sequential steps: first, a CNN-based backbone network is trained, and then the different branches of the network are trained while the backbone network is frozen. The authors showed that the multiplexing approach preserves both the time and energy of the system while achieving significant accuracy on an embedded device.
\noindent $\bullet$ \textbf{Remarks:} The existing literature on multiplexing~ helps DNN compression by reducing storage and computation requirements, and has designed frameworks for multiplexing different DNN models.
However, none of the work tried to multiplex the weights, which can be an effective solution for reducing the storage of the DNN model.", "id": "9cc50350-027e-4da5-b01f-18fb85d12479", "level": "subsection", "origin_cites_number": 4, "parent_id": "dd035969-ad6f-463a-b04a-bccc9bef6cde", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Sparse representation" ], [ "subsection", "Multiplexing" ] ], "subsections": [], "title": "Multiplexing" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 7634, 6315 ], "content": "In this section, we cover the overview of the weight sharing in the DNN model. Weight sharing proves to be the important criteria for reducing both storage and computation requirements of the DNN model~. In weight sharing, inspite of storing multiple weights for the DNN model, it would be convenient to share the pre-existing weights. Here, the weights stored for $(i-1)^{th}$ layers can be used by the $i^{th}$ layer. The detail descriptions of weight sharing are as follows. Authors in~ use k-means clustering technique for identifying the shared weights of each layer in the trained network. All the connections that are in the same cluster must share the same weights. Here, the authors also assumed that the weights are not shared across the layers in the DNN model. They partition $m$ weights in the network, $\\mathbf{W}=\\{w_1, w_2, \\cdots w_m\\}$ into $c$ clusters $K=\\{k_1, k_2,\\cdots k_c$\\}, where, $m>>c$.\n\\begin{equation}\n \\arg \\min_{K} \\sum_{i=1}^{C} \\sum_{w\\in k_i} |w-k_i|^2.\n\\end{equation}\nWu \\textit{et. al}~ emphasized the technique of sharing training process that involves sharing of training data. This sharing of data (or weights) provides a better understanding of the prediction process involve in the DNN model. However, this data sharing during the model training also leads to privacy compromise. 
Therefore, the authors proposed a method that discloses only a few samples of the training data, which are sufficient for a data analyst to gain insight into the DNN model. To reduce the disclosure of training data, the authors used a max-margin approach that extracts the most identifiable training data. These identifiable data contribute significantly to obtaining the decision boundary. 
In~, the authors proposed a deep learning-based modeling and optimization framework for resource-constrained devices. The proposed framework achieves considerable accuracy while consuming limited resources. It follows a multitask learning principle for training a shared DNN model: the hidden layers are shared among the tasks, and after elimination, similar weights are associated with multiple links. The presented framework balances the performance improvement by jointly optimizing the different losses of the multitask framework. In multitask learning, $T_s$ supervised tasks are considered, each with training dataset $\mathcal{D}_i=(\mathbf{x}_j^i,y_j^i)_{j=1}^N$, where $i \in \{1,2, \cdots, T_s\}$. Let $\mathcal{L}(\cdot)$ be the loss incurred during training; the objective of multitask learning is to minimize
\begin{equation}
 \min_{\theta} \sum_{i=1}^{T_s} \frac{1}{N} \sum_{j=1}^{N} \mathcal{L}\big(y_j^{i},\phi_{\theta}(\mathbf{x}_j^{i})\big) + \lambda||\theta||^2,
\end{equation}
where $\phi_{\theta}$ is the shared network with parameters $\theta$ and $\lambda$ is the regularization coefficient.
\noindent $\bullet$ \textbf{Remarks:} The existing literature on weight sharing~ helps in reducing the storage and computation requirements of the DNN model. However, the complexity of weight sharing comes into the picture when the DNN model is represented using minimal weights.
Thus, it could be beneficial to reduce the complexity of weight sharing in the DNN model.", "id": "2e617d96-a0a6-4aec-a60c-b88e1fc81ec1", "level": "subsection", "origin_cites_number": 6, "parent_id": "dd035969-ad6f-463a-b04a-bccc9bef6cde", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Sparse representation" ], [ "subsection", "Weight sharing" ] ], "subsections": [], "title": "Weight sharing" }, { "cite_extract_rate": 0.625, "cites": [ 4628, 685, 4351, 6321, 9139 ], "content": "\\label{bp}\nThis section presents an overview of the DNN compression technique that incorporates bits precision for reducing storage and computation requirements. In bits precision, the number of bits required for representing weight matrices is suppressed for reducing the storage and computation~. For example, we require $32$ bits for storing the weights in the weight matrix in floating-point operations of DNN model. It can be reduced to $8$ bits by performing operations using integers. This transformation from float-to-integer simultaneously reduces storage and computation requirements. However, it comes up with the challenge of conversion complexity from float to integer along with a higher accuracy compromise. The existing literature on bits precision~ tried to tackle out these challenges. Table~\\ref{table4} summarizes different DNN compression techniques using bits precision. 
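The float-to-integer saving described above can be sketched numerically; the affine scheme, array shape, and variable names below are our own illustration, not taken from the cited works:

```python
import numpy as np

# Illustrative float32 weight matrix (shape and values are arbitrary).
rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 256)).astype(np.float32)

# Affine quantization to unsigned 8-bit: r ~ scale * (q - zero_point).
w_min, w_max = float(weights.min()), float(weights.max())
scale = (w_max - w_min) / 255.0
zero_point = int(round(-w_min / scale))
q = np.clip(np.round(weights / scale) + zero_point, 0, 255).astype(np.uint8)

# Dequantize to inspect the reconstruction error against the step size.
recovered = scale * (q.astype(np.float32) - zero_point)
max_err = float(np.abs(recovered - weights).max())

print(weights.nbytes // q.nbytes)  # 4: float32 -> uint8 is a 4x storage saving
print(max_err, scale)              # error is on the order of one quantization step
```

The $4\times$ figure matches the $32$-bit to $8$-bit reduction stated above; the reconstruction error, bounded by roughly the step size `scale`, is the accuracy cost that the training procedures discussed in this section try to compensate for.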
The overview of the bits precision techniques in existing literature are as follows.\n\\begin{table*}[h]\n\\caption{Summary of the DNN compression incorporating bits precision for reducing computation and storage.}\n\\resizebox{1.0\\textwidth}{!}{\n\\begin{tabular}{|c|c|l|l|l|l|l|l|}\n\\hline\n\\multirow{2}{*}{\\textbf{Paper}} & \\multirow{2}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Abbreviated \\\\ name\\end{tabular}}} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{Address the challenge}}} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{Proposed solution}}} & \\multicolumn{3}{c|}{\\textbf{Suitable for}} & \\multirow{2}{*}{\\textbf{Category}} \\\\ \\cline{5-7} \n & & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\textbf{CNN} & \\textbf{FC} & \\textbf{RNN} & \\\\ \\hline\n~ & Neuro.ZERO & \\begin{tabular}[c]{@{}l@{}}Adopt integer arithmetic by replacing \\\\ floating-point operations\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Co-processor architecture for toggling during\\\\ availability and scarcity of resources\\end{tabular} & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Estimation\\\\ using integer\\end{tabular}}} \\\\ \\cline{1-7}\n~ & \\textemdash & To reduce the temporary storage for computation & Quantization mechanism that uses integer & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{} \\\\ \\hline\n & QNN & \\begin{tabular}[c]{@{}l@{}}To design a DNN network that requires very low \\\\ precision weight and activation during inference\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Linear and logarithmic scheme for reducing\\\\ the bits requirements\\end{tabular} & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Low bits\\\\ representation\\end{tabular}} \\\\ \\hline\n~ & BNN & \\begin{tabular}[c]{@{}l@{}}To replace floating-point operations in the DNN\\\\ with binary operations\\end{tabular} & Deterministic and stochastic techniques & \\cmark & \\cmark & \\cmark & 
\\multicolumn{1}{c|}{\\multirow{5}{*}{Binarization}} \\\\ \\cline{1-7}\n~ & FINN & \\begin{tabular}[c]{@{}l@{}}Heterogeneous architecture for achieving high \\\\ order flexibility\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Parametric architecture for dataflow \\\\and optimized method for classification\\end{tabular} & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{} \\\\ \\cline{1-7}\n~ & cDeepArch & \\begin{tabular}[c]{@{}l@{}}Dividing the task into multiple sub-task and \\\\using binarization of weights\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Quantitative measure to estimate the \\\\reduction in resources \\end{tabular} & \\cmark & \\xmark & \\xmark & \\multicolumn{1}{c|}{} \n \\\\ \\hline\n\\end{tabular}\n}\n\\label{table4}\n\\end{table*}", "id": "e76e7226-ba57-47dc-843b-33e8177f17ac", "level": "section", "origin_cites_number": 8, "parent_id": "06f26bac-bd30-4095-9869-55d16cc27805", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Bits precision" ] ], "subsections": [ "8ef3a4b5-45b3-4a9c-9d65-93aaa1756333", "fa3a7f2f-fdcb-45d9-a7ba-89e8b8ec530c", "e38e6ec2-03a9-4de1-821d-817cd20947f6" ], "title": "Bits precision" }, { "cite_extract_rate": 0.5, "cites": [ 9139 ], "content": "This section covers, the most straightforward strategy of reducing computation through bits precision is replacing floating-point operations with the integers. Here, the only complexity is to convert floating values to the integer. The existing work on bits precision using integer~ are elaborated as follows. Authors in~ proposed a mechanism of opportunistically accelerating the inference performance of the DNN model, named Neuro.ZERO. They adopt four accelerating mechanisms, \\textit{i.e.,} extended inference, expedited inference, ensemble inference, and latent training. Further, the authors proposed two algorithms for applying these accelerating mechanisms. 
They adopted integer arithmetic by replacing floating-point operations. The authors emphasized facilitating runtime adaptation, where accuracy at runtime should increase when additional resources become available; they adopt a co-processor architecture for toggling between resource availability and scarcity. Extended inference extends the DNN structure to improve accuracy, using the additional resources of the secondary processor to run the extended part. Next, expedited inference speeds up execution by offloading a portion of the original DNN model. Later, in ensemble inference, multiple DNN models run simultaneously on the primary and secondary processors, and their outputs are combined. Finally, in latent training, the primary model runs as usual, but training on unseen data is performed on the secondary processor.
\begin{figure}[h]
 \centering
 \includegraphics[scale=1.07]{Figures/integer.eps}
 \caption{Process of converting floating point operations into integer operations in DNN model.}
 \label{integer}
\end{figure}
Jacob \textit{et al.}~ proposed a quantization mechanism that uses integer arithmetic instead of floating-point operations. It relies on the facts that multiplications are inefficient when the operands are wide ($32$-bit floating point) and that simply eliminating multiplication leads to performance degradation. Therefore, it is beneficial to adopt integer multiplication in place of floating-point operations to reduce the temporary storage needed for computation. Fig.~\ref{integer} illustrates the steps involved in performing integer operations by replacing all floating-point operations. 
\noindent $\bullet$ \textbf{Remarks:} Estimation using integer values plays a vital role in reducing the number of bits required for storing weights. The prior studies~ have successfully performed DNN operations using integer values.
However, they lack in estimating the amount of memory we can preserve through integer estimation. Additionally, beyond integer, other low bits estimations are also possible, which could be explored.", "id": "8ef3a4b5-45b3-4a9c-9d65-93aaa1756333", "level": "subsection", "origin_cites_number": 2, "parent_id": "e76e7226-ba57-47dc-843b-33e8177f17ac", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Bits precision" ], [ "subsection", "Estimation using integer" ] ], "subsections": [], "title": "Estimation using integer" }, { "cite_extract_rate": 1, "cites": [ 4351 ], "content": "This section extends the concept of the integer representation to reduce the storage and computation of DNN model. It generalizes the concept of bits precision for using any number of bits for representing weights instead of $8$-bits integers only. The concept of Quantized Neural Networks (QNNs) is discussed in~. Here, the authors design a DNN network that requires very low precision (\\textit{e.g.,} $1$ bits, $4$ bits, and so on) weight and activations during inference. While training the quantized weights and activations helps in estimating the gradient. This QNN scheme highly reduces memory requirements, access latency, and replace floating-point operations with bit-wise operations. Therefore, colossal power efficiency is observed. Authors have used two types of quantization schemes for reducing the bits requirements for representing weights and activations, namely, linear and logarithmic. The linear quantization $L_{Q}(\\cdot)$ is defined as \n\\begin{align}\nL_{Q}(r,l_b)=Clip\\Big(round\\Big(\\frac{r}{2^{l_b}-1}\\Big)2^{l_b-1},V_{\\min}, V_{\\max}\\Big),\n\\end{align} \nwhere, $r$ is the real valued number, $l_b$ is the length of bits, and $round(\\cdot)$ is the round-off function. $V_{\\min}$ and $V_{max}$ are the minimum and maximum scale range, respectively. 
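One plausible reading of the linear quantizer above is a uniform quantizer that rounds to the nearest multiple of the step $2^{-(l_b-1)}$ and then clips to $[V_{\min}, V_{\max}]$; the sketch below follows that reading (function and variable names are ours, and the interpretation is an assumption, since the cited scheme admits several parameterizations):

```python
import numpy as np

def linear_quant(r, n_bits, v_min=-1.0, v_max=1.0):
    # Uniform quantizer: snap to multiples of the step 2**-(n_bits-1),
    # then clip to the representable range [v_min, v_max].
    # NOTE: this is our reading of the scheme, not a reference implementation.
    step = 2.0 ** -(n_bits - 1)
    return np.clip(np.round(np.asarray(r) / step) * step, v_min, v_max)

w = np.array([0.37, -0.92, 0.05, 1.4])
print(linear_quant(w, 4))  # step = 0.125; 1.4 saturates at v_max
```

With $4$-bit quantization the step is $0.125$, so every weight is replaced by its nearest representable value, and out-of-range weights saturate at the clip bounds.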
Further, the logarithmic quantization $Log_{Q}(\\cdot)$ is given as follows\n\\begin{align}\nLog_{Q}(r,l_b)=Clip\\Big(\\log ap2 (r), -(2^{l_b-1}-1), 2^{l_b-1}\\Big),\n\\end{align} \nwhere, $\\log ap2(r)$ denotes the log of approximate power of $2$, in other words it uses the logarithm for most significant bits.\n\\noindent $\\bullet$ \\textbf{Remarks:} The low bits representation in~ is an extension of integer estimation and provides the facility of using any number of bits for performing the DNN operations. However, selecting an optimal number of bits for any estimation is a tedious task.", "id": "fa3a7f2f-fdcb-45d9-a7ba-89e8b8ec530c", "level": "subsection", "origin_cites_number": 1, "parent_id": "e76e7226-ba57-47dc-843b-33e8177f17ac", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Bits precision" ], [ "subsection", "Low bits representation" ] ], "subsections": [], "title": "Low bits representation" }, { "cite_extract_rate": 0.5, "cites": [ 685, 6321 ], "content": "In this section, we illustrate an overview of bits precision techniques~ that incorporates binary number for performing the different operation of the DNN model. Binarization refers to the process of converting floating-point operations into binary operations for reducing the storage and processing requirement of the DNN model~. The overview of the different binarization techniques in DNN models is as follows. Authors in~ introduced a training method that incorporates binary activations and weights. The resultant network is called Binary Neural Network (BNN). During the training of the model; the gradient is estimated using binary activations and weights. Binarization helps in reducing storage requirements and provides energy efficiency. Here, the floating-point operations in the DNN model are replaced with binary operations. 
The authors highlighted two binarization techniques, \textit{i.e.,} deterministic and stochastic. In deterministic binarization, the real-valued number $r$ is transformed into a binary value $b$ using the $Sign(\cdot)$ function, which is defined as
\begin{align}
 b=Sign(r)=\left\{\begin{matrix}
+1 & \text{if } r\ge0, \\ 
-1 & \text{otherwise}.
\end{matrix}\right.
\end{align} 
In stochastic binarization, a probability $p=\sigma(r)=\max \big(0,\min\big(1,\frac{r+1}{2}\big)\big)$ (the ``hard sigmoid'') is estimated for converting the real value $r$ to the binary value $b$ as
\begin{align}
 b=\left\{\begin{matrix}
+1 & \text{with probability } p, \\ 
-1 & \text{with probability } 1-p.
\end{matrix}\right.
\end{align} 
Umuroglu \textit{et al.}~ presented a framework for building fast and flexible compressed DNNs, named FINN. It provides a heterogeneous architecture that achieves a high order of flexibility: a parametric dataflow architecture with an optimized method for classification on resource-limited devices. The authors implemented fully connected, convolutional, and pooling layers whose computation is configured as per the user's requirements. FINN produces compressed DNNs with sub-microsecond latency, which enhances their suitability for deployment on embedded devices.
In~, the authors presented a compact DNN architecture, named cDeepArch, in which a task is divided into multiple subtasks to reduce the resource requirements of the DNN model. This task decomposition efficiently utilizes the limited storage and processing capacity during task execution. The authors emphasized that unorganized compression of DNN models results in unreliable and low performance when recognizing multiple tasks. Further, the authors established a quantitative measure to estimate the reduction in resources against the available resources.
cDeepArch incorporates offline training followed by the execution of the task on user smartphone. It also preserves the privacy of sensitive information. Here, to provide privacy of the sensitive information, it is converted into image format for training a DNN model. For reducing the DNN model, cDeepArch adopted layer separation, delayed pooling, integration of activation, and binarization for reducing the computation.\n\\noindent $\\bullet$ \\textbf{Remarks:} The binarization of DNN model presented in the prior studies~ reduce the storage and computations. The proposed binarization approaches achieve a high-order compression but with performance degradation. Thus, it could be beneficial to mitigate this degradation for the successful adaptation of binarization techniques in DNN compression.", "id": "e38e6ec2-03a9-4de1-821d-817cd20947f6", "level": "subsection", "origin_cites_number": 4, "parent_id": "e76e7226-ba57-47dc-843b-33e8177f17ac", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Bits precision" ], [ "subsection", "Binarization" ] ], "subsections": [], "title": "Binarization" }, { "cite_extract_rate": 0.7000000000000001, "cites": [ 6326, 6322, 6325, 8998, 681, 6324, 4508 ], "content": "\\label{kd}\nThis section gives an overview of the DNN compression technique that adopts knowledge distillation~ to improve the performance of the compressed DNN model. The existing literature on knowledge distillation~ is further classified into three parts, \\textit{i.e.,} logits transfer, teacher assistant, and domain adaptation, as illustrated in Table~\\ref{table5}. In logits transfer, the unnormalized output of the original DNN model (teacher) act as a soft target for training the compressed DNN model (student)~. However, sometimes the gap between student and teacher is large, which hinders the appropriate knowledge transmission. 
It could be mitigated by inserting a teacher assistant between the teacher and the student. Further, domain adaptation is discussed, where the generalization ability of the teacher model cannot improve the performance of the student model due to the domain disparity between the training and testing data of the student model.
\begin{table*}[h]
\caption{Illustration of knowledge distillation techniques for improving the performance of compressed DNN model.}
\resizebox{1.0\textwidth}{!}{
\begin{tabular}{|c|c|l|l|l|l|l|l|}
\hline
\multirow{2}{*}{\textbf{Paper}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Abbreviated \\ name\end{tabular}}} & \multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Address the challenge}}} & \multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Proposed solution}}} & \multicolumn{3}{c|}{\textbf{Suitable for}} & \multirow{2}{*}{\textbf{Category}} \\ \cline{5-7} 
 & & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \textbf{CNN} & \textbf{FC} & \textbf{RNN} & \\ \hline
~ & \textemdash & \begin{tabular}[c]{@{}l@{}}Improving performance of compressed \\ DNN model\end{tabular} & \begin{tabular}[c]{@{}l@{}}Knowledge distillation technique \\ for improving performance\end{tabular} & \cmark & \cmark & \cmark & \multicolumn{1}{c|}{\multirow{11}{*}{\begin{tabular}[c]{@{}c@{}}Logits \\ transfer\end{tabular}}} \\ \cline{1-7}
~ & \textemdash & \begin{tabular}[c]{@{}l@{}}Distilling the predictive distribution \\ among same class labels for training\end{tabular} & \begin{tabular}[c]{@{}l@{}}Performance using self-knowledge \\ distillation\end{tabular} & \cmark & \cmark & \cmark & \\ \cline{1-7}
~ & RONA & \begin{tabular}[c]{@{}l@{}}DNN compression to provide \\ significant privacy to the user\end{tabular} & \begin{tabular}[c]{@{}l@{}}Designed a private model \\ compression framework\end{tabular} & \cmark & \cmark & \cmark & \\ \cline{1-7}
~ & FSKD & 
\\begin{tabular}[c]{@{}l@{}}To highlight the complexity of \\\\ fine-tuning of compress DNN model\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Established a relationship between \\\\ compression rate and training epochs\\end{tabular} & \\cmark & \\cmark & \\cmark & \\\\ \\cline{1-7}\n~ & \\textemdash & Interpretation of knowledge distillation & \\begin{tabular}[c]{@{}l@{}}Involve both relevant and irrelevant \\\\ features related to the visual task\\end{tabular} & \\cmark & \\cmark & \\cmark & \\\\ \\hline\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}To reduce the gap between student \\\\ and teacher model\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Introduced teacher assistant between \\\\ teacher and student during distillation\\end{tabular} & \\cmark & \\cmark & \\cmark & \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Teaching\\\\ assistant\\end{tabular} } \\\\ \\hline\n~ & ShrinkTeaNet & \\begin{tabular}[c]{@{}l@{}}To train a few parameter student models\\\\ using cumbersome teacher model\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Developing a technique that is more \\\\ robust towards open-set problem\\end{tabular} & \\cmark & \\xmark & \\xmark & \\multicolumn{1}{c|}{\\multirow{3}{*}{\\begin{tabular}[c]{@{}c@{}}Domain\\\\ distillation\\end{tabular}}} \\\\ \\cline{1-7}\n~ & MobileDA & \\begin{tabular}[c]{@{}l@{}}To perform training on simplified DNN \\\\ model that handles domain shift problem\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Learning transferable features for\\\\ domain adaptation\\end{tabular} & \\cmark & \\xmark & \\xmark & \\\\ \\hline\n\\end{tabular}\n}\n\\label{table5}\n\\end{table*}", "id": "ba733adc-5211-4a9a-b70a-7d03cf2c8077", "level": "section", "origin_cites_number": 10, "parent_id": "06f26bac-bd30-4095-9869-55d16cc27805", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Knowledge distillation" ] ], "subsections": [ 
"8b52d9f9-e93a-4763-b860-76af2b06e769", "e072b1d3-82ea-44ab-9bab-470712e0dfcc", "0e392a9f-6337-4c6d-8228-e261cb226770" ], "title": "Knowledge distillation" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 6322, 6325, 8998, 681, 6324 ], "content": "Logits transfer is the simplified approach for knowledge distillation from teacher to student model. Firstly, the teacher model is trained on a given dataset $\\mathcal{D}$. Next, the logit vectors (output of DNN model before the softmax layer) of the teacher model acts as a soft target for a training student model. The knowledge distillation using logits transfer improves the generalization ability of the student model and helps in improving its performance. Different logits transfer mechanism in existing literature are described as follows. \nAuthors in~ proposed a knowledge distillation technique for improving the performance of a compressed DNN model using the generalization ability of the pre-trained cumbersome model. The cumbersome model contains the ensembled output of multiple models, thus achieve higher generalization ability. The deployment of the cumbersome model on an embedded device is impractical due to substantial resource requirements. To overcome such limitations, the small model is deployed on the embedded device, with accuracy improvement using the cumbersome model. The principle objective is to distil the knowledge from large model to small model by training small model in such a way that it achieves similar performance as the cumbersome model. While performing the transmission of the generalization ability of the cumbersome model to the small compressed model, the class label probabilities will act as a soft target for training the small model.\nIn the DNN model, the output features vector obtained at one layer prior to the softmax layer is termed as logit. 
Let $\\mathbf{a}_i$ denotes a logit vector of data instance $i$ $(\\forall i \\in N)$, where, $N$ represents total number of data instances in dataset $\\mathcal{D}$. To obtain $\\mathbf{a}_i$, let $x_{ij}\\in \\mathbf{X}$, $w_{ij}\\in \\mathbf{W}^T $, and $b_{j}\\in \\mathbf{b}$ represent an element of feature matrix, weight matrix, and bias vector for $j^{th}$ ($1\\leq j \\leq l$) class of $i^{th}$ ($1\\leq i \\leq N$) training instance, respectively, where, $l$ denotes the number of classes in $\\mathcal{D}$. Hence, we can estimate an element $a_{ij}$ of logit vector $\\mathbf{a}_i$ for class $j$ given as\n\\begin{equation}\na_{ij}=w_{ij} x_{ij} + b_{j}.\n\\end{equation}\nLater, the logits vector $\\mathbf{a}_i =\\{a_{ij} | 1\\leq j \\leq l\\}$ passes to a softmax function to compute predicted class label probability $p_{ij}$ as following\n\\begin{equation}\\label{pp}\n p_{ij}=\\frac{e^{a_{ij}}}{\\sum_{j=1}^{l}e^{a_{ij}}}.\n\\end{equation}\nThe predicted probability vector for $i^{th}$ instance is a set of probabilities \\{$p_{i1},p_{i2},\\cdots,p_{il}$\\}, against each class label $l$ and a class label with highest probability value is said to be the predicted class label. Further, we introduce a variable $\\mathcal{T}$ for generating a softer probability distribution; therefore, Eq.~\\ref{pp} is rewritten as\n\\begin{equation}\\label{pp}\n \\rho_{ij}=\\frac{e^{u_{ij}/\\tau}}{\\sum_{j=1}^{k}e^{u_{ij}/\\tau}}.\n\\end{equation}\nThe matching of the logits mainly involves two methods~, \\textit{i.e.,} cross-entropy using soft targets and cross-entropy using correct labels. In cross-entropy using soft targets, the cross-entropy loss of compressed model is calculated using the large value of temperature $\\mathscr{T}$, which was used by the cumbersome model for generating soft targets. Similarly, in cross-entropy with correct labels, we compute the cross-entropy loss of compressed model using similar logit as the cumbersome model. 
To perform the matching of logits, we have to estimate the gradient $\nabla$ with respect to an element ($a_{ij}$) of the logit vector, where $a_{ij}$ is the element of the logit vector of the $i^{th}$ data instance for the $j^{th}$ class. Let $b_{ij}$ represent the corresponding element of the logit vector of the cumbersome model and $q_{ij}$ its softened probability; the gradient $\nabla$ is then estimated as
\begin{align} \nonumber
\nabla =& \sum_{j=1}^{l}\frac{1}{\mathcal{T}}(p_{ij}-q_{ij}) \\ \label{maineq}
=& \sum_{j=1}^{l}\frac{1}{\mathcal{T}}\Big(\frac{e^{a_{ij}/\mathcal{T}}}{\sum_{j'=1}^{l}e^{a_{ij'}/\mathcal{T}}}-\frac{e^{b_{ij}/\mathcal{T}}}{\sum_{j'=1}^{l}e^{b_{ij'}/\mathcal{T}}}\Big).
\end{align}
At a high value of $\mathcal{T}$, the magnitude of the matching logits in Eq.~\ref{maineq} can be rewritten in an approximate form as
\begin{equation}\small
 \nabla \approx \sum_{j=1}^{l} \frac{1}{\mathcal{T}}\Big(\frac{1+a_{ij}/\mathcal{T}}{l+\sum_{j'=1}^{l}a_{ij'}/\mathcal{T}}-\frac{1+b_{ij}/\mathcal{T}}{l+\sum_{j'=1}^{l}b_{ij'}/\mathcal{T}}\Big).
 \label{final}
\end{equation}
The main objective of knowledge distillation is to minimize the gradient $\nabla$ in Eq.~\ref{final}. The value $\nabla=0$ indicates that the performance of the cumbersome and compressed DNN models is exactly the same. However, it is impractical to reach this equilibrium state, since it would be achieved only after training the compressed model for an infinite number of epochs on the logits of the teacher model. To avoid this problem, we stop training the compressed model when $\nabla\rightarrow0$. At this stage, the compressed model achieves significant performance while consuming minimal resources. 
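The per-class gradient $(1/\mathcal{T})(p_{ij}-q_{ij})$ discussed above can be checked with a small numerical sketch (logit values and names are illustrative):

```python
import numpy as np

def softmax(z, T):
    z = np.asarray(z, dtype=float) / T
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()

def logit_matching_gradient(student_logits, teacher_logits, T):
    # Gradient of the soft-target cross-entropy with respect to each
    # student logit: (p - q) / T, i.e., the per-class terms of the
    # derivation above before they are summed.
    p = softmax(student_logits, T)
    q = softmax(teacher_logits, T)
    return (p - q) / T

a = np.array([2.0, 1.0, 0.1])   # student logits (illustrative)
b = np.array([3.0, 0.5, -1.0])  # teacher logits (illustrative)
g = logit_matching_gradient(a, b, T=4.0)
print(g)        # signed corrections pulling the student toward the teacher
print(g.sum())  # the corrections over all classes sum to ~0
```

When the student's softened distribution matches the teacher's, the gradient vanishes, which corresponds to the stopping condition $\nabla\rightarrow0$ discussed above.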
Fig.~\\ref{distillation_l} illustrates the steps involve in the knowledge distillation using logits transfer.\n\\begin{figure}[h]\n \\centering\n \\includegraphics[scale=1.05]{Figures/distillation.eps}\n \\caption{Illustration of knowledge distillation using logits of original DNN model.}\n \\label{distillation_l}\n\\end{figure}\nYun \\textit{et. al.}~ distilled the predictive distribution among the same class labels for training. This distillation results in the regularization of knowledge obtained from wrong predictions, in a single DNN model. In other words, the model improves its performance using self-knowledge distillation by rapidly training and employing more precise predictions. Thus, the authors have tried to improve the performance of a build classifier in self-improving mode using dark knowledge of wrong predictions. The authors have combined two loss functions, \\textit{i.e.,} cross-entropy and Kullback-Leibler (KL) divergence loss, using a regularization parameter $\\lambda$. The combined loss function $\\mathcal{L}_{comb}(\\cdot)$ is defined as\n\\begin{equation} \\nonumber\n \\mathcal{L}_{comb}(\\mathbf{x},\\mathbf{x}{'},y,\\phi,\\mathcal{T})= \\mathcal{L}_{CE}(\\mathbf{x},y,\\phi) + \\lambda \\mathcal{T}^2 \\mathcal{L}_{KL}(\\mathbf{x},\\mathbf{x}{'}, \\phi, \\mathcal{T}),\n\\end{equation}\nwhere, $\\mathcal{L}_{CE}(\\cdot)$ and $\\mathcal{L}_{KL}(\\cdot)$ are the cross-entropy and KL-divergence losses, respectively. $\\mathbf{x}$ is the data instance instance having label $y$ and $\\mathbf{x}{'}$ is another data instance having label $y$. $\\phi$ is the network parameter and $\\mathcal{T}$ is the temperature used in the knowledge distillation for softening the prediction probabilities of the knowledge inferring model.\nIn~ authors have designed a private model compression framework, named pRivate mOdel compressioN frAmework (RONA). 
The authors highlighted the need for DNN compression that provides significant privacy to the user while analysing the data collected on resource-limited devices. Using knowledge distillation, the accuracy of the compressed model is improved by jointly using hint, distillation, and self learning. While performing knowledge distillation, the cumbersome model is carefully perturbed to enforce a high level of privacy. The authors thereby meet two crucial goals simultaneously, \textit{i.e.,} model compression and privacy preservation.
The authors in~ highlighted the complexity of fine-tuning a compressed DNN model. Compressing a DNN using pruning, quantization, or weight decomposition requires fine-tuning, which may take a massive number of epochs to stabilize. They established the relationship that the higher the compression rate, the more epochs are required to stabilize the performance of the model. Another problem associated with fine-tuning is that the compressed DNN model requires a large training dataset to achieve considerable accuracy. Here, the authors aim to realize both data and processing efficiency: they proposed Few Sample Knowledge Distillation (FSKD), which performs network compression efficiently in terms of both model training and inference.
In~, the authors presented a method for interpreting knowledge distillation that involves both the relevant and irrelevant features related to the visual task. They presented three hypotheses for knowledge distillation: 1) it helps in learning more precise class label predictions from raw data, 2) it ensures that multiple classes are learned from the dataset simultaneously, and 3) it provides a well-optimized direction for learning. 
The authors demonstrated the pros and cons of these hypotheses using empirical and experimental analysis on different datasets.\n\\noindent $\\bullet$ \\textbf{Remarks:} The existing work on logits transfer~ in knowledge distillation improves the performance of the compressed DNN model. However, the logits of the original pre-trained DNN model sometimes lead to overfitting, which results in poor generalization ability of the compressed model: during fine-tuning, the student model can mimic the exact prediction behaviour of the teacher model but lose its generalization ability. It could be a research direction to determine the exact point at which to halt the training of the student model so as to retain its generalization ability and avoid overfitting.", "id": "8b52d9f9-e93a-4763-b860-76af2b06e769", "level": "subsection", "origin_cites_number": 6, "parent_id": "ba733adc-5211-4a9a-b70a-7d03cf2c8077", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Knowledge distillation" ], [ "subsection", "Logits transfer" ] ], "subsections": [], "title": "Logits transfer" }, { "cite_extract_rate": 1, "cites": [ 4508 ], "content": "Authors in~ studied the concept of knowledge distillation from a different perspective. The authors introduced a teacher assistant in between the teacher and student models during knowledge distillation. The main motive of this insertion is to reduce the gap between the student and teacher models. They claim that when the size of the student model is fixed, we cannot employ an arbitrarily large teacher model. In other terms, a gap between the teacher and student models beyond a limit leads to improper knowledge transmission from teacher to student. To mitigate such difficulty in improving the performance of the student model through knowledge distillation, the teacher assistant plays a decisive role. 
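The capacity-gap intuition behind the teacher assistant can be illustrated with a toy experiment (entirely our own sketch, not the cited method): "models" are least-squares linear maps restricted to a subset of input features, so a mid-capacity assistant can mimic the teacher more faithfully than the small student can, and the student can then be fit to the assistant instead of the distant teacher.

```python
import numpy as np

rng = np.random.default_rng(0)

def distill(features, n_feat, target_logits):
    """Toy 'distillation': fit a linear map on the first n_feat features
    (a smaller-capacity model) to mimic the target logits."""
    Xs = features[:, :n_feat]
    W, *_ = np.linalg.lstsq(Xs, target_logits, rcond=None)
    return Xs @ W

# 8-feature inputs and a linear 'teacher' producing 3-class logits
X = rng.normal(size=(200, 8))
teacher = X @ rng.normal(size=(8, 3))

student_direct = distill(X, 2, teacher)    # small student mimics the teacher directly
assistant = distill(X, 5, teacher)         # mid-sized assistant mimics the teacher
student_via_ta = distill(X, 2, assistant)  # student mimics the assistant instead
```

Because the assistant has strictly more capacity than the student, its imitation error is never worse than the student's, which is exactly the gap the assistant is meant to bridge.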
Fig.~\\ref{t_assistant} illustrates an example scenario demonstrates the role of teacher assistant to fill the gap between large teacher model and small student model. The authors also suggest the futuristic approach of inserting multiple teaching assistants for providing high order compression with minimal accuracy compromise.\n\\begin{figure}[h]\n \\centering\n \\includegraphics[scale=1.0]{Figures/assistant.eps}\n \\caption{An example scenario of teacher assistant between teacher model and student model~.}\n \\label{t_assistant}\n\\end{figure}\n\\noindent $\\bullet$ \\textbf{Remarks:} The work in~ have discussed the benefits of inserting a teaching assistant between teacher and student models, but unfortunately, it will increase the training time by nearly two-fold. First, the training of teacher assistant is performed using teacher model followed by training of student model using teacher assistant. Despite significant improvement in the student model performance, the complexity of training could be a loophole of the proposed approach.", "id": "e072b1d3-82ea-44ab-9bab-470712e0dfcc", "level": "subsection", "origin_cites_number": 1, "parent_id": "ba733adc-5211-4a9a-b70a-7d03cf2c8077", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Knowledge distillation" ], [ "subsection", "Teacher assistant" ] ], "subsections": [], "title": "Teacher assistant" }, { "cite_extract_rate": 0.5, "cites": [ 6324, 6326 ], "content": "The knowledge distillation techniques discussed in the previous sections do not consider the consequence of domain disparity while training student model using logits of the teacher. The domain disparity leads to poor generalization ability of the student model~. To handle domain disparity, different domain adaptation techniques have been proposed in the existing literature~. 
Fig.~\\ref{domain_t} illustrates the layer-by-layer feature extraction from teacher and student model followed by knowledge distillation from teacher to student.\nThe simultaneous training helps in preserving the student model generalization ability. \nThe overview of different work on knowledge distillation incorporating domain adaptation are as follows.\n\\begin{figure}[h]\n \\centering\n \\includegraphics[scale=0.90]{Figures/domain.eps}\n \\caption{Layer-by-layer knowledge distillation from teacher to student for eliminating domain disparity.}\n \\label{domain_t}\n\\end{figure}\nDuong \\textit{et. al.}~ proposed a knowledge distillation technique that helps in shrinking the gap between cumbersome teacher and compact student model. Therefore, the authors called this technique as ShrinkTeaNet. The objective of ShrinkTeaNet is simple to train a few parameter student models using cumbersome teacher model. However, the authors emphasized on developing a technique that is more robust towards open-set problem along with maintaining significant accuracy. Further, authors have introduced angular distillation loss for determining the feature direction despite the hard targets that result in overfitting. The angular distillation loss helps in reducing the domain adaptation problem in knowledge distillation. The angular distillation loss ($ \\mathcal{L}_{adl}(S,T)$) is defined as\n\\begin{align}\\nonumber\n \\mathcal{L}_{adl}(S,T) &= dist\\Big(trans(F_t),trans(F_s)\\Big)\\\\ \\label{adl}\n &=\\Big\\| 1-\\frac{trans(F_t)}{trans(F_t)}\\times \\frac{trans(F_s)}{trans(F_s)}\\Big\\|_2^2\n\\end{align}\nwhere, $dist(\\cdot)$ is the distance function that estimates the distance between the transformation function of the teacher ($trans(F_t)$) and student ($trans(F_s)$). Eq.~\\ref{adl} is similar to the cosine similarity for estimating the distance between two vectors. In this approach, the authors suggest training the student model, along with the teacher model parallelly. 
This parallel training provides a soft target at each step instead of the final class label predictions, which act as hard targets for the student model during training.\nIn~ the authors emphasized the problem of domain adaptation encountered during the training of DNN models on embedded Internet of Things (IoT) devices. The authors discovered that a DNN model achieves prominent results on a high-end machine but suffers from degradation in accuracy and efficiency on IoT devices. They proposed a framework for Mobile Domain Adaptation (MobileDA) that can learn transferable features. The learning is performed in such a way that the structure of the DNN model remains as simple as possible. The authors use cross-domain distillation for training a student model on IoT devices using a cumbersome teacher model on the high-end machine. The main objective of MobileDA is to train a simplified DNN model that can handle the domain shift problem while achieving significant accuracy. In other words, the training phase addresses domain adaptation using a cumbersome DNN model running on a high-end machine, and the testing phase is performed using a simplified DNN model deployed on the embedded device. Further, the authors proposed an algorithm that simultaneously optimizes different loss functions for handling domain adaptation.\n\\noindent $\\bullet$ \\textbf{Remarks:} The domain adaptation techniques proposed in the existing literature~ have successfully covered the domain disparity problem. However, the gap between the student model and teacher model sometimes hampers the performance of the domain adaptation technique. 
Thus, it could be beneficial to adopt the teacher assistant in the domain adaptation problem to get the overall benefits.", "id": "0e392a9f-6337-4c6d-8228-e261cb226770", "level": "subsection", "origin_cites_number": 4, "parent_id": "ba733adc-5211-4a9a-b70a-7d03cf2c8077", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Knowledge distillation" ], [ "subsection", "Domain adaptation" ] ], "subsections": [], "title": "Domain adaptation" }, { "cite_extract_rate": 0.461538461538461, "cites": [ 6327, 6329, 8999, 6328, 6330, 1791 ], "content": "\\label{mln}\nThis section covers the DNN compression techniques in the existing literature~ that do not fit the categorization criteria discussed in the previous sections. Apart from falling outside these criteria, some of them simultaneously incorporate a group of DNN compression techniques. The principal motive behind the miscellaneous category, \\textit{i.e.,} to reduce the storage and computation requirements of the DNN model, is the same as for the above-discussed techniques. In other words, the miscellaneous category highlights the optimization of various DNN parameters before actually building the model. Thus, it helps in developing a storage- and computation-efficient model for resource constraint devices. 
The overview of the DNN compression technique under miscellaneous category is illustrated in Table~\\ref{table6}.\n\\begin{table*}[h]\n\\caption{Summary of miscellaneous approaches for compressing DNN model.}\n\\resizebox{1.0\\textwidth}{!}{\n\\begin{tabular}{|c|c|l|l|l|l|l|}\n\\hline\n\\multirow{2}{*}{\\textbf{Paper}} & \\multirow{2}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Abbreviated \\\\ name\\end{tabular}}} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{Address the challenge}}} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{Proposed solution}}} & \\multicolumn{3}{c|}{\\textbf{Suitable for}} \\\\ \\cline{5-7} \n & & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\textbf{CNN} & \\textbf{FC} & \\textbf{RNN} \\\\ \\hline\n~ & DeepSense & \\begin{tabular}[c]{@{}l@{}}To automatically handle the noise in \\\\ the dataset collected form mobile device\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Unified framework for on device \\\\ classification\\end{tabular} & \\cmark & \\cmark & \\cmark \\\\ \\hline\n~ & FastDeepIoT & \\begin{tabular}[c]{@{}l@{}}To exploits the non-linear relation between \\\\ structure of DNN model and its execution time\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Determined the tradeoff between execution \\\\ time and accuracy on embedded device\\end{tabular} & \\cmark & \\cmark & \\cmark \\\\ \\hline\n~ & AdaDeep & \\begin{tabular}[c]{@{}l@{}}To automatically specifies a combination of \\\\ compression techniques for DNN model\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Determine an optimal balance between \\\\ resources and user demanded performance\\end{tabular} & \\cmark & \\cmark & \\cmark \\\\ \\hline\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}To determine the capability of DNN model \\\\ to perform recognition\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}To utilize cross-sensor correlation for providing \\\\ a high-order of recognition performance\\end{tabular} & \\cmark & \\xmark & \\xmark \\\\ \\hline\n~ & NestDNN & 
\\begin{tabular}[c]{@{}l@{}}Emphasized on the availability of dynamic \\\\ resources on mobile devices\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}To select optimal resources while performing \\\\ inference on the mobile device\\end{tabular} & \\cmark & \\cmark & \\cmark \\\\ \\hline\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}Benefits of using multimodal-sensors for \\\\ recognition using DNN on embedded devices\\end{tabular} & Using the modality-specific partition & \\cmark & \\cmark & \\xmark \\\\ \\hline\n~ & Deepeye & \\begin{tabular}[c]{@{}l@{}}To perform image recognition with significant\\\\ accuracy in short duration\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Optimizing the operation of convolutional \\\\ and fully connected layers\\end{tabular} & \\cmark & \\cmark & \\xmark \\\\ \\hline\n~ & DeepApp & \\begin{tabular}[c]{@{}l@{}}Determines the mobile device usage \\\\ pattern using historical data\\end{tabular} & Deep reinforcement learning framework & \\cmark & \\cmark & \\cmark \\\\ \\hline\n~ & DeepFusion & Recognition of different IoT based task & \\begin{tabular}[c]{@{}l@{}}A unified framework that incorporates values \\\\ of multiple sensors\\end{tabular} & \\cmark & \\cmark & \\cmark \\\\ \\hline\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}To reduce the resource requirement through\\\\ distributed operations\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Implemented a technique to train the DNN\\\\ model in a distributed manner\\end{tabular} & \\cmark & \\cmark & \\cmark \\\\ \\hline\n~ & \\textemdash & \\begin{tabular}[c]{@{}l@{}}Issues associated with DNN model running \\\\on high-end machine and mobile devices\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Elaborates the contradiction between \\\\ requirement of DNN model and mobile devices\\end{tabular} & \\cmark & \\cmark & \\cmark \\\\ \\hline\n~ & \\multicolumn{1}{l|}{Shufflenet} & \\begin{tabular}[c]{@{}l@{}} To reduce the power consumption by reducing\\\\ the floating-point operations 
without accuracy compromise \\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Use pointwise group convolution and \\\\ channel shuffle \\end{tabular} & \\cmark & \\xmark & \\xmark \\\\ \\hline\n\\end{tabular}\n}\n\\label{table6}\n\\end{table*}\nAuthors in~ proposed a unified framework that can handle the noise inserted in the collected dataset by on-device sensors. Since the exact estimation of the noise distribution is highly tedious, a classification approach is needed that can automatically handle the noise in the dataset. Along with noise handling, on-device classification is also beneficial to reduce the inherent delay due to data transmission from the device to the high-end machine. To solve these problems simultaneously, the authors proposed a mechanism named DeepSense. DeepSense uses CNN- and RNN-based feature extraction to avoid the difficulty of designing manual features, as it is not always convenient to find highly robust features that can accommodate noise in the sensory data and rapidly changing user behaviour. DeepSense uses parallel execution of multiple sensors' data on DNN models, which decreases its complexity to run on mobile devices.\nYao \\textit{et al.}~ proposed a framework that provides an insight into the black box of the DNN model. The framework, named FastDeepIoT, provides accuracy-preserving DNN compression for IoT devices. FastDeepIoT exploits the non-linear relation between the structure of the DNN model and its execution time. Using the non-linear relation, the authors determined the tradeoff between execution time and DNN accuracy on the embedded device. The authors perform a high level of device profiling for automatically determining the minimal execution time on an embedded device with little accuracy compromise. 
The objective behind FastDeepIoT is to develop a DNN model that checks the conditions responsible for non-linearity without any knowledge of the library and hardware.\nIn~ the authors explored the tradeoff between performance and resources of the DNN model on the embedded device. Here, the authors determine the user-specified needs to estimate the appropriate DNN model for deployment on the embedded device. The authors referred to this framework as AdaDeep. The framework automatically specifies a combination of compression techniques for a given DNN model. This combination of compression techniques leads to an optimal balance between resources and user-demanded performance. It imposes resource constraints on accuracy, energy consumption, storage, and latency. AdaDeep relies on reinforcement learning-based optimization that automatically and efficiently solves the constrained optimization problem. AdaDeep tries to solve the following optimization problem\n\\begin{align}\n \\arg \\max_{c_t \\in C_t} \\text{ } & \\lambda_1 \\tau(acc-acc_{min}) + \\lambda_2 \\tau(E_{max}-E),\\\\ \n \\text{s.t., } & L \\leq L_{th},\\\\ \n & St \\leq St_{th},\n\\end{align}\nwhere accuracy ($acc$), energy ($E$), latency ($L$), and storage ($St$) have the corresponding thresholds $acc_{min}$, $E_{max}$, $L_{th}$, and $St_{th}$, respectively. The accuracy and energy terms are combined using the weights $\\lambda_1$ and $\\lambda_2$. $\\tau(\\cdot)$ is the normalization process for both accuracy and energy.\nRadu \\textit{et al.}~ mainly focus on determining the capability of a DNN model to recognize sensory values on mobile and embedded devices. The authors utilize cross-sensor correlation to provide a high order of recognition performance. In~ the authors presented a DNN framework named NestDNN. The framework emphasizes the availability of dynamic resources on mobile devices. 
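The constrained objective above can be sketched as an exhaustive search over candidate compressed models (our own illustration; AdaDeep itself uses a reinforcement-learning optimizer, and the field names below are assumptions, not its API):

```python
def select_model(candidates, acc_min, e_max, l_th, st_th, lam1=0.6, lam2=0.4):
    """Maximize lam1*tau(acc - acc_min) + lam2*tau(E_max - E)
    subject to the latency and storage budgets L_th and St_th."""
    tau = lambda v: max(v, 0.0)  # simple normalization stand-in for tau(.)
    feasible = [c for c in candidates
                if c["latency"] <= l_th and c["storage"] <= st_th]
    if not feasible:
        return None  # no candidate meets the resource budgets
    return max(feasible,
               key=lambda c: lam1 * tau(c["acc"] - acc_min)
                           + lam2 * tau(e_max - c["energy"]))
```

The hard constraints act as a filter and the weighted objective then trades accuracy margin against energy headroom among the survivors, mirroring the structure of the optimization problem.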
It indicates that the DNN model running on a mobile device faces resource scarcity when other applications are running on the device. Therefore, it is beneficial to select optimal resources while performing inference on the mobile device. NestDNN has the main objective of providing on-device computation of the DNN model without Cloud support. NestDNN uses a DNN model pruning and recovery method that transforms the cumbersome teacher model into a compact multi-capacity student model.\nAuthors in~ studied the benefits of using multimodal sensors for recognizing various activities using a DNN classifier on embedded devices. They mainly emphasized two variants of the DNN model, \\textit{i.e.,} a fully connected network and a convolutional neural network using the concatenation of multi-sensor values. Here, the authors used a modality-specific partition. Mathur \\textit{et al.}~ presented a DNN compression technique called DeepEye. DeepEye provides a matchbox-sized computational unit with an attached camera sensor. The small device can process the image captured by the camera in the same way the image is processed at high-end machines (Cloud). On the matchbox-sized device, the DNN model runs successfully to perform image recognition with significant accuracy in a short duration. The local execution is possible by optimizing the operations of the convolutional and fully connected layers in the DNN model. The convolutional layers are computationally expensive but require low storage. On the other hand, fully connected layers are computationally inexpensive but require higher storage than convolutional layers. This reciprocal relationship between the convolutional and fully connected layers creates a tradeoff for optimizing a DNN model. If the embedded device has sufficient storage and limited processing capacity, then the maximum number of operations is performed by loading the fully connected layers on the device. 
Similarly, in the opposite condition, the convolutional layers are loaded on the device.\nAuthors in~ proposed a cache design mechanism that helps in providing inference on mobile devices. The authors named the approach DeepCache. They suggest that the cache memory must be available and should handle the memory variation in the raw sensory data. The caching provides space to store results and helps in exploiting result reusability. The authors claim that the proposed mechanism requires minimal developer effort. Through experimental analysis, DeepCache has been shown to reduce execution time by $18\\%$ and energy consumption by $20\\%$. In~ the authors proposed a deep reinforcement learning framework named DeepApp. It determines the mobile device usage pattern using historical data of the applications used by the user. The features from the historical data are extracted using a compressed DNN model. Further, the compressed DNN classifier recognizes the different usage patterns.\nXue \\textit{et al.}~ proposed a unified framework (named DeepFusion) that incorporates the values of multiple sensors for the recognition of different IoT-based tasks. DeepFusion combines, in parallel, spatial features extracted by a CNN for each sensor's values. This parallelization reduces the complexity of the DNN model and makes it suitable for deployment on mobile devices. Multimodal sensory values are considered as they not only provide better distinguishable characteristics but also reduce the noise in the sensory data. In~ the authors implemented a technique to train the DNN model in a distributed manner; here, they tend to perform the execution on multiple GPUs, trying to reduce the resource requirement through distributed operations.\nAuthors in~ elaborate on the contradiction between the resource requirements of DNN models and the deployment of DNN models on miniature mobile devices having limited resources. 
Another issue associated with training DNN models on a high-end machine is the privacy and security concern about individual data. The cumbersome DNN models exceed the limited storage on the mobile device, and their deployment off the device results in higher energy consumption and delay. It is tedious to perform both training and inference on mobile devices: training a DNN model on mobile devices requires distributed data generation and processing, and upon running a DNN model, it can dominate the entire processing on the device due to a huge number of floating-point operations. In~ the authors incorporated two crucial operations, \\textit{i.e.,} pointwise group convolution and channel shuffle, for compressing the DNN model. The proposed approach, named ShuffleNet, reduces the power consumption by reducing the floating-point operations without accuracy compromise.\n\\noindent $\\bullet$\\textbf{Remarks:} The DNN compression techniques DeepSense~, FastDeepIoT~, AdaDeep~, NestDNN~, DeepEye~, DeepCache~, DeepApp~,\nDeepFusion~, and ShuffleNet~ in the miscellaneous approach have reduced the storage and computation requirements from different perspectives. For example, DeepSense proposed a lightweight feature extraction mechanism using parallelization of sensory values, and DeepFusion combines spatial and temporal features of sensory values using a parallelized architecture. These mechanisms are effective and have shown significant performance on smartphones using compressed DNN models. 
However, a higher degree of compression is needed for RCDs, where storage and computation resources are much smaller than those of smartphones.\n\\vspace{0.8cm}", "id": "34c631ba-3048-4884-8a07-2d779d5a3a90", "level": "section", "origin_cites_number": 13, "parent_id": "06f26bac-bd30-4095-9869-55d16cc27805", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Miscellaneous" ] ], "subsections": [], "title": "Miscellaneous" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{fd}\nThe presented categorization for compressing DNN models gives a glimpse of the existing literature and its colossal contributions. After a thorough review of the existing literature on DNN compression, we come up with five broad categories, \\textit{i.e.,} network pruning, sparse representation, bits precision, knowledge distillation, and miscellaneous techniques. All these categories seem to be predominating areas of research that encourage the effective utilization of DNN models on resource constraint devices. As the demand for IoT-based applications proliferates, data scientists are encouraged to incorporate DNN models in IoT. Therefore, it is beneficial to provide a thorough overview of the DNN compression techniques that can meet the limited storage and processing capacity available at resource constraint IoT devices.\nThe overview of the different categories of DNN compression reveals several research possibilities. Even though several works have been done on DNN compression, there is still a lot more to be uncovered. The existing literature mainly emphasizes compressing convolutional neural networks and fully connected layers, but very little work has been done for recurrent neural networks. Additionally, despite the different accuracy-preserving mechanisms adopted in the existing literature, DNN compression techniques still face performance degradation. 
Finally, the dynamic resources on tiny computing devices need more exploration.\n\\begin{table}[htbp!]\n\\section*{Abbreviations and symbols}\n\\vspace{0.2cm}\n\\begin{tabular}{llll}\nBNN & & & Binarized Neural Network \\\\\nCNN & & & Convolutional Neural Network \\\\\ncDeepArch & & & compact Deep neural network Architecture \\\\\nDeepIoT & & & Deep neural network for IoT \\\\ \nDNN & & & Deep Neural Network \\\\ \nEIE & & & Energy efficient Inference Engine \\\\\nEvoNAS & & & Evolutionary Network Architecture Search \\\\\nFastDeepIoT & & & Fast Deep neural network for IoT \\\\\nFLOPs & & & FLoating point OPerations \\\\\nFSKD & & & Few Sample Knowledge Distillation \\\\\nGPU & & & Graphical Processing Unit \\\\\nIoT & & & Internet of Things \\\\\nLSTM & & & Long Short Term Memory \\\\\nMobileDA & & & Mobile Domain Adaptation \\\\\nMTZ & & & Multi Tasking Zipping \\\\\nQNN & & & Quantized Neural Network \\\\\nRCD & & & Resource Constraint Device \\\\\nRNN & & & Recurrent Neural Network \\\\\nRONA & & & pRivate mOdel compressioN frAmework \\\\\nSCNN & & & Sparse Convolutional Neural Network \\\\\nSVD & & & Singular Value Decomposition \\\\\nVarGNet & & & Variable Group convolution in DNN \\\\\n$\\mathcal{D}$ & & & Dataset \\\\\n$N$ & & & Number of data instances \\\\\n$M$ & & & Length of data instance \\\\\n$\\mathcal{L}(\\cdot)$ & & & Loss function \\\\\n$\\mathbf{F}$ & & & Convolutional filter \\\\\n$\\mathbf{I}$ & & & Input dimension \\\\\n$\\mathbf{O}$ & & & Output dimension \\\\\n$\\mathbf{W}$ & & & Weight matrix \\\\\n$l$ & & & Number of classes \\\\\n$Clip(\\cdot)$ & & & Clip function \\\\\n$Sign(\\cdot)$ & & & Sign function \\\\\n$\\mathcal{L}_{adl}(\\cdot)$ & & & Angular distillation loss \\\\\n$trans(\\cdot)$ & & & Transformation function \\\\\n$\\mathbf{X}$ & & & Input matrix \\\\\n$\\mathcal{L}_{comb}$ & & & Combined loss \\\\\n\\end{tabular}\n\\end{table}\n\\bibliographystyle{IEEEtran}\n\\bibliography{btp}\n\\end{document}", "id": 
"d19e7744-862f-4783-af40-1e0704c0fc6e", "level": "section", "origin_cites_number": 0, "parent_id": "06f26bac-bd30-4095-9869-55d16cc27805", "prefix_titles": [ [ "title", "A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions" ], [ "section", "Discussions and future research directions" ] ], "subsections": [], "title": "Discussions and future research directions" } ]
45
[ 6312, 6311, 6313, 688, 6319, 841, 6315, 504, 4354, 4628, 8150, 7634, 8389, 6318, 6317, 6314, 6316, 6320, 9139, 4351, 6321, 685, 6326, 6322, 6325, 8998, 681, 6324, 4508, 6323, 6330, 6328, 1791, 6329, 8999, 6327, 6331 ]
0.751906
[ "Andrea Bandini", "José Zariffa" ]
Analysis of the hands in egocentric vision:\\ A survey
2019
2019-12-23T14:30:02Z
cs.CV
Egocentric vision (a.k.a. first-person vision -- FPV) applications have thrived over the past few years, thanks to the availability of affordable wearable cameras and large annotated datasets. The position of the wearable camera (usually mounted on the head) allows recording exactly what the camera wearers have in front of them, in particular hands and manipulated objects. This intrinsic advantage enables the study of the hands from multiple perspectives: localizing hands and their parts within the images; understanding what actions and activities the hands are involved in; and developing human-computer interfaces that rely on hand gestures. In this survey, we review the literature that focuses on the hands using egocentric vision, categorizing the existing approaches into: localization (where are the hands or parts of them?); interpretation (what are the hands doing?); and application (e.g., systems that used egocentric hand cues for solving a specific problem). Moreover, a list of the most prominent datasets with hand-based annotations is provided.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "08af7ca6-1a67-48c9-9735-3113b8e18727", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ] ], "subsections": [ "0b94b38e-9f41-4523-9736-29bbf66ca9e0", "6f70ffa8-b87c-41c9-a278-be2c90617e0b", "7a1e900d-f2db-4f24-95e0-36ae47590399", "2e67d397-7219-4ffa-88b7-5a8046d9d153", "5d6e752f-4696-44e4-a775-fe653f7e3245", "c289979f-f5c8-4c7d-8017-03424c08e89e", "207be78a-9a20-4451-8627-9fe3d09faf21" ], "title": "root" }, { "cite_extract_rate": 0.1875, "cites": [ 4255, 4254, 4256 ], "content": "\\label{sec:introduction}}\n\\IEEEPARstart{T}{he} hands are of primary importance for human beings, as they allow us to interact with objects and environments, communicate with other people, and perform activities of daily living (ADLs) such as eating, bathing, and dressing. It is not a surprise that in individuals with impaired or reduced hand functionality (e.g., after a stroke or cervical spinal cord injury -- cSCI) the top recovery priority is to regain the function of the hands . Given their importance, computer vision researchers have tried to analyze the hands from multiple perspectives: localizing them in the images , inferring the types of actions they are involved in , as well as enabling interactions with computers and robots . Wearable cameras (e.g., cameras mounted on the head or chest) have allowed studying the hands from a point of view (POV) that provides a first-person perspective of the world. This field of research in computer vision is known as egocentric or first-person vision (FPV). Although some studies were published as early as the 1990s , FPV gained more importance after 2012 with the emergence of smart glasses and action cameras (i.e., Google Glass and GoPro cameras). For an overview of the evolution of FPV methods, the reader is referred to the survey published by Betancourt et al. 
.\nEgocentric vision presents many advantages when compared with third person vision, where the camera position is usually stable and disjointed from the user. For example: the device is recording exactly what the users have in front of them; camera movement is driven by the camera-wearer’s activity and attention; hands and manipulated objects tend to appear at the center of the image and hand occlusions are minimized . These advantages made the development of novel approaches for studying the hands very appealing. However, when working in FPV, researchers must also face an important issue: the camera is not stable, but is moving with the human body. This causes fast movements and sudden illumination changes that can significantly reduce the quality of the video recordings and make it more difficult to separate the hand and objects of interest from the background.\nBetancourt et al. clearly summarized the typical processing steps of hand-based methods in FPV. The authors proposed a unified and hierarchical framework where the lowest levels of the hierarchy concern the detection and segmentation of the hands, whereas the highest levels are related to interaction and activity recognition. Each level is devoted to a specific task and provides the results to higher levels (e.g., hand identification builds upon hand segmentation and hand detection, activity recognition builds upon the identification of interactions, etc.). Although clear and concise, this framework could not cover some of the recent developments in this field, made possible thanks to the availability of large amounts of annotated data and to the advent of deep learning . Other good surveys closely related to the topics discussed in our paper were published in the past few years . The reader should refer to the work of Del Molino et al. for an introduction into video summarization in FPV, to the survey of Nguyen et al. 
for the recognition of ADLs from egocentric vision, and to the work of Bola\\~{n}os et al. for a review on visual lifelogging. Hand pose estimation and hand gesture recognition methods are analyzed in and , respectively.\nIn this survey we define a comprehensive taxonomy of hand-based methods in FPV expanding the categorization proposed in and classifying the existing literature into three macro-areas: localization, interpretation, and application. For each macro-area we identify the main sub-areas of research, presenting the most prominent approaches published in the past 10 years and discussing advantages and disadvantages of each method. Moreover, we summarize the available datasets published in this field. Our focus in defining a comprehensive taxonomy and comparing different approaches is to propose an updated and general framework of hand-based methods in FPV, highlighting the current trends and summarizing the main findings, in order to provide guidelines to researchers who want to improve and expand this field of research.\nThe remainder of the paper is organized as follows: Section 2 presents a taxonomy of hand-based methods in FPV following a novel categorization that divides these approaches into three macro-areas: localization, interpretation, and application; Section 3 describes the approaches developed for solving the localization problem; Section 4 summarizes the work focused on interpretation; Section 5 summarizes the most important applications of hand-based methods in FPV; Section 6 reviews the available datasets published so far; and, finally, Section 7 concludes with a discussion of the current trends in this field.", "id": "0b94b38e-9f41-4523-9736-29bbf66ca9e0", "level": "section", "origin_cites_number": 16, "parent_id": "08af7ca6-1a67-48c9-9735-3113b8e18727", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { 
"cite_extract_rate": 0, "cites": [], "content": "\\label{sec:framework}\nStarting from the raw frames, the first processing step is dedicated to the localization of the hands or parts of them within the observed scene. This allows restricting the processing to small regions of interest (ROIs), excluding unnecessary information from the background, or reducing the dimensionality of the problem, by extracting the articulated hand pose. Once the positions of the hands and/or their joints have been determined, higher-level information can be inferred to understand what the hands are doing (e.g. gesture and posture recognition, action and activity recognition). This information can be used for building applications such as human-computer interaction (HCI) and human-robot interaction (HRI) . Therefore, we categorize the existing studies that made use of hand-based methods in FPV into three macro-areas:\n\\begin{itemize}[noitemsep, wide=0pt, leftmargin=\\dimexpr\\labelwidth + 2\\labelsep\\relax]\n \\item \\textbf{Localization} -- approaches that answer the question: \\textbf{where} are the hands (or parts of them)?\n \\item \\textbf{Interpretation} -- approaches that answer the question: \\textbf{what} are the hands doing?\n \\item \\textbf{Application} -- approaches that use methods from the above areas to build real-world applications.\n\\end{itemize}\nFor each area we define sub-areas according to the aims and nature of the proposed methods.", "id": "6f70ffa8-b87c-41c9-a278-be2c90617e0b", "level": "section", "origin_cites_number": 2, "parent_id": "08af7ca6-1a67-48c9-9735-3113b8e18727", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Hand-based methods in FPV -- An updated framework" ] ], "subsections": [ "50498201-6d39-4edf-8f60-65cd472b676d", "cf1c83dd-47c4-4b1b-a55c-c4ed114cb2ed", "efe29f46-2397-4cbc-a8ae-87e3becb63cf" ], "title": "Hand-based methods in FPV -- An updated framework" }, { "cite_extract_rate": 0, 
"cites": [], "content": "The localization area encloses all the approaches that aim at localizing hands (or parts of them) within the images. The sub-areas are:\n\\begin{itemize}[noitemsep, wide=0pt, leftmargin=\\dimexpr\\labelwidth + 2\\labelsep\\relax]\n \\item \\textbf{Hand segmentation} -- detecting the hand regions with pixel-level detail.\n \\item \\textbf{Hand detection} -- defined both as binary classification problem (does the image contain a hand?) and object localization problem (is there a hand? Where is it located?). The generalization of hand detection over time is \\textbf{hand tracking}.\n \\item \\textbf{Hand identification} -- classification between left and right hand, as well as other hands present in the scene. \n \\item \\textbf{Hand pose estimation} -- estimation of hand joint positions. A simplified version of the hand pose estimation problem is \\textbf{fingertip detection}, where only the fingertips of one or more fingers are identified.\n\\end{itemize}\nFrom the above sub-areas it is possible to highlight two dimensions in the localization problem. The first one is the amount of detail of the information extracted with a method. For example, hand detection results in low-detail information (i.e., binary label or coordinates of a bounding box), whereas hand segmentation produces high-detail information (i.e., pixel-level silhouette). The second dimension is the meaning of the obtained information , hereafter called semantic content. Hand detection and segmentation, although producing different amounts of detail, have the same semantic content, namely the global position of the hand. By contrast, hand pose estimation has higher semantic content than hand detection, as the position of the fingers and hand joints add more information to the global hand location. 
This categorization is shown in figure \\ref{fig_framework}.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.95\\linewidth]{Framework.png}\n\\end{center}\n\\caption{Hand-based approaches in FPV categorized by amount of detail and semantic content.}\n\\label{fig_framework}\n\\end{figure}", "id": "50498201-6d39-4edf-8f60-65cd472b676d", "level": "subsection", "origin_cites_number": 2, "parent_id": "6f70ffa8-b87c-41c9-a278-be2c90617e0b", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Hand-based methods in FPV -- An updated framework" ], [ "subsection", "Localization -- Where are the hands (or parts of them)?" ] ], "subsections": [], "title": "Localization -- Where are the hands (or parts of them)?" }, { "cite_extract_rate": 0.5, "cites": [ 4257 ], "content": "The interpretation area includes those approaches that, starting from lower level information (i.e., detection, segmentation, pose estimation, etc.), try to infer information with higher semantic content. The main sub-areas are:\n\\begin{itemize}[noitemsep, wide=0pt, leftmargin=\\dimexpr\\labelwidth + 2\\labelsep\\relax]\n \\item \\textbf{Hand grasp analysis} -- Detection of the dominant hand postures during hand-object interactions.\n \\item \\textbf{Hand gesture recognition} -- Classification of hand gestures, usually as input for virtual reality (VR) and augmented reality (AR) systems, as will be discussed in Section \\ref{sec:application}.\n \\item \\textbf{Action/Interaction recognition} -- Predicting what type of action or interaction the hands are involved in. Following the taxonomy of Tekin et al. , an action is defined as a verb (e.g. “pour”), whereas an interaction as a verb-noun pair (e.g. “pour water”). 
This task is called \\textit{interaction detection} if the problem is reduced to a binary classification task (i.e., predicting whether or not the hands are interacting).\n \\item \\textbf{Activity recognition} -- Identification of the activities, defined as a set of temporally-consistent actions . For example, preparing a meal is an activity composed of several actions and interactions, such as cutting vegetables, pouring water, opening jars, etc.\n\\end{itemize}\nWe can qualitatively compare these sub-areas according to the two dimensions described above (i.e., amount of detail and semantic content). Hand grasp analysis and gesture recognition have lower semantic content than action/interaction recognition that, in turn, has lower semantic content than activity recognition. Activity recognition, although with higher semantic content than action recognition, produces results with lower detail. This is because the information is summarized towards the upper end of the semantic content dimension.\nFollowing these considerations, we represent the localization and interpretation areas of this framework on a two-dimensional plot whose axes are the amount of detail and the semantic content (see Figure \\ref{fig_framework}).", "id": "cf1c83dd-47c4-4b1b-a55c-c4ed114cb2ed", "level": "subsection", "origin_cites_number": 2, "parent_id": "6f70ffa8-b87c-41c9-a278-be2c90617e0b", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Hand-based methods in FPV -- An updated framework" ], [ "subsection", "Interpretation -- What are the hands doing?" ] ], "subsections": [], "title": "Interpretation -- What are the hands doing?" }, { "cite_extract_rate": 0, "cites": [], "content": "The application area includes all the FPV approaches and systems that make use of hand-based methods for achieving certain objectives. 
The main applications are:\n\\begin{itemize}[noitemsep, wide=0pt, leftmargin=\\dimexpr\\labelwidth + 2\\labelsep\\relax]\n \\item Healthcare applications, for example the remote assessment of hand function and the development of ambient assisted living (AAL) systems.\n \\item HCI and HRI, for example VR and AR applications, or HRI systems that rely on the recognition of hand gestures.\n\\end{itemize}\nSome egocentric vision applications were already covered by other surveys . Thus, we will summarize novel aspects related to hand-based methods in FPV not covered in the previous articles.", "id": "efe29f46-2397-4cbc-a8ae-87e3becb63cf", "level": "subsection", "origin_cites_number": 4, "parent_id": "6f70ffa8-b87c-41c9-a278-be2c90617e0b", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Hand-based methods in FPV -- An updated framework" ], [ "subsection", "Application" ] ], "subsections": [], "title": "Application" }, { "cite_extract_rate": 0.4, "cites": [ 4258, 4259 ], "content": "\\label{sec:localization}\nThe localization of hands (or parts of them) is the first and most important processing step of many hand-based methods in FPV. A good hand localization algorithm allows estimating the accurate position of the hands within the image, boosting the performance of higher-level inference . For this reason, hand localization has been the main focus of researchers in egocentric vision. Although many hand detection, pose-estimation, and segmentation algorithms were developed in third person vision , the egocentric POV presents notable challenges that do not allow a direct translation of these methods. Rogez et al. demonstrated that egocentric hand detection is considerably harder in FPV, and methods developed specifically for third person POV may fail when applied to egocentric videos.\nHand segmentation and detection are certainly the two most extensively studied sub-areas. 
They are often used in combination, for example to classify as “hand” or “not hand” previously segmented regions , or to segment ROIs previously obtained with a hand detector . However, considering the extensive research behind these two sub-areas, we summarize them separately.\n\\begin{figure*}[t]\n\\begin{center}\n\\includegraphics[width=0.9\\linewidth]{HandLocalization.png}\n\\end{center}\n\\caption{Diagram of hand localization tasks in egocentric vision. Hand detection and segmentation have often been used in combination, for example to segment ROIs previously obtained with a hand detector, or to classify as “hand” or “not hand” previously segmented regions. Since they provide the global position of the hands within the frame, they are chosen as the basis for other localization approaches, such as hand identification, hand tracking, hand pose estimation, fingertip detection (\\textbf{*}: Hand identification is now typically incorporated within the hand detection step).}\n\\label{fig_localization}\n\\end{figure*}", "id": "7a1e900d-f2db-4f24-95e0-36ae47590399", "level": "section", "origin_cites_number": 5, "parent_id": "08af7ca6-1a67-48c9-9735-3113b8e18727", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Localization" ] ], "subsections": [ "c4df2472-ba86-4695-a0a8-db5f944cf9a0", "67e9bc8d-2f82-4946-b81d-c801a0d448bc", "1ca9512a-d587-441b-8b06-b07a116734fd", "196b63d0-2f44-4a3a-8cdf-0d660c3f4a77" ], "title": "Localization" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec:handSeg}\nHand segmentation is the process of identifying the hand regions at pixel-level (see Figure \\ref{fig_localization}). This step allows extracting the silhouette of the hands and has been extensively used as a pre-processing step for hand pose estimation, hand-gesture recognition, action/interaction recognition, and activity recognition. 
One of the most straightforward approaches is to use the color as discriminative feature to identify skin-like pixels . Although very simple and fast, color-based segmentation fails whenever background objects have similar skin color (e.g., wooden objects) and it is robust only if the user wears colored gloves or patches to simplify the processing . However, this might not be feasible in real-world applications, where the hand segmentation algorithm is supposed to work without external cues, thus mimicking human vision. Illumination changes due to different environments also negatively affect the segmentation performance. Moreover, the availability of large datasets with pixel-level ground truth annotations is another issue when working with hand segmentation. This type of annotation requires a lot of manual work and the size of these datasets is much smaller than those with less detailed annotations (e.g., bounding boxes). Thus, several approaches were proposed to face the above issues.", "id": "c4df2472-ba86-4695-a0a8-db5f944cf9a0", "level": "subsection", "origin_cites_number": 3, "parent_id": "7a1e900d-f2db-4f24-95e0-36ae47590399", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Localization" ], [ "subsection", "Hand segmentation" ] ], "subsections": [ "399a335c-bc4f-460e-ba50-e81579708799", "dd3a47ec-6320-422e-b997-b487774f3baa", "a3813302-f199-4718-b46d-78d483ed0975", "3aa449b6-7f7f-4cdc-8f75-6ac6c6e902fb", "0efbda79-3c9e-45d8-b941-5fc1abedb38f" ], "title": "Hand segmentation" }, { "cite_extract_rate": 0.27272727272727204, "cites": [ 4262, 4261, 4258, 3807, 4256, 2571, 4260, 810, 825 ], "content": "\\label{subsubsec:segHandsObjBack}\nTraditional hand segmentation approaches (i.e., not based on deep learning) rely on the extraction of features from an image patch, classifying the central pixel or the entire patch as skin or no-skin using a binary classifier or regression model. 
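To make this pixel-wise classification scheme concrete, the following toy sketch (not taken from any of the cited works; the color values and the Mahalanobis threshold are purely illustrative) fits a single-Gaussian skin-color model on training pixels and labels every pixel of a frame as skin or no-skin:

```python
import numpy as np

def fit_skin_model(skin_pixels):
    """Fit a single Gaussian to training skin pixels (N x 3 color values)."""
    mean = skin_pixels.mean(axis=0)
    cov = np.cov(skin_pixels, rowvar=False) + 1e-6 * np.eye(3)  # regularized
    return mean, np.linalg.inv(cov)

def classify_pixels(image, mean, inv_cov, threshold=9.0):
    """Label every pixel as skin (True) or no-skin (False) by thresholding
    the squared Mahalanobis distance to the skin-color model."""
    diff = image.reshape(-1, 3) - mean
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    return (d2 < threshold).reshape(image.shape[:2])

# Toy training set: skin samples scattered around a single color.
rng = np.random.default_rng(0)
skin = rng.normal([180.0, 120.0, 100.0], 5.0, size=(500, 3))
mean, inv_cov = fit_skin_model(skin)

# Toy frame: top half skin-colored, bottom half green background.
image = np.zeros((4, 4, 3))
image[:2] = [180.0, 120.0, 100.0]
image[2:] = [40.0, 200.0, 40.0]
mask = classify_pixels(image, mean, inv_cov)
```

A single Gaussian is the weakest model in this family; the mixture-of-Gaussian and random-forest approaches discussed below follow the same fit-then-threshold pattern with richer features.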
The vast majority of approaches combined color with gradient and/or texture features, whereas random forest has been the most popular classification algorithm . The use of texture and gradient features allows capturing salient patterns and contours of the hands that, combined with the color features, help discriminate them from background and objects with similar color.\n\\textbf{Pixel-based classification}. Li and Kitani tested different combinations of color (HSV, RGB, and LAB color spaces) and local appearance features (Gabor filters , HOG , SIFT , BRIEF , and ORB descriptors) to capture local contours and gradients of the hand regions. Each pixel was classified as skin or no-skin using a random forest regression. When using color features alone, the LAB color space provided the best performance, whereas gradient and texture features, such as HOG and BRIEF, improved the segmentation performance when combined with the color information . Zariffa and Popovic used a mixture of Gaussian skin model with dilation and erosion morphological operators to detect a coarse estimate of the hand regions. The initial region was refined by removing small isolated blobs with texture different from the skin, by computing the Laplacian of the image within each blob. Lastly, pixel-level segmentation was achieved by backprojecting using an adaptively selected region in the colour space. In , the coarse segmentation obtained with a mixture of Gaussian skin model was refined by using a structured forest edge detection , specifically trained on available datasets . \n\\textbf{Patch-based classification}. Other authors classified image patches instead of single pixels, in order to produce segmentation masks more robust to pixel-level noise . Serra et al. classified clusters of pixels (i.e., super-pixels) obtained with the simple linear iterative clustering (SLIC) algorithm . 
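Classification at super-pixel rather than pixel level can be sketched as follows. The label map is hand-crafted here to stand in for SLIC output, and a simple distance-to-mean color rule replaces the random-forest classifiers used in the cited works; all names and thresholds are illustrative:

```python
import numpy as np

def classify_superpixels(image, labels, skin_mean, max_dist=60.0):
    """Assign a skin / no-skin label to every super-pixel from its mean
    color, then broadcast the decision back to pixel level."""
    flat = image.reshape(-1, 3)
    ids = labels.ravel()
    n = ids.max() + 1
    counts = np.bincount(ids, minlength=n).astype(float)
    # Mean color per super-pixel, one bincount per channel.
    means = np.stack(
        [np.bincount(ids, weights=flat[:, c], minlength=n) for c in range(3)],
        axis=1) / counts[:, None]
    is_skin = np.linalg.norm(means - skin_mean, axis=1) < max_dist
    return is_skin[labels]

# Toy frame: left half skin-colored, right half green, with a
# hand-crafted 2x2 grid of "super-pixels" standing in for SLIC output.
image = np.zeros((4, 4, 3))
image[:, :2] = [180.0, 120.0, 100.0]
image[:, 2:] = [40.0, 200.0, 40.0]
labels = np.array([[0, 0, 1, 1]] * 2 + [[2, 2, 3, 3]] * 2)
mask = classify_superpixels(image, labels, np.array([180.0, 120.0, 100.0]))
```

Deciding per region rather than per pixel is what makes these masks more robust to pixel-level noise: one decision covers the whole cluster.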
For each super-pixel, they used a combination of color (HSV and LAB color spaces) and gradient features (Gabor filters and histogram of gradients) to train a random forest classifier. The segmentation was refined by assuming temporal coherence between consecutive frames and spatial consistency among groups of super-pixels. Similarly, Singh et al. computed the hand binary mask by extracting texture and color features (Gabor filters with RGB, HSV, and LAB color features) from the super-pixels, whereas Urabe et al. used the same features in conjunction with the centroid location of each super-pixel to train a support vector machine (SVM) for segmenting the skin regions. Instead of classifying the whole patch from which color, gradient and texture features are extracted, Zhu et al. learned the segmentation mask within the image patch, by using a random forest framework (i.e., shape-aware structured forest).\n\\textbf{Deep learning} may help solve hand segmentation problems in FPV. However, its use is still hampered by the lack of large annotated datasets with pixel-level annotations. Some deep learning approaches for hand segmentation tackled this issue by using the available annotations in combination with other image segmentation techniques (e.g., super-pixels or GrabCut ) to generate new hand segmentation masks for expanding the dataset and fine-tuning pre-trained networks (see Section \\ref{subsubsec:segLackAnnot} for more details). The availability of pre-trained convolutional neural networks (CNNs) for semantic object segmentation was exploited in . Wang et al. tackled the hand segmentation problem in a recurrent manner by using a recurrent U-NET architecture . The rationale behind this strategy is to imitate the saccadic movements of the eyes that allow refining the perception of a scene. The computational cost can be another issue in CNN-based hand segmentation. To reduce this cost, while achieving good segmentation accuracy, Li et al. 
implemented the deep feature flow (DFF) with an extra branch to make the approach more robust against occlusions and distortion caused by DFF.", "id": "399a335c-bc4f-460e-ba50-e81579708799", "level": "subsubsection", "origin_cites_number": 33, "parent_id": "c4df2472-ba86-4695-a0a8-db5f944cf9a0", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Localization" ], [ "subsection", "Hand segmentation" ], [ "subsubsection", "Discriminating hands from objects and background" ] ], "subsections": [], "title": "Discriminating hands from objects and background" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsubsec:segIllChange}\nThe problem of variable illumination can be partially alleviated by choosing the right color space for feature extraction (e.g., LAB ) and increasing the size of the training set. However, the latter strategy may reduce the separability of the color space and increase the number of misclassified examples . Thus, a popular solution has been to use a collection of segmentation models, adaptively selecting the most appropriate one for the current test conditions . Li and Kitani proposed an adaptive approach that selects the nearest segmentation model, namely the one trained in a similar environment. To learn different global appearance models, they clustered the HSV histograms of the training images using k-means and learned a separate random tree regressor for each cluster. They further extended this concept in where they formulated hand segmentation as a model recommendation task. For each test image, the system was able to propose the best hand segmentation model given the color and structure (HSV histogram and HOG features) of the observed scene and the relative performance between two segmentation models. Similarly, Betancourt et al. trained binary random forests to classify each pixel as skin or no-skin using the LAB values. 
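The model-selection step shared by these adaptive approaches (summarize each training scene with a global color histogram, then pick the stored model whose scene best matches the test frame) can be sketched as follows; the histogram binning and the L1 distance are illustrative stand-ins for the descriptors and recommendation criteria of the cited works:

```python
import numpy as np

def global_histogram(image, bins=8):
    """Coarse scene descriptor: a normalized per-channel color histogram."""
    hist = np.concatenate(
        [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
         for c in range(3)]).astype(float)
    return hist / hist.sum()

def nearest_models(library, frame, k=1):
    """Indices of the k stored models whose training scenes are most
    similar to the test frame (L1 distance between global histograms)."""
    query = global_histogram(frame)
    dists = [np.abs(h - query).sum() for h, _ in library]
    return np.argsort(dists)[:k]

# Toy library: one model trained under dark, one under bright lighting.
dark = np.full((8, 8, 3), 30.0)
bright = np.full((8, 8, 3), 220.0)
library = [(global_histogram(dark), 'model_dark'),
           (global_histogram(bright), 'model_bright')]

# A bright test frame should recommend the model trained in bright light.
chosen = nearest_models(library, np.full((8, 8, 3), 210.0))
```

With k greater than one, the selected models' outputs can then be fused, as in the k-NN scheme described next.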
For each frame they trained a separate segmentation model storing it along with the HSV histogram, as a proxy to summarize the illumination condition of that frame. K-nearest neighbors (k-NN) classification was performed on the global features to select the \\textit{k} most suitable segmentation models. These models were applied to the test frame and their segmentation results combined together to obtain the final hand mask.", "id": "dd3a47ec-6320-422e-b997-b487774f3baa", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "c4df2472-ba86-4695-a0a8-db5f944cf9a0", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Localization" ], [ "subsection", "Hand segmentation" ], [ "subsubsection", "Robustness to illumination changes" ] ], "subsections": [], "title": "Robustness to illumination changes" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 1766, 514, 4263 ], "content": "\\label{subsubsec:segLackAnnot}\nAnnotating images at pixel-level is a very laborious and costly work that refrains many authors from publishing large annotated datasets. Thus, the ideal solution to the hand segmentation problem would be a self-supervised approach able to learn the appearance of the hands on-the-fly, or a weakly supervised method that relies on the available training data to produce new hand masks.\nUsually, methods for online hand segmentation made assumptions on the hand motion and/or required the user to perform a calibration with pre-defined hand movements . In this way, the combination of color and motion features facilitates the detection of hand pixels, in order to train segmentation models online. Kumar et al. proposed an on-the-fly hand segmentation, where the user calibrated the systems by waving the hands in front of the camera. 
The combination of color and motion segmentation (Horn–Schunck optical flow ) and region growing, allowed locating the hand regions for training a GMM-based hand segmentation model. Region growing was also used by Huang et al. . The authors segmented the frames in super-pixels and extracted ORB descriptors from each super-pixel to find correspondences between regions of consecutive frames, which reflect the motion between two frames. Hand-related matches were distinguished from camera-related matches based on the assumption that camera-related matches play a dominant role in the video. These matches were estimated using RANSAC and after being removed, those left were assumed to belong to the hands and used to locate the seed point for region growing. Zhao et al. based their approach on the typical motion pattern during actions involving the hands: preparatory phase (i.e., the hands move from the lower part of the frame to the image center) and interaction phase. During the preparatory phase they used a motion-based segmentation, computing the TV-L1 optical flow . As the preparatory phase ends, the motion decreases and the appearance becomes more important. A super-pixel segmentation was then performed and a super-pixel classifier, based on the initial motion mask, was trained using color and gradient features.\nTransfer learning has also been used to deal with the paucity of pixel-level annotations. The idea is to exploit the available pixel-level annotations in combination with other image segmentation techniques (e.g., super-pixels or GrabCut ) to generate new hand segmentation masks and fine-tune pre-trained networks. Zhou et al. trained a hand segmentation network using a large amount of bounding box annotations and a small amount of hand segmentation maps . They adopted a DeconvNet architecture made up of two mirrored VGG-16 networks initialized with 1,500 pixel-level annotated frames from . 
Their approach iteratively selected and added good segmentation proposals to gradually refine the hand map. The hand segmentation proposals were augmented by applying super-pixel segmentation and Grabcut to generate the hand probability map within the ground truth bounding boxes. DeconvNet was trained in an Expectation-Maximization manner: 1) keeping the network parameter fixed, they generated a set of hand masks and selected the best segmentation proposals (i.e., those with largest match with the ground truth mask); 2) they updated the network weights by using the best segmentation hypotheses. Similarly, Li et al. relied on the few available pixel-level annotations to train Deeplab-VGG16 . Their training procedure was composed of multiple steps: 1) Pre-segmentation -- the CNN, pre-trained using the available pixel-level annotations, was applied on the target images to generate pre-segmentation maps; 2) Noisy mask generation -- the pre-segmentation map was combined with a super-pixel segmentation ; and 3) Model retraining -- the new masks were used as ground truth to fine tune the pre-trained Deeplab-VGG16.", "id": "a3813302-f199-4718-b46d-78d483ed0975", "level": "subsubsection", "origin_cites_number": 18, "parent_id": "c4df2472-ba86-4695-a0a8-db5f944cf9a0", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Localization" ], [ "subsection", "Hand segmentation" ], [ "subsubsection", "Lack of pixel-level annotations" ] ], "subsections": [], "title": "Lack of pixel-level annotations" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsubsec:seg3D}\nThe use of depth sensors or stereo cameras helps alleviate some of the aforementioned issues, in particular the robustness to illumination changes and lack of training data. 
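As a minimal illustration of the depth cue exploited in this sub-area, foreground isolation by thresholding a depth map can be sketched as follows. It assumes, as these works do, that hands and manipulated objects lie closest to the camera; the 1 m cut-off and the toy depth values are illustrative, not taken from any cited system:

```python
import numpy as np

def foreground_from_depth(depth, max_range=1.0):
    """Keep near-field pixels (arm, hand, manipulated object), assuming
    the interaction happens closer to the camera than `max_range` metres;
    zero readings (sensor dropouts) are discarded."""
    valid = depth > 0
    return valid & (depth < max_range)

# Toy depth map: a "hand" at 0.4 m in front of a wall at 2 m,
# plus one invalid (zero) reading.
depth = np.full((4, 4), 2.0)
depth[1:3, 1:3] = 0.4
depth[0, 0] = 0.0
mask = foreground_from_depth(depth)
```

In the cited systems the threshold is typically derived from the depth histogram rather than fixed, and the resulting foreground is further split into hand and object pixels using color and texture.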
However, the use of devices not specifically developed for wearable applications and their high power consumption have limited their FPV application to research studies.\nSome authors used the depth information to perform a background/foreground segmentation followed by hand/object segmentation within the foreground region by using appearance information . Wan et al. used a time-of-flight (ToF) camera to capture the scene during hand-object interactions. They observed that the foreground (i.e., arm, hands, and manipulated objects) is usually close to the camera and well distinguishable, in terms of distance, from the background. Thus, after thresholding the histogram of depth values to isolate the foreground, hand pixels were detected by combining color and texture features (e.g., RGB thresholds and Gabor filters). The same ToF camera (Creative$^{\circledR}$ Senz3D\textsuperscript{TM}) was used by Rogez et al. . The authors trained a multi-class classifier on synthetic depth maps of 1,500 different hand poses, in order to recognize one of these poses in the test depth images, thus producing a coarse segmentation mask. This mask was then processed in a probabilistic manner to find the binary map corresponding to the hand pixels. Color cues were also used by computing RGB-based super-pixels on the test image. Yamazaki et al. reconstructed the colored point cloud of the scene recorded with a Microsoft$^{\circledR}$ Kinect\textsuperscript{TM} v2. The foreground was isolated by detecting and removing large plane structures (i.e., likely belonging to the background) using RANSAC . Afterwards, color segmentation was performed using a person-specific skin color model calibrated on the user’s skin. Ren et al. used a stereo camera to reconstruct the depth map of the scene. 
Specifically, the depth map was reconstructed using the scanline-based stereo matching and the hand was segmented only using depth information.", "id": "3aa449b6-7f7f-4cdc-8f75-6ac6c6e902fb", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "c4df2472-ba86-4695-a0a8-db5f944cf9a0", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Localization" ], [ "subsection", "Hand segmentation" ], [ "subsubsection", "Depth and 3D segmentation" ] ], "subsections": [], "title": "Depth and 3D segmentation" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 520 ], "content": "\\label{subsubsec:segRemarks}\nBecause of the high amount of detail obtained with hand segmentation algorithms, this task is the hardest one among hand-based methods in FPV. The pixel- or super-pixel-level accuracy required for this task, combined with the intrinsic problems of egocentric vision, made this sub-area the most challenging and debated of this field of research. The effort of many researchers in finding novel and powerful approaches to obtain better results is justified by the possibility to improve not only the localization accuracy, but also to boost the performance of higher-level inference. In fact, it was demonstrated that a good hand segmentation mask can be sufficient for recognizing actions and activities involving the hands with high accuracy . For this reason, pixel-level segmentation has often been used as basis of higher-inference methods.\nRGB-D information can certainly improve and simplify the hand segmentation task. However, these methods are a minority with respect to the 2D counterpart, since no depth cameras have been developed for specific egocentric applications. 
With the recent miniaturization of depth sensors (e.g., iPhone$^{\\circledR}$ X and 11) the 3D segmentation is still an area worth exploring and expanding within the next few years.\nMany authors considered detection and segmentation as two steps of the same task. We preferred to split these two sub-areas given the large amount of work produced in the past few years. However, as it will be illustrated in the next section, many hand detection approaches, especially those using region-based CNNs, used the segmentation mask for generating region proposals. Perhaps, with the possibility to re-train powerful object detectors, this process has become inefficient and instead of having a “detection over segmentation”, it will be more convenient to have a “segmentation over detection”, unless the specific problem calls for a pixel-level segmentation of the entire frame. Following the great success of mask R-CNN , an interesting approach in this direction would be to address hand segmentation as an instance segmentation task, embedding bounding box detection and semantic segmentation of the hands in a single model.", "id": "0efbda79-3c9e-45d8-b941-5fc1abedb38f", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "c4df2472-ba86-4695-a0a8-db5f944cf9a0", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Localization" ], [ "subsection", "Hand segmentation" ], [ "subsubsection", "Remarks on hand segmentation" ] ], "subsections": [], "title": "Remarks on hand segmentation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec:handDetTrack}\nHand detection is the process of localizing the global position of the hands at frame level. This task is usually performed by fitting a bounding box around the area where the hand has been detected (see Figure \\ref{fig_localization}). 
Hand detection allows extracting coarser information than hand segmentation, although this lower detail is counterbalanced by higher robustness to noise. If the application does not require very detailed information, this is the most popular choice as basis for hand-based higher inference. In the literature we can distinguish two main approaches: hand detection as image classification task; and hand detection as object detection task. Furthermore, hand detection generalized over time is referred to as hand tracking.", "id": "67e9bc8d-2f82-4946-b81d-c801a0d448bc", "level": "subsection", "origin_cites_number": 0, "parent_id": "7a1e900d-f2db-4f24-95e0-36ae47590399", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Localization" ], [ "subsection", "Hand detection and tracking" ] ], "subsections": [ "6a7d379a-7305-47e1-a2f5-bd9bcac1cf00", "fbee5de2-daa6-42b7-b9f9-bd4fa7295323", "1496c192-b64d-463e-923a-aca0a7bc9006", "360c6161-e7cb-466c-9d60-3d53ed7e9440" ], "title": "Hand detection and tracking" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsubsec:detImClass}\nPixel-level segmentation of hand regions, if performed on the entire image, may be prone to high occurrence of false positives . In these cases, a pre-filtering step that prevents from processing frames without any hands is necessary. This approach allows determining whether an image contains hands and it is usually followed by a hand segmentation step responsible for locating the hand region .\nIn , the authors back-projected the frame using a histogram obtained from a mixture of Gaussian skin model , predicting the presence of hands within the image by thresholding the back-projected values. Betancourt et al. proposed an approach based on HOG features and SVM classifier to predict the presence of hands at frame-level, reducing the number of false positives. 
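Frame-level classifiers of this kind rest on a gradient-orientation descriptor. A stripped-down, HOG-style feature (one global orientation histogram, whereas real HOG adds local cells and block normalization, and would be fed to a trained linear SVM) can be sketched as follows; the ramp image and bin count are illustrative only:

```python
import numpy as np

def hog_like_descriptor(gray, n_bins=9):
    """One global histogram of gradient orientations weighted by gradient
    magnitude -- a stripped-down stand-in for HOG, which additionally
    uses local cells and block normalization."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A linear SVM would then score w . x + b on this descriptor; here we
# only check that a left-to-right intensity ramp (pure horizontal
# gradient) concentrates all the mass in the first orientation bin.
ramp = np.tile(np.arange(8.0) * 30.0, (8, 1))
feat = hog_like_descriptor(ramp)
```

Because the descriptor summarizes edge structure rather than color, it remains informative under the illumination changes typical of egocentric video.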
However, this frame-by-frame filtering increased the risk of removing frames with barely visible hands, thus increasing false negatives . To solve this issue, the authors proposed a dynamic Bayesian network (DBN) to smooth the classification results of the SVM and improve the prediction performance . Zhao et al. detected the presence of hands within each frame by exploiting the typical interaction cycle of the hands (i.e., preparatory phase - interaction - hands out of the frame). Based on this observation, they defined an ego-saliency metric related to the probability of having hands within a frame. This metric was derived from the optical flow map calculated using and was composed of two terms: a spatial cue, which gives more weight to motion within the lower part of the image; and a temporal cue, which takes into account whether the motion is increasing or decreasing between adjacent frames.
\label{subsubsec:detObjDet}
Hand detection performed within an object localization framework presents notable challenges. Given the flexibility, the continuous variation of poses, and the high number of degrees of freedom, the hand appearance is highly variable, and classical object detection algorithms (e.g., Haar-like features with AdaBoost classification) may work only in constrained situations, such as the detection of hands in a specific pose .
For these reasons, and thanks to the availability of large annotated datasets with bounding boxes, this is the area that most benefited from the advent of deep learning.
\textbf{Region-based approaches}. Many authors proposed region-based CNNs to detect the hands, exploiting segmentation approaches summarized in Section \ref{subsec:handSeg} to generate region proposals. Bambach et al. proposed a probabilistic approach for region proposal generation that combined spatial biases (e.g., reasoning on the position and shape of the hands from training data) and appearance models (e.g., non-parametric modeling of skin color in the YUV color space). To guarantee high coverage, they generated 2,500 regions for each frame that were classified using CaffeNet . Afterwards, they obtained the hand segmentation mask within the bounding box by applying GrabCut . Zhu et al. used a structured random forest to propose pixel-level hand probability maps. These proposals were passed to a multitask CNN to locate the hand bounding box, the shape of the hand within the bounding box, and the position of wrist and palm. In , the authors generated region proposals by segmenting skin regions with and determining if the set of segmented blobs corresponds to one or two arms. This estimation was performed by thresholding the fitting error of a straight line. K-means clustering, with \textit{k} = 2 if two arms are detected, was applied to split the blobs into two separate structures. The hand proposals were selected as the top part of a rectangular bounding box fitted to the arm regions and passed to CaffeNet for the final prediction. To generate hand region proposals, Cruz et al. used a deformable part model (DPM) to make the approach robust to different gestures. DPM learns the hand shape by considering the whole structure and its parts (i.e., the fingers) using HOG features. CaffeNet was used for classifying the proposals. Faster R-CNN was used in . In particular, Likitlersuang et al.
fine-tuned the network on videos from individuals with cSCI performing ADLs. False positives were removed based on the arm angle information computed by applying a Haar-like feature rotated 360 degrees around the bounding box centroid. The resulting histogram was classified with a random forest to determine whether the bounding box actually included a hand. Furthermore, they combined color and edge segmentation to re-center the bounding box, in order to promote maximum coverage of the hands while excluding parts of the forearm. \n\\textbf{Regression-based approaches} were also used for detecting the hands. Mueller et al. proposed a depth-based approach for hand detection, implemented using the Intel$^{\\circledR}$ RealSense\\textsuperscript{TM} SR300 camera. A Hand Localization Network (HALNet -- architecture derived from ResNet50 and trained on synthesized data) was used to regress the position of the center of the hand. The ROI was then cropped around this point based on its distance from the camera (i.e., the higher the depth, the smaller the bounding box). 
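The depth-dependent cropping just described can be sketched as follows; the calibration constant `size_at_1m` is an assumption for illustration, not a value from the paper:

```python
def crop_box(cx, cy, depth_m, size_at_1m=220.0, min_size=40):
    """Square ROI centred on (cx, cy): under a pinhole camera model the
    hand's apparent size shrinks inversely with its distance (in meters)
    from the camera, so the crop is scaled by 1 / depth."""
    size = max(min_size, int(round(size_at_1m / max(depth_m, 1e-3))))
    half = size // 2
    return (cx - half, cy - half, cx + half, cy + half)
```

The resulting ROI can then be passed to the joint regression stage, regardless of how far the hand is from the sensor.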
Recently, the \\textit{You Only Look Once} (YOLO) detector was applied for localizing hands in FPV , demonstrating better trade-off between computational cost and localization accuracy than Faster R-CNN and single-shot detector (SSD) .", "id": "fbee5de2-daa6-42b7-b9f9-bd4fa7295323", "level": "subsubsection", "origin_cites_number": 19, "parent_id": "67e9bc8d-2f82-4946-b81d-c801a0d448bc", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Localization" ], [ "subsection", "Hand detection and tracking" ], [ "subsubsection", "Hand detection as object detection" ] ], "subsections": [], "title": "Hand detection as object detection" }, { "cite_extract_rate": 0.30000000000000004, "cites": [ 4265, 836, 4266 ], "content": "\\label{subsubsec:detTrack} \nHand tracking allows estimating the position of the hands across multiple frames, reconstructing their trajectories in time. Theoretically, every hand detection and segmentation approach seen above, with the exception of the binary classification algorithms of section \\ref{subsubsec:detImClass}, can be used as tracker as well, by performing a frame-by-frame detection. This is the most widely used choice for tracking the hand position over time. However, some authors tried to combine the localization results with temporal models to predict the future hand positions. This strategy has several advantages, such as decreasing the computational cost by avoiding to run the hand detection every frame , disambiguate overlapping hands by exploiting their previous locations , and refining the hand location .\nLee et al. studied the child-parent social interaction from the child’s POV, by using a graphical model to localize the body parts (i.e., hands of child and parent, head of the parent). 
The model was composed of inter-frame links to enforce temporal smoothness of the hand positions over time, shift links to model the global shifts in the field of view caused by the camera motion, and intra-frame constraints based on the spatial configuration of the body parts. Skin color segmentation in the YUV color space was exploited to locate the hands and define intra-frame constraints on their position. This formulation forced the body parts to remain in the neighborhood of the same position between two consecutive frames, while allowing for large displacement due to global motion (i.e., caused by head movements) if this displacement is consistent with all parts. Liu et al. demonstrated that the hand detection is more accurate in the central part of the image due to a center bias (i.e., higher number of training examples with hands in the center of the frame). To correct this bias and obtain homogeneous detection accuracy in the whole frame, they proposed an attention-based tracker (AHT). For each frame, they estimated the target location of the hand by exploiting the result at the previous frame. Then, the estimated hand region was translated to the image center, where a CNN fine-tuned on frames with centralized hands was applied. After segmenting the hand regions using , Cai et al. used the temporal tracking method to discriminate them in case of overlap. \nRegression-based CNNs in conjunction with object tracking algorithms were used in . Kapidis et al. fine-tuned YOLOv3 on multiple datasets to perform the hand detection, discriminating the right and left hand trajectories over time using the simple online real-time tracking (SORT) . For each detected bounding box, this algorithm allowed predicting its next position, also assigning it to existing tracks or to new ones. Visée et al. combined hand detection and tracking to design an approach for fast and reliable hand localization in FPV. 
Motivated by the slow detection performance of YOLOv2 without a GPU, they proposed to combine YOLOv2 with the Kernelized Correlation Filter (KCF) as a trade-off between speed and accuracy. The authors used the detector to automatically initialize and reset the tracker in case of failure or after a pre-defined number of frames.
\label{subsubsec:detRemarks} 
Hand detection and segmentation are two closely related tasks that can be combined. If hand detection is performed using a region-based approach (e.g., Faster R-CNN), hand segmentation can be seen as the pre-processing step of the localization pipeline, whereas in the case of regression-based CNNs (e.g., YOLO) hand segmentation may follow the bounding box detection. The higher performance of regression-based methods with respect to region-based CNNs makes the latter approach (segmentation after detection) more appealing in view of optimizing the hand localization pipeline. If there is no need to segment the hands at pixel level, the segmentation can simply be skipped, whereas in problems where detailed hand silhouettes are needed, hand segmentation can be applied only within the detected ROI, avoiding unnecessary computation.
The combination of detection and tracking algorithms may help speed up localization, with the possibility of translating these approaches into real-world applications where low-resource hardware is the only available option .
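A minimal sketch of such a detector-plus-tracker loop (in the spirit of the YOLOv2 + KCF combination above), assuming hypothetical `detect` (slow, reliable) and `track` (fast, may fail) callables supplied by the caller:

```python
def detect_and_track(frames, detect, track, reset_every=30):
    """Run the cheap tracker frame by frame, falling back to the expensive
    detector when tracking fails or after `reset_every` tracked frames."""
    boxes, box, age = [], None, 0
    for frame in frames:
        if box is not None and age < reset_every:
            box = track(frame, box)   # tracker returns None on failure
            age += 1
        else:
            box = None                # force a periodic re-detection
        if box is None:
            box = detect(frame)       # (re-)initialize from the detector
            age = 0
        boxes.append(box)
    return boxes
```

With a detector that runs only every `reset_every` frames, the per-frame cost is dominated by the tracker, which is the point of the hybrid scheme.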
Moreover, as we will show in Section \ref{sec:interpretation}, hand tracking is an important step for the characterization and recognition of dynamic hand gestures .
\label{subsec:handIdent} 
Hand identification is the process of disambiguating the left and right hands of the camera wearer, as well as the hands of other persons in the scene. The egocentric POV has intrinsic advantages that allow discriminating the hands by using simple spatial and geometrical constraints . Usually, the user’s hands appear in the lower part of the image, with the right hand appearing to the right of the user’s left hand, and vice versa. By contrast, other people’s hands tend to appear in the upper part of the frame . The orientation of the arm regions was used in to distinguish the user’s left hand from the right one. To estimate the angle, the authors rotated a Haar-like feature around the segmented hand region, making this approach robust to the presence of sleeves and different skin colors, since it did not require any calibration . To identify the hands, they split the frame into four quadrants. The quadrant with the highest sum of the Haar-like feature vector determined the hand type: “user’s right” if the right lower quadrant; “user’s left” if the left lower quadrant; “other hands” if the upper quadrants . The angle of the forearm/hand regions was also used by Betancourt et al. .
The authors fitted an ellipse around the segmented region, calculating the angle between the arm and the lower frame border and the normalized distance of the ellipse center from the left border. The final left/right prediction was the result of a likelihood ratio test between two Maxwell distributions.
Although simple and effective, spatial and geometric constraints may fail in the case of overlapping hands. In this case, the temporal information helps disambiguate the hands . Cai et al. were interested in studying the grasp of the right hand. After segmenting the hand regions , they implemented the temporal tracking method proposed in to handle the case of overlapping hands, thus tracking the right hand. Kapidis et al. used the SORT tracking algorithm . This approach combines a Kalman filter, to predict the future position of the hand, with the Hungarian algorithm, to assign the next detection to existing tracks (i.e., left/right) or to new ones.
With the availability of powerful and accurate CNN-based detectors, hand identification as a separate processing step has become uncommon, being incorporated directly within hand detection (see Section \ref{subsubsec:detObjDet}) . To this end, both region-based (e.g., Faster R-CNN) and regression-based methods (e.g., YOLO and SSD) have been used.
These models were trained or fine-tuned to recognize two or more classes of hands, predicting the bounding box coordinates along with the corresponding labels (i.e., left, right, and other hands) .
\label{subsec:handPose} 
Hand pose estimation consists in the localization of the hand parts (e.g., the hand joints) to reconstruct the articulated hand pose from the images (see Figure \ref{fig_localization}). The possibility to obtain the position of fingers, palm, and wrist simplifies higher inference tasks such as grasp analysis and hand gesture recognition, since the dimensionality of the problem is reduced while keeping high-detail information. An important difficulty in hand pose estimation lies in object occlusions and self-occlusions that make it hard to localize hidden joints/parts of the hand. Some authors proposed the use of depth cameras in conjunction with sensor-based techniques to train hand pose estimators more robust to self-occlusions . However, as discussed above, the use of RGB-D imaging techniques might not be easily translated to FPV. Thus, several attempts have also been made to estimate the hand pose using only color images . In this section, we summarize the previous work, distinguishing between hand pose estimation approaches with depth sensors and hand pose estimation using monocular color images.
Moreover, we summarize approaches for fingertip detection, which can be seen as an intermediate step between hand detection and hand pose estimation.
\label{subsubsec:pose3D}
One of the advantages of using depth information for estimating the hand pose is that it is easier to synthesize depth maps that closely resemble the ones acquired by real sensors than it is to synthesize realistic color images . In , the authors tackled hand pose estimation as a multiclass classification problem by using a hierarchical cascade architecture. The classifier was trained on synthesized depth maps by using HOG features and tested on depth maps obtained with a ToF sensor. Instead of estimating the joint coordinates independently, they predicted the hand pose as a whole, in order to make the system robust to self-occlusions. Similarly, in , the authors predicted the upper arm and hand poses simultaneously by using a multiclass linear SVM for recognizing K poses from depth data. However, instead of classifying scanning windows on the depth maps, they classified the whole egocentric work-space, defined as the 3D volume seen from the egocentric POV. Mueller et al.
proposed a CNN architecture (Joint Regression Net -- JORNet) to regress the 3D locations of the hand joints within the cropped colored depth maps captured with a structured light sensor (Intel$^{\\circledR}$ RealSense\\textsuperscript{TM} SR300). Afterwards, a kinematic skeleton was fitted to the regressed joints, in order to refine the hand pose. Yamazaki et al. estimated the hand pose from hand point clouds captured with the Kinect v2 sensor. The authors built a dataset by collecting pairs of hand point clouds and ground truth joint positions obtained with a motion capture system. The pose estimation was performed by aligning the test point cloud to the training examples and predicting its pose as the one that minimizes the alignment error. The sample consensus initial alignment and iterative closest point algorithms were used for aligning the point clouds. Garcia-Hernando et al. evaluated a CNN-based hand pose estimator for regressing the 3D hand joints from RGB-D images recorded with the Intel$^{\\circledR}$ RealSense\\textsuperscript{TM} SR300 camera. 
The authors demonstrated that state-of-the-art hand pose estimation performance can be reached by training the algorithms on datasets that include hand-object interactions, in order to improve their robustness to self-occlusions or hand-object occlusions.
\label{subsubsec:poseColor}
In general, hand pose estimation from monocular color images allows locating the parts of the hands either in the form of 2D joints or semantic sub-regions (e.g., fingers, palm, etc.). This estimation is performed within previously detected ROIs, obtained by either a hand detection or segmentation algorithm. Liang et al. used a conditional regression forest (CRF) to estimate the hand pose from hand binary masks. Specifically, they trained a set of pose estimators separately, conditioned on different distances from the camera, since the hand appearance and size can change dramatically with the distance from the camera. Thus, they synthesized a dataset in which the images were sampled at discretized distance intervals. The authors also proposed an intermediate step for improving the joint localization, by segmenting the binary silhouette into twelve semantic parts corresponding to different hand regions. The semantic part segmentation was performed with a random forest for pixel-level classification exploiting binary context descriptors. Zhu et al.
built a structured forest to segment the hand region into four semantic sub-regions: thumb, fingers, palm, and forearm. This semantic part segmentation was performed by extending the structured regression forest framework already used for hand segmentation (as discussed in Section \ref{subsec:handSeg}) to a multiclass problem .
Other studies adapted CNN architectures developed for human pose estimation (e.g., OpenPose ) for solving the hand pose estimation problem and localizing 21 hand joints. Tekin et al. used a fully convolutional network (FCN) architecture to simultaneously estimate the 3D hand and object pose from RGB images. For each frame, the FCN produced a 3D discretized grid. The 3D location of the hand joints in the camera coordinate system was then estimated by combining the predicted location within the 3D grid and the camera intrinsic matrix.
\label{subsubsec:poseFingertip}
Fingertip detection can be seen as an intermediate step between hand detection and hand pose estimation. Unlike pose estimation, only the fingertips of one or multiple fingers are detected. These key-points alone do not allow reconstructing the articulated hand pose, but can be used as input to HCI/HRI systems , as will be discussed in Section \ref{sec:application}.
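Several fingertip (and, more generally, keypoint) localization methods predict per-point likelihood maps rather than coordinates directly; extracting the keypoint locations from such multi-channel heatmaps then reduces to a per-channel argmax, sketched here:

```python
import numpy as np

def heatmap_peaks(heatmaps):
    """heatmaps: (K, H, W) array of per-keypoint likelihood maps.
    Returns one (row, col) peak location per channel."""
    return [tuple(np.unravel_index(np.argmax(h), h.shape)) for h in heatmaps]
```

In practice the peak value itself can also serve as a confidence score for the corresponding fingertip.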
If the objective is to estimate the joints of a single finger, the most common solution is to regress the coordinates of these points (e.g., the tip and knuckle of the index finger) from a previously detected hand ROI. This approach has been exploited in . The cropped images, after being resized, were passed to a CNN to regress the location of the key-points . However, since the fingertip often lies at the border of the hand bounding box, the hand detection plays a significant role, and inaccurate detections greatly affect the fingertip localization result . Wu et al. extended the fingertip detection problem to the localization of the five fingertips of a hand. They proposed a heatmap-based FCN that, given the detected hand area, produced a 5-channel image containing the estimated likelihood of each fingertip at each pixel location. The maximum of each channel was used to predict the position of the fingertips.
\label{subsubsec:poseRemarks}
Among the hand localization tasks, hand pose estimation allows obtaining high-detail information with high semantic content at the same time (see Figure \ref{fig_framework}). This task, if performed correctly, can greatly simplify higher inference steps (e.g., hand gesture recognition and grasp analysis), but may be less robust to partial hand occlusions.
Compared to other localization tasks, hand pose estimation presents a higher proportion of approaches that use depth sensors.
This choice has several advantages: 1) the possibility to use motion capture methods for automatically obtaining the ground truth joint positions ; 2) the availability of multiple streams (i.e., color and depth) that can be combined to refine the estimations ; and 3) the possibility to synthesize large datasets of realistic depth maps . In the past few years, human pose estimation approaches have been successfully adapted to the egocentric POV, in order to estimate the hand and arm pose from monocular color images . This opens new possibilities to streamline and improve the performance of localization and hand-based higher inference tasks, such as grasp analysis. To further facilitate the adaptation of existing pose estimation approaches, large annotated datasets with hand joint information are needed. To this end, a combination of 2D and 3D information may be beneficial, in order to get accurate and extensive ground truth annotations in 3D that will allow solving the occlusion problems even when using color images alone.", "id": "4a2e01d3-ec0f-4262-baee-f1ede9a05c2e", "level": "subsubsection", "origin_cites_number": 10, "parent_id": "196b63d0-2f44-4a3a-8cdf-0d660c3f4a77", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Localization" ], [ "subsection", "Hand pose estimation and fingertip detection" ], [ "subsubsection", "Remarks on hand pose estimation" ] ], "subsections": [], "title": "Remarks on hand pose estimation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:interpretation}\nAfter the hands have been localized within the images, higher-level inference can be conducted in the ROIs. This processing is usually devoted to the interpretation of gestures and actions of the hands that, in turn, can be used as cues for hand-based applications such as HCI and HRI, as will be discussed in Section \\ref{sec:application}. 
Based on the literature published so far, hand-based interpretation approaches in FPV can be divided into hand grasp analysis, hand gesture recognition, action/interaction recognition, and activity recognition (see Figures \\ref{fig_framework} and \\ref{fig_interpret}).\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.99\\linewidth]{HandInterpretation.png}\n\\end{center}\n\\caption{Diagram of the hand interpretation areas in egocentric vision. Grasp analysis and gesture recognition focus directly on describing the hand. In action/interaction and activity recognition, the hand is instrumental in describing the user's behaviour.}\n\\label{fig_interpret}\n\\end{figure}", "id": "2e67d397-7219-4ffa-88b7-5a8046d9d153", "level": "section", "origin_cites_number": 0, "parent_id": "08af7ca6-1a67-48c9-9735-3113b8e18727", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Interpretation" ] ], "subsections": [ "5ddb3ce5-826a-4ba9-a2ca-c25abfceb9c5", "d10c8f97-0c05-48b1-b5bf-aa095e551ca2" ], "title": "Interpretation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec:graspGesture}\nAccording to Feix et al. , “A grasp is every static hand posture with which an object can be held securely with one hand, irrespective of the hand orientation”. The recognition of the grasp types allows determining the different ways with which humans use their hands to interact with objects . The common grasp modes can be used to describe hand-object manipulations, reducing the complexity of the problem, since the set of possible grasps is typically smaller than the set of possible hand shapes . Moreover, the identification of the most recurrent grasp types has important applications in robotics, biomechanics, upper limb rehabilitation, and HCI. Thus, several taxonomies were proposed in the past decades . For a comprehensive comparison among these taxonomies, the reader is referred to . 
The analysis of hand grasps conducted via manual annotations is a lengthy and costly process. Thus, the intrinsic characteristics of egocentric vision allowed developing automated methods to study and recognize different grasp types, saving a huge amount of manual labor. Although in most cases hand grasp analysis has been addressed in a supervised manner (i.e., grasp recognition -- Section \ref{subsubsec:graspRec}) , some authors proposed to tackle this problem using clustering approaches, in order to discover dominant modes of hand-object interaction and identify high-level relationships among clusters (i.e., grasp clustering and abstraction -- Section \ref{subsubsec:graspCluster}) .
Similar to grasp analysis, hand gesture recognition aims at recognizing the semantics of the hand posture, and it is usually performed as input to HCI/HRI systems. However, two main differences exist between these two topics: 1) grasp analysis looks at the hand posture during hand-object manipulations, whereas hand gesture recognition is usually performed on hands free of any manipulations; 2) grasp analysis aims at recognizing only static hand postures , whereas hand gesture recognition can also be generalized to dynamic gestures.
According to the literature, hand gestures can be static or dynamic : static hand gesture recognition (see Section \\ref{subsubsec:gestureStatic}) aims at recognizing gestures that do not depend on the motion of the hands, thus relying on appearance and hand posture information only ; dynamic hand gesture recognition (see Section \\ref{subsubsec:gestureDynamic}) is performed using temporal information (e.g., hand tracking), in order to capture the motion cues that allow generating specific gestures .", "id": "5ddb3ce5-826a-4ba9-a2ca-c25abfceb9c5", "level": "subsection", "origin_cites_number": 26, "parent_id": "2e67d397-7219-4ffa-88b7-5a8046d9d153", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Interpretation" ], [ "subsection", "Hand grasp analysis and gesture recognition" ] ], "subsections": [ "b9165edf-9ff3-408d-bef3-b6156eb2abdd", "1d9e4188-817c-44e1-9353-32f1bd20d4c3", "2c776eb5-3686-4175-93fe-432e58c0226f", "b96e6348-f2af-4c12-9004-74657a7895a3", "96ab670a-61f4-41b8-bcfa-644c357a4c24" ], "title": "Hand grasp analysis and gesture recognition" }, { "cite_extract_rate": 0.08333333333333301, "cites": [ 514 ], "content": "\\label{subsubsec:graspRec}\nSupervised approaches for grasp recognition are based on the extraction of features from previously segmented hand regions and their multiclass classification following one of the taxonomies proposed in the literature . \nCai et al. used HOG features to represent the shape of the hand and a combination of HOG and SIFT to capture the object context during the manipulation. These features were classified with a multi-class SVM using a subset of grasp types from Feix’s taxonomy . The authors extended their approach in by introducing CNN-based features extracted from the middle layers of and features derived from the dense hand trajectory (DHT) such as the displacement, gradient histograms, histogram of optical flow, and motion boundary histograms. 
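The recurring pattern in these works, hand-crafted or learned features followed by a multi-class classifier, can be sketched as follows; a nearest-centroid rule stands in for the multi-class SVM used in the cited papers, and the feature vectors are assumed to be precomputed:

```python
import numpy as np

def fit_centroids(features, labels):
    """Mean feature vector per grasp class (the 'training' step)."""
    feats = np.asarray(features, dtype=float)
    labs = np.asarray(labels)
    return {c: feats[labs == c].mean(axis=0) for c in sorted(set(labels))}

def predict_grasp(feature, centroids):
    """Assign a hand descriptor to the class with the closest centroid."""
    feature = np.asarray(feature, dtype=float)
    return min(centroids, key=lambda c: np.linalg.norm(feature - centroids[c]))
```

Any of the descriptors above (HOG, HOG + SIFT, CNN activations, or DHT statistics) could in principle be plugged in as the feature vectors.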
The superior performance of CNN- and DHT-based features and their robustness across different tasks and users suggested that high-level feature representations and motion and appearance information in the space-time volume may be important cues for discriminating different hand configurations. In , the authors used a graph-based approach to discriminate 8 grasp types. Specifically, the binary hand mask was used to produce a graph structure of the hand with an instantaneous topological map neural network. The eigenvalues of the graph Laplacian were used as features to represent the hand configurations, which were recognized using an SVM. 
The use of depth sensors was explored by Rogez et al. . The authors recognized 71 grasp types using RGB-D data, by training a multi-class SVM with deep-learned features extracted from both real and synthetic data. Moreover, the grasp recognition results were refined by returning the closest synthetic training example, namely the one that minimized the distance to the depth map of the detected hand region.
\label{subsubsec:graspCluster}
The first attempt to discover hand grasps in FPV was . HOG features were extracted from previously segmented hand regions and grouped by means of a two-stage clustering approach. First, a set of candidate cluster centers was generated through the fast determinantal point process (DPP) algorithm .
This step allowed generating a wide diversity of clusters to cover many possible hand configurations. Secondly, each segmented region was assigned to the nearest cluster center. The use of the DPP algorithm was proven to outperform other clustering approaches such as k-means and to be more appropriate in situations, like grasp analysis, where certain clusters are more recurrent than other ones. A hierarchical structure of the grasp types was learned using the same DPP-based clustering approach . A hierarchical clustering approach was also used in to find the relationships between different hand configurations based on a similarity measure between pairs of grasp types. Similarly, in , the authors used a correlation index to measure the visual similarity between grasp types: grasp types with high correlation were clustered at the lower nodes, whereas low-correlated types were clustered higher in the hierarchy. The above approaches were used to build tree-like structures of the grasp types. These structures can be exploited to define new taxonomies depending on the trade-off between detail and robustness of grasp classification, as well as to discover new grasp types not included in previous categorizations .", "id": "1d9e4188-817c-44e1-9353-32f1bd20d4c3", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "5ddb3ce5-826a-4ba9-a2ca-c25abfceb9c5", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Interpretation" ], [ "subsection", "Hand grasp analysis and gesture recognition" ], [ "subsubsection", "Hand grasp clustering and abstraction" ] ], "subsections": [], "title": "Hand grasp clustering and abstraction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsubsec:gestureStatic}\nThe recognition of static hand gestures is usually performed in a supervised manner, similarly to the approaches presented in Section \\ref{subsubsec:graspRec} for hand grasp recognition. 
A common strategy is to exploit features extracted from previously segmented hand regions, classifying them into multiple gestures often using SVM classifiers . \nSerra et al. classified the binary segmentation masks into multiple hand configurations by using an ensemble of exemplar-SVMs . This approach was proven to be robust in case of unbalanced classes, like hand gesture recognition applications where most of the frames contain negative examples. Contour features were used in to recognize 14 gestures. The authors described the silhouette of the hand shape using time curvature analysis and fed an SVM classifier with the extracted features. The use of CNNs has also been investigated for the recognition of static hand gestures . Ji et al. used a hybrid CNN-SVM approach, where the CNN was implemented as feature extractor and the SVM as gesture recognizer. In , the authors proposed a CNN architecture to directly classify the binary hand masks into multiple gestures.\nDepth information was used in . In the authors used depth context descriptors and random forest classification, whereas Jang et al. implemented static-dynamic voxel features to capture the amount of point clouds within a voxel, in order to describe the static posture of the hands and fingers. Moreover, depth-based gesture recognition was demonstrated to be more discriminative than color-based recognition . 
However, in addition to the drawbacks of wearable depth sensors already discussed in the previous sections, the performance were significantly lower in outdoor environments due to the deterioration of the depth map .", "id": "2c776eb5-3686-4175-93fe-432e58c0226f", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "5ddb3ce5-826a-4ba9-a2ca-c25abfceb9c5", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Interpretation" ], [ "subsection", "Hand grasp analysis and gesture recognition" ], [ "subsubsection", "Static hand gesture recognition" ] ], "subsections": [], "title": "Static hand gesture recognition" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsubsec:gestureDynamic}\nOne of the most common choices for dynamic hand gesture recognition is to use optical flow descriptors from the segmented hand regions, in order to recognize the motion patterns of the gestures to be classified . \nBaraldi et al. developed an egocentric hand gesture classification system able to recognize the user’s interactions with artworks in a museum. After removing camera motion, they computed and tracked the feature points at different spatial scales within the hand ROI and extracted multiple descriptors from the obtained spatio-temporal volume (e.g., HOG, HOF, and MBH). Linear SVM was used for recognizing multiple gestures from the above descriptors, using Bag of Words (BoW) and power normalization to avoid sparsity of the features. In the flow vectors were calculated over the entire duration of a gesture and, based on the resultant direction of the flow vectors, different swipe movements (e.g., left, right, up, and down) were classified using fixed thresholds on the movement orientation.\nOther approaches recognized dynamic gestures as generalization of the static gesture recognition problem . 
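The simpler threshold-based scheme mentioned above, which classifies a swipe from the resultant direction of the flow vectors, can be sketched as follows; the orientation thresholds and the synthetic flow field are illustrative.

```python
import numpy as np

def classify_swipe(flow):
    """Classify a swipe from the resultant direction of the flow vectors,
    using fixed thresholds on the movement orientation (the exact
    thresholds here are illustrative)."""
    vx, vy = flow[..., 0].mean(), flow[..., 1].mean()
    ang = np.degrees(np.arctan2(vy, vx)) % 360
    if 45 <= ang < 135:
        return "down"       # image y axis grows downwards
    if 135 <= ang < 225:
        return "left"
    if 225 <= ang < 315:
        return "up"
    return "right"

rng = np.random.default_rng(2)
flow = rng.normal(0, 0.1, (48, 64, 2))   # noisy per-pixel flow (vx, vy)
flow[..., 0] += 3.0                      # dominant motion toward +x
print(classify_swipe(flow))              # -> right
```

In practice the flow would be accumulated over the entire duration of the gesture, as done in the cited work, rather than taken from a single synthetic field.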
In , the authors proposed a hierarchical approach for estimating hand gestures using a static-dynamic forest to produce hierarchical predictions on the hand gesture type. Static gesture recognition was performed at the top level of the hierarchy, in order to select a virtual object corresponding to the detected hand configuration (e.g., holding a stylus pen). Afterwards, the recognition of dynamic gestures, conditioned to the previously detected static gesture, was performed (e.g., pressing or releasing the button on the pen). Zhang et al. compared engineered features and deep learning approaches (e.g., 2DCNN, 3DCNN, and recurrent models), demonstrating that 3DCNN are more suitable for dynamic hand gesture recognition and the combination of color and depth information can produce better results than the two image modalities alone.", "id": "b96e6348-f2af-4c12-9004-74657a7895a3", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "5ddb3ce5-826a-4ba9-a2ca-c25abfceb9c5", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Interpretation" ], [ "subsection", "Hand grasp analysis and gesture recognition" ], [ "subsubsection", "Dynamic hand gesture recognition" ] ], "subsections": [], "title": "Dynamic hand gesture recognition" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsubsec:graspGestureRemarks}\nMany similarities can be found between grasp recognition and hand gesture recognition. As mentioned above, the main difference is the context in which the two problems are addressed. Grasp recognition is performed during hand-object manipulations, whereas hand gesture recognition is performed without the manipulation of physical objects. This difference links these two sub-areas to some of the higher levels and FPV applications. 
In fact, hand gesture recognition approaches have mainly been used for AR/VR applications , whereas grasp analysis can be exploited for action/interaction recognition and activity recognition . In particular, the contextual relationship between grasp types and object attributes, such as rigidity and shape, has motivated authors to exploit object cues for improving the grasp recognition performance.\nHand grasp analysis and gesture recognition are the only interpretation sub-areas where the analysis of the hands is still the main target of the approaches. In fact, higher in the semantic content dimension, sub-areas like action recognition, interaction detection, and activity recognition may use the hand information in combination with other cues (e.g., object recognition) to perform higher level inference. It should be noted though, that not all the higher-level interpretation approaches utilized hand-based processing in FPV. Thus, in the following sections, we will discuss only those methods that explicitly used the hand information for predicting actions and activities, omitting other papers and referring the authors to other surveys or research articles.", "id": "96ab670a-61f4-41b8-bcfa-644c357a4c24", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "5ddb3ce5-826a-4ba9-a2ca-c25abfceb9c5", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Interpretation" ], [ "subsection", "Hand grasp analysis and gesture recognition" ], [ "subsubsection", "Remarks on hand grasp analysis and gesture recognition" ] ], "subsections": [], "title": "Remarks on hand grasp analysis and gesture recognition" }, { "cite_extract_rate": 0.2, "cites": [ 4257 ], "content": "\\label{subsec:actionActivity}\nAccording to Tekin et al. , an action is a verb (e.g., “cut”), whereas an interaction is a verb-noun pair (e.g., “cut the bread”). Both definitions refer to short-term events that usually last a few seconds . 
By contrast, activities are longer temporal events (i.e., minutes or hours) with higher semantic content, typically composed of temporally-consistent actions and interactions (see Figure \\ref{fig_interpret}).\nIn this section, we summarize FPV approaches that relied on hand information to recognize actions, interactions, and activities from sequences of frames. Regarding the actions and interactions, two main types of approaches can be found in literature: those that used hands as the only cue for the prediction (see Section \\ref{subsubsec:actionHand}) and approaches that used a combination of object and hand cues (see Section \\ref{subsubsec:actionHandObj}). Although the second type of approaches might seem more suitable for interaction recognition (i.e., verb + noun prediction), some authors used them for predicting action verbs, exploiting the object information to prune the space of possible actions (i.e., removing unlikely verbs for a given object) . Likewise, other authors tried to use only hand cues to recognize interactions , in order to produce robust predictions without relying on object features or object recognition algorithms. 
Either way, the boundary between action and interaction recognition is not well defined and often depends on the nature of the dataset on which a particular approach has been tested.", "id": "d10c8f97-0c05-48b1-b5bf-aa095e551ca2", "level": "subsection", "origin_cites_number": 5, "parent_id": "2e67d397-7219-4ffa-88b7-5a8046d9d153", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Interpretation" ], [ "subsection", "Action/interaction and activity recognition" ] ], "subsections": [ "3f4677e6-00f2-4237-bdad-78d04fddd478", "1c060e85-d65e-4167-a296-6b3307c1133c", "7a05e184-e91d-498c-b40f-afac9c314bf3", "266bd0ca-9a83-4f4a-b8a5-7d42f9e67711" ], "title": "Action/interaction and activity recognition" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsubsec:actionHand}\nThese approaches inferred the camera wearer’s actions exploiting the information provided by hand localization methods. The hypothesis is that actions and interactions can be recognized using only hand cues, for instance features related to the posture and motion of the hands. Existing studies can be divided into feature-based approaches and deep learning-based approaches .\nFeature-based approaches combined motion and shape features of the hands to represent two complementary aspects of the action: movements of hand’s parts and grasp types. This representation allowed discriminating actions with similar hand motion, but different hand posture. Typical choices of motion features were dense trajectories , whereas the hand shape was usually represented with HOG or shape descriptors on the segmented hand mask . All these features were then combined and used to recognize actions/interactions via SVM classifiers. Ishihara et al. used dense local motion features to track keypoints from which HOG, MBH, and HOF were extracted . Global hand shape was represented using HOG features within the segmented hand region. 
The authors used Fisher vectors and principal component analysis to encode features extracted from time windows of fixed duration, followed by multiclass linear SVM for the recognition. Dense trajectory features were also used by Kumar et al. . The authors proposed a feature sampling scheme that preserved dense trajectories closer to the hand centroid while removing trajectories from the background, which are likely caused by head motion. BoW representation was used and the recognition was performed using SVM with $\\chi^{2}$ kernel. Cai et al. combined hand shape, hand position, and hand motion features for recognizing user’s desktop actions (e.g., browse, note, read, type, and write). Histograms of the hand shape computed on the hand mask were used as shape features. Hand position was represented by the point within the hand region where a manipulation is most likely to happen (e.g., left tip of the right hand region). Motion descriptors relied on the computation of the large displacement optical flow (LDOF) between two consecutive frames. Spatio-temporal distribution of hand motion (i.e., discrete Fourier transform coefficients on the average LDOF extracted from hand sub-regions over consecutive frames) was demonstrated to outperform temporal and spatial distributions alone, suggesting that spatial and temporal information should be considered together when recognizing hand’s actions.\nThe combination of temporal and spatial information was also exploited in deep-learning approaches. This strategy was usually implemented by means of multi-stream architectures. Singh et al. proposed a CNN-based approach to recognize camera wearer’s actions using the following inputs: pixel-level hand segmentation mask; head motion -- as frame-to-frame homography using RANSAC on optical flow correspondences excluding the hand regions; and saliency map -- as the flow map obtained after applying the homography. 
This information was passed to a 2-stream architecture composed of a 2DCNN and a 3DCNN. The deep-learned features from both streams were combined and actions were predicted using SVM. Urabe et al. used the region around the hands to recognize cooking actions. Appearance and motion maps were obtained using the segmented hand mask passed to 2DCNN and 3DCNN, respectively. Afterwards, class-score fusion was performed by multiplying the output of both streams. The authors demonstrated that a multi-stream approach yielded better results than the two streams alone. Tang et al. used the hand information as auxiliary stream within an end-to-end multi-stream deep neural network (MDNN) that used RGB, optical flow and depth maps as input. The hand stream was composed of a CNN with the hand mask as input. Its output was combined to the MDNN via weighted fusion, in order to predict the action label. The addition of the hand stream improved the recognition performance.", "id": "3f4677e6-00f2-4237-bdad-78d04fddd478", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "d10c8f97-0c05-48b1-b5bf-aa095e551ca2", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Interpretation" ], [ "subsection", "Action/interaction and activity recognition" ], [ "subsubsection", "Action/interaction recognition using hand cues" ] ], "subsections": [], "title": "Action/interaction recognition using hand cues" }, { "cite_extract_rate": 0.461538461538461, "cites": [ 4265, 4257, 4255, 4270, 514, 4260 ], "content": "\\label{subsubsec:actionHandObj}\nMany authors demonstrated that the combination of object and hand cues can improve the recognition performance . This is quite intuitive, since during an interaction the grasp type and hand movements strictly depend on the characteristics of the object that is being manipulated (e.g., dimension, shapes, functionality) . 
Thus, grasp type or hand pose/shape along with object cues can be used to recognize the actions and interactions .
In , the authors predicted the attributes of the manipulated object (i.e., object shape and rigidity) and the type of grasp to recognize hand’s actions. They proposed a hierarchical 2-stage approach where the lower layer -- visual recognition -- classified the grasp type and the object attributes and passed this information to the upper layer -- action modeling -- responsible for the action classification via linear SVM. Coskun et al. implemented a recurrent neural network (RNN) to exploit the temporal dependencies of consecutive frames using a set of deep-learned features related to grasp, optical flow, object-object, and hand-object interactions, as well as the trajectories of the hands over the past few frames. Other authors used hand cues with lower semantic content than hand grasp, such as shape and pose. Fathi et al. extracted a set of object and hand descriptors (e.g., object and hand labels, optical flow, location, shape, and size) at super-pixel level and performed a 2-stage interaction recognition. First, they recognized actions using AdaBoost; second, they refined the object recognition in a probabilistic manner by exploiting the predicted verb label and object classification scores. Likitlersuang et al. detected the presence of interactions between the camera wearer’s hands and manipulated objects. This was accomplished by combining the hand shape, represented with HOG descriptors, with color and motion descriptors (e.g., color histogram and optical flow) for the hand, the background, and the object (i.e., regions around the hands). Random forest was used for classification. The articulated hand pose was used in . Garcia-Hernando et al. passed the hand and object key-points to an LSTM that predicted the interactions over the video frames.
This approach was extended in , where hand-object interactions were first modeled using a multi-layer perceptron and then used as input to the LSTM.
Other approaches, instead of explicitly using the hand information for predicting actions and interactions, exploited the results of hand localization algorithms to guide the feature extraction within a neighborhood of the manipulation region . This strategy was motivated by the fact that the most important cues (i.e., motion, object, etc.) during an action are likely to be found in proximity of the hands and manipulated object. Li et al. used a combination of local descriptors for motion and object cues in conjunction with a set of egocentric cues. The former were extracted from the dense trajectories to represent the motion of the action (i.e., shape of the trajectories, MBH, HoF) and the object appearance (e.g., HOG, LAB color histogram, and LBP along the trajectories). The latter were used to approximate the gaze information, by combining camera motion removal and hand segmentation, in order to focus the attention on the area where the manipulation is happening. Ma et al. used a multi-stream deep learning approach composed of an appearance stream to recognize the object and a motion stream to predict the action verb. The object recognition network predicted the object label using the hand mask and object ROI as input, whereas the action recognition network used the optical flow map to infer the verb. A fusion layer combined verb and object labels and predicted the interactions. Zhou et al. used the hand segmentation mask, object features extracted from middle layers of AlexNet , and optical flow to localize and recognize the active object using VGG-16 . Afterwards, object features were represented in a temporal pyramid manner and combined with motion characteristics extracted from improved dense trajectories, in order to recognize interactions using non-linear SVM.
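The late-fusion step shared by several of these two-stream approaches, combining a verb distribution from the motion stream with an object distribution from the appearance stream, can be sketched as follows; the logits and the fusion weight are synthetic and illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse(verb_logits, object_logits, w=0.5):
    """Late fusion of a motion stream (verb scores) and an appearance
    stream (object scores): the interaction is the argmax of a weighted
    combination over the verb-noun grid. The weight w is illustrative."""
    pv, po = softmax(verb_logits), softmax(object_logits)
    joint = w * pv[:, None] + (1 - w) * po[None, :]   # (n_verbs, n_objects)
    v, o = np.unravel_index(np.argmax(joint), joint.shape)
    return int(v), int(o)

verbs, objects = ["cut", "pour", "stir"], ["bread", "milk", "soup"]
v, o = fuse(np.array([0.1, 2.0, 0.2]), np.array([0.1, 2.5, 0.3]))
print(verbs[v], objects[o])   # -> pour milk
```

Real systems learn the fusion jointly with the streams (e.g., a fusion layer), but the weighted combination above conveys why the two modalities together can disambiguate interactions that either stream alone would confuse.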
\nAlthough the above approaches might differ for the type of features and algorithm used to predict actions and interactions, most of them demonstrated that the combination of object and hand cues can provide better recognition performance than single modality recognition .", "id": "1c060e85-d65e-4167-a296-6b3307c1133c", "level": "subsubsection", "origin_cites_number": 13, "parent_id": "d10c8f97-0c05-48b1-b5bf-aa095e551ca2", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Interpretation" ], [ "subsection", "Action/interaction and activity recognition" ], [ "subsubsection", "Action/interaction recognition combining hand and object cues" ] ], "subsections": [], "title": "Action/interaction recognition combining hand and object cues" }, { "cite_extract_rate": 0.25, "cites": [ 4270, 4261 ], "content": "\\label{subsubsec:activity}\nAs we climb the semantic content dimension in the proposed framework, the strong dependency on hand cues fades away. Other information comes into play and can be used in conjunction with the hands to predict the activities. This diversification becomes clear when we look at the review published by Nguyen et al. , which categorized egocentric activity recognition as: 1) combination of actions; 2) combination of active objects; 3) combination of active objects and locations; 4) combination of active objects and hand movements; and 5) combination of other information (e.g., gaze, motion, etc.). The description of all these approaches goes beyond the scope of this work, since we are interested in characterizing how hands can be used in activity recognition methods. 
For a more comprehensive description of activity recognition in FPV, the reader is referred to .
The boundary between the recognition of short and long temporal events (i.e., actions/interactions and activities, respectively) is not always well defined and, similar to action/interaction recognition, it may depend on the dataset used for training and testing a particular approach. In fact, some of the methods described in the previous subsections were also tested within an activity recognition framework .
Generally, we can identify two types of approaches: activity recognition based on actions and interactions and approaches that used hand localization results to directly predict the activities . 
Approaches that relied on actions and interactions learned a classifier for recognizing the activities using the detected actions or hand-object interactions as features for the classification. This can be performed by using the histogram of action frequency in the video sequence and its classification using AdaBoost . Nguyen et al. used Bag of Visual Words representation to model the interactions between hands and objects, since these cues play a key role in the recognition of activities. Dynamic time warping was then used to compare a new sequence of features with the key training features.
Other authors investigated how good the hand segmentation map is at predicting a small set of social activities, such as 4 interactions between two individuals. The authors used a CNN-based approach with the binary hand segmentation maps as input. The prediction was performed on a frame-by-frame basis and with temporal integration implemented through a voting strategy among consecutive frames, with the latter approach providing better results (up to 73\% of recognition accuracy) .
This result confirms what was already shown for actions and interactions, namely the temporal information becomes essential when performing higher-level inference, especially when modeling relatively long term events like activities. However, this approach was tested only in case of a small sample of social activities. To the best of our knowledge, no experiments using hand cues only were conducted for predicting other types of activities, such as ADLs.", "id": "7a05e184-e91d-498c-b40f-afac9c314bf3", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "d10c8f97-0c05-48b1-b5bf-aa095e551ca2", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Interpretation" ], [ "subsection", "Action/interaction and activity recognition" ], [ "subsubsection", "Activity recognition" ] ], "subsections": [], "title": "Activity recognition" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 4257, 8758 ], "content": "\\label{subsubsec:actionActivityRemarks}\nMany authors demonstrated that action/interaction recognition performance can be improved by combining different cues, such as hand, object, and motion information. This was proven regardless of the actual method. In fact, both feature-based and deep-learning based methods implemented this strategy by combining multiple features or using multi-stream deep-learning approaches. Recently, multi-task learning approaches have also been proposed for solving the action recognition problem , demonstrating that predicting hand coordinates as an auxiliary task leads to an improvement in verb recognition performance with respect to the single-task approach. 
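Such a multi-task objective can be sketched, in a heavily simplified form, as a weighted sum of a verb classification loss and a hand-coordinate regression loss; the weight and all values below are synthetic and illustrative.

```python
import numpy as np

def softmax_ce(logits, label):
    """Cross-entropy of a softmax over class logits for one sample."""
    z = logits - logits.max()
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

def multitask_loss(action_logits, action_label, pred_xy, true_xy, lam=0.1):
    """Joint objective in the spirit of multi-task action recognition:
    cross-entropy on the verb plus a regression penalty on the hand
    coordinates as an auxiliary task (the weight lam is illustrative)."""
    ce = softmax_ce(action_logits, action_label)
    reg = np.mean((pred_xy - true_xy) ** 2)
    return ce + lam * reg

loss = multitask_loss(np.array([2.0, 0.1, 0.3]), 0,
                      np.array([0.4, 0.6]), np.array([0.5, 0.5]))
print(round(float(loss), 4))
```

The auxiliary term acts as a regularizer that forces the shared representation to encode where the hands are, which is the intuition behind the reported improvement in verb recognition.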
In the near future, it will be interesting to compare multi-task and multi-stream architectures to understand whether the joint prediction of action labels and hand positions can actually provide state-of-the-art performance in one or both tasks.
Another important aspect on which one should focus when developing novel approaches for action/interaction recognition is the temporal information. This was exploited by using 3DCNN and RNNs or, in the case of feature-based approaches, by encoding it in the motion information. The same conclusion can be drawn for activity recognition where, considering the longer duration of the events, the temporal information becomes even more important .
Sometimes the literature is not consistent on the choice of the taxonomy to describe these sub-areas. Some of the approaches summarized above, even though not explicitly referred to as action/interaction recognition, actually recognized short actions or interactions. We preferred to be consistent with the definition proposed by Tekin et al. , as we believe that a consistent taxonomy may help authors compare different approaches and unify their efforts towards solving a specific problem. Moreover, the term “action” has often been used interchangeably with “activity”, which indicates a longer event with higher semantic content. The actions and interactions can rather be seen as the building blocks of the activities. This allowed some authors to exploit this mutual dependency, in order to infer activities in a hierarchical manner, using the methods described above .
The number of egocentric activity recognition approaches based on hand information is lower than the number of action and interaction recognition approaches. This difference is due to the fact that higher in the semantic content, authors have a wider choice of cues and features for recognizing a temporal event.
In particular, over the past few years, more and more end-to-end approaches for activity recognition have been proposed, similarly to video recognition .", "id": "266bd0ca-9a83-4f4a-b8a5-7d42f9e67711", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "d10c8f97-0c05-48b1-b5bf-aa095e551ca2", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Interpretation" ], [ "subsection", "Action/interaction and activity recognition" ], [ "subsubsection", "Remarks on action/interaction and activity recognition" ] ], "subsections": [], "title": "Remarks on action/interaction and activity recognition" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 4260 ], "content": "\\label{sec:application}\nThe hand-based approaches summarized so far can be implemented to design real-world FPV applications. Most of these applications relied on HCI and HRI and included AR and VR systems , robot control and learning , as well as healthcare applications .", "id": "5d6e752f-4696-44e4-a775-fe653f7e3245", "level": "section", "origin_cites_number": 6, "parent_id": "08af7ca6-1a67-48c9-9735-3113b8e18727", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Application" ] ], "subsections": [ "7d8362b5-05f9-4ef1-b97f-8843021f46fe", "64defbc7-5678-446e-93db-a323cb379d13", "76b93dc9-1efe-4a29-a256-f54fb128cf09" ], "title": "Application" }, { "cite_extract_rate": 0, "cites": [], "content": "Given the recent success of VR headsets and the surge of AR applications, many hand-based methods in FPV were used for AR and VR systems to design natural user interfaces. Most of these applications relied on hand localization -- in particular, hand detection, segmentation, and pose-estimation -- and gesture recognition algorithms, for example to manipulate virtual objects in an AR or VR scenario . 
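As a minimal illustration of turning a recognized gesture into an interface command, the sketch below fires a "select" event when hypothetical thumb and index fingertip positions come close together; the keypoint layout and the threshold are assumptions, not taken from any cited system.

```python
import numpy as np

def detect_pinch(thumb_tip, index_tip, hand_size, ratio=0.1):
    """Fire a 'select' command when thumb and index fingertips are close
    relative to the hand size. In a real system the keypoints would come
    from a pose estimator; the threshold ratio is illustrative."""
    d = np.linalg.norm(np.asarray(thumb_tip) - np.asarray(index_tip))
    return "select" if d < ratio * hand_size else None

# Simulated fingertip tracks: fingers approaching, then touching.
frames = [((0.2, 0.5), (0.8, 0.5)),
          ((0.4, 0.5), (0.6, 0.5)),
          ((0.49, 0.5), (0.51, 0.5))]
commands = [detect_pinch(t, i, hand_size=1.0) for t, i in frames]
print(commands)   # -> [None, None, 'select']
```

A production interface would debounce the event over several frames and normalize by the detected hand scale, but the mapping from localized keypoints to a discrete command is the essential step.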
Depth sensors were usually implemented to capture the scene, whereas head-worn displays allowed projecting the virtual object in the AR/VR scenario. The recognition of specific hand gestures allowed providing inputs and commands to the system, in order to produce a specific action (e.g., the selection of a virtual object by recognizing the clicking gesture). The use of depth sensors has usually been preferred to RGB cameras since the localization of hands and objects can be more robust to illumination changes. Some authors even implemented multiple depth sensors: one specific for short distances (i.e., up to 1 m) -- to capture more accurate hand information -- and a long-range depth camera to reproduce the correct proportions between the physical and virtual environment . To improve the hand localization robustness, other systems combined multiple hand localization approaches, such as hand pose estimation in conjunction with fingertip detection . This approach can be helpful when the objective is to localize the fingertips in situations with frequent self-occlusions. Other AR/VR applications relied on dynamic hand gestures (e.g., swipe movements) recorded with a smartphone camera and frugal VR devices (e.g., Google Cardboard), in order to enable interactions in the virtual environment . AR/VR applications were also implemented for tele-support and coexistence reality, in order to allow multiple users to collaborate remotely. Specific fields of application were remote co-surgery and expert’s tele-assistance and support .
Within the AR context, the use of hand-based information was also exploited for recognizing hand-written characters .
This application was performed in four steps: 1) hand localization through hand detection and tracking; 2) hand gesture recognition -- to recognize a specific hand posture that triggers the writing module; 3) fingertip detection -- to identify the point to track, whose trajectory defines the characters; and 4) character recognition, based on the trajectories of the detected fingertip .\nHand localization and gesture recognition approaches were also used for cultural heritage applications to develop systems for immersive museum and augmented touristic experiences . Users can experience an entertaining way of accessing the museum knowledge, for example by taking pictures and providing feedback to the artworks with simple hand gestures . Other authors proposed a smart glasses-based system that allowed users to access touristic information while visiting a city by using pointing gestures of the hand.", "id": "7d8362b5-05f9-4ef1-b97f-8843021f46fe", "level": "subsection", "origin_cites_number": 14, "parent_id": "5d6e752f-4696-44e4-a775-fe653f7e3245", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Application" ], [ "subsection", "AR and VR applications" ] ], "subsections": [], "title": "AR and VR applications" }, { "cite_extract_rate": 0.25, "cites": [ 4271 ], "content": "In the HRI field, FPV hand-based approaches have mainly been used for two purposes: robot learning and robot control. Approaches for robot learning recognized movements and/or actions performed by the user's hands, in order to train a robot performing the same set of movements autonomously . Aksoy et al. , decomposed each manipulation into shorter chunks and encoded each manipulation into a semantic event chain (SEC), which encodes the spatial relationship between objects and hands in the scene. Each temporal transition in the SEC (e.g., change of state in the scene configuration) was considered as movement primitive for the robot imitation. 
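A semantic-event-chain-style encoding can be sketched as follows: a spatial relation (here, simple bounding-box overlap between hand and object) is tracked over frames, and only its transitions are kept as candidate movement primitives. The overlap relation and the toy trajectory are illustrative simplifications of the actual SEC representation.

```python
import numpy as np

def touching(boxes_a, boxes_b):
    """True where two boxes (x1, y1, x2, y2) overlap in each frame."""
    ax1, ay1, ax2, ay2 = boxes_a.T
    bx1, by1, bx2, by2 = boxes_b.T
    return (ax1 < bx2) & (bx1 < ax2) & (ay1 < by2) & (by1 < ay2)

def event_chain(rel):
    """Semantic-event-chain-style encoding: keep only the frames where a
    spatial relation changes, returning (frame, new state) pairs. This is
    a simplification of the SEC representation mentioned in the text."""
    changes = np.flatnonzero(np.diff(rel.astype(int)) != 0) + 1
    return [(int(t), bool(rel[t])) for t in changes]

# Hand approaches an object, grasps it (boxes overlap), then releases it.
hand = np.array([[i, 0, i + 2, 2] for i in range(10)], float)
obj = np.tile([4, 0, 6, 2], (10, 1)).astype(float)
rel = touching(hand, obj)
print(event_chain(rel))   # -> [(3, True), (6, False)]
```

Each transition (grasp at frame 3, release at frame 6) would be treated as a movement primitive for imitation, which is the key idea of the SEC decomposition.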
In , the robot used the tracked hand locations of a human to learn and predict the future positions and trajectories of the hands when a particular action has to be executed. By contrast, robot control approaches mainly relied on hand gesture recognition to give specific real-time commands to the robots . The hand gestures are seen as a means of communication between the human and the robot, and can encode specific commands such as the action to be performed by a robot arm or the direction to be taken by a reconnaissance robot .", "id": "64defbc7-5678-446e-93db-a323cb379d13", "level": "subsection", "origin_cites_number": 4, "parent_id": "5d6e752f-4696-44e4-a775-fe653f7e3245", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Application" ], [ "subsection", "Robot control and learning" ] ], "subsections": [], "title": "Robot control and learning" }, { "cite_extract_rate": 0.1, "cites": [ 4260 ], "content": "\\label{subsec:appHealth}\nEgocentric vision has demonstrated the potential to have an important impact on healthcare. The possibility to automatically analyze the articulated hand pose and recognize actions and ADLs has made these methods appealing for the remote assessment of upper limb functions and AAL systems .\nThe assessment of upper limb function is an important phase in the rehabilitation after stroke or cSCI that allows clinicians to plan the optimal treatment strategy for each patient. However, geographical distances between patients and hospitals create barriers towards obtaining optimal assessments and rehabilitation outcomes. Egocentric vision has inspired researchers to develop video-based approaches for automatically studying hand functions at home . Studies have been conducted in individuals with cSCI, tackling the problem of hand function assessment from two perspectives: localization and interpretation . 
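On the localization side, one common efficiency strategy is to run a comparatively expensive hand detector only intermittently and propagate its output with a cheaper tracker in between; a minimal sketch with stand-in components (the detector, tracker, and interval are all illustrative placeholders, not a specific published pipeline):

```python
# Sketch of interleaving detection and tracking to cut computation:
# run the (expensive) detector every `interval` frames, track in between.

def detect(frame):
    # placeholder for an expensive hand detector (e.g., a CNN)
    return ("det", frame)

def track(prev_box, frame):
    # placeholder for a cheap tracker updating the previous box
    return ("trk", frame)

def localize_hands(frames, interval=5):
    boxes, detector_calls, box = [], 0, None
    for i, frame in enumerate(frames):
        if i % interval == 0 or box is None:
            box = detect(frame)
            detector_calls += 1
        else:
            box = track(box, frame)
        boxes.append(box)
    return boxes, detector_calls

boxes, calls = localize_hands(list(range(100)), interval=5)
print(calls)  # 20 detector calls instead of 100
```

With an interval of 5, the expensive detector runs on one fifth of the frames, which is the kind of saving that makes long home recordings tractable.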
Fine-tuning object detection algorithms to localize and recognize hands in people with SCI made it possible to develop hand localization approaches that are robust to impaired hand poses and uncontrolled situations . Moreover, strategies for improving the computational performance of hand detection algorithms have been adopted (e.g., combining hand detection and tracking), making this application suitable for use at home. The automatic detection of hand-object manipulations allowed extracting novel measures reflective of hand usage at home, such as the number of interactions per hour, the duration of interactions, and the percentage of interaction over time . These measures, once validated against clinical scores, will help clinicians to better understand how individuals with cSCI and stroke use their hands at home while performing ADLs.\nAnother healthcare application is the development of AAL systems. The increasing ageing population is posing serious social and financial challenges in many countries. These challenges have stimulated the interest in developing technological solutions to help and support older adults with and without cognitive impairments during their daily life . Some of these applications used egocentric vision to provide help and support to older adults during ADLs at home . Egocentric vision AAL builds upon the action and activity recognition approaches illustrated in Section \\ref{subsec:actionActivity}. In particular, approaches have been proposed to automatically recognize how older adults perform ADLs at home, for example to detect early signs of dementia or to support people in conducting the activities .\nIn these specific applications, the use of egocentric vision presents important advantages with respect to other solutions (e.g., sensor-based and third-person vision):\n\\begin{itemize}[noitemsep, wide=0pt, leftmargin=\\dimexpr\\labelwidth + 2\\labelsep\\relax]\n \\item FPV can provide high-quality videos on how people manipulate objects. 
This is important when the aim is the recognition of hand-object manipulations and ADLs, since hand occlusions tend to be minimized.\n \\item Egocentric vision provides more details of hand-object interactions than sensor-based technology, by capturing information about both the hand and the object being manipulated. Other sensor-based solutions such as sensor gloves, although providing highly accurate hand information, may limit movements and sensation, which are already reduced in individuals with upper limb impairment .\n\\end{itemize}", "id": "76b93dc9-1efe-4a29-a256-f54fb128cf09", "level": "subsection", "origin_cites_number": 10, "parent_id": "5d6e752f-4696-44e4-a775-fe653f7e3245", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Application" ], [ "subsection", "Remote healthcare monitoring" ] ], "subsections": [], "title": "Remote healthcare monitoring" }, { "cite_extract_rate": 0.25, "cites": [ 4261, 4255, 4268, 4256, 4260, 4272, 2906 ], "content": "\\label{sec:datasets}\n\\begin{table*}[]\n\\begin{center}\n\\begin{tabular}{|ccccccccccc|}\n\\hline\nDataset & Year & Mode & Device & Location & Frames & Videos & Duration & Subjects & \\begin{tabular}[c]{@{}c@{}}Resolution\\\\ (pixels)\\end{tabular} & Annotation \\\\ \\hline\\hline\nGTEA & 2011 & C & GoPro & H & $\\sim$31k & 28 & 34 min & 4 & 1280$\\times$720 & \\begin{tabular}[c]{@{}c@{}} act\\\\ msk\\end{tabular} \\\\ \\hline\nADL & 2012 & C & GoPro & H & \\textgreater{}1M & 20 & $\\sim$10 h & 20 & 1280$\\times$960 & \\begin{tabular}[c]{@{}c@{}}act\\\\ obj\\end{tabular} \\\\ \\hline\nEDSH & 2013 & C & - & H & $\\sim$20k & 3 & $\\sim$10 min & - & 1280$\\times$720 & msk\\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Interactive\\\\ Museum \\end{tabular} & 2014 & C & - & H & - & 700 & - & 5 & 800$\\times$450 & \\begin{tabular}[c]{@{}c@{}}gst\\\\ msk\\end{tabular} \\\\ \\hline\nEgoHands & 2015 & C & Google Glass & H & $\\sim$130k & 48 & 72 min & 8 & 
1280$\\times$720 & msk\\\\ \\hline\nMaramotti & 2015 & C & - & H & - & 700 & - & 5 & 800$\\times$450 & \\begin{tabular}[c]{@{}c@{}}gst\\\\ msk\\end{tabular} \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}UNIGE\\\\ Hands \\end{tabular} & 2015 & C & \\begin{tabular}[c]{@{}c@{}}GoPro\\\\ Hero3+\\end{tabular} & H & $\\sim$150k & - & 98 min & - & 1280$\\times$720 & det\\\\ \\hline\nGUN-71 & 2015 & CD & \\begin{tabular}[c]{@{}c@{}}Creative\\\\ Senz3D\\end{tabular} & C & \\begin{tabular}[c]{@{}c@{}}$\\sim$12k\\\\ (annotated)\\end{tabular} & - & - & 8 & - & grs\\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}RGBD\\\\ Egocentric\\\\ Action \\end{tabular} & 2015 & CD & \\begin{tabular}[c]{@{}c@{}}Creative\\\\ Senz3D\\end{tabular} & H & - & - & - & 20 & \\begin{tabular}[c]{@{}c@{}}C:640$\\times$480\\\\ D:320$\\times$240\\end{tabular} & act\\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Fingerwriting\\\\ in mid-air \\end{tabular} & 2016 & CD & \\begin{tabular}[c]{@{}c@{}}Creative\\\\ Senz3D\\end{tabular} & H & $\\sim$8k & - & - & - & - & \\begin{tabular}[c]{@{}c@{}}ftp\\\\ gst\\end{tabular} \\\\ \\hline\nEgo-Finger & 2016 & C & - & H & $\\sim$93k & 24 & - & 24 & 640$\\times$480 & \\begin{tabular}[c]{@{}c@{}}det\\\\ ftp\\end{tabular} \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}ANS\\\\ able-\\\\ bodied \\end{tabular} & 2016 & C & Looxcie 2 & - & - & - & 44 min & 4 & 640$\\times$480 & int\\\\ \\hline\nUT Grasp & 2017 & C & \\begin{tabular}[c]{@{}c@{}}GoPro\\\\ Hero2\\end{tabular} & H & - & 50 & $\\sim$4 h & 5 & 960$\\times$540 & grs\\\\ \\hline\nGestureAR & 2017 & C & \\begin{tabular}[c]{@{}c@{}}Nexus 6 and\\\\ Moto G3\\end{tabular} & H & - & 100 & - & 8 & 1280$\\times$720 & gst\\\\ \\hline\nEgoGesture & 2017 & C & - & - & $\\sim$59k & - & - & - & - & \\begin{tabular}[c]{@{}c@{}}det\\\\ ftp\\\\ gst\\end{tabular} \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Egocentric\\\\ hand-action \\end{tabular} & 2017 & D & \\begin{tabular}[c]{@{}c@{}}Softkinetic\\\\ DS325\\end{tabular} & H & $\\sim$154k & 300 & - & 
26 & 320$\\times$240 & gst\\\\ \\hline\nBigHand2.2M & 2017 & D & \\begin{tabular}[c]{@{}c@{}}Intel\\\\ RealSense\\\\ SR300\\end{tabular} & - & $\\sim$290k & - & - & - & 640$\\times$480 & pos\\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Desktop\\\\ Action \\end{tabular} & 2018 & C & \\begin{tabular}[c]{@{}c@{}}GoPro\\\\ Hero2\\end{tabular} & H & $\\sim$324k & 60 & 3 h & 6 & 1920$\\times$1080 & \\begin{tabular}[c]{@{}c@{}}act\\\\ msk\\end{tabular} \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Epic\\\\ Kitchens \\end{tabular} & 2018 & C & GoPro & H & $\\sim$11.5M & - & 55 h & 32 & 1920$\\times$1080 & act\\\\ \\hline\nFPHA & 2018 & CD & \\begin{tabular}[c]{@{}c@{}}Intel\\\\ RealSense\\\\ SR300\\end{tabular} & S & \\begin{tabular}[c]{@{}c@{}}$\\sim$105k\\\\ (annotated)\\end{tabular} & 1,175 & - & 6 & \\begin{tabular}[c]{@{}c@{}}C:1920$\\times$1080\\\\D:640$\\times$480\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}act\\\\ pos\\end{tabular} \\\\ \\hline\nEYTH & 2018 & C & - & - & \\begin{tabular}[c]{@{}c@{}}1,290\\\\ (annotated)\\end{tabular} & 3 & - & - & - & msk\\\\ \\hline\nEGTEA+ & 2018 & C & \\begin{tabular}[c]{@{}c@{}}SMI wearable\\\\ eye-tracker\\end{tabular} & H & \\textgreater{}3M & 86 & $\\sim$28 h & 32 & 1280$\\times$960 & \\begin{tabular}[c]{@{}c@{}}act\\\\ gaz\\\\ msk\\end{tabular} \\\\ \\hline\nTHU-READ & 2018 & CD & \\begin{tabular}[c]{@{}c@{}}Primesense\\\\ Carmine\\end{tabular} & H & $\\sim$343k & 1,920 & - & 8 & 640$\\times$480 & \\begin{tabular}[c]{@{}c@{}}act\\\\ msk\\end{tabular} \\\\ \\hline\nEgoGesture & 2018 & CD & \\begin{tabular}[c]{@{}c@{}}Intel\\\\ RealSense\\\\ SR300\\end{tabular} & H & $\\sim$3M & 24,161 & - & 50 & 640$\\times$480 & gst\\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}EgoDaily \\end{tabular} & 2019 & C & \\begin{tabular}[c]{@{}c@{}}GoPro\\\\ Hero5\\end{tabular} & - & $\\sim$50k & 50 & - & 10 & 1920$\\times$1080 & \\begin{tabular}[c]{@{}c@{}}det\\\\ hid\\end{tabular} \\\\ \\hline\nANS SCI & 2019 & C & \\begin{tabular}[c]{@{}c@{}}GoPro\\\\
Hero4\\end{tabular} & H & - & - & - & 17 & 1920$\\times$1080 & \\begin{tabular}[c]{@{}c@{}}det\\\\ int\\end{tabular} \\\\ \\hline\nKBH & 2019 & C & HTC Vive & H & \\begin{tabular}[c]{@{}c@{}}$\\sim$12.5k\\\\ (annotated)\\end{tabular} & 161 & - & 50 & 230$\\times$306 & msk\\\\ \\hline\n\\end{tabular}\n\\end{center}\n \\caption{List of available datasets with hand-based annotations in FPV. Image modality (Mode): Color (C); Depth (D); Color+Depth (CD). Camera location: Head (H); Chest (C); Shoulder (S). Annotation: actions/activities (\\textbf{act}); hand presence and location (\\textbf{det}); fingertip positions (\\textbf{ftp}); gaze (\\textbf{gaz}); grasp types (\\textbf{grs}); hand gestures (\\textbf{gst}); hand disambiguation (\\textbf{hid}); hand-object interactions (\\textbf{int}); hand segmentation masks (\\textbf{msk}); object classes (\\textbf{obj}); hand pose (\\textbf{pos}).}\n\\label{tab_datasets}\n\\end{table*}\nThe importance that this field of research has gained in recent years is clear when we look at the number of available datasets published since 2015 (Table \\ref{tab_datasets}). Although the types of information and ground-truth annotations made available by the authors are heterogeneous, it is possible to identify some sub-areas that are more recurrent than others. The vast majority of datasets provided hand segmentation masks , reflecting the high number of approaches proposed in this area and summarized in Section \\ref{sec:localization}. However, the high number of datasets is counterbalanced by a relatively low number of annotated frames, usually on the order of a few hundred or thousand images. To expedite the lengthy pixel-level annotation process and build larger datasets for hand segmentation, some authors proposed semi-automated techniques, for example based on Grabcut .\nActions/activities and hand gestures are other common types of information captured and annotated in many datasets. 
This large amount of data has been used by researchers for developing robust HCI applications that rely on hand gestures. Compared to the amount of hand segmentation masks, action/activity and hand gesture datasets are usually larger, since the annotation process is easier and faster than pixel-level segmentation.\nThe vast majority of datasets included color information recorded from head-mounted cameras. The head position is usually preferred over the chest or shoulders, since it is easier to focus on hand actions and manipulations whenever the camera wearer is performing a specific activity. GoPro cameras were the most widely used devices for recording the videos, since they are specifically designed for egocentric POV and are readily available on the market. Few datasets, usually designed for hand pose estimation , hand gesture recognition , and action/activity recognition , include depth or color and depth information. In most cases, videos were collected using Creative$^{\\circledR}$ Senz3D\\textsuperscript{TM} or Intel$^{\\circledR}$ RealSense\\textsuperscript{TM} SR300 depth sensors, as these devices were small and lightweight. Moreover, these cameras were preferred over other depth sensors (e.g., Microsoft$^{\\circledR}$ Kinect\\textsuperscript{TM}) because they were originally developed for natural user interfaces, which made them more suitable for studying hand movements in the short range (i.e., up to 1 m of distance from the camera).\nAlthough FPV is gaining a lot of interest for developing healthcare applications for remote monitoring of patients living in the community, only one dataset (i.e., the ANS-SCI dataset ) included videos from people with clinical conditions such as cSCI. This lack of available data is mainly due to ethical constraints that make it harder to share videos and images collected from people with diseases or clinical conditions. 
In the next few years researchers should try -- within the ethical and privacy constraints -- to build and share datasets for healthcare applications that include videos collected from patients. This will benefit the robustness of hand-based approaches in FPV against the inter-group and within-group variability that can be encountered in many clinical conditions.", "id": "c289979f-f5c8-4c7d-8017-03424c08e89e", "level": "section", "origin_cites_number": 28, "parent_id": "08af7ca6-1a67-48c9-9735-3113b8e18727", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "FPV Datasets with hand annotation" ] ], "subsections": [], "title": "FPV Datasets with hand annotation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sub:conclusion}\nIn this paper we showed how hand-related information can be retrieved and used in egocentric vision. We summarized the existing literature into three macro-areas, identifying the most prominent approaches for hand localization (e.g., hand detection, segmentation, pose estimation, etc.), interpretation (grasp analysis, gesture recognition, action and activity recognition), as well as the FPV applications for building real-world solutions. We believe that a comprehensive taxonomy and an updated framework of hand-based methods in FPV may serve as guidelines for the novel approaches proposed in this field by helping to identify gaps and standardize terminology.\nOne of the main factors that promoted the development of FPV approaches for studying the hands is the availability of wearable action cameras and AR/VR systems. However, we also showed how the use of depth sensors, although not specifically developed for wearable applications, has been exploited by many authors in order to improve the robustness of hand localization. 
We believe that the possibility to develop miniaturized wearable depth sensors may further boost the research in this area and the development of novel solutions, since a combination of color and depth information can improve the performance of several hand-based methods in FPV. In particular, these advantages can be exploited in settings that involve short-range observations (i.e., less than 1 m) and indoor environments, which are often encountered when analyzing the hands in FPV.\nFrom this survey it is clear how the hand localization step plays a vital role in any processing pipeline, as good localization is a necessary condition for higher-level hand-based inference, such as gesture or action recognition. This importance has motivated the extensive research conducted in the past 10 years, especially in sub-areas like hand detection and segmentation. The importance of hand localization methods may also be seen in those approaches where the hands play an auxiliary role, such as activity recognition. In fact, the position of the hands can be used to build attention-based classifiers, where more weight is given to the manipulation area.\nLike other computer vision fields, the advent of deep learning has had a great impact on this area, by boosting the performance of several localization and interpretation approaches, as well as optimizing the number of steps required to pursue a certain objective (see the hand identification example -- Section \\ref{subsec:handIdent}). Hand detection is the localization sub-area that has seen the largest improvements, especially thanks to the availability of object detection networks retrained on large datasets. Other sub-areas, such as hand segmentation and pose estimation, will perhaps see larger improvements in the next few years, especially if the amount of available data annotations grows. Recurrent models, 3D CNNs, and the availability of large datasets (e.g., Epic Kitchens, EGTEA+, etc.) 
have helped push the state of the art of action and activity recognition, considering that the combination of temporal and appearance information was demonstrated to be crucial for these tasks. In the near future, efforts should be made to improve methods for the recognition of larger classes of unscripted ADLs, which would benefit the development of applications such as AAL.\nAs this field of research is still growing, we will see novel applications and improvements of the existing ones. The impact of hand-based methods in egocentric vision is clear from the development of applications that rely on HCI and HRI. The importance of the hands as our primary means of interaction with the world around us is currently exploited by VR and AR systems, and the position of the wearable camera offers tremendous advantages for assessing upper limb function remotely and supporting older adults in ADLs. This will translate into the availability of rich information captured in natural environments, with the possibility to improve assessment and diagnosis, provide new interaction modalities, and enable personalized feedback on tasks and behaviours.\n\\section*{Acknowledgments}\nThis work was supported in part by the Craig H. Neilsen Foundation (542675). The authors would also like to thank Gustavo Balbinot, Guijin Li, and Mehdy Dousty for the helpful discussions and feedback.\n\\bibliographystyle{unsrt}\n\\bibliography{references}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{AB.jpg}}]{Andrea Bandini}\nAndrea Bandini (M'16) received his Master's degree in Biomedical Engineering from the University of Firenze (Italy) in 2012, and his PhD in Bioengineering from the University of Bologna (Italy) in 2016. He has been a postdoctoral research fellow at KITE - University Health Network (Toronto, Canada) since September 2016. 
His research aims at developing intelligent tools for remote assessment and rehabilitation of motor signs associated with neurological disorders (spinal cord injury, stroke, amyotrophic lateral sclerosis, and Parkinson’s disease), by using computer vision and machine learning techniques.\n\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{JZ.png}}]{José~Zariffa}\nJosé Zariffa (M’01, SM’18) received the Ph.D. degree in 2009 from the University of Toronto's Department of Electrical and Computer Engineering and the Institute of Biomaterials and Biomedical Engineering. He later completed post-doctoral fellowships at the International Collaboration On Repair Discoveries (ICORD) at the University of British Columbia in Vancouver, Canada, and at the Toronto Rehabilitation Institute – University Health Network in Toronto, Canada. He is currently a Scientist at KITE - Toronto Rehabilitation Institute - University Health Network and an Associate Professor at the Institute of Biomaterials and Biomedical Engineering at the University of Toronto in Toronto, Canada. His research interests include technology for upper limb rehabilitation after spinal cord injury, neural prostheses, and interfaces with the peripheral nervous system. Dr. Zariffa is the recipient of an Ontario Early Researcher Award.\n\\end{IEEEbiography}\n\\end{document}", "id": "207be78a-9a20-4451-8627-9fe3d09faf21", "level": "section", "origin_cites_number": 0, "parent_id": "08af7ca6-1a67-48c9-9735-3113b8e18727", "prefix_titles": [ [ "title", "Analysis of the hands in egocentric vision:\\\\ A survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
46
[ 4255, 4254, 4256, 4257, 4258, 4259, 4262, 4261, 3807, 2571, 4260, 810, 825, 1766, 514, 4263, 520, 4265, 206, 2671, 802, 4264, 97, 836, 4266, 4267, 4268, 4269, 7166, 4270, 8758, 4271, 4272, 2906 ]
1.064549
[ "Surangika Ranathunga", "En-Shiun Annie Lee", "Marjana Prifti Skenduli", "Ravi Shekhar", "Mehreen Alam", "Rishemjit Kaur" ]
Neural Machine Translation for Low-Resource Languages: A Survey
2021
2021-06-29T06:31:58Z
cs.CL
Neural Machine Translation (NMT) has seen a tremendous spurt of growth in less than ten years, and has already entered a mature phase. While considered as the most widely used solution for Machine Translation, its performance on low-resource language pairs still remains sub-optimal compared to the high-resource counterparts, due to the unavailability of large parallel corpora. Therefore, the implementation of NMT techniques for low-resource language pairs has been receiving the spotlight in the recent NMT research arena, thus leading to a substantial amount of research reported on this topic. This paper presents a detailed survey of research advancements in low-resource language NMT (LRL-NMT), along with a quantitative analysis aimed at identifying the most popular solutions. Based on our findings from reviewing previous work, this survey paper provides a set of guidelines to select the possible NMT technique for a given LRL data setting. It also presents a holistic view of the LRL-NMT research landscape and provides a list of recommendations to further enhance the research efforts on LRL-NMT.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "109ef38f-b3f2-41dd-a765-4a141b2d87cf", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ] ], "subsections": [ "b2093b4c-a252-444e-8c99-5b104b46bdc0", "a8eff206-d8a5-42d4-94ac-5f2cf5ed20dc", "04e32557-9bb1-48a3-8d8c-57a570286aa7", "8742219f-67f4-4253-9cfa-1c5d1c94bf9b", "d44ca8d5-e84a-4945-99d8-4b2a743f628f", "7a44d66d-a445-4b7a-b8ac-4ef9879169f0", "0d19b510-15ad-4649-a082-492aa8ae0f38", "4a48075b-6440-4ac6-91cc-17bbe278ff64", "2c4eae1c-0595-4c9b-9650-df8b610079cd" ], "title": "root" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 4965, 303, 2339 ], "content": "Since the beginning of time, language and communication have been central to human interactions. Therefore, translating between different languages has been pivotal in societal and cultural advancements. \nMachine Translation (MT) was one of the first applications conceived to be solvable by computers; this vision was birthed by the ``translation memorandum'' presented by Warren Weaver, and the word-for-word translation system by IBM in 1954~. \nConsequently, different techniques were developed to address the problem of Machine Translation, the most prominent being Statistical Machine Translation (SMT). \nBecause the performance of an SMT system is directly impacted by the number of parallel sentence pairs available for training, a heavy emphasis has been placed on creating parallel datasets (also known as bitext) in addition to research on new MT techniques.\\\\\nIn 2013, the introduction of end-to-end neural encoder-decoder based MT systems marked a breakthrough with promising results; these systems soon became popularized as Neural Machine Translation (NMT). Currently, NMT is the dominant technique in the community. 
\nHowever, it was quickly realized that these initial NMT systems required huge volumes of parallel data to achieve comparable results to that of SMT~. \nHigh resource language pairs (such as English and French) do not have dataset size concerns because researchers have created ample amounts of parallel corpora over the years\\footnote{The English-French corpus~ used contained 348 Million parallel sentences.}. \nHowever, the requirement of having large amounts of parallel data is not a realistic assumption for many of the 7000+ languages currently in use around the world and therefore is considered a major challenge for low-resource languages (LRLs)~. \nDue to economic and social reasons, it is useful to automatically translate between most of these LRLs, particularly for countries that have multiple official languages. Therefore, in recent years, there has been a noticeable increase in NMT research (both by academia and industry) that specifically focused on LRL pairs.\nDespite this emphasis, we are not aware of any literature review that systematically examines the NMT techniques tailored for LRL pairs. Although there exists some work that discusses the challenges of using NMT in the context of LRL pairs~ and the application of specific techniques for LRL pairs~, \n none of them gives a comprehensive view of the available NMT techniques for LRL pairs. This makes it difficult for new researchers in the field to identify the best NMT technique for a given dataset specification. In addition, none of these surveys presents a holistic view of the NMT landscape for LRL pairs to derive insights on research efforts and current practices. \nThis survey aims to address the above shortcomings in the NMT research landscape for LRLs. More specifically, it provides researchers working on LRLs a catalogue of methods and approaches for NMT and identifies factors that positively influence NMT research on LRL pairs. 
To achieve these aims, we answer the following research questions: \n\\begin{enumerate}\n \\item \\textbf{NMT Techniques:} What are the major NMT techniques that can be applied to LRL pairs, and what are the current trends?\n \\item \\textbf{Technique Selection:} How to select the most suitable NMT technique for a given language?\n \\item \\textbf{Future Directions:} How to increase research efforts and what are the future directives for NMT on LRL pairs?\n\\end{enumerate}\nTo answer the above questions, we first conducted a systematic analysis of the NMT techniques that have been applied for LRL pairs, and their progress (Section~\\ref{section:label-NMT-Techniques-for-Low-Resource-Languages}). \nSecondly, we critically analysed the applicability of these techniques for LRL pairs in practical terms. Based on our observations, we provide a set of guidelines for those who want to use NMT for LRL pairs to select the most suitable NMT technique by considering the size and type of the datasets, as well as the available computing resources (Section~\\ref{technique_selection}). Lastly, we conducted a comprehensive analysis of the amount of NMT research conducted for LRLs in the world (Section~\\ref{section:trend_analysis}). Here, we note a strong correlation between the amount of NMT research per language and the amount of publicly available parallel corpora for that language. We also note the recent rise of regional level research communities that contributed to parallel dataset creation, and thus NMT for LRL pairs in turn. \nTherefore, our recommendations to advance the area of NMT for LRL pairs are to 1) create LRL resources (datasets and tools), 2) make computational resources and trained models publicly available, and 3) involve research communities at a regional-level. 
Based on our analysis of the existing NMT techniques, we also recommend possible improvements to these techniques that would help them work well for LRL pairs.", "id": "b2093b4c-a252-444e-8c99-5b104b46bdc0", "level": "section", "origin_cites_number": 5, "parent_id": "109ef38f-b3f2-41dd-a765-4a141b2d87cf", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{defs_scope}", "id": "a8eff206-d8a5-42d4-94ac-5f2cf5ed20dc", "level": "section", "origin_cites_number": 0, "parent_id": "109ef38f-b3f2-41dd-a765-4a141b2d87cf", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Background" ] ], "subsections": [ "d5c14c13-e8a2-4183-916a-40ef1afbf3eb", "2250caaf-a646-446f-b481-dd5c6b4bd159", "26a330b6-f4c2-4764-93e6-c7d10c626cb7", "bc72059f-1a7d-4a91-a3b2-f8dc30d71caf" ], "title": "Background" }, { "cite_extract_rate": 0.7000000000000001, "cites": [ 7873, 2489, 7874, 4967, 4968, 4966, 4969 ], "content": "\\label{LR-Definition}\nFor Natural Language Processing (NLP), a low-resource problem can arise mainly due to the considered languages being low-resourced, or the considered domains being low-resourced~. In this paper, we focus on LRLs only. \\\\\nResearchers have attempted to define LRLs by exploring various criteria such as the number of mother-tongue speakers and the number of available datasets. According to , an LRL\\footnote{An LRL is also known as an under-resourced, low-density, resource-poor, low-data, or less-resourced language } is a language that lacks a unique writing system, has limited (or no) presence on the World Wide Web, lacks linguistic expertise specific to that language, and/or lacks electronic resources such as corpora (monolingual and parallel), vocabulary lists, etc. 
NLP researchers have used the availability of data (in the form of labelled, unlabelled, or auxiliary data), as well as the availability of NLP tools and resources, as the criteria to define LRLs~.\nOver the years, there have been many initiatives to categorise languages according to the aforementioned different criteria~. Given that the category of a language may change with time, we rely on the language categorization recently proposed by to identify LRLs. As shown in Table~\\ref{tab:language_categories},~ categorised 2485 languages into six groups based on the amount of publicly available data.\n\\begin{table}[htp]\n\\centering\n{\\small \n \\begin{tabular}{p{.04\\textwidth}p{.7\\textwidth}p{.2\\textwidth}}\n \\hline\n Class & Description & Language Examples \\\\ \\hline \\hline\n 0 & Have exceptionally limited resources, and have rarely been considered in language technologies. & Slovene, Sinhala \\\\ \\hline\n 1 & Have some unlabelled data; however, collecting labelled data is challenging. & Nepali, Telugu\\\\ \\hline\n 2 & A small set of labelled datasets has been collected, and language support communities exist to support the language. & Zulu, Irish \\\\ \\hline \n 3 & Has a strong web presence, and a cultural community that backs it. Has highly benefited from unsupervised pre-training. & Afrikaans, Urdu \\\\ \\hline\n 4 & Have a large amount of unlabelled data, and a smaller, but still significant, amount of labelled data. Have dedicated NLP communities researching these languages. & Russian, Hindi \\\\ \\hline\n 5 & Have a dominant online presence. There have been massive investments in the development of resources and technologies. & English, Japanese\\\\ \\hline\n \\end{tabular}\n }\n \\caption{Language Categories identified by~}~\\label{tab:language_categories}\n\\end{table}\nUnlike other NLP tasks, MT takes place between two languages. Thus, in MT the resourcefulness of a language pair is determined by the available amount of parallel corpora between the considered languages. 
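As a rough operationalization of this resourcefulness notion, a language pair could be bucketed by the size of its parallel corpus; the thresholds below (0.1M and 0.5M sentence pairs) are illustrative values drawn from the approximate ranges reported in recent work, not agreed-upon standards:

```python
# Toy categorization of a language pair by parallel-corpus size.
# Thresholds (in parallel sentence pairs) are illustrative, not standard.

EXTREMELY_LOW = 100_000   # ~0.1M sentence pairs
LOW = 500_000             # ~0.5M sentence pairs

def resource_category(parallel_sentences: int) -> str:
    if parallel_sentences < EXTREMELY_LOW:
        return "extremely low-resource"
    if parallel_sentences < LOW:
        return "low-resource"
    return "high-resource"

print(resource_category(50_000))     # extremely low-resource
print(resource_category(300_000))    # low-resource
print(resource_category(2_000_000))  # high-resource
```

In practice any such cut-off is relative: the same corpus size can be ample for one language pair and limiting for another, depending on domain and morphology.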
The terms `high-resource', `low-resource', as well as `extremely low-resource' have been commonly used when referring to the parallel corpora at hand. However, there is no minimum requirement on the size of the parallel corpora to categorise a language pair as high, low, or extremely low-resource. Some early research considered even $1$ million parallel sentences as LR~. More recent research seems to consider a language pair as LR or extremely LR if the available parallel corpora for the considered pair for NMT experiments are below $0.5$ million and $0.1$ million sentences, respectively~; however, these are not absolute values for the size of the corpora.\\\\\nEven if a particular language has a large amount of monolingual data while still having a small parallel corpus with another language, this language pair is considered as LR for the NMT task. We assume that languages that have been labelled as LR by~ have very small parallel corpora with other languages, or have no parallel corpora at all.\n\\iffalse", "id": "d5c14c13-e8a2-4183-916a-40ef1afbf3eb", "level": "subsection", "origin_cites_number": 10, "parent_id": "a8eff206-d8a5-42d4-94ac-5f2cf5ed20dc", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Background" ], [ "subsection", "Low-Resource Languages (LRLs)" ] ], "subsections": [], "title": "Low-Resource Languages (LRLs)" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 4970, 7875 ], "content": "\\label{related_defs}\n\\begin{enumerate}\n \\item Monolingual Corpus - A body of text in one language.\n \\item Parallel Data - A collection of texts, each of which is translated into one or more languages other than the original\\footnote{\\url{http://www.ilc.cnr.it/EAGLES96/corpustyp/node20.html#:~:text=A\\%20parallel\\%20corpus\\%20is\\%20a,however\\%2C\\%20exist\\%20in\\%20several\\%20languages.}}.\n \\item Bilingual Machine Translation - Machine translation between a pair of languages. 
Similarly, a bilingual parallel corpus is between a pair of languages.\n \\item Zero-shot translation and zero-resource translation. Some researchers~ consider zero-shot to be synonymous with the extremely LR case. However, ~ and distinguished these two concepts from the concept of extremely LR setting. In zero-shot translation, the model is trained with no parallel data for the considered language pair. In the latter case, although no parallel data is available for the considered language pair, synthetic data is generated, and the model is trained with this data. Hence, from the perspective of the model, learning is not zero-shot.\n\\end{enumerate}\n\\fi", "id": "2250caaf-a646-446f-b481-dd5c6b4bd159", "level": "subsection", "origin_cites_number": 3, "parent_id": "a8eff206-d8a5-42d4-94ac-5f2cf5ed20dc", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Background" ], [ "subsection", "Related Terminology and Definitions" ] ], "subsections": [], "title": "Related Terminology and Definitions" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 4974, 4969, 4973, 7873, 4972, 4965, 7200, 4971 ], "content": "\\label{label-Related-Work}\nSome of the previous survey papers discussed different\nNMT architectures~. They did not contain any reference to LRL-NMT, except for~, which briefly identified Multilingual NMT (multi-NMT), unsupervised, and semi-supervised LRL-NMT techniques.\nAnother set of papers surveyed only one possible NMT methodology, for example multi-NMT~, leveraging monolingual data for\nNMT~, use of pre-trained embeddings for NMT~, or domain adaptation techniques for NMT~. 
Out of these surveys,~ specifically focused on LR settings, perhaps because monolingual data is more useful in that scenario.\nWe also observed that some surveys focused on the broader MT, both SMT and NMT, in problem domains such as document-level MT~, while others focused on MT for a selected set of languages~. On a different front, we found surveys that discussed LRL scenarios in the general context of NLP, but did not have a noticeable focus on NMT or even MT~.\nTable~\\ref{tab:survey_papers} categorises the survey papers discussed above. In conclusion, although there are surveys that dedicated a brief section to LRL-NMT and others that explicitly focus on LRLs for a selected NMT technique, there is no comprehensive survey on leveraging NMT for LRLs.\\\\\n \\begin{table} [htp]\n\\centering\n{\\small \n \\begin{tabular}{p{.3\\textwidth}p{.7\\textwidth}}\n \\hline\n Type of survey & Examples\\\\ \\hline \\hline\n NMT Architectures & , , , , ~\\\\ \\hline\n Specific NMT Methodologies & , , , \\\\ \\hline\n Specific MT Problem Domain & \\\\ \\hline\n Specific Language & \\\\ \\hline\n LRL NLP & \\\\ \\hline\n \\end{tabular}}\n \\caption{Types of survey papers}~\\label{tab:survey_papers}\n\\end{table}", "id": "26a330b6-f4c2-4764-93e6-c7d10c626cb7", "level": "subsection", "origin_cites_number": 12, "parent_id": "a8eff206-d8a5-42d4-94ac-5f2cf5ed20dc", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Background" ], [ "subsection", "Related Work" ] ], "subsections": [], "title": "Related Work" }, { "cite_extract_rate": 1, "cites": [ 4975, 4969 ], "content": "Most of the NMT techniques discussed in this paper can be used in the context of LRL translation as well as LR domain translation. However, an LR domain, such as medical or finance, can exist for a high-resource language, such as English, as well~. In that case, additional language resources (e.g. 
WordNet, Named Entity Recognisers) can be utilised in developing the solution. However, such resources might not be available for LRL pairs. Thus, solutions that only apply to LR domains are considered out of scope for this paper. In this paper, we use the phrase low-resource language NMT (LRL-NMT) to refer to NMT techniques that are applicable for translation between LRL pairs.\nSimilarly, we omit techniques that focus on NMT in general, without any specific focus on the translation of LRL pairs. We also omit techniques that focus on speech translation only, and multimodal translation (which is typically between images and text), as such research is not common in the context of LRL pairs. Some techniques (e.g. data augmentation (Section~\\ref{sec:dataAug}) and pivoting (Section~\\ref{sec:zero_shot})) have been used in the context of SMT as well, which is not discussed in this review.\nNMT solutions for zero-shot translation (no parallel data to train an MT model) are included because of their relationship to the task of translation of LRL pairs, with an overlap between the techniques used.", "id": "bc72059f-1a7d-4a91-a3b2-f8dc30d71caf", "level": "subsection", "origin_cites_number": 2, "parent_id": "a8eff206-d8a5-42d4-94ac-5f2cf5ed20dc", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Background" ], [ "subsection", "Scope of the Survey" ] ], "subsections": [], "title": "Scope of the Survey" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section:label-NMT-Techniques-for-Low-Resource-Languages}", "id": "04e32557-9bb1-48a3-8d8c-57a570286aa7", "level": "section", "origin_cites_number": 0, "parent_id": "109ef38f-b3f2-41dd-a765-4a141b2d87cf", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "NMT Techniques for Low-Resource Languages" ] ], "subsections": [ "7cdcb411-627c-4731-ba2b-6ee0b4c77d4f", 
"0432fa8e-8177-477c-9c23-7ac3c2857358", "500a7ba1-62dc-4a3c-9cc9-6cc93bd8c606", "69fceb92-f017-465f-b68d-572e7cfdc976", "6c52df28-c470-4f26-b8bb-ea0b68c915b7", "797cd8b4-4a35-44d9-96d1-49176cef9068", "6cc6b627-54f2-4960-be95-cd6d02584470", "8037047d-efca-4036-926d-d5e9d133f87a" ], "title": "NMT Techniques for Low-Resource Languages" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 7875, 168, 4970, 38 ], "content": "NMT methodologies fall broadly into supervised, semi-supervised, and unsupervised. Supervised NMT is the default architecture that relies on large-scale parallel datasets. Recurrent neural architecture with attention~, as well as the recently introduced transformer architecture~, are commonly used. However, due to space limitations, we do not detail out these techniques, and interested readers can refer to the aforementioned references.\nBoth these neural architectures rely on large parallel corpora, an advantage not available to LRLs. A solution is to synthetically generate data, which is called \\textbf{data augmentation} (Section~\\ref{sec:dataAug}). These techniques can be applied irrespective of the NMT architecture used. \nIn the extreme case where no parallel data is available, \\textbf{unsupervised} NMT techniques (Section~\\ref{unsupervised_nmt}) can be employed. Even if the available parallel corpora is small, it is possible to combine them with the monolingual data of the languages concerned, in a \\textbf{semi-supervised} manner (Section~\\ref{semi-supervised}). \nEven if parallel data is available, building (bilingual) NMT models between each pair of languages is not practical. As a solution, \\textbf{multi-NMT} models (Section~\\ref{sec:multiNMT}) were introduced, which facilitate the translation between more than one language pair using a single model. Most of the multi-NMT models are based on supervised NMT, while some research is available on the applicability of semi-supervised, and unsupervised NMT in a multilingual setting. 
Although multi-NMT models were initially introduced to avoid the need to build individual bilingual translation models, their capability in the translation of LRL pairs has been shown to be promising. \n\\textbf{Transfer learning} (Section~\\ref{transfer_learning_nmt}) is a technique that is commonly used in low-resource NLP, including NMT. Here, an NMT model trained on a high-resource language pair is used to initialize a child model, which reduces the amount of time taken to train the latter, while guaranteeing better performance than training the child model from scratch. In particular, transfer learning on multi-NMT models has shown very good performance for LRL pairs. This is a very promising development, as it is time-consuming to train a multi-NMT model every time a dataset for a new language pair comes up.\n\\textbf{Zero-shot} NMT (Section~\\ref{sec:zero_shot}) is a problem related to LRL-NMT. In zero-shot translation, there is no parallel data, and the model is trained with no parallel data for the considered language pair. While some researchers~ consider zero-shot to be synonymous with the extremely LR case, others ~ disagree. Promising solutions for zero-shot translation that have been presented include pivoting, multi-NMT, unsupervised NMT, and transfer learning. Zero-shot translation is extremely useful because it eliminates the requirement for parallel data between every pair of languages. \nFigure~\\ref{fig:NMT_techniques} gives an overview of these techniques. Note that it does not cover all the possible scenarios. For example, semi-supervised NMT techniques can work with monolingual data available either at the source or target side, and multi-NMT works with more than three languages.\\\\\nThe following sub-sections discuss the aforementioned techniques at length. At the end of each sub-section, we discuss how the technique has been employed with respect to LRLs. 
\n\\begin{figure}\n \\centering\n \\includegraphics[scale = 0.45]{Figures/Figure1.png}\n \\caption{NMT techniques applicable for the translation of LRL pairs. $L_1 - L_3$ refer to languages. Dashed lines indicate translation task, and solid lines indicate the availability of parallel corpora. Double circles in (a) and (b) indicate the languages have monolingual data. (a) Bilingual supervised NMT, (b) Bilingual semi-supervised NMT, (c) Bilingual unsupervised NMT, (d) Multi-NMT, (e) Pivoting. }\n \\label{fig:NMT_techniques}\n\\end{figure}", "id": "7cdcb411-627c-4731-ba2b-6ee0b4c77d4f", "level": "subsection", "origin_cites_number": 6, "parent_id": "04e32557-9bb1-48a3-8d8c-57a570286aa7", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "NMT Techniques for Low-Resource Languages" ], [ "subsection", "Overview" ] ], "subsections": [], "title": "Overview" }, { "cite_extract_rate": 0.6153846153846151, "cites": [ 4989, 4981, 4992, 8866, 7876, 8865, 4977, 4980, 4982, 4985, 4978, 8864, 4990, 4983, 860, 4984, 4993, 4986, 4976, 4979, 4991, 8867, 4988, 4987 ], "content": "\\label{sec:dataAug}\nData augmentation (DA) is a set of techniques that is used to create additional data either by modifying existing data or adding data from different sources, to be used in training Machine Learning models. For the problem of MT, data augmentation is used to generate \\emph{synthetic} parallel sentence pairs to train data-centric MT models, such as SMT and NMT. In contrast to the other techniques discussed in this section, data augmentation techniques usually do not alter the NMT architecture but generate data to train these neural architectures. Data augmentation techniques for NMT could be divided into 3 categories: i) word or phrase replacement based augmentation, ii) back-translation based augmentation, and iii) parallel corpus mining.\n\\\\\n\\textbf{1. 
Word or phrase replacement based augmentation:} In this technique, a subset of sentences from an existing parallel or monolingual corpus is selected, and new synthetic sentences are generated by replacing words or phrases in that selected set of sentences. One solution is to use a bilingual dictionary and replace all the words~ or rare words~ in the selected sentences of a monolingual corpus, with the words in the other language in order to generate its translation. Another solution is to replace frequent words in the target sentences with rare words in the target vocabulary and then modify the aligned source words accordingly~. The main problem with such synthetic data is a lack of fluency. There have been subsequent attempts to select the best set of words considering fluency ~. Alternatively, instead of replacing words, phrases can be replaced, which preserves the context and, in turn, improves the fluency of the resulting sentences~. To further improve fluency, syntactic rules (e.g.~morphological, POS, or dependency rules) have been imposed during word replacement~.\n\\textbf{2. Back-Translation based Data Augmentation: } Back-Translation is the process of translating a monolingual corpus in the target language by a pre-existing MT system, in the reverse translation direction, into the source language. Then the obtained synthetic source language sentences, along with their respective target language sentences, are used to construct a synthetic parallel corpus~. Usually, target-side sentences are selected to be back-translated, because monolingual target data helps improve the fluency of the translation model.~ empirically showed that starting with the source side is less successful. \\\\\n Synthetic data generated using BT tends to be noisier than the original parallel data, especially if the MT system used to generate the synthetic data is suboptimal. This is particularly the case with MT systems trained with very small amounts of data. 
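A minimal sketch of the basic back-translation recipe just described; the reverse-direction model here is a hypothetical word-by-word stand-in for whatever pre-existing target-to-source MT system is assumed to be available:

```python
def back_translate(tgt_monolingual, translate_tgt_to_src):
    """Build a synthetic parallel corpus via back-translation.

    tgt_monolingual: list of target-language sentences.
    translate_tgt_to_src: any pre-existing target->source MT model
        (hypothetical stand-in; its quality bounds the quality of
        the synthetic data).
    Returns (synthetic_source, original_target) pairs, ready to be
    mixed with the original parallel data.
    """
    synthetic_corpus = []
    for tgt_sentence in tgt_monolingual:
        src_sentence = translate_tgt_to_src(tgt_sentence)  # noisy source side
        synthetic_corpus.append((src_sentence, tgt_sentence))  # clean target side
    return synthetic_corpus

# Toy usage with a dummy word-by-word "model" (illustrative only):
toy_dict = {"hallo": "hello", "welt": "world"}
dummy_model = lambda s: " ".join(toy_dict.get(w, w) for w in s.split())
pairs = back_translate(["hallo welt"], dummy_model)
```

Note that the clean, human-written text stays on the target side of each pair, which is why target-side monolingual data is the usual choice.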
Thus, subsequent research improved BT using data selection, data filtering, distinguishing between synthetic and original data, sampling, and iterative BT. These improvements are further discussed below. \\\\\n \\textbf{Iterative back-translation}: In one form of iterative back-translation, source and target monolingual data are back-translated using source-to-target and target-to-source NMT models, respectively. This procedure is continued iteratively, where the same set of sentences is back-translated several times until no improvement is observed in either translation direction~. Another option is to improve the forward and backward translators in an iterative manner~. However, in these systems, the two translators are trained independently. As a solution,~ jointly trained the two translators. \n \\\\\n \\textbf{Monolingual data selection}: In BT, simply back-translating all the available monolingual data would not guarantee optimal results. One factor that determines the performance of back-translation is the original-synthetic data ratio~. Thus, the synthetic to original parallel data ratio has to be selected carefully. The purpose of data selection is to select the best subset from the available monolingual corpora to be back-translated~.\n \\\\\n \\textbf{Synthetic parallel data filtering}: Even if a subset of monolingual data is selected to be back-translated, the resulting synthetic data could contain noise. Data filtering refers to the process of selecting a subset of the generated synthetic parallel sentences (the highest quality ones) to be used alongside the original data to train the NMT system~. \n \\\\\n \\textbf{Distinguishing between original and back-translated data}: Even after the monolingual data selection and synthetic parallel data filtering discussed above, it is highly likely that this data would be of lesser quality, compared to the original parallel data. 
It has been shown that adding a tag to the back-translated data to distinguish it from original data gives better results~. An alternative is to distinguish these two types of data by assigning a weight according to the quality of sentences.\n \\\\\n \\textbf{Sampling}: In sampling, multiple source sentences are generated per target sentence in an attempt to average out errors in synthetic sentences~. Note that this technique, as well as the two previously discussed techniques, can be applied with other forms of DA techniques as well. However, we find experiments reported only in the context of BT. \\\\\n{\\textbf{3. Parallel Data Mining (bitext mining) from comparable corpora:}}\nComparable corpora refer to texts on the same topic that are not direct translations of each other but may contain fragments that are translation equivalents (e.g.~Wikipedia or news articles reporting the same facts in different languages). Parallel sentences extracted from comparable corpora have long been identified as a good source of synthetic data for MT. \nBecause recently introduced multilingual sentence embeddings have become the common technique for generating parallel data to train NMT models, we only discuss those techniques\\footnote{We only discuss multilingual sentence embedding techniques that have been evaluated on NMT tasks. Some techniques have been evaluated on other NLP tasks such as Natural Language Inference.}. In these techniques, a multilingual sentence embedding representation is first learnt between two or more languages. Then, during the sentence ranking step, for each given sentence in one language, a set of nearest neighbours is identified as parallel sentences from the other language, using a sentence similarity measurement technique.\n\\textbf{Multilingual Embedding generation: }Early research used NMT-inspired encoder-decoder architectures to generate multilingual sentence embeddings. 
These include bilingual dual encoder architectures~, and shared multilingual encoder-decoder architectures~.~ leveraged the shared encoder-decoder model across 93 languages, which was publicly released as the LASER toolkit. This toolkit was the base for subsequent massive-scale parallel corpus extraction projects~.\nThe above-discussed multilingual embedding generation techniques require large parallel corpora during the training process. As a result, unsupervised~, as well as self-learning~ NMT architectures have been used to generate multilingual embeddings. \nA notable development is the use of pre-trained multilingual embedding models such as multilingual BERT (mBERT) or XLM-R~. \\\\\n\\textbf{Sentence Ranking: }Sentence similarity measurement has largely been unsupervised (cosine similarity being the simplest measure employed). However, this simple method is suboptimal, and improved cosine similarity measurements are available~. In addition, supervised sentence similarity measurement techniques have been employed to a lesser extent~.", "id": "0432fa8e-8177-477c-9c23-7ac3c2857358", "level": "subsection", "origin_cites_number": 39, "parent_id": "04e32557-9bb1-48a3-8d8c-57a570286aa7", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "NMT Techniques for Low-Resource Languages" ], [ "subsection", "Data Augmentation Techniques" ] ], "subsections": [ "fca0288a-59c1-43bd-8c9a-827609a8c944" ], "title": "Data Augmentation Techniques" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 4992, 7164, 4994 ], "content": "The above three data augmentation techniques have shown promising results for translation between LRL pairs. However, each technique has its practical limitations when applied to the translation of LRL pairs. BT assumes that an MT system is already available for the given language pair. 
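As an illustration of the sentence-ranking step of parallel data mining described above, the following sketch ranks candidate target sentences by cosine similarity in a shared embedding space; the toy 2-D vectors and the threshold are illustrative stand-ins for LASER/mBERT-style sentence embeddings and a tuned margin:

```python
import math

def cosine(u, v):
    """Plain cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def mine_pairs(src_vecs, tgt_vecs, threshold=0.9):
    """For each source sentence embedding, pick the nearest target
    embedding by cosine similarity; keep the pair only if it clears
    the threshold. Embeddings are assumed to live in a shared
    multilingual space (e.g. produced by a LASER-style encoder)."""
    mined = []
    for i, sv in enumerate(src_vecs):
        best_score, best_j = max((cosine(sv, tv), j)
                                 for j, tv in enumerate(tgt_vecs))
        if best_score >= threshold:
            mined.append((i, best_j, best_score))
    return mined

# Toy shared-space embeddings: source 0 aligns with target 1, and vice versa.
src = [[1.0, 0.0], [0.0, 1.0]]
tgt = [[0.1, 1.0], [1.0, 0.1]]
pairs = mine_pairs(src, tgt, threshold=0.9)
```

As the surrounding text notes, raw cosine ranking like this is suboptimal; production systems use improved (margin-based) similarity scores.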
Moreover, as evidenced by many empirical studies, the success of BT depends on many factors such as the original-synthetic parallel data ratio, and the domain relatedness of the parallel and monolingual data~. In addition, there have been attempts to leverage high-resource language data for LRLs with BT; however, its success depends on the relatedness of the high-resource language and the LRL~ or the availability of bilingual lexicons~. Word or phrase replacement based augmentation techniques rely on language-specific resources (e.g.~bilingual dictionaries, Part of Speech (POS) taggers, dependency parsers) that many LRLs would not have. One possibility is to explore the use of neural language models trained on monolingual data (e.g. BERT and its variants) to increase the fluency of the synthetic data. For parallel data mining, the applicability of pre-trained multilingual models such as LASER or mBERT is restricted to the languages already included in the pre-trained model. Thus, it is worthwhile to investigate the heuristic and statistical-based parallel corpus mining techniques employed in early SMT research, in the context of LRLs.", "id": "fca0288a-59c1-43bd-8c9a-827609a8c944", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "0432fa8e-8177-477c-9c23-7ac3c2857358", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "NMT Techniques for Low-Resource Languages" ], [ "subsection", "Data Augmentation Techniques" ], [ "subsubsection", "Data Augmentation for Low-Resource Language NMT" ] ], "subsections": [], "title": "Data Augmentation for Low-Resource Language NMT" }, { "cite_extract_rate": 0.8, "cites": [ 5005, 8868, 5001, 7877, 5004, 1551, 4999, 4997, 5002, 4995, 5003, 8869, 4996, 4998, 5000, 8870 ], "content": "\\label{unsupervised_nmt} \nThe generation of parallel corpora for language translation is expensive and resource-intensive. In contrast, monolingual corpora are often easier to obtain. 
As a result, unsupervised NMT using monolingual corpora or cross-lingual word embeddings (i.e., cross-lingual representations of words in a joint embedding space) is less data-intensive for LRL-NMT \\footnote{The input corpora used for training data for unsupervised NMT is assumed to be monolingual corpora, while testing uses parallel corpora to evaluate the true translation}. \nGenerally, the architecture for unsupervised NMT makes use of Generative Adversarial Networks (GANs) and contains the following three steps~: i) initialization, ii) back-translation, and iii) discriminative classifier. \n\\textbf{Initialization}:\nThe underlying goal of the initialization step is to bridge the gap between the input representations of the different languages in an unsupervised manner. \nAs shown in Figure~\\ref{fig:unsupervised}, the unsupervised NMT model is initialized by learning a mapping between two or more languages. \nThe intuition is that human beings live a shared common experience in the same physical world. Thus, the embedding of different languages should have a shared mathematical context. Researchers have experimented with various linguistic resources and neural input representations for initialisation. \\\\\nThe traditional lexical resources include bilingual dictionaries or word-by-word gloss (inferred by aligning words , phrases , or sub-words ).\\\\\nNeural representations include cross-lingual n-grams, word embeddings, language models (LMs) or dependency embeddings . \nThese neural input representations are either jointly trained by concatenating the source and target monolingual corpora or by learning the transformation between the separate monolingual embeddings to map them into the shared space. One such technique is to leverage the available bilingual dictionaries by initialising the model with bilingual word embedding . 
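The embedding-mapping initialisation described above can be sketched in two dimensions; the seed dictionary, the 2-D vectors, and the function names below are all illustrative, and real systems solve the orthogonal Procrustes problem over high-dimensional spaces via SVD:

```python
import math

def learn_rotation(seed_pairs):
    """2-D orthogonal Procrustes: find the rotation angle that best
    maps source-language embeddings onto the embeddings of their
    dictionary translations.
    seed_pairs: list of ((x1, x2), (y1, y2)) vector pairs drawn from
    a small bilingual seed dictionary (toy 2-D vectors; real systems
    operate on 300-d+ spaces)."""
    dot = sum(x1 * y1 + x2 * y2 for (x1, x2), (y1, y2) in seed_pairs)
    cross = sum(x1 * y2 - x2 * y1 for (x1, x2), (y1, y2) in seed_pairs)
    return math.atan2(cross, dot)

def rotate(vec, theta):
    """Apply the learnt rotation, mapping a source-language vector
    into the target-language embedding space."""
    x, y = vec
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

# Toy seed dictionary whose target space is the source space rotated 90 degrees.
seed = [((1.0, 0.0), (0.0, 1.0)), ((0.0, 1.0), (-1.0, 0.0))]
theta = learn_rotation(seed)
```

With the rotation learnt from a small seed dictionary, every source-language vector can be mapped into the target space, yielding the kind of shared space that the subsequent steps rely on.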
Instead of words, using sub-word representations such as Byte Pair Encoding (BPE) has shown more promise~.\nIn recent works, it has been shown that cross-lingual masked LMs can be more effective in initialising the models . During training, the LM tries to predict the tokens that are randomly masked in the input. further extended this line of work by using n-grams instead of BPE tokens and inferred the cross-lingual n-gram translation tables. \n\\textbf{Back-Translation}: \nNext, the generative step uses back-translation (discussed previously in Section~\\ref{sec:dataAug}) by a denoising autoencoder that combines forward and backward translation from source to target, then target back to the source. The loss function compares the original source text against the doubly translated text. Again, the intuition is that there exists a common latent space between two languages, so that the model can reconstruct a sentence from a given noisy version of it, and then reconstruct the source sentence from a noisy translation.\\\\\nRecent back-translation based on multi-NMT (Section~\\ref{sec:multiNMT}) models has shown much better results compared to the bilingual counterpart ~.\n\\textbf{Adversarial Architecture}: \nFinally, a discriminative step uses a binary classifier to differentiate the source language from the target language by distinguishing translated target text from original target text. An adversarial loss function trades off the reconstruction loss from the back-translation against the discrimination loss from the classifier. The result of this step is a high-quality translation that is more fluent for LRLs.\\\\\nExisting methods in the unsupervised NMT literature modify the adversarial framework by incorporating additional adversarial steps or additional loss functions into the optimization step. These include dual cycle-GAN architecture and local-global GAN . 
On the other hand, those methods that add a loss function include embedding agreement , edit and extract , and comparative translations .\n \\begin{figure}\n \\centering\n \\includegraphics[scale = 0.45]{Figures/Figure2.png}\n \\caption{The initialization step of unsupervised NMT, where two languages are mapped to a common space. The input is a trained embedding for each language and a dictionary of mapped words. The dictionary pairs help guide the two embeddings by a) rotation and b) alignment. The resulting embedding space of the dictionary pairs is matched. }\n \\label{fig:unsupervised}\n\\end{figure}", "id": "500a7ba1-62dc-4a3c-9cc9-6cc93bd8c606", "level": "subsection", "origin_cites_number": 20, "parent_id": "04e32557-9bb1-48a3-8d8c-57a570286aa7", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "NMT Techniques for Low-Resource Languages" ], [ "subsection", "Unsupervised NMT" ] ], "subsections": [ "67781361-7535-4644-bd90-d3b914dc874a" ], "title": "Unsupervised NMT" }, { "cite_extract_rate": 0.583333333333333, "cites": [ 8870, 8869, 5008, 5006, 5000, 5007, 4995 ], "content": "The majority of the early unsupervised techniques focused on high-resource languages that have monolingual data in abundance ; except for English-Russian and English-Romanian . \nHowever, having the required input representation for the LRLs is a limitation because some LRLs do not have bilingual dictionaries or proper word alignments. On the other hand, to build a neural representation, large monolingual corpora are needed. How these resources perform in extreme LRLs has not been properly studied. More recent work that explored the conditions for using unsupervised NMT for LRLs having less monolingual data is a promising development~. \\\\ \nVarious researchers have found a reduced translation quality for LRL pairs that are not from similar linguistic families or similar domains . 
\nFour areas of concerns are: i) different script and dissimilar language, ii) imperfect domain alignment or domain mismatch, iii) diverse datasets, and iv) extremely LRLs .\nTo resolve issues i and ii, proposed robust training by using language models that are agnostic to language similarity and domain similarity, while resolved issue i by combining transfer learning from HRL to LRL with unsupervised NMT to improve translations on a low-resource supervised setup.", "id": "67781361-7535-4644-bd90-d3b914dc874a", "level": "subsubsection", "origin_cites_number": 12, "parent_id": "500a7ba1-62dc-4a3c-9cc9-6cc93bd8c606", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "NMT Techniques for Low-Resource Languages" ], [ "subsection", "Unsupervised NMT" ], [ "subsubsection", "Unsupervised NMT for Low-Resource Languages" ] ], "subsections": [], "title": "Unsupervised NMT for Low-Resource Languages" }, { "cite_extract_rate": 0.555555555555555, "cites": [ 7878, 5009, 5010, 7475, 2489, 5011, 9121, 5012, 4983, 4988 ], "content": "\\label{semi-supervised}\nIn contrast to unsupervised techniques, semi-supervised techniques assume the availability of some amount of parallel corpora alongside monolingual corpora. Semi-supervised techniques can be categorised according to the way the monolingual data is utilised: \n\\textbf{Using monolingual data to generate synthetic parallel data:} The simplest strategy is to create the synthetic parallel corpus (this is essentially data augmentation) either by 1) copying monolingual data of one language as the translated text~, or 2) creating the source side with a null token~. A better way to generate synthetic data is through back-translation, as discussed in Section~\\ref{sec:dataAug}.\n\\textbf{Using monolingual data to generate a language model (LM):} A LM can be integrated into the target-side of the NMT to improve the fluency of the generated text. 
This is named LM fusion, which can be broadly categorised as shallow fusion and deep fusion~. In shallow fusion, the LM is used to score the candidate words generated by the decoder of the NMT system, either at inference time~ or at training time~. In deep fusion, the NMT architecture is modified to concatenate the LM with the decoder. Deep fusion provides better performance. However, LM fusion has a few limitations: 1) the NMT model and LM are trained independently and are not fine-tuned, 2) the LM is only used at the decoder, 3) in deep fusion, only the final layers of the LM are integrated, disregarding the low-level LM features, and 4) the NMT architecture has to be changed to integrate the LM~. \nInstead of LM fusion,~ used the trained LM as a weakly informative prior, which drives the output distributions of the NMT model to be probable under the distributions of the LM. This does not require any change to the NMT architecture. \nAnother alternative is to use LMs to initialize the NMT model.~ initialized both encoder and decoder with the LMs of the respective languages, while~ used source embeddings to initialize the encoder. Following this line of research, recently there have been initiatives to incorporate BERT fine-tuning for NMT~. \nA promising extension is the use of pre-trained multilingual LMs, such as mBART, in the form of an autoregressive sequence-to-sequence model~. \n\\textbf{Changing the NMT training objective to incorporate monolingual data: }\n appended a reconstruction term to the training objective, which reconstructs the observed monolingual corpora using an autoencoder. This method assumes both source and target monolingual corpora are available. They jointly train source-to-target and target-to-source translation models, which serve as the encoder and decoder (respectively) for the autoencoder. also made use of both source and target monolingual data and employed source-to-target and target-to-source translation models. 
They introduced a new training objective by adding a joint Expectation Maximization (EM) estimation over the monolingual data to the Maximum Likelihood Estimation (MLE) over parallel data.
\textbf{Multi-task learning:} Here, separate NMT models are used: one model is trained on the aligned sentence pairs to predict the target sentence from the source sentence, while the other is trained on the source monolingual data to predict the reordered source sentence from the original source sentence. In essence, they strengthened the encoder using source-side monolingual data.~ followed a similar approach; however, they strengthened the decoder using target-side monolingual data. A similar technique is joint learning, where the source-to-target and target-to-source translation models, as well as language models, are aligned through a shared latent semantic space~.
\textbf{Dual Learning:} Dual learning is based on the concept of Reinforcement Learning (RL) and requires monolingual data on both sides. Parallel data is used to build two weak source-to-target and target-to-source translation models. Then, monolingual data from both sides undergoes a two-hop translation. For example, source-side data is first translated using the source-to-target model, the output of which is again translated by the target-to-source model. This final output is evaluated against the original monolingual sentence and is used as a reward signal to improve the translation models~. This process is carried out iteratively and shows some resemblance to iterative BT~.
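The round-trip signal at the heart of dual learning can be illustrated with a toy sketch. The dictionary "translation models" here are hypothetical stand-ins for the two weak NMT models; a real system would score the reconstruction with model probabilities and a target-side language model rather than exact word matches.

```python
# Toy sketch of the dual-learning round-trip reward (illustrative only:
# word-for-word dictionaries stand in for the two weak translation models).

def translate(sentence, table):
    """Word-by-word translation; unknown words pass through unchanged."""
    return [table.get(word, word) for word in sentence]

def round_trip_reward(sentence, src2tgt, tgt2src):
    """Fraction of words recovered after translating out and back again.
    Dual learning uses such a reconstruction score as (part of) the reward
    signal for iteratively updating both weak translation models."""
    reconstructed = translate(translate(sentence, src2tgt), tgt2src)
    matches = sum(a == b for a, b in zip(sentence, reconstructed))
    return matches / len(sentence)

src2tgt = {"the": "le", "cat": "chat", "sleeps": "dort"}
tgt2src = {"le": "the", "chat": "cat", "dort": "sleeps"}

print(round_trip_reward(["the", "cat", "sleeps"], src2tgt, tgt2src))  # 1.0
```

A degraded reverse model would lower the reward, which is exactly the signal the RL update exploits.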
However, RL-based techniques are known to be very inefficient~.~ also argued that the above RL-based technique does not properly exploit the monolingual data, and suggested several improvements.~ transferred the knowledge learned in this dual translation task into the primary source-to-target translation task.
Although semi-supervised techniques have been presented as a solution to the scarcity of parallel data, we note the following concerns with respect to their applicability in the context of LRLs: 1) An LRL translation scenario has typically been simulated by taking small amounts of parallel data from high-resource languages such as English, French, German, and Chinese. 2) Some research has employed very large monolingual corpora; although many LRLs have monolingual corpora larger than their parallel corpora, it is difficult to assume they would have such large amounts of monolingual data. 3) There is a lack of comparative evaluations across the different semi-supervised techniques. Except for a few studies~, most of the others compared with back-translation only. Interestingly, some reported results below BT~ and iterative BT~, while others reported only marginal gains over BT~. This casts doubt on the actual benefit of these sophisticated techniques.
Thus, to establish the usefulness of these techniques for true LRLs, experiments should be carried out across a wide array of languages and different monolingual dataset sizes.\\
Although integrating massive language models such as BERT has shown promising results, these techniques have likewise been tested only with high-resource languages. How these models would work with models built with rather small amounts of monolingual data should be investigated\footnote{Note that there is a long list of very recent publications in this line. However, due to this reason, above we cited only one.}. However, multilingual models such as mBART are indeed very promising for the translation of LRL pairs, as has already been demonstrated~ and as further discussed in Section~\ref{sec:multiNMT}.
\label{sec:multiNMT}
Multilingual NMT (multi-NMT) systems are those handling translation between more than one language pair~. Recent research has shown that multilingual models outperform their bilingual counterparts, in particular when the number of languages in the system is small and those languages are related. Particularly, in English-centric datasets, multi-NMT models trained with roughly $50$ languages have shown clear performance gains over bilingual models for LRLs~.
This is mainly due to the capability of the model to learn an \textit{interlingua} (shared semantic representation between languages)~. Training multi-NMT models is also a more practical solution than building separate bilingual models in a real-world setting~.
Despite these benefits, multi-NMT faces challenging problems such as i) the inclusion of a large number of languages that have varying differences among them, ii) noise, especially in the parallel data used, iii) data imbalance (some languages having just a fraction of the parallel sentences available for high-resource languages), and iv) other discrepancies concerning factors such as writing style and topic~.
With respect to the translation task, multi-NMT can be categorised into three types (Figure~\ref{fig:multiNMT}):
\begin{enumerate}
 \item Translating from one source language to multiple target languages (one-to-many) (Figure~\ref{fig:multiNMT} (a)): This is essentially a multi-task problem, where each target becomes a new task.
 \item Translating from multiple source languages to a single target language (many-to-one) (Figure~\ref{fig:multiNMT} (b)): This can be considered a multi-source problem, regarded as relatively easier than the multi-task problem.
 \item Translating from multiple languages to multiple languages (many-to-many) (Figure~\ref{fig:multiNMT} (c) and (d)): This is the multi-source, multi-target problem, and the most difficult scenario.
\end{enumerate}
Supervised multi-NMT architectures introduced to tackle the aforementioned translation tasks can be broadly categorised into three paradigms: i) a single encoder-decoder for all the languages (all source sentences are fed into the encoder irrespective of the language, and the decoder can generate any of the target languages; Figure~\ref{fig:multiNMT} (c)); ii) per-language encoder-decoder (each source language has its own encoder, and each target language has its own decoder; Figure~\ref{fig:multiNMT}
(d)); and iii) a shared (single) encoder/decoder on one side, with per-language decoders/encoders on the other side (Figure~\ref{fig:multiNMT} (a) and (b)). The main objective of these different architectures is to maximize the common information shared across languages while retaining language-specific information to distinguish between different languages. This mainly depends on how parameters are shared between individual encoders and decoders. All of these architectures are based on either the recurrent model with attention, or the transformer-based model~. Comparison of the recurrent model against the transformer model under the same settings has shown that the latter is better. Almost all the recent multi-NMT architectures are based on the transformer model.
 \textbf{Single encoder-decoder for all the languages:}
For large-scale multi-NMT implementations, this is currently the state of the art, especially in real-world industry-level systems~. Because all the source languages share the same encoder and all the target languages share the same decoder, while the model simultaneously supports the one-to-many, many-to-one, and many-to-many cases, it is commonly known as the `\textit{universal NMT model}'. The main advantage of a universal model is its lower model complexity compared to per-language encoder-decoder models (discussed next), because it has a lower parameter count.
Moreover, as demonstrated by , this universal model is capable of learning a form of interlingua, which is crucial in facilitating zero-shot translation (see Section~\ref{sec:zero_shot}).
A major challenge in using this architecture is enabling the decoder to distinguish the target language. The common practice is to add a language identification tag to the source sentence~.
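This tagging practice can be shown with a minimal sketch. The `<2xx>` token format follows the convention popularised by Google's multilingual system; the helper function name is illustrative, not from any particular toolkit.

```python
def add_target_tag(source_tokens, target_lang):
    """Prepend an artificial target-language token so that a single shared
    encoder-decoder knows which language to produce, e.g. "<2fr>" meaning
    "translate into French". The tag is treated as an ordinary vocabulary
    item during training."""
    return ["<2{}>".format(target_lang)] + list(source_tokens)

# The same source sentence, directed at two different target languages:
print(add_target_tag(["hello", "world"], "fr"))  # ['<2fr>', 'hello', 'world']
print(add_target_tag(["hello", "world"], "de"))  # ['<2de>', 'hello', 'world']
```

Because the tag is just another token, no architectural change is needed to steer the shared decoder.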
An alternative is to add the language name as an input feature~.
More recent work has used language-dependent positional embedding representations~.
\textbf{Per-language encoder-decoder:}
In this architecture, there is a separate encoder per source language, as well as a separate decoder per target language. As opposed to the universal NMT models described above, the requirement to capture language-specific features is easily satisfied by this setup. However, sharing common information across languages is a challenge. The commonly applied solution is the use of shared parameters in the model, employing shared attention~.
Later work extended this idea even further, introducing a contextual parameter generator, which enables the model to learn language-specific parameters while sharing information between similar languages.
 \begin{figure}
 \centering
 \includegraphics[scale = 0.5]{Figures/Figure3.png}
 \caption{Supervised multi-NMT architectures.}
 \label{fig:multiNMT}
\end{figure}
\textbf{Single encoder with per-language decoder / per-language encoder with single decoder:}
The single-encoder, multiple-decoder architecture supports the one-to-many scenario, where a single source language gets translated into multiple languages via multiple decoders (multi-task). The multiple-encoder, single-decoder architecture supports the many-to-one scenario, where multiple encoders process separate source languages while a single decoder translates into one target language (multi-source).
In the one-to-many scenario, each decoder has its own attention mechanism, so no parameter sharing takes place at the decoder side~. A more effective approach is to allow partial sharing of parameters across decoders~.
When there are multiple encoders alongside a single decoder, the encoder outputs have to be combined before being sent to the decoder. Initial solutions assumed the corpus used is multi-way parallel~.
Ways to relax this restriction are either to mark a corresponding sentence as null if a particular language does not have that sentence~, or to generate the missing sentences with a pre-trained model~.
Much research has evaluated the pros and cons of these different architectures. For example, one study showed that a unique decoder for each target language, or unique decoder attention parameters for each target language, outperform models with fully shared decoder parameters.~ obtained better results with partial parameter sharing in the transformer model, over the full-parameter-sharing recurrent model of . However, the best model selection would depend on the nature of the task at hand. For example, if the model is expected to deal with hundreds of languages, it is desirable to have maximum parameter sharing, as in , to reduce model complexity.
\label{Low-Resource_Languages}
All three of the supervised multi-NMT techniques discussed above have been leveraged for LRL translation. In addition, multilingual versions of unsupervised and semi-supervised NMT models, as well as transfer learning on multi-NMT parent models, have been used for LRL translation.
\textbf{1.
Supervised multi-NMT architectures:} In the available multilingual datasets, the LRL pairs are heavily under-represented. Thus, the results of supervised multi-NMT models for LRL pairs are far below the results for high-resource languages, even though they use the same multi-NMT model~. The simplest strategy to alleviate this problem would be to over-sample the parallel corpus related to the LRL pair. There are different sampling strategies, such as simple over-sampling and temperature-based sampling~. Sampling data into mini-batches also has to be given careful consideration, to avoid any form of bias. Some strategies include scheduling (cycling through the bilingual language pairs)~, using mini-batches that consist of different languages~, or using mini-batches that contain data from the same target~. Data augmentation~ (Section~\ref{sec:dataAug}) and pivoting~ (Section~\ref{sec:zero_shot}) can also be used to generate parallel data for LRL pairs.
\textbf{2. Unsupervised multi-NMT:} When a multi-NMT model is built entirely upon monolingual data, it is referred to as unsupervised multi-NMT, which is trained following a similar process to that of bilingual unsupervised models (see Section~\ref{unsupervised_nmt}). The difference between bilingual and multilingual NMT comes from the way the input representation is constructed. In the English-centric unsupervised model proposed by , the embeddings of non-English languages are first mapped into the latent space of English embeddings.~ constructed a multilingual masked language model using only a single encoder. Better results over the purely unsupervised model can be obtained if at least one language has parallel data with some other language~.
\textbf{3. Semi-supervised multi-NMT:} In semi-supervised multi-NMT, monolingual data is used to create an additional training objective on top of the supervised translation training objective.
While~ used the MASS objective~ for this purpose,~ employed two monolingual auxiliary tasks: masked language modelling (MLM) for the source side, and denoising autoencoding for the target side. Semi-supervised NMT is further discussed in Section~\ref{semi-supervised}.
\textbf{4. Transfer Learning on a pre-trained multi-NMT model:} Transfer Learning is discussed in Section~\ref{transfer_learning_nmt}. Here we note that transfer learning using a multilingual parent has been identified as a promising approach for LRL-NMT~. In particular, some LRL data may not be available during multi-NMT training time, and very large multilingual models cannot be re-trained every time parallel data for a new language pair becomes available.
\textbf{Input Representation:} Regardless of the multi-NMT methodology selected, a major factor that decides the success of multi-NMT for LRLs is the input representation. The input representation determines the ability to group semantically similar words from different languages in the embedding space. Input representations can be broadly broken down into two categories: surface-form (word-level) representations, and embedding-based representations.
When the surface-form representation is used, a semantic grouping of words is achieved by adding a language token to each word~, or by using additional information such as the POS of words~. However, using word-level input results in large vocabulary sizes that are difficult to scale. Even for linguistically similar languages that share a common script, the amount of vocabulary overlap is minimal~. LRLs are severely affected by this~.
As a solution, sub-word-level encoding (BPE~, sentence-piece representation~, or transliteration~) was used.
It has been noted that even sub-word-level encoding does not create enough overlap for extremely LRLs, since it still uses the surface form of the word.
Further, as has been pointed out, with such sub-word-based techniques, semantically similar and similarly spelt words could get split into different sub-words for different languages.
An alternative is to use input representations based on cross-lingual embeddings. It has been argued that, when the input languages are in the same semantic space, the encoder has to learn a relatively simple transform of the input. Moreover, in such shared spaces, LRLs get enriched with more semantic information with the help of high-resource languages. Such universal embedding representations have shown very promising results for low-resource as well as extremely low-resource language pairs~.
A very interesting development is the use of multilingual denoising models pre-trained on monolingual data of a large number of languages (e.g.~mBART~), which can be fine-tuned on multilingual translation tasks. This has shown very promising results for LRL pairs~.
In summary, we see research efforts on multiple fronts to leverage multi-NMT for LRLs. While earlier research experimented with datasets sub-sampled from high-resource language pairs, later research has experimented with actual LRLs~. Moreover, it is encouraging to see very promising results from transfer learning over multi-NMT models, as training large multi-NMT models is time-consuming. However, most of this research has been carried out holistically, without focusing on individual-language or language-family characteristics.
Exploiting these characteristics to better leverage NMT for LRLs would result in multi-NMT models that are more focused on a required set of languages.
\iffalse
\label{multi-NMT_zero}
Zero-shot and zero-resource scenarios can be considered a special case of low-resource language translation. However, we believe they need a separate discussion, as zero-shot translation may take place between high-resource languages that simply do not have large parallel corpora with each other. To be more specific, as mentioned in Section~\ref{Pivoting}, zero-shot translation may not be viable when the considered languages do not have large parallel corpora with any other language in the model. \\
For zero-shot translation to be plausible, the model should be capable of learning a good interlingua. As long as this requirement is satisfied, both shared encoder-decoder models and per-language encoder-decoder models should work. Although some early research showed the latter model to be better, later research focused on improving the former model, initially introduced by . These continuous efforts finally guaranteed that zero-shot translation using single encoder-decoder models outperforms pivot-based zero-shot translation, which had been the de facto solution for a long time.
Moreover, truly many-to-many models have recently been shown to beat English-centric models in zero-shot settings. \\
If the multi-NMT model is truly capable of learning a language-independent interlingua, there should be little correlation between the source and target languages. However, when the model is trained with a large number of languages, the modelling capacity has to be distributed across all the languages, suggesting that the overall model capacity faces a bottleneck. Thus, the learned interlingua is not fully language-independent. Two solutions have been presented to solve this problem: to explicitly make the source and target languages independent, and to improve model capacity.\\
\begin{enumerate}
\item Make the source and target languages independent:
\begin{itemize}
 \item pre-train the decoder as a multilingual language model~,
 \item iteratively train a multilingual model from source-target and target-source directions, conditioning the output on pre-trained strong source and target language models of the zero-shot language pair~,
 \item use regularization during the training process~, and
 \item use an additional loss function that serves as the objective for a new translation direction from source to source~.
\end{itemize}
\fi
\iffalse
\item Improve model capacity:
\begin{itemize}
 \item add language-aware layer normalization and linear transformation between the encoder and decoder~, and
 \item add a parallel transformer layer, the parameters of which are split by language or language group~.
\end{itemize}
\end{enumerate}
While all the above methods assume that every language exists in the multi-NMT model as parallel data with some other language, ~ showed that this does not necessarily have to be the case.
Using an additional training objective from the monolingual data of an unseen language along with the multi-NMT training objective, they showed very promising results for zero-shot translation.\\
Transfer learning can be considered a form of zero-shot translation if at least one of the languages of the child pair is not included in the parent model.~ experimented with zero-shot transfer learning, where the child model has one unseen language (either at the source or target side). They demonstrated how translation for the unseen language is derived from a carefully selected set of languages in the parent model.\\
As mentioned in Section~\ref{multi-NMT_zero}, in the zero-resource case there is no authentic parallel data; however, synthetic parallel data is generated to train the multi-NMT model. Thus, in the eyes of the model, it is no longer a zero-shot problem. Multiple methods have been explored for synthetic data generation:
\begin{itemize}
 \item pivoting,
 \item back-translation,
 \item bridging (a language pair that has the same language as the source and target), and
 \item first generating zero-shot translations using the trained multi-NMT model on some portion of the training data, then re-starting the training process on both the generated translations and the original parallel data.
At each iteration, the original training data is augmented only with the last batch of generated translations.
\end{itemize}
\fi
\label{sec:transfer_learning}
\label{transfer_learning_nmt}
Transfer learning is a sub-area of Machine Learning that reuses (i.e.~transfers or adapts) knowledge gained from solving one particular task, problem, or model (parent) by applying it to a different but related one (child)~. The viability of transfer learning for NMT was first demonstrated by~. In NMT, the parent model is first trained on a large corpus of parallel data from a high-resource language pair (or pairs), and is then used to initialize the parameters of a child model that is trained on a relatively smaller parallel corpus of the LRL pair (Figure~\ref{fig:transfer_leraning}).
The advantages of transferring knowledge from the parent model to the child model include i) reducing the size requirement on child training data, ii) improving the performance of the child task, and iii) faster convergence compared to child models trained from scratch.
The transfer process in NMT models can be broadly categorised as either warm-start or cold-start~. Due to the availability of child parallel data during parent model training, warm-start systems are more accurate and have been the focus of most of the previous work~.
However, cold-start systems are also of importance due to their resemblance to a real-life scenario where child parallel data is not always available at parent model training time~.
As shown in Figure~\ref{fig:transfer_leraning}, the first step in transfer learning is to train a parent model, which could be either bilingual or multilingual (note that the source and target in multi-NMT models can be many-to-one, one-to-many, or many-to-many; a special case of multi-NMT-based transfer learning is fine-tuning large-scale multilingual language models such as mBART using small amounts of parallel data~, as already mentioned in Section~\ref{sec:multiNMT}). However, the bilingual parent model is more common. The majority of the time, the parent and child have the same target language~, while others use the same source language for both the parent and child~. However, it is also possible for the parent and child not to have any languages in common~. Often, multi-NMT models used as parents in transfer learning have been trained in the many-to-one setting~. Whether the parent model is trained on bilingual or multilingual source-target languages, the child has always been bilingual, with the exception of~, which progressively fine-tuned a parent model in order to build a model that adequately performs on multiple language pairs.
\begin{figure}
 \centering
 \includegraphics[scale = 0.5]{Figures/Figure4.png}
 \caption{Transfer Learning process.}
 \label{fig:transfer_leraning}
\end{figure}
Improvements in transfer learning for NMT correspond to three main aspects: i) minimizing the language space mismatch between languages, ii) the fine-tuning technique, and iii) the transfer protocol.
\textbf{Minimizing the language space mismatch}: Transfer learning systems have to address the problem of language space mismatch, since parent and child languages may not have the same feature distribution~.
When the surface form is used as input, this language mismatch problem becomes a vocabulary mismatch between the parent and child models. In warm-start systems, sub-word segmentation models can be applied to the parent and child training data to build joint vocabularies~.~ took this idea even further and introduced a universal vocabulary for the parent to train on.
~ showed that a sub-word-based vocabulary can be employed even in cold-start scenarios by building a dynamic vocabulary. However, for cold-start scenarios, the better alternative is to pre-train a universal input representation, including child monolingual data, if available~.
\textbf{Fine-tuning technique:} Transferring knowledge from the parent model to the child model requires fine-tuning the parameters trained in the parent model on the child dataset. Conversely, when a particular layer of the parent model is not fine-tuned, this is called freezing that layer of the parent model.
Below we list fine-tuning strategies that have been explored, where the best freezing setup depends on factors such as the neural architecture employed, the translation task, and the dataset size. Thus, we do not draw any conclusions on the best fine-tuning strategy.
\textit{No fine-tuning: }The whole parent model is frozen (in other words, copied) to the child~.
\textit{Fine-tune the embedding layer:} The similarity between parent and child language pairs (e.g.~whether parent and child have the same target) determines which embedding has to be fine-tuned. For example, if the parent and child translate into the same target language, the parent decoder embeddings can be transferred to the child~. When the surface-form input is used, the most naive way of transferring the embedding layer is to randomly initialize the parent embedding layer before training the child model~.
A better alternative is to take the parent and child vocabulary overlap while replacing the rest of the parent embeddings with child embeddings~.
\textit{Fine-tune all of the parent model:} No layer of the parent model is frozen~.
\textit{Fine-tune a custom set of layers:} This includes fine-tuning a selected combination of the input and inner layers of the encoder and decoder~.
\textbf{Transfer Protocol: }Varying the transfer protocol is also a promising way to improve NMT transfer learning. This can be done in different forms:
\begin{itemize}
 \item Train a chain of consecutive NMT models by transferring the parameters of a parent model to new LRL pairs~.
 \item Train the initial NMT model on a parallel corpus for a resource-rich language pair, fine-tune it with the combined corpus of parent and child (there can be more than one child), and finally fine-tune further with the selected child data only~.
 \item First train on an unrelated high-resource language pair, then fine-tune on a similar intermediate language pair, and finally fine-tune on the LRL pair~.
\end{itemize}
Multiple factors determine the success of transfer learning. The relationship between the languages used in the parent and child models has been identified as the most crucial~.
High relatedness between languages guarantees high vocabulary overlap when the surface form is used as input, and results in more meaningful cross-lingual embeddings as well. Much research has exploited the vocabulary overlap of related languages using sub-word segmentation to achieve good results with transfer learning~, even when parent and child have no language in common~. An important consideration is the size of the sub-word vocabulary. It should be selected in such a way that the child is not overwhelmed by the parent~. For related languages, transliteration has been shown to reduce lexical divergence~. The syntactic divergence between parent and child can be reduced by re-ordering the parent data~.
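As a rough illustration of the vocabulary-overlap factor discussed above, a minimal sketch follows. Whitespace tokenisation is a simplifying assumption; in practice the overlap would be measured over the shared sub-word vocabulary.

```python
def vocab_overlap(parent_corpus, child_corpus):
    """Fraction of the child vocabulary already covered by the parent
    vocabulary -- a rough proxy for how much lexical knowledge the
    transferred embedding table can reuse."""
    parent_vocab = {w for sentence in parent_corpus for w in sentence.split()}
    child_vocab = {w for sentence in child_corpus for w in sentence.split()}
    if not child_vocab:
        return 0.0
    return len(parent_vocab & child_vocab) / len(child_vocab)

parent = ["the cat sat", "a dog ran"]
child = ["the dog sat", "a bird flew"]
print(round(vocab_overlap(parent, child), 2))  # 0.67: 4 of 6 child words shared
```

Higher overlap, as the text notes, generally correlates with better transfer for related languages.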
Other factors that influence transfer learning include the size of the parent and child corpora, domains of the parent and child data, the number of shared words (vocabulary overlap), and the language script~.", "id": "797cd8b4-4a35-44d9-96d1-49176cef9068", "level": "subsection", "origin_cites_number": 23, "parent_id": "04e32557-9bb1-48a3-8d8c-57a570286aa7", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "NMT Techniques for Low-Resource Languages" ], [ "subsection", "Transfer Learning in NMT" ] ], "subsections": [ "4ad7965b-c608-4132-9a2f-f43017b6b8cf" ], "title": "Transfer Learning in NMT" }, { "cite_extract_rate": 0.5, "cites": [ 5030, 5032, 8874, 5042 ], "content": "Transfer learning was originally introduced as a solution to low-resource (both domain and language) NMT. With respect to translation of LRL pairs, transfer learning using a high-resource pair always yielded better results than training the child model from scratch. This holds even for extremely LR children as well~. Interestingly, some research has shown that transfer learning is better than training a child pair (or a set of pairs ) with one or more parent pairs in a multi-NMT manner~. However, that research has been conducted against an early multi-NMT model~, considering very few languages. Whether the same observation would hold if a more novel multi-NMT model (discussed in Section~\\ref{sec:multiNMT}) is used along with a large number of language pairs should be subject to more research. On the other hand, transfer learning using pre-trained multi-NMT parent models has received only limited attention~. As mentioned above, multiple factors affect the success of transfer learning. Thus the impact of these factors should be evaluated extensively to determine their exact impact on LRL-NMT. 
Zero-shot translation adds an extra condition to the cold-start scenario, meaning that child parallel data is unavailable, as discussed in Section~\\ref{sec:zero_shot}.", "id": "4ad7965b-c608-4132-9a2f-f43017b6b8cf", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "797cd8b4-4a35-44d9-96d1-49176cef9068", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "NMT Techniques for Low-Resource Languages" ], [ "subsection", "Transfer Learning in NMT" ], [ "subsubsection", "Transfer Learning for Low-Resource Languages" ] ], "subsections": [], "title": "Transfer Learning for Low-Resource Languages" }, { "cite_extract_rate": 0.7222222222222221, "cites": [ 5040, 5036, 5038, 5022, 5037, 7875, 8871, 5014, 5029, 5043, 7880, 5047, 5020 ], "content": "\\label{sec:zero_shot}\nIn the zero-shot scenario, no parallel corpus is available for the considered source (X)-target (Z) language pair. We have identified pivoting, transfer learning, multi-NMT and unsupervised NMT as existing solutions in the literature for zero-shot NMT. \n\\textbf{Pivot-based solutions:}\nAn initial solution for zero-shot translation was pivot-based translation, also known as pivoting. Pivoting relies on the availability of an intermediate high-resource language (Y), called the `pivot language'. In pivoting, the translation of X-Z is decomposed into the problem of training two independent high-resource models: source-pivot (X-Y) and pivot-target (Y-Z). A source sentence is first translated using the X-Y model, the output of which is then translated using the Y-Z model to obtain the target sentence.\\\\\nThis basic form of pivoting has two main limitations. First, it suffers from the error propagation problem. Since the source-pivot and pivot-target models are independently trained, errors made in the first phase are propagated into the second phase. This is particularly the case when the source-pivot and pivot-target languages are distantly related. 
Second, as the models have to be independently trained, the total time complexity is increased. \nTo reduce the problem of error propagation, the source-pivot and pivot-target models can be allowed to interact with each other during training by sharing the word embedding of the pivot language~. Another solution is to combine pivoting with transfer learning~. Here, the high-resource source-pivot and pivot-target models are first independently trained as in the basic pivoting technique, acting as the parent models. Then the source-target model (child model) is initialized with the source encoder from the pre-trained source-pivot model, and the target decoder from the pivot-target model. In addition to reducing error propagation, this method reduces time complexity, since only one trained model is used for translation. Another way to reduce error propagation is to use a source-pivot parallel corpus to guide the learning process of a pivot-target model~.~ proposed a similar approach by training the source-target model via Maximum Likelihood Estimation, where the training objective is to maximize the expectation concerning a pivot-source model for the intended source-to-target model on a pivot-target parallel corpus.\nIt has been shown that adding even small amounts of true parallel source-target sentences (thus the extremely low-resource scenario) does increase the translation accuracy in pivoting~. \nAnother possibility is to make use of monolingual data to generate synthetic parallel data. Pivot monolingual data is preferred because, compared to the source or target, the pivot language would have much more monolingual data~.\nA common observation of the above-discussed pivoting research (with the exception of~) is that, although the focus is on zero-shot translation between a source and target, large parallel corpora have been employed for the source-pivot and pivot-target pairs. 
However, some LRLs may not have large parallel datasets even with a high-resource language such as English. Moreover, as empirically shown by~, the performance of pivoting depends on the relatedness of the selected languages. Thus, we believe that more research is needed to determine the impact of pivoting for zero-shot NMT in the context of LRL pairs.\n\\textbf{Transfer Learning-based Solutions:}\nAs mentioned in Section~\\ref{sec:transfer_learning}, transfer learning can be considered a form of zero-shot translation when no parallel data is available for the child model.~ explored transfer learning for zero-shot translation by mimicking the pivoting technique, assuming a high-resource pivot language. They use source-pivot data to build a universal encoder, which is then used to initialize a pivot-target model. This model is used to directly translate source sentences into target sentences.\n\\textbf{Multi-NMT-based Solutions:}\nAlthough pivot-based models were the solution for zero-shot NMT for a long time, recent research showed that multi-NMT models can outperform pivot-based zero-shot translation~. Many-to-many models have recently been shown to beat the English-centric multi-NMT models in zero-shot settings~. \nA multi-NMT model can provide a reasonable translation between a source-target pair when the two languages are included in the model in the form of parallel data with any other language, because the multi-NMT model is capable of learning an `interlingua' (see Section~\\ref{sec:multiNMT}). If the multi-NMT model is truly capable of learning a language-independent interlingua, there should be less correlation between the source and target languages. However, when the model is trained with a large number of languages, the modelling capacity (loosely measured in terms of the number of free parameters for neural networks~) has to be distributed across all the languages, suggesting that the overall model capacity is faced with a bottleneck~. 
Thus, the learned interlingua is not fully language independent. Two solutions have been presented to solve this problem: to explicitly make the source and target languages independent~, and to improve model capacity~. \nAs another solution, synthetic data can be generated between zero-shot language pairs using techniques such as pivoting and back-translation. This synthetic parallel data is included in the multilingual corpus used to train the multi-NMT model. \n\\textbf{Unsupervised NMT-based solutions:}\nUnsupervised NMT techniques discussed in Section~\\ref{unsupervised_nmt} rely only on monolingual data. Thus, this translation task can be considered a zero-shot translation task.", "id": "6cc6b627-54f2-4960-be95-cd6d02584470", "level": "subsection", "origin_cites_number": 18, "parent_id": "04e32557-9bb1-48a3-8d8c-57a570286aa7", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "NMT Techniques for Low-Resource Languages" ], [ "subsection", "Zero-shot NMT" ] ], "subsections": [], "title": "Zero-shot NMT" }, {
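The basic pivoting decomposition described in the zero-shot discussion above is, at its core, a composition of two independently trained models. A minimal sketch (our own illustration, not code from the survey; the two models are toy stand-in functions):

```python
# Minimal sketch of basic pivoting: X -> Y (pivot) -> Z.
# The two models are stand-in functions here; in practice they are
# independently trained NMT systems, which is why errors from the
# first stage propagate into the second stage.
def pivot_translate(src_sentence, src_to_pivot, pivot_to_tgt):
    pivot_sentence = src_to_pivot(src_sentence)  # stage 1: X -> Y
    return pivot_to_tgt(pivot_sentence)          # stage 2: Y -> Z
```

Because `pivot_to_tgt` only ever sees the output of `src_to_pivot`, any mistake in the first stage is baked into the second stage's input, which is the error propagation problem noted above.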
"cite_extract_rate": 0.6666666666666661, "cites": [ 5020, 5017 ], "content": "\\label{techniques_trend}\nSo far we have discussed seven different techniques that can be used for the translation of LRL pairs, as well as zero-shot translation. This section provides a quantitative view of the use of these techniques in the related literature. \nFigure~\\ref{fig:tech_time_GS} shows how the use of different techniques varied from 2014 onwards, based on the research papers indexed in Google Scholar. For each technique, Google Scholar was searched with the following query: ``\\textit{<technique\\_name>}'' + ``\\textit{low-resource}'' + ``\\textit{neural machine translation}'' for the year range 2014-2020. However, we acknowledge that the search results contain noise. For example, in certain cases, unsupervised NMT research was referred to in unsupervised text generation papers. However, here we are only interested in a comparative view; thus, we assume the noise is equally distributed across the search results for all the techniques. \nFigure~\\ref{fig:tech_time_GS} shows that multi-NMT had the highest number of papers until 2019; however, unsupervised techniques have marginally surpassed it since then. \nIn particular, the use of multi-NMT for LRL pairs started growing with the promising results shown by~ and~ around 2016 and 2017. Transfer learning and semi-supervised approaches had similar growth until 2018, whereas from 2019 onwards transfer learning has seen a steep increase in popularity. Data augmentation techniques have gained popularity in 2020 as well. However, pivoting seems to have lost traction, which may be due to the recent advancements in multi-NMT that outperformed pivoting for zero-shot translation~. Overall, it can be seen that the interest of the NMT research community towards LRLs is steadily increasing, irrespective of the type of technique. 
\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{Figures/Figure5.pdf}\n \\caption{Number of Google Scholar search results($N_{GS}$) for different techniques from 2014-2020}\n \\label{fig:tech_time_GS}\n\\end{figure}", "id": "8037047d-efca-4036-926d-d5e9d133f87a", "level": "subsection", "origin_cites_number": 3, "parent_id": "04e32557-9bb1-48a3-8d8c-57a570286aa7", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "NMT Techniques for Low-Resource Languages" ], [ "subsection", "Analysis on the Popularity of LRL-NMT Techniques " ] ], "subsections": [], "title": "Analysis on the Popularity of LRL-NMT Techniques " }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{technique_selection}\nThe effectiveness and viability of the techniques presented in Section~\\ref{section:label-NMT-Techniques-for-Low-Resource-Languages} depend on the size and nature of the available parallel and monolingual data and the computational resources at hand. This section gives a set of guidelines to advise practitioners of LRL-NMT on the suitable technique for a particular data setup. These guidelines should not be construed as rigid criteria but only as an advisory for the practitioners. \nFigure \\ref{fig:flowchart} shows a possible process that can be followed in selecting an NMT technique. This flowchart only provides guidelines for the bilingual scenario where parallel data is available for a pair of languages. However, it should be noted that if sufficient computing resources are available, the multilingual versions of all the techniques can be used as they have shown promising results with respect to LRL pairs. We did not show the multilingual scenario just for the sake of clarity. 
\nWe considered the availability and size of the parallel corpus, the availability of monolingual corpora, and language similarity as the major factors to select a technique.\nThe foremost factor that we have considered is the availability of parallel corpora. If a parallel corpus is available for a language pair, the next step is to check its size (step 1). There is no definite threshold suggested in the literature for the size of the parallel corpus to be considered an LRL scenario in NMT. However, following our discussion in Section~\\ref{LR-Definition}, here we considered an LRL scenario where a particular language pair has less than 0.5M parallel sentences. This is not a hard threshold but a mere suggestion. If a particular language pair has more than 0.5M sentences, we can achieve a reasonable performance through supervised NMT techniques (step 2). \nIf the parallel corpus has less than 0.5M sentences, there could be multiple steps taken by a practitioner (as shown by steps 3-5 in Figure \\ref{fig:flowchart}). One of the steps could be to increase the size of the dataset by using data augmentation (step 3) which is further followed by a supervised NMT technique (step 6). Data augmentation can be performed by using resources such as bilingual dictionaries and monolingual data. The other option could be to integrate the available monolingual and parallel data to perform semi-supervised NMT (step 5).\nIf the source and target languages have parallel data available with some other common language such as English, then we can also recommend attempting pivoting (step 10). However, if such parallel datasets are not available, a practitioner can attempt transfer learning (step 11). For this scenario, a parallel corpus between two high-resource languages can be used to build the parent model, which can further be fine-tuned to the LRL child. Transfer learning can be performed on multi-NMT models as well even when high-end GPU machines are not available. 
As discussed in Section~\\ref{sec:transfer_learning}, the effectiveness of transfer learning depends on language relatedness; therefore, the parent model has to be selected carefully. It should be noted that it is always possible to increase the original dataset size by applying data augmentation techniques, before applying pivoting, transfer learning, or semi-supervised solutions.\nIf the considered LRLs do not have parallel data but have a reasonable amount of monolingual corpora (which is a reasonable assumption for most of the LRLs), unsupervised NMT can be applied (step 13). If a considered language pair has neither parallel data nor monolingual corpora\\footnote{We refer to electronic resources, which could be the case with endangered languages that do not have any web presence.}, the only option is to manually create parallel and/or monolingual corpora (step 14). \nWe would like to conclude with the following two remarks. First, for each of the discussed LRL-NMT techniques, a large body of past related research is available; therefore, practitioners have to carefully select the most appropriate technique to be used as the baseline for their considered languages. This decision depends not only on the exact size of the available datasets but also on the language characteristics and any other associated language/linguistic resources such as POS taggers and bilingual dictionaries. Second, LRL-NMT should be considered an iterative process. For example, as shown by step (15), once a parallel corpus is manually created, the considered language pair now has a parallel dataset of less than 0.5M sentences. With that, either transfer learning or semi-supervised NMT can be tried out. With this trained model, more parallel data can be generated. Although the generated parallel data might not be 100\\% accurate, the noise can be removed via post-processing (manual/automatic/hybrid) to obtain cleaner data. 
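As a rough illustration (our own sketch, not part of the survey's flowchart), the decision points described above might be coded as follows; the 0.5M-sentence threshold and the option names follow the text, and none of this is a rigid rule:

```python
# Hypothetical sketch of the technique-selection guidelines. Returns a list
# of candidate techniques for a language pair, given: the number of parallel
# sentences, and whether monolingual data / pivot-parallel data exist.
def suggest_techniques(n_parallel, has_monolingual, has_pivot_parallel):
    if n_parallel >= 500_000:
        # above the (suggested, not hard) low-resource threshold
        return ['supervised NMT']
    if n_parallel > 0:
        options = ['data augmentation + supervised NMT']
        if has_monolingual:
            options.append('semi-supervised NMT')
        if has_pivot_parallel:
            options.append('pivoting')
        options.append('transfer learning')
        return options
    if has_monolingual:
        return ['unsupervised NMT']
    return ['manual corpus creation']
```

For example, a pair with a small parallel corpus, monolingual data, and a shared high-resource pivot would be offered augmentation, semi-supervised NMT, pivoting, and transfer learning as candidates.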
It is possible to train a new model with this newly cleaned data by integrating it with the original corpus.\n\\begin{figure}[h]\n \\centering\n \\includegraphics[scale = 0.5]{Figures/Figure6.png}\n \\caption{Flowchart describing selection of an appropriate technique for a given data specification}\n \\label{fig:flowchart}\n\\end{figure}", "id": "8742219f-67f4-4253-9cfa-1c5d1c94bf9b", "level": "section", "origin_cites_number": 0, "parent_id": "109ef38f-b3f2-41dd-a765-4a141b2d87cf", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Guidelines to select a technique for a given data specification" ] ], "subsections": [], "title": "Guidelines to select a technique for a given data specification" }, { "cite_extract_rate": 1, "cites": [ 7874 ], "content": "\\label{section:trend_analysis}\nThere are over 7000 languages being spoken around the world. A look into the related research reported in Section~\\ref{section:label-NMT-Techniques-for-Low-Resource-Languages} reveals the NMT techniques have mostly been tested on the same set of languages. Identifying reasons for this imbalance in language selection would lead to efforts for more language diversity and inclusion in NMT research. We built upon the work by in which $2485$ languages have been divided into 6 classes (see Table ~\\ref{tab:language_categories}) based on the amount of publicly available un-annotated and annotated corpora. 
Although did not specifically refer to parallel data available for NMT, we hypothesize that there exists a strong correlation between the language class (and consequently the amount of publicly available data for that language) and the amount of NMT research available for this language.", "id": "d44ca8d5-e84a-4945-99d8-4b2a743f628f", "level": "section", "origin_cites_number": 1, "parent_id": "109ef38f-b3f2-41dd-a765-4a141b2d87cf", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Landscape of Low-Resource Languages and NMT Research" ] ], "subsections": [ "821b6860-1647-4c8f-8b6f-b06326c22e0a", "219c10ea-b64c-4082-a033-74f5b58fa214" ], "title": "Landscape of Low-Resource Languages and NMT Research" }, { "cite_extract_rate": 1, "cites": [ 7874 ], "content": "We queried Google Scholar with the query ``\\textit{neural machine translation}'' + ``\\textit{language}'' (e.g. ``\\textit{neural machine translation}'' + ``\\textit{Hindi}''). We excluded results before 2014 along with patents and citations. It should be noted that the search results were noisy; the most common among them being the ambiguity in the language name, where the language name is the same as the other entities such as location or author name. For example, the language `Swati', a Bantoid language spoken in Africa is also a common Indian name. Therefore, we manually checked and removed 240 such languages from our analysis. \nIn order to find the LRLs that have been frequently used by the NMT community, we studied the outlier languages in language Classes 0-2 in , using the obtained Google Scholar search results ($N_{GS}$). 
For each class $c$, the outlier languages were identified using the following equation:\n\\begin{equation}\n N_{GS}^l > Q_3^c + 1.5\\,IQR^c\n\\label{eq:outliers}\n\\end{equation}\nwhere $N_{GS}^l$ represents the number of Google Scholar results obtained for a language $l$, $Q_3^c$ is the third quartile, and $IQR^c$ is the interquartile range of Google Scholar results for a language class $c$. In order to ascertain the factors responsible for the interest of researchers in these languages, we manually selected some of these outliers with geographical variations and plotted the number of search results with respect to the year. \nThese languages were selected such that they form a diverse mix of language classes\\footnote{Hausa and Swahili (from the African macroarea) were not among the outlier languages, yet they were included in the plot to show geographical diversity.}.", "id": "821b6860-1647-4c8f-8b6f-b06326c22e0a", "level": "subsection", "origin_cites_number": 1, "parent_id": "d44ca8d5-e84a-4945-99d8-4b2a743f628f", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Landscape of Low-Resource Languages and NMT Research" ], [ "subsection", "Methodology" ] ], "subsections": [], "title": "Methodology" }, { "cite_extract_rate": 0.555555555555555, "cites": [ 7881, 2489, 8871, 7882, 4751 ], "content": "We found 12.6\\%, 11.2\\%, and 7.1\\% of languages to be outliers for Classes 0-2, respectively. A few random outlier examples are Sinhala \\& Slovene (Class 0), Nepali \\& Telugu (Class 1), and Irish (Class 2), as shown in Figure~\\ref{fig:boxplot_time}(a). We identified the possible factors responsible for the growth of research for some languages and put them into four categories: geographic considerations, dataset availability, open-source frameworks and models, and community involvement. 
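As an aside, the Tukey-style outlier criterion of Eq.~(\ref{eq:outliers}) above can be implemented in a few lines; this is our own illustrative sketch (not the code used for the survey's analysis), with a simple linear-interpolation quantile:

```python
# Illustrative re-implementation of the outlier rule: a language l in a
# class is an outlier if N_GS > Q3 + 1.5 * IQR for that class.
def find_outliers(counts):
    # counts: {language: number of Google Scholar results} for one class
    values = sorted(counts.values())

    def quantile(vs, q):
        # linear-interpolation quantile, sufficient for this sketch
        idx = q * (len(vs) - 1)
        lo = int(idx)
        hi = min(lo + 1, len(vs) - 1)
        frac = idx - lo
        return vs[lo] * (1 - frac) + vs[hi] * frac

    q1 = quantile(values, 0.25)
    q3 = quantile(values, 0.75)
    threshold = q3 + 1.5 * (q3 - q1)  # Q3 + 1.5 * IQR
    return {lang for lang, n in counts.items() if n > threshold}
```

Note that the exact outlier set depends on the quantile convention used; different quartile definitions can shift the threshold slightly for small classes.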
\n\\textbf{Geographic considerations}: We hypothesize that the geographical location where a language is spoken might play an important role in the growth of that language.\nTo validate the importance of geography, we looked at the outlier languages from Classes 0-2 with respect to their geographical location\\footnote{The geographical area of a language is determined by using WALS data~}. In Figure~\\ref{fig:geo_corr}(a), we plot the percentage of outliers vs. their geographical regions. We found that in Class 0, approximately 25\\% of the outliers are languages from the European region, whereas in Class 1, approximately 7\\% of the total languages from Europe are outliers.\nThus it is safe to assume that the early growth of NMT was mostly driven by the geographical location of the language. This could be due to the availability of funds, resources, and regional-level joint projects. One of the prominent examples is the growth of European languages. Some recent trends also support this. For example, the steady increase in the research activity for Irish and Slovene, outliers from Class 2 and Class 0 respectively (see Figure~\\ref{fig:boxplot_time}), might be due to their presence in the European region. \n\\begin{figure}\n \\begin{subfigure}\n \\centering\n \\includegraphics[width=0.45\\textwidth, height=4.5cm]{Figures/Figure7a.pdf}\n \\end{subfigure}\n \\begin{subfigure}\n \\centering\n \\includegraphics[width=0.45\\textwidth, height=4.5cm]{Figures/Figure7b.pdf}\n \\end{subfigure}\n \\caption{a) Boxplot showing the distribution of Google Scholar search results ($N_{GS}$) for different language classes. Outliers are marked by black markers and have been calculated independently for each class. b) Time-varying trends of a few selected outlier languages from Classes 0-2. 
The y-axis represents the year and the x-axis represents the number of Google Scholar search results ($N_{GS}$)}\n \\label{fig:boxplot_time}\n\\end{figure}\n\\textbf{Dataset availability}: The next source of growth is driven by the availability of datasets. For example, Sinhala and Nepali, outlier languages from Class 0 and Class 1, respectively, have seen a steep rise from 2018-19 onwards (see Fig~\\ref{fig:boxplot_time}(b)). One reason could be the release of the FLoRes Evaluation Datasets~, which include both Sinhala-English and Nepali-English. Our analysis revealed that the inclusion of a language in a standard yearly challenge such as WMT has a considerable impact on its growth in terms of NMT. For example, the WMT-2019~ shared task included a Nepali-Hindi parallel corpus. Similarly, Nepali and Sinhala were also part of Google's 102 languages corpus~.\nSimilarly, the increase in the number of publications for the Gujarati language from 2019 onwards could be attributed to the fact that the Gujarati-English language pair was included in the Shared Machine Translation task in WMT 2019~.\nTo quantify the relationship between the availability of datasets and research activity around that language, we used the resource matrix\\footnote{More information on this source: http://matrix.statmt.org/resources/matrix?set=all}. \nThis contains the details of the number of monolingual corpora and parallel corpora for 64 languages. Even though the list is not exhaustive, it is helpful for the growth analysis as it contains languages from all the classes. In Figure~\\ref{fig:geo_corr}(b), we plot the total number of datasets available vs. the research activity (number of Google Scholar results for NMT) for a particular language. The number of datasets (x-axis) has been calculated by summing the number of monolingual datasets available for a source language and the parallel corpora available between the source language and the other target languages. 
It can be observed that the availability of datasets is directly correlated with the research activity ($r=0.88$), which further strengthens our claim that the NMT growth for a particular language is directly proportional to the data availability. \n\\textbf{Open-source frameworks and models accessibility}: The availability of open-source frameworks and models is a major contributing factor towards the growth of research in the area of NMT. Frameworks such as OpenNMT~ and fairseq, as well as pre-trained models such as mBART, provide an easy and scalable implementation that helps in building a baseline and improving it for existing and new languages. These open-source projects are periodically maintained, flexible, and provide most of the latest NMT-related techniques. Since these projects provide standardized code, it becomes easy even for novice researchers to adapt them to LRLs. This eliminates the need to develop code from scratch and helps in accelerating the research process. \n\\textbf{Community involvement}: A recent development is a group of like-minded researchers coming together to increase the visibility of MT systems in the context of languages used in a particular region. It consists of both dataset building and the development of standardized code, and also focuses on training a new generation of enthusiasts to carry forward the work. One of the prominent examples is the Masakhane project~, which aims to put African AI, specifically African-language MT, on the world map. Within about two years, the Masakhane community has covered more than 38 African languages and has resulted in multiple publications~. \nAs can be seen from Figure~\\ref{fig:boxplot_time}(b), two of the representative languages, Swahili and Hausa, show steep growth after 2018, which coincides with the inception of the Masakhane project. 
\n\\begin{figure}\n \\centering\n \\begin{subfigure}\n \\centering\n \\includegraphics[width=0.45\\textwidth, height=4.5cm]{Figures/Figure8a.pdf}\n \\end{subfigure}\n \\begin{subfigure}{}\n \\includegraphics[width=0.45\\textwidth, height= 4.5cm ]{Figures/Figure8b.pdf}\n \\end{subfigure}\n \\caption{ a) Percentage of outliers from 7 different geographical regions for Classes 0-2 b) Relationship between the number of datasets available and number of Google Scholar search results ($N_{GS}$)}\n\\label{fig:geo_corr}\n\\end{figure}\nOur results and analysis highlight i) the importance of community building and region-level projects, ii) the inclusion of LRL datasets into yearly challenges and large multilingual datasets, and iii) the availability of open source models and frameworks to increase the focus on LRLs in the NMT landscape. This analysis could provide a cue to the researchers and funding agencies worldwide for the development of LRL resources.", "id": "219c10ea-b64c-4082-a033-74f5b58fa214", "level": "subsection", "origin_cites_number": 9, "parent_id": "d44ca8d5-e84a-4945-99d8-4b2a743f628f", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Landscape of Low-Resource Languages and NMT Research" ], [ "subsection", "Results and Discussions" ] ], "subsections": [], "title": "Results and Discussions" }, { "cite_extract_rate": 0, "cites": [], "content": "This section discusses the open questions in LRL-NMT research and provides the answers to our initial research questions.", "id": "7a44d66d-a445-4b7a-b8ac-4ef9879169f0", "level": "section", "origin_cites_number": 0, "parent_id": "109ef38f-b3f2-41dd-a765-4a141b2d87cf", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ] ], "subsections": [ "f0227d21-1385-40c5-9a9f-dc4d25a2701c", "527b9179-33d7-4ad4-98fe-e6a65de452b2" ], "title": "Discussion" }, { "cite_extract_rate": 0, "cites": [], 
"content": "While notable advances have been made in LRL-NMT in recent years, there remain interesting directions for exploration in model improvements as well as equitable and inclusive access.", "id": "f0227d21-1385-40c5-9a9f-dc4d25a2701c", "level": "subsection", "origin_cites_number": 0, "parent_id": "7a44d66d-a445-4b7a-b8ac-4ef9879169f0", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ] ], "subsections": [ "e95a4d68-d978-47dd-ac03-bfac25ef9ccc", "48c2e5ce-c935-420b-9f92-9cde5b66379a", "6762be07-e814-45fe-acf7-e19054b513a2", "d4b234cc-9f43-49eb-80ea-4e09bb9ae8b3" ], "title": "Open Questions for LRL-NMT and Future Directions" }, { "cite_extract_rate": 0, "cites": [], "content": "Based on the various LRL-NMT techniques discussed, there are multiple improvements that can be applied to the models: allowing the multilingual models to include more LRL pairs, making the models more robust to limitations in the input dataset, expanding the interpretability and explainability of the models to understand their behaviour on LRL pairs, and mitigating biases in the models.", "id": "e95a4d68-d978-47dd-ac03-bfac25ef9ccc", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "f0227d21-1385-40c5-9a9f-dc4d25a2701c", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Model Improvements" ] ], "subsections": [ "90aa37a6-013e-4588-97e2-8bc2a4edcf50", "a4888160-a1d3-4816-9976-3128b6e3693e", "8f89745a-a706-48db-af31-157764247170", "960a4c69-e893-473d-be0e-39cdf2bed7ce" ], "title": "Model Improvements" }, { "cite_extract_rate": 1, "cites": [ 8871, 5014, 5020 ], "content": "Massive multi-NMT is a promising technique, especially for systems produced by 
multinational tech companies~. In particular, multi-NMT for zero-shot translation is an important line of research, as it eliminates the need for parallel datasets between every possible language pair. The work of~ is of particular interest, as it deals with a non-English-centric multilingual dataset, yet managed to outperform English-centric models in zero-shot translation.\nHowever, despite multi-NMT being able to cover about 100 languages~, only a small fraction of LRLs are included from the more than 7000 possible languages. Therefore, more effort needs to be invested in scaling multi-NMT models to handle a larger number of languages, which would inherently cover LRLs and extremely low-resource languages. It is important to investigate how these LRLs can be better represented in these massive models without compromising the performance of high-resource languages.~ recently reported promising results along this line, which should be further explored.", "id": "90aa37a6-013e-4588-97e2-8bc2a4edcf50", "level": "paragraph", "origin_cites_number": 3, "parent_id": "e95a4d68-d978-47dd-ac03-bfac25ef9ccc", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Model Improvements" ], [ "paragraph", "Model Capacity to Include LRLs" ] ], "subsections": [], "title": "Model Capacity to Include LRLs" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 7164, 5020 ], "content": "Techniques discussed in Section~\\ref{section:label-NMT-Techniques-for-Low-Resource-Languages} are limited by various input requirements such as the size of the datasets, domain of the datasets, and language relatedness. Although some ablation studies~ have been done to understand the effect of input requirements, they are neither sufficient nor exhaustive in assessing model robustness. 
\nRegarding language similarity, LRLs that have syntactic differences between spoken and written forms (e.g. Sinhala) pose a challenge. Another challenge is translating code-mixed data.\nIn terms of domain relatedness, most publicly available datasets are from sources that do not resemble real-life translations. Thus, more focus should be given to multilingual domain adaptation research, where pre-trained multi-NMT models can be fine-tuned to the task at hand (e.g.~translating legal or medical records). \nOverall, empirical experiments that extend the field to new approaches and architectures addressing these limitations must be conducted.", "id": "a4888160-a1d3-4816-9976-3128b6e3693e", "level": "paragraph", "origin_cites_number": 3, "parent_id": "e95a4d68-d978-47dd-ac03-bfac25e3b6693e", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Model Improvements" ], [ "paragraph", "Model Robustness to Limiting Input Factors:" ] ], "subsections": [], "title": "Model Robustness to Limiting Input Factors:" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 7883, 5048 ], "content": "Explaining and interpreting neural models is a challenge for researchers in Deep Learning. \nFor multi-NMT, there have been experiments that delve into models to examine how the learning takes place, and how different languages behave. There have also been attempts to provide theoretical interpretations on aspects of NMT models~. 
However, we believe more research in NMT model interpretability (such as interlingua learning and parent-child transfer) would help develop solutions specific to LRLs.", "id": "8f89745a-a706-48db-af31-157764247170", "level": "paragraph", "origin_cites_number": 3, "parent_id": "e95a4d68-d978-47dd-ac03-bfac25e3b6693e", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Model Improvements" ], [ "paragraph", "Model Interpretability and Explainability:" ] ], "subsections": [], "title": "Model Interpretability and Explainability:" }, { "cite_extract_rate": 1, "cites": [ 5049, 5050 ], "content": "It has already been shown that gender bias exists in NMT models when trained on high-resource languages~. To the best of our knowledge, however, the existence of such biases in the context of LRL MT has not yet been studied. In particular, when translating between languages belonging to different regions and cultures in multilingual settings, there can be undesirable influences from high-resource languages. As discussed later in this section, parallel data extracted from the web tends to contain bias. 
While there should be parallel efforts to free datasets of bias, it is important to develop models that are robust to dataset bias in order to prevent the ramifications of propagating stereotypes from the social context of high-resource languages into LRLs.", "id": "960a4c69-e893-473d-be0e-39cdf2bed7ce", "level": "paragraph", "origin_cites_number": 2, "parent_id": "e95a4d68-d978-47dd-ac03-bfac25e3b6693e", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Model Improvements" ], [ "paragraph", "Mitigating Model Bias:" ] ], "subsections": [], "title": "Mitigating Model Bias:" }, { "cite_extract_rate": 0, "cites": [], "content": "Findings in our trend analysis (Section~\\ref{techniques_trend}) suggest that more resources should be made available to underrepresented geographic regions, especially for communities that are traditionally excluded from technological development and those who face socio-economic inequities. Therefore, the inclusion of these communities can be prioritized when creating datasets and when providing access to the substantial computing resources required for building time-consuming and expensive neural models. 
These communities will also benefit from open-source tools and frameworks, as well as the availability of trained models.", "id": "48c2e5ce-c935-420b-9f92-9cde5b66379a", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "f0227d21-1385-40c5-9a9f-dc4d25a2701c", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Equitable and Inclusive Access" ] ], "subsections": [ "90aa4a2e-2cf7-4491-bcdc-01ceee0a0f76", "78660665-33d0-46c5-800b-0706ece09fb0", "1e1bc7b5-ad46-4285-89b8-d611294d4885", "a92021b0-98cb-44e3-9182-4eac14c1aaea", "16cd8558-9187-4b93-b0b9-0daaacd90040", "0951e77a-01cb-42ef-92de-c6894c6e514a", "7a91fca2-9bcf-44d5-8506-8c71f41f61f3" ], "title": "Equitable and Inclusive Access" }, { "cite_extract_rate": 0.5, "cites": [ 7881, 1551, 8868, 7695, 7877, 4997, 8875, 5004 ], "content": "Most of the datasets used in NMT have originated in a small number of regions in the world, focusing on English-centric translations; however, about 40\\% of the content on the Internet is now non-English\\footnote{\\url{https://www.visualcapitalist.com/the-most-used-languages-on-the-internet/}}.\nAlthough LRL-NMT was initially applied to large corpora by sub-sampling HRLs such as English, French, and German, \nit was later adapted for LRL pairs such as English-Esperanto,\nEnglish-Urdu, English-Romanian, and English-Russian \\footnote{ excluded because low-resource was suggested but not experimented}.\nThis focus shifted to European LRLs, and more recently to non-European languages such as Indian, African, East Asian, and Middle Eastern languages~.\nHowever, the number of such datasets remains far smaller than for their high-resource counterparts. 
\nTherefore, on par with current trends in some Machine Learning communities, more weight can be given to papers that present new LRL datasets, rather than to the novelty of the employed technique, when accepting papers to conferences and evaluating the value of this type of research. The contributions of conferences such as LREC and journals such as the LREC journal are commendable in this regard. Projects such as ParaCrawl~ have automatically mined a large amount of parallel data from the web for multiple language pairs, including LRLs such as Irish and Nepali. \nIn addition, regional communities can take the lead in dataset creation due to their expertise in their own cultural context, and could thus provide better judgement on bias (discussed in the next point). \nMore recently, the problem of dataset bias has received significant attention from the community. For example, publicly available crawl data represent a narrow sub-segment of the population and may encode values and perspectives that exist within Western society. More specifically, it has been shown that scraped text contains geographical bias~, as well as age and gender bias~. \nFurthermore, these web crawl datasets have the potential to cause harm through abusive language, hate speech, microaggressions, dehumanization, and social-political biases~. 
Thus, data pre-processing mechanisms can be employed with input from experts such as social scientists, regional communities, and linguists familiar with different languages.", "id": "90aa4a2e-2cf7-4491-bcdc-01ceee0a0f76", "level": "paragraph", "origin_cites_number": 16, "parent_id": "48c2e5ce-c935-420b-9f92-9cde5b66379a", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Equitable and Inclusive Access" ], [ "paragraph", "Creation of Datasets:" ] ], "subsections": [], "title": "Creation of Datasets:" }, { "cite_extract_rate": 0, "cites": [], "content": "As with creating new LRL datasets, accessibility to tools and frameworks for LRLs is critical in advancing the field. Therefore, creating them and making them open-source for free access is of tremendous benefit to the LRL community. We would like to note the positive impact created by open-source initiatives such as HuggingFace\\footnote{\\url{https://huggingface.co/}}.", "id": "78660665-33d0-46c5-800b-0706ece09fb0", "level": "paragraph", "origin_cites_number": 0, "parent_id": "48c2e5ce-c935-420b-9f92-9cde5b66379a", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Equitable and Inclusive Access" ], [ "paragraph", "Open-source Tools and Frameworks" ] ], "subsections": [], "title": "Open-source Tools and Frameworks" }, { "cite_extract_rate": 1, "cites": [ 2489, 5051 ], "content": "Although the community has a recent focus on developing computationally efficient NMT models~, the public release of large-scale multi-NMT models has been limited, with the exception of the work by~. 
If made publicly available, these time-consuming and expensive models could be used as parent models whose knowledge is transferred to LRL child models. \nTherefore, publicly releasing NMT models, including massive multi-NMT models, would be tremendously beneficial to those working in LRL-NMT, as well as advance the field in other areas.", "id": "1e1bc7b5-ad46-4285-89b8-d611294d4885", "level": "paragraph", "origin_cites_number": 2, "parent_id": "48c2e5ce-c935-420b-9f92-9cde5b66379a", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Equitable and Inclusive Access" ], [ "paragraph", "Availability of Trained models:" ] ], "subsections": [], "title": "Availability of Trained models:" }, { "cite_extract_rate": 1, "cites": [ 5051 ], "content": "The community has a recent focus on developing computationally efficient NMT models and providing computational resources for researchers \\footnote{Some tech giants also provide research grants to use their cloud GPU platforms.}. However, more effort needs to be put forth by the research community, industry organizations, and governments in distributing resources and attention. 
Furthermore, computational resources can also be made available as part of conferences and challenges to further encourage LRL researchers to participate.\n\\iffalse", "id": "a92021b0-98cb-44e3-9182-4eac14c1aaea", "level": "paragraph", "origin_cites_number": 1, "parent_id": "48c2e5ce-c935-420b-9f92-9cde5b66379a", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Equitable and Inclusive Access" ], [ "paragraph", "Availability of Computational Resources:" ] ], "subsections": [], "title": "Availability of Computational Resources:" }, { "cite_extract_rate": 0, "cites": [], "content": "\\sr{simply make this title model robustness. then say performanc eof most of the identified techniques depend on size/nature of datasets, language related ness etc. thus extensive empirical studies are needed to identify their success on different settings. then only one can take a calculated decision on what technique to use.}Existing literature in NMT (unsupervised learning and transfer learning) identified that models perform better for cognate (similar) languages as well as similar domains and datasets. Therefore, these methods do not perform well for extreme LRL or zero-shot settings. Additional properties of the data that impact translation performance include mismatched domain alignment, diverse dataset, and extreme low-resource languages. 
A comprehensive comparative and ablation study must be conducted on different LRL-NMT methods in order to understand the robustness of the methodologies on extreme LRL and how they are impacted by the training dataset's similarity in language and domain.", "id": "16cd8558-9187-4b93-b0b9-0daaacd90040", "level": "paragraph", "origin_cites_number": 1, "parent_id": "48c2e5ce-c935-420b-9f92-9cde5b66379a", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Equitable and Inclusive Access" ], [ "paragraph", "Model Robustness to Diverse Languages and Domains:" ] ], "subsections": [], "title": "Model Robustness to Diverse Languages and Domains:" }, { "cite_extract_rate": 0, "cites": [], "content": "\\sr{i suggest we focus only on model capacity in this paragraph. then say although mnmt has shown promising results, lrl are highly under-represented. thus efforts have to be taken to increase the lrl presence in these datasets. also , it has been shown that capacities of models deteriorate when more and more languages are added. this would result in LRLs getting low priority in these mnmt models. thus research should look into how model capacities can be improved, and zero-shot be improved so we do not need dataset between all pairs of languages.}\n\\sr{add a new para on computational resources. may be it would not fit inside model improvments. be it models with large mono or parallel data, building nmt models with them require substantial computing resources. most regions would not have such luxury. so community-driven efforts ar eneeded to share resources or grant resources through funds}\nA barrier to conducting LRL research is the unavailability of computational resources for researchers; there must be equitable fair access. 
Therefore, the community has a recent focus on developing computationally efficient NMT models in workshop findings and call for research grants . Additionally, more efforts should be invested on increasing the capacity of MNMT models that are capable to handle much larger number of languages, which would inherently cover low and extremely low-resource languages. It is also important to investigate on how these low-resource languages are better represented in the large models, via techniques such as sampling or by leveraging corresponding monolingual data.", "id": "0951e77a-01cb-42ef-92de-c6894c6e514a", "level": "paragraph", "origin_cites_number": 0, "parent_id": "48c2e5ce-c935-420b-9f92-9cde5b66379a", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Equitable and Inclusive Access" ], [ "paragraph", "Model Capacity for efficiency and more languages:" ] ], "subsections": [], "title": "Model Capacity for efficiency and more languages:" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 7883, 5048 ], "content": "\\sr{this should be written in a general sense, eithout specifically focusing on mnmt. We can use Aji's paper on tranfer learning}\\eal{I think we should focus on LRL-NMT, there are plenty of work that are general in NLP} \\sr{i mean, write this wrt nmt. currently it talks abut mnmt only }The learning of interlingua by multi-NMT models while maintaining language-specific representations is still not properly explained. In fact, this is owing to the fact that most Deep Learning models are treated as black boxes. and reported initial experiments to `look inside' the multi-NMT models to see how the learning actually takes place, and how different languages behave. 
We believe more research in this line would help developing solutions specific to different language-groups, in particular the under-represented ones.\\sr{check on model interpretability}\\\\", "id": "7a91fca2-9bcf-44d5-8506-8c71f41f61f3", "level": "paragraph", "origin_cites_number": 3, "parent_id": "48c2e5ce-c935-420b-9f92-9cde5b66379a", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Equitable and Inclusive Access" ], [ "paragraph", "Model Interpretability:" ] ], "subsections": [], "title": "Model Interpretability:" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "6762be07-e814-45fe-acf7-e19054b513a2", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "f0227d21-1385-40c5-9a9f-dc4d25a2701c", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Equitable and Inclusive Access:" ] ], "subsections": [ "990a89e4-44a3-4075-afbe-1688e31abd63", "39a6c395-f8c2-479d-9482-d32c5e36a04a", "5bf91c37-5a95-40aa-9ed9-7e16b3b45876" ], "title": "Equitable and Inclusive Access:" }, { "cite_extract_rate": 1, "cites": [ 5014 ], "content": "\\eal{note that there is also a more focus on dataset in the ML community, neurips has a workshop on dataset creation this year...}\nMost of the NMT research focus on English-centric translation. Even most of the parallel datasets are English-centric. However, with about 50\\% the content in the Internet is now non-English, it is important truly many-many models and datasets are introduced. NMT has originally focused on HRL then moved into European language families. More recently, there has been a focus non-European languages such as Indian, African, East Asian, and Middle Eastern languages. 
had set the first step, by reporting a series of experiments on MBART, which is not English-centric. In fact, this dataset has shown much better promise for zero-shot translation, compared to the English-centric counter part.\\sr{fix the flow. i.e. say initial dataset were all European. then non-european datasets came out. however stillmost datsaets are english-centris. btw, after MBART now there is mT5 by Google. check their one too }\\\\\nMost of the publicly available datasets are from sources such as TED talks, Bible or open subtitles. These datasets do not resemble the type of data that needs to get translated in real-life. In particular, for (low-resource) languages that have syntactic differences between spoken and written forms (e.g. Sinhala) this is a major concern. Thus, more focus should be given to multilingual domain adaption research, where low-resource languages would be able to fine-tune a pre-trained multi-MNT model to the task at hand (e.g.~translating official government documents, or translating code-mixed social media data).", "id": "990a89e4-44a3-4075-afbe-1688e31abd63", "level": "paragraph", "origin_cites_number": 1, "parent_id": "6762be07-e814-45fe-acf7-e19054b513a2", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Equitable and Inclusive Access:" ], [ "paragraph", "LR Dataset Creation:" ] ], "subsections": [], "title": "LR Dataset Creation:" }, { "cite_extract_rate": 0.5, "cites": [ 5014 ], "content": "As discussed in Section~\\ref{sec:multiNMT}, multi-NMT is a very promising avenue forward. Our trend analysis confirmed the same. However, only few such models have been publicly released. 
Given that training large multi-NMT models is very much time-consuming, the unavailability of such pre-trained models prevent the less-privileged researchers working in on NMT for their low-resource languages to take benefit of transfer learning on these large models.\\\\", "id": "39a6c395-f8c2-479d-9482-d32c5e36a04a", "level": "paragraph", "origin_cites_number": 2, "parent_id": "6762be07-e814-45fe-acf7-e19054b513a2", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Equitable and Inclusive Access:" ], [ "paragraph", "Public Availability of Models:" ] ], "subsections": [], "title": "Public Availability of Models:" }, { "cite_extract_rate": 0, "cites": [], "content": "The community should also conduct more regional conferences and challenges targeting low resource languages. The focus should not be on the novelty of technique but on language. If somehow computational resources can also be made available as part of such challenges, it will give a big boost and will encourage under-privileged researchers to participate.", "id": "5bf91c37-5a95-40aa-9ed9-7e16b3b45876", "level": "paragraph", "origin_cites_number": 0, "parent_id": "6762be07-e814-45fe-acf7-e19054b513a2", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Equitable and Inclusive Access:" ], [ "paragraph", "Regional Conferences and Challenges:" ] ], "subsections": [], "title": "Regional Conferences and Challenges:" }, { "cite_extract_rate": 0, "cites": [], "content": "\\eal{please read, I think the points that Timnit brought up could be controversial}\\sr{yes, we can dilute our argument a bit, rather than pointing to wikipedia or reddit. 
unless there is strong evidence that those replicate views of a certain group, let's not mention it }", "id": "d4b234cc-9f43-49eb-80ea-4e09bb9ae8b3", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "f0227d21-1385-40c5-9a9f-dc4d25a2701c", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Bias and Fairness:" ] ], "subsections": [ "b7c63987-00ec-48cd-bc30-200310594b53", "e8ead6a8-e30b-4f6d-b894-8ff03a2e5112" ], "title": "Bias and Fairness:" }, { "cite_extract_rate": 1, "cites": [ 5049, 7695 ], "content": "\\sr{isnt geneder bias a sub-category of dataset bias?}\nRecent works in high-resource NLP and NMT has identified bias and fairness as a priority for the field. More specifically, gender-bias has been studied extensively in high-resource NMT, where gender is not faithfully translated. For example, identified that occupations such as nurse is related to female and doctor is related to male irregardless of the original association. This incorrect inference is attributed to the inherent social bias in the dataset, which encodes the stereotypes into the model . 
These biases (not just gender, but also possibly race and religion) is then perpetuated into applications such as job postings or news articles, which causes discrimination in the wider social context.", "id": "b7c63987-00ec-48cd-bc30-200310594b53", "level": "paragraph", "origin_cites_number": 2, "parent_id": "d4b234cc-9f43-49eb-80ea-4e09bb9ae8b3", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Bias and Fairness:" ], [ "paragraph", "Gender Bias:" ] ], "subsections": [], "title": "Gender Bias:" }, { "cite_extract_rate": 0, "cites": [], "content": "With the machine learning community becoming aware of technology imbalance in marginalized underpriviledged communities, more recent publications has focused on extreme LRL. Gebru pointed out that large public training data does not guarantee diversity of unbiased social views. \nPublic crawl data available demonstrate a narrow subsegment of population and may encode biased within society. Specifically these webcrawl contains majority Reddit and Wikipedia contain typically males 18-29 years old. In fact the data may contain abusive language, hatespeech, gender bias, microaggressions, dehumanization, and social-political biases.\nFor example in transfer learning of LRL-NMT, models are built with HRL data transferred to less LRL data, where culture use of language is different, this could have negative consequences.\nTherefore for LR NMT, practitioners and researchers must be cognisant of these biases in their data, and address them appropriately in their models and datasets to prevent the ramification of propagating stereotypes from the social context of high-resource languages in to marginalized and under-privledged groups.\n\\sr{using bible data could have the same effect. 
religious bias}\n\\fi", "id": "e8ead6a8-e30b-4f6d-b894-8ff03a2e5112", "level": "paragraph", "origin_cites_number": 0, "parent_id": "d4b234cc-9f43-49eb-80ea-4e09bb9ae8b3", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Open Questions for LRL-NMT and Future Directions" ], [ "subsubsection", "Bias and Fairness:" ], [ "paragraph", "Dataset Bias:" ] ], "subsections": [], "title": "Dataset Bias:" }, { "cite_extract_rate": 0, "cites": [], "content": "Key findings of this survey can be summarized as follows: \n\\begin{enumerate}\n \\item \\textbf{Techniques and Trends:} Our survey found that there is a substantial amount of LRL-NMT research, and the trend continues. All the LRL-NMT techniques (data augmentation, unsupervised NMT, semi-supervised NMT, multi-NMT, and transfer learning for NMT) except pivoting show an upward trend with respect to the research publications, suggesting that these techniques have established themselves as the de facto solutions for LRL-NMT.\n \\item \\textbf{Technique Selection:} The decision chart we produced on technique selection can be taken as a guide in selecting the most appropriate NMT technique for a given data specification, also considering the availability of computational resources. However, we note that this selection also depends on other factors such as language relatedness, and the domain of data. These factors were discussed with respect to individual LRL-NMT techniques. \n \\item \\textbf{Future Directions:} We identified multiple research areas for the research community to collectively increase efforts on LRL-NMT. 
These initiatives were broadly categorised as model improvements and equitable and inclusive access.\n\\end{enumerate}", "id": "527b9179-33d7-4ad4-98fe-e6a65de452b2", "level": "subsection", "origin_cites_number": 0, "parent_id": "7a44d66d-a445-4b7a-b8ac-4ef9879169f0", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Answering the Research Questions" ] ], "subsections": [], "title": "Answering the Research Questions" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{label-Conclusion}\nDue to the recent advancements in the field, NMT is no longer an unattainable goal for LRLs. However, the sheer volume as well as the acceleration of research taking place makes it difficult to select the state-of-the-art LRL-NMT techniques for a given data specification, as no guidelines are available for the selection of the most appropriate NMT technique for a given data setup. The contribution of this survey paper is to give a comprehensive picture of the LRL-NMT landscape, highlighting the recent trends in technological advancements and provide a guideline to select the most appropriate LRL-NMT technique for a given data specification. Based on our findings through research publications and our quantitative analysis, we provided a set of recommendations to advance the LRL-NMT solutions. 
We believe that these recommendations would be positively received by the NMT research community.", "id": "0d19b510-15ad-4649-a082-492aa8ae0f38", "level": "section", "origin_cites_number": 0, "parent_id": "109ef38f-b3f2-41dd-a765-4a141b2d87cf", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Conclusions" ] ], "subsections": [], "title": "Conclusions" }, { "cite_extract_rate": 0, "cites": [], "content": "The first author (SR) would like to thank the postgraduate students of the National Language Processing Center, University of Moratuwa, and the undergraduates of the Department of Computer Science and Engineering, University of Moratuwa, for their support.\n\\iffalse", "id": "4a48075b-6440-4ac6-91cc-17bbe278ff64", "level": "section", "origin_cites_number": 0, "parent_id": "109ef38f-b3f2-41dd-a765-4a141b2d87cf", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Acknowledgement" ] ], "subsections": [], "title": "Acknowledgement" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{label-Language-wise-Analysis} \n -introduction: \\\\\n - language family/ properties (morphological, some families better suited for transfer language, language specific sub wording) \\\\\n - lack of nlp tools: ner post tagging, (some setting may not work for LR due to unavailability of these tools, so a limitation) \\\\\n - discussion/analysis \\\\\n - table or any graphical representation (pie, )", "id": "2c4eae1c-0595-4c9b-9650-df8b610079cd", "level": "section", "origin_cites_number": 0, "parent_id": "109ef38f-b3f2-41dd-a765-4a141b2d87cf", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Language wise Analysis" ] ], "subsections": [ "a027e4ad-d4c9-48ea-afbf-23ac9c5e5dc4", "4a98beea-8147-4554-ae3c-f046a9232171", "f663be56-c11a-4f7d-b0f8-927527456463", "41bf707e-3595-4324-8e3c-6e63492e243b", 
"fbe8a8c8-27d1-447b-a82e-c35f0ae3c908" ], "title": "Language wise Analysis" }, { "cite_extract_rate": 0, "cites": [], "content": "- table: techniques vs language (put in the end of this section)\\\\\n - codes of the languages (add codes as the appendix)", "id": "a027e4ad-d4c9-48ea-afbf-23ac9c5e5dc4", "level": "subsection", "origin_cites_number": 0, "parent_id": "2c4eae1c-0595-4c9b-9650-df8b610079cd", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Language wise Analysis" ], [ "subsection", "techniques vs language" ] ], "subsections": [], "title": "techniques vs language" }, { "cite_extract_rate": 0, "cites": [], "content": "- table : dataset vs language", "id": "4a98beea-8147-4554-ae3c-f046a9232171", "level": "subsection", "origin_cites_number": 0, "parent_id": "2c4eae1c-0595-4c9b-9650-df8b610079cd", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Language wise Analysis" ], [ "subsection", "dataset vs language" ] ], "subsections": [], "title": "dataset vs language" }, { "cite_extract_rate": 0.24242424242424201, "cites": [ 5054, 8559, 5055, 5060, 7884, 5059, 7885, 8876, 5056, 2414, 7886, 5058, 5057, 5052, 5053, 7881 ], "content": "\\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n Pre-training via Leveraging Assisting Languages and Data Selection for Neural Machine Translation\\\\\n Participatory Research for Low-resourced Machine Translation: A Case Study in African Languages \\\\ \n Findings of the 2019 Conference on Machine Translation ({WMT}19)\\\\\n Neural machine translation of rare words with subword units\\\\\n Neural Machine Translation with Reconstruction \\\\\n Parallel corpora for medium density languages\\\\\n Translating 
a Language You Don{'}t Know In the {C}hinese Room\\\\ \n JASS: Japanese-specific Sequence to Sequence Pre-training for Neural Machine Translation\\\\ \n SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing, not LR (English-Japanese)\\\\\n Multilingual Neural Machine Translation involving {I}ndian Languages\\\\\n English-Myanmar Supervised and Unsupervised NMT: NICT’s Machine Translation Systems at WAT-2019\\\\ \n The {IIIT}-{H} {G}ujarati-{E}nglish Machine Translation System for {WMT}19\\\\\n Malay-Corpus-Enhanced Indonesian-Chinese Neural Machine Translation\\\\\n Language Revitalization: A Benchmark for Akan-to-English Machine Translation\\\\\n {E}nglish-{M}yanmar Supervised and Unsupervised {NMT}: {NICT}{'}s Machine Translation Systems at {WAT}-2019\\\\\n A grounded unsupervised universal part-of-speech tagger for low-resource languages\\\\\n No Data to Crawl? Monolingual Corpus Creation from {PDF} Files of Truly low-Resource Languages in {P}eru\\\\\n {L}atin-{S}panish Neural Machine Translation: from the {B}ible to Saint Augustine\n PidginUNMT: Unsupervised Neural Machine Translation from West African Pidgin to English\\\\\n Chinese-Japanese Unsupervised Neural Machine Translation Using Sub-character Level Information\\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n \\\\\n The FLoRes evaluation datasets for low-resource machine translation: Nepali-english and sinhala-english\\\\\n The TALP-UPC System for the WMT Similar Language Task: Statistical vs Neural Machine Translation\\\\\n ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization\\\\\n Low Resource Neural Machine Translation: A Benchmark for Five African Languages\\\\\n {CVIT}{'}s submissions to {WAT}-2019\\\\\n \\\\\n Effectively Aligning and Filtering Parallel Corpora under Sparse Data Conditions\\\\", "id": "f663be56-c11a-4f7d-b0f8-927527456463", "level": "subsection", "origin_cites_number": 66, "parent_id": 
"2c4eae1c-0595-4c9b-9650-df8b610079cd", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Language wise Analysis" ], [ "subsection", "Newly found papers" ] ], "subsections": [], "title": "Newly found papers" }, { "cite_extract_rate": 0, "cites": [], "content": "(might not be exhaustive)\n - table : tools vs language\n - stemming, lemmatization, parsers,", "id": "41bf707e-3595-4324-8e3c-6e63492e243b", "level": "subsection", "origin_cites_number": 0, "parent_id": "2c4eae1c-0595-4c9b-9650-df8b610079cd", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Language wise Analysis" ], [ "subsection", "tools vs language" ] ], "subsections": [], "title": "tools vs language" }, { "cite_extract_rate": 0.30000000000000004, "cites": [ 863, 3146, 8876 ], "content": " The NiuTrans machine translation systems for WMT19\\\\\n Edinburgh Neural Machine Translation Systems for WMT 16\\\\\n Syntax-directed attention for neural machine translation\\\\\n Character-Aware Low-Resource Neural Machine Translation with Weight Sharing and Pre-training\\\\\n Sentence-Level Adaptation for Low-Resource Neural Machine Translation, supervised learning\\\\\n Efficient Low-Resource Neural Machine Translation with Reread and Feedback Mechanism\\\\\n Low resource languages - 2020 papers \\\\\n\\cite {valeev2019application} Application of Low-resource Machine Translation Techniques to Russian-Tatar Language Pair \\\\\n Low Resource Neural Machine Translation: A Benchmark for Five African Languages \\\\\n\\\\\n\\\\\n\\cite {duh2020benchmarking} Benchmarking Neural and Statistical Machine Translation on Low-Resource African Languages \\\\\nThe {LMU} {M}unich Unsupervised Machine Translation System for {WMT}19\\\\\n\\fi\n\t\\bibliographystyle{ACM-Reference-Format}\n\t\\bibliography{lowMT.bib}\n\\end{document}", "id": "fbe8a8c8-27d1-447b-a82e-c35f0ae3c908", "level": "subsection", 
"origin_cites_number": 10, "parent_id": "2c4eae1c-0595-4c9b-9650-df8b610079cd", "prefix_titles": [ [ "title", "Neural Machine Translation for Low-Resource Languages: A Survey" ], [ "section", "Language wise Analysis" ], [ "subsection", "newly found papers" ] ], "subsections": [], "title": "newly found papers" } ]
47
[ 4965, 303, 2339, 7873, 2489, 7874, 4967, 4968, 4966, 4969, 4970, 7875, 4974, 4973, 4972, 7200, 4971, 4975, 168, 38, 4989, 4981, 4992, 8866, 7876, 8865, 4977, 4980, 4982, 4985, 4978, 8864, 4990, 4983, 860, 4984, 4993, 4986, 4976, 4979, 4991, 8867, 4988, 4987, 7164, 4994, 5005, 8868, 5001, 7877, 5004, 1551, 4999, 4997, 5002, 4995, 5003, 8869, 4996, 4998, 5000, 8870, 5008, 5006, 5007, 7878, 5009, 5010, 7475, 5011, 9121, 5012, 5013, 5014, 8872, 5025, 7879, 5019, 5016, 5017, 5022, 8871, 5020, 5024, 4507, 5015, 5021, 5018, 5023, 5033, 5026, 8873, 5031, 5030, 5034, 5028, 7096, 5027, 5032, 8874, 5029, 5040, 5036, 5038, 5035, 5039, 5037, 5044, 5043, 5041, 5042, 5045, 5046, 7880, 5047, 7881, 7882, 4751, 7883, 5048, 5049, 5050, 7695, 8875, 5051, 5054, 8559, 5055, 5060, 7884, 5059, 7885, 8876, 5056, 2414, 7886, 5058, 5057, 5052, 5053, 863, 3146 ]
0.746959
[ "Rui Wang", "Rose Yu" ]
Physics-Guided Deep Learning for Dynamical Systems: A Survey
2021
2021-07-02T20:59:03Z
cs.LG
Modeling complex physical dynamics is a fundamental task in science and engineering. Traditional physics-based models are sample-efficient and interpretable, but they often rely on rigid assumptions. Furthermore, direct numerical approximation is usually computationally intensive, requiring significant computational resources and expertise, and many real-world systems do not have fully-known governing laws. While deep learning (DL) provides novel alternatives for efficiently recognizing complex patterns and emulating nonlinear dynamics, its predictions do not necessarily obey the governing laws of physical systems, nor do they generalize well across different systems. Thus, the study of physics-guided DL has emerged and made great progress. Physics-guided DL aims to take the best from both physics-based modeling and state-of-the-art DL models to better solve scientific problems. In this paper, we provide a structured overview of existing methodologies for integrating prior physical knowledge or physics-based modeling into DL, with a special emphasis on learning dynamical systems. We also discuss the fundamental challenges and emerging opportunities in the area.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "11aa860a-a47f-4799-8633-7727b5a3edae", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ] ], "subsections": [ "6739ea70-953a-4fff-856c-b68ead691bec", "2c786f4c-ff88-40c4-b2c9-bbe2cc54b91e", "a9633632-c2f0-4164-8ce4-578c209e43ee", "431d9769-2309-472c-bd5c-88ad48003975", "86c528e0-a15b-4253-9f86-cd1bed4952cf", "02e44777-812c-4110-aad6-ca1628c72747", "419bd794-c010-4085-a609-4fa92d46e63a", "73734564-db92-46a8-a87d-86945d31f17a" ], "title": "root" }, { "cite_extract_rate": 0.42857142857142805, "cites": [ 6360, 6356, 6364, 6355, 6359, 6363, 6358, 6362, 6361, 9004, 6357, 9003 ], "content": "Modeling complex physical dynamics over a wide range of spatial and temporal scales is a fundamental task in many fields including, for example, fluid dynamics , cosmology , economics, and neuroscience .\nDynamical systems are mathematical objects used to describe the evolution of phenomena over time and space occurring in nature. Dynamical systems are commonly described with differential equations, which are equations relating one or more unknown functions and their derivatives. \n\\begin{dfn}\\label{dfn:dynamics}\nFix an integer $k \\geq 1$ and let $U$ denote an open subset of $\\mathbb{R}^n$. Let $u: U \\mapsto \\mathbb{R}^m$ and write $\\bm{u} = (u^1, ..., u^m)$ for $x \\in U$. 
Then an expression of the form \n\\begin{equation}\n \\mathcal{F}(D^k \\bm{u}(x), D^{k-1} \\bm{u}(x), ..., D\\bm{u}(x), \\bm{u}(x), x) = 0 \\label{eqn:dynamics}\n\\end{equation}\nis called a $k^{\\text{-th}}$-order system of partial differential equations (or ordinary differential equations when $n=1$), where $\\mathcal{F}: \\mathbb{R}^{mn^k}\\times \\mathbb{R}^{mn^{k-1}}\\times...\\times\\mathbb{R}^{mn}\\times \\mathbb{R}^{m}\\times U \\mapsto \\mathbb{R}^m $.\n\\end{dfn}\n$\\mathcal{F}$ models the dynamics of an $n$-dimensional state $x \\in \\mathbb{R}^n$ and it can be either a linear or non-linear operator. Since most dynamics evolve over time, one of the variables of $u$ is usually the time dimension. In general, one must specify appropriate boundary and initial conditions of Eq. \\ref{eqn:dynamics} to ensure the existence of a solution. Learning a dynamical system amounts to searching for a model $\\mathcal{F}$ that accurately describes the behavior of the physical process of interest. \nPhysics as a discipline has a long tradition of using first principles to describe spatiotemporal dynamics. The laws of physics have greatly improved our understanding of the physical world. Many physics laws are described by systems of highly nonlinear differential equations that have direct implications for understanding and predicting physical dynamics. However, these equations are usually too complicated to be solved analytically. The current paradigm of numerical methods for solution approximation is purely physics-based: known physical laws encoded in systems of coupled differential equations are solved over space and time via numerical differentiation and integration schemes . Unfortunately, these methods are tremendously computationally intensive, requiring significant computational resources and expertise. 
An alternative is to seek simplified models that are based on certain assumptions and can roughly describe the dynamics, such as Reynolds-averaged Navier-Stokes equations for turbulent flows and Euler equations for gas dynamics . But it is highly nontrivial to obtain a simplified model that can describe a phenomenon with satisfactory accuracy. More importantly, for many complex real-world phenomena, only partial knowledge of the dynamics is available. The equations may not fully represent the true system states.\nDeep Learning (DL) provides efficient alternatives to learn high-dimensional spatiotemporal dynamics from massive datasets. It does so by directly predicting the input-output mapping and bypassing numerical integration. Recent works have shown that DL can generate realistic predictions and significantly accelerate the simulation of physical dynamics relative to numerical solvers, from turbulence modeling to weather prediction . This opens up new opportunities at the intersection of DL and physical sciences, such as molecular dynamics, epidemiology, cardiology, and material science . 
In short, the current limitation of DL models for learning complex dynamics is their lack of ability to understand the system solely from data and cope with the distributional shifts that naturally occur.\nNeither DL alone nor purely physics-based approaches can be considered sufficient for learning complex dynamical systems in scientific domains. Therefore, there is a growing need for integrating traditional physics-based approaches with DL models so that we can make the best of both types of approaches. There is already a vast amount of work about physics-guided DL , but the focus on deep learning for dynamical systems is still nascent. Physics-guided DL offers a set of tools to blend physical concepts, such as differential equations and symmetry, with deep neural networks. On one hand, these DL models offer great computational benefits over traditional numerical solvers. On the other hand, the physical constraints impose appropriate inductive biases on the DL models, leading to accurate simulation, scientifically valid predictions, reduced sample complexity, and guaranteed improvement in generalization to unknown environments. \nThis survey paper aims to provide a structured overview of existing methodologies for incorporating prior physical knowledge into DL models for learning dynamical systems. The paper is organized as follows. \n\\begin{itemize}[leftmargin=*,itemsep=1pt]\n \\item Section \\ref{obj} describes the significance of physics-guided DL.\n \\item Section \\ref{prob} formulates the four main learning problems of physics-guided DL, including solving differential equations, dynamics forecasting, learning dynamics residuals, and equation discovery.\n \\item Sections \\ref{sec:loss}$\\sim$\\ref{sec:symmetry} categorize existing physics-guided DL approaches into four groups based on how physics and DL are combined. Each leads with a detailed review of recent work as a case study and is further categorized by objective or model architecture. 
\n \\begin{itemize}[leftmargin=10pt,itemsep=1pt]\n \\item Section \\ref{sec:loss}: \\texttt{Physics-guided loss function}: prior physics knowledge is imposed as additional soft constraints in the loss function.\n \\item Section \\ref{sec:architecture}: \\texttt{Physics-guided architecture design}: prior physics knowledge is strictly incorporated into the design of neural network modules. \n \\item Section \\ref{sec:hybrid}: \\texttt{Hybrid physics-DL models}: complete physics-based approaches are directly combined with DL models. \n \\item Section \\ref{sec:symmetry}: \\texttt{Invariant and equivariant DL models}: DL models are designed to respect the symmetries of a given physical system. \n \\end{itemize}\n \\item Section \\ref{dis} summarizes the challenges in this field and discusses the emerging opportunities for future research. \n\\end{itemize}", "id": "6739ea70-953a-4fff-856c-b68ead691bec", "level": "section", "origin_cites_number": 28, "parent_id": "11aa860a-a47f-4799-8633-7727b5a3edae", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{obj}\nThis subsection provides an overview of the motivations and significance of physics-guided DL for learning dynamical systems. 
By incorporating physical principles, governing equations, mathematical modeling, and domain knowledge into DL models, the rapidly growing field of physics-guided DL can potentially (1)\naccelerate data simulation, (2) build scientifically valid models, (3) improve the generalizability of DL models, and (4) discover governing equations.", "id": "2c786f4c-ff88-40c4-b2c9-bbe2cc54b91e", "level": "section", "origin_cites_number": 0, "parent_id": "11aa860a-a47f-4799-8633-7727b5a3edae", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Significance of Physics-Guided Deep Learning" ] ], "subsections": [ "02bc0f7d-fafd-49e6-81e5-3604506d3183", "e3e6062a-f714-4c5e-87d8-d0cae69fef76", "74649ae6-c6fe-4c42-9f69-63f48cb5aed2", "93160546-0995-4122-ae5f-b1a49b951ff2" ], "title": "Significance of Physics-Guided Deep Learning" }, { "cite_extract_rate": 0.5454545454545451, "cites": [ 6368, 6370, 9006, 6358, 6362, 6369, 6357, 6367, 9005, 6365, 4714, 6366 ], "content": "\\label{data_simulation}\nSimulation is an important method of analyzing, optimizing, and designing real-world processes, which are easily verified, communicated, and understood. It serves as a surrogate model and digital twin and provides valuable insights into complex physical systems. Traditional physics-based simulations often rely on running numerical methods: known physical laws encoded in systems of coupled differential equations are solved over space and time via numerical differentiation and integration schemes . Although the governing equations of many physical systems are known, finding approximate solutions using numerical algorithms and computers is still prohibitively expensive, because the discretization step size must usually be kept very small to satisfy stability constraints when the dynamics are complex. Moreover, the performance of numerical methods can depend heavily on the initial guesses of unknown parameters . 
Recently, DL has demonstrated great success in the automation, acceleration, and streamlining of highly compute-intensive workflows for science . \nDeep dynamics models can directly approximate high-dimensional spatiotemporal dynamics by forecasting future states directly, bypassing numerical integration . These models are trained to make forward predictions given the historic frames as input with one or more steps of supervision and can roll out up to hundreds of steps during inference. DL models are usually faster than classic numerical solvers by orders of magnitude since DL is able to take much larger space or time steps than classical solvers .\nAnother common approach is to have deep neural networks directly approximate the solution of complex coupled differential equations via gradient-based optimization, an approach known as physics-informed neural networks (PINNs). This approach has shown success in approximating a variety of PDEs . Additionally, deep generative models, such as diffusion models and score-based generative models, have been shown effective in accurate molecule graph generation . The computer graphics community has also investigated using DL to speed up numerical simulations for generating realistic animations of fluids such as water and smoke . However, the community focuses more on the visual realism of the simulation than on its physical characteristics.", "id": "02bc0f7d-fafd-49e6-81e5-3604506d3183", "level": "subsection", "origin_cites_number": 22, "parent_id": "2c786f4c-ff88-40c4-b2c9-bbe2cc54b91e", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Significance of Physics-Guided Deep Learning" ], [ "subsection", "Accelerate Data Simulation." ] ], "subsections": [], "title": "Accelerate Data Simulation." 
}, { "cite_extract_rate": 0.46666666666666606, "cites": [ 6356, 6372, 6371, 9009, 9007, 7970, 9008 ], "content": "Despite the tremendous progress of DL for science, e.g., atmospheric science , computational biology , material science , quantum chemistry , it remains a grand challenge to incorporate physical principles in a systematic manner into the design, training, and inference of such models. DL models are essentially statistical models that learn patterns from the data they are trained on. Without explicit constraints, DL models, when trained solely on data, are prone to make scientifically implausible predictions, violating the governing laws of physical systems. In many scientific applications, it is important that the predictions made by DL models are consistent with the known physical laws and constraints. For example, in fluid dynamics, a model that predicts the velocity field of a fluid must satisfy the conservation of mass and momentum. In materials science, a model that predicts the properties of a material must obey the laws of thermodynamics and the principles of quantum mechanics.\nThus, to build trustworthy predictive models for science and engineering, we need to leverage known physical principles to guide DL models to learn the correct underlying dynamics instead of simply fitting the observed data. For instance, one can improve the physical and statistical consistency of DL models by explicitly regularizing the loss function with physical constraints. Hybrid DL models, e.g., those that integrate differential equations into DL for temporal dynamics forecasting, achieve promising performance. Prior work studied tensor invariant neural networks that can learn the Reynolds stress tensor while preserving Galilean invariance. Other work presented a hybrid model that combines the numerical RANS-LES coupling method with a custom-designed U-net. The model uses the temporal and spatial filters in the RANS-LES coupling method to guide the U-net in learning both large and small eddies. 
This approach improves both the accuracy and physical consistency of the model, making it more effective at representing the complex flow phenomena observed in many fluid dynamics applications.", "id": "e3e6062a-f714-4c5e-87d8-d0cae69fef76", "level": "subsection", "origin_cites_number": 15, "parent_id": "2c786f4c-ff88-40c4-b2c9-bbe2cc54b91e", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Significance of Physics-Guided Deep Learning" ], [ "subsection", "Build Scientifically Valid Models." ] ], "subsections": [], "title": "Build Scientifically Valid Models." }, { "cite_extract_rate": 0.8888888888888881, "cites": [ 9010, 6374, 6368, 1356, 6365, 5066, 6373 ], "content": "DL models often struggle with generalization: models trained on one dataset cannot adapt properly to unseen scenarios with distributional shifts that may naturally occur in dynamical systems . This is because they learn to represent the statistical patterns in the training data, rather than the underlying causal relationships. In addition, most current approaches are still trained to model a specific system or multiple systems with similar distributions, making it challenging to meet the needs of the scientific domain with heterogeneous environments. Thus, it is imperative to develop generalizable DL models that can learn and generalize well across systems with various parameter domains.\nPrior physical knowledge can be considered as an inductive bias that can place a prior distribution on the model class and shrink the model parameter search space. With the guidance of such inductive biases, DL models can better capture the underlying dynamics from the data that are consistent with physical laws. Across different data domains and systems, the laws of physics stay constant. Hence, integrating physical laws in DL enables the models to generalize outside of the training domain and even to different systems. 
\nEmbedding symmetries into DL models is one way to improve generalization, which we will discuss in detail in subsection \ref{sec:symmetry}. For example, one line of work designed deep equivariant dynamics models that respect the rotation, scaling, and uniform motion symmetries in fluid dynamics. The models are both theoretically and experimentally robust to distributional shifts induced by symmetry group transformations and enjoy favorable sample complexity compared with data augmentation.\nThere are many other ways to improve the generalization of DL models by incorporating other physical knowledge. \nOne study proposed a meta-learning framework to forecast systems with different parameters. It leverages prior\nphysics knowledge to distinguish different systems. Specifically, it uses an encoder to infer the physical parameters of\ndifferent systems and a prediction network to adapt and forecast given the inferred system. Moreover, another approach encodes Lyapunov stability into an autoencoder model for predicting fluid flow and sea surface temperature. They show improved generalizability and reduced prediction uncertainty for neural nets that preserve Lyapunov stability. It has also been shown that adding spectral normalization to DNNs to regularize their Lipschitz continuity can greatly improve the generalization to new input domains on the task of drone landing control.", "id": "74649ae6-c6fe-4c42-9f69-63f48cb5aed2", "level": "subsection", "origin_cites_number": 9, "parent_id": "2c786f4c-ff88-40c4-b2c9-bbe2cc54b91e", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Significance of Physics-Guided Deep Learning" ], [ "subsection", "Improve the generalizability of DL models" ] ], "subsections": [], "title": "Improve the generalizability of DL models" }, { "cite_extract_rate": 0.533333333333333, "cites": [ 9013, 9012, 6375, 6377, 6376, 9011, 9014, 6363 ], "content": "One of the main objectives of science is to discover fundamental laws that can solve practical problems . 
The discovery of governing equations is crucial as it enables us to comprehend the underlying physical laws that regulate complex systems. By identifying the mathematical models that describe the behavior of a system, we can make accurate predictions and gain insights into how the system will behave under different conditions. This knowledge can be applied to optimize the performance of engineering systems, improve the precision of weather forecasts, and understand the mechanisms behind biological processes, among other applications . However, discovering governing equations is a challenging task for various reasons. Firstly, real-world systems are frequently complex and involve many interdependent variables, making it difficult to identify the relevant variables and their relationships. Secondly, many systems are nonlinear and involve interactions between variables that are hard to model using linear equations. Thirdly, the available data may be noisy or incomplete, making it challenging to extract meaningful patterns and relationships. Despite these challenges, recent advances in machine learning have made it possible to automate the process of governing equation discovery and identify complex, nonlinear models from data. These approaches may lead to new discoveries and insights into the behavior of complex systems for a wide range of applications.\nDiscovering governing equations from data is often accomplished by defining a large set of possible mathematical basis functions and learning the coefficients. Early work proposed to find ordinary differential equations by creating a dictionary of possible basis functions and discovering sparse, low-dimensional, and nonlinear models from data using sparse identification. More recent work incorporated neural networks to further augment the dictionary to model more complex dynamics. 
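The dictionary-based sparse identification just described can be sketched in a few lines. The dynamics (logistic growth), the library of basis functions, the threshold value, and the iteration count below are illustrative assumptions, not taken from any specific cited work; for a clean illustration the derivatives are computed from the known law rather than estimated from noisy data.

```python
import numpy as np

# Sparse identification sketch: recover the logistic law x' = x - x^2 from
# trajectory samples by sequential thresholded least squares over a small
# dictionary of candidate terms. All constants here are illustrative.
t = np.linspace(0.0, 5.0, 500)
x = 1.0 / (1.0 + 9.0 * np.exp(-t))   # exact logistic trajectory, x(0) = 0.1
dx = x * (1.0 - x)                   # exact derivatives (clean illustration)

# Dictionary of candidate basis functions evaluated on the trajectory
theta = np.column_stack([np.ones_like(x), x, x**2, x**3])   # [1, x, x^2, x^3]

# Initial unregularized fit, then repeatedly zero out small coefficients
# and refit on the surviving (active) terms
xi, *_ = np.linalg.lstsq(theta, dx, rcond=None)
for _ in range(5):
    small = np.abs(xi) < 0.1         # hard threshold promoting sparsity
    xi[small] = 0.0
    active = ~small
    if active.any():
        sol, *_ = np.linalg.lstsq(theta[:, active], dx, rcond=None)
        xi[active] = sol

# xi is now approximately [0, 1, -1, 0], i.e. x' = x - x^2
```

With clean derivatives the recovered support is exact; in practice the derivatives would be estimated numerically and the threshold tuned against noise.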
contributed to this trend by introducing an efficient first-order conditional gradient algorithm for solving the optimization problem of finding the best sparse fit to observational data in a large library of potential nonlinear models. Alternatively, presented a shallow neural network approach, \textit{EQL}, to identify concise equations from data. They replaced the activation functions with predefined basis functions, including identity and trigonometric functions, and used specially designed division units to model division relationships in the potential governing equations. Similarly, designed \textit{PDE-Nets} that use convolution to approximate differential operators and symbolic neural networks to approximate and recover multivariate functions. These models could learn various functional relations, with and without divisions, from noisy data in a confined domain. However, scalability, overfitting, and over-reliance on high-quality measurement data remain critical concerns in this research area .", "id": "93160546-0995-4122-ae5f-b1a49b951ff2", "level": "subsection", "origin_cites_number": 15, "parent_id": "2c786f4c-ff88-40c4-b2c9-bbe2cc54b91e", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Significance of Physics-Guided Deep Learning" ], [ "subsection", "Discover Governing Equations" ] ], "subsections": [], "title": "Discover Governing Equations" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{prob}\nIn light of the motivation and significance of physics-guided deep learning discussed in the previous section, the primary research efforts in this field have been aimed at tackling the following four fundamental problems.", "id": "a9633632-c2f0-4164-8ce4-578c209e43ee", "level": "section", "origin_cites_number": 0, "parent_id": "11aa860a-a47f-4799-8633-7727b5a3edae", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Problem 
Formulation" ] ], "subsections": [ "2df7047e-84c8-46a6-a89a-df3ad349ad29", "24e9dc4d-abdb-482d-a5a0-3015f920ffe1", "25fb28d5-de22-49b6-9dd1-7aad94581991", "981289bf-d287-4982-aa03-64158fefc781" ], "title": "Problem Formulation" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 6378 ], "content": "\label{prob:solving}\nWhen $\mathcal{F}$ in Eq. \ref{eqn:dynamics} is \textit{known} but Eq. \ref{eqn:dynamics} is too complicated to be solvable, researchers tend to directly solve the differential equations by approximating the solution $\bm{u}(x)$ with a deep neural network, while enforcing the governing equations as a soft constraint on the output of the neural nets during training. This approach can be formulated as the following optimization problem, \n\begin{equation}\label{equ:pinn_loss}\n \text{min}_{\theta} \; \mathcal{L}(\bm{u}) + \lambda_\mathcal{F} \mathcal{L}_\mathcal{F}(\bm{u})\n\end{equation}\n$\mathcal{L}(\bm{u})$ denotes the misfit between the neural net predictions and the training data points. $\theta$ denotes the neural net parameters. $\mathcal{L}_\mathcal{F}(\bm{u})$ is a constraint on the residual of the differential equation system under consideration and $\lambda_\mathcal{F}$ is a regularization parameter that controls the emphasis on this residual. The goal is then to train the neural nets to minimize the loss function in Eq. \ref{equ:pinn_loss}.", "id": "2df7047e-84c8-46a6-a89a-df3ad349ad29", "level": "subsection", "origin_cites_number": 3, "parent_id": "a9633632-c2f0-4164-8ce4-578c209e43ee", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Problem Formulation" ], [ "subsection", "Solving Differential Equations" ] ], "subsections": [], "title": "Solving Differential Equations" }, { "cite_extract_rate": 1, "cites": [ 6379, 6380 ], "content": "When $\mathcal{F}$ in Eq. 
\ref{eqn:dynamics} is \textit{partially known}, we can use neural nets to learn the errors or residuals made by physics-based models . The key is to learn the bias of physics-based models and correct it with the help of deep learning. The final prediction of the state is composed of the simulation from the physics-based models and the residual prediction from neural nets as below, \n\begin{equation}\n\hat{\bm{u}} = \hat{\bm{u}}_\mathcal{F} + \hat{\bm{u}}_{\text{NN}}.\n\end{equation}\nwhere $\hat{\bm{u}}_\mathcal{F}$ is the prediction obtained by numerically solving $\mathcal{F}$, $\hat{\bm{u}}_{\text{NN}}$ is the prediction from the neural networks, and $\hat{\bm{u}}$ is the final prediction made by the hybrid physics-DL model. \nThis learning problem generally involves two training strategies: 1) joint training: optimizing the parameters in the differential equations and the neural networks at the same time by minimizing the prediction errors of the system states. 2) two-stage training: we first fit differential equations on the training data and obtain the residuals, then directly optimize the neural nets on predicting the residuals.", "id": "24e9dc4d-abdb-482d-a5a0-3015f920ffe1", "level": "subsection", "origin_cites_number": 2, "parent_id": "a9633632-c2f0-4164-8ce4-578c209e43ee", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Problem Formulation" ], [ "subsection", "Learning Dynamics Residuals" ] ], "subsections": [], "title": "Learning Dynamics Residuals" }, { "cite_extract_rate": 0.75, "cites": [ 4714, 6364, 6368 ], "content": "When $\mathcal{F}$ in Eq. \ref{eqn:dynamics} is \textit{unknown} or numerically solving Eq. \ref{eqn:dynamics} requires too much computation, many works studied learning high-dimensional spatiotemporal dynamics by directly predicting the input-output system state mapping and bypassing numerical discretization and integration . 
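The two-stage residual strategy described earlier (fit a physics model, then train a network on its residuals) can be sketched on a toy system, with a polynomial fit standing in for the neural network; the system and its split into known and unknown terms are illustrative assumptions.

```python
import numpy as np

# Illustrative ground truth u(t) = 3t + 0.5t^2; the physics model only captures the 3t term.
t = np.linspace(0.0, 2.0, 50)
u_true = 3.0 * t + 0.5 * t**2

def physics_model(t):
    return 3.0 * t  # partially known dynamics: misses the quadratic term

# Stage 1: run the physics model on the training data and record its residuals.
residual = u_true - physics_model(t)

# Stage 2: fit a learned correction to the residuals
# (a quadratic polynomial fit stands in for the neural network here).
coeffs = np.polyfit(t, residual, deg=2)
u_nn = np.polyval(coeffs, t)

# Hybrid prediction: physics simulation plus learned residual correction.
u_hat = physics_model(t) + u_nn
```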
If we assume the first dimension $x_1$ of $\\bm{u}$ in Eq. \\ref{eqn:dynamics} is the time dimension $t$, then the problem of dynamics forecasting can be defined as learning a map $f: \\mathbb{R}^{n \\times k} \\mapsto \\mathbb{R}^{n \\times q} $ that maps a sequence of historic states to future states of the dynamical system,\n\\begin{equation}\\label{equ_task}\nf(\\bm{u} \\text{\\footnotesize $(t-k+1, \\cdot)$}, ..., \\bm{u}\\text{\\footnotesize $(t, \\cdot)$}) = \\bm{u}\\text{\\footnotesize $(t+1, \\cdot)$}, ..., \\bm{u}\\text{\\footnotesize $(t+q, \\cdot)$}\n\\end{equation}\nwhere $k$ is the input length and $q$ is the output length. $f$ is commonly approximated with purely data-driven or physics-guided neural nets and the neural nets are optimized by minimizing the prediction errors of the state $\\mathcal{L}(\\bm{u})$.", "id": "25fb28d5-de22-49b6-9dd1-7aad94581991", "level": "subsection", "origin_cites_number": 4, "parent_id": "a9633632-c2f0-4164-8ce4-578c209e43ee", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Problem Formulation" ], [ "subsection", "Dynamics Forecasting" ] ], "subsections": [], "title": "Dynamics Forecasting" }, { "cite_extract_rate": 0, "cites": [], "content": "When $\\mathcal{F}$ in Eq. \\ref{eqn:dynamics} is \\textit{unknown} and it is necessary to determine the precise governing equations to solve practical problems, numerous efforts have been made to discover the exact mathematical formulation of $\\mathcal{F}$. 
The most common approach is to select from a wide range of possible candidate functions and choose the model that minimizes fitting errors on observation data.\nMore specifically, based on Definition \ref{dfn:dynamics}, the goal of discovering governing equations is to find an approximate function $\mathcal{\hat{F}} = \Phi(\bm{u}(x), x) \boldsymbol{\theta} \approx \mathcal{F}$, where $\Phi(\bm{u}(x), x) = [\phi_1(\bm{u}(x), x), \phi_2(\bm{u}(x), x), \ldots, \phi_p(\bm{u}(x), x)]$ is a library of candidate functions, such as polynomials and trigonometric functions, and $\boldsymbol{\theta} \in \mathbb{R}^p$ is a sparse vector indicating which candidate functions are active in the dynamics. This problem can be formulated as an optimization problem, where we aim to minimize the following cost function over a set of observed data $\{\bm{y}_i\}_{i=1}^n$ of $\bm{u}$:\n\begin{equation}\label{equ_discovery}\n\mathcal{L}(\boldsymbol{\theta}) = \sum_{i=1}^n ||\Phi(\bm{y}_i, x) \boldsymbol{\theta}||^2\n\end{equation}\nIn practice, the minimization is coupled with a sparsity-promoting penalty or constraint on $\boldsymbol{\theta}$ (e.g., an $\ell_1$ penalty), both to keep the recovered model parsimonious and to rule out the trivial solution $\boldsymbol{\theta} = \bm{0}$.", "id": "981289bf-d287-4982-aa03-64158fefc781", "level": "subsection", "origin_cites_number": 0, "parent_id": "a9633632-c2f0-4164-8ce4-578c209e43ee", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Problem Formulation" ], [ "subsection", "Search for Governing Equations" ] ], "subsections": [], "title": "Search for Governing Equations" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:loss}\nComplex physical dynamics occur over a wide range of spatial and temporal scales. Standard DL models may simply fit the observed data while failing to learn the correct underlying dynamics, thus leading to low physical consistency and poor generalizability. One of the simplest and most widely used approaches to incorporate physical constraints is via designing loss functions (regularization). 
Physics-guided loss functions (regularization) can assist DL models to capture correct and generalizable dynamic patterns that are consistent with physical laws. Furthermore, the loss functions constrained by physics laws can reduce the possible search space of parameters. This approach is sometimes referred to as imposing differentiable “soft” constraints, which will be contrasted with imposing “hard” constraints (physics-guided architecture) in the next section. In this chapter, we will start with a case study of physics-guided loss functions, and then categorize these types of methods based on their objectives, including solving differential equations, improving prediction, and accelerating data generation.", "id": "431d9769-2309-472c-bd5c-88ad48003975", "level": "section", "origin_cites_number": 0, "parent_id": "11aa860a-a47f-4799-8633-7727b5a3edae", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Physics-Guided Loss Functions and Regularization" ] ], "subsections": [ "4c0f9bdf-90b8-4dbf-bf96-f4340f873224", "2d94600a-9305-415e-97ce-9f02b018ebd3", "5d1b6a94-b47d-4cf8-8bb8-d59899b49a77", "83f7e0c8-e55d-4858-bf74-074e9b8222ca", "5edd73fd-5c5f-4d91-ad68-1ccd440d4ba5" ], "title": "Physics-Guided Loss Functions and Regularization" }, { "cite_extract_rate": 0.5, "cites": [ 9010, 1356, 6381, 6378, 6995, 6382 ], "content": "The Physics-informed Neural Networks (PINNs) approach is a prime example of incorporating physics knowledge into the design of loss functions. PINNs have shown efficiency and accuracy in learning simple differential equations. Using fully connected neural networks, PINNs directly approximate the solution of differential equations with space coordinates and time stamps as inputs. These networks are trained by minimizing both the loss on measurements and the residual function error through the partial differential equation. More specifically, based on the Def. 
\ref{dfn:dynamics}, a fully connected neural network is employed to model the solution $\bm{\hat{u}}(x, t | \bm{\theta}_{\text{PINN}})$, where $\bm{\theta}_{\text{PINN}}$ denotes the weights of the PINN, which are optimized by minimizing the following loss function. \n\begin{equation}\n \mathcal{L}_{\text{PINN}}=\mathcal{L}(\bm{u}) + \lambda_\mathcal{F} \mathcal{L}_\mathcal{F}(\bm{u})\n\end{equation}\n$\mathcal{L}(\bm{u})=\Vert \bm{\hat{u}}-\bm{y}\Vert _{\Gamma }$ is the error between $\bm{\hat{u}}(x, t | \bm{\theta}_{\text{PINN}})$ and the set of boundary conditions and measured data on the set of points $\Gamma$ where the boundary conditions and data are defined. $\mathcal{L}_{\mathcal{F}}=\Vert \mathcal{F}(\bm{\hat{u}}(x, t|\bm{\theta}_{\text{PINN}}), x, t)\Vert _{\Gamma}$ is the mean-squared error of the residual function to enforce that the predictions generated by PINNs satisfy the desired differential equations. \nHowever, while PINNs have shown some success in capturing the underlying physical phenomena, has pointed out that they often struggle to learn complex physical systems due to the difficulties posed by PDE regularizations in the optimization problem. Furthermore, the effectiveness of PINNs is highly dependent on the quality of the input data, and performance may suffer in the presence of noise or limited data . Moreover, limited by the poor generalizability of neural networks, PINNs have trouble generalizing to space and time domains not covered in the training set . 
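As a concrete illustration of this composite objective, consider the toy ODE $u'(t) = -u(t)$ with $u(0)=1$; in the sketch below, a one-parameter trial family $u(t; a) = e^{at}$ stands in for the network output (in an actual PINN the solution is a neural net and the derivative comes from automatic differentiation), and the loss weight is an arbitrary choice.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 20)
u_data = np.exp(-t)  # "measurements" of the exact solution exp(-t)
lam = 1.0            # weight lambda_F on the PDE residual term

def pinn_loss(a):
    u = np.exp(a * t)        # trial solution u(t; a)
    du = a * np.exp(a * t)   # its time derivative (autodiff in a real PINN)
    data_misfit = np.mean((u - u_data) ** 2)
    pde_residual = np.mean((du + u) ** 2)  # residual of u' + u = 0
    return data_misfit + lam * pde_residual

# Only the parameter that satisfies both the data and the equation
# drives the composite loss to zero.
```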
Nonetheless, continued research into PINNs may help to overcome these challenges and improve their ability to capture and predict complex physical phenomena.", "id": "4c0f9bdf-90b8-4dbf-bf96-f4340f873224", "level": "subsection", "origin_cites_number": 12, "parent_id": "431d9769-2309-472c-bd5c-88ad48003975", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Physics-Guided Loss Functions and Regularization" ], [ "subsection", "Case Study: Physics-informed Neural Networks" ] ], "subsections": [], "title": "Case Study: Physics-informed Neural Networks" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 6381, 6356, 6378 ], "content": "Continuing from the previous discussion of PINNs in the previous section, to overcome the optimization difficulties of PINNs, proposed two ways to alleviate this optimization problem. One is to start by training the PINN on a small constraint coefficient and then gradually increase the coefficient instead of using a big coefficient right away. The other one is training the PINN to predict the solution one time step at a time instead of the entire space-time at once. Furthermore, found that PINNs can overfit and propagate errors on domain boundaries, even when using physics-inspired regularizers. To address this, they introduced Gaussian Process-based smoothing on boundary conditions to recover PINNs' performance against noise and errors in measurements. Moreover, proposed a Bayesian framework that combines PINNs with a Bayesian network. Compared to PINNs, the hybrid model can provide uncertainty quantification and more accurate predictions in scenarios with large noise because it can avoid overfitting. Apart from PINNs, proposed to use flow-based generative models to learn the solutions of probabilistic PDEs while the PDE constraints are enforced in the loss function. 
investigated using neural nets to learn the evolution of the velocity fields of incompressible turbulent flow, the divergence of which is always zero. It found that constraining the model with a divergence-free regularizer can reduce the divergence of prediction and improve prediction accuracy.", "id": "2d94600a-9305-415e-97ce-9f02b018ebd3", "level": "subsection", "origin_cites_number": 5, "parent_id": "431d9769-2309-472c-bd5c-88ad48003975", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Physics-Guided Loss Functions and Regularization" ], [ "subsection", "Solving Differential Equations" ] ], "subsections": [], "title": "Solving Differential Equations" }, { "cite_extract_rate": 0.7000000000000001, "cites": [ 6385, 6383, 9009, 6372, 6371, 6384, 9015 ], "content": "Physics-guided loss functions or regularization have shown great success in improving prediction performance, especially the physical consistency of DL models. \n used neural nets to model lake temperature at different times and different depths. They ensure that the predictions are physically meaningful by regularizing that the denser water predictions are at lower depths than predictions of less dense water. \n further introduced a loss term that ensures thermal energy conservation between incoming and outgoing heat fluxes for modeling lake temperature. \n designed conservation layers to strictly enforce conservation laws in their NN emulator of atmospheric convection.\n introduced a more systematic way of enforcing nonlinear analytic constraints in neural networks via constraints in the loss function. 
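As a concrete example of such a loss-level constraint, the divergence-free penalty mentioned earlier for 2-D incompressible flow can be written with finite differences; the grid and test fields below are illustrative (a field derived from a stream function is divergence-free by construction, while $(u, v) = (x, y)$ has divergence 2 everywhere).

```python
import numpy as np

n, h = 64, 1.0 / 64
ys, xs = np.meshgrid(np.arange(n) * h, np.arange(n) * h, indexing="ij")

# Velocity from a stream function psi: (u, v) = (dpsi/dy, -dpsi/dx) is divergence-free.
psi = np.sin(2 * np.pi * xs) * np.sin(2 * np.pi * ys)
u = np.gradient(psi, h, axis=0)   # dpsi/dy (axis 0 is y)
v = -np.gradient(psi, h, axis=1)  # -dpsi/dx

def divergence_penalty(u, v, h):
    """Mean squared finite-difference divergence on interior points,
    usable as an extra term in the training loss."""
    div = np.gradient(u, h, axis=1) + np.gradient(v, h, axis=0)
    return np.mean(div[1:-1, 1:-1] ** 2)

pen_free = divergence_penalty(u, v, h)    # ~0 for the incompressible field
pen_comp = divergence_penalty(xs, ys, h)  # = 4 for the compressible field (div = 2)
```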
\n incorporated the loss of atomic force and atomic energy into neural nets for improved accuracy of simulating molecular dynamics.\n proposed a novel multi-fidelity physics-constrained neural network for material modeling, in which the neural net was constrained by the losses caused by the violations of the model, initial conditions, and boundary conditions. proposed a novel paradigm for spatiotemporal dynamics forecasting that performs spatiotemporal disentanglement using the functional variable separation. The specially designed time invariance and regression loss functions ensure the separation of spatial and temporal information. \nHamiltonian mechanics is a mathematical framework that describes the dynamics of a system in terms of the total energy of the system, which is the sum of the kinetic and potential energy. proposed \textit{Hamiltonian Neural Nets (HNN)} that parameterize a Hamiltonian with a neural network and then learn it directly from data. The conservation of desired quantities is constrained in the loss function during training. The proposed \textit{HNN} has shown success in predicting mass-spring and pendulum systems. \nLagrangian mechanics describes the dynamics of a system in terms of the difference between the kinetic energy and the potential energy of the system. proposed \textit{Lagrangian Neural Nets (LNN)}, which use a neural network to parameterize the Lagrangian function, i.e., the kinetic energy minus the potential energy. They trained the neural network with the Euler-Lagrange constraint loss functions such that it can learn to approximately conserve the total energy of the system. 
further simplify the \\textit{HNN} and \\textit{LNN} via explicit constraints.\n further introduced a meta-learning approach in \\textit{HNN} to find the structure of the Hamiltonian that can be adapted quickly to a new instance of a physical system.\n benchmark recent energy-conserving neural network models based on Lagrangian/Hamiltonian dynamics on four different physical systems.", "id": "5d1b6a94-b47d-4cf8-8bb8-d59899b49a77", "level": "subsection", "origin_cites_number": 10, "parent_id": "431d9769-2309-472c-bd5c-88ad48003975", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Physics-Guided Loss Functions and Regularization" ], [ "subsection", "Improving Prediction Performance" ] ], "subsections": [], "title": "Improving Prediction Performance" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 6370, 6357 ], "content": "Simulation is an important method of analyzing, optimizing and designing real-world processes. Current numerical methods require significant computational resources when solving chaotic and complex differential equations. Because numerical discretization step size is confined to be very small due to stability constraints . Also, the estimation of unknown parameters by fitting equations to the observed data requires much manual engineering in each application since the optimization of the unknown parameters in the system highly depends on the initial guesses. Thus, there is an increasing interest in utilizing deep generative models for simulating complex physical dynamics. Many works also imposed physical constraints in the loss function for better physical consistency. \nFor instance, enforced the constraints of covariance into standard Generative Adversarial Networks (GAN) via statistical regularization, which leads to faster training and better physical consistency compared with standard GAN. 
\n proposed \\textit{tempoGAN} for super-resolution fluid flow, in which an advection difference loss is used to enforce the temporal coherence of fluid simulation. \n modified \\textit{ESRGAN}, which is a conditional GAN designed for super-resolution, by replacing the adversarial loss with a loss that penalizes errors in the energy spectrum between the generated images and the ground truth data. \nConditional GAN is also applied to emulating numeric hydroclimate models in . The simulation performance is further improved by penalizing the snow water equivalent via the loss function.\n proposed a generative model to simulate fluid flows, in which a novel stream function-based loss function is designed to ensure divergence-free motion for incompressible flows. proposed a physics-informed convolutional model for flow super-resolution, in which the physical consistency of the generated high-resolution flow fields is improved by minimizing the residuals of Navier-Stokes equations.", "id": "83f7e0c8-e55d-4858-bf74-074e9b8222ca", "level": "subsection", "origin_cites_number": 7, "parent_id": "431d9769-2309-472c-bd5c-88ad48003975", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Physics-Guided Loss Functions and Regularization" ], [ "subsection", "Data Generation" ] ], "subsections": [], "title": "Data Generation" }, { "cite_extract_rate": 1, "cites": [ 6356, 6378 ], "content": "While physics-guided loss functions are easy to design and use, and can improve prediction accuracy and physical consistency, they do have several limitations. Firstly, the physical constraints incorporated in the loss functions are usually considered soft constraints, and may not be strictly enforced. This means that the desired physical properties may not be guaranteed when the models are applied to new datasets. 
Secondly, PDE regularization can make loss landscapes more complex and cause optimization issues that are difficult to address, as noted in . Finally, there may be a trade-off between prediction errors and physics-guided regularizers. For example, investigated incompressible turbulent flow prediction using neural nets and found that while constraining the model with a divergence-free regularizer can reduce the divergence of predictions, too much regularization may smooth out small eddies in the turbulence, resulting in a larger prediction error.", "id": "5edd73fd-5c5f-4d91-ad68-1ccd440d4ba5", "level": "subsection", "origin_cites_number": 2, "parent_id": "431d9769-2309-472c-bd5c-88ad48003975", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Physics-Guided Loss Functions and Regularization" ], [ "subsection", "Pros and Cons" ] ], "subsections": [], "title": "Pros and Cons" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:architecture}\nWhile incorporating physical constraints as regularizers in the loss function can improve performance, DL is still used as a black box model in most cases. The modularity of neural networks offers opportunities for the design of novel neurons, layers, or blocks that encode specific physical properties. The advantage of physics-guided NN architectures is that they can impose ``hard'' constraints that are strictly enforced, compared to the ``soft'' constraints described in the previous section. The ``soft'' constraints are much easier to design than hard constraints, yet not required to be strictly satisfied. DL models with physics-guided architectures have theoretically guaranteed properties, and hence are more interpretable and generalizable. In this chapter, we will start with a case study of \\texttt{Turbulent-Flow Net (TF-Net)} that unifies a popular Computational Fluid Dynamics (CFD) technique and a custom-designed U-net. 
We further categorize other related methods based on their architectural design. \n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width= 0.85\\linewidth]{Figures/tf-model.png}\n \\caption{Turbulent Flow Net: three identical encoders to learn the transformations of the three components of different scales, and one shared decoder that learns the interactions among these three components to generate the predicted 2D velocity field at the next instant. Each encoder-decoder pair can be viewed as a U-net and the aggregation is a weighted summation.}\n \\label{fig:tfnet}\n\\end{figure}", "id": "86c528e0-a15b-4253-9f86-cd1bed4952cf", "level": "section", "origin_cites_number": 0, "parent_id": "11aa860a-a47f-4799-8633-7727b5a3edae", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Physics-Guided Design of Architecture" ] ], "subsections": [ "3141ab9a-f80d-492f-aefc-bc83d6707b59", "7cf73a89-86f8-4418-9a60-3ecb23bb75f3", "83152565-3856-4693-8d4f-42b8ba8a6e4a", "c4430677-3a7e-4d4b-851b-a075c3e0a591", "82c831ad-1a34-44a6-a8fa-1d9fa7064b2c" ], "title": "Physics-Guided Design of Architecture" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 6356 ], "content": "\\textit{TF-Net} is a physics-guided DL model for turbulent flow prediction. As shown in Figure \\ref{fig:tfnet}, it applies scale separation to model different ranges of scales of the turbulent flow individually. Computational fluid dynamics (CFD) techniques are at the core of present-day turbulence simulation. Direct Numerical simulations (DNS) are accurate\nbut not computationally feasible for practical applications. Great emphasis was placed on the alternative approaches including Large Eddy Simulation (LES) and Reynolds-averaged Navier-Stokes\n(RANS). 
Both resort to resolving large scales while modeling small scales, using various averaging techniques and/or low-pass filtering of the governing equations .\nOne of the widely used CFD techniques, the RANS-LES coupling approach , combines both the Reynolds-averaged (RANS) and Large Eddy Simulation (LES) approaches in order to take advantage of both methods. Inspired by RANS-LES coupling, \textit{TF-Net} replaces a priori spectral filters with trainable convolutional layers. The turbulent flow is decomposed into three components, each of which is approximated by a specialized U-net to preserve the multiscale properties of the flow. A shared decoder learns the interactions among these three components and generates the final prediction. The motivation for this design is to explicitly guide the ML model to learn the nonlinear dynamics of both large-scale and subgrid-scale motions relevant to the task of spatiotemporal prediction. In other words, we need to force the model to learn not only the large eddies but also the small ones. When we train a predictive model directly on the data with MSE loss, the model may overlook the small eddies and only focus on large eddies to achieve reasonably good accuracy.\nBesides RMSE, physically relevant metrics including divergence and energy spectrum are used to evaluate the performance of the model's prediction. Figure \ref{results} shows that \textit{TF-Net} consistently outperforms all baselines on physically relevant metrics (Divergence and Energy Spectrum). Constraining it with the divergence-free regularizer that we described in the previous section can further reduce the divergence. Figure \ref{rbc_pred} shows the ground truth and predicted velocity along the $x$ direction by \textit{TF-Net} and the three best baselines. We see that the predictions by our TF-Net model are the closest to the target based on the shape and frequency of the motions. 
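The energy spectrum metric used here can be computed by binning Fourier energies by wavenumber magnitude; a minimal sketch follows (the normalization and the single-mode test field are illustrative — a field containing only wavenumber 4 should place all of its energy in bin 4).

```python
import numpy as np

def energy_spectrum(u, v):
    """Radially binned kinetic-energy spectrum of a 2-D velocity field on an n x n grid.
    Normalized so that spec.sum() equals the mean kinetic energy."""
    n = u.shape[0]
    ek = 0.5 * (np.abs(np.fft.fft2(u)) ** 2 + np.abs(np.fft.fft2(v)) ** 2) / n**4
    k = np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers 0..n/2-1, -n/2..-1
    kmag = np.rint(np.hypot(*np.meshgrid(k, k, indexing="ij"))).astype(int)
    return np.bincount(kmag.ravel(), weights=ek.ravel())  # spec[k]: energy near |k|

n = 32
x = np.arange(n) * 2 * np.pi / n
u = np.tile(np.sin(4 * x), (n, 1))  # single mode with wavenumber 4 in x
v = np.zeros_like(u)
spec = energy_spectrum(u, v)        # all energy lands in spec[4]
```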
\nWe also perform an ablation study to understand each component of TF-Net and investigate whether the model has actually learned the flow with different scales. The right panel of Figure \ref{results} shows predictions and the outputs of each small U-net while the other two encoders are zeroed out. We observe that the outputs of each small U-net are the flow with different scales, which demonstrates that \textit{TF-Net} can learn multi-scale behaviors. In summary, \textit{TF-Net} is able to generate both accurate and physically meaningful predictions of the velocity fields that preserve critical quantities of relevance. \n\begin{figure*}[hbt!]\n \begin{minipage}[b]{0.3\textwidth}\n \includegraphics[width=\textwidth]{Figures/divergence.png}\n \end{minipage} \n \begin{minipage}[b]{0.3\textwidth}\n\includegraphics[width=\linewidth]{Figures/spec_ci_square.png}\n \end{minipage} \n \begin{minipage}[b]{0.37\textwidth}\n \includegraphics[width= \linewidth]{Figures/ablation.png}\n \end{minipage} \n \caption{From left to right: Mean absolute divergence of different models' predictions at varying forecasting horizon; The Energy Spectrum of the target, \textit{TF-Net}, U-net and ResNet on the leftmost square sub-region; Ablation Study: Ground truth, prediction from \textit{TF-Net} and the outputs of each small U-net while the other two encoders are zeroed out.}\n \label{results}\n\end{figure*}\n\begin{figure*}[htb!]\n\centering\n \begin{minipage}[b]{0.326\textwidth}\n\includegraphics[width= \linewidth]{Figures/im_09.png}\n \end{minipage} \n\begin{minipage}[b]{0.326\textwidth}\n\includegraphics[width= \linewidth]{Figures/im_19.png}\n \end{minipage} \n\begin{minipage}[b]{0.326\textwidth}\n\includegraphics[width= \linewidth]{Figures/im_59.png}\n \end{minipage} \n\caption{Ground truth and predicted velocity $u$ by \textit{TF-Net} and the three best baselines (U-Net, ResNet, and GAN) at time $T+10$, $T+30$ to $T+60$ (suppose $T$ is the time step of the last input 
frame).}\n\\label{rbc_pred}\n\\end{figure*}", "id": "3141ab9a-f80d-492f-aefc-bc83d6707b59", "level": "subsection", "origin_cites_number": 3, "parent_id": "86c528e0-a15b-4253-9f86-cd1bed4952cf", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Physics-Guided Design of Architecture" ], [ "subsection", "Case Study: Turbulent-Flow Net" ] ], "subsections": [], "title": "Case Study: Turbulent-Flow Net" }, { "cite_extract_rate": 0.44444444444444403, "cites": [ 6375, 9007, 6376, 9014 ], "content": "Convolutional architecture remains dominant in most tasks of computer vision, such as objection, image classification, and video prediction. Thanks to their efficiency and desired inductive biases, such as locality and translation equivariance, convolution neural nets have been widely applied to emulate and predict complex spatiotemporal physical dynamics. Researchers have proposed various ways to bake desired physical properties into the design of convolutional models. \nFor example, proposed to enforce hard linear spatial PDE constraints within CNNs using the Fast Fourier Transform algorithm.\n modified the LSTM units to introduce an intermediate variable to preserve monotonicity in a convolutional auto-encoder model for lake temperature. \n proposed a physics-guided convolutional model, \\textit{PhyDNN}, which uses physics-guided structural priors and physics-guided aggregate supervision for modeling the drag forces acting on each particle in a computational fluid dynamics-discrete element Method. designed \\textit{HybridNet} for dynamics predictions that combine ConvLSTM for predicting external forces with model-driven computation with CeNN for system dynamics. \\textit{HybridNet} achieves higher accuracy on the tasks of forecasting heat convection-diffusion and fluid dynamics. 
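A recurring theme in these convolutional models is that kernels act as discrete differential operators. As a minimal illustration (the grid and test function are arbitrary choices), the classic five-point stencil written as a $3\times 3$ convolution computes a discrete Laplacian — the kind of operator that \textit{PDE-Net}-style models learn rather than hard-code.

```python
import numpy as np

# Five-point Laplacian stencil as a convolution kernel: correlating it with a grid
# approximates (d^2/dx^2 + d^2/dy^2) f after dividing by h^2.
KERNEL = np.array([[0.0,  1.0, 0.0],
                   [1.0, -4.0, 1.0],
                   [0.0,  1.0, 0.0]])

def laplacian_conv(f, h):
    out = np.zeros((f.shape[0] - 2, f.shape[1] - 2))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(KERNEL * f[i:i + 3, j:j + 3])  # "valid" correlation
    return out / h**2

n, h = 64, 1.0 / 64
ys, xs = np.meshgrid(np.arange(n) * h, np.arange(n) * h, indexing="ij")
f = xs**2 + ys**2          # analytic Laplacian is exactly 4
lap = laplacian_conv(f, h)
```

The stencil is exact for quadratics, which is why the output is 4 everywhere up to floating-point error; a learnable kernel simply replaces these fixed weights with trainable ones.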
\n proposed to combine deep learning and a differentiable PDE solver for understanding and controlling complex nonlinear physical systems over a long time horizon. \n proposed continuous-filter convolutional layers for modeling quantum interactions. The convolutional kernel is parametrized by neural nets that take relative positions between any two points as input. They obtained a joint model for the total energy and interatomic forces that follow fundamental quantum-chemical principles.\nIn addition, convolution layers have the potential to uncover governing equations. For instance, developed \textit{PDE-Net}, which utilizes convolution to approximate differential operators over spatial domains of different orders. It also includes a symbolic neural network based on \textit{EQL} to approximate and recover multivariate functions. The authors demonstrated that \textit{PDE-Net} is more compact than \textit{SINDy} dictionaries and numerical experiments suggest that it can uncover the hidden PDE of various observed dynamics.", "id": "7cf73a89-86f8-4418-9a60-3ecb23bb75f3", "level": "subsection", "origin_cites_number": 9, "parent_id": "86c528e0-a15b-4253-9f86-cd1bed4952cf", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Physics-Guided Design of Architecture" ], [ "subsection", "Convolutional architecture" ] ], "subsections": [], "title": "Convolutional architecture" }, { "cite_extract_rate": 1, "cites": [ 6386, 6388, 6387, 6369 ], "content": "Standard convolutional neural nets only operate on regular or uniform meshes such as images. Graph neural networks move beyond data on the regular grid towards modeling objects with arbitrary positions. For instance, graph neural networks can model the fluid dynamics on irregular meshes that CNNs cannot. designed a deep encoder-processor-decoder graph architecture for simulating fluid dynamics under Lagrangian description. 
The rich physical states are represented by graphs of interacting particles, and complex interactions are approximated by learned message-passing among nodes. \n utilized the same architecture to learn mesh-based simulation. The authors directly construct graphs on the irregular meshes constructed in the numerical simulation methods. In addition, they proposed an adaptive re-meshing algorithm that allows the model to accurately predict dynamics at both large and small scales. \n further proposed two tricks to address the instability and error accumulation issues of training graph neural nets for solving PDEs. One is perturbing the input by a certain noise and only backpropagating errors on the last unroll step, and the other one is predicting multiple steps simultaneously in time. Both tricks make the model faster and more stable. \n proposed a \\textit{Neural Operator} approach that learns the mapping between function spaces, and is invariant to different approximations and grids. More specifically, it used the message-passing graph network to learn Green's function from the data, and then the learned Green's function can be used to compute the final solution of PDEs. \n further extended it to \\textit{Fourier Neural Operator} by replacing the kernel integral operator with a convolution operator defined in Fourier space, which is much more efficient than \\textit{Neural Operator}. In , graph networks were also used to represent, learn, and infer robotics systems, bodies, and joints. proposed to learn compositional Koopman operators, using graph neural networks to encode the state into object-centric embeddings and using a block-wise linear transition matrix to regularize the shared structure across objects. 
Another important line of work is incorporating symmetries to design equivariant graph neural nets for modeling molecular dynamics, which we discuss in detail in Section \ref{sec:symmetry}.", "id": "83152565-3856-4693-8d4f-42b8ba8a6e4a", "level": "subsection", "origin_cites_number": 4, "parent_id": "86c528e0-a15b-4253-9f86-cd1bed4952cf", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Physics-Guided Design of Architecture" ], [ "subsection", "Graph Neural Networks" ] ], "subsections": [], "title": "Graph Neural Networks" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 6390, 6391, 2924, 6389 ], "content": "One of the main applications of multilayer perceptrons (MLPs) in physics-guided architecture design is finding the linear Koopman operator. Koopman theory provides a way to represent a nonlinear dynamical system using an infinite-dimensional linear Koopman operator that acts on a Hilbert space of measurement functions of the system state. However, finding the appropriate measurement functions that map the dynamics to the function space, as well as an approximate and finite-dimensional Koopman operator, is highly nontrivial. One way to obtain an approximation of the Koopman operator is through the Dynamic Mode Decomposition algorithm , but this requires manually preparing nonlinear observables, which is not always feasible as prior knowledge about them may be lacking.\nTo address this challenge, recent research has explored using neural networks to learn the Koopman operator. One popular approach hypothesizes that there exists a data transformation that can be learned by neural networks, which yields an approximate finite-dimensional Koopman operator. For example, and have proposed using fully connected neural networks to directly map the observed dynamics to a dictionary of nonlinear observables that span a Koopman invariant subspace. 
This mapping is represented through an autoencoder network, which embeds the observed dynamics onto a low-dimensional latent space where the Koopman operator is approximated by a linear layer. have further generalized this approach to enable learning the Koopman operator for systems with continuous spectra. have also designed a similar autoencoder architecture for forecasting physical processes. But in the latent space, the consistency of both the forward and backward systems is ensured, while other models only consider the forward system. This approach performs well on systems that have both forward and backward dynamics, enabling long time prediction.\nKoopman theory can also be used to model real-world dynamics without known governing laws. have developed a novel approach, \\textit{Koopman Neural Forecaster (KNF)}, to forecast highly non-stationary time series in an interpretable and robust manner. This approach uses Koopman theory to simplify non-linear real-world dynamics into linear systems, which then can be easily manipulated by modifying the Koopman matrix. It employs predefined measurement functions to impose appropriate inductive biases and uses a Koopman operator that varies over time to capture the underlying changing distribution. 
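The shared mechanism, advancing a nonlinear system linearly in a lifted space of observables, can be sketched with a plain least-squares (DMD-style) fit on a toy system that admits an exact finite-dimensional Koopman operator (the system, dictionary, and constants below are illustrative):

```python
import numpy as np

# toy nonlinear system with an exact 3-D Koopman-invariant subspace:
# x' = mu*x,  y' = lam*y + c*x^2, with observables g = (x, y, x^2)
mu, lam, c = 0.9, 0.5, -0.3

def step(x, y):
    return mu * x, lam * y + c * x**2

def lift(x, y):
    return np.array([x, y, x**2])   # hand-picked dictionary of observables

rng = np.random.default_rng(0)
G, G_next = [], []
for _ in range(20):                  # collect snapshot pairs from trajectories
    x, y = rng.uniform(-1.0, 1.0, size=2)
    for _ in range(10):
        xn, yn = step(x, y)
        G.append(lift(x, y))
        G_next.append(lift(xn, yn))
        x, y = xn, yn
G, G_next = np.array(G).T, np.array(G_next).T

# least-squares Koopman matrix K with G_next ~= K @ G
K = G_next @ np.linalg.pinv(G)
```

Here the dictionary of observables is hand-picked; the neural approaches discussed in this subsection replace it with a learned encoder.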
The model outperforms the state-of-the-art on highly non-stationary time series datasets, including M4, cryptocurrency return forecasting, and sports player trajectory prediction.", "id": "c4430677-3a7e-4d4b-851b-a075c3e0a591", "level": "subsection", "origin_cites_number": 6, "parent_id": "86c528e0-a15b-4253-9f86-cd1bed4952cf", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Physics-Guided Design of Architecture" ], [ "subsection", "Multilayer Perceptron" ] ], "subsections": [], "title": "Multilayer Perceptron" }, { "cite_extract_rate": 0, "cites": [], "content": "Embedding physics into the design of the model architecture can enable physical principles strictly enforced and theoretically guaranteed. That leads to more interpretable and generalizable deep learning models. However, it is not trivial to design physics-guided architectures that perform and generalize well without hurting the representation power of neural nets. Hard inductive biases can greatly improve the sample efficiency of learning, but could potentially become restrictive when the size of the dataset is big enough for models to learn all the necessary inductive biases from the data or when the inductive biases are not strict.", "id": "82c831ad-1a34-44a6-a8fa-1d9fa7064b2c", "level": "subsection", "origin_cites_number": 0, "parent_id": "86c528e0-a15b-4253-9f86-cd1bed4952cf", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Physics-Guided Design of Architecture" ], [ "subsection", "Pros and Cons" ] ], "subsections": [], "title": "Pros and Cons" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:hybrid}\nThe papers discussed in the previous two sections focus on incorporating the known properties of physical systems into the design of loss functions or neural network modules. 
In this section, we talk about works that directly combine pure physics-based models, such as numerical methods, with DL models.", "id": "02e44777-812c-4110-aad6-ca1628c72747", "level": "section", "origin_cites_number": 0, "parent_id": "11aa860a-a47f-4799-8633-7727b5a3edae", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Hybrid Physics-DL Model" ] ], "subsections": [ "3df92053-44fc-41a6-9d76-77723b8aeabc", "837bf1bd-7805-4197-9d9f-babce7404a8f", "2a882f41-fe40-483b-b066-37d3f4b426ef", "c2e5329f-78a7-4073-9443-6dab8240a5fd" ], "title": "Hybrid Physics-DL Model" }, { "cite_extract_rate": 0.8571428571428571, "cites": [ 6394, 7970, 6392, 6395, 6393, 6359 ], "content": "Neural Ordinary Differential Equations (\\textit{Neural ODEs}) generalize traditional RNNs that process data sequentially in discrete time steps by modeling data as continuous functions that change over time, allowing for a more flexible way to capture complex dynamics. They changed the traditionally discretized neuron layer depths into continuous equivalents such that the derivative of the hidden state can be parameterized using a neural network. The output of the network is then computed using a black box differential equation solver, making \\textit{Neural ODEs} an efficient combination of neural nets and numerical solvers.\nMore specifically, they parametrize the velocity $\\bm{\\hat{z}}$ of a hidden state $\\bm{z}$ with the\nhelp of a neural network $\\bm{\\hat{z}} = f_{\\theta}(\\bm{z}, t)$. 
Given the initial time $t_0$ and target time $t_T$, \textit{Neural ODEs} predict the target state $\bm{\hat{y}}_T$ by performing the following encoding, integration, and decoding operations:\n\begin{equation}\n \bm{z}(t_0) = \phi_{\text{enc}}(\bm{y}_0), \;\;\;\;\;\; \bm{z}(t_T) = \bm{z}(t_0) + \int_{t_0}^{t_T} f_{\bm{\theta}}(\bm{z}, t) dt, \;\;\;\;\;\; \bm{\hat{y}}_T= \psi_{\text{dec}}(\bm{z}(t_T))\n\end{equation}\nwhere the encoder $\phi_{\text{enc}}$ and the decoder $\psi_{\text{dec}}$ can be neural networks. Solving an ODE numerically is commonly done by discretization and integration, such as the simple Euler method and higher-order variants of the Runge-Kutta method. However, naively backpropagating through such a solver is computationally intensive, since it requires differentiating through all the operations of the solver and storing the intermediate quantities of the forward pass, incurring a high memory cost. Thus, the adjoint method is used to efficiently compute gradients during backpropagation. 
To compute the gradients of a loss function $L$ with respect to the initial state $\\bm{z}(t_0)$ and the parameters $\\bm{\\theta}$, the key idea of the adjoint method is to introduce an adjoint state $\\bm{p}(t)$, $\\bm{p}(t) = \\frac{\\partial L}{\\partial \\bm{z}(t)}$, which satisfies the following differential equation:\n\\begin{equation}\n\\frac{d\\bm{p}(t)}{dt} = -\\bm{p}(t)^T \\frac{\\partial f_{\\bm{\\theta}}(\\bm{z}(t), t)}{\\partial \\bm{z}} \\label{adjoint_1}\n\\end{equation}\nThe adjoint state is used to compute the gradients of the loss function with respect to the initial state and the parameters using the following formulas:\n\\begin{equation}\n \\frac{\\partial L}{\\partial \\bm{\\theta}} = - \\int_{t_T}^{t_0}\\bm{p}(t)^T \\frac{\\partial f_{\\bm{\\theta}}(\\bm{z}(t), t)}{\\partial \\bm{\\theta}} dt; \\label{adjoint_2}\n\\end{equation}\nIn a word, these formulas can be computed efficiently by solving the ODE for $\\bm{p}(t)$ using the same numerical method used to solve the forward ODE. During the forward pass, the ODE solver computes the solution of the differential equation $\\bm{z}(t)$ using the initial state $\\bm{z}(t_0)$ and the function $f_{\\bm{\\theta}}(\\bm{z}(t), t)$. During the backward pass, the adjoint state $\\bm{p}(t)$ is computed by solving Eqn. \\ref{adjoint_1} that starts from the final time $t_T$ and backpropagates through time. This adjoint state is then used to compute the gradients of the loss function with respect to the initial state and the parameters of the ODE function in Eqn. \\ref{adjoint_2}, which can then be used to update the model parameters through gradient descent. \n\\textit{Neural ODEs} have broad potential applications, particularly in domains that require continuous and dynamic models. They offer a useful tool for building continuous-time time series models, which can easily handle data coming at irregular intervals. 
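As a sanity check of the adjoint recipe above, consider scalar dynamics f(z, t) = theta*z with loss L = z(T); then df/dz = theta, df/dtheta = z, and the closed-form gradient is dL/dtheta = z0*T*exp(theta*T). A minimal numerical sketch (fixed-step Euler in place of an adaptive solver):

```python
import numpy as np

theta, z0, T, n = 0.3, 1.5, 2.0, 20000
h = T / n

z = np.empty(n + 1)                   # forward pass: dz/dt = theta*z
z[0] = z0
for k in range(n):
    z[k + 1] = z[k] + h * theta * z[k]

p = np.empty(n + 1)                   # backward pass: dp/dt = -theta*p,
p[n] = 1.0                            # starting from p(T) = dL/dz(T) = 1
for k in range(n, 0, -1):
    p[k - 1] = p[k] + h * theta * p[k]

# dL/dtheta = integral_0^T p(t) * (df/dtheta)(t) dt = integral of p*z (trapezoid)
pz = p * z
grad = h * (pz.sum() - 0.5 * (pz[0] + pz[-1]))
```

Only the two ODE solves and one quadrature are needed; no solver operations are stored for differentiation, which is the memory saving the adjoint method provides.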
They also allow for building normalizing flows, which make it easy to track the change in density, even for unrestricted neural architectures. There have been several follow-up works that further extended the idea of continuous neural nets. For instance, introduced \textit{Augmented Neural ODEs}, which are more expressive, empirically more stable, and computationally cheaper than Neural ODEs. More importantly, they can learn functions whose continuous trajectories would need to intersect one another, which \textit{Neural ODEs} cannot represent.\n further extended this idea of continuous neural nets to graph convolutions, and proposed \textit{Graph Neural ODE}. proposed Neural Stochastic Differential Equations (\textit{Neural SDEs}), which model stochastic noise injection by stochastic differential equations. They demonstrated that incorporating the noise injection regularization mechanism into the continuous neural network can reduce overfitting and achieve lower generalization errors. proposed a Neural ODE-based generative time-series model that uses the known differential equation instead of treating it as hidden unit dynamics so that they can integrate mechanistic knowledge into the Neural ODE. utilized neural networks to directly approximate the unknown terms in the differential equations. 
By using the adjoint method, the proposed model can efficiently compute gradients with respect to all parameters in the model, including the initial conditions, the parameters of the ODE, and the boundary conditions.", "id": "3df92053-44fc-41a6-9d76-77723b8aeabc", "level": "subsection", "origin_cites_number": 7, "parent_id": "02e44777-812c-4110-aad6-ca1628c72747", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Hybrid Physics-DL Model" ], [ "subsection", "Case Study: Neural Differential Equations" ] ], "subsections": [], "title": "Case Study: Neural Differential Equations" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 6361, 6358, 6380, 9016, 23, 6379 ], "content": "Perhaps the simplest form of hybrid modeling is residual learning, where DL learns to predict the errors or residuals made by physics-based models. The key is to learn the bias of physics-based models and correct it with the help of DL models . A representative example is \\textit{DeepGLEAM} for forecasting COVID-19 mortality that combines a mechanistic epidemic simulation model GLEAM with DL. It uses a Diffusion Convolutional RNN (DCRNN) to learn the correction terms from GLEAM, which leads to improved performance over either purely mechanistic models or purely DL models on the task of one-week ahead COVID-19 death count predictions\nSimilarly, combines graph neural nets with a CFD simulator run on a coarse mesh to generate high-resolution fluid flow prediction. CNNs are used to correct the velocity field from the numerical solver on a coarse grid in . utilized neural networks for subgrid modeling of the LES simulation of two-dimensional turbulence. In , a neural network model is implemented in the reduced order modeling framework to compensate for the errors from the model reduction. proposed \\textit{DR-RNN} that is trained to find the residual minimizer of numerically discretized ODEs or PDEs. 
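A toy version of this residual setup, with an imperfect physics model and a plain least-squares polynomial standing in for the DL corrector (all choices illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def true_system(x):                 # ground truth (unknown in practice)
    return x + 0.5 * np.sin(x)

def physics_model(x):               # imperfect physics-based model
    return x

# fit the *residual* of the physics model with a polynomial regressor
x_train = rng.uniform(-2.0, 2.0, 200)
residual = true_system(x_train) - physics_model(x_train)
coef, *_ = np.linalg.lstsq(np.vander(x_train, 8), residual, rcond=None)

def hybrid_model(x):
    return physics_model(x) + np.vander(x, 8) @ coef

x_test = np.linspace(-2.0, 2.0, 101)
mse_phys = np.mean((true_system(x_test) - physics_model(x_test)) ** 2)
mse_hybrid = np.mean((true_system(x_test) - hybrid_model(x_test)) ** 2)
```

The corrector only needs to capture the bias of the physics model, which is typically a much easier learning problem than fitting the full dynamics.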
They showed that \\textit{DR-RNN} can greatly reduce both computational cost and time discretization error of the reduced order modeling framework. introduced the \\textit{APHYNITY} framework that can efficiently augment approximate physical models with deep data-driven networks. A key feature of their method is being able to decompose the problem in such a way that the data-driven model only models what cannot be captured by the physical model.", "id": "837bf1bd-7805-4197-9d9f-babce7404a8f", "level": "subsection", "origin_cites_number": 9, "parent_id": "02e44777-812c-4110-aad6-ca1628c72747", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Hybrid Physics-DL Model" ], [ "subsection", "Residual Modeling" ] ], "subsections": [], "title": "Residual Modeling" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 6362 ], "content": "DL models can be used to replace one or more components of physics-based models that are difficult to compute or unknown. For example, replaced the numerical solver for solving Poisson's equations with convolution networks in the procedure of Eulerian fluid simulation, and the obtained results are realistic and showed good generalization properties. proposed to use neural nets to reconstruct the model corrections in terms of variables that appear in the closure model. applied a U-net to estimate the velocity field given the historical temperature frames, then used the estimated velocity to forecast the sea surface temperature based on the closed-form solution of the advection-diffusion equation. 
combined the high-dimensional model representation that is represented as a sum of mode terms each of which is a sum of component functions with NNs to build multidimensional potential, in which NNs are used to represent the component functions that minimize the error mode term by mode term.", "id": "2a882f41-fe40-483b-b066-37d3f4b426ef", "level": "subsection", "origin_cites_number": 3, "parent_id": "02e44777-812c-4110-aad6-ca1628c72747", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Hybrid Physics-DL Model" ], [ "subsection", "Intermediate Variable Modeling" ] ], "subsections": [], "title": "Intermediate Variable Modeling" }, { "cite_extract_rate": 0, "cites": [], "content": "Combining physics-based and deep learning models can enable leveraging both the flexibility of neural nets for modeling unknown parts of the dynamics and the interpretability and generalizability of physics-based models. However, one potential downside of hybrid physics-DL models worth mentioning is that all or most of the dynamics could be captured by neural nets and the physics-based models contribute little to the learning. \nThat would hurt the interpretability and the generalizability of the model. We must ensure an optimal balance between the physics-based and DL models. 
We need neural nets to only model the information that cannot be represented by the physical prior.", "id": "c2e5329f-78a7-4073-9443-6dab8240a5fd", "level": "subsection", "origin_cites_number": 0, "parent_id": "02e44777-812c-4110-aad6-ca1628c72747", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Hybrid Physics-DL Model" ], [ "subsection", "Pros and Cons" ] ], "subsections": [], "title": "Pros and Cons" }, { "cite_extract_rate": 0.5, "cites": [ 6368, 8154, 8153, 6397, 6399, 9017, 9005, 6401, 6398, 8152, 6400, 6396, 306 ], "content": "\label{sec:symmetry}\nSymmetry has long been implicitly used in DL to design networks with known invariances and equivariances. Convolutional neural networks enabled breakthroughs in computer vision by leveraging translation equivariance . Similarly, recurrent neural networks , graph neural networks , and capsule networks all impose symmetries.\nWhile equivariant DL models have achieved remarkable success on image and text data , the study of equivariant nets in learning dynamical systems has become increasingly popular recently . Since symmetries can be integrated into neural nets not only through loss functions but also through the design of the layers themselves, and since there is a large volume of work on equivariant and invariant DL models for physical dynamics, we discuss this topic separately in this section. \n\begin{wrapfigure}{r}{0.4\textwidth}\n\centering\n\includegraphics[width= 0.4\textwidth,trim={55 130 55 0}]{Figures/equiv.png} \n\caption{Illustration of equivariance: $f(x)=2x$ w.r.t $T = \mathrm{rot}(\pi/4)$}\n\label{fig:equi}\n\end{wrapfigure}\nIn physics, there is a deep connection between symmetries and conservation laws. Noether's theorem gives a correspondence between conserved quantities and groups of symmetries. 
For instance, time-translation symmetry corresponds to the conservation of energy, spatial translation symmetry to the conservation of linear momentum, and rotation symmetry to the conservation of angular momentum. By building a neural network that inherently respects a given symmetry, we thus make conservation of the associated quantity more likely and consequently the model's prediction more physically accurate. Furthermore, by designing a model that is inherently equivariant to transformations of its inputs, we can guarantee that our model generalizes automatically across these transformations, making it robust to distributional shifts. \nA \textbf{group of symmetries} or simply \textbf{group} consists of a set $G$ together with an associative composition map $\circ \colon G \times G \to G$. The composition map has an identity $1 \in G$ and composition with any element of $G$ is required to be invertible. A group $G$ has an \textbf{action} on a set $S$ if there is an action map $\cdot \colon G \times S \to S$ which is compatible with the composition law. We say further that $\rho : G \to GL(V)$ is a \textbf{$G$-representation} if the set $V$ is a vector space and each group element $g \in G$ is represented by a linear map (matrix) $\rho(g)$ that acts on $V$. Formally, a function $f \colon X \to Y$ may be described as respecting the symmetry coming from a group $G$ using the notion of equivariance. \n\begin{dfn}\label{dfn:equivariance}\nAssume a group representation $\rho_{\text{in}}$ of $G$ acts on $X$ and $\rho_{\text{out}}$ acts on $Y$. We say a function $f$ is \textbf{$G$-equivariant} if \n\begin{equation}\n f( \rho_{\text{in}}(g)(x)) = \rho_{\text{out}}(g) f(x) \label{eq:strictequivariance}\n\end{equation}\nfor all $x \in X$ and $g \in G$. The function $f$ is \textbf{$G$-invariant} if $f( \rho_{\text{in}}(g)(x)) = f(x)$ for all $x \in X$ and $g \in G$. This is a special case of equivariance for the case $\rho_{\mathrm{out}}(g) = 1$. 
See Figure \\ref{fig:equi} for an illustration of a rotation equivariant function. \n\\end{dfn}", "id": "419bd794-c010-4085-a609-4fa92d46e63a", "level": "section", "origin_cites_number": 26, "parent_id": "11aa860a-a47f-4799-8633-7727b5a3edae", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Invariant and Equivariant DL Models" ] ], "subsections": [ "6c5ebb8b-9b87-4db3-8b27-c13e7edee45b", "8a812312-e6e1-48da-a720-2251713df6bf", "663d967b-66a2-4a43-ac10-b22f1a779eac", "e7918c1d-d2ad-48cb-b008-0dda91dcf3ca", "f8b8d401-1a90-42ac-8c1a-1179e1c75423" ], "title": "Invariant and Equivariant DL Models" }, { "cite_extract_rate": 1, "cites": [ 6368, 8154 ], "content": " exploited the symmetries of fluid dynamics to design equivariant networks. The Navier-Stokes equations are invariant under the following five different transformations. Individually, each of these types of transformations generates a group of symmetries in the system. \n\\begin{itemize*}\n \\item Space translation: \\; $T_{\\bm{c}}^{\\mathrm{sp}}\\bm{w}(\\bm{x}, t) = \\bm{w}(\\bm{x-c}, t)$, \\; $\\bm{c} \\in \\mathbb{R}^2$,\n \\item Time translation: \\;$T_{\\tau}^{\\mathrm{time}}\\bm{w}(\\bm{x}, t) = \\bm{w}(\\bm{x}, t-\\tau)$, \\; $\\tau \\in \\mathbb{R}$,\n \\item Galilean transformation: \\;$T_{\\bm{c}}^{\\mathrm{gal}}\\bm{w}(\\bm{x}, t) = \\bm{w}(\\bm{x}-\\bm{c}t, t) + \\bm{c}$, \\; $\\bm{c} \\in \\mathbb{R}^2$,\n \\item Rotation/Reflection: \\; $T_R^{\\mathrm{rot}}\\bm{w}(\\bm{x}, t) = R\\bm{w}(R^{-1}\\bm{x}, t), \\; R \\in O(2)$,\n \\item Scaling: \\; $T_{\\lambda}^{sc}\\bm{w}(\\bm{x},t) = \\lambda\\bm{w}(\\lambda\\bm{x}, \\lambda^2t)$, \\; $\\lambda \\in \\mathbb{R}_{>0}$.\n\\end{itemize*}\nConsider a system of \ndifferential operators $\\mathcal{D}$ acting on $\\hat{\\mathcal{F}}_V$. 
Denote the set of solutions $\\mathrm{Sol}(\\mathcal{D}) \\subseteq \\hat{\\mathcal{F}}_V.$ We say $G$ is \\textbf{a symmetry group of $\\mathcal{D}$} if $G$ preserves $\\mathrm{Sol}(\\mathcal{D})$. That is, if $\\varphi$ is a solution of $\\mathcal{D}$, then for all $g \\in G$, $g(\\varphi)$ is also. \nIn order to forecast the evolution of a system $\\mathcal{D}$, we model the forward prediction function $f$. Let $\\bm{w} \\in \\mathrm{Sol}(\\mathcal{D})$. The input to $f$ is a collection of $k$ snapshots at times $t - k,\\ldots,t-1$ denoted $\\bm{w}_{t-i} \\in \\mathcal{F}_d$. The prediction function $f\\colon \\mathcal{F}_d^k \\to \\mathcal{F}_d$ is defined $f(\\bm{w}_{t-k},\\ldots,\\bm{w}_{t-1}) = \\bm{w}_{t}$. It predicts the solution at a time $t$ based on the solution in the past. \nLet $G$ be a symmetry group of $\\mathcal{D}$. Then for $g \\in G$, $g(\\bm{w})$ is also a solution of $\\mathcal{D}$. Thus $f(g\\bm{w}_{t-k},\\ldots,g\\bm{w}_{t-1}) = g\\bm{w}_{t}$. Consequently, $f$ is $G$-equivariant.\nThey tailored different methods for incorporating each symmetry into CNNs for spatiotemporal dynamics forecasting. CNNs are time translation-equivariant when used in an autoregressive manner. Convolutions are also naturally space translation equivariant. Scale equivariance in dynamics is unique as the physical law dictates the scaling of magnitude, space and time simultaneously. To achieve this, they replaced the standard convolution layers with group correlation layers over the group $G=(\\mathbb{R}_{>0},\\cdot)\\ltimes(\\mathbb{R}^2,+)$ of both scaling and translations. 
The $G$-correlation upgrades this operation by both translating \\textit{and} scaling the kernel relative to the input, \n\\begin{equation}\\label{eqn:groupcorr}\n \\bm{v}(\\bm{p}, s, \\mu) = \\sum_{\\lambda \\in \\mathbb{R}_{>0}, t \\in \\mathbb{R}, \\bm{q} \\in \\mathbb{Z}^2 } \\mu \\bm{w}(\\bm{p} + \\mu \\bm{q}, \\mu^2 t, \\lambda ) K(\\bm{q},s,t,\\lambda),\n\\end{equation}\nwhere $s$ and $t$ denote the indices of output and input channels. They add an axis to the tensors corresponding to the scale factor $\\mu$. In addition, the rotational symmetry was modeled using $\\mathrm{SO}(2)$-equivariant convolutions and activations within the \\texttt{E(2)-CNN} framework .\nTo make CNNs equivariant to Galilean transformation, since they are already translation-equivariant, it is only necessary to make them equivariant to uniform motion transformation, which is adding a constant vector field to the vector field. This is part of Galilean invariance and relevant to all non-relativistic physics modeling. And the uniform motion equivariance is enforced by conjugating the model with shifted input distribution. Basically, for each sliding local block in each convolutional layer, they shift the mean of the input tensor to zero and shift the output back after convolution and activation function per sample. 
In other words, if the input is $\\bm{\\mathcal{P}}_{b \\times d_{in} \\times s\\times s} $ and the output is $\\bm{\\mathcal{Q}}_{b \\times d_{out}} = \\sigma(\\bm{\\mathcal{P}} \\cdot K)$ for one sliding local block, where $b$ is batch size, $d$ is number of channels, $s$ is the kernel size, and $K$ is the kernel, then\n\\begin{align}\n\\bm{\\mu}_i = \\mathrm{Mean}_{jkl} \\left(\\bm{\\mathcal{P}}_{ijkl}\\right); \\quad\n\\bm{\\mathcal{P}}_{ijkl}\\mapsto \\bm{\\mathcal{P}}_{ijkl} - \\bm{\\mu}_i; \\quad\n\\bm{\\mathcal{Q}}_{ij} \\mapsto \\bm{\\mathcal{Q}}_{ij} + \\bm{\\mu}_i.\n\\end{align}\nThis will allow the convolution layer to be equivariant with respect to uniform motion. If the input is a vector field, this operation is applied to each element.\nThe DL models used are \\textit{ResNet} and \\textit{U-Net}, and their equivariant counterparts. Spatiotemporal prediction is done autoregressively. Standard RMSE and an RMSE computed on the energy spectra are used to measure performance. The models are tested on Rayleigh-B\\'enard convection (RBC) and reanalysis ocean current velocity data. For RBC, the test sets have random transformations from the relevant symmetry groups applied to each sample. This mimics real-world data in which each sample has an unknown reference frame. For ocean data, tests are also performed on different time ranges and different domains from the training set, representing distributional shifts. Figure \\ref{vel_u} shows the equivariant models perform significantly better than their non-equivariant counterparts on both simulated RBC data and real-world reanalysis ocean currents. They also show equivariant models also achieve much lower energy spectrum errors and enjoy favorable sample complexity compared with data augmentation. 
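The mean-shift conjugation above can be checked numerically: wrapping any conv-plus-nonlinearity block this way makes it exactly equivariant to adding a constant to its input. A minimal single-block sketch with random weights (shapes are illustrative; for vector fields the same wrapper is applied per element, as described above):

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((9, 4))      # weights for one flattened 3x3 block, 4 channels

def block(P):                        # plain conv block for one sliding window
    return np.tanh(P @ K)

def shifted_block(P):                # conjugate the block with the input mean
    mu = P.mean()
    return block(P - mu) + mu

P = rng.standard_normal(9)           # one flattened 3x3 input patch
c = 2.5                              # constant ("uniform motion") shift
```

Since the mean of P + c is mu + c, the shifted input P - mu is unchanged, so the wrapped block satisfies shifted_block(P + c) = shifted_block(P) + c exactly, while the plain block does not.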
\n\\begin{figure*}[htb!]\n \\begin{minipage}[b]{0.241\\textwidth}\n \\includegraphics[width=\\textwidth]{Figures/um-resnet.png}\n \\end{minipage} \\hfill\n \\begin{minipage}[b]{0.241\\textwidth}\n \\includegraphics[width=\\textwidth]{Figures/mag-resnet.png}\n \\end{minipage} \\hfill\n \\begin{minipage}[b]{0.241\\textwidth}\n\\includegraphics[width=\\textwidth]{Figures/rot-resnet.png}\n \\end{minipage} \\hfill\n \\begin{minipage}[b]{0.241\\textwidth}\n\\includegraphics[width=\\textwidth]{Figures/scale-resnet.png}\n \\end{minipage} \n \\begin{minipage}[b]{\\textwidth}\n\\includegraphics[width=\\textwidth]{Figures/pred_resnet2.png}\n \\end{minipage} \n\\caption{Top: The ground truth and the predicted velocity norm fields $\\|\\bm{w}\\|_2$ of RBC at time step $1$, $5$ and $10$ by the \\texttt{ResNet} and four \\texttt{Equ-ResNets} on four test samples applied with random uniform motion, magnitude, rotation, and scaling transformations respectively. The first column is the target, the second is \\texttt{ResNet} predictions, and the third is predictions by \\texttt{Equ-ResNets}. 
Bottom: The ground truth and predicted velocity norm fields of ocean currents by \\texttt{ResNet} (\\texttt{Unet}) and four \\texttt{Equ-ResNets} (\\texttt{Equ-Unets}) on the test set.}\n\\label{vel_u}\n\\end{figure*}", "id": "6c5ebb8b-9b87-4db3-8b27-c13e7edee45b", "level": "subsection", "origin_cites_number": 2, "parent_id": "419bd794-c010-4085-a609-4fa92d46e63a", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Invariant and Equivariant DL Models" ], [ "subsection", "Case Study: Equivariant Deep Dynamics Models" ] ], "subsections": [], "title": "Case Study: Equivariant Deep Dynamics Models" }, { "cite_extract_rate": 1, "cites": [ 6402, 9007, 6365 ], "content": "\\label{fluid_sym}\nHowever, real-world dynamical data rarely conform to strict mathematical symmetries, due to noise and missing values or symmetry-breaking features in the underlying dynamical system. further explored \\textit{approximately} equivariant convolutional networks that are biased towards preserving symmetry but are not strictly constrained to do so. The key idea is relaxing the weight-sharing schemes by introducing additional trainable weights that can vary across group elements to break the strict equivariance constraints. The proposed approximate equivariant networks can always learn the correct amount of symmetry from the data, and thus consistently perform well on real-world turbulence data with no symmetry, approximate symmetry, and perfect symmetry. When we incorporate prior knowledge into neural nets, we usually need to choose between strictly enforcing it in the design of the model or softly constraining it via regularizers. But this approach allows the model to decide whether and how to use prior knowledge (symmetry) based on the specific task. Moreover, built a meta-learning framework, DyAd, to forecast systems with different parameters. 
Specifically, it utilized an encoder capable of extracting the time-invariant and translation-invariant parts of a dynamical system and a prediction network to adapt and forecast given the inferred system. Time invariance is achieved by using 3D convolutions and a time-shift-invariant loss. On challenging turbulent flow prediction and real-world ocean temperature and currents forecasting tasks, this is the \textit{first} framework that can generalize and predict dynamics across a wide range of heterogeneous domains. \nApart from incorporating symmetries into regular convolution, there has been a surge of interest in designing equivariant continuous convolution models. This is due to the fact that continuous convolution allows for convolutional operations to be performed on a continuous input domain. For instance, proposed \textit{SchNet}, which is a continuous convolution framework that generalizes the CNN approach to continuous convolutions to model particles at arbitrary positions. Continuous convolution kernels are generated by dense neural networks that operate on the interatomic distances, which ensures rotational and translational invariance of the energy. In a traffic forecasting application, proposed a novel model, \textit{Equivariant Continuous COnvolution (ECCO)}, which uses rotationally equivariant continuous convolutions to embed the symmetries of the system for improved trajectory prediction. The rotational equivariance is achieved by a weight-sharing scheme within kernels in polar coordinates. 
\\textit{ECCO} achieves superior performance to baselines on two real-world trajectory prediction datasets, Argoverse and TrajNet++.", "id": "8a812312-e6e1-48da-a720-2251713df6bf", "level": "subsection", "origin_cites_number": 3, "parent_id": "419bd794-c010-4085-a609-4fa92d46e63a", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Invariant and Equivariant DL Models" ], [ "subsection", "Equivariant Convolutional Neural Networks" ] ], "subsections": [], "title": "Equivariant Convolutional Neural Networks" }, { "cite_extract_rate": 1, "cites": [ 9019, 6404, 6405, 6364, 9018, 6403 ], "content": "In addition to the equivariant convolution, numerous equivariant graph neural nets have also been developed, particularly for modeling atomic systems and molecular dynamics. This is due to the pervasive presence of symmetry in molecular physics, as evidenced by roto-translation equivariance in molecular conformations and coordinates. \n designed E(n)-equivariant graph neural network for predicting molecular properties. It updates edge features with the Euclidean distance between nodes and updates the coordinates of particles with the weighted sum of relative differences of all neighbors. \n proposed to use a score-based generative model for generating molecular conformation. The authors used equivariant graph neural networks to estimate the score function, which is the gradient fields of the log density of atomic coordinates because it is roto-translation equivariant. \n designed \\textit{Cormorant}, a rotationally covariant neural network architecture for learning the behavior and properties of complex many-body physical systems. \\textit{Cormorant} achieves promising results in learning molecular potential energy surfaces on the MD-17 dataset and learning the geometric, energetic, electronic, and thermodynamic properties of molecules on the GDB-9 dataset. 
\n proposed a model for autoregressive generation of 3D molecular structures with reinforcement learning (RL). The method uses equivariant state representations for autoregressive generation, built largely from \\textit{Cormorant}, and integrates such representations within an existing actor-critic RL generation framework. \n further designed a series of SE(3)-equivariant operations and building blocks for DL architectures operating on geometric point cloud data, which was used to construct \\textit{PhiSNet}, a novel architecture capable of accurately predicting wavefunctions and electronic densities.\nAdditionally, permutation invariance also exists in molecular dynamics. For instance, quantum mechanical energies are invariant if we exchange the labels of identical atoms. However, stated that enforcing equivariance to all permutations in graph neural nets can be very restrictive when modeling molecules. Thus, they proposed to decompose a graph into a collection of local graphs that are isomorphic to a pre-selected template graph so that the sub-graphs can always be canonicalized to template graphs before convolution is applied. By doing this, the graph neural nets can not only be much more expressive but also locally equivariant. proposed to build equivariant neural networks based on the idea that nonlinear $O(d)$-equivariant functions can be universally expressed in terms of a lightweight collection of scalars, which are simpler to build. They demonstrated the efficiency and scalability of their proposed approach to two classical physics problems, calculating the total mechanical energy of particles and the total electromagnetic force, that obeys all translation, rotation, reflection, and permutation symmetries. 
Moreover, since the design of equivariant layers is a difficult task, proposed a Lie point symmetry data augmentation method for training graph neural PDE solvers, and this method enables these neural solvers to preserve multiple symmetries.", "id": "663d967b-66a2-4a43-ac10-b22f1a779eac", "level": "subsection", "origin_cites_number": 6, "parent_id": "419bd794-c010-4085-a609-4fa92d46e63a", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Invariant and Equivariant DL Models" ], [ "subsection", "Equivariant Graph Neural Networks" ] ], "subsections": [], "title": "Equivariant Graph Neural Networks" }, { "cite_extract_rate": 1, "cites": [ 6406, 1695, 9020, 6407 ], "content": "There has also been an emerging area, symmetry discovery, the key idea of which is to find the weight-sharing patterns in neural networks that have been trained on data with symmetries. For instance, factorized the weight matrix in a fully connected layer into a symmetry (i.e. weight-sharing) matrix and a vector of filter parameters. The two parts are learned separately in the inner and outer loop training with the Model-Agnostic Meta-Learning algorithm (MAML) , which is an optimization-based meta-learning method so that the symmetry matrix can learn the weight-sharing pattern from the data. Furthermore, proposed \\textit{Lie Algebra Convolutional Network (L-conv)}, a novel architecture that can learn the Lie algebra basis and automatically discover symmetries from data. It can be considered as an infinitesimal version of group convolution. further leveraged \\textit{L-conv} to construct the \\textit{LieGAN}, to automatically discover equivariances from a dataset using a paradigm akin to generative adversarial training. It represents symmetry as an interpretable Lie algebra basis and can discover various symmetries. 
Specifically, a generator learns a group of transformations applied to the data, which preserves the original distribution and fools the discriminator.", "id": "e7918c1d-d2ad-48cb-b008-0dda91dcf3ca", "level": "subsection", "origin_cites_number": 4, "parent_id": "419bd794-c010-4085-a609-4fa92d46e63a", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Invariant and Equivariant DL Models" ], [ "subsection", "Symmetry Discovery" ] ], "subsections": [], "title": "Symmetry Discovery" }, { "cite_extract_rate": 1, "cites": [ 6408 ], "content": "By designing a model that is intrinsically equivariant or invariant to input transformations, we can ensure that our model generalizes automatically across these transformations, making it resilient to distributional shifts. In contrast, data augmentation techniques cannot provide equivariance guarantees when the models are applied to new datasets. Empirically and theoretically, it has been shown that equivariant and invariant neural nets offer superior data and parameter efficiency compared to data augmentation techniques. Furthermore, incorporating symmetries enhances the physical consistency of neural nets because of Noether's Law.\nHowever, incorporating too many symmetries may overly constrain the representation power of neural nets and slow down both training and inference. In addition, many real-world dynamics do not have perfect symmetries. A perfectly equivariant model that respects a given symmetry may have trouble learning partial or approximated symmetries in real-world data. Thus, an ideal model for real-world dynamics should be approximately equivariant and automatically learn the correct amount of symmetry in the data, such as the paper we discussed in Section \\ref{fluid_sym}. There are a few other works that explore the same idea. 
For instance, proposed the soft equivariant layer by directly summing up a flexible layer with one that has strong equivariance inductive biases to model the soft equivariance.", "id": "f8b8d401-1a90-42ac-8c1a-1179e1c75423", "level": "subsection", "origin_cites_number": 1, "parent_id": "419bd794-c010-4085-a609-4fa92d46e63a", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Invariant and Equivariant DL Models" ], [ "subsection", "Pros and Cons" ] ], "subsections": [], "title": "Pros and Cons" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{dis}\nIn this paper, we systematically review the recent progress in physics-guided DL for learning dynamical systems. We discussed multiple ways to inject first-principle and physical constraints into DL including (1) physics-informed loss regularizers (2) physics-guided design, (3) hybrid models, and (4) symmetry. By integrating physical principles, the DL models can achieve better physical consistency, higher accuracy, increased data efficiency, improved generalization, and greater interpretability. Despite the great promise and exciting progress in the field, physics-guided AI is still at its infant stage. 
Below we review the emerging challenges and opportunities of learning physical dynamics with deep learning for future studies.", "id": "73734564-db92-46a8-a87d-86945d31f17a", "level": "section", "origin_cites_number": 0, "parent_id": "11aa860a-a47f-4799-8633-7727b5a3edae", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Discussion" ] ], "subsections": [ "eccbcee2-77e5-4ee0-8e97-633aae82422e", "f6b2db2d-5b50-4c4e-a370-6839890c5769", "d70875a1-2a25-4ccf-bfc5-a684deea09d8", "64225fa4-1f2e-4df2-88a3-15964074f7ce", "c4466201-6b1b-45e0-bdc1-834c097f0655", "9aac7376-bdb8-4341-aa04-e950a4b124d3", "9f2a6c61-b7ee-4490-acf2-6c3b7c051f64" ], "title": "Discussion" }, { "cite_extract_rate": 1, "cites": [ 6365, 6394 ], "content": "Generalization is a central problem in machine learning. One current limitation of deep learning models for learning complex dynamics is their inability to understand the system solely from data and handle distributional shifts that naturally occur. Most deep learning models for dynamics modeling are trained to model a specific system and still struggle with generalization. For example, in turbulence modeling, deep learning models trained with fixed boundaries and initial conditions often fail to generalize to fluid flows with different characteristics. To overcome this limitation, one approach is to build physics-guided deep learning models, where the physics part plays a dominant role while the neural networks focus on learning the unknown process . Another promising direction is meta-learning. For instance, proposed a model-based meta-learning method called \\textit{DyAd} that can generalize across heterogeneous domains of fluid dynamics. However, this model can only generalize well on the dynamics with interpolated physical parameters and cannot extrapolate beyond the range of the physical parameters in the training set. 
Another idea is to transform data into a canonical distribution that neural networks can learn from and then restore the original data after predictions are made . Since neural networks struggle with multiple distributions, this approach aims to find a single distribution that can represent the dynamics effectively. A trustworthy and reliable model for learning physical dynamics should be able to extrapolate to systems with various parameters, external forces, or boundary conditions while maintaining high accuracy. Therefore, further research into generalizable physics-guided deep learning is crucial.", "id": "eccbcee2-77e5-4ee0-8e97-633aae82422e", "level": "subsection", "origin_cites_number": 2, "parent_id": "73734564-db92-46a8-a87d-86945d31f17a", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Improving Generalization" ] ], "subsections": [], "title": "Improving Generalization" }, { "cite_extract_rate": 0, "cites": [], "content": "Long-term forecasting of physical dynamics is a challenging task as it is prone to error accumulation and instability to perturbations in the input, which significantly affect the accuracy of neural networks over a long forecasting horizon. To address these issues, several training techniques have been proposed in recent years. One such technique involves adding noise to the input, which makes the models less sensitive to perturbations . It also suggests when making predictions in an autoregressive manner, the neural nets should be trained to make multiple steps of predictions in each autoregressive call instead of just one step. Additionally, proposed a time-based Lyapunov regularizer to the loss function of deep forecasters to avoid training error propagation and improve the trained long-term prediction. 
Moreover, utilized online normalization, that is, normalizing the current training sample using a running mean and standard deviation, which also increases the time horizon that the model can predict. These models are trained on a large amount of simulation data. However, for real-world problems, obtaining real-world data such as experimental data of jet flow can be expensive, which presents a significant challenge for improving the robustness of predictions on limited training data. In such cases, developing robust prediction models that can generalize well on limited training data is of great importance.", "id": "f6b2db2d-5b50-4c4e-a370-6839890c5769", "level": "subsection", "origin_cites_number": 2, "parent_id": "73734564-db92-46a8-a87d-86945d31f17a", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Improving Robustness of Long-term Forecasting" ] ], "subsections": [], "title": "Improving Robustness of Long-term Forecasting" }, { "cite_extract_rate": 1, "cites": [ 8153, 5272 ], "content": "Spatiotemporal phenomena, from global ocean currents to the spread of infectious diseases, are examples of dynamics in non-Euclidean spaces, which means they cannot be easily represented using traditional Euclidean geometry. To address this issue, the field of geometric deep learning has emerged. Geometric deep learning aims to generalize neural network models to non-Euclidean domains such as graphs and manifolds. However, most of the existing work in this field has been limited to static graph data. Thus, learning dynamics in non-Euclidean spaces is a promising direction, and geometric concepts, such as different notions of distance, curvature, and parallel transport, must be taken into account when designing models. 
For example, when modeling the ocean dynamics on the earth, which is a sphere, we need to encode the gauge equivariance in the design of neural nets since there is no canonical coordinate system on a sphere.", "id": "d70875a1-2a25-4ccf-bfc5-a684deea09d8", "level": "subsection", "origin_cites_number": 2, "parent_id": "73734564-db92-46a8-a87d-86945d31f17a", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Learning Dynamics in Non-Euclidean Spaces" ] ], "subsections": [], "title": "Learning Dynamics in Non-Euclidean Spaces" }, { "cite_extract_rate": 0.5, "cites": [ 6409 ], "content": "The majority of literature on learning dynamics with DL focuses on the methodological and practical aspects. Research into the theoretical analysis of generalization\nis lacking. Current statistical learning theory is based on the typical assumption that training\nand test data are identically and independently distributed (i.i.d.) samples from some unknown\ndistribution . However, this assumption does not hold for most dynamical systems, where observations at different times and locations may be highly correlated. provided the discrepancy-based generalization guarantees\nfor time series forecasting. On the basis of this, took the first step to derive generalization bounds for equivariant models and data augmentation in the dynamics forecasting setting. The derived upper bounds are expressed in terms of measures of distributional shifts and group transformations, as well as the Rademacher complexity. But these bounds are sometimes not very informative since many of the inequalities used can be loose. However, to better understand the performance of DL on learning dynamics, we need to derive\ngeneralization bounds expressed in terms of the characteristics of the dynamics, such as the order and Lyapunov exponents. 
Deriving lower generalization bounds is also necessary since they reveal the best performance scenarios. Theoretical studies can also inspire research into model\ndesign and algorithm development for learning dynamics.", "id": "64225fa4-1f2e-4df2-88a3-15964074f7ce", "level": "subsection", "origin_cites_number": 2, "parent_id": "73734564-db92-46a8-a87d-86945d31f17a", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Theoretical Analysis" ] ], "subsections": [], "title": "Theoretical Analysis" }, { "cite_extract_rate": 0.25, "cites": [ 6410 ], "content": "A fundamental pursuit in science is to identify causal relationships. In the context of dynamical systems, we may ask which variables directly or indirectly influence other variables through intermediates. While traditional approaches to the discovery of causation involve conducting controlled real experiments , data-driven approaches have been proposed to identify causal relations from observational data in the past few decades . However, most data-driven approaches do not directly address the challenge of learning causality with big data. Many questions remain open, such as using causality to improve deep learning models, disentangling complex and multiple treatments, and designing the environment to control the given dynamics. Additionally, we are also interested in understanding the system’s response under interventions. 
For example, when using deep learning to model climate dynamics, we need to make accurate predictions under different climate policies, such as carbon pricing policies and the development of clean energy, to enable better decisions by governments for controlling climate change.", "id": "c4466201-6b1b-45e0-bdc1-834c097f0655", "level": "subsection", "origin_cites_number": 4, "parent_id": "73734564-db92-46a8-a87d-86945d31f17a", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Causal Inference in Dynamical Systems" ] ], "subsections": [], "title": "Causal Inference in Dynamical Systems" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 9012, 9020, 1695, 9011 ], "content": "Another promising direction is to seek physics laws with the help of DL. The search for fundamental laws of practical problems is the main theme of science. Once the governing equations of dynamical systems are found, they allow for accurate mathematical modeling, increased interpretability, and robust forecasting. However, current methods are limited to selecting from a large dictionary of possible mathematical terms . The extremely large search space, limited high-quality experimental data, and overfitting issues have been critical concerns. Another line of work is to discover symmetry from the observed data instead of the entire dynamics with the help of DL . But these works can only work well on synthetic data and discover known symmetries. 
Still, research on data-driven methods based on DL for discovering physics laws is quite preliminary.", "id": "9aac7376-bdb8-4341-aa04-e950a4b124d3", "level": "subsection", "origin_cites_number": 6, "parent_id": "73734564-db92-46a8-a87d-86945d31f17a", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Search for Physical Laws" ] ], "subsections": [], "title": "Search for Physical Laws" }, { "cite_extract_rate": 0, "cites": [], "content": "Given the rapid growth in high-performance computation, we need to improve automation and accelerate streamlining of highly compute-intensive workflows for science. We should focus on how to efficiently train, test, and deploy complex physics-guided DL models on large datasets and high-performance computing systems, such that these models can be quickly utilized to solve real-world scientific problems. To really revolutionize the field, these DL tools need to become more scalable and transferable and converge into a complete pipeline for the simulation and analysis of dynamical systems. A simple example is that we can integrate machine learning tools into the existing numerical simulation platforms so that we do not need to move data between systems every time and we can easily use either or both types of methods for analyzing data. \nIn conclusion, given the availability of abundant data and rapid growth in computation, we envision that the integration of physics and DL will play an increasingly essential role in advancing scientific discovery and addressing important dynamics modeling problems.\n\\section*{Acknowledgement}\nThis work was supported in part by U.S. Department Of Energy, Office of Science under grant DESC0022331, U. S. 
Army Research Office under Grant W911NF-20-1-0334, Facebook Data Science Award, Google Faculty Award, and NSF Grant \\#2037745.\n\\clearpage\n\\bibliography{refs}\n\\end{document}", "id": "9f2a6c61-b7ee-4490-acf2-6c3b7c051f64", "level": "subsection", "origin_cites_number": 0, "parent_id": "73734564-db92-46a8-a87d-86945d31f17a", "prefix_titles": [ [ "title", "Physics-Guided Deep Learning for Dynamical Systems: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Efficient Computation" ] ], "subsections": [], "title": "Efficient Computation" } ]
48
[ 6360, 6356, 6364, 6355, 6359, 6363, 6358, 6362, 6361, 9004, 6357, 9003, 6368, 6370, 9006, 6369, 6367, 9005, 6365, 4714, 6366, 6372, 6371, 9009, 9007, 7970, 9008, 9010, 6374, 1356, 5066, 6373, 9013, 9012, 6375, 6377, 6376, 9011, 9014, 6378, 6379, 6380, 6381, 6995, 6382, 6385, 6383, 6384, 9015, 6386, 6388, 6387, 6390, 6391, 2924, 6389, 6394, 6392, 6395, 6393, 9016, 23, 8154, 8153, 6397, 6399, 9017, 6401, 6398, 8152, 6400, 6396, 306, 6402, 9019, 6404, 6405, 9018, 6403, 6406, 1695, 9020, 6407, 6408, 5272, 6409, 6410 ]
0.85337
[ "Ninareh Mehrabi", "Fred Morstatter", "Nripsuta Saxena", "Kristina Lerman", "Aram Galstyan" ]
A Survey on Bias and Fairness in Machine Learning
2019
2019-08-23T01:22:04Z
cs.LG
With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in designing and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently some work has been developed in traditional machine learning and deep learning that address such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey we investigated different real-world applications that have shown biases in various ways, and we listed different sources of biases that can affect AI applications. We then created a taxonomy for fairness definitions that machine learning researchers have defined in order to avoid the existing bias in AI systems. In addition to that, we examined different domains and subdomains in AI showing what researchers have observed with regard to unfair outcomes in the state-of-the-art methods and ways they have tried to address them. There are still many future directions and solutions that can be taken to mitigate the problem of bias in AI systems. We are hoping that this survey will motivate researchers to tackle these issues in the near future by observing existing work in their respective fields.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "11d83682-7593-45c6-99d3-daa7a02d3b91", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ] ], "subsections": [ "3fe27568-e32e-45b1-9483-5471c981f453", "773207d9-2d70-4cc6-a5c5-44a367db114c", "7840a15c-3340-46d7-bad0-5834d0761fbb", "4ce9b956-164a-4fdf-a103-b11a8fd6f2e6", "c4976b58-d2d7-4703-a545-b120b4c383c4", "539e2025-f6c3-47b4-adeb-1aa1c312db9d", "f1a962e1-7041-4346-83da-2bc61957ad17", "e86008ad-785b-47fc-b2af-fb96edaafac7", "2e9146e4-82ec-4884-9fe3-eec975d6ffba" ], "title": "root" }, { "cite_extract_rate": 0.14285714285714202, "cites": [ 3890 ], "content": "Machine learning algorithms have penetrated every aspect of our lives. \nAlgorithms make movie recommendations, suggest products to buy, and who to date. They are increasingly used in high-stakes scenarios such as loans and hiring decisions .\nThere are clear benefits to algorithmic decision-making; unlike people, machines do not become tired or bored~, and can take into account orders of magnitude more factors than people can. However, like people, algorithms are vulnerable to biases that render their decisions ``unfair''~.\nIn the context of decision-making, fairness is the \\textit{absence of any prejudice or favoritism toward an individual or group based on their inherent or acquired characteristics}. Thus, an unfair algorithm is one \nwhose decisions are skewed \ntoward a particular group of people. A canonical example comes \nfrom a tool \nused by courts in the United States to make pretrial detention and release decisions. The software, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), measures the \nrisk of a person to recommit another crime. \nJudges use COMPAS to decide whether to release an offender, or to keep him or her in prison. 
\nAn investigation into the software found \na bias against \nAfrican-Americans:\\footnote{https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing}\nCOMPAS is more likely to \nhave higher false positive rates for African-American offenders than Caucasian offenders \nin falsely predicting them to be at a higher risk of recommitting a crime or recidivism.\nSimilar findings have been made in other areas, such as an AI system that judges beauty pageant winners but was biased against darker-skinned contestants,\\footnote{https://www.theguardian.com/technology/2016/sep/08/artificial-intelligence-beauty-contest-doesnt-like-black-people} or facial recognition software in digital cameras that overpredicts Asians as blinking.\\footnote{http://content.time.com/time/business/article/0,8599,1954643,00.html} These biased predictions stem from the hidden or neglected biases in data or algorithms.\nIn this survey we identify two potential sources of unfairness in machine learning outcomes---those that arise from biases in the data and those that arise from the algorithms. We review research investigating how biases in data skew what is learned by machine learning algorithms, and \nnuances in the way the algorithms themselves work to prevent them from making fair decisions---even when the data is unbiased. Furthermore, we observe that biased algorithmic outcomes might impact user experience, thus generating a feedback loop between data, algorithms and users that can perpetuate and even amplify existing sources of bias. \nWe begin the review with several highly visible real-world cases of where unfair machine learning algorithms have led to suboptimal and discriminatory outcomes in Section~\\ref{sec:examples}. In Section~\\ref{sec:loop}, we describe the different types and sources of biases that occur within the data-algorithms-users loop mentioned above. 
Next, in Section~\\ref{sec:alg-fairness}, we present the different ways that the concept of fairness has been operationalized and studied in the literature. We discuss the ways in which these two concepts are coupled. Last, we will focus on different families of machine learning approaches, how fairness manifests differently in each one, and the current state-of-the-art for tackling them in Section~\\ref{sec:methods}, followed by potential areas of future work in each of the domains in Section~\\ref{sec:future}.", "id": "3fe27568-e32e-45b1-9483-5471c981f453", "level": "section", "origin_cites_number": 7, "parent_id": "11d83682-7593-45c6-99d3-daa7a02d3b91", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:examples}\nWith the popularity of AI and machine learning over the past decades, and their prolific spread in different applications, safety and fairness constraints have become a significant issue for researchers and engineers. Machine learning is used in courts to assess the probability that a defendant recommits a crime. It is used in different medical fields, in childhood welfare systems , and autonomous vehicles. All of these applications have a direct effect in our lives and can harm our society if not designed and engineered correctly, that is with considerations to fairness. has a list of the applications and the ways these AI systems affect our daily lives with their inherent biases, such as the existence of bias in AI chatbots, employment matching, flight routing, and automated legal aid for immigration algorithms, and search and advertising placement algorithms. discusses examples of how bias in the real world can creep into AI and robotic systems, such as bias in face recognition applications, voice recognition, and search engines. 
Therefore, it is important for researchers and engineers to be concerned about the downstream applications and their potential harmful effects when modeling an algorithm or a system.", "id": "773207d9-2d70-4cc6-a5c5-44a367db114c", "level": "section", "origin_cites_number": 3, "parent_id": "11d83682-7593-45c6-99d3-daa7a02d3b91", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Real-World Examples of Algorithmic Unfairness" ] ], "subsections": [ "a40d4e0e-669a-4d7c-a837-1a1f2f5b3614", "fa071114-208a-4f50-a618-d3a786adeb52" ], "title": "Real-World Examples of Algorithmic Unfairness" }, { "cite_extract_rate": 0.2, "cites": [ 3891 ], "content": "COMPAS is an exemplar of a discriminatory system. In addition to this, discriminatory behavior was also evident in an algorithm that would deliver advertisements promoting jobs in Science, Technology, Engineering, and Math (STEM) fields . This advertisement was designed to deliver advertisements in a gender-neutral way. However, fewer women than men saw the advertisement due to gender imbalance, which would result in younger women being considered a valuable subgroup and more expensive to show advertisements to. This optimization algorithm would deliver ads in a discriminatory way although its original and pure intention was to be gender-neutral. Bias in facial recognition systems and recommender systems has also been widely studied and evaluated, and in many cases shown to be discriminative towards certain populations and subgroups. In order to be able to address the bias issue in these applications, it is important for us to know where these biases are coming from and what we can do to prevent them. \nWe have enumerated the bias in COMPAS, which is a widely used commercial risk assessment software. In addition to its bias, it also contains performance issues when compared to humans. 
When compared to non-expert human judgment in a study, it was discovered to be no better than a normal human . It is also interesting to note that although COMPAS uses 137 features, only 7 of those were presented to the people in the study. further argues that COMPAS is no better than a simple logistic regression model when making decisions. We should think responsibly, and recognize that the application of these tools, and their subsequent decisions affect people's lives; therefore, considering fairness constraints is a crucial task while designing and engineering these types of sensitive tools. In another similar study, while investigating sources of group unfairness (unfairness across different groups is defined later), the authors in compared SAVRY, a tool used in risk assessment frameworks that includes human intervention in its process, with automatic machine learning methods in order to see which one is more accurate and more fair. Conducting these types of studies should be done more frequently, but prior to releasing the tools in order to avoid doing harm.", "id": "a40d4e0e-669a-4d7c-a837-1a1f2f5b3614", "level": "subsection", "origin_cites_number": 5, "parent_id": "773207d9-2d70-4cc6-a5c5-44a367db114c", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Real-World Examples of Algorithmic Unfairness" ], [ "subsection", "Systems that Demonstrate Discrimination" ] ], "subsections": [], "title": "Systems that Demonstrate Discrimination" }, { "cite_extract_rate": 0.5, "cites": [ 3892 ], "content": "An interesting direction that researchers have taken is introducing tools that can assess the amount of fairness in a tool or system. For example, Aequitas~ is a toolkit that lets users test models with regard to several bias and fairness metrics for different population subgroups. 
Aequitas produces reports from the obtained data that help data scientists, machine learning researchers, and policymakers make informed decisions and avoid harm and damage toward certain populations. AI Fairness 360 (AIF360) is another toolkit, developed by IBM, to help move fairness research algorithms into industrial settings, to create a benchmark on which fairness algorithms can be evaluated, and to provide an environment for fairness researchers to share their ideas . These types of toolkits can help learners, researchers, and practitioners in industry move towards developing fair machine learning applications and away from discriminatory behavior.", "id": "fa071114-208a-4f50-a618-d3a786adeb52", "level": "subsection", "origin_cites_number": 2, "parent_id": "773207d9-2d70-4cc6-a5c5-44a367db114c", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Real-World Examples of Algorithmic Unfairness" ], [ "subsection", "Assessment Tools" ] ], "subsections": [], "title": "Assessment Tools" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:loop}\nMost AI systems and algorithms are data driven and require data upon which to be trained. Thus, data is tightly coupled to the functionality of these algorithms and systems. In cases where the underlying training data contains biases, the algorithms trained on it will learn these biases and reflect them in their predictions. As a result, existing biases in data can affect the algorithms using the data, producing biased outcomes. Algorithms can even amplify and perpetuate existing biases in the data. In addition, algorithms themselves can display biased behavior due to certain design choices, even if the data itself is not biased. The outcomes of these biased algorithms can then be fed into real-world systems and affect users' decisions, which will result in more biased data for training future algorithms. 
For example, imagine a web search engine \nthat puts specific results at the top of its list. Users tend to interact most with the top results and pay little attention to those further down the list~. The interactions of users with items will then be collected by the web search engine, and the data will be \nused to make future decisions on how information should be presented based on popularity and user interest. As a result, results at the top will become more and more popular, not because of the nature of the result but due to the biased interaction and placement of results by these algorithms~. \nThe loop capturing this feedback between biases in data, algorithms, and user interaction is illustrated in Figure~\\ref{fig:cycle}. We use this loop to categorize definitions of bias in the section below.\n\\begin{figure}[!bt]\n\\includegraphics[width=\\textwidth,trim=0cm 0cm 0cm 0cm,clip=true]{new_bias_cycle.pdf}\n\\caption{Examples of bias definitions placed in the data, algorithm, and user interaction feedback loop.}\n\\label{fig:cycle}\n\\end{figure}", "id": "7840a15c-3340-46d7-bad0-5834d0761fbb", "level": "section", "origin_cites_number": 1, "parent_id": "11d83682-7593-45c6-99d3-daa7a02d3b91", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Bias in Data, Algorithms, and User Experiences" ] ], "subsections": [ "8bcfb982-01c4-4d6d-85a4-68ce6643f0f8", "eeba8824-a85d-4e70-9634-424eff121afd", "710588a1-1fac-410b-a0f6-79066f2a545b" ], "title": "Bias in Data, Algorithms, and User Experiences" }, { "cite_extract_rate": 0, "cites": [], "content": "Bias can exist in many shapes and forms, some of which can lead to unfairness \nin different downstream learning tasks.\nIn , authors talk about sources of bias in machine learning with their categorizations and descriptions in order to motivate future solutions to each of the sources of bias introduced in the paper. 
In , the authors compile a comprehensive list of different types of biases, with their corresponding definitions, that arise at different stages, from data origins to data collection and processing. Here we will reiterate the most important sources of bias introduced in these two papers and also add some work from other existing research papers. Additionally, we will introduce a different categorization of these definitions in this paper according to the data, algorithm, and user interaction loop.", "id": "8bcfb982-01c4-4d6d-85a4-68ce6643f0f8", "level": "subsection", "origin_cites_number": 2, "parent_id": "7840a15c-3340-46d7-bad0-5834d0761fbb", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Bias in Data, Algorithms, and User Experiences" ], [ "subsection", "Types of Bias" ] ], "subsections": [ "3696cf56-dad7-4a61-bb83-8b4a2e09f2b1", "4145147d-ef17-4382-9a43-81bec8e4de7d", "b8ade98c-508e-408d-98f6-8850f5561a48" ], "title": "Types of Bias" }, { "cite_extract_rate": 0.277777777777777, "cites": [ 3894, 8699, 7758, 3893, 6984 ], "content": "In this section we talk about biases in data, which, when used by ML training algorithms, might result in biased algorithmic outcomes.\n\begin{enumerate}\n \item \textbf{Measurement Bias.} \textit{Measurement, or reporting, bias arises from how we choose, utilize, and measure particular features} . An example of this type of bias was observed in the recidivism risk prediction tool COMPAS, where \n prior arrests and friend/family arrests were used as proxy variables to measure level of ``riskiness'' or ``crime''---which on their own can be viewed as mismeasured proxies. This is partly due to the fact that minority communities are controlled and policed more frequently, so they have higher arrest rates. 
However, one should not conclude that people from minority groups are more dangerous simply because they have higher arrest rates, as there is a difference in how these groups are assessed and controlled .\n \item \textbf{Omitted Variable Bias.} \textit{Omitted variable bias\textsuperscript{\ref{fl}} occurs when one or more important variables are left out of the model} . An example for this case would be when someone designs a model to predict, with relatively high accuracy, the annual percentage \n rate at which customers will stop subscribing to a service, but soon observes that the majority of users are canceling their subscription without receiving any warning from the designed model. Now imagine that the reason for canceling the subscriptions is the appearance of a strong new competitor in the market which offers the same solution, but for half the price. The appearance of the competitor was something that the model was not ready for; therefore, it is considered to be an omitted variable. \n \item \textbf{Representation Bias.} \textit{Representation bias arises from how we sample from a population during the data collection process} . Non-representative samples lack the diversity of the population, with missing subgroups and other anomalies. Lack of geographical diversity in datasets like ImageNet (as shown in Figures \ref{imagenet1} and \ref{imagenet2}) \n results in demonstrable bias towards Western cultures.\n \item \textbf{Aggregation Bias.} \textit{Aggregation bias (or ecological fallacy) arises when false conclusions are drawn \n about individuals from observing the entire population.} \n An example of this type of bias can be seen in clinical aid tools. Consider diabetes patients who have apparent morbidity differences across ethnicities and genders. Specifically, HbA1c levels, which are widely used to diagnose and monitor diabetes, differ in complex ways across genders and ethnicities. 
Therefore, \n a model that ignores individual differences will likely not be well-suited for all ethnic and gender groups in the population~.\n This is true even when they are represented equally in the training data. Any general assumptions about \n subgroups within the population can result in aggregation bias.\n \\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.48\\columnwidth]{example_equal_size}\n \\includegraphics[width=0.48\\columnwidth]{example_unbalanced}\n \\caption{Illustration of biases in data.\n The red line shows the regression (MLR) for the entire population, while dashed green lines are regressions for each subgroup, and the solid green line is the unbiased regression. (a) When all subgroups are of equal size, then MLR shows a positive relationship between the outcome and the independent variable. (b) Regression shows almost no relationship in less balanced data. The relationships between variables within each subgroup, however, remain the same. (Credit: Nazanin Alipourfard) \\label{fig:explain}}\n \\end{figure}\n\\begin{figure}[h]\n\\includegraphics[width=0.75\\textwidth, trim=2cm 5cm 0cm 6cm,clip=true]{imagenet2.pdf}\n\\caption{Fraction of each country, represented by their two-letter ISO codes, in Open Images and ImageNet image datasets. In both datasets, US and Great Britain represent the top locations, from \\textsuperscript{\\textcopyright} Shreya Shankar.}\n\\label{imagenet1}\n\\end{figure}\n\\begin{figure}[h]\n\\includegraphics[width=0.75\\textwidth,trim=0cm 2cm 0cm 3.3cm,clip=true]{imagenet.pdf}\n\\caption{Geographic distribution of countries in the Open Images data set. 
In their sample, almost one third of the data was US-based, and 60\% of the data was from the six most represented countries across North America and Europe, from \textsuperscript{\textcopyright} Shreya Shankar.}
\label{imagenet2}
\end{figure}
 \begin{enumerate}
 \item \textbf{Simpson's Paradox.} Simpson's paradox is a type of aggregation bias that arises in the analysis of heterogeneous data~. 
 The paradox arises when an association observed in aggregated data disappears or reverses when the same data is disaggregated into its underlying subgroups (Fig.~\ref{fig:explain}(a)). One of the better-known examples of this type of paradox arose during the gender bias lawsuit in university admissions against UC Berkeley~.
 After analyzing graduate school admissions data, it appeared that there was bias against women, a smaller fraction of whom were being admitted to graduate programs compared to their male counterparts. However, when the admissions data was separated and analyzed by department, women applicants were admitted at equal, and in some cases even slightly higher, rates than men. The paradox arose because women tended to apply to departments with lower admission rates for both genders. Simpson's paradox has been observed in a variety of domains, including biology~, psychology~, astronomy~, and computational social science~. 
 \item{\textbf{Modifiable Areal Unit Problem}} is a statistical bias in geospatial analysis, which arises when modeling data at different levels of spatial aggregation~. This bias results in different trends learned when data is aggregated at different spatial scales.
\end{enumerate}
 \item \textbf{Sampling Bias.} \textit{Sampling bias is similar to representation bias, and it arises due to {non-random} sampling of subgroups.} As a consequence of sampling bias, the trends estimated for one population may not generalize to data collected from a new population. For intuition, consider the example in {Figure~\ref{fig:explain}}. 
The left plot represents data collected during a study from three subgroups, which were uniformly sampled (Fig.~\\ref{fig:explain}(a)). Suppose the next time the study was conducted, one of the subgroups was sampled more frequently than the rest (Fig.~\\ref{fig:explain}(b)). The positive trend found by the regression model in the first study almost completely disappears (solid red line in plot on the right), although the subgroup trends (dashed green lines) are unaffected.\n \\item \\textbf{Longitudinal Data Fallacy.}\n Researchers analyzing temporal data must use \\emph{longitudinal analysis} to track cohorts over time to learn their behavior. Instead, temporal data is often modeled using cross-sectional analysis, which combines diverse cohorts at a single time point. The heterogeneous cohorts can bias cross-sectional analysis, leading to different conclusions than longitudinal analysis. \n As an example, analysis of bulk Reddit data~ revealed that comment length decreased over time on average. However, bulk data represented a cross-sectional snapshot of the population, which in reality contained different cohorts who joined Reddit in different years. When data was disaggregated by cohorts, the comment length within each cohort was found to increase over time.\n \\item \\textbf{Linking Bias.} \\textit{Linking bias arises when network attributes obtained from user connections, activities, or interactions differ and misrepresent the true behavior of the users} . In authors show how social networks can be biased toward low-degree nodes when only considering the links in the network and not considering the content and behavior of users in the network. also shows that user interactions are significantly different from social link patterns that are based on features, such as method of interaction or time. 
The differences and biases in the networks can be a result of many factors, such as network sampling, as shown in , which can change the network measures and cause different types of problems.\n \end{enumerate}", "id": "3696cf56-dad7-4a61-bb83-8b4a2e09f2b1", "level": "subsubsection", "origin_cites_number": 18, "parent_id": "8bcfb982-01c4-4d6d-85a4-68ce6643f0f8", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Bias in Data, Algorithms, and User Experiences" ], [ "subsection", "Types of Bias" ], [ "subsubsection", "Data to Algorithm" ] ], "subsections": [], "title": "Data to Algorithm" }, { "cite_extract_rate": 0.125, "cites": [ 3895 ], "content": "Algorithms modulate user behavior. Any biases in algorithms might introduce biases in user behavior. In this section we talk about biases that result from algorithmic outcomes and, as a consequence, affect user behavior.\n\begin{enumerate}\n \item \textbf{Algorithmic Bias.} \textit{Algorithmic bias is when the bias is not present in the input data and is added purely by the algorithm} . Algorithmic design choices, such as the use of certain optimization functions, regularizations, choices in applying regression models on the data as a whole or on subgroups, and the general use of statistically biased estimators in algorithms , can all contribute to biased algorithmic decisions and bias the outcomes of the algorithms.\n\item \textbf{User Interaction Bias.} \textit{User interaction bias is \n a type of bias that is not only observed on the Web but is also triggered by two sources---the user interface and the user him/herself, who imposes his/her self-selected biased behavior and interaction }. 
This type of bias can be influenced by other types and subtypes, such as presentation and ranking biases.\n \begin{enumerate}\n \item \textbf{Presentation Bias.} \textit{Presentation bias is a result of how information is presented .} For example, on the Web \n users can only click on content that they see, so the seen content gets clicks, while everything else gets no clicks, and it could be the case that the user does not see all the information on the Web . \n \item \textbf{Ranking Bias.} \textit{The idea that top-ranked results are the most relevant and important results in them attracting more clicks than other results}. This bias affects search engines~ and crowdsourcing applications~.\n \end{enumerate}\n \item \textbf{Popularity Bias}. \textit{Items that are more popular tend to be exposed more. However, popularity metrics are subject to manipulation---for example, by fake reviews or social bots} . For instance, this type of bias can be seen in search engines or recommendation systems where popular items are presented to the public more often. However, this greater exposure may not be a result of good quality; instead, it may be due to other biased factors.\n \item \textbf{Emergent Bias.} \textit{Emergent bias occurs as a result of use and interaction with real users. This bias arises as a result of change in population, cultural values, or societal knowledge usually some time after the completion of design} . This type of bias is more likely to be observed in user interfaces, since interfaces tend to reflect the capacities, characteristics, and habits of prospective users by design . This type of bias can itself be divided into more subtypes, as discussed in detail in .\n \item \textbf{Evaluation Bias.} \textit{Evaluation bias happens during model evaluation} . This includes the use of inappropriate and disproportionate benchmarks for evaluation of applications, such as the Adience and IJB-A benchmarks. 
These benchmarks are used in the evaluation of facial recognition systems that were biased with respect to skin color and gender , and can serve as examples for this type of bias .\n\end{enumerate}", "id": "4145147d-ef17-4382-9a43-81bec8e4de7d", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "8bcfb982-01c4-4d6d-85a4-68ce6643f0f8", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Bias in Data, Algorithms, and User Experiences" ], [ "subsection", "Types of Bias" ], [ "subsubsection", "Algorithm to User" ] ], "subsections": [], "title": "Algorithm to User" }, { "cite_extract_rate": 0.2, "cites": [ 3896, 3897 ], "content": "Many data sources used for training ML models are user-generated. Any inherent biases in users might be reflected in the data they generate. Furthermore, when user behavior is affected/modulated by an algorithm, any biases present in those algorithms might introduce bias in the data generation process. Here we list several important types of such biases.\n\begin{enumerate}\n \item \textbf{Historical Bias.}\textit{ Historical bias is the already existing bias and socio-technical issues in the world and can seep into the data generation process even given perfect sampling and feature selection } . An example of this type of bias can be found in a 2018 image search result where searching for women CEOs ultimately resulted in fewer female CEO images due to the fact that only 5\% of Fortune 500 CEOs were women---which would cause the search results to be biased towards male CEOs . 
These search results were of course reflecting the reality, but whether or not the search algorithms should reflect this reality is an issue worth considering.\n \item \textbf{Population Bias.} \textit{Population bias arises when statistics, demographics, representativeness, and user characteristics differ between the user population \n of the platform and the original target population} .\n Population bias creates non-representative data. An example of this type of bias can arise from different user demographics on different social platforms, such as women being more likely to use Pinterest, Facebook, and Instagram, while men are more active in online forums like Reddit or Twitter. More such examples and statistics related to social media use among young adults according to gender, race, ethnicity, and parental educational background can be found in .\n \item \textbf{Self-Selection Bias.} \textit{Self-selection bias\footnote{\label{fl}https://data36.com/statistical-bias-types-explained/} is a subtype of the selection or sampling bias in which subjects of the research select themselves.} An example of this type of bias can be observed in an opinion poll to measure enthusiasm for a political candidate, where the most enthusiastic supporters are more likely to complete the poll. \n \item \textbf{Social Bias.} \textit{Social bias happens when others' actions affect our judgment.} . An example of this type of bias can be a case where we want to rate or review an item with a low score, but when influenced by other high ratings, we change our score, thinking that perhaps we are being too harsh .\n \item \textbf{Behavioral Bias.} \textit{Behavioral bias arises from different user behavior across platforms, contexts, or different datasets} . 
An example of this type of bias can be observed in , where authors show how differences in emoji representations among platforms can result in different reactions and behavior from people and sometimes even lead to communication errors.\n \item \textbf{Temporal Bias.} \textit{Temporal bias arises from differences in populations and behaviors over time} . An example can be observed on Twitter, where people talking about a particular topic start using a hashtag at some point to capture attention, then continue the discussion about the event without using the hashtag .\n \item \textbf{Content Production Bias.} \textit{Content Production bias arises from structural, lexical, semantic, and syntactic differences in the contents generated by users} . An example of this type of bias can be seen in , where the differences in use of language across different gender and age groups are discussed. The differences in use of language can also be seen across and within countries and populations.\n \end{enumerate}\nExisting work tries to categorize these bias definitions into groups, such as definitions falling solely under data or user interaction. However, due to the existence of the feedback loop phenomenon , \nthese definitions are intertwined, and we need a categorization which closely models this situation. This feedback loop exists not only between the data and the algorithm, but also between the algorithms and user interaction . Inspired by these papers, we modeled the categorization of bias definitions, as shown in Figure \ref{fig:cycle}, and grouped these definitions on the arrows of the loop where we thought they were most effective. 
We emphasize again that these definitions are intertwined, and one should consider how they affect each other in this cycle and address them accordingly.", "id": "b8ade98c-508e-408d-98f6-8850f5561a48", "level": "subsubsection", "origin_cites_number": 10, "parent_id": "8bcfb982-01c4-4d6d-85a4-68ce6643f0f8", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Bias in Data, Algorithms, and User Experiences" ], [ "subsection", "Types of Bias" ], [ "subsubsection", "User to Data" ] ], "subsections": [], "title": "User to Data" }, { "cite_extract_rate": 0, "cites": [], "content": "There are multiple ways that discriminatory bias can seep into data. \nFor instance, using unbalanced data can create biases against underrepresented groups. analyzes some examples of the biases that can exist in the data and algorithms and offers some recommendations and suggestions for mitigating these issues.", "id": "eeba8824-a85d-4e70-9634-424eff121afd", "level": "subsection", "origin_cites_number": 1, "parent_id": "7840a15c-3340-46d7-bad0-5834d0761fbb", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Bias in Data, Algorithms, and User Experiences" ], [ "subsection", "Data Bias Examples" ] ], "subsections": [ "dcbc2a62-b715-430b-ab11-8564b21ba7cd", "02157dd4-d25c-43cb-8cc4-20c8dc1de45a" ], "title": "Data Bias Examples" }, { "cite_extract_rate": 0.25, "cites": [ 895 ], "content": "In , the authors show that datasets like IJB-A and Adience are imbalanced and contain mainly light-skinned subjects---79.6\% in IJB-A and 86.2\% in Adience. This can bias the analysis against dark-skinned groups, who are underrepresented in the data. In another instance, the way we use and analyze our data can create bias when we do not consider different subgroups in the data. 
In , the authors also show that considering only male-female groups is not enough; there is also a need to use race to further subdivide the gender groups into light-skinned females, light-skinned males, dark-skinned males, and dark-skinned females. Only in this case can we clearly observe the bias against dark-skinned females, as previously the results for dark-skinned males would compensate for those of dark-skinned females and hide the underlying bias against this subgroup. Popular machine-learning datasets that serve as a base for most of the developed algorithms and tools can also be biased---which can be harmful to the downstream applications that are based on these datasets. For instance, ImageNet and Open Images are two widely used datasets in machine learning. In , researchers showed that these datasets suffer from representation bias and advocate for the need to incorporate geographic diversity and inclusion while creating such datasets. In addition, authors in~ write about the existing representational biases in different knowledge bases that are widely used in Natural Language Processing (NLP) applications for different commonsense reasoning tasks.", "id": "dcbc2a62-b715-430b-ab11-8564b21ba7cd", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "eeba8824-a85d-4e70-9634-424eff121afd", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Bias in Data, Algorithms, and User Experiences" ], [ "subsection", "Data Bias Examples" ], [ "subsubsection", "Examples of Bias in Machine Learning Data" ] ], "subsections": [], "title": "Examples of Bias in Machine Learning Data" }, { "cite_extract_rate": 0, "cites": [], "content": "These data biases can be more dangerous in other sensitive applications. 
For example, in medical domains there are many instances in which the data studied and used are skewed toward certain populations---which can have dangerous consequences for the underrepresented communities.\n showed how exclusion of African-Americans resulted in their misclassification in clinical studies, so they became advocates for sequencing the genomes of diverse populations in the data to prevent harm to underrepresented populations. Authors in studied the 23andMe genotype dataset and found that, out of 2,399 individuals who have openly shared their genotypes in public repositories, 2,098 (87\%) are European, while only 58 (2\%) are Asian and 50 (2\%) African. Other such studies were conducted, as in , which states that UK Biobank, a large and widely used genetic dataset, may not represent the sampling population. Researchers found evidence of a ``healthy volunteer'' selection bias. has other examples of studies on existing biases in the data used in the medical domain. also looks at machine-learning algorithms and data utilized in medical fields, and writes about how artificial intelligence in health care has not impacted all patients equally.", "id": "02157dd4-d25c-43cb-8cc4-20c8dc1de45a", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "eeba8824-a85d-4e70-9634-424eff121afd", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Bias in Data, Algorithms, and User Experiences" ], [ "subsection", "Data Bias Examples" ], [ "subsubsection", "Examples of Data Bias in Medical Applications" ] ], "subsections": [], "title": "Examples of Data Bias in Medical Applications" }, { "cite_extract_rate": 0, "cites": [], "content": "Similar to bias, discrimination is also a source of unfairness. 
Discrimination can be considered a source of unfairness due to human prejudice and stereotyping based on sensitive attributes, which may happen intentionally or unintentionally, while bias can be considered a source of unfairness due to data collection, sampling, and measurement. Although bias can also be seen as a source of unfairness due to human prejudice and stereotyping, in the algorithmic fairness literature it is more intuitive to categorize the two as above, in line with the existing research in these areas. In this survey, we mainly focus on concepts that are relevant to algorithmic fairness issues. contain broader information on discrimination theory, involving multidisciplinary concepts from legal theory, economics, and the social sciences, which interested readers can refer to.", "id": "710588a1-1fac-410b-a0f6-79066f2a545b", "level": "subsection", "origin_cites_number": 3, "parent_id": "7840a15c-3340-46d7-bad0-5834d0761fbb", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Bias in Data, Algorithms, and User Experiences" ], [ "subsection", "Discrimination" ] ], "subsections": [ "c4519ec0-517e-4923-bddd-da5b2d77f693", "715cbfa5-3902-4d23-b1d2-f5afdcb0de91", "a6a5e34a-d113-49b4-a2ee-0d9b8ab8892d" ], "title": "Discrimination" }, { "cite_extract_rate": 1, "cites": [ 8700 ], "content": "Differences in treatment and outcomes amongst different groups can be justified and explained via some attributes in some cases. In situations where these differences are justified and explained, they are not considered illegal discrimination and hence are called explainable . For instance, authors in state that in the UCI Adult dataset , a widely used dataset in the fairness domain, males on average have a higher annual income than females. However, this is because on average females work fewer hours than males per week. 
Work hours per week is thus an attribute that can be used to explain the lower income and needs to be considered. If we make decisions without considering working hours, such that males and females end up averaging the same income, we will introduce reverse discrimination, since we would cause male employees to get lower salaries than females for the same hours worked. Therefore, explainable discrimination is acceptable and legal, as it can be explained through other attributes like working hours. In , authors present a methodology to quantify the explainable and illegal discrimination in data. They argue that methods that do not take the explainable part of the discrimination into account may result in undesirable outcomes, as they may introduce \textit{reverse} discrimination, which is equally harmful and undesirable. \nThey explain how to quantify and measure discrimination in data or in a classifier's decisions in a way that directly considers illegal and explainable discrimination.", "id": "c4519ec0-517e-4923-bddd-da5b2d77f693", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "710588a1-1fac-410b-a0f6-79066f2a545b", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Bias in Data, Algorithms, and User Experiences" ], [ "subsection", "Discrimination" ], [ "subsubsection", "Explainable Discrimination" ] ], "subsections": [], "title": "Explainable Discrimination" }, { "cite_extract_rate": 0.75, "cites": [ 3898, 8700, 8701 ], "content": "In contrast to explainable discrimination, there is unexplainable discrimination, in which the discrimination against a group is unjustified and therefore considered illegal. Authors in also present local techniques for removing only the illegal or unexplainable discrimination, allowing only for explainable differences in decisions. These are preprocessing techniques that change the training data such that it contains no unexplainable discrimination. 
We expect classifiers trained on this preprocessed data to not capture illegal or unexplainable discrimination. Unexplainable discrimination consists of \textit{direct} and \textit{indirect} discrimination.\n\begin{enumerate}\n\item{\textbf{Direct Discrimination.}}\nDirect discrimination happens when protected attributes of individuals explicitly result in non-favorable outcomes toward them . Typically, there are some traits identified by law against which it is illegal to discriminate, and it is usually these traits that are considered ``protected'' or ``sensitive'' attributes in the computer science literature. A list of some of these protected attributes is provided in Table \ref{electionexample} as specified in the Fair Housing and Equal Credit Opportunity Acts (FHA and ECOA) .\n\item{\textbf{Indirect Discrimination.}}\nIn indirect discrimination, individuals appear to be treated based on seemingly neutral and non-protected attributes; however, protected groups or individuals are still treated unjustly as a result of implicit effects from their protected attributes (e.g., the residential zip code of a person can be used in decision-making processes such as loan applications. However, this can still lead to racial discrimination, such as redlining, because, despite the fact that zip code appears to be a non-sensitive attribute, it may correlate with race due to the demographics of residential areas.) 
.\n\\end{enumerate}", "id": "715cbfa5-3902-4d23-b1d2-f5afdcb0de91", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "710588a1-1fac-410b-a0f6-79066f2a545b", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Bias in Data, Algorithms, and User Experiences" ], [ "subsection", "Discrimination" ], [ "subsubsection", "Unexplainable Discrimination" ] ], "subsections": [], "title": "Unexplainable Discrimination" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{enumerate}\n\\item{\\textbf{Systemic Discrimination.}}\nSystemic discrimination refers to policies, customs, or behaviors that are a part of the culture or structure of an organization that may perpetuate discrimination against certain subgroups of the population . \n found that employers overwhelmingly preferred competent candidates that were culturally similar to them, and shared similar experiences and hobbies. If the decision-makers happen to belong overwhelmingly to certain subgroups, this may result in discrimination against competent candidates that do not belong to these subgroups. \n\\item{\\textbf{Statistical Discrimination.}}\nStatistical discrimination is a phenomenon where decision-makers use average group statistics to judge an individual belonging to that group. It usually occurs when the decision-makers (e.g., employers, or law enforcement officers) use an individual's obvious, recognizable characteristics as a proxy for either hidden or more-difficult-to-determine characteristics, that may actually be relevant to the outcome . 
\n\\end{enumerate}", "id": "a6a5e34a-d113-49b4-a2ee-0d9b8ab8892d", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "710588a1-1fac-410b-a0f6-79066f2a545b", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Bias in Data, Algorithms, and User Experiences" ], [ "subsection", "Discrimination" ], [ "subsubsection", "Sources of Discrimination" ] ], "subsections": [], "title": "Sources of Discrimination" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:alg-fairness}\nFighting against bias and discrimination has a long history in philosophy and psychology, and, more recently, in machine learning. However, in order to fight discrimination and achieve fairness, one should first define fairness. Philosophy and psychology tried to define the concept of fairness long before computer science did. The fact that no universal definition of fairness exists shows the difficulty of solving this problem . Different preferences and outlooks in different cultures favor different ways of looking at fairness, which makes it hard to arrive at a single definition acceptable to everyone. Indeed, even in computer science, where most of the work proposing new fairness constraints for algorithms has come from the West, and where many papers use the same datasets and problems to show how their constraints perform, there is still no clear agreement on which constraints are most appropriate for those problems. Broadly, fairness is the absence of any prejudice or favoritism towards an individual or a group based on their intrinsic or acquired traits in the context of decision-making . Even though fairness is an incredibly desirable quality in society, it can be surprisingly difficult to achieve in practice. 
With these challenges in mind, many fairness definitions have been proposed to address the different algorithmic bias and discrimination issues discussed in the previous section.", "id": "4ce9b956-164a-4fdf-a103-b11a8fd6f2e6", "level": "section", "origin_cites_number": 2, "parent_id": "11d83682-7593-45c6-99d3-daa7a02d3b91", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Algorithmic Fairness" ] ], "subsections": [ "243cac1a-256a-4808-9104-c1f13fedb69b" ], "title": "Algorithmic Fairness" }, { "cite_extract_rate": 0.7058823529411761, "cites": [ 3906, 3902, 3903, 3905, 8704, 3901, 3899, 3907, 3904, 8703, 3900, 8702 ], "content": "In , authors studied fairness definitions in political philosophy and tried to tie them to machine learning. Authors in studied the 50-year history of fairness definitions in the areas of education and machine learning. In , authors listed and explained some of the definitions used for fairness in algorithmic classification problems. In , authors studied the general public's perception of some of these fairness definitions in the computer science literature. Here we will reiterate some of the most widely used definitions, along with their explanations, inspired from .\\\\\n\\\\\n\\textbf{Definition 1.} \\textit{(Equalized Odds). The definition of equalized odds, provided by , states that ``A predictor \\^{Y} satisfies equalized odds with respect to protected attribute A and outcome Y, if \\^{Y} and A are independent conditional on Y. P(\\^{Y}=1|A=0,Y=y) = P(\\^{Y}=1|A=1,Y=y), y$\\in$\\{0,1\\}''}. This means that the probability of a person in the positive class being correctly assigned a positive outcome and the probability of a person in a negative class being incorrectly assigned a positive outcome should both be the same for the protected and unprotected group members . 
In other words, the equalized odds definition states that the protected and unprotected groups should have equal rates for true positives and false positives.\n\\\\\n\\\\\n\\textbf{Definition 2.} \\textit{(Equal Opportunity). ``A binary predictor \\^{Y} satisfies equal opportunity with respect to A and Y if P(\\^{Y}=1|A=0,Y=1) = P(\\^{Y}=1|A=1,Y=1)''}\n. This means that the probability of a person in a positive class being assigned to a positive outcome should be equal for both protected and unprotected (female and male) group members . In other words, the equal opportunity definition states that the protected and unprotected groups should have equal true positive rates.\n\\\\\n\\\\\n\\textbf{Definition 3.} \\textit{(Demographic Parity). Also known as statistical parity. ``A predictor \\^{Y} satisfies demographic parity if P(\\^{Y} |A = 0) = P(\\^{Y}|A = 1)''} . The likelihood of a positive outcome should be the same regardless of whether the person is in the protected (e.g., female) group.\n\\\\\n\\\\\n\\textbf{Definition 4.} \\textit{(Fairness Through Awareness). ``An algorithm is fair if it gives similar predictions to similar individuals''} . In other words, any two individuals who are similar with respect to a similarity (inverse distance) metric defined for a particular task should receive a similar outcome.\\\\\n\\\\\n\\textbf{Definition 5.} \\textit{(Fairness Through Unawareness). ``An algorithm is fair as long as any protected attributes A are not explicitly used in the decision-making process''} .\\\\\n\\\\\n\\textbf{Definition 6.} \\textit{(Treatment Equality). ``Treatment equality is achieved when the ratio of false negatives and false positives is the same for both protected group categories''} . \\\\\n\\\\\n\\textbf{Definition 7.} \\textit{(Test Fairness). ``A score S = S(x) is test fair (well-calibrated) if it reflects the same likelihood of recidivism irrespective of the individual's group membership, R. 
That is, if for all values of s,\nP(Y =1|S=s,R=b)=P(Y =1|S=s,R=w)''} . In other words, the test fairness definition states that for any predicted probability score S, people in both protected and unprotected groups must have equal probability of correctly belonging to the positive class .\\\\\n\\\\\n\\textbf{Definition 8.} \\textit{(Counterfactual Fairness). ``Predictor \\^{Y} is counterfactually fair if under any context X =x and A=a,\nP($\\hat{Y}_{A\\xleftarrow{}a }$(U)=y|X =x,A=a)=P($\\hat{Y}_{A\\xleftarrow{}a'}$(U)=y|X =x,A=a), (for all y and for any value $a'$ attainable by A''} . The counterfactual fairness definition is based on the ``intuition that a decision is fair towards an individual if it is the same in both the actual world and a counterfactual world where the individual belonged to a different demographic group.''\n\\\\\n\\\\\n\\textbf{Definition 9.} \\textit{(Fairness in Relational Domains). ``A notion of fairness that is able to capture the relational structure in a domain---not only by taking attributes of individuals into consideration but by taking into account the social, organizational, and other connections between individuals''} .\\\\\n\\\\\n\\textbf{Definition 10.} \\textit{(Conditional Statistical Parity). For a set of legitimate factors L, predictor \\^{Y} satisfies conditional statistical parity if P(\\^{Y} |L=1,A = 0) = P(\\^{Y}|L=1,A = 1)} . Conditional statistical parity states that people in both protected and unprotected (female and male) groups should have equal probability of being assigned to a positive outcome given a set of legitimate factors L .\n\\\\\n\\\\\nFairness definitions fall under different types as follows:\n\\begin{enumerate}\n\\item{\\textbf{Individual Fairness.} Give similar predictions to similar individuals }.\n\\item{\\textbf{Group Fairness.} Treat different groups equally }.\n\\item{\\textbf{Subgroup Fairness}. 
Subgroup fairness intends to obtain the best properties of the group and individual notions of fairness. It is different than these notions but uses them in order to obtain better outcomes. It picks a group fairness constraint like equalizing false positive and asks whether this constraint holds over a large collection of subgroups }.\n\\end{enumerate}\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{ |p{5cm}||p{2cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|}\n \\hline\nName & Reference &Group & Subgroup &Individual\\\\\n \\hline\nDemographic parity&&\\checkmark && \\\\[0.5pt]\n \\hline\n Conditional statistical parity&&\\checkmark&&\\\\[0.5pt]\n \\hline\n Equalized odds &&\\checkmark&&\\\\[0.5pt]\n \\hline\n Equal opportunity&&\\checkmark&&\\\\[0.5pt]\n \\hline\n Treatment equality&&\\checkmark&&\\\\[0.5pt]\n \\hline\n Test fairness&&\\checkmark&&\\\\[0.5pt]\n \\hline\n Subgroup fairness & && \\checkmark&\\\\[0.5pt]\n \\hline\n Fairness through unawareness&&&&\\checkmark\\\\[0.5pt]\n \\hline\n Fairness through awareness&&&&\\checkmark\\\\[0.5pt]\n \\hline\n Counterfactual fairness&&&&\\checkmark\\\\[0.5pt]\n \\hline\n\\end{tabular}\n\\caption{Categorizing different fairness notions into group, subgroup, and individual types.}\n\\label{fairtypes}\n\\end{table}\nIt is important to note that according to , it is impossible to satisfy some of the fairness constraints at once except in highly constrained special cases. \nIn , the authors show the inherent incompatibility of two conditions: calibration and balancing the positive and negative classes. These cannot be satisfied simultaneously with each other unless under certain constraints; therefore, it is important to take the context and application in which fairness definitions need to be used into consideration and use them accordingly .\nAnother important aspect to consider is time and temporal analysis of the impacts that these definitions may have on individuals or groups. 
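Several of the group notions above reduce to comparing simple conditional rates on a model's predictions, so they can be checked empirically. A minimal sketch for binary labels and a binary protected attribute (all variable names are illustrative, not from any particular library):

```python
# Sketch: empirical checks for demographic parity and equalized odds on
# binary predictions; equal opportunity is the y=1 case of equalized odds.

def rate(preds, mask):
    """P(Y_hat = 1) over the entries selected by mask."""
    sel = [p for p, m in zip(preds, mask) if m]
    return sum(sel) / len(sel)

def demographic_parity_gap(y_hat, a):
    # |P(Y_hat=1 | A=0) - P(Y_hat=1 | A=1)|
    return abs(rate(y_hat, [x == 0 for x in a]) -
               rate(y_hat, [x == 1 for x in a]))

def equalized_odds_gaps(y_hat, y, a):
    # One gap per true class y in {0, 1}; gaps[1] alone is the
    # equal-opportunity (true positive rate) gap.
    gaps = {}
    for cls in (0, 1):
        r0 = rate(y_hat, [ai == 0 and yi == cls for ai, yi in zip(a, y)])
        r1 = rate(y_hat, [ai == 1 and yi == cls for ai, yi in zip(a, y)])
        gaps[cls] = abs(r0 - r1)
    return gaps

y_hat = [1, 0, 1, 1, 0, 1, 0, 0]
y     = [1, 0, 1, 0, 0, 1, 1, 0]
a     = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(y_hat, a))   # 0.5
print(equalized_odds_gaps(y_hat, y, a))   # {0: 0.5, 1: 0.5}
```

A gap of exactly zero corresponds to the definition being satisfied on the sample; in practice one checks whether the gap falls below a chosen tolerance.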
In authors show that current fairness definitions are not always helpful and do not promote improvement for sensitive groups---and can actually be harmful when analyzed over time in some cases. They also show that measurement errors can also act in favor of these fairness definitions; therefore, they show how temporal modeling and measurement are important in evaluation of fairness criteria and introduce a new range of trade-offs and challenges toward this direction. It is also important to pay attention to the sources of bias and their types when trying to solve fairness-related questions.\\\\", "id": "243cac1a-256a-4808-9104-c1f13fedb69b", "level": "subsection", "origin_cites_number": 17, "parent_id": "4ce9b956-164a-4fdf-a103-b11a8fd6f2e6", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Algorithmic Fairness" ], [ "subsection", "Definitions of Fairness " ] ], "subsections": [], "title": "Definitions of Fairness " }, { "cite_extract_rate": 0.7272727272727271, "cites": [ 3908, 3909, 3912, 3913, 3112, 8701, 3911, 3910 ], "content": "\\label{sec:methods}\nThere have been numerous attempts to address bias in artificial intelligence in order to achieve fairness; these stem from domains of AI. In this section we will enumerate different domains of AI, and the work that has been produced by each community to combat bias and unfairness in their methods. Table~\\ref{domain} provides an overview of the different areas that we focus upon in this survey. \nWhile this section is largely domain-specific, it can be useful to take a cross-domain view. Generally, methods that target biases in the algorithms fall under three categories: \n\\begin{enumerate}\n \\item \\textbf{Pre-processing.} Pre-processing techniques try to transform the data so that the underlying discrimination is removed . If the algorithm is allowed to modify the training data, then pre-processing can be used . 
\n \\item \\textbf{In-processing.} In-processing techniques try to modify and change state-of-the-art learning algorithms in order to remove discrimination during the model training process . If it is allowed to change the learning procedure for a machine learning model, then in-processing can be used during the training of a model--- either by incorporating changes into the objective function or imposing a constraint .\n \\item \\textbf{Post-processing.} Post-processing is performed after training by accessing a holdout set which was not involved during the training of the model . If the algorithm can only treat the learned model as a black box without any ability to modify the training data or learning algorithm, then only post-processing can be used in which the labels assigned by the black-box model initially get reassigned based on a function during the post-processing phase .\n\\end{enumerate}\nExamples of some existing work and their categorization into these types is shown in Table \\ref{prepost}.\nThese methods are not just limited to general machine learning techniques, but because of \nAI's popularity, they have expanded to different domains such as natural language processing and deep learning. From learning fair representations to learning fair word embeddings , debiasing methods have been proposed in different AI applications and domains.\nMost of these methods try to avoid unethical interference of sensitive or protected attributes into the decision-making process, while others target exclusion bias by trying to include users from sensitive groups. In addition, some works try to satisfy one or more of the fairness notions in their methods, such as disparate learning processes (DLPs) which try to satisfy notions of treatment disparity and impact disparity by allowing the protected attributes during the training phase but avoiding them during prediction time . A list of protected or sensitive attributes is provided in Table \\ref{electionexample}. 
They point out what attributes should not affect the outcome of the decision in housing loan or credit card decision-making according to the law. Some of the existing work tries to treat sensitive attributes as noise to disregard their effect on decision-making, while some causal methods use causal graphs, and disregard some paths in the causal graph that result in sensitive attributes affecting the outcome of the decision. Different bias-mitigating methods and techniques are discussed below for different domains---each targeting a different problem in different areas of machine learning in detail. This can expand the horizon of the reader on where and how bias can affect the system and try to help researchers carefully look at various new problems concerning potential places where discrimination and bias can affect the outcome of a system.", "id": "c4976b58-d2d7-4703-a545-b120b4c383c4", "level": "section", "origin_cites_number": 11, "parent_id": "11d83682-7593-45c6-99d3-daa7a02d3b91", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Methods for Fair Machine Learning" ] ], "subsections": [ "b2cc4f1f-7f19-48f9-916c-9b2a12874118", "e20fb19c-7e93-467e-a8cc-5fb0c9fcbfb6", "d135ff68-4939-4964-ba92-cd7df4c8db16", "e1f0d57a-fd3f-44b7-bd5f-6d4762dd55e0", "6f88affc-4ee1-4764-9a6b-e5a86fce1a2d" ], "title": "Methods for Fair Machine Learning" }, { "cite_extract_rate": 0.7187500000000001, "cites": [ 3918, 3894, 3898, 3916, 3923, 3914, 3912, 3921, 3935, 3915, 3924, 3931, 3926, 3938, 3910, 3913, 3112, 3934, 3922, 3936, 1441, 3928, 3933, 7759, 3899, 3930, 8701, 3911, 3917, 3920, 3925, 3908, 8700, 3919, 8706, 3110, 3927, 8705, 3932, 3929, 3937, 7760, 6985, 7761 ], "content": "Every dataset is the result of several design decisions made by the data curator. Those decisions have consequences for the fairness of the resulting dataset, which in turn affects the resulting algorithms. 
In order to mitigate the effects of bias in data, some general methods have been proposed that advocate having good practices while using data, such as having datasheets that would act like a supporting document for the data reporting the dataset creation method, its characteristics, motivations, and its skews . proposes a similar approach for the NLP applications. A similar suggestion has been proposed for models in . Authors in also propose having labels, just like nutrition labels on food, in order to better categorize each data for each task. In addition to these general techniques, some work has targeted more specific types of biases. For example, has proposed methods to test for cases of Simpson's paradox in the data, and proposed methods to discover Simpson's paradoxes in data automatically. Causal models and graphs were also used in some work to detect direct discrimination in the data along with its prevention technique that modifies the data such that the predictions would be absent from direct discrimination . also worked on preventing discrimination in data mining, targeting direct, indirect, and simultaneous effects. 
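The Simpson's-paradox tests mentioned above can be illustrated with a toy check of whether every subgroup trend reverses the aggregate trend. This is far simpler than the cited detection methods; all field and function names are made up for the sketch:

```python
def success_rate(records):
    return sum(r["success"] for r in records) / len(records)

def simpsons_paradox(records, group_key):
    """True if the A-vs.-B comparison reverses direction in every subgroup
    relative to the aggregate comparison (assumes every cell is non-empty)."""
    a = [r for r in records if r["treatment"] == "A"]
    b = [r for r in records if r["treatment"] == "B"]
    overall = success_rate(a) - success_rate(b)
    reversed_in_all = []
    for g in {r[group_key] for r in records}:
        diff = (success_rate([r for r in a if r[group_key] == g]) -
                success_rate([r for r in b if r[group_key] == g]))
        reversed_in_all.append(diff * overall < 0)
    return all(reversed_in_all)

def make(treatment, group, succ, fail):
    return ([{"treatment": treatment, "g": group, "success": 1}] * succ +
            [{"treatment": treatment, "g": group, "success": 0}] * fail)

# A wins inside both subgroups but loses overall -> paradox flagged.
records = (make("A", "easy", 2, 0) + make("A", "hard", 3, 7) +
           make("B", "easy", 9, 1) + make("B", "hard", 0, 2))
print(simpsons_paradox(records, "g"))  # True
```

The paradox here arises purely from the uneven allocation of A and B across the "easy" and "hard" subgroups, which is exactly the kind of confounding the cited methods search for automatically.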
Other pre-processing approaches, such as messaging , preferential sampling , disparate impact removal , also aim to remove biases from the data.\n\\begin{table}[h]\n\\centering\n{\\begin{tabular}{ |p{4cm}||p{7cm}|}\n \\hline\n \\textbf{Area}&\\textbf{Reference(s)}\\\\\n \\hline\n Classification& \\\\\n \\hline\n Regression& \\\\\n \\hline\n PCA&\\\\\n \\hline\n Community detection&\\\\\n \\hline\n Clustering& \\\\\n \\hline\n Graph embedding&\\\\\n \\hline\n Causal inference& \\\\\n \\hline\n Variational auto encoders& \\\\\n \\hline\n Adversarial learning& \\\\\n \\hline\n Word embedding& \\\\\n \\hline\n Coreference resolution& \\\\\n \\hline\nLanguage model&\\\\\n \\hline\n Sentence embedding&\\\\\n \\hline\n Machine translation&\\\\\n \\hline\n Semantic role labeling&\\\\\n \\hline\n Named Entity Recognition& \\\\\n \\hline\n\\end{tabular}}\n\\caption{List of papers targeting and talking about bias and fairness in different areas.}\n\\label{domain}\n\\end{table}\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{ |p{6cm}||p{2cm}|p{2cm}|}\n \\hline\n Attribute& FHA & ECOA\\\\\n \\hline\nRace&\\checkmark&\\checkmark\\\\[0.5pt]\n \\hline\n Color&\\checkmark&\\checkmark\\\\[0.5pt]\n \\hline\n National origin&\\checkmark&\\checkmark\\\\[0.5pt]\n \\hline\n Religion&\\checkmark&\\checkmark\\\\[0.5pt]\n \\hline\n Sex&\\checkmark&\\checkmark\\\\[0.5pt]\n \\hline\n Familial status&\\checkmark&\\\\[0.5pt]\n \\hline\n Disability&\\checkmark&\\\\[0.5pt]\n \\hline\n Exercised rights under CCPA&&\\checkmark\\\\[0.5pt]\n \\hline\n Marital status&&\\checkmark\\\\[0.5pt]\n \\hline\n Recipient of public assistance&&\\checkmark\\\\[0.5pt]\n \\hline\n Age&&\\checkmark\\\\[0.5pt]\n \\hline\n\\end{tabular}\n\\caption{A list of the protected attributes as specified in the Fair Housing and Equal Credit Opportunity Acts (FHA and ECOA), from .}\n\\label{electionexample}\n\\end{table}", "id": "b2cc4f1f-7f19-48f9-916c-9b2a12874118", "level": "subsection", "origin_cites_number": 64, 
"parent_id": "c4976b58-d2d7-4703-a545-b120b4c383c4", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Methods for Fair Machine Learning" ], [ "subsection", "Unbiasing Data" ] ], "subsections": [], "title": "Unbiasing Data" }, { "cite_extract_rate": 0, "cites": [], "content": "To address this issue, a variety of methods have been proposed that satisfy some of the fairness definitions or other new definitions depending on the application.", "id": "e20fb19c-7e93-467e-a8cc-5fb0c9fcbfb6", "level": "subsection", "origin_cites_number": 0, "parent_id": "c4976b58-d2d7-4703-a545-b120b4c383c4", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Methods for Fair Machine Learning" ], [ "subsection", "Fair Machine Learning" ] ], "subsections": [ "f3f2696e-da02-4d46-b579-d4238f074625", "ed3d7f9a-9c0e-4566-9679-2996c8055c84", "d75bb90a-926a-465c-a57f-14660004b6dc", "0325d67a-b15b-45da-ad92-925d0fd5d9b3", "1488231d-641b-4417-a97d-d2b28f9bacd2", "7bcd1575-144f-4d0e-9460-90e5e443aa39" ], "title": "Fair Machine Learning" }, { "cite_extract_rate": 0.64, "cites": [ 3894, 3937, 3923, 3912, 3935, 3112, 3940, 3928, 7759, 3899, 3917, 3919, 3939, 3929, 7760 ], "content": "Since classification is a canonical task in machine learning and is widely used in different areas that can be in direct contact with humans, it is important that these types of methods be fair and be absent from biases that can harm some populations. Therefore, certain methods have been proposed that satisfy certain definitions of fairness in classification. For instance, in authors try to satisfy subgroup fairness in classification, equality of opportunity and equalized odds in , both disparate treatment and disparate impact in , and equalized odds in . Other methods try to not only satisfy some fairness constraints but to also be stable toward change in the test set . 
The authors in propose a general framework for learning fair classifiers, which can be used to formulate fairness-aware classification with fairness guarantees. In another work , authors propose three different modifications to the standard Naive Bayes classifier for discrimination-free classification. takes a new approach to fair classification by imposing fairness constraints within a multitask learning (MTL) framework. In addition to imposing fairness during training, this approach can benefit minority groups by maximizing the average accuracy of each group, as opposed to maximizing overall accuracy without attention to accuracy across different groups. In a similar work , authors propose a decoupled classification system where a separate classifier is learned for each group, using transfer learning to mitigate the problem of having less data for minority groups. In authors propose to achieve fair classification by mitigating the dependence of the classification outcome on the sensitive attributes, utilizing the Wasserstein distance measure. In authors propose the Preferential Sampling (PS) method to create a discrimination-free training dataset; a classifier trained on this dataset is then expected to make discrimination-free predictions. 
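The sampling idea behind such pre-processing can be caricatured with a much cruder sketch than the ranked borderline-selection scheme of the cited PS method: resample every (group, label) cell to the size it would have if group membership and label were independent. All names below are illustrative:

```python
import random

def independence_resample(data, seed=0):
    """Resample each (group, label) cell of `data` (a list of
    (group, label, features) tuples) to its expected size under
    independence of group and label, so every group ends up with the
    same positive rate. A crude stand-in for preferential sampling."""
    rng = random.Random(seed)
    n = len(data)
    out = []
    for g in {d[0] for d in data}:
        for y in {d[1] for d in data}:
            cell = [d for d in data if d[0] == g and d[1] == y]
            if not cell:
                continue
            p_g = sum(1 for d in data if d[0] == g) / n
            p_y = sum(1 for d in data if d[1] == y) / n
            # duplicate (with replacement) or subsample to the target size
            out.extend(rng.choices(cell, k=round(p_g * p_y * n)))
    return out

biased = ([("A", 1, None)] * 8 + [("A", 0, None)] * 2 +
          [("B", 1, None)] * 2 + [("B", 0, None)] * 8)
fixed = independence_resample(biased)
pos_rate = lambda g: (sum(1 for d in fixed if d[0] == g and d[1] == 1) /
                      sum(1 for d in fixed if d[0] == g))
print(pos_rate("A"), pos_rate("B"))  # 0.5 0.5
```

Unlike this uniform resampling, the actual PS method selects which instances to duplicate or drop by their closeness to the decision boundary, which better preserves the classifier's accuracy.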
In~, authors propose a post-processing bias mitigation strategy that utilizes attention mechanism for classification and that can provide interpretability.\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{ |p{3.6cm}||p{1.5cm}|p{2.5cm}|p{2.5cm}|p{2.5cm}|}\n \\hline\n Algorithm&Reference&Pre-Processing&In-Processing& Post-Processing\\\\\n \\hline\n Community detection&&\\checkmark&&\\\\[0.5pt]\n \\hline\n Word embedding&&\\checkmark&&\\\\[0.5pt]\n \\hline\n Optimized pre-processing&&\\checkmark&&\\\\[0.5pt]\n \\hline\n Data pre-processing&&\\checkmark&&\\\\[0.5pt]\n \\hline\n Classification&& &\\checkmark&\\\\[0.5pt]\n \\hline\n Regression&&&\\checkmark&\\\\[0.5pt]\n \\hline\n Classification&&&\\checkmark&\\\\[0.5pt]\n \\hline\n Classification&&&\\checkmark&\\\\[0.5pt]\n \\hline\n Adversarial learning&&&\\checkmark&\\\\[0.5pt]\n \\hline\n Classification&&&&\\checkmark\\\\[0.5pt]\n \\hline\n Word embedding&&&&\\checkmark\\\\[0.5pt]\n \\hline\n Classification&&&&\\checkmark\\\\[0.5pt]\n \\hline\n Classification&&&&\\checkmark\\\\[0.5pt]\n \\hline\n\\end{tabular}\n\\caption{Algorithms categorized into their appropriate groups based on being pre-processing, in-processing, or post-processing.}\n\\label{prepost}\n\\end{table}", "id": "f3f2696e-da02-4d46-b579-d4238f074625", "level": "subsubsection", "origin_cites_number": 25, "parent_id": "e20fb19c-7e93-467e-a8cc-5fb0c9fcbfb6", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Methods for Fair Machine Learning" ], [ "subsection", "Fair Machine Learning" ], [ "subsubsection", "Fair Classification" ] ], "subsections": [], "title": "Fair Classification" }, { "cite_extract_rate": 1, "cites": [ 3939, 3930, 3912 ], "content": " proposes a fair regression method along with evaluating it with a measure introduced as the ``price of fairness'' (POF) to measure accuracy-fairness trade-offs. 
They introduce three fairness penalties as follows:\\\\\nIndividual Fairness: The definition of individual fairness, as stated in~, is: ``for every cross pair $(x, y)\\in S_{1}$, $(x', y')\\in S_{2}$, a model $w$ is penalized for how differently it treats $x$ and $x'$ (weighted by a function of $|y - y'|$), where $S_{1}$ and $S_{2}$ are different groups from the sampled population.'' Formally, this is operationalized as\n\\[f_{1}(w,S)= \\frac{1}{n_{1}n_{2}} \\sum_{\\substack{(x_{i},y_{i})\\in S_{1}\\\\(x_{j},y_{j})\\in S_{2}}}d(y_{i},y_{j})(w.x_{i}-w.x_{j})^{2} \\]\nGroup Fairness: ``On average, the two groups' instances should have similar labels (weighted by the nearness of the labels of the instances)'' .\n\\[f_{2}(w,S)= \\Bigg(\\frac{1}{n_{1}n_{2}} \\sum_{\\substack{(x_{i},y_{i})\\in S_{1}\\\\(x_{j},y_{j})\\in S_{2}}}d(y_{i},y_{j})(w.x_{i}-w.x_{j})\\Bigg)^{2} \\]\nHybrid Fairness: ``Hybrid fairness requires both positively and both negatively labeled cross pairs to be treated similarly in an average over the two groups'' .\n\\[f_{3}(w,S)= \\Bigg ( \\sum_{\\substack{(x_{i},y_{i})\\in S_{1}\\\\(x_{j},y_{j})\\in S_{2}\\\\y_{i}=y_{j}=1}} \\frac{d(y_{i},y_{j})(w.x_{i}-w.x_{j})}{n_{1,1}n_{2,1}} \\Bigg)^{2} + \\Bigg ( \\sum_{\\substack{(x_{i},y_{i})\\in S_{1}\\\\(x_{j},y_{j})\\in S_{2}\\\\y_{i}=y_{j}=-1}} \\frac{d(y_{i},y_{j})(w.x_{i}-w.x_{j})}{n_{1,-1}n_{2,-1}} \\Bigg)^{2} \\]\nIn addition to the previous work, considers the fair regression problem with regard to two notions of fairness: statistical (demographic) parity and bounded group loss. 
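The penalties above are straightforward to evaluate numerically. A sketch of the group-fairness penalty $f_{2}$ for a linear model, using $d(y_i, y_j) = e^{-(y_i - y_j)^2}$ as one possible nearness weight (the weighting function is a modeling choice here, not necessarily the one used in the cited work; names are illustrative):

```python
import math

def f2_group_penalty(w, S1, S2, d=lambda yi, yj: math.exp(-(yi - yj) ** 2)):
    """Group-fairness penalty f_2(w, S): the squared cross-group average
    of d(y_i, y_j) * (w.x_i - w.x_j). S1, S2 are lists of (x, y) pairs."""
    dot = lambda x: sum(wi * xi for wi, xi in zip(w, x))
    total = sum(d(yi, yj) * (dot(xi) - dot(xj))
                for xi, yi in S1 for xj, yj in S2)
    return (total / (len(S1) * len(S2))) ** 2

S1 = [([1.0], 1.0), ([2.0], 2.0)]
S2 = [([1.0], 1.0), ([2.0], 2.0)]
print(f2_group_penalty([1.0], S1, S2))  # 0.0: both groups treated identically
print(f2_group_penalty([1.0], S1, [([4.0], 1.0)]) > 0)  # True: group gap
```

Because the cross-group differences are averaged before squaring, $f_2$ only penalizes a systematic offset between the groups, whereas $f_1$ squares each pairwise difference and so also penalizes individual-level disparities.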
uses decision trees to satisfy disparate impact and disparate treatment in regression tasks in addition to classification.", "id": "ed3d7f9a-9c0e-4566-9679-2996c8055c84", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "e20fb19c-7e93-467e-a8cc-5fb0c9fcbfb6", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Methods for Fair Machine Learning" ], [ "subsection", "Fair Machine Learning" ], [ "subsubsection", "Fair Regression" ] ], "subsections": [], "title": "Fair Regression" }, { "cite_extract_rate": 1, "cites": [ 3936 ], "content": "In , authors studied semantic role-labeling models and a well-known dataset, imSitu, and found that in the imSitu training set only 33\\% of the agent roles in cooking images are men, while the remaining 67\\% are women. They also noticed that, in addition to the existing bias in the dataset, the model amplifies that bias: after training a model\\footnote{Specifically, a Conditional Random Field (CRF)} on the dataset, the bias is magnified, with men filling only 16\\% of the agent roles in cooking images at prediction time. Based on these observations, the authors show that structured prediction models risk leveraging social bias. Therefore, they propose a calibration algorithm called RBA (reducing bias amplification), a technique for debiasing models by calibrating predictions in structured prediction. The idea behind RBA is to ensure that the model's predictions follow the same distribution as the training data. They study two cases: multi-label object classification and visual semantic role labeling. 
They show how these methods amplify the existing bias in data.", "id": "d75bb90a-926a-465c-a57f-14660004b6dc", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "e20fb19c-7e93-467e-a8cc-5fb0c9fcbfb6", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Methods for Fair Machine Learning" ], [ "subsection", "Fair Machine Learning" ], [ "subsubsection", "Structured Prediction" ] ], "subsections": [], "title": "Structured Prediction" }, { "cite_extract_rate": 0.5, "cites": [ 8700 ], "content": "In authors show that vanilla PCA can incur much higher reconstruction error for one group of people than for a different group of equal size, so they propose a fair method to create representations with similar richness for different populations---not to make them indistinguishable, or to hide dependence on a sensitive or protected attribute. They show that vanilla PCA on the Labeled Faces in the Wild (LFW) dataset~ has lower reconstruction error for men's faces than for women's faces, even if the sampling is done with equal weight for both genders. They therefore introduce a dimensionality reduction technique that maintains similar fidelity for different groups and populations in the dataset: Fair PCA, a fair dimensionality reduction algorithm. 
Their definition of Fair PCA (as an optimization problem) is as follows, in which $A$ and $B$ denote two subgroups and $U_A$ and $U_B$ denote the matrices whose rows correspond to the rows of $U$ for members of subgroups $A$ and $B$, given $m$ data points in $R^n$:\\\\\n\\[ \\min_{U \\in R^{m \\times n}, rank(U) \\leq d} \\; \\max \\Bigg \\{ \\frac{1}{|A|} loss(A , U_{A}) , \\frac{1}{|B|} loss(B , U_{B}) \\Bigg \\} \\]\nTheir proposed algorithm is a two-step process:\n\\begin{enumerate}\n\\item{Relax the Fair PCA objective to a semidefinite program (SDP) and solve it.}\n\\item{Solve a linear program that reduces the rank of the solution.}\n\\end{enumerate}", "id": "0325d67a-b15b-45da-ad92-925d0fd5d9b3", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "e20fb19c-7e93-467e-a8cc-5fb0c9fcbfb6", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Methods for Fair Machine Learning" ], [ "subsection", "Fair Machine Learning" ], [ "subsubsection", "Fair PCA" ] ], "subsections": [], "title": "Fair PCA" }, { "cite_extract_rate": 1, "cites": [ 1441, 3894, 3932, 3925 ], "content": "Inequalities in online communities and social networks are another place where bias and discrimination can affect populations. For example, users with fewer friends or followers are at a disadvantage in being heard in online social media . In addition, existing methods, such as community detection methods, can amplify this bias by ignoring these low-connected users in the network or by wrongly assigning them to small, irrelevant communities. In authors show how this type of bias exists and is perpetuated by existing community detection methods. They propose a new attributed community detection method, called CLAN, to mitigate the harm toward disadvantaged groups in online social communities. 
CLAN is a two-step process that considers the network structure alongside node attributes to address exclusion bias, as indicated below:\n\\begin{enumerate}\n\\item{Detect communities using modularity values (Step 1-unsupervised using only network structure).}\n\\item{Train a classifier to classify users in the minor groups, putting them into one of the major groups using held-out node attributes (Step 2-supervised using other node attributes).}\n\\end{enumerate}\nFair methods in domains similar to community detection are also proposed, such as graph embedding and clustering .", "id": "1488231d-641b-4417-a97d-d2b28f9bacd2", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "e20fb19c-7e93-467e-a8cc-5fb0c9fcbfb6", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Methods for Fair Machine Learning" ], [ "subsection", "Fair Machine Learning" ], [ "subsubsection", "Community Detection/Graph Embedding/Clustering" ] ], "subsections": [], "title": "Community Detection/Graph Embedding/Clustering" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 3898, 3923, 3916, 3905, 8706, 8705, 3926 ], "content": "Causal models can ascertain causal relationships between variables. Using causal graphs one can represent these causal relationships between variables (nodes of the graph) through the edges of the graph. These models can be used to remove unwanted causal dependence of outcomes on sensitive attributes such as gender or race in designing systems or policies . Many researchers have used causal models and graphs to solve fairness-related concerns in machine learning. In , authors discuss in detail the subject of causality and its importance while designing fair algorithms. There has been much research on discrimination discovery and removal that uses causal models and graphs in order to make decisions that are irrespective of sensitive attributes of groups or individuals. 
For instance, in authors propose a causal-based framework that detects direct and indirect discrimination in the data along with their removal techniques. is an extension to the previous work. gives a nice overview of most of the previous work done in this area by the authors, along with discussing system-, group-, and individual-level discrimination and solving each using their previous methods, in addition to targeting direct and indirect discrimination. By expanding on the previous work and generalizing it, authors in propose a similar pathway approach for fair inference using causal graphs; this would restrict certain problematic and discriminative pathways in the causal graph flexibly given any set of constraints. This holds when the path-specific effects can be identified from the observed distribution. In authors introduce the path-specific counterfactual fairness definition which is an extension to counterfactual fairness definition and propose a method to achieve it further extending the work in . In authors extended a formalization of algorithmic fairness from their previous work to the setting of learning optimal policies that are subject to constraints based on definitions of fairness. They describe several strategies for learning optimal policies by modifying some of the existing strategies, such as Q-learning, value search, and G-estimation, based on some fairness considerations. In authors only target discrimination discovery and no removal by finding instances similar to another instance and observing if a change in the protected attribute will change the outcome of the decision. If so, they declare the existence of discrimination. 
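This situation-testing idea can be illustrated with a small toy sketch (the records, feature names, and similarity function are purely hypothetical; real systems use proper distance functions over many attributes):

```python
def knn_outcome_rate(instance, records, group, features, k=3):
    """Average decision among the k records of `group` most similar to
    `instance`; similarity counts matching non-protected features."""
    pool = [r for r in records if r["group"] == group]
    pool.sort(key=lambda r: -sum(instance[f] == r[f] for f in features))
    top = pool[:k]
    return sum(r["decision"] for r in top) / len(top)

# Purely hypothetical records: two non-protected features, the group
# membership, and the (binary) decision each applicant received.
records = [
    {"age": 1, "degree": 1, "group": "A", "decision": 1},
    {"age": 1, "degree": 0, "group": "A", "decision": 1},
    {"age": 0, "degree": 1, "group": "A", "decision": 1},
    {"age": 1, "degree": 1, "group": "B", "decision": 0},
    {"age": 1, "degree": 0, "group": "B", "decision": 0},
    {"age": 0, "degree": 1, "group": "B", "decision": 1},
]

probe = {"age": 1, "degree": 1}
feats = ("age", "degree")
gap = knn_outcome_rate(probe, records, "A", feats) - \
      knn_outcome_rate(probe, records, "B", feats)
# A large gap between otherwise similar neighbours of the two groups
# flags possible discrimination for this instance.
print(round(gap, 2))  # 0.67
```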
In , authors define the following two notions of discrimination---unresolved discrimination and proxy discrimination---as follows: \\
\textbf{Unresolved Discrimination:} ``A variable V in a causal graph exhibits unresolved discrimination if there exists a directed path from A to V that is not blocked by a resolving variable, and V itself is non-resolving'' . \\
\textbf{Proxy Discrimination:} ``A variable V in a causal graph exhibits potential proxy discrimination, if there exists a directed path from A to V that is blocked by a proxy variable and V itself is not a proxy'' .
They propose methods to prevent and avoid both. They also show that no observational criterion can determine whether a predictor exhibits unresolved discrimination; therefore, a causal reasoning framework needs to be incorporated. \\
In , instead of using the usual risk difference $RD=p_{1}-p_{2}$, authors propose a causal risk difference $RD^{c}=p_{1}-p_{2}^{c}$ for causal discrimination discovery.
They define $p_{2}^{c}$ as:\\
\[ p_{2}^{c} = \frac{\sum_{\textbf{s} \in S,\, \mathrm{dec}(\textbf{s})=\ominus} w(\textbf{s})}{\sum_{\textbf{s} \in S} w(\textbf{s})} \]
An $RD^{c}$ not close to zero indicates a bias in the decision value due to group membership (causal discrimination) or to covariates that have not been accounted for in the analysis (omitted variable bias).
This $RD^{c}$ then becomes their causal discrimination measure for discrimination discovery.
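Given the decisions and the weights $w(\textbf{s})$ over a matched comparison set $S$, the measure above can be computed directly (a minimal sketch with invented numbers; how the weights and the matched set are obtained is the substantive part of the method and is omitted here):

```python
def causal_risk_difference(protected_decisions, matched, negative=0):
    """RD^c = p1 - p2^c.

    p1:   fraction of negative decisions in the protected group.
    p2^c: weighted fraction of negative decisions over the matched
          comparison set S, following the formula above, where each
          pair in `matched` is (decision dec(s), weight w(s))."""
    p1 = sum(d == negative for d in protected_decisions) / len(protected_decisions)
    total_weight = sum(w for _, w in matched)
    p2c = sum(w for d, w in matched if d == negative) / total_weight
    return p1 - p2c

# Invented numbers: decisions are 1 (positive) / 0 (negative).
protected = [0, 0, 0, 1]                  # p1 = 0.75
matched = [(1, 2.0), (1, 1.0), (0, 1.0)]  # p2^c = 1.0 / 4.0 = 0.25
print(causal_risk_difference(protected, matched))  # 0.5
```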
 is another work of this type that uses causal networks for discrimination discovery.
Learning fair representations and avoiding the unfair interference of sensitive attributes has been explored in many different research papers. A well-known example is the Variational Fair Autoencoder introduced in . Here, they treat the sensitive variable as a nuisance variable, so that by removing the information about this variable they obtain a fair representation. They use a maximum mean discrepancy regularizer to obtain invariance in the posterior distribution over latent variables. Adding this maximum mean discrepancy (MMD) penalty to the lower bound of their VAE architecture yields the Variational Fair Autoencoder. Similar work, but not targeting fairness specifically, has been introduced in .
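The MMD regularizer at the core of this approach can be illustrated with a small self-contained estimator (a plain-Python sketch with an RBF kernel and invented latent codes, not the paper's implementation, which operates on minibatches of encoder outputs):

```python
import math

def rbf(x, y, gamma=1.0):
    """RBF kernel between two equal-length vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd2(zs_a, zs_b, gamma=1.0):
    """Biased estimator of the squared maximum mean discrepancy
    between two samples of latent codes (always non-negative)."""
    def mean_kernel(xs, ys):
        return sum(rbf(x, y, gamma) for x in xs for y in ys) / (len(xs) * len(ys))
    return mean_kernel(zs_a, zs_a) + mean_kernel(zs_b, zs_b) - 2 * mean_kernel(zs_a, zs_b)

# Invented 2-d latent codes for the two values of the sensitive variable.
z_group0 = [(0.0, 0.1), (0.2, 0.0)]
z_group1 = [(0.1, 0.0), (0.0, 0.2)]

penalty = mmd2(z_group0, z_group1)
print(penalty >= 0.0)  # True
```

A scaled version of this penalty is added to the variational lower bound, pushing the latent distributions of the two groups toward each other and thereby removing sensitive information from the representation.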
In authors also propose a debiased VAE architecture called DB-VAE, which learns sensitive latent variables that can bias the model (e.g., skin tone, gender, etc.), and propose an algorithm on top of this DB-VAE that uses these latent variables to debias systems such as facial detection systems. In authors model their representation-learning task as an optimization objective that jointly minimizes the reconstruction loss and the mutual information between the encoding and the sensitive variable. The relaxed version of this objective is shown in Equation \ref{eq1}. They use this in order to learn a fair representation and show that adversarial training is unnecessary and in some cases even counter-productive. In Equation \ref{eq1}, $c$ is the sensitive variable and $z$ the encoding of $x$.
\begin{equation}
    \min_{q} \mathcal{L}(q,x)+\lambda I(z,c)
    \label{eq1}
\end{equation}
In , authors introduce flexibly fair representation learning by disentanglement, which disentangles information from multiple sensitive attributes. Their flexible and fair variational autoencoder is flexible not only with respect to downstream task labels but also with respect to sensitive attributes. They address the demographic parity notion of fairness and can target multiple sensitive attributes or any subset combination of them.
In authors present a framework to mitigate bias in models learned from data with stereotypical associations.
They propose a model that tries to maximize the accuracy of the predictor on $y$ while minimizing the ability of the adversary to predict the protected or sensitive variable (the stereotyping variable $z$). The model consists of two parts---the predictor and the adversary---as shown in Figure \ref{adversary1}. In their model, the predictor is trained to predict $Y$ given $X$. With the help of a gradient-based approach, such as stochastic gradient descent, the model learns the weights $W$ by minimizing some loss function $L_{P}(\hat{y}, y)$. The output layer is then passed to the adversary, which is another network that tries to predict $Z$. 
The adversary may have different inputs depending on the fairness definition to be achieved. For instance, in order to satisfy \textbf{Demographic Parity}, the adversary would try to predict the protected variable $Z$ using only the predicted label $\hat{Y}$ passed as input; the predictor's goal is to prevent the adversary from doing so. Similarly, to achieve \textbf{Equality of Odds}, the adversary would get the true label $Y$ in addition to the predicted label $\hat{Y}$. To satisfy \textbf{Equality of Opportunity} for a given class $y$, only instances where $Y=y$ would be selected for the adversary. takes an interesting and different direction toward solving fairness issues with adversarial networks by introducing FairGAN, which generates synthetic data that is free from discrimination and similar to the real data. They use the newly generated synthetic data from FairGAN, which is now debiased, instead of the real data for training and testing. Unlike many existing approaches, they do not try to remove discrimination from the dataset, but instead generate new datasets similar to the real one that are debiased and preserve good data utility. The architecture of their FairGAN model is shown in Figure \ref{adversary}.
FairGAN consists of two components: a generator $G_{Dec}$ which generates the fake data conditioned on the protected attribute $P_{G}(x,y,s)=P_{G}(x,y|s)P_{G}(s)$ where $P_{G}(s)=P_{data}(s)$, and two discriminators $D_{1}$ and $D_{2}$. $D_{1}$ is trained to differentiate the real data denoted by $P_{data}(x,y,s)$ from the generated fake data denoted by $P_{G}(x,y,s)$. \n\\begin{figure}[H]\n\\includegraphics[width=.6\\textwidth,trim=10cm 0cm 5cm 1cm,clip=true]{adversary.pdf}\n\\caption{Structure of FairGAN as proposed in .}\n\\label{adversary}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=0.6\\textwidth,trim=0cm 12cm 0cm 12cm,clip=true]{adversary1.pdf}\n\\caption{The architecture of adversarial network proposed in \\textsuperscript{\\textcopyright} Brian Hu Zhang.}\n\\label{adversary1}\n\\end{figure}\nIn addition to that, for achieving fairness constraints, such as statistical parity, $P_{G}(x,y|s=1)=P_{G}(x,y|s=0)$, the training of $D_{2}$ is such that it emphasizes differentiation of the two types of synthetic (generated by the model) samples $P_{G}(x,y|s=1)$ and $P_{G}(x,y|s=0)$ indicating if the synthetic samples are from the unprotected or protected groups. 
Here $s$ denotes the protected or sensitive variable, and we adopt the same notation as in .
In authors noticed that in word analogy tests over state-of-the-art word embeddings, ``man'' would be mapped to ``computer programmer'' and ``woman'' to ``homemaker.'' This gender bias led the authors to propose a debiasing method that respects the embeddings for gender-specific words but debiases the embeddings for gender-neutral words, by following these steps: \textit{(Notice that Step 2 has two different options. 
Depending on whether you target hard debiasing or soft debiasing, you would use either step 2a or 2b)}
\begin{enumerate}
    \item \textbf{Identify gender subspace.} Identify a direction of the embedding that captures the bias .
    \item \textbf{Hard debiasing or soft debiasing:}
    \begin{enumerate}
        \item \textbf{Hard debiasing (neutralize and equalize).} Neutralize removes the gender-subspace component from gender-neutral words, ensuring that all gender-neutral words are zeroed out in the gender subspace .
        Equalize makes gender-neutral words equidistant from the words in each equality set of gendered words .
        \item \textbf{Soft bias correction.} Moves the embedding as little as possible, to retain its similarity to the original embedding, while reducing the gender bias. This trade-off is controlled by a parameter . 
    \end{enumerate}
\end{enumerate}
Following in the footsteps of these authors, subsequent work attempted to tackle this problem by generating a gender-neutral version of GloVe (called GN-GloVe) that retains gender information in some of the word embedding's learned dimensions, while ensuring that other dimensions are free from this gender effect. This approach primarily relies on GloVe as its base model, with gender as the protected attribute. However, a recent paper argues against these debiasing techniques, stating that many recent works on debiasing word embeddings have been superficial, and that those techniques merely hide the bias rather than remove it. A recent work took a new direction and proposed a preprocessing method that discovers the problematic, bias-carrying documents in the training corpus and debiases the system by efficiently perturbing or removing these documents from the training corpus. In a very recent work , authors target bias in ELMo's contextualized word vectors and attempt to analyze and mitigate the observed bias in the embeddings.
They show that the corpus used for training of ELMo has a significant gender skew, with male entities being nearly three times more common than female entities. This automatically leads to gender bias in these pretrained contextualized embeddings. They propose the following two methods for mitigating the existing bias while using the pretrained embeddings in a downstream task, coreference resolution: \n(1) train-time data augmentation approach, and \n(2) test-time neutralization approach.", "id": "1fa2d297-36ec-4a86-acde-a856c21c507d", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "e1f0d57a-fd3f-44b7-bd5f-6d4762dd55e0", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Methods for Fair Machine Learning" ], [ "subsection", "Fair NLP" ], [ "subsubsection", "Word Embedding" ] ], "subsections": [], "title": "Word Embedding" }, { "cite_extract_rate": 1, "cites": [ 3918, 3110 ], "content": "The paper shows that coreference systems have a gender bias. They introduce a benchmark, called WinoBias, focusing on gender bias in coreference resolution. In addition to that, they introduce a data-augmentation technique that removes bias in the existing state-of-the-art coreferencing methods, in combination with using word2vec debiasing techniques. Their general approach is as follows: They first generate auxiliary datasets using a rule-based approach in which they replace all the male entities with female entities and the other way around. Then they train models with a combination of the original and the auxiliary datasets. They use the above solution in combination with word2vec debiasing techniques to generate word embeddings. They also point out sources of gender bias in coreference systems and propose solutions \nto them. They show that the first source of bias comes from the training data and propose a solution that generates an auxiliary data set by swapping male and female entities. 
Another case arises from resource bias (the word embeddings themselves are biased), so the proposed solution is to replace GloVe with a debiased embedding method. Lastly, another source of bias can come from unbalanced gender lists, for which they propose balancing the counts in the lists. In another work , authors also show the existence of gender bias in three state-of-the-art coreference resolution systems by observing that, for many occupations, these systems resolve pronouns in a biased fashion, preferring one gender over the other.
In authors introduce a metric for measuring gender bias in text generated from a recurrent neural network language model trained on a text corpus, along with measuring the bias in the training text itself. They use Equation \ref{language_model_bias}, where $w$ is any word in the corpus, $f$ is a set of gendered words that belong to the female category, such as she, her, woman, etc., and $m$ is the corresponding set for the male category. They measure the bias using the mean absolute value and standard deviation of the proposed metric, along with fitting a univariate linear regression model over it, and then analyze the effectiveness of each of these metrics in measuring the bias.
\n\\begin{equation}\n bias(w) = log (\\frac{P(w|f)}{P(w|m)})\n \\label{language_model_bias}\n\\end{equation}\nIn their language model, they also introduce a regularization loss term that would minimize the projection of embeddings trained by the encoder onto the embedding of the gender subspace following the soft debiasing technique introduced in . Finally, they evaluate the effectiveness of their method on reducing gender bias and conclude by stating that in order to reduce bias, there is a compromise on perplexity. They also point out the effectiveness of word-level bias metrics over the corpus-level metrics.", "id": "bd27d570-e2df-4ee8-8cd1-3441aa6394cc", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "e1f0d57a-fd3f-44b7-bd5f-6d4762dd55e0", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Methods for Fair Machine Learning" ], [ "subsection", "Fair NLP" ], [ "subsubsection", "Language Model" ] ], "subsections": [], "title": "Language Model" }, { "cite_extract_rate": 1, "cites": [ 3110, 3938 ], "content": "In authors extend the research in detecting bias in word embedding techniques to that of sentence embedding. They try to generalize bias-measuring techniques, such as using the Word Embedding Association Test (WEAT ) in the context of sentence encoders by introducing their new sentence encoding bias-measuring techniques, the Sentence Encoder Association Test (SEAT). They used state-of-the-art sentence encoding techniques, such as CBoW, GPT, ELMo, and BERT, and find that although there was varying evidence of human-like bias in sentence encoders using SEAT, more recent methods like BERT are more immune to biases. 
That being said, they are not claiming that these models are bias-free, but state that more sophisticated bias discovery techniques may be used in these cases, thereby encouraging more future work in this area.
In authors noticed that when translating the word ``friend'' in the following two sentences from English to Spanish, they obtained different results---although in both cases this word should be translated the same way.\\
``She works in a hospital, my friend is a nurse.''\\
``She works in a hospital, my friend is a doctor.''\\
In both of these sentences, ``friend'' should be translated to the female version of the Spanish word for friend, ``amiga,'' but the results did not reflect this expectation. For the second sentence, friend was translated to ``amigo''---the male version of friend in Spanish. This is because doctor is stereotypically associated with males and nurse with females, and the model picks up this bias or stereotype and reflects it in its performance. To solve this, authors in build an approach that leverages the fact that machine translation uses word embeddings. They apply the existing word embedding debiasing methods in the machine translation pipeline. This not only helped them mitigate the existing bias in their system, but also boosted its performance by one BLEU point. In authors show that Google Translate can suffer from gender bias: they constructed sentences about occupations taken from the U.S. Bureau of Labor Statistics in a dozen gender-neutral languages, including Yoruba, Hungarian, and Chinese, translated them into English, and showed that Google Translate favors males for stereotypical fields such as STEM jobs. In authors annotated and analyzed the Europarl dataset , a large political, multilingual dataset used in machine translation, and discovered that, with the exception of the youngest age group (20-30), which represents only a very small percentage of the total number of sentences (0.71\%), more male data is available in all age groups. They also looked at the entire dataset and showed that 67.39\% of the sentences are produced by male speakers. Furthermore, to mitigate the gender-related issues and to improve morphological agreement in machine translation, they augmented every sentence with a tag on the English source side identifying the gender of the speaker. This helped the system in most cases, but not always, so further work has been suggested for integrating speaker information in other ways.
In , authors investigate a type of existing bias in various named entity recognition (NER) systems. In particular, they observed that in contexts where an entity should be tagged as a person entity, such as ``John is a person'' or ``John is going to school'', more female names than male names are tagged as non-person entities or not tagged at all.
To further formalize their observations, the authors propose six different evaluation metrics that measure the amount of bias across genders in NER systems. They curated templated sentences pertaining to human actions and applied these metrics to names from U.S. census data incorporated into the templates. The six measures each aim to demonstrate a certain type of bias, as follows:
\begin{itemize}
    \item Error Type-1 Unweighted: This error captures the proportion of entities that are tagged as anything other than the person entity in each of the male vs.\ female demographic groups. This could be the entity not being tagged, or being tagged as another entity, such as location.
    \[ \frac{\sum_{n \in N_f} I(n_{type} \neq \mathrm{PERSON})}{|N_f|}\]
    \item Error Type-1 Weighted: This error is similar to its unweighted case, except the authors consider the frequency or popularity of names, so that a more popular name being tagged wrongfully is penalized more.
    \[ \frac{\sum_{n \in N_f} freq_f(n)\, I(n_{type} \neq \mathrm{PERSON})}{\sum_{n \in N_f} freq_f(n)},\]
where $freq_f(\cdot)$ indicates the frequency of a name for a particular year in the female census data. Likewise, $freq_m(\cdot)$ indicates the frequency of a name for a particular year in the male census data.
    \item Error Type-2 Unweighted: This is a type of error in which the entity is tagged as another entity, such as location or city.
Notice that this error does not count cases where the entity is not tagged.
    \[ \frac{\sum_{n \in N_f} I(n_{type} \notin \{\emptyset, \mathrm{PERSON}\})}{|N_f|},\]
where $\emptyset$ indicates that the name is not tagged.
    \item Error Type-2 Weighted: Again, this error is similar to its unweighted case, with frequency taken into consideration.
    \[ \frac{\sum_{n \in N_f} freq_f(n)\, I(n_{type} \notin \{\emptyset, \mathrm{PERSON}\})}{\sum_{n \in N_f} freq_f(n)}\]
    \item Error Type-3 Unweighted: This error reports whether the entity is not tagged at all. Notice that even if the entity is tagged as a non-person entity, this error type does not count it.
    \[ \frac{\sum_{n \in N_f} I(n_{type} = \emptyset)}{|N_f|}\]
    \item Error Type-3 Weighted: Again, this error is similar to its unweighted case, with frequency taken into consideration.
    \[ \frac{\sum_{n \in N_f} freq_f(n)\, I(n_{type} = \emptyset)}{\sum_{n \in N_f} freq_f(n)}\]
\end{itemize}
The authors also investigate the data these NER systems are trained on and find that it is biased against female names, as it does not include a sufficiently diverse set of names to represent them.
The field of algorithmic fairness is a relatively new area of research, and work still needs to be done for its improvement.
With that being said, there are already papers that propose fair AI algorithms and bias mitigation techniques and compare different mitigation algorithms using different benchmark datasets in the fairness domain. For instance, authors in propose a geometric solution to learn fair representations that removes correlation between protected and unprotected features. The proposed approach can control the trade-off between fairness and accuracy via an adjustable parameter. In this work, the authors evaluate the performance of their approach on different benchmark datasets, such as COMPAS, Adult, and German, and compare it against various other fair learning algorithms with respect to fairness and accuracy measures . In addition, IBM's AI Fairness 360 (AIF360) toolkit implements many of the current fair learning algorithms and provides demos of some of the results, which interested users can utilize to compare different methods with regard to different fairness measures.
\label{sec:future}
While there have been many definitions of, and approaches to, fairness in the literature, the study in this area is anything but complete. Fairness and algorithmic bias still hold a number of research opportunities. 
In this section, we provide pointers to outstanding challenges in fairness research, and an overview of opportunities for the development of understudied problems.
There are several remaining challenges to be addressed in the fairness literature. Among them are:
\begin{enumerate}
\item \textbf{Synthesizing a definition of fairness.} Several definitions of what would constitute fairness from a machine learning perspective have been proposed in the literature. These definitions cover a wide range of use cases, and as a result are somewhat disparate in their view of fairness. Because of this, it is nearly impossible to understand how one fairness solution would fare under a different definition of fairness. Synthesizing these definitions into one remains an open research problem, as it would make evaluation of these systems more unified and comparable. Having a more unified fairness definition and framework can also help with the incompatibility of some current fairness definitions.
\item \textbf{From Equality to Equity.} The definitions presented in the literature mostly focus on \emph{equality}, ensuring that each individual or group is given the same amount of resources, attention or outcome. However, little attention has been paid to \emph{equity}, which is the concept that each individual or group is given the resources they need to succeed~. 
Operationalizing this definition and studying how it augments or contradicts existing definitions of fairness remains an exciting future direction.\n\\item \\textbf{Searching for Unfairness.} Given a definition of fairness, it should be possible to identify instances of this unfairness in a particular dataset. Inroads toward this problem have been made in the areas of data bias by detecting instances of Simpson's Paradox in arbitrary datasets~; however, unfairness may require more consideration due to the variety of definitions and the nuances in detecting each one. \n\\end{enumerate}\n\\begin{figure}[h]\n\\includegraphics[width=\\textwidth,trim=0cm 0cm 0cm 0cm,clip=true]{new_heatmap.png}\n\\caption{Heatmap depicting distribution of previous work in fairness, grouped by domain and fairness definition.}\n\\label{heatmap}\n\\end{figure}", "id": "92291949-ae76-45f3-a36f-8985c3893828", "level": "subsection", "origin_cites_number": 3, "parent_id": "539e2025-f6c3-47b4-adeb-1aa1c312db9d", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Challenges and Opportunities for Fairness Research" ], [ "subsection", "Challenges" ] ], "subsections": [], "title": "Challenges" }, { "cite_extract_rate": 0, "cites": [], "content": "In this work we have taxonomized and summarized the current state of research into algorithmic biases and fairness---with a particular focus on machine learning. Even in this area alone, the research is broad. Subareas, from natural language processing, to representation learning, to community detection, have all seen efforts to make their methodologies more fair. Nevertheless, every area has not received the same amount of attention from the research community. Figure~\\ref{heatmap} provides an overview of what has been done in different areas to address fairness---categorized by the fairness definition type and domain. 
Some areas (e.g., community detection at the subgroup level) have received no attention in the literature, and could be fertile future research areas.", "id": "11f32533-786b-420d-b1e3-8204ae8b97c7", "level": "subsection", "origin_cites_number": 0, "parent_id": "539e2025-f6c3-47b4-adeb-1aa1c312db9d", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Challenges and Opportunities for Fairness Research" ], [ "subsection", "Opportunities" ] ], "subsections": [], "title": "Opportunities" }, { "cite_extract_rate": 0, "cites": [], "content": "In this survey we introduced problems that can adversely affect AI systems in terms of bias and unfairness. The issues were viewed primarily from two dimensions: data and algorithms. We illustrated problems that demonstrate why fairness is an important issue. We further showed examples of the potential real-world harm that unfairness can have on society---such as applications in judicial systems, face recognition, and promoting algorithms. We then went over the definitions of fairness and bias that have been proposed by researchers. To further stimulate the interest of readers, we provided some of the work done in different areas in terms of addressing the biases that may affect AI systems and different methods and domains in AI, such as general machine learning, deep learning and natural language processing. We then further subdivided the fields into a more fine-grained analysis of each subdomain and the work being done to address fairness constraints in each. The hope is to expand the horizons of the readers to think deeply while working on a system or a method to ensure that it has a low likelihood of causing potential harm or bias toward a particular group. With the expansion of AI use in our world, it is important that researchers take this issue seriously and expand their knowledge in this field. 
In this survey we categorized and created a taxonomy of what has been done so far to address different issues in different domains regarding the fairness issue. Other possible future work and directions can be taken to address the existing problems and biases in AI that we discussed in the previous sections.", "id": "f1a962e1-7041-4346-83da-2bc61957ad17", "level": "section", "origin_cites_number": 0, "parent_id": "11d83682-7593-45c6-99d3-daa7a02d3b91", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" }, { "cite_extract_rate": 0, "cites": [], "content": "This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR0011890019. We would like to thank the organizers, speakers and the attendees at the IVADO-Mila 2019 Summer School on Bias and Discrimination in AI. We would like to also thank Brian Hu Zhang and Shreya Shankar.", "id": "e86008ad-785b-47fc-b2af-fb96edaafac7", "level": "section", "origin_cites_number": 0, "parent_id": "11d83682-7593-45c6-99d3-daa7a02d3b91", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Acknowledgments" ] ], "subsections": [], "title": "Acknowledgments" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "2e9146e4-82ec-4884-9fe3-eec975d6ffba", "level": "section", "origin_cites_number": 0, "parent_id": "11d83682-7593-45c6-99d3-daa7a02d3b91", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Appendix" ] ], "subsections": [ "f668f915-0536-4f60-819c-548baf934ae5" ], "title": "Appendix" }, { "cite_extract_rate": 0, "cites": [], "content": "Aside from the existence of bias in datasets, there are datasets that are specifically used to address bias and fairness issues in machine learning. 
There are also some datasets that were introduced to target the issues and biases previously observed in older existing datasets. Below we list some of the widely known datasets that have the characteristics discussed in this survey.", "id": "f668f915-0536-4f60-819c-548baf934ae5", "level": "subsection", "origin_cites_number": 0, "parent_id": "2e9146e4-82ec-4884-9fe3-eec975d6ffba", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Appendix" ], [ "subsection", "Datasets for Fairness Research" ] ], "subsections": [ "8992d377-6658-41f2-8af2-5c1f7b043bb4", "40d42e67-e703-4da6-89cc-b682175501a3", "3dfcb96e-2c28-4414-9392-9efc71e93c4f", "5a365fff-6556-438c-bae3-716bfe574d89", "0b0a68ca-1749-4849-b8b1-dbd5147e32aa", "a5a84993-2bab-499f-bb18-a03e48f569e2", "3edd129f-8c3b-4e4b-9852-dccfc2d8a9e1", "6103003f-f238-4892-a6f8-7c7e72aa6493" ], "title": "Datasets for Fairness Research" }, { "cite_extract_rate": 0, "cites": [], "content": "The UCI Adult dataset, also known as the \"Census Income\" dataset, contains information extracted from the 1994 census data about people, with attributes such as age, occupation, education, race, sex, marital-status, native-country, hours-per-week etc., indicating whether the income of a person exceeds \$50K/yr or not. 
It can be used in fairness-related studies that want to compare gender or race inequalities based on people's annual incomes, or various other studies .", "id": "8992d377-6658-41f2-8af2-5c1f7b043bb4", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "f668f915-0536-4f60-819c-548baf934ae5", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Appendix" ], [ "subsection", "Datasets for Fairness Research" ], [ "subsubsection", "UCI Adult Dataset" ] ], "subsections": [], "title": "UCI Adult Dataset" }, { "cite_extract_rate": 0, "cites": [], "content": "The German Credit dataset contains 1000 credit records containing attributes such as personal status and sex, credit score, credit amount, housing status etc. It can be used in studies about gender inequalities on credit-related issues .", "id": "40d42e67-e703-4da6-89cc-b682175501a3", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "f668f915-0536-4f60-819c-548baf934ae5", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Appendix" ], [ "subsection", "Datasets for Fairness Research" ], [ "subsubsection", "German Credit Dataset" ] ], "subsections": [], "title": "German Credit Dataset" }, { "cite_extract_rate": 1, "cites": [ 3918 ], "content": "The WinoBias dataset follows the winograd format and has 40 occupations in sentences that are referenced to human pronouns.\nThere are two types of challenge sentences in the dataset requiring linkage of gendered pronouns to either male or female stereotypical occupations. 
It was used in the coreference resolution study to certify if a system has gender bias or not---in this case, towards stereotypical occupations .", "id": "3dfcb96e-2c28-4414-9392-9efc71e93c4f", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "f668f915-0536-4f60-819c-548baf934ae5", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Appendix" ], [ "subsection", "Datasets for Fairness Research" ], [ "subsubsection", "WinoBias" ] ], "subsections": [], "title": "WinoBias" }, { "cite_extract_rate": 0, "cites": [], "content": "The Communities and Crime dataset gathers information from different communities in the United States related to several factors that can highly influence some common crimes such as robberies, murders or rapes. The data includes crime data obtained from the 1990 US LEMAS survey and the 1995 FBI Unified Crime Report. It also contains socio-economic data from the 1990 US Census.", "id": "5a365fff-6556-438c-bae3-716bfe574d89", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "f668f915-0536-4f60-819c-548baf934ae5", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Appendix" ], [ "subsection", "Datasets for Fairness Research" ], [ "subsubsection", "Communities and Crime Dataset" ] ], "subsections": [], "title": "Communities and Crime Dataset" }, { "cite_extract_rate": 0, "cites": [], "content": "The COMPAS dataset contains records for defendants from Broward County indicating their jail and prison times, demographics, criminal histories, and COMPAS risk scores from 2013 to 2014 .", "id": "0b0a68ca-1749-4849-b8b1-dbd5147e32aa", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "f668f915-0536-4f60-819c-548baf934ae5", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Appendix" ], [ "subsection", "Datasets for Fairness Research" ], [ "subsubsection", "COMPAS Dataset" 
] ], "subsections": [], "title": "COMPAS Dataset" }, { "cite_extract_rate": 0, "cites": [], "content": "The Recidivism in Juvenile Justice dataset contains all juvenile offenders between ages 12-17 who committed a crime between years 2002 and 2010 and completed a prison sentence in 2010 in Catalonia's juvenile justice system .", "id": "a5a84993-2bab-499f-bb18-a03e48f569e2", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "f668f915-0536-4f60-819c-548baf934ae5", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Appendix" ], [ "subsection", "Datasets for Fairness Research" ], [ "subsubsection", " Recidivism in Juvenile Justice Dataset" ] ], "subsections": [], "title": " Recidivism in Juvenile Justice Dataset" }, { "cite_extract_rate": 0, "cites": [], "content": "The Pilot Parliaments Benchmark dataset, also known as PPB, contains images of 1270 individuals in the national parliaments from three European (Iceland, Finland, Sweden) and three African (Rwanda, Senegal, South Africa) countries. This benchmark was released to have more gender and race balance, diversity, and representativeness .", "id": "3edd129f-8c3b-4e4b-9852-dccfc2d8a9e1", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "f668f915-0536-4f60-819c-548baf934ae5", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Appendix" ], [ "subsection", "Datasets for Fairness Research" ], [ "subsubsection", "Pilot Parliaments Benchmark Dataset" ] ], "subsections": [], "title": "Pilot Parliaments Benchmark Dataset" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 3918, 3945 ], "content": "The Diversity in Faces (DiF) is\nan image dataset collected for fairness research in face recognition. DiF is a large dataset containing one million annotations for face images. 
It is also a diverse dataset with diverse facial features, such as different \ncraniofacial distances, skin color, facial symmetry and contrast, age, pose, gender, resolution, along with diverse areas and ratios .\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{ |p{5.3cm}||p{1.3cm}|p{3.3cm}|p{3.3cm}|}\n \\hline\n Dataset Name& Reference & Size&Area\\\\\n \\hline\nUCI adult dataset& &48,842 income records&Social\\\\[0.5pt]\n \\hline\n German credit dataset&&1,000 credit records&Financial\\\\[0.5pt]\n \\hline\n Pilot parliaments benchmark dataset&&1,270 images &Facial images\\\\[0.5pt]\n \\hline\n WinoBias&&3,160 sentences&Coreference resolution\\\\[0.5pt]\n \\hline\n Communities and crime dataset&&1,994 crime records&Social\\\\[0.5pt]\n \\hline\n COMPAS Dataset&& 18,610 crime records&Social\\\\[0.5pt]\n \\hline\n Recidivism in juvenile justice dataset&&4,753 crime records&Social\\\\[0.5pt]\n \\hline\n Diversity in faces dataset&&1 million images&Facial images\\\\[0.5pt]\n \\hline\n\\end{tabular}\n\\caption{Most widely used datasets in the fairness domain with additional information about each of the datasets including their size and area of concentration.}\n\\label{dataset}\n\\end{table}\n\\bibliographystyle{ACM-Reference-Format.bst}\n\\bibliography{references}\n\\end{document}", "id": "6103003f-f238-4892-a6f8-7c7e72aa6493", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "f668f915-0536-4f60-819c-548baf934ae5", "prefix_titles": [ [ "title", "A Survey on Bias and Fairness in Machine Learning" ], [ "section", "Appendix" ], [ "subsection", "Datasets for Fairness Research" ], [ "subsubsection", "Diversity in Faces Dataset" ] ], "subsections": [], "title": "Diversity in Faces Dataset" } ]
49
[ 3890, 3891, 3892, 3894, 8699, 7758, 3893, 6984, 3895, 3896, 3897, 895, 8700, 3898, 8701, 3906, 3902, 3903, 3905, 8704, 3901, 3899, 3907, 3904, 8703, 3900, 8702, 3908, 3909, 3912, 3913, 3112, 3911, 3910, 3918, 3916, 3923, 3914, 3921, 3935, 3915, 3924, 3931, 3926, 3938, 3934, 3922, 3936, 1441, 3928, 3933, 7759, 3930, 3917, 3920, 3925, 3919, 8706, 3110, 3927, 8705, 3932, 3929, 3937, 7760, 6985, 7761, 3940, 3939, 3941, 3943, 3942, 3944, 3945 ]
1.575295
[ "Nan Gao", "Hao Xue", "Wei Shao", "Sichen Zhao", "Kyle Kai Qin", "Arian Prabowo", "Mohammad Saiedur Rahaman", "Flora D. Salim" ]
Generative Adversarial Networks for Spatio-Temporal Data: A Survey
2020
2020-08-18T11:05:40Z
cs.LG
Generative Adversarial Networks (GANs) have shown remarkable success in producing realistic-looking images in the computer vision area. Recently, GAN-based techniques are shown to be promising for spatio-temporal-based applications such as trajectory prediction, events generation and time-series data imputation. While several reviews for GANs in computer vision have been presented, no one has considered addressing the practical applications and challenges relevant to spatio-temporal data. In this paper, we have conducted a comprehensive review of the recent developments of GANs for spatio-temporal data. We summarise the application of popular GAN architectures for spatio-temporal data and the common practices for evaluating the performance of spatio-temporal applications with GANs. Finally, we point out future research directions to benefit researchers in this area.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "12251930-629c-43b6-96fe-1bbcd729d008", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ] ], "subsections": [ "faad7f9d-9374-486c-98ae-37a7d6392f0a", "c79c56aa-3058-4e02-9c72-ced609965005", "32a02756-1c44-4fb8-9820-9024dd8f9120", "18110330-2f2a-4e5b-86b6-d01be014ca94", "138cb8c3-4c9e-42a7-b149-ddee925de518", "54ce09c2-479b-47f3-88ae-3625c67592f3" ], "title": "root" }, { "cite_extract_rate": 0.47222222222222204, "cites": [ 986, 981, 7009, 987, 76, 983, 978, 8403, 985, 87, 988, 980, 982, 984, 896, 989, 979 ], "content": "Spatio-temporal (ST) properties are commonly observed in various fields, such as transportation , social science and criminology , which have been rapidly transformed by the proliferation of sensors and big data. However, the vast amount of ST data requires appropriate processing techniques to build effective applications. Generally, traditional data mining methods dealing with transaction data or graph data could perform poorly when applied to ST datasets. The reasons are mainly two-fold : (1) ST data are often in continuous space while traditional data (e.g., transaction data, graph data) are usually discrete; (2) ST data usually have spatial and temporal attributes whose correlations are more complex and harder to capture with traditional techniques. Moreover, ST data tend to be highly self-correlated, and data samples are usually not generated independently as in traditional data. \nWith the prevalence of deep learning, many neural networks (e.g., \\textit{Convolutional Neural Network} (CNN) , \\textit{Recurrent Neural Network} (RNN) , \\textit{Autoencoder} (AE) , \\textit{Graph Convolutional Network} (GCN) ) have been proposed and have achieved remarkable success in modelling ST data, due to their demonstrated hierarchical feature engineering ability. 
However, the traditional deep learning based ST modelling methods have some limitations. For instance, existing methods use deterministic models (e.g., RNN) and cannot capture the stochastic behaviour of ST data. Additionally, traditional deep learning approaches lack effective mechanisms to support reasoning over abstract data, which makes it hard to identify the factors leading to model improvements . To address the above challenges, we have explored one of the most interesting breakthroughs in the deep learning field: \\textit{Generative Adversarial Networks} (GANs) , which can learn rich distributions over ST data implicitly and work with multi-modal outputs . \nGAN is a generative model which learns to produce realistic data adversarially. It consists of two components : the generator $G$ and the discriminator $D$. $G$ captures the data distribution and produces realistic data from the latent variable $z$, and $D$ estimates the probability of the data coming from the real data space. GAN adopts the concept of the zero-sum non-cooperative game, where $G$ and $D$ are trained to play against each other until reaching a Nash equilibrium. Recently, GANs have gained considerable attention in various fields , involving images (e.g., image translation , super-resolution , joint image generation , object detection , changing facial attributes ), videos (e.g., video generation , text to video ), and natural language processing (e.g., text generation , text to image ). \nHowever, image or video generation approaches are not applicable for modelling traditional ST data (e.g., time series, trajectories, ST events, ST graphs) in real-world applications such as traffic flow, regional rainfall, and pedestrian trajectories. On the one hand, image generation usually takes the appearance between the input and output images into account, and fails to adequately handle spatial variations. 
On the other hand, video generation considers spatial dynamics between images; however, temporal changes are not adequately considered when the prediction of the next image is highly dependent on the previous image . Though video can be regarded as a special type of ST data due to its dynamic locations in the spatial and temporal dimensions, the discussion of using GANs for video generation usually falls into the field of computer vision, where several papers have thoroughly reviewed the recent progress of video generation with GANs . Hence, new approaches need to be explored to successfully model ST data with GAN techniques.\nRecently, GANs have been applied to ST data modelling, where the applications usually include the generation of de-identified ST events , time series imputation , trajectory prediction , graph representation , etc. Despite the success of GANs in the computer vision area (e.g., image and video generation), applying GANs to ST data prediction is challenging . For instance, leveraging additional information such as \\textit{Places of Interest (PoI)} and weather information is still untouched in previous research. Besides, unlike images, where researchers can rely on visual inspection of the generated instances, the evaluation of GANs on ST data remains an unsolved problem. It is neither practical nor appropriate to adopt the traditional evaluation metrics for GANs on ST data .\nA few studies have reviewed recent literature on ST data modelling problems or GAN based applications in different fields. For ST data modelling, Atluri et al. reviewed the popular problems and methods for modelling ST data. A taxonomy of the different types of ST data instances was provided to identify the relevant problems for ST data in real-world applications. Then, Wang et al. reviewed the recent progress in applying deep learning to ST data mining tasks and proposed a pipeline for the utilisation of deep learning models for ST data modelling problems. 
For GAN based applications, Hong et al. explained GANs from various perspectives and enumerated popular GAN variants applied to multiple tasks. Recent progress of GANs was discussed in and Wang et al. proposed a taxonomy of GANs for the computer vision area. Particularly, Yi et al. reviewed the recent advances of GANs in medical imaging. \nNevertheless, all the above works reviewed either ST data modelling problems or the recent progress of GANs in the computer vision area . Though many researchers have modelled ST data with GANs, there is no related survey in this area to address the potential of using GANs for ST data applications. The lack of a comprehensive review makes it more difficult for researchers to identify the problems and choose an appropriate method (e.g., architecture, loss function, evaluation metric) when applying GAN techniques to ST applications. For the first time, this paper presents a comprehensive overview of GANs for ST data, describes promising applications of GANs, and identifies some remaining challenges that need to be solved to enable successful applications in different ST related tasks.\nTo present a comprehensive overview of all the relevant research on GANs for ST data, we use \\textit{Google Scholar} \\footnote{https://scholar.google.com/} to conduct an automated keyword-based search . According to , Google Scholar provides broad coverage and accessibility, alongside digital libraries such as \\textit{IEEE Explore} \\footnote{https://ieeexplore.ieee.org/}, \\textit{Science Direct} \\footnote{https://www.sciencedirect.com/}, and the \\textit{ACM Digital Library} \\footnote{https://dl.acm.org/}. \nThe search period is limited from 2014 to 2021 (inclusive) as GANs first appeared in 2014 . However, papers that introduce novel concepts or approaches for ST data mining can predate 2014. 
To ensure that our survey covers all relevant primary literature, we have included such seminal papers regardless of their publication date.\nThe remainder of the paper is organised as follows. In Section \\ref{sec:pre}, we discuss the properties, characteristics and common research problems of ST data. We also present the popular deep learning methods with non-GAN frameworks for ST data, including the \\textit{Convolutional Neural Networks}, \\textit{Recurrent Neural Networks}, \\textit{Long Short-term Memory} and \\textit{Gated Recurrent Units}. Section \\ref{sec:gan} reviews the definition of GAN and its popular variants with different architecture and loss functions. Section \\ref{sec:st} lists the recent research progress for GANs in different categories of ST applications. Section \\ref{sec:dis} summarises the challenges of processing ST data with GANs, including the adapted architectures, loss functions and evaluation metrics. Finally, we conclude the paper and discuss future research directions.", "id": "faad7f9d-9374-486c-98ae-37a7d6392f0a", "level": "section", "origin_cites_number": 36, "parent_id": "12251930-629c-43b6-96fe-1bbcd729d008", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:pre}", "id": "c79c56aa-3058-4e02-9c72-ced609965005", "level": "section", "origin_cites_number": 0, "parent_id": "12251930-629c-43b6-96fe-1bbcd729d008", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Preliminary" ] ], "subsections": [ "546dd4c0-3daf-449c-9b7e-6153f28e086a", "df6448fb-1ac2-4885-9808-b1f891145edd" ], "title": "Preliminary" }, { "cite_extract_rate": 0, "cites": [], "content": "The existence of time and space introduces a wide variety of ST data types, leading to different ways of formulating ST data 
mining problems and techniques. In this part, we will first introduce the general properties of ST data, then describe the common types of ST data in different applications using generative adversarial network techniques.", "id": "546dd4c0-3daf-449c-9b7e-6153f28e086a", "level": "subsection", "origin_cites_number": 0, "parent_id": "c79c56aa-3058-4e02-9c72-ced609965005", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Preliminary" ], [ "subsection", "Spatio-temporal Data" ] ], "subsections": [ "d447d205-f296-42e7-bb99-90cbb93bf139", "f243b06f-a5ed-4bd1-bd12-71ab06676ad0" ], "title": "Spatio-temporal Data" }, { "cite_extract_rate": 0.5, "cites": [ 983, 991, 990 ], "content": "There are several general properties of ST data (i.e., spatial reference, temporal reference, auto-correlation, and heterogeneity ), described below.\n\\textbf{Spatial Reference}. The spatial reference describes whether the objects are associated with a fixed location or with dynamic locations . Traditionally, when the data is collected from stationary sensors (e.g., weather stations), we consider the spatial dimension of the data to be fixed. Recently, with the rise of mobile computing and location-based services, the dynamic locations of moving objects have been recorded, where the collected data comes from sensors attached to different objects, e.g., GPS trajectories from road vehicles . \n\\textbf{Temporal Reference}. The temporal reference describes to what extent the objects evolve . The simplest context includes objects that do not evolve, where only static snapshots of objects are available. In a slightly more complicated situation, objects can change status but only the most recent snapshot remains, where the full history of status is unknown. The extreme context consists of moving objects where the full history of movement is kept, therefore generating time series in which all the states have been traversed. 
\n\\textbf{Auto-correlation}. The observations of ST data are not independent and usually have spatial and temporal correlations between near measurements. For example, in the transportation area, sensors in each parking lot, each with a unique spatial location, can record the temporal information when a vehicle arrives or leaves . This auto-correlation of ST data results in the smoothness of temporal measurements (e.g., temperature changes over time) and consistency between spatial measurements (e.g., temperature values are similar in adjacent locations). Thereby, the traditional GAN techniques for the computer vision field (e.g., image generation ) that do not consider the temporal correlation may not be well suited for ST data. \n\\textbf{Heterogeneity}. ST datasets can show heterogeneity in spatial or temporal information on different levels. For instance, traffic flow in a city can show similar patterns between different weeks. During a week, the traffic data on Monday may be different from the data on Friday. There can also be inter-week changes due to public events or extreme weather, affecting the traffic patterns in a city. To deal with the heterogeneity of spatial and temporal information, it is necessary to learn different models for different spatio-temporal regions .", "id": "d447d205-f296-42e7-bb99-90cbb93bf139", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "546dd4c0-3daf-449c-9b7e-6153f28e086a", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Preliminary" ], [ "subsection", "Spatio-temporal Data" ], [ "subsubsection", "Properties" ] ], "subsections": [], "title": "Properties" }, { "cite_extract_rate": 0.34, "cites": [ 981, 1000, 983, 996, 989, 999, 993, 998, 994, 995, 8404, 25, 997, 979, 7320, 992 ], "content": "There are various spatio-temporal data types in real-world applications, differing in the representation of space and time context . 
Hence, it is crucial to establish the available ST data types in applications to effectively use GAN methods. Here, we describe the four common types of ST data which have been studied with GANs recently: (1) time series ; (2) ST events ; (3) ST graphs ; (4) trajectory data . Among the above four types of ST data, ST events and trajectories capture the observations of discrete objects and events. Meanwhile, time series and ST graphs record the information of continuous or discrete ST fields. Though there are other types of ST data available in real-world scenarios, in some cases they can be converted into one another, or they can be processed with GAN approaches similar to those for the above four types (e.g., sequential data vs time series). Next, we briefly discuss the properties of those data types and the potential difficulties when modelling them with GANs. \n\\textbf{Time Series}.\nA time series can be represented as a sequence of data points $X=\\{X_1,X_2,...,X_n\\}$ listed in time order (i.e., a sequence of discrete-time data ). Examples of time series include the values of indoor temperature during a day , the changes of accelerometer readings in IoT devices , fluctuations of the stock price in a month , etc. Time series analysis consists of techniques to analyse time series for extracting useful statistical information and data characteristics. 
The common questions used for dealing with time series include but are not limited to:\n\\textit{Can we predict future values for time series based on the historical values ?}\n\\textit{Can we cluster groups of time series with similar temporal and spatial patterns ?}\n\\textit{Can we impute the missing values automatically in multi-variate time series ?}\n\\textit{Can we split time series into different segments with their characteristic properties ?} \n\\textbf{Spatio-temporal Events}.\n\\begin{figure}\n \\centering\n \\subfigure[Spatio-temporal events]{ \\includegraphics[width=0.35\\textwidth]{Images/STevents.png}\\label{fig:STevents}}\\hspace{0.6cm}\n \\subfigure[Two trajectories]{\\includegraphics[width=0.43\\textwidth]{Images/Trajectory.pdf}\n \\label{fig:Traj}}\n \\caption{Examples of spatio-temporal events and trajectories}\n\\end{figure}\nA spatio-temporal event represents a tuple containing temporal and spatial information as well as an additional observed value . Generally, it is denoted as $x_i = \\{m_i,t_i,l_i\\}$, where $t_i$ and $l_i$ indicate the time and location of the event, and $m_i$ is the value describing the event. Typically, the locations are recorded in three dimensions (i.e., latitude, longitude, and altitude or depth), although sometimes only 1 or 2 spatial coordinates are available. Spatio-temporal events (see Figure ~\\ref{fig:STevents}) are frequently used in real-world applications such as taxi demand , traffic flow , urban crimes , forest fires , etc. In some cases, spatio-temporal events may even have a duration, as in parking or heliophysics . Usually, an ordered set of spatio-temporal events can also be considered as a trajectory recording the spatial locations visited by a moving object. 
Some common questions used for analysing spatio-temporal events include: \\textit{Can we predict the future spatio-temporal events based on the previous observations ?} \\textit{How are spatio-temporal events clustered based on time and space ?} \\textit{Can we identify the anomalous spatio-temporal events that do not follow the common patterns of other events ?}\n\\textbf{Trajectory data}.\nA trajectory represents the recordings of the locations of a moving object at certain times, and it is usually defined as a function mapped from the temporal domain to the spatial domain . Trajectories of moving points can be denoted as a sequence of tuples\n$P =\\{(x_1,y_1,t_1),(x_2,y_2,t_2),...,(x_n,y_n,t_n)\\}$, where $(x_i,y_i,t_i)$ indicates the location $(x_i,y_i)$ at time $t_i$. Considerable research has been conducted in the field of trajectory data mining, and there are four major categories : mobility of people , mobility of transportation , mobility of natural phenomena and mobility of animals . Figure ~\\ref{fig:Traj} shows an example of two trajectories of object $A$ and object $B$. The common questions for processing trajectory data include: \n\\textit{Can we predict the future trajectory based on the historical trajectory traces ?} \n\\textit{Can we divide a collection of trajectories into small representative groups ?}\n\\textit{Can we detect the abnormal behaviours from trajectories ? } \n\\textbf{Spatio-temporal Graph}.\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{Images/Graphall.pdf}\n \\caption{Example of Spatio-temporal Graph Data}\n \\label{fig:graph}\n\\end{figure}\nA spatio-temporal graph structure provides a representation of the relations between different nodes at different times. A sequence of spatio-temporal graphs can be represented as $\\mathcal{G}=(\\mathcal{G}_1, \\mathcal{G}_2,...,\\mathcal{G}_n)$ where $\\mathcal{G}_i = \\{V_i, E_i, W_i\\}$ indicates the graph snapshot at time $T_i$ ($i\\in \\{1,2,...,n\\}$). 
Spatio-temporal graphs have been applied in various domains such as commerce (e.g., trades between countries ), transportation (e.g., route planning algorithms , traffic forecasting ) and social science (e.g., studying geo-spatial relations of different social phenomena ). Figure~\\ref{fig:graph} is an example of spatio-temporal graphs in $T_1, T_2, T_3$. Some common questions for processing spatio-temporal graphs include:\n\\textit{Can we forecast the status of a graph based on the historical graph representations?}\n\\textit{Can we predict the links based on the previous graph networks?}", "id": "f243b06f-a5ed-4bd1-bd12-71ab06676ad0", "level": "subsubsection", "origin_cites_number": 50, "parent_id": "546dd4c0-3daf-449c-9b7e-6153f28e086a", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Preliminary" ], [ "subsection", "Spatio-temporal Data" ], [ "subsubsection", "Data Types" ] ], "subsections": [], "title": "Data Types" }, { "cite_extract_rate": 0, "cites": [], "content": "Here we introduce the traditional deep learning approaches for ST data with non-GAN networks (i.e., \\textit{Convolutional Neural Network}, \\textit{Recurrent Neural Network}, \\textit{Autoencoder}, \\textit{Graph Convolutional Network}), which are usually integrated into GAN architectures in ST data modelling.", "id": "df6448fb-1ac2-4885-9808-b1f891145edd", "level": "subsection", "origin_cites_number": 0, "parent_id": "c79c56aa-3058-4e02-9c72-ced609965005", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Preliminary" ], [ "subsection", "Spatio-Temporal Deep Learning with Non-GAN Networks" ] ], "subsections": [ "761a1176-7f65-46e8-ac85-1c1afba325e2", "c4195f9a-68a2-4774-9c80-3084131b53b3", "11aa9f14-0e0b-4a89-b29d-048e4b033d82", "cb3e1cc9-6af4-469c-8de4-19707a4fc0df" ], "title": "Spatio-Temporal Deep Learning with Non-GAN Networks" }, { "cite_extract_rate": 
0.5, "cites": [ 990 ], "content": "\\iffalse\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.85\\textwidth]{Images/CNN.pdf}\n \\caption{Basic structure of a typical CNN model}\n \\label{fig:cnn}\n\\end{figure}\n\\fi\n\\textit{Convolutional Neural Network} (CNN) is a type of deep, feed-forward neural network commonly used to analyse visual imagery. A typical CNN model is composed of an input layer, an output layer and some hidden layers. \nCompared to the traditional multilayer perceptron (MLP), CNNs can develop internal representations of two-dimensional images, allowing CNNs to be used more generally on other types of data with spatial correlations. Though CNNs were not specifically developed for non-image data, they have been widely used in ST data mining problems for trajectory and ST raster data .", "id": "761a1176-7f65-46e8-ac85-1c1afba325e2", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "df6448fb-1ac2-4885-9808-b1f891145edd", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Preliminary" ], [ "subsection", "Spatio-Temporal Deep Learning with Non-GAN Networks" ], [ "subsubsection", "CNN" ] ], "subsections": [], "title": "CNN" }, { "cite_extract_rate": 0.25, "cites": [ 7321 ], "content": "\\textit{Recurrent Neural Network (RNN)} is a type of neural network where the previous outputs are fed as the input to the current step.\nThe advantage of RNN is the hidden state (internal memory) that captures information calculated so far in a sequence. Figure~\\ref{fig: rnn} shows the basic architecture of an RNN, where $X$ is the input data, $y$ is the output data, $h$ is the hidden state and $U, V, W$ indicate the parameters of the RNN. The current state $h_t$ is calculated from the current input $X_t$ and previous state $h_{t-1}$. 
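Concretely, a common vanilla-RNN formulation of this recurrence (a sketch; bias terms are omitted and the nonlinearity may vary) is:\n\\begin{equation*}\nh_{t} =\\tanh( UX_{t} +Wh_{t-1}) ,\\qquad y_{t} =Vh_{t}\n\\end{equation*}\nso that $U$, $W$ and $V$ weight the input, the previous hidden state and the output, respectively. 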
\n\\iffalse\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.4\\textwidth]{Images/LSTM.pdf}\n \\caption{Structure of a typical LSTM unit}\n \\label{fig:lstm}\n\\end{figure}\n\\fi\nThough RNNs work effectively in many application domains, they may suffer from a problem called vanishing gradients . To cope with this problem, two variants of RNN have been developed: Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU) networks . LSTM is capable of learning long-term dependencies with a special memory unit. An LSTM cell has three gates (forget gate, input gate, and output gate) to regulate the information flow. \nCompared with standard LSTM models, GRU has fewer parameters: it combines the input gate and the forget gate into an 'update gate' and merges the cell state and hidden state. RNN, LSTM and GRU are widely used to learn the temporal correlations of time series and ST data.\n\\iffalse\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.50\\textwidth]{Images/RNN.pdf}\n \\caption{Basic structure of a typical RNN model}\n \\label{fig:rnn}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.45\\textwidth]{Images/Autoencoder.pdf}\n \\caption{Structure of a typical Autoencoder}\n \\label{fig:autoencoder}\n\\end{figure}\n\\fi\n\\begin{figure}\n\t\\centering\n\t\\subfigure[RNN]{\n\t\\label{fig: rnn}\n\t{\\includegraphics[height=1.1in]{Images/RNN.pdf}}\n\t}\n\t\\hspace{0.15in}\n\t\\subfigure[Autoencoder]{\n\t\\label{fig:autoencoder}\n\t\\includegraphics[height=1.1in]{Images/Autoencoder.pdf}\n }\n \\caption{Structure of RNN and Autoencoder }\n\\end{figure}", "id": "c4195f9a-68a2-4774-9c80-3084131b53b3", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "df6448fb-1ac2-4885-9808-b1f891145edd", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Preliminary" ], [ "subsection", "Spatio-Temporal Deep Learning with Non-GAN Networks" ], [ 
"subsubsection", "RNN, LSTM and GRU" ] ], "subsections": [], "title": "RNN, LSTM and GRU" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 166 ], "content": "AE is a neural network that is trained to copy its input to its output by learning data codings in an unsupervised manner . The network is composed of two parts: encoder and decoder, as shown in Figure~\\ref{fig:autoencoder}. The encoder function compresses the input into a latent-space representation and the decoder reconstructs the input through the representation. \nAs a commonly used unsupervised representation learning method, AE is popular for classification and prediction tasks in trajectories , time series and other ST data .", "id": "11aa9f14-0e0b-4a89-b29d-048e4b033d82", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "df6448fb-1ac2-4885-9808-b1f891145edd", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Preliminary" ], [ "subsection", "Spatio-Temporal Deep Learning with Non-GAN Networks" ], [ "subsubsection", "Autoencoder (AE)" ] ], "subsections": [], "title": "Autoencoder (AE)" }, { "cite_extract_rate": 0.5, "cites": [ 25, 32 ], "content": "With the ability to extract representations from both local graph structure and node features, GCN has become popular in solving learning tasks on spatio-temporal graph dataset. For instance, Yu et al. introduced Spatio-Temporal Graph Convolutional Networks (STGCN) to solve the prediction problem in traffic networks. Other deep learning models have their issues dealing with ST forecasting tasks, such as RNN-based networks often have heavy computation in training and normal convolutional operations are limited on grid structures. STGCN differently converts traffic data into the graph-structured format and use spatio-temporal convolutional blocks to capture spatial and temporal dependencies. 
Furthermore, the cost of computation could be reduced by Chebyshev Polynomials Approximation or First Order Approximation. Recently, attention mechanisms have been employed with GCN models to learn the impact of the spatio-temporal factors in training, such as Graph Multi-Attention Network (GMAN) and Attention-based Spatial-Temporal Graph Convolutional Network (ASTGCN) .", "id": "cb3e1cc9-6af4-469c-8de4-19707a4fc0df", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "df6448fb-1ac2-4885-9808-b1f891145edd", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Preliminary" ], [ "subsection", "Spatio-Temporal Deep Learning with Non-GAN Networks" ], [ "subsubsection", "Graph Convolutional Network (GCN)" ] ], "subsections": [], "title": "Graph Convolutional Network (GCN)" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:gan}", "id": "32a02756-1c44-4fb8-9820-9024dd8f9120", "level": "section", "origin_cites_number": 0, "parent_id": "12251930-629c-43b6-96fe-1bbcd729d008", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Generative Adversarial Networks" ] ], "subsections": [ "67ffe654-5809-431a-bde2-0c52accd6245", "83853cc2-25c4-47f4-abcc-88a75e025b98", "fc5128eb-1a1b-492c-86a3-a5892d36c294" ], "title": "Generative Adversarial Networks" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 148, 1001, 529, 76, 1002 ], "content": "\\begin{figure}\n \\centering\n \\includegraphics[width=0.95\\textwidth]{Images/GANvariants.pdf}\n \\caption{A view of variants of GAN. $G$ represents the generator network, $D$ is the discriminator network, $z$ represents the noise, $c$ means the class labels, $x_f$ are the generated fake images and $x_r$ are the real images}\n \\label{fig:variants}\n\\end{figure}\nThe original concept of GANs is to create two neural networks and let them compete against each other. 
As shown in Figure~\\ref{fig:variants}, the basic architecture of GANs comprises two components: a generator and a discriminator. On the one hand, the generator's task is to synthesize fake images which can fool the discriminator. On the other hand, the discriminator, true to its name, learns to distinguish whether its input is a fake image or not . \nLeaving the image generation task aside, the underlying idea of generative adversarial nets is more general: to create a fake distribution $p_g$ and make it as close as possible to a data distribution $p_r$. The motivation for such an approach is that $p_r$ can be hard to obtain directly; by proceeding in this manner, we get a good approximation of it and can then sample from this approximate distribution instead . The advantages of this approach are that, since the generator learns to approximate the real distribution directly, there is no need to introduce Markov chains, and no inference is required due to the isolation between the generator and the real data distribution. Besides, its simple structure makes it easy to incorporate with other techniques .\nThe generator $G(z;\\theta_g)$, a neural network parameterized by $\\theta_g$, takes a sample $z \\sim p_z$ as input and maps it to a sample $x\\sim p_g$. Its rival, the discriminator $D(x;\\theta_d)$, outputs a single binary value to indicate its prediction of the input's origin. During training, both parts are trained simultaneously, each based on its opponent's result, which forms a minimax game with the overall objective function :\n\\begin{equation*}\n\\underset{G}{\\min}\\underset{D}{\\max} \\ V( D,G) =\\mathbb{E}_{x\\sim p_{data}( x)}[\\log D( x)] +\\mathbb{E}_{z\\sim p_{z}( z)}[\\log( 1-D( G( z)))]\n\\end{equation*}\nDespite all the advantages above, the original generative adversarial network is still inadequate in some respects. 
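For completeness, the original GAN analysis shows that, for a fixed generator, the optimal discriminator has a closed form, at which the objective reduces to the Jensen-Shannon divergence up to constants:\n\\begin{equation*}\nD^{*}( x) =\\frac{p_{r}( x)}{p_{r}( x) +p_{g}( x)} ,\\qquad V( D^{*} ,G) =-\\log 4+2D_{JS}( p_{r} \\| p_{g})\n\\end{equation*}\n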
In practice, training is particularly delicate, and the generator may suffer from vanishing gradients during optimization . To address all those problems that might occur, many variants of the vanilla GAN have been proposed .", "id": "67ffe654-5809-431a-bde2-0c52accd6245", "level": "subsection", "origin_cites_number": 6, "parent_id": "32a02756-1c44-4fb8-9820-9024dd8f9120", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Generative Adversarial Networks" ], [ "subsection", "Basic Idea of GANs" ] ], "subsections": [], "title": "Basic Idea of GANs" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 148, 54, 63 ], "content": "In traditional generative modelling approaches, the performance of a model is indicated by the reverse Kullback-Leibler (KL) divergence between the desired distribution $p_r$ and the generator's distribution $p_g$ . \n\\begin{equation*}\nD_{KL}( P_{g} \\| P_{r}) =\\int _{\\chi } P_{g}( x) \\ \\log\\frac{P_{g}( x)}{P_{r}( x)}\\mathrm{d} x\n\\end{equation*}\nMinimising this term means making those two distributions closer, and it would reach zero once $p_r=p_g$. However, the generator might still generate fake-looking data due to the asymmetric nature of this function. The reverse KL heavily penalises the generator for generating samples that lie outside the real distribution, while paying little attention to the modes of the real distribution that the generator fails to cover. To avoid this weakness, another option, discussed in the original GANs paper, is the Jensen-Shannon (JS) divergence . \n\\begin{equation*}\nD_{JS}( P_{r} \\| P_{g}) =\\frac{1}{2} D_{KL}\\left( P_{r} \\| \\tfrac{1}{2}( P_{r} +P_{g})\\right) +\\frac{1}{2} D_{KL}\\left( P_{g} \\| \\tfrac{1}{2}( P_{r} +P_{g})\\right)\n\\end{equation*}\nAlthough it shows some promising results, JS divergence is not the ultimate choice since it still suffers from issues like gradient vanishing. 
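One such alternative is the Earth-Mover (Wasserstein-1) distance adopted by WGAN :\n\\begin{equation*}\nW( P_{r} ,P_{g}) =\\inf _{\\gamma \\in \\Pi ( P_{r} ,P_{g})}\\mathbb{E}_{( x,y) \\sim \\gamma }[ \\| x-y\\| ]\n\\end{equation*}\nwhere $\\Pi ( P_{r} ,P_{g})$ denotes the set of all joint distributions whose marginals are $P_{r}$ and $P_{g}$; unlike the JS divergence, it provides usable gradients even when the two distributions have negligible overlap. 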
Recent studies show that these issues can be resolved by using other types of loss functions .", "id": "83853cc2-25c4-47f4-abcc-88a75e025b98", "level": "subsection", "origin_cites_number": 5, "parent_id": "32a02756-1c44-4fb8-9820-9024dd8f9120", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Generative Adversarial Networks" ], [ "subsection", "Loss Function" ] ], "subsections": [], "title": "Loss Function" }, { "cite_extract_rate": 0.8, "cites": [ 1001, 1003, 529, 1002 ], "content": "Although the vanilla GAN shows its potential for data generation , and the discriminator in this structure has proved effective on classification tasks , it still suffers from blurry outputs and possible mode dropping/collapse. Besides, there is no control over the generation process due to its unsupervised manner . To this end, some studies introduce other machine learning techniques into the original GAN structure, and some results are promising. The architecture of those variants is shown in Figure~\\ref{fig:variants}.\nMirza et al. proposed CGAN (Conditional GAN), which introduces a support info vector $y$. In the generator, each input $z$ gets its corresponding $y$, and it is also available to the discriminator, which helps it judge better. Since this vector is a controlled parameter rather than another random sample, we gain some control over the generated samples. Chen et al. , on the other hand, also focused on providing support info to the generator and proposed InfoGAN. A latent code $c$ is added to the input of the generator, and after the images go through the discriminator, another module $Q$ is introduced to approximate the distribution of $P(c|x)$ and calculate the variational mutual information $I(c; G(z, c))$, which indicates how much information remains after the generation process. The resulting generator can be controlled by maximising this regularisation term according to the latent code $c$. 
Odena et al. introduced a supervised task into the original GAN and proposed ACGAN (Auxiliary Classifier GAN). Every sample from the real data belongs to a predefined class, and an expected label $c\\sim p_c$ along with noise $z \\sim p_z$ is used as input to generate a data sample of that class. Besides the real/fake discrimination task, an auxiliary classifier is created to classify every sample, enabling the generator to synthesize samples for a particular class.", "id": "fc5128eb-1a1b-492c-86a3-a5892d36c294", "level": "subsection", "origin_cites_number": 5, "parent_id": "32a02756-1c44-4fb8-9820-9024dd8f9120", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Generative Adversarial Networks" ], [ "subsection", "Architecture of GAN Variants" ] ], "subsections": [], "title": "Architecture of GAN Variants" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:st}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\textwidth]{Images/taxonomy.pdf}\n \\caption{Taxonomy of GANs on ST data mining and modelling tasks}\n \\label{fig:tax}\n\\end{figure}\nIn Figure~\\ref{fig:tax}, we categorise the existing ST data mining and modelling tasks based on four common types of ST data that have been intensively studied with GANs: ST events, time series, ST graphs and trajectories. The ST tasks are summarised based on the previous research on each ST data type (see Table~\\ref{tab:category} for details). For instance, we investigate the prediction problem for ST events and trajectories. For time-series data, we focus on the time series imputation and generation problems, and for ST graphs, the temporal link prediction and graph representation applications of GANs are explored. 
We also summarise the widely used datasets for each ST data type in Table~\\ref{tab:dataset}.\n\\iffalse\n\\begin{figure}\n \\centering\n \\includegraphics{}\n \\caption{Taxonomy of GANs on ST data}\n \\label{fig:my_label}\n\\end{figure}\n\\fi", "id": "18110330-2f2a-4e5b-86b6-d01be014ca94", "level": "section", "origin_cites_number": 0, "parent_id": "12251930-629c-43b6-96fe-1bbcd729d008", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "GANs for Spatio-temporal Data Modelling" ] ], "subsections": [ "50381dcb-5673-417e-bf1c-f15e9c7de3ee", "788bda59-feb9-4d0f-a421-8989914e21a3", "a0a11f43-a150-4fcc-8f7b-e6c4e6b38925", "537a7111-41c9-435b-aef5-8ccbd033d1ca" ], "title": "GANs for Spatio-temporal Data Modelling" }, { "cite_extract_rate": 0.414634146341463, "cites": [ 981, 7009, 40, 1006, 1007, 1000, 1004, 996, 1008, 988, 8404, 989, 7320, 979, 992, 1009, 1005 ], "content": "In this subsection, we mainly introduce how GANs are applied to predict future ST events (e.g., taxi demand , crime , fluid flows , anomaly detection ) based on previous events.\n\\newgeometry{hmargin=0.2cm,vmargin=1cm,landscape}\n\\begin{table}\n \\centering\n \\footnotesize\n \\setlength\\tabcolsep{3pt}\n\\caption{Review of Spatio-temporal Data and Modelling Tasks using GAN}\n\\label{tab:category}\n \\centering\n\\begin{tabular}{@{}lllllll@{}}\n\\toprule\nST data type & Reference & Year & Task & Values in task & Model category & Evaluation methods \\\\ \\midrule\n\\multirow{15}{*}{\\textit{Time series}} & C-RNN-GAN & 2016 & Generation & Musical data & GAN and LSTM & \\begin{tabular}[c]{@{}l@{}}Domain metrics (e.g., polyphony, scale consistency, repetitions, tone span)\\end{tabular}\\\\\n & RCGAN & 2017 & Generation & Medical data & GAN and RNN & TSTR and TRTS \\\\\n & SSL-GAN & 2017 & Generation & Electronic health records & GAN, CNN and AE & Prediction accuracy over multiple data combinations \\\\\n & OccuGAN & 2018 & 
Generation & Occupancy data & GAN and DNN & \\begin{tabular}[c]{@{}l@{}}Domain metrics (e.g., time of first arrival, number of occupied transitions)\\end{tabular} \\\\\n & Grid-GAN & 2018 & Generation & Smart grid data & CGAN and CNN & TSTR and TRTS \\\\\n & EEG-GAN & 2018 & Generation & EEG brain signals & WGAN and CNN & IS, FID and ED \\\\\n & StockGAN & 2018 & Generation & Stock data & GAN, CNN and LSTM & Prediction accuracy (e.g., RMSRE, DPA) \\\\\n & GRU-GAN & 2018 & Imputation & Medical records, meteorologic data & GAN and GRU & Imputation accuracy \\\\\n & ForGAN & 2019 & Generation & Synthetic series and internet traffic & CGAN and LSTM & KL divergence \\\\\n & NAOMI & 2019 & Imputation & Traffic flow, movement data & GAN and RNN & Imputation accuracy \\\\\n & TimeGAN & 2019 & Generation & Sines, stocks, energy and events data & GAN and AE & Diversity, fidelity and usefulness (e.g., TSTR) \\\\\n & E2GAN & 2019 & Imputation & Medical records, meteorologic data & GAN and GRU & Imputation accuracy \\\\\n & SimGAN & 2020 & Generation & Heart rate ECG signals & GAN & Prediction accuracy over multiple GAN methods \\\\\n & Ad-Attack & 2020 & Generation & Stock prices and electricity data & GAN & Domain metrics (e.g., attack success rate, return of perturbed portfolio) \\\\\n & AOS4Rec & 2020 & Generation & Sequences of recommendation & GAN and GRU & Precision, nDCG and BLEU \\\\\n \\midrule\n\\multirow{9}{*}{\\textit{Trajectory} } & GD-GAN & 2018 & Prediction & Pedestrian trajectories & GAN and LSTM & \\begin{tabular}[c]{@{}l@{}}Average displacement error (ADE) and final displacement error (FDE)\\end{tabular} \\\\\n & SocialGAN & 2018 & Prediction & Socially acceptable trajectories & GAN and LSTM & \\begin{tabular}[c]{@{}l@{}}Quantitative (e.g., ADE, FDE) and qualitative (e.g., group avoiding) metrics \\end{tabular} \\\\\n & SoPhie & 2019 & Prediction & Pedestrian trajectories & GAN and LSTM & ADE and FDE \\\\\n & Social Ways & 2019 & Prediction & Pedestrian 
trajectories & GAN and LSTM & ADE and FDE \\\\\n & Social-BiGAT & 2019 & Prediction & Pedestrian trajectories & GAN and LSTM & ADE and FDE \\\\\n & APOIR & 2019 & Prediction & Point-of-Interests & GAN and GRU & Precision, Recall, nDCG and MAP \\\\\n & CoL-GAN & 2020 & Prediction & Pedestrian trajectories & GAN, CNN and LSTM & Average collision times (ACT), ADE and FDE \\\\ \n & AdattTUL & 2020 & Link prediction & Human mobility trajectories & GAN, GRU and LSTM & Prediction accuracy over multiple models \\\\\n & MT-ASTN & 2020 & Prediction & Crowd flow trajectories & GAN, AE & MAE and RMSE over multiple models \\\\ \n \\midrule\n\\multirow{5}{*}{\\textit{ST events} } & D-GAN & 2017 & Prediction & Taxi and bike data & GAN and VAE & Prediction accuracy over multiple models \\\\\n & Taxi-CGAN & 2020 & Prediction & Taxi hotspots data & CGAN and LSTM & \\begin{tabular}[c]{@{}l@{}}False identification test (FIT) and the section\n consistency test (SCT)\\end{tabular} \\\\\n & Crime-GAN & 2017 & Prediction & Crime data & DCGAN, CNN and RNN & \\begin{tabular}[c]{@{}l@{}}Prediction accuracy (e.g., JS divergence) over multiple models\\end{tabular} \\\\\n & MAD-GAN & 2019 & Prediction & Cyber-attacks data & GAN and LSTM & DR-score \\\\ \n \\midrule\n\\multirow{9}{*}{\\textit{Graphs} } & GraphGAN & 2018 & Representation & Social networks & GAN and DNN & Prediction accuracy over multiple models \\\\\n & NetGAN & 2018 & Representation & Citation and blogs networks & WGAN and LSTM & Prediction accuracy over multiple models \\\\\n & ANE & 2018 & Representation & Citation and blogs networks & GAN and DNN & Prediction accuracy over multiple models \\\\\n & NetRA & 2018 & Representation & Social and biological networks & GAN, LSTM and AE & Prediction accuracy over multiple models \\\\\n & GCN-GAN & 2019 & Link Prediction & Mobility networks & GAN, GCN and LSTM & MSE, edge-wise KL divergence, mismatch rate \\\\\n & GANE & 2019 & Representation & Coauthor networks & WGAN and DNN & 
Prediction accuracy over multiple models \\\\\n & NetworkGAN & 2019 & Link Prediction & Social networks & GAN, GCN and LSTM & RMSE, AUC, KL divergence \\\\\n & ProGAN & 2019 & Representation & Social and citation networks & GAN and DNN & Prediction accuracy over multiple models \\\\\n & MEGAN & 2019 & Representation & Social multi-view networks & GAN and MLP & Prediction accuracy over multiple models \\\\ \n & GRL & 2020 & Link Prediction & Relation triples and freebase entity pairs & GAN, LSTM and RL & MAE, MAP and AUC over multiple models \\\\ \\bottomrule \n\\end{tabular}\n\\end{table}\n\\restoregeometry\nFor the first time, Saxena et al. proposed a generative adversarial network \\textit{D-GAN} for accurate spatio-temporal events prediction. In the model, GAN and VAE are combined to jointly learn generation and variational inference of ST data in an unsupervised manner. They also designed a general fusion module to fuse multiple heterogeneous data sources. Figure~\\ref{fig:dgan} shows the architecture of D-GAN, consisting of four components: \\textit{Encoder}, \\textit{Generator/Decoder}, \\textit{Discriminator}, and \\textit{External feature fusion}. The $G$ network is trained using the adversarial process. The decoder (i.e., generator) learns to approximate the distribution of real data, while the $D$ network discriminates between samples generated by $G$ and samples from the real distribution. During the training process, D-GAN adopts a reconstruction loss and adversarial loss . In addition, \\textit{ConvLSTM} and \\textit{3D-ConvNet} structures were exploited to model long-term patterns and spatial dependencies in ST data. \nRecently, Yu et al. applied a conditional generative adversarial network with long short-term structure (LSTM-CGAN) for taxi hotspot prediction, which captures the spatial and temporal variations of hotspots simultaneously. Furthermore, Jin et al. 
developed a context-based generative model \\textit{Crime-GAN} to learn the spatio-temporal dynamics of the crime situation. They aggregated Seq2Seq, a VAE network and an adversarial loss in the framework to better learn ST data representations. Furthermore, a deep convolutional generative adversarial network (DCGAN) has been developed for spatio-temporal fluid flow prediction in a tsunami case in Japan . \nGANs have also been used for anomaly detection for ST events. Li et al. proposed MAD-GAN, an unsupervised anomaly detection method for multivariate time series based on GAN. They trained a GAN generator and discriminator with LSTM. Then, the GAN-trained generator and discriminator are employed to detect anomalies in the testing data with a combined Discrimination and Reconstruction Anomaly Score (DR-Score).\n\\begin{figure}\n \\centering \\includegraphics[width=0.95\\textwidth]{Images/dgan.pdf}\n \\caption{D-GAN architecture proposed by Saxena et al. }\n \\label{fig:dgan}\n\\end{figure}", "id": "50381dcb-5673-417e-bf1c-f15e9c7de3ee", "level": "subsection", "origin_cites_number": 41, "parent_id": "18110330-2f2a-4e5b-86b6-d01be014ca94", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "GANs for Spatio-temporal Data Modelling" ], [ "subsection", "GANs for Spatio-temporal Events" ] ], "subsections": [], "title": "GANs for Spatio-temporal Events" }, { "cite_extract_rate": 0.583333333333333, "cites": [ 988, 1007, 514, 9089, 1002, 1006, 979 ], "content": "Trajectory prediction refers to the problem of estimating the future trajectories of various agents based on previous observations . Gupta et al. proposed {SocialGAN} to jointly predict trajectories avoiding collisions for all people. They introduced a variety loss encouraging the generative network of the GAN to spread its distribution and cover the space of possible paths while being consistent with the observed inputs. 
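Concretely, the variety loss draws $k$ candidate predictions per scene and back-propagates only through the closest one:\n\\begin{equation*}\n\\mathcal{L}_{variety} =\\min_{k}\\left\\Vert Y_{i} -\\hat{Y}_{i}^{( k)}\\right\\Vert _{2}\n\\end{equation*}\nwhere $Y_{i}$ is the ground-truth trajectory of person $i$ and $\\hat{Y}_{i}^{( k)}$ is the $k$-th sampled prediction. 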
A new pooling mechanism was proposed to learn a 'global' pooling vector that encodes the subtle cues for all people involved in a scene. In {GD-GAN}~, Fernando et al. designed a GAN based pipeline to jointly learn features for both pedestrian trajectory prediction and social group detection. As the basic GAN structure used in SocialGAN is susceptible to mode collapsing and dropping issues, Amirian et al.~ extended SocialGAN by incorporating the Info-GAN~ structure in their \\textit{Social Ways} trajectory prediction network.\n\\textit{SoPhie}, proposed by Sadeghian et al.~, is another GAN based trajectory prediction approach that can take both the information from the scene context and the social interactions of the agents into consideration.\nTwo separate attention modules are also used to better capture the scene context and the social interactions.\nMore recently, based on the BicycleGAN~ framework, Social-BiGAT~ develops a bijection function between the output trajectories and the latent space input to the trajectory generator.\nIt also uses a Graph Attention Network in combination with a VGG network~ to encode social influence from other pedestrians and semantic scene influence of the environment. For trajectories with fewer potential collisions, CoL-GAN~, proposed by Liu et al., exploits a CNN-based network as the trajectory discriminator.\nDifferent from other GAN based trajectory prediction methods such as SocialGAN~ and SoPhie~, the proposed discriminator can classify whether each segment of a trajectory is real or fake. \nRecently, Gao et al. studied the trajectory user linking problem to identify user identities from mobility\npatterns. They combined an autoencoder with GANs for joint human mobility learning, which provides a regularized latent space for mobility classification. APOIR was developed to learn the distribution of underlying user preferences in Point-of-Interest (POI) recommendation. 
It consists of two components: the recommender and the discriminator. The recommender learns to approach users' true preferences, while the discriminator distinguishes the generated POIs from the truly visited ones.", "id": "788bda59-feb9-4d0f-a421-8989914e21a3", "level": "subsection", "origin_cites_number": 12, "parent_id": "18110330-2f2a-4e5b-86b6-d01be014ca94", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "GANs for Spatio-temporal Data Modelling" ], [ "subsection", "GANs for Trajectory Prediction" ] ], "subsections": [], "title": "GANs for Trajectory Prediction" }, { "cite_extract_rate": 1, "cites": [ 992, 76, 8403 ], "content": "Specifically, a time series is a special kind of sequential data where the order matters: it is a series obtained at consecutive, equally spaced points in time, and it is not the only kind of sequential data. However, processing sequential data (e.g., musical data) may share similar GAN approaches with time series data. Therefore, we will include several studies for modelling sequential data (e.g., musical data in ). Although natural language data can also be considered as sequential data, we will not include the NLP research with GANs (e.g., text generation , text to image ) since natural language is not one of the ST data types and usually falls into the field of NLP. 
In this subsection, we will demonstrate two ST tasks for time series data: generation and imputation.", "id": "a0a11f43-a150-4fcc-8f7b-e6c4e6b38925", "level": "subsection", "origin_cites_number": 3, "parent_id": "18110330-2f2a-4e5b-86b6-d01be014ca94", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "GANs for Spatio-temporal Data Modelling" ], [ "subsection", "GANs for Time Series Modelling" ] ], "subsections": [ "c0263314-29fd-4cd0-a9f4-9f0c2fdf4e40", "33fedc1b-60ae-4f63-b3dc-bee7d61b9211" ], "title": "GANs for Time Series Modelling" }, { "cite_extract_rate": 0.416666666666666, "cites": [ 992, 996, 1008, 989, 7320 ], "content": "Data generation refers to creating data from a sampled data source. One of the main purposes of time series generation with GAN is to protect the privacy of sensitive data such as medical data , electroencephalographic (EEG) data , heart signal electrocardiogram (ECG) data , occupancy data , electronic health records (EHR) , etc.\nRecently, GANs have been used to generate sequential data. Mogren et al. proposed C-RNN-GAN (continuous RNN-GAN) to generate continuous-valued sequential data. They built the GAN with an LSTM generator and discriminator. The discriminator uses a bidirectional layout, which allows it to take context in both directions into account for its decisions. They trained the model on sequences of classical music and evaluated it with metrics such as polyphony, scale consistency, repetitions and tone span. \nEsteban et al. then proposed a GAN in which recurrent neural networks replace both the generator and the discriminator. They presented the Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to generate sequences of real-valued medical data or data subject to some conditional inputs. 
For evaluation, they proposed to use the capability of the generated synthetic data to train supervised models, i.e., TSTR (train on synthetic, test on real). They argued that TSTR is more effective than TRTS (train on real, test on synthetic) because TRTS performance may not degrade when the GAN suffers mode collapse. \nGANs have been used for the generation of biological-physiological signals such as EEG and ECG. Hartmann et al. proposed EEG-GAN to generate electroencephalographic (EEG) brain signals. With the modification of the improved WGAN training, they trained a GAN to produce, in a stable fashion, artificial signals that strongly resemble single-channel real EEG signals in the time and frequency domain. For evaluation metrics, they showed that the combination of Fréchet inception distance (FID), sliced Wasserstein distance (SWD) and Euclidean distance (ED) can give a good idea about its overall properties. Golany et al. proposed simulator-based GANs for ECG synthesis to improve supervised classification. They incorporated ECG simulator equations into the generation networks, and then the generated ECG signals are used to train a deep network.\nChen et al. proposed a GAN framework for building occupancy modelling. They first learned the discriminator and generator in the vanilla GAN with the training occupancy data. Then, the learned generator is the required occupancy model, which can be used to generate occupancy data with random inputs. To evaluate, they defined five variables (i.e., mean occupancy, time of the first arrival, time of the last departure, cumulative occupied duration and the number of occupied/unoccupied transitions) with two criteria (i.e., normalised root mean squared error and total variation distance).\nChe et al. used a modified GAN called \\textit{ehrGAN} to generate plausible labelled EHR data. 
The generator is a modified encoder-decoder CNN network, and the generated EHR data mimics the real patient records, which augments the training dataset in a semi-supervised learning manner. In this work, they used the generative networks with the CNN prediction model to improve the performance of risk prediction.\nKoochali et al. proposed \\textit{ForGAN} to predict the next-step time series value $X_{t+1}$ by learning the full conditional probability distribution. They applied a conditional GAN and the condition windows are the previous $t$ values ($X_0, X_1,...,X_t$). With the input of the noise vector, the generator predicts the value at the $t+1$ step and then the discriminator compares this value to the true value at the $t+1$ step with the same condition windows. An LSTM network is used in both the generator and the discriminator. Zhou et al. predicted the stock price at the next time step $y_{t+1}$ based on the features in the previous $t$ time steps $X_1,X_2,...,X_t$ and the previous stock prices $y_1, y_2,...,y_t$ using generative adversarial nets. \nInstead of generating a sequence of single values, Dang et al. developed an approach for the generation of adversarial attacks where the output is a sequence of probability distributions. The proposed approaches are demonstrated on two challenging tasks: the prediction of electricity consumption and stock market trading. Besides, AOSeRec was proposed to generate a sequence of items consistent with user preferences rather than the next-item prediction. The model integrated the sequence-level oracle and adversarial learning into the seq2seq auto-regressive learning.\nGenerally, an excellent time-series generative model should preserve temporal dynamics, and the generated sequences should follow the original patterns between variables across time. Therefore, Yoon et al. 
proposed a framework \\textit{TimeGAN} for producing realistic multivariate time-series, combining the flexibility of the unsupervised GAN approach with the control afforded by supervised learning. In addition to the traditional unsupervised adversarial loss on both real and fake data, they presented a stepwise supervised loss with the original data as supervision, which helps learn from the transition dynamics in real sequences. \n\\begin{figure}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{Images/GANE1.pdf}\n \\caption{An overview of the time series imputation framework proposed by Luo et al. }\n \\label{fig:GANE1}\n\\end{figure}", "id": "c0263314-29fd-4cd0-a9f4-9f0c2fdf4e40", "level": "subsubsection", "origin_cites_number": 12, "parent_id": "a0a11f43-a150-4fcc-8f7b-e6c4e6b38925", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "GANs for Spatio-temporal Data Modelling" ], [ "subsection", "GANs for Time Series Modelling" ], [ "subsubsection", "Generation" ] ], "subsections": [], "title": "Generation" }, { "cite_extract_rate": 0.14285714285714202, "cites": [ 1005 ], "content": "In real-world applications, time series are usually incomplete due to various reasons, and the time intervals of observations are usually not fixed . The missing values in time series make it hard for effective analysis . One popular way to handle the missing values of time series is to impute the missing values to get the complete dataset. Generally, there are three different ways for time series imputation: case deletion methods , statistical imputation methods , and machine learning based imputation methods . However, all the existing approaches hardly consider the temporal relations between two observations. In recent years, researchers have started to take advantages of GANs to learn latent representations between observations for time series imputation .\nLuo et al. 
applied the adversarial model to generate and impute the original incomplete time series. To learn the latent relationships between observations with non-fixed time lags, a novel RNN cell called GRUI was proposed, which considers the non-fixed time lags and fades the influence of the past observations determined by the time lags. They proposed a two-stage model (see Figure~ \\ref{fig:GANE1}) for time series imputation: In the first stage, they adopted the GRUI in the discriminator and generator in GAN to learn the distribution and temporal information of the dataset. In the second stage, for each sample, they tried to optimise the 'noise' input vector and find the best-matched input vector of the generator. The noise was trained with a two-part loss function: masked reconstruction loss and discriminative loss. Masked reconstruction loss is the masked squared errors of the non-missing part between the original and generated sample. It means that the generated time series should be close enough to the original incomplete time series. The discriminative loss forces the generated sample as real as possible. However, this two-stage model needs a considerable time to find the best-matched input vector, which is not always the best, especially when the initial value of the 'noise' is not set properly. \nThen, Luo et al. proposed an end-to-end GAN-based imputation model E$^2$GAN which not only simplifies the process of time series imputation but also generates more reasonable values for the filling of missing values. E$^2$GAN takes a compressing and reconstructing strategy to avoid the 'noise' optimisation stage in . As seen in Figure ~\\ref{fig:GANE2}, in the generator (a denoising auto-encoder), they added a random vector to the original sample and map it into a low-dimensional vector. Then they reconstructed it from the low-dimensional vector. 
The generator seeks to find a network structure that can best compress and reconstruct the multivariate time series and fool the discriminator. Then they used the reconstructed sample to impute the missing values.\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{Images/GANE2.pdf}\n \\caption{An overview of E$^2$GAN framework proposed by Luo et al. }\n \\label{fig:GANE2}\n\\end{figure}\nNon-Autoregressive Multiresolution Imputation (NAOMI) is a new model for the imputation of spatio-temporal sequences like traffic flow data and movement trajectories when arbitrary missing observations are given. NAOMI imputes missing values for spatio-temporal sequences recursively from coarse to fine-grained resolutions with a non-autoregressive decoding procedure. It further employs a generative adversarial learning process to reduce variance for improving the performance.", "id": "33fedc1b-60ae-4f63-b3dc-bee7d61b9211", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "a0a11f43-a150-4fcc-8f7b-e6c4e6b38925", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "GANs for Spatio-temporal Data Modelling" ], [ "subsection", "GANs for Time Series Modelling" ], [ "subsubsection", "Imputation" ] ], "subsections": [], "title": "Imputation" }, { "cite_extract_rate": 0, "cites": [], "content": "In this subsection, we introduce the application of GAN on the graph data analysis which mainly focus on two areas: temporal link prediction and graph representation.", "id": "537a7111-41c9-435b-aef5-8ccbd033d1ca", "level": "subsection", "origin_cites_number": 0, "parent_id": "18110330-2f2a-4e5b-86b6-d01be014ca94", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "GANs for Spatio-temporal Data Modelling" ], [ "subsection", "GANs for Spatio-temporal Graph Modelling" ] ], "subsections": [ "e27d5917-5584-4f9a-8525-2f908b6efb75", 
"afc0ba4c-0209-4291-9a7c-de1c28b50605" ], "title": "GANs for Spatio-temporal Graph Modelling" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 1000 ], "content": "Temporal link prediction refers to the dynamics prediction problem in network systems (e.g., mobility and traffic prediction) where system behaviours are described by the abstract graphs . Given the snapshots of a graph in previous timestamps, the temporal link prediction task aims to construct the graph topology at the next timestamp. Lei et al. proposed GCN-GAN to predict links in weighted dynamic networks. They combined graph convolutional network (GCN), long short-term memory (LSTM) as well as generative adversarial network (GAN). The generator consists of a GCN hidden layer, an LSTM hidden layer and a fully connected layer. The discriminator contains a fully connected feed-forward network. For evaluation, they used edge-wise KL divergence and mismatch rate besides mean square error (MSE). Then, Yang et al. designed an attentive GCN model for temporal link prediction in graphs using GAN. Compared to , attentive GCN allows for assigning different importance to the vertices to learn the spatial features of the dynamic network. Then, temporal matrix factorisation (TMF) LSTM was employed to capture dynamic networks' temporal dependencies and evolutionary patterns. A GAN framework was then proposed to improve the performance of temporal link prediction. \nRecently, Wang et al. have designed a GAN-based reinforcement learning model (GRL) for knowledge graph completion, which employs both WGAN and LSTM to record trajectories and generate sub-graph sequences. 
In addition, the deep deterministic policy gradient approach (DDPG) is adopted to optimise both reward and adversarial loss and generate better policies, which leads to more stable training compared with the traditional optimisation method.", "id": "e27d5917-5584-4f9a-8525-2f908b6efb75", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "537a7111-41c9-435b-aef5-8ccbd033d1ca", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "GANs for Spatio-temporal Data Modelling" ], [ "subsection", "GANs for Spatio-temporal Graph Modelling" ], [ "subsubsection", "Temporal Link Prediction" ] ], "subsections": [], "title": "Temporal Link Prediction" }, { "cite_extract_rate": 0.24193548387096703, "cites": [ 7009, 8405, 1007, 1004, 1008, 988, 652, 8404, 1010, 989, 7320, 979, 1009, 1006, 1005 ], "content": "Wang et al. proposed GraphGAN, unifying two types of graph representation methods, discriminative methods and generative methods, via adversarial training. They found that the traditional softmax function and its variants are not suitable for the generator for two reasons: (1) softmax treats all vertices equally in the graph for a given vertex and does not consider the graph structure and proximity information; (2) the calculation of softmax involves all vertices in the graph, which is time-consuming and computationally inefficient. 
Therefore, they introduced \\textit{graph softmax} as the implementation of the generator and proved that it satisfies the desirable properties of normalisation, computational efficiency and graph structure awareness.\n\\begin{table}\n\\small\n\\setlength\\tabcolsep{2pt}\n\\caption{Summary of Datasets}\n\\label{tab:dataset}\n\\begin{tabular}{@{}llll@{}}\n\\toprule\nST data type & Dataset & Data source & Used references \\\\ \\midrule\n\\multirow{10}{*}{\\textit{Time series}} & Philips eICU database & Medical data & \\\\\n & MNIST & Hand-written digit images & \\\\\n & Occupancy dataset & Occupancy data in the building & \\\\\n & Pecan street dataset & Energy consumption, solar generation & \\\\\n & PhysioNet dataset & Medical data (e.g., heart rate, glucose) & \\\\\n & KDD cup 2018 dataset & Air quality data & \\\\\n & A5M dataset & Transatlantic link data & \\\\\n & PEMS-SF traffic dataset & Freeway occupancy rate & \\\\\n & Appliances energy dataset & Environmental data & \\\\ \n & UCI electricity dataset & Historical price data & \\\\ \n & Yoochoose & Clicking events from users & \\\\ \n & MovieLens & Movie ratings data & \\\\ \n \\midrule\n\\multirow{8}{*}{\\textit{Trajectory}} & ETH & Videos & \\\\\n & UCY & Videos & \\\\\n & Stanford drone dataset & Videos & \\\\\n & Vittorio Emanuele II & Videos & \\\\\n & Foursquare & Location-based social networks & \\\\\n & Gowalla & Location-based social networks & \\\\\n & Brightkite & Location-based social networks & \\\\\n & Yelp & Location-based social networks & \\\\\n \\midrule\n\\multirow{3}{*}{\\textit{ST events}} & Yellow taxi dataset & Taxi demand data & \\\\\n & CitiBike trip dataset & Bike demand data & \\\\\n & SWaT dataset & Attacked data in water system & \\\\ \\midrule\n\\multirow{8}{*}{\\textit{Graphs}} & ArXiv-AstroPh & Scientific collaborations data & \\\\\n & Wikipedia & Network of words &\n \\\\\n & CORA & Citation networks of publications & \\\\\n & CiteSeer & Citation networks of publications & 
\\\\\n & DBLP & Collaboration graph of authors & \\\\\n & Blogcatalog & Social network for bloggers & \\\\\n & UCI message dataset & Message communication networks & \n \\\\\n & Flickr & Social networks & \\\\ \\bottomrule\n\\end{tabular}\n\\end{table}\nAiming at better capturing the essential properties and preserving the patterns of real graphs, Bojchevski et al. introduced NetGAN to learn the distribution of a network via random walks. The merits of using random walks are their invariance under node reordering and efficiency in exploring sparse graphs by merely traversing the nonzero entries. The results confirmed that the combination of longer random walks and LSTM is advantageous for the model to learn the topology and general patterns in the data. \nAdversarial Network Embedding (ANE) also considers the random walk mechanism to learn network representation with the adversarial learning principle. It consists of two components: (1) the structure-preserving component is developed to extract network structural properties via either Inductive DeepWalk or Denoising Autoencoder; (2) the adversarial learning component contributes to learning network representations by matching the posterior distribution of the latent representations to given priors. However, using DeepWalk for learning graph embedding could lead to an overfitting issue, because sparsity is common in networks, or to an increased computational burden when more sampled walks are considered . Therefore, NetRA was proposed to further minimise network locality-preserving loss and global reconstruction error with a discrete LSTM Autoencoder and continuous space generator, such that the mapping from input sequences into vertex representations could be improved.\nMost recently, GAN embedding (GANE) tries to learn the underlying graph distribution based on the probability distribution of edge existence, which is similar to GraphGAN. 
The difference is that this model applies Wasserstein-1 distance as the overall objective function and intends to achieve link prediction and network embedding extraction simultaneously. As a novel network embedding method, the proximity generative adversarial network (ProGAN) is proposed to capture the underlying proximity between different nodes by approximating the generated distribution of nodes in a triplet format to the underlying proximity in the model of GAN. Specifically, a triplet can encode the relationship among three nodes, including similarity and dissimilarity. After the training of the generator and discriminator, the underlying proximities discovered are then used to build network embedding with an encoder.\nThe works mentioned above primarily focus on the single-view network in learning network embedding. However, numerous real-world data are represented by multi-view networks whose nodes have different types of relations. Sun et al. introduced a new framework for multi-view network embedding called MEGAN, which can preserve the information from individual network views, while considering nodes connectivity within one relation and complex correlations among different views. 
During the training of MEGAN, a pair of nodes is chosen by the generator based on the fake connectivity pattern across views produced by a multi-layer perceptron (MLP), and the discriminator is then executed to differentiate the real pair of nodes from the generated one.", "id": "afc0ba4c-0209-4291-9a7c-de1c28b50605", "level": "subsubsection", "origin_cites_number": 62, "parent_id": "537a7111-41c9-435b-aef5-8ccbd033d1ca", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "GANs for Spatio-temporal Data Modelling" ], [ "subsection", "GANs for Spatio-temporal Graph Modelling" ], [ "subsubsection", "Graph Representation" ] ], "subsections": [], "title": "Graph Representation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:dis}", "id": "138cb8c3-4c9e-42a7-b149-ddee925de518", "level": "section", "origin_cites_number": 0, "parent_id": "12251930-629c-43b6-96fe-1bbcd729d008", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Discussion" ] ], "subsections": [ "4f2e9bda-6ba6-4650-9895-7996a4f3651f", "2b7fec53-50fd-480d-a491-7845230a8f97" ], "title": "Discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "Alongside the numerous advantages of GANs, there are still challenges that need to be solved when employing GANs for ST applications.\nThe traditional architectures and loss functions of GANs may not be suitable due to the unique properties of ST data. Besides, evaluating ST data is more difficult compared to images, where researchers can rely on visual inspection. 
Therefore, we will mainly focus on: (1) \\textit{how to modify architectures/loss functions of GANs to better capture the spatial and temporal relations for ST data and achieve stable training?} (2) \\textit{how to evaluate the performance of GANs, especially when visually inspecting the generated ST samples is not applicable?} We will then address these two problems and indicate future directions for investigating this area.", "id": "4f2e9bda-6ba6-4650-9895-7996a4f3651f", "level": "subsection", "origin_cites_number": 0, "parent_id": "138cb8c3-4c9e-42a7-b149-ddee925de518", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Challenges and Future Directions" ] ], "subsections": [ "b5137191-acb7-40ac-b088-b3ff2bf8a01a", "006ef4e7-f555-4db3-8831-5ad14b879263" ], "title": "Challenges and Future Directions" }, { "cite_extract_rate": 0.6190476190476191, "cites": [ 117, 7009, 1003, 981, 1007, 1000, 8317, 996, 8404, 989, 7320, 992, 1011 ], "content": "In the computer vision area, fully connected layers were initially used as building blocks in the vanilla GAN, but were later replaced by convolutional layers in DCGAN . Compared with images, which have only spatial relations, modelling ST data is more complex due to the constraints from both spatial and temporal dimensions. Therefore, adapting architectures and loss functions of GANs for specific ST applications has become the mainstream recently.\nGenerally, original or adapted RNN , LSTM , VAE , CNN , GNN are usually used as the base model (i.e., the discriminator and generator) in the vanilla GAN , WGAN or CGAN , which captures the spatio-temporal relations for ST data. What's more, some new loss functions have been proposed to deal with specific ST tasks, such as the stepwise supervised loss in TimeGAN , the masked reconstruction loss in GRU-GAN , and the variety loss in SocialGAN . 
\nThe architecture of the generator and discriminator is of significant importance since it strongly influences the performance and stability of the GANs on ST data. Though GAN models have achieved remarkable success in ST applications , the unstable training process still remains unresolved and hinders further development for GAN on ST tasks, especially considering the heterogeneity and auto-correlation of ST data. For instance, Saxena et al. concatenated the latent code and data space in the discriminator for faster convergence, better learning and higher training stability. Although many previous studies discussed how to enable the stable training process , the problems of instability of GANs still need further research, especially on the ST data modelling. With further developments of GANs, new architectures and loss functions can be designed based on the characteristics of ST tasks.", "id": "b5137191-acb7-40ac-b088-b3ff2bf8a01a", "level": "subsubsection", "origin_cites_number": 21, "parent_id": "4f2e9bda-6ba6-4650-9895-7996a4f3651f", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Challenges and Future Directions" ], [ "subsubsection", "Architectures and loss functions of GANs" ] ], "subsections": [], "title": "Architectures and loss functions of GANs" }, { "cite_extract_rate": 0.6923076923076921, "cites": [ 60, 1013, 117, 992, 1003, 980, 1014, 1012, 989 ], "content": "Though GANs have gained huge success in various fields, evaluating the performance of GANs is still an open question. 
As illustrated in and , both quantitative measures (e.g., \\textit{Log-likelihood with Parzen Window Estimation} , \\textit{Fréchet Inception Distance} , \\textit{Maximum Mean Discrepancy} , \\textit{Root Mean Square Error} , \\textit{Histogram} , \\textit{Stepwise Method} ) and qualitative measures (e.g., \\textit{Preference Judgement} , Analysing \\textit{Internals of Models} ) have strengths and limitations. The nebulous notion of quality can be best assessed by a human judge, but human judgement is neither practical nor appropriate for different types of ST data. \nIn most cases, it is difficult or even impossible to visually evaluate the generated ST data. For instance, the \\textit{Intensive Care Unit} (ICU) time series or heart rate \\textit{Electrocardiogram} (ECG) signals could look completely random to a non-medical expert. Usually, the evaluation of generated ST samples requires domain knowledge. For example, Mogren et al. evaluated the generated music sequences using metrics in the field of music such as polyphony, repetitions, tone span and scale consistency. For future ST applications with GANs, some novel metrics based on domain knowledge could be considered to evaluate the generated ST data. \nNotably, some researchers have proposed general approaches to evaluate the generated ST data. Esteban et al. developed a general method called \\textit{Train on Synthetic, Test on Real} (TSTR) to evaluate the generated samples of GANs when a supervised task is defined on the training data. They used a dataset generated by GANs to train a classification model, then tested on a held-out set of true samples. This evaluation metric is ideal when employing GANs to share synthetic de-identified data because it demonstrates the ability of the generated synthetic data to be used for real applications. 
In the future, more practical metrics should be developed to evaluate the performance of generated ST samples.", "id": "006ef4e7-f555-4db3-8831-5ad14b879263", "level": "subsubsection", "origin_cites_number": 13, "parent_id": "4f2e9bda-6ba6-4650-9895-7996a4f3651f", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Challenges and Future Directions" ], [ "subsubsection", "Evaluation Metrics" ] ], "subsections": [], "title": "Evaluation Metrics" }, { "cite_extract_rate": 0, "cites": [], "content": "In this survey, we conducted a comprehensive overview of \\textit{Generative Adversarial Networks} (GANs) for spatio-temporal (ST) data in recent years. Firstly, we discussed the properties of ST data and traditional ways for ST data modelling. Then, we provided a thorough review and comparison of the popular variants of GAN and their applications to ST data analysis, such as time series imputation, trajectory prediction, graph representation and link prediction. Besides, we summarised the challenges and future directions for employing GANs for ST applications. \nFinally, though there are many promising results in the literature, we would like to point out that the adoption of GANs for ST data is still in its infancy. This survey can be used as a stepping stone for future research in this direction, as it provides a detailed explanation of different ST applications with GANs. 
We wish this paper could help readers identify the set of problems and choose the relevant GAN techniques when given a new ST dataset.", "id": "2b7fec53-50fd-480d-a491-7845230a8f97", "level": "subsection", "origin_cites_number": 0, "parent_id": "138cb8c3-4c9e-42a7-b149-ddee925de518", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Conclusions" ] ], "subsections": [], "title": "Conclusions" }, { "cite_extract_rate": 0, "cites": [], "content": "This research was supported by the Australian Government through the Australian Research Council's Linkage Projects funding scheme (LP150100246) and Discovery Project (DP190101485). We also acknowledge the support of RMIT Research Stipend Scholarship and CSIRO Data61 Scholarship.\n\\bibliographystyle{ACM-Reference-Format}\n\\bibliography{nan.bib}\n\\end{document}\n\\endinput", "id": "54ce09c2-479b-47f3-88ae-3625c67592f3", "level": "section", "origin_cites_number": 0, "parent_id": "12251930-629c-43b6-96fe-1bbcd729d008", "prefix_titles": [ [ "title", "Generative Adversarial Networks for Spatio-Temporal Data: A Survey" ], [ "section", "Acknowledgments" ] ], "subsections": [], "title": "Acknowledgments" } ]
50
[ 986, 981, 7009, 987, 76, 983, 978, 8403, 985, 87, 988, 980, 982, 984, 896, 989, 979, 991, 990, 1000, 996, 999, 993, 998, 994, 995, 8404, 25, 997, 7320, 992, 7321, 166, 32, 148, 1001, 529, 1002, 54, 63, 1003, 40, 1006, 1007, 1004, 1008, 1009, 1005, 514, 9089, 8405, 652, 1010, 117, 8317, 1011, 60, 1013, 1014, 1012 ]
1.137791
[ "Chao Cai", "Rong Zheng", "Jun Luo" ]
Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey
2019
2019-01-11T01:58:35Z
cs.SD
With the proliferation of Internet-of-Things devices, acoustic sensing has attracted much attention in recent years. It exploits acoustic transceivers such as microphones and speakers beyond their primary functions, namely recording and playing, to enable novel applications and new user experiences. In this paper, we present the first systematic survey of recent advances in active acoustic sensing using commodity hardware with a frequency range below 24~\!kHz. We propose a general framework that categorizes main building blocks of acoustic sensing systems. This framework encompasses three layers, i.e., physical layer, core technique layer, and application layer. The physical layer includes basic hardware components, acoustic platforms as well as the air-borne and structure-borne channel characteristics. The core technique layer encompasses key mechanisms to generate acoustic signals (waveforms) and to extract useful temporal, spatial and spectral information from received signals. The application layer builds upon the functions offered by the core techniques to realize different acoustic sensing applications. We highlight unique challenges due to the limitations of physical devices and acoustic channels and how they are mitigated or overcome by core processing techniques and application-specific solutions. Finally, research opportunities and future directions are discussed to spawn further in-depth investigation on acoustic sensing.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "1352d6e5-225d-41b6-be40-dfba979ded66", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ] ], "subsections": [ "c569bf52-61b4-43ce-991b-c83ebdd2905a", "b65eba54-cce9-4866-bd8f-992ff857a71d", "a1a6cc17-4b5b-4f7d-9b44-af40934be2da", "797b87e2-7680-40a8-af64-25093d612994", "76bc7108-cd5a-455f-b0a0-9d7148712e27", "20089aff-71e9-415f-8c3d-3a6f9374f1db" ], "title": "root" }, { "cite_extract_rate": 0.052631578947368, "cites": [ 1695 ], "content": "\\label{sec:introduction}\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=.89\\columnwidth]{Figs/tesser3.pdf}\n \\caption{A general framework for acoustic sensing.}\n \\label{fig:system overview}\n\\end{figure}\nInternet of Things (IoT)~ technologies enable everyday objects to connect and communicate with each other by augmenting them with sensing, processing, and computation units. \nWith the ever-increasing computation power and rich built-in sensors available in IoT devices, novel applications emerge by repurposing sensors beyond their primary use. For instance, cameras are intended for taking photos but have been utilized in visible light communication~. Gyroscope and accelerometer sensors are designed for attitude estimation but have been used extensively in activity recognition~. WiFi signals, originally used for communication, have been widely applied in many context-aware computing applications including localization~ and gesture recognition~. \nIn this paper, we target innovative sensing mechanisms that exploit acoustic front-ends on commodity IoT devices. \nAcoustic front-ends, namely microphones and speakers, are among the most commonly used transducers in IoT devices. 
They are generally designed for playing back and recording audio signals, but also play a pivotal role in passive sensing applications such as speech recognition~ and acoustic source localization~.\nNovel active sensing mechanisms that treat acoustic front-ends as transceivers to emit and capture wireless signals have gained a lot of interest in the research community. \nFor instance, acoustic signals have been used to establish aerial acoustic communication channels to transmit a small amount of information~. \nAlso, the reflective property of acoustic signals has enabled the development of acoustic short-range radars for floor map reconstruction~ and gesture recognition~. \nMoreover, the relatively slow propagation speed of acoustic waves (compared to, e.g., Radio Frequency or RF) in common media makes it possible to achieve comparable performance with a much lower bandwidth than RF technologies require. Consequently, it is possible to achieve accurate Time-of-Flight estimations that further support many context-aware applications~.\nLast but not least, \nactive acoustic sensing can enable deformity detection and estimation of non-acoustic-emitting objects by transmitting purposefully modulated acoustic signals and making inferences based on the reflected waveforms captured by microphones~. \n\\begin{figure*}[b]\n \\centering\n \\subfigure[Sound recording system.]\n {\n \\label{fig:a5}\n \\includegraphics[width=1.3\\columnwidth]{Figs/sound_recording.pdf}\n }\n \\\\\n \\vspace{0.002\\textwidth}\n \\subfigure[Sound playback system.]\n {\n \\label{fig:b5}\n \\includegraphics[width=1.3\\columnwidth]{Figs/sound_playback.pdf}\n }\n \\caption{Diagrams for typical acoustic hardware.}\n \\label{fig:microphone and speaker}\n\\end{figure*}\nDespite tremendous efforts in developing acoustic sensing applications in the past decade, a systematic treatment of fundamental principles, key design considerations, and innovative methodologies is still missing. 
As a result, when developing applications based on acoustic sensing, researchers and developers often have to start from scratch and reinvent the wheel. \nIn this paper, we provide the first systematic survey on recent advances in acoustic sensing with emphasis on novel sensing approaches on commodity hardware (with a bandwidth below ${24}$ kHz), as opposed to those that require special-purpose hardware such as underwater acoustic communication or ultrasonic sensing. We review the relevant research in a bottom-up manner, from the physical layer, through the core technique layer, to the application layer, as shown in Fig.~\\ref{fig:system overview}. The physical layer includes basic hardware components, acoustic platforms, as well as the air-borne and structure-borne channel characteristics.\nThe core technique layer encompasses key mechanisms to generate acoustic signals (waveforms) and to extract useful temporal, spatial and spectral information from received signals. \nThe application layer builds upon the functions offered by the core techniques to realize different acoustic sensing applications. We group acoustic sensing applications into aerial acoustic communication, applications leveraging temporal features such as ranging, acoustic radar, acoustic localization and tracking, and applications enabled by estimating acoustic channel characteristics such as gesture recognition, speaker liveliness detection and interactive controls. \nWe highlight unique challenges due to the limitations of physical devices and acoustic channels and how they are mitigated or overcome by core processing techniques and application-specific solutions.\nAlong each category of applications, we discuss research opportunities for further investigation. Finally, we summarize under-investigated areas and emerging applications in acoustic sensing as a whole. \nThe remainder of this paper is organized as follows. 
In Section~\\ref{sec:device or hardware background}, we introduce typical hardware components and system support offered by commodity devices and the properties of acoustic channels. In Section~\\ref{sec:core acoustic mechanisms}, we present the core techniques to generate acoustic signals and to extract temporal and channel features from received signals.\nIn Section~\\ref{sec:solution or application}, a variety of applications are discussed in detail according to the core techniques on which they are based;\nwe discuss research opportunities for each category separately and present future directions in Section~\\ref{sec:future direction}.\nFinally, we conclude the paper in Section~\\ref{sec:conclusion}.", "id": "c569bf52-61b4-43ce-991b-c83ebdd2905a", "level": "section", "origin_cites_number": 19, "parent_id": "1352d6e5-225d-41b6-be40-dfba979ded66", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:device or hardware background}\nOn an acoustic-enabled device, applications inject processed digital signals to its acoustic frontend (e.g., speakers), which transmits analog acoustic signals through air-borne or structure-borne channels. Upon reception at the acoustic frontend (e.g., microphones) of a receiver device, the analog signals are transformed to digital signals for further processing and are eventually delivered to an application. During the process, extra latency due to processing and system delays, distortions from acoustic frontends and channels, and noise and interference from on-board circuits or environments are introduced. In this section, we present the physical components of acoustic active sensing pipelines and their characteristics. 
The limitations of acoustic hardware and systems as well as channels pose non-trivial challenges to acoustic signal processing and application development.", "id": "b65eba54-cce9-4866-bd8f-992ff857a71d", "level": "section", "origin_cites_number": 0, "parent_id": "1352d6e5-225d-41b6-be40-dfba979ded66", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Devices and Channels" ] ], "subsections": [ "7c3c68d5-7c3f-4bcb-ab8b-efc343079240", "aa100575-669d-4ee9-b0c0-626d3b7f0e6d", "425c8e75-ed5e-41f3-aa75-6115b3e6e7b7", "7c960dda-ec2c-4fde-b935-928b8ae72bc6" ], "title": "Acoustic Devices and Channels" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:acoustic hardware}\nThe typical pipelines of \\textit{sound recording and playback systems} are shown in\nFig.~\\ref{fig:microphone and speaker}(a) and (b) on the receiver and transmitter sides, respectively. The former converts mechanical (acoustic) waves into digital samples while the latter reverses this process. In a recording system, acoustic signals are first converted into voltage signals by a \\textit{microphone}. \nAn Automatic Gain Control (AGC) or Programmable Gain Amplifier (PGA) then amplifies the voltage signals to \nfit the dynamic range of the subsequent Analog-to-Digital Converter (ADC); this helps to improve the digitization resolution and to avoid saturation~. Amplified signals further go through a Low Pass Filter (LPF), also known as an anti-aliasing filter, and become band-limited signals. The cut-off frequency of the LPF is ${f_s / 2}$, where ${f_s}$ is the sampling rate. Filtered signals are\nfinally converted to digital samples by an ADC. A sound playback system reverses the above process. Digital samples are first interpolated and then fed into a Digital-to-Analog Converter (DAC) to become analog signals. The analog signals are further amplified and finally converted to acoustic waves by a \\textit{speaker}. 
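The digitization step above can be made concrete with a short Python sketch (a minimal illustration assuming an ideal mid-rise uniform quantizer; the function names and parameter values here are our own and do not model any specific device's ADC):

```python
import math

def quantize(x, bits):
    """Ideal mid-rise uniform quantizer for samples in [-1, 1)."""
    levels = 2 ** bits
    step = 2.0 / levels                      # one least-significant bit (LSB)
    idx = min(int(math.floor((x + 1.0) / step)), levels - 1)
    return -1.0 + (idx + 0.5) * step         # reconstruction value

def digitize_tone(freq, fs, bits, n):
    """Sample a tone (amplitude 0.99) at rate fs and quantize to `bits`."""
    return [quantize(0.99 * math.sin(2 * math.pi * freq * k / fs), bits)
            for k in range(n)]

samples = digitize_tone(freq=440.0, fs=8000, bits=8, n=200)
# Quantization error is bounded by half an LSB (here 2/256/2, about 0.0039):
err = max(abs(s - 0.99 * math.sin(2 * math.pi * 440.0 * k / 8000))
          for k, s in enumerate(samples))
```

Raising `bits` from 8 to 16 shrinks the worst-case quantization error from roughly 0.004 to roughly 1.5e-5 of full scale, which is why higher bit resolutions preserve more signal detail.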
\nContemporary commodity devices often include more than one microphone and one speaker. For instance, modern smartphones utilize two speakers to play stereo audio and employ two microphones to enhance recording quality. The typical layouts of microphones and speakers on smartphones are shown in Fig.~\\ref{fig:typical layout}. The physical layouts of microphones and speakers are important considerations in some applications, which we defer to Section~\\ref{sec:solution or application} for more detailed discussion. \nOn acoustic playback systems, amplification or gains of sounds can often be configured. High-power speakers are utilized in applications that operate at long ranges. The most important parameters for acoustic recording systems include sampling rate, bit resolution, and the number of channels (or microphones). \nThe higher the sampling rate and bit resolution, the better the signal quality. The sampling rate typically ranges from 8~\\!kHz to 44.1~\\!kHz, but certain devices (e.g., high-end smartphones) have a maximum sampling rate of 192~\\!kHz~. The most common configuration for bit resolution is 8-bit or 16-bit, but customized devices may support a maximum of 32-bit.\nWhereas higher bit resolutions are generally preferred to preserve more signal detail, choosing a suitable sampling rate and making use of the two channels commonly available to both sound playback and recording systems have a profound impact on the sensing performance and are further discussed in Section~\\ref{sec:future direction for temporal features}. \nThe acoustic hardware on commodity devices often introduces a non-flat frequency response on input signals. The non-flat frequency response, also known as \\textit{frequency selectivity}, describes a phenomenon where acoustic signals experience different channel gains at different frequencies. 
This can be problematic for signals that occupy a relatively wide frequency range.\nOn the receiver side, signals whose frequencies are below 8~\\!kHz~ can have a higher receiver gain while those above experience sharp attenuation since microphones on commodity devices are designed primarily for recording human voices. \nOn the transmitter side, with \\textit{speaker diaphragm inertia}~, introduced by a non-electronic component (diaphragm) in speakers, the movement of speaker diaphragms cannot catch up with fast changes in input signals. Thus, higher frequency signals tend to be dampened. \nAdditionally, it causes ringing effects~ or frequency leakage~. Ringing effects describe the problem in the time domain where a transmission has a delayed start and/or a prolonged transmission duration. In contrast, frequency leakage describes the problem in the frequency domain where a transmission of a band-limited signal can cause out-of-band artifacts. As a result, speaker diaphragm inertia can generate audible noise even when the transmitted signal only occupies inaudible frequency bands.\\footnote{Though the audio range falls between 20~\\!Hz and 20~\\!kHz, most people normally do not hear sounds above 18~\\!kHz~. 
Therefore, the frequency range between 18 and 20~\\!kHz is often deemed \\textit{inaudible} and hence commonly used for acoustic communication and sensing.} \n\\begin{figure}[t]\n \\centering\n \\subfigure[Type A layout.]\n {\n \\label{fig:a11}\n \\includegraphics[width=0.38\\columnwidth]{Figs/typeA1.pdf}\n }\n \\hspace{0.036\\textwidth}\n \\subfigure[Type B layout.]\n {\n \\label{fig:b11}\n \\includegraphics[width=0.38\\columnwidth]{Figs/typeB2.pdf}\n }\n \\subfigure[Type C layout.]\n {\n \\label{fig:a12}\n \\includegraphics[width=0.38\\columnwidth]{Figs/typeB1.pdf}\n }\n \\hspace{0.036\\textwidth}\n \\subfigure[Type D layout.]\n {\n \\label{fig:b12}\n \\includegraphics[width=0.38\\columnwidth]{Figs/typeD.pdf}\n }\n\\caption{Acoustic sensor layouts of different phone models: Front views on (a) and (b) of Type A (e.g., HUAWEI MATE 30 and Honor V20) and Type B (e.g., OPPO A59 and Xiao MI8 SE). Back views on (c) and (d) of Type C (e.g., One Plus 5T) and Type D (e.g., Samsung Galaxy S5 and HUAWEI MATE9).} \\label{fig:typical layout}\n\\end{figure}", "id": "7c3c68d5-7c3f-4bcb-ab8b-efc343079240", "level": "subsection", "origin_cites_number": 9, "parent_id": "b65eba54-cce9-4866-bd8f-992ff857a71d", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Devices and Channels" ], [ "subsection", "Acoustic Hardware" ] ], "subsections": [], "title": "Acoustic Hardware" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:acoustic channel property}\nAs acoustic signals generated by mechanical vibrations can propagate through air, liquid, and solid media, we next discuss the properties of the two channels, i.e., air and solid media, that are relevant to acoustic sensing on commodity devices. 
Acoustic signals propagating through air and solid media are called \\textit{Air-borne Signals} (AS) and \\textit{Structure-borne Signals} (SS), respectively.", "id": "aa100575-669d-4ee9-b0c0-626d3b7f0e6d", "level": "subsection", "origin_cites_number": 0, "parent_id": "b65eba54-cce9-4866-bd8f-992ff857a71d", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Devices and Channels" ], [ "subsection", "Acoustic Channel Properties" ] ], "subsections": [ "04f28ddc-76c9-4773-97c2-0a01ca325c26", "347fb81c-93f0-4064-be8e-9fd70ad7e7a8" ], "title": "Acoustic Channel Properties" }, { "cite_extract_rate": 0, "cites": [], "content": "When acoustic signals propagate through air, they radiate their energy spherically towards surroundings and can be deemed as ``wireless signals''. Hence, they share many features with radio signals. Specifically, acoustic signals experience path loss, Doppler effects, reflection, refraction, and diffraction. \n\\begin{itemize}\n \\item \\textsl{Path Loss}: Path loss describes the phenomenon that the energy of acoustic signals decreases over the propagation distance. Specifically, the acoustic intensity, measured as the ratio between power and area, is inversely proportional to the square of distance~. Supposing the intensity of a particular acoustic signal is denoted by $p$, then {a free-space path loss model is given by}~:\n \\begin{equation}\\label{eq:path loss}\n p = k\\frac{p_0}{r^2},\n \\end{equation}\n where $k$ is a coefficient and $p_0$ is a reference intensity of the source.\n \\item \\textsl{Doppler Effects}: \nDoppler Effect refers to the change in wave frequency during the relative motion between an acoustic source and its receiver (or observer). 
In particular, the Doppler shift in frequency is given by~:\n \\begin{equation}\\label{eq:doppler effect}\n \\Delta f = \\frac{\\Delta v_d}{c}f,\n \\end{equation}\n {where $f$ is the original frequency, $\\Delta v_d$ denotes the relative speed between the transmitter and the receiver, and $c$ is the speed of sound in air.} \n \\item \\textsl{Temperature Effect}: Unlike RF waves, the propagation speed of acoustic signals is temperature dependent, given by~:\n \\begin{equation}\\label{eq:speed temperature relationship}\n c = \\sqrt{403T_{p}\\left( {1 + 0.32e/\\rho} \\right)},\n \\end{equation}\n where $c$ is the sound speed (m/s) in air, $T_p$ is the temperature (in Kelvin), $e$ is the vapor pressure of water in air, and $\\rho$ is the absolute atmospheric pressure. From Eqn.~\\eqref{eq:speed temperature relationship}, we see that sound speed is also dependent on humidity. However, since $e/\\rho$ tends to be quite small, Eqn.~\\eqref{eq:speed temperature relationship} can be further simplified as ${c^2} = 403T_{p}$. \n \\item \\textsl{Reflection, Refraction, and Diffraction}: Reflection means that when an acoustic signal hits a boundary of two different media, it will be reflected back. The amount of reflected signals depends on the level of difference between the two media. Refraction happens when a fraction of the acoustic signal propagates into another medium and changes direction. Diffraction involves a change in direction of waves as they pass through an opening or around a barrier in their path. 
This happens when obstacles are smaller than the wavelength of the wave.\n\\end{itemize}", "id": "04f28ddc-76c9-4773-97c2-0a01ca325c26", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "aa100575-669d-4ee9-b0c0-626d3b7f0e6d", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Devices and Channels" ], [ "subsection", "Acoustic Channel Properties" ], [ "subsubsection", "Characteristics of Air Channels" ] ], "subsections": [], "title": "Characteristics of Air Channels" }, { "cite_extract_rate": 0, "cites": [], "content": "{Acoustic signals in solid media exhibit marked differences from those in air. Next, we highlight two key properties within solid media,} namely, acoustic dispersion and resonance. \n\\begin{itemize}\n \\item \\textsl{Acoustic Dispersion}: As an acoustic signal containing a rich set of frequency components propagates in a solid medium, high frequency components travel faster than low frequency ones. The speed $c_f$ of a specific frequency component $f$ is given by~:\n \\begin{equation}\\label{eq:acoustic dispersion equation}\n {c_f} = \\sqrt[4]{{\\frac{{Eh{f^2}}}{{12\\rho \\left( {1 - v_p^2} \\right)}}}},\n \\end{equation}\n where $v_p$ is the phase velocity, and $E, \\rho, h$ are constants that characterize a medium: $E$ quantifies elasticity, $\\rho$ characterizes density, and $h$ represents thickness. Fig.~\\ref{fig:acoustic dispersion} shows the signal recorded by an earbud when a person taps her teeth and generates a pulse vibration that travels through her jaw and skull bones. \n \\begin{figure}[h]\n \\centering\n \\includegraphics[width=3.5in]{Figs/acoustic_dispersion1.pdf}\\\\\n \\caption{Acoustic dispersion~. 
The waveform is recorded by an earbud (with on-board microphone) when a user taps a tooth.}\n \\label{fig:acoustic dispersion}\n \\end{figure}\n \\item \\textsl{Acoustic Resonance}: Acoustic resonance describes the phenomenon that a solid medium selectively amplifies a particular frequency component while attenuating others. This particular frequency is called the resonant frequency, which changes if the properties or the state of the solid medium differs, say, under external forces or undergoing shape transformations. \n\\end{itemize}\nBesides the afore-mentioned differences, acoustic wave propagation in air and solid media also differs in speed. The speed of AS is 340~\\!m/s (at a temperature of 25$^\\circ$C) and for SS, the speed can be over 2000~\\!m/s (depending on the type of medium)~. \nInterestingly, an acoustic event can generate both AS and SS at the same time. SS appears ahead of the AS in the captured signals if a receiver is in contact with the surface that vibrates and generates the acoustic event. However, depending on the degree of coupling, with the same sensing device, the signal intensity of SS tends to be significantly lower than that of AS.", "id": "347fb81c-93f0-4064-be8e-9fd70ad7e7a8", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "aa100575-669d-4ee9-b0c0-626d3b7f0e6d", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Devices and Channels" ], [ "subsection", "Acoustic Channel Properties" ], [ "subsubsection", "Solid Medium Property" ] ], "subsections": [], "title": "Solid Medium Property" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:platform diversity}\nDue to the pervasiveness of acoustic transducers and IoT devices, platforms that support acoustic operations are diverse, ranging from customized embedded devices and wearables to general-purpose smartphones and laptops. 
On general-purpose devices, which rely on commodity operating systems (OS) such as Windows, Linux, Android, MacOS and iOS, acoustic applications are subject to limitations of available application programming interfaces (API) and need to contend with other applications or system services for resources.\n\\begin{table*}[!b]\n\\centering\n\\caption{Comparison of different waveform designs for sensing purposes.}\n\\label{tab:waveform design comparision}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\textbf{Waveform} & \\textbf{Advantages} & \\textbf{Disadvantages} & \\textbf{\\makecell{Suitable applications}}\\\\ \\hline\nPure tone & \\makecell{Doppler-aware, responsive,\\\\low computational requirements} & \\makecell{Vulnerable to interference} & Gesture recognition/tracking (relatively short range) \\\\ \\hline\nFHSS modulated & \\makecell{Lower correlation sidelobes,\\\\good multipath resolution} & \\makecell{Sensitive to noise} & Gesture tracking (relatively short range)\\\\ \\hline\nChirp & \\makecell{Noise-resilient, multipath resilient} & \\makecell{Computation intensive} & Ranging, localization, radar (relatively long range)\\\\ \\hline\n\\end{tabular}\n\\end{table*}\nDifferent platforms have their advantages and disadvantages in developing acoustic sensing solutions. Platforms running Windows and Linux offer a wide range of software utilities that allow developers to focus on algorithm design and hence facilitate fast prototyping~. To capture and play back acoustic signals, one can either use existing software utilities such as \\textsf{Audacity}~ for offline processing or directly deal with raw samples using platform-dependent APIs~. However, systems running these OSs tend to be less portable. In contrast, mobile and wearable devices powered by Android or iOS platforms offer native acoustic APIs and are convenient for carrying out various experiments in both indoor and outdoor settings. 
General-purpose OSs can \nintroduce uncontrollable system delays (in tens of milliseconds), missing acoustic samples (e.g., due to insufficient buffers) and ``distortions'', e.g., dynamic gains and hence inconsistent acoustic fingerprints, as a result of automatic gain control~. \nThese artifacts should be taken into account or compensated for when developing acoustic solutions. \nLastly, customized embedded platforms using commodity hardware components generally allow flexible physical layer configuration, including increasing the number of channels and tuning hardware properties~. \nPlatform diversity implies that it is difficult, if not impossible, to have one-size-fits-all solutions. When porting applications from one platform to another~, one also needs to devise calibration procedures to mitigate such diversity.", "id": "425c8e75-ed5e-41f3-aa75-6115b3e6e7b7", "level": "subsection", "origin_cites_number": 5, "parent_id": "b65eba54-cce9-4866-bd8f-992ff857a71d", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Devices and Channels" ], [ "subsection", "Acoustic Platforms" ] ], "subsections": [], "title": "Acoustic Platforms" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:challenges on devices}\nThe characteristics of acoustic hardware, channels and platforms are key design considerations in acoustic sensing solutions. Combating the imperfections and limitations of physical layers requires addressing the following challenges: \n\\begin{itemize}\n \\item Frequency selectivity caused by acoustic transducers and channels unavoidably results in signal distortions that affect system performance.\n \\begin{challenge}\n \\label{clg:freq_sel}\n Handling frequency selectivity induced signal distortions. 
\n \\end{challenge}\n Various compensation approaches can be applied to flatten the frequency response.\n More details on this are presented in Section~\\ref{sec:cir representation}.\n \\item \n Speaker diaphragm inertia mentioned in Section~\\ref{sec:acoustic channel property} may cause disruptive audible noise even if the modulated signal is in inaudible frequency ranges. \n\\begin{challenge}\n\\label{clg:audibility}\n Suppressing audible noise incurred by acoustic transmissions.\n\\end{challenge}\nMaintaining smooth amplitude and phase transitions between samples is crucial to mitigate this problem, and more relevant techniques will be elaborated in Section~\\ref{sec:tracking} and Section~\\ref{sec:future direction for channel characteristics}. \n\\item As explained in Section~\\ref{sec:platform diversity}, the uncertain system delay in response to software signaling can be detrimental to the extraction of temporal characteristics.\n \\begin{challenge}\n \\label{clg:system_delay}\n Masking the uncertain system delay from upper-layer functionalities.\n \\end{challenge}\n Designing algorithms that are agnostic to system delay is a potential approach, and more discussion on this can be found in Section~\\ref{sec:basic timing measurements} and~\\ref{sec:phase-enabled accurate timing}.\n\\item Though most acoustic devices share a similar hardware pipeline (Fig.~\\ref{fig:microphone and speaker}), the diversity in both acoustic front-ends and software platforms can pose challenges to application deployment. 
\n\\begin{challenge}\n\\label{clg:calibration}\nReducing or eliminating laborious device-dependent calibration when developing acoustic applications.\n\\end{challenge}\nTackling this challenge may require a certain level of platform homogenization, and more discussion can be found in Section~\\ref{sec:channel characteristics-enabled applications}.\n\\end{itemize}\n\\begin{table*}[!b]\n\\centering\n\\caption{{Comparison of different waveforms for communication.}}\n\\label{tab:waveform design comparision for communication}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\textbf{Waveform} & \\textbf{Advantages} & \\textbf{Disadvantages} & \\textbf{\\makecell{Suitable applications}}\\\\ \\hline\n\\makecell{Pure tone,\\\\FHSS modulated} & \\makecell{High data rate} & \\makecell{Short communication range,\\\\high BER} & Near field communication\\\\ \\hline\nChirp & \\makecell{Noise-resilient, multipath resilient,\\\\long communication range,\\\\low BER} & \\makecell{Low data rate} & Long-range, low-volume, strong interference\\\\ \\hline\n\\end{tabular}\n\\end{table*}", "id": "7c960dda-ec2c-4fde-b935-928b8ae72bc6", "level": "subsection", "origin_cites_number": 0, "parent_id": "b65eba54-cce9-4866-bd8f-992ff857a71d", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Devices and Channels" ], [ "subsection", "Challenges" ] ], "subsections": [], "title": "Challenges" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:core acoustic mechanisms}\nIn this section, we present core techniques for acoustic sensing. These techniques mitigate some of the challenges discussed earlier and serve as building blocks for various applications. 
Waveform design for sensing and communication is first introduced, followed by mechanisms extracting temporal and frequency domain characteristics.", "id": "a1a6cc17-4b5b-4f7d-9b44-af40934be2da", "level": "section", "origin_cites_number": 0, "parent_id": "1352d6e5-225d-41b6-be40-dfba979ded66", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ] ], "subsections": [ "d9d45b4c-9d64-4c03-8dca-24df34755cb3", "bc9adb56-4a50-4732-9afa-a4e88b5f1120", "5cd952a0-5695-4e0a-bba9-229ae4e0ed56" ], "title": "Core Acoustic Sensing Techniques" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:waveform design}\nWaveform design is critical for acoustic sensing systems. \nWe draw a distinction between sensing and communication as the designs suitable for them (albeit related) can be rather different. \nThe available bandwidth for transmitted acoustic waveforms normally goes up to 24~\\!kHz. The frequency range from $20$~\\!Hz to 18~\\!kHz is \\textit{audible} to humans but has better channel gains in commodity acoustic front-ends~, while the \\textit{inaudible} range from 18 to 24~\\!kHz is preferred due to less interference but is subject to sharp attenuation.", "id": "d9d45b4c-9d64-4c03-8dca-24df34755cb3", "level": "subsection", "origin_cites_number": 1, "parent_id": "a1a6cc17-4b5b-4f7d-9b44-af40934be2da", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Waveform Design" ] ], "subsections": [ "81396aea-2f93-4430-8227-b6862ecb550e", "88a7f546-04d7-40dd-919e-50020cc79686", "796bdfde-ebec-46cd-9205-1d9743ebb829" ], "title": "Waveform Design" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:waveform for sensing}\nThe most commonly used signals for sensing include \\textit{Pure Tone}, \\textit{Frequency Hopping Spectrum Spread} (FHSS), and 
\\textit{Chirp}. Their respective advantages, disadvantages, and suitable applications have been summarized in TABLE~\\ref{tab:waveform design comparision}. \n\\begin{figure}[!t]\n \\centering\n \\subfigure[Pure tone signal.]\n {\n \\label{fig:pure tone waveform}\n \\includegraphics[width=0.44\\columnwidth]{Figs/puretone.pdf}\n }\n \\hspace{0.002\\textwidth}\n \\subfigure[Spectrogram of pure tone signal.]\n {\n \\label{fig:pure tone stft}\n \\includegraphics[width=0.45\\columnwidth]{Figs/puretone_stft2.pdf}\n }\n \\subfigure[Baseband ZC signal.]\n {\n \\label{fig:baseband ZC}\n \\includegraphics[width=0.44\\columnwidth]{Figs/zc_baseband.pdf}\n }\n \\hspace{0.002\\textwidth}\n \\subfigure[Spectrogram of ZC signal.]\n {\n \\label{fig:ZC stft}\n \\includegraphics[width=0.45\\columnwidth]{Figs/zc_stft1.pdf}\n }\n \\subfigure[Time domain chirp signal.]\n {\n \\label{fig:chirp waveform}\n \\includegraphics[width=0.44\\columnwidth]{Figs/chrip_time_domain.pdf}\n }\n \\hspace{0.002\\textwidth}\n \\subfigure[Spectrogram of chirp signal.]\n {\n \\label{fig:chirp stft}\n \\includegraphics[width=0.45\\columnwidth]{Figs/chirp_frequency_domain1.pdf}\n }\n\\caption{Time and frequency domain signal representations. 
}\n\\label{fig:signal representation}\n\\end{figure}", "id": "81396aea-2f93-4430-8227-b6862ecb550e", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "d9d45b4c-9d64-4c03-8dca-24df34755cb3", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Waveform Design" ], [ "subsubsection", "Waveforms for Active Sensing" ] ], "subsections": [ "2c67dbe8-fd43-495f-9419-b78c614400ca", "302b4dfc-9df4-4be3-ba3c-c30f8ddd332c", "c9f2e0e8-956e-4749-ba53-184767221490" ], "title": "Waveforms for Active Sensing" }, { "cite_extract_rate": 0.14285714285714202, "cites": [ 1695 ], "content": "Pure tone signals are commonly used in acoustic sensing due to their low complexity and high resolution in tracking Doppler shifts. Their waveform is represented as ${s\\left(t\\right) = \\cos\\left( 2\\pi ft + \\phi \\right)}$ where ${f}$ and ${\\phi}$ are the frequency and the initial phase. The waveform of a single tone signal and its Short Time Fourier Transform (STFT) spectrogram are shown in Fig.~\\ref{fig:pure tone waveform} and \\ref{fig:pure tone stft}, respectively. \nConsider that a moving target transmits a pure tone signal with frequency ${f}$, and the detected Doppler frequency is ${f_\\mathrm{shifted}}$ at the receiver side. The relative moving speed can thus be estimated as ${v = \\frac{f_\\mathrm{shifted} - f}{f}c}$, where ${c}$ is the sound speed. Due to the low propagation speed of sound, it is feasible to achieve a cm/s-level estimation accuracy. \nMoreover, \nif phase components across multiple pure tones are available, the phase diversity yields multiple constraints to obtain more accurate phase and frequency estimations~. \nAs phase and frequency shifts are correlated with spatial quantities such as range and speed, pure tone signals have been extensively used in localization and tracking applications~, as well as gesture recognition~. 
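As a quick numerical illustration of the pure-tone speed estimate above (the helper name and the value c = 343 m/s at room temperature are our own choices for the example):

```python
def doppler_speed(f_tx, f_rx, c=343.0):
    """Relative speed from the Doppler shift of a received pure tone:
    v = (f_rx - f_tx) / f_tx * c, positive when the target approaches."""
    return (f_rx - f_tx) / f_tx * c

# With 1 Hz frequency resolution on a 20 kHz tone, the speed
# resolution is about 1.7 cm/s, i.e., cm/s-level accuracy:
v = doppler_speed(20_000.0, 20_001.0)
```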
However, they are not suitable for extracting precise timing information due to the periodicity and shallow peaks of auto-correlation functions.", "id": "2c67dbe8-fd43-495f-9419-b78c614400ca", "level": "paragraph", "origin_cites_number": 7, "parent_id": "81396aea-2f93-4430-8227-b6862ecb550e", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Waveform Design" ], [ "subsubsection", "Waveforms for Active Sensing" ], [ "paragraph", "Pure Tone Signals" ] ], "subsections": [], "title": "Pure Tone Signals" }, { "cite_extract_rate": 0, "cites": [], "content": "FHSS modulation rapidly changes the carrier frequency among many distinct frequencies occupying a large spectral band. The changes are controlled by a code or a spreading sequence. \nThe key design consideration of FHSS modulation is to choose an appropriate orthogonal sequence. Commonly used ones for acoustic applications include Zadoff-Chu (ZC)~, Barker code~, GSM training sequence~, and m-sequence~. A baseband ZC signal and its STFT spectrogram after modulation are shown in Fig.~\\ref{fig:baseband ZC} and~\\ref{fig:ZC stft}, respectively. \nThanks to the use of spectrum spreading sequences, correlating FHSS modulated signals results in sharper peaks and smaller sidelobes compared to pure tone or even chirp signals~, \nmaking {multiple acoustic reflections readily distinguishable}. Therefore, FHSS modulated signals are a popular choice for high-accuracy gesture tracking~. 
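A small Python sketch illustrates why such spreading sequences correlate so sharply. Here a Zadoff-Chu sequence is used (root and length are arbitrary picks for the example, with the root coprime to the length); it has a constant envelope and near-zero periodic autocorrelation at every nonzero lag:

```python
import cmath

def zadoff_chu(u, N):
    """Root-u Zadoff-Chu sequence of odd length N (requires gcd(u, N) = 1)."""
    return [cmath.exp(-1j * cmath.pi * u * n * (n + 1) / N) for n in range(N)]

def periodic_autocorr(x, lag):
    """Normalized circular autocorrelation of x at the given lag."""
    N = len(x)
    return sum(x[n] * x[(n + lag) % N].conjugate() for n in range(N)) / N

zc = zadoff_chu(u=5, N=63)
peak = abs(periodic_autocorr(zc, 0))                                # 1.0
side = max(abs(periodic_autocorr(zc, lag)) for lag in range(1, 63))
# `side` is numerically ~0: the correlation is impulse-like.
```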
Unfortunately, due to their high sensitivity, they can be easily corrupted by channel impairments including path loss, Doppler effects, background interference, etc., making them only suitable for short-range sensing.", "id": "302b4dfc-9df4-4be3-ba3c-c30f8ddd332c", "level": "paragraph", "origin_cites_number": 6, "parent_id": "81396aea-2f93-4430-8227-b6862ecb550e", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Waveform Design" ], [ "subsubsection", "Waveforms for Active Sensing" ], [ "paragraph", "FHSS Signals" ] ], "subsections": [], "title": "FHSS Signals" }, { "cite_extract_rate": 0, "cites": [], "content": "A chirp is a signal in which the frequency increases (up-chirp) or decreases (down-chirp) with time.\nA linear chirp \nis represented by ${s\\left( t \\right) = A\\cos \\left( {2\\pi \\left( {{f_{\\min }}t + \\frac{k}{2}{t^2}} \\right) + \\phi } \\right)}$, where ${f_\\mathrm{min}}$ is the initial frequency, ${A}$ is the maximum amplitude, ${\\phi}$ is the initial phase, and $k=\\frac{B}{T}$ is the \\textit{modulation coefficient} or \\textit{chirp rate}, with $B$ the swept bandwidth and $T$ the chirp duration~. In sensing, chirp signals are transmitted repeatedly, and thus they are also known as Frequency Modulated Continuous Wave (FMCW). The time and frequency domain representations of a chirp signal are shown in Fig.~\\ref{fig:chirp waveform} and~\\ref{fig:chirp stft}, respectively.\nAuto-correlating chirp signals results in sharp and narrow peaks with a 3~\\!dB temporal width inversely proportional to the signal bandwidth, a property also known as {\\it pulse compression}. 
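Pulse compression can be observed directly with a few lines of Python (a simplified sketch with arbitrarily chosen parameters; the raw, unwindowed autocorrelation is used rather than a production-grade matched filter):

```python
import math

def chirp(f0, bandwidth, duration, fs):
    """Linear up-chirp cos(2*pi*(f0*t + (k/2)*t^2)) with chirp rate k = B/T."""
    k = bandwidth / duration
    return [math.cos(2 * math.pi * (f0 * (i / fs) + 0.5 * k * (i / fs) ** 2))
            for i in range(int(duration * fs))]

def autocorr(x, lag):
    """Unnormalized autocorrelation of x at a non-negative lag."""
    return sum(x[i] * x[i + lag] for i in range(len(x) - lag))

fs = 48_000
s = chirp(f0=18_000.0, bandwidth=4_000.0, duration=0.005, fs=fs)  # 18-22 kHz, 5 ms
peak = autocorr(s, 0)
# The correlation collapses within roughly fs/B = 12 samples of lag 0,
# although the chirp itself is 240 samples long -- the "compressed" pulse:
side = max(abs(autocorr(s, lag)) for lag in range(30, 60))
```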
Since the energy of a signal does not change during pulse compression, concentration of its signal power within a narrow interval leads to a peak SNR gain proportional to the product of signal bandwidth (B) and duration (T)~.\nAs a result, acoustic sensing systems employing chirp signals enjoy three major advantages.\nFirst, chirp signals are robust to dynamic channel conditions such as Doppler effects.\nSecond, they are detectable even when the signal power is under the noise floor~, and thus can combat strong background noise or interference.\nThird, chirp signals are resilient to multipath fading, allowing extraction of multipath components~. \nThanks to these desirable features, chirp signals have been widely used as a building block in temporal feature extraction~ and channel characteristics estimation~, as discussed in more detail in Section~\ref{sec:temporal features} and \ref{sec:channel characteristics}.", "id": "c9f2e0e8-956e-4749-ba53-184767221490", "level": "paragraph", "origin_cites_number": 5, "parent_id": "81396aea-2f93-4430-8227-b6862ecb550e", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Waveform Design" ], [ "subsubsection", "Waveforms for Active Sensing" ], [ "paragraph", "Chirp Signals" ] ], "subsections": [], "title": "Chirp Signals" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:waveform for communication}\nThe aforementioned waveforms are applicable to aerial acoustic communications as well, but different signal configurations (e.g., frequency, spread sequence or chirp rate) represent different source symbols. A comparison of these waveforms and their suitable applications in communication can be found in TABLE~\ref{tab:waveform design comparision for communication}. 
Commonly used modulations in acoustic communication include Frequency Shift Keying (FSK), Orthogonal Frequency Division Multiplexing (OFDM), and Chirp Spread Spectrum (CSS).", "id": "88a7f546-04d7-40dd-919e-50020cc79686", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "d9d45b4c-9d64-4c03-8dca-24df34755cb3", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Waveform Design" ], [ "subsubsection", "Waveforms for Communication" ] ], "subsections": [ "5ced4afd-2e29-4a95-a4fe-40b55adf2b61", "28d5c462-5594-4c4f-9716-db789c2812dd", "584a9ae8-d206-4002-aec1-cb81acac1e18" ], "title": "Waveforms for Communication" }, { "cite_extract_rate": 0, "cites": [], "content": "As the amplitude and phases of acoustic signals are prone to channel distortions, modulation techniques based on these two features (e.g., Amplitude Shift Keying or Phase Shifted Keying) are not reliable~. In contrast, \nFSK, a modulation technique that uses pure tone signals with distinct frequencies to represent data bits, is more robust to channel distortion and interference. It can be demodulated via FFT analysis, Hilbert Transform, or coherent detection. 
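A toy binary-FSK round trip with FFT-based demodulation can be sketched as follows (the tone frequencies and symbol length are assumptions for this sketch, chosen to fall on exact FFT bins):

```python
import numpy as np

# Assumed parameters: bit 0 -> 19 kHz tone, bit 1 -> 21 kHz tone, 10 ms symbols.
fs, sym_len = 48_000, 480
freqs = {0: 19_000, 1: 21_000}
t = np.arange(sym_len) / fs

def fsk_mod(bits):
    # One pure tone per symbol interval.
    return np.concatenate([np.cos(2 * np.pi * freqs[b] * t) for b in bits])

def fsk_demod(sig):
    # FFT per symbol; pick the bit whose tone is closest to the peak bin.
    out = []
    for i in range(0, len(sig), sym_len):
        spec = np.abs(np.fft.rfft(sig[i:i + sym_len]))
        f_peak = np.argmax(spec) * fs / sym_len
        out.append(min(freqs, key=lambda b: abs(freqs[b] - f_peak)))
    return out

bits = [1, 0, 1, 1, 0]
decoded = fsk_demod(fsk_mod(bits))
```

With only two tones per symbol interval, the low data rate noted above is apparent: one bit per 10 ms symbol yields 100 bps in this configuration.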
Though FSK has low complexity in both modulation and demodulation design, its achievable data rates are low~.", "id": "5ced4afd-2e29-4a95-a4fe-40b55adf2b61", "level": "paragraph", "origin_cites_number": 2, "parent_id": "88a7f546-04d7-40dd-919e-50020cc79686", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Waveform Design" ], [ "subsubsection", "Waveforms for Communication" ], [ "paragraph", "FSK" ] ], "subsections": [], "title": "FSK" }, { "cite_extract_rate": 0, "cites": [], "content": "Compared to single-carrier FSK, OFDM is theoretically more efficient by modulating data symbols on multiple orthogonal subcarriers. It has the potential to achieve a higher throughput under the same bandwidth~. However, when implemented in software for acoustic communication, its advantage cannot be fully realized. Specifically, advanced processing modules such as carrier sensing and carrier frequency offset correction, which are common in RF-based OFDM systems, require hardware-level modification due to tight timing requirements. Moreover, as OFDM only maps bit streams onto individual subcarriers, a separate modulation scheme is needed to encode these bit streams. High-order modulations such as Quadrature Amplitude Modulation (QAM) are infeasible in acoustic communication due to significant channel distortions.\nAs a result, \nOFDM-based systems achieve similar throughputs to those based on FSK. 
Both FSK and OFDM are only suitable for short-range communication between stationary devices~.\n\\begin{figure}\n \\centering\n \\includegraphics[width=3.4in]{Figs/cssframe.pdf}\\\\\n \\caption{An example CSS frame}\n \\label{fig:css frame}\n\\end{figure}", "id": "28d5c462-5594-4c4f-9716-db789c2812dd", "level": "paragraph", "origin_cites_number": 3, "parent_id": "88a7f546-04d7-40dd-919e-50020cc79686", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Waveform Design" ], [ "subsubsection", "Waveforms for Communication" ], [ "paragraph", "OFDM" ] ], "subsections": [], "title": "OFDM" }, { "cite_extract_rate": 0, "cites": [], "content": "With the ability to handle dynamic channel conditions and extend communication ranges, CSS is arguably the most dominant modulation technique for acoustic communications~.\nIt employs noise-resilient chirp signals as carriers, making it robust to co-channel interference, path losses, multipath fading, and the Doppler effects~. \nA CSS frame starts with a preamble followed by different symbols as illustrated in Fig.~\\ref{fig:css frame}. Both preambles and data symbols in a CSS frame use chirp signals but with different chirp rates. 
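A minimal sketch of the idea (parameters assumed for illustration): encode bit 1 as an up-chirp and bit 0 as a down-chirp, then demodulate each symbol by correlating against both templates; real CSS frames additionally carry the preamble described above:

```python
import numpy as np

# Assumed parameters: 18-23 kHz chirps, 10 ms per symbol.
fs, T = 48_000, 0.01
f_lo, f_hi = 18_000.0, 23_000.0
t = np.arange(int(fs * T)) / fs
k = (f_hi - f_lo) / T
up = np.cos(2 * np.pi * (f_lo * t + 0.5 * k * t ** 2))      # bit 1
down = np.cos(2 * np.pi * (f_hi * t - 0.5 * k * t ** 2))    # bit 0
N = len(t)

def css_demod(sym):
    # The template whose chirp rate matches the symbol compresses into
    # a far larger correlation value than the mismatched one.
    return int(abs(np.dot(sym, up)) > abs(np.dot(sym, down)))

frame = [1, 0, 0, 1]
rx = np.concatenate([up if b else down for b in frame])
decoded = [css_demod(rx[i:i + N]) for i in range(0, len(rx), N)]
```

The large gap between matched and mismatched correlation values is what gives CSS its robustness to noise and interference.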
\nThough with significantly improved robustness, CSS still suffers from low transmission rates.", "id": "584a9ae8-d206-4002-aec1-cb81acac1e18", "level": "paragraph", "origin_cites_number": 3, "parent_id": "88a7f546-04d7-40dd-919e-50020cc79686", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Waveform Design" ], [ "subsubsection", "Waveforms for Communication" ], [ "paragraph", "CSS" ] ], "subsections": [], "title": "CSS" }, { "cite_extract_rate": 0, "cites": [], "content": "Next we discuss two design challenges in devising or selecting suitable waveforms for acoustic sensing and communication. \n\begin{itemize}\n \item \n Though transmissions in the range from 20Hz to 18KHz enjoy better channel gains and a relatively large bandwidth,\n their audibility limits the application scenarios. \n In contrast, transmissions in the inaudible range, though largely free from ambient noise, suffer from a limited bandwidth and significant attenuation. Therefore, in choosing a suitable bandwidth for target applications, one needs to consider,\n \begin{challenge}\n \label{clg:bw_tradeoff}\n Trade-offs among channel quality, bandwidth and audibility.\n \end{challenge} \n \item \n As discussed in TABLE~\ref{tab:waveform design comparision for communication}, existing waveforms either suffer from short communication ranges or low data rates. \n\begin{challenge}\n\label{clg:comm_range}\n Designing modulation schemes that are suitable for both {high-speed and long-range inaudible aerial communication.}\n\end{challenge}\nTo do so, one needs to devise robust high-order modulation techniques. 
One such approach is presented in Section~\\ref{sec:long range high speed communication via loose orthogonal modulation}.\n\\end{itemize}", "id": "796bdfde-ebec-46cd-9205-1d9743ebb829", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "d9d45b4c-9d64-4c03-8dca-24df34755cb3", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Waveform Design" ], [ "subsubsection", "Challenges for Waveform Design" ] ], "subsections": [], "title": "Challenges for Waveform Design" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:temporal features}\nTemporal features refer to timing information such as onsets, time-of-arrival and time-difference-of-arrival. This section focuses on the techniques to extract these features. \n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.95\\columnwidth]{Figs/sig_start_det.pdf}\n \\caption{Illustration of signal onset detection.}\n \\label{fig:signal start detection}\n\\end{figure}\n\\begin{figure}[b]\n \\centering\n \\subfigure[Near-far effect.]\n {\n \\label{fig:near far effect}\n \\includegraphics[width=0.45\\columnwidth]{Figs/near_far_survey2.pdf}\n }\n \\hspace{0.005\\linewidth}\n \\subfigure[Clap noise and multipath effect.]\n {\n \\label{fig:multipath effect}\n \\includegraphics[width=0.45\\columnwidth]{Figs/onset_detection_one4.pdf}\n }\n\\caption{(a) The correlation peak of distant samples is even weaker than the noisy peaks from closer ones, making it challenging to set an appropriate threshold for reliable onset detection, (b) Interference including clap noise and multipath effect can lead to incorrect timestamp estimation.}\n\\label{fig:preamble detection}\n\\end{figure}", "id": "bc9adb56-4a50-4732-9afa-a4e88b5f1120", "level": "subsection", "origin_cites_number": 0, "parent_id": "a1a6cc17-4b5b-4f7d-9b44-af40934be2da", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on 
Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Temporal Feature Extraction" ] ], "subsections": [ "24f581fc-c508-47b3-ba2f-496fa4dbb101", "a922dcbd-2b47-40b2-b138-4acfd740443d", "c9f8205c-0e8d-4134-ada2-757458f60efc", "b6a824f2-f5d8-474b-a9f8-e52831975f39" ], "title": "Temporal Feature Extraction" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:onset point detection}\nOnset detection, determining the presence and the start time of a particular reference signal\footnote{In communication systems, the reference signal is often the preamble and hence onset detection is the same as preamble detection.}, is the cornerstone of acoustic communication and time-sensitive sensing applications. \nOnset detection is often achieved via cross-correlation between the captured signal and a known \textit{reference signal}. The waveform of the reference signal is carefully designed so that there exists a sharp peak in the correlation results when the reference signal is present; otherwise, the maximum correlation value is far below a pre-defined threshold (Fig.~\ref{fig:signal start detection}). As a result, in the absence of multipath effects or strong interference, a simple threshold-based approach is sufficient to detect the presence of the reference signal, and the timestamp of the maximum peak is recorded as the onset of the signal.\nIn practice, the accuracy of threshold-based onset detection is degraded by three phenomena, namely, device heterogeneity, the ``near-far'' effect and the multipath effects. Device heterogeneity refers to the fact that speakers and microphones on different devices have different gains. A constant threshold may not work for all devices. \nThe ``near-far'' effect, a term originating from wireless communication systems, describes the phenomenon where the signal power received at a base station is dominated by the signals from closer user devices due to signal attenuation over distance. 
\nSimilarly, in acoustic systems, when a transmitter and a receiver are close, the resulting correlation peak at the receiver can readily exceed a pre-defined threshold and hence the reference signal is detected (Fig.~\ref{fig:near far effect}). However, when their distance becomes larger, the respective correlation peak may fall below the threshold and in fact sometimes below the values caused by close-by interfering sources. In a multipath-rich environment, a receiver not only captures the Line-of-Sight (LoS) signal but also receives multiple delayed and attenuated copies. These delayed and attenuated copies, called Non-Line-of-Sight (NLoS) signals, can add up constructively and thus surpass LoS signals in peak intensity. A naive thresholding approach or simply finding the largest peak will result in false onset detection.\nFig.~\ref{fig:multipath effect} illustrates the effects of the three phenomena when the reference signal is a chirp signal. In the experiment, hand claps prior to the transmission of the chirp signal result in noticeable peaks even after cross-correlation at the receiver. Peaks due to NLoS paths are in fact larger than that from the LoS path.", "id": "24f581fc-c508-47b3-ba2f-496fa4dbb101", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "bc9adb56-4a50-4732-9afa-a4e88b5f1120", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Temporal Feature Extraction" ], [ "subsubsection", "Onset Detection" ] ], "subsections": [], "title": "Onset Detection" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:basic timing measurements}\nEstimating Time-of-Arrival (ToA) or Time-Difference-of-Arrival (TDoA) of transmitted acoustic waves at target receivers builds upon onset detection and plays an important role in ranging and localization. 
Depending on whether the intended targets are equipped with acoustic transceivers, existing techniques can be further categorized into device-based and device-free approaches. Among device-based approaches, \\textit{one-way} or \\textit{two-way} sensing methods typically utilize cross-correlation to achieve sample-level timing resolutions. In device-free approaches, to estimate the time-of-flight between transmission and the reception of reflected waves from a target, phase information can also be used to achieve subsample-level resolutions. \n\\begin{figure}[b]\n \\centering\n \\subfigure[ToA via one-way sensing.]\n {\n \\label{fig:toa via one way}\n \\includegraphics[width=0.45\\columnwidth]{Figs/toa_one_way.pdf}\n }\n \\hspace{0.002\\textwidth}\n \\subfigure[TDoA via one-way sensing.]\n {\n \\label{fig:tdoa via one way}\n \\includegraphics[width=0.45\\columnwidth]{Figs/tdoa_one_way.pdf}\n }\n\\caption{One-way sensing paradigm.}\n\\label{fig:one way sensing paradigm}\n\\end{figure}\n\\begin{figure}[t]\n \\centering\n \\subfigure[ToA via two-way sensing.]\n {\n \\label{fig:toa via two way}\n \\includegraphics[width=0.45\\columnwidth]{Figs/toa_two_way.pdf}\n }\n \\hspace{0.002\\textwidth}\n \\subfigure[TDoA via two-way sensing.]\n {\n \\label{fig:tdoa via two way}\n \\includegraphics[width=0.45\\columnwidth]{Figs/tdoa_two_way.pdf}\n }\n\\caption{Two-way sensing paradigm.}\n\\label{fig:two way sensing paradigm}\n\\end{figure}", "id": "a922dcbd-2b47-40b2-b138-4acfd740443d", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "bc9adb56-4a50-4732-9afa-a4e88b5f1120", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Temporal Feature Extraction" ], [ "subsubsection", "Timing Estimation" ] ], "subsections": [ "9eb2ca92-84b6-4759-ab11-266dd748dcd4", "571d62c1-4fb7-49fb-a5af-06e289838116" ], "title": "Timing Estimation" }, { "cite_extract_rate": 
0, "cites": [], "content": "One-way sensing generally refers to a sensing paradigm where time information is obtained through unidirectional transmissions. In this paradigm, the transmitter or receiver can be multiple separated devices or a single device with multiple channels as shown in Fig.~\ref{fig:one way sensing paradigm}. One-way sensing is achieved via tight synchronization so that a receiver knows precisely the onset of an acoustic transmission to estimate its flight time.\nFor time synchronization, this approach often exploits other high-speed signal sources~. These signal sources are often radio signals such as WiFi, Bluetooth, and Zigbee, whose propagation time is negligible within the maximum range of acoustic waves. For ToA estimation, an acoustic signal and a synchronization signal are transmitted simultaneously. A receiver determines the ToA from the difference in arrival time between the two signal sources. \nFor TDoA estimation, multiple transmitters or receivers need to be tightly synchronized. In some cases, transmitters or receivers co-locate physically on a single device as shown in Fig.~\ref{fig:tdoa via one way}. In transmitter-synchronized systems~, acoustic transmissions are triggered concurrently and they arrive at a receiver after different propagation delays. TDoA is then obtained by cross-correlating received samples with known reference signals. \nIn receiver-synchronized systems, TDoA is computed by cross-correlating the received samples from different receive channels~ as shown in Fig.~\ref{fig:tdoa via one way}. \nOne-way sensing is simple and effective in extracting time information.\nHowever, the main drawback lies in its need for tight synchronization, especially for cases where distributed devices are involved. This tight synchronization requirement can be compromised by {\bf Challenge~\ref{clg:system_delay}} (i.e., uncertain system delays), thereby significantly affecting the timing performance. 
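Setting system delays aside, the receiver-synchronized TDoA computation itself reduces to per-channel cross-correlation; a minimal sketch with made-up arrival offsets:

```python
import numpy as np

# Known reference chirp (illustrative parameters).
fs = 48_000
t = np.arange(480) / fs
ref = np.cos(2 * np.pi * (18_000 * t + 0.5 * (6_000 / 0.01) * t ** 2))

# Two synchronized receive channels; arrival offsets are made up.
delay_a, delay_b = 100, 137
ch_a = np.concatenate([np.zeros(delay_a), ref, np.zeros(200)])
ch_b = np.concatenate([np.zeros(delay_b), ref, np.zeros(163)])

def onset(ch):
    # Cross-correlate with the reference; the peak marks the onset sample.
    return int(np.argmax(np.correlate(ch, ref, mode="valid")))

tdoa_samples = onset(ch_b) - onset(ch_a)   # 37 samples -> 37 / fs seconds
```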
\nThough the uncertainty can be mitigated by kernel-level implementations on general-purpose OSs or by customized hardware, such requirements severely restrict the applicability of one-way sensing.", "id": "9eb2ca92-84b6-4759-ab11-266dd748dcd4", "level": "paragraph", "origin_cites_number": 3, "parent_id": "a922dcbd-2b47-40b2-b138-4acfd740443d", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Temporal Feature Extraction" ], [ "subsubsection", "Timing Estimation" ], [ "paragraph", "One-Way Sensing" ] ], "subsections": [], "title": "One-Way Sensing" }, { "cite_extract_rate": 0, "cites": [], "content": "To relax the need for tight synchronization, two-way sensing has been proposed at the expense of increased hardware and processing complexity.\nIn two-way sensing, acoustic transmissions are bi-directional. It is assumed that each device is equipped with both a speaker and a microphone. Extracting ToA information between two devices is shown in Fig.~\ref{fig:toa via two way}. At time ${t_A^s}$, device A starts an acoustic transmission. Device B detects the acoustic signal at time ${t_B^r}$ and starts another transmission at time ${t_B^s}$ after an arbitrary delay. Device A detects the second transmission at time ${t_A^r}$. 
Therefore, the time information (ToA) can be derived as~:\n\begin{equation}\label{eq:toa via two way}\n{t } = \frac{1}{2}\left( {t_A^r - t_A^s} \right) + \frac{1}{2}\left( {t_B^r - t_B^s} \right),\n\end{equation}\nso that device B's arbitrary turnaround time ${t_B^s - t_B^r}$ is subtracted from half the round trip. If both transmissions can be received by a third device, then time differential information (TDoA depicted in Fig.~\ref{fig:tdoa via two way}) can be derived by~:\n\begin{equation}\label{eq:tdoa via two way}\n{t } = \frac{1}{2}\left( {t_A^r - t_A^s} \right) + \frac{1}{2}\left( {t_B^r - t_B^s} \right) - \left( {t_C^r - t_C^s} \right).\n\end{equation}\nNote that in the presence of uncertain system delay ({\bf Challenge~\ref{clg:system_delay}}), ${t_A^s,}$ ${t_A^r, t_B^r, t_B^s, t_C^r, t_C^s}$ cannot be recorded precisely in user applications. To overcome such a challenge, two novel techniques have been proposed in . First, in addition to transmissions from other devices, each transmitting device also records its own transmission through its on-board microphone. Second, sample counting in the audio buffer of a device is used to estimate the time elapsed between consecutive acoustic receptions. 
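A toy numerical check of the two-way exchange (all timestamps below are made up): taking half the round trip at A and removing half of B's turnaround recovers the one-way flight time, with B's arbitrary response delay cancelling out:

```python
one_way = 0.010                  # true one-way propagation delay (s)
tA_s = 0.000                     # A transmits
tB_r = tA_s + one_way            # B receives
tB_s = tB_r + 0.123              # B replies after an arbitrary delay
tA_r = tB_s + one_way            # A receives the reply

# Half the round trip minus half of B's turnaround time:
toa = 0.5 * (tA_r - tA_s) - 0.5 * (tB_s - tB_r)   # equals one_way
```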
Combining with the known distance (and thus known propagation delay) between the microphone and the speaker on each device, one can then estimate ToA and TDoA following Eqn.~\eqref{eq:toa via two way} and ~\eqref{eq:tdoa via two way}.", "id": "571d62c1-4fb7-49fb-a5af-06e289838116", "level": "paragraph", "origin_cites_number": 2, "parent_id": "a922dcbd-2b47-40b2-b138-4acfd740443d", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Temporal Feature Extraction" ], [ "subsubsection", "Timing Estimation" ], [ "paragraph", "Two-Way Sensing" ] ], "subsections": [], "title": "Two-Way Sensing" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:phase-enabled accurate timing}\nBoth one-way and two-way sensing for ToA or TDoA rely on cross-correlation to determine the onset of signals, whose resolution is limited by the sampling rate of respective devices. At a sampling rate of ${f_s = 48}$~\!kHz, the time resolution is limited to $\frac{1}{f_s} \approx 2.1 \times 10^{-5}$~\!s, or equivalently a range resolution of 7~\!mm if the sound speed is 340~\!m/s. However, CC-based onset detection often suffers from errors of 2 to 3 samples~, resulting in timing errors of 40 to 60 $\mu$s. To achieve finer time granularity, other signal features should be exploited. \nIn device-free sensing, since the acoustic transmitter and receiver are co-located, to estimate the time of flight of reflected waves from targets of interest, phase information can be exploited. The relationship between phase changes and time is waveform-dependent. \nWe next present the basic approaches to extract phase from three waveforms discussed in Section~\ref{sec:waveform for sensing}, namely, pure tone signals, FHSS signals, and chirp signals. 
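The resolution figures above follow from one line of arithmetic:

```python
# Correlation-limited resolution at fs = 48 kHz with c = 340 m/s.
fs, c = 48_000, 340.0
dt = 1 / fs          # ~2.1e-5 s per sample
dr = c * dt          # ~7 mm of range per sample; a 2-3 sample onset
                     # error thus maps to roughly 14-21 mm of range error.
```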
\n\\begin{figure}[b]\n \\centering\n \\includegraphics[width=2.5in]{Figs/coherent_detector.pdf}\\\\\n \\caption{Structure of coherent receiver. LPF in figure denotes Low Pass Filter.}\n \\label{fig:coherent receiver}\n\\end{figure}", "id": "c9f8205c-0e8d-4134-ada2-757458f60efc", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "bc9adb56-4a50-4732-9afa-a4e88b5f1120", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Temporal Feature Extraction" ], [ "subsubsection", "Phase-enabled Fine-grained Timing" ] ], "subsections": [ "6fc27b18-97c1-4b80-b8da-1fb79e866a5d", "360c5645-893b-4b0f-bc1d-a939f62766df", "16d5e52c-3782-4673-b704-7f0ed9702c2f" ], "title": "Phase-enabled Fine-grained Timing" }, { "cite_extract_rate": 0, "cites": [], "content": "The phase of pure tone signals can be extracted by a coherent receiver whose structure is shown in Fig.~\\ref{fig:coherent receiver}. In a coherent receiver, two identical copies of an input signal are multiplied by ${\\cos \\left( {2\\pi ft} \\right)}$ and its ${\\frac{\\pi}{2}}$ phase shifted version ${- \\sin \\left( {2\\pi ft} \\right)}$, respectively. After a low pass filter, the In-phase (I) and Quadrature-phase (Q) components can be obtained. The absolute phase $\\theta$ in $s\\left(t\\right) = \\cos\\left( 2\\pi ft + \\phi \\right)$ is calculated by $\\tan^{-1}(Q/I)$. Phase changes can be determined by subtracting consecutive absolute phase as illustrated in Fig.~\\ref{fig:phase for pure tone and FHSS}. \nConsider a pure tone oscillating at ${f_c = 20}$ kHz and the sampling rate is ${f_s = 48}$ kHz. One sample interval corresponds to a phase change of $\\frac{f_c}{f_s}\\times 2\\pi = \\frac{5}{6}\\pi$. Therefore, if the detectable phase change is below $\\frac{5}{6}\\pi$, one can readily achieve sub-sample time resolution. 
Since such a $\\frac{5}{6}\\pi$ phase change is comparable to the maximum $2\\pi$ value, phase-based approaches generally outperform CC-based ones in time granularity. A major shortcoming of pure tone signals is their vulnerability to background noise and multipath effects. \n\\begin{figure}[thp]\n \\centering\n \\subfigure[Time to phase via pure tone and FHSS signals.]\n {\n \\label{fig:phase for pure tone and FHSS}\n \\includegraphics[width=0.85\\columnwidth]{Figs/phase_basics5.pdf}\n }\n \\\\\n \\subfigure[Time to frequency via chirp mixing.]\n {\n \\label{fig:phase for chirp}\n \\includegraphics[width=0.85\\columnwidth]{Figs/phase_basics6.pdf}\n }\n\\caption{(a) Temporal characteristics are converted to phase in pure tone and FHSS signal based system. (b) Chirp mixing transforms time measurements into frequency information.}\n\\label{fig:phase-based method}\n\\end{figure}", "id": "6fc27b18-97c1-4b80-b8da-1fb79e866a5d", "level": "paragraph", "origin_cites_number": 0, "parent_id": "c9f8205c-0e8d-4134-ada2-757458f60efc", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Temporal Feature Extraction" ], [ "subsubsection", "Phase-enabled Fine-grained Timing" ], [ "paragraph", "Pure Tone Signals" ] ], "subsections": [], "title": "Pure Tone Signals" }, { "cite_extract_rate": 0, "cites": [], "content": "The demodulated baseband signal $y(n)$ from receiver together with the original known one $x(n)$ are then utilized to estimate the Channel State Information (CSI, or Channel Impulse Response, hereafter, we will use CSI instead) $h(n) = \\frac{y(n)}{x(n)}$ (in complex form), where $n$ represents the sample index. We are often particularly interested in a specific index $n_s$ of $h(n)$ as they may reflect interactive actions such as finger movements or respiration. 
The absolute phase of $h(n_s)$ is not useful by itself, but if we track its phase difference across consecutive frames as illustrated in Fig.~\ref{fig:phase for pure tone and FHSS}, we can track the phase of, say, a finger-generated echo and hence locate it. \nThis also handles static ambient reflections, as they are excluded by the subtraction. The aforementioned processing steps, CSI estimation followed by CSI phase differencing, have become routine for FHSS signal based applications. \nFHSS signals are sensitive enough to detect small phase changes but are subject to various channel distortions such as path loss and the Doppler effect, and hence are only suitable for around-device sensing. To sense at long range or low SNR, chirp signals are more appropriate.", "id": "360c5645-893b-4b0f-bc1d-a939f62766df", "level": "paragraph", "origin_cites_number": 0, "parent_id": "c9f8205c-0e8d-4134-ada2-757458f60efc", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Temporal Feature Extraction" ], [ "subsubsection", "Phase-enabled Fine-grained Timing" ], [ "paragraph", "FHSS Signals" ] ], "subsections": [], "title": "FHSS Signals" }, { "cite_extract_rate": 0, "cites": [], "content": "Extracting phases from chirp signals is achieved through chirp mixing, which converts time information into frequency features as illustrated in Fig.~\ref{fig:phase for chirp}.\nAssume that the received signal $s'(t)$ is a delayed and attenuated version of a transmitted signal $s(t) = \cos\left(2\pi \left( f_\mathrm{min} t + \frac{B}{2T}t^2 \right) \right)$. In other words, $s'(t) = As(t - \Delta t) = A\cos\left(2\pi \left( f_\mathrm{min} \left(t- \Delta t \right) + \frac{B}{2T}\left(t - \Delta t\right)^2 \right) \right)$. In chirp mixing, we multiply $s(t)$ and $s'(t)$ and feed the resulting signal through a low pass filter. 
The mixed result becomes $s_\mathrm{mix}(t) = \frac{A}{2}\cos\left(2\pi \Delta t \frac{B}{T} t + 2\pi f_\mathrm{min} \Delta t - \pi\frac{B}{T}\Delta t ^ 2 \right)$. Clearly, $s_\mathrm{mix}(t)$ is a single tone signal with frequency $ \frac{B\Delta t}{T}$. Applying a Discrete Fourier Transform (DFT) on $s_\mathrm{mix}(t)$, we can estimate $\Delta t$ by $f_p\cdot T/B$, where $f_p$ is the peak frequency after DFT. From the discussion, we see that the timing resolution of chirp mixing is limited by its frequency resolution multiplied by a constant. The frequency resolution of DFT is proportional to $\frac{1}{T}$. Therefore, the granularity of $\Delta t$ is proportional to $\frac{1}{B}$. For example, for a chirp signal in the range of 18KHz to 24KHz, we have a timing resolution of 0.167ms. To further improve the timing resolution, phase information in the mixed signal can be utilized. We see that in the frequency bin $f_p = \Delta t \frac{B}{T}$, the phase of the mixed signal is $\phi = 2\pi \left( f_\mathrm{min} \Delta t - \frac{1}{2}\frac{B}{T}\Delta t ^ 2\right)$. Therefore, a time difference of 10$\mu$s corresponds to a phase difference of about $0.36\pi \approx 1.13$ radians when $f_\mathrm{min} = 18$~\!kHz, notable enough to be detected. The phase differences can be estimated over multiple chirps. \nTheoretically, phase-based methods can be combined with one-way sensing since the transmitter and receiver devices are synchronized. 
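The chirp-mixing pipeline can be sketched as follows (idealized: a circularly delayed, noise-free echo; parameters are illustrative):

```python
import numpy as np

fs, T = 48_000, 0.04
f_min, B = 18_000.0, 6_000.0
k = B / T
t = np.arange(int(fs * T)) / fs
tx = np.cos(2 * np.pi * (f_min * t + 0.5 * k * t ** 2))

d_samples = 24                          # true delay: 24 samples = 0.5 ms
rx = np.roll(tx, d_samples)             # idealized delayed copy

# Mixing turns the delay into a low-frequency beat at f_p = B*dt/T.
mixed = tx * rx
spec = np.abs(np.fft.rfft(mixed))
spec[0] = 0                             # ignore DC
f_p = np.argmax(spec[: len(spec) // 4]) * fs / len(mixed)   # beat frequency
dt_est = f_p * T / B                    # invert f_p = B*dt/T
```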
However, carrier frequency offsets~ due to clock drifts introduce constant phase shifts and result in large timing errors in the long run.", "id": "16d5e52c-3782-4673-b704-7f0ed9702c2f", "level": "paragraph", "origin_cites_number": 1, "parent_id": "c9f8205c-0e8d-4134-ada2-757458f60efc", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Temporal Feature Extraction" ], [ "subsubsection", "Phase-enabled Fine-grained Timing" ], [ "paragraph", "Chirp Signals" ] ], "subsections": [], "title": "Chirp Signals" }, { "cite_extract_rate": 0, "cites": [], "content": "\begin{itemize}\n \item As discussed in Section~\ref{sec:onset point detection}, the near-far effect and device heterogeneity problem make the use of a constant threshold in onset detection inadequate. Multipath effects and strong interference also pose challenges in onset detection. \n \begin{challenge}\label{clg:robust onset detection}\n Robust onset detection under dynamic channel conditions.\n \end{challenge} \n A possible solution is to set thresholds dynamically based on the perceived noise level. More discussions can be found in Section~\ref{sec:localization}.\n \item Design considerations such as waveforms, duration and bandwidth of signals will affect the accuracy and resolution of timing estimation in different approaches. In general, a longer time interval and a larger bandwidth lead to higher timing accuracy. Transmitting multiple frames is also instrumental in phase estimation. However, there is a trade-off between the timeliness of the estimation (e.g., in highly dynamic scenarios) and performance. 
\n\\begin{challenge}\\label{clg:timing estimation trade-off}\nIdentifying suitable configurations to achieve a good trade-off between performance and adaptiveness to dynamic changes in timing estimation.\n\\end{challenge}\n\\end{itemize}", "id": "b6a824f2-f5d8-474b-a9f8-e52831975f39", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "bc9adb56-4a50-4732-9afa-a4e88b5f1120", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Temporal Feature Extraction" ], [ "subsubsection", "Challenges for Temporal Feature Extraction" ] ], "subsections": [], "title": "Challenges for Temporal Feature Extraction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:channel characteristics}\nThis section presents channel characterization techniques for application development. The objective of channel profiling is to find specific acoustic features and their precise relationships with respective states. These acoustic features, formally defined as channel characteristics or Channel State Information (CSI), here includes influences from both acoustic front-ends and acoustic medium that shape these acoustic features. The construction of a precise relationship is also called channel modeling. In can be in the format of explicit mathematical model or implicit mapping, derived from model-driven and data-driven approaches, respectively. A canonical data flow of common channel modeling methods are shown in Fig.~\\ref{fig:channel modeling}. 
We first start with CSI representation.", "id": "5cd952a0-5695-4e0a-bba9-229ae4e0ed56", "level": "subsection", "origin_cites_number": 0, "parent_id": "a1a6cc17-2bd9-4bc4-8ecf-2a41bfa863a4", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Channel Profiling" ] ], "subsections": [ "1ba7a2ad-e680-4722-90bb-5ad0813bf65c", "da40022d-71f7-47b3-921c-09fdcca5adf5", "dac9d7d8-8d8c-425c-9d37-804468d655a1" ], "title": "Channel Profiling" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:cir representation}\nAs we mentioned before, CSI representation is to find a specific acoustic feature, say Doppler frequency, that deciphers the respective states, say hand gestures. \nFinding a successful CSI representation is non-trivial and often requires sophisticated domain knowledge and intensive observations. \nThe first step in CSI representation is acquiring adequate preliminary information on common acoustic features and acoustic channel characteristics, which can be found in Section~\ref{sec:acoustic hardware}. Profound knowledge of acoustic basics helps identify the best features to reflect the respective states. \nIn this section, we present typical CSI representations for the acoustic characteristics outlined in Section~\ref{sec:acoustic channel property}.\n\textbf{Path Loss, Reflection, Refraction, and Diffraction:} These generic wireless properties of acoustic signals can be a double-edged sword for acoustic sensing. On one hand, they attenuate signal strength and cause multipath effects that degrade signal quality, and are therefore harmful; on the other hand, they carry spatial information and are therefore useful for applications such as localization. A common way to capture these properties is to use correlation spectra. 
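As a minimal illustration of such a correlation spectrum (our own sketch, not taken from any cited system; the sampling rate, band, and delays are all assumed values), the snippet below cross-correlates a synthetic recording against a known chirp template; each peak in the correlation output corresponds to one propagation path, so the direct path and a reflection become separable.

```python
import numpy as np

fs = 48_000                      # assumed sampling rate (Hz)
t = np.arange(0, 0.01, 1 / fs)   # 10 ms chirp template
f0, f1 = 18_000, 22_000          # assumed near-inaudible sweep band
template = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * t[-1]) * t**2))

# Synthetic received signal with two paths: a direct path (delay 100
# samples) and a weaker reflection (delay 340 samples), plus noise.
rng = np.random.default_rng(0)
rx = np.zeros(4000)
rx[100:100 + template.size] += template
rx[340:340 + template.size] += 0.4 * template
rx += 0.01 * rng.standard_normal(rx.size)

# Correlation spectrum: matched filtering against the known template.
corr = np.correlate(rx, template, mode="valid")
strongest = int(np.argmax(np.abs(corr)))        # direct path
masked = np.abs(corr).copy()
masked[max(0, strongest - 50):strongest + 50] = 0  # suppress its mainlobe
second = int(np.argmax(masked))                 # reflection path
print(strongest, second)
```

Because the chirp has high processing gain, the two peaks stay distinguishable even though the reflection is much weaker than the direct arrival.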
To avoid the side effects of these properties, one can use the sophisticated waveform designs outlined in Section~\ref{sec:waveform design} to obtain correlation spectra that disentangle multiple reflections. Waveforms such as chirp signals, which offer high correlation and processing gains, are often favorable because multiple reflections become separable with such signals. For passive sensing systems, the features in the correlation spectra may not be noticeable and hence require dedicated signal processing. Since these signal processing tricks are application-dependent, we postpone further discussion to the application layer. To harness the multipath effect, one should adopt waveforms that enrich acoustic profiles. This can be achieved by wide-band sensing using signals of a relatively long duration. However, signals of long duration increase latency, so a trade-off between latency and duration has to be made. \n\textbf{Doppler Effect:} To estimate the Doppler effect of a target, one often lets a device transmit a pure tone signal due to its high Doppler accuracy, as mentioned in Section~\ref{sec:waveform for sensing}. If the target owns an acoustic-enabled device, we use this device to capture the transmitted signals and perform DFT analysis in a sliding-window fashion. Through this, we obtain the dominant frequency of the received signals over time. We then follow Eqn.~\ref{eq:doppler effect} to estimate the Doppler effect. Note that in practice, multiple tones may be transmitted and one can average the results from each tone to improve estimation accuracy. For targets without any devices, we can track the Doppler frequency from their acoustic reflections. The major difficulty here lies in pinning down the target-induced acoustic reflection in the presence of strong self-interference. The adoption of self-cancellation techniques and the utilization of prior information are common practice. 
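The sliding-window DFT procedure just described can be sketched as follows. This is our own toy example with assumed parameters (48 kHz sampling, a 20 kHz probe tone, sound speed 343 m/s) using the standard narrowband round-trip Doppler approximation for a reflecting target; the search is confined to a band around the carrier, mirroring the use of prior information discussed above.

```python
import numpy as np

fs, f0, c = 48_000, 20_000.0, 343.0    # sample rate, tone freq, sound speed
v_true = 1.0                           # simulated reflector speed (m/s)
fd = f0 * (2 * v_true / c)             # round-trip Doppler shift of the echo

t = np.arange(0, 0.5, 1 / fs)
rx = np.sin(2 * np.pi * (f0 + fd) * t)  # idealized Doppler-shifted echo

win = 4096                              # sliding-window DFT
freqs = np.fft.rfftfreq(win, 1 / fs)
band = (freqs > f0 - 500) & (freqs < f0 + 500)  # confine search near f0
speeds = []
for start in range(0, rx.size - win, win):
    seg = rx[start:start + win] * np.hanning(win)
    spec = np.abs(np.fft.rfft(seg))
    f_peak = freqs[band][np.argmax(spec[band])]  # dominant frequency
    speeds.append(c * (f_peak - f0) / (2 * f0))  # invert round-trip Doppler
print(np.mean(speeds))   # close to 1 m/s, limited by FFT bin resolution
```

The residual error here comes from the DFT bin width (fs/win ≈ 11.7 Hz); longer windows sharpen the estimate at the cost of slower updates, the timeliness trade-off noted earlier.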
For instance, the walking speed of human beings, typically around 1~\!m/s, can be used to confine the search range in DFT results when obtaining a target's moving speed. \n\textbf{Temperature Effect:} As mentioned earlier in Section~\ref{sec:acoustic channel property}, the propagation speed of acoustic signals depends on the ambient temperature. CSI representation in this case is therefore rather straightforward. Since speed is derived by dividing distance by time, and plentiful approaches for timing estimation are discussed in Section~\ref{sec:basic timing measurements}, time is thus the more suitable CSI representation. \n\textbf{Acoustic Dispersion:} Recall that acoustic dispersion describes the phenomenon that different acoustic frequency components propagate at heterogeneous speeds in solid media, as revealed by Eqn.~\ref{eq:acoustic dispersion equation}. This phenomenon can be observed as a rather distinctive time-domain waveform, as depicted in Fig.~\ref{fig:acoustic dispersion}. This in turn indicates that the time-domain acoustic waveform is a possible choice for CSI representation. Applications based on time-domain acoustic dispersive waveforms can be found in Section~\ref{sec:solution or application}. Meanwhile, as different frequency components arrive at different times, when converting the time-domain waveform into a frequency spectrogram, we may observe a characteristic curve of frequency versus time. If we interpolate this curve by, say, linear interpolation, we can utilize its curvature for CSI representation. This curvature may reveal distance~ or material information. When the curvature is not obvious or the waveform cannot be readily interpolated, the magnitude spectrogram or other common acoustic features such as MFCC or GFCC may be alternative choices. 
Note that the acoustic dispersion phenomenon is observable only when there are multiple frequency components, so in active sensing, wide-band signals such as chirp signals are preferable. In passive sensing, acoustic signals produced by natural events generally contain a rich set of frequency components. \n\textbf{Acoustic Resonance:} \nIn acoustic resonance enabled applications, \nthe goal is to inspect the resonance properties of a particular object. Therefore, the primary setting is to place sensors, including a (piezo) microphone and a (piezo) speaker, tightly on this object and focus on the properties of structure-borne acoustics. We then actively transmit wide-band signals such as chirp signals through the piezo speaker and let the piezo microphone capture the transmitted signals. \nFollowing that, DFT is typically applied to check the frequency response of the received signals. To detect minor resonance frequency changes, it is desirable to utilize a high sampling rate so that the DFT results achieve better frequency resolution. From the aforementioned processing steps, we can infer possible CSI representations, for instance, the peak frequency or the frequency response. An appropriate CSI representation can often ease significant efforts in model construction. \nWith appropriate CSI representations, the next step is to build an application-specific model. 
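As a small sketch of the resonance-inspection pipeline (entirely synthetic: the resonance frequency, decay rate, and sampling rate below are assumed values, and the object's response is idealized as a single decaying mode), the peak of a zero-padded DFT serves as the CSI:

```python
import numpy as np

fs = 96_000                          # high sampling rate for finer resolution
t = np.arange(0, 0.2, 1 / fs)
f_res = 31_250.0                     # hypothetical resonance of the object
# Idealized response of the object to a wide-band (e.g., chirp) excitation:
resp = np.exp(-40 * t) * np.sin(2 * np.pi * f_res * t)

n_fft = 1 << 18                      # zero-pad for sub-Hz bin spacing
spec = np.abs(np.fft.rfft(resp, n=n_fft))
freqs = np.fft.rfftfreq(n_fft, 1 / fs)
peak = freqs[np.argmax(spec)]        # CSI: peak (resonance) frequency
print(peak)
```

A shift of `peak` between measurements would then indicate a change in the object's state, e.g., applied force or contact.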
\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=3.5in]{Figs/channel_estimation3.pdf}\\\\\n \\caption{Canonical data flow of common channel modeling methods.} \n \\label{fig:channel modeling}\n\\end{figure}", "id": "1ba7a2ad-e680-4722-90bb-5ad0813bf65c", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "5cd952a0-5695-4e0a-bba9-229ae4e0ed56", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Channel Profiling" ], [ "subsubsection", "CSI Representation" ] ], "subsections": [], "title": "CSI Representation" }, { "cite_extract_rate": 0.09090909090909001, "cites": [ 1695 ], "content": "\\label{sec:channel model construction}\nConstructing an appropriate channel model is the cornerstone for acoustic sensing application development. A canonical data flow of common channel modeling methods are shown in Fig.~\\ref{fig:channel modeling}. \nThe basic idea behind channel model construction is to map respective states, say hand moving directions, to certain CSI representations, say Doppler frequency. A naive approach, also called \\textit{Data-Driven} (DD), to construct this model is through direct mapping, which often involves massive data and intensive training. Another method, known as \\textit{Model-Driven} (MD), formulates this mapping with a close-formed expression, which are often more efficient but non-trivial to achieve. The respective advantages, disadvantages, and suitable applications for these two methods are displayed in TABLE~\\ref{tab:processing technique comparison}. In this section, we highlight the basic steps for aforementioned two common channel model construction methods. 
\n\\begin{table}[b]\n\\caption{Comparison of MD and DD approaches.}\n\\label{tab:processing technique comparison}\n\\centering\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n & \\textbf{Problem} & \\textbf{Advantages} & \\textbf{Disadvantages} & \\textbf{\\makecell{Suitable\\\\applications}}\\\\ \\hline\nMD & \\makecell{Regression\\\\ problem} & \\makecell{Effective\\\\and\\\\efficiency} & \\makecell{Require domain\\\\knowledge,\\\\time-consuming} &\\makecell{Temporal\\\\feature\\\\/channel\\\\characteristics}\\\\ \\hline\nDD & \\makecell{Categorical\\\\ problem} & \\makecell{Simple\\\\in\\\\model\\\\designs} & \\makecell{Massive data,\\\\computation-\\\\intensive,\\\\heterogeniety\\\\problem}& \\makecell{Channel\\\\characteristics} \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\\begin{table*}[t]\n\\centering\n\\caption{Categorizing acoustic sensing enabled applications.}\n\\label{table:application comparison}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\textbf{Category} & \\textbf{Key physical layer } & \\makecell{\\textbf{Core processing techniques}} & \\textbf{Applications} \\\\ \\hline\n\\makecell{Aerial acoustic \\\\ communication} & \\makecell{Acoustic hardware,\\\\acoustic channel property} & Waveform design & Communication~ \\\\ \\hline\n\\multirow{4}{*}{\\makecell{Temporal feature\\\\based applications}} & \\multirow{4}{*}{\\makecell{Acoustic hardware,\\\\acoustic channel property}} & \n \\makecell{One-way sensing/\\\\ Phase-enabled accurate timing} & Ranging~ \\\\ \\cline{3-4} \n & & \n \\makecell{Signal onset point detection,\\\\Phase-enabled accurate timing} & Acoustic radar~ \\\\ \\cline{3-4} & & \n \\makecell{Signal onset point detection,\\\\one-way/two-way sensing} & Localization~ \\\\ \\cline{3-4} \n & & \n \\multirow{3}{*}{Phase-enabled accurate timing} & \n Device-based tracking~ \\\\ \\cline{4-4} \n & & & \n Device-free gesture tracking~ \\\\ \\cline{4-4} \n & & & \n \\makecell{Biometric sensing~ \\\\ } \\\\ \\hline\n\\multirow{2}{*}{\\makecell{Channel 
characteristics\\\\enabled applications}} \n& \\multirow{2}{*}{\\makecell{Acoustic hardware,\\\\acoustic channel property,\\\\platform diversity}} & \n \\multirow{3}{*}{Over-the-air channel profiling} & \n Gesture recognition~ \\\\ \\cline{4-4} \n & & & \n \\makecell{Speaker authentication~} \\\\ \\cline{4-4}\n & & & \n \\makecell{Novel interactive control~} \\\\ \\cline{3-4} \n & & \n \\multirow{3}{*}{\\makecell{Structure-borne channel profiling}} & \n Keystroke detection~ \\\\ \\cline{4-4} \n & & & \n \\makecell{Force detection~} \\\\ \\cline{4-4}\n & & & \n \\makecell{Touch recognition~} \\\\ \\cline{4-4}\n \\hline\n\\end{tabular}\n\\end{table*}\nModel-driven approach, formulating acoustic sensing problem as regression, can be effective and efficient, which however, require sophisticated signal processing designs and specific domain knowledge hence is often remarkable challenging. The key insight behind MD approaches is to mathematically quantify the relationship between CSI and certain application-specific states through regression, say mapping hand moving direction to specific Doppler frequency shift in a close-form equation. Actually, the hard part of this approach is to discovery the respective CSI and make it notable via signal processing techniques. Additionally, this approach may only be suitable when CSI is a scalar variable. Since these signal processing techniques are application-specific, we hence postpone their discussions in the next section. \nSince a close-form or explicit parametric model is often hard to craft, one often resort to DD approach. The DD approach, taking acoustic sensing as categorical problem, implicitly builds the inference model by directly mapping CSIs with respective application-specific states. \nThis approach is more suitable to handle the cases when CSI is in the format of a vector, say frequency response, or matrix, say MFCC. \nThe basic idea behind DD approach is fingerprinting. 
\nThis approach first collects sufficient data (CSIs) under the respective states in an offline manner and then utilizes these data to train a model using machine learning techniques. The model is then used online to predict the corresponding state for its input data. Such a processing pipeline may require less sophisticated domain knowledge and is hence comparatively easier than MD approaches. \nNevertheless, this approach requires massive data or constant calibration due to the device heterogeneity problem and is hence labor intensive. Meanwhile, high computational cost and storage requirements are other drawbacks of this method.", "id": "da40022d-71f7-47b3-921c-09fdcca5adf5", "level": "subsubsection", "origin_cites_number": 22, "parent_id": "5cd952a0-5695-4e0a-bba9-229ae4e0ed56", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Channel Profiling" ], [ "subsubsection", "Channel Model Construction" ] ], "subsections": [], "title": "Channel Model Construction" }, { "cite_extract_rate": 0, "cites": [], "content": "\begin{itemize}\n \item As outlined above, mining the specific CSI that reflects a particular application-specific state requires sophisticated signal processing techniques and domain knowledge and is hence remarkably difficult. Therefore, the challenge is\n \begin{challenge}\label{clg:appropriate CIR representation}\n Discovering an appropriate CSI representation.\n \end{challenge}\n This may require a strong background and intense observations. An optional choice is resorting to customized hardware that offers more flexible physical-layer reconfigurability, so as to gain more opportunities to enhance the acoustic features, thereby facilitating CSI extraction. 
More discussions on this can be found in Section~\ref{sec:future direction}.\n \item As mentioned earlier, channel models, especially those built with DD approaches, may be platform-dependent, requiring constant calibration or user configuration when a particular acoustic sensing application is deployed on heterogeneous platforms. As a result, it is particularly challenging in \n \begin{challenge}\label{clg:cross-platform models}\n Building cross-platform acoustic sensing channel models.\n \end{challenge}\n Potential solutions to this problem may be few-shot learning techniques~ or domain adaptation methods~, and more discussions can be found in Section~\ref{sec:future direction for channel characteristics}.\n\end{itemize}", "id": "dac9d7d8-8d8c-425c-9d37-804468d655a1", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "5cd952a0-5695-4e0a-bba9-229ae4e0ed56", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Core Acoustic Sensing Techniques" ], [ "subsection", "Channel Profiling" ], [ "subsubsection", "Challenges for Estimating Channel Characteristics" ] ], "subsections": [], "title": "Challenges for Estimating Channel Characteristics" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:solution or application} \nNow we turn to acoustic sensing applications driven by the mechanisms discussed in Section~\ref{sec:core acoustic mechanisms}. Existing applications fall into three categories according to their respective supporting mechanisms, namely aerial acoustic communication, applications leveraging temporal features, and solutions enabled by channel characterization. For each application, we review the key enabling techniques and the performance achieved. Potential future directions for each category are also discussed. 
A taxonomy of acoustic sensing applications is given in Table~\\ref{table:application comparison}.\n\\vspace{-.5ex}", "id": "797b87e2-7680-40a8-af64-25093d612994", "level": "section", "origin_cites_number": 0, "parent_id": "1352d6e5-225d-41b6-be40-dfba979ded66", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Sensing Applications" ] ], "subsections": [ "da9acbba-a470-4b06-82bc-ad6a546811e8", "98b8d49f-bd0a-4043-938a-589f8b510dfa", "ca05f4b8-afe7-4436-b55e-aef15e251d42" ], "title": "Acoustic Sensing Applications" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:aerial acoustic communication via waveform manipulation}\nExploiting aerial acoustic channels or air-borne acoustic signals for communication, known as aerial acoustic communication, has attracted much attention recently. Aerial acoustic communication enables any device that has embedded microphones and speakers to communicate without extra hardware or complex network configuration. It can serve as an alternative to traditional RF-based device-to-device communication such as Bluetooth, Near Field Communication (NFC), and WiFi Direct. In a nutshell, aerial acoustic communication is realized by representing information as a function of different waveform configurations. 
As summarized in TABLE~\\ref{table:communication}, different schemes mainly differ in the waveforms and bandwidth utilized, which result in different data rates, operational ranges and audibility.\n\\begin{table*}[b]\n\\vspace{-2ex}\n\\centering\n\\caption{Comparison of aerial acoustic communication systems.}\n\\label{table:communication}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\textbf{Work} & \\textbf{Modulation mechanism} & \\textbf{\\makecell{Maximum \\\\ operating range\\\\(m)}} & \\textbf{Bandwidth (kHz)} & \\textbf{Audible or inaudible} & \\textbf{Bit rate (bps)} \\\\ \\hline\n & OFDM & ${< 2}$ & ${0.735 - 4.41}$, ${18.4}$ & Audible or inaudible & \\makecell{5600\\\\1400} \\\\ \\hline\nDigital voice~ & M-ary FSK & ${< 2}$ & ${0 - 12}$ & Audible & ${2400}$ \\\\ \\hline\n & FHSS & ${20}$ & ${4.1 - 21}$ & Audible & ${20}$ \\\\ \\hline\nDhwani~ & OFDM & $<1$ & ${0 - 24}$ & Audible & ${2400}$ \\\\ \\hline\n~ & OFDM & ${8}$ & ${6.4 - 8}$ & Inaudible & ${240}$ \\\\ \\hline\n~ & \\makecell{Phase modulation via \\\\ complex lapped transform} & ${< 2}$ & ${6.4 - 8}$ & Inaudible & ${600}$ \\\\ \\hline\nDolphin~ & OFDM & ${8}$ & ${8 - 20}$ & Inaudible & ${500}$ \\\\ \\hline\n~ & BOK & ${25}$ & ${19.5 - 22}$ & Inaudible & ${16}$ \\\\ \\hline\n~ & QOK & \\makecell{${2.7}$ m at \\\\ ${35}$ dBSPL } & ${18.5 - 19.5}$ & Inaudible & ${15}$ \\\\ \\hline\n\\end{tabular}\n\\end{table*}\n\\label{sec:aerial acoustic communication}", "id": "da9acbba-a470-4b06-82bc-ad6a546811e8", "level": "subsection", "origin_cites_number": 8, "parent_id": "797b87e2-7680-40a8-af64-25093d612994", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Sensing Applications" ], [ "subsection", "Aerial Acoustic Communication" ] ], "subsections": [ "9fb0727b-45c8-4115-8f49-194aac6f59ca", "a3076352-9d14-44a7-9a4f-f361f0b3d262", "c20c01cd-646f-4f74-9e57-690b1dec2ccd", "b3f77aed-e86f-432b-884c-db87dcacd4a5" ], "title": "Aerial Acoustic 
Communication" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:short range high throughput communication}\nThe authors in~ presented a communication system by manipulating the properties of pure tone signals. It leverages the presence or absence of tone signals to represent information (${100\\%}$ Amplitude Shift Keying). This approach achieves a data rate of ${5.6}$~\\!kbps with multiple audible tones. The data rate reduces to ${1.4}$~\\!kbps when a single inaudible tone is used. A maximum communication range of ${2}$~\\!m can be achieved under LOS conditions. Another work called Digital voice~ modulates data bits in the audible band (under ${12}$~\\!kHz) via M-ary FSK, reporting a data rate at tens to thousands of bits per seconds (bps). \nDhwani is an acoustic-based NFC system. It employs OFDM modulations to encode messages and incorporate a technique called JamSecure to prevent malicious attacking. Dhwani occupies a bandwidth of ${24}$~\\!kHz and achieves a maximum data rate of ${2.4}$~\\!kbps. The authors of~ propose an acoustic-enabled mesh network built upon FHSS over 4.2KHz to 21KHz. The above work all utilize the audible band (normally below ${18}$~\\!kHz). \nTransmitting information over audible bands can be disruptive and thus several inaudible (hidden) communication systems are developed. The authors in~ propose to leverage the masking effect of the human hearing system to achieve inaudible acoustic communication, saliently addressing \\textit{\\textbf{Challenge}}-\\ref{clg:audibility}. \nIt employs OFDM modulations and achieves a data rate of ${240}$~\\!bps. 
Similar work in~ and~ attains data rates of ${600}$ and ${500}$~\!bps, respectively.", "id": "9fb0727b-45c8-4115-8f49-194aac6f59ca", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "da9acbba-a470-4b06-82bc-ad6a546811e8", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Sensing Applications" ], [ "subsection", "Aerial Acoustic Communication" ], [ "subsubsection", "Short-range Aerial Acoustic Communication" ] ], "subsections": [], "title": "Short-range Aerial Acoustic Communication" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:long range low data rate communication}\nNeither tone-based nor OFDM modulation techniques are robust to Doppler effects. The performance of these methods further deteriorates in multipath-rich environments, making them inadequate for long-range communication. In contrast, chirp spread spectrum (CSS) utilizes more interference-resilient chirp signals to encode information bits and thus achieves lower bit error rates and longer communication ranges.\nA chirp binary orthogonal keying (BOK) modulation technique is first presented in~. It utilizes orthogonal up and down chirp signals for modulation. The work in~ adopts BOK and realizes a communication range of up to ${25}$ m at a data rate of ${16}$ bps. Soonwon et al.~ improve upon BOK and develop a chirp quaternary orthogonal keying (QOK) modulation technique. QOK finds near-orthogonal chirps by an exhaustive search over a pre-defined solution space. With QOK, a Code Division Multiple Access (CDMA) system is built. The system achieves a zero frame error rate even at a minimal sound pressure level of ${35}$ dB SPL when the transceivers are ${2.7}$ m away from each other. The authors of~ propose a pseudo-orthogonal CSS modulation where up- and down-chirps overlap in time, and thus the transmission rate is doubled. 
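The BOK principle above can be sketched in a few lines: bit 1 is sent as an up-chirp, bit 0 as a down-chirp, and the receiver decides per symbol by which template correlates more strongly. This is our own toy baseband sketch with assumed parameters (48 kHz sampling, a 19.5-22 kHz band, 2048-sample symbols), not the implementation of any cited system, and it omits synchronization entirely.

```python
import numpy as np

fs, sym = 48_000, 2048                 # assumed sample rate, symbol length
t = np.arange(sym) / fs
f0, f1 = 19_500, 22_000                # assumed near-inaudible band
k = (f1 - f0) / (sym / fs)             # sweep rate (Hz/s)
up = np.sin(2 * np.pi * (f0 * t + k / 2 * t**2))     # up-chirp: bit 1
down = np.sin(2 * np.pi * (f1 * t - k / 2 * t**2))   # down-chirp: bit 0

def modulate(bits):
    return np.concatenate([up if b else down for b in bits])

def demodulate(sig):
    out = []
    for i in range(0, sig.size, sym):
        seg = sig[i:i + sym]
        # decide by which chirp template correlates more strongly
        out.append(int(abs(seg @ up) > abs(seg @ down)))
    return out

tx = modulate([1, 0, 1, 1, 0])
rx = tx + 0.5 * np.random.default_rng(1).normal(size=tx.size)  # noisy channel
print(demodulate(rx))
```

The near-orthogonality of up- and down-chirps is what keeps the two correlator outputs well separated even under substantial noise, which is the robustness property CSS trades bandwidth efficiency for.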
CSS and its variants often achieve less bit error rates and long communication ranges compared with other modulation mechanism. However, the achievable data rate is less competitive as this method is inherently less bandwidth efficient.\n\\begin{table*}[b]\n\\centering\n\\caption{Comparison of different ranging techniques.}\n\\label{tab:ranging techniques comparison}\n\\begin{tabular}{|l|l|l|l|l|}\n\\hline\n\\textbf{Ranging solution} & \\textbf{Key processing techniques} & \\textbf{Waveform design} & \\textbf{Occupied bandwidth (kHz)} & \\textbf{Ranging error} \\\\ \\hline\nBeepBeep~ & \\makecell{Two-way sensing} & Chirp signal & $2-6$ & An average around 1 to 2~\\!cm within 10~\\!m\\\\ \\hline\nRFBeep~ & \\makecell{One-way sensing } & Tone & NA & Around 30~\\!cm median error within 16~\\!m\\\\ \\hline\nSwordFight~ & \\makecell{Two-way sensing } & Tone & $0-11.025$ or $0-16$ & A media of 2~\\!cm within 2~\\!m distance\\\\ \\hline\nBatMapper~ & \\makecell{Finer temporal feature} & Chirp signal & $8-16$ and $8-10$ & Up to 2~\\!cm within 4~\\!m distance\\\\ \\hline\nSAMS~ & \\makecell{Finer temporal feature} & Chirp signal & $11-21$ & \\makecell{A median of 30~\\!cm \\\\and a $90$-percentile of $100$ cm within 5~\\!m}\\\\ \\hline\nDeepRange~ & \\makecell{Finer temporal feature} & Chirp signal & $18-22$ & \\makecell{A median of 1~\\!cm within 4~\\!m\\\\}\\\\ \\hline\n\\end{tabular}\n\\end{table*}", "id": "a3076352-9d14-44a7-9a4f-f361f0b3d262", "level": "subsubsection", "origin_cites_number": 11, "parent_id": "da9acbba-a470-4b06-82bc-ad6a546811e8", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Sensing Applications" ], [ "subsection", "Aerial Acoustic Communication" ], [ "subsubsection", "Long-range Low Data Rate Communication" ] ], "subsections": [], "title": "Long-range Low Data Rate Communication" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:long range high speed 
communication via loose orthogonal modulation}\nTo achieve long-range communication at relatively high data rates (addressing \textit{\textbf{Challenge}}-\ref{clg:comm_range}), \na loose orthogonal modulation approach is proposed in~. \nIts basic idea is to overlap multiple chirp carriers in a single time slot to boost throughput. Though inter-carrier interference is introduced, its adverse effect can be mitigated, say, by rate adaptation. The reported maximum throughput is 1~\!kbps, over $60\times$ that of existing CSS-based approaches. Even at a distance of 20~\!m, a data rate of 125~\!bps can be reached.", "id": "c20c01cd-646f-4f74-9e57-690b1dec2ccd", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "da9acbba-a470-4b06-82bc-ad6a546811e8", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Sensing Applications" ], [ "subsection", "Aerial Acoustic Communication" ], [ "subsubsection", "Long-range High-speed Communication via Loose Orthogonal Modulation" ] ], "subsections": [], "title": "Long-range High-speed Communication via Loose Orthogonal Modulation" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:future direction for waveform design}\nAerial acoustic communication is gaining traction as an alternative for device discovery and communication. In fact, audio interfaces have become part of Google's nearby peer-to-peer communication APIs~. \nHowever, low data rates continue to hamper its wide adoption. For instance, in-vehicle networking demands a data rate of at least 10~\!kbps~, \nfar exceeding the maximum throughput attainable by existing aerial acoustic communication systems. Data rates of aerial acoustic communication can be further increased by incorporating more complex techniques such as rake receivers, multiple-input-multiple-output, and advanced coding mechanisms. 
Another opportunity arises from the exploitation of SS channels as discussed in Section~\ref{sec:acoustic channel property}. Most existing works are restricted to AS for communication, while SS is rarely considered, with the only exceptions being~ and~. In~, the authors modulate the vibration motors available in mobile phones and decode information through accelerometers. A maximum data rate of 200~\!bps is achieved. In their follow-up work~, the throughput is boosted to 30~\!kbps by replacing the receiver with a high-sensitivity microphone and adopting high-order OFDM modulations. \nWe envision that more efficient modulation techniques for SS channels can be devised. Furthermore, combining AS and SS in a single system poses new challenges and opens exciting research avenues in acoustic communication.", "id": "b3f77aed-e86f-432b-884c-db87dcacd4a5", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "da9acbba-a470-4b06-82bc-ad6a546811e8", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Sensing Applications" ], [ "subsection", "Aerial Acoustic Communication" ], [ "subsubsection", "Research Opportunities" ] ], "subsections": [], "title": "Research Opportunities" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:temporal feature based applications}\nIn this section, we present applications leveraging temporal features, including acoustic ranging, localization, device-based tracking, device-free gesture tracking, and respiration sensing. 
For each application, we discuss state-of-the-art solution approaches and how some of the challenges outlined in previous sections are addressed.", "id": "98b8d49f-bd0a-4043-938a-589f8b510dfa", "level": "subsection", "origin_cites_number": 0, "parent_id": "797b87e2-7680-40a8-af64-25093d612994", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Sensing Applications" ], [ "subsection", "Applications Leveraging Temporal Features" ] ], "subsections": [ "71872f69-d771-4c01-805d-a2a8b5c5e9d0", "291eacc0-a2b1-451a-a365-7c28955e6f36", "a242203e-70c4-4f1d-87e0-5060de1aa6fb", "25f70ee6-6676-4265-9de9-e6288503ee44", "208a1eb7-c8c9-47d6-afa4-356299037895" ], "title": "Applications Leveraging Temporal Features" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:ranging}\nRange (the relative distance between two devices) provides useful contextual information and can be used for distance and size measurements, network management~, and content sharing~. Leveraging acoustic signals for ranging is an economical and convenient alternative to traditional measurement tools. Ranges are computed by multiplying the time of flight derived from the techniques in Section~\ref{sec:temporal features} with the propagation speed of acoustic signals, generally assumed to be constant and known {\it a priori}. \textit{Ranging errors} are defined as the Root Mean Square (RMS) of the differences between estimated results and the ground truth, while \textit{ranging accuracy} is inversely proportional to the RMS error. The ranging performance heavily relies on techniques at the processing layer, in particular, robust onset detection. A comparative study is shown in TABLE~\ref{tab:ranging techniques comparison}. \nBeepBeep~ is a pioneering work that uses acoustic signals for precise ranging on commodity mobile devices based on ToA via a two-way sensing method. 
It cleverly circumvents unknown system delays (\textit{\textbf{Challenge}}-\ref{clg:system_delay}) and clock synchronization by directly retrieving timestamps from the acoustic samples themselves. To mitigate multipath effects arising from \textit{\textbf{Challenge}}-\ref{clg:robust onset detection}, the authors locate the earliest ``sharp" peak after cross-correlation. \nBeepBeep reports centimeter-level (2 to 6~\!cm) ranging errors. However, the performance of BeepBeep can be degraded by the presence of strong NLOS paths.\nThe authors in~ sidestep uncertain and variable system delays in acoustic playback and recording through a kernel-space implementation, hence addressing \textit{\textbf{Challenge}}-\ref{clg:system_delay}, and build a stand-alone application called RFBeep.\nRFBeep~ performs ranging with ToA using one-way sensing. \nIt reports decimeter-level (up to 50~\!cm) ranging errors within 16~\!m. However, the reliance on kernel modification prohibits its wide-scale adoption. \nSwordFight~ is another ranging solution that improves upon BeepBeep in responsiveness, accuracy, and robustness. It works in the same way as BeepBeep but differs in the transmitted waveforms and onset detection strategies. It reports a median ranging accuracy of ${2}$~\!cm at a ${12}$~\!Hz refresh rate in noisy environments. 
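The arithmetic behind such two-way sensing can be sketched as follows. Each device counts its own audio samples between the two beep events, so no clock synchronization or knowledge of system delays is needed; the difference of the two locally measured intervals equals the round-trip flight time. This is a simplified sketch with assumed numbers; the self speaker-to-microphone distances that a full BeepBeep-style derivation accounts for are ignored here for brevity.

```python
C, FS = 343.0, 48_000   # speed of sound (m/s) and sampling rate, assumed

def two_way_range(t_a, t_b):
    """t_a: samples device A counts from hearing its own beep to hearing B's.
    t_b: samples device B counts from hearing A's beep to its own beep.
    t_a - t_b equals the round-trip flight time in samples, so halve it."""
    return C * (t_a - t_b) / (2 * FS)

# Example: devices 3.43 m apart, i.e., a one-way flight of 10 ms = 480
# samples. B replies 2 s (96,000 samples) after hearing A's beep, so A
# measures 96,960 samples while B measures 96,000.
print(two_way_range(96_960, 96_000))   # 3.43
```

Note that B's arbitrary response delay cancels out in the subtraction, which is why the scheme tolerates unsynchronized clocks and variable software latencies.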
\n\\begin{table*}[b]\n\\centering\n\\caption{Comparison of acoustic-enabled localization systems.}\n\\label{table:localization comparison}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|}\n\\hline\n \\textbf{Category} & \\textbf{Work} & \\textbf{\\makecell{Waveform\\\\design}} & \\textbf{\\makecell{Additional\\\\signal}} & \\textbf{\\makecell{Key processing\\\\ technique}} & \\textbf{\\makecell{Synchronous or\\\\ asynchronous}} &\\textbf{ \\makecell{Concurrent\\\\localization}} & \\textbf{\\makecell{Localization\\\\error}} \\\\ \\hline\n\\multirow{4}{*}{\\makecell{Infrastructure\\\\-based}} & Guoguo~ & \\makecell{Gaussian\\\\duplet pulse} & NA & ToA (one-way) & Synchronous & Supported & Centimeter-level \\\\ \\cline{2-8}\n & ~ & Chirp & NA & TDoA (one-way) & Synchronous & Supported & Centimeter-level \\\\ \\cline{2-8}\n & ALPS~ & Chirp & NA &TDoA (one-way) & Synchronous & Supported & Decimeter-level \\\\ \\cline{2-8}\n & UPS+~ & \\makecell{Ultrasonic\\\\chirp, tone} & NA & ToA (one-way) & Synchronous & Supported & Centimeter-level \\\\ \\cline{2-8}\n & ARABIS~ & Chirp & NA & TDoA (two-way) & Asynchronous & Supported & Centimeter-level \\\\ \\cline{2-8}\n & AALTS~ & \\makecell{Chirp,\\\\pure tone} & NA & TDoA (two-way) & Asynchronous & Supported & Decimeter-level \\\\ \\hline\n\\multirow{3}{*}{\\makecell{Infrastructure\\\\-free}} & ~ & Chirp & WiFi & ToA (two-way) & Asynchronous & Not supported & Meter-level \\\\ \\cline{2-8}\n & Centaur~ & Chirp & WiFi & TDoA (two-way) & Asynchronous & Supported & Meter-level \\\\ \\cline{2-8}\n & EchoTag~ & Chirp & NA & Data-driven & N/A & Not Supported & Centimeter-level \\\\ \\hline\n\\end{tabular}\n\\end{table*}", "id": "71872f69-d771-4c01-805d-a2a8b5c5e9d0", "level": "subsubsection", "origin_cites_number": 12, "parent_id": "98b8d49f-bd0a-4043-938a-589f8b510dfa", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Sensing Applications" ], [ "subsection", "Applications 
Leveraging Temporal Features" ], [ "subsubsection", "Ranging" ] ], "subsections": [], "title": "Ranging" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:acoustic radar}\nSimilar to RF radars, acoustic radars work by radiating acoustic signals and estimating the round trip time of reflected echoes. They can be used in obstacle avoidance and mapping. Since acoustic echoes attenuate sharply with increasing distance $d$ (e.g., in free space, proportional to $\frac{1}{d^4}$~), the operational range of acoustic radars is often limited. \nMultipath effects also pose additional challenges as it is often difficult to disentangle desired echoes bouncing off targeted objects from those experiencing multiple reflections or reflected off undesired objects (\textit{\textbf{Challenge}}-\ref{clg:robust onset detection}). The evaluation metrics for acoustic radars are the same as those used in ranging.\nIn , the authors proposed a sonar sensor for depth sensing on smart phones utilizing the phone's microphone and rear speaker. The reported ranging errors are up to ${12}$~\!cm within ${40}$~\!m distances. Further improvement to ranging accuracy using deep learning techniques is proposed in~. In this work,\nsynthesized data is used to train a neural network to handle platform diversity, multipath effects, and background interference, addressing \textit{\textbf{Challenge}}-\ref{clg:calibration} and \textit{\textbf{Challenge}}-\ref{clg:robust onset detection}\nsimultaneously. The resulting range error can be as low as 1~\!cm at a distance up to 4~\!m and is agnostic to various background noises, heterogeneous devices, etc. \nBatMapper is among the first works that demonstrate the feasibility of commodity mobile devices to act as acoustic radars for indoor floor map construction. 
It exploits speaker-microphone distance constraints to construct a probabilistic model to extract echoes bouncing off surrounding objects, which allows it to mitigate multipath effects (\textit{\textbf{Challenge}}-\ref{clg:robust onset detection}) for robust onset detection and thereby achieve accurate range estimation. BatMapper reports ${1\!-\!2}$~\!cm estimation errors with ranges up to ${4}$~\!m. With the assistance of an on-board Inertial Measurement Unit (IMU) including a gyroscope and an accelerometer, its $80$-percentile errors are less than $30$~\!cm in geometric floor reconstruction. \nSAMS~ is another acoustic radar solution that improves upon BatMapper. Different from the correlation method adopted in BatMapper, SAMS utilizes chirp mixing, a finer temporal feature extraction method, which can circumvent insufficient sampling rates for better temporal resolution. SAMS reports a median error of $30$~\!cm and a $90$-percentile error of $1$~\!m. \nCompared to ranging between acoustic devices, acoustic radars rely on weak reflected signals from target objects for ranging measurements. Therefore, it is much more challenging to design appropriate signal processing techniques and robust acoustic waveforms for acoustic radars. A technical comparison among different ranging approaches is given in TABLE~\ref{tab:ranging techniques comparison}.", "id": "291eacc0-a2b1-451a-a365-7c28955e6f36", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "98b8d49f-bd0a-4043-938a-589f8b510dfa", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Sensing Applications" ], [ "subsection", "Applications Leveraging Temporal Features" ], [ "subsubsection", "Acoustic Radar" ] ], "subsections": [], "title": "Acoustic Radar" }, { "cite_extract_rate": 0.260869565217391, "cites": [ 1695 ], "content": "\label{sec:localization}\nLocalization is a key enabler for Location Based Service (LBS). 
Despite tremendous research efforts on indoor localization, many existing solutions either require expensive dedicated infrastructures~ or rely on cumbersome device-dependent kernel hacking~, prohibiting their practical deployment. \nAmong existing cutting-edge indoor localization approaches, acoustic-based systems attract much interest since they can achieve sub-meter-level localization accuracy with relatively low infrastructure costs and deployment efforts. The underlying techniques for these localization systems are the timing measurement methods discussed in Section~\ref{sec:basic timing measurements} and the major challenge is \textit{\textbf{Challenge}}-\ref{clg:robust onset detection}. The evaluation metric for localization systems is the RMS error. \nIn this section, we present existing works on acoustic-enabled localization solutions in two categories, namely, \emph{infrastructure-based} and \emph{infrastructure-free}. A comparison of related work is summarized in TABLE~\ref{table:localization comparison}.\n\textbf{Infrastructure-based} schemes typically deploy low-cost and power-efficient distributed acoustic anchors in target areas. The locations of these anchors are determined in advance. Apart from acoustic transceivers, each anchor may be equipped with wireless modules to communicate among themselves or with a remote server. The remote server can coordinate the transmission schedule among the anchors in either a synchronous or an asynchronous manner. When the transmitted acoustic signals are detected by either a target or other anchors, the associated timestamps (ToA or TDoA) are obtained. Finally, the location of a target is determined using trilateration or more sophisticated optimization methods. \nIn the subsequent discussion, we first review synchronous schemes, and then asynchronous approaches. \nIn ~, Liu et al. developed a centimeter-level localization system named Guoguo. 
The anchors in this system are synchronized by Zigbee and are scheduled to transmit orthogonal codes, which are used by targets to perform ToA estimation via one-way sensing. Multilateration is then used to locate the targets.\nA speaker-only localization system was proposed by Lazik and Rowe in~. In their approach, distributed speakers are connected to different synchronized channels of an advanced audio device that transmits chirp signals for localization. A target locates itself locally by performing one-way TDoA estimation. According to , its ${95}$-percentile localization accuracy is within ${10}$~\!cm. ALPS~ improves upon the work~ in deployment efforts. In ALPS, anchors are synchronized via Bluetooth, and each anchor is equipped with one microphone and one speaker. The locations of the anchors are efficiently obtained through acoustic-assisted simultaneous localization and mapping. ALPS reports average errors of ${30}$~\!cm and ${16.1}$~\!cm in locating targets and anchors. Recently, an ultrasonic localization system called UPS+ was presented in~. UPS+ leverages the non-linearity of receiver microphones (details are presented in Section~\ref{sec:future direction for channel characteristics}) to enable ultrasonic beacons to locate smart devices without ultrasonic sensors. Consequently, the audibility problem, induced either by \textit{\textbf{Challenge}}-\ref{clg:audibility} or \textit{\textbf{Challenge}}-\ref{clg:timing estimation trade-off}, is eliminated. UPS+ achieves centimeter-level accuracy in localization. 
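The multilateration step shared by these systems can be illustrated with a minimal least-squares sketch; the function name and the anchor layout below are illustrative placeholders, not taken from any cited system.

```python
def multilaterate(anchors, dists):
    """2-D multilateration from anchor positions and measured ranges.

    Linearizes the range equations against the first anchor and
    solves the resulting 2x2 normal equations, which accommodates
    any number of anchors (at least three for a 2-D fix).
    """
    (x0, y0), d0 = anchors[0], dists[0]
    saa = sab = sbb = sac = sbc = 0.0
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        # Subtracting the first anchor's circle equation removes the
        # quadratic terms, leaving a linear equation a*x + b*y = c.
        a = 2.0 * (xi - x0)
        b = 2.0 * (yi - y0)
        c = d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2
        saa += a * a; sab += a * b; sbb += b * b
        sac += a * c; sbc += b * c
    det = saa * sbb - sab * sab  # assumed non-zero (non-collinear anchors)
    x = (sac * sbb - sbc * sab) / det
    y = (saa * sbc - sab * sac) / det
    return x, y
```

Real systems replace the exact ranges with noisy ToA/TDoA estimates and may switch to more sophisticated optimization, but the geometric core is the same.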
\n\\begin{table*}[b]\n\\caption{Comparison of device-free gesture tracking systems.}\n\\label{table:device-free gesture tracking comparison}\n\\begin{center}\n \\begin{tabular}{ | c | c | c | c | c | c |}\n \\hline\n \\textbf{System} & \\textbf{Waveform Design} & \\textbf{\\makecell{Occupied bandwidth\\\\(kHz)}} & \\textbf{\\makecell{Tracking latency (ms)}} & \\textbf{\\makecell{Operation range\\\\(m)}}& \\textbf{\\makecell{Performance}} \\\\ \\hline\n FingerIO~ & OFDM modulated signal & ${18 - 20}$ & ${5.92}$ & ${< 0.5}$ & \\makecell{${8}$~\\!mm (2D) average\\\\tracking error}\\\\ \\hline\n LLAP~ & Multiple pure tones & ${17 - 23}$ & ${\\le 15}$ & ${0.5}$ & \\makecell{${3.5}$~\\!mm (${1}$D) and ${4.57}$~\\!mm (${2}$D)\\\\average tracking error} \\\\ \\hline\n Strata~ & GSM sequence & ${18 - 22}$ & ${12.5}$ & ${0.5}$ & ${3}$~\\!mm tracking error \\\\ \\hline\n & Chirp signal & ${18 - 20}$ & ${40}$ & ${4.5}$ & ${1.2}$ to ${3.7}$~\\!cm within 4.5~\\!m\\\\\n \\hline\n CovertBand~ & OFDM & ${18 - 20}$ & ${4.2}$ & ${6}$ & \\makecell{A median of ${18}$~\\!cm\\\\tracking error} \\\\\n \\hline\n \\end{tabular}\n\\end{center}\n\\end{table*}\nThe localization accuracy of the aforementioned work is highly dependent on clock synchronization accuracy, which depends on network latency, non-negligible in a large-scale network. In contrast, asynchronous approaches can overcome such shortcomings.\nARABIS is an asynchronous acoustic localization system that utilizes two-way ranging~ to avoid the need for synchronization. In ARABIS, anchors transmit acoustic beacons periodically following a coarse time-division-multiple-access schedule. Targets, as well as anchors, overhear the transmissions and record the corresponding timestamps. These timestamps can be used to estimate TDoA information in locating a target. ARABIS reports a ${95}$-percentile localization error of ${7.4}$~\\!cm. 
AALTS~ improves upon ARABIS with a more robust onset detection approach to handle the near-far problem, hardware heterogeneity, and multipath effects. To handle these challenges (from \textit{\textbf{Challenge}}-\ref{clg:robust onset detection}), it normalizes the current correlation value by the mean of a number of its preceding samples. Additionally, a pseudo orthogonal chirp spread spectrum modulation technique is proposed, which effectively doubles the transmission rate. AALTS achieves $90$-percentile tracking errors of $0.49$~\!m for mobile targets and a median of $0.12$~\!m for stationary ones with only four anchor nodes. \n\textbf{Infrastructure-free} localization systems do not require the deployment of custom-built infrastructure devices in target areas. However, they tend to achieve less competitive localization accuracy compared with infrastructure-based solutions.\nIn~, Liu et al. built a localization system utilizing acoustic and WiFi signals. It first estimates pair-wise distances within a device group via acoustic ranging~, forming a set of spatial constraints. \nEach device in the group also uses WiFi fingerprints to impose another set of location constraints. By combining the two, target locations can be determined.\nThis scheme achieves an ${80}$-percentile localization error of ${1}$~\!m. Since the computation of spatial constraints requires multiple pair-wise acoustic ranging measurements and is time-consuming, the application of such an approach is limited to static target localization. \nCentaur~, similar to , is a joint optimization framework utilizing acoustic and WiFi signals, and reports meter-level localization accuracy. The authors propose a novel multipath mitigation algorithm to address \textit{\textbf{Challenge}}-\ref{clg:robust onset detection}, and achieve robust onset detection. The key idea is to inspect signal changes in cross-correlation as opposed to absolute magnitudes considered in existing methods. 
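The normalization idea behind such relative onset detectors can be sketched as below; the window length and threshold ratio are illustrative placeholders, not the values used by any particular system.

```python
def detect_onset(corr, window=64, ratio=5.0):
    """Flag the first correlation sample whose magnitude exceeds
    `ratio` times the mean magnitude of its preceding `window`
    samples. Comparing against a local baseline rather than an
    absolute threshold makes the detector robust to near-far
    power differences between anchors.
    """
    for i in range(window, len(corr)):
        baseline = sum(abs(v) for v in corr[i - window:i]) / window
        if abs(corr[i]) > ratio * max(baseline, 1e-12):
            return i
    return None  # no onset found
```

A fixed absolute threshold would either miss weak far-anchor peaks or fire spuriously on strong near-anchor sidelobes; the local normalization sidesteps both failure modes.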
\nEchoTag~ is an acoustic fingerprinting localization system that can detect minor location changes. It associates different acoustic profiles with different positions, known as tags, to train a classification model. This model is then used for online tag detection, enabling context-aware applications. \nEchoTag reports an accuracy of ${98\%}$ in distinguishing ${11}$ tags at ${1}$~\!cm resolution. However, EchoTag is sensitive to environmental dynamics and will suffer from degraded performance in the absence of fresh data collection. As a matter of fact, applications that are based on a fingerprinting strategy are subject to problems arising from \textit{\textbf{Challenge}}-\ref{clg:calibration} and \textit{\textbf{Challenge}}-\ref{clg:cross-platform models}, limiting their practical adoption. \nAlthough infrastructure-free solutions incur lower hardware costs, they require labor-intensive site surveys to obtain location-dependent signal profiles, making them sensitive to environmental changes. Infrastructure-based approaches deliver a satisfactory localization accuracy at the cost of extra hardware. But the complexity of deploying multiple acoustic anchors, together with the synchronization requirements, still makes them insufficiently economical and lightweight for practical deployment. To this end, the authors of~ propose a single beacon-enabled passive localization system that can also identify a target. They discover that a footstep contains separable structure-borne and air-borne components. The former contains range information and the latter provides angle-of-arrival (AoA) along with identity signatures. Consequently, by placing a single acoustic array in the place of interest, a target can be simultaneously tracked and identified. 
Additionally, the domain adversarial training technique is employed in this proposal so as to enhance the generalizability of the system, easing the efforts in calibration and thus addressing \\textit{\\textbf{Challenge}}-\\ref{clg:cross-platform models}. The reported median localization accuracy can reach 30~\\!cm, which is highly enough given that a foot has a similar size. Another single acoustic anchors based proposal that leverage the geometry constraints shaped by LoS and NLoS acoustics can be found in~ that reports 0.44~\\!m localization accuracy across different environments. It is worth to mention that the aforementioned microphone array enabled localization techniques can obtain specific coordinates of a target rather than conventional source localization that can only obtain AoA information. \nCompared to localization solutions utilizing RF signals~, visible light~, or IMU data~, acoustic-enabled localization techniques strike good trade-offs between costs and accuracy. The acoustic diffraction property makes it feasible to locate targets in presence of small-scale random blockages. This puts less restriction on deployment. However, due to limited transmission ranges, more anchor nodes are needed in infrastructure-based localization systems. 
\n\\begin{table*}[b]\n\\centering\n\\caption{Comparison between different biometric sensing systems.}\n\\label{tab:biometric sensing comparison}\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\textbf{Work} & \\textbf{Waveform design} & \\textbf{Bandwidth} & \\textbf{Key techniques} & \\textbf{Performance}\\\\ \\hline\n & FMCW & $18-20$~\\!kHz & FFT & \\makecell{Less than 0.11~\\!bpm\\\\within 1m} \\\\ \\hline\nBreathJunior~ & FMCW \\& white noise & 24~\\!kHz & \\makecell{Phase-based accurate timing,\\\\beamforming} & \\makecell{0.4~\\!bpm at 40~\\!cm,\\\\3~\\!bpm at 60~\\!cm}\\\\ \\hline\nRespTracker~ & ZC & 2~\\!kHz & \\makecell{Phase-based accurate timing} & \\makecell{Less than 1~\\!bpm at 3~\\!m,\\\\0.8~\\!bpm for moving targets}\\\\ \\hline\n & NA & NA & \\makecell{Envelop detection} & \\makecell{Less than 0.05~\\!bpm\\\\(device close to user)}\\\\ \\hline\nBreathListener~ & tone at 20~\\!kHz & NA & \\makecell{Energy spectrum density,\\\\ensenmble empirical mode decomposition,\\\\generative adversarial network} & \\makecell{0.11~\\!bpm\\\\in driving environment}\\\\ \\hline\nSpiroSonic~ & Multiple tones & $17-24$~\\!kHz & \\makecell{Phase-based accurate timing,\\\\neural network regression} & \\makecell{5\\%-10\\% error in\\\\lung function monitoring}\\\\ \\hline\n\\end{tabular}\n\\end{table*}", "id": "a242203e-70c4-4f1d-87e0-5060de1aa6fb", "level": "subsubsection", "origin_cites_number": 23, "parent_id": "98b8d49f-bd0a-4043-938a-589f8b510dfa", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Sensing Applications" ], [ "subsection", "Applications Leveraging Temporal Features" ], [ "subsubsection", "Localization" ] ], "subsections": [], "title": "Localization" }, { "cite_extract_rate": 0.15384615384615302, "cites": [ 1695 ], "content": "\\label{sec:tracking}\nHigh-accuracy object tracking is important in many applications such as automated surveillance, traffic monitoring, and Augmented/Mixed 
Reality~.\nTracking is a well-investigated topic in computer vision (CV)~. However, CV techniques impose substantial computation costs and do not work well under poor light conditions. Acoustic tracking systems can overcome these limitations. Depending on whether the targeted object can emit acoustic signals or not, existing solutions can be divided into device-based tracking and device-free tracking. \n{\bf Device-based acoustic tracking} aims to track acoustic-emitting devices in motion. \nAAMouse~ utilizes multiple carriers to estimate the Doppler speed of a target and integrates the speed over time for tracking. Though the reported tracking performance is at centimeter-level, tracking errors can accumulate over time, making it unsuitable for long-term tracking. CAT~ improves upon AAMouse by adopting chirp mixing and boosts the tracking accuracy to a sub-centimeter level. However, due to the use of one-way sensing, it is sensitive to irregular SFOs~ that cannot be easily compensated for. Under the assumption of linear drift, CAT performs re-calibration when integration errors become intolerable, hence allowing the system to operate for sufficiently long periods. \nAnother application of chirp mixing for high-accuracy tracking is presented in~ where a drone follows a person at a safe range in challenging indoor environments. In this work, the authors introduce several advanced signal processing modules, in particular, the MUltiple SIgnal Classification (MUSIC) algorithm to resolve multipath effects (arising from \textit{\textbf{Challenge}}-\ref{clg:robust onset detection}) and enhance system robustness. Furthermore, a reciprocal filter is introduced to address \textit{\textbf{Challenge}}-\ref{clg:freq_sel}, compensating for the frequency selectivity problem and thereby further enhancing the system stability.\nIn Backdoor~, a more precise compensation technique for frequency selectivity is proposed. 
First adopted in wireless communication, it equalizes channel effects by measuring channel state information using probe signals. Backdoor transmits acoustic signals at 40~\!kHz and takes advantage of the non-linear diaphragm of the power amplifier so that sounds in the range of 20~\!kHz can be recorded. Thus, it does not suffer from the audibility problem ({\it{\bf Challenge}}-\ref{clg:audibility} or \textit{\textbf{Challenge}}-\ref{clg:bw_tradeoff}). However, this approach requires customized hardware devices, making its adoption more difficult. \nMilliSonic~ achieves sub-millimeter 1D tracking accuracy in the presence of multipath using a single beacon with a small 4-microphone array. The high precision is achieved by leveraging phase information after chirp mixing. \n{\bf Device-free acoustic tracking} tracks moving objects in an environment by reflected acoustic signals. Due to significant attenuation of reflected signals, high-precision device-free acoustic tracking is often limited to short ranges. Next, we use finger and body posture tracking and respiration sensing as driving applications to discuss techniques in this category. \nFingerIO turns a mobile phone or a smartwatch into an active sonar that is capable of tracking moving fingers for Around Device Interaction (ADI). Such a technology can extend the physical interaction boundaries and is hence particularly useful for small wearables. FingerIO achieves a median accuracy of ${8}$ mm~ in 5.92~\!ms. \nIt utilizes OFDM modulated signals to estimate the CSI between a hand and a smartphone periodically. In each estimation cycle, the CSI is acquired through cross-correlation. Since only moving fingers can dynamically affect the channel between the speaker and microphone, their movements can be tracked by comparing consecutive channel frames. Static multipath reverberations remain the same across frames and can thus be removed. 
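The frame-differencing step that underlies this idea can be sketched as follows; this is a simplified illustration, as the actual FingerIO pipeline operates on OFDM-derived channel estimates rather than raw sample lists.

```python
def moving_echo_profile(prev_frame, cur_frame):
    """Subtract consecutive channel frames sample by sample.

    Echoes from static multipath (walls, furniture, the device
    itself) are identical across frames and cancel out, leaving
    only components caused by moving objects such as a finger.
    """
    return [c - p for p, c in zip(prev_frame, cur_frame)]
```

Only the taps that changed between frames survive the subtraction, which is exactly why static reverberation does not need to be modeled explicitly.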
The proposed technique successfully addresses \textit{\textbf{Challenge}}-\ref{clg:robust onset detection} by extracting finger-movement-only CSI profiles, and inspired several follow-up works.\nA multi-tone device-free gesture tracking system, LLAP, was proposed in~. It leverages coherent detection to extract the phases of acoustic echoes for finger localization and tracking. In LLAP, a mobile device actively transmits multiple tone carriers and decomposes finger-generated echoes via Empirical Mode Decomposition (EMD) for later processing. It further uses the phase divergence of multiple carriers to coarsely locate the start position of a finger and track its displacement via phase shifts.\nLLAP reports a tracking accuracy of ${3.5}$ mm for ${1}$D hand movement and ${4.57}$ mm for ${2}$D drawing with less than ${15}$ ms latency. However, both FingerIO and LLAP are sensitive to nearby interference. To address this issue, another work named Strata was proposed in~. \nIt also uses a coherent detector but applies a GSM training sequence modulated by Binary Phase Shift Keying (BPSK). \nEvaluation results demonstrate that it outperforms FingerIO and LLAP in all cases with an average tracking accuracy of 3~\!mm. \nThe aforementioned work considers micro-finger gesture tracking. For macro-body parts such as a hand or the whole body, one often utilizes more powerful speakers to increase SNR so that the sensing range is larger. We hereby present tracking technologies for macro-body posture. \nThe work in~ shows that it is feasible to achieve room-level hand motion tracking with a customized platform. It employs an acoustic radar along with many advanced processing techniques including MIMO beamforming and deep learning for signal quality enhancement. The proposed system achieves $1.2$-$3.7$~\!cm tracking errors within a $4.5$~\!m range and supports multi-user tracking. CovertBand~ is an active sensing system for passive multiple object tracking. 
It builds on an active sonar with an enhanced speaker and uses the same parametric models as FingerIO~ to track human body posture. CovertBand reports a median error of $18$~cm in tracking mobile targets. For static objects, it can achieve an accuracy of $8$~cm with a distance up to $8$~m in LOS conditions. \nA comparison of these tracking schemes is given in TABLE~\ref{table:device-free gesture tracking comparison}. \nThe last category of applications concerns continuous monitoring of human respiration rates and breathing patterns. With device-free acoustic tracking, one can estimate chest displacements of target subjects caused by respiration over time. A comparison of these techniques is shown in TABLE~\ref{tab:biometric sensing comparison}.\nIn~, a portable life sign detection system based on commodity smartphones was presented. It uses a smartphone as an active sonar to detect chest movements for breathing rate estimation and sleep apnea detection. The proposed system can achieve an error of less than ${0.11}$ breaths per minute (bpm) even at a distance of up to ${1}$~\!m. Though the work of~ utilizes an inaudible frequency range (above $18$~\!kHz), it can still be perceived by animals and infants whose hearing systems are more sensitive to high-frequency sounds. \nTo deal with this audibility issue (\textit{\textbf{Challenge}}-\ref{clg:audibility}) and handle \textit{\textbf{Challenge}}-\ref{clg:timing estimation trade-off}, BreathJunior improves upon~ by using white noise for respiration estimation~. Before transmitting chirp signals for sensing, it randomizes signal phases in the frequency domain and recovers them at the receiver end, making the generated sounds less obtrusive. The reported respiration rate error can be as low as 0.4~\!bpm at a distance of 40~\!cm. The work in~ overcomes the audibility issue (\textit{\textbf{Challenge}}-\ref{clg:audibility}) by using a Zadoff-Chu (ZC) sequence. 
As we outlined in Section~\ref{sec:waveform for sensing}, the FHSS modulated signal enables fine-grained multipath decomposition and allows multi-person respiration monitoring simultaneously. The reported error is within 0.6~\!bpm under various test environments in the presence of multiple targets at a minimal distance of 10~\!cm. \nRen et al.~ developed a passive sensing system that can detect breathing rates and sleep-related events from breathing signals. This approach employs high-quality sensors, and reports detection error rates of less than ${0.5}$~\!bpm. \nThe performance of previous solutions degrades significantly in noisy environments. To combat noise, BreathListener~ extracts breath patterns from energy spectrum density and regenerates clean breath signals via a generative adversarial network. It achieves an average error of 0.11~\!bpm for breathing rate estimation in driving conditions. SpiroSonic~ went one step further toward conducting spirometry tests in a regular home setting under various environmental noises. It measures a target's chest wall motion via acoustic radar and maps the obtained waveforms to lung function indices. SpiroSonic achieves $5\%$ to $10\%$ monitoring error in clinical studies, allowing reliable out-of-clinic disease tracking and evaluation.", "id": "25f70ee6-6676-4265-9de9-e6288503ee44", "level": "subsubsection", "origin_cites_number": 13, "parent_id": "98b8d49f-bd0a-4043-938a-589f8b510dfa", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Sensing Applications" ], [ "subsection", "Applications Leveraging Temporal Features" ], [ "subsubsection", "Tracking" ] ], "subsections": [], "title": "Tracking" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:future direction for temporal features}\nWe envision that research opportunities utilizing temporal features for acoustic sensing lie in two aspects. 
First, for large-scale deployments, existing approaches need to be made more robust to multipath effects, background noise, interference from ambient sound-emitting sources (arising from \textit{\textbf{Challenge}}-\ref{clg:robust onset detection}) and the heterogeneity of consumer devices (\textit{\textbf{Challenge}}-\ref{clg:calibration} and \textit{\textbf{Challenge}}-\ref{clg:cross-platform models}). Infrastructure-based methods not only need to minimize initial setup costs, but also need to reduce operating costs, such as (repeated) site surveys and maintenance. Furthermore, solutions that are adaptive to environmental changes or hardware upgrades or replacements are attractive. Second, novel applications utilizing accurate timing estimation on acoustic devices can be investigated. One such example is AcuTe~, where the authors leveraged the relationship between sound speed and ambient temperature for temperature sensing on a single smartphone. Chirp mixing is utilized to estimate the time differences and consequently the propagation speed of sound, which has a linear relationship with temperature. A median measurement accuracy of $0.3^\circ C$ is reported in~.", "id": "208a1eb7-c8c9-47d6-afa4-356299037895", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "98b8d49f-bd0a-4043-938a-589f8b510dfa", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Sensing Applications" ], [ "subsection", "Applications Leveraging Temporal Features" ], [ "subsubsection", "Research Opportunities" ] ], "subsections": [], "title": "Research Opportunities" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:channel characteristics-enabled applications}\nIn this section, we present applications that detect gestures or motions that induce specific characteristics in over-the-air or structure-borne channels. 
All approaches in this section are device-free.", "id": "ca05f4b8-afe7-4436-b55e-aef15e251d42", "level": "subsection", "origin_cites_number": 0, "parent_id": "797b87e2-7680-40a8-af64-25093d612994", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Sensing Applications" ], [ "subsection", "Solutions Enabled by Channel Characteristics" ] ], "subsections": [ "85d89970-214a-44b2-93d5-bee41280b943", "acaa2ab9-5be9-4455-80cf-6e75f593b422", "a7b0997f-a3d7-43ec-bb59-a41485ec8abc" ], "title": "Solutions Enabled by Channel Characteristics" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:gesture recognition}\nThe presence of objects and changes in their shapes and positions in the air affect the channel characteristics and subsequently the reflected acoustic waves. \n\paragraph*{Gesture recognition} \nGesture recognition aims to understand the expressive meaning of body parts, in particular hands, serving as an interface for humans to interact with smart devices. Previous approaches~ often rely on dedicated devices or computationally intensive image processing techniques. In contrast, acoustic sensing methods are comparatively much more lightweight. The primary principle of these gesture recognition systems is to decipher an appropriate CSI to accurately model acoustic features shadowed by a performed gesture. The key challenge lies in the discovery of this CSI (\textit{\textbf{Challenge}}-\ref{clg:appropriate CIR representation}). To evaluate the performance of a gesture recognition system, one often utilizes the detection or recognition accuracy, defined as the ratio between the number of correctly recognized gestures and the total performed ones. We hereby present existing works on gesture recognition. \nSoundWave~ utilizes the Doppler effect as CSI for gesture recognition. 
\nSoundWave continuously triggers an inaudible tone and infers gestures by sensing the spectrum of hand-reflected echoes. The key idea is that the reflected acoustic echoes from a moving hand are shifted in the frequency domain compared with the transmitted one. If the hand is moving away, the spectrum of the acoustic echoes is shifted below the transmitted one and vice versa. Combining unidirectional movements allows the recognition of more complex gestures such as flicks and quick taps. SoundWave reports recognition accuracy over ${86.67\%}$ in various testbeds. AudioGest~ applies both the Doppler effect and multipath profiles for fine-grained gesture recognition. It combines both MD (Doppler frequency estimation) and DD approaches (a multipath-profile-based linear classifier). AudioGest reports an accuracy of ${96\%}$ in detecting six hand gestures. Another Doppler-based gesture recognition system, called EchoWrite, is presented in~; it is an input system based on active acoustic sensing. EchoWrite decomposes the writing gestures into different strokes that exhibit different Doppler profiles estimated from acoustic reflections. Given that an English letter can be generated only by a unique combination of different strokes, a Bayesian inference model is exploited to decipher the written letters via acoustic profiles built on reflections. \nUltraGesture~ utilizes FHSS modulated signals to estimate the CSI in a similar vein to common wireless systems. The estimated CSI then acts as the input to a deep learning network for gesture classification. The underlying premise is that using CSI as input can preserve all important information, while common methods built on Doppler frequency or multipath profiles would lose much critical information. As a result, UltraGesture can recognize 12 gestures with an accuracy over $97\%$. \nA similar approach can be found in~ that achieves a recognition accuracy of $98.4\%$ with up to 15 gestures. 
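The Doppler relationship that SoundWave and its successors build on can be expressed numerically; the function below is an illustrative sketch (its name and parameters are not from any cited system).

```python
def doppler_velocity(f_tx, f_rx, c=343.0):
    """Radial speed of a reflecting target from the Doppler shift
    of its echo. For a reflected path the shift is twice the
    one-way shift: f_rx - f_tx = 2 * v / c * f_tx, so a hand moving
    away (v < 0 here would mean approaching) produces a measurable
    spectral displacement around the transmitted tone.
    """
    return c * (f_rx - f_tx) / (2.0 * f_tx)
```

For an 18 kHz tone and hand speeds of tens of cm/s, the shift is on the order of tens of Hz, which is why a fine-grained spectral analysis around the carrier suffices for gesture inference.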
\nOther gesture recognition systems include VSkin~ that enables gesture recognition on the back of a device and the work in~ that facilitates depth-aware finger tapping on virtual displays. These two are based on the gesture tracking technology in~. VSkin achieves an accuracy of $99.65\%$ in recognizing tapping events and the work in~ reports a 98.4\% finger tapping detection accuracy. \n\paragraph*{Speaker authentication} \nBesides gesture recognition, over-the-air channels can also be exploited for speaker authentication. Commonly used approaches utilize speech signatures unique to individuals but do not test the ``liveliness\" of the speech. Thus, they can fall victim to replay attacks. \nSpeakPrint is a lipreading technology based on acoustic sensing~. It captures how a user speaks by simultaneously recording mouth and vocal movements through near-ultrasound signals emitted by a mobile phone. Using features extracted from voice signals and reflected near-ultrasound signals, $100\%$ accuracy is achieved in detecting replay attacks. For user verification, the authors report an average true positive rate of 99.56\% and a false positive rate of 0.013\%. \n\paragraph*{Novel interactive controls}\nThe work discussed so far concerns the open-air channel between a pair of acoustic speaker and microphone on a smartphone. \nAcoustruments~ takes a very different approach. It fabricates a tube as an acoustic conduit that connects the transceivers on a commodity smartphone.\nThe tube has a physical control unit that can move and thus manipulates the properties of received acoustic signals. Through a fingerprinting strategy, the relationship between physical control and received acoustic features can thus be built. \nThe authors developed a classification model to recognize different control commands, achieving ${99\%}$ accuracy. 
Though innovative and effective, this proposal cannot address \textit{\textbf{Challenge}}-\ref{clg:calibration} and \textit{\textbf{Challenge}}-\ref{clg:cross-platform models}.", "id": "85d89970-214a-44b2-93d5-bee41280b943", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "ca05f4b8-afe7-4436-b55e-aef15e251d42", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Sensing Applications" ], [ "subsection", "Solutions Enabled by Channel Characteristics" ], [ "subsubsection", "Over-the-air Channel Based Approaches" ] ], "subsections": [], "title": "Over-the-air Channel Based Approaches" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:touch force detection}\nRecall from Section~\ref{sec:acoustic channel property} that structure-borne signals in solid media exhibit two properties, namely, acoustic dispersion and acoustic resonance. The acoustic dispersion property carries range information and is hence often used for localization. The acoustic resonance property often reveals rich acoustic features and is more common in interactive applications. We now present respective applications built on structure-borne channel properties. \nUbiTap~ is a keystroke localization system built on a special physical property of structure-borne acoustic signals, called acoustic dispersion. The key observation in UbiTap is that the range information can be precisely obtained from the slope of the straight line formed between frequency and time in the spectrogram of passively captured keystroke sounds. With this parametric model, they achieve millimeter-level keystroke localization performance. \nForcePhone estimates applied forces on smartphones with built-in acoustic sensors by exploiting structure-borne sound propagation. 
It utilizes the fact that emitted acoustic signals from the speaker of a smartphone can cause vibration of the phone body, the intensity of which is inversely proportional to external pressure.\nBased on this observation, it builds a closed-form parametric model between applied forces and received signal intensity levels. ForcePhone reports a mean square error of 54~\!g in the stationary case, which is sufficiently low compared with the maximum 1.5~\!kg sensing range. \nTouch {\&} Active~ utilizes a data-driven approach that leverages the acoustic resonance property to recognize different touch gestures as well as touch force.\nThe basic idea is that a certain object has a unique resonant frequency and any touch action can alter it. Such a change manifests itself in the power spectrum of received signals with identifiable features and hence can be recognized. This work reports an accuracy of ${99.6\%}$ and ${86.3\%}$ in recognizing five touch gestures and six hand postures on a plastic toy. Meanwhile, it can recognize discrete touch force with per-user recognition accuracy as high as 99.6\%.", "id": "acaa2ab9-5be9-4455-80cf-6e75f593b422", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "ca05f4b8-afe7-4436-b55e-aef15e251d42", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Sensing Applications" ], [ "subsection", "Solutions Enabled by Channel Characteristics" ], [ "subsubsection", "Structure-borne Channel Based Approaches" ] ], "subsections": [], "title": "Structure-borne Channel Based Approaches" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 1695 ], "content": "\label{sec:future direction for channel characteristics}\nAs mentioned in both \textit{\textbf{Challenge}}-\ref{clg:calibration} and \textit{\textbf{Challenge}}-\ref{clg:cross-platform models}, platform diversity can be particularly detrimental to system performance as it requires constant calibration. 
To ease this burden, building calibration-agnostic parametric models is important. However, as acknowledged in \textit{\textbf{Challenge}}-\ref{clg:appropriate CIR representation}, it requires sophisticated domain knowledge and is remarkably challenging. Alternatively, one can leverage deep learning techniques that have shown promising results in generalizability. Techniques such as adversarial training~ allow a model to generalize well even on unseen data, while few-shot learning techniques~ require only limited labels to quickly adapt to target environments. Some early attempts in this direction have been made. For example, the authors in~ employ meta learning~, a kind of few-shot learning technique, to facilitate cross-device mobile sensing with only one or two data instances. We envision that more sophisticated deep learning approaches will be incorporated into acoustic sensing based on channel characteristics to address device, environment or subject diversity.", "id": "a7b0997f-a3d7-43ec-bb59-a41485ec8abc", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "ca05f4b8-afe7-4436-b55e-aef15e251d42", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Acoustic Sensing Applications" ], [ "subsection", "Solutions Enabled by Channel Characteristics" ], [ "subsubsection", "Research Opportunities" ] ], "subsections": [], "title": "Research Opportunities" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:future direction}\nIn the previous sections, we have highlighted unique research opportunities in different categories of acoustic sensing applications. 
In this section, we discuss a few research directions that we believe are under-investigated or are emerging in acoustic sensing in the future.", "id": "76bc7108-cd5a-455f-b0a0-9d7148712e27", "level": "section", "origin_cites_number": 0, "parent_id": "1352d6e5-225d-41b6-be40-dfba979ded66", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Future Directions" ] ], "subsections": [ "bf43dd2c-85d0-429c-8d00-2e3a9dcd5eaf", "f907a89d-21a9-4a38-a425-e0a004c3a9ef", "31f2d399-bb8d-4f46-8ac8-09d1a7456e2d", "2d5d1911-679e-4405-9c28-309c9662e6d1" ], "title": "Future Directions" }, { "cite_extract_rate": 0, "cites": [], "content": "The ever increasing applications of acoustic sensing also bring about many privacy and security issues. \nFor instance, a recent study on acoustic sensing found that a recording system can act as an acoustic mixer~, making it possible to detect ultrasonic signals above ${24}$~\\!kHz on commodity mobile devices with no more than ${48}$~\\!kHz sampling rate. This phenomenon, caused by the non-linearity of acoustic sensors, has been exploited in jamming and communication~. The interesting finding can lead to innovative applications. However, such a technology also poses security threats on smart IoT devices with on-board microphones such as Google Home~ and Amazon Echo~. It allows synthesizing audio signals inaudible to humans to manipulate such devices~ and thus open doors to malicious attacks. Additionally, existing work on keystroke detection~ can pose privacy threats if enabled by malicious attackers with the knowledge of legitimate users. 
\nTherefore, techniques to detect and defend against such attacks constitute an important area of investigation for acoustic sensing.", "id": "bf43dd2c-85d0-429c-8d00-2e3a9dcd5eaf", "level": "subsection", "origin_cites_number": 7, "parent_id": "76bc7108-cd5a-455f-b0a0-9d7148712e27", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Future Directions" ], [ "subsection", "Privacy and Security Threats" ] ], "subsections": [], "title": "Privacy and Security Threats" }, { "cite_extract_rate": 0.5, "cites": [ 1695 ], "content": "In addition to using acoustic sensors for non-conventional purposes such as touch sensing and gesture tracking, it is possible to use other sensors to capture acoustic signals. \nFor instance, the authors from~ and exploit RF-radar and Lidar sensors to recover audio content, respectively. The key observation is that acoustic signals originate from vibrations, which can be detected through sensors that measure displacement. A main benefit of cross-technology sensing is that these non-dedicated acoustic sensors are immune to background acoustic noise and thus provide high-SNR signals as long as the sampling rate is adequate. \nTherefore, an interesting research direction is to incorporate data from multiple sensing modalities to enable sophisticated sensing tasks or novel applications in adverse environments.", "id": "f907a89d-21a9-4a38-a425-e0a004c3a9ef", "level": "subsection", "origin_cites_number": 2, "parent_id": "76bc7108-cd5a-455f-b0a0-9d7148712e27", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Future Directions" ], [ "subsection", "Repurposing Other Sensors for Acoustic Sensing" ] ], "subsections": [], "title": "Repurposing Other Sensors for Acoustic Sensing" }, { "cite_extract_rate": 0, "cites": [], "content": "With the prevalence of earable devices such as headsets and earbuds, many interesting new applications arise. 
Almost all earables come with on-board microphones and speakers, making them suitable for acoustic sensing. Additionally, newer devices such as Apple AirPods Pro feature IMU sensors and force sensors. Fusing the measurements from multiple sensors opens up new areas of research. In~, Choudhury provides an excellent overview of earable computing research opportunities. Notably, smart earables can be used to enhance human auditory perception in the form of hearing aids~, binaural sound localization~, and 3D sound spatialization~. By characterizing the channel responses of inner ear canals, one can extract useful information for user authentication~ or diagnosing ear infections~. From structure-borne signals and IMU sensor data, chewing and drinking activities can be monitored~. Structure-borne signals can also be used to enhance the recognition of one's speech when air-borne signals are of poor quality in noisy environments. To realize these applications, in addition to common performance measures, it is important to minimize computation and communication overheads due to the limited form factor and battery lifetime of earable devices.", "id": "31f2d399-bb8d-4f46-8ac8-09d1a7456e2d", "level": "subsection", "origin_cites_number": 0, "parent_id": "76bc7108-cd5a-455f-b0a0-9d7148712e27", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Future Directions" ], [ "subsection", "Earable Computing" ] ], "subsections": [], "title": "Earable Computing" }, { "cite_extract_rate": 0, "cites": [], "content": "To date, there are no open datasets or common platforms to benchmark acoustic sensing solutions. The lack of open datasets can in part be attributed to the need to design customized waveforms in specific applications. As discussed in Section~\ref{sec:acoustic hardware}, hardware diversity among COTS devices renders solutions tailored to one platform unsuitable for others. 
The absence of common platforms also means that researchers often need to start from scratch and spend much time handling platform-dependent details such as programming languages and development frameworks. To overcome these problems, the acoustic sensing community can learn from successful practices in other communities. For instance, CRAWDAD is a wireless network data resource maintained by researchers at Dartmouth College with contributors from all over the world~. ORBIT~ is a two-tier wireless network emulator/field trial designed to achieve reproducible experimentation, while also supporting realistic evaluation of protocols and applications. It is available for remote or on-site access by academic researchers both in the U.S. and internationally. GNURadio and USRP platforms provide unified hardware platforms and basic processing blocks for radio signal processing and propel research in advanced wireless technologies~. \nRecognizing the need for common platforms, the works in~ and~ are among the first to build general sensing platforms on commodity mobile devices. Later on, the authors in~ released cross-platform support for ubiquitous acoustic sensing. Cai et al. develop ASDP, the first Acoustic Software Defined Platform, which encompasses several customized acoustic modules running on a ubiquitous computing board, supported by a dedicated software framework~. Unfortunately, these platforms have not yet gained much traction in the community. 
We believe significant efforts are needed toward data sharing and advocate the adoption of common platforms for evaluation.", "id": "2d5d1911-679e-4405-9c28-309c9662e6d1", "level": "subsection", "origin_cites_number": 2, "parent_id": "76bc7108-cd5a-455f-b0a0-9d7148712e27", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Future Directions" ], [ "subsection", "Benchmark Datasets and Platforms" ] ], "subsections": [], "title": "Benchmark Datasets and Platforms" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:conclusion}\nThis paper presented a systematic survey on acoustic sensing. We provided in-depth discussions on the unique challenges posed by acoustic hardware, systems and channels as well as techniques to mitigate them. We organized related research in a bottom-up manner from the physical layer to application-layer solutions to provide researchers and developers with a systematic view of what a full system entails. Discussions on research opportunities and future directions aimed to spawn further efforts in this area and encouraged the research community to make a concerted effort to transform research outcomes into real-world practices and products. \n\bibliographystyle{abbrv}\n\bibliography{temp}\n\end{document}", "id": "20089aff-71e9-415f-8c3d-3a6f9374f1db", "level": "section", "origin_cites_number": 0, "parent_id": "1352d6e5-225d-41b6-be40-dfba979ded66", "prefix_titles": [ [ "title", "Ubiquitous Acoustic Sensing on Commodity IoT Devices: A Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
51
[ 1695 ]
null
[ "Siddhant Bhambri", "Sumanyu Muku", "Avinash Tulasi", "Arun Balaji Buduru" ]
A Survey of Black-Box Adversarial Attacks on Computer Vision Models
2019
2019-12-03T20:06:49Z
cs.LG
Machine learning has seen tremendous advances in the past few years, which has led to deep learning models being deployed in varied applications of day-to-day life. Attacks on such models using perturbations, particularly in real-life scenarios, pose a severe challenge to their applicability, pushing research in a direction that aims to enhance the robustness of these models. After the introduction of these perturbations by Szegedy et al. \cite{Szegedy2013}, a significant amount of research has focused on the reliability of such models, primarily in two aspects - white-box, where the adversary has access to the targeted model and related parameters; and the black-box, which resembles a real-life scenario with the adversary having almost no knowledge of the model to be attacked. To provide a comprehensive security cover, it is essential to identify, study, and build defenses against such attacks. Hence, in this paper, we propose to present a comprehensive comparative study of various black-box adversarial attacks and defense techniques.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "148419f6-0bed-487d-a5df-2290d0d6ed0e", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ] ], "subsections": [ "d1a12b8c-1a5d-4f9f-b395-ceac4454a4bc", "5b72c3f7-0e79-4f33-a5c3-d2115ae8346f", "274eed6a-e8f1-4a9e-8fc9-4b9bf527c3ac", "afd4d474-d8b2-495b-a07f-4796fce61f81", "6d3245fb-c6c8-446e-a1f1-2bbe4d6c6052", "ec0c0627-040e-43e2-bccf-e85e0b006228" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{section_1}\nMachine Learning (ML) is the most influential field in the sciences today. With applications in Computer Science, Biomedical Engineering, Finance, and Law, leveraging advances in Language, Vision, and Audio through specialized branches like Deep Learning (DL) and Reinforcement Learning (RL), the involvement of ML in day-to-day life is undeniable. RL models are beating world champions at their own game; DL models can classify a multitude of categories with precise identification of species, objects, and materials, which is humanly impossible. The field has grown so much that Deep Language models are writing fiction, news reports, and poetry at the same time. While the developments in the area are exciting, the underlying techniques are probabilistic, making the performance of an ML model highly dependent on the training data. \nGaps in the training data have led to biased models: a biased model can have excellent performance on a set of inputs closer to the training data, but it fails on corner cases where the input was rarely seen during training. Whatever the domain, an ML model draws boundaries between classes in an $n$-dimensional space, and these decision boundaries cannot easily be reasoned about. Sometimes these boundaries can be subtle, making the model categorize inputs into different classes (with high confidence scores) in the immediate neighborhood of a boundary. 
Adversarial attacks are crafted to take advantage of these traits in ML and fool the system into wrongly interpreting an input. \nA typical adversarial attack starts with studying the model, inputs, outputs, training data, model parameters, etc. according to the threat model. While earlier threat models were centered around the training data and complete access to the model parameters, such assumptions are less practical, since the attacker is often an outsider. The black-box adversarial attack, on the other hand, limits the attacker's capabilities, with access only to the deployed model; sometimes restrictions are also put on the observable parameters of the model. The setting of black-box attacks and the constraints on the attacker are closer to the real-world scenario. \nTo understand adversarial attacks in the black-box context, we introduce the taxonomy in this section.", "id": "d1a12b8c-1a5d-4f9f-b395-ceac4454a4bc", "level": "section", "origin_cites_number": 0, "parent_id": "148419f6-0bed-487d-a5df-2290d0d6ed0e", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Introduction" ] ], "subsections": [ "2ac09a31-d444-4934-8162-2fb6c78a8d14", "55aeba2c-5298-493e-b47b-16192a878571", "f29f181e-03bc-45b7-aee4-4d94a4de1787", "d45097a4-4d4d-4900-8314-34a32ed4de2c" ], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{taxo}\nHere we introduce the key concepts in the context of adversarial attacks. \n\begin{itemize}\n \item \textbf{Adversary: }An entity trying to attack the ML model to classify a legitimate-looking input wrongly. Say input $i \in c$, where $c$ is the original class, the Adversary's goal is to make the model predict an input $i` \in c`$, where $i`$ is an input the adversary carefully crafts; notably, $i`$ will still be classified as $c$ by a human annotator. 
\n \\item \\textbf{Perturbation: }To achieve the above mentioned goals, the adversary crafts an input $i`$ which is perturbed from $i$ by adding an impurity say $\\delta \\dot \\epsilon$. Modified input $i` = i + \\delta \\dot \\epsilon$ is called the perturbed input. Constraints on the perturbation are a challenge to the Adversary. Any changes made should be under a certain value, to make sure that the input is still classified into its original class $c$ by a human annotator. $L2$ distance is often used in literature to define the acceptable impurity.\n \\item \\textbf{Adversarial Input: }Input crafted by Adversary after perturbation is called Adversarial Input. The key properties of this input are the impurity and actual input. The addition of this impurity makes the actual input \"jump the classification boundary,\" hence the model misclassifies an Adversarial Input. \n \\item \\textbf{Targeted Attack: }When the target misclassified class $c`$ of a perturbed input $i`$ is defined, then it is a Targeted Attack. Here, the adversary targets the impurity in such a way that the model classifies given input $i$ to class $c`$. \n \\item \\textbf{Query: }A query is counted as a single instance of sending input to the model under attack and noting the observations made. Minimizing the number of queries made reduces the time taken to build an adversarial example, hence it is a key aspect of building efficient adversary models. \n \\item \\textbf{Threat Model: }\n Threat model defines the rules of the attack, what resources can the adversary use, and the end goal of the attack. Two main components are described below:\n \\begin{itemize}\n \\item \\textbf{Attacker Goals: }Goals define what the adversarial input seeks from the attack. Various security goals such as integrity attack, availability attack, targeted attack, exploratory attack come under goals. An attacker can attempt a single goal or multiple at a given time. 
\n \\item \\textbf{Attacker Capabilities: }Information at adversary's disposal like the training data (with or without labels), model parameters, number of queries adversary can make come under capabilities of the attacker. Also, training time attack and post-training attack are other aspects in which the attacker's capabilities are defined. \n \\end{itemize}\n \\item \\textbf{White-box Attack: }In a white-box attack, the adversary knows everything about the target model. The knowledge of the adversary includes the learned weights, parameters used to tune the model. Labeled training data is also available in some cases. With this information, the usual strategy of the attacker is to model the distribution from weights and derive perturbed inputs to violate the boundaries while being in the $L2$ limits. \n \\item \\textbf{Black-box Attack: }Contrast to white-box attacks, a black-box attack has limited knowledge of the model, with no labeled data sometimes. An attack with black-box constraints is often modeled around querying the model on inputs, observing the labels or confidence scores. \n \\item \\textbf{Transferability: }It refers to the occasion of using a model trained on one the same of data, by observing inputs and outputs (querying) of the original model called \"substitute model,\" and using the perturbed inputs from this model to attack the original \"target model.\" The assumption here is that the substitute model simulates the target model. 
\n\\end{itemize}", "id": "2ac09a31-d444-4934-8162-2fb6c78a8d14", "level": "subsection", "origin_cites_number": 0, "parent_id": "d1a12b8c-1a5d-4f9f-b395-ceac4454a4bc", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Introduction" ], [ "subsection", "Taxonomy" ] ], "subsections": [], "title": "Taxonomy" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 313, 5828, 7306 ], "content": "{\n In an early work , the authors introduced the now-standard taxonomy of Adversarial attacks in Machine Learning; the survey gave a structure to the interaction between attacker and defender. By fitting the existing adversarial attacks into this threat model, authors generalized on the principles involved in the security of Machine Learning. Another significant contribution of the work is a line of defenses to Adversarial attacks that the authors presented based on their taxonomy. The work also generalized and standardized the job done in this domain. \n As the adversarial attacks became more frequent, later surveys in the field concentrated in specifics of the domain. is an extensive survey on attacks in Computer Vision with Deep Learning, gave a data-driven view of the same, and studied the fundamental security policy violations in adversarial attacks. \n As mentioned in Section \\ref{section_1}, Black Box adversarial attacks are more close to real life, and a lot of work is being carried out in this domain. However, extensive and informative the surveys above have been, there is a lack of study exclusive to Black Box attacks. With this work, we want to survey the existing benchmark Black Box attacks on different domains. We also address how each of the areas like Computer Vision, NLP differ in terms of robustness towards attack and the challenges in approaching these models. 
Table \\ref{chronological_table} shows a list of works we discuss in this paper, discussing the milestone contributions in the Black Box Adversarial space.\n}", "id": "55aeba2c-5298-493e-b47b-16192a878571", "level": "subsection", "origin_cites_number": 5, "parent_id": "d1a12b8c-1a5d-4f9f-b395-ceac4454a4bc", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Introduction" ], [ "subsection", "Related Surveys - How is our survey different?" ] ], "subsections": [], "title": "Related Surveys - How is our survey different?" }, { "cite_extract_rate": 0, "cites": [], "content": "{\n While black-box attacks limit the capabilities of the attacker, they are more practical. In a general sense, security attacks are carried out on full-grown and deployed systems. Attacks with intent to circumvent a system, disable a system, compromise the integrity are seen in the real world. \n Consideration of two parties, adversary and challenger, is possible in this paradigm. Challenger being the party that trains a model and deploys and adversary being the attacker whose intention is to break the system for a predefined goal. Multiple capability configurations that reflect real-world behavior are possible in this setting. As an example, if a robber evading from the crime scene wants to fool the surveillance system into tagging a different car number, or a criminal targeting the prey's car number getting tagged are configuration changes to adversarial user goals. However, the system under attack in this example is owned by a third party. Neither the robber nor the criminal was part of the development of the surveillance system. \n The limited capabilities of the adversary are just a reflection of real-world scenarios. Hence, works in this domain are more practical. 
\n}", "id": "f29f181e-03bc-45b7-aee4-4d94a4de1787", "level": "subsection", "origin_cites_number": 0, "parent_id": "d1a12b8c-1a5d-4f9f-b395-ceac4454a4bc", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Introduction" ], [ "subsection", "Why Black-Box Attacks?" ] ], "subsections": [], "title": "Why Black-Box Attacks?" }, { "cite_extract_rate": 0, "cites": [], "content": "{\n Adversarial attacks, in general, are evaluated based on the number of queries made on the model to converge on parameters. The fewer the queries, the better the attack, as the time taken to mount an attack will be minimized. Apart from the time taken to craft an attack, the perturbation norm is a standard measure for determining the effectiveness of an attack. Borrowed from the Machine Learning literature, the most common perturbation norms in use are $l_2$ and $l_\infty$, which are standard error measures. \n \begin{itemize}\n \item \textbf{$l_2$ norm: }Also called the Euclidean norm, $l_2$ measures the straight-line distance between two vectors. In the adversarial setting, the distance between the original input $i$ and the perturbed input $i`$ is calculated. To keep this distance minimal, the goal is to change an input $i$ in a way indistinguishable to a human annotator. \n \item \textbf{$l_\infty$ norm: }It is the largest absolute entry of a vector. In the adversarial setting, $l_\infty$ essentially gives the maximum change applied to any single component of the input. 
\n \\end{itemize}\n}", "id": "d45097a4-4d4d-4900-8314-34a32ed4de2c", "level": "subsection", "origin_cites_number": 0, "parent_id": "d1a12b8c-1a5d-4f9f-b395-ceac4454a4bc", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Introduction" ], [ "subsection", "Evaluation of Attack" ] ], "subsections": [], "title": "Evaluation of Attack" }, { "cite_extract_rate": 0.7368421052631571, "cites": [ 909, 8103, 8101, 6144, 6141, 963, 6140, 907, 6143, 6139, 898, 913, 6142, 8102 ], "content": "{\n As discussed in Section \\ref{taxo}, resources with an adversary differ considerably based on the type of attack model. In a white-box attack, an adversary can have full access to the model right from the start of training, which leads to the availability of training data, testing data, network architecture, parameters, and finally, the learned weights of a model, etc. Also, the number of queries an adversary can make comes under resources. For example, in a one-shot adversarial attack, the attacker has only a single shot at fooling the model; in such cases crafting the adversarial example does not involve a fine-tuning step. \n When it comes to black-box attacks, the resources with adversary are considerably less. For one, the adversary will have no access to the model in the training phase, nor the adversary knows weights and parameters used in the model. However, once the model is deployed, based on the type of attack, the probability of each predicted label will be provided to the adversary. A stricter case is where only the predicted label is known to the attacker without any confidence score. Varying degree of information like labeled dataset is provided to adversaries in specific threat models. Again, as said above, the number of queries an attacker can make while crafting an adversarial example is considered under resources. An attack is deemed to be superior if the amount of resources consumed is minimal. 
\n}\n\\begin{table}[]\n\\centering\n\\begin{tabular}{|ccc|c|}\n\\hline\n\\textbf{Month} & \\textbf{Year} & \\textbf{Method} & \\textbf{Proposed Approach} \\\\ \\hline\nFeb & 2016 & Papernot et al. & Local substitute model \\\\\nDec & 2016 & Narodytska et al. & Gradient-free \\\\ \\hline\nJul & 2017 & Narodytska et al. & Advanced Local Search \\\\\nNov & 2017 & Chen et al. & ZOO \\\\ \\hline\nFeb & 2018 & Brendel et al. & Boundary Attack \\\\\nJul & 2018 & Ilyas et al. & Limited Queries and Info \\\\\nJul & 2018 & Cheng et al. & Opt-Attack \\\\\nOct & 2018 & Bhagoji et al. & Query Reduction using Finite Differences \\\\\nOct & 2018 & Du et al. & Attack using Gray Images \\\\ \\hline\nJan & 2019 & Tu et al. & AutoZOOM \\\\\nApr & 2019 & Shi et al. & Curls \\& Whey \\\\\nApr & 2019 & Dong et al. & Translation-Invariant \\\\\nMay & 2019 & Chen et al. & POBA-GA \\\\\nMay & 2019 & Brunner et al. & Biased Sampling \\\\\nMay & 2019 & Li et al. & NATTACK \\\\\nMay & 2019 & Moon et al. & Combinatorial Optimization \\\\\nMay & 2019 & Ilyas et al. & Bandits \\&Priors \\\\\nJul & 2019 & Alzantot et al. & GenAttacks \\\\\nAug & 2019 & Guo et al. & SimBA \\\\ \\hline\n\\end{tabular}\n\\caption{A comparison of works that have focused on black-box scenarios in an adversarial setting.}\n\\label{chronological_table}\n\\end{table}", "id": "5b72c3f7-0e79-4f33-a5c3-d2115ae8346f", "level": "section", "origin_cites_number": 19, "parent_id": "148419f6-0bed-487d-a5df-2290d0d6ed0e", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Black-Box vs White-Box" ] ], "subsections": [], "title": "Black-Box vs White-Box" }, { "cite_extract_rate": 0, "cites": [], "content": "{\nA Black Box adversarial example starts with evaluating access to the resources at the attacker's disposal. These include the kind of model, any model related parameters such as confidence scores, test set, training dataset, etc. Once the resources are in place. 
An attack is then carried out in one of the following ways. Using a \textbf{transferable} attack strategy, the adversary trains a parallel model, called a \textbf{substitute model}, to emulate the original model; the attacker may use an architecture far more capable than the original for this weight estimation. In the absence of a substitute model, the attacker instead relies on a \textbf{query feedback mechanism}, continuously refining the perturbed input while querying the model under attack. \n As mentioned earlier, when a network's architecture and model weights are unknown, which is the case in black-box adversarial attacks, it is common practice for attackers to train another model from scratch, called a substitute model, that is intended to emulate the model under attack. Once the substitute model has achieved satisfactory accuracy, adversarial examples are crafted by examining its weights. Inputs that are misclassified by the model in the attacker's hands are then fed to the actual model to achieve the adversarial goals. The substitute architecture is usually chosen to be considerably more expressive than the target's, so that the latent space is captured with greater contrast. Since the adversary has substantial control over and knowledge of the task at hand, this is the preferred choice for one-shot and targeted adversarial attacks. To put it in a sentence: estimating the target through continuous querying, transferring the learned weights, and then strategically attacking the model is what constitutes a transferable attack. \n Adversarial attacks that do not use a substitute model pose the challenge of identifying the decision boundaries and crafting the perturbed input manually. Rather than training a model and then examining its weights, this kind of attack is more hands-on and interactive. The attack takes a query-feedback form, where the attacker starts with a random input and keeps adding noise to it within the acceptable perturbation budget. 
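Such a query-feedback loop can be sketched in a few lines. The sketch below is a minimal illustration rather than any specific published attack; the toy `query_confidence` scorer stands in for the attacked classifier, and every name in it is an assumption:

```python
import random

def query_confidence(x):
    """Toy stand-in for the attacked model: returns the model's
    confidence that input vector x still belongs to the true class."""
    return 1.0 / (1.0 + sum(v * v for v in x))

def query_feedback_attack(x, step=0.5, budget=200, seed=0):
    """Start from a given input and iteratively add random noise,
    keeping a candidate only when the queried confidence drops."""
    rng = random.Random(seed)
    best, best_score = list(x), query_confidence(x)
    for _ in range(budget):
        candidate = [v + rng.uniform(-step, step) for v in best]
        score = query_confidence(candidate)
        if score < best_score:  # confidence deteriorated: keep this noise
            best, best_score = candidate, score
    return best, best_score

perturbed, score = query_feedback_attack([0.0, 0.0, 0.0])
```

Each iteration costs one model query, so the budget parameter directly bounds the number of queries spent on the attack.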
As the confidence score of the input deteriorates, the attacker keeps pushing the noise further in that direction, much like following a gradient. In some cases, gradient descent itself is used to mark the direction and track the movement of the noise. Also termed local search, the technique boils down to searching for the right dimension in the latent space that yields a misclassified input. A detailed discussion of attacks following both the substitute-model and the query-feedback mechanism has been the pillar of black-box adversarial attacks. Beginning with a classification flowchart, the following section discusses both in greater detail. \n }", "id": "274eed6a-e8f1-4a9e-8fc9-4b9bf527c3ac", "level": "section", "origin_cites_number": 0, "parent_id": "148419f6-0bed-487d-a5df-2290d0d6ed0e", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Crafting a Black-Box attack " ] ], "subsections": [], "title": "Crafting a Black-Box attack " }, { "cite_extract_rate": 0, "cites": [], "content": "This section provides a detailed discussion of recent and essential works in the black-box adversarial attacks domain. 
In this section, we cover the dataset used, key contributions, loss function used to calculate perturbations, and other relevant learnings from each of the discussed research works.", "id": "afd4d474-d8b2-495b-a07f-4796fce61f81", "level": "section", "origin_cites_number": 0, "parent_id": "148419f6-0bed-487d-a5df-2290d0d6ed0e", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Black-Box Attacks in Computer Vision" ] ], "subsections": [ "082d368b-97f3-4f10-aa5e-b9f4bc37baed", "8130e0d5-5c03-4a93-b9b5-de2233c7b5c1", "55ddee66-aa94-4694-90cf-e015f87b78e6" ], "title": "Black-Box Attacks in Computer Vision" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 909, 8103, 8101, 6144, 6141, 963, 6140, 907, 6143, 6139, 898, 913, 6142, 8102 ], "content": "{\n The most frequently used datasets to demonstrate adversarial attacks are as follows: \n \\begin{itemize}\n \\item \\textbf{MNIST} : The dataset includes grayscale handwritten digits of dimensions 28x28. The dataset consists of 60,000 training examples and 10,000 test examples.\n \\item \\textbf{CIFAR 10} : The dataset includes 60,000 RGB images of dimensions 32x32. There are ten classes and 6000 images per class. The train test division consists of 50,000 training images and 10,000 test images.\n \\item \\textbf{ImageNet} : ImageNet is a large scale dataset consisting of 14,197,122 images. It is described using synsets similar to WordNet. There are more than 20,000 categories that have been annotated using more than 100,000 synset categories from WordNet.\n \\end{itemize}\n}\n\\begin{figure}\n\\label{attack_classification}\n \\begin{center}\n \\begin{forest}\n for tree={\n myleaf/.style={label={[align=left]below:{\\strut#1}}},\n s sep=1cm\n }\n [Black-Box Adversarial Attacks,rectangle,rounded corners,draw\n [Gradient Estimation,rectangle,rounded corners,draw,align=center,\n myleaf={$\\bullet$ Chen et al. \\\\\n $\\bullet$ Ilyas et al. 
\\\\\n $\\bullet$ Cheng et al. \\\\\n $\\bullet$ Bhagoji et al. \\\\\n $\\bullet$ Du et al. \\\\\n $\\bullet$ Tu et al. \\\\\n $\\bullet$ Ilyas et al. }\n ]\n [Transferability,rectangle,rounded corners,draw,align=center,\n myleaf={$\\bullet$ Papernot et al. \\\\\n $\\bullet$ Shi et al. \\\\\n $\\bullet$ Dong et al. }\n ]\n [Local Search,rectangle,rounded corners,draw,align=center,\n myleaf={$\\bullet$ Narodytska et al. \\\\\n $\\bullet$ Narodytska et al. \\\\\n $\\bullet$ Brendel et al. \\\\\n $\\bullet$ Chen et al. \\\\\n $\\bullet$ Brunner et al. \\\\\n $\\bullet$ Li et al. \\\\\n $\\bullet$ Alzantot et al. \\\\\n $\\bullet$ Guo et al. }\n ]\n [Combinatorics,rectangle,rounded corners,draw,align=center,\n myleaf={$\\bullet$ Moon et al. }\n ]\n ]\n \\node[above=30pt,align=center,anchor=center] {Classification of prior work on black-box scenario-based attacks.};\n \\end{forest}\n \\end{center}\n \\end{figure}", "id": "082d368b-97f3-4f10-aa5e-b9f4bc37baed", "level": "subsubsection", "origin_cites_number": 21, "parent_id": "afd4d474-d8b2-495b-a07f-4796fce61f81", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Black-Box Attacks in Computer Vision" ], [ "subsubsection", "Popular Image data sets" ] ], "subsections": [], "title": "Popular Image data sets" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "8130e0d5-5c03-4a93-b9b5-de2233c7b5c1", "level": "subsection", "origin_cites_number": 0, "parent_id": "afd4d474-d8b2-495b-a07f-4796fce61f81", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Black-Box Attacks in Computer Vision" ], [ "subsection", "Attack Techniques" ] ], "subsections": [ "84cf2917-cb4e-4c5a-a889-8d4bc35e6f9d", "55dd2f00-e31c-4b90-9230-c86b494300ba", "2d3cd532-598b-4c61-a71e-e3761ce47eff", "8e341b0f-b2a6-437d-972f-aef20af44b81" ], "title": "Attack Techniques" }, { "cite_extract_rate": 0.5714285714285711, 
"cites": [ 909, 890, 6143, 917, 6142, 913, 6145, 8102 ], "content": "\\begin{itemize}\n \\item \\textbf{ZOO}: Chen et al. proposed \\textit{Zeroth Order Optimization(ZOO)} to estimate the gradients of target DNN in order to produce an adversarial image. The threat model assumed by the authors is that the target model can only be queried to obtain the probability scores of all the classes. The loss function formulated is:\n \\begin{equation}\n For Targeted Attack: f(x,t) = max\\{max_{i\\neq t}log[F(x)]_i-log[F(x)]_t,-\\textit{k}\\}\n \\end{equation}\n \\begin{equation}\n For Untargeted Attack: f(x) = \\{log[F(x)]_{t_0}- max_{i\\neq t_0}log[F(x)]_i,-\\textit{k}\\}\n \\end{equation}\n The author then uses \\textit{symmetric difference quotient} method to estimate the gradient $ \\frac{\\partial f(x)}{\\partial x_i}$:\n \\begin{equation}\n \\hat{g} := \\frac{\\partial f(x)}{\\partial x_i} \\approx \\frac{f(x+he_i)-f(x-he_i)}{2h}\n \\end{equation}\n The above naive solution requires querying the model $2p$ times, where $p$ is the dimension of the input.\n So the author proposes two \\textit{stochastic coordinate methods}: ZOO-Adam and ZOO-Newton in which a gradient is estimated for a random coordinate and the update formula is obtained using ADAM and Newton's Method until it reaches convergence. The authors also discuss the generation of noise in a lower dimension to improve efficiency and specify its advantages and disadvantages.\n \\vspace{5mm}\n \\item \\textbf{Limited Queries \\& Information}: The authors in take three primary cases into account to devise successful black-box adversarial attacks. The first case talks about the constraint of limited queries that can be made to the model by the adversary. This could again be of two types- one that contains a time limit and the other that is concerned with the monetary limit. 
The authors present a variant of Natural Evolution Strategies(NES) coupled with Projected Gradient Descent (PGD), as used in white-box attacks, to construct adversarial examples.\n The second case is modeled around the constraint of a partial-information setting where the adversary can obtain confidence scores of the first $k$ classes for a given input. These scores may not add up to 1 since the adversary doesn't have access to the probabilities for each possible classification label. To tackle this scenario, instead of beginning with the input image $x$, it is recommended to begin with $x_o$ belonging to the target adversarial class $y_{adv}$. Hence, after each step, we need to ensure two things-\n \\begin{itemize}\n \\item The target adversarial class needs to remain in the top-k classes at all points in time when the input image is being perturbed.\n \\begin{equation}\n \\epsilon_t = min\\, \\epsilon'\\, s.t.\\, rank\\, \\big( y_{adv}|\\prod_{\\epsilon'}(x^{(t-1)})\\big) < k \n \\end{equation}\n \\item The probability of the input image getting classified as the target class increases with every iteration of PGD.\n \\begin{equation}\n x^{(t)} = arg\\, \\smash{\\displaystyle\\max_{x'}}\\, P\\big(y_{adv}|\\prod_{\\epsilon_{t-1}}(x')\\big) \n \\end{equation}\n \\end{itemize}\n Lastly, in the third scenario, the adversary is not given any confidence scores. Instead, the adversary can only obtain the names of the classification labels for the given input data. The authors define a discretized score $R(x^{t})$ for an adversarial example to quantitatively represent the adversarial nature of the input image at each step $t$, with access only to the $top-k$ sorted labels. \n The proposed approach was evaluated on the InceptionV3 network with 78\\% $top-1$ accuracy and also on Google Cloud Vision (GCV) API, which presents a real-world scenario to perform an adversarial attack. 
90\% and above accuracy is achieved for all three settings when the InceptionV3 network is attacked using the techniques mentioned above.\n \vspace{5mm}\n \item \textbf{Opt-Attack}: Cheng et al. attempt to devise a black-box adversarial attack in a much stricter hard-label setting. This means that querying the target model returns only the final predicted label, unlike other threat models where $K$-class probability scores or labels are considered.\n The authors make the attack query-efficient by treating the problem as a real-valued, continuous optimization problem. \n \begin{comment}\n They also explain the difficulty in formulating attacks in a hard-label black-box setting by using Figure \ref{fig:query_eff}.\n \begin{figure}[h]\n \centering\n \captionsetup{width= 0.7\linewidth}\n \centerline{\includegraphics[width = 0.8\linewidth]{Images/Query_Efficient_HL.png}}\n \caption{(a) Decision Boundary of a Classifier where $f:R^d->\{1,2...,K\}$ and $x$ is the input image. (b) Loss function of C\&W Attack (c) C\&W Loss Function in hard label setting (d) The objective function proposed by authors.}\n \label{fig:query_eff}\n \end{figure}\n \end{comment}\n The objective function proposed by them is as follows:\n \begin{equation}\n Untargeted Attack: \hspace{0.2cm}\n g({\theta})= min_{\lambda>0}\hspace{0.1cm} \lambda \hspace{0.2cm}s.t.\hspace{0.2cm} f({x_0+ \lambda*\frac{\theta}{||\theta||}})\neq y_0\n \end{equation}\n \begin{equation}\n Targeted Attack (given target t): \hspace{0.2cm}\n g({\theta})= min_{\lambda>0}\hspace{0.1cm} \lambda \hspace{0.2cm}s.t.\hspace{0.2cm} f({x_0+ \lambda*\frac{\theta}{||\theta||}})= t\n \end{equation}\n Here $\theta$ is the search direction and $g(\theta)$ is the distance from the input image $x_0$ to the closest adversarial example in direction $\theta$. Cheng et al. 
uses ${\\theta}^* = argmin_{\\theta} \\hspace{0.1cm}g(\\theta)$ to obtain $x^* = x_0 + g({\\theta}^*)*\\frac{\\theta}{||\\theta||}$ by using a coarse grained search to initially find a decision boundary and then fine tuning the solution using Binary Search.\n The optimization problem proposed above is solved using \\textit{Randomized Gradient Free (RGF) Method}, which is an iterative Zeroth Order Optimization(ZOO) Method in which the small increment value $\\textbf{\\textit{u}}_t$ is sampled randomly from a Gaussian Distribution.\n \\vspace{5mm}\n \\item \\textbf{Query Reduction using Finite Differences}: Finite Differences method was used to estimate the gradient in the case of black-box attacks where no knowledge regarding the gradient of the loss function exists. The authors in propose a two-sided gradient estimation for a function $f(x)$, where $x \\in R^d$.\n The loss function used in this scenario is based on logit, which was proved to work well in the white-box attacks . The logit loss is given by:\n \\begin{equation}\n l(x,y) = \\phi(x + \\delta)_{y}\\, -\\, max\\,\\{\\phi(x+\\delta)_i\\, : i \\neq y\\}\n \\end{equation}\n where $y$ represents the actual label for a given sample $x$ and $\\phi(.)$ are the logits.\n By taking the logarithm of the SoftMax probabilities, the logit values can be computed up to an additive constant. This additive constant gets canceled out because the difference in the logits is equal to the loss function. Hence, the proposed method of using Finite Differences can be used to calculate the difference between the logit values of the actual label $y$ and the next most likely label $y'$.\n A drawback to this naïve approach is that the model is required to make $O(d)$ queries per input, where $d$ is the dimensions of input. Authors have further proposed two query reduction techniques based on grouping the features. 
The first approach is based on a random selection of features and grouping them, which will allow for simultaneous gradient estimation for the selected features. The second approach is more technical since it involves computing directional derivatives along with the principal components as determined by principal component analysis (PCA).\n The proposed approach was evaluated on two data sets - MNIST and CIFAR-10. Two models, one with two convolutional layers attached to a fully connected layer and another with three convolutional layers. These models were trained on the MNIST dataset. On the other hand, ResNet-32 and ResNet-28-10 models were used for training in the case of the CIFAR-10 data set. \n The conclusion obtained from this approach is that the effectiveness of the crafted adversarial examples can be reduced by modifying the model output probabilities on which Gradient Estimation attacks are hugely dependent on.\n \\vspace{5mm}\n \\item \\textbf{Attack using Gray Images}: A completely new approach was followed in to generate black-box adversarial examples by initializing the adversarial sample from a gray image. This is followed by the fact that the adversary doesn't have access to any natural input. This methodology begins by initializing the input for the target model with normalized pixel values within [0,1]. Hence, the adversary now has more flexibility in adding perturbation to the image by either increasing the pixel value to 1 to make it brighter or decreasing the pixel value to 0 to make it darker.\n Then the authors have defined the fitness function in this approach as:\n \\begin{equation}\n J(\\theta) = P(y'|\\theta)\\approx[F(\\theta)]_{y'}\n \\end{equation}\n The objective here is to maximize the probability term given in the fitness function until the input image gets classified into another class y'. Similar to many other approaches, the authors have preferred to adopt NES optimization. 
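The NES-style search-gradient estimate that such attacks rely on can be sketched as follows; this is a minimal illustration in which the `fitness` callable stands in for the probability term $P(y'|\theta)$ and is an assumption:

```python
import random

def nes_gradient(fitness, theta, sigma=0.1, n=200, seed=0):
    """NES-style search-gradient estimate: probe the black-box fitness
    with mirrored (antithetic) Gaussian perturbations around theta."""
    rng = random.Random(seed)
    d = len(theta)
    grad = [0.0] * d
    for _ in range(n):
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        f_plus = fitness([t + sigma * ui for t, ui in zip(theta, u)])
        f_minus = fitness([t - sigma * ui for t, ui in zip(theta, u)])
        for j in range(d):
            grad[j] += (f_plus - f_minus) * u[j]
    return [v / (2.0 * sigma * n) for v in grad]

# On a linear fitness 2*x0 - x1 the estimate should approach [2, -1].
est = nes_gradient(lambda x: 2.0 * x[0] - x[1], [0.5, 0.5])
```

Mirrored sampling cancels the baseline fitness value, which keeps the variance of the estimate manageable at a fixed query budget.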
Here $\\theta$ represents the perturbed input, and Gaussian search distribution has been chosen given the fact that it provides a more straightforward method of estimating the gradient where the two parameters that need to be optimized are mean $\\phi_{d}$ and variance $\\sigma_{dxd}$.\n An important point to be noted in this approach is that this is a region-based attack algorithm, which implies that the gradients are dependent on the region space. Evaluation of gradients is then followed by performing a projected gradient ascent similar to the approach in . Input update can be shown as:\n \\begin{equation}\n x^t = \\prod_{[0,1]}(x^{t-1}+\\lambda sign(g_t(h,w)))\n \\end{equation}\n Where gamma represents the learning rate, and $\\pi_{[0,1]}$ represents the projection operator, which clips the values of the pixel in the range [0,1]. The search gradients are used to update the image iteratively until the goal is achieved.\n \\vspace{5mm}\n \\item \\textbf{AutoZOOM}: Tu et al. proposed the framework AutoZOOM, which is short for Autoencoder based Zeroth Order Optimization, for crafting efficient Black Box Attacks. The two main contributions of this approach were:\n (i) an adaptive random gradient estimation method for stabilizing query counts and distortion and (ii)\n an autoencoder trained offline on unlabelled data or a simple Bilinear Image Resizer(BiLIN) to \n expedite the attack process. \n The optimization problem formulated by Tu et al. :\n \\begin{equation}\n min_{x \\in {[0,1]}^d} Dist(x,x_0) + \\lambda. 
Loss(x,M(F(x)),t)\n \\end{equation}\n where $Dist(x,x_0)={||x-x_0||}_p$ , $F(x):{[0,1]}^d \\rightarrow \\mathcal{R}^K$, $M(.)$ is the monotonic function applied on classifier outputs, $Loss(.)$ is the attack objective reflecting the likelihood of predicting $t = argmax_{k \\in \\{1,2,...K\\}} [M(F(x))]_k$ and $\\lambda$ is the regularization constant.\n Loss function can be both either training loss of DNN or C\\&W Loss .\n Inspired by the convergence of \\textit{Symmetric Difference Quotient Method} and taking query efficiency into consideration, the authors proposed a scaled random full gradient estimator of $\\nabla f(x)$, defined as:\n \\begin{equation}\n \\textbf{g} = b. \\frac{f(x+\\beta u)-f(x)}{\\beta}. u\n \\end{equation}\n To control error in gradient estimation, an average is taken over q random directions:\n \\begin{equation}\n \\bar{\\textbf{g}} = \\frac{1}{q} \\sum_{j=1}^{q} \\textbf{g}_j\n \\end{equation}\n They have also proven that $l_2$ distance between $\\bar{g}$ and $\\nabla f(x)$ is constrained by an upper bound:\n \\begin{equation}\n \\begin{split}\n \\mathcal{E}||\\bar{g}-\\nabla f(x)||_2^2 \\leq & 4(\\frac{b^2}{d^2}+\\frac{b^2}{dq}+\\frac{(b-d)^2}{d^2})||\\nabla f(x)||_2^2 + \\frac{2q+1}{q}b^2 \\beta^2 L^2\n \\end{split} \n \\end{equation}\n The above upper bound is minimum when $b \\approx q$. They also mention that the convergence rate of zero order gradient descent methods is $O(\\sqrt{\\frac{d}{T}})$ and so it is plausible to generate perturbation from a lower dimension. In AutoZOOM, random gradient estimation is performed from a lower dimension $d^{\\prime}<d$ in order to improve query efficiency. A decoder,$D:\\mathcal{R}^{d^{\\prime}} \\rightarrow \\mathcal{R}^d$ is used to add noise such that: $x = x_0 + D(\\delta^{\\prime})$ where $\\delta^{\\prime} \\in \\mathcal{R}^{d^{\\prime}}$. 
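The averaged random-direction estimator described above can be sketched as follows. This is a simplified illustration that omits the decoder $D$ (equivalently, takes it to be the identity map); all names are illustrative:

```python
import random

def averaged_random_gradient(f, x, beta=1e-3, q=200, b=None, seed=0):
    """Averaged random full-gradient estimator: scale one-sided
    differences along q random unit directions and average them.
    With b = d the estimate is approximately unbiased for small beta
    under the uniform unit directions used here."""
    rng = random.Random(seed)
    d = len(x)
    b = d if b is None else b
    fx = f(x)
    avg = [0.0] * d
    for _ in range(q):
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = sum(v * v for v in u) ** 0.5
        u = [v / norm for v in u]  # random unit direction
        coeff = b * (f([xi + beta * ui for xi, ui in zip(x, u)]) - fx) / beta
        avg = [a + coeff * ui / q for a, ui in zip(avg, u)]
    return avg

# On the linear function x0 + 2*x1 the estimate should approach [1, 2].
g = averaged_random_gradient(lambda x: x[0] + 2.0 * x[1], [0.0, 0.0])
```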
Initially $q=1$ but with each iteration q is increased in order to get more accurate estimation.\n \\vspace{5mm}\n \\item \\textbf{Bandits \\& Priors}: Ilyas et al. successfully exploited prior information about the gradient using bandit optimization. Two key observations can meet the challenge of obtaining prior information about the gradient:-\n \\begin{itemize}\n \\item The input data point for which gradient computation is carried out is not arbitrary, and its structure is predictable and also reflected in the gradient.\n \\item Heavy correlation can be found among the gradients that are used in successive iterations while performing iterative gradient attacks (e.g., PGD).\n \\end{itemize}\n Two categories of priors, namely- time-dependent priors and data-dependent priors, have been introduced, which significantly helped in performing improved black-box adversarial attacks. The value of cosine similarity at each optimization step between successive gradients proves the correlation between them. Hence, the authors used the gradient at time $t-1$ as a prior for the gradient calculation at time $t$.\n \\begin{equation}\n \\frac{\\big \\langle \\bigtriangledown_{x}L(x_t,y), \\bigtriangledown_x L(x_{t+1},y) \\big \\rangle}{\\norm{\\bigtriangledown_x L(x_t, y)}_2 \\norm{\\bigtriangledown_x L(x_{t+1},y)}_2}\\quad\n t \\in \\{1...T-1\\}\n \\end{equation}\n On the other hand, an analogy to the correlation between pixel values of an image has been made to understand the relationship between gradients for two close coordinates. Hence, the structure of the input data point can also be used to reduce the query complexity.\n Untargeted adversarial examples have been crafted by considering both $l_2$ and $l_{\\infty}$ norm-based threat models on the ImageNet data set. The approach experimented on three classifiers-Inception-v3, ResNet-50 and VGG16. 
An average of 1117 queries was required in the case of the proposed approach that made use of bandit optimization along with time and data-dependent priors as compared to about 1735 queries in the simple case of NES. Attack failure rate was decreased from 22.2\\% in $l_2$ and 34\\% in $l_{\\infty}$ for NES to 4.6\\% in $l_2$ and 15.5\\% in $L_\\infty$ for the Bandit approach.\n\\end{itemize}", "id": "84cf2917-cb4e-4c5a-a889-8d4bc35e6f9d", "level": "subsubsection", "origin_cites_number": 14, "parent_id": "8130e0d5-5c03-4a93-b9b5-de2233c7b5c1", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Black-Box Attacks in Computer Vision" ], [ "subsection", "Attack Techniques" ], [ "subsubsection", "Gradient Estimation" ] ], "subsections": [], "title": "Gradient Estimation" }, { "cite_extract_rate": 0.875, "cites": [ 919, 892, 6146, 8101, 890, 902, 963 ], "content": "\\begin{itemize}\n \\item \\textbf{Local Substitute Model}: In , the targeted deep learning model $O$, also known as the oracle, was used to construct a synthetic data set. Labels for this synthetically crafted input data were obtained by querying the oracle $O$ itself. This assists the attacker to create an approximation $F$ of the target model $O$. The concept of transferability allows the attacker to ensure that $O$ will also misclassify the input images misclassified by the approximated model $F$. The primary challenge faced by the attacker in this approach is to train the model $F$ without any prior knowledge of the architecture $O$. Constraint on the number of times the attacker can query the model $O$ was a limitation as well. 
To overcome this problem, the authors propose Jacobian-based Dataset Augmentation, which helps in approximating the decision boundaries with limited queries.\n Authors proposed a five-step approach, as listed below:\n \\begin{enumerate}\n \\item The attacker collects a small set $S_o$, which represents the entire set of inputs $S$ to the oracle. For example, a limited number of input images, say $k$ from every class will be obtained.\n \\item Then, a suitable architecture is decided upon that can be used to train the approximation model $F$. It is shown that the number, size, and type of layers do not determine a successful attack.\n \\item The substitute model is trained by repetitively applying the following steps-\n \\begin{enumerate}\n \\item Model $F$ queries the oracle model $O$ for each of the samples $\\Vec{x} \\in S_p$, where $S_p$ is the training set available to the substitute the model. \n \\item Approximated model is then trained by the attacker using the available training set $S_p$.\n \\item Apply the augmentation technique on the training set $S_p$ available with the model $F$ and use it to generate new synthetic data points. This results in an even larger training data set $S_{p+1}$, which is a better representative of the decision boundaries of the target model.\n \\end{enumerate}\n The augmentation technique proposed, i.e., the Jacobian-based Dataset Augmentation is formulated as:\n \\begin{equation}\n S_{p+1} = \\{\\Vec{x}+\\lambda\\cdot sgn(J_F[\\check{O}(\\Vec{x})]:\\Vec{x} \\in S_p) \\} \\cup S_p \n \\end{equation}\n where, $\\lambda$ is the step-size taken during augmentation to create the set $S_{P+1}$ from the initial available training set $S_p$.\n \\end{enumerate}\n \\vspace{5mm}\n \\item \\textbf{Curls \\& Whey}: The authors in aim to enhance the diversity as well as transferability of adversarial examples in the case of black-box attacks. Their approach consists of two steps. 
The first step proposed deals with the problem of adding noises/perturbation to an input sample monotonically along the gradient ascent's direction using Curl's iteration. In contrast, the second step deals with the issue of removing excessive redundant noises from the crafted adversarial examples using Whey optimization. The proposed methodology can be better understood through .\n The methodology aims to push the input data point $x$ from the current class to a different class with which it shares its decision boundary. To perform the same operation, noise may be added to this data point monotonically along the direction of gradient ascent. However, the authors propose a shortcut to reach a nearer point that is present across the decision boundary. They suggest to initially follow the direction of the gradient descent and reach the 'valley floor' and then perform iterations to climb up the gradient. This is known as the Curl's iteration. Noise refinement mechanisms are followed after and before the Curl's iteration, respectively, which has been defined as Whey optimization.\n \\vspace{5mm}\n \\item \\textbf{Translation-Invariant}: This attack method makes use of an ensemble of images rather than aiming to optimize the objective function. The calculation of the gradient makes use of the assumption that convolutional neural networks possess the translation-invariant property. Hence, the authors propose to shift the original input image by a limited number of pixels (10, in this case) in a 2-dimensional space.\n The proposed method can be integrated with any of the gradient-based attacks which include Fast Gradient Sign Method (FGSM) , Basic Iterative Method (BIM) , Momentum Iterative Fast Gradient Sign Method (MI-FGSM) , Diverse Inputs Method and Carlini \\& Wagner's Method (C\\&W) . 
Experiments have been shown by the integration of the translation-invariant property with the FGSM attack where the update rule can be stated as:\n \\begin{equation}\n x^{adv} = x^{real} + \\epsilon\\dot sign(W * \\bigtriangledown_{x} J(x^{real}, y))\n \\end{equation}\n Where the notations have the usual meanings as described in .\n\\end{itemize}", "id": "55dd2f00-e31c-4b90-9230-c86b494300ba", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "8130e0d5-5c03-4a93-b9b5-de2233c7b5c1", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Black-Box Attacks in Computer Vision" ], [ "subsection", "Attack Techniques" ], [ "subsubsection", "Transferability" ] ], "subsections": [], "title": "Transferability" }, { "cite_extract_rate": 0.769230769230769, "cites": [ 909, 6147, 6140, 8103, 8982, 6139, 898, 890, 907, 6141 ], "content": "\\begin{itemize}\n \\item \\textbf{Gradient-free} Authors in propose an adversarial attack methodology based on the concept of greedy local-search technique. Their proposed algorithm perturbs an image by choosing the most relevant pixels of the image to which noise is added. The observed change by adding this noise in each iteration is used to approximate the loss function's gradient. The importance of the pixels can be understood in this step by observing the change in classification accuracy after each round.\n The most important contribution of this work is the creation of the neighboring images to a given input image. This neighborhood will consist of all images that are different from the previous round's image by one pixel only. Initially, a pixel location is randomly selected, and perturbation is added to it. 
The next pixel location is chosen among all the pixels that lie within a square whose side length is $2p$, and the center is the previous pixel location, where $p$ is an arbitrary parameter.\n The authors have focused on $k-misclassification$, where the correct label for an image should not be enlisted in the top-k classes predicted by the model. As soon as the target model can push the right class out of the top-k classes, no more noise is added to the image, and the algorithm gets terminated. The local search technique adopted is modified to increase the classification score of the targeted class in each round, in the case of targeted attacks.\n \\vspace{5mm}\n \\item \\textbf{Advanced Local Search}: Local search technique is a commonly used incomplete search algorithm used to solve problems on combinatorics. In , the authors have used this strategy over the image space to construct an approximated network gradient. The objective function is to minimize the probability that the perturbed image does not share the classification label with the original image. The cost function defined is-\n \\begin{equation}\n f_{c(I)}(\\hat{I})=o_{c(I)}\n \\end{equation}\n where $NN(\\hat{I})=(o_1,...,o_C)$, in which NN is the target neural network model and $o_j$ represents the probability of image $\\hat{I}$ belonging to class $j$. This function is minimized by the local search technique.\n The next step in the formulation is to create a set of images that can act as neighbors of the original image. The authors have defined these neighbor images with the help of pixel locations since images generated after a single iteration differ only in one pixel from the previous image. Consider the pixels that were perturbed in the last round. Hence, in the current round, all those pixels that lie in a square of side length $2d$ with the perturbed pixel at its center will be considered. 
Here, $d$ is only a parameter.\n In the last step, a function $g$ has been defined as the transformation function, which generates a new image $\\hat{I}_i$ after every round $i$ by adding perturbation to a set of pixels only.\n \\vspace{5mm}\n \\item \\textbf{Boundary Attack}: Decision-based adversarial attacks that are based on a model's final decision, i.e., class labels or transcribed sentences, are more common in real-life scenarios as compared to confidence score-based attacks. The authors also argue that these attacks are more robust to defense techniques such as gradient masking, intrinsic stochasticity, or vigorous training. Also, lesser information regarding the targeted model is required to conduct a successful attack. This was the first time that decision-based attacks were introduced that focus on deep learning models using real-life data sets such as ImageNet.\n The proposed algorithm initializes by taking a random distribution of pixels for an image in the range of [0, 255]. Using this image, the algorithm traverses the decision boundary of the class to which the sample input image belongs to. The initialized image is then able to reduce its distance w.r.t. the input image besides staying inside the adversarial region. \n \\vspace{5mm}\n \\item \\textbf{POBA-GA}: Chen et al. proposed three parameters to be used to generate different perturbations- different noise point pixel thresholds, number of noise points, and noise point size. The next step is to add this noise to the first adversarial example to generate the next generation of the population. For evaluation purposes, a fitness function $\\phi$ is defined, followed by checking if the adversary has satisfied the termination condition or not. 
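This generate-evaluate-terminate cycle follows a standard genetic-algorithm skeleton, sketched below. The sketch is a toy illustration rather than POBA-GA itself; the quadratic fitness and all names are assumptions:

```python
import random

def evolve_perturbations(fitness, dim=8, pop_size=20, generations=30,
                         mutation_rate=0.2, seed=0):
    """Generic GA skeleton: rank the population by fitness, keep the
    best half (selection), recombine pairs (single-point crossover),
    and occasionally perturb a gene (mutation)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # evaluate and rank
        parents = pop[:pop_size // 2]         # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dim)       # single-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mutation_rate:  # mutation
                child[rng.randrange(dim)] += rng.gauss(0.0, 0.3)
            children.append(child)
        pop = parents + children              # next generation
    return max(pop, key=fitness)

# Toy fitness: prefer perturbation vectors close to the all-ones vector.
best = evolve_perturbations(lambda p: -sum((v - 1.0) ** 2 for v in p))
```

In the actual attack, the fitness would be the function $\phi$ defined above, balancing misclassification confidence against perturbation size.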
Lastly, to generate the next set of the population for optimizing the perturbation, the suggested approach follows the typical operations of a genetic algorithm, i.e., selection, crossover, and mutation.\n The importance of the proposed fitness function is that it helps to evaluate how useful the adversarial input is at a given iteration. Hence, the convergence of the model largely depends on this function. The authors have incorporated two critical qualities of adversarial images into the definition of the proposed function:\n \begin{equation}\n \phi(AS^t_i) = P(AS^t_i) - \frac{\alpha}{\max Z(A^0)}Z(A^t_i)\n \end{equation}\n where the left-hand side represents the fitness of adversarial sample $AS^t_i$ at iteration $t$, $P(AS^t_i)$ is the probability term which accounts for the effectiveness of the attack in getting the sample misclassified, and $Z(A^t_i)$ is a metric originally proposed in this methodology to account for the perturbation introduced. $\alpha$ is used as a parameter to control the balance of the fitness function between attack performance and perturbation size. To reduce the number of queries to the target model, $\alpha$ is initially set to zero and later assigned a value once the attack is successful. The final fitness function is\n \begin{equation}\n \phi (AS^t_i) = \n \begin{cases}\n {p(y_1|AS^t_i) - p(y_0|AS^t_i) - \frac{\alpha}{\max Z(A^{t_0})}Z(A^t_i)} \quad \quad {y_1 \neq y_0}\\\n {p(y_2|AS^t_i) - p(y_0|AS^t_i)}\quad \quad \quad \quad \quad \quad \quad \quad \quad \, \, {y_1 = y_0}\n \end{cases}\n \end{equation}\n \vspace{5mm}\n \item \textbf{Biased Sampling}: The authors in adopt a unique approach to introducing perturbations for creating adversarial samples, known as Perlin noise . This method was originally used to generate procedural texture patterns in computer graphics but can also be used to develop parametrized low-frequency patterns. 
In the original Boundary Attack , an orthogonal perturbation $\eta^k$ is applied along a hyper-spherical surface around the original image. Here, instead, a sample drawn from a Perlin noise distribution is used to introduce low-frequency noise before the Boundary Attack. Usually, a permutation vector $v$ of size 256 is used to parametrize the noise. The noise patterns are sampled in two directions, along the height and width of the image, due to which the patterns become strongly concentrated in the low-frequency directions.\n Image masking is performed to calculate the difference between the pixel values of the original image and the adversarial image:\n \begin{equation}\n M = \abs {X_{adv} - X_{orig}}\n \end{equation}\n This calculation is performed after every step and applied element-wise to the orthogonal perturbation $\eta^k$ that was sampled previously.\n \begin{equation}\n \eta^k_{biased} = M \odot \eta^k\, ;\, \eta^k_{biased} = \frac{\eta^k_{biased}}{\norm{\eta^k_{biased}}}\n \end{equation}\n Hence, only the distortion of pixels with a considerable difference value is increased, while it remains the same for the remaining pixels, effectively reducing the search space.\n \vspace{5mm}\n \item \textbf{$\mathcal{N}$-Attack}: A black-box adversarial attack has been proposed in , which tries to learn a probability density over a small region (an $l_p$ ball) centered around the input such that a sample drawn from the density is likely an adversarial image. Their work is similar to that of Ilyas et al. who employ the loss function $f(x^{\prime}) := -F(x^{\prime})_y$. They argue that Ilyas et al.'s algorithm is inferior as it relies on accurate estimation of gradients, which is not possible when the DNN is non-smooth. 
For estimating gradients $\nabla f(x_t)$ with derivative-free methods, the authors also use \textit{NES} .\n The smooth optimization criterion proposed by the authors is:\n \begin{equation}\n min_{\theta}J(\theta):= \int f(x^{\prime}) \pi_S(x^{\prime}|\theta) dx^{\prime}\n \end{equation}\n where $f(x^{\prime})$ is the C\&W loss , $\pi_S(x^{\prime}|\theta)$ is the probability density in the region S (an $l_p$ ball), i.e., $||x^{\prime}-x||_p \leq \epsilon$, and also $dim(\theta) \ll dim(x)$. The authors reformulated the problem with a change of variables:\n \begin{equation}\n x^{\prime} = proj_S(g(z)),\hspace{0.3cm} z \sim \mathcal{N}(z|\mu,\sigma^2)\n \end{equation}\n where $g:\mathbb{R}^{dim(\mu)} \mapsto \mathbb{R}^{dim(x)}$. The transformation steps involve sampling $z$, i.e., $z \sim \mathcal{N}(\mu,\sigma^2)$; then $g(z)$ is computed as $g(z)= \frac{1}{2}(tanh(g_0(z))+1)$, which maps into $(0,1)$. After this, $\delta^{\prime}=clip_p(g(z)-x)$ for $p = 2$ or $\infty$:\n \begin{equation}\n clip_2(\delta) = \n \begin{cases}\n \delta\tau_2 / ||\delta||_2 & if ||\delta||_2 > \tau_2 \\\n \delta & else\n \end{cases}\n \end{equation}\n \begin{equation}\n clip_{\infty}(\delta) = min(\delta, \tau_\infty)\n \end{equation}\n The updated $J(\theta)$ after the variable transformation is:\n \begin{equation*}\n J(\theta)= \mathbb{E}_{\mathcal{N}(z|\mu,\sigma^2)}f(proj_S(g(z)))\n \end{equation*}\n $\theta=(\mu,\sigma^2)$ are the unknowns. 
$\sigma$ is found using grid search and \textit{NES} \n is used to find $\mu$:\n \begin{equation*}\n \mu_{t+1} \leftarrow \mu_t - \eta \nabla_{\mu}J(\theta)|_{\mu_t}\n \end{equation*}\n \begin{equation}\n \mu_{t+1} \leftarrow \mu_t -\frac{\eta}{b} \sum_{i=1}^{b}f(proj_S(g(z_i)))\nabla_{\mu}log \mathcal{N}(z_i|\mu_t,\sigma^2)\n \end{equation}\n The process is repeated iteratively until arriving at a sampled image $x^{\prime}$ such that $C(x^{\prime}) \neq C(x)$.\n \vspace{5mm}\n \item \textbf{GenAttack}: Alzantot et al. use genetic algorithms, which are population-based gradient-free optimization strategies, to produce an adversarial image. The authors justify this approach by pointing to the overhead produced by excessive querying in \textit{gradient estimation methods}, as well as the added advantage of being able to bypass defenses based on gradient masking and obfuscation. The genetic algorithm is based on the process of natural selection, which involves iteratively choosing a set of \textit{\textbf{P}} candidates that constitute a \textit{generation} in each iteration. The quality of the candidates is assessed using a \textit{fitness} function, and the candidates for the next generation are obtained using \textit{crossover}, i.e., breeding of randomly selected parents to produce a child, and \textit{mutation}, i.e., noise added to the child to reflect the diversity of the population. The proposed fitness function is:\n \begin{equation}\n Fitness(x) = log{f(x)}_t - log{\sum_{j=0,j\neq t}^{j=k}f(x)_j}\n \end{equation}\n Here $f: \mathbb{R}^d \rightarrow [0,1]^K$ is the black-box classifier, which outputs probabilities over the $K$ classes, and $t$ is the target label. The parents for crossover are randomly sampled using $probs$, i.e., the probabilities obtained by applying a softmax to the fitness values. 
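The fitness evaluation and softmax-based parent sampling can be sketched as follows; this is a toy sketch, and the population and classifier oracle are hypothetical stand-ins:

```python
import numpy as np

def fitness(probs, t):
    """GenAttack-style fitness: log-probability of the target class t
    minus the log of the summed probabilities of all other classes."""
    others = np.delete(probs, t)
    return np.log(probs[t]) - np.log(others.sum())

def sample_parents(population, model_probs, t, rng):
    """Sample two parents with probabilities given by a softmax over
    the population's fitness values."""
    fit = np.array([fitness(model_probs(x), t) for x in population])
    p = np.exp(fit - fit.max())   # numerically stable softmax
    p /= p.sum()
    idx = rng.choice(len(population), size=2, p=p)
    return population[idx[0]], population[idx[1]]
```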
The crossover operation involves selecting features of $parent_1$ and $parent_2$ with probabilities $(p,1-p)$ respectively, where $p = \frac{fitness(parent_1)}{fitness(parent_1)+ fitness(parent_2)}$. A child is mutated using the formula $child_{mut} = child + \mathit{Bernoulli}(\rho) * \mathfrak{\textbf{U}}(-\alpha \delta_{max},\alpha\delta_{max})$, where $\alpha$, $\rho$ and $\delta_{max}$ are the mutation range, the mutation probability and the $l_{\infty}$ perturbation bound, respectively.\n \vspace{5mm}\n \item \textbf{SimBA}: The study shown in proposes a straightforward yet efficient approach to generating adversarial examples in a black-box setting. From a set $Q$ of orthonormal vectors, the algorithm picks any single vector $q$, and the modification made to the input image $x$ can be expressed as $x + q\epsilon$ or $x - q\epsilon$. The idea is to add noise to the image in a particular direction and check whether this changes the probability $P(y|x)$, i.e., the probability of the label being $y$ given the input image $x$. Here, $\epsilon$ represents the step size and can be controlled by the adversary. To keep the number of queries to the target model at a minimum, the proposed methodology ensures that, as the algorithm picks vectors $q \in Q$, no two chosen directions cancel each other out.\n Inspired by the work of Guo , the authors suggest adding low-frequency noise to the input image to generate better adversarial examples. 
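Before turning to the frequency domain, the basic SimBA iteration in the pixel basis can be sketched as follows; `model_probs` is a hypothetical stand-in for the black-box classifier's probability oracle:

```python
import numpy as np

def simba_pixel(x, y, model_probs, eps, n_steps, rng=None):
    """SimBA sketch in the pixel basis: for each randomly chosen
    coordinate, try x + eps*q and then x - eps*q, keeping a step only
    if it lowers the probability of the true label y. A random
    permutation of coordinates ensures no two directions cancel out."""
    rng = rng or np.random.default_rng(0)
    x_adv = x.astype(np.float64).copy()
    p_best = model_probs(x_adv)[y]
    for d in rng.permutation(x.size)[:n_steps]:
        q = np.zeros(x.size)
        q[d] = 1.0
        q = q.reshape(x.shape)
        for sign in (1.0, -1.0):
            p = model_probs(x_adv + sign * eps * q)[y]
            if p < p_best:              # keep the step that hurts label y
                x_adv, p_best = x_adv + sign * eps * q, p
                break
    return x_adv
```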
They employ the discrete cosine transform (DCT) to map signals in a 2D image space $R^{dxd}$ to frequency coefficients belongs to cosine wave function magnitudes.\n\\end{itemize}", "id": "2d3cd532-598b-4c61-a71e-e3761ce47eff", "level": "subsubsection", "origin_cites_number": 13, "parent_id": "8130e0d5-5c03-4a93-b9b5-de2233c7b5c1", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Black-Box Attacks in Computer Vision" ], [ "subsection", "Attack Techniques" ], [ "subsubsection", "Local Search" ] ], "subsections": [], "title": "Local Search" }, { "cite_extract_rate": 0.75, "cites": [ 6144, 892, 917 ], "content": "\\begin{itemize}\n \\item \\textbf{Combinatorial Optimization}: In the approach presented in , the optimization problem has been treated as a discrete surrogate problem. The authors support this my mentioning that first-order attacks like FGSM and PGD have a good performance due to their constrained nature and also due to querying the gradient of the loss concerning the input.\n Using the Taylor Expansion, the loss and the optimization problem becomes:\n \\begin{equation*}\n l(x_{adv},y) \\approx l(x,y) + (x_{adv}-x)^T \\nabla_x l(x,y)\n \\end{equation*}\n \\begin{equation}\\label{parsi_eq}\n maximize_{||x_{adv}-x||_{\\infty} \\leq \\epsilon} \\Rightarrow maximize_{x_{adv}} {x_{adv}}^T \\nabla_x\n l(x,y) \\hspace{0.3cm} subject \\hspace{0.1cm} to \\hspace{0.1cm} - \\epsilon \\textbf{1} \\preceq x_{adv}-x \\preceq \\epsilon \\textbf{1} \n \\end{equation}\n Due to the bounds present in the equation \\ref{parsi_eq}, the authors concluded that the solution of the above linear program would be obtained at the vertex of the $l_{\\infty}$ ball. 
So the reformulated optimization problem becomes:\n \begin{equation}\n maximize_{\textit{S} \subseteq \textit{V}} \left\{ F(\textit{S}) \triangleq f \left(x + \epsilon \sum_{i \in \textit{S}} e_i -\epsilon \sum_{i \notin \textit{S}} e_i \right) \right\}\n \end{equation}\n where $f(x)=l(x,y_{gt})$ for untargeted attacks and $f(x) = -l(x,y_{target})$ for targeted attacks. They then exploit the property of submodularity: a function $F:2^{\textit{V}} \rightarrow \mathbb{R}$ is submodular if for every $\textit{A} \subseteq \textit{B} \subseteq \textit{V}$ and $e \in \textit{V} \setminus \textit{B}$: $\Delta (e|\textit{A}) \geq \Delta(e| \textit{B}) $. In other words, submodular functions exhibit a diminishing-returns property as the set size increases. With submodularity and greedy-style algorithms, it is possible to obtain a $(1-\frac{1}{e})$-approximation of the global optimum for monotone submodular functions and a $\frac{1}{3}$-approximation for non-monotone submodular functions. The authors use a modified version of the local search algorithm by Feige et al. 
which alternates between greedily inserting the element while the marginal gain is strictly positive $\\left( \\Delta\\left(e|\\textit{S} > 0 \\right) \\right) $ and removing the element while the marginal gain is also strictly positive.\n\\end{itemize}", "id": "8e341b0f-b2a6-437d-972f-aef20af44b81", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "8130e0d5-5c03-4a93-b9b5-de2233c7b5c1", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Black-Box Attacks in Computer Vision" ], [ "subsection", "Attack Techniques" ], [ "subsubsection", "Combinatorics" ] ], "subsections": [], "title": "Combinatorics" }, { "cite_extract_rate": 0.8571428571428571, "cites": [ 909, 8103, 6143, 6139, 6144, 898, 913, 6142, 890, 907, 8102, 6141 ], "content": "The attack methods proposed have been tested quantitatively on three primary datasets, as mentioned earlier, i.e., MNIST, CIFAR-10, and ImageNet. Therefore, we present a qualitative comparison between the results of these techniques based. The results can be first divided broadly into two categories: Targeted \\& Untargeted attacks for each of the three datasets. In each table, we mention the model used for generating the adversarial attack, the average number of queries made to the targeted model, average $L_2$ distance between the original image and the perturbed image, attack success rate and lastly some specific parameters mentioned in the experiments of the respective study. The results for different comparative parameters are obtained from standard performance metrics as produced by the individual research studies.\n\\begin{table}[htbp]\n \\centering\n \\begin{adjustbox}{width = \\textwidth, center}\n \\begin{tabular}{|c|c|c|c|c|c|}\n \\hline\n \\textbf{Method} & \\textbf{Model} & \\textbf{Average \\# Queries} & \\textbf{Average $L_2$} & \\textbf{Attack Success Rate} & \\textbf{Specific Parameters Used} \\\\\n \\hline\n Chen et al. 
& InceptionV3 & - & 1.19916 & 88.90\\% & Step Size = 0.002, \\# Iterations = 1500 \\\\\n \\hline\n Brendel et al. & ResNet-50 & 192102 & 4.876 & 100\\% & - \\\\\n \\hline\n Cheng et al. & ResNet-50 & 145176 & 4.934 & 100\\% & - \\\\\n \\hline\n Moon et al. & InceptionV3 & 722 & - & 98.50\\% & $L_{\\infty}$ setting, $\\epsilon$ = 0.05, Max \\# Queries = 10,000 \\\\\n \\hline\n Guo et al. & InceptionV3 & 1283 & 3.06 & 97.80\\% & Step Size = 0.2, \\# Iterations = 10,000 \\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Comparison of methods used for generating an untargeted attack on the ImageNet data set.}\n \\label{imagenet_untargeted}\n\\end{table}\nFrom Table \\ref{imagenet_untargeted}, we can see that the attack success rate for and is 100\\%. However, the technique proposed by takes the least number of queries to produce a successful black-box attack. \n\\begin{table}[htbp]\n \\centering\n \\begin{adjustbox}{width = \\textwidth, center}\n \\begin{tabular}{|c|c|c|c|c|c|}\n \\hline\n \\textbf{Method} & \\textbf{Model} & \\textbf{Average \\# Queries} & \\textbf{Average $L_2$} & \\textbf{Attack Success Rate} & \\textbf{Specific Parameters Used} \\\\\n \\hline\n Chen et al. & InceptionV3 & - & 3.425 & - & Batch Size = 128, \\# Iterations = 20,000 \\\\\n \\hline\n Ilyas et al. & InceptionV3 & 11550 & - & 99.20\\% & $L_{\\infty}$ setting, $\\epsilon$ = 0.05 \\\\\n \\hline\n Du et al. & InceptionV3 & 1701 & 24.323 & 100\\% & Image size-64x64, Momentum factor = 0.4 \\\\\n \\hline\n Tu et al. & InceptionV3 & 13525 & 0.000136 & 100\\% & Regularization coefficient = 10 \\\\\n \\hline\n Chen et al. & InceptionV3 & 3573 & 4.559451 & 95\\% & \\# Iterations = 400 \\\\\n & ResNet-50 & 3614 & 3.0575142 & 98\\% & \\# Iterations = 400 \\\\\n & VGG19 & 3786 & 4.023045 & 96\\% & \\# Iterations = 400 \\\\\n \\hline\n Brunner et al. & InceptionV3 & - & - & 85\\% & Max \\# Queries = 15,000, $L_2$ setting, $\\epsilon$ = 25.89 \\\\\n \\hline\n Moon et al. 
& InceptionV3 & 7485 & - & 99.90\\% & $L_{\\infty}$ setting, $\\epsilon$ = 0.05, Max \\# Queries = 100,000 \\\\\n \\hline\n Ilyas et al. & InceptionV3 & 1858 & - & 84.50\\% & Max \\# Queries = 10,000, $L_2$ setting \\\\\n & InceptionV3 & 1117 & - & 95.40\\% & Max \\# Queries = 10,000, $L_{\\infty}$ setting \\\\\n & ResNet-50 & 993 & - & 90.30\\% & Max \\# Queries = 10,000, $L_2$ setting \\\\\n & ResNet-50 & 722 & - & 96.60\\% & Max \\# Queries = 10,000, $L_{\\infty}$ setting \\\\\n & VGG16 & 594 & - & 82.80\\% & Max \\# Queries = 10,000, $L_2$ setting \\\\\n & VGG16 & 370 & - & 91.60\\% & Max \\# Queries = 10,000, $L_{\\infty}$ setting \\\\\n \\hline\n Alzantot et al. & InceptionV3 & 11081 & 61.68669 & 100\\% & Max \\# Queries = 1,000,000, $L_{\\infty}$ setting, $\\epsilon$ = 0.05 \\\\\n \\hline\n Guo et al. & InceptionV3 & 7899 & 9.53 & 100\\% & Step Size = 0.2, \\# Iterations = 30,000 \\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Comparison of methods used for generating a targeted attack on the ImageNet data set.}\n \\label{imagenet_targeted}\n\\end{table}\nA lot of models such as Inception-V3, ResNet-50, VGG16, and VGG19 have been used to experiment with targeted attacks on ImageNet data set, and this can be seen in Table \\ref{imagenet_targeted}. The attacks mentioned in and suffer significantly in terms of the average $L_2$ distance parameter as compared to the others. The authors in used three different models and experimented with their attack by keeping a limit on the max number of queries at 10,000 in $L_2$ and $L_{\\infty}$ settings. 
The number of queries was less than 2000 and, in most cases, below 1000, whereas the attack success rate ranged from 82.80\\% using the VGG16 model to 96.60\\% in ResNet-50.\n\\begin{table}[htbp]\n \\centering\n \\begin{adjustbox}{width = \\textwidth, center}\n \\begin{tabular}{|c|c|c|c|c|c|}\n \\hline\n \\textbf{Method} & \\textbf{Model} & \\textbf{Average \\# Queries} & \\textbf{Average $L_2$} & \\textbf{Attack Success Rate} & \\textbf{Specific Parameters Used} \\\\\n \\hline\n Chen et al. & - & - & 1.4955 & 100\\% & Batch Size = 128, Step Size = 0.01, \\# Iterations = 3000 \\\\\n \\hline\n Brendel et al. & ResNet-50 & 131972 & 1.11545 & 100\\% & - \\\\\n \\hline\n Cheng et al. & ResNet-50 & 67036 & 1.0827 & 100\\% & - \\\\\n \\hline\n Bhagoji et al. & - & - & 2.7 & 99.7 & Batch Size = 100 \\\\\n \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Comparison of methods used for generating an untargeted attack on the MNIST data set.}\n \\label{mnist_untargeted}\n\\end{table}\nIn Table \\ref{mnist_untargeted}, three out of four methods achieve a 100\\% success rate, whereas only the attack proposed by surpasses the others in the remaining two parameters.\n\\begin{table}[htbp]\n \\centering\n \\begin{adjustbox}{width = \\textwidth, center}\n \\begin{tabular}{|c|c|c|c|c|c|}\n \\hline\n \\textbf{Method} & \\textbf{Model} & \\textbf{Average \\# Queries} & \\textbf{Average $L_2$} & \\textbf{Attack Success Rate} & \\textbf{Specific Parameters Used} \\\\\n \\hline\n Chen et al. & - & - & 1.9871 & 98.90\\% & Batch Size = 128, Step Size = 0.01, \\# Iterations = 3000 \\\\\n \\hline\n Brendel et al. & ResNet-50 & 93543 & 2.0626 & 100\\% & - \\\\\n \\hline\n Cheng et al. & ResNet-50 & 59094 & 1.7793 & 100\\% & - \\\\\n \\hline\n Bhagoji et al. & Self-designed, accuracy = 99.2\\% & 62720 & - & 100\\% & Batch Size = 100, $L_{\\infty}$ setting, $\\epsilon$ = 0.3 \\\\\n \\hline\n Tu et al. 
& C\\&W & 2428 & 3.55936 & 100\\% & Regularization coefficient = 0.1 \\\\\n & C\\&W & 730 & 3.23792 & 100\\% & Regularization coefficient = 1 \\\\\n & C\\&W & 510 & 3.66128 & 100\\% & Regularization coefficient = 10 \\\\\n \\hline\n Chen et al. & - & 423 & 2.352 & 100\\% & \\# Iterations = 100 \\\\\n \\hline\n Alzantot et al. & - & 996 & - & 100\\% & Step Size = 1.0, Max \\# Queries = 100,000 \\\\\n \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Comparison of methods used for generating a targeted attack on the MNIST data set.}\n \\label{mnist_targeted}\n\\end{table}\nAlmost all the methods in Table \\ref{mnist_targeted} show a 100\\% attack success rate. The attacks proposed in achieve an attack in less than 1000 queries.\n\\begin{table}[h!]\n \\centering\n \\begin{adjustbox}{width = \\textwidth, center}\n \\begin{tabular}{|c|c|c|c|c|c|}\n \\hline\n \\textbf{Method} & \\textbf{Model} & \\textbf{Average \\# Queries} & \\textbf{Average $L_2$} & \\textbf{Attack Success Rate} & \\textbf{Specific Parameters Used} \\\\\n \\hline\n Chen et al. & - & - & 0.19973 & 100\\% & Batch Size = 128, Step Size = 0.01, \\# Iterations = 1000 \\\\\n \\hline\n Brendel et al. & ResNet-50 & 384204 & 4.8758 & - & - \\\\\n \\hline\n Cheng et al. & ResNet-50 & 145176 & 4.9339 & - & - \\\\\n \\hline\n Bhagoji et al. & ResNet-32 & - & - & 100\\% & Batch Size = 100 \\\\\n \\hline\n Moon et al. & ResNet-32 & 1261 & - & 48\\% & Max \\# Queries = 20,000 \\\\\n \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Comparison of methods used for generating an untargeted attack on the CIFAR-10 data set.}\n \\label{cifar_untargeted}\n\\end{table}\nIn Table \\ref{cifar_untargeted}, the black-box attack proposed by achieves an exceptional performance by getting a 100\\% success rate while getting an average $L_2$ distance of 0.19973. uses a ResNet-32 model to achieve an average of 1261 queries. 
However, the attack success rate is as low as 48\\%.\n\\begin{table}[htbp]\n \\centering\n \\begin{adjustbox}{width = \\textwidth, center}\n \\begin{tabular}{|c|c|c|c|c|c|}\n \\hline\n \\textbf{Method} & \\textbf{Model} & \\textbf{Average \\# Queries} & \\textbf{Average $L_2$} & \\textbf{Attack Success Rate} & \\textbf{Specific Parameters Used} \\\\\n \\hline\n Chen et al. & same as 14 & - & 0.39879 & 96.80\\% & Batch Size = 128, Step Size = 0.01, \\# Iterations = 1000 \\\\\n \\hline\n Brendel et al. & ResNet-50 & 170972 & 0.2395 & - & - \\\\\n \\hline\n Cheng et al. & ResNet-50 & 130020 & 0.2476 & - & - \\\\\n \\hline\n Bhagoji et al. & ResNet-32 & 61440 & - & 100\\% & Batch Size = 100, $L_{\\infty}$ setting, $\\epsilon$ = 8 \\\\\n \\hline\n Tu et al. & C\\&W & 1524 & 3.6864 & 100\\% & Regularization coefficient = 0.1 \\\\\n & C\\&W & 332 & 0.00101 & 100\\% & Regularization coefficient = 1 \\\\\n & C\\&W & 259 & 0.00115 & 100\\% & Regularization coefficient = 10 \\\\\n \\hline\n Chen et al. & same as 4 & 381 & 0.2088 & 100\\% & \\# Iterations = 100 \\\\\n \\hline\n Alzantot et al. & & 804 & - & 96.50\\% & Step Size = 1.0, Max \\# Queries = 100,000 \\\\\n \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Comparison of methods used for generating a targeted attack on the CIFAR-10 data set.}\n \\label{cifar_targeted}\n\\end{table}\nAs shown in Table \\ref{cifar_targeted}, again the models proposed in achieve a high attack success rate percentage while using less than 1000 queries on the targeted model. 
Most of the attacks shown in this table perform well in terms of the average $L_2$ distance parameter.", "id": "55ddee66-aa94-4694-90cf-e015f87b78e6", "level": "subsection", "origin_cites_number": 14, "parent_id": "afd4d474-d8b2-495b-a07f-4796fce61f81", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Black-Box Attacks in Computer Vision" ], [ "subsection", "Comparative Analysis of Black Box attack techniques" ] ], "subsections": [], "title": "Comparative Analysis of Black Box attack techniques" }, { "cite_extract_rate": 0.8571428571428571, "cites": [ 8396, 917, 6151, 932, 7311, 930, 3865, 950, 6149, 6148, 934, 941, 6150, 924, 952, 945, 953, 905 ], "content": "In this section, we provide a detailed discussion of recent works in the defenses proposed against adversarial attacks. Conventional paradigms in defenses follow gradient masking, input transformation, adversarial training, adversarial detection-based approaches. The following subsections discuss each of these paradigms, with key examples from each concept. \n\\begin{table}[]\n\\centering\n\\begin{tabular}{|ccc|c|}\n\\hline\n\\textbf{Month} & \\textbf{Year} & \\textbf{Method} & \\textbf{Proposed Approach} \\\\ \\hline\nMar & 2016 & Papernot et al. & Distillation \\\\ \\hline\nMar & 2017 & Hosseini et al. & NULL Labeling \\\\\nSep & 2017 & Meng et al. & MagNet \\\\\nDec & 2017 & Xu et al. & Feature Squeezing \\\\ \\hline\nJan & 2018 & Guo et al. & Input Transformations \\\\\nFeb & 2018 & Buckman et al. & Thermometer Encoding \\\\\nFeb & 2018 & Samangouei et al. & Defense-GAN \\\\\nFeb & 2018 & Xie et al. & Randomization \\\\\nMar & 2018 & Dhillon et al. & SAP \\\\ \nMar & 2018 & Kannan et al. & ALP \\\\\nMar & 2018 & Prakash et al. & Pixel Deflection \\\\\nApr & 2018 & Shaham et al. & Basis Functions Transformation \\\\\nMay & 2018 & Liao et al. & Guided Denoiser \\\\\nMay & 2018 & Song et al. & Pixel Defend \\\\\nJun & 2018 & Wong et al. 
& Convex Adversarial Polytope \\\nSep & 2018 & Tram\`{e}r et al. & Ensemble Adversarial Training \\\nSep & 2018 & Madry et al. & M-PGD \\\nNov & 2018 & Rakin et al. & PNI \\ \hline\nMar & 2019 & Xie et al. & Feature Denoising \\\nJun & 2019 & Cohen et al. & Randomized Smoothing \\\nJul & 2019 & Mustafa et al. & Hidden Space Restriction \\ \hline\n\end{tabular}\n\caption{A comparison of works that have focused on defenses against an adversarial setting.}\n\label{chronological table defenses}\n\end{table}\n\begin{figure}\n \begin{center}\n \begin{forest}\n for tree={\n myleaf/.style={label={[align=left]below:{\strut#1}}},\n s sep=0.5cm\n }\n [Defense Mechanisms against Adversarial Attacks,rectangle,rounded corners,draw\n [Gradient Masking,rectangle,rounded corners,draw,align=center,\n myleaf={$\bullet$ Papernot et al. \\\n $\bullet$ Xie et al. \\\n $\bullet$ Dhillon et al. }\n ]\n [Input Transformation,rectangle,rounded corners,draw,align=center,\n myleaf={$\bullet$ Guo et al. \\\n $\bullet$ Buckman et al. \\\n $\bullet$ Samangouei et al. \\\n $\bullet$ Prakash et al. \\\n $\bullet$ Shaham et al. \\\n $\bullet$ Liao et al. \\\n $\bullet$ Song et al. }\n ]\n [Adversarial Training,rectangle,rounded corners,draw,align=center,\n myleaf={$\bullet$ Hosseini et al. \\\n $\bullet$ Tram\`{e}r et al. \\\n $\bullet$ Kannan et al. \\\n $\bullet$ Rakin et al. \\\n $\bullet$ Xie et al. \\\n $\bullet$ Cohen et al. \\\n $\bullet$ Mustafa et al. \\\n $\bullet$ Madry et al. }\n ]\n [Adversarial Detection,rectangle,rounded corners,draw,align=center,\n myleaf={$\bullet$ Meng et al. \\\n $\bullet$ Xu et al. \\\n $\bullet$ Wong et al. 
}\n ]\n ]\n \\node[above=30pt,align=center,anchor=center] {Classification of prior work on defense strategies proposed against adversarial attacks.};\n \\end{forest}\n \\end{center}\n \\label{defense_classification}\n \\end{figure}", "id": "6d3245fb-c6c8-446e-a1f1-2bbe4d6c6052", "level": "section", "origin_cites_number": 21, "parent_id": "148419f6-0bed-487d-a5df-2290d0d6ed0e", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Defense against Adversarial attacks" ] ], "subsections": [ "8482b260-6a7c-40d7-ab1c-e3d3b878aef1", "34efdf8c-0915-48a4-b1bd-ad00da78be48" ], "title": "Defense against Adversarial attacks" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "8482b260-6a7c-40d7-ab1c-e3d3b878aef1", "level": "subsection", "origin_cites_number": 0, "parent_id": "6d3245fb-c6c8-446e-a1f1-2bbe4d6c6052", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Defense against Adversarial attacks" ], [ "subsection", "Defense Techniques" ] ], "subsections": [ "2d9097ba-5c50-4d3e-95eb-19cf23f3b06c", "45c9f510-42b5-438b-a116-7cd4b70809f6", "4ffeaa93-1441-45bd-8574-a86beec2a462", "f2ab1a86-5146-4d19-8b3f-1c0db39a8c72" ], "title": "Defense Techniques" }, { "cite_extract_rate": 0.5, "cites": [ 945, 681, 932 ], "content": "\\begin{itemize}\n \\item \\textbf{Distillation}: The defense mechanism proposed by the authors is based on $\\textit{distillation}$ introduced by Hinton et al. . Distillation, in an easy sense, is used to transfer knowledge from larger to smaller DNN architectures to reduce computational complexity. 
Given a training data set $\left\{ (X, Y(X)):X \in \mathcal{X} \right\}$, where $X$ are the images and $Y(X)$ are one-hot vectors (e.g., \{0,0,1,...,0\}) over the classes with a 1 at the correct class, a deep neural network $F$ is trained with a softmax output layer at temperature $T$, which is used to control the knowledge transfer. $F(X)$ is the probability vector over all the classes. A second training set $\left\{ (X,F(X)):X \in \mathcal{X}\right\}$ is then constructed, in which the $F(X)$ obtained from the first DNN are used as \textit{soft labels}. A second model, $F^d$, called the \textit{distilled} model, is trained on this dataset. The distilled model is much smoother, and the amplitude of the network gradients exploited by adversarial samples is thereby reduced.\n \begin{comment}\n \begin{figure}[h]\n \centering\n \captionsetup{width= 0.8\linewidth}\n \centerline{\includegraphics[width = 0.7\linewidth]{Images/Distill.png}}\n \caption{\textbf{Overview of the proposed Mechanism:} A DNN $F$ is trained using the initial training set. $F(X)$ obtained from this DNN is the softmax probabilities evaluated at temperature $T$. This $F(X)$ is used as \textit{soft labels} for training a second DNN $F^d$.}\n \end{figure}\n \end{comment}\n \item \textbf{Randomization}: The goal of this paper is to alleviate adversarial attacks by introducing two random layers at inference time: 1) \textit{random resizing} and 2) \textit{random padding}. These randomization effects help to defend against both single-step and iterative gradient-based attacks. The random resizing layer converts an image of size $W \times H \times 3$ into a new image of size $W' \times H' \times 3$ such that $|W'-W|$ and $|H'-H|$ are within a reasonably small range. The random padding layer pads zeros around the resized image in a random manner. 
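A minimal numpy sketch of these two randomized layers follows; nearest-neighbour resizing is used for brevity, and all names and sizes are ours, chosen for illustration:

```python
import numpy as np

def random_resize_pad(img, out_size, rng=None):
    """Sketch of the two randomization layers: nearest-neighbour resize
    to a random size no larger than out_size, then zero-pad at a random
    offset onto a fixed out_size x out_size canvas."""
    rng = rng or np.random.default_rng(0)
    h, w, _ = img.shape
    nh = rng.integers(h, out_size + 1)        # random resize target height
    nw = rng.integers(w, out_size + 1)        # random resize target width
    rows = np.arange(nh) * h // nh            # nearest-neighbour row indices
    cols = np.arange(nw) * w // nw            # nearest-neighbour col indices
    resized = img[rows][:, cols]
    top = rng.integers(0, out_size - nh + 1)  # random padding offsets
    left = rng.integers(0, out_size - nw + 1)
    out = np.zeros((out_size, out_size, img.shape[2]), dtype=img.dtype)
    out[top:top + nh, left:left + nw] = resized
    return out
```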
Suppose that after random padding the size of the image is $W'' \times H'' \times 3$; then there are $(W''-W'+1) \times (H''-H'+1)$ different possible patterns.\\\n \begin{comment}\n \begin{figure}[h]\n \centering\n \includegraphics[width = 0.5\linewidth]{New_Images/adv_effects_mitigation.png}\n \caption{Pipeline of the randomization based defense mechanism.}\n \label{fig:adv_eff_random}\n \end{figure}\n \end{comment}\n \item \textbf{SAP}: The authors propose \textit{Stochastic Activation Pruning} as a method for guarding pre-trained networks against adversarial examples. During the forward pass, they stochastically prune a subset of activations in each layer. After pruning, they scale up the surviving activations to normalize the dynamic range of inputs to the subsequent layers. Drawing inspiration from game theory , they treat the problem as a minimax zero-sum game between the adversary and the model:\n \begin{equation}\n \pi^{*},\rho^{*} := arg \hspace{0.1cm}min_{\pi} max_{\rho} \mathbb{E}_{p \sim \pi,r \sim \rho}[J(M_p(\theta),x+r,y)], \n \end{equation}\n where $\rho$ is the adversary's policy, from which a perturbation $r \sim \rho$ is sampled, and $\pi$ is the defender's policy, from which an instance $p \sim \pi$ with corresponding model parameters $M_p(\theta)$ is sampled. The adversary's goal is to maximize the defender's loss under strategy $\rho$, and the defender's goal is to minimize the loss by changing the model parameters $\theta$ to $M_p(\theta)$ under strategy $\pi$. Now, given the $i$'th layer activation map $h^i \in \mathbb{R}^{a^i}$, the probability of sampling the $j$'th activation with value $(h^i)_j$ is given by:\n \begin{equation*}\n p_{j}^{i} = \frac{|(h^i)_j|}{\sum_{k=1}^{a^i}|(h^i)_k|}\n \end{equation*}\n If an activation is sampled, it is scaled up by the inverse of the probability of it being sampled at least once over all draws. If not, the activation is set to 0. In this way, inverse propensity scoring preserves each activation in expectation. 
Under an instance $p$ of policy $\\pi$, $r_{p}^{i}$ sample are drawn giving a new activation map $M_p(h^i)$:\n \\begin{equation*}\n M_p(h^i) = h^i \\odot m_{p}^{i} \\hspace{0.5cm} (m_{p}^{i})_j = \\frac{\\mathbb{I}((h^i)_j)}{1-(1-p_{j}^{i})^{r_{p}^{i}}}\n \\end{equation*}\n This scheme is similar to dropout ; however, it differs from it in its sampling technique: SAP is more likely to sample activations that are high in absolute value whereas dropout samples each activation with the same probability.\n \\end{itemize}", "id": "2d9097ba-5c50-4d3e-95eb-19cf23f3b06c", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "8482b260-6a7c-40d7-ab1c-e3d3b878aef1", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Defense against Adversarial attacks" ], [ "subsection", "Defense Techniques" ], [ "subsubsection", "Gradient Masking" ] ], "subsections": [], "title": "Gradient Masking" }, { "cite_extract_rate": 0.65, "cites": [ 8396, 8104, 917, 930, 950, 934, 64, 941, 6150, 507, 952, 892 ], "content": "\\begin{itemize}\n \\item \\textbf{Input Transformations}: The authors tested several input transformations-1) Image Cropping and Rescaling 2) Bit Depth Reduction 3) JPEG Image Compression and Decompression 4) Total Variance Minimization 5) Image Quilting and their corresponding effects in withstanding different adversarial attacks. Specifically, \\textit{total variance minimization} and \\textit{image quilting} were effective defenses when the model is trained on transformed images. 
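As a toy illustration of the simpler transformations in this list, bit depth reduction can be sketched in a few lines of numpy; the function and parameter names are ours:

```python
import numpy as np

def reduce_bit_depth(img, bits):
    """Quantize pixel intensities in [0, 1] to 2**bits levels, a simple
    input transformation that removes small adversarial perturbations."""
    levels = 2 ** bits - 1
    return np.round(np.asarray(img) * levels) / levels
```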
Their strength lies in their \\textit{non-differentiable nature} and \\textit{randomness}, which makes it difficult for the adversary to circumvent them.\n Total Variation Minimization involves synthesizing an image $\\mathbf{z}$ from the adversarial input $x$ by solving:\n \\begin{equation}\n min_{z}||(1-X) \\odot (z-x) ||_{2} + \\lambda_{TV}.TV_p(\\mathbf{z}) \n \\end{equation}\n This approach selects a small set of pixels and reconstructs an image $\\mathbf{z}$ that is consistent with the selected pixels. The pixels are selected by sampling a Bernoulli Random Variable $X(i,j,k)$ at each pixel position $(i,j,k)$, and the pixel is set when $X(i,j,k)=1$. $\\odot$ is the element-wise multiplication.\n \\begin{equation}\n TV_p(\\mathbf{z})= \\sum_{k=1}^K \\left[ \\sum_{i=2}^N ||\\mathbf{z}(i,:,k)-\\mathbf{z}(i-1,:,k)||_p + \\sum_{j=2}^N ||\\mathbf{z}(:,j,k)- \\mathbf{z}(:,j-1,k)||_p \\right]\n \\end{equation}\n $TV_p(z)$ is the total $l_p$ variation of z. The total variation (TV) measures the fine-scale variation in z, thereby encouraging the removal of adversarial perturbation during TV minimization.\\\\\n Image Quilting is a technique that involves synthesizing images by combining images obtained from a database. The algorithm places image patches for a predefined set of grid points and computes minimum graph cuts in overlapping boundary regions to remove edge artifacts. \\newline\n \\begin{comment}\n \\begin{figure}[h]\n \\centering\n \\captionsetup{width= 0.8\\linewidth}\n \\centerline{\\includegraphics[width = 0.5\\linewidth]{Images/input_trans.png}}\n \\caption{Overview of \\textit{Total Variation Minimization} and \\textit{Image Quilting} applied to both original and adversarial image.}\n \\end{figure}\n \\end{comment}\n \\item \\textbf{Thermometer Encoding}: Buckman et al. proposes an \\textit{input discretization} method to break the linear extrapolation behaviour of machine learning models. Goodfellow et al. 
provided evidence that several network architectures are vulnerable to adversarial examples as the loss functions of these networks tend to be highly linear in their inputs. Though neural networks, in principle, can represent highly non-linear functions, networks trained via SGD tend to converge to linear solutions. One hypothesis explaining this states that the nonlinearities used in networks are piece-wise linear. Instead of using highly complex non-linear functions that are difficult to train and generalize, the authors have specified a simple non-differentiable and non-linear transformation (\\textit{discretization}). Two discretization methods have been specified: 1) \\textbf{One Hot Encoding}: for a pixel $\\mathit{i} \\in \\{1,2,...n \\}$:\n \\begin{equation}\n f_{onehot}(x_i) = \\chi(b(x_i)) \n \\end{equation}\n where $b(\\theta)$ is the quantization function. For $b_i= \\frac{i}{k}$, $b(\\theta)$ is the smallest index $\\alpha \\in \\{1,2,...k \\}$ such that $\\theta \\leq b_{\\alpha}$. $\\chi(j)$ is the one-hot vector of $j$, i.e., $\\chi(j) \\in \\mathbb{R}^k$ such that:\n \\begin{equation*}\n \\chi(j)_l =\n \\begin{cases}\n & 1 \\hspace{0.3cm} if \\hspace{0.1cm} l=j \\\\\n & 0 \\hspace{0.3cm} otherwise\n \\end{cases}\n \\end{equation*}\n So for an input pixel $x_i$, $b(.)$ obtains the smallest index $\\alpha$ such that $x_i \\leq b_{\\alpha}$. $\\chi(.)$ computes the one-hot vector spread across the length of the input dimension and sets 1 at index $\\alpha$ and 0 everywhere else.\n 2) \\textbf{Thermometer Encoding}: For a pixel $i \\in \\{1,2,...k \\}$:\n \\begin{equation}\n f_{therm}(x_i) = \\tau(b(x_i)) = \\mathbb{C}(f_{onehot}(x_i))\n \\end{equation}\n where $\\mathbb{C}$ is the cumulative sum function, $\\mathbb{C}(c)_l = \\sum_{j=0}^{l} c_j$. 
$\\tau(j)$ is the thermometer vector, i.e., $\\tau(j) \\in \\mathbb{R}^k$ such that:\n \\begin{equation*}\n \\tau(j)_l = \n \\begin{cases}\n & 1 \\hspace{0.3cm} if\\hspace{0.1cm} l \\geq j \\\\\n & 0 \\hspace{0.3cm} otherwise\n \\end{cases}\n \\end{equation*}\n $\\tau(j)$ computes the thermometer vector, which is a one-hot vector with consecutive ones from the index $\\alpha$ (obtained using the $b(.)$ function) onwards. \\newline\n \\item \\textbf{Defense-GAN}: Samangouei et al. proposed a defense mechanism based on \\textit{Wasserstein-GAN} . A GAN is a model in which a min-max game is played between two adversaries: \\textit{a generator (G)} and \\textit{a discriminator (D)}. The \\textit{W-GAN} is similar to \\textit{GAN}, but instead of the min-max loss, the Wasserstein loss is used for learning the distribution of unperturbed images. At inference time, it finds an output close to the given input image that does not contain adversarial changes, i.e., for an input image $x$ we obtain $z^*$ such that:\n \\begin{equation}\n z^* \\hspace{0.1cm} = \\hspace{0.1cm} arg \\hspace{0.1cm} min_z ||G(z)-x||_{2}^{2}\n \\end{equation}\n The above minimization problem is solved by doing L steps of Gradient Descent on R different random initializations of z. Then $G(z^*)$ is given as the input to the classifier. The motivation of the above mechanism is derived from the global optimality of the \\textit{GAN} min-max loss when $p_g=p_{data}$, i.e., the probability that the image is sampled from the data distribution is equal to that of the generator. If \\textit{G} and \\textit{D} have enough capacity to represent the data, and the training algorithm is such that $p_g$ converges to $p_{data}$, then:\n \\begin{equation}\n \\mathbb{E}_{x \\sim p_{data}} \\left[ min_z ||G_t(z)-x||_2\\right] \\rightarrow 0 \n \\end{equation}\n where $\\mathbb{E}$ is the expected reconstruction loss, which after $t$ steps of training converges to 0, thereby indicating minuscule performance loss on clean examples. 
This minimization of reconstruction loss helps in reducing the adversarial noise. \\newline\n \\begin{comment}\n \\begin{figure}[h]\n \\centering\n \\includegraphics[width = 0.6\\linewidth]{New_Images/defensegan-model.png}\n \\caption{Defense-GAN Mechanism}\n \\label{fig:defgan_model}\n \\end{figure}\n \\end{comment}\n \\begin{comment}\n \\begin{figure}[h]\n \\centering\n \\includegraphics[width = 0.6\\linewidth]{New_Images/defensegan-grad-descent.png}\n \\caption{L steps of Gradient Descent}\n \\label{fig:defgan_gd}\n \\end{figure}\n \\end{comment}\n \\item \\textbf{Pixel Deflection}: The authors have proposed two methods for defending against adversarial attacks: 1) Pixel Deflection randomly samples pixels in an input image and replaces each with another randomly sampled pixel from its square neighborhood. Intuitively, Prakash et al. show that adversarial attacks tend to add perturbations in the entire plane of the image, disregarding the location of the object. The background pixels are not that useful in the classification of the object. By \\textit{Pixel Deflection}, certain pixels are dropped, preserving enough background for classification and mitigating the impact of adversarial attacks. The probability for a pixel to be dropped is inversely proportional to the probability of that pixel being part of an object, i.e.,\n \\begin{equation*}\n Pr_{u}(P_i) \\propto \\frac{1}{Pr_{o}(P_i)}\n \\end{equation*}\n 2) Wavelet Denoising is used to reduce noise introduced by both pixel deflection and adversarial attacks. Prior works illustrate certain regularities in wavelet responses of natural images that can be used to denoise images. The authors' method uses the \\textit{Discrete Wavelet Transform}, which converts the image signal into orthonormal wavelets. These wavelets form the basis for an image's space, orientation, and scale, etc. \\newline\n \\item \\textbf{Basis Function Transformations}: Shaham et al. 
augments images at inference time using basis functions. The transformations smoothen the images thereby reducing adversarial noise. Basis functions are known functions (usually polynomial) on which linear regression is performed in order to model a non-linear function. The basis functions used in this paper include: 1) \\textit{Low Pass Filtering} 2) \\textit{Principal Component Analysis} 3) \\textit{Wavelet Approximation} 4) \\textit{Soft-thresholding} 5) \\textit{JPEG Compression} \\newline\n \\item \\textbf{ Guided Denoiser}: Since adversarial samples are based on adding noise to clean images, the natural idea that came to the authors was to develop denoising models as a defense mechanism . They defined two denoising networks:\\\\ 1)\\textbf{Pixel Guided Denoiser(PGD)}: Given a clean image $x$, the denoiser $D:x{*} \\rightarrow \\hat{x}$ is used to convert the adversarial image $x{*}$ into $\\hat{x}$ such that $L = ||x-\\hat{x}||$ is minimized where L is the $L_1$ norm. Two denoising functions were discussed-1) \\textit{Denoising Autoencoder(DAE)} and \\textit{Denoising U-Net(DUNET)} which was DAE modified with U-Net .\n \\begin{comment}\n \\begin{figure}[h]\n \\centering\n \\includegraphics[width = 0.7\\linewidth]{New_Images/Denoiser_1.png}\n \\caption{Denoising Autoencoder and Denoising U-Net}\n \\label{fig:Denoiser_1}\n \\end{figure}\n \\end{comment}\n 2)\\textbf{High-Level representation Guided Denoiser(HGD)}: HGD uses the same U-Net structure as DUNET but differs with it in terms of the loss function. Given a target neural network $f$, the HGD minimizes the loss $L=||f(x)_l-f(\\hat{x})_l||$ where $f(x)_l$ is the representation of the neural network at layer $l$. Three variants of HGD were also specified-1) When $l=-2$, then the representations of the top-most\n convolutional layer is used, and the denoiser is called \\textit{Feature Guided Denoiser}. 
2) When $l=-1$,\n logit layer representations are used, and the denoiser is called \\textit{Logits Guided Denoiser}. 3) When the classification loss of the target model is used as a denoising loss function, then the denoiser is called \\textit{Class Guided Denoiser}. \\newline\n \\begin{comment}\n \\begin{figure}[h!]\n \\centering\n \\includegraphics[width = 0.7\\linewidth]{New_Images/Denoiser_2.png}\n \\caption{Feature Guided Denoiser(FGD), Logits Guided Denoiser(LGD) and Class Guided Denoiser(CGD)}\n \\label{fig:Denoiser_2}\n \\end{figure}\n \\end{comment}\n \\item \\textbf{PixelDefend}: The authors have proposed the defense mechanism \\textit{PixelDefend}, which aims to detect adversarial images and modify the input images so that they more closely resemble images drawn from the training distribution $p(x)$. The training distribution $p(x)$ is learnt through a PixelCNN (p(x)=$p_{CNN}(x)$). The probabilities of the input image, $X{'} \\sim q(X)$, and training images, $\\{X_1,X_2,...X_N \\} \\sim p(X)$, are evaluated using PixelCNN, and the rank of $p_{CNN}(X')$ in $\\{p_{CNN}(X_1),p_{CNN}(X_2),...p_{CNN}(X_N)\\} $ is used as a test-statistic:\n \\begin{equation*}\n T = T(X{'};X_1....X_N) \\triangleq \\sum_{i=1}^{N} \\mathbb{I}[p_{CNN}(X_i) \\leq p_{CNN}(X{'})]. 
\n \\end{equation*}\n The above equation is used to calculate the \\textit{p-value}, which is used as a metric to decide whether an image is adversarial or not:\n \\begin{equation}\n p = \\frac{1}{N+1} \\left( \\sum_{i=1}^{N}\\mathbb{I}[T_i \\leq T] +1 \\right) = \\frac{T+1}{N+1} = \\frac{1}{N+1} \\left( \\sum_{i=1}^{N} \\mathbb{I}[p_{CNN}(X_i) \\leq p_{CNN}(X{'})]+1 \\right)\n \\end{equation}\n Now the input image $X$ is used to calculate the image $X^*$ that maximizes $p(X)$ subject to the constraint that $X^*$ is within the $\\epsilon_{defend}$ ball(maximum perturbation) of $X$:\n \\begin{align*}\n & max_{X^*}p(X^*) \\\\\n s.t \\hspace{0.2cm} & ||X-X^*||_{\\infty} \\leq \\epsilon_{defend}\n \\end{align*}\n Iteratively pixels ($ x \\leftarrow X[i,j,k]$) are sampled from the range $R \\leftarrow [max(x-\\epsilon_{defend},0),min(x+ \\epsilon_{defend},255)]$, with the goal to maximize the training data distribution $p(X^*)$.\n \\end{itemize}", "id": "45c9f510-42b5-438b-a116-7cd4b70809f6", "level": "subsubsection", "origin_cites_number": 20, "parent_id": "8482b260-6a7c-40d7-ab1c-e3d3b878aef1", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Defense against Adversarial attacks" ], [ "subsection", "Defense Techniques" ], [ "subsubsection", "Input Transformation" ] ], "subsections": [], "title": "Input Transformation" }, { "cite_extract_rate": 0.636363636363636, "cites": [ 7311, 38, 924, 917, 953, 923, 6149 ], "content": "\\begin{itemize}\n \\item \\textbf{NULL Labeling}: In this paper , the authors propose to block transferability by training a classifier such that as input noise level increases, the classifier shows lower confidence on the original label and instead declares the input invalid. 
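Among the input transformations above, thermometer encoding is simple enough to sketch directly. Below is a minimal NumPy version for pixel values in $[0,1]$; the function name and the vectorized bucket computation are our own, but the encoding follows the $b(\cdot)$ and $\tau(\cdot)$ definitions above:

```python
import numpy as np

def thermometer_encode(x, k=10):
    """Thermometer-encode pixel values x in [0, 1] into k-level vectors.

    b(x): the smallest bucket index alpha in {1..k} with x <= alpha/k.
    tau(alpha): a vector with ones at every index l >= alpha (the
    cumulative sum of the one-hot vector of alpha).
    """
    x = np.asarray(x, dtype=float)
    alpha = np.clip(np.ceil(x * k), 1, k).astype(int)  # bucket index b(x)
    idx = np.arange(1, k + 1)
    return (idx >= alpha[..., None]).astype(float)     # tau(b(x))

enc = thermometer_encode(np.array([0.05, 0.55]), k=10)
```

Unlike a plain one-hot code, nearby pixel values share most of their encoding bits, while the hard quantization removes the small-perturbation directions a gradient-based attacker relies on.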
The method consists of three steps: 1) initial training, 2) computing NULL probabilities for adversarial examples generated with different amounts of perturbations 3)adversarial training.\n \\begin{comment}\n \\begin{figure}[h]\n \\centering\n \\captionsetup{width= 0.6\\linewidth}\n \\centerline{\\includegraphics[width = \\linewidth]{Images/block_trans_black.png}}\n \\caption{Block Diagram of the proposed mechanism}\n \\end{figure}\n \\end{comment}\n The compute function $f$, which is used for computing the null probability $p_{NULL}$ for adversarial examples, in this case, is taken as the \\textbf{MG Method}. During the Adversarial training phase, the adversarial examples are generated using \\textbf{STG Method}. The network is trained based on the following optimization problems:\n \\begin{align*}\n & \\delta X^* = argmin_{\\delta X} l(X+\\delta X,Z_T;\\theta), \\hspace{0.2cm} s.t. ||\\delta X||_0 \\sim U[1,N_{max}] \\\\\n & \\theta^* = argmin_{\\theta} \\alpha l(X,Z;\\theta) + (1-\\alpha)l(X+ \\delta X^*,Z_A;\\theta) \n \\end{align*}\n Here $l(X,Z;\\theta)$ is the loss function with the parameters $\\theta$. $X$ and $X^*$ are the input and adversarial images, respectively. $\\delta X$ is the noise sampled from the uniform distribution $U$ in the range $[1,N_{max}]$ where $N_{max}$ is the minimum number for which $f(\\frac{N_{max}}{|X|})=1$ where $f$ is the classifier. $Z_T$ is the probability vector that is used for generating adversarial examples, whereas $Z_A$ is the desired output probability vector. $\\alpha$ is the hyperparameter that controls the training between clean and adversarial samples. \\newline\n \\item \\textbf{Ensemble Adversarial Training}: Tramer et al. hypothesized that Kurakin et al.'s adversarially trained model on single-step attacks was still vulnerable to multi-step attacks. 
He proposed \\textit{Ensemble Adversarial Training} as a method for decoupling the adversarial examples from the parameters of the trained model and increasing the diversity of perturbations seen during training. The algorithm augments the training dataset of the target model with adversarial examples transferred from \\textit{other static pre-trained} models. Intuitively, the transfer of adversarial examples from multiple models acts as a good approximation of the maximization problem in Madry et al.'s paper, which was not possible with transferable single-step attacks. \\newline\n \\item \\textbf{ALP}: In this paper, Kannan et al. proposed a method called \\textit{logit pairing} and its two variants: \\textit{clean} and \\textit{adversarial}. They compared their method with a modified version of the defense proposed by Madry et al., which they formulated as \\textbf{mixed minibatch PGD}. \\textbf{M-PGD} is stated as a min-max problem:\n \\begin{equation}\n argmin_{\\theta} \\left[ \\mathbb{E}_{(x,y)\\in \\hat{p}_{data}} \\left( max_{\\delta \\in \\mathit{S}}L(\\theta,x+\\delta,y)\\right)+ \\mathbb{E}_{(x,y)\\in \\hat{p}_{data}} \\left( L(\\theta,x,y) \\right) \\right]\n \\end{equation}\n \\textbf{Adversarial and Clean Logit Pairing} involve minimizing the following losses, respectively:\n \\begin{equation}\n \\label{adv_logit_pair}\n J(\\mathbb{M},\\theta) + \\lambda \\frac{1}{m} \\sum_{i=1}^{m} L\\left( f(x^{(i)};\\theta),f(\\bar{x}^{(i)};\\theta) \\right) \n \\end{equation}\n \\begin{equation}\n \\label{clean_logit_pair}\n J^{(clean)}(\\mathbb{M},\\theta) + \\lambda \\frac{2}{m} \\sum_{i=1}^{\\frac{m}{2}} L\\left( f(x^{(i)};\\theta),f(x^{(i+\\frac{m}{2})};\\theta) \\right)\n \\end{equation}\n Here $J(\\mathbb{M},\\theta)$ is the classifier loss for minibatch $\\mathbb{M}$. $\\lambda$ and $m$ are the penalizing coefficient and minibatch size, respectively. $f(x,\\theta)$ is the logit layer, and \\textbf{L} is the loss function for calculating the similarity of two images. 
Here \\textbf{L} is the $L^2$ distance metric; in equation [\\ref{adv_logit_pair}], $(x,\\bar{x})$ are a training image and its adversarial image, respectively, while in equation [\\ref{clean_logit_pair}], $(x^{(i)},x^{(i+\\frac{m}{2})})$ are two random training images. \\newline\n \\item \\textbf{M-PGD}: Madry et al. conducts a careful examination of the optimization problem in the study of adversarial attacks and robust models. He proposes a guarantee that an adversarially robust model should satisfy, in the form of a \\textit{Saddle-Point} problem:\n \\begin{equation}\n min_{\\theta} \\rho(\\theta),\\hspace{0.4cm}\\text{where}\\hspace{0.2cm} \\rho(\\theta) = \\mathbb{E}_{(x,y)\\sim D}\\left[max_{\\delta \\in \\textit{S}}L(\\theta,x+\\delta,y) \\right] \n \\end{equation}\n Evaluation of local maxima of loss values by using the \\textit{PGD} attack on \\textit{MNIST} and \\textit{CIFAR-10} models suggests that PGD is the universal adversary among the first-order approaches. Robustness against PGD yields robustness against all first-order adversaries, i.e., as long as the adversary uses the gradients of the loss function w.r.t. the input, it will not find significantly better local maxima than PGD. So, for the inner maximization problem, PGD is the best first-order attack, whereas adversarial training is the solution for the outer minimization problem. The authors also provide a comparative study of network capacity against adversarial robustness. For a weaker \\textit{black-box} attack based on transferability, increased network capacity decreases transferability. Adversarial training using stronger adversaries also reduces transferability. \\newline\n \\item \\textbf{PNI}: Rakin et al. 
has proposed \\textit{Parametric Noise Injection}, which aims to make the model more robust by adding trainable Gaussian noise at each layer, on either activations or weights, during the adversarial training phase:\n \\begin{align}\n &\\bar{v_i} = f_{PNI}(v_i) = v_i +\\alpha_i.\\eta; \\hspace{0.4cm} \\eta \\sim \\mathcal{N}(0,\\sigma^2) \\\\\n &where \\hspace{0.2cm} \\sigma = \\sqrt{\\frac{1}{N}\\sum_i(v_i-\\mu)^2}\n \\end{align}\n Here $v_i$ is an element of the noise-free tensor $v$, which can be an input/weight/activation. $\\eta$ is the noise sampled from the Gaussian distribution, and $\\alpha_i$ is the learnable noise parameter, which is shared within a layer to avoid over-parameterization and facilitate convergence. The gradient of the loss w.r.t. the noise parameter is:\n \\begin{equation*}\n \\frac{\\partial \\mathcal{L}}{\\partial \\alpha}=\\sum_i \\frac{\\partial \\mathcal{L}}{\\partial f_{PNI}(v_i)} \\frac{\\partial f_{PNI}(v_i)}{\\partial \\alpha}; \\hspace{0.3cm} \\frac{\\partial f_{PNI}(v_i)}{\\partial \\alpha} = \\eta\n \\end{equation*}\n Using a gradient descent optimizer with momentum, the optimization of $\\alpha$ at step $j$ can be written as:\n \\begin{equation*}\n V_{i}^{j} = m.V_{i}^{j-1} + \\frac{\\partial \\mathcal{L}^{j-1}}{\\partial \\alpha}; \\hspace{0.3cm} \\alpha^j = \\alpha^{j-1} - \\epsilon.V_{i}^j\n \\end{equation*} \n where $m$ is the momentum, $\\epsilon$ is the learning rate, and $V$ is the updating velocity.\\newline\n \\item \\textbf{Feature Denoising}: Xie et al. suggests that small perturbations added in the input domain lead to substantial noise in the feature maps of the network. 
While the feature maps of clean images focus on semantically relevant content, the feature maps of adversarial images are activated even in semantically irrelevant regions.\n \\begin{comment}\n \\begin{figure}[h]\n \\centering\n \\includegraphics[width = 0.6\\linewidth]{New_Images/Feature_Denoise_1.png}\n \\caption[width = 0.7\\linewidth]{Feature maps of $res_3$ block of ImageNet-trained ResNet-50 for both the clean image(top) and adversarial image(bottom). The adversarial perturbations are produced using \\textit{PGD} attack with maximum perturbation $\\epsilon=16$ }\n \\label{fig:fd_1}\n \\end{figure}\n \\end{comment}\n Motivated by this, he proposes \\textit{feature denoising} as a defense mechanism. Adversarial Training of the classifier is performed together with the addition of a \\textit{denoising block}. The structure of the block is inspired by \\textit{self-attention} and \\textit{Non-Local Networks} .\n \\begin{comment}\n \\begin{figure}[h]\n \\centering\n \\includegraphics[width = 0.6\\linewidth]{New_Images/Feature_Denoise_2.png}\n \\caption{Denoising Block}\n \\label{fig:fd_2}\n \\end{figure}\n \\end{comment}\n Several denoising operations have been mentioned in this paper: 1) \\textbf{Non-Local Means}: For an input feature map $x$, a denoised feature map $y$ is computed as a weighted mean of features over all spatial locations $\\mathcal{L}$:\n \\begin{equation}\n \\label{fd_eq}\n y_i = \\frac{1}{\\mathcal{C}(x)} \\sum_{\\forall j \\in \\mathcal{L}}f(x_i,x_j).x_j,\n \\end{equation}\n where $f(x_i,x_j)$ is the feature-dependent weighting function and $\\mathcal{C}(x)$ is the normalization function. \n Two variants have been discussed: 1) \\textit{Gaussian (softmax)}, which sets $f(x_i,x_j)=e^{\\frac{1}{\\sqrt{d}}\\theta(x_i)^T \\phi(x_j)}$ and $\\mathcal{C} = \\sum_{\\forall j \\in \\mathcal{L}}f(x_i,x_j)$, and 2) \\textit{Dot Product}, which sets $f(x_i,x_j)=x_{i}^{T}x_j$ and $\\mathcal{C}(x)=\\mathit{N}$, where $\\mathit{N}$ is the number of pixels in $x$. 
2) \\textbf{Bilateral Filter} :It is similar to Equation[\\ref{fd_eq}] but differs from it in its neighborhood $\\Omega(i)$, which is a local region (e.g a 3X3 patch) around pixel $i$. 3) \\textbf{Mean Filter}: Simplest denoising operation is the mean filter i.e average pooling with a stride of 1. Mean filters reduce noise but also smooth structures. 4) \\textbf{Median Filter}: It is defined as $y_i = median\\{\\forall j \\in \\Omega(i):x_j\\}$ where the median is over a local region $\\Omega(i)$ and is performed separately for each channel. It is good at removing \\textit{salt and pepper} noise.\\newline\n \\item \\textbf{Randomized Smoothing}: The goal of this defense method is to obtain a new classifier $g$ from base classifier $f$ which returns the most probable class when the input $x$ perturbed with Gaussian Noise is added to the base classifier:\n \\begin{equation}\n g(x) = arg max_{c \\in \\mathcal{Y}} \\hspace{1mm} \\mathbb{P}(f(x+ \\epsilon)=c), \\hspace{2mm} \\epsilon \\sim \\mathcal{N}(0,\\sigma^2 I)\n \\end{equation}\n Now $n$ samples of Gaussian noise are drawn and added to the input image $x$. These noisy samples of x are given as input to the base classifier $f$ and the class counts are obtained.\n If $\\hat{c_A}$ and $\\hat{c_B}$ are the two classes with maximum counts $n_A$ and $n_B$ then a Binomial Hypothesis test is performed to obtain the p-value ($n_A$ $\\sim$ Binomial($n_A+n_B$,p)). If the p-value is less than a constant $\\alpha$ then return $\\hat{c_A}$ otherwise abstain from returning an output. \\newline\n \\item \\textbf{ Hidden Space Restriction}: The authors of this paper suggest that the main reason for perturbations is the close-proximity of different class samples in the learned space. To counter this, they proposed to disentangle class-wise intermediate feature representation of deep networks i.e., force the features of each class to lie inside a convex region, which is maximally separated from the regions of other classes. 
For an input sample $x$, let the adversarial polytope (the convex region within which the label of the input image doesn't change) be $P_{\\epsilon}(x;\\theta)=\\{F_{\\theta}(x+\\delta) \\hspace{0.1cm} s.t. \\hspace{0.1cm} ||\\delta||_p\\leq \\epsilon \\}$; then $O_{\\epsilon}^{i,j}$ represents the overlap region between the data pair $(i,j)$, i.e., $O_{\\epsilon}^{i,j}=P_{\\epsilon}(x_{y_i}^i;\\theta)\\cap P_{\\epsilon}(x_{y_j}^j;\\theta)$. The authors propose that reducing this overlap region between different classes will result in lower adversarial success. To achieve this, the authors augment the loss function used while training. The added loss is called the \\textit{Prototype Conformity Loss}:\n \\begin{equation}\n \\mathcal{L}_{PC}(x,y) = \\sum_{i} \\left\\{ ||f_i-w_{y_i}^c||_2 - \\frac{1}{k-1}\\sum_{j\\neq y_i}\\left( ||f_i-w_{j}^{c}||_2 + ||w_{y_i}^{c}-w_{j}^{c}||_2 \\right) \\right\\} \n \\end{equation}\n where $f_i$ denotes the DNN output representation of image $i$ and $w_{y_i}^c$ are the trainable centroids of class $y_i$. At inference time, the label is given to an image $i$ as follows: $\\hat{y}_i = argmin_j||f_i-w_{y_j}^{c}||$. The total loss function for training the model is:\n \\begin{equation}\n \\mathcal{L}(x,y) = \\mathcal{L}_{CE}(x,y)+ \\mathcal{L}_{PC}(x,y)\n \\end{equation}\n where $\\mathcal{L}_{CE}(.)$ is the cross-entropy loss. To obtain a similar effect in intermediate layers, $\\mathcal{L}_{PC}(.)$ is calculated using a branch $\\mathcal{G}_{\\phi}$, which maps the features to a lower dimension. 
So the final loss becomes:\n \\begin{align*}\n \\mathcal{L} & = \\mathcal{L}_{CE} + \\sum_{l=1}^{L}\\mathcal{L}_{PC}^{l} \\\\\n & where \\hspace{0.2cm} \\mathcal{L}_{PC}^l = \\mathcal{L}_{PC}(f^l,y)\\\\\n & s.t., \\hspace{0.2cm} f^l = \\mathcal{G}_{\\phi}^l(F_{\\theta}^l(x))\n \\end{align*}\n \\begin{comment}\n \\begin{figure}[h]\n \\centering\n \\includegraphics[width = 0.7\\linewidth]{New_Images/adv_hidden_space.png}\n \\caption{Illustration of the training procedure.}\n \\label{fig:Hidden_space_restrict}\n \\end{figure}\n \\end{comment}\n \\end{itemize}
\\begin{itemize}\n \\item \\textbf{MagNet}: \\textit{MagNet} is used for defending neural network classifiers against adversarial examples. MagNet consists of two components: 1) a \\textit{detector}, which rejects examples far from the manifold of normal examples, and 2) a \\textit{reformer}, which, given an input $x$, strives to find\n $x{'}$ such that $x{'}$ is a close approximation of $x$ and is also close to the normal examples manifold.\n \\begin{comment}\n \\begin{figure}[h!]\n \\centering\n \\includegraphics[width = 0.8\\linewidth]{New_Images/Magnet-model.png}\n \\caption{Workflow of the MagNet framework during test phase}\n \\label{fig:magnet}\n \\end{figure}\n \\end{comment}\n \\textbf{Detector}: The detector is the function $d: \\mathbb{S} \\rightarrow \\{0,1\\}$, which detects whether the input is adversarial.
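Of the training-time defenses above, randomized smoothing has a particularly compact prediction rule: classify many noisy copies of the input and return the top class only when it wins a hypothesis test. A minimal sketch follows; the toy base classifier and all parameter values are our own illustrative choices:

```python
import numpy as np
from scipy.stats import binomtest

def smoothed_predict(base_classifier, x, sigma=0.25, n=100, alpha=0.05, rng=None):
    """Randomized-smoothing prediction sketch.

    Classifies n Gaussian-noise copies of x with the base classifier,
    takes the two most frequent classes, and returns the top class only
    if a binomial test rejects the hypothesis that the two classes are
    equally likely; otherwise it abstains (returns None).
    """
    rng = np.random.default_rng(rng)
    counts = {}
    for _ in range(n):
        c = base_classifier(x + rng.normal(0.0, sigma, size=np.shape(x)))
        counts[c] = counts.get(c, 0) + 1
    ranked = sorted(counts.items(), key=lambda kv: -kv[1]) + [(None, 0)]
    (c_a, n_a), (_, n_b) = ranked[0], ranked[1]
    p_value = binomtest(n_a, n_a + n_b, 0.5).pvalue
    return c_a if p_value <= alpha else None

base = lambda x: int(np.mean(x) > 0)   # toy base classifier: sign of the mean
pred = smoothed_predict(base, np.full(8, 1.0), sigma=0.25, n=50, rng=0)
```

The abstention branch is what makes the smoothed classifier's prediction certifiable: a confident majority under noise implies the prediction is stable within a radius that grows with the margin between the top two classes.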
Two kinds of detector were mentioned in the literature: 1) \\textit{Detector based on Reconstruction Error} - In this an autoencoder is trained using the normal samples. The loss function minimized in this case is:\n \\begin{equation*}\n L(\\mathbb{X}_{train}) = \\frac{1}{|\\mathbb{X}_{train}|} \\sum_{x \\in \\mathbb{X}_{train}} ||x-ae(x)||_2\n \\end{equation*}\n $ae(x)$ denotes the reconstructed image obtained from the autoencoder $ae$. The reconstruction error on a test sample $x$ is:\n \\begin{equation}\n E(x) = ||x-ae(x)||_p \n \\end{equation}\n where $E(x)$ is the $l_p$ distance between input image and its reconstructed image. If $E(x)$ is greater than some constant $\\delta_p$ then the input is an adversarial example otherwise it is not. 2) \\textit{Detector based on probability divergence} - Let $f(x)$ be the output of the last layer(softmax) of the neural network $f$ given an input $x$. Then in this case probability divergence of\n $f(x)$ and $f(ae(x))$ is evaluated using Jensen-Shannon Divergence:\n \\begin{equation}\n JSD(P||Q) = \\frac{1}{2} D_{KL}(P||M) + \\frac{1}{2} D_{KL}(Q||M)\n \\end{equation}\n where $D_{KL}(P||Q) = \\sum_{i} P(i) log \\frac{P(i)}{Q(i)}$ is the Kullback–Leibler Divergence and $M = \\frac{1}{2}(P+Q)$. The Jensen–Shannon divergence is a method of measuring the similarity between two probability distributions $P$ and $Q$. $\\\\$\n $\\\\$\n \\textbf{Reformer}: The reformer is the function $r:\\mathbb{S} \\rightarrow \\mathbb{N}_t$ which tries to reconstruct the test input so that it is closer to the training data manifold. An ideal reformer: 1) should not change classification results of normal samples and 2) should change adversarial samples adequately such that the reformed samples are close to normal samples. 
Two kinds of reform strategies were proposed: 1) \\textit{Noise based Reformer}: a naive reformer which adds random noise to the input sampled from a Gaussian distribution i.e $r(x) = clip(x+ \\epsilon.y)$ where $y \\sim \\mathit{N}(y;0,I)$.\n 2) \\textit{Autoencoder based Reformer}: In this case the autoencoder trained for detector $ae(x)$ is used. For an input $x{'}$ the reconstruction obtained from autoencoder $ae(x{'})$ is fed to the target classifier as it is closer to the normal samples manifold.$\\\\$\n \\item \\textbf{Feature Squeezing}: Xu et al. argues that feature input spaces are unnecessarily large. So he proposed a mechanism to reduce the degrees of freedom available to an adversary by squeezing out unnecessary input features. \\textit{Feature Squeezing} is a simple adversarial example detection framework that compares the model's prediction on the original sample with its prediction after squeezing. If the difference is higher than a threshold value, then the sample is adversarial and is discarded.\n \\begin{comment}\n \\begin{figure}[h]\n \\centering\n \\includegraphics[width = 0.6\\linewidth]{New_Images/fs_framework.png}\n \\caption{Feature Squeezing Framework}\n \\label{fig:fs_model}\n \\end{figure}\n \\end{comment}\n Two techniques have been proposed by the authors:1)\\textit{Squeezing Color Bits}: Grayscale images comprise of 8-bit pixel values, whereas color images consist of 24-bit pixels. The bit depth can be considerably reduced without much loss of information, thereby reducing the search space for adversarial samples. This is done by using a binary filter with 0.5 as the cutoff. \n \\begin{comment}\n \\begin{figure}[h]\n \\centering\n \\includegraphics[width = 0.6\\linewidth]{New_Images/grayscale_bit_redux.png}\n \\caption{Image samples with bit reduction from 8 to 1. 
The rows are of MNIST, CIFAR-10 and ImageNet respectively.}\n \\label{fig:gs_1}\n \\end{figure}\n \\end{comment} \n 2) \\textit{Spatial Smoothing}: It is an image processing technique that is used to reduce noise. Its two variations have been mentioned:1) \\textit{Local Smoothing}: Smoothes the value of a pixel in a particular window size by using nearby pixels. In this paper, the Median Smoothing was used. 2) \\textit{ Non-Local Smoothing}: For a given image patch, non-local smoothing finds several similar patches in a larger area and then replaces the center patch with the average of those patches. \\newline\n \\item \\textbf{Convex adversarial polytope}: In this paper, Wong et al. proposed a method for obtaining \\textit{provably robust} ReLU classifiers. For a ReLU based neural network $f$ where $f_{\\theta} : \\mathbb{R}^{|x|} \\rightarrow \\mathbb{R}^{|y|}$, we obtain an adversarial polytope ($\\mathcal{Z}_{\\epsilon}(x)$) which is the set of all final layer activations that can be obtained by perturbing the input with $\\Delta$. Here, $\\Delta$ is $l_{\\infty}$ norm bounded by $\\epsilon$:\n \\begin{equation}\n \\mathcal{Z}_{\\epsilon}(x) = \\{f_{\\theta}(x + \\Delta): ||\\Delta||_{\\infty} \\leq \\epsilon \\}\n \\end{equation}\n Now, a \\textit{convex outer bound} on this adversarial polytope is created. If no point inside this \\textit{outer bound} changes the class label of the input, then no label is changed in the true adversarial polytope as well. This bound is obtained by solving the following linear program:\n \\begin{equation}\n z \\geq 0,\\hspace{1mm} z \\geq \\hat{z}, \\hspace{1mm} -u\\hat{z}+ (u-l)z \\leq -ul\n \\end{equation}\n Here, $u,l,\\hat{z},z$ are the lower bound, upper bound, pre ReLU activation and ReLU activation respectively. 
Given a dataset $(x_i,y_i)_{i=1,2,...,N}$, instead of training the model using these data points, the model is trained using the farthest points (i.e., those with the highest loss) from $x_i$ that are $l_{\\infty}$-bounded by $\\epsilon$:\n \\begin{equation}\n \\min_{\\theta} \\sum_{i=1}^{N} \\max_{||\\Delta||_{\\infty} \\leq \\epsilon} L(f_{\\theta}(x_i+\\Delta),y_i)\n \\end{equation}\n $L$ is the loss function, and $\\theta$ are the parameters of the feedforward ReLU neural network. \n \\end{itemize}", "id": "f2ab1a86-5146-4d19-8b3f-1c0db39a8c72", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "8482b260-6a7c-40d7-ab1c-e3d3b878aef1", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Defense against Adversarial attacks" ], [ "subsection", "Defense Techniques" ], [ "subsubsection", "Adversarial Detection" ] ], "subsections": [], "title": "Adversarial Detection" }, { "cite_extract_rate": 0.846153846153846, "cites": [ 8396, 6151, 894, 932, 7311, 930, 3865, 950, 6149, 6148, 934, 892, 941, 890, 906, 924, 952, 945, 953, 917, 905, 902 ], "content": "Similar to the attack section, the defense techniques have now been tested and compared on three datasets, i.e., MNIST, CIFAR-10, and ImageNet. The results have been compared along the following dimensions: the attack model on which the defense technique was used; classification accuracy (CA), which is the accuracy obtained on clean images; and attack success rate (ASR), which highlights the performance of the specific attack model and parameters used. CA and ASR have two subcategories as well: with defense and without defense, which measure the above-mentioned metrics with and without defense techniques. 
The results have been obtained from the respective research studies.\n\\begin{table}[h!]\n \\centering\n \\begin{adjustbox}{width = \\textwidth, center}\n \\begin{tabular}{|c|c|c|c|c|c|c|}\n \\hline\n \\textbf{Method} & \\textbf{Attack Model} & \\multicolumn{2}{c|}{\\textbf{Classification Accuracy}} & \\multicolumn{2}{c|}{\\textbf{Attack Success Rate}} & \\textbf{Specific Parameters Used} \\\\\n & & \\textbf{Without Defense} & \\textbf{With Defense} & \\textbf{Without Defense} & \\textbf{With Defense} & \\\\\n \\hline\n Papernot & JSMA & 99.51\\% & 99.05\\% & 95.89\\% & 0.45\\% & Temperature T = 100 \\\\\n \\hline\n Hosseini & FGSM & 99.35\\% & 99.39\\% & 90\\% & 45\\% & $\\alpha = 0.5$, label Smoothing Parameter q = 0.9 \\\\\n \\hline\n Meng & FGSM & 99.40\\% & 99.10\\% & 8.90\\% & 0\\% & $L_{\\infty}$ metric, $\\epsilon = 0.010$, epochs = 100, lr = 0.001 \\\\\n & I-FGSM & 99.40\\% & 99.10\\% & 28\\% & 0\\% & $L_{\\infty}$ metric, $\\epsilon = 0.010$, epochs = 100, lr = 0.001 \\\\\n & Deepfool & 99.40\\% & 99.10\\% & 80.90\\% & 0.60\\% & $L_{\\infty}$ metric, epochs = 100, lr = 0.001\\\\\n & C\\&W & 99.40\\% & 99.10\\% & 100\\% & 0.20\\% & $L_{\\infty}$ metric, epochs = 100, lr = 0.001 \\\\\n \\hline\n Xu & FGSM & 99.43\\% & 99.28\\% & 46\\% & 8\\% & $L_{\\infty}$, Bit Depth Method of 1 bit \\\\\n & C\\&W & 99.43\\% & 99.28\\% & 100\\% & 0\\% & $L_{\\infty}$, Bit Depth Method of 1 bit \\\\\n & JSMA & 99.43\\% & 99.28\\% & 71\\% & 18\\% & $L_{0}$ metric, Median Smoothing of $3x3$ \\\\\n \\hline\n Buckman & FGSM & 99.30\\% & 98.67\\% & 100\\% & 3.71\\% & $l_{\\infty}$ metric, $\\epsilon = 0.3$, 40 steps for iterative attacks \\\\\n & PGD/LS-PGA & 99.30\\% & 98.67\\% & 100\\% & 5.70\\% & $l_{\\infty}$ metric, $\\epsilon = 0.3$, 40 steps for iterative attacks, $\\xi = 0.01$, $\\delta = 1.2$ \\\\\n \\hline\n Samangouie & FGSM & 99.70\\% & - & 71.84\\% & 8.95\\% & $\\epsilon=0.3$, $L=200$, $R=10$\\\\\n \\hline\n Kannan & PGD & - & 98.80\\% & - & 3.60\\% & $l_{\\infty}$ 
metric, $\\epsilon=0.3$, lr = 0.045 \\\\\n \\hline\n Wong & FGSM & 98.20\\% & 94.18\\% & 50.01\\% & 3.93\\% & $l_{\\infty}$ metric, $\\epsilon= 0.1$\\\\\n & PGD & 98.20\\% & 94.18\\% & 81.68\\% & 4.11\\% & $l_{\\infty}$ metric, $\\epsilon= 0.1$ \\\\\n \\hline\n Tramer & FGSM & - & 99.30\\% & 72.80\\% & 14\\% & $l_{\\infty}$ metric, $\\epsilon=0.3$, lr = 0.045 \\\\\n \\hline\n Madry & FGSM & 99.20\\% & 98.80\\% & 58.10\\% & 4.40\\% & $l_{\\infty}$ metric, $\\epsilon=0.3$\\\\\n & PGD & 99.20\\% & 98.80\\% & 74\\% & 10.70\\% & $l_{\\infty}$ metric, $\\epsilon=0.3$, 40 iterations of PGD \\\\\n \\hline\n Mustafa & FGSM & 98.71\\% & 99.53\\% & 77\\% & 21.70\\% & epochs = 50, lr = 0.1, $\\epsilon=0.3$ \\\\\n & C\\&W & 98.71\\% & 99.53\\% & 79.10\\% & 22.80\\% & c = 10, 1000 iteration steps for C\\&W \\\\\n & PGD & 98.71\\% & 99.53\\% & 88.10\\% & 30.50\\% & 10 iteration steps for PGD with step size $\\frac{\\epsilon}{10}$ \\\\\n \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Defense techniques against attacks on MNIST data set.}\n \\label{defense_mnist}\n \\begin{adjustbox}{width = \\textwidth, center}\n \\begin{tabular}{|c|c|c|c|c|c|c|}\n \\hline\n \\textbf{Method} & \\textbf{Attack Model} & \\multicolumn{2}{c|}{\\textbf{Classification Accuracy}} & \\multicolumn{2}{c|}{\\textbf{Attack Success Rate}} & \\textbf{Specific Parameters Used} \\\\\n & & \\textbf{Without Defense} & \\textbf{With Defense} & \\textbf{Without Defense} & \\textbf{With Defense} & \\\\\n \\hline\n Papernot & JSMA & 80.95\\% & 81.34\\% & 87.59\\% & 5.11\\% & Temperature T = 100 \\\\\n \\hline\n Meng & FGSM & 90.60\\% & 86.80\\% & 54\\% & 0.10\\% & $L_{\\infty}$ metric, $\\epsilon = 0.050$, epochs = 400, lr = 0.001 \\\\\n & Iterative & 90.60\\% & 86.80\\% & 88.90\\% & 0.10\\% & $L_{\\infty}$ metric, $\\epsilon = 0.025$, epochs = 400, lr = 0.001 \\\\\n & Deepfool & 90.60\\% & 86.80\\% & 95.50\\% & 6.60\\% & $L_{\\infty}$ metric, epochs = 400, lr = 0.001 \\\\\n & C\\&W & 90.60\\% & 86.80\\% 
& 100\\% & 17\\% & $L_{\\infty}$ metric, epochs = 400, lr = 0.001 \\\\\n \\hline\n Xu & FGSM & 94.84\\% & 89.29\\% & 85\\% & 62\\% & $L_{\\infty}$ metric, Median Smoothing of $2x2$ \\\\\n & C\\&W & 94.84\\% & 89.29\\% & 100\\% & 16\\% & $L_{\\infty}$ metric, Median Smoothing of $2x2$ \\\\\n & Deepfool & 94.84\\% & 89.29\\% & 98\\% & 17\\% & $L_{2}$ metric, Median Smoothing of $2x2$ \\\\\n & JSMA & 94.84\\% & 89.29\\% & 71\\% & 16\\% & $L_{0}$ metric, Median Smoothing of $2x2$ \\\\\n \\hline\n Buckman & FGSM & 94.29\\% & 87.67\\% & 56.11\\% & 19.04\\% & $l_{\\infty}$ metric, $\\epsilon = 0.031$, 7 steps for iterative attacks\\\\\n & PGD/LS-PGA & 94.29\\% & 87.67\\% & 98.34\\% & 20.84\\% & $l_{\\infty}$ metric, $\\epsilon = 0.031$, 7 steps for iterative attacks, $\\xi = 0.01$, $\\delta = 1.2$ \\\\\n \\hline\n Dhillon & FGSM & 89.80\\% & - & 30\\% & 22\\% & epochs = 150, Initial lr = 0.5 \\\\\n & I-FGSM & 89.80\\% & - & 40\\% & 15\\% & $\\lambda=1$, SAP-100 \\\\\n \\hline\n Song & FGSM & 92\\% & 88\\% & 89\\% & 76\\% & $l_{\\infty}$ metric, $\\epsilon_{attack}=2/8/16$, $\\epsilon_{defend}=16$ \\\\\n & Deepfool & 92\\% & 88\\% & 94\\% & 20\\% & $l_{\\infty}$ metric, $\\epsilon_{attack}=2/8/16$, $\\epsilon_{defend}=16$ \\\\\n & C\\&W & 92\\% & 88\\% & 100\\% & 22\\% & $l_{\\infty}$ metric, $\\epsilon_{attack}=2/8/16$, $\\epsilon_{defend}=16$ \\\\\n \\hline\n Rakin & FGSM & 92.11\\% & 84.78\\% & 85.92\\% & 45.96\\% & $\\epsilon=0.3/1$ and $\\epsilon=8/255$ under $l_2$ and $l_{\\infty}$ distance respectively \\\\\n & PGD & 92.11\\% & 84.78\\% & 100\\% & 54.17\\% & Number of iteration steps is 7 for PGD \\\\\n \\hline\n Cohen & Deepfool & 90\\% & 78\\% & 80\\% & 35\\% & $l_2$ metric, $\\alpha=0.001$, n = 100000 \\\\\n \\hline\n Madry & FGSM & 95.20\\% & 90.30\\% & 67.30\\% & 4.90\\% & $l_{\\infty}$ metric, $\\epsilon=8$ \\\\\n & PGD & 95.20\\% & 90.30\\% & 96.50\\% & 54.20\\% & $l_{\\infty}$ metric, $\\epsilon=8$, 7 steps of size 2 for PGD\\\\\n \\hline\n Mustafa & FGSM & 
90.80\\% & 90.45\\% & 61\\% & 14.50\\% & epochs = 200, lr = 0.1, $\\epsilon=0.03$ \\\\\n & C\\&W & 90.80\\% & 90.45\\% & 68.20\\% & 16.70\\% & c = 0.1, 1000 iteration steps for C\\&W \\\\\n & PGD & 90.80\\% & 90.45\\% & 70.90\\% & 23.60\\% & 10 iteration steps for PGD with step size $\\frac{\\epsilon}{10}$ \\\\\n \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Defense techniques against attacks on CIFAR-10 data set.}\n \\label{defense_cifar}\n\\end{table}\n\\begin{table}[h!]\n\\centering\n\\begin{adjustbox}{width = \\textwidth,center} \n \\begin{tabular}{|c|c|c|c|c|c|c|}\n \\hline\n \\textbf{Method} & \\textbf{Attack Model} & \\multicolumn{2}{c|}{\\textbf{Classification Accuracy}} & \\multicolumn{2}{c|}{\\textbf{Attack Success Rate}} & \\textbf{Specific Parameters Used} \\\\\n & & \\textbf{Without Defense} & \\textbf{With Defense} & \\textbf{Without Defense} & \\textbf{With Defense} & \\\\\n \\hline\n Xu & FGSM & 69.70\\% & 62.10\\% & 99\\% & 67\\% & $L_{\\infty}$ metric, Median Smoothing of $3x3$ \\\\\n & C\\&W & 69.70\\% & 62.10\\% & 99\\% & 23\\% & $L_{\\infty}$ metric, Non Local Means Method (11-3-4) \\\\\n & DeepFool & 69.70\\% & 62.10\\% & 89\\% & 28\\% & $L_{2}$ metric, Median Smoothing of $2x2$ \\\\\n \\hline\n Guo & FGSM & - & 75.10\\% & - & 29.63\\% & $L_{2}$ metric \\\\\n & I-FGSM & - & 75.10\\% & - & 28.48\\% & Pixel dropout probability p = 0.5 \\\\\n & DeepFool & - & 75.10\\% & - & 28.53\\% & Total Variance minimizer $\\lambda_{TV} = 0.03$ \\\\\n & C\\&W & - & 75.10\\% & - & 29.50\\% & Quilting Patch Size of $5x5$ \\\\\n \\hline\n Xie & FGSM & 100\\% & 97.30\\% & 67\\% & 47.60\\% & $l_{\\infty}$ metric, $\\epsilon=10$ \\\\\n & DeepFool & 100\\% & 97.30\\% & 100\\% & 1.70\\% & $l_{\\infty}$ metric, $\\epsilon=10$ \\\\\n & C\\&W & 100\\% & 97.30\\% & 100\\% & 3.10\\% & $l_{\\infty}$ metric, $\\epsilon=10$ \\\\\n \\hline\n Kannan & PGD & - & - & - & 72.10\\% & $l_{\\infty}$ metric, $\\epsilon=0.3$, lr = 0.045 \\\\\n \\hline\n Prakash & FGSM & 
100\\% & 98.90\\% & 80\\% & 18.50\\% & $l_2$ metric, $\\epsilon=0.02-0.04$ \\\\\n & I-FGSM & 100\\% & 98.90\\% & 85.90\\% & 16.30\\% & Coefficient for BayesShrink, $\\sigma=0.04$ \\\\\n & DeepFool & 100\\% & 98.90\\% & 73.70\\% & 9.70\\% & Window size for pixel deflection, r = 10 \\\\\n & JSMA & 100\\% & 98.90\\% & 74.50\\% & 3\\% & Number of pixel deflections, K = 100 \\\\\n & C\\&W & 100\\% & 98.90\\% & 95.20\\% & 2\\% & ResNet-50 Model \\\\\n \\hline\n Liao & FGSM-White Box & 76.70\\% & 75.30\\% & 85.60\\% & 63.30\\% & epochs = 20 to 30, initial lr = 0.001, $l_{\\infty}$ metric, $\\epsilon=4/16$ \\\\\n & FGSM-Black Box & 76.70\\% & 75.30\\% & 59\\% & 43.30\\% & epochs = 20 to 30, initial lr = 0.001, $l_{\\infty}$ metric, $\\epsilon=4/16$ \\\\\n \\hline\n Tramer & best(FGSM , I-FGSM , PGD ) & 80.40\\% & 79.80\\% & 44.40\\% & 27\\% & $l_{\\infty}$ metric, $\\epsilon=16$, lr = 0.045 \\\\\n \\hline\n Xie & NIPS '17 CAAD & 78.91\\% & 79.08\\% & 100\\% & 50.50\\% & $l_{\\infty}$ metric, $\\epsilon=32$, $\\alpha=1$ \\\\\n \\hline\n Cohen & DeepFool & 78\\% & 70\\% & 95\\% & 49\\% & $l_2$ metric, $\\alpha=0.001$, n = 100000 \\\\\n \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Defense techniques against attacks on ImageNet data set.}\n \\label{defense_imagenet}\n\\end{table}\nTable \\ref{defense_mnist} compares defenses for various attack models on MNIST dataset. Defenses proposed by Meng et al. and Xu et al. are the only ones for which \\emph{Attack Success Rate} reduces to $0\\%$. Distillation by Papernot et al. is the defense method with the least reduction in classification accuracy. \nTable \\ref{defense_cifar} shows similar comparison but instead on CIFAR-10 dataset with the following interesting observations:- 1) MagNet demonstrates maximum robustness among the various defense techniques for FGSM and I-FGSM attacks. 2) The technique proposed by Mustafa et al. demonstrates the highest generalizability in terms of classification accuracy. 
\nTable \\ref{defense_imagenet} compares defense techniques on the ImageNet dataset against various attacks like FGSM, C\\&W, DeepFool, PGD, etc. Average \\emph{Attack Success Rates} on the ImageNet dataset are much higher than on MNIST and CIFAR-10, even when employing the same defense techniques. Pixel Deflection by Prakash et al. and Randomization by Xie et al. have the best generalizability in terms of classification accuracy. The models employed by them can get $100\\%$ classification accuracy when not using the defense methods. Also, Pixel Deflection exhibits the highest robustness in comparison to other methods.", "id": "34efdf8c-0915-48a4-b1bd-ad00da78be48", "level": "subsection", "origin_cites_number": 26, "parent_id": "6d3245fb-c6c8-446e-a1f1-2bbe4d6c6052", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Defense against Adversarial attacks" ], [ "subsection", "Comparitive Analysis of Defense Techniques" ] ], "subsections": [], "title": "Comparitive Analysis of Defense Techniques" }, { "cite_extract_rate": 0, "cites": [], "content": "Our primary motive behind presenting this study is to encourage the community to push forward research in the black-box discipline. Analysis of the defense techniques discussed in this study reveals that the majority of these methods have been tested mostly on white-box adversarial attacks. To counter the security challenges put forward by black-box adversaries, defense mechanisms should also be tested comprehensively on these models to enhance their robustness and applicability in a large number of scenarios. In addition, all attacks, both white-box and black-box, focus on perturbing all the features of the input image. 
We think that an interesting research direction would be the detection of non-robust features and exploiting only these features for misclassification.\n\\bibliographystyle{unsrt} \n\\bibliography{references} \n\\end{document}", "id": "ec0c0627-040e-43e2-bccf-e85e0b006228", "level": "section", "origin_cites_number": 0, "parent_id": "148419f6-0bed-487d-a5df-2290d0d6ed0e", "prefix_titles": [ [ "title", "A Survey of Black-Box Adversarial Attacks on Computer Vision Models" ], [ "section", "Conclusion and Future Scope" ] ], "subsections": [], "title": "Conclusion and Future Scope" } ]
52
[ 313, 5828, 7306, 909, 8103, 8101, 6144, 6141, 963, 6140, 907, 6143, 6139, 898, 913, 6142, 8102, 890, 917, 6145, 919, 892, 6146, 902, 6147, 8982, 8396, 6151, 932, 7311, 930, 3865, 950, 6149, 6148, 934, 941, 6150, 924, 952, 945, 953, 905, 681, 8104, 64, 507, 38, 923, 894, 906 ]
0.69965
[ "Ce Zhou", "Qian Li", "Chen Li", "Jun Yu", "Yixin Liu", "Guangjing Wang", "Kai Zhang", "Cheng Ji", "Qiben Yan", "Lifang He", "Hao Peng", "Jianxin Li", "Jia Wu", "Ziwei Liu", "Pengtao Xie", "Caiming Xiong", "Jian Pei", "Philip S. Yu", "Lichao Sun" ]
A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT
2023
2023-02-18T20:51:09Z
cs.AI
Pretrained Foundation Models (PFMs) are regarded as the foundation for various downstream tasks with different data modalities. A PFM (e.g., BERT, ChatGPT, and GPT-4) is trained on large-scale data which provides a reasonable parameter initialization for a wide range of downstream applications. In contrast to earlier approaches that utilize convolution and recurrent modules to extract features, BERT learns bidirectional encoder representations from Transformers, which are trained on large datasets as contextual language models. Similarly, the Generative Pretrained Transformer (GPT) method employs Transformers as the feature extractor and is trained using an autoregressive paradigm on large datasets. Recently, ChatGPT shows promising success on large language models, which applies an autoregressive language model with zero shot or few shot prompting. The remarkable achievements of PFM have brought significant breakthroughs to various fields of AI in recent years. Numerous studies have proposed different methods, datasets, and evaluation metrics, raising the demand for an updated survey. This study provides a comprehensive review of recent research advancements, challenges, and opportunities for PFMs in text, image, graph, as well as other data modalities. The review covers the basic components and existing pretraining methods used in natural language processing, computer vision, and graph learning. Additionally, it explores advanced PFMs used for different data modalities and unified PFMs that consider data quality and quantity. The review also discusses research related to the fundamentals of PFMs, such as model efficiency and compression, security, and privacy. Finally, the study provides key implications, future research directions, challenges, and open problems in the field of PFMs. 
Overall, this survey aims to shed light on the research of the PFMs on scalability, security, logical reasoning ability, cross-domain learning ability, and the user-friendly interactive ability for artificial general intelligence.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "160c6790-e92e-4d72-a546-5d2f900ac104", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ] ], "subsections": [ "92692f3f-5d0b-4b5b-92dc-708a1b94dbd5", "5b0bdffe-fb02-409c-9a32-2a9ca46fae77", "c043d8a4-ac81-470e-a27d-0a13ffaae1ac", "81242bcb-042c-4bf0-9ebe-f6cb44a94843", "238ea228-7a56-4a7f-a4df-2e79b3065bd3", "f9872e6b-da5b-4455-8e22-b3eac29cee8a", "4e7df0ae-6f27-4db8-94cf-15c255b077e4", "bff33ab9-dc91-49d5-b524-ed2d9a01f149", "90d7385e-341b-4735-9bff-17abb31ffdc4", "e28e7bc1-61ac-4024-be2b-2259d31465ae", "8cba48d4-e174-4a7a-afb3-8ba526649a63", "720be46a-69c4-42e1-a410-af6829c488a7", "1224d99a-6f33-4a1f-8309-c4a5655141c1", "2f7164de-3a58-4a21-ac2f-e8e2ec66800e", "a3fada64-fc86-4ec8-a16c-f5e9675e7655", "99f4cfea-bdc9-409b-976b-dcdca4dfb1e0" ], "title": "root" }, { "cite_extract_rate": 0.44444444444444403, "cites": [ 1458, 1550, 2464, 1445 ], "content": "\\label{Section 1}\nPretrained Foundation Models (PFMs) are regarded as essential and significant components of Artificial Intelligence (AI) in the era of big data. The foundation model is first named in~, which means a broader class of models and their functions. PFMs are extensively studied in the three major AI fields: natural language processing (NLP)~, computer vision (CV)~ and graph learning (GL)~. PFMs are powerful general models that are effective in various fields or across fields. They have demonstrated great potential in learning feature representations in various learning tasks, such as text classification~, text generation~, image classification~, object detection~, and graph classification~. 
\nPFMs show superior performance when trained on multiple tasks with a large-scale corpus and then fine-tuned on similar small-scale tasks, making rapid data processing possible.", "id": "92692f3f-5d0b-4b5b-92dc-708a1b94dbd5", "level": "section", "origin_cites_number": 9, "parent_id": "160c6790-e92e-4d72-a546-5d2f900ac104", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Introduction" ] ], "subsections": [ "73693802-7c4a-4924-a44d-b558714f64f9", "ec4ad3eb-4813-4807-93c2-dcc827dc64d7" ], "title": "Introduction" }, { "cite_extract_rate": 0.8518518518518511, "cites": [ 679, 1354, 2465, 1150, 2472, 2466, 9115, 364, 1578, 2470, 7165, 8466, 2467, 2469, 7465, 8536, 2471, 11, 2468, 7579, 7, 1582, 1557 ], "content": "PFMs are built upon the pretraining technique, which aims to train a general model using large amounts of data and tasks that can be fine-tuned easily in different downstream applications.\nThe idea of pretraining originates from transfer learning~ in CV tasks. Recognizing the effectiveness of pretraining in the field of CV, people have begun to use pretraining technology to enhance model performance in other areas. When pretraining techniques are applied to the NLP domain, well-trained language models (LMs) can capture rich knowledge beneficial for downstream tasks, such as long-term dependencies, hierarchical relationships, etc. In addition, the significant advantage of pretraining in the NLP field is that training data can be derived from any unlabeled text corpus; that is, there is an unlimited amount of training data for the pretraining process.\nEarly pretraining techniques, such as NNLM~ and Word2vec~, were static, and static methods are difficult to adapt to different semantic environments. 
Therefore, dynamic pretraining techniques are proposed, such as BERT~, XLNet~, etc.\nFig.~\\ref{history_evolution} depicts the history and evolution of PFMs in the NLP, CV, and GL domains. The PFMs based on the pretraining technique use large corpora to learn generic semantic representations. With the introduction of these pioneering works, various PFMs have emerged and been applied to downstream tasks and applications.\nA great example of PFM application is ChatGPT\\footnote{\\url{https://openai.com/blog/chatgpt/}}. ChatGPT is fine-tuned from the generative pretrained transformer GPT-3.5, which was trained on a blend of text and code~. ChatGPT applies reinforcement learning from human feedback (RLHF) , which has become a promising way to align large language models (LLMs) with a human's intent~. The surprisingly superior performance of ChatGPT may lead to a tipping point for a shift of training paradigm for each type of PFMs -- applying \\textit{instruction aligning} techniques, e.g., reinforcement learning (RL), prompt tuning , and chain-of-thought (COT) , to move towards artificial general intelligence. \nWe focus on reviewing PFMs for text, image, and graph, which is a relatively mature research taxonomy.\nFor text, it is a multi-purpose LM to predict the next word or character in a sequence. For example, PFMs can be used for machine translation, question-answering systems, topic modeling, sentiment analysis, etc.\nFor image, it is similar to PFMs on text, which uses huge datasets to train a big model suitable for many CV tasks. For graphs, a similar pretraining idea is also applied to obtain PFMs, which are used for many downstream tasks. Apart from the PFMs for a specific data domain, we also review and state some other advanced PFMs, such as the PFMs for speech, video, and cross-domain data, and multimodal PFMs. 
An exemplary illustration is the GPT-4 model, as described by OpenAI~, which is a massive multimodal language model that can process both text and image inputs and generate text outputs. GPT-4 has demonstrated human-level performance on various professional and academic evaluation tasks. \nMoreover, there is a growing trend in PFMs that deals with multimodal data, known as unified PFMs. This term refers to models that can handle different types of data such as text, images, and audio. In this regard, we provide a definition of unified PFMs and a review of the current state-of-the-art models in recent research. Notable examples include OFA~, UNIFIED-IO~, FLAVA~, BEiT-3~, and others.\nAccording to the features of existing PFMs, we conclude that the PFMs have the following two major advantages. First, minor fine-tuning is required to enhance the model performance on downstream tasks. Second, the PFMs have already been vetted on the quality aspect. Instead of building a model from scratch to solve a similar problem, we can apply PFMs to task-related datasets. 
\nThe great promise of PFMs has inspired a wealth of related work to focus on the model efficiency~, security~ and compression~.\n\\begin{figure*}[!t]\n \\centering\n \\includegraphics[width=1\\linewidth]{pictures/times.pdf}\n \\caption{The history and evolution of PFMs.}\n \\label{history_evolution}\n\\end{figure*}", "id": "73693802-7c4a-4924-a44d-b558714f64f9", "level": "subsection", "origin_cites_number": 27, "parent_id": "92692f3f-5d0b-4b5b-92dc-708a1b94dbd5", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Introduction" ], [ "subsection", "PFMs and Pretraining" ] ], "subsections": [], "title": "PFMs and Pretraining" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 1458, 7518, 1550, 2464, 1445 ], "content": "There are several survey studies~ that have reviewed pretrained models for specific areas such as text generation~, visual transformers~, and object detection~.\nBommasani et al.~ summarize the opportunities and risks of the foundation model. However, existing works did not achieve a comprehensive review of PFMs in different areas (e.g., CV, NLP, GL, Speech, Video) and different aspects such as pretraining tasks, efficiency, efficacy, and privacy.\nIn this survey, we specifically track the evolution of PFMs in the NLP domain, as well as how pretraining is transferred to and adopted by CV and GL.\nIn contrast to other surveys, we provide a comprehensive introduction and analysis of existing PFMs across all three fields.\nUnlike reviews of previous pretrained models, we summarize existing models ranging from traditional models to PFMs with recent works in the three domains.\nTraditional models emphasize static feature learning, while dynamic PFMs, the mainstream of current research, are introduced with a focus on their structures.\nWe further present some other research for PFMs, including other advanced and unified PFMs, model efficiency and compression, security, and privacy. 
Finally, we summarize future research challenges and open problems in different domains. We also comprehensively present the related evaluation metrics and datasets \\textbf{in Appendix~\\ref{Evaluation_Metrics} and~\\ref{datasets}}.\nIn summary, the main contributions are as follows:\n\\begin{itemize}\n \\item We present a solid and up-to-date review of the development of PFM in NLP, CV, and GL. Over the review, we discuss and provide insights about the generalized PFM design and pretraining methodology among the three major application domains.\n \\item We summarize the development of PFMs in other multimedia areas such as speech and video. Besides, we discuss advanced topics about PFMs, including unified PFMs, model efficiency and compression, and security and privacy.\n \\item Through the review of PFMs in various modalities for different tasks, we discuss the main challenges and opportunities for future research of very large models in the big data era, which guides a new generation of collaborative and interactive intelligence based on PFMs.\n\\end{itemize}\nThe rest of the survey is organized as follows. \nSection~\\ref{Section 2} introduces the basic components.\nSections~\\ref{Section 3},~\\ref{Section 4} and~\\ref{Section 5} summarize the existing PFMs in NLP, CV and GL, respectively.\nSections~\\ref{Section 6},~\\ref{Section 7} introduce other advanced research for PFMs, including advanced and unified PFMs, model efficiency and compression, as well as security and privacy, respectively. 
\nFurthermore, we summarize the main challenges for PFMs in Section~\\ref{Section 11} before concluding the survey in Section~\\ref{Section 12}.", "id": "ec4ad3eb-4813-4807-93c2-dcc827dc64d7", "level": "subsection", "origin_cites_number": 6, "parent_id": "92692f3f-5d0b-4b5b-92dc-708a1b94dbd5", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Introduction" ], [ "subsection", "Contribution and Organization" ] ], "subsections": [], "title": "Contribution and Organization" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Section 2}\nThe general conceptual architecture of PFMs is shown in Fig.~\\ref{fig:ptm_concepts}. The PFMs are huge neural network models, which are all about neural information processing. The specific designs of PFMs vary according to the data modality and task requirements in different areas. Transformer is a mainstream model architecture design for PFMs in many areas such as NLP and CV. Training large models need to have various datasets for model pretraining. After training the PFMs, the model should be fine-tuned to satisfy downstream requirements such as efficacy, efficiency, and privacy. In this section, we introduce the basic model architectures, concepts, and settings of PFMs in NLP, CV, and GL domains. 
For the introduction of a more detailed component, please refer to \\textbf{Appendix~\\ref{Components}}.\n\\begin{figure*}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures/ptm_concepts.png}\n \\caption{The general conceptual architecture of PFMs: data, model, and system.}\n \\label{fig:ptm_concepts}\n\\end{figure*}", "id": "5b0bdffe-fb02-409c-9a32-2a9ca46fae77", "level": "section", "origin_cites_number": 0, "parent_id": "160c6790-e92e-4d72-a546-5d2f900ac104", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Basic Components" ] ], "subsections": [ "a96c9b0e-9d71-4bee-bf0c-0428d07e4d1b", "c55e9e0c-cebe-4c51-b0e5-ff280adb44b6", "235dfd2e-07af-44e0-9efe-f1d9c2ab1fa0" ], "title": "Basic Components" }, { "cite_extract_rate": 1, "cites": [ 679, 1183, 8558, 732, 1554, 7095, 38 ], "content": "The Transformer~ is an innovative architecture that facilitates the transfer of weighted representation knowledge between various neural units. It relies solely on attention mechanisms and doesn't use recurrent or convolutional architectures. The attention mechanism is a crucial component of the Transformer as it assigns weights to all the encoded input representations and learns the most important part of the input data. The output of the attention is obtained by taking the weighted sum of the values, and the weights are calculated using the compatibility function of the query with the corresponding key~. Numerous attention mechanisms~ have been developed in large models. For instance, in natural language processing, self-attention is created to connect various positions in a single sequence for generating a representation of the same sequence. Transformer leverages a mask matrix to provide an attention mechanism based on self-attention, in which the mask matrix specifies which words can ``see'' each other.\nTransformer is an important structure for PFMs in NLP, CV, and GL areas. 
For NLP, the Transformer can help solve the long-range dependency issues when processing sequential input data. For example, GPT-3~ is a generative model based on the Transformer. For CV, the Vision Transformer (ViT)~ is proposed to represent an image as a series of image patches, which is similar to a series of word embeddings. For GL, the Graph Transformer Networks (GTN)~ are employed to learn new graph structures and powerful node representations without domain knowledge. Transformers become scalable enough to drive ground-breaking capabilities for PFMs thanks to their structure, which enables higher parallelization. The ViT-22B model~, for instance, has about 22B parameters, and the largest language models can have upwards of 100B parameters (e.g., GPT-3 has 175B and PaLM~ has 540B parameters).", "id": "a96c9b0e-9d71-4bee-bf0c-0428d07e4d1b", "level": "subsection", "origin_cites_number": 7, "parent_id": "5b0bdffe-fb02-409c-9a32-2a9ca46fae77", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Basic Components" ], [ "subsection", "Transformer for PFMs" ] ], "subsections": [], "title": "Transformer for PFMs" }, { "cite_extract_rate": 0, "cites": [], "content": "Deep learning models in CV have been shown to outperform traditional learning models by a large margin in most tasks, including the common classification, recognition, detection, and segmentation tasks and the specific matching, tracking, and sequence prediction. 
These learning methods are not only available in CV, but also in NLP and GL.", "id": "c55e9e0c-cebe-4c51-b0e5-ff280adb44b6", "level": "subsection", "origin_cites_number": 0, "parent_id": "5b0bdffe-fb02-409c-9a32-2a9ca46fae77", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Basic Components" ], [ "subsection", "Learning Mechanisms for PFMs" ] ], "subsections": [ "6e0a316a-8315-4a82-a294-df9888be65f4" ], "title": "Learning Mechanisms for PFMs" }, { "cite_extract_rate": 0, "cites": [], "content": "Suppose we are given a training dataset $\\bm X$ containing the original training data $\\{(\\bm{x}_i, y_i)\\}_{i=1}^{n}$, where $\\bm{x}_i$ denotes the $i$-th training sample, and $y_i$ denotes the corresponding label. The complete network learns a function $f(\\bm{x};\\bm \\theta)$ by minimizing the objective function as follows.\n\\begin{equation}\n \\mathop{\\arg\\min}_{\\bm \\theta} \\ \\ \\frac{1}{n}\\sum\\nolimits_{i=1}^{n}{\\mathcal{L}(f(\\bm{x}_i;\\bm \\theta),y_i)}\n +\\lambda\\Omega(\\bm\\theta),\n\\end{equation}\nwhere $\\mathcal{L}$ and $\\Omega$ represent the predefined loss function and a regularization term, respectively. 
The function $f$ has a nested form like
\begin{equation}
\begin{aligned}
 \bm h_{1}(\bm{x}_i)&=g(\bm{x}_i^\top\bm{\omega}_1+b_1), \\ \bm h_{l+1}(\bm{x}_i)&=g(\bm h_l(\bm{x}_i)^\top\bm{\omega}_{l+1}+b_{l+1}), 
 l = 1, 2, \cdots, N-1,
\end{aligned}
\end{equation}
where $l$ indexes the layers of the deep learning model and $N$ is the number of layers, so that $\bm\theta = \{\bm\omega_l,b_l, l = 1, 2, \cdots, N\}$.
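The objective and the nested form above can be sketched as follows (a toy illustration with squared loss and an L2 regularizer; the activation $g$, layer sizes, and names are our own assumptions):

```python
import numpy as np

def forward(x, params, g=np.tanh):
    """Nested form: h_1 = g(x^T w_1 + b_1), h_{l+1} = g(h_l^T w_{l+1} + b_{l+1})."""
    h = x
    for w, b in params:
        h = g(h @ w + b)
    return h

def objective(X, y, params, lam=1e-3):
    """(1/n) sum_i L(f(x_i; theta), y_i) + lambda * Omega(theta),
    with squared loss L and Omega(theta) = sum_l ||w_l||_F^2."""
    preds = np.array([forward(x, params) for x in X]).squeeze()
    empirical_risk = np.mean((preds - y) ** 2)
    reg = sum(np.sum(w ** 2) for w, _ in params)
    return empirical_risk + lam * reg

rng = np.random.default_rng(1)
params = [(rng.normal(size=(4, 8)), np.zeros(8)),   # layer 1: w_1, b_1
          (rng.normal(size=(8, 1)), np.zeros(1))]   # layer 2: w_2, b_2
X, y = rng.normal(size=(5, 4)), rng.normal(size=5)
print(objective(X, y, params))
```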
Specifically, if no labels are available for any data during training, we can learn from properties of the data itself, either via internal distances or via designed pretext tasks; these approaches are known as unsupervised learning and self-supervised learning (SSL), respectively. The latter is our main focus and is discussed in detail in Section~\ref{sec:Learning by Generation}.
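The combined objective can be sketched as below; the relation function $R$ is illustrated here as nearest-neighbour label copying, which is only one possible choice (the model $f$, the data, and all names are our own toy assumptions):

```python
import numpy as np

def nearest_label(z, X, y):
    """A toy relation function R(z, X): copy the label of the nearest labelled sample."""
    return y[np.argmin(np.linalg.norm(X - z, axis=1))]

def semi_supervised_objective(f, X, y, Z, R):
    """Supervised loss on labelled (X, y) plus loss on unlabelled Z against
    pseudo-targets R(z, X), matching the formulation above (squared losses)."""
    supervised = np.mean((f(X) - y) ** 2)
    pseudo = np.array([R(z, X, y) for z in Z])
    unsupervised = np.mean((f(Z) - pseudo) ** 2)
    return supervised + unsupervised

f = lambda A: A @ np.array([0.5, -0.2])   # toy linear model
X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([0.5, -0.2])                 # f fits the labelled data exactly
Z = np.array([[0.9, 0.1]])                # one unlabelled sample
print(semi_supervised_objective(f, X, y, Z, nearest_label))
```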
Here we need to minimize the total of $nK$ loss terms, formulated as follows.
\begin{equation}
 \mathop{\arg\min}_{\bm\theta} \ \ \frac{1}{nK}\sum\nolimits_{i=1}^n\sum\nolimits_{k=1}^K{\mathcal{L}(f(\bm x_i;\bm\theta),y_i^k)} + \lambda\Omega(\bm\theta),
\end{equation}
where $\left[y_i^1, y_i^2, \cdots, y_i^K\right] = {\bm y}_i$, and $\mathcal{L}$ is a loss function suitable for a binomial classification problem. For each entry in ${\bm y}_i$, a one-versus-all binomial classification loss is computed.
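A minimal sketch of the $nK$ one-versus-all losses, instantiating $\mathcal{L}$ as binary cross-entropy over sigmoid outputs (the logits and labels below are made up for illustration):

```python
import numpy as np

def one_vs_all_bce(logits, Y):
    """(1/nK) sum_i sum_k L(f(x_i)^k, y_i^k) with binary cross-entropy:
    one independent binomial classification per (sample, label) entry."""
    p = 1.0 / (1.0 + np.exp(-logits))   # sigmoid per entry
    eps = 1e-12                         # numerical safety
    losses = -(Y * np.log(p + eps) + (1 - Y) * np.log(1 - p + eps))
    return losses.mean()

Y = np.array([[1, 0, 1], [0, 1, 0]])    # n=2 samples, K=3 noisy labels
logits = np.array([[2.0, -1.0, 0.5], [-2.0, 3.0, -0.5]])
print(one_vs_all_bce(logits, Y))
```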
In this way, contrastive learning learns a model that brings similar instances closer together and pushes dissimilar instances farther apart in the projected space. Here we show a simple version of the contrastive loss: 
\begin{equation}
\begin{split}
 \mathcal{L}_\text{c}(\mathbf{x}_i, \mathbf{x}_j, \theta) = {} & m \| f_\theta(\mathbf{x}_i) - f_\theta(\mathbf{x}_j) \|^2_2 \\ & + (1-m)\max(0, \epsilon - \|f_\theta(\mathbf{x}_i) - f_\theta(\mathbf{x}_j)\|_2)^2,
\end{split}
\label{eq:contrastive}
\end{equation}
where $m$ is 1 if the two samples have the same label and 0 otherwise, and $\epsilon$ is the upper-bound distance.
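The contrastive loss above can be implemented directly (a sketch with our own toy embeddings):

```python
import numpy as np

def contrastive_loss(zi, zj, same_label, eps=1.0):
    """m * d^2 + (1 - m) * max(0, eps - d)^2 with d the Euclidean distance:
    pull same-label pairs together, push different-label pairs at least eps apart."""
    d = np.linalg.norm(zi - zj)
    m = 1.0 if same_label else 0.0
    return m * d ** 2 + (1 - m) * max(0.0, eps - d) ** 2

zi, zj = np.array([0.10, 0.20]), np.array([0.15, 0.10])
print(contrastive_loss(zi, zj, same_label=True))    # small: the pair is already close
print(contrastive_loss(zi, zj, same_label=False))   # large: the pair is closer than eps
```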
After an episode finishes, the agent restarts to begin a new episode. 
The return for each state is the discounted, accumulated reward with discount factor $\gamma \in (0,1]$,
$R_t=R(s_t,a_t)= \sum_{k=0}^{\infty} \gamma^k r_{t+k}.$
The agent aims to maximize the expected long-term return from each state, 
\begin{equation}
 \max_{\theta}{\mathbb{E}_{s_t}[R_t|s_t,a_t=\pi_{\theta}(s_t)]}.
\end{equation}
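The discounted return $R_t$ for a finished episode is computed most easily backwards in time, as this short sketch shows:

```python
def discounted_returns(rewards, gamma=0.99):
    """R_t = sum_k gamma^k * r_{t+k}, accumulated backwards over one episode."""
    returns, running = [], 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        returns.append(running)
    return list(reversed(returns))

print(discounted_returns([1.0, 0.0, 2.0], gamma=0.5))  # [1.5, 1.0, 2.0]
```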
The pretrained features can then assist downstream tasks by providing rich prior information and speeding up model convergence.
By training the model to distinguish whether a token has been replaced, the model acquires language knowledge.
\textbf{Next Sentence Prediction (NSP).} To make the model understand the correlation between two sentences and capture sentence-level representations, the NSP task is introduced. The PFM takes two sentences as input and predicts whether the second sentence follows the first in the original document; negative pairs are drawn from different documents. A typical example is BERT.
\textbf{Sentence Order Prediction (SOP).} Different from NSP, SOP uses two contiguous fragments from a document as a positive sample and the same two fragments in exchanged order as a negative sample. This lets PFMs such as ALBERT~ better model the correlation between sentences.
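As a simplified sketch of MLM-style corruption (real BERT additionally keeps some selected tokens unchanged or replaces them with random tokens; the helper below is our own illustration):

```python
import random

def mlm_mask(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Randomly replace tokens with [MASK] and record the prediction targets."""
    rng = random.Random(seed)
    corrupted, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            corrupted.append(mask_token)
            targets[i] = tok          # the model must recover this token
        else:
            corrupted.append(tok)
    return corrupted, targets

sentence = "the cat sat on the mat".split()
corrupted, targets = mlm_mask(sentence, mask_rate=0.5)
print(corrupted, targets)
```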
Then, using these pseudo labels as supervision, the encoder networks are trained to solve the pretext task. For example, inpainting pretrains models by predicting the missing center part of an image.
\textbf{Frame Order Learning Task.} Learning frame order from videos involves processing frames through time steps, which can serve as a pretraining task for CV. This usually involves pretext tasks that help the model acquire visual temporal representations.
\textbf{Data Generation Task.} The representational capabilities of generative adversarial networks (GANs) can also be used in pretraining tasks. Projecting data back into the latent space, as demonstrated by BiGANs~, yields feature representations that aid auxiliary supervised discrimination tasks.
\textbf{Data Reconstruction Task.} Since images can be divided into patches, inspired by natural language, some pretraining tasks for NLP can also be used in CV, e.g., autoencoder-based masked prediction. The original image is first divided into patches, and each patch is encoded as a discrete visual token. In the second stage, the model predicts the visual tokens of the masked patches so that they match the corresponding tokens produced by the fixed tokenizer.
\textbf{Miscellaneous.} Additional pretraining tasks have been suggested to train PFMs in CV. For instance, based on contrastive learning, encoder networks are pretrained on various data augmentations. The parameters are trained by maximizing the distance between negative pairs (e.g., pairs with different labels) and minimizing the distance between positive pairs (e.g., pairs with the same labels). 
To pretrain the parameters of the backbone network, the DeepClustering~ method groups representations into clusters and uses the cluster assignments as supervised signals.
embedding and minimize the agreement between samples with unrelated semantic information.
In practice, it can be divided into context-, self-, and cross-scale consistency analysis according to the model training strategy.
\textbf{Miscellaneous.}
Compared with using only one pretext task, some methods design integration mechanisms that incorporate the advantages of multiple pretext tasks into a unified framework.
Besides, some graph data in specific fields carry unique, practically meaningful self-supervised signals that can be used for pretraining under a targeted design.
In summary, the Transformer is a core component of large model architectures, helping to learn important features and mine the intrinsic structure of data. Different learning mechanisms can be used to train PFMs depending on the datasets and tasks. In particular, SSL is a promising mechanism for learning knowledge embeddings from data, given the large scale of unlabeled data in various areas. RL provides a new way to fine-tune PFMs for downstream tasks by optimizing a policy (model) against a reward model. How to design effective and efficient tasks for PFMs to master the knowledge behind the data remains an important research topic.
Its main research tasks include part-of-speech tagging, named entity recognition, semantic role labeling, machine translation, question answering, sentiment analysis, text summarization, text classification, relation extraction, event extraction, etc. 
The idea of PFMs first gained popularity in NLP; CV and GL subsequently adopted this promising pretraining technology.
A PFM is trained on a large benchmark dataset and fine-tuned on the primary task dataset, yielding a model that can solve new, similar tasks.
It models syntactic and semantic representations of words simultaneously and dynamically adjusts the representation of polysemous words according to the input context, learning rich knowledge of grammar and semantic reasoning.
Numerous PFMs have been proposed in the past few years, as shown in \textbf{Table~\ref{text}}. 
In this section, we first introduce word representation learning models, including the autoregressive language model (LM), contextual LM, and permuted LM. 
Then, we present neural network architectures for PFM design and masking design methods. Besides, we summarize boosting methods for enhancing model performance, multi-task learning, and different downstream tasks. Finally, we introduce instruction-aligning methods, e.g. 
RLHF and Chain-of-Thought, which are applied in PFMs such as ChatGPT to produce outputs that more closely match human preferences and are less harmful.
The word prediction direction and the use of contextual information are the key distinguishing factors among these three branches.
Because a one-way Transformer limits contextual modeling, the main performance improvement of GPT-2 comes from the combined effect of multi-task pretraining, super-large datasets, and super-large models.
Task-specific datasets for fine-tuning are still needed for particular downstream tasks.
Increasing the training scale of the LM can significantly enhance task-independent performance. Hence, GPT-3~ was developed: with 175 billion parameters and trained on 45 TB of data, it exhibits good performance without the need for fine-tuning for specific downstream tasks.
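The autoregressive factorization above translates directly into code; the conditional model below is a deliberately trivial stand-in (any learned next-token distribution could be plugged in):

```python
import math

def sequence_log_prob(tokens, next_token_prob):
    """log p(w_1..w_N) = sum_i log p(w_i | w_1..w_{i-1})."""
    logp = 0.0
    for i, tok in enumerate(tokens):
        logp += math.log(next_token_prob(tokens[:i], tok))
    return logp

vocab = ["a", "b", "c"]
uniform = lambda prefix, tok: 1.0 / len(vocab)   # toy model that ignores the prefix
print(sequence_log_prob(["a", "b"], uniform))    # 2 * log(1/3)
```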
ELMO~ uses only a bi-directional Long Short-Term Memory (LSTM), which concatenates two unidirectional LSTMs running forward and backward.
The contextual LM makes predictions based on contextual words.
It uses a Transformer encoder, and the upper and lower layers of the model are all directly connected to each other due to the self-attention mechanism.
For a sequence of words $T$, the probability of a given word is calculated as follows:
\begin{equation}
p\left(w_{1}, w_{2}, \ldots, w_{N}\right)=\prod_{i=1}^{N} p\left(w_{i} \mid w_{1}, w_{2}, \ldots, w_{N}\right).
\end{equation}
BERT~ uses a stacked multi-layer bi-directional Transformer as its basic structure and WordPiece~ as its word segmentation method. The model input consists of three parts: word embedding, segment embedding, and position embedding.
It uses a bi-directional Transformer as a feature extractor, which remedies the shortcomings of ELMO and GPT. 
However, the shortcomings of BERT should not be ignored. The bidirectional Transformer structure does not eliminate the constraints of the autoencoding model. Its vast number of parameters makes it unfriendly to devices with limited computing resources and challenging to deploy and apply.
Furthermore, the masked language modeling used in pretraining leads to inconsistencies with the model input in the fine-tuning stage.
Most PFMs need more training tasks and a larger corpus.
To address the problem of insufficient training, Liu et al.~ propose RoBERTa. It uses a larger batch size and unlabeled data; it also trains the model for a longer time, removes the NSP task, and adds long-sequence training. In processing text input, unlike BERT, Byte Pair Encoding (BPE)~ is adopted for word segmentation. 
Moreover, RoBERTa's dynamic masking applies a different mask pattern to each input sequence, even when the sequence itself is the same.
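The core of BPE is repeatedly merging the most frequent adjacent symbol pair; one merge step can be sketched as follows (the toy corpus and names are our own):

```python
from collections import Counter

def most_frequent_pair(word_freqs):
    """One BPE step: count adjacent symbol pairs over the corpus and return
    the most frequent pair, which would then be merged into a new symbol."""
    pairs = Counter()
    for symbols, freq in word_freqs.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

corpus = {("h", "u", "g"): 10, ("p", "u", "g"): 5, ("h", "u", "g", "s"): 5}
print(most_frequent_pair(corpus))  # ('u', 'g'), seen 20 times
```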
However, MLM uses the mask token during pretraining but not during fine-tuning, which results in a mismatch between pretraining and fine-tuning data. To achieve bi-directional coding while avoiding the problems of MLM, the permuted LM is proposed. The permuted LM is based on the autoregressive LM, which avoids the influence of this data mismatch. However, unlike traditional autoregressive models, the permuted LM no longer models sequences in order. It considers all possible permutations of a sequence to maximize the expected log-likelihood of the sequence. In this way, any position can take advantage of contextual information from all positions, enabling the permuted LM to implement bidirectional encoding.
The most common permuted LM models are XLNet~ and MPNet~. XLNet is a PFM based on the permuted language modeling approach, which incorporates two crucial techniques from Transformer-XL: relative positional encoding and the segment recurrence mechanism. In contrast, MPNet combines masked language modeling (MLM) and permuted language modeling to predict token dependencies, using auxiliary position information as input so the model can view a complete sentence and reduce position discrepancies. These two models represent significant advancements in the field of PFMs.
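The permuted factorization can be illustrated by sampling one order $z$ and listing, for each step $t$, the target token and its conditioning context (a toy sketch of our own):

```python
import random

def permuted_order(tokens, seed=0):
    """Sample a permutation z of positions; the token at z_t is predicted from the
    tokens at positions z_1..z_{t-1}, so context may come from both sides."""
    rng = random.Random(seed)
    z = list(range(len(tokens)))
    rng.shuffle(z)
    steps = []
    for t, pos in enumerate(z):
        context = [tokens[p] for p in z[:t]]
        steps.append((tokens[pos], context))
    return steps

for target, context in permuted_order(["w1", "w2", "w3"]):
    print(f"predict {target} given {context}")
```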
The maximum likelihood over these two directions is taken as the objective function. Compared with word-vector methods, ELMO introduces contextual information and alleviates the polysemy problem, but its overall ability to extract linguistic features is weak.
\begin{figure*}[!t]
 \centering
 \includegraphics[width=\linewidth]{pictures/picture55.pdf}
 \caption{The architectures of BART~: generalizing BERT (due to the bidirectional encoder) and GPT (with the left-to-right decoder). An autoregressive decoder is used to determine the likelihood of the original document after the corrupted document (on the left) has been encoded using a bidirectional model.}
 \label{ELMO-GPT-BART}
\end{figure*}
The application research of PFMs has two main directions: PFMs with fine-tuning (e.g., BERT) and PFMs with zero/few-shot prompts (e.g., GPT).
\textbf{BERT} uses the bi-directional encoder in the Transformer to predict which words are masked and to determine whether two sentences are contextual. However, the document is encoded bidirectionally and missing tokens are predicted independently, which reduces generation ability~.
\textbf{GPT} uses an autoregressive decoder as a feature extractor to predict the next word based on the preceding words, solving downstream tasks via fine-tuning, so it is more suitable for text-generation tasks. However, GPT only uses the preceding words for prediction and cannot learn bidirectional interaction information. 
Different from these models, \textbf{BART}~ is a denoising autoencoder built as a seq2seq model adopting the encoder-decoder structure, as shown in Fig.~\ref{ELMO-GPT-BART} from ~. 
Pretraining mainly consists of corrupting text with noise and using the seq2seq model to reconstruct the original text.
The encoding layer adopts a bi-directional Transformer.
BART adopts five modes of adding noise: (1) single-word mask; (2) word deletion; (3) span mask;
(4) sentence rearrangement; and (5) document rearrangement.
In the encoder part, the sequence is masked before being input to the encoder. The decoder then restores the original sequence from the encoder's output representation and the unmasked sequence.
The addition of these noise patterns significantly improves BART's performance on sequence generation and natural language inference tasks.
\nJoshi et al.~ propose \nSpanBERT based on RoBERTa, which adopts the idea of dynamic masking and single segment pretraining, as shown in Fig.~\\ref{SpanBERT} from ~.\nThe span mask and the Span Boundary Objective (SBO) are also proposed to mask words of a certain length.\nThe target task of the span-boundary is to restore all the masked span (tokens) by the observed tokens at both ends.\nThe training stage uses the dynamic mask strategy proposed in the RoBERTa, instead of the mask during the data preprocessing.\nUnlike BERT, SpanBERT randomly covers up a continuous text and adds the SBO training target. It predicts the span using the token closest to the span boundary and eliminates the NSP pretraining task.\nThe BERT and GPT can only separate the training encoder and decoder without joint training in the NLG task. Song et al.~ propose the masked seq2seq pretraining model MASS.\nIn the training stage, the input sequence of the encoder is randomly masked as a continuous segment of length $k$. 
The masked segment is then recovered by the MASS decoder.
UniLM~ learns an NLG model by designing different masks for the two sentences in the input.
For the first sentence, UniLM uses the same structure as the Transformer encoder, so that each word attends to both its preceding and following words.
In the second sentence, each word attends only to all words of the first sentence and to the preceding words of the current sentence.
Thus, the first and second sentences of the model input form the classic seq2seq pattern.
\begin{figure*}[!t]
    \centering
    \includegraphics[width=\linewidth]{pictures/picture66.pdf}
    \caption{The architecture of SpanBERT~.}
    \label{SpanBERT}
\end{figure*}
Most of the popular pretraining models need large amounts of pretraining data, which imposes huge hardware requirements and makes retraining challenging, so usually only fine-tuning of the model is feasible.
To solve these problems, several models have appeared.
For example, ERNIE Tiny released by Baidu is a miniaturized ERNIE~ that reduces the number of layers and increases prediction speed by 4.3 times with only a slight decrease in accuracy.
Lan et al. propose ALBERT~ to reduce memory consumption and increase training speed.
However, it is undeniable that no matter what kind of compression is applied to these large-scale models, their performance on downstream tasks deteriorates sharply. Future work should therefore attend to the efficient representation of high-level semantic and grammatical information and to lossless compression.
By factorizing the word-embedding parameters and sharing hidden parameters between layers, ALBERT significantly reduces the number of model parameters without performance loss. It also proposes the Sentence Order Prediction (SOP) training task, which predicts the order of two sentences, to improve performance.
ERNIE(Baidu)~ is mainly composed of two parts, the Transformer encoder and task embedding. In the Transformer encoder, the self-attention mechanism captures the context information of each token and generates a contextual representation embedding.
\nTask embedding is a technique that applies different characteristics to a task.\nERNIE 2.0~ introduces multi-task learning to realize the pretraining of lexical, grammar, and semantics.\nERNIE 2.0 uses seven different pretraining tasks, covering three aspects: word level, sentence level, and semantic level. It uses continual learning, making the knowledge in the previous training task retained and enabling the model to acquire long-distance memory.\nIt uses a Transformer encoder and introduces task embedding, enabling the model to distinguish different tasks in the continual learning process.\nUniLM~ uses three pretraining tasks: unidirectional LM, bidirectional LM, and encoder-decoder LM. \nIt can simultaneously complete three kinds of target tasks in the pretraining stage through the self-attention layer mask mechanism.\nIn the training stage, UniLM adopts the small-segment mask strategy proposed by SpanBERT, and the loss function is composed of the loss functions of the above three pretraining tasks. 
To keep the contributions of all loss functions consistent, the three pretraining tasks are trained simultaneously.
Joint modeling and parameter sharing across multiple tasks give LMs good generalization ability on Natural Language Understanding (NLU) and NLG tasks.
Pretraining models tend to be large, so matching them to different downstream tasks is equally important.
Some pretraining models trained on specialized corpora have appeared~.
Cui et al.~ propose the BERT whole-word-masking model (BERT-WWM). Directly applying the original MLM-style random masking to Chinese BERT loses semantic information: since Chinese has no explicit word boundaries, significant meaning is easily lost, whereas whole-word masking masks all characters of a word together. ZEN~ is a BERT-based text encoder that adopts N-grams to enhance performance, effectively integrating fine-grained text information, with fast convergence and good performance.
Tsai et al.~ propose a multilingual sequence labeling model, adopting knowledge distillation to achieve better performance on two tasks, part-of-speech tagging and morphological attribute prediction, for multiple low-resource languages.
The inference time is shortened by a factor of 27.
\begin{figure*}
    \centering
    \includegraphics[width=1\linewidth]{figures/chatgpt_RL.png}
    \caption{Boosting GPT-3.5 to ChatGPT using Reinforcement Learning from Human Feedback.}
    \label{fig:chatgpt_RL}
\end{figure*}
As shown in Fig.~\ref{fig:chatgpt_RL}, ChatGPT is fine-tuned from the PFM GPT-3.5 using RLHF. ChatGPT uses a different data collection setup compared to InstructGPT. First, a large dataset of prompts and the desired output behaviors is collected. This dataset is used to fine-tune GPT-3.5 with supervised learning. Second, given the fine-tuned model and a prompt, the model generates several outputs. A labeler scores and ranks these outputs to compose a comparison dataset, which is used to train the reward model.
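The reward-model training step just described can be sketched with the pairwise ranking loss reported for InstructGPT, $-\log\sigma(r_{chosen}-r_{rejected})$, here applied to toy scalar rewards (the reward values below are hypothetical; in practice the rewards come from a Transformer head and the loss is averaged over all pairs from each labeler's ranking):

```python
import math

def ranking_loss(r_chosen, r_rejected):
    # -log sigmoid(r_chosen - r_rejected): drives the reward of the
    # human-preferred output above the reward of the rejected one.
    return math.log(1.0 + math.exp(r_rejected - r_chosen))

def pairs(ranked):
    # A labeler's ranking of K outputs (best first) yields K*(K-1)/2 pairs.
    return [(ranked[i], ranked[j])
            for i in range(len(ranked)) for j in range(i + 1, len(ranked))]

# Hypothetical rewards for three ranked outputs of one prompt.
rewards = [1.8, 0.2, -1.1]
loss = sum(ranking_loss(a, b) for a, b in pairs(rewards)) / len(pairs(rewards))
```

Minimizing this loss over the comparison dataset yields the scalar reward signal that PPO then maximizes in the final step.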
Finally, the fine-tuned model (ChatGPT) is optimized against the reward model using the Proximal Policy Optimization (PPO) RL algorithm.
Another experimental conversational PFM, the Bard~\footnote{\url{https://blog.google/technology/ai/bard-google-ai-search-updates/}}, is developed by Google. Bard is based on the LM for Dialogue Applications (LaMDA). LaMDA~ is built upon the Transformer, which is pretrained on 1.56T words of dialog data and web text. Safety and factual grounding are two main challenges for conversational AI; LaMDA addresses them by fine-tuning with high-quality annotated data and by consulting external knowledge sources to improve model performance. 
\begin{table*}[!htbp]
\centering
\caption{Summary of PFMs in NLP. 
The pretraining task includes language model (LM), masked LM (MLM), permuted LM (PLM), denoising autoencoder (DAE), knowledge graphs (KG), and knowledge embedding (KE).
}
\label{text}
\resizebox{\textwidth}{!}{
\begin{tabular}{llllllll}
\hline
\textbf{Year} & \textbf{Conference} & \textbf{Model} & \textbf{Architecture} & \textbf{Embedding} & \textbf{Training method} & \textbf{Code} \\ \hline
2013 & NeurIPS & Skip-Gram~ & Word2Vec & Probabilistic & - & \href{https://github.com/tensorflow/models}{https://github.com/.../models} \\ \hline
2014 & EMNLP & GloVe~ & Word2Vec & Probabilistic & - & - \\ \hline
2015 & NeurIPS & LM-LSTM~ & LSTM & Probabilistic & LM & \href{https://github.com/stanfordnlp/GloVe}{https://github.com/.../GloVe} \\ \hline
2016 & IJCAI & Shared LSTM~ & LSTM & Probabilistic & LM & \href{https://github.com/tensorflow/models/tree/master/research/adversarial\_text}{https://github.com/.../adversarial\_text} \\ \hline
2017 & TACL & FastText~ & Word2Vec & Probabilistic & - & \href{https://github.com/facebookresearch/fastText}{https://github.com/.../fastText} \\ \hline
2017 & NeurIPS & CoVe~ & LSTM+Seq2Seq & Probabilistic & - & 
\\href{https://github.com/salesforce/cove}{https://github.com/.../cove} \\\\ \\hline\n2018 & NAACL-HLT & ELMO~ & LSTM & Contextual & LM & \\href{https://allennlp.org/elmo}{https://allennlp.org/elmo} \\\\ \\hline\n2018 & NAACL-HLT & BERT~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/google-research/bert}{https://github.com/.../bert} \\\\ \\hline\n2018 & & OpenAI GPT~ & Transformer Decoder & Autoregressive & LM & \\href{https://github.com/openai/finetune-transformer-lm}{https://github.com/...transformer-lm} \\\\ \\hline\n2019 & ACL & ERNIE(THU) & Transformer Encoder & Contextual & MLM & \\href{https://github.com/PaddlePaddle/ERNIE}{https://github.com/.../ERNIE} \\\\ \\hline\n2019 & ACL & Transformer-XL~ & Transformer-XL & Contextual & - & \\href{https://github.com/kimiyoung/transformer-xl}{https://github.com/.../transformer-xl} \\\\ \\hline\n2019 & ICLR & InfoWord~ & Transformer Encoder & Contextual & MLM & - \\\\ \\hline\n2019 & ICLR & StructBERT~ & Transformer Encoder & Contextual & MLM & - \\\\ \\hline\n2019 & ICLR & ALBERT ~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/google-research/ALBERT}{https://github.com/.../ALBERT} \\\\ \\hline\n2019 & ICLR & WKLM~ & Transformer Encoder & Contextual & MLM & - \\\\ \\hline\n2019 & ICML & MASS~ & Transformer & Contextual & MLM(Seq2Seq) & \\href{https://github.com/microsoft/MASS}{https://github.com/.../MASS} \\\\ \\hline\n2019 & EMNLP-IJCNLP & KnowBERT~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/allenai/kb}{https://github.com/.../kb} \\\\ \\hline\n2019 & EMNLP-IJCNLP & Unicoder~ & Transformer Encoder & Contextual & MLM+TLM & - \\\\ \\hline\n2019 & EMNLP-IJCNLP & MultiFit~ & QRNN & Probabilistic & LM & \\href{https://github.com/n-waves/multifit}{https://github.com/.../multifit} \\\\ \\hline\n2019 & EMNLP-IJCNLP & SciBERT~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/allenai/scibert}{https://github.com/.../scibert} \\\\ 
\\hline\n2019 & EMNLP-IJCNLP & BERT-PKD~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/intersun/PKD-for-BERT-Model-Compression}{https://github.com/...Compression} \\\\ \\hline\n2019 & NeurIPS & Xlnet~ & Transformer-XL Encoder & Permutation & PLM & \\href{https://github.com/zihangdai/xlnet}{https://github.com/.../xlnet} \\\\ \\hline\n2019 & NeurIPS & UNILM~ & LSTM + Transformer & Contextual & LM + MLM & \\href{https://github.com/microsoft/unilm}{https://github.com/.../unilm} \\\\ \\hline\n2019 & NeurIPS & XLM~ & Transformer Encoder & Contextual & MLM+CLM+TLM & \\href{https://github.com/facebookresearch/XLM}{https://github.com/.../XLM} \\\\ \\hline\n2019 & OpenAI Blog & GPT-2~ & Transformer Decoder & Autoregressive & LM & \\href{https://github.com/openai/gpt-2}{https://github.com/.../gpt-2} \\\\ \\hline\n2019 & arXiv & RoBERTa~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/pytorch/fairseq}{https://github.com/.../fairseq} \\\\ \\hline\n2019 & arXiv & ERNIE(Baidu)~ & Transformer Encoder & Contextual & MLM+DLM & \\href{https://github.com/PaddlePaddle/ERNIE}{https://github.com/.../ERNIE} \\\\ \\hline\n2019 & EMC2@NeurIPS & Q8BERT~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/IntelLabs/nlp-architect/blob/master/nlp\\_architect/models/transformers/quantized\\_bert.py}{https://github.com/.../quantized\\_bert.py} \\\\ \\hline\n2019 & arXiv & DistilBERT~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/huggingface/transformers/tree/master/examples/research\\_projects/distillation}{https://github.com/.../distillation} \\\\ \\hline\n2020 & ACL & fastBERT~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/autoliuweijie/FastBERT}{https://github.com/.../FastBERT} \\\\ \\hline\n2020 & ACL & SpanBERT~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/facebookresearch/SpanBERT}{https://github.com/.../SpanBERT} \\\\ \\hline\n2020 & ACL & BART~ & Transformer & 
En: Contextual & DAE & \\href{https://github.com/huggingface/transformers}{https://github.com/.../transformers} \\\\\n & & & & De: Autoregressive & & \\\\ \\hline\n2020 & ACL & CamemBERT~ & Transformer Encoder & Contextual & MLM(WWM) & \\href{https://camembert-model.fr}{https://camembert-model.fr} \\\\ \\hline\n2020 & ACL & XLM-R~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/facebookresearch/XLM}{https://github.com/.../XLM} \\\\ \\hline\n2020 & ICLR & Reformer~ & Reformer & Permutation & - & \\href{https://github.com/google/trax/tree/master/trax/models/reformer}{https://github.com/.../reformer} \\\\ \\hline\n2020 & ICLR & ELECTRA~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/google-research/electra}{https://github.com/.../electra} \\\\ \\hline\n2020 & AAAI & Q-BERT~ & Transformer Encoder & Contextual & MLM & - \\\\ \\hline\n2020 & AAAI & XNLG~ & Transformer & Contextual & MLM+DAE & \\href{https://github.com/CZWin32768/xnlg}{https://github.com/.../xnlg} \\\\ \\hline\n2020 & AAAI & K-BERT~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/autoliuweijie/K-BERT}{https://github.com/.../K-BERT} \\\\ \\hline\n2020 & AAAI & ERNIE 2.0~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/PaddlePaddle/ERNIE}{https://github.com/.../ERNIE} \\\\ \\hline\n2020 & NeurIPS & GPT-3~ & Transformer Decoder & Autoregressive & LM & \\href{https://github.com/openai/gpt-3}{https://github.com/.../gpt-3} \\\\ \\hline\n2020 & NeurIPS & MPNet~ & Transformer Encoder & Permutation & MLM+PLM & \\href{https://github.com/microsoft/MPNet}{https://github.com/.../MPNet} \\\\ \\hline\n2020 & NeurIPS & ConvBERT~ & Mixed Attention & Contextual & - & \\href{https://github.com/yitu-opensource/ConvBert}{https://github.com/.../ConvBert} \\\\ \\hline\n2020 & NeurIPS & MiniLM~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/microsoft/unilm/tree/master/minilm}{https://github.com/.../minilm} \\\\ 
\\hline\n2020 & TACL & mBART~ & Transformer & Contextual & DAE & \\href{https://github.com/pytorch/fairseq/tree/master/examples/mbart}{https://github.com/.../mbart} \\\\ \\hline\n2020 & COLING & CoLAKE~ & Transformer Encoder & Contextual & MLM+KE & \\href{https://github.com/txsun1997/CoLAKE}{https://github.com/.../CoLAKE} \\\\ \\hline\n2020 & LREC & FlauBERT~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/getalp/Flaubert}{https://github.com/.../Flaubert} \\\\ \\hline\n2020 & EMNLP & GLM~ & Transformer Encoder & Contextual & MLM+KG & \\href{https://github.com/THUDM/GLM}{https://github.com/.../GLM} \\\\ \\hline\n2020 & EMNLP (Findings) & TinyBERT~ & Transformer & Contextual & MLM & \\href{https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TinyBERT}{https://github.com/.../TinyBERT} \\\\ \\hline\n2020 & EMNLP (Findings) & RobBERT~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/iPieter/RobBERT}{https://github.com/.../RobBERT} \\\\ \\hline\n2020 & EMNLP (Findings) & ZEN~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/sinovation/ZEN}{https://github.com/.../ZEN} \\\\ \\hline\n2020 & EMNLP (Findings) & BERT-MK~ & KG-Transformer Encoder & Contextual & MLM & - \\\\ \\hline\n2020 & RepL4NLP@ACL & CompressingBERT~ & Transformer Encoder & Contextual & MLM(Pruning) & \\href{https://github.com/mitchellgordon95/bert-prune}{https://github.com/.../bert-prune} \\\\ \\hline\n2020 & JMLR & T5~ & Transformer & Contextual & MLM(Seq2Seq) & \\href{https://github.com/google-research/text-to-text-transfer-transformer}{https://github.com/...transformer} \\\\ \\hline\n2021 & T-ASL & BERT-wwm-Chinese~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/ymcui/Chinese-BERT-wwm}{https://github.com/...BERT-wwm} \\\\ \\hline\n2021 & EACL & PET~ & Transformer Encoder & Contextual & MLM & \\href{https://github.com/timoschick/pet}{https://github.com/.../pet} \\\\ \\hline\n2021 & TACL & KEPLER~ & 
Transformer Encoder & Contextual & MLM+KE & \\href{https://github.com/THU-KEG/KEPLER}{https://github.com/.../KEPLER} \\\\ \\hline\n2021 & EMNLP & SimCSE~ & Transformer Encoder & Contextual & MLM+KE & \\href{https://github.com/princeton-nlp/SimCSE}{https://github.com/.../SimCSE} \\\\ \\hline\n2021 & ICML & GLaM~ & Transformer & Autoregressive & LM & - \\\\ \\hline\n2021 & arXiv & XLM-E~ & Transformer & Contextual & MLM & \\\\ \\hline\n2021 & arXiv & T0~ & Transformer & Contextual & MLM & \\href{https://github.com/bigscience-workshop/t-zero}{https://github.com/.../T0} \\\\ \\hline\n2021 & arXiv & Gopher~ & Transformer & Autoregressive & LM & - \\\\ \\hline\n2022 & arXiv & MT-NLG~ & Transformer & Contextual & MLM & - \\\\ \\hline\n2022 & arXiv & LaMDA~ & Transformer Decoder & Autoregressive & LM & \\href{https://github.com/conceptofmind/LaMDA-rlhf-pytorch}{https://github.com/.../LaMDA} \\\\ \\hline\n2022 & arXiv & Chinchilla~ & Transformer & Autoregressive & LM & - \\\\ \\hline\n2022 & arXiv & PaLM~ & Transformer & Autoregressive & LM & \\href{https://github.com/lucidrains/PaLM-pytorch}{https://github.com/.../PaLM} \\\\ \\hline\n2022 & arXiv & OPT~ & Transformer Decoder & Autoregressive & LM & \\href{https://github.com/facebookresearch/metaseq}{https://github.com/.../MetaSeq} \\\\ \\hline\n\\end{tabular}\n}\n\\end{table*}", "id": "3c5d0aed-29fb-46fe-b5ff-9e1e66b4e827", "level": "paragraph", "origin_cites_number": 67, "parent_id": "d5594cf1-ec15-4e99-82d4-c0aff6c92c8b", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Natural Language Processing" ], [ "subsection", "Boosting Methods" ], [ "paragraph", "Boosting on Model Performance" ], [ "paragraph", "Examples: ChatGPT and Bard" ] ], "subsections": [], "title": "Examples: ChatGPT and Bard" }, { "cite_extract_rate": 1, "cites": [ 1578 ], "content": "Instruction-aligning methods aim to let the LM follow human intents and 
generate meaningful outputs. The general approach is fine-tuning the pretrained LM with a high-quality corpus in a supervised manner. To further improve the usefulness and harmlessness of LMs, some works introduce RL into the fine-tuning procedure so that LMs can revise their responses according to human or AI feedback. Both supervised and RL approaches can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision-making.
SFT is a well-established technique to unlock knowledge and apply it to specific real-world, even unseen, tasks. The template for SFT is composed of input-output pairs and an instruction . For example, given the instruction ``Translate this sentence to Spanish:'' and an input ``The new office building was built in less than three months.'', we want the LM to generate the target ``El nuevo edificio de oficinas se construyó en tres meses.''. The template is commonly human-made, including unnatural instructions and natural instructions , or bootstrapped from a seed corpus . Ethical and social risks of harm from LMs are significant concerns in SFT . LaMDA, the largest LM to date, thus relies on crowdworker-annotated data to provide a safety assessment of any generated LaMDA response in three conversation categories: natural, sensitive, and adversarial. 
The list of rules supports further safety fine-tuning and evaluation.
RL has been applied to enhance various models in NLP tasks such as machine translation , summarization , dialogue generation , image captioning , question generation , text-games , and more . RL is a helpful method for optimizing non-differentiable objectives in language generation tasks by treating them as sequential decision-making problems. However, there is a risk of overfitting to metrics that use neural networks, leading to nonsensical samples that score well on the metrics . RL is also used to align LMs with human preferences . 
InstructGPT proposes to fine-tune large models with PPO against a trained reward model to align LMs with human preference , which is the same method applied by ChatGPT, named RLHF. Specifically, the reward model is trained with comparison data from human labelers' manual rankings of outputs. For each output, the reward model or machine labeler computes a reward, which is used to update the LM using PPO. More details are illustrated in Fig. \ref{fig:chatgpt_RL}. 
One of the recent breakthroughs in PFM technology is GPT-4~, which follows a pretraining approach to predict the next token in a document and then undergoes RLHF fine-tuning.
\nAs the task complexity increases, GPT-4 outperforms GPT-3.5 in terms of reliability, creativity, and capability to handle more nuanced instructions.\nSparrow , developed by DeepMind, also utilizes RLHF that reduces the risk of unsafe and inappropriate answers. Despite some promising results using RLHF by incorporating fluency, progress in this field is impeded by a lack of publicly available benchmarks and implementation resources, resulting in a perception that RL is a difficult approach for NLP. Therefore, an open-source library named RL4LMs~ is introduced recently, which consists of building blocks for fine-tuning and evaluating RL algorithms on LM-based generation. \nBesides human feedback, one of the latest dialogue agents -- Claude favors Constitutional AI where the reward model is learned via RL from AI Feedback (RLAIF). Both the critiques and the AI feedback are steered by a small set of principles drawn from a ‘constitution’, the specification of a short list of principles or instructions, which is the only thing provided by humans in Claude. 
The AI feedback focuses on controlling the outputs to be less harmful by explaining its objections to dangerous queries.
Chain-of-thought (CoT) prompting is a technique for improving the reasoning ability of LLMs by prompting them to generate a series of intermediate steps that lead to the final answer of a multi-step problem. The CoT is a series of intermediate reasoning steps, which can significantly improve the ability of LLMs to perform complex reasoning . Besides, fine-tuning with CoT has proved slightly more harmless than fine-tuning without it . CoT prompting is an emergent property of model scale, meaning it works better with larger and more powerful language models. It is also possible to fine-tune models on CoT reasoning datasets to enhance this capability further and stimulate better interpretability.
In a CoT prompting experiment, a prompt is provided to the model that outlines a multi-step problem. 
The prompt might pose a question such as ``After selling 30 out of his 100 chickens and 10 out of his 20 pigs, how many animals does a farmer have left?'' The model then generates a sequence of intermediate reasoning steps, for example, ``The farmer has 100-30=70 chickens remaining'' and ``The farmer has 20-10=10 pigs remaining,'' before generating the final answer, such as ``The farmer has 70+10=80 animals remaining.'' CoT prompting has demonstrated its efficacy in improving the performance of LLMs on various reasoning tasks, such as arithmetic, symbolic reasoning, and common sense. It is a promising technique that can enhance the ability of language models to reason about complicated problems.
The neural probabilistic LM uses a neural network to estimate the parameters of the probabilistic LM, which reduces the number of model parameters while enlarging the context window. With the help of a neural network, the LM no longer needs ever-improved smoothing algorithms to alleviate the performance bottleneck.
Since the training target is unsupervised, a corpus with a large amount of data is sufficient for training.
The negative sampling technique used in training provides a new idea for follow-up study of the target task in LMs.
Furthermore, the neural probabilistic LM promotes the further development of downstream task research because of its good representation capability and training efficiency.
After pretrained LMs, especially BERT, were proposed, research in language modeling entered a new phase. 
The bidirectional LM, the masked LM, and the permuted LM have successfully modeled the grammatical and semantic information in natural language at a deeper level.
ChatGPT is another milestone work in PFMs using RL. 
The representation ability of PFMs is qualitatively better than that of the neural probabilistic LM. It even exceeds that of humans in some tasks.
\label{Section 4}
With the popularity of PFMs in NLP, researchers have been motivated to start exploring PFMs in CV.
The term ``pretraining'' has not been clearly defined within the realm of deep learning research in CV. The word was first used for convolution-based networks: adjusting the parameters on a more general dataset such as ImageNet lets other tasks start from a warm initialization and thus converge faster.
\nIn contrast to early CNN-based transfer learning techniques that rely on pretrained datasets with supervised signals, our examination of PFM centers on SSL which utilizes human-designed labels, such as Jigsaw puzzles, or the comparison of different patches from images as pretext tasks. This allows for learned representations to be generalized to various downstream tasks, including classification, detection, recognition, segmentation, etc.\nHowever, it is costly to rely on data annotations when the learning tasks become more complicated, making the labeling process more arduous and time-consuming than the actual learning. This is where SSL is urgently needed and how it can further fuel the progress of deep learning methods. To reduce the dependency on data labeling, unlabeled data are trained with self-supervision by matching, contrasting, or generating in SSL.\n\\begin{figure*}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures/pipline_zip.pdf}\n \\caption{The general pipeline for SSL. The top part represents the pretraining, and the bottom stream obtains transferred parameters from above to learn downstream supervised tasks.}\n \\label{fig:pipline}\n\\end{figure*}\nThe general pipeline of SSL is shown in Fig. \\ref{fig:pipline}. During the pretraining stage, a pretext task is designed for the encoder networks to solve. The artificial labels for this pretext task are automatically generated based on specific attributes of the data, such as image patches from the same origin being labeled as ``positive'' and those from different origins as ``negative''. Then, the encoder networks are trained to solve the pretext task by supervised learning methods. Since shallow layers extract fine-grained details such as edges, angles, and textures, while deeper layers capture task-related high-level features such as semantic information or image contents, learned encoders on pretext tasks can be transferred to downstream supervised tasks. 
During this stage, the parameters of the backbone are fixed, and only a simple classifier, such as a two-layer Multi-Layer Perceptron (MLP), needs to be learned. Considering the limited workload in the downstream training stage, this learning process is commonly referred to as fine-tuning. In summary, the representations learned during the pretraining stage in SSL can be reused on other downstream tasks and achieve comparable results. \nIn this section, we introduce different tasks for pretraining PFMs in CV. The PFMs can be trained by specific pretext tasks, frame order, generation, reconstruction, memory bank, sharing, clustering and so on. We summarize the PFMs proposed in CV in \\textbf{Table~\\ref{tab:pretraining model for image}}.", "id": "81242bcb-042c-4bf0-9ebe-f6cb44a94843", "level": "section", "origin_cites_number": 0, "parent_id": "160c6790-e92e-4d72-a546-5d2f900ac104", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Computer Vision" ] ], "subsections": [ "bbf1343d-17e2-4b47-9679-bde2c480d2f0", "6505e098-bf5b-4afb-a19e-a38f0c9a03f2", "3bd63707-4357-48a2-914a-922ccb66761b", "5285bdeb-057a-4fb0-96bd-eb8d91f710bc", "50ed8c6b-4ab0-4ae5-a530-8e92f1907e7b", "d93ecf7c-0026-4dc5-bb46-2d93ada041fa", "53fe15b6-6791-4528-8285-03f6b110d6b2", "adc375fc-4527-40bd-812c-bcf17f8b5e4f" ], "title": "PFMs for Computer Vision" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 134, 2504, 126, 2507, 2502, 2506, 131, 2505, 1254, 2503 ], "content": "In the early stage of unsupervised learning, the network is trained by designing a special pretext task and predicting the answer to this task. Dosovitskiy et al.~ pretrain the Exemplar CNN to discriminate the different patches from the unlabelled data. The experiments prove the designs can learn useful representations transferred to the standard recognition assignments. 
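As a toy illustration of the frozen-backbone fine-tuning described above, the sketch below stands in for a pretrained encoder with a fixed random linear map and trains only a linear head on top of it. The dimensions, data, and the encoder itself are illustrative placeholders, not any particular PFM.

```python
import math
import random

random.seed(0)
DIM_IN, DIM_FEAT, NUM_CLASSES = 8, 4, 2

# Frozen "pretrained" backbone: a fixed random linear map (illustrative stand-in).
W_enc = [[random.gauss(0, 1) for _ in range(DIM_IN)] for _ in range(DIM_FEAT)]

def encode(x):
    """Backbone forward pass; W_enc is never updated downstream."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W_enc]

# Trainable head: a linear classifier over the frozen features.
W_head = [[0.0] * DIM_FEAT for _ in range(NUM_CLASSES)]

def train_step(x, label, lr=0.1):
    """One SGD step with softmax cross-entropy; only W_head changes."""
    feat = encode(x)
    logits = [sum(w * f for w, f in zip(row, feat)) for row in W_head]
    m = max(logits)
    exps = [math.exp(o - m) for o in logits]
    probs = [e / sum(exps) for e in exps]
    for c in range(NUM_CLASSES):
        grad = probs[c] - (1.0 if c == label else 0.0)
        for d in range(DIM_FEAT):
            W_head[c][d] -= lr * grad * feat[d]

frozen_before = [row[:] for row in W_enc]
data = [([1.0] * DIM_IN, 0), ([-1.0] * DIM_IN, 1)]
for _ in range(20):
    for x, y in data:
        train_step(x, y)
assert W_enc == frozen_before  # the backbone really stayed frozen
```

Only the head parameters change during the loop; in real pipelines the same pattern is typically expressed by marking the backbone parameters as not requiring gradients.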
In the method based on context prediction~, a handcrafted supervised signal about patch position serves as the label for the pair classification. 
Inpainting~ aims to pretrain models by predicting the missing center part of an image.
Because inpainting is a semantics-based prediction, a decoder is attached to the context encoder in this manner. Furthermore, the standard pixel-by-pixel reconstruction process of the decoder can be transferred to any other downstream inpainting task.
Colorization~ is a method that evaluates how colorization as a pretext task can help to learn semantic representations for downstream tasks. It is also known as \emph{cross-channel encoding}, since different image channels serve as input and the output is discriminated. Similarly, the Split-Brain Autoencoder~ also learns representations in a self-supervised way by forcing the network to solve cross-channel prediction tasks.
Jigsaw~ is proposed to pretrain the designed Context-Free Network (CFN) in a self-supervised manner by first designing the Jigsaw puzzle as a pretext task. Completing Damaged Jigsaw Puzzles (CDJP)~ learns image representations by further complicating the pretext task: each puzzle is missing one piece, and the remaining pieces contain incomplete color information. Following the idea of designing efficient and effective pretext tasks, Noroozi et al.~ use counting visual primitives as a special pretext task and outperform previous SOTA models on regular benchmarks.
NAT~ learns representations by aligning the output of the backbone CNN to low-dimensional noise vectors.
RotNet~ is designed to predict the rotations applied to images.
\begin{figure*}[!htp]
 \centering
 \includegraphics[width=0.9\linewidth]{figures/CPC_zip.pdf}
 \caption{Contrastive Predictive Coding~.
The input sequence can represent both images and videos.}
 \label{fig:cpc}
\end{figure*}
The learning of sequence data such as videos always involves frame processing through time steps. This problem often connects with solving pretext tasks that help to learn visual temporal representations. Contrastive Predictive Coding (CPC)~ is the first model to learn data representations by predicting the future in latent space. This model can be fed with data in any modality, such as speech, images, and text.
The components of CPC are shown in Fig. \ref{fig:cpc} from~, where $x_t$ represents the input sequence of observations, $z_t$ is the sequence of latent representations after the encoder $g_{enc}$, and $c_t$ is a context latent representation that summarizes the whole latent sequence $z_{\le t}$ after an autoregressive model $g_{ar}$. Unlike traditional models that predict future frames $x_{t+k}$ with a generative model $p_k(x_{t+k}|c_t)$, CPC models a ``density ratio'' $f_k$ that preserves the mutual information between the context latent representation $c_t$ and the future frame $x_{t+k}$:
\begin{equation}
 f_k(x_{t+k},c_t)\propto \frac{p(x_{t+k}|c_t)}{p(x_{t+k})}.
\end{equation}
After the encoding of the recurrent neural network, $z_t$ and $c_t$ can both be chosen for the downstream tasks as needed.
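As a toy numerical reading of the density ratio above: in CPC, $f_k$ is typically parameterized as a log-bilinear score, $f_k(x_{t+k},c_t)=\exp(z_{t+k}^{\mathrm{T}} W_k c_t)$. The sketch below uses an identity $W_k$ and hand-picked 2-D vectors (all illustrative) to show that a future embedding aligned with the context scores higher than misaligned negatives.

```python
import math

def density_ratio(z_future, c_context):
    """Log-bilinear score exp(z^T W_k c) with W_k = identity (illustrative)."""
    return math.exp(sum(z * c for z, c in zip(z_future, c_context)))

c_t = [1.0, 0.0]                     # context representation
z_pos = [0.9, 0.1]                   # true future embedding, aligned with context
z_negs = [[-0.9, 0.2], [0.0, 1.0]]   # embeddings drawn from other sequences

scores = [density_ratio(z_pos, c_t)] + [density_ratio(z, c_t) for z in z_negs]
# The aligned (positive) future obtains the largest density-ratio score.
assert scores[0] == max(scores)

# InfoNCE-style objective: -log of the positive score over all scores.
loss = -math.log(scores[0] / sum(scores))
print(round(loss, 3))
```

Optimizing such a loss pushes the positive score up relative to the negatives, which is exactly how the density ratio gets estimated without a generative model.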
The encoder and the autoregressive model are trained by the InfoNCE loss~ as follows:
\begin{equation}
 \mathcal{L}=-\mathbbm{E}_X\left[\log \frac{f_k(x_{t+k},c_t)}{\sum\nolimits_{x_j\in X}f_k(x_j,c_t)}\right],
\end{equation}
where $X$ denotes the training set containing both positive and negative samples. The density ratio $f_k$ can be estimated by optimizing $\mathcal{L}$.
CPC v2 revisits and improves CPC~ by scaling up unsupervised pretraining, and its general representations transfer well to data-efficient downstream tasks.
\label{sec:Learning by Generation}
Although many applications have become popular with the development of GAN-based approaches, the representation abilities inside GANs are not fully exploited due to the absence of a feature encoder. 
Thus, Bidirectional Generative Adversarial Networks (BiGANs)~ are proposed to project data back into the latent space, yielding feature representations useful for auxiliary supervised discrimination tasks. 
Based on BiGANs, BigBiGAN~ first achieves SOTA in unsupervised representation learning on ImageNet by adding an encoder and modifying the discriminator. As shown in Fig.
\ref{fig:bigbigan} from~, the encoder $\mathcal{E}$ and generator $\mathcal{G}$ are used to produce data-latent pairs, denoted as $(\textbf{x}\sim P_{\textbf{x}},\hat{\textbf{z}}\sim\mathcal{E}(\textbf{x}))$ and $(\hat{\textbf{x}}\sim\mathcal{G}(\textbf{z}),\textbf{z}\sim P_{\textbf{z}})$. The final loss $\ell$ is defined as the sum of the data-specific terms $s_{\textbf{x}}$, $s_{\textbf{z}}$ and the joint term $s_{\textbf{xz}}$. The introduced discriminator $\mathcal{D}$ (as in Adversarially Learned Inference (ALI)~ or BiGAN~) learns to discriminate among pairs built from the raw data, the latent distribution, and the encoded vectors. 
\begin{figure*}[t]
 \centering
 \includegraphics[width=0.8\linewidth]{figures/BigBiGAN_zip.pdf}
 \caption{The structure of the BigBiGAN framework~.}
 \label{fig:bigbigan}
\end{figure*}
In the first stage, the original image is split into patches and encoded into discrete tokens; this differs from BERT, since image patches do not have off-the-shelf tokens the way words do in NLP. In the second stage, the BEiT encoder takes a corrupted image containing unmasked and masked patches, and the visual tokens of the masked patches are predicted to match the corresponding visual tokens from the fixed tokenizer. Despite its success, the separation between masked prediction and autoencoder training means the whole framework is not end-to-end, which hinders learning effectiveness and efficiency. 
\begin{figure*}
 \centering
 \includegraphics[width=0.8\linewidth]{figures/memorybank_zip.pdf}
 \caption{The general pipeline for the Memory Bank Method~.}
 \label{fig:memorybank}
\end{figure*}
To mitigate this issue, MAE proposes a simple end-to-end solution that predicts the masked patches directly from the unmasked ones with the Mean Squared Error (MSE) loss. It is worth noting that MAE uses a masking ratio of 75\%, which is significantly higher than that of BERT (typically 15\%). An ablation study suggests that higher masking ratios are beneficial for both fine-tuning and linear probing. Concurrently, SimMIM proposes an autoencoder-based solution similar to MAE, which also confirms that a higher masking ratio and a random masking strategy help improve performance. The major difference is how they partition the responsibility of representation encoding and pretext prediction in the autoencoder. Since the decoder of SimMIM is simple, the encoder of SimMIM synchronously performs both of them. On the contrary, the encoder in MAE solely undertakes the role of representation encoding, and the decoder is responsible for pretext prediction.
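The high-ratio random masking at the heart of MAE can be sketched as follows. The 14x14 patch grid and the 75% ratio follow the common ViT/MAE setup, but the code is only an illustrative index-shuffling sketch, not MAE's implementation.

```python
import random

def random_mask(num_patches, mask_ratio=0.75, seed=0):
    """Shuffle patch indices and mask the first mask_ratio fraction."""
    rng = random.Random(seed)
    indices = list(range(num_patches))
    rng.shuffle(indices)
    num_masked = int(num_patches * mask_ratio)
    masked = sorted(indices[:num_masked])
    visible = sorted(indices[num_masked:])
    return visible, masked

# 224x224 image with 16x16 patches -> 14x14 = 196 patches.
visible, masked = random_mask(196, mask_ratio=0.75)
print(len(visible), len(masked))  # 49 147
```

Only the 49 visible patches would be fed to the encoder, which is precisely where MAE's efficiency gain over dense encoding comes from.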
Recently, Meta AI announced the Segment Anything Model (SAM)~, which lets users specify via prompts what to segment in an image, allowing a wide range of segmentation tasks without the need for additional training. SAM employs an MAE-pretrained ViT-H~ image encoder that runs once per image and produces an image embedding, as well as a prompt encoder that embeds input prompts such as clicks or boxes. Following that, a lightweight transformer-based mask decoder predicts object masks from the image and prompt embeddings. The results show that SAM can generate high-quality masks from a single foreground point that are typically only modestly inferior to the manually annotated ground truth. It routinely achieves strong quantitative and qualitative outcomes on a wide range of downstream tasks using a zero-shot transfer approach and prompt engineering.
Leveraging ViT in MAE poses a serious inefficiency issue: decreasing the patch size results in a quadratic increase in computing resources. To address the problem, there are two important directions: (1) hierarchical ViT and (2) local attention. In the first direction, the hierarchical ViT (hViT) was introduced, which utilizes a shrinking pyramid structure and techniques like shifted windows to reduce computational demands. Unfortunately, hViT cannot be directly applied to enable MAE pretraining, because the local window attention used in hViT makes it difficult to handle randomly masked patches as in MAE. Recently, Uniform Masking MAE (UM-MAE) was proposed to empower MAE with hViTs; it introduces a two-stage pipeline: sampling and masking. It starts by randomly sampling a portion of patches (25\% reported in the paper) from each block, and then masks additional patches on top of the sampled ones. 
The first step helps to maintain common elements across different local windows, while the second step prevents shortcuts for pixel reconstruction from nearby low-level features, making the task more difficult.
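The two-stage UM-MAE pipeline described above can be sketched as below: stage one keeps one random patch per 2x2 block (the uniform sampling of 25\% of patches), and stage two masks an additional fraction of the kept patches. The secondary ratio here is illustrative, not the paper's exact setting.

```python
import random

def uniform_masking(grid=14, block=2, secondary_ratio=0.25, seed=0):
    """UM-MAE-style masking on a grid x grid patch layout (sketch).
    Stage 1: sample one patch per block x block cell (uniform sampling).
    Stage 2: additionally mask a fraction of the sampled patches."""
    rng = random.Random(seed)
    sampled = []
    for by in range(0, grid, block):
        for bx in range(0, grid, block):
            cell = [(by + dy, bx + dx) for dy in range(block) for dx in range(block)]
            sampled.append(rng.choice(cell))  # stage 1: one patch per cell
    rng.shuffle(sampled)
    num_extra = int(len(sampled) * secondary_ratio)
    visible = sampled[num_extra:]             # stage 2: mask some sampled patches
    return visible, sampled

visible, sampled = uniform_masking()
print(len(sampled), len(visible))  # 49 37
```

Because every local window is guaranteed to contain some sampled patches, the window attention of an hViT can still operate on the visible set.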
Another direction to improve efficiency focuses on reducing the input size by restricting the network's attention to small local windows of the image. Motivated by the observation that local knowledge is sufficient for reconstructing masked patches, Local masked reconstruction (LoMaR) was proposed. Rather than using the entire image for mask reconstruction, LoMaR samples a number of small windows and focuses attention on local regions, which outperforms MAE on downstream tasks in terms of learning efficiency.
Non-Parametric Instance Discrimination (NPID)~ is the first method that uses instance-level discrimination to learn representations for downstream tasks. The detailed pipeline is shown in Fig. \ref{fig:memorybank}. The feature representations are stored in the memory bank for the convenience of computation, because the instance-level classification objective needs all images in the training dataset. For any image $x$ with feature representation $\textbf{v}=f_\theta(x)$, its probability of being recognized as the $i$-th example is:
\begin{equation}
 P(i|\textbf{v}) = \frac{\exp(\textbf{v}_i^{\mathrm{T}}\textbf{v}/\tau)}{\sum\nolimits_{j=1}^n \exp(\textbf{v}_j^{\mathrm{T}}\textbf{v}/\tau)},
\end{equation}
where $\textbf{v}_i$ (or $\textbf{v}_j$) is the representation of the $i$-th (or $j$-th) sample, serving as a substitute for a parametric class prototype (i.e., the weights of a classifier).
Addtionally, $\\tau$ is the temperature parameter borrowed from the knowledege distillation~. \nLocal Aggregation (LA)~ is another method that trains a CNN encoder to embed raw images into a lower dimension space -- embedding space.\nWhen a metric of local aggregation is maximized, similar data instances move together in the embedding space while dissimilar instances move apart.\nBased on NPID, Pretext Invariant Representation Learning (PIRL, pronounced as ``pearl'')~~ is proposed to argue that semantic representations are invariant under pretext transformation tasks. Suppose the original view and transformed view of images are denoted as $I$ and $I^{t}$, respectively. These sample views are fed into a CNN encoder, and the total empirical loss on the training dataset $\\mathcal{D}$ can be defined as:\n\\begin{equation}\n \\mathcal{L}_{total}(\\theta;\\mathcal{D})=\\mathbbm{E}_{t\\sim\\mathcal{T}}\\left[\\frac{1}{|\\mathcal{D}|}\\sum\\nolimits_{\\boldsymbol{I}\\in\\mathcal{D}}\\mathcal{L}(\\boldsymbol{V_I},\\boldsymbol{V}_{\\boldsymbol{I}^{t}})\\right],\n\\end{equation}\nwhere $\\mathcal{T}$ denotes the different transformations of images. \nThe loss encourages the representation of image $\\boldsymbol{I}$ to be similar to that of $\\boldsymbol{I}^t$, and the representation of $\\boldsymbol{I}^t$ to be dissimilar to that of different images $\\boldsymbol{I}'$, as shown in the dotted box of Fig.~\\ref{fig:two-stream}. \nTherefore, more negative sample pairs contribute to improving the scalability of the gradient and lead to the final learned encoder with stronger representation ability.\nThat is the reason why the memory bank is introduced to store more previous representations for subsequent comparison. 
\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.85\\linewidth]{figures/sum_cropped.pdf}\n \\caption{Summary of all two-stream models, including contrastive learning and memory-bank-based methods.}\n \\label{fig:two-stream}\n\\end{figure*}", "id": "50ed8c6b-4ab0-4ae5-a530-8e92f1907e7b", "level": "subsection", "origin_cites_number": 4, "parent_id": "81242bcb-042c-4bf0-9ebe-f6cb44a94843", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Computer Vision" ], [ "subsection", "Learning by Memory Bank" ] ], "subsections": [], "title": "Learning by Memory Bank" }, { "cite_extract_rate": 0, "cites": [], "content": "SSL prefers using two encoder networks for the different data augmentation, and then pretrains the parameters by maximizing the distance between negative pairs or minimizing the distance between positive pairs. Fig.~\\ref{fig:two-stream} shows the two-stream models for all contrastive learning frameworks. The transformation $t$ on the orginal input image $\\boldsymbol{I}$ generates the view $v$, similarly, its counterpart ${t}'$ generates ${v}'$. In general, two different or same encoders $f_\\theta$ and $f'_\\xi$ are used to extract contrastive representations. The subsequent MLP heads $g_\\theta$ and $g'_\\xi$ are used to learn more combinations that are beneficial to the contrastive loss. It is noticed that MLP and memory bank could be removed or preserved under different settings. 
In terms of the shared encoder, SSL can be divided into two categories: 1) Soft Sharing, where the two encoders have similar but different parameters ($f_\theta \neq f'_\xi$); 
2) Hard Sharing, where the two encoders maintain the same architecture and parameters ($f_\theta = f'_\xi$).
Facebook AI Research (FAIR) presents Momentum Contrast (MoCo)~, which uses momentum to control the slight difference between the two encoders. As shown in Fig. \ref{fig:moco}, one encoder serves a dictionary look-up task and generates a queue of encoded data samples $\{k_0, k_1, \cdots\}$.
The other encoder generates encoded queries $\{q_0, q_1, \cdots\}$ and is updated with each training batch. 
The similarity is measured by the dot product between the newly encoded query $q$ and the encoded keys stored in the dictionary queue. Suppose there are $K$ keys stored in the queue before a new key arrives; these $K$ keys are treated as negative samples for the query of the new key. To combine the contrastive loss on both negative and positive samples, the InfoNCE loss~ is used for pretraining in MoCo.
The key design in MoCo for soft parameter sharing is called the momentum update. He et al.~ suggest that directly copying the query encoder's parameters to the key encoder (i.e., the momentum encoder) loses the necessary consistency and yields poor results.
The momentum encoder parameter $\theta_k$ is updated as:
\begin{equation}\label{momentum-update}
 \theta_k = m\theta_k + (1-m)\theta_q,
\end{equation}
where the query encoder parameter $\theta_q$ is learned directly from the gradients of incoming instances, and $m\in[0, 1)$ is a hyper-parameter that controls the consistency ($\theta_k$ is more consistent if $m$ is closer to $1$).
Inspired by the design of SimCLR~, in MoCo v2~ the FAIR team introduces an MLP projection head after the encoders and utilizes more data augmentation techniques to improve performance. The further improvements come from two sources: 1) the embedded linear classifier bridges the gap between unsupervised and supervised pretraining representations; 2) more contrastive samples become feasible through both larger training batches and stronger data augmentation.
DeepMind proposes Bootstrap Your Own Latent (BYOL)~, which contains representation, projection, and discrimination stages and achieves a new SOTA without using negative samples. The authors regard discrimination between different views of raw images as a necessary safeguard against collapse during pretraining, but argue that a large number of negative samples is not indispensable for preventing this collapse. As shown in the left part of Fig. \ref{fig:two-stream}, there are two streams in BYOL with different parameters. The online network (top green) updates its parameters by comparing its own prediction with the regression target provided by the target network. The parameters of the target network (bottom red) are then updated as in Eq.~(\ref{momentum-update}), \textit{i.e.}, $\xi\gets\tau\xi+(1-\tau)\theta$, where $\tau$ is the target decay rate controlling the degree of parameter change in the target network. Therefore, the target network can also be understood as a momentum encoder.
Here, $\xi$ in the target network plays the role of $\theta_k$ in the momentum encoder, and $\theta$ in the online network plays the role of $\theta_q$ in the query encoder.
\begin{figure*}[t]
 \centering
 \includegraphics[width=0.75\linewidth]{figures/MoCo_zip.pdf}
 \caption{The general pipeline of MoCo~, which is also a two-stream framework with different parameters.}
 \label{fig:moco}
\end{figure*}
SimCLR~ is proposed by the Brain Team at Google Research and utilizes the hard parameter-sharing architecture. This simple framework can also be summarized in Fig. \ref{fig:two-stream}, in which representations of different views of the same image are learned in the network $f(\cdot)$.
The two branches share the parameters of this base encoder.
Thus, neither a memory bank nor a momentum setting for separate key and query encoders is necessary, which contributes to a simpler backbone architecture and an easier learning strategy.
The loss function that maximizes the similarity between different views of the same image (positive pairs) is defined as
\begin{equation} 
\label{nce-loss}
 \ell_{i,j}=-\log \frac{\exp(\mathrm{sim}(z_i,z_j)/\tau)}{\sum\nolimits_{k=1}^{2N}\mathbbm{1}_{[k\neq i]}\exp(\mathrm{sim}(z_i,z_k)/\tau)},
\end{equation}
where $(i,j)$ is a pair of positive samples, $\tau$ is an introduced hyper-parameter called the temperature parameter~, and $\mathbbm{1}_{[k\neq i]}\in\{0,1\}$ is an indicator function ensuring that the denominator excludes the anchor itself. 
To avoid dependence on a large number of explicit pairwise feature comparisons, Swapping Assignments between multiple Views of the same image (SwAV)~ is proposed as an online algorithm by Inria and FAIR. SwAV introduces clustering to substitute for the direct comparison between pairs, which saves memory thanks to its queue-free architecture. In this method, the clustering prototypes join the computation of the defined loss function. Each prototype is encoded as a concatenation of vectors learned through backpropagation in CNNs. Thus, there is no need for SwAV to compare the encoded representations between different views.
Based on the existing SwAV, a novel model called SElf-supERvised (SEER)~ aims to learn a pretrained encoder from random, unbounded image datasets in the wild. The base network is a RegNetY architecture~ trained with the SwAV SSL method~. This work proves that SSL is not specific to a curated dataset such as ImageNet, and the scalability of the recent RegNet removes the limitations of traditional backbones such as ResNet.
In addition, this method encourages the research community to explore more backbones suitable for universal SSL.
Attracting much attention in recent SSL research, FAIR conducts empirical experiments using the structure of Simple Siamese (SimSiam) networks. This method~ avoids the design of negative sample pairs, large batches (or memory banks), and momentum encoders used in traditional contrastive learning. The two encoders in Fig. \ref{fig:two-stream} with identical parameters, which process two different views $t$ and $t^{\prime}$ of image $x$, are replaced by a single Siamese network. An MLP predictor $g$ is applied to one of the view representations, and the stop-gradient operation is applied to the other.
\begin{figure*}[t]
 \centering
 \includegraphics[width=0.7\linewidth]{figures/clustering_zip.pdf}
 \caption{The key pipeline for the DeepCluster model~.}
 \label{fig:clustering}
\end{figure*}
DeepCluster~ is the first model that adopts a clustering algorithm for large-scale dataset learning.
This method groups the representations into different clusters and uses the cluster assignments as supervised signals to pretrain the parameters of the backbone network. It demonstrates SOTA performance on a wide range of standard transfer tasks used in unsupervised learning.
When it comes to the connection between contrastive learning and clustering, SwAV~ utilizes prototypes that serve as clustering centers to help classify the sample pairs during pretraining, while Prototypical Contrastive Learning (PCL)~ first targets bridging contrastive learning with clustering. Compared to instance discrimination pretext tasks, which learn low-level representations, clustering helps to encode more semantic information, so semantic-oriented downstream tasks benefit from it. As shown in Fig. \ref{fig:clustering}, prototypical contrastive learning substitutes prototypes for one of the views of the generated samples in the NCE loss (Eq.~(\ref{nce-loss})), which yields the proposed ProtoNCE loss in PCL.
In addition, PCL is also a method based on soft parameter sharing, in which the momentum encoder is updated as Eq.(\\ref{momentum-update}).\n\\begin{table*}[t]\n\t\\tiny\n\t\\centering\n\t\\caption{Summary of the PFMs in CV.}\n\t\\label{tab:pretraining model for image}\n\t\\resizebox{\\textwidth}{!}{\n\t\\begin{threeparttable}\n \\begin{tabular}{llllllll}\n\t\t\\hline \n\t\t\\textbf{Year} & \\textbf{Conference} & \\textbf{Method} & \\textbf{Pretext Task} & \\textbf{Architecture} & \\textbf{Downstream Task}\\tnote{1} & \\textbf{Code} \\\\\n\t\t\\hline \n\t\t2014 & NeurIPS & Exemplar-CNN~ & discrimination & CNN & cla, rec & \\href{https://lmb.informatik.uni-freiburg.de/resources/binaries/nips2014\\_ExemplarCNN.zip}{https://lmb.informatik.uni-freiburg.de/...} \\\\\n\t\t\\hline\n\t\t2015 & ICCV & Context~ & context prediction & CNN & cla, det, clu & \\href{https://github.com/cdoersch/deepcontext}{https://github.com/.../deepcontext} \\\\\n\t\t\\hline\n\t\t2016 & CVPR & Inpainting~ & inpainting & GAN, CNN & cla, det, seg, inp & \\href{https://github.com/pathak22/context-encoder}{https://github.com/.../context-encoder} \\\\\n\t\t\\hline\n\t\t2016 & ECCV & Colorization~ & colorization & CNN & cla, det, seg & \\href{https://github.com/richzhang/colorization}{https://github.com/.../colorization} \\\\\n\t\t\\hline\n\t\t2016 & ECCV & Jigsaw~ & Jigsaw puzzles & CNN & cla, det, seg, ret & \\href{https://github.com/MehdiNoroozi/JigsawPuzzleSolver}{https://github.com/.../JigsawPuzzleSolver} \\\\\n\t\t\\hline\n\t\t2017 & CVPR & Split-Brain~ & channel prediction & CNN & cla, det, seg & \\href{https://richzhang.github.io/splitbrainauto}{https://richzhang.github.io/splitbrainauto} \\\\\n\t\t\\hline\n\t\t2017 & ICCV & Counting~ & counting & CNN & cla, det, seg, ret & \\href{https://github.com/clvrai/Representation-Learning-by-Learning-to-Count}{https://github.com/clvrai/...} \\\\\n\t\t\\hline\n\t\t2017 & ICML & NAT~ & noise & CNN & cla, det & - \\\\\n\t\t\\hline\n\t\t2017 & ICLR & 
BiGAN~ & generation & GAN, CNN & cla, det, seg & \\href{https://github.com/jeffdonahue/bigan}{https://github.com/.../bigan} \\\\\n\t \\hline\n\t 2018 & WACV & CDJP~ & Jigsaw puzzles & CNN & cla, det, seg & - \\\\\n\t\t\\hline\n\t\t2018 & ICLR & RotNet~ & rotation & NIN, CNN & cla, det, seg & \\href{https://github.com/gidariss/FeatureLearningRotNet}{https://github.com/gidariss/...} \\\\\n\t\t\\hline\n\t\t2018 & arXiv & CPC~ & patch overlapping & CNN, GRU & cla & - \\\\\n\t\t\\hline \n\t\t2018 & CVPR & NPID~ & instance discrimination & CNN & cla & \\href{https://github.com/zhirongw/lemniscate.pytorch}{https://github.com/.../lemniscate.pytorch} \\\\\n\t\t\\hline \n\t\t2018 & ECCV & DeepCluster~ & clustering & CNN & cla, det, seg & \\href{https://github.com/facebookresearch/deepcluster}{https://github.com/.../deepcluster} \\\\\n\t\t\\hline\n\t\t2019 & ICCV & LA~ & local aggregation & CNN & rec, det & \\href{https://github.com/neuroailab/LocalAggregation}{https://github.com/.../LocalAggregation} \\\\\n\t\t\\hline\n\t\t2019 & NeurIPS & BigBiGAN~ & generation & GAN, CNN & gen, cla & \\href{https://tfhub.dev/s?publisher=deepmind\\&q=bigbigan}{https://tfhub.dev/...bigbigan} \\\\\n\t\t\\hline\n\t\t2019 & CVPR & AET~ & transformation & CNN & cla & \\href{https://github.com/maple-research-lab/AET}{https://github.com/.../AET} \\\\\n\t\t\\hline\n\t\t2019 & NeurIPS & AMDIM~ & discrimination & CNN & cla & \\href{https://github.com/Philip-Bachman/amdim-public}{https://github.com/.../amdim-public} \\\\\n\t\t\\hline\n\t\t2020 & CVPR & ClusterFit~ & clustering & CNN & cla, seg & - \\\\\n\t\t\\hline \n\t\t2020 & ICML & CPC v2~ & patch overlapping & CNN & cla, det & - \\\\\n\t\t\\hline \n\t\t2020 & CVPR & PIRL~ & Jigsaw puzzles & CNN & cla, rec, dec & \\href{https://github.com/facebookresearch/vissl/tree/master/projects/PIRL}{https://github.com/.../PIRL} \\\\\n\t\t\\hline\n\t\t2020 & CVPR & MoCo~ & discrimination & CNN & cla, rec, dec, pos, seg & 
\\href{https://github.com/facebookresearch/moco}{https://github.com/.../moco} \\\\\n\t\t\\hline\n\t\t2021 & ICLR & PCL~ & clustering & CNN & cla, det & \\href{https://github.com/salesforce/PCL}{https://github.com/.../PCL} \\\\\n\t\t\\hline\n\t\t2020 & arXiv & MoCo v2~ & discrimination & CNN & cla, dec & \\href{https://github.com/facebookresearch/moco}{https://github.com/.../moco} \\\\\n\t\t\\hline\n\t\t2020 & ICLR & SeLa~ & self-labelling & CNN & cla, det, seg & \\href{https://github.com/yukimasano/self-label}{https://github.com/.../self-label} \\\\\n\t\t\\hline\n\t\t2020 & ICML & SimCLR~ & discrimination & CNN & cla & \\href{https://github.com/google-research/simclr}{https://github.com/.../simclr} \\\\\n\t\t\\hline\n\t\t2020 & NeurIPS & SimCLR v2~ & self-distillation~ & CNN & cla & \\href{https://github.com/google-research/simclr}{https://github.com/.../simclr} \\\\\n \\hline\n\t\t2020 & ECCV & CMC~ & view matching~ & CNN & cla, seg & \\href{https://hobbitlong.github.io/CMC/}{https://hobbitlong.github.io/CMC} \\\\\n\t\t\\hline\n\t\t2020 & NeurIPS & InfoMin~ & discrimination & CNN & cla, det, loc, seg & \\href{https://hobbitlong.github.io/InfoMin/}{https://hobbitlong.github.io/InfoMin} \\\\\n\t\t\\hline\n\t\t2020 & NeurIPS & SwAV~ & cropping & CNN, Transformer & cla, det & \\href{https://github.com/facebookresearch/swav}{https://github.com/.../swav} \\\\\n\t\t\\hline\n\t\t2020 & NeurIPS & BYOL~ & discrimination & CNN & cla, det, seg & \\href{https://github.com/deepmind/deepmind-research/tree/master/byol}{https://github.com/.../byol} \\\\\n\t\t\\hline\n\t\t2021 & arXiv & MoCo v3~ & discrimination & CNN, Transformer & cla & - \\\\\n\t\t\\hline\n\t\t2021 & ICLR & R\\textsc{e}LIC~ & discrimination & CNN & cla, rel & - \\\\\n\t\t\\hline\n\t\t2021 & ICLR & PCL v2~ & clustering & CNN & cla, det & \\href{https://github.com/salesforce/PCL}{https://github.com/.../PCL} \\\\\n\t\t\\hline\n\t\t2021 & CVPR & SimSiam~ & discrimination & CNN & cla, det, seg & 
\\href{https://github.com/facebookresearch/simsiam}{https://github.com/.../simsiam} \\\\\n\t\t\\hline\n\t\t2021 & ICML & DirectPred~ & discrimination & CNN & cla & \\href{https://github.com/facebookresearch/luckmatters/tree/main/ssl}{https://github.com/.../ssl} \\\\\n\t\t\\hline\n\t\t2021 & ICCV & DINO~ & discrimination & CNN, Transformer & cla, seg & \\href{https://github.com/facebookresearch/dino}{https://github.com/.../dino} \\\\\n\t\t\\hline\n\t\t2021 & arXiv & MoBY~ & discrimination & CNN, Transformer & cla, det, seg & \\href{https://github.com/SwinTransformer/Transformer-SSL}{https://github.com/.../Transformer-SSL} \\\\\n \\hline\n 2021 & NeurIPS & MST~ & token prediction & CNN, Transformer & cla, det, seg & - \\\\\n \\hline\n\t\t2022 & ICLR & BE\\textsc{i}T~ & token prediction & Transformer & cla, seg & \\href{https://github.com/microsoft/unilm/tree/master/beit}{https://github.com/.../beit} \\\\\n \\hline\n 2022 & CVPR & MAE~ & reconstruction & Transformer & cla, det, seg & \\href{https://github.com/facebookresearch/mae}{https://github.com/facebookresearch/mae}\\\\ \\hline\n 2022 & CVPR & SimMIM~ & reconstruction & Transformer & cla, det, seg & \\href{https://github.com/microsoft/SimMIM}{https://github.com/microsoft/SimMIM}\\\\ \\hline\n 2022 & ArXiv & UM-MAE~ & reconstruction & Transformer & cla, det, seg & \\href{https://github.com/implus/UM-MAE}{https://github.com/implus/UM-MAE}\\\\ \\hline\n 2022 & ArXiv & LoMaR~ & reconstruction & Transformer & cla, det, seg & \\href{https://github.com/junchen14/LoMaR}{https://github.com/junchen14/LoMaR}\\\\ \\hline\n 2022 & Arxiv & CAE~ & reconstruction & Transformer & cla, det, seg & \\href{https://github.com/lxtGH/CAE}{https://github.com/lxtGH/CAE}\\\\ \\hline\n 2023 & AAAI & PeCo~ & reconstruction & Transformer & cla, det, seg & -\\\\ \\hline\n 2023 & ArXiv & SAM~ & reconstruction & Transformer & det, gen, seg & \n 
\\href{https://github.com/facebookresearch/segment-anything}{https://github.com/facebookresearch/segment-anything}\\\\ \\hline\n\t\\end{tabular}\n\t\\begin{tablenotes}\n\t \\tiny\n\t \\item[1] Downstream task types: classification (cla), recognition (rec), detection (det), localization (loc), segmentation (seg), clustering (clu), inpainting (inp), retrieval (ret), generation (gen), pose estimation (pos), reinforcement learning (rel).\n\t\\end{tablenotes}\n\t\\end{threeparttable}\n\t}\n\\end{table*}", "id": "53fe15b6-6791-4528-8285-03f6b110d6b2", "level": "subsection", "origin_cites_number": 47, "parent_id": "81242bcb-042c-4bf0-9ebe-f6cb44a94843", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Computer Vision" ], [ "subsection", "Learning by Clustering" ] ], "subsections": [], "title": "Learning by Clustering" }, { "cite_extract_rate": 0, "cites": [], "content": "This section extensively investigates recent progress in PFMs on images for representation learning, from the early perspective of designing pretext tasks for self-labeling to present contrastive loss-based SSL. The pipelines of the main methods are clearly illustrated. We hope this section\ncan prepare the incoming researchers to acquire a basic understanding of this novel area and some worthwhile research direction. \nWe believe the powerful generalization ability of PFMs would extremely reduce training computation overhead by ``pretraining once and transferring forever''. Recent transformer-based PFMs have gradually outperformed traditional training from scratch on target datasets. 
\nThis discovery will spur further exploration and research into this exciting field.", "id": "adc375fc-4527-40bd-812c-bcf17f8b5e4f", "level": "subsection", "origin_cites_number": 0, "parent_id": "81242bcb-042c-4bf0-9ebe-f6cb44a94843", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Computer Vision" ], [ "subsection", "Summary" ] ], "subsections": [], "title": "Summary" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Section 5}\nWith the development of deep learning on graphs, the number of model parameters (i.e., graph embeddings) has increased rapidly.\nTherefore, large-scale labeled data is needed for training the models to avoid under-fitting or over-fitting.\nHowever, constructing large-scale labeled datasets for graphs is prohibitively subjective, expensive, and time-consuming, especially in domains that require professional knowledge and timeliness.\nWhile some semi-supervised approaches have temporarily mitigated the reliance of graph embedding models on label scale, they have not fundamentally resolved this problem. Recently, researchers have turned their attention to the application of PFMs in the field of graphs, inspired by their success in CV and NLP. However, for most graphs, obtaining large-scale pretraining data directly is challenging due to the unique nature of information such as nodes and edges. Therefore, recent studies have focused on utilizing the inherent information of a graph's attributes, topology, and community to enhance the effectiveness of the node's features. 
We have summarized the graph-related PFMs in \\textbf{Table~\\ref{tab:pretraining model for graph}}.", "id": "238ea228-7a56-4a7f-a4df-2e79b3065bd3", "level": "section", "origin_cites_number": 0, "parent_id": "160c6790-e92e-4d72-a546-5d2f900ac104", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Graph Learning" ] ], "subsections": [ "9c50d080-4943-4b08-8efe-735c19238add", "de9bff3b-ca0b-4b40-801a-4552f6809c5c", "9a770a85-42dc-423f-83fa-139f065e8b3e", "ec1546aa-6d3f-4195-8c39-c3f2aca8200b", "c72e0a76-2008-4cfb-a09b-a9aac67e07c2", "e637803f-0a59-4b5d-ad9e-e6dd380765cb" ], "title": "PFMs for Graph Learning" }, { "cite_extract_rate": 1, "cites": [ 7582, 1440, 7335 ], "content": "\\label{sec:graph_gic}\nThe essential motivation of pretraining based on graph information completion (GIC) is to mask part of the information of the input graph data and recover the masked information based on the unmasked graph data, so as to pretrain the graph embedding, as shown in Fig. \\ref{fig:GIC_GPP}. 
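\nFor instance, a typical completion objective (notation introduced here only for illustration) reconstructs the masked node features from the unmasked context: $\\mathcal{L}_{\\mathrm{GIC}}=\\frac{1}{|\\mathcal{M}|}\\sum_{v\\in\\mathcal{M}}\\|\\hat{x}_{v}-x_{v}\\|_{2}^{2}$, where $\\mathcal{M}$ denotes the set of masked nodes, $x_{v}$ the original feature of node $v$, and $\\hat{x}_{v}$ the feature recovered by the graph encoder from the unmasked part of the graph.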
\nSimilar ideas appeared earlier in the field of image and text processing.\nFor instance, in image processing, information such as image pixels and colors is recovered to pretrain the image encoder; in text processing, many methods implement pretraining of word embeddings and encoders by recovering part of the information in a sentence based on context words.\nThese methods inspire the design of graph completion tasks on graph PFMs.\n\\begin{figure*}[!t]\n \\centering\n \\subfigure[Graph Information Completion (GIC).]{\n \\includegraphics[width=.4\\linewidth]{fig_graph/GIC.pdf}}\n \\centering\n \\subfigure[Graph Property Prediction (GPP).]{\n \\includegraphics[width=.4\\linewidth]{fig_graph/GPP.pdf}}\n\t\\caption{Graph Information Completion (GIC) and Graph Property Prediction (GPP).}\n\t\\label{fig:GIC_GPP}\n\\end{figure*}\nAmong them, You et al.~, inspired by image inpainting, first propose to mask target nodes by removing their features and then recover/predict the features of the masked nodes.\nTo recover/predict the masked information, GraphCompletion~ provides GCNs with the unmasked node features (limited, for 2-layer GCNs, to the second-order neighbors of each target node).\nThe purpose of GraphCompletion's pretraining is to help the model learn better feature representations and teach it to extract features from the context.\nYou et al.~ propose the attribute mask task (namely, AttributeMask), which masks node attributes randomly, and then requires the self-supervised module to reconstruct the masked attributes.\nJin et al.~ think deeply about SSL on graph data, and propose the edge mask task (namely, EdgeMask), seeking to develop self-supervision in pairs based not only on a single node itself but on the connection between two nodes in the graph.\nIn particular, EdgeMask randomly masks some edges and then asks the model to reconstruct the masked edges.\nIn short, EdgeMask is expected to help GNN learn local 
connectivity information.\nHu et al.~ propose a PFM that masks node and edge attributes and then predicts this masked information based on the adjacent structure.", "id": "9c50d080-4943-4b08-8efe-735c19238add", "level": "subsection", "origin_cites_number": 3, "parent_id": "238ea228-7a56-4a7f-a4df-2e79b3065bd3", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Graph Learning" ], [ "subsection", "Learning by Graph Information Completion" ] ], "subsections": [], "title": "Learning by Graph Information Completion" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:graph_gca}\nDifferent from the aforementioned methods that focus on individual elements in the graph, graph consistency analysis (GCA) mainly explores the consistency of the distribution of two elements in the graph.\nSpecifically, the consistency of two elements with similar semantics should be significantly stronger than that of two elements with unrelated semantics, and this characteristic can be used to pretrain the graph model.\nAccording to the objects whose consistency is examined, such methods can be roughly divided into the following three categories.\n\\begin{figure*}[!t]\n \\centering\n \\subfigure[Context Consistency.]{\n \\includegraphics[width=.47\\linewidth]{fig_graph/GCA_CC.pdf}}\n \\centering\n \\subfigure[Self Consistency.]{\n \\includegraphics[width=.47\\linewidth]{fig_graph/GCA_SC.pdf}}\n\t\\caption{Graph Consistency Analysis (GCA).}\n\t\\label{fig:GCA}\n\\end{figure*}", "id": "de9bff3b-ca0b-4b40-801a-4552f6809c5c", "level": "subsection", "origin_cites_number": 0, "parent_id": "238ea228-7a56-4a7f-a4df-2e79b3065bd3", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Graph Learning" ], [ "subsection", "Learning by Graph Consistency Analysis " ] ], "subsections": [ "4ccfac6c-9d70-41dd-b30a-e80b2492db26" 
], "title": "Learning by Graph Consistency Analysis " }, { "cite_extract_rate": 1, "cites": [ 1010, 218, 282, 229 ], "content": "Based on the early homogeneity assumption, a large number of graph models tend to project contextual nodes to similar positions in semantic space.\nSuch contextual consistency in the graph is also exploited to pretrain graph models, which adjust the node representation by capturing the distribution characteristics of the nodes in the context, as shown in Fig. \\ref{fig:GCA} (a).\nRandom walk is an efficient method to acquire context. \nIt can capture the distribution characteristics of different perspectives in the context by designing a variety of walk strategies.\nDeepWalk~ adopts a truncated random walk strategy to represent the node context as a sequence of nodes.\nBy introducing the idea of NLP into the network embedding model, DeepWalk regards the node sequence as a ``sentence''\nand models it based on the skip-gram model, providing an unsupervised and scalable training method for node representation.\nFurthermore, on the basis of DeepWalk, node2vec~ uses two different parameter-controlled random walk strategies to obtain biased node sequences to fully capture the context information.\nDifferent from randomly sampling nodes from the context, some recent methods directly consider the relationship between the node's k-order neighbor distribution (as positive examples) and non-adjacent nodes (as negative examples), and use this to train the graph model.\nLINE~ proposes first- and second-order proximity to describe the local similarity between pairs of nodes in the graph from different perspectives, and uses them to optimize the node representation.\nMeanwhile, LINE uses negative sampling and edge sampling techniques to reduce the excessive computation and storage overhead of second-order training.\nVGAE~ introduces a variational autoencoder to encode graph structure data, and models each node's 
first-order neighbor through a GCN encoder and a simple inner product decoder.", "id": "4ccfac6c-9d70-41dd-b30a-e80b2492db26", "level": "paragraph", "origin_cites_number": 4, "parent_id": "de9bff3b-ca0b-4b40-801a-4552f6809c5c", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Graph Learning" ], [ "subsection", "Learning by Graph Consistency Analysis " ], [ "paragraph", "Context Consistency" ] ], "subsections": [ "86076f2a-243d-4e11-8ad7-99d604259a6e", "42e65643-bbf9-43d5-b4e7-3e356e1199d8" ], "title": "Context Consistency" }, { "cite_extract_rate": 1, "cites": [ 2531, 2532, 1184 ], "content": "In the field of NLP and CV, contrastive learning as an efficient self-supervised mechanism is widely used in the pretraining of models.\nIn fact, the internal comparison mechanism of such methods is based on the mutual information estimation of the original graph data and the augmented graph data to maintain the consistency of the data itself, as shown in Fig. \\ref{fig:GCA} (b).\nInspired by contrastive learning, some studies have begun to generate augmented samples of original data samples in the graph model. 
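\nA common instantiation of this idea (notation introduced here only for illustration) is the InfoNCE-style objective $\\mathcal{L}_{i}=-\\log\\frac{\\exp(\\mathrm{sim}(z_{i},z_{i}^{+})/\\tau)}{\\sum_{j}\\exp(\\mathrm{sim}(z_{i},z_{j})/\\tau)}$, where $z_{i}$ and $z_{i}^{+}$ are the embeddings of two augmentations of the same sample, the sum in the denominator runs over the positive and all negative samples, $\\mathrm{sim}(\\cdot,\\cdot)$ is a similarity function (e.g., cosine similarity), and $\\tau$ is a temperature hyper-parameter.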
\nAmong them, two augmented samples from the same original sample are regarded as positive pairs, and two augmented samples from different original samples are regarded as negative pairs.\nFor node-level tasks, GCC~ devises the pretext task as subgraph instance discrimination in and across networks.\nGCC also enhances the ability of GNNs to learn the intrinsic and transferable structural representations by introducing contrastive learning.\nSpecifically, GCC samples subgraphs from the whole graph as augmentations via random walk with restart and artificially designs positional node embedding as node initial features.\nAs a novel graph representation learning model, GCA~ incorporates various priors for topological and semantic aspects of the graph to achieve adaptive contrastive augmentation. \nSpecifically, GCA devises an enhancement scheme based on node centrality measures to highlight important connection structures, while corrupting node features by adding noise to specific nodes to lead the pretraining model to recognize underlying semantic information.\nFor graph-level tasks, some studies have attempted to introduce more diverse contrastive learning strategies.\nAmong them, You et al.~ introduce four common graph augmentation tasks (i.e., node dropping, edge perturbation, attribute masking, and subgraph sampling) into the GL model based on underlying priors and propose a unified contrastive learning framework: GraphCL.\nMeanwhile, GraphCL discusses in depth the role of data augmentation in contrastive learning and demonstrates experimentally that jointly applying multiple augmentation strategies can improve model performance.", "id": "86076f2a-243d-4e11-8ad7-99d604259a6e", "level": "paragraph", "origin_cites_number": 3, "parent_id": "4ccfac6c-9d70-41dd-b30a-e80b2492db26", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Graph Learning" ], [ "subsection", "Learning by 
Graph Consistency Analysis " ], [ "paragraph", "Context Consistency" ], [ "paragraph", "Self Consistency" ] ], "subsections": [], "title": "Self Consistency" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 240, 2533, 7103 ], "content": "Unlike the above two methods that consider the consistency of elements at the same scale, contrasting elements in graph data at different scales (e.g., nodes and subgraphs) can also be used to train graph models.\nMost such methods build on the idea of maximizing mutual information (MI)~.\nSpecifically, a readout function is usually used to obtain the summary of the graph/subgraph, and the MI estimator can be calculated using the Jensen-Shannon divergence.\nAs a representative method, DGI~ relies on maximizing the MI between the patch representation and the summary of the corresponding high-level graphs, which are all derived using the established graph convolutional network architecture, to learn the node representation.\nTo generate negative samples on a single graph, DGI corrupts the original graph by randomly scrambling node features while keeping the structure unchanged.\nSimilarly, Hassani and Khasahmadi propose CMVRL~, which generates an additional structural view of a sample graph based on graph diffusion.\nThe sample graph and a regular view are sub-sampled together, the node representation and graph representation are learned based on two shared MLPs, and contrastive learning is then achieved through the consistency loss provided by the discriminator.\nSUBG-CON~ samples a series of context subgraphs from the original graph and inputs them to the encoder to obtain the pooled central node and subgraph representation.\nFor the specified node, the context subgraph is treated as a positive sample, and other randomly sampled subgraphs are treated as negative samples.\nThe contrastive loss in the latent space forces the encoder to identify positive and negative samples in order to distinguish different nodes based 
on regional structure information.", "id": "42e65643-bbf9-43d5-b4e7-3e356e1199d8", "level": "paragraph", "origin_cites_number": 5, "parent_id": "4ccfac6c-9d70-41dd-b30a-e80b2492db26", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Graph Learning" ], [ "subsection", "Learning by Graph Consistency Analysis " ], [ "paragraph", "Context Consistency" ], [ "paragraph", "Cross Scale Consistency" ] ], "subsections": [], "title": "Cross Scale Consistency" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:graph_gpp}\nConsidering the attribute and structural information of the graph as the target of information completion, pretraining based on graph property prediction (GPP) can also be used to build the graph model in different forms.\nOne of the most common methods is to generate self-supervised signals by exploring the auxiliary properties in the graph data and to take the graph property prediction task as the pretraining task of the graph model.\nAccording to the different settings of the pretext task, such methods can be roughly classified into two categories: property regression and property classification.", "id": "9a770a85-42dc-423f-83fa-139f065e8b3e", "level": "subsection", "origin_cites_number": 0, "parent_id": "238ea228-7a56-4a7f-a4df-2e79b3065bd3", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Graph Learning" ], [ "subsection", "Learning by Graph Property Prediction" ] ], "subsections": [ "337f449c-b5d6-4dec-8767-c7610d8be018" ], "title": "Learning by Graph Property Prediction" }, { "cite_extract_rate": 1, "cites": [ 7582 ], "content": "In the graph model, different from the GIC mentioned above, property regression primarily focuses on mining the relationship between the broader numerical structure and property attributes within the graph.\nSpecifically, this branch of methods 
extracts richer self-supervised signals in graph data for pretraining graph models.\nFor example, similar to but distinct from masking node attributes, the goal of NodeProperty~ is to predict each node's auxiliary property in the graph, e.g., degree, local node importance, and local clustering coefficient.\nIn other words, NodeProperty is used to encourage GNN to capture richer local structural information while optimizing the specific downstream tasks.\nSpecifically, NodeProperty regards the node degree as a representative local node property, i.e., as the self-supervised signal, and takes other node properties as future work.\nMeanwhile, NodeProperty emphasizes that the intuition of devising self-supervised pretext tasks related to local node property is to ultimately guide the feature embedding of GNN (i.e., node representation) to preserve this information, which relies on the assumption that the node property information is relevant to the particular task.", "id": "337f449c-b5d6-4dec-8767-c7610d8be018", "level": "paragraph", "origin_cites_number": 1, "parent_id": "9a770a85-42dc-423f-83fa-139f065e8b3e", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Graph Learning" ], [ "subsection", "Learning by Graph Property Prediction" ], [ "paragraph", "Property Regression (PR)" ] ], "subsections": [ "de3fde25-98df-47aa-852e-6683e2e09223" ], "title": "Property Regression (PR)" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1440, 2534 ], "content": "Different from the property regression task, the task of property classification is usually implemented by defining pseudo-labels based on a certain distribution in the graph data, which is a typical self-supervised method.\nAmong them, structural density, the similarity of node attributes, and the difference between local and global distributions are the most commonly used. 
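\nFormally, once a pseudo label $\\tilde{y}_{v}$ (e.g., a cluster index; notation introduced here only for illustration) is assigned to each node $v$, the pretext task reduces to a standard classification objective $\\mathcal{L}_{\\mathrm{PC}}=-\\sum_{v}\\log p_{\\theta}(\\tilde{y}_{v}\\mid v)$, where $p_{\\theta}$ is the label distribution predicted by the graph encoder.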
\nWe will briefly introduce the application of such methods in GL pretraining.\nAmong these methods, clustering is the most common and effective source of pseudo-labels.\nFor instance, M3S~ designs a multi-stage training strategy, using the idea of graph clustering to iteratively train the graph encoder, enlarging the labeled data with virtual labels when only very few labeled samples are available.\nYou et al.~ further propose two pretraining strategies.\nAmong them, Node Clustering assigns $K$ (hyper-parameter) pseudo labels to nodes based on attribute clustering and pretrains the node representation by node classification.\nIn addition, You et al. also present Graph Partitioning based on the topology density assumption.\nIn Graph Partitioning, the nodes of a graph are divided into $K$ (hyper-parameter) approximately equal subsets to minimize the number of edges connecting nodes among subsets, and then pseudo labels are provided for nodes.\nIn addition to clustering methods, some researchers generate pseudo labels based on other statistical characteristics of graph data.\nFor instance, in the molecular field, Rong et al.~ use the molecular bonds of subgraphs and related statistical information to guide GNN to learn Context-Sensitive Properties (CSP) and then apply them to prediction.\nRong et al.~ propose a Motif Prediction (MP) task, which can be expressed as a multi-label classification problem, in which each motif corresponds to a label.\nSpecifically, suppose that $K$ motifs in the molecular data are considered.\nFor a specific molecule (abstracted as graph $G$), they use RDKit to detect whether each motif appears in $G$, and then take it as the target of the motif prediction task.", "id": "de3fde25-98df-47aa-852e-6683e2e09223", "level": "paragraph", "origin_cites_number": 3, "parent_id": "337f449c-b5d6-4dec-8767-c7610d8be018", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for 
Graph Learning" ], [ "subsection", "Learning by Graph Property Prediction" ], [ "paragraph", "Property Regression (PR)" ], [ "paragraph", "Property Classification (PC)" ] ], "subsections": [], "title": "Property Classification (PC)" }, { "cite_extract_rate": 0.8, "cites": [ 2536, 2537, 2535, 2513 ], "content": "The masked autoencoder (MAE) is first applied to graphs in MGAE~, a masked autoencoder for self-supervised learning on graphs. Following MAE, MGAE operates on a partial network structure (without masked edges) that is based on convolutions. Besides, the decoder of MGAE is designed to model the cross-correlation between the head and tail nodes of an anchor edge. Empirical results demonstrate that MGAE performs better than traditional graph autoencoders and graph SSL approaches. Furthermore, GMAE extends this approach by using a transformer instead of convolutions and reconstructing the features of masked nodes rather than masked edges. In addition to empirical improvements, MaskGAE further provides theoretical justifications for the potential benefits of masked graph modeling. Designing algorithms to accommodate graphs of various complex properties is a promising direction. For instance, to tackle the heterogeneous graph scenario, HGMAE proposes meta-path masking and adaptive attribute masking with a dynamic mask to enable effective and stable learning on complex graph structures. Moreover, several training strategies are developed, including meta-path-based edge reconstruction to incorporate complex structural information, target attribute restoration to utilize various node attributes, and positional feature prediction to encode node positional information. 
Besides dealing with more complex graph structures, how to improve the learning efficiency of MAE on graph data remains an open question.", "id": "ec1546aa-6d3f-4195-8c39-c3f2aca8200b", "level": "subsection", "origin_cites_number": 5, "parent_id": "238ea228-7a56-4a7f-a4df-2e79b3065bd3", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Graph Learning" ], [ "subsection", "Learning by Masked Autoencoder" ] ], "subsections": [], "title": "Learning by Masked Autoencoder" }, { "cite_extract_rate": 0.75, "cites": [ 2538, 8563, 7357 ], "content": "In addition to the above methods, there are many pretraining methods that use relatively novel or hybrid strategies. For example, CG$^3$~ generates an improved node representation by designing a semi-supervised consistency loss to maximize the consistency between different views of the same data or data from the same category.\nNext, CG$^3$ uses the graph generation loss related to the input feature to extract the potential deterministic relationship between the data feature and the input graph topology as a supplementary supervision signal for SSL.\nBased on the attention mechanism, Graph-Bert~ trains itself to reconstruct node attributes and topological structure with sampled linkless subgraphs within their local contexts.\nGMI~ extends the traditional mutual information computing idea from the vector space to the graph domain and proposes to jointly maximize feature mutual information (between the node’s embedding and raw features of its neighbors) and edge mutual information (embedding of two adjacent nodes) for graph representation learning.\nGPT-GNN~ proposes a self-supervised graph generation task to guide itself to capture the topological and semantic attributes of the graph.\nGPT-GNN divides the likelihood of graph generation into attribute generation and edge generation to disentangle the intrinsic dependence between 
node attributes and graph topology.", "id": "c72e0a76-2008-4cfb-a09b-a9aac67e07c2", "level": "subsection", "origin_cites_number": 4, "parent_id": "238ea228-7a56-4a7f-a4df-2e79b3065bd3", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Graph Learning" ], [ "subsection", "Other Learning Strategies on Graph Data" ] ], "subsections": [], "title": "Other Learning Strategies on Graph Data" }, { "cite_extract_rate": 0.7894736842105261, "cites": [ 2538, 2543, 2537, 218, 7104, 7357, 7583, 1010, 2541, 2540, 2532, 242, 2531, 2533, 229, 8563, 7582, 2536, 2544, 7335, 2535, 282, 2534, 1184, 7103, 1440, 8564, 240, 2539, 2542 ], "content": "Traditional feature learning methods on graphs are often accompanied by information loss, and the information they take into consideration is relatively one-sided, so the obtained graph representations are relatively rough and lose a large amount of information.\nResearchers therefore began to exploit the distribution of data and attributes in the graph as self-supervised signals to pretrain the graph model so that it can capture more valuable information.\nBy transforming the distribution of nodes, attributes, and edges in the graph into different pretext tasks, and using GNNs for modeling, the graph model can fully fit the original distribution of the input graph.\nIn many unsupervised or semi-supervised scenarios, such pretrained graph models have been proven to benefit downstream tasks. 
Besides, federated training of large graph models~ can be a promising solution for building pretrained foundation models.\nCurrently, with the in-depth study of contrastive learning strategies, some work has attempted to apply contrastive learning in different forms to the pretraining of graph models.\nThrough consistency analysis at the context, self, and cross-scale levels, such methods greatly improve the performance of pretrained graph models on different graphs.\n\\begin{table*}[t]\n\t\\tiny\n\t\\centering\n\t\\caption{Summary of PFMs in GL.}\n\t\\label{tab:pretraining model for graph}\n \\scalebox{1.1}{\n\t\\begin{tabular}{llllll} \n\t\t\\hline \n\t\t\\textbf{Year} & \\textbf{Conference} & \\textbf{Method} & \\textbf{Pretext Task} & \\textbf{Encoder} & \\textbf{Code} \\\\\n\t\t\\hline\n\t\t2014 & KDD & DeepWalk~ & GC-C & Shallow NN & \\href{https://github.com/phanein/deepwalk}{https://github.com/phanein/deepwalk} \\\\\\hline\n\t\t2015 & WWW & LINE~ & GC-C & Shallow NN & \\href{https://github.com/tangjianpku/LINE}{https://github.com/tangjianpku/LINE} \\\\\\hline\n\t\t2016 & NeurIPS & VGAE~ & GC-C & GCN & - \\\\\\hline\n\t\t2016 & KDD & node2vec~ & GC-C & Shallow NN & \\href{https://github.com/aditya-grover/node2vec}{https://github.com/aditya-grover/node2vec} \\\\\\hline\n\t\t2017 & NeurIPS & GraphSage~ & GC-C & Shallow NN & \\href{https://github.com/williamleif/GraphSAGE}{https://github.com/williamleif/GraphSAGE} \\\\\\hline\n\t\t2018 & ICLR & DGI ~ & GC-CS & GCN/SAGE & \\href{https://github.com/PetarV-/DGI}{https://github.com/PetarV-/DGI} \\\\\\hline\n\t\t2020 & ICML & GraphCompletion~ & GIC & GCN & \\href{https://github.com/Shen-Lab/SS-GCNs}{https://github.com/Shen-Lab/SS-GCNs} \\\\\\hline\n\t\t2020 & ICLR & AttMasking~ & GIC & GCN & \\href{http://snap.stanford.edu/gnn-pretrain}{http://snap.stanford.edu/gnn-pretrain} \\\\\\hline\n\t\t2020 & ICML & AttributeMask~ & GIC & GCN & \\href{https://github.com/Shen-Lab/SS-GCNs}{https://github.com/Shen-Lab/SS-GCNs} 
\\\\\\hline\n\t\t2020 & arXiv & EdgeMask~ & GIC & GCN & \\href{https://github.com/ChandlerBang/SelfTask-GNN}{https://github.com/ChandlerBang/SelfTask-GNN} \\\\\\hline\n\t\t2020 & arXiv & NodeProperty~ & GPP-PR & GCN & \\href{https://github.com/ChandlerBang/SelfTask-GNN}{https://github.com/ChandlerBang/SelfTask-GNN} \\\\\\hline\n\t\t2020 & AAAI & M3S~ & GPP-PC & GCN & - \\\\\\hline\n\t\t2020 & ICML & Node Clustering~ & GPP-PC & GCN & \\href{https://github.com/Shen-Lab/SS-GCNs}{https://github.com/Shen-Lab/SS-GCNs} \\\\\\hline\n\t\t2020 & ICML & Graph Partitioning~ & GPP-PC & GCN & \\href{https://github.com/Shen-Lab/SS-GCNs}{https://github.com/Shen-Lab/SS-GCNs} \\\\\\hline\n\t\t2020 & NeurIPS & CSP~ & GPP-PC & GCN & - \\\\\\hline\n\t\t2020 & NeurIPS & MP~ & GPP-PC & GCN & - \\\\\\hline\n\t\t2020 & NeurIPS & SELAR~ & GC-C & GNN & \\href{https://github.com/mlvlab/SELAR}{https://github.com/mlvlab/SELAR} \\\\\\hline\n\t\t2020 & KDD & GCC~ & GC-S & GIN & \\href{https://github.com/THUDM/GCC}{https://github.com/THUDM/GCC} \\\\\\hline\n\t\t2020 & NeurIPS & GraphCL ~ & GC-S & GCN & \\href{https://github.com/CRIPAC-DIG/GCA}{https://github.com/CRIPAC-DIG/GCA} \\\\\\hline\n\t\t2020 & ICML & CMVRL ~ & GC-CS & GCN & - \\\\\\hline\n\t\t2020 & ICDM & SUBG-CON ~ & GC-CS & GCN & \\href{https://github.com/yzjiao/Subg-Con}{https://github.com/yzjiao/Subg-Con} \\\\\\hline\n\t\t2020 & ICLR & InfoGraph ~ & GC-CS & GCN & \\href{https://github.com/fanyun-sun/InfoGraph}{https://github.com/fanyun-sun/InfoGraph} \\\\\\hline\n\t\t2020 & AAAI & DMGI ~ & GC-CS & GCN & \\href{https://github.com/pcy1302/DMGI}{https://github.com/pcy1302/DMGI} \\\\\\hline\n\t\t2020 & arXiv & Graph-Bert ~ & Hybrid & Transformer & \\href{https://github.com/jwzhanggy/Graph-Bert}{https://github.com/jwzhanggy/Graph-Bert} \\\\\\hline\n\t\t2020 & WWW & GMI ~ & Hybrid & GCN & - \\\\\\hline\n\t\t2020 & KDD & GPT-GNN ~ & Hybrid & GNN & \\href{https://github.com/acbull/GPT-GNN}{https://github.com/acbull/GPT-GNN} 
\\\\hline\n\t\t2021 & ICML & JOAO ~ & GC-S & GCN & \\href{https://github.com/Shen-Lab/GraphCL\\_Automated}{https://github.com/Shen-Lab/GraphCL\\_Automated} \\\\\\hline\n\t\t2021 & AAAI & CSSL ~ & GC-S & GCN & \\href{https://github.com/UCSD-AI4H/GraphSSL}{https://github.com/UCSD-AI4H/GraphSSL} \\\\\\hline\n\t\t2021 & PAKDD & GIC ~ & GC-CS & GCN & \\href{https://github.com/cmavro/Graph-InfoClust-GIC}{https://github.com/cmavro/Graph-InfoClust-GIC} \\\\\\hline\n\t\t2021 & WWW & SUGAR ~ & GC-CS & GCN & \\href{https://github.com/RingBDStack/SUGAR}{https://github.com/RingBDStack/SUGAR} \\\\\\hline\n\t\t2021 & ICML & GraphLoG ~ & GC-CS & GCN & \\href{https://github.com/DeepGraphLearning/GraphLoG}{https://github.com/DeepGraphLearning/GraphLoG} \\\\\\hline\n\t\t2021 & WWW & SLiCE ~ & GC-CS & GCN & \\href{https://github.com/pnnl/SLICE}{https://github.com/pnnl/SLICE} \\\\\\hline\n\t\t2021 & WSDM & BiGI ~ & GC-CS & GCN & \\href{https://github.com/caojiangxia/BiGI}{https://github.com/caojiangxia/BiGI} \\\\\\hline\n\t\t2021 & WWW & GCA ~ & GC-S & GCN & \\href{https://github.com/CRIPAC-DIG/GCA}{https://github.com/CRIPAC-DIG/GCA}\\\\\\hline\n\t\t2021 & KDD & HeCo ~ & GC-CS & GCN & \\href{https://github.com/liun-online/HeCo}{https://github.com/liun-online/HeCo} \\\\\\hline\n\t\t2021 & AAAI & CG$^3$ ~ & Hybrid & GCN & - \\\\\\hline\n\t\t2021 & ICLR & SuperGAT~ & GC-C & GAT & \\href{https://github.com/dongkwan-kim/SuperGAT}{https://github.com/dongkwan-kim/SuperGAT} \\\\\\hline\n\t\t2021 & KDD & MoCL ~ & Hybrid & GNN & \\href{https://github.com/illidanlab/MoCL-DK}{https://github.com/illidanlab/MoCL-DK} \\\\ \\hline\n\t\t2022 & arXiv & MGAE ~ & Masked Edge Reconstruction & GCN & - \\\\ \\hline\n\t 2022 & KDD & GMAE ~ & Masked Node Reconstruction & Transformer & \\href{https://github.com/THUDM/GraphMAE}{https://github.com/THUDM/GraphMAE} \\\\ \\hline\n\t 2022 & arXiv & MaskGAE ~ & Partial Masked Node Reconstruction & Transformer 
&\\href{https://github.com/EdisonLeeeee/MaskGAE}{https://github.com/EdisonLeeeee/MaskGAE} \\\\ \\hline\n\t 2022 & Arxiv & HGMAE ~ & Metapath Masking Reconstruction& Transformer & - \\\\ \\hline\n\t\\end{tabular}\n }\n\\end{table*}", "id": "e637803f-0a59-4b5d-ad9e-e6dd380765cb", "level": "subsection", "origin_cites_number": 38, "parent_id": "238ea228-7a56-4a7f-a4df-2e79b3065bd3", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Graph Learning" ], [ "subsection", "Summary" ] ], "subsections": [], "title": "Summary" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Section 6}\nWith the rapid development of the PFMs, except for text, image, and graph, the PFMs have also carried out a lot of research on speech, video, text-image, and cross-data. Besides, researchers have started investigating the unified PFMs that incorporate all three mentioned domains recently.\nTherefore, in this section, we introduce some other advanced and unified PFMs.", "id": "f9872e6b-da5b-4455-8e22-b3eac29cee8a", "level": "section", "origin_cites_number": 0, "parent_id": "160c6790-e92e-4d72-a546-5d2f900ac104", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Other Data Modality" ] ], "subsections": [ "4101c970-4dbe-4747-b05a-a0edf3ccecc1", "c5f12ce3-2fb4-4cee-91d0-886b0954df11", "332573be-24a1-4841-b8d3-27d3d94c91b4", "ec1a273d-353b-4af3-af4b-b4673cd4d2a9", "13a9f290-8953-45bf-86bd-7012fe69b5de" ], "title": "PFMs for Other Data Modality" }, { "cite_extract_rate": 0.818181818181818, "cites": [ 2549, 2546, 2547, 864, 2545, 2550, 7584, 2548, 2551 ], "content": "In the field of speech, wav2vec~ obtains speech representation by capturing contextual information on large-scale unlabeled datasets, and fine-tuning on a few samples by noise comparison binary classification task, which greatly improves 
the performance of downstream tasks.\nFurthermore, vq-wav2vec~ and wav2vec 2.0~ propose discrete unsupervised pretraining methods on the basis of wav2vec, discretizing the originally continuous speech signal so that mature methods from the NLP community can be migrated and applied.\nMeanwhile, many studies have designed different mechanisms that use the representations obtained by speech pretraining as initial inputs and apply them to different tasks, e.g., automatic speech recognition~, phoneme recognition~, and speech synthesis~.\nIn particular, the extensive application of spoken language understanding has promoted research on the joint pretraining of speech and text.\nFor example, SpeechBERT~ applies MLM to speech and text pairs to perform representation learning on discrete information.\nUnlike~, which relies on a large amount of labeled data for joint pretraining, SPLAT~ uses unlabeled speech data to pretrain the speech embedding module, and proposes a label-level alignment method suitable for label-level downstream tasks based on sequence alignment.\nMusicBERT~ is a pretrained model designed for processing music data. It was developed by training on a vast symbolic music corpus consisting of over one million songs. To improve the pretraining process with symbolic music data, MusicBERT employs several mechanisms, such as OctupleMIDI encoding and a bar-level masking strategy. Huang et al.~ suggest incorporating a metrical structure in the input data, which allows Transformers to better recognize the hierarchical structure of music at the beat-bar-phrase level. AudioTransformer~ is a model that enhances the performance of Transformer architectures by implementing certain techniques, such as pooling, which were previously used in convolutional networks. 
Verma et al.~ demonstrate how they leverage multi-rate signal processing ideas based on wavelets to improve the Transformer embeddings and obtain better results.", "id": "4101c970-4dbe-4747-b05a-a0edf3ccecc1", "level": "subsection", "origin_cites_number": 11, "parent_id": "f9872e6b-da5b-4455-8e22-b3eac29cee8a", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Other Data Modality" ], [ "subsection", "PFMs for Speech" ] ], "subsections": [], "title": "PFMs for Speech" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 2554, 2556, 2555, 2553, 2552 ], "content": "Video is similar to the RGB features of image and sequence information of the text. Many meaningful explorations in self-supervised video representation learning can not only perform efficiently well in video datasets but also generalize to the learning in other areas. Odd-One-Out Networks (O3N)~ is a technique that targets to predict the odd video subsequence among real subsequences sampled from a video in a training dataset. The experiments are conducted by using different video-clip encoders for O3N to prove consistent improvements of this pretraining design. Similarly, Shuffle and Learn~ aims to learn the correct temporal order from a sequence of frames in a video. However, Kim et al.~ designed a new self-supervised task called Space-Time Cubic Puzzles to train 3D CNNs. This task requires a pretrained backbone to arrange permuted 3D spatiotemporal crops. The performance of downstream tasks proves that effective video representations have been learned while solving such puzzles.\nInspired by the contrastive learning in images, many pretraining models in the video also utilize the contrastive loss to learn video presentations for downstream tasks. Inter-Intra Contrastive (IIC) framework~ can learn video representations by using positive and negative pairs generated from different videos. 
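The order-verification pretext tasks above (O3N, Shuffle and Learn) can be sketched as follows. This is a minimal illustration that operates on frame indices rather than real frame images, and all function names are invented for the sketch, not taken from the original papers:

```python
import random

def make_order_verification_pairs(num_frames, clip_len=3, n_pairs=4, seed=0):
    """Sample frame-index tuples from a video and label each tuple
    1 if it is in the correct temporal order and 0 if it was shuffled,
    which is the binary pretext task used by order-verification methods."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        start = rng.randrange(0, num_frames - clip_len + 1)
        clip = list(range(start, start + clip_len))
        if rng.random() < 0.5:
            pairs.append((tuple(clip), 1))       # correct temporal order
        else:
            shuffled = clip[:]
            while shuffled == clip:              # force an actually shuffled tuple
                rng.shuffle(shuffled)
            pairs.append((tuple(shuffled), 0))   # odd / out-of-order negative
    return pairs
```

A backbone network would then be trained to classify each tuple of (encoded) frames, so that solving the ordering task forces it to learn temporal structure.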
Specifically, different modalities in the same video are treated as positive pairs, and video clips from different videos as negative ones. Temporal Contrastive Pretraining (TCP)~ is another contrastive method based on CPC to learn video representations. Different from the existing GAN-based method that generates future frames for the video directly, TCP can predict the latent representation of future frames of the video, which is better for long-term predictions.\nSequence Contrastive Learning (SeCo)~ is a novel method considering both intra- and inter-frame instance discrimination in sequence order-based task.", "id": "c5f12ce3-2fb4-4cee-91d0-886b0954df11", "level": "subsection", "origin_cites_number": 6, "parent_id": "f9872e6b-da5b-4455-8e22-b3eac29cee8a", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Other Data Modality" ], [ "subsection", "PFMs for Video" ] ], "subsections": [], "title": "PFMs for Video" }, { "cite_extract_rate": 0, "cites": [], "content": "The multimodal PFM among text and image can be divided into two categories: single-stream model and cross-stream model. The single-stream model refers to integrating text information and visual information at the beginning of the model. The Cross-stream model refers to text information and visual information encoded by two independent coding modules, respectively. 
Then different modal information is fused by mutual attention mechanism.", "id": "332573be-24a1-4841-b8d3-27d3d94c91b4", "level": "subsection", "origin_cites_number": 0, "parent_id": "f9872e6b-da5b-4455-8e22-b3eac29cee8a", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Other Data Modality" ], [ "subsection", "PFMs for Multimodal" ] ], "subsections": [ "2888b43c-cbd1-4869-920c-68b189be657f" ], "title": "PFMs for Multimodal" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1272, 2015 ], "content": "VisualBERT~ inputs text and images into the model simultaneously, which are aligned and fused using Transformer's self-attention mechanism. The input of the text is the same as BERT, and the input of the image is the image features extracted by Fasters-RCNN. VisualBERT also does pretraining and then fine-tuning the specific task. It adopts two pretraining tasks, namely MLM and sentence-image prediction, determining whether the input sentence describes the corresponding image.\nThe structure of Unicoder-VL~ is very similar to VisualBERT, except for the processing of the image. Unicoder-VL extracts the image feature through Faster-RCNN and concatenates the feature with image position-encoding mapping to the same space. 
It enhances the image label prediction task, which predicts the categories of images.\nThe pretraining task of VL-BERT~ is the same as Unicoder-VL.\nThe image input of VL-BERT includes four parts: the image region features extracted by Fasters-RCNN, the location of the region in the original image, location coding, fragment encoding, and [IMG] encoding.", "id": "2888b43c-cbd1-4869-920c-68b189be657f", "level": "paragraph", "origin_cites_number": 3, "parent_id": "332573be-24a1-4841-b8d3-27d3d94c91b4", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Other Data Modality" ], [ "subsection", "PFMs for Multimodal" ], [ "paragraph", "Single-Stream Model" ] ], "subsections": [ "38bce72c-4b48-41da-8192-684816c0192e" ], "title": "Single-Stream Model" }, { "cite_extract_rate": 1, "cites": [ 7585, 1639, 1278, 2558, 2557, 2559, 2525 ], "content": "In ViLBERT~, the text and image modes are first encoded separately, and their outputs go through a standard attention module. This module is based on the Transformer structure. Still, in the self-attention mechanism, each module uses its query to calculate attention with the value and key of another module to integrate the information between different modules.\nThe model is pretrained on two tasks. The first task is the mask task, which is the same as BERT. On the image side, the goal of the task is that when the region image is masked, the classification distribution of the output of the model can be as consistent as possible with the output distribution of the model used to extract the region features (such as Faster-RCNN). The second task is the language image matching task. DALL-E is a series of deep learning models developed by OpenAI to generate images from natural language prompts. 
The first version of DALL-E uses a transformer-based architecture, similar to the one used in the GPT LMs, to process the textual prompts and generate image-like representations. The model is trained on a dataset of images and their associated textual descriptions based on GPT-3. DALL-E 2~ improves on it by employing contrastive language-image pretraining (CLIP) to capture the semantic association between image-text pairs and the GLIDE diffusion model for text-conditional image synthesis. Furthermore, GPT-4 was recently proposed by OpenAI. It is a large-scale multimodal model which adopts RLHF and demonstrates human-level performance on various professional and academic benchmarks.\nSince multi-modal data contain more available information than single-modality data, the performance of these models on benchmark datasets is enhanced when they are combined with SSL. Cross and Learn~ is the first method that reveals crossmodal information as an alternative source of supervision and obtains powerful feature representations by combining a crossmodal loss and a diversity loss in both RGB and optical flow modalities. \nDifferent from the existing methods that learn feature representations from only a single task in cross-domain datasets, Ren and Lee~ propose a novel deep multi-task network to learn more generalizable visual representations that overcome the domain difference and further utilize the cross-domain information in different tasks. In that paper, the cross-domain datasets are real and synthetic datasets generated by a GAN-based network, while the multiple tasks are the predictions of the surface normal, depth, and instance contour in RGB images. This model performs better than previous single-task-based SSL methods by learning general-purpose visual representations from cross-domain multi-task feature learning. 
Tian et al.~ believe that a powerful representation is one that models cross-view factors from the perspective of humans view to understand the world. They propose Contrastive Multiview Coding (CMC) to learn a video representation by maximizing mutual information between different views of the same scene.", "id": "38bce72c-4b48-41da-8192-684816c0192e", "level": "paragraph", "origin_cites_number": 7, "parent_id": "2888b43c-cbd1-4869-920c-68b189be657f", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Other Data Modality" ], [ "subsection", "PFMs for Multimodal" ], [ "paragraph", "Single-Stream Model" ], [ "paragraph", "Cross-Stream Model" ] ], "subsections": [], "title": "Cross-Stream Model" }, { "cite_extract_rate": 1, "cites": [ 2561, 2563, 2560, 2562 ], "content": "Code generation with LLMs involves using pretrained language models to automatically generate code based on natural language descriptions of a desired program. This approach has the potential to greatly improve the efficiency of software development by reducing the need for manual coding and allowing developers to focus on higher-level tasks.\nThe technique involves training large-scale language models on vast amounts of natural language text and then fine-tuning the models on specific programming tasks. By inputting natural language descriptions of code, the model can generate code snippets that are syntactically and semantically correct. Code generation with LLMs has been applied in various programming domains, including web development, NLP, and data analysis. The models used for code generation include GPT-4, T5, and Codex, among others. For example, Andrei et al.~ have investigated and assessed the fine-tuning of transformer models for personalized code generation. 
Specifically, they have evaluated the effectiveness of various personalization techniques in the domain of generating unit tests for Java methods and learning to personalize for a specific software project. Shailja et al.~ assess the capacity of LLMs to generate useful Verilog code. To achieve this, pretrained LLMs are fine-tuned on Verilog datasets collected from GitHub and Verilog textbooks. An evaluation framework is then constructed, consisting of test benches for functional analysis and a flow for testing the syntax of Verilog code generated in response to problems of varying degrees of difficulty. An open-source CodeGen LLM that has undergone fine-tuning has been shown to outperform the current leading commercial Codex LLM. CodeGen~ is a family of LLMs that have up to 16.1B parameters and can handle both natural language and programming language data. Additionally, they have released the training library JAXFORMER as open-source. Their work demonstrates that the model can match the previous state-of-the-art zero-shot Python code generation performance on HumanEval, showcasing the practical applications of the trained model. Synchromesh, introduced in the study by Poesia et al.~, adopts a novel approach called Target Similarity Tuning (TST) to retrieve a small set of examples from a training bank. Then, Synchromesh uses these examples to prompt a pretrained language model and generates programs by applying Constrained Semantic Decoding (CSD). CSD is a general framework that can restrict the output to valid programs in the target language. In this work, the authors show that the combined use of CSD and TST results in significant improvements in prediction accuracy, as well as preventing runtime errors.\nHowever, there are still some limitations to code generation with LLMs, such as the models' tendency to generate overly verbose or inefficient code and their inability to handle complex programming tasks. 
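The example-retrieval idea behind approaches like TST can be sketched as follows. Here a simple token-overlap (Jaccard) similarity stands in for the learned similarity model of the paper, and the function names and prompt format are illustrative only:

```python
def jaccard(a, b):
    """Token-overlap similarity between two natural language descriptions."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def build_few_shot_prompt(query, example_bank, k=2):
    """Pick the k most similar (description, code) examples from a bank and
    prepend them to the query, mimicking retrieval-augmented prompting for
    code generation; the assembled string would be fed to an LLM."""
    ranked = sorted(example_bank, key=lambda ex: jaccard(query, ex[0]), reverse=True)
    parts = [f"# Task: {desc}\n{code}\n" for desc, code in ranked[:k]]
    parts.append(f"# Task: {query}\n")           # the model completes this part
    return "\n".join(parts)
```

A real system would replace `jaccard` with a tuned embedding model and add a decoding-time validity check, but the retrieval-then-prompt pipeline is the same shape.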
Nevertheless, the technology has shown significant promise and has the potential to revolutionize the software development industry.", "id": "ec1a273d-353b-4af3-af4b-b4673cd4d2a9", "level": "subsection", "origin_cites_number": 4, "parent_id": "f9872e6b-da5b-4455-8e22-b3eac29cee8a", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Other Data Modality" ], [ "subsection", "PFM for Code Generation" ] ], "subsections": [], "title": "PFM for Code Generation" }, { "cite_extract_rate": 1, "cites": [ 1582 ], "content": "A big convergence of PFMs handling multiple modalities is emerging, such as backbone architecture, pretraining task, and model scaling up~. Therefore, many unified PFMs proposed by researchers arise. A unified PFM is a unified model pretrained on unimodal and multimodal\ndata with single or multiple transformers as the backbone, which has the ability to perform a large variety of downstream AI tasks, including unimodal tasks and multimodal tasks. There are currently three types of SOTA unified models based on model architectures. We defined them as the single-transformer model, multi-transformer model, and comb-transformer model. A single-transformer model refers to a PFM model which only has a large-scale transformer as its backbone, whereas a multi-transformer model refers to a PFM model having multiple transformers. 
A comb-transformer model is the PFM model with the combination of both single and multiple transformer structures.", "id": "13a9f290-8953-45bf-86bd-7012fe69b5de", "level": "subsection", "origin_cites_number": 1, "parent_id": "f9872e6b-da5b-4455-8e22-b3eac29cee8a", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Other Data Modality" ], [ "subsection", "SOTA Unified PFMs" ] ], "subsections": [ "df91898d-8ab1-4769-af61-bf2d096e34b6" ], "title": "SOTA Unified PFMs" }, { "cite_extract_rate": 0.5, "cites": [ 1582, 2471, 2564 ], "content": "UNITER~ is a large-scale PFM for joint image-text embedding, which consists of an Image Embedder, a Text Embedder, and a multi-layer Transformer. It first encodes visual features and bounding box features for image regions using Image Embedder and tokens and positions using Text Embedder. Then, a Transformer module is applied to learn generalizable contextualized embeddings for images and text through four pretraining tasks. Instead of applying random joint masking to both modalities, conditional masking on pretraining tasks is used. Six vision-language tasks are selected as the downstream tasks.\nUni-Perceiver~ is a single siamese model with shared parameters having the ability to process different modalities regarding vision\nand language tasks. Different task inputs and targets are encoded into unified token sequences with modality-specific tokenizers, which are then decoded by a modality-agnostic weight-sharing Transformer encoder into the shared representation space. Any perception task is modeled as finding the maximum likelihood target for each input through the similarity of their representations.\nUni-Perceiver is pretrained on unimodal and multimodal tasks. 
The evaluation results on various downstream tasks show that, with prompt tuning on only 1\% of the downstream task data, its performance is close to that of SOTA methods.\nGato~ builds a single large transformer sequence model that works as a multimodal, multi-task, multi-embodiment generalist policy. It can perform various tasks using a single neural network with the same set of weights. Gato is trained on 604 tasks, where different types of data, such as images, text, proprioception, joint torques, and other discrete and continuous observations and actions, are serialized into a flat sequence of tokens, batched, and processed by the transformer. During deployment, sampled tokens are assembled into different actions based on the context.\nOFA~ is a simple sequence-to-sequence learning framework with a unified instruction-based task representation that unifies various tasks. In the pretraining and finetuning stages, OFA requires no extra task-specific layers for downstream tasks, keeping it task-agnostic. Its modality-agnostic compute engine is a Transformer, with the constraint that no learnable task- or modality-specific components are added for downstream tasks. OFA is pretrained on a small amount of image-text pairs to handle crossmodal tasks while attaining highly competitive performance on unimodal tasks.\nUNIFIED-IO~ is a sequence-to-sequence model using a unified architecture that performs large and diverse tasks. UNIFIED-IO is a transformer model where both the encoder and decoder are composed of stacked transformer layers. The unified architecture does not need task- or modality-specific branches, which is accomplished by homogenizing the input and output of each task into a sequence of discrete vocabulary tokens. It trains a single transformer-based architecture on over 90 diverse datasets in the vision and language fields. UNIFIED-IO is the first model to perform such a variety of tasks and produce strong results across 16 diverse benchmarks without finetuning. 
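The token-homogenization idea shared by Gato and UNIFIED-IO can be sketched as follows. The special token ids, vocabulary offset, and number of discretization bins below are invented for illustration and do not match any model's actual tokenizer:

```python
def tokenize_unified(text, image_pixels, text_vocab, n_bins=256, vocab_offset=1000):
    """Serialize a text string and a list of pixel intensities (0.0-1.0)
    into one flat sequence of integer token ids, the way generalist models
    homogenize different modalities into a single vocabulary."""
    text_tokens = [text_vocab[w] for w in text.split()]
    # discretize each continuous pixel value into one of n_bins ids,
    # placed after the text vocabulary via a fixed offset
    image_tokens = [vocab_offset + min(int(p * n_bins), n_bins - 1)
                    for p in image_pixels]
    BOS, SEP = 0, 1                               # illustrative sentinel ids
    return [BOS] + text_tokens + [SEP] + image_tokens
```

Once everything is a flat token sequence, a single transformer can be trained on it regardless of which modality each span came from.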
\nBEiT-3~ is a general-purpose multimodal pretrained model on language, vision, and vision-language tasks. The big convergence of BEiT-3 can be seen from three aspects, including backbone architecture, pretraining task, and model scaling up. It introduces a shared Multiway Transformer as backbone network performing masked data modeling on both unimodal and multimodal data. To process different modalities, every Multiway Transformer block has a shared self-attention module, and a pool of feed-forward networks. \nIt is a giant-size foundation model that contains 1.9B parameters. Experimental results show that BEIT-3 can outperform SOTA models on both vision and vision-language tasks.", "id": "df91898d-8ab1-4769-af61-bf2d096e34b6", "level": "paragraph", "origin_cites_number": 6, "parent_id": "13a9f290-8953-45bf-86bd-7012fe69b5de", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Other Data Modality" ], [ "subsection", "SOTA Unified PFMs" ], [ "paragraph", "Single-transformer Model" ] ], "subsections": [ "999e5823-8c94-436d-b716-3f921ea01166", "f772b5dd-3e74-483e-b2a9-a1cf58e4dcb3" ], "title": "Single-transformer Model" }, { "cite_extract_rate": 1, "cites": [ 2468 ], "content": "FLAVA~ is an alignment model that targets all modalities at once and aims at solving vision and language tasks, and vision-language tasks. It utilizes a common transformer model architecture to learn strong representations from unimodal and multimodal data. An image encoder transformer is used to capture unimodal image representations. A text encoder transformer is adopted to process unimodal text information. A multimodal encoder transformer takes both encoded unimodal images and text as inputs and integrates their representations for multimodal reasoning. During pretraining, masked image modeling (MIM) and MLM losses are applied to the image and text encoders, respectively. 
On the other hand, masked multimodal modeling (MMM) and image-text matching (ITM) losses are used over paired image-text data. For downstream tasks, classification heads are applied to the outputs from the image, text, and multimodal encoders, respectively, for visual recognition, language understanding, and multimodal reasoning tasks. FLAVA shows good performance on 35 tasks across different domains. A noticeable advantage is that it uses smaller datasets compared with other models.", "id": "999e5823-8c94-436d-b716-3f921ea01166", "level": "paragraph", "origin_cites_number": 1, "parent_id": "df91898d-8ab1-4769-af61-bf2d096e34b6", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Other Data Modality" ], [ "subsection", "SOTA Unified PFMs" ], [ "paragraph", "Single-transformer Model" ], [ "paragraph", "Multi-transformer Model" ] ], "subsections": [], "title": "Multi-transformer Model" }, { "cite_extract_rate": 1, "cites": [ 8459 ], "content": "UNIMO~ can learn from both unimodal and multimodal data with one model to achieve robust and generalizable representations. It employs multi-layer self-attention Transformers to learn general textual and visual representations simultaneously and unifies them into the same semantic space via cross-modal contrastive learning (CMCL). The main idea behind CMCL is to keep paired image and text representations close in the representation space while keeping non-paired representations far away. 
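Such a cross-modal contrastive objective can be sketched with an InfoNCE-style loss over aligned image-text pairs. This is a generic, pure-Python sketch of the idea rather than CMCL's exact formulation:

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def contrastive_pair_loss(image_embs, text_embs, temperature=0.1):
    """InfoNCE-style loss over aligned image/text embedding lists:
    the i-th image is pushed to score highest against the i-th text,
    while all other texts in the batch act as negatives."""
    loss = 0.0
    for i, img in enumerate(image_embs):
        logits = [dot(img, txt) / temperature for txt in text_embs]
        log_z = math.log(sum(math.exp(l) for l in logits))
        loss += -(logits[i] - log_z)      # -log softmax prob of the true pair
    return loss / len(image_embs)
```

The loss is near zero when each image's aligned text dominates the softmax, and large when pairs are mismatched, which is exactly the "paired close, non-paired far" behavior described above.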
All of them are encoded by the same unified-modal Transformer in pairs or individually, and the representations of images and texts are extracted to compute the contrastive loss.", "id": "f772b5dd-3e74-483e-b2a9-a1cf58e4dcb3", "level": "paragraph", "origin_cites_number": 1, "parent_id": "df91898d-8ab1-4769-af61-bf2d096e34b6", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Other Data Modality" ], [ "subsection", "SOTA Unified PFMs" ], [ "paragraph", "Single-transformer Model" ], [ "paragraph", "Comb-transformer Model" ] ], "subsections": [], "title": "Comb-transformer Model" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Section 7}\nAs the number of parameters of the pretraining model increases, the pretraining model requires more memory and computing resources. It increases the training cost of PFMs and limits their deployment on resource-constrained devices.\nTherefore, to improve the efficiency of the pretraining model, PFM improves computational efficiency from the following two aspects: model efficiency and model compression.\nThe model efficiency and compression of the PFM refer to simplifying the redundancy of model parameters and structure. 
The goal is to obtain a model with fewer parameters and a more compact structure without degrading task performance.", "id": "4e7df0ae-6f27-4db8-94cf-15c255b077e4", "level": "section", "origin_cites_number": 0, "parent_id": "160c6790-e92e-4d72-a546-5d2f900ac104", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Other Advanced Topics on PFMs" ] ], "subsections": [ "b6ec255c-bd2a-49b9-a056-b08c03e3ad37", "d6377901-42cc-4855-b17d-4fd5c8c828f7", "b323c7f8-6f4c-43df-b89c-cb635ea082af" ], "title": "Other Advanced Topics on PFMs" }, { "cite_extract_rate": 1, "cites": [ 1557 ], "content": "Model efficiency research is devoted to exploring more efficient pretraining methods that pretrain large-scale PFMs at a lower cost. More efficient learning algorithms require more effective training methods and more efficient model architectures. Traditional pretraining tasks may be inefficient. For example, the commonly used masked token prediction task requires the model to predict masked tokens based on context. However, the masked tokens in a sample are usually only a subset of the input tokens, and the model can learn only from this part of the tokens, so the training efficiency is low. To solve this problem, ELECTRA~ proposes an RTD task that predicts whether each input token has been replaced by another token, which enables ELECTRA to learn from all input tokens. In addition to effective training methods, a more efficient architecture can also improve the efficiency of PFMs. 
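The replaced-token-detection idea can be sketched as follows. The generator here is a stand-in callable, whereas ELECTRA uses a small masked language model; note that every position receives a label, so a discriminator trained on this data learns from all input tokens:

```python
import random

def make_rtd_example(tokens, generator_sample, mask_prob=0.15, seed=0):
    """Corrupt a fraction of tokens with generator samples and label every
    position: 1 = replaced, 0 = original (replaced-token detection)."""
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            new = generator_sample(tok, rng)
            corrupted.append(new)
            labels.append(1 if new != tok else 0)  # generator may guess right
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels
```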
For most PFMS based on the Transformer algorithm, a more efficient model architecture can be obtained by reducing the complexity of the Transformer algorithm.", "id": "b6ec255c-bd2a-49b9-a056-b08c03e3ad37", "level": "subsection", "origin_cites_number": 1, "parent_id": "4e7df0ae-6f27-4db8-94cf-15c255b077e4", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Other Advanced Topics on PFMs" ], [ "subsection", "Model Efficiency" ] ], "subsections": [], "title": "Model Efficiency" }, { "cite_extract_rate": 1, "cites": [ 856, 7579, 1150, 7580 ], "content": "Model compression requires less computing resources and memory. \nIt is a potential approach to reduce the model size and enhance computation efficiency. The model compression strategy can be divided into two ways: parameter compression and structure compression.\nThe methods of parameter compression include parameter pruning, parameter quantization, low-rank decomposition, and parameter sharing. Parameter pruning refers to designing evaluation criteria for model parameters to delete redundant parameters based on a sizeable PFM. For example, Compressing BERT~ prunes BERT before training while maintaining the performance equivalent to that of the original model. Parameter quantization is the quantization of model parameters from 32-bit full-precision floating-point numbers to lower-order numbers. For example, Q8BERT~ uses 8-bit quantization to compress parameters fourfold with little impact on model performance. Low-rank decomposition is to reduce the dimension of a high-dimensional parameter vector into a sparse low-dimensional vector. Parameter sharing refers to the structured matrix or clustering methods to map model parameters and reduce the number of parameters. 
For example, ALBERT~ uses factorized embedding parameterization and cross-layer parameter sharing to reduce the number of parameters in the model.\nStructure compression refers to compact networks and knowledge distillation. A compact network means reducing the number of parameters and calculations by designing a new compact network structure. \nKnowledge distillation refers to the transfer of knowledge from a larger teacher model to a smaller student model through the use of soft labels, etc. DistilBERT~, for example, uses the knowledge distillation method to compress BERT, reducing the size of the BERT model by 40\% while retaining 97\% of its language comprehension.", "id": "d6377901-42cc-4855-b17d-4fd5c8c828f7", "level": "subsection", "origin_cites_number": 4, "parent_id": "4e7df0ae-6f27-4db8-94cf-15c255b077e4", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Other Advanced Topics on PFMs" ], [ "subsection", "Model Compression" ] ], "subsections": [], "title": "Model Compression" }, { "cite_extract_rate": 1, "cites": [ 1445 ], "content": "\\label{Section 8}\nThe security risks, social bias, and data privacy of PFMs have become an important research topic. Qiu et al.~ recognize that deep neural networks can be attacked by adversarial samples, which mislead the model to produce false predictions. Due to the excellent portability of pretraining models, they have been widely used in NLP, CV, and GL. However, it has been found that the pretrained model is susceptible to the influence of adversarial samples. A tiny perturbation of the original input may mislead the pretrained model to produce specific false predictions. 
Meanwhile, it is possible to recover training data samples by querying the PFMs, which can cause privacy leakage.", "id": "b323c7f8-6f4c-43df-b89c-cb635ea082af", "level": "subsection", "origin_cites_number": 1, "parent_id": "4e7df0ae-6f27-4db8-94cf-15c255b077e4", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Other Advanced Topics on PFMs" ], [ "subsection", "Security and Privacy" ] ], "subsections": [ "4e4f0f17-6eee-4d4f-9c0a-f4ece492ef08" ], "title": "Security and Privacy" }, { "cite_extract_rate": 0.5, "cites": [ 2565 ], "content": "Adversarial samples originated in the image domain. Image adversarial samples are hard to recognize because the change is nearly invisible; for example, only one pixel of the image is modified. Humans do not easily detect such a disturbance, but the neural network is misled by the modified image, which is the original purpose of the adversarial sample.\nSome work has found that pretrained LMs are vulnerable in some scenarios. Jin et al.~ successfully attack the three target models of BERT, CNN, and RNN by generating natural adversarial samples, which indicates that current language processing models still have large room for improvement in terms of security. \nHowever, such attacks are more difficult to achieve in NLP due to the distinct discreteness of language. 
In particular, the generation of adversarial samples in the text must take into account linguistic characteristics to ensure that the sample's syntax and fluency are not harmed while affecting the model's output.\nFor example,~ uses adversarial samples to attack the fine-tuning stage of the BERT model for text classification and entailment successfully.~ combines the sememe-based word substitution method and search algorithm based on particle swarm optimization to generate adversarial samples.", "id": "4e4f0f17-6eee-4d4f-9c0a-f4ece492ef08", "level": "paragraph", "origin_cites_number": 2, "parent_id": "b323c7f8-6f4c-43df-b89c-cb635ea082af", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Other Advanced Topics on PFMs" ], [ "subsection", "Security and Privacy" ], [ "paragraph", "Generation Adversarial Samples" ] ], "subsections": [ "6e00db12-8f5e-4121-8215-7845b116cb7d", "a32d62ef-465a-457c-9f5e-a566801d437d", "f262522b-e5db-439a-a24b-091afae576f3", "2d28f3a9-be90-47f3-b05f-3ed80b7013bf" ], "title": "Generation Adversarial Samples" }, { "cite_extract_rate": 1, "cites": [ 7586, 2465 ], "content": "Some unrelated human factors can also mislead the PFM to make wrong predictions. For example,~ discovers that the performance of BERT is limited in the reasoning task due to utilizing false statistical information in the dataset, which dramatically affects the performance by destroying this property.~ defines universal adversarial triggers. 
When these triggers are concatenated to any input, they can induce the model to generate specific predictions.", "id": "6e00db12-8f5e-4121-8215-7845b116cb7d", "level": "paragraph", "origin_cites_number": 2, "parent_id": "4e4f0f17-6eee-4d4f-9c0a-f4ece492ef08", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Other Advanced Topics on PFMs" ], [ "subsection", "Security and Privacy" ], [ "paragraph", "Generation Adversarial Samples" ], [ "paragraph", "Model Defects" ] ], "subsections": [], "title": "Model Defects" }, { "cite_extract_rate": 1, "cites": [ 2566, 2567 ], "content": "There are still many methods to manipulate the predicted results of the pretraining model employing a backdoor attack.~ demonstrates that it is possible to construct a weight poisoning attack in which vulnerabilities are injected into the pretrained weights; the backdoor is exposed after the fine-tuning stage. Attackers can then easily manipulate model predictions by injecting arbitrary keywords.~ shows that PFMs in NLP can be manipulated by modifying the model corpus. The ``meaning'' of new words or existing words can be controlled by changing their weight parameters.", "id": "a32d62ef-465a-457c-9f5e-a566801d437d", "level": "paragraph", "origin_cites_number": 2, "parent_id": "4e4f0f17-6eee-4d4f-9c0a-f4ece492ef08", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Other Advanced Topics on PFMs" ], [ "subsection", "Security and Privacy" ], [ "paragraph", "Generation Adversarial Samples" ], [ "paragraph", "Backdoor Attacks" ] ], "subsections": [], "title": "Backdoor Attacks" }, { "cite_extract_rate": 0.5, "cites": [ 8466, 2470 ], "content": "The human-in-the-loop method~ has been proposed and applied to generate more natural, efficient, and diversified adversarial samples. \nSome defense approaches have been proposed to defend against such attacks. 
designs an auxiliary anomaly detection classifier and uses a multi-task learning procedure to defend against adversarial samples. On the other hand, some defects in the PFM may be inherited by the custom models in transfer learning, \nsuch as the adversarial vulnerabilities and backdoors mentioned above. To mitigate this issue, ~ proposes a relevant model slicing technique to reduce defect inheritance during transfer learning while retaining useful knowledge from the PFM.", "id": "f262522b-e5db-439a-a24b-091afae576f3", "level": "paragraph", "origin_cites_number": 4, "parent_id": "4e4f0f17-6eee-4d4f-9c0a-f4ece492ef08", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Other Advanced Topics on PFMs" ], [ "subsection", "Security and Privacy" ], [ "paragraph", "Generation Adversarial Samples" ], [ "paragraph", "Defense Against Attacks" ] ], "subsections": [], "title": "Defense Against Attacks" }, { "cite_extract_rate": 1, "cites": [ 2342 ], "content": "LLMs and other PFMs have been trained on private datasets~. The researchers have discovered that by querying the massive LMs, it is feasible to recover specific training samples. An adversary may, for instance, obtain Internet Relay Chat (IRC) discussions and personally identifiable information. Even worse, because large models have so many parameters, it is simple for a PFM to memorize or learn private information, making larger models more prone to attack than smaller ones. 
We must take privacy-preserving measures into account during all PFM processes, including data processing, model training, model inference, and system deployment, in order to reduce the risks of privacy leakage.", "id": "2d28f3a9-be90-47f3-b05f-3ed80b7013bf", "level": "paragraph", "origin_cites_number": 1, "parent_id": "4e4f0f17-6eee-4d4f-9c0a-f4ece492ef08", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Other Advanced Topics on PFMs" ], [ "subsection", "Security and Privacy" ], [ "paragraph", "Generation Adversarial Samples" ], [ "paragraph", "Data Privacy in PFMs" ] ], "subsections": [], "title": "Data Privacy in PFMs" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{Section 11}\nThe PFM can avoid training models from scratch, which is a breakthrough from weak AI to general AI. At present, due to the characteristics of PFMs, such as large-scale parameters, a large amount of training data, and high computational complexity, there are still many technical challenges in PFMs.\nWe summarize the future research challenges of PFMs from four perspectives: data, foundation, model design, and upstream and downstream tasks. 
Meanwhile, we point out some open problems in the future research direction.", "id": "bff33ab9-dc91-49d5-b524-ed2d9a01f149", "level": "section", "origin_cites_number": 0, "parent_id": "160c6790-e92e-4d72-a546-5d2f900ac104", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Future Research Challenges and Open Problems" ] ], "subsections": [ "703d4020-c300-403a-bb35-310287c2df5c", "21b04ac6-7ed8-41ac-acef-52fafa442b87", "ba3eae27-7b76-4296-9f55-60e3c677aaff", "8e68d626-4b06-4303-94fc-991a84acdb04", "787da676-2ca2-4d0e-95bb-bbe4a80772ad" ], "title": "Future Research Challenges and Open Problems" }, { "cite_extract_rate": 0, "cites": [], "content": "Most pretrained datasets are for single modes and single languages. It is very important for the development of PFMs to construct pretrained datasets for multimodal, multi-lingual, and graph data.\nFor the characteristics of these data, the existing technical challenges are as follows:", "id": "703d4020-c300-403a-bb35-310287c2df5c", "level": "subsection", "origin_cites_number": 0, "parent_id": "bff33ab9-dc91-49d5-b524-ed2d9a01f149", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Future Research Challenges and Open Problems" ], [ "subsection", "Challenges on Data" ] ], "subsections": [ "946ea1a9-3c58-4008-ba51-088a2e8673f0" ], "title": "Challenges on Data" }, { "cite_extract_rate": 0, "cites": [], "content": "Unlike NLP and CV, except for the reusable nodes in a few molecular and protein networks, most of the nodes and edges in the graph data do not have a large amount of unlabeled data for pretraining.\nMeanwhile, the pretraining research of the graph model is still in its initial state. Besides, data from the Internet of Things (IoT) will be enormous and contains rich physical world information. 
For example, inertial measurement unit sensor data can capture users' social activity information~. \nThe theoretical basis, the various definitions of pretext tasks, and the augmentation design of contrastive learning are all still immature,\nand further research is urgently needed.", "id": "946ea1a9-3c58-4008-ba51-088a2e8673f0", "level": "paragraph", "origin_cites_number": 2, "parent_id": "703d4020-c300-403a-bb35-310287c2df5c", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Future Research Challenges and Open Problems" ], [ "subsection", "Challenges on Data" ], [ "paragraph", "Data Deficiencies" ] ], "subsections": [ "5b4feddd-f928-4126-892b-2057d198e2a5", "7ad181ab-c709-4d51-976e-152f87b40c6d" ], "title": "Data Deficiencies" }, { "cite_extract_rate": 0, "cites": [], "content": "Some research work has been done on multimodal PFMs, such as text and image, text and audio, etc. These are mostly PFMs between two modalities. At present, \nthe learning of multimodal PFMs requires new multimodal datasets, which demand establishing alignments between data of different modalities. 
Thus, the construction of multimodal datasets is also an urgent problem to be solved.", "id": "5b4feddd-f928-4126-892b-2057d198e2a5", "level": "paragraph", "origin_cites_number": 0, "parent_id": "946ea1a9-3c58-4008-ba51-088a2e8673f0", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Future Research Challenges and Open Problems" ], [ "subsection", "Challenges on Data" ], [ "paragraph", "Data Deficiencies" ], [ "paragraph", "Multimodal PFM" ] ], "subsections": [], "title": "Multimodal PFM" }, { "cite_extract_rate": 0, "cites": [], "content": "The multi-lingual PFM solves the resource shortage problem in multiple languages, and it aids in the achievement of new improvements in QA, text summarization, low-resource neural machine translation, and so on. However, the current PFM is still a mask LM. To improve the performance of the multi-LM, some suitable new tasks need to be added. In addition, multi-lingual vocabularies are much larger than single-language vocabularies, resulting in a sharp increase in model parameters to be learned.", "id": "7ad181ab-c709-4d51-976e-152f87b40c6d", "level": "paragraph", "origin_cites_number": 0, "parent_id": "946ea1a9-3c58-4008-ba51-088a2e8673f0", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Future Research Challenges and Open Problems" ], [ "subsection", "Challenges on Data" ], [ "paragraph", "Data Deficiencies" ], [ "paragraph", "Multi-lingual PFM" ] ], "subsections": [], "title": "Multi-lingual PFM" }, { "cite_extract_rate": 0, "cites": [], "content": "For a PFM, a theoretical foundation is essential to model performance, whether it is a ``black box'' or ``white box'' method. 
\nThe foundations studied mainly include the theoretical basis, semantic understanding, and explainability.", "id": "21b04ac6-7ed8-41ac-acef-52fafa442b87", "level": "subsection", "origin_cites_number": 0, "parent_id": "bff33ab9-dc91-49d5-b524-ed2d9a01f149", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Future Research Challenges and Open Problems" ], [ "subsection", "Challenges on Foundation" ] ], "subsections": [ "6c4648b1-f673-49b8-a66f-fea4a96c2792" ], "title": "Challenges on Foundation" }, { "cite_extract_rate": 0, "cites": [], "content": "SSL in CV borrows its experience from NLP. There is no profound theory to support the many tentative experiments, and further exploration has no handbook. Although several theoretical analyses try to understand the collapse of pretraining or the generalization ability of the learned representation, the lack of a theoretical foundation is still a huge cloud hanging over SSL.", "id": "6c4648b1-f673-49b8-a66f-fea4a96c2792", "level": "paragraph", "origin_cites_number": 0, "parent_id": "21b04ac6-7ed8-41ac-acef-52fafa442b87", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Future Research Challenges and Open Problems" ], [ "subsection", "Challenges on Foundation" ], [ "paragraph", "Lack of Theoretical Foundation" ] ], "subsections": [ "fe0f138a-36b3-4151-9e75-adf14228bddb" ], "title": "Lack of Theoretical Foundation" }, { "cite_extract_rate": 0, "cites": [], "content": "Does the pretrained LM learn the meaning of the language, or just rely on corpus learning? Many models perform well on various datasets with helpful information that can be extracted, where some approaches even exceed human levels. \nHowever, the performance is poor on domain datasets or relatively small datasets. 
The models cannot achieve stable performance across different downstream tasks.\nThis means that the model cannot serve the real purpose of human language use.", "id": "fe0f138a-36b3-4151-9e75-adf14228bddb", "level": "paragraph", "origin_cites_number": 0, "parent_id": "6c4648b1-f673-49b8-a66f-fea4a96c2792", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Future Research Challenges and Open Problems" ], [ "subsection", "Challenges on Foundation" ], [ "paragraph", "Lack of Theoretical Foundation" ], [ "paragraph", "Semantic Understanding" ] ], "subsections": [], "title": "Semantic Understanding" }, { "cite_extract_rate": 0, "cites": [], "content": "Most existing structures of PFMs are tried for text, image, and graph.\nThe primary method is to increase data, improve computation power, and design training procedures to achieve better results.\nHow to make a trade-off between data, computing resources, and predictive performance is worth studying.", "id": "ba3eae27-7b76-4296-9f55-60e3c677aaff", "level": "subsection", "origin_cites_number": 0, "parent_id": "bff33ab9-dc91-49d5-b524-ed2d9a01f149", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Future Research Challenges and Open Problems" ], [ "subsection", "Challenges on Model Design" ] ], "subsections": [ "2c764781-8ae6-40ad-b248-c37ece960b20" ], "title": "Challenges on Model Design" }, { "cite_extract_rate": 1, "cites": [ 55 ], "content": "There are many attempts at model design, such as generation-based models in the CV area.\nHowever, GAN-based approaches are not popular for the following two reasons: 1) the discriminator has learned meaningful feature representations, but they are forgotten during training~; 2) mode collapse causes the generator to output samples in a single mode to cheat the discriminator. 
As a result, although researchers attempt to apply GAN-based approaches on SSL for pretraining, the difficulties in the convergence of the discriminator and the divergence of the generator hinder development and progress in this area.", "id": "2c764781-8ae6-40ad-b248-c37ece960b20", "level": "paragraph", "origin_cites_number": 1, "parent_id": "ba3eae27-7b76-4296-9f55-60e3c677aaff", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Future Research Challenges and Open Problems" ], [ "subsection", "Challenges on Model Design" ], [ "paragraph", "Model Variety" ] ], "subsections": [ "823c2054-bfcf-4b48-9780-d9860a99d2a8", "343e1c7f-c00e-4ed2-bd06-8cdc24680627", "db63dd62-364b-4d08-bf56-8a8cb72252c8" ], "title": "Model Variety" }, { "cite_extract_rate": 0, "cites": [], "content": "With the wide application of the Transformer and the pretraining model showing a general trend of growth, the computational complexity of the pretraining model has become the focus of attention. Due to the huge hardware requirements of model training and other reasons, the high threshold makes it difficult for researchers to train from scratch. BERT-base and GPT-3 contain about 108 million parameters and 175 billion parameters, respectively. This is not conducive to the development of relevant research work. There are some works for pretraining model compression, such as ALBERT, which has fewer parameters and better performance than BERT-base. Even these improved models still require powerful computing equipment, making them difficult to apply universally. 
Reducing the high computing cost is one of the main challenges in future research.", "id": "823c2054-bfcf-4b48-9780-d9860a99d2a8", "level": "paragraph", "origin_cites_number": 0, "parent_id": "2c764781-8ae6-40ad-b248-c37ece960b20", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Future Research Challenges and Open Problems" ], [ "subsection", "Challenges on Model Design" ], [ "paragraph", "Model Variety" ], [ "paragraph", "Model Compression" ] ], "subsections": [], "title": "Model Compression" }, { "cite_extract_rate": 0, "cites": [], "content": "Although many researchers have designed different pretext tasks for the pretraining, the main problem remains on how to design robust pretext tasks and judge the performance before large-scale computations. In addition, how to compare these proposed methods fairly is also a big issue. As for NLP, deep neural networks are vulnerable to adversarial inputs because of their linear characteristics. Although pretraining models perform well on different NLP tasks, most are based on deep neural networks, which generally have poor robustness. Operations such as cutting and rotating do not change the nature of the image in CV. In contrast, operations such as adding, deleting, and substituting a word in the text are likely to affect the semantics of the text. 
Therefore, how to improve the robustness of the PFM in NLP is a technical challenge.", "id": "343e1c7f-c00e-4ed2-bd06-8cdc24680627", "level": "paragraph", "origin_cites_number": 0, "parent_id": "2c764781-8ae6-40ad-b248-c37ece960b20", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Future Research Challenges and Open Problems" ], [ "subsection", "Challenges on Model Design" ], [ "paragraph", "Model Variety" ], [ "paragraph", "Model Robustness" ] ], "subsections": [], "title": "Model Robustness" }, { "cite_extract_rate": 0, "cites": [], "content": "The PFMs are vulnerable to attack by adversarial examples, which can easily mislead the model to produce specific false predictions. Defending against such attacks is difficult due to the unique discreteness of language in the NLP field. Thus, the current PFMs have huge room for improvement in model anti-attack.", "id": "db63dd62-364b-4d08-bf56-8a8cb72252c8", "level": "paragraph", "origin_cites_number": 0, "parent_id": "2c764781-8ae6-40ad-b248-c37ece960b20", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Future Research Challenges and Open Problems" ], [ "subsection", "Challenges on Model Design" ], [ "paragraph", "Model Variety" ], [ "paragraph", "Model Anti-attack" ] ], "subsections": [], "title": "Model Anti-attack" }, { "cite_extract_rate": 0, "cites": [], "content": "The pretrained model in NLP, CV, and GL fields can achieve good performance on most upstream tasks, but not uniformly good performance on downstream tasks under fine-tuning and prompting. 
\nHow to achieve consistent results both on upstream and downstream tasks is still a challenge for the PFMs.", "id": "8e68d626-4b06-4303-94fc-991a84acdb04", "level": "subsection", "origin_cites_number": 0, "parent_id": "bff33ab9-dc91-49d5-b524-ed2d9a01f149", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Future Research Challenges and Open Problems" ], [ "subsection", "Challenges on Finetuning and Prompt" ] ], "subsections": [ "f4b581e6-41eb-48b0-a404-2de56f2bc71b" ], "title": "Challenges on Finetuning and Prompt" }, { "cite_extract_rate": 1, "cites": [ 7587 ], "content": "Google Research~ observed the nonlinear relationship between the performance of upstream and downstream tasks. The higher training accuracy with more data on the upstream tasks does not always lead to better performance on the target downstream tasks. This observation challenges the most intuitive understanding of the pretraining process. Even in the most extreme case, the performance of upstream and downstream is at odds.", "id": "f4b581e6-41eb-48b0-a404-2de56f2bc71b", "level": "paragraph", "origin_cites_number": 1, "parent_id": "8e68d626-4b06-4303-94fc-991a84acdb04", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Future Research Challenges and Open Problems" ], [ "subsection", "Challenges on Finetuning and Prompt" ], [ "paragraph", "Saturation Phenomena" ] ], "subsections": [ "66cfeca1-2e5e-4ab6-a9b5-d7e7f87894e0", "486763ac-84f2-465c-a879-dc811c267bd4" ], "title": "Saturation Phenomena" }, { "cite_extract_rate": 0, "cites": [], "content": "There are too many self-supervised tasks, also known as pretext tasks. The pretext task can be used for any downstream tasks, such as detection and classification. 
It is difficult to match the relationship between pretext tasks and downstream tasks.", "id": "66cfeca1-2e5e-4ab6-a9b5-d7e7f87894e0", "level": "paragraph", "origin_cites_number": 0, "parent_id": "f4b581e6-41eb-48b0-a404-2de56f2bc71b", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Future Research Challenges and Open Problems" ], [ "subsection", "Challenges on Finetuning and Prompt" ], [ "paragraph", "Saturation Phenomena" ], [ "paragraph", "Pretext Task" ] ], "subsections": [], "title": "Pretext Task" }, { "cite_extract_rate": 0, "cites": [], "content": "Much of the pretraining on the graph is based on task-specific graphs. Different tasks construct different graphs, where nodes need to be reused. This makes it impossible to pretrain on graphs with as much data as in NLP and CV.", "id": "486763ac-84f2-465c-a879-dc811c267bd4", "level": "paragraph", "origin_cites_number": 0, "parent_id": "f4b581e6-41eb-48b0-a404-2de56f2bc71b", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Future Research Challenges and Open Problems" ], [ "subsection", "Challenges on Finetuning and Prompt" ], [ "paragraph", "Saturation Phenomena" ], [ "paragraph", "Task-based Graph" ] ], "subsections": [], "title": "Task-based Graph" }, { "cite_extract_rate": 1, "cites": [ 1582 ], "content": "First of all, a big convergence of text, image, graph, and multimodal pretraining is expected. As of this writing, no work has considered graphs in its unified PFMs. All of the SOTA unified models mainly focus on the language, vision, and language-vision tasks, while neglecting the importance of the graph in the data domain. Second, a unified backbone architecture for unified PFMs will become more popular in future research. 
It can be seen that a unified PFM model which only has a large-scale transformer as its backbone, i.e., a single-transformer model, has received more attention from researchers than other types of unified PFMs. Third, a unified PFM is expected to achieve SOTA transfer performance for all different tasks in all data domains, including text, image, graph, and multimodalities. Most unified PFMs are only outstanding in a single data domain, whereas the performance in other domains is not competitive. BEiT-3~ is a great example of this research direction in both vision and vision-language tasks. Besides, in terms of RL usage in PFMs, even though ChatGPT built a milestone in NLP, no significant research has yet been published in CV and GL. More work in this direction is expected in the future.", "id": "787da676-2ca2-4d0e-95bb-bbe4a80772ad", "level": "subsection", "origin_cites_number": 1, "parent_id": "bff33ab9-dc91-49d5-b524-ed2d9a01f149", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Future Research Challenges and Open Problems" ], [ "subsection", "Open Problems for Future PFMs" ] ], "subsections": [], "title": "Open Problems for Future PFMs" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{Section 12}\nExisting PFMs in text, image, and graph domains are principally summarized in this survey. \nFirstly, we introduce the basic components of NLP, CV, and GL.\nThen, we provide a summary of existing models designed for pretraining in the three domains and summarize the necessary information in terms of model structures.\nFurthermore, we study some other research for PFMs, including other advanced and unified PFMs, model efficiency and compression, and security and privacy. 
\nFinally, we present the main challenges and open problems in PFM research.\n\\pagebreak\n\\appendix", "id": "90d7385e-341b-4735-9bff-17abb31ffdc4", "level": "section", "origin_cites_number": 0, "parent_id": "160c6790-e92e-4d72-a546-5d2f900ac104", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Components}", "id": "e28e7bc1-61ac-4024-be2b-2259d31465ae", "level": "section", "origin_cites_number": 0, "parent_id": "160c6790-e92e-4d72-a546-5d2f900ac104", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Basic Components" ] ], "subsections": [ "f677f474-2e16-4eea-ac20-04add9ce0c07", "be842727-2b8c-42fb-8055-dd450b50d3ef" ], "title": "Basic Components" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{table*}[htbp]\n\t\\centering\n\t\\caption{Commonly used notations on NLP and graph.}\n\t\\label{tab:graph_notation}\n\t\\resizebox{\\textwidth}{!}{\n\t\\begin{tabular}{c|c|c|c} \n\t\t\\hline \n\t\t \\multicolumn{2}{c}{\\textbf{NLP}} & \\multicolumn{2}{|c}{\\textbf{Graph}} \\\\\n\t\t\\hline\n\t\t\\textbf{Notations} & \\textbf{Descriptions} & \\textbf{Notations} & \\textbf{Descriptions} \\\\\n\t\t\\hline\n\t\t$N$ & The length of input text. &$|\\cdot|$ & The length of a set.\\\\\t\\hline\n\t $w_{i}$ & The i-th word in input text. & \t$\\mathcal{G}$ & The set of graphs. \\\\\t\\hline\n\t\t$|V|$ & The word corpus size. & \t$G$ & A graph. \\\\\t\\hline\n\t\t$H_{x}$& The representation of the input sequence. &\t$V$ & The set of nodes in the graph. \\\\\t\\hline\n\t$\\overrightarrow{\\boldsymbol{\\theta}_{\\text {f}}}$\t& The parameters for forward modeling. & \t$v$ & A node. 
\\\\\t\\hline\n\t$\\overleftarrow{\\boldsymbol{\\theta}_{\\text {b}}}$\t& The parameters for backward modeling. & \t$E$ & The set of edges in the graph. \\\\\t\\hline\n\t $\\theta$ & The shared parameter in all permutations. \t &\t$e_{ij}$ & An edge between $v_{i}$ and $v_{j}$. \\\\\t\\hline\n\t$Z_{N}$ & The set of all possible permutations of $T$.\t & \t$A$ & The adjacency matrix of a graph. \\\\\t\\hline\n\t $z_{T=t}$ & The $t$-th element of $z$.\t & \t$\\mathcal{T}$ & The set of node types in a graph. \\\\\t\\hline\n\t $z_{T<t}$ & The $[1, 2, \\ldots, t-1]$ elements of $z$. & \t$X$ & The feature matrix of a graph. \\\\\t\\hline\n\t$z$\t& A permutation of $T$. & \t$\\mathcal{Y}$ & The set of ground truth labels in a graph. \\\\\t\\hline\n\t$m$ & The dimension of the feature vector. & \t$D$ & The given graph data. \\\\\t\\hline\n\t $b_{1}$, $b_{2}$ & The bias values of the hidden layer and the output layer.\t & \t$M_{GL}$ & The GL model. \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t}\n\\end{table*}", "id": "f677f474-2e16-4eea-ac20-04add9ce0c07", "level": "subsection", "origin_cites_number": 0, "parent_id": "e28e7bc1-61ac-4024-be2b-2259d31465ae", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Basic Components" ], [ "subsection", "Basic Components on NLP" ] ], "subsections": [ "f906b05c-8704-4640-bca1-f1869e1847fe" ], "title": "Basic Components on NLP" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 7165, 8385, 7 ], "content": "With the rapid development of deep learning, LMs are more and more applicable to the pretraining of NLP models. The LM can estimate the probability of rationality of a paragraph of the text. 
\nThere are two main types of LMs: statistical LM and neural network LM.\n\myparagraph{\textbf{Statistical LM}}\nThe statistical LM is a mathematical model that captures the context-related characteristics of natural language from the perspective of probability and statistics. The core of statistical LMs is to determine the probability of a sentence appearing in a text.\nAs the theoretical basis of the probabilistic LM, the N-gram model profoundly influences the subsequent LM. It plays a pivotal role in the field of the LM. \nThe N-gram LM introduces the Markov hypothesis, which assumes that the probability of the occurrence of the current word only depends on the nearest $n-1$ words.\nThe maximum likelihood probability of a word $w_{i}$ can be calculated by\n\begin{equation}\n p\left(w_{i} \mid w_{1}, w_{2}, \ldots, w_{i-1} \right)= \\ p\left(w_{i} \mid w_{i-n+1}, w_{i-n+2}, \ldots, w_{i-1} \right) = \\ \frac{C\left(w_{i-n+1}, w_{i-n+2}, \ldots, w_{i}\right)}{C\left(w_{i-n+1}, w_{i-n+2}, \ldots, w_{i-1} \right)},\n\end{equation}\nwhere $T=[w_{1}, w_{2}, \ldots, w_{N}]$ is the text sequence and $C(w_{i-n+1}, w_{i-n+2}, \ldots, w_{i})$ is the co-occurrence frequency of $(w_{i-n+1}, w_{i-n+2}, \ldots, w_{i})$. \nThe probability of the whole sequence is then calculated according to the chain rule\n\begin{equation}\np\left(w_{1}, w_{2}, \ldots, w_{N}\right)=\prod_{i=1}^{N} p\left(w_{i} \mid w_{1}, w_{2}, \ldots, w_{i-1}\right).\n\end{equation}\nThe N-gram model uses the conditional probabilities of each word in the sequence to represent the joint probability of the whole text sequence.\nWhen $n$ is large, the constraint on the occurrence of the next word in the sequence is stronger, but the frequency counts become sparser. 
When $N$ is small, the statistics are more reliable and generalize better, but the constraint on the next word is weaker.\n\\myparagraph{\\textbf{Neural LM}}\nThe statistical LM adopts maximum likelihood estimation, which is intuitive and easy to understand. However, problems remain, such as the lack of long-term dependencies, the rapid growth of the parameter space, and data sparsity. Therefore, neural networks are introduced to map the LM into a continuous space. Neural LMs use distributed representations of words to model natural language sequences. Unlike class-based N-gram models, neural LMs can recognize that two words are similar without losing the ability to encode each word as distinct from the others, and they can be directly used for NLP tasks. This part mainly introduces Feedforward Neural Networks (FFNN), Recurrent Neural Networks (RNN), and pretrained LMs.\n\\begin{figure*}[!t]\n \\centering\n \\includegraphics[width=\\linewidth]{pictures/picture11.pdf}\n \\caption{The model architectures of the feedforward neural network, the recurrent neural network and pretrained LMs. $H^{1,2}$, $H^{2,3}$ and $H^{1,3}$ are the weight matrices used to connect each layer.}\n \\label{picture1}\n\\end{figure*}\nAs shown in Fig.~\\ref{picture1} (a), the FFNN calculates the probability of $w_{i}$ from all preceding words $x=[w_{1}, \\ldots, w_{i-1}]$. To predict the conditional probability of $w_{i}$, each word in $x$ is mapped into a continuous feature vector space through a shared projection matrix $M \\in R^{|V| \\times m}$, where $|V|$ is the vocabulary size and $m$ is the dimension of the feature vector. 
The output is represented as\n\\begin{equation}\ny=b_{2}+H^{1,3}x+H^{2,3} \\tanh (b_{1}+H^{1,2}x),\n\\end{equation}\nwhere $H^{1,2}$, $H^{2,3}$ and $H^{1,3}$ are the weight matrices used to connect each layer, and $b_{1}$ and $b_{2}$ are the bias values of the hidden layer and the output layer, respectively.\nThe FFNN structure captures only a limited amount of preceding context and restricts the length of the input sequence. Therefore, the RNN LM comes into being. As shown in Fig.~\\ref{picture1} (b), the RNN can accept input of any length. When the input window is moved, its internal state mechanism avoids repeated calculation, and parameter sharing further reduces the number of model parameters. Therefore, compared with the FFNN, the RNN has a clear advantage.\nA pretrained LM obtains a set of model parameters by pretraining on some tasks; the model is initialized with these parameters and then further trained, which effectively improves performance.\nCommonly used pretraining models produce either fixed embeddings (Word2vec~, GloVe~, etc.) or contextual embeddings (Embeddings from LMs (ELMo)~, Generative Pretrained Transformer (GPT)~ and Bidirectional Encoder Representations from Transformers (BERT)~, etc.). \nHere, we give an example of the GPT model, as shown in Fig.~\\ref{picture1} (c). It adopts a two-stage process. In the first stage, the Transformer decoder is used as the basic unit of the model to perform text prediction. 
In the second stage, GPT is initialized with the pretrained parameters and fine-tuned separately for different downstream tasks.", "id": "f906b05c-8704-4640-bca1-f1869e1847fe", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "f677f474-2e16-4eea-ac20-04add9ce0c07", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Basic Components" ], [ "subsection", "Basic Components on NLP" ], [ "subsubsection", "Language Model" ] ], "subsections": [], "title": "Language Model" }, { "cite_extract_rate": 0, "cites": [], "content": "Due to the extensive use of graph data in many fields, some communities (e.g., chemistry, protein, and social network) have recently focused on the study of graph pretraining.\nThese pretraining models encode graph attributes, structures, and other information into node representations from multiple perspectives by designing different pretext tasks, which are used to optimize downstream tasks.\nIn this section, we introduce the definition of the basic concepts of graphs, and then provide a formal definition of the PFM on the graph.", "id": "be842727-2b8c-42fb-8055-dd450b50d3ef", "level": "subsection", "origin_cites_number": 0, "parent_id": "e28e7bc1-61ac-4024-be2b-2259d31465ae", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Basic Components" ], [ "subsection", "Basic Components on GL" ] ], "subsections": [ "eb020326-3155-4fe2-aa54-074e84129494", "ac83e171-82b7-40c5-9ed7-19af337811f0" ], "title": "Basic Components on GL" }, { "cite_extract_rate": 0, "cites": [], "content": "Unless particularly specified, the notations used in this article are illustrated in Table~\\ref{tab:graph_notation}.\nWe use $\\mathcal{G}=\\{G_{i}\\}^{N}_{i}$ to represent a set of graphs, where $N$ represents the number of graphs.\nDepending on the graph's definition of the edges and 
nodes, graph data can be classified into the following types.\n\\begin{Def}\n\t\\label{def: unattr_graph}\n\tAn \\textbf{unattributed graph} is $G=(V,E)$, where $v \\in V$ is a node, $e \\in E$ is an edge, and naturally $E \\subseteq V \\times V$.\n\tAdjacency matrix $A \\in \\mathbb{R}^{n \\times n}$ represents the topology of graph $G$, where $n=|V|$. $A_{i,j}=1$ denotes there is an edge between node $v_{i}$ and $v_{j}$, otherwise $A_{i,j}=0$.\n\\end{Def}\n\\begin{Def}\n\t\\label{def: attr_graph}\n\tAn \\textbf{attributed graph} is $G=(V,E,X_{v},X_{e})$, where $X_{v} \\in \\mathbb{R}^{n \\times d_{v}}$ and $X_{e} \\in \\mathbb{R}^{m \\times d_{e}}$ are the feature matrices of nodes and edges, $|V|=n$, $|E|=m$, $d_{v}$ and $d_{e}$ denotes the feature dimensions of node and edge.\n\tIn fact, in most application scenarios, only nodes have attributes, and edges have no attributes or only weights.\n\\end{Def}\n\\begin{Def}\n\t\\label{def: undirected_graph}\n\tAn \\textbf{undirected graph} is $G=(V,E)$, where $e_{i,j} \\in E$ means an unordered node pair $(v_{i}, v_{j})$.\n\tIn particular, the adjacency matrix $A$ of the undirected graph is a symmetric matrix (i.e., $A_{i,j}=A_{j,i}$).\n\\end{Def}\n\\begin{Def}\n\t\\label{def: directed_graph}\n\tA \\textbf{directed graph} is $G=(V,E)$, where $e_{i,j} \\in E$ means an ordered node pair $(v_{i}, v_{j})$.\n\\end{Def}\n\\begin{Def}\n\t\\label{def: homo_graph}\n\t$G$ has a node-type mapping function $f_{v}: V \\to \\mathcal{T}^{v}$ and an edge-type mapping function $f_{e}: E \\to \\mathcal{T}^{e}$.\n\tWhen $|\\mathcal{T}^{v}|=|\\mathcal{T}^{e}|=1$, the graph $G=(V,E)$ is a \\textbf{homogeneous graph}.\n\tIn other words, all nodes in $G$ belong to a type, and all edges also belong to one type.\n\\end{Def}\n\\begin{Def}\n\t\\label{def: hete_graph}\n\tWhen $|\\mathcal{T}^{v}|>1$ and/or $|\\mathcal{T}^{e}|>1$, the graph $G=(V,E)$ is a \\textbf{heterogeneous graph}.\n\tIn particular, a heterogeneous graph must be an attributed 
graph.\n\\end{Def}", "id": "eb020326-3155-4fe2-aa54-074e84129494", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "be842727-2b8c-42fb-8055-dd450b50d3ef", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Basic Components" ], [ "subsection", "Basic Components on GL" ], [ "subsubsection", "Notations and Definitions of Graphs" ] ], "subsections": [], "title": "Notations and Definitions of Graphs" }, { "cite_extract_rate": 0, "cites": [], "content": "GL methods are usually used to solve machine learning tasks on graph data, and we introduce different settings (supervision mode and learning mode) for GL. \nBefore that, we first provide the notations of the corresponding mathematical formulation of GL.\n$C=\\{c_{1}, c_{2}, \\cdots, c_{K}\\}$ is a set of target components defined in a graph set $\\mathcal{G}$ ($G^{c_{i}} \\in \\mathcal{G}$), and $c_{i}$ is associated with a corresponding ground truth label $y_{i} \\in \\mathcal{Y}=\\{1,2,\\cdots,N_{y}\\}$, where $K$ denotes the total number of target components, and $N_{y}$ is the number of classes being predicted.\nThen the graph data can be represented as $D=\\{c_{i}, G^{c_{i}}, y_{i}\\}^{K}_{i}$, and a complete GL model $M_{GL}$ can also be determined by $y_{i} = M_{GL}(c_{i}, G^{c_{i}})$.\nFor instance, in a node classification task, $c_{i}$ is the node to be classified, $y_{i}$ denotes $c_{i}$'s label in graph $G^{c_{i}}$. 
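As a minimal sketch of the adjacency-matrix definition given earlier (toy 3-node graph; the helper name `adjacency` is ours, not the survey's), in Python:

```python
import numpy as np

def adjacency(n, edges, directed=False):
    """Build A: A[i, j] = 1 iff there is an edge between v_i and v_j."""
    A = np.zeros((n, n), dtype=int)
    for i, j in edges:
        A[i, j] = 1
        if not directed:
            A[j, i] = 1  # undirected graphs have a symmetric A
    return A

# Path graph v_0 - v_1 - v_2
A = adjacency(3, [(0, 1), (1, 2)])
```

For an undirected graph the result is symmetric ($A = A^{T}$), matching the definition of undirected graphs above.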
Similarly, in a node clustering task, $c_{i}$ is the node to be clustered, $y_{i}$ denotes the corresponding cluster label in graph $G^{c_{i}}$.\n\\myparagraph{\\textbf{Supervision Mode}}\n\\begin{figure*}[!t]\n\t\\centering\n\t\\includegraphics[scale=0.15]{fig_graph/fig_graph_super.pdf}\n\t\\caption{Schematic of different supervision modes.}\n\t\\label{fig:supervision}\n\\end{figure*}\nDepending on the source and scale of the training data, the supervision settings of GL can be divided into four types as shown in Figure~\\ref{fig:supervision}.\n\\textbf{Supervised} GL is the most common mode in the real scenario.\nGiven the target component $c_{i}$ and the corresponding ground truth label $y_{i}$, the goal is to minimize the loss function between the predicted label of the GL model (i.e., $y^{pred}_{i}=M_{GL}(c_{i}, G^{c_{i}})$) and the expected label $y_{i}$ of all $c_{i}$.\nCompared with supervised learning, \\textbf{unsupervised} GL refers to situations in which no labeled data is provided, only the attributes and structure distribution of graph data (i.e., $(c_{i}, G^{c_{i}})$) can be used.\n\\textbf{Self-supervised} GL is a special case of both supervised and unsupervised learning. 
\nSpecifically, self-supervised learning mainly uses pretext tasks (e.g., clustering, completion, and partition) to mine its own supervised information (i.e., pseudo-labels) from large-scale unsupervised graph data, and trains the GL model $M_{GL}$ through the self-supervised information, so that it can learn features valuable for downstream tasks.\nIn other words, the supervised information of self-supervised learning is not manually labeled, but the pretext tasks automatically construct supervised information from large-scale unsupervised data for supervised learning or training.\n\\textbf{Semi-supervised} learning is a combination of unsupervised and supervised learning, which aims at learning the data distribution to predict unlabeled data, addressing the difficulty of obtaining labeled data in real scenarios.\nIn GL, semi-supervised learning refers to the realization of pattern recognition given a few labeled samples and massive unlabeled data.\n\\myparagraph{\\textbf{Learning Mode}}\nThe GL model $M_{GL}$ is optimized by the given training samples, and tuned on the validation samples before being evaluated on the test samples.\nAccording to the visibility of the graph data at different stages, the learning settings of GL model $M_{GL}$ can be classified into two categories: inductive learning and transductive learning.\n\\begin{Def}\n\t\\label{def: ind_learning}\n\t\\textbf{Inductive Learning}, which is the most common setting in machine learning tasks, trains the model on labeled data and then tests on samples that have never appeared in the training stage.\n\tFormally, given a training sample $\\{(c_{i}, G^{c_{i}}, y_{i})\\}^{N_{l}}_{i=1}$, $\\{(c_{j}, G^{c_{j}})\\}^{N_{u}}_{j=1}$, where $N_{l}$ and $N_{u}$ are the numbers of labeled/unlabeled samples.\n\tInductive learning learns a function $f^{ind}: \\mathcal{G} \\mapsto \\mathcal{Y}$ so that $f^{ind}$ is expected to be a good classifier on the future graph data $\\{(c_{k}, G^{c_{k}})\\}$, beyond $\\{(c_{j}, 
G^{c_{j}})\\}^{N_{u}}_{j=1}$.\n\\end{Def}\n\\begin{Def}\n\t\\label{def: trans_learning}\n\t\\textbf{Transductive Learning} is different from inductive learning in that all samples are visible during both the training and testing stages.\n\tFormally, given a training sample $\\{(c_{i}, G^{c_{i}}, y_{i})\\}^{N_{l}}_{i=1}$, $\\{(c_{j}, G^{c_{j}})\\}^{N_{u}}_{j=1}$, transductive learning learns a function $f^{trans}: \\mathcal{G}^{l+u} \\mapsto \\mathcal{Y}^{l+u}$ so that $f^{trans}$ is expected to be a good classifier on the unlabeled data $\\{(c_{j}, G^{c_{j}})\\}^{N_{u}}_{j=1}$.\n\\end{Def}\nUnder the supervised setting (including semi-/self-supervised), the unified classifier optimization methods of inductive learning and transductive learning can be written as:\n\\begin{equation}\\label{eq:uniformloss}\n\t\\mathcal{L}= \\frac{1}{K} \\sum^{K}_{i=1} \\mathcal{L}(f^{(\\cdot)}_\\theta(c_{i}, G^{c_i}), y_i),\n\\end{equation}\nwhere $\\mathcal{L}$ is the cross-entropy loss, $c_{i}$ can be node, edge or subgraph of its associated graph $G^{c_{i}}$, and $f^{(\\cdot)}_\\theta$ denotes inductive/transductive function with parameter $\\theta$.\nCompared with using only one pretext task, some methods have designed some integration mechanisms to incorporate the advantages of multiple pretext tasks into a unified framework.", "id": "ac83e171-82b7-40c5-9ed7-19af337811f0", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "be842727-2b8c-42fb-8055-dd450b50d3ef", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Basic Components" ], [ "subsection", "Basic Components on GL" ], [ "subsubsection", "Learning Settings on Graphs" ] ], "subsections": [], "title": "Learning Settings on Graphs" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "8cba48d4-e174-4a7a-afb3-8ba526649a63", "level": "section", "origin_cites_number": 0, "parent_id": 
"160c6790-e92e-4d72-a546-5d2f900ac104", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Traditional Learning Methods" ] ], "subsections": [ "11190425-dd22-44ac-b1da-ce058fe83643", "aee7bd49-072e-4f3b-be42-b7fb570a094d", "fc42d69c-a979-435c-aa0b-00c38ca3fbcc" ], "title": "Traditional Learning Methods" }, { "cite_extract_rate": 0.4, "cites": [ 8565, 1684, 7165, 2568 ], "content": "NLP is a research field that integrates linguistics and computer science. Its main research tasks include part-of-speech tagging, named entity recognition, semantic role labeling, machine translation, question answering, sentiment analysis, text summarization, text classification, relationship extraction, event extraction, etc. \nThe LM can be considered the cornerstone of the downstream NLP tasks. \nIt has evolved through four stages: grammar-rule LMs, probabilistic LMs, neural network LMs, and pretrained LMs.\nA PFM trains on a large benchmark dataset to obtain a model which can solve new similar tasks, which has become a new hotspot in current LM research.\nWord representations play a significant role in downstream tasks, which is the basis of NLP.\nThe N-gram model preprocesses text features and encodes adjacent $N$ words as a group, which makes it overly dependent on the richness of the training corpus. Otherwise, data sparsity is likely to occur, and the computational complexity will increase exponentially with the increase of $N$.\nNeural Network LM (NNLM)~ first adopts the idea of word vectors, and its low-dimensional distributed word representations largely solve the discreteness problem of traditional word encodings. 
However, it is still challenging to solve the problem of high computational complexity.\nThe computational complexity of the word2vec model is independent of the selected window size but is determined by the dictionary size and the word vector dimension.\nMany downstream tasks can be significantly improved by training on a large corpus using word vector embedding after initial training.\nHowever, the problem of polysemy for the static word vector is still unsolved, and it still belongs to the shallow LM~~. Therefore, more effective models are urgently needed to deal with the dataset more flexibly.\nTo capture high-level concepts of context, such as polysemy elimination and syntactic structure, Neelakantan et al.~ propose to learn multiple embeddings per word type. Zhou et al.~ integrate the features on both dimensions of the matrix to enrich semantics by using subword information.\nBased on the Continuous Bag Of Words (CBOW)~ in word2vec, Hui et al.~ fine-tune the generated word vectors for emotion and obtain word vectors containing both semantic meaning and emotional tendency, which significantly improves performance in the Weibo sentiment classification task.\nLiu et al.~ propose a model of hierarchical translation for machine translation. It uses the neural LM based on RNN as the word vector generation model. 
\nLiang et al.~ propose an approach based on the double-layer self-attention mechanism for machine reading comprehension, and the model is divided into three parts: single document encoder, multi-document encoder, and answer prediction.\nIn the single document encoder, the problem of the context information is represented by the Gated Recurrent Unit (GRU) model.\nZhang et al.~ propose an INDependent RNN (INDRNN) and attention mechanism for user intention classification, using word vectors generated by word2vec as input.\nThe model introduces a word-level attention mechanism to effectively quantify the contribution of domain vocabulary to the intention category.", "id": "11190425-dd22-44ac-b1da-ce058fe83643", "level": "subsection", "origin_cites_number": 10, "parent_id": "8cba48d4-e174-4a7a-afb3-8ba526649a63", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Traditional Learning Methods" ], [ "subsection", "Traditional Text Learning" ] ], "subsections": [], "title": "Traditional Text Learning" }, { "cite_extract_rate": 0, "cites": [], "content": "There are several types of neural networks in the deep learning era, from the beginning of most famous convolutional neural networks (CNNs) to the subsequent Attention- and Transformer-based networks. A deep neural network refers to an artificial neural network with more hidden layers, and more parameters are used to represent the target model, which leads to the SOTA performance on the benchmark dataset from image to video. 
Here, we introduce the milestone networks in CV chronologically.", "id": "aee7bd49-072e-4f3b-be42-b7fb570a094d", "level": "subsection", "origin_cites_number": 0, "parent_id": "8cba48d4-e174-4a7a-afb3-8ba526649a63", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Traditional Learning Methods" ], [ "subsection", "Traditional Image Learning" ] ], "subsections": [ "ab140725-6a23-46b0-9db1-6450157b1c01", "aa063366-dd27-4046-a390-172ea8a0e90a", "cc275360-66c7-4eeb-acdb-c935357457b1", "194fb98a-9ae8-4a2c-a7cb-7ac0ce02b1a5", "efabf58e-818d-451d-a426-4bfb3309251f" ], "title": "Traditional Image Learning" }, { "cite_extract_rate": 0.8214285714285711, "cites": [ 96, 1223, 1768, 514, 508, 8429, 7589, 305, 206, 836, 486, 2569, 2571, 802, 2570, 1759, 7588, 97, 810, 7501, 1741, 209, 520 ], "content": "ImageNet~, as one of the most important databases in computer vision, has aroused many milestone network architectures in image classification, including AlexNet~, NIN~, VGG~, GoogLeNet~, ResNet~, DenseNet~, etc. When it comes to object detection and semantic segmentation, researchers explore R-CNNs~, FCN~, SSD~, YOLOs~, SegNet~, PSPNet~, Deeplabs~, RefineNet~, etc. on common benchmark datasets, such as PASCAL VOC~, MS COCO~, etc. \nThere are several shared features among these popular convolution-based networks: 1) \\emph{data augmentation}. Deep models require much more data to fit a complicated model, thus the data augmentation technique such as flipping, rotation, cropping, scaling, translation, and even adding noises enlarges the training dataset; 2) \\emph{convolution}. The convolutional kernel is used to extract the features of original image data, which maintains the spatial structure for the adjacent pixels; 3) \\emph{deep architecture}. The deep architecture contains more parameters, which enhance the capability of the model. 
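The convolution in point 2) can be illustrated by a naive valid-mode 2D cross-correlation (the operation deep learning frameworks call "convolution"); the snippet below is our sketch, not code from any cited network:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive valid-mode 2D cross-correlation: slide the kernel over the
    image and take the elementwise product-sum at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for r in range(oh):
        for c in range(ow):
            out[r, c] = (image[r:r+kh, c:c+kw] * kernel).sum()
    return out

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
edges = conv2d(img, np.array([[1.0, -1.0]]))  # horizontal-difference kernel
```

Because the kernel slides over the image, the same weights are reused at every spatial position, which is why convolutions preserve the spatial structure of adjacent pixels with few parameters.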
These common features have contributed to the SOTA performance of convolutional neural networks (CNNs) in computer vision for nearly the past 10 years.", "id": "ab140725-6a23-46b0-9db1-6450157b1c01", "level": "subsubsection", "origin_cites_number": 28, "parent_id": "aee7bd49-072e-4f3b-be42-b7fb570a094d", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Traditional Learning Methods" ], [ "subsection", "Traditional Image Learning" ], [ "subsubsection", "Convolution-Based Networks." ] ], "subsections": [], "title": "Convolution-Based Networks." }, { "cite_extract_rate": 0.25, "cites": [ 2572, 2401 ], "content": "Different from CNNs targeting 2D image applications, recurrent neural networks (RNNs)~ try to use recurrent cells to process images in sequence, i.e., video data. However, the weaknesses of exploding gradients and poorly captured long-term dependencies restrict further development of this model. To handle these problems embedded inside the RNN-based models, long short-term memory (LSTM)~ was proposed by Hochreiter and Schmidhuber in 1997. In addition, the improved capability of LSTMs has made them popular and attracted attention in both NLP and CV~.", "id": "aa063366-dd27-4046-a390-172ea8a0e90a", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "aee7bd49-072e-4f3b-be42-b7fb570a094d", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Traditional Learning Methods" ], [ "subsection", "Traditional Image Learning" ], [ "subsubsection", "Recurrent neural networks" ] ], "subsections": [], "title": "Recurrent neural networks" }, { "cite_extract_rate": 1, "cites": [ 8567, 1252, 2574, 7217, 8566, 7022, 2573, 157, 896 ], "content": "Generative Adversarial Networks (GANs)~ have provided a paradigm to learn representations for unlabelled data, and have spawned many GAN-based approaches on downstream tasks. 
In image translation, pix2pix software~ first proposes the conditional adversarial networks as a solution to the image-to-image translation problems, and achieves reasonable results on real-world datasets. \nMarkovian Generative Adversarial Networks (MGANs)~ is a method to generate texture synthesis, which can be applied to style transfer and video\nstylization. CycleGAN~ provides a learning algorithm to translate an original image from the source domain to a target domain without containing pairs of images in datasets for supervised learning. StyleGAN~ is a style-based generator to serve as an alternative architecture for traditional GANs. Pixel Recurrent Neural Networks (PixelRNN)~ aims to complete images by modeling full dependencies between the color channels. DiscoGAN~ is designed to learn relations between different domains. \nGANs have also provided a novel direction to study data synthesis because it perfectly simulates the distribution of the original data. Laplacian Pyramid of Adversarial Networks (LAPGAN)~ uses a cascade of convolutional networks to generate images in a coarse-to-fine fashion. Similarly, Stacked Generative Adversarial Networks (SGAN)~ decompose variations into multiple levels and gradually resolve uncertainties by stacking several GANs in a top-down way.", "id": "cc275360-66c7-4eeb-acdb-c935357457b1", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "aee7bd49-072e-4f3b-be42-b7fb570a094d", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Traditional Learning Methods" ], [ "subsection", "Traditional Image Learning" ], [ "subsubsection", "Generation-Based Networks" ] ], "subsections": [], "title": "Generation-Based Networks" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 2576, 7268, 7194, 2575, 38 ], "content": "Based on the success of CNNs in the area of CV, the attention module is designed to equip with the popular CNNs. 
For example, SENet~ proposes a channel attention module, which won first place in the competition of ILSVRC2017. In addition, CBAM~ sequentially infers attention maps along both channel and spatial dimensions. Many innovative works, such as GCNet~ and CCNet~, are inspired by this idea of soft-attention mechanism, which outperforms the traditional CNNs on major benchmarks for both recognition and segmentation tasks. In particular, the self-attention mechanism~, calculating the response at a position among all entities in a sequence by attending to all positions within the same sequence, is proposed to estimate the relevance of one position to other positions in feature maps. To control the expected entities and model more complex relations among different elements in the sequence, masked self-attention and multi-head attention~ are the key components proposed to substitute the function of convolutions in the era of transformers.", "id": "194fb98a-9ae8-4a2c-a7cb-7ac0ce02b1a5", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "aee7bd49-072e-4f3b-be42-b7fb570a094d", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Traditional Learning Methods" ], [ "subsection", "Traditional Image Learning" ], [ "subsubsection", "Attention-Based Networks" ] ], "subsections": [], "title": "Attention-Based Networks" }, { "cite_extract_rate": 0.8888888888888881, "cites": [ 7360, 2580, 2579, 732, 2578, 7590, 2577, 2581 ], "content": "Recently, inspired by the self-attention mechanism and subsequent success of the transformer in NLP, researchers in CV also try to use the transformer as an alternative to the convolution. 
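As an illustrative sketch of the scaled dot-product self-attention described above (single head, no masking; all names here are ours, not from any cited model):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention: every position attends to all
    positions of the same sequence, weighted by pairwise relevance."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # relevance of each position to every other
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

X = np.ones((3, 4))   # 3 positions, feature dimension 4
I = np.eye(4)         # identity projections, for illustration only
out = self_attention(X, I, I, I)
```

Masked self-attention restricts which positions a query may attend to, and multi-head attention runs several such maps in parallel over projected subspaces.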
Self-attention-based transformer models typically operate in a two-stage training mechanism: 1) pretraining on a primitive dataset (usually large but not well labeled) by defining pretext tasks; 2) transferring the pretrained weights to the downstream tasks and adjusting the parameters on the target domain dataset by finetuning. Vision Transformer (ViT)~ is applied on CV and achieves the SOTA performance on major benchmark datasets. Data-efficient image Transformers (DeiT)~ was proposed by Facebook AI to train image transformers more efficiently and maintain the SOTA performance simultaneously. DEtection TRansformer (DETR)~ significantly outperforms competitive baselines in both object detection and semantic segmentation. LeViT~ outperforms existing methods with respect to the trade-off between accuracy and training speed. Image GPT~ is inspired by a sequence transformer in NLP, which can compete with several self-supervised benchmarks on ImageNet. On the basis of this research, DeepViT~ explores a deeper architecture to improve performance consistently by making the transformer go deeper. Moreover, many researchers try to apply the transformer to more specific tasks. Pyramid Vision Transformer (PVT)~ introduces the pyramid structure to overcome the difficulties of porting the transformer to various dense prediction tasks, and achieves the SOTA performance on major benchmark datasets. M3DeTR~ is a novel study of multi-representation, multi-scale, and mutual-relation 3D object detection with transformers. Medical Transformer (MedT)~ has focused on medical image segmentation and outperforms previous CNN-based and transformer-based architectures. 
In conclusion, the transformer has become a novel and popular research area in CV and its performance is proved by many existing works.", "id": "efabf58e-818d-451d-a426-4bfb3309251f", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "aee7bd49-072e-4f3b-be42-b7fb570a094d", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Traditional Learning Methods" ], [ "subsection", "Traditional Image Learning" ], [ "subsubsection", "Transformer-Based Networks" ] ], "subsections": [], "title": "Transformer-Based Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "GL aims to embed the graph as a low-dimensional representation while preserving the desired properties of the original graph data.\nClassical GL methods are usually implemented using statistical methods or artificially designed components.\n\\myparagraph{\\textbf{Dimension Reduction}}\nAs a commonly used method in feature engineering, dimension reduction aims to reduce the dimension of high-dimensional attribute graph data into a lower-dimensional representation.\nIn GL, it highlights the remaining information at the cost of losing part of the attributes.\nAccording to different dimensionality reduction strategies, such methods can be classified into two types.\nThe first type is subspace learning under the linear assumption.\nBased on the assumption that the principal components~ related to the larger variance represent important structural information, and those smaller variances represent noise, principal component analysis calculates a low-dimensional representation that maximizes the variance of the data.\nLinear Discriminant Analysis (LDA)~ achieves dimension reduction by maximizing the ratio of inter-class scattering and intra-class scattering to obtain a linear projection matrix.\nMulti-Dimensional Scaling (MDS)~ is a distance-maintaining manifold learning method. 
It produces a mapping in a lower dimension to preserve dissimilarities between nodes as much as possible.\nThe second type is nonlinear dimension reduction, which aims to automatically learn nonlinear topology to achieve manifold learning.\nIsomap~ first constructs a neighborhood graph on the manifold and calculates the shortest path between pairs of nodes, and then uses MDS to construct a low-dimensional embedding.\nLocally Linear Embedding (LLE)~ first allocates neighbors for each node. \nThen, it calculates the weights $W_{i,j}$ that best linearly reconstruct the feature $X_{i}$ from its neighbors. \nFinally, it computes the low-dimensional embeddings that are optimally reconstructed by the weights $W_{i,j}$.\n\\myparagraph{\\textbf{Matrix Factorization}}\nGreatly influenced by the idea of dimension reduction, the models based on matrix factorization emerged in the early research of GL.\nSuch models aim to reconstruct the adjacency matrix of the graph to achieve dimension reduction while maintaining structural information.\nAlthough these models have significant limitations, their ideas still inspire many current studies.\nDepending on how the matrix is constructed, such methods often append specific constraints.\nGraph Laplacian eigenmaps~ minimizes a loss function to ensure that nodes close to each other on the manifold are mapped close together in the low-dimensional space, maintaining the local distances.\nNode proximity matrix factorization~ minimizes the objective function $|W-YY^{cT}|$ through matrix factorization to approximate the proximity of nodes in the low-dimensional space, where $Y$ and $Y^{c}$ are the embeddings for nodes and context nodes, and $W$ is the default node proximity matrix.\nGraRep~ aims to preserve the high-order proximity of graphs in the embedding space, thus it derives a $k$-th order transition matrix, $A^{k}$, by multiplying the adjacency matrix to itself $k$ times. 
\nThe transition probability from node $v_{i}$ to node $v_{j}$ is the entry in the $i$-th row and $j$-th column of the $k$-th order transition matrix, i.e., $p_{k}(v_{i}|v_{j})=A^{k}_{i,j}$.\nThen GraRep defines the loss function using the skip-gram model and negative sampling.\nTo capture the high-order proximity between node pairs, HOPE~ preserves asymmetric transitivity in approximating\nthe high-order proximity.\nSpecifically, the goal of HOPE is to minimize the objective function $||S-WC^{T}||^{2}_{F}$, where the elements $s_{i,j} \\in S$ represent a certain edge feature (e.g., Katz index, the Rooted Page-Rank, the Common Neighbors, and the Adamic-Adar) between the corresponding node pairs $(v_{i}, v_{j})$, $W$ is the node representation matrix, and $C$ is the embedding of the node as the context.\nTo reconstruct the matrix $S$ more simply and elegantly, HOPE proposes to obtain $W$ and $C$ directly based on the low-rank singular value decomposition (SVD). \n\\myparagraph{\\textbf{Graph Kernel}}\nThe kernel method is an important algorithm in pattern recognition and machine learning.\nIts basic idea is to give the graph embedding $x \\in X$ in the original low-dimensional space $X$, and maps the embeddings to a high-dimensional feature space $H$ through a nonlinear function $f^{ker}$.\nThen the nonlinear problem in $X$ can be solved by constructing a linear algorithm in $H$.\nThere are two main types of kernel methods on graph data.\nThe first type uses the embedding method to convert the graph data into vectorial representation, and then directly implements the application based on the kernel function.\nHowever, due to the loss of mass graph structure information when transforming graphs into vectorial representation, such methods do not perform well in real scenarios.\nThe second type of method introduces the graph kernel function to solve this problem. 
\nWhile retaining the advantages of the original kernel function, it directly represents the structural information of graph data in a high-dimensional Hilbert space.\nTraditional graph kernels are defined based on R-convolution. \nDepending on which substructures are compared and how the graph structure is decomposed, a large number of graph kernel methods have been proposed.\nFor example, the work of~ proposed a random-walk kernel based on counting the number of common walks between two graph structures.\nTo reduce the computational complexity and optimize the random walk strategy, a graph kernel based on comparing the shortest path information between two graph structures is proposed.\nTo capture more complex topological information, the Weisfeiler-Lehman subtree graph kernel is proposed, which is based on a one-dimensional Weisfeiler-Lehman isomorphism test algorithm to find isomorphic subtree structures in a collection of graph structures~.", "id": "fc42d69c-a979-435c-aa0b-00c38ca3fbcc", "level": "subsection", "origin_cites_number": 12, "parent_id": "8cba48d4-e174-4a7a-afb3-8ba526649a63", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Traditional Learning Methods" ], [ "subsection", "Traditional Graph Learning" ] ], "subsections": [], "title": "Traditional Graph Learning" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{Section 9}\nSince pretraining has received great attention from the research community, the investigation of theory-backed explanations is similarly eye-catching. During the unsupervised pretraining era before SSL, Erhan et al.~ shed some light on the theoretical explanation of the benefits of unsupervised pretraining. 
researches the influence of pretraining with respect to architecture depth, model capacity, and the number of training samples, and demonstrates the robustness of pretraining from the perspective of both optimization and regularization. further prove the regularizer role of unsupervised pretraining in downstream supervised tasks.", "id": "720be46a-69c4-42e1-a410-af6829c488a7", "level": "section", "origin_cites_number": 2, "parent_id": "160c6790-e92e-4d72-a546-5d2f900ac104", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs Theory" ] ], "subsections": [ "7f51ca8f-a2ef-4915-9892-afbc3e061179", "de3557a8-d0ff-477f-bcd8-0aea88f57ef6" ], "title": "PFMs Theory" }, { "cite_extract_rate": 1, "cites": [ 8562, 2582, 2583 ], "content": "\myparagraph{\textbf{Pretext Tasks}} \n posits a mechanism based on approximate conditional independence (CI) to connect pretext and downstream task data distributions, which suggests that pretext tasks can self-supervisedly learn representations from unlabelled data that reduce the sample complexity of downstream supervised tasks. The experiments on both CV and NLP tasks support this theory. Representation Learning via Invariant Causal Mechanisms (R\textsc{e}LIC)~ also provides a theoretical understanding from the perspective that explicit invariance constraints across augmentations can yield improved generalization guarantees.\n\myparagraph{\textbf{Multi-View Redundancy}}\nFrom the perspective of a multi-view setting, understands contrastive learning as exploiting multiple views of data for representation learning.\nThis theory provides a theoretical analysis that the linear functions of these representations from pretraining are still competitive compared with the non-linear optimal predictor of the label. 
In other words, the linear functions of the learned representations are nearly optimal on downstream prediction tasks whenever the different views provide redundant information about the label.", "id": "7f51ca8f-a2ef-4915-9892-afbc3e061179", "level": "subsection", "origin_cites_number": 3, "parent_id": "720be46a-69c4-42e1-a410-af6829c488a7", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs Theory" ], [ "subsection", "Different Perspectives" ] ], "subsections": [], "title": "Different Perspectives" }, { "cite_extract_rate": 1, "cites": [ 2529, 2584 ], "content": "\myparagraph{\textbf{Contrastive Learning}}\nExperimental results show that previous designs such as contrastive loss or momentum updating can produce impressive performance in SSL. However, one of the most important questions remaining in SSL is why these methods can maintain representation consistency during the pretraining process. A naive view is that the minimization between positive pairs can boost invariance learning, while the maximization between negative pairs contributes to avoiding representational collapse. shows that contrastive learning can achieve a competitive bound via intra-class concentration, thus leading to the reduction of sample complexity on downstream tasks from the benefit of transferred representations. 
This research also provides a framework that can be utilized both for guarantees on the quality of the representations learned during the pretraining phase and for future assumptions added to the framework that allow tighter guarantees.\n\myparagraph{\textbf{Non-Contrastive Learning}}\nWhile contrastive learning takes effect by capturing the similarity and dissimilarity among unlabelled examples and converging to a local optimum that yields general representations, recent non-contrastive SSL methods such as BYOL and SimSiam also show SOTA performance without comparing negative pairs. Based on an analysis of the eigenspaces, Tian et al.~ study the behavior of non-contrastive SSL training and prove that the effects come from both the predictor and the stop-gradient signal. Based on this theory, a novel and simple \textbf{DirectPred} method is proposed as a by-product of this theoretical exploration.", "id": "de3557a8-d0ff-477f-bcd8-0aea88f57ef6", "level": "subsection", "origin_cites_number": 2, "parent_id": "720be46a-69c4-42e1-a410-af6829c488a7", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs Theory" ], [ "subsection", "Different Categories" ] ], "subsections": [], "title": "Different Categories" }, { "cite_extract_rate": 0.818181818181818, "cites": [ 9094, 2506, 2586, 7000, 2588, 2590, 8569, 7585, 2502, 7022, 2585, 2503, 312, 8568, 2589, 134, 2587, 1254 ], "content": "Pretext tasks are always designed to use pseudo labels generated from the data itself to pretrain the proxy model. \nThere are five categories of pretext tasks for self-supervised learning: 1) generation-based methods; 2) transformation-based methods; 3) context-based methods; 4) semantic-based methods; 5) view-based methods.\n\myparagraph{\textbf{Generation-Based Methods}} This type of method is GAN-based in the deep learning era. 
For image generation, there are several applications including image colorization~, image super-resolution~, image editing~, context encoders~, image-to-image translation~, etc. On the other hand, video generation tasks contain future prediction~, video action recognition~, video generation~, and video representation~.\n\myparagraph{\textbf{Transformation-Based Methods}} Transformation is a typical technology that serves as a data augmentation method to enlarge the training dataset in traditional DL. However, if transformations of the same image are labeled as positive samples and others as negative samples, this pretext task can be used for self-supervised pretraining~. Popular transformations in self-supervised learning (SSL) include color transformations (such as jitter, Gaussian blur, and brightness adjustment) and geometric transformations (such as flipping, cropping, scaling, and rotation).\n\myparagraph{\textbf{Context-Based Methods}} These methods rely on the design and construction of artificial tasks, such as solving Jigsaw puzzles~, comparing context similarity, and discriminating sequence order. Solving Jigsaw puzzles is defined as identifying the correct position of patches from an image. This task can help the model learn an encoder for transfer learning~, and the feature representations are effective once the pretraining dataset is big enough. In addition, the design of video Jigsaw is also proposed for unsupervised learning~. Differently, context similarity labels the patches from the same image as positive samples and others as negative samples, then uses a predefined similarity function to scale the distance between different pairs~.\n\myparagraph{\textbf{Semantic-Based Methods}} Semantic-based methods involve object detection, semantic segmentation, and depth prediction. These tasks also serve as pretext tasks because their pixel-level labels can teach a more robust feature representation than simpler tasks. 
These pre-text tasks always establish on video dataset~.\n\\myparagraph{\\textbf{View-Based Methods}} This type of method contains both single-modal data and multi-modal data. For the single-modal data, the original data is treated as the anchor and different viewpoints generate its positive pair samples. Sometimes the time slices in sequence-based data are treated as negative pairs because the scene is changed as time goes~. In addition, multi-modal data is usual in view-based methods, which are also called cross-modal-based methods here. Such as audio-video cooperative learning~, RGB and optical flow cross-modal distance training~.", "id": "1224d99a-6f33-4a1f-8309-c4a5655141c1", "level": "section", "origin_cites_number": 22, "parent_id": "160c6790-e92e-4d72-a546-5d2f900ac104", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Pretext Task Taxonomy on CV" ] ], "subsections": [], "title": "Pretext Task Taxonomy on CV" }, { "cite_extract_rate": 0.6875, "cites": [ 7591, 7593, 2594, 2592, 2591, 2593, 1911, 8445, 7105, 1411, 7592 ], "content": "The success of pretraining learning methods in the supervised learning domain has spurred interest in the reinforcement learning (RL) domain to study whether the same paradigms can be adapted to RL algorithms. General pretraining RL can include broad directions, such as Reward-Free RL , Goal-condition RL , and Representation Learning in RL . Here we focus the Representation Learning in RL. Specifically, this direction seeks to \\textit{improve the performance by pretraining the visual perception competent of RL agent, i.e., the state encoder, with some large-scale datasets using unsupervised/self-supervised data augmentation techniques}. The pretraining process empowers the state encoder to capture the essential structure information from the raw inputs (pixel-level input for CV). 
An RL policy network is built on top of the pretrained state encoder to learn the specific downstream control tasks in the fine-tuning stage. Recent studies have demonstrated that RL agents can greatly benefit, both in sample efficiency and learning effectiveness, from unsupervised , semi-supervised , and self-supervised learning techniques. Specifically, this direction could be roughly classified into the following two categories: \nModel-based Pretraining RL and Contrastive-like Pretraining RL.", "id": "2f7164de-3a58-4a21-ac2f-e8e2ec66800e", "level": "section", "origin_cites_number": 16, "parent_id": "160c6790-e92e-4d72-a546-5d2f900ac104", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Reinforcement Learning" ] ], "subsections": [ "3df97b8f-533d-47bf-a8d7-c1e2d6830a59" ], "title": "PFMs for Reinforcement Learning" }, { "cite_extract_rate": 1, "cites": [ 2595, 2598, 2596, 2597, 7592 ], "content": "Model-based Pretraining RL aims to first pretrain a generative world model to capture the underlying structure of the environment and then leverage the world model as a state encoder or simulator during fine-tuning. World Models is the first work that proposes to learn a compressed spatial and temporal representation of the environment in an unsupervised manner using a simple Variational Autoencoder, which greatly improves the sample efficiency compared to training from scratch. However, learning the world model without being aware of the environment's dynamics might lead to ignoring some key information in the environment. Dreamer proposed to learn latent dynamics by approximating the representation, transition, and reward model. 
They then train RL agents purely by imagination in a latent space, which is more efficient since it brings a low memory footprint and enables fast predictions of thousands of imagined trajectories in parallel.\nFurthermore, DreamerPro proposes a reconstruction-free approach based on prototypical representations to mitigate the task-irrelevant visual distractions problem in latent dynamics modeling. DreamerPro significantly outperforms previous SOTA methods when there are complex background distractions. To verify whether learning accurate world models for the real world is promising, Daydreamer applies Dreamer to the real-world physical robots problem and empirically demonstrates significant learning efficiency gains.", "id": "3df97b8f-533d-47bf-a8d7-c1e2d6830a59", "level": "paragraph", "origin_cites_number": 5, "parent_id": "2f7164de-3a58-4a21-ac2f-e8e2ec66800e", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Reinforcement Learning" ], [ "paragraph", "Model-based Pretraining RL" ] ], "subsections": [ "cc367997-8d2f-45f7-8082-f6c3642cddb1" ], "title": "Model-based Pretraining RL" }, { "cite_extract_rate": 0.9090909090909091, "cites": [ 2601, 2600, 7593, 2599, 2602, 2591, 2593, 2603, 122, 8570 ], "content": "Contrastive-like Pretraining RL techniques seek to improve the representation ability of state encoders by pretraining the state encoder with a large amount of out-of-domain data or adding some auxiliary loss using unsupervised learning or data augmentation techniques. \nCURL combines instance contrastive learning with RL by using the MoCo mechanism, which significantly improves the data efficiency of RL agents. Furthermore, RAD proposes an implicit approach that directly trains the RL objective on multiple augmented observation views, which outperforms CURL on some of the environments in the DeepMind Control Suite. 
Concurrent to RAD, DrQ introduces a simple regularization term, which applies image augmentation to compute current and target Q values. They demonstrate that data efficiency can be significantly improved after applying it to DQN. DrQ-v2 further extends this approach to solve complex humanoid locomotion tasks by inserting similar techniques into the DDPG algorithm. Orthogonal to this direction, demonstrate that pretraining the vision part of RL agent using supervised or unsupervised methods on out-of-domain data can improve the learning efficiency of downstream RL control tasks. Besides ensuring consistency across different views of observation, SPR additionally trains a dynamics model which enforces the representations to be temporally predictive. Based on SPR, SGI proposes to pretrain representations using a combination of latent dynamics modeling, unsupervised goal-conditioned, and inverse dynamics modeling. Compared to previous methods, SGI can better capture the environment's dynamics and facilitate downstream RL control task training.", "id": "cc367997-8d2f-45f7-8082-f6c3642cddb1", "level": "paragraph", "origin_cites_number": 11, "parent_id": "3df97b8f-533d-47bf-a8d7-c1e2d6830a59", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "PFMs for Reinforcement Learning" ], [ "paragraph", "Model-based Pretraining RL" ], [ "paragraph", "Contrastive-like Pretraining RL" ] ], "subsections": [], "title": "Contrastive-like Pretraining RL" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Evaluation_Metrics}", "id": "a3fada64-fc86-4ec8-a16c-f5e9675e7655", "level": "section", "origin_cites_number": 0, "parent_id": "160c6790-e92e-4d72-a546-5d2f900ac104", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Evaluation Metrics" ] ], "subsections": [ "7c815a75-13b9-4780-8864-1723bfa0e089" ], "title": 
"Evaluation Metrics" }, { "cite_extract_rate": 0, "cites": [], "content": "The classification task, given labeled training documents, determines the relationship between document features and document categories. The learned relationship model is then used to determine the category of new documents.\n\myparagraph{Accuracy and Error Rate} \nThe key metrics for a text classification model are Accuracy and Error Rate, defined as follows:\n\begin{equation}\nAccuracy =\frac{(\mathrm{TP}+\mathrm{TN})}{N},\n\end{equation}\n\begin{equation}\nError Rate = 1 - Accuracy =\frac{(\mathrm{FP}+\mathrm{FN})}{N},\n\end{equation}\nwhere $\mathrm{TP}$ and $\mathrm{FP}$ denote true positives and false positives, $\mathrm{TN}$ and $\mathrm{FN}$ stand for true negatives and false negatives, and $N$ is the total number of testing samples.\n\myparagraph{Precision, Recall and F1}\nUnlike Accuracy and Error Rate, Precision, Recall, and F1 are important metrics for unbalanced testing sets; they are computed with respect to a particular class label in the testing samples. F1 is defined as the harmonic average of Precision and Recall. Thus, Precision, Recall, and F1 can be represented as:\n\begin{equation}\nPrecision =\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}, \quad\nRecall =\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}},\n\end{equation}\n\begin{equation}\nF1 =\frac{2 \mathrm { Precision \times Recall }}{\mathrm{ Precision }+\mathrm{Recall}}.\n\end{equation}\nWhen the precision, recall, and F1 values reach 1, the desired results are obtained; when they drop to 0, the worst result is obtained. 
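As a quick numeric check of the definitions above, a minimal pure-Python sketch (the confusion counts and the helper name are made up for illustration):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, Recall, and F1 from confusion counts, per the formulas above."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts: 8 true positives, 2 false positives, 2 false negatives.
p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=2)
print(p, r, f1)  # all three equal 0.8 for these counts (up to float rounding)
```

Because F1 is the harmonic mean, it equals precision and recall whenever the two coincide, as in this example.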
For the multi-class classification task, the precision and recall values of each class can be determined independently, and then the individual and overall performance can be analyzed.\n\myparagraph{{$ Micro-F1$}} The $ Micro-F1$~ \nis a metric that measures the overall precision and recall of all labels.\nWe denote $ Micro-F1$ as:\n\begin{equation}\n Micro-F1=\frac{2 \mathrm{P} \times \mathrm{R}}{\mathrm{P}+\mathrm{R}},\n\end{equation}\n\begin{equation}\n P=\frac{\sum_{t \in \mathcal{S}} T P_{t}}{\sum_{t \in \mathcal{S}} T P_{t}+F P_{t}},\quad R=\frac{\sum_{t \in \mathcal{S}} T P_{t}}{\sum_{t \in \mathcal{S}} T P_{t}+F N_{t}}.\n\end{equation}\nwhere $T P_{t}$ and $F P_{t}$ mean the true and false positives of the $t$ th label on a text.\n\myparagraph{{$ Macro-F1$}} \nThe $ Macro-F1$ calculates the average $F1$ of all labels by giving equal weight to them. $ Macro-F1$ is denoted as:\n\begin{equation}\n{Macro}-F1=\frac{1}{|\mathcal{S}|} \sum_{t \in \mathcal{S}} \frac{2 \mathrm{P}_{t} \times R_{t}}{\mathrm{P_{t}}+\mathrm{R_{t}}},\n\end{equation}\n\begin{equation}\nP_{t}=\frac{T P_{t}}{T P_{t}+F P_{t}},\quad R_{t}=\frac{T P_{t}}{T P_{t}+F N_{t}}.\n\end{equation}\nwhere $F N_{t}$ represents the false negatives of the $t$ th label, and $\mathcal{S}$ stands for the label set of all samples.\n\myparagraph{Mean Reciprocal Rank (MRR)} \nThe MRR is commonly used to evaluate the performance of ranking algorithms on Question Answering (QA) and Information Retrieval (IR) tasks. MRR is represented as\n\begin{equation}\n\mathrm{MRR}=\frac{1}{Q} \sum_{i=1}^{Q} \frac{1}{{rank}_{i}},\n\end{equation}\nwhere ${rank}_{i}$ is the rank of the ground-truth answer for the $i$ th query. 
\nThe number of queries is denoted by $Q$.\nMoreover, there are other metrics, such as EM, Hamming-loss~, P@K and NDCG@K.", "id": "7c815a75-13b9-4780-8864-1723bfa0e089", "level": "paragraph", "origin_cites_number": 2, "parent_id": "a3fada64-fc86-4ec8-a16c-f5e9675e7655", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Evaluation Metrics" ], [ "paragraph", "Classification Task" ] ], "subsections": [ "4705f929-96f4-46f0-9710-6fb050cfa3f5" ], "title": "Classification Task" }, { "cite_extract_rate": 0, "cites": [], "content": "The generation task uses LMs to predict the next most likely word or sentence based on the input data.\n\myparagraph{Bilingual EvaLuation Understudy (BLEU)} \nBLEU compares the generated sentences to the reference sentences and is widely used to score automatic machine translation. It is also applied to other language generation problems supported by deep learning technologies, such as speech recognition, image caption generation, and text summarization. Although it is not a perfect measure, it has a few advantages: it is simple to comprehend, correlates well with human judgment, and is language-independent.\nAs a bilingual evaluation aid, BLEU is mainly used to evaluate the quality of machine translation~.\nBLEU compares the degree of overlap between the N-grams in the candidate text and the N-grams in the reference text. 
A higher overlap indicates better translation quality.\nThe formula for the computation is:\n\begin{equation}\nB L E U=BP \times \operatorname{exp}\left(\sum_{n=1}^{N} W_{n} \log P_{n}\right),\n\end{equation}\nwhere $N$ is the maximum N-gram order, $BP$ is the brevity penalty factor, $P_{n}$ is the $n$-gram precision, and $W_{n}=1/N$ is the corresponding weight.\nThe penalty factor $BP$ is calculated as follows:\n\begin{equation}\nBP= \begin{cases}1, & l_{t}>l_{a} \\ e^{1-l_{a} / l_{t}}, & l_{t} \leq l_{a}\end{cases},\n\end{equation}\nwhere $l_{t}$ is the number of words in the machine translation and $l_{a}$ is the number of words in the reference translation.\nThe penalty factor is mostly used to penalize large length gaps between machine and reference translations.\n\myparagraph{ROUGE (Recall-Oriented Understudy for Gisting Evaluation)} \nROUGE is a family of automatic evaluation methods based on N-gram co-occurrence statistics. It builds on the similarity of N-grams, where an N-gram is a subsequence of $N$ words from the text.\nThere are four types of ROUGE, including ROUGE-N, ROUGE-L, ROUGE-W, and ROUGE-S.\nThe first two are commonly used; the $N$ in ROUGE-N refers to the N-gram order, and it is calculated similarly to BLEU, except that BLEU is based on precision, while ROUGE is based on recall.\nThe $L$ in ROUGE-L refers to the Longest Common Subsequence (LCS) between the candidate abstract and the reference abstract: the longer the LCS, the higher the score, which is based on the $F$ value.\nThe calculation formulas of ROUGE-N and ROUGE-L are introduced below. 
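The BLEU computation just described (clipped $n$-gram precisions combined with the brevity penalty $BP$) can be sketched for a single reference as follows; the whitespace tokenization and `max_n=2` are simplifications, and production use should rely on a standard implementation rather than this toy version:

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    """Toy sentence-level BLEU: geometric mean of clipped n-gram precisions
    P_n with weights W_n = 1/N, times the brevity penalty BP defined above.
    (The zero-match case is omitted for brevity.)"""
    log_p = 0.0
    for n in range(1, max_n + 1):
        cand = ngram_counts(candidate, n)
        ref = ngram_counts(reference, n)
        clipped = sum(min(count, ref[g]) for g, count in cand.items())
        log_p += (1.0 / max_n) * math.log(clipped / sum(cand.values()))
    l_t, l_a = len(candidate), len(reference)           # candidate / reference lengths
    bp = 1.0 if l_t > l_a else math.exp(1 - l_a / l_t)  # brevity penalty
    return bp * math.exp(log_p)

reference = "the cat sat on the mat".split()
print(bleu(reference, reference))              # identical sentences score 1.0
print(bleu("the cat sat".split(), reference))  # short candidate is penalized by BP
```

In the second call every candidate n-gram matches, so the score is determined entirely by the brevity penalty.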
The calculation formula of ROUGE-N is as follows:\n\begin{equation}\n ROUGE-N= \\ \frac{\sum_{S \in\{\text {ReferenceSummaries}\}} \sum_{\text {gram}_{n} \in S} \text {Count}_{\text {match}}\left(\operatorname{gram}_{n}\right)}{\sum_{S \in\{\text {ReferenceSummaries}\}} \sum_{\text {gram}_{n} \in S} \text {Count}\left(\text {gram}_{n}\right)},\n\end{equation}\nwhere $N$ stands for the N-gram order, $Count(gram_{n})$ represents the number of occurrences of an N-gram in the reference summaries, and $Count_{match}(gram_{n})$ represents the number of N-grams co-occurring in the candidate and the reference summaries.\nThe calculation formula of ROUGE-L is as follows:\n\begin{equation}\nROUGE-L=F_{lcs}=\frac{\left(1+\beta^{2}\right) R_{\mathrm{lcs}} P_{\mathrm{lcs}}}{R_{\mathrm{lcs}}+\beta^{2} P_{\mathrm{lcs}}},\n\end{equation}\n\begin{equation}\nR_{\mathrm{lcs}}=\frac{L C S(X, Y)}{M},\n\end{equation}\n\begin{equation}\nP_{\text {lcs }}=\frac{L C S(X, Y)}{N},\n\end{equation}\nwhere $X$ is the candidate abstract, $Y$ represents the reference abstract, $LCS(X, Y)$ indicates the length of the Longest Common Subsequence (LCS) of the candidate abstract and the reference abstract, $M$ stands for the length of the reference abstract, and $N$ denotes the length of the candidate abstract.\nThe ROUGE method is characterized by N-gram co-occurrence statistics, based on recall (ROUGE-N) and F-value (ROUGE-L).\nIt is often used for text summarization.\nIt is worth noting that ROUGE measures word-level correspondence rather than semantic correspondence, but this can be mitigated by increasing the number of reference summaries.\n\myparagraph{METEOR} \nMETEOR (Metric for Evaluation of Translation with Explicit ORdering)~ is an improved version of the BLEU metric that aims to address some flaws in the BLEU standard.\nIt uses WordNet to calculate matching relationships over exact sequences, synonyms, roots, affixes, and definitions, which improves on BLEU and makes the metric more relevant to manual
discrimination.\nThe calculation formula is as follows:\n\begin{equation}\nMETEOR=(1-Pen) \times F_{\text {m}},\n\end{equation}\n\begin{equation}\nF_{\text {m}}=\frac{P R}{\alpha P+(1-\alpha) R},\n\end{equation}\n\begin{equation}\nP=\frac{m}{\sum_{k} \text h_{k}(c_{i})},\n\end{equation}\n\begin{equation}\nR=\frac{m}{\sum_{k} \text h_{k}(s_{ij})},\n\end{equation}\nwhere $Pen=\gamma(\frac{ch}{m})^{\theta}$ is a penalty factor, which punishes word order in the candidate translation that differs from that in the reference translation. $ch$ refers to the number of chunks, which are clusters of matched units adjacent to each other in both the candidate translation and the reference translation. $\alpha$, $\gamma$, and $\theta$ are adjustable parameters, $m$ is the number of unigrams that can be matched in the candidate translation, $h_{k}(c_{i})$ is the number of occurrences of the $k$-th unigram in the candidate translation $c_{i}$, and $h_{k}(s_{ij})$ is the number of occurrences of the $k$-th unigram in the reference translation $s_{ij}$.\n\myparagraph{Perplexity} \nPerplexity is also called the degree of confusion~.\nIts core idea is: first, learn a LM $P$;\nthen, use the LM $P$ to score each candidate sentence;\nfinally, normalize the scores by sentence length.\nThe calculation formula is as follows:\n\begin{equation}\nP P L(W)=P\left(w_{1}, w_{2}, \ldots, w_{M}\right)^{-\frac{1}{M}},\n\end{equation}\nwhere $W$ is the candidate sentence, $M$ is its length, $P$ is the learned LM, and $P(w_{1}, w_{2}, \ldots, w_{M})$ is the probability the LM assigns to the candidate sentence.\nThe Perplexity assessment indicator is based on a LM.\nThe lower the perplexity, the better the quality; it is often used in machine translation and LM evaluation.\nIts
disadvantages are as follows: the larger the dataset is, the faster the degree of confusion decreases; the punctuation in the data will impact the PPL of the model; and the interference of common words.", "id": "4705f929-96f4-46f0-9710-6fb050cfa3f5", "level": "paragraph", "origin_cites_number": 3, "parent_id": "7c815a75-13b9-4780-8864-1723bfa0e089", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Evaluation Metrics" ], [ "paragraph", "Classification Task" ], [ "paragraph", "Generation Task" ] ], "subsections": [], "title": "Generation Task" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{datasets}", "id": "99f4cfea-bdc9-409b-976b-dcdca4dfb1e0", "level": "section", "origin_cites_number": 0, "parent_id": "160c6790-e92e-4d72-a546-5d2f900ac104", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ] ], "subsections": [ "aa79e92f-f28d-4c8e-87b8-ea972edb271f", "aec0bd98-e02a-43ef-82e5-6bf469622fdc", "c0c5905b-5a26-4f77-8444-f904443bb416" ], "title": "Datasets" }, { "cite_extract_rate": 0.6578947368421051, "cites": [ 2623, 2614, 7595, 1150, 1174, 2605, 1156, 2611, 2622, 7594, 7106, 1091, 2616, 1067, 9120, 2617, 7596, 7265, 1159, 2613, 2618, 2615, 2608, 1090, 1938, 2624, 2609, 1173, 1970, 2607, 2612, 2606, 826, 2383, 2620, 2604, 2610, 8571, 8385, 1096, 11, 2621, 2387, 8412, 8410, 2619, 8572, 9121, 943, 7 ], "content": "There are many available datasets in the NLP domain, divided according to different tasks. 
We summarize them in Table~\\ref{datasets1}.\nIt mainly comprises two categories: the task of classification of texts and the task of generating texts.\nThe text classification tasks mainly include Sentiment Analysis (SA), News Classification (NC), Topic Labelling (TL), Natural Language Inference (NLI), Named Entity Recognition (NER), Question Answering (QA), Dialogue Act Classification (DAC), etc. The generation tasks mainly include text summaries and machine translation.\n\\begin{table*}[]\n\\caption{The statistics of the datasets on NLP. For the QA task, the class represents the sum number of candidate answers and the correct answer. For dialogue, class is the number of slots. Length means the average tokens in turn.}\\label{datasets1}\n\\resizebox{0.9\\textwidth}{!}{\n\\begin{tabular}{cclllll}\n\\hline\n\\textbf{Type} & \\textbf{Task} & \\textbf{Datasets} & \\textbf{Class} & \\textbf{Length} &\\textbf{ Number} & \\textbf{Related Papers} \\\\ \\hline\nClassification & Sentiment Analysis& MR & 2 & 20 & 10662 & \\\\ \\cline{3-7} \n & & SST-1 & 5 & 18 & 11,855 & \\\\ \\cline{3-7} \n & & SST-2 & 2 & 19 & 9,613 & \\\\ \\cline{3-7} \n & & MPQA & 2 & 3 & 10,606 & \\\\ \\cline{3-7} \n & & IMDB & 2 & 294 & 50,000 & \\\\ \\cline{2-7} \n & News Classification& 20NG &20 & 221 & 18,846 & \\\\ \\cline{3-7} \n & & AG News & 4 & 45/7 &127,600 & \\\\ \\cline{3-7} \n & & R8 &8 & 66&7,674 & \\\\ \\cline{3-7} \n & & R52& 52 & 70 & 9,100 & \\\\ \\cline{2-7} \n & Topic Labeling& DBPedia & 14 &55 & 630,000 & \\\\ \\cline{3-7} \n & & Ohsumed & 23 &136 &7,400 & \\\\ \\cline{3-7} \n & & YahooA & 10 & 112 & 1,460,000 & \\\\ \\cline{2-7} \n & Natural Language Inference & SNLI & 3& - &570,152 & \\\\ \\cline{3-7} \n & & MNLI & 3& - & 433,000& \\\\ \\cline{3-7} \n & & QNLI &2 & - & 115,667& \\\\ \\cline{3-7} \n & & WNLI & 2& - &852 & \\\\ \\cline{3-7} \n & & RTE& 2& - & 5,768& \\\\ \\cline{3-7} \n & & SICK &3 & - & 10,000& \\\\ \\cline{3-7} \n & & MSRP &2 & - & 5,801& \\\\ \\cline{2-7} \n & 
Named Entity Recognition & CoNLL 2003 & 4& - &2,302 & \\\\ \\cline{3-7} \n & &OntoNotes 4.0&18&-&-& \\\\ \\cline{3-7} \n & &OntoNotes 5.0&18&-&2,945,000& \\\\ \\cline{3-7} \n & &MSRA & 3&-&- & \\\\ \\cline{3-7} \n & &ACE 2004 &7 &-&443 & \\\\ \\cline{3-7}\n & &ACE 2005 &7 &-&437 & \n \\\\ \\cline{3-7} \n & &KBP2017 & -&-&- & \\\\ \\cline{2-7} \n & Question Answering& QQP&2 & &799,266 & \\\\ \\cline{3-7} \n & & MRPC & 2& - & -& \\\\ \\cline{3-7} \n & & SQuAD & - & 5,000 & 5,570 & \\\\ \\cline{3-7} \n & & RACE &5 & - & 100,000& \\\\ \\cline{3-7} \n & & TREC &6&10&6,400& \\\\ \\cline{3-7} \n & & WikiQA & - & 873 & 243 & \\\\ \\cline{2-7} \n & Dialog Act Classification & DSTC 4 &89 & - &30,000 & \\\\ \\cline{3-7} \n & & MRDA & 5 & - &62,000 & \\\\ \\cline{3-7} \n & & SwDA & 43 & - &1,022,000 & \\\\ \\cline{1-7} \n Generation & Text Summarization & NYT&- & - &109,910 & \\\\ \\cline{3-7} \n & & CNN&- & 760 & 92,579& \\\\ \\cline{3-7} \n & & Dailymail &- & 653 &219,506 & \\\\ \\cline{3-7} \n & & Gigaword &- & - &3,991,000 & \\\\ \\cline{2-7} \n & Machine Translation& WMT14 &- & - & -& \\\\ \\cline{3-7} \n & & WMT16 &- & - &- & \\\\ \\cline{3-7} \n & & WMT17 &- & - &- & \\\\ \\cline{3-7} \n & & WMT18 &- & - &- & \\\\ \\cline{2-7} \n &Dialogue & DSTC2 &- & - & 3,000& \\\\ \\cline{3-7} \n & & MWOZ &35 & 15.03 & 10,438& \\\\ \\cline{3-7} \n & & GSIM &- & - &3,008 & \\\\ \\cline{3-7} \n & & OOS &151 & - & 23,700& \\\\ \\cline{1-7} \n\\end{tabular}\n}\n\\end{table*}", "id": "aa79e92f-f28d-4c8e-87b8-ea972edb271f", "level": "subsection", "origin_cites_number": 76, "parent_id": "99f4cfea-bdc9-409b-976b-dcdca4dfb1e0", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on NLP" ] ], "subsections": [ "d089f43d-90f2-4b75-a3ac-15d615f3e514" ], "title": "Downstream Tasks and Datasets on NLP" }, { "cite_extract_rate": 0.125, "cites": [ 3691 ], 
"content": "It consists of judging the emotional polarity of a text and dividing it into several classes.\nDepending on the granularity of sentiments, SA is divided into three categories: dichotomy (positive and negative), trichotomy (positive, negative, and neutral), and multi-class.\nHere we introduce several datasets in detail.\n\\myparagraph{Stanford sentiment treebank (SST)~} The dataset is an extension of MR~. SST-1 is a version of SST. It is divided into five categories, and the numbers of training and testing texts are 8,544 and 2,210, respectively, with an average length of 20 tokens.\nThe SST-2~ contains 9,613 movie reviews, including 6,920 training texts, 872 development texts, and 1,821 testing texts. \n\\myparagraph{Semantic textual similarity benchmark (STS-B)~} It is used in the semantic textual similarity tasks organized in the SemEval context between 2012 and 2017~.\nIt consists of text from image captions, news headlines, and forums.\nSTS-B rates the semantic similarity of two sentences on a scale of 1 to 5.\nIt includes 5,749 training, 1,379 development, and 1,377 testing pairs.\n\\myparagraph{Multi-Perspective Question Answering (MPQA)~} This is an opinion dataset with two categories. \nIt contains 10,606 sentences from various news sources that have been manually annotated for opinions and other private states.\nIt is worth noting that there are 3,311 positive and 7,293 negative sentences, with no document-level labels.\n\\myparagraph{IMDB reviews~} The dataset is a widely used benchmark for binary sentiment classification of film reviews. 
The two classes are balanced, and the dataset is split evenly into training and testing sets of 25,000 reviews each.", "id": "d089f43d-90f2-4b75-a3ac-15d615f3e514", "level": "paragraph", "origin_cites_number": 8, "parent_id": "aa79e92f-f28d-4c8e-87b8-ea972edb271f", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on NLP" ], [ "paragraph", "Sentiment Analysis (SA)" ] ], "subsections": [ "9fe52d45-e578-4512-bc0d-aa2624058bf0", "7734643e-601a-4301-afa9-e862a3a604e0", "3e442af0-8a82-4ae4-bba4-2f14acc88dee", "387c3436-5b01-4b33-bab1-b87ca6857923", "adb781f7-1acd-43e8-93f7-307e91e07aeb", "48277441-dab6-498d-990e-57aff2317d20", "928fb12c-555b-436f-b3e3-0da733843ee4", "96b1fc1a-aa58-46c9-bd04-71dfc1c6836c", "6692edf1-7b35-4a59-9b52-0c90edbdf04b" ], "title": "Sentiment Analysis (SA)" }, { "cite_extract_rate": 0.2, "cites": [ 1096 ], "content": "As one of the most vital information sources, news content exerts a critical effect on people.\nNC helps users acquire essential knowledge in real time. Its applications mainly include news topic identification and recommendation of relevant news based on user interests.\nHere we introduce several datasets in detail.\n\\myparagraph{20 Newsgroups (20NG)~} \n20NG is a text dataset derived from newsgroups. There are 20 classes with the same number of articles per class, comprising 18,846 articles in total. The average number of tokens is 221.\n\\myparagraph{AG News~} The dataset is built from news articles gathered by AG's academic news search engine and has four categories. Each sample contains a news title and a short description. It includes 120,000 training texts and 7,600 testing texts. The number of average tokens is 45/7.\n\\myparagraph{R8 and R52~} They come from Reuters~. 
R8 contains 8 classes, with 66 average tokens, and includes 5,485 training and 2,189 testing texts.\n There are 52 classes in R52, with 70 average tokens. It is divided into 6,532 training and 2,568 testing texts.", "id": "9fe52d45-e578-4512-bc0d-aa2624058bf0", "level": "paragraph", "origin_cites_number": 5, "parent_id": "d089f43d-90f2-4b75-a3ac-15d615f3e514", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on NLP" ], [ "paragraph", "Sentiment Analysis (SA)" ], [ "paragraph", "News Classification (NC)" ] ], "subsections": [], "title": "News Classification (NC)" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 1096 ], "content": "This task aims to capture the meaning of a document by identifying its themes and assigning it one or more topic labels.\nIt is a critical component of topic analysis technology, which aims at simplifying topic analysis by assigning each article to one or more topics. \nHere, we introduce a few in detail.\n\\myparagraph{DBpedia~} It is a large-scale multilingual knowledge base generated from Wikipedia's most commonly used infoboxes. A new version of DBpedia is released every month, adding or removing classes and attributes. The most popular version of DBpedia has 14 categories, separated into 560,000 training and 70,000 testing samples. The number of average tokens is 55.\n\\myparagraph{Ohsumed~} This is a biomedical literature database containing 7,400 texts. It has 23 cardiovascular disease categories and consists of 136 average tokens. \nAll texts are medical abstracts that are categorized into one or more classes.\n\\myparagraph{Yahoo answers (YahooA)~} The dataset is a topic labeling task with 10 categories. The number of average tokens is 136. There are 1,400,000 training and 60,000 testing samples. 
Each text in YahooA has a question title, question context, and best answer.", "id": "7734643e-601a-4301-afa9-e862a3a604e0", "level": "paragraph", "origin_cites_number": 3, "parent_id": "d089f43d-90f2-4b75-a3ac-15d615f3e514", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on NLP" ], [ "paragraph", "Sentiment Analysis (SA)" ], [ "paragraph", "Topic Labeling (TL)" ] ], "subsections": [], "title": "Topic Labeling (TL)" }, { "cite_extract_rate": 0.5, "cites": [ 1173, 1174, 439 ], "content": "This task is used to predict whether the meaning of one text can be inferred from another. Paraphrase identification is a generalized form of NLI: \nby comparing the semantic similarity of a sentence pair, it determines whether one sentence is a paraphrase of the other.\nHere we introduce several primary datasets in detail.\n\\myparagraph{The Stanford Natural Language Inference (SNLI)~} \nIt is commonly used in NLI tasks. It contains 570,152 human-annotated sentence pairs, which are annotated with three sorts of relationships: neutral, entailment, and contradiction. \nMulti-genre Natural Language Inference (MNLI)~ has 3 categories and consists of 430,000 sentence pairs annotated with textual entailment information; it is usually used in textual inference tasks. \nQuestion Natural Language Inference (QNLI)~ is a task with 2 classes to determine whether a given text pair is a question-answer pair.\nWinograd Natural Language Inference (WNLI)~, which consists of 2 categories, is a dataset that captures the coreference information between two sentences.\n\\myparagraph{Microsoft Research Paraphrase (MSRP)~} The dataset contains sentence pairs for the text-similarity task, including 4,076 training and 1,725 testing pairs. 
A binary label annotates each pair, indicating whether the two sentences are paraphrases.\n\\myparagraph{Sentences Involving Compositional Knowledge (SICK)~} It includes nearly 10,000 English sentence pairs, annotated with similarity scores in the range of 1-5. It has three categories: neutral, entailment, and contradiction.", "id": "3e442af0-8a82-4ae4-bba4-2f14acc88dee", "level": "paragraph", "origin_cites_number": 6, "parent_id": "d089f43d-90f2-4b75-a3ac-15d615f3e514", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on NLP" ], [ "paragraph", "Sentiment Analysis (SA)" ], [ "paragraph", "Natural Language Inference (NLI)" ] ], "subsections": [], "title": "Natural Language Inference (NLI)" }, { "cite_extract_rate": 1, "cites": [ 7, 8385, 7596 ], "content": "This is a fundamental task of NLP that identifies people, places, organizations, and other entities in text.\nIt is a crucial primary tool for many NLP tasks, including information extraction, question answering, semantic parsing, machine translation, etc.\n\\myparagraph{CoNLL 2003~}\nIt consists of newswire text from the Reuters RCV1\ncorpus. It contains four different entity types (Location, Organization, Person, and Miscellaneous) and includes 1,393 English news articles and 909 German news articles.\n\\myparagraph{OntoNotes 5.0~}\nThe dataset consists of 1,745K English, 900K Chinese, and 300K Arabic text data. It comes from telephone conversations, news agencies, radio news, radio conversations, and blogs. It has 18 entity classes containing 11 types and seven values, with 2,945,000 text data.\n\\myparagraph{MSRA~}\nThis is a Chinese dataset obtained from the news domain. 
It has three types of entities and was used as a shared task in the SIGHAN bakeoff in 2006.", "id": "387c3436-5b01-4b33-bab1-b87ca6857923", "level": "paragraph", "origin_cites_number": 3, "parent_id": "d089f43d-90f2-4b75-a3ac-15d615f3e514", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on NLP" ], [ "paragraph", "Sentiment Analysis (SA)" ], [ "paragraph", "Named Entity Recognition (NER)" ] ], "subsections": [], "title": "Named Entity Recognition (NER)" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 439, 8385, 1565 ], "content": "There are two types of QA systems: extractive QA and generative QA. \nExtractive QA can be regarded as a particular case of text classification. \nHere we detail several datasets.\n\\myparagraph{Microsoft Research Paraphrase Corpus (MRPC)~} It contains 5,800 sentence pairs extracted from Internet news, and the task type is similar to the QQP dataset. Sentence pairs are derived from comments on the same news item, and the task is to determine whether the two sentences are semantically equivalent. 
The assessment criteria are classification accuracy and F1 score.\n\\myparagraph{Stanford Question Answering Dataset (SQuAD)~} This is a large-scale machine-reading comprehension dataset that contains two tasks.\nSQuAD 1.1~ provides questions and corresponding answers, and the dataset contains 100,000 samples in total, while SQuAD 2.0~ adds unanswerable questions and expands the scale to 150,000.\n\\myparagraph{RACE~} The dataset has 5 categories, containing nearly 100,000 questions extracted from middle and high school English tests, with corresponding answers given by experts.\nThe average length of RACE passages is greater than 300 tokens, which is longer than the sequences in other reading comprehension datasets (such as SQuAD).", "id": "adb781f7-1acd-43e8-93f7-307e91e07aeb", "level": "paragraph", "origin_cites_number": 5, "parent_id": "d089f43d-90f2-4b75-a3ac-15d615f3e514", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on NLP" ], [ "paragraph", "Sentiment Analysis (SA)" ], [ "paragraph", "Question Answering (QA)" ] ], "subsections": [], "title": "Question Answering (QA)" }, { "cite_extract_rate": 0, "cites": [], "content": "A dialogue act is a specific verbal component that labels an utterance according to its meaning category. DAC assigns these labels to help understand the speaker's intentions. \n\\myparagraph{Dialog State Tracking Challenge 4 (DSTC 4)~}\nIt belongs to the dialog act classification task and mainly focuses on dialog state tracking on human-human dialogs. It has 89 classes and contains 24,000 training texts and 6,000 test texts.\n\\myparagraph{ICSI Meeting Recorder Dialog Act (MRDA)~}\nIt includes about 75 hours of speech from 75 naturally occurring meetings among 53 speakers. 
The number of categories is 5, and it contains 51,000 training texts, 11,000 test texts, and 11,000 validation texts.\n\\myparagraph{Switchboard Dialog Act (SwDA)~}\nThe dataset extends the Switchboard corpus with turn/utterance-level dialog act tags. The tags summarize syntactic, semantic, and pragmatic information about the associated turn. The SwDA has 43 classes and includes 1,003,000 training texts, 19,000 test texts, and 112,000 validation texts.", "id": "48277441-dab6-498d-990e-57aff2317d20", "level": "paragraph", "origin_cites_number": 3, "parent_id": "d089f43d-90f2-4b75-a3ac-15d615f3e514", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on NLP" ], [ "paragraph", "Sentiment Analysis (SA)" ], [ "paragraph", "Dialog Act Classification (DAC)" ] ], "subsections": [], "title": "Dialog Act Classification (DAC)" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1067, 2611 ], "content": "Text summarization produces a summary of one or more given documents. The summary is kept as concise as possible while ensuring that it reflects the critical content of the original document.\nIt can be divided into extractive summarization and generative (abstractive) summarization.\nExtractive summarization is produced by extracting and splicing the critical sentences of the documents, while generative summarization is produced by a model that summarizes the documents in its own words.\n\\myparagraph{NYT~}\nThe dataset comes from the corpus annotated by the New York Times. The named entities are annotated using the Stanford NER tool in conjunction with the Freebase knowledge base. 
The test set contains 9,076 articles, with the remaining 100,834 articles divided into a training set (96,834 examples) and a validation set (4,000 examples).\n\\myparagraph{CNN/Daily Mail~}\nIt is used for the passage-based question-answering task, and it is popular in assessing automatic text summarization (ATS) systems.\nThe dataset consists of CNN/Daily Mail news stories paired with multi-sentence human-generated summaries. There are 287,226 training instances, 13,368 validation instances, and 11,490 testing instances in total.\n\\myparagraph{Gigaword~}\nThis is a corpus of English news articles drawn from multiple sources, including the New York Times.\nIn the headline-generation subset, each article is paired with a one-sentence headline summary, yielding nearly 4 million pairs.", "id": "928fb12c-555b-436f-b3e3-0da733843ee4", "level": "paragraph", "origin_cites_number": 3, "parent_id": "d089f43d-90f2-4b75-a3ac-15d615f3e514", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on NLP" ], [ "paragraph", "Sentiment Analysis (SA)" ], [ "paragraph", "Text Summarization" ] ], "subsections": [], "title": "Text Summarization" }, { "cite_extract_rate": 0, "cites": [], "content": "It refers to the task of translating text from one language to another by a computer while preserving semantic equivalence.\nThere are three categories: rule-based machine translation, statistics-based machine translation, and neural network-based machine translation.\n\\myparagraph{WMT14~}\nIt is a grouping of datasets used in the Ninth Workshop on Statistical Machine Translation shared tasks, including a news translation task, a quality estimation task, a metrics task, and a medical text translation task. \n\\myparagraph{WMT16~}\nThis dataset is a grouping of datasets used in the First Conference on Machine Translation shared tasks. 
It has ten shared tasks, including a news translation task, an IT domain translation task, a biomedical translation task, an automatic post-editing task, a metrics task, a quality estimation task, a tuning task, a pronoun translation task, a bilingual document alignment task, and a multimodal translation task.\n\\myparagraph{WMT17~}\nThe dataset includes three MT tasks (news, biomedical, and multimodal), an automatic post-editing task, a quality estimation task, a task dedicated to the training of neural MT systems, a task on bandit learning for MT, and a metrics task.\n\\myparagraph{WMT18~}\nIt mainly features six shared tasks: a news translation task, a biomedical translation task, an automatic post-editing task, a metrics task, a quality estimation task, and a multimodal translation task. Participants must evaluate their approaches to the machine translation topic using the standard datasets created for the shared tasks.", "id": "96b1fc1a-aa58-46c9-bd04-71dfc1c6836c", "level": "paragraph", "origin_cites_number": 3, "parent_id": "d089f43d-90f2-4b75-a3ac-15d615f3e514", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on NLP" ], [ "paragraph", "Sentiment Analysis (SA)" ], [ "paragraph", "Machine Translation (MT)" ] ], "subsections": [], "title": "Machine Translation (MT)" }, { "cite_extract_rate": 0, "cites": [], "content": "As an essential mode of human-machine interaction, \nthe dialogue system offers a wide range of applications.\nIn terms of application scenarios, existing dialogue systems can be grouped into task-oriented and non-task-oriented dialogue systems. 
Among them, the non-task type of conversation system can also be called a chatbot.\n\\myparagraph{DSTC2~}\nThis is a multi-round dialogue dataset of restaurant reservation fields, including 1,612 training data, 506 verification data, and 1,117 test data.\nIt allows the user's goals to change compared to DSTC1. DSTC2 is also richer in terms of the conversation state representation, including the slot value pairs of the user's targets and the ways to find them. \n\\myparagraph{MWOZ~}\nIt contains 8,420/1,000/1,000 conversations for training, validation, and test sets, respectively. It contains 30 pairs in seven domains being a multi-domain fully-labeled corpus. Every sample includes a goal, multiple user and agent utterances, and annotations regarding slot values.\n\\myparagraph{Out-Of-Scope (OOS)~}\nThe dataset includes 15,100 training, 3,100 validation, and 5,500 test sets, respectively. It contains 151 intent classes, containing 150 in-scope and one out-of-scope intent. The out-of-scope intent indicates that a user utterance failed to classify to given predefined objectives.", "id": "6692edf1-7b35-4a59-9b52-0c90edbdf04b", "level": "paragraph", "origin_cites_number": 1, "parent_id": "d089f43d-90f2-4b75-a3ac-15d615f3e514", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on NLP" ], [ "paragraph", "Sentiment Analysis (SA)" ], [ "paragraph", "Dialogue" ] ], "subsections": [], "title": "Dialogue" }, { "cite_extract_rate": 0.897435897435897, "cites": [ 9094, 2518, 2506, 131, 2514, 2505, 7101, 2528, 2524, 7000, 7018, 133, 2502, 7100, 2525, 122, 2503, 2517, 8562, 2516, 2530, 134, 2504, 2527, 2521, 2508, 2507, 2513, 9095, 2526, 732, 2529, 1254, 865 ], "content": "\\begin{table*}[htbp]\n\\centering\n\\caption{The statistics of the datasets used on downstream 
tasks.}\\label{datasets2}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{ccccccl}\n\\hline\n\\textbf{Type} & \\textbf{Name} & \\textbf{Usage} & \\textbf{Domain} & \\textbf{Class} & \\textbf{Size} & \\textbf{Related Papers} \\\\ \\hline\n \\multirow{2}{*}{Classification} & \\multirow{2}{*}{ImageNet} & Pretrain \\& & \\multirow{2}{*}{-} & \\multirow{2}{*}{1000+} & \\multirow{2}{*}{1,200,000+} & \\multirow{2}{*}{ \\begin{tabular}[c]{@{}l@{}}\\\\ \\end{tabular}} \\\\\n & & Downstream & & & &\n\\\\ \\cline{2-7} \n & CIFAR-10 & Downstream & - & 10 & 60,000 & \n \n\\\\ \\cline{2-7} \n & CIFAR-100 & Downstream & - & 100 & 60,000 & \n\n\\\\ \\cline{2-7} \n & STL-10 & Downstream & - & 10 & 6,000 &\n\n\\\\ \\cline{2-7} \n & Caltech-101 & Downstream & object & 101 & 9,146 & \n \n\\\\ \\cline{2-7} \n & MNIST-10 & Downstream & digit & 10 & 60,000 & \n \n\\\\ \\cline{2-7}\n & SVHN & Downstream & digit & 10 & 73,257 & \n\n\\\\ \\cline{2-7} \n & Places205 & Downstream & scene & 205 & 2,448,873 &\n\n\\\\ \\cline{2-7} \n & SUN397 & Downstream & scene & 899 & 130,519 &\n\n\\\\ \\cline{2-7} \n & HMDB51 & Downstream & action & 51 & 7000 & \n\n\\\\ \\cline{2-7} \n & UCF101 & Downstream & action & 101 & - & \n \n\\\\ \\cline{2-7} \n & Food-101 & Downstream & food & 101 & 101,000 & \n\n\\\\ \\cline{2-7} \n & Birdsnap & Downstream & bird & 500 & 49,829 &\n\n\\\\ \\cline{2-7} \n & Cars & Downstream & car & 196 & 16,185 &\n\n\\\\ \\cline{2-7} \n & Aircraft & Downstream & aircraft & 102 & 10,200 &\n\n\\\\ \\cline{2-7} \n & Pets & Downstream & pet & 37 & 7,400 &\n\n\\\\ \\cline{2-7} \n & Flowers & Downstream & flower & 102 & 8,189 &\n\n\\\\ \\cline{2-7} \n & DTD & Downstream & texture & 47 & 5,640 &\n\n\\\\ \\cline{2-7} \n & iNaturallist2018 & Downstream & species & 8,000+ & 450,000+ &\n\n\\\\ \\cline{2-7} \n & JFT-300M & Pretrain & - & 3,000+ & 300,000,000+ &\n\n\\\\ \\cline{1-7} \nDetection & COCO & Downstream & object & 80 & 200,000 & \n \n\\\\ \\cline{2-7}\n & VOC07 & Downstream & object & 
20 & 9,963 & \n \n\\\\ \\cline{1-7}\nSegmentation & VOC12 & Downstream & object & 20 & 2,913 & \n\n\\\\ \\cline{2-7} \n & NYU-Depth V2 & Downstream & scene & 894 & 1,449 & \n\n\\\\ \\cline{2-7} \n & VOC11 & Downstream & object & 20 & 3,334 & \n\n\\\\ \\cline{2-7} \n & ADE20K & Downstream & scene & 3,688 & 27,574 & \n\n\\\\ \\cline{2-7} \n & Cityscapes & Downstream & scene & 25 & 25,000+ & \n\n\\\\ \\cline{2-7} \n & LVIS & Downstream & vocabulary & 1,200+ & 160,000+ & \n\n\\\\ \\cline{2-7} \n & DAVIS & Downstream & scene & 150 & - & \n\n\\\\ \\cline{1-7} \nInpainting & Paris StreetView & Downstream & scene & - & 15,000 & \n\n\\\\ \\cline{1-7}\nSequence & Moving-MNIST & Downstream & digit & - & 10,000 & \n\n\\\\ \\cline{1-7}\n- & YFCC100M & Pretrain & multimedia & - & 100,000,0000+ & \n\n\\\\ \\cline{1-7}\n\\end{tabular}\n}\n\\end{table*}\nThe datasets in CV mainly contain three types from the perspective of tasks: classification, detection, and segmentation. The popular datasets are concluded in Table \\ref{datasets2}, and some infrequently mentioned datasets in long tails are discussed in the text.", "id": "aec0bd98-e02a-43ef-82e5-6bf469622fdc", "level": "subsection", "origin_cites_number": 39, "parent_id": "99f4cfea-bdc9-409b-976b-dcdca4dfb1e0", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on CV" ] ], "subsections": [ "91d791c5-9669-435d-ad7a-46c76cd06f98" ], "title": "Downstream Tasks and Datasets on CV" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 2625, 861, 2626, 732 ], "content": "In this part, we first cover the popular large-scale datasets used frequently in both the pretext and downstream tasks. 
Then the domain datasets only used for the downstream tasks are unfolded.\n\\myparagraph{MNIST~} \nIt is a collection of handwritten digits that includes $60,000$ samples for training and $10,000$ for testing.\nThe images are fixed-size with $28\\times28$ pixels. Pixel values range from $0$ to $255$, where $0$ means background (white) and $255$ means foreground (black). The labels range from 0 to 9, and each image contains exactly one digit. Both traditional and deep learning methods are benchmarked on this most popular dataset, even though advanced methods now achieve near-perfect results. Thus, Geoffrey Hinton has described it as \"the drosophila of machine learning\".\n\\myparagraph{Street View House Numbers (SVHN)~} In the domain of digit numbers, it collects real-world digits from house numbers in Google Street View images. It includes $73,257$ digits for training, $26,032$ digits for testing, and $531,131$ additional, somewhat less difficult, samples for extra training data. All of them are $32\\times32$ color images with both class labels and character-level bounding boxes.\n\\myparagraph{CIFAR~} As more advanced methods show near-perfect results on the simple datasets, more sophisticated datasets such as CIFAR-10 and CIFAR-100 were constructed. These two datasets are closer to real-world objects. The CIFAR-10 contains $50,000$ training images and $10,000$ testing images, with $6,000$ images per class and $32\\times32$ pixels in each RGB color image. The CIFAR-100 is similar to the CIFAR-10 but with more detailed label information. There are $100$ classes, each containing $500$ training images and $100$ testing images. In addition, these $100$ \"fine\" classes are grouped equally into $20$ \"coarse\" classes. Researchers can adapt it to suitable learning methods.\n\\myparagraph{STL-10~} Inspired by the CIFAR-10 dataset, STL-10 is another $96\\times96$ color image dataset containing a similar set of $10$ real-world classes. Each class has $500$ training images and $800$ testing images. 
The biggest difference is that STL-10 has $100,000$ unlabeled images for unsupervised learning. More construction information can be seen in~.\n\\myparagraph{Caltech-101~} It collects roughly $300\\times200$ color images of objects belonging to 101 categories, with 40 to 800 images per category and 50 on average. The outlines of the objects in the pictures are annotated for the convenience of different learning methods.\n\\myparagraph{ImageNet~} This is one of the most popular and large-scale datasets on computer vision. It is built according to the hierarchical structure of WordNet~. The full ImageNet dataset contains $14,197,122$ images and $21,841$ synsets indexed, attaching on average $1,000$ images to demonstrate each synset. The most frequently-used subset of ImageNet is the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset from 2010 to 2017, containing tasks of classification, localization, and detection. The number of samples in training and testing datasets and the labels of images are determined by the specific task, more details are seen in~.\n\\myparagraph{HMDB51~} In addition to the popular MNIST, there still exist many domain datasets used for the downstream tasks in the classification problem. HMDB51 is an action video database for a total of $7,000$ clips in 51 action classes. It contains five types of facial actions and body movements.\n\\myparagraph{UCF101~} It is another action video dataset designed for more realistic action recognition. It is an extension of the UCF50~ dataset containing only 50 action categories with 101 action categories, collected from YouTube. What makes it a famous recognition dataset is the workshop in ICCV13 with UCF101 as its main competition benchmark. 
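The ImageNet paragraph above notes that its labels are organized by the hierarchical structure of WordNet, where each class is a synset with hypernym (parent) links. A minimal sketch of resolving a synset to its ancestor chain follows; the synset IDs and the hypernym table are illustrative toy stand-ins, not the real WordNet data.

```python
# Minimal sketch of WordNet-style label organization in ImageNet:
# each synset ID points to a hypernym (parent) synset, so a leaf
# label can be resolved to its ancestor chain. The table below is a
# toy illustration, NOT the real WordNet hierarchy.
TOY_HYPERNYMS = {
    "n02084071": "n02075296",  # dog -> carnivore (illustrative)
    "n02075296": "n01861778",  # carnivore -> mammal (illustrative)
    "n01861778": "n00015388",  # mammal -> animal (illustrative)
}

def ancestor_chain(synset_id: str) -> list[str]:
    """Walk parent links from a synset up to the hierarchy root."""
    chain = [synset_id]
    while synset_id in TOY_HYPERNYMS:
        synset_id = TOY_HYPERNYMS[synset_id]
        chain.append(synset_id)
    return chain

print(ancestor_chain("n02084071"))
# -> ['n02084071', 'n02075296', 'n01861778', 'n00015388']
```

In the real hierarchy, walking such links is how coarse-grained groupings (e.g., all dog breeds under one ancestor synset) are derived from the 21,841 fine-grained synsets.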
\n\\myparagraph{Food-101~} This is a real-world food dataset of $101$ food categories, with $750$ and $250$ images per class in training and testing dataset respectively.\n\\myparagraph{Birdsnap~} \nIt is a fine-grained visual categorization of birds on a broad scale,\nwith bounding boxes and the locations/annotations of 17 parts in the object. It contains $49,829$ images of the 500 most common species in North America, with each species containing $69$ to $100$ images and most species having 100. In addition, some images are also labeled as male or female, immature or adult, and breeding or non-breeding plumage. \n\\myparagraph{SUN397} To target the scene categorization, the extensive Scene UNderstanding (SUN) database~ fills the gap of the existing dataset with the limited scope of categories. This database has $899$ categories and $130,519$ images, and only images with more than $200\\times200$ pixels were kept. SUN397 is a more well-sampled subset that maintains 397 categories with at least 100 images per category, in which other categories containing relatively few unique photographs are discarded. \n\\myparagraph{Places205} Places205~ dataset is another large scale scene dataset consists of $2,448,873$ images from 205 scene categories.\n\\myparagraph{Cars~} The dataset in the domain of cars contains $16,185$ color images of $196$ classes (at the level of Make, Model, Year) of cars. For convenience, this dataset is split into training and testing sets in roughly equal quantities. \n\\myparagraph{Aircraft~} It is another fine-grained visual classification designed for aircraft (also known as FGVC-Aircraft). A popular form of this dataset is the fine-grained recognition challenge 2013 (FGComp2013)~ ran in parallel with the ILSVRC2013. There exist four-level hierarchies: Model, Variant, Family, Manufacturer, from finer to coarser to organize this database. The more detailed information is shown in~. 
\n\\myparagraph{Pets~} It represents The Oxford-IIIT Pet Dataset that collects 37 pet categories with roughly 200 images per category. All images have an associated ground truth annotation of breed for classification, head ROI for detection, and pixel-level trimap for segmentation. \n\\myparagraph{Flowers~} Similarly, Flowers is another domain dataset in flowers also collected by Oxford; it contains Oxford-17 Flowers of 17 categories and Oxford-102 Flowers of 102 categories. \n\\myparagraph{Describable Textures Dataset (DTD)~} This is an evolving collection of textural images in the wild, which consists of $5,640$ images of 47 categories, with 120 images per category.\n\\myparagraph{iNaturalist2018~} It is a large-scale species classification competition conducted on the FGVC5 workshop at CVPR2018. This dataset contains over 8,000 species categories, with more than $450,000$ images in the training and validation dataset collected from iNaturalist~.\n\\myparagraph{JFT-300M~} JFT-300M is an internal Google dataset introduced by Sun et al~ and well-known from ViT Model~. It is labeled by algorithms that utilize human-computer communications and target classification tasks. 
This dataset finally contains 300M images with over 1000M labels, thus leading to the multiple labels attached to this large-scale dataset.", "id": "91d791c5-9669-435d-ad7a-46c76cd06f98", "level": "paragraph", "origin_cites_number": 14, "parent_id": "aec0bd98-e02a-43ef-82e5-6bf469622fdc", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on CV" ], [ "paragraph", "Classification" ] ], "subsections": [ "78f6477b-7ba4-4dda-bd23-ec85a2ea321d", "8927ac3e-1d6a-4b82-80e0-ce748bd79e75", "f7a26ae9-9dd6-45cc-8972-5d2e2b08b70d" ], "title": "Classification" }, { "cite_extract_rate": 0.5, "cites": [ 486 ], "content": "The detection is a popular task in the CV, and almost all the research is conducted on COCO and PASCAL VOC datasets.\n\\myparagraph{COCO~} This is a large-scale dataset for object detection, segmentation, and caption; it contains $330,000$ RGB images, with more than $200,000$ labeled. There are 1.5 million object instances of 80 object categories involved. Thus, it is one of the most popular benchmark dataset in detection and segmentation in parallel with the following PASCAL VOC.\n\\myparagraph{PASCAL VOC~} \nFrom 2005 through 2012, the dataset has run challenges assessing performance on object class recognition and has provided standardized image datasets for object class recognition.\nThe main datasets used in self-supervised learning are VOC07, VOC11, and VOC12. Main competitions in VOC07~ contain classification and detection tasks; both of them consist of 20 objects and contain at least one object in each image. 
Thus, it is common to use VOC07 to serve as the downstream task for the detection.", "id": "78f6477b-7ba4-4dda-bd23-ec85a2ea321d", "level": "paragraph", "origin_cites_number": 2, "parent_id": "91d791c5-9669-435d-ad7a-46c76cd06f98", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on CV" ], [ "paragraph", "Classification" ], [ "paragraph", "Detection" ] ], "subsections": [], "title": "Detection" }, { "cite_extract_rate": 0.5, "cites": [ 1733, 1728, 7597 ], "content": "The segmentation is a semantics-based pixel-level classification. These datasets are difficult to obtain and annotate, thus they are always used as a downstream task.\n\\myparagraph{VOC11~ \\& VOC12~} Both VOC11 and VOC12 contains classification, detection, and segmentation tasks in the main competition, thus leading to the common use of downstream task for the segmentation.\n\\myparagraph{ADE20K~} It collects $27,574$ images from both the SUN and Places205 databases, in which $25,574$ for training and $2,000$ for testing. All $707,868$ objects from $3,688$ categories existing in images are annotated. Especially, this dataset contains $193,238$ annotated object parts and parts of parts, and additional attributes, annotation time, depth ordering for the benefit of the research community.\n\\myparagraph{NYU-Depth V2~} This is a dataset consisting of images and video sequences from 464 indoor scenes that are recorded by both the RGB and Depth cameras from 3 cities. It contains $1,449$ images with the ground truth of depth, and the original RGB values are also provided. In addition, there are $407,024$ new unlabeled frames and additional class labels for the objects in images.\n\\myparagraph{Cityscapes~} It is a dataset of urban street scenes from 50 cities with the ground truth of semantic segmentation. The main instances are vehicles, people, and construction. 
High-quality dense pixel annotations are provided for $5,000$ images. In addition to the fine annotations, coarser polygonal annotations are provided for a set of $20,000$ images. Moreover, high-quality annotated frames with continuously changing views from the videos are provided for researchers.\n\\myparagraph{LVIS~} It is a dataset for large vocabulary instance segmentation. Its features are that 1) each category or word in one image is related to only one segmentation object; 2) more than $1,200$ categories are extracted from roughly $160,000$ images; 3) a long-tail phenomenon exists in these categories; and 4) it contains more than $2,000,000$ high-quality instance segmentation masks.\n\\myparagraph{Densely Annotated VIdeo Segmentation (DAVIS)~} It is a video dataset designed for the in-depth analysis of the SOTA in video object segmentation, in which DAVIS 2017~ contains both semi-supervised (human-guided at test time) and unsupervised (not human-guided at test time) video sequences with multiple annotated instances.", "id": "8927ac3e-1d6a-4b82-80e0-ce748bd79e75", "level": "paragraph", "origin_cites_number": 6, "parent_id": "91d791c5-9669-435d-ad7a-46c76cd06f98", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on CV" ], [ "paragraph", "Classification" ], [ "paragraph", "Segmentation" ] ], "subsections": [], "title": "Segmentation" }, { "cite_extract_rate": 0.75, "cites": [ 2626, 2627, 7587 ], "content": "There are many datasets designed for special visual tasks such as inpainting. In addition, this part covers data collection in the wild.\n\\myparagraph{Paris StreetView~} The dataset is designed for the image inpainting task, which contains $14,900$ training images and 100 testing images collected from street views of Paris. 
This dataset is collected from Google Street View and mainly focuses on the buildings in the city. \n\\myparagraph{Moving-MNIST~} Based on MNIST, it is a video dataset designed for evaluating sequence prediction or reconstruction, which contains $10,000$ sequences. Each video is 20 frames long and consists of two digits (possibly overlapping) moving inside a $64\\times64$ patch. The first benchmark is reported in~ using an LSTM-based method.\n\\myparagraph{Yahoo Flickr Creative Commons 100 Million (YFCC100M)~} The dataset is the largest public multimedia collection, which users are allowed to search for their own targets; both images and videos can be browsed. It is free for researchers to explore and investigate subsets of the YFCC100M in real time. Subsets of the complete dataset can be retrieved by any keyword search and reviewed directly. In addition, the text information attached to any image or video is abundant, such as location information and user tags. Briefly, it is more a multimedia library than a domain dataset.\n\\myparagraph{Data in the Wild} A more generalized dataset concept in the self-supervised learning era is composed of multimedia websites, apps, or search engines such as Instagram, Flickr, Google Images, etc. 
We believe images in the wild will play a major role in the future study of CV because of the quantity of data, the available computation resources, and the learning power of PFMs.", "id": "f7a26ae9-9dd6-45cc-8972-5d2e2b08b70d", "level": "paragraph", "origin_cites_number": 4, "parent_id": "91d791c5-9669-435d-ad7a-46c76cd06f98", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on CV" ], [ "paragraph", "Classification" ], [ "paragraph", "Others" ] ], "subsections": [], "title": "Others" }, { "cite_extract_rate": 0, "cites": [], "content": "The purpose of the pretraining graph model is to improve the performance of downstream tasks.\nAccording to the different analysis objects of the downstream tasks, they can be divided into nodes, edges, and graphs.\nMeanwhile, the PFMs of GL have been widely used in a mass of fields. \nIn this section, we combine the downstream tasks to conduct statistics on the pretraining datasets and the downstream task datasets.", "id": "c0c5905b-5a26-4f77-8444-f904443bb416", "level": "subsection", "origin_cites_number": 0, "parent_id": "99f4cfea-bdc9-409b-976b-dcdca4dfb1e0", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on Graph" ] ], "subsections": [ "2af89c5f-d372-4215-8bba-7f55dacb0c70" ], "title": "Downstream Tasks and Datasets on Graph" }, { "cite_extract_rate": 0.8095238095238091, "cites": [ 2538, 218, 7357, 1010, 242, 2533, 229, 7582, 2628, 1184, 7103, 1440, 7598, 1439, 240, 2539, 2542 ], "content": "Nodes are the most basic element of the graph, so many downstream tasks mainly focus on the analysis of nodes.\n\\myparagraph{Node Classification}\nNode Classification (NCF) is one of the most prevalent graph-based tasks, which has important analytical value in 
most of the different types of graph data.\nDifferent from the pseudo-labels assigned to nodes in the graph in self-supervised methods, the labels in NCF often come from external information such as manual annotation.\nBased on Definition~\\ref{def: ind_learning} and~\\ref{def: trans_learning}, NCF can be divided into two types: transductive and inductive according to the visibility during training, verification, and testing.\nIn addition, the result of NCF can be single-label or multi-label according to the mutual exclusion of labels.\nThe statistical results of common NCF datasets are shown in Table~\\ref{tab:dataset of nodeL}.\n\\begin{table*}[htbp]\n\t\\tiny\n\t\\centering\n\t\\caption{The statistics of the datasets for node-level tasks. Homogeneous:Hom, Heterogeneous:Het.}\n\t\\label{tab:dataset of nodeL}\n\t\\resizebox{\\textwidth}{!}{\n\t\\begin{tabular}{cccccccccl} \n\t\t\\hline \n\t\\textbf{Task}\t& \\textbf{Name} & \\textbf{Usage} & \\textbf{Source} & \\textbf{Type} & \\textbf{Nodes} & \\textbf{Edges} & \\textbf{Class} & \\textbf{Features} & \\textbf{Related Paper} \\\\\n\t\t\\hline\n\tNCF\t&Academia & pretrain & Citation & Hom & 138K & 739K & - & - & \\\\\\cline{2-10} \n\t\t&DBLP (SNAP) & pretrain & Citation & Hom & 317K & 2M & - & - & \\\\\\cline{2-10} \n\t&\tDBLP (NetRep) & pretrain & Citation & Hom & 540K & 30M & - & - & \\\\\\cline{2-10}\n\t&\tIMDB & pretrain & Movie & Hom & 896K & 8M & - & - & \\\\\\cline{2-10}\n\t&\tFacebook & pretrain & Social & Hom & 3M & 47M & - & - & \\\\\\cline{2-10}\n\t&\tLiveJournal & pretrain & Social & Hom & 4M & 86M & - & - & \\\\\\cline{2-10}\n\t&\t\\multirow{2}{*}{Cora} & \\multirow{2}{*}{Downstream} & \\multirow{2}{*}{Citation} & \\multirow{2}{*}{Hom} & \\multirow{2}{*}{2,708} & \\multirow{2}{*}{5,429} & \\multirow{2}{*}{7} & \\multirow{2}{*}{1,433} & \\\\\n &\t & & & & & & & & \\\\\\cline{2-10}\n\t&\t\\multirow{2}{*}{CiteSeer} & \\multirow{2}{*}{Downstream} & \\multirow{2}{*}{Citation} & \\multirow{2}{*}{Hom} & 
\\multirow{2}{*}{3,327} & \\multirow{2}{*}{4,732} & \\multirow{2}{*}{6} & \\multirow{2}{*}{3,703} & \\\\\n\t&\t& & & & & & & & \\\\\\cline{2-10}\n\t&\t\\multirow{2}{*}{PubMed} & \\multirow{2}{*}{Downstream} & \\multirow{2}{*}{Citation} & \\multirow{2}{*}{Hom} & \\multirow{2}{*}{19K} & \\multirow{2}{*}{44K} & \\multirow{2}{*}{3} & \\multirow{2}{*}{500} & \\\\\n\t&\t& & & & & & & & \\\\\\cline{2-10}\n\t&\tACM & Downstream & Citation & Hom & 8,994 & 26K & 4 & 1,902 & \\\\\\cline{2-10}\n\t&\tCora-Full & Downstream & Citation & Hom & 20K & 63K & 70 & 500 & \\\\\\cline{2-10}\n\t&\tCora-ML & Downstream & Citation & Hom & 2,995 & 8,158 & 7 & 2879 & \\\\\\cline{2-10}\n\t&\tReddit-233K & Downstream & Social & Hom & 233K & 57M & 210 & 5,414 & \\\\\\cline{2-10}\n\t&\tBlogCatalog & Downstream & Social & Hom & 10K & 334K & 39 & - & \\\\\\cline{2-10}\n\t&\tYouTube & Downstream & Social & Hom & 1M & 3M & 47 & - & \\\\\\cline{2-10}\n\t&\tReddit-231K & Downstream & Social & Hom & 231K & 11M & 41 & 602 & \\\\\\cline{2-10}\n\t&\tAmazon & Downstream & Social & Het & 130M & - & - & - & \\\\\\cline{2-10}\n\t&\tPPI-30K & Downstream & Protein & Het & 3,890 & 77K & 50 & - & \\\\\\cline{2-10}\n\t&\tPPI-57K & Downstream & Protein & Het & 57K & 819K & 121 & 50 & \\\\\\cline{2-10}\n\t&\tIMDB & Downstream & Movie & Hom & 12K & 37K & 4 & 1,256 & \\\\\\cline{2-10}\n\t&\tFour-Univ & Downstream & Movie & Hom & 4,518 & 3,426 & 6 & 2,000 & \\\\\\cline{2-10}\n\t&\tChameleon & Downstream & Web & Hom & 2,277 & 36K & 6 & 500 & \\\\\\cline{2-10}\n\t&\tCrocodile & Downstream & Web & Hom & 12K & 180K & 6 & 500 & \\\\\\cline{2-10}\n\t&\tFlickr-89K & Downstream & Web & Hom & 89K & 450K & 7 & 500 & \\\\\\cline{2-10}\n\t&\togbn-arxiv & Downstream & Web & Hom & 169K & 117K & 40 & 128 & \\\\\\cline{2-10}\n\t&\tWiki-CS & Downstream & Web & Hom & 12K & 277K & 10 & 300 & \\\\\\cline{2-10}\n\t&\tDBLP & Downstream & Web & Hom & 17K & 53K & 4 & 1639 & \\\\\\cline{2-10}\n\t&\tComputers & Downstream & Co-purchase & Hom & 
14K & 246K & 10 & 767 & \\\\\\cline{2-10}\n\t&\tPhoto & Downstream & Co-purchase & Hom & 7,650 & 119K & 8 & 745 & \\\\\\cline{2-10}\n\t&\tCS & Downstream & Co-author & Hom & 18K & 82K & 15 & 500 & \\\\\\cline{2-10}\n\t&\tPhysics & Downstream & Co-author & Hom & 35K & 248K & 5 & 500 & \\\\\\cline{2-10}\n\t&\tH-index & Downstream & Co-author & Hom & 5,000 & 44K & - & - & \\\\\\cline{2-10}\n\t&\tFlickr-81K & Downstream & Photo & Hom & 81K & 6M & 195 & - & \\\\\\cline{2-10}\n\t&\tWikipedia & Downstream & Word & Hom & 4,777 & 185K & 40 & - & \\\\\\cline{2-10}\n\t&\tUS-Airport & Downstream & Airline & Hom & 1,190 & 13K & - & - & \\\\\\cline{2-10}\n\t&\tOAG & Downstream & Academic & Het & 178M & 2B & - & - & \\\\\n\t\t\\hline\nNTKS\t&\tKDD-ICDM & Downstream & Co-author & Hom & 2,867/2,607 & 7,637/4,774 & 697 & - & \\\\\\cline{2-10}\n&\t\tSIGIR-CIKM & Downstream & Co-author & Hom & 2,851/3,548 & 6,354/7,076 & 874 & - & \\\\\\cline{2-10}\n\t&\tSIGMOD-ICDE & Downstream & Co-author & Hom & 2,626/2,559 & 8,304/6,668 & 898 & - & \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t}\n\\end{table*}\n\\myparagraph{Node Clustering}\nThe goal of Node Clustering (NCI) is to divide a graph into different classes or clusters according to a certain standard so that the correlation of nodes in the same cluster is as large as possible, and the irrelevance of nodes that are not in the same cluster is also minimized.\nAlthough NCI has already appeared as a pretext task in the above-mentioned pretraining tasks, it can still be used to test graph models pretrained with other pretext tasks.\n\\myparagraph{Top-K Search}\nThe goal of the Top-K Search (TKS) task is to search for the K nodes with the highest predefined associations for a given node in the graph.\nUsually, TKS is used for search tasks such as recommendation and alignment.\nThe detailed statistical results of the datasets are shown in Table~\\ref{tab:dataset of nodeL}.", "id": "2af89c5f-d372-4215-8bba-7f55dacb0c70", "level": "paragraph", 
"origin_cites_number": 21, "parent_id": "2af89c5f-d372-4215-8bba-7f55dacb0c70", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on Graph" ], [ "paragraph", "Node-Level Tasks" ] ], "subsections": [ "c074e3cd-2e98-4272-b494-720548f9d202", "dc734dd2-83d6-429f-86be-0de49bfcbb52", "7dbfd67b-ebb4-4a93-8832-e0363a9fec3e" ], "title": "Node-Level Tasks" }, { "cite_extract_rate": 0.7619047619047611, "cites": [ 2538, 7357, 8573, 242, 2533, 229, 8563, 7582, 2628, 7103, 1440, 7598, 1439, 240, 2539, 2542 ], "content": "The edge is also an important part of the graph structure, which associates independent nodes and is the key to distinguishing graph data from non-relational data.\nEspecially in some specific fields (e.g., molecules, proteins), edges contain real information, so there are various tasks related to edges.\n\\myparagraph{Link Classification}\nSimilar to NCF, Link Classification (LC) also assigns one or more labels to a given edge.\nIn fact, in LC, the nodes at both ends of the edge are still taken into consideration.\n\\myparagraph{Link Prediction}\nLink Prediction (LP) is a common graph task (e.g., in knowledge graphs). \nThe goal of LP is to predict edges that are removed or may exist in the graph.\nSimilar to NCI, LP is also one of the pretext tasks in self-supervised learning, and its statistical results are shown in Table~\\ref{tab:dataset of lc}.\n\\begin{table*}[htbp]\n\t\\tiny\n\t\\centering\n\t\\caption{The statistics of the datasets for LC. 
Homogeneous:Hom, Heterogeneous:Het.}\n\t\\label{tab:dataset of lc}\n\t\\resizebox{\\textwidth}{!}{\n\t\\begin{tabular}{ccccccccl} \n\t\t\\hline \n\t\t\\textbf{Name} & \\textbf{Usage} & \\textbf{Source} & \\textbf{Type} & \\textbf{Nodes} & \\textbf{Edges} & \\textbf{Class} & \\textbf{Features} & \\textbf{Related Paper} \\\\\n\t\t\\hline\n\t\t\\multirow{2}{*}{Cora} & \\multirow{2}{*}{Downstream} & \\multirow{2}{*}{Citation} & \\multirow{2}{*}{Hom} & \\multirow{2}{*}{2,708} & \\multirow{2}{*}{5,429} & \\multirow{2}{*}{7} & \\multirow{2}{*}{1,433} & \\\\\n\t\t& & & & & & & & \\\\\\hline\n\t\t\\multirow{2}{*}{CiteSeer} & \\multirow{2}{*}{Downstream} & \\multirow{2}{*}{Citation} & \\multirow{2}{*}{Hom} & \\multirow{2}{*}{3,327} & \\multirow{2}{*}{4,732} & \\multirow{2}{*}{6} & \\multirow{2}{*}{3,703} & \\\\\n\t\t& & & & & & & & \\\\\\hline\n\t\t\\multirow{2}{*}{PubMed} & \\multirow{2}{*}{Downstream} & \\multirow{2}{*}{Citation} & \\multirow{2}{*}{Hom} & \\multirow{2}{*}{19K} & \\multirow{2}{*}{44K} & \\multirow{2}{*}{3} & \\multirow{2}{*}{500} & \\\\\n\t\t& & & & & & & & \\\\\\hline\n\t\tML-100K & Downstream & Movie & Hom & 2,625 & 100K & 5 & - & \\\\\\hline\n\t\tML-1M & Downstream & Movie & Hom & 9,940 & 1M & 5 & - & \\\\\\hline\n\t\tBlogCatalog-5K & Downstream & Social & Hom & 5,196 & 172K & 6 & 8,189 & \\\\\\hline\n\t\tAmazon & Downstream & Social & Het & 130M & - & - & - & \\\\\\hline\n\t\tPPI-57K & Downstream & Protein & Het & 57K & 819K & 121 & 50 & \\\\\\hline\n\t\tFlickr-7K & Downstream & Photo & Hom & 7,575 & 240M & 9 & 12,047 & \\\\\\hline\n\t\tLast-FM & Downstream & Music & Hom & 15K & 73K & 122 & - & \\\\\\hline\n\t\tBook-Crossing & Downstream & Book & Hom & 111K & 443K & 52 & - & \\\\\\hline\n\t\tOAG & Downstream & Academic & Het & 178M & 2B & - & - & \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t}\n\\end{table*}\n\\myparagraph{Top-K Recommendation}\nTop-K Recommendation (TKR) is exactly the same as the definition of TKS; the difference lies in the sorting goal.", 
"id": "c074e3cd-2e98-4272-b494-720548f9d202", "level": "paragraph", "origin_cites_number": 21, "parent_id": "2af89c5f-d372-4215-8bba-7f55dacb0c70", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on Graph" ], [ "paragraph", "Node-Level Tasks" ], [ "paragraph", "Link-Level Tasks" ] ], "subsections": [], "title": "Link-Level Tasks" }, { "cite_extract_rate": 0.8125, "cites": [ 2631, 2544, 2543, 7335, 2532, 2541, 2630, 2533, 2534, 2629, 1184, 2632, 8573 ], "content": "The graph-level task generally focuses on the distribution of nodes, edges, and attributes in a given graph, in order to infer the possible properties of the entire graph.\n\\myparagraph{Graph Classification}\nGraph Classification (GC) is commonly used on social, molecular, and protein graph data, which aims to predict the property of a given community, chemical compound, or protein.\nThe statistical results are shown in Table~\\ref{tab:dataset of gc}.\n\\begin{table*}[htbp]\n\t\\tiny\n\t\\centering\n\t\\caption{The statistics of the datasets for GC. 
Homogeneous:Hom, Heterogeneous:Het.}\n\t\\label{tab:dataset of gc}\n\t\\resizebox{\\textwidth}{!}{\n\t\\begin{tabular}{ccccccccl} \n\t\t\\hline \n\t\t\\textbf{Name} & \\textbf{Usage} & \\textbf{Source} & \\textbf{Type} & \\textbf{Graphs} & \\textbf{ Nodes} & \\textbf{Edges} & \\textbf{Class} & \\textbf{Related Paper} \\\\\n\t\t\\hline\n\t\tZINC15 & Pretraining & Molecule & Hom & 2M & - & - & - & \\\\\\hline\n\t\tChEMBL & Pretraining & Molecule & Hom & 456K & - & - & - & \\\\\\hline\n\t\tPPI-pre & Pretraining & Protein & Het & 395K & - & - & - & \\\\\\hline\n\t\tMUTAG & Downstream & Molecule & Hom & 188 & - & - & 2 & \\\\\\hline\n\t\tPTC & Downstream & Molecule & Hom & 344 & - & - & 2 & \\\\\\hline\n\t\tBBBP & Downstream & Molecule & Hom & 2,039 & - & - & 2 & \\\\\\hline\n\t\tTox21 & Downstream & Molecule & Hom & 7,831 & - & - & 24 & \\\\\\hline\n\t\tToxCast & Downstream & Molecule & Hom & 8,575 & - & - & 1,234 & \\\\\\hline\n\t\tSIDER & Downstream & Molecule & Hom & 1,427 & - & - & 54 & \\\\\\hline\n\t\tClinTox & Downstream & Molecule & Hom & 1,478 & - & - & 4 & \\\\\\hline\n\t\tMUV & Downstream & Molecule & Hom & 93K & - & - & 34 & \\\\\\hline\n\t\tHIV & Downstream & Molecule & Hom & 41K & - & - & 2 & \\\\\\hline\n\t\tBACE & Downstream & Molecule & Hom & 1,513 & - & - & 2 & \\\\\\hline\n\t\tPPI-88K & Downstream & Protein & Het & 88K & - & - & 80 & \\\\\\hline\n\t\tIMDB-M & Downstream & Movie & Hom & 1,500 & 19K & 99K & 3 & \\\\\\hline\n\t\tIMDB-B & Downstream & Movie & Hom & 1,000 & 19K & 97K & 2 & \\\\\\hline\n\t\tFreeSolv & Downstream & Molecule & Hom & 642 & - & - & - & \\\\\\hline\n\t\tESOL & Downstream & Molecule & Hom & 1,128 & - & - & - & \\\\\\hline\n\t\tLipophilicity & Downstream & Molecule & Hom & 4,200 & - & - & - & \\\\\\hline\n\t\tQM7 & Downstream & Molecule & Hom & 6,830 & - & - & - & \\\\\\hline\n\t\tQM8 & Downstream & Molecule & Hom & 22K & - & - & - & \\\\\\hline\n\t\tCOLLAB & Downstream & Co-author & Hom & 5,000 & 373K & - & 3 & 
\\\\\\hline\n\t\tRDT-B & Downstream & Co-author & Hom & 2,000 & 859K & - & 2 & \\\\\\hline\n\t\tRDT-M & Downstream & Co-author & Hom & 5,000 & 3M & - & 5 & \\\\\\hline\n\t\tNCI1 & Downstream & Molecule & Hom & 4,110 & 123K & 132K & 2 & \\\\\\hline\n\t\tNCI109 & Downstream & Molecule & Hom & 4,127 & 123K & 133K & 2 & \\\\\\hline\n\t\tPROTEINS & Downstream & Molecule & Hom & 1,113 & 44K & 81K & 2 & \\\\\\hline\n\t\tD$\\&$D & Downstream & Molecule & Hom & 1,178 & 335K & 843K & 2 & \\\\\\hline\n\t\tMutagenicity & Downstream & Molecule & Hom & 4,337 & 131K & 134K & 2 & \\\\\\hline\n\t\tMETR-LA & Downstream & Traffic & Hom & 1 & 207 & - & - & \\\\\t\n\t\t\\hline\n\t\\end{tabular}\n\t}\n\\end{table*}", "id": "dc734dd2-83d6-429f-86be-0de49bfcbb52", "level": "paragraph", "origin_cites_number": 16, "parent_id": "2af89c5f-d372-4215-8bba-7f55dacb0c70", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on Graph" ], [ "paragraph", "Node-Level Tasks" ], [ "paragraph", "Graph-Level Tasks" ] ], "subsections": [], "title": "Graph-Level Tasks" }, { "cite_extract_rate": 0, "cites": [], "content": "The PFMs of GL have been widely used in a mass of fields. 
\nWe will describe the details of the pretraining datasets and the downstream task datasets.\n\\myparagraph{Citation and Co-author network}\nA citation is a basic local representation, whose structure reflects the citation relationships of papers in a research direction or field.\nSpecifically, a citation network is a kind of relational data composed of research papers as nodes and citation relations as edges.\nAmong them, the citation network used in the GL model usually comes from local samples of common citation databases, e.g., Cora, Citeseer, and PubMed, and serves as a downstream task.\nSimilarly, \nthe co-author network is a dataset of scientific collaboration that corresponds to a researcher's ego network, in which the researcher and their collaborators are nodes and an edge indicates collaboration between two researchers.\nAccording to different requirements of downstream tasks, such co-author networks can be used for various tasks, e.g., node classification and graph classification.\n\\myparagraph{Molecular and protein network}\nA molecular network usually refers to a compound composed of atoms and atomic bonds, and predicting the properties of the compound is usually regarded as a graph classification task.\nFor example, MUTAG is a collection of nitroaromatic compounds, and the goal is to predict their mutagenicity to Salmonella typhimurium.\nPTC uses a graph to show the structure of multiple compounds and aims to predict the carcinogenicity of different compounds in rats.\nThe protein network is a collection of proteins classified as either enzymes or non-enzymes.\nThe amino acids are represented by nodes, and two nodes are connected by an edge if they are less than 6 Angstroms apart.\n\\myparagraph{Social and Movie network}\nThe social network is the social-relational data in the real network environment, which usually represents the relationship between users or posts.\nFor instance, \nReddit is a graph dataset comprised of Reddit posts made in September 
2014.\nBlogCatalog is a graph dataset that represents a network of social relationships between bloggers who are listed on the BlogCatalog website.\nThe movie network is usually composed of actors and their co-occurrence participation in the movie.\nFor example, IMDB-B is a movie collaboration dataset that contains a large number of self-networks of actors who play movie roles in IMDB. \nNodes in each graph represent actors/actresses, and if they appear in the same film, an edge connects them.\nThese graphs are based on action and romance genres.\nThe difference between IMDB-M and IMDB-B is that a node in the graph represents one or more actors.\n\\myparagraph{Others}\nSome of the rarer graph data are used to test the universality of the PFM, such as word networks (Wikipedia), book networks (Book-crossing), and airline networks (US-Airport).\nIn addition, there are also some special graph structures adapted to specific models, such as spatiotemporal graphs (METR-LA).\n\\bibliographystyle{ieeetr}\n\\bibliography{PFM}\n\\end{document}", "id": "7dbfd67b-ebb4-4a93-8832-e0363a9fec3e", "level": "paragraph", "origin_cites_number": 0, "parent_id": "2af89c5f-d372-4215-8bba-7f55dacb0c70", "prefix_titles": [ [ "title", "A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT" ], [ "section", "Datasets" ], [ "subsection", "Downstream Tasks and Datasets on Graph" ], [ "paragraph", "Node-Level Tasks" ], [ "paragraph", "Data Source" ] ], "subsections": [], "title": "Data Source" } ]
53
[ 1458, 1550, 2464, 1445, 679, 1354, 2465, 1150, 2472, 2466, 9115, 364, 1578, 2470, 7165, 8466, 2467, 2469, 7465, 8536, 2471, 11, 2468, 7579, 7, 1582, 1557, 7518, 1183, 8558, 732, 1554, 7095, 38, 2473, 9095, 9094, 207, 826, 8385, 8559, 2474, 7096, 50, 2475, 2476, 8560, 8561, 7580, 2482, 2478, 1185, 2493, 2491, 8469, 8369, 2484, 1684, 1551, 2487, 856, 8462, 2480, 9, 2490, 2477, 2483, 794, 2219, 2488, 7271, 2492, 4511, 2479, 8424, 7097, 7098, 2485, 2489, 7461, 2481, 7370, 1156, 7581, 7463, 7460, 1553, 2486, 2208, 2494, 1587, 8534, 2198, 432, 8472, 2499, 2497, 7099, 2495, 2501, 2498, 1928, 2399, 2500, 2496, 7468, 134, 2504, 126, 2507, 2502, 2506, 131, 2505, 1254, 2503, 2508, 7100, 2509, 9149, 2512, 2513, 2514, 2515, 1501, 2511, 7339, 2510, 7018, 681, 2516, 122, 133, 2517, 2518, 2521, 2519, 2520, 7000, 2522, 7102, 2523, 7101, 2528, 2525, 8562, 2530, 2527, 2526, 2529, 2524, 865, 7582, 1440, 7335, 1010, 218, 282, 229, 2531, 2532, 1184, 240, 2533, 7103, 2534, 2536, 2537, 2535, 2538, 8563, 7357, 2543, 7104, 7583, 2541, 2540, 242, 2544, 8564, 2539, 2542, 2549, 2546, 2547, 864, 2545, 2550, 7584, 2548, 2551, 2554, 2556, 2555, 2553, 2552, 1272, 2015, 7585, 1639, 1278, 2558, 2557, 2559, 2561, 2563, 2560, 2562, 2564, 8459, 2565, 7586, 2566, 2567, 2342, 55, 7587, 8565, 2568, 96, 1223, 1768, 514, 508, 8429, 7589, 305, 206, 836, 486, 2569, 2571, 802, 2570, 1759, 7588, 97, 810, 7501, 1741, 209, 520, 2572, 2401, 8567, 1252, 2574, 7217, 8566, 7022, 2573, 157, 896, 2576, 7268, 7194, 2575, 7360, 2580, 2579, 2578, 7590, 2577, 2581, 2582, 2583, 2584, 2586, 2588, 2590, 8569, 2585, 312, 8568, 2589, 2587, 7591, 7593, 2594, 2592, 2591, 2593, 1911, 8445, 7105, 1411, 7592, 2595, 2598, 2596, 2597, 2601, 2600, 2599, 2602, 2603, 8570, 2623, 2614, 7595, 1174, 2605, 2611, 2622, 7594, 7106, 1091, 2616, 1067, 9120, 2617, 7596, 7265, 1159, 2613, 2618, 2615, 2608, 1090, 1938, 2624, 2609, 1173, 1970, 2607, 2612, 2606, 2383, 2620, 2604, 2610, 8571, 1096, 2621, 2387, 8412, 8410, 2619, 8572, 9121, 943, 
3691, 439, 1565, 2625, 861, 2626, 1733, 1728, 7597, 2627, 2628, 7598, 1439, 8573, 2631, 2630, 2629, 2632 ]
1.559822
[ "Yu Xie", "Chunyi Li", "Bin Yu", "Chen Zhang", "Zhouhua Tang" ]
A Survey on Dynamic Network Embedding
2020
2020-06-15T02:30:05Z
cs.SI
Real-world networks are composed of diverse interacting and evolving entities, while most existing studies simply characterize them as particular static networks, without consideration of the evolution trend in dynamic networks. Recently, significant progress has been made in tracking the properties of dynamic networks, exploiting changes of entities and links in the network to devise network embedding techniques. Compared to widely proposed static network embedding methods, dynamic network embedding endeavors to encode nodes as low-dimensional dense representations that effectively preserve the network structures and the temporal dynamics, which is beneficial to multifarious downstream machine learning tasks. In this paper, we conduct a systematic survey on dynamic network embedding. Specifically, basic concepts of dynamic network embedding are described; notably, we propose a novel taxonomy of existing dynamic network embedding techniques for the first time, including matrix factorization based, Skip-Gram based, auto-encoder based, neural networks based and other embedding methods. Additionally, we carefully summarize the commonly used datasets and a wide variety of subsequent tasks that dynamic network embedding can benefit. Afterwards, and most importantly, we suggest several challenges that the existing algorithms face and outline possible directions to facilitate future research, such as dynamic embedding models, large-scale dynamic networks, heterogeneous dynamic networks, dynamic attributed networks, task-oriented dynamic network embedding and more embedding spaces. \\keywords{Network Analysis \\and Dynamic Networks \\and Dynamic Network Embedding}
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "1746dcc5-851c-4521-ab0d-90b718fe1c00", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ] ], "subsections": [ "00c8b60c-73a6-4956-8904-087ba9cccdbd", "fb1b6853-8371-47cb-8345-37d615de33b8", "851ef6e4-9ca8-4f4c-bb96-ebaf5e57bbd4", "73e83d02-6458-454a-98d9-080bd91963aa", "4e095d52-e7d7-4248-b7f5-ecbc944db33c", "c1d28608-76e5-4e53-b902-b3439e85de0a" ], "title": "root" }, { "cite_extract_rate": 0.318181818181818, "cites": [ 282, 1010, 218, 217, 6199, 219, 479 ], "content": "Network analysis has attracted increasing attention over recent years due to the ubiquity of network data in the real world. The graph-structured network is an information carrier commonly used in complex systems, such as semantic networks , protein-protein interaction networks , social networks and criminal networks . In order to construct feature representations that can be applied to various tasks on graph-structured networks, network embedding is proposed to embed each node in the network into a low-dimensional space . However, real-world networks are usually evolving, and simply exploiting static network embedding techniques for dynamic network embedding will increase the consumption of time and space, since the model must be retrained once the network changes. Furthermore, in order to capture the temporal information in network evolution, many dynamic network embedding models have been proposed to embed the unstructured data into a low-dimensional space, and predict the trend of network evolution. Fig. \\ref{figure1} shows an illustrative example of embedding two snapshots of a dynamic network into a 2D space. 
Nowadays, dynamic network representation learning has been successfully applied to machine learning tasks on complex networks, such as visualization, node classification, node clustering, and link prediction.\nThrough modeling the interactions between entities, dynamic network embedding methods can facilitate practical applications, e.g., community detection , recommender systems and so on. Based on social networks whose interactions are constructed from user relationships, we can discover communities, build interpersonal networks, and predict the interactions and behaviors of users.\n\\begin{figure}[!htb]\n\t\\centering\n\t\\tiny\n\t\\subfloat{\n\t\t\\includegraphics[width=0.45\\linewidth]{dynetwork1.eps}\n\t\t\\includegraphics[width=0.45\\linewidth]{dynetwork2.eps}}\n\t\\\\\n\t\t\\subfloat{\n\t\t\\includegraphics[width=0.45\\linewidth]{n-plt-1.eps}\n\t\t\\includegraphics[width=0.45\\linewidth]{n-plt-3.eps}}\n\t\\caption{Two snapshots of datasets collected from the MathOverflow social network, an online community of interactive mathematics, in 2011-2012. For illustration, 20 nodes are selected randomly, and we draw the original network structure graph and its 2-D layout after embedding.}\n\t\\label{figure1}\n\\end{figure}\nConsiderable effort has been devoted to developing static network embedding techniques. DeepWalk formulates static network embedding as a sequence modeling problem, which generates node sequences by random walk and adopts Skip-Gram to predict the context from input nodes. Based on the DeepWalk model , node2vec integrates depth-first search and breadth-first search strategies for exploring more flexible network structures. LINE carefully designs a loss function for characterizing first-order and second-order proximities, and this method is applicable to large-scale information networks of arbitrary types. 
In addition, SDNE proposes a structural deep graph embedding model, which combines the advantages of first-order and second-order proximities, and maintains the highly non-linear structure. DNE-APP proposes a deep network embedding model, which adopts a semi-supervised stacked auto-encoder to obtain the compact representation. Furthermore, in order to prevent the manifold fracturing problem, ANE utilizes an adversarial auto-encoder for variational inference to generate network representations by matching the posterior of node embeddings with an arbitrary prior distribution. Moreover, APNE uses matrix decomposition to integrate structure and node label information. However, static network embedding methods lack the ability to generalize under network evolution. For example, users can randomly create and delete their friendships on YouTube or Facebook, and active neurons tend to develop new relationships with other neurons in brain networks. A newly added or deleted edge has an impact on network analysis, while the existing static network embedding methods cannot extract rich temporal information. In order to overcome these obstacles, a great many dynamic network embedding methods have been proposed.\nIn graph theory, a static network is composed of a set of nodes and a set of connecting edges, which is indicated by an adjacency matrix. Dynamic networks, in contrast, are characterized by definitions that involve history information, snapshots, timestamps, nodes, and edges. These definitions fall into two categories: snapshot and continuous-time networks. Snapshots are obtained by dividing the dynamic network at equal time intervals, so that a discrete sequence of network evolution is obtained by decomposing the dynamic network into a sequence of static networks. Nevertheless, dividing the network into equal-interval snapshots in a time series only captures global changes roughly and cannot retain fine-grained evolution information of the network. 
In contrast, the continuous-time mode compensates for this deficiency by marking each edge with specific timestamps, so that it can be determined exactly when the connection between two nodes changes. Based on the two definitions mentioned above, several methods have contributed to both research and practice in the dynamic network embedding field. Specifically, Zhou \textit{et al.} designed DynamicTriad, which models how an open triadic unit closes, thus preserving both structural information and dynamics in network evolution. Nguyen \textit{et al.} incorporated temporal dependencies into node embedding and deep graph models, thereby integrating the temporal information of dynamic networks. In , evolving patterns in the dynamic network are captured by evolving random walks, and the current embedding vectors are initialized with the previous ones. Ahmed \textit{et al.} introduced a technology based on non-negative matrix factorization to obtain latent features from temporal snapshots of dynamic networks. Zhu \textit{et al.} proposed a generalized eigen perturbation model which can incorporate the variation of network links.\n\begin{figure*}[!htb]\n\t\centering\n\t\includegraphics[width=1\linewidth]{conclude.eps}\n\t\caption{The framework of dynamic network embedding is constructed from two aspects: the embedding model and the preprocessing process. Specifically, the network dataset with attributes and labels is first preprocessed into uniform linear arrays, and the normalized matrices are then generated along the dynamic network evolution, integrated with the time series. 
After that, the network representations are constructed by updating the normalized matrices through the optimization algorithm.}\n\t\label{figure2}\n\end{figure*}\nAccording to both the attributes of the network and the architecture of the model, we categorize the existing dynamic network embedding techniques into five categories: embedding based on matrix factorization, embedding based on Skip-Gram, embedding based on auto-encoder, embedding based on neural networks, and embedding based on other methods. Fig. \ref{figure2} shows how to construct a system of dynamic network embedding by adopting a two-stage execution method. Different embedding methods have various criteria for assessing model performance and weaknesses. Theoretically, methods based on neural networks may suffer from high computational and space costs.\nGeneralizing from the classification scheme mentioned above, we introduce the challenges that each category of dynamic network embedding techniques is confronted with, and how to address them. Although several surveys , , summarize network embedding methods, they have two major limitations. On the one hand, the existing graph embedding surveys only cover research based on traditional methods, and many recent innovative frameworks are not included. For instance, mainly introduces three categories of approaches and lacks coverage of novel methods. Moreover, focuses only on knowledge graph embedding. On the other hand, they only concern static network embedding while ignoring the critical properties of dynamic networks. 
Despite remarkable advancements, the domain of dynamic network embedding still lacks a comprehensive summary that covers the full range of novel techniques.\nTo summarize, we make the following contributions:\n\begin{itemize}\n\t\item We propose a new taxonomy of dynamic network embedding according to different models and describe the challenges in these models, which opens up new perspectives for understanding existing works. Furthermore, we systematically categorize the applications that dynamic network embedding supports.\n\t\item Our work follows the dynamic embedding approaches, but differs from previous works in summarizing insights on why dynamic graph embedding can be performed in a certain way, which can provide feasible guidance for future research.\n\t\item To facilitate timely and promising research, we point out six promising future research directions in the dynamic network embedding domain, which consist of dynamic embedding models, large-scale networks, heterogeneous networks, attributed networks, task-oriented dynamic network embedding and more embedding spaces.\n\end{itemize}\nThe rest of our paper is arranged as follows. In Section \ref{ProForm}, the problem definitions commonly used in dynamic network embedding strategies are introduced, and we provide a formalized definition of dynamic network embedding. In Section \ref{Categories}, different dynamic network embedding techniques are elaborated in detail. The applications that dynamic network embedding enables are presented in Section 4. After that, Section 5 summarizes the challenges and points out several potential future research directions. 
In the end of this survey, we conclude the paper in Section 6.", "id": "00c8b60c-73a6-4956-8904-087ba9cccdbd", "level": "section", "origin_cites_number": 22, "parent_id": "1746dcc5-851c-4521-ab0d-90b718fe1c00", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{ProForm}\nIn Section \\ref{Definition}, the definition of several basic concepts of dynamic network embedding are described in detail. In addition, Section \\ref{Notations} provides the notations commonly used in this paper.", "id": "fb1b6853-8371-47cb-8345-37d615de33b8", "level": "section", "origin_cites_number": 0, "parent_id": "1746dcc5-851c-4521-ab0d-90b718fe1c00", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Problem Formalization" ] ], "subsections": [ "a4cec22c-6bed-4347-809e-aa9edd3f610c", "3a26b339-157f-4cd6-86c8-f20afbe5627b" ], "title": "Problem Formalization" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Definition}\n\\begin{lem}\n\t\\textbf{(Network).} A network is usually represented as a graph $ G=\\left ( V,E \\right ) $, where $ V $ denotes the node set and $ E $ denotes the edge set.\n\\end{lem}\n\\begin{lem}\n\t\\textbf{(Static Network).} Given a network $ G=\\left ( V,E \\right ) $, the network is static if all nodes and edges remain one state that are constant in time.\n\\end{lem}\n\\begin{lem}\n\t\\textbf{(Dynamic Network).} A dynamic network $\\mathcal{G}$ can be divided into a series of snapshots, i.e., $ \\mathcal{G}=\\left \\{ G^{ 1 },...,G^{ T } \\right \\} $, where $ T $ denotes the number of snapshots. 
For each snapshot $ G^{ t } $, $ G^{ t }=\\left ( V^{ t },E^{ t} \\right ) $ indicates how nodes and edges are connected at the time step $ t $.\n\\end{lem}", "id": "a4cec22c-6bed-4347-809e-aa9edd3f610c", "level": "subsection", "origin_cites_number": 0, "parent_id": "fb1b6853-8371-47cb-8345-37d615de33b8", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Problem Formalization" ], [ "subsection", "Problem Definition" ] ], "subsections": [], "title": "Problem Definition" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Notations}\nThe notations we used are defined in Table \\ref{Tab 1}.\n\\renewcommand\\arraystretch{1.5}\n\\begin{table}[!htbp]\n\t\\setlength{\\abovecaptionskip}{1pt}\n\t\\caption{Notions and the descriptions}\\label{Tab 1}\n\t\\centering\n\t\\begin{tabular}{|p{3.5cm}<{\\centering}|p{4.2cm}<{\\centering}|}\n\t\t\\hline\n\t\tNotions & Descriptions \\\\ \\hline\n\t\t$V$ & set of nodes \\\\ \\hline\n\t\t$E$ & set of edges \\\\ \\hline\n\t\t$d$ & embedding dimension \\\\ \\hline\n\t\t$N$ & number of nodes \\\\ \\hline\n\t\t$\\mathcal{G}$ & a static network \\\\ \\hline\n\t\t$ \\mathcal{G}=\\left \\{ G^{ 1 },...,G^{T} \\right \\} $ & a dynamic network \\\\ \\hline\n\t\t$ G^{T} $ & a snapshot \\\\ \\hline\n\t\t $ Y^{T} $ & embedding representations at time $T$ \\\\ \\hline\n\t\t $X$ & adjacency matrix \\\\ \\hline\n\t\t $\\Delta X$ & change of adjacency matrix \\\\ \\hline\n\t\t $x_{1}, x_{2}, ... 
x_{n}$ & eigen vectors \\\\ \\hline\n\t\t $A$ & attributes matrix \\\\ \\hline\n\t\t $\\Sigma$ & diagonal matrix \\\\ \\hline\n\t\t $L$ & Laplacian matrix \\\\ \\hline\n\t\t $S$ & collection of node pairs \\\\ \\hline\n\t\\end{tabular}\n\\end{table}", "id": "3a26b339-157f-4cd6-86c8-f20afbe5627b", "level": "subsection", "origin_cites_number": 0, "parent_id": "fb1b6853-8371-47cb-8345-37d615de33b8", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Problem Formalization" ], [ "subsection", "Notations" ] ], "subsections": [], "title": "Notations" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Categories}\nIn this section, we introduce a new taxonomy for categorizing existing dynamic network embedding methods, as shown in Fig. \\ref{figure2}. We adopt the classification strategies based on whether the dynamic network is represented as snapshots or continuous-time network marked by timestamps. According to the strategy, we categorize dynamic network embedding into five broad categories: embedding based on eigenvalue factorization, embedding based on Skip-Gram, embedding based on auto-encoder, embedding based on graph convolution and other embedding methods.", "id": "851ef6e4-9ca8-4f4c-bb96-ebaf5e57bbd4", "level": "section", "origin_cites_number": 0, "parent_id": "1746dcc5-851c-4521-ab0d-90b718fe1c00", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Categories" ] ], "subsections": [ "c9923275-f18f-4f7f-a4d6-1c7676fc7974", "bc4b19f0-3de6-4ed2-a629-7c65b363d13e", "94953bfc-98da-48d5-929a-9234a880c277", "02e0f100-b101-4d55-befe-97bd1fc68f90", "0125b55c-3269-42b4-a185-d625046ed66d" ], "title": "Categories" }, { "cite_extract_rate": 0.11111111111111101, "cites": [ 7231 ], "content": "\\label{Eigenvalue}\n\\begin{figure}[!htbp]\n\t\\centering\n\t\\includegraphics[width=1\\linewidth]{SVD.eps}\n\t\\caption{An illustration of singular value decomposition (SVD). 
$k$ is the rank of matrix $D_{\mathcal{SVD}}$.}\n\t\label{figure3}\n\end{figure}\n\begin{figure}[!htbp]\n\t\centering\n\t\includegraphics[width=1\linewidth]{dhpe.eps}\n\t\caption{The model utilizes eigen decomposition to construct the high-order proximity matrix of the network, and then dynamically updates the node representations of the network in the next time step based on matrix perturbation theory.}\n\t\label{figure4}\n\end{figure}\nMatrix factorization is the most widely used dimensionality reduction method in the field of network embedding. Singular value decomposition (SVD) is a representative matrix factorization algorithm. The SVD of a matrix $D_{\mathcal{SVD}}\in R^{m \times n}$ is defined as:\n\begin{equation}\nD_{\mathcal{SVD}}=U \Sigma Q^{T}\n\end{equation}\nwhere $U \in R^{m \times m}$, $Q \in R^{n \times n}$, and $\Sigma \in R^{m \times n}$. $\Sigma$ is a diagonal matrix, in which each element on the main diagonal is a singular value and all other elements are 0. Besides, the matrices $U$ and $Q$ satisfy $U^{T}U=I$ and $Q^{T}Q=I$. The definition of SVD is illustrated in Fig. \ref{figure3}.\nA traditional but effective embedding method is to decompose the adjacency matrix by SVD and the attribute matrix by eigen decomposition in complex networks, so as to construct the embedding representation of each node. From the perspective of matrix factorization, the dynamic evolution of a network amounts to continual changes of the original matrix, where the network embedding is updated according to matrix perturbation theory. The overall perspective of eigenvalue factorization in dynamic network embedding is sketched in Fig. \ref{figure4}.\nLi \textit{et al.} proposed a dynamic attributed network embedding model which represents dynamic networks as snapshots and considers situations in which the adjacency and attribute matrices of the networks evolve over time. 
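As a concrete illustration of this SVD building block, the following sketch embeds a small toy graph via a truncated SVD of its adjacency matrix (numpy only; the toy graph and the scaling choice $U_d\sqrt{\Sigma_d}$ are illustrative assumptions, not the algorithm of any specific paper):

```python
import numpy as np

def svd_embed(X, d):
    """Embed nodes of a graph with adjacency matrix X into d dimensions
    via truncated SVD: X ~ U_d S_d Q_d^T, with embedding U_d * sqrt(S_d)."""
    U, s, Qt = np.linalg.svd(X)
    return U[:, :d] * np.sqrt(s[:d])

# Toy undirected graph: two triangles joined by one bridge edge.
X = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    X[i, j] = X[j, i] = 1.0

Y = svd_embed(X, d=2)
print(Y.shape)  # (6, 2): one 2-dimensional vector per node
```

Keeping only the top-$d$ singular pairs gives the best rank-$d$ reconstruction of $X$ in the Frobenius norm, which is exactly why dynamic methods try to maintain these factors incrementally instead of recomputing them per snapshot.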
The dynamic network representation at time $t$ is updated based on the matrix variation. For time $t=1$, the model proposes an embedding algorithm combining the matrices $D$ and $X$ as a warm start. The formulas for updating the embedded eigenvalues and eigenvectors are as follows:\n\begin{equation}\n\begin{small}\n\begin{array}{c}\n{\Delta \lambda_{i}=x_{i}^{\prime} \Delta L_{X} x_{i}-\lambda_{i} x_{i}^{\prime} \Delta \Sigma_{X} x_{i}}\n\\ {\!\Delta x_{i}\!=\!-\!\frac{1}{2} x_{i}^{\prime} \Delta \Sigma_{X} x_{i} x_{i}\!+\!\sum_{j\!=\!2, j \neq i}^{k\!+\!1}\left(\frac{x_{j}^{\prime} \Delta L_{X} x_{i}\!-\!\lambda_{i} x_{j}^{\prime} \Delta \Sigma_{X} x_{i}}{\lambda_{i}-\lambda_{j}}\right) x_{j}}\n\end{array}\n\end{small}\n\end{equation}\nwhere $X^{t} \in \mathbb{R}^{n \times n}$ is the adjacency matrix of the attributed network at snapshot $t$ and $\Sigma_{X}$ is the diagonal matrix with $\mathrm{\Sigma}_{X}^{t}(i, i)= \sum_{j=1}^{n} X^{t}(i, j)$; $x_{1}, x_{2}, ... x_{n}$ are the eigenvectors of the corresponding eigenvalues, where $0=\lambda_{1} \leq \lambda_{2} \leq \ldots \leq \lambda_{n}$. $L_X^{t}=\Sigma_X^{t}-X^{t}$, and $L_X$ is a Laplacian matrix. Moreover, the network representation $Y^{ t+1 }$ is computed by using a correlation maximization method.\nCui \textit{et al.} devised an embedding model which can capture the high-order proximity shown in Fig. \ref{figure5}, which extends the asymmetric transitivity preserving graph embedding model to dynamic network processing. Based on generalized singular value decomposition (GSVD) and matrix perturbation theory , the node representation at time $t+1$ is rapidly and effectively updated when the network structure changes in the next time step (adding/removing nodes/edges) while preserving the high-order proximity. 
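The first-order perturbation update underlying these models can be sanity-checked numerically. The sketch below uses the simplest symmetric case, $\Delta\lambda_{i} \approx x_{i}^{T}\,\Delta M\,x_{i}$ (toy random matrices; this is the textbook perturbation formula, not the full update rules of the papers above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric "old" snapshot matrix and a small symmetric perturbation.
M = rng.standard_normal((8, 8))
M = (M + M.T) / 2
dM = 1e-3 * rng.standard_normal((8, 8))
dM = (dM + dM.T) / 2

lam, X = np.linalg.eigh(M)              # eigen-pairs of the old snapshot
lam_exact = np.linalg.eigvalsh(M + dM)  # full recomputation from scratch

# First-order update: lambda_i' ~ lambda_i + x_i^T dM x_i (no recomputation).
lam_approx = lam + np.sum(X * (dM @ X), axis=0)

err = np.abs(lam_exact - lam_approx).max()
```

The residual `err` is second order in the perturbation, which is why these incremental updates stay accurate as long as each snapshot-to-snapshot change is small.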
Remarkably, the Katz similarity is formulated as a generalized SVD problem to model the high-dimensional structure of a dynamic network, and then the node representation of the given network at time $t+1$ is dynamically updated by applying matrix perturbation theory.\nLet $X$ denote the high-order adjacency matrix of the dynamic network, where $X^{Katz}\!=\!M_{a}^{-1} M_{b}$. The Katz index, one of the most widely used measures of high-order proximity, is utilized as $X$. The formulas for updating the embedded eigenvalues and eigenvectors are as follows:\n\begin{equation}\n\begin{array}{c}{\Delta \lambda_{i}=\frac{x_{i}^{T} \Delta M_{b} x_{i}-\lambda_{i} x_{i}^{T} \Delta M_{a} x_{i}}{x_{i}^{T} M_{a} x_{i}}} \\ {\Delta x_{i}=\sum_{j=1, j \neq i}^{d} \alpha_{i j} x_{j}}\end{array}\n\end{equation}\nwhere $\left\{\lambda_{i}\right\}$ are the eigenvalues of $X$ in descending order, $x_{1}, x_{2}, ... x_{n}$ denote the corresponding eigenvectors, and $d$ is the embedding dimension. As for $\alpha_{i j}$, it is a coefficient matrix which indicates the contribution of $x_{j}$ to $\Delta x_{i}$. Then the final embedded representation $Y$ is obtained by minimizing the distance between $X$ and $Y{Y'}^{T}$.\n\begin{figure*}[!htbp]\n\t\t\centering\n\t\t\includegraphics[width=1\linewidth]{usedynamic.eps}\n\t\t\label{dimension}\n\t\t\caption{An illustration of eigenvalue factorization in dynamic network embedding. At snapshot $t$, the model generates spectral embedding results by integrating the network structure $A$, and obtains the static embeddings $Y$. Afterwards, at snapshot $t + 1$, the topological change $\Delta A$ indicates the dynamic information of the network structure. 
The model integrates matrix perturbation theory for updating $Y$ to a new embedding $Y^{T}$.}\n\t\t\label{figure5}\n\end{figure*}\nChen \textit{et al.} proposed two online algorithms to track the top eigen-pairs in a dynamic network, which are capable of tracking the important network parameters determined by certain eigen-functions. Besides, methods for attributing the changes of these eigen-functions at each timestamp are proposed. Inspired by this, an eigen-function can be utilized to map the eigen-pairs of the network to an attribute matrix:\n\begin{equation}\n\t\mathrm{f} :\left(\Lambda_{k}, Y_{k}\right) \rightarrow \mathbb{R}^{x}(x \in \mathbb{N})\n\end{equation}\nwhere $\Lambda_{k}$ is the eigen-pair matrix transformed from the adjacency matrix $X$, and $Y_{k}$ denotes the embedding matrix of the given network. After that, through the method of , the eigen-function can be estimated by:\n\begin{equation}\n\tf\left(\left(\Lambda_{k}, Y_{k}\right)\right)=\triangle(G)=\frac{1}{6} \sum_{i=1}^{k} \lambda_{i}^{3}\n\t\label{fast.1}\n\end{equation}\nwhere $\triangle(G)$ is the number of triangles in the snapshot $G^{t}$, and $\lambda_{i}$ is the $i$-th eigenvalue in $\Lambda_{k}$.\nAhmed \textit{et al.} introduced a technology based on non-negative matrix factorization (NMF) to extract latent features which can strengthen the performance of the link prediction task on dynamic networks. This model focuses on the application of link prediction, and defines a new iterative rule to construct matrix factors with noteworthy network characteristics. 
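The triangle-count eigen-function is easy to verify: for an undirected graph with adjacency matrix $X$, $\frac{1}{6}\sum_{i}\lambda_{i}^{3}=\mathrm{trace}(X^{3})/6$ equals the number of triangles. A minimal numpy check on a toy graph (hypothetical data):

```python
import numpy as np

# Toy undirected graph: a triangle (0,1,2) plus a pendant edge (2,3).
X = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (0, 2), (2, 3)]:
    X[i, j] = X[j, i] = 1.0

lam = np.linalg.eigvalsh(X)
triangles_spectral = np.sum(lam ** 3) / 6.0   # eigen-function estimate
triangles_trace = np.trace(X @ X @ X) / 6.0   # direct closed-walk count

print(round(triangles_spectral), round(triangles_trace))  # both equal 1
```

Tracking only the top-$k$ eigen-pairs therefore gives an online approximation of such spectral quantities without re-diagonalizing each snapshot.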
In addition, it shows how to foster improvements in high prediction accuracy by adopting the fusion of time and structure information for training methods, and the potential NMF characteristics can effectively express the network dynamic rather than static representation.\nHowever, approaches based on matrix factorization typically operates on more than one $n \\times n$ matrices with a large dimension of $n \\times n$, which suffers from high time complexity. To tackle this obstacle, Zhang \\textit{et al.} proposed TIMERS to theoretically explore a lower bound of SVD minimum loss and replace the bound with the minimum loss on dynamic networks based on matrix perturbation. TIMERS optimally sets the restart time of SVD in order to reduce the error accumulation in time. Furthermore, driven by treating the maximum tolerated error as a threshold, TIMERS triggers SVD to restart automatically when the margin exceeds rated threshold. Besides, margin is a value between the reconstruction loss of incremental updates and the minimum loss.", "id": "c9923275-f18f-4f7f-a4d6-1c7676fc7974", "level": "subsection", "origin_cites_number": 9, "parent_id": "851ef6e4-9ca8-4f4c-bb96-ebaf5e57bbd4", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Categories" ], [ "subsection", "Embedding Based on Matrix Factorization" ] ], "subsections": [], "title": "Embedding Based on Matrix Factorization" }, { "cite_extract_rate": 0.461538461538461, "cites": [ 1010, 479, 8978, 282, 218, 7245 ], "content": "\\label{Skip-Gram}\nThe great prevalence of Word2Vec has facilitated the well-known Skip-Gram model, through which we can predict the context from the input. Perozzi \\textit{et al.} suggested that nodes are converted into word vectors and a random walk sequence is viewed as sentences. Since then, a great deal of static network embedding algorithms are proposed based on this model. Fig. 
\ref{figure6} presents a synthesis framework which integrates temporal information into network embedding methods.\n\begin{figure*}[!htbp]\n\t\centering\n\t\includegraphics[width=1\linewidth]{deepwalk.eps}\n\t\caption{In the dynamic network, each edge is mapped to a corresponding timestamp. A valid walk is represented by a series of nodes connected by edges with increasing timestamps. The mapping function $\Phi$ is a matrix, which maps every node into a $d$-dimensional vector with a total of $\left|V\right|\times d$ parameters that can be optimized by the Skip-Gram model or other models.}\n\t\label{figure6}\n\end{figure*}\t\nThe following description outlines extension approaches. The classic node2vec inherits the advantages of DeepWalk and devises a biased random walk to learn node representations, while LINE preserves the first-order and second-order proximities by utilizing multiple loss functions and improves performance by concatenating the resulting embeddings. Du \textit{et al.} proposed a decomposable objective function on the basis of their previous LINE model to build a dynamic network embedding framework, where only a subset of nodes is updated in each iteration and the representations remain stable across iterations compared with retraining the whole model. Moreover, the authors also devised a node selection mechanism for updates, greatly improving the efficiency of iterative updating. Experiments on the multi-label node classification task perform well on several real networks. Similarly, Sajjad \textit{et al.} proposed an efficient unsupervised network representation embedding method based on random walks that also divides networks into snapshots. In order to efficiently calculate the embedding results in the next snapshot, which is impacted by the previous snapshot, this paper factorizes the process of generating network representations into two steps. 
Firstly, the authors proposed a random walk update algorithm responsible for updating the set of walk sequences, with the goal of making the updated set of random walks statistically indistinguishable from a set of random walks generated from scratch on the new graph. Secondly, the Skip-Gram model is modified to update the node representations based on the random walks.\nStatic network embedding usually obtains a training corpus through random walks, and then feeds the corpus to the Skip-Gram model. However, such random walks fail to take the chronological order into account, so the edges appear in arbitrary order. For instance, a message propagated in a network is directed, but an unconstrained random walk may result in a reversed corpus. To address this, Nguyen \textit{et al.} expressed the dynamic network as a continuous-time network. Each edge has multiple timestamps, indicating the times at which the connection exists. On this basis, each random walk must be constrained to conform to the temporal order of the edges, so as to integrate the temporal information of the network into the random walk sequences. Theoretically, the temporal random walk sequences are a subset of the non-temporal ones. From the view of information theory, the addition of temporal information reduces the uncertainty of random walks and enables this method to outperform the DeepWalk and node2vec algorithms on traditional tasks.\nDynnode2vec modifies node2vec by regarding the prior embedding vectors as the initialization of a Skip-Gram model and employing the random walks in network evolution to update the Skip-Gram model based on previous timestamp information. Dynnode2vec modifies the static node2vec at time $t$ by capturing historical information from time step $t-1$. In dynnode2vec, only the evolving node embeddings are generated from sets of random walk sequences, rather than considering all nodes in the current timestamp. 
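A timestamp-respecting walk of the kind used by these continuous-time and evolving-walk methods can be sketched in a few lines (a simplified illustration; the edge list and function names are hypothetical):

```python
import random

def temporal_walk(edges, start, length, seed=0):
    """Random walk that only follows edges with non-decreasing timestamps.
    `edges` maps a node to a list of (neighbor, timestamp) pairs."""
    rng = random.Random(seed)
    walk, t = [start], float('-inf')
    for _ in range(length - 1):
        # Only edges not earlier than the last traversed timestamp are valid.
        candidates = [(v, ts) for v, ts in edges.get(walk[-1], []) if ts >= t]
        if not candidates:
            break
        v, t = rng.choice(candidates)
        walk.append(v)
    return walk

# Toy temporal graph: 0 -(t=1)- 1 -(t=2)- 2, plus a stale edge 1 -(t=0)- 3.
edges = {0: [(1, 1)], 1: [(2, 2), (3, 0)], 2: [], 3: []}
walk = temporal_walk(edges, start=0, length=3)
print(walk)  # [0, 1, 2]: the stale edge to node 3 is never used
```

Because every corpus sentence respects edge chronology, the Skip-Gram step then learns contexts that could actually co-occur in time.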
Hence, new random walks sampled from locally changed regions can efficiently update the embedding vectors during network evolution.\nBesides, the dynamic network embedding framework proposed by Zuo \textit{et al.} integrates Hawkes-process-based temporal network embedding with the Skip-Gram model. This technology models the evolution process of nodes through the neighbourhood formation sequence, and then captures the influence of historical neighbours on the current neighbourhood formation sequence by using a Hawkes process.\nMoreover, Liang \textit{et al.} applied dynamic network embedding to user profiling in Twitter , and proposed a dynamic embedding model of users and words (DUWE), coupled with a streaming keyword diversification model (SKDM). Particularly, DUWE aims at capturing the semantic information of users over time with the Skip-Gram model and attempts to maximize its log likelihood:\n\begin{equation}\n\log p\left(\mathrm{n}^{ \pm} | \mathrm{V}\right)=\sum_{k, l=1}^{V} n_{k, l}^{+} \log s\left(\mathrm{v}_{k}^{\top} \mathrm{v}_{l}\right)+n_{k, l}^{-} \log s\left(-\mathrm{v}_{k}^{\top} \mathrm{v}_{l}\right)\n\end{equation}\nwhere $n^{ \pm}=\left(n^{+}, n^{-}\right)$, $n^{+}, n^{-} \in \mathbb{R}^{V \times V}$ are the positive and negative indicator matrices for all word pairs with $n_{k, l}^{+}$ and $n_{k, l}^{-}$ being their elements respectively. $\left(v_{k}, v_{l}\right)$ is a word pair that can be observed in documents of a corpus. $s(x)$ is the sigmoid function $s(x)=\frac{1}{1+\exp (-x)}$ with $s(-x)=1-s(x)$. $V=\left\{v_{k}\right\}_{k=1}^{V}$ denotes the embeddings of all the words in the vocabulary. DUWE is based on the Skip-Gram model, which is extended by employing a Kalman filter to process the evolution information of users and words, aiming to model the dynamic user and word embeddings. 
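A log likelihood of this sigmoid-based form can be evaluated directly for toy embeddings; the sketch below implements only the objective itself (hypothetical count matrices and embeddings, not the DUWE training procedure or its Kalman-filter extension):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def skipgram_log_likelihood(V, n_pos, n_neg):
    """log p(n± | V) = sum_{k,l} n+_{kl} log s(v_k.v_l) + n-_{kl} log s(-v_k.v_l)."""
    scores = V @ V.T  # pairwise inner products v_k^T v_l
    return np.sum(n_pos * np.log(sigmoid(scores)) +
                  n_neg * np.log(sigmoid(-scores)))

rng = np.random.default_rng(1)
V = 0.1 * rng.standard_normal((5, 3))    # 5 "words", 3-dimensional embeddings
n_pos = rng.integers(0, 4, size=(5, 5))  # hypothetical positive co-occurrence counts
n_neg = rng.integers(0, 4, size=(5, 5))  # hypothetical negative-sample counts

ll = skipgram_log_likelihood(V, n_pos, n_neg)
```

Maximizing this quantity pushes frequently co-occurring pairs toward large inner products and negative pairs toward small ones, which is the static core that dynamic variants extend over time.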
In DUWE, the authors utilized the variance of the transformation kernel embedded by all users to represent the diffusion process of the representation in users and words evolution.", "id": "bc4b19f0-3de6-4ed2-a629-7c65b363d13e", "level": "subsection", "origin_cites_number": 13, "parent_id": "851ef6e4-9ca8-4f4c-bb96-ebaf5e57bbd4", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Categories" ], [ "subsection", "Embedding Based on Skip-Gram" ] ], "subsections": [], "title": "Embedding Based on Skip-Gram" }, { "cite_extract_rate": 0.25, "cites": [ 7232 ], "content": "\\label{Auto-encoder}\nAuto-encoder is an artificial neural network that can construct the efficient representation of the input data in an unsupervised manner, which is suitable for dimensionality reduction. The dimension of the learned representation is generally far less than that of input data. In specific, auto-encoder is mainly carried out into two steps, encoder and decoder. In most auto-encoder, the hidden layers of the auto-encoder architecture is realized through the neural network.\n\\begin{figure*}[!t]\n\t\\centering\n\t\\includegraphics[width=1\\linewidth]{DDNEDYNGEM.eps}\n\t\\caption{The dynamic network is divided into snapshots. During the training process, the weight parameters of the previous auto-encoder network are retained to initialize the next training network. $Y$ is the set of network representation, and $X$ is the adjacency matrix of networks.}\n\t\\label{figure7}\n\\end{figure*}\t\nSDNE is a classical network embedding model based on auto-encoder. Due to its unsupervised nature and excellent performance, the embedding model can be extended to dynamic networks straightforwardly. For instance, Goyal \\textit{et al.} proposed a dynamic embedding model DynGEM, inspired by SDNE, which also represents dynamic networks as snapshots. 
It retains the embedding learned at the previous time step, so that the embedding model at the next time step can directly inherit the parameters of the previously trained model. It is worth noting that the numbers of network nodes and edges have an impact on the architecture of the embedding model. Therefore, this model employs heuristic information to adjust the overall structure of SDNE according to the new network structure, which makes it easy to apply to the new network. A simplified version of the DynGEM model is shown in Fig. \ref{figure7}.\nHowever, another work, dyngraph2vec, proposed by Goyal \textit{et al.} observes that the DynGEM framework and other dynamic embedding algorithms only capture the information of the previous step while ignoring the richer historical information. In this case, dyngraph2vec takes the network structure information at time steps $t, t+1, t+2, \ldots, t+l-1$ into account when computing the embedding matrix at time $t+l$. To be specific, by expressing the time-ordered network sequence as a corpus, it leverages an RNN to encode the historical information for semi-supervised learning and gives a simple update rule for the auto-encoder. However, dyngraph2vec employs three models to embed the historical information, which leads to a high level of complexity. Additionally, the NetWalk model proposed by Yu \textit{et al.} not only incrementally generates network representations during network evolution, but also detects network anomalies in real time. In particular, the node representations can be obtained through a number of walk sequences extracted from the dynamic network. It unifies local walks with the hidden layer of a deep auto-encoder to yield node representations; thus, the resulting embedding is capable of reconstructing the original network with low loss. Moreover, the learning process adapts to dynamic changes by utilizing stochastic sampling. 
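The auto-encoder principle these models share — compress each adjacency row to a low-dimensional code and decode it back — can be sketched with a linear encoder/decoder trained by gradient descent (a toy sketch, not SDNE, DynGEM or NetWalk; passing a previous snapshot's weights as initialization mirrors DynGEM's parameter inheritance):

```python
import numpy as np

def train_autoencoder(X, d, steps=500, lr=0.05, W=None, seed=0):
    """Linear auto-encoder: minimize ||X W W^T - X||_F^2 over W.
    Supplying a previous snapshot's W warm-starts the next snapshot."""
    rng = np.random.default_rng(seed)
    if W is None:
        W = 0.1 * rng.standard_normal((X.shape[1], d))
    for _ in range(steps):
        R = X @ W @ W.T - X                       # reconstruction residual
        grad = 2.0 * (X.T @ R @ W + R.T @ X @ W)  # gradient of the loss w.r.t. W
        W -= lr * grad / (np.linalg.norm(grad) + 1e-12)  # normalized step
    return W

# Snapshot: two disconnected triangles (hypothetical toy data).
X = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    X[i, j] = X[j, i] = 1.0

W = train_autoencoder(X, d=2)
Y = X @ W                                          # node embeddings (encoder codes)
loss = np.linalg.norm(X @ W @ W.T - X)
```

A deep model replaces the single linear map with stacked non-linear layers, but the snapshot-to-snapshot workflow — train, inherit weights, retrain briefly — is the same.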
Then the model uses a dynamic clustering model to flag anomalous vertices or edges.", "id": "94953bfc-98da-48d5-929a-9234a880c277", "level": "subsection", "origin_cites_number": 4, "parent_id": "851ef6e4-9ca8-4f4c-bb96-ebaf5e57bbd4", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Categories" ], [ "subsection", "Embedding Based on Auto-encoder" ] ], "subsections": [], "title": "Embedding Based on Auto-encoder" }, { "cite_extract_rate": 0.625, "cites": [ 7241, 302, 7236, 8113, 242 ], "content": "\\label{Applications}\nRecently, neural networks have attracted extensive attention of researchers in network embedding domain. By virtue of the powerful representation ability, neural networks based network embedding models have achieved outstanding results.\nSince neural networks have shown impressive results on dynamic embedding, an inductive deep representation learning framework called DyRep , is decomposed into two dynamic processes on the network: association process and communication process. The former handles the change of network topology, while the latter deals with the change of network dynamics. In this model, event is defined to uniformly represent the changes of the above two processes. Furthermore, DyRep views the change of node representations as an intermediate bridge between the two processes mentioned above, so as to continuously update the node representation according to new events. When an event occurs, the new representation of the node is aggregated by the event-related neighbour nodes. In addition, the model which aims to aggregate neighbours of nodes dynamically adopts the attention mechanism.\nAfter that, Chen \\textit{et al.} proposed a novel optimization method on neural networks, which assigns different sensitive weights for samples and selects the samples via their weights when computes gradients. 
The related samples are updated in a diffusion strategy when the embedding of the selected sample is reconstructed. Meanwhile, in order to decrease the computational cost of selecting samples, the authors presented a nested segment tree for weighted sample selection. This model makes dynamic embedding algorithms better suited to analyzing highly dynamic and recency-sensitive data, and avoids the complicated optimization process of traditional approaches.\nAnother example using a neural network structure is Know-Evolve , which proposes a deep recurrent architecture for generating non-linear network representations over time. It models multi-relational timestamped edges to reflect the evolving process. Besides, the information captured by the node embeddings depends only on the edge-level information. Sankar \textit{et al.} proposed DySAT, a dynamic self-attention network embedding model. Specifically, DySAT constructs node representations by applying a self-attention mechanism along two dimensions: structural neighbors and temporal dynamics. Through the dynamic self-attention architecture, it provides dynamic node representations which capture both structural properties and temporal evolutionary patterns.\nRecurrent neural networks (RNNs) are commonly utilized for continuous sequence learning by capturing the dynamic information in time series. Motivated by the insights of RNNs, several dynamic network embedding methods combine RNNs with other neural networks to learn the node representations. For example, EvolveGCN contains two components, an RNN component and a graph convolutional network (GCN) component. The RNN model updates the weight matrix for the next time step, and the GCN model uses the updated weight matrix and the current node embedding matrix to compute the subsequent node embedding matrix. 
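The GCN propagation step used inside such models is a single rule, $H^{(l+1)}=\sigma(\hat{A} H^{(l)} W^{(l)})$ with $\hat{A}$ the renormalized adjacency; a minimal sketch follows (toy path graph and hypothetical random weights — EvolveGCN would additionally let the RNN evolve $W$ across snapshots):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(A_hat H W), A_hat = D^{-1/2}(A+I)D^{-1/2}."""
    A_tilde = A + np.eye(A.shape[0])            # add self-loops
    deg = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(deg ** -0.5)
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # symmetric normalization
    return np.maximum(0.0, A_hat @ H @ W)       # propagate, transform, ReLU

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)       # 4-node path graph
H = np.eye(4)                                    # one-hot input features
W = rng.standard_normal((4, 2))                  # hypothetical layer weights
Z = gcn_layer(A, H, W)                           # one 2-dim vector per node
```

In an EvolveGCN-style model, `W` would be the hidden state of an RNN rather than a fixed parameter, so only the weights need to be carried across time steps.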
EvolveGCN focuses on the evolution of the GCN parameters at every time step rather than the node representations, which makes the model adaptive and flexible for preserving network dynamics.\nIn addition, BurstGraph uses a framework of two RNN-based variational autoencoders to model graph evolution in terms of the vanilla evolution and bursty links' occurrences at every time step. Furthermore, the encoders share the same GraphSAGE to capture node attributed information. BurstGraph encodes both the structural evolution and attributed evolution in the latent embedding vectors, which can fully exploit the recurrent and consistent patterns in dynamic networks.", "id": "02e0f100-b101-4d55-befe-97bd1fc68f90", "level": "subsection", "origin_cites_number": 8, "parent_id": "851ef6e4-9ca8-4f4c-bb96-ebaf5e57bbd4", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Categories" ], [ "subsection", "Embedding Based on Neural Networks" ] ], "subsections": [], "title": "Embedding Based on Neural Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{Other}\nZhou \textit{et al.} presented DynamicTriad , which preserves the structural information of the network and incorporates the time-varying characteristics of the network into the embedding vectors. Through the analysis of triangular closure problems, the evolution of a dynamic network is devised to describe the change of the network. In order to express some rapidly changing nodes and reflect the evolution of the structure, the triangle closure problem is used to reflect network dynamics. Specifically, for any three vertices, if only two pairs of them are connected, they form an open triangle; in contrast, if all three pairs are connected, they form a closed triangle. One of the characteristics of network structure evolution is that an open triangle may evolve into a closed triangle over time. 
By quantifying the probability of development from an open triad to a closed triad, the embedding vector of each vertex can be obtained at different time steps. Inspired by this idea, DynamicTriad constructs the loss function to predict triangular changes:\n\begin{equation}\n\begin{small}\n\tL_{t r}^{t}=-\sum_{(i, j) \in S_{+}^{t}} \log P_{t r_{+}}^{t}(i, j)-\sum_{(i, j) \in S_{-}^{t}} \log P_{t r_{-}}^{t}(i, j)\n\end{small}\n\end{equation}\nwhere the first term corresponds to the probability of an open triangle closing, and the second term to the probability that it stays open. The set $S_{+}^{t}$ is a collection of node pairs with no connection at the current time $t$ that will be connected at the next time step. The set $S_{-}^{t}$ collects node pairs that have no edge at either the current time or the next time step. Generally, the network is smooth, and the vector representations of the network vary little between consecutive timestamps. Therefore, combined with a smoothness regularization term, the node representations can be trained.\nThe streaming graph neural network (DGNN) proposed by Ma \textit{et al.} examines in a more detailed manner the relationship between the time at which an edge appears and its effect on network embedding. It supposes that when new edges appear, both ends of the edges and the first-order neighbors of the endpoints have a high probability of changing. However, the effect on the neighbors is related to the time difference. Neighbors who interact with the endpooints in the recent period are sensitive to new changes, while neighbors who interact over a relatively long period of time in the past are less affected. 
The endpoints and their first-order neighbors are updated by two separate modules, and a temporal-information-enhanced long short-term memory model is adopted as the basic framework of the updating modules to process information at different time intervals.", "id": "0125b55c-3269-42b4-a185-d625046ed66d", "level": "subsection", "origin_cites_number": 2, "parent_id": "851ef6e4-9ca8-4f4c-bb96-ebaf5e57bbd4", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Categories" ], [ "subsection", "Other Embedding Methods" ] ], "subsections": [], "title": "Other Embedding Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{Applications}\nNetwork embedding can improve the performance of various network mining tasks in both time and space. This section therefore gives a detailed description of several common network mining tasks that dynamic network embedding can benefit. We summarize the datasets and network mining tasks which are commonly used in proposed dynamic network embedding methods. 
Table \ref{Tab 2} summarizes the statistics of each dataset used to perform different network mining tasks in dynamic network embedding experiments.\n\renewcommand\arraystretch{1.5}\n\begin{table*}[!htbp]\n\t\setlength{\abovecaptionskip}{1pt}\n\t\caption{A summary of datasets and network mining tasks}\label{Tab 2}\n\t\centering\n\scriptsize\n\t\begin{tabular}{|c|c|c|c|c|}\n\t\t\hline\n\t\tDatasets & Node classification & Link prediction & Node clustering & Network visualization\\ \hline\n\t\tAcademic & \checkmark & \checkmark & & \\ \hline\n\t\tAutonomous Systems (AS) & & \checkmark & & \\ \hline\n\t\tCollegeMsg & & \checkmark & & \\ \hline\n\t\tDBLP & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline\n\t\tEmail-Eu-core & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline\n\t\tEnron & & \checkmark & & \checkmark \\ \hline\n\t\tEpinions & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline\n\t\tF2f-Resistance & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline\n\t\tHEP-TH & & \checkmark & & \\ \hline\n\t\tInfectious & & \checkmark & & \\ \hline\n\t\tInternet & & \checkmark & & \\ \hline\n\t\tLoan & \checkmark & \checkmark & & \\ \hline\n\t\tMobile & \checkmark & \checkmark & & \\ \hline\n\t\tSynthetic Data (SYD) & & \checkmark & & \\ \hline\n\t\tTwitter & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline\n\t\tWikipedia & & \checkmark & & \\ \hline\n\t\end{tabular}\n\end{table*}", "id": "73e83d02-6458-454a-98d9-080bd91963aa", "level": "section", "origin_cites_number": 0, "parent_id": "1746dcc5-851c-4521-ab0d-90b718fe1c00", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Applications" ] ], "subsections": [ "b772ffcf-5c49-4b89-a71b-05c656327103", "0fa187eb-30fb-4c74-8a34-223749f58e99", "b9eb28d0-9e8d-44fc-871f-519afd43c428", "2e84eb5e-e09a-4341-ba88-a600057a2850", 
"66747f18-b47f-48e2-84a0-b5dc7e84640c" ], "title": "Applications" }, { "cite_extract_rate": 0, "cites": [], "content": "This section describes the datasets commonly used in dynamic network embedding, which can be used for dynamic network analysis. For each dataset, specific preprocessing methods are adopted for different models. This paper lists the characteristics and a detailed description of the datasets commonly used in dynamic networks, which include synthetic dynamic graphs and real-world dynamic graphs. We summarize several commonly used dynamic datasets as follows.\n\begin{itemize}\n\item \textbf{Academic} \footnote{ http://www.aminer.org, an academic search engine\label{web}}.\nIt is an academic network composed of 51060 researchers as the vertices and 794552 coauthor relations as the edges. The size of a snapshot is 2 years. In each snapshot, an edge is created between two users if they have research cooperation projects. Each researcher has its specific community label.\n\item \textbf{Autonomous Systems (AS)} . The dataset is a communication network where each edge records successful communication; it contains 733 snapshots collected from 1997 to 2000, which can be divided by year.\n\item \textbf{CollegeMsg} \footnote{ https://snap.stanford.edu/data/CollegeMsg.html\label{web}}.\nThe dataset consists of private messages sent on the University of California's online social network. Users initiate dialogues with other users in the network based on profile information. An edge $(u, v, t)$ indicates that user $u$ sends a message to user $v$ at time $t$. Within 193 days, it records the information of 1899 nodes and 59835 edges, which is divided into three snapshots.\n\item \textbf{DBLP} \footnote{ http://snap.stanford.edu/data/com-DBLP.html\label{web}}. It is an academic website that counts the number of authors who have published more than two papers. 
From 2001 to 2016, the dataset records information on 317080 authors and 1049866 edges, and each author has a research field label.\n\item \textbf{Email-Eu-core} \footnote{ http://snap.stanford.edu/data/email-Eu-core-temporal.html\label{web}}.\nThe dataset is generated from the e-mail network of a large European research institute. It reflects anonymized e-mail exchanges between members of the institute, who belong to 42 departments. The e-mails only represent frequent daily communication between the core members of the organization. Moreover, it consists of 61046 e-mails and the time span is 803 days.\n\item \textbf{ENRON} . The dataset contains emails among employees in Enron Inc. from Jan 1999 to July 2002.\n\item \textbf{Epinions} \footnote{ http://www.trustlet.org/wiki/Epinions\_datasets\label{web}}. Epinions is a goods review website where users post their comments and opinions about various goods. Besides, users themselves can build trust relationships, which is the basis for seeking advice between users. The dataset is a real-world dynamic network and the data has 16 different time stamps.\n\item \textbf{F2f-Resistance} \footnote{ http://snap.stanford.edu/data/comm-f2f-Resistance.html\label{web}}.\nThis dataset is weighted, directed and temporal; it contains networks extracted from 62 subgraphs, with 451 nodes and 3,126,993 edges. The average temporal length per subgraph is 2,290 seconds, which can be divided into 4 snapshots.\n\item \textbf{HEP-TH} . This dataset records the collaboration among authors in academic conferences, with papers collected from January 1993 to April 2003.\n\item \textbf{Infectious} \footnote{ http://konect.uni-koblenz.de/networks/sociopatterns-infectious\label{web}}. This network records the interaction of visitors at an exhibition. 
The formation of each edge requires at least 20 seconds of face-to-face communication between visitors, with exact time stamps. It only stores visitors who communicate frequently.\n\item \textbf{Internet} \footnote{ http://konect.uni-koblenz.de/networks/topology\label{web}}. This is the connection network of the internet autonomous systems. The nodes denote autonomous systems and edges represent the connections between autonomous systems. Multiple edges may connect two nodes, each of which represents an individual connection in time. Edges are annotated with the time point of the connection.\n\item \textbf{Loan}.\nThe structure of this network is similar to that of the mobile dataset, which is provided by ppdai3, and contains 1603712 call records among 200000 registered ppdai users over 13 months. The size of a snapshot is set to one month.\n\item \textbf{Mobile}.\nIt is a mobile network composed of more than 2 million call records provided by China Telecom. In 15 days, 340751 nodes and 2200203 edges were preserved. The user is regarded as the vertex and the snapshot is defined as a continuous one-day period without overlapping. In each snapshot, if one user calls another user, the connection between user vertices will be recorded in the network.\n\item \textbf{Synthetic Data (SYD)}. Synthetic dynamic graphs are usually generated by the stochastic block model . Snapshots of synthetic data are generated by setting the cross-block connectivity probability and the in-block connectivity probability.\n\item \textbf{Twitter} \footnote{ https://bitbucket.org/sliang1/uct-dataset/get/UCT-Dataset.zip\label{web}}.\nThe dataset contains basic information of 1375 Twitter users, as well as blog information of all registered users as of May 31, 2015. 
The user's successful interaction information generated 3.78 million edges, each with a certain timestamp.\n\item \textbf{Wikipedia} \footnote{ http://snap.stanford.edu/data/wiki-talk-temporal.html\label{web}}.\nIt is a dataset from Wikipedia in which each edge represents communication between users and has a specific time label. The time span is 2320 days, including 1140149 registered users and 7833140 connections.\n\end{itemize}", "id": "b772ffcf-5c49-4b89-a71b-05c656327103", "level": "subsection", "origin_cites_number": 4, "parent_id": "73e83d02-6458-454a-98d9-080bd91963aa", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Applications" ], [ "subsection", "Datasets" ] ], "subsections": [], "title": "Datasets" }, { "cite_extract_rate": 0, "cites": [], "content": "Node classification aims to classify each node into corresponding categories and facilitate downstream applications, such as recommendation, targeted advertising, interest mining and interpersonal network building. The classification technique is used to label the nodes in the dynamic network, which is conducive to the in-depth analysis of the characteristics of the network structure and the extended study of the application. Node classification includes multi-class and multi-label classification. The former is conducted on datasets where each node is labeled by only one category, while the latter is conducted on datasets where each node may be labeled by more than one category. The process of node classification adopted for network embedding is generally decomposed into four steps. Firstly, a network embedding algorithm is applied to derive the embedding representations of networks. Secondly, the dataset with label information needs to be divided into training and testing sets. Then, train a classifier on the training set. Lastly, perform node classification on the testing set. 
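The four steps just described can be sketched end-to-end in plain Python. The random two-dimensional "embeddings", the toy labels and the simple one-nearest-neighbor classifier below are hypothetical stand-ins for a real embedding algorithm and classifier, not part of the surveyed methods.

```python
import random

def nearest_neighbor_classify(train_emb, train_labels, test_emb):
    """Steps 3-4: fit a 1-nearest-neighbor classifier on the training
    embeddings and predict a label for every testing embedding."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [train_labels[min(range(len(train_emb)),
                             key=lambda i: dist(train_emb[i], e))]
            for e in test_emb]

random.seed(0)
# Step 1: derive node embeddings (random 2-D stand-ins, one cluster per class).
embeddings = [(random.gauss(c, 0.1), random.gauss(c, 0.1)) for c in (0, 0, 0, 3, 3, 3)]
labels = ["a", "a", "a", "b", "b", "b"]
# Step 2: split the labeled nodes into training and testing sets.
train_emb = embeddings[:2] + embeddings[3:5]
train_labels = labels[:2] + labels[3:5]
test_emb = [embeddings[2], embeddings[5]]
# Steps 3-4: train the classifier and classify the held-out nodes.
predictions = nearest_neighbor_classify(train_emb, train_labels, test_emb)
print(predictions)  # the held-out nodes recover their cluster labels
```

Because the two synthetic clusters are far apart relative to their spread, the held-out nodes are assigned back to their own classes.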
The selectable classifiers include Liblinear , the nearest neighbor classifier , support vector machines (SVM) and so on. Micro-F1 and Macro-F1 are commonly utilized as the evaluation metrics for the node classification task; they integrate both the precision rate and recall rate of the classification model and are defined as follows:\n\begin{equation}\nMicro-F1= \frac{{2 \times P \times R}}{{ P+R}}\n\end{equation}\n\begin{equation}\nMacro-F1= \frac{{\sum\nolimits_{M \in C } {F1(M)}}}{{\left|C\right| }}\n\end{equation}\nwhere $F1(M)$ is the $F1$-measure of label $M$, and $C$ is the whole label set. Moreover, $P$ indicates the precision rate and $R$ indicates the recall rate.", "id": "0fa187eb-30fb-4c74-8a34-223749f58e99", "level": "subsection", "origin_cites_number": 3, "parent_id": "73e83d02-6458-454a-98d9-080bd91963aa", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Applications" ], [ "subsection", "Node Classification" ] ], "subsections": [], "title": "Node Classification" }, { "cite_extract_rate": 0, "cites": [], "content": "The link prediction task aims to predict whether there exists an edge between two nodes, which can evaluate the performance of embedding methods on preserving explicit network topological structures. Meanwhile, it is also solved by using a binary classification technique, in which the input is the learned embedding representations of nodes and the output is the predicted result of a binary classifier. If there exists an edge between the two input nodes, the output is $ 1 $; otherwise, the output is $ 0 $. Link prediction adopted for dynamic network embedding is divided into three steps. First of all, learn embedding representations of nodes. Then, divide the dataset into a training set and a testing set. For any pair of nodes in the training and testing sets, a label is generated to record whether these two nodes have connections. 
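The binary formulation of link prediction described above can be sketched minimally. Scoring a pair by the sigmoid of the dot product of its node embeddings stands in for a trained binary classifier, and the 0.5 cut-off is an assumption; both embeddings below are hypothetical.

```python
import math

def predict_link(emb_u, emb_v, threshold=0.5):
    """Output 1 if an edge is predicted between the two nodes, else 0.
    The pair score is the sigmoid of the embeddings' dot product."""
    score = sum(a * b for a, b in zip(emb_u, emb_v))
    prob = 1.0 / (1.0 + math.exp(-score))
    return 1 if prob > threshold else 0

# Similar embeddings (positive dot product) yield a predicted edge,
# dissimilar embeddings (negative dot product) yield no edge.
print(predict_link([1.0, 0.5], [0.8, 0.4]))    # 1
print(predict_link([1.0, 0.5], [-0.9, -0.2]))  # 0
```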
Finally, conduct link prediction experiments on the testing set. The AUC and MAP measures are employed as the evaluation metrics. MAP measures models with both recall and precision, which reflects the global performance. Compared with AUC, MAP considers the ranking of returned items. The formula of MAP is\n\begin{equation}\nMAP = \frac{{\sum\nolimits_{i=1}^{|V|} {AP(i)} }}{{|V|}}\n\label{eq17}\n\end{equation}\n\begin{equation}\nAP(i) = \frac{{\sum\nolimits_j {precision@j(i) \cdot {\Delta _i}(j)} }}{{|\{ {\Delta _i}(j) = 1\} |}}\n\label{eq18}\n\end{equation}\nwhere $precision@j(i)$ is the precision at position $j$ for node $v_{i}$, and ${\Delta _i}(j)=1$ represents a connection between nodes $i$ and $j$.", "id": "b9eb28d0-9e8d-44fc-871f-519afd43c428", "level": "subsection", "origin_cites_number": 3, "parent_id": "73e83d02-6458-454a-98d9-080bd91963aa", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Applications" ], [ "subsection", "Link Prediction" ] ], "subsections": [], "title": "Link Prediction" }, { "cite_extract_rate": 0, "cites": [], "content": "As an unsupervised task, the objective of node clustering is dividing a network into several clusters, where nodes belonging to the same cluster have more similarity. Analogous to node classification, node clustering endeavors to learn the category information of nodes. After node representations are obtained from embedding algorithms, some typical clustering methods, for example Kmeans , are then adopted to perform node clustering. The normalized mutual information (NMI) is commonly utilized to evaluate the clustering performance of network embedding algorithms. The function is shown as follows:\n\begin{equation}\n\begin{tiny}\nNMI\!=\!1\!-\!\frac{H(M_1|M_2)_{norm}\!+\!H(M_2|M_1)_{norm}}{2}\n\end{tiny}\n\end{equation}\nwhere $NMI$ measures the similarity between the clustering results $M_1$ and $M_2$, and $H(\cdot)$ denotes information entropy. 
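As a concrete reading of the MAP and NMI formulas in this section, here is a small pure-Python sketch. Two conventions are assumptions, since the text does not fix them: candidates are ranked by a model score to obtain precision@j, and each conditional entropy is normalized by the matching marginal entropy.

```python
import math
from collections import Counter

def average_precision(scores, relevant):
    """AP(i): rank candidate nodes by score, then average precision@j
    over the ranks j where a true link occurs (Delta_i(j) = 1)."""
    ranked = sorted(range(len(scores)), key=lambda j: -scores[j])
    hits, ap = 0, 0.0
    for rank, j in enumerate(ranked, start=1):
        if j in relevant:
            hits += 1
            ap += hits / rank  # precision@rank at each hit
    return ap / len(relevant) if relevant else 0.0

def mean_average_precision(all_scores, all_relevant):
    """MAP: mean of per-node AP over all evaluated nodes."""
    aps = [average_precision(s, r) for s, r in zip(all_scores, all_relevant)]
    return sum(aps) / len(aps)

def _entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log(c / n) for c in Counter(labels).values())

def _cond_entropy(x, y):
    """H(X|Y) computed from two label assignments over the same nodes."""
    n = len(x)
    py = Counter(y)
    joint = Counter(zip(x, y))
    return -sum(c / n * math.log((c / n) / (py[b] / n))
                for (_, b), c in joint.items())

def nmi(m1, m2):
    """NMI = 1 - (H(M1|M2)_norm + H(M2|M1)_norm) / 2, assuming each
    conditional entropy is normalized by the matching marginal entropy."""
    h1, h2 = _entropy(m1), _entropy(m2)
    t1 = _cond_entropy(m1, m2) / h1 if h1 else 0.0
    t2 = _cond_entropy(m2, m1) / h2 if h2 else 0.0
    return 1.0 - (t1 + t2) / 2.0
```

Under this reading, two clusterings that agree up to a relabeling score an NMI of 1, while statistically independent clusterings score 0.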
To evaluate a model, if nodes that belong to the same category are clustered into the same cluster based on the obtained node representations, the embedding algorithm learns desirable node representations.", "id": "2e84eb5e-e09a-4341-ba88-a600057a2850", "level": "subsection", "origin_cites_number": 2, "parent_id": "73e83d02-6458-454a-98d9-080bd91963aa", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Applications" ], [ "subsection", "Node Clustering" ] ], "subsections": [], "title": "Node Clustering" }, { "cite_extract_rate": 0, "cites": [], "content": "Network visualization is also a widespread application in dynamic network embedding. In order to generate meaningful visualization results, a network is mapped into two-dimensional space. Embedding representations of nodes are used as the input of visualization tools, such as t-SNE . After node representations are generated by embedding algorithms, t-SNE first maps the $d$-dimensional node representations into a 2D space in which the visualization results are displayed. Network visualization is conducted on datasets with labels. Nodes with different labels are marked with different colors in the 2D space. 
Because network embedding preserves the intrinsic structures of a network, visualization results directly reflect that the nodes in the same cluster in two-dimensional space have more similar characteristics.", "id": "66747f18-b47f-48e2-84a0-b5dc7e84640c", "level": "subsection", "origin_cites_number": 1, "parent_id": "73e83d02-6458-454a-98d9-080bd91963aa", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Applications" ], [ "subsection", "Network Visualization" ] ], "subsections": [], "title": "Network Visualization" }, { "cite_extract_rate": 0, "cites": [], "content": "The above survey of traditional and novel dynamic network embedding methods demonstrates that putting forward creative methods to effectively improve the performance of downstream network analysis tasks is still necessary. Networks evolve over time, yet the above models are limited to two modes, the snapshot mode and the continuous-time mode, which are difficult to adapt to more complex situations. 
Therefore, we discuss several promising research directions for future works in this section.", "id": "4e095d52-e7d7-4248-b7f5-ecbc944db33c", "level": "section", "origin_cites_number": 0, "parent_id": "1746dcc5-851c-4521-ab0d-90b718fe1c00", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Future Research Directions" ] ], "subsections": [ "79258454-d985-4f44-9315-ae3d1acce52c", "5b1aec33-9ba6-4316-bdd3-4d6ba38743ce", "3f7ab1c4-f5f4-42c0-b61e-9d4665493176", "3f4484a2-f4d4-4af6-830e-5758889ea708", "0f0f434c-6133-4ce2-b4fd-5bd478baaf01", "acc92987-6f6c-4a6a-aba6-c0b4ed7d1f55" ], "title": "Future Research Directions" }, { "cite_extract_rate": 0, "cites": [], "content": "In this survey, we roughly divide the existing dynamic network embedding methods into five types of models, i.e., matrix factorization based models, Skip-Gram based models, auto-encoder based models, graph convolution based models and other models. However, all of these models are inspired by static embedding methods. As shown in Fig. \ref{figure2}, for all existing dynamic embedding methods, the dynamics are reflected in the process of handling the original network rather than the process of embedding the network. In fact, there are few works that implement network embedding while considering real-time network evolution. For example, if some edges are added or removed in a network, existing methods need to reprocess the original network and then perform network embedding, which is time-consuming. 
Aiming at effectively updating the embedding representations when the status of a network changes, dynamic embedding models are highly desirable.", "id": "79258454-d985-4f44-9315-ae3d1acce52c", "level": "subsection", "origin_cites_number": 0, "parent_id": "4e095d52-e7d7-4248-b7f5-ecbc944db33c", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Future Research Directions" ], [ "subsection", "Dynamic Embedding Models" ] ], "subsections": [], "title": "Dynamic Embedding Models" }, { "cite_extract_rate": 1, "cites": [ 282 ], "content": "For network embedding, Tang \textit{et al.} proposed LINE to analyze large-scale complex networks. However, LINE is not applicable for dynamic networks. Due to the complexity of performing network evolution in dynamic networks, existing dynamic network embedding methods cannot be adopted for more complex real-world networks. From the insights of the above survey, dynamic networks can be represented as snapshots or continuous-time networks marked by timestamps. The larger the number of snapshots or timestamps, the higher the complexity of network evolution. Therefore, the efficiency of dynamic network embedding can be improved from two aspects, i.e., reducing the complexity of network evolution or improving embedding models. 
To conclude, large-scale dynamic network embedding is a difficult research point that no effective work has yet addressed.", "id": "5b1aec33-9ba6-4316-bdd3-4d6ba38743ce", "level": "subsection", "origin_cites_number": 1, "parent_id": "4e095d52-e7d7-4248-b7f5-ecbc944db33c", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Future Research Directions" ], [ "subsection", "Dynamic Network Embedding for Large-scale Networks" ] ], "subsections": [], "title": "Dynamic Network Embedding for Large-scale Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "Although many researchers focus on dynamic network embedding, existing methods are mainly proposed based on homogeneous networks that contain a single type of nodes and edges. Nevertheless, it is well recognized that many real-world networks are heterogeneous networks where several types of nodes and edges exist. Dynamic embedding of heterogeneous networks needs to consider node types and edge types when networks change over time. What's more, if the structures and properties are extracted from the original heterogeneous networks, the extracted structures and properties should be preserved as much as possible in the process of network evolution. Exploring the domain of dynamic network embedding on heterogeneous networks is an interesting research direction.", "id": "3f7ab1c4-f5f4-42c0-b61e-9d4665493176", "level": "subsection", "origin_cites_number": 0, "parent_id": "4e095d52-e7d7-4248-b7f5-ecbc944db33c", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Future Research Directions" ], [ "subsection", "Dynamic Network Embedding on Heterogeneous Networks" ] ], "subsections": [], "title": "Dynamic Network Embedding on Heterogeneous Networks" }, { "cite_extract_rate": 0.5, "cites": [ 7231 ], "content": "Attributed networks contain extra attributed information, such as texts or contents . 
Dynamic embedding of attributed networks emphasizes the information that both the structures of networks and the attributes of nodes provide as networks evolve. Little work on this research direction has been proposed; only Li \textit{et al.} proposed DANE, which updates both the adjacency and attribute matrices when a network changes over time. Nevertheless, the work of Li \textit{et al.} only studies the situation where a dynamic network is divided into sequential snapshots. No research has paid attention to the other situation, in which the dynamic network is represented as continuous-time networks marked by timestamps.", "id": "3f4484a2-f4d4-4af6-830e-5758889ea708", "level": "subsection", "origin_cites_number": 2, "parent_id": "4e095d52-e7d7-4248-b7f5-ecbc944db33c", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Future Research Directions" ], [ "subsection", "Dynamic Network Embedding on Attributed Networks" ] ], "subsections": [], "title": "Dynamic Network Embedding on Attributed Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "As mentioned above, network mining tasks include link prediction, node classification, node clustering, network visualization and so on. Network embedding methods oriented to different network mining tasks have been proposed for static networks. For example, Yang \textit{et al.} described a network embedding technology for node classification. SHINE presents a signed heterogeneous information network embedding framework for sentiment link prediction. Compared with universal network embedding algorithms that can be utilized to conduct different network mining tasks, task-oriented network embedding methods focus on only one task so that some extra information related to the task can be extracted to train the embedding models. Task-oriented embedding models are more effective on the target task than universal embedding models. 
However, there is no related work addressing a single network mining task in dynamic network embedding.", "id": "0f0f434c-6133-4ce2-b4fd-5bd478baaf01", "level": "subsection", "origin_cites_number": 2, "parent_id": "4e095d52-e7d7-4248-b7f5-ecbc944db33c", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Future Research Directions" ], [ "subsection", "Task-oriented Dynamic Network Embedding" ] ], "subsections": [], "title": "Task-oriented Dynamic Network Embedding" }, { "cite_extract_rate": 0, "cites": [], "content": "In the previous works, EOE presents a joint deep embedding framework for coupled heterogeneous networks, which embeds a network in two embedding spaces. However, EOE is designed for static networks. According to the existing methods mentioned above, the dynamic network is commonly represented in one embedding space. In general, a deeper embedding with more target spaces means higher time and space complexity. On the other hand, embedding a network into more spaces mines more possibilities of preserving network structures and properties. Therefore, exploring different embedding spaces is an interesting and challenging research direction for dynamic network embedding.", "id": "acc92987-6f6c-4a6a-aba6-c0b4ed7d1f55", "level": "subsection", "origin_cites_number": 1, "parent_id": "4e095d52-e7d7-4248-b7f5-ecbc944db33c", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Future Research Directions" ], [ "subsection", "More Embedding Spaces" ] ], "subsections": [], "title": "More Embedding Spaces" }, { "cite_extract_rate": 0, "cites": [], "content": "In this paper, we summarize the existing dynamic network embedding methods. A dynamic network can be divided into sequential snapshots or continuous-time networks marked by timestamps. 
Based on this premise, we divide dynamic network embedding models into five categories: embedding models based on eigenvalue decomposition, embedding models based on Skip-Gram, embedding models based on autoencoders, embedding models based on graph convolution and other embedding models. Almost all existing dynamic network embedding methods are generalized into these five categories. For each category, we provide a detailed description. What's more, the applications of dynamic network embedding are also included in our survey. In the applications section, the datasets and network mining tasks that are widely adopted in dynamic network embedding are described in detail. Finally, we provide six interesting and promising future research directions.\n\section*{Acknowledgement}\nThe authors wish to thank the editors and anonymous reviewers for their valuable comments and helpful suggestions which greatly improved the paper's quality. This work was supported by the Key Research and Development Program of Shaanxi Province (Grant no. 2019ZDLGY17-01, 2019GY-042).\n\bibliographystyle{spmpsci}\n\bibliography{mybibfile}\n\end{document}", "id": "c1d28608-76e5-4e53-b902-b3439e85de0a", "level": "section", "origin_cites_number": 0, "parent_id": "1746dcc5-851c-4521-ab0d-90b718fe1c00", "prefix_titles": [ [ "title", "A Survey on Dynamic Network Embedding" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
54
[ 282, 1010, 218, 217, 6199, 219, 479, 7231, 8978, 7245, 7232, 7241, 302, 7236, 8113, 242 ]
0.99225
[ "Sailik Sengupta", "Ankur Chowdhary", "Abdulhakim Sabur", "Adel Alshamrani", "Dijiang Huang", "Subbarao Kambhampati" ]
A Survey of Moving Target Defenses for\\Network Security
2019
2019-05-02T21:06:44Z
cs.CR
Network defenses based on traditional tools, techniques, and procedures (TTP) fail to account for the attacker's inherent advantage present due to the static nature of network services and configurations. To take away this asymmetric advantage, Moving Target Defense (MTD) continuously shifts the configuration of the underlying system, in turn reducing the success rate of cyberattacks. In this survey, we analyze the recent advancements made in the development of MTDs and highlight (1) how these defenses can be defined using common terminology, (2) can be made more effective with the use of artificial intelligence techniques for decision making, (3) be implemented in practice and (4) evaluated. We first define an MTD using a simple and yet general notation that captures the key aspects of such defenses. We then categorize these defenses into different sub-classes depending on {\em what} they move, {\em when} they move and {\em how} they move. In trying to answer the latter question, we showcase the use of domain knowledge and game-theoretic modeling can help the defender come up with effective and efficient movement strategies. Second, to understand the practicality of these defense methods, we discuss how various MTDs have been implemented and find that networking technologies such as Software Defined Networking and Network Function Virtualization act as key enablers for implementing these dynamic defenses. We then briefly highlight MTD test-beds and case-studies to aid readers who want to examine or deploy existing MTD techniques. Third, our survey categorizes proposed MTDs based on the qualitative and quantitative metrics they utilize to evaluate their effectiveness in terms of security and performance. We use well-defined metrics such as risk analysis and performance costs for qualitative evaluation and metrics based on Confidentiality, Integrity, Availability (CIA), attack representation, QoS impact, and targeted threat models for quantitative evaluation. 
Finally, we show that our categorization of MTDs is effective in identifying novel research areas and highlight directions for future research.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "18148880-f549-40b4-8c44-171d9433ba0c", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ] ], "subsections": [ "17372709-22e9-4094-bd72-6e7828de1107", "24cae700-9785-4dcb-9965-981916bbd293", "61ef46a6-da85-4c8f-96c6-de7df290c870", "900fc6c6-3b7c-4ff3-bacb-6fa00ef41c6d", "44ef66fe-9579-4293-ae61-6cf47c0da6cc", "56f30f45-b9e4-4c67-88a4-c18316a604e6", "2db22e7b-e773-4555-9928-0389f288806d", "f9a0f0c3-0550-4003-8ffe-cd2e2130e47d" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "\\input{figures/survey_idea.tex}\n\\IEEEPARstart{T}{he} network and cloud infrastructures have become both ubiquitous and more complex in the past few years. Gartner predicts that by the year 2025, 80\\% of the entire IT infrastructure, which includes deployed applications, technologies, and services, will be cloud-based . \nWhile the performance aspects with regard to storage capacity, networking efficiency, and hardware have received due attention and evolved with business demands, aspects that govern the security of cloud infrastructure are still managed using traditional means. Given that security breaches can lead to loss of customer trust and worsen business reputation, a key question is \\emph{how effective are these traditional security approaches?} Is monitoring network traffic for malicious patterns, routinely patching known vulnerabilities, and relying on network and perimeter defenses such as Firewalls, Intrusion Detection Systems, Anti-malware tools, {\\em etc.} enough to detect and thwart attacks?\nThere exist multiple shortcomings in these traditional defense mechanisms. First, the attacker, with time on their side, can spend reconnaissance effort in modeling the cloud system (the defenses in place) and then carefully plan their attacks. 
Second, implementations of these defenses in practice are often far from ideal, thereby allowing attackers even more opportunities to exploit the system. A report from 2016 predicts that by the end of 2020, 99\\% of the vulnerabilities exploited will have been known to security and IT professionals for at least a year . A major reason for this is the time and complexity associated with the process of routinely patching vulnerabilities. Often, the fear of degradation in the Quality of Service (QoS) provided to customers deters Cloud Service Providers (CSPs) from making changes to the existing system configuration. Third, {\\em zero-day} attacks developed using the information the attacker gathers during the reconnaissance phase can render traditional defenses useless.\nTo address the shortcomings of existing defense methods, \\emph{Moving Target Defense} (MTD) has emerged as a solution that provides \\emph{proactive defense} against adaptive adversaries. The goal of MTD is to constantly move between multiple configurations in a cyber-system (such as changing the open network ports, network configuration, software, {\\em etc.}), thereby increasing the uncertainty for the attacker; in effect, diminishing the advantage of reconnaissance that an attacker inherently has against traditional defense mechanisms. The advantages of MTDs go away if the shifting mechanism is deterministic because the attacker, with time on their side, will eventually be able to predict this movement and design attacks accordingly. Thus, for MTDs to be effective, they need to have implicit randomness built into them. This survey categorizes MTDs based on what they shift, when they shift, and how they shift. \nThe dynamic aspect of MTD adds an extra layer of complexity in implementing these defenses. To address this, one can leverage advances in networking technology. 
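To make the role of implicit randomness concrete, consider the following toy sketch of a movement schedule (the port pool, interval bounds, and function name are hypothetical, not drawn from any surveyed system):

```python
import random

# Toy sketch of a randomized MTD movement schedule: the defender
# randomizes both *what* moves (the service port, drawn from a pool)
# and *when* it moves (a jittered dwell time), so an attacker cannot
# predict the next configuration from past observations.
def next_move(port_pool, current_port, min_gap=30.0, max_gap=120.0, rng=random):
    """Pick the next (port, dwell-time) pair uniformly at random."""
    candidates = [p for p in port_pool if p != current_port]
    new_port = rng.choice(candidates)       # "what" to move to
    dwell = rng.uniform(min_gap, max_gap)   # "when" to move again
    return new_port, dwell
```

A deterministic round-robin over the same pool would be strictly weaker: after observing one full cycle, the attacker knows the schedule exactly.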
First, \\emph{Network Functions Virtualization} (NFV)~ has emerged as a technology to provide a virtualized implementation of hardware-based equipment such as firewalls, routers, {\\em etc.}, through \\emph{Virtual Machines} (VMs) or containers running on top of a physical server in cloud computing environments. Given that MTDs often need more hardware than conventional software systems, such virtualization helps reduce the cost of implementation.\nSecond, \\emph{Software-Defined Networking} (SDN)~, which serves as an enabling technology for NFV, provides a centralized security policy enforcer. The programmable interface afforded by SDN can help administrators implement optimal movement strategies for MTDs.\nFurthermore, enabling MTD implementation at scale will help us evaluate the effectiveness of these defenses in practical settings.\nThe key contributions of this survey are as follows: (1) it provides an umbrella under which we can categorize the array of MTD techniques proposed for network environments, (2) it introduces a common language that can be utilized to describe (and understand) the assumptions and threat models of various MTDs, (3) it gives an overview of how these defenses are implemented by researchers, highlighting testbeds and case studies to guide their large-scale deployment, and (4) it discusses how MTDs have been evaluated from a qualitative and a quantitative standpoint, in effect, shedding light on how effective these tools and techniques are with regard to security and performance. Figure \\ref{fig:workflowedge} gives a quick overview of this survey.\nThe rest of the survey is organized in the following manner. In Section \\ref{sec:background}, we introduce the reader to some background knowledge about the various stages of an attack in cloud systems, popular methods for detecting and defending against malicious traffic, and formal frameworks to capture knowledge about attacks in networking systems. 
In Section~\\ref{sec:mtd}, we propose a universal notation that captures the key aspects of all the MTDs proposed and use it to investigate and categorize the defenses depending on how they answer the questions (1) \\emph{what to move}, (2) \\emph{when to move} and (3) \\emph{how to move}. In this regard, we also highlight how the different cyber surfaces-- discussed under {\\em what to move}-- relate to the various stages of an attack described in Section \\ref{sec:background}. In Section \\ref{sec:mtd_impl}, we discuss how the various MTD works have been implemented in practice, with special emphasis on the role (and effectiveness) of SDN and NFV in enabling them. We showcase examples of existing MTD testbeds and perform case-studies that can help security personnel in the adoption of MTD solutions for production-grade networks. In Section~\\ref{effect}, we elaborate on various qualitative and quantitative metrics that have been presented in the literature; this can help a cloud administrator decide if a defense mechanism is secure enough. In Section~\\ref{research}, we highlight areas, in terms of our categorization, that have received less attention and discuss an array of future research directions. Finally, we conclude the survey in Section~\\ref{concl}.\n\\begin{table*}[t]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n \\hline\n \\multirow{2}{*}{Research Work} & \\multicolumn{3}{c|}{Cyber Surface Shift Analysis} & \\multirow{2}{*}{Relation to APTs} & \\multirow{2}{*}{MTD Implementation} & \\multicolumn{2}{c|}{MTD Evaluation} & \\multirow{2}{*}{Research Directions} \\\\\\cline{2-4}\\cline{7-8} \n & What? & When? & How? 
& & & Qualitative & Quantitative & \\\\ \n \\hline\n ~ & {\\color{Red} \\ding{51}} & {\\color{Red} \\ding{51}} & {\\color{Gray} \\ding{55}} & {\\color{Gray} \\ding{55}} & {\\color{Gray} \\ding{55}} & {\\color{Gray} \\ding{55}} & {\\color{Gray} \\ding{55}} & {\\color{Red} \\ding{51}} \\\\\n \\hline\n ~ &{\\color{Red} \\ding{51}} & {\\color{Gray} \\ding{55}} &{\\color{Red} \\ding{51}} & {\\color{Gray} \\ding{55}} & {\\color{Gray} \\ding{55}} & {\\color{Gray} \\ding{55}} & {\\color{Gray} \\ding{55}} & {\\color{Gray} \\ding{55}} \\\\\n \\hline \n ~ &{\\color{Red} \\ding{51}} & {\\color{Gray} \\ding{55}} & {\\color{Gray} \\ding{55}} & {\\color{Gray} \\ding{55}} & {\\color{Gray} \\ding{55}} & {\\color{Red} \\ding{51}} & {\\color{Gray} \\ding{55}} & {\\color{Red} \\ding{51}}\\\\\n \\hline \n ~ & {\\color{Red} \\ding{51}} & {\\color{Gray} \\ding{55}} & {\\color{Gray} \\ding{55}} & {\\color{Gray} \\ding{55}} & {\\color{Gray} \\ding{55}} & {\\color{Gray} \\ding{55}} &{\\color{Red} \\ding{51}} & {\\color{Gray} \\ding{55}} \\\\\n \\hline \n ~ & {\\color{Red} \\ding{51}} & {\\color{Gray} \\ding{55}} & {\\color{Red} \\ding{51}} & {\\color{Gray} \\ding{55}} & {\\color{Gray} \\ding{55}} & {\\color{Red} \\ding{51}} & {\\color{Red} \\ding{51}} & {\\color{Red} \\ding{51}} \\\\\n \\hline \n ~ &{\\color{Red} \\ding{51}} & {\\color{Gray} \\ding{55}} & {\\color{Gray} \\ding{55}} & {\\color{Gray} \\ding{55}} & {\\color{Gray} \\ding{55}} & {\\color{Red} \\ding{51}} & {\\color{Gray} \\ding{55}} & {\\color{Red} \\ding{51}} \\\\\n \\hline\n Our Survey & {\\color{Red} \\ding{51}} & {\\color{Red} \\ding{51}} & {\\color{Red} \\ding{51}} & {\\color{Red} \\ding{51}} & {\\color{Red} \\ding{51}} & {\\color{Red} \\ding{51}} & {\\color{Red} \\ding{51}} & {\\color{Red} \\ding{51}} \\\\\n \\hline\n\\end{tabular}\n\\caption{Our categorization of MTDs is a super-set of the categorization considered in earlier works. 
The analysis of MTDs in the context of Advanced Persistent Threats (APTs), a unified view to analyze the what, when, and how of MTDs, and case studies of their implementation are entirely new.}\n\\label{tab:7}\n\\end{table*}", "id": "17372709-22e9-4094-bd72-6e7828de1107", "level": "section", "origin_cites_number": 10, "parent_id": "18148880-f549-40b4-8c44-171d9433ba0c", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:background}\nIn this section, we first discuss related surveys in the area of Moving Target Defense (MTD) that look at the aspects of characterizing and evaluating proactive defenses. This helps us to situate our survey and highlight several aspects of MTDs that existing surveys ignore. Second, we describe the various stages of an attack that establish a realistic threat model against a cyber-system. This helps us understand which step(s) of an attack a particular MTD technique seeks to disarm. Third, we highlight the traditional defense methods that are presently used to detect or reduce the impact of cyberattacks. As seen later, MTD mechanisms often leverage these traditional defenses, adding movement to the way they are deployed; this makes it harder for an attacker to fool the overall defense. 
Finally, we provide an overview of existing databases (NVD, OVSDB, CVSS)~ that are used to obtain domain knowledge about known vulnerabilities and popular attack representation methods such as attack graphs and attack trees.", "id": "24cae700-9785-4dcb-9965-981916bbd293", "level": "section", "origin_cites_number": 2, "parent_id": "18148880-f549-40b4-8c44-171d9433ba0c", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ] ], "subsections": [ "46b71b3f-0d60-48f3-b03d-81e9ceec5855", "daf7174b-5723-45d7-87b9-08446895a1a6", "f4dd679f-154f-4c15-b60e-cdc7b79703af", "dffbc6b3-7e82-4a57-9c3c-3f811891434e", "755bafaa-850c-45cb-a83e-9c5e472fefc1", "642af0c3-5cfc-43ea-a636-72665a1a2e93" ], "title": "Background and Related Work" }, { "cite_extract_rate": 0, "cites": [], "content": "We present a comparison of our survey to existing surveys in Table~\\ref{tab:7}. Firstly, we observe that most existing surveys~ provide only partial coverage of topics relating to \\emph{what, when, and how} to move the elements of the network. Section \\ref{sec:mtd} provides a more holistic view of various Moving Target Defenses (MTDs). Moreover, the techniques surveyed do not talk about modeling Advanced Persistent Threat (APT) scenarios . Our survey, on the other hand, provides an overview of APT and its relation to the attack and defense surfaces, thereby helping us to highlight how a particular MTD may be effective against both known attacks as well as unknown attacks.\nWe provide an in-depth analysis of how MTDs are implemented and the role of networking technologies such as SDN and NFV in enabling them. We categorize the MTDs based on the maturity level of their implementation-- ranging from simulation-based analysis to research test-beds and production-level industrial products. 
Table~\\ref{tab:adel} summarizes various MTDs, highlighting their use of centralized networking paradigms such as SDN/NFV (yes/no), and the level of maturity at which they have been implemented (analytic/simulation/emulation/commercial). This categorization has not been considered in previous surveys.\nSome existing surveys~ have taken a look at categorizing the evaluation metrics for understanding the effectiveness of MTDs, but these works do not talk about the different components that contribute to the evaluation of MTDs. In our analysis, we consider both security and performance metrics associated with each system configuration and with the ensemble, enabling us to highlight directions that can help improve existing MTD solutions and mechanisms.\nBeyond simply providing a categorization of existing work, the goal of our survey is to establish a common language for researchers who develop MTDs. This will help in making evident the aspects that have been considered and those that have been assumed away in the development of a particular MTD in the future. Our categorization also helps to identify promising directions for future research such as prevention, hybrid surface shifting, improving the modeling of APTs, {\\em etc.} (see Figure \\ref{fig:fw}).", "id": "46b71b3f-0d60-48f3-b03d-81e9ceec5855", "level": "subsection", "origin_cites_number": 5, "parent_id": "24cae700-9785-4dcb-9965-981916bbd293", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Related Works and the Need for this Survey" ] ], "subsections": [], "title": "Related Works and the Need for this Survey" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{acs}\nOrganizations utilize advanced infrastructure management tools and follow best practices such as software patching, hardening, and analysis of system logs to reduce the attack surface. 
Yet, skilled adversaries manage to compromise network assets by utilizing zero-day attacks, customized malware, {\\em etc.}, which are often difficult to detect or prevent using intrusion detection systems and anti-virus tools. For the effective deployment of intelligent cyber-defenses, it is crucial to collect information (called the threat model) about the attack process followed by an adversary.\nAn intelligence-driven approach focused on studying particular threats from an attacker's perspective is key for the detection and mitigation of sophisticated attacks in networks, which are characterized as \\emph{Advanced Persistent Threats} (APTs) . To understand the collection, correlation and categorization of data related to a cyberattack, Lockheed Martin defines the \\emph{Cyber Kill Chain} (CKC)~. The evidence-based knowledge derived from their study can help us in understanding and deploying appropriate defense measures. Thus, we will first describe the different phases of the CKC, followed by a brief description of APTs and how they can be viewed through the lens of the CKC. 
This setup will later help us understand how MTDs can be effective against the different phases of an APT.", "id": "daf7174b-5723-45d7-87b9-08446895a1a6", "level": "subsection", "origin_cites_number": 3, "parent_id": "24cae700-9785-4dcb-9965-981916bbd293", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Attack Modeling Techniques" ] ], "subsections": [ "26747140-90b1-47e6-932b-0a70c03693ec", "b750c7d4-56dd-4f1f-81c1-3e2c6b41f868", "39662e0b-9a16-4cb4-8f20-923cdc78bd6d", "105ac763-bf72-4ca4-aa12-1c44c058e61b", "503e7e09-bf25-40e7-8c5c-212859b900d7", "2edf4306-6033-4e30-8875-c715956f1829", "46c62e99-2c5e-4690-9a7e-f8e20bbb4890" ], "title": "Attack Modeling Techniques" }, { "cite_extract_rate": 0, "cites": [], "content": "The attacker gathers information about the target environment in this phase. For example, the attacker can perform passive monitoring using automated tools such as trace-route and Nmap to perform network probes.", "id": "26747140-90b1-47e6-932b-0a70c03693ec", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "daf7174b-5723-45d7-87b9-08446895a1a6", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Attack Modeling Techniques" ], [ "subsubsection", "Reconnaissance" ] ], "subsections": [], "title": "Reconnaissance" }, { "cite_extract_rate": 0, "cites": [], "content": "The attacker, based on information obtained in the reconnaissance phase, utilizes tools and techniques such as a phishing e-mail, a malware-infected document, {\\em etc.} to create a targeted attack payload against the victim.", "id": "b750c7d4-56dd-4f1f-81c1-3e2c6b41f868", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "daf7174b-5723-45d7-87b9-08446895a1a6", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network 
Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Attack Modeling Techniques" ], [ "subsubsection", "Weaponization" ] ], "subsections": [], "title": "Weaponization" }, { "cite_extract_rate": 0, "cites": [], "content": "The transmission of the infected payload occurs during this stage. For example, the attacker may leave a malware-infected USB on the victim site or send an email to the Chief Financial Officer (CFO) of the company. This step requires the attacker to evade authentication, so people (and not technology) become the more important target during this phase. Thus, a trained workforce can help in reducing the attack surface area.", "id": "39662e0b-9a16-4cb4-8f20-923cdc78bd6d", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "daf7174b-5723-45d7-87b9-08446895a1a6", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Attack Modeling Techniques" ], [ "subsubsection", "Delivery" ] ], "subsections": [], "title": "Delivery" }, { "cite_extract_rate": 0, "cites": [], "content": "The attack detonation takes place during this stage. 
This phase involves the exploitation of a vulnerability and the attacker gaining elevated privileges on the victim's resources by utilizing a specially crafted payload that exploits a known (or zero-day) vulnerability.", "id": "105ac763-bf72-4ca4-aa12-1c44c058e61b", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "daf7174b-5723-45d7-87b9-08446895a1a6", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Attack Modeling Techniques" ], [ "subsubsection", "Exploitation" ] ], "subsections": [], "title": "Exploitation" }, { "cite_extract_rate": 0, "cites": [], "content": "Once the attacker gains elevated privileges in the exploitation stage, they may install malware on the victim's machine or choose to harvest useful information in the victim's database. Tools that can analyze abnormal behavior, such as anti-malware, \\emph{host-based IDS} (HIDS), {\\em etc.}, become quite important in attack detection during this stage.", "id": "503e7e09-bf25-40e7-8c5c-212859b900d7", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "daf7174b-5723-45d7-87b9-08446895a1a6", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Attack Modeling Techniques" ], [ "subsubsection", "Installation" ] ], "subsections": [], "title": "Installation" }, { "cite_extract_rate": 0, "cites": [], "content": "After the installation phase is complete, the attacker contacts the $C\\&C$ to maintain remote control over the victim machine. 
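As a toy illustration of blocking such outbound command-and-control traffic (the denylist and helper function below are hypothetical placeholders using reserved documentation addresses, not real threat intelligence):

```python
# Toy egress-filtering sketch: outbound connections are checked against
# a denylist of known command-and-control endpoints before being
# permitted. A real deployment would enforce this in a firewall or NIDS;
# the addresses here are placeholders from reserved documentation ranges.
KNOWN_CC_ENDPOINTS = {
    ("203.0.113.7", 443),
    ("198.51.100.9", 8080),
}

def allow_outbound(dst_ip, dst_port):
    """Return True if the outbound connection should be permitted."""
    return (dst_ip, dst_port) not in KNOWN_CC_ENDPOINTS
```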
Tools such as \\emph{network-based IDS} (NIDS) and outbound \\emph{firewall rules} are quite useful in detecting and blocking malicious outbound connection requests.", "id": "2edf4306-6033-4e30-8875-c715956f1829", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "daf7174b-5723-45d7-87b9-08446895a1a6", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Attack Modeling Techniques" ], [ "subsubsection", "Command and Control ($C\\&C$)" ] ], "subsections": [], "title": "Command and Control ($C\\&C$)" }, { "cite_extract_rate": 0, "cites": [], "content": "During this phase, the attacker executes the actions to achieve the attack goals, such as data-exfiltration, service disruption, {\\em etc.} Two other important behaviors often observed in this attack-step are \\emph{pivoting}, which involves identifying similar target nodes that have already been exploited, and \\emph{lateral movement}, which involves identifying new systems that can be exploited in the network.", "id": "46c62e99-2c5e-4690-9a7e-f8e20bbb4890", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "daf7174b-5723-45d7-87b9-08446895a1a6", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Attack Modeling Techniques" ], [ "subsubsection", "Actions on Objectives" ] ], "subsections": [], "title": "Actions on Objectives" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{aptdef}\n\\emph{Advanced Persistent Threats} (APTs) refer to a distinct set of attacks against a high-value target organization that differ from normal cyber attacks in several ways . First, APTs are carried out by a group of highly-skilled attackers who are well-funded. 
According to the \\emph{Mandiant Report}~, APTs such as \\emph{Operation Aurora, Shady Rat, and Red October} have been used on a global scale for industrial espionage . Oftentimes, the attackers mounting APTs work closely with government organizations. Second, an APT attacker is extremely persistent; they (1) pursue their objectives repeatedly, often over an extended period, (2) can adapt to a defender's efforts in trying to resist the attack, and (3) are determined to maintain the level of access within the defender's system required to achieve their objectives. Third, the key objective of APTs is to either exfiltrate information or undermine key services in the network using multiple attack vectors .", "id": "f4dd679f-154f-4c15-b60e-cdc7b79703af", "level": "subsection", "origin_cites_number": 5, "parent_id": "24cae700-9785-4dcb-9965-981916bbd293", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Advanced Persistent Threats (APTs)" ] ], "subsections": [ "48138c94-b1ed-4062-992b-0efbbff96f60" ], "title": "Advanced Persistent Threats (APTs)" }, { "cite_extract_rate": 0, "cites": [], "content": "An APT can be broken down into five phases-- reconnaissance, foothold establishment, lateral movement, data exfiltration, and post-exfiltration . These can be mapped to different phases of the Cyber Kill Chain.\nThe {\\em reconnaissance} phase in APTs maps directly to the {\\em reconnaissance} phase in the CKC, which was described earlier. The \\textit{foothold establishment} phase comprises the \\emph{weaponization} and the \\emph{delivery} phases of the CKC. 
The attackers use the information gathered from the reconnaissance phase in order to prepare an attack plan and eventually exploit vulnerabilities uncovered in the target organization's web applications, databases, and other software.\nOnce the attacker has gained a foothold into the target environment, they can try to move laterally and gain access to other sensitive hosts and critical information in the organizational network in the lateral movement phase of the APT. In this setting, the attacker needs to continuously explore (perform reconnaissance) and exploit the various components of the defender's system, mapping to the exploitation phase of the CKC.\nIn the data exfiltration phase of the APT, the attacker tries to exfiltrate the collected data to their \\emph{command and control} $(C\\&C)$ center. Most intrusion detection and prevention systems do ingress filtering and not egress filtering; thus, data exfiltration can often go undetected. \nThe goal of the attacker in the \\emph{post-exfiltration} phase is to maintain persistence in the system for a long duration of time until the funding for the attack campaign is exhausted or the goals of attacking the organization/government are fulfilled.", "id": "48138c94-b1ed-4062-992b-0efbbff96f60", "level": "paragraph", "origin_cites_number": 1, "parent_id": "f4dd679f-154f-4c15-b60e-cdc7b79703af", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Advanced Persistent Threats (APTs)" ], [ "paragraph", "Relation between APTs and CKC" ] ], "subsections": [], "title": "Relation between APTs and CKC" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{defense-mech}\nIn this section, we provide a brief overview of the various defense techniques and highlight how each of these defense mechanisms is effective in various parts of a Cyber Kill Chain (CKC) in Table \\ref{tab:2}. 
This discussion will help the reader better understand some of the MTD techniques that use these traditional defenses as a building block.\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{p{1.8cm} | p{1.06cm} p{0.6cm}p{0.7cm}p{0.8cm}p{0.9cm}}\n\\hline\n\\textbf{Phase}&\\multicolumn{5}{c}{\\textbf{Network Defense Techniques}} \\\\\n& \\textbf{Detect} & \\textbf{Deny} & \\textbf{Disrupt} &\\textbf{Degrade} & \\textbf{Deceive}\\\\\n\\hline\n\\hline\nReconnaissance & Web Analytics & & & \\\\\n\\hline\nWeaponization & NIDS & NIPS & & & \\\\\n\\hline\nDelivery & Vigilant user & Proxy filter & AV & Queuing & \\\\\n\\hline\nExploitation & HIDS & Patch & DEP & & \\\\\n\\hline\nInstallation & HIDS & chroot & AV & & \\\\\n\\hline\nC2 & NIDS & ACL & NIPS & Tarpit & DNS redirect \\\\\n\\hline\nActions on Objectives & Audit log & & & QoS & Honeypot \\\\\n\\hline\n\\end{tabular}\n\\caption{Highlights how existing defense mechanisms are related to the different phases of the Cyber Kill Chain.}\n\\label{tab:2}\n\\end{table}", "id": "dffbc6b3-7e82-4a57-9c3c-3f811891434e", "level": "subsection", "origin_cites_number": 0, "parent_id": "24cae700-9785-4dcb-9965-981916bbd293", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Defense Methods" ] ], "subsections": [ "8a9d86bd-4c4f-4929-99ee-45a117baaaee", "f744ac3f-7052-4238-8b84-0ea5b56e08f2", "f85fba5e-dea4-41c8-aac1-592765e2b8a3", "2c77a95c-03dd-449b-b740-474d6973a56d", "f401c169-0725-48f1-a191-56d5c05d83bc", "1ce0f04c-9c17-4f54-9ab3-74c28f7c3ef0", "f82814e2-cff4-4d11-a613-f3d588008143", "8da5fffe-3028-40b7-bb61-b1a8f5b05e1b", "fc51bf66-2558-47da-87e0-e3a8eb13bce9", "579afaea-171a-46c7-9f46-af385490edbf", "b5124969-6443-4c43-b2c9-9cae62b6dfef" ], "title": "Defense Methods" }, { "cite_extract_rate": 1, "cites": [ 9076 ], "content": "A huge amount of security-related information is present on the web-- in \\emph{social engineering} 
websites, phishing sites, and dark-web forums-- including discussions about a particular target product or company. As discussed by Glass \\textit{et al.}~, a system for exploring and analyzing the web for this information should provide (1) security-relevant information discovery capabilities, (2) \\emph{situation awareness} by performing real-time inference over the available information, and (3) predictive analysis to provide early warning of likely security attacks.", "id": "8a9d86bd-4c4f-4929-99ee-45a117baaaee", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "dffbc6b3-7e82-4a57-9c3c-3f811891434e", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Defense Methods" ], [ "subsubsection", "Web Analytics" ] ], "subsections": [], "title": "Web Analytics" }, { "cite_extract_rate": 0, "cites": [], "content": "There are two types of IDS agents that can be deployed in the network under attack, i.e., Network-based IDS (NIDS) and Host-based IDS (HIDS). NIDS, such as \\textit{Bro}~ and \\emph{Snort}~, use signature-matching techniques based on known attack patterns and can flag incoming network traffic as malicious or benign. 
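As an illustrative sketch of this signature-matching idea (the patterns and labels below are simplified placeholders, not real Bro or Snort rules):

```python
# Simplified sketch of signature-based detection: each signature is a
# byte pattern with a label; a payload containing any known pattern is
# flagged, otherwise it is treated as benign. Real NIDS rules also match
# on header fields, offsets, and protocol state.
SIGNATURES = {
    b"/etc/passwd": "path-traversal attempt",
    b"<script>": "possible XSS payload",
}

def classify(payload):
    for pattern, label in SIGNATURES.items():
        if pattern in payload:
            return label    # signature hit: flag as malicious
    return "benign"         # no known pattern matched
```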
HIDS, such as \\textit{auditd}~ or \\textit{tripwire}~, on the other hand, check the Indicators of Compromise (IOCs) on the physical server or VM in order to identify malicious activity, e.g., a privilege escalation attempt by a normal user.", "id": "f744ac3f-7052-4238-8b84-0ea5b56e08f2", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "dffbc6b3-7e82-4a57-9c3c-3f811891434e", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Defense Methods" ], [ "subsubsection", "Intrusion Detection Systems (IDS)" ] ], "subsections": [], "title": "Intrusion Detection Systems (IDS)" }, { "cite_extract_rate": 0, "cites": [], "content": "A network security threat prevention technology such as an IPS~ examines network traffic flows to detect and prevent vulnerability exploits. The exploits come in the form of malicious inputs to the target application or service. The IPS is often deployed behind the firewall and provides a complementary layer of analysis. Compared to an IDS, the IPS takes automated actions based on traffic pattern matches, such as (1) dropping malicious traffic packets, (2) blocking requests from a source, (3) resetting connections, and (4) sending an alarm to the administrator.", "id": "f85fba5e-dea4-41c8-aac1-592765e2b8a3", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "dffbc6b3-7e82-4a57-9c3c-3f811891434e", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Defense Methods" ], [ "subsubsection", "Intrusion Prevention Systems (IPS)" ] ], "subsections": [], "title": "Intrusion Prevention Systems (IPS)" }, { "cite_extract_rate": 0, "cites": [], "content": "A (reverse) proxy server, such as \\emph{nginx}, can act as an intermediary between the attacker on the public network and the private network. 
A proxy can help in shielding the real network from the attacker.", "id": "2c77a95c-03dd-449b-b740-474d6973a56d", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "dffbc6b3-7e82-4a57-9c3c-3f811891434e", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Defense Methods" ], [ "subsubsection", "Proxy Filtering" ] ], "subsections": [], "title": "Proxy Filtering" }, { "cite_extract_rate": 0, "cites": [], "content": "ACLs can be applied at different enforcement points in a network. ACLs, such as \\emph{netfilter}, investigate network traffic and, based on properties such as source/destination IP address, port number, protocol, {\\em etc.}, decide either to \\emph{permit} or \\emph{deny} it.", "id": "f401c169-0725-48f1-a191-56d5c05d83bc", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "dffbc6b3-7e82-4a57-9c3c-3f811891434e", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Defense Methods" ], [ "subsubsection", "Access Control List (ACL)" ] ], "subsections": [], "title": "Access Control List (ACL)" }, { "cite_extract_rate": 0, "cites": [], "content": "DEP is a security feature that can be deployed in the form of hardware or software to prevent malicious code from running on the host. 
They \\emph{monitor programs} running on the host and ensure they use memory in an expected (or safe) manner.", "id": "1ce0f04c-9c17-4f54-9ab3-74c28f7c3ef0", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "dffbc6b3-7e82-4a57-9c3c-3f811891434e", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Defense Methods" ], [ "subsubsection", "Data Execution Prevention (DEP)" ] ], "subsections": [], "title": "Data Execution Prevention (DEP)" }, { "cite_extract_rate": 0, "cites": [], "content": "A software program designed to protect hosts from malicious programs such as viruses, spyware, worms, rootkits, key-loggers, {\\em etc.} The AVs can be classified into two types -- \\emph{signature-based AV} and \\emph{behavior-based AV}. The signature-based AVs check a suspect program against a database of known malicious programs. On the other hand, the behavior-based AVs check the program's behavior by running it in a sandbox environment.", "id": "f82814e2-cff4-4d11-a613-f3d588008143", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "dffbc6b3-7e82-4a57-9c3c-3f811891434e", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Defense Methods" ], [ "subsubsection", "Anti-Virus (AV)" ] ], "subsections": [], "title": "Anti-Virus (AV)" }, { "cite_extract_rate": 0, "cites": [], "content": "This defense technique involves the purposeful introduction of delay when responding to queries. 
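The delay-based idea can be sketched as a generator that drips an endless banner to the client a byte at a time, in the spirit of SSH tarpits; the banner text and the delay bounds below are arbitrary illustrative choices, not taken from any particular tool.

```python
import itertools
import random
import time


def tarpit_schedule(chunk_size=1, min_delay=2.0, max_delay=8.0, seed=None):
    """Yield (delay_seconds, chunk) pairs that drip an endless banner;
    a client waiting for a complete reply is stalled indefinitely."""
    rng = random.Random(seed)
    for n in itertools.count():
        line = b"220-%d please hold, allocating resources...\r\n" % n
        for i in range(0, len(line), chunk_size):
            yield rng.uniform(min_delay, max_delay), line[i:i + chunk_size]


def serve_tarpit(conn, max_chunks=None):
    """Drip the schedule into any socket-like object exposing sendall()."""
    for sent, (delay, chunk) in enumerate(tarpit_schedule()):
        if max_chunks is not None and sent >= max_chunks:
            break
        time.sleep(delay)   # the delay, not the payload, is the defense
        conn.sendall(chunk)
```

Because the work happens per connection, a real deployment would run one such loop per accepted socket, keeping the cost to the defender negligible while each attacking client is held open.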
The key idea behind this defense mechanism is that adversaries will give up on a target if it takes too long to achieve the defined goal.", "id": "8da5fffe-3028-40b7-bb61-b1a8f5b05e1b", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "dffbc6b3-7e82-4a57-9c3c-3f811891434e", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Defense Methods" ], [ "subsubsection", "Tarpit" ] ], "subsections": [], "title": "Tarpit" }, { "cite_extract_rate": 0, "cites": [], "content": "The \\emph{network traffic} can be segmented on the service type and importance. The segmented traffic can be analyzed for bottlenecks and threats. The malicious traffic can be slowed down in order to increase the \\emph{Cost of Attack (COA)} or selectively dropped.", "id": "fc51bf66-2558-47da-87e0-e3a8eb13bce9", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "dffbc6b3-7e82-4a57-9c3c-3f811891434e", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Defense Methods" ], [ "subsubsection", "QoS" ] ], "subsections": [], "title": "QoS" }, { "cite_extract_rate": 0, "cites": [], "content": "The malicious connection requests can be served a different response than was expected. 
The attacker may seek to connect with a command and control center using a page with malware, but the DNS redirect will kill this chance.", "id": "579afaea-171a-46c7-9f46-af385490edbf", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "dffbc6b3-7e82-4a57-9c3c-3f811891434e", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Defense Methods" ], [ "subsubsection", "DNS Redirect" ] ], "subsections": [], "title": "DNS Redirect" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 314, 3854 ], "content": "A security mechanism, which can be used to detect, deceive or in some cases counter a malicious user trying to gain access to key network services~. Honeypots such as \\emph{Cowrie}~ can help in better understanding the attacker's tools and tactics. The connection requests from the attacker are directed to a decoy service, which mimics the behavior of the normal service and logs the attacker's activity. \nAlthough there exists a large set of defense mechanisms, attackers often use clever techniques to evade detection or prevention. SANS discusses methods like \\emph{obfuscation}, \\emph{fragmentation}, \\emph{encryption} and \\emph{Denial of Service} (DoS) attacks against signature-based detection tools.\nDetection based on HIDS can also be evaded by a skilled attacker using techniques such as file location manipulation (using directories or files white-listed by the IDS), application hijacking, {\\em etc.} Furthermore, machine learning models that can help overcome some of the shortcomings of signature-based detection tools can themselves be fooled by adversarial attacks. On similar lines, deception techniques such as DNS redirects and honeypots can help in deceiving the attacker temporarily, but over a longer period of time the attacker will eventually change their attack methodologies, thereby reducing the effectiveness of these defenses. 
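The decoy-service idea described above can be illustrated with a toy low-interaction honeypot that presents an SSH-style banner and logs each probe as a structured event. The banner string, port, and record fields are illustrative assumptions; Cowrie's actual implementation is far richer.

```python
import json
import socketserver
import time


def record_probe(peer, data, service="ssh"):
    """Build a structured log entry for one probe; real honeypots such as
    Cowrie emit comparable JSON events (field names here are assumptions)."""
    return {
        "ts": time.time(),
        "src_ip": peer[0],
        "src_port": peer[1],
        "service": service,
        "payload": data.decode("utf-8", errors="replace").strip(),
    }


class FakeSSHHandler(socketserver.BaseRequestHandler):
    """Answer with an SSH-looking banner, then log whatever the client sends."""
    BANNER = b"SSH-2.0-OpenSSH_7.4\r\n"  # which software to mimic is a choice

    def handle(self):
        self.request.sendall(self.BANNER)
        data = self.request.recv(1024)
        print(json.dumps(record_probe(self.client_address, data)))


# To deploy on an unprivileged port (a production decoy would sit on 22):
#   socketserver.TCPServer(("0.0.0.0", 2222), FakeSSHHandler).serve_forever()
```

The logged events are exactly the kind of attacker tool-and-tactic evidence the survey attributes to honeypots; in practice they would be shipped to a SIEM rather than printed.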
Stojanovski \\textit{et al.} performed an experimental analysis of how to bypass \\emph{DEP protection} on \\emph{Windows XP} systems.\nThus, there is a need to perform intelligent manipulation to ensure the attacker's likelihood of reaching the desired goal is limited, even if they can evade the traditional detection methods. Additionally, the defense mechanisms discussed in Section~\\ref{defense-mech} target known attacks, often with easy-to-detect signature patterns.", "id": "b5124969-6443-4c43-b2c9-9cae62b6dfef", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "dffbc6b3-7e82-4a57-9c3c-3f811891434e", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Defense Methods" ], [ "subsubsection", "Honeypot" ] ], "subsections": [ "f9ca993c-1c64-409a-b821-40f437ee12c1" ], "title": "Honeypot" }, { "cite_extract_rate": 0, "cites": [], "content": "The differences between APTs and regular cyberattacks, discussed in \\ref{aptdef}, make it arduous to use traditional defenses and pre-specified threat models to address APTs as a whole. In this regard, proactive defenses can prove to be effective against APTs. While MTDs, as we will see later, make it difficult for APT attackers by dynamically shuffling various system components (see Fig.~\\ref{fig:mtd-apt}), other proactive defenses such as cyber-deception can prove to be effective in gathering threat-model information. For example, Shu \\textit{et al.} propose a cyber deception to protect \\emph{FTP} services against APT attackers. In their research work, a defender reroutes attack traffic to a host, which may be a honeypot, and ensures that an attacker is not able to notice a connection difference between the real IP address and the honeypot trap. By observing the attacker's behavior on the honeypot, the defender updates the threat-model and, in turn, hardens their FTP services. 
A key aspect of this work is to make the attackers continuously believe that they are interacting with the original environment as opposed to a honeypot.\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{figures/attackgt.pdf}\n \\caption{Attack representations such as an attack graph (b) and an attack tree (c) for a simple network attack scenario (a) in a small-size cloud system with known vulnerabilities. Creation of these representations from network vulnerability and reachability information often suffers from scalability issues for real-world networks (see Table \\ref{tab:4}).}\n \\label{fig:4a}\n\\end{figure*}\n\\iffalse\n\\texthl{Ankur: Pointwise or tabular desc of issues with current defenses would be better.\\\\Sailik: I agree. But there are two reasons for my deterrence here-- (1) The write up had only described weaknesses against some of the traditional defense mechanisms (as opposed to all of them), which is necessary to make a pretty table, and (2) if we do that here and later use some of these defenses in our MTD, we will have to have another table describing why (and how) is MTD not vulnerable to any of these while each attack by themselves are.}\n\\begin{enumerate}\n \\item Obfuscation: Use a variant of traffic signature to evade IDS signature match.\n \\item Fragmentation: Breaking the attack data into multiple fragments.\n \\item Encryption: Most IDS tools are configured to operate on raw traffic, so encrypted traffic evades intrusion detection. \n \\item Denial of Service: Flooding the detection system with multiple attack packets, creating alarms originating from multiple source IP addresses. 
This will reduce the chances of IDS detecting the actual attacker considerably.\n\\end{enumerate}\n\\fi", "id": "f9ca993c-1c64-409a-b821-40f437ee12c1", "level": "paragraph", "origin_cites_number": 1, "parent_id": "b5124969-6443-4c43-b2c9-9cae62b6dfef", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Defense Methods" ], [ "subsubsection", "Honeypot" ], [ "paragraph", "Proactive Defenses against APTs" ] ], "subsections": [], "title": "Proactive Defenses against APTs" }, { "cite_extract_rate": 0, "cites": [], "content": "Defenders almost always have information about the system they want to protect. This knowledge can help in enhancing the threat-model and, in turn, improve the effectiveness of MTD techniques. Such information may range from knowledge of the network architecture, the capacity of the individual machines, known vulnerabilities (and an idea of how dangerous they are), {\\em etc.} We discuss here a few popular models and data sources that have been leveraged by researchers.", "id": "755bafaa-850c-45cb-a83e-9c5e472fefc1", "level": "subsection", "origin_cites_number": 0, "parent_id": "24cae700-9785-4dcb-9965-981916bbd293", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Domain Information for Modeling Cyber-attacks" ] ], "subsections": [ "65991ab9-5d5c-4e44-aa7c-562d7fce1ade", "ac51fdc2-8ef9-462e-89bf-a350fb2484f3" ], "title": "Domain Information for Modeling Cyber-attacks" }, { "cite_extract_rate": 0, "cites": [], "content": "are publicly known vulnerabilities and exposures that are categorized and can be referenced uniquely in the \\emph{National Vulnerability Database} (NVD) using a common vulnerability enumeration identifier (CVE-ID). 
This system is maintained and updated by the \\emph{Mitre} corporation regularly to make defenders and administrators aware of existing and new vulnerabilities.", "id": "65991ab9-5d5c-4e44-aa7c-562d7fce1ade", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "755bafaa-850c-45cb-a83e-9c5e472fefc1", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Domain Information for Modeling Cyber-attacks" ], [ "subsubsection", "Common Vulnerabilities and Exposures (CVEs)" ] ], "subsections": [], "title": "Common Vulnerabilities and Exposures (CVEs)" }, { "cite_extract_rate": 0, "cites": [], "content": "The use of the Common Vulnerability Scoring System (CVSS) for rating attacks is well studied in security . For (most) CVEs listed in the NVD database, we have a six-dimensional \\emph{CVSS v2} vector~, which can be decomposed into multiple components that represent Access Complexity (AC), i.e. how difficult it is to exploit a vulnerability, and the impact on Confidentiality, Integrity, and Availability (CIA) gained by exploiting a vulnerability. The values of AC are categorical \\{EASY, MEDIUM, HIGH\\}, while CIA values are in the range $[0, 10]$. These scores are also known as the Exploitability Score (ES) and the Impact Score (IS).\nAlthough a defender may be aware of the known vulnerabilities present in their system (by being aware of the published CVEs that affect them), the knowledge of how they affect their system, in particular, may help them in making better decisions for security. The attack representation can be useful to quantify the attack and defense surface for MTD. To this extent, we define two heavily used representation methods that can represent known attacks present in a system-- the \\emph{attack tree} and the \\emph{attack graph} as shown in the Figure~\\ref{fig:4a}. 
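The Exploitability Score (ES) and Impact Score (IS) described above can be computed directly from a raw CVSS v2 vector string. A minimal sketch using the published v2 base-score weights follows; note that CVSS v3 uses a different formula and is not handled here.

```python
# Metric weights taken from the CVSS v2 base-score equations.
_AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector
_AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # Access Complexity
_AU = {"M": 0.45, "S": 0.56, "N": 0.704}    # Authentication
_CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # Conf./Integ./Avail. impact


def parse_vector(vec):
    """'AV:N/AC:L/Au:N/C:P/I:P/A:P' -> dict of the six base metrics."""
    return dict(part.split(":") for part in vec.split("/"))


def exploitability_score(m):
    """ES = 20 * AV * AC * Au."""
    return round(20 * _AV[m["AV"]] * _AC[m["AC"]] * _AU[m["Au"]], 1)


def impact_score(m):
    """IS = 10.41 * (1 - (1-C)(1-I)(1-A))."""
    c, i, a = _CIA[m["C"]], _CIA[m["I"]], _CIA[m["A"]]
    return round(10.41 * (1 - (1 - c) * (1 - i) * (1 - a)), 1)
```

For instance, the common vector AV:N/AC:L/Au:N/C:P/I:P/A:P yields ES 10.0 and IS 6.4, the pair a defender would feed into an MTD cost-benefit model.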
\nFigure~\\ref{fig:4a} shows a network attack scenario in which an attacker has access to an FTP and an SSH server over the network. This example illustrates the two methods for representing multi-stage attacks introduced above, i.e., the attack graph and the attack tree. The FTP server contains a vulnerability that allows the attacker to obtain a remote shell (rsh) on the system. The SSH server, on the other hand, contains a buffer overflow ($sshd\\_bof$) vulnerability. The goal of the attacker is to obtain root privilege on the SSH server, i.e., \\textit{root(2)}. The attack progression is presented as an attack graph in Figure~\\ref{fig:4a}(b) and as an attack tree in Figure~\\ref{fig:4a}(c).\n\\textbf{Attack Tree}, shown in Figure~\\ref{fig:4a}(c), is another method of representing system security. An attack tree captures a monotonic path taken by an attacker from a leaf node to a goal node. It usually consists of a set of \\emph{AND} nodes (sshd(0,2), user(0) in Figure~\\ref{fig:4a}(c)) and \\emph{OR} nodes (rsh(1,2), $sshd\\_bof$(0,2)). The \\emph{OR} nodes represent one or more ways in which a goal node can be reached, whereas \\emph{AND} nodes represent conditions that must all be fulfilled to achieve a goal node.
Children of the node are refinements of this goal, and the attacks that can no longer be refined are represented by leaf nodes~.\n\\begin{definition}\nAn Attack Tree~ can be defined as a three-tuple $(N, \\rightarrow, N_R)$, where\n\\begin{itemize}\n \\item $N$ is the set of all possible nodes in the tree;\n \\item $S^+(N)$ is the \\emph{multi-set} of all possible subsets of the nodes $N$;\n \\item $\\rightarrow \\subseteq N \\times S^+(N)$ denotes the transition relation;\n \\item $N_R$ represents the \\emph{goal node} of the attack tree.\n\\end{itemize} \n\\end{definition}\n\\begin{table*}[t]\n \\centering\n \\setlength{\\extrarowheight}{3pt}\n \\begin{tabular}{|p{40mm}|p{60mm}|p{60mm}|}\n \\hline\n \\textbf{Category} & \\textbf{Details} & \\textbf{Complexity}\\\\\n \\hline\n Automated Attack Analysis ~ & Multi-prerequisite graph based on vulnerability and reachability information & $O(E+N\\lg N)$; $N$ is the number of attack graph nodes and $E$ the number of graph edges \\\\\n \\hline\n Attack Cost Modeling~ &\n Time-efficient, cost-effective hardening of a network using an attack graph & $O(n^{\\frac{d}{2}})$; $d$ represents the depth of the attack graph \\\\\n \\hline\n Attack Cost Modeling~ & Model-checking-based attack graph generation using a Markov Decision Process (MDP) & Approximation algorithm $\\rho(n) = H(\\max_{a \\in A} \\{\\mu_G(a)\\})$, where $A$ is the set of attacks and $\\mu$ is the maximization function.\\\\\n \\hline\n Scalable Attack Graph~ & Scalable attack graph using logical dependencies. & $O(N^2) - O(N^3)$, where $N$ is the number of nodes in the attack graph. \\\\\n \\hline\n Attack Graph based Risk Analysis~ & Scalable attack graph for risk assessment using a divide-and-conquer approach & $O(r(n+c)^k)$, where $r$ is a small coefficient. \\\\\n \\hline\n Attack Cost Modeling~ & Attack graph cost reduction and security problem solving framework via Min. Cost SAT solving. & NP-hard problem, SAT solving methods employed. \\\\\n \\hline\n Ranking Attack Graphs~ & Asset Ranking algorithm for ranking attack graphs to identify attacks. 
\\emph{Page Rank} based algorithm & Similar to the complexity of the PageRank algorithm. \\\\\n \\hline\n Attack Cost Modeling~ & Identifying critical portions of the attack graph. Min. Cost SAT solving, Counter-example guided abstraction refinement (CEGAR) & NP-hard problem, SAT solving methods used. \\\\\n \\hline\n \\end{tabular}\n \\caption{Complexity of the various Attack Representation methods.}\n \\label{tab:4}\n\\end{table*}\n\\textbf{Attack Graph} is a data structure used to represent attack propagation in a network with vulnerabilities, as shown in Figure~\\ref{fig:4a}(b). The start state of the attack graph represents the current privileges of the attacker. The goal state of the attack graph represents a state in which the intruder has successfully achieved his attack goal, e.g., data exfiltration, root privileges on a web server, {\\em etc.} Security analysts use attack graphs for attack detection, network defense, and forensics~. We formally define the attack graph as follows:\n\\theoremstyle{definition} \n\\begin{definition}\\label{definition:AG} Attack Graph (AG)\nAn attack graph is represented as a graph $G=\\{V,E\\}$, where \\textit{V} is the set of nodes and \\textit{E} is the set of edges of the graph \\textit{G}, such that\n\\begin{enumerate}\n \\item $V=N_C \\cup N_D \\cup N_R$, where $N_C$ denotes the set of conjunctive or exploit nodes, $N_D$ is the set of disjunctive nodes (the results of exploits), and $N_R$ is the set of starting nodes of the attack graph, i.e., root nodes.\n \\item $E=E_{pre} \\cup E_{post}$ are sets of directed edges, such that an edge $e \\in E_{pre} \\subseteq N_D \\times N_C$ means the condition $N_D$ must be satisfied to execute the exploit $N_C$. An edge $e \\in E_{post} \\subseteq N_C \\times N_D$ means that executing the exploit $N_C$ leads to the consequence $N_D$. 
$E_{pre}$ represents the attack graph pre-conditions (ftp(0,1) and user(1) in the Figure~\\ref{fig:4a}(b)) necessary for vulnerability exploitation and $E_{post}$ are the post-conditions (rsh(1,2) in the Figure~\\ref{fig:4a}(b)) obtained as a result of exploit. \n\\end{enumerate}\n\\end{definition}\nThe time taken to construct attack representation methods (ARMs) grows exponentially with an increase in the number of hosts or the number of known vulnerabilities in the network . Authors in survey various research works that try to address this challenge. Amman \\textit{et al.} present a scalable solution that assumes monotonicity; it allowed them to achieve scalability of $O(N^6)$.\nTo mitigate the state explosion problem, most of the existing solutions try to reduce the dependency among vulnerabilities by using some logical representation or by imposing a hierarchical structure to reduce the computing and analysis complexity of constructing and using ARMs . \nIn the latter work, the authors proposed a two-layer AG generation approach where the goal was to develop a faster method by considering network reachability and vulnerability information at different layers. The graphs constructed have vulnerability information of the individual hosts in the lower graph while the topological information of the network is in the upper layer. 
Unfortunately, the effectiveness of these methods on real-world networks is uncertain.\n\\begin{figure*}\n \\centering\n \\begin{tikzpicture}[\n state/.style={rectangle},\n node distance=3.5cm,\n ]\n \\centering\n \\node[state] (recon) [text width=2cm, align=center] {{\\Huge \\faSearch}\\\\{Reconnaissance}};\n \\node[state] (ef) [right of=recon, text width=2cm, align=center] {{\\Huge \\faPaw}\\\\{Establish Foothold}};\n \\node[state] (lm) [right of=ef, text width=2cm, align=center] {{\\Huge \\faArrowsH}\\\\{Lateral Movement}};\n \\node[state] (de) [right of=lm, text width=2cm, align=center] {{\\Huge \\faDatabase \\faMailForward}\\\\{Data Exfiltration}};\n \\node[state] (pe) [right of=de, text width=2cm, align=center] {{\\Huge \\faEye}{\\Large \\faDatabase}\\\\{Post Exfiltration}};\n \\node[state] (as_s) [below of=ef, yshift=2cm, text width=0.3cm, align=center] {\\color{BrickRed} \\faBullseye};\n \\node[state] (as_e) [below of=pe, yshift=2cm, text width=0.3cm, align=center] {\\color{BrickRed} \\faBullseye};\n \\node[state] (es_s) [below of=recon, yshift=1.1cm, text width=0.3cm, align=center] {\\color{NavyBlue} \\faBullseye};\n \\node[state] (es_e) [below of=lm, yshift=1.1cm, text width=0.3cm, align=center] {\\color{NavyBlue} \\faBullseye};\n \\node[state] (dps_s) [below of=recon, yshift=0.2cm, text width=0.3cm, align=center] {\\color{PineGreen} \\faBullseye};\n \\node[state] (dps_e) [below of=pe, yshift=0.2cm, text width=0.3cm, align=center] {\\color{PineGreen} \\faBullseye};\n \\path[BrickRed, -]\n (as_s) edge node[below] {Attack Surface} (as_e);\n \\path[NavyBlue, -]\n (es_s) edge node[below] {Exploration Surface} (es_e);\n \\path[PineGreen, -]\n (dps_s) edge node[below] {Detection and Prevention Surface} (dps_e);\n \\end{tikzpicture}\n \\caption{A mapping between the different phases of Advanced Persistent Threat (APT) and various surfaces of a cyber system that various MTDs seek to move. 
Shifting of the exploration surface and the attack surface are effective only against some phases of an APT, whereas shifting the detection and the prevention surface is effective throughout the APT life-cycle.}\n \\label{fig:mtd-apt}\n\\end{figure*}\n\\iffalse\n\\begin{figure*}\n \\centering\n \\begin{tikzpicture}[node distance=0.65cm, auto] \n \\tikzset{\n mynode/.style={rectangle,rounded corners,draw=black, top color=white, bottom color=yellow!50,very thick, inner sep=0.66em, minimum size=1.5em, text centered},\n myarrow/.style={->, >=latex', shorten >=1pt, thick},\n mylabel/.style={text width=5.5em, text centered} \n } \n \\node[mynode] (recon) {Reconnaissance}; \n \\node[mynode, right=of recon] (foot) {Establish Foothold}; \n \\node[mynode, right=of foot] (lat) {Lateral Movement};\n \\node[mynode, right=of lat] (exfil) {Data Ex-filtration};\n \\node[mynode, right=of exfil] (post) {Post Ex-filtration};\n \\draw[myarrow] (recon) -- (foot);\n \\draw[myarrow] (foot) -- (lat);\n \\draw[myarrow] (lat) -- (exfil);\n \\draw[myarrow] (exfil) -- (post);\n \\draw[<->, >=latex', shorten >=2pt, shorten <=2pt, bend right=20, thick, dashed] (lat.north) to node[auto, swap] {Exploration Surface}(recon.north);\n \\draw[<->, >=latex', shorten >=2pt, shorten <=2pt, bend right=20, thick, dashed] (post.north) to node[auto, swap] {Attack Surface}(foot.north);\n \\draw[<->, >=latex', shorten >=2pt, shorten <=2pt, bend right=-15, thick, dashed] (post.south) to node[auto, swap] {Detection and Prevention Surface}(recon.south);\n \\end{tikzpicture} \n \\medskip\n \\caption{A mapping between different phases of APT and Moving Target Defense from Attacker's Modeling Perspective}\n \\label{fig:mtd-apt}\n\\end{figure*}\n\\fi\nTable~\\ref{tab:4} highlights the complexity of creating various attack graph and attack tree-based methods. As can be seen, scalability is a major concern when coming up with a good attack representation, thereby impacting the effectiveness of MTD techniques. 
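Definition~\ref{definition:AG} can be made concrete by encoding the small scenario of Figure~\ref{fig:4a} and evaluating it with monotonic forward chaining. The exact pre-/post-condition sets below are an illustrative reading of the figure's labels, not taken verbatim from it.

```python
# Conjunctive exploit nodes (N_C) with pre-conditions (E_pre) and
# post-conditions (E_post); edge sets are an illustrative assumption.
EXPLOITS = {
    "sshd_bof(0,2)": {"pre": {"sshd(0,2)", "user(0)"}, "post": {"root(2)"}},
    "rsh(1,2)":      {"pre": {"ftp(0,1)", "user(1)"},  "post": {"user(2)"}},
}


def reachable(initial, exploits):
    """Monotonic forward chaining over the attack graph: fire any exploit
    whose pre-conditions all hold until no new condition node appears."""
    facts = set(initial)
    changed = True
    while changed:
        changed = False
        for e in exploits.values():
            if e["pre"] <= facts and not e["post"] <= facts:
                facts |= e["post"]
                changed = True
    return facts
```

Starting from {user(0), sshd(0,2), ftp(0,1)}, chaining derives root(2) through the buffer overflow, while rsh(1,2) stays blocked for want of a user(1) foothold -- a tiny instance of exactly the reachability analysis whose cost Table~\ref{tab:4} summarizes at scale.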
In , the authors present a scalable solution for approximating the attack graph of a large scale data-centric network by using distributed attack graph computation followed by semantic clustering. We discuss attack representation methods as one of the \\emph{Quantitative metrics} which can be used for measuring effectiveness of MTD in Section~\\ref{effect}.\n\\iffalse", "id": "ac51fdc2-8ef9-462e-89bf-a350fb2484f3", "level": "subsubsection", "origin_cites_number": 19, "parent_id": "755bafaa-850c-45cb-a83e-9c5e472fefc1", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Domain Information for Modeling Cyber-attacks" ], [ "subsubsection", "Common Vulnerability Scoring System (CVSS)" ] ], "subsections": [], "title": "Common Vulnerability Scoring System (CVSS)" }, { "cite_extract_rate": 0, "cites": [], "content": "NMTD requires a change in network configuration, network connection, network parameters, routing randomization, {\\em etc.} Several key issues such as over-dependence on signature-based IDS which can induce false positives, zero-day attacks that rely on step-wise progression towards a specific goal can be eliminated using network-level MTD countermeasures. Such actions, however, also affect the operational capacity and service availability for normal users. Zhuang \\textit{et. al.}~ identified these concerns and proposed an intelligent system, which can take operational and security goals of the system into account and utilize intelligent MTD adaptation in order to secure the network.\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=.5\\textwidth]{figures/figure4.pdf}\n \\caption{An overview of network MTD implementation}\n \\label{fig:4}\n\\end{figure}\nGreen \\textit{et. al.}~ describe various components involved in the implementation of network-based MTD framework. 
As can be seen in the Figure~\\ref{fig:4}, the users trying to access a network service can be partitioned into \\textit{Trusted} and \\textit{Untrusted} clients. The trusted clients are the users who behave according to the network and security policies. A service important from the perspective of normal business operations is often the target of the attackers. The goal of the attacker is to disrupt the service for trusted users or steal the key information from the data-store (back-end of the service). Such service is referred to as \\textit{Target}. A decoy service \\textit{Sink} can be placed in the system in order to confuse the attacker. The access control mechanism in the \\textit{mapping system} is used for distinguishing the legitimate users from the attackers. Only the clients who have been authenticated based on the trust-establishment mechanism are selectively authorized to access the target service (depending on the role - user/admin). Three key properties of the network-based MTD are:\n \\begin{enumerate}\n \\item \\textbf{Moving Property} which requires a timely change of the information or configuration by forcing the clients to access services using mapping system and limiting the exposure to the target system by reducing/shrinking the window of opportunity for the attackers.\n \\item \\textbf{Access Control Property} ensures that network access control policies are properly enforced. Only authenticated and authorized users should be able to access the target service.\n \\item \\textbf{Distinguishability} is used to separate trusted clients from untrusted users based on network activity (failed login attempts, anomalous behavior). Any classification errors in this property can block trusted clients from accessing service and allow untrusted clients to pass through and exploit network service.\n \\end{enumerate}\n The classification is limiting in the sense that access control and authorization falls under the domain of application layer protocols. 
Moreover, the distinguishability property is counter-intuitive, since the users classified as untrustworthy can be blocked using signature-based tools like Snort IPS~. \nThe classification based on security modeling method described as a part of MTD implementation captures the MTD motivation and dynamics from an implementation point of view. The aspects such as SDN/NFV technologies which provide a suitable platform for deployment of MTD countermeasures, role of intelligent game-theoretic algorithms in the field of MTD, and, qualitative aspects of MTD such as cost-benefit analysis, attack representation and risk impact need to be considered separately in order to identify best platform, MTD parameters and intelligent cyber deception algorithm. \n\\fi", "id": "642af0c3-5cfc-43ea-a636-72665a1a2e93", "level": "subsection", "origin_cites_number": 3, "parent_id": "24cae700-9785-4dcb-9965-981916bbd293", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Background and Related Work" ], [ "subsection", "Network Level MTD (NMTD)" ] ], "subsections": [], "title": "Network Level MTD (NMTD)" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:mtd}\nThe goal of \\emph{Moving Target Defense} (MTD) is to continually move the components of an underlying system in a randomized fashion; this ensures the information gathered by the attacker in the \\emph{reconnaissance phase} becomes stale during the \\emph{attack phase} given the defender has moved to a new configuration within that time. 
This increases the uncertainty for the attacker, making it difficult (or rather, more expensive) for them to successfully exploit the system.\nAn MTD can be described using a three tuple $\\langle M, T, C \\rangle$ where $M$ represents the movement strategy that directs the system {\\em how to move}, $T$ denotes the timing function that represents {\\em when} a switch action occurs in the MTD system and $C$ represents the configuration space or the set of configurations between which the system switches, answering the question of {\\em what to switch}. Answers to these three questions can help us define a useful categorization for the various MTDs. We believe that this categorization (1) gives a holistic view of the various MTD systems, providing a heuristic sense of what modeling aspects, when done carefully, make a particular defense effective, and (2) highlights opportunities on how modeling of a particular component of an MTD can either be improved or amalgamated with other MTDs.\nAs we will see, the configuration set $C$ and the timing function $T$ are often a design choice made the system administrator depending on the threat model. However, the movement function $M: H \\rightarrow C$ needs to be a stochastic function that given the history of system configurations $H$ as input, produces the next configuration the system should switch to. 
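The $\langle M, T, C \rangle$ tuple can be sketched as a small driver; the web-server configuration set and the uniform no-repeat sampling rule below are illustrative choices, not prescribed by any particular MTD in the literature.

```python
import random


class MovingTargetDefense:
    """Toy <M, T, C> machine. C is the configuration set, T a fixed
    switching period, and M a stochastic movement strategy; uniform
    no-repeat sampling is an illustrative choice of M."""

    def __init__(self, configs, period_s=300, seed=None):
        self.configs = list(configs)     # C: what to switch
        self.period_s = period_s         # T: when to switch
        self.rng = random.Random(seed)
        self.history = [self.rng.choice(self.configs)]

    def move(self):
        """M: history -> next configuration (never the current one)."""
        options = [c for c in self.configs if c != self.history[-1]]
        nxt = self.rng.choice(options)
        self.history.append(nxt)
        return nxt
```

A scheduler would call `move()` every `period_s` seconds; richer movement strategies condition on the full `history` or on observed attacker behavior rather than sampling uniformly.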
A stochastic function is needed because a deterministic function can easily be learned by an attacker over time and thus, does not help the defender in terms of security.", "id": "61ef46a6-da85-4c8f-96c6-de7df290c870", "level": "section", "origin_cites_number": 0, "parent_id": "18148880-f549-40b4-8c44-171d9433ba0c", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Categorizing Moving Target Defenses" ] ], "subsections": [ "0133c010-6e36-4f15-a544-e7f247441d51", "b9bd189f-c4ee-40eb-89c5-0be08089fc21", "f4d122f1-492f-4b09-80a2-bae12617a051" ], "title": "Categorizing Moving Target Defenses" }, { "cite_extract_rate": 0.034482758620689, "cites": [ 6792 ], "content": "At an abstract level, from the perspective of an attacker, every software system can be categorized into four surfaces:\n\\begin{itemize}\n \\item Exploration Surface\n \\item Attack Surface\n \\item Detection Surface\n \\item Prevention Surface\n\\end{itemize}\nIn this section, we use these surfaces as a basis for categorizing what the different MTDs shift.\nIn the context of multi-stage attacks like APTs, an adversary needs to exploit all the different surfaces, but not necessarily in a predefined order (see Fig. \\ref{fig:mtd-apt}). For example, an adversary may first try to explore the target network and try to figure out its topology, bandwidth, software deployed on the different nodes, {\\em etc.}, but it may also need to perform reconnaissance at an advanced stage, say, to verify if it has actually gained access and established a foothold, i.e. a new vantage point, within the system. The knowledge gained in the exploration phase helps them to execute attacks that exploit the system, to move to different points in the network, and exfiltrate important information.\nAs both exploration and attack traffic are malicious in nature, they tend to differ from legitimate user traffic. 
At that point, the detection and prevention surfaces come into play.\nIn Fig. \\ref{fig:what}, we show a Venn diagram that categorizes existing MTDs based on the surface they shift. Although this categorization is at a level of abstraction, we discuss various works and show how different MTDs define the elements of the set $C$. Most MTDs shift one surface at a time, rarely considering scenarios where other surfaces can be shifted in conjunction. After discussing the various MTDs, we briefly mention some unexplored research areas that can lead to the development of new defenses that shift multiple surfaces.\n\\begin{figure*}\n\\small\n\\centering\n\\begin{tikzpicture}\n\\begin{scope}[opacity=0.5]\n \\fill[Salmon!30!white,draw=black] (90:3) ellipse (4 and 3.5);\n \\fill[SpringGreen!50!white,draw=black] (180:3.5) ellipse (5.55 and 3.9);\n \\fill[Goldenrod!30!white,draw=black] (270:2.7) circle (2.7);\n \\fill[CornflowerBlue!30!white,draw=black] (360:4) circle (3.25 and 3);\n \\end{scope} \n \\node at (90:5.75) {\\textbf{Exploration Surface Shifting(ES)}};\n \\node at (106:5.45) {Al-Shaer et al.~};\n \\node at (109:5) {Albanese et al.~};\n \\node at (116:4.7) {Jafarian et al.~};\n \\node at (74:5.45) {Aydeger et al.~};\n \\node at (70:5) {Zhao et al.~};\n \\node at (66:4.58) {Schlenker et al.~};\n \\node at (60:4.15) {Jajodia et al.~};\n \\node at (60:3.5) {Algin et al.~};\n \\node at (132:3.1) {\\textbf{AS+ES}};\n \\node at (133:2.6) {Lei et al.~};\n \\node at (135:2.1) {Zhuang et al.~};\n \\node at (159:6.3) {\\textbf{Attack Surface Shifting(AS)}};\n \\node at (164:6.3) {Manadhata et al.~};\n \\node at (168:6.3) {Zhu et al.~};\n \\node at (172:6.3) {Sengupta et al.~};\n \\node at (176:6.3) {Carter et al.~};\n \\node at (180:6.3) {Thompson et al.~};\n \\node at (184:6.3) {Chowdhary et al.~};\n \\node at (188:6.3) {El Mir et al.~};\n \\node at (192:6.3) {Debroy et al.~};\n \\node at (196:6.3) {Prakash et al.~};\n \\node at (200:6.3) {Neti et al.~};\n \\node at (204:6.3) 
{Crouse et al.~};\n \\node at (208:6.3) {Bohara et al.~};\n \\node at (270:4) {\\textbf{Prevention Surface Shifting(PS)}};\n \\node at (270:4.45) {Chowdhary et al.~};\n \\node at (270:4.85) {Clark et al.~};\n \\node at (350:5.1) {Colbaugh et al.~};\n \\node at (360:5.15) {Venkatesan et al.~};\n \\node at (355:5.15) {Sengupta et al.~};\n \\node at (345:5.15) {Chowdhary et al.~};\n \\node at (5:5.1) {\\textbf{Detection Surface Shifting(DS)}};\n \\end{tikzpicture}\n\\caption{The different surfaces that can be moved by a particular Moving Target Defense (MTD). Moving the exploration surface makes it harder for the attacker to figure out the exact system configuration by making the sensing actions noisy or erroneous. Moving the attack surface makes a particular attack inapplicable. Moving the detection surface, similar to `patrolling methods', helps in providing efficient detection in budget-constrained cyber environments. Moving the prevention surface makes it harder for an attacker to ex-filtrate data even when they have a strong foot-hold inside the system.}\n\\label{fig:what}\n\\end{figure*}", "id": "0133c010-6e36-4f15-a544-e7f247441d51", "level": "subsection", "origin_cites_number": 29, "parent_id": "61ef46a6-da85-4c8f-96c6-de7df290c870", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Categorizing Moving Target Defenses" ], [ "subsection", "The Configuration Set $C$ -- What to Switch?" ] ], "subsections": [ "b9cb6bcf-6a74-450c-a92b-85828ddd3765", "e6b72c4b-8895-48df-9901-cadff3ba50a2", "9ed638d7-eaf6-46d5-81e9-17f27f0efad2", "61fbae28-2f22-4f34-a9a0-cbb3ff337871", "3bf50346-d27a-499c-b51b-a388504ae4eb" ], "title": "The Configuration Set $C$ -- What to Switch?" 
}, { "cite_extract_rate": 0, "cites": [], "content": "}\\label{exp}\nThe goal of shifting the exploration surface is to ensure that the model of a system that an attacker can gather by exploration actions such as by scanning for open-ports, sending non-malicious traffic to uncover system topology, discover vulnerabilities, {\\em etc.}, are noisy or inaccurate. Thus, an adversary, with this faulty information from the reconnaissance phase, is left with no other choice but to work with faulty view of the attack surface.\nIn , the authors propose the concept of Random Host Mutation (RHM) -- moving target hosts are assigned random virtual IP addresses (vIP) in an unpredictable and decentralized fashion. Formally, $C$ represents the set of bipartite graphs, where each configuration represents a mapping from the set of vIP addresses in the Virtual Address Range (VAR) to the set of hosts in the network. Every switch action changes the one-to-one mapping of hosts in the system to VARs. Another work tries to implement an MTD on the same set of configurations using a centralized approach based on \\emph{(SDN)} technologies like \\emph{OpenFlow}.\nAnother line of work focuses on reducing the quality of information that an attacker can gather through exploration. In , the authors use a centralized approach to obfuscate the network links so that the topology that an attacker can retrieve via crossfire attacks is noisy and unreliable. In this work, each state is a possible path from the source to the destination of the crossfire attack. Thus, the configuration space $C$ represents the set of all paths from a source point to a destination point in the network. The MTD paradigm comes into play because the administrator chooses a path, in a randomized fashion, so that the attacker is not able to get reliable path information between any any two points in the network. 
Given that attacks may only succeed if the packet is sent over a particular path, attackers are forced to use a lot of attack traffic in order to succeed in exploiting the system or even obtain a reasonable estimate of the network topology. Such behavior increases their chances of getting caught or dealing with an inaccurate (and hopefully, not useful) model of the system. On similar lines, there have been works that either try to move the fingerprint of a protected host or randomly alter the time schedule that guides when a host transmits information, reducing the effectiveness of selective jamming attacks against smart meters . In , the authors propose to send back incorrect information to an attacker trying to query information about the hosts on the network. Although they come up with deterministic strategies for responses, the possibility of sending back different lies opens up when the underlying attack surface uses an MTD. These works are termed {\em cyber-deception} because the defender is trying to deceive the adversary by feeding them false information about the system. In such cases, the goal of the defender is to ensure that the adversary's model of the underlying network is incorrect as opposed to just being incomplete or noisy.\nIn , Jajodia et al. look at how to deploy \\emph{honey-nets} in a strategic fashion and thereby, minimize the expense of deploying all possible honey-nets at once. They show that deploying honey-nets introduces deception in the network against \\emph{noisy-rich} (NR) cyber-attackers (i.e. adversaries who try to exploit all the vulnerabilities present in order to compromise the target network).
In this case, if we represent the set of all honey-nets that can be placed in a network as $X$, then the configuration space $C$ consists of all budget-limited subsets of honey-nets that can be deployed by the administrator.", "id": "b9cb6bcf-6a74-450c-a92b-85828ddd3765", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "0133c010-6e36-4f15-a544-e7f247441d51", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Categorizing Moving Target Defenses" ], [ "subsection", "The Configuration Set $C$ -- What to Switch?" ], [ "subsubsection", "Exploration Surface Shifting" ] ], "subsections": [], "title": "Exploration Surface Shifting" }, { "cite_extract_rate": 0, "cites": [], "content": "}\\label{att}\nMost of the work from the research community has focused on movement of the attack surface. The main aim of switching between attack surfaces is to render invalid an attack action that the attacker chooses after some exploration. For example, an attack to exploit a \\emph{Linux-based OS} will be useless if it is executed on a machine running a \\emph{Windows OS}. We first discuss some MTD methods that are, for a networking environment, defined at an abstract level. We will then focus on ones that consider moving more specific elements of the network.\nIn one of the earlier works , the authors have a set of systems, which forms their space of MTD configurations $C$, and each configuration $c \\in C$ can be represented by an attack surface characterized by three properties-- (1) the set of entry and exit points into the system, (2) the set of \\emph{channels}, and (3) the set of un-trusted items (data/system/network link). The defender aims to switch between the different configurations to minimize the similarity of the attack surface in consecutive configurations and at the same time, have a minimum impact on performance.
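A minimal sketch of this similarity-minimizing switch under a performance budget; the Jaccard score is one plausible similarity measure over the three property sets, and all configuration names and costs are made up for illustration:

```python
def jaccard(a, b):
    """Overlap between two attack-surface feature sets (entry/exit points,
    channels, un-trusted items), used here as the similarity score."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def next_config(current, candidates, surfaces, perf_cost, budget):
    """Pick the candidate least similar to the current attack surface,
    restricted to candidates whose performance cost fits the budget."""
    feasible = [c for c in candidates if perf_cost[c] <= budget]
    return min(feasible, key=lambda c: jaccard(surfaces[current], surfaces[c]))

# Made-up configurations: each maps to the features exposed on its attack surface.
surfaces = {
    "c1": {"ssh", "http", "ftp"},
    "c2": {"ssh", "https"},
    "c3": {"rdp", "https"},
}
cost = {"c1": 1, "c2": 3, "c3": 2}
```

In practice the similarity and performance objectives are traded off jointly rather than via a hard budget; the hard budget merely keeps the sketch short.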
As we will soon see, this multi-objective trade-off is a common theme in many of the works on MTD.\nIn , Zhu and Bashar break down a full-stack system into several layers (denoted by $l$). For each layer, they have a set of technologies that can be used to provide the functionality that the layer is responsible for. Now, when all layers (or stacks) come together to form the full-stack system, all possible combinations of technologies cannot be used to provide proper functionality to the end-user. Thus, from all these possible combinations, they handpick a few combinations of technologies that meet the use-case demands for the full-stack software among which they switch, which defines the configuration space $C$.\nOn similar lines, Sengupta et al. also assume a full-stack web-application and thus, has similar action sets where the technologies for each layer are relevant to web-application development. The two papers differ in the way they decide {\\em how to switch} (to be discussed later).\nCarter et al. design an MTD system that switches between different Operating Systems (OSs) (which are called `platforms') . In their case, the configuration set $C$ is the set of all OSs that they can shift between. They mention a notion of similarity (or diversity) between two configurations $\\in C$ based on code syntax and semantics of the functionality provided. On similar lines, authors in move away from MTD systems that consider a set of abstract configurations and implement an MTD that can perform the OS rotation for machines deployed in the systems hosted on a network using a centralized mechanism. We now shift our attention to MTDs that move elements solely relating to networks.\nIn , Chowdhary et al. describe an MTD that leverages port hopping to thwart known attacks on a cloud-based system. 
In their work, the states of the system are composed of variables, each of which indicates if a certain vulnerability in the system has been exploited (or not) and based on it, decides when and how to move. This fits well with our earlier discussion on how various surfaces are inter-related -- the attack surface shifting comes into play when an attacker can successfully evade the detection surface. Along similar lines, authors in move a deployed Virtual Machine (VM) to a different physical server if the impact of known vulnerabilities (measured using heuristic measures) on the physical server exceeds a certain threshold. In , the authors implement MTD at a different layer of the system abstraction where they move services deployed on a particular VM to another VM. A natural extension could be to use both the layers for designing a hybrid MTD that shifts (1) software deployed on a VM and (2) individual VMs deployed on actual physical servers in the cloud network. This is similar to the concept of multi-layer MTD, as discussed in .\nIn , authors talk about a range of configurations (precisely, $72$ of them) and analyze the effect of using various game-theoretic strategies for switching between them. They shed some light on the amount of security gain that can be achieved in various settings. In , the authors point out a fundamental assumption that is often inherent in cybersecurity in general and MTD systems in particular-- the different configurations of the MTD are not vulnerable to the same attack (similar to the notion of diversity described in ). They create a bipartite graph that maps hosts to vulnerabilities and show that MTD is more effective when the diversity of the configuration $\\in C$ is higher. Similar conclusions have also been drawn in regards to an ensemble of Deep Neural Networks (DNN) where higher {\\em differential immunity} to attacks leads to higher gains in robustness .\nIn rare cases, the number of configurations in $C$ may become large. 
To address scalability challenges, a solution could be to first reduce the configuration set $C$ by partitioning them into disjoint subsets or choose a reasonably-sized subset and perform MTD on them separately. In , the authors take the latter route and use \\textit{genetic algorithm} search methods where the fitness function measures the diversity of the subset. In , constraining the movement of software between containers both reduces the size of the set $C$ and helps to minimize performance impact.", "id": "e6b72c4b-8895-48df-9901-cadff3ba50a2", "level": "subsubsection", "origin_cites_number": 14, "parent_id": "0133c010-6e36-4f15-a544-e7f247441d51", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Categorizing Moving Target Defenses" ], [ "subsection", "The Configuration Set $C$ -- What to Switch?" ], [ "subsubsection", "Attack Surface Shifting" ] ], "subsections": [], "title": "Attack Surface Shifting" }, { "cite_extract_rate": 0.133333333333333, "cites": [ 6794, 6795, 6792, 6793 ], "content": "}\\label{dec}\nThe concept of detecting attacks by inspecting traffic on the wire or the behavior of a request on a host machine has been the cornerstone of cybersecurity. Unfortunately, placing all possible Intrusion Detection Systems (IDS) on a system, especially in the case of a large network system, can lead to a degradation in the performance of the system. Thus, to minimize the impact on performance while keeping the attacker guessing about whether their attack will be detected or not, researchers have looked at intelligent ways in which a limited number of detection systems can be placed on the underlying network. 
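The budget-constrained placement space can be sketched as the set of all $k$-sized subsets of monitored nodes; uniform random sampling below is only the simplest baseline, whereas the cited works weight placements strategically:

```python
import itertools
import random

def detection_configs(nodes, k):
    """The configuration set C: every k-sized subset of nodes that can host
    a detection system (k < |N| models the deployment budget)."""
    return [frozenset(s) for s in itertools.combinations(nodes, k)]

def place_detectors(nodes, k, rng=random):
    """One detection-surface shift: sample a placement from C so the attacker
    cannot predict which nodes are currently monitored."""
    return rng.choice(detection_configs(nodes, k))
```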
These methods are similar to patrolling methods that try to identify security breaches in physical security domains like airports, wildlife sanctuaries, {\em etc.} with limited resources .\nIn and , authors show that when faced with stealthy botnets and external adversaries, who are powerful enough to attack any internal node of a deployed system, shifting the detection surface helps to maintain system performance, while being effective in detecting attacks. In both of these cases, they have a set of nodes $N$ that are deployed in the network. The configuration set $C$ is composed of all $k$-sized subsets of $N$ (where $k < |N|$). In , the authors argue that in many real-world multi-layered networks, the assumption that an attacker can attack from any place in the network is too strong and split the configuration set $C$ into disjoint sets based on their position in an attack graph.\nIn contrast to MTD for detection surface shifting, whose goal is to maximize security while adhering to performance constraints of the underlying network, some works investigate detection surface shifting for the sole purpose of enhancing security. In , the authors use an ensemble of classifiers that can classify a mail as spam and switch between them to make it more difficult for the attacker to fool the system. In general, the use of machine learning for anomaly detection coupled with the paradigm of MTD for stochastic ensembles can lead to interesting future research, especially the introduction of a new attack surface (authors in highlight several challenges in the use of supervised machine learning in cloud network settings).
In these cases, each classifier is a configuration in $C$.\n\\begin{figure*}\n\\centering\n\\begin{tikzpicture}\n\\begin{scope}[ opacity=0.6]\n \\fill[CornflowerBlue!30!white,draw=black] (180:3.5) ellipse (5 and 3.9);\n \\fill[Goldenrod!30!white,draw=black] (360:3.5) ellipse (5 and 3.9);\n \\end{scope} \n \\node at (140:4.9) {\\textbf{Constant Period Shifting}};\n \\node at (145:4.5) {Zhu et al.~};\n \\node at (152:4.5) {Manadhata et al.~};\n \\node at (159:4.5) {Albanese et al.~};\n \\node at (166:4.5) {Carter et al.~};\n \\node at (173:4.5) {Sengupta et al.~};\n \\node at (180:4.5) {Colbaugh et al.~};\n \\node at (187:4.5) {Jafarian et al.~};\n \\node at (194:4.5) {Venkatesan et al.~};\n \\node at (201:4.5) {Jajodia et al.~};\n \\node at (208:4.5) {Thompson et al.~};\n \\node at (215:4.5) {Debroy et al.~};\n \\node at (222:4.55) {Aydeger et al.~};\n \\node at (226:4.95) {Algin et al.~};\n \\node at (0:0) {Clark et al.~};\n \\node at (36:4.6) {\\textbf{Variable Period Switching}};\n \\node at (28:4.2) {\\textbf{$\\bullet$ On event switching}};\n \\node at (21:4.2) {Van et al.~};\n \\node at (14:4.2) {Prakash et al.~};\n \\node at (7:4.2) {Zhao et al.~};\n \\node at (0:4.2) {Neti et al.~};\n \\node at (353:4.2) {Chowdhary et al.~};\n \\node at (346:4.2) {Shi et al.~};\n \\node at (336:4.2) {\\textbf{$\\bullet$ Strategic}};\n \\node at (329:4.4) {Lei et al.~};\n \\node at (322:4.7) {El Mir et al.~};\n\\end{tikzpicture}\n\\caption{The time period between two move events is either a fixed interval or decided based on some form of reasoning in the various Moving Target Defenses proposed in the literature.}\n\\label{fig:when}\n\\end{figure*}", "id": "9ed638d7-eaf6-46d5-81e9-17f27f0efad2", "level": "subsubsection", "origin_cites_number": 30, "parent_id": "0133c010-6e36-4f15-a544-e7f247441d51", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Categorizing Moving Target Defenses" ], [ "subsection", "The 
Configuration Set $C$ -- What to Switch?" ], [ "subsubsection", "Detection Surface Shifting" ] ], "subsections": [], "title": "Detection Surface Shifting" }, { "cite_extract_rate": 0, "cites": [], "content": "}\\label{prev}\nThe goal of an MTD in shifting the Prevention Surface is to make the attacker uncertain about the defense mechanism that is in place; this forces the attacker to spend more resources and come up with sophisticated methods to exfiltrate the data. It also adds a layer of reasoning on the adversary's part; for example, it becomes difficult for them to distinguish whether their attack went through to the actual system undetected, or was detected and their behavior is presently being monitored.\nInvestigation of MTD techniques for shifting the prevention surface has been scarce, especially in the context of computer networks. We think this is mostly because an administrator can only use these defenses when they can identify an attack with high accuracy, which is often too strong an assumption. Yet, in some cases, authors make this assumption-- in , an MTD mechanism that modifies the bandwidth of the network in response to malicious activity is proposed. The configuration space $C$ consists of infinitely many actions as the bandwidth can take any real value. Similarly, researchers have also considered shifting the latency period when replying to a query that is, with high probability, malicious . The latter work also considers the deployment of decoy nodes and switches among them to hide from an adversary seeking to actively uncover the prevention surface, i.e. the decoy nodes.", "id": "61fbae28-2f22-4f34-a9a0-cbb3ff337871", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "0133c010-6e36-4f15-a544-e7f247441d51", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Categorizing Moving Target Defenses" ], [ "subsection", "The Configuration Set $C$ -- What to Switch?"
], [ "subsubsection", "Prevention Surface Shifting" ] ], "subsections": [], "title": "Prevention Surface Shifting" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nBy choosing the shift multiple technologies that belong to different surfaces of a system, one can design an MTD that moves more than one surface. Yet, research on hybrid MTDs has been rare. In this regard, we discuss two works that try to shift both the exploration and the attack surface of a system.\nIn , authors show that constructing a simple proxy-based switching approach is ineffective by introducing a new attack known as the proxy harvesting attack that collects a set of IPs over time and then, performs a DDoS attack against all of them. To protect against such attacks, they propose an MTD approach that replaces entire proxies followed by the task of remapping the clients to these new proxies. This renders the exploration effort of the attacker useless and at the same time, reduced the attack efficiency of the attacker. As opposed to looking at a particular network-motivated setting, authors in and formalize the concept of MTD in which they highlight that the set $C = ES \\times AS$, i.e. the elements represents configuration tuples that have technologies from both the attack and the exploration surface.\nAn interesting future direction could be to leverage existing security research on amalgamation of multiple surfaces and extend it to setting where MTD can be used. For example, NICE combines the techniques involving the detection of attacks to countermeasure (or prevention surface) selection in a single framework . This work can be the first step to propose MTD solutions that move both the detection and prevention surface. \nWe believe the implementation of systems that integrate multiple surface shifting mechanisms under a single MTD mechanism has several challenges. 
For example, some of the surfaces might lend themselves easily to management by a centralized controller like SDN, while others are better suited for movement in a decentralized fashion. Characterizing the challenges that deter researchers from considering multi-surface shifting and suggesting methods to overcome them is an interesting research direction.", "id": "3bf50346-d27a-499c-b51b-a388504ae4eb", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "0133c010-6e36-4f15-a544-e7f247441d51", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Categorizing Moving Target Defenses" ], [ "subsection", "The Configuration Set $C$ -- What to Switch?" ], [ "subsubsection", "Multi-Surface Shifting" ] ], "subsections": [], "title": "Multi-Surface Shifting" }, { "cite_extract_rate": 0, "cites": [], "content": "Having defined the possible configurations that a defender can switch between, the next question to ask is {\em at what point in time does a defender choose to execute a switch action that changes their present configuration $c$ to another configuration $c'$}? To answer this question, we divide the works on MTD systems into two broad categories based on how the time-period between multiple switches is determined -- \\emph{Constant Period Switching} (CPS) and \\emph{Variable Period Switching} (VPS). We first describe the primary characteristics of these categories and then categorize the different MTDs (see Fig. \\ref{fig:when}).\nIn CPS, the timing function forces an MTD system to move, in a randomized fashion, after a constant time-period. This time is dependent on the composition of the set $C$ because attacks on different compositions require different investments in regards to time. On the contrary, in VPS, the time-period associated with one switch action can differ from that of another.
The works in this category can be further subdivided into two distinct categories based on whether the timing function $T$ is reflexive or strategic based on opponent modeling. While the two classes have opposite definitions, there is no restriction against using both in certain scenarios. For example, when multiple surfaces are being shifted, it is reasonable to have one surface shifted using CPS while the other is shifted via VPS. We conclude this section with a brief discussion in this regard and later, elaborate on it in Section~\\ref{research}.", "id": "b9bd189f-c4ee-40eb-89c5-0be08089fc21", "level": "subsection", "origin_cites_number": 0, "parent_id": "61ef46a6-da85-4c8f-96c6-de7df290c870", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Categorizing Moving Target Defenses" ], [ "subsection", "The Timing Function $T$-- When to switch?" ] ], "subsections": [ "6b5d6a58-3629-4157-852f-b51ff74ef879", "785e3ba2-79a2-4a2c-9f97-1c706f911478" ], "title": "The Timing Function $T$-- When to switch?" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nAs mentioned, the key idea of these methods is to move the MTD system after a constant amount of time.
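A toy sketch of such a constant-period schedule; the period, configuration names, and the uniform mixed strategy are placeholders (game-theoretic works derive the strategy separately):

```python
import random

def cps_schedule(configs, strategy, rounds, period, rng=random):
    """Constant Period Switching: a switch fires every `period` time units,
    and the next configuration is drawn from a (possibly mixed) strategy."""
    weights = [strategy[c] for c in configs]
    return [(i * period, rng.choices(configs, weights=weights)[0])
            for i in range(rounds)]
```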
In existing works, this is stated as an implicit design choice (for example, in frameworks that model the MTD as a game, the actions of the defender are played after a constant time-period unless they explicitly discretize the time and consider it in the state information), or an explicit one (for example, to ensure security, we move the MTD system after every 100 seconds).\nA lot of the research works in the MTD literature, such as , do not explicitly bring up the topic of a time-period but inherently assume that the system moves after a constant time.\nIn these works, authors (1) model the problem as a single-step game and generalize the solution of this game to multiple stages (not necessarily repeated games), and (2) treat the problem of \\emph{when to switch} as distinct from \\emph{how to switch} and only address the latter, forcing the system admin (or defender) to consider a constant time-period when implementing MTD solutions.\nSome research works explicitly state that the time-period is uniform and the equilibrium strategy is played at the start of each stage . Note that some of these works, such as , experiment with different timing functions for switching, i.e. see the effectiveness of an MTD if the constant time-period was set to different predefined values. This helps them empirically figure out the constant timing function that works best.\nSome other works that employ a constant timing function use different terminology. For example, authors in address DDoS attacks using the notion of a pre-defined fixed frequency of switching, while research in refers to $T$ as a constant mutation interval. Other works that have syntactic generalizability (by using notations such as $t_i$ that allow a user to use different time intervals for configuration $i$) default to a constant timing function.\nAn interesting question is {\em how long should this constant time-period be in practice}?
In , the authors vary the time-period from $60$ to $300$ seconds and evaluate the effectiveness of OS rotation in a network system based on (1) the likelihood of thwarting a successful exploit, (2) the magnitude of impact for an exploit, and (3) the availability of applications. They show that a smaller $t=60s$ was often good enough to thwart Network Mapping (Nmap)~ attacks, but may allow accurate OS fingerprinting even at $t=60s$. The latter does not help the attacker because, during the execution of the attack, the fingerprinted OS might simply have shifted. For $t=300s$, these results look less promising against automated attack systems , but work well when evaluated against human experts. To allow for faster rotations, the authors set up machines with different OSs and after every time interval, simply redirect traffic to a new OS. While this makes faster switching less painful, it incurs added cost to maintain multiple machines. To reduce this, we believe that having at least two systems is necessary to allow for such small switching periods, especially when shifting between OSs. With two systems, one VM sets up an environment while the other handles traffic, and the roles switch in the next time step.\nIn , researchers use the knowledge obtained from historical attack data to obtain a cyber attack inter-arrival (CAIA) rate. Then, they set up an optimization problem to maximize the time-period of switching, constrained upon the fact that the constant timing function has a value less than CAIA. Along similar lines, authors in have looked at \\emph{traceroute}~ data between possible source-destination pairs to decide on a reasonable time-period for obfuscating links or mutating routes of ICMP packets on the network.
This helps to mitigate crossfire attacks.\nAn interesting case arises in because the states of this MTD represent a bipartite mapping between hosts and virtual IPs (vIP) and the authors let each edge in the mapping have a separate, but constant, mutation time. Some hosts belong to a High-Frequency Mutation (HFM) set while others belong to a Low-Frequency Mutation (LFM) set. To ensure that the availability of a host is not impacted, each host in the HFM set maps to two vIPs (the one it was mapped to in the previous round and the one to which it is mapped in the present round) for a small time duration compared to the time-period of switching. In , the authors also have a local timer maintained by each host; its expiry triggers the change in its vIP. They use {\em ad hoc} messaging to communicate this change to other nodes in the network to facilitate routing.", "id": "6b5d6a58-3629-4157-852f-b51ff74ef879", "level": "subsubsection", "origin_cites_number": 17, "parent_id": "b9bd189f-c4ee-40eb-89c5-0be08089fc21", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Categorizing Moving Target Defenses" ], [ "subsection", "The Timing Function $T$-- When to switch?" ], [ "subsubsection", "Constant Period Switching (CPS)" ] ], "subsections": [], "title": "Constant Period Switching (CPS)" }, { "cite_extract_rate": 0.12, "cites": [ 6797, 6792, 6796 ], "content": "}\nThe idea behind this switching strategy, as evident from its name, is to have a timing function that varies the switching time based on the present condition of the system. For example, if the system transitions through a series of configurations $\\langle \\dots, c, c'\\dots \\rangle$ and the time spent in $c$ and $c'$ is denoted by $t$ and $t'$ respectively, then $t \\neq t'$.
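As a toy illustration, consider a state-dependent timing function where the dwell time shrinks as a threat estimate grows; the linear scaling and the bounds are invented for the sketch:

```python
def vps_period(base, threat, t_min=10, t_max=600):
    """A state-dependent timing function for Variable Period Switching: dwell
    time shrinks linearly as the defender's threat estimate (e.g., a belief
    of compromise in [0,1]) grows, clamped to [t_min, t_max]."""
    return max(t_min, min(t_max, base * (1.0 - threat)))
```

A quiet system dwells for the full base period, while a system believed to be under attack rotates almost immediately.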
We believe that doing a VPS along with an MTD mechanism is similar to doing a two-layer MTD where the first layer deals with a meta MTD for shifting the time-surface and then, the second layer MTD is responsible for executing the actual cyber-surface switching. We will now categorize the MTD works under the following sub-classes.\n\\begin{figure*}\n\\centering\n\\begin{tikzpicture}\n\\begin{scope}[opacity=0.6]\n \\fill[Salmon!30!white,draw=black] (175:3.8) ellipse (5 and 3);\n \\fill[SpringGreen!30!white,draw=black] (5:3.8) ellipse (5 and 3);\n \\end{scope} \n \\node at (141:4.4) {\\textbf{Single-Stage Modeling}};\n \\node at (146:4.2) {Al Mir et al.~};\n \\node at (152:4.2) {Jafarian et al.~};\n \\node at (158:4.2) {Chowdhary et al.~};\n \\node at (164:4.2) {Zhu et al.~};\n \\node at (170:4.2) {Manadhata et al.~};\n \\node at (176:4.2) {Thompson et al.~};\n \\node at (182:4.2) {Aydeger et al.~}; \n \\node at (188:4.2) {Carter et al.~};\n \\node at (194:4.4) {Sengupta et al.~};\n \\node at (200:4.5) {Colbaugh et al.~};\n \\node at (205:4.6) {Venkatesan et al.~};\n \\node at (212:4.4) {Algin et al.~};\n \\node at (36:4.6) {\\textbf{Multi-Stage Modeling}};\n \\node at (30:4.3) {Chowdhary et al.~};\n \\node at (23:4.2) {Manadhata et al.~};\n \\node at (16:4.2) {Colbaugh et al.~};\n \\node at (9:4.2) {Zhao et al.~};\n \\node at (2:4.2) {Maleki et al.~};\n \\node at (355:4.2) {Lei et al.~};\n \\node at (348:4.2) {Miehling et al.~};\n \\node at (341:4.2) {Nguyen et al.~};\n \\node at (334:4.2) {Sengupta et al.~};\n \\node at (327:4.3) {Clark et al.~};\n \\end{tikzpicture}\n\\caption{Game-theoretic modeling of Moving Target Defenses (MTDs), that is necessary for obtaining the movement strategy $M$, can be categorized into either single-stage or multi-stage game modeling. 
This choice reflects the type of threat model being considered and the characteristic of the particular system for which the MTD is implemented.}\n\\label{fig:how}\n\\end{figure*}", "id": "785e3ba2-79a2-4a2c-9f97-1c706f911478", "level": "subsubsection", "origin_cites_number": 25, "parent_id": "b9bd189f-c4ee-40eb-89c5-0be08089fc21", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Categorizing Moving Target Defenses" ], [ "subsection", "The Timing Function $T$-- When to switch?" ], [ "subsubsection", "Variable Period Switching (VPS)" ] ], "subsections": [ "11ddfad6-7c20-4391-ba53-262bc81acb4f", "ed5a33f8-65b5-4e20-9c8c-e83f00d77568", "2d98ba32-e0fe-40f9-90e6-919a80195193" ], "title": "Variable Period Switching (VPS)" }, { "cite_extract_rate": 0.2, "cites": [ 6795 ], "content": "Most works that have a general timing function, which isn't constant, fall under this sub-category. The main idea, evident from the nomenclature, is that when a particular event occurs, such as detection of an attack, unavailability of a link or a server, the timing function specifies a time-period to switch. In most cases, the switch action is triggered immediately. \nA well-known case study is that of the \\emph{FlipIt} games . The true state of the system is represented using a boolean variable $1/0$ that indicates whether the defender or the attacker has control of an underlying software system. The value is unknown to both the players, and a defender strategy is based on its belief that the true value is $0$ (which represents the attacker having access to the server). Thus, the timing function is dependent on this belief.\nOn similar lines, in , authors update the belief about the sender type (are they malicious?) upon detection of suspicious packets on the network. This belief, in turn, influences the timing function $T$. Authors in have a belief variable indicating whether a vulnerability was exploited or not. 
The timing function orders a switch action if the belief value exceeds a certain threshold.\nOther works argue that obtaining accurate belief parameters (such as \\textit{with what probability is the sender an attacker vs. a normal user?} or \\textit{with what probability is the system compromised?}) is difficult in cybersecurity settings because such parameters are based on the analysis of historical datasets that are both scarce and might represent a different distribution. These works seek to use the intuitive knowledge of security experts in designing the timing function. In , if the admin detects a spike in bandwidth usage of a particular link or sub-net, they change the maximum bandwidth value allocated to that link or the network. Authors in scan open connections routinely and, upon detection of unexpected connections, move between MTD configurations to protect the system against port-scanning attacks.", "id": "11ddfad6-7c20-4391-ba53-262bc81acb4f", "level": "paragraph", "origin_cites_number": 5, "parent_id": "785e3ba2-79a2-4a2c-9f97-1c706f911478", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Categorizing Moving Target Defenses" ], [ "subsection", "The Timing Function $T$-- When to switch?" ], [ "subsubsection", "Variable Period Switching (VPS)" ], [ "paragraph", "On-event Switching" ] ], "subsections": [], "title": "On-event Switching" }, { "cite_extract_rate": 0, "cites": [], "content": "To understand these methods, we design a timing function that produces integer values in the range $[0, t_{\\max}]$. These represent discrete time intervals, and a value $t$ represents the time at which a move occurs.\\\\\n\\indent In , the authors model the MTD setting game-theoretically. While we discuss this in much more detail in the upcoming section on the design of the movement function $M$, we briefly highlight it here.
The basic idea involves representing a game-state that considers the currently deployed configuration $c$ and an integral-valued time variable $t$. In each state, represented by the tuple $(c,t)$, it recommends one of two actions: {\\em switch} or {\\em stay}.\nThe stay action takes the system to the state $(c, t+1)$, while the switch action moves it to the state $(c',0)$. Note that the stochastic switching or movement policy described later may, with some non-zero probability, predict $c' = c$.\nIn general, the idea of incorporating the time variable as a part of the state can result in an explosion in the number of states and thus make computing a policy for all states time intensive.\nAuthors in show that they can model the problem similarly but resort to a simple strategy for defining the timing function. While it may be sub-optimal, scalability is no longer a concern. The timing function considers the impact of (known) vulnerabilities in the currently deployed configuration and, based on it, recommends a switching time $t \\in \\{0,1,\\dots,\\infty\\}$. If there are no vulnerabilities whose impact is greater than a predefined threshold, then the system remains static until a new vulnerability is discovered.", "id": "ed5a33f8-65b5-4e20-9c8c-e83f00d77568", "level": "paragraph", "origin_cites_number": 2, "parent_id": "785e3ba2-79a2-4a2c-9f97-1c706f911478", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Categorizing Moving Target Defenses" ], [ "subsection", "The Timing Function $T$-- When to switch?"
], [ "subsubsection", "Variable Period Switching (VPS)" ], [ "paragraph", "Strategic" ] ], "subsections": [], "title": "Strategic" }, { "cite_extract_rate": 0, "cites": [], "content": "Although the idea of having both a CPS and a VPS in an MTD mechanism seems counter-intuitive at first, authors in look at MTD mechanisms where one layer shifts the time window of replying to a strategic adversary using an event-based timing function, while the other layer, which moves the decoy nodes to hide from an active adversary, uses a constant timing function. This acts as a stepping stone for an investigation into the effectiveness of CPS {\\em vs.} VPS for a particular surface and, finally, into when a combination of both should be considered.", "id": "2d98ba32-e0fe-40f9-90e6-919a80195193", "level": "paragraph", "origin_cites_number": 1, "parent_id": "785e3ba2-79a2-4a2c-9f97-1c706f911478", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Categorizing Moving Target Defenses" ], [ "subsection", "The Timing Function $T$-- When to switch?" ], [ "subsubsection", "Variable Period Switching (VPS)" ], [ "paragraph", "Hybrid Switching" ] ], "subsections": [], "title": "Hybrid Switching" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, we look at how different MTDs come up with a policy for movement. While some surveys (e.g., ) have been solely devoted to analyzing this question, we propose a unifying framework that maps any MTD system to a Markov Game. This gives us a better sense of the implicit assumptions made by the various MTDs when coming up with $M$. 
Before getting into the categorization shown in Figure~\\ref{fig:how}, we first briefly introduce Markov games .\nAn MTD system can be mapped to a Markov game defined by the tuple $\\langle S, A^\\mathcal{D}, A^\\mathcal{A}, T, R^\\mathcal{D}, R^\\mathcal{A} \\rangle$ where $S$ denotes the state of the MTD system (often represented by the deployed configuration $c$), and $A^\\mathcal{D}$ denotes the set of pure strategies, which maps to the configuration set $C$; the mixed strategy over this set maps to the movement strategy $M$. The other variables are specific to the domain for which the MTD system is designed and help in coming up with a good $M$; $A^\\mathcal{A}$ represents the set of attack actions, $T$ represents the transition from one configuration to another depending on the joint action taken by the defender and the attacker, and $R^i$ represents the rewards for the player $i$ (i.e. the defender $\\mathcal{D}$ or the attacker $\\mathcal{A}$). We will use the superscripts $\\mathcal{D}$ and $\\mathcal{A}$ to represent the functions for the defender and the attacker, respectively.", "id": "f4d122f1-492f-4b09-80a2-bae12617a051", "level": "subsection", "origin_cites_number": 2, "parent_id": "61ef46a6-da85-4c8f-96c6-de7df290c870", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Categorizing Moving Target Defenses" ], [ "subsection", "The Movement Function $M$ -- How to switch?" ] ], "subsections": [ "fc434baa-98b0-47d1-a132-fb540290364c", "eaf9c411-4cfc-448d-930b-428334abe2f3" ], "title": "The Movement Function $M$ -- How to switch?" }, { "cite_extract_rate": 0.066666666666666, "cites": [ 6797 ], "content": "}\nIn this section, we investigate works that have modeled the interaction between an attacker and a defender as a single-stage game. In these settings, the goal is to come up with a single mixed-strategy that can be played in all configurations (or states). 
Thus, the movement function $M: H\\rightarrow C$ is independent of any history, i.e., $H = \\emptyset$. As we shall see, the different works consider various notions of equilibrium to come up with this mixed-strategy.\nIn , the configuration of the system is modified by a pair of actions, one by the defender and one by the attacker. The defender's action, in this case, maps a value of system features represented by the set $F$ to one of the possible actions in the set $\\{enabled,~disabled,~modified,~unchanged\\}$. The attacker's action set $A^\\mathcal{A}$ comprises conceptually opposite actions that can {\\em dis-enable} a port, {\\em disable} a functionality, {\\em etc.}, and the rewards $R^\\mathcal{A}$ and $R^\\mathcal{D}$ for the attacker and the defender are given by weighing the change in the system's features $\\Delta F$, i.e. the change in the attack surface $\\Delta AS$ and the attack surface measurement $\\Delta ASM$, as follows.\n\\[\nR^\\mathcal{D}(s,a^\\mathcal{D},a^\\mathcal{A}) = B_1(\\Delta F) + B_2(\\Delta AS) - D_1(\\Delta ASM)\n\\]\n\\[\nR^\\mathcal{A}(s,a^\\mathcal{D},a^\\mathcal{A}) = B_3(\\Delta ASM) - D_2(\\Delta AS)\n\\]\nwhere the $B_i$ and $D_j$ are weights defined by security experts of the system. Note that the \\emph{cost of a defense} action ($D_1$) and the cost of an attack action ($D_2$) are incorporated in the reward. Thus, solving for an equilibrium of this game results in strategies that account for the cost-reward trade-off. The authors use the notion of a Subgame Perfect Equilibrium (SPE).\\footnote{In sequential games, a strategy profile is an SPE if it is a Nash equilibrium in every subgame of the original game .}\nIn and , the defender plays a predefined strategy that is either uniform random (choose $k$ vIPs at random) or based on intuitive heuristics (select the top-k scanned vIPs). The latter is based on the assumption that the most scanned vIPs will not be scanned as frequently in the new state. 
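Both selection heuristics admit a compact sketch. All names below are ours (not from the cited systems), and the top-k variant assumes that per-vIP scan counters are available to the defender:

```python
import random

def select_vips(vip_pool, k, scan_counts=None):
    """Pick the k virtual IPs to rotate in the next mutation interval.

    Illustrative sketch: with no scan data, select uniformly at random;
    otherwise use the top-k-scanned heuristic, which rotates the
    most-probed vIPs on the assumption that they will not be scanned
    as frequently after the shuffle.
    """
    if scan_counts is None:                  # uniform random selection
        return random.sample(vip_pool, k)
    return sorted(vip_pool,                  # top-k scanned selection
                  key=lambda v: scan_counts.get(v, 0),
                  reverse=True)[:k]
```

Either variant only decides *which* vIPs move; the assignment of the chosen vIPs to hosts is a separate step.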
The chosen vIPs are then assigned to the $k$ hosts using a purely random matching function. \nIn , the authors explain the game modeling based on a real-world example and compare the effectiveness of a reactive switching strategy to the Uniform Random Strategy (URS). URS, which is found to be more effective, picks all pure strategies $a \\in A^\\mathcal{D}$ with equal probability $1/|A^\\mathcal{D}|$. An array of works simply use URS as the defender's policy . This makes the implicit assumption that the game is constant-sum and the normal-form game matrix has symmetric rewards. When no information is available about the attacks and the defender has no preference over the MTD configurations, this may be a reasonable assumption. In practice, however, defenders often have an idea about an attacker's threat model, and thus such assumptions (resulting in URS) produce highly sub-optimal movement strategies.\nWorks such as closely mirror URS strategies but make a minute modification-- given the presently deployed OS configuration $c$, they consider a URS over the remaining OS configurations, i.e. $C \\setminus \\{c\\}$, as the movement function $M$. In , given all networking paths from source to destination, one is picked at random, while in , the authors use Fisher-Yates shuffling to select a transmission schedule at the start of every round.\nIn and , the authors design a single-stage normal-form game. Thus, if the defender chooses to deploy a particular configuration $c \\in A^\\mathcal{D}$ and the attacker chooses a particular exploit $e \\in A^\\mathcal{A}$, then the rewards for the players can be read from the normal-form game matrix. At equilibrium, a mixed strategy over the defender's actions turns out to be the optimal movement policy. Thus, $M$ is a stochastic function that, based on the mixed strategy, chooses the next configuration at the end of each switching period. 
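For intuition, consider the smallest such game: two configurations versus two exploits with zero-sum payoffs. The defender's equilibrium mix then has a closed form derived from the attacker's indifference condition. The payoff numbers below are invented for illustration, and the formula assumes the game has no saddle point:

```python
import random

def defender_mix(a, b, c, d):
    """Equilibrium probability of deploying each configuration for the
    zero-sum game with defender payoffs [[a, b], [c, d]] (rows:
    configurations, columns: exploits), assuming no saddle point.
    Derived from indifference: p*a + (1-p)*c = p*b + (1-p)*d."""
    p = (d - c) / ((a - b) + (d - c))
    return [p, 1.0 - p]

def next_config(configs, mix):
    # M as a stochastic function: sample the next deployment from the mix
    return random.choices(configs, weights=mix, k=1)[0]

# Matching-pennies-like payoffs: each exploit only succeeds against one
# configuration, so the equilibrium movement strategy degenerates to URS.
mix = defender_mix(1, -1, -1, 1)   # -> [0.5, 0.5]
```

With asymmetric payoffs (e.g., one exploit far more damaging than the other), the same formula skews the mix away from uniform, which is precisely the information URS throws away.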
In , the authors consider an MTD for the Internet of Things (IoT) and show that a Zero-Determinant strategy results in a dominant strategy for the single-stage game.\nIn , the authors assume (1) a zero-sum reward structure and (2) a threat model in which the attacker is irrational. The opponent model for the attacker is determined from historical traces of the attacker's responses to defense strategies, which eventually update the utilities of the game to reflect the rationality of the attacker. After each update, the resulting Nash equilibrium (NE) is played. In , the authors assume that if the present system is working with the operating system $c$ and the system is made to switch to the operating system $c'$, then the lower the similarity between $c$ and $c'$, the more secure the move action. To model this, the defender's rewards are defined w.r.t. switch actions and, over time, fine-tuned by simulating the behavior of an adversary. Similar to , the NE strategy is chosen to be the defender's policy.\nThe other works assume that an attacker will eventually, with reconnaissance on its side, figure out the mixed strategy of the defender (using maximum likelihood estimates). Thus, authors of concentrate on finding the Strong Stackelberg equilibrium (SSE) strategy for the defender. The rewards for their game follow a general-sum structure and are obtained using the CVSS metrics of known attacks that can successfully exploit one of the defender's actions, i.e. the MTD configurations that can be deployed. On similar lines, authors in use the notion of SE to obtain the defender's strategy to thwart DDoS attacks.", "id": "fc434baa-98b0-47d1-a132-fb540290364c", "level": "subsubsection", "origin_cites_number": 15, "parent_id": "f4d122f1-492f-4b09-80a2-bae12617a051", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Categorizing Moving Target Defenses" ], [ "subsection", "The Movement Function $M$ -- How to switch?"
], [ "subsubsection", "Single-Stage Modeling" ] ], "subsections": [], "title": "Single-Stage Modeling" }, { "cite_extract_rate": 0.2, "cites": [ 6792, 6798, 6799 ], "content": "}\nWorks in this section reason about (1) the history, i.e. the paths that the system has taken (possibly as a result of the actions taken by the players) to reach the present state, (2) the future, i.e. how the decision in the present state affects future rewards, or both.\nIn , similar to the work by in single-stage modeling, the authors discretize the continuous action space of the defender, which represents bandwidth values, to two levels-- high and low. Then they find a meaningful defender strategy over them by ensuring that, in a repeated game setting, the advantage that an attacker gained by packet flooding attacks (at some stage in the past) is neutralized by reducing their bandwidth to the low state for $x$ game stages, where $1 \\leq x < \\infty$.\nOn similar lines, authors in consider a repeated game setting and analyze the defender's policy against self-learning attackers. They infer that, in their case, the optimal policy converges to the URS. Other works such as also update their belief about an attacker based on observations drawn from multiple plays of the game. This belief is then used to come up with an optimal strategy for the defender.\nA set of works in MTD leverage the attack graph of an underlying system to better reason about the sequential process of an attack. In , the authors model the problem of network intrusion via a Bayesian Attack Graph. The action set for the attacker $A^\\mathcal{A}$ includes the (probabilistic) paths an attacker can take in this graph. Then, the authors map these paths with the defender's imperfect sensing capabilities to form a Partially Observable Markov Decision Process (POMDP)~. 
Thus, the state and transition of this POMDP are designed to model the attacker's behavior, and the optimal policy becomes the defender's strategy in a particular belief state. A shortcoming of this work is that this strategy may be sub-optimal if the \\emph{attacker} deviates (intentionally or not) from the assumptions that inform the POMDP modeling. On the other hand, modeling the scenario as a Partially Observable Stochastic Game (POSG)~ results in scalability concerns for any real-world system.\nAuthors in , and relax the assumption about partial observability and formalize MTD in a Markov Game framework. In , authors consider policies over the defender's action set that comprises either single or multiple IP hops. Each action results in the defender uniformly selecting the next state, given that an attacker samples actions randomly from $A^\\mathcal{A}$. They show that multi-element IP hopping actions increase the defender's reward by a greater magnitude than static defense strategies do. As we will discuss later, the authors do not model the cost of performing a hop action or the downtime associated with switches in the defender's reward function. Thus, the optimal defender policy might be sub-optimal for performance in real-world multi-layered network attack scenarios.\nIn contrast, the works incorporate this trade-off in the rewards ($R^\\mathcal{D}$ and $R^\\mathcal{A}$) of the Markov Game and can thus generate policies for the defender that trade off security, corresponding to an attacker's actions, against performance at each step of the game. In , the inclusion of time variables and attack strategies in the state allows it to formulate the problem as an MDP, but the approach becomes vulnerable when an attacker is aware of this modeling. It also makes it less scalable for large networks. 
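To make such formulations concrete, the sketch below runs defender-side value iteration on a toy two-configuration instance in which the attacker is pinned to a uniform random policy over its exploits -- a strong simplifying assumption; the works above instead solve for equilibrium strategies at every state:

```python
def value_iteration(states, d_actions, a_actions, T, R, gamma=0.9, iters=200):
    """Defender value iteration against a uniform random attacker.
    T[s][ad][aa] -> next state; R[s][ad][aa] -> defender reward, with
    any switching cost already folded in. Illustrative sketch only."""
    V = {s: 0.0 for s in states}
    def q(s, ad, V):
        return sum(R[s][ad][aa] + gamma * V[T[s][ad][aa]]
                   for aa in a_actions) / len(a_actions)
    for _ in range(iters):
        V = {s: max(q(s, ad, V) for ad in d_actions) for s in states}
    policy = {s: max(d_actions, key=lambda ad: q(s, ad, V)) for s in states}
    return V, policy

# Toy instance: two configurations; the single known exploit only works
# against c0, and redeploying a different configuration costs 0.2.
S, AD, AA = ["c0", "c1"], ["c0", "c1"], ["hit_c0"]
T = {s: {ad: {aa: ad for aa in AA} for ad in AD} for s in S}
R = {s: {ad: {aa: (-1.0 if ad == "c0" else 0.0) - (0.2 if ad != s else 0.0)
              for aa in AA} for ad in AD} for s in S}
V, policy = value_iteration(S, AD, AA, T, R)  # policy: move to and stay on c1
```

The point of the toy is only to show where the security/performance trade-off enters: it lives entirely in how `R` weighs exploit damage against switching cost, which is exactly the quantity the cited works estimate heuristically.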
In all these works, the impact on performance $C^\\mathcal{D}$ is a part of $R^\\mathcal{D}$ and is often just a \\emph{heuristic idea}, as opposed to a value informed by simulation in a real system. The more accurate these measures become, the better the strategy.\nAuthors in model the performance and security concerns in a different way than the above methods. The performance cost represents the switching cost, while the rewards of the state represent the security costs of deploying a particular configuration. Then, they formulate an optimization problem that produces a defender strategy that reduces the cost of switching over two steps (one switch) and maximizes the security over a single step. Although the authors in do not discuss how sub-optimal their switching strategies are, recent work by extends their approach to a Markov Game setting with the Stackelberg Equilibrium concept.\nLastly, authors in look at an MTD defense mechanism for deception (obfuscation of the links to send the attacker to decoy nodes) and model attackers who are actively trying to uncover this deception over a repeated-stage interaction. For one problem, they use the Nash equilibrium (NE), while for the other (identifying decoy nodes) they consider the Stackelberg equilibrium (SE) as the defender's strategy. We believe it is necessary to highlight the shortcomings of the different modeling choices, especially in light of multi-stage attacks and APT scenarios. However, optimizing the model while ensuring scalability is a difficult task; we therefore highlight these as possible research opportunities in Section~\\ref{research}.", "id": "eaf9c411-4cfc-448d-930b-428334abe2f3", "level": "subsubsection", "origin_cites_number": 15, "parent_id": "f4d122f1-492f-4b09-80a2-bae12617a051", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Categorizing Moving Target Defenses" ], [ "subsection", "The Movement Function $M$ -- How to switch?"
], [ "subsubsection", "Multi-Stage Modeling" ] ], "subsections": [], "title": "Multi-Stage Modeling" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:mtd_impl}\nIn this section, we discuss how the various Moving Target Defenses (MTDs) have been implemented using research test-beds and industrial products. First, we briefly discuss the tools for MTD implementation. In this regard, we highlight that traditional networking technologies like middleboxes have a set of disadvantages. To overcome this, users can leverage advancements in networking technologies such as Software Defined Networking (SDN) and Network Function Virtualization (NFV). We briefly emphasize their role in making MTD a pragmatic solution for real-world systems. Second, we highlight how the existing MTDs discussed in the survey, as shown in Table~\\ref{tab:adel}, have been evaluated. We select a subset of works and conduct a detailed case study to highlight how movement strategies can be implemented in practice. Third, Section~\\ref{testbed} provides details of the test-beds used for academic research and industry products.", "id": "900fc6c6-3b7c-4ff3-bacb-6fa00ef41c6d", "level": "section", "origin_cites_number": 0, "parent_id": "18148880-f549-40b4-8c44-171d9433ba0c", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Implementation of Moving Target Defenses" ] ], "subsections": [ "b49915b5-1803-41f7-abfe-142b3dbc9c7a", "7c5fc58d-a81a-4082-813b-f8da405dda6a", "e8cc24b8-ce87-4411-895c-24520c0db83d", "f07597ac-4c26-4715-98fd-716bb0d1e2c9", "bdfb33ec-f5c7-465d-961a-c428763edd3e" ], "title": "Implementation of Moving Target Defenses" }, { "cite_extract_rate": 0, "cites": [], "content": "Middleboxes are the devices used by network operators to perform network functions along a packet's data-path from source to destination, e.g., Web Proxy, Firewall, IDS. 
This provides a decentralized framework that can be used alongside existing network technology to implement strategies for MTDs. Furthermore, researchers have focused significant effort on several issues associated with middleboxes, such as ease of use, ease of management and deployment, the design of general-purpose middleboxes for different network functions, {\\em etc.} This makes them seem like a good choice for enabling the practical implementation of MTDs.\n\\begin{table}[t!]\n\\centering\n\\begin{tabular}{|l|c|c|c|}\n\\hline\n\\textbf{Middlebox} & \\textbf{Misconfiguration} & \\textbf{Overload} & \\textbf{Physical/Electric} \\\\\n\\hline\nFirewall & 67.3\\% & 16.3\\% & 16.3\\% \\\\\n\\hline\nProxies & 63.2\\% & 15.7\\% & 21.1\\% \\\\\n\\hline\nIDS & 54.5\\% & 11.4\\% & 34\\% \\\\\n\\hline\n\\end{tabular}\n\\caption{Common causes of middlebox failures. Misconfiguration is a dominant cause of failure because of differing underlay and overlay networks.}\n \\label{tab:1}\n \\vspace{-1.2em}\n\\end{table}\nUnfortunately, a survey of various middlebox deployments conducted by Sherry \\textit{et al.}~ reveals some drawbacks, such as increased operating costs caused by misconfiguration and overloads that affect their normal functioning. As shown in Table~\\ref{tab:1}, based on the survey of 57 enterprise network administrators from the \\emph{NANOG} network operators group, misconfigured and overloaded middleboxes are the major reasons for middlebox failure. Furthermore, about 9\\% of administrators reported spending between six and ten hours per week dealing with middlebox failures. The survey results also indicate that the industry adoption of new and more secure middlebox technologies is slow. In the median case, enterprises update their middleboxes every four years. 
Incorporating MTD defenses into traditional networks can increase the chances of network misconfiguration and outages.\nMoreover, this static nature of the middleboxes themselves, in contrast to the dynamic nature of the system they enable, provides an asymmetric advantage to the attackers, as noted by Zhuang \\textit{et al.}~. The attackers can perform the necessary network reconnaissance and identify the services, the configuration of the applications, and the operating systems by leveraging the middleboxes. This information helps the attacker in choosing best-fit attack methods against the static configuration and selecting the best time to attack the system. The attackers can use the compromised resource to maintain a foothold in the network for a long period of time and try to exploit other high-value targets in the same network.\n\\input{sections/04b_impl-table.tex}", "id": "b49915b5-1803-41f7-abfe-142b3dbc9c7a", "level": "subsection", "origin_cites_number": 2, "parent_id": "900fc6c6-3b7c-4ff3-bacb-6fa00ef41c6d", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Implementation of Moving Target Defenses" ], [ "subsection", "Middleboxes for enabling Moving Target Defenses" ] ], "subsections": [], "title": "Middleboxes for enabling Moving Target Defenses" }, { "cite_extract_rate": 0, "cites": [], "content": "\\emph{Software Defined Networking} (SDN)~ is defined as a networking paradigm that decouples the control plane from the data plane, allowing a global view of the network and centralized control capabilities. SDN deals with packet headers from layers 2-4 of the OSI model and other protocols such as MPLS~. SDN and NFV have emerged as state-of-the-art network architectures for data centers and backbone networks. 
Google's \\textit{B4 project}~ shows the feasibility of SDN for handling real-world network challenges such as traffic engineering and Quality of Service (QoS).\nThe decoupling allows a network architecture where switches act as forwarding devices, maintaining flow tables populated with flow rules. The new architecture allows considerably more flexible and effective network management solutions, and a unified programmable interface that can be utilized by software developers for application deployment~. \n The SDN architecture can be vertically split into the three layers described below: \n \\begin{itemize}\n \\item \\textbf{Application Plane:} It consists of end-user business applications that consume SDN communication and network services. Examples include network security or virtualization applications. \n \\item \\textbf{Control Plane:} The control plane consists of SDN controllers such as ONOS~ and OpenDaylight~, providing open Application Programming Interfaces (APIs) to monitor network forwarding behavior. The communication interface between the application and control plane is known as the northbound interface. The interface between the control and data plane is known as the southbound interface. \n \\item \\textbf{Data Plane:} Forwarding elements such as OpenFlow switches are present in the data plane. The forwarding devices can be physical or virtual switches. The control plane installs flow rules in order to govern the forwarding behavior of data plane devices. \n \\end{itemize}\n Network Functions Virtualization (NFV)~ has emerged as a technology that provides a virtualized implementation of hardware-based equipment such as firewalls, routers, and Intrusion Detection Systems (IDS). Virtual Network Functions (VNFs) can be realized through virtual machines (VMs) or containers running on top of the physical servers of a cloud computing infrastructure. SDN acts as an enabling technology for NFV and helps in centralized management, making it easier to implement MTD. 
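As an illustration of how a control-plane MTD application could drive the data plane, the sketch below emits one round of IP-mutation rewrite rules as plain match/action dictionaries. The field names loosely follow OpenFlow conventions, but this is not the API of any particular controller:

```python
import random

def mutation_round(real_ips, vip_pool, seed=None):
    """Map each real host IP to a fresh virtual IP and emit the
    bidirectional rewrite rules a switch would need. Sketch only; a
    real deployment would install these as flow-mod messages via a
    controller such as ONOS or OpenDaylight."""
    rng = random.Random(seed)
    vips = rng.sample(vip_pool, len(real_ips))  # distinct vIPs, no reuse
    rules = []
    for rip, vip in zip(real_ips, vips):
        # inbound: traffic addressed to the vIP is rewritten to the real IP
        rules.append({"match": {"ipv4_dst": vip},
                      "actions": [{"set_field": {"ipv4_dst": rip}}, "output"]})
        # outbound: replies from the real IP are masked behind the vIP
        rules.append({"match": {"ipv4_src": rip},
                      "actions": [{"set_field": {"ipv4_src": vip}}, "output"]})
    return rules
```

A controller application would translate each dictionary into a flow-mod over the southbound interface, and re-invoke the function at every mutation interval chosen by the timing function.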
Despite the great benefits offered by SDN and NFV, their use to implement MTDs has been limited to a few research works (as can be seen in Table \\ref{tab:adel}).", "id": "7c5fc58d-a81a-4082-813b-f8da405dda6a", "level": "subsection", "origin_cites_number": 7, "parent_id": "900fc6c6-3b7c-4ff3-bacb-6fa00ef41c6d", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Implementation of Moving Target Defenses" ], [ "subsection", "SDN and NFV for enabling Moving Target Defenses" ] ], "subsections": [], "title": "SDN and NFV for enabling Moving Target Defenses" }, { "cite_extract_rate": 0, "cites": [], "content": "We categorize the MTD industrial and research implementations from the perspective of the networking technologies used, e.g., traditional networking or SDN/NFV, as shown in Table~\\ref{tab:adel} and highlighted in column 2. It is noteworthy that SDN/NFV has been a dominant technology for MTD research. Around 34\\% of the research works in Table~\\ref{tab:adel} utilize SDN/NFV for the implementation of MTD or cyber-deception. Column 3 describes the layer in the network protocol stack where the described MTD solution has been implemented. As one may notice, most research works are implemented at the network or the application layer. In the fourth column of the table, we identify the key technologies used by these research works; if they utilize a particular implementation test-bed (for example, GENI~), then the name of the test-bed is mentioned in this column. Column 5 of the table shows the level of \\emph{maturity} of the MTD research work. We categorize the research works into four levels: \\emph{analytic}, if only numerical results or thought-based experiments have been presented in the paper. We put a research work under the category \\emph{simulation} if it describes some simulated network, e.g., \\emph{Mininet}. 
The \\emph{emulation} category consists of research works that use close-to-real-world networks, e.g. multiple VMs deployed in a cloud testbed such as GENI~. Lastly, if the research work has been deployed in some commercial product dealing with live attacks in a production-grade network, we consider it under the category \\emph{commercial}. \nWe observed that more than 50\\% of the research works use simulated networks or applications when experimenting with MTD techniques, whereas $\\sim$ 34\\% of research works have used emulated environments for performing MTD-based analysis research. Only a few MTD solutions ($\\sim$ 13\\%) have been implemented at the commercial level (in production-grade networks dealing with live attacks). This shows that though MTD has been well accepted in the research community, and its benefits can help in dealing with different threat models, the industry adoption of MTD is rather slow. The key reasons behind this are the adverse impact that MTD can induce on Quality of Service (QoS) and the additional cost overhead associated with the deployment of MTD. 
We discuss these factors under \\emph{Qualitative} and \\emph{Quantitative} metrics in Section~\\ref{effect}.\n\\begin{figure}[!t]\n \\centering\n \\begin{tikzpicture}[\n state/.style={rectangle},\n node distance=2.3cm,\n ]\n \\centering\n \\node[state] (router) [text width=1cm, align=center] {{\\Huge \\faWifi}\\\\{\\footnotesize Router}};\n \\node[state] (ofTop) [above left of=router, xshift=-0.7cm, yshift=0cm, text width=1.2cm, align=center] {{\\faChevronRight\\faChevronLeft}\\\\{\\footnotesize Open-flow Switch}};\n \\node[state] (ofBottom) [below left of=router, xshift=-0.7cm, yshift=-0.5cm, text width=1.2cm, align=center] {{\\faChevronRight\\faChevronLeft}\\\\{\\footnotesize Open-flow Switch}};\n \\node[state] (dbTop) [above left of=ofTop, xshift=-1cm, yshift=-0.85cm, text width=1.45cm, align=center] {{\\Huge \\faDatabase}\\\\{\\footnotesize Database}};\n \\node[state] (wsTop) [below left of=ofTop, xshift=-1cm, yshift=0.85cm, text width=1.45cm, align=center] {{\\Huge \\faServer}\\\\{\\footnotesize Web Server}};\n \\node[state] (dbBottom) [above left of=ofBottom, xshift=-1cm, yshift=-0.2cm, text width=1.45cm, align=center] {{\\Huge \\faDatabase}\\\\{\\footnotesize Database}};\n \\node[state] (authSer) [left of=ofBottom, xshift=-0.32cm, text width=1.45cm, align=center] {{\\Huge \\faSignIn}\\\\{\\footnotesize Auth Server}};\n \\node[state] (wsBottom) [below left of=ofBottom, xshift=-1cm, yshift=0.2cm, text width=1.45cm, align=center] {{\\Huge \\faServer}\\\\{\\footnotesize Web Server}};\n \\node[state] (internet) [right of=router, xshift=0.2cm, text width=1.2cm, align=center] {{\\Huge \\faGlobe}\\\\{\\footnotesize Internet}};\n \\node[state] (nox) [below of=router, text width=1.2cm, xshift=0.7cm, yshift=0.15cm, align=center] {{\\Huge \\faServer}\\\\{\\footnotesize NOX Controller}};\n \\path[black, -]\n (dbTop) edge (ofTop)\n (wsTop) edge (ofTop)\n (dbBottom) edge (ofBottom)\n (authSer) edge (ofBottom)\n (wsBottom) edge (ofBottom)\n (ofTop) edge (router)\n 
(ofBottom) edge (router)\n (router) edge (nox)\n (router) edge (internet)\n ;\n\\end{tikzpicture}\n \\caption{OpenFlow-based Random Host Mutation mutates the virtual IPs of hosts based on a pool of available IP addresses.}\n \\label{fig:7}\n\\end{figure}", "id": "e8cc24b8-ce87-4411-895c-24520c0db83d", "level": "subsection", "origin_cites_number": 1, "parent_id": "900fc6c6-3b7c-4ff3-bacb-6fa00ef41c6d", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Implementation of Moving Target Defenses" ], [ "subsection", "SDN-based MTD Applications and Case Study" ] ], "subsections": [ "5e978832-8e86-42b3-a130-ca41d4db0993", "ef588ede-8a75-412f-91ad-d71c26828c4f", "ef9817c2-a610-4846-b5a4-13cd3cb4f815" ], "title": "SDN-based MTD Applications and Case Study" }, { "cite_extract_rate": 0, "cites": [], "content": "The first step of the Cyber Kill Chain (CKC) is the identification of vulnerable software and OS versions. Most scanning tools make use of ICMP, TCP or UDP scans to identify the connectivity and reachability of the potential targets. The replies to the scans can also reveal the firewall configuration details, i.e., what traffic is allowed or denied. The time to live (TTL) information can also help in the identification of the number of hops to the attack target~.
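The TTL leak works because initial TTLs come from a small set of OS-specific defaults (commonly 64 for Linux, 128 for Windows, and 255 for many network devices), so an observed TTL suggests both a likely OS family and the hop distance. A minimal sketch:

```python
# Common initial TTL defaults: 64 (Linux/macOS), 128 (Windows),
# 255 (many routers and other network devices).
COMMON_INITIAL_TTLS = (64, 128, 255)

def infer_hops(observed_ttl):
    """Estimate hop count from an observed IPv4 TTL, assuming the sender
    used one of the common initial values (illustrative sketch): the
    smallest default >= the observed TTL is the likely starting value."""
    for initial in COMMON_INITIAL_TTLS:
        if observed_ttl <= initial:
            return initial - observed_ttl
    raise ValueError("invalid IPv4 TTL")

# A reply arriving with TTL 54 most likely started at 64: ~10 hops away.
```

An SDN data plane that normalizes or randomizes the TTL of transit replies breaks exactly this inference.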
The cost-benefit analysis of MTD adaptations against network mapping attempts has been discussed by Kampanakis \textit{et al.}~. This survey considers the cost-benefit aspects of MTD in Section~\ref{effect}, under the quantitative metrics of MTD.
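The random-reply idea can be illustrated with a toy sketch (our own illustration, not code from any cited system; the function name and probability are hypothetical): the defender answers each probe truthfully only with some probability, so the attacker must spend many more probes to separate real services from decoys.

```python
import random

def probe_reply(port, open_ports, truth_prob=0.3, rng=None):
    """Answer a port probe truthfully only with probability truth_prob,
    so that scan results become unreliable for the attacker."""
    rng = rng or random.Random()
    truth = "open" if port in open_ports else "closed"
    if rng.random() < truth_prob:
        return truth
    return "closed" if truth == "open" else "open"

# An attacker scanning ports 1-1024 now sees many fake "open" ports and
# must spend extra probes to separate real services from decoy replies.
open_ports = {22, 80, 443}
rng = random.Random(7)
replies = {p: probe_reply(p, open_ports, rng=rng) for p in range(1, 1025)}
fake_open = sum(1 for p, r in replies.items() if r == "open" and p not in open_ports)
print("reported open:", sum(r == "open" for r in replies.values()), "fake:", fake_open)
```

Even this simple scheme multiplies the reconnaissance cost, since a single scan pass no longer yields a trustworthy service map.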
Although a modern OS can generate random responses to TCP and UDP requests, the way in which TCP sequence numbers are generated can help an attacker identify the OS version.\nIn an SDN-enabled solution, the OS version can be obfuscated by generating random responses to the probes from a scanning tool. SDN can introduce a layer of true randomization for the transit traffic to the target. The SDN controller can manage a list of OS profiles and send replies resembling the TCP sequence of a bogus OS, thus misguiding the attacker.\n\begin{figure}[!tp]\n \centering\n \includegraphics[trim=80 5 80 5,clip,width=0.5\textwidth]{figures/case-study-3.pdf}\n \caption{SDN/NFV-based MTD that models the interaction between an attacker and the defender as a game to select an optimal countermeasure strategy. The work imposes bandwidth limits and views the MTD as an adaptation problem.}\n \label{fig:case-study3}\n\end{figure}
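The profile-based reply idea can be sketched as follows (a simplified, hypothetical illustration; the profile names and ISN behaviors are illustrative, not measured fingerprints): the controller keeps a table of bogus OS profiles and answers each probe with a TCP initial sequence number (ISN) drawn from the chosen profile's generator.

```python
import random

# Hypothetical OS profiles: each generates TCP initial sequence numbers
# (ISNs) in a style a fingerprinting tool might associate with that OS.
OS_PROFILES = {
    "legacy-predictable": lambda rng, last: (last + 64000) % 2**32,  # fixed increment
    "modern-random": lambda rng, last: rng.getrandbits(32),          # fully random ISN
}

def spoofed_isn(profile, rng, last_isn=0):
    """Return an ISN mimicking the chosen bogus OS profile."""
    return OS_PROFILES[profile](rng, last_isn)

# The controller assigns a bogus profile per protected host; probes then
# observe the ISN behavior of the fake OS rather than the real one.
rng = random.Random(1)
profile = rng.choice(sorted(OS_PROFILES))
isn, trace = 0, []
for _ in range(3):
    isn = spoofed_isn(profile, rng, isn)
    trace.append(isn)
print(profile, trace)
```

A fingerprinting tool observing these ISNs would classify the host as the bogus OS, directing the attacker toward exploits that cannot succeed against the real target.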
SDN-based MTD can introduce various countermeasures at the network level, such as network shuffling~, route modification~, and IP and port obfuscation, as discussed by Wang \textit{et al.}~.\nWe now discuss case studies that show the use of SDN/NFV capabilities for implementing the MTD decisions - what, when, and how to switch. Jafarian \textit{et al.}~ use an OpenFlow-based random host mutation technique to switch the virtual IPs (what to switch) targeted by reconnaissance attempts. Debroy \textit{et al.}~ use an SDN framework to identify the optimal rate of migration (when to switch) and the ideal migration location for VMs under DoS attack. Chowdhary \textit{et al.}~ use an SDN environment to analyze the hosts mounting DDoS attacks on critical network services; the SDN controller downgrades their network bandwidth (how to switch) using a punishment mechanism based on the Nash Folk theorem. \n\noindent $\RHD$ \textbf{Case Study (What to Switch)-- OpenFlow Random Host Mutation.}\nSDN makes use of the OpenFlow protocol for control-plane traffic. Jafarian \textit{et al.}~ propose an OpenFlow-enabled MTD architecture, shown in Figure~\ref{fig:7}, that mutates IP addresses with a high degree of unpredictability while keeping a stable network configuration and minimal operational overhead.\nThe mutated IP address is transparent to the end host. The actual IP address of the host, called the real IP (rIP), is kept unchanged but is linked to a short-lived virtual IP address (vIP) at regular intervals. The vIP is translated back to the rIP before reaching the host; the rIP-vIP translation happens at the gateway of the network, and a centralized SDN controller performs the mutation across the network. A Constraint Satisfaction Problem (CSP) is formulated in order to satisfy the mutation rate and unpredictability constraints, and is solved using a Satisfiability Modulo Theories (SMT) solver.\nSensitive hosts have a higher mutation rate than regular hosts in this scheme.
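The core of the mutation can be sketched as a toy illustration (our own sketch, not the authors' SMT-based implementation; the host names, pool, and mutation intervals are hypothetical): each host keeps its rIP fixed while the controller periodically draws a fresh vIP from the unused pool, with sensitive hosts mutating more often.

```python
import random

class RandomHostMutation:
    """Toy vIP mutation: rIPs stay fixed; vIPs are re-drawn from a pool."""
    def __init__(self, pool, seed=0):
        self.free = set(pool)
        self.vip = {}            # rIP -> current vIP
        self.rng = random.Random(seed)

    def mutate(self, rip):
        old = self.vip.get(rip)
        if old:
            self.free.add(old)   # return the previous vIP to the pool
        new = self.rng.choice(sorted(self.free - {old}))  # never reuse the old vIP
        self.free.discard(new)
        self.vip[rip] = new
        return new

pool = [f"10.0.0.{i}" for i in range(2, 50)]
rhm = RandomHostMutation(pool)
# A sensitive host mutates every tick; a regular host every third tick.
for tick in range(6):
    rhm.mutate("rIP-sensitive")
    if tick % 3 == 0:
        rhm.mutate("rIP-regular")
print(rhm.vip)
```

Because assigned vIPs are removed from the pool, no two hosts can collide on the same virtual address, mirroring the conflict-freedom constraint that OF-RHM encodes in its CSP.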
The OF-RHM is implemented using Mininet network simulator and NOX SDN controller. OF-RHM is able to thwart about 99\\% of information gathering and external scanning attempts. The framework is also highly effective against worms and zero-day exploits.\n\\begin{figure}[!t]\n \\centering\n \\begin{tikzpicture}[\n state/.style={rectangle},\n node distance=2.3cm,\n ]\n \\centering\n \\node[state] (controller) [text width=1.3cm, align=center] {{\\Huge \\faServer}\\\\{\\footnotesize Open-flow Controller}};\n \\node[state] (attacker) [above left of=controller, xshift=-0.5cm, text width=1.2cm, align=center] {{\\Huge \\faUserSecret}\\\\{\\footnotesize Attacker}};\n \\node[state] (user) [below left of=controller, xshift=-0.5cm, text width=1cm, align=center] {{\\Huge \\faUser}\\\\{\\footnotesize User}};\n \\node[state] (auth) [above of=controller, yshift=-0.1cm, text width=1.5cm, align=center] {{\\Huge \\faSignIn}\\\\{\\footnotesize Auth Server}};\n \\node[state] (cloud) [right of=controller, text width=1.5cm, align=center] {{\\Huge \\faCloud}\\\\{\\footnotesize Cloud env.}};\n \\node[state] (mig) [below right of=cloud, xshift=1cm, text width=1.7cm, align=center] {{\\Huge \\faFilesO}\\\\{\\footnotesize Migration Destination VM}};\n \\node[state] (target) [above right of=cloud, xshift=1cm, text width=1.7cm, align=center] {{\\Huge \\faFilesO}\\\\{\\footnotesize Target Application VM}};\n \\path[BrickRed, ->, dashed]\n (attacker) edge (controller)\n (controller) edge (cloud)\n (cloud) edge (mig);\n \\path[ForestGreen, ->]\n (user) edge (controller)\n (controller.-23) edge (cloud.200)\n (cloud) edge (target);\n \\path[black, <->, dotted]\n (auth) edge (controller)\n (controller.23) edge (cloud.160);\n \\path[black, ->, dotted]\n (cloud.50) edge (target.195)\n (target) edge (mig);\n\\end{tikzpicture}\n \\caption{Virtual Machine migration using frequency minimal MTD seeks to migrate applications to new VMs and redirect an attacker's DDoS attacks without effecting availability to 
regular users. The goal of this work was to find an optimal timing function $T$, termed the migration frequency.}\n \label{fig:8}\n\end{figure}\n\noindent $\RHD$ \textbf{Case Study (How to Switch)-- Dynamic Game-based Security in SDN-enabled Cloud Networks}.\nDDoS attacks are a major security threat affecting networks. Attackers leverage sophisticated bots to generate a huge volume of network traffic and overwhelm critical network services, as discussed by Chowdhary \textit{et al.}~. The case study presented in this research work focuses on the MTD decision {\em how to move?} The framework presented in Figure~\ref{fig:case-study3} (a) communicates with OpenFlow devices using the southbound REST API, whereas application-plane decisions and network analytics are performed using the northbound REST API.\nThe Snort IDS~ detects malicious DDoS patterns, and the Nash Folk Theorem~ is used to analyze the behavior of a malicious node, e.g., the red VM shown in Figure~\ref{fig:case-study3} (a). The scenario is represented as an extensive-form game spanning multiple rounds. If the node sending network traffic (player P2) defects, as shown in Figure~\ref{fig:case-study3} (b), its normal bandwidth B is reduced to $\frac{B}{2}$. The bandwidth is reduced further in each subsequent round by the network admin (player P1), until the average bandwidth over the period $t=\{0,1,2,..\} \in T$ equals B.\nThe SDN controller utilizes the Instruction field inside the flow table to change the band rate of the node flagged as malicious, as shown in Figure~\ref{fig:case-study3} (c). Thus, SDN-based rate-limiting serves as an effective countermeasure against flooding/loss-of-availability attacks such as DDoS.\n\noindent $\RHD$ \textbf{Case Study (When to Switch): Frequency Minimal MTD using SDN.}\nThe \emph{frequency minimal MTD}~ approach considers resource availability, QoS, and exploit probability when performing MTD in an SDN-enabled cloud infrastructure. 
The design goals of this research work are as follows:\n\begin{enumerate}\n \item Identification of the optimal frequency of VM migration, ensuring minimal wastage of cloud resources.\n \item Finding the preferred location of VM migration without impacting application performance. \n\end{enumerate}\nAs shown in Figure~\ref{fig:8}, normal clients access the services hosted in the cloud network via the regular path, while the attack path represents the path along which the attacker tries to exploit the target application. The VMs periodically share their resource information, such as storage and compute capacity, with the controller along the control path. Depending upon the level of threat, VM migration can be proactive or reactive. The real IP address of the VM hosting the cloud application is hidden from the clients. The data path shows the path along which the VM application and its back-end database are migrated. VM migration is based on the following factors:\n\begin{itemize}\n \item \textbf{VM Capacity:} This parameter considers the capacity of the migration target in terms of the computing resources available to store the files and databases of current and future clients.\n \item \textbf{Network Bandwidth:} The lower the network bandwidth between the source and target VM, the slower the migration process and the longer the exposure period (in case of active attacks) for the VM being migrated. This parameter therefore considers the bandwidth between source and target when performing VM migration.\n \item \textbf{VM Reputation:} This is an objective indicator of the VM's robustness against future cyber attacks, based on the history of cyber attacks launched against the VM. This parameter is considered in order to ensure the VM’s suitability for migration.\n\end{itemize}\nThis research work estimates the optimal migration frequency and the ideal migration location based on the parameters described above. 
The VM migration mechanism is highly effective in dealing with denial-of-service (DoS) attacks.", "id": "ef9817c2-a610-4846-b5a4-13cd3cb4f815", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "e8cc24b8-ce87-4411-895c-24520c0db83d", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Implementation of Moving Target Defenses" ], [ "subsection", "SDN-based MTD Applications and Case Study" ], [ "subsubsection", "Protection against multi-stage attacks and service disruption" ] ], "subsections": [], "title": "Protection against multi-stage attacks and service disruption" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{testbed}\n The practicality of MTD can be established by the deployment of MTD techniques and tactics over an underlying network. In this section, we identify some platforms which can assist MTD researchers in conducting experiments or serve as a guideline when creating MTD case studies.", "id": "f07597ac-4c26-4715-98fd-716bb0d1e2c9", "level": "subsection", "origin_cites_number": 0, "parent_id": "900fc6c6-3b7c-4ff3-bacb-6fa00ef41c6d", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Implementation of Moving Target Defenses" ], [ "subsection", "MTD Testbeds: Research and Prototyping" ] ], "subsections": [ "bc25b8d5-1df5-4f93-93af-e46c47f20bac", "4ccc2651-134c-46eb-a2c5-0bb851269de8", "0e529436-491d-4e52-b7ab-d8e3fba8d04a", "a9e21946-efc4-4ee9-b933-75b4b165a482", "852a4dd4-714f-4123-8cee-cd3b790bbe57" ], "title": "MTD Testbeds: Research and Prototyping" }, { "cite_extract_rate": 0, "cites": [], "content": "{Global Environment for Networking Innovation} (GENI)~ provides a distributed virtual environment for at-scale networking and security experimentation. Individual experiments can be conducted in isolation using network sliceability (ability to support virtualization and network isolation). 
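The three migration factors can be combined into a simple weighted score for ranking candidate targets, as in the following sketch (the weights, value scales, and VM names are our own illustrative assumptions; the paper's actual estimator is not reproduced here):

```python
def migration_score(vm, w_cap=0.4, w_bw=0.3, w_rep=0.3):
    """Rank a candidate migration target by capacity, bandwidth to the
    source, and reputation, each normalized to [0, 1]. Higher is better."""
    return w_cap * vm["capacity"] + w_bw * vm["bandwidth"] + w_rep * vm["reputation"]

candidates = [
    {"name": "vm-a", "capacity": 0.9, "bandwidth": 0.4, "reputation": 0.8},
    {"name": "vm-b", "capacity": 0.6, "bandwidth": 0.9, "reputation": 0.9},
    {"name": "vm-c", "capacity": 0.8, "bandwidth": 0.7, "reputation": 0.3},
]
# The controller would migrate the targeted application to the
# highest-scoring candidate.
best = max(candidates, key=migration_score)
print(best["name"], round(migration_score(best), 2))
```

Adjusting the weights lets the defender trade off migration speed (bandwidth) against long-term robustness (reputation) for a given deployment.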
Each experimental slice is provided with network and computing resources. Users can request resources for an allotted time period, conduct their experiments, and release the resources once the experiment is finished.\nDebroy \textit{et al.}~ utilize the GENI framework for the implementation of a {VM migration} MTD solution. The authors utilized the {InstaGENI} platform at the Illinois site to host a news feed application targeted by DoS attacks. The setup also involved four non-malicious users, one remote SDN controller, and attacking VMs, all hosted at physically distributed locations. The attacker VMs were utilized for sending a large number of HTTP GET requests to the news feed site in order to mount a DoS attack. Proactive and reactive frequency minimal (FM) MTD countermeasures were deployed in response to the targeted DoS attacks. The research work shows the capability of the GENI platform to host similar MTD experiments, where users can study the impact on network bandwidth, VM performance, and attack success rate when MTD security countermeasures are implemented at scale on a cloud platform. 
\n \\begin{figure}[!tp]\n \\centering\n \\begin{tikzpicture}[\n scale=0.8,\n state/.style={rectangle},\n node distance=2cm,\n ]\n \\centering\n \\node[state] (spec) [text width=0.6cm, align=center] {{\\Large \\faFileTextO}};\n \\node[state] (spec_text) [below of=spec, yshift=0.5cm, text width=1.5cm, align=center] {{\\footnotesize MTD Specification}};\n \\node[state] (mtd_cnt) [draw=black, right of=spec, xshift=0.45cm, text width=2.65cm, text height=2.15cm, align=center] {};\n \\node[state] (process) [right of=spec, xshift=-0.2cm, yshift=0.6cm, text width=1.2cm, align=center] {\\footnotesize Processing Module};\n \\node[state] (operation) [right of=process, xshift=-0.65cm, text width=1.2cm, align=center] {\\footnotesize Operations Module};\n \\node[state] (cnp) [right of=process, xshift=-1.4cm, yshift=-1.2cm, text width=2.5cm, align=center] {\\footnotesize Configuration and Provisioning Module};\n \\node[state] (mtd_cnt_text) [below of=mtd_cnt, yshift=0.35cm, text width=2.5cm, align=center] {{\\footnotesize MTD Controller}};\n \\node[state] (cmt) [right of=operation, text width=0.9cm, align=center] {\\footnotesize CMT Puppet};\n \\node[state] (openstack) [right of=cnp, xshift=0.7cm, text width=1.1cm, align=center] {\\footnotesize Openstack APIs}; \n \\node[state] (cloud) [right of=cmt, yshift=-0.5cm, text width=0.6cm, align=center] {{\\Large \\faCloud}}; \n \\node[state] (cloud_text) [below of=cloud, xshift=-0.1cm, yshift=0.5cm, text width=1.9cm, align=center] {{\\footnotesize MTD on Cloud Infrastructure}};\n \\path[black, ->, dashed]\n (spec) edge (mtd_cnt)\n (mtd_cnt) edge[out=0,in=180] (cmt)\n (mtd_cnt) edge[out=0,in=180] (openstack)\n (cmt) edge[out=0,in=180] (cloud)\n (openstack) edge[out=0,in=180] (cloud)\n ;\n \\end{tikzpicture}\n \\caption{A platform that takes an abstract specification of a cloud system as input and outputs the corresponding system on a cloud infrastructure. 
Advantages of cloud automation using Automated Enterprise Network Compiler (ANCOR) include performing live instance migration.}\n \\label{fig:6}\n\\end{figure}", "id": "bc25b8d5-1df5-4f93-93af-e46c47f20bac", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "f07597ac-4c26-4715-98fd-716bb0d1e2c9", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Implementation of Moving Target Defenses" ], [ "subsection", "MTD Testbeds: Research and Prototyping" ], [ "subsubsection", "GENI Platform" ] ], "subsections": [], "title": "GENI Platform" }, { "cite_extract_rate": 0, "cites": [], "content": "{OpenStack}~ is a cloud operating system that consists of compute, storage, and networking resources. The users can log in through GUI to provision isolated virtual networks or utilize OpenStack APIs to create and configure the network. OpenStack is compatible with the existing virtual solutions such as {VMWare ESX}, {Microsoft's Hyper-V}, {KVM}, {LXC}, {QEMU}, {UML}, {Xen}, and {XenServer}. \n\\textbf{Mayflies}~ utilized OpenStack for designing a {fault-tolerant} MTD system. The research work utilizes VM {introspection} to checkpoint the current state of the live node/VM, and {reincarnation} - node {commission/decommission} based on attacks against the introspected node. The strategy allows the Mayflies framework to deal with attacks in a short interval of time, avoiding the attack progress. Zhuang \\textit{et al.}~ conduct MTD experiments to simulate a {pure-random MTD} strategy and an intelligent MTD approach based on intelligent attack indicators. The experiments were conducted using \\emph{NeSSi2}~, an {open-source}, discrete event-based network security {simulator}. The authors proposed implementation on an OpenStack based MTD testbed as future work. 
\n\\textbf{MTD CBITS} MTD-platform for \\textit{Cloud-Based IT Systems} (MTD CBITS)~ as shown in Figure~\\ref{fig:6} evaluates the practicality and performs detailed analysis of security benefits when performing MTD on a cloud system such as OpenStack.\n The platform makes {automated changes} to the IT systems running on the cloud infrastructure. One adaptation performed in MTD CBITS is {replacing} the {running components} of the system with new ones. The system parameters are stored in the operational model and can be viewed using the MTD system inventory - {CMT (Puppet) APIs}. Any MTD adaptation is also recorded in an operational model for future reference. {OpenStack API} utilizes Puppet agents to communicate with the live instances of the cloud system. MTD controller communicates with agents over a private network.", "id": "4ccc2651-134c-46eb-a2c5-0bb851269de8", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "f07597ac-4c26-4715-98fd-716bb0d1e2c9", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Implementation of Moving Target Defenses" ], [ "subsection", "MTD Testbeds: Research and Prototyping" ], [ "subsubsection", "OpenStack Cloud" ] ], "subsections": [], "title": "OpenStack Cloud" }, { "cite_extract_rate": 0, "cites": [], "content": "~ provides a testing and experimentation platform for cybersecurity research. The platform supports high fidelity network, by representing the network in a discrete event simulator. CyberVAN allows hosts represented by VMs to communicate over the simulated network. The testbed utilizes network simulator \\emph{ns-3}~ to simulate network effects such as end-to-end network latency, link capacities, routing protocols, {\\em etc.} The platform also supports {wireless networks} by modeling {mobility}, {interference}, {propagation effects}, as well as the details of different {waveforms}. 
The {Scenario Management GUI} allows the experimenter to access and manage the elements of an attack scenario, including network traffic and logging, running analytic tools on the VM, saving results of the experiment, pausing and restarting the experiments, {\\em etc.}", "id": "0e529436-491d-4e52-b7ab-d8e3fba8d04a", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "f07597ac-4c26-4715-98fd-716bb0d1e2c9", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Implementation of Moving Target Defenses" ], [ "subsection", "MTD Testbeds: Research and Prototyping" ], [ "subsubsection", "CyberVAN Testbed" ] ], "subsections": [], "title": "CyberVAN Testbed" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{figure}[!tp]\n \\centering\n \\includegraphics[trim=10 50 10 20,clip,width=0.5\\textwidth]{figures/scada-testbed.pdf}\n \\caption{An MTD-based architectural platform for IP hopping. The solution has been targeted at Supervisory Control and Data Access Systems (SCADA). The MTD gateway routers act as dynamic proxies, periodically mutating the IP address of the external interfaces exposed on the public wide-area network.}\n \\label{fig:6-1}\n\\end{figure} \nIn the context of smart-grid environments, Pappa \\textit{et. al.} has proposed an MTD for IP hopping~ (see Figure~\\ref{fig:6-1}). The network consists of two SCADA servers running Siemens Spectrum Power TG Energy Management Systems (EMS) software. The software periodically polls Remote Terminal Units (RTUs) for network traffic measurements. The substation networks are connected to the Wireless Area Network (WAN) using MTD gateway routers. The remote attacker can utilize the publicly exposed service over the WAN to identify security vulnerabilities. The services can be exploited using traditional means or using APT techniques. 
The gateway routers in the network act as dynamic proxies, mutating the IP addresses of the external interfaces while providing end-to-end SCADA communication. The IP generation algorithm employs a random initial seed to initiate IP generation at network boot-up. Both routers are assigned a set $(j,k)$ of $n$ random IP addresses at the start, in combination with the initial seed vector. The initial seed vector keeps the mutation order between the two gateway routers synchronized, which helps to prevent IP collisions and network outages. Additionally, the MTD algorithm creates a dynamic network topology that makes the job of network mapping difficult for the attacker.
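The seed-synchronized hopping scheme can be sketched as follows (an illustrative approximation in which a seeded PRNG stands in for the paper's IP generation algorithm; the subnet, hop count, and seed format are our own assumptions): because both gateways derive their address sequences from the same shared seed, each can recompute the other's current address locally, without ever transmitting the addresses.

```python
import random

def hop_sequence(shared_seed, router_id, n, subnet="172.16.{0}.{1}"):
    """Derive n pseudo-random external IPs for one router from a shared seed.
    Both routers hold the same seed, so each can recompute the other's
    hop sequence locally, keeping the two gateways synchronized."""
    rng = random.Random(f"{shared_seed}:{router_id}")
    return [subnet.format(rng.randrange(256), rng.randrange(1, 255)) for _ in range(n)]

seed = 0xC0FFEE
router_j = hop_sequence(seed, "j", 5)
router_k = hop_sequence(seed, "k", 5)
# Router k predicts router j's address for hop t from the shared seed alone:
t = 3
assert hop_sequence(seed, "j", 5)[t] == router_j[t]
print(router_j)
print(router_k)
```

Seeding each router's generator with the shared seed plus its identifier keeps the two sequences distinct yet mutually predictable, which is what prevents IP collisions without extra coordination traffic.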
The solution has been deployed on many {VLANs} across {Assuta Medical Center} and provides better visibility into the lateral movement of attackers. \n\textit{CryptoTrap}, on the other hand, is utilized for {countering} {ransomware} early in its exploitation lifecycle. This helps counter malware propagation while protecting valuable network assets. The traps (decoys) are masked in the form of {SMB network shares} across the network, and fake company data is replicated across them. Once an attacker touches these traps, the targeted computer is disconnected from the network and the security administrator is alerted about the incident. In effect, the attacker is held captive in the trap and the attacker's actions are logged for further {threat intelligence}. \n\textbf{Polyverse:} \emph{Zero-day} vulnerabilities, e.g., \emph{Heartbleed}~ (a vulnerability discovered in the \textit{OpenSSL} software in 2014), can cause significant damage in an underlying network. There is no known attack signature to identify these vulnerabilities; hence they can {bypass} security {monitoring software} unnoticed.\nAttackers targeting software or OS {memory-based zero-day} vulnerabilities start with the assumption that the gadgets they are trying to access are located at a certain address or within a specific offset from the absolute base address. \emph{Polyverse}~ employs an MTD strategy to defeat this assumption. Polyverse utilizes a polymorphic version of Linux to create a high level of {entropy} in the software system, such that the entire {memory} structure is {diversified}. With the polymorphic versions of Linux, the entire Linux software stack is roughly randomized. The resulting program is semantically identical to the original (functionally equivalent), yet nearly every machine instruction is changed.
\n\\textbf{MorphiSec:} Some threat vectors classified as {Advanced Evasive Attacks} {cloak} the {malicious intent} in order to deceive the security monitoring tools. Some of these techniques include {Polymorphism} (changing malware signature), {Metamorphism} (changing malware code on the fly), {Obfuscation} (hiding malicious activity), {behavior changes} (waiting for normal user activity before executing). \\emph{Morphisec}~ uses MTD at {network-level} (route changes, random addresses and ports), firewall level (policy changes), {host-level} (OS version, memory structure changes), and {application level} (randomizing storage format addresses, application migration, multilingual code generation) in order to deceive and detect attacks using evasive attack techniques.\n\\iffalse", "id": "852a4dd4-714f-4123-8cee-cd3b790bbe57", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "f07597ac-4c26-4715-98fd-716bb0d1e2c9", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Implementation of Moving Target Defenses" ], [ "subsection", "MTD Testbeds: Research and Prototyping" ], [ "subsubsection", "MTD Commercial Domain Solutions" ] ], "subsections": [], "title": "MTD Commercial Domain Solutions" }, { "cite_extract_rate": 0.045454545454545005, "cites": [ 6795 ], "content": "\\label{sdn-situation}\nThe lack of unified network control can make network management challenging and error-prone process as can be seen from Table~\\ref{tab:1} above, and further highlighted by Colville \\textit{et al.}~. \nWe identify different phases of MTD lifecycle, which can be used to construct a \\emph{situation-aware} cyber feedback model as shown in Figure~\\ref{fig:9}. The \\emph{Situation-Aware MTD} model captures different \\emph{security} constraints, attack information and helps in implementation of \\emph{effective} MTD decision which are missing in current research works, discussed in Table~\\ref{tab:6}. 
The phases are continuous in nature, allowing necessary fine-tuning of MTD parameters at different layers of SDN infrastructure. The five phases can be described as follows:\n\\begin{figure}[!htp]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{figures/figure9.pdf}\n \\caption{SDN/NFV-based Moving Target Defense (MTD) Lifecycle}\n \\label{fig:9}\n\\end{figure}\n\\begin{enumerate}\n \\item {\\textbf{Vulnerability and Security Policy Analysis:}} This phase of a situation-aware MTD framework involves checking of current system vulnerabilities using vulnerability scanning tools such as Nessus~ or OpenVAS~. Additionally, we enumerate the security policies defined by different security tools such as Snort IDS/IPS~, packet filtering firewall, Deep-packet inspection tools such as nDPI~, {\\em etc.} This phase helps in creating a model of the attack surface, the current privilege level and the current information view of the attacker about the target environment.\n \\item {\\textbf{Attack Representation:}}\n The tools utilized for vulnerability analysis often have incomprehensible output. Even for a small network, the vulnerability report can span across hundreds of pages. This makes the security investment decision very difficult. Therefore the attack representation methods such as attack graphs and trees become quite useful. SDN-infrastructure can help in network decomposition using logical network segmentation. The attack graph generation in a segmented network can speed up the attack analysis and MTD decision. \n \\item {\\textbf{Risk Analysis:}}\n During this stage of the MTD lifecycle, the network controller can collect statistics from different network interfaces and endpoints. SDN-based attack graph analyzer as proposed by Chung \\textit{et al.}~, can be used for alert correlation from different network segments and creating a risk profile of different virtual machines in the network. 
Intelligent cyber-deception techniques such as game theoretic analysis based on attack graphs as discussed by Nguyen \\textit{et al.}~, can be used for analyzing risk associated with different MTD candidate countermeasures.\n \\item {\\textbf{Countermeasure Selection:}}\n In this stage of SDN-based MTD lifecycle, the countermeasure analysis is performed by the security administrator. The risk profile for a certain VM in the network can be used for performing cost-intrusiveness analysis when selecting a particular countermeasure. The cyber quantification metrics such as Return-of-Investment (ROI) as discussed by Chung \\textit{et al.}~ helps the SDN controller in selection of appropriate MTD countermeasure, e.g. \\textit{network reconfiguration}, \\textit{IDS placement}~, \\textit{VM migration}, {\\em etc.} \n \\item {\\textbf{Network Status Analysis:}}\n The MTD lifecycle is a continuous process and deployment. Any SDN countermeasure should not impact network performance. The cost of countermeasures such as service availability, network downtime, as noted by El. Mir. \\textit{et al.}~ should be given sufficient consideration as a part of MTD ecosystem. The MTD decisions, especially at the network-level (route randomization, network reconfiguration) can introduce security policy conflicts, as discussed by Pisharody \\textit{et al.}~. These security policy conflicts should be continuously inspected by the SDN controller, in order to prevent any security policy violations. 
\n \\end{enumerate}\nWe summarize the list of research works that focus on SDN-based MTD with their applications and limitations in the Table~\\ref{tab:6}.\n\\begin{table*}[!htp]\n \\centering\n \\caption{SDN-based Moving Target Defense Research, Strengths and Limitations}\\label{tab:6}\n \\setlength{\\extrarowheight}{3pt}\n \\begin{tabular}{|p{25mm}|p{25mm}|p{35mm}|p{35mm}|p{35mm}|}\n \\hline\n \\textbf{Research Work, Year Published} &\\textbf{SDN Component} & \\textbf{Brief Overview} & \\textbf{Strengths, Application} & \\textbf{Limitations} \\\\\n \\hline\n DDoS Defense using MTD and SDN~, 2018 & ONOS-controller assisted IP hopping and route-randomization. & ISP cooperated MTD in a collaborative environment. & DDoS Mitigation. SDN based MTD in high-speed networks & Evaluation using simulation environment. Only probabilistic results for a limited environment have been presented. The effect on normal users has not been discussed. \\\\\n \\hline\n CHAOS: An SDN-based Moving Target Defense System~, 2017 & SDN Floodlight~ -based IP, port and fingerprint obfuscation & The framework identifies the hierarchy of hosts in an intranet and provide unpredictable and flexible obfuscation method. IDS-based flow monitoring. CHAOS modules mark traffic as normal or abnormal. The abnormal connections attempts are obfuscated. & Reduced information disclosure while guaranteeing normal traffic flow. Host-specific risk assessment and countermeasure selection. & Only known attacks based on signature match are considered. Cost-benefit analysis in terms of network bandwidth can be considered. Evaluation results provide details only for information disclosure attacks. \\\\\n \\hline\n Mitigating Crossfire Attacks using SDN-based Moving Target Defense~, 2016 & SDN Floodlight controller used for traceroute profile construction and link obfuscation. & Defense against DDoS Crossfire attacks using route optimization and reorganization. 
Persistent links in link map construction are flooded by the attacker. & Proactive defense at the reconnaissance phase, during link map construction. Route randomization used as a reactive mechanism against suspected sources. & Average link usage and delay are evaluated for a small network. Scalability of the MTD approach can be a concern. \\\\\n \\hline\n U-TRI: unlinkability through random identifier for SDN network~, 2017 & SDN-based network obfuscation to update routing and host forwarding tables. & Randomly changing identifier utilized to replace the static link layer address. Consistently changes the device \\textit{vid} using the \\textit{sub-tree spinning} protocol - VIRO~. & Defense against traffic analysis. Obfuscation of network and transport layer identifiers. & The impact on reconnaissance attempts has not been presented. Results on performance impact are measured in a simulated environment.\\\\\n \\hline\n The SDN-based solution for Moving Target Defense network protection~, 2014 & SDN-based random host and route mutation. & Cost-benefit analysis of MTD countermeasures. Increased attacker uncertainty by mis-reporting information. & Provides OS fingerprinting and service version hiding. Increases the cost of reconnaissance. & Experimental results do not reflect the scalability of the approach. The adaptation and timing decisions for MTD are not analyzed thoroughly.\\\\\n \\hline\n Frequency-Minimal Moving Target Defense using Software-Defined Networking~, 2016 & OpenFlow control responsible for performing proactive and reactive migration. Implemented on the SDN GENI~ testbed. & Minimization of cloud management overhead by calculation of the optimal VM migration frequency. & MTD based on target server capacity, network bandwidth impact, VM reputation. & Network bandwidth will fluctuate more in case of flooding attacks. Users may lose session information if VM migration happens during a user transaction. 
\\\\\n \\hline\n MTD Analysis and evaluation framework in Software Defined Network (MASON)~. & SDN assisted MTD countermeasure selection &\n MASON utilizes a threat scoring system based on vulnerabilities and IDS alerts to select an MTD countermeasure. & MTD solution on a real-world SDN cloud testbed, Science DMZ~ & MTD countermeasure selection against an attacker based on threat score alone may fail to incorporate the impact on normal users. \\\\\n \\hline \n \\end{tabular}\n \\end{table*}\n\\fi\n\\input{figures/qualitative.tex}", "id": "bdfb33ec-f5c7-465d-961a-c428763edd3e", "level": "subsection", "origin_cites_number": 22, "parent_id": "900fc6c6-3b7c-4ff3-bacb-6fa00ef41c6d", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Implementation of Moving Target Defenses" ], [ "subsection", "SDN/NFV-based Situation Aware MTD" ] ], "subsections": [], "title": "SDN/NFV-based Situation Aware MTD" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{effect}\nIn this section, we present a list of qualitative and quantitative metrics that can help in determining the effectiveness of MTD. First, we discuss the quality of the defender actions $C$, i.e. the various system configurations the defender can choose to deploy, in terms of performance and security. This discussion helps in identifying a set of qualitative metrics. As we will see, most works either consider this metric implicitly when choosing the configuration set or simply ignore it.\nSecond, we devote a sub-section to quantitative evaluation metrics. These are mathematical functions that help a defender capture measures such as the cost-benefit, the risk analysis, {\\em etc.} of an MTD. 
As we shall see, they can be measured by simulation on test-beds or by using metrics based on security domain knowledge such as CVSS, attack graph representations, {\\em etc.}", "id": "44ef66fe-9579-4293-ae61-6cf47c0da6cc", "level": "section", "origin_cites_number": 0, "parent_id": "18148880-f549-40b4-8c44-171d9433ba0c", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Evaluating the Effectiveness of\\\\Moving Target Defenses" ] ], "subsections": [ "72acaf57-1aab-4a7a-97b6-f369f8591f06", "b89ed177-eb0b-4c87-85c3-011e9557e114", "df9b9c32-f0a3-4fd5-81db-f4f27d7db268", "a17883db-aaf2-425f-b082-9230eb87489d" ], "title": "Evaluating the Effectiveness of\\\\Moving Target Defenses" }, { "cite_extract_rate": 0, "cites": [], "content": "When using a Moving Target Defense (MTD) that moves between multiple system configurations, we would want to ensure that it increases the security of the deployed system without negatively impacting the performance for legitimate users. Thus, we look at how different MTDs consider these aspects either in the modeling of the defense or its evaluation. In particular, we look at two major aspects-- {security considerations} and {performance considerations}-- to evaluate the quality of an MTD. Under each of these sub-headings, we consider the quality of each individual defense and then, the {overall ensemble} of constituent defenses. This helps us identify {non-trivial heuristics} to understand when an MTD might succeed or fail. For example, an MTD that lacks {\\em differential immunity}, i.e. one where all the constituent system configurations ($\\in C$) are vulnerable to the same attack, can never be secure regardless of how well it is implemented in practice.\nIn regards to quality metrics, we categorize the various MTDs in Figure \\ref{fig:qual}. An MTD is part of a set if it explicitly considers the particular qualitative metric when designing either the configuration set $C$ (i.e. 
{\\em what to move?}), the timing function $T$ (i.e. {\\em when to move?}), or the movement function $M$ (i.e. {\\em how to move?}). Note that only a subset of the works discussed under security considerations (i.e. shown in the {\\color{CornflowerBlue!80!black} blue} and/or {\\color{Goldenrod!90!black}yellow} boxes on the left of Fig. \\ref{fig:qual}) are featured under {performance considerations} (i.e. shown in the {\\color{Salmon!90!black} pink} and/or {\\color{SpringGreen!90!black}green} boxes on the right of Fig. \\ref{fig:qual}). We discovered that the majority of the defenses proposed only consider (and therefore highlight) the improvement with regards to security and ignore the impact on performance. We will now discuss the details about how the different works capture the various qualitative aspects that we put forth in this survey.", "id": "72acaf57-1aab-4a7a-97b6-f369f8591f06", "level": "subsection", "origin_cites_number": 0, "parent_id": "44ef66fe-9579-4293-ae61-6cf47c0da6cc", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Evaluating the Effectiveness of\\\\Moving Target Defenses" ], [ "subsection", "Qualitative Evaluation" ] ], "subsections": [ "303f6801-d853-45bd-973c-a4e19b7b0756", "df172cef-7755-46bb-99f3-39c119596d08" ], "title": "Qualitative Evaluation" }, { "cite_extract_rate": 0, "cites": [], "content": "We first look at works that reason about the security risks of individual actions, followed by works that reason about the security risks associated with an ensemble and, lastly, discuss works that consider both.", "id": "303f6801-d853-45bd-973c-a4e19b7b0756", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "72acaf57-1aab-4a7a-97b6-f369f8591f06", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Evaluating the Effectiveness of\\\\Moving Target Defenses" ], [ "subsection", "Qualitative Evaluation" ], [ "subsubsection", 
"Security Considerations" ] ], "subsections": [ "b69bd122-eea6-486d-b31d-fc20f3b2f9f6", "5fde2a5a-dee0-495d-9b5b-8012b1c8021b", "d16ff0e9-0cd5-42bc-8ab9-9b56b177336a" ], "title": "Security Considerations" }, { "cite_extract_rate": 0.21428571428571402, "cites": [ 6800, 6797, 6792 ], "content": "Most MTDs that consider some form of game-theoretic modeling consider the security of individual defenses in the ensemble by representing them as a part of the defender's utility value . For most of these works, the security metrics considered for each action are obtained using scoring metrics designed by experts such as the Common Vulnerability Scoring System (CVSS) discussed earlier. These works are able to come up with an intelligent movement strategy $M$ based on reasoning over known vulnerabilities (more specifically, common vulnerabilities and exposures (CVE)) for which CVSS scores are readily available. This can result in highly sub-optimal behavior when faced with zero-day vulnerabilities. In that regard, authors in model the security of a configuration as inversely proportional to the probability with which an adversary can come up with a new attack given the attacks it performed in the earlier time steps. In , the authors try to model zero-day attacks by asking security experts to annotate how effective they think a particular {countermeasure} or defense action will be against zero-day vulnerabilities. As the annotations might be inaccurate (due to the lack of actual black-hat hackers who invent zero-day attacks in the set of security experts who annotate the defense methods), the effectiveness of their MTD cannot be accurately determined. How, if at all, utility values can capture the security risks associated with zero-day vulnerabilities is an interesting question that remains to be explored by future works.\nOther works that also only capture the security of individual constituent system configurations are more domain or problem specific in nature. 
In , each defense action, before being played, is weighed based on the penalty it imposes on an attacker (who tried to do a DDoS attack) over a repeated game setting. In , authors choose an action (to migrate a VM or keep running) after reasoning about the security risks associated with the present vulnerable state. In , the security risk is based on the reputation of the current state, which in turn looks at the type and number of attacks that were done in previous time steps when the particular system was deployed. On similar lines, researchers in find a more compact way to model and continuously update the risk associated with deploying a particular defense action. Some works that consider the longitudinal effects of a movement strategy $M$ model the epistemic effects of movement on an attacker's knowledge about the network. For example, considers the fingerprint of the overall {network} when a particular defense action is selected, while reasons about the topology information a particular defense action leaks to an attacker. 
In contrast to all these works, models the security risk associated with a particular defense action as inversely proportional to the cost (it believes) an attacker would spend on compromising it.", "id": "b69bd122-eea6-486d-b31d-fc20f3b2f9f6", "level": "paragraph", "origin_cites_number": 14, "parent_id": "303f6801-d853-45bd-973c-a4e19b7b0756", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Evaluating the Effectiveness of\\\\Moving Target Defenses" ], [ "subsection", "Qualitative Evaluation" ], [ "subsubsection", "Security Considerations" ], [ "paragraph", "Considers only Security of Individual Defenses" ] ], "subsections": [], "title": "Considers only Security of Individual Defenses" }, { "cite_extract_rate": 0.15384615384615302, "cites": [ 6795, 314 ], "content": "Works that measure only the security of an ensemble, as opposed to the security of actions, mostly showcase the security benefits of the MTD ensemble by comparing them to the security benefits provided by a static system configuration . In essence, these works consider the static system as the control case but, unfortunately, do not ensure that this static system is the most secure constituent defense configuration in the ensemble. This way, they might overestimate and thus, over-promise the security guarantees of the designed MTD.\nOn the other hand, works that consider the security metrics associated with the ensemble (as a whole) at modeling time look at metrics that are similar to {\\em entropy} or {\\em diversity} of the ensemble. In , the authors try to use the large address-space of IPv6 to their advantage and are able to create more uncertainty for an attacker who is trying to pinpoint the address to use for a successful attack. On similar lines, authors in argue that fast reincarnations of a functional system (i.e. 
bringing the service down and starting it up on a new location/container, {\\em etc.}) increases the entropy and makes it more costly for an attacker to continuously keep attacking such a system. On a different note, researchers in try to select a defense configuration from the ensemble based on how diverse it is with respect to the current configuration that might have been attacked. In order to do this, they use a topological distance measure, which in their case is the {symmetric difference} between the edge sets of the {current} and the {consecutive defense} configuration. Although they do not explicitly recognize it as a diversity metric like , they bring to light an interesting issue that most MTD papers seem to either miss or assume by default. If there were an attack that, with extremely little modification, could exploit all the defender's configurations that are a part of the MTD, the MTD would not be an effective defense strategy. For example, MTD work that randomly selects a classifier for malware detection assumes by default that each classifier is not vulnerable to the same attack, which seems to be an incorrect assumption to make given current research works . We believe that it would be easier to convince practitioners about the effectiveness of an MTD if researchers can show that diverse defender actions can indeed be generated at a low cost. 
This is the goal of , who create an approach at the compiler level to {increase software diversity} and , who use a genetic algorithm to draw a pool of configurations that maximize diversity.", "id": "5fde2a5a-dee0-495d-9b5b-8012b1c8021b", "level": "paragraph", "origin_cites_number": 13, "parent_id": "303f6801-d853-45bd-973c-a4e19b7b0756", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Evaluating the Effectiveness of\\\\Moving Target Defenses" ], [ "subsection", "Qualitative Evaluation" ], [ "subsubsection", "Security Considerations" ], [ "paragraph", "Considers only Security of the Ensemble" ] ], "subsections": [], "title": "Considers only Security of the Ensemble" }, { "cite_extract_rate": 0, "cites": [], "content": "Only a few works take a holistic view in regards to security and consider both the security of each individual defense and the ensemble as a whole. In , given the general level at which they define the {utility values}, one can capture the security risks associated with an {individual defense} action in terms of {attack surface features} and {attack surface measurements} while the security risks associated with an ensemble can be captured via the utility variable for attack surface shifting. In , the security for individual defense actions is trivial for their settings, where defending a node that is not a stepping stone is not beneficial at all for the defender. The security risks associated with an ensemble are evaluated through experimentation with a static defense. Particularly, they try to increase the number of interruptions for an attacker who starts from one point in the network and tries to reach a goal node. In , the selection of the {entire ensemble} is based on the fact that the set of {possible defenses} has a {global property} of {covering} each of the {leaves} that an attacker might want to reach and each individual defense action has some utility in terms of security associated with it. 
Lastly, the works and model both the {diversity} of {constituent defenses} present in the MTD ensemble and also model the security of each individual defense. In , each {configuration} is mapped to a set of {vulnerabilities} and thus, a diverse ensemble is composed of system configurations that do not have a lot of overlapping vulnerabilities. In , since they use Linux based operating systems, the diversity is modeled in terms of lines of code that are {different} between two {Operating Systems}.", "id": "d16ff0e9-0cd5-42bc-8ab9-9b56b177336a", "level": "paragraph", "origin_cites_number": 5, "parent_id": "303f6801-d853-45bd-973c-a4e19b7b0756", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Evaluating the Effectiveness of\\\\Moving Target Defenses" ], [ "subsection", "Qualitative Evaluation" ], [ "subsubsection", "Security Considerations" ], [ "paragraph", "Considers both" ] ], "subsections": [], "title": "Considers both" }, { "cite_extract_rate": 0, "cites": [], "content": "As mentioned above, a large set of works for developing MTDs focus on showcasing the security benefits and sweep the performance costs under the rug. Note that in the case of MTDs, the impact on performance may arise due to a variety of reasons. First, each system configuration (individual defense action) that is a part of the MTD ensemble has a performance cost associated with it, and moving to a high-cost configuration {impacts} the performance. These concerns are termed as performance considerations for individual defense. Second, the switching from one configuration to another may need a defender to deal with (1) downtime, (2) handling, in a graceful manner, legitimate requests on the wire that were meant for the previous configuration, and/or (3) keeping all the different configurations running (at least two of them) to facilitate a faster switch. 
All these costs can be termed as shuffling costs and are categorized as performance considerations of the ensemble because they only arise when there is an MTD ensemble.\nWe will now describe how the various MTD systems consider the performance costs associated with each individual configuration, followed by the performance cost associated with the ensemble. We also discuss works that reason about both these factors together and analyze the effect on the overall {Quality of Service} (QoS) when such defenses are deployed.", "id": "df172cef-7755-46bb-99f3-39c119596d08", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "72acaf57-1aab-4a7a-97b6-f369f8591f06", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Evaluating the Effectiveness of\\\\Moving Target Defenses" ], [ "subsection", "Qualitative Evaluation" ], [ "subsubsection", "Performance Considerations" ] ], "subsections": [ "6bb87927-4e77-4444-93d4-6dbcf67ad0b6", "71272ac2-6af0-44bc-8e24-56ce3afe651f", "565ace02-1605-4556-8f6c-13d11746aadb" ], "title": "Performance Considerations" }, { "cite_extract_rate": 0.2, "cites": [ 6800, 6792 ], "content": "Similar to the case of security metrics, when we look at performance considerations for an individual defense, most game-theoretic works model these as a part of the defender's utility function . Various equilibrium concepts in these games yield movement strategies for the defender that give {priority} to constituent {defenses} that have {low performance costs} while ensuring that the security is not significantly impacted. In and , the reward functions are defined at an abstract level and the authors point out that they can be used to consider the performance cost of constituent defenses. 
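As a generic, purely illustrative sketch (and not the model of any specific paper surveyed here), folding per-configuration performance costs into a defender's utility and deriving a mixed movement strategy might look like the following; the configuration names, scores, and the trade-off weight are all invented:

```python
import math

# Illustrative only: every configuration name and number below is hypothetical.
# Each candidate configuration has a security benefit and a performance cost;
# the defender scores each as utility = security - alpha * cost and plays a
# softmax (mixed) strategy so that its movement stays unpredictable.
def mixed_strategy(configs, alpha=0.5, temperature=1.0):
    """Return a probability distribution over configurations."""
    utilities = {name: sec - alpha * cost for name, (sec, cost) in configs.items()}
    z = sum(math.exp(u / temperature) for u in utilities.values())
    return {name: math.exp(u / temperature) / z for name, u in utilities.items()}

configs = {  # (security benefit, performance cost) -- hypothetical numbers
    "ipv6-shuffle": (0.8, 0.3),
    "vm-migration": (0.9, 0.8),
    "port-hop":     (0.6, 0.1),
}
strategy = mixed_strategy(configs)
# Configurations with a better security-cost trade-off receive higher probability.
```

The temperature parameter controls how strongly the strategy concentrates on the highest-utility configuration; a very low temperature approaches a deterministic (and therefore predictable) choice.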
In and , the authors consider the impact of placing {Network-based Intrusion Detection Systems} (NIDS) on the {latency} of the network and use {centrality} based measures as {heuristic} guidance for it.\nA large number of works also look at problem-specific instances. Authors in perform experiments to evaluate their MTD against crossfire attacks and find that the average time for packet transfer from one point to another increases because a particular path selected at random can be highly sub-optimal. On similar lines, in , the authors consider the performance cost of doing a {\\em defend} action on legitimate user traffic. On a different note, attempts to solve a multi-faceted problem where the MTD tries to {obfuscate} the {network topology} to an attacker and, at the same time, ensures that it {does not negatively impact} a {defender's ability} to debug network issues. This is done by leveraging the knowledge asymmetry about the network topology that a defender and an attacker have. In , the {performance costs} of a \\emph{honeynet} configuration, which is the defense action for this MTD, represent the number and \\emph{quality of resources} (or honey) necessary for developing a credible honeynet that fools an attacker. 
Lastly, authors in combine both the costs of setting up a good defense and the usability of that defense for legitimate traffic as the performance cost for a particular placement of countermeasures.\n\\begin{figure}[t]\n \\centering\n \\input{figures/quantitative.tex}\n \\caption{Categorization of MTDs based on the fine-grained quantitative metrics they use for evaluation, under the broader umbrella of security and usability metrics.}\n \\label{fig:quantitative}\n\\end{figure}", "id": "6bb87927-4e77-4444-93d4-6dbcf67ad0b6", "level": "paragraph", "origin_cites_number": 10, "parent_id": "df172cef-7755-46bb-99f3-39c119596d08", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Evaluating the Effectiveness of\\\\Moving Target Defenses" ], [ "subsection", "Qualitative Evaluation" ], [ "subsubsection", "Performance Considerations" ], [ "paragraph", "Considers only performance of individual defense actions" ] ], "subsections": [], "title": "Considers only performance of individual defense actions" }, { "cite_extract_rate": 0, "cites": [], "content": "Multiple works try to capture the performance costs that result because of the dynamics involved in shuffling between the different configurations but do not look at the \\emph{performance impacts} resulting because of a particular \\emph{bad constituent configuration}. In and , the authors consider the \\emph{one-step cost} of switching from one defense action to another one and seek to find a strategy that minimizes this. \nOther works, instead of accounting for the performance impact of the ensemble, compare an MTD defense to a static system configuration by measuring usability metrics such as latency, availability to legitimate users, {\\em etc.} In , authors notice an overhead of 40 bytes for the IPv6 header and latency of 12ms during address change as opposed to 3ms when MTD is not implemented on the network packets. 
They point out that a more efficient implementation might help in reducing this gap. On similar lines, authors in highlight that creating an entire file system replica takes two more minutes in the case of an MTD system in comparison to a non-MTD enabled system refresh. On the contrary, notices only negligible performance overhead for instance replacements in a $14$ node system (where one is a controller node and the others are compute nodes). They do, however, notice a larger number of {HTTP error packets} on the wire when using the MTD. To ensure that the usability to legitimate users is not impacted at all, authors in keep multiple systems running with the different configurations. At every switch, they simply pick the system that serves the request. On the downside, they incur the cost of maintaining multiple services (at least two). An interesting side effect of using MTD in , mainly done to thwart selective jamming attacks, is the reduction of delay in transmitting packets over the network. Existing approaches that schedule packet delivery in a deterministic way end up in packet collision scenarios. 
The random schedule selection in each round is shown to reduce the number of collisions, improving end-to-end (ETE) packet delivery time.", "id": "71272ac2-6af0-44bc-8e24-56ce3afe651f", "level": "paragraph", "origin_cites_number": 7, "parent_id": "df172cef-7755-46bb-99f3-39c119596d08", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Evaluating the Effectiveness of\\\\Moving Target Defenses" ], [ "subsection", "Qualitative Evaluation" ], [ "subsubsection", "Performance Considerations" ], [ "paragraph", "Considers only performance of the ensemble" ] ], "subsections": [], "title": "Considers only performance of the ensemble" }, { "cite_extract_rate": 0, "cites": [], "content": "A small section of works either consider or evaluate their MTD in regards to both {performance} of {single constituent} defenses and the {ensemble}. In , the authors model the performance {costs} associated with {each defense action} as a part of the utility values and consider the {shuffle cost} as a part of the rewards the attacker and a legitimate user gets in a repeated game setting. On similar lines, in the empirical analysis of MTDs in the context of {Flip-it} games , the authors model the cost of each defender action as a part of {defender's utility} value while the state of the system after a move action considers the number of {servers} being {controlled} by the {defender}-- an indirect way of measuring system performance. Authors in consider the availability of each defense action over a \\emph{bounded time horizon} and for \\emph{shuffling costs}, consider the downtime or unavailability that results when the system is migrating from one system configuration to another. On similar lines, the authors of model the \\emph{host capacity} and \\emph{network bandwidth} associated with each system configuration and also determine the next configuration based on the performance of other defenses in the previous time steps. 
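A minimal, purely illustrative sketch of the availability-vs-downtime bookkeeping described above, assuming (hypothetically) fixed-length time slots, a per-configuration availability while deployed, and a constant downtime charged per configuration switch:

```python
# Illustrative only: schedules, uptimes, and downtime values are invented.
def horizon_availability(schedule, uptime, switch_downtime, slot_len=60.0):
    """Fraction of a bounded time horizon for which the service is available.

    schedule: list of configuration names, one per time slot.
    uptime: per-configuration availability in [0, 1] while deployed.
    switch_downtime: seconds of unavailability charged per configuration change.
    """
    total = slot_len * len(schedule)
    up = sum(uptime[c] * slot_len for c in schedule)
    switches = sum(1 for a, b in zip(schedule, schedule[1:]) if a != b)
    return max(0.0, up - switches * switch_downtime) / total

# More shuffling raises attacker uncertainty but lowers availability:
calm  = horizon_availability(["A", "A", "A", "B"], {"A": 0.99, "B": 0.98}, 5.0)
jumpy = horizon_availability(["A", "B", "A", "B"], {"A": 0.99, "B": 0.98}, 5.0)
# The jumpy schedule pays the switch downtime three times instead of once.
```

Such a function lets a defender compare candidate movement schedules against an availability budget before committing to a timing function $T$.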
For some works, the performance costs associated with the ensemble do not arise due to the shuffle but represent the cost of ensuring the exact same performance across defense configurations or the cost of implementing a system that can support the various configurations . More specifically, authors in create a system that ensures that the logical virtual identifier distances between any two nodes remain the same, while in , the authors assume an extra cost of creating an ensemble that is not necessary when not using an MTD solution.", "id": "565ace02-1605-4556-8f6c-13d11746aadb", "level": "paragraph", "origin_cites_number": 6, "parent_id": "df172cef-7755-46bb-99f3-39c119596d08", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Evaluating the Effectiveness of\\\\Moving Target Defenses" ], [ "subsection", "Qualitative Evaluation" ], [ "subsubsection", "Performance Considerations" ], [ "paragraph", "Considers both" ] ], "subsections": [], "title": "Considers both" }, { "cite_extract_rate": 0.086206896551724, "cites": [ 6800, 6795, 6797, 6792, 6796 ], "content": "Although we realize that the quality of MTD is key in determining the quality of defense being offered, quantifiable measures of these qualitative terms are essential for effective evaluation.\nFor example, shuffling \\emph{x \\%} of the hosts leads to a \\emph{y \\%} reduction in attack success probability and a \\emph{z \\%} increase in overhead/quality of service (QoS) for the normal user. In this sub-section, we present the quantitative analysis of MTD research works discussed in the survey. 
We categorize quantitative analysis into \\emph{Security Metrics} and \\emph{Usability Metrics} as shown in Figure~\\ref{fig:quantitative}.\n\\begin{figure}[t]\n\\centering\n\\begin{tikzpicture}\n \\pie [polar, rotate=90, scale font, explode=0.2, style = drop shadow,\n color = {CornflowerBlue!30!white, SpringGreen!30!white, Goldenrod!30!white, BurntOrange!30!white, Salmon!30!white, NavyBlue!30!white}]\n {32/,\n 22/,\n 11/,\n 11/,\n 10/,\n 14/}\n\\end{tikzpicture} \n\\vspace{-0.7em}\n\\end{figure}\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{p{1.8cm}p{6cm}}\n\\rowcolor{CornflowerBlue!30!white} \\textbf{Reconnaissance} & \n \n \n \\\\\n\\rowcolor{SpringGreen!30!white} \\textbf{Vulnerability Exploitation } & \n \n \n \\\\\n\\rowcolor{Goldenrod!30!white}\n\\textbf{(D)DoS Attack} & \n \\\\\n\\rowcolor{BurntOrange!30!white} \\textbf{Multi-Stage Attack} & \n \\\\\n\\rowcolor{Salmon!30!white} \\textbf{APT and Data Exfiltration} & \\\\\n\\rowcolor{NavyBlue!30!white} \\textbf{Other types of Attack} & \n \n \\cite {albanese2013moving} \\\\\n\\end{tabular}\n\\caption{Threat Models considered in the MTDs proposed in the surveyed papers.}\n\\label{tab:6}\n\\end{table}", "id": "b89ed177-eb0b-4c87-85c3-011e9557e114", "level": "subsection", "origin_cites_number": 58, "parent_id": "44ef66fe-9579-4293-ae61-6cf47c0da6cc", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Evaluating the Effectiveness of\\\\Moving Target Defenses" ], [ "subsection", "Quantitative Metrics" ] ], "subsections": [ "47c112b3-997f-4055-a767-3ba4b112b9aa", "bd7db867-f967-4c37-b045-1ea380490676", "cf123cac-4317-4df3-9ed0-64883812a554" ], "title": "Quantitative Metrics" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nAn important aspect of network defense is representation and visualization of network attacks. Enterprise networks are becoming large and complex with different network overlay and underlay technologies. 
The adage \\emph{what can't be measured cannot be effectively managed} applies aptly here. Security metrics such as the ones shown in Figure~\\ref{fig:quantitative} cover attack quantification using CVSS metrics-- Confidentiality, Integrity, and Availability (CIA). These metrics also consider attack representation methods (ARMs)-- Attack Graphs and Attack Trees. We now discuss the MTD research works from this perspective of security metrics.\nWe surveyed $\\sim 70$ MTD research articles and analyzed them from the perspective of the different threat models they targeted. A summary of the findings has been provided in Table~\\ref{tab:6}. The key observations are as follows: a) $32 \\%$ of research works focus on defense against {reconnaissance} attempts, b) $22 \\%$ of these research works target {vulnerability exploitation}, c) $11 \\%$ of research works use MTD defense against {DoS/DDoS} attacks. There has been a limited focus on using MTD defense for dealing with {stealthy attacks} like {APT} and {data-exfiltration} (7/68 $\\sim 10 \\%$ research works). Also, some papers used other types of attacks such as timing-based attacks , and network intrusions .\n\\textbf{CIA Metrics:} \\emph{Confidentiality}, \\emph{Integrity}, and \\emph{Availability} (CIA) are used as quantitative metrics for measuring the impact on a system under attack. Zaffarano \\textit{et al.}~ use a confidentiality metric for measuring information exposed by modeling tasks. The mission \\emph{M} confidentiality valuation \\emph{v} is expressed as \\emph{Conf (M, v)} = $\\frac{1}{|T|} \\sum_{t \\in T}(t, unexposed)$. As the measurement equation suggests, the goal is to maintain information as {unexposed} over time ($t \\in T$). Similarly, the \\emph{Integrity} valuation \\emph{v} for mission $M$ is quantified as $Int (M, v) = \\frac{1}{|T|} \\sum_{t \\in T}(t, intact)$. High integrity is ensured by keeping information intact over time. \nConnell et al. 
in~ consider {availability} as an important metric for analyzing the impact of an MTD countermeasure. The system reconfiguration rate $\\alpha$ is modeled as a function of system resources using {Continuous Time Markov Chain} (CTMC) modeling. The analysis of the effect of reconfiguration on the availability is considered for fine-tuning the MTD decision. The MASON~ framework utilizes {Intrusion Detection System} (IDS) alerts and the vulnerability score \\emph{CVSS}, calculated on the basis of CIA values, to identify critical services in the network. A \\emph{Page Rank} based threat scoring mechanism is utilized for combining static and dynamic information, and prioritizing network nodes for the MTD countermeasure {port hopping}. It is noteworthy that port-hopping for $40-50\\%$ of services can help reduce overall threats in the network by $97\\%$. This research work, however, does not consider the usability impact induced by the MTD countermeasure. \nThe {\\em MORE} framework shows a reduction in reconnaissance attempts and exploits targeting software integrity violation on frequent rotation of the Operating Systems (OS) of the network under attack. Experimental analysis shows that a rotation window of $60$s makes \\emph{nmap fingerprinting} attempts ineffective. {Temporal} and {spatial} diversity have been used to introduce a genetic algorithm based MTD by Crouse \\textit{et al.}~. The experiments on the {average vulnerability} of different configurations show decaying vulnerability rates for evolved configurations selected from the {chromosome pool} of VM configurations.\n\\textbf{Attack Graph/ Attack Tree:} CVSS presents only a piece of quantitative information about the vulnerabilities, such as the complexity of performing a network attack, the impact on confidentiality or integrity of the system if the attack is successful, {\\em etc.} This information alone is not sufficient for taking MTD decisions. 
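As a toy sketch in the spirit of MASON's scoring described above (the graph structure, CVSS scores, alert counts, and weighting are all invented for illustration), one can combine static CVSS scores with dynamic IDS alert counts through a personalized-PageRank-style walk over a service-dependency graph to prioritize nodes for countermeasures such as port hopping:

```python
# Illustrative only: not MASON's actual algorithm or data.
def threat_rank(graph, cvss, alerts, damping=0.85, iters=50):
    """Rank nodes by a personalized-PageRank-style threat score.

    graph:  node -> list of downstream (dependent) nodes.
    cvss:   node -> static CVSS base score (0-10).
    alerts: node -> dynamic IDS alert count.
    """
    nodes = list(graph)
    # The personalization vector mixes static and dynamic evidence.
    seed = {n: cvss.get(n, 0.0) / 10.0 + alerts.get(n, 0) for n in nodes}
    z = sum(seed.values()) or 1.0
    seed = {n: v / z for n, v in seed.items()}
    rank = dict(seed)
    for _ in range(iters):
        nxt = {n: (1 - damping) * seed[n] for n in nodes}
        for n in nodes:
            outs = graph[n] or nodes  # dangling nodes spread mass uniformly
            for m in outs:
                nxt[m] += damping * rank[n] / len(outs)
        rank = nxt
    return sorted(rank, key=rank.get, reverse=True)

# Hypothetical three-tier service: web -> app -> db
order = threat_rank(
    graph={"web": ["app"], "app": ["db"], "db": []},
    cvss={"web": 7.5, "app": 9.8, "db": 5.0},
    alerts={"web": 12, "app": 3, "db": 0},
)
```

The highest-ranked nodes would then be the first candidates for a countermeasure like port hopping, mirroring the prioritization idea above.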
{Attack representation methods} (ARMs) such as \\emph{Attack Graph}~ and \\emph{Attack Tree}~ answer questions such as (a) What are the possible {attack paths} in the system? (b) What attack paths can be taken by the attacker to reach a specific target node in the network? An SDN based scalable MTD solution~ makes use of an attack graph-based approach to perform the {security assessment} of a large-scale network. Based on the security state of the cloud network, MTD countermeasures are selected. \n{Bayesian attack graphs} have been used by Miehling \\textit{et al.}~ for defending the network against vulnerability exploitation attempts. The defender's problem is formulated as a {Partially Observable Markov Decision Process} (POMDP), and an {optimal defense} policy for selecting countermeasures is identified as a solution to the {POMDP}. \nThe security analysis of a large-scale cloud network in real time is a challenging problem . Attack Graphs help in the identification of possible attack scenarios that can lead to the exploitation of vulnerabilities in the cloud network. Attack graphs, however, suffer from scalability issues beyond a few hundred nodes, as we discussed in Section~\\ref{sec:background}. \n\\textbf{Risk Metrics:} \nLike any other system, MTD systems have an associated risk once an organization considers deploying the whole or part of an MTD technique. According to the \\emph{National Institute of Standards and Technology} (NIST), there are several attacks, service disruptions, and errors caused by humans or machines that may put critical benefits and assets at risk at the organizational or national level. Risk assessment is a critical measure that can be deployed and used in many ways. In this survey, we identify and highlight research work that has adopted and taken into consideration the risks associated with deploying an MTD solution. 
Specifically, we emphasize the research work that evaluates the {cost} of the adopted MTD solution, since system administrators need to take into account the cost of using the MTD solution. \nRisk assessment in MTD has a direct and strong {relationship} with the {effectiveness} of the deployed MTD technique. According to , the system administrator can determine how good the MTD solution is by examining the associated risk. Therefore, the authors studied the effect of deploying each MTD technique alone (i.e., shuffle, diversity, and redundancy) by inspecting the Hierarchical Attack Representation Method (HARM). {HARM} basically combines an \\textit{ARM} at the upper layer, such as an attack graph (AG), with another ARM at the lower layer, such as an attack tree (AT), where these two ARMs have a one-to-one mapping between them. Moreover, the authors also studied the associated risk by computing the {instance measure} (IM), which uses the vulnerability's {base score}, {impact score}, etc~.\nFeedback-driven multi-stage MTD has been proposed by Zhu \\textit{et al.}~ for dealing with multi-stage attacks like Stuxnet~. The authors quantify the damage or cost caused by an attacker at different stages of the network. The game between attacker and defender is modeled as a {finite zero-sum} matrix game with a bounded {cost function}, and a {mixed-strategy} {Saddle Point Equilibrium} (SPE). Players utilize a cost function learned online to update MTD strategies. The numerical results show that the {feedback mechanism} allows the network defense to respond to unexpected (exogenous) events and reduce unusual peaks of {risk} imposed by different vulnerabilities.\nIn~, the authors provide {metrics} for MTD evaluation and risk analysis. For risk metrics, they proposed statistical metrics to study how quickly the attacker can conduct and succeed in adversarial attacks. The authors assumed the system will always have a running task that can be measured. 
The validity of the metrics was studied by simulating an APT attack scenario, where they assumed the APT will always have some sort of overhead that can be measured and detected. Finally, the {performance} of the proposed metrics was also measured by examining the {CPU utilization} of the designed system.\nCheng \\textit{et al.} consider the game theoretical formulation of MTD systems; specifically, they model it as a {Markov Game}. The authors provided a theorem, subject to probabilistic constraints, to calculate the revenue for the defensive and offensive approaches in MTD systems. Their work depended on testing different defensive and offensive strategies and was evaluated using a networking setup that includes vulnerable services and a {firewall} component as well.\nAnother work that considers a {statistical approach} to evaluate the likelihood of a successful attack was conducted by . The authors proposed an approach to determine the minimum effort required by a system to detect stealthy botnets. Moreover, the {entropy} was measured to determine how close an adversary is to the detection point, where {high entropy} indicates the {attacker} is far in {distance} from the detector. The evaluation of the proposed approach was conducted on a real ISP network obtained from . The results show that the detection strategy has a {complexity} of $O(N^3)$ in practice, although theoretical complexity analysis indicates the algorithm is $\\sim$ $O(N^6)$. \nChung \\textit{et al.} provide a detailed and comprehensive evaluation of optimal countermeasure selection over a set of vulnerable attack paths in the attack graph. By evaluating the CVSS score of vulnerabilities, the authors determined the countermeasure selection option, taking into account the ratio of the {Return on Investment} (ROI). The countermeasure option that produces the smallest ROI is considered the optimal one. 
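The ROI-driven selection described above can be illustrated with a small sketch. The benefit model (summing CVSS scores of the vulnerabilities a countermeasure mitigates, relative to its deployment cost) and all numbers are assumptions for illustration, not the exact formulation of Chung et al.:

```python
# Illustrative ROI computation for countermeasure candidates: benefit is
# approximated by the CVSS scores of mitigated vulnerabilities, relative to
# a deployment cost.  Candidate names, scores, and the formula itself are
# hypothetical placeholders.
def roi(countermeasure):
    benefit = sum(countermeasure["mitigated_cvss"])
    return benefit / countermeasure["cost"]

candidates = [
    {"name": "traffic-isolation", "mitigated_cvss": [7.5, 5.0], "cost": 4.0},
    {"name": "deep-packet-inspection", "mitigated_cvss": [7.5], "cost": 6.0},
    {"name": "block-port", "mitigated_cvss": [9.8, 7.5, 5.0], "cost": 2.0},
]
scores = {c["name"]: roi(c) for c in candidates}
print(scores)
```

Once each candidate has an ROI value, the selection rule described above can be applied directly over the resulting scores.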
The system performance of \\textit{NICE} is shown to be efficient in terms of network delay, CPU utilization, and traffic load. \nTo study the effect of a number of intrusions on the system's threat, Chowdhary \\textit{et al.}~ also use a {statistical approach} to evaluate how the number of intrusions in the system, together with the number of vulnerabilities, affects the threat score. The threat scoring algorithm is similar to the \\textit{Page Rank} algorithm. To evaluate the proposed work, two experiments were conducted. One experiment tests the threat scoring engine on {software vulnerabilities} and {IDS alerts}. The second experiment studies the {effect} of the {port hopping} countermeasure. The results show that as the number of services in the system increases, the service risk value remains within a specified interval. However, the first few services show an increase in risk value. Finally, port hopping showed a reduction in threat score. \n\\textbf{Policy Conflict Analysis:} MTD countermeasures such as network address switching can dynamically and rapidly insert new types of traffic, or new {flow rules} (in an environment managed by SDN). Pisharody \\textit{et al.}~ show how different MTD countermeasures such as {network address} change, {load-balancing}, and {intrusion detection} can cause {security policy violations}. The research works discuss {SDN-based MTD}, but the {policy conflicts} can cause {security violations}~, {loops}, and {blackholes} in the network, as discussed by Khurshid \\textit{et al.}~. Violations of {network-wide} invariants and {security policies} must be analyzed before the deployment of an MTD countermeasure. This research challenge is an important {quantitative metric}, which has not been considered by many research works. We discuss this as a possible {research opportunity} in Section~\\ref{research}. 
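A minimal sketch of the pairwise flow-rule conflict check that underlies such policy analyses follows; the rule fields, wildcard semantics, and example rules are simplified assumptions rather than the model of Pisharody et al.:

```python
# Hypothetical pairwise flow-rule conflict check: two rules conflict if their
# match spaces overlap but their actions differ, e.g. an MTD address change
# installing an ALLOW rule that overlaps an existing security DROP rule.
def overlaps(m1, m2):
    """Simplified overlap test: fields match if equal or either is a wildcard."""
    return all(m1[k] == m2[k] or "*" in (m1[k], m2[k]) for k in ("src", "dst", "port"))

def conflicts(rules):
    found = []
    for i, r1 in enumerate(rules):
        for r2 in rules[i + 1:]:
            if overlaps(r1["match"], r2["match"]) and r1["action"] != r2["action"]:
                found.append((r1["id"], r2["id"]))
    return found

rules = [
    {"id": "sec-drop", "match": {"src": "*", "dst": "10.0.0.5", "port": "22"}, "action": "DROP"},
    {"id": "mtd-allow", "match": {"src": "10.0.0.9", "dst": "10.0.0.5", "port": "22"}, "action": "ALLOW"},
    {"id": "lb-fwd", "match": {"src": "*", "dst": "10.0.0.7", "port": "80"}, "action": "FORWARD"},
]
print(conflicts(rules))  # [('sec-drop', 'mtd-allow')]
```

Production verifiers additionally reason about rule priorities and header-space geometry, but the pairwise overlap-plus-action test above captures the core of the check.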
\n\\input{figures/research_directions.tex}", "id": "47c112b3-997f-4055-a767-3ba4b112b9aa", "level": "subsubsection", "origin_cites_number": 26, "parent_id": "b89ed177-eb0b-4c87-85c3-011e9557e114", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Evaluating the Effectiveness of\\\\Moving Target Defenses" ], [ "subsection", "Quantitative Metrics" ], [ "subsubsection", "Security Metrics" ] ], "subsections": [], "title": "Security Metrics" }, { "cite_extract_rate": 0.1, "cites": [ 6795 ], "content": "} \nThis category as shown in the Figure~\\ref{fig:quantitative}, analyzes MTD research work from the aspects such as {QoS} (network bandwidth, delay), {impact} on existing mission metrics and the {cost} of deploying MTD defense.\n\\textbf{QoS Metrics:} MTD can induce some performance cost on the existing system resources. Jafarian \\textit{et al.}~ identify the {virtual IP} (vIP) {mutation}, {range allocation}, and {range distribution} constraints in order to minimize the QoS impact which can be induced by {vIP} collisions, as well as, maintain optimal-level of unpredictability.\nProbabilistic performance analysis of MTD reconnaissance defenses has been conducted by Crouse \\textit{et al.}~. The research work analyzes quantifiable MTD metrics such as {reconnaissance, deception} performance, \\emph{attack success probability} vs \\emph{connection drop probability}, attacker's success probability under different conditions such as {network-size}, {number of vulnerable computers}. \nTaylor et al. in~ use mission and attack metrics for analyzing the effectiveness of a network defense. The research work analyzes dynamic defenses such as {\\em Active Re-positioning in Cyberspace for Synchronized Evasion} (ARCSYNE) and {\\em Self-shielding Dynamic Network Architecture} (SDNA) using mission and adversary activity set. 
Mission \\emph{Success}, i.e., the rate at which mission tasks are completed, and Mission \\emph{Productivity}, i.e., how often mission tasks are successful, are used as {QoS} measurement metrics for the evaluations.\nA statistical analysis of static {\\em vs.} dynamic attacks against different MTD strategies-- uniform, random, diversity-based, evolution-based, and optimal-- has been conducted by Carter \\textit{et al.}~. Experimental results on performance {\\em vs.} adaptability show that diversity-based MTD is the optimal strategy against most attack scenarios. They also show that uncertainty about the adversary type-- slow adversary or fast-evolving adversary-- can adversely impact the effectiveness of an MTD.\nEl-Mir~ models {performance parameters} such as {availability}, {downtime}, and {downtime cost} using a {Continuous Time Markov Chain} (CTMC) model. The experimental results showed that cost-effective {VM migration} can be performed in an SDN-based network with {limited impact} on {network performance}. The research work utilizes the normalized CVSS score as a key metric for initiating VM migration. \nSengupta \\textit{et al.}~ analyze the performance impact of placing IDS (NIDS and HIDS) at all possible enforcement points in a cloud network. It is noteworthy that the placement of more than $15$ detection agents in their simulated network fails to provide any additional intrusion detection benefit, whereas the network throughput decreases drastically from $16$ Gbps in the case of a single detection agent to $\\sim 6$ Gbps when $15$ detection agents are placed. \nCHAOS~ analyzes how the delay intentionally introduced by MTD impacts the packet count in an SDN-managed network. The SDN controller utilizes host-mutation and {decoy-servers} to deceive an adversary. The obfuscated network increases the {cost} and {difficulty} for an adversary targeting the network. 
The percentage of {information disclosure} reduces from $90\\%$ to $10\\%$ in a CHAOS protected network, with slight impact on packet delay ($1$s to $1.5$s for $1800$ packets). \n\\textbf{Cost Metrics:} Protection against Distributed Denial of Service (DDoS) attacks is one of the important priorities for many cyber systems. Wang \\textit{et al.} present a cost-effective MTD solution against DDoS and {\\em Covert Channel} attacks. Through MTD adaptation, their work aims to answer two main questions: 1) What is the adaptation cost? and 2) What is the cost incurred by a defender if an attacker succeeds in exploiting a particular vulnerability? This solution does not rely on IDS-generated alerts while making the adaptation. The adaptation cost includes any cost related to purchasing required software or hardware that helps in the adaptation process. Lei \\textit{et al.}~ utilize a change-point analysis method for evaluating the MTD cost-benefit over a multi-layer network resource graph. The proposed method analyzes mission productivity ($\\Delta M$) and attack success productivity ($\\Delta A$) under dynamic network address translation (DNAT). The evaluation results show a reduced attack success probability using DNAT over a network under observation. The path enumeration mechanism used in this research work can, however, suffer from scalability challenges because of frequent path probability calculation and update operations. The cost and effectiveness evaluation of reactive and proactive network defense strategies has been conducted by Taylor \\emph{et al.}~ using Measurement of Effectiveness (MOE) metrics. The research work considers hop-delay for different attack success rates and static defense policies. They show that an attacker's productivity, i.e., how quickly an attacker can perform adversarial tasks, increases against a static defense, whereas the attacker's confidentiality, i.e., the ability to remain undetected, is the same for both the static and the dynamic defense case. 
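The QoS/cost trade-off running through this subsection can be illustrated by scoring defense configurations by the drop in attack success probability net of a weighted QoS cost; the configurations, weights, and numbers below are assumed purely for illustration:

```python
# Toy cost-benefit score for defense configurations: the benefit is the drop
# in attack success probability relative to a static baseline, and a weighted
# normalized QoS cost is subtracted.  All values are illustrative assumptions.
def net_benefit(cfg, baseline_attack_success, qos_weight=0.5):
    delta_attack = baseline_attack_success - cfg["attack_success"]
    return delta_attack - qos_weight * cfg["qos_cost"]

baseline = 0.9  # assumed attack success probability under a static defense
configs = [
    {"name": "static", "attack_success": 0.9, "qos_cost": 0.0},
    {"name": "dnat-60s", "attack_success": 0.3, "qos_cost": 0.4},
    {"name": "dnat-10s", "attack_success": 0.1, "qos_cost": 0.9},
]
best = max(configs, key=lambda c: net_benefit(c, baseline))
print(best["name"])  # dnat-60s
```

The sketch mirrors the recurring observation above: the most aggressive movement interval is not always optimal once its QoS cost is priced in.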
\n\\iffalse\nThe measurement of cyber risk using experience and vulnerability assessment tools based on the Common Vulnerability Scoring System (CVSS)~ is not enough.", "id": "bd7db867-f967-4c37-b045-1ea380490676", "level": "subsubsection", "origin_cites_number": 10, "parent_id": "b89ed177-eb0b-4c87-85c3-011e9557e114", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Evaluating the Effectiveness of\\\\Moving Target Defenses" ], [ "subsection", "Quantitative Metrics" ], [ "subsubsection", "Usability Metrics" ] ], "subsections": [], "title": "Usability Metrics" }, { "cite_extract_rate": 0, "cites": [], "content": "\\textit{Karypis and Kumar} research work utilizes an approach based on Parallel Hypergraph Partitioning \\index{parallel hypergraph partitioning} in order to create a scalable attack graph in real time and select MTD countermeasure - VM migration. The MTD strategy considers the available space on the target physical server and the threat level of the destination server before performing the VM migration.", "id": "cf123cac-4317-4df3-9ed0-64883812a554", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "b89ed177-eb0b-4c87-85c3-011e9557e114", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Evaluating the Effectiveness of\\\\Moving Target Defenses" ], [ "subsection", "Quantitative Metrics" ], [ "subsubsection", "SDN-based Scalable MTD in Cloud: Case Study" ] ], "subsections": [], "title": "SDN-based Scalable MTD in Cloud: Case Study" }, { "cite_extract_rate": 0, "cites": [], "content": "The current capabilities of the different MTD techniques have shown an alternating level of performance. Each MTD solution is evaluating and presenting the deployed technique by inspecting a particular, or a range, of security problems. \nOther techniques consider the cost of the deployed technique or solution for performance evaluation as well. 
Yet, for security or system administrators to decide which solution is the best fit for their organization's system, it is important to define and identify what configuration parameters are needed in terms of change, addition, and deletion. The decision of evaluating how effective an MTD system is depends on a number of factors, such as how feasible the MTD technique/solution is, what configurations need to be changed, if any, and what the cost of adopting such an MTD system is, whether it is the financial cost or the service cost of deploying the MTD. \n\\newline We emphasize and show the research work conducted to evaluate the MTD system technique and what the associated cost of adopting this solution is. Some researchers measure the effectiveness of an MTD technique by how it protects the system against a certain attack like . Others measure MTD effectiveness by the associated cost analysis, i.e., what configuration parameters need to be changed or adopted, and others analyze the MTD by incorporating other evaluation methods such as ARMs and testing metrics to allocate the best MTD strategy for network defenders. \nTo better understand the security situation of the system, strong and efficient security analysis parameters are needed. Zaffarano \\textit{et al.} state that a good evaluation of the MTD system requires 1) data collection on MTD effectiveness, and 2) metrics that analyze the collected data and provide effectiveness measurements. For this purpose, evaluation metrics were introduced to evaluate the system security from different points of interest by inspecting the configuration parameters of the test experiment, and then ``spawn\"~ the network to get the network information and activity and use a set of sensors to gather data. 
Unlike the traditional security evaluation metrics, which aim to highlight and study the effect of cyber-attacks against the traditional security component triangle-- confidentiality, integrity, and availability-- the work relies on these three aspects and additionally examines the operational risks and issues. The authors state that evaluating the MTD system requires a large number of machines and network setup, especially when we want to consider large-scale systems such as data center networks. Therefore, this kind of evaluation needs some sort of automatic network evaluation and construction. For this purpose, existing infrastructure such as \\textit{VMWare ESXi}~ and \\textit{VMWare vSphere}~ is utilized to automate the topology generation and to monitor the performance test administration. The research work depends on the activity models concept, which is a set of tasks that are currently being processed, the time each task took to complete processing, and whether it is being transmitted through the communication channel in plain-text or not. To ensure the metrics' accuracy for evaluating security for large-scale systems, the generated network topologies need to have different sizes and complexities. To address this concern, the authors indicated each network's properties on a mission-by-mission basis, such that each mission corresponds to a different number of topologies that aid the mission. \nThe activity model is represented by a set of tasks and task attributes. The tasks may include ordinary tasks for a user, such as surfing the internet and checking the email inbox, whereas task attributes include the duration of the task, the success of the task, whether the task's information was revealed or not, and the task's data validity. \nSecurity metrics were applied to data gathered from multiple runs that include the mission, the topology, the adversary model, and the MTD technique. The effectiveness metrics are productivity, success, confidentiality, and integrity. 
Productivity captures how much of a mission task is completed, or how fast the adversary can achieve their goal and perform malicious activity. Success implies whether or not the mission has succeeded in its intended purpose. Confidentiality is measured by how much information the mission was able to protect and prevent from leaking, and it is also a measurement of the attacker's ability to stay undetected under the radar. Finally, the integrity metric for the mission is determined by the rate of the information's integrity, and for the attack, it is how much the attacker is able to view information as accurately as possible. A formal representation of each metric was defined and explained; however, the paper lacks system evaluation and experiments, especially using different MTD techniques and approaches to show the validity of the proposed metrics. \nIn general, the adaptation cost involves the overall cost for the defender when changing one configuration state parameter to another one. On the other hand, the attacked cost is the cost of the defender when suffering from the attack, such as the services' downtime. \nThe authors presented an algorithm for cost-effective adaptation, which relies heavily on the assumption that the defender has the knowledge that they are being attacked. This assumption is realistic for some of the attacks that can be measured or identified, such as DDoS; however, for sophisticated attacks such as APT , this assumption does not hold true. The cost is therein measured by the cost of adaptation, which may include the cost of migrating a VM from one physical machine to another one or the cost of degrading the network bandwidth, or the cost might be the attacked cost.\nThe exact adaptation cost is measured by the \"average cost rate of media organizations\" . From a network bandwidth point of view, the latency for a small-sized organization is 200ms, 2s for a medium-sized one, and 20s for a large-sized one. 
Hence, the cost to deploy adaptation is 1\\$, 10\\$ and 100\\$ for small, medium and large-sized organizations, respectively.\\\\ \n\\begin{table*}[ht]\n\\centering\n\\caption{MTD effectiveness approaches}\n\\label{tab:5}\n\\begin{tabular}{|p{30mm}|p{80mm}|p{60mm}|}\n\\hline\n\\textbf{Research Work} & \\textbf{MTD Effectiveness Approach} & \\textbf{Drawback or Limitation} \\\\ \\hline\n Wang \\textit{et al.}~\n & The research work presents a solution to protect systems against DDoS attacks using MTD, by calculating the adaptation and attacked costs. & \n The approach relies on the assumption that the defender knows they are under attack, which doesn't hold true for sophisticated attacks such as APT.\\\\ \\hline\n Zaffarano \\textit{et al.}~\n & Introduces evaluation metrics to test the effectiveness of the MTD technique by evaluating productivity, success, confidentiality, and integrity. & No system evaluation nor experiments to show how effective the proposed metrics are. \\\\ \\hline\n Zhuang \\textit{et al.}~ & A simulation-based study to examine the effect of an MTD system under frequent changes of the system and configuration parameters. & There is no elaboration on the effect of these changes on the running service and network consumption, or on how such side-effects impact the results. \\\\ \\hline\n Hong \\textit{et al.}~\n & MTD cost analysis using importance measure metrics and ARMs. & Lacks evaluation against unknown attacks, such as zero-day attacks. Also, other cybersecurity risks, such as malware risks, are not identified. \\\\ \\hline\n\\end{tabular}\n\\end{table*}\nSome researchers state that the effectiveness of the MTD system is measured by the existence of some objective functions, according to . 
The collection of objective functions is maintained by having a number of analytical metrics that can capture and identify what is required by the attacker to attack, or exploit, the MTD system. Specifically, the analytic module's metrics must entail the set of configuration parameters that involve the attack, and then determine the quality of the MTD defending system. Frequent changes to the configuration parameters randomize the attack surface and thus increase the level of difficulty for the attacker. As the attack surface is always changing, even if the attacker has already acquired some information about the system, the frequent change will force them to restart the reconnaissance process. Moreover, the privileges gained by the attacker will be lost and need to be acquired again . \n To evaluate the proposed work by , simulation-based experiments were set up using the \\textit{NeSSi2} simulator, which allows for extensive complex applications by simulating the TCP/IP protocol. The authors defined a number of assumptions for their study: firstly, the attacker knows the basic system architecture and, once a node is compromised, immediately gains the privilege to the next node in the attack path. Second, the resource mapping system was designed to be either entirely random or based on an identification engine which scans the system for known vulnerabilities and possible attack paths. \n In other words, this system operates as a security policy enforcer. One possible drawback of the resource mapping system is that if the adversary compromises a critical node, they will be able to attack, and perhaps compromise, the resource mapping system. This critical issue is known as a single point of failure. The simulated system had three main components. \n \\begin{enumerate}\n \\item a Defense component which has the set of configuration parameters that will be used by the adaptation system. 
\n \\item an Attack component which is responsible for generating complex and comprehensive attack toward the target. \n \\item a Ground truth component which monitors the current system by inspecting a \\textit{conservative attack graph (CAG)} and analyzing the connection between the defense and attack components. \n \\end{enumerate}\n The research work is evaluated by simulating different attack for a certain time duration and then measuring the configuration manager adaptation's response time to this attack. The results show a significant reduction in the number of successful attacks when the configuration parameters are changed in a short time.", "id": "df9b9c32-f0a3-4fd5-81db-f4f27d7db268", "level": "subsection", "origin_cites_number": 8, "parent_id": "44ef66fe-9579-4293-ae61-6cf47c0da6cc", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Evaluating the Effectiveness of\\\\Moving Target Defenses" ], [ "subsection", "Effectiveness of Current MTD Techniques" ] ], "subsections": [], "title": "Effectiveness of Current MTD Techniques" }, { "cite_extract_rate": 0, "cites": [], "content": "One of the earliest research work that presented a formal cost analysis of MTD was conducted by \\textit{Hong and Kim} . Relating the MTD with security analysis by incorporating ARM, and \\textit{Hierarchical Attack Representation Models (HARM)} . MTD best strategy is examined and selected based on the lowest cost. MTD strategies are Shuffle, Diversity, and redundancy. The main security analysis was conducted by using AG, HARM, \\textit{Exhaustive search} (ES) method, and \\textit{importance measure method} (IM).\nShuffle technique does not change vulnerabilities that already exist in the system, but rather it modifies the network reachability by reassigning the IP addresses to the hosts. Therefore, The effect of the shuffle on HARM is only at the upper layer (the AG) since that part represents the reachability information. 
Other shuffling techniques, such as VM live migration, can be used to trick the attacker. One drawback of the shuffling techniques is that the analysis requires a high (usually exponential) time complexity. The scalability issue is dealt with by considering only one VM migration~. \nThe MTD diversity-based solution is displayed in the lower part of the HARM-structure (the AT). It mainly changes the vulnerabilities in the hosts. If an attacker has certain knowledge about some scanned vulnerabilities, they will need more time and perhaps a new technique to gather, analyze, and compromise a new set of vulnerabilities. In other words, it changes the attack surface. An assumption made while considering network hardening for MTD is that only the important nodes are considered, due to the limited capability the attacker may have and the frequent change in the services exposed to the public network. The number of target nodes may increase or decrease, and during the attack, it is not easy to determine which exact target node is or will be exploited first. To make an efficient decision on how, and which, node in the network should be protected, IM is used to calculate the \\textit{risk} and the \\textit{cost score} for this purpose. IM is calculated using the formula described by Hong \\textit{et al.} , which considers the vulnerability's base score, impact score, {\\em etc.} \nThere are a certain number of risks associated with each MTD strategy. The formula in Equation~\\ref{eq:risk} shows a calculation of the system's risk score ($R_{system}$) when deploying a VM live migration. $P_{goal_i}$ is the \\textit{probability of successful attack} on node \\textit{i}, and $I_{goal_i}$ is the impact of the attack on node \\textit{i}. 
\n\\begin{equation}\n R_{system} = \\sum_{i\\in N} P_{goal_i} \\times I_{goal_i}\n\\label{eq:risk}\n\\end{equation}\nMTD cost analysis in the HARM approach is tested through simulation using three approaches: AG, ES, and IM on the HARM. The simulation environment included two CloudBand clusters, where each can have up to 450 VMs, and both clusters are connected to a resource node (the target node). The results show that using the AG technique incurs exceptionally high exponential complexity. On the other hand, HARM using the ES approach also has exponential complexity, yet it is much better in comparison with AG. The optimal simulation result was achieved by testing HARM using the IM technique. \nA major limitation of the explained technique is that it relies heavily on the CVSS base score, which may not model the exact risks, especially for MTD techniques and VM live migration. Moreover, the approach only considers already discovered vulnerabilities; other types of risks such as malware or zero-day vulnerabilities are not studied. \nTable \\ref{tab:5} summarizes different evaluation approaches for MTD techniques. It can be seen that every research work observes and evaluates MTD from a different point of view. Yet, none of them considered the case of APT in MTD systems. Also, security analysis based on vulnerability information or score (\\textit{CVSS} for instance) does not provide a comprehensive assessment of how good or bad the deployed MTD technique is. An associated cost analysis is also required to allow the administrators to make an accurate decision on which MTD approach they should use. 
\n\\fi", "id": "a17883db-aaf2-425f-b082-9230eb87489d", "level": "subsection", "origin_cites_number": 4, "parent_id": "44ef66fe-9579-4293-ae61-6cf47c0da6cc", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Evaluating the Effectiveness of\\\\Moving Target Defenses" ], [ "subsection", "Cost-Benefit Analysis" ] ], "subsections": [], "title": "Cost-Benefit Analysis" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{research}\nIn this section, we highlight some lessons learnt and the under explored aspects of Moving Target Defenses (MTDs). These lead to a discussion on promising directions for future research (a brief summary is highlighted in Figure \\ref{fig:fw}).", "id": "56f30f45-b9e4-4c67-88a4-c18316a604e6", "level": "section", "origin_cites_number": 0, "parent_id": "18148880-f549-40b4-8c44-171d9433ba0c", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Research Opportunities" ] ], "subsections": [ "278045cc-6500-47d4-a2ee-31a6d04b6624", "1b5c7f27-a025-4527-bac4-1fcbf10d1a2a", "4e62c8e1-31a2-4800-9e42-0f041ae8cd49", "84d73f36-3445-449d-8506-b17ab44eec43", "4cd35799-5b56-49aa-92ab-150094ae08c0" ], "title": "Research Opportunities" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 6792 ], "content": "Works in MTD have concentrated mostly on the movement of the exploration, the detection and the attack surface. While moving the exploration and the attack surface helps a defender take away the advantage of reconnaissance that an attacker has, the goal of moving the detection surface is to improve the scalability and Quality of Service (QoS).\nThe movement of the prevention surface, which comprises of security modules such as Firewalls, IPS {\\em etc.} has only been investigated by a couple of works (see discussion in Section~\\ref{prev}). 
MTD research could explore {Next-Generation Firewall} (NGFW) architectures that combine security modules such as {firewall}, {content filter}, {anti-virus} {\\em etc.} to provide a multi-layered defense-in-depth solution. Some current implementations of NGFW, which can be leveraged for testing the effectiveness of these defenses, are \\emph{Cisco ASA}~ and \\emph{PAN Firewall}~. \n{\nFurther, with the rise of mobile technologies, MTDs can prove to be effective defenses.\nAlthough we discuss a few works on thwarting jamming attacks and identity shifting in mobile ad-hoc networks , MTDs for different surfaces in mobile infrastructure networks can be a promising direction for future research.}\nAs previously stated, the movement of different surfaces in a single framework, although challenging, can provide greater security benefits than the movement of a single surface. Investigation in this area would require one to identify sets of configurations across the various surfaces that are compatible (in terms of performance) with one another. This prevents the number of strategies from multiplying uncontrollably by leveraging the expertise of system designers.\nThe categorization of the various software surfaces opens up the possibility of considering other logical surface-level distinctions and, thus, MTDs that shift these surfaces. For example, \\textit{Microsegmentation}~ is a method of creating {secure zones} in data centers and cloud deployments to isolate workloads from one another and secure them individually. With MTD formalism, one can test existing hypotheses and develop new ones for microsegmentation. 
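The idea above of identifying compatible configuration sets across surfaces can be sketched as a pruned cross-product. The surface names, configuration options, and the compatibility rule below are hypothetical, chosen only to illustrate how designer knowledge keeps the joint strategy space from multiplying uncontrollably:

```python
# Sketch of building a joint configuration set for a multi-surface MTD.
# All surface/option names and the constraint are assumed examples.
from itertools import product

surfaces = {
    "detection":  ["snort", "suricata"],
    "prevention": ["iptables", "ngfw"],
    "attack":     ["vm_a", "vm_b", "vm_c"],
}

def compatible(cfg):
    # Assumed designer constraint: the NGFW option has only been
    # performance-validated together with the suricata detector.
    return not (cfg["prevention"] == "ngfw" and cfg["detection"] != "suricata")

names = list(surfaces)
joint = [
    dict(zip(names, combo))
    for combo in product(*surfaces.values())
    if compatible(dict(zip(names, combo)))
]

print(len(joint), "of", 2 * 2 * 3, "joint configurations survive")
```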
We believe that, with formal modeling in line with , one might discover that advanced services are more effective when applied at a {granular level} (as close to the application as possible, in a distributed manner).", "id": "278045cc-6500-47d4-a2ee-31a6d04b6624", "level": "subsection", "origin_cites_number": 6, "parent_id": "56f30f45-b9e4-4c67-88a4-c18316a604e6", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Research Opportunities" ], [ "subsection", "The Configuration Set (What to move?)" ] ], "subsections": [], "title": "The Configuration Set (What to move?)" }, { "cite_extract_rate": 0, "cites": [], "content": "{\nThe timing problem has mostly been ignored in previous works on MTDs. While some works perform empirical studies to test the best (constant) time-period for switching, the results can be highly specific to the threat model and the elements being shifted. For example, while 15-second time-periods are shown to be reasonable when protecting against jamming attacks , 60-second time-periods are needed to defend against Network Mapping attacks. Also, the complexity of changing the virtual IP address vs. the underlying virtual machine imposes different constraints on the lower bound on feasible time-periods.\n}\n{\nNote that the handful of works that empirically determine time-periods are by no means complete. We need an extensive study just to come up with reasonable time-periods for particular surfaces, understand how they affect the attack model, and provide guidelines on efficient implementation methods.\n}\nOn the other hand, the few existing approaches that do address the timing problem theoretically suffer from scalability issues. In , the authors end up increasing the number of states in their Markov Game by including time as a parameter. Inferring the defender's strategy in such Markov Games prevents the MTD from working beyond small networks. 
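The simplest timing function discussed above, switching at a constant period, can be sketched as follows. The period and configuration names are illustrative, and the `switch` callback stands in for whatever deployment mechanism (IP remap, VM migration, etc.) a real MTD would use:

```python
# Sketch of a constant time-period MTD switcher. The feasible lower bound
# on period_s depends on what is being moved: an IP remap is far cheaper
# than a VM live migration. All values here are illustrative.
import random
import time

def run_mtd(configs, period_s, switch, rounds):
    current = random.choice(configs)
    for _ in range(rounds):
        time.sleep(period_s)                               # wait out the time-period
        nxt = random.choice([c for c in configs if c != current])
        switch(current, nxt)                               # deploy the move
        current = nxt
    return current

moves = []
final = run_mtd(["ip_10", "ip_20", "ip_30"], period_s=0.01,
                switch=lambda old, new: moves.append((old, new)), rounds=5)
# After 5 rounds, `moves` records each (old, new) transition.
```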
Effective solutions or improved modeling could both be interesting research directions for the future.", "id": "1b5c7f27-a025-4527-bac4-1fcbf10d1a2a", "level": "subsection", "origin_cites_number": 3, "parent_id": "56f30f45-b9e4-4c67-88a4-c18316a604e6", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Research Opportunities" ], [ "subsection", "The timing function (When to Move?)" ] ], "subsections": [], "title": "The timing function (When to Move?)" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 6800, 6792 ], "content": "Randomization in the movement is a necessary part of effective MTDs. Given the extensive use of randomization in cryptography, MTD defenses tend to use a Uniform Random Strategy (URS) as the movement strategy . Several works have argued that the defender often has performance information about their system and knowledge of attacks that can be used to exploit it. In such cases, game-theoretic modeling can result in better strategies that offer higher gains in terms of both security and performance metrics. Unfortunately, the latter works suffer from scalability issues, encouraging practitioners to default to URS. Works that can leverage existing knowledge without suffering from scalability issues can fill an important gap in MTD research.\nRecently, it has been shown that attacks on networks are better characterized by persistent approaches such as APTs. In such scenarios, attackers are known to demonstrate sequential attack behavior spread over a long time.\nA relatively new line of work investigates modeling MTDs to come up with strategies that are effective against APTs. Although these approaches leverage the use of attack graphs to bootstrap the modeling process , they do not scale well. While real-world cloud services host thousands of nodes, researchers have only considered small-scale settings . 
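The contrast between the URS baseline and a knowledge-informed movement strategy discussed above can be sketched in a few lines. The per-configuration exploit probabilities are assumed values used only to show why weighting by defender knowledge can lower the attacker's expected success:

```python
# Sketch: Uniform Random Strategy (URS) vs. a weighted strategy that
# prefers configurations assumed harder to exploit. Probabilities are
# illustrative assumptions, not measurements.
import random

random.seed(42)

configs = {"c1": 0.30, "c2": 0.10, "c3": 0.60}  # assumed P(attack succeeds | config)

def urs():
    return random.choice(list(configs))

def weighted():
    weights = [1.0 - p for p in configs.values()]  # favor harder-to-exploit configs
    return random.choices(list(configs), weights=weights)[0]

def expected_success(strategy, trials=50_000):
    return sum(random.random() < configs[strategy()] for _ in range(trials)) / trials

urs_rate = expected_success(urs)            # analytically (0.30+0.10+0.60)/3 = 0.333...
weighted_rate = expected_success(weighted)  # analytically 0.27 under these weights
```

Of course, a static weighting is itself predictable; the game-theoretic works cited above compute equilibrium strategies that remain randomized while exploiting such knowledge.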
Furthermore, works such as and that try to consider partial observability limit the scalability even further. Studying the design of approximation approaches and their effectiveness when compared to scalable baselines such as URS will be key for future research.\nCurrent research often makes strong assumptions about the threat model. This makes results regarding the effects of such defenses questionable. In the future, we hope to see more studies that try to figure out realistic attack scenarios. Figuring out an attacker's behavioral model might also lead the modeling community to relax the rationality assumptions often made about attackers.", "id": "4e62c8e1-31a2-4800-9e42-0f041ae8cd49", "level": "subsection", "origin_cites_number": 7, "parent_id": "56f30f45-b9e4-4c67-88a4-c18316a604e6", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Research Opportunities" ], [ "subsection", "The Movement Function (How to Move?)" ] ], "subsections": [], "title": "The Movement Function (How to Move?)" }, { "cite_extract_rate": 0, "cites": [], "content": "{\nAs seen in Figure \\ref{fig:qual}, none of the existing works empirically demonstrate or model the impact of a proposed MTD on all four types of metrics. The lack of testing against real-world attackers also makes it difficult to prioritize which metrics a defender needs to care about and to what extent.\n}\n {\nBeyond human studies to understand attack behavior, the use of MTDs may itself introduce a new attack surface in cyber-systems.}\nFor example, firewall filtering rules need to be carefully analyzed to prevent conflicting security policies that may arise at movement time, because such conflicts might result in either dropping legitimate user packets or introducing new attack points. 
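A minimal sketch of the kind of post-move policy check implied above is shown below. The rule format (action, destination IP, port, with `None` meaning "any port") is hypothetical; a real check would operate on actual firewall or OpenFlow rule syntax:

```python
# Sketch: flag rule pairs that target the same host with overlapping
# ports but contradictory actions -- the kind of conflict that can be
# injected when an IP is moved at MTD countermeasure time.

def conflicts(rules):
    found = []
    for i, (act_a, ip_a, port_a) in enumerate(rules):
        for act_b, ip_b, port_b in rules[i + 1:]:
            same_host = ip_a == ip_b
            overlap = port_a is None or port_b is None or port_a == port_b
            if same_host and overlap and act_a != act_b:
                found.append(((act_a, ip_a, port_a), (act_b, ip_b, port_b)))
    return found

policy = [
    ("allow", "10.0.0.2", 443),   # original rule: only HTTPS to this IP
    ("deny",  "10.0.0.2", None),  # assumed rule injected after the move
]
issues = conflicts(policy)  # the allow-tcp/443 and deny-all rules collide
```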
Although some works such as have tried to address this issue of identifying security policy conflicts for an SDN-managed cloud network, it is not immediately clear how they can be adapted in the context of other MTDs. A clear understanding of when such scenarios arise, and ways to address them, would constitute an important line of research in the future.\nA key idea is to have a continuous feedback cycle that verifies the security policy in place after MTD-countermeasure deployment. This can be done by running end-to-end integration and regression tests for the various use-cases pertaining to network traffic. Another solution could be to incorporate the policy conflicts that might arise into the modeling of the MTD. This would produce {\\em safe} movement policies that foresee the use of MTD as a new attack surface and avoid policy conflicts.\n\\iffalse\nWe now highlight an example to motivate such a scenario and then briefly talk about a few works that can be.\nIn the data center network shown in Figure~\\ref{fig:11}, we have \\emph{Tenant A} hosting a web farm. Traffic on \\emph{TCP port 443} is allowed into the IP addresses for the web servers. When an attack directed against \\emph{host A2} is detected, the MTD application responds with countermeasures and takes two actions: \n\\begin{itemize}\n \\item a new webserver ({host A3}) is spawned to handle the load of host A2; and \n \\item the IP for host A2 is migrated to the Honeypot network and assigned to \\textit{host Z}.\n\\end{itemize}\nIn order to run forensics and isolate an attacker, the \\textit{HoneyPot} network permits all in-bound traffic, but allows no out-bound traffic to other sections of the data center. 
These actions result in new flow rules being injected into the flow table that \n\\begin{itemize}\n \\item permits all traffic inbound to the IP that originally belonged to host A2, but now belongs to host Z.\n \\item modifies an incoming packet’s destination address from host A2 to host A3 if the source is considered to be a non-adversarial source.\n \\item stops all outbound traffic from the IP that originally belonged to host A2 but now belongs to host Z to the rest of the data center.\n \\item permits traffic on port 443 to host A3 (not of great importance to our case). The original policy allowing only port 443 to the IP of host A2, and the new policy allowing all traffic to the IP address of host Z are now in conflict.\n\\end{itemize}\n\\fi", "id": "84d73f36-3445-449d-8506-b17ab44eec43", "level": "subsection", "origin_cites_number": 1, "parent_id": "56f30f45-b9e4-4c67-88a4-c18316a604e6", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Research Opportunities" ], [ "subsection", "Qualitative and Quantitative Evaluation" ] ], "subsections": [], "title": "Qualitative and Quantitative Evaluation" }, { "cite_extract_rate": 1, "cites": [ 6792 ], "content": "An important goal of this survey is to establish a common terminology for MTD researchers. Thus, we try to categorize an array of existing works in this terminology. For example, consider the work . This MTD can be categorized as a moving target defense for the movement of detection surfaces with fixed interval switching formulated as a multi-stage game that performs simulation studies of simple use-cases and measures the security and performance of individual defenses in these settings.\nAn interesting idea would be to turn this goal of ours on its head and explore the design of new MTDs based on the permutations of the various categorization aspects designed in this survey. 
For example, a hybrid surface shifting MTD that (1) shifts the detection surface and then based on a stochastic environment, shifts the prevention surface, (2) models this problem as a two-step game, (3) considers rewards that incorporate performance of individual actions and security of the ensemble, and (4) showcases experimental results on an emulated testbed is a novel Moving Target Defense.", "id": "4cd35799-5b56-49aa-92ab-150094ae08c0", "level": "subsection", "origin_cites_number": 1, "parent_id": "56f30f45-b9e4-4c67-88a4-c18316a604e6", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Research Opportunities" ], [ "subsection", "Proposing novel Moving Target Defenses" ] ], "subsections": [], "title": "Proposing novel Moving Target Defenses" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{concl}\nIn this survey, we looked at various Moving Target Defenses (MTDs) that have been proposed for enhancing network security. First, we categorized them based on {\\em what} surfaces these defenses move (the configuration set), {\\em when} a move operation occurs (the timing function), and {\\em how} they move between the different constituent system configurations (the movement function). In doing so, we highlight how the movement of particular software surfaces is linked to Advanced Persistent Threats; thereby, allowing us to understand how the various MTDs can help thwart real-world sophisticated attacks. The dearth of works that consider the simultaneous movement of different surfaces points to an exciting research direction and possibly, the invention of more effective MTDs. In answering {\\em how to move}, we notice that many approaches leverage Artificial Intelligence (AI) methods in general and game-theoretic techniques in particular for crafting intelligent movement strategies.\nSecond, we discuss how these MTDs can be implemented in practice. 
We find that the use of centralized technologies like Software Defined Networking (SDN) helps in implementing the MTD countermeasures with limited impact on network performance. We showcase how the surveyed MTDs are implemented in the context of real-world systems ranging from simulation studies to use in commercial products. We highlight the key technologies leveraged by the various MTDs, the layers of the network protocol stack at which an MTD is effective, and the level of maturity at which it is implemented. We briefly described a few test-beds that have been leveraged by existing MTDs and encourage researchers to use them for evaluating the effectiveness of a proposed MTD. We conclude that (1) SDN/NFV is a dominant technology used by MTDs and (2) industrial adoption of MTD solutions is still limited to a few application-security products.\nThird, we discuss various metrics used for measuring the effectiveness of MTDs. We put forth two categorizations based on whether these metrics (1) consider security and performance impact of the designed system and (2) are used in the modeling phase to generate particular behavior {\\em vs.} used for evaluation of the MTD.\nWe notice that none of the proposed MTDs consider all the metrics we put forth. One wonders if a defense that models all the proposed metrics can be realized in practice and, if so, prove to be better than existing defenses both in terms of performance and security.\nLastly, we highlight areas of network security where the scope of developing MTDs hasn't been investigated. We believe that coming up with defenses which can fill these gaps will improve security and/or performance aspects of various network systems. 
We conclude by showcasing how our categorization provides a common terminology for researchers to describe existing MTDs and develop future ones.", "id": "2db22e7b-e773-4555-9928-0389f288806d", "level": "section", "origin_cites_number": 0, "parent_id": "18148880-f549-40b4-8c44-171d9433ba0c", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" }, { "cite_extract_rate": 0, "cites": [], "content": "This research is supported in part by the following research grants: Naval Research Lab N00173-15-G017, AFOSR grant FA9550-18-1-0067, the NASA grant NNX17AD06G, ONR grants N00014-16-1-2892, N00014-18-1-2442, N00014-18-12840, NSF—US DGE-1723440, OAC-1642031, SaTC-1528099, 1723440 and NSF—China 61628201 and 61571375 and a JP Morgan AI Research Faculty Award. Sailik Sengupta is supported by the IBM Ph.D. Fellowship. Also, Abdulhakim Sabur is a scholarship recipient from Taibah University through the Saudi Arabian Cultural Mission (SACM).\n\t\\ifCLASSOPTIONcaptionsoff\n\t\\newpage\n\t\\fi\n\t\\bibliographystyle{IEEEtran}\n\t\\bibliography{main}\n\t\\begin{IEEEbiography}[{\\includegraphics[width=0.8in,height=1.25in,clip,keepaspectratio]{bio/sailik}}]{Sailik Sengupta} is a Ph.D. student in Computer Science at Arizona State University. He received his B.E. in Computer Science and Engineering from Jadavpur University (2013) and then worked for Amazon where he became an application-security certifier. Sailik is interested in solving problems that arise in multi-agent settings where the participating agents are non-cooperative or adversarial. He has looked at challenges in cyber-security, adversarial machine learning, and human-AI interaction. Sailik was awarded the IBM Ph.D. fellowship in 2018.\n\t\\end{IEEEbiography}\n\t\\begin{IEEEbiography}[{\\includegraphics[width=0.8in,height=1.25in,clip,keepaspectratio]{bio/ankur}}]{Ankur Chowdhary}\n\t\tis a Ph.D. 
candidate in Computer Science at Arizona State University, Tempe, AZ, USA. He received a B.Tech in Information Technology from GGSIPU in 2011 and an MS in Computer Science from ASU in 2015. He has worked as an Information Security Researcher for Blackberry Ltd. and RSG, and as an Application Developer for CSC Pvt. Ltd. His research interests include SDN, Web Security, Network Security, and the application of AI and Machine Learning in the field of Security.\n\t\\end{IEEEbiography}\n\t\\begin{IEEEbiography}[{\\includegraphics[width=0.8in,height=1.25in,clip,keepaspectratio]{bio/sabur}}]{Abdulhakim Sabur}\n\tis a Ph.D. student in Computer Engineering at Arizona State University, Tempe, AZ, USA. He received his B.S. degree (with Honors) in Computer Science and Engineering from King Saud University, Saudi Arabia in 2015 and a Master's degree in Computer Engineering from Arizona State University in 2018. He worked as a researcher in King Abdulaziz City for Science and Technology (KACST) and a teaching assistant at Taibah University. His research interests include network and information security, vulnerability analysis and management, and automated policy and security checking in software defined networking systems.\n\t\\end{IEEEbiography}\n\t\\begin{IEEEbiography}[{\\includegraphics[width=0.8in,height=1.25in,clip,keepaspectratio]{bio/adel-new}}]{Adel Alshamrani}\n\t\tis an assistant professor in the department of cybersecurity, College of Computer Science and Engineering at University of Jeddah, Jeddah, Saudi Arabia. He received his B.S. degree in computer science from Umm Al-Qura University, Saudi Arabia in 2007, M.S. degree in computer science from La Trobe University, Melbourne, Australia, in 2010, and Ph.D. in computer science from Arizona State University in 2018. He has eight years of work experience in information security, network engineering, and teaching while working in the Faculty of Computing and Information Technology, King Abdul Aziz University, and University of Jeddah. 
His research interests include information security, intrusion detection, and software defined networking. He is the Chief Information Security Officer (CISO) at the University of Jeddah. \n\t\\end{IEEEbiography}\n \\begin{IEEEbiography}[{\\includegraphics[width=0.8in,height=1.25in,clip,keepaspectratio]{bio/huang10}}]{Dijiang Huang}\n\t\treceived the B.S. degree from Beijing University of Posts and Telecommunications, China, and the M.S. and Ph.D. degrees from the University of Missouri, Kansas in 1995, 2001, and 2004, respectively. He is an Associate Professor with the School of Computing, Informatics, and Decision Systems Engineering, Arizona State University. His research interests include computer networking, security, and privacy.\tHe is an Associate Editor of the Journal of Network and System Management (JNSM) and the IEEE Communications Surveys and Tutorials. He has also served as the chair at multiple international conferences and workshops. His research was supported by the NSF, ONR, ARO, NATO, and Consortium of Embedded System (CES). He was the recipient of the ONR Young Investigator Program\t(YIP) Award.\n\t\\end{IEEEbiography}\n\t\\begin{IEEEbiography}[{\\includegraphics[width=0.8in,height=1.25in,clip, keepaspectratio]{bio/rao-econ}}]{Subbarao Kambhampati (Rao)}\n \t is a professor of Computer Science at Arizona State University. He received his B.Tech. in Electrical Engineering (Electronics) from Indian Institute of Technology, Madras (1983), and M.S. (1985) and Ph.D. (1989) degrees in Computer Science from the University of Maryland, College Park. Kambhampati studies fundamental problems in planning, decision making, and game theory. Kambhampati is a fellow of AAAI, AAAS and ACM, and was an NSF Young Investigator. He received multiple teaching awards, including a university last lecture recognition. Kambhampati is the past president of AAAI and was a trustee of IJCAI. 
He was the program chair for IJCAI 2016, ICAPS 2013, AAAI 2005 and AIPS 2000 and served on the board of directors of Partnership on AI. Kambhampati's research, as well as his views on the progress and societal impacts of AI, have been featured in multiple national and international media outlets.\n\t\\end{IEEEbiography}\n\\end{document}", "id": "f9a0f0c3-0550-4003-8ffe-cd2e2130e47d", "level": "section", "origin_cites_number": 0, "parent_id": "18148880-f549-40b4-8c44-171d9433ba0c", "prefix_titles": [ [ "title", "A Survey of Moving Target Defenses for\\\\Network Security" ], [ "section", "Acknowledgement" ] ], "subsections": [], "title": "Acknowledgement" } ]
2
[ 9076, 314, 3854, 6792, 6794, 6795, 6793, 6797, 6796, 6798, 6799, 6800 ]
1.432677
[ "Marios Fragkoulis", "Paris Carbone", "Vasiliki Kalavri", "Asterios Katsifodimos" ]
A Survey on the Evolution of Stream Processing Systems
2020
2020-08-03T12:43:46Z
cs.DC
Stream processing has been an active research field for more than 20 years, but it is now witnessing its prime time due to recent successful efforts by the research community and numerous worldwide open-source communities. This survey provides a comprehensive overview of fundamental aspects of stream processing systems and their evolution in the functional areas of out-of-order data management, state management, fault tolerance, high availability, load management, elasticity, and reconfiguration. We review noteworthy past research findings, outline the similarities and differences between the $1^{st}$ ('00-'10) and $2^{nd}$ ('11-'22) generation of stream processing systems, and discuss future trends and open problems.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "1912a489-c536-4d4e-8859-51a6be4f79c3", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ] ], "subsections": [ "f9bc4b42-58de-468c-aa34-a7fafef466f0", "a074e6de-4533-47f5-a731-9ece78d436aa", "2af1dcb8-7a81-4897-bd68-275168901eef", "859d816d-87a6-416f-a8b8-81462a1d1466", "60b7aebd-8cd3-47cb-a168-613a863dea95", "b7e50fcf-514d-4094-9402-61a6721fb408", "456f6722-b0ce-464c-a64d-a5bb4343d94d", "ac790ee5-82fb-4e06-9845-dd189aefb008" ], "title": "root" }, { "cite_extract_rate": 0.035714285714285005, "cites": [ 4757 ], "content": "Applications of stream processing technology have gone through a resurgence, penetrating multiple and very diverse industries. Nowadays, virtually all Cloud vendors offer first-class support for deploying managed stream processing pipelines, while streaming systems are used in a variety of use-cases that go beyond the classic streaming analytics (windows, aggregates, joins, etc.). For instance, web companies are using stream processing for dynamic car-trip pricing, banks apply it for credit card fraud detection, while traditional industries apply streaming technology for real-time harvesting analytics. \nAt the moment of writing we are witnessing a trend towards using stream processors to build more general event-driven architectures , large-scale continuous ETL and analytics, and microservices .\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=.75\\linewidth]{Figures/overview.pdf}\n\\caption{\\small An overview of the evolution of stream processing and respective domains of focus.}\n\\label{fig:overview}\n\\end{figure*}\nDuring the last 20 years, streaming technology has evolved significantly, under the influence of database and distributed systems. 
The notion of streaming queries was first introduced in 1992 by the Tapestry system~, and was followed by extensive research on stream processing in the early 00s. Fundamental concepts and ideas originated in the database community and were implemented in prototypical systems such as TelegraphCQ~, Stanford's STREAM, NiagaraCQ~, Aurora/Borealis~, and Gigascope~. Although these prototypes roughly agreed on the data model, they differed considerably on querying semantics . This research period also introduced various systems challenges, such as sliding window aggregation , fault-tolerance and high-availability , as well as load balancing and shedding . This first wave of research was highly influential on commercial stream processing systems that were developed in the following years (roughly during 2004 -- 2010), such as IBM System S, Esper, Oracle CQL/CEP and TIBCO. These systems focused -- for the most part -- on streaming window queries and Complex Event Processing (CEP). This era of systems was mainly characterized by scale-up architectures, processing ordered event streams.\nThe second generation of streaming systems was a result of research that started roughly after the introduction of MapReduce~ and the popularization of Cloud Computing. The focus shifted towards distributed, data-parallel processing engines and shared-nothing architectures on commodity hardware. Lacking well-defined semantics and a proper query language, systems like MillWheel , Storm~, Spark Streaming~, and Apache Flink~ first exposed primitives for expressing streaming computations as hard-coded dataflow graphs and transparently handled data-parallel execution on distributed clusters. Hugely influential, the Google Dataflow model re-introduced older ideas such as out-of-order processing and punctuations~, proposing a unified parallel processing model for streaming and batch computations. 
Stream processors of this era are converging towards fault-tolerant, scale-out processing of massive out-of-order streams.\nFigure~\\ref{fig:overview} presents a schematic categorization of influential streaming systems into three generations and highlights each era's domains of focus. Although the foundations of stream processing have remained largely unchanged over the years, stream processing systems have transformed into sophisticated and scalable engines, producing correct results in the presence of failures. Early systems and languages were designed as extensions of relational execution engines, with the addition of windows. Modern streaming systems have evolved in the way they reason about completeness and ordering (e.g., out-of-order computation) and have witnessed architectural paradigm shifts that constituted the foundations of processing guarantees, reconfiguration, and state management. At the moment of writing, we observe yet another paradigm shift towards general event-driven architectures, actor-like programming models and microservices~, and a growing use of modern hardware~.\nThis survey is the first to focus on the evolution of streaming systems rather than the state of the field at a particular point in time. To the best of our knowledge, this is also the first attempt at understanding the underlying reasons why certain early techniques and designs prevailed in modern systems while others were abandoned. 
Further, by examining how ideas survived, evolved, and were often re-invented, we reconcile the terminology used by the different generations of streaming systems.", "id": "f9bc4b42-58de-468c-aa34-a7fafef466f0", "level": "section", "origin_cites_number": 28, "parent_id": "1912a489-c536-4d4e-8859-51a6be4f79c3", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Introduction" ] ], "subsections": [ "e07ad078-3971-4d39-be8b-bc99cd7f9940", "fcf8b532-0bf6-44b4-8136-b7602b1b6449", "4bbf405c-f271-483c-9f3d-b320ee98d639" ], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "With this survey paper, we make the following contributions:\n\\begin{itemize}\n \\item We summarize existing approaches to streaming systems design and categorize early and modern stream processors in terms of underlying assumptions and mechanisms.\n \\item We compare early and modern stream processing systems with regard to out-of-order data management, state management, fault-tolerance, high availability, load management, elasticity, and reconfiguration.\n \\item We highlight important but overlooked works that have influenced today's streaming systems design.\n \\item We establish a common nomenclature for fundamental streaming concepts, often described by inconsistent terms in different systems and communities.\n\\end{itemize}", "id": "e07ad078-3971-4d39-be8b-bc99cd7f9940", "level": "subsection", "origin_cites_number": 0, "parent_id": "f9bc4b42-58de-468c-aa34-a7fafef466f0", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Introduction" ], [ "subsection", "Contributions" ] ], "subsections": [], "title": "Contributions" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 4758, 4759 ], "content": "We view the following surveys as complementary to ours and recommend them to readers interested in diving deeper into a particular aspect of stream processing or those 
who seek a comparison between streaming technology and advances from adjacent research communities.\n\\setlength{\\tabcolsep}{8pt}\n\\begin{table*}[ht]\n\\centering\n\\caption{Evolution of streaming systems}\n\\label{tab:evolution-streaming}\n\\begin{tabular}{p{0.3\\textwidth}\n p{0.4\\textwidth}\n p{0.4\\textwidth}\n }\n\\hline\n\\multicolumn{1}{l}{\\textbf{}} &\n\\multicolumn{1}{l}{\\textbf{1st generation}} &\n\\multicolumn{1}{l}{\\textbf{2nd-3rd generation}}\n\\\\\n\\hline\n\\multicolumn{1}{l}{\\textbf{Results}}\n& \\multicolumn{1}{e}{approximate or exact}\n& \\multicolumn{1}{d}{exact}\n\\\\\n\\multicolumn{1}{l}{\\textbf{Language}}\n& \\multicolumn{1}{e}{SQL extensions, CQL}\n& \\multicolumn{1}{d}{UDF-heavy -- Java, Scala, Python, SQL-like, etc.}\n\\\\\n\\multicolumn{1}{l}{\\textbf{Query plans}}\n& \\multicolumn{1}{e}{global, optimized, with pre-defined operators}\n& \\multicolumn{1}{d}{independent, with custom operators}\n\\\\\n\\multicolumn{1}{l}{\\textbf{Execution}}\n& \\multicolumn{1}{e}{(mostly) scale-up}\n& \\multicolumn{1}{d}{distributed}\n\\\\\n\\multicolumn{1}{l}{\\textbf{Parallelism}}\n& \\multicolumn{1}{e}{pipeline}\n& \\multicolumn{1}{d}{data, pipeline, task}\n\\\\\n\\multicolumn{1}{l}{\\textbf{Time \\& progress}}\n& \\multicolumn{1}{e}{heartbeats, slack, punctuations}\n& \\multicolumn{1}{d}{low-watermark, frontiers}\n\\\\\n\\multicolumn{1}{l}{\\textbf{State management}}\n& \\multicolumn{1}{e}{shared synopses, in-memory}\n& \\multicolumn{1}{d}{per query, partitioned, persistent, larger-than-memory}\n\\\\\n\\multicolumn{1}{l}{\\textbf{Fault tolerance}}\n& \\multicolumn{1}{e}{HA-focused, limited correctness guarantees}\n& \\multicolumn{1}{d}{distributed snapshots, exactly-once}\n\\\\\n\\multicolumn{1}{l}{\\textbf{Load management}}\n& \\multicolumn{1}{e}{load shedding, load-aware scheduling}\n& \\multicolumn{1}{d}{backpressure, elasticity}\n\\\\\n\\hline\n\\end{tabular}\n\\end{table*}\nCugola and Margara~ provide a view of stream processing with regard
to related technologies, such as active databases and complex event processing systems, and discuss their relationship with data streaming systems. Further, they provide a categorization of streaming languages and streaming operator semantics. The language aspect is further covered in another recent survey~, which focuses on the languages developed to address the challenges in very large data streams. It characterizes streaming languages in terms of data model, execution model, domain, and intended user audience. R{\\\"o}ger and Mayer~ present an overview of recent work on parallelization and elasticity approaches of streaming systems. They define a general system model which they use to introduce operator parallelization strategies and parallelism adaptation methods. Their analysis also aims at comparing elasticity approaches originating in different research communities. Hirzel et al.~ present an extensive list of logical and physical optimizations for streaming query plans. They present a categorization of streaming optimizations in terms of their assumptions, semantics, applicability scenarios, and trade-offs. They also present experimental evidence to reason about profitability and guide system implementers in selecting appropriate optimizations. To, Soto, and Markl~ survey the concept of state and its applications in big data management systems, covering also aspects of streaming state.\nFinally, Dayarathna and Perera~ present a survey of the advances of the last decade with a focus on system architectures, use-cases, and hot research topics. They summarize recent systems in terms of their features, such as what types of operations they support, their fault-tolerance capabilities, their use of programming languages, and their best reported performance.\nTheoretical foundations of streaming data management and streaming algorithms are out of the scope of this survey. 
A comprehensive collection of influential works on these topics can be found in Garofalakis et al.~. The collection focuses on major contributions of the first generation of streaming systems. It reviews basic algorithms and synopses, fundamental results in stream data mining, streaming languages and operator semantics, and a set of representative applications from different domains.", "id": "fcf8b532-0bf6-44b4-8136-b7602b1b6449", "level": "subsection", "origin_cites_number": 7, "parent_id": "f9bc4b42-58de-468c-aa34-a7fafef466f0", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Introduction" ], [ "subsection", "Related surveys and research collections" ] ], "subsections": [], "title": "Related surveys and research collections" }, { "cite_extract_rate": 0, "cites": [], "content": "We begin by presenting the essential elements of the domain in Section~\\ref{sec:preliminaries}.\nThen we elaborate on each of the important functionalities offered by stream processing\nsystems: out-of-order data management (Section~\\ref{sec:order}), state management (Section~\\ref{sec:state}),\nfault tolerance and high availability (Section~\\ref{sec:FT}), and load management, elasticity, and reconfiguration (Section~\\ref{sec:elasticity}). Each one of these sections contains a \\emph{Vintage vs. 
Modern} discussion that compares early to contemporary approaches and a summary of open problems.\nWe summarize our major findings in Table~\\ref{tab:evolution-streaming}, discuss prospects, and conclude in Section~\\ref{sec:conclusion}.", "id": "4bbf405c-f271-483c-9f3d-b320ee98d639", "level": "subsection", "origin_cites_number": 0, "parent_id": "f9bc4b42-58de-468c-aa34-a7fafef466f0", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Introduction" ], [ "subsection", "Survey organization" ] ], "subsections": [], "title": "Survey organization" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:preliminaries}\nIn this section, we provide necessary background and explain fundamental stream processing concepts that the rest of this survey relies on. We discuss the key requirements of a streaming system, introduce the basic streaming data models, and give a high-level overview of the architecture of early and modern streaming systems.", "id": "a074e6de-4533-47f5-a731-9ece78d436aa", "level": "section", "origin_cites_number": 0, "parent_id": "1912a489-c536-4d4e-8859-51a6be4f79c3", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Preliminaries" ] ], "subsections": [ "0048f135-561d-4a15-8f4d-3a50c7b3927c", "27e5bea0-0d7e-4c63-95d8-7c8ef9f7fda3", "82333c2c-927f-4464-b3fb-2cf43cd3f036", "622a2fd1-e3ca-45bb-b35d-1be630d61ca7" ], "title": "Preliminaries" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:requirements}\nA data stream is a data set that is produced incrementally over time, rather than being available in full before its processing begins~. Data streams are high-volume, real-time data that might be unbounded. Therefore, stream processing systems can neither store the entire stream in an accessible way nor can they control the data arrival rate or order. 
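The single-pass, bounded-memory constraint just described can be made concrete with a minimal sketch. The generator below is our own illustration (not taken from any system in this survey): it consumes a potentially unbounded stream one element at a time while keeping only constant-size state.

```python
import itertools

def running_mean(stream):
    """Consume a (possibly unbounded) stream in a single pass,
    keeping only O(1) state: a count and a running sum."""
    count, total = 0, 0.0
    for value in stream:
        count += 1
        total += value
        yield total / count  # updated result after every element

# An unbounded source modeled as an infinite generator: 1, 2, 3, ...
unbounded = itertools.count(start=1)
first_results = list(itertools.islice(running_mean(unbounded), 4))
print(first_results)  # [1.0, 1.5, 2.0, 2.5]
```

Because the source never terminates, results can only be observed as a finite prefix, mirroring the fact that a continuous query never completes on its own.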
In contrast to traditional data management infrastructure, streaming systems have to process elements on-the-fly using limited memory. Stream elements arrive continuously and either bear a timestamp or are assigned one on arrival.\nAccordingly, a streaming query ingests events and produces results in a continuous manner, using a single pass or a limited number of passes over the data. Streaming query processing is challenging for multiple reasons. First, continuously producing updated results might require storing historical information about the stream seen so far in a compact representation that can be queried and updated efficiently. Such summary representations are known as \\emph{sketches} or \\emph{synopses}. Second, in order to handle high input rates, certain queries cannot afford to continuously update indexes and materialized views. Third, stream processors cannot rely on the assumption that state can be reconstructed from associated inputs. To achieve acceptable performance, streaming operators need to leverage incremental computation.\nThe aforementioned characteristics of data streams and continuous queries provide a set of unique requirements for streaming systems, beyond the evident performance requirements of low latency and high throughput. Given the lack of control over the input order, a streaming system needs to produce correct results when receiving out-of-order and delayed data (cf. Section~\\ref{sec:order}). It needs to implement mechanisms that estimate a stream's progress and reason about result completeness. Further, the long-running nature of streaming queries demands that streaming systems manage accumulated state (cf. Section~\\ref{sec:state}) and guard it against failures (cf. Section~\\ref{sec:FT}). Finally, having no control over the data input rate requires stream processors to be adaptive so that they can handle workload variations without sacrificing performance (cf. 
Section~\\ref{sec:elasticity}).", "id": "0048f135-561d-4a15-8f4d-3a50c7b3927c", "level": "subsection", "origin_cites_number": 1, "parent_id": "a074e6de-4533-47f5-a731-9ece78d436aa", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Preliminaries" ], [ "subsection", "Requirements of streaming systems" ] ], "subsections": [], "title": "Requirements of streaming systems" }, { "cite_extract_rate": 0, "cites": [], "content": "There exist many theoretical streaming data models, mainly serving the purpose of studying the space requirements and computational complexity of streaming algorithms and understanding which streaming computations are practical.\nFor instance, a stream can be modeled as a dynamic one-dimensional vector~. The model defines how this dynamic vector is updated when a new element of the stream becomes available. While theoretical streaming data models are useful for algorithm design, early stream processing systems instead adopted extensions of the \\emph{relational} data model. Recent streaming dataflow systems, especially those influenced by the MapReduce philosophy, place the responsibility of data stream modeling on the application developer.", "id": "27e5bea0-0d7e-4c63-95d8-7c8ef9f7fda3", "level": "subsection", "origin_cites_number": 1, "parent_id": "a074e6de-4533-47f5-a731-9ece78d436aa", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Preliminaries" ], [ "subsection", "Streaming data models" ] ], "subsections": [ "eda7c585-042b-490a-b5d3-31fa35f1927c", "29852a8d-93c0-42f6-af43-958142bda6f8" ], "title": "Streaming data models" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 8837 ], "content": "In the relational streaming model as implemented by first-generation systems~, a stream is interpreted as describing a changing relation over a common schema. 
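To illustrate the changing-relation interpretation, the sketch below materializes the relation described by a stream of timestamped updates. The (timestamp, op, tuple) encoding with a +1/-1 insertion/deletion flag is an assumption made for this example, not any particular system's format.

```python
from collections import Counter

def materialize(updates):
    """Interpret a stream of (timestamp, op, tuple) updates as a changing
    relation over a common schema: op is +1 for an insertion and -1 for a
    deletion, and the visible bag of tuples is yielded after each update."""
    relation = Counter()
    for ts, op, row in sorted(updates, key=lambda u: u[0]):
        relation[row] += op
        if relation[row] == 0:
            del relation[row]  # fully retracted tuples leave the relation
        yield ts, dict(relation)

stream = [(1, +1, ("alice", 3)), (2, +1, ("bob", 5)), (3, -1, ("alice", 3))]
snapshots = list(materialize(stream))
print(snapshots[-1])  # only ("bob", 5) remains in the relation at t=3
```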
Streams are either produced by external sources and update relation tables or are produced by continuous queries and update materialized views. An operator outputs event streams that describe the changing view computed over the input stream according to the relational semantics of the operator. Thus, the semantics and schema of the relation are imposed by the system. \nSTREAM~ defines streams as bags of tuple-timestamp pairs and relations as time-varying bags of tuples. The implementation unifies both types as sequences of timestamped tuples, where each tuple also carries a flag that denotes whether it is an insertion or a deletion. Input streams consist of insertions only, while relations may also contain deletions. TelegraphCQ~ uses a similar data model. Aurora~ models streams as append-only sequences of tuples, where a set of attributes denote the key and the rest of the attributes denote values. Borealis~ generalizes this model to support insertion, deletion, and replacement messages. Messages may also contain additional fields related to QoS metrics. Gigascope~ extends the sequence database model. It assumes that stream elements bear one or more timestamps or sequence numbers, which generally increase (or decrease) with the ordinal position of a tuple in a stream. Ordering attributes can be (strictly) monotonically increasing or decreasing, monotone non-repeating, or increasing within a group of records. In CEDR~, stream elements bear a valid timestamp, $V_{s}$, after which they are considered valid and can contribute to the result. Alternatively, events can have validity intervals. 
The contents of the relation at time $t$ are all events with $V_{s} \\leq t$.", "id": "eda7c585-042b-490a-b5d3-31fa35f1927c", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "27e5bea0-0d7e-4c63-95d8-7c8ef9f7fda3", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Preliminaries" ], [ "subsection", "Streaming data models" ], [ "subsubsection", "Relational Streaming Model" ] ], "subsections": [], "title": "Relational Streaming Model" }, { "cite_extract_rate": 0, "cites": [], "content": "The dataflow streaming model, as implemented by systems of the second generation~, does not impose any strict schema or semantics on the input stream elements, other than the presence of a timestamp.\nWhile some systems, like Naiad~, require that all stream elements bear a logical timestamp, other systems, such as Flink~ and Dataflow~, expect the declaration of a \\emph{time domain}. Applications can operate in one of three modes: (i) \\emph{event} (or application) time is the time when events are generated at the sources, (ii) \\emph{processing} time is the time when events are processed in the streaming system, and (iii) \\emph{ingestion} time is the time when events arrive at the system.\nModern dataflow streaming systems can ingest any type of input stream, irrespective of whether its elements represent additions, deletions, replacements, or deltas. The application developer is responsible for imposing the semantics and writing the operator logic to update state accordingly and produce correct results. 
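As an illustration of developer-imposed semantics, the sketch below implements a keyed running sum as a custom stateful operator. The element layout (a dict with ts, key, and delta fields) is a hypothetical choice made for this example; dataflow systems leave it entirely to the application.

```python
def keyed_sum(stream):
    """A developer-written operator in the dataflow style: elements carry a
    timestamp plus an opaque payload, and the operator logic itself imposes
    the semantics -- here, 'delta' events adjust a per-key sum.
    (Illustrative sketch; the element layout is our assumption, not any
    particular system's data model.)"""
    state = {}
    for event in stream:                    # e.g. {"ts": 1, "key": "a", "delta": 2}
        key = event["key"]
        state[key] = state.get(key, 0) + event["delta"]
        yield event["ts"], key, state[key]  # emit the updated aggregate

events = [{"ts": 1, "key": "a", "delta": 2},
          {"ts": 2, "key": "b", "delta": 1},
          {"ts": 3, "key": "a", "delta": -2}]
print(list(keyed_sum(events)))  # [(1, 'a', 2), (2, 'b', 1), (3, 'a', 0)]
```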
Designating keys and values is also usually not required at ingestion time; however, keys must be defined when using certain data-parallel operators, such as windows.", "id": "29852a8d-93c0-42f6-af43-958142bda6f8", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "27e5bea0-0d7e-4c63-95d8-7c8ef9f7fda3", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Preliminaries" ], [ "subsection", "Streaming data models" ], [ "subsubsection", "Dataflow Streaming Model" ] ], "subsections": [], "title": "Dataflow Streaming Model" }, { "cite_extract_rate": 0, "cites": [], "content": "The general architecture of streaming systems has evolved significantly over the last two decades. \nBefore we delve into the specific approaches to out-of-order management, state, fault tolerance, and load management, \nwe outline some fundamental differences between early (1st generation) and modern (2nd generation) streaming systems. Table \\ref{tab:evolution-streaming} summarizes our findings.\nThe architecture of a first-generation DSMS closely follows that of a database management system (DBMS), with the addition of certain components designed to address the requirements of streaming data (cf. Section~\\ref{sec:requirements}). In particular, the input manager is responsible for ingesting streams and possibly buffering and ordering input elements.\nThe scheduler determines the order of operator execution, as well as the number of tuples to process and push to the outputs. Two important additional components are the quality monitor and the load shedder, which monitor stream input rates and query performance and selectively drop input records to meet target latency requirements. Queries are compiled into a shared query plan which is optimized and submitted to the query execution engine. In the common case, a DSMS supports both ad-hoc and continuous queries. 
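The load shedder's basic idea, dropping a fraction of the input so that the admitted rate matches what the engine can sustain, can be sketched as probabilistic admission. This is a simplification of real shedders, which place drop operators inside the shared query plan and weigh per-query QoS.

```python
import random

def shed(stream, capacity, observed_rate):
    """Keep each tuple with probability capacity/observed_rate so that the
    expected admitted rate matches what the engine can sustain.
    (Sketch only; rates here are given, not measured.)"""
    keep_prob = min(1.0, capacity / observed_rate)
    for tup in stream:
        if random.random() < keep_prob:
            yield tup

random.seed(42)  # deterministic run for the example
admitted = list(shed(range(10_000), capacity=1_000, observed_rate=4_000))
print(len(admitted))  # roughly a quarter of the 10,000 input tuples survive
```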
Early architectures are designed with the goal of providing fast, but possibly approximate, results to queries.\nThe next generation of distributed dataflow systems is usually deployed on shared-nothing clusters of machines. Dataflow systems employ task and data parallelism, have explicit state management support, and implement advanced fault-tolerance capabilities to provide result guarantees. Distributed workers execute parallel instances of one or more operators (tasks) on disjoint stream partitions. In contrast to DSMSs, queries are independent of each other, maintain their own state, and are assigned dedicated resources. Every query is configured individually and submitted for execution as a separate job. Input sources are typically assumed to be replayable and state is persisted to embedded or external stores. Modern architectures prioritize high throughput, robustness, and result correctness over low latency.\nDespite the evident differences between early and modern streaming systems' architectures, many fundamental aspects have remained unchanged in the past two decades. The following sections examine in detail how streaming systems have evolved in terms of out-of-order processing, state capabilities, fault-tolerance, and load management.\n\\begin{comment}", "id": "82333c2c-927f-4464-b3fb-2cf43cd3f036", "level": "subsection", "origin_cites_number": 0, "parent_id": "a074e6de-4533-47f5-a731-9ece78d436aa", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Preliminaries" ], [ "subsection", "Architectures of Streaming Systems" ] ], "subsections": [], "title": "Architectures of Streaming Systems" }, { "cite_extract_rate": 0, "cites": [], "content": "\\vasia{ is a good overview of the different types of languages used for stream processing.}\nStreaming query languages have been a subject of research since the very first days of stream processing. 
Virtually every attempt to create a standard language for streams has been an extension of SQL, by adding windowing and means to convert from streams to relations and vice versa. Most noteworthy examples were CQL~ and its derivatives~. Later, dozens of works tried to extend the same standard for niche use-cases, with custom window types and aggregates; none of those attempts made it to standards.\nUnder the influence of MapReduce-like APIs, the majority of open-source streaming systems implemented functional/fluent APIs embedded in general purpose programming languages to hardcode Aurora-like dataflows. Till today, various communities are working on establishing a language for expressing computations which combine streams and relational tables, without a clear consensus.", "id": "622a2fd1-e3ca-45bb-b35d-1be630d61ca7", "level": "subsection", "origin_cites_number": 4, "parent_id": "a074e6de-4533-47f5-a731-9ece78d436aa", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Preliminaries" ], [ "subsection", "Languages" ] ], "subsections": [ "663502b4-cba6-432c-886f-6652e17a20fc", "c4806c52-3cbc-405d-9905-241653977962" ], "title": "Languages" }, { "cite_extract_rate": 0, "cites": [], "content": "\\vasia{Mostly notes from lecture preparation.}\nCQL provides three classes of operators: (i) relation-to-relation operators are similar to standard SQL and define queries over tables, (ii) stream-to-relation operators define tables by selecting portions of a stream, and (iii) relation-to-stream operators create streams through querying tables.", "id": "663502b4-cba6-432c-886f-6652e17a20fc", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "622a2fd1-e3ca-45bb-b35d-1be630d61ca7", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Preliminaries" ], [ "subsection", "Languages" ], [ "subsubsection", "CQL" ] ], "subsections": [], "title": "CQL" }, { 
"cite_extract_rate": 0, "cites": [], "content": "\\vasia{Mostly notes from lecture preparation.}\nSQL-based approaches are popular because they promote the promise of providing a unified language for querying both streaming and static data. SQL is not enough because (i) streaming follows a push-based model as opposed to the pull-based model of SQL (an application or client asks for the query results when they need them), (ii) the input streams might be unbounded in which case it is unclear how to define blocking operators such as groupBy, (iii) the complete streams might be too large to store for future use.\nAn example is the ESL language~. It allows ad-hoc SQL queries, updates on database tables, and continuous queries on data streams. New streams (derived) are defined as virtual views in SQL. ESL semantics are equivalent to having an append-only table to which new tuples are continuously added.\nOne of the most important questions that concerned the research community early on is the following: \\emph{What kind of queries can we express and support on data streams?}. In particular, if only monotonic queries can be expressed by non-blocking operators, then can all monotonic queries be expressed using only non-blocking operators?\nUser-defined Aggregates (UDAs) are constructs that allow the definition of custom aggregations using three statement groups: (i) \\texttt{INITIALIZE} is used to initialize local state, \\texttt{ITERATE} allows updating state based on a new element and the current state and \\texttt{TERMINATE} is used to produce the result. 
Note that it is allowed to define and maintain local tables as state.~ shows that every computable monotonic function on timestamped data streams can be expressed using non-blocking UDAs (NB-UDAs) and union where NB-UDAs are those where the \\texttt{TERMINATE} group is empty.\n\\end{comment}", "id": "c4806c52-3cbc-405d-9905-241653977962", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "622a2fd1-e3ca-45bb-b35d-1be630d61ca7", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Preliminaries" ], [ "subsection", "Languages" ], [ "subsubsection", "SQL Streaming Extensions" ] ], "subsections": [], "title": "SQL Streaming Extensions" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:order}\nA streaming system receives data continuously from one or more input sources.\nTypically the order of data in a stream is part of the stream's semantics~.\nDepending on the computations to perform, a streaming system may have to process stream tuples in a certain order to provide semantically correct results~.\nHowever, in the general case, a stream's data tuples arrive out of\norder~ for reasons explained in Section~\\ref{subsec:disorder-causes}.\n\\begin{description}\n \\item \\textit{Out-of-order} data tuples~ arrive in a streaming system after tuples with later event time timestamps.\n\\end{description}\nIn the rest of the paper we use the terms disorder~ and out-of-order~ to refer to the disturbance of order in a stream's data tuples.\nReasoning about order and managing disorder are fundamental considerations for the operation of streaming systems.\nIn the following, we highlight the causes of disorder in Section~\\ref{subsec:disorder-causes}, clarify the relationship between disorder in a stream's tuples and processing progress in Section~\\ref{subsec:disorder-progress}, and outline the two key system architectures for managing out-of-order data in 
Section~\\ref{subsec:disorder-architectures}.\nThen, we describe the consequences of disorder in Section~\\ref{subsec:disorder-effects} and present the mechanisms for managing disorder in Section~\\ref{subsec:disorder-mechanisms}.\nFinally, in Section~\\ref{sub:vintage}, we discuss the differences of out-of-order data management in early and modern systems and we present open problems in Section~\\ref{sub:open}.", "id": "2af1dcb8-7a81-4897-bd68-275168901eef", "level": "section", "origin_cites_number": 7, "parent_id": "1912a489-c536-4d4e-8859-51a6be4f79c3", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Managing Event Order and Timeliness" ] ], "subsections": [ "d6ca85f2-0bbe-41b9-8d52-11ff7d00d032", "e32fb9f0-bdf6-45ba-9027-bf93097abb45", "c93c277f-ea7b-46a4-8404-3cc2ee8b5242", "fc060738-11bb-4fa5-8d4c-49fb30c754e7", "ad45beb9-c999-4e67-9a7a-bf9201d42e7a", "642969c0-7ad3-4b8d-a803-892c464c2e58", "373ee140-04c3-4940-89e4-a27d42ee052e" ], "title": "Managing Event Order and Timeliness" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec:disorder-causes}\nDisorder in data streams may be owed to stochastic factors that are external to a streaming system or to the operations taking place inside the system.\nThe most common external factor that introduces disorder to streams is the network~.\nDepending on the network's reliability, bandwidth, and load, the routing of some stream tuples can take longer to complete compared to the routing of others, leading to a different arrival order in a streaming system.\nEven if the order of tuples in an individual stream is preserved, ingestion from multiple sources,\nsuch as sensors, typically results in a disordered collection of tuples, unless the sources are carefully coordinated, which is rare.\nExternal factors aside, specific operations on streams break tuple order.\nFirst, join processing takes two streams and produces a shuffled combination of the two, 
since a parallel join operator repartitions the data according to the join attribute~ and outputs join results by order of match~.\nSecond, windowing based on an attribute different from the ordering attribute reorders the stream~.\nThird, data prioritization~ by using an attribute different from the ordering one also changes the stream's order.\nFinally, the union operation on two unsynchronized streams yields a stream with all tuples of the two input streams interleaving each other in random order~.", "id": "d6ca85f2-0bbe-41b9-8d52-11ff7d00d032", "level": "subsection", "origin_cites_number": 7, "parent_id": "2af1dcb8-7a81-4897-bd68-275168901eef", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Managing Event Order and Timeliness" ], [ "subsection", "Causes of Disorder" ] ], "subsections": [], "title": "Causes of Disorder" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec:disorder-progress}\nIn order to manage disorder, streaming systems need to detect processing progress.\nWe discuss how disorder management and progress tracking are intertwined in Sections~\\ref{subsec:disorder-architectures} and~\\ref{subsec:disorder-effects}.\n\\emph{Progress} refers to how much the processing of a stream’s tuples has advanced over time.\nProcessing progress can be defined and quantified with the aid of an attribute \\textit{A} of a stream’s tuples that orders the stream.\nThe processing of the stream progresses when the smallest value of \\textit{A} among the unprocessed tuples increases over time~.\n\\textit{A} is then a \\emph{progressing attribute}, and the oldest value of \\textit{A} is itself a measure of progress because it denotes\nhow far the system has advanced in processing tuples since the beginning.\nBeyond this definition, streaming systems often make their own interpretation of progress, which may involve more than one attribute.", "id": "e32fb9f0-bdf6-45ba-9027-bf93097abb45", "level": 
"subsection", "origin_cites_number": 1, "parent_id": "2af1dcb8-7a81-4897-bd68-275168901eef", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Managing Event Order and Timeliness" ], [ "subsection", "Disorder and Processing Progress" ] ], "subsections": [], "title": "Disorder and Processing Progress" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec:disorder-architectures}\nTwo main architectural archetypes have influenced the design of streaming systems with respect to managing disorder: (i) in-order processing systems~, and (ii) out-of-order processing systems~.\nIn-order processing systems manage disorder by fixing a stream's order. As a result, they essentially track progress by monitoring how far the processing of a data stream has advanced.\nIn-order systems buffer and reorder tuples up to a \\emph{lateness} bound.\nThen, they forward the reordered tuples for processing and clear the corresponding buffers.\nIn out-of-order processing systems, operators or a global authority produce progress information using any of the metrics detailed in Section~\\ref{subsubsec:tracking-progress}, and propagate it to the dataflow graph.\nThe information typically reflects the oldest unprocessed tuple in the system and establishes a lateness bound for admitting out-of-order tuples.\nIn contrast to in-order systems, tuples are processed without delay in their arrival order, as long as they do not exceed the lateness bound.", "id": "c93c277f-ea7b-46a4-8404-3cc2ee8b5242", "level": "subsection", "origin_cites_number": 8, "parent_id": "2af1dcb8-7a81-4897-bd68-275168901eef", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Managing Event Order and Timeliness" ], [ "subsection", "System Architectures for Managing Disorder" ] ], "subsections": [], "title": "System Architectures for Managing Disorder" }, { "cite_extract_rate": 0, "cites": [], "content": 
"\\label{subsec:disorder-effects}\nIn unbounded data processing, disorder can impede progress~ or lead to wrong results if ignored~.\nDisorder affects processing progress when the operators that comprise the topology of the computation require ordered input.\nVarious implementations of \\textit{join} and \\textit{aggregate} rely on ordered input to produce correct results~.\nWhen operators in in-order systems receive out-of-order tuples, they have to reorder them before including them in the window to which they belong.\nReordering, however, imposes processing overhead, memory space overhead, and latency.\nOut-of-order systems, on the other hand, track progress and process data in whatever order they arrive, up to the lateness bound. To include late tuples in results, they additionally need to store the processing state up to the lateness bound.\nAs a side note, order-insensitive operators~, such as \\textit{apply, project, select, dupelim}, and \\textit{union}, are agnostic to disorder in a stream and produce correct results even when presented with disordered input.\nIgnoring out-of-order data might lead to incorrect results if the output is computed on partial input only. 
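A small sketch makes this failure mode concrete. The naive tumbling-window count below closes a window as soon as a tuple from a later window arrives; a tuple that arrives after its window was closed is silently dropped, so the emitted count reflects partial input. Window size and timestamps are arbitrary choices for this illustration.

```python
def eager_window_counts(timestamps, size=4):
    """Tumbling-window count that closes a window on the first tuple of a
    later window; late tuples for closed windows are silently dropped."""
    counts = {}
    closed = set()
    for t in timestamps:
        w = t // size                 # window index for [w*size, (w+1)*size)
        for older in list(counts):    # any earlier window is now closed
            if older < w:
                closed.add(older)
        if w not in closed:
            counts[w] = counts.get(w, 0) + 1
    return counts

in_order = eager_window_counts([1, 2, 3, 5])
late = eager_window_counts([1, 2, 5, 3])  # t=3 arrives after its window closed
print(in_order)  # {0: 3, 1: 1}
print(late)      # {0: 2, 1: 1} -- the late tuple was dropped
```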
Thus, a streaming system needs to be capable of processing out-of-order data and incorporating their effect into the computation.\nHowever, without knowledge of how late data can be, waiting indefinitely can block output and accumulate large computation state.\nThis concern manifests in all architectures, and we discuss next how it can be countered with disorder management mechanisms.", "id": "fc060738-11bb-4fa5-8d4c-49fb30c754e7", "level": "subsection", "origin_cites_number": 3, "parent_id": "2af1dcb8-7a81-4897-bd68-275168901eef", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Managing Event Order and Timeliness" ], [ "subsection", "Effects of Disorder" ] ], "subsections": [], "title": "Effects of Disorder" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec:disorder-mechanisms}\nIn this section, we elaborate on influential mechanisms for managing disorder in unbounded data, namely slack~, heartbeats~, low-watermarks~, pointstamps~, and triggers~.\nHeartbeats, low-watermarks, and pointstamps track processing progress and quantify a lateness bound using a metric, such as time.\nIn contrast, slack merely quantifies the lateness bound.\nIf tuples arrive after the lateness bound expires, triggers can be used to update computation results in \\emph{revision processing}~.\nWe also discuss punctuations~, a generic mechanism for communicating information across the dataflow graph, which has been heavily used as a vehicle in managing disorder.", "id": "ad45beb9-c999-4e67-9a7a-bf9201d42e7a", "level": "subsection", "origin_cites_number": 7, "parent_id": "2af1dcb8-7a81-4897-bd68-275168901eef", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Managing Event Order and Timeliness" ], [ "subsection", "Mechanisms for Managing Disorder" ] ], "subsections": [ "c37edd49-0a4d-4686-8f85-6cba915c1d54", "d39b7dae-d5e6-430c-aeae-856d8cd9225f", 
"23e6dc62-b2ef-4514-8ef1-515f92d2667c" ], "title": "Mechanisms for Managing Disorder" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsubsec:tracking-progress}\nWe present the four most notable progress tracking mechanisms: slack, heartbeats, low-watermark, and pointstamps.\nIn addition, we accompany the analysis of each mechanism with a figure. Figure~\\ref{fig:disorder-mechanisms} showcases the differences between slack, heartbeats, and low-watermarks.\nThe figure depicts a simple aggregation operator that counts tuples in 4-second event time tumbling windows.\nThe operator awaits some indication that event time has advanced past the end timestamp of a window so that it computes and outputs an aggregate per window.\nThe indication varies according to the progress-tracking mechanism.\nThe input to this operator consists of seven tuples containing only a timestamp from t=1 to t=7.\nThe timestamp signifies the event time in seconds at which the tuple was produced at the input source.\nEach tuple contains a different timestamp and all tuples are dispatched from a source in ascending order of timestamp.\nDue to network latency, the tuples may arrive at the streaming system out of order.", "id": "c37edd49-0a4d-4686-8f85-6cba915c1d54", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "ad45beb9-c999-4e67-9a7a-bf9201d42e7a", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Managing Event Order and Timeliness" ], [ "subsection", "Mechanisms for Managing Disorder" ], [ "subsubsection", "Tracking processing progress" ] ], "subsections": [ "a40766a9-bfab-48da-8af6-fbdcd13c0725", "c43d6794-3613-4a10-a73a-da676b35b03e", "fc5bb865-ae5b-4a6f-8c57-2d5e33d40124", "e48e3096-f125-498d-8573-ebb3b4dd6140", "3735427a-495e-4ebc-a14f-ed5c6aee3cde" ], "title": "Tracking processing progress" }, { "cite_extract_rate": 0, "cites": [], "content": "is a simple mechanism that involves waiting for out-of-order data 
for a fixed amount of a certain metric.\nSlack originally denoted the number of tuples intervening between the actual occurrence of an out-of-order tuple and the position it would have in the input stream if it arrived on time.\nHowever, it can also be quantified in terms of elapsed time.\nEssentially, slack marks a fixed grace period for late tuples.\nFigure~\\ref{fig:slack} presents the slack mechanism.\nIn order to accommodate late arrivals, the operator admits out-of-order tuples up to \\textit{slack=1}.\nThus, the operator, having admitted the tuples with t=1 and t=2 (not depicted in the figure), will receive the tuple with t=4. \nThe timestamp of the tuple coincides with the max timestamp of the first window for interval [0, 4).\nNormally, this tuple would cause the operator to close the window and compute and output the aggregate, but because of the slack value the operator will wait to receive one more tuple.\nThe next tuple t=3 belongs to the first window and is included there.\nAt this point, slack also expires and this event finally triggers the window computation, which outputs C=3 for t=[1, 2, 3].\nIn contrast, the operator will not accept t=5 at the tail of the input because it arrives two tuples after its natural order and is not covered by the slack value.", "id": "a40766a9-bfab-48da-8af6-fbdcd13c0725", "level": "paragraph", "origin_cites_number": 0, "parent_id": "c37edd49-0a4d-4686-8f85-6cba915c1d54", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Managing Event Order and Timeliness" ], [ "subsection", "Mechanisms for Managing Disorder" ], [ "subsubsection", "Tracking processing progress" ], [ "paragraph", "Slack" ] ], "subsections": [], "title": "Slack" }, { "cite_extract_rate": 0, "cites": [], "content": "is an alternative to slack that consists of an external signal carrying progress information about a data stream.\nIt contains a timestamp indicating that all succeeding stream tuples will have a 
timestamp larger than the heartbeat's timestamp.\nHeartbeats can either be generated by an input source or deduced by the system by observing environment parameters, such as network latency bound, application clock skew between input sources, and out-of-order data generation~.\nFigure~\\ref{fig:heartbeat} depicts the heartbeat mechanism.\nAn input manager buffers and orders the incoming tuples by timestamp.\nThe number of tuples buffered, two in this example (t=5, t=6), is of no importance.\nThe source periodically sends a heartbeat to the input manager, i.e. a signal with a timestamp.\nThen the input manager dispatches to the operator all tuples with timestamp less or equal to the timestamp of the heartbeat in ascending order.\nFor instance, when the heartbeat with timestamp t=2 arrives in the input manager (not shown in the figure), the input manager dispatches the tuples with timestamp t=1 and t=2 to the operator.\nThe input manager then receives tuples with t=4, t=6, and t=5 in this order and puts them in the right order.\nWhen the heartbeat with timestamp t=4 arrives, the input manager dispatches the tuple with timestamp t=4 to the operator.\nThis tuple triggers the computation of the first window for interval [0, 4).\nThe operator outputs C=2 counting two tuples with t=[1, 2] not depicted in the figure.\nThe input manager ignores the incoming tuple with timestamp t=3 as it is older than the latest heartbeat with timestamp t=4.\n\\begin{figure}[ht]\n \\centering\n \\begin{subfigure}{.97\\linewidth}\n \\centering\n \\includegraphics[width=.8\\linewidth]{Figures/slack.pdf}\n \\caption{Slack}\n \\label{fig:slack}\n \\end{subfigure}\n \\begin{subfigure}{.97\\linewidth}\n \\centering\n \\includegraphics[width=.9\\linewidth]{Figures/heartbeat.pdf}\n \\caption{Heartbeat}\n \\label{fig:heartbeat}\n \\end{subfigure}\n \\begin{subfigure}{.97\\linewidth}\n \\centering\n \\includegraphics[width=.89\\linewidth]{Figures/low-watermark.pdf}\n \\caption{Low-watermark}\n 
\\label{fig:low-watermark}\n \\end{subfigure}\n \\caption{Mechanisms for managing disorder.}\n \\label{fig:disorder-mechanisms}\n\\end{figure}", "id": "c43d6794-3613-4a10-a73a-da676b35b03e", "level": "paragraph", "origin_cites_number": 1, "parent_id": "c37edd49-0a4d-4686-8f85-6cba915c1d54", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Managing Event Order and Timeliness" ], [ "subsection", "Mechanisms for Managing Disorder" ], [ "subsubsection", "Tracking processing progress" ], [ "paragraph", "A heartbeat" ] ], "subsections": [], "title": "A heartbeat" }, { "cite_extract_rate": 0, "cites": [], "content": "for an attribute \\textit{A} of a stream is the lowest value of \\textit{A} within a certain subset of the stream.\nThus, future tuples will probabilistically bear a higher value than the current low-watermark for the same attribute.\nOften, \\textit{A} is a tuple's event time timestamp.\nThe mechanism is used by a streaming system to track processing progress via the low-watermark for \\textit{A} and to admit out-of-order data whose attribute \\textit{A}'s value is not smaller than the low-watermark. 
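The admission and firing logic that a low-watermark drives can be sketched as follows (a minimal, hypothetical Python sketch under simplified assumptions; `WatermarkedCounter` and its methods are illustrative names, not the API of any real system):

```python
# Hypothetical sketch: a per-window count operator driven by a low-watermark.
# Tumbling event-time windows of size 4; tuples older than the current
# low-watermark are dropped, and a window fires once the low-watermark
# passes its end timestamp.
class WatermarkedCounter:
    def __init__(self, window_size=4):
        self.window_size = window_size
        self.low_watermark = 0   # progress point: oldest pending work
        self.windows = {}        # window start -> tuple count
        self.emitted = []        # (window start, count) results

    def on_tuple(self, t):
        if t < self.low_watermark:
            return  # late tuple below the low-watermark: not admitted
        start = (t // self.window_size) * self.window_size
        self.windows[start] = self.windows.get(start, 0) + 1

    def on_watermark(self, wm):
        self.low_watermark = max(self.low_watermark, wm)
        # fire every window whose end timestamp the low-watermark has passed
        for start in sorted(self.windows):
            if start + self.window_size <= self.low_watermark:
                self.emitted.append((start, self.windows.pop(start)))
```

Replaying a sequence like that of Figure~\\ref{fig:low-watermark} (tuples t=1, 2, a watermark with t=2, tuples t=3, 5, 6, then a watermark with t=4) emits C=3 for the first window and leaves t=[5, 6] pending in window [4, 8).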
Further, it can be used to remove state that is maintained for \\textit{A}, such as the corresponding hash table entries of a streaming join computation.\nFigure~\\ref{fig:low-watermark} presents the low-watermark mechanism, which signifies the oldest pending work in the system.\nHere, punctuations carrying the low-watermark timestamp decide when windows will be closed and computed.\nAfter receiving two tuples with t=1 and t=2, the corresponding low-watermark for t=2 (which is propagated downstream), and tuple t=3, the operator receives tuple t=5.\nSince this tuple carries an event time timestamp greater than or equal to 4, which is the end timestamp of the first window, it could be the one to cause the window to fire or close.\nHowever, this approach would not account for out-of-order data.\nInstead, the window closes when the operator receives the low-watermark with t=4.\nAt this point, the operator computes C=3 for t=[1, 2, 3] and assigns tuples with t=[5, 6] to the second window with interval [4, 8).\nThe operator will not admit tuple t=4 because it is not greater (more recent) than the current low-watermark value t=4.", "id": "fc5bb865-ae5b-4a6f-8c57-2d5e33d40124", "level": "paragraph", "origin_cites_number": 0, "parent_id": "c37edd49-0a4d-4686-8f85-6cba915c1d54", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Managing Event Order and Timeliness" ], [ "subsection", "Mechanisms for Managing Disorder" ], [ "subsubsection", "Tracking processing progress" ], [ "paragraph", "The low-watermark" ] ], "subsections": [], "title": "The low-watermark" }, { "cite_extract_rate": 0, "cites": [], "content": "Heartbeats and slack are both external to a data stream.\nHeartbeats are signals communicated from an input source to a streaming system's ingestion point.\nUnlike heartbeats, which are an internal mechanism of a streaming system hidden from users, slack is part of the query specification provided by
users~.\nHeartbeats and low-watermarks are similar in terms of \nprogress-tracking logic.\nHowever, two important differences set them apart.\nWhile heartbeats expose the progress of stream tuple generation at the input sources, the low-watermark extends this to the processing progress of computations within the streaming system by reflecting their oldest pending work.\nSecond, the low-watermark generalizes the concept of the oldest value, which signifies the current progress point, to any progressing attribute of a stream tuple besides timestamps.\nIn contrast to heartbeats and slack, \\textit{punctuations} are metadata annotations embedded in data streams.\nA punctuation is itself a stream tuple, which consists of a set of patterns each identifying an attribute of a stream data tuple.\nA punctuation is a generic mechanism that communicates information across the dataflow graph.\nRegarding progress tracking, it provides a channel for communicating progress information such as a tuple attribute's low-watermark produced by an operator~, event time skew~, or slack~.\nThus, punctuations can convey which data cease to appear in an input stream; for instance the data tuples with smaller timestamp than a specific value.\nPunctuations are useful in other functional areas of a streaming system as well, such as state management, monitoring, and flow control.\n\\begin{figure*}[t]\n \\centering\n \\begin{subfigure}{.4\\linewidth}\n \\centering\n \\includegraphics[width=.9\\linewidth]{Figures/pointstamp-snapshot-1.pdf}\n \\caption{Pointstamps and frontier}\n \\label{fig:pointstamp-1}\n \\end{subfigure}\n \\begin{subfigure}{.4\\linewidth}\n \\centering\n \\includegraphics[width=.8\\linewidth]{Figures/pointstamp-snapshot-2.pdf}\n \\caption{Frontier moves forward}\n \\label{fig:pointstamp-2}\n \\end{subfigure}\n \\caption{High-level workflow of pointstamps and frontier}\n \\label{fig:pointstamps}\n\\end{figure*}", "id": "e48e3096-f125-498d-8573-ebb3b4dd6140", "level": 
"paragraph", "origin_cites_number": 3, "parent_id": "c37edd49-0a4d-4686-8f85-6cba915c1d54", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Managing Event Order and Timeliness" ], [ "subsection", "Mechanisms for Managing Disorder" ], [ "subsubsection", "Tracking processing progress" ], [ "paragraph", "Comparison between heartbeats, slack, and punctuations." ] ], "subsections": [], "title": "Comparison between heartbeats, slack, and punctuations." }, { "cite_extract_rate": 0, "cites": [], "content": "like punctuations, are embedded in data streams, but a pointstamp is attached to each stream data tuple as opposed to a punctuation, which forms a separate tuple.\nPointstamps are pairs of timestamp and location that position data tuples on a vertex or edge of the dataflow graph at a specific point in time.\nAn unprocessed tuple \\textit{p} at a specific location \\textit{could-result-in} another unprocessed tuple \\textit{p'} with timestamp \\textit{t'} at another location when \\textit{p} can arrive at \\textit{p'} before or at timestamp \\textit{t'}.\nUnprocessed tuples p with timestamp t are in the frontier of processing progress when no other unprocessed tuples could-result-in p.\nThus, tuples bearing \\textit{t} or an earlier timestamp are processed and the frontier moves on.\nThe system enforces that future tuples will bear a greater timestamp than the tuples that generated them.\nThis modeling of processing progress traces the course of data tuples on the dataflow graph with timestamps and tracks the dependencies between unprocessed events in order to compute the current frontier.\nThe concept of a frontier is similar to a low-watermark.\nThe example shown in Figure~\\ref{fig:pointstamps} showcases how pointstamps and frontiers work.\nThe example in Figure~\\ref{fig:pointstamp-1} includes three active pointstamps.\nPointstamps are active when they correspond to one or more unprocessed events.\nPointstamp 
(1, OP1) is in the frontier of active pointstamps, because its precursor count is 0.\nThe precursor count specifies the number of active pointstamps that could-result-in that pointstamp.\nIn the frontier, notifications for unprocessed events can be delivered.\nThus, unprocessed events e1 and e2 can be delivered to OP2 and OP3 respectively.\nThe occurrence count is 2 because both events e1 and e2 bear the same pointstamp.\nLooking at this snapshot of the dataflow graph, it is easy to see that pointstamp (1, OP1)\ncould-result-in pointstamps (2, OP2) and (2, OP3). Therefore, the precursor count of the latter two pointstamps is 1.\nA bit later, as Figure~\\ref{fig:pointstamp-2} depicts, after events e1 and e2 are delivered to OP2 and OP3 respectively, their processing results in the generation of\nnew events e5 and e6, which bear the same pointstamp as unprocessed events e3 and e4 respectively.\nSince there are no more unprocessed events with timestamp 1, and the precursor count of pointstamps (2, OP2) and (2, OP3) is 0,\nthe frontier moves on to these active pointstamps.\nConsequently, all four event notifications can be delivered.\nObsolete pointstamps (1, OP1), (2, OP2), and (2, OP3) are removed from their location, since they correspond to no unprocessed events.\nAlthough this example is made simple for educational purposes, the progress-tracking mechanism has the power\nto track the progress of arbitrary iterative and nested computations.\nPointstamps/frontiers track processing progress regardless of the notion of event time.\nHowever, it is possible for users to capture out-of-order data with pointstamps/frontiers by establishing a two-dimensional frontier of event time and processing time that is flexibly open on the side of event time.", "id": "3735427a-495e-4ebc-a14f-ed5c6aee3cde", "level": "paragraph", "origin_cites_number": 0, "parent_id": "c37edd49-0a4d-4686-8f85-6cba915c1d54", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream 
Processing Systems" ], [ "section", "Managing Event Order and Timeliness" ], [ "subsection", "Mechanisms for Managing Disorder" ], [ "subsubsection", "Tracking processing progress" ], [ "paragraph", "Pointstamps," ] ], "subsections": [], "title": "Pointstamps," }, { "cite_extract_rate": 0, "cites": [], "content": "Cyclic queries require special treatment for tracking progress.\nA cyclic query always contains a binary operator, such as a join or a union.\nThe output produced by the binary operator meets a loop further in the dataflow graph that connects back to one of the binary operator's input channels.\nIn a progress model that uses punctuations, for instance, the binary operator forwards a punctuation only when it appears in both of its input channels; otherwise, it blocks waiting for both to arrive.\nSince one of the binary operator's input channels depends on its own output channel, a deadlock is inevitable.\nChandramouli et al.~ propose an operator for detecting progress in cyclic streaming queries on the fly.\nThe operator introduces a speculative punctuation in the loop that is derived from the passing events' timestamp.\nWhile the punctuation flows in the loop, the operator observes the stream's tuples to validate its guess.\nOnce the guess is validated and the speculative punctuation re-enters the operator, it becomes a regular punctuation that carries progress information downstream.\nThen a new speculative punctuation is generated and fed in the loop.\nBy combining a dedicated operator, speculative output, and punctuations, this work manages to track progress and tolerate disorder in cyclic streaming queries.\nThe approach works for strongly convergent queries and can be utilized in systems that provide speculative output.\nIn Naiad~, the general progress-tracking model features logical multidimensional timestamps attached to events.\nEach timestamp consists of the input batch to which an event belongs and an iteration counter for each loop the event 
traverses.\nLike in Chandramouli et al.~, Naiad supports cyclic queries by utilizing a special operator.\nHowever, the operator is used to increment the iteration counter of events entering a loop.\nTo ensure progress, the system allows event handlers to dispatch only messages with larger timestamp than the timestamp of events being currently processed.\nThis restriction imposes a partial order over all pending events.\nThe order is used to compute the earliest logical time of events' processing completion in order to deliver notifications for producing output. Naiad's progress-tracking mechanism is external to the dataflow.\nThis design defies the associated implementation complexity in favor of a) efficient delivery of notifications that is proportional to dataflow nodes instead of edges and b) incremental computation that avoids redundant work.\nAlthough not directly incorporated, the notion of event time can be encapsulated in multidimensional timestamps to account for out-of-order data.", "id": "d39b7dae-d5e6-430c-aeae-856d8cd9225f", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "ad45beb9-c999-4e67-9a7a-bf9201d42e7a", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Managing Event Order and Timeliness" ], [ "subsection", "Mechanisms for Managing Disorder" ], [ "subsubsection", "Tracking progress of out-of-order data in cyclic queries" ] ], "subsections": [], "title": "Tracking progress of out-of-order data in cyclic queries" }, { "cite_extract_rate": 0.125, "cites": [ 8837 ], "content": "\\label{subsubsec:revision-processing}\nRevision processing is the update of computations in face of late, updated, or retracted data, which require the modification of previous outputs in order to provide correct results.\nRevision processing made its debut in Borealis~.\nFrom there on, it has been combined with in-order processing architectures~, as well as\nout-of-order processing architectures~.\nIn 
some approaches revision processing works by \\textit{storing} incoming data and \\textit{revising} computations in face of late, updated, or retracted data~.\nOther approaches \\textit{replay} affected data, \\textit{revise} computations, and propagate the revision messages to update all affected results until the present~.\nFinally, a third line of approaches maintain multiple \\textit{partitions} that capture events with different levels of lateness and \\textit{consolidate} partial results~.", "id": "23e6dc62-b2ef-4514-8ef1-515f92d2667c", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "ad45beb9-c999-4e67-9a7a-bf9201d42e7a", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Managing Event Order and Timeliness" ], [ "subsection", "Mechanisms for Managing Disorder" ], [ "subsubsection", "Revision processing" ] ], "subsections": [ "17e6dc27-9fbe-4121-bb9f-0ecdf4cbe9c8", "f78a88b0-9a38-416b-89d2-f3897c778384", "6beb97a7-3c2e-44d8-bab6-efbe80c5cfe1" ], "title": "Revision processing" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 8837 ], "content": "Microsoft's CEDR~ and StreamInsight~, and Google's Dataflow~ buffer or store stream data and process late events, updates, and deletions incrementally by revising the captured values and updating the computations.\nThe dataflow model~ divides the concerns for out-of-order data into three dimensions: the event time when late data are processed, the processing time when corresponding results are materialized, and how later updates relate to earlier results.\nThe mechanism that decides the emission of updated results and how the refinement will happen is called a \\textit{trigger}.\nTriggers are signals that cause a computation to be repeated or updated when a set of specified rules fire.\nOne important rule regards the arrival of late input data.\nTriggers ensure output correctness by incorporating the effects of late input into the 
computation results.\nTriggers can be defined based on watermarks, processing time, data arrival metrics,\nand combinations of those; they can also be user-defined.\nTriggers support three refinement policies: accumulating, where new results overwrite older ones; discarding, where new results complement older ones; and accumulating and retracting, where new results overwrite older ones and older results are retracted.\nRetractions, or compensations, are also supported in StreamInsight~.", "id": "17e6dc27-9fbe-4121-bb9f-0ecdf4cbe9c8", "level": "paragraph", "origin_cites_number": 3, "parent_id": "23e6dc62-b2ef-4514-8ef1-515f92d2667c", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Managing Event Order and Timeliness" ], [ "subsection", "Mechanisms for Managing Disorder" ], [ "subsubsection", "Revision processing" ], [ "paragraph", "Store and revise." ] ], "subsections": [], "title": "Store and revise." }, { "cite_extract_rate": 0, "cites": [], "content": "\\emph{Dynamic revision}~ and \\emph{speculative processing}~ replay an affected past data subset when a revision tuple is received.\nAn optimization of this scheme relies on two revision processing mechanisms, upstream processing and downstream processing~.\nBoth are based on a special-purpose operator, called \\emph{connection point}, that intervenes between two regular operators and stores tuples output by the upstream operator.\nAccording to the upstream revision processing, an operator downstream from a connection point can ask for a set of tuples to be replayed so that it can calculate revisions based on old and new results.\nAlternatively, the operator can ask the downstream connection point to retrieve a set of output tuples related to a received revision tuple.\nUnder certain circumstances, the operator can calculate correct revisions by incorporating the net effect of the difference between the original tuple and its revised one into the old result.\nDynamic 
revision emits delta revision messages, which contain the difference of the output between the original and the revised value.\nIt keeps the input message history to an operator in the connection point of its input queue. Since keeping all messages is infeasible, there is a bound in the history of messages kept.\nMessages that go further back from this bound can not be replayed and, thus, revised.\nDynamic revision differentiates between stateless and stateful operators.\nA stateless operator will evaluate both the original $(t)$ and the revised message $(t')$ emitting the delta of their output.\nFor instance, if the operator is a filter, $t$ is true and $t'$ is not, then the operator will emit a deletion message for $t$.\nA stateful operator, on the other hand, has to process many messages in order to emit an output. Thus, an aggregation operator has to re-process the whole window for both a revised message and the original message contained in that window in order to emit revision messages.\nDynamic revision is implemented in Borealis.\nSpeculative processing, on the other hand, applies snapshot recovery if no output has been produced for a disordered input stream.\nOtherwise, it retracts all produced output in a recursive manner.\nIn speculative processing because revision processing is opportunistic, no history bound is set.\n\\setlength{\\tabcolsep}{5pt}\n\\begin{table*}\n\\smaller\\centering\n\\caption{Event order management in streaming systems}\n\\label{tab:disorder-management-streaming}\n\\begin{tabular}{p{0.09\\textwidth}\n p{0.05\\textwidth}\n p{0.07\\textwidth}\n p{0.05\\textwidth}\n p{0.1\\textwidth}\n p{0.1\\textwidth}\n p{0.2\\textwidth}\n p{0.18\\textwidth}\n }\n\\hline\n\\multicolumn{1}{c}{\\textbf{System}} &\n\\multicolumn{3}{c}{\\textbf{Architecture}} &\n\\multicolumn{4}{c}{\\textbf{Progress-tracking}} \\\\\n\\rowcolor{white}\n& \\multicolumn{1}{c}{In-order}\n& \\multicolumn{1}{c}{Out-of-order}\n& \\multicolumn{1}{c}{Revision}\n& 
\\multicolumn{1}{c}{Mechanism}\n& \\multicolumn{1}{c}{Communication}\n& \\multicolumn{1}{c}{Disorder bound metric}\n& \\multicolumn{1}{c}{Revision approach} \\\\\n\\hline\nAurora*~\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{d}{Slack}\n& \\multicolumn{1}{d}{User config}\n& \\multicolumn{1}{d}{Number of tuples}\n& \\multicolumn{1}{d}{---}\n\\\\\n\\hline\nSTREAMS~\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{d}{Heartbeat}\n& \\multicolumn{1}{d}{Signal to input manager}\n& \\multicolumn{1}{d}{Timestamp (event time skew, net-}\n& \\multicolumn{1}{d}{---}\n\\\\\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{d}{}\n& \\multicolumn{1}{d}{}\n& \\multicolumn{1}{d}{work latency, out-of-order bound)}\n& \\multicolumn{1}{d}{}\n\\\\\n\\hline\nBorealis~\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{d}{History bound}\n& \\multicolumn{1}{d}{System config}\n& \\multicolumn{1}{d}{Number of tuples or time units}\n& \\multicolumn{1}{d}{Replay past data, enter revised}\n\\\\\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{d}{}\n& \\multicolumn{1}{d}{}\n& \\multicolumn{1}{d}{}\n& \\multicolumn{1}{d}{values, issue delta output}\n\\\\\n\\hline\nGigascope~\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{d}{Low-watermark}\n& \\multicolumn{1}{d}{Punctuation}\n& \\multicolumn{1}{d}{Timestamp}\n& \\multicolumn{1}{d}{---}\n\\\\\n\\hline\nTimestream~\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{d}{Low-watermark}\n& \\multicolumn{1}{d}{Punctuation}\n& \\multicolumn{1}{d}{Timestamp}\n& \\multicolumn{1}{d}{---}\n\\\\\n\\hline\nMillwheel~\n& \\multicolumn{1}{a}{}\n& 
\\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{d}{Low-watermark}\n& \\multicolumn{1}{d}{Signal to central authority}\n& \\multicolumn{1}{d}{Timestamp}\n& \\multicolumn{1}{d}{---}\n\\\\\n\\hline\nNaiad~\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{d}{Pointstamp}\n& \\multicolumn{1}{d}{Part of data tuple}\n& \\multicolumn{1}{d}{Multidimensional timestamp}\n& \\multicolumn{1}{d}{Incremental processing of up-}\n\\\\\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{d}{}\n& \\multicolumn{1}{d}{}\n& \\multicolumn{1}{d}{}\n& \\multicolumn{1}{d}{dated data via structured loops}\n\\\\\n\\hline\nTrill~\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{d}{Low-watermark}\n& \\multicolumn{1}{d}{Punctuation}\n& \\multicolumn{1}{d}{Timestamp}\n& \\multicolumn{1}{d}{---}\n\\\\\n\\hline\nStreamscope~\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{d}{Low-watermark}\n& \\multicolumn{1}{d}{Punctuation}\n& \\multicolumn{1}{d}{Timestamp; sequence number}\n& \\multicolumn{1}{d}{---}\n\\\\\n\\hline\nSamza~\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{d}{Low-watermark}\n& \\multicolumn{1}{d}{Punctuation}\n& \\multicolumn{1}{d}{Timestamp}\n& \\multicolumn{1}{d}{Find, roll back, recompute af-}\n\\\\\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{d}{}\n& \\multicolumn{1}{d}{}\n& \\multicolumn{1}{d}{}\n& \\multicolumn{1}{d}{fected input windows}\n\\\\\n\\hline\nFlink~\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{d}{Low-watermark}\n& \\multicolumn{1}{d}{Punctuation}\n& \\multicolumn{1}{d}{Timestamp}\n& \\multicolumn{1}{d}{Store \\& 
Recompute/Revise}\n\\\\\n\\hline\nDataflow~\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{d}{Low-watermark}\n& \\multicolumn{1}{d}{Signal to central authority}\n& \\multicolumn{1}{d}{Timestamp}\n& \\multicolumn{1}{d}{Discard and recompute; accu-}\n\\\\\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{d}{}\n& \\multicolumn{1}{d}{}\n& \\multicolumn{1}{d}{}\n& \\multicolumn{1}{d}{mulate and revise; custom}\n\\\\\n\\hline\nSpark~\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{d}{Slack}\n& \\multicolumn{1}{d}{User config}\n& \\multicolumn{1}{d}{Number of seconds}\n& \\multicolumn{1}{d}{Discard and recompute; accu-}\n\\\\\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{d}{}\n& \\multicolumn{1}{d}{}\n& \\multicolumn{1}{d}{}\n& \\multicolumn{1}{d}{mulate and revise}\n\\\\\n\\hline\n\\end{tabular}\n\\end{table*}", "id": "f78a88b0-9a38-416b-89d2-f3897c778384", "level": "paragraph", "origin_cites_number": 16, "parent_id": "23e6dc62-b2ef-4514-8ef1-515f92d2667c", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Managing Event Order and Timeliness" ], [ "subsection", "Mechanisms for Managing Disorder" ], [ "subsubsection", "Revision processing" ], [ "paragraph", "Replay and revise." ] ], "subsections": [], "title": "Replay and revise." 
}, { "cite_extract_rate": 0.33333333333333304, "cites": [ 8837 ], "content": "Both \\emph{order-independent processing}~ and \\emph{impatience sort}~ are based on partial processing of independent partitions in parallel and consolidation of partial results.\nIn order-independent processing, when a tuple is received after its corresponding progress indicator a new partition is opened and a new query plan instance processes this partition using standard out-of-order processing techniques.\nOn the contrary, in impatience sort, the latest episode of the vision of CEDR~, an online sorting operator incrementally orders the input arriving at each partition so that it is emitted in order.\nThe approach uses punctuations to bound the disorder as opposed to order-independent processing which can handle events arriving arbitrarily late.\nIn order-independent processing, partitioning is left for the system to decide while in impatience sort it is specified by the users.\nIn order-independent processing, tuples that are too old to be considered in their original partition are included in the partition which has the tuple with the closest data.\nWhen no new data enter an ad-hoc partition for a long time, the partition is closed and destroyed by means of a heartbeat.\nAd-hoc partitions are window-based; when an out-of-order tuple is received that does not belong to one of the ad-hoc partitions, a new ad-hoc partition is introduced.\nAn out-of order tuple with a more recent timestamp than the window of an ad-hoc partition causes that partition to flush results and close.\nOrder-independent processing is implemented in Truviso.\nOn the contrary, in impatience sort, users specify reorder latencies, such as $1ms$, $100ms$, and $1s$, that define the buffering time for ingesting and sorting out-of-order input tuples.\nAccording to the specified reorder latencies, the system creates different partitions of in-order input streams.\nAfter sorting, a union operator merges and synchronizes 
the output of a partition $P$ with the output of a partition $L$ that features lower reorder latency than $P$.\nThus, the output will incorporate partial results provided by $L$ with later updates that $P$ contains.\nThis way applications that require fast but partial results can subscribe to a partition with small reorder latency and vice versa.\nBy letting applications choose the desired extent of reorder latency this design provides for different trade-offs between completeness and freshness of results.\nImpatience sort is implemented in Microsoft Trill.", "id": "6beb97a7-3c2e-44d8-bab6-efbe80c5cfe1", "level": "paragraph", "origin_cites_number": 3, "parent_id": "23e6dc62-b2ef-4514-8ef1-515f92d2667c", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Managing Event Order and Timeliness" ], [ "subsection", "Mechanisms for Managing Disorder" ], [ "subsubsection", "Revision processing" ], [ "paragraph", "Partition and consolidate." ] ], "subsections": [], "title": "Partition and consolidate." }, { "cite_extract_rate": 0.07142857142857101, "cites": [ 8837 ], "content": "\\label{sub:vintage}\nThe importance of event order in data stream processing became obvious since its early days~ leading to the first wave of simple intuitive solutions. 
Early approaches involved buffering and reordering arriving tuples using some measure for adjusting the frequency and lateness of data dispatched to a streaming system in order~.\nA few years later, the introduction of out-of-order processing~ improved throughput, latency, and scalability for window operations by keeping track of processing progress without ordering tuples.\nIn the meantime, revision processing~ was proposed as a strategy for dealing with out-of-order data reactively.\nIn the years to come, in-order, out-of-order, and revision processing were extensively explored, often in combination with one another~.\nModern streaming systems implement a refinement of these original concepts.\nInterestingly, concepts devised several years ago, like low-watermarks, punctuations, and triggers, which advance the original revision processing, were popularized recently by streaming systems such as Millwheel~, the Google Dataflow model~, Flink~, and Spark~. Table~\\ref{tab:disorder-management-streaming} presents how both 1st generation and modern streaming systems implement out-of-order data management.", "id": "642969c0-7ad3-4b8d-a803-892c464c2e58", "level": "subsection", "origin_cites_number": 14, "parent_id": "2af1dcb8-7a81-4897-bd68-275168901eef", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Managing Event Order and Timeliness" ], [ "subsection", "1st generation vs. 2nd generation" ] ], "subsections": [], "title": "1st generation vs. 2nd generation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sub:open}\nManaging data disorder entails architecture support and flexible mechanisms.\nThere are open problems at both levels.\nFirst, which architecture is better is an open debate. Although many of the latest streaming systems adopt an out-of-order architecture, opponents point to the architecture's implementation and maintenance complexity. 
In addition, revision processing, which is used to reconcile out-of-order tuples, is daunting at scale because of the challenging state size. On the other hand, in-order processing is resource-hungry and loses events if they arrive after the disorder bound.\nSecond, applications receiving data streams from different sources may need to support multiple notions of event time, one per incoming stream, for instance.\nHowever, streaming systems to date cannot support multiple time domains.\nFinally, data streams from different sources may have disparate latency characteristics that render their watermarks unaligned.\nTracking the processing progress of those applications is challenging for today's streaming systems.", "id": "373ee140-04c3-4940-89e4-a27d42ee052e", "level": "subsection", "origin_cites_number": 0, "parent_id": "2af1dcb8-7a81-4897-bd68-275168901eef", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Managing Event Order and Timeliness" ], [ "subsection", "Open Problems" ] ], "subsections": [], "title": "Open Problems" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:state}\nState is effectively what captures all internal side-effects of a continuous stream computation. This includes, for example, active windows, buckets of records, and partial or incremental aggregates used in an application, as well as possibly some user-defined variables created and updated during the execution of a stream pipeline. A careful look into how state is exposed and managed in stream processing systems unveils an interesting trace of trends in computer systems and cloud computing, as well as a revelation of prospects on upcoming capabilities in event-based computing. 
This section provides an overview of known approaches, modern directions and discussions of open problems in the context of state management.\n\\setlength{\\tabcolsep}{5pt}\n\\begin{table*}\n\\smaller\\centering\n\\caption{State Management Features in Streaming Systems}\n\\label{tab:state}\n{\n\\begin{tabular}{c|a|d|a|a|a|d} \n\\hline\n\\begin{tabular}[c]{@{}l@{}}\\\\\\textbf{System}\\end{tabular} \n& \\multicolumn{1}{c}{\\textbf{Programmable State}} & \n\\multicolumn{1}{c}{\\textbf{State Mgmt Responsibility}} & \n\\multicolumn{3}{c}{\\textbf{State Mgmt Architecture}} & \\multicolumn{1}{c}{\\textbf{Persistence Granularity}} \\\\\n & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{In-memory} & \\multicolumn{1}{c}{Out-of-Core} & \\multicolumn{1}{c}{External} & \\multicolumn{1}{c}{} \\\\ \n\\hline\nAurora/Borealis~ & \\xmark & System & \\checkmark & & & No persistence \\\\ \n\\hline\nSTREAM~ & \\xmark & System & \\checkmark & & & No persistence \\\\ \n\\hline\nTelegraphCQ~ & \\xmark & System & \\checkmark & & & No persistence \\\\ \n\\hline\nS4~ & \\checkmark & User & & & \\checkmark & No persistence \\\\ \n\\hline\nStorm (1.0)~ & \\checkmark & User & & & \\checkmark & No Persistence \\\\ \n\\hline\nSpark(1.0)~ & \\checkmark & System & \\checkmark & & & Batch-level \\\\ \n\\hline\nTrident~ & \\checkmark & System & \\checkmark & \\checkmark & & Batch-level \\\\ \n\\hline\nSEEP~ & \\checkmark & System & \\checkmark & & & Epoch-level \\\\ \n\\hline\nNaiad~ & \\checkmark & System & \\checkmark & & & Epoch-level \\\\ \n\\hline\nTimeStream~ & \\checkmark & System & \\checkmark & & & Epoch-level \\\\ \n\\hline\nMillwheel~ & \\checkmark & System & & & \\checkmark & Record-level \\\\ \n\\hline\nFlink~ & \\checkmark & System & \\checkmark & \\checkmark & & Epoch-level \\\\ \n\\hline\nKafka-Streams~ & \\xmark & System & \\checkmark & \\checkmark & & Epoch-level \\\\ \n\\hline\nSamza~ & \\checkmark & System & \\checkmark & \\checkmark & & Epoch-level \\\\ 
\n\\hline\nStreamscope~ & \\checkmark & System & \\checkmark & & & Epoch-level \\\\ \n\\hline\nS-Store~ & \\xmark & System & & & \\checkmark & Batch-level \\\\ \n\\hline \n\\end{tabular}\n}\n\\end{table*}", "id": "859d816d-87a6-416f-a8b8-81462a1d1466", "level": "section", "origin_cites_number": 17, "parent_id": "1912a489-c536-4d4e-8859-51a6be4f79c3", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "State Management" ] ], "subsections": [ "61926de8-b02d-45cf-b256-2ac48a911ce0", "a98f17f6-46f8-4fb8-bc43-9fd76124cca9", "a1980fa7-0a5b-4756-b57b-e99c4b7be0cb", "fb6d893e-3698-4d8a-82b6-2821f15ccaa3", "0f61a93b-72ac-4a17-bb6a-3768cdebbf25", "12be1f86-2bed-464d-829f-537405a4329f", "ae900cae-0ff9-43aa-8261-a1986541b33c", "eee37e9d-c1e5-4718-8d08-45f165bb37aa" ], "title": "State Management" }, { "cite_extract_rate": 0, "cites": [], "content": "Stream state management is still an active research field, incorporating methods on how state should be declared in a stream application, as well as how it should be scaled and partitioned. Furthermore, state management considers state persistence methods infinite/long running applications and defines system guarantees and properties to maintain whenever a change in the system occurs. \nA change during a system's runtime often requires state reconfiguration. Such a change can be the result of a partial process or network failure, but also actions that need to be taken to adjust compute and storage capacity (e.g., scaling-up/down). Most of these research issues have been introduced in part within the context of pioneering DSMSs such as Aurora and Borealis~. Specifically, Boralis has set the foundations in formulating many of these problems such as the need for embedded state, persistent store access as well as failure recovery protocols. In \\autoref{tab:state} we categorize known data stream processing systems according to their respective state management approaches. 
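To make the reconfiguration problem concrete, consider scaling a stateful computation to a different number of tasks: with hash-partitioned, key-scoped state, reconfiguration reduces to re-routing keys and migrating their state entries. The sketch below is illustrative only and tied to no particular system:

```python
def partition(key, num_tasks):
    # logical key -> physical task; plain hashing used here for illustration
    return hash(key) % num_tasks

def rescale(task_states, new_num_tasks):
    """Redistribute per-key state entries across a new task count.
    task_states: one dict (key -> state entry) per old physical task."""
    new_states = [dict() for _ in range(new_num_tasks)]
    for old in task_states:
        for key, value in old.items():
            new_states[partition(key, new_num_tasks)][key] = value
    return new_states
```

Because every state entry is scoped to a key, no entry is ever split between tasks; scaling is a pure re-routing and migration problem, which is why key-partitioned state became the dominant design.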
The rest of this section offers an overview of each of these topics in stream state management, along with past and currently employed approaches, which we categorize as follows:
State maintenance includes allocating memory/disk space for storing application variables, persisting changes to disk, and recovering state entries from durable storage upon system recovery. The first generation of data-parallel stream processing systems, i.e., Storm~ and S4~, required user-managed state. In such systems, stateful processing was either implemented with no guarantees, making use of custom in-memory data structures, or implemented using external key-value stores, which cover certain scalability and persistence needs. In the rest of the available systems, state management concerns have been handled internally by the streaming systems themselves, through the use of explicit state APIs or non-programmable yet internally managed state abstractions.
Systems such as STREAM, as well as Aurora/Borealis, attached special synopses to a stream application's dataflow graph to support different operators, such as a window max, a join index, or input source buffers for offsets. A noteworthy feature of STREAM was the capability to compositionally re-use synopses to define other synopses within an application, internally in the system. Overall, synopses were one of the first forms of state in early stream processing systems, primarily for stream processing over shared memory. Several of the issues regarding state, including fault tolerance and load balancing, were already considered back then, for example in Borealis. However, the lack of user-defined state limited the expressive power of that generation of systems to a subset of relational operations. Furthermore, the use of over-specialized data structures was somewhat oblivious to the needs of reconfiguration, which requires state to be flexible and easy to partition.
In the post-MapReduce era, the primary focus was on compute scalability, with systems like Storm~ allowing the composition of distributed pipelines of tasks. For application flexibility and simplicity, many of these systems did not provide any state management whatsoever, leaving everything regarding state in the hands of the programmer. That included both the declaration and the management of state. User-declared and user-managed state was either defined and used within the working memory and scope provided by the hosting framework, or defined and persisted externally, using an existing key-value store or database system (e.g., Redis). In summary, application-managed state offers flexibility and gives expert users implementation freedom. However, no state management capabilities are offered on the system's side. As a result, the user has to reason about persistence, out-of-core scalability, and all necessary third-party storage system dependencies.
These are all complex choices that require a combination of deep expertise and additional engineering work to integrate stream and storage technologies.
Currently, most stream processing systems allow a degree of freedom for user-defined state through some form of stateful processing API. This allows stream applications to define their own custom state, while also granting the underlying system access to state information in order to employ data management mechanisms for persistence, scalability, and fault tolerance. State information includes the types used, serializers/deserializers, and the read and write operations known at runtime. The main limitation of user-defined, system-managed state is the lack of direct control over the data structures that materialize that state (e.g., for custom optimizations).
This approach allows exploiting fast main memory access within each compute node while also supporting a growing number of state entries, which are split and archived in secondary storage. We observe that the out-of-core data structure of choice in most systems is a variant of the LSM-tree, such as FASTER~ or RocksDB/LevelDB\footnote{\url{github.com/google/leveldb}}.
 \item \textbf{External} architectures decouple compute and state, allowing state to be handled by an external database or key-value store. This approach enables more modular system designs (state and compute decoupling, which is very cloud-friendly) and effective re-use of several desired properties of database systems (e.g., ACID transactions, consistency guarantees, auto-scaling) in support of more complex guarantees in the context of data streaming. External state was predominantly used within Apache Storm applications: the lack of system-managed state necessitated that users store all of their state in external systems. In this architecture, whenever state access is needed, the streaming operator has to reach out to the external system, dramatically increasing its latency. Google's Millwheel, the cloud engine of Beam/Google Dataflow, is a representative example of a system-managed external state architecture. Millwheel builds on the capabilities of BigTable~ and Spanner~ (e.g., blind atomic writes). Tasks in Millwheel are effectively stateless. They do keep recent local changes in memory, but overall they commit every single output and state update to BigTable as a single transaction.
This means that Millwheel uses an external store both for persisting every single working state per key and for all the logs and checkpoints needed for recovery and non-idempotent updates.
\end{itemize}
This approach, as seen in Millwheel~, follows a record-level epoch model: it stores the state transition of each operator on every single output (detailed in \autoref{sec:FT}). Section~\ref{sec:stateconsistency} offers an in-depth analysis of the interplay between transactional stream processing and persistence granularity.
Section ~\\ref{sec:stateconsistency} gives an overview of the state of the art and covers the semantics of transactions in data streaming alongside implementation methodologies.\n\\begin{figure}[t!]\n \\includegraphics[width=.9\\linewidth]{Figures/statearchs.pdf}\n \\caption{Scalable Architectures for Stateful Data Streaming}\n \\label{fig:streamstate}\n\\end{figure}", "id": "924f3f04-49e4-4f85-9e4e-87f515e95b3d", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "fb6d893e-3698-4d8a-82b6-2821f15ccaa3", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "State Management" ], [ "subsection", "Persistence Granularity" ], [ "subsubsection", "Discussion" ] ], "subsections": [], "title": "Discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:statescales}\nScalable state has been the main incentive of the second generation of stream processing systems which automated deployment and partitioning of data stream computations. The need for scalable state was driven by the need to facilitate unbounded data stream executions where the space complexity for stream state is linear to the over-increasing input consumed by a stream processor at any point in time. 
This section discusses types of scalable state, as well as scalable system architectures that can support partitioning, persisting, and committing changes to large volumes of state.
Task-level state can also be useful for keeping offsets when consuming logs from a physical stream source task. Because non-partitioned state deals either with operator-local computations or with global aggregates, it does not scale and should be used with caution by practitioners.
This section highlights the types of guarantees offered by different stream processing systems and the implementation strategies that materialize them.
\para{Past Challenges and The Lambda Architecture: } When large-scale computing became mainstream, a design pattern called the ``lambda architecture'' emerged, which suggested the separation of systems across different layers according to their specialization and reliability capabilities. Hadoop and transactional databases were reliable in terms of processing guarantees; thus, they could take on all critical computation. Stream processing systems, in contrast, could achieve low latency and scale, but they did not offer a clear set of consistency guarantees. For example, in the state-oblivious Storm system, the fault-tolerance approach solely considered which input events had been fully processed and which should be replayed on a timeout. Nevertheless, there was no clear picture of what level of consistency could be expected from stream processors. At the same time, databases had formal guarantees. For example, a set of transactions would be processed using ACID guarantees, which include atomicity across transactions, consistency with respect to the valid states a database can have, isolation in terms of concurrent execution, and durability of what can be recovered after failure. To reason about consistency in the context of data streaming, there was a need to lay out a set of assumptions (e.g., logged input) and a processing granularity for defining a concept related to transactions.
\stitle{Consistent State in Stream Processing.} A stream processor today is a distributed system consisting of different concurrently executing tasks. Source tasks subscribe to input streams that are typically recorded in a partitioned log such as Kafka, and therefore input streams can be replayed. Sink tasks commit output streams to the outside world, and every task in this system can contain its own state.
For example, source tasks need to keep the current position of their input streams in their state. 
A system execution can often be modeled through the concept of ``concurrent actions''. An action includes invoking stream task logic on an input event, mutating its state, and producing output events. Every action happening in such a system causes other actions. Effectively, a single record sent by a source contributes to state updates throughout the whole pipeline and to output events created by the sinks. If a specific action is lost or happens twice, the complete system enters an \textit{inconsistent} state.
Fault tolerance is an integral aspect of streaming systems that significantly impacts their state consistency. We analyze the fault-tolerance strategies of existing streaming systems in Section~\ref{sub:fault-tolerance}.
In addition, due to causal dependencies on state, the order of action execution is also critical. Existing reliable stream processors define a transaction out of each individual action or out of a coarse-grained set of actions that we call \emph{epochs}. We explain these approaches in more detail next.
Millwheel uses BigTable to commit each full compute action, which includes input events, state transitions, and generated output. The act of committing these actions is also called a ``strong production'' in Millwheel. 
Persisting an operator's state per output event seemingly induces a high latency overhead. However, traditional database optimizations can be used to speed up commit and state read times. Write-ahead logging, blind writes, Bloom filters, and batch commits at the storage layer can all reduce commit latency. More importantly, since the order of actions is predefined at commit time, state persistence on a per-event basis also guarantees deterministic executions. In addition, this approach has important effects on consistency as perceived by applications that consume the system's output. This follows from the fact that ``exactly-once processing'' in this context relates to each action being atomically committed, as we discuss in Section~\ref{subsub:output-commit}.
\begin{figure}[t]
 \centering
 \includegraphics[width=.8\linewidth]{Figures/epochcommit.pdf}
 \caption{Transactional Epoch Commits in Data Streaming}
 \label{fig:epochcommits}
\end{figure}
the overall approach, marking input, system states, and outputs with a distinct epoch identifier. Epochs can be defined through markers at the logged input of the streaming application. A system execution can be instrumented to process each epoch and commit the state of the entire task graph after each epoch is processed. If a failure or other reconfiguration action happens during the execution of an epoch, the system can roll back to a previously committed epoch and recover its execution. The term ``exactly-once processing'' in this context relates to each epoch being atomically committed. In Section~\ref{sub:fault-tolerance}, where we present the different levels of processing semantics in streaming, we call this flavor \textit{exactly-once processing on state}. The rest of this section focuses on the various approaches used to commit stream epochs.
\stitle{Strict Two-Phase Epoch Commits.} A common coordinated protocol for committing epochs is a strict two-phase commit, where Phase-1 corresponds to the full processing of an epoch and Phase-2 ensures persisting the state of the system at the end of the computation. 
This approach was popularized by Apache Spark through the use of periodic ``micro-batching'', and it is an effective strategy when batch processing systems are used for unbounded processing. The main downside of this approach is the risk of low task utilization due to synchronous execution, since tasks have to wait for all other tasks to finish their current epoch. Drizzle~ mitigates this problem by chaining multiple epochs in a single atomic commit. A similar approach was also employed by S-Store~, where each database transaction corresponds to an epoch of the input stream that is already stored in the same database.
\stitle{Asynchronous Two-Phase Epoch Commits.} 
For pure dataflow systems, strict two-phase committing is problematic since tasks are uncoordinated and long-running.
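The strict scheme just described can be summarized in a few lines; the loop below is an illustrative micro-batch sketch, not any particular system's implementation:

```python
def run_epochs(epochs, state, commit):
    """Strict two-phase epoch commits: process a whole epoch (Phase-1),
    then atomically persist the resulting state (Phase-2). On failure,
    execution restarts from the last committed state and epoch."""
    for epoch_id, batch in enumerate(epochs):
        for key, value in batch:             # Phase-1: process the epoch
            state[key] = state.get(key, 0) + value
        commit(epoch_id, dict(state))        # Phase-2: persist a snapshot
    return state

committed = {}
state = run_epochs([[('a', 1)], [('a', 2), ('b', 5)]], {},
                   lambda eid, snap: committed.update({eid: snap}))
assert committed[1] == {'a': 3, 'b': 5}
```

Note that the synchronous per-epoch barrier in this loop is exactly what causes the low task utilization mentioned above: no task starts the next epoch before the current one is fully committed.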
Furthermore, it is feasible to achieve the same functionality asynchronously through consistent snapshotting algorithms, known from the classic distributed systems literature~. Consistent snapshotting algorithms exhibit beneficial properties because they do not require pausing a streaming application. Furthermore, they acquire a snapshot of a consistent cut in a distributed execution~. In other words, they capture the global states of the system during a ``valid'' execution. Across different implementations we can identify i) unaligned and ii) aligned snapshotting protocols.
\para{I. Unaligned / Chandy-Lamport snapshots} provide one of the most efficient methods to obtain a consistent snapshot. This approach is currently supported by several stream processors, such as IBM Streams and Flink. The core idea is to inject a punctuation, or ``marker'', into the regular stream of events and use that marker to separate all actions that come before and after the snapshot while the system is running. A caveat of unaligned snapshots is the need to record input (a.k.a. in-flight) events that arrive at individual tasks until the protocol is complete. In addition to the space overhead for logged inputs, unaligned snapshots require more processing during recovery, since logged inputs need to be replayed (similarly to redo logs in database recovery with fuzzy checkpoints).
\para{II. Aligned Snapshots}
Aligned snapshots aim to improve performance during recovery and minimize the reconfiguration complexity exhibited by unaligned snapshots. The main differentiation is to prioritize the input streams that are expected before the snapshot and thus end up solely with states that reflect a complete computation of an epoch, with no in-flight events as part of the snapshot. For example, Flink's epoch snapshotting mechanism resembles the Chandy-Lamport algorithm in terms of using markers to identify epoch frontiers.
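The unaligned marker mechanism can be sketched as follows; this is an illustrative toy, not an actual implementation. On the first marker, the task snapshots its state and then logs, as in-flight events, any records arriving on channels whose marker is still pending:

```python
class SnapshottingTask:
    """Unaligned (Chandy-Lamport style) snapshotting for a task with
    several input channels: snapshot state on the first marker, then log
    in-flight records from channels whose marker is still pending."""

    def __init__(self, channels):
        self.channels = set(channels)
        self.pending = None            # channels whose marker is pending
        self.state = 0
        self.snapshot = None           # (state, in-flight event log)

    def on_record(self, channel, value):
        if self.pending and channel in self.pending:
            self.snapshot[1].append((channel, value))  # log in-flight event
        self.state += value            # processing continues regardless

    def on_marker(self, channel):
        if self.pending is None:       # first marker: snapshot state now
            self.snapshot = (self.state, [])
            self.pending = set(self.channels)
        self.pending.discard(channel)
        if not self.pending:
            self.pending = None        # snapshot complete

task = SnapshottingTask(['a', 'b'])
task.on_record('a', 1)
task.on_marker('a')          # snapshot taken: state = 1
task.on_record('b', 2)       # pre-marker record on 'b': logged in-flight
task.on_marker('b')
assert task.snapshot == (1, [('b', 2)])
assert task.state == 3       # processing was never paused
```

Upon recovery, the logged in-flight events would be replayed on top of the snapshotted state, which is precisely the redo cost mentioned above.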
However, it additionally employs an alignment phase that synchronizes markers within tasks before disseminating them further. This is achieved by partially blocking the input channels on which markers have already been received, until all input channels have transferred all messages corresponding to a particular epoch. 
In summary, unaligned snapshots offer the best runtime performance but sacrifice recovery times due to the redo phase needed upon recovery. Aligned snapshots, in contrast, can lead to slower commit times due to the alignment phase, while providing a set of beneficial properties. First, aligned snapshots reflect a complete execution of an epoch, which is useful in use cases where snapshot-isolated queries need to be supported on top of data streaming~. Furthermore, aligned snapshots yield the lowest reconfiguration footprint and set the basis for live reconfiguration within the alignment phase, as exhibited by Chi~.
This type of state, often referred to as ``summary'', was used to internally materialize continuous processing operators such as those of the time-varying relational model of CQL~, as seen in STREAM~.\nA decade later, scalable data computing systems based on the MapReduce~ architecture allowed for arbitrary user-defined logic to be scaled and executed reliably using distributed middleware and partitioned file systems. Following the same trend, many existing data management models were revisited and re-architected with scalability in mind (e.g., NoSQL, NewSQL databases). Similarly, a growing number of scalable data stream processing systems~ married principles of scalable computing with stream semantics and models that were identified in the past (e.g. out-of-order processing~). This pivoting helped stream management technology to lift all assumptions associated with limited state capacity and thus reach its nearly full potential of correctly executing continuous event-driven applications with arbitrary state. \nAs of today, modern stream processors can compile and execute graphs of long-running operators with complete, user-defined yet system-managed state that is fault-tolerant and reconfigurable given a clear set of transactional guarantees.", "id": "ae900cae-0ff9-43aa-8261-a1986541b33c", "level": "subsection", "origin_cites_number": 14, "parent_id": "859d816d-87a6-416f-a8b8-81462a1d1466", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "State Management" ], [ "subsection", "1st vs. 2nd Generation" ] ], "subsections": [], "title": "1st vs. 2nd Generation" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 4761 ], "content": "Data streaming covers many data management needs today that go beyond real-time analytics, which was the original purpose of stream processing technology. New needs include support for more complex data pipelines with implicit transactional guarantees. 
Furthermore, modern applications involve Machine Learning, Graph Analysis and Cloud Apps, all of which have a common denominator: complex state and new access patterns. These needs have cultivated novel research directions in the emerging field of stream state management.\nThe decoupling of state programming from state persistence resembles the concept of data independence in databases. Systems are converging in terms of semantics and operations on state while, at the same time many new methods employed on embedded databases (e.g., LSM-trees, state indexing, externalized state) are helping stream processors to evolve in terms of performance capabilities. A recent study~ showcases the potential of workload-aware state management, adapting state persistence and access to the individual operators of a dataflow graph. To this end, an increasing number of ``pluggable'' systems~ for local state management with varying capabilities are being adopted by stream processors. This opens new capabilities for optimization and sophisticated, yet transparent state management that can automate the process of selecting the right physical plan and reconfigure that plan while continuous applications are executed.", "id": "eee37e9d-c1e5-4718-8d08-45f165bb37aa", "level": "subsection", "origin_cites_number": 3, "parent_id": "859d816d-87a6-416f-a8b8-81462a1d1466", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "State Management" ], [ "subsection", "Open Problems" ] ], "subsections": [], "title": "Open Problems" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:FT}\n\\setlength{\\tabcolsep}{6pt}\n\\begin{table*}\n\\smaller\\centering\n\\caption{Fault-tolerance in streaming systems.}\n\\label{tab:ha-sps}\n{\n\\begin{tabular}{p{0.13\\textwidth}\n p{0.04\\textwidth}\n p{0.04\\textwidth}\n p{0.03\\textwidth}\n p{0.04\\textwidth}\n p{0.05\\textwidth}\n p{0.03\\textwidth}\n p{0.03\\textwidth}\n p{0.05\\textwidth}\n 
p{0.03\\textwidth}\n p{0.04\\textwidth}\n p{0.04\\textwidth}\n p{0.09\\textwidth}\n p{0.03\\textwidth}\n }\n\\hline\n\\textbf{System} &\n\\multicolumn{3}{c}{\\textbf{Processing semantics}} &\n\\multicolumn{3}{c}{\\textbf{Replication}} &\n\\multicolumn{3}{c}{\\textbf{Recovery data}} & \n\\multicolumn{4}{c}{\\textbf{Storage medium}} \\\\\n\\rowcolor{white}\n& Least\n& \\multicolumn{2}{c}{Exactly-once}\n& Active\n& Passive\n& None\n& State\n& Output\n& None\n& \\multicolumn{2}{c}{Resilient store}\n& In-Memory\n& None \\\\\n&\n& State\n& Output\n&\n&\n&\n&\n&\n&\n& Local\n& Remote\n&\n&\n\\\\\n\\hline\nAurora*~\n& \\multicolumn{1}{a}{\\checkmark} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n\\\\\n\\hline\nTelegraphCQ~\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{\\checkmark} \n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n\\\\\n\\hline\nBorealis~\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n\\\\\n\\hline\nS4~\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n& 
\\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n\\\\\n\\hline\nSeep~\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n\\\\\n\\hline\nNaiad~\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n\\\\\n\\hline\nTimestream~\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n\\\\\n\\hline\nMillwheel~\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n\\\\\n\\hline\nStorm~\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} 
\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n\\\\\n\\hline\nTrident~\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n\\\\\n\\hline\nS-Store~\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n\\\\\n\\hline\nTrill~\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n\\\\\n\\hline\nHeron~\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n\\\\\n\\hline\nStreamscope~\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} \n& 
\\multicolumn{1}{a}{\\checkmark} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{a}{\\checkmark} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{\\checkmark} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n\\\\\n\\hline\nStreams~\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{\\checkmark} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n\\\\\n\\hline\nSamza~\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{a}{\\checkmark} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n\\\\\n\\hline\nFlink~\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} \n\\\\\n\\hline\nSpark~\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{\\checkmark} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{a}{} \n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{} \n& \\multicolumn{1}{b}{} 
\n\\\\\n\\hline\n\\end{tabular}\n}\n\\end{table*}\nFault tolerance is a system's capacity to continue its operation in spite of failures, delivering the expected service as if no failures had happened.\nIt is especially important for streaming systems for two reasons.\nFirst, streaming systems conduct stateful computations over potentially unbounded data streams.\nWithout fault tolerance, streaming systems would have to redo computations from the beginning, given that the state or progress accumulated thus far would be lost during a failure.\nBesides losing processing progress accumulated over an arbitrary time period, recomputation is often infeasible because the already processed segment of a data stream has permanently vanished.\nSecond, contemporary streaming systems feature a distributed systems architecture for scalability.\nIn a system deployed on multiple physical machines, failures occur commonly.\nBased on this motivation, a substantial body of work on fault tolerance in streaming systems has been produced.\nWe present it in Section~\\ref{sub:fault-tolerance}.\nIn computer systems, availability is defined as the proportion of time during which a system accomplishes its service, relative to service interruption periods.\nIt is typically quantified as a percentage, 100\\% being perfect availability~.\nThe term high availability has been adopted to denote that a system achieves a very high percentage of availability, such as 99.999\\% or higher.\nIn stream processing, where systems are not probed by users as typical information systems like web applications are, what service accomplishment means is open to interpretation.\nSurprisingly, no definition of high availability is provided in the stream processing literature.\nExisting research (Section~\\ref{sub:high-availability}) quantifies high availability using combinations of three metrics, namely recovery time, performance overhead in terms of throughput and latency, and resource utilization.\nWe highlight the absence of a 
definition and a suitable metric for high availability in the open problems in Section~\\ref{sub:ha-open-problems}, where we propose a definition based on processing progress and a proxy for measuring high availability based on end-to-end latency.\nBefore finishing with the open problems, we separate the 1st generation from modern systems with respect to fault tolerance and high availability in Section~\\ref{sub:ft-ha-vintage}.", "id": "60b7aebd-8cd3-47cb-a168-613a863dea95", "level": "section", "origin_cites_number": 23, "parent_id": "1912a489-c536-4d4e-8859-51a6be4f79c3", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Fault Tolerance \\& High Availability" ] ], "subsections": [ "cec784fb-7b8f-422c-97e4-f933ccdcddfb", "c8f9a49c-fc34-48ed-a538-db893d951b19", "3b22e1b8-ff4c-4fbf-bdc5-afbd6fdb7db5", "219e1f25-e155-47d4-a07b-6f557b4d0daa" ], "title": "Fault Tolerance \\& High Availability" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sub:fault-tolerance}\nMany important challenges in stream processing manifest when we take failures into account.\nManaging failures in a distributed streaming system entails maintaining snapshots of state, migrating state, and scaling out operators while affecting the healthy parts of the system as little as possible.\nTable~\\ref{tab:ha-sps} presents the fault-tolerance strategies of\neighteen streaming systems arranged in order of publication appearance\nfrom past to present.\nWe analyse the strategies across the following four dimensions.\n\\textit{1. Processing semantics} conveys how a system's data processing is affected by failures.\nTypically, all systems in the literature are able to produce correct results in failure-free executions.\nBut masking a failure completely is hard, especially in the stream processing domain where, typically, output is delivered as soon as it is produced. 
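To make the side effects of an unmasked failure concrete, consider this toy simulation (hypothetical code, not modeled on any specific system): an operator checkpoints its input offset every three records, crashes, and is restarted from the last checkpoint, re-emitting one output record.

```python
# Toy recovery-by-replay: checkpoint the input offset every 3 records.
# Hypothetical sketch; the names and the crash model are assumptions.
def process(stream, start=0, crash_after=None):
    """Emit rec * 2 per record; return (output, last checkpointed offset, crashed?)."""
    out, ckpt = [], start
    for i in range(start, len(stream)):
        if crash_after is not None and i == crash_after:
            return out, ckpt, True       # crash before checkpointing offset i
        out.append(stream[i] * 2)
        if (i + 1) % 3 == 0:
            ckpt = i + 1                 # durable offset checkpoint
    return out, ckpt, False

stream = [1, 2, 3, 4, 5]
out1, ckpt, crashed = process(stream, crash_after=4)  # processed offsets 0..3
out2, _, _ = process(stream, start=ckpt)              # replay from offset 3
output = out1 + out2
# The record at offset 3 (value 4, output 8) is emitted twice.
```

A failure-free run would yield [2, 4, 6, 8, 10]; the duplicate 8 is exactly the effect that the processing-semantics levels characterize.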
\nIn recent years, the stream processing domain has settled on the terms \\textit{at least-once} and \\textit{exactly-once} to characterize the processing semantics~.\nAt most-once is also part of the nomenclature, but it is mostly obsolete as systems opt to support one of the two stronger levels.\nAt least-once processing semantics means that the system will produce the same results as a failure-free execution, with the addition of duplicate records as a side effect of recovery.\nExactly-once lends itself to two different interpretations.\nA system may support exactly-once processing semantics within its boundaries, ensuring that any inconsistencies or duplicate execution carried out on recovery are not part of its state.\nWe call that exactly-once processing semantics on \\textit{state}.\nIt should be noted that most systems in this category still assume that the computations they apply as well as the system's functions are deterministic, which is often not the case; processing-time windows and operators processing input from multiple sources are two prime examples of nondeterminism. With nondeterminism at play, the system's state on recovery can diverge. Clonos~ provides exactly-once processing, including nondeterministic computations, by means of causal consistency. 
It keeps determinants about nondeterministic computations in a resilient manner and uses them to regenerate the exact computational state following a failure.\nWhile a system can restore its state to a consistent snapshot, the same is in general not feasible for the output published by the system.\nOnce the output is out, it is available for consumption by external applications.\nThus, a system with exactly-once processing semantics on state will still produce duplicate output on recovery.\nThis problem has been termed the output commit problem~ in the distributed systems literature.\nSystems that manage to produce the same output under failure as a failure-free execution have exactly-once processing semantics on \\textit{output}.\nIn Section~\\ref{subsub:output-commit} we elaborate on how streaming systems treat the output commit problem.\n\\noindent \\textit{2. Replication} regards the use of additional computational resources for recovering an execution.\nWe adopt the terminology of Hwang et al.~, who classify replication as either \\textit{active}, where two instances of the same execution run in parallel, or \\textit{passive}, where each running stateful operator that is part of an execution dispatches its checkpointed state to a standby operator.\n\\noindent \\textit{3. Recovery data} addresses what data are regularly stored for recovery purposes.\nData may include the \\textit{state} of each operator and the \\textit{output} it produces.\nIn addition, many fault tolerance strategies need to replay tuples of input streams during recovery in order to reprocess them.\nFor this purpose, input streams are typically stored persistently in message brokers like Apache Kafka.\nHowever, we exclude this fact from the table to save space.\n\\noindent \\textit{4. 
Storage medium} states where recovery data is stored.\nIt can be in a \\textit{resilient store} that is \\textit{local} to each stateful operator, in a \\textit{remote} resilient store, or in the memory space of a stateful operator.\n\\textit{In-memory} means that operators use their memory space as a primary storage medium for recovery data.\nSystems that merely cache recovery data, such as output tuples, in memory do not fall in this category.\nThe table is meant to be read both horizontally, to describe a specific system's approach to fault tolerance, and vertically, to uncover how the different building blocks shape the landscape of fault tolerance in stream processing.\nTwo remarks are necessary.\nFirst, the table contains three more annotations besides the self-explanatory checkmarks.\nStreamscope~ presents and evaluates three distinct fault tolerance strategies: an active replication-based strategy, a passive one, and a strategy that relies on recomputing state by replaying data from input streams.\nSecond, the state column in the recovery data dimension captures not only checkpointed state but also state metadata that allow recomputing the state, such as a changelog~ or state dependencies~.\nThe table reveals four interesting patterns.\nFirst, two columns accumulate the majority of checkmarks: passive replication and storing state for recovery.\nThis, perhaps the most visible pattern on the table, signifies that passive replication by storing state is, unsurprisingly, a very popular option for streaming systems.\nOne typical recovery approach is to restore the latest checkpoint of a failed operator in a new node and replay input that appeared after the checkpoint.\nVariations of this approach include saving in-flight tuples along with the state and maintaining in-flight tuples in upstream nodes.\nSecond, storing in-flight tuples for recovery is not preferred anymore, although it was a popular option for streaming systems in the past.\nThird, while past 
systems strived to support exactly-once output processing semantics, later systems opt for exactly-once semantics on state and outsource the deduplication of output to external systems.\nWe will elaborate on this aspect in Section~\\ref{subsub:output-commit}. \nFinally, among the various storage media for recovery data, a remote resilient store is the clear winner.", "id": "cec784fb-7b8f-422c-97e4-f933ccdcddfb", "level": "subsection", "origin_cites_number": 9, "parent_id": "60b7aebd-8cd3-47cb-a168-613a863dea95", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Fault Tolerance \\& High Availability" ], [ "subsection", "Fault-tolerance" ] ], "subsections": [ "9d4fc315-5b94-4a3d-b841-017e3f7d4d7d" ], "title": "Fault-tolerance" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsub:output-commit}\nThe output commit problem~ \nspecifies that a system should publish output to the outside world only when it is certain that it can recover the state from the point where the output was published. Because output cannot be retracted once it is sent, this ensures that every output is published only once.\nIf output is sent twice, then the system manifests inconsistent behavior with respect to the outside world.\nAn important instance of this problem manifests when a system is restoring some previous consistent state due to a failure.\nIn contrast to the system's state, its output cannot be retracted in general.\nThus, under failures, systems must be careful not to produce duplicate output.\nThe output commit problem is relevant in streaming systems, which typically conform to a distributed architecture and process unbounded data streams.\nIn this setting, the side effects of failures are difficult to mask. 
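A minimal sketch of one remedy, committing unique record ids and dropping retried deliveries at the consumer (illustrative Python; the `DedupSink` class and its in-memory set standing in for a durable, highly available store are assumptions):

```python
# Id-based output deduplication, sketched. Illustrative only: DedupSink and
# its in-memory 'seen' set stand in for a durable, highly available store.
class DedupSink:
    def __init__(self):
        self.seen = set()       # committed record ids (would be durable)
        self.delivered = []     # what the outside world observes

    def publish(self, rec_id, payload):
        if rec_id in self.seen:
            return False        # duplicate retry after recovery: drop it
        self.seen.add(rec_id)   # commit the id before delivering downstream
        self.delivered.append(payload)
        return True

sink = DedupSink()
# Record id 2 is retried after a simulated failure, but delivered only once.
for rec_id, payload in [(1, "a"), (2, "b"), (2, "b"), (3, "c")]:
    sink.publish(rec_id, payload)
```

The outside world observes each payload once, mirroring the unique-id commits performed by the transaction-based solutions described in this section.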
\nStreaming systems that solve the output commit problem provide output exactly-once.\nOther terms that refer to the same problem are processing output exactly-once and\nits paraphrases, as well as precise recovery~ and strong productions~.\nAlthough the problem is relevant and hard, solutions in the stream processing domain are scattered in the literature, pertaining to each system in isolation.\nWe group the various solutions in three categories, transaction-based, progress-based, and lineage-based, and describe each, noting the assumptions it involves.\nEach of the three types of techniques uses a different trait of the input or computation to identify\nwhether a certain tuple has appeared again. Transaction-based techniques use tuple identity, progress-based techniques\nuse order, while lineage-based techniques use input-output dependencies.\nFinally, we provide two more categories of solutions, special sink operators and external sinks, which do solve the problem practically but, strictly speaking, do not meet the problem’s specification because they are either specific or external to a streaming system.\n\\para{Transaction-based.}\nMillwheel~ and Trident~ rely on committing unique ids with records to eliminate duplicate retries.\nMillwheel assigns a unique id to each record entering the system and commits every record it produces to a highly available storage system before sending it downstream.\nDownstream operators acknowledge received records.\nIf a delivered record is retried, it is ignored by checking the unique id that it carries.\nMillwheel assumes no input ordering or determinism.\nTrident, on the other hand, batches records into a transaction, which is assigned a unique transaction id, and applies a state update to the state backend.\nAssuming that transactions are ordered, Trident can accurately ignore retried batches by checking the transaction id.\n\\para{Progress-based.}\nSeep~ uses timestamp comparison to deliver output exactly-once, relying on 
the order of timestamps.\nEach operator generates increasing scalar timestamps and attaches them to records.\nSeep checkpoints the state and output of each operator together with the vector timestamps of the latest records from each upstream operator that affected the operator's state.\nOn recovery, the latest checkpoint is loaded to a new operator, which replays the checkpointed output records and processes replayed records sent by its upstream operators.\nDownstream operators discard duplicate records based on the timestamps.\nThe system assumes deterministic computations that do not rely on system time or random input.\nA previous version of Seep~ applies the same process with the difference that a recovered operator rewinds its logical clock to the timestamp of the checkpoint it possesses before emitting records.\nThe system assumes deterministic computations without side-effects and a monotonically increasing logical clock providing timestamps.\nIt further assumes that records in a stream are ordered by their timestamps.\n\\para{Lineage-based.}\nTimestream~ and Streamscope~ use dependency tracking to provide exactly-once output.\nDuring normal operation, both systems track operator input and output dependencies by uniquely identifying records with sequence numbers.\nStreamscope persists records with their identifiers asynchronously.\nBoth systems store operator dependencies periodically in an asynchronous manner.\nIn Streamscope, however, each operator checkpoints individually not only its dependencies but also its state.\nOn recovery, Timestream retrieves the dependencies of failed operators by contacting upstream nodes recursively until all inputs required to rebuild the state are made available.\nStreamscope follows a similar process, but starts from a failed operator's checkpoint snapshot.\nFor each input sequence number in that snapshot not found in persistent storage Streamscope contacts upstream operators, which may have to recompute the record starting 
from their most relevant snapshot that can produce the output record given its sequence number.\nFinally, both systems use garbage collection to discard obsolete dependencies, but in a subtly different manner.\nTimestream computes the input records required by upstream operators in reverse topological order, from the final output to the original input, and discards those that are unneeded.\nStreamscope does the same but, instead of computing dependencies, uses low watermarks per operator and per stream to discard snapshots and records that are behind.\nIn Timestream, storing dependencies asynchronously can lead to duplicate recomputation, but downstream operators bearing the correct set of dependencies can discard them.\nStreamscope applies the same process only if duplicate records cannot be found in persistent storage.\nBoth Timestream and Streamscope assume deterministic computation and input in terms of order and values.\nThe progress-based and lineage-based solutions are vulnerable to failures of the last operator(s) on the dataflow graph, which produce the final output, since both solutions rely on downstream operators for filtering duplicate records.\n\\begin{table}[t]\n \\caption{Assumptions that systems make for solving the output commit problem}\n \\label{tab:output-commit}\n \\begin{tabular}{p{0.09\\textwidth}p{0.35\\textwidth}}\n \\textbf{System} & \\textbf{Assumptions} \\\\\n \\hline\n Millwheel & \\multicolumn{1}{e}{External High-throughput Transactional Store} \\\\ \\hline\n Timestream & \\multicolumn{1}{e}{Deterministic computation and input} \\\\ \\hline\n Streamscope & \\multicolumn{1}{e}{Deterministic computation and input} \\\\ \\hline\n Trident & \\multicolumn{1}{e}{Deterministic computation and input, ordering of} \\\\\n & \\multicolumn{1}{e}{transactions} \\\\ \\hline\n Seep & \\multicolumn{1}{e}{Deterministic computation, monotonically-} \\\\\n & \\multicolumn{1}{e}{increasing logical clock, records ordered by} \\\\\n & \\multicolumn{1}{e}{timestamp} 
\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\\para{Special sink operators.}\nStreams~ implements special sinks for retracting output from files and databases.\nThe application of this approach solves the output commit problem for specific use cases, but it is not applicable in general since it defies the core assumption of the problem that output cannot be retracted.\n\\para{External sinks.}\nSome systems, like Streams~, Flink~, and Spark~, provide exactly-once semantics on state and outsource the output commit problem to external sinks that support idempotent writes, such as Apache Kafka.\nOne way to categorise the solutions provided by special sink operators and external sinks is into optimistic output\ntechniques, which push output immediately and retract or update it if needed, and pessimistic output techniques, which use\na form of write-ahead log to stage the output they will publish until it is permanently committed~.\nOptimistic output techniques, which resemble multi-version concurrency control from the database world, include modifiable\nand versioned output destinations, while pessimistic output techniques include transactional sinks and similar tools.", "id": "9d4fc315-5b94-4a3d-b841-017e3f7d4d7d", "level": "subsubsection", "origin_cites_number": 11, "parent_id": "cec784fb-7b8f-422c-97e4-f933ccdcddfb", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Fault Tolerance \\& High Availability" ], [ "subsection", "Fault-tolerance" ], [ "subsubsection", "The output commit problem" ] ], "subsections": [], "title": "The output commit problem" }, { "cite_extract_rate": 0.08333333333333301, "cites": [ 4762 ], "content": "\\label{sub:high-availability}\nEmpirical studies of high availability in stream processing~ propose an active replication approach~, a passive replication approach~, a hybrid active-passive replication approach~, or model multiple approaches and evaluate them 
with simulated experiments~.\n\\para{Active replication.}\nFlux~ implements active replication by duplicating the computation and coordinating the progress of the two replicas.\nFlux restores the operator state and in-flight data of a failed partition while the other partition continues to process input.\nA new primary dataflow that runs following a failure quiesces when a new secondary dataflow is ready in a standby machine, in order to copy the state of its operators to the new secondary.\nIn contrast, in Borealis~ nodes address upstream node failures by switching to a live replica of the failed upstream node.\nIf a replica is not available, the node can produce tentative output for incomplete input to avoid the recovery delay.\nThe approach sacrifices consistency to optimize availability, but guarantees eventual consistency.\n\\para{Passive replication.}\nHwang et al.~ propose that each server in a cluster have another server as backup, to which it ships independent parts of its checkpointed state.\nWhen a node fails, its backup servers that hold parts of its checkpointed state initiate recovery in parallel by starting to execute the operators of the failed node whose state they have and collecting the input tuples they have missed from the checkpointed state they possess.\nSGuard~ and Clonos~ save computational resources in another way, by checkpointing state asynchronously to a distributed file system.\nUpon a failure, a node is selected to run a failed operator.\nThe operator's state is loaded from the file system and its in-memory state is reconstructed before it can join the job.\nBeyond asynchronous checkpointing,\na new checkpoint mechanism~ preserves output tuples until an acknowledgment is received from all downstream operators.\nNext, an operator trims its output tuples and takes a checkpoint.\nThe authors show that passive replication still requires longer recovery time than active replication, but with 90\\% less overhead due to reduced checkpoint 
size.\n\\para{Hybrid replication.}\nZwang et al.~ propose a hybrid approach to replication, which operates in passive mode under normal operation, but switches to active mode using a suspended pre-deployed secondary copy when a transient failure occurs.\nAccording to the provided experiment results, their approach saves 66\\% recovery time\ncompared to passive replication\nand produces 80\\% less message overhead than active replication.\nAlternatively, Heinze et al.~ propose to dynamically choose the replication scheme for each operator, either active replication or upstream backup, in order to reduce the recovery overhead of the system by limiting the peak latency under failure below a threshold.\nSimilarly, Su et al.~ counter correlated failures by passively replicating processing tasks except for a dynamically selected set that is actively replicated.\n\\para{Modeling and simulations.}\nIn their seminal work Hwang et al.~\nmodel and evaluate the recovery time and runtime overhead of four recovery approaches, active standby, passive standby, upstream backup, and amnesia, across different types of query operators.\nThe simulated experiments suggest that active standby achieves near-zero recovery time at the expense of high overhead in terms of resource utilization, while passive standby produces worse results in terms of both metrics compared to active standby.\nHowever, passive standby poses the only option for arbitrary query networks.\nUpstream backup has the lowest runtime overhead at the expense of longer recovery time.\nWith a similar goal, Shrink~, a distributed systems emulator,\nevaluates the models of five different resiliency strategies\nwith respect to uptime \\textsc{sla} and resource reservation.\nThe strategies differ across three axes, single-node vs multi-node, active vs passive replication, and checkpoint vs replay.\nAccording to the experiments with real queries on real advertising data using\nTrill~, active replication with periodic 
checkpoints\nproves advantageous in many streaming workloads,\nalthough no single strategy is appropriate for all of them.", "id": "c8f9a49c-fc34-48ed-a538-db893d951b19", "level": "subsection", "origin_cites_number": 12, "parent_id": "60b7aebd-8cd3-47cb-a168-613a863dea95", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Fault Tolerance \\& High Availability" ], [ "subsection", "High availability" ] ], "subsections": [], "title": "High availability" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sub:ft-ha-vintage}\nIn the early years, streaming systems put emphasis on high-availability setups with a preference towards active replication.\nContrastingly, modern systems tend to leverage passive replication, especially by allocating extra resources on demand, an approach well suited to Cloud setups.\nIn addition, past systems provided approximate results, while modern systems maintain exactly-once processing semantics over their state under failures.\nAlthough past systems lacked in terms of consistency, mainly due to state management aspects, they strived to solve the output commit problem.\nInstead, a typical avenue for modern systems that gains traction is to outsource the deduplication of output to external systems.\nFinally, while streaming systems used to store their output in order to be able to replay tuples to downstream operators recovering from a failure, modern systems rely increasingly on replayable input sources for replaying input subsets.", "id": "3b22e1b8-ff4c-4fbf-bdc5-afbd6fdb7db5", "level": "subsection", "origin_cites_number": 0, "parent_id": "60b7aebd-8cd3-47cb-a168-613a863dea95", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Fault Tolerance \\& High Availability" ], [ "subsection", "1st generation vs. 2nd generation" ] ], "subsections": [], "title": "1st generation vs. 
2nd generation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sub:ha-open-problems}\nMany problems wait to be solved in the scope of fault tolerance and high availability in streaming systems.\nThree of them include novel solutions to the output commit problem, defining and measuring availability in stream processing, and configuring availability for different application requirements.\nFirst, the importance of the output commit problem has the prospect to increase as streaming systems are used in novel ways like for running event-driven applications. Although we presented five different types of solutions, these suffer from computational cost, strong assumptions, limited applicability, and freshness of output results.\nNew types of solutions are required that score better in these dimensions.\nSecond, the literature of high availability in stream processing has significantly enhanced the availability of streaming systems throughout the years.\nBut, to the best of our knowledge, there has been scant research on what availability means in the area of stream processing.\nThe generic definition of availability for computer systems by Gray et al.~ relates availability merely to failures.\nAccording to the definition a system is available when it responds to requests with correct results, which is termed as service accomplishment.\nIn streaming however, processing is continuous and potentially unbounded.\nResponding with correct results becomes more challenging.\nThe factors that may impair availability in streaming include software and hardware failures, overload, backpressure, and types of processing stall, like checkpoints, state migration, garbage collection, and calls to external systems.\nThe common denominator of those factors, is that the system falls behind input.\nThis may not be a problem for other types of systems, like databases which can respond to queries with the historical data\nthey keep, but streaming systems have to continuously catch 
up processing with the input in order to provide correct results, that is, in order to be available.\nThus, a more specific definition of availability for stream processing can be stated in the following way.\n\\textit{A streaming system is available when it can provide output based on the processing of its current input.}\nThis definition extends to how we measure availability.\nAn appropriate way would be via progress tracking mechanisms, such as \\textit{the slack between processing time and\nevent time over time}, which quantifies the system’s processing progress with respect to the input as per Figure~\\ref{fig:availability}.\nThe area in the plot signifies the slack between event time and processing time over time.\nThe surface enclosing A amounts to 100\\% availability, while the surface containing B equals 60\\% availability.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[scale=.3]{Figures/availability.pdf}\n \\caption{Measuring availability with the slack between processing time and event time over time}\n \\label{fig:availability}\n\\end{figure}\nLast, availability is a prime non-functional characteristic of a streaming system and non-trivial to reason about as we showed.\nProviding user-friendly ways to specify availability as a contract that the system will always respect during its operation will significantly improve the position of streaming systems in production environments.\nConfiguring availability in this way will probably impact resource utilization, performance overhead during normal operation, recovery time, and consistency.", "id": "219e1f25-e155-47d4-a07b-6f557b4d0daa", "level": "subsection", "origin_cites_number": 1, "parent_id": "60b7aebd-8cd3-47cb-a168-613a863dea95", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Fault Tolerance \\& High Availability" ], [ "subsection", "Open Problems" ] ], "subsections": [], "title": "Open Problems" }, { "cite_extract_rate": 0, "cites": [], 
"content": "\\label{sec:elasticity}\nDue to the push-based nature of streaming inputs from external data sources, stream processors have no control over the rate of incoming events. Satisfying Quality of Service (QoS) under workload variations has been a long-standing research challenge in stream processing systems. \nTo avoid performance degradation when input rates exceed system capacity, the stream processor needs to take actions that will ensure sustaining the load. One such action is \\emph{load shedding}: temporarily dropping excess tuples from inputs or intermediate operators in the streaming execution graph.\nLoad shedding trades off result accuracy for sustainable performance and is suitable for applications with strict latency constraints that can tolerate approximate results.\nWhen result correctness is more critical than low latency, dropping tuples is not an option. If the load increase is transient, the system can instead choose to reliably buffer excess data and process it later, once input rates stabilize. Several systems employ \\emph{back-pressure}, a fundamental load management technique applicable to communication networks that involving producers and consumers. Nevertheless, to avoid running out of available memory during load spikes, \\emph{load-aware scheduling} and rate control can be applied. \nA more recent approach that aims at satisfying QoS while guaranteeing result correctness under variable input load is \\emph{elasticity}. Elastic stream processors are capable of adjusting their configuration and scaling their resource allocation in response to load. Dynamic scaling methods are applicable to both centralized and distributed settings. 
Elasticity not only addresses the case of increased load, but can additionally ensure no resources are left idle when the input load decreases.\n\\begin{comment}\n\\begin{figure*}[ht]\n \\centering\n \\begin{subfigure}{.33\\linewidth}\n \\centering\n \\includegraphics[width=.9\\linewidth]{Figures/load-shedding.png}\n \\caption{Load shedding}\n \\label{fig:load-shedding}\n \\end{subfigure}\n \\begin{subfigure}{.33\\linewidth}\n \\centering\n \\includegraphics[width=.9\\linewidth]{Figures/back-pressure.png}\n \\caption{Back-pressure}\n \\label{fig:back-pressure}\n \\end{subfigure}\n \\begin{subfigure}{.33\\linewidth}\n \\centering\n \\includegraphics[width=.9\\linewidth]{Figures/elasticity.png}\n \\caption{Elasticity}\n \\label{fig:elasticity}\n \\end{subfigure}\n \\caption{Load management techniques in stream processing.}\n \\label{fig:load-management}\n\\end{figure*}\n\\end{comment}\nNext, we review load shedding (Section~\\ref{sec:load-shedding}), load-aware scheduling and flow control (Section~\\ref{sec:flow-control}), and elasticity techniques (Section~\\ref{sec:elasticity-inner}). As in previous sections, we conclude with a discussion of 1st generation vs. modern and open problems.", "id": "b7e50fcf-514d-4094-9402-61a6721fb408", "level": "section", "origin_cites_number": 0, "parent_id": "1912a489-c536-4d4e-8859-51a6be4f79c3", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Load management, elasticity, \\& reconfiguration" ] ], "subsections": [ "0c5cfac7-4d25-4087-a072-305e0866670f", "229b7e49-9fd2-440c-a0c5-087746fe2163", "8918a236-3757-4be5-8e98-bad1687de8b5", "ee24c3f1-ef27-4c53-a975-96d993ee5d64", "d83a4c1a-1556-46d6-87f2-103c5d87a92e" ], "title": "Load management, elasticity, \\& reconfiguration" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:load-shedding}\nLoad shedding~ is the process of discarding data when input rates increase beyond system capacity. 
The system continuously monitors query performance and if an overload situation is detected, it selectively drops tuples according to a QoS specification. \nLoad shedding is commonly implemented by a standalone component integrated with the stream processor. The load shedder continuously monitors input rates or other system metrics and can access information about the running query plan. Its main functionality consists of detecting overload (\\emph{when} to shed load) and deciding what actions to take in order to maintain acceptable latency and minimize result quality degradation. These actions presume answering the questions of \\emph{where} (in the query plan), \\emph{how many}, and \\emph{which} tuples to drop.\nDetecting overload is a crucial task, as an incorrectly triggered shedding action can cause unnecessary result degradation. To facilitate the decision of \\emph{when}, load shedding components rely on statistics gathered during execution. The more knowledge a load shedder has about the query plan and its execution, the more accurate decisions it can make. For this reason, many stream processors restrict load shedding to a predefined set of operators, such as those that do not modify tuples, i.e.\\ filter, union, and join~. Other operator-restricted load shedding techniques target window operators~, or even more specifically, query plans with \\texttt{SUM} or \\texttt{COUNT} sliding window aggregates~. An alternative, operator-independent approach, is to frame load shedding as a feedback control problem~. The load shedder relies on a dynamic model that describes the relationship between average tuple delay (latency) and input rate. \nOnce the load shedder has detected overload, it needs to perform the actual load shedding. This includes the decision of where in the query plan to drop tuples from, as well as which tuples and how many. \nThe question of where is equivalent to placing special \\emph{drop operators} in the best positions in the query plan. 
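As an illustration, a drop operator of this kind can be sketched as follows. This is a toy random-drop operator with a queue-length trigger; the class name, thresholds, and seed are invented for the example and not taken from any surveyed system:

```python
import random

class RandomDropOperator:
    """Toy drop operator for a query plan: below the queue-length
    threshold it forwards every tuple; above it, each tuple is kept
    only with probability (1 - drop_rate), i.e. random sampling."""

    def __init__(self, threshold, drop_rate, seed=7):
        self.threshold = threshold
        self.drop_rate = drop_rate
        self.rng = random.Random(seed)

    def process(self, batch, queue_length):
        if queue_length <= self.threshold:
            return list(batch)                        # no overload detected
        return [t for t in batch if self.rng.random() >= self.drop_rate]

op = RandomDropOperator(threshold=100, drop_rate=0.5)
normal = op.process(range(1000), queue_length=50)     # keeps all tuples
shedding = op.process(range(1000), queue_length=500)  # keeps roughly half
```

Placing such an operator near the sources corresponds to dropping tuples early, as discussed next.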
In general, drop operators can be placed at any location in the query plan, however, they are often placed at or near the sources. Dropping tuples early avoids wasting work but it might affect results of multiple queries if the stream processor operates on a shared query network. Alternatively, a load shedding road map (LSRM) can be used~. This is a pre-computed table that contains materialized load shedding plans, ordered by the amount of load shedding they will cause. \nThe question of which tuples to drop is relevant when load shedding takes into account the \\emph{semantic} importance of tuples with respect to results quality.\nA \\emph{random} dropping strategy has been applied to sliding window aggregate queries to provide approximate results by inserting random sampling operators in the query plan~.\n\\emph{Window-aware} load shedding~ applies shedding to entire windows instead of individual tuples, while \n\\emph{concept-driven} load shedding~ is a semantic dropping strategy that selects tuples to discard based on the notion of concept drift.", "id": "0c5cfac7-4d25-4087-a072-305e0866670f", "level": "subsection", "origin_cites_number": 8, "parent_id": "b7e50fcf-514d-4094-9402-61a6721fb408", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Load management, elasticity, \\& reconfiguration" ], [ "subsection", "Load shedding" ] ], "subsections": [], "title": "Load shedding" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:flow-control}\nWhen load bursts are transient and a temporary increase in latency is preferred to missing results, back-pressure and flow control can provide load management without sacrificing accuracy.\nFlow control methods include buffering excess load, load-aware scheduling that prioritizes operators with the objective to minimize the backlog, regulating the transmission rate, and throttling the producer.\nFlow control and back-pressure techniques do not consider 
application-level quality requirements, such as the semantic importance of input tuples. Their main requirement is availability of buffer space at the sources or intermediate operators and that any accumulated load is within the system capacity limits, so that it will be eventually possible to process the data backlog.\n\\stitle{Load-aware scheduling} tackles the overload problem by selecting the \\emph{order} of operator execution and by adapting the \\emph{resource allocation}. For instance, backlog can be reduced by dynamically selecting the order of executing filters and joins~. Alternatively, adaptive scheduling~ modifies the allocation of resources given a static query plan.\nThe objective of load-aware scheduling strategies is to select an operator execution order that minimizes the total size of input queues in the system. The scheduler relies on knowledge about operator selectivities and processing costs. These statistics are either assumed to be known in advance, or need to be collected periodically during runtime. Operators are assigned priorities that reflect their potential to minimize intermediate results, and, consequently, the size of queues. \n\\stitle{Back-pressure and flow control.} In a network of consumers and producers such as a streaming execution graph with multiple operators, back-pressure has the effect that all operators slow down to match the processing speed of the slowest consumer. If the bottleneck operator is far down the dataflow graph, back-pressure propagates to upstream operators, eventually reaching the data stream sources. 
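The slow-down effect can be reproduced with nothing more than a bounded buffer between a producer and a consumer. The following is a minimal simulation under simplifying assumptions (one producer, one consumer, an in-memory queue standing in for the fixed buffer pool), not any system's actual implementation:

```python
import queue
import threading
import time

buf = queue.Queue(maxsize=4)   # fixed buffer space between the two operators
consumed = []

def producer():
    for i in range(20):
        buf.put(i)             # blocks when the buffer is full: back-pressure
    buf.put(None)              # end-of-stream marker

def consumer():
    while (item := buf.get()) is not None:
        time.sleep(0.002)      # the slow (bottleneck) consumer
        consumed.append(item)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()
# every tuple arrives, in order: latency grows but nothing is dropped
```
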
To ensure no data loss, a persistent input message queue, such as Apache Kafka, and adequate storage space are required.\n\\begin{figure*}[ht]\n \\centering\n \\begin{subfigure}{.45\\linewidth}\n \\centering\n \\includegraphics[width=.72\\linewidth]{Figures/sec6/buffer-based-1.pdf}\n \\caption{Local exchange}\\label{fig:buffer-based-local}\n \\end{subfigure}\n \\begin{subfigure}{.48\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{Figures/sec6/buffer-based-2.pdf}\n \\caption{Remote exchange}\\label{fig:buffer-based-remote}\n \\end{subfigure}\n \\caption{Buffer-based flow control.}\n\\end{figure*}\n\\emph{Buffer-based} back-pressure implicitly controls the flow of data via buffer availability. Considering a fixed amount of buffer space, a bottleneck operator will cause buffers to gradually fill up along its dataflow path.\nFigure~\\ref{fig:buffer-based-local} demonstrates buffer-based flow control when the producer and the consumer run on the same machine and share a buffer pool.\nWhen a producer generates a result, it serializes it into an output buffer. If the producer and consumer run on the same machine and the consumer is slow, the producer might attempt to retrieve an output buffer when none will be available. The producer's processing rate will, thus, slow down according to the rate the consumer is recycling buffers back into the shared buffer pool. The case when the producer and consumer are deployed on different machines and communicate via TCP is shown in Figure~\\ref{fig:buffer-based-remote}. If no buffer is available on the consumer side, the TCP connection will be interrupted. The producer can use a threshold to control how much data is in-flight and it is slowed down if it cannot put new data on the wire.\n\\begin{figure}[b]\n \\centering\n \\includegraphics[width=.9\\linewidth]{Figures/sec6/cfc-new.pdf}\n \\caption{Credit-based flow control in a dataflow graph. 
Receivers regularly announce their credit upstream (gray and white squares indicate full and free buffers, respectively).}\n \\label{fig:cfc}\n\\end{figure}\n\\emph{Credit-based flow control} (CFC)~ is a link-by-link, per virtual channel congestion control technique used in ATM network switches. \nIn a nutshell, CFC uses a credit system to signal the availability of buffer space from receivers to senders. \nThis classic networking technique turns out to be very useful for load management in modern, highly-parallel stream processors and is implemented in Apache Flink~. Figure~\\ref{fig:cfc} shows how the scheme works for a hypothetical dataflow. Parallel tasks are connected via virtual channels multiplexed over TCP connections. Each task informs its senders of its buffer availability via credit messages. This way, senders always know whether receivers have the required capacity to handle data messages. When the credit of a receiver drops to zero (or a specified threshold), back-pressure appears on its virtual channel. An important advantage of this per-channel flow control mechanism is that back-pressure is inflicted on pairs of communicating tasks only and does not interfere with other tasks sharing the same TCP connection. This is crucial in the presence of data skew where a single overloaded task could otherwise block the flow of data to all other downstream operator instances. 
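A sketch of the credit mechanism on a single virtual channel follows. The class and method names are illustrative; real implementations piggyback credit announcements on the network transport rather than exposing them as method calls:

```python
from collections import deque

class VirtualChannel:
    """Credit-based flow control: the receiver announces one credit per
    free buffer; the sender spends a credit per message and stalls when
    its credit is exhausted, without blocking other channels."""

    def __init__(self, receiver_buffers):
        self.credit = receiver_buffers   # initial credit announcement
        self.in_flight = deque()

    def send(self, msg):
        if self.credit == 0:
            return False                 # back-pressure on this channel only
        self.credit -= 1
        self.in_flight.append(msg)
        return True

    def consume(self):
        msg = self.in_flight.popleft()
        self.credit += 1                 # freed buffer: new credit upstream
        return msg

skewed = VirtualChannel(receiver_buffers=2)   # overloaded receiver
healthy = VirtualChannel(receiver_buffers=2)  # shares the same TCP connection
sent = [skewed.send(m) for m in ("a", "b", "c")]  # third send stalls
other = healthy.send("x")   # unaffected by the skewed channel
```
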
On the downside, the additional credit announcement messages might increase end-to-end latency.", "id": "229b7e49-9fd2-440c-a0c5-087746fe2163", "level": "subsection", "origin_cites_number": 6, "parent_id": "b7e50fcf-514d-4094-9402-61a6721fb408", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Load management, elasticity, \\& reconfiguration" ], [ "subsection", "Scheduling and flow control" ] ], "subsections": [], "title": "Scheduling and flow control" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:elasticity-inner}\nThe approaches of load shedding and back-pressure are designed to handle workload variations in a \\emph{statically provisioned} stream processor or application. \nStream processors deployed on cloud environments or clusters can make use of a dynamic pool of resources. \\emph{Dynamic scaling} or \\emph{elasticity} is the ability of a stream processor to vary the resources available to a running computation in order to handle workload variations efficiently. Building an elastic streaming system requires a \\emph{policy} and a \\emph{mechanism}. The policy component implements a control algorithm that collects performance metrics and decides when and how much to scale. The mechanism effects the configuration change. It handles resource allocation, work re-assignment, and state migration, while guaranteeing result correctness. 
Table~\\ref{tbl:scaling-comparison} summarizes the dynamic scaling capabilities and characteristics of elastic streaming systems.", "id": "8918a236-3757-4be5-8e98-bad1687de8b5", "level": "subsection", "origin_cites_number": 0, "parent_id": "b7e50fcf-514d-4094-9402-61a6721fb408", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Load management, elasticity, \\& reconfiguration" ], [ "subsection", "Elasticity" ] ], "subsections": [ "c48c9908-5343-4d71-9072-8d2e3ca530f6", "24a60a5e-5309-48db-a30e-2b25c7c7f6b1" ], "title": "Elasticity" }, { "cite_extract_rate": 0.041666666666666005, "cites": [ 4763 ], "content": "A \\emph{scaling policy} involves two individual decisions. First, it needs to detect the symptoms of an unhealthy computation and decide whether scaling is necessary. Symptom detection is a well-understood problem and can be addressed using conventional monitoring tools.\nSecond, the policy needs to identify the causes of exhibited symptoms (e.g. a bottleneck operator) and propose a scaling action. This is a challenging task which requires performance analysis and prediction.\nIt is common practice to place the burden of scaling decisions on application users who have to face conflicting incentives. They can either plan for the highest expected workload, possibly incurring high cost, or they can choose to be conservative and risk degraded performance.\nAutomatic scaling refers to scaling decisions transparently handled by the streaming system in response to load.\nCommercial streaming systems that support automatic scaling include Google Cloud Dataflow~, Heron~, and IBM System S~, while DS2~, Seep~ and StreamCloud~ are recent research prototypes.\nIn Table~\\ref{tbl:scaling-comparison}, we categorize policies into \\emph{heuristic} and \\emph{predictive}. 
Heuristic policies rely on empirically predefined rules and are often triggered by thresholds or observed conditions while predictive policies make scaling decisions guided by analytical performance models.\nHeuristic policy controllers gather coarse-grained metrics, such as CPU utilization, observed throughput, queue sizes, and memory utilization, to detect suboptimal scaling. CPU and memory utilization can be inadequate metrics for streaming applications deployed in cloud environments due to multi-tenancy and performance interference~. StreamCloud~ and Seep~ try to mitigate the problem by separating user time and system time, but\npreemption can make these metrics misleading. For example, high CPU usage caused by a task running on the same physical machine as a dataflow operator can trigger incorrect scale-ups (false positives) or prevent correct scale-downs (false negatives). Google Cloud Dataflow~ relies on CPU utilization for scale-down decisions only but still suffers false negatives.\nDhalion~ and IBM Streams~ also use congestion and back-pressure signals to identify bottlenecks. 
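A toy threshold controller in this heuristic style might look as follows. The thresholds and signal names are invented for illustration; the sketch also encodes why a nonzero backlog should veto a scale-down, the false-negative pitfall noted above:

```python
def heuristic_scaling(parallelism, cpu_util, backlog,
                      up_cpu=0.8, down_cpu=0.3, max_backlog=10_000):
    """Threshold rules: scale up on high CPU or a growing backlog;
    scale down only when CPU is low *and* the backlog is drained."""
    if cpu_util > up_cpu or backlog > max_backlog:
        return parallelism + 1
    if cpu_util < down_cpu and backlog == 0:
        return max(1, parallelism - 1)
    return parallelism
```
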
These metrics are helpful for identifying bottlenecks but they cannot detect resource over-provisioning.\n\\setlength{\\tabcolsep}{4pt}\n\\begin{table*}\n\\smaller\\centering\n\\caption{Elasticity policies and mechanisms in streaming systems}\n\\label{tbl:scaling-comparison}\n\\begin{tabular}{p{0.15\\textwidth}\n p{0.06\\textwidth}\n p{0.06\\textwidth}\n p{0.06\\textwidth}\n p{0.06\\textwidth}\n p{0.06\\textwidth}\n p{0.06\\textwidth}\n p{0.06\\textwidth}\n p{0.06\\textwidth}\n p{0.08\\textwidth}\n }\n\\hline\n\\multicolumn{1}{c}{\\textbf{System}} &\n\\multicolumn{2}{c}{\\textbf{Policy}} &\n\\multicolumn{2}{c}{\\textbf{Objective}} &\n\\multicolumn{3}{c}{\\textbf{Reconfiguration}} &\n\\multicolumn{2}{c}{\\textbf{State Migration}}\n\\\\\n\\rowcolor{white}\n& \\multicolumn{1}{c}{Heuristic}\n& \\multicolumn{1}{c}{Predictive}\n& \\multicolumn{1}{c}{Latency}\n& \\multicolumn{1}{c}{Throughput}\n& \\multicolumn{1}{c}{Stop-and-Restart}\n& \\multicolumn{1}{c}{Partial Pause}\n& \\multicolumn{1}{c}{Live} \n& \\multicolumn{1}{c}{At-Once}\n& \\multicolumn{1}{c}{Progressive} \\\\\n\\hline\nBorealis~\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{3}{a}{n/a}\n& \\multicolumn{2}{b}{n/a}\n\\\\\nStreamCloud~ \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{}\n\\\\\nSeep~ \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{}\n\\\\\nIBM Streams~ \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{}\n& 
\\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{}\n\\\\\nFUGU~ \n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{}\n\\\\\nNephele~ \n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{}\n& \\multicolumn{1}{b}{}\n\\\\\nDRS~ \n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{}\n& \\multicolumn{1}{b}{}\n\\\\\nMPC~ \n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{}\n\\\\\nCometCloud~ \n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{2}{b}{n/a}\n\\\\\nChronostream~ \n& \\multicolumn{2}{a}{n/a}\n& \\multicolumn{2}{b}{n/a}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{}\n\\\\\nACES~ \n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{3}{a}{n/a}\n& \\multicolumn{2}{b}{n/a}\n\\\\\nStella~ \n& 
\\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{}\n& \\multicolumn{1}{b}{}\n\\\\ \nGoogle~Dataflow~\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{}\n& \\multicolumn{1}{b}{}\n\\\\ \nDhalion~\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{}\n\\\\ \nDS2~\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{b}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{}\n\\\\ \nSpark Streaming~\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{}\n\\\\\nMegaphone~\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{}\n& \\multicolumn{1}{b}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{}\n& \\multicolumn{1}{b}{\\checkmark}\n\\\\\nTurbine~\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{}\n\\\\\nRhino~\n& \\multicolumn2{a}{n/a}\n& 
\\multicolumn{2}{b}{n/a}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{a}{\\checkmark}\n& \\multicolumn{1}{a}{}\n& \\multicolumn{1}{b}{\\checkmark}\n& \\multicolumn{1}{b}{}\n\\\\\n\\hline\n\\end{tabular}\n\\end{table*}\nPredictive policy controllers build an analytical performance model of the streaming system and formulate the scaling problem as a set of mathematical functions. Predictive approaches include queuing theory~, control theory~, and instrumentation-driven linear performance models~. Thanks to their closed-form analytical formulation, predictive policies are capable of making multi-operator decisions in one step.", "id": "c48c9908-5343-4d71-9072-8d2e3ca530f6", "level": "subsubsection", "origin_cites_number": 24, "parent_id": "8918a236-3757-4be5-8e98-bad1687de8b5", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Load management, elasticity, \\& reconfiguration" ], [ "subsection", "Elasticity" ], [ "subsubsection", "Elasticity policies" ] ], "subsections": [], "title": "Elasticity policies" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 4763 ], "content": "Elasticity mechanisms are concerned with realizing the actions indicated by the policy. \nThey need to ensure correctness and low-latency redistribution of accumulated state when effecting a reconfiguration. To ensure correctness, many streaming systems rely on the fault-tolerance mechanism to provide reconfiguration capabilities. When adding new workers to a running computation, the mechanism needs not only to re-assign work to them but also to migrate any necessary state these new workers will now be in charge of. \nElasticity mechanisms need to complete a reconfiguration as quickly as possible and at the same time minimize performance disruption. We review the main methods for state redistribution, reconfiguration, and state transfer next. 
We focus on systems with embedded state, as reconfiguration mechanisms are significantly simplified when state is external.\n\\stitle{State redistribution.}\nState redistribution must preserve key semantics, so that existing state for a particular key and all future events with this key are routed to the same worker. For that purpose, most systems use hashing methods. \\emph{Uniform hashing} evenly distributes keys across parallel tasks. It is fast to compute and requires no routing state but might incur high migration cost. When a new node is added, state is shuffled across existing and new workers. It also causes random I/O and high network communication. Thus, it is not particularly suitable for adaptive applications. \\emph{Consistent hashing} and variations are more often preferred. Workers and keys are mapped to multiple points on a ring using multiple random hash functions. Consistent hashing ensures that state is not moved across workers that are present before and after the migration. When a new worker joins, it becomes responsible for data items from multiple of the existing nodes. When a worker leaves, its key space is distributed over existing workers. \nApache Flink~ uses a variation of consistent hashing in which state is organized into \\emph{key groups} and those are mapped to parallel tasks as ranges. On reconfiguration, reads are sequential within each key group, and often across multiple key groups. The metadata of key group to task assignments are small and it is sufficient to store key-group range boundaries. The number of key groups limits the maximum number of parallel tasks to which keyed state can be scaled.\nHashing techniques are simple to implement and do not require storing any routing state, however, they do not perform well under skewed key distributions. \\emph{Hybrid partitioning}~ combines consistent hashing and an explicit mapping to generate a compact hash function that provides load balance in the presence of skew. 
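The range-based key-group assignment described above for Flink can be sketched as follows. This is a simplified, hypothetical illustration (function names are ours, and the hash function here is CRC32 rather than the one a real system would use); it shows how keys map to a fixed number of key groups and how contiguous key-group ranges map to parallel tasks, so that rescaling moves whole ranges rather than reshuffling individual keys:

```python
import zlib

def key_group(key: str, max_parallelism: int) -> int:
    # Map a key deterministically to one of a fixed number of key groups.
    return zlib.crc32(key.encode()) % max_parallelism

def task_for_group(group: int, max_parallelism: int, parallelism: int) -> int:
    # Key groups are assigned to tasks as contiguous ranges, so only the
    # range boundaries need to be stored as routing metadata.
    return group * parallelism // max_parallelism

def task_for_key(key: str, max_parallelism: int, parallelism: int) -> int:
    return task_for_group(key_group(key, max_parallelism), max_parallelism, parallelism)
```

Note that, as in the scheme described in the text, the number of key groups (`max_parallelism`) bounds the maximum parallelism to which keyed state can be scaled.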
The main idea is to track the frequencies of the partitioning key values and treat normal keys and popular keys differently. The mechanism uses the lossy counting algorithm~ in a sliding window setting to estimate heavy hitters, as keeping exact counts would be impractical for large key domains.\n\\stitle{Reconfiguration strategy.}\nRegardless of the re-partitioning strategy used, if the elasticity policy makes a decision to change an application's resources, the mechanism will have to transfer some amount of state across workers on the same or different physical machines.\nThe \\emph{stop-and-restart} strategy halts the computation, takes a state snapshot of all operators, and then restarts the application with the new configuration. Even though this mechanism is simple to implement and it trivially guarantees correctness, it unnecessarily stalls the entire pipeline even if only one or a few operators need to be rescaled. As shown in Table~\\ref{tbl:scaling-comparison}, this strategy is very common in modern systems.\n\\emph{Partial pause and restart}, introduced by FLUX~, is a less disruptive strategy that only blocks the affected dataflow subgraph temporarily. The affected subgraph contains the operator to be scaled, as well as upstream channels and upstream operators. Figure~\\ref{fig:partial-pause-and-restart} shows an example of the protocol. To migrate state from operator $a$ to operator $b$, the mechanism will execute the following steps:\n(1) First, it \\emph{pauses} $a$'s upstream operators, which stop pushing tuples to $a$. Paused operators start buffering input tuples in their local buffers. Operator $a$ continues processing tuples in its buffers until they are empty. (2) Once $a$'s buffers are empty, it extracts its state and sends it to operator $b$. (3) Operator $b$ loads the state and (4) sends a \\emph{restart} signal to upstream operators. 
Once upstream operators receive the signal they can start processing tuples again.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=.9\\linewidth]{Figures/sec6/patial-pause-and-restart.pdf}\n \\caption{An example of the partial-pause-and-restart protocol. To move state from operator $a$ to $b$, the mechanism executes the following steps: (1) Pause $a$'s upstream operators, (2) extract state from $a$, (3) load state into $b$, and (4) send a restart signal from $b$ to upstream operators.}\n \\label{fig:partial-pause-and-restart}\n\\end{figure}\nThe \\emph{pro-active replication} strategy maintains state backup copies in multiple nodes so that reconfiguration can be performed in a nearly live manner when needed. The state is organized into smaller partitions, each of which can be transferred independently. Each node has a set of primary state slices and a set of secondary state slices. Figure~\\ref{fig:proactive} shows an example of the protocol as implemented by ChronoStream~. \n\\begin{figure}[b]\n \\centering\n \\includegraphics[width=.9\\linewidth]{Figures/sec6/proactive-replication.pdf}\n \\caption{An example of the proactive replication protocol. To move slice \\#1 from $N_{src}$ to $N_{dest}$, the mechanism executes the following steps: (1) the leader instructs $N_{dest}$ to load slice \\#1, (2) $N_{dest}$ loads slice \\#1 and sends ack to the leader, (3) the leader notifies upstream operators to replay events, (4) upstream start rerouting events to $N_{dest}$, (5) the leader notifies $N_{src}$ that the transfer is complete and $N_{src}$ moves slice \\#1 to the backup group.}\n \\label{fig:proactive}\n\\end{figure}\n\\stitle{State transfer.} Another important decision to make when migrating state from one worker to another is whether the state is moved \\emph{all-at-once} or in a \\emph{progressive} manner. If a large amount of state needs to be transferred, moving it in one operation might cause high latency during re-configuration. 
\nAlternatively, \\emph{progressive} migration~ moves state in smaller pieces and flattens latency spikes by interleaving state transfer with processing. On the downside, progressive state migration might lead to longer migration duration.", "id": "24a60a5e-5309-48db-a30e-2b25c7c7f6b1", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "8918a236-3757-4be5-8e98-bad1687de8b5", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Load management, elasticity, \\& reconfiguration" ], [ "subsection", "Elasticity" ], [ "subsubsection", "Elasticity mechanisms" ] ], "subsections": [], "title": "Elasticity mechanisms" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{comment}\n\\begin{itemize}\n \\item Dropping data and degrading results quality used to be considered an acceptable solution. Today it is not.\n \\item Queries were considered known beforehand and belonging to a universal query plan, i.e. they affect the execution of each other. Today queries are typically independent jobs / dataflows.\n \\item Load shedding techniques were applied to the set of all queries running in the system as a whole. Today flow control mechanisms and elasticity techniques are implemented in a per-job basis.\n \\item In the distributed stream processor case (e.g. Borealis~) solutions consider load \\emph{per-server} and again make decisions across computing nodes.\n \\item Load shedding models assume a limited set of operators whose properties and characteristics are stable throughout execution and can be computed offline, e.g. we can rely on metrics such as cost-per-tuple and selectivity per operator. Modern dataflows view operators as black boxes with potentially complex UDFs whose cost and selectivity can vary over time, e.g. due to changed workload characteristics or state accumulation.\n\\end{itemize}\n\\end{comment}\nComparing early to modern approaches, we make the following observations. 
While load shedding was popular among early stream processors, modern systems do not favor the approach of degrading results quality anymore. Another important difference is that load management approaches in 1st generation systems used to affect the execution of multiple queries as they formed a shared dataflow plan (cf. Section~\\ref{sec:preliminaries}). Queries in modern systems are typically executed as independent jobs, thus, back-pressure on a certain query will not affect the execution of other queries running on the same cluster. Scaling down is a quite recent requirement that was not a matter of concern before cloud deployments. \nThe dependence on persistent queues for providing correctness guarantees is another recent characteristic, mainly required by systems employing back-pressure. Finally, while early load shedding and load-aware scheduling techniques assume a limited set of operators whose properties and characteristics are stable throughout execution, modern systems implement general load management methods that are applicable even if cost and selectivity vary or are unknown.", "id": "ee24c3f1-ef27-4c53-a975-96d993ee5d64", "level": "subsection", "origin_cites_number": 1, "parent_id": "b7e50fcf-514d-4094-9402-61a6721fb408", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Load management, elasticity, \\& reconfiguration" ], [ "subsection", "1st generation vs. 2nd generation" ] ], "subsections": [], "title": "1st generation vs. 2nd generation" }, { "cite_extract_rate": 0, "cites": [], "content": "Adaptive scheduling methods have so far been studied in the context of simple query plans with operators whose selectivities and costs are fixed and known. It is unclear whether these methods generalize to arbitrary plans, operators with UDFs, general windows, and custom joins. 
Load-aware scheduling can further cause starvation and increased per-tuple latency, as low-priority operators with records in their input buffers would need to wait a long time during bursts. Finally, existing methods are restricted to streams that arrive in timestamp order and do not support out-of-order or delayed events.\nRe-configurable stream processing is a fairly recent research area in which stream processors are designed to adjust not only their resource allocation but also other elements of their runtime.\nElasticity, the ability of a stream processor to dynamically adjust resource allocation, can be considered a special case of re-configuration. Others include code updates for bug fixes, version upgrades, or business logic changes, execution plan switching, dynamic scheduling and operator placement, as well as skew and straggler mitigation. So far, each of the aforementioned re-configuration scenarios has been largely studied in isolation. To provide general re-configuration and self-management, future systems will need to take into account how optimizations interact with each other.", "id": "d83a4c1a-1556-46d6-87f2-103c5d87a92e", "level": "subsection", "origin_cites_number": 0, "parent_id": "b7e50fcf-514d-4094-9402-61a6721fb408", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Load management, elasticity, \\& reconfiguration" ], [ "subsection", "Open Problems" ] ], "subsections": [], "title": "Open Problems" }, { "cite_extract_rate": 0, "cites": [], "content": "Reactive, dataflow programming, event-based, actors, time-series, temporal databases, active databases, complex-event processing.\nApplications\nBenchmarks", "id": "456f6722-b0ce-464c-a64d-a5bb4343d94d", "level": "section", "origin_cites_number": 0, "parent_id": "1912a489-c536-4d4e-8859-51a6be4f79c3", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Related Concepts" 
] ], "subsections": [], "title": "Related Concepts" }, { "cite_extract_rate": 0.1, "cites": [ 4757 ], "content": "\\label{sec:conclusion}\nWhile early streaming systems strove to extend relational execution engines with time-based window processing, modern systems have evolved significantly in terms of architecture and capabilities. Table~\\ref{tab:evolution-streaming} summarizes the evolution of major streaming system aspects over the last three decades.\nWhile approximate results were mainstream in early systems, modern systems have primarily focused on results correctness and have largely rejected the notion of approximation.\nIn terms of languages, modern systems favor general-purpose programming languages, however, we recently witness a trend to return to extensions for streaming SQL~. Over the years, execution has also gradually transitioned from mainly centralized to mainly distributed, exploiting data, pipeline, and task parallelism. At the same time, most modern systems construct independent execution plans per query and apply little optimization and sharing.\nRegarding time, order, and progress, many of the inventions of the past proved to have survived the test of time, since they continue to hold a place in modern streaming systems.\nEspecially Millwheel and the Google Dataflow Model popularized punctuations, watermarks, the out-of-order architecture, and triggers for revision processing. Streaming state management witnessed a major shift, from specialized in-memory synopses to large partitioned and persistent state supported today. As a result, fault tolerance and high availability also shifted towards passive replication and exactly-once processing. Finally, load management approaches have transitioned from load shedding and scheduling methods to elasticity and backpressure coupled with persistent inputs.\nIn state management we identify the most radical changes seen in data streaming so far. 
The most obvious advances relate to the scalability of state and long-term persistence in unbounded executions. Yet, today's systems have invested thoroughly in providing transactional guarantees that are on par with those that modern database management systems can offer today. Transactional stream processing has pivoted data streaming beyond its use for data analytics and has also opened new research directions in terms of efficient methods for backing and accessing state that grows without bound. Stream state and compute are gradually being decoupled, and this allows for better optimizations, wider interoperability with storage technologies, as well as novel semantics for shared and external state, with stream processors serving as the backbone of modern continuous applications and live scalable data services.\nWe believe the road ahead is still long for streaming systems. Emerging streaming applications in the areas of Cloud services~, machine learning~, and streaming graph analytics~ present new requirements and are already shaping the key characteristics of the future generation of data stream technology. We expect systems to evolve further and exploit next-generation hardware~, focus on transactions and iteration support, improve their reconfiguration capabilities, and take state management a step further by leveraging workload-aware backends~, shared state and versioning.\n\\bibliographystyle{abbrv}\n\\interlinepenalty=10000\n\\bibliography{references}\n\\balance\n\\end{document}", "id": "ac790ee5-82fb-4e06-9845-dd189aefb008", "level": "section", "origin_cites_number": 10, "parent_id": "1912a489-c536-4d4e-8859-51a6be4f79c3", "prefix_titles": [ [ "title", "A Survey on the Evolution of Stream Processing Systems" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
55
[ 4757, 4758, 4759, 8837, 7843, 4760, 4761, 4762, 4763 ]
0.639156
[ "Alireza Sepas-Moghaddam", "Ali Etemad" ]
Deep Gait Recognition: A Survey
2021
2021-02-18T18:49:28Z
cs.CV
Gait recognition is an appealing biometric modality which aims to identify individuals based on the way they walk. Deep learning has reshaped the research landscape in this area since 2015 through the ability to automatically learn discriminative representations. Gait recognition methods based on deep learning now dominate the state-of-the-art in the field and have fostered real-world applications. In this paper, we present a comprehensive overview of breakthroughs and recent developments in gait recognition with deep learning, and cover broad topics including datasets, test protocols, state-of-the-art solutions, challenges, and future research directions. We first review the commonly used gait datasets along with the principles designed for evaluating them. We then propose a novel taxonomy made up of four separate dimensions namely body representation, temporal representation, feature representation, and neural architecture, to help characterize and organize the research landscape and literature in this area. Following our proposed taxonomy, a comprehensive survey of gait recognition methods using deep learning is presented with discussions on their performances, characteristics, advantages, and limitations. We conclude this survey with a discussion on current challenges and mention a number of promising directions for future research in gait recognition.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "1ca90fda-21b2-4c6d-bcb1-6e612173f620", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ] ], "subsections": [ "f4bd41e2-dc95-40dd-a294-168affd295c2", "f43a8f77-5363-4071-8947-1057e7984e96", "a7c4a5b9-298c-4ac1-a382-d1950b08a89d", "d929d97d-c95d-4152-9099-5e650bfff992", "c203cf63-21d3-48d6-90ac-d1b1607a9797", "d74cb17e-ad4d-4784-8c2d-f13df1f49804", "73e8a8f0-de22-4ddf-bbaa-fec216d6b3cc", "0f41dce4-04fc-4ee5-a289-0d7f3a059438" ], "title": "root" }, { "cite_extract_rate": 0.058823529411764004, "cites": [ 287 ], "content": "\\label{sec:introduction}}\n\\IEEEPARstart{G}{AIT}, defined as the way people walk, contains relevant cues about human subjects~. As a result, it has been widely used in different application areas such as affect analysis~, sport science~, health~, and user identification~. Gait information can be captured using a number of sensing modalities such as wearable sensors attached to the human body, for instance accelerometers, gyroscopes, and force and pressure sensors~. Non-wearable gait recognition systems predominantly use vision, and are therefore mostly known as vision-based gait recognition. These systems capture gait data using imaging sensors with no cooperation from the subjects and even from far away distances~. The focus of this paper is to survey vision-based gait recognition systems that have mainly relied on deep learning. 
We focus solely on vision-based gait recognition as a comprehensive review paper has recently been published, surveying wearable-based gait recognition approaches~.\nThe performance of vision-based gait recognition systems, hereafter referred to only as gait recognition, can be affected by \\textit{i)} variations in the appearance of the individual, such as carrying a handbag/backpack or wearing items of clothing such as a hat or a coat; \\textit{ii)} variations in the camera viewpoint; \\textit{iii)} occlusion factors, for instance where parts of the subject's body are partially covered by an object or by a part of the subject's own body in certain viewpoints (known as self-occlusion)~; and \\textit{iv)} variations in the environment, such as complex backgrounds~ and high or low levels of lighting~, which generally make the segmentation and recognition processes more difficult.\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=0.95\\columnwidth]{Images/fig1_v2.pdf}\n\\caption{The number of gait recognition papers published after 2015 using non-deep ({orange}) and deep ({blue}) gait recognition methods. These papers have been published in top-tier journals and conferences in the field. Journal publications include IEEE Transactions ($19\\%$) including \\textit{T-PAMI}, \\textit{T-IP}, \\textit{T-IFS}, \\textit{T-MM}, \\textit{T-CSVT}, and \\textit{T-Biom}, as well as other top journals ($24\\%$) such as \\textit{Pattern Recognition} and \\textit{Pattern Recognition Letters}. Conference publications include highly ranked computer vision and machine learning conferences ($22\\%$) including \\textit{CVPR}, \\textit{AAAI}, \\textit{ICCV}, \\textit{ECCV}, \\textit{ACCV}, \\textit{BMVC}, as well as other top relevant conferences ($35\\%$) such as \\textit{ICASSP}, \\textit{ICIP}, \\textit{ICPR}, \\textit{ICME}, \\textit{ACM Multimedia}, and \\textit{IJCB}. 
The figure shows clear opposing trends between the two approaches, indicating that, unsurprisingly, deep learning methods have become the dominant approach in recent years.}\n\\label{fig:Evolution}\n\\end{figure}\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=0.9\\textwidth]{Images/Timeline.pdf}\n\\caption{The evolution of deep gait recognition methods.}\n\\label{fig:timeline}\n\\end{figure*}", "id": "f4bd41e2-dc95-40dd-a294-168affd295c2", "level": "section", "origin_cites_number": 17, "parent_id": "1ca90fda-21b2-4c6d-bcb1-6e612173f620", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Introduction" ] ], "subsections": [ "445091ae-c505-4b58-a2b4-5c467eaea0b0", "051b239c-2af8-4765-b1b2-7e334249fdeb", "c6234a5d-bf13-4122-9a0a-8dec280529e6", "43a7a742-8345-4c8a-a830-8baf2e53d713" ], "title": "Introduction" }, { "cite_extract_rate": 0.42857142857142805, "cites": [ 289, 290, 293, 288, 292, 291 ], "content": "Gait recognition systems pose challenges that are unique to this field, making it a problem that demands an independent treatment. From a `biometrics' perspective, gait recognition has several unique characteristics that distinguish it from other biometric modalities. For instance, in contrast to many other biometric systems such as face~, ear~, iris, and fingerprint~ recognition that require subjects to be quite close to acquisition systems, gait data can be captured from far away distances~. As a result, gait recognition videos may often be recorded with low spatial resolution, hence many details regarding the scene become challenging for automated systems to detect. Moreover, while most biometric recognition systems need the subjects’ active cooperation towards acquisition, gait recognition data can be captured in a discreet manner~. As a result, the likelihood of recording gait patterns in an uncontrolled and non-obscured manner considerably increases. 
Interestingly, this very property makes gait difficult to forge by imposters, making it reliable for sensitive applications such as crime analysis~.\nWhat makes some of the challenges in gait recognition unique and distinct from general `computer vision' problems is that most gait recognition methods learn representations from analysis of the \\textit{skeletons} or \\textit{silhouettes} of subjects. Meanwhile, other visual classification problems often heavily rely on derived features from \\textit{texture} in addition to shape and structure information. \nFor example, despite the similarities of computer vision problems such as `person re-identification'~ and `human activity recognition'~ to gait recognition, gait data still pose challenges and properties that are unique to this field. \nSpecifically, person re-identification methods identify subjects across multiple non-overlapping surveillance cameras, or possibly from the same camera but at different time instances. To this end, these methods aim to learn representations that capture appearance characteristics of individuals such as clothing and skin color tone, that are shared across multiple cameras~. On the contrary, gait recognition methods aim to learn suitable representations with which \\textit{walking patterns} can be disentangled from the visual appearance of the subjects and subsequently used for classification~.\nWhen comparing gait recognition to human activity recognition methods~, the goal of the latter is to identify specific movements or actions of a subject from video clips, which can be considered as `macro' motion patterns. Meanwhile, gait characteristics can be considered nuanced `micro' patterns that sit on top of a specific activity class, namely \\textit{walking}. As a result, the detection of such subtle discriminative information are often more challenging than those dealt with for activity recognition. 
Furthermore, given the subtlety of gait patterns that make them unique to different subjects, they can often be highly influenced by the temporary personal state of the subject, for instance, fatigue~, excitement and fear~, and even injuries~.", "id": "445091ae-c505-4b58-a2b4-5c467eaea0b0", "level": "subsection", "origin_cites_number": 14, "parent_id": "f4bd41e2-dc95-40dd-a294-168affd295c2", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Introduction" ], [ "subsection", "Unique Characteristics of Gait Recognition" ] ], "subsections": [], "title": "Unique Characteristics of Gait Recognition" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 294, 296, 9138, 295 ], "content": "In the past two decades, many gait recognition methods have been developed to tackle the above-mentioned problems. In recent years, there has been a clear trend in migrating from non-deep methods to deep learning-based solutions for gait recognition. To visualize this trend, we present Figure \\ref{fig:Evolution}, which illustrates the number of gait recognition papers published after 2015. It is observed that the majority of gait recognition methods in 2019 and 2020 have been designed based on deep neural networks. In Figure \\ref{fig:timeline}, we illustrate the evolution of some of the most important gait recognition methods along with their associated accuracy on the CASIA-B~ (perhaps the most popular dataset for gait recognition) when available. The first gait recognition method was proposed in 1997~, followed by the first shallow neural network for gait recognition in 2008~, consisting of only one input layer, one hidden layer, and one output layer. In 2015, the field witnessed significant breakthroughs, notably due to the popularization of deep neural networks~. The method entitled GaitNet~ was then proposed in 2016 based on a 6-layer convolutional neural network (CNN). 
In 2017, DBNGait~ was proposed based on a deep belief network (DBN), and in~ three deep CNNs with different depths and architectures were fused for gait recognition. VGR-Net~ was one of the important contributions in 2018, followed by the introduction of several significant methods in 2019, including PoseGait~, DisentangledGait~, and GaitSet~, where the best recognition accuracy of 84.2\\% was achieved by GaitSet~. Remarkable advances have been made in 2020, notably by the appearance of several highly efficient methods, including PartialRNN~, GaitPart~, GLN~, HMRGait~, and 3DCNNGait~. The current state-of-the-art results on the CASIA-B dataset~ have been reported by 3DCNNGait~ with a recognition accuracy of 90.4\\%. \nSeveral survey papers~ have so far reviewed recent advances in gait recognition, where some of these papers, for instance~, have focused on \\textit{non-vision-based} gait recognition methods. The most recent survey papers on \\textit{vision-based} gait recognition are~, which only cover the papers published until mid-2018. Nonetheless, many important breakthroughs in gait recognition with deep learning have occurred since 2019, as observed in Figures \\ref{fig:Evolution} and \\ref{fig:timeline}. Additionally, none of the surveys~ have specifically focused on deep learning methods for gait recognition.", "id": "051b239c-2af8-4765-b1b2-7e334249fdeb", "level": "subsection", "origin_cites_number": 24, "parent_id": "f4bd41e2-dc95-40dd-a294-168affd295c2", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Introduction" ], [ "subsection", "Motivation" ] ], "subsections": [], "title": "Motivation" }, { "cite_extract_rate": 0, "cites": [], "content": "This paper surveys the most recent advances in gait recognition until the end of January 2021, providing insights into both technical and performance aspects of the deep gait recognition methods in a systematic way. 
In this context, we first propose a novel taxonomy with four dimensions, i.e., body representation, temporal representation, feature representation, and neural architecture, to help characterize and organize the available methods. Following our proposed taxonomy, a comprehensive survey of all the available deep gait recognition methods is presented, along with discussions on their characteristics and performances. We have established certain search protocols to make sure other scholars can confidently use this survey in their future research. \nOur key contributions are summarized as follows:\n\\begin{itemize}\n \\item We propose a novel taxonomy with four dimensions to characterize and organize the available deep gait recognition methods.\n \\item We provide a taxonomy-guided review on the evolution of the deep gait recognition methods, where most of these methods have not been reviewed in previous surveys. This provides insights for new topic exploration and future algorithm design.\n \\item We present comparisons between the state-of-the-art using the available results reported on large-scale public gait datasets, providing insight into the effectiveness of different deep gait recognition methods.\n \\item We review 15 publicly available vision-based datasets for gait recognition, along with their associated test protocols.\n \\item We discuss a number of open challenges and identify important future research directions that will be of benefit to researchers working on gait recognition. \n\\end{itemize}", "id": "c6234a5d-bf13-4122-9a0a-8dec280529e6", "level": "subsection", "origin_cites_number": 0, "parent_id": "f4bd41e2-dc95-40dd-a294-168affd295c2", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Introduction" ], [ "subsection", "Contribution" ] ], "subsections": [], "title": "Contribution" }, { "cite_extract_rate": 0, "cites": [], "content": "The rest of this survey is structured as follows. 
We start with describing the systematic approach used to collect the papers and review the literature. Next, in Section 3, we review the available gait datasets along with their associated test protocols. We then use these datasets and protocols to report the existing performance results when reviewing the deep gait recognition methods. Section 4 presents our proposed taxonomy. Following, Section 5 surveys the state-of-the-art on deep gait recognition and discusses the evolutional trends of deep gait recognition over the past few years. Finally, Section 6 discusses some deep gait recognition challenges and identifies a number of future research areas.", "id": "43a7a742-8345-4c8a-a830-8baf2e53d713", "level": "subsection", "origin_cites_number": 0, "parent_id": "f4bd41e2-dc95-40dd-a294-168affd295c2", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Introduction" ], [ "subsection", "Organization" ] ], "subsections": [], "title": "Organization" }, { "cite_extract_rate": 0.052631578947368, "cites": [ 297 ], "content": "We employed a search protocol to ensure other scholars can confidently use this survey in their future research. To do so, we first discovered candidate papers through the Google Scholar~ search engines and digital libraries, namely IEEE Xplore~, ACM Digital Library~, ScienceDirect~, and CVF Open Access~. Our search terms included combinations of the following queries: ``gait recognition'', ``gait identification'', ``gait biometric'', ``neural architecture'', ``deep learning'', and ``deep representations''. We then filtered the search results, thus excluding papers that neither use deep learning methods for gait recognition nor demonstrate enough technical clarity/depth. 
To be more specific about the `clarity/depth' criteria, we excluded the papers that \textit{i}) use non-vision sensors for gait recognition; \textit{ii}) do not propose a new solution; \textit{iii}) use non-standard or private datasets for performance evaluation; \textit{iv}) do not compare the performance of their solution to the state-of-the-art. In cases where other modalities were combined with vision-based sensors, only the technical solution focusing on the vision-based aspect was studied. 
\begin{table*}
    \centering
    \setlength\tabcolsep{2.3pt}
    \caption{Summary of well-known gait datasets used in the literature.}
    \begin{tabular}{l|l|l|l|l|l|l}
    \hline
    \textbf {Dataset}& \textbf{Year}& \textbf{Data Type} & \textbf{\# of Subjects}& \textbf{Environment} & \textbf{\# of} & \textbf{Variations} \\
    \textbf { }& \textbf{ }& \textbf{ } & \textbf{\& Sequences}& \textbf{ } & \textbf{Views} & \textbf{ } \\
    \hline\hline
    CMU MoBo~& 2001 & RGB; Silhouette & 25 / 600 & Indoor & 6 & 3 Walking Speeds; Carrying a Ball\\
    SOTON~ & 2002 & RGB; Silhouette & 115 / 2,128 & Indoor \& Outdoor & 2 & Normal Walking on a Treadmill\\
    CASIA-A~& 2003 & RGB & 20 / 240 & Outdoor & 3 & Normal Walking\\
    USF HumanID~ & 2005 & RGB & 122 / 1,870 & Outdoor & 2 & Outdoor Walking; Carrying a Briefcase; Time Interval\\
    CASIA-B~ & 2006 & RGB; Silhouette & 124 / 13,680 & Indoor & 11 & Normal Walking; Carrying a Bag; Wearing a Coat\\
    CASIA-C~ & 2006 & Infrared; Silhouette & 153 / 1,530 & Outdoor & 1 & 3 Walking Speeds; Carrying a Bag \\
    OU-ISIR Speed~ & 2010 & Silhouette & 34 / 306 & Indoor & 4 & Nine Walking Speeds\\
    OU-ISIR Clothing~ & 2010 & Silhouette & 68 / 2,746 & Indoor & 4 & Up to 32 Combinations of Clothing\\
    OU-ISIR MV~ & 2010 & Silhouette & 168 / 4,200 & Indoor & 25 & 24 Azimuth Views and 1 Top View\\
    OU-ISIR~ & 2012 & Silhouette & 4,007 / 31,368 & Outdoor & 4 & Normal Walking\\
    TUM GAID~ & 2012 & RGB; Depth; Audio & 305 / 3,737 & Indoor & 1 & Normal Walking; Backpack; Wearing Coating Shoes\\
    OU-ISIR LP Bag~ & 2017 & Silhouette & 62,528 / 187,584 & Indoor & 1 & Seven Different Carried Objects \\
    OU-MVLP~ & 2018 & Silhouette & 10,307 / 259,013 & Indoor & 14 & Normal Walking\\
    CASIA-E~ & 2020 & Silhouette & 1,014 / Undisclosed & Indoor \& Outdoor & 15 & 3 Scenes; Normal Walking; Carrying a Bag; Wearing a Coat \\
    OU-MVLP Pose~ & 2020 & Skeleton & 10,307 / 259,013 & Indoor & 14 & Normal Walking\\
    \hline
    \end{tabular}
    \label{tabDB}
\end{table*}
Naturally, we imposed restrictions on publication dates, only including search results after 2014, when deep neural networks were first used for biometric recognition~. We then used the returned results in order to perform forward and backward searches, respectively identifying the other resources that have cited the returned articles and the references cited by the returned articles. We repeated this process with the newly identified resources until we collected the most relevant papers to the best of our knowledge. 
We eventually ended up with a final set of publications that have used deep learning for gait recognition.", "id": "f43a8f77-5363-4071-8947-1057e7984e96", "level": "section", "origin_cites_number": 19, "parent_id": "1ca90fda-21b2-4c6d-bcb1-6e612173f620", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Review Methodology" ] ], "subsections": [], "title": "Review Methodology" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "a7c4a5b9-298c-4ac1-a382-d1950b08a89d", "level": "section", "origin_cites_number": 0, "parent_id": "1ca90fda-21b2-4c6d-bcb1-6e612173f620", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Test Protocols and Datasets" ] ], "subsections": [ "0d511533-99df-41c9-b80f-7abbd60a8206", "9b9e3d6c-6cb4-4b10-8ca0-3ecb3e33a697" ], "title": "Test Protocols and Datasets" }, { "cite_extract_rate": 0, "cites": [], "content": "Evaluation protocols for gait recognition solutions can generally be categorized into \\textit{subject-dependent} and \\textit{subject-independent}. As illustrated in Figure~\\ref{fig:protocols}, in the subject-dependent protocol, both the training and testing sets include samples from all the subjects. However, in the subject-independent protocol, the test subjects are disjoint from the training subjects. Here, the test data are further divided into gallery and probe sets, and the learned model on the disjoint training subjects is then used to extract features from gallery and probe sets. Finally, a classifier is used to compare the probe features with the gallery ones in order to identify the most similar gait patterns and label them as being from the same identity. Both subject-dependent and subject-independent protocols have been widely adopted for gait recognition. For example, in the TUM GAID~ dataset, subject-dependent protocols have been often used, while in the CASIA-B~ and OU-MVLP~ large-scale datasets, subject-independent protocols are utilized. 
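The subject-independent evaluation described above can be sketched in a few lines. The following toy example (all names and embeddings are hypothetical) assumes gait features have already been extracted by a model trained on disjoint subjects, and scores rank-1 accuracy with a nearest-neighbor classifier:

```python
import numpy as np

def rank1_accuracy(gallery_feats, gallery_ids, probe_feats, probe_ids):
    """Rank-1 identification: each probe is assigned the identity of its
    nearest gallery embedding (Euclidean distance)."""
    correct = 0
    for feat, true_id in zip(probe_feats, probe_ids):
        dists = np.linalg.norm(gallery_feats - feat, axis=1)
        pred_id = gallery_ids[int(np.argmin(dists))]
        correct += int(pred_id == true_id)
    return correct / len(probe_ids)

# Toy subject-independent setup: the gallery/probe subjects are disjoint
# from the training subjects; here we fake 3 test subjects in a 2-D space.
gallery_feats = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
gallery_ids = np.array([0, 1, 2])
probe_feats = np.array([[0.1, -0.1], [1.9, 2.1]])
probe_ids = np.array([0, 2])

print(rank1_accuracy(gallery_feats, gallery_ids, probe_feats, probe_ids))  # → 1.0
```

In practice the gallery and probe sets come from the dataset-specific splits described next, and the embeddings are the outputs of the trained network.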
Gait recognition results in the literature have all been measured and presented using rank-1 recognition accuracy.\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=1\\columnwidth]{Images/protocols.pdf}\n\\caption{An overview of test protocols is presented. These protocols can be categorized into subject-dependent or subject-independent according to whether test subjects appear in the training set or not.}\n\\label{fig:protocols}\n\\end{figure}", "id": "0d511533-99df-41c9-b80f-7abbd60a8206", "level": "subsection", "origin_cites_number": 3, "parent_id": "a7c4a5b9-298c-4ac1-a382-d1950b08a89d", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Test Protocols and Datasets" ], [ "subsection", "Protocols" ] ], "subsections": [], "title": "Protocols" }, { "cite_extract_rate": 0, "cites": [], "content": "In order to evaluate gait recognition systems, different datasets have been collected. These datasets cover various parameters related to acquisition viewpoints, environment conditions, and appearance of the subjects. Generally speaking, large datasets, both in terms of number and distribution of samples and parameters, are often desired and preferred to allow for deep neural networks to be trained effectively. We present an overview of the main characteristics of well-known gait datasets in Table \\ref{tabDB}. These characteristics include the type and modality of data, number of subjects and sequences, number of viewpoints, and also the variations covered in each dataset. To show the chronological evolution of these datasets, we have sorted these datasets by the order of release date in Table \\ref{tabDB}. According to Table \\ref{tabDB}, CASIA B~, CASIA-E~, OU-ISIR MV~, and OU-MVLP~ cover the highest number of acquisition viewpoints while OU-ISIR~, OU-ISIR LP Bag~, and OU-MVLP~ include the highest number of gait sequences. 
In the following, we review the gait datasets available in Table \ref{tabDB} along with the associated test protocols.
\textbf{CMU MoBo}: The CMU Motion of Body (MoBo)~ dataset is one of the first gait datasets in the literature, and consists of RGB and silhouette data from 25 different subjects who walk on a treadmill. This dataset covers three subsets including slow walking speed, fast walking speed, and walking when holding a ball. The test protocol proposed by the authors of this dataset suggests training with one subset and testing with the other subsets. All six combinations are often used to report the results. 
\textbf{SOTON}: The SOTON dataset~ contains data from 115 subjects. All the sequences have been recorded both indoors and outdoors, with a fixed camera, capturing the subjects walking along a straight path. The indoor gait data has been captured from the subjects when walking on a treadmill. Different papers have divided this dataset into training and testing sets differently, and there is no pre-defined test protocol presented with the dataset.
\textbf{CASIA-A}: CASIA-A~ is a dataset that includes data from 20 subjects in outdoor environments. Participants have walked along a straight line, while three cameras positioned at 0$^{\circ}$, 45$^{\circ}$, and 90$^{\circ}$ have captured the gait videos with an average of 90 frames per sequence. A cross-view test protocol is the most widely used protocol for this dataset, where solutions are trained with all the available views, excluding one, which is then used for testing.
\textbf{USF HumanID}: The USF HumanID dataset~ has been collected in the context of the HumanID gait challenge, and includes outdoor gait videos from 122 subjects who have walked in an elliptical path. This dataset covers challenging variations including carrying a briefcase, walking on different surfaces, wearing different shoes, and different acquisition times. 
The data has been captured from two viewing angles by left and right cameras. The evaluation study has been made available along with the dataset~, which considers 12 different test protocols with respect to the above-mentioned variations. 
\textbf{CASIA-B}: The CASIA-B dataset~ is the most widely used gait dataset, and contains multi-view gait data from 124 persons in the form of both RGB and silhouettes. Acquisition has been performed from 11 different viewing angles that range from 0$^{\circ}$ to 180$^{\circ}$ with 18$^{\circ}$ increments. The dataset considers three different walking conditions, namely normal walking (NM), walking with a coat (CL), and walking with a bag (BG), respectively with 6, 2, and 2 gait sequences per person per view. The most frequently used test protocol for CASIA-B is a subject-independent protocol which uses the data from the first 74 subjects for training, and the remaining 50 subjects for testing. The test data are then split into a gallery set, containing the first four NM gait sequences, and a probe set, consisting of the remaining sequences, namely 2 NM, 2 CL, and 2 BG sequences, per subject per view. The results have mostly been reported for all the viewing angles, excluding the probe sequences whose angles are identical to those of the gallery.
\textbf{CASIA-C}: The CASIA-C dataset~ includes infrared and silhouette data from 153 different subjects, and the sequences have been captured under different variations at night. These variations include three different walking speeds, namely slow walking (SW), normal walking (NW), and fast walking (FW), as well as carrying a bag (BW). There are 4 NW, 2 SW, 2 FW, and 2 BW sequences per subject. As for the evaluation scheme, cross-speed walker identification tests have been considered.
\textbf{OU-ISIR Speed}: The OU-ISIR Speed dataset~ provides silhouette data from 34 subjects. 
This dataset is suitable for evaluating the robustness of gait recognition methods to walking speed, as it includes nine different speeds, ranging from 2 km/h to 11 km/h, with a 1 km/h interval. Cross-speed tests have been adopted for this dataset.
\textbf{OU-ISIR Clothing}: The OU-ISIR Clothing dataset~ includes data from 68 subjects who wore up to 32 different types of clothing. Gait sequences were collected in two indoor acquisition sessions on the same day. A subject-independent test protocol has been provided along with the dataset~, which divides the data into pre-defined training, testing, and probe sets particularly with respect to the clothing conditions.
\textbf{OU-ISIR MV}: The OU-ISIR MV dataset~ consists of gait silhouettes from 168 subjects with an age range of 4 to 75 years old, and an almost equal number of male and female participants. The data has been captured from a large range of view variations, including 24 azimuth views, along with 1 top view. Cross-view test protocols have been widely adopted for this dataset.
\textbf{OU-ISIR}: The OU-ISIR dataset~ is a large-scale gait dataset, consisting of gait data from 4,007 subjects with almost equal gender distribution, and with ages ranging from 1 to 94 years old. The gait sequences have been captured in two different acquisition sessions in indoor halls using four cameras placed at 55$^{\circ}$, 65$^{\circ}$, 75$^{\circ}$, and 85$^{\circ}$. As there are two sequences available for each subject, the test protocol uses the first sequence of each subject as gallery and the second as probe. 
\textbf{TUM GAID}: The TUM GAID dataset~ is a multi-modal gait dataset, including RGB, depth, and audio data from 305 subjects. For a selected set of 32 subjects, the dataset has been captured in two different acquisition sessions, in winter and in summer. 
Ten sequences have been captured from each subject, including normal walking (N), walking with a backpack (B), and walking with disposable shoe covers (S). The test protocol has been made available by the original authors, dividing the data into training, validation, and test sets. Recognition experiments are often then carried out with respect to the N, B, and S gait variations.
\textbf{OU-ISIR LP Bag}: The OU-ISIR LP Bag dataset~ consists of gait videos from 62,528 subjects with carried objects, captured using one camera in constrained indoor environments. Three sequences have been obtained per subject, one with a carried object and two without it. Following the test protocol proposed in , the training set contains data from 29,097 subjects with two sequences with and without the carried objects, and the test set includes the other 29,102 disjoint subjects. In order to divide the test set into probe and gallery sets, two approaches have been adopted, respectively under cooperative and uncooperative scenarios. For the cooperative scenario, the gallery set only contains sequences without carried objects, while the probe set includes sequences with seven different types of carried objects. In the uncooperative scenario, gallery and probe sets are randomly formed such that they both contain sequences with and without carried objects.
\begin{figure*}[!t]
\centering
\includegraphics[width=2\columnwidth]{Images/Taxonomy.pdf}
\caption{Our taxonomy consisting of 4 dimensions: body representation, temporal representation, feature representation, and neural architecture.}
\label{fig:taxonomy}
\end{figure*}
\textbf{OU-MVLP}: The OU-MVLP dataset~ is the largest available gait dataset in terms of the number of gait sequences (259,013). The dataset provides silhouette videos acquired in two acquisition sessions per subject. The gender distribution of subjects is almost equal with an age range of 2 to 87 years old. 
This dataset has been acquired from 14 different views, ranging from 0$^{\circ}$ to 90$^{\circ}$, and 180$^{\circ}$ to 270$^{\circ}$, in 15$^{\circ}$ increments. Pre-determined lists of 5,153 and 5,154 subjects have been designated and provided with the dataset as training and testing sets, respectively. For testing, sequences from the first and second acquisition sessions respectively form gallery and probe sets. In most of the recent gait recognition papers, either all or four viewing angles, notably 0$^{\circ}$, 30$^{\circ}$, 60$^{\circ}$, and 90$^{\circ}$, are considered.
\textbf{CASIA-E}: The CASIA-E dataset~ consists of silhouettes from 1,014 subjects with hundreds of sequences per subject, captured in three types of scenes with a simple static, a complex static, and a complex dynamic background. The data has been captured considering three different walking conditions, including normal walking (NM), walking with a coat (CL), and walking with a bag (BG). This dataset has been acquired from 15 different angles, including two vertical views at heights of 1.2~\textit{m} and 3.5~\textit{m}, as well as 13 horizontal views ranging from 0$^{\circ}$ to 180$^{\circ}$ with 15$^{\circ}$ increments. This dataset was recently used in the \textit{TC4 Competition and Workshop on Human Identification at a Distance 2020}~, where the training set included the entire data from the first 500 subjects, while 25 sequences from the last 514 subjects were used for validation. The remaining sequences were used for testing.
\textbf{OU-MVLP Pose}: The OU-MVLP Pose dataset~ was built upon OU-MVLP~ by extracting pose skeleton sequences from its RGB images. Two subsets have been created using pre-trained versions of OpenPose~ and AlphaPose~ to extract human joint information. 
The test protocol is similar to the one proposed for OU-MVLP~.", "id": "9b9e3d6c-6cb4-4b10-8ca0-3ecb3e33a697", "level": "subsection", "origin_cites_number": 18, "parent_id": "a7c4a5b9-298c-4ac1-a382-d1950b08a89d", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Test Protocols and Datasets" ], [ "subsection", "Datasets" ] ], "subsections": [], "title": "Datasets" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 287 ], "content": "In order to help illustrate an overall structure for the available gait recognition approaches, a few taxonomies have been proposed in the literature~, which have organized the available solutions from different perspectives. The taxonomy proposed in~ is based on the type of sensors, classifiers, and covariate factors such as occlusion types. The taxonomy in~ categorizes gait recognition methods based on the type of features used. Finally, the one proposed in~ considers user appearance, camera, light source, and environment-related factors. Nevertheless, despite the availability of these taxonomies, none focus on deep gait recognition methods that are most successful nowadays. We thus propose a new taxonomy in this paper to better illustrate the technological landscape of gait recognition methods with a particular focus on deep learning techniques. \nFigure \\ref{fig:taxonomy} presents our proposed taxonomy which considers four dimensions, namely body representation, temporal representation, feature representation, and neural architecture. 
The details of each of these dimensions are described in the following.", "id": "d929d97d-c95d-4152-9099-5e650bfff992", "level": "section", "origin_cites_number": 3, "parent_id": "1ca90fda-21b2-4c6d-bcb1-6e612173f620", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Proposed Taxonomy" ] ], "subsections": [ "2c79114d-2675-440e-b99e-ca8ea8efdf6b", "59ef2d5c-0e3f-4744-b619-fd7fd9f272da", "2747149b-6c79-4256-a26a-fec152f302fe", "a7854008-5c73-44f1-8dd8-92174fd2a6f4" ], "title": "Proposed Taxonomy" }, { "cite_extract_rate": 0.2, "cites": [ 294, 298 ], "content": "This dimension relates to the way the body is represented for recognition, which can be based on \textit{silhouettes} or \textit{skeletons}. Silhouette is the most frequently used body representation in the literature; it can easily be computed by subtracting each image containing the subject from its background, followed by binarization. Gait silhouettes have proven to be effective and convenient for describing the body state in a single frame with low computational cost. This body representation forces recognition solutions to focus on `gait' as opposed to clothing and other non-gait factors that could, from the perspective of a classifier, be used for identification.
A sequence of silhouettes can represent useful gait features such as speed, cadence, leg angles, gait cycle time, step length, stride length, and the ratio between swing and stance phases~. It can also be processed to extract motion data, for example using optical flow calculation~. Nonetheless, gait silhouettes are more sensitive to changes in the appearance of the individuals, for instance via different clothing and carrying conditions. 
Skeleton body representation can be captured using depth cameras~ or alternatively be estimated using pose-estimation methods~. Static and dynamic features, for instance stride length, speed, distances, and angles between joints, can be obtained from skeleton joints~. 
Gait recognition methods based on this type of body representation are generally more robust against viewpoint changes due to the consideration of joint positions~, as opposed to silhouette-based methods. Skeleton-based methods are also more robust against appearance changes~ as the pose-estimation step generally learns to detect body joints over different clothing conditions, which is not the case for gait silhouettes. However, since these approaches rely heavily on accurate detection of body joints, they are generally more sensitive to occlusions~. Additionally, the use of pose-estimators imposes a computational overhead to these recognition systems~.", "id": "2c79114d-2675-440e-b99e-ca8ea8efdf6b", "level": "subsection", "origin_cites_number": 10, "parent_id": "d929d97d-c95d-4152-9099-5e650bfff992", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Proposed Taxonomy" ], [ "subsection", "Body Representation" ] ], "subsections": [], "title": "Body Representation" }, { "cite_extract_rate": 0.14285714285714202, "cites": [ 296, 9138 ], "content": "This dimension deals with approaches used to represent the temporal information in gait sequences. Two types of representations, \\textit{templates} and \\textit{volumes}, have been commonly used in the literature. Following we describe these representations.\nTemplates aggregate temporal walking information over a sequence of silhouettes in a single map, for example by averaging the silhouettes over at least one gait cycle. This operation enables recognition solutions to be independent of the number of frames once template maps have been created. With respect to deep gait recognition architectures, gait silhouettes can be aggregated in the initial layer of a network (Figure \\ref{fig:template_Layer}-a), also known as \\textit{temporal templates}, where the aggregated map can then be processed by subsequent layers~. 
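For intuition, the aggregation step behind such temporal templates is simple. The following minimal sketch (the function name and toy arrays are illustrative, not from any specific paper) averages pre-aligned binary silhouettes into a single grayscale template, in the spirit of gait energy images:

```python
import numpy as np

def temporal_template(silhouettes):
    """Aggregate a stack of aligned binary silhouettes (T, H, W) into a
    single grayscale template (H, W) by per-pixel averaging."""
    return np.asarray(silhouettes, dtype=float).mean(axis=0)

# Toy 'sequence' of three 4x4 binary silhouettes.
seq = np.zeros((3, 4, 4))
seq[:, 0:2, 1:3] = 1.0   # torso region: present in every frame
seq[0, 2, 1] = 1.0       # a leg pixel alternates between two positions
seq[1, 2, 2] = 1.0
seq[2, 2, 1] = 1.0

gei = temporal_template(seq)
# Static regions keep full intensity; moving regions are blurred by the average.
print(gei[0, 1], gei[2, 2])  # → 1.0 0.3333333333333333
```

Static body parts thus appear bright in the template, while dynamic parts encode motion as intermediate intensities.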
Gait silhouettes can alternatively be aggregated in an intermediate layer of the network after several convolution and pooling layers (Figure \ref{fig:template_Layer}-b), also known as a \textit{convolutional template}~. 
Examples of temporal templates include: (\textit{i}) gait energy images (GEI)~, which average gait silhouettes over one period/sequence (Figure \ref{fig:template_Layer}-c); (\textit{ii}) chrono gait images (CGI)~, which extract the contour in each gait image to be then encoded using a multi-channel mapping function in the form of a single map (Figure \ref{fig:template_Layer}-d); (\textit{iii}) frame-difference energy images (FDEI)~, which preserve the kinetic information using clustering and denoising algorithms, notably when the silhouettes are incomplete (Figure \ref{fig:template_Layer}-e); (\textit{iv}) gait entropy images (GEnI)~, computing entropy for each pixel in the gait frames to be then averaged in a single gait template (Figure \ref{fig:template_Layer}-f); and (\textit{v}) period energy images (PEI)~, a generalization of GEI that preserves more spatial and temporal information by exploiting a multi-channel mapping function based on the amplitudes of frames (Figure \ref{fig:template_Layer}-g). Examples of convolutional templates include set pooling~ and gait convolutional energy maps (GCEM)~, which average convolutional maps obtained by several convolution and pooling layers, over the whole sequence. 
To preserve and learn from the order and relationship of frames in gait sequences, instead of aggregating them, sequence volume representations have been adopted (see Figure~\ref{fig:taxonomy}, second box from the left). Then, to learn the temporal information, two different approaches have been followed. 
In the first approach, the temporal dynamics over the sequences are learned using recurrent learning strategies, for example recurrent neural networks, where each frame is processed with respect to its relationships with the previous frames~. The second approach first creates 3D tensors from spatio-temporal information available in sequences, where the depth of the tensors represent the temporal information. These tensors are then learned, for example using 3D CNNs~ or graph convolutional networks (GCNs)~. \n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=0.70\\columnwidth]{Images/Template_Merged.pdf}\n\\caption{Overview of temporal representations. Generating templates in: (a) the initial layer of a deep network; (b) an intermediate layer of the network after several convolution and pooling layers. Illustration of (c) GEI~, (d) CGI~, (e) FDEI~, (f) GEnI~, and (g) PEI~ temporal gait templates.}\n\\label{fig:template_Layer}\n\\end{figure}", "id": "59ef2d5c-0e3f-4744-b619-fd7fd9f272da", "level": "subsection", "origin_cites_number": 14, "parent_id": "d929d97d-c95d-4152-9099-5e650bfff992", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Proposed Taxonomy" ], [ "subsection", "Temporal Representation" ] ], "subsections": [], "title": "Temporal Representation" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 287, 296, 300, 9138, 299 ], "content": "This dimension encapsulates the region of support for representation learning, which can be either \\textit{global} or \\textit{partial}. \nThe process of learning silhouettes or skeletons holistically is referred to as global representation learning. \nOn the other hand, when learning partial representations, gait data is split into local regions, e.g., patches, body components, and vertical/horizontal bins (see Figure \\ref{fig:taxonomy}, third box from the left). 
These local regions are then further processed, for example by recurrent neural networks~, capsule networks~, attention-based networks~, or fully connected layers~. Methods based on global representations tend to be more sensitive to occlusions and appearance changes as well as missing key body parts~. On the other hand, partial regions often maintain different contributions towards the final recognition performance, thus learning their importance can improve the overall performance of gait recognition methods~. Additionally, the relations between these partial features can be learned, to preserve positional attributes such as scale, rotation, and location, which improve the robustness of gait recognition methods against orientation and view changes~.", "id": "2747149b-6c79-4256-a26a-fec152f302fe", "level": "subsection", "origin_cites_number": 6, "parent_id": "d929d97d-c95d-4152-9099-5e650bfff992", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Proposed Taxonomy" ], [ "subsection", "Feature Representation" ] ], "subsections": [], "title": "Feature Representation" }, { "cite_extract_rate": 0, "cites": [], "content": "Deep neural networks (DNNs) capture high-level abstractions using hierarchical architectures of multiple nonlinear transformations. 
Different neural architectures have been designed for gait recognition problems, whose descriptions are provided below.", "id": "a7854008-5c73-44f1-8dd8-92174fd2a6f4", "level": "subsection", "origin_cites_number": 0, "parent_id": "d929d97d-c95d-4152-9099-5e650bfff992", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Proposed Taxonomy" ], [ "subsection", "Neural Architectures" ] ], "subsections": [ "560291c9-4237-466b-8c6f-3b2f50e6857b", "0a64ae6b-215e-4bbc-a616-b16efb95417b", "aeed6389-48f4-4a15-9c96-2ff298684b77", "3b70f05a-9bb4-4e51-9b1e-59db041d06ba", "94182df1-674c-49eb-88b3-48314f521e64", "921587ea-28db-423d-af31-7c33141fe12f", "8d3140fa-6714-4add-822c-1e5db03778fa", "c53084bb-267f-4d01-bcd0-637964ddf775", "12af581c-affd-4974-9f07-3336875ad3dc" ], "title": "Neural Architectures" }, { "cite_extract_rate": 0.4, "cites": [ 97, 301 ], "content": "Convolutional neural networks (CNNs) have been used the most for gait recognition. {These models are generally used to learn an embedding where the body shape, represented as a silhouette or skeleton, is encoded in the feature space.
}
Specifically, CNNs generally consist of different types of layers including convolutional, pooling, and fully connected layers. Convolutional layers convolve learned filters with the input image to create activation feature maps that capture features with varying levels of detail. The convolutional layers also include activation functions, such as ReLU~ or tanh~, to increase the non-linearity in the output. 
Pooling layers then reduce the spatial size of the feature maps by using nonlinear down-sampling strategies, such as average or maximum pooling, thus decreasing the complexity of the network. 
Fully connected layers are finally used to map the resulting 2D feature maps into 1D vectors for further processing.
To better analyze CNNs adopted in the state-of-the-art gait recognition methods, we provide an overview of the most successful architectures used in Table \ref{tab:CNN}. Note that for the methods combining CNNs with other types of networks, e.g., autoencoder, capsule, and Long Short-Term Memory (LSTM), we only present the architectures of the CNN components in the table. {As can be seen, there is no need for state-of-the-art gait recognition models to exploit very deep CNN architectures. This is due to the fact that input gait data, either in the form of silhouettes or skeletons, do not present considerable complexity in terms of texture information. Hence, fewer than 10 layers are shown to be sufficient for encoding gait frames. This is contrary to many other domains, such as face or activity recognition, where very deep networks such as ResNet~ and Inception~ are used to learn highly discriminative features.} In Table \ref{tab:CNN}, we also present the size of CNN inputs, showing a trend toward a 64$\times$64 resolution in the recent literature. Additionally, an analysis in~ showed that the resolutions of 64$\times$64 and 128$\times$128 lead to the best gait recognition results for several tested CNNs, where the input resolution of 128$\times$128 works slightly better than 64$\times$64. 
However, as a higher input resolution implies more convolutional and pooling layers, the input resolution of 64$\times$64 has been most widely adopted to limit the computational complexity of the solutions.", "id": "560291c9-4237-466b-8c6f-3b2f50e6857b", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "a7854008-5c73-44f1-8dd8-92174fd2a6f4", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Proposed Taxonomy" ], [ "subsection", "Neural Architectures" ], [ "subsubsection", "Convolutional Neural Networks" ] ], "subsections": [], "title": "Convolutional Neural Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "A deep belief network (DBN)~ is a probabilistic generative model, composed by stacking restricted Boltzmann machines (RBMs)~ to extract hierarchical representations from the training data. Each RBM is a two-layer generative stochastic model, including a visible and a hidden layer, with connections between the adjacent layers and without connections between the units within each layer. The weights and biases of the units define a probability distribution over the joint states of the visible and hidden units. DBNs have been used for gait recognition in~ and~. In~, fitting, body parameters, and shape features were extracted from silhouettes to be then learned by DBNs, thus extracting more discriminative features. In~, gait has been first represented as motion and spatial components, and two separate DBNs were trained for each component. 
The extracted features were finally concatenated to form the final feature representation.", "id": "0a64ae6b-215e-4bbc-a616-b16efb95417b", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "a7854008-5c73-44f1-8dd8-92174fd2a6f4", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Proposed Taxonomy" ], [ "subsection", "Neural Architectures" ], [ "subsubsection", "Deep Belief Networks" ] ], "subsections": [], "title": "Deep Belief Networks" }, { "cite_extract_rate": 0.388888888888888, "cites": [ 304, 295, 296, 300, 302, 303, 9138 ], "content": "Recurrent Neural Networks (RNNs) have been widely applied to temporal or sequence learning problems, achieving competitive performances for different tasks~, including gait recognition~. A layer of RNN is typically composed of several cells, each corresponding to one input element of the sequence, e.g., one frame of a gait video. RNNs can also stack several layers to make the model deeper, where the output of the $i^{th}$ cell in the $j^{th}$ layer feeds the $i^{th}$ cell in the $(j+1)^{th}$ layer. Each cell is connected to its previous and subsequent cells, thus memorizing information from the previous time steps~. Among different RNN architectures, LSTM~ and Gated Recurrent Units (GRU)~ are the most widely used, learning the relationships available in a gait sequence using memory states and learnable gating functions. In an LSTM network~, the cells have a common cell state, which keeps long-term dependencies along the entire LSTM cell chain, controlled by two gates, the so-called input and forget gates, thus allowing the network to decide when to forget the previous state or update the current state with new information. The output of each cell, the hidden state, is controlled by an output gate that allows the cell to compute its output given the updated cell state. GRU~ is another form of RNN that, as opposed to LSTM, does not include a separate output gate. 
This architecture also includes an update gate that allows the network to update the current state with respect to the new information, and a reset gate that controls how much of the previous state contributes to the new candidate state.
{There have been three different approaches to use RNNs for gait recognition systems. The first approach~ (Figure \ref{fig:RNN}-a), mostly adopted for skeleton representations, uses RNNs in order to learn from the temporal relationships of joint positions. In the second approach~ (Figure \ref{fig:RNN}-b), as will be discussed later in detail in Section \ref{sec:hib}, RNNs are combined with other types of neural architectures, notably CNNs, for learning both spatial and temporal information. The last approach, recently adopted in~ (Figure \ref{fig:RNN}-c), uses RNNs to recurrently learn the relationships between partial representations from a single gait template, for instance GCEM~}.
\begin{table}
    \centering
    \setlength\tabcolsep{2.5pt}
    \caption{Overview of recent CNN architectures adopted for deep gait recognition.}
    \begin{tabular}{l|l|l|l|l|l}
    \hline
    \textbf {Method}& \textbf{Input}& \textbf{Total \# of}& \textbf{\# of Conv.} & \textbf{\# of Pool.} & \textbf{\# of FC} \\
    & \textbf{Size} & \textbf{Layers}& \textbf{Layers} & \textbf{Layers} & \textbf{Layers} \\
    \hline\hline
    GEINet~ & 88$\times$128 & 6 & 2 & 2 & 2 \\
    Ensem. 
CNNs~ & 128$\\times$128 & 7 & 3 & 2 & 2 \\\\\n MGANs~ & 64$\\times$64 & 8 & 4 & 1 & 3 \\\\\n EV-Gait~ & 128$\\times$128 & 9 & 6 & 0& 2 \\\\\n Gait-joint~ & 64$\\times$64 & 16 & 12 & 2& 2 \\\\\n Gait-Set~ & 64$\\times$64 & 9 & 6 & 2 & 1 \\\\ \n Gait-RNNPart~ & 64$\\times$64 & 9 & 6 & 2 & 1 \\\\ \n Gait-Part~ & 64$\\times$64 & 9 & 6 & 2 & 1 \\\\ \n SMPL~ & 64$\\times$64 & 5 & 3 & 1 & 1 \\\\ \n Caps-Gait~ & 64$\\times$64 & 9 & 6 & 2 & 1 \\\\ \n \\hline\n \\end{tabular}\n \\label{tab:CNN}\n\\end{table}\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=1 \\columnwidth]{Images/RNN.pdf}\n\\caption{Three different approaches for using RNNs in the context of deep gait recognition systems: (a) RNNs directly learn from the movement of joint positions; (b) RNNs are combined with CNNs; and (c) RNNs recurrently learn the relationships between partial representations in gait templates.}\n\\label{fig:RNN}\n\\end{figure}", "id": "aeed6389-48f4-4a15-9c96-2ff298684b77", "level": "subsubsection", "origin_cites_number": 18, "parent_id": "a7854008-5c73-44f1-8dd8-92174fd2a6f4", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Proposed Taxonomy" ], [ "subsection", "Neural Architectures" ], [ "subsubsection", "Recurrent Neural Networks" ] ], "subsections": [], "title": "Recurrent Neural Networks" }, { "cite_extract_rate": 0.25, "cites": [ 305 ], "content": "Deep auto-encoder (DAE) is a type of network that aims to extract so called bottleneck features or latent space representations, using an encoder-decoder structure. The encoder transforms the input data into a feature representation and the decoder part transforms the representation back to the original input data. The encoder generally includes several fully connected and/or convolutional layers while the decoder consists of layers that perform the inverse operations. 
DAE networks are generally trained with the aim of minimizing the reconstruction error that measures the difference between the original input and the reconstructed version. Once a DAE is trained, the bottleneck features, which are a latent/compressed representation of the original input, are extracted to be used for classification, i.e., gait recognition in our case. The method proposed in~ uses a DAE network, first encoding the input temporal templates using four convolutional layers to extract features. The decoder then reconstructs the input from the extracted features using four deconvolutional layers. In~, an auto-encoder with 7 fully connected layers along with input and output layers was used to extract robust gait features. In~, a DAE was used to disentangle the input temporal template into identity and covariate features. The backbone of the encoder was based on the Inception module in GoogLeNet~, extracting multi-scale identity and covariate features. The decoder then took those features as input to reconstruct the temporal template using deconvolutional layers.", "id": "3b70f05a-9bb4-4e51-9b1e-59db041d06ba", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "a7854008-5c73-44f1-8dd8-92174fd2a6f4", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Proposed Taxonomy" ], [ "subsection", "Neural Architectures" ], [ "subsubsection", "Deep AutoEncoders" ] ], "subsections": [], "title": "Deep AutoEncoders" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 7217, 235 ], "content": "Generative Adversarial Networks (GANs) include a generator and a discriminator~, where the generator aims to deceive the discriminator by synthesizing fake samples that resemble the real ones. In turn, the discriminator aims to distinguish between the fake and real samples. As a result of this minimax game between these two components, GANs can generate realistic synthesized samples. 
{In the context of gait recognition, GANs can be used to solve the problem of gait variations due to clothing, viewpoints, and carrying conditions. For instance, GANs can transform gait data from one view to another, or change the type of clothing worn by the subject, or even remove a backpack that was originally carried by the subject.\nSuch disentanglement of identity from confounding factors often results in improvements in the performance of gait recognition systems~. Nevertheless, one of the most important challenges toward manipulating gait data is the preservation of human identity features while modifying appearance characteristics in the representation space. To this end, two discriminators are often used~. The first discriminator is used to distinguish real vs. fake samples in order to ensure that the generated images appear realistic. The second discriminator is exploited to ensure that identity information is preserved by taking a pair of source and target images as input and producing a scalar probability of whether the input pair belongs to the same person or not.}\nDifferent types of GANs have recently been adopted for gait recognition. Multi-task GANs (MGANs)~ have been proposed for cross-view gait recognition, where a CNN is used to learn the temporal template as view-specific features in a latent space. Then, the features are transformed from one view to another using a view transform layer. The network is then trained with multi-task adversarial and pixel-wise losses. In another paper, Discriminant Gait GAN (DiGGAN)~ considered the mechanisms of using two independent discriminators in order to transfer GEIs from a certain viewpoint to a different viewing angle while also preserving identity information. 
In~, a Two-Stream GAN (TS-GAN) was proposed to learn both global and partial feature representations when transforming GEI temporal templates with different viewing angles to a GEI temporal template with a standard view, i.e., 90$^{\\circ}$.", "id": "94182df1-674c-49eb-88b3-48314f521e64", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "a7854008-5c73-44f1-8dd8-92174fd2a6f4", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Proposed Taxonomy" ], [ "subsection", "Neural Architectures" ], [ "subsubsection", "Generative Adversarial Networks" ] ], "subsections": [], "title": "Generative Adversarial Networks" }, { "cite_extract_rate": 0.5, "cites": [ 300, 306 ], "content": "Capsule Networks (CapsNet)~ have been proposed to address two important shortcomings in CNNs, namely the limits of scalar activations and poor information routing through pooling operations, by respectively exploiting capsule activation values and routing-by-agreement algorithms. CapsNets are composed of capsules, which are groups of neurons that explicitly encode the intrinsic viewpoint-invariant relationships available in different parts of the objects. {In the context of gait representation learning, a CapsNet can model and understand the structural relationship between the various parts of the body, such as the relationships between legs and feet, upper body and lower body, and trunk and limbs, using a learnable pose matrix. A CapsNet can also be used to model internal hierarchical representations between multiple gait silhouettes or skeleton joint coordinates of a subject in a video. This is in contrast to the standard pooling layers often used in CNNs, which fail to preserve positional attributes in the human body, such as locations, scales, rotations, and relationships between the body parts.} CapsNets generally include two blocks, namely primary and high-level groups of capsules. 
The first block encodes spatial information with several layers including convolutional, reshaping, and squashing layers, followed by the second block that learns deeper part-whole relationships between hierarchical sub-parts. The concept of capsule network has been recently adopted for gait recognition~. The method proposed in~ first learns the properties of GEI templates using a CNN. It then uses a CapsNet with dynamic routing to retain the relationship within each template with the aim of finding more robust features. Capsule networks have also been combined with other types of deep networks in~ and ~, which we will review in Section \\ref{sec:hib}.", "id": "921587ea-28db-423d-af31-7c33141fe12f", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "a7854008-5c73-44f1-8dd8-92174fd2a6f4", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Proposed Taxonomy" ], [ "subsection", "Neural Architectures" ], [ "subsubsection", "Capsule Networks" ] ], "subsections": [], "title": "Capsule Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "{3D Convolutional neural networks (3D CNNs) have been recently adopted for gait recognition to learn spatio-temporal dynamics over whole gait sequences~. 3D CNNs are able to extract features that are more robust to changes in camera viewpoints and the appearance of subjects. 3D CNNs take the stacked gait frames in the form of a 3D tensor as input, and then use multiple 3D convolution filters and pooling operations to extract the spatio-angular representations. The limitation of 3D CNNs for gait recognition is the lack of flexibility in processing variable length sequences. In~, this shortcoming was addressed by exploiting multiple 3D CNNs to integrate temporal information at different scales.} In~, a 3D CNN network containing 13 3D convolution filters and pooling layers along with two fully connected layers was designed for gait recognition. 
The method in~ is composed of several global and partial 3D convolutional layers, where the standard 3D pooling layer was modified to aggregate temporal information in local clips.", "id": "8d3140fa-6714-4add-822c-1e5db03778fa", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "a7854008-5c73-44f1-8dd8-92174fd2a6f4", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Proposed Taxonomy" ], [ "subsection", "Neural Architectures" ], [ "subsubsection", "3D Convolutional Neural Networks" ] ], "subsections": [], "title": "3D Convolutional Neural Networks" }, { "cite_extract_rate": 0.5, "cites": [ 235 ], "content": "Graph convolutional networks (GCNs) have been recently developed to extend CNNs to a higher dimensional domain using arbitrarily structured graphs and graph convolution filters~. {Given the inherent hierarchical and graph-like nature of the human body, GCNs can jointly model both the structural information of the human body and temporal relationships available between gait frames in order to learn discriminative and robust features with respect to camera viewpoints and subject appearances. Gait recognition methods based on GCNs consider gait sequence volumes as the spatio-temporal representations for gait recognition~}. In~, gait features were extracted by forming a spatio-temporal graph from the available video sequences. 
The final features were then obtained using a joint relationship learning scheme by mapping the features onto a more discriminative subspace with respect to human body structure and walking pattern.", "id": "c53084bb-267f-4d01-bcd0-637964ddf775", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "a7854008-5c73-44f1-8dd8-92174fd2a6f4", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Proposed Taxonomy" ], [ "subsection", "Neural Architectures" ], [ "subsubsection", "Graph Convolutional Networks" ] ], "subsections": [], "title": "Graph Convolutional Networks" }, { "cite_extract_rate": 0.11842105263157801, "cites": [ 294, 296, 307, 235, 304, 9138, 295, 306, 300 ], "content": "\\label{sec:hib}\nA large number of hybrid deep networks that make use of two or more types of networks have been proposed to boost the performance of gait recognition systems. Among these, architectures with CNN+RNN, DAE+GAN, DAE+RNN, and RNN+CapsNet components are the most popular in the deep gait recognition literature (see Figure \\ref{fig:taxonomy}). In the following, we provide descriptions and examples of these four hybrid architectures.\n\\textbf{CNN+RNN.} Integration of CNNs with RNNs (notably LSTM and GRU) for learning the temporal relationships following spatial encoding is perhaps the most popular approach for spatio-temporal learning, which has also been used for gait recognition in the literature. \nIn~, a deep gait recognition system was proposed by combining eight different CNN architectures with LSTM to obtain spatio-temporal features from image sequences. The method proposed in~ first divided gait silhouettes into 4 horizontal parts, where each part was fed to an individual CNN with 10 layers. An attention-based LSTM was then used to output frame-level attention scores for each sequence of CNN features. 
The CNN features were finally multiplied by their corresponding weights to selectively focus on the most important frames for gait recognition. In~, convolutional maps from gait frames were first learned using an 8-layer CNN. The convolutional maps were then aggregated to form GCEM templates which were then split into horizontal bins. These partial features (horizontal bins) were finally learned by an attentive bi-directional GRU to exploit the relations between these parts of the embedding.\n\\textbf{DAE+GAN.} Recently, DAEs have been considered as the backbone of the generator and/or discriminator components in GANs for gait recognition~. GaitGAN~ and GaitGANv2~ used two discriminators with encoder-decoder structures, respectively for fake/real discrimination and identification. These two discriminators ensured that the generated gait images were realistic and that the generated images contained identity information. The Alpha-blending GAN (Ab-GAN) proposed in~ exploits an encoder-decoder network as the generator to generate gait templates without carried objects. Cycle-consistent Attentive GAN (CA-GAN) was proposed in~ and used an encoder-decoder structure for gait view synthesis. The proposed GAN contains two branches to simultaneously exploit both global and partial feature representations. \n\\textbf{DAE+RNNs.} \nThe combination of DAEs and RNNs has recently been proposed for generating sequence-based disentangled features using an LSTM RNN~. In this context, a deep encoder-decoder network with novel loss functions was first used to disentangle gait features, namely identity information from appearance and canonical features that mostly contain spurious information for gait recognition. A multi-layer LSTM was then used to capture temporal dynamics of the gait features to be finally aggregated for the recognition purpose~. 
\n\\textbf{RNNs+CapsNets.} Recurrently learned features obtained by RNNs can be treated as capsules~, thus learning coupling weights between these capsules through dynamic routing. This encapsulated hierarchical part-whole relationships between the recurrently learned features that can make the hybrid network more robust against appearance and view changes. Additionally, the CapsNet can act as an attention mechanism, thus assigning more importance to the more relevant features. In~, a CapsNet was used to treat the recurrently learned partial representations of a convolution template as capsules, thus learning the coupling weights between the partial features. This led to exploiting the relationships between the partial features while also preserving positional attributes. So, the model could generalize better to unseen gait viewpoints during testing. In ~, a capsule network with dynamic routing was used to exploit the spatial and structural relations between body parts. In this context, the recurrently learned features were first extracted using an LSTM network from a sequence of gait frames to feed the capsule network.\n\\begin{table*}[]\n\\centering\n\\setlength\\tabcolsep{1.80pt}\n\\caption{Classification of deep gait recognition methods based on our proposed taxonomy.}\n\\begin{tabular}{l|l|l|l|l|l|l|l|l}\n\\hline \n\\textbf{Ref.} &\\textbf{Year} & \\textbf{Venue} & \\textbf{Body Rep.} & \\textbf{Temporal Rep.} & \\textbf{Feat. 
Rep.} & \\textbf{Neural Architecture} & \\textbf{Loss Function} & \\textbf{Dataset} \\\\ \\hline\\hline\n & 2015 & \\textit{T-MM} & Silhouettes & Tmp: GEI & Global & CNN & Cross-Entropy & CASIA-B \\\\\n & 2015 & \\textit{CISP} & Silhouettes & Tmp: GEI & Global & CNN & Undisclosed & CASIA-B \\\\\n\\hline\n & 2016 & \\textit{ICPR} & Skeleton & Sequence Volume & Partial & LSTM & Undisclosed & CASIA-B \\\\\n & 2016 & \\textit{ICB} & Silhouettes & Tmp: GEI & Global & CNN & Cross-Entropy & OU-ISIR \\\\\n& 2016 & \\textit{ICIP} & Silhouettes & Sequence Volume & Global & 3D CNN & Undisclosed & CMU Mobo; USF HumanID \\\\\n & 2016 & \\textit{ICASSP} & Silhouettes & Tmp: GEI & Global & CNN & Contrastive & OU-ISIR \\\\\n & 2016 & \\textit{BMVC} & Skeleton & Sequence Volume & Global & CNN + LSTM & Cross-Entropy & CASIA-B; CASIA-A \\\\\n\\hline\n & 2017 & \\textit{Int. J. Biom.} & Silhouettes & Tmp: GEI & Partial & DBN & Undisclosed & CASIA-B \\\\\n & 2017 & \\textit{CVIU} & Silhouettes & Tmp: GEI & Global & CNN & Undisclosed & CASIA-B \\\\\n & 2017 & \\textit{IEEE T-PAMI} & Silhouettes & Tmp: GEI & Global & CNN & Cross-Entropy & CASIA-B; OU-ISIR \\\\\n & 2017 & \\textit{Applied Sci.} & Silhouettes & Tmp: Norm. AC & Global & CNN & Undisclosed & OU-ISIR \\\\\n & 2017 & \\textit{IEEE T-CSVT} & Silhouettes & Tmp: GEI & Global & CNN & Triplet Loss & OU-ISIR \\\\\n & 2017 & \\textit{BIOSIG} & Silhouettes & Tmp: GEI & Global & CNN & Undisclosed & TUM-GAID \\\\\n & 2017 & \\textit{MM} & Silhouettes & Tmp: GEI & Global & CNN & Triplet Loss & OU-ISIR \\\\\n & 2017 & \\textit{CCBR} & Skeleton & Sequence Volume & Global & CNN + LSTM & Undisclosed & CASIA-B \\\\\n & 2017 & \\textit{CVPRW} & Silhouettes & Tmp: GEI & Global & GAN & Cross-Entropy & CASIA-B \\\\\n & 2017 & \\textit{Neurocomp.} & Silhouettes & Tmp: GEI & Global & DAE & Euclidean & CASIA-B; SZU RGB-D \\\\\n\\hline\n & 2018 & \\textit{Elect. 
Imaging}& Silhouettes & Sequence Volume & Global & 3D CNN & Undisclosed & CASIA-B \\\\\n & 2018 & \\textit{IEEE Access} & Silhouettes & Sequence Volume & Global & CNN + LSTM & Cross-Entropy & CASIA-C \\\\\n & 2018 & \\textit{Neuroinform.} & Silhouettes & Seq. Vol. + GEI & Global & 3D CNN & Contrastive & OU-ISIR \\\\\n & 2018 & \\textit{DIC} & Skeleton & Tmp: GEI & Global & CNN & Cross-Entropy & CASIA-B \\\\\n & 2018 & \\textit{IEEE Access} & Silhouettes & Sequence Volume & Global & CNN + LSTM & Cross-Entropy & CASIA-B; OU-ISIR \\\\\n & 2018 & \\textit{ISBA} & Silhouettes & Sequence Volume & Global & 3D CNN & Undisclosed & CASIA-B \\\\\n & 2018 & \\textit{ICME} & Silhouettes & Tmp: GEI & Part; Glob. & DAE + GAN & Cross-Entropy & CASIA-B \\\\\n & 2018 & \\textit{JVCIR} & Silhouettes & Tmp: GEI & Global & CNN & Cross-Entropy & CASIA-B; OU-ISIR \\\\\n & 2018 & \\textit{CCBR} & Skeleton & Sequence Volume & Global & CNN + LSTM & Undisclosed & CASIA-B \\\\\n\\hline\n & 2019 & \\textit{PRL} & Skel.; Silh. & Sequence Volume & Global & LSTM & Undisclosed & CASIA-B; TUM-GAID \\\\\n & 2019 & \\textit{IET Biom.} & Silhouettes & Tmp: Weight Avg. & Partial & CNN & Undisclosed & CASIA-B; TUM; OU-ISIR \\\\\n & 2019 & \\textit{CVPR} & Skel.; Silh. & Sequence Volume & Global & DAE + LSTM & Multiple Loss Functions & CASIA-B; FVG \\\\\n & 2019 & \\textit{J. Sys. 
Arch.} & Silhouettes & Tmp: GEI & Global & DAE + GAN & Multiple Loss Functions & CASIA-B; OU-ISIR \\\\\n & 2019 & \\textit{PR} & Silhouettes & Tmp: GEI & Global & CNN & Siamese & CASIA-B; SZU \\\\\n & 2019 & \\textit{IEEE T-IFS } & Silhouettes & Tmp: GEI & Global & GAN & Adversarial\\&Cross-Entropy & CASIA-B; OU-ISIR \\\\\n & 2019 & \\textit{PRL} & Silhouettes & Tmp: GEI & Global & CNN & Restrictive Triplet & CASIA-B; OU-ISIR \\\\\n & 2019 & \\textit{CVPR} & Silhouettes & Tmp: GEI & Global & CNN & Quintuplet & CASIA-B; OU-ISIR LP Bag \\\\\n & 2019 & \\textit{Neurocomp.} & Silhouettes & Tmp: GEI & Global & GAN & Pixel-wise and Entropy & CASIA-B; OU-ISIR \\\\\n & 2019 & \\textit{IJCNN} & Silhouettes & Tmp: GEI & Global & GAN & Multiple Loss Functions & CASIA-B \\\\\n & 2019 & \\textit{IEEE T-IFS} & Skeleton & Tmp: GEI & Global & DAE & Contrastive\\&Triplet Loss & OU-ISIR LP Bag; TUM-GAID \\\\\n & 2019 & \\textit{ICVIP} & Silhouettes & Tmp: Weight Avg. & Partial & CNN & View\\&Cross-Entropy & CASIA-B \\\\\n & 2019 & \\textit{IEEE T-MM} & Silhouettes & Sequence Volume & Global & CNN + LSTM & Contrastive & CASIA-B; OU-ISIR \\\\\n & 2019 & \\textit{IJCNN} & Silhouettes & Tmp: GEI & Global & GAN & Multiple Loss Functions & CASIA-B \\\\\n & 2019 & \\textit{NCAA} & Silhouettes & Tmp: GEI & Global & CNN & Undisclosed & CASIA-B; CASIA-A; OU-ISIR \\\\\n & 2019 & \\textit{PR} & Silhouettes & Tmp: Set pooling & Global & CNN & Center\\&Soft-Max & CASIA-B \\\\\n & 2019 & \\textit{NCAA} & Silhouettes & Tmp: GEI & Global & CNN & Undisclosed & CASIA-B; OU-ISIR \\\\\n & 2019 & \\textit{JVCI} & Silhouettes & Tmp: GEI & Global & CapsNet & Standard Capsule Loss & CASIA-B \\\\\n & 2019 & \\textit{AAAI} & Silhouettes & Tmp: Set Pooling & Partial & CNN & Batch All Triplet loss & CASIA-B; OU-MVLP \\\\\n\\hline\n & 2020 & \\textit{IEEE Access} & Skeleton & Sequence Volume & Global & DAE + LSTM & Mean Square Error & Walking Gait \\\\\n & 2020 & \\textit{PR} & Skeleton & Sequence Volume & 
Partial & CNN & Center\\&Soft-Max & CASIA-B; CASIA-E \\\\\n & 2020 & \\textit{IEEE T-PAMI} & Skel.; Silh. & Sequence Volume & Global & DAE + LSTM & Multiple Loss Functions & CASIA-B; FVG \\\\\n & 2020 & \\textit{IEEE T-IP } & Silhouettes & Sequence Volume & Partial & CNN + LSTM & Angle Center & CASIA-B; OU-MVLP; OU-LP \\\\\n & 2020 & \\textit{PR} & Silhouettes & Tmp: GEI & Global & GAN & Multiple Loss Functions & OULP-BAG; OU-ISIR LP Bag \\\\\n & 2020 & \\textit{MTAP} & Silhouettes & Tmp: MF-GEI & Global & CNN & Undisclosed & CASIA-B \\\\\n & 2020 & \\textit{KBS} & Silhouettes & Sequence Volume & Global & LSTM + Capsule & Capsule\\&Memory & CASIA-B; OU-MVLP \\\\\n & 2020 & \\textit{JINS} & Silhouettes & Tmp: GEI & Global & CNN + LSTM & Undisclosed & CASIA-B; OU-ISIR \\\\\n & 2020 & \\textit{IEEE T-CSVT} & Silhouettes & Tmp: GEI & Global & CNN & Contrastive\\&Triplet Loss & CASIA-B; OU-MVLP; OU-ISIR \\\\\n & 2020 & \\textit{arXiv} & Silhouettes & Sequence Volume & Global & 3D CNN & Triplet Loss & CASIA-B; OU-MVLP \\\\\n & 2020 & \\textit{MTAP} & Silhouettes & Tmp: GEI & Partial & CNN & Undisclosed & CASIA-B; OU-ISIR \\\\\n & 2020 & \\textit{arXiv} & Skeleton & Sequence Volume & Global & GCN & Triplet Loss\\&ArcFace & CASIA-B \\\\\n & 2020 & \\textit{MTAP} & Silhouettes & Tmp: GEI & Global & CNN & Undisclosed & CASIA-B; OU-ISIR \\\\\n & 2020 & \\textit{JIPS} & Silhouettes & Tmp: GEI & Global & CNN & Soft-Max & CASIA-B; OU-ISIR \\\\\n & 2020 & \\textit{MTAP} & Silhouettes & Tmp: GEI & Global & CNN & Undisclosed & CASIA-B \\\\\n & 2020 & \\textit{J. SuperComp.} & Silhouettes & Tmp: GEI & Global & CNN & Undisclosed & CASIA-B; OU-ISIR; OU-MVLP \\\\\n & 2020 & \\textit{ITNEC} & Silhouettes & Tmp: GEI & Global & CapsNet & Standard Capsule Loss & CASIA-B; OU-ISIR \\\\\n& 2020 & \\textit{CVPR} & Silhouettes & Tmp: Hor. 
Pooling & Partial & CNN & Batch All Triplet loss & CASIA-B; OU-MVLP \\\\\n& 2020 & \\textit{CVPR} & Silhouettes & Tmp: GEI & Global & DAE & Contrastive\\&Triplet Loss & CASIA-B; OU-ISIR LP Bag \\\\\n & 2020 & \\textit{IEEE T-Biom} & Skeleton & Sequence Volume & Global & CNN + LSTM & Cross-Entropy\\&Center & OUMVLP-Pose \\\\\n & 2020 & \\textit{ACCVW} & Silhouettes & Tmp: Set Pooling & Partial & CNN & Batch All Triplet loss & CASIA-E \\\\\n & 2020 & \\textit{ACCVW} & Silhouettes & Tmp: Set Pooling & Partial & CNN & Batch All Triplet loss & CASIA-E \\\\\n & 2020 & \\textit{ICPR} & Silhouettes & Tmp: Set Pooling & Partial & CNN + GRU + Caps. & Triplet Loss\\&Cosine Prox. & CASIA-B; OU-MVLP \\\\\n & 2020 & \\textit{IEEE T-Biom.} & Silhouettes & Tmp: GCEM & Partial & CNN + GRU & Triplet Loss\\&Cross-Entropy & CASIA-B; OU-MVLP \\\\\n & 2020 & \\textit{IEEE Access} & Silhouettes & Tmp: Set Pooling & Global & CNN & Triplet Loss & CASIA-B \\\\\n & 2019 & \\textit{ICASSP} & Silhouettes & Tmp: Pooling & Global & CNN & Center-Ranked & CASIA-B; OU-MVLP \\\\\n & 2020 & \\textit{IJCB} & Silhouettes & Tmp: GEI & Global & DAE + GAN & Center\\&Soft-Max & CASIA-B; OU-ISIR \\\\\n & 2020 & \\textit{ACCV} & Skel.; Silh. 
& Sequence Volume & Global & CNN + LSTM & Multiple Loss Functions & CASIA-B; OU-MVLP \\\\\n & 2020 & \\textit{MM} & Silhouettes & Sequence Volume & Global & 3D CNN & Multiple Triplet Losses & CASIA-B; OU-ISIR \\\\\n & 2020 & \\textit{ECCV} & Silhouettes & Tmp: Set Pooling & Global & CNN & Triplet Loss\\&Cross-Entropy & CASIA-B; OU-MVLP \\\\\n\\hline\n\\end{tabular}\n\\label{tab:Status}\n\\end{table*}\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=1\\textwidth]{Images/fig_7_new.pdf}\n\\caption{(a) Visualization of deep gait recognition methods, according to three levels of our taxonomy and publication date; (b) The frequency of different neural architectures, loss functions, and gait datasets used in the literature.}\n\\label{fig:tax}\n\\end{figure*}", "id": "12af581c-affd-4974-9f07-3336875ad3dc", "level": "subsubsection", "origin_cites_number": 76, "parent_id": "a7854008-5c73-44f1-8dd8-92174fd2a6f4", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Proposed Taxonomy" ], [ "subsection", "Neural Architectures" ], [ "subsubsection", "Hybrid Networks" ] ], "subsections": [], "title": "Hybrid Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, we use our proposed taxonomy to survey deep gait recognition methods available in the literature. Table~\\ref{tab:Status} summarizes the main characteristics of the existing deep solutions, sorted according to their publication dates. This table categorizes the available solutions based on the dimensions associated with the proposed taxonomy. This table also includes the loss functions that the methods have used for training as well as the datasets used for performance assessment. \nTo better analyze the methods presented in Table~\\ref{tab:Status}, we also visualize them in Figure \\ref{fig:tax}(a), where each entry represents a method categorized based on the three levels of our taxonomy as well as the publication year and month. 
We excluded the body representation dimension in this figure for better readability. The information about temporal representation, feature representation, and publication date are shown in the \\textit{x}, \\textit{y}, and \\textit{z} axes, respectively. Lastly, the color and marker symbol of each vertical line represent the neural architecture.", "id": "c203cf63-21d3-48d6-90ac-d1b1607a9797", "level": "section", "origin_cites_number": 0, "parent_id": "1ca90fda-21b2-4c6d-bcb1-6e612173f620", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "State-of-the-art" ] ], "subsections": [ "b3854b91-cf86-4aaf-aacc-69b31f0ce176", "29a6cad4-aa4d-450b-9b53-67e616f6bae1" ], "title": "State-of-the-art" }, { "cite_extract_rate": 0.257142857142857, "cites": [ 311, 310, 296, 309, 308, 9138, 295, 300, 298 ], "content": "Our analysis based on Table~\\ref{tab:Status} and Figure \\ref{fig:tax}(a) allows us to reach some interesting conclusions about the recent evolution and trends in deep gait recognition technologies with respect to our proposed taxonomy. Following are the key ideas from the analysis.\n\\textbf{Body Representation.} Silhouettes are the most widely adopted body representation for deep gait recognition, corresponding to over 81\\% of the literature. Skeletons have been considered less frequently compared to silhouettes, corresponding to only 13\\% of the available solutions. There have also been a few methods, i.e., approximately 5\\% of the available literature, that exploit both skeleton and silhouette representations, notably using disentangled representation learning or fusion strategies. Based on our analysis,\nhigh performing gait recognition methods such as~ have all adopted silhouette body representation. 
Nonetheless, due to recent advancements in effective pose-estimation techniques~ capable of extracting accurate and robust skeleton data from videos, we anticipate methods based on hybrid silhouette-skeleton body representations to gain popularity in the near future.\n\\textbf{Temporal Representation.} Gait templates have been the most considered representation for capturing temporal gait information, corresponding to 70\\% of the proposed deep methods. Among different types of gait templates, GEI and set-pooling have been adopted the most. Around 30\\% of solutions adopt sequence volumes to preserve the order of available gait frames and to learn from their relationships. Given the frequent use of convolutional templates in some of the recent high-performing literature~, we anticipate that these templates will gain further popularity and surpass temporal templates in the future.\n\\textbf{Feature Representation.} Our analysis shows that over 87\\% of available methods are based on global feature representations, where deep features are learned by considering gait information as a whole. Recently, many interesting and high-performing methods~ have adopted partial representations by splitting the gait data into local regions. The performance of such techniques points to promising potential in partial representation learning for discriminating key gait features. Hence, we anticipate further research with convincing results in this area. \n\\textbf{Neural Architectures.} As presented in Figure~\\ref{fig:tax}(b), 2D CNNs are the most widely used DNN type for deep gait recognition with 48\\% of the published solutions utilizing only 2D CNN architectures for classification. \n3D CNNs and GANs are the next popular categories, each corresponding to 8\\% of the literature. DAEs, RNNs, CapsNets, DBNs, and GCNs are less considered among DNNs, respectively corresponding to 4\\%, 2\\%, 2\\%, 1\\%, and 1\\% of the methods. 
Concerning hybrid methods, which constitute 26\\% of the published solutions, CNN-RNN combinations are the most widely adopted approach with a 16\\% share,\nwhile the combination of DAEs with GANs and RNNs corresponds to 8\\% of the methods, followed by RNN-CapsNet methods that make up 2\\% of the solutions. We expect that hybrid methods that make use of two or more types of DNNs will attract more attention in the near future and demonstrate robust performance in the field.\n\\textbf{Loss Functions.}\nLoss functions calculate a model's error during training and should ideally be designed to efficiently capture the properties of the problem for facilitating an effective training process~. Figure~\\ref{fig:tax}(b) shows the usage frequency of different well-known loss functions that have been used in the deep gait recognition literature. Among the single loss functions, cross-entropy~ has been the most widely adopted, with 20\\% of solutions having used it. This loss function takes the output probabilities of the predicted classes\nand makes the model output as close as possible to the ground-truth output. Triplet loss~ is the next popular type with a usage frequency of 17\\%. This loss has been notably used by some of the most recent and state-of-the-art solutions~ and ~. This loss function compares a baseline input, also known as \\textit{anchor}, to a \\textit{positive} sample with the same identity, and a \\textit{negative} sample with a different identity. The loss function then ensures that the dissimilarity between two feature vectors belonging to the same subject is lower than that between feature vectors belonging to two different subjects. Contrastive loss~ corresponds to 7\\% of the recognition methods, and uses pairs of samples including anchor-neighbor or anchor-distant. If the pair of samples is anchor-neighbor, the loss function minimizes their distance; otherwise, it increases their distance. 
The next popular loss function, corresponding to 6\\% of the recognition methods, is based on the softmax loss~. There have also been some other loss functions, such as arcface~, center loss~, and Euclidean loss~ with a combined usage frequency of 9\\% that have been less considered for gait recognition. Finally, there have been two classes of deep gait recognition methods that use multiple loss functions (usage frequency of 41\\%), including (\\textit{i}) methods such as~ that add together two or more loss functions to complement each other and compensate for their weaknesses; and (\\textit{ii}) methods that have been designed based on networks with multiple components, such as GANs with generators and discriminators~ and hybrid networks~, where different loss functions have been used to train different components. We expect that deep gait recognition methods based on multiple losses will attract more attention and surpass other approaches in the near future.\n\\textbf{Datasets.}\nWe tally the number of times each dataset has been used by the published literature and present the results in Figure~\\ref{fig:tax}(b). Figure~\\ref{fig:tax}(b) does not include the datasets that have appeared less than 3 times in Table~\\ref{tab:Status}. In addition, a point to consider is that many studies use more than one dataset to perform the experiments. We observe that CASIA-B~ is the most widely used dataset, appearing in 80\\% of the published literature, as it provides a large number of samples with variations in carrying and wearing conditions. OU-ISIR~ was the largest gait dataset prior to 2018; we therefore found OU-ISIR to be the second most popular dataset, having been used by 40\\% of the solutions. Since the introduction of OU-MVLP~ in 2018, this dataset has been receiving considerable attention from the community and has been used by 18\\% of the methods in a span of only 2 years. 
The OU-ISIR LP Bag dataset~ only consists of gait data with carried objects, so naturally it was only considered when designing solutions for specific applications such as those intended to be invariant to carrying conditions from a single viewpoint. As a result, this dataset was used for evaluation purposes by only 5\\% of methods. TUM GAID~ has also been less considered by the community, corresponding to 5\\% of published literature. Finally, CASIA-E~, which was developed in 2020, is the sixth most widely used, appearing in 4\\% of the literature. However, we anticipate that this dataset will become the standard benchmark dataset for gait recognition in the near future, due to the fact that it provides hundreds of video sequences per subject with high variations in appearance and acquisition environments.", "id": "b3854b91-cf86-4aaf-aacc-69b31f0ce176", "level": "subsection", "origin_cites_number": 35, "parent_id": "c203cf63-21d3-48d6-90ac-d1b1607a9797", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "State-of-the-art" ], [ "subsection", "Analysis and Trends" ] ], "subsections": [], "title": "Analysis and Trends" }, { "cite_extract_rate": 0.20588235294117602, "cites": [ 294, 307, 296, 304, 9138, 295, 300 ], "content": "\\label{sec:comp}\nTo shed more light on the performance of deep gait recognition methods, we summarize the performance of the methods tested on the three most popular gait datasets, namely the CASIA-B~, OU-ISIR~, and OU-MVLP~ datasets, in Tables~\\ref{tab:CASIA},~\\ref{tab:OU}, and~\\ref{tab:OU2}, respectively. To perform a fair comparison, these tables only include methods that followed the standard test protocols designed for these datasets, as discussed in Section 3.2. The results show that the method proposed in~ currently provides the best recognition results on CASIA-B (average performance result of 90.4\\%) and OU-ISIR (performance result of 99.9\\%). 
Concerning the OU-MVLP dataset, results show the superiority of the method proposed in~ (performance result of 89.18\\%) over other methods. Apart from~ and~, there are several other methods, including those proposed in~, whose performances are near the state-of-the-art for these datasets. Our analysis shows that some of these best-performing methods, including~, make use of two or more types of neural architectures to boost the performance. Some other methods, including ~, use multiple loss functions to complement each other and compensate for their weaknesses to boost performance. This analysis reveals the effectiveness of hybrid approaches, in terms of both neural architectures and loss functions, for achieving strong performance in the area.\n\\begin{table}[!t]\n\\centering\n \\setlength\\tabcolsep{6pt}\n\\caption{State-of-the-art results on the CASIA-B dataset~. NM, BG, and CL are respectively normal walking, walking with a bag, and walking with a coat test protocols.}\n\\begin{tabular}{ l l l | l l l | l }\n\\hline\n\\multicolumn{3}{ c |}{\\textbf{Method}} & \\multicolumn{4}{ c }{\\textbf{Performance}} \\\\ \n\\hline\n\t \\textbf{Reference} & \\textbf{Year} & \\textbf{Venue} & \\textbf{NM} & \\textbf{BG} & \\textbf{CL} & \\textbf{Average} \\\\ \\hline\\hline\n\t~ & 2015 & \\textit{IEEE T-MM} & 78.9 & --- & --- & --- \\\\\n\t~ & 2017 & \\textit{Int. J. 
Biom.} & 90.8 & 45.9 & 45.3 & 60.7\\\\\n\t~ & 2017 & \\textit{IEEE T-PAMI} & 94.1 & 72.4 & 54.0 & 73.5 \\\\ \n\t~ & 2018 & \\textit{DIC} & 83.3 & --- & 62.5 & --- \\\\\n\t~ & 2019 & \\textit{PR} & 75.0 & --- & --- & --- \\\\ \n\t~ & 2019 & \\textit{IEEE T-IFS} & 79.8 & --- & --- & ---\\\\ \n\t~ & 2019 & \\textit{PR} & 89.9 & --- & --- & --- \\\\\n\t~ & 2019 & \\textit{CVPR} & 93.9 & 82.6 & 63.2 & 79.9 \\\\\n\t~ & 2019 & \\textit{CVPR} & 89.9 & --- & --- & --- \\\\\n\t~ & 2019 & \\textit{IET Biom.} & 94.5 & 78.6 & 51.6 & 74.9\\\\\n\t~ & 2019 & \\textit{PRL} & 86.1 & --- & --- & --- \\\\\n\t~ & 2019 & \\textit{AAAI} & 95.0 & 87.2 & 70.4 & 84.2 \\\\\n\t~ & 2020 & \\textit{IEEE T-PAMI} & 92.3 & 88.9 & 62.3 & 81.2\\\\\n\t~ & 2020 & \\textit{IEEE T-IP} & 96.0 & --- & --- & --- \\\\\n\t~ & 2020 & \\textit{IEEE T-CSVT} & 92.7 & --- & --- & --- \\\\\n\t~ & 2020 & \\textit{IEEE Access} & 95.1 & 87.9 & 74.0 & 85.7 \\\\\n\t~ & 2020 & \\textit{ICPR} & 95.7 & 90.7 & 72.4 & 86.3\\\\\n\t~ & 2020 & \\textit{IEEE T-Biom.} & 95.2 & 89.7 & 74.7 & 86.5\\\\\n\t~ & 2020 & \\textit{CVPR} & 96.2 & 91.5 & 78.7 & 88.8 \\\\\n ~ & 2020 & \\textit{CVPR} & 94.5 & --- & --- & --- \\\\\n ~& 2020 & ECCV & 96.8 & \\textbf{94.0} & 77.5 & 89.4 \\\\\n\t~ & 2020 & \\textit{ACCV} & \\textbf{97.9} & 93.1 & 77.6 & 89.5\\\\\n\t~ & 2020 & \\textit{MM} & 96.7 & 93.0 & \\textbf{81.5} & \\textbf{90.4} \\\\\n\\hline\n\\end{tabular}\n\\label{tab:CASIA}\n\\end{table}\n\\begin{table}[!t]\n\\centering\n \\setlength\\tabcolsep{8pt}\n\\caption{State-of-the-art results on OU-ISIR~ dataset.}\n\\begin{tabular}{ l l l | l }\n\\hline\n\\multicolumn{3}{ c |}{\\textbf{Method}} & \\multicolumn{1}{ c }{} \\\\ \n\t \\textbf{Reference} & \\textbf{Year} & \\textbf{Venue} & \\textbf{Performance} \\\\ \\hline\\hline\n & 2016 & \\textit{ICASSP} &90.71 \\\\ \n & 2017 & \\textit{IEEE T-PAMI} &92.77 \\\\ \n & 2017 & \\textit{Applied Sci.} &91.25 \\\\\n & 2018 & \\textit{Neuroinform.} &88.56 \\\\\n & 2018 & \\textit{IEEE Access} &95.67 
\\\\\n & 2019 & \\textit{IET Biom.} &97.40 \\\\\n & 2019 & \\textit{IEEE T-IFS } &93.20 \\\\\n & 2019 & \\textit{PRL} &94.62 \\\\\n & 2019 & \\textit{IEEE T-MM} &97.26 \\\\\n & 2020 & \\textit{IJCB} &94.17 \\\\\n & 2020 & \\textit{IEEE T-CSVT} &98.93 \\\\\n & 2020 & \\textit{IEEE T-IP} &99.27 \\\\\n & 2020 & \\textit{MM} &\\textbf{99.90} \\\\\n\\hline\n\\end{tabular}\n\\label{tab:OU}\n\\end{table}\n\\begin{table}[!t]\n\\centering\n \\setlength\\tabcolsep{8pt}\n\\caption{State-of-the-art results on OU-MVLP~ dataset.}\n\\begin{tabular}{ l l l | l }\n\\hline\n\\multicolumn{3}{ c |}{\\textbf{Method}} & \\multicolumn{1}{ c }{} \\\\ \n\t \\textbf{Reference} & \\textbf{Year} & \\textbf{Venue} & \\textbf{Performance} \\\\ \\hline\\hline\n & 2019 & \\textit{AAAI} & 83.40 \\\\\n & 2020 & \\textit{ICASSP} & 57.80 \\\\\n & 2020 & \\textit{IEEE T-CSVT} & 63.10 \\\\\n & 2020 & \\textit{IEEE T-IP} & 84.60 \\\\\n & 2020 & \\textit{ICPR} & 84.50 \\\\\n & 2020 & \\textit{IEEE T-Biom.} & 84.30 \\\\\n & 2020 & \\textit{CVPR} & 88.70 \\\\\n & 2020 & \\textit{ECCV} & \\textbf{89.18} \\\\\n\\hline\n\\end{tabular}\n\\label{tab:OU2}\n\\end{table}\n{", "id": "29a6cad4-aa4d-450b-9b53-67e616f6bae1", "level": "subsection", "origin_cites_number": 34, "parent_id": "c203cf63-21d3-48d6-90ac-d1b1607a9797", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "State-of-the-art" ], [ "subsection", "Performance Comparison" ] ], "subsections": [], "title": "Performance Comparison" }, { "cite_extract_rate": 0.533333333333333, "cites": [ 315, 7217, 8338, 7015, 314, 9138, 313, 312 ], "content": "}\n{Traditional spoofing attacks to gait recognition systems were attempted by trained impostors, notably with similar body shapes and clothes, imitating the walking styles of target subjects~. However, these attacks are rather hard and limited as they require trained and qualified imitators. 
Different from the traditional spoofing attacks, \\textit{adversarial attacks} on gait recognition systems have been designed to imperceptibly fool recognition systems by synthesizing input gait videos with both good quantitative similarity and visual realism.}\n{Despite the strong performance of deep learning solutions in computer vision, such solutions have been surprisingly vulnerable to adversarial attacks~. These attacks introduce perturbations in visual content that can manipulate the predictions of deep models by resulting in embeddings capable of fooling the classifiers~. \nSince the introduction of the first adversarial attacks on deep neural networks in 2014~, these attacks have attracted significant attention from the research community in computer vision and pattern analysis. Among the adversarial attack approaches, GANs~ have been one of the most powerful methods for image and video manipulation.}\n{The first attempt to investigate the vulnerability of gait recognition systems to adversarial attacks was done in~ using a GAN for synthesizing gait images. In this context, the foreground of each frame of the source video is first segmented and then fed to the generator along with the target background. The generator is composed of two parallel encoder-decoder networks, respectively dealing with foreground and background information. The corresponding feature representations from these two networks are fused at multiple scales. Static and dynamic silhouette-based losses have been designed in order to force the model to generate more realistic results for gait recognition. Additionally, a triplet loss is used to preserve the similarity between the individuals in the source and generated videos. The performance of two state-of-the-art gait recognition systems, including CNNGait~ and GaitSet~, was evaluated using the generated gait samples. 
The results showed that the generated samples provide sufficient discriminative information to bypass gait recognition systems. To show how the sequence-based gait recognition is vulnerable to adversarial attacks, another study was done in~. In this study, a novel temporal sparse adversarial attack method was proposed based on a GAN to synthesize high-quality sequences of silhouette frames. To ensure imperceptibility of the proposed method, a few adversarial gait silhouettes are substituted or inserted in the sequence. Experimental results show that the state-of-the-art GaitSet~ method has low robustness to this adversarial attack.}\n{The presented results in~ suggest that adversarial attacks can surprisingly degrade the performance of deep gait recognition methods, thus posing a real threat to such recognition systems. This demonstrates the necessity of adopting efficient countermeasure techniques against adversarial attacks aimed towards deep gait recognition systems.}", "id": "d74cb17e-ad4d-4784-8c2d-f13df1f49804", "level": "section", "origin_cites_number": 15, "parent_id": "1ca90fda-21b2-4c6d-bcb1-6e612173f620", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Vulnerability to Adversarial Attacks" ] ], "subsections": [], "title": "Vulnerability to Adversarial Attacks" }, { "cite_extract_rate": 0, "cites": [], "content": "Despite the great success of gait recognition using deep learning techniques, there still remains a large number of challenges that need to be addressed in the area. Here, we further point out a few promising future research directions and open problems in this domain. 
These directions may facilitate future research activities and foster real-world applications.", "id": "73e8a8f0-de22-4ddf-bbaa-fec216d6b3cc", "level": "section", "origin_cites_number": 0, "parent_id": "1ca90fda-21b2-4c6d-bcb1-6e612173f620", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Challenges and Future Research Directions" ] ], "subsections": [ "ef418d92-183b-4dbf-84cd-f4581eba68d5", "4a1ba97f-c323-4eb6-9f69-39c63702afa1", "2f8e6d7f-8501-474a-8b6b-a74553ed5cb3", "9e57c9fd-2598-401a-98a6-1ffaaf205af2", "74e9a790-4f79-4c0c-b201-edd4b360bb05", "a1025c3d-68e9-40c7-a32c-b33f93ac7746", "aaa73ee6-e50d-4ab2-ad19-7591a0a71a0b" ], "title": "Challenges and Future Research Directions" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 318, 8340, 7218, 304, 319, 7016, 295, 8339, 317, 316 ], "content": "Complex gait data arise from the interaction between many factors such as occlusion, camera view-points, appearance of individuals, sequence order, body part motion, or lighting sources present in the data~. These factors can interact in complex manners that can complicate the recognition task. There have recently been a growing number of methods in other research areas, such as face recognition~, action recognition~, emotion recognition~, and pose estimation~, that focus on learning disentangled features by extracting representations that separate the various explanatory factors in the high-dimensional space of the data~. However, the majority of available deep gait recognition methods have not explored disentanglement approaches, and hence are not explicitly able to separate the underlying structure of gait data in the form of meaningful disjoint variables. Despite the recent progress in using disentanglement approaches in a few gait recognition methods~, there is still room for improvement. 
To foster further progress in this area, the adaptation of novel generative models~ and loss functions~ can be considered to learn more discriminative gait representations by explicitly disentangling identity and non-identity components.", "id": "ef418d92-183b-4dbf-84cd-f4581eba68d5", "level": "subsection", "origin_cites_number": 14, "parent_id": "73e8a8f0-de22-4ddf-bbaa-fec216d6b3cc", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "Disentanglement" ] ], "subsections": [], "title": "Disentanglement" }, { "cite_extract_rate": 0.8888888888888881, "cites": [ 321, 7000, 7020, 322, 7019, 320, 7018, 7017 ], "content": "A considerable majority of available deep gait recognition methods follow the supervised learning paradigm, and thus require labeled data during training. Nevertheless, in real-world applications, labeled data may not always be readily available and labels are generally expensive and time-consuming to obtain. In order to utilize unlabeled gait data to learn more efficient and generalizable gait representations, self-supervised learning~ can be exploited. In this context, general and rich high-level semantics can be captured without using any annotated labels. Self-supervised approaches can define various pretext tasks, such as body part motion or sequence order recognition for input sequences~, to be solved by a network. Through learning these pretext tasks, the network can then learn generic features. The network trained with the generated pre-text labels can then be fine-tuned with the actual labels in order to recognize the identity. Among self-supervised approaches, contrastive learning methods~, including SimCLR~, are promising approaches that learn representations by defining an \\textit{anchor} and a \\textit{positive} sample in the feature space, and then aim to make the anchor separable from the \\textit{negative} samples. 
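The anchor/positive/negative setup just described can be made concrete with a single term of an NT-Xent-style contrastive objective. The snippet below is a simplified, illustrative sketch (one anchor, cosine similarity, a hypothetical temperature of 0.5), not the full SimCLR formulation, which contrasts every pair of augmented views in a batch.

```python
import numpy as np

def contrastive_term(anchor, positive, negatives, temperature=0.5):
    """One NT-Xent-style term: cross-entropy with the positive pair
    treated as the correct "class" among all candidate pairs."""
    unit = lambda v: v / np.linalg.norm(v)
    a = unit(anchor)
    sims = np.array([a @ unit(positive)] + [a @ unit(n) for n in negatives])
    logits = sims / temperature
    # negative log-softmax of the positive entry (index 0)
    return np.log(np.exp(logits).sum()) - logits[0]

anchor = np.array([1.0, 0.0])
negatives = [np.array([0.0, 1.0]), np.array([-1.0, 0.0])]
close = contrastive_term(anchor, np.array([0.9, 0.1]), negatives)
far = contrastive_term(anchor, np.array([-0.9, 0.1]), negatives)
print(close < far)  # → True: aligned positives give a lower loss
```

Minimizing this quantity pulls the anchor toward its positive view and away from the negatives, which is exactly the separability property described above.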
One important challenge in using self-supervised learning in the context of gait recognition is to design effective pretext tasks to ensure the network can learn meaningful representations.\nAdditionally, \njoint learning of several pretext tasks in a network~, instead of a single pretext task, notably using several loss functions~, can provide the network with more representative features~.\nWe expect these challenges to gain increased popularity in the context of deep gait recognition in the near future.", "id": "4a1ba97f-c323-4eb6-9f69-39c63702afa1", "level": "subsection", "origin_cites_number": 9, "parent_id": "73e8a8f0-de22-4ddf-bbaa-fec216d6b3cc", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "Self-supervised Learning" ] ], "subsections": [], "title": "Self-supervised Learning" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 7021, 8341, 324, 326, 325, 323 ], "content": "Multi-task learning is generally performed to simultaneously learn multiple tasks using a shared model, thus learning more generalized and often reinforced representations~. In many cases, these approaches offer advantages such as increased convergence speed, improved learning by leveraging auxiliary information, and reduced overfitting through shared representations~.\nDespite the effectiveness of multi-task learning in a number of other domains~, most deep gait recognition solutions in the literature focus on the single task of identification. Thus, most existing works learn features that are sensitive to identity without considering interactions with other latent factors, such as affective states, gender, and age~. \nIn this context, simultaneous learning of multiple tasks for gait recognition may present new design paradigms and optimization challenges, notably in terms of task identification and loss functions~. 
\nWe expect these challenges to attract further attention in the near future and be tackled in the context of gait recognition with multi-task learning.", "id": "2f8e6d7f-8501-474a-8b6b-a74553ed5cb3", "level": "subsection", "origin_cites_number": 9, "parent_id": "73e8a8f0-de22-4ddf-bbaa-fec216d6b3cc", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "Multi-task Learning" ] ], "subsections": [], "title": "Multi-task Learning" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 327, 328, 329, 296, 157, 7023, 330, 7022, 300 ], "content": "Deep gait recognition methods require large amounts of data for effective training and reliable evaluation. This issue is evident in Figure~\\ref{fig:tax}(b) where most of the deep gait recognition solutions~ used large-scale gait datasets for instance CASIA-B~, OU-ISIR~, and OU-MVLP~. In the context of deep gait recognition, data synthesis, for instance using GANs~, can be considered for creating large datasets or data augmentation~. Furthermore, developing synthesized datasets can also be advantageous in that subject privacy concerns could be alleviated with fake subject data. Similar approaches have been carried out for the more \\textit{privacy-sensitive} area of facial recognition~, where large datasets comprised only of fake data have been developed to be used in deep learning research~. In addition, such approaches can be used to increase the variance of existing datasets. For instance, large-scale gait datasets such as OU-ISIR~ and OU-MVLP~ only provide normal walking sequences with no variations in occlusion or carrying and clothing conditions. Thus, solutions trained on these datasets usually fail to generalize well when facing variations in appearance and environment during the testing phase. 
Here, domain adaptation~ is a potential remedy for this problem that can modify existing datasets to include the desired variations, thus eliminating the necessity for collecting new data. \nFurthermore, gait synthesis can be performed for computer animation~ and by game engines~\nto generate large-scale synthetic gait datasets. Hence, we anticipate that with advances in gait data synthesis and domain adaptation techniques, more complementary gait datasets will be constructed to enable the development of more robust solutions.", "id": "9e57c9fd-2598-401a-98a6-1ffaaf205af2", "level": "subsection", "origin_cites_number": 27, "parent_id": "73e8a8f0-de22-4ddf-bbaa-fec216d6b3cc", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "Data Synthesis and Domain Adaptation" ] ], "subsections": [], "title": "Data Synthesis and Domain Adaptation" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 7219 ], "content": "The practical value of gait recognition systems is strongly dependent on their ability to generalize to unseen data. To the best of our knowledge, cross-dataset gait recognition on well-known datasets such as CASIA-B~, OU-ISIR~, and OU-MVLP~ has not been performed, as notable solutions available in the literature all use the same gait dataset for both training and testing. However, in many real applications such as deployed products, test or run-time data are often obtained under a variety of conditions that differ from the training data. In order to examine the generalizability of gait recognition systems in real-world applications, cross-dataset evaluations should be adopted, for example using transfer learning techniques~. In this context, a solution trained on one dataset can be used to extract features from the test data (gallery and probe sets) of another dataset. 
The extracted features can then feed a classifier to perform gait recognition. Cross-dataset gait recognition can potentially be formulated as an out-of-distribution (OOD) testing problem, where the generalization ability of a deep model beyond the biases of the training set is evaluated~. We expect that OOD tests~ become increasingly popular for evaluating the generalization ability of gait recognition methods.", "id": "74e9a790-4f79-4c0c-b201-edd4b360bb05", "level": "subsection", "origin_cites_number": 6, "parent_id": "73e8a8f0-de22-4ddf-bbaa-fec216d6b3cc", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "Cross-Dataset Evaluation" ] ], "subsections": [], "title": "Cross-Dataset Evaluation" }, { "cite_extract_rate": 0.4, "cites": [ 334, 332, 331, 333 ], "content": "A large number of gait datasets contain multi-view sequences, providing gait information captured from different view-points. Most of the current methods available in the literature only perform single-view gait recognition. These methods generally learn intra-view relationships and ignore inter-view information between multiple viewpoints.\nBy casting the problem as multi-view, descriptors such as gate-level fusion LSTM~, state-level fusion LSTM~, spatio-temporal LSTM~, multi-perspective LSTM~, and multi-view LSTM~, can be adopted to jointly learn both the intra-view and inter-view relationships. Another challenge in multi-view gait recognition is that most existing multi-view descriptors consider a well defined camera network topology with fixed camera positions. However, data collection in real-world environments is often uncontrollable, i.e. data might be captured from unpredictable viewing angles~ or even from moving cameras~. To this end, existing multi-view methods, which mostly rely on pre-trained descriptors, fail to bridge the domain gap between the training and run-time multi-view data. 
We expect that future research directions in this area will be shaped by proposing novel approaches, for example using clustering algorithms~, combinatorial optimization~, and self-supervised learning~, for adapting generic gait descriptors to multi-view geometry.", "id": "a1025c3d-68e9-40c7-a32c-b33f93ac7746", "level": "subsection", "origin_cites_number": 10, "parent_id": "73e8a8f0-de22-4ddf-bbaa-fec216d6b3cc", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "Multi-View Recognition" ] ], "subsections": [], "title": "Multi-View Recognition" }, { "cite_extract_rate": 0.291666666666666, "cites": [ 337, 335, 290, 332, 288, 336, 297 ], "content": "Some works in the field have fused gait information with other biometric information such as face~ and ear~, which can be obtained from high-quality gait videos. As we discussed earlier, gait recognition systems are generally challenged when facing variations in subject appearance and clothing, camera view-points, and body occlusions. On the other hand, additional sources of biometric information, notably face and ear, are less sensitive to some of these challenging factors. Instead, face and ear recognition systems can be negatively affected by some other factors such as low image quality, for instance blurred or low-resolution images, varying lighting, or facial occlusions, which in turn have limited impact on the performance of gait recognition systems. Hence, various biometric modalities and gait can complement one another to compensate for each other's weaknesses in the context of a multi-biometric system~. \nApart from the complementary (hard-)biometric traits, soft-biometric traits such as age~, height~, weight~, gender~, and particular body marks including tattoos~ can also be included to boost overall performance. 
The combination of other soft- and hard-biometric traits with gait has mostly been done in the literature based on non-deep methods~, while multi-modal deep learning methods~, notably based on fusion~, joint learning~, and attention~ networks, can also be adopted. Hence, we anticipate that research on deep multi-biometric recognition systems that include gait will gain popularity in the coming years.", "id": "aaa73ee6-e50d-4ab2-ad19-7591a0a71a0b", "level": "subsection", "origin_cites_number": 24, "parent_id": "73e8a8f0-de22-4ddf-bbaa-fec216d6b3cc", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Challenges and Future Research Directions" ], [ "subsection", "Multi-biometric Recognition" ] ], "subsections": [], "title": "Multi-biometric Recognition" }, { "cite_extract_rate": 0, "cites": [], "content": "We provided a survey of deep gait recognition methods that was driven by a novel taxonomy with four dimensions, namely body representation, temporal representation, feature representation, and neural architectures. Following our taxonomy, we reviewed the most representative deep gait recognition methods and provided discussions on their characteristics, advantages, and limitations. We additionally reviewed the most commonly used datasets along with their evaluation protocols and corresponding performance results reported in the literature. We finally concluded this survey with a discussion on current challenges, pointing out a few promising future research directions in this domain. We expect that this survey provides insights into the technological landscape of gait recognition for guiding researchers in advancing future research. \n\\bibliographystyle{ieeetran}\n\\bibliography{references}\n\\begin{IEEEbiography}[\n{\n\\includegraphics[width=1in,height=1.21in,clip,keepaspectratio]{Alireza.JPG}\n}\n]\n{Alireza Sepas-Moghaddam}\nreceived the B.Sc. and M.Sc. (first class honors) degrees in Computer Engineering in 2007 and 2010, respectively. 
From 2011 to 2015, he was with the Shamsipour Technical University, Tehran, Iran, as a lecturer. In 2015, he joined Instituto Superior Técnico, University of Lisbon, Lisbon, Portugal, where he completed his Ph.D. degree with distinction and honour in Electrical and Computer Engineering in 2019. From 2019 to 2021, he held a postdoctoral fellow position at Queen’s University, Kingston, ON, Canada, where he worked on different research projects funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and Mitacs, as well as private sector. He is currently working as a senior computer vision data scientist at Socure, dealing with digital identity verification and fraud detection from visual signals. His main research interests are theoretical and applied machine learning, notably deep learning, for biometrics, forensics, and affective computing. He has contributed more than 45 papers in notable conferences and journals in his area, including \\textit{CVPR}, \\textit{ICCV}, \\textit{IEEE T-IP}, \\textit{IEEE T-IFS}, \\textit{IEEE T-CSVT}, and \\textit{IEEE T-Biom}.\nHe has served as a program chair, publicity chair, and area chair for several conferences and workshops, including \\textit{ICPR}, \\textit{AAAI-HCSSL}, and \\textit{EUVIP} and has been a reviewer for multiple top-tier conferences and journals in the field. \n\\end{IEEEbiography}\n\\begin{IEEEbiography}[\n{\n\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Ali.jpg}\n}\n]\n{Ali Etemad} \nis an Assistant Professor, as well as a Mitchell Professor in AI for Human Sensing \\& Understanding at the Department of Electrical and Computer Engineering, and Ingenuity Labs Research Institute, Queen’s University, Canada. He leads the Ambient Intelligence and Interactive Machines (Aiim) lab, where his main area of research is machine learning and deep learning focused on human-centered applications with wearables, smart devices, and smart environments. 
\nHis work has appeared in top-tier venues such as CVPR, AAAI, ICCV, ACM CHI, ICASSP, Interspeech, ICPR, FG, T-AFFC, T-IP, T-AI, T-ASLP, T-IFS, T-BIOM, T-HMS, T-NSRE, IEEE T-GRS, IEEE IoT J., and others. He is a co-inventor of 9 patents and has given over 20 invited talks at different venues. \nDr. Etemad is an Associate Editor for IEEE Transactions on Artificial Intelligence, and has been a PC member/reviewer for many notable conferences and journals in the field. \nHe has been the General Chair for the AAAI Workshop on Human-Centric Self-Supervised Learning (2022), Publicity Co-Chair for European Workshop on Visual Information Processing (2022), and Industry Relations Chair for Canadian Conference on AI (2019).\nDr. Etemad’s lab and research program have been funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada, Ontario Centers of Excellence (OCE), Canadian Foundation for Innovation (CFI), Mitacs, and other organizations, as well as the private sector.\n\\end{IEEEbiography}\n\\end{document}", "id": "0f41dce4-04fc-4ee5-a289-0d7f3a059438", "level": "section", "origin_cites_number": 0, "parent_id": "1ca90fda-21b2-4c6d-bcb1-6e612173f620", "prefix_titles": [ [ "title", "Deep Gait Recognition: A Survey" ], [ "section", "Summary" ] ], "subsections": [], "title": "Summary" } ]
56
[ 287, 289, 290, 293, 288, 292, 291, 294, 296, 9138, 295, 297, 298, 300, 299, 97, 301, 304, 302, 303, 305, 7217, 235, 306, 307, 311, 310, 309, 308, 315, 8338, 7015, 314, 313, 312, 318, 8340, 7218, 319, 7016, 8339, 317, 316, 321, 7000, 7020, 322, 7019, 320, 7018, 7017, 7021, 8341, 324, 326, 325, 323, 327, 328, 329, 157, 7023, 330, 7022, 7219, 334, 332, 331, 333, 337, 335, 336 ]
1.11316
[ "Salman Khan", "Muzammal Naseer", "Munawar Hayat", "Syed Waqas Zamir", "Fahad Shahbaz Khan", "Mubarak Shah" ]
Transformers in Vision: A Survey
2021
2021-01-04T18:57:24Z
cs.CV
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequence as compared to recurrent networks \eg, Long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (\eg, images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of transformers in vision including popular recognition tasks (\eg, image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (\eg, visual-question answering, visual reasoning, and visual grounding), video processing (\eg, activity recognition, video forecasting), low-level vision (\eg, image super-resolution, image enhancement, and colorization) and 3D analysis (\eg, point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works. 
We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "1d8963e3-819a-45e0-946b-089d3b36ab2c", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ] ], "subsections": [ "e2686360-4e70-45de-8977-5427ca5218af", "22b21833-1042-464a-88ed-6aee9d50b135", "8b8a30a5-c1dd-4de8-aa81-91ad0d710a62", "640cdeab-47b5-475b-8e86-578eb18e1ffa", "a948af98-2404-4514-9b8b-7cfeedc952e4" ], "title": "root" }, { "cite_extract_rate": 0.794117647058823, "cites": [ 679, 166, 4830, 7590, 7993, 707, 2012, 7360, 7302, 7070, 826, 5764, 38, 5762, 2635, 4829, 768, 7, 7373, 732, 728, 5763, 9, 8454, 4766, 5765 ], "content": "\\label{sec:introduction}}\n\\IEEEPARstart{T}{ransformer} \n models~ have recently demonstrated exemplary performance on a broad range of language tasks \\eg, text classification, machine translation and question answering. Among these models, the most popular ones include BERT (Bidirectional Encoder Representations from Transformers) , GPT (Generative Pre-trained Transformer) v1-3 , RoBERTa (Robustly Optimized BERT Pre-training) and T5 (Text-to-Text Transfer Transformer) . The profound impact of Transformer models has become more clear with their scalability to very large capacity models . For example, the BERT-large model with 340 million parameters was significantly outperformed by the GPT-3~ model with 175 billion parameters while the latest mixture-of-experts Switch transformer scales up to a whopping 1.6 trillion parameters!\n \\begin{figure*}[!t]\n \\centering\n \\includegraphics[width=\\textwidth]{Figs/FieldGrowth.png}\n \\vspace{-0.7cm}\n \\caption{\\small Statistics on the number of times keywords such as BERT, Self-Attention, and Transformers appear in the titles of Peer-reviewed and arXiv papers over the past few years (in Computer Vision and Machine Learning). The plots show consistent growth in recent literature. 
This survey covers recent progress on Transformers in the computer vision domain.}\n \label{fig:growth}\n\end{figure*}\nThe breakthroughs from Transformer networks in the Natural Language Processing (NLP) domain have sparked great interest in the computer vision community to adapt these models for vision and multi-modal learning tasks (Fig.~\ref{fig:growth}). However, visual data follows a typical structure (e.g., spatial and temporal coherence), thus demanding novel network designs and training schemes. As a result, Transformer models and their variants have been successfully used for image recognition , object detection , segmentation , image super-resolution , video understanding , image generation , text-image synthesis and visual question answering , among several other use cases . This survey aims to cover such recent and exciting efforts in the computer vision domain, providing a comprehensive reference to interested readers. \nTransformer architectures are based on a self-attention mechanism that learns the relationships between elements of a sequence. As opposed to recurrent networks that process sequence elements recursively and can only attend to short-term context, Transformers can attend to complete sequences, thereby learning long-range relationships.
\n\\sk{Although attention models have been extensively used in both feed-forward and recurrent networks , Transformers are based solely on the attention mechanism and have a unique implementation (i.e., multi-head attention) optimized for parallelization.} An important feature of these models is their scalability to high-complexity models and large-scale datasets \\sk{e.g., in comparison to some of the other alternatives such as hard attention which is stochastic in nature and requires Monte Carlo sampling for sampling attention locations.} \nSince Transformers assume minimal prior knowledge about the structure of the problem as compared to their convolutional and recurrent counterparts~, they are typically pre-trained using pretext tasks on large-scale (unlabelled) datasets . Such a pre-training avoids costly manual annotations, thereby encoding highly expressive and generalizable representations that model rich relationships between the entities present in a given dataset. The learned representations are then fine-tuned on the downstream tasks in a supervised manner to obtain favorable results. \nThis paper provides a holistic overview of the transformer models developed for computer vision applications. We develop a taxonomy of the network design space and highlight the major strengths and shortcomings of the existing methods. Other literature reviews mainly focus on the NLP domain or cover generic attention-based approaches . By focusing on the newly emerging area of visual transformers, we comprehensively organize the recent approaches according to the intrinsic features of self-attention and the investigated task. We first provide an introduction to the salient concepts underlying Transformer networks and then elaborate on the specifics of recent vision transformers. 
\nWherever possible, we draw parallels between the Transformers used in the NLP domain and the ones developed for vision problems to highlight major novelties and interesting domain-specific insights. \nRecent approaches show that convolution operations can be fully replaced with attention-based transformer modules and have also been used jointly in a single design to encourage symbiosis between the two complementary sets of operations. \nThis survey finally details open research questions with an outlook towards possible future work.
\nBelow, we provide a brief tutorial on these two ideas (Sec.~\\ref{sec: Self-supervision} and~\\ref{sec: Self-Attention}), along with a summary of seminal Transformer networks (Sec.~\\ref{sec: Transformer Model} and~\\ref{sec: The Bidirectional Representations}) where these ideas have been applied. \nThis background will help us better understand the forthcoming Transformer based models used in the computer vision domain (Sec.~\\ref{sec:transformers_vision}).\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=1\\columnwidth]{Figs/Self-attention.png}\n \\caption{\\small \n An example self-attention block used in the vision domain . \n Given the input sequence of image features, the triplet of (key, query, value) is calculated followed by attention calculation and applying it to reweight the values. \n A single head is shown here and an output projection ($\\textbf{W}$) is finally applied to obtain output features with the same dimension as the input. Figure adapted from .}\n \\vspace{-0.4cm}\n \\label{fig:self-attention}\n\\end{figure}\n \\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.99\\textwidth]{Figs/Transformer_Arch.png}\n \\caption{\\small \\emph{Architecture of the Transformer Model} . The model was first developed for the language translation task where an input sequence in one language is required to be converted to the output sequence in another language. The Transformer encoder (\\emph{middle} row) operates on the input language sequence and converts it to an embedding before passing it on to the encoder blocks. \n The Transformer decoder (\\emph{bottom} row) operates on the previously generated outputs in the translated language and the encoded input sequence from the middle branch to output the next word in the output sequence. The sequence of previous outputs (used as input to the decoder) is obtained by shifting the output sentence to the right by one position and appending start-of-sentence token at the beginning. 
This shifting prevents the model from simply learning to copy the decoder input to the output. The ground-truth to train the model is simply the output language sequence (without any right shift) appended with an end-of-sentence token. The blocks consisting of multi-head attention (\emph{top} row) and feed-forward layers are repeated $N$ times in both the encoder and decoder.\n }\n \label{fig:transformer}\n \end{figure*}", "id": "22b21833-1042-464a-88ed-6aee9d50b135", "level": "section", "origin_cites_number": 8, "parent_id": "1d8963e3-819a-45e0-946b-089d3b36ab2c", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Foundations" ] ], "subsections": [ "8addc277-b7f8-4948-b867-35bbbc14fb35", "bafe2504-4c44-4e7e-9f09-0cb1c7edd4d7", "d03e9ab4-05dc-48a8-93fd-d9915a7c4cd2", "5a19d5ec-193b-41aa-95bc-7dc9f74b1ec5" ], "title": "Foundations" }, { "cite_extract_rate": 1, "cites": [ 5766, 7994, 722, 38 ], "content": "}\n\label{sec: Self-Attention}\nGiven a sequence of items, self-attention estimates the relevance of one item to other items (e.g., which words are likely to come together in a sentence). The self-attention mechanism is an integral component of Transformers, which explicitly models the interactions between all entities of a sequence for structured prediction tasks. Basically, a self-attention layer updates each component of a sequence by aggregating global information from the complete input sequence. \nLet us denote a sequence of $n$ entities ($\mathbf{x}_1, \mathbf{x}_2, \cdots \mathbf{x}_n$) by $\mathbf{X} \in \mathbb{R}^{n \times d}$, where $d$ is the embedding dimension to represent each entity. The goal of self-attention is to capture the interaction amongst all $n$ entities by encoding each entity in terms of the global contextual information.
This is done by defining three learnable weight matrices to transform Queries ($\\mathbf{W}^Q \\in \\mathbb{R}^{d \\times d_q}$), Keys ($\\mathbf{W}^K \\in \\mathbb{R}^{d \\times d_k}$) and Values ($\\mathbf{W}^V \\in \\mathbb{R}^{d \\times d_v}$), where $d_q=d_k$. The input sequence $\\mathbf{X}$ is first projected onto these weight matrices to get $\\mathbf{Q}=\\mathbf{X}\\mathbf{W}^Q$, $\\mathbf{K}=\\mathbf{X}\\mathbf{W}^K$ and $\\mathbf{V}=\\mathbf{X}\\mathbf{W}^V$. The output $\\mathbf{Z}\\in\\mathbb{R}^{n \\times d_v}$ of the self attention layer is,\n$$\\mathbf{Z}= \\mathbf{softmax}\\left (\\frac{\\mathbf{Q}\\mathbf{K}^T}{\\sqrt{d_q}}\\right )\\mathbf{V}.$$\nFor a given entity in the sequence, the self-attention basically computes the dot-product of the query with all keys, which is then normalized using softmax operator to get the attention scores. Each entity then becomes the weighted sum of all entities in the sequence, where weights are given by the attention scores (Fig.~\\ref{fig:self-attention} and Fig.~\\ref{fig:transformer}, top row-left block). \n\\textbf{Masked Self-Attention:} The standard self-attention layer attends to all entities. For the Transformer model which is trained to predict the next entity of the sequence, the self-attention blocks used in the decoder are masked to prevent attending to the subsequent future entities. This is simply done by an element-wise multiplication operation with a mask $\\mathbf{M} \\in \\mathbb{R}^{n \\times n}$, where $\\mathbf{M}$ is an upper-triangular matrix. The masked self-attention is defined by, $$ \\mathbf{softmax}\\left (\\frac{\\mathbf{Q}\\mathbf{K}^T}{\\sqrt{d_q}} \\circ \\mathbf{M}\\right ),$$ where $\\circ$ denotes Hadamard product. 
Basically, while predicting an entity in the sequence, the attention scores of the future entities are set to zero in masked self-attention.\n\textbf{Multi-Head Attention:}\nIn order to encapsulate multiple complex relationships amongst different elements in the sequence, the multi-head attention comprises multiple self-attention blocks ($h=8$ in the original Transformer model ). Each block has its own set of learnable weight matrices $\{\mathbf{W}^{Q_i},\mathbf{W}^{K_i},\mathbf{W}^{V_i} \}$, where $i=0 \cdots (h{-}1)$. For an input $\mathbf{X}$, the output of the $h$ self-attention blocks in multi-head attention is then concatenated into a single matrix $[\mathbf{Z}_0,\mathbf{Z}_1,\cdots\mathbf{Z}_{h-1}] \in \mathbb{R}^{n\times h \cdot d_v}$ and projected onto a weight matrix $\mathbf{W} \in \mathbb{R}^{h \cdot d_v \times d}$ (Fig.~\ref{fig:transformer}, top row).\nThe main difference between self-attention and the convolution operation is that the filters are calculated dynamically, rather than being static (i.e., the same for any input) as in the case of convolution. Further, self-attention is invariant to permutations and changes in the number of input points. \sk{As a result, it can easily operate on irregular inputs, as opposed to standard convolution that requires a grid structure. Furthermore, it has been shown in the literature how self-attention (with positional encodings) is theoretically a more flexible operation which can model the behaviour of convolutional models towards encoding local features . Cordonnier \etal further studied the relationships between self-attention and convolution operations. Their empirical results confirm that multi-head self-attention (with sufficient parameters) is a more generic operation which can model the expressiveness of convolution as a special case.
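The single-head computation, its masked variant, and the multi-head extension described above can be sketched in a few lines of NumPy. This is a minimal illustration with toy dimensions (not the $d{=}512$, $h{=}8$ of the original model); note that it masks scores with a large negative value before the softmax, a common implementation whose effect matches zeroing the attention scores of future entities.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv, mask=None):
    # Z = softmax(Q K^T / sqrt(d_q)) V, as in the equation above
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    if mask is not None:
        # Forbid attending to future entities: masked positions get a very
        # negative score, so their post-softmax attention weights vanish.
        scores = np.where(mask, scores, -1e9)
    return softmax(scores) @ V

def multi_head_attention(X, heads, Wo):
    # Concatenate the h single-head outputs and apply the output projection W.
    Z = np.concatenate([self_attention(X, *h) for h in heads], axis=-1)
    return Z @ Wo

rng = np.random.default_rng(0)
n, d, h, dv = 4, 8, 2, 4   # toy sizes, chosen only for illustration
heads = [tuple(rng.normal(size=(d, dv)) for _ in range(3)) for _ in range(h)]
Wo = rng.normal(size=(h * dv, d))
X = rng.normal(size=(n, d))
causal = np.tril(np.ones((n, n), dtype=bool))  # attend to self and past only
Z = multi_head_attention(X, heads, Wo)         # output has shape (n, d)
```

With the causal mask, the output at position $i$ depends only on positions $\leq i$, which is exactly the property the decoder relies on during auto-regressive prediction.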
In fact, self-attention provides the capability to learn the global as well as local features, and provide expressivity to adaptively learn kernel weights as well as the receptive field (similar to deformable convolutions ).}", "id": "8addc277-b7f8-4948-b867-35bbbc14fb35", "level": "subsection", "origin_cites_number": 4, "parent_id": "22b21833-1042-464a-88ed-6aee9d50b135", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Foundations" ], [ "subsection", "\\sk{Self-Attention in Transformers" ] ], "subsections": [], "title": "\\sk{Self-Attention in Transformers" }, { "cite_extract_rate": 0.7096774193548381, "cites": [ 4830, 707, 2017, 126, 2506, 8569, 7585, 7360, 2502, 2503, 1272, 5767, 5768, 9141, 7, 732, 2554, 4769, 2589, 4099, 1254, 322 ], "content": "}\n\\label{sec: Self-supervision}\nSelf-attention based Transformer models generally operate in a two-stage training mechanism. First, pre-training is performed on a large-scale dataset (and sometimes a combination of several available datasets ) in either a supervised or a \\sk{self-supervised manner} . Later, the pre-trained weights are adapted to the down-stream tasks using small-mid scale datasets. Examples of downstream tasks include image classification , object detection , \\sk{zero-shot classification} , question-answering and action recognition . The effectiveness of pre-training for large-scale Transformers has been advocated in both the language and vision domains. For example, Vision Transformer model (ViT-L) experiences an absolute $13\\%$ drop in accuracy on ImageNet test set when trained only on ImageNet train set as compared to the case when pretrained on JFT dataset with 300 million images.\nSince acquiring manual labels at a massive scale is cumbersome, self-supervised learning has been very effectively used in the pre-training stage. 
The self-supervision based pre-training stage has played a crucial role in unleashing the scalability and generalization of Transformer networks, enabling the training of networks with even more than a \emph{trillion} parameters (e.g., the latest Switch Transformer from Google). \nAn extensive survey on SSL can be found in . As nicely summarized by Y.~LeCun , the basic idea of SSL is to \emph{fill in the blanks}, i.e., try to predict the occluded data in images, future or past frames in temporal video sequences, or solve a pretext task, \eg, predicting the amount of rotation applied to inputs, the permutation applied to image patches, or the color of a gray-scale image. Another effective way to impose self-supervised constraints is via contrastive learning. In this case, nuisance transformations are used to create two types of modified versions of the same image i.e., without changing the underlying class semantics (\eg, image stylizing, cropping) and with semantic changes (\eg, replacing an object with another in the same scene, or changing the class with minor adversarial changes to the image). Subsequently, the model is trained to be invariant to the nuisance transformations and to emphasize modeling minor changes that can alter semantic labels. \nSelf-supervised learning provides a promising learning paradigm since it enables learning from a vast amount of readily available non-annotated data. In the SSL based pre-training stage, a model is trained to learn a meaningful representation of the underlying data by solving a pretext task. The pseudo-labels for the pretext task are automatically generated (without requiring any expensive manual annotations) based on data attributes and the task definition. Therefore, the pretext task definition is a critical choice in SSL.
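As a concrete illustration of how pseudo-labels come "for free", the rotation-prediction pretext task mentioned above can be sketched as follows. This is a toy NumPy version; the batch contents and image shape are illustrative assumptions, and a real pipeline would feed the rotated images and labels to a classifier.

```python
import numpy as np

def rotation_pretext_batch(images, rng):
    # Rotate each image by a random multiple of 90 degrees; the rotation
    # index itself serves as an automatically generated pseudo-label.
    rotated, labels = [], []
    for img in images:
        k = int(rng.integers(0, 4))       # pseudo-label in {0, 1, 2, 3}
        rotated.append(np.rot90(img, k))  # rotate by k * 90 degrees
        labels.append(k)
    return np.stack(rotated), np.array(labels)

rng = np.random.default_rng(0)
batch = rng.normal(size=(8, 32, 32))      # toy batch of grayscale "images"
x, y = rotation_pretext_batch(batch, rng)
```

No manual annotation is needed at any point: the supervision signal is derived entirely from the transformation applied to the data.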
We can broadly categorize existing SSL methods based upon their pretext tasks into \\textbf{(a)} \\textit{generative} approaches which synthesize images or videos (given conditional inputs), \\textbf{(b)} \\textit{context-based} methods which exploit the relationships between image patches or video frames, and \\textbf{(c)} \\textit{cross-modal} methods which leverage from multiple data modalities. Examples of \\textit{generative} approaches include conditional generation tasks such as masked image modeling and image colorization , image super-resolution , image in-painting , and GANs based methods . The \\textit{context-based} pretext methods solve problems such as a jigsaw puzzle on image patches , masked object classification , predict geometric transformation such as rotation , or verify temporal sequence of video frames . Cross-modal pretext methods verify the correspondence of two input modalities \\eg, text \\& image , audio \\& video or RGB \\& flow .", "id": "bafe2504-4c44-4e7e-9f09-0cb1c7edd4d7", "level": "subsection", "origin_cites_number": 31, "parent_id": "22b21833-1042-464a-88ed-6aee9d50b135", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Foundations" ], [ "subsection", "\\sk{(Self) Supervised Pre-training" ] ], "subsections": [], "title": "\\sk{(Self) Supervised Pre-training" }, { "cite_extract_rate": 1, "cites": [ 97, 57, 38 ], "content": "\\label{sec: Transformer Model}\nThe architecture of the Transformer model proposed in is shown in Fig.~\\ref{fig:transformer}. It has an encoder-decoder structure. The encoder (\\emph{middle} row) consists of six identical blocks (i.e., $N{=}6$ in Fig.~\\ref{fig:transformer}), with each block having two sub-layers: a multi-head self-attention network, and a simple position-wise fully connected feed-forward network. Residual connections alongside layer normalization are employed after each block as in Fig.~\\ref{fig:transformer}. 
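The encoder block just described (a self-attention sub-layer followed by a position-wise feed-forward network, each wrapped in a residual connection with layer normalization) can be sketched in NumPy. This is a simplified, single-head version with toy dimensions, without multi-head projection or the learnable layer-norm gain/bias parameters.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each position's feature vector (no learnable gain/bias here).
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def encoder_block(X, Wq, Wk, Wv, W1, W2):
    # Sub-layer 1: (single-head) self-attention, then residual + layer norm.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1])) @ V
    X = layer_norm(X + A)
    # Sub-layer 2: position-wise feed-forward network (ReLU), residual + norm.
    F = np.maximum(0.0, X @ W1) @ W2
    return layer_norm(X + F)

rng = np.random.default_rng(0)
n, d, dff = 5, 16, 32   # toy sizes; the original model uses d = 512
shapes = [(d, d), (d, d), (d, d), (d, dff), (dff, d)]
params = [rng.normal(size=s) * 0.1 for s in shapes]
Y = encoder_block(rng.normal(size=(n, d)), *params)
```

Keeping the output dimension equal to the input dimension $d$, as in the code above, is what allows the residual additions and lets $N$ such blocks be stacked.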
Note that, different from regular convolutional networks where feature aggregation and feature transformation are simultaneously performed (\\eg, with a convolution layer followed by a non-linearity), these two steps are decoupled in the Transformer model i.e., self-attention layer only performs aggregation while the feed-forward layer performs transformation.\nSimilar to the encoder, the decoder (\\emph{bottom} row) in the Transformer model comprises six identical blocks. Each decoder block has three sub-layers, first two (multi-head self-attention, and feed-forward) are similar to the encoder, while the third sub-layer performs multi-head attention on the outputs of the corresponding encoder block, as shown in Fig.~\\ref{fig:transformer}. \nThe original Transformer model in was trained for the Machine Translation task. The input to the encoder is a sequence of words (sentence) in one language. \\textbf{Positional encodings} are added to the input sequence to capture the relative position of each word in the sequence. Positional encodings have the same dimensions as the input $d=512$, and can be learned or pre-defined \\eg, by sine or cosine functions. Being an auto-regressive model, the decoder of the Transformer uses previous predictions to output the next word in the sequence. The decoder, therefore, takes inputs from the encoder as well as the previous outputs to predict the next word of the sentence in the translated language. To facilitate residual connections the output dimensions of all layers are kept the same i.e., $d=512$. The dimensions of query, key and value weight matrices in multi-head attention are set to $d_q=64, d_k=64, d_v=64$. \n\\begin{figure*}[htp]\n \\centering\n \\includegraphics[trim=0mm 4.5cm 0mm 0mm, clip,width=0.85\\textwidth]{Figs/taxonomy.png}\n \\caption{\\small \\sk{\\emph{A taxonomy of self-attention design space}. Existing approaches based on self-attention explore single-head or multi-head (transformer) designs for vision tasks. 
We note that interesting efforts have been made to utilize knowledge from convolution based architectures to improve ViTs (e.g., multi-scale and hybrid designs). We categorize the upcoming sections of this survey according to the types of self-attention block (\emph{left tree diagram}) as well as the prominent tasks in computer vision (\emph{right}). }}\n \vspace{-0.5cm}\n \label{fig:taxonomy_of_attention}\n \end{figure*}", "id": "d03e9ab4-05dc-48a8-93fd-d9915a7c4cd2", "level": "subsection", "origin_cites_number": 3, "parent_id": "22b21833-1042-464a-88ed-6aee9d50b135", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Foundations" ], [ "subsection", "Transformer Model" ] ], "subsections": [], "title": "Transformer Model" }, { "cite_extract_rate": 1, "cites": [ 7, 38 ], "content": "\label{sec: The Bidirectional Representations}\nThe original Transformer model was trained such that it could only attend to the context on the left of a given word in the sentence. This is limiting, since for most language tasks, contextual information from both left and right sides is important. Bidirectional Encoder Representations from\nTransformers (BERT) proposed to jointly encode the right and left context of a word in a sentence, thus improving the learned feature representations for textual data in a self-supervised manner. To this end, BERT introduced two pretext tasks to pre-train the Transformer model in a self-supervised manner: \textit{Masked Language Model} and \textit{Next Sentence Prediction}. For adapting the pre-trained model for downstream tasks, a task-specific additional output module is appended to the pre-trained model, and the full model is fine-tuned end-to-end. \nHere, we briefly touch upon the pretext tasks. \textbf{(1) Masked Language Model (MLM) -} A fixed percentage (15\%) of words in a sentence are randomly masked and the model is trained to predict these masked words using cross-entropy loss.
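The MLM input corruption just described can be sketched as below. This is a simplified illustration: BERT additionally keeps some of the selected tokens unchanged or replaces them with random tokens, and operates on subword ids rather than whole words; both details are omitted here, and the example sentence is purely illustrative.

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=1):
    # Randomly select positions with probability mask_prob, record the
    # original token as the prediction target, and replace it with [MASK].
    rng = random.Random(seed)
    corrupted, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok      # ground truth for the cross-entropy loss
            corrupted[i] = MASK
    return corrupted, targets

sent = "the model learns bidirectional context from unlabeled text".split()
x, y = mask_tokens(sent)
```

The model sees the corrupted sequence `x` and is trained to recover the entries of `y`; because the unmasked words on both sides remain visible, solving this task forces the encoding to be bidirectional.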
In predicting the masked words, the model learns to incorporate the bidirectional context. \n\textbf{(2) Next Sentence Prediction (NSP) -} Given a pair of sentences, the model predicts a binary label i.e., whether the pair is valid from the original document or not. The training data for this can easily be generated from any monolingual text corpus. A pair of sentences \textit{A} and \textit{B} is formed, such that \textit{B} is the actual sentence (next to \textit{A}) 50\% of the time, and \textit{B} is a random sentence for the other 50\% of the time. NSP enables the model to capture sentence-to-sentence relationships which are crucial in many language modeling tasks such as Question Answering and Natural Language Inference.", "id": "5a19d5ec-193b-41aa-95bc-7dc9f74b1ec5", "level": "subsection", "origin_cites_number": 2, "parent_id": "22b21833-1042-464a-88ed-6aee9d50b135", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Foundations" ], [ "subsection", "Bidirectional Representations" ] ], "subsections": [], "title": "Bidirectional Representations" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:transformers_vision}\n\mh{We broadly categorize vision models with self-attention into two categories: the models which use single-head self-attention (Sec.~\ref{Single-head Self-Attention}), and the models which employ multi-head self-attention based Transformer modules in their architectures (Sec.~\ref{Multi-head Self-Attention (Transformers)}). Below, we first discuss the single-head self-attention based frameworks, which generally apply global or local self-attention within CNN architectures, or utilize matrix factorization to enhance design efficiency and use vectorized attention models.
We then discuss the Transformer-based vision architectures in Sec.~\\ref{Multi-head Self-Attention (Transformers)}.}", "id": "8b8a30a5-c1dd-4de8-aa81-91ad0d710a62", "level": "section", "origin_cites_number": 0, "parent_id": "1d8963e3-819a-45e0-946b-089d3b36ab2c", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Self-Attention \\& Transformers in Vision" ] ], "subsections": [ "6299ef7b-f295-463e-b682-8f9ee4944d05", "cec835b9-3f57-4319-bd05-4f67b80d7e5f", "ade536bb-4351-42af-9ee0-e21de4214d88", "51684b1c-5ca2-4f16-b99f-f58e6926d8cd", "c0eec24a-6618-4f6e-aa34-27e7a3d0deaa", "33f59639-9b4c-4ffa-990e-931e1bc2af21", "67fab8f1-33a1-4a5f-83e6-06ff7fb57eb2", "957dac38-5489-46b3-9df4-133419b7eac6", "e6d25a6b-52e4-4b62-9b01-052d8aa597d3", "08ba35f6-edfe-4960-acd5-0bf78d42a74c", "032a8f47-5a0c-4f25-b3c2-14219a9fbe8f" ], "title": "Self-Attention \\& Transformers in Vision" }, { "cite_extract_rate": 0, "cites": [], "content": "}\\label{Single-head Self-Attention}", "id": "6299ef7b-f295-463e-b682-8f9ee4944d05", "level": "subsection", "origin_cites_number": 0, "parent_id": "8b8a30a5-c1dd-4de8-aa81-91ad0d710a62", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Self-Attention \\& Transformers in Vision" ], [ "subsection", "\\sk{Single-head Self-Attention" ] ], "subsections": [ "29bb8935-0c57-4b9b-a572-23e277f28eb5", "a1e38fed-8534-41c0-b86a-792f801df402" ], "title": "\\sk{Single-head Self-Attention" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 5769, 2576, 1283, 486, 7054, 5770, 1733, 7995 ], "content": "Inspired by non-local means operation which was mainly designed for image denoising, \nWang \\etal proposed a differentiable non-local operation for deep neural networks to capture long-range dependencies both in space and time in a feed-forward fashion. Given a feature map, their proposed operator computes the response at a position as a weighted sum of the features at all positions in the feature map. 
This way, the non-local operation is able to capture interactions between any two positions in the feature map regardless of the distance between them. Video classification is an example of a task where long-range interactions between pixels exist both in space and time. Equipped with the capability to model long-range interactions, non-local deep neural networks were shown to achieve more accurate video classification on the Kinetics dataset .\nAlthough self-attention allows us to model full-image contextual information, it is both memory and compute intensive. As shown in Fig.~\ref{fig:ccnet}(a), in order to encode global context for a given pixel location, the non-local block~ computes a \emph{dense} attention map (in green). The non-local block~ has a high complexity of $\mathcal{O}(N^2)$, where $N$ denotes the number of input feature maps. \nTo reduce this computational burden, Huang \etal propose the criss-cross attention module that, for each pixel position, generates a \emph{sparse} attention map only on the criss-cross path, as illustrated in Fig.~\ref{fig:ccnet}(b). Further, by applying criss-cross attention recurrently, each pixel position can capture context from all other pixels. Compared to the non-local block, criss-cross attention uses 11$\times$ less GPU memory, and has a complexity of $\mathcal{O}(2\sqrt{N})$.
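A minimal NumPy sketch of the dense all-pairs aggregation that criss-cross attention sparsifies is given below. It keeps only the O(N^2) structure of the non-local operation and omits, as a simplifying assumption, the learnable embedding functions and the residual connection of the actual non-local block.

```python
import numpy as np

def non_local(X):
    # Response at each position = attention-weighted sum of features at ALL
    # positions; materializing the dense N x N map is what makes this O(N^2).
    scores = X @ X.T                                   # pairwise dot products
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)                      # softmax normalization
    return w @ X

H, W, C = 8, 8, 4
rng = np.random.default_rng(0)
fmap = rng.normal(size=(H * W, C))   # feature map flattened to N = H*W positions
out = non_local(fmap)                # same shape, globally aggregated features
```

Since every output position mixes all N input positions, the attention map costs N x N entries per image; criss-cross attention instead restricts each position to its row and column, trading a single dense pass for cheaper recurrent sparse passes.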
\nState-of-the-art results are reported for the semantic and instance segmentation tasks on several benchmark datasets including Cityscapes~, ADE20K~, COCO~, LIP~ and CamVid~.\n\\begin{figure}[tp]\n\\centering\n \\begin{subfigure}[t]{0.48\\columnwidth}\n \\includegraphics[width=\\textwidth]{Figs/ccnet_a.png}\n \\caption{\\small Non-local block~}\n \\label{}\n \\end{subfigure}\\hspace{0.03cm}\n \\begin{subfigure}[t]{0.48\\columnwidth}\n \\includegraphics[width=\\textwidth]{Figs/iccnet_b.png}\n \\caption{\\small Criss-cross attention~ }\n \\label{fig:dalle img_comp}\n \\end{subfigure}\n\\caption{\\small Comparison of two different self-attention approaches: Non-local self-attention block~ and Criss-cross self-attention module . Figure is from .}\n\\label{fig:ccnet}\n\\end{figure}\nAnother shortcoming of the convolutional operator comes from the fact that after training, it applies fixed weights regardless of any changes to the visual input. Hu \\etal proposed local relation networks to adaptively compose pixels in a local window. They introduced a new differentiable layer \nthat adapts its weight aggregation based on the compositional relations (similarity) between pixels/features within a local window. Such adaptive weight aggregation introduces geometric priors into the network which are important for the recognition tasks . Convolution is considered to be a top-down operator as it remains fixed across positions while a non-local operation such as introduced in is a bottom-up method as it aggregates input features over the full image. The local relation layer belongs to the category of bottom-up methods but it is restricted to a fixed window size \\eg, 7x7 neighborhood.\nBello \\etal~ explore the possibility of employing self-attention as an alternative to convolutional operators. They employ the relative position encoding~ in two dimensions to develop a new self-attention mechanism that maintains translation equivariance, a desirable property for handling images. 
Although this self-attention provides competitive results as a stand-alone computational primitive, the best performance is obtained in combination with the convolutional operations. Authors show that attention augmentation leads to systematic performance gains in image classification and object detection for different architectures.", "id": "29bb8935-0c57-4b9b-a572-23e277f28eb5", "level": "subsubsection", "origin_cites_number": 12, "parent_id": "6299ef7b-f295-463e-b682-8f9ee4944d05", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Self-Attention \\& Transformers in Vision" ], [ "subsection", "\\sk{Single-head Self-Attention" ], [ "subsubsection", "Self-Attention in CNNs" ] ], "subsections": [], "title": "Self-Attention in CNNs" }, { "cite_extract_rate": 1, "cites": [ 7996, 314, 732, 5771, 5770, 4738, 38, 5772 ], "content": "As discussed above, convolutional layers possess translation equivariance but can not scale with a large receptive field, therefore can not capture long-range interactions . On the other hand, global attention which attend to all spatial locations of the input can be computationally intensive and is preferred on down-sampled small images, image patches or augmenting the convolutional features space . Ramachandran \\etal proposed to replace convolutional layers in deep neural networks with a local self-attention layer which can be applied to small or large inputs without increasing the computational cost. At a basic level, the proposed self-attention layer considers all pixel positions in a specific window size around a given pixel, compute queries, keys and value vectors for these pixels, and then aggregates the spatial information within this window. The value vectors are aggregated after projecting the softmax score of queries and keys. This process is repeated for all given pixels and the response is concatenated to produce the output pixel. 
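The local self-attention procedure just described can be sketched on a 1D toy sequence. The actual layer operates on 2D pixel windows; the 1D form, the window size, the truncation at sequence borders, and the single-head setting are all simplifying assumptions made here for brevity.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def local_self_attention(X, Wq, Wk, Wv, window=3):
    # For every position: query from that position, keys/values from a small
    # surrounding window, then aggregate values with softmax-normalized scores.
    n, r = X.shape[0], window // 2
    out = np.zeros((n, Wv.shape[1]))
    for i in range(n):
        nb = X[max(0, i - r): i + r + 1]   # local neighborhood (truncated at borders)
        q, K, V = X[i] @ Wq, nb @ Wk, nb @ Wv
        out[i] = softmax(K @ q / np.sqrt(q.shape[0])) @ V
    return out

rng = np.random.default_rng(0)
d = 6
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
X = rng.normal(size=(10, d))
Y = local_self_attention(X, Wq, Wk, Wv)
```

Because the window is fixed, the cost per position depends on the window size rather than the full input size, which is what lets this primitive scale to large inputs without the quadratic cost of global attention.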
ResNet models with local self-attention layers can perform competitively on ImageNet classification and COCO object detection with fewer parameters than ResNet models based on convolutional layers .
Zhao \etal note that a traditional convolution operator performs feature aggregation and transformation jointly (by applying a filter and then passing it through a non-linearity). In contrast, they propose to perform feature aggregation separately with self-attention, followed by transformation using an element-wise perceptron layer. 
For feature aggregation, they propose two alternate strategies: (a) pairwise self-attention and (b) patch-wise self-attention. The pairwise self-attention is a permutation and cardinality invariant operation, while the patch-wise self-attention does not have such invariance properties (similar to convolution). Both pairwise and patch-wise self-attentions are implemented as a \emph{vector} attention that learns weights for both the spatial and channel dimensions. This provides an alternate approach to attention, which is conventionally performed using scalar weights (by taking a dot-product). The pairwise self-attention is a set operator that computes a \emph{vector attention} keeping in view the relationships of a particular feature with its neighbors in a given local neighborhood. In contrast, patch-wise self-attention is a generalization of the convolution operator (not a set operator) and looks at all the feature vectors in the local neighbourhood when deriving the attention vectors. The authors show that with considerably fewer parameters, self-attention networks (SAN) can beat ResNet baselines on the ImageNet dataset. They further show robustness against adversarial perturbations and generalization to unseen transformations . 
This behaviour is due to the dynamic nature of attention, which makes it difficult for the adversary to calculate useful fooling directions.
}\label{Multi-head Self-Attention (Transformers)}
\mh{Unlike the approaches discussed in Sec.~\ref{Single-head Self-Attention}, which insert self-attention as a component in CNN-inspired architectures, Vision Transformers (ViTs) adapt the architecture of (see Fig.~\ref{fig:transformer}), which cascades multiple Transformer layers. ViTs have gained significant research attention, and a number of recent approaches have been proposed which build upon ViTs. 
Below, we discuss these methods by categorizing them into: uniform-scale ViTs having single-scale features through all layers (Sec.~\ref{Uniform-scale Vision Transformers}), multi-scale ViTs that learn hierarchical features which are more suitable for dense prediction tasks (Sec.~\ref{Hierarchical Multi-Stage ViTs for Dense Prediction}), and hybrid designs having convolution operations within ViTs (Sec.~\ref{Hybrid ViTs with Convolutions}).}
} \label{Uniform-scale Vision Transformers}
\sk{The original Vision Transformer model belongs to this family: the multi-head self-attention is applied at a single, consistent spatial scale that is maintained throughout the network hierarchy. We refer to such models as uniform-scale ViTs, as described below.}
Vision Transformer (ViT) (Fig.~\ref{fig:vision_transformer}) is the first work to showcase how Transformers can `altogether' replace standard convolutions in deep neural networks on large-scale image datasets. They applied the original Transformer model (with minimal changes) to a sequence of image `patches' flattened into vectors. 
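This tokenization step can be sketched in a few lines; the patch size, embedding width, and random projection weights below are illustrative stand-ins for the learned parameters (positional embeddings are omitted):

```python
import numpy as np

def patchify(img, p=16):
    """Split an (H, W, C) image into non-overlapping p x p patches and
    flatten each into a vector of length p*p*C, as in ViT tokenization."""
    H, W, C = img.shape
    assert H % p == 0 and W % p == 0
    patches = img.reshape(H // p, p, W // p, p, C).swapaxes(1, 2)
    return patches.reshape(-1, p * p * C)            # (num_patches, p*p*C)

# Linearly project flattened patches to the model dimension and prepend a
# class token (random weights here stand in for learned parameters).
rng = np.random.default_rng(0)
img = rng.standard_normal((224, 224, 3))
tokens = patchify(img) @ rng.standard_normal((16 * 16 * 3, 64))  # (196, 64)
cls = rng.standard_normal((1, 64))
sequence = np.concatenate([cls, tokens])             # (197, 64) Transformer input
```

A $224\times224$ image with $16\times16$ patches thus yields $196$ tokens plus the class token, whose final representation is used for classification.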
The model was pre-trained on a large proprietary dataset (the JFT dataset with 300 million images) and then fine-tuned to downstream recognition benchmarks \eg, ImageNet classification. This is an important step, since pre-training ViT on a medium-sized dataset would not give competitive results, because CNNs encode prior knowledge about images (inductive biases \eg, translation equivariance) that reduces the need for data compared to Transformers, which must discover such information from very large-scale data. Notably, compared to the iGPT model that also applied Transformers to full-sized images but performs training as a generative task, ViT pre-trains the model with a supervised classification task (although a self-supervision variant is also explored, which results in lower performance). 
 \begin{figure}[]
 \centering
 \includegraphics[width=0.95\columnwidth]{Figs/VisionTransformer.pdf}
 \caption{\small An overview of Vision Transformer (on the \emph{left}) and the details of the Transformer encoder (on the \emph{right}). The architecture resembles Transformers used in the NLP domain and the image patches are simply fed to the model after flattening. After training, the feature obtained from the first token position is used for classification. Image obtained from . }
 \label{fig:vision_transformer}
 \end{figure}
The DeiT is the first work to demonstrate that Transformers can be learned on mid-sized datasets (i.e., 1.2 million ImageNet examples compared to 300 million images of JFT used in ViT ) in relatively shorter training episodes. 
Besides using augmentation and regularization procedures common in CNNs, the main contribution of DeiT is a novel native distillation approach for Transformers which uses a CNN as a teacher model (RegNetY-16GF ) to train the Transformer model. The outputs from the CNN aid the Transformer in efficiently figuring out useful representations for input images. 
A distillation token is appended to the input patch embeddings and the class token. The self-attention layers operate on these tokens to learn their inter-dependencies and output the learned class, patch, and distillation tokens. The network is trained with a cross-entropy loss defined on the output class token and a distillation loss that matches the distillation token with the teacher output. Both \emph{soft} and \emph{hard} label choices were explored for distillation, where the hard distillation was found to perform better. Interestingly, the learned class and distillation tokens do not exhibit a high correlation, indicating their complementary nature. The learned representations compare favorably against top-performing CNN architectures such as EfficientNet and also generalize well to a number of downstream recognition tasks.
Token to Token (T2T) ViT recursively combines neighboring tokens into a single token to reduce the token length and aggregate spatial context. Transformer in Transformer computes attention at two levels: patch-level (as done in standard ViTs ) and local sub-patch-level (\eg, by subdividing a $16\times16$ patch into four $4\times4$ blocks, and computing attention amongst these blocks). 
In token labelling ViT , all patch tokens contribute towards the loss calculation, different from regular ViTs that only use the classification token in the loss. This process includes auxiliary supervision where each image patch (token) is labeled using a pre-trained CNN model. Similar to CutMix augmentation , tokens from different images are mixed as an augmentation strategy, and the model is trained using the standard classification loss and an auxiliary token-label loss. Their model demonstrates excellent performance, especially for smaller-sized models.
The quadratic complexity of self-attention hinders its applicability to longer sequences (high-resolution images). 
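To make this cost concrete: standard attention builds an $n\times n$ token-token map, whereas attending across feature channels (the idea behind the cross-covariance attention discussed next) builds only a $d\times d$ map that is independent of the sequence length. A hedged NumPy sketch of the two variants, omitting the $\ell_2$-normalization of queries/keys and the learned temperature used in practice:

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def token_attention(q, k, v):
    """Standard attention: the attention map is (n, n) -- quadratic in tokens."""
    a = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)
    return a @ v

def channel_attention(q, k, v, tau=1.0):
    """Cross-covariance-style attention: the map K^T Q is (d, d), so the
    cost is linear in the number of tokens n. q, k, v: shape (n, d)."""
    a = softmax(k.T @ q / tau, axis=0)   # (d, d), independent of n
    return v @ a                         # (n, d)
```

Doubling the number of tokens quadruples the size of the map in `token_attention` but leaves it unchanged in `channel_attention`.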
Cross-Covariance Image Transformers (XCiT) incorporate attention across feature channels instead of tokens, i.e., their cross-covariance attention is given by $\mathbf{V}\, \mathbf{softmax}\left(\frac{\mathbf{K}^T \mathbf{Q}}{\sqrt{\tau}}\right)$. The proposed cross-covariance attention has linear complexity in the number of tokens (since the attention map depends on the feature dimension instead of the number of tokens). XCiT can therefore handle large-resolution images and demonstrates excellent performance across different vision tasks, i.e., self-supervised and fully supervised image classification and dense prediction (detection, segmentation). DeepViT observes that the similarity between the attention maps of deeper layers is high, which hinders scaling the model depth. They propose to re-attend the attention maps in a multi-head block, instead of simply aggregating these attention maps, and show consistent gains over standard multi-head self-attention based ViTs.
\label{Hierarchical Multi-Stage ViTs for Dense Prediction}
In standard ViTs, the number of tokens and the token feature dimension are kept fixed throughout different blocks of the network. This is limiting, since the model is unable to capture fine spatial details at different scales. Initial Transformer-based dense prediction methods (e.g., DETR ) therefore have a convolutional backend. 
Multi-stage hierarchical designs for ViTs, where the number of tokens is gradually reduced while the token feature dimension is progressively increased, have been shown to produce effective features for dense prediction tasks . These models generally also perform well on recognition tasks. These architectures mostly sparsify tokens by merging neighboring tokens and projecting them to a higher-dimensional feature space. Examples of multi-stage ViTs include Pyramid ViT , Twins , CoaT , Swin Transformer , Convolutional vision Transformer (CvT) , Shuffle Transformer , CrossFormer , RegionViT and Focal Transformer models . Some of them are hybrid designs (with both convolution and self-attention operations, see Sec.~\ref{Hybrid ViTs with Convolutions}), while others only employ a pure self-attention based design (discussed next).
Pyramid ViT (PVT) is the first hierarchical design for ViT, and proposes a progressive shrinking pyramid and spatial-reduction attention. PVTv2 and SegFormer improve the original PVT by introducing overlapping patch embedding, depth-wise convolution, and efficient attention. Swin Transformer has a multi-stage hierarchical architecture which computes attention within a local window, by partitioning the window into multiple sub-patches. To capture interactions between different windows (image locations), the window partitioning is gradually shifted, along the hierarchy of the network, to capture overlapping regions. 
The Focal Transformer is another hierarchical design, where focal self-attention is introduced to simultaneously capture global and local relationships.
Similarly, CrossFormer has a hierarchical pyramid structure, and introduces a cross-scale embedding module, along with long short distance attention and dynamic position bias to faithfully capture both local and global visual cues. RegionViT proposes regional-to-local attention to encode hierarchical features. 
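The window partitioning used by Swin-style models above amounts to a pair of reshapes; a hedged sketch (shapes are illustrative, and the shifted variant can be emulated by cyclically rolling the grid before partitioning):

```python
import numpy as np

def window_partition(x, w):
    """(H, W, d) token grid -> (num_windows, w*w, d) for per-window attention."""
    H, W, d = x.shape
    x = x.reshape(H // w, w, W // w, w, d).swapaxes(1, 2)
    return x.reshape(-1, w * w, d)

def window_reverse(wins, w, H, W):
    """Inverse of window_partition: stitch windows back into the token grid."""
    d = wins.shape[-1]
    x = wins.reshape(H // w, W // w, w, w, d).swapaxes(1, 2)
    return x.reshape(H, W, d)

# A shifted-window pass (as in Swin) can be emulated by rolling the grid
# before partitioning, e.g.: np.roll(x, shift=(-w // 2, -w // 2), axes=(0, 1)).
```

Attention is then computed independently inside each `(w*w, d)` window, so the cost grows linearly with the number of windows rather than quadratically with the full token count.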
Multi-Scale Vision Longformer also considers a local context in self-attention, but employs the efficient Longformer design for self-attention. CrossViT encodes multi-scale features with two branches (each with multiple transformer blocks), by separately processing smaller and larger image patches. The information from these two multi-scale branches is then fused together using a cross-attention module.
\label{Hybrid ViTs with Convolutions}
Convolutions do an excellent job at capturing low-level local features in images, and have been explored in multiple hybrid ViT designs, especially at the beginning to ``patchify and tokenize'' an input image. For example, Convolutional vision Transformer (CvT) incorporates a convolution-based projection to capture the spatial structure and low-level details for tokenization of image patches. CvT has a hierarchical design, where the number of tokens is progressively reduced while the token width is increased, thus imitating the impact of spatial downsampling as in CNNs. Convolution enhanced image Transformers employ a convolution-based image-to-token module to extract low-level features. Compact Convolutional Transformer (CCT) introduces a new sequence pooling scheme, and incorporates convolutional blocks (conv-pool-reshape) for tokenization. 
CCT can be trained from scratch on smaller datasets, \eg, CIFAR10 with $\sim95\%$ accuracy, which is a remarkable property not possible with the traditional ViTs. 
LocalViT introduces depthwise convolutions to enhance the local feature modeling capability of ViTs. LeViT (name inspired by LeNet ) applies a four-layered CNN block (with $3\times3$ convolutions) at the beginning with progressively increasing channels (3, 32, 64, 128, 256). For a $3\times224\times224$ input image, the resulting $256\times14\times14$ output from the CNN block becomes the input to a hierarchical ViT. By virtue of its design, LeViT is $5\times$ faster than EfficientNet on CPU at inference. ResT is another hierarchical architecture which applies a CNN block at the beginning for patch embedding. It incorporates depth-wise convolutions and adaptive position encoding to tackle varying image sizes. A recent approach, NesT , proposes a simple technique to introduce hierarchy in ViTs. NesT divides an image into non-overlapping blocks (each block is further split into patches). It first separately applies local self-attention on patches within each block, and then enables global interaction between blocks by aggregating them into an image space and applying a convolution operation, followed by downsampling. The number of blocks is gradually reduced along the hierarchy of the model, while the number of local patches is kept fixed. This simple scheme performs favorably compared with more sophisticated designs , and enables training NesT on smaller datasets (e.g., CIFAR-10) from scratch.
Depthwise Convolution and self-Attention Networks (CoAtNets) introduce a relative attention module (which combines depthwise convolutions and self-attention), and vertically stack convolution and attention layers. CoAtNets demonstrate an impressive $86\%$ ImageNet top-1 accuracy without extra data (i.e., trained only on ImageNet-1k). 
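A recurring downsampling step in the hierarchical and hybrid designs above is merging neighboring tokens. A common scheme concatenates each $2\times2$ token group channel-wise and linearly projects it, halving spatial resolution while widening features; the random weights below stand in for learned ones:

```python
import numpy as np

def merge_tokens(x, proj):
    """Merge each 2x2 group of tokens: (H, W, d) -> (H/2, W/2, d_out).

    Concatenates the four neighbors channel-wise (4*d features) and applies
    a linear projection `proj` of shape (4*d, d_out).
    """
    H, W, d = x.shape
    g = x.reshape(H // 2, 2, W // 2, 2, d).swapaxes(1, 2).reshape(H // 2, W // 2, 4 * d)
    return g @ proj

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 32))                    # 64 tokens, width 32
y = merge_tokens(x, rng.standard_normal((128, 64)))    # 16 tokens, width 64
```

Each stage of such a pyramid thus trades token count for token width, mirroring the spatial downsampling of CNN feature hierarchies.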
Shuffle Transformer performs self-attention within a window and has depth-wise convolutions between the window-based multi-head self-attention and the MLP. It introduces a shuffle operation to build stronger cross-patch connections. 
Co-scale conv-attentional image Transformers (CoaT) is a hybrid hierarchical pyramid design with serial and parallel blocks, where the serial block is similar to a standard transformer block except that the attention layer is replaced with depthwise convolution. The parallel block is applied on the output of the serial blocks and encodes relationships between tokens at multiple scales using cross-attention. 
Twins builds upon PVT (an attention-only pyramid design) by replacing the absolute position embedding in PVT with relative conditional position embedding , and incorporating separable depth-wise convolutions instead of the standard spatial attention, to capture the local and global context of the image. In this sense, the hybrid designs tend to combine the strengths of both convolution and transformer models. TransCNN proposes a hierarchical multi-head self-attention block, which first learns interactions within small grids (tokens) using self-attention, and then gradually merges the smaller grids into larger grids. 
The proposed block can then be plugged into existing CNN architectures.
Contrastive learning based self-supervised approaches, which have achieved significant success for CNN based vision tasks, have also been investigated for ViTs. Chen \etal evaluate different self-supervised frameworks and propose practical strategies, including MoCo v3 (extended from v1/v2 ), for the stabilized training of self-supervised ViTs. Xie \etal combine MoCo v2 and BYOL to train DeiT and SwinTransformer . They demonstrate the generalization of the self-supervised SwinTransformer to the dense prediction tasks of detection and segmentation. Self-distillation with no labels (DINO) demonstrates that self-supervised ViTs can automatically segment the background pixels of an image, even though they were never trained using pixel-level supervision, a phenomenon otherwise not observed in CNNs or fully supervised ViTs. Efficient self-supervised vision transformer (EsViT) proposes a multi-stage design, where neighboring tokens are gradually merged along the hierarchy of the network, and uses DINO for self-supervision. Apart from the standard image-level self-supervision as in DINO, it incorporates additional patch-level self-supervision in which correspondence is promoted between similar patches within augmented versions of an image. 
EsViT demonstrates excellent performance under self-supervision settings, and its off-the-shelf features transfer better than those of the supervised SwinTransformer on 17 out of 18 evaluated datasets.
\mh{Transformer-based modules have been used for object detection in the following ways: (a) Transformer backbones for feature extraction, with an R-CNN based head for detection (see Sec.~\ref{Hierarchical Multi-Stage ViTs for Dense Prediction}), (b) a CNN backbone for visual features and a Transformer based decoder for object detection (see Sec.~\ref{Detection Transformers with CNN Backbone}), and (c) a purely transformer based design for end-to-end object detection (see Sec.~\ref{Detection with Pure Transformers}). 
}", "id": "ade536bb-4351-42af-9ee0-e21de4214d88", "level": "subsection", "origin_cites_number": 5, "parent_id": "8b8a30a5-c1dd-4de8-aa81-91ad0d710a62", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Self-Attention \\& Transformers in Vision" ], [ "subsection", "Transformers for Object Detection" ] ], "subsections": [ "ea297d26-2961-4519-a195-2320380cdac8", "1091b51c-d3e3-4fe1-8302-9eb481e687bb" ], "title": "Transformers for Object Detection" }, { "cite_extract_rate": 0.923076923076923, "cites": [ 206, 7360, 5779, 4812, 799, 209, 7373, 802, 8429, 520, 722, 38 ], "content": "\\label{Detection Transformers with CNN Backbone}\nDetection Transformer (DETR) treats object detection as a set prediction task i.e., given a set of image features, the objective is to predict the set of object bounding boxes. The Transformer model enables the prediction of a set of objects (in a single shot) and also allows modeling their relationships. DETR adapts a set loss function which allows bipartite matching between predictions and ground-truth boxes. The main advantage of DETR is that it removes the dependence on hand-crafted modules and operations, such as the RPN (region proposal network) and NMS (non-maximal suppression) commonly used in object detection . In this manner, the dependence on prior knowledge and careful engineering design is relaxed for complex structured tasks like object detection. \n\\begin{figure}[]\n \\centering\n \\includegraphics[width=1\\columnwidth]{Figs/DETR.png}\n \\vspace{-0.3cm}\n \\caption{\\small Detection Transformer (DETR) treats the object detection task as a set prediction problem and uses the Transformer network to encode relationships between set elements. A bipartite set loss is used to uniquely match the box predictions with the ground-truth boxes (shown on the \\emph{right} two columns). In case of no match, a '\\emph{no object}' class prediction is selected. 
Its simple design with minimal problem-specific modifications can beat a carefully built and popular Faster R-CNN model. Figure from .}
 \label{fig:detr}
\end{figure}
Given spatial feature maps from the CNN backbone, the encoder first flattens the spatial dimensions (see Fig.~\ref{fig:detr}). This gives a sequence of features $d\times n$, where $d$ is the feature dimension and $n = h\times w$ with $h, w$ being the height and width of the spatial feature maps. These features are then encoded and decoded using multi-head self-attention modules as in . The main difference in the decoding stage is that all boxes are predicted in parallel, while uses an RNN to predict sequence elements one by one. Since the encoder and decoder are permutation invariant, learned positional encodings are used as the object queries by the decoder to generate different boxes. Note that the spatial structure in a CNN detector (e.g., Faster R-CNN) automatically encodes the positional information. DETR obtains performance comparable to the popular Faster R-CNN model , which is an impressive feat given its simple design. DETR has also been extended to interesting applications in other domains, e.g., Cell-DETR extends it for instance segmentation of biological cells. A dedicated attention branch is added to obtain instance-wise segmentations in addition to the box predictions, which are enhanced with a CNN decoder to generate accurate instance masks.
The DETR model successfully combines convolutional networks with Transformers to remove hand-crafted design requirements and achieves an end-to-end trainable object detection pipeline. However, it struggles to detect small objects and suffers from slow convergence and a relatively high computational cost . DETR maps images to feature space before using the Transformer for the relation modeling. 
Thus, the computational cost of self-attention grows quadratically with the spatial size of the feature map, i.e., $\mathcal{O}(H^2W^2C)$, where $H$ and $W$ represent the height and width of the feature map. This inherently puts a limitation on the use of multi-scale hierarchical features in the DETR training framework, which are ultimately important to detect small objects. Furthermore, at the beginning of training, the attention module simply projects uniform attention to all the locations of the feature map and requires a large number of training epochs to tune the attention weights so that they converge to meaningfully sparse locations. This contributes to the slow convergence rate of DETR. To mitigate the above-mentioned issues, proposed a deformable attention module to process the feature maps. Inspired by deformable convolutions , the deformable attention module only attends to a sparse set of elements from the whole feature map regardless of its spatial size. This further allows cross-scale aggregation of feature maps with the help of multi-scale attention modules without increasing the computational cost significantly. Deformable DETR not only performs better but its training time also remains 10$\times$ lower than the original DETR model . 
\mh{Anchor DETR replaces the learnable query tokens in with anchor-point based queries, such that each query focuses on predicting the object near the anchor point. The anchor points can be fixed on a 2D grid, or learned from uniformly distributed points. Anchor DETR requires 10$\times$ fewer training epochs with comparable performance.}
\mh{Pix2Seq is a generic Transformer-based framework, without any specialized task-specific modules, which learns to directly produce a sequence of tokens with object descriptions (bounding-boxes and class-labels). A quantization and serialization scheme first converts bounding boxes and class-labels into a sequence of discrete tokens. 
A generic Transformer based encoder-decoder network is then used to generate these tokens in an auto-regressive manner conditioned on previous predictions and image features.}", "id": "ea297d26-2961-4519-a195-2320380cdac8", "level": "subsubsection", "origin_cites_number": 13, "parent_id": "ade536bb-4351-42af-9ee0-e21de4214d88", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Self-Attention \\& Transformers in Vision" ], [ "subsection", "Transformers for Object Detection" ], [ "subsubsection", "Detection Transformers with CNN Backbone" ] ], "subsections": [], "title": "Detection Transformers with CNN Backbone" }, { "cite_extract_rate": 1, "cites": [ 4800, 7360, 7998, 732, 2577, 38 ], "content": "\\label{Detection with Pure Transformers}\n\\mh{You Only Look at One Sequence (YOLOS) is a simple, attention-only architecture directly built upon the ViT . It replaces the class-token in ViT with multiple learnable object query tokens, and the bipartite matching loss is used for object detection similar to . YOLOS demonstrates the flexibility of ViTs to object detection, in a pure sequence-to-sequence learning manner, with minimal image related 2D inductive biases. In similar spirit, PVT~ is combined with DETR~ to perform object detection with an end-to-end transformer pipeline. We note that it is feasible to combine other recent ViTs with transformer based detection heads as well to create pure ViT based designs , and we hope to see more such efforts in future. }\n\\begin{figure}[]\n \\centering\n \\includegraphics[width=1\\columnwidth]{Figs/AxialAttention.png}\n \\caption{\\small Axial attention module that sequentially applies multi-head axial attention operations along height and width axes. 
Image from .}
 \label{fig:axial_attention}
\end{figure}
Self-attention can be leveraged for dense prediction tasks like image segmentation, which require modeling rich interactions between pixels. Below, we discuss axial self-attention , a cross-modal approach that can segment regions corresponding to a given language expression, and ViT-based segmentation architectures .
Panoptic segmentation~ aims to jointly solve the otherwise distinct tasks of semantic segmentation and instance segmentation by assigning each pixel a semantic label and an instance id. Global context can provide useful cues to deal with such a complex visual understanding task. Self-attention is effective at modeling long-range contextual information, albeit applying it to large inputs for a dense prediction task like panoptic segmentation is prohibitively expensive. A naive solution is to apply self-attention either to downsampled inputs or to limited regions around each pixel~. Even after introducing these constraints, the self-attention still has quadratic complexity and sacrifices the global context.
To tackle these issues, Wang \etal~ propose the position-sensitive axial-attention where the 2D self-attention mechanism is reformulated as two 1D axial-attention layers, applied to the height-axis and width-axis sequentially (see Fig.~\ref{fig:axial_attention}). 
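This factorization into two 1D attention passes can be sketched as follows (single head, identity projections, and no positional terms, which are all simplifications relative to the position-sensitive formulation):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attend_1d(x):
    """Self-attention along the second-to-last axis of (..., L, d)."""
    scores = x @ x.swapaxes(-1, -2) / np.sqrt(x.shape[-1])  # (..., L, L)
    return softmax(scores, axis=-1) @ x

def axial_attention(x):
    """(H, W, d): attend along the height axis, then along the width axis.
    Each 1D pass attends over only H or W positions instead of H*W."""
    x = attend_1d(x.swapaxes(0, 1)).swapaxes(0, 1)  # height-axis attention
    return attend_1d(x)                              # width-axis attention
```

Stacking the two passes lets information propagate across the whole image while keeping each attention map one-dimensional.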
The axial-attention is computationally efficient and enables models to capture the full-image context. It achieves competitive performance for the panoptic segmentation task on the COCO~, Mapillary Vistas~, and Cityscapes~ benchmarks and for image classification on the ImageNet dataset~. 
Cross-modal Self-attention (CMSA) encodes long-range multi-modal dependencies between linguistic and visual features for the \textit{referring image segmentation} task, which aims to segment entities in an image referred to by a language description.
For this purpose, a set of cross-modal features is obtained by concatenating image features with each word embedding and the spatial coordinate features. The self-attention operates on these features and generates attention over the image corresponding to each word in the sentence. The segmentation network then performs self-attention at multiple spatial levels and uses a gated multi-level fusion module to refine segmentation masks via information exchange across multi-resolution features. A binary cross-entropy loss is used to train the overall model, which achieves good improvements on the UNC , G-Ref and ReferIt datasets.
\mh{
While the segmentation approaches discussed above insert self-attention in their CNN based architectures, some recent works have proposed transformer based encoder-decoder architectures. Segmentation Transformer (SETR) has a ViT encoder, and two decoder designs based upon progressive upsampling and multi-level feature aggregation.
SegFormer has a hierarchical pyramid ViT (without position encoding) as an encoder, and a simple MLP based decoder with an upsampling operation to get the segmentation mask. 
Segmenter uses a ViT encoder to extract image features, and the decoder is a mask Transformer module which predicts segmentation masks, using learnable mask tokens and image-patch tokens as inputs. 
The authors also propose a baseline linear decoder which projects the patch embeddings to the classification space, thus producing coarse patch-level labels.\n}", "id": "51684b1c-5ca2-4f16-b99f-f58e6926d8cd", "level": "subsection", "origin_cites_number": 15, "parent_id": "8b8a30a5-c1dd-4de8-aa81-91ad0d710a62", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Self-Attention \\& Transformers in Vision" ], [ "subsection", "Transformers for Segmentation" ] ], "subsections": [], "title": "Transformers for Segmentation" }, { "cite_extract_rate": 0.620689655172413, "cites": [ 507, 1003, 5782, 5781, 7000, 76, 2525, 122, 38, 2508, 81, 4829, 5680, 2526, 4765, 732, 7999, 5783 ], "content": "Here, we discuss Transformer-based architectures for image synthesis, which is interesting from the perspective of generative modeling and learning unsupervised representations for downstream tasks.\nParmar \\etal develop an image generation model that can sequentially predict each pixel of an output image given its previously generated pixels (Fig.~\\ref{fig:img_transformer}). Their approach models the joint distribution of the image pixels by factorizing it as a product of pixel-wise conditional distributions. Previously developed auto-regressive models for this task, such as the PixelCNN~, suffer from a limited receptive field, which hinders modeling long-term relationships in an image, \\eg, part relationships or occlusions. Using self-attention enhances the receptive field without incurring a high computational cost (\\eg, an effective receptive field of up to 256 pixels can be achieved as compared to 25 pixels of PixelCNN~). 
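The autoregressive factorization $p(\mathbf{x})=\prod_i p(x_i \mid x_{<i})$ used by such models is typically enforced with a causal attention mask. A minimal sketch of this idea (a hypothetical simplification without learned projections, not the Image Transformer code):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(x):
    # x: (L, d) sequence of token embeddings (e.g., flattened pixels).
    # Masking the upper triangle ensures position i only attends to
    # positions <= i, matching p(x) = prod_i p(x_i | x_<i).
    L, d = x.shape
    scores = x @ x.T / np.sqrt(d)
    scores[np.triu(np.ones((L, L), dtype=bool), k=1)] = -np.inf
    return softmax(scores) @ x

x = np.random.rand(7, 4)
y = causal_self_attention(x)
assert np.allclose(y[0], x[0])  # the first token can only attend to itself
```

Because future positions are hidden, the model can be trained on all positions in parallel yet still be sampled one token at a time.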
The generative pipeline was also tested on conditional generation tasks \eg, image super-resolution, image completion, and denoising.\n \begin{figure}[]\n \centering\n \includegraphics[clip=true, trim=0mm 0mm 1mm 2.0mm, width=1\columnwidth]{Figs/ImageTransformer.png}\n \caption{\small (a) Self-attention block in Image Transformer . Given one channel for a pixel $q$, the block attends to the memory of previously synthesized pixels ($m_i$), followed by a feed-forward sub-network. Positional encodings $p_i$ are added in the first layer. (b) The operation performed in Local Self-Attention (an example of the 2D case is shown). The image is partitioned into a grid of spatial blocks known as query blocks. In the self-attention operation, each pixel in a query block attends to all pixels in the memory block (shown in the cyan rectangle). White grid locations show masked inputs that have zero contribution towards the self-attention.}\n \label{fig:img_transformer}\n \end{figure}\nInspired by the success of the GPT model in the language domain, image GPT (iGPT) demonstrated that such models can be directly used for image generation tasks and to learn strong features for downstream vision tasks (e.g., image classification). Specifically, iGPT trains a GPT v2 model on flattened image sequences (1D pixel arrays) and shows that it can generate plausible image outputs without any external supervision. The generated samples depict the model's ability to understand spatial relationships between pixels and high-level attributes such as object classes, texture, and scale. 
Notably, iGPT does not use any image-specific knowledge in its design (e.g., the 2D position embeddings used in Image Transformer ).\nThe features learned with iGPT's unsupervised training mechanism compete impressively against other unsupervised approaches, achieving state-of-the-art performance on the CIFAR-10/100 and STL datasets while performing comparably to SimCLR (a contrastive learning approach) on the ImageNet dataset. This is an astounding result, since the iGPT architecture is exactly the same as that used for language modeling tasks, and therefore it does not incorporate any prior domain-specific knowledge. Notably, the competing unsupervised CNN-based solutions widely adopt such priors in the form of architectural design, attention mechanisms, loss functions, and regularization . However, on the downside, iGPT has a high compute cost, \eg, the iGPT-L version has a roughly $36\times$ higher training cost compared to MoCo, which is a state-of-the-art self-supervised feature learning approach. For this reason, the training was generally limited to low resolutions of $\leq 64\times 64$, while convolutional architectures can effectively learn from high-resolution inputs.\nTransformers typically incur a high compute cost when applied to high-dimensional sequences. To overcome this limitation, Esser \etal proposed to include inductive biases (commonly used in CNNs) alongside Transformers to improve their efficiency. Specifically, the local connectivity and spatial invariance biases built into the CNN structure are leveraged by learning a rich dictionary of visual patterns (using a Generative Adversarial approach). A Transformer is then used to learn the long-range interactions between the dictionary items to generate the outputs. In turn, they develop a conditional image generation model capable of producing very high-resolution images (up to the megapixel range) using Transformers. 
This is the first work that demonstrates the application of Transformers to generate such high-resolution images. \nGenerative Adversarial Networks (GANs) with CNNs as the default backbone have been very successful for visually appealing image synthesis . TransGAN builds a strong GAN model, free of any convolution operation, with both the generator and discriminator based upon the Transformer model . \nThe architecture of both the generator and discriminator is based upon the encoder in the original Transformer model . For memory efficiency, the generator contains multiple stages, with up-sampling modules in-between, which gradually increase the resolution of feature maps (input sequence length) while reducing the embedding dimension. The discriminator of TransGAN takes flattened image-patches as tokens similar to .\nThe authors introduce different training techniques including data augmentation, training with an auxiliary task and injecting locality into self-attention to scale up their model for high-quality image synthesis . The TransGAN model achieves state-of-the-art results in terms of Inception Score and Fr\'echet Inception Distance (FID) on STL-10 and performs favorably compared with CNN-based GAN counterparts on other datasets. \n Unlike previous image generation methods , which directly predict image outputs, SceneFormer learns to generate parameters of 3D objects to be placed in a given scene. Specifically, SceneFormer studies the 3D room layout conditioned scene generation task. Given the empty room shape, SceneFormer can propose new object configurations in the room while maintaining realism. Remarkably, the model does not use any appearance information and only learns to generate new scenes by modeling the inter-object relationships using self-attention in Transformers. Similar to how a Transformer operates on a sentence, it is applied to a sequence of objects to predict the next suitable object in a scene. 
Specifically, the size, pose, location, and category of the next object are predicted by the Transformer model. A start token indicates the initiation of inference, and the number of output tokens indicates the number of objects generated by the model in a sequence. The authors also explore generating new scenes given a textual description of the room layout. The independence from appearance information makes the approach efficient, enabling interactive scene generation. \nThe task of generating realistic images from text is interesting and practically valuable (\eg, for artistic content creation), but at the same time highly challenging. Prior text-to-image synthesis approaches~ are mostly based on GANs~. Although these methods produce encouraging results, they are far from being photo-realistic. \nRamesh~\etal~ recently proposed DALL·E, which is a Transformer model capable of generating high-fidelity images from a given text description. \nThe DALL·E model has 12 billion parameters and is trained on a large set of text-image pairs taken from the internet. \nBefore training, images are first resized to 256$\times$256 resolution, and subsequently compressed to a 32$\times$32 grid of latent codes using a pre-trained discrete variational autoencoder~. \nDALL·E takes as input a single stream of 1280 tokens (256 for the text and 1024 for the image), and is trained to generate all other tokens autoregressively (one after another). It provides the flexibility to generate images either from scratch (Fig.~\ref{fig:dalle scratch}) or by extending existing images (Fig.~\ref{fig:dalle img_comp}), while staying faithful to the text caption. 
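The discretization step just described, compressing continuous features to indices in a learned codebook so that a Transformer can model a short discrete sequence, can be illustrated with a toy nearest-codebook quantizer (all names are illustrative; the learned encoder and decoder are omitted):

```python
import numpy as np

def quantize(patches, codebook):
    # patches: (N, d) continuous patch features; codebook: (K, d) learned
    # "dictionary" of visual codes. Each patch is replaced by the index of
    # its nearest code, yielding a short discrete sequence that a
    # Transformer can then model autoregressively.
    d2 = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

rng = np.random.default_rng(0)
codebook = rng.standard_normal((16, 8))            # K=16 codes of dim 8
patches = codebook[[3, 7, 3]] + 0.01 * rng.standard_normal((3, 8))
assert quantize(patches, codebook).tolist() == [3, 7, 3]
```

Working on code indices rather than raw pixels is what keeps the Transformer's input sequence short enough for high-resolution synthesis.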
\n\\begin{figure*}\n\\centering\n \\begin{subfigure}[t]{0.13\\textwidth}\n \\includegraphics[width=\\textwidth]{Figs/scratch1.png}\n \\caption{\\small}\n \\label{fig:dalle scratch}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.13\\textwidth}\n \\includegraphics[width=\\textwidth]{Figs/img_comp.png}\n \\caption{\\small }\n \\label{fig:dalle img_comp}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.13\\textwidth}\n \\includegraphics[width=\\textwidth]{Figs/multi-attributes.png}\n \\caption{\\small }\n \\label{fig:dalle multi-attr}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.13\\textwidth}\n \\includegraphics[width=\\textwidth]{Figs/viewpoint.png}\n \\caption{\\small }\n \\label{fig:dalle viewpoint}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.13\\textwidth}\n \\includegraphics[width=\\textwidth]{Figs/internel_structure.png}\n \\caption{\\small}\n \\label{fig:dalle cross-section}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.13\\textwidth}\n \\includegraphics[width=\\textwidth]{Figs/unrelated_concepts.png}\n \\caption{\\small}\n \\label{fig:dalle unrelated concepts}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.13\\textwidth}\n \\includegraphics[width=\\textwidth]{Figs/img2img.png}\n \\caption{\\small}\n \\label{fig:dalle img2img}\n \\end{subfigure}\n\\vspace{-0.3cm}\n\\caption{\\small Images generated by DALL·E~ from the following text prompts. (a) \\emph{An armchair in the shape of an avocado.} (b) \\emph{A photo of San Francisco's golden gate bridge.} Given a part of the image (in green box), DALL·E performs the image completion. 
(c) \emph{An emoji of a baby penguin wearing a blue hat, red gloves, green shirt, and yellow pants.} (d) \emph{An extreme close-up view of a capybara sitting in a field.} (e) \textit{ A cross-section view of a pomegranate.} (f) \emph{A penguin made of watermelon.} (g) \emph{The exact same cat on the top as a sketch on the bottom.} }\n\end{figure*}\nThe authors demonstrate the effectiveness of DALL·E by creating images from text describing a wide variety of real and fictional concepts. While generating images purely from textual captions, DALL·E shows impressive performance at controlling multiple objects and their attributes (Fig.~\ref{fig:dalle multi-attr}), rendering a certain viewpoint (Fig.~\ref{fig:dalle viewpoint}), capturing an object's internal structure (Fig.~\ref{fig:dalle cross-section}), and combining unrelated objects (Fig.~\ref{fig:dalle unrelated concepts}). \nFurthermore, DALL·E can perform image-to-image translation (Fig.~\ref{fig:dalle img2img}) guided by the input text.", "id": "c0eec24a-6618-4f6e-aa34-27e7a3d0deaa", "level": "subsection", "origin_cites_number": 29, "parent_id": "8b8a30a5-c1dd-4de8-aa81-91ad0d710a62", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Self-Attention \\& Transformers in Vision" ], [ "subsection", "Transformers for Image and Scene Generation" ] ], "subsections": [], "title": "Transformers for Image and Scene Generation" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 5784, 4777, 7070, 5764, 4766 ], "content": "After witnessing the success of Transformer models in high-level vision problems, numerous Transformer-based methods have been proposed for low-level vision tasks, including image super-resolution~, denoising~, deraining~, and colorization~. \nImage restoration requires pixel-to-pixel correspondence from the input to the output images. 
One major goal of restoration algorithms is to preserve desired fine image details (such as edges and texture) in the restored images. CNNs achieve this by employing a single-scale architecture design that does not involve any downsampling operation. Since the computational complexity of self-attention in Transformer models increases quadratically with the number of image patches, it is infeasible to develop a Transformer model that can operate on a single-scale feature processing pipeline. Consequently, these Transformer-based image restoration models make use of various strategies to reduce the computational burden, such as computing attention on local image windows~, performing spatial reduction attention~, and employing an encoder-decoder design~. Here, we briefly discuss a few image restoration Transformer models.", "id": "33f59639-9b4c-4ffa-990e-931e1bc2af21", "level": "subsection", "origin_cites_number": 6, "parent_id": "8b8a30a5-c1dd-4de8-aa81-91ad0d710a62", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Self-Attention \\& Transformers in Vision" ], [ "subsection", "Transformers for Low-level Vision" ] ], "subsections": [ "5d5c2cf7-e05d-4267-a4d8-e6af4f8c5b58", "eb2edd80-b529-45c0-9212-f29c7f8015f4", "63b91f43-9c63-40ed-acb2-22407e2aa5b7" ], "title": "Transformers for Low-level Vision" }, { "cite_extract_rate": 0.8, "cites": [ 5785, 7070, 493, 38 ], "content": "Top performing algorithms for high-level computer vision tasks such as object detection and semantic segmentation often employ backbone models that are pre-trained on large-scale datasets \\eg, ImageNet. 
In contrast, algorithms for low-level vision tasks such as image denoising, super-resolution, and deraining are directly trained on task-specific data, and thereby suffer from the following limitations: \textbf{(i)} the small number of images available in task-specific datasets (\eg, the commonly used DIV2K dataset for image super-resolution contains only 2000 images), \textbf{(ii)} a model trained for one image processing task does not adapt well to other related tasks.\nChen \etal~ propose a pre-trained model based on the Transformer architecture, named Image Processing Transformer (IPT). It is capable of performing various image restoration tasks such as super-resolution, denoising, and deraining. \nThe overall architecture of IPT consists of multi-heads and multi-tails to deal with different tasks separately, and a shared encoder-decoder Transformer body. \nSince exploiting Transformers at full potential requires training on large-scale data, the authors take the clean (ground-truth) images from the ImageNet benchmark and synthesize their degraded versions for different tasks. For example, bicubic interpolation is used for generating low-resolution images, additive white Gaussian noise is added to prepare noisy data, and hand-crafted rain streaks are applied to obtain rainy images. In total, 10 million images are used to pre-train the IPT model. During training, each task-specific head takes as input a degraded image and generates visual features. These feature maps are divided into small crops and subsequently flattened before feeding them to the Transformer encoder (whose architecture is the same as ). The outputs of the encoder along with the task-specific embeddings are given as input to the Transformer decoder. The features from the decoder output are reshaped and passed to the multi-tail that yields restored images. The IPT model is optimized with L$_1$ loss. 
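The multi-head/shared-body/multi-tail routing can be caricatured as follows (a toy sketch in which random linear maps stand in for the actual head, body, and tail networks; all names and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                                   # shared token dimension (illustrative)
tasks = ["sr", "denoise", "derain"]

# one task-specific head and tail per restoration task (toy linear maps)
heads = {t: rng.standard_normal((d, d)) for t in tasks}
tails = {t: rng.standard_normal((d, d)) for t in tasks}

def shared_body(tokens):
    # Stand-in for the shared encoder-decoder Transformer body,
    # whose parameters are reused by every task.
    return np.tanh(tokens)

def ipt_forward(x, task):
    # x: (n_tokens, d) features of a degraded image for the given task.
    return shared_body(x @ heads[task]) @ tails[task]

x = rng.standard_normal((10, d))
assert ipt_forward(x, "denoise").shape == (10, d)
```

The point of the design is that only the thin heads and tails are task-specific, so the large shared body benefits from training data of all tasks at once.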
Experimental results show that the pre-trained IPT model, when fine-tuned for a specific low-level vision task, can provide significant performance gains over the state-of-the-art methods .", "id": "5d5c2cf7-e05d-4267-a4d8-e6af4f8c5b58", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "33f59639-9b4c-4ffa-990e-931e1bc2af21", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Self-Attention \\& Transformers in Vision" ], [ "subsection", "Transformers for Low-level Vision" ], [ "subsubsection", "Transformers for Image Processing Tasks" ] ], "subsections": [], "title": "Transformers for Image Processing Tasks" }, { "cite_extract_rate": 0.642857142857142, "cites": [ 5784, 7258, 480, 493, 4766, 5786, 483, 496, 482 ], "content": "Recent years have seen major performance breakthroughs for super-resolution (SR) due to convolutional neural networks (CNNs). Principally, the quality of super-resolved images generated by CNNs is dependent on the choice of optimization objective. While the SR methods that are based on pixel-wise loss functions (\\eg, L1, MSE, etc.) yield impressive results in terms of image fidelity metrics such as PSNR and SSIM, they struggle to recover fine texture details and often produce images that are overly-smooth and perceptually less pleasant.\nFurther, \\emph{perceptual} SR approaches , in addition to per-pixel loss, employ adversarial loss and perceptual loss based on deep features extracted from pre-trained CNNs. While these methods generate images that are sharp, visually pleasant, and perceptually plausible, they show a substantial decrease in reconstruction accuracy measured in PSNR/SSIM. Moreover, the perceptual SR algorithms have a tendency to hallucinate fake textures and cause artifacts. The above mentioned SR approaches follow two distinct (but conflicting) research directions: one maximizing the reconstruction accuracy and the other maximizing the perceptual quality, but never both. 
\n\begin{figure}[htp]\n \centering\n \includegraphics[width=0.75\columnwidth]{Figs/texture_transformer.png}\n \vspace{-0.2cm}\n \caption{\small Diagram of the texture Transformer module. $Q$ (query), $K$ (key) and $V$ (value) represent texture features extracted from a (bicubic upsampled) low-resolution image, a sequentially down/upsampled reference image, and an original reference image, respectively. The relevance embedding aims to estimate the similarity between the low-resolution and reference images. $H$ and $S$ respectively denote hard and soft attentions computed from the relevance embedding. $T$ indicates high-resolution texture features that are then transferred to the features $F$ of the low-resolution image. Figure is from .}\n \label{fig:ttsr}\n\end{figure}\nTo alleviate the trade-off between perceptual reproduction and accurate reproduction, Yang \etal propose a Transformer network (TTSR) for super-resolution.\nDuring training, TTSR uses paired LR-HR images, as well as reference (Ref) images with content similar to that of the LR images.\nTTSR learns to search for relevant regions in the Ref image and transfers rich textures to help super-resolve the input LR image.\nThe texture Transformer module of the TTSR method (see Fig.~\ref{fig:ttsr}) consists of four core components: (1) \emph{Learnable texture extractor:} takes as input LR$\uparrow$, Ref$\downarrow \uparrow$, and Ref images, and generates texture features query (Q), key (K), and value (V), respectively. Here, $\uparrow$ denotes a bicubic upsampling operation, and $\downarrow \uparrow$ represents bicubic down-sampling followed by an upsampling operation. (2) \emph{Relevance embedding:} first unfolds Q and K into patches and then computes the similarity of each patch in Q with each patch in K in order to generate hard and soft attention maps. (3) \emph{Hard-attention:} transfers HR texture features from V to (LR features) Q using the hard attention map. 
(4) \emph{Soft-attention:} further enhances relevant features while suppressing less relevant ones.\n{While the TTSR~ method deals with reference-based image super-resolution, most of the research is conducted on the single-image super-resolution problem, in which only LR-HR paired images are available. Since the computational complexity of the original self-attention operation is prohibitively high for high-resolution images, a few efficient Transformer models have recently been proposed that employ window-based attention (SwinIR~) and a spatial-resolution reduction operation in the attention module (ESRT~) to perform super-resolution.}", "id": "eb2edd80-b529-45c0-9212-f29c7f8015f4", "level": "subsubsection", "origin_cites_number": 14, "parent_id": "33f59639-9b4c-4ffa-990e-931e1bc2af21", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Self-Attention \\& Transformers in Vision" ], [ "subsection", "Transformers for Low-level Vision" ], [ "subsubsection", "Transformers for Super-Resolution" ] ], "subsections": [], "title": "Transformers for Super-Resolution" }, { "cite_extract_rate": 0.818181818181818, "cites": [ 2017, 1484, 2015, 5764, 768, 5337, 1278, 2012, 1272 ], "content": "Given a grayscale image, colorization seeks to produce the corresponding colorized sample. It is a one-to-many task since, for a given grayscale input, there exist many possibilities in the colorized output space. The challenging nature of this task requires probabilistic models capable of producing multiple colorized output samples. Colorization Transformer is a probabilistic model based on a conditional attention mechanism . It divides the image colorization task into three sub-problems \nand proposes to solve each sequentially with a different Transformer network. The authors first train a Transformer network to map a low-resolution grey-scale image to a 3-bit low-resolution colored image. Low-resolution images in turn allow training of larger models. 
The 3-bit low-resolution colored image is then upsampled to an 8-bit RGB sample by another Transformer network in the second stage of training. Finally, a third stage Transformer is trained to increase the spatial resolution of the 8-bit RGB sample produced by the second-stage Transformer. Self-attention used in the colorization Transformer is based on row/column attention layers introduced in . These layers capture the interaction between each pixel of an input image while being computationally less costly. The row-wise attention layer applies self-attention to all pixels in a given row, while the column-wise attention layer considers pixels only in a given column of an image. This work is the first successful application of Transformers trained to colorize grey-scale images at high (256$\\times$256) resolution.\n\\begin{figure*}[]\n \\centering\n \\includegraphics[width=\\textwidth]{Figs/Multi-modal_Archs_border.png}\n \\vspace{-0.3cm}\n \\caption{\\small An overview of Transformer models used for multi-modal tasks in computer vision. The Transformer designs in this category can be grouped into single-stream (UNITER , OSCAR , VideoBERT , Unicoder-VL , VisualBERT and VL-BERT ) and dual-stream architectures (LXMERT , ViLBERT and PEMT ). A key distinction between models is the choice of loss functions. 
While most of the multi-modal methods are focused on images as visual data, VideoBERT and PEMT are designed to work on video streams and leverage unique modalities, e.g., audio signals in videos .}\n \label{fig:multi-modal-archs}\n\end{figure*}", "id": "63b91f43-9c63-40ed-acb2-22407e2aa5b7", "level": "subsubsection", "origin_cites_number": 11, "parent_id": "33f59639-9b4c-4ffa-990e-931e1bc2af21", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Self-Attention \\& Transformers in Vision" ], [ "subsection", "Transformers for Low-level Vision" ], [ "subsubsection", "Colorization Transformer" ] ], "subsections": [], "title": "Colorization Transformer" }, { "cite_extract_rate": 0.8, "cites": [ 5788, 7796, 5787, 38 ], "content": "Transformer models have also been extensively used for vision-language tasks such as visual question answering (VQA) , visual commonsense reasoning (VCR) , cross-modal retrieval and image captioning . Several works in this direction target effective vision-language pre-training (VLP) on large-scale multi-modal datasets to learn generic representations that effectively encode cross-modality relationships (\\eg, grounding semantic attributes of a person in a given image). These representations can then be transferred to downstream tasks, often obtaining state-of-the-art results. \\sk{Notably, several of these models still use CNNs as the vision backbone to extract visual features, while Transformers are mainly used to encode text, followed by the fusion of language and visual features.} Such models generally apply the vanilla multi-layer Transformer with multi-modal inputs and do not introduce fundamental changes to the core attention block. 
However, their main distinction is in the configuration of Transformers and the loss functions, based on which we categorize them into: (a) Multi-stream Transformers (see Sec.~\\ref{Multi-stream Transformers}) and (b) Single-stream Transformers (see Sec.~\\ref{Single-stream Transformers}). The \\emph{single-stream} designs feed the \\emph{multi-modal} inputs to a single Transformer while the multi-stream designs first use independent Transformers for each modality and later learn cross-modal representations using another Transformer (see Fig.~\\ref{fig:multi-modal-archs}). Besides these vision language pretraining methods, we also explain visual grounding approaches towards the end of this section (see Sec.~\\ref{Transformers for Visual Grounding}).", "id": "67fab8f1-33a1-4a5f-83e6-06ff7fb57eb2", "level": "subsection", "origin_cites_number": 5, "parent_id": "8b8a30a5-c1dd-4de8-aa81-91ad0d710a62", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Self-Attention \\& Transformers in Vision" ], [ "subsection", "Transformers for Multi-Modal Tasks" ] ], "subsections": [ "f579be71-39a5-4a4f-acbb-174a2dc606a5", "c4799f13-9469-4fd6-9cd2-27470e3c7a90", "83cb9a73-fa75-495a-9a0c-0456acce8029" ], "title": "Transformers for Multi-Modal Tasks" }, { "cite_extract_rate": 0.8636363636363631, "cites": [ 5791, 2012, 8952, 5790, 1278, 38, 1272, 97, 2899, 8951, 768, 7, 5293, 1639, 4099, 5240, 732, 5337, 5789 ], "content": "\\label{Multi-stream Transformers}\nVision and Language BERT (ViLBERT) was the first extension of the BERT model to the multi-modal domain. The goal was to learn representations that can jointly model images and natural language. For this purpose, ViLBERT developed a two-stream architecture where each stream is dedicated to model the vision or language inputs (Fig.~\\ref{fig:multi-modal-archs}-h). The architecture of both parallel streams is a series of Transformer blocks similar to the BERT model. 
Subsequently, co-attentional Transformer layers are applied to learn cross-modal relationships. The co-attentional framework is very simple. Query, key, and value matrices are computed for each modality in the standard way, and then the key-value pairs for one modality are passed on to the other modality's attention head. \nViLBERT applies VLP on a set of proxy tasks defined on the Conceptual Captions dataset (with 3.3M images with weak captions) and later fine-tunes the model on downstream tasks such as VQA. The pre-training phase operates in a self-supervised manner, i.e., pretext tasks are created without manual labeling on the large-scale unlabelled dataset. These pretext tasks include predicting whether the text and image inputs are related and predicting the semantics of masked image regions and textual inputs (\eg, similar to reconstructing masked words in text in the BERT model ). This way, the model learns the inherent structure in the data during pre-training and also models cross-domain associations. With evaluations on several tasks, the authors demonstrated that a two-stream model can perform better than a single-stream model that uses shared parameters to model both the language and vision domains .\nSimilar to ViLBERT , Learning Cross-Modality Encoder Representations from Transformers (LXMERT) also uses a two-stream architecture based on the BERT framework. The main difference lies in the object-relationship encoder that is used to model the visual features, instead of the simple image-level features used in ViLBERT. The information in the two streams is then fused across modalities using cross-attention blocks similar to .\nCompared to the two pretext tasks used for VLP in , LXMERT uses five pre-training tasks including masked object and language prediction, cross-modality matching, and visual question answering (Fig.~\ref{fig:multi-modal-archs}-g). 
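The co-attention exchange described above reduces to standard attention where queries come from one modality and keys/values from the other. A minimal sketch (learned projections and multi-head structure are omitted; names are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    # queries: (Lq, d) tokens of one modality (e.g., text);
    # keys_values: (Lk, d) tokens of the other (e.g., image regions).
    scores = queries @ keys_values.T / np.sqrt(queries.shape[-1])
    return softmax(scores) @ keys_values

text = np.random.rand(12, 8)    # 12 word tokens
image = np.random.rand(20, 8)   # 20 region features
# co-attention: each stream attends over the *other* modality
text_out = cross_attention(text, image)
image_out = cross_attention(image, text)
assert text_out.shape == (12, 8) and image_out.shape == (20, 8)
```

Running the two directions in parallel, as sketched here, is what distinguishes co-attention from a single cross-attention pass.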
The pre-trained model is fine-tuned on the VQA task; however, the high similarity between the pre-training and fine-tuning tasks raises questions about the generalizability of the learned representations to new tasks. To this end, the authors conducted generalization experiments on the Natural Language for Visual Reasoning (NLVR) task, demonstrating impressive improvements on novel tasks. \nLee \etal\n note that multi-modal representation learning approaches like VideoBERT and ViLBERT generally keep the language processing part fixed to a pre-trained model (\eg, BERT ) to reduce training complexity. For the first time in the literature, they propose to learn an end-to-end multi-modal bidirectional Transformer model called PEMT on audio-visual data from unlabeled videos. First, short-term (\eg, 1-3 seconds) video dynamics are encoded using CNNs, followed by a modality-specific Transformer (audio/visual) to model long-term dependencies (\eg, 30 seconds). A multi-modal Transformer is then applied to the modality-specific Transformer outputs to exchange information across visual-linguistic domains. However, learning such a model in a naive form would incur huge memory requirements. To reduce parametric complexity, the parameters are shared across layers within each Transformer, which leads to up to an 80\% parameter reduction.\nThe Transformer is trained using a contrastive learning approach based on content-aware negative sampling (Fig.~\ref{fig:multi-modal-archs}-i). Specifically, the model uses the features obtained from the CNNs learned during the training phase to select negative samples that are visually similar to the positive instances. This work also compares various fusion strategies adopted in earlier works, such as early (VideoBERT and VL-BERT ), mid-level (ViLBERT and LXMERT ) and late fusion mechanisms, and shows that mid-level fusion is the optimal choice. 
The proposed model is pre-trained on the Kinetics-700 dataset and later fine-tuned on downstream video classification tasks such as short video classification on UCF101 , audio classification on ESC50 and long-term action recognition on the Charades and Kinetics-Sounds datasets.\nTan and Bansal\n introduce the concept of `\emph{vokens}' (images related to language tokens extracted from sentences). The vokens (visualized tokens) provide visual supervision to the language model to learn better features. The motivation is that humans learn languages by correlating visual information with semantic concepts. In a similar spirit to other self-supervised language representation learning methods , they learn representations by defining an auxiliary voken-prediction task.\nSince the existing datasets encode limited visually grounded tokens, they propose a vokenization method to map language tokens to visual vokens, as illustrated in Fig.~\ref{fig:voken}. \nThe approach uses language-based retrieval for such a mapping and transfers a model trained on a small labeled dataset (MS-COCO) to a large dataset (Wikipedia). Furthermore, it was ensured that the sentence-wide context is considered to obtain the token-voken mapping. The resulting model trained using the generated vokens outperforms the state-of-the-art BERT model on a diverse set of NLP tasks. In this sense, the proposed model does not evaluate on vision tasks; however, it uses vision as a useful grounding cue to train the language model, hence we include it in the multi-modal representation learning group. \nVision-and-Language Navigation (VLN) aims to predict a navigation plan on a map based on vision and language inputs. Transformer models were used earlier in \n for the VLN task. These works first pre-train a cross-modal Transformer using self-supervision on vision and language pairs and subsequently fine-tune it on the specific VLN tasks. 
While these works learn attention between image regions and language, Chen \etal propose to learn cross-modal attention between language inputs and spatial topological maps (which represent an agent's environment as a graph whose nodes denote places and whose edges denote their connectivity). Given the topological map and natural language inputs, a VLN task using the Transformer model bears resemblance to sequence prediction in NLP. Specifically, at each time instance, the cross-modal Transformer predicts a single node of the topological map in the navigation plan. The individual language and map encodings are first processed using uni-modal encoders and later a cross-modal encoder (similar to LXMERT ) is applied to aggregate information across modalities. To denote positions in the map, a learned trajectory position encoding is appended to the map features. Based on this Transformer setup, reports a full navigation system that can freely explore the environment and intelligently plan its actions. 
\mh{
CLIP is a contrastive approach that learns image representations from text, with a learning objective that maximizes the similarity between the embeddings of matching image-text pairs within a large batch. Specifically, given a batch of $N$ image-text pairs, CLIP learns a multi-modal embedding space, by jointly training an image encoder and a text encoder, such that the cosine similarity of the $N$ valid image-text pairs is maximized, while that of the remaining $N^2-N$ pairs is minimized. The authors consider ResNet-50 and Vision Transformer (ViT) for encoding images. The modified Transformer model as in is employed for encoding text. CLIP is trained on a large corpus of 400 million image-text pairs
and demonstrates excellent zero-shot transfer capabilities. At inference, the names of the classes are used as input to the text encoder, and the similarity of the encoded image is computed with all the encoded texts (classes) to find the image-text pair with the highest match. 
CLIP achieves an astounding zero-shot classification accuracy of 75\% on ImageNet, without using any supervision from the ImageNet training set. The authors further demonstrate zero-shot transfer capabilities of the CLIP model on 30 different computer vision benchmarks. Note that CLIP with ResNet took 18 days to train on 592 V100 GPUs while CLIP with ViT took 12 days on 256 V100 GPUs. This highlights the computational cost of CLIP.}
\label{Single-stream Transformers}
Different from two-stream networks like ViLBERT and LXMERT , VisualBERT uses a single stack of Transformers to model both domains (images and text). The input sequence of text (\eg, a caption) and the visual features corresponding to the object proposals are fed to the Transformer, which automatically discovers relations between the two domains. Notably, the VisualBERT architecture is somewhat similar to VideoBERT (explained in Sec.~\ref{sec:video}), but instead of only focusing on cooking videos, VisualBERT evaluates on various visual-linguistic tasks (\eg, VCR, NLVR, VQA, and visual grounding). 
The VisualBERT model first applies task-agnostic pre-training using two objectives (Fig.~\ref{fig:multi-modal-archs}-e). The first objective simply attempts to predict missing text tokens using the image features and the remaining textual tokens. 
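This masked-token objective can be sketched as follows (a minimal numpy illustration; the token ids, mask id, and masking rate below are hypothetical, not VisualBERT's actual values):

```python
import numpy as np

def mask_text_tokens(tokens, mask_id=103, rate=0.15, seed=0):
    """Hide a random subset of text tokens; during pre-training the model
    must reconstruct them from the image features and the remaining tokens.
    (mask_id and rate are illustrative, not VisualBERT's actual values.)"""
    rng = np.random.default_rng(seed)
    tokens = np.asarray(tokens)
    hidden = rng.random(tokens.shape) < rate   # positions the model must predict
    corrupted = tokens.copy()
    corrupted[hidden] = mask_id                # replace with a [MASK]-style id
    return corrupted, hidden

tokens = np.arange(1000, 1020)                 # 20 toy token ids
corrupted, hidden = mask_text_tokens(tokens)
```

The `hidden` positions serve as the prediction targets, while the unmasked tokens (and, in the multi-modal setting, the visual features) provide the conditioning context.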
The second objective attempts to differentiate between true and false captions of a given image. After task-agnostic pre-training, the authors propose to perform task-specific pre-training to bridge the domain gap before the final fine-tuning to the downstream task. 
Su \etal propose a multi-modal pre-training approach to learn features that are generalizable to multi-modal downstream tasks such as Visual Commonsense Reasoning and Visual Question Answering. This endeavor requires adequately aligning the visual and linguistic cues so that an effective composite representation is learned. To this end, builds on the BERT model and inputs both the visual and language features. The language features correspond to the tokens in the input sentence and the visual features correspond to the regions of interest (RoIs) from the input image (obtained via a standard Faster R-CNN). 
Specifically, the model is pre-trained on both a visual-linguistic dataset (Conceptual Captions ) as well as language-only datasets (\eg, Wikipedia). The loss function is identical to BERT, where the model is trained to predict the masked-out words or visual RoIs (Fig.~\ref{fig:multi-modal-archs}-f). Contrary to other works such as UNITER , VL-BERT claims that visual-linguistic matching tasks are not useful during pre-training, although this claim is contradicted by evidence from later efforts . Their results on several multi-modal tasks show their benefit over language-only pre-training (\eg, in BERT).
Universal Encoder for Vision and Language (Unicoder-VL) learns multi-modal representations using large-scale image-caption pairs. The language and image inputs are fed to a single Transformer model (with multiple successive encoders) to learn joint embeddings. To this end, it uses masked word prediction, masked object classification, and visual-linguistic matching as self-supervision tasks during pre-training (Fig.~\ref{fig:multi-modal-archs}-d). 
Notably, the visual-linguistic matching is carried out only at the global level (i.e., image-sentence alignment). The model is evaluated on image-text retrieval, zero-shot learning, and visual commonsense reasoning, where it performs better than previous models such as ViLBERT and VisualBERT . This shows the significance of rich self-supervised tasks and advocates for a unified Transformer architecture to learn multi-modal features in a common framework. 
The Unified Vision-Language Pre-training (VLP) model uses a single Transformer network for both the encoding and decoding stages. This stands in contrast to BERT-inspired VLP models which use independent encoder and decoder networks.
Joint modeling of the encoding and decoding stages allows the Unified VLP model to perform well on both image captioning and visual question answering tasks, when fine-tuned on these individual tasks. 
The intuition for shared modeling of the encoding and decoding stages stems from the need to better share cross-task information during pre-training. The unified model consists of a stack of 12 Transformer blocks, each with a self-attention layer followed by a feed-forward module.
The self-supervised objectives used for pre-training include masked vision-language predictions. Here, the authors explore two variants, i.e., bidirectional and sequence-to-sequence prediction of masked words, where different context encodings are used for the two types of objectives. 
The proposed approach is evaluated on COCO Captions, Flickr30K Captions and VQA 2.0 and obtains encouraging results compared to previous methods on image captioning and VQA . 
Universal image-text representation (UNITER) performs pre-training on four large-scale visual-linguistic datasets (MS-COCO , Visual Genome , Conceptual Captions and SBU Captions ). The learned representations transfer well to downstream tasks such as VQA, multi-modal retrieval, visual commonsense reasoning, and NLVR. 
In order to emphasize learning the relationships between the visual and language domains, specifically designs pre-training tasks to predict masked visual or text regions conditioned on the input from the other domain, and to align language and visual inputs at both the global (image-text) and local (word-region) levels (Fig.~\ref{fig:multi-modal-archs}-a). These tasks are in addition to the conventional masked language modeling task used in BERT and explicitly include fine-grained word-region alignment alongside conditional masking of inputs, which were not considered in earlier works such as VL-BERT , VisualBERT , ViLBERT and Unicoder-VL . Common to the other approaches, they adopt the Transformer architecture proposed in BERT that operates on both the visual and language embeddings. In contrast to applying independent Transformers to the language and visual inputs (as in ViLBERT and LXMERT ), UNITER adopts a single Transformer applied to the textual and image inputs like . 
The VisualBERT , UNITER , VL-BERT , ViLBERT , and Unicoder-VL models for VLP concatenate image and text features and leave it to the self-attention to automatically discover cross-modal relationships. This can complicate the visual grounding of semantic concepts in an image. To address this problem, Object-Semantics Aligned Pre-Training (Oscar) first uses an object detector to obtain object tags (labels), which are subsequently used as a mechanism to align relevant visual features with the semantic information (Fig.~\ref{fig:multi-modal-archs}-b). The motivation is that the textual content generally pertains to major objects in the image; therefore, by explicitly adding those image labels to the input, visual features can be better attended. 
Similar to BERT , Oscar uses a Masked Token Loss for VLP, where different tokens in the textual input and image tags are randomly masked and the model predicts these missing tokens. 
Further, it also uses a contrastive loss that discriminates between the original and noisy/fake image-tag pairs. The representations thus learned are fine-tuned on VQA, cross-modality retrieval, natural language reasoning, and image captioning tasks to obtain better performance than VLP methods that do not use object tags. \sk{The recent VinVL approach extends Oscar for the object detection task and learns object instance-centered relationships between the visual and language domains using an adapted pretraining scheme. The model is trained on a collection of datasets (MS-COCO, OpenImages, Visual Genome and Objects365) and was demonstrated to precisely relate semantic attributes with the visual information and provided better transferability to the downstream visual comprehension tasks. }
\begin{figure}[]
    \centering
    \includegraphics[width=1\columnwidth]{Figs/Vokenizer.png}
    \caption{\small Visualized tokens (Vokens) : A language model is visually supervised using closely related images, which leads to better feature representations from the pretrained model. Figure from . }
    \label{fig:voken}
\end{figure}
\mh{
\label{Transformers for Visual Grounding}
Modulated DETR (MDETR) has a CNN and BERT backbone to extract features from image and text inputs, respectively. 
The visual and text features are then separately linearly projected to a shared space, concatenated, and fed to a Transformer model (with an architecture similar to DETR) to predict the bounding boxes for objects corresponding to the queries in the grounding text. The model is trained using a loss which predicts a uniform distribution over all relevant text query tokens specific to the predicted bounding boxes. An additional contrastive loss term ensures correspondence between the visual and text embeddings.
TransVG is a simple design, where visual and text features are fused together in a Transformer module, and the bounding box corresponding to the query is directly regressed using a learnable token (input to the Transformer module, along with the visual and text features). Referring Transformer is also a simple one-stage design where the text and image features are fused in a Transformer encoder, and the Transformer-based decoder then directly regresses bounding boxes or segmentation masks.
Visual Grounding with Transformer has an encoder-decoder architecture, where visual tokens (features extracted from a pretrained CNN model) and text tokens (parsed through an RNN module) are processed in parallel in two distinct branches of the encoder, with cross-modality attention to generate text-guided visual features. 
The decoder then computes attention between the text queries and visual features and predicts query-specific bounding boxes.}
\label{sec:video}
Existing approaches for audio-video data analysis generally learn representations on short-length videos (up to a few seconds long), which allows them to encode only short-range dependencies . Long-range dependency modeling is desirable in various uni-modal and multi-modal learning tasks such as activity recognition . Below, we explain recent approaches that seek to resolve this challenge using the expressivity of Transformer networks. \sk{It is important to note that several of these works still employ (pretrained) CNNs to encode image/frame-level features in the videos, on top of which Transformers are applied to model wide context. 
A few exceptions include , which obtain the frame-level features using ViT-based backbones as well.}
The VideoBERT model leverages Transformer networks and the strength of self-supervised learning to learn effective multi-modal representations. Specifically, VideoBERT uses the prediction of masked visual and linguistic tokens as a pretext task (Fig.~\ref{fig:multi-modal-archs}-c). This allows modeling high-level semantics and long-range temporal dependencies, important for video understanding tasks. Given a video, converts speech to text using off-the-shelf speech recognition systems and applies vector quantization (clustering) to obtain visual features from pre-trained video classification models. The BERT model is then directly applied to these concatenated sequences of language and visual tokens to learn their joint distribution. 
The model can be trained with text-only, video-only, and video+text domains. The resulting model showcases interesting capabilities for cross-modal predictions such as video generation from a given textual input (\eg, captions or a cooking recipe) and (video-based) future forecasting. The video+text model uses a visual-linguistic alignment task to learn cross-modality relationships. The definition of this pretext task is simple: given the latent state of the \texttt{[cls]} token, the task is to predict whether the sentence is temporally aligned with the sequence of visual tokens. 
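Such an alignment head can be sketched as a logistic classifier on the \texttt{[cls]} state (purely illustrative; the dimensions and parameters below are hypothetical stand-ins for learned quantities):

```python
import math

def alignment_prob(cls_state, w, b=0.0):
    """Score whether a sentence is temporally aligned with the visual
    tokens: a logistic head on the [cls] latent state (sketch only;
    w and b stand in for parameters learned during pre-training)."""
    logit = sum(c * wi for c, wi in zip(cls_state, w)) + b
    return 1.0 / (1.0 + math.exp(-logit))   # probability of "aligned"

cls_state = [0.2, -0.5, 1.1, 0.3]           # toy 4-dim [cls] state
w = [0.7, 0.1, -0.4, 0.9]
p = alignment_prob(cls_state, w)
```

During training, mismatched sentence-video pairs provide the negative examples for this binary objective.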
Further, the learned representations are shown to be very useful for downstream tasks such as action classification, zero-shot classification, and video captioning. 
Zhou \etal explore Masked Transformers for dense video captioning. This requires generating language descriptions for all events occurring in a video. Existing works on this problem generally operate sequentially, i.e., they first detect events and then generate captions in separate sub-blocks. proposes a unified Transformer network to tackle both tasks jointly, thereby seamlessly integrating the multi-modal tasks of event detection and captioning. First, a video encoder is used to obtain frame-wise representations, followed by two decoder blocks focused on proposing the video events and the captions. Since untrimmed videos are considered, a masking network is used in the captioning decoder to focus on describing a single event proposal. Remarkably, was the first approach to target dense video captioning using non-recurrent models, and it used self-attention in the encoder (applied on CNN-derived features) to model broad-range context between video frames. 
Experiments on the ActivityNet Captions and YouCookII datasets showed good improvements over previous recurrent-network and two-stage based approaches.
The traditional CNN-based methods in video classification generally perform 3D spatio-temporal processing over limited intervals to understand videos. Neimark \etal propose the Video Transformer Network (VTN), which first obtains frame-wise features using a 2D CNN and applies a Transformer encoder (Longformer ) on top to learn temporal relationships. 
Longformer is an attractive choice to process long sequences (with an arbitrary length $n$) due to its $\mathcal{O}(n)$ complexity. 
The classification token is passed through a fully connected layer to recognize actions or events. The advantage of using a Transformer encoder on top of spatial features is two-fold: (a) it allows processing a complete video in a single pass, and (b) it considerably improves training and inference efficiency by avoiding the expensive 3D convolutions. 
This makes VTN particularly suitable for modeling long videos where interactions between entities are spread throughout the video length. Their experiments on the Kinetics-400 dataset with various backbones (ResNet , ViT and DeiT ) show competitive performance.
Girdhar \etal use a variant of the Transformer architecture to aggregate person-specific contextual cues in a video for action classification and localization. 
Initially, the model uses Faster R-CNN style processing where a backbone model generates features that are forwarded to the Region Proposal Network to obtain object proposals. Then RoI pooling is applied to generate object-specific features. Multi-head self-attention is then applied on top of the object features as a cascade of self-attention layers. In each Transformer unit, a particular person feature is treated as the `query' (Q), while the features from the neighboring video clip are used as the `key' (K) and `value' (V). The location information is explicitly encoded in the input feature map from which K, V and Q are derived, thus incorporating the positional information in the self-attention. For a given $400$$\times$$400$$\times$$64$ video clip, the key and value tensors are of size $16$$\times$$25$$\times$$25$$\times$$128$, while the query is a $128$-dimensional vector. Although uses only the RGB stream, additional modalities like optical flow and audio signals (as in competing works) would further increase the computational complexity. Further, the Transformer model was found to be sub-optimal for action localization, perhaps due to its tendency to incorporate global information. Therefore, it is important to achieve the right trade-off between the global and local context for problems that demand precise delineation (\eg, action localization and segmentation).
Human action recognition based on skeleton representations requires understanding relationships between different joints of a body in a given frame as well as between different frames of a video. Plizzari \etal proposed a two-stream Transformer network to model such relationships. 
They introduced spatial self-attention (SSA) to model relations between different body joints (Fig.~\ref{fig:ssa})
 and temporal self-attention (TSA) to capture long-range inter-frame dependencies (Fig.~\ref{fig:tsa}).
 They first used a small residual network to extract features from the skeleton data and then used the SSA and TSA modules to process those feature maps. SSA finds the correlation between each pair of joints independently, while TSA focuses on how the features of a certain joint change between frames along the temporal dimension. The purpose of SSA is to discover relationships among the surrounding joints in the same way as the Transformer relates different words in a phrase. On the other hand, TSA finds long-range relations between frames, similar to how relations among phrases are built in NLP. The two-stream model achieves state-of-the-art results on the NTU-RGB+D 60 and NTU-RGB+D 120 datasets.
 \mh{
Multiscale Vision Transformers (MViT) build a feature hierarchy by progressively expanding the channel capacity and reducing the spatio-temporal resolution in videos. They introduce multi-head pooling attention to gradually change the visual resolution in their pyramid structure. TimeSFormer extends ViTs to videos by considering the video as a sequence of patches extracted from individual frames. To capture spatio-temporal relationships, they propose divided attention, i.e., spatial and temporal attentions are separately applied within each block. TimeSFormer demonstrates state-of-the-art performance on action recognition and can be applied to clips over one minute long. Another notable pure-Transformer based model is the Video Vision Transformer (ViViT) . First, the spatio-temporal tokens are extracted and then efficient factorised versions of self-attention are applied to encode relationships between the tokens. However, they require initialization with image-pretrained models to effectively learn the ViT models. 
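This divided space-time attention can be sketched in numpy as follows (single head, no learned projections; the shapes are illustrative only):

```python
import numpy as np

def attn(x):
    """Plain single-head self-attention over the first axis
    (no learned projections, for illustration only)."""
    a = x @ x.T / np.sqrt(x.shape[-1])
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    w = e / e.sum(axis=-1, keepdims=True)
    return w @ x

def divided_attention(tokens):
    """Divided space-time attention sketch (cf. TimeSFormer's divided
    attention): temporal attention per spatial location, then spatial
    attention per frame."""
    T, S, D = tokens.shape
    # temporal: each of the S spatial positions attends across the T frames
    t_out = np.stack([attn(tokens[:, s]) for s in range(S)], axis=1)
    # spatial: each of the T frames attends across its S patches
    return np.stack([attn(t_out[t]) for t in range(T)], axis=0)

x = np.random.default_rng(2).normal(size=(4, 6, 8))  # 4 frames, 6 patches, 8-dim
y = divided_attention(x)
```

For $T$ frames of $S$ patches each, the factorisation costs on the order of $S \cdot T^2 + T \cdot S^2$ attention comparisons, versus $(TS)^2$ for joint space-time attention, which is what makes minute-long clips tractable.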
There has also been concurrent work on learning sound pretrained models using self-supervised learning with ViTs. An important recent effort is the long-short contrastive learning (LSTCL) framework , which reconstructs representations from different time-scales (narrow and broad) as auxiliary learning tasks and demonstrates good downstream performance. 
}
The Video Instance Segmentation Transformer (VisTR) model extends DETR for the video instance segmentation (VIS) task. Local features are obtained using a backbone CNN on a collection of video frames. An encoder and a decoder Transformer are used, similar to DETR, to frame the instance segmentation problem as a sequence-to-sequence prediction task. The input frame-level features are concatenated to form clip representations and the Transformer outputs instance predictions in an order that is consistent across frames. This integrates object detection and tracking within a single unified architecture. The predicted outputs are matched with the ground-truth using bipartite matching. Similar to Mask R-CNN , a separate head is used to predict the instance mask based on self-attention and 3D convolutions. The overall results are competitive among the single-model approaches on the YouTube VIS dataset , but perform somewhat worse compared to more complex CNN-based models such as MaskProp . 
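The bipartite matching step that pairs predicted instance sequences with the ground truth can be sketched as follows (an exhaustive toy version; DETR/VisTR use the Hungarian algorithm, and the real cost combines class and box/mask terms):

```python
from itertools import permutations

def bipartite_match(cost):
    """Assign each prediction i to a distinct ground-truth instance perm[i]
    so that the total cost is minimal (exhaustive sketch, only viable for
    tiny examples; the Hungarian algorithm is used in practice)."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

# toy 3x3 cost matrix: cost[i][j] = mismatch of prediction i vs instance j
cost = [[0.9, 0.1, 0.5],
        [0.4, 0.8, 0.2],
        [0.3, 0.6, 0.7]]
perm, total = bipartite_match(cost)
```

Only the matched pairs contribute to the training loss, so the network is free to emit predictions in any order as long as the ordering stays consistent across frames.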
\n\\begin{figure}[t]\n\\centering\n \\begin{subfigure}[t]{0.4\\textwidth}\n \\includegraphics[width=\\textwidth]{Figs/SSA.png}\n \\caption{\\small Spatial Self-Attention}\n \\label{fig:ssa}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.4\\textwidth}\n \\includegraphics[width=\\textwidth]{Figs/TSA.png}\n \\caption{\\small Temporal Self-Attention}\n \\label{fig:tsa}\n \\end{subfigure}\n\\caption{\\small Spatial/Temporal Attention for Skeleton Data Representations. Relationships between body-joints and inter-frame dependencies are modeled using two dedicated self-attention modules. Figure is from .}\n\\end{figure}", "id": "dc68133d-1de8-46af-af7d-d03a08eaafa7", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "957dac38-5489-46b3-9df4-133419b7eac6", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Self-Attention \\& Transformers in Vision" ], [ "subsection", "Video Understanding" ], [ "subsubsection", "Video Instance Segmentation" ] ], "subsections": [], "title": "Video Instance Segmentation" }, { "cite_extract_rate": 0.75, "cites": [ 5800, 5763, 7993, 5801, 38, 1523 ], "content": "In the few-shot learning settings, a support set is provided at the inference to adapt to a novel set of categories. Transformer models have been used to learn set-to-set mappings on this support set or learn the spatial relationships between a given input query and support set samples . In terms of absolute performance, the patch-wise spatial self-attention between query and support set images excels compared to an image level association learned in . However, the patch-wise attention computation is computationally expensive. We elaborate on these approaches below.\nDoersch \\etal explore the utility of self-supervision and Transformer model for few-shot fine-grained classification, where distribution mismatch exists between training and evaluation phases. 
\nThey develop Cross-Transformer model to relate a given query image with the few-examples available in the support set. To this end, the Transformer finds spatially similar regions in the query and support set images, and the corresponding features are then used to obtain class decisions for the query. The queries in the Transformer architecture are derived from the grid features obtained using the query image. Similarly, grid features from the support images are used to construct keys and values which are in turn used to derive attended outputs. This approach, besides a contrastive self-supervision based training mechanism, leads to the best performance on the challenging Meta-dataset .\nYe \\etal propose to adapt the few-shot embeddings learned on the base classes to the few-shot target classes during inference using a Transformer module. This leads to task-specific embeddings that perform better on the discriminative tasks such as few-shot classification. While many other set-to-set functions are also evaluated, such as Graph convolutional networks , Bidirectional LSTMs and DeepSets , the best performance is achieved with the Transformer-based mapping. This is attributed to the better contextualization, task interpolation and extrapolation capability of Transformers and their permutation invariance while maintaining a relatively lower parameter complexity. The Transformer architecture in follows the standard model . The embeddings are adapted using a contrastive loss function for preserving discriminative properties (Fig.~\\ref{fig:feat}). \nThe resulting model achieves strong performance on inductive, transductive, and generalized FSL tasks. \n\\mh{\nLiu \\etal learn a multi-head self-attention based module, to integrate the visual representation learned by the models trained on different domains present in the meta-dataset . 
The Universal Representation Transformer (URT) layer dynamically re-weights the representations from different domain-specific backbones, and proves very effective in handling few-shot tasks across a variety of data distributions.
}
\begin{figure}[t]
    \centering
    \includegraphics[width=1\columnwidth]{Figs/Feat_transformer.png}
    \caption{\small An overview of FEAT . Compared to the conventional instance embedding methods in FSL that keep the embedding function the same for all tasks (a), FEAT uses a set-to-set function to adapt the embedding function to each FSL task (b). It evaluates several set-to-set functions and found the Transformer module to be the most suitable choice for FSL. Figure from .}
    \label{fig:feat}
\end{figure}
Clustering aims to discover structure in the data by grouping similar data points together. It has numerous applications such as data visualization and interpretation, anomaly detection, and open-set categorization. Neural networks have been developed for set prediction problems ; however, the set points are processed individually, which can lose information about inter-point relationships. Recent works employ Transformers that operate on set inputs, called Set Transformers (ST), for \emph{amortized} clustering. Amortized clustering is a challenging problem that seeks to learn a parametric function that can map an input set of points to their corresponding cluster centers. 
Lee \\etal propose to learn such a mapping function using a Transformer architecture comprising of multi-head self-attention blocks .\nThe Transformer model is permutation invariant by design and allows encoding both pair-wise and higher-order relationships between the input points. \nHowever, a full Transformer would lead to a high computational cost of $\\mathcal{O}(n^2)$ in each self-attention layer, where $n$ is the number of points in the set. ST reduces this cost to $\\mathcal{O}(mn)$ by using an Induced Self-Attention Block that uses a low-rank projection ($H \\in \\mathbb{R}^m$) to allow operating on large sets. The model was trained to learn optimal parameters that maximize the likelihood of a mixture of Gaussians (MoGs). Thus MoG parameters are estimated by the ST given a set of data points. Beyond amortized clustering, \\sk{ST is a generic framework which can handle other set-input problems such as counting unique elements in an input set, multi-instance learning, set anomaly detection, and 3D point-cloud classification.} More recently, improves by taking a sequential approach to cluster generation, thereby allowing assignment to a variable number of clusters.", "id": "08ba35f6-edfe-4960-acd5-0bf78d42a74c", "level": "subsection", "origin_cites_number": 5, "parent_id": "8b8a30a5-c1dd-4de8-aa81-91ad0d710a62", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Self-Attention \\& Transformers in Vision" ], [ "subsection", "Transformers for Clustering" ] ], "subsections": [], "title": "Transformers for Clustering" }, { "cite_extract_rate": 0.7000000000000001, "cites": [ 4826, 8002, 5771, 4823, 4769, 7374, 38 ], "content": "Given the irregular (variable number of points) and permutation invariant nature of 3D point cloud representations, Transformers provide a promising mechanism to encode rich relationships between 3D data points. To this end, recent works are motivated by the capability of Transformers to learn set-functions. 
Specifically, introduced a Point Transformer which uses vector attention to learn weights for each channel, while suggest an alternate design where local 3D structure is explicitly encoded. The non-local nature of Transformers is exploited in towards an accurate human pose and mesh reconstruction algorithm. We discuss these approaches below.
Self-attention, being a set operator, is ideally suited for processing point clouds, a 3D data representation that demands invariance to the number of points and their permutations. Zhao \etal~ propose a point Transformer layer that applies self-attention in the local neighborhood of 3D points. 
 The proposed layer builds on the vectorized self-attention network (SAN) , where attention weights are represented with vectors.
 Furthermore, a positional encoding 
 is added both to the attention vector and the transformed features (value vectors) to represent location information. The point Transformer layer is sandwiched between two linear layers to create a point Transformer block that is stacked multiple times in the developed network architecture. Their design also includes transition down/up blocks to reduce/increase the number of points in the input (in a typical encoding-decoding pipeline style). The resulting architecture shows promising results on the 3D classification and segmentation tasks. 
 \begin{figure*}[htp]
 \centering
 \includegraphics[width=0.95\textwidth]{Figs/METRO.png}
 \caption{\small Mesh Transformer architecture. The joint and vertex queries are appended with positional embeddings and passed through multiple self-attention layers to jointly regress 3D coordinates of joints and mesh vertices. Figure is from . }
 \label{fig:metro}
\end{figure*}
 The Point Cloud Transformer (PCT) is a parallel work to and motivated by the permutation invariance property of Transformers. However, compared to , it is more directly based on the conventional Transformer architecture and does not involve vector attention.
The key modifications include a 3D coordinate-based position encoding, an offset attention module, and a neighbor embedding that encodes local 3D structure in point-clouds. Specifically, the offset attention layer calculates the difference between the self-attended features and the input features using element-wise subtraction. The local neighbor embedding simply finds self-attention relationships among a group of points instead of individual 3D points. Explicitly incorporating local neighbourhood information makes this a more efficient architecture compared to . The method shows promising performance on 3D shape classification, normal estimation and segmentation tasks on ModelNet40 and ShapeNet datasets. \nThe Mesh Transformer (METRO) model targets 3D human pose and mesh reconstruction from a single 2D image. A key challenge here is to faithfully learn the non-local interactions between body-joints and mesh vertices (\\eg, hand and foot). The expressivity of Transformer network is used to jointly model \\emph{vertex to vertex} relationships in a mesh as well as the \\emph{vertex to body-joint} relationships. The self-attention mechanism can attend to any combination of vertices in the mesh, thereby encoding non-local relationships. \nThe multi-layer Transformer architecture sequentially performs dimensionality reduction to map the 2D image to 3D mesh. Position encoding is performed using the 3D coordinates ($x$,$y$,$z$) of each vertex and each body-joint. Similar to masked language modeling in NLP, METRO uses masked vertex modeling (MVM) which randomly masks some percentage of input queries (see Fig.~\\ref{fig:metro}). The Transformer is tasked with regressing all the joints and vertices which helps encode inter-dependencies between them.\nMETRO obtains state-of-the-art results on human mesh reconstruction on Human3.6M and 3DPW datasets. 
Since the approach does not depend on a parametric mesh model, it generalizes well to other reconstruction tasks such as 3D hand reconstruction . Overall, this is the first effort to employ Transformers for 3D human reconstruction tasks and leads to fairly good results.
\& Future Directions}
Despite excellent performance from Transformer models and their interesting salient features (Table \ref{tab:key_features}), there exist several challenges associated with their applicability to practical settings (Table \ref{tab:performance_advantages}). The most important bottlenecks include the requirement for large amounts of training data and the associated high computational costs. Visualizing and interpreting Transformer models has also proven challenging.
In this section, we provide an overview of these challenges, mention some of the recent efforts to address those limitations, and highlight the open research questions.
\sk{As discussed in Sec.~\ref{sec:introduction}, a strength of Transformer models is their flexibility to scale to high parametric complexity. While this is a remarkable property that allows training enormous models, it results in high training and inference costs (a detailed comparison between CNNs and ViTs is shown in Table~\ref{tab:parameters}).} As an example, the BERT basic model (with 109 million parameters) took around 1.89 peta-flop days\footnote{A peta-flop day is a measure of computation and equals performing $10^{15}$ neural net operations per second for one complete day.} for training, while the latest GPT3 model (175 billion parameters) took around 3640 peta-flop days for training (a staggering $\sim$1925$\times$ increase).
This comes with a huge price tag, \eg, according to one estimate , GPT3 training might have cost OpenAI 4.6 million USD. Additionally, these large-scale models require aggressive compression (\eg, distillation) to make them feasible for real-world settings. 
\mh{An empirical study on the scalability of Vision Transformers for the number of parameters (ranging from five million to two billion), the size of the training datasets (ranging from 30 million to three billion training images), and the compute budget (1-10000 TPU core-days) is presented in . From this study, we can draw the following conclusions: (a) scaling up compute, model size, and the number of training samples improves performance; (b) only large models (with more parameters) can benefit from more training data, while the performance of smaller models plateaus quickly and cannot leverage additional data. This indicates that large-scale models have the capacity to further enhance their representation learning capabilities. However, with the current designs, scaling up Transformer models is expensive and compute prohibitive, thus necessitating efficient designs.}
\begin{table*}[hbp]
 \centering \small
 \setlength{\tabcolsep}{5pt}
 \scalebox{0.75}{
 \begin{tabular}{x{2.25cm} c p{6cm} x{3cm} x{2cm} x{3cm}} 
 \toprule[0.15em]
 \rowcolor{mygray} \textbf{Task} &\textbf{Method}& \textbf{Design Highlights} (focus on differences with the standard form)& \textbf{Input Data Type} & \textbf{Label Type} & \textbf{Loss} \\
 \midrule[0.1em]
 Image Classification & ViT & Directly adopted NLP Transformer Encoder for images, Mechanism to linearly embed image patches with positional embedding suitable for the Encoder. & 2D Image & Class labels & Cross-entropy\\\cmidrule{2-6}
 & DeiT & Transformer as a student while CNN as a teacher, Distillation tokens to produce estimated labels from teacher, Attention between class and distillation tokens.
& 2D Image & Class labels & Cross-entropy, Distillation loss based on KL-divergence \\\\\n \\cmidrule{2-6}\n & CLIP & Jointly train image and text encoders on image-text pairs, to maximize similarity of valid pairs and minimize otherwise & 2D Images \\& texts & Image-text pairs & Symmetric cross-entropy \\\\\n \\midrule\n Object Detection & DETR & Linear projection layer to reduce CNN feature dimension, Spatial positional embedding added to each multi-head self-attention layer of both encoder and decoder. Object queries (output positional encoding) added to each multi-head self-attention layer of decoder. & 2D Image & Class labels & Hungarian loss based on bipartite matching between predicted and ground truths\\\\\\cmidrule{2-6}\n & D-DETR & Deformable Transformer consists of deformable attention layers to introduce sparse priors in Transformers, Multi-scale attention module. & 2D Image & Class labels & Hungarian loss \\\\\n \\midrule\n Low Shot Learning & CT & Self-supervised pretraining, Query-aligned class prototypes that provide spatial correspondence between the support-set images and query image. & 2D Image & Pretraining without labels and few-shot learning with Class labels & Normalized Cross-entropy \\\\\n \\midrule\n Image Colorization & ColTran & Conditional Row/column multi-head attention layers, Progressive multi-scale colorization scheme. & 2D Image & 2D Image & Negative log-likelihood of the images \\\\\n \\midrule\n Action Recognition & ST-TR & Spatial and Temporal self-attention to operates on graph data such as joints in skeletons. & Skeleton & Action Classes & Cross-entropy \\\\\n \\midrule\n Super-resolution & TTSR & Texture enhancing Transformer module, Relevance embeddings to compute the relevance between the low-resolution and reference image. & 2D Image & 2D Image & Reconstruction loss, Perceptual loss defined on pretrained VGG19 features. 
\\\\\n \\midrule\n Multi-Model Learning & Oscar & Transformer layer to jointly process triplet representation of image-text [words, tags, features], Masked tokens to represent text data. & 2D Image & Captions, Class labels, Object tags & Negative log-likelihood of masked tokens, Contrastive binary cross-entropy \\\\\n \\midrule\n 3D Classification/Segmentation & PT & Point Transformer block, Transition down block to reduce cardinality of the point set, Transition up for dense prediction tasks. & CAD models, 3D object part segmentation & Object and shape categories & Cross-entropy\\\\\n \\midrule\n 3D Mesh Reconstruction & METRO & Progressive dimensionality reduction across Transformer layers, Positional Encoding with 3D joint and 3D vertex coordinates, Masked vertex/joint modeling. & 2D Image & 3D Mesh + Human Pose & $L_1$ loss on mesh vertices and joints in 3D and 2D projection.\\\\\n \\midrule\n Vision and Language Navigation & Chen \\etal & Uni-modal encoders on language and map inputs followed by a cross-modal transformer, Trajectory position encodings in the map encoder. & Instruction text + RGBD panorama + Topological Environment Map & Navigation Plan & Cross-entropy over nodes and \\texttt{[stop]} action\\\\\n \\midrule\n Referring Image Segmentation & CMSA & Multimodal feature, Cross-modal self-attention on multiple levels and their fusion using learned gates. & 2D Image + Language expression & Segmentation mask & Binary cross-entropy loss \\\\\n \\midrule\n Video Classification & Lee \\etal & Operates on real-valued audio-visual signals instead of tokens, Contrastive learning for pre-training, End-to-end multimodal transformer learning. & Audio-Visual & Activity labels & Contrastive InfoNCE loss and Binary cross-entropy \\\\\n \\bottomrule[0.1em]\n \\end{tabular}}\n \\vspace{0.4em}\n \\caption{A summary of key design choices adopted in different variants of transformers for a representative set of computer vision applications. 
The main changes relate to specific loss function choices, architectural modifications, different position embeddings and variations in input data modalities. }\n \\label{tab:key_features}\n\\end{table*}\n\\begin{table*}[hbp]\n \\centering\\small\n \\setlength{\\tabcolsep}{4pt}\n \\scalebox{0.75}{\n \\begin{tabular}{x{1.5cm} p{2cm} p{1.5cm} p{1.5cm} p{1.5cm} p{5cm} p{5cm}}\n \\toprule[0.15em]\n \\rowcolor{mygray} \\textbf{Task }& \\textbf{Method} & \\textbf{Metric} & \\hspace{-0.3cm} \\textbf{Dataset} & \\hspace{-0.5cm} \\textbf{Performance} & \\hspace{1.2cm} \\textbf{Highlights} & \\hspace{1.2cm} \\textbf{Limitations} \\\\\n \\midrule[0.1em]\n Image Classification \n &\\pb{ViT \\\\ ICLR'21}\n &Top-1 Acc.\n &ImageNet\n &88.55 \n &\\textbf{a)} First application of Transformer (global self-attention) directly on image patches, \\textbf{b)} Convolution-free network architecture, \\textbf{c)} Outperforms CNN models such as ResNet. \n & \\textbf{a)} Requires training on large-scale data \\eg, 300-Million images, \\textbf{b)} Requires careful transfer learning to the new task, \\textbf{c)} Requires large model with 632-Million parameters to achieve SOTA results. \\\\\\cmidrule{2-7}\n & \\pb{DeiT \\\\ arXiv'20}\n & Top-1 Acc. \n & ImageNet \n & 83.10 \n & \\textbf{a)} Successfully trains Transformer on ImageNet only, \\textbf{b)} Introduces attention-based distillation method. \\textbf{c)} Produces competitive performance with small (86-Million parameters) Transformers. \n & \\textbf{a)} Requires access to pretrained CNN based teacher model thus performance depends on the quality of the teacher model. \\\\ \\midrule\n & \\pb{Swin-T \\\\ arXiv'21}\n & Top-1 Acc. 
\n & ImageNet \n & 84.5 \n & \\textbf{a)} Provides a general purpose backbone for different vision tasks e.g., classification, detection and segmentation \\textbf{b)} A hierarchical design using shifted-windows operation.\n & \\textbf{a)} Hard to train from scratch on smaller datasets \\textbf{b)} Quadratic compute complexity inherent to the self-attention operation. \\\\ \\midrule\n Low-Shot Learning \n & \\pb{CT \\\\ NeurIPS'20 }\n & Top-1 Acc. \n & \\pb{ImageNet \\\\ COCO} \n & \\pb{62.25 \\\\ 60.35}\n & \\textbf{a)} Self-supervised pre-training mechanism that does not need manual labels, \\textbf{b)} Dynamic inference using Transformer achieving stat-of-the-art results.\n & Proposed algorithm is limited in its capacity to perform on datasets that lack spatial details such as texture. \\\\ \\midrule\n Object Detection \n &\\pb{ DETR \\\\ ECCV'20 }\n & AP \n & COCO \n & 44.9 \n & \\textbf{a)} Use of Transformer allows end-to-end training pipeline for object detection,\\textbf{ b)} Removes the need for hand-crafted post-processing steps. \n & \\textbf{a)} Performs poorly on small objects, \\textbf{b)} Requires long training time to converge. \\\\ \\cmidrule{2-7}\n & \\pb{ D-DETR \\\\ ICLR'21} \n & AP \n & COCO \n & 43.8 \n & \\textbf{ a)} Achieves better performance on small objects than DETR ,\\textbf{ b)} Faster convergence than DETR \n & Obtain SOTA results with 52.3 AP but with two stage detector design and test time augmentations. \\\\\\midrule\n Image Colorization \n & \\pb{\n ColTran \\\\ ICLR'21} \n & FID \n & ImageNet \n & 19.71 \n & \\textbf{a)} First successful application of Transformer to image colorization, \\textbf{b)} Achieves SOTA FID score. \n & \\textbf{a)} Lacks end-to-end training, \\textbf{b)} limited to images of size 256$\\times$256. \\\\ \\midrule\n Action Recognition \n & \\pb{ ST-TR \\\\arXiv'20}\n & Top-1 Acc. 
\n & NTU 60/120 \n & 94.0/84.7 \n & \\textbf{a)} Successfully applies Transformer to model relations between body joints both in spatial and temporal domain, \\textbf{b)} Achieves SOTA results. \n & Proposed Transformers do not process joints directly rather operate on features extracted by a CNN, thus the overall model is based on hand-crafted design. \\\\\n \\midrule\n Super-Resolution \n & \\pb{ TTSR \\\\ CVPR'20} \n & \\pb{ PSNR/ \\\\ SSIM }\n &\\pb{ CUFED5 \\\\ Sun80 \\\\ Urban100 \\\\Manga109} \n & \\pb{ 27.1 / 0.8\\\\30.0 / 0.81\\\\ 25.9 / 0.78 \\\\ 30.1 / 0.91 } \n & \\vspace{-0.7cm} \\textbf{a)} Achieves state-of-the-art super-resolution by using attention, \\textbf{b)} Novel Transformer inspired architectures that can process multi-scale features. \n & \\vspace{-0.7cm} \\textbf{a)} Proposed Transformer does not process images directly but features extracted by a convolution based network, \\textbf{b)} Model with large number of trainable parameters, and \\textbf{c)} Compute intensive.\\\\\n \\midrule\n Multi-Model Learning & \\pb{ViLBERT \\\\ NeurIPS'19} \n & \\pb{ Acc./ \\\\mAP ($R@1$) } \n & \\pb{ VQA /\\\\ Retrieval\\\\ } \n & 70.6/ 58.2 \n & \\vspace{-0.5cm} \\textbf{a)} Proposed Transformer architecture can combine text and visual information to understand inter-task dependencies, \\textbf{b)} Achieves pre-training on unlabelled dataset. 
\n & \\vspace{-0.5cm} \\textbf{a)} Requires large amount of data for pre-training,\\textbf{ b)} Requires fine tuning to the new task.\\\\\n \\cmidrule{2-7}\n & \\pb{Oscar \\\\ ECCV'20}\n & \\pb{ Acc./ \\\\mAP ($R@1$)} \n & \\pb{ VQA /\\\\ COCO} \n & \\pb{80.37/57.5}\n & \\textbf{a)} Exploit novel supervisory signal via object tags to achieve text and image alignment, \\textbf{b)} Achieves state-of-the-art results.\n & Requires extra supervision through pre-trained object detectors thus performance is dependent on the quality of object detectors.\\\\\n \\cmidrule{2-7}\n & \\pb{ UNITER \\\\ECCV'20 } \n & \\pb{Acc./\\\\Avg. ($R@1/5/10$)} \n &\\pb{ VQA /\\\\ Flickr30K } \n & 72.47/83.72\n & \\vspace{-0.5cm} Learns fine-grained relation alignment between text and images\n & \\vspace{-0.5cm} Requires large multi-task datasets for Transformer training which lead to high computational cost. \\\\\n \\midrule\n 3D Analysis \n & \\pb{ Point Transformer \\\\ arXiv'20}\n & \\pb{Top-1 Acc.\\\\IoU}\n & \\pb{ModelNet40 } \n & \\pb{92.8\\\\85.9} \n & \\vspace{-0.5cm} \\textbf{a)} Transformer based attention capable to process unordered and unstructured point sets, \\textbf{b)} Permutation invariant architecture.\n & \\vspace{-0.5cm} \\textbf{a)} Only moderate improvements over previous SOTA, \n \\textbf{b)} Large number of trainable parameters around 6$\\times$ higher than PointNet++ . \\\\\n \\cmidrule{2-7}\n & \\pb{ METRO \\\\ arXiv'20}\n & \\pb{MPJPE\\\\PA-MPJPE\\\\MPVE}\n & 3DPW \n & \\pb{77.1\\\\ 47.9\\\\88.2}\n & \\vspace{-0.5cm} \\textbf{a)} Does not depend on parametric mesh models so easily extendable to different objects, \\textbf{b)} Achieves SOTA results using Transformers.\n & \\vspace{-0.5cm} Dependent on hand-crafted network design.\\\\\n \\bottomrule[0.1em]\n \\end{tabular} }\\vspace{0.2cm}\n \\caption{A summary of advantages and limitations of different Transformers based methods in different Tasks. 
(CT: Cross Transformers, AP: Average Precision, mAP: mean AP, IoU: Intersection over Union, FID: Fréchet inception distance, MPJPE: Mean Per Joint Position Error, MPVE: Mean Per Vertex Error).} \n \\label{tab:performance_advantages}\n\\end{table*}\n\\begin{table*}[htp]\n\t\\centering \\small\n\t\t\\setlength{\\tabcolsep}{6pt}\n\t\t\\scalebox{0.85}{\n\t\t\\begin{tabular} {l c c c c}\n\t\t\t\\toprule[0.15em]\n\t\t\\rowcolor{mygray}\tMethod & \\#Param (M) & GFLOPs & Top-1 Acc (\\%) \\\\\n\t\\midrule[0.1em]\n\tResNet18~$\\star$ & 11.7 & 1.8 & 69.8 \\\\\n\tEfficientNet-B3~$\\star$ & 12.0 & 1.8 & 81.6 \\\\\n\tDeiT-T~ & 5.7 & 1.3 & 72.2 \\\\\n\tT2T-ViT$_t$-7~& 5.0 & 1.3 & 71.7 \\\\ \n\tLocalViT-T~ & 5.9 & 1.3 & 74.8 \\\\\n\tCrossViT-T~ & 6.9 & 1.6 & 73.4 \\\\\n\tPVTv1-T~ & 13.2 & 1.9 &75.1 \\\\\n\tResT-Lite~ & 10.5 & 1.4 & 77.2\\\\\n\tCaiT-XXX-24~ & 12.0 & 2.5 & 77.6 \\\\\n\tPVTv2-B1~ & 13.1 & 2.1 & 78.7 \\\\\n\tLv-ViT-T~ & 8.5 & -- & 79.1 \\\\\n\tRegionViT-T~ & 13.8 & 2.4 & 80.4\\\\\n\t\\midrule\n\tResNet50~$\\star$ &25.6 &4.1 &76.1 \\\\\n\tResNeXt50-32x4d~$\\star$ &25.0 &4.3 &77.6 \\\\\n\tRegNetY-4G~$\\star$ & 21.0 & 4.0 & 80.0 \\\\\n\tEfficientNet-B4~$\\star$ & 19.0 & 4.2 & 82.9 \\\\\n\tDeiT-S~ & 22.1 & 4.6 & 79.9 \\\\\n\tPVTv1-S~ & 24.5 & 3.8 & 79.8 \\\\\n\tLocalViT-S~ & 22.4 & 4.6 & 80.8 \\\\\n\tCrossViT-S~ & 26.7 & 5.6 & 81.0\\\\\n\tTNT-S~& 23.8 & 5.2 & 81.3 \\\\\n\tSwin-T~& 29.0 & 4.5 & 81.3 \\\\\n\tNesT-T~ & 17.0 & 5.8 & 81.5 \\\\\n\tT2T-ViT$_t$-14~ & 21.5 & 5.2 & 81.5 \\\\\n\tCvT-13~ & 20.0 & 4.5 & 81.6 \\\\\n\tResT-B~ & 30.3 & 4.3 & 81.6\\\\\n\tTwins-SVT-S~ & 24.0 & 2.8 & 81.7 \\\\\n\tPVTv2-B2-Li~ & 22.6 &3.9 & 82.1 \\\\\n\tRegionViT-S~ & 30.6 &5.6 & 82.5\\\\\n\tLv-ViT-S~ & 26.0 & 6.6 & 83.3 \\\\\n \\bottomrule[0.15em]\n\t\\end{tabular}}\n\t\\scalebox{0.865}{\n\t\\begin{tabular} {l c c c c}\n\t\t\t\\toprule[0.15em]\n\t\t\\rowcolor{mygray}\tMethod & \\#Param (M) & GFLOPs & Top-1 Acc (\\%) \\\\\n\t\\midrule[0.1em]\n\tResNet101~ $\\star$&44.7 & 7.9 
&77.4\\\\\n\tResNeXt101-32x4d~$\\star$ & 44.2 & 8.0 &78.8 \\\\\n\tRegNetY-8G~$\\star$ & 39.0 & 8.0 & 81.7 \\\\\n\tEfficientNet-B5~ $\\star$& 30.0 & 9.9 & 83.6 \\\\\n\tCvT-21~ & 32.0 & 7.1 & 82.5 \\\\\n\tCaiT-S-24~ & 32.2 &9.4 & 82.7 \\\\ \n\tT2T-ViT$_t$-19~ & 39.0 & 9.8 & 81.4 \\\\\n\tPVTv1-M~ & 44.2 & 6.7 & 81.2\\\\\n\tPVTv2-B3~ & 45.2 & 6.9 & 83.2 \\\\\n\tNesT-S~ & 38.0 & 10.4 & 83.3 \\\\\n\t\\midrule\n\tResNet152~ $\\star$& 60.2 & 11.6 & 78.3 \\\\\n\tCaiT-S-36~ & 48.0 & 13.9 & 83.3 \\\\\n\tT2T-ViT$_t$-24~ & 64.0 & 15.0 & 82.2 \\\\\n\tPVTv1-L~& 61.4 & 9.8 & 81.7 \\\\\n\tTNT-B~ & 66.0 & 14.1 & 82.8 \\\\\n\tSwin-S~ & 50.0 & 8.7 & 83.0 \\\\\n\tTwins-SVT-B~ & 56.0 & 8.3 & 83.2 \\\\\n RegionViT-B~ & 72.7 & 13.0 & 83.3\\\\\n PVTv2-B4~ & 62.6 & 10.1 & 83.6 \\\\\n\t\\midrule\n\tResNeXt101-64x4d~ $\\star$ & 83.5 & 15.6 & 79.6\\\\\n\tRegNetY-16G~ $\\star$ & 84.0 & 16.0 & 82.9\\\\ \n\tEfficientNet-B6~ $\\star$ & 43.0 & 19.0 & 84.0 \\\\\n\tNesT-B~ & 68.0 & 17.9 & 83.8\\\\\n\tViT-B/16~ & 86.6 & 17.6 & 79.8 \\\\\n\tDeiT-B/16~ & 86.6 & 17.6 & 81.8 \\\\\n\tSwin-B~ & 88.0 & 15.4 & 83.3 \\\\\n\tTwins-SVT-L~& 99.2 & 14.8 & 83.7 \\\\\n\tPVTv2-B5~ & 82.0 & 11.8 & 83.8\\\\\n\tLv-ViT-M~ & 56.0 & 16.0 & 84.1 \\\\\n\t\\bottomrule[0.15em]\n\t\\end{tabular}}\n\t\\caption{\\sk{A Comparative analysis between different vision transformer and CNN models in terms of their parameter complexity and top-1 (\\%) accuracy on ImageNet validation set. For a direct comparison, we consider models that are trained on ImageNet from scratch on input of size 224x224. $\\star$ denotes pure CNN-based methods.}} \n\t\\label{tab:parameters}\n\\end{table*}\nIn the language domain, recent works focus on reducing the high complexity of Transformer models (basically arising from the self-attention mechanism where a token's representation is updated by considering all tokens from the previous layer). For example, explore selective or sparse attention to previous layer tokens while updating each next layer token. 
Linformer reduces the complexity of the standard self-attention operation from $\mathcal{O}(n^2)$ to $\mathcal{O}(n)$ (both in time and memory requirements). The main idea is to show that a low-rank matrix is sufficient to model the self-attention mechanism. The Reformer model 
 employed locality-sensitive hashing (LSH) to minimize the complexity of self-attention from $\mathcal{O}(n^2)$ to $\mathcal{O}(n \log n)$. \sk{In a similar pursuit, the recent Lambda Networks propose to model local context as a linear function, which helps reduce the complexity of self-attention . These linear functions (lambdas) are applied to the input query to model contextual relationships between pixels.}
Vyas \etal developed an efficient \emph{cluster attention} that approximates the original self-attention to deal with large input sequences. The cluster attention groups queries into clusters and then computes attention between cluster centers (instead of attention between all the queries, which leads to quadratic complexity). The main idea is that queries close in the Euclidean space should have similar attention distributions. With a fixed number of clusters, this intuition helps reduce the quadratic complexity to a linear complexity of $\mathcal{O}(nc)$ with respect to the input sequence length $n$ (where $c$ is the number of clusters). We refer interested readers to a survey on efficient Transformers in NLP .
Similar to the NLP domain, computer vision models also suffer from the high computational cost of Transformer models. For example, image generators that are based on sequence-based Transformers (\eg, iGPT) have a high compute cost limiting their applicability to high-resolution inputs. {The time and memory cost of the core self-attention operation in Transformers increases quadratically with the number of patches, i.e., $\mathcal{O}(n^2)$ for $n$ image patches (in some applications, e.g., low-level vision, $n=H \times W$ where $H,W$ denote the height and width of the image).
This is a major drawback of existing Transformers that hinders their application to most tasks involving high-resolution (HR) images, such as object detection and segmentation (in high-level vision), and super-resolution, deblurring, denoising, etc. (in low-level vision). Numerous methods have been proposed that make special design choices to perform self-attention more `efficiently', for instance employing pooling/downsampling in self-attention~, local window-based attention~, axial-attention~, low-rank projection attention~, kernelizable attention~, and similarity-clustering based methods~. However, almost all of these approaches either come with a trade-off between complexity and accuracy, require special hardware specifications or are still not applicable to very large images. Therefore, there is a pressing need to develop an efficient self-attention mechanism that can be applied to HR images on resource-limited systems without compromising accuracy. It will be interesting to explore how existing models can be extended to high-dimensional cases \\eg, using a \\emph{multi-scale transformer} design with a somewhat local context modeling.} By inducing inductive biases based on our understanding of the visual learning tasks (e.g., spatial relationships in the local neighbourhood), the high computational cost can be reduced. 
Similarly, using sparse attention maps modeled with low-rank matrix factorization can also help reduce the computational cost .
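To illustrate the low-rank idea mentioned above, the following NumPy sketch projects the sequence axis of the keys and values down to $r \ll n$ rows before computing attention, so the attention matrix has shape $n \times r$ instead of $n \times n$. This is a simplified, single-head version of the Linformer-style factorization; the random projection here stands in for the learned projection matrix, and no learned query/key/value transforms are included:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def lowrank_attention(q, k, v, proj):
    # Project the sequence axis of keys/values: (n, d) -> (r, d).
    k_small, v_small = proj @ k, proj @ v
    # The attention matrix is (n, r) instead of (n, n): O(n * r) time/memory.
    scores = q @ k_small.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v_small

rng = np.random.default_rng(0)
n, r, d = 4096, 64, 32                  # e.g. n = H * W image patches, r << n
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
proj = rng.standard_normal((r, n)) / np.sqrt(n)  # learned in the real model
out = lowrank_attention(q, k, v, proj)
print(out.shape)
```

For a $64 \times 64$ patch grid ($n = 4096$) and $r = 64$, the attention matrix shrinks from $\sim$16.8M to $\sim$0.26M entries, which is precisely the kind of saving needed for high-resolution inputs.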
\mh{By incorporating CNN-like feature hierarchies in ViTs to effectively capture local image cues, ViTs (e.g., CCT , NesT ) can be trained from scratch even on small-scale datasets (e.g., CIFAR-10).
Another approach to data-efficient training of ViTs is proposed in . The authors show that by smoothing the local loss surface using the sharpness-aware minimizer (SAM) , ViTs can be trained with a simple data augmentation scheme (random crop and horizontal flip) , instead of employing compute-intensive strong data augmentation strategies, and can outperform their counterpart ResNet models.}
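The SAM update mentioned above can be sketched in a few lines: first ascend to an approximate worst-case point within a radius $\rho$ around the current weights, then descend using the gradient taken there. The sketch below applies the update to a toy quadratic objective purely to show the two-step rule; it is not the ViT training recipe itself, and `lr`/`rho` values are illustrative defaults:

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware minimization step on parameters w."""
    g = grad_fn(w)
    # 1) Ascend to the (approximate) worst-case neighbour within radius rho.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # 2) Descend using the gradient evaluated at the perturbed weights.
    return w - lr * grad_fn(w + eps)

# Toy objective: L(w) = 0.5 * ||w||^2, whose gradient is simply w.
grad_fn = lambda w: w
w = np.array([3.0, -4.0])
for _ in range(200):
    w = sam_step(w, grad_fn)
print(np.linalg.norm(w))  # a small residual near the minimum
```

Because the descent gradient is evaluated at the perturbed point $w + \epsilon$, flat minima (where the gradient changes little in a neighbourhood) are favored over sharp ones, which is the smoothing effect the text refers to.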
A recent work rearranges the spatially close tokens to better model relationships in spatially proximal locations. \sk{Token distillation from pre-trained CNN models has also been used as a remedy to inject domain biases in the representations. One may argue that architectures like Transformers should remain generic so as to be directly applicable across domains; however, we note that the high computational and time cost of pre-training such models demands novel design strategies to make their training more affordable on vision problems.}
\mh{While Neural Architecture Search (NAS) has been well explored for CNNs to find an optimized architecture, it is relatively less explored for Transformers (even for language transformers ). Chen \etal propose a one-shot NAS for vision transformers, called AutoFormer. BossNAS searches for a hybrid architecture (CNN and Transformer).} \sk{Another recent effort studies the trade-off between global and local information in Transformers in the context of vision applications .
It will be insightful to further explore the domain-specific design choices (e.g., the contrasting requirements between language and vision domains) using NAS to design more efficient and lightweight models similar to CNNs .}", "id": "2f2239c2-3571-454d-a7e3-686571020f68", "level": "subsection", "origin_cites_number": 6, "parent_id": "640cdeab-47b5-475b-8e86-578eb18e1ffa", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "\\sk{Open Challenges" ], [ "subsection", "Neural Architecture Search for ViTs " ] ], "subsections": [], "title": "Neural Architecture Search for ViTs " }, { "cite_extract_rate": 1, "cites": [ 4484, 5808, 4849, 5807 ], "content": "\\sk{Through an extensive set of carefully designed experiments, Naseer \\etal investigate multiple intriguing properties of ViTs in terms of their generalization and robustness. They show that, compared with CNNs, ViTs demonstrate strong robustness against texture changes and severe occlusions, \\eg ViTs retain up to 60\\% top-1 accuracy on ImageNet once 80\\% of the image content is randomly occluded. \n}\nGiven the strong performance of Transformer architectures, it is interesting and critical to interpret their decisions, \\eg, by visualizing relevant regions in an image for a given classification decision. \nThe main challenge is that the attention originating in each layer gets inter-mixed in the subsequent layers in a complex manner, making it difficult to visualize the relative contribution of input tokens towards final predictions. This is an open problem; however, some recent works target enhanced interpretability of Transformers and report encouraging results. Attention roll-out and attention flow methods were proposed in to estimate accurate attentions. However, this method functions in an ad-hoc manner and makes simplistic assumptions \\eg, input tokens are linearly combined using attention weights across the layers. 
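The roll-out computation just described is compact enough to sketch; a minimal, hypothetical NumPy illustration (head-averaged attention maps as input and the 0.5 residual weighting are assumptions of this sketch, not details taken from the cited works):

```python
import numpy as np

def attention_rollout(attentions):
    # Roll-out style attribution: average each layer attention map with the
    # identity (to model the residual connection), re-normalize the rows,
    # then compose layers so output tokens are attributed to input tokens.
    n = attentions[0].shape[0]
    rollout = np.eye(n)
    for A in attentions:
        A = 0.5 * (A + np.eye(n))
        A = A / A.sum(axis=-1, keepdims=True)
        rollout = A @ rollout
    return rollout
```

Row-stochasticity is preserved at every layer, so each row of the result can be read as a distribution over input tokens for the corresponding output token.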
Chefer \\etal~ note that the attention scores obtained directly via the self-attention process (encoding relationships between tokens) or reassignments in do not provide an optimal solution. As an alternative, they propose to assign and propagate \\emph{relevancy scores} in the Transformer network such that the sum of relevancy is constant throughout the network. Their design can handle both the positive and negative attributions experienced in the self-attention layer. The proposed framework has an added advantage of being able to provide class-specific visualizations. Despite these seminal works, visualizing and interpreting Transformers is an unsolved problem, and methods are needed to obtain spatially precise activation-specific visualizations. Further progress in this direction can help in better understanding the Transformer models and diagnosing any erroneous behaviors and biases in the decision process. It can also guide the design of novel architectures that avoid such biases.", "id": "607185d4-e576-4936-9853-2cf434f7e8c6", "level": "subsection", "origin_cites_number": 4, "parent_id": "640cdeab-47b5-475b-8e86-578eb18e1ffa", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "\\sk{Open Challenges" ], [ "subsection", "Interpretability of Transformers" ] ], "subsections": [], "title": "Interpretability of Transformers" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 8003, 8955, 543, 826, 4747 ], "content": "Large-scale Transformer networks can have intensive power and computation requirements, hindering their deployment on edge devices and in resource-constrained environments such as internet-of-things (IoT) platforms. Some recent efforts have been reported to compress and accelerate NLP models on embedded systems such as FPGAs . 
Li \\etal used an enhanced block-circulant matrix-based representation to compress NLP models and proposed a new Field Programmable Gate Array (FPGA) architecture design to efficiently manage resources for high throughput and low latency. They could achieve a 27x improvement in performance (throughput measured in FPS), 3x reduced power consumption, and 81x better energy efficiency relative to a CPU for the RoBERTa model . Towards this goal, another work proposed designing Hardware-Aware Transformers (HAT) using neural architecture search strategies . Specifically, a SuperTransformer model is first trained for performance approximation, which can estimate a model's performance without fully training it. This model comprises the largest possible model in the search space while sharing weights between common parts. Eventually, an evolutionary search is performed considering the hardware latency constraints to find a suitable SubTransformer model for a target hardware platform (\\eg, IoT device, GPU, CPU).\nHowever, such hardware-efficient designs are currently lacking for vision Transformers, which would enable their seamless deployment in resource-constrained devices. 
Further, the search cost of the evolutionary algorithms remains significant, with an associated environmental impact due to CO2 emissions.\n\\sk{", "id": "72cba890-d3c3-40d4-9271-a4c09ae2f01b", "level": "subsection", "origin_cites_number": 6, "parent_id": "640cdeab-47b5-475b-8e86-578eb18e1ffa", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "\\sk{Open Challenges" ], [ "subsection", "Hardware Efficient Designs" ] ], "subsections": [], "title": "Hardware Efficient Designs" }, { "cite_extract_rate": 1, "cites": [ 4862, 7840 ], "content": "Since Transformers provide a unified design to process different modalities, recent efforts also focus on proposing more generic, general-purpose reasoning systems based on Transformers.\nInspired by biological systems that can process information from a diverse range of modalities, the Perceiver model aims to learn a unified model that can process any given input modality without making domain-specific architectural assumptions. In order to scale to high-dimensional inputs, Perceiver uses an asymmetric cross-attention method to distill input information into low-dimensional latent bottleneck features. Once the features are distilled into a compact and fixed-dimensional form, regular Transformer blocks are applied in the latent space. The original Perceiver model shows performance competitive with ResNets and ViTs on image classification and can process 3D data, audio, images, video or their combinations. However, this model can only generate fixed outputs, e.g., class probabilities. A recent improvement called Perceiver IO aims to learn models with both flexible inputs as well as arbitrarily sized outputs. This allows application to problems that demand structured outputs, such as natural language tasks and visual comprehension. 
While these models avoid modality-dependent architectural choices, the learning itself still involves modality-dependent choices, e.g., specific augmentations or positional encodings. An interesting and open future direction is to achieve total modality-agnosticism in the learning pipeline. }", "id": "080b15a8-8d47-4f62-bf06-9bd4dc67ff0b", "level": "subsection", "origin_cites_number": 2, "parent_id": "640cdeab-47b5-475b-8e86-578eb18e1ffa", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "\\sk{Open Challenges" ], [ "subsection", "Towards Integrating All Modalities" ] ], "subsections": [], "title": "Towards Integrating All Modalities" }, { "cite_extract_rate": 0, "cites": [], "content": "Attention has played a key role in delivering efficient and accurate computer vision systems, while simultaneously providing insights into the function of deep neural networks. This survey reviews the self-attention approaches and specifically focuses on the Transformer and bi-directional encoding architectures that are built on the principle of self-attention. We first cover fundamental concepts pertaining to self-attention architectures and later provide an in-depth analysis of competing approaches for a broad range of computer vision applications. Specifically, we include state-of-the-art self-attention models for image recognition, object detection, semantic and instance segmentation, video analysis and classification, visual question answering, visual commonsense reasoning, image captioning, vision-language navigation, clustering, few-shot learning, and 3D data analysis. We systematically highlight the key strengths and limitations of the existing methods and particularly elaborate on the important future research directions. With its specific focus on computer vision tasks, this survey provides a unique view of the recent progress in self-attention and Transformer-based methods. 
We hope this effort will drive further interest in the vision community to leverage the potential of Transformer models and improve on their current limitations \\eg, reducing their carbon footprint.\n\\ifCLASSOPTIONcompsoc\n \\section*{Acknowledgments}\n\\else\n \\section*{Acknowledgment}\n\\fi\n{\\footnotesize The authors would like to thank Tim Prangemeier (TU Darmstadt), Luowei Zhou (Microsoft Research), Jason Corso (University of Michigan), Pichao Wang (Alibaba Group), Yuqing Wang (Meituan), Alex Meinke (Uni-Tuebingen), Irwan Bello (Google Brain) and Manoj Kumar (Google Brain) for their helpful feedback on the survey. We would also like to thank Mohamed Afham for his help with a figure. }\n\\bibliographystyle{ieeetr}\n\\bibliography{eg_bib}\n\\end{document}", "id": "a948af98-2404-4514-9b8b-7cfeedc952e4", "level": "section", "origin_cites_number": 0, "parent_id": "1d8963e3-819a-45e0-946b-089d3b36ab2c", "prefix_titles": [ [ "title", "Transformers in Vision: A Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
57
[ 679, 166, 4830, 7590, 7993, 707, 2012, 7360, 7302, 7070, 826, 5764, 38, 5762, 2635, 4829, 768, 7, 7373, 732, 728, 5763, 9, 8454, 4766, 5765, 7333, 7194, 4782, 1501, 4779, 5766, 7994, 722, 2017, 126, 2506, 8569, 7585, 2502, 2503, 1272, 5767, 5768, 9141, 2554, 4769, 2589, 4099, 1254, 322, 97, 57, 5769, 2576, 1283, 486, 7054, 5770, 1733, 7995, 7996, 314, 5771, 4738, 5772, 7367, 4789, 7167, 2579, 107, 2519, 8950, 5773, 5774, 4781, 7848, 8842, 7298, 4778, 5775, 5776, 4793, 2577, 4794, 1493, 4795, 5777, 5778, 7997, 2581, 133, 4796, 2514, 122, 2530, 2517, 4800, 5779, 206, 4812, 799, 209, 802, 8429, 520, 7998, 5780, 4811, 8839, 1730, 507, 1003, 5782, 5781, 7000, 76, 2525, 2508, 81, 5680, 2526, 4765, 7999, 5783, 5784, 4777, 5785, 493, 7258, 480, 5786, 483, 496, 482, 1484, 2015, 5337, 1278, 5788, 7796, 5787, 5791, 8952, 5790, 2899, 8951, 5293, 1639, 5240, 5789, 5257, 1279, 5352, 2023, 2011, 8953, 8000, 5793, 5792, 5795, 8838, 5794, 4764, 5329, 1470, 5255, 2901, 8001, 5797, 2907, 4776, 5796, 5798, 5799, 5800, 5801, 1523, 3639, 8954, 4826, 8002, 4823, 7374, 8434, 8384, 4784, 4780, 5803, 5804, 1182, 7366, 502, 793, 7385, 1472, 5802, 1473, 794, 301, 5805, 5806, 7853, 4747, 1461, 4774, 7523, 4484, 5808, 4849, 5807, 8003, 8955, 543, 4862, 7840 ]
1.241151
[ "Görkem Algan", "Ilkay Ulusoy" ]
Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey
2019
2019-12-11T08:26:57Z
cs.LG
Image classification systems recently made a giant leap with the advancement of deep neural networks. However, these systems require an excessive amount of labeled data to be adequately trained. Gathering a correctly annotated dataset is not always feasible due to several factors, such as the high cost of the labeling process or the difficulty of correctly classifying data, even for the experts. Because of these practical challenges, label noise is a common problem in real-world datasets, and numerous methods to train deep neural networks with label noise are proposed in the literature. Although deep neural networks are known to be relatively robust to label noise, their tendency to overfit data makes them vulnerable to memorizing even random noise. Therefore, it is crucial to consider the existence of label noise and develop counter algorithms to mitigate its adverse effects in order to train deep neural networks efficiently. Even though an extensive survey of machine learning techniques under label noise exists, the literature lacks a comprehensive survey of methodologies centered explicitly around deep learning in the presence of noisy labels. This paper aims to present these algorithms while categorizing them into one of the two subgroups: noise model based and noise model free methods. Algorithms in the first group aim to estimate the noise structure and use this information to avoid the adverse effects of noisy labels. Differently, methods in the second group try to come up with inherently noise robust algorithms by using approaches like robust losses, regularizers or other learning paradigms.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "1da83c5d-fa27-4a53-a17c-f942c45c275a", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey" ] ], "subsections": [ "4eed61c4-9fdd-442f-8c8c-ebc1eee50d18", "e2235ea5-b76e-4028-994c-f569fa5a00ad", "6e51ea88-3a48-46e6-897f-ede2d47445fa", "48d73752-4351-4ff3-8d4c-865c3c729928", "a682ea6c-3e81-4982-b6b7-4cfdd43c2a1e", "8f59dd3e-4249-4d9b-aaf1-ecbaa320f1db" ], "title": "root" }, { "cite_extract_rate": 0.5454545454545451, "cites": [ 1223, 1749, 514, 3866, 802, 97, 810, 3630, 4165, 209, 4115, 1191 ], "content": "\\label{sec:1}\nRecent advancements in deep learning have led to great improvements in many domains, such as image classification , object detection , semantic segmentation and others. Despite their impressive ability for representation learning , it is shown that these powerful models can overfit even completely random noise . Various works are devoted to explaining this phenomenon , yet regularizing deep neural networks (DNNs) while avoiding overfitting remains an important challenge. This becomes even more crucial when the data contains noise. Therefore, various methods are proposed in the literature to train deep neural networks effectively in the presence of noise.\nThere are two kinds of noise in the literature: feature noise and label noise . Feature noise corresponds to corruption in the observed data features, while label noise means a change of the label from its actual class. Even though both noise types may cause a significant decrease in performance , label noise is considered to be more harmful and shown to deteriorate the performance of classification systems in a broad range of problems . 
This is due to several factors: the label is unique for each instance while features are multiple, and the importance of each feature varies while the label always has a significant impact . This work focuses on label noise; therefore, noise and label noise are used synonymously throughout the article.\nThe necessity of an excessive amount of labeled data for supervised learning is a significant drawback since it requires an expensive dataset collection and labeling process. To overcome this issue, cheaper alternatives have emerged. For example, an almost unlimited amount of data can be collected from the web via search engines or social media. Similarly, the labeling process can be crowdsourced with the help of systems like Amazon Mechanical Turk\\footnote{http://www.mturk.com}, Crowdflower\\footnote{http://crowdflower.com}, which decrease the cost of labeling notably. Another widely used approach is to label data with automated systems. However, all these approaches lead to a common problem: label noise. Besides these methods, label noise can occur even in the case of expert annotators. Labelers may lack the necessary experience, or data can be too complex to be correctly classified, even for the experts. Moreover, label noise can also be introduced to data for adversarial poisoning purposes . Being a natural outcome of the dataset collection and labeling process makes label noise robust algorithms an essential topic for the development of efficient computer vision systems. \nSupervised learning with label noise is an old phenomenon with three decades of history . An extensive survey of relatively old machine learning techniques under label noise is available . However, no work has been proposed to provide a comprehensive survey of classification methods centered around deep learning in the presence of label noise. This work focuses explicitly on filling this absence. 
Even though deep networks are considered to be relatively robust to label noise , they have an immense capacity to overfit data . Therefore, preventing DNNs from overfitting noisy data is very important, especially for fail-safe applications, such as automated medical diagnosis systems. Considering the significant success of deep learning over its alternatives, it is a topic of interest, and many works are presented in the literature. Throughout the paper, these methods are briefly explained and grouped to provide the reader with a clear overview of the literature.\nThis paper is organized as follows. Section \\ref{preliminaries} explains several concepts that are used throughout the paper. Proposed solutions in the literature are categorized into two major groups, and these methods are discussed in \\autoref{noisemodelbased} - \\autoref{noisemodelfree}. In \\autoref{experiments} widely used experimental setups are presented, and a leaderboard on a benchmarking dataset is provided. Finally, \\autoref{conclusion} concludes the paper.", "id": "4eed61c4-9fdd-442f-8c8c-ebc1eee50d18", "level": "section", "origin_cites_number": 22, "parent_id": "1da83c5d-fa27-4a53-a17c-f942c45c275a", "prefix_titles": [ [ "title", "Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{preliminaries}\nThis section introduces necessary concepts for a better understanding of the paper. Firstly, the problem statement for supervised learning in the presence of noisy labels is given. Secondly, the types of label noise are presented. 
Finally, sources of label noise are discussed.", "id": "e2235ea5-b76e-4028-994c-f569fa5a00ad", "level": "section", "origin_cites_number": 0, "parent_id": "1da83c5d-fa27-4a53-a17c-f942c45c275a", "prefix_titles": [ [ "title", "Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey" ], [ "section", "Preliminaries" ] ], "subsections": [ "7ef4b0b4-16b7-4ae5-a39d-40261c5e7e7b", "417c96e1-e709-450a-8184-e9c4bbb11452", "11fcd0cc-694c-4f82-9488-03daa690efe0", "dc1dc4ae-1244-4078-9b61-fd1d656be4b9" ], "title": "Preliminaries" }, { "cite_extract_rate": 0, "cites": [], "content": "Classical supervised learning consists of an input dataset $\\mathcal{S}=\\{(x_1,y_1),...,(x_N,y_N)\\}\\in (X,Y)^N$ drawn according to an unknown distribution $\\mathcal{D}$, over $(X,Y)$. The learning objective is to find the best mapping function $f:X \\rightarrow Y$ among a family of functions $\\mathcal{F}$, where each function is parametrized by $\\theta$.\nOne way of evaluating the performance of a classifier is the so-called loss function, denoted as $l:\\mathcal{R}\\times Y \\rightarrow \\mathcal{R^+}$. Given an example $(x_i,y_i) \\in (X,Y)$, $l(f_\\theta(x_i),y_i)$ evaluates how good the classifier prediction is. Then, for any classifier $f$, the expected risk is defined as follows, where E denotes the expectation over distribution $\\mathcal{D}$.\n\\begin{equation}\n R_{l,\\mathcal{D}}(f_\\theta)=E_\\mathcal{D}[l(f_\\theta(x),y)]\n\\end{equation}\nSince it is not generally feasible to have complete knowledge of the distribution $\\mathcal{D}$, as an approximation, the empirical risk is used.\n\\begin{equation}\n \\hat{R}_{l,\\mathcal{D}}(f_\\theta)=\\dfrac{1}{N}\\sum_{i=1}^{N}l(f_\\theta(x_i),y_i)\n\\end{equation}\nVarious methods of learning a classifier may be seen as minimizing the empirical risk with respect to the network parameters. 
\n\\begin{equation}\n \\theta^\\star = \\underset{\\theta}{\\argmin}\\hat{R}_{l,\\mathcal{D}}(f_\\theta)\n\\end{equation}\nIn the presence of label noise, the dataset turns into $\\mathcal{S}_n=\\{(x_1,\\tilde{y}_1),...,(x_N,\\tilde{y}_N)\\}\\in (X,Y)^N$ drawn according to a noisy distribution $\\mathcal{D}_n$, over $(X,Y)$. Then, risk minimization results in the following.\n\\begin{equation}\n \\theta_n^\\star = \\underset{\\theta}{\\argmin}\\hat{R}_{l,\\mathcal{D}_n}(f_\\theta)\n\\end{equation}\nAs a result, the parameters obtained by minimizing over $\\mathcal{D}_n$ are different from the desired optimal classifier parameters.\n\\begin{center}\n $\\theta^\\star \\neq \\theta_n^\\star$\n\\end{center}\nClassical supervised learning aims to find the best estimator parameters $\\theta^\\star$ for the given distribution $\\mathcal{D}$ while iterating over $\\mathcal{D}$. However, in the noisy label setup, the task is still finding $\\theta^\\star$ while working on distribution $\\mathcal{D}_n$. Therefore, classical risk minimization is insufficient in the presence of label noise since it would result in $\\theta_n^\\star$. As a result, variations of classical risk minimization methods are proposed in the literature, and they will be further evaluated in the upcoming sections.", "id": "7ef4b0b4-16b7-4ae5-a39d-40261c5e7e7b", "level": "subsection", "origin_cites_number": 0, "parent_id": "e2235ea5-b76e-4028-994c-f569fa5a00ad", "prefix_titles": [ [ "title", "Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey" ], [ "section", "Preliminaries" ], [ "subsection", "Problem Statement" ] ], "subsections": [], "title": "Problem Statement" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 4166, 4168, 4167 ], "content": "\\label{labelnoisemodels}\nA detailed taxonomy of label noise is provided in . In this work, we follow the same taxonomy with a slight abuse of notation. 
Label noise can be affected by three factors: the data features, the true label of the data, and the characteristics of the labeler. According to the dependence on these factors, label noise can be categorized into three subclasses.\n\\textit{Random noise} is totally random and depends on neither the instance features nor its true class. With a given probability $p_e$, the label is changed from its true class. \\textit{Y-dependent noise} is independent of image features but depends on its class; $p_e = p(e|y)$. That means data from a particular class are more likely to be mislabeled. For example, in a handwritten digit recognition task, \"3\" and \"8\" are much more likely to be confused with each other than \"3\" and \"5\". \\textit{XY-dependent noise} depends on both image features and its class; $p_e = p(e|x,y)$. As in the y-dependent case, objects from a particular class may be more likely to be mislabeled. Moreover, the chance of mislabeling may change according to the data features. If an instance has features similar to those of an instance from another class, it is more likely to be mislabeled. Generating xy-dependent synthetic noise is harder than for the previous two models; therefore, some works tried to provide a generic framework by either checking the complexity of data or their position in feature space . All these noise types are illustrated in \\autoref{fig:tsneplots}.\nThe case of multi-labeled data, in which each instance has multiple labels given by different annotators, is not considered here. In that scenario, works show that modeling each labeler's characteristics and using this information during training significantly boosts the performance . However, various characteristics of different labelers can be explained with the given noise models. For example, in a crowd-sourced dataset, some labelers can be total spammers who label with a random selection ; therefore, they can be modeled as random noise. 
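For intuition, the three noise models above can be simulated on a labeled dataset; a minimal, hypothetical NumPy sketch (the function names and the centroid-distance heuristic used for xy-dependent noise are illustrative assumptions of this sketch, not taken from the surveyed works):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_noise(y, n_classes, p_e):
    # Flip each label to a uniformly chosen *different* class with probability p_e.
    y = y.copy()
    flip = rng.random(len(y)) < p_e
    y[flip] = (y[flip] + rng.integers(1, n_classes, flip.sum())) % n_classes
    return y

def y_dependent_noise(y, transition):
    # transition[i, j] = p(noisy label j | true label i); sample per instance.
    return np.array([rng.choice(len(row), p=row) for row in transition[y]])

def xy_dependent_noise(X, y, p_e):
    # Flip the p_e fraction of samples closest (in feature space) to another
    # class centroid, assigning them that nearest other class as the label.
    centroids = np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])
    d = np.linalg.norm(X[:, None] - centroids[None], axis=-1)
    d[np.arange(len(y)), y] = np.inf        # ignore distance to own class
    idx = np.argsort(d.min(axis=1))[: int(p_e * len(y))]
    y = y.copy()
    y[idx] = d[idx].argmin(axis=1)          # nearest other class becomes label
    return y
```

Such generators are how the feature-space patterns in \autoref{fig:tsneplots} are typically produced: random and y-dependent noise scatter uniformly in feature space, while the centroid-based xy-dependent variant concentrates flips near class boundaries.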
On the other hand, labelers with better accuracies than random selection can be modeled by y-dependent or xy-dependent noise. As a result, the labeler's characteristics are not introduced as an extra ingredient in these definitions.\n\\begin{figure*}[h]\n \\centering\n \\includegraphics[width=\\textwidth]{noisetypes.png}\n \\caption{T-SNE plot of data distribution of MNIST dataset in feature space for 25\\% noise ratio. a) clean data b) random noise c) y-dependent noise which is still randomly distributed in feature domain d) xy-dependent noise in locally concentrated form e) xy-dependent noise that is concentrated on decision boundaries }\n \\label{fig:tsneplots}\n\\end{figure*}", "id": "417c96e1-e709-450a-8184-e9c4bbb11452", "level": "subsection", "origin_cites_number": 5, "parent_id": "e2235ea5-b76e-4028-994c-f569fa5a00ad", "prefix_titles": [ [ "title", "Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey" ], [ "section", "Preliminaries" ], [ "subsection", "Label Noise Models" ] ], "subsections": [], "title": "Label Noise Models" }, { "cite_extract_rate": 0.409090909090909, "cites": [ 4166, 4170, 4169, 4168, 3866, 4172, 4171, 4116, 1191 ], "content": "\\label{sourcesoflabelnoise}\nAs mentioned, label noise is a natural outcome of the dataset collection process and can occur in various domains, such as medical imaging , semantic segmentation , crowd-sourcing , social network tagging , financial analysis and many more. This work focuses on various solutions to such problems, but it may be helpful to investigate the causes of label noise to understand the phenomenon better.\nFirstly, with the availability of an immense amount of data on the web and social media, it is of great interest to the computer vision community to make use of it . Nevertheless, the labels of these data come from messy user tags or automated systems used by search engines. 
These processes of obtaining datasets are well known to result in noisy labels.\nSecondly, the dataset can be labeled by multiple experts, resulting in a multi-labeled dataset. Each labeler has a varying level of expertise, and their opinions may commonly conflict with each other, which results in the noisy label problem . There are several reasons to get data labeled by more than one expert. Opinions of multiple labelers can be used to double-check each other's predictions for challenging datasets, or crowd-sourcing platforms can be used to decrease the cost of labeling for big data. Despite its cheapness, labels obtained from non-experts are commonly noisy, with varying rates of error. Some labelers can even be total spammers who label by random selection .\nThirdly, data can be too complicated even for the experts in the field, e.g., medical imaging. For example, to collect gold standard validation data for retinal images, annotations are gathered from 6-8 different experts . This complexity can be due to the subjectiveness of the task for human experts or the lack of annotator experience. 
Considering the fields where accurate diagnosis is of crucial importance, overcoming this noise is of great interest.\nLastly, label noise can intentionally be injected for the purpose of regularization or data poisoning .", "id": "11fcd0cc-694c-4f82-9488-03daa690efe0", "level": "subsection", "origin_cites_number": 22, "parent_id": "e2235ea5-b76e-4028-994c-f569fa5a00ad", "prefix_titles": [ [ "title", "Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey" ], [ "section", "Preliminaries" ], [ "subsection", "Sources of Label Noise" ] ], "subsections": [], "title": "Sources of Label Noise" }, { "cite_extract_rate": 0.594827586206896, "cites": [ 7779, 4166, 7782, 4137, 3345, 4134, 301, 4186, 4168, 4184, 7772, 7162, 4180, 4145, 4128, 4171, 4178, 4119, 4131, 4187, 8736, 2277, 3340, 7783, 4175, 7191, 4141, 4126, 4156, 4142, 4174, 4181, 8737, 4138, 4135, 892, 8742, 7780, 8630, 4185, 4136, 4129, 795, 3342, 4182, 4177, 4121, 4173, 4130, 4139, 4238, 4183, 4152, 4253, 4133, 7781, 4188, 4140, 7133, 4132, 3454, 7775, 4179, 3453, 4143, 8743, 8741, 4176, 7774 ], "content": "\\label{methodologies}\nThere are many possible ways to group the proposed methods in the literature. For example, one possible way to distinguish algorithms is according to whether or not they need a noise-free subset of data. Alternatively, they can be divided according to the noise type they are dealing with, or the label type, such as singly-labeled or multi-labeled. However, these groupings are not helpful for understanding the main approaches behind the proposed algorithms; therefore, a different sectioning is proposed: noise model based and noise model free methods.\nNoise model based methods aim to model the noise structure so that this information can be used during training to overcome noisy labels. In general, approaches in this category aim to extract noise-free information contained within the dataset by either neglecting or de-emphasizing information coming from noisy samples. 
Furthermore, some methods attempt to reform the dataset by correcting noisy labels to increase the quality of the dataset for the classifier. The performance of these methods is heavily dependent on an accurate estimate of the underlying noise. The advantage of noise model based methods is the decoupling of classification and label noise estimation, which helps them to work with the classification algorithm at hand. Another advantage is that, when prior knowledge about the noise structure is available, noise model based methods can easily be head-started by inserting this extra information into the system.\nDifferently, noise model free methods aim to develop inherently noise robust strategies without explicit modeling of the noise structure. These approaches assume that the classifier is not too sensitive to the noise, and that performance degradation results from overfitting. Therefore, the main focus is given to overfit avoidance by regularizing the network training procedure.\nBoth of the mentioned approaches are discussed and further categorized in \\autoref{noisemodelbased} and \\autoref{noisemodelfree}. \\autoref{table:methods} presents all the mentioned methods to provide a clear picture as a whole. It should be noted that most of the time there are no sharp boundaries among the algorithms, and they may belong to more than one category. However, for the sake of integrity, they are placed in the subclass they most resemble.\n\\begin{singlespace}\n\\begin{table*}[]\n \\begin{tabular}{|l|l|}\n \\hline\n \\multirow{22}{*}{\\rotatebox[origin=c]{90}{\\textbf{\\nameref{noisemodelbased}}}} & \n \\begin{tabular}[c]{@{}l@{}}\\textbf{1. 
\\nameref{noisychannel}}\\\\ \n \\textit{a.\\nameref{noisychannelexplicit}}: predictions on noisy data , predictions on clean data \\\\\n easy data \\\\\n \\textit{b.\\nameref{noisychanneliterative}}: EM , fully connected layer , anchor point estimate \\\\\n Drichlet-distribution \\\\\n \\textit{c.\\nameref{noisychannelcomplex}}: noise type estimation , relevance estimation \n \\end{tabular}\\\\\\cline{2-2} \n & \\begin{tabular}[c]{@{}l@{}}\\textbf{2. \\nameref{labelnoisecleansing}}\\\\ \n \\textit{a.\\nameref{labelcleansingclean}}: train on clean set , ensemble , graph-based \\\\\n \\textit{b.\\nameref{labelcleansingcleannoisy}}: iteratively correct , correct for fine-tune \\\\\n \\textit{c.\\nameref{labelcleansingnoisy}}: calculate posterior , posterior with compatibility \\\\ consistency with model , ensemble , prototypes , quality embedding \\\\\n partial labels \n \\end{tabular}\\\\\\cline{2-2} \n & \\begin{tabular}[c]{@{}l@{}}\\textbf{3. \\nameref{datasetpruning}}\\\\ \n \\textit{a.Data pruning} network prediction based , ensemble of filters , according to noise rate \\\\\n transfer learning , cyclic state , K-means \\\\\n \\textit{b.Label pruning} semi-supervised learning , relabeling \n \\end{tabular}\\\\\\cline{2-2} \n & \\begin{tabular}[c]{@{}l@{}}\\textbf{4. \\nameref{samplechoosing}}\\\\ \n a.\\textit{\\nameref{curriculumlearning}}: Screening loss , teacher-student \\\\\n selecting uncertain samples , curriculum loss , data complexity \\\\\n consistency with model \\\\ \n b.\\textit{\\nameref{multipleclassifiers}}: Consistency of networks , co-teaching \n \\end{tabular}\\\\\\cline{2-2} \n & \\begin{tabular}[c]{@{}l@{}}\\textbf{5. \\nameref{sampleimportance}}\\\\ \n Meta task , siamese network , pLOF , abstention \\\\\n estimate noise rate , similarity loss , transfer learning , $\\theta$-distribution \n \\end{tabular}\\\\\\cline{2-2} \n & \\begin{tabular}[c]{@{}l@{}}\\textbf{6. 
\nameref{labelerquality}}\\ 
    EM, trace regularizer, crowd-layer, image difficulty estimate \\
    consistency with network, omitting probability variable \\
    softmax layer per labeler 
    \end{tabular}\\\hline
    \multirow{14}{*}{\rotatebox[origin=c]{90}{\textbf{\nameref{noisemodelfree}}}} & 
    \begin{tabular}[c]{@{}l@{}}\textbf{1. \nameref{robustlosses}}\\ 
    Non-convex loss functions, 0-1 loss surrogate, MAE, IMAE \\
    Generalized cross-entropy, symmetric loss, unbiased estimator, \\
    modified cross-entropy for omission, information theoretic loss \\
    linear-odd losses, classification calibrated losses, SGD with robust losses 
    \end{tabular}\\\cline{2-2} 
    & \begin{tabular}[c]{@{}l@{}}\textbf{2. \nameref{metalearning}}\\ 
    Choosing best methods, pumpout, noise tolerant parameter initialization, \\
    knowledge distillation, gradient magnitude adjustment, meta soft labels 
    \end{tabular}\\\cline{2-2} 
    & \begin{tabular}[c]{@{}l@{}}\textbf{3. \nameref{regularizers}}\\ 
    Dropout, adversarial training, mixup, label smoothing, \\
    pre-training, dropout on final layer, checking dimensionality, \\
    auxiliary image regularizer 
    \end{tabular}\\\cline{2-2} 
    & \begin{tabular}[c]{@{}l@{}}\textbf{4. \nameref{ensemblemethods}}\\ 
    LogitBoost\&BrownBoost, noise detection based AdaBoost, rBoost \\
    RBoost1\&RBoost2, robust multi-class AdaBoost 
    \end{tabular}\\\cline{2-2} 
    & \begin{tabular}[c]{@{}l@{}}\textbf{5. 
\nameref{others}}\\ 
    Complementary labels, autoencoder reconstruction error \\
    minimum covariance determinant, less noisy data, \\
    data quality, prototype learning, multiple instance learning 
    \end{tabular}\\\hline 
    \end{tabular}
    \caption{Existing methods to deal with label noise in the literature.}
    \label{table:methods}
\end{table*}
\end{singlespace}
Approaches belonging to this category are explained in the following subsections.", "id": "6e51ea88-3a48-46e6-897f-ede2d47445fa", "level": "section", "origin_cites_number": 0, "parent_id": "1da83c5d-fa27-4a53-a17c-f942c45c275a", "prefix_titles": [ [ "title", "Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey" ], [ "section", "Noise Model Based Methods" ] ], "subsections": [ "ff7b16cb-76ae-4fa0-9e65-b2494ca8b5bd", "46311c1d-0eda-477f-b3bb-af8c05c5cec7", "ca3a3889-4c87-4b50-9671-2b859b435b44", "1f132fef-34b6-4647-964b-3113bd5a48e6", "66c1ab62-f503-459d-aa3a-607aae88807a", "9700ff2a-e064-4ed7-9c11-038312198934", "e4eaf886-8871-4f75-97a7-d432936ad6ed" ], "title": "Noise Model Based Methods" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 7773 ], "content": "\\label{noisychannel}\nThe general setup for the noisy channel is illustrated in \\autoref{fig:noisychannel}. Methods belonging to this category minimize the following risk\n\\begin{equation}\n \\hat{R}_{l,\\mathcal{D}}(f)=\\dfrac{1}{N}\\sum_{i=1}^{N}l(Q(f_\\theta(x_i)),\\tilde{y_i})\n\\end{equation}\nwhere $Q(f_\\theta(x_i)) = p(\\tilde{y_i}|f_\\theta(x_i))$ is the mapping from network predictions to given noisy labels. If $Q$ adapts the noise structure $p(\\tilde{y}|y)$, then network will be forced to learn true mapping $p(y|x)$. \n$Q$ can be formulated with a \\textit{noise transition matrix} $T$ so that $Q(f_\\theta(x_i)) = Tf_\\theta(x_i)$ where each element of the matrix represents the transition probability of given true label to noisy label, $T_{ij}=p(\\tilde{y}=j|y=i)$. Since $T$ is composed of probabilities, weights coming from a single node should sum to one $\\sum_{j}T_{ij}=1$. This procedure of correcting predictions to match given label distribution is also called \\textit{loss-correction} . \nA common problem in noisy channel estimation is scalability. 
As the number of classes increases, the number of entries in the noise transition matrix grows quadratically, making it hard to estimate reliably. This can be partially avoided by allowing connections only among the most probable nodes , or predefined nodes . These restrictions are determined by human experts, which allows additional noise information to be inserted into the training procedure. 
The noisy channel is used only in the training phase. In the evaluation phase, the noisy channel is removed to obtain noise-free predictions of the base classifier. In these kinds of approaches, performance heavily depends on the accurate estimation of the noisy channel parameters; therefore, works mainly focus on estimating $Q$. Various ways of formulating the noisy channel are explained below.
\begin{figure}[h]
    \centering
    \includegraphics[width=\columnwidth]{noisychannel.png}
    \caption{Noise can be modeled as a noisy channel on top of the base classifier. The noisy channel adapts the characteristics of the noise so that the base classifier is fed with noise-free gradients during training.}
    \label{fig:noisychannel}
\end{figure}
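As a minimal illustration of this loss-correction scheme (a pure-Python sketch; the $2\times2$ transition matrix and the probability vectors are assumed values, not taken from any cited work):

```python
import math

def forward_correct(probs, T):
    """Map clean-class probabilities to noisy-label probabilities:
    p(noisy = j | x) = sum_i T[i][j] * p(clean = i | x)."""
    c = len(T)
    return [sum(T[i][j] * probs[i] for i in range(c)) for j in range(c)]

def cross_entropy(q, label):
    """Standard cross-entropy against a hard (possibly noisy) label."""
    return -math.log(q[label])

# Hypothetical 2-class setup with 20% symmetric noise.
T = [[0.8, 0.2],
     [0.2, 0.8]]
clean_pred = [0.9, 0.1]                      # base classifier's belief p(y|x)
noisy_pred = forward_correct(clean_pred, T)  # matched to the noisy label space
loss = cross_entropy(noisy_pred, 1)          # loss against a possibly noisy label
```

Because the loss is computed on the corrected predictions, the gradient flowing into the base classifier accounts for the noise structure encoded in $T$.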
Assuming the dataset is balanced in terms of clean representative samples and noisy samples, so that there exist samples for each class with $p(y=\tilde{y}_i|x_i)=1$, constructs $T$ based solely on the noisy class probability estimates of a pre-trained model, the so-called \textit{confusion matrix}. A similar approach is followed in ; however, the noise transition matrix is calculated from the network's confusion matrix on the clean subset of the data. Two datasets are gathered in , namely easy data and hard data. The classifier is first trained on the easy data to extract similarity relationships among classes. Afterward, the calculated similarity matrix is used as the noise transition matrix. Another method proposed in calculates the confusion matrix on both noisy data and clean data; the difference between these two confusion matrices then gives $T$.
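The confusion-matrix construction can be sketched as follows (a simplified illustration; the predictions of a hypothetical pre-trained model stand in for the unknown true labels):

```python
def confusion_transition(pred_labels, noisy_labels, num_classes):
    """Estimate T[i][j] = p(noisy = j | true = i) from a pre-trained
    model's predictions, used as a stand-in for the true labels."""
    counts = [[0] * num_classes for _ in range(num_classes)]
    for p, y in zip(pred_labels, noisy_labels):
        counts[p][y] += 1
    T = []
    for row in counts:
        total = sum(row)
        # Row-normalize so each row is a probability distribution.
        T.append([c / total if total else 1.0 / num_classes for c in row])
    return T

# Hypothetical model predictions vs. the noisy labels given in the dataset.
T = confusion_transition([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2)
```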
To prevent this additional layer from converging to the identity matrix and the base classifier from overfitting the noise, a weight decay regularizer is applied to this layer. suggests using class probability estimates on anchor points (data points that almost surely belong to a specific class) to construct the noise transition matrix. In the absence of a noise-free subset of data, anchor points are extracted from data points with high noisy class posterior probabilities. Then, the matrix is updated iteratively to minimize the loss during training. Instead of using softmax probabilities, models the noise transition matrix in Bayesian form by projecting it into a Dirichlet-distributed space.
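The anchor-point construction above can be sketched as follows (a simplified illustration with assumed posteriors; the actual methods refine $T$ iteratively during training rather than fixing it once):

```python
def anchor_point_transition(noisy_posteriors, num_classes):
    """For each class i, treat the sample with the highest noisy posterior
    p(noisy_y = i | x) as its anchor point; that sample's full posterior
    vector becomes row i of the transition matrix T."""
    T = []
    for i in range(num_classes):
        anchor = max(noisy_posteriors, key=lambda p: p[i])
        T.append(list(anchor))
    return T

# Hypothetical noisy-class posteriors produced by a trained model.
posteriors = [[0.9, 0.1], [0.7, 0.3], [0.2, 0.8]]
T = anchor_point_transition(posteriors, num_classes=2)
```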
Predicted labels are mapped to noisy labels with relevance taken into account. If the relevance is low, as in the case of noise, the classifier can still predict the true class and does not get penalized much for it.
From this point of view, label cleaning can be viewed as a dynamically evolving component of the system rather than a preprocessing step on the data. Such methods usually struggle to distinguish informative hard samples from those with noisy labels . As a result, they can end up removing too many samples or changing labels in a delusional way. Approaches for label cleaning can be separated according to whether or not they need clean data.
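A minimal sketch of such an iterative cleaning step, where $G(\tilde{y}_i,x_i)$ trusts the current model only above a confidence threshold (the 0.9 threshold is an assumed hyperparameter, not taken from any cited method):

```python
def clean_labels(noisy_labels, model_probs, threshold=0.9):
    """G(noisy_y, x): replace a given label only when the current model
    is confident in some class; otherwise keep the noisy label."""
    cleaned = []
    for y, probs in zip(noisy_labels, model_probs):
        top = max(range(len(probs)), key=probs.__getitem__)
        cleaned.append(top if probs[top] >= threshold else y)
    return cleaned

# Hypothetical posteriors from a partially trained base classifier.
probs = [[0.95, 0.05], [0.6, 0.4], [0.1, 0.9]]
labels = clean_labels([1, 1, 1], probs)  # uncertain middle label is kept
```

Repeating this step each epoch yields the iterative loop described above: better labels train a better classifier, which in turn cleans labels more accurately.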
In a graph-based approach is used, where a conditional random field extracts the relation between noisy labels and clean labels.
Afterward, the teacher predicts soft labels for noisy data, and the student is again trained on these soft labels for fine-tuning.", "id": "9bb5a874-af0f-4df0-a75a-43e1fe801ec1", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "46311c1d-0eda-477f-b3bb-af8c05c5cec7", "prefix_titles": [ [ "title", "Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey" ], [ "section", "Noise Model Based Methods" ], [ "subsection", "Label Noise Cleaning" ], [ "subsubsection", "Using data with both clean and noisy labels" ] ], "subsections": [], "title": "Using data with both clean and noisy labels" }, { "cite_extract_rate": 0.7000000000000001, "cites": [ 4138, 4135, 4186, 4156, 4181, 4184, 4187 ], "content": "\\label{labelcleansingnoisy} Noise-free data is not always available, so the primary approach in this situation is to estimate cleaner posterior label distribution incrementally. However, there is a possible undesired solution to this approach so that all labels are attained to a single class and base network predicting constant class, which would result in delusional top training accuracy. Therefore, additional regularizers are commonly used to make label posterior distribution even. A joint optimization framework for both training base classifier and propagating noisy labels to cleaner labels is presented in . Using expectation-maximization, both classifier parameters and label posterior distribution is estimated to minimize the loss. A similar approach is used in with additional compatibility loss conditioned on label posterior. Considering noisy labels are in the minority, this term assures posterior label distribution does not diverge too much from the given noisy label distribution so that majority of the clean label contribution is not lost. deploy a confidence policy where labels are determined by either network output or given noisy labels, depending on the confidence of the model's prediction. 
Arguing that, in the case of noisy labels, the model first learns correctly labeled data and then overfits to noisy data, aims to extract the probability of a sample being noisy from its loss value. To achieve this, the loss of each instance is fitted by a beta mixture model, which models the label noise in an unsupervised manner. proposes a two-stage approach. In the first stage, with any chosen inference algorithm, the ground truth labels are determined, and the data is divided into two subsets, noisy and clean. In the second stage, an ensemble of weak classifiers is trained on the clean data to predict the true labels of the noisy data. Afterward, these two subsets of data are merged to create the final enhanced dataset. constructs, for each class, a prototype that represents the deep feature distribution of the corresponding class. Corrected labels are then found by checking the similarity between data samples and prototypes. introduces a new parameter, namely the \textit{quality embedding}, which represents the trustworthiness of the data. Depending on two latent variables, the true class probability and the quality embedding, an additional network tries to extract each instance's true class. In a multi-labeled dataset, where each instance has multiple labels representing its content, some labels may be missing, resulting in partial labels. In the case of partial labels, uses one network to find and estimate easy missing labels and another network to be trained on this corrected data.
formulates video anomaly detection as a classification problem with label noise and trains a graph convolutional label noise cleaning network based on the features and temporal consistency of video snippets.
Then semi-supervised learning algorithms can be employed on the resultant dataset.", "id": "ca3a3889-4c87-4b50-9671-2b859b435b44", "level": "subsection", "origin_cites_number": 0, "parent_id": "6e51ea88-3a48-46e6-897f-ede2d47445fa", "prefix_titles": [ [ "title", "Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey" ], [ "section", "Noise Model Based Methods" ], [ "subsection", "Dataset Pruning" ] ], "subsections": [ "39942045-6350-433a-98f1-5356b4e8060f", "1bbb108f-91c4-4fec-897e-71a2bed5fdea" ], "title": "Dataset Pruning" }, { "cite_extract_rate": 0.375, "cites": [ 795, 7783, 8741 ], "content": "The most straightforward approach is to remove instances misclassified by the base network . uses an ensemble of filtering methods, where each of them assigns a noisiness level for each sample. Then, these predictions are combined, and data with the highest noisiness level predictions are removed. extends this work with label correction. If the majority of noise filters predict the same label for the noisy instance, it's label is corrected to the predicted label. Otherwise, it is removed from the dataset. In , with the help of a probabilistic classifier, training data is divided into two subsets: confidently clean and noisy. Noise rates are estimated according to the sizes of these subsets. Finally, relying on the base network's output confidence in data instances, the number of most unconfident samples is removed according to the estimated noise rate. In , transfer learning is used so that network trained on a clean dataset from a similar domain is fine-tuned on the noisy dataset for relabeling. Afterward, the network is again trained on relabeled data to re-sample the dataset to construct a final clean dataset. In , the learning rate is adjusted cyclicly to change network status between underfitting and overfitting. Since, while underfitted, noisy samples cause high loss, samples with large noise during this cyclic process are removed. 
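The second option can be sketched as follows (a simplified split rule based on model--label agreement; real methods use more elaborate noisiness estimates):

```python
def split_for_semi_supervised(samples, noisy_labels, model_preds):
    """Keep a label only where the model agrees with it; everything else
    joins an unlabeled pool for a semi-supervised learner."""
    labeled, unlabeled = [], []
    for x, y, p in zip(samples, noisy_labels, model_preds):
        if p == y:
            labeled.append((x, y))
        else:
            unlabeled.append(x)
    return labeled, unlabeled

# Hypothetical samples, given (noisy) labels, and current model predictions.
labeled, unlabeled = split_for_semi_supervised(
    ['a', 'b', 'c'], [0, 1, 0], [0, 0, 0])
```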
first trains a network on noisy data and extracts feature vectors using this model. Afterward, the data is clustered with the K-means algorithm running on the extracted features, and outliers are removed. provides a comparison of the performance of various noise-filtering methods for crowd-sourced datasets.
Assuming that correctly labeled data account for the majority, proposes splitting the dataset into labeled and unlabeled subgroups randomly. Then, labels are propagated to the unlabeled data using a similarity index among instances. This procedure is repeated to produce multiple labels per instance, and the final label is set by majority voting.
Since these methods operate outside of the existing system, they are easier to attach to the algorithm at hand by simply manipulating the input stream. However, it is vital to keep the balance so that the system does not ignore unnecessarily large quantities of data. Additionally, these methods prioritize low-loss samples, which results in slow learning, since hard informative samples are considered only in the later stages of training. Two major approaches in this group are discussed in the following subsections.
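A minimal sketch of a dynamic $V(x_i,y_i)$ based on the widely used small-loss criterion (the keep ratio and per-sample losses are assumed values):

```python
def select_small_loss(batch, losses, keep_ratio):
    """V(x, y) = 1 for the keep_ratio fraction of the batch with the
    smallest loss (assumed clean), 0 for the rest."""
    k = max(1, int(keep_ratio * len(batch)))
    keep = set(sorted(range(len(batch)), key=lambda i: losses[i])[:k])
    return [x for i, x in enumerate(batch) if i in keep]

# Hypothetical per-sample losses; a high loss hints at a noisy label.
selected = select_small_loss(
    ['a', 'b', 'c', 'd'], [0.1, 2.3, 0.4, 1.7], keep_ratio=0.5)
```

Calling this once per mini-batch, with losses recomputed by the current model, yields the continuously updated selection described above.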
Instead of using a predefined curriculum, the teacher constantly updates its curriculum depending on the student's outputs. Arguing that CL slows down learning, since it focuses on easy samples, suggests choosing uncertain samples that are sometimes mispredicted and sometimes predicted correctly during training. These samples are assumed to be probably not noisy, since noisy samples should be mispredicted all the time. Arguing that the 0-1 loss is hard to optimize, a \textit{curriculum loss}, which chooses samples with low loss values for loss calculation, is proposed as an upper bound of the 0-1 loss in . In , data is split into subgroups according to their complexity. Since less complex data groups are expected to have more clean labels, training starts from less complex data and moves to more complex instances as the network gets better. The next samples to be trained on can be chosen by checking the consistency of the label with the network prediction. In , if the label and the model prediction for a given sample are consistent, the sample is used in the training set. Otherwise, the model has the right to disagree. Iteratively, this provides better training data and a better model. However, there is a risk of the model being too skeptical and choosing labels in a delusional way; therefore, a balance of consistency should be established.
This is different from the teacher-student approach, since neither network supervises the other; rather, they help each other out. Multiple models can provide robustness, since the networks can correct each other's mistakes thanks to the differences in their learned representations. For this setup to work, the initialization of the classifiers is essential. They are most likely to be initialized with different subsets of the data. If they have the same weight initialization, no update ever happens, since the two networks always produce identical predictions and never conflict. In , the label is assumed to be noisy if both networks disagree with the given label, and an update on the model weights happens only when the predictions of the two networks conflict. The paradigm of \textit{co-teaching} is introduced in , where two networks select the next batch of data for each other. The next batch is chosen as the data batch that has small loss values according to the peer network. It is claimed that using one network accumulates the noise-related error, whereas two networks filter the noise error more successfully. The idea of co-teaching is further improved by iterating over data where the two networks disagree, to prevent the two networks from converging to each other as the number of epochs increases .
Another work using co-teaching first trains two networks on a selected subset for a given number of epochs and then moves to the full dataset .", "id": "59741d1f-f243-46b9-bbf7-5974cc60aa60", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "1f132fef-34b6-4647-964b-3113bd5a48e6", "prefix_titles": [ [ "title", "Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey" ], [ "section", "Noise Model Based Methods" ], [ "subsection", "Sample Choosing" ], [ "subsubsection", "Multiple Classifiers" ] ], "subsections": [], "title": "Multiple Classifiers" }, { "cite_extract_rate": 0.846153846153846, "cites": [ 4171, 4238, 4131, 8630, 4173, 4175, 7781, 3453, 4128, 7774, 4136 ], "content": "\\label{sampleimportance}\nSimilar to sample choosing, training can be made more effective by assigning weights to instances according to their estimated noisiness level. This has an effect of emphasizing cleaner instances for a better update on model weights. Following empirical risk is minimized by these algorithms.\n\\begin{equation}\n \\hat{R}_{l,\\mathcal{D}}(f)=\\dfrac{1}{N}\\sum_{i=1}^{N}\\beta(x_i,y_i)l(f_\\theta(x_i),\\tilde{y_i}))\n\\end{equation}\nwhere $\\beta(x_i,y_i)$ determines the instance dependent weight. If $\\beta$ would be binary, then formulation is the same with sample choosing, as explained in \\autoref{samplechoosing}. Differently, here $\\beta$ is not binary and has a different value for each instance. Like in sample choosing algorithms, $\\beta$ is a dynamic function, which means weights for instances keep changing during the training. Therefore, it is commonly a challenge to prevent $\\beta$ changing too rapidly and sharply, such that it disrupts the stabilized training loop. Moreover, these methods commonly suffer from accumulated errors. As a result, they can easily get biased towards a certain subset of data. 
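The cross-selection step of co-teaching can be sketched as follows (the losses and keep ratio are illustrative assumptions):

```python
def coteach_select(losses_net1, losses_net2, keep_ratio):
    """Each network picks the small-loss indices for its peer:
    net1 trains on what net2 selected, and vice versa."""
    def small_loss(losses):
        k = max(1, int(keep_ratio * len(losses)))
        return sorted(range(len(losses)), key=lambda i: losses[i])[:k]
    return small_loss(losses_net2), small_loss(losses_net1)

# Hypothetical per-sample losses from the two peer networks.
batch_for_net1, batch_for_net2 = coteach_select(
    [0.2, 1.5, 0.3, 2.0], [1.1, 0.1, 0.2, 1.9], keep_ratio=0.5)
```

Because each network trains on its peer's selection, an error accumulated by one network is not directly fed back into its own next update.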
There are various methods proposed to obtain an optimal $\beta$ that mitigates the negative effects of noise.
The simplest approach, when both clean and noisy data are available, is to weight the clean data more . However, this utilizes the information poorly; moreover, clean data is not always available. The works of and use the meta-learning paradigm to determine the weighting factor. In each iteration, a gradient descent step on the given mini-batch is performed for the weighting factor so that it minimizes the loss on clean validation data. A similar method is adopted in , but instead of computing the weighting factor implicitly, a multi-layer perceptron (MLP) is used to estimate the weighting function. \textit{Open-set noisy labels}, where data samples associated with noisy labels might belong to a true class that is not present in the training data, are considered in . A Siamese network is trained to detect noisy labels by learning discriminative features that tell clean and noisy data apart. Noisy samples are iteratively detected and pulled away from clean samples. Then, in each iteration, the weighting factor is recalculated for the noisy samples, and the base classifier is trained on the whole dataset. also iteratively separates noisy samples from clean samples. On top of that, so as not to miss valuable information from clean hard samples, noisy data are weighted according to their noisiness level, estimated by pLOF . introduces \textit{abstention}, which gives the network the option to abstain from samples, depending on their loss value, at an abstention penalty. The network thus learns to abstain from confusing samples, and the abstention penalty adjusts the tendency to abstain. In , the weighting factor is conditioned on the distribution of the training data, $\beta(X,Y)= P_\mathcal{D}(X,Y)/P_{\mathcal{D}_n}(X,\tilde{Y})$. The same methodology is extended to the multi-class case in . 
In , the weighting factor is determined by checking each instance's similarity to its representative class prototype in the feature domain. formulates the problem as transfer learning, where the source domain is the noisy data and the target domain is a clean subset of the data; the weights in the source domain are then arranged to minimize the target-domain loss. uses the $\theta$ values of samples in the $\theta$-distribution to calculate their probability of being clean and uses this information to weight clean samples more heavily in training. 
\begin{figure}[h]
 \centering
 \includegraphics[width=\columnwidth]{vss.png}
 \caption{Illustration of different types of algorithms. From left to right: 1) representation of samples from a single class in 2D space, where green samples are clean and red samples are noisy; 2) label noise cleaning algorithms aim to correct the labels of noisy data; 3) dataset pruning methods aim to eliminate noisy data (or just their labels); 4) sample importance weighting algorithms aim to up-weight clean samples and down-weight noisy samples (illustrated by size).}
 \label{fig:methods}
\end{figure}
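As a toy illustration of the weighted empirical risk above, the snippet below computes the weighted risk together with one hypothetical choice of $\beta$ that down-weights high-loss (likely noisy) samples. The surveyed papers derive $\beta$ very differently, e.g., via a meta-gradient on clean validation data, so `soft_weights` here is only a stand-in heuristic.

```python
import numpy as np

def weighted_risk(losses, weights):
    # (1/N) * sum_i beta_i * l_i, matching the empirical risk above.
    return float(np.mean(np.asarray(weights) * np.asarray(losses)))

def soft_weights(losses, temperature=1.0):
    # Hypothetical heuristic: softmax over negative losses, so likely
    # noisy (high-loss) samples receive small weights. Normalized so
    # the average weight is 1 and the overall loss scale is preserved.
    z = -np.asarray(losses, dtype=float) / temperature
    w = np.exp(z - z.max())
    return w * len(w) / w.sum()
```

Keeping the mean weight at one is one way to stop $\beta$ from rescaling the effective learning rate as the weights evolve during training.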
This is a typical case for crowd-sourced data or for datasets that require a high level of expertise, such as medical imaging . Therefore, modeling and using labeler characteristics can significantly increase performance .
In this setup, there are two unknowns: the noisy labeler characteristics and the ground-truth labels. One can estimate both with expectation-maximization . If the noise is assumed to be y-dependent, each labeler's characteristic can be modeled with a noise transition matrix, just as in \autoref{noisychannel}. adds a regularizer to the loss function, namely the sum of the traces of the annotator confusion matrices, to force sparsity on each labeler's estimated confusion matrix. A similar approach is implemented in , where a crowd-layer is added to the end of the network. In , xy-dependent noise is also considered by taking image complexities into account. Human annotators and computer vision systems are used mutually in , where the consistency among the predictions of these two components is used to evaluate the labelers' reliability. deals with the case where a labeler omits a tag in an image; instead of a noise transition matrix for each labeler, an omission probability variable is used, which is estimated together with the true class using the expectation-maximization algorithm. Separate softmax layers are trained for each annotator in , along with an additional network that predicts the true class of the data from the outputs of the labeler-specific networks and the features of the data. 
This setup makes it possible to model each labeler and the overall noise structure in separate networks.
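The expectation-maximization view mentioned above, with a confusion matrix per annotator, can be sketched in a Dawid-Skene-like form. This is a generic illustration (initialization by majority vote, Laplace smoothing on the confusion matrices) rather than any specific paper's algorithm:

```python
import numpy as np

def em_annotators(labels, n_classes, n_iter=20):
    # labels: (n_items, n_annotators) integer label matrix.
    n_items, n_annot = labels.shape
    # Initialize the posterior over true classes from a majority vote.
    post = np.zeros((n_items, n_classes))
    for i in range(n_items):
        for y in labels[i]:
            post[i, y] += 1
    post /= post.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # M-step: class prior and per-annotator confusion matrices.
        prior = post.mean(axis=0)
        conf = np.ones((n_annot, n_classes, n_classes))  # Laplace smoothing
        for a in range(n_annot):
            for i in range(n_items):
                conf[a, :, labels[i, a]] += post[i]
            conf[a] /= conf[a].sum(axis=1, keepdims=True)
        # E-step: re-estimate the posterior over the true labels.
        log_post = np.log(prior)[None, :].repeat(n_items, axis=0)
        for a in range(n_annot):
            log_post += np.log(conf[a][:, labels[:, a]].T)
        post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
    return post, conf
```

The returned posterior gives soft "true" labels, and each annotator's confusion matrix is their estimated reliability.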
Therefore, they are easier to implement on top of an existing classification algorithm.
Algorithms in this category aim to design loss functions such that the existence of noise does not decrease performance. However, it has been shown that noise can still badly affect performance even for robust loss functions . Moreover, these methods treat noisy and clean data in the same way, which prevents the utilization of any prior information about the data distribution.
In , it is shown that certain non-convex loss functions, such as the 0-1 loss, have much more noise tolerance than commonly used convex losses. Extending this work, derives sufficient conditions for a loss function to be tolerant to uniform noise. Their work shows that if the given loss function satisfies $\sum_{k}l(f_\theta(x),k) = C, \forall x \in X$, where $C$ is a constant, then the loss function is tolerant to uniform noise. In this context, they empirically show that none of the standard convex loss functions is robust to noise, while the 0-1 loss is, up to a certain noise ratio. However, the 0-1 loss is non-convex and non-differentiable; therefore, a surrogate of the 0-1 loss is proposed in , which is still noise-sensitive. The widely used \textit{categorical cross entropy (CCE)} loss is compared with the \textit{mean absolute value of error (MAE)} in the work of , where it is shown empirically that MAE is more noise tolerant. shows that the robustness of MAE is due to its weighting scheme: while CCE is sensitive to abnormal samples and produces gradients that are larger in magnitude, MAE treats all data points equally, which can result in underfitting the data. Therefore, \textit{improved mean absolute value of error (IMAE)} is proposed in , where gradients are scaled with a hyper-parameter to adjust the weighting variance of MAE. also argues that MAE leads to much slower learning than CCE; therefore, a new loss function is suggested, which combines the robustness of MAE with the implicit weighting of CCE. 
With a tuning parameter, the characteristics of the loss function can be adjusted along a spectrum from MAE to CCE. Loss functions are commonly not symmetric, meaning that $l(f_\theta(x_i),y_i) \neq l(y_i,f_\theta(x_i))$. Inspired by the idea of the symmetric KL-divergence, proposes the symmetric cross entropy loss $l_{SCE}(f_\theta(x_i),y_i) = l(f_\theta(x_i),y_i) + l(y_i,f_\theta(x_i))$ to combat noisy labels. 
Given that the noise prior is known, provides two surrogate loss functions using this prior information about the label noise, namely an unbiased and a weighted estimator of the loss function. considers asymmetric omission noise for the binary classification case, where the task is to find road pixels in a satellite map image. Omission noise makes the network less confident about its predictions, so they modify the cross-entropy loss to penalize the network less for producing wrong but confident predictions, since these labels are more likely to be noisy. Instead of using a distance-based loss, proposes an information-theoretic loss, in which a determinant-based mutual information between the given labels and the predictions is evaluated for the loss calculation. Weakly supervised learning with noisy labels is considered in , and necessary conditions for a loss to be noise tolerant are derived. shows that classification-calibrated loss functions are asymptotically robust to symmetric label noise. 
Stochastic gradient descent with robust losses is analyzed in general and shown to be more robust to label noise than its non-robust counterparts.
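To make the MAE-CCE trade-off concrete, below is an illustrative re-implementation (not the authors' code) of two of the losses discussed above: the tunable loss interpolating between the two, often written as $L_q = (1 - p_y^q)/q$, where $q \to 0$ recovers CCE and $q = 1$ gives the one-hot MAE up to scaling, and a clipped symmetric cross entropy.

```python
import numpy as np

def lq_loss(probs, y, q=0.7):
    # L_q = (1 - p_y^q) / q: q -> 0 recovers cross entropy,
    # q = 1 gives (half of) the one-hot MAE.
    p_y = probs[np.arange(len(y)), y]
    return (1.0 - p_y ** q) / q

def symmetric_ce(probs, y, alpha=1.0, beta=1.0, clip=1e-4):
    # l_SCE = alpha * CE(y, p) + beta * reverse CE(p, y); log 0 in the
    # reverse term is clipped, since the one-hot target has zero entries.
    p_y = probs[np.arange(len(y)), y]
    ce = -np.log(np.clip(p_y, clip, 1.0))
    rce = -(1.0 - p_y) * np.log(clip)
    return alpha * ce + beta * rce
```

Both losses saturate for low-confidence samples, which is exactly what limits the gradient contribution of likely-mislabeled examples.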
\textit{Pumpout} presents a meta objective of recovering the damage done by noisy samples by erasing their effect on the model via \textit{scaled gradient ascent}. As a meta-learning paradigm, model-agnostic meta-learning (MAML) seeks an optimal weight initialization that can easily be fine-tuned for the desired objective. A similar mentality is used for noisy labels in , which aims to find noise-tolerant model parameters under a teacher-student training framework . Multiple student networks are fed with data corrupted by synthetic noise. A meta objective is defined to maximize consistency with the teacher outputs obtained from the raw data without synthetic noise. Therefore, the student networks are forced to find the most noise-robust weight initialization, such that weight updates remain consistent after training for an epoch on synthetically corrupted data. The final classifier weights are then set as an exponential moving average of the student networks. 
Alternatively, when clean data is available, a meta objective can be defined to utilize this information. The approach used in is to train a teacher network on a clean dataset and transfer its knowledge to the student network to guide the training process in the presence of mislabeled data. They use the \textit{distillation} technique proposed in for a controlled transfer of knowledge from teacher to student. A similar methodology of using \textit{distillation} and label correction for the human pose estimation task is implemented in . In , the target network is trained on the excessive noisy data, and the confidence network is trained on a clean subset. Inspired by , the confidence network's task is to control the magnitude of the gradient updates to the target network, so that noisy labels do not result in gradient updates. uses clean data to produce soft labels for the noisy data such that the classifier trained on them gives the best performance on the clean data. 
As a result, it seeks the optimal label distribution that provides the most noise-robust learning for the base classifier.
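The distillation-style label blending used by several of the approaches above reduces to a convex combination of the given (possibly noisy) label and the teacher's prediction. In this sketch `lam` is a hypothetical fixed scalar; the papers may set it per sample or tune it on validation data.

```python
import numpy as np

def blended_targets(noisy_onehot, teacher_probs, lam=0.5):
    # y_soft = lam * y_noisy + (1 - lam) * f_teacher(x): the student is
    # trained against y_soft instead of the raw noisy label.
    return lam * np.asarray(noisy_onehot) + (1.0 - lam) * np.asarray(teacher_probs)
```

When the teacher is trustworthy, lowering `lam` shifts supervision away from the noisy annotation toward the teacher's belief.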
Groups of image features are formed, and group sparsity regularization is imposed so that the model is forced to choose relevant features and up-weight the reliable images.
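Two of the regularizers listed above are simple enough to sketch directly. This is a generic illustration: label smoothing softens targets toward the uniform distribution, and mixup trains on convex combinations of input-target pairs drawn with a Beta-distributed mixing coefficient.

```python
import numpy as np

def label_smoothing(onehot, eps=0.1):
    # Soften the (possibly noisy) one-hot target toward the uniform
    # distribution: (1 - eps) * y + eps / K.
    onehot = np.asarray(onehot, dtype=float)
    return (1.0 - eps) * onehot + eps / onehot.shape[-1]

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    # Train on convex combinations of pairs of samples and targets,
    # which empirically dampens memorization of corrupted labels.
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```

Both tricks keep any single (possibly wrong) hard label from fully dictating the gradient for its sample.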
Since annotators are less likely to mislabel a complementary class, some works propose to work in the complementary label space . uses the reconstruction error of an autoencoder to discriminate noisy data from clean data, arguing that noisy data tend to have larger reconstruction errors. In , the base model is trained with noisy data, and an additional generative classifier is trained on top of the feature space generated by the base model. By estimating its parameters with the \textit{minimum covariance determinant}, the aim is to find noise-robust decision boundaries. In , a special setup is considered where the dataset consists of noisy and \textit{less-noisy} data for a binary classification task. aims to extract the quality of data instances. Assuming that the training dataset is generated from a mixture of the target distribution and other unknown distributions, it estimates the quality of data samples by checking the consistency between the generated and target distributions. 
\textit{Prototype learning} aims to construct prototypes that can represent the features of a class in order to learn clean representations. Some works in the literature propose to create clean representative prototypes for noisy data, so that the base classifier can be trained on them instead of the noisy labels.
In multiple-instance learning, data are grouped into clusters, called bags, and each bag is labeled as positive if there is at least one positive instance in it and negative otherwise. The network is fed with a group of data and produces a single prediction for each bag by learning the inner discriminative representation of the data. Since a group of images is used and one prediction is made, the existence of noisy labels along with true labels in a bag has less impact on learning. In , the authors propose to effectively choose training samples from each bag by minimizing the total bag-level loss. An extra model is trained in as an attention model, which determines the parts of the images to be focused on. 
The aim is to focus on a few regions in correctly labeled images and on no region at all in mislabeled images.
In general, for quick implementation and testing, most works start by testing on toy datasets (MNIST, Fashion-MNIST, CIFAR10 \& CIFAR100) with synthetic label noise. However, as explained in \autoref{labelnoisemodels}, there are various types of artificial noise. Moreover, each work experiments with a different model architecture and hyper-parameter set. Therefore, results on these datasets are not appropriate for a fair comparison of the algorithms; they are instead used as a proof of concept for the proposed algorithm.
Some works from the literature use two alternative datasets. The first one is the Food101N dataset, containing 310k images of food recipes belonging to 101 different classes. However, its noise ratio is rather small (around 20\%), making it inadequate for evaluating the performance of noise-robust algorithms. The second option is the WebVision dataset, containing 2.4 million images crawled from the Flickr website and Google Images search. This is a large dataset, which requires a lot of computational power. To make it computationally feasible, some works conduct tests using data only from the first 50 classes. Still, WebVision fails to provide a benchmarking dataset for the evaluation of noise-robust algorithms.
To fill the absence of a benchmarking dataset, collects a large number of images from the web with labels interpreted from the surrounding user tags. As a result, it has real-world noisy labels with an estimated noise ratio of around 40\%. The dataset consists of one million images belonging to 14 different classes, with 50K, 14K, and 10K additional images carrying verified clean labels for training, validation, and testing, respectively. We observed that a high majority of the methods do not use the additional 50K clean data for training. Furthermore, the literature seems to have a consensus on the experimental setup. 
All methods use the same ResNet50 architecture with parameters pre-trained on ImageNet and the stochastic gradient descent optimizer. Considering the identical experimental setup and the real-world noisiness of the dataset, the Clothing1M dataset is widely accepted as a benchmarking dataset.
We list (to the best of our knowledge) all of the results presented on this dataset in \autoref{table:clothing1m}. For a fair evaluation, we collected results only from works trained on the 1M noisy training data without the additional 50K clean data. We sorted the algorithms according to their test accuracy. Nevertheless, it should be noted that each method has its own pros and cons, such as computational cost, memory requirements, etc.
\begin{table}[h]
 \centering
 \begin{tabular}{|l|l|l|}
 \hline \multicolumn{1}{|c|}{\textbf{Method}} & \multicolumn{1}{c|}{\textbf{Category}} & \multicolumn{1}{c|}{\textbf{Accuracy}} \\ \hline
 & \nameref{metalearning} & 76.02 \\ 
 & \nameref{datasetpruning} & 74.76 \\ 
 & \nameref{sampleimportance} & 74.69 \\ 
 & \nameref{labelnoisecleansing} & 74.45 \\ 
 & \nameref{noisychannel} & 74.18 \\ 
 & \nameref{datasetpruning} & 73.77 \\ 
 & \nameref{sampleimportance} & 73.72 \\ 
 & \nameref{labelnoisecleansing} & 73.49 \\ 
 & \nameref{metalearning} & 73.47 \\ 
 & \nameref{robustlosses} & 73.20 \\ 
 & \nameref{noisychannel} & 73.07 \\ 
 & \nameref{others} & 72.50 \\ 
 & \nameref{robustlosses} & 72.46 \\ 
 & \nameref{labelnoisecleansing} & 72.23 \\ 
 & \nameref{labelnoisecleansing} & 71.74 \\ 
 & \nameref{noisychannel} & 71.10 \\ 
 & \nameref{robustlosses} & 71.02 \\ 
 & \nameref{labelnoisecleansing} & 71.00 \\ \hline
 \end{tabular}
 \caption{Leaderboard for algorithms tested on the Clothing1M dataset. All results are taken from the corresponding paper. 
For a fair evaluation, only works that did not use the additional 50K clean training data are presented.}
 \label{table:clothing1m}
\end{table}
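As noted at the start of this section, synthetic-noise results on toy datasets are hard to compare because each work injects its own flavor of noise. For reference, the commonly used symmetric (uniform) injection can be sketched as follows; the exact protocol varies across papers (e.g., whether a "flipped" label may land back on its original class), which is part of why those results diverge.

```python
import numpy as np

def inject_symmetric_noise(labels, n_classes, noise_rate, seed=0):
    # Flip a noise_rate fraction of labels, each to a uniformly chosen
    # *other* class (the common 'symmetric/uniform' synthetic noise).
    rng = np.random.default_rng(seed)
    noisy = np.array(labels)
    flip_idx = rng.choice(len(noisy), size=int(round(noise_rate * len(noisy))), replace=False)
    for i in flip_idx:
        others = [c for c in range(n_classes) if c != noisy[i]]
        noisy[i] = rng.choice(others)
    return noisy
```

Pair-flip or class-conditional variants would replace the uniform draw over `others` with a fixed confusion pattern.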
For example, if the noise can be represented as a noise transition matrix, noisy channel methods, or labeler quality assessment in the multi-labeler case, can be chosen. If the purpose is to purify the dataset as a preprocessing stage, then dataset pruning or label noise cleansing methods can be employed. Sample choosing or sample importance weighting algorithms are handy if instances can be ranked according to their informativeness for training. Unlike noise model based algorithms, noise model free methods do not depend on any prior information about the noise structure. Therefore, they are easier to implement if the noise is assumed to be random and the performance degradation is due to overfitting, since they do not require implementing an external algorithm for noise structure estimation. If there is no clean subset of data, robust losses or regularizers are appropriate options since they treat all samples the same. Meta-learning techniques can be used in the presence of a clean subset of data, since they can easily be adapted to utilize this subset. 
Even though an extensive amount of research has been conducted on machine learning techniques , deep learning in the presence of noisy labels is certainly an understudied problem. Considering its dramatic effect on DNNs , there are still many open research topics in the field. For example, truly understanding the impact of label noise on deep networks can be a fruitful future research topic. shows that each layer of a CNN learns to extract different features from the data. Moreover, the learned representations form a hierarchical pattern, where each layer learns more complex features from the previous layer. A fully connected layer uses features from the last layer to interpret the corresponding label in the final layer. Understanding which parts of the network are most affected by label noise may help analyze the adverse effect of label noise on neural networks. 
For example, if the initial layers are affected, one can conclude that the learned primitive features are corrupted, so the rest of the network cannot be adequately trained. On the other hand, if the final convolutional layers are affected, it can be said that the network cannot form the hierarchical feature pattern throughout the convolutional layers. Alternatively, if the convolutional layers are not affected but the fully connected layer is the cause of the problem, it can be concluded that feature representation learning is not corrupted, but the network cannot correctly interpret meanings from the extracted features. Moreover, it may be interesting to investigate the cause of the problem for the different types of label noise models presented in \autoref{labelnoisemodels}. If different parts of the network are affected for different noise models, one can analyze the true nature of the label noise in the dataset by checking for corruption in the neural network layers.
Alternatively, the question of how to train in the presence of both attribute and label noise is an understudied problem with significant potential for practical applications . shows that noisy labels degrade learning, especially for challenging samples. So, instead of overfitting to noisy samples, underfitting to challenging samples may be the reason for the performance degradation, which is an open question to be answered in the future. Another possible research direction is breaking the structure of the noise to make it uniformly distributed in the feature domain . This approach would be handy where labelers have a particular bias.
A widely used approach for quick testing of proposed algorithms is to create noisy datasets by adding synthetic label noise to benchmarking toy datasets . However, this prevents a fair comparison and evaluation of algorithms, since each work adds its own noise type. Some large datasets with noisy labels have been proposed in the literature . 
These datasets are collected from the web, and labels are obtained from noisy user tags. Even though these datasets provide a useful domain for benchmarking proposed solutions, their noise rates are mostly unknown and they are biased in terms of data distribution across classes. Moreover, one cannot adjust the noise rate for testing under extreme or moderate conditions. From this perspective, we believe the literature lacks a noisy dataset where a major part of it has both noisy and verified labels; thus, the noise rate can be adjusted as desired.\nMinimal attention has been given to learning from a noisily labeled dataset when there is a small amount of data. This can be a fruitful research direction considering its potential in fields where harvesting a dataset is costly. For example, in medical imaging, collecting a cleanly annotated large dataset is not feasible most of the time , due to cost or privacy concerns. Effectively learning from a small amount of noisy data with no ground truth can significantly improve autonomous medical diagnosis systems. Even though some pioneering research is available , there is still much more to be explored.\nThe ability to effectively learn from noisily labeled data opens up great opportunities for practical applications of machine learning algorithms. The bottleneck of data collection can easily be resolved with the massive amount of data collected from the web. Labels for this data can be assigned with simple algorithms (such as interpreting them from the surrounding text ). By effectively dealing with noisy labels, deep learning algorithms can be fed with massive datasets. Moreover, there are research opportunities for alternative usage of semi-supervised learning algorithms along with noisily labeled data. Common usage of semi-supervised learning methods for noisily labeled data is to remove the labels of noisy data and then train with conventional semi-supervised learning methods. 
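As a rough sketch of this common usage (the small-loss selection rule used here to flag likely-noisy samples and the keep ratio are illustrative assumptions, not prescribed by the text), the dataset can be split before it is handed to any standard semi-supervised learner:

```python
def split_for_semi_supervised(losses, labels, keep_ratio=0.7):
    """Treat the highest-loss samples as likely noisy, strip their labels,
    and return (labeled, unlabeled) pools for a semi-supervised learner.
    The small-loss criterion and keep_ratio are illustrative choices."""
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    n_keep = int(len(losses) * keep_ratio)
    clean_idx = order[:n_keep]       # small-loss samples: labels are kept
    noisy_idx = order[n_keep:]       # labels discarded -> unlabeled pool
    labeled = [(i, labels[i]) for i in clean_idx]
    unlabeled = list(noisy_idx)
    return labeled, unlabeled

# Hypothetical per-sample losses from a warm-up model.
losses = [0.1, 2.3, 0.2, 1.9, 0.05]
labels = [0, 1, 0, 1, 0]
labeled, unlabeled = split_for_semi_supervised(losses, labels, keep_ratio=0.6)
```

The resulting unlabeled pool is then consumed by the semi-supervised method of choice, exactly as described above.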
Alternatively, algorithms can be developed to use three types of data: cleanly labeled data, noisily labeled data, and unlabeled data. With the help of these algorithms, a massive amount of unlabeled and noisy data can be effectively used under the supervision of a small, cleanly annotated dataset. \nBesides classification, knowledge of learning from noisily labeled data can be used in alternative fields by transforming the task. For example, in a multi-labeled dataset, where each instance belongs to multiple classes, not all classes are equally relevant. One can treat labels with a small resemblance to the data sample as noisy and employ algorithms designed for learning from noisy labels . Similarly, one approach divides an untrimmed video into smaller parts and aims to find the video snippet most relevant to the video tag. Irrelevant video parts are assumed to be noisily labeled. Even though the original dataset does not have noisy labels, it can be transformed to use algorithms from the field. As a result, learning from noisy labels has much potential in various areas beyond straightforward image classification.\n\t\\bibliography{surveyrefs} \n\t\\begin{comment}\n\t\\begin{wrapfigure}{l}{20mm} \n\t\t\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio,angle=270]{pp_gorkem.jpg}\n\t\\end{wrapfigure}\\par\n\t \\textbf{G\\\"orkem Algan} received his B.Sc. degree in Electrical-Electronics Engineering in 2012, from Middle East Technical University (METU), Turkey. He received his M.Sc. from KTH Royal Institute of Technology, Sweden and Eindhoven University of Technology, Netherlands, with a double degree in 2014. He is currently a Ph.D. candidate at the Electrical-Electronics Engineering Department, METU. His current research interests include deep learning in the presence of noisy labels.\n\t\\begin{wrapfigure}{l}{25mm} \n\t\t\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{pp_ilkay.jpg}\n\t\\end{wrapfigure}\\par\n\t \\textbf{Ilkay Ulusoy} was born in Ankara, Turkey, in 1972. 
She received the B.Sc. degree from the Electrical and Electronics Engineering Department, Middle East Technical University (METU), Ankara, in 1994, the M.Sc. degree from The Ohio State University, Columbus, OH, USA, in 1996, and the Ph.D. degree from METU, in 2003. She did research at the Computer Science Department of the University of York, York, U.K., and Microsoft Research Cambridge, U.K. She has been a faculty member in the Department of Electrical and Electronics Engineering, METU, since 2003. Her main research interests are computer vision, pattern recognition, and probabilistic graphical models.\t\t\n\t\\end{comment}\n\\end{document}", "id": "8f59dd3e-4249-4d9b-aaf1-ecbaa320f1db", "level": "section", "origin_cites_number": 33, "parent_id": "1da83c5d-fa27-4a53-a17c-f942c45c275a", "prefix_titles": [ [ "title", "Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
58
[ 1223, 1749, 514, 3866, 802, 97, 810, 3630, 4165, 209, 4115, 1191, 4166, 4168, 4167, 4170, 4169, 4172, 4171, 4116, 7779, 7782, 4137, 3345, 4134, 301, 4186, 4184, 7772, 7162, 4180, 4145, 4128, 4178, 4119, 4131, 4187, 8736, 2277, 3340, 7783, 4175, 7191, 4141, 4126, 4156, 4142, 4174, 4181, 8737, 4138, 4135, 892, 8742, 7780, 8630, 4185, 4136, 4129, 795, 3342, 4182, 4177, 4121, 4173, 4130, 4139, 4238, 4183, 4152, 4253, 4133, 7781, 4188, 4140, 7133, 4132, 3454, 7775, 4179, 3453, 4143, 8743, 8741, 4176, 7774, 7773, 4189, 7740, 4091, 681, 1695, 652, 7769, 499, 7784, 7785, 4190 ]
0.858728
[ "Vijini Mallawaarachchi", "Lakmal Meegahapola", "Roshan Alwis", "Eranga Nimalarathna", "Dulani Meedeniya", "Sampath Jayarathna" ]
Change Detection and Notification of Web Pages: A Survey
2019
2019-01-09T10:20:40Z
cs.IR
The majority of currently available webpages are dynamic in nature and change frequently. New content gets added to webpages, and existing content gets updated or deleted. Hence, people find it useful to be alerted to changes in webpages that contain information of value to them. In the current context, keeping track of these webpages and getting alerts about different changes has become significantly challenging. Change Detection and Notification (CDN) systems were introduced to automate this monitoring process, and to notify users when changes occur in webpages. This survey classifies and analyzes different aspects of CDN systems and different techniques used for each aspect. Furthermore, the survey highlights the current challenges and areas of improvement present within the field of research.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "1e2d119d-e006-4b3f-8220-ef195f9b44ff", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ] ], "subsections": [ "d342c56c-0425-48ca-ad1e-dd28c883969a", "4b12aa44-53f2-4365-a6d9-68a7cc284f74", "e0f7f541-3dd6-40f5-be3a-75475986d97d", "e390f1b3-f7c1-42f4-b37a-c259597b6e50", "ca0f479c-e7c0-49d5-9f2b-07319fed2a8b", "e425e367-4ff1-45d0-a593-4277e0f4c653", "bab5dd27-3080-49c5-bc24-d4cd5c11a1ad", "60b7757f-6092-4d79-872d-cf6dff00abdb", "765aa397-8736-4120-a550-0990abf2bf37" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section1}\nThe World Wide Web (WWW or Web in simpler terms) is evolving at a rapid pace, and keeping track of changes is becoming more challenging. Many websites are being created and updated daily with the advancement of tools and web technologies. Hence, websites at present have become more dynamic, and their content keeps changing continuously. Many users are interested in keeping track of the changes occurring on websites such as news websites, booking websites, wiki pages and blogs. Back in the 1990s, people used to subscribe to Really Simple Syndication (RSS) feeds, which originated from the Resource Description Framework (RDF) specification, to keep track of frequently updated content. Later in 2005, RSS was replaced by Atom . Currently, the majority of users keep track of websites and get the latest updates using bookmarks in web browsers.\nWeb syndication technologies (e.g., RSS and Atom) emerged as a popular means of delivering frequently updated web content on time . Users can subscribe to RSS or Atom feeds, and get the latest updates. However, from the perspective of webpage change detection, RSS feeds have many potential issues. 
A study carried out to characterize web syndication behavior and content shows that the utilization of fields specified in the XML specification of RSS is low, which can result in missing information, errors and uncategorized feeds. Furthermore, services such as Google Reader have been discontinued due to the declining popularity of RSS feeds, caused by the rising popularity of \\emph{microblogging} (also known as \\emph{social media}), the shifting of formats from XML to JSON, and market forces promoting proprietary interfaces and de-emphasizing interoperability.\nIn the current context, managing and using bookmarked websites and RSS feeds has become a significant challenge, and people are continuously seeking better and more convenient methods. Change Detection and Notification (CDN) systems make it easy for users to get notified about changes that have occurred on webpages, without having to refresh the webpage continuously. \nGoogle Alerts , Follow That Page and Visualping are some of the most popular CDN services which are used by many users to get updates about content changes that occur on webpages.", "id": "d342c56c-0425-48ca-ad1e-dd28c883969a", "level": "section", "origin_cites_number": 8, "parent_id": "1e2d119d-e006-4b3f-8220-ef195f9b44ff", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Introduction" ] ], "subsections": [ "244f295b-aa92-4f75-98f6-dbda8542f964", "4238a9fd-d8df-404e-b4bb-6d9140f1e96e", "b4d3c02e-c06c-4578-9ae9-7b9ceb1bc6d2" ], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "CDN systems automatically detect changes made to pages on the web, and notify interested parties about them. The significant difference between search engines and CDN systems is that search engines are developed for searching webpages, whereas CDN systems are developed for monitoring changes that occur on webpages. 
In theory, most of the search engines also have an underlying change detection mechanism to determine which sites to crawl and keep their search results up-to-date . The use of CDN systems allows users to reduce browsing time, and gives them the ability to check for changes on webpages of their interest .\nCDN systems emerged in the WWW with the introduction of Mind-it (currently discontinued) , the first CDN tool, which was developed by NetMind in 1996. Since then, several services were introduced, such as ChangeDetection (introduced in 1999, now known as Visualping ), ChangeDetect (introduced in 2002), Google Alerts (introduced in 2003), Follow That Page and Wachete . CDN systems have evolved throughout the past few decades, with improvements in detection rates, efficient crawling mechanisms and user-friendly notification techniques.\nCDN systems available at present have become easier to use, and are more flexible in incorporating user requirements. The majority of the currently available CDN systems such as Wachete and VisualPing provide various monitoring options such as:\n\\begin{enumerate}\n \\item Multiple webpage monitoring: multiple parts of a webpage, an entire webpage, multiple webpages or an entire website.\n \\item Content to monitor: text, images, links, documents (PDF, DOC, DOCX).\n \\item The threshold of changes: the percentage of changes occurring on a given webpage.\n \\item Frequency of monitoring: hourly, daily, weekly, monthly or on-demand monitoring.\n \\item Frequency of notification: twice a day, once a day, once a week or as changes occur.\n\\end{enumerate}", "id": "244f295b-aa92-4f75-98f6-dbda8542f964", "level": "subsection", "origin_cites_number": 9, "parent_id": "d342c56c-0425-48ca-ad1e-dd28c883969a", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Introduction" ], [ "subsection", "Change Detection \\& Notification Systems" ] ], "subsections": [], "title": "Change Detection \\& Notification Systems" },
{ "cite_extract_rate": 0, "cites": [], "content": "Based on the architecture involved, change detection can be segregated into two main subdomains. The first branch is server-side change detection, and the other is client-side change detection . Server-side change detection systems use servers that poll webpages, track changes, and notify users about them. The client-side change detection systems make the client-side infrastructure poll the webpages, and track changes on their own.\nCDN systems obtain versions of webpages by crawling them, and saving the data to version repositories. These data are saved in an unstructured manner, mostly in the format of documents with tags, to allow easy storage and retrieval. Then, changes are detected by comparing a previously saved version with the latest version of a particular webpage using similarity computations. The majority of the change detection mechanisms convert the data of a saved version into an XML-like format where an element represents opening and closing HTML tags (e.g., \\emph{<h>} and \\emph{</h>}). Certain CDN systems allow the user to define a threshold of change, and the user gets notified about a change only if it exceeds this threshold.\nThe majority of the CDN systems operate on predefined monitoring schedules that are specified by the user or determined by the system itself. Detected changes are visualized using many methods. A common means of visualizing text changes is by showing the newly added text in green color, and the deleted text in red color (often with strikethrough formatting) .\nAnother prominent factor discussed in CDN systems is their crawling schedules. Most of the currently available CDN systems crawl webpages under predefined schedules . However, webpages can be updated on different schedules (certain webpages may be frequently updated, whereas some webpages may get updated rarely), thus how often they change can vary. 
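The compare-and-threshold step described above can be sketched in a few lines (the similarity measure and the example threshold are assumed choices, not those of any particular CDN system):

```python
import difflib

def detect_change(old_version, new_version, threshold=0.10):
    """Compare a previously saved version of a page with the latest crawl
    and flag a change only if the differing fraction exceeds the threshold.
    The 0.10 threshold is an illustrative value."""
    matcher = difflib.SequenceMatcher(None, old_version, new_version)
    change_fraction = 1.0 - matcher.ratio()
    return change_fraction > threshold, change_fraction

changed, frac = detect_change("Weather: sunny all day.", "Weather: heavy rain.")
```

A real system would apply such a comparison to the tag-stripped or XML-like representation mentioned above, and could derive the green/red visualization from the matcher's edit operations.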
Hence, CDN systems require mechanisms for estimating the change frequency to create efficient checking schedules for webpages. This will reduce the number of page checks in which no changes are detected, and maximize the probability of achieving optimum resource utilization.", "id": "4238a9fd-d8df-404e-b4bb-6d9140f1e96e", "level": "subsection", "origin_cites_number": 3, "parent_id": "d342c56c-0425-48ca-ad1e-dd28c883969a", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Introduction" ], [ "subsection", "Categories of Change Detection and Notification" ] ], "subsections": [], "title": "Categories of Change Detection and Notification" }, { "cite_extract_rate": 0, "cites": [], "content": "According to our study, only a limited number of surveys have been carried out regarding webpage CDN techniques. Additionally, it is challenging to find comprehensive evaluations of existing CDN systems which discuss different aspects of such systems. Oita et al. have reviewed major approaches used for the change detection process in webpages. They have reviewed temporal aspects of webpages, types of webpage changes, models used to represent webpages to facilitate change detection and various similarity metrics that are used for detecting changes. Shobhna and Chaudhary~ discuss a selected set of CDN systems with different types of change detection algorithms in a summarized manner. However, there is a requirement to explore and improve CDN systems by comprehensively considering the various aspects of CDN such as the architecture, monitoring frequency, estimation of change frequency, change notification methods and change visualization mechanisms.\nSeveral CDN systems have been developed, and are available for public use . However, we discovered that there are still issues related to these systems, and limited evaluations have been carried out. 
Hence, the first objective of this survey is to deliver a comprehensive overview of the different characteristics of CDN systems. The second objective is to study existing CDN systems, and evaluate their features and various performance aspects. Our final objective is to evaluate the different aspects of CDN, study new trends and highlight the areas which require improvement. We believe that this survey can shed light on the relevant research areas, and possibly pave the way for new scientific research fields.\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=\\textwidth]{images/Change_Detection_Overall.png}\n \\caption{Organization of the survey and the aspects discussed.}\n \\label{fig1:survey-organization}\n\\end{figure}\nThe organization and the aspects discussed in this survey are summarized in Figure~\\ref{fig1:survey-organization}. Section~\\ref{section2} discusses the dynamic nature of webpages, and the experiments that have been conducted to understand the changing behavior of webpages. Section~\\ref{section3} presents different architectural approaches, which have been introduced to develop CDN systems. Further, various traditional architectures and several architectures that have been customized to improve the efficiency of the CDN mechanisms are presented. Section~\\ref{section4} discusses the techniques used for detecting changes on webpages. It includes different crawling techniques, change detection techniques, scheduling algorithms and methods to detect the frequency of webpage changes. Section~\\ref{section5} presents different notification techniques to notify users when changes have been detected on webpages of their interest, whereas Section~\\ref{section6} describes how these changes are visualized to the user. Section~\\ref{section7} compares and evaluates the different features of publicly available CDN systems and Section~\\ref{section8} discusses current trends and issues in the field of CDN. 
Finally, Section~\\ref{section9} concludes the survey paper with further improvements which can be incorporated into existing CDN systems, and presents the identified future research directions for CDN systems.", "id": "b4d3c02e-c06c-4578-9ae9-7b9ceb1bc6d2", "level": "subsection", "origin_cites_number": 3, "parent_id": "d342c56c-0425-48ca-ad1e-dd28c883969a", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Introduction" ], [ "subsection", "Survey Motivation" ] ], "subsections": [], "title": "Survey Motivation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section2}\nThe World Wide Web (WWW) keeps on growing larger and more diverse every day as new content is being added with the advancement of web content creation tools. The most common units of information on the web are pages, documents and resources . These units can include textual as well as non-textual information such as audio files, video files and images. They can undergo various changes since the time they were added to the WWW. Hence, it is crucial to understand the changing frequency and the dynamic nature of webpages to provide efficient solutions to detect such changes.", "id": "4b12aa44-53f2-4365-a6d9-68a7cc284f74", "level": "section", "origin_cites_number": 1, "parent_id": "1e2d119d-e006-4b3f-8220-ef195f9b44ff", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Dynamics of Web-based Content" ] ], "subsections": [ "856b8d1d-77a6-47ab-9948-9e95ff164382", "3b0fd51b-ff7f-4726-9888-55eda5cbb087" ], "title": "Dynamics of Web-based Content" }, { "cite_extract_rate": 0, "cites": [], "content": "Webpages are individual files that consist of text, graphics and other features such as links to scripts and style sheets . Webpages are implemented using HyperText Markup Language (HTML) or a comparable markup language. The WWW is considered as a collection of such webpages. 
Webpages are linked together to form websites. Once a webpage is requested on a web browser, the web browser obtains the relevant content, coordinates the resources and presents the webpage. The web browser uses the Hypertext Transfer Protocol (HTTP) to send such requests to retrieve webpages. Webpages fall into two broad categories, namely (1) static and (2) dynamic.", "id": "856b8d1d-77a6-47ab-9948-9e95ff164382", "level": "subsection", "origin_cites_number": 1, "parent_id": "4b12aa44-53f2-4365-a6d9-68a7cc284f74", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Dynamics of Web-based Content" ], [ "subsection", "What are Webpages and their Models of Change?" ] ], "subsections": [ "2e5110c2-03e8-472b-8eb3-5b0e9557e9e9", "d90ecd1a-326a-44ed-ab94-a7cae800ed6a" ], "title": "What are Webpages and their Models of Change?" }, { "cite_extract_rate": 0, "cites": [], "content": "Static webpages have content that does not change after the developer has saved the webpage to the web server . The webpage remains the same until the developer replaces it with an updated static webpage in the server. Static webpages are not tailored to each visitor; they appear the same to all the users who view them. Figure~\\ref{fig2:static-webpages} depicts how a static webpage is displayed once the client requests it. \n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=0.7\\textwidth]{images/Static_webpages.png}\n \\caption{Process of retrieving a static webpage.}\n \\label{fig2:static-webpages}\n\\end{figure}\nStatic webpages can be created easily as existing web development tools allow developers to drag and drop HTML elements as required. Similarly, they are easy to host because only a web server is required, with no additional software. Furthermore, static webpages have the advantages of fast loading and improved performance for end-users. 
However, if dynamic functionalities such as personalized features are required, they have to be added separately.", "id": "2e5110c2-03e8-472b-8eb3-5b0e9557e9e9", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "856b8d1d-77a6-47ab-9948-9e95ff164382", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Dynamics of Web-based Content" ], [ "subsection", "What are Webpages and their Models of Change?" ], [ "subsubsection", "Static Webpages" ] ], "subsections": [], "title": "Static Webpages" }, { "cite_extract_rate": 0, "cites": [], "content": "Dynamic webpages are pages that do not exist until they are generated by a program in response to a request from a client, while guaranteeing that the content is up-to-date . Their content changes in response to different contexts or conditions. As the information obtained from the client is used to generate the webpage to be shown, it can be tailored according to the client. Figure~\\ref{fig3:dynamic-webpages} illustrates the process of generating and displaying a dynamic webpage when a request is made by a client. \n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=0.95\\textwidth]{images/Dynamic_webpages.png}\n \\caption{Process of retrieving a dynamic webpage.}\n \\label{fig3:dynamic-webpages}\n\\end{figure}\nDynamic webpages pave the way for more functionality and better usability, which are not available via static webpages. They allow the creation of web applications which can be stored on a central server and accessed from any place via a web browser, subject to authorization. Details related to the user and context can be retrieved from external databases to tailor the webpage. Moreover, this reduces the cost and burden of updating the entire webpage whenever changes occur frequently . However, dynamic webpages may face security risks as corporate applications and databases can be exposed to unintended parties. 
Furthermore, the additional software required to generate tailored webpages for clients must be installed and maintained.\nThe Common Gateway Interface (CGI) is an interface through which the server invokes a program when a client requests a webpage. The invoked program is responsible for generating the webpage dynamically. PHP, ASP.NET and JSP are some of the common languages that are often used to create webpages, and they use the CGI to generate dynamic webpages. The majority of the webpages available at present are dynamic. Dynamic webpages have become popular as many services (e.g., Content Management Systems (CMS) such as WordPress~ and Drupal~) are available at present, where anyone, even a person with limited programming knowledge, can create a website with a few webpages via a control panel, update pages as required and deploy them instantly.\nThe content found in webpages may get outdated or may no longer be required. Hence, timely maintenance should be done to ensure that the webpages are up-to-date. Three key events, namely creations, updates and deletions, are considered to cause webpages to change .\n\\begin{enumerate}\n \\item \\textit{Creations}: Once a webpage is created, it cannot be seen publicly on the Web until it is linked by an existing webpage. Hence, adding a new webpage causes at least an update of an existing one; adding a new link to the newly created webpage. However, at present, search engines such as Google provide the facility to add the Uniform Resource Locator (URL) of a newly created webpage, so that it can be indexed, and made available to the public . \n \\item \\textit{Updates}: Updates can be made to the existing content of webpages. Updates can be of two types. The first type is a minor change, where the content of the webpage is almost the same as its previous version, but slight modifications may have been done, such as at the sentence or paragraph level. 
The second type is a major change, where all the content of the webpage is drastically different from its previous version.\n \\item \\textit{Deletions}: A page is considered to be deleted if there are no links available to traverse to that particular page or if the page itself is removed from the Web. However, there may be instances where the webpage has been removed but still the link to that webpage exists (known as a \\emph{broken link}). Furthermore, content can be deleted from a webpage as well.\n\\end{enumerate}", "id": "d90ecd1a-326a-44ed-ab94-a7cae800ed6a", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "856b8d1d-77a6-47ab-9948-9e95ff164382", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Dynamics of Web-based Content" ], [ "subsection", "What are Webpages and their Models of Change?" ], [ "subsubsection", "Dynamic Webpages" ] ], "subsections": [], "title": "Dynamic Webpages" }, { "cite_extract_rate": 0, "cites": [], "content": "Identifying the amount of change and changing patterns of webpages has been of great interest to researchers. Many studies have been carried out to understand the changing nature of webpages. The content of webpages and their change frequencies have been highly focused areas in this research scope. The available literature demonstrates the ever-changing nature of webpages and various reasons for those changes. Different factors have been considered regarding the dynamic behavior of web content.\nCho and Garcia-Molina have conducted an experiment with 720,000 webpages from 270 websites to study how webpages evolve over time. They have crawled these webpages every day for 4 months. Based on the results, the researchers have found that more than 20\\% of the webpages had shown changes whenever they visited them, and 40\\% of the webpages had changed in a week. 
Over 50\\% of the .com and .edu webpages had not changed at all during the time frame of the experiment. A massive-scale experiment which extended the study done by Cho and Garcia-Molina was performed by Fetterly et al. . They have studied how frequently webpages change, and the quantity of change that occurred, using approximately 150 million webpages over eleven weeks. According to their findings, approximately 65\\% of the pages had zero change over the considered time. Further, their study shows the relationships between certain top-level domain types (e.g., .org, .gov, .com) and their respective frequencies of changing. It was revealed that .org and .gov domains are less likely to change than .edu domains. Furthermore, it was shown that pages containing spam change more frequently. \nOlston and Pandey~ have crawled 10,000 web URLs from the Yahoo crawled collection and 10,000 web URLs from the Open Directory listing. According to the results, about a third of the dynamic content on webpages of the Open Directory listing has shown scroll behavior. Adar et al. have crawled and analyzed approximately 55,000 webpages, which are revisited by users, to characterize the various changes occurring in them. The authors have tracked the frequency and amount of change that has occurred in the webpages individually. 34\\% of the webpages had not shown any changes whereas the remaining pages had displayed at least one change every 123 hours. This study has shown that popular webpages change more frequently when compared to less popular webpages. Webpages falling under categories such as sports, news and personal pages change most frequently, and webpages with government and educational domain addresses have no frequent changes.\n\\input{tables/table1.tex}\nFurthermore, the work carried out by Elsas and Dumais~ describes the temporal aspects of changing web documents. The authors describe news websites as consisting of highly changing webpages. 
Whenever new stories are available, or old stories are updated, news websites change. These changes occur in different amounts and at different frequencies. To observe how documents change, the authors have created a set of approximately 2,000,000 HTML pages, and crawled them every week for 10 weeks. Over the sampled period, over 62\\% of pages had virtually no difference. They have also pointed out that highly relevant documents are both more likely to change, and contain a higher amount of change than general documents. As the final outcome, the authors have proposed a ranked retrieval model for dynamic documents based on the temporal aspect of content which allows differential weighting of content. Work done by Saad and Gançarski~ has monitored over 100 webpages from the France Televisions (FranceTV) archive, which depicts the evolution of changes within the French Web. Each page was crawled every hour, and over 3000 versions were obtained daily. The results have shown that the pages at the root level of the archive, such as homepages, changed significantly, whereas pages in the deepest levels did not change.\nTable~\\ref{tab1:dynamic-web} presents a summary of the various research work carried out to detect the dynamic nature of web content. From Table~\\ref{tab1:dynamic-web}, it can be seen that webpages belonging to popular websites such as sports and news websites tend to change frequently whereas webpages belonging to government and educational domains change less frequently. 
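Change-frequency measurements of this kind are commonly modeled by treating page changes as a Poisson process; as an illustrative aside (the visit counts below are hypothetical), the bias-corrected estimator proposed by Cho and Garcia-Molina can be sketched as:

```python
import math

def estimate_change_rate(visits, changes_detected, interval_hours):
    """Estimate a page's change rate (expected changes per hour) under a
    Poisson change model, using the bias-corrected estimator of Cho and
    Garcia-Molina: the page was checked `visits` times at regular
    intervals, and a change was observed on `changes_detected` checks."""
    n, x = visits, changes_detected
    if x >= n:  # changed on every visit: the rate cannot be bounded
        return float("inf")
    per_interval = -math.log((n - x + 0.5) / (n + 0.5))
    return per_interval / interval_hours

# Hypothetical crawl log: 30 daily checks, a change detected on 10 of them.
rate = estimate_change_rate(visits=30, changes_detected=10, interval_hours=24)
```

A scheduler can then revisit pages with larger estimated rates more often, which is the kind of mechanism CDN systems need for efficient checking schedules.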
Hence, it can be concluded that webpages belonging to popular websites tend to change more frequently than those of less popular websites that target specific functions and niche audiences.", "id": "3b0fd51b-ff7f-4726-9888-55eda5cbb087", "level": "subsection", "origin_cites_number": 6, "parent_id": "4b12aa44-53f2-4365-a6d9-68a7cc284f74", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Dynamics of Web-based Content" ], [ "subsection", "Detecting the Dynamic Nature of Webpages" ] ], "subsections": [], "title": "Detecting the Dynamic Nature of Webpages" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section3}\nSeveral studies have proposed different architectures for change detection systems. The two main architectures that are widely used within current CDN systems are the server-based architecture and the client-based architecture, each with its own advantages and disadvantages.", "id": "e0f7f541-3dd6-40f5-be3a-75475986d97d", "level": "section", "origin_cites_number": 1, "parent_id": "1e2d119d-e006-4b3f-8220-ef195f9b44ff", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Architectural Aspects" ] ], "subsections": [ "7cc42520-af8e-4594-b777-773b6d1300e1", "ee5405e6-cb94-4338-8b1c-6ae174bc987d", "3462fcd0-d66c-47b0-824c-c1ea33f08109" ], "title": "Architectural Aspects" }, { "cite_extract_rate": 0, "cites": [], "content": "The server-based architecture, as depicted in Figure~\\ref{fig4:server-archi}, consists of the main server, which polls webpages periodically to track changes, and sends alerts about these changes to the subscribed users (clients) by email notifications. If a large number of webpages exist, the computational load for the server will increase as the server must identify changes in each of the webpages added by users. This can also lead to reduced detection frequencies. 
The process of scaling such tools with the server-based architecture becomes expensive and makes the system less efficient. Due to these issues, the frequency at which changes are detected on webpages will not be sufficient, and the server might fail to observe some changes that have occurred in frequently-changing webpages.
\begin{figure}[!ht]
    \centering
    \includegraphics[width=0.6\textwidth]{images/Server_side_Architecture.png}
    \caption{Server-based architecture for CDN systems.}
    \label{fig4:server-archi}
\end{figure}
The Sitemaps protocol~ is used by servers to inform search engines about the changes that the server considers important and makes available for crawling. This allows crawlers to identify webpages that have changed recently without having to download and process HTML pages~. Support for the Sitemaps protocol by the search engine industry was announced by Google, Yahoo and Microsoft in the year 2006~. Since then, many websites such as Amazon, CNN and Wikipedia have begun supporting Sitemaps. Studies have shown that the use of sitemaps has improved crawling results~. Data from Sitemaps can be used by CDN systems to obtain updated content. The use of Sitemaps helps to eliminate difficulties faced by crawlers and exposes data regarding changes only.
The client-based architecture involves clients who wish to track changes occurring on webpages, and these machines poll webpages in iterations to identify changes.
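At its core, such client-side polling amounts to fetching a page, fingerprinting its content, and comparing the fingerprint to the one recorded on the previous poll. A minimal sketch (the class name and the injected fetch function are illustrative, not taken from any particular plugin):

```python
import hashlib

class ClientMonitor:
    """Minimal client-side change tracker: remembers the last content
    fingerprint per URL and reports whether a freshly fetched version differs."""

    def __init__(self, fetch):
        self.fetch = fetch          # injected fetch function, e.g. urllib-based
        self.last_seen = {}         # url -> sha256 fingerprint

    def poll(self, url):
        """Fetch the page; return True if its content changed since the
        previous poll (the first poll only records a baseline)."""
        digest = hashlib.sha256(self.fetch(url).encode("utf-8")).hexdigest()
        previous = self.last_seen.get(url)
        self.last_seen[url] = digest
        return previous is not None and previous != digest
```

In a real browser extension, `poll` would run on a timer and raise a notification whenever it returns `True`.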
Users with ample computational resources can detect frequent changes occurring on webpages, whereas other users might be unable to do so. The client-based architecture has been implemented in the form of browser plugins, and hence may suffer bottlenecks due to network connectivity and machine performance. Figure~\ref{fig5:client-archi} illustrates the client-based architecture for a CDN system.
\begin{figure}[!ht]
    \centering
    \includegraphics[width=0.6\textwidth]{images/Client_side_Architecture.png}
    \caption{Client-based architecture for CDN systems.}
    \label{fig5:client-archi}
\end{figure}
Out of the currently available CDN systems, a limited number are built using the client-based architecture. They come in the form of browser plugins/extensions such as Distill~ and Check4Change~. Once the extension is installed and enabled successfully, the user can add webpages to monitor. The extension will regularly check the added webpages within the user’s browser. If any changes are detected, the user will get a browser notification.
Several customized architectures of CDN can be found in which researchers have tried to improve the efficiency from an architectural perspective. A design to crawl webpages in parallel using several machines, while addressing the problems associated with crawling, has been proposed by Yadav et al.~. They have designed a new architecture to build a parallel crawler. The proposed architecture consists of two main components.
They are the Multi-threaded Server (MT-Server) and the client crawlers. MT-Server is the principal component that manages a collection of client machine connections, which download and crawl webpages. It has a special methodology to distribute URLs among the clients once their "priority index" is determined. The client crawlers are the different machines that interact with the server. The number of clients that can be supported may vary according to the available resources and their implementation. Furthermore, there exists no communication between clients, and the clients interact only with the server. However, scaling the system can result in high costs as the number of server nodes has to grow.
Work carried out by Prieto et al.~ presents a system with a collaborative architecture to detect changes in distributed systems. The authors have proposed the Web Change Detection system (WCD). The four major components of this system’s architecture are the web client, web server, WCD agent and WCD server. The web client is a general web browser, which loads the webpages. The web server is a general server that caches the webpages being monitored. The WCD agent is an application operating in the browser that sends information about modifications that have occurred on webpages to the WCD server. The WCD server stores and sends the WCD agents to clients, and stores the data about monitored webpages. To detect near-duplicates, PageRank~ values have been considered along with the shash tool~. This tool achieves high change detection rates with a low cost of maintenance. However, in times of excessive usage, if the system receives many requests, it may fail to process them in real-time.
An "approach for distributed aggregation, faster scalable content classification and change detection in web content" has been proposed by Nadaraj~.
The author has presented an algorithm to determine webpage changes, and a method to identify what field has been altered with the help of a data structure named Bloom filters~. Web crawlers are being used to collect details about pages and URLs. It uses Bloom filters to identify the duplicate URLs in a webpage. Hash keys for every visited URL are saved in the bloom filter. Bloom filter entries will be validated when new URLs are discovered. This prevents the crawling mechanism from repeating in loops. The system creates a hash key for the content that has been crawled, and checks the presence of the hash within the Bloom lookup. If present, the content is the same as the existing content; otherwise, the content has been updated. If the hash key for the URL is not found, then the URL is added to the Bloom filter lookup file, and a hash for the crawled content is created and inserted. This method increases the efficiency of the crawling process as it distributes the workload. Furthermore, the strong association of crawlers to working machines will be minimized, and the crawlers will be able to function without any restrictions in distributed networks.\nA hybrid architecture for CDN is proposed by Meegahapola et al.~. This architecture is a hybrid of server-based and client-based architectures. It has two independent crawling schedules for the server and the clients. The server will crawl all the webpages it has registered, and the clients will crawl the webpages which they want to track. The change detection process occurs independently in the server and clients. If the server detects a change, it will be notified to the interested clients. If a change is detected by a client, then it will directly report back to the server, and the server will notify the interested clients. According to this architecture, the time elapsed in between two consecutive server poll actions is divided among the available clients which in turn speeds up the detection process. 
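The Bloom-filter-based duplicate check used in Nadaraj's approach can be sketched as follows. The filter size, number of hash functions and hashing scheme here are illustrative; a real deployment would size the filter from the expected URL count and a target false-positive rate.

```python
import hashlib

class BloomFilter:
    """Simple Bloom filter for duplicate-URL detection (sketch).
    May report false positives but never false negatives."""

    def __init__(self, size_bits=8192, num_hashes=4):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive k bit positions by salting the hash input with the index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

seen = BloomFilter()

def should_crawl(url):
    """Skip URLs already recorded in the filter, preventing crawl loops."""
    if url in seen:
        return False
    seen.add(url)
    return True
```

The same mechanism extends to change detection: hashing the crawled content and checking the hash against the filter distinguishes unchanged pages from updated ones, as described above.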
Table~\\ref{tab2:cdn-archi} presents a summary of the various architectures that are being used by commercially available CDN systems and that have been proposed by researchers.\n\\input{tables/table2.tex}", "id": "3462fcd0-d66c-47b0-824c-c1ea33f08109", "level": "subsection", "origin_cites_number": 7, "parent_id": "e0f7f541-3dd6-40f5-be3a-75475986d97d", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Architectural Aspects" ], [ "subsection", "Customized Architectures" ] ], "subsections": [], "title": "Customized Architectures" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section4}", "id": "e390f1b3-f7c1-42f4-b37a-c259597b6e50", "level": "section", "origin_cites_number": 0, "parent_id": "1e2d119d-e006-4b3f-8220-ef195f9b44ff", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Detecting Changes of Webpages" ] ], "subsections": [ "6ecdbe4c-e0e7-4e29-8a26-046dc4d0c747", "035a66df-cff1-476d-95e2-0a9f7d414274", "b67181c5-b1dc-4ec8-b562-5e31167a24e7", "c7fda993-c22d-4b7e-87a1-f5355ac3e355" ], "title": "Detecting Changes of Webpages" }, { "cite_extract_rate": 0.5, "cites": [ 338 ], "content": "A web crawler (also known as a spider) is \"a software or a programmed script that browses the WWW in a systematic, automated manner\"~, and systematically downloads numerous webpages starting from a seed URL~. Web crawlers date back to the 1990s, where they were introduced when the WWW was invented. The World Wide Web Worm~ and MOMspider~ were among the early web crawlers. Moreover, commercial search engines developed their own web crawlers as well such as Google crawler~ and AltaVista~. Later on, web crawlers that could efficiently download millions of webpages were built. \nWe can consider the Internet to be a \"directed graph\" where each node represents a webpage, and the edges represent hyperlinks connecting these webpages~. 
Web crawlers traverse over this graph-like structure of the Internet, go to webpages, and download their content for indexing purposes. It is crucial to identify which crawling technique should be used according to the purpose of the application.\nWeb crawling can be considered as the main process behind web caching, search engines and web information retrieval. A web crawler begins crawling from a seed URL and visits pages. Then it downloads the page, and retrieves the URLs in it. The discovered URLs will be kept in a queue, and this process repeats as the crawler travels from page to page. Figure~\\ref{fig7:web-crawling} shows an overview of the web crawling process. Many crawling techniques are being used by web crawlers at present~ such as (1) general-purpose crawling, (2) focused crawling and (3) distributed crawling.\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{images/Web_crawling_process.png}\n \\caption{Overview of the web crawling process.}\n \\label{fig7:web-crawling}\n\\end{figure}", "id": "6ecdbe4c-e0e7-4e29-8a26-046dc4d0c747", "level": "subsection", "origin_cites_number": 6, "parent_id": "e390f1b3-f7c1-42f4-b37a-c259597b6e50", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Detecting Changes of Webpages" ], [ "subsection", "Web Crawlers and Crawling Techniques" ] ], "subsections": [ "67acc0b7-9438-41f9-9c56-86de667bf252", "9a498a3a-0dce-4a45-a81f-908514bf4328", "6d540a9d-eac6-457d-ae28-03ea3e8a1ddc" ], "title": "Web Crawlers and Crawling Techniques" }, { "cite_extract_rate": 0, "cites": [], "content": "In the general-purpose crawling technique, the web crawlers collect all the possible webpages from a set of URLs and their links to a central location. It can fetch numerous pages from many different locations. 
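The crawl loop just described (start from a seed, fetch the page, extract its links, enqueue the discovered URLs, repeat) can be sketched with Python's standard library. The three-page site below is hypothetical and stands in for real HTTP fetching:

```python
from collections import deque
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets of anchor tags from an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, fetch, limit=100):
    """Breadth-first crawl from a seed URL; `fetch` maps URL -> HTML."""
    queue, visited = deque([seed]), set()
    while queue and len(visited) < limit:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        extractor = LinkExtractor()
        extractor.feed(fetch(url))
        for link in extractor.links:
            if link not in visited:
                queue.append(link)
    return visited

# Hypothetical three-page site, modelled as a dict instead of real HTTP.
PAGES = {
    "/": '<a href="/a">A</a><a href="/b">B</a>',
    "/a": '<a href="/">home</a>',
    "/b": "",
}
print(sorted(crawl("/", PAGES.get)))   # → ['/', '/a', '/b']
```

Production crawlers add politeness delays, robots.txt handling and URL normalization on top of this basic loop.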
However, this technique can slow down the crawling process and reduce the network bandwidth as all the pages are being fetched.", "id": "67acc0b7-9438-41f9-9c56-86de667bf252", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "6ecdbe4c-e0e7-4e29-8a26-046dc4d0c747", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Detecting Changes of Webpages" ], [ "subsection", "Web Crawlers and Crawling Techniques" ], [ "subsubsection", "General-Purpose Crawling" ] ], "subsections": [], "title": "General-Purpose Crawling" }, { "cite_extract_rate": 0, "cites": [], "content": "Focused crawling is used to collect pages and documents that belong to a specific topic. This can reduce the workload and network traffic during download as the required pages are only downloaded. It crawls only the relevant regions of the Web. This method saves hardware and network resources significantly.\nInitially, focused crawling was introduced by Chakrabarti et al.~ as \"a new approach to topic-specific Web resource discovery\". The focused crawler has three major components. They are as follows.\n\\begin{enumerate}\n \\item \"Classifier decides whether to expand links on the webpages crawled\".\n \\item \"Distiller determines visit priorities of crawled pages\".\n \\item \"Crawler consists of dynamically reconfigurable priority controls which are controlled by the classifier and distiller\"~.\n\\end{enumerate}\nDiligenti et al.~ have highlighted the importance of assigning credits to different documents across crawling paths that produce larger sets of topically relevant pages. The authors have proposed a focused crawling algorithm with a model that captures link hierarchies containing relevant pages. Later on, Menczer et al.~ have discussed different ways to compare topic-driven crawling techniques, whereas \"a general framework to evaluate topical crawlers\" was presented by Srinivasan et al.~. 
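The classifier/distiller interplay described above amounts to scoring pages for topical relevance and expanding the most promising ones first. A best-first sketch follows; the keyword-overlap scoring function is a deliberately naive stand-in for a trained classifier, and pushing children with the parent's score is one common heuristic among several:

```python
import heapq

def relevance(text, topic_words):
    """Hypothetical score: fraction of topic words present in the page text."""
    words = set(text.lower().split())
    return sum(w in words for w in topic_words) / len(topic_words)

def focused_crawl(seed, fetch, links, topic_words, threshold=0.5, limit=10):
    """Best-first (focused) crawl: expand the most relevant page first and
    discard pages scoring below the threshold."""
    frontier = [(-1.0, seed)]          # max-heap via negated scores
    visited, kept = set(), []
    while frontier and len(kept) < limit:
        _neg, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        score = relevance(fetch(url), topic_words)
        if score >= threshold:
            kept.append(url)           # download/index only relevant pages
            for nxt in links(url):     # expand outlinks with parent's score
                if nxt not in visited:
                    heapq.heappush(frontier, (-score, nxt))
    return kept
```

Off-topic pages are pruned along with their outlinks, which is what lets a focused crawler cover only the relevant regions of the Web.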
Pant and Menczer~ have introduced a topical crawling technique to gather documents related to business intelligence, which can support organizations to identify competitors, partners or acquisitions. The authors have tested four crawlers: Breadth-First, Naive Best-First, DOM and Hub-Seeking. The results of precision and recall on a given topic show that the Hub-Seeking crawler outperforms the other crawlers. A popular technique to design focused crawlers is the utilization of the "link structure" of the documents. Li et al.~ have proposed a focused crawling technique using a decision tree created by anchor texts found in hyperlinks. Jamali et al.~ have presented a novel "hybrid focused crawler", which utilizes the "link structure" and similarity of the content to a particular topic.
Mali and Meshram~ have proposed another approach for a web crawler with focused crawling features. Three layers are present in this architecture: "page relevance computation", "determination of page change" and "updating the URL repository". During the crawling mechanism, not all the pages are downloaded. Instead, it extracts the URLs and the words of interest. The frequency of related words and the number of forward links and backward links to and from the webpage collaboratively decide the importance of the webpage being parsed. Certain parameters such as topic vectors and relevance scores are used to check the importance of the page. If the relevance level exceeds a predefined threshold, it is downloaded for extraction. Otherwise, the page will be discarded.
Work done by Bhatt et al.~ has studied focused web crawlers with their advantages and disadvantages. Additionally, the authors have suggested methods to further improve the efficiency of web crawlers. With the advancement in optimization algorithms, researchers have turned their focus on optimizing web crawlers. Optimizations allow web crawlers to select more suitable webpages to be fetched.
Focused crawlers have been optimized to increase the efficiency and performance of the crawler using search algorithms such as Breadth-First Search~, Fish-Search~, Shark-Search~ and evolutionary algorithms such as Genetic algorithms~ and Ant algorithms~.\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=0.7\\textwidth]{images/Distributed_crawler.png}\n \\caption{Arrangement of components in a distributed crawling system.}\n \\label{fig9:distributed-crawl}\n\\end{figure}", "id": "9a498a3a-0dce-4a45-a81f-908514bf4328", "level": "subsubsection", "origin_cites_number": 13, "parent_id": "6ecdbe4c-e0e7-4e29-8a26-046dc4d0c747", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Detecting Changes of Webpages" ], [ "subsection", "Web Crawlers and Crawling Techniques" ], [ "subsubsection", "Focused Crawling" ] ], "subsections": [], "title": "Focused Crawling" }, { "cite_extract_rate": 0.07142857142857101, "cites": [ 338 ], "content": "This method uses multiple processes to crawl webpages and download their content. Web crawlers are distributed over different systems, allowing them to operate independently. Distributed crawling techniques were introduced due to the inherent problems faced by centralized crawling solutions, such as reduced throughput of the crawl and link congestion~. \nFigure~\\ref{fig9:distributed-crawl} denotes the general arrangement of components in a distributed crawling system. The crawlers are distributed across different systems, and they crawl webpages in different parts of the Web. The Web can be partitioned using different graph partitioning algorithms, and each crawler is provided with a seed URL to start the crawling process~. All the crawled content is then aggregated and sent to the main server. This content can be processed within the main server or they can be sent to other services for different purposes such as indexing and version comparison within CDN systems. 
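The partitioning step described above, in the spirit of UbiCrawler's decentralized assignment, can be sketched by hashing each URL's host so that every agent independently computes the same partition without a central coordinator. The parsing and agent-count choices here are illustrative:

```python
import hashlib

def assign_agent(url, num_agents):
    """Assign a URL to a crawling agent by hashing its host, so all URLs of
    one site land on the same agent and no coordinator is needed."""
    # Crude host extraction; a real crawler would use urllib.parse.
    host = url.split("/")[2] if "//" in url else url.split("/")[0]
    digest = hashlib.sha256(host.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_agents
```

Because the mapping is deterministic, any agent can decide locally whether a discovered URL belongs to its own share of the Web or should be forwarded to a peer.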
One of the most popular high-performance distributed crawlers found at present is the Googlebot~. Googlebot is the web crawler used by Google to fetch webpages for indexing. It is designed to be distributed on several machines so that the crawler can scale as the Web grows. Table~\\ref{tab3:crawling-techniques} summarizes the three main crawling techniques with their advantages, disadvantages and comparison of features.\n\\input{tables/table3.tex}\nDetection of crashed agents is one of the important concerns of distributed web crawlers. Several distributed crawler solutions have been presented during the past~. To address this concern, crawlers can be designed as reliable failure detectors~, in which timeouts can be used to detect crashed agents. UbiCrawler~ is an example of a web crawler with a reliable failure detector. It is a distributed web crawler, which consists of several agents that are responsible to crawl their own share of the Web. However, it does not guarantee that the same webpage is not visited more than once. Based on the experience with UbiCrawler, the authors have built BUbiNG~, a fully distributed web crawler which can detect (near-)duplicate webpages based on the content. However, it does not support URL prioritization. \nAnother aspect that has drawn the attention of researchers is the efficient partitioning mechanisms of the Web space. Work done by Exposto et al.~ has presented a multi-objective approach for partitioning the Web space by modeling the Web hosts and IP hosts as graphs. These graphs are partitioned, and a new graph is created with the weights calculated using the original weights and the edge-cuts. Results show that efficient partitioning techniques have improved download times, exchange times and relocation times.\nKc et al.~ have introduced LiDi Crawl (which stands for Lightweight Distributed Crawler), which is a centralized crawling application with limited resources. 
It consists of a central node and several individual crawling components. The distributed crawling nature results in the reduced dependence on expensive resources. Kumar and Neelima~ have proposed a scalable, fully-distributed web crawler, without a central node. It consists of multiple agents, where each agent is coordinated so that they crawl their own region on the Web. An agent runs several threads, where each thread is made responsible to visit a single host. The main objective of having multiple agents is to break down the task of crawling into separate parts, and allow specialized modules to carry them out efficiently. Anjum et al.~ state that web crawlers should be aware of webpage modifications, and have discussed techniques to retrieve information on such modifications. However, the authors have found that the presence of multiple JavaScript and CSS files can reduce the efficiency of certain retrieval techniques. \nEfficient crawling of Rich Internet Applications (RIA) is an open problem as RIAs consist of many characteristics such as, the use of JavaScript and browser plugins, which makes the crawling process complex. Model-Based Crawling~ was introduced to determine an optimal crawling strategy for RIA. An extension of this model is presented by Dincturk et al.~, which uses a statistical approach in determining a crawling strategy. The recent work carried out by Mirtaheri et al.~, intends to lower the time taken to crawl RIAs by introducing Dist-RIA crawler, which is a distributed crawler to crawl RIAs in parallel. However, it assumes that all the nodes have equal processing power, and assigns an equal number of tasks to each node. This can be problematic when there is heterogeneity.\n\\input{tables/table4.tex}\nMulti-Threaded Crawler for Change Detection of Web (MTCCDW) has been introduced by Meegahapola et al.~ to optimize the change detection process of webpages. 
Many threads were used to carry out the tasks of (1) "retrieving current versions of webpages", (2) "retrieving old versions from a version repository" and (3) "comparison of the two versions to detect the changes"~. Two thread-safe queues were used in between these three tasks to optimize them. The authors have determined the optimum number of threads to be used in each of these tasks, to ensure that the CDN system works optimally without idling. This multi-threading-based algorithm differs from standard process optimization tasks because of the I/O operations involved in fetching a webpage from the Web. Hence, this algorithm yields a more significant improvement in processing time than it would for purely CPU-bound workloads, since threads can overlap I/O waits with processing. Table~\ref{tab4:crawling-solutions} summarizes the various crawling solutions presented in this subsection.
Among the important research problems observed in the web information retrieval domain, the scheduling process of visiting webpages along hierarchies of links has become significant. The frequency of change may differ across webpages as they get updated on different time schedules. Certain webpages such as wiki pages may get updated rarely, whereas news websites and blogs may get updated more frequently. To crawl these webpages more efficiently, dynamic mechanisms are required to detect the change frequency, and create checking schedules accordingly.
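The three-stage, queue-decoupled design of MTCCDW described earlier can be sketched with Python's thread-safe `queue` module. The stage functions and sentinel convention are illustrative, not the authors' actual implementation:

```python
import queue
import threading

def run_pipeline(urls, fetch_current, fetch_stored, diff):
    """Three-stage pipeline in the spirit of MTCCDW: stage 1 fetches the live
    page, stage 2 pairs it with the stored version, stage 3 compares them.
    Thread-safe queues decouple the stages so I/O and comparison overlap."""
    q1, q2, results = queue.Queue(), queue.Queue(), []

    def stage_fetch():
        for url in urls:
            q1.put((url, fetch_current(url)))
        q1.put(None)                       # sentinel: no more work

    def stage_pair():
        while (item := q1.get()) is not None:
            url, current = item
            q2.put((url, fetch_stored(url), current))
        q2.put(None)

    def stage_compare():
        while (item := q2.get()) is not None:
            url, old, new = item
            if diff(old, new):
                results.append(url)        # changed page detected

    threads = [threading.Thread(target=t)
               for t in (stage_fetch, stage_pair, stage_compare)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

In practice each stage would run a pool of threads rather than one, with the per-stage pool sizes tuned, as the authors did, so that no stage idles.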
Users will be notified immediately after changes have occurred on webpages that they are interested in. This will ensure that computational resources are used optimally. However, most of these algorithms are kept proprietary, and only a limited amount of detail is published about them to prevent them from being reproduced~.
Various solutions have been proposed to determine optimal webpage crawling schedules based on different assumptions and different statistical frameworks. Work done by Coffman et al.~ has described crawler scheduling policies by assuming independent Poisson page-change processes. Studies carried out by Cho and Garcia-Molina~ have addressed the problem of determining the optimal number of crawls per page by using a staleness metric and a Poisson process, where they assume uniform synchronization over time. Work done by Wolf et al.~ has proposed a technique to detect optimal crawl frequencies and theoretical optimal times to crawl webpages based on probability, resource allocation theory and network flow theory. Pandey and Olston~ have proposed a scheduling policy based on the "user-centric search repository quality".
\input{tables/table5.tex}
Various scheduling strategies for web crawling have been studied and compared in previous studies~. Among these strategies are optimal, depth, length, batch, partial~, depth-first search, breadth-first search, best-first search and best n-first search~. However, some of the strategies which have been considered cannot capture temporal aspects such as how frequently webpages undergo changes, and prioritize the crawls accordingly. Evolutionary programming techniques such as genetic programming~ can be used to solve this issue by enabling schedulers to rank webpages according to how likely they are to be modified. Moreover, certain algorithms may crawl webpages even though nothing has changed, wasting computational resources.
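A sketch of this kind of frequency-aware scheduling: under a Poisson assumption, a page's change rate is estimated from past observations (the bias-corrected estimator follows the form used by Cho and Garcia-Molina), and the re-crawl interval is chosen so that the probability of the copy still being fresh stays above a target:

```python
import math

def estimate_change_rate(checks, detected_changes, interval_hours):
    """Estimate a page's Poisson change rate (changes/hour) from n evenly
    spaced checks, x of which found the page modified. The +0.5 terms are a
    bias correction in the spirit of Cho and Garcia-Molina's estimator."""
    n, x = checks, detected_changes
    return -math.log((n - x + 0.5) / (n + 0.5)) / interval_hours

def crawl_interval(rate, target_freshness=0.8):
    """Interval t keeping P(no change since last crawl) = e^(-rate*t) above
    the target, i.e. t = -ln(target) / rate."""
    return -math.log(target_freshness) / rate
```

A news page observed to change in 20 of 24 hourly checks thus gets a much shorter re-crawl interval than an archive page that changed once, which is exactly the prioritization the naive strategies above cannot provide.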
Table~\\ref{tab5:scheduling-solutions} compares the various scheduling solutions presented previously.", "id": "035a66df-cff1-476d-95e2-0a9f7d414274", "level": "subsection", "origin_cites_number": 7, "parent_id": "e390f1b3-f7c1-42f4-b37a-c259597b6e50", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Detecting Changes of Webpages" ], [ "subsection", "Scheduling Algorithms" ] ], "subsections": [], "title": "Scheduling Algorithms" }, { "cite_extract_rate": 0, "cites": [], "content": "Changes occurring on webpages can be divided into five categories as discussed by Oita and Senellart~. They are (1) content changes, (2) structural (or layout) changes, (3) attribute (or presentation) changes, (4) behavioural changes and (5) type changes. Content changes include changes occurring in the text, images, etc., whereas structural (or layout) changes consist of changes occurring to the placement of HTML tags. Attribute (or presentation) changes include changes done to the design and presentation of a webpage, such as changes in the fonts, colors or captions. Behavioral changes occur when active components such as embedded applications and scripts of webpages are changed. Type changes occur when modifications are done to the existing HTML tags, such as when a \\emph{p} tag becomes a \\emph{div} tag. Studies have been focused on all of these types of changes~ and different methods and algorithms have been proposed~. \nA major concern of change detection in webpages is the ability to identify relevant and irrelevant changes. The research carried out by Borgolte et al.~ has focused to ignore irrelevant changes such as change of advertisements, and try to detect important changes occurring on webpages using different methods. The Delta framework introduced in this research, consists of four precise steps. In the initial step, it retrieves the currently available version of a website, and normalizes the DOM tree. 
Then in the next step, similarities are measured in comparison to the base version of the website. Similar changes are clustered to lower the analysis engine submissions. The final step is the \"analysis of identified and novel modifications with a potential computationally costly analysis engine\". Compared to many other pieces of research that focus on detecting changes that have been done to a website, the Delta framework focuses more on detecting significant changes occurring on a website.\nChanges occurring within information collections can be difficult to track due to the vast amount of data being available. Unexpected changes within the collection can make the content unorganized and outdated, which can cause burdens such as the removal of outdated resources and replacement for lost resources. Jayarathna and Poursardar~ have presented a study that focuses on a categorization and classification framework to manage and curate distributed collections of web-based resources. The selected pages were categorized with their relationships between the anchor text to identify the relevant information of the target page. They have proposed a digital collection manager that addresses webpages with textual content only. It has been shown that due to unexpected changes, documents in these collections can get problematic. Further research is necessary to detect changes occurring to other resources in digital collections, such as audio files, video files and documents.\nVarious change detection approaches have been proposed across the available literature to detect the different changes occurring on webpages. Elaborated in Table~\\ref{tab6:change-diff-algo} are a few popular algorithms that were considered in this survey. One of the common approaches that have been used for change detection in hierarchically structured documents such as webpages is the use of tree-structured data with \\emph{diff} algorithms~. 
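A minimal illustration of this tree-based comparison uses Python's stdlib HTML parser to index each version's text nodes by their tag path; real diff algorithms handle node moves, repeated siblings and attributes far more carefully:

```python
from html.parser import HTMLParser

class TreeIndexer(HTMLParser):
    """Flatten a page's DOM into {tag-path: text} so versions can be diffed."""
    def __init__(self):
        super().__init__()
        self.stack, self.index = [], {}

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

    def handle_data(self, data):
        if data.strip():
            path = "/" + "/".join(self.stack)
            self.index[path] = self.index.get(path, "") + data.strip()

def diff_versions(old_html, new_html):
    """Return (changed, added, removed) node paths between two versions."""
    old, new = TreeIndexer(), TreeIndexer()
    old.feed(old_html)
    new.feed(new_html)
    changed = {p for p in old.index.keys() & new.index.keys()
               if old.index[p] != new.index[p]}
    added = new.index.keys() - old.index.keys()
    removed = old.index.keys() - new.index.keys()
    return changed, added, removed
```

Comparing the two indexed trees pinpoints which nodes were edited, inserted or deleted, which is the essence of the tree-diff approach.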
Many variations of these diff algorithms have been presented in the available literature~. The basic idea on top of which all of these algorithms are built on is the process of modeling the hierarchically structured documents in the form of a tree with nodes, and compare the trees to identify changes that have occurred. Similarly, two trees can be developed; one with nodes from the parsed content of a previous version and the other with nodes from the parsed content of the currently available version. The two trees could be compared to identify which nodes have changed so that the changes can be determined.", "id": "b67181c5-b1dc-4ec8-b562-5e31167a24e7", "level": "subsection", "origin_cites_number": 6, "parent_id": "e390f1b3-f7c1-42f4-b37a-c259597b6e50", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Detecting Changes of Webpages" ], [ "subsection", "Change Detection Algorithms" ] ], "subsections": [ "91bd141f-ba5a-4676-b0de-52f3b30a0ea1", "1bbdf8b8-bd60-4372-91ca-4cdf8f828a40", "fd04cb6c-0b6d-43e8-b829-5e3d7c26ac5c", "073a71aa-4170-40ef-b918-648cf7fb32a2", "52ed44cb-8e0c-4553-a836-bdcdf827a5fe", "bc6cc98a-e2aa-4b89-b924-7cd5e71f74bf" ], "title": "Change Detection Algorithms" }, { "cite_extract_rate": 0, "cites": [], "content": "The Delta framework proposed in~ involves tree difference algorithms. Firstly, the modifications that have occurred in a website are extracted using fuzzy tree difference algorithms. Secondly, a machine learning algorithm is used to cluster similar modifications. 
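The idea of tolerating small, irrelevant edits can be illustrated with a fuzzy string comparison; this is only a stand-in for the Delta framework's actual fuzzy tree difference, applied here to a single node's text:

```python
import difflib

def is_relevant_change(old_text, new_text, tolerance=0.9):
    """Treat near-identical node texts as unchanged: small edits (e.g. a
    rotated timestamp) keep similarity above the tolerance, while substantive
    rewrites push it below and are flagged as relevant."""
    similarity = difflib.SequenceMatcher(None, old_text, new_text).ratio()
    return similarity < tolerance
```

Tuning the tolerance trades off sensitivity against noise from boilerplate such as advertisements.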
The authors have employed a tree difference algorithm, which they have generalized to remove irrelevant modifications during the early phases of the analysis.
The BULD Diff algorithm~ has been used to compute the differences between two given XML documents. It matches nodes in the two XML documents, and constructs a delta that represents the changes between the two compared documents. BULD stands for "Bottom-Up, Lazy-Down" propagation~, a name derived from the way its matchings are propagated: "bottom-up" and "lazily down". This algorithm can run in linear time. First, the algorithm matches the largest identical parts of both the XML documents. Then the algorithm tries to match subtrees and the parents of the matched subtrees. This process is repeated until no more matches are made.
Then the remaining unmatched nodes can be identified as insertions or deletions.", "id": "1bbdf8b8-bd60-4372-91ca-4cdf8f828a40", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "b67181c5-b1dc-4ec8-b562-5e31167a24e7", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Detecting Changes of Webpages" ], [ "subsection", "Change Detection Algorithms" ], [ "subsubsection", "BULD Diff Algorithm" ] ], "subsections": [], "title": "BULD Diff Algorithm" }, { "cite_extract_rate": 0, "cites": [], "content": "The research carried out by Wang et al.~ has investigated how XML documents change and methods to perform change detection of XML documents. The authors have pointed out the fact that XML is becoming important as it is at present the de-facto standard for publishing web applications and for data transfer/transportation. Internet query systems, search engines and continuous query systems heavily rely on efficiently detecting webpage changes in XML-documents because of the frequent rate at which webpages change nowadays. The authors have described an algorithm named X-Diff~, which is an algorithmic model used to compute the difference between two versions of an XML-document. Unordered trees, structural information of XML-documents and high performance are among the main features of the X-Diff algorithm. The authors have tested the X-Diff algorithm for its accuracy and efficiency of change detection by considering three main node types in the DOM tree, namely element nodes, text nodes and attribute nodes. Going beyond what had previously been done, the authors have compared the ordered tree model of change detection with the unordered tree model which they have used. &#10;
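The unordered tree model adopted by X-Diff can be illustrated with an order-invariant subtree hash; the sketch below is our simplified interpretation of such a hash, not the paper's exact "XHash" definition:

```python
import hashlib

def xhash(tag, text="", children=()):
    """Order-invariant subtree hash: child hashes are sorted before
    combining, so sibling order does not affect the result."""
    child_hashes = sorted(xhash(*c) for c in children)
    payload = tag + "|" + text + "|" + ",".join(child_hashes)
    return hashlib.sha256(payload.encode()).hexdigest()

# Two versions differing only in sibling order hash identically...
v1 = xhash("ul", children=(("li", "a"), ("li", "b")))
v2 = xhash("ul", children=(("li", "b"), ("li", "a")))
assert v1 == v2
# ...while an actual content change is still detected.
v3 = xhash("ul", children=(("li", "a"), ("li", "c")))
assert v1 != v3
```

Sorting the child hashes is what makes the comparison insensitive to sibling order, which is the essential property of the unordered tree model.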
They have also discussed the characteristics of the XML domain, and established a few key concepts such as \"node signature\" and \"XHash\".", "id": "fd04cb6c-0b6d-43e8-b829-5e3d7c26ac5c", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "b67181c5-b1dc-4ec8-b562-5e31167a24e7", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Detecting Changes of Webpages" ], [ "subsection", "Change Detection Algorithms" ], [ "subsubsection", "X-Diff Algorithm" ] ], "subsections": [], "title": "X-Diff Algorithm" }, { "cite_extract_rate": 0, "cites": [], "content": "The research carried out by Jain and Khandagale~ has focused on detecting changes in a specific location on a website or any document. This method involves tree comparison techniques. The majority of the existing techniques/systems check for changes in the whole webpage without allowing the user to select specific areas to monitor. When considering frequent changes, the cost of communication (or the information exchange) will cause inefficiencies and disturb the focus on a given context. Hence, the authors have proposed a tree comparison mechanism to overcome these difficulties. They have considered the management of zone changes (changes in a particular place on a webpage), and it is quite achievable using a tree mechanism because once a webpage is converted into XML format, it can be converted into a tree according to the HTML tags and attributes. To localize the detection mechanism, the focus is given only to a particular node, and the detection process will continue to its child nodes. However, the limit for the depth of the tree is not specified, which can be inefficient with large trees.
Yadav et al.~ describe a methodology to build a model to efficiently monitor webpage changes. &#10;
The authors have suggested an algorithm that efficiently identifies and extracts the changes on various versions of a particular webpage, and evaluates the significance/importance of the change. This algorithmic model has three main sections. Initially, a document-tree is created for the webpage, and then it encodes the tree. Finally, it matches the current version with the previous version of the encoded tree. The algorithm searches for two change types occurring in web content. They are namely, structural changes and content changes.\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=\\textwidth]{images/HTML_Tree_structure.png}\n \\caption{HTML document tree hierarchy of a sample webpage.}\n \\label{fig10:html-doc-tree}\n\\end{figure}\nTree models can be of two types: the first type is the ordered tree model where the left-to-right order between nodes is crucial, and this order directly contributes to the result of change detection; the second type is unordered tree model, where only the ancestor relationships are significant. Different studies have been carried out using these two tree models. Level Order Traversal~ is a form of breadth-first search that uses an ordered tree model. First, it constructs a document tree by taking in an HTML file and parses the elements. Then, the opening tags are identified as tree nodes, and the tree is constructed for the webpage as illustrated in Figure~\\ref{fig10:html-doc-tree}, while maintaining parent-child relationships and left-to-right order in-between siblings. Then the algorithm traverses through the tree to detect changes, and identify at which level of the tree the changes have occurred. 
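The level order traversal described above can be sketched as follows, assuming the document tree has already been built and using a simplified (label, children) node representation:

```python
from collections import deque

def changed_levels(old_root, new_root):
    """Breadth-first comparison of two document trees; returns the set of
    tree levels at which differences occur. Nodes are (label, [children])."""
    levels = set()
    queue = deque([(old_root, new_root, 0)])
    while queue:
        (o_label, o_kids), (n_label, n_kids), depth = queue.popleft()
        if o_label != n_label:
            levels.add(depth)
        if len(o_kids) != len(n_kids):
            levels.add(depth + 1)  # inserted/removed children one level down
        for o, n in zip(o_kids, n_kids):
            queue.append((o, n, depth + 1))
    return levels

old = ("html", [("body", [("h1", []), ("p", [])])])
new = ("html", [("body", [("h1", []), ("div", [])])])
print(changed_levels(old, new))  # prints {2}
```

The queue preserves the left-to-right order of siblings, which is exactly the property the ordered tree model relies on.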
However, some researchers~ have argued that the unordered tree model can generate more accurate results.", "id": "073a71aa-4170-40ef-b918-648cf7fb32a2", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "b67181c5-b1dc-4ec8-b562-5e31167a24e7", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Detecting Changes of Webpages" ], [ "subsection", "Change Detection Algorithms" ], [ "subsubsection", "Tree Difference Algorithms" ] ], "subsections": [], "title": "Tree Difference Algorithms" }, { "cite_extract_rate": 0, "cites": [], "content": "The Johnson's algorithm~ was originally proposed as a method to \"find the shortest paths between all the pairs of vertices in a sparse weighted directed graph\". The same algorithm has been introduced by Johnson and Tanimoto~ to detect changes to content. The authors have tested a prototype to manage and deliver the latest tutorial content on the web to students. The system was designed to anticipate changes to documents. The difference between stored documents is evaluated and updated accordingly. This algorithm has been used in a tool named Walden’s Paths Path Manager~. It computes the distance between two documents based on paragraphs, headings and keywords. Each signature difference is calculated, and the sum is taken as the total distance. It categorizes the change type, which makes it easy to identify which type of content changes most on a webpage. However, it does not consider the changes that occur to links. &#10;
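The signature-based distance described above can be sketched as follows; counting the symmetric difference per signature type is a simplifying assumption of this illustration, not necessarily the original measure:

```python
def signature_distance(old_sig, new_sig):
    """Sum of per-signature differences between two document versions.
    For each signature type (paragraphs, headings, keywords), count the
    elements that appear in only one version, then sum the counts."""
    total = 0
    for kind in ("paragraphs", "headings", "keywords"):
        a, b = set(old_sig.get(kind, ())), set(new_sig.get(kind, ()))
        total += len(a ^ b)  # elements present in exactly one version
    return total

old_sig = {"paragraphs": {"p1", "p2"}, "headings": {"Intro"}, "keywords": {"web"}}
new_sig = {"paragraphs": {"p1", "p3"}, "headings": {"Intro"}, "keywords": {"web", "diff"}}
print(signature_distance(old_sig, new_sig))  # prints 3
```

Because the per-signature counts are kept separate before summing, the same structure can also report which signature type contributed most to the change, mirroring the change-type categorization mentioned above.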
Furthermore, the results produced by this algorithm can be hard for a normal user to understand without the assistance of a computing device.
\input{tables/table6.tex}", "id": "52ed44cb-8e0c-4553-a836-bdcdf827a5fe", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "b67181c5-b1dc-4ec8-b562-5e31167a24e7", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Detecting Changes of Webpages" ], [ "subsection", "Change Detection Algorithms" ], [ "subsubsection", "Johnson's Algorithm" ] ], "subsections": [], "title": "Johnson's Algorithm" }, { "cite_extract_rate": 0, "cites": [], "content": "The Proportional algorithm~ is based on signatures, and gives a much simpler calculation of the distance. It computes a distance that is normalized and symmetric using each signature (paragraphs, headings, keywords and links). The proportional change of each signature is used with regard to the total number of changes. The measured distance is both normalized and symmetric. These properties help the user of the algorithm significantly by providing a listing or visualization of various webpage changes. Having such listings and visualizations makes it possible for the user to easily analyze changes of the webpage without necessarily reading and reviewing all the pages in the path. However, there is a slight performance trade-off when compared to Johnson's algorithm as changes to each signature are computed individually.
Table~\ref{tab6:change-diff-algo} summarizes the change detection algorithms discussed in this subsection, and compares the methodology/functions, advantages, disadvantages and associated research work. According to Table~\ref{tab6:change-diff-algo}, it can be observed that there are many change detection algorithms based on difference calculation and traversal techniques. &#10;
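The normalized, symmetric distance of the Proportional algorithm can be sketched in the same vein; the per-signature proportion used here (changed elements over all elements seen in either version) is an assumption of this illustration:

```python
def proportional_distance(old_sig, new_sig,
                          kinds=("paragraphs", "headings", "keywords", "links")):
    """Normalized, symmetric distance: for every signature type, take the
    proportion of elements that changed relative to all elements seen in
    either version, then average over the signature types."""
    parts = []
    for kind in kinds:
        a, b = set(old_sig.get(kind, ())), set(new_sig.get(kind, ()))
        union = a | b
        parts.append(len(a ^ b) / len(union) if union else 0.0)
    return sum(parts) / len(parts)

old_sig = {"paragraphs": {"p1", "p2"}, "links": {"/home"}}
new_sig = {"paragraphs": {"p1", "p2"}, "links": {"/home", "/news"}}
d = proportional_distance(old_sig, new_sig)
assert 0.0 <= d <= 1.0                                # normalized
assert d == proportional_distance(new_sig, old_sig)   # symmetric
```

Because every per-signature term lies in [0, 1] and set operations are symmetric, the averaged distance inherits exactly the normalized and symmetric properties described above.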
Different algorithms detect different changes such as changes occurring to text, visual representation and XML structure. The majority of the algorithms perform at fairly good speeds, and provide faster detection rates along with efficient resource utilization and low computational costs. Algorithms such as level order traversal have a low overhead, which will in return reduce the network traffic in communication. Algorithms such as the Cosine algorithm and the Fuzzy Tree Difference algorithm provide different levels of changes so that the threshold can be decided upon the specific usage of the algorithms. Algorithms such as the Johnson’s algorithm can be used to categorize the change types so that it is easy to identify which type of content is mostly changed in a webpage. \nHowever, there are certain shortcomings in each of these algorithms. If the content being compared is very small the Shingling algorithm will not be able to generate sufficient shingles. The Johnson’s algorithm does not identify changes occurring in links. Algorithms such as CX-DIFF and Vi-DIFF consist of highly expensive computations, and can even lead to overloading of the servers. Level order traversal algorithms do not provide sufficient information to the user about changes occurring.", "id": "bc6cc98a-e2aa-4b89-b924-7cd5e71f74bf", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "b67181c5-b1dc-4ec8-b562-5e31167a24e7", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Detecting Changes of Webpages" ], [ "subsection", "Change Detection Algorithms" ], [ "subsubsection", "Proportional Algorithm" ] ], "subsections": [], "title": "Proportional Algorithm" }, { "cite_extract_rate": 0, "cites": [], "content": "Since the 1970s, several studies have been carried out based on statistics, to estimate the change frequency~. 
Nevertheless, the majority of these studies have been done under the assumption that the history of changes is available for the webpage, which might not always be true in the area of change detection of webpages. When analyzing CDN systems, it is evident that complete change histories will not be available for a webpage being tracked. In several related studies~, the Poisson model has been introduced as a model that can be used to estimate the rate of change of a webpage (also known as change frequency). Most of the work which has been carried out relies on an assumption: \"changes arrive as a Poisson process, and that the average rate of change can be estimated under this model\". Brewington and Cybenko~ have used an exponential probabilistic model to infer the times between changes occurring in individual webpages. The ages of webpages that have gone through many changes over a time period can be closely modeled using an exponential distribution. However, it models all the webpages as dynamic, even if the webpages change rarely and their only changes are their removal from the Web.
According to Olston and Najork~, the binary freshness model can be used to measure the freshness of a webpage. This model, which is also known as obsolescence, is a function of the form
\begin{equation}
 f(p,t) \in \{0,1\}
\end{equation}
where $f(p, t)$ denotes the freshness of a particular webpage $p$ over a $t$ time period. It compares the live copy of a specific webpage $p$ with a cached version of the webpage across a time period $t$ to check if they are identical (or near-identical). Under this model, if $f(p, t)$ equals one, then the webpage $p$ can be called fresh over the time $t$, whereas otherwise it is considered a stale webpage which has gone through some change. This model is simple, but effective, and provides readers with a very good understanding of webpage changes. &#10;
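A minimal sketch of the binary freshness function $f(p,t)$, comparing a cached copy of a page against its live copy:

```python
import hashlib

def binary_freshness(cached_html: str, live_html: str) -> int:
    """f(p, t): 1 if the cached copy is identical to the live copy
    (fresh), 0 otherwise (stale)."""
    digest = lambda s: hashlib.sha256(s.encode()).digest()
    return 1 if digest(cached_html) == digest(live_html) else 0

assert binary_freshness("<p>rain</p>", "<p>rain</p>") == 1  # fresh
assert binary_freshness("<p>rain</p>", "<p>sun</p>") == 0   # stale
```

Comparing digests rather than full documents lets a crawler store only a fixed-size fingerprint per tracked page.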
The first study regarding the freshness maximization problem was done by Coffman et al.~, where the authors have proposed a Poisson model for webpage changes. A set of random and independent events that occur at a fixed rate can be modeled using the Poisson Process. A webpage can undergo changes that cause the cached copy of the web crawler to go stale. If $\lambda(p)$ is the rate parameter of a Poisson distribution where $p$ is the webpage, that specific distribution can be used to denote the occurrence of changes in that specific webpage. This also suggests that the changes happen independently and randomly with a mean rate of $\lambda(p)$ changes per unit of time. 
However, the binary freshness model lacks the ability to determine whether one page is fresher than the other since the model outputs a binary value: fresh or stale. Hence Cho and Garcia-Molina~ have introduced a non-binary freshness model known as the temporal freshness metric, which is
\begin{equation}
 f(p,t) \propto age(p,t)
\end{equation}
in which $age(p, t)$ represents the age of a page $p$ up to time $t$ and $age(p, t) \in \{0, a\}$ where $a$ is the time duration the copies differed. If a cached-copy of webpage $p$ is indistinguishable from its live-copy, then $age(p, t) = 0$. The intuition of this methodology is that \"the more time a cached page stays unsynchronized with its live-copy, the more their content tends to drift away\".
Cho and Garcia-Molina~ have proposed several frequency estimators for different online applications that require frequency estimation with different accuracy levels. The authors have proposed frequency estimators for scenarios where the existence of a change is known and where the last date of change is known. Furthermore, the authors have proposed a model to categorize elements into different classes based on their change frequencies. &#10;
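Such estimation can be sketched as follows; the exact form of the bias-corrected estimator below is the one commonly attributed to Cho and Garcia-Molina, and its constants should be treated as an assumption of this illustration:

```python
import math

def estimated_change_rate(n_visits, n_changes, interval_hours):
    """Estimate a page's Poisson change rate from n_visits equally spaced
    visits, n_changes of which found the page changed, using the
    bias-corrected estimator r = -log((n - X + 0.5) / (n + 0.5)) per
    visit interval, expressed here per hour."""
    r_per_interval = -math.log((n_visits - n_changes + 0.5) / (n_visits + 0.5))
    return r_per_interval / interval_hours

def staleness_probability(rate_per_hour, hours_since_check):
    """Under the Poisson model, the probability that the cached copy has
    gone stale t hours after the last check is 1 - e^(-rate * t)."""
    return 1.0 - math.exp(-rate_per_hour * hours_since_check)

# A page checked daily that was found changed on 3 of 10 visits:
rate = estimated_change_rate(n_visits=10, n_changes=3, interval_hours=24)
print(f"~{rate:.4f} changes/hour, "
      f"P(stale after 12h) = {staleness_probability(rate, 12):.2f}")
```

A CDN system can use such rate estimates to place tracked webpages into frequency classes and to schedule re-checks accordingly.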
These aspects can be modeled in CDN systems to categorize webpages once their change frequencies have been estimated.
\input{tables/table7.tex}
Grimes and O’Brien~ state that for every webpage, the hourly update rate can be represented by a Poisson distribution together with $\lambda = 1 / \Delta$ where $\lambda$ is a parameter and $\Delta$ is the mean time between changes. The authors have also described a mechanism to identify webpage changes and a model for the computation of the rate of change of a given webpage. 
Work carried out by Meegahapola et al.~ has proposed a methodology to predict the update frequency of webpages (the rate at which changes happen) in advance, and to adjust the crawling schedule to make the process of crawling webpages more efficient in search engines and web crawlers used for change detection. The authors have introduced a change frequency detection system named Change Frequency Estimator Module (CFEM). It includes two processes. Whenever a fresh webpage is added, that webpage will be crawled to detect a suitable change frequency, and it will be recorded in the system. Then these values are sent to a machine learning model~, which will predict the time interval between two crawls for a particular webpage. When the change values together with change frequencies for a webpage are sent to the machine learning model, it outputs a time interval called a \emph{loop time} corresponding to that particular webpage. This value is an estimation of the average time taken by the webpage between two changes, or in other terms, the \emph{refresh rate}. It has also been observed by the authors that frequently changing websites obtained lower loop times in comparison to webpages which do not change often. &#10;
Table~\\ref{tab7:freq-detect-tech} summarizes the different change frequency detection techniques with their characteristics and limitations.", "id": "c7fda993-c22d-4b7e-87a1-f5355ac3e355", "level": "subsection", "origin_cites_number": 12, "parent_id": "e390f1b3-f7c1-42f4-b37a-c259597b6e50", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Detecting Changes of Webpages" ], [ "subsection", "Frequency of Webpage Changes" ] ], "subsections": [], "title": "Frequency of Webpage Changes" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section5}\nCDN systems provide facilities to notify users about information changes or occurrence of events in a particular webpage of interest. From our studies, we have determined three main characteristics that should be considered when designing a notification system. They are (1) when to notify, (2) how to notify and (3) what to notify.\nWhen to notify changes is an important aspect to decide on when developing a change notification system. Users may want to get notifications as soon as they occur, or some may want to get periodic notifications as they may not want to have a clutter of notifications. The way notifications are sent decides how useful the notification system will be. The system should be able to send the notifications to the user in a way that the user will not be overwhelmed with the notifications. The content to be notified may depend on the user as they may have different information needs. Hence, it is wise to allow the user to customize the content.\nThe change notification process of CDN systems available at present consists of a wide range of notification methods. 
A few of the popular methods are web interfaces, browser plugin notifications, emails, SMS alerts and popup alerts~.", "id": "ca0f479c-e7c0-49d5-9f2b-07319fed2a8b", "level": "section", "origin_cites_number": 1, "parent_id": "1e2d119d-e006-4b3f-8220-ef195f9b44ff", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Change Notification" ] ], "subsections": [ "c65eb916-8a26-4fde-9fe2-264c020f5daa", "2b2a0915-04bb-4b62-a01e-497136401ed3", "a0ba5115-040b-4c0b-9d2c-3d9e1d7b3c1d", "8224e4d8-133c-45d9-b885-72857b7f7d6c" ], "title": "Change Notification" }, { "cite_extract_rate": 0, "cites": [], "content": "WebCQ~ is a CDN system that is among the earliest to become popular back at the beginning of the 2000s. The notification system of WebCQ runs on the server-side, and hence, it uses periodic notifications so that it can run efficiently with a large user base and webpage entries. The interval to send notifications is defined by the user when registering a webpage to track. WebCQ allows users to view changes on a web interface, where they can query to find reports detailing the changes or reports with a summary of changes. \n\\begin{figure}[!ht]\n \\centering\n \\begin{minipage}{1\\textwidth}\n \\begin{minipage}{0.43\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images/Dashboard_Visualping.png}\\\\\n (a)\n \\end{minipage}\n \\begin{minipage}{0.57\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images/Dashboard_ChangeTower.png}\\\\\n (b)\n \\end{minipage}\n \\end{minipage}\n \\caption{Dashboard of (a) Visualping~ and (b) ChangeTower~}\n \\label{fig11:VisualPingChangeTowe}\n\\end{figure}\nThroughout the past two decades, CDN systems have evolved, by utilizing modern front-end development frameworks, to provide more appealing and user-friendly interfaces with improved user experience. 
These interfaces have been able to convey useful information to the user in a more efficient and readable manner. Recently introduced change detection systems such as Visualping~ and ChangeTower~ provide more advanced features for notifying changes within the web interfaces (as shown in Figure~\\ref{fig11:VisualPingChangeTowe}). The majority of the systems provide a dashboard for the user with summaries of recent changes that have occurred to the webpages they are tracking.", "id": "c65eb916-8a26-4fde-9fe2-264c020f5daa", "level": "subsection", "origin_cites_number": 2, "parent_id": "ca0f479c-e7c0-49d5-9f2b-07319fed2a8b", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Change Notification" ], [ "subsection", "Web Interfaces" ] ], "subsections": [], "title": "Web Interfaces" }, { "cite_extract_rate": 0, "cites": [], "content": "Distill Web Monitor~ is a CDN system, which is available as a browser plugin. Figure~\\ref{fig12:distill} illustrates the browser plugin of Distill Web Monitor. It allows users to view changes highlighted on the webpage itself. 
Furthermore, this system provides various notification options for users including, emails, browser notifications, SMS, sounds and popup alerts.\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=0.85\\textwidth]{images/Distill_Web_Monitor.png}\n \\caption{Browser plugin of Distill Web Monitor~}\n \\label{fig12:distill}\n\\end{figure}", "id": "2b2a0915-04bb-4b62-a01e-497136401ed3", "level": "subsection", "origin_cites_number": 1, "parent_id": "ca0f479c-e7c0-49d5-9f2b-07319fed2a8b", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Change Notification" ], [ "subsection", "Browser Plugins" ] ], "subsections": [], "title": "Browser Plugins" }, { "cite_extract_rate": 0, "cites": [], "content": "Email notifications have become popular in all the online services as a means to convey new updates and news to the users. Similarly, most of the CDN systems provide the facility to get notified via emails, once changes occur in monitored webpages. Emails generally contain links which when clicked by the user, will be redirected to a page with further information about the relevant change.\n\\input{tables/table8.tex}", "id": "a0ba5115-040b-4c0b-9d2c-3d9e1d7b3c1d", "level": "subsection", "origin_cites_number": 0, "parent_id": "ca0f479c-e7c0-49d5-9f2b-07319fed2a8b", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Change Notification" ], [ "subsection", "Emails" ] ], "subsections": [], "title": "Emails" }, { "cite_extract_rate": 0, "cites": [], "content": "Certain CDN systems provide the facility to get notifications via Short Message Service (SMS). The system requests the user to enter a mobile phone number to which the user wants to have the notifications delivered. Google Alerts~ and Distill Web Monitor~ are two popular services that provide this facility for its users. 
SMS alerts are very useful for users who wish to get notifications about webpage updates while traveling and when they do not have access to the Internet to log into the online system. However, the SMS alert feature may require the user to upgrade to their paid versions. Table~\\ref{tab8:change-notify} compares the different features of the various change notification techniques.", "id": "8224e4d8-133c-45d9-b885-72857b7f7d6c", "level": "subsection", "origin_cites_number": 2, "parent_id": "ca0f479c-e7c0-49d5-9f2b-07319fed2a8b", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Change Notification" ], [ "subsection", "SMS Alerts" ] ], "subsections": [], "title": "SMS Alerts" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section6}\nVisualization of changes is an important aspect of CDN systems. Proper visualization techniques will allow the users to easily identify the changes of the webpages being tracked. Publicly available CDN systems visualize changes occurring on webpages in different ways~. 
Most of the changes are depicted in the original interface itself that is loaded by the CDN system.\n\\begin{figure}[!ht]\n \\centering\n \\begin{minipage}{1\\textwidth}\n \\begin{minipage}{0.46\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images/Wachete.png}\\\\\n (a)\n \\end{minipage}\n \\begin{minipage}{0.54\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images/Visualping.png}\\\\\n (b)\n \\end{minipage}\n \\end{minipage}\n \\caption{(a) Text change visualization in Wachete~ and (b) visual comparison of changes in Visualping~}\n \\label{fig11:wachete_visualping}\n\\end{figure}", "id": "e425e367-4ff1-45d0-a593-4277e0f4c653", "level": "section", "origin_cites_number": 2, "parent_id": "1e2d119d-e006-4b3f-8220-ef195f9b44ff", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Change Visualization" ] ], "subsections": [ "d2194d19-0e85-4343-81f2-a7435d5dfbb5", "356b2404-d013-4a36-8c0f-4917ce49bf85", "07756d72-73c5-47cf-96cd-2b7e05a5149f" ], "title": "Change Visualization" }, { "cite_extract_rate": 0, "cites": [], "content": "One popular means of visualizing textual content is by graphically annotating the differences in the HTML content. This was first introduced as the HtmlDiff tool~, which marks up the HTML text to indicate how it has changed from a previous version. This tool was introduced in the AT\\&T Internet Difference Engine (AIDE)~ which was developed to detect and visualize changes to webpages. Most of the currently available CDN tools use the HtmlDiff technique or its variants that highlight the changes in different colors. The most commonly used color convention is that deleted text is highlighted in red color with strike-through formatting, whereas newly added text is highlighted in green color. The text which has not changed is not highlighted. 
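This style of marked-up comparison can be reproduced with Python's standard difflib module (whose HtmlDiff class is unrelated to the AT\&T tool, despite sharing its name):

```python
import difflib

old_lines = ["<h1>Weekly News</h1>", "<p>Storms expected.</p>"]
new_lines = ["<h1>Weekly News</h1>", "<p>Sunny skies expected.</p>"]

# Side-by-side HTML table with insertions/deletions marked up via CSS
# classes, in the spirit of HtmlDiff-style change visualization.
table = difflib.HtmlDiff().make_table(old_lines, new_lines,
                                      fromdesc="previous", todesc="current")
assert "Sunny" in table

# The same change rendered as a plain unified diff:
for line in difflib.unified_diff(old_lines, new_lines, lineterm=""):
    print(line)
```

The generated table carries CSS classes for added, deleted and changed runs of text, which a CDN front-end can style with the red/green highlighting convention described above.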
This method of change visualization is more straightforward as the changes have already been highlighted and are shown to the user. Figure~\\ref{fig11:wachete_visualping} (a) shows the visualization used in Wachete~.", "id": "d2194d19-0e85-4343-81f2-a7435d5dfbb5", "level": "subsection", "origin_cites_number": 3, "parent_id": "e425e367-4ff1-45d0-a593-4277e0f4c653", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Change Visualization" ], [ "subsection", "HTML Differencing" ] ], "subsections": [], "title": "HTML Differencing" }, { "cite_extract_rate": 0, "cites": [], "content": "Another interactive method of visualizing changes is by showing the current version and previous version of a webpage side by side on an interface allowing a user to observe the changes for himself. Visualping~ provides this facility for its users, as shown in Figure~\\ref{fig11:wachete_visualping} (b). The two versions are shown side by side, and the cursor can be moved horizontally to slide the visible window to view and compare a particular area on the webpage in the two versions. This method is more interactive as the user is involved in identifying the changes which cannot be seen at once. However, certain users may find this method too tedious and time-consuming as the user himself has to find the changes by making comparisons between the two versions of a webpage.", "id": "356b2404-d013-4a36-8c0f-4917ce49bf85", "level": "subsection", "origin_cites_number": 1, "parent_id": "e425e367-4ff1-45d0-a593-4277e0f4c653", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Change Visualization" ], [ "subsection", "Visual Comparison" ] ], "subsections": [], "title": "Visual Comparison" }, { "cite_extract_rate": 0, "cites": [], "content": "Versionista~ is a CDN tool, where the user can view the modifications in the form of a log. 
Figure~\\ref{fig15:versionista} illustrates a sample change log from Versionista. However, some users may find this format hard to read and understand as the changes are not visible immediately.\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{images/Versionista.png}\n \\caption{Sample change log of Versionista~}\n \\label{fig15:versionista}\n\\end{figure}\nTable~\\ref{tab9:change-visualize} denotes a feature comparison of the various change visualization methods. It can be seen that different techniques have different levels of information representation, understandability, ease of identifying, and cost.\n\\input{tables/table9.tex}", "id": "07756d72-73c5-47cf-96cd-2b7e05a5149f", "level": "subsection", "origin_cites_number": 0, "parent_id": "e425e367-4ff1-45d0-a593-4277e0f4c653", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Change Visualization" ], [ "subsection", "Change Logs" ] ], "subsections": [], "title": "Change Logs" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section7}\nDifferent CDN systems are available at present, and each of them has its own supported set of features. Table~\\ref{tab10:public-cdn} denotes twelve popular CDN systems, and compares their features such as pages monitored, detection architecture and notification methods.\n\\input{tables/table10.tex}\nMost of the systems support change detection of a single page and multiple pages for a single user freely. However, systems such as Versionista~, Wachete~ and PageScreen~ offer a limited number of webpages which can be checked under the trial version. If a user wants to track more webpages than the given limit, then the user has to upgrade to the paid version where unlimited tracking is provided.\nThe majority of the systems use server-side change detection approaches, whereas a few systems use client-side change detection approaches. 
According to studies, most of the commercial systems available at present use server-side change detection due to the easy implementation in a central location, where users can call the functionality as a service via a web interface. There are a few tools such as Visualping~ and Follow that Page~, where they use both server-side and client-side change detection approaches. However, tools such as Distill~ use client-side change detection via browser plugins.\nIt is evident that all the systems support fixed interval checks. This can cause issues because the user has no knowledge of how often a particular webpage will get changed, and the user may fail to observe important updates by selecting arbitrary checking intervals. Hence, dynamic scheduling mechanisms should be addressed to enhance the efficiency of CDN systems.\nMost systems support a wide range of fixed interval checks such as twice a day, daily, weekly and monthly. More frequent checks such as hourly checks, 3 hourly checks and 6 hourly checks are provided in the paid versions of many CDN systems. However, the browser plugin of Distill supports checks varying from every 5 seconds up to 29 days. Such high checking frequencies are possible as the Distill system runs on the client, where the client may have ample resources. However, server-based systems may not support such high checking frequencies as the server can get overloaded with the growing user base and the number of pages to be tracked.\nWhen considering the notification methods, it can be seen that all the tools provide email alerts. Emails have become popular due to its simplicity, easy implementation and extensive use among the clients. A few systems provide browser plugin alerts and SMS alerts. However, with the development of mobile devices, certain services such as Watchete and Visualping have provided mobile applications that can be installed on smartphones. This has allowed the user to get updates and manage monitors via his/her smartphone. 
Some systems such as ChangeDetect~ and PageScreen provide free trials for users to try out their features for a limited time. After the trial period has passed the users must upgrade to continue to get the services. Trackly~ provides a free plan where it allows a user to track three webpages. Trackly also provides 30-day free trials for all its plans and consists of the same features as the paid version. ChangeMon~ provides a 7-day free trial where a user is not required to sign up for an account and can create up to 1000 monitors.", "id": "bab5dd27-3080-49c5-bc24-d4cd5c11a1ad", "level": "section", "origin_cites_number": 5, "parent_id": "1e2d119d-e006-4b3f-8220-ef195f9b44ff", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Publicly Available CDN Systems" ] ], "subsections": [], "title": "Publicly Available CDN Systems" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section8}\nIn the modern world, webpage change detection has become very complicated due to many reasons such as (1) evolution of technologies used in webpage creation, (2) addition of rich and dynamic content to webpages and (3) privacy concerns and regulations. In this section, we will discuss some of the trends, concerns and insights as to what modern researchers/developers should pay attention to when building solutions related to webpage CDN.", "id": "60b7757f-6092-4d79-872d-cf6dff00abdb", "level": "section", "origin_cites_number": 0, "parent_id": "1e2d119d-e006-4b3f-8220-ef195f9b44ff", "prefix_titles": [ [ "title", "Change Detection and Notification of Web Pages: A Survey" ], [ "section", "Discussion" ] ], "subsections": [ "40dd1786-4ea2-4c3b-b0b0-e254bb5dcd7c", "58915237-17fa-40c0-b337-09686c7220e2", "cdad660e-bcf9-40ec-8a85-e38f884d8b9f", "199a52ec-3c30-4bfb-b004-1f9456d9ab09" ], "title": "Discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "Change detection of webpages primarily relies on web crawling. 
Ethical aspects and security concerns of web crawling are important considerations when building web crawler-based applications, although they are mostly neglected. Research and guidelines in this area are limited, and the work of Thelwall and Stuart is one of the modern contributions on this topic. After an extensive analysis of web crawler applications, they identified four main types of issues that web crawlers may cause to society or to individuals: (1) denial of service: repetitive requests to web servers may cause failures that disturb normal users of the website; (2) cost: an increasing number of requests may incur additional costs for the website owner, depending on the web hosting plan used; (3) privacy: even though the data in webpages are public, gathering information at large scale might lead to privacy violations; (4) copyright: crawlers create copies of webpages/datasets from websites without prior consent, which may directly constitute copyright violations. The authors further explained how the Robots Exclusion Protocol (robots.txt) can be used to mitigate the above issues from the perspectives of both website owners and web crawler owners. In doing so, they emphasized the limitations of this set of guidelines, and elaborated on alternative techniques to overcome those limitations.\nEven though many commercial search engines and CDN systems have adopted the Robots Exclusion Protocol to various extents, the degree to which each web crawler abides by the guidelines set by the protocol differs. A study carried out by Sun et al. came up with a mechanism to quantify the ethicality of web crawlers using a vector space model. While acknowledging that the unethicality of crawling may differ from webpage to webpage, they used a common set of ethical guidelines set by Eichmann and Thelwall et al. in creating this mechanism.
More research along this avenue would be valuable, as many governments and policy-regulating agencies have shown increasing interest in the ethical aspects and data privacy of online content over the last five years. Such ethicality quantification measurements of web crawlers would be crucial for establishing widely adopted guidelines and regulations regarding web crawling. Further, although the Robots Exclusion Protocol had been adopted by over 30\% of webpages by 2007, many web crawlers still do not abide by its rules. Because of this, many websites with crucial and private data deploy anti-crawling techniques such as (1) IP address bans; (2) captchas; (3) obfuscated JavaScript code; (4) frequent structural changes; (5) limits on request frequency and downloadable data allowances; and (6) presenting important data as images, which is considered one of the most effective techniques.
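As a concrete example of abiding by the Robots Exclusion Protocol discussed above, a crawler can consult a site's robots.txt before each fetch; a minimal sketch using Python's standard urllib.robotparser (the robots.txt content and URLs here are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt served by a site being monitored.
robots_txt = """User-agent: *
Disallow: /private/
Crawl-delay: 10
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# An ethical CDN crawler checks the rules before every fetch.
print(rp.can_fetch("*", "https://example.com/news"))       # True: allowed
print(rp.can_fetch("*", "https://example.com/private/x"))  # False: disallowed
print(rp.crawl_delay("*"))                                 # 10: seconds between requests
```

Honouring the Crawl-delay directive also directly addresses the denial-of-service and cost concerns raised by Thelwall and Stuart.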
Moreover, dynamic webpages can hold temporary information, such as help pop-ups, in a set of stacked layers apart from the basic two-dimensional layout, hence the name three-dimensional webpages. These layers can undergo many changes, making change detection in such dynamic webpages challenging.\nCurrently, only a limited amount of research on change detection in dynamic webpages can be found in the available literature. However, web tripwires, CRAWLJAX, ReDeCheck and the detection of visibility faults caused by dynamic layout changes can be considered significant work on crawling and detecting changes in dynamic webpages. These make use of the client-side code to determine state changes that occur within the browser’s dynamically built DOM tree when user events are triggered. The DOM structure of webpages created using JavaScript frameworks such as Angular, Ember and React can change as events are triggered and the relevant DOM elements are rendered. Crawler scripts can determine such changes by accessing the rendered content through the innerHTML property of the elements, as the HTML content is not readily visible. Since access to the client-side code is critical for detecting changes in dynamic webpages, it is worthwhile exploring more efficient methods for this task.\nFrom an industry perspective, few commercially available CDN systems support the monitoring of dynamic webpages. A handful of CDN systems, such as Wachete, Visualping and Versionista, allow users to monitor dynamic and JavaScript-rendered pages. The algorithms used are kept as trade secrets by these companies and are not publicly available. Moreover, the monitoring process for highly dynamic JavaScript webpages can time out due to the large amounts of data that have to be processed.
Hence, there is an opportunity for researchers to contribute to the development of efficient algorithms for determining changes occurring in highly dynamic webpages.\nModern webpages have rich content, such as images, audio and video, to enhance the user experience, and a webpage may change when this rich content changes. Most of the currently available CDN systems can detect changes within the HTML content. In the case of images, a change can be detected if the contents of the \emph{<img>} tag change. Sometimes, however, the actual image is changed while its file name remains the same, and hence such changes go undetected. To the best of our knowledge, currently available systems do not digitally analyze images to determine whether an image has actually changed. Nevertheless, several image change detection algorithms, such as image differencing, image ratioing and change vector analysis, can be found in the research domain of image processing. These algorithms are used for applications such as the aerial analysis of images of forest areas and urban environments. Similar ideas can be utilized in CDN systems to detect changes occurring in the images of webpages. However, if such sophisticated methods are implemented within CDN systems, they will require more computational resources to operate efficiently.
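A minimal way to catch the same-file-name case described above is to fingerprint the downloaded image bytes instead of trusting the \emph{<img>} tag; the sketch below (with hypothetical byte strings) flags any bit-level difference, while the pixel-level algorithms mentioned (image differencing, change vector analysis) would be needed for more tolerant comparisons:

```python
import hashlib

def image_fingerprint(data: bytes) -> str:
    """Digest of the raw image bytes; independent of the file name."""
    return hashlib.sha256(data).hexdigest()

# Two hypothetical snapshots of "logo.png" fetched on consecutive checks.
old_bytes = b"\x89PNG...version-1"
new_bytes = b"\x89PNG...version-2"

if image_fingerprint(old_bytes) != image_fingerprint(new_bytes):
    print("image changed despite the unchanged file name")
```

Storing only the digest per monitored image keeps the comparison cheap, at the cost of treating any re-encoding of an identical picture as a change.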
It is defined as the \"need for one or more destination servers to remain synchronized with (some of the) resources made available by a source server\". From a CDN perspective, we can state this problem as clients wanting to stay up to date with the new content of frequently changing webpages. Previous work includes the use of the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to harvest digital resources, where new and updated resources are collected incrementally, but it was not broadly adopted.\nResourceSync is an emerging framework that describes means for both client-pull and server-push of change notifications. It consists of several modules that allow the intended parties to stay in sync with changing resources. The framework allows source servers to publish up-to-date lists of resources and of changes to those resources; destination servers can then access this published data and synchronize the resources. The framework can be adapted for large repositories, and represents changed resources, or the differences between the previous and new representations, as a bitstream, allowing changes to different media types to be obtained. Using the ResourceSync framework in CDN systems could make the client synchronization process more efficient and deliver change notifications in a timely manner.
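To make the pull side of ResourceSync concrete, the sketch below parses an illustrative change list in the framework's sitemap-based format (the resource URLs and timestamps are hypothetical; the element and namespace names follow the ResourceSync specification):

```python
import xml.etree.ElementTree as ET

# Illustrative ResourceSync change list (sitemap-based format from the spec).
change_list = """
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:rs="http://www.openarchives.org/rs/terms/">
  <rs:md capability="changelist"/>
  <url>
    <loc>http://example.com/res1</loc>
    <rs:md change="updated" datetime="2023-01-03T09:00:00Z"/>
  </url>
  <url>
    <loc>http://example.com/res2</loc>
    <rs:md change="deleted" datetime="2023-01-03T11:00:00Z"/>
  </url>
</urlset>
"""

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9",
      "rs": "http://www.openarchives.org/rs/terms/"}

root = ET.fromstring(change_list)
for url in root.findall("sm:url", NS):
    loc = url.find("sm:loc", NS).text
    md = url.find("rs:md", NS)
    print(loc, md.get("change"), md.get("datetime"))
```

A destination server polling such a list only needs to fetch the resources whose datetime is newer than its last synchronization point.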
Despite these challenges, if the changes of webpages can be made available to web crawlers in a standard manner, crawlers can easily access this data to detect the changes that have occurred. An emerging idea that can be used by CDN systems is the Linked Data Notifications (LDN) protocol.\nThe LDN protocol describes how servers (termed receivers) can have applications (termed senders) push messages to them, and how other applications (termed consumers) can retrieve those messages. The protocol allows notifications to be shared and reused across different applications, with little attention to how they were created or what their contents are. Notifications are implemented in such a manner that they can run on different technology stacks and continuously work together, supporting decentralized communication on the web. The idea was first brought forward by Capadisli, and has become a W3C Recommendation. LDN identifies a notification as \"an individual entity with its own Uniform Resource Identifier (URI)\", and hence notifications can be retrieved and reused. The protocol stores notifications so that they are compatible with the Linked Data Platform (LDP) standard.\nThe overview of LDN is illustrated in Figure~\ref{fig16:LDN}. A sender wishes to deliver a notification intended for a receiver. The sender selects a target resource, finds the location of the target’s inbox, and sends the notification to that inbox. The receiver then allows consumers to access the notification in its inbox. From a CDN perspective, CDN systems can make use of the LDN protocol to implement the notifications sent to subscribed users. Since notifications can be reused, the system becomes more efficient and productive.
LDN may well become the next trend in CDN systems.\n\begin{figure}[!ht]\n    \centering\n    \includegraphics[width=0.6\textwidth]{images/Linked_Data_Notifications.png}\n    \caption{Overview of Linked Data Notifications~.}\n    \label{fig16:LDN}\n\end{figure}
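As a sketch of the inbox-discovery step described above, a sender can locate a target's inbox from the HTTP Link header carrying the inbox link relation defined by the LDN specification (the header value below is hypothetical):

```python
import re
from typing import Optional

LDP_INBOX = "http://www.w3.org/ns/ldp#inbox"

def find_ldn_inbox(link_header: str) -> Optional[str]:
    """Extract the LDN inbox URL from an HTTP Link header value."""
    for part in link_header.split(","):
        match = re.search(r'<([^>]+)>\s*;\s*rel="?([^";]+)"?', part.strip())
        if match and match.group(2) == LDP_INBOX:
            return match.group(1)
    return None

# Hypothetical Link header returned when a sender HEADs the target resource.
header = ('<https://example.org/inbox/>; rel="http://www.w3.org/ns/ldp#inbox", '
          '<https://example.org/>; rel="canonical"')
print(find_ldn_inbox(header))  # https://example.org/inbox/
```

After discovery, the sender would POST the notification body to the returned inbox URL; that network step is omitted here.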
Most such systems show notifications on a web interface and use email as their main method of notifying the user, whereas some systems also deliver notifications via SMS alerts or browser plugins. Additionally, a majority of applications use HTML differencing techniques to visualize the changes between two variants of a webpage, whereas some systems simply present the versions separately, leaving the user to identify the changes by comparing the two versions. Moreover, we have compared different features of twelve popular CDN systems that are publicly available at present. According to the comparison results, it is evident that most of the systems support checks at fixed intervals, but not checks at dynamic intervals. These systems can be improved by introducing intelligent crawling schedules that optimize the crawling process by visiting webpages at their estimated change frequencies.\nFinally, we have discussed new trends such as LDN, and issues such as determining changes in dynamic webpages, privacy concerns and ethical aspects in CDNs. Throughout this survey, we have identified four important directions of research. The first focuses on improving the architecture of CDN systems so that computing and temporal resources are utilized efficiently while overcoming the limitations of traditional server-based and client-based architectures. The second focuses on improving change detection algorithms to track webpage changes quickly and with high accuracy. The third focuses on identifying the change frequency of webpages and designing optimized crawler schedules, so that computing resources are used efficiently by deploying crawlers only when required.
The final research direction is developing and improving methods and algorithms that detect changes in dynamic and JavaScript-rendered webpages while efficiently handling large amounts of data.\n\bibliographystyle{ACM-Reference-Format}\n\bibliography{10-references}\n\end{document}\n\endinput
59
[ 338 ]
null
[ "Irem Ulku", "Erdem Akagunduz" ]
A Survey on Deep Learning-based Architectures\\for Semantic Segmentation on 2D images
2019
2019-12-21T09:31:09Z
cs.CV
Semantic segmentation is the pixel-wise labelling of an image. Boosted by the extraordinary ability of convolutional neural networks (CNN) in creating semantic, high level and hierarchical image features; several deep learning-based 2D semantic segmentation approaches have been proposed within the last decade. In this survey, we mainly focus on the recent scientific developments in semantic segmentation, specifically on deep learning-based methods using 2D images. We started with an analysis of the public image sets and leaderboards for 2D semantic segmentation, with an overview of the techniques employed in performance evaluation. In examining the evolution of the field, we chronologically categorised the approaches into three main periods, namely pre-and early deep learning era, the fully convolutional era, and the post-FCN era. We technically analysed the solutions put forward in terms of solving the fundamental problems of the field, such as fine-grained localisation and scale invariance. Before drawing our conclusions, we present a table of methods from all mentioned eras, with a summary of each approach that explains their contribution to the field. We conclude the survey by discussing the current challenges of the field and to what extent they have been solved.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "1ea135c8-d69d-401d-8fff-aca1042236fb", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey on Deep Learning-based Architectures\\\\for Semantic Segmentation on 2D images" ] ], "subsections": [ "0f099f18-e4c4-412a-82fe-c0c62d3104eb", "24f4e8e0-1290-45ad-8cfd-5e1603b65295", "085f3dfd-8d14-4669-8d3e-ab87b2ddc4fa", "486bc016-7734-4261-b9f7-ad3f68d1c163", "a810e757-8e18-4cb5-b9eb-b720ad7146bd", "511d59eb-d8b1-4b46-87e9-ff1fa938af77", "c1996fe2-7e7b-4050-8dc9-c059ec3b7798" ], "title": "root" }, { "cite_extract_rate": 0.25, "cites": [ 1725 ], "content": "Semantic segmentation has recently become one of the fundamental problems, and accordingly, a hot topic for the fields of computer vision and machine learning. Assigning a separate class label to each pixel of an image is one of the important steps in building complex robotic systems such as driverless cars/drones, human-friendly robots, robot-assisted surgery, and intelligent military systems. Thus, it is no wonder that in addition to scientific institutions, industry-leading companies studying artificial intelligence are now summarily confronting this problem. \nThe simplest problem definition for semantic segmentation is pixel-wise labelling. Because the problem is defined at the pixel level, finding only class labels that the scene includes is considered insufficient, but localising labels at the original image pixel resolution is also a fundamental goal. Depending on the context, class labels may change. For example, in a driverless car, the pixel labels may be \\emph{human, road} and \\emph{car} whereas for a medical system , they could be \\emph{cancer cells, muscle tissue, aorta wall} etc. \nThe recent increase in interest in this topic has been undeniably caused by the extraordinary success seen with convolutional neural networks (CNN) that have been brought to semantic segmentation. 
Understanding a scene at the semantic level has long been one of the main topics of computer vision, but it is only now that we have seen actual solutions to the problem.\nIn this paper, our primary motivation is to focus on the recent scientific developments in semantic segmentation, specifically on the evolution of deep learning-based methods using 2D images. The reason we narrowed down our survey to techniques that utilise only 2D visible imagery is that, in our opinion, the scale of the problem in the literature is so vast and widespread that it would be impractical to analyse and categorise all semantic segmentation modalities (such as 3D point clouds, hyper-spectral data, MRI, CT\footnote{We consider MRI and CT essentially as 3D volume data. Although individual MRI/CT slices are 2D, when doing semantic segmentation on these types of data, neighbourhood information in all three dimensions is utilised. For this reason, medical applications are excluded from this survey.} etc.) found in journal articles to any degree of detail. In addition to analysing the techniques which make semantic segmentation possible and accurate, we also examine the most popular image sets created for this problem. Additionally, we review the performance measures used for evaluating the success of semantic segmentation. Most importantly, we propose a taxonomy of methods, together with their technical evolution, which we believe is novel in the sense that it provides insight into existing deficiencies and suggests future directions for the field.\nThe remainder of the paper is organised as follows: in the following subsection, we refer to other survey studies on the subject and underline our contribution. Section 2 presents information about the different image sets, the challenges, and how to measure the performance of semantic segmentation. 
Starting with Section 3, we chronologically scrutinise semantic segmentation methods under three main headings, in three separate sections. Section 3 covers the methods of the pre- and early deep convolutional neural network era. Section 4 provides details on fully convolutional neural networks, which we consider a milestone in the semantic segmentation literature. Section 5 covers the state-of-the-art methods for the problem, detailing both their architectures and their success. Before the paper is finally concluded in Section 7, Section 6 presents the future scope and potential directions for the field.
Whilst semantic segmentation was studied for two decades prior to deep learning, actual contributions to the field have only been achieved very recently, particularly following a revolutionary paper on fully convolutional networks (FCN) (which is also thoroughly analysed in this paper). It could be said that most state-of-the-art studies are in fact extensions of that same study. For this reason, without a scrupulous analysis of FCNs and the direction of the subsequent papers, survey studies will lack the necessary academic rigour in examining semantic segmentation using deep learning.\nThere are recent reviews of deep semantic segmentation which provide a comprehensive survey on the subject. These survey studies cover almost all the popular semantic segmentation image sets and methods, for all modalities such as 2D, RGB, 2.5D, RGB-D, and 3D data. Although they are inclusive in the sense that most related material on deep semantic segmentation is covered, the categorisation of the methods is coarse, since the surveys attempt to cover almost everything gathered under the topic of semantic segmentation.\nA detailed categorisation of the subject has also been provided in the literature. Although that survey provides important details on the subcategories that cover almost all approaches in the field, discussions on how the proposed techniques are chronologically correlated are left out of its scope. Recent deep learning studies on semantic segmentation follow a number of fundamental directions and labour to tackle the corresponding issues. In this survey paper, we define and describe these new challenges, and present the chronological evolution of the techniques of all the studies within this proposed context. We believe our attempt to understand the evolution of semantic segmentation architectures is the main contribution of the paper. 
We provide a table of these related methods, and explain them briefly one after another in chronological order, with their metric performance and computational efficiency. This way, we believe that readers will better understand the evolution, current state-of-the-art, as well as the future directions seen for 2D semantic segmentation.", "id": "14feeb0c-118f-418e-a254-a2a30068fd1e", "level": "subsection", "origin_cites_number": 10, "parent_id": "0f099f18-e4c4-412a-82fe-c0c62d3104eb", "prefix_titles": [ [ "title", "A Survey on Deep Learning-based Architectures\\\\for Semantic Segmentation on 2D images" ], [ "section", "Introduction" ], [ "subsection", "Surveys on Semantic Segmentation" ] ], "subsections": [], "title": "Surveys on Semantic Segmentation" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "24f4e8e0-1290-45ad-8cfd-5e1603b65295", "level": "section", "origin_cites_number": 0, "parent_id": "1ea135c8-d69d-401d-8fff-aca1042236fb", "prefix_titles": [ [ "title", "A Survey on Deep Learning-based Architectures\\\\for Semantic Segmentation on 2D images" ], [ "section", "Image Sets, Challenges and Performance Evaluation" ] ], "subsections": [ "5553cbf6-1d47-4afe-a8b9-1457c090e311", "7e7a3568-4f20-471e-ba07-d512cd6c93e4" ], "title": "Image Sets, Challenges and Performance Evaluation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{datasets}\nThe level of success for any machine-learning application is undoubtedly determined by the quality and the depth of the data being used for training. When it comes to deep learning, data is even more important since most systems are termed end-to-end; thus, even the features are determined \\emph{by} the data, not \\emph{for} the data. Therefore, data is no longer the object but becomes the actual subject in the case of deep learning.\nIn this section, we scrutinise the most popular large-scale 2D image sets that have been utilised for the semantic segmentation problem. 
The image sets are categorised into two main branches: general-purpose image sets, with generic class labels covering almost every type of object or background, and urban street image sets, which include class labels such as car and person and are generally created for training driverless car systems. There are many other unresolved 2D semantic segmentation problem domains, such as medical imaging, satellite imagery, or infrared imagery. However, urban street imagery is currently driving scientific development in the field because it attracts more attention from industry. Consequently, very large-scale image sets and challenges with crowded leaderboards exist specifically for this domain. Scientific interest in depth-based semantic segmentation is growing rapidly; however, as mentioned in the Introduction, we have excluded depth-based and 3D-based segmentation datasets from the current study in order to focus with sufficient detail on the novel categorisation of recent techniques pertinent to 2D semantic segmentation.
The image set and annotations are regularly updated and the leaderboard of the challenge is public\footnote{\url{ http://host.robots.ox.ac.uk:8080/leaderboard/main\_bootstrap.php}} (with more than 140 submissions for the segmentation challenge alone). It is the most popular among the semantic segmentation challenges and is still active following its initial release in 2005. The PASCAL VOC semantic segmentation challenge image set includes 20 foreground object classes and one background class. The original data consisted of 1,464 images for training, plus 1,449 images for validation. The 1,456 test images are kept private for the challenge. The image set includes all types of indoor and outdoor images and is generic across all categories.\nThe PASCAL VOC image set has a number of extension image sets, the most popular of which are PASCAL Context and PASCAL Parts. The first is a set of additional annotations for PASCAL VOC 2010, which goes beyond the original PASCAL semantic segmentation task by providing annotations for the whole scene. Its statistics section contains a full list of more than 400 labels (compared to the original 21 labels). The second is also a set of additional annotations for PASCAL VOC 2010; it provides segmentation masks for each body part of an object, such as the separately labelled limbs and body of an animal. For these extensions, the training and validation set contains 10,103 images, while the test set contains 9,637 images. There are other extensions to PASCAL VOC with other functional annotations, such as the Semantic Parts (PASParts) image set and the Semantic Boundaries Dataset (SBD). For example, PASParts additionally provides `instance’ labels, such that two instances of an object within an image are labelled separately rather than with a single class label. 
However, unlike the former two additional extensions, these further extensions have proven less popular, as their challenges have attracted much less attention in state-of-the-art semantic segmentation studies; thus, their leaderboards are less crowded. In Figure \ref{PascalParts}, a sample object, parts and instance segmentation are depicted.\n\begin{figure*}[t]\n\centering\n\includegraphics*[clip=false,width=1\textwidth]{figures/Fig1.png}\n\caption{A sample image and its annotation for object, instance and parts segmentations separately, from left to right.}\n \label{PascalParts}\n\end{figure*} \n\item Common Objects in Context (COCO): With 200K labelled images, 1.5 million object instances, and 80 object categories, COCO is a very large-scale object detection, semantic segmentation, and captioning image set, including almost every possible type of scene. COCO not only provides challenges for instance-level and pixel-level (which they refer to as \emph{stuff}) semantic segmentation, but also introduces a novel task, namely that of \emph{panoptic} segmentation, which aims at unifying the instance-level and pixel-level segmentation tasks. Their leaderboards\footnote{\url{http://cocodataset.org}} are relatively less crowded because of the scale of the data. On the other hand, for the same reason, their challenges are attempted only by the most ambitious scientific and industrial groups, and thus their leaderboards represent the state-of-the-art. Due to its extensive volume, most studies partially use this image set to pre-train or fine-tune their model, before submitting to other challenges such as PASCAL VOC 2012. \n\item ADE20K dataset: ADE20K contains more than 20K scene-centric images with object and object-part annotations. 
Similarly to PASCAL VOC, there is a public leaderboard\\footnote{\\url{http://sceneparsing.csail.mit.edu/}} and the benchmark is divided into 20K images for training, 2K images for validation, and another batch of held-out images for testing. The samples in the dataset have varying resolutions (average image size being 1.3M pixels), which can be up to 2400$\\times$1800 pixels. There are a total of 150 semantic categories included for evaluation.\n\\item Other General Purpose Semantic Segmentation Image Sets: Although less popular than either PASCAL VOC or COCO, there are also some other image sets in the same domain. Introduced by , YouTube-Objects is a set of low-resolution (480$\\times$360) video clips with more than 10k pixel-wise annotated frames. \nSimilarly, SIFT-flow is another low-resolution (256$\\times$256) semantic segmentation image set with 33 class labels for a total of 2,688 images. These and other relatively primitive image sets have been mostly abandoned in the semantic segmentation literature due to their limited resolution and low volume.\n\\end{itemize}\n\\begin{itemize}\n\\item Cityscapes : This is a largescale image set with a focus on the semantic understanding of urban street scenes. 
It contains annotations for high-resolution images from 50 different cities, taken at different hours of the day and from all seasons of the year, and also with varying backgrounds and scene layouts. The annotations are carried out at two quality levels: fine for 5,000 images and coarse for 20,000 images. There are 30 different class labels, some of which also have instance annotations (vehicles, people, riders etc.). Consequently, there are two challenges with separate public leaderboards\\footnote{\\url{https://www.cityscapes-dataset.com/benchmarks/}}: one for pixel-level semantic segmentation, and a second for instance-level semantic segmentation. There are more than 100 entries in the challenge, making it the most popular for semantic segmentation of urban street scenes.\n\\item Other Urban Street Semantic Segmentation Image Sets: There are a number of alternative image sets for urban street semantic segmentation, such as Cam-Vid , KITTI , SYNTHIA , and IDD . These are generally overshadowed by the Cityscapes image set for several reasons. Principally, their scale is relatively low. 
Only the SYNTHIA image set can be considered largescale (with more than 13k annotated images); however, it is an artificially generated image set, and this is considered a major limitation for security-critical systems like driverless cars.\n\\end{itemize}\nIn addition to the aforementioned large-scale image sets of different categories, there are several image sets with insufficient scale or strong imbalance such that, when applied to deep learning-based semantic segmentation models, high-level segmentation accuracies cannot be directly obtained. Most public challenges on semantic segmentation include sets of this nature, such as the DSTL or RIT-18 , just to name a few. Because of the overwhelming numbers of these types of sets, we chose to include only the details of the large-scale sets that attract the utmost attention from the field. \nNonetheless, being able to train a model that performs well on small-scale or imbalanced data is a problem closely related to ours. Besides conventional deep learning techniques such as transfer learning or data augmentation, the problem of insufficient or imbalanced data can be attacked by using specially designed deep learning architectures such as some optimised convolution layer types (, etc.) and others that we cover in this survey paper. 
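To make these remedies concrete, the following is a minimal, illustrative sketch (not taken from any particular study cited here) of two of them: augmenting an image and its label mask with the same spatial transform — for segmentation, the mask must follow the image exactly — and computing inverse-frequency class weights to counter imbalance in a weighted loss.

```python
import random

def hflip(grid):
    # Horizontally flip a 2D grid (an image channel or a label mask).
    return [row[::-1] for row in grid]

def augment(image, mask, rng):
    # Randomly flip image and mask together: the pixel-wise labels must
    # undergo exactly the same spatial transform as the image.
    if rng.random() < 0.5:
        return hflip(image), hflip(mask)
    return image, mask

def inverse_frequency_weights(mask, num_classes):
    # Weight each class inversely to its pixel frequency, so that rare
    # classes contribute more to a weighted (e.g., cross-entropy) loss.
    counts = [0] * num_classes
    for row in mask:
        for label in row:
            counts[label] += 1
    total = sum(counts)
    return [0.0 if c == 0 else total / (num_classes * c) for c in counts]
```

The same joint-transform rule applies to crops, rotations and scaling; applying a transform to the image but not the mask silently corrupts the ground truth.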
What is more, there are recent studies that focus specifically on utilising insufficient sets for deep learning-based semantic segmentation . Although we acknowledge this problem as fundamental for the semantic segmentation field, we leave discussions of techniques for handling small-scale or imbalanced sets beyond the scope of this survey paper.\nThere are two main criteria in evaluating the performance of semantic segmentation: accuracy, or in other words, the success of an algorithm; and computation complexity in terms of speed and memory requirements. 
In this section, we analyse these two criteria separately.", "id": "7e7a3568-4f20-471e-ba07-d512cd6c93e4", "level": "subsection", "origin_cites_number": 0, "parent_id": "24f4e8e0-1290-45ad-8cfd-5e1603b65295", "prefix_titles": [ [ "title", "A Survey on Deep Learning-based Architectures\\\\for Semantic Segmentation on 2D images" ], [ "section", "Image Sets, Challenges and Performance Evaluation" ], [ "subsection", "Performance Evaluation" ] ], "subsections": [ "e2ee2643-5821-4de5-8c1b-c00fd3023aca", "51517df9-2061-423a-b542-fcd5b7bd4352" ], "title": "Performance Evaluation" }, { "cite_extract_rate": 0.636363636363636, "cites": [ 1735, 486, 7501, 1732, 1733, 1736, 1731 ], "content": "Measuring the performance of segmentation can be complicated, mainly because there are two distinct values to measure. The first is classification, which is simply determining the pixel-wise class labels; and the second is localisation, or finding the correct set of pixels that enclose the object. Different metrics can be found in the literature to measure one or both of these values. The following is a brief explanation of the principal measures most commonly used in evaluating semantic segmentation performance. \n\\begin{itemize}\n \\item\\emph{ROC-AUC}: ROC stands for the Receiver-Operator Characteristic curve, which summarises the trade-off between true positive rate and false-positive rate for a predictive model using different probability thresholds; whereas AUC stands for the area under this curve, which is 1 at maximum. This tool is useful in interpreting binary classification problems and is appropriate when observations are balanced between classes. 
However, since most semantic segmentation image sets are not balanced between the classes, this metric is no longer used by the most popular challenges.\n \\item\\emph{Pixel Accuracy}: Also known as \\emph{global accuracy} , pixel accuracy (PA) is a very simple metric which calculates the ratio between the number of properly classified pixels and the total number of pixels. Mean pixel accuracy (mPA) is a version of this metric which computes the ratio of correct pixels on a per-class basis. mPA is also referred to as \\emph{class average accuracy} . \n \\begin{equation}\\label{pa}\n PA=\\frac{\\sum_{j=1}^{k}{n_{jj}}}{\\sum_{j=1}^{k}{t_{j}}}, \\qquad mPA=\\frac{1}{k}\\sum_{j=1}^{k}\\frac{n_{jj}}{t_{j}}\n \\end{equation} \n where $n_{jj}$ is the total number of pixels both classified and labelled as class \\emph{j}. In other words, $n_{jj}$ corresponds to the total number of \\textit{True Positives} for class \\emph{j}. $t_{j}$ is the total number of pixels labelled as class \\emph{j}.\n \\item\\emph{Intersection over Union} (IoU): Also known as the Jaccard Index, IoU is a statistic used for comparing the similarity and diversity of sample sets. In semantic segmentation, it is the ratio of the intersection of the pixel-wise classification results with the ground truth, to their union.\n \\begin{equation}\\label{iu}\n IoU=\\frac{\\sum_{j=1}^{k}{n_{jj}}}{\\sum_{j=1}^{k}({n_{ij}+n_{ji}+n_{jj}})}, \\qquad i \\neq j\n \\end{equation}\n where $n_{ij}$ is the number of pixels which are labelled as class \\emph{i}, but classified as class \\emph{j}. In other words, they are \\textit{False Positives} (false alarms) for class \\emph{j}. 
Similarly, $n_{ji}$, the total number of pixels labelled as class \\emph{j} but classified as class \\emph{i}, are the \\textit{False Negatives} (misses) for class \\emph{j}.\n Two extended versions of IoU are also widely in use:\n $\\circ$\\emph{ Mean Intersection over Union} (mIoU): mIoU is the class-averaged IoU, as in (\\ref{miou}).\n \\begin{equation}\\label{miu}\n mIoU=\\frac{1}{k}\\sum_{j=1}^{k}\\frac{n_{jj}}{n_{ij}+n_{ji}+n_{jj}}, \\qquad i \\neq j\n \\label{miou} \n \\end{equation}\n $\\circ$\\emph{ Frequency-weighted IoU} (FwIoU): This is an improved version of mIoU that weighs the importance of each class by its appearance frequency, using $t_{j}$ (the total number of pixels labelled as class \\emph{j}, as also defined in (\\ref{pa})). The formula of FwIoU is given in (\\ref{fiu}):\n \\begin{equation}\\label{fiu}\n FwIoU=\\frac{1}{\\sum_{j=1}^{k}t_{j}}\\sum_{j=1}^{k}{t_{j}\\frac{n_{jj}}{n_{ij}+n_{ji}+n_{jj}}}, \\qquad i \\neq j\n \\end{equation}\n IoU and its extensions compute the ratio of true positives (hits) to the sum of false positives (false alarms), false negatives (misses) and true positives (hits). Thereby, the IoU measure is more informative than pixel accuracy simply because it takes false alarms into consideration, whereas PA does not. However, since false alarms and misses are summed up in the denominator, their relative significance is not measured by this metric, which is considered its primary drawback. In addition, IoU only measures the number of pixels correctly labelled without considering how accurate the segmentation boundaries are.\n \\item\\emph{Precision-Recall Curve (PRC)-based metrics}: Precision (the ratio of hits over the sum of hits and false alarms) and recall (the ratio of hits over the sum of hits and misses) are the two axes of the PRC, which depicts the trade-off between precision and recall under a varying threshold for the task of binary classification. PRC is very similar to ROC. 
However, PRC is more powerful in discriminating between the effects of the false positives (alarms) and false negatives (misses). That is predominantly why PRC-based metrics are commonly used for evaluating the performance of semantic segmentation. The formulas for Precision (also called the positive predictive value) and Recall (also called Sensitivity) for a given class \\emph{j} are provided in (\\ref{prerec}):\n \\begin{equation}\\label{prerec}\n Prec.=\\frac{n_{jj}}{n_{ij}+n_{jj}}, \\quad\n Recall=\\frac{n_{jj}}{n_{ji}+n_{jj}}, \\qquad i \\neq j\n \\end{equation}\n There are three main PRC-based metrics:\n $\\circ$\\emph{ F}$_{score}$: Also known as the ‘\\emph{dice coefficient}’, this measure is the harmonic mean of the precision and recall for a given threshold. It is a normalised measure of similarity, and ranges between 0 and 1 (please see (\\ref{f})).\n \\begin{equation}\\label{f}\n F_{score}=2 \\times \\frac{Precision\\times Recall}{Precision+Recall}\n \\end{equation}\n $\\circ$\\emph{ PRC-AuC}: This is similar to the ROC-AUC metric. It is simply the area under the PRC. This metric conveys information about the precision-recall trade-off for different thresholds, but not the \\emph{shape} of the PR curve.\n $\\circ$\\emph{ Average Precision} (AP): This metric is a single value that summarises both the shape and the AUC of the PRC. In order to calculate AP, precision values are recorded on the PRC for uniformly sampled recall values (e.g., 0.0, 0.1, 0.2, ..., 1.0). The average of these precision values is referred to as the average precision. This is the most commonly used single-value metric for semantic segmentation. 
Similarly, mean average precision (mAP) is the mean of the AP values, calculated on a per-class basis.\n \\item\\emph{Hausdorff Distance} (HD): Hausdorff Distance incorporates the longest distance between classified and labelled pixels as an indicator of the largest segmentation error , and is used to track the performance of a semantic segmentation model. The unidirectional HDs, $hd(X,Y)$ and $hd(Y,X)$, are presented in (\\ref{hdu1}) and (\\ref{hdu2}), respectively.\n \\begin{equation}\\label{hdu1}\n hd\\left ( X, Y \\right )= \\max_{{x\\in X}}\\min_{{y\\in Y}}\\left \\| x-y \\right \\|_{2},\n \\end{equation}\n \\begin{equation}\\label{hdu2}\n hd\\left ( Y, X \\right )= \\max_{{y\\in Y}}\\min_{{x\\in X}}\\left \\| x-y \\right \\|_{2}.\n \\end{equation}\n where $X$ and $Y$ are the pixel sets. Here, $x$ is a pixel in the segmented contour $X$ and $y$ is a pixel in the target contour $Y$ . The bidirectional HD between these sets is shown in (\\ref{hdb}), where the Euclidean distance is employed for (\\ref{hdu1}), (\\ref{hdu2}) and (\\ref{hdb}).\n \\begin{equation}\\label{hdb}\n HD\\left ( X,Y \\right )= \\max\\left ( hd\\left ( X,Y \\right ),hd\\left ( Y,X \\right ) \\right ).\n \\end{equation}\n\\end{itemize}\nIoU and its variants, along with AP, are the most commonly used accuracy evaluation metrics in the most popular semantic segmentation challenges .\nThe burden of computation is evaluated using two main metrics: 
how fast the algorithm completes, and how much computational memory is demanded. \n\\begin{itemize}\n\\item\\emph{Execution time}: This is measured as the whole processing time, starting from the instant a single image is introduced to the system/algorithm right through until the pixel-wise semantic segmentation results are obtained. The performance of this metric significantly depends on the hardware utilised. Thus, for an algorithm, any execution time metric should be accompanied by a thorough description of the hardware used. There are notations such as Big-O, which provide a complexity measure independent of the implementation domain. However, these notations are highly theoretical, and since they are overly simplistic and largely inaccurate for extremely complex algorithms such as deep semantic segmentation, they are predominantly not preferred.\nFor a deep learning-based algorithm, the offline (i.e., training) and online (i.e., testing) operation may last for considerably different time intervals. Technically, the execution time refers only to the online operation or, academically speaking, the test duration for a single image. Although this metric is extremely important for industrial applications, academic studies refrain from publishing exact execution times, and none of the aforementioned challenges was found to have provided this metric. A recent study provided a 2D histogram of accuracy (mIoU\\%) vs frames-per-second, in which some of the state-of-the-art methods with open-source code (including their proposed structure, namely the image cascade network – ICNet) were benchmarked using the Cityscapes image set.\n\\item\\emph{Memory Usage}: Memory usage is specifically important when semantic segmentation is utilised in limited-performance devices such as smartphones and digital cameras, or when the requirements of the system are extremely restrictive. The prime examples of these would be military systems or security-critical systems such as self-driving cars. 
\nThe usage of memory for a complex algorithm like semantic segmentation may change drastically during operation. That is why a common metric for this purpose is \\emph{peak memory usage}, which is simply the maximum memory required for the entire segmentation operation for a single image. The metric may apply to computer (data) memory or GPU memory depending on the hardware design.\nAlthough critical for industrial applications, this metric is not usually made available for any of the aforementioned challenges.\n\\end{itemize}\nComputational efficiency is a very important aspect of any algorithm that is to be implemented on a real system. A comparative assessment of the speed and capacity of various semantic segmentation algorithms is a challenging task. Although most state-of-the-art algorithms are available with open-source codes, benchmarking all of them, with their optimal hyper-parameters, seems implausible. \nFor this purpose, we provide an inductive way of comparing the computational efficiencies of methods in the following sections. In Table \\ref{MethodsTable}, we categorise methods into mainly four levels of computational efficiency and discuss our categorisation related to the architectural design of a given method. 
This table also provides a chronological evolution of the semantic segmentation methods in the literature.\nAs mentioned in the Introduction, the utilisation of FCNs is a breaking point for the semantic segmentation literature. Efforts in the semantic segmentation literature prior to FCNs can be analysed in two separate branches: pre-deep learning and early deep learning approaches. In this section, we briefly discuss both sets of approaches.\nThe differentiating factor between conventional image segmentation and semantic segmentation is the utilisation of semantic features in the process. Conventional methods for image segmentation, such as thresholding, clustering, and region growing (\\emph{please see} \\emph{for a survey on conventional image segmentation techniques}), utilise handcrafted low-level features (e.g., edges, blobs) to locate object boundaries in images. 
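As a concrete illustration of such a conventional, low-level technique, the snippet below sketches Otsu thresholding — a classic intensity-based segmenter that chooses the grey level maximising the between-class variance, with no notion of semantics. It is a simplified pure-Python sketch operating on a flat list of 8-bit pixels.

```python
def otsu_threshold(pixels):
    # Otsu's method: pick the grey level that maximises the between-class
    # variance of a foreground/background split -- purely intensity-driven.
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = 0
    sum_bg = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarise(pixels, t):
    # Label each pixel foreground (1) or background (0) by the threshold.
    return [1 if p > t else 0 for p in pixels]
```

On a clearly bimodal intensity distribution this works well; the limitation discussed next is precisely that only intensity, and no semantic cue, drives the decision.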
Thus, in situations where the semantic information of an image is necessary for pixel-wise segmentation, such as when similar objects occlude each other, these methods usually perform poorly.\nRegarding semantic segmentation efforts prior to DCNNs becoming popular, a wide variety of approaches utilised graphical models, such as Markov Random Fields (MRF), Conditional Random Fields (CRF) or forest-based (sometimes referred to as ‘holistic’) methods, in order to find scene labels at the pixel level. The main idea was to find an inference by observing the dependencies between neighbouring pixels. In other words, these methods modelled the semantics of the image as a kind of prior information among adjacent pixels. Thanks to deep learning, today we know that image semantics require abstract exploitation of largescale data. Initially, graph-based approaches were thought to have this potential. The so-called ‘super-pixelisation’, which is usually the term applied in these studies, was a process of modelling abstract regions. However, a practical and feasible implementation for largescale data processing was never achieved for these methods, while it was accomplished for DCNNs, first by and then in many other studies.\nAnother group of studies, sometimes referred to as the ‘Layered models’ , used a composition of pre-trained and separate object detectors so as to extract the semantic information from the image. 
Because the individual object detectors failed to classify regions properly, or because the methods were limited by the finite number of object classes provided by the ‘hand-selected’ bank of detectors in general, their performance was seen as relatively low compared to today’s state-of-the-art methods.\nAlthough the aforementioned methods of the pre-deep learning era are no longer preferred as segmentation methods, some of the graphical models, especially CRFs, are currently being utilised by the state-of-the-art methods as post-processing (refinement) layers, with the purpose of improving the semantic segmentation performance, the details of which are discussed in the following section.\n\\begin{figure*}[t]\n\\centering\n\\begin{subfigure}{4.8cm}\t\n \\centering\n\t\\includegraphics*[trim=0 0 0 0,width=1.0\\textwidth]{figures/Figure2a.png}\n\t\\caption{Input Image}\t\n\\end{subfigure}\n\\begin{subfigure}{4.8cm}\n \\centering\n\t\\includegraphics*[trim=0 0 0 0,width=1.0\\textwidth]{figures/Figure2b.png}\n\t\\caption{Segmented Image}\t\n\\end{subfigure}\n\\begin{subfigure}{4.8cm}\n \\centering\n\t\\includegraphics*[trim=0 0 0 0,width=1.0\\textwidth]{figures/Figure2c.png}\n\t\\caption{Refined Result}\t \n\\end{subfigure}\n\\caption{Effect of using graphical model-based refinement on segmentation results.}\n \\label{CRFRefinement}\n\\end{figure*}", "id": "9b3cff5a-4f9e-4315-a55c-3cf708aadcfd", "level": "subsection", "origin_cites_number": 18, "parent_id": "085f3dfd-8d14-4669-8d3e-ab87b2ddc4fa", "prefix_titles": [ [ "title", "A Survey on Deep Learning-based Architectures\\\\for Semantic Segmentation on 2D images" ], [ "section", "Before Fully Convolutional Networks" ], [ "subsection", "Pre-Deep Learning Approaches" ] ], "subsections": [ "3eabd60a-239f-4226-bbde-0b39185bcfff" ], "title": "Pre-Deep Learning Approaches" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 1739, 1738, 1737 ], "content": "Deep neural networks are powerful in extracting abstract local 
features. However, they lack the capability to utilise global context information, and accordingly cannot model interactions between adjacent pixel predictions . On the other hand, the popular segmentation methods of the pre-deep learning era, the graphical models, are highly suited to this sort of task. That is why they are currently being used as a refinement layer on many DCNN-based semantic segmentation architectures.\nAs also mentioned in the previous section, the idea behind using graphical models for segmentation is finding an inference by observing the low-level relations between neighbouring pixels. In Figure \\ref{CRFRefinement}, the effect of using a graphical model-based refinement on segmentation results can be seen. The classifier (see Figure \\ref{CRFRefinement}.b) cannot correctly segment pixels where different class labels are adjacent. In this example, a CRF-based refinement is applied to improve the pixel-wise segmentation results. CRF-based methods are widely used for the refinement of deep semantic segmentation methods, although some alternative graphical model-based refinement methods also exist in the literature . \nCRFs are a type of discriminative undirected probabilistic graphical model. They are used to encode known relationships between observations and to construct consistent interpretations. Their usage as a refinement layer comes from the fact that, unlike a discrete classifier, which does not consider the similarity of adjacent pixels, a CRF can utilise this information. The main advantage of CRFs over other graphical models (such as Hidden Markov Models) is their conditional nature and their ability to avoid the problem of label bias . 
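The smoothing intuition behind such refinement can be illustrated with a deliberately simplified toy sketch: plain neighbourhood averaging of per-pixel class scores, rather than a true dense-CRF mean-field update with appearance kernels. It only demonstrates how agreement between adjacent pixels is encouraged.

```python
def refine(prob_maps, iterations=5, alpha=0.5):
    # Toy refinement: mix each pixel's class scores with the mean of its
    # 4-neighbours, mimicking the smoothing effect of pairwise potentials.
    h, w, k = len(prob_maps), len(prob_maps[0]), len(prob_maps[0][0])
    for _ in range(iterations):
        new = [[[0.0] * k for _ in range(w)] for _ in range(h)]
        for i in range(h):
            for j in range(w):
                nbrs = [(a, b)
                        for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < h and 0 <= b < w]
                for c in range(k):
                    nbr_mean = sum(prob_maps[a][b][c] for a, b in nbrs) / len(nbrs)
                    new[i][j][c] = (1 - alpha) * prob_maps[i][j][c] + alpha * nbr_mean
        prob_maps = new
    return prob_maps

def argmax_labels(prob_maps):
    # Collapse per-pixel class scores into a label map.
    return [[max(range(len(p)), key=p.__getitem__) for p in row] for row in prob_maps]
```

A real dense CRF additionally conditions the pairwise term on pixel appearance and position; that extra machinery is precisely what makes CRF refinement powerful but computationally heavy.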
Even though a considerable number of methods (see Table \\ref{MethodsTable}) utilise CRFs for refinement, these models started to lose popularity in relatively recent approaches because they are notoriously slow and very difficult to optimise .\n\\label{pre-early-deep}\nBefore FCNs first appeared in 2014\\footnote{FCN was officially published in 2017. However, the same group first shared the idea online as pre-printed literature in 2014 .}, the initial few years of deep convolutional networks saw a growing interest in the idea of utilising the newly discovered deep features for semantic segmentation . The very first approaches, which were published prior to the proposal of the Rectified Linear Unit (ReLU) layer , used activation functions such as \\emph{tanh} (or similar continuous functions), which can be computationally difficult to differentiate. Thus, training such systems was not considered to be computation-friendly, or even feasible for largescale data.\nHowever, the first mature approaches were just simple attempts to convert classification networks such as AlexNet and VGG to segmentation networks by fine-tuning the fully connected layers . They suffered from the overfitting and time-consuming nature of their fully connected layers in the training phase. 
Moreover, the CNNs used were not sufficiently deep to create abstract features that would relate to the semantics of the image.\nThere were a few early deep learning studies in which the researchers declined to use fully connected layers for their decision-making. Instead, they utilised different structures, such as a recurrent architecture or labelling from a family of separately computed segmentations . By proposing alternative solutions to fully connected layers, these early studies showed the first traces of the necessity for a structure like the FCN, and unsurprisingly they were succeeded by .\nSince their segmentation results were deemed to be unsatisfactory, these studies generally utilised a refinement process, either as a post-processing layer or as an alternative architecture to fully connected decision layers . Refinement methods varied, such as Markov random fields , nearest neighbour-based approach , the use of a calibration layer , using super-pixels , or a recurrent network of plain CNNs . Refinement layers, as discussed in the previous section, are still being utilised by post-FCN methods, with the purpose of increasing the pixel-wise labelling performance around regions where class intersections occur.\nIn , the idea of dismantling fully connected layers from deep CNNs (DCNN) was proposed, and to reflect this idea, the proposed architecture was named ‘Fully Convolutional Networks’ (see Figure \\ref{FCNs}). 
The main objective was to create semantic segmentation networks by adapting classification networks such as AlexNet , VGG , and GoogLeNet into fully convolutional networks, and then transferring their learnt representations by fine-tuning. The most widely used architectures obtained from the study are known as ‘FCN-32s’, ‘FCN-16s’, and ‘FCN-8s’, which are all transfer-learnt using the VGG architecture .\nThe FCN architecture was considered revolutionary in many aspects. First of all, since FCNs did not include fully connected layers, inference per image was considerably faster. This was mainly because convolutional layers, when compared to fully connected layers, have far fewer weights. Second, and maybe more significantly, the structure allowed segmentation maps to be generated for images of any resolution. In order to achieve this, FCNs used deconvolutional layers that can upsample coarse deep convolutional layer outputs to dense pixels of any desired resolution. Finally, and most importantly, they proposed the skip architecture for DCNNs.\n\\begin{figure}[t]\n\\centering\n\\includegraphics*[trim=20 0 80 10,clip=false,width=0.98\\textwidth]{figures/Fig_FCN_Skip_Connections.pdf}\n\\caption{Fully convolutional networks (FCNs) are trained end-to-end and are designed to\nmake dense predictions for per-pixel tasks like semantic segmentation. FCNs consist of no fully connected layers .}\n \\label{FCNs}\n\\end{figure} \nSkip architectures (or connections) provide links between nonadjacent layers in DCNNs. Simply by summing or concatenating outputs of unconnected layers, these connections enable information to flow, which would otherwise be lost because of an architectural choice such as max-pooling layers or dropouts. The most common practice is to use skip connections preceding a max-pooling layer, which downsamples layer output by choosing the maximum value in a specific region. 
Pooling layers help the architecture create feature hierarchies, but also cause a loss of localised information which could be valuable for semantic segmentation, especially at object borders. Skip connections preserve and forward this information to deeper layers by bypassing the pooling layers. Notably, the usage of skip connections in was still relatively primitive. The ‘FCN-8s’ and ‘FCN-16s’ networks included these skip connections at different layers. Denser skip connections for the same architecture, namely ‘FCN-4s’ and ‘FCN-2s’, were also utilised for various applications . This idea eventually evolved into the encoder-decoder structures for semantic segmentation, which are presented in the following section.\nAlmost all subsequent approaches to semantic segmentation followed the idea of FCNs; thus it would not be wrong to state that decision-making with fully-connected layers effectively ceased to exist for semantic segmentation\\footnote{Many methods utilise fully connected layers such as RCNN , which are discussed in the following sections. However, this and other similar methods that include fully connected layers have mostly been succeeded by fully convolutional versions for the sake of computational efficiency.} following the appearance of FCNs.\nOn the other hand, the idea of FCNs also created new opportunities to further improve deep semantic segmentation architectures. 
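The upsample-and-sum skip fusion behind FCN-16s and FCN-8s can be summarised with a toy sketch. Here nearest-neighbour upsampling stands in for the learnt deconvolution layers, and the score maps are tiny single-class toys; real FCNs operate on per-class score volumes produced by $1\\times1$ convolutions.

```python
def upsample(score, factor):
    # Nearest-neighbour upsampling of a 2D score map (a stand-in for the
    # learnt deconvolution/upsampling layers of FCN).
    out = []
    for row in score:
        wide = [v for v in row for _ in range(factor)]
        for _ in range(factor):
            out.append(list(wide))
    return out

def add_maps(a, b):
    # Element-wise sum of two equally sized score maps (the skip fusion).
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

# Toy single-class score maps taken at three depths, as in FCN-8s:
pool5 = [[1.0]]                        # stride 32: coarse, most semantic
pool4 = [[0.5, 0.0], [0.0, 0.5]]       # stride 16
pool3 = [[0.1] * 4 for _ in range(4)]  # stride 8: fine, most localised

fused = add_maps(upsample(pool5, 2), pool4)  # fuse stride 32 into stride 16
fused = add_maps(upsample(fused, 2), pool3)  # fuse the result into stride 8
final = upsample(fused, 2)                   # upsample towards full resolution
```

FCN-32s corresponds to upsampling the stride-32 map directly; each additional fusion step recovers finer localisation from a shallower layer.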
Generally speaking, the main drawbacks of FCNs can be summarised as the loss of label localisation within the feature hierarchy, the inability to process global context knowledge, and the lack of a mechanism for multiscale processing. Thus, most subsequent studies have been principally aimed at solving these issues through the proposal of various architectures or techniques. For the remainder of this paper, we analyse these issues under the title, ‘fine-grained localisation’. Consequently, before presenting a list of the post-FCN state-of-the-art methods, we focus on this categorisation of techniques and examine different approaches that aim at solving these main issues. In the following, we also discuss scale invariance in the semantic segmentation context and finish with object detection-based approaches, a new breed of solutions that aim to resolve the semantic segmentation problem while simultaneously detecting object instances.\n\begin{figure}[ht]\n\centering\n\begin{subfigure}{8cm}\t\n\t\includegraphics*[clip=false,trim=23 20 25 10,width=1\textwidth]{figures/Fig_Localizatino_a.pdf}\n\t\caption{Encoder-Decoder Architecture.}\t\n\end{subfigure}\n\begin{subfigure}{8cm}\t\n\t\includegraphics*[trim=0 0 25 -10,clip=false,width=1\textwidth]{figures/SPPLayers.pdf}\n\t\caption{Spatial-Pyramid Pooling Layer}\t\n\end{subfigure}\n\begin{subfigure}{8cm}\t\n\t\includegraphics*[trim=-10 50 0 -30,clip=false,width=1\textwidth]{figures/Fig_Dilated_Conv.png}\n\t\caption{Regular vs.
Dilated Convolutions}\t\n\\end{subfigure}\n\\begin{subfigure}{8cm}\t\n\t\\includegraphics*[trim=0 0 0 40,clip=true,width=1\\textwidth]{figures/Fig_DeepLab3.png}\n\t\\caption{DeepLabv3+ Architecture}\t\n\\end{subfigure}\n\\begin{subfigure}{8cm}\t\n\t\\includegraphics*[trim=0 310 0 0,clip=false,width=1\\textwidth]{figures/Fig_DlinkNet.png}\n\t\\caption{DlinkNet Architecture}\t\n\\end{subfigure}\n\\caption{Different architectures for fine-grained pixel-wise label localisation.}\n\\label{Localization}\n\\end{figure}", "id": "a810e757-8e18-4cb5-b9eb-b720ad7146bd", "level": "section", "origin_cites_number": 1, "parent_id": "1ea135c8-d69d-401d-8fff-aca1042236fb", "prefix_titles": [ [ "title", "A Survey on Deep Learning-based Architectures\\\\for Semantic Segmentation on 2D images" ], [ "section", "Post-FCN Approaches" ] ], "subsections": [ "8056ef15-b70f-46d2-aa67-e56388a68b74", "b43e8d58-b826-4b10-90a7-9a74f992259e", "34b80f3b-2d89-4524-9834-3bce20359d42", "5c2bb78c-80a6-489a-b0eb-e5a37bb4c679" ], "title": "Post-FCN Approaches" }, { "cite_extract_rate": 0, "cites": [], "content": "Semantic segmentation is, by definition, a dense procedure; hence it requires fine-grained localisation of class labels at the pixel level. For example, in robotic surgery, pixel errors in semantic segmentation can lead to life or death situations. Hierarchical features created by pooling (i.e., max-pooling) layers can partially lose localisation. Moreover, due to their fully convolutional nature, FCNs do not inherently possess the ability to model global context information in an image, which is also very effective in the localisation of class labels. 
Thus, these two issues are intertwined in nature, and in the following, we discuss different approaches that aim at overcoming these problems and at providing finer localisation of class labels.\nThe so-called Encoder-Decoder (ED) architectures (also known as the U-nets, referring to the pioneering study of ) are comprised of two parts. The encoder gradually reduces the spatial dimension with pooling layers, whilst the decoder gradually recovers the object details and spatial dimension. Each feature map of the decoder part only directly receives the information from the feature map at the same level of the encoder part using skip connections, thus EDs can create abstract hierarchical features with fine localisation (see Figure \ref{Localization}.a). U-Net and Seg-Net are very well-known examples. In this architecture, the strongly correlated semantic information, which is provided by the adjacent lower-resolution feature map of the encoder part, has to pass through additional intermediate layers in order to reach the same decoder layer. This usually results in a level of information decay.
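This encoder-decoder information flow can be sketched in one dimension (pure Python, illustrative only; real EDs use learnt convolutions rather than these fixed operations):

```python
# Minimal 1-D sketch of encoder-decoder information flow with U-Net-style
# skip connections (illustrative only; no learnt weights involved).

def downsample(x):          # encoder step: stride-2 max-pooling
    return [max(x[i], x[i + 1]) for i in range(0, len(x) - 1, 2)]

def upsample(x):            # decoder step: nearest-neighbour upsampling
    return [v for v in x for _ in range(2)]

signal = [1, 4, 2, 7, 3, 5, 9, 6]
enc1 = downsample(signal)        # [4, 7, 5, 9]
enc2 = downsample(enc1)          # [7, 9]

dec1 = upsample(enc2)            # [7, 7, 9, 9]
# Skip connection: concatenate the same-level encoder feature channel-wise,
# so fine localisation lost by pooling is reintroduced to the decoder.
dec1_with_skip = list(zip(dec1, enc1))   # [(7, 4), (7, 7), (9, 5), (9, 9)]
print(dec1_with_skip)
```

The tuples pair the coarse decoder signal with the fine, same-level encoder signal, mirroring the channel-wise concatenation a U-Net skip connection performs.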
However, U-Net architectures have proven very useful for segmentation in a wide range of applications, such as medical images , street view images , and satellite images , to name just a few. Although earlier ED architectures were designed for object segmentation tasks only, there are also modified versions, such as ``TernausNetV2'' , which provide instance segmentation capability with minor architectural changes.\nThe idea of constructing a fixed-sized spatial pyramid was first proposed by , in order to prevent a Bag-of-Words system losing spatial relations among features. Later, the approach was adapted to CNNs by , in that, regardless of the input size, a spatial pyramid representation of deep features could be created in a Spatial Pyramid Pooling Network (SPP-Net). The most important contribution of the SPP-Net was that it allowed inputs of different sizes to be fed into CNNs. Images of different sizes fed into convolutional layers inevitably create different-sized feature maps. However, if a pooling layer, just prior to a decision layer, has stride values proportional to the input size, the feature map created by that layer would be of fixed size (see Figure \ref{Localization}.b). In , a modified version, namely the Pyramid Attention Network (PAN), was additionally proposed.
The idea of PAN was to combine an SPP layer with global pooling to learn a better feature representation.\nThere is a common misconception that the SPP-Net structure carries an inherent scale-invariance property, which is incorrect. SPP-Net allows the efficient training of images at different scales (or resolutions) by allowing different input sizes to a CNN. However, the trained CNN with SPP is scale-invariant if, and only if, the training set includes images with different scales. This fact is also true for a CNN without SPP layers.\nHowever, similar to the original idea proposed in , the SPP layer in a CNN constructs relations among the features of different hierarchies. Thus, it is quite similar to skip connections in ED structures, which also allow information flow between feature hierarchies.\nThe most common utilisation of an SPP layer for semantic segmentation is proposed in , such that the SPP layer is appended to the last convolutional layer and fed to the pixel-wise classifier.\nThis idea is based on fusing features extracted from different sources. For example, the so-called ‘DeepMask’ network utilises skip connections in a feed-forward manner, so that an architecture partially similar to both the SPP layer and the ED is obtained.
The same group extends this idea with a top-down refinement approach of the feed-forward module and proposes the so-called ‘SharpMask’ network , which has proven to be more efficient and accurate in segmentation performance. Another approach from this category is the so-called ‘ParseNet’ , which fuses CNN features with external global features from previous layers in order to provide context knowledge. Another approach by is to fuse the ``stage features'' (i.e. deep encoder activations) with ``refinement path features'' (an idea similar to skip connections), using a convolutional (Feature Adaptive Fusion, FAF) block. Although a novel idea in principle, feature fusion approaches (including SPP) create hybrid structures and are therefore relatively difficult to train.\nThe idea of dilated (atrous) convolutions is actually quite simple: with contiguous convolutional filters, the effective receptive field of units can only grow linearly with layers; whereas with dilated convolution, which has gaps in the filter (see Figure \ref{Localization}.c), the effective receptive field grows much more quickly . Thus, with no pooling or subsampling, a rectangular prism of convolutional layers is created. Dilated convolution is a very effective and powerful method for the detailed preservation of feature map resolutions.
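The difference in growth rates is easy to verify numerically (an illustrative sketch assuming stride-1 convolutions of kernel size k, where each layer with dilation d enlarges the receptive field by (k - 1) * d):

```python
# Receptive-field growth of stacked 3x3 convolutions (stride 1), contiguous
# vs. exponentially dilated. For kernel size k and dilation d, each layer
# enlarges the receptive field by (k - 1) * d.

def receptive_field(dilations, k=3):
    rf = 1
    for d in dilations:
        rf += (k - 1) * d
    return rf

contiguous = receptive_field([1, 1, 1, 1])   # linear growth -> 9
dilated    = receptive_field([1, 2, 4, 8])   # exponential growth -> 31
print(contiguous, dilated)
```

Four contiguous 3x3 layers see only 9 units, whereas four layers with dilations 1, 2, 4, 8 see 31: the receptive field grows exponentially with depth while the layer count, and hence the parameter count, grows only linearly.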
The negative aspect of the technique, compared to other techniques, concerns its higher demand for GPU storage and computation, since the feature map resolutions do not shrink within the feature hierarchy .\nAs also discussed in Section 3.1.1, CNNs naturally lack mechanisms to specifically ‘focus’ on regions where class intersections occur. Around these regions, graphical models are used to perform inference by observing low-level relations between neighbouring feature maps of CNN layers. Consequently, graphical models, mainly CRFs, are utilised as refinement layers in deep semantic segmentation architectures. As in , CRFs connect low-level interactions with output from multiclass interactions, and in this way, global context knowledge is constructed.\nAs a refinement layer, various methods exist that couple CRFs with DCNNs, such as the Convolutional CRFs , the Dense CRF , and CRF-as-RNN .
In general, CRFs help build context knowledge and thus a finer level of localisation in class labels.\nThe ability of Recurrent Neural Networks (RNNs) to handle sequential information can help improve segmentation accuracy. For example, used Conv-LSTM layers to improve their semantic segmentation results in image sequences. However, there are also methods that use recurrent structures on still images. For example, the Graph LSTM network is a generalization of LSTM from sequential data or multidimensional data to general graph-structured data for semantic segmentation on 2D still images. Graph-RNN is another example of a similar approach, in which an LSTM-based network is used to fuse a deep encoder output with the original image in order to obtain a finer pixel-level segmentation. Likewise, in , the researchers utilised LSTM-chains in order to intertwine multiple scales, resulting in pixel-wise segmentation improvements. There are also hybrid approaches where CNNs and RNNs are fused. A good example of this is the so-called ReSeg model , in which the input image is fed to a VGG-like CNN encoder, and is then processed by recurrent layers (namely the ReNet architecture) in order to better localise the pixel labels. Another similar approach is the DAG-RNN , which utilises a DAG-structured CNN+RNN network, and models long-range semantic dependencies among image units.
To the best of our knowledge, no purely recurrent structures for semantic segmentation exist, mainly because semantic segmentation requires a preliminary CNN-based feature encoding scheme.\nThere is currently an increasing trend in one specific type of RNN, namely ‘recurrent attention modules’. In these modules, attention is fused into the RNN, providing a focus on certain regions of the input when predicting a certain part of the output sequence. Consequently, they are also being utilised in semantic segmentation .\n\begin{figure}[t]\n\centering\n\begin{subfigure}{12cm}\t\n\t\includegraphics*[trim=0 0 0 0,clip=false,width=1\textwidth]{figures/Fig_MaskRCNN.pdf}\n\t\caption{Regions with CNN features-based (Mask-RCNN) architecture}\t\n\end{subfigure}\n\begin{subfigure}{12cm}\t\n\t\includegraphics*[trim=0 0 0 0,clip=false,width=1\textwidth]{figures/Fig_YOLACT.png}\n\t\caption{Fully convolutional object detector (YOLO)-based architecture}\t\n\end{subfigure}\n\caption{Different architectures for object detection-based semantic segmentation methods}\n\label{ObjectBased}\n\end{figure}\nScale Invariance is, by definition, the ability of a method to process a given input, independent of the relative scale (i.e. the scale of an object to its scene) or image resolution.
Although it is extremely crucial for certain applications, this ability is usually overlooked or is confused with a method’s ability to include multiscale information. A method may use multiscale information to improve its pixel-wise segmentation ability, but can still be dependent on scale or resolution. That is why we find it necessary to discuss this issue under a different title and to provide information on the techniques that provide scale and/or resolution invariance.\nIn computer vision, any method can become scale invariant if trained with multiple scales of the training set. Some semantic segmentation methods such as utilise this strategy. However, these methods do not possess an inherent scale-invariance property, which is usually obtained by normalisation with a global scale factor (such as in SIFT by ). This approach is not usually preferred in the literature on semantic segmentation. The image sets that exist in semantic segmentation literature are extremely large in size. 
Thus, the methods are trained to memorise the training set because, in principle, overfitting a large-scale training set is actually tantamount to solving the entire problem space.\nThere has recently been a growing trend in computer vision that aims at specifically resolving the problem of object detection, that is, establishing a bounding box around all objects within an image.
Given that the image may or may not contain any number of objects, the architectures utilised to tackle such a problem differ to the existing fully-connected/convolutional classification or segmentation models.\n\\begin{figure*}[t]\n\\centering\n\\begin{subfigure}{3.2cm}\t\n\t\\includegraphics*[trim=0 0 0 0,clip=false,width=1\\textwidth]{figures/Fig6a_original.jpg}\n\t\\caption{\\scriptsize image}\t\n\\end{subfigure}\n\\begin{subfigure}{3.2cm}\t\n\t\\includegraphics*[trim=0 0 0 0,clip=false,width=1\\textwidth]{figures/Fig6b_groundtruth.png}\n\t\\caption{\\scriptsize{reference}}\t\n\\end{subfigure}\n\\begin{subfigure}{3.2cm}\t\n\t\\includegraphics*[trim=0 0 0 0,clip=false,width=1\\textwidth]{figures/Fig6c_fcn32s.png}\n\t\\caption{\\scriptsize{FCN-32S}}\t\n\\end{subfigure}\n\\begin{subfigure}{3.2cm}\t\n\t\\includegraphics*[trim=0 0 0 0,clip=false,width=1\\textwidth]{figures/Fig6d_fcn8s.png}\n\t\\caption{\\scriptsize FCN-8S}\t\n\\end{subfigure}\n\\begin{subfigure}{3.2cm}\t\n\t\\includegraphics*[trim=0 0 0 0,clip=false,width=1\\textwidth]{figures/Fig6i_CMSA.png}\n\t\\caption{\\scriptsize CMSA}\t\n\\end{subfigure}\n\\begin{subfigure}{3.2cm}\t\n\t\\includegraphics*[trim=0 0 0 0,clip=false,width=1\\textwidth]{figures/Fig6e_DeepLabV1.png}\n\t\\caption{\\scriptsize{DeepLabv1}}\t\n\\end{subfigure}\n\\begin{subfigure}{3.2cm}\t\n\t\\includegraphics*[trim=0 0 0 0,clip=false,width=1\\textwidth]{figures/Fig6f_CRFasRNN.png}\n\t\\caption{\\scriptsize{CRF-as-RNN}}\t\n\\end{subfigure}\n\\begin{subfigure}{3.2cm}\t\n\t\\includegraphics*[trim=0 0 0 0,clip=false,width=1\\textwidth]{figures/Fig6g_DeepLabV2.png}\n\t\\caption{\\scriptsize{DeepLab.v2}}\t\n\\end{subfigure}\n\\begin{subfigure}{3.2cm}\t\n\t\\includegraphics*[trim=0 0 0 0,clip=false,width=1\\textwidth]{figures/Fig6h_DeepLabV2_wCRF.png}\n\t\\caption{\\scriptsize{DeepLab.v2+CRF}}\t\n\\end{subfigure}\n\\begin{subfigure}{3.2cm}\t\n\t\\includegraphics*[trim=0 0 0 
0,clip=false,width=1\textwidth]{figures/Fig6j_PAN.png}\n\t\caption{\scriptsize{PAN}}\t\n\end{subfigure}\n\caption{(a) A sample image from the PASCAL VOC validation set, (b) its semantic segmentation ground truth, and results obtained from different studies are depicted: c) FCN-32S , d) FCN-8S , e) CMSA , f) DeepLab-v1 , g) CRF-as-RNN , h) DeepLab-v2 , i) DeepLab-v2 with CRF refinement , j) PAN .}\n\label{SampleResults}\n\end{figure*}\nThe pioneering study that represents this idea is the renowned ‘Regions with CNN features’ (RCNN) network . Standard CNNs with fully convolutional and fully connected layers lack the ability to provide varying length output, which is a major flaw for an object detection algorithm that aims to detect an unknown number of objects within an image. The simplest way to resolve this problem is to take different regions of interest from the image, and then to employ a CNN in order to detect objects within each region separately; in the original RCNN, these candidate regions were generated by an external selective-search algorithm. Improved versions of RCNN, namely ‘Fast-RCNN’ and ‘Faster-RCNN’ , were subsequently also proposed by the same research group; the latter replaced selective search with a learnt ‘Region Proposal Network’ (RPN) (see Figure \ref{ObjectBased}.a). Because these networks allow for the separate detection of all objects within the image, the idea was easily implemented for instance segmentation, as in the ‘Mask-RCNN’ .\nThe RPN is the combination of CNN layers and a fully connected structure in order to decide the object categories and bounding box positions. As discussed within the previous sections of this paper, due to their cumbersome structure, fully connected layers were largely abandoned with FCNs. RCNNs shared a similar fate when the ‘You-Only-Look-Once’ (YOLO) by and ‘Single Shot Detector’ (SSD) by were proposed.
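Both the RCNN family and the single-shot detectors mentioned above rely on the same box-level machinery: each candidate box is scored, and heavily overlapping candidates are pruned with intersection-over-union (IoU)-based non-maximum suppression. The following pure-Python sketch is purely illustrative (boxes as (x1, y1, x2, y2) corner tuples), not code from any of the cited detectors:

```python
# Intersection-over-Union (IoU) and greedy non-maximum suppression (NMS),
# the box-level machinery shared by RCNN-style and single-shot detectors.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    # Keep the highest-scoring box, drop every candidate overlapping it
    # by more than `thresh`, and repeat on the survivors.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: the two overlapping boxes collapse to one
```

In practice these routines are provided by the detection frameworks themselves; the sketch merely illustrates why overlapping duplicate detections collapse to the single highest-scoring box.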
YOLO utilises a single convolutional network that predicts the bounding boxes and the class probabilities for these boxes in one forward pass, and consequently provides real-time performance. SSD proposed a similar idea, in which bounding boxes were predicted after multiple convolutional layers. Since each convolutional layer operates at a different scale, the architecture is able to detect objects of various scales. Whilst slower than YOLO, it is still considered to be faster than RCNNs. This new breed of object detection techniques was immediately applied to semantic segmentation. Similar to MaskRCNN, the ‘Mask-YOLO’ and ‘YOLACT’ architectures were applications of these object detectors to the problem of instance segmentation (see Figure \ref{ObjectBased}b). Similar to YOLACT, some other methods also achieve fast, real-time instance segmentation, such as ESE-Seg , SOLO, SOLOv2, DeepSnake , and CenterPoly.\nLocating objects within an image prior to segmenting them at the pixel level is both intuitive and natural, since it is effectively how the human brain supposedly accomplishes this task . In addition to these ``two-stage'' (detection+segmentation) methods, there are some recent studies that incorporate the segmentation task into one-stage bounding-box detectors, resulting in a simple yet efficient instance segmentation framework . However, the latest trend is to use global-area-based methods by generating intermediate FCN feature maps and then assembling these basis features to obtain final masks . \nIn recent years, a trend of alleviating the demand for pixel-wise labels has been realised, mainly by employing bounding boxes and by expanding from semantic segmentation to instance segmentation applications.
In both semantic segmentation and instance segmentation methods, the category of each pixel is recognised; the only difference is that instance segmentation also differentiates object occurrences of the same category. Therefore, weakly-supervised instance segmentation (WSIS) methods are also utilised for instance segmentation. The supervision of WSIS methods can use different annotation types for training, which are usually in the form of either bounding boxes or image-level labels . Hence, at the time of writing, employing object detection-based methods for semantic segmentation is an area likely to see significant further development in the near future. \n\begin{table*}[p]\n\centering\n\begin{tabular}{|p{2.5cm}|p{9.3cm}|p{2.9cm}|>{\centering\arraybackslash}p{0.8cm}|}\n\hline\nMethod & Method Summary & Rankings & Eff.\\\hline \hline\nMultiScale-Net.\newline\scriptsize & \emph{Multiscale convolutional network fused in parallel with a segmentation framework (either superpixel or CRF-based). Relatively lower computational efficiency due to a CRF block.} & \scriptsize68.7\% mPA @SIFTflow & $\star$$\hspace{0.1cm}$$\star$\\\hline\nRecurrent CNN\newline\scriptsize & \emph{Recurrent architecture constructed by using different instances of a CNN, in which each network instance is fed with previous label predictions (obtained from the previous instance). Heavy computational load when multiple instances (3 in their best performing experiments) are fed.} & \scriptsize77.7\% mPA @SIFTflow & $\star$ \\\hline\nFCN\newline \newline & \emph{Fully convolutional encoder structure (i.e., no fully connected layers) with skip connections that fuse multiscale activations at the final decision layer. Relatively fast due to no fully connected layers or a refinement block.} & \scriptsize85.2\% mPA @SIFTflow\newline\scriptsize62.2\% mIoU @PASCAL 2012\newline65.3\% mIoU @CitySca.
(w/o coarse)\newline39.3\% mIoU @ADE20K & $\star$ $\star$ $\star$ \\\hline\nDeepLab.v1\newline & \emph{CNN with dilated convolutions, succeeded by a fully-connected (i.e. Dense) CRF.} Fast and optimized computation leads to near real-time performance. & \scriptsize66.4\% mIoU @PASCAL 2012 & $\star$ $\star$ $\star$\\\hline\nCMSA\newline \newline & \emph{Layers of a pyramidal input are fed to separate FCNs for different scales in parallel. These multiscale FCNs are also connected in series to provide pixel-wise category, depth and normal output, simultaneously. Relatively lower computational efficiency due to progressive processing of a sequence of different scales.} & \scriptsize83.8\% mPA @SIFTflow\newline\scriptsize62.6\% mIoU @PASCAL 2012 & $\star$$\hspace{0.1cm}$$\star$ \\\hline\nUNet\newline & \emph{Encoder/decoder structure with skip connections that connect same levels of ED and final input-sized classification layer. Efficient computation load due to no fully connected layers or a refinement block.} & \scriptsize72.7\% mIoU @PASCAL 2012 (\emph{tested by })& $\star$ $\star$ $\star$\\\hline\nSegNet\newline\newline & \emph{Encoder/decoder structure (similar to UNet) with skip connections that transmit only pooling indices (unlike U-Net, for which skip connections concatenate same-level activations). Efficient computation load due to no fully connected layers or a refinement block.} & \scriptsize59.9\% mIoU @PASCAL 2012\newline79.2\% mIoU @CitySca. (w/o coarse) & $\star$ $\star$ $\star$\\\hline\nDeconvNet\newline\newline & \emph{Encoder/decoder structure (namely ‘the Conv./Deconv. Network’) without skip connections. The encoder (convolutional) part of the network is transferred from the VGG-VD-16L. Efficient computation load due to no fully connected layers or a refinement block.}
& \scriptsize74.8\% mIoU @PASCAL 2012 & $\star$ $\star$ $\star$ \\\hline\nMSCG \newline \newline& \emph{Multiscale context aggregation using only a rectangular prism of dilated convolutional layers, without pooling or subsampling layers, to perform pixel-wise labelling. Efficient computation load due to no fully connected layers or a refinement block.} & \scriptsize67.6\% mIoU @PASCAL 2012\newline\scriptsize67.1\% mIoU @CitySca. (w/o coarse) & $\star$ $\star$ $\star$ \\\hline\nCRF-as-RNN\newline \newline & \emph{Fully convolutional CNN (i.e., FCN) followed by a CRF-as-RNN layer, in which an iterative CRF algorithm is formulated as an RNN. Because of the RNN block, computational efficiency is limited.} & \scriptsize65.2\% mIoU @PASCAL 2012\newline\scriptsize62.5\% mIoU @CitySca. (w/o coarse) & $\star$$\hspace{0.1cm}$$\star$ \\\hline\nFeatMap-Net.\newline \newline & \emph{Layers of a pyramidal input fed to parallel multiscale feature maps (i.e., CNNs), and later fused in an upsample/concatenation (i.e. pyramid pooling) layer to provide the final feature map to a Dense CRF Layer. Well-planned but loaded architecture leads to moderate computational efficiency.} & \scriptsize88.1\% mPA @SIFTflow\newline\scriptsize75.3\% mIoU @PASCAL 2012 & $\star$$\hspace{0.1cm}$$\star$\\\hline \nGraph LSTM\newline \newline & \emph{Generalization of LSTM from sequential data to general graph-structured data for semantic segmentation on 2D still images, mostly people/parts. Graph-LSTM processing considerably limits computation efficiency.} & \scriptsize60.2\% mIoU @PASCAL Person/Parts 2010 & $\star$\\\hline\nDAG-RNN\newline \newline & \emph{DAG-structured CNN+RNN network that models long-range semantic dependencies among image units.
Due to chain-structured sequential processing of pixels with a recurrent model, the computational efficiency is considerably limited.} & \scriptsize85.3\% mPA @SIFTflow & $\star$\\\hline\n\end{tabular}\n\end{table*}\n\begin{table*}[p]\n\centering\n\begin{tabular}{|p{2.5cm}|p{9.3cm}|p{2.9cm}|>{\centering\arraybackslash}p{0.8cm}|}\n\hline\ncont'd. & & & \\\hline \hline\nDeepLab.v2\newline \newline & \emph{Improved version of DeepLab.v1, with additional ‘dilated (atrous) spatial pyramid pooling’ (ASPP) layer. Similar computational performance to DeepLab.v1.} & \scriptsize79.7\% mIoU @PASCAL 2012\newline\scriptsize70.4\% mIoU @CitySca. (w/o coarse) & $\star$ $\star$ $\star$ \\\hline\nPSPNet\newline \newline & \emph{CNN followed by a pyramid pooling layer similar to , but without a fully connected decision layer. Hence, computational performance closer to FCN .} & \scriptsize85.5\% mIoU @PASCAL 2012\newline81.2\% mIoU @CitySca. (w. coarse)\newline55.4\% mIoU @ADE20K & $\star$ $\star$ $\star$ \\\hline\nDeepLab.v3\newline & \emph{Improved version of DeepLab.v2, with optimisation of ASPP layer hyperparameters and without a Dense CRF layer, for faster operation.} & \scriptsize85.7\% mIoU @PASCAL 2012\newline81.3\% mIoU @CitySca. (w. coarse) & $\star$ $\star$ $\star$ \\\hline\nDIS\newline \newline & \emph{One network predicts labelmaps/tags, while another performs semantic segmentation using these predictions. Both networks use ResNet101 for preliminary feature extraction.
They declare similar computational efficiency to DeepLabv2 } & \scriptsize41.7\% mIoU @COCO\newline\scriptsize86.8\% mIoU @PASCAL 2012 & $\star$ $\star$ $\star$ \\\hline\nMask-RCNN\newline \newline & \emph{Object Detector Faster-RCNN followed by ROI-pooling and Convolutional layers, applied to instance segmentation, with near real-time performance (see Figure \ref{ObjectBased}.a).} & \scriptsize37.1\% mIoU @COCO \emph{tested by} & $\star$ $\star$ $\star$ \\\hline\nGCN\newline \newline & \emph{Fed by an initial ResNet-based encoder, GCN uses large kernels to fuse high- and low-level features in a multiscale manner, followed by a convolutional Border Refinement (BR) module. Its fully convolutional architecture allows near real-time performance.} & \scriptsize83.6\% mIoU @PASCAL 2012\newline76.9\% mIoU @CitySca. (w/o coarse) & $\star$ $\star$ $\star$\\\hline\nSDN\newline \newline & \emph{UNET architecture that consists of multiple shallow deconvolutional networks, called SDN units, stacked one by one to integrate contextual information and guarantee fine recovery of localised information. Computational efficiency similar to UNET-like architectures.} & \scriptsize83.5\% mIoU @PASCAL 2012 & $\star$ $\star$ $\star$ \\\hline\nDFN\newline \newline & \emph{Consists of two sub-networks: Smooth Net (SN) and Border Net (BN). SN utilises an attention module and handles global context, whereas BN employs a refinement block to handle borders. Limited computational efficiency due to an attention block}. & \scriptsize86.2\% mIoU @PASCAL 2012\newline\scriptsize80.3\% mIoU @CitySca. (w. coarse) & $\star$$\hspace{0.1cm}$$\star$ \\\hline\nMSCI\newline \newline &\emph{Aggregates features from different scales via connections between Long Short-term Memory (LSTM) chains. Limited computational efficiency due to multiple RNN blocks (i.e.
LSTMs).} & \\scriptsize88.0\\% mIoU @PASCAL 2012 & $\\star$$\\hspace{0.1cm}$$\\star$ \\\\\\hline\nDeepLab.v3+\\newline \\newline & \\emph{Improved version of DeepLab.v3, using special encoder-decoder structure with dilated convolutions (with no Dense CRF employed for faster operation).} & \\scriptsize87.3\\% mIoU @PASCAL 2012\\newline82.1\\% mIoU @CitySca. (w. coarse) & $\\star$ $\\star$ $\\star$\\\\\\hline\nHPN\\newline \\newline & \\emph{Followed by a convolutional ‘Appearance Feature Encoder’, a ‘Contextual Feature Encoder’ consisting of LSTMs generates super-pixel features fed to a Softmax-based classification layer. Limited computational efficiency due to multiple LSTMs.} & \\scriptsize85.8\\% mIoU @PASCAL 2012\\newline\\scriptsize92.3\\% mPA @SIFTflow & $\\star$$\\hspace{0.1cm}$$\\star$ \\\\\\hline\nEncNet\\newline \\newline & \\emph{Fully connected structure to extract context is fed by dense feature maps (obtained from ResNet ) and followed by a convolutional prediction layer. Fully connected layers within their ``Context Encoding Module'' limit computational performance.} & \\scriptsize85.9\\% mIoU @PASCAL 2012\\newline55.7\\% mIoU @ADE20K & $\\star$$\\hspace{0.1cm}$$\\star$ \\\\\\hline\nPSANet\\newline & \\emph{A convolutional point-wise spatial attention (PSA) module is attached to a pretrained convolutional encoder, so that pixels are interconnected through a self-adaptively learnt attention map to provide global context. Additional PSA module limits computational efficiency compared to fully convolutional architectures (e.g. FCN).} & \\scriptsize85.7\\% mIoU @PASCAL 2012\\newline81.4\\% mIoU @CitySca. (w. coarse) & $\\star$$\\hspace{0.1cm}$$\\star$ \\\\\\hline\n\\end{tabular}\n\\end{table*}\n\\begin{table*}[p]\n\\centering\n\\begin{tabular}{|p{2.5cm}|p{9.3cm}|p{2.9cm}|p{0.8cm}|}\n\\hline\ncont'd. & & & \\\\\\hline \\hline\nPAN\\newline & \\emph{SPP layer with global pooling architecture. 
Similar architecture to PSPNet , and thus similar computational efficiency.} & \\scriptsize84.0\\% mIoU @PASCAL 2012 (\\emph{taken from the paper, not listed in the leaderboard}) & $\\star$ $\\star$ $\\star$ \\\\\\hline\nExFuse\\newline \\newline & \\emph{Improved version of GCN for feature fusing which introduces more semantic information into low-level features and more spatial details into high-level features, by additional skip connections. Computational performance comparable to GCN.} & \\scriptsize87.9\\% mIoU @PASCAL 2012 & $\\star$ $\\star$ $\\star$ \\\\\\hline\nEMANet152\\newline \\newline & \\emph{Novel attention module between two CNN structures converts input feature maps to output feature maps, thus providing global context. Computationally more efficient compared to other attention governing architectures (e.g. PSANet).} & \\scriptsize88.2\\% mIoU @PASCAL 2012\\newline39.9\\% mIoU @COCO & $\\star$ $\\star$ $\\star$ \\\\\\hline\nKSAC\\newline \\newline & \\emph{Allows branches of different receptive fields to share the same kernel to facilitate communication among branches and perform feature augmentation inside the network. The idea is similar to the ASPP layer of DeepLabv3 , hence similar computational performance.} & \\scriptsize88.1\\% mIoU @PASCAL 2012 & $\\star$ $\\star$ $\\star$ \\\\\\hline\nCFNet\\newline \\newline & \\emph{Using a distribution of co-occurrent features for a given target in an image, a fine-grained spatial invariant representation is learnt and the CFNet is constructed. Similar architecture to PSANet , hence similar (and limited) computational performance due to fully connected layers.} & \\scriptsize87.2\\% mIoU @PASCAL 2012 & $\\star$$\\hspace{0.1cm}$$\\star$ \\\\\\hline\nYOLACT\\newline & \\emph{Object Detector YOLO followed by Class Probability and Convolutional layers, applied to instance segmentation (see Figure \\ref{ObjectBased}.b), with \\underline{real-time} semantic segmentation performance}. 
& \\scriptsize72.3\\% mAP$_{50}$ @PASCAL SBD\\newline\\scriptsize31.2\\% mAP @COCO & $\\star$ $\\star$ $\\star$\\newline$\\star$ \\\\\\hline\nESE-Seg\\newline & \\emph{ESE-Seg is an object detection-based approach that uses explicit shape encoding, decoding multiple object shapes with tensor operations in real-time.} & \\scriptsize69.3\\% mAP$_{50}$ @PASCAL SBD\\newline\\scriptsize21.6\\% mAP @COCO & $\\star$ $\\star$ $\\star$\\newline$\\star$ \\\\\\hline\nSOLO\\newline \\newline & \\emph{The central idea of the SOLO framework is to reformulate instance segmentation as two simultaneous problems: category prediction and instance mask generation, using a single convolutional backbone. The model can run in real-time with proper parameter tuning.} & \\scriptsize37.8\\% mAP @COCO & $\\star$ $\\star$ $\\star$\\\\\\hline\nEfficientNet-L2 + NASFPN + Noisy Student\\newline & \\emph{The study aims at understanding the effect of pre-training and self-training and applying this to the semantic segmentation problem. For their experiment, they utilize a neural architecture search (NAS) strategy \nwith EfficientNet-L2 as the backbone architecture. The model was the leader of the PASCAL VOC 2012 challenge at the time this manuscript was written.} & \\scriptsize90.5\\% mIoU @PASCAL 2012 & $\\star$ $\\star$ $\\star$ \\\\\\hline\nDCNAS\\newline \\newline & \\emph{Neural Architecture Search applied to MobileNetV3 , a densely connected search space for semantic segmentation. Although computational performance is not explicitly indicated, the resulting architecture possibly provides U-Net like computational efficiency for model inference. } & \\scriptsize86.9\\% mIoU @PASCAL 2012 (\\emph{taken from the paper, not listed in the leaderboard})\\newline83.6\\% mIoU @CitySca. (w. 
coarse)& $\\star$ $\\star$ $\\star$\\\\\\hline\nSOLOv2\\newline \\newline & \\emph{Updated, real-time version of SOLO , empowered by an efficient and holistic instance mask representation scheme, which dynamically segments each instance in the image, without resorting to bounding\nbox detection.} & \\scriptsize37.1\\% mAP @COCO& $\\star$ $\\star$ $\\star$\\newline$\\star$\\\\\\hline\n\\end{tabular}\n\\end{table*}\n\\begin{table*}[h]\n\\centering\n\\begin{tabular}{|p{2.5cm}|p{9.3cm}|p{2.9cm}|p{0.8cm}|}\n\\hline\ncont'd. & & & \\\\\\hline \\hline\nDeep Snake\\newline \\newline & \\emph{Deep Snake is a fully convolutional architecture with a contour-based approach for real-time instance segmentation.} & \\scriptsize62.1\\% mAP$_{50}$ @PASCAL SBD\\newline\\scriptsize30.3\\% mAP @COCO & $\\star$ $\\star$ $\\star$\\newline$\\star$\\\\\\hline\nBlendMask\\newline \\newline & \\emph{Using both top-down and\nbottom-up instance segmentation approaches, BlendMask learns attention maps for each instance using a single\nconvolution layer.} & \\scriptsize37.1\\% mAP @COCO & $\\star$ $\\star$ $\\star$\\newline$\\star$\\\\\\hline\nSwiftNetRN18-Pyr\\newline \\newline & \\emph{Based on shared pyramidal representation and fusion of heterogeneous features, SwiftNetRN18-Pyr fuses hybrid representation within a ladder-style decoder. Provides beyond real-time performance with modest accuracy.} & \\scriptsize35.0\\% mIoU @ADE20K & $\\star$ $\\star$ $\\star$\\newline$\\star$\\\\\\hline\nBOXInst\\newline \\newline & \\emph{Achieves mask-level instance segmentation with only bounding-box\nannotations for training. 
The core idea is to redesign the loss for learning masks in instance segmentation.} & \\scriptsize61.4\\% mAP$_{50}$ @PASCAL SBD\\newline\\scriptsize31.6\\% mAP @COCO & $\\star$ $\\star$ $\\star$\\\\\\hline\n\\end{tabular}\n\\caption{State-of-the-art semantic segmentation methods, showing the method name and reference, brief summary, problem type targeted, and refinement model (if any).}\n\\label{MethodsTable}\n\\end{table*}", "id": "34b80f3b-2d89-4524-9834-3bce20359d42", "level": "subsection", "origin_cites_number": 67, "parent_id": "a810e757-8e18-4cb5-b9eb-b720ad7146bd", "prefix_titles": [ [ "title", "A Survey on Deep Learning-based Architectures\\\\for Semantic Segmentation on 2D images" ], [ "section", "Post-FCN Approaches" ], [ "subsection", "Object Detection-based Methods" ] ], "subsections": [], "title": "Object Detection-based Methods" }, { "cite_extract_rate": 0.8571428571428571, "cites": [ 7505, 1741, 1745, 1768, 7502, 1763 ], "content": "In Table \\ref{MethodsTable}, we present several semantic segmentation methods, each with a brief summary, explaining the fundamental idea that represents the proposed solutions, their position in available leaderboards, and a categorical level of the method's computational efficiency. The intention is for readers to gain a better evolutionary understanding of the methods and architectures in this field, and a clearer conception of how the field may subsequently progress in the future. Regarding the brief summaries of the listed methods, please refer to the categorisations provided earlier in this section.\nTable \\ref{MethodsTable} includes 34 methods spanning an eight-year period, starting with early deep learning approaches through to the most recent state-of-the-art techniques. Most of the listed studies have been quite successful and have significantly high rankings in the previously mentioned leaderboards. 
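The leaderboard rankings quoted throughout the table are dominated by the mean Intersection-over-Union (mIoU) score. As a hedged illustration only (not code from any of the surveyed works; the function name and the toy confusion matrix are made up), mIoU can be computed from a per-class confusion matrix as follows:

```python
import numpy as np

def mean_iou(conf: np.ndarray) -> float:
    """Mean Intersection-over-Union from a KxK confusion matrix.

    conf[i, j] counts pixels of ground-truth class i predicted as class j.
    IoU_k = TP_k / (TP_k + FP_k + FN_k); classes absent from both the
    prediction and the ground truth are ignored.
    """
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp          # predicted as k but actually not k
    fn = conf.sum(axis=1) - tp          # actually k but predicted otherwise
    denom = tp + fp + fn
    valid = denom > 0
    return float((tp[valid] / denom[valid]).mean())

# Toy 2-class example: 8 pixels of class 0, 4 pixels of class 1.
conf = np.array([[6, 2],
                 [1, 3]])
print(round(mean_iou(conf), 3))  # class IoUs 6/9 and 3/6 -> 0.583
```

For the toy matrix, class 0 has IoU 6/9 and class 1 has IoU 3/6, giving an mIoU of about 0.583; leaderboard scores such as those in the table average this quantity over the full class set of the benchmark.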
Whilst there are many other methods, we believe this list to be a clear depiction of the advances in deep learning-based semantic segmentation approaches. In Figure \\ref{SampleResults}, a sample image from the PASCAL VOC validation set, its semantic segmentation ground truth and results obtained from some of the listed studies are depicted. Figure \\ref{SampleResults} clearly shows the gradually growing success of different methods, from the pioneering FCN architectures to more advanced architectures such as DeepLab or CRF-as-RNN .\nJudging by the picture it portrays, the evolution of the literature clearly reveals a number of important implications. First, graphical model-based refinement modules are being abandoned due to their slow nature. A good example of this trend would be the evolution of DeepLab from to (see Table \\ref{MethodsTable}). Notably, no significant study published in 2019 or 2020 employed a CRF-based or similar module to refine its segmentation results. Second, most studies published in the past two years show no significant leap in performance rates. For this reason, researchers have tended to focus on experimental solutions such as object detection-based or Neural Architecture Search (NAS)-based approaches. Some of these very recent studies focus on NAS-based techniques instead of hand-crafted architectures. EfficientNet-NAS belongs to this category and is the leading study in the PASCAL VOC 2012 semantic segmentation challenge at the time the paper was prepared. We believe that the field will witness an increasing interest in NAS-based methods in the near future. In general, considering all studies of the post-FCN era, the main challenge of the field remains \\emph{efficiently} integrating (i.e. 
in real-time) global context to localisation information, which still does not appear to have an off-the-shelf solution, although there are some promising techniques, such as YOLACT .\nIn Table \\ref{MethodsTable}, the right-most column represents a categorical level of computational efficiency. \nWe use a four-level categorisation (one star to four stars) to indicate the computational efficiency of each listed method. For any assigned level of the computational efficiency of a method, we explain our reasoning in the table with solid arguments. For example, one of the four-star methods in Table \\ref{MethodsTable} is ``YOLACT'' by , which claims to provide real-time performance (i.e. $>$30fps) on both PASCAL VOC 2012 and COCO image sets.", "id": "5c2bb78c-80a6-489a-b0eb-e5a37bb4c679", "level": "subsection", "origin_cites_number": 7, "parent_id": "a810e757-8e18-4cb5-b9eb-b720ad7146bd", "prefix_titles": [ [ "title", "A Survey on Deep Learning-based Architectures\\\\for Semantic Segmentation on 2D images" ], [ "section", "Post-FCN Approaches" ], [ "subsection", "Evolution of Methods" ] ], "subsections": [], "title": "Evolution of Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "Although tremendous successes have been achieved so far in the semantic segmentation field, there are still many open challenges due to hard requirements such as time-consuming pixel-level annotations, lack of generalization ability to new domains and classes, and the need for real-time performance with higher segmentation accuracies. 
In this section, we categorize possible future directions under different titles by providing examples of recent studies that represent each direction.", "id": "511d59eb-d8b1-4b46-87e9-ff1fa938af77", "level": "section", "origin_cites_number": 0, "parent_id": "1ea135c8-d69d-401d-8fff-aca1042236fb", "prefix_titles": [ [ "title", "A Survey on Deep Learning-based Architectures\\\\for Semantic Segmentation on 2D images" ], [ "section", "Future Scope and Potential Research Directions" ] ], "subsections": [ "be8f087e-949b-4d20-87cb-2330fbe327d3", "27fe3c08-c148-46da-9550-40e08accd4e9", "9962178a-f02b-44d6-b98a-ef8f2f5f18f2", "763bf75f-b1d6-40b7-8dca-a0c747e711f3", "48b1b56f-3ed3-4918-bd61-df9ea4de0d3d" ], "title": "Future Scope and Potential Research Directions" }, { "cite_extract_rate": 0.6086956521739131, "cites": [ 1778, 1771, 1751, 1779, 7065, 1777, 1780, 6978, 1776, 737, 1772, 1774, 1775, 1773 ], "content": "Over the last few years, there has been an increasing research effort directed towards approaches that are alternatives to pixel-level annotations, such as unsupervised, semi-supervised and weakly-supervised methods. Recent studies show that WSSS methods usually perform better than the other schemes, where annotations are in the form of image-level labels , video-level labels , scribbles , points , and bounding boxes . 
In the case of image-level labels, class activation maps (CAMs) are used to localize small discriminative regions, which are not suitable particularly for large-scale objects, but can be utilized as initial seeds (pseudo-masks) .", "id": "be8f087e-949b-4d20-87cb-2330fbe327d3", "level": "subsection", "origin_cites_number": 23, "parent_id": "511d59eb-d8b1-4b46-87e9-ff1fa938af77", "prefix_titles": [ [ "title", "A Survey on Deep Learning-based Architectures\\\\for Semantic Segmentation on 2D images" ], [ "section", "Future Scope and Potential Research Directions" ], [ "subsection", "Weakly-Supervised Semantic Segmentation (WSSS)" ] ], "subsections": [], "title": "Weakly-Supervised Semantic Segmentation (WSSS)" }, { "cite_extract_rate": 0.555555555555555, "cites": [ 8493, 1781, 1776, 1782, 1783 ], "content": "Motivated by humans' ability to recognize new concepts in a scene by using only a few visual samples, zero-shot and/or few-shot learning methods have been introduced. Few-shot semantic segmentation (FS3) methods have been proposed to recognize objects from unseen classes by utilizing a few annotated examples; however, these methods are limited to handling a single unseen class only. Zero-shot semantic segmentation (ZS3) methods have been developed recently to generate visual features by exploiting word embedding vectors in the case of zero training samples . However, the major drawback of ZS3 methods is their insufficient prediction ability to distinguish between the seen and the unseen classes even if both are included in a scene. This disadvantage is usually overcome by generalized ZS3 (GZS3), which recognizes both seen and unseen classes simultaneously. GZS3 studies mainly rely on generative-based methods. In GZS3 methods adopting generative approaches, feature extractor training is realized without considering semantic features, so that a bias towards the seen classes is introduced. Therefore, GZS3 methods result in performance reduction on unseen classes . 
Much of the recent work on ZS3 has involved approaches such as exploiting a joint embedding space to alleviate the seen-bias problem , analyzing performance across different domains , and incorporating spatial information .", "id": "27fe3c08-c148-46da-9550-40e08accd4e9", "level": "subsection", "origin_cites_number": 9, "parent_id": "511d59eb-d8b1-4b46-87e9-ff1fa938af77", "prefix_titles": [ [ "title", "A Survey on Deep Learning-based Architectures\\\\for Semantic Segmentation on 2D images" ], [ "section", "Future Scope and Potential Research Directions" ], [ "subsection", "Zero-/Few-Shot Learning" ] ], "subsections": [], "title": "Zero-/Few-Shot Learning" }, { "cite_extract_rate": 0.6875, "cites": [ 7066, 1790, 8496, 1787, 8494, 1788, 1786, 1789, 1785, 1784, 8495 ], "content": "Recent studies also rely on the use of synthetic large-scale image sets such as GTA5 and SYNTHIA because they alleviate the need for laborious pixel-level annotations. Although these richly-labeled synthetic images have the advantage of reducing the labeling cost, they also bring about domain shift while training with unlabeled real images. Therefore, applying domain adaptation for aligning the synthetic and the real image sets is of much importance . 
Unsupervised domain adaptation (UDA) methods are widely employed in semantic segmentation .", "id": "9962178a-f02b-44d6-b98a-ef8f2f5f18f2", "level": "subsection", "origin_cites_number": 16, "parent_id": "511d59eb-d8b1-4b46-87e9-ff1fa938af77", "prefix_titles": [ [ "title", "A Survey on Deep Learning-based Architectures\\\\for Semantic Segmentation on 2D images" ], [ "section", "Future Scope and Potential Research Directions" ], [ "subsection", "Domain Adaptation" ] ], "subsections": [], "title": "Domain Adaptation" }, { "cite_extract_rate": 0.625, "cites": [ 1793, 871, 7292, 505, 1792, 1794, 1791, 1795, 850, 687 ], "content": "Adopting compact and shallow model architectures and restricting the input to be low-resolution are innovations proposed very recently to overcome the computational burden of large-scale semantic segmentation. To choose a real-time semantic segmentation strategy, all aspects of an application should be considered, as all of these strategies, to some extent, decrease the model’s discriminative ability and lose information on object boundaries or small objects. 
Some other strategies have also been proposed for the retrieval of rich contextual information in real-time applications, including attention mechanisms , depth-wise separable convolutions , pyramid fusion , grouped convolutions , neural architecture search , and pipeline parallelism .", "id": "763bf75f-b1d6-40b7-8dca-a0c747e711f3", "level": "subsection", "origin_cites_number": 16, "parent_id": "511d59eb-d8b1-4b46-87e9-ff1fa938af77", "prefix_titles": [ [ "title", "A Survey on Deep Learning-based Architectures\\\\for Semantic Segmentation on 2D images" ], [ "section", "Future Scope and Potential Research Directions" ], [ "subsection", "Real-Time Processing" ] ], "subsections": [], "title": "Real-Time Processing" }, { "cite_extract_rate": 0.42857142857142805, "cites": [ 8497, 1796, 1797 ], "content": "Contextual information aggregation with the purpose of augmenting pixel representations in semantic segmentation architectures is another promising research direction in recent years. In this aspect, mining contextual information , exploring context information on spatial and channel dimensions , focusing on object-based contextual representations and capturing the global contextual information for fine-resolution remote sensing imagery are some of the recent studies. 
Alternative methods for reducing the need for dense pixel-level annotations in semantic segmentation have been described, which are based on a pixel-wise contrastive loss .", "id": "48b1b56f-3ed3-4918-bd61-df9ea4de0d3d", "level": "subsection", "origin_cites_number": 7, "parent_id": "511d59eb-d8b1-4b46-87e9-ff1fa938af77", "prefix_titles": [ [ "title", "A Survey on Deep Learning-based Architectures\\\\for Semantic Segmentation on 2D images" ], [ "section", "Future Scope and Potential Research Directions" ], [ "subsection", "Contextual Information" ] ], "subsections": [], "title": "Contextual Information" }, { "cite_extract_rate": 0, "cites": [], "content": "In this survey, we aimed at reviewing the current developments in the literature regarding deep learning-based 2D image semantic segmentation. We commenced with an analysis of the public image sets and leaderboards for 2D semantic segmentation and then continued by providing an overview of the techniques for performance evaluation. Following this introduction, our focus shifted to the 10-year evolution seen in this field under three chronological titles, namely the pre- and early-deep learning era, the fully convolutional era, and the post-FCN era. After a technical analysis on the approaches of each period, we presented a table of methods spanning all three eras, with a brief summary of each technique that explicates their contribution to the field.\nIn our review, we paid particular attention to the key technical challenges of the 2D semantic segmentation problem, the deep learning-based solutions that were proposed, and how these solutions evolved as they shaped the advancements in the field. To this end, we observed that the fine-grained localisation of pixel labels is clearly the definitive challenge to the overall problem. Although the title may imply a more ‘local’ interest, the research published in this field evidently shows that it is the global context that determines the actual performance of a method. 
Thus, it is eminently conceivable why the literature is rich with approaches that attempt to bridge local information with a more global context, such as graphical models, context aggregating networks, recurrent approaches, and attention-based modules. It is also clear that efforts to close this local-global semantics gap at the pixel level will continue for the foreseeable future.\nAnother important revelation from this review has been the profound effect public challenges have had on the field. Academic and industrial groups alike are in a constant struggle to top these public leaderboards, which has an obvious effect of accelerating development in this field. Therefore, it would be prudent to promote or even contribute to creating similar public image sets and challenges dedicated to more specific subjects of the semantic segmentation problem, such as 2D medical images.\nConsidering the rapid and continuing development seen in this field, there is an irrefutable need for an update on the surveys regarding the semantic segmentation problem. However, we believe that the current survey may be considered as a milestone in measuring how much the field has progressed thus far, and where the future directions possibly lie.\n\\bibliographystyle{chicagoB} \n\\bibliography{ref} \n\\end{document}", "id": "c1996fe2-7e7b-4050-8dc9-c059ec3b7798", "level": "section", "origin_cites_number": 0, "parent_id": "1ea135c8-d69d-401d-8fff-aca1042236fb", "prefix_titles": [ [ "title", "A Survey on Deep Learning-based Architectures\\\\for Semantic Segmentation on 2D images" ], [ "section", "Conclusions" ] ], "subsections": [], "title": "Conclusions" } ]
60
[ 1725, 1728, 1726, 976, 8490, 1727, 1729, 810, 486, 1732, 1730, 1731, 1733, 509, 1734, 1735, 7501, 1736, 1737, 1739, 1738, 1740, 1741, 97, 305, 8429, 1742, 1743, 1744, 8491, 1745, 7063, 38, 8492, 1748, 6977, 1746, 1747, 1223, 37, 1749, 1755, 1757, 1760, 802, 1759, 1761, 1751, 508, 7502, 850, 209, 520, 1764, 1769, 206, 7504, 1754, 1750, 1752, 1756, 1770, 737, 1758, 7503, 1763, 1762, 7505, 7064, 857, 1753, 1768, 1765, 1766, 1767, 1778, 1771, 1779, 7065, 1777, 1780, 6978, 1776, 1772, 1774, 1775, 1773, 8493, 1781, 1782, 1783, 7066, 1790, 8496, 1787, 8494, 1788, 1786, 1789, 1785, 1784, 8495, 1793, 871, 7292, 505, 1792, 1794, 1791, 1795, 687, 8497, 1796, 1797 ]
0.797468
[ "Morteza Hoseinzadeh" ]
A Survey on Tiering and Caching in High-Performance Storage Systems
2019
2019-04-25T19:57:31Z
cs.AR
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "1ecefa44-9351-47d2-bba8-a25e36cb7de9", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey on Tiering and Caching in High-Performance Storage Systems" ] ], "subsections": [ "e983a207-5495-4043-8eac-ff0d8c94ae4b", "e9eb50b3-29cf-4876-81e3-c5e453fcbe15", "ec34750d-daa4-4add-aad6-11ebfb9ff79b", "48bbe958-0c7f-46c3-8ab0-0715412da51d", "33f59b94-b159-448c-ae85-e7403c3c005d" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": ".}}\n\t\tWith the advancement of computing and networking technologies, especially around the Internet, and the emergence of a tremendous number of new data sources such as Internet of Things (IoT) endpoints, wearable devices, mobile platforms, smart vehicles, etc., enterprise data-intensive analytics input is now scaled up to petabytes and is predicted to exceed 44 zettabytes by 2020~. Concerning this rapid data expansion, hardware has been endeavoring to provide more capacity with higher density, supporting high-performance storage systems. Figure~\\ref{fig:types} represents available and emerging storage technologies as of today. In terms of storage technology, Hard Disk Drives (HDDs) are now being supplanted by fast, reliable Solid State Drives (SSDs). Additionally, once-emerging persistent memory devices are now becoming available in the market, as Intel launched Optane DIMMs~. Price-wise, when new technologies become available, dated technologies become cheaper. Nowadays, SSDs are so common that they are being used as All-Flash Arrays (AFA) in data centers~. However, storage IO is still the biggest bottleneck in large-scale data centers. As shown in~, the time consumed waiting for I/Os is the primary cause of idling and wasting CPU resources, since lots of popular cloud applications are I/O intensive, such as video streaming, file sync, backup, data iteration for machine learning, etc. 
\n\t\t\\begin{figure}[t]\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=3in]{drawings/types-eps-converted-to.pdf}\n\t\t\t\\caption{Memory technologies}\n\t\t\t\\label{fig:types}\n\t\t\\end{figure}\n\t\\begin{table*}[th] \n\t\t\\centering\n\t\t\\caption{Comparison of different storage technologies~}\n\t\t\\label{tbl:str}\n\t\t\\small\n\t\t\\begin{tabular}{|r|c|c|c|c|c|c|}\n\t\t\t\\hline \t\t\t& STT-RAM & DRAM & NVDIMM & Optane SSD$^\\dagger$ & NAND SSD$^\\ddagger$ & HDD \\\\ \\hline\n\t\t\tCapacity* \t\t& 100s of MBs & Up to 128GB & 100s of GBs & Up to 1TB & Up to 4TB & Up to 14TB \\\\ \\hline\n\t\t\tRead Lat. \t\t& $6ns$ & $10-20ns$ & $50ns$ & $9\\mu s$ & $35\\mu s$ & $10ms$ \\\\ \\hline\n\t\t\tWrite Lat. \t\t& $13ns$ & $10-20ns$ & $150ns$ & $30\\mu s$ & $68\\mu s$ & $10ms$ \\\\ \\hline\n\t\t\tPrice \t\t\t& \\$1-3K/GB & \\$7.6/GB & \\$3-13/GB & \\$1.30/GB & \\$0.38/GB & \\$0.03/GB \\\\ \\hline\n\t\t\tAddressability \t& Byte & Byte & Byte/Block & Block & Block & Block \\\\ \\hline\n\t\t\tVolatility \t\t& Non-Volatile & Volatile & Non-Volatile & Non-Volatile & Non-Volatile & Non-Volatile \\\\ \\hline\n\t\t\t\\multicolumn{7}{l}{\\tiny $^\\dagger$Intel Optane SSD 905P Series (960GB) (AIC PCIe x 4 3D~XPoint) ~~~~ $^\\ddagger$Samsung 960 Pro 1TB M.2 SSD with 48-layer 3D NAND (Source: Wikibon) ~~~~ *Per module} \n\t\t\\end{tabular}\n\t\\end{table*}\n\t\tTo solve the problem caused by I/O bottlenecks, parallel I/O to multiple HDDs in a Redundant Array of Independent Disks (RAID) becomes a common approach.\n\t\tHowever, the performance improvement from RAID is still limited. 
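To put the latencies in the comparison table into perspective, the following back-of-the-envelope sketch estimates how long a purely serial read of 1 GiB in 4 KiB blocks would take on each tier. The latency values are rounded from the table above; bandwidth, parallelism, and queueing are deliberately ignored, so this only illustrates the orders of magnitude separating the tiers:

```python
# Serial time to read 1 GiB in 4 KiB blocks, latency-bound only.
# Read latencies approximated from the comparison table (seconds).
READ_LATENCY_S = {
    "DRAM":       15e-9,   # 10-20 ns
    "NVDIMM":     50e-9,
    "Optane SSD": 9e-6,
    "NAND SSD":   35e-6,
    "HDD":        10e-3,
}

BLOCKS = (1 << 30) // (4 << 10)  # 1 GiB / 4 KiB = 262144 blocks

for device, lat in READ_LATENCY_S.items():
    total = BLOCKS * lat
    print(f"{device:>10}: {total:12.4f} s")
```

The gap spans roughly six orders of magnitude: milliseconds of total wait for DRAM versus well over half an hour for an HDD, which is exactly the gap that caching and tiering try to hide.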
Therefore, lots of big data applications, such as Apache Spark, strive to keep intermediate data in memory as much as possible.\n\t\tUnfortunately, memory is too expensive, and its capacity is limited (e.g., 64$\\sim$128GB per server), so it alone is not able to support super-scale cloud computing use cases.\n\t\tSome studies propose making use of NVM-based SSDs like 3D~XPoint Optane DIMM~ and PCM-based DIMMs~ instead of DRAM to provide high density and non-volatility. But these storage devices are not mature enough to be instantly used as the main memory and are still very expensive.\n\t\tCaching and tiering have long been used to hide the long latency of slow devices in the storage hierarchy. In the past, high-end HDDs such as 15k RPM drives were used as the performance tier and low-end HDDs such as 7200 RPM drives served as the capacity tier~. Today, NAND-Flash SSDs replace fast HDDs, and while low-end HDDs are obsolete, high-end HDDs are used for capacity requirements. Soon, modern storage technologies such as NVM will break with the past and change the storage semantics. At the device level, today's high-speed SSDs are equipped with a write buffer, as in Apple's Fusion Drive~. At the system level, almost all file systems come with a page cache, which buffers data pages in DRAM and lets applications access the contents of files. Using persistent memory as a storage medium, some file systems skip the page cache~.\n\t\tAt the application level, big data applications such as Apache Spark likewise strive to keep intermediate data in memory. However, NVM is not economically affordable to be used as a large enterprise storage system, and SSDs suffer from limited write endurance.\n\t\tIn this survey, we discuss several studies on caching and tiering solutions for high-performance storage systems. In section~\\ref{sec:bkgnd}, we give a short background of storage devices and their technologies. 
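The caching idea discussed above, a small fast tier absorbing repeated accesses to a large slow tier, can be sketched with a minimal LRU read cache. This is purely illustrative and mirrors no particular system's design; the class name, the tiny capacity, and the toy backend are all invented:

```python
from collections import OrderedDict

class LRUBlockCache:
    """Minimal read cache for a slow block device (illustrative only)."""

    def __init__(self, backend_read, capacity_blocks):
        self.backend_read = backend_read      # function: block_id -> bytes
        self.capacity = capacity_blocks
        self.cache = OrderedDict()            # block_id -> data, in LRU order
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)  # mark as most-recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.backend_read(block_id)    # slow-tier access
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least-recently used
        return data

# Toy usage: a "disk" that just fabricates block contents.
disk = lambda block_id: b"x" * 8
c = LRUBlockCache(disk, capacity_blocks=2)
for b in [1, 2, 1, 3, 2]:                     # access pattern
    c.read(b)
print(c.hits, c.misses)                       # 1 hit (re-read of 1), 4 misses
```

Real systems layer policies on top of this skeleton (write-back vs. write-through, prefetching, admission control), but the hit/miss mechanics are the same.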
Section~\\ref{sec:caching} investigates several research studies on caching solutions, followed by section~\\ref{sec:tiering}, which discusses several papers on storage tiering solutions. At the end of section~\\ref{sec:tiering}, we briefly introduce Ziggurat, which was developed in our group. Finally, section~\\ref{sec:conc} concludes the paper.", "id": "e983a207-5495-4043-8eac-ff0d8c94ae4b", "level": "section", "origin_cites_number": 0, "parent_id": "1ecefa44-9351-47d2-bba8-a25e36cb7de9", "prefix_titles": [ [ "title", "A Survey on Tiering and Caching in High-Performance Storage Systems" ], [ "section", "Introduction\\protect\\footnote{Parts of this section is taken from my published papers~\\cite{yang2017autotiering,hoseinzadeh2014reducing,shengan2019ziggurat" ] ], "subsections": [], "title": "Introduction\\protect\\footnote{Parts of this section is taken from my published papers~\\cite{yang2017autotiering,hoseinzadeh2014reducing,shengan2019ziggurat" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:bkgnd}\n This section briefly covers background information on the individual technology parts in the computer memory hierarchy. We also discuss the counterpart pieces of hardware and software required for networking them together.", "id": "e9eb50b3-29cf-4876-81e3-c5e453fcbe15", "level": "section", "origin_cites_number": 0, "parent_id": "1ecefa44-9351-47d2-bba8-a25e36cb7de9", "prefix_titles": [ [ "title", "A Survey on Tiering and Caching in High-Performance Storage Systems" ], [ "section", "Background" ] ], "subsections": [ "cd769e28-63bd-4aef-8de5-ec2264a7621f", "fe470bda-e688-4e77-9a2f-715dd067006e" ], "title": "Background" }, { "cite_extract_rate": 0, "cites": [], "content": "Based on the response time, the memory hierarchy is designed to separate the computer storage into an organized multi-level structure aiming to enhance the overall performance and storage management. 
Different types of storage media are designated as levels according to their performance, capacity, and controlling technology. In general, the lower the level in the hierarchy, the lower its bandwidth and the larger its storage capacity. There are four primary levels in the hierarchy, as follows~.", "id": "cd769e28-63bd-4aef-8de5-ec2264a7621f", "level": "subsection", "origin_cites_number": 0, "parent_id": "e9eb50b3-29cf-4876-81e3-c5e453fcbe15", "prefix_titles": [ [ "title", "A Survey on Tiering and Caching in High-Performance Storage Systems" ], [ "section", "Background" ], [ "subsection", "Memory Hierarchy" ] ], "subsections": [ "db0b5b49-cc22-44b5-822a-84d2a80fb13c", "4682e2ff-2a12-41f8-b08e-1970c4038b6e", "56503cec-1920-4c49-a1ba-7736172a6448", "3a4794e2-ee60-438d-8b6f-8fff78ee2640" ], "title": "Memory Hierarchy" }, { "cite_extract_rate": 0, "cites": [], "content": "On-chip memory cells such as processor registers and caches fall into this level. To provide the highest performance, architects use storage technologies with the lowest response time, such as SRAM, Flip-Flops, or Latch buffers. Embedded DRAM is another technology which is used in some application specific integrated circuits (ASIC)~. In recent years, some emerging technologies such as spin-transfer torque random access memory (STT-RAM) have received attention for the last level cache~. They not only provide low response time, but they also offer high density and persistence.\n\t\t\tNotice that there are multiple sub-levels in this level of the memory hierarchy. The processor register file, which has the lowest possible latency, resides in the sub-level nearest to the processor, followed by multiple levels of caches (i.e., L1, L2, and so on). 
Although in a symmetric multi-processor (SMP) architecture caches may be private to or shared amongst the cores, they are still considered to be on the same level in the hierarchy.
Detailed information on the storage technologies can be found in~\\cref{sec:tech:mem}.", "id": "4682e2ff-2a12-41f8-b08e-1970c4038b6e", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "cd769e28-63bd-4aef-8de5-ec2264a7621f", "prefix_titles": [ [ "title", "A Survey on Tiering and Caching in High-Performance Storage Systems" ], [ "section", "Background" ], [ "subsection", "Memory Hierarchy" ], [ "subsubsection", "Main" ] ], "subsections": [], "title": "Main" }, { "cite_extract_rate": 0, "cites": [], "content": "The secondary storage or the on-line mass storage level is composed of persistent block devices to store massive data permanently. In contrast with the two levels above, the storage is not directly accessible by the processor. Instead, the storage media are connected to the processor via IO ports. Solid State Drives (SSD), Hard Disk Drives (HDD), and rotating optical devices are examples of secondary storage media. When a process is being executed, the processor submits an IO request to the block device via an IO BUS such as PCIe, IDE, or SATA in order to load a chunk of data (usually a block of 4KB) into a specific location in the main memory using the Direct Memory Access (DMA) feature.", "id": "56503cec-1920-4c49-a1ba-7736172a6448", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "cd769e28-63bd-4aef-8de5-ec2264a7621f", "prefix_titles": [ [ "title", "A Survey on Tiering and Caching in High-Performance Storage Systems" ], [ "section", "Background" ], [ "subsection", "Memory Hierarchy" ], [ "subsubsection", "Secondary Storage" ] ], "subsections": [], "title": "Secondary Storage" }, { "cite_extract_rate": 0, "cites": [], "content": "The tertiary storage or off-line bulk storage includes any kinds of removable storage devices. If accessing the data is under control of the processing unit, it is called tertiary storage or near-line storage. For example, a robotic mechanism mounts and dismounts removable devices on demand. 
Otherwise, when a user physically attaches and detaches the storage media, it is called off-line storage. Some storage classifications distinguish tertiary storage from off-line storage; however, we consider them identical in this paper. The rest of this section will discuss the most relevant technologies and their characteristics.
Table~\\ref{tbl:str} compares different computer storage technologies.", "id": "fe470bda-e688-4e77-9a2f-715dd067006e", "level": "subsection", "origin_cites_number": 0, "parent_id": "e9eb50b3-29cf-4876-81e3-c5e453fcbe15", "prefix_titles": [ [ "title", "A Survey on Tiering and Caching in High-Performance Storage Systems" ], [ "section", "Background" ], [ "subsection", "Technology" ] ], "subsections": [ "e8f0d7cf-3e68-440d-87de-e342765d24cd", "610f9a10-4c5c-4b8f-9c69-d8eea425164a", "e4151375-a332-4841-b81c-dc6369fc479c" ], "title": "Technology" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:tech:mem}\n SRAM and DRAM have been long known as the primary technologies served as the processor's internal cache and the system's main memory, respectively. Due to the nature of an SRAM cell, it can retain information in no time. An SRAM cell is composed of two back-to-back inverters. In its standby state, these two inverters keep reinforcing each other as long as they are supplied. One of them represents bit data, and the other one corresponds to the inverted value of the bit data. While reading, a sense amplifier reads the output ports of the inverters and find which one has a higher voltage and determines the stored value. Although SRAM is almost as fast as a logic gate circuit, its density is too low as its electronic structure is made of at least four transistors. Additionally, it is CMOS compatible, so, integrating SRAM cells in the processor's die is possible. On the other hand, a DRAM cell comprises only one transistor and a capacitor. In contrast with SRAM which statically keeps data, DRAM requires refreshing the data due to the charge leakage nature of the capacitor. The density of DRAM is much higher than SRAM, but it is not CMOS compatible. So, integrating DRAM in the processor's die is not easy. Also, it requires larger peripheral circuitry for read and write operations. 
Since reading from a DRAM cell is destructive, a write must follow each read to restore the data. Overall, the higher capacity and lower cost of DRAM have made it the best candidate for primary memory so far. However, DRAM faces a scaling wall because it uses electric charge in capacitors to maintain data: as the technology scales down, not only does the reliability of a capacitor drop dramatically, but cell-to-cell interference also arises. The active power consumption of the refresh overhead is yet another challenging issue.
There is a body of research focusing on addressing these issues~.\n\t\t\tNotwithstanding, PCM is one of the best options to be used as a storage class memory technology, and solid-state drives. Table~\\ref{tbl:str} shows the beneficiary of PCM and 3D~XPoint devices over NVMe driver. Connecting to the memory bus, they provide near DRAM performance while having a large capacity of a storage device. This type of memory technology is recognized as Storage Class Memory (SCM) which can be categorized as memory type (M-SCM, Persistent Memory, or NVM) with fast access latency and low capacity (as in 3D~XPoint DIMM), or storage-type (S-SCM) with high capacity and low access latency (as in Optane SSD, see section~\\ref{sec:tech:store})~.", "id": "e8f0d7cf-3e68-440d-87de-e342765d24cd", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "fe470bda-e688-4e77-9a2f-715dd067006e", "prefix_titles": [ [ "title", "A Survey on Tiering and Caching in High-Performance Storage Systems" ], [ "section", "Background" ], [ "subsection", "Technology" ], [ "subsubsection", "Memory Technology" ] ], "subsections": [], "title": "Memory Technology" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:tech:store}\n\t\t\tBesides the internal and the main memory, permanent data should reside in some storage device to be accessed on demand. For a long time, Hard Disk Drives (HDD) have been playing this role. An HDD consists of rigid rapidly rotating disks held around a spindle and a head which relocates using an actuator arm. Digital data is stored in the form of transitions in magnetization of a thin film of magnetic material on each disk. The electromechanical aspect of HDD and the serialization of the stored data make HDD orders of magnitude slower than the mentioned non-volatile memory technologies. However, its low price and extremely high density make it a good candidate for secondary and tertiary storage levels. 
According to Table~\ref{tbl:str}, the capacity of an HDD can be 1000x larger than that of DRAM, while its operational latency is roughly $10^6$ times higher.
Intel also announced a 3D~XPoint DIMM form factor, which provides memory-bus bandwidth for non-volatile storage.
The IDC report~ predicts that, doubling every two years, the size of the digital universe might exceed 44 zettabytes (one zettabyte being $2^{70}$ bytes) by 2020. This tremendously extensive information is not touched uniformly. While the cloud will touch only 24\% of the digital universe by 2020, 13\% will be stored in the cloud, and 63\% may not be touched at all~. The share of data that requires protection, already more than 40\%, is growing even faster than the digital universe itself. Consequently, the majority of data usually resides on cheaper, more reliable, and larger devices, while the smaller portion that is still being processed is kept on fast storage media. Therefore, a hybrid storage system with a caching/tiering mechanism is undoubtedly required.
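The doubling arithmetic behind such forecasts is simple compound growth. A minimal sketch follows; the 2013 baseline of roughly 4.4 ZB is an assumption taken from the same family of IDC forecasts, used here only to show that the numbers are self-consistent:

```python
def projected_size(base_zb, base_year, target_year, doubling_years=2.0):
    """Compound growth: the size doubles every `doubling_years` years."""
    return base_zb * 2 ** ((target_year - base_year) / doubling_years)

# Assuming a ~4.4 ZB digital universe in 2013 and doubling every two
# years, the projection lands in the tens of zettabytes by 2020,
# consistent with the ~44 ZB figure cited above.
size_2020 = projected_size(4.4, 2013, 2020)
```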
The distribution of data among multiple devices is determined by the caching policy, such as read-only, write-back, etc.
In this section, we discuss several storage caching techniques, including the use of either SSD or SCM as a storage cache.
The \\textit{write-through} policy in SSD caches is obsolete as it is designed for volatile caches in which dirty blocks should be persisted at some point. In case of using the \\textit{read-only} or \\textit{write-around} policy, all new write operations are performed by \\circled{8} directly in HDD. Based on the caching policy, data may flow through these paths.", "id": "e1454a41-7a0e-4899-9148-a38e33dc2620", "level": "subsection", "origin_cites_number": 0, "parent_id": "ec34750d-daa4-4add-aad6-11ebfb9ff79b", "prefix_titles": [ [ "title", "A Survey on Tiering and Caching in High-Performance Storage Systems" ], [ "section", "Storage Caching Solutions" ], [ "subsection", "SSD as a Cache" ] ], "subsections": [ "d6e51808-e689-4162-94bc-b546b14c63c7", "09eecdb1-6e41-4faf-a8a6-837a80a99f86", "e166b207-a7b1-44f5-95ac-3d14231189dd", "9b3e45fa-4f1f-43b0-afc8-1d59fc6bab5d", "28c9d3ca-257a-4e0f-bfe9-e2cb7fc94be2" ], "title": "SSD as a Cache" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:ro}\n\t\tUpon arrival of a new write request in a read-only cache architecture~ where the accessing block is not located in SSD, the request is completed by successfully recording it to HDD via \\circled{8}. When it was already cached in SSD for priority read operations, the request is considered as completed only after updating the HDD copy of data and discarding the SSD copy, successfully. This kind of cache architecture helps the durability of the SSD device as the writing traffic to the SSD is limited to fetching data from HDD. Meanwhile, the cache space can be better utilized for reading operations, and it might improve the overall read performance which, unlike write operations, is on the critical path. However, the SSD lifespan is still vulnerable to the cache updating policy. 
If the data selection is not accurate enough, the cache may be polluted with unnecessary data, and a Garbage Collection (GC) process or a replacement mechanism must run to make space for in-demand data. This process incurs write overhead on the SSD and reduces its lifespan.
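One simple guard against such pollution is an admission filter that requires a block to prove its hotness before entering the flash cache. A minimal sketch of a demotion-count heuristic follows; the threshold and bookkeeping are illustrative assumptions, not any particular system's algorithm:

```python
class DemotionCounter:
    """Admit a block into the SSD tier only after it has been evicted
    (demoted) from the DRAM tier repeatedly -- a cheap hotness signal
    that keeps one-shot blocks from polluting the flash cache."""

    def __init__(self, threshold=2):
        self.threshold = threshold
        self.demotions = {}   # block addr -> times evicted from DRAM

    def on_dram_eviction(self, addr):
        self.demotions[addr] = self.demotions.get(addr, 0) + 1

    def admit_to_ssd(self, addr):
        return self.demotions.get(addr, 0) >= self.threshold
```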
Using SSD as an R/W cache to improve both read and write performance is very common~. In such architectures, new writes are performed in the SSD cache, as shown in figure~\ref{fig:ssd-cache}:\circled{6}, and written back to the disk later. Since two versions of the same data exist, a periodic flush operation usually runs to prevent data synchronization problems. Although an R/W SSD cache normally improves storage performance, when the cache is nearly full it triggers the GC process to clean invalid data, which may interfere with the main process and degrade performance. Meanwhile, if the workload is write-intensive with a small ratio of data reuse, the HDD may be under a heavy write load that denies the disk long idle periods. This wards off the SSD flushing process and imposes extra performance overhead on the system. However, since the SSD keeps data permanently, flushing all written data is not necessary, so a write-back cache policy can improve storage performance. Nevertheless, an occasional flush operation, at the cost of a small performance degradation, is required to guard against SSD failure. Furthermore, the SSD's limited write endurance is another issue, more problematic in R/W caches than in read-only caches. Notice that a random write in an SSD device is roughly tenfold slower than a sequential write and causes excessive internal fragmentation. Many algorithms~ and architectures~ have been designed to alleviate the write traffic and control the GC process in SSD caches.
A monitoring module in the kernel intercepts page-level operations and sends them to a dispatcher, a user-level daemon that detects random-access data and distributes the operations among the caches.
By tracing the IO traffic of each virtual disk and analyzing it periodically, vCacheShare optimally partitions the SSD cache among the virtual disks.
Additionally, since flash devices have limited write endurance, deduplication also delays wearing out the device by avoiding excessive writes of duplicate data.
The described system works at the block I/O level, referring to source block addresses, but it can also be used at the file system level with a (file handler, offset) tuple.
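The split Metadata Cache / Data Cache layout described above can be sketched as a toy content-addressed cache. This is a simplification for illustration only; the structures and names below are not CacheDedup's implementation:

```python
import hashlib

class ToyDedupCache:
    """Toy sketch of a deduplicated flash cache: many source addresses
    may map to one fingerprint, and each fingerprint maps to a single
    cached copy of the content."""

    def __init__(self):
        self.addr_to_fp = {}   # Metadata Cache: source addr -> fingerprint
        self.fp_to_data = {}   # Data Cache: fingerprint -> content
        self.flash_writes = 0  # writes actually sent to the flash device

    def write(self, addr, data):
        fp = hashlib.sha1(data).hexdigest()
        if fp not in self.fp_to_data:      # new content: one flash write
            self.fp_to_data[fp] = data
            self.flash_writes += 1
        self.addr_to_fp[addr] = fp         # duplicate content: metadata only

    def read(self, addr):
        return self.fp_to_data[self.addr_to_fp[addr]]
```

Writing the same content under a second source address costs only a metadata update, which is how deduplication both stretches the cache capacity and spares flash write endurance.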
Therefore, to benefit from the high capacity and low \$/GB of SMRs and the high performance of SSDs, a hybrid storage system may redirect all random writes to the SSD cache and leave the sequential writes to the SMR, as in~.\n\t\t\begin{figure}[h]\n\t\t\t\centering\n\t\t\t\includegraphics[width=3in]{drawings/shingle.pdf}\n\t\t\t\caption{A magnetic disk in a SMR device under operation of writing new data on a whole band.}\n\t\t\t\label{fig:shingle}\n\t\t\end{figure}\n\t\tThe writing mechanism is depicted in figure~\ref{fig:shingle}. In an SMR drive, writing to a magnetic \textit{track} partially overlaps a previously written track and makes it narrower to provide higher density. This is because of the physical limitations of the writing head, which is not as delicate as the reading head and leaves a wider trail while writing onto a magnetic disk. As one can imagine, a random write destroys the adjacent tracks, which must then all be rewritten by a read-modify-write (RMW) operation. In the SMR architecture, a \textit{band} is a group of consecutive tracks separated from adjacent bands by a narrow \textit{guard band}. A random write requires an RMW operation on a whole band. So, a persistent cache, which is either a flash buffer or some non-overlapped tracks on the magnetic disk, is used to buffer writes before writing them to the corresponding band. HS-BAS~ is a hybrid storage system based on the band-awareness of the SMR disk; it improves the performance of a Shingled Write Disk (SWD, or SMR disk) under sustained random writes by using an SSD as a write cache for the SWD. To make use of SWD devices in a RAID system, one study proposes three architectural models of using SSDs as caches for SWDs. With this option, a RAID system may provide more storage capacity at lower cost with the same or even slightly better performance. The Partially Open Region for Eviction (PORE)~ caching policy is another use of SSD as a cache for SMR devices.
It takes into account the SMR write amplification caused by a wide Logical Block Address (LBA) range, in addition to block popularity, when making replacement decisions. To put it simply, the SSD handles random writes and flushes them sequentially to the SMR device.
The advent of NVM technologies, as described in section~\ref{sec:tech:mem}, allows persistent data operations at near-DRAM latencies, which is an order of magnitude faster than SSD. A study~ on using NVM as an I/O cache for SSD or HDD reveals that current I/O caching solutions cannot fully benefit from the low latency and high throughput of NVM. Recent research has been trying to overcome the complexity of using NVM as a Direct Access (DAX) storage device and using it as a cache for SSD/HDD~.\n\t\tIn recent years, Intel provided the Persistent Memory Development Kit (PMDK)~, which offers several APIs to directly access persistent memory from user level.\n\t\tNVM Bankshot~ is a user-level library that exposes the NVM by implementing caching functions for applications and bypassing the kernel to lower the hit latency. However, the more recent PMDK outperforms Bankshot in many ways. \n\t\tMost NVM technologies can endure orders of magnitude more writes compared with NAND SSD, but their endurance is still limited. They also provide in-place byte-granularity updates, which are much faster than RMW operations in SSDs.
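The byte-granularity update just mentioned can be sketched as follows; a memory-mapped file stands in for DAX-mapped NVM, and `flush()` stands in for the cache-line flush plus fence that real code would issue (e.g., via PMDK's `pmem_persist`). This is an illustration, not production NVM code.

```python
# Sketch: update a single byte in place and persist it, with no block-size
# read-modify-write. A mmap'ed file simulates DAX-mapped NVM.
import mmap, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "nvm.img")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)            # one 4 KiB "NVM" region

with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 4096)
    m[100:101] = b"\x7f"               # in-place byte-granularity update
    m.flush()                          # stand-in for CLWB + fence / pmem_persist
    m.close()

with open(path, "rb") as f:
    data = f.read()                    # only the single byte changed
```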
With these features, most DRAM caching policies can be adapted into NVM-based caches, with modifications to carefully manage the write traffic. \n\t\tThe Hierarchical ARC (H-ARC)~ cache is an NVM-based cache that extends the ARC algorithm to take the four states of recency, frequency, dirty, and clean into account: it first splits the cache into dirty-page and clean-page caches and then splits each part into recency and frequency caches. Based on a mechanism similar to ARC (see section~\ref{sec:repl}), it adapts the sizes of each section hierarchically at each level. So, H-ARC keeps frequently accessed dirty pages longer in the cache.\n\t\tI/O-Cache~ also uses NVM as a buffer cache for HDDs and coalesces multiple dirty blocks into a single sequential write. This technique is also used in many other NVM-based designs~. \n\t\tTransactional NVM disk Cache (Tinca)~ aims to achieve crash consistency through transactional support while avoiding double writes by exploiting an NVM-based disk cache. Leveraging the byte-addressability feature of NVM, Tinca maintains fine-grained cache metadata to enable copy-on-write (COW) while writing a data block. Tinca also uses a role-switch method in which each block has a role and can be either a \textit{log block} in an ongoing committing transaction or a \textit{buffer block} in a completed transaction.
With the COW and role-switch mechanisms, Tinca supports a commit protocol that coalesces and writes multiple blocks in a single transaction.
\label{sec:repl}\n\t\tTo keep the most popular blocks in the cache, several general-purpose and domain-specific algorithms have been designed. In general, the majority of these algorithms are based on two empirical assumptions: temporal locality and skewed popularity~. The former assumes that recently used blocks are most likely going to be requested again shortly. The latter supposes that some blocks are accessed more frequently than others. Accordingly, the well-known mechanisms of Least-Recently-Used (LRU) and Least-Frequently-Used (LFU) were created and are commonly used for data replacement in caches because of their simplicity and $O(1)$ overhead. Unlike CPUs, storage applications may not be interested in temporal locality, since the page cache in DRAM already manages locality adequately. Also, a simple search operation over the entire storage space may flush all popular blocks in the cache and replace them with seldom accessed ones.
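The scan problem can be reproduced with a minimal LRU sketch (illustrative Python, not taken from any of the surveyed systems): one pass over cold blocks is enough to flush the entire popular working set.

```python
from collections import OrderedDict

class LRUCache:
    """Plain LRU: no scan resistance."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()

    def access(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)       # hit: mark most recently used
        else:
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)  # evict least recently used
            self.blocks[block] = True

cache = LRUCache(capacity=4)
for b in ["hot1", "hot2", "hot3", "hot4"] * 3:   # popular working set
    cache.access(b)
for i in range(4):                               # one-time sequential scan
    cache.access("scan%d" % i)
# The scan evicted every popular block from the cache.
```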
Many more advanced algorithms, mostly general-purpose, have been proposed to address this issue.
The Frequency-Based Replacement (FBR)~ algorithm benefits from both the LRU and LFU algorithms. It keeps LRU ordering and decides primarily upon the frequency count of the blocks in a section. Its complexity ranges from $O(1)$ to $O(\log_2 n)$ according to the section size. Using the aggregation of recency information to recognize block-referencing behavior, Early Eviction LRU (EELRU)~ aims to provide an on-line adaptive replacement method for all reference patterns. It performs LRU unless many recently fetched blocks have just been evicted. In that case, a \textit{fallback} algorithm either evicts the LRU block or the $e^{th}$ MRU one, where $e$ is a pre-determined recency position. The Low Inter-reference Recency Set (LIRS)~ algorithm takes \textit{reuse distance} as a metric for dynamically ranking accessed blocks. It divides the cache into a Low Inter-reference Recency (LIR) set for the most highly ranked blocks and a High Inter-reference Recency (HIR) set for the other blocks. When an HIR block is accessed, it goes to the LIR set, and when the LIR set is full, its lowest ranked block turns into the highest ranked HIR block. With the aim of removing cold blocks quickly, 2Q~ uses one FIFO queue $A1_{in}$ and two LRU lists, $A1_{out}$ and $A_m$. A first-accessed block comes into $A1_{in}$, and upon eviction, it goes to $A1_{out}$.
Reusing the block promotes it to $A_{m}$. Similarly, the Multi-Queue~ algorithm uses multiple LRU queues $Q_0, ..., Q_{m-1}$, where the block lifetime in $Q_j$ is longer than in $Q_i$ ($i<j$) and a block is placed in $Q_i$ once it has been hit at least $2^i$ times. Adaptive Replacement Cache (ARC)~ divides the cache space into $T_1$ and $T_2$, where $T_1$ stores one-time accessed blocks whereas $T_2$ keeps the rest of the blocks. Two ghost caches $B_1$ and $B_2$ maintain the identifiers of evicted blocks from $T_1$ and $T_2$, respectively. Using $B_1$ and $B_2$, the sizes of $T_1$ and $T_2$ are dynamically adjusted by a dividing point $P$, which is tuned according to hit rates to balance between recency and frequency.
Based on the write performance of SSD and its lifetime issue, SSD caches usually consider two factors: 1) keeping dirty pages longer in the cache to avoid fetching a page more than once, and 2) avoiding cache space pollution with unpopular blocks.\n\t\tClean First LRU (CFLRU)~ splits the cache space into a \textit{clean-page} cache and a \textit{dirty-page} cache, and evicts only from the clean-page cache unless there is no clean page left. This basic algorithm tries to keep dirty pages longer in the cache, yet it does not skip one-time-access pages.\n\t\tLazy ARC (LARC)~ is designed explicitly for SSD caches to prevent write overheads and prolong the SSD lifespan. It filters the seldom accessed blocks and skips caching them.
Similar to 2Q and ARC, it exploits the fact that blocks that have recently been hit at least twice are more likely to be popular. It has a ghost cache to keep the identifiers of first-accessed blocks. If a block from the ghost cache is reaccessed, it is considered popular and placed in the cache. Since it prevents unnecessary writes to the SSD, it can also be categorized as a data hotness identification method. \n\t\tThe Second-level ARC (L2ARC)~ is also optimized for SSD caches, as it reduces the number of writes to the device. It has been used in the Solaris ZFS file system. It uses the SSD as the second-level cache of the in-DRAM ARC cache and periodically fills it with the most popular data contents of the DRAM cache.\n\t\tAt a large space overhead, SieveStore~ keeps miss-count information for every block in the storage system and only allows blocks with large miss counts to be cached in the SSD. Similar algorithms are used in some enterprise products such as Intel Turbo Memory~.\n\t\tSimilar to ARC, Duplication-aware ARC (D-ARC)~ consists of four LRU caches. D-ARC uses cache block contents, i.e., fingerprints, instead of addresses. Based on the high or low levels of the dirty ratio and the reference count of blocks, it partitions data into four groups and always evicts the least referenced and dirtiest cache blocks. Hence, the evicted block is most likely an unpopular one that is not going to be reused in the near future, and the SSD does not eject a block until it is no longer required. This reduces the write bandwidth to the SSD device and saves extra writes due to false evictions.\n\t\tTo the same end, the Expiration-Time Driven (ETD)~ cache algorithm delays a cache block's eviction until its expiration time; instead of updating the cache on a miss, it evicts a block when it expires and then chooses a replacement from a list of candidate blocks.\n\t\tD-LRU~ is a duplication-aware LRU which consists of two separate LRU policies.
First, it inserts the address $x$ in Metadata Cache using LRU. Second, the corresponding fingerprint of address $x$ is inserted into Data Cache using the other LRU.\n\t\tPORE~ is another domain-specific policy which is beneficial in SSD-SMR hybrid storage systems. It splits the SMR LBA range into \\textit{Open} and \\textit{Forbidden} regions. The replacement policy may only evict dirty blocks in the open region. The written back blocks are stored in the SMR write buffer or persistent cache for subsequent writing to the corresponding band. The open region is periodically changed to cover all dirty blocks across the SMR LBA range. This algorithm helps to avoid writing on random bands which significantly destroys the performance.", "id": "ad355f1f-5436-4a42-9386-25a3cee69ae9", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "68385ad9-48b8-4775-9956-a0c15312c131", "prefix_titles": [ [ "title", "A Survey on Tiering and Caching in High-Performance Storage Systems" ], [ "section", "Storage Caching Solutions" ], [ "subsection", "Cache Replacement Algorithms" ], [ "subsubsection", "Domain Specific Algorithms" ] ], "subsections": [], "title": "Domain Specific Algorithms" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:tiering}\n\t\tIn the past, high-end and low-end HDDs were used as the performance and the capacity tiers, respectively. Nowadays, many types of storage media with different characteristics and capacities are used in a multi-tiered storage system. The main difference between caching and tiering is that in a caching system, a copy of data is kept in the cache while in a tiering system, the original data migrates between multiple tiers via two operations of promotion and demotion. Data is classified based on the application needs and characteristics of available tiers, usually into hot and cold. The hot data resides in the performance tier leaving the cold data to stay in the capacity tier. 
Considering multiple factors such as randomness, transfer speed, etc., there might be more than two tiers.\n\t\t\\begin{figure}[h]\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=3in]{drawings/tiering-eps-converted-to.pdf}\n\t\t\t\\caption{General storage tiering phases}\n\t\t\t\\label{fig:tiering}\n\t\t\\end{figure}\n\t\tFigure~\\ref{fig:tiering} illustrates a general storage tiering mechanism which consists of four phases. In the data collection phase, the system gathers required information for decision making. \n\t\tThe application profile of IO pattern can be obtained either online or offline.\n\t\tAn online profiling module may collect IO information while the application is running at the cost of potential performance overhead. This mechanism is useful when there is a user involved, such as personal computers, or virtual environment cloud systems. An offline profiling module, on the other hand, obtains the application IO profile before it is running. This kind of profiling mechanism is suitable for cluster analytical applications in which no random parameter interferes with the IO path except the running applications which are predictable. Some other information such as application/system specifications can also be fed into the tiering algorithm by the user or the machine all at once.\n\t\tIn the analysis phase, the system evaluates several possible plans or models and generates a list of recommendations in the form of a solutions space. Some tiering algorithms may skip this phase by directly finding the answer with some analysis. The solution space consists of several estimations under different circumstances evaluated by a cost function or a performance model (e.g., running a particular application or the whole system under a particular distribution of data among the tiers). 
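The analysis phase can be illustrated by enumerating a tiny solution space and scoring each candidate plan with a cost function; all numbers and names below (per-access latencies, the one-chunk SSD capacity) are made-up assumptions for the sketch, not parameters of any surveyed system.

```python
# Enumerate candidate placement plans, estimate each one's cost, and keep
# the best. The cost model is a deliberately simple assumption:
#   cost(plan) = sum over chunks of accesses(chunk) * latency(tier(chunk))
from itertools import product

TIER_LATENCY_US = {"ssd": 100, "hdd": 5000}      # assumed per-access latency

def plan_cost(plan, access_counts):
    return sum(access_counts[c] * TIER_LATENCY_US[t] for c, t in plan.items())

def solution_space(chunks, tiers):
    # Every assignment of chunks to tiers is one candidate plan.
    for combo in product(tiers, repeat=len(chunks)):
        yield dict(zip(chunks, combo))

def feasible(plan, ssd_capacity=1):              # assumed: SSD holds one chunk
    return sum(1 for t in plan.values() if t == "ssd") <= ssd_capacity

access_counts = {"chunkA": 1000, "chunkB": 10}   # one hot chunk, one cold chunk
plans = [(plan_cost(p, access_counts), p)
         for p in solution_space(["chunkA", "chunkB"], ["ssd", "hdd"])
         if feasible(p)]
best_cost, best_plan = min(plans, key=lambda cp: cp[0])
```

The decision phase then only needs to sort (or take the minimum of) this scored list; here it places the hot chunk on the SSD and the cold one on the HDD.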
Each solution comes with a cost estimation which will later be used for decision making in the next phase.\n\t\tIn this phase, a sorting algorithm might suffice for deciding which migration plan is worth taking. According to the goals, the scores of each plan, and their costs, a tiering algorithm determines whether to migrate a chunk of data and in which direction.
A comprehensive study on available storage types is provided in . It compares the Micron all-PCM SSD prototype with an eMLC flash SSD regarding performance and evaluates it as a promising option for tiered storage systems. Using a simulation methodology with estimated/obtained performance characteristics of each device, it tests every possible combination of PCM SSD, eMLC SSD, and HDD. Although nowadays we have Optane SSDs available in the market from Intel and Micron, and we know that they offer much better performance than the out-of-date all-PCM SSD prototype, this paper assumes that the write operation of the PCM SSD is 3.5x slower than that of the eMLC SSD.
With this assumption, and having a very simple IOPS-based dynamic tiering algorithm, they show the benefits of using PCM SSD in a multi-tiered storage system across a variety of real-world workloads as an enterprise solution.\n\t\tOnline OS-level Data Tiering (OODT)~ efficiently distributes data among multiple tiers based on the access frequency, data type (metadata or user data), access pattern (random or sequential), and the read/write operation frequency. Using a weighted priority function, OODT sorts data chunks for each tier based on their degree of randomness, read ratio, and request type. OODT can interpret fixed-size requests (4KB); a larger request is broken into several small sub-requests that are treated independently by a module called the dispatcher. By using a mapping table, all data chunks can be tracked down to the tier number and the physical block index. To enable online migration, it obtains the statistics of the blocks and keeps them in an access table which is kept up-to-date by the dispatcher. The most important part of OODT, as of every other tiering scheme, is the priority computation (also referred to as scoring, sorting, or matching in other techniques), which determines the matching tier for each piece of data. Using a simple weighted linear formula with four inputs of $P_{access}$, $P_{random}$, $P_{read}$, and $P_{metadata}$, OODT calculates the priority for potential migrations.\n\t\tCloud Analytics Storage Tiering (CAST)~, as it sounds, is a storage tiering solution for data analytics applications in the cloud. With offline workload profiling, CAST builds job performance prediction models for each tenant on different cloud storage services. Then, it combines the obtained prediction models with the workload specifications and its goals to produce a cost-efficient data placement and storage provisioning plan.
They formulate the data placement and storage provisioning problem as a non-linear optimization problem in which they maximize the tenant's utility in terms of completion time and cost. An enhanced version of CAST, called CAST++, has also been proposed; it adds data reuse patterns and workflow awareness to CAST.\n\t\tBased on measured IOPS in a virtualization environment, AutoTiering~ dynamically migrates virtual disk files (VMDK) from one tier to another in an all-flash storage system. It uses a sampling mechanism to estimate the IOPS of running a VM on other tiers. Based on this measurement and the costs, it sorts all possible movements by their scores. For each VMDK, a daemon on the hypervisor collects the IO-related statistics, including the IOPS results of latency injection tests that resemble a slower tier, at the end of every sampling period. For simulating faster tiers, AutoTiering takes advantage of a linear regression model. If the IOPS does not change by slowing down the IO process, and there is a VM in the queue waiting for the performance tier, a demotion will take the VMDK to a capacity tier and let the other VMDK take over the performance tier by promoting it.\n\t\t\begin{figure*}[t]\n\t\t\t\centering\n\t\t\t\includegraphics[width=7in]{drawings/strata.pdf}\n\t\t\t\caption{Strata design~. LibFS directs writes to the \textit{update log} and serves the reads from the shared area.
File data cache is a read-only cache, containing data from SSD or HDD.}\n\t\t\t\label{fig:strata}\n\t\t\end{figure*}
NVMFS~ is a hybrid file system that improves random write operations on NAND-flash SSD by exploiting the byte-addressability of an auxiliary NVM device. The key feature of this file system is that it redirects small random IOs, including metadata and hot file data blocks, to NVM. This scheme helps to reduce the write traffic to the SSD and hence improves the SSD's durability. The technique is to transform random writes at the file system level into sequential writes at the SSD level: it groups data with the same update likelihood and submits a single large SSD write request.\n\t\tNVMFS comprises two LRU lists: dirty and clean. The dirty LRU list absorbs updates in the NVRAM. When a page is written back to the SSD device, it moves from the dirty list to the clean list. NVMFS dynamically adjusts the dirty and clean LRU lists. Once the NVRAM usage reaches 80\%, a background thread starts flushing data from the dirty list and moving it to the clean LRU list until the usage goes down to 50\%. NVMFS follows a non-overwrite policy on the SSD: it periodically cleans up internal fragmentation by integrating multiple partial extents into one and recycling the free space. \n\t\tThe authors explain file system consistency through 5 steps.
1: Check if the NVRAM usage is over 80\\%; 2: if so, group random small IOs from the dirty LRU list into large (512K) extents; 3: then, sequentially write the extent to SSD (better block erase at FTL); 4: insert the flushed pages into the clean LRU list; and finally 5: update metadata by recording the new data position within \\verb+page_info+ structure. Therefore, when a crash happens at any point, it can be recovered.\n\t\tTo prevent segment cleaning inconsistency, NVMFS exploits transactions during defragmentation, similar to the transactions in log-structured file systems. After choosing a candidate extent, it migrates the valid blocks of that to NVRAM, and then updates the corresponding \\textit{inodes}. When the inodes are updated, then it releases the space in SSD. Data will always be consistent even when a crash happens in the middle of the process.\n\t\tStrata~ is a multi-tiered file system which exploits NVM as the performance tier, and SSD/HDD as the capacity tiers.\n\t\tIt consists of two parts: KernFS and LibFS.\n\t\tTo fire up Strata, applications are required to be recompiled with LibFS which re-implements standard POSIX interface.\n\t\tOn the kernel side, KernFS should be running to grant the application access to the shared storage area which is a combination of NVM, SSD, and HDD. It uses the byte-addressability of the NVM to coalesce logs and migrate them to lower tiers to minimize write amplification. File data can only be allocated in NVM in Strata, and they can be migrated only from a faster tier to a slower one. The profiling granularity of Strata is a page, which increases the bookkeeping overhead and wastes the locality information of file accesses.\n\t\tStrata attains fast write operation by separating and delegating the tasks of logging and digesting to the user space and the kernel space, respectively. 
The KernFS grants LibFS direct access to the NVM for its own private \textit{update log} and to the \textit{shared area} for read-only operations, as shown in figure~\ref{fig:strata}. KernFS performs the digest operation in parallel via multiple threads. One benefit of this operation is that despite the randomness and small size of the initial writes to the update log, they can be coalesced and written sequentially to the shared area, which helps to minimize fragmentation and metadata overhead. This also enables efficient flash erasure and shingled write operations. \n\t\tFor crash consistency, LibFS works with a durable unit, the \textit{Strata transaction}, which provides ACID semantics to the application's update log. To implement this, Strata wraps every POSIX system call in one or multiple Strata transactions. Figure~\ref{fig:strata} represents the Strata design and the LibFS and KernFS components.
In a journaling file system like Ext4, the metadata updates are usually very small (e.g., an \textit{inode} size of 256B). Although modifying an \textit{inode} requires only a small write operation, due to the block-size operations of storage devices, a whole \textit{inode} block (e.g., 4K) would be replaced. In recent years, Non-Volatile Memories have attracted a lot of attention because they connect via the memory bus.
This feature means that the CPU may issue byte-level (cache-line-size) persistent updates.\n\t\tThe File System Metadata Accelerator (FSMAC)~ decouples the data and metadata I/O paths and uses NVM to store file system metadata due to its small access size and popularity. Metadata is permanently stored in NVM and, by default, never flushed back to the disk. All updates to the metadata are in-place updates. Not only would metadata be corrupted by a power failure in the middle of a metadata update operation, but the authors also argue that, because of the performance gap between NVM and a block device, data updates lag behind metadata updates, which become persistent in NVM as soon as they are applied. Since byte-size versioning is very complex and tedious to implement, and block-size versioning imposes write amplification and NVM space waste, FSMAC uses fine-grained versioning (\textit{inode}-size, i.e., 128 bytes) that can maintain consistency at reasonable implementation and space costs. \n\t\tTo address the write-ordering issue of data and metadata without destroying the performance gained from the byte-addressability of NVM, FSMAC uses a light-weight combination of fine-grained versioning and transactions. An original version of the metadata is created before updating it, so that the system can recover securely from a crash. It will be deleted only after the successful completion of the updating transaction, after which the whole file system is consistent.\n\t\tUsing this opportunity, C. Chen et al. proposed fine-grained metadata journaling on NVM~.
Although it is not directly a tiering or caching solution, using NVM to keep a part of the storage data is a kind of classification problem, which is fundamental in tiering approaches.\n\t\t\begin{figure}[h]\n\t\t\t\centering\n\t\t\t\includegraphics[width=3.5in]{drawings/fine-grnd.pdf}\n\t\t\t\caption{Fine-grained Metadata Journal Format on NVM~}\n\t\t\t\label{fig:fine-grnd}\n\t\t\end{figure}\n\t\tIn contrast to conventional journaling file systems, in which the modified \textit{inode} blocks in the DRAM page buffer are persisted to the disk in the form of transactions, in the NVM-based fine-grained journaling file system~, only the modified \textit{inodes} are linked together and persisted in the NVM (Figure~\ref{fig:fine-grnd}). Using cache flush instructions and memory fences, it provides the consistency of ordered writes. Instead of using large Descriptor and Commit (or Revoke) blocks, which are 8K in total, a new data structure, \verb+TxnInfo+, is introduced; it contains the number of modified \textit{inodes} in the list (\verb+Count+) and a \textit{Magic Number} for identifying \verb+TxnInfo+ at recovery time.\n\t\tThe journal area in NVM is a ring buffer with a head and a tail pointer.
Writing to it is composed of three steps: 1) \verb+memcpy+ the modified \textit{inodes} from DRAM to NVM; 2) flush the corresponding cache lines and issue a memory barrier; and 3) atomically update the tail pointer in the journal area in NVM using an atomic 8-byte write, flush its cache line, and issue a memory barrier.\n\t\t\begin{figure*}[t]\n\t\t\t\subfloat[The file structure of Ziggurat and basic migration\label{fig:mig_intro}]{\n\t\t\t\t\includegraphics[height=5.4cm]{mig_intro.pdf}\n\t\t\t}\n\t\t\t\subfloat[Group migration\label{fig:mig_range}]{\n\t\t\t\t\includegraphics[height=5.4cm]{mig_range.pdf}\n\t\t\t}\n\t\t\t\caption{\textbf{Migration mechanism of Ziggurat}~.\n\t\t\t\tZiggurat migrates file data between tiers using its basic migration and group migration mechanisms.\n\t\t\t\tThe blue arrows indicate data movement, while the black ones indicate pointers.}\n\t\t\t\label{fig:mig}\n\t\t\end{figure*}\n\t\tIn traditional journaling file systems, committing the \textit{Running Transaction}, which is a linked list of modified \textit{inode} blocks, is triggered by either a predefined timer or a predefined number of modified \textit{inode} blocks. In the fine-grained journaling, when a predefined timer is up, similar to traditional file systems, the committing process starts. The onset of this process is also controlled by the number of modified \textit{inodes}, because \verb+TxnInfo+ can hold the information of only a limited number of modified \textit{inodes}. The committing process begins with relinking all modified \textit{inodes} from the \textit{Running Transaction} to the \textit{Committing Transaction} so that the \textit{Running Transaction} can accept new modified \textit{inodes}. Then, all modified \textit{inodes} are \verb+memcpy+ed to NVM starting from the tail, and the \verb+TxnInfo+ is calculated afterwards. Thereafter, the corresponding cache lines are flushed, and a memory fence is issued.
Finally, the tail pointer is atomically updated, confirming that the transaction is committed. Notice that data stays consistent during this process, even with a crash happening in the middle, because the tail controls the visibility of the data. Compared to traditional journaling, this method reduces transaction writes by up to 99\%.\n\t\tTo prevent overly long journals, which deteriorate performance, file systems usually perform checkpointing periodically. The fine-grained journaling triggers checkpointing once every 10 minutes or upon 50\% utilization of the NVM. Like traditional journaling file systems, it takes the modified \textit{inode} block list and writes the blocks one after another. Then, it discards the journal in NVM by making the head and tail pointers equal. Recoverability is guaranteed because, if a crash happens before this point, the modified \textit{inodes} are still in NVM for recovery.\n\t\tThe recovery process in the fine-grained journaling starts from the tail in NVM and scans backward. It retrieves the corresponding \textit{inode} blocks to DRAM. The obsolete \textit{inode} blocks are brought up-to-date by applying the modified \textit{inodes} inside each block. After all \textit{inode} blocks are updated in DRAM, it flushes them back to the disk. Finally, it atomically makes the head and tail pointers identical.
Consistency is guaranteed in the same way as in the checkpointing process.", "id": "6323ff46-b792-476d-b8a9-1d78359d8168", "level": "subsection", "origin_cites_number": 0, "parent_id": "48bbe958-0c7f-46c3-8ab0-0715412da51d", "prefix_titles": [ [ "title", "A Survey on Tiering and Caching in High-Performance Storage Systems" ], [ "section", "Storage Tiering Solutions" ], [ "subsection", "NVM as a Metadata Tier" ] ], "subsections": [ "2588a5dd-48d2-4a5c-8c7a-b326d3aaa5c3" ], "title": "NVM as a Metadata Tier" }, { "cite_extract_rate": 0, "cites": [], "content": "}}\n\t\t\\label{sec:zig}\n\t\tZiggurat~ is a multi-tiered file system that spans NVM and disks, and it was developed in our research group. The paper was published in the proceedings of the 17th USENIX Conference on File and Storage Technologies (FAST '19). It is based on our well-known NVM-based file system, NOVA~. Ziggurat exploits the benefits of NVM through intelligent data placement during file writes and data migration. \n\t\tZiggurat includes two placement predictors that analyze the file write sequences and predict whether the incoming writes are both large and stable and whether updates to the file are likely to be synchronous.\n\t\tThen, it steers the incoming writes to the most suitable tier based on the prediction: writes to synchronously-updated files go to the NVM tier to minimize the synchronization overhead.\n\t\tSmall, random writes also go to the NVM tier to entirely avoid random writes to disk. \n\t\tThe remaining large sequential writes to asynchronously-updated files go to disk. Ziggurat seeks five principal design goals, which are as follows.\n\t\t\\textbf{Send writes to the most suitable tier}. Although NVM is the fastest tier in Ziggurat, file writes should not always go to NVM. \n\t\tNVM is best-suited for small updates (since small writes to disk are slow) and synchronous writes (since NVM has higher bandwidth and lower latency). 
\n\t\tHowever, for larger asynchronous writes, targeting disk is faster, since Ziggurat can buffer the data in DRAM more quickly than it can write to NVM, and the write to disk can occur in the background. \n\t\tZiggurat uses its synchronicity predictor to analyze the sequence of writes to each file and predict whether future accesses are likely to be synchronous (i.e., whether the application will call \\texttt{fsync} shortly).\n\t\t\\textbf{Only migrate cold data in cold files}. During migration, Ziggurat targets the cold portions of cold files. \n\t\tHot files and hot data in unevenly-accessed files remain in the faster tier.\n\t\tWhen the usage of the fast tier is above a threshold, Ziggurat selects files with the earliest average modification time to migrate. \n\t\tWithin each file, Ziggurat migrates blocks that are older than average, unless the whole file is cold (i.e., its modification time is not recent), in which case it migrates the entire file.\n\t\t\\textbf{High NVM space utilization}. Ziggurat fully utilizes NVM space to improve performance: it uses NVM to absorb synchronous writes, and it uses a dynamic migration threshold for NVM based on the read-write pattern of applications, so it makes the most of NVM to handle file reads and writes efficiently. We also implement reverse migration to migrate data from disk to NVM when running read-dominated workloads.\n\t\t\\textbf{Migrate file data in groups}. To maximize the write bandwidth of disks, Ziggurat performs migration to disks as sequentially as possible. The placement policy ensures that most small, random writes go to NVM. However, migrating these small write entries to disks directly would suffer from the poor random-access performance of drives. To make migration efficient, Ziggurat coalesces adjacent file data into large chunks for movement to exploit sequential disk bandwidth. \n\t\t\\textbf{High scalability}. 
Ziggurat extends NOVA’s per-CPU storage space allocators to include all the storage tiers. It also uses per-CPU migration and page cache write-back threads to improve scalability.\n\t\tFigure~\\ref{fig:mig_intro} shows the basic procedure of how Ziggurat migrates a write entry from NVM to disk. The first step is to allocate contiguous space on disk to hold the migrated data. Ziggurat then copies the data from NVM to disk. Next, it appends a new write entry to the inode log with the new location of the migrated data blocks. After that, it updates the log tail in NVM and the radix tree in DRAM. Finally, Ziggurat frees the old NVM blocks.\n\t\tFigure~\\ref{fig:mig_range} exhibits the steps of group migration, which avoids fine-grained migration to improve efficiency and maximize sequential bandwidth to disks. They are similar to migrating a single write entry. In step 1, it allocates large chunks of data blocks in the lower tier. In step 2, it copies multiple pages to the lower tier with a single sequential write. After that, it appends the log entry and updates the \\textit{inode} log tail, which commits the group migration. The old pages and logs are freed afterward. Ideally, the group migration size (the granularity of group migration) should be set close to the future I/O size, so that applications can fetch file data with one sequential read from disk. 
Also, it should not exceed the CPU cache size, to maximize the performance of loading the write entries from disks.\n\t\tIn a nutshell, Ziggurat bridges the gap between disk-based storage and NVM-based storage and provides high performance and large capacity to applications.", "id": "2588a5dd-48d2-4a5c-8c7a-b326d3aaa5c3", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "6323ff46-b792-476d-b8a9-1d78359d8168", "prefix_titles": [ [ "title", "A Survey on Tiering and Caching in High-Performance Storage Systems" ], [ "section", "Storage Tiering Solutions" ], [ "subsection", "NVM as a Metadata Tier" ], [ "subsubsection", "Our Multi-Tiered File System: Ziggurat\\protect\\footnote{Parts of this section is taken from the original paper accepted in FAST'19~\\cite{shengan2019ziggurat" ] ], "subsections": [], "title": "Our Multi-Tiered File System: Ziggurat\\protect\\footnote{Parts of this section is taken from the original paper accepted in FAST'19~\\cite{shengan2019ziggurat" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:conc}\n\t\tThe diversity in storage technologies and their different characteristics make each of them individually suitable for a particular set of storage needs. On the software side, the ever-expanding cloud of digital information requires large-scale enterprise data servers with high-performance storage systems. While old, well-designed storage technologies like HDDs provide large space and high density at relatively low cost, new technologies such as SSD and NVM offer super-fast and reliable I/O at a much higher cost. The general desire is to have the high performance of the new technologies together with high storage capacities and low costs. Since the pace of processor development is much higher than that of storage technology development, software solutions such as caching and tiering have attracted experts' attention as a means to overcome the aforementioned limitation. 
In this survey, we extensively investigated several caching and tiering solutions for high-performance storage systems. \n\t\tWe observed that although there are several caching and tiering proposals which use SSD as the performance tier, the young technology of NVM has not received enough attention for use in such systems. This is not unexpected, since the technology was developed only recently and the first products of this type were shipped to the market just a few months before this publication. Nevertheless, we also looked into some recent scientific papers on using NVM as a performance tier, and we also introduced Ziggurat, a multi-tiered file system using NVM as a performance tier to cover the long latencies of SSDs and HDDs.\n\t{\t\n\t\t\\normalsize \n\t\t\\bibliographystyle{acm}\n\t\t\\bibliography{references}\n\t}\n\\end{document}", "id": "33f59b94-b159-448c-ae85-e7403c3c005d", "level": "section", "origin_cites_number": 0, "parent_id": "1ecefa44-9351-47d2-bba8-a25e36cb7de9", "prefix_titles": [ [ "title", "A Survey on Tiering and Caching in High-Performance Storage Systems" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
61
[]
null
[ "Peng Xu", "Xiatian Zhu", "David A. Clifton" ]
Multimodal Learning with Transformers: \\ A Survey
2022
2022-06-13T21:36:09Z
cs.CV
Transformer is a promising neural network learner, and has achieved great success in various machine learning tasks. Thanks to the recent prevalence of multimodal applications and big data, Transformer-based multimodal learning has become a hot topic in AI research. This paper presents a comprehensive survey of Transformer techniques oriented at multimodal data. The main contents of this survey include: (1) a background of multimodal learning, Transformer ecosystem, and the multimodal big data era, {(2) a} {systematic} {review of \vani{} Transformer, Vision Transformer, and multimodal Transformers, from a geometrically topological perspective,} (3) a review of multimodal Transformer applications, via two important paradigms, \ie, for multimodal pretraining and for specific multimodal tasks, (4) a summary of the common challenges and designs shared by the multimodal Transformer models and applications, and (5) a discussion of open problems and potential research directions for the community.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "21139f68-9b71-4857-a410-85c84d8234d4", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ] ], "subsections": [ "58c254c5-72b9-4976-a493-e0824a0d96b5", "2e3f79e1-6ad8-40a4-ae3d-6d29a207351d", "af1c8fea-49d0-4414-b29b-01d4ab51cdd8", "5f41fbc9-6717-4317-8a00-37d207d6aced", "7fd39549-23f3-4dac-b3b1-996a2a33f076", "d60d1615-9fb4-46ed-9d46-853e64b9762a", "2d2b9adf-b8ba-46b9-b286-c1b7238138a2" ], "title": "root" }, { "cite_extract_rate": 0.847457627118644, "cites": [ 4765, 5208, 7905, 5219, 5221, 5233, 4862, 5222, 864, 5216, 7903, 5213, 5217, 5231, 5224, 7360, 7904, 4832, 7302, 5215, 4769, 5230, 5232, 5225, 5234, 5235, 38, 5218, 7907, 5220, 5227, 5214, 5223, 5226, 5210, 7298, 7040, 768, 5229, 7, 7906, 5212, 7908, 7078, 1639, 5209, 1030, 732, 5228, 5211 ], "content": "\\label{sec:introduction}}\nThe initial inspiration of Artificial Intelligence (AI)\nis {to imitate human perception}, \\eg, seeing, hearing, touching, smelling.\nIn general, a {modality} is often associated with a specific sensor that creates a unique communication channel, such as vision and language .\nIn humans,\na fundamental mechanism in our sensory perception is the ability to leverage multiple modalities of perception data collectively in order to engage ourselves properly with the world under dynamic unconstrained circumstances, with each modality serving as a distinct information source characterized by different statistical properties.\nFor example, an image gives the visual appearance of an ``elephants playing in water'' scene via thousands of pixels, whilst the corresponding text describes this moment with a sentence using discrete words.\nFundamentally, a multimodal AI system needs to ingest, interpret, and reason about multimodal information sources to realize similar human level perception abilities.\n{Multimodal learning} (MML) is a 
general approach to building AI models that can extract and relate information from multimodal data .\nThis survey focuses on multimodal learning with Transformers (as demonstrated in Figure \\ref{fig:transformer}), inspired by their intrinsic advantages and scalability in modelling different modalities (\\eg, language, visual, auditory) and tasks (\\eg, language translation, image recognition, speech recognition) with fewer modality-specific architectural assumptions (\\eg, translation invariance and local grid attention bias in vision) .\nConcretely, the input to a Transformer could encompass one or multiple sequences of tokens, and each sequence's attribute (\\eg, the modality label, the sequential order),\nnaturally allowing for MML without architectural modification .\nFurther, learning per-modal specificity and inter-modal correlation \ncan be simply realized by controlling the input pattern of self-attention.\nCritically, there is a recent surge of research attempts and activities across distinct disciplines exploring the Transformer architectures, resulting in a large number of novel MML methods being developed in recent years, along with significant and diverse advances in various areas\n.\nThis calls for a timely review and summary of representative methods to enable researchers to understand the global picture of the MML field across related disciplines and more importantly to capture a holistic structured picture of current achievements as well as major challenges.\n\\keypoint{Taxonomy}\nFor better readability and reachability from and across different disciplines, \nwe adopt a {two-tier} structured taxonomy\nbased on the application and challenge dimensions respectively.\nThis has several benefits:\n(1) Researchers with expertise in specific applications\ncan find \nthose applications appropriate to their own research domain\nbefore connecting to other related domains.\n(2) Similar model designs and architectures developed in different domains\ncan be 
summarized in an abstract, formula-driven perspective so that the mathematical ideas of various models formed in different applications can be correlated and contrasted on common ground, crossing domain-specific restrictions.\nCrucially, our taxonomy offers an interesting stereo-view of individual works\nwith the insights in both application specificity and formulation generality.\nIt is hoped that this can help to break down domain boundaries\nand foster more effective idea communication and exchange across modalities.\nBy using the prompt modelling strategy as a basis for investigation, we also include \nthe classical classification problem (\\eg, image classification) -- usually regarded\nas a single modality learning application in conventional MML surveys --\nas a special MML application.\nThis has the potential to significantly enrich MML, as the classification problem\nis an AI topic amongst the most extensive studies in the literature .\n\\keypoint{Scope}\nThis survey will discuss the multimodality specific designs of Transformer architecture including, but not limited to, the following modalities:\nRGB image , depth image , \\blue{multispectral image }, video , audio/speech/music ,\ntable , scene graph/layout ,\npose skeleton , SQL , \nrecipe , programming language ,\nsign language ,\npoint cloud ,\nsymbolic knowledge (graph) , multimodal knowledge graph , sketch drawing , 3D object/scene , document , programming code and Abstract Syntax Tree (AST) -- a kind of graph , optical flow ,\n medical knowledge (\\eg, diagnosis code ontology ).\nNote that this survey will not discuss the multimodal papers where Transformer is used simply as the feature extractor without multimodal designs.\n\\keypoint{Related Surveys}\nWe relate this paper to existing surveys \nof the two specific dimensions MML and Transformers.\nThere exist a few MML surveys .\nIn particular, proposed a structured, acknowledged taxonomy by five challenges, which we also adopt\nas part of our 
structure.\nUnlike , and , which review general machine learning models,\nwe instead focus on Transformer architectures and their self-attention mechanisms.\nSeveral surveys dedicated to Transformers have been recently introduced, with a range of emphases including \ngeneral Transformers ,\nefficient designs ,\nvisualization ,\ncomputer vision tasks ,\nmedical imaging ,\nvideo tasks , and \nvision language pretraining .\nWhile consider MML,\ntheir reviews are somewhat limited in the scope, taxonomy, and coverage.\nTo our knowledge, only a few surveys on video-language pretraining (VLP) are relevant to MML.\nHowever, VLP is only a subdomain of MML.\nIn this survey, we focus solely on the intersection of multimodal learning and Transformers.\n\\begin{figure}[!t]\n\t\\centering\t\n\t\\includegraphics[width=0.5\\linewidth]{./figures/transformer.png}\n\t\\caption{Overview of Transformer .}\n\t\\label{fig:transformer}\n\\end{figure}\n\\keypoint{{Features}}\nTo our knowledge, this paper is the first comprehensive review of the state of Transformer based multimodal machine learning.\nThe major features of this survey include \n{\\textbf{(1)}} {We highlight that Transformers have the advantage that they can work in a modality-agnostic way. Thus, they are compatible with various modalities (and combinations of modalities). }\n{To support this view, we, for the first time, offer an understanding of the intrinsic traits of Transformers in a multimodal context from a geometrically topological perspective.}\nWe suggest that self-attention be treated as a graph style modelling, which models the input sequence (both uni-modal and multimodal) as a fully-connected graph. \nSpecifically, self-attention models the embedding of arbitrary tokens from an arbitrary modality as a graph node. 
\n{\\textbf{(2)}} We discuss the key components of Transformers in a multimodal context as mathematically as possible.\n{\\textbf{(3)}} Based on Transformers, cross-modal interactions (\\eg,\nfusion, alignment) are essentially processed by self-attention\nand its variants.\nIn this paper, we extract the mathematical essence and formulations of Transformer based MML practices, from the perspective of self-attention designs.\n\\keypoint{{Contributions}}\nHaving presented our review of the landscape of multimodal learning, Transformer ecosystem, and multimodal big data era in Section \\ref{sec:background},\nwe summarize our main contributions as the follows. \n{{\\textbf{(1)}} In Section \\ref{sec:multi-modal-transformer}, we present a \n{systematic} reviewing of \\vani{} Transformer, Vision Transformer, and multimodal Transformers, from a geometrically topological perspective.} \n{\\textbf{(2)}} We contribute a taxonomy for Transformer based MML from two complementary perspectives, \\ie, application based and challenge based.\nIn Section \\ref{sec:applications-and-representative-models},\nwe provide a review of multimodal Transformer applications, via\ntwo important paradigms, \\ie, for multimodal pretraining and for specific multimodal tasks.\nIn Section \\ref{sec:challenges-and-designs}, \nwe summarize the common challenges and designs shared by the various multimodal Transformer models and applications. 
\n{\\textbf{(3)}} In Section \\ref{sec:discussion-and-outlook}, we discuss current bottlenecks, existing problems, and potential research directions for Transformer based MML.", "id": "58c254c5-72b9-4976-a493-e0824a0d96b5", "level": "section", "origin_cites_number": 59, "parent_id": "21139f68-9b71-4857-a410-85c84d8234d4", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:background}", "id": "2e3f79e1-6ad8-40a4-ae3d-6d29a207351d", "level": "section", "origin_cites_number": 0, "parent_id": "21139f68-9b71-4857-a410-85c84d8234d4", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "Background" ] ], "subsections": [ "9a29f2bd-6dfd-4aa9-84a3-29ec53a79083", "1a8106cc-4753-45af-b6d5-2744346344ec", "b46863a5-1c8c-4ffc-bc75-abb3a937511e" ], "title": "Background" }, { "cite_extract_rate": 0.5483870967741931, "cites": [ 5221, 5237, 7565, 8885, 5241, 5240, 3003, 5235, 38, 9131, 7467, 5236, 5242, 5238, 1030, 5239, 1582 ], "content": "MML has been an important research area in recent decades; an early multimodal application --\naudio-visual speech recognition was studied in 1980s .\nMML is key to human societies.\nThe world we humans live in is a \\md{} environment, thus both our observations and behaviours are \\md{} . 
\nFor instance, an AI navigation robot \nneeds multimodal sensors to perceive the real-world environment , \\eg, camera, LiDAR, radar, ultrasonic, GNSS, HD Map, odometer.\nFurthermore, human behaviours, emotions, events, actions, and humour are multimodal, thus various human-centred MML tasks are widely studied, including \nmultimodal emotion recognition , multimodal event representation ,\nunderstanding multimodal humor ,\nface-body-voice based video person-clustering , \\etc.\nThanks to the development of the internet and a wide variety of intelligent devices in recent years, increasing amounts of multimodal data are being transmitted over the internet, thus an increasing number of multimodal application scenarios are emerging.\nIn modern life, we can see various multimodal applications, including commercial services (\\eg, e-commerce/commodity retrieval , vision-and-language navigation (VLN) ), communication (\\eg, lip reading , sign language translation ), human-computer interaction , healthcare AI , surveillance AI , \\etc. 
\nMoreover, in the era of Deep Learning, deep neural networks have greatly promoted the development of MML, and\nTransformers are a highly competitive architecture family, bringing new challenges and opportunities to MML.\n\\blue{In particular, the recent success of large language models and their multimodal derivatives further demonstrates the potential of Transformers in multimodal foundation models.}", "id": "9a29f2bd-6dfd-4aa9-84a3-29ec53a79083", "level": "subsection", "origin_cites_number": 31, "parent_id": "2e3f79e1-6ad8-40a4-ae3d-6d29a207351d", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "Background" ], [ "subsection", "Multimodal Learning (MML)" ] ], "subsections": [], "title": "Multimodal Learning (MML)" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 1275, 7080, 2514, 2023, 2012, 2011, 7070, 2017, 2015, 8886, 5248, 7535, 1278, 38, 1272, 5243, 7298, 768, 7, 5247, 1501, 5249, 2527, 9149, 5246, 1639, 7370, 7850, 732, 5244, 7590, 7909, 11, 5245, 1279 ], "content": "Transformers are emerging as promising learners.\n\\vani{} Transformer benefits from a self-attention mechanism, and is a breakthrough model for sequence-specific representation learning that was originally proposed for NLP, achieving the \\sota{} on various NLP tasks.\n{Following the great success of \\vani{} Transformer, a lot of derivative models have been proposed, \\eg, BERT , BART , GPT , \nLongformer ,\nTransformer-XL ,\nXLNet .}\nTransformers currently stand at the dominant position in NLP domains, and this motivates\nresearchers to apply Transformers to other modalities, such as visual domains.\nIn early attempts for the visual domain, the general pipeline \nis ``CNN features + standard Transformer encoder'', and researchers achieved BERT-style pretraining by preprocessing raw images: resizing them to a low resolution and reshaping them into a 1D sequence .\nVision Transformer (ViT) is a seminal work that contributes an end-to-end 
solution by applying the encoder of Transformer to images. \nBoth ViT and its variants have been widely applied to various computer vision tasks, including\nlow-level tasks , recognition , detection , segmentation , \\etc, and also work well for both supervised and self-supervised visual learning.\nMoreover, some recently-released works provide further theoretical understanding of ViT, \\eg, its internal representation robustness , the continuous behaviour of its latent representation propagation .\nMotivated by the great success of Transformer, \nVideoBERT is a breakthrough work, the first to extend Transformer to multimodal tasks. VideoBERT demonstrates the great potential of Transformer in a multimodal context.\nFollowing VideoBERT, a lot of Transformer based multimodal pretraining models (\\eg, ViLBERT , \nLXMERT , VisualBERT , VL-BERT , UNITER , CBT , Unicoder-VL , B2T2 , VLP , 12-in-1 , Oscar , Pixel-BERT , ActBERT , ImageBERT , HERO , UniVL ) have become research topics of increasing interest in the field of machine learning.\nIn 2021, CLIP \nwas proposed. 
It is a new milestone that uses \\md{} pretraining to convert classification into a retrieval task, which enables the pretrained models to tackle zero-shot recognition.\nThus, CLIP is a successful practice that makes full use of large-scale multimodal pretraining to enable zero-shot learning.\nRecently, the idea of CLIP has been studied further,\n\\eg, CLIP pretrained model based zero-shot semantic segmentation ,\nALIGN , CLIP-TD , ALBEF , and CoCa .
spatial and audio-visual question\nanswering dataset on $360^{\\circ}$ videos,\nYouTube-360 (YT-360) ($360^{\\circ}$ videos),\nAIST++ (a new multimodal dataset of 3D\ndance motion and music),\nArtemis (affective language for visual arts). In particular, MultiBench~ provides a dataset including 10 modalities. \n{\\textbf{(3)}}\n{More scenarios.} \nIn addition to common caption and QA datasets,\nmore applications and scenarios have been studied, \\eg, \nCIRR (real-life images), \nProduct1M ,\nBed and Breakfast (BnB) (vision-and-language navigation),\nM3A (financial dataset),\nX-World (autonomous drive). \n{\\textbf{(4)}}\n{Tasks are more difficult.} \nBeyond the straightforward tasks,\nmore abstract multimodal tasks are proposed,\n\\eg, MultiMET (a multimodal dataset for metaphor understanding),\nHateful Memes (hate speech in multimodal memes). \n{\\textbf{(5)}} {Instructional videos have become increasingly popular}, \\eg, cooking video YouCookII . Aligning a sequence of instructions to a video of someone carrying out a task is an example of a powerful pretraining pretext task .\n\\blue{Pretext tasks are pre-designed problems to force the models to learn representation by solving them.} \n\\input{tex/datasets}\nSimilar to other deep neural network architectures, Transformers are also data hungry.\nTherefore, their high-capacity models and multimodal big data basis co-create the prosperity of the Transformer based multimodal machine learning.\nFor instance, big data bring zero-shot learning capability to VLP Transformer models.", "id": "b46863a5-1c8c-4ffc-bc75-abb3a937511e", "level": "subsection", "origin_cites_number": 37, "parent_id": "2e3f79e1-6ad8-40a4-ae3d-6d29a207351d", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "Background" ], [ "subsection", "\\MM{" ] ], "subsections": [], "title": "\\MM{" }, { "cite_extract_rate": 1, "cites": [ 732, 5272, 5273, 38 ], "content": 
"}\n\\label{sec:multi-modal-transformer}\nIn this section, we use mathematical formulations to review the key techniques of \\vani{} Transformer , \nVision Transformer , and multimodal Transformers \\footnote{In this survey, ``multimodal Transformer'' means ``Transformer in multimodal learning context''.}, including tokenized inputs, self-attention, multi-head attention, basic Transformer layers/blocks, \\etc.\nWe highlight that \\vani{} Transformers can be understood from a geometrically topological perspective , because, thanks to the self-attention mechanism, given each tokenized input from any modality, \\vani{} self-attention (Transformer) can model it as a fully-connected graph in topological geometry space .\nCompared with other deep networks (for instance, CNN is restricted to the aligned grid spaces/matrices),\nTransformers intrinsically have a more general and flexible modelling space.\nThis is a notable advantage of Transformers for multimodal tasks.\nSections \\ref{sec:vanilla-transformer}, \\ref{sec:vit}, and \\ref{sec:transformer-in-multimodal-context} will review the key designs of \\vani{} Transformer, Vision Transformer, and multimodal Transformers, respectively.", "id": "af1c8fea-49d0-4414-b29b-01d4ab51cdd8", "level": "section", "origin_cites_number": 4, "parent_id": "21139f68-9b71-4857-a410-85c84d8234d4", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "{Transformers" ] ], "subsections": [ "7cb80339-995d-4e4d-b955-76baae247a0f", "27e5d64b-d25c-4141-9328-b27fa5837922", "b140ace7-aa64-4e9e-a5a9-29d3cb6942b7" ], "title": "{Transformers" }, { "cite_extract_rate": 1, "cites": [ 97, 1511, 57, 71 ], "content": "Transformer}\n\\label{sec:vanilla-transformer}\n\\input{tex/transformer}\n\\vani{} Transformer has an encoder-decoder structure and is the origin of the Transformer-based research field.\nIt takes tokenized input (see Section \\ref{sec:tokenized-input}).\nBoth its encoder and decoder are stacked 
by the Transformer layers/blocks,\nas demonstrated in Figure \\ref{fig:transformer}.\nEach block has two sub-layers, \\ie, a multi-head self-attention (MHSA) layer (see Section \\ref{sec:self-attention-and-multi-head-attention}) and a position-wise fully-connected feed-forward network (FFN) (see Section \\ref{sec:ffn}).\nTo help the back propagation of the gradient, both MHSA and FFN use Residual Connection (given an input $x$, the residual connection of any mapping $f(\\cdot)$ is defined as $x \\gets f(x) + x$), followed by normalization layer.\nThus, assuming that the input tensor is $\\mathbf{Z}$, the output of MHSA and FFN sub-layers can be formulated as:\n\\begin{equation}\n\\label{eq:mhsa-and-ffn}\n \\mathbf{Z} \\gets N ( {sublayer} (\\mathbf{Z}) + \\mathbf{Z}),\n\\end{equation}\nwhere ${sublayer}(\\cdot)$ is the mapping implemented by the sub-layer\nitself and $N(\\cdot)$ denotes normalization, \\eg, $BN(\\cdot)$ , $LN(\\cdot)$ .\n\\keypoint{Discussion}\n\\magenta{There is an important unsolved problem that is post-normalization versus pre-normalization.} \nThe original \\vani{} Transformer uses post-normalization for each MHSA and FFN sub-layer.\nHowever, if we consider this from the mathematical perspective, pre-normalization makes more sense {.} \n{This is similar to the basic principle of the theory of matrix, that normalization should be performed before projection, \\eg, Gram–Schmidt process \\footnote{\\url{https://en.wikipedia.org/wiki/Gram\\%E2\\%80\\%93Schmidt_process}}.}\nThis problem should be studied further by both theoretical research and experimental validation.", "id": "7cb80339-995d-4e4d-b955-76baae247a0f", "level": "subsection", "origin_cites_number": 4, "parent_id": "af1c8fea-49d0-4414-b29b-01d4ab51cdd8", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "{Transformers" ], [ "subsection", "\\vani{" ] ], "subsections": [ "57521590-9cef-456e-9029-fb345788adf9", 
"44ba2e1d-5109-4df5-9aef-f3c3c66af41d", "048cfeab-9b3f-464f-9b5a-62792d05f734" ], "title": "\\vani{" }, { "cite_extract_rate": 1, "cites": [ 7853, 5274, 5275, 732, 7, 4826 ], "content": "\\label{sec:tokenized-input}\n\\keypoint{{Tokenization}}\n\\vani{} Transformer was originally proposed for machine translation as a sequence-to-sequence model, so it is straightforward for it to take vocabulary sequences as input.\nAs mentioned previously, the original self-attention can model an arbitrary input as a fully-connected graph, independently of modalities.\nSpecifically, both \\vani{} and variant Transformers take in tokenized sequences, where each token can be regarded as a node of the graph.\n\\keypoint{{Special/Customized Tokens}}\n{In Transformers,\nvarious special/customized tokens can be semantically defined as place-holders in the token sequences,\n\\eg, the mask token \\texttt{[MASK]} .\nSome common special tokens are summarized in the appendix. Special tokens can be used in both uni-modal and multimodal Transformers.}\n\\keypoint{{Position Embedding}}\n{ Position embeddings are added\nto the token embeddings to retain positional information .\nThe \\vani{} Transformer uses sine and cosine functions to produce position embeddings.\nTo date, various implementations of position embedding have been proposed.\nTheir concrete designs are outside the focus of this survey.\n}\n\\keypoint{Discussion}\n{The main advantages of input tokenization include the following:} \n{\\textbf{(1)}} Tokenization is a more general approach from a geometrically topological perspective, achieved by minimizing the constraints caused by different modalities. \n{In general, every modality has intrinsic constraints on modelling.\nFor instance, sentences have sequential structures that are well suited to RNNs, and photos are restricted to aligned grid matrices, for which CNNs work well.\nTokenization helps Transformers process different modalities universally, via irregular sparse structures. 
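As a toy, hedged illustration of this modality-agnostic view (all names, dimensions, and random projections below are hypothetical, not taken from any cited model), text and image inputs can both be reduced to plain token-embedding sequences that a Transformer treats identically:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # shared embedding width (a toy choice)

# Text modality: split into word tokens, then look up toy embeddings.
vocab = {"a": 0, "cat": 1, "sits": 2}
table = rng.normal(size=(len(vocab), d))
text_emb = table[[vocab[w] for w in ["a", "cat", "sits"]]]       # (3, d)

# Image modality: split a 4x4 "image" into four 2x2 patches and
# linearly project each flattened patch to the same width d.
image = rng.normal(size=(4, 4))
patches = np.stack([image[i:i + 2, j:j + 2].reshape(-1)
                    for i in (0, 2) for j in (0, 2)])            # (4, 4)
patch_proj = rng.normal(size=(patches.shape[1], d))
image_emb = patches @ patch_proj                                 # (4, d)

# Both modalities are now just (N, d) token sequences, i.e. sets of
# graph nodes, and can even be concatenated into one sequence.
multimodal_seq = np.concatenate([text_emb, image_emb], axis=0)
assert multimodal_seq.shape == (7, d)
```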
Thus even the \\vani{} Transformer can encode multimodal inputs flexibly via simple concatenation or weighted summation, without any tailor-made multimodal modifications. }\n{\\textbf{(2)}} Tokenization is a more flexible approach to organizing the input information via concatenation/stacking, weighted summation, \\etc.\nThe \\vani{} Transformer injects temporal information into the token embeddings by summing position embeddings.\nFor instance, when using a Transformer to model free-hand sketch drawing , each input token can integrate various drawing stroke patterns, \\eg, stroke coordinates, stroke ordering, pen state (start/end). \n{\\textbf{(3)}} {Tokenization is compatible with task-specific customized tokens, \\eg, the \\texttt{[MASK]} token for Masked Language Modelling, the \\texttt{[CLASS]} token for classification.} \n\\keypoint{Discussion} \n\\magenta{How to understand position embeddings in Transformers is an open problem.}\nThey can be understood as a kind of implicit coordinate basis of the feature space, providing temporal or spatial information to the Transformer.\nFor point clouds and sketch drawing strokes , each token element is already a coordinate, meaning that position embedding is optional, not necessary. 
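For concreteness, the sine/cosine position embedding of the \vani{} Transformer can be sketched as follows (a minimal NumPy version; the function name and toy dimensions are our own):

```python
import numpy as np

def sinusoid_position_embedding(n_tokens: int, d: int) -> np.ndarray:
    """Sine/cosine position embedding of the vanilla Transformer:
    PE[pos, 2i]   = sin(pos / 10000^(2i/d))
    PE[pos, 2i+1] = cos(pos / 10000^(2i/d))
    """
    pos = np.arange(n_tokens)[:, None]         # (N, 1)
    i = np.arange(0, d, 2)[None, :]            # (1, d/2): even dimensions
    angles = pos / np.power(10000.0, i / d)    # (N, d/2)
    pe = np.zeros((n_tokens, d))
    pe[:, 0::2] = np.sin(angles)               # even dims get sine
    pe[:, 1::2] = np.cos(angles)               # odd dims get cosine
    return pe

pe = sinusoid_position_embedding(n_tokens=5, d=8)
assert pe.shape == (5, 8)
# Position 0: all sine entries are 0 and all cosine entries are 1.
assert np.allclose(pe[0, 0::2], 0.0) and np.allclose(pe[0, 1::2], 1.0)
```

Token-wise summation with the token embeddings then injects the positional information, as described above.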
\nFurthermore, position embedding can be regarded as a kind of general additional information.\nIn other words, from a mathematical point of view, any additional information can be added in the same manner as position embedding, \\eg,\nthe pen state of a sketch drawing stroke ,\nor cameras and viewpoints in surveillance .\nThere is a comprehensive survey discussing the position information in Transformers.\nFor both sentence structures (sequential) and general graph structures (sparse, arbitrary, and irregular),\nposition embeddings help Transformers to learn or encode the underlying structures.\nConsidered from the mathematical perspective of self-attention, \\ie, scaled\ndot-product attention,\nattention is invariant to the positions of words (in text) or nodes (in graphs) if position embedding information is missing.\n\\blue{Thus, in most cases, position embedding is necessary for Transformers.}", "id": "57521590-9cef-456e-9029-fb345788adf9", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "7cb80339-995d-4e4d-b955-76baae247a0f", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "{Transformers" ], [ "subsection", "\\vani{" ], [ "subsubsection", "Input Tokenization" ] ], "subsections": [], "title": "Input Tokenization" }, { "cite_extract_rate": 0.75, "cites": [ 5275, 4764, 38, 5276, 5277, 5230 ], "content": "\\label{sec:self-attention-and-multi-head-attention}\nThe core component of the \\vani{} Transformer is the Self-Attention (SA) operation, also termed ``Scaled Dot-Product Attention''.\nAssume that\n$\\mathbf{X} = [\\mathbf{x}_1, \\mathbf{x}_2, \\cdots] \\in \\mathbb{R}^{N \\times d}$ is an input sequence of $N$ elements/tokens,\nand an optional preprocessing is positional encoding by point-wise summation $\\mathbf{Z} \\gets \\mathbf{X} \\oplus \\text{PE}$ or concatenation $\\mathbf{Z} \\gets concat( \\mathbf{X}, \\text{PE})$, where $\\text{PE}$ denotes the position embedding.\n\\keypoint{Self-Attention (SA)}\nAfter 
preprocessing, embedding $\\mathbf{Z}$ will\ngo through three projection matrices ($\\mathbf{W}^{Q} \\in \\mathbb{R}^{d \\times d_q}$, $\\mathbf{W}^{K} \\in \\mathbb{R}^{d \\times d_k}$, and $\\mathbf{W}^{V} \\in \\mathbb{R}^{d \\times d_v}$, $d_q = d_k$) to generate three embeddings $\\mathbf{Q}$ (Query), $\\mathbf{K}$ (Key), and $\\mathbf{V}$ (Value):\n\\begin{equation}\n\\label{eq:projecting}\n \\mathbf{Q} = \\mathbf{Z} \\mathbf{W}^{Q}, \\mathbf{K} = \\mathbf{Z} \\mathbf{W}^{K}, \\mathbf{V} = \\mathbf{Z} \\mathbf{W}^{V}.\n\\end{equation}\nThe output of self-attention is defined as\n\\begin{equation}\n\\label{eq:attention}\n \\mathbf{Z} = {SA} (\\mathbf{Q}, \\mathbf{K}, \\mathbf{V}) = {Softmax}\\left( \\frac{\\mathbf{Q} \\mathbf{K}^\\top}{\\sqrt{d_q}} \\right) \\mathbf{V}.\n\\end{equation}\nGiven an input sequence, self-attention allows each element to attend to all the other elements, so that self-attention encodes the input as a fully-connected graph. Therefore, the encoder of \\vani{} Transformer can be regarded as a fully-connected GNN encoder, \nand the Transformer family has the non-local ability of global perception, similar to the Non-Local Network .\n\\keypoint{Masked Self-Attention (MSA)}\nIn practice, a modification \nof self-attention is needed to help the decoder of the Transformer learn contextual dependence and to prevent positions from attending to subsequent positions:\n\\begin{equation}\n\\label{eq:masked-attention}\n \\mathbf{Z} = {MSA} (\\mathbf{Q}, \\mathbf{K}, \\mathbf{V}) = {Softmax}\\left( \\frac{\\mathbf{Q} \\mathbf{K}^\\top}{\\sqrt{d_q}} \\odot \\mathbf{M} \\right) \\mathbf{V},\n\\end{equation}\nwhere $\\mathbf{M}$ is a masking matrix.\nFor instance, in GPT , an upper triangular mask is applied to prevent look-ahead attention, so that each token can only attend to the past tokens.\nMasking can be used in both the encoder and decoder of a Transformer, and has flexible implementations, \\eg, 0-1 hard mask , soft mask .\nIn both uni-modal and multimodal 
practices,\nspecific masks are designed based on domain knowledge and prior knowledge.\nEssentially, MSA is used to inject additional knowledge into Transformer models, \\eg, . \n\\keypoint{Multi-Head Self-Attention (MHSA)}\nIn practice, multiple self-attention sub-layers can be stacked in parallel and their concatenated outputs are fused by a projection matrix $\\mathbf{W}$, to form a structure named Multi-Head Self-Attention:\n\\begin{equation}\n\\label{eq:multihead-attention}\n \\mathbf{Z} = {MHSA} (\\mathbf{Q}, \\mathbf{K}, \\mathbf{V}) = {concat} (\\mathbf{Z}_{1}, \\cdots, \\mathbf{Z}_{H}) \\textbf{W},\n\\end{equation}\nwhere each head $\\mathbf{Z}_{h} = {SA} (\\mathbf{Q}_{h}, \\mathbf{K}_{h}, \\mathbf{V}_{h})$ and $h \\in [1, H]$,\nand $\\textbf{W}$ is a linear projection matrix.\nThe idea behind MHSA is a form of ensembling.\nMHSA helps the model to jointly attend to information from multiple representation\nsub-spaces.", "id": "44ba2e1d-5109-4df5-9aef-f3c3c66af41d", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "7cb80339-995d-4e4d-b955-76baae247a0f", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "{Transformers" ], [ "subsection", "\\vani{" ], [ "subsubsection", "Self-Attention and Multi-Head Self-Attention" ] ], "subsections": [], "title": "Self-Attention and Multi-Head Self-Attention" }, { "cite_extract_rate": 0.5, "cites": [ 1512 ], "content": "\\label{sec:ffn}\nThe output of the multi-head attention sub-layer will go through the position-wise Feed-Forward Network (FFN), which consists of successive linear layers with non-linear activation. 
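Putting the preceding pieces together, the scaled dot-product attention, an optional causal mask, and the residual-plus-normalization rule can be sketched in NumPy as follows (single head for brevity; all weights are random toy values, not a faithful implementation of any cited model):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def self_attention(Z, Wq, Wk, Wv, mask=None):
    Q, K, V = Z @ Wq, Z @ Wk, Z @ Wv                 # Q/K/V projections
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # scaled dot-product
    if mask is not None:                             # masked self-attention
        scores = np.where(mask, scores, -1e9)
    return softmax(scores) @ V

def encoder_layer(Z, attn_w, W1, b1, W2, b2):
    # Post-norm sub-layers: Z <- N(sublayer(Z) + Z)
    Z = layer_norm(self_attention(Z, *attn_w) + Z)
    ffn = np.maximum(0, Z @ W1 + b1) @ W2 + b2       # two-layer FFN with ReLU
    return layer_norm(ffn + Z)

rng = np.random.default_rng(0)
N, d = 4, 8
Z = rng.normal(size=(N, d))
attn_w = [rng.normal(size=(d, d)) * 0.1 for _ in range(3)]
W1, b1 = rng.normal(size=(d, 2 * d)) * 0.1, np.zeros(2 * d)
W2, b2 = rng.normal(size=(2 * d, d)) * 0.1, np.zeros(d)
out = encoder_layer(Z, attn_w, W1, b1, W2, b2)
assert out.shape == (N, d)

# With a lower-triangular (causal) mask, token 0 can only attend to
# itself, so its output equals its own Value projection exactly.
causal = np.tril(np.ones((N, N), dtype=bool))
masked = self_attention(Z, *attn_w, mask=causal)
assert np.allclose(masked[0], (Z @ attn_w[2])[0])
```

Multi-head attention simply runs several such heads in parallel and fuses their concatenated outputs with one more projection matrix.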
\nFor instance, a two-layer FFN can be formulated as\n\\begin{equation}\n\\label{eq:ffn}\n FFN (\\mathbf{Z}) = \\sigma ( \\mathbf{Z} \\mathbf{W}_{1} + \\mathbf{b}_1) \\mathbf{W}_{2} + \\mathbf{b}_{2},\n\\end{equation}\nwhere $\\mathbf{W}_{1}$, $\\mathbf{b}_{1}$, $\\mathbf{W}_{2}$, and $\\mathbf{b}_{2}$ denote the weights and biases of the two linear transformations, while $\\sigma(\\cdot)$ is a non-linear activation, \\eg, $\\text{ReLU}(\\cdot)$ , $GELU (\\cdot)$ . In some Transformer literature, FFN is also termed Multi-Layer Perceptron (MLP).", "id": "048cfeab-9b3f-464f-9b5a-62792d05f734", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "7cb80339-995d-4e4d-b955-76baae247a0f", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "{Transformers" ], [ "subsection", "\\vani{" ], [ "subsubsection", "Feed-Forward Network" ] ], "subsections": [], "title": "Feed-Forward Network" }, { "cite_extract_rate": 1, "cites": [ 732 ], "content": "\\label{sec:vit}\n{Vision Transformer (ViT) } \n{has an image-specific input pipeline in which the input image must be split into fixed-size (\\eg, $16 \\times 16$, $32 \\times 32$) patches.}\nAfter going through a linear embedding layer and adding the position embeddings, all the patch-wise sequences will be encoded by a standard Transformer encoder.\nGiven an image $\\mathbf{X} \\in \\mathbb{R}^{H \\times W \\times C}$ ($H$ height, $W$ width, $C$ channels), ViT needs to reshape $\\mathbf{X}$ into a sequence of flattened 2D patches: $\\mathbf{x}_{p} \\in \\mathbb{R}^{N \\times (P^{2} \\cdot C)}$, where $(P \\times P)$ is the patch resolution and $N = HW/P^2$.\nTo perform classification, a standard approach is to\nprepend an extra learnable embedding ``classification token'' \\texttt{[CLASS]} to the sequence of embedded patches:\n\\begin{equation}\n\\label{eq:vit-class-token}\n \\textbf{Z} \\gets concat(\\texttt{[CLASS]}, \\mathbf{X} 
\\mathbf{W}),\n\\end{equation}\nwhere $\\mathbf{W}$ denotes the projection.\n\\input{tables/multi-modal-inputs}", "id": "27e5d64b-d25c-4141-9328-b27fa5837922", "level": "subsection", "origin_cites_number": 1, "parent_id": "af1c8fea-49d0-4414-b29b-01d4ab51cdd8", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "{Transformers" ], [ "subsection", "Vision Transformer" ] ], "subsections": [], "title": "Vision Transformer" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:transformer-in-multimodal-context}\nRecently, a large number of Transformers have been studied extensively for various multimodal tasks, and shown to be compatible with various modalities in both discriminative and generative tasks.\nIn this section, we will review the key techniques/designs of the existing multimodal Transformer models, from the perspectives of multimodal input (Section \\ref{sec:multimodal-input}), self-attention variants (Section \\ref{sec:self-attention-in-multimodal-context}), and network architectures (Section \\ref{sec:architectures}).", "id": "b140ace7-aa64-4e9e-a5a9-29d3cb6942b7", "level": "subsection", "origin_cites_number": 0, "parent_id": "af1c8fea-49d0-4414-b29b-01d4ab51cdd8", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "{Transformers" ], [ "subsection", "Multimodal Transformers" ] ], "subsections": [ "561ec406-3a38-4acd-838c-2a2cd4760ec0", "09a69d19-bfd1-4ce6-924b-308217b49548", "763237b6-0d0e-4651-970e-cd2e510ab10f" ], "title": "Multimodal Transformers" }, { "cite_extract_rate": 0.7619047619047611, "cites": [ 2023, 5224, 2015, 8886, 7535, 5230, 1278, 8888, 1272, 5226, 9131, 5275, 768, 5212, 5278, 732 ], "content": "Input}\n\\label{sec:multimodal-input}\nThe Transformer family is a general architecture that can be formulated as a type of general graph neural network.\nSpecifically, self-attention can process each input as a fully-connected graph, by 
attending to the global (non-local) patterns.\nTherefore, this intrinsic trait enables Transformers to work in a modality-agnostic pipeline that is compatible with various modalities by treating the embedding of each token as a node of the graph.\n\\keypoint{{Tokenization and Embedding Processing}}\nGiven an input from an arbitrary modality, \nusers only need to perform two main steps, (1) tokenize the input, and (2) select an embedding space to represent the tokens, before inputting the data into Transformers.\nIn practice, both tokenizing the input and selecting an embedding for the tokens are vital for Transformers but highly flexible, with many alternatives.\nFor instance, given an image, the solution of tokenizing and embedding is not unique.\nUsers can choose or design tokenization at multiple granularity levels -- coarse-grained vs. fine-grained,\n\\eg, using ROIs (obtained by an object detector) and CNN features as tokens and token embeddings , using patches and a linear projection as tokens and token embeddings , or using graph nodes (obtained by an object detector and a graph generator) and GNN features as tokens and token embeddings .\nGiven a tokenization plan, the subsequent embedding approaches can be diverse.\nFor example, for video input, a common tokenization is to treat the non-overlapping\nwindows (down-sampled) over the video as tokens, and their embeddings can then be extracted by various 3D CNNs, \\eg, \nVideoBERT , CBT , and UniVL use S3D ,\nActBERT uses ResNet-3D .\nTable~\\ref{table:multi-modal-input} summarizes some common practices of multimodal inputs for Transformers, including\n RGB,\n video,\n audio/speech/music,\n text,\n graph, \\etc.\n\\keypoint{Discussion}\nWhen considered from the perspective of geometric topology,\neach of the modalities listed in Table~\\ref{table:multi-modal-input} can be regarded as a graph. 
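As a toy sketch of the clip-window tokenization for video mentioned above (the window size, dimensions, and the random stand-in for a 3D CNN are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

video = rng.normal(size=(16, 8, 8, 3))     # 16 frames of 8x8 RGB (toy data)
window = 4                                  # frames per clip token

# Non-overlapping clip windows over the time axis -> one token per clip.
clips = video.reshape(-1, window, 8, 8, 3)  # (4 clips, window, H, W, C)

# Stand-in for a 3D CNN feature extractor (S3D / ResNet-3D in the works
# above): here just a random linear map over each flattened clip.
d = 8
proj = rng.normal(size=(window * 8 * 8 * 3, d)) * 0.01
clip_tokens = clips.reshape(len(clips), -1) @ proj

assert clip_tokens.shape == (16 // window, d)  # (num_tokens, embed_dim)
```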
\nAn RGB image is essentially a neat grid graph in the pixel space.\nBoth video and audio are clip/segment based graphs over a complex space involving temporal and semantic patterns.\nBoth 2D and 3D drawing sketches are a kind of sparse graph if we consider their key points along the drawing strokes.\nSimilar to sketches, the human pose also is a kind of graph.\n3D point cloud is a graph in which each coordinate is a node.\n\\magenta{Other abstract modalities also can be interpreted as graphs, \\eg, \nsource code ,\ndata flow of source code ,\ntable ,\nSQL database schema ,\ntext question graph , and\nelectronic health records (EHRs) .}\n\\keypoint{Token Embedding Fusion}\nIn practice, \nTransformers allow each token position to contain multiple embeddings.\nThis is essentially a kind of early-fusion of embeddings, for both uni-modal and multimodal Transformer models. (This will be discussed further in subsequent sections.) \nThe most common fusion is the token-wise summing of the multiple embeddings, \\eg, a specific token embedding $\\oplus$ position embedding.\nSimilar to the flexible tokenization, token embedding fusion is also flexible and widely applied to both uni-modal and multimodal Transformer applications.\nIn , token-wise weighted summing is used to perform early-fusion of RGB and grey-scale images for multimodal surveillance AI.\nIn particular, token embedding fusion has an important role in multimodal Transformer applications as various embeddings can be fused by token-wise operators, \\eg,\nin VisualBERT and Unicoder-VL , segment embeddings are token-wise added to indicate which modality (vision or language) each token is from,\nVL-BERT injects global visual context to linguistic domain by ``linguistic token embedding $\\oplus$ full image visual feature\nembedding'',\nInterBERT adds location information for ROI by ``ROI embedding $\\oplus$ location embedding'',\nin ImageBERT , five kinds of embeddings are fused ``image embedding $\\oplus$ position 
embedding $\\oplus$ linguistic embedding $\\oplus$ segment embedding $\\oplus$ sequence position embedding''.\n\\input{tables/fusion-comparision}", "id": "561ec406-3a38-4acd-838c-2a2cd4760ec0", "level": "subsubsection", "origin_cites_number": 21, "parent_id": "b140ace7-aa64-4e9e-a5a9-29d3cb6942b7", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "{Transformers" ], [ "subsection", "Multimodal Transformers" ], [ "subsubsection", "\\MD{" ] ], "subsections": [], "title": "\\MD{" }, { "cite_extract_rate": 0.928571428571428, "cites": [ 5252, 5281, 5224, 5278, 4832, 5280, 9131, 768, 5269, 5282, 1278, 5283, 5279 ], "content": "Context}\n\\label{sec:self-attention-in-multimodal-context}\n{In multimodal Transformers,\ncross-modal interactions (\\eg, fusion, alignment) are essentially processed by self-attention and its variants. \nThus, in this section, we will review the main multimodal modelling practices of Transformers, from a perspective of self-attention designs, including\n(1) early summation (token-wise, weighted),\n(2) early concatenation,\n(3) hierarchical attention (multi-stream to one-stream),\n(4) hierarchical attention (one-stream to multi-stream),\n(5) cross-attention, and\n(6) cross-attention to concatenation.} See Table \\ref{table:fusion-comparison} and Figure \\ref{fig:self-attention-fusion}.\nFor brevity, we will state and compare the mathematical formulations in two-modality cases. 
Please note that all the discussed self-attention variants are so flexible that they can be extended to cases with more than two modalities.\n{Specifically, the following formulations are modality-, tokenization-, and embedding- agnostic, as self-attention models the embedding of an arbitrary token from an arbitrary modality as a node of a graph.} \nGiven inputs $\\mathbf{X}_\\texttt{A}$ and $\\mathbf{X}_\\texttt{B}$ from two arbitrary modalities,\n$\\mathbf{Z}_{(\\texttt{A})}$ and $\\mathbf{Z}_{(\\texttt{B})}$ denote their respective token embeddings.\nLet $\\mathbf{Z}$ denote the token embedding (sequence) produced by the multimodal interactions.\n$Tf(\\cdot)$ stands for the processing of Transformer layers/blocks.\n\\begin{figure*}[!t]\n\t\\centering\n\t\\subfigure[]{\n\t\t\\label{fig:fusion-1}\n\t\t\\includegraphics[width=0.08\\textwidth]{./my-figures/fusion-1.pdf}}\n\t\\subfigure[]{\n\t\t\\label{fig:fusion-2}\n\t\t\\includegraphics[width=0.17\\textwidth]{./my-figures/fusion-2.pdf}}\n\t\\subfigure[]{\n\t\t\\label{fig:fusion-3}\n\t\t\\includegraphics[width=0.17\\textwidth]{./my-figures/fusion-3.pdf}}\n\t\\subfigure[]{\n\t\t\\label{fig:fusion-4}\n\t\t\\includegraphics[width=0.17\\textwidth]{./my-figures/fusion-4.pdf}}\n\t\\subfigure[]{\n\t\t\\label{fig:fusion-5}\n\t\t\\includegraphics[width=0.17\\textwidth]{./my-figures/fusion-5.pdf}}\n\t\\subfigure[]{\n\t\t\\label{fig:fusion-6}\n\t\t\\includegraphics[width=0.17\\textwidth]{./my-figures/fusion-6.pdf}}\n\t\\caption{Transformer-based cross-modal interactions\\blue{: (a) Early Summation, (b) Early Concatenation, (c) Hierarchical Attention (multi-stream to one-stream), (d) Hierarchical Attention (one-stream to multi-stream), (e) Cross-Attention, and (f) Cross-Attention to Concatenation}. ``Q'': Query embedding; ``K'': Key embedding; ``V'': Value embedding. ``TL'': Transformer Layer. Best viewed in colour. 
\n }\n\t\\label{fig:self-attention-fusion}\n\\end{figure*}\n\\keypoint{(1) Early Summation}\nIn practice, early summation is a simple and effective multimodal interaction, in which the token embeddings from multiple modalities are weighted and summed at each token position and then processed by Transformer layers:\n\\begin{equation}\n\\label{eq:early-summation}\n \\mathbf{Z} \\gets Tf(\\alpha \\mathbf{Z}_{(\\texttt{A})} \\oplus \\beta \\mathbf{Z}_{(\\texttt{B})}) = {MHSA} (\\mathbf{Q}_{(\\texttt{AB})}, \\mathbf{K}_{(\\texttt{AB})}, \\mathbf{V}_{(\\texttt{AB})}), \n\\end{equation}\nwhere $\\oplus$ is element-wise sum, and $\\alpha$ and $\\beta$ are weightings.\nConcretely, $\\mathbf{Q}_{(\\texttt{AB})} = (\\alpha \\mathbf{Z}_{(\\texttt{A})} \\oplus \\beta \\mathbf{Z}_{(\\texttt{B})}) \\mathbf{W}^{Q}_{(\\texttt{AB})}$, $\\mathbf{K}_{(\\texttt{AB})} = (\\alpha \\mathbf{Z}_{(\\texttt{A})} \\oplus \\beta \\mathbf{Z}_{(\\texttt{B})}) \\mathbf{W}^{K}_{(\\texttt{AB})}$, and $\\mathbf{V}_{(\\texttt{AB})} = (\\alpha \\mathbf{Z}_{(\\texttt{A})} \\oplus \\beta \\mathbf{Z}_{(\\texttt{B})}) \\mathbf{W}^{V}_{(\\texttt{AB})}$.\n{Its main advantage is that it does not increase computational complexity.\nHowever, its main disadvantage is the manually set weightings.} \nAs discussed in Sections \\ref{sec:tokenized-input} and \\ref{sec:multimodal-input}, summing position embedding is intrinsically a case of early summation.\n\\keypoint{(2) Early Concatenation}\nAnother straightforward solution is early concatenation, in which the token embedding sequences from multiple modalities are concatenated and input into Transformer layers as\n\\begin{equation}\n\\label{eq:early-concatenation}\n \\mathbf{Z} \\gets Tf(\\mathcal{C}(\\mathbf{Z}_{(\\texttt{A})}, \\mathbf{Z}_{(\\texttt{B})})).\n\\end{equation}\n{Thus, all the multimodal token positions can be attended to as a whole sequence, such that the positions of each modality can be encoded well by conditioning on the context of the other 
modalities.}\nVideoBERT is one of the first multimodal Transformer works, where video and text are fused via early concatenation {that can encode the global multimodal context well .\nHowever, the longer sequence after concatenation will increase computational complexity.}\nEarly concatenation is also termed ``all-attention'' or ``Co-Transformer'' .\n\\keypoint{(3) Hierarchical Attention} (multi-stream to one-stream)\nTransformer layers can be combined hierarchically to attend to the cross-modal interactions.\nA common practice is that multimodal inputs are encoded by independent Transformer streams and their outputs are concatenated and fused by another Transformer :\n\\begin{equation}\n\\label{eq:hierarchical-attention-2-to-1}\n \\mathbf{Z} \\gets Tf_3(\\mathcal{C}(Tf_1(\\mathbf{Z}_{(\\texttt{A})}), Tf_2(\\mathbf{Z}_{(\\texttt{B})}))).\n\\end{equation}\nThis kind of hierarchical attention is an implementation of late interaction/fusion, and can be treated as a special case of early concatenation.\n\\keypoint{(4) Hierarchical Attention} (one-stream to multi-stream)\nInterBERT is another good example of hierarchical attention, in which concatenated multimodal inputs are encoded by a shared single-stream Transformer that is followed by two separate Transformer streams.\nThis flow can be formulated as\n\\begin{equation}\n\\label{eq:hierarchical-attention-1-to-2}\n \\left\\{ \n \\begin{aligned}\n \\blue{\\mathcal{C}(}\\mathbf{Z}_{(\\texttt{A})}, \\mathbf{Z}_{(\\texttt{B})}\\blue{)} & \\gets Tf_{1}(\\mathcal{C}(\\mathbf{Z}_{(\\texttt{A})}, \\mathbf{Z}_{(\\texttt{B})})), \\\\\n \\mathbf{Z}_{(\\texttt{A})} & \\gets Tf_{2}(\\mathbf{Z}_{(\\texttt{A})}), \\\\\n \\mathbf{Z}_{(\\texttt{B})} & \\gets Tf_{3}(\\mathbf{Z}_{(\\texttt{B})}).\n \\end{aligned}\n \\right.\n\\end{equation}\n{This method perceives the cross-modal interactions while preserving the independence of the uni-modal representations.}\n\\keypoint{(5) Cross-Attention}\nFor two-stream Transformers, if the 
$\\mathbf{Q}$ (Query) embeddings are exchanged/swapped in a cross-stream manner, the cross-modal interactions can also be perceived.\nThis method is termed cross-attention or co-attention , which was first proposed in VilBERT :\n\\begin{equation}\n\\label{eq:cross-attention}\n \\left\\{ \n \\begin{aligned}\n \\mathbf{Z}_{(\\texttt{A})} & \\gets {MHSA} (\\mathbf{Q}_{\\texttt{B}}, \\mathbf{K}_{\\texttt{A}}, \\mathbf{V}_{\\texttt{A}}), \\\\\n \\mathbf{Z}_{(\\texttt{B})} & \\gets {MHSA} (\\mathbf{Q}_{\\texttt{A}}, \\mathbf{K}_{\\texttt{B}}, \\mathbf{V}_{\\texttt{B}}).\n \\end{aligned}\n \\right.\n\\end{equation}\n{Cross-attention attends to each modality\nconditioned on the other and does not cause higher computational complexity, however if considered for each modality, this method fails to perform cross-modal attention globally and thus loses the whole context.\nAs discussed in , \ntwo-stream cross-attention can learn cross-modal interaction, whereas there is no self-attention to the self-context inside each modality.}\n\\keypoint{(6) Cross-Attention to Concatenation}\nThe two streams of cross-attention can be further concatenated and processed by another Transformer to model the global context.\nThis kind of hierarchically cross-modal interaction is also widely studied\n, and {alleviates the drawback of cross-attention.}\n\\begin{equation}\n\\label{eq:cross-attention-to-concatenation}\n \\left\\{ \n \\begin{aligned}\n \\mathbf{Z}_{(\\texttt{A})} & \\gets {MHSA} (\\mathbf{Q}_{\\texttt{B}}, \\mathbf{K}_{\\texttt{A}}, \\mathbf{V}_{\\texttt{A}}), \\\\\n \\mathbf{Z}_{(\\texttt{B})} & \\gets {MHSA} (\\mathbf{Q}_{\\texttt{A}}, \\mathbf{K}_{\\texttt{B}}, \\mathbf{V}_{\\texttt{B}}), \\\\\n \\mathbf{Z} & \\gets Tf(\\mathcal{C}(\\mathbf{Z}_{(\\texttt{A})}, \\mathbf{Z}_{(\\texttt{B})})).\n \\end{aligned}\n \\right.\n\\end{equation}\n\\keypoint{Discussion}\n{All these aforementioned self-attention variants for multimodal interactions are modality-generic, and can be applied in 
flexible strategies and for multi-granular tasks.\nSpecifically, these interactions can be flexibly combined and nested.}\nFor instance, multiple cross-attention streams are used in hierarchical attention (one-stream to multi-stream) that in a two-stream decoupled model $Tf_2$ and $Tf_3$ of Eq. \\ref{eq:hierarchical-attention-1-to-2} are implemented by cross-attention defined in Eq. \\ref{eq:cross-attention}. \n{Moreover, they can be extended to multiple ($\\geq 3$) modalities.}\nTriBERT is a tri-modal cross-attention (co-attention) for vision, pose, and audio, where given a Query embedding, its Key and Value embeddings are the concatenation from the other modalities. Cross-attention to concatenation is applied to three modalities (\\ie, language,\nvideo, and audio) in .", "id": "09a69d19-bfd1-4ce6-924b-308217b49548", "level": "subsubsection", "origin_cites_number": 14, "parent_id": "b140ace7-aa64-4e9e-a5a9-29d3cb6942b7", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "{Transformers" ], [ "subsection", "Multimodal Transformers" ], [ "subsubsection", "Self-Attention Variants in \\MD{" ] ], "subsections": [], "title": "Self-Attention Variants in \\MD{" }, { "cite_extract_rate": 0.777777777777777, "cites": [ 5284, 7909, 2012, 5278, 1278, 1279, 1272 ], "content": "\\label{sec:architectures}\nEssentially,\nvarious multimodal Transformers work due to their internal multimodal attentions that are \nthe aforementioned self-attention variants.\nMeanwhile, as illustrated in Figure \\ref{fig:self-attention-fusion}, these attentions determine the external network structures of the multimodal Transformers where they are embedded. 
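Under toy assumptions (single-head attention, random weights, hypothetical token counts), the early-concatenation and cross-attention interactions reviewed above can be sketched as:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attn(q_src, kv_src, Wq, Wk, Wv):
    # Single-head attention; the output length follows the Query stream.
    Q, K, V = q_src @ Wq, kv_src @ Wk, kv_src @ Wv
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

rng = np.random.default_rng(0)
d = 8
Za = rng.normal(size=(3, d))    # modality A tokens (e.g. 3 text tokens)
Zb = rng.normal(size=(5, d))    # modality B tokens (e.g. 5 image patches)
W = [rng.normal(size=(d, d)) * 0.1 for _ in range(3)]

# Early concatenation: one joint sequence, global multimodal context.
cat = np.concatenate([Za, Zb], axis=0)
Z_concat = attn(cat, cat, *W)
assert Z_concat.shape == (8, d)

# Cross-attention: swap the Query streams; each modality attends to the
# other, but no global self-context is modelled.
Za_cross = attn(Zb, Za, *W)     # Q from B, K/V from A
Zb_cross = attn(Za, Zb, *W)     # Q from A, K/V from B

# Cross-attention to concatenation: fuse the two streams once more.
fused = np.concatenate([Za_cross, Zb_cross], axis=0)
Z_fused = attn(fused, fused, *W)
assert Z_fused.shape == (8, d)
```

Early summation works the same way but requires equal token counts, since the streams are summed position-wise before a single joint attention.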
\nIn general, considered from the angle of network structures, (1) early summation and early concatenation work in a single stream, (2) cross-attention works in multiple streams, and (3) hierarchical attention and cross-attention to concatenation work in hybrid streams.\nThus,\nmultimodal Transformers can be divided into\nsingle-stream (\\eg, Uniter , Visualbert , Vl-bert , Unified VLP ), multi-stream (\\eg, ViLBERT , Lxmert , ActBERT ), hybrid-stream (\\eg, InterBERT ), \\etc.\nFrom the perspective of the timing of interaction, these multimodal attentions fall into three categories, \\ie, early interaction: early summation, early concatenation, and hierarchical attention (one-stream to multi-stream); late interaction: hierarchical attention (multi-stream to one-stream); or throughout interaction: cross-attention, cross-attention to concatenation.\nAs demonstrated in Figure 2 in , the multimodal Transformer models have another architecture taxonomy based on the computational size of the components.", "id": "763237b6-0d0e-4651-970e-cd2e510ab10f", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "b140ace7-aa64-4e9e-a5a9-29d3cb6942b7", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "{Transformers" ], [ "subsection", "Multimodal Transformers" ], [ "subsubsection", "Network Architectures" ] ], "subsections": [], "title": "Network Architectures" }, { "cite_extract_rate": 0, "cites": [], "content": "}\n\\label{sec:applications-and-representative-models}\nIn this section, we survey multimodal Transformers {based on the application scenarios.}\nWe consider two important paradigms:\n(1) Transformers for multimodal pretraining (Section \\ref{sec:transformers-for-multi-modal-pretraining}, including both task-agnostic (Section \\ref{sec:task-agnostic-multi-modal-pretraining}) and task-specific (Section \\ref{sec:task-specific-multi-modal-pretraining}) multimodal pretraining),\nand\n(2) Transformers for specific 
multimodal tasks (Section \\ref{sec:transformers-for-specific-multi-modal tasks}).", "id": "5f41fbc9-6717-4317-8a00-37d207d6aced", "level": "section", "origin_cites_number": 0, "parent_id": "21139f68-9b71-4857-a410-85c84d8234d4", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "{Application Scenarios" ] ], "subsections": [ "5b8dde6d-0134-49ff-8114-3677f36acfcc", "95c9bf40-869d-42e1-9367-1698471f2615" ], "title": "{Application Scenarios" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 2012, 1279, 768, 1278, 1272 ], "content": "\\label{sec:transformers-for-multi-modal-pretraining}\nInspired by the great success of Transformer-based pretraining in the NLP community,\nTransformers are also widely studied for multimodal pretraining as various large-scale multimodal corpora are emerging.\nRecent work has demonstrated that, if pretrained on large-scale multimodal corpora, Transformer-based models clearly outperform other competitors in a wide range of multimodal down-stream tasks, and moreover can achieve zero-shot generalization. \nThese superiorities have led Transformer-based multimodal pretraining to become a hot topic, which \nhas two main directions, \\ie, general pretraining for agnostic down-stream tasks (Section \\ref{sec:task-agnostic-multi-modal-pretraining}) and goal-oriented pretraining for specific down-stream tasks (Section \\ref{sec:task-specific-multi-modal-pretraining}).\nWe focus on these key points: \n(1) What trends are emerging? \n(2) Where/how do the cross-modal interactions take place during pretraining? \n(3) How to sort out and understand the pretraining pretext objectives? 
\nHow can they drive Transformers to learn the cross-modal interactions?", "id": "5b8dde6d-0134-49ff-8114-3677f36acfcc", "level": "subsection", "origin_cites_number": 7, "parent_id": "5f41fbc9-6717-4317-8a00-37d207d6aced", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "{Application Scenarios" ], [ "subsection", "Transformers for Multimodal Pretraining" ] ], "subsections": [ "fe91bb70-88c5-4914-8596-78a4c61b8b55", "01084df1-deec-4ba7-8763-5e3a45eeaee5" ], "title": "Transformers for Multimodal Pretraining" }, { "cite_extract_rate": 0.9347826086956521, "cites": [ 5252, 7905, 1275, 7080, 7913, 2018, 5285, 7912, 2011, 2012, 2023, 5283, 5287, 5289, 2422, 2017, 2015, 8886, 5266, 5290, 7535, 1278, 5281, 1272, 5286, 5288, 5292, 768, 7906, 5282, 7, 7121, 2022, 7339, 5293, 5278, 1639, 209, 5260, 5244, 5291, 7909, 1279 ], "content": "\\label{sec:task-agnostic-multi-modal-pretraining}\nRecently, Transformer-oriented pretraining has been widely studied involving diverse modality combinations, \\eg, video-text , image-text , acoustic-text .\nAmong existing work,\nthe following main trends are emerging: \n{\\textbf{(1)}} {Vision-language pretraining (VLP) is a major research problem in this field.} VLP includes both ``image + language'' and ``video + language'', and is also termed visual-linguistic pretraining. A great deal of excellent work has been proposed, \\eg, \nVideoBERT ,\nViLBERT ,\nLXMERT ,\nVisualBERT ,\nVL-BERT ,\nUNITER ,\nCBT ,\nUnicoder-VL ,\nB2T2 ,\nVLP ,\n12-in-1 ,\nOscar ,\nPixel-BERT ,\nActBERT ,\nImageBERT ,\nHERO ,\nUniVL , SemVLP . \n{\\textbf{(2)}} {Speech can be used as text.} Thanks to recent advances in automatic speech recognition (ASR) techniques, in a multimodal context, speech can be converted to text by off-the-shelf speech recognition tools. 
\nFor instance, VideoBERT and CBT make full use of speech rather than low-level sounds as a source of cross-modal supervision, by extracting high-level semantic text.\n{\\textbf{(3)}} {Over-dependence on well-aligned multimodal data.} A majority of Transformer-based multimodal pretraining works in a self-supervised manner; however, it is overly dependent on well-aligned multimodal sample pairs/tuples.\nFor instance, a large number of image-language pretraining Transformer models are pretrained on large-scale image-text pairs, \\eg,\nVisualBERT , VL-BERT , ViLBERT ,\nLXMERT , UNITER .\nAs another example,\nthe instructional videos (\\eg, cooking) \\footnote{{Note that instructional videos also have weakly aligned cases .}} are widely used as the pretraining corpora, \\eg,\nHowToVQA69M ,\nHowTo100M ,\nas {in general,}\ntheir visual clues/content and the spoken words are more likely to align with each other, {if compared with other videos.} \nHowever, using cross-modal alignment as cross-modal supervision is costly for large-scale applications.\nThus, how to use weakly-aligned or even unpaired/unaligned multimodal data as the pretraining corpora is still understudied. Some recent attempts study the use of weakly-aligned cross-modal supervision to train Transformers to learn the cross-modal interactions. \n{\\textbf{(4)}} \n{Most of the existing pretext tasks transfer well across modalities.}\nFor instance, Masked Language Modelling (MLM) in the text domain has been applied to audio and image, \\eg, Masked Acoustic Modelling , Masked Image Region Prediction ,\nwhile both Sentence Ordering Modelling (SOM) in the text domain and Frame Ordering Modelling (FOM) in the video domain share the same idea. 
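The shared masking recipe behind MLM, masked acoustic modelling and masked region prediction can be sketched modality-agnostically in a few lines of plain Python. This is a toy illustration with invented names, not the implementation of any cited model; across modalities only the token type changes (word pieces, acoustic frames, or image regions):

```python
import random

def mask_tokens(tokens, mask_token='[MASK]', ratio=0.15, seed=0):
    # Randomly replace a fraction of tokens with a mask symbol and
    # record the original values as prediction targets -- the recipe
    # shared by MLM, masked acoustic modelling and masked region
    # prediction; only the token type differs per modality.
    rng = random.Random(seed)
    n_mask = max(1, int(len(tokens) * ratio))
    positions = rng.sample(range(len(tokens)), n_mask)
    corrupted = list(tokens)
    targets = {}
    for p in positions:
        targets[p] = corrupted[p]
        corrupted[p] = mask_token
    return corrupted, targets

words = ['a', 'dog', 'chases', 'a', 'ball', 'in', 'the', 'park']
corrupted, targets = mask_tokens(words)
```

Real pipelines additionally keep some selected tokens unchanged or swap them for random tokens, and predict the targets with the Transformer encoder rather than a lookup.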
\nWe will further discuss the pretext tasks for multimodal Transformer pretraining in the following.\n{\\textbf{(5)}} {Model structures are mainly in three categories.} Essentially, in multimodal pretraining scenarios, Transformer models work based on those self-attention variants that are discussed in Section \\ref{sec:self-attention-in-multimodal-context}. Thus, if considered from the perspective of model structures, the existing Transformers for multimodal pretraining are also mainly in three categories, \\ie, single-stream, multi-stream, hybrid-stream. \n{\\textbf{(6)}}\n{Cross-modal interactions can be performed within various components/levels in the pretraining pipelines.}\nFor Transformer based multimodal pretraining, the key is to drive the Transformer (encoder, w/ or w/o decoder) to learn the cross-modal interactions.\nIn the existing Transformer-based multimodal pretraining practices, the cross-modal interactions are flexible, and can be performed within various components/levels in the pretraining pipelines.\nIn general, Transformer-based multimodal pretraining pipelines have three key components, from bottom to top, \\ie, tokenization, Transformer representation, objective supervision. \nFor not only the multimodal pretraining but also the specific multimodal tasks,\nthe cross-modal interactions can be performed within arbitrary component(s) of the three.\nAs discussed in Section \\ref{sec:self-attention-in-multimodal-context},\nbecause self-attention models the embedding of an arbitrary token from an arbitrary modality as a node of a graph,\n the existing pretraining pipelines\ncan, in general, be transferred independently across\nmodalities,\nunless considered with modality-specific objectives.\n\\keypoint{Discussion} Vision Language Pretraining (VLP) follows two general pipelines: two-stage (needing an object detector, \\eg, Faster R-CNN ) (\\eg, LXMERT , ViLBert , VL-Bert , UNITER ) and end-to-end (\\eg, Pixel-Bert , SOHO , KD-VLP , Simvlm ). 
\nTwo-stage pipelines have a main advantage -- object-aware perception, by using the supervised pre-trained visual detectors; \nhowever, these are based on a strong assumption that the visual representations can be fixed.\n\\keypoint{Discussion}\nHow to look for more corpora that intrinsically have well-aligned cross-modal supervision, such as instructional videos, is still an open problem.\nHowever,\nweakly-aligned cross-modal samples are common in real-life scenarios; for instance, enormous weakly aligned multimodal\ndata samples are emerging in e-commerce , due to fine-grained categories, complex combinations, and fuzzy correspondence.\nWell labelled/aligned cross-modal datasets are very costly to collect and annotate; \nhow to use weakly-aligned or even unaligned corpora crawled from the web is a promising research question. \nSome recent successful practices \nuse weakly aligned image-text pairs\nto perform pretraining, and achieve both competitive performance and zero-shot learning capability for image classification, image-text retrieval, and open-ended visual question answering, \\etc.\nBecause these practices in weak supervision make full use of large-scale pretraining corpora, they yield greater promise of zero-shot generalization.\n\\keypoint{Pretext Tasks}\nIn Transformer based multimodal pretraining,\nthe pretraining tasks/objectives are also termed pretext tasks/objectives.\nTo date, various pretext tasks have been studied, \\eg, masked language modelling (MLM) ,\nmasked image region prediction/{classification} (also termed masked object classification (MOC)) ,\nmasked region regression (MRR) ,\nvisual-linguistic matching (VLM) (\\eg, image–text\nmatching (ITM) , image text matching (ITM), phrase-region alignment (PRA) , word-region alignment (WRA) , video-subtitle matching (VSM) ),\nmasked frame modelling (MFM) ,\nframe order modelling (FOM) ,\nnext sentence prediction (NSP) ,\nmasked sentence generation (MSG) ,\nmasked group modelling (MGM) , 
\nprefix language modelling (PrefixLM) ,\nvideo conditioned masked\nlanguage model ,\ntext conditioned masked frame\nmodel ,\nvisual translation language modelling\n(VTLM) , and\nimage-conditioned masked language modelling (also termed image-attended masked language modelling) .\n Such down-stream-task-agnostic pretext pretraining is optional,\nand the down-stream task objectives can be trained directly,\nwhich will be discussed in Section~\\ref{sec:task-specific-multi-modal-pretraining}.\n{Table~\\ref{table:pretext-tasks} provides the common and representative pretext tasks for Transformer based multimodal pretraining.}\n\\blue{In practice, pretext tasks can be combined, and some representative cases are summarized in Table 3 of , Table 2 of .} \n\\input{tables/pretext-tasks}\nThe pretext tasks have multiple taxonomies: \n{\\textbf{(1)}} {Supervision.}\n{The common multimodal pretraining Transformers use well-aligned, weakly-aligned, and even unaligned multimodal sample pairs/tuples, to work in supervised, weakly-supervised, and unsupervised manners, respectively.}\nMeanwhile, \nif we consider the definitions of the pretext tasks/objectives from the perspective of supervision, the pretexts can be sorted into unsupervised/self-supervised (\\eg, masked language modelling (MLM) ) and supervised (\\eg, image-text matching (ITM) ), \\etc.\nNowadays, self-supervised attempts are the majority. \n{\\textbf{(2)}} {Modality.} Considering the mathematical formulations, some pretexts are defined on a single modality, \\eg, masked language modelling , masked acoustic modelling , masked region regression (MRR) ,\nwhile other pretexts are defined on multiple modalities, \\eg, image-conditioned masked language modelling (IMLM) , image-text matching (ITM) , video-subtitle matching (VSM) .\nThus, from this mathematical view, the pretext tasks can be divided into two categories, \\ie, uni-modal and multimodal. 
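As an example of a supervised, inherently multimodal pretext, image-text matching supervision can be sketched as building binary-labelled pairs. This is a hypothetical minimal sketch with invented names; real ITM heads score the pairs with the Transformer's joint representation rather than merely constructing labels:

```python
import random

def make_itm_batch(pairs, seed=0):
    # Image-text matching (ITM) supervision: aligned pairs are
    # positives (label 1); mismatched pairs, created by borrowing the
    # text of another sample, are negatives (label 0).
    rng = random.Random(seed)
    batch = []
    for i, (img, txt) in enumerate(pairs):
        batch.append((img, txt, 1))
        j = rng.choice([k for k in range(len(pairs)) if k != i])
        batch.append((img, pairs[j][1], 0))
    return batch

pairs = [('img0', 'a cat'), ('img1', 'a dog'), ('img2', 'a car')]
batch = make_itm_batch(pairs)
```

A binary classifier trained on such a batch is what the ITM objective supervises.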
\nHowever, this classification is not strictly accurate.\nIt should be highlighted that in multimodal pretraining Transformer models, even if the pretext objective formulations only include uni-modal elements, pretexts can still involve other modalities, essentially conditioned on the clues from other modalities, by (a) prepositive token level interactions and/or Transformer level interactions, \n(b) co-training with other pretexts that involve other modalities. \nFor instance, VL-BERT uses two dual pretext tasks, \\ie, masked language modelling and masked RoI classification.\n{\\textbf{(3)}} {Motivation.} Considering their motivations, the pretext tasks include masking, describing, matching, ordering, \\etc.\nSome recent surveys focus on VLP and compare the existing VLP Transformer models from the angles of domain (image-text or video-text), vision feature extraction, language feature extraction, architecture (single- or dual- stream), decoder (w/, w/o), pretext tasks/objectives, pretraining datasets, and down-stream tasks, \\eg, Table 3 of , Table 2 of .\nDifferent from these views,\nin this survey,\nwe propose comparisons from some new perspectives.\nSpecifically:\n{\\textbf{(1)}}\nThe core of the Transformer ecosystem is self-attention, thus we compare the existing multimodal pretraining Transformer models from the angles of how and when the self-attention or its variants perform cross-modal interactions. \n{\\textbf{(2)}} From a geometrically topological perspective, self-attention helps Transformers intrinsically \nwork in a modality agnostic pipeline that is compatible\nwith various modalities by taking in the embedding of each\ntoken as a node of a graph; thus we highlight that the existing VLP can be applied to other modalities, beyond visual and linguistic domains. 
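This token-as-graph-node view can be made concrete with a minimal single-head, projection-free self-attention over the concatenation of tokens from two modalities. The 2-d embeddings are invented for illustration; real models add learned query/key/value projections, multiple heads, and many layers:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(tokens):
    # Single-head, unprojected self-attention: every token embedding,
    # regardless of which modality produced it, is a graph node that
    # attends to every other node -- the modality-agnostic view.
    out = []
    for q in tokens:
        scores = [sum(a * b for a, b in zip(q, k)) for k in tokens]
        w = softmax(scores)
        out.append([sum(wi * v[d] for wi, v in zip(w, tokens))
                    for d in range(len(q))])
    return out

text_tokens = [[1.0, 0.0], [0.0, 1.0]]    # hypothetical text embeddings
image_patches = [[0.5, 0.5], [1.0, 1.0]]  # hypothetical visual embeddings
fused = self_attention(text_tokens + image_patches)  # one-stream fusion
```

Because the attention weights sum to one, each output is a convex combination of all input tokens, which is why nothing in the computation depends on which modality a token came from.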
\n{\\textbf{(3)}} \nWe suggest treating the Transformer-based multimodal pretraining pipelines as having three key components, from bottom to top, \\ie, tokenization, Transformer representation, objective supervision. \n\\keypoint{Discussion}\nIn spite of the recent advances,\nmultimodal pretraining Transformer methods still have some obvious bottlenecks.\n\\magenta{For instance,\nas discussed by in the VLP field,\nwhile the BERT-style cross-modal pretraining models produce excellent results on various down-stream vision-language tasks, they fail to be applied to generative tasks directly.}\nAs discussed in ,\nboth VideoBERT and CBT have to train a separate video-to-text decoder for video captioning.\nThere is a significant gap between the pretraining models designed for discriminative and generative tasks; the main reason is that discriminative-task-oriented pretraining models do not involve the Transformer decoder.\nTherefore, how to design more unified pipelines that can work for both discriminative and generative down-stream tasks is also an open problem to be solved.\nAs another example,\ncommon multimodal pretraining models often underperform for fine-grained/instance-level tasks as discussed by .\n\\keypoint{Discussion}\n\\magenta{As discussed in ,\nmasked language\nand region modelling as pre-training tasks have a main advantage: the Transformer encoder learned from these supervisions can encode both vision and language patterns based on bidirectional context, and is naturally fit for semantic understanding tasks, \\eg, VQA and image-text retrieval.}\n\\keypoint{Discussion}\nHow to boost the performance of multimodal pretraining Transformers is an open problem.\nSome practices demonstrate that \nmulti-task training (by adding auxiliary losses) and adversarial training further boost the performance of multimodal pretraining Transformers. 
\nMeanwhile, overly compound\npretraining objectives potentially increase the challenge of balancing different loss terms, thus complicating the training optimization .\nMoreover, \nthe difficulty of the pretexts is also worth discussing.\nIn general, to learn more explicit object concepts, more complex pretext losses are used . However, for pretexts, whether more complexity is better remains an open question.", "id": "fe91bb70-88c5-4914-8596-78a4c61b8b55", "level": "subsubsection", "origin_cites_number": 46, "parent_id": "5b8dde6d-0134-49ff-8114-3677f36acfcc", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "{Application Scenarios" ], [ "subsection", "Transformers for Multimodal Pretraining" ], [ "subsubsection", "Task-Agnostic Multimodal Pretraining" ] ], "subsections": [], "title": "Task-Agnostic Multimodal Pretraining" }, { "cite_extract_rate": 0.75, "cites": [ 5267, 5224, 5294, 5259, 5282, 8889 ], "content": "\\label{sec:task-specific-multi-modal-pretraining}\nIn practice, for multimodal Transformers,\nthe aforementioned down-stream-task-agnostic pretraining is optional rather than necessary,\nand down-stream task specific pretraining is also widely studied .\nThe main reasons include:\n(1) Limited by existing techniques, it is extremely difficult to design a set of highly universal network architectures, pretext tasks, and corpora that work for all the various down-stream applications.\n(2) There are non-negligible gaps among various down-stream applications, \\eg, task logic, data form, making it difficult to transfer from pretraining to down-stream applications.\nTherefore,\na large number of down-stream tasks still need tailor-made pretraining to improve the performance.\nGuhur \\etal propose in-domain pretraining for vision-and-language navigation, as \nthe general VLP\nfocuses on learning vision-language correlations and is not designed for sequential decision making as required\nin embodied 
VLN.\nMurahari \\etal present a visual dialogue oriented approach to leverage pretraining on general\nvision-language datasets. \nXGPT is tailor-made for image captioning, to overcome the limitation that BERT-based cross-modal pre-trained models fail to be applied\nto generative tasks directly.\nERNIE-ViLG is designed for bidirectional image-text generation with Transformers.\nSpecial modalities have their own unique domain knowledge that can be used to design specific pretraining pretexts.\nGraphCodeBERT uses\ntwo structure-aware pretext tasks (\\ie, predicting where a variable is identified from, and data flow edge\nprediction between variables) for programming source code.\nTo learn from the spatial cues in $360^{\\circ}$ video,\nMorgado \\etal propose to perform contrastive audio-visual spatial alignment of $360^{\\circ}$ video and spatial audio.\nMed-BERT is a contextualized embedding model pretrained on a structured electronic health record dataset of two million patients. \nKaleido-BERT is a VLP Transformer model tailor-made for the fashion domain.", "id": "01084df1-deec-4ba7-8763-5e3a45eeaee5", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "5b8dde6d-0134-49ff-8114-3677f36acfcc", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "{Application Scenarios" ], [ "subsection", "Transformers for Multimodal Pretraining" ], [ "subsubsection", "Task-Specific Multimodal Pretraining" ] ], "subsections": [], "title": "Task-Specific Multimodal Pretraining" }, { "cite_extract_rate": 0.8269230769230761, "cites": [ 7361, 5312, 5314, 5296, 5297, 5303, 1515, 5283, 5298, 5216, 7914, 5307, 5295, 5224, 8890, 4832, 5306, 5304, 1274, 5313, 5254, 5269, 4769, 5311, 5230, 5232, 5218, 5315, 5226, 5310, 5316, 5308, 5229, 5300, 5302, 5236, 7339, 8615, 5305, 5309, 3016, 5299, 5301 ], "content": "\\label{sec:transformers-for-specific-multi-modal tasks}\nRecent work has demonstrated that Transformer models can encode various 
multimodal inputs in both classical and novel discriminative applications, \\eg, RGB \\& optical flow , \n{RGB \\& depth ,\nRGB \\& point \\blue{cloud} ,\nRGB \\& LiDAR ,}\ntextual description \\& point cloud , acoustic \\& text , audio \\& visual observation for Audio-Visual Navigation ,\nspeech query \\& schema of SQL database ,\ntext question/query \\& the schema SQL database ,\naudio \\& tags ,\nmultimodal representation for video ,\ntext query \\& video ,\naudio \\& video for audio visual speech enhancement (AVSE) ,\naudio \\& video for Audio-Visual Video Parsing ,\naudio \\& video for audio-visual speech recognition ,\nvideo \\& text for Referring Video Object Segmentation (RVOS) ,\nsource code \\& comment \\& data flow ,\nimage \\& text for retrieval .\nMeanwhile, Transformers also contribute to various multimodal generative tasks, including single-modality to single-modality (\\eg, raw audio to 3D mesh sequence ,\nRGB to 3D scene ,\nsingle image to 3D human texture estimation ,\nRGB to scene graph ,\ngraph to graph ,\nknowledge graph to text ,\nvideo to scene graph ,\nvideo to caption ,\nimage to caption ,\ntext to speech ,\ntext to image ,\ntext to shape ,\nRGB to 3D human pose and mesh ,\nmusic to dance ),\nmultimodality to single modality (\\eg, \nimage \\& text to scene graph ,\nVideo Dialogue (text \\& audio \\& visual to text) ,\nMono Audio \\& Depth to Binaural Audio ,\nmusic piece \\& seed 3D motion to long-range future 3D motions ,\nX-raying image \\& question to answer ,\nvideo \\& text \\& audio to text ),\nand multimodality to multimodality (\\eg, ).", "id": "95c9bf40-869d-42e1-9367-1698471f2615", "level": "subsection", "origin_cites_number": 52, "parent_id": "5f41fbc9-6717-4317-8a00-37d207d6aced", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "{Application Scenarios" ], [ "subsection", "Transformers for Specific Multimodal Tasks" ] ], "subsections": [], "title": "Transformers for Specific 
Multimodal Tasks" }, { "cite_extract_rate": 1, "cites": [ 1030 ], "content": "\\label{sec:challenges-and-designs}\nComplementing the application {scenario taxonomy} discussed in Section \\ref{sec:applications-and-representative-models},\nwe further survey prior work from the perspective of technical challenges.\nWe discuss seven challenges of Transformer based multimodal learning, including fusion, alignment, transferability,\nefficiency,\nrobustness,\nuniversalness, and interpretability.\nThis further extends the taxonomy introduced in to tackle the higher diversity and wider scope of \nexisting Transformer based MML works in recent years.", "id": "7fd39549-23f3-4dac-b3b1-996a2a33f076", "level": "section", "origin_cites_number": 1, "parent_id": "21139f68-9b71-4857-a410-85c84d8234d4", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "Challenges and Designs" ] ], "subsections": [ "d09231d3-5fdd-4151-9975-ded41a5290e2", "f7fc30fc-556d-4d35-9a46-031944e81028", "bad9d95a-67dc-4e7c-8d54-dbd6cc1d9fb2", "114808c7-9385-47a0-aeb9-0420bcf760c2", "2c73f720-1e86-467b-8745-426df8275bae", "4a41ea0c-3835-4244-80e6-f7158103a48e", "67c2f447-557d-4017-98b2-a7cf956f3b77" ], "title": "Challenges and Designs" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 2015, 768, 8891, 5317, 1272 ], "content": "In general, MML Transformers fuse information across \nmultiple modalities primarily at three levels:\ninput (\\ie, early fusion), intermediate representation (\\ie, middle fusion), and prediction (\\ie, late fusion).\nCommon early fusion based MML Transformer models \n are also known as {the one-stream architecture},\nallowing the adoption of the merits of BERT\ndue to minimal architectural modification.\nThe main difference between these one-stream models\nis the usage of problem-specific modalities with variant masking techniques.\nWith attention operation, a noticeable fusion scheme\nis introduced based on a notion of bottleneck 
tokens .\nIt applies to both early and middle fusion\nby simply choosing the to-be-fused layers.\nWe note that the simple prediction-based late fusion is less adopted \nin MML Transformers.\nThis makes sense considering the motivations of learning stronger multimodal contextual representations and the great advances in computing power. \nFor enhancing and interpreting the fusion of MML,\nprobing the interaction and measuring the fusion between modalities \nwould be an interesting direction to explore.", "id": "d09231d3-5fdd-4151-9975-ded41a5290e2", "level": "subsection", "origin_cites_number": 7, "parent_id": "7fd39549-23f3-4dac-b3b1-996a2a33f076", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "Challenges and Designs" ], [ "subsection", "Fusion" ] ], "subsections": [], "title": "Fusion" }, { "cite_extract_rate": 0.8484848484848481, "cites": [ 7915, 8892, 1275, 5318, 5329, 5324, 2012, 5283, 2017, 5284, 5321, 5330, 1273, 5325, 5328, 5292, 5322, 5247, 5323, 5326, 5327, 5320, 7916, 7917, 5319, 1639, 6983, 5331 ], "content": "Cross-modal alignment is the key to \na number of real-world multimodal applications.\nTransformer based cross-modal alignment has been studied for various tasks, \\eg,\nspeaker localization in multi-speaker videos ,\nspeech translation ,\ntext-to-speech alignment ,\ntext-to-video retrieval ,\nand visual grounding of natural language .\nRecently, Transformer based alignment\nhas led to a surge of leveraging large quantities of web data (\\eg, image-text pairs) for vision and language tasks.\nA representative practice is to map two modalities into a common representation space with contrastive learning over paired samples.\nThe models based on this idea are often enormous in size\nand expensive to optimize from millions or billions of training samples.\nConsequently, successive works mostly exploit pretrained models\nfor tackling various down-stream tasks .\nThese alignment models\nhave the ability of 
zero-shot transfer\nparticularly for image classification via \\blue{prompt engineering~}.\nThis novel perspective is striking, given that image classification is conventionally regarded as a unimodal learning problem\nand zero-shot classification remains an unsolved challenge despite extensive research .\nThis has been studied\nfor more challenging and fine-grained tasks (\\eg, object detection , visual question answering , and instance retrieval )\nby \nimposing region-level (semantic parts such as objects) alignment.\nFine-grained alignment will however incur more computational costs\nfrom explicit region detection , \nand how to eliminate this cost whilst keeping the region-level learning capability becomes a challenge. \nSeveral ideas introduced recently include\nrandom sampling ,\nlearning concept dictionary ,\nuniform masking ,\npatch projection ,\njoint learning of a region detector , and\nrepresentation aligning before mask prediction .", "id": "f7fc30fc-556d-4d35-9a46-031944e81028", "level": "subsection", "origin_cites_number": 33, "parent_id": "7fd39549-23f3-4dac-b3b1-996a2a33f076", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "Challenges and Designs" ], [ "subsection", "Alignment" ] ], "subsections": [], "title": "Alignment" }, { "cite_extract_rate": 0.866666666666666, "cites": [ 5252, 5333, 5332, 5334, 1639, 5335, 8893, 2018, 768, 7912, 2023, 2012, 8383 ], "content": "\\label{sec:transferability}\nTransferability is a major challenge for Transformer based multimodal learning, involving the question of how to transfer models across different datasets and applications.\nData augmentation and adversarial perturbation strategies help multimodal Transformers to improve the generalization ability. 
VILLA is a two-stage strategy (task-agnostic adversarial pretraining, followed by task-specific adversarial finetuning) that improves VLP Transformers.\nIn practice,\nthe distribution gap between training data and practical data is noticeable.\nFor instance,\nsupervised data samples (well-labelled, well-aligned) are costly in practical applications, thus\nhow to transfer the supervised multimodal Transformers pretrained on well-aligned cross-modal pairs/tuples to the weakly aligned test bed is challenging . \nCLIP is an inspiring solution that transfers knowledge across modalities by learning a shared multimodal embedding space, enabling zero-shot transfer\nof the model to down-stream tasks.\nThe main inspiration that CLIP presents the community is that the pretrained multimodal (image and text) knowledge can be transferred to down-stream zero-shot image prediction by using a prompt template ``\\texttt{A photo of a \\{label\\}.}'' to bridge the distribution gap between training and test datasets.\nOver-fitting is a major obstacle to transfer. Multimodal Transformers can be overly fitted to the dataset\nbiases during training, due to the large modelling capability.\nSome recent practices exploit how to transfer the oracle model trained on noiseless dataset to real dataset.\nFor instance, \nKervadec \\etal explore \nhow transferable reasoning patterns are in VQA,\nand demonstrate that for LXMERT /BERT-like\nreasoning patterns can be partially transferred from an ideal dataset to a real dataset.\nCross-task gap is another major obstacle to transfer , due to the different reasoning and input-output workflows, \\eg,\nhow to use multimodal datasets to finetune the language pretrained model is difficult .\nIn real applications, \nmultimodal pretrained Transformers sometimes need to handle the uni-modal data at inference stage due to the issue of missing modalities. 
One solution is using knowledge\ndistillation, \\eg, distilling\nfrom multimodal to uni-modal attention in Transformers , distilling from multiple uni-modal Transformer teachers to a shared Transformer encoder .\nThere is a huge gap between discriminative and generative multimodal tasks.\nAs discussed in ,\nthe BERT-like encoder-only multimodal Transformers (\\eg, VideoBERT , CBT ) need\nto separately train decoders for generation tasks. This could create a pretrain-finetune discrepancy detrimental to the\ngenerality.\nRecently, more and more attempts study this issue further, \\eg,\nGilBERT is a generative VLP model for\na discriminative task, \\ie, image-text retrieval.\nThe cross-lingual gap should also be considered for the transferability of Transformer based multimodal learning, \\eg,\nuniversal cross-lingual generalization from English to non-English multimodal contexts .", "id": "bad9d95a-67dc-4e7c-8d54-dbd6cc1d9fb2", "level": "subsection", "origin_cites_number": 15, "parent_id": "7fd39549-23f3-4dac-b3b1-996a2a33f076", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "Challenges and Designs" ], [ "subsection", "Transferability" ] ], "subsections": [], "title": "Transferability" }, { "cite_extract_rate": 0.875, "cites": [ 5284, 5341, 5339, 4735, 5342, 793, 5336, 5338, 5337, 8891, 5340, 7590, 5320, 7562 ], "content": "\\label{sec:efficiency}\nMultimodal Transformers suffer from two major efficiency issues:\n(1) Due to the large model parameter capacity, they are data hungry and thus dependent on huge-scale training datasets.\n(2) They are limited by the time and memory complexities that grow quadratically with the input sequence length, which are caused by the self-attention.\nIn multimodal contexts, \nthe computation explosion becomes worse due to jointly high-dimensional representations.\nThese two bottlenecks are interdependent and should be considered together.\nTo improve the training and/or inference 
efficiency for multimodal Transformers,\n recent efforts have attempted to find various solutions, to use less training data and/or fewer parameters.\nThe main ideas can be summarized as follows. \n{\\textbf{(1)}} {Knowledge distillation.}\nDistill the knowledge from the trained larger Transformers to smaller Transformers .\nMiech \\etal conduct distillation from a slower model (early concatenation based Transformers, $\\mathcal{O}((N_{(\\texttt{A})} + N_{(\\texttt{B})})^{2})$) to a faster one (independently dual branch Transformers, $\\mathcal{O}(N_{(\\texttt{A})}^{2})$).\n{\\textbf{(2)}} {Simplifying and compressing model.}\nRemove the components to simplify the pipelines.\nTaking the VLP Transformer models as an example,\ntwo-stage pipelines are costly as they need an object detector.\nOne simplification is processing the visual input in a convolution-free manner, \\eg, E2E-VLP , ViLT . \nDropToken reduces\nthe training complexity via\nrandomly dropping a portion of the video and audio tokens from the input sequence during training.\nDropToken can be treated as an implementation of dropout or adversarial training.\nWeight-sharing is also a common practice for simplifying multimodal Transformer models.\nWen \\etal present a weight-sharing Transformer on top of the visual and\ntextual encoders to align text and image.\nLee \\etal propose a novel parameter sharing scheme based on low-rank approximation.\n{\\textbf{(3)}} {Asymmetrical network structures.}\nAssign different model capacities and computational size properly for different modalities, to save parameters. 
See Figure 2 in .\n{\\textbf{(4)}} {Improving utilization of training samples.} \nLiu \\etal train a simplified LXMERT by making full use of fewer samples at different granularities.\nLi \\etal use less data to train CLIP by fully mining the potential self-supervised signals of (a) self-supervision within each modality, (b) multi-view supervision across modalities, and (c) nearest-neighbour supervision from other similar pairs.\n{\\textbf{(5)}} {Compressing and pruning model.}\nSearch for optimal sub-structures/sub-networks of multimodal Transformers, \\eg,\nplaying Lottery Tickets with the VLP Transformer models , adaptively freezing some layers during training .\n{\\textbf{(6)}} {Optimizing the complexity of self-attention.}\nTransformers cost time and memory that grows quadratically with the input sequence length~.\nOne potential solution is optimizing the $\\mathcal{O}(N^{2})$ complexity, \\eg,\nChild \\etal present sparse factorizations of the attention matrix to reduce the quadratic complexity to $\\mathcal{O}(n \\sqrt{n})$,\nwhile Transformer-LS is an efficient Transformer for long sequences in both language and vision, with linear computational and memory complexity.\n{\\textbf{(7)}} {Optimizing the complexity of self-attention based multimodal interaction/fusion.}\nNagrani \\etal propose Fusion via Attention Bottlenecks (FSN, fusion bottleneck) to improve the early concatenation based multimodal interaction.\nFSN passes messages through a small number of bottleneck \nlatents, thus requiring the model to purify the most necessary information\nfrom each modality for cross-modal sharing.\nThis strategy uses the fusion bottleneck as a bridge, and not only\nimproves fusion performance, but also reduces computational cost.\n{\\textbf{(8)}} {Other optimization strategies.}\nUse optimal strategies to perform the common Transformer based multimodal interactions.\nGiven the quadratic complexity of self-attention,\nusing early concatenation based multimodal interaction 
to synchronously fuse the inputs from multiple modalities/views is costly.\nYan \\etal present an efficient solution that sequentially fuses information between all pairs of\ntwo adjacent views in ascending order of sequence length.\nThis is intrinsically a greedy strategy.", "id": "114808c7-9385-47a0-aeb9-0420bcf760c2", "level": "subsection", "origin_cites_number": 16, "parent_id": "7fd39549-23f3-4dac-b3b1-996a2a33f076", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "Challenges and Designs" ], [ "subsection", "Efficiency" ] ], "subsections": [], "title": "Efficiency" }, { "cite_extract_rate": 0.818181818181818, "cites": [ 4844, 5344, 5345, 5343, 5346, 8894, 2018, 5261, 5249 ], "content": "\\label{sec:robustness}\nMultimodal Transformers pretrained on large-scale corpora achieve the state-of-the-art for various multimodal applications, while their robustness is still unclear and understudied.\nThis at least involves two key challenges, \\ie, how to theoretically analyse the robustness, and how to improve it. \nAlthough recent attempts study and evaluate how the Transformer components/sub-layers contribute to the robustness, \nthe main bottleneck is that the community lacks theoretical tools to analyse the Transformer family. 
\nRecently, the common practices to analyse robustness are mainly based on experimental evaluations , \\eg, cross-dataset evaluations, perturbation-based evaluations.\nThus, some multimodal datasets are proposed for evaluating the robustness.\nRecent attempts mainly use two straightforward methods to improve the robustness of multimodal Transformer models: (1) augmentation and adversarial learning based strategies ,\n(2) fine-grained loss functions .\nFor instance:\nVILLA is a generic adversarial training framework that can be applied to various multimodal Transformers.\nAkula \\etal empirically demonstrate that\nViLBERT fails to exploit linguistic structure, and they propose two methods to improve the robustness of ViLBERT, one based on contrastive learning and the other based on multi-task learning.", "id": "2c73f720-1e86-467b-8745-426df8275bae", "level": "subsection", "origin_cites_number": 11, "parent_id": "7fd39549-23f3-4dac-b3b1-996a2a33f076", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "Challenges and Designs" ], [ "subsection", "Robustness" ] ], "subsections": [], "title": "Robustness" }, { "cite_extract_rate": 1, "cites": [ 5333, 1639, 8886, 8893, 2023, 5281, 5320, 1279 ], "content": "\\label{sec:universalness}\nDue to the high diversity of tasks and modalities in multimodal learning, universalness is an important problem for multimodal Transformer models.\nA large number of recent attempts study how to use pipelines that are as unified as possible to handle various modalities and multimodal tasks.\nIdeally, the unified multimodal Transformers can be compatible with various data (\\eg,\naligned and unaligned, uni-modal and multimodal) and tasks (\\eg, supervised and unsupervised, uni-modal and multimodal, discriminative and generative), and meanwhile have either few-shot or even zero-shot generalization ability. 
Thus, the current solutions toward the universalness goal for multimodal Transformers are preliminary probes.\nThe current unifying-oriented attempts mainly include: \n{\\textbf{(1)}} {Unifying the pipelines for both uni-modal and multimodal inputs/tasks.} \nAs discussed in Section \\ref{sec:transferability}, in practical scenarios,\nmultimodal Transformers need to\nhandle uni-modal data due to the issue of missing modalities.\nDistilling multimodal knowledge into small models that are adaptable to uni-modal data and tasks is a successful practice .\n{\\textbf{(2)}} {Unifying the pipelines for both multimodal understanding and generation.}\nIn general, for multimodal Transformer pipelines,\nunderstanding and discriminative tasks require Transformer encoders only, while generation/generative tasks require both Transformer encoders and decoders.\nExisting attempts use multi-task learning to combine the understanding and generation workflows, where the two kinds of workflows are jointly trained by multi-task loss functions.\nFrom the perspective of model structures, typical solutions include:\n(a) encoder + decoder, \\eg, E2E-VLP .\n(b) separate encoders + cross encoder + decoder, \\eg, UniVL , CBT .\n(c) single unified/combined encoder-decoder, \\eg, VLP .\n(d) \\blue{two-stream} decoupled design . \n{\\textbf{(3)}} {Unifying and converting the tasks themselves}, \\eg, CLIP converts zero-shot recognition to retrieval, thus reducing the cost of modifying the model.\nHowever, the aforementioned practices suffer from some obvious challenges and bottlenecks, at least including: \n{\\textbf{(1)}} Due to modality and task gaps, universal models should consider the trade-off between universalness and cost. Unifying the pipelines of different modalities and tasks generally causes a larger or more complicated model configuration, whereas for a specific modality or task, some components are redundant. \n{\\textbf{(2)}} Multi-task loss functions increase the complexity of training.
How to co-train multiple objectives properly and effectively is challenging, because different objectives generally need to be optimized with different strategies.", "id": "4a41ea0c-3835-4244-80e6-f7158103a48e", "level": "subsection", "origin_cites_number": 8, "parent_id": "7fd39549-23f3-4dac-b3b1-996a2a33f076", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "Challenges and Designs" ], [ "subsection", "Universalness" ] ], "subsections": [], "title": "Universalness" }, { "cite_extract_rate": 0.9, "cites": [ 5348, 5347, 7918, 8895, 8896, 5351, 5349, 5350, 2012 ], "content": "\\label{sec:interpretability}\nWhy and how Transformers perform so well in multimodal learning has been investigated .\nThese attempts mainly use probing tasks and ablation studies. \nCao \\etal design a set of probing tasks on UNITER and LXMERT , to evaluate what patterns are learned in pretraining.\nHendricks \\etal probe the image–language Transformers with fine-grained image–sentence pairs, and find that verb understanding is harder than subject or object understanding.\n\\magenta{Chen \\etal examine the optimal combination of pretraining\ntasks via ablation study, to compare how different pretexts contribute to the Transformers.}\nDespite these attempts,\nthe interpretability of multimodal Transformers is still under-studied to date.", "id": "67c2f447-557d-4017-98b2-a7cf956f3b77", "level": "subsection", "origin_cites_number": 10, "parent_id": "7fd39549-23f3-4dac-b3b1-996a2a33f076", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "Challenges and Designs" ], [ "subsection", "Interpretability" ] ], "subsections": [], "title": "Interpretability" }, { "cite_extract_rate": 0.875, "cites": [ 5252, 1275, 4862, 5285, 2012, 4826, 2422, 2017, 5321, 5294, 7302, 7535, 1272, 7920, 5328, 8896, 5292, 5322, 5352, 5323, 7919, 5320, 5345, 5293, 7916, 5256, 1639, 5211 ], "content":
"\\label{sec:discussion-and-outlook}\nDesigning universal MML models that excel across all the unimodal and multimodal downstream tasks with different characteristics simultaneously is a non-trivial challenge.\nFor instance, two-stream architectures are typically preferred over one-stream ones \nfor cross-modal retrieval-like tasks for efficiency, since the representation of each modality can be pre-computed beforehand and reused repeatedly. \nThat being said, how to design task-agnostic MML architectures is still an open challenge, in addition to other design choices such as pretext and objective loss functions.\nFurthermore, a clear gap remains\nbetween the state-of-the-art and this ultimate goal.\nIn general, existing multimodal Transformer models \nare superior only for certain MML tasks,\nas they are designed specifically for only a subset of tasks\n.\nEncouragingly, several recent studies towards { universal modality learning} in terms of modality-agnostic network design and more task-generic architecture design have been introduced,\nand it is hoped this will spark further investigation.\nTo that end, instead of exhaustively exploring the vast model design space, {seeking an in-depth understanding and interpretation of an MML model's behaviour might be insightful for superior algorithm design}, even though the interactions and synergy across different modalities are intrinsically complex and even potentially inconsistent over tasks\n.\nFor more fine-grained MML, it is widely acknowledged that discovering the latent semantic alignments across modalities is critical.\nAn intuitive strategy is to leverage semantic parts (\\eg, objects) pre-extracted by an off-the-shelf detector for MML .\nThis, however, is not only complex and error-prone, but also computationally costly .\nSeveral remedies introduced recently include\nrandom sampling ,\nlearning a concept dictionary ,\njointly learning a region detector , and\nrepresentation aligning before mask prediction
.\nGiven the scale of MML training data, exploring this direction\nincurs exhaustive computational costs, and\nindustrial research teams with rich resources are more likely to be able to afford it. \n{Ideally, a favourable MML method would leave fine-grained\nsemantic alignment across modalities to emerge on its own},\nwhich is worthy of careful investigation in the future.\nAs the learning scale expands exponentially,\nthe training data inevitably become noisy and heterogeneous .\nIt has been recently shown that properly tackling the noise issue is useful .\nAnother related facet is the training strategy,\n\\eg, whether multi-stage training is superior to the common one-stage policy .\nFurther, the quadratic complexity of Transformers\nbecomes more acute for multimodal data due to the longer input.\nDespite extensive research on efficient variants ,\n{dedicated efficiency study for MML is still underestimated even empirically and calls for more investigation}.\nIdentifying the strengths of Transformers for multimodal machine learning is a big open problem.\nThe following main points can be summarized from the literature:\n{\\textbf{(1)}} Transformers can encode implicit knowledge .\n{\\textbf{(2)}} The multi-head design brings multiple modelling sub-spaces that can further enhance the expressive ability of the model.\nIdeally, after training, the multiple heads are individually effective and mutually diverse.\nThis is essentially a good practice of ensemble learning.\n{\\textbf{(3)}} Transformers intrinsically perform global aggregation, perceiving non-local patterns.
\n{\\textbf{(4)}} Thanks to the large model capacity, Transformer models handle the challenging domain gaps and shifts (\\eg, linguistic and visual) better via effective pretraining on large-scale corpora .\n{\\textbf{(5)}} Transformers can represent the inputs as graphs, which are\nintrinsically compatible with more modalities, \\eg, table and SQL.\n{\\textbf{(6)}} For modelling series and sequence patterns (\\eg, time-series), Transformers have better training and inference efficiency than RNN-based models,\nthanks to their parallel computation in training and/or inference. \nTransformers are inherently permutation invariant when processing a sequence of points, making them well-suited, \\eg, for point cloud learning .\n{\\textbf{(7)}} Tokenization makes Transformers flexible in organizing multimodal inputs, as discussed in Section \\ref{sec:tokenized-input}.", "id": "d60d1615-9fb4-46ed-9d46-853e64b9762a", "level": "section", "origin_cites_number": 32, "parent_id": "21139f68-9b71-4857-a410-85c84d8234d4", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "Discussion and Outlook" ] ], "subsections": [], "title": "Discussion and Outlook" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:conclusion}\nThis survey focuses on multimodal machine learning with Transformers.\nWe reviewed the landscape by introducing the Transformer designs and training in the multimodal contexts.\nWe summarized the key challenges and solutions for this emerging and exciting field.\nMoreover, we discussed open problems and potential research directions.
We hope that this survey gives a helpful and detailed overview for\nnew researchers and practitioners, provides\na convenient reference for relevant experts (\\eg, multimodal machine learning researchers, Transformer network designers), and encourages\nfuture progress.\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\\bibliographystyle{IEEEtran}\n\\bibliography{bib}\n\\vspace{-1.5cm}\n \\begin{IEEEbiography}[{\\includegraphics[height=0.9in,clip,keepaspectratio]{authors/peng.jpg}}]{Peng Xu} is a lecturer in the Department of Electronic Engineering, Tsinghua University.\n Previously, he was a postdoctoral research assistant in the Department of Engineering Science at the University of Oxford. \n \\end{IEEEbiography}\n \\vspace{-2.2cm}\n\\begin{IEEEbiography}[{\\includegraphics[height=0.9in,clip,keepaspectratio]{authors/xzhu_2022.jpeg}}]{Xiatian Zhu}\n is a Senior Lecturer at the Surrey Institute for People-Centred Artificial Intelligence, and Centre for Vision, Speech and Signal Processing (CVSSP), Faculty of Engineering and Physical Sciences, University of Surrey. \n \\end{IEEEbiography}\n\\vspace{-2.2cm}\n\\begin{IEEEbiography}[{\\includegraphics[height=0.9in,clip,keepaspectratio]{authors/david-clifton-provided.jpg}}]{David A. Clifton}\nis a Professor of Clinical Machine Learning and leads the Computational Health Informatics (CHI) Lab in the Department of Engineering Science at the University of Oxford. 
\n \\end{IEEEbiography}\n\\clearpage\n\\appendix\n\\keypoint{{Notations and Abbreviations}}\nThroughout this survey,\nunless specified otherwise, mathematical symbols and abbreviated terms follow the conventions in Table \\ref{table:definitions}.\n\\input{tables/definition_table}\n\\keypoint{{Special/Customized Tokens}}\n{In both uni-modal and multimodal Transformers,\nvarious special/customized tokens are semantically defined as place-holders in the token sequences.\nSome common special tokens are summarized in Table~\\ref{table:tokens}.}\n\\input{tables/tokens}\n\\end{document}", "id": "2d2b9adf-b8ba-46b9-b286-c1b7238138a2", "level": "section", "origin_cites_number": 0, "parent_id": "21139f68-9b71-4857-a410-85c84d8234d4", "prefix_titles": [ [ "title", "Multimodal Learning with Transformers: \\\\ A Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
62
[ 4765, 5208, 7905, 5219, 5221, 5233, 4862, 5222, 864, 5216, 7903, 5213, 5217, 5231, 5224, 7360, 7904, 4832, 7302, 5215, 4769, 5230, 5232, 5225, 5234, 5235, 38, 5218, 7907, 5220, 5227, 5214, 5223, 5226, 5210, 7298, 7040, 768, 5229, 7, 7906, 5212, 7908, 7078, 1639, 5209, 1030, 732, 5228, 5211, 5237, 7565, 8885, 5241, 5240, 3003, 9131, 7467, 5236, 5242, 5238, 5239, 1582, 1275, 7080, 2514, 2023, 2012, 2011, 7070, 2017, 2015, 8886, 5248, 7535, 1278, 1272, 5243, 5247, 1501, 5249, 2527, 9149, 5246, 7370, 7850, 5244, 7590, 7909, 11, 5245, 1279, 5252, 5259, 7911, 7796, 486, 5271, 5251, 7910, 5254, 5266, 5262, 5269, 5255, 5261, 5257, 8887, 5268, 5253, 5267, 2901, 5256, 5250, 5260, 5264, 5270, 5258, 5263, 5265, 5272, 5273, 97, 1511, 57, 71, 7853, 5274, 5275, 4826, 4764, 5276, 5277, 1512, 8888, 5278, 5281, 5280, 5282, 5283, 5279, 5284, 7913, 2018, 5285, 7912, 5287, 5289, 2422, 5290, 5286, 5288, 5292, 7121, 2022, 7339, 5293, 209, 5291, 5294, 8889, 7361, 5312, 5314, 5296, 5297, 5303, 1515, 5298, 7914, 5307, 5295, 8890, 5306, 5304, 1274, 5313, 5311, 5315, 5310, 5316, 5308, 5300, 5302, 8615, 5305, 5309, 3016, 5299, 5301, 8891, 5317, 7915, 8892, 5318, 5329, 5324, 5321, 5330, 1273, 5325, 5328, 5322, 5323, 5326, 5327, 5320, 7916, 7917, 5319, 6983, 5331, 5333, 5332, 5334, 5335, 8893, 8383, 5341, 5339, 4735, 5342, 793, 5336, 5338, 5337, 5340, 7562, 4844, 5344, 5345, 5343, 5346, 8894, 5348, 5347, 7918, 8895, 8896, 5351, 5349, 5350, 7920, 5352, 7919 ]
1.219219
[ "Alana de Santana Correia", "Esther Luna Colombini" ]
Attention, please! A survey of Neural Attention Models in Deep Learning
2021
2021-03-31T02:42:28Z
cs.LG
In humans, Attention is a core property of all perceptual and cognitive operations. Given our limited ability to process competing sources, attention mechanisms select, modulate, and focus on the information most relevant to behavior. For decades, concepts and functions of attention have been studied in philosophy, psychology, neuroscience, and computing. For the last six years, this property has been widely explored in deep neural networks. Currently, the state-of-the-art in Deep Learning is represented by neural attention models in several application domains. This survey provides a comprehensive overview and analysis of developments in neural attention models. We systematically reviewed hundreds of architectures in the area, identifying and discussing those in which attention has shown a significant impact. We also developed and made public an automated methodology to facilitate the development of reviews in the area. By critically analyzing 650 works, we describe the primary uses of attention in convolutional, recurrent networks and generative models, identifying common subgroups of uses and applications. Furthermore, we describe the impact of attention in different application domains and their impact on neural networks' interpretability. Finally, we list possible trends and opportunities for further research, hoping that this review will provide a succinct overview of the main attentional models in the area and guide researchers in developing future approaches that will drive further improvements.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "25aabc07-c583-4f10-8638-f4d1afc51cb2", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ] ], "subsections": [ "4f96ee17-8c70-472d-b042-28031c34fcbc", "e9f7f711-5e86-4799-ab32-58a1b7898f77", "8fffd84f-2212-4774-84a1-20ea6256a1f8", "eb87d2c5-07a0-49c2-87d8-8983f28d475b", "7295f067-9e4d-4cf1-a4a3-b025da57d328", "75e63f10-91ba-40b8-88f1-78f077a10310", "9a62d0da-c761-4ac4-a34d-c5ccef979e7c" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:introduction}\nAttention is a behavioral and cognitive process of focusing selectively on a discrete aspect of information, whether subjective or objective, while ignoring other perceptible information~, playing an essential role in human cognition and the survival of living beings in general. In animals of lower levels in the evolutionary scale, it provides perceptual resource allocation allowing these beings to respond correctly to the environment's stimuli to escape predators and capture preys efficiently. In human beings, attention acts on practically all mental processes, from reactive responses to unexpected stimuli in the environment - guaranteeing our survival in the presence of danger - to complex mental processes, such as planning, reasoning, and emotions. Attention is necessary because, at any moment, the environment presents much more perceptual information than can be effectively processed, the memory contains more competing traits than can be remembered, and the choices, tasks, or motor responses available are much greater than can be dealt with~.\nAt early sensorial processing stages, data is separated between sight, hearing, touch, smell, and taste. At this level, Attention selects and modulates processing within each of the five modalities and directly impacts processing in the relevant cortical regions. 
For example, attention to visual stimuli increases discrimination and activates the relevant topographic areas in the retinotopic visual cortex~, allowing observers to detect contrasting stimuli or make more precise discriminations. In hearing, attention allows listeners to detect weaker sounds or extremely subtle tone differences that are nonetheless essential for recognizing emotions and feelings~. Similar effects of attention operate on the somatosensory cortex~, olfactory cortex~, and gustatory cortex~. In addition to sensory perception, our cognitive control is intrinsically attentional. Our brain has severe cognitive limitations - the number of items that can be kept in working memory, the number of choices that can be selected, and the number of responses that can be generated at any time are limited. Hence, evolution has favored selective attention concepts as the brain has to prioritize. \nLong before contemporary psychologists entered the discussion on Attention, William James~ offered us a precise definition that has been, at least, partially corroborated more than a century later by neurophysiological studies. According to James, ``Attention implies withdrawal from some things in order to deal effectively with others... Millions of items of the outward order are present to my senses which never properly enter into my experience. Why? Because they have no interest for me. My experience is what I agree to attend to. Only those items which I notice shape my mind — without selective interest, experience is an utter chaos.'' Indeed, the first scientific studies of Attention were reported by Hermann von Helmholtz\n(1821-1894) and William James (1842-1910) in the nineteenth century. They both conducted experiments to understand the role of Attention.\nFor the past decades, the concept of attention has permeated most aspects of research in perception and cognition, being considered as a property of multiple and different perceptual and cognitive operations~.
Thus, to the extent that these mechanisms are specialized and decentralized, attention reflects this organization. These mechanisms are in wide communication, and the executive control processes help set priorities for the system. Selection mechanisms operate throughout the brain and are involved in almost every stage, from sensory processing to decision making and awareness. Attention has become a broad term to define how the brain controls its information processing, and its effects can be measured through conscious introspection, electrophysiology, and brain imaging. Attention has been studied from different perspectives for a long time.", "id": "4f96ee17-8c70-472d-b042-28031c34fcbc", "level": "section", "origin_cites_number": 8, "parent_id": "25aabc07-c583-4f10-8638-f4d1afc51cb2", "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ], [ "section", "Introduction" ] ], "subsections": [ "86655437-9987-4a2d-86c3-444cb4c50dc2", "6799dc21-761f-45e7-aaef-27acbc8cbc80", "946d2727-ecf4-4ff6-b242-b45355399356" ], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "Computational attention systems based on psychophysical models, supported by neurobiological evidence, have existed for at least three decades~. Treisman's Feature Integration Theory (FIT) ~, Wolfe's Guided Search ~, Triadic architecture ~, Broadbent's Model ~, Norman Attentional Model ~ ~, Closed-loop Attention Model ~, SeLective Attention Model ~, among several other models, introduced the theoretical basis of computational attention systems.\nInitially, attention was mainly studied with visual experiments where a subject looks at a scene that changes in time~. In these models, the attentional system was restricted only to the selective attention component in visual search tasks, focusing on the extraction of multiple features through a sensor.
Therefore, most attentional computational models emerged in computer vision to select important image regions. Koch and Ullman ~ introduced the area's first visual attention architecture based on FIT ~. The idea behind it is that several features are computed in parallel, and their conspicuities are collected on a salience map. Winner-Take-All (WTA) determines the most prominent region on the map, which is finally routed to the central representation. From then on, only the region of interest proceeds to more specific processing. Neuromorphic Vision Toolkit (NVT), derived from the Koch-Ullman ~ model, was the basis for developing research in computational visual attention for several years. Navalpakkam and Itti introduce a derivative of NVT which can deal with top-down cues ~. The idea is to learn the target's feature values from a training image in which a binary mask indicates the target. The attention system of Hamker ~ ~ calculates various features and contrast maps and turns them into perceptual maps. With target information influencing processing, they combine detection units to determine whether a region on the perceptual map is a candidate for eye movement. VOCUS ~ introduced a way to combine bottom-up and top-down attention, overcoming the limitations of the time. Several other models have emerged in the literature, each with peculiarities according to the task. Many computational attention systems focus mainly on the computation of three features:\nintensity, orientation, and color. These models employed neural networks or filter models that use classical linear filters to compute features.\nComputational attention systems were used successfully before Deep Learning (DL) in object recognition ~, image compression ~, image matching ~, image segmentation ~, object tracking ~, active vision ~, human-robot interaction ~, object manipulation in robotics ~, robotic navigation ~, and SLAM ~.
In mid-1997, Scheier and Egner ~ presented a mobile robot that uses attention for navigation. Still, in the 90s, Baluja and Pomerleau ~ used an attention system to navigate an autonomous car, which followed relevant regions of a projection map. Walther ~ combined an attentional system with an object recognizer based on SIFT features and demonstrated that the attentional front-end enhanced the recognition results. Salah et al. ~ combined attention with neural networks in an Observable Markov model for handwritten digit recognition and face recognition. Ouerhani et al. ~ proposed focused image compression, which determines the number of bits to be allocated for encoding regions of an image according to their salience. High saliency regions have a high quality of reconstruction relative to the rest of the image.", "id": "86655437-9987-4a2d-86c3-444cb4c50dc2", "level": "subsection", "origin_cites_number": 26, "parent_id": "4f96ee17-8c70-472d-b042-28031c34fcbc", "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ], [ "section", "Introduction" ], [ "subsection", "Pre-Deep Learning Models of Attention" ] ], "subsections": [], "title": "Pre-Deep Learning Models of Attention" }, { "cite_extract_rate": 1, "cites": [ 180, 2635, 9093, 1515, 2633, 2634, 2636, 728 ], "content": "By 2014, the DL community noticed attention as a fundamental concept for advancing deep neural networks. Currently, the state-of-the-art in the field uses neural attention models. As shown in Figure~\\ref{fig:hist_works}, the number of published works grows significantly each year in the leading repositories. In neural networks, attention mechanisms dynamically manage the flow of information, the features, and the resources available, improving learning. These mechanisms filter out irrelevant stimuli for the task and help the network deal with long-term dependencies in a simple way.
Many neural attentional models are simple, scalable, flexible, and show promising results in several application domains~~~. Given the current research extent, interesting questions related to neural attention models arise in the literature: \\textbf{how these mechanisms help improve neural networks' performance}, \\textbf{which classes of problems benefit from this approach}, and \\textbf{how these benefits arise}. \n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.55\\linewidth]{img/trabalhos_pubicados_ano.pdf}\n \\caption{Works published by year between 01/01/2014 and 15/02/2021. The main sources collected are ArXiv, CVPR, ICCV, ICLR, IJCNN, NIPS, and AAAI. The other category refers mainly to the following publishing vehicles: ICML, ACL, ACM, EMNLP, ICRA, ICPR, ACCV, CORR, ECCV, ICASSP, ICLR, IEEE ACCESS, Neurocomputing, and several other magazines.}\n \\label{fig:hist_works}\n\\end{figure}\nTo the best of our knowledge, most surveys available in the literature do not address all of these questions or are more specific to some domain.\nWang et al.~ propose a review on recurrent networks and applications in computer vision, while Hu~ and Galassi et al.~ offer surveys on attention in natural language processing (NLP). Lee et al.~ present a review on attention in graph neural networks, and Chaudhari et al.~ present a more general, yet short, review.", "id": "6799dc21-761f-45e7-aaef-27acbc8cbc80", "level": "subsection", "origin_cites_number": 8, "parent_id": "4f96ee17-8c70-472d-b042-28031c34fcbc", "prefix_titles": [ [ "title", "Attention, please! 
A survey of Neural Attention Models in Deep Learning" ], [ "section", "Introduction" ], [ "subsection", "Deep Learning Models of Attention: the beginning" ] ], "subsections": [], "title": "Deep Learning Models of Attention: the beginning" }, { "cite_extract_rate": 0, "cites": [], "content": "To assess the breadth of attention applications in deep neural networks, we present a systematic review of the field in this survey. Throughout our review, we \\textbf{critically analyzed 650 papers} while quantitatively addressing \\textbf{6,567}. \nAs the main contributions of our work, we highlight:\n\\begin{enumerate}\n \\item A replicable research methodology. We provide, in the Appendix, the detailed process conducted to collect our data and we make available the scripts to collect the papers and create the graphs we use;\n \\item An in-depth overview of the field. We critically analyzed 650 papers and extracted different metrics from 6,567, employing various visualization techniques to highlight overall trends in the area;\n \\item We describe the main attentional mechanisms; \n \\item We present the main neural architectures that employ attention mechanisms, describing how they have contributed to the NN field;\n \\item We introduce how attentional modules or interfaces have been used in classic DL architectures, extending the Neural Network Zoo diagrams;\n \\item Finally, we present a broad description of application domains, trends, and research opportunities.\n\\end{enumerate}\nThis survey is structured as follows. In Section~\\ref{sub:attention_overview} we present the field overview, reporting the main events from 2014 to the present. Section~\\ref{sub:attention_mechanisms} contains a description of the main attention mechanisms. In Section~\\ref{sub:attention_classic_architectures} we analyze how attentional modules are used in classic DL architectures. Section~\\ref{sec:applications} explains the main classes of problems and applications of attention.
Finally, in Section~\\ref{sec:trends} we discuss limitations, open challenges, current trends, and future directions in the area, concluding our work in Section~\\ref{sec:conclusion} with directions for further improvements.", "id": "946d2727-ecf4-4ff6-b242-b45355399356", "level": "subsection", "origin_cites_number": 0, "parent_id": "4f96ee17-8c70-472d-b042-28031c34fcbc", "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ], [ "section", "Introduction" ], [ "subsection", "Contributions" ] ], "subsections": [], "title": "Contributions" }, { "cite_extract_rate": 1, "cites": [ 180, 168, 9093, 303, 2637, 746 ], "content": "\\label{sub:attention_overview}\nHistorically, research in computational attention systems has existed since the 1980s. Only in mid-2014 did Neural Attentional Networks (NANs) emerge in Natural Language Processing (NLP), where attention provided significant advances, bringing promising results through scalable and straightforward networks. Attention allowed us to move towards the complex tasks of conversational machine comprehension, sentiment analysis, machine translation, question-answering, and transfer learning, which were previously challenging. Subsequently, NANs appeared in other fields equally important for artificial intelligence, such as computer vision, reinforcement learning, and robotics. There are currently numerous attentional architectures, but few of them have a significantly higher impact, as shown in Figure~\\ref{fig:main_architectures_field}. In this image, we depict the most relevant group of works organized according to citation levels and innovations, where RNNSearch~, Transformer~, Memory Networks~, ``show, attend and tell''~, and RAM~ stand out as key developments.\n\\begin{figure*}[h]\n \\centering\n \\includegraphics[width=0.95\\linewidth]{img/principais_arquiteturas_ok_2.pdf}\n \\caption{Main Neural Attention Networks (NAN). Each circle corresponds to an architecture.
The radius of the circles is defined based on the impact of the NAN on the field. The impact was defined by the citation number and the architecture innovation level. The greater the radius of the circle, the more significant the impact of the architecture, and vice versa. Architecture labels are color-coded as follows: orange - natural language processing, red - computer vision, dark brown - computer vision and natural language processing, dark yellow - reinforcement learning and computer vision, light yellow - reinforcement learning and natural language processing, blue - imitation learning and robotics, and purple - others.}\n \\label{fig:main_architectures_field}\n\\end{figure*}\nThe \\textit{bottleneck problem} in the classic encoder-decoder framework worked as the initial motivation for attention research in Deep Learning. In this framework, the encoder encodes a source sentence into a fixed-length vector from which a decoder generates the translation. The main issue is that a neural network needs to compress all the necessary information from a source sentence into a fixed-length vector. Cho et al. ~ showed that the performance of the classic encoder-decoder deteriorates rapidly as the size of the input sentence increases. To minimize this bottleneck, Bahdanau et al.~ proposed \\textbf{RNNSearch}, an extension to the encoder-decoder model that learns to align and translate jointly. RNNSearch generates a translated word at each time-step, looking for a set of positions in the source sentence with the most relevant words. The model predicts a target word based on the context vectors associated with those source positions and all previously generated target words. The main advantage is that RNNSearch does not encode an entire input sentence into a single fixed-length vector. Instead, it encodes the input sentence into a sequence of vectors, choosing a subset of these vectors adaptively while generating the translation.
The attention mechanism allows extra information to be propagated through the network, eliminating the fixed-size context vector's information bottleneck. For the first time, this approach demonstrated that an attentive model outperforms classic encoder-decoder frameworks on long sentences.\nRNNSearch was instrumental in introducing the first attention mechanism, \\textbf{soft attention} (Section~\\ref{sub:attention_mechanisms}). This mechanism has the main characteristic of smoothly selecting the network's most relevant elements. Based on RNNSearch, there have been numerous attempts to augment neural networks with new properties. Two research directions stand out as particularly interesting - \\textbf{attentional interfaces} and \\textbf{end-to-end attention}. Attentional interfaces treat attention as a module or set of elective modules, easily plugged into classic Deep Learning neural networks, just like RNNSearch. So far, this is the most explored research direction in the area, mainly due to the simplicity, general applicability, and good generalization results that attentional interfaces bring. End-to-end attention is a younger research direction, where the attention block covers the entire neural network. In these models, high and low-level attentional layers act recursively or in cascade at all network abstraction levels to produce the desired output. End-to-end attention models introduce a new class of neural networks in Deep Learning. End-to-end attention research makes sense since no isolated attention center exists in the human brain, and its mechanisms are used in different cognitive processes.", "id": "e9f7f711-5e86-4799-ab32-58a1b7898f77", "level": "section", "origin_cites_number": 6, "parent_id": "25aabc07-c583-4f10-8638-f4d1afc51cb2", "prefix_titles": [ [ "title", "Attention, please! 
A survey of Neural Attention Models in Deep Learning" ], [ "section", "Overview" ] ], "subsections": [ "744f8d2e-a25d-43d2-8d16-940b34a51075", "e3210eac-3771-4fd4-8636-d1ae7effe940", "f3bfdaf7-a4b9-4b08-81ed-a6b0d8c695c2", "06967c24-144e-447f-869e-42b2365f50c1", "5f570775-8b1f-4791-80cb-cd8884392f1d" ], "title": "Overview" }, { "cite_extract_rate": 0.8571428571428571, "cites": [ 2643, 167, 1877, 2640, 2639, 1141, 2641, 2634, 7186, 2638, 746, 2642 ], "content": "\\label{sub:attentional_interfaces}\nRNNSearch is the basis for research on attentional interfaces. The attentional module of this architecture is widely used in several other applications. In voice recognition~, it allows one RNN to process the audio while another skims over it, focusing on the relevant parts as it generates a transcription. In text analysis~, it allows a model to look at the words as it generates a parse tree. In conversational modeling~, it allows the model to focus on the last parts of the conversation as it generates its response. There are also important extensions that deal with information bottlenecks other than the classic encoder-decoder problem. \\textbf{BiDAF}~ proposes a multi-stage hierarchical process for question-answering. It uses bidirectional attention flow to build a multi-stage hierarchical network with context paragraph representations at different granularity levels. The attention layer does not summarize the context paragraph into a fixed-length vector. Instead, attention is calculated at each step, and the attended vector at each step, along with representations from previous layers, can flow to the subsequent modeling layer. This reduces the loss of information caused by early summarization. At each time step, attention is only a function of the query and the context paragraph at the current step and does not depend directly on the previous step's attention. 
The hypothesis is that this simplification leads to a division of labor between the attention layer and the modeling layer, forcing the attention layer to focus on learning the attention between the query and the context.\nYang et al.~ proposed the \\textbf{Hierarchical Attention Network (HAN)} to capture two essential insights about document structure. Documents have a hierarchical structure: words form sentences, sentences form a document. Humans, likewise, construct a document representation by first building representations of sentences and then aggregating them into a document representation. Different words and sentences in a document are differentially informative. Moreover, the importance of words and sentences is highly context-dependent, i.e., the same word or sentence may have different importance in different contexts. To include sensitivity to this fact, HAN consists of two levels of attention mechanisms - one at the word level and one at the sentence level - that let the model pay more or less attention to individual words and sentences when constructing the document's representation. Xiong et al.~ created a coattentive encoder that captures the interactions between the question and the document, with a dynamic pointing decoder that alternates between estimating the start and end of the answer span. To learn approximate solutions to computationally intractable problems, \\textbf{Ptr-Net}~ modifies RNNSearch's attentional mechanism to represent variable-length dictionaries. It uses the attention mechanism as a pointer.\nSee et al.~ used a hybrid between classic sequence-to-sequence attentional models and a Ptr-Net~ for abstractive text summarization. The hybrid pointer-generator~ copies words from the source text via pointing, which aids accurate reproduction of information while retaining the ability to produce novel words through the generator. Finally, it uses a mechanism to keep track of what has been summarized, which discourages repetition. 
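The pointing idea above can be sketched in a few lines: the attention distribution over input positions is itself the output distribution, so the model selects an input element rather than a word from a fixed vocabulary. This is an illustrative sketch with hypothetical dimensions and random stand-in parameters, not the original Ptr-Net implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_distribution(encoder_states, decoder_state, W1, W2, v):
    # Additive attention scores over the T input positions; the softmax over
    # them is used directly as the output distribution ("pointing").
    scores = np.tanh(encoder_states @ W1.T + decoder_state @ W2.T) @ v  # (T,)
    return softmax(scores)

rng = np.random.default_rng(1)
T, d = 6, 8
encoder_states = rng.normal(size=(T, d))  # one state per input element
decoder_state = rng.normal(size=d)
W1 = rng.normal(size=(d, d))
W2 = rng.normal(size=(d, d))
v = rng.normal(size=d)

p_point = pointer_distribution(encoder_states, decoder_state, W1, W2, v)
pointed_index = int(p_point.argmax())  # the input element the model points to
```

Because the output is an index into the input, the same network handles inputs of any length, which is what lets Ptr-Net represent variable-length dictionaries.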
FusionNet~ presents a novel concept of \"history-of-word\" to characterize attention information from the lowest word-embedding level up to the highest semantic-level representation. This concept considers that the input is gradually transformed into a more abstract representation, forming each word's history in the human mental flow. FusionNet employs a fully-aware multi-level attention mechanism and an attention score-function that takes advantage of the history-of-word. Rocktäschel et al.~ introduce two-way attention for recognizing textual entailment (RTE). The mechanism allows the model to attend over past output vectors, solving the LSTM's cell state bottleneck. The LSTM with attention does not need to capture the premise's whole semantics in the LSTM cell state. Instead, attention generates output vectors while reading the premise and accumulates a representation in the cell state that informs the second LSTM which of the premise's output vectors to attend to when determining the RTE class. Luong et al.~ proposed global and local attention in machine translation. Global attention is similar to soft attention, while local attention is an improvement that makes hard attention differentiable - the model first predicts a single position aligned to the current target word, and a window centered around that position is used to calculate a context vector. \nAttentional interfaces have also emerged in architectures for computer vision tasks. Initially, they were based on human saccadic movements and robustness to change. The human visual attention mechanism can explore local differences in an image while highlighting the relevant parts. A person focuses attention on one part of the image at a time, glimpsing quickly across the entire image to find the main areas during the recognition process. In this process, the internal relationship between the different regions guides the eyes' movement to the next area to focus on. 
Ignoring the irrelevant parts makes it easier to learn in the presence of clutter. Another advantage of glimpses and visual attention is robustness. Our eyes can see an object in a real-world scene while ignoring irrelevant parts. Convolutional neural networks (CNNs) are extremely different. CNNs are rigid, and the number of parameters grows linearly with the size of the image. Also, for the network to capture long-distance dependencies between pixels, the architecture needs many layers, compromising the model's convergence. Besides, the network treats all pixels in the same way. This process does not resemble the human visual system, which contains visual attention mechanisms and a glimpse structure that provide unmatched performance in object recognition.\nRAM~ and STN are pioneering architectures with attentional interfaces based on human visual attention. \\textbf{RAM}~ can extract information from an image or video by adaptively selecting a sequence of regions, or glimpses, processing only the selected areas at high resolution. The model is a Recurrent Neural Network that processes different parts of the images (or video frames) at each instant of time \\textit{t}, building a dynamic internal representation of the scene via Reinforcement Learning training. The main advantages of the model are the reduced number of parameters and the architecture's independence from the input image size, neither of which holds for convolutional neural networks. This approach is generic. It can use static images, videos, or the perceptual module of an agent that interacts with the environment. \\textbf{STN (Spatial Transformer Network)}~ is a module robust to spatial transformations. In STN, if the input is transformed, the model must generate the correct classification label, even if the input is distorted in unusual ways. STN works as an attentional module attachable -- with few modifications -- to any neural network to actively spatially transform feature maps. 
STN learns the transformation during the training process. Unlike pooling layers, where receptive fields are fixed and local, a Spatial Transformer is a dynamic mechanism that can spatially transform an image, or feature map, producing the appropriate transformation for each input sample. The transformation is performed across the map and may include scaling, cropping, rotations, and non-rigid body deformations. This approach allows the network to select the most relevant image regions (attention) and transform them into a desired canonical pose, simplifying recognition in the following layers.\nFollowing the RAM approach, the \\textbf{Deep Recurrent Attentive Writer (DRAW)}~ represents a shift to a more natural way of constructing an image, in which parts of a scene are created independently of the others. This is how human beings draw a scene: recreating a visual scene sequentially, refining all parts of the drawing over several iterations, and reevaluating the work after each modification. Although natural to humans, most approaches to automatic image generation aim to generate complete scenes at once. This means that all pixels are conditioned on a single latent distribution, making it challenging to scale to large images. DRAW belongs to the family of variational autoencoders. It has an encoder that compresses the images presented during training and a decoder that reconstructs the images. Unlike other generative models, DRAW iteratively constructs scenes by accumulating modifications emitted by the decoder, each observed by the encoder. DRAW uses RAM-style attention mechanisms to selectively attend to parts of the scene while ignoring others. The main challenge of this mechanism is to learn where to look, which is usually addressed by reinforcement learning techniques. 
However, in DRAW, the attention mechanism is differentiable, making it possible to use backpropagation.", "id": "744f8d2e-a25d-43d2-8d16-940b34a51075", "level": "subsection", "origin_cites_number": 14, "parent_id": "e9f7f711-5e86-4799-ab32-58a1b7898f77", "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ], [ "section", "Overview" ], [ "subsection", "Attentional interfaces" ] ], "subsections": [], "title": "Attentional interfaces" }, { "cite_extract_rate": 0.5, "cites": [ 7194, 2643, 2650, 167, 2649, 2644, 2645, 2648, 862, 2637, 2646, 7599, 2647 ], "content": "\\label{sub:multimodality}\nThe first uses of attentional interfaces in DL were limited to the NLP and computer vision domains, solving isolated tasks. Currently, attentional interfaces are studied in multimodal learning. Sensory multimodality in neural networks is a historical problem widely discussed by the scientific community~~. Multimodal data improves the robustness of perception through complementarity and redundancy. The human brain continually deals with multimodal data and integrates it into a coherent representation of the world. Computationally, however, employing different sensors presents a series of challenges, such as incomplete or spurious data, different properties (e.g., dimensionality or range of values), and the need for data alignment and association. The integration of multiple sensors depends on a reasoning structure over the data to build a common representation, which does not exist in classical neural networks. Attentional interfaces adapted for multimodal perception are an efficient alternative for reasoning about misaligned data from different sensory sources.\nThe first widespread use of attention for multimodality occurred with the attentional interface between a convolutional neural network and an LSTM in image captioning~. 
In this model, a CNN processes the image, extracting high-level features, whereas the LSTM consumes the features to produce descriptive words, one by one. The attention mechanism guides the LSTM to the relevant image information for each word's generation, analogous to the human visual attention mechanism. The visualization of attention weights in multimodal tasks improved the understanding of how the architecture works. Countless other works derived from this approach, with attentional interfaces that deal with video-text data~~~, image-text data~~, monocular/RGB-D images~~~, RADAR~, remote sensing data~~~~, audio-video~~, and diverse sensors~~~, as shown in Figure~\\ref{fig:modalities}. \nZhang et al.~ used an adaptive attention mechanism to learn to emphasize different visual and textual sources for dialogue systems in fashion retail. An adaptive attention scheme\nautomatically decided the evidence source for tracking dialogue states\nbased on visual and textual context. \\textbf{Dual Attention Networks}~ presented attention mechanisms to capture the fine-grained interplay between images and textual information. The mechanism allows visual and textual attention to guide each other during collaborative inference. \\textbf{HATT}~ presented a new attention-based hierarchical fusion to progressively explore the complementary nature of multimodal features, fusing temporal, motion, audio, and semantic label features for video representation. The model consists of three attention layers. First, the low-level attention layer deals with temporal, motion, and audio features inside each modality and across modalities. Second, high-level attention selectively focuses on semantic label features. Finally, the sequential attention layer incorporates hidden information generated by the encoded low-level and high-level attention. Hori et al.~ extended simple attention multimodal fusion. 
Unlike the simple multimodal fusion method, the feature-level attention weights can change according to the decoder state and the context vectors, enabling the decoder network to pay attention to a different set of features or modalities when predicting each subsequent word in the description. Memory Fusion Network~ presented the Delta-memory Attention module for multi-view sequential learning. First, a system of LSTMs, one for each modality, encodes the modality-specific dynamics and interactions. Delta-memory attention discovers both cross-modality and temporal interactions in different memory dimensions of the LSTMs. Finally, a Multi-view Gated Memory (unifying memory) stores the cross-modality interactions over time.\nHuang et al.~ investigated the problem of image-text matching by exploiting bi-directional attention with fine-granularity correlations between visual regions and textual words. \\textbf{Bi-directional attention} connects words to regions and objects to words for learning image-text matching. Li et al.~ introduced \\textbf{Long Short-Term Memory with Pointing} (LSTM-P), inspired by human pointing behavior~ and Pointer Networks~. The pointing mechanism encapsulates dynamic contextual information (current input word and LSTM cell output) to deal with novel objects in the image captioning scenario. Liu et al.~ proposed a cross-modal attention-guided erasing approach for referring expressions. Previous attention models focus only on the most dominant features of both modalities and neglect textual-visual correspondences between images and referring expressions. To tackle this issue, cross-modal attention discards the most dominant information from either the textual or visual domain to generate difficult training samples and drive the model to discover complementary textual-visual correspondences. 
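A minimal sketch of the modality-level attentive fusion discussed above: one scalar score per modality is computed from the decoder state, so the fusion weights can shift between modalities from one output word to the next. All parameters and the per-modality context vectors below are hypothetical stand-ins, assumed to be already projected to a common dimension.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_fusion(modality_feats, decoder_state, W, U, v):
    # One scalar relevance score per modality, conditioned on the decoder
    # state; softmax turns the scores into modality-level attention weights.
    scores = np.array([np.tanh(W @ f + U @ decoder_state) @ v for f in modality_feats])
    betas = softmax(scores)
    fused = sum(b * f for b, f in zip(betas, modality_feats))  # weighted fusion
    return betas, fused

rng = np.random.default_rng(2)
d, d_dec, d_att = 8, 6, 4
video_feat = rng.normal(size=d)   # hypothetical per-modality context vectors,
audio_feat = rng.normal(size=d)   # already projected to a common dimension d
text_feat = rng.normal(size=d)
decoder_state = rng.normal(size=d_dec)
W = rng.normal(size=(d_att, d))
U = rng.normal(size=(d_att, d_dec))
v = rng.normal(size=d_att)

betas, fused = attentive_fusion([video_feat, audio_feat, text_feat], decoder_state, W, U, v)
```

Because `betas` depends on `decoder_state`, the model can emphasize, say, audio features for one word and visual features for the next, which is the behavior the feature-level fusion above describes.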
Abolghasemi et al.~ demonstrated an approach for augmenting a deep visuomotor policy trained through demonstrations with \\textbf{Task Focused Visual Attention} (TFA). Attention receives as input a manipulation task specified in natural language text, an image with the environment, and returns as output the area with an object that the robot needs to manipulate. TFA allows the policy to be significantly more robust from the baseline policy, i.e., no visual attention. Pu et al.~ adaptively select features from the multiple CNN layers for video captioning. Previous models often use the output from a specific layer of a CNN as video features. However, this attention model adaptively and sequentially focuses on different layers of CNN features.\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[clip, trim=0.3cm 6cm 0.5cm 6cm, width=1.00\\textwidth]{img/modalidades_ultimo_editado.pdf}\n \\caption{A diagram showing sensory modalities of neural attention models. Radial segments correspond to attention architectures, and each track corresponds to a modality. Modalities are: (A) audio, (B) biomedical signals, (I) image, (O) other sensors, (L) LiDAR, (R) remote sensing data, (T) text, and (V) video. The following coloring convention is used for the individual segments: white (the modality is not implemented), light yellow (CNN), light orange (RNN), orange (Self-attentive networks), red (Memory networks), dark red (framework), and brown (GNN). This diagram emphasizes multimodal architectures so that only the most representative single modality (i.e., text or image) architectures are shown. Most multimodal architectures use the image/text or video/text modalities.}\n \\label{fig:modalities}\n\\end{figure}", "id": "e3210eac-3771-4fd4-8636-d1ae7effe940", "level": "subsection", "origin_cites_number": 26, "parent_id": "e9f7f711-5e86-4799-ab32-58a1b7898f77", "prefix_titles": [ [ "title", "Attention, please! 
A survey of Neural Attention Models in Deep Learning" ], [ "section", "Overview" ], [ "subsection", "Multimodality" ] ], "subsections": [], "title": "Multimodality" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 2652, 2651, 9093, 2637, 9109, 7600, 553, 2653, 1966, 2634 ], "content": "\\label{sub:attention_memory}\nAttentional interfaces also allow the neural network to interact with other cognitive elements (e.g., memories, working memory). Memory control and logic flow are essential for learning. However, they are elements that do not exist in classical architectures. The memory of classic RNNs, encoded by hidden states and weights, is usually minimal and not sufficient to remember facts from the past accurately. Most Deep Learning models do not have a simple way to read and write data to an external memory component. The \\textbf{Neural Turing Machine} (NTM)~ and Memory Networks (MemNN)~ - a new class of neural networks - introduced the possibility of a neural network working with addressable memory. NTM is a differentiable approach that can be trained with gradient descent algorithms, producing a practical mechanism for learning programs. NTM memory is a short-term storage space for information together with its rules-based manipulation. Computationally, these rules are simple programs, and the data are those programs' arguments. Therefore, an NTM resembles a working memory designed to solve tasks that require rules, where variables are quickly bound to memory slots. NTMs use an attentive process to read and write elements to memory selectively. This attentional mechanism makes the network learn to use working memory instead of implementing a fixed set of symbolic data rules.\n\\textbf{Memory Networks}~ are a relatively new framework of models designed to alleviate the problem of learning long-term dependencies in sequential data by providing an explicit memory representation for each token in the sequence. 
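The selective, attentive read that NTM-style memories use can be sketched with content-based addressing: cosine similarity between an emitted key and each memory row, sharpened by a key strength and normalized by a softmax. This is an illustrative NumPy sketch, not the full NTM addressing pipeline (which also includes interpolation, shifting, and sharpening steps).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_based_read(memory, key, beta):
    # Cosine similarity between the emitted key and every memory row,
    # sharpened by the key strength beta, normalized into read weights.
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = softmax(beta * sims)   # attention over memory locations
    return w, w @ memory       # read weights and the read vector

rng = np.random.default_rng(3)
N, M = 10, 6                                   # 10 memory slots of width 6
memory = rng.normal(size=(N, M))
key = memory[4] + 0.01 * rng.normal(size=M)    # a query close to slot 4
w_read, read_vec = content_based_read(memory, key, beta=50.0)
```

Because both the weights and the weighted sum are differentiable in `key`, `beta`, and `memory`, the whole read can be trained with gradient descent, which is what makes the NTM's memory access practical.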
Instead of forgetting the past, Memory Networks explicitly consider the input history, with a dedicated vector representation for each history element, effectively removing the chance to forget. The limit on memory size becomes a hyper-parameter to tune, rather than an intrinsic limitation of the model itself. This model was used in question-answering tasks where the long-term memory effectively acts as a (dynamic) knowledge base, and the output is a textual response. Large-scale question-answering tests demonstrated the reasoning power of memory networks on questions that require an in-depth analysis of verb intent. Mainly due to the success of MemNN, networks with external memory are a growing research direction in DL, with several branches under development, as shown in Figure~\\ref{fig:memory_networks_family}.\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.80\\textwidth]{img/memory_networks_family.pdf}\n \\caption{Memory-based neural networks (MemNN). Architecture labels are color-coded as follows: orange - natural language processing, red - computer vision, purple - others. End-to-End Memory Networks is the first end-to-end differentiable version of MemNN. GMN~ and MemGNN~ are the first graph networks with memory. DMN~, MemGNN~, Episodic graph memory networks~, and Episodic CAMN~ are the first instances of the episodic memory framework.}\n \\label{fig:memory_networks_family}\n\\end{figure}\n\\textbf{End-to-end Memory Networks}~ is the first version of MemNN applicable to realistic, trainable end-to-end scenarios, requiring low supervision during training. Oh et al.~ extend Memory Networks to the task of semi-supervised video object segmentation. Frames with object masks are placed in memory, and a frame to be segmented acts as a query. The memory is updated with the newly provided masks, handling challenges such as appearance changes, occlusions, and error accumulation without online learning. 
The algorithm acts as an attentional space-time system, calculating when and where to attend for each query pixel to decide whether the pixel belongs to a foreground object or not. Kumar et al.~ propose the first network with episodic memory - a type of memory extremely relevant to humans - to iterate over representations emitted by the input module, updating its internal state through an attentional interface. In~, an episodic memory with a key-value retrieval mechanism chooses which parts of the input to focus on through attention. The module then produces a summary representation of the memory, taking into account the query and the stored memory. Finally, the latest research has invested in Graph Memory Networks (GMN), which are memories in GNNs~, to better handle unstructured data using key-value structured memories~~~~.", "id": "f3bfdaf7-a4b9-4b08-81ed-a6b0d8c695c2", "level": "subsection", "origin_cites_number": 12, "parent_id": "e9f7f711-5e86-4799-ab32-58a1b7898f77", "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ], [ "section", "Overview" ], [ "subsection", "Attention-augmented memory" ] ], "subsections": [], "title": "Attention-augmented memory" }, { "cite_extract_rate": 0.652173913043478, "cites": [ 679, 2654, 1515, 2657, 2643, 248, 553, 180, 2655, 2656, 2658, 7194, 252, 7371, 793 ], "content": "\\label{sub:end_to_end_attention_models}\nIn mid-2017, research aiming at end-to-end attention models appeared in the area. The Neural Transformer (NT)~ and Graph Attention Networks~ - purely attentional architectures - demonstrated to the scientific community that attention is a key element for the future development of Deep Learning. The Transformer's goal is to use self-attention (Section~\\ref{sub:attention_mechanisms}) to minimize the difficulties of traditional recurrent neural networks. 
The \\textbf{Neural Transformer} is the first neural architecture that uses only attentional modules and fully-connected neural networks to process sequential data successfully. It dispenses with recurrences and convolutions, capturing the relationship between sequence elements regardless of their distance. Attention allows the Transformer to be simple, parallelizable, and cheap to train~. \\textbf{Graph Attention Networks} (GATs) are an end-to-end attention version of GNNs~. They have stacks of attentional layers that help the model focus on the most relevant parts of the unstructured data to make decisions. The main purpose of attention here is to avoid the noisy parts of the graph by improving the signal-to-noise ratio (SNR) while also reducing the structure's complexity. Furthermore, they provide a more interpretable structure for solving the problem. For example, by analyzing the attention of a model over the different components of a graph, it is possible to identify the main factors contributing to a particular response.\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=\\textwidth]{img/Transformer_family.pdf}\n \\caption{Transformer-based neural networks. Architecture labels are color-coded as follows: orange - natural language processing, red - computer vision, purple - others.}\n \\label{fig:transformer_family}\n\\end{figure}\nThere is growing interest in NT and GATs, and some extensions have been proposed~~~~, with numerous Transformer-based architectures, as shown in Figure~\\ref{fig:transformer_family}. These architectures, and all that use self-attention, belong to a new category of neural networks called Self-Attentive Neural Networks. 
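The core computation shared by these self-attentive networks, scaled dot-product self-attention, can be sketched as follows (single head, toy dimensions; the projection matrices are random stand-ins for learned parameters). Each position emits a query, key, and value; the scaled dot products yield a T x T attention matrix relating every position to every other one, regardless of distance.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project the sequence into queries, keys, and values, then let every
    # position attend to every other via scaled dot products.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))  # each row is a distribution
    return A, A @ V

rng = np.random.default_rng(4)
T, d_model, d_k = 7, 16, 8
X = rng.normal(size=(T, d_model))   # a toy input sequence of 7 positions
Wq = rng.normal(size=(d_model, d_k))
Wk = rng.normal(size=(d_model, d_k))
Wv = rng.normal(size=(d_model, d_k))
A, out = self_attention(X, Wq, Wk, Wv)
```

Because all positions are processed in a single pair of matrix multiplications rather than one step at a time, this computation parallelizes well, which is the source of the Transformer's low training cost noted above.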
They aim to explore self-attention in various tasks and to improve on the following drawbacks: 1) large number of parameters and training iterations to converge; 2) high memory cost per layer, with memory growing quadratically with sequence length; 3) auto-regressive decoding; 4) low parallelization in the decoder layers. Specifically, Weighted Transformer~ proposes modifications in the attention layers, achieving 40\\% faster convergence. The multi-head attention modules are replaced by modules called branched attention that the model learns to combine during the training process. The Star-Transformer~ proposes a lightweight alternative that reduces the model's complexity with a star-shaped topology. To reduce the memory cost, Music Transformer~ and Sparse Transformer~ introduce relative self-attention and factored self-attention, respectively. Lee et al.~ also feature an attention mechanism that reduces self-attention from quadratic to linear complexity, allowing scaling to large inputs and datasets.\nSome approaches adapt the Transformer to new applications and areas. In natural language processing, several new architectures have emerged, mainly in multimodal learning. Doubly Attentive Transformer~ proposes a multimodal machine-translation method incorporating visual information. It modifies the attentional decoder to attend to both textual features and visual features from a pre-trained CNN encoder. The Multi-source Transformer~ explores four different strategies for combining inputs in the multi-head attention decoder layer for multimodal translation. Style Transformer~, Hierarchical Transformer~, HighWay Recurrent Transformer~\\cite{highway_transformer}, Lattice-Based Transformer~\\cite{lattice_transformer}, Transformer TTS Network~\\cite{li2019neural}, and Phrase-Based Attention~ are some important architectures in style transfer, document summarization, and machine translation. Transfer Learning in NLP is one of the Transformer's major contribution areas. 
BERT~, GPT-2~, and GPT-3~ are based on the NT architecture to tackle the problem of Transfer Learning in NLP, since earlier techniques restricted the power of pre-trained representations. In computer vision, image generation is one of the Transformer's major contributions. Image Transformer~, SAGAN~, and Image GPT~ use self-attention mechanisms to attend to local neighborhoods. The size of the images that the model can process in practice increases significantly, while maintaining significantly larger receptive fields per layer than typical convolutional neural networks. In early 2021, OpenAI introduced the scientific community to DALL·E~, the newest language model based on the Transformer and GPT-3, capable of generating images from text, extending the knowledge of GPT-3 to vision with only 12 billion parameters.", "id": "06967c24-144e-447f-869e-42b2365f50c1", "level": "subsection", "origin_cites_number": 23, "parent_id": "e9f7f711-5e86-4799-ab32-58a1b7898f77", "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ], [ "section", "Overview" ], [ "subsection", "End-to-end attention models" ] ], "subsections": [], "title": "End-to-end attention models" }, { "cite_extract_rate": 0.6923076923076921, "cites": [ 180, 679, 2661, 7600, 2659, 7107, 2634, 676, 2660 ], "content": "\\label{sub:attention_today}\nCurrently, hybrid models that employ the key developments in the use of attention in Deep Learning (Figure~\\ref{fig:timeline}) have aroused the scientific community's interest. Mainly, hybrid models based on the Transformer, GATs, and Memory Networks have emerged for multimodal learning and several other application domains. Hyperbolic Attention Networks (HAN)~, Hyperbolic Graph Attention Networks (GHN)~, Temporal Graph Networks (TGN)~, and Memory-based Graph Networks (MGN)~ are some of the most promising developments. 
Hyperbolic networks are a new class of architecture that combine the benefits of self-attention, memory, graphs, and hyperbolic geometry, enabling neural networks to reason with high capacity over embeddings produced by deep neural networks. Since 2019, these networks have stood out as a new research branch because they represent state-of-the-art generalization on neural machine\ntranslation, learning on graphs, and visual question answering tasks while keeping the neural representations compact. Since 2019, GATs have also received much attention due to their ability to learn complex relationships or interactions in a wide spectrum of problems ranging from biology, particle physics, and social networks to recommendation systems. To improve the representation of nodes and expand the capacity of GATs to deal with data of a dynamic nature (i.e., evolving features or connectivity over time), architectures that combine memory modules and the temporal dimension, like MGNs and TGNs, were proposed.\n\\begin{figure*}[ht]\n \\centering\n \\includegraphics[width=\\textwidth]{img/main_architecture.pdf}\n \\caption{Timeline of key developments in attention in DL. RNNSearch presented the first attention mechanism. The Neural Turing Machine and Memory Networks introduced memory and dynamic flow control. RAM and DRAW learned to combine multi-glimpse, visual attention, and sequential processing. The Spatial Transformer introduced a module to increase the robustness of CNNs to variations in spatial transformations. Show, attend and tell created attention for multimodality. The Pointer Network used attention as a pointer. BiDAF, HAN, and DCN presented attentional techniques to align data with different hierarchical levels. ACT introduced the computation time topic. The Transformer~ was the first self-attentive neural network with an end-to-end attention approach. GATs introduced attention in GNNs. 
BERT~, GPT-2~, GPT-3~, and DALL·E~ are the state of the art in language models and text-to-image generation. Finally, BRIMs~ learned to combine bottom-up and top-down signals.}\n \\label{fig:timeline}\n\\end{figure*}\nAt the end of 2020, two research branches still little explored in the literature were strengthened: 1) the explicit combination of bottom-up and top-down stimuli in bidirectional recurrent neural networks, and 2) adaptive computation time. Classic recurrent neural networks perform recurrent iteration within a particular level of representation instead of using top-down iteration, in which higher levels act on lower levels. However, Mittal et al.~ revisited bidirectional recurrent layers with attentional mechanisms to explicitly route the flow of bottom-up and top-down information, promoting selective interaction between the two levels of stimuli. The approach separates the hidden state into several modules so that interactions between bottom-up and top-down signals can be appropriately focused. The layer structure has concurrent modules so that each hierarchical layer can send information in both the bottom-up and top-down directions.\nAdaptive computation time is an interesting, little-explored topic in the literature that began to expand only in 2020, despite initial studies emerging in 2017. ACT applies to different neural networks (e.g., RNNs, CNNs, LSTMs, Transformers). The general idea is that complex data might require more computation to produce a final result, while unimportant or straightforward data might require less. The attention mechanism dynamically decides how long the network should process each input. The seminal approach by Graves et al.~ made minor modifications to an RNN, allowing the network to perform a variable number of state transitions and a variable number of outputs at each input step. The resulting output is a weighted sum of the intermediate outputs, i.e., soft attention. 
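The ACT scheme just described can be sketched as follows: a halting unit emits a probability at each step, computation stops once the accumulated probability crosses a threshold (or a step limit is hit), and the final state is the halting-weighted, soft-attention-style sum of the intermediate states. The step function and its sigmoidal halting unit below are hypothetical stand-ins for a trained recurrent cell.

```python
import numpy as np

def act_pondering(step_fn, state, eps=0.01, max_steps=10):
    # Run the recurrent step a variable number of times; stop once the
    # accumulated halting probability exceeds 1 - eps. The remainder of the
    # probability mass is assigned to the final step, so the halting weights
    # always form a distribution over the steps actually taken.
    states, halt_weights, total = [], [], 0.0
    for t in range(max_steps):
        state, h = step_fn(state)
        last = (t == max_steps - 1) or (total + h >= 1.0 - eps)
        halt_weights.append(1.0 - total if last else h)
        states.append(state)
        total += h
        if last:
            break
    output = sum(w * s for w, s in zip(halt_weights, states))
    return output, halt_weights

rng = np.random.default_rng(5)
W = 0.5 * rng.normal(size=(4, 4))

def step(s):
    s_next = np.tanh(W @ s)                          # toy recurrent transition
    halt_prob = 1.0 / (1.0 + np.exp(-s_next.sum()))  # sigmoidal halting unit
    return s_next, halt_prob

output, halt_weights = act_pondering(step, rng.normal(size=4))
```

In the actual ACT formulation, the number of steps taken (the "ponder cost") is additionally penalized in the loss, which is what discourages the network from computing for unnecessary amounts of time.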
A halting unit decides when the network should stop or continue. To limit computation time, attention adds a time penalty to the cost function, preventing the network from processing data for unnecessary amounts of time. This approach has recently been updated and expanded to other architectures. Spatially Adaptive Computation Time (SACT)~ adapts ACT to adjust the amount of computation to each spatial position of the block in convolutional layers, learning to focus computing on the regions of interest and to stop when the feature maps are \"good enough\". Finally, Differentiable Adaptive Computation Time (DACT)~ introduced the first differentiable end-to-end approach to computation time on recurrent networks.", "id": "5f570775-b56b-4791-80cb-cd8884392f1d", "level": "subsection", "origin_cites_number": 13, "parent_id": "e9f7f711-5e86-4799-ab32-58a1b7898f77", "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ], [ "section", "Overview" ], [ "subsection", "Attention today" ] ], "subsections": [], "title": "Attention today" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sub:attention_mechanisms}\nDeep attention mechanisms can be categorized into soft attention (global attention), hard attention (local attention), and self-attention (intra-attention). \n\textbf{Soft Attention}. Soft attention assigns a weight between 0 and 1 to each input element. It decides how much attention should be focused on each element, considering the interdependence between the mechanism's input and the target of the deep neural network. It uses softmax functions in the attention layers to calculate weights so that the entire attentional model is deterministic and differentiable. Soft attention can act in the spatial and temporal context. In the spatial context, it operates mainly to extract or weight the most relevant features. 
For the temporal context, it works by adjusting the weights of all samples in sliding time windows, as samples at different times have different contributions. Despite being deterministic and differentiable, soft mechanisms have a high computational cost for large inputs. Figure~\ref{fig:soft_attention_example} shows an intuitive example of a soft attention mechanism.\n\begin{figure}[ht]\n \centering\n \includegraphics[width=0.90\textwidth]{img/soft_attention_example.pdf}\n \caption{An intuitive example of Soft Attention. The Visual QA architecture outputs an answer given an image and a textual question as input. It uses a soft attention mechanism that weights the visual features relevant to the task for further processing. The premise is that the norm of the visual features correlates with their relevance. Besides, feature vectors with high magnitudes correspond to image regions that contain relevant semantic content.}\n \label{fig:soft_attention_example}\n\end{figure}\n\textbf{Hard Attention}. Hard attention determines whether a part of the mechanism's input should be considered or not, reflecting the interdependence between the mechanism's input and the target of the deep neural network. The weight assigned to an input part is either 0 or 1. Hence, as input elements are either attended to or discarded, the objective is non-differentiable. The process involves making a sequence of selections on which part to attend. In the temporal context, for example, the model attends to a part of the input to obtain information, deciding where to attend in the next step based on the known information. A neural network can make a selection based on this information. However, as there is no ground truth to indicate the correct selection policy, hard-attention mechanisms are represented by stochastic processes. As the model is not differentiable, reinforcement learning techniques are necessary to train models with hard attention. 
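The contrast between the two regimes can be sketched in a few lines; this is an illustrative NumPy toy, not any specific published model. Soft attention returns a differentiable convex combination of all inputs, while hard attention samples a single input, which is why score-function (REINFORCE-style) gradients are needed in training:

```python
import numpy as np

def soft_attention(values, scores):
    """Differentiable: softmax weights in (0,1) over ALL elements."""
    w = np.exp(scores - scores.max())
    w = w / w.sum()
    return w, w @ values                 # convex combination of the inputs

def hard_attention(values, scores, rng):
    """Non-differentiable: a single element is selected (weight 0 or 1)."""
    w = np.exp(scores - scores.max())
    w = w / w.sum()
    idx = rng.choice(len(values), p=w)   # stochastic selection policy
    return idx, values[idx]              # only one input is kept

values = np.array([[1.0, 0.0], [0.0, 1.0], [3.0, 3.0]])
scores = np.array([0.1, 0.2, 2.0])       # hypothetical relevance scores
w_soft, out_soft = soft_attention(values, scores)
idx, out_hard = hard_attention(values, scores, np.random.default_rng(0))
```

The soft output blends every element (at the cost of touching the whole input), while the hard output is exactly one of the original elements, chosen stochastically.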
Inference time and computational costs are reduced compared to soft mechanisms, since the entire input does not need to be stored or processed. Figure~\ref{fig:hard_attention_example} shows an intuitive example of a hard attention mechanism.\n\begin{figure}[ht]\n \centering\n \includegraphics[width=0.90\textwidth]{img/hard_attention_example.pdf}\n \caption{An intuitive example of Hard Attention. Given an image and a textual question as input, the Visual QA architecture outputs an answer. It uses a hard attention mechanism that \textbf{selects only} the important visual features for the task for further processing. }\n \label{fig:hard_attention_example}\n\end{figure}\n\textbf{Self-Attention}. Self-attention quantifies the interdependence between the input elements of the mechanism. This mechanism allows the inputs to interact with each other (\"self\") and determine what they should pay more attention to. The self-attention layer's main advantage over soft and hard mechanisms is its ability to compute in parallel over long inputs. The layer computes attention among all input elements using simple, easily parallelizable matrix calculations. Figure~\ref{fig:self_attention_example} shows an intuitive example of a self-attention mechanism.\n\begin{figure}[ht]\n \centering\n \includegraphics[width=0.90\textwidth]{img/self_attention_example.pdf}\n \caption{Self-Attention examples. a) Self-attention in sentences.\n b) Self-attention in images. The first image shows five representative query locations with color-coded dots, with the corresponding color-coded arrows summarizing the most-attended regions.}\n \label{fig:self_attention_example}\n\end{figure}", "id": "8fffd84f-2212-4774-84a1-20ea6256a1f8", "level": "section", "origin_cites_number": 0, "parent_id": "25aabc07-c583-4f10-8638-f4d1afc51cb2", "prefix_titles": [ [ "title", "Attention, please! 
A survey of Neural Attention Models in Deep Learning" ], [ "section", "Attention Mechanisms" ] ], "subsections": [], "title": "Attention Mechanisms" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sub:attention_classic_architectures}\nThis section introduces details about attentional interfaces in classic DL architectures. Specifically, we present the uses of attention in convolutional networks, recurrent networks, and generative models.\n\input{attention_based_cnns}\n\input{attention_based_rnns}\n\input{attention_based_generative}", "id": "eb87d2c5-07a0-49c2-87d8-8983f28d475b", "level": "section", "origin_cites_number": 0, "parent_id": "25aabc07-c583-4f10-8638-f4d1afc51cb2", "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ], [ "section", "Attention-based Classic Deep Learning Architectures" ] ], "subsections": [], "title": "Attention-based Classic Deep Learning Architectures" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:applications}\nIn a few years, neural attention networks have been used in numerous domains due to their versatility, interpretability, and significance of results. These networks have been explored mainly in computer vision, natural language processing, and multi-modal tasks, as shown in Figure~\ref{fig:applications}. In some applications, these models transformed the area entirely (e.g., question-answering, machine translation, document representations/embeddings, graph embeddings), mainly due to significant performance impacts on the task in question. In others, they helped learn better representations and deal with temporal dependencies over long distances. This section explores a list of application domains and subareas, mainly discussing each domain's main models and how it benefits from attention. 
We also present the most representative instances within each area and list them with reference approaches in a wide range of applications.\n\begin{figure*}[htb]\n \centering\n \includegraphics[width=0.95\textwidth]{img/applications.pdf}\n \caption{Diagram showing the main existing applications of neural attention networks. The main areas are Natural Language Processing (NLP), Computer Vision (CV), multimodal tasks (mainly with images and texts - CV/NLP), reinforcement learning (RL), robotics, recommendation systems, and others (e.g., graph embeddings, interpretability).}\n \label{fig:applications}\n\end{figure*}\n\input{applications/nlp}\n\input{applications/computer_vision}\n\input{applications/cv_nlp}\n\input{applications/recommender}\n\input{applications/rl}\n\input{applications/robotics}\n\input{applications/interpretability}", "id": "7295f067-9e4d-4cf1-a4a3-b025da57d328", "level": "section", "origin_cites_number": 0, "parent_id": "25aabc07-c583-4f10-8638-f4d1afc51cb2", "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ], [ "section", "Applications" ] ], "subsections": [], "title": "Applications" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:trends}\nAttention has been one of the most influential ideas in the Deep Learning community in recent years, with several profound advances, mainly in computer vision and natural language processing. However, there is much space to grow, and many contributions are still to appear. In this section, we highlight some gaps and opportunities in this scenario.", "id": "75e63f10-91ba-40b8-88f1-78f077a10310", "level": "section", "origin_cites_number": 0, "parent_id": "25aabc07-c583-4f10-8638-f4d1afc51cb2", "prefix_titles": [ [ "title", "Attention, please! 
A survey of Neural Attention Models in Deep Learning" ], [ "section", "Trends and Opportunities" ] ], "subsections": [ "d2f635e2-e10f-4b16-805f-07a5b18afa1b", "f7e3dbb5-fa11-4416-a83a-e65df3c40dc5", "bfa7d747-25dc-439a-af6c-2d9f08240eb3", "4f44a650-fd02-4169-b323-49b6535b7666", "ae5327bc-c6b3-4d92-bd10-1bd4a85c3c06", "b0e48451-b816-4293-956a-91ab69f219b4", "f1e2932c-68ee-4008-a710-0876a562c5b1", "f0681cc5-f1e8-4d84-911c-f7d8c0f09488", "8776b886-f174-4e1e-9608-b972d9c48dca", "5e5d9264-172a-425e-bfb5-f9a3916595e0", "f3276cba-fff9-4a36-ade2-0bfd808d3c80" ], "title": "Trends and Opportunities" }, { "cite_extract_rate": 0.4, "cites": [ 180, 679 ], "content": "Over the past eight years, most of the papers published in the literature have involved attentional mechanisms. Models that are state of the art in DL use attention. Specifically, we note that end-to-end attention networks, such as Transformers~ and Graph Attention Networks~, have been expanding significantly and have been used successfully in tasks across multiple domains (Section~\ref{sub:attention_overview}). In particular, Transformer has introduced a new form of computing in which the neural network's core is fully attentional. Transformer-based language models like BERT~, GPT-2~, and GPT-3~ are the most advanced language models in NLP. Image GPT~ has recently revolutionized the results of unsupervised learning in imaging. It is already a trend to propose Transformer-based models with sparse attentional mechanisms to reduce the Transformer's complexity from quadratic to linear and to use attentional mechanisms to deal with multimodality in GATs. However, Transformer is still an autoregressive architecture in the decoder and does not use other cognitive mechanisms such as memory. 
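At the heart of these fully attentional models is scaled dot-product self-attention, which can be written compactly; the following is a single-head NumPy sketch without the learned query/key/value projections, multiple heads, or masking of real Transformers:

```python
import numpy as np

def self_attention(X):
    """Single-head scaled dot-product self-attention with Q = K = V = X.

    Every position attends to every position: softmax(X X^T / sqrt(d)) X.
    Returns the attention matrix and the attended representation.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                 # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=-1, keepdims=True)            # each row is a distribution
    return A, A @ X                               # attention-weighted mixture

X = np.random.default_rng(0).normal(size=(4, 8))  # 4 tokens, dimension 8
A, out = self_attention(X)
```

Everything here is a matrix product plus a row-wise softmax, which is why the layer parallelizes so well across positions compared with recurrent computation.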
As research in attention and DL is still at an early stage, there is plenty of space in the literature for new attentional mechanisms, and we believe that end-to-end attention architectures might be very influential in Deep Learning's future models.", "id": "d2f635e2-e10f-4b16-805f-07a5b18afa1b", "level": "subsection", "origin_cites_number": 5, "parent_id": "75e63f10-91ba-40b8-88f1-78f077a10310", "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ], [ "section", "Trends and Opportunities" ], [ "subsection", "End-To-End Attention models" ] ], "subsections": [], "title": "End-To-End Attention models" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 2662 ], "content": "Attention has played a crucial role in the growth of learning from multimodal data. Multimodality is extremely important for learning complex tasks. Human beings use different sensory signals all the time to interpret situations and decide which action to take. For example, while recognizing emotions, humans use visual data, gestures, and voice tones to analyze feelings. Attention allowed models to learn the synergistic relationship between the different sensory data, even if they are not synchronized, allowing the development of increasingly complex applications, mainly in emotion recognition~, sentiment analysis~, and language-based image generation~. We note that multimodal applications have been growing continually in recent years. However, most research efforts are still focused on relating a pair of sensory data, mostly visual and textual data. Architectures that can scale easily to handle more than one pair of sensors are not yet widely explored. 
Multimodal learning exploring voice data, RGBD images, images from monocular cameras, and data from various sensors, such as accelerometers, gyroscopes, GPS, RADAR, and biomedical sensors, is still scarce in the literature.", "id": "f7e3dbb5-fa11-4416-a83a-e65df3c40dc5", "level": "subsection", "origin_cites_number": 3, "parent_id": "75e63f10-91ba-40b8-88f1-78f077a10310", "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ], [ "section", "Trends and Opportunities" ], [ "subsection", "Learning Multimodality" ] ], "subsections": [], "title": "Learning Multimodality" }, { "cite_extract_rate": 1, "cites": [ 2653, 2634, 9093 ], "content": "Attention proposed a new way of thinking about the architecture of neural networks. For many years, the scientific community neglected using other cognitive elements in neural network architectures, such as memory and logic flow control. Attention has made it possible to include in neural networks other elements that are widely important in human cognition. Memory Networks~ and Neural Turing Machine~ are essential approaches in which attention performs updates and retrievals in external memory. However, research on this topic is at an early stage. The Neural Turing Machine has not yet been explored in several application domains, being used only on simple datasets for algorithmic tasks, with slow and unstable convergence. We believe that there is plenty of room to explore the advantages of NTM in a wide range of problems and to develop more stable and efficient models. Still, Memory Networks~ presents some developments (Section~\ref{sub:attention_overview}), but few studies explore the use of attention to manage complex and hierarchical memory structures. Attention for managing different memory types simultaneously (i.e., working memory, declarative, non-declarative, semantic, and long- and short-term) is still absent in the literature. 
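The attentional read at the core of these memory-augmented models, content-based addressing, amounts to a softmax over similarities between a controller key and every memory row. The following is an illustrative NumPy toy in the spirit of NTM reads, not the full NTM addressing pipeline (which also includes interpolation, shifting, and sharpening):

```python
import numpy as np

def content_read(memory, key, beta=5.0):
    """Soft read from external memory by cosine similarity.

    memory: (N, d) matrix of memory rows.
    key:    (d,) query vector emitted by the controller.
    beta:   sharpening factor; larger beta focuses on the best match.
    Returns attention weights over rows and the weighted read vector.
    """
    m_norm = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    k_norm = key / np.linalg.norm(key)
    sim = m_norm @ k_norm                 # cosine similarity per row
    w = np.exp(beta * sim)
    w /= w.sum()                          # soft attention over memory rows
    return w, w @ memory                  # differentiable read

M = np.eye(4) * 2.0                       # 4 orthogonal memory rows
w, r = content_read(M, key=np.array([1.0, 0.0, 0.0, 0.0]))
```

Because the read is a softmax-weighted sum rather than a discrete lookup, the whole memory access stays differentiable and trainable end to end.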
To the best of our knowledge, the most significant advances have been made in Dynamic Memory Networks~ with the use of episodic memory. Another open challenge is how to use attention to plug external knowledge into memory and make training faster. Finally, undoubtedly one of the biggest challenges still lies in including other human cognition elements such as imagination, reasoning, creativity, and consciousness working in harmony with attentional structures.", "id": "bfa7d747-25dc-439a-af6c-2d9f08240eb3", "level": "subsection", "origin_cites_number": 3, "parent_id": "75e63f10-91ba-40b8-88f1-78f077a10310", "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ], [ "section", "Trends and Opportunities" ], [ "subsection", "Cognitive Elements" ] ], "subsections": [], "title": "Cognitive Elements" }, { "cite_extract_rate": 1, "cites": [ 2639, 746 ], "content": "Recurrent Attention Models (RAM)~ introduced a new form of image computing using glimpses and hard attention. The architecture is simple, scalable, and flexible. Spatial Transformer (STN)~ presented a simple module for learning image transformations that can be easily plugged into different architectures. We note that RAM has a high potential for many tasks in which convolutional neural networks have difficulties, such as large, high-resolution images. However, currently, RAM has been explored with simple datasets. We believe that it is interesting to validate RAM in complex classification and regression tasks. Another proposal is to add new modules to the architecture, such as memory, multimodal glimpses, and scaling. It is interesting to explore STN in conjunction with RAM in classification tasks or use STN to predict transformations between sets of images. 
RAM aligned with STN can help address robustness to spatial transformations, learn the system dynamics in visual odometry tasks, enhance multiple-instance learning, and address multiple viewpoints.", "id": "4f44a650-fd02-4169-b323-49b6535b7666", "level": "subsection", "origin_cites_number": 2, "parent_id": "75e63f10-91ba-40b8-88f1-78f077a10310", "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ], [ "section", "Trends and Opportunities" ], [ "subsection", "Computer Vision" ] ], "subsections": [], "title": "Computer Vision" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 2663, 2664, 306 ], "content": "Capsule networks (CapsNets), a new class of deep neural network architectures proposed recently by Hinton et al.~, have shown excellent performance in many fields, particularly in image recognition and natural language processing. However, few studies in the literature implement attention in capsule networks. AR CapsNet~ implements a dynamic routing algorithm where routing between capsules is made through\nan attention module. The attention routing is a fast forward pass that keeps spatial information. DA-CapsNet~ proposes a dual attention mechanism: the first layer is added after the convolution layer, and the second layer is added after the primary caps. SACN~ is the first model that incorporates the self-attention mechanism as an integral layer. Recently, Tsai et al.~ introduced a new attentional routing mechanism in which a daughter capsule is routed to a parent capsule based on the agreement between the parent's state and the daughter's vote. We particularly believe that attention is essential to improve the relational and hierarchical nature that CapsNets propose. 
The development of works aiming at dynamic attentional routing of the capsules and at incorporating self-attention, soft attention, and hard attention into capsules can bring significant results to current models.", "id": "ae5327bc-c6b3-4d92-bd10-1bd4a85c3c06", "level": "subsection", "origin_cites_number": 5, "parent_id": "75e63f10-91ba-40b8-88f1-78f077a10310", "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ], [ "section", "Trends and Opportunities" ], [ "subsection", "Capsule Neural Network" ] ], "subsections": [], "title": "Capsule Neural Network" }, { "cite_extract_rate": 0.75, "cites": [ 166, 2665, 9093, 1086, 7601, 2634 ], "content": "According to LeCun~, one of the great challenges of artificial intelligence is to combine the robustness of connectionist systems (i.e., neural networks) with symbolic representation to perform complex reasoning tasks. While symbolic representation is highly recursive and declarative, neural networks encode knowledge implicitly by adjusting weights. For many decades, exploring the fusion between connectionist and symbolic systems was overlooked by the scientific community. Only over the past decade has research on hybrid approaches using the two families of AI methodologies grown again. Approaches such as statistical relational learning (SRL)~ and neural-symbolic learning~ were proposed. Recently, attention mechanisms have been integrated into some neural-symbolic models, whose development is still at an early stage. Memory Networks~ (Section~\ref{sub:attention_overview}) and Neural Turing Machine~ (Section~\ref{sub:attention_overview}) were the first initiatives to include reasoning in deep connectionist models.\nIn the context of neural logic programming, attention has been exploited to reason over knowledge graphs or memory structures to combine the learning of parameters and structures of logical rules. 
Neural Logic Programming~ uses attention on a neural controller that learns to select a subset of operations and memory content to execute first-order rules. Logic Attention Networks~ facilitates inductive KG embedding and uses attention to aggregate information coming from graph neighbors with rules and attention weights. pGAT~ applies attention to knowledge base completion, which involves the prediction of missing relations between entities in a knowledge graph. While producing remarkable advances, recent approaches to reasoning with deep networks do not adequately address the task of symbolic reasoning. Current efforts only use attention to ensure efficient memory management. We believe that attention can be better explored to understand which pieces of knowledge are relevant to formulate a hypothesis and provide a correct answer, abilities that are rarely present in current neural reasoning systems.", "id": "b0e48451-b816-4293-956a-91ab69f219b4", "level": "subsection", "origin_cites_number": 8, "parent_id": "75e63f10-91ba-40b8-88f1-78f077a10310", "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ], [ "section", "Trends and Opportunities" ], [ "subsection", "Neural-Symbolic Learning and Reasoning" ] ], "subsections": [], "title": "Neural-Symbolic Learning and Reasoning" }, { "cite_extract_rate": 1, "cites": [ 2666 ], "content": "Incremental learning is one of the challenges for the DL community in the coming years. Machine learning classifiers are trained to recognize a fixed set of classes. However, it is desirable to have the flexibility to learn additional classes with limited data without re-training on the complete training set. Attention can significantly contribute to advances in the area and has been little explored. Ren et al.~ were the first to introduce seminal work in the area. They use Attention Attractor Networks to regularize the learning of new classes. 
In each episode, a set of new weights is trained to recognize new classes until they converge. Attention Attractor Networks help recognize new classes while remembering previously learned classes without revisiting the original training set.", "id": "f1e2932c-68ee-4008-a710-0876a562c5b1", "level": "subsection", "origin_cites_number": 1, "parent_id": "75e63f10-91ba-40b8-88f1-78f077a10310", "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ], [ "section", "Trends and Opportunities" ], [ "subsection", "Incremental Learning" ] ], "subsections": [], "title": "Incremental Learning" }, { "cite_extract_rate": 1, "cites": [ 2667 ], "content": "In Reinforcement Learning (RL), an action that leads to a higher final cumulative reward should have more value. Therefore, more \"credit\" should be assigned to it than to an action that leads to a lower final reward. However, measuring the individual contribution of actions to future rewards is not simple and has been studied by the RL community for years. There are at least three variations of the credit assignment problem (CAP) that have been explored. The temporal CAP refers to identifying which actions were useful or useless in obtaining the final feedback. The structural CAP seeks to find the set of sensory situations in which a given sequence of actions will produce the same result. The transfer CAP refers to learning how to generalize a sequence of actions across tasks. Few works in the literature explore attention for the CAP problem. We believe that attention will be fundamental to advancing credit assignment research. 
Recently, Ferret et al.~ started the first research in the area by proposing a seminal work that uses attention to learn how to assign credit through a separate supervised problem and to transfer credit assignment capabilities to new environments.", "id": "f0681cc5-f1e8-4d84-911c-f7d8c0f09488", "level": "subsection", "origin_cites_number": 1, "parent_id": "75e63f10-91ba-40b8-88f1-78f077a10310", "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ], [ "section", "Trends and Opportunities" ], [ "subsection", "Credit Assignment Problem (CAP)" ] ], "subsections": [], "title": "Credit Assignment Problem (CAP)" }, { "cite_extract_rate": 0.8571428571428571, "cites": [ 8575, 8574, 180, 2639, 2668, 1877 ], "content": "There are investigations to verify attention as an interpretability tool. Some recent studies suggest that attention can be considered reliable for this purpose. However, other researchers criticize the use of attention weights as an analytical tool. Jain and Wallace~ argued that attention is not consistent with other explainability metrics and that it is easy to create attention distributions different from those of the trained model that nonetheless produce the same result. Their conclusion is that changing attention weights does not significantly affect the model's prediction, contrary to research by Rudin~ and Riedl~ (Section~\ref{sec:applications_interpretability}). On the other hand, some studies have found that attention in neural models captures various notions of syntax and co-reference~~~. Amid such confusion, Vashishth et al.~ investigated attention more systematically. They attempted to justify the two types of observation (that is, when attention is and is not interpretable), employing various experiments on several NLP tasks. The conclusion was that attention weights are interpretable and correlated with feature importance metrics. 
However, this is only valid for cases where the weights are essential for the model's predictions and cannot simply be reduced to a gating unit. Despite the existing studies, there are numerous research opportunities to develop systematic methodologies for analyzing attention as an interpretability tool. The current conclusions are based on experiments with few architectures in a specific set of NLP applications.", "id": "8776b886-f174-4e1e-9608-b972d9c48dca", "level": "subsection", "origin_cites_number": 7, "parent_id": "75e63f10-91ba-40b8-88f1-78f077a10310", "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ], [ "section", "Trends and Opportunities" ], [ "subsection", "Attention and Interpretability" ] ], "subsections": [], "title": "Attention and Interpretability" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 166, 1255, 2669, 2634, 7194 ], "content": "In the last decade, unsupervised learning has also been recognized as one of the most critical challenges of machine learning since, in fact, human learning is mainly unsupervised~. Some works have recently successfully explored attention within purely unsupervised models. In GANs, attention has been used to improve the global perception of a model (i.e., the model learns which part of the image gives more attention to the others). SAGAN~ was one of the pioneering efforts to incorporate self-attention in convolutional GANs to improve the quality of the generated images. Image Transformer is an end-to-end attention network created to generate high-resolution images that significantly surpassed the state of the art on ImageNet in 2018. AttGan~ uses attention to easily take advantage of multimodality to improve image generation. 
Combining a region of the image with a corresponding part of the word-context vector helps generate new features with more details at each stage.\nAttention has still been little explored as a means to make generative models simpler, scalable, and more stable. Perhaps the only approach in the literature to explore such aspects more deeply is DRAW~, which presents a simple, sequential way to generate images, making it possible to refine image patches as more information is captured sequentially. However, the architecture was tested only on simple datasets, leaving open space for new developments. Attention has not been much explored with autoencoders. Using VAEs, Bornschein et al.~ augmented generative models with external memory and used an attentional system to address and retrieve the corresponding memory content.\nIn Natural Language Processing, attention is explored in unsupervised models mainly to extract aspects for sentiment analysis. It is also used within autoencoders to generate semantic representations of phrases~. However, most studies still use attention with supervised learning, and the few existing approaches focus on computer vision and NLP. Therefore, we believe that there is still a long path for research and exploration of attention in the unsupervised context. In particular, we note that the construction of purely bottom-up attentional systems, accompanied by inhibition-of-return mechanisms, is not explored in the literature; especially in the context of unsupervised learning, such systems can be of great value.", "id": "5e5d9264-172a-425e-bfb5-f9a3916595e0", "level": "subsection", "origin_cites_number": 7, "parent_id": "75e63f10-91ba-40b8-88f1-78f077a10310", "prefix_titles": [ [ "title", "Attention, please! 
A survey of Neural Attention Models in Deep Learning" ], [ "section", "Trends and Opportunities" ], [ "subsection", "Unsupervised Learning" ] ], "subsections": [], "title": "Unsupervised Learning" }, { "cite_extract_rate": 1, "cites": [ 180, 2634, 746 ], "content": "Although attention has been used in several domains, there are still potential applications that can benefit from it. Time series prediction, medical applications, and robotics applications are little-explored areas of the literature. Predicting time series becomes challenging as the size of the series increases. Attentional neural networks can contribute significantly to improving results. Specifically, we believe that exploring RAM~ with multiple glimpses looking at different parts of the series or different frequency ranges can introduce a new way of computing time series. In medical applications, there are still few works that explore biomedical signals in attentional architectures. There are opportunities to apply attention across applications, ranging from segmentation and image classification and support for disease diagnosis to support for treatments of chronic diseases such as Parkinson's and Alzheimer's.\nFor robotics, there are countless opportunities. For years, the robotics community has been striving for robots to perform tasks safely and with behaviors closer to humans. However, for this, DL techniques need to cope well with multimodality, active learning, incremental learning, unknown identification, uncertainty estimation, object and scene semantics, reasoning, awareness, and planning. Architectures like RAM~, DRAW~, and Transformer~ can contribute a lot when applied to visual odometry, SLAM, and mapping tasks.", "id": "f3276cba-fff9-4a36-ade2-0bfd808d3c80", "level": "subsection", "origin_cites_number": 3, "parent_id": "75e63f10-91ba-40b8-88f1-78f077a10310", "prefix_titles": [ [ "title", "Attention, please! 
A survey of Neural Attention Models in Deep Learning" ], [ "section", "Trends and Opportunities" ], [ "subsection", "New Tasks and Robotics" ] ], "subsections": [], "title": "New Tasks and Robotics" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:conclusion}\nIn this survey, we presented a systematic review of the literature on attention in Deep Learning to overview the area from its main approaches, historical landmarks, uses of attention, applications, and research opportunities. In total, we critically analyzed more than 600 relevant papers published from 2014 to the present. To the best of our knowledge, this is the broadest survey in the literature, given that most of the existing reviews cover only particular domains with a slightly smaller number of reviewed works. Throughout the paper, we have identified and discussed the relationship between attention mechanisms in established deep neural network models, emphasizing CNNs, RNNs, and generative models. We discussed how attention led to performance gains, improvements in computational efficiency, and a better understanding of networks' knowledge. We present an exhaustive list of application domains discussing the main benefits of attention, highlighting each domain's most representative instances. We also showed recent discussions about attention on the explanation and interpretability of models, a branch of research that is widely discussed today. Finally, we present what we consider trends and opportunities for new developments around attentive models. 
We hope that this survey will help the audience understand the different existing research directions and provide significant scientific community background in generating future research.\nIt is worth mentioning that our survey results from an extensive and exhaustive process of searching, filtering, and critical analysis of papers published between 01/01/2014 until 15/02/2021 in the central publication repositories for machine learning and related areas. In total, we collected more than 20,000 papers. After successive automatic and manual filtering, we selected approximately 650 papers for critical analysis and more than 6,000 for quantitative analyses, which correspond mainly to identifying the main application domains, places of publication, and main architectures. For automatic filtering, we use keywords from the area and set up different combinations of filters to eliminate noise from psychology and classic computational visual attention techniques (i.e., saliency maps). In manual filtering, we separate the papers by year and define the originality and number of citations of the work as the main selection criteria. In the appendix, we provide our complete methodology and links to our search codes to facilitate improving future revisions on any topic in the area.\nWe are currently complementing this survey with a theoretical analysis of the main neural attention models. This complementary survey will help to address an urgent need for an attentional framework supported by taxonomies based on theoretical aspects of attention, which predate the era of Deep Learning. The few existing taxonomies in the area do not yet use theoretical concepts and are challenging to extend to various architectures and application domains. 
Taxonomies inspired by classical concepts are essential to understand how attention has acted in deep neural networks and whether the roles played corroborate with theoretical foundations studied for more than 40 years in psychology and neuroscience. This study is already in the final stages of development by our team and will hopefully help researchers develop new attentional structures with functions still little explored in the literature. We hope to make it available to the scientific community as soon as possible.\n\\section*{Appendix}\n\\label{sec:systematic_review}\nThis survey employs a systematic review (SR) approach aiming to collect, critically evaluate, and synthesize the results of multiple primary studies concerning Attention in Deep Learning. The selection and evaluation of the works should be meticulous and easily reproducible. Also, SR should be objective, systematic, transparent, and replicable. Although recent, the use of attention in Deep Learning is extensive. Therefore, we systematically reviewed the literature, collecting works from a variety of sources. SR consists of the following steps: defining the scientific questions, identifying the databases, establishing the criteria for selecting papers, searching the databases, performing a critical analysis to choose the most relevant works, and preparing a critical summary of the most relevant papers, as shown Figure~\\ref{fig:systematic_review_steps}. \n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.50\\linewidth]{img/systematic_review.pdf}\n \\caption{Steps of the systematic review used in this survey.}\n \\label{fig:systematic_review_steps}\n\\end{figure}\nThis survey covers the following aspects: 1) The uses of attention in Deep Learning; 2) Attention mechanisms; 3) Uses of attention; 4) Attention applications; 5) Attention and interpretability; 6) Trends and challenges. 
These aspects provide the main topics regarding attention in Deep Learning, which can help understand the field's fundamentals. The second step identifies the main databases in the machine learning area, such as arXiv, DeepMind, Google AI, OpenAI, Facebook AI research, Microsoft research, Amazon research, Google Scholar, IEEE Xplore, DBLP, ACM, NIPS, ICML, ICLR, AAAI, CVPR, ICCV, CoRR, IJCNN, Neurocomputing, and Google general search (including blogs, distill, and Quora). Our searching period comprises 01/01/2014 to 06/30/2019 (first stage) and 07/01/2019 to 02/15/2021 (second stage), and the search was performed via a Python script~\\footnotemark[2]. The papers' title, abstract, year, DOI, and source publication were downloaded and stored in a JSON file. The most appropriate set of keywords to perform the searches was defined through preliminary searches of the field combined with expert knowledge from our research group. The final set of keywords was: attention, attentional, and attentive.\n\\footnotetext[2]{https://github.com/larocs/attention\\_dl}\nHowever, these keywords are also relevant in psychology and visual attention. Hence, we performed a second selection to eliminate these papers, remove duplicates, and discard papers unrelated to the DL field. After removing duplicates, 18,257 different papers remained. In the next selection step, we performed a sequential combination of three types of filters: 1) Filter I: Selecting the works with general terms of attention (i.e., attention, attentive, attentional, saliency, top-down, bottom-up, memory, focus, and mechanism); 2) Filter II: Selecting the works with terms related to DL (i.e.
deep learning, neural network, ann, dnn, deep neural, encoder, decoder, recurrent neural network, recurrent network, rnn, long short term memory, long short-term memory, lstm, gated recurrent unit, gru, autoencoder, ae, variational autoencoder, vae, denoising ae, dae, sparse ae, sae, markov chain, mc, hopfield network, boltzmann machine, em, restricted boltzmann machine, rbm, deep belief network, dbn, deep convolutional network, dcn, deconvolution network, dn, deep convolutional inverse graphics network, dcign, generative adversarial network, gan, liquid state machine, lsm, extreme learning machine, elm, echo state network, esn, deep residual network, drn, kohonen network, kn, turing machine, ntm, convolutional network, cnn, and capsule network); 3) Filter III: Selecting the works with specific words of attention in Deep Learning (i.e., attention network, soft attention, hard attention, self-attention, self attention, deep attention, hierarchical attention, transformer, local attention, global attention, coattention, co-attention, flow attention, attention-over-attention, way attention, intra-attention, self-attentive, and self attentive).\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.62\\linewidth]{img/systematic_review_tree.pdf}\n \\caption{The filter strategies for selecting the relevant works. Search I corresponds to articles collected between 01/01/2014 and 06/30/2019 (first stage), and Search II corresponds to papers collected between 07/01/2019 and 02/15/2021 (second stage).}\n \\label{fig:filter_papers_steps}\n\\end{figure}\nThe decision tree with the second selection is shown in Figure~\\ref{fig:filter_papers_steps}. The third filtering selects works with at least one specific term of attention in deep learning.
In the next filtering, for papers without an abstract, the collection of filters verifies that there is at least one specific Deep Learning term and removes the works with the following keywords: visual attention, saliency, and eye-tracking. For the papers with an abstract, the selection is more complex, requiring three cascaded conditions: 1) First condition: Selecting the works that have more than five filter terms from filter II; 2) Second condition: selecting the works that have between three and five terms from filter II and where there is at least one of the following: attention model, attention mechanism, attention, or attentive; 3) Third condition: Selecting the works with one or two terms from filter II; without the terms: salience, visual attention, attentive, and attentional mechanism. A total of 6,338 works remained for manual selection. We manually excluded the papers without a title, abstract, or introduction related to the DL field. After manual selection, 3,567 works were stored in \\textbf{Zotero}. Given the number of papers, we grouped them by year and chose those above a threshold (average citations in the group). Only works above average were read and classified as relevant or not for critical analysis. To find the number of citations, we automated the process with a Python script. 650 papers were considered relevant for this survey's critical analysis, and 6,567 were used to perform quantitative analyses. \n\\bibliographystyle{unsrt} \n\\bibliography{references} \n\\end{document}", "id": "9a62d0da-c761-4ac4-a34d-c5ccef979e7c", "level": "section", "origin_cites_number": 0, "parent_id": "25aabc07-c583-4f10-8638-f4d1afc51cb2", "prefix_titles": [ [ "title", "Attention, please! A survey of Neural Attention Models in Deep Learning" ], [ "section", "Conclusions" ] ], "subsections": [], "title": "Conclusions" } ]
63
[ 180, 2635, 9093, 1515, 2633, 2634, 2636, 728, 168, 303, 2637, 746, 2643, 167, 1877, 2640, 2639, 1141, 2641, 7186, 2638, 2642, 7194, 2650, 2649, 2644, 2645, 2648, 862, 2646, 7599, 2647, 2652, 2651, 9109, 7600, 553, 2653, 1966, 679, 2654, 2657, 248, 2655, 2656, 2658, 252, 7371, 793, 2661, 2659, 7107, 676, 2660, 2662, 2663, 2664, 306, 166, 2665, 1086, 7601, 2666, 2667, 8575, 8574, 2668, 1255, 2669 ]
1.461655
[ "Piyawat Lertvittayakumjorn", "Francesca Toni" ]
Explanation-Based Human Debugging of NLP Models: A Survey
2021
2021-04-30T17:53:07Z
cs.CL
Debugging a machine learning model is hard since the bug usually involves the training data and the learning process. This becomes even harder for an opaque deep learning model if we have no clue about how the model actually works. In this survey, we review papers that exploit explanations to enable humans to give feedback and debug NLP models. We call this problem \emph{\ebhd (\EBHD)}. In particular, we categorize and discuss existing work along three dimensions of \EBHD (the bug context, the workflow, and the experimental setting), compile findings on how \EBHD components affect the feedback providers, and highlight open problems that could be future research directions.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "26e7485a-9b67-4600-a0c0-c92ba6777268", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ] ], "subsections": [ "bb8d881f-5a9a-4d40-9b4d-6a1e4f8408ac", "c98144b9-7fac-4a44-8376-9fd8e9053e10", "49eb1249-b5bf-4767-a98f-51e47db0ee54", "02a9b3d3-7c0f-4572-b628-bf3bb8efe52a", "69b484af-4448-4700-b87f-90338424cf86" ], "title": "root" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 4865, 7507, 4864, 7696, 8847, 4863 ], "content": "Explainable AI focuses on generating explanations for AI models as well as for their predictions. \nIt is gaining more and more attention these days since explanations are necessary in many \napplications, especially in high-stake domains such as healthcare, law, transportation, and finance . \nSome researchers have explored various merits of explanations to humans, such as \nsupporting human decision making ,\nincreasing human trust in AI \nand even teaching humans to perform challenging tasks .\nOn the other hand, explanations can benefit the AI systems as well, e.g.,\nwhen explanations are used\nto promote system acceptance ,\nto verify the model reasoning ,\nand to find potential causes of errors . \nIn this paper, we review progress to date \nspecifically on how explanations have been used in the literature to enable humans to fix bugs in NLP models. 
We refer to this research area as \\emph{\\ebhd (\\EBHD)}, as a general umbrella term encompassing \\emph{explanatory debugging} and \\emph{human-in-the-loop debugging} .\nWe define \\EBHD as the process of fixing or mitigating bugs in a trained model using human feedback given in response to explanations for the model.\n\\EBHD is helpful when the training data at hand leads to suboptimal models (due, for instance, to biases \nor artifacts in the data), and hence human knowledge is needed to verify and improve the trained models. \nIn fact, \\EBHD is related to three challenging and intertwined issues in NLP: explainability , interactive and human-in-the-loop learning , and knowledge integration . \nAlthough there are overviews for each of these topics\n(as cited above), our paper is the first to draw connections among the three towards the final application of model debugging in NLP.\n\\begin{figure*}[t] \n \\centering\n \\includegraphics[width=0.90\\linewidth]{figs/workflow-options-7.pdf}\n \\caption{A general framework for \\ebhd (\\EBHD) of NLP models, consisting of the inspected (potentially buggy) model, the humans providing feedback, and a three-step workflow. 
Boxes list examples of the options (considered in the selected studies) for the components or steps in the general framework.} \\label{fig:overview}\n\\end{figure*}\n\\begin{figure*}[t] \n \\centering\n \\includegraphics[width=0.90\\linewidth]{figs/LIME2.pdf}\n \\caption{The proposal by as an instance of the general \\EBHD framework.} \\label{fig:lime}\n\\end{figure*}\nWhereas most people agree on the meaning of the term \\emph{bug} in software engineering, various meanings have been ascribed to this term in machine learning (ML) research.\nFor example, considered bugs as implementation errors, similar to software bugs, while\n defined a bug as a particularly damaging or inexplicable test error.\nIn this paper, we follow the definition of (model) bugs from as contamination in the learning and/or prediction pipeline that makes the model produce incorrect predictions or learn error-causing associations. Examples of bugs include spurious correlation, labelling errors, and undesirable behavior in out-of-distribution (OOD) testing.\nThe term \\emph{debugging} is also interpreted differently by different researchers. \nSome consider debugging as a process of identifying or uncovering causes of model errors , while others stress that debugging must not only reveal the causes of problems but also fix or mitigate them . In this paper, we adopt the latter interpretation.", "id": "bb8d881f-5a9a-4d40-9b4d-6a1e4f8408ac", "level": "section", "origin_cites_number": 18, "parent_id": "26e7485a-9b67-4600-a0c0-c92ba6777268", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Introduction" ] ], "subsections": [ "209ae615-a488-4929-bb23-a528a6a66b13" ], "title": "Introduction" }, { "cite_extract_rate": 0.3125, "cites": [ 4867, 4866, 4868, 7507, 4869 ], "content": "We focus on work \nusing\nexplanations of NLP models to expose whether there are bugs\nand exploit human feedback to fix the bugs (if any). 
\nTo collect relevant papers, we started from some pivotal \\EBHD work, e.g., , and added \\EBHD papers citing or being cited by the pivotal work, e.g., .\nNext, to ensure that we did not miss any important work, we searched for papers on Semantic Scholar\\footnote{\\url{https://www.semanticscholar.org/}} using the Cartesian product of five keyword sets: \\{debugging\\}, \\{text, NLP\\}, \\{human, user, interactive, feedback\\}, \\{explanation, explanatory\\}, and \\{learning\\}.\nWith 16 queries in total, we collected the top 100 papers (ranked by relevancy) for each query and kept only the ones appearing in at least 2 out of the 16 query results.\nThis resulted in 234 papers which we then manually checked, leading to selecting a few additional papers, including . \nThe overall process resulted in \\numpapers papers in Table~\\ref{tab:papers} as the \\textit{selected studies} primarily discussed in this survey. \nIn contrast, \nsome papers from the following categories appeared in the search results, but were not selected\nsince, strictly speaking, they are not in the main scope of this survey:\ndebugging without explanations ,\ndebugging outside the NLP domain , \nrefining the ML pipeline instead of the model , \nimproving the explanations instead of the model , \nand\nwork centered on revealing but not fixing bugs .", "id": "209ae615-a488-4929-bb23-a528a6a66b13", "level": "paragraph", "origin_cites_number": 16, "parent_id": "bb8d881f-5a9a-4d40-9b4d-6a1e4f8408ac", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Introduction" ], [ "paragraph", "Scope of the survey." ] ], "subsections": [ "8daaf7fc-c2db-4919-9807-e70bee7ffa77" ], "title": "Scope of the survey." 
}, { "cite_extract_rate": 1, "cites": [ 7507 ], "content": "\\EBHD \nconsists of\nthree main \nsteps as shown in Figure~\\ref{fig:overview}.\nFirst, the explanations, which provide interpretable insights into the inspected model and possibly reveal bugs, are \ngiven to humans.\nThen, the humans inspect the explanations and give feedback in response.\nFinally, the feedback is used to update and improve the model.\nThese steps can be carried out once, as a one-off improvement, or iteratively, depending on how the debugging framework is designed. \nAs a concrete example, Figure~\\ref{fig:lime} illustrates how improved an SVM text classifier trained on the 20Newsgroups dataset .\nThis dataset has many\nartifacts which could make the model rely on wrong words or tokens when making predictions, reducing its generalizability\\footnote{For more details, please see section~\\ref{subsec:bugsources}.}. \nTo perform \\EBHD, recruited humans \nfrom a crowdsourcing platform (i.e., Amazon Mechanical Turk) and asked them to inspect LIME explanations\\footnote{LIME stands for Local Interpretable Model agnostic Explanations . For each model prediction, it returns relevance scores for words in the input text to show how important each word is for the prediction.} (i.e., word relevance scores) for model predictions of ten examples.\nThen, the humans gave feedback by identifying words in the explanations that should not have got high relevance scores (i.e., supposed to be the artifacts). These words were then removed from the training data, and the model was retrained. The process was repeated for three rounds, and the results show that the model generalized better after every round. \nUsing the general framework in Figure~\\ref{fig:overview}, we can break the framework of into components \nas depicted in Figure~\\ref{fig:lime}. 
\nThroughout the paper, when reviewing \nthe selected studies,\nwe will use the general framework in Figure~\\ref{fig:overview} for analysis, comparison, and discussion.", "id": "8daaf7fc-c2db-4919-9807-e70bee7ffa77", "level": "paragraph", "origin_cites_number": 1, "parent_id": "209ae615-a488-4929-bb23-a528a6a66b13", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Introduction" ], [ "paragraph", "Scope of the survey." ], [ "paragraph", "General Framework." ] ], "subsections": [ "1e9de09b-b4d3-4874-9241-07ac21aac0cb", "73714224-8905-4062-8e33-4905c175af89" ], "title": "General Framework." }, { "cite_extract_rate": 0, "cites": [], "content": "To avoid confusion, it is worth noting that there are actually two human roles in the \\EBHD process. One, of course, is that of \n\\textit{feedback provider(s)}, looking at the explanations and providing feedback (noted as `Human' in Figure~\\ref{fig:overview}). \nThe other is\nthat of \\textit{model developer(s)}, training the model and organizing the \\EBHD process (not shown in Figure~\\ref{fig:overview}). \nIn practice, a person could be both \nmodel developer and \nfeedback provider. This usually happens during the model validation and improvement phase where the developers try to fix the bugs themselves.\nSometimes, however, other stakeholders could also take the feedback provider role.\nFor instance, if the model is trained to classify electronic medical records, the developers (who are mostly ML experts) hardly have the medical knowledge to provide feedback. So, they may ask doctors acting as consultants to the development team to be the feedback providers during the model improvement phase.\nFurther, \\EBHD can be carried out after deployment, with end users as the feedback providers.\nFor example, a model auto-suggesting the categories of new emails to end users can provide explanations supporting the suggestions as part of its normal operation. 
Also, it can allow the users to provide feedback to both the suggestions and the explanations. \nThen, a routine written by the developers will be triggered to process the feedback and update the model automatically to complete the \\EBHD workflow.\nIn this case, we need to care about the trust, frustration, and expectation of the end users while and after they give feedback.\nIn conclusion, \\EBHD can take place practically both before and after the model is deployed, and many stakeholders can act as the feedback providers, including, but not limited to, the model developers, the domain experts, and the end users.", "id": "1e9de09b-b4d3-4874-9241-07ac21aac0cb", "level": "paragraph", "origin_cites_number": 0, "parent_id": "8daaf7fc-c2db-4919-9807-e70bee7ffa77", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Introduction" ], [ "paragraph", "Scope of the survey." ], [ "paragraph", "General Framework." ], [ "paragraph", "Human Roles." ] ], "subsections": [], "title": "Human Roles." }, { "cite_extract_rate": 0, "cites": [], "content": "Section~\\ref{sec:categorization} explains the choices made by existing work to achieve \\EBHD of NLP models. This illustrates the current state of the field with the strengths and limitations of existing work. 
\nNaturally, though, a successful \\EBHD framework cannot neglect the ``imperfect'' nature of feedback providers, who may not be an ideal oracle.\nHence, Section~\\ref{sec:human_factor} compiles relevant human factors which could affect the effectiveness of the debugging process as well as the satisfaction of the feedback providers.\nAfter that, we identify open challenges of \\EBHD for NLP in Section~\\ref{sec:open_problems} before concluding the paper in Section~\\ref{sec:conclusion}.", "id": "73714224-8905-4062-8e33-4905c175af89", "level": "paragraph", "origin_cites_number": 0, "parent_id": "8daaf7fc-c2db-4919-9807-e70bee7ffa77", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Introduction" ], [ "paragraph", "Scope of the survey." ], [ "paragraph", "General Framework." ], [ "paragraph", "Paper Organization." ] ], "subsections": [], "title": "Paper Organization." }, { "cite_extract_rate": 0.416666666666666, "cites": [ 7516, 4868, 7507, 4870, 4869 ], "content": "\\label{sec:categorization}\n\\begin{table*}[t]\n\\setlength{\\tabcolsep}{4.5pt}\n\\centering\n\\small\n\\begin{tabular}{|L{0.28\\textwidth}| C{0.05\\textwidth} C{0.07\\textwidth} C{0.10\\textwidth}| C{0.05\\textwidth} C{0.06\\textwidth} C{0.08\\textwidth} C{0.05\\textwidth} | C{0.07\\textwidth}|} \n \\hline\n \\multirow{2}{*}{Paper} & \\multicolumn{3}{c|}{Context} & \\multicolumn{4}{c|}{Workflow} & \\multirow{2}{*}{Setting}\\\\ \\cline{2-8}\n & Task & Model & Bug sources & Exp. scope & Exp. 
method & Feedback & Update & \\\\\n \\hline\n &TC&NB&AR&G,L&SE&LB,WS&M,D&SP\\\\\n &TC&NB&SS&L&SE&WO&T&SP\\\\\n &TC&NB&SS&G,L&SE&WO,LB&M,D&SP\\\\\n &TC&NB&AR&G,L&SE&WO,WS&M&SP\\\\\n &TC&SVM&AR&L&PH&WO&D&CS\\\\\n &TC&LR&WL&L&PH&LB&D&SM\\\\\n \\multirow{2}{*}{}&VQA&TellQA&AR&\\multirow{2}{*}{G}&\\multirow{2}{*}{PH}&\\multirow{2}{*}{RU}&\\multirow{2}{*}{D}&\\multirow{2}{*}{SP}\\\\\n &TC&fastText&AR,OD&&&&&\\\\\n &TC&LR&AR&L&PH&WO&D&SM\\\\\n &TQA&NeOp&AR&L&SE&AT&T&NR\\\\\n &TC&LR&WL&L&PH&LB&D&SM\\\\\n &TC&CNN&AR,SS,OD&G&PH&FE&T&CS\\\\\n &TC&NB&AR,SS&L&SE&LB,WO&M,D&CS\\\\\n &TC&LR&WL&L&PH&LB&D&SM\\\\\n &TC&BERT*&AR,OD&L&PH&RE&D,T&SP\\\\\n &NLI&BERT&AR&L&PH&ES&D&SP\\\\\n \\hline\n \\end{tabular}\n\\caption{Overview of existing work on \\EBHD of NLP models. We use abbreviations as follows: \\textbf{Task}: TC = Text Classification (single input), VQA = Visual Question Answering, TQA = Table Question Answering, NLI = Natural Language Inference / \n\\textbf{Model}: NB = Naive Bayes, SVM = Support Vector Machines, LR = Logistic Regression, TellQA = Telling QA, NeOp = Neural Operator, CNN = Convolutional Neural Networks, BERT* = BERT and RoBERTa /\n\\textbf{Bug sources}: AR = Natural artifacts, SS = Small training subset, WL = Wrong label injection, OD = Out-of-distribution tests /\n\\textbf{Exp. scope}: G = Global explanations, L = Local explanations /\n\\textbf{Exp. 
method}: SE = Self-explaining, PH = Post-hoc method /\n\\textbf{Feedback} (form): LB = Label, WO = Word(s), WS = Word(s) Score, ES = Example Score, FE = Feature, RU = Rule, AT = Attention, RE = Reasoning /\n\\textbf{Update}: M = Adjust the model parameters, D = Improve the training data, T = Influence the training process /\n\\textbf{Setting}: SP = Selected participants, CS = Crowdsourced participants, SM = Simulation, NR = Not reported.\n} \\label{tab:papers}\n\\end{table*} \nTable~\\ref{tab:papers} summarizes the selected studies\nalong three dimensions, amounting to \nthe debugging context (i.e., tasks, models, and bug sources), the workflow (i.e., the three steps in our general framework), and the experimental setting (i.e., the mode of human engagement).\nWe will discuss these dimensions with respect to the broader knowledge of explainable NLP and human-in-the-loop learning, to shed light on the current state of \\EBHD of NLP models.", "id": "c98144b9-7fac-4a44-8376-9fd8e9053e10", "level": "section", "origin_cites_number": 12, "parent_id": "26e7485a-9b67-4600-a0c0-c92ba6777268", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Categorization of Existing Work" ] ], "subsections": [ "529a4abe-46e1-4bad-8356-542ec52c9967", "046f9b18-3513-4d56-82c3-c60190ba0aa9", "5b122951-af49-414e-bead-48a02462c5e7" ], "title": "Categorization of Existing Work" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec:context}\nTo demonstrate the debugging process, existing works need to set up the bug situation they aim to fix, including the target NLP task, \nthe inspected ML model,\nand the source of the bug to be addressed.", "id": "529a4abe-46e1-4bad-8356-542ec52c9967", "level": "subsection", "origin_cites_number": 0, "parent_id": "c98144b9-7fac-4a44-8376-9fd8e9053e10", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Categorization of Existing 
Work" ], [ "subsection", "Context" ] ], "subsections": [ "19ba55c1-e53d-44f6-b54b-236a658bbf6e", "f77bbe8b-0a16-4291-a160-8fcbb8182d98", "fdd7cacd-e596-496f-a18e-ea86a4361eb2" ], "title": "Context" }, { "cite_extract_rate": 0.22222222222222202, "cites": [ 7516, 4865 ], "content": "Most papers in Table~\\ref{tab:papers} focus on text classification with single input (TC) for \na variety of specific problems such as e-mail categorization , topic classification , spam classification , sentiment analysis \nand auto-coding of transcripts .\nBy contrast, targeted natural language inference (NLI) which is a type of text-pair classification, predicting whether a given premise entails a given hypothesis.\nFinally, two papers involve question answering (QA), i.e., (focusing on visual question answering (VQA)) and \n(focusing on table question answering (TQA)).\n suggested that most researchers work on TC because, for this task, it is much easier for lay \nparticipants to understand explanations and give feedback (e.g., which keywords should be added or removed from the list of top features)\\footnote{Nevertheless, some specific TC tasks, such as authorship attribution and deceptive review detection , are exceptions because lay people are generally not good at these tasks. Thus, they are not suitable for \\EBHD.}.\nMeanwhile, some other NLP tasks require the feedback providers\nto have linguistic knowledge such as part-of-speech tagging, parsing, and machine translation. \nThe need for linguists or experts renders experiments for these tasks more difficult and costly.\nHowever, we suggest that there are several tasks \nwhere the trained models are prone to be buggy but the tasks are underexplored in the \\EBHD setting, though they are not too difficult to experiment on with lay people.\n\\textit{NLI}, the focus of , is one of them. 
Indeed,\n and showed that NLI models can exploit annotation artifacts and fallible syntactic heuristics to make predictions rather than learning the logic of the actual task.\nOther tasks and their bugs \ninclude:\n\\textit{QA}, where found that the answers from models \nare sometimes inconsistent (i.e., contradicting previous answers); and \\textit{reading comprehension}, where \n showed that \nmodels, which answer a question by reading a given paragraph, can be fooled by an irrelevant sentence \nbeing appended to the paragraph. \nThese non-TC NLP tasks would be worth exploring further in the \\EBHD setting.", "id": "19ba55c1-e53d-44f6-b54b-236a658bbf6e", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "529a4abe-46e1-4bad-8356-542ec52c9967", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Categorization of Existing Work" ], [ "subsection", "Context" ], [ "subsubsection", "Tasks" ] ], "subsections": [], "title": "Tasks" }, { "cite_extract_rate": 0.42857142857142805, "cites": [ 4871, 4868, 826, 7507, 1445, 4869 ], "content": "Early work used Naive Bayes models with bag-of-words (NB) as text classifiers , which are relatively easy to generate explanations for and to incorporate human feedback into (discussed in section~\\ref{subsec:workflow}).\nOther traditional models used include logistic regression (LR) \nand support vector machines (SVM) , both with bag-of-words features.\nThe next generation of tested models involves word embeddings.\nFor text classification, \nfocused on convolutional neural networks (CNN) and touched upon bidirectional LSTM networks , while used fastText, relying also on n-gram features .\nFor VQA and TQA, the inspected models used attention mechanisms for attending to relevant parts of the input image or table. 
These models are Telling QA and Neural Operator (NeOp) , used by and , respectively.\nWhile the NLP community nowadays is mainly driven by pre-trained language models \nwith many papers studying their behaviors , \nonly and have used pre-trained language models, including BERT and RoBERTa , as test beds for \\EBHD.", "id": "f77bbe8b-0a16-4291-a160-8fcbb8182d98", "level": "subsubsection", "origin_cites_number": 14, "parent_id": "529a4abe-46e1-4bad-8356-542ec52c9967", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Categorization of Existing Work" ], [ "subsection", "Context" ], [ "subsubsection", "Models" ] ], "subsections": [], "title": "Models" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 7516, 4872, 8848, 4869, 4873 ], "content": "\\label{subsec:bugsources}\nMost of the papers in Table~\\ref{tab:papers} experimented on training datasets with natural artifacts (AR), which cause spurious correlation bugs (i.e., the input texts having signals which are correlated to but not the reasons for specific outputs) and undermine models' generalizability.\nOut of the \\numpapers papers we surveyed, 5 used the 20Newsgroups dataset as a case study, since it has lots of natural artifacts. \nFor example, some punctuation marks appear more often in one class due to the writing styles of the authors contributing to the class, so the model uses these punctuation marks as clues to make predictions. However, because 20Newsgroups is a topic classification dataset, a better model should focus more on the topic of the content since the punctuation marks can also appear in other classes, especially when we apply the model to texts in the wild.\nApart from classification performance drops, natural artifacts can also cause model biases, as shown in and debugged in .\nIn the absence of strong natural artifacts, bugs can still be simulated using several techniques. 
\nFirst, using only a small subset of labeled data (SS) for training could cause the model to exploit spurious correlation leading to poor performance . \nSecond, injecting wrong labels (WL) into the training data can obviously blunt the model quality . \nThird, using out-of-distribution tests (OD) can reveal that the model does not work effectively in the domains that it has not been trained on .\nAll of these techniques give rise to undesirable model behaviors, requiring debugging.\nAnother technique, not found in Table~\\ref{tab:papers} but suggested in related work , is contaminating input texts in the training data with decoys (i.e., injected artifacts) which could deceive the model into predicting for the wrong reasons. This has been experimented with in the computer vision domain , and its use in the \\EBHD setting in NLP could be an interesting direction to explore.", "id": "fdd7cacd-e596-496f-a18e-ea86a4361eb2", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "529a4abe-46e1-4bad-8356-542ec52c9967", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Categorization of Existing Work" ], [ "subsection", "Context" ], [ "subsubsection", "Bug Sources" ] ], "subsections": [], "title": "Bug Sources" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec:workflow}\nThis section describes existing work around the three steps of the \\EBHD workflow in Figure~\\ref{fig:overview}, i.e., how to generate and present the explanations, how to collect human feedback, and how to update the model using the feedback.\nResearchers need to make decisions on these key points harmoniously to create an effective debugging workflow.", "id": "046f9b18-3513-4d56-82c3-c60190ba0aa9", "level": "subsection", "origin_cites_number": 0, "parent_id": "c98144b9-7fac-4a44-8376-9fd8e9053e10", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", 
"Categorization of Existing Work" ], [ "subsection", "Workflow" ] ], "subsections": [ "65c2c958-84ce-431f-a866-b1d203b8794a", "63a75679-8702-4ba0-91cb-d01a889e8005", "6c064860-8785-4a00-b33f-ee42ff0539c8", "ee7f185f-0b80-4374-931e-1d4296023b81" ], "title": "Workflow" }, { "cite_extract_rate": 0, "cites": [], "content": "The main role of explanations here is to \nprovide interpretable insights into the model and uncover its potential misbehavior or irrationality, which sometimes cannot be noticed by looking at the model outputs or the evaluation metrics.", "id": "65c2c958-84ce-431f-a866-b1d203b8794a", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "046f9b18-3513-4d56-82c3-c60190ba0aa9", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Categorization of Existing Work" ], [ "subsection", "Workflow" ], [ "subsubsection", "Providing Explanations" ] ], "subsections": [ "4ba6754b-5693-434b-a98e-4936c742ad3a", "9160f106-1e22-4924-8c1f-f9ab659496ec", "52f5a31e-40d7-49c4-8393-7be54c5f1341" ], "title": "Providing Explanations" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 4868, 4872, 4874, 4870, 7507 ], "content": "Basically, there are two main types of explanations that could be provided to feedback providers. Local explanations (L) explain the predictions by the model for individual inputs. \nIn contrast, global explanations (G) explain the model overall, independently of any specific inputs.\nIt can be seen from Table~\\ref{tab:papers} that most existing work use local explanations.\nOne reason for this may be that,\nfor complex models, \nglobal explanations can hardly reveal \ndetails of the models' inner workings \nin a comprehensible way to users. 
\nSo, some bugs are imperceptible in such high-level global explanations and thus not corrected by the users.\nFor example, the debugging framework \textit{FIND} uses only global explanations, and it was shown to work more effectively on significant bugs (such as gender bias in abusive language detection) than on less-obvious bugs (such as dataset shift between product types in sentiment analysis of product reviews). \nAnother study presented adversarial replacement rules as global explanations to reveal only the model weaknesses, without explaining how the whole model worked. \nOn the other hand, using local explanations has limitations in that it demands a large amount of effort from feedback providers to inspect the explanation of every single example in the training/validation set.\nWith limited human resources, efficient ways to rank or select examples to explain are required.\nFor instance, \nsome studies targeted explanations of incorrect predictions in the validation set.\nOthers picked sets of non-redundant local explanations to illustrate the global picture of the model. \nStill others leveraged heuristics from active learning to choose unlabeled examples that maximize some informativeness criteria.\nRecently, some work in explainable AI has considered generating explanations for a group of predictions (e.g., for all the false positives of a certain class), thus staying in the middle of the two extreme explanation types (i.e., local and global). 
\nThis kind of explanation is not too fine-grained, yet it can capture some suspicious model behaviors if we target the right group of examples.\nSo, it would be worth studying in the context of \EBHD \n(to the best of our knowledge, no existing study experiments with it).", "id": "4ba6754b-5693-434b-a98e-4936c742ad3a", "level": "paragraph", "origin_cites_number": 7, "parent_id": "65c2c958-84ce-431f-a866-b1d203b8794a", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Categorization of Existing Work" ], [ "subsection", "Workflow" ], [ "subsubsection", "Providing Explanations" ], [ "paragraph", "Explanation scopes." ] ], "subsections": [], "title": "Explanation scopes." }, { "cite_extract_rate": 0.5238095238095231, "cites": [ 8849, 4870, 4868, 1824, 7507, 3699, 1813, 7517, 7516, 4875, 4869 ], "content": "To generate explanations in general, there are two important questions we need to answer. \nFirst, \nwhich format should the explanations have?\nSecond, how do we generate the explanations?\nFor the first question, we see many possible answers in the explainable NLP literature (e.g., in existing surveys). For instance, \textit{input-based explanations} (also called feature importance explanations) identify parts of the input that are important for the prediction. 
\nThe explanation could be \na list of importance scores of words in the input, also called \textit{attribution scores} or \textit{relevance scores}.\n\textit{Example-based explanations} select influential, important, or similar examples from the training set to explain why the model makes a specific prediction.\n\textit{Rule-based explanations} provide interpretable decision rules that approximate the prediction process.\n\textit{Adversarial-based explanations} return the smallest changes in the inputs that could change the predictions, revealing the model misbehavior.\nIn most NLP tasks, \ninput-based explanations are the most popular approach for explaining predictions. This is also the case for \EBHD, as most selected studies use input-based explanations, \nfollowed by example-based explanations. \nMeanwhile, only one study uses adversarial-based explanations, whereas another experiments with input-based, rule-based, and example-based explanations. \nFor the second question, there are two ways to generate the explanations: self-explaining methods and post-hoc explanation methods. Some models, e.g., Naive Bayes, logistic regression, and decision trees, are \textit{self-explaining} (SE), \nalso referred to as transparent or inherently interpretable.\nLocal explanations of self-explaining models can be obtained at the same time as predictions, usually from the process of making those predictions, while \nthe models themselves can often serve directly as global explanations.\nFor example, feature importance explanations for a Naive Bayes model can be directly derived from the likelihood terms in the Naive Bayes equation, as done by several papers in Table~\ref{tab:papers}.\nAlso, using attention scores on the input as explanations, as done in some of the surveyed work, is a self-explaining method because the scores are obtained during the prediction process. 
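To make the self-explaining case concrete, the following minimal sketch (a toy corpus and hypothetical variable names, not taken from any surveyed paper) reads per-word importance scores directly off the likelihood terms of a multinomial Naive Bayes text classifier:

```python
# Minimal sketch: word importance derived from Naive Bayes likelihoods.
# The corpus, labels, and Laplace smoothing are illustrative assumptions.
import math
from collections import Counter

docs = [('good great film', 'pos'), ('great plot', 'pos'),
        ('bad boring film', 'neg'), ('boring plot', 'neg')]

counts = {'pos': Counter(), 'neg': Counter()}
for text, label in docs:
    counts[label].update(text.split())

vocab = set(w for c in counts.values() for w in c)

def log_likelihood(word, label):
    # Laplace-smoothed log P(word | label), read off the model parameters
    c = counts[label]
    return math.log((c[word] + 1) / (sum(c.values()) + len(vocab)))

def importance(word):
    # Log-likelihood ratio: how strongly a word pushes toward 'pos'
    return log_likelihood(word, 'pos') - log_likelihood(word, 'neg')

scores = {w: importance(w) for w in vocab}
assert scores['great'] > 0 > scores['boring']
```

Because the scores are the model's own parameters in ratio form, no extra explanation step is needed, which is exactly what makes such models attractive test beds for incorporating feedback.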
\nIn contrast, \textit{post-hoc explanation methods} (PH) perform additional steps to extract explanations after the model is trained (for a global explanation) or after the prediction is made (for a local explanation).\nIf the method is allowed to access model parameters, it may calculate word relevance scores by propagating the output scores back to the input words or by analyzing the derivative of the output with respect to the input words. \nIf the method cannot access the model parameters, it may perturb the input and see how the output changes to estimate the importance of the altered parts of the input.\nThe important words and/or the relevance scores can be presented to the feedback providers in the \EBHD workflow in many forms, such as a list of words and their scores, word clouds, and a parse tree. \nMeanwhile, the influence functions method identifies training examples which influence the prediction by analyzing how the prediction would change if we did not have each training point. This is another post-hoc explanation method, as it takes place after prediction. \nIt is similar to the other two example-based explanation methods used in the surveyed work.", "id": "9160f106-1e22-4924-8c1f-f9ab659496ec", "level": "paragraph", "origin_cites_number": 21, "parent_id": "65c2c958-84ce-431f-a866-b1d203b8794a", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Categorization of Existing Work" ], [ "subsection", "Workflow" ], [ "subsubsection", "Providing Explanations" ], [ "paragraph", "Generating explanations." ] ], "subsections": [], "title": "Generating explanations." 
}, { "cite_extract_rate": 0, "cites": [], "content": "It is important to carefully design the presentation of explanations, taking into consideration the background knowledge, desires, and limits of the feedback providers.\nIn the debugging application by , lay users were asked to provide feedback to email categorizations predicted by the system. \nThe users were allowed to ask several Why questions (inspired by ) through either the menu bar, or by right-clicking on the object of interest (such as a particular word).\nExamples include ``Why will this message be filed to folder A?'', ``Why does word x matter to folder B?''.\nThe system then responded by textual explanations (generated using templates), together with visual explanations such as bar plots for some types of questions.\nAll of these made the interface become more user-friendly.\nIn \\citeyear{kulesza2015principles}, \\citeauthor{kulesza2015principles} proposed, as desirable principles, that the presented explanations should be sound (i.e., truthful in describing the underlying model), complete (i.e., not omitting important information about the model), but not overwhelming (i.e., remaining comprehensible).\nHowever, these principles are challenging especially when working on non-interpretable complex models.", "id": "52f5a31e-40d7-49c4-8393-7be54c5f1341", "level": "paragraph", "origin_cites_number": 2, "parent_id": "65c2c958-84ce-431f-a866-b1d203b8794a", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Categorization of Existing Work" ], [ "subsection", "Workflow" ], [ "subsubsection", "Providing Explanations" ], [ "paragraph", "Presenting explanations." ] ], "subsections": [], "title": "Presenting explanations." 
}, { "cite_extract_rate": 0.42857142857142805, "cites": [ 7516, 7802, 4868, 7507, 4870, 4869 ], "content": "After seeing explanations, \nhumans generally desire to improve the model by giving feedback .\nSome existing work asked humans to confirm or correct machine-computed explanations.\nHence, the form of feedback fairly depends on the form of the explanations,\nand in turn this shapes how to update the model too (discussed in section \\ref{subsec:update}).\nFor text classification, \nmost \\EBHD papers asked humans to decide which words (WO) in the explanation (considered important by the model) are in fact relevant or irrelevant .\nSome papers even allowed humans to adjust the word importance scores (WS) .\nThis is analogous to specifying relevancy scores for example-based explanations (ES) in .\nMeanwhile, feedback at the level of learned features (FE) (i.e., the internal neurons in the model) and learned rules (RU) rather than individual words, was asked in and , respectively.\nAdditionally, humans may be asked to check the predicted labels or even the ground truth labels (collectively noted as LB in Table~\\ref{tab:papers}) . \nTargeting the table question answering, asked humans to identify where in the table and the question the model should focus (AT). This is analogous to identifying relevant words to attend for text classification.\nIt is likely that identifying important parts in the input is sufficient to make the model accomplish simple text classification tasks.\nHowever, this might not be enough for complex tasks which require reasoning. 
\nRecently, one framework asked humans to provide, as feedback, compositional explanations \nto show how the humans would reason (RE) about the models’ failure cases.\nAn example of the feedback for hate speech detection is ``Because $X$ is the word dumb, $Y$ is a hateful word, and $X$ is directly before $Y$, the attribution scores of both $X$ and $Y$ as well as the interaction score between $X$ and $Y$ should be increased''.\nTo acquire richer information like this as\nfeedback, the framework requires more expertise from the feedback providers.\nIn the future, it would be interesting to explore how we can collect and utilize other forms of feedback, e.g., natural language feedback, \nnew training examples, and other forms of decision rules used by humans.", "id": "63a75679-8702-4ba0-91cb-d01a889e8005", "level": "subsubsection", "origin_cites_number": 14, "parent_id": "046f9b18-3513-4d56-82c3-c60190ba0aa9", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Categorization of Existing Work" ], [ "subsection", "Workflow" ], [ "subsubsection", "Collecting Feedback" ] ], "subsections": [], "title": "Collecting Feedback" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{subsec:update}\nTechniques to incorporate human feedback into the model can be categorized into three approaches.", "id": "6c064860-8785-4a00-b33f-ee42ff0539c8", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "046f9b18-3513-4d56-82c3-c60190ba0aa9", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Categorization of Existing Work" ], [ "subsection", "Workflow" ], [ "subsubsection", "Updating the Model" ] ], "subsections": [ "e6895a92-66ec-42f4-b748-570892d50fcd", "8fdfada4-8e63-4bc6-808a-fa238dbce062", "e0fdcd8d-308a-404d-bc74-65efeda71978" ], "title": "Updating the Model" }, { "cite_extract_rate": 0, "cites": [], "content": "When the model is transparent and the 
explanation displays the model parameters in an intelligible way, humans can directly adjust the parameters based on their judgements. This idea was adopted in a system where humans can adjust a bar chart showing word importance scores, corresponding to the parameters of the underlying Naive Bayes model.\nIn this special case, steps 2 and 3 in Figure~\ref{fig:overview} are combined into a single step.\nBesides, human feedback can be used to modify the model parameters indirectly. \nFor example, one system increased a word's weight in the Naive Bayes model by 20\% for the class that the word supported, according to human feedback, and reduced the weight by 20\% for the opposite class (binary classification). \nThis choice gave good results; however, it is not clear why, or whether 20\% is the best choice here.\nOverall, this approach is fast because it does not require model retraining. However, it is important to ensure that the adjustments made by humans generalize well to all examples.\nTherefore, the system should update the overall results (e.g., performance metrics, predictions, and explanations) in real time after applying any adjustment, so the humans can investigate the effects and further adjust the model parameters (or undo the adjustments) if necessary. \nThis agrees with the correctability principles stating that the system should be actionable and reversible, honor user feedback, and show incremental changes.", "id": "e6895a92-66ec-42f4-b748-570892d50fcd", "level": "paragraph", "origin_cites_number": 3, "parent_id": "6c064860-8785-4a00-b33f-ee42ff0539c8", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Categorization of Existing Work" ], [ "subsection", "Workflow" ], [ "subsubsection", "Updating the Model" ], [ "paragraph", "(1) Directly adjust the model parameters (M)." ] ], "subsections": [], "title": "(1) Directly adjust the model parameters (M)." 
}, { "cite_extract_rate": 0.8, "cites": [ 7516, 4868, 7507, 4869 ], "content": "We can use human feedback to improve the training data and retrain the model to fix bugs.\nThis approach includes \ncorrecting mislabeled training examples ,\nassigning noisy labels to unlabeled examples , \nremoving irrelevant words from input texts ,\nand creating augmented training examples to reduce the effects of the artifacts .\nAs this approach modifies the training data only, it is applicable to any model regardless of the model complexity.", "id": "8fdfada4-8e63-4bc6-808a-fa238dbce062", "level": "paragraph", "origin_cites_number": 5, "parent_id": "6c064860-8785-4a00-b33f-ee42ff0539c8", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Categorization of Existing Work" ], [ "subsection", "Workflow" ], [ "subsubsection", "Updating the Model" ], [ "paragraph", "(2) Improve the training data (D)." ] ], "subsections": [], "title": "(2) Improve the training data (D)." 
}, { "cite_extract_rate": 0.33333333333333304, "cites": [ 4869 ], "content": "Another approach is to influence the (re-)training process in a way that the resulting model will behave as the feedback suggests.\nThis approach could be either model-specific (such as attention supervision) or model-agnostic (such as user co-training).\n used human feedback to supervise attention weights of the model.\nSimilarly, added a loss term to regularize explanations guided by human feedback.\n proposed \n\\textit{(i)} constraint optimization, translating human feedback into constraints governing the training process\nand \\textit{(ii)} \nuser co-training, using feedback as another classifier working together with the main ML model in a semi-supervised learning setting.\n disabled some learned features deemed irrelevant, \nbased on the feedback, and re-trained the model, forcing it to use only the remaining features.\nWith many techniques available, however, there has not been a study testing which technique is more appropriate for which task, domain, or model architecture. The comparison issue is one of the open problems for \\EBHD research (to be discussed in section~\\ref{sec:open_problems}).", "id": "e0fdcd8d-308a-404d-bc74-65efeda71978", "level": "paragraph", "origin_cites_number": 3, "parent_id": "6c064860-8785-4a00-b33f-ee42ff0539c8", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Categorization of Existing Work" ], [ "subsection", "Workflow" ], [ "subsubsection", "Updating the Model" ], [ "paragraph", "(3) Influence the training process (T)." ] ], "subsections": [], "title": "(3) Influence the training process (T)." }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 7516, 7507 ], "content": "The debugging workflow (explain, feedback, and update) can be done iteratively to gradually improve the model \nwhere the presented explanation changes after the model update. 
\nThis allows humans to fix vital bugs first and finer bugs in later iterations, as reflected in reported performance plots.\nHowever, the interactive process could be susceptible to \n\textit{local decision pitfalls}, where local improvements for individual predictions add up to inferior overall performance.\nSo, we need to ensure that the update in the current iteration is generally favorable and does not overwrite the good effects of previous updates.", "id": "ee7f185f-0b80-4374-931e-1d4296023b81", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "046f9b18-3513-4d56-82c3-c60190ba0aa9", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Categorization of Existing Work" ], [ "subsection", "Workflow" ], [ "subsubsection", "Iteration" ] ], "subsections": [], "title": "Iteration" }, { "cite_extract_rate": 0.11111111111111101, "cites": [ 7516 ], "content": "\label{subsec:setting}\nTo conduct experiments, some studies in Table~\ref{tab:papers} selected human participants (SP) to be their feedback providers.\nThe selected participants could be people with or without ML/NLP knowledge, depending on the study objectives and the complexity of the feedback process.\nEarly work even conducted experiments with the participants in person.\nAlthough this limited the number of participants (to fewer than 100), the researchers could closely observe their behaviors and gain some insights concerning human-computer interaction.\nBy contrast, some studies used a crowdsourcing platform, Amazon Mechanical Turk\footnote{\url{https://www.mturk.com/}} in particular, to collect human feedback for debugging the models.\nCrowdsourcing (CS) enables researchers to conduct experiments at a large scale; however, the quality of human responses can vary. 
So, it is important to ensure some quality control, such as specifying required qualifications, using multiple annotations per question, having a training phase for participants, and setting up some obvious questions to check whether the participants are paying attention to the tasks.\nFinally, simulation (SM), without real humans involved but using oracles as human feedback instead, has also been considered (for the purpose of testing the \EBHD framework only). \nFor example, one study set 20\% of input words as relevant using feature selection.\nThese words were then used to respond to \npost-hoc explanations, i.e., the top $k$ words selected by LIME.\nAnother simulated mislabeled examples by flipping the labels of a random 10\% of the training data.\nSo, when the explanation showed suspicious training examples, the true labels could be used to provide feedback.\nCompared to the other settings,\nsimulation is faster and cheaper, yet its results may not reflect the effectiveness of the framework when deployed with real humans. Naturally, human feedback is sometimes inaccurate and noisy, and humans could also be interrupted or frustrated while providing feedback. These factors, discussed in detail in the next section, cannot be thoroughly studied in simulated experiments alone.", "id": "5b122951-af49-414e-bead-48a02462c5e7", "level": "subsection", "origin_cites_number": 9, "parent_id": "c98144b9-7fac-4a44-8376-9fd8e9053e10", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Categorization of Existing Work" ], [ "subsection", "Experimental Setting" ] ], "subsections": [], "title": "Experimental Setting" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:human_factor}\nThough the major goal of \EBHD is to improve models, we cannot disregard the effect of the debugging workflow on feedback providers. 
\nIn this section, we compile findings \nconcerning how explanations and feedback could affect humans, discussed \nalong five dimensions: model understanding, willingness, trust, frustration, and expectation.\nAlthough some of the findings were not derived in NLP settings, we believe that they are generalizable and worth discussing in the context of \EBHD.", "id": "49eb1249-b5bf-4767-a98f-51e47db0ee54", "level": "section", "origin_cites_number": 0, "parent_id": "26e7485a-9b67-4600-a0c0-c92ba6777268", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Research on Human Factors" ] ], "subsections": [ "7baa1c24-7d36-4346-aca2-e02a7ca521d4", "21371609-87a7-492e-a772-a56d119a418e", "b7c1d9c7-6b32-4bba-98f5-b6169797a125", "4f378e49-98a4-45ef-b353-096c9a7956e2", "cfd7496f-dd66-4b5c-b479-a38e61d9ae22", "ddb5bace-4633-4682-926a-5678bc41c3dd" ], "title": "Research on Human Factors" }, { "cite_extract_rate": 0, "cites": [], "content": "So far, we have used explanations as a means to help humans understand models and conduct informed debugging.\nHence, it is important to verify, at least preliminarily, that the explanations help feedback providers form an accurate understanding of how the models work. This is an important prerequisite for successful debugging.\nExisting studies have found that some explanation forms are more conducive to developing model understanding in humans than others. One study \nfound that rule-based and keyword-based explanations were easier to understand than similarity-based explanations (i.e., explaining by similar examples in the training data).\nThe same study also found that some users did not understand why the absence of some words could make the model more certain about its predictions. 
\n\nAnother study found that explaining both why the system behaved in a certain way and why it did not resulted in good user understanding of the system, though the former kind of explanation (why) was more effective than the latter (why not).\nOther work reported that interactive explanations could improve users’ comprehension of the model better than static explanations, although the interactive way took more time. \nIn addition, \nrevealing the inner workings of the model could further help understanding; however, it introduced additional cognitive workload that might \nmake participants doubt whether they really understood the model well.", "id": "7baa1c24-7d36-4346-aca2-e02a7ca521d4", "level": "subsection", "origin_cites_number": 3, "parent_id": "49eb1249-b5bf-4767-a98f-51e47db0ee54", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Research on Human Factors" ], [ "subsection", "Model Understanding" ] ], "subsections": [], "title": "Model Understanding" }, { "cite_extract_rate": 0.14285714285714202, "cites": [ 1798 ], "content": "We would like humans to provide feedback for improving models, but do humans naturally want to?\nPrior to the emergence of \EBHD, studies found that humans are not willing to be constantly asked about labels of examples as if they were just simple oracles.\nRather, they want to provide more than just data labels after being given explanations.\nBy collecting free-form feedback from users, researchers \ndiscovered various feedback types. The most prominent ones include removing or adding features (words), tuning weights, and leveraging feature combinations.\nFollow-up work further analyzed the categories of background knowledge underlying the feedback and found, in one experiment, that it was mainly based on commonsense knowledge and English language knowledge. 
\nSuch knowledge may not be efficiently injected into the model if we exploit human feedback which contains only labels.\nThis agrees with some study participants who described their feedback as inadequate when they could only confirm or correct predicted labels.\nAlthough human feedback beyond labels contains helpful information, it is naturally neither complete nor precise. One study \nobserved that human feedback usually focuses on a few features that are most different from human expectation, ignoring the others.\nAlso, it found that humans, especially lay people, are not good at correcting model\nexplanations quantitatively (e.g., adjusting weights). \nThis is consistent with findings \nthat human explanations are selective (in a biased \nway) and rarely refer to probabilities but express causal relationships instead.", "id": "21371609-87a7-492e-a772-a56d119a418e", "level": "subsection", "origin_cites_number": 7, "parent_id": "49eb1249-b5bf-4767-a98f-51e47db0ee54", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Research on Human Factors" ], [ "subsection", "Willingness" ] ], "subsections": [], "title": "Willingness" }, { "cite_extract_rate": 0.18181818181818102, "cites": [ 4876, 7507 ], "content": "Trust (as well as frustration and expectation, discussed next) is an important issue when the system end users are the feedback providers in the \EBHD framework.\nIt has been discussed widely that explanations engender human trust in AI systems.\nThis trust may be misplaced at times. 
\nShowing more detailed explanations can cause users to over-rely on the system, leading to misuse, where users agree with incorrect system predictions.\nMoreover, some users may over-trust the explanations (without fully understanding them) only because the tools generating them are publicly available, widely used, and show appealing visualizations.\nHowever, recent research reported that explanations do not necessarily increase trust and reliance.\nOne study found that, even though explanations help users comprehend systems, they cannot increase human trust in using the systems in high-stakes applications involving lots of qualitative factors, such as graduate school admissions. \nOthers reported that explanations of low-quality models decrease trust and system acceptance, as they reveal model weaknesses to the users.\nMoreover, despite correct predictions, trust still drops if the users see from the explanations that the model relies on the wrong reasons. \nThese studies go along with the perspective that explanations should help calibrate user perceptions to the model quality, signaling whether the users should trust or distrust the AI.\nAlthough, in some cases, explanations successfully warned users of faulty models, this is not easy when the model flaws are not obvious. 
\nBesides explanations, the effect of feedback on human trust is quite inconclusive according to some (but fewer) studies.\nOn the one hand, one study \nfound that, after lay humans see explanations of low-quality models and lose their trust, the ability to provide feedback makes human trust and acceptance rally, remedying the situation.\nIn contrast, another reported that providing feedback decreases human trust in the system, as well as the perception of system accuracy, no matter whether the system truly improves after being updated or not.", "id": "b7c1d9c7-6b32-4bba-98f5-b6169797a125", "level": "subsection", "origin_cites_number": 11, "parent_id": "49eb1249-b5bf-4767-a98f-51e47db0ee54", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Research on Human Factors" ], [ "subsection", "Trust" ] ], "subsections": [], "title": "Trust" }, { "cite_extract_rate": 0.25, "cites": [ 4877 ], "content": "Working with explanations can sometimes cause frustration.\nFollowing the discussion on trust, explanations of poor models increase user frustration (as they reveal model flaws), whereas the ability to provide feedback reduces frustration. Hence, in general situations, the most frustrating condition is showing explanations to the users without allowing them to give feedback. \nAnother cause of frustration is the risk of detailed explanations overloading users.\nThis is an especially crucial issue for inherently interpretable models, where all the internal workings can be exposed to the users.\nThough presenting all the details is comprehensive and faithful, it could create\nbarriers for lay users.\nIn fact, even ML experts may feel frustrated if they need to understand a decision tree with a depth of ten or more.\nOne study found that showing all the model internals undermined users' ability to detect flaws in the model, likely due to information overload. 
So, they suggested that model internals should be revealed only when the users request to see them.", "id": "4f378e49-98a4-45ef-b353-096c9a7956e2", "level": "subsection", "origin_cites_number": 4, "parent_id": "49eb1249-b5bf-4767-a98f-51e47db0ee54", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Research on Human Factors" ], [ "subsection", "Frustration" ] ], "subsections": [], "title": "Frustration" }, { "cite_extract_rate": 0, "cites": [], "content": " observed that some participants expected the model to improve after the session where they interacted with the model, regardless of whether they saw explanations or gave feedback during the interaction session. \n\\EBHD should manage these expectations properly. \nFor instance, the system should report changes or improvements to users after the model gets updated.\nIt would be better if the changes can be seen incrementally in real time .", "id": "cfd7496f-dd66-4b5c-b479-a38e61d9ae22", "level": "subsection", "origin_cites_number": 2, "parent_id": "49eb1249-b5bf-4767-a98f-51e47db0ee54", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Research on Human Factors" ], [ "subsection", "Expectation" ] ], "subsections": [], "title": "Expectation" }, { "cite_extract_rate": 0, "cites": [], "content": "Based on the findings on human factors reviewed in this section, we summarize suggestions for effective \\EBHD as follows.", "id": "ddb5bace-4633-4682-926a-5678bc41c3dd", "level": "subsection", "origin_cites_number": 0, "parent_id": "49eb1249-b5bf-4767-a98f-51e47db0ee54", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Research on Human Factors" ], [ "subsection", "Summary" ] ], "subsections": [ "af8e8500-9640-4cbf-8714-e41c19c97c77" ], "title": "Summary" }, { "cite_extract_rate": 0, "cites": [], "content": "Buggy models usually lead to implausible 
explanations, adversely affecting human trust in the system.\nAlso, it is not yet clear whether giving feedback increases or decreases human trust.\nSo, it is safer to let the developers or domain experts in the team (rather than end users) be the feedback providers.\nFor some kinds of bugs, however, feedback from end users is essential for improving the model. \nTo maintain their trust, we may collect their feedback implicitly (e.g., by inferring from their interactions with the system after showing them the explanations ) or collect the feedback without telling them that the explanations are of the production system (e.g., by asking them to answer a separate survey). \nAll in all, we need different strategies to collect feedback from different stakeholders.", "id": "af8e8500-9640-4cbf-8714-e41c19c97c77", "level": "paragraph", "origin_cites_number": 1, "parent_id": "ddb5bace-4633-4682-926a-5678bc41c3dd", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Research on Human Factors" ], [ "subsection", "Summary" ], [ "paragraph", "Feedback providers." ] ], "subsections": [ "ca2bd397-4203-4ad2-b4c3-db968b8bf078", "d2649efb-edfc-42f6-998d-3e74c98a177a", "0ab8ca3d-66b8-4201-9510-b560fa4bbd3e" ], "title": "Feedback providers." 
}, { "cite_extract_rate": 0, "cites": [], "content": "We should avoid using forms of explanations which are difficult to understand, such as similar training examples and absence of some keywords in inputs, unless the humans are already trained to interpret them.\nAlso, too much information should be avoided as it could overload the humans; instead, humans should be allowed to request more information if they are interested,\ne.g., by using interactive explanations .", "id": "ca2bd397-4203-4ad2-b4c3-db968b8bf078", "level": "paragraph", "origin_cites_number": 1, "parent_id": "af8e8500-9640-4cbf-8714-e41c19c97c77", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Research on Human Factors" ], [ "subsection", "Summary" ], [ "paragraph", "Feedback providers." ], [ "paragraph", "Explanations." ] ], "subsections": [], "title": "Explanations." }, { "cite_extract_rate": 0, "cites": [], "content": "Given that human feedback is not always complete, correct, or accurate,\n\\EBHD should use it with care, e.g., by relying on collective feedback rather than individual feedback and allowing feedback providers to verify and modify their feedback before applying it to update the model.", "id": "d2649efb-edfc-42f6-998d-3e74c98a177a", "level": "paragraph", "origin_cites_number": 0, "parent_id": "af8e8500-9640-4cbf-8714-e41c19c97c77", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Research on Human Factors" ], [ "subsection", "Summary" ], [ "paragraph", "Feedback providers." ], [ "paragraph", "Feedback." ] ], "subsections": [], "title": "Feedback." }, { "cite_extract_rate": 0, "cites": [], "content": "Humans, especially lay people, usually expect the model to improve over time after they give \nfeedback. 
So, the system should display improvements after the model gets updated.\nWhere possible, showing the changes incrementally \nin real time is preferred, as the feedback providers can check if their feedback works as expected or not.", "id": "0ab8ca3d-66b8-4201-9510-b560fa4bbd3e", "level": "paragraph", "origin_cites_number": 0, "parent_id": "af8e8500-9640-4cbf-8714-e41c19c97c77", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Research on Human Factors" ], [ "subsection", "Summary" ], [ "paragraph", "Feedback providers." ], [ "paragraph", "Update." ] ], "subsections": [], "title": "Update." }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:open_problems}\nThis section lists potential research directions and open problems for \\EBHD of NLP models.", "id": "02a9b3d3-7c0f-4572-b628-bf3bb8efe52a", "level": "section", "origin_cites_number": 0, "parent_id": "26e7485a-9b67-4600-a0c0-c92ba6777268", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Open Problems" ] ], "subsections": [ "42ff39a9-76ae-43b6-bdf8-f0e148be4fe5", "e94edf9e-18ef-40cb-b309-9bfb8acc2417", "7fe04ab0-d178-4bce-a668-44d36b584bb0", "09c185ed-be90-4def-9aa3-6012eba0aa41", "bf863ab3-10ea-4bd3-bed2-d615dfb7e54d" ], "title": "Open Problems" }, { "cite_extract_rate": 0, "cites": [], "content": "All papers in Table~\\ref{tab:papers} conducted experiments only on English datasets. \nWe acknowledge that qualitatively analyzing explanations and feedback in languages at which one is not fluent is not easy, not to mention recruiting human subjects who know the languages. \nHowever, we hope that, with more multilingual data publicly available and growing awareness in the NLP community , there will be more \\EBHD studies targeting other languages in the near future.\nAlso, most existing \\EBHD works target \ntext classifiers. 
It would be interesting to conduct more \\EBHD work for other NLP tasks such as reading comprehension, question answering, and natural language inference (NLI), to see whether existing techniques still work effectively. \nShifting to other tasks requires an understanding of specific bug characteristics in those tasks.\nFor instance, unlike bugs in text classification which are usually due to word artifacts, bugs in NLI concern syntactic heuristics between premises and hypotheses . Thus, giving human feedback \nat word level may not be helpful, and more advanced methods may be needed.", "id": "42ff39a9-76ae-43b6-bdf8-f0e148be4fe5", "level": "subsection", "origin_cites_number": 2, "parent_id": "02a9b3d3-7c0f-4572-b628-bf3bb8efe52a", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Open Problems" ], [ "subsection", "Beyond English Text Classification" ] ], "subsections": [], "title": "Beyond English Text Classification" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 4867 ], "content": " remarked that the evaluation setup of existing \\EBHD work is often too easy or unrealistic. For example, bugs are obvious artifacts which could be removed using simple text pre-processing (e.g., removing punctuation and redacting named entities). \nHence, it is not clear how powerful such \\EBHD frameworks are when dealing with real-world bugs. \nIf bugs are not dominant and happen less often, global explanations may be too coarse-grained to capture them while \nmany local explanations may be needed to spot a few appearances of the bugs, leading to inefficiency.\nAs reported by ,\nfeedback results in minor improvements when the model is already reasonably good. \nOther open problems, whose solutions may help deal with \nchallenging bugs, include the following.\nFirst, different people may give different feedback for the same explanation. 
As raised by , how can we integrate their feedback to get robust signals for model update? How should we deal with conflicts among feedback and training examples ?\nSecond, confirming or removing what the model has learned is easier than injecting, into the model, new knowledge (which may not even be apparent in the explanations). How can we use human feedback to inject new knowledge, especially when the model is not transparent?\nLastly, \\EBHD techniques have been proposed for tabular data and image data . Can we adapt or transfer them across modalities to deal with NLP tasks?", "id": "e94edf9e-18ef-40cb-b309-9bfb8acc2417", "level": "subsection", "origin_cites_number": 6, "parent_id": "02a9b3d3-7c0f-4572-b628-bf3bb8efe52a", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Open Problems" ], [ "subsection", "Tackling More Challenging Bugs" ] ], "subsections": [], "title": "Tackling More Challenging Bugs" }, { "cite_extract_rate": 0.8, "cites": [ 7516, 4868, 4870, 4869 ], "content": "Most selected studies focus on improving correctness of the model (e.g., by expecting a higher F1 or a lower bias after debugging).\nHowever, \nonly some of them discuss efficiency of the proposed frameworks.\nIn general, we can analyze the efficiency of an \\EBHD framework by looking at the efficiency of each main step in Figure~\\ref{fig:overview}.\nStep 1 generates the explanations, so its efficiency depends on the explanation method used and, in the case of local explanation methods, the number of local explanations needed.\nStep 2 lets humans give feedback, so its efficiency concerns the amount of time they spend to understand the explanations and to produce the feedback.\nStep 3 updates the model using the feedback, so its efficiency relates to the time used for processing the feedback and retraining the model (if needed).\nExisting work mainly reported efficiency of step 1 or step 2.\nFor instance, approaches using 
example-based explanations \nmeasured the improved performance with respect to the number of explanations computed (step 1) . \n compared the improved F1 of \\EBHD with the F1 of instance labelling given the same amount of time for humans to perform the task (step 2).\nConversely, compared the time humans need to do \\EBHD versus instance labelling in order to achieve the equivalent degree of correctness improvement (step 2).\nNone of the selected studies considered the efficiency of the three steps altogether. In fact, the efficiency of step 1 and 3 is important especially for black box models where the cost of post-hoc explanation generation and model retraining is not negligible. \nIt is even more crucial for iterative or responsive \\EBHD.\nThus, analyzing and enhancing efficiency of \\EBHD frameworks (for both machine and human sides) require further research.", "id": "7fe04ab0-d178-4bce-a668-44d36b584bb0", "level": "subsection", "origin_cites_number": 5, "parent_id": "02a9b3d3-7c0f-4572-b628-bf3bb8efe52a", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Open Problems" ], [ "subsection", "Analyzing and Enhancing Efficiency" ] ], "subsections": [], "title": "Analyzing and Enhancing Efficiency" }, { "cite_extract_rate": 0, "cites": [], "content": "User studies \nare naturally difficult to replicate as they are inevitably affected by choices of user interfaces, phrasing, population, incentives, etc. 
.\nFurther, research in ML rarely \nadopts practices from the human-computer interaction community , limiting the possibility to compare across studies.\nHence, most existing work only considers model performance before and after debugging or compares the results among several configurations of a single proposed framework.\nThis leads to little knowledge \nabout which explanation types or feedback mechanisms are more effective across several settings.\nThus, one promising research direction would be proposing a standard setup or a benchmark for\nevaluating and comparing \\EBHD frameworks \nreliably across different settings.", "id": "09c185ed-be90-4def-9aa3-6012eba0aa41", "level": "subsection", "origin_cites_number": 2, "parent_id": "02a9b3d3-7c0f-4572-b628-bf3bb8efe52a", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Open Problems" ], [ "subsection", "Reliable Comparison across Papers" ] ], "subsections": [], "title": "Reliable Comparison across Papers" }, { "cite_extract_rate": 0, "cites": [], "content": "So far, we have not seen \\EBHD research widely deployed in applications,\nprobably due to its difficulty to set up the debugging aspects outside a research environment.\nOne way to promote adoption of \\EBHD is to integrate \\EBHD frameworks into available visualization systems such as the Language Interpretability Tool (LIT) , allowing users to provide feedback to the model after seeing explanations and supporting experimentation.\nAlso, to move towards deployment, it is important to follow human-AI interaction guidelines and evaluate \\EBHD with potential end users, not just via simulation or crowdsourcing, since human factors play an important role in real situations .", "id": "bf863ab3-10ea-4bd3-bed2-d615dfb7e54d", "level": "subsection", "origin_cites_number": 3, "parent_id": "02a9b3d3-7c0f-4572-b628-bf3bb8efe52a", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A 
Survey" ], [ "section", "Open Problems" ], [ "subsection", "Towards Deployment" ] ], "subsections": [], "title": "Towards Deployment" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:conclusion}\nWe presented a general framework of \\ebhd (\\EBHD) of NLP models and analyzed existing work in relation to the components of this framework to illustrate the \nstate-of-the-art in the field. \nFurthermore, we summarized findings on human factors with respect to \\EBHD, suggested design practices accordingly, and identified open problems for future studies.\nAs \\EBHD is still an ongoing research topic, we hope that our survey will be helpful for guiding interested researchers \nand for examining future \\EBHD papers. \n\\section*{Acknowledgments}\nWe would like to thank Marco Baroni (the action editor) and anonymous reviewers for very helpful comments. \nAlso, we thank Brian Roark and Cindy Robinson for their technical support concerning the submission system.\nBesides, the first author wishes to thank the support from Anandamahidol Foundation, Thailand.\n\\bibliography{anthology,survey}\n\\bibliographystyle{acl_natbib}\n\\end{document}", "id": "69b484af-4448-4700-b87f-90338424cf86", "level": "section", "origin_cites_number": 0, "parent_id": "26e7485a-9b67-4600-a0c0-c92ba6777268", "prefix_titles": [ [ "title", "Explanation-Based Human Debugging of NLP Models: A Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
3
[ 4865, 7507, 4864, 7696, 8847, 4863, 4867, 4866, 4868, 4869, 7516, 4870, 4871, 826, 1445, 4872, 8848, 4873, 4874, 8849, 1824, 3699, 1813, 7517, 4875, 7802, 1798, 4876, 4877 ]
1.498625
[ "Wei Yang Bryan Lim", "Nguyen Cong Luong", "Dinh Thai Hoang", "Yutao Jiao", "Ying-Chang Liang", "Qiang Yang", "Dusit Niyato", "Chunyan Miao" ]
Federated Learning in Mobile Edge Networks: A Comprehensive Survey
2019
2019-09-26T04:03:51Z
cs.NI
In recent years, mobile devices are equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislations and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "2a7871ed-4981-4f35-ad2d-8f4aca73a40f", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ] ], "subsections": [ "69856a4e-b74f-448b-acb3-7c296be1ae42", "a45b8f2e-5693-4022-b460-4f6e38041188", "b279d0a9-a81b-4e03-b78c-6bfca3339a27", "59289545-bba6-4ef3-9e35-c47a8165f934", "ce302473-c70b-411d-815f-c3000491c151", "a10f5c93-1fee-4fa8-a6ac-ad80e6dd1548", "297974f4-53c5-47b3-9406-058a62082d77", "2702b63b-d507-4c14-9fa5-5bc6f5225669" ], "title": "root" }, { "cite_extract_rate": 0.47619047619047605, "cites": [ 166, 5973, 3357, 3582, 5977, 5974, 5976, 8646, 5972, 602, 584, 2918, 547, 582, 5971, 7608, 671, 3420, 5975, 3422 ], "content": "\\label{sec:intro}\nCurrently, there are nearly $7$ billion connected Internet of Things (IoT) devices and $3$ billion smartphones around the world. These devices are equipped with increasingly advanced sensors, computing, and communication capabilities. As such, they can potentially be deployed for various crowdsensing tasks, e.g., for medical purposes and air quality monitoring . Coupled with the rise of Deep Learning (DL) , the wealth of data collected by end devices opens up countless possibilities for meaningful research and applications. \nIn the traditional cloud-centric approach, data collected by mobile devices is uploaded and processed centrally in a cloud-based server or data center. In particular, data collected by IoT devices and smartphones such as measurements , photos , videos , and location information are aggregated at the data center . Thereafter, the data is used to provide insights or produce effective inference models. However, this approach is no longer sustainable for the following reasons. Firstly, data owners are increasingly privacy sensitive. 
Following privacy concerns among consumers in the age of big data, policy makers have responded with the implementation of data privacy legislations such as the European Commission's General Data Protection Regulation (GDPR) and Consumer Privacy Bill of Rights in the US . In particular, the consent (GDPR Article 6) and data minimization principle (GDPR Article 5) limit data collection and storage only to what is consumer-consented and absolutely necessary for processing. Secondly, a cloud-centric approach involves long propagation delays and incurs unacceptable latency for applications in which real-time decisions have to be made, e.g., in self-driving car systems . Thirdly, the transfer of data to the cloud for processing burdens the backbone networks especially in tasks involving unstructured data, e.g., in video analytics . This is exacerbated by the fact that cloud-centric training is relatively reliant on wireless communications . As a result, this can potentially impede the development of new technologies. \nWith data sources mainly located outside the cloud today , Mobile Edge Computing (MEC) has naturally been proposed as a solution in which the computing and storage capabilities of end devices and edge servers are leveraged to bring model training closer to where data is produced . As defined in , an end-edge-cloud computing network comprises: (i) end devices, (ii) edge nodes, and (iii) cloud server. For model training in conventional MEC approaches, a collaborative paradigm has been proposed in which training data are first sent to the edge servers for model training up to lower-level DNN layers, before more computation-intensive tasks are offloaded to the cloud , (Fig. \\ref{Edge_AI}). However, this arrangement incurs significant communication costs and is unsuitable especially for applications that require persistent training . 
In addition, computation offloading and data processing at edge servers still involve the transmission of potentially sensitive personal data. This can discourage privacy-sensitive consumers from taking part in model training, or even violate increasingly stringent privacy laws . Although various privacy preservation methods, e.g.,\ndifferential privacy (DP)~, have been proposed, a number of users are still not willing to expose their private\ndata for fear that their data may be inspected by external servers. In the long run, this discourages the\ndevelopment of technologies as well as new applications. \n\\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\columnwidth]{Figs/cloudedgefl}\n \\caption{\\small Edge AI approach brings AI processing closer to where data is produced. In particular, FL allows training on devices where the data is produced.}\n \\label{Edge_AI}\n\\end{figure}\nTo guarantee that training data remains on personal devices and to facilitate collaborative machine learning of complex models among distributed devices, a decentralized ML approach called Federated Learning (FL) is introduced in . In FL, mobile devices use their local data to cooperatively train an ML model required by an FL server. They then send the model updates, i.e., the model's weights, to the FL server for aggregation. The steps are repeated in multiple rounds until a desirable accuracy is achieved. This implies that FL can be an enabling technology for ML model training at mobile edge networks. As compared to conventional cloud-centric ML model training approaches, the implementation of FL for model training at mobile edge networks features the following advantages. \n\\begin{itemize}\n\\item \\textit{Highly efficient use of network bandwidth:} Less information is required to be transmitted to the cloud. For example, instead of sending the raw data over for processing, participating devices only send the updated model parameters for aggregation. 
As a result, this significantly reduces costs of data communication and relieves the burden on backbone networks. \n\\item \\textit{Privacy:} Following the above point, the raw data of users need not be sent to the cloud. Under the assumption that FL participants and servers are non-malicious, this enhances user privacy and reduces the probability of eavesdropping to a certain extent. In fact, with enhanced privacy, more users will be willing to take part in collaborative model training and so, better inference models can be built.\n\\item \\textit{Low latency:} With FL, ML models can be consistently trained and updated. Meanwhile, in the MEC paradigm, real-time decisions, e.g., event detection, can be made locally at the edge nodes or end devices. Therefore, the latency is much lower than that when decisions are made in the cloud before transmitting them to the end devices. This is vital for time critical applications such as self-driving car systems in which the slightest delays can potentially be life threatening . \n\\end{itemize}\nGiven the aforementioned advantages, FL has seen recent successes in several applications. For example, the Federated Averaging algorithm (\\textit{FedAvg}) proposed in has been applied to Google's Gboard to improve next-word prediction models. In addition, several studies have also explored the use of FL in a number of scenarios in which data is sensitive in nature, e.g., to develop predictive models for diagnosis in health AI and to foster collaboration across multiple hospitals and Government agencies .\nBesides being an enabling technology for ML model training \\textit{at} mobile edge networks, FL has also been increasingly applied as an enabling technology \\textit{for} mobile edge network optimization. Given the computation and storage constraints of increasingly complex mobile edge networks, conventional network optimization approaches that are built on static models fare relatively poorly in modelling dynamic networks . 
As such, a data-driven Deep Learning (DL) based approach for optimizing resource allocation is increasingly popular. For example, DL can be used for representation learning of network conditions whereas Deep Reinforcement Learning (DRL) can optimize decision making through interactions with the dynamic environment . However, the aforementioned approaches require user data as an input and these data may be sensitive or inaccessible in nature due to regulatory constraints. As such, in this survey, we also consider FL's potential to serve as an enabling technology for optimizing mobile edge networks, e.g., in cell association , computation offloading , and vehicular networks .\nHowever, there are several challenges to be solved before FL can be implemented at scale. Firstly, even though raw data no longer needs to be sent to the cloud servers, communication costs remain an issue due to the high dimensionality of model updates and the limited communication bandwidth of participating mobile devices. In particular, state-of-the-art DNN model training can involve the communication of millions of parameters for aggregation. Secondly, in a large and complex mobile edge network, the heterogeneity of participating devices in terms of data quality, computation power, and willingness to participate has to be well managed from the resource allocation perspective. Thirdly, FL does not guarantee privacy in the presence of malicious participants or aggregating servers. In particular, recent research works have clearly shown that a malicious participant may exist in FL and can infer the information of other participants just from the shared parameters alone. As such, privacy and security issues in FL still need to be considered.\nAlthough there are surveys on MEC and FL, the existing studies usually treat the two topics separately. 
For existing surveys on FL, the authors in place more emphasis on discussing the architecture and categorization of different FL settings to be used for the varying distributions of training data. The authors in highlight the applications of FL in wireless communications but do not discuss the issues pertaining to FL implementation. In addition, the focus of is on cellular network architecture rather than mobile edge networks. In contrast, the authors in provide a brief tutorial on FL and the challenges related to its implementation, but do not consider the issue of resource allocation in FL, or the potential applications of FL for mobile edge network optimization. On the other hand, for surveys in MEC that focus on implementing ML model training at edge networks, a macroscopic approach is usually adopted in which FL is briefly mentioned as one of the enabling technologies in the MEC paradigm, but without detailed elaboration with regard to its implementation or the related challenges. In particular, the authors in , , and study the architectures and process of training and inference at edge networks without considering the challenges to FL implementation. In addition, surveys studying the implementation of DL for mobile edge network optimization mostly do not focus on FL as a potential solution to preserve data privacy. For example, the authors in discuss strategies for optimizing caching and computation offloading for mobile edge networks, but do not consider the use of privacy-preserving federated approaches in their studies. Similarly, considers the use of DRL in communications and networking but does not include federated DRL approaches. 
\n\\begin{table*}\n\\centering\n\\arrayrulecolor{black}\n\\caption{\\small An overview of selected surveys in FL and MEC}\n\\label{surveytable}\n\\begin{tabular}{!{\\color{black}\\vrule}l|l!{\\color{black}\\vrule}l!{\\color{black}\\vrule}} \n\\hline\n\\rowcolor[rgb]{0.682,0.667,0.667} \\multicolumn{1}{|l|}{\\textbf{Ref.}} & \\textbf{Subject} & \\textbf{Contribution} \\\\ \n\\hline\n & \\multirow{3}{*}{FL} & Introductory tutorial on categorization of different FL settings, e.g., vertical FL, horizontal FL, and Federated Transfer Learning \\\\ \n\\cline{1-1}\\cline{3-3}\n & & FL in optimizing resource allocation for wireless networks while preserving data privacy \\\\ \n\\cline{1-1}\\cline{3-3}\n & & Tutorial on FL and discussions of implementation challenges in FL \\\\ \n\\cline{1-1}\\arrayrulecolor{black}\\cline{2-2}\\arrayrulecolor{black}\\cline{3-3}\n & \\multirow{10}{*}{MEC} & Computation offloading strategy to optimize DL performance in edge computing \\\\ \n\\cline{1-1}\\cline{3-3}\n & & Survey on architectures and frameworks for edge intelligence \\\\ \n\\cline{1-1}\\cline{3-3}\n & & ML for IoT management, e.g., network management and security \\\\ \n\\cline{1-1}\\cline{3-3}\n & & Survey on computation offloading in MEC \\\\ \n\\cline{1-1}\\cline{3-3}\n & & Survey on DRL approaches to address issues in communications and networking \\\\ \n\\cline{1-1}\\cline{3-3}\n & & Survey on techniques for computation offloading \\\\ \n\\cline{1-1}\\cline{3-3}\n & & Survey on architectures and applications of MEC \\\\ \n\\cline{1-1}\\cline{3-3}\n & & Survey on computing, caching, and communication issues at mobile edge networks \\\\ \n\\cline{1-1}\\cline{3-3}\n & & Survey on the phases of caching and comparison among the different caching schemes \\\\ \n\\cline{1-1}\\cline{3-3}\n & & Survey on joint mobile computing and wireless communication resource management in MEC \\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\\begin{figure*}[!]\n 
\\centering\n\\includegraphics[width=\\linewidth]{Figs/flowchart}\n \\caption{\\small Classification of related studies to be discussed in this survey. }\n \\label{flowchart}\n\\end{figure*}\nIn summary, most existing surveys on FL do not consider the applications of FL in the context of mobile edge networks, whereas existing surveys on MEC do not consider the challenges to FL implementation, or the potential of FL to be applied in mobile edge network optimization. This motivates us to present a comprehensive survey with the following contributions: \n\\begin{itemize}\n\\item We motivate the importance of FL as a paradigm shift towards enabling collaborative ML model training. Then, we provide a concise tutorial on FL implementation and present to the reader a list of useful open-source frameworks that pave the way for future research on FL and its applications. \n\\item We discuss the unique features of FL relative to a centralized ML approach and the resulting implementation challenges. For each of these challenges, we present to the reader a comprehensive discussion of existing solutions and approaches explored in the FL literature.\n\\item We discuss FL as an enabling technology for mobile edge network optimization. In particular, we discuss the current and potential applications of FL as a privacy-preserving approach for applications in edge computing.\n\\item We discuss the challenges and future research directions of FL.\n\\end{itemize}\nFor the reader's convenience, we classify the related studies to be discussed in this survey in Fig. \\ref{flowchart}. The classification is based on (i) FL at mobile edge network, i.e., studies that focus on solving the challenges and issues related to implementing the collaborative training of ML models on end devices, and (ii) FL for mobile edge network, i.e., studies that specifically explore the application of FL for mobile edge network optimization. 
While the former group of studies works on addressing the fundamental issues of FL, the latter group uses FL as an application tool to solve issues in edge computing. We also present a list of common abbreviations for reference in Table \\ref{tab:abbrev}. \nThe rest of this paper is organized as follows. Section \\ref{sec:intro_FD} introduces the background and fundamentals of FL. Section \\ref{sec: communication} reviews solutions provided to reduce communication costs. Section \\ref{sec:resource} discusses resource allocation approaches in FL. Section \\ref{sec:security} discusses privacy and security issues. Section \\ref{sec:application} discusses applications of FL for mobile edge network optimization. Section \\ref{sec:challenges_open_issues} discusses the challenges and future research directions in FL. Section \\ref{sec:conclusions} concludes the paper. \n\\begin{table}[]\n\\centering{}\\caption{\\small List of common abbreviations. \\label{tab:abbrev}}\n\\begin{tabular}{|l|l|}\n\\hline\n\\rowcolor[HTML]{9B9B9B} \n\\textbf{Abbreviation} & \\textbf{Description} \\\\ \\hline\nBAA & Broadband Analog Aggregation \\\\ \\hline\nCNN & Convolutional Neural Network \\\\ \\hline\nCV & Computer Vision \\\\ \\hline\nDDQN & Double Deep Q-Network \\\\ \\hline\nDL & Deep Learning \\\\ \\hline\nDNN & Deep Neural Network \\\\ \\hline\nDP & Differential Privacy \\\\ \\hline\nDQL & Deep Q-Learning \\\\ \\hline\nDRL & Deep Reinforcement Learning \\\\ \\hline\nFedAvg & Federated Averaging \\\\ \\hline\nFL & Federated Learning \\\\ \\hline\nGAN & Generative Adversarial Network \\\\ \\hline\nIID & \\begin{tabular}[c]{@{}l@{}}Independent and Identically \\\\ Distributed\\end{tabular} \\\\ \\hline\nIoT & Internet of Things \\\\ \\hline\nIoV & Internet of Vehicles \\\\ \\hline\nLSTM & Long Short Term Memory \\\\ \\hline\nMEC & Mobile Edge Computing \\\\ \\hline\nML & Machine Learning \\\\ \\hline\nMLP & Multilayer Perceptron \\\\ \\hline\nNLP & Natural Language Processing \\\\ 
\\hline\nOFDMA & \\begin{tabular}[c]{@{}l@{}}Orthogonal Frequency-division\\\\ Multiple Access\\end{tabular} \\\\ \\hline\nQoE & Quality of Experience \\\\ \\hline\nRNN & Recurrent Neural Network \\\\ \\hline\nSGD & Stochastic Gradient Descent \\\\ \\hline\nSMPC & Secure Multiparty Computation \\\\ \\hline\nSNR & Signal-to-noise ratio \\\\ \\hline\nSVM & Support Vector Machine \\\\ \\hline\nTFF & TensorFlow Federated \\\\ \\hline\nUE & User Equipment \\\\ \\hline\nURLLC & Ultra reliable low latency communication \\\\ \\hline\n\\end{tabular}\n\\end{table}", "id": "69856a4e-b74f-448b-acb3-7c296be1ae42", "level": "section", "origin_cites_number": 42, "parent_id": "2a7871ed-4981-4f35-ad2d-8f4aca73a40f", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 8394 ], "content": "\\label{sec:intro_FD}\n\\label{sec:intro_edge_AI}\nArtificial Intelligence (AI) has become an essential part of our lives today, following the recent successes and progression of DL in several domains, e.g., Computer Vision (CV) and Natural Language Processing (NLP) . In traditional training of Deep Neural Networks (DNNs), a cloud based approach is adopted whereby data is centralized and model training occurs in powerful cloud servers. However, given the ubiquity of mobile devices that are equipped with increasingly advanced sensing and computing capabilities, the trend of migrating intelligence from the cloud to the edge, i.e., in the MEC paradigm, has naturally arisen. In addition, amid growing privacy concerns, the concept of FL has been proposed. \nFL involves the collaborative training of DNN models on end devices. There are, in general, two steps in the FL training process namely (i) local model training on end devices and (ii) global aggregation of updated parameters in the FL server. 
In this section, we first provide a brief introduction to DNN model training, which generalizes local model training in FL. Note that while FL can be applied to the training of ML models in general, we focus specifically on DNN model training in this section as a majority of the papers that we subsequently review study the federated training of DNN models. In addition, the DNN models are easily aggregated and outperform conventional ML techniques especially when the data is large. The implementation of FL at mobile edge networks can thus naturally leverage the increasing computing power and wealth of data collected by distributed end devices, both of which are driving forces contributing to the rise of DL . As such, a brief introduction to general DNN model training will be useful for subsequent sections. Thereafter, we proceed to provide a tutorial on the FL training process that incorporates both global aggregation and local training. In addition, we also highlight the statistical challenges of FL model training and present the protocols and open-source frameworks of FL.
Conventional ML algorithms rely on \textit{hand-engineered} feature extractors to process raw data . As such, domain expertise is often a prerequisite for building an effective ML model.
In addition, feature selection has to be customized and reinitiated for each new problem. On the other hand, DNNs are representation learning based, i.e., DNNs can automatically discover and learn these features from raw data and thus often outperform conventional ML algorithms especially when there is an abundance of data. 
DL lies within the domain of the brain-inspired computing paradigm, of which the neural network is an important part . In general, a neural network design emulates that of a neuron . It comprises three layers: (i) input layer, (ii) hidden layer, and (iii) output layer. In a conventional feedforward neural network, a weighted and bias-corrected input value is passed through a non-linear activation function to derive an output (Fig. \ref{fig:backprop}). Some activation functions include the ReLU and softmax functions . A typical DNN comprises multiple hidden layers that map an input to an output. For example, the goal of a DNN trained for image classification is to produce a vector of scores as the output, in which the positional index of the highest score corresponds to the class to which the input image is classified to belong. As such, the objective of training a DNN is to optimize the weights of the network such that the loss function, i.e., the difference between the ground truth and the model output, is minimized. 
\begin{figure}[tbh]
\begin{centering}
\includegraphics[width=\columnwidth]{Figs/backprop}
\par\end{centering}
\caption{\small In forward pass, an output is derived from the weights and inputs. In backpropagation, the input gradient $e$ is used to calibrate the weights of the DNN model.\label{fig:backprop}}
\end{figure}
Before training, the dataset is first split into training and test datasets. Then, the training dataset is used as input data for the optimization of weights in the DNN.
The weights are calibrated through stochastic gradient descent (SGD), in which the weights are updated by the product of (i) the learning rate $lr$, i.e., the step size of gradient descent in each iteration, and (ii) the partial derivative of the loss function $L$ with respect to the weights $W$. The SGD formula is as follows:
\begin{align}
W = W - lr \frac{\partial L}{\partial W} \label{eq:sgd_dl_1}\\
\frac{\partial L}{\partial W} \approx \frac{1}{m} \sum_{i\epsilon B} \frac{\partial l^{(i)}}{\partial W}\label{eq:sgd_dl_2}
\end{align}
Note that the SGD formula presented in (\ref{eq:sgd_dl_1}) is that of a mini-batch GD. In particular, equation (\ref{eq:sgd_dl_2}) computes the gradient matrix as the average of the per-sample gradients over a mini-batch $B$, which is a random subset consisting of $m$ training samples. This is preferred over the full batch GD, i.e., where the entirety of the training set is included in computing the partial derivative, since the full batch GD can lead to slow training and batch memorization . The gradient matrices are derived through backpropagation from the input gradient $e$ (Fig. \ref{fig:backprop}) .
The training iterations are then repeated over many epochs, i.e., full passes over the training set, for loss minimization. A well-trained DNN generalizes well, i.e., achieves high \textit{inference} accuracy when applied to data that it has not seen before, e.g., the test set. There are other alternatives to supervised learning, e.g., semi-supervised learning , unsupervised learning and reinforcement learning . In addition, there also exist several DNN architectures tailored to process the varying natures of input data, e.g., Multilayer Perceptron (MLP) , Convolutional Neural Network (CNN) typically for CV tasks, and Recurrent Neural Network (RNN) usually for sequential tasks. However, an in-depth discussion is out of the scope of this paper.
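To make the mini-batch update in (\ref{eq:sgd_dl_1})-(\ref{eq:sgd_dl_2}) concrete, the following minimal sketch applies it to a one-parameter linear model on synthetic data (the model, data, and hyperparameters are illustrative only, not drawn from any surveyed work):

```python
import random

random.seed(0)

# Synthetic 1-D regression data: y = w_true * x + noise (illustrative).
w_true = 3.0
xs = [random.uniform(-1, 1) for _ in range(256)]
data = [(x, w_true * x + random.gauss(0, 0.01)) for x in xs]

W, lr, m = 0.0, 0.1, 32  # initial weight, learning rate, mini-batch size

for epoch in range(60):                 # full passes over the training set
    for _ in range(len(data) // m):
        batch = random.sample(data, m)  # random mini-batch B of m samples
        # (1/m) * sum of per-sample gradients of the squared error.
        grad = sum((W * x - y) * x for x, y in batch) / m
        W = W - lr * grad               # W <- W - lr * dL/dW
```

After a few epochs, $W$ approaches the ground-truth weight; in the full batch variant, each step would instead average the gradient over all 256 samples.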
We refer interested readers to for comprehensive discussions of DNN architectures and training strategies. We next focus on FL, an important paradigm shift towards enabling privacy preserving and collaborative DL model training.", "id": "451fa65e-d0ea-454a-85ca-028b1856d5df", "level": "subsection", "origin_cites_number": 20, "parent_id": "a45b8f2e-5693-4022-b460-4f6e38041188", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", "Background and Fundamentals of Federated Learning" ], [ "subsection", "Deep Learning" ] ], "subsections": [], "title": "Deep Learning" }, { "cite_extract_rate": 0.5, "cites": [ 3431, 5979, 671, 582 ], "content": "\\label{fltrain}\nMotivated by privacy concerns among data owners, the concept of FL is introduced in . FL allows users to collaboratively train a\nshared model while keeping personal data on their devices, thus alleviating\ntheir privacy concerns. As such, FL can serve as an enabling technology for ML model training at mobile edge networks. For an introduction to the categorizations of different FL settings, e.g., vertical and horizontal FL, we refer the interested readers to .\nIn general, there are two main entities in the FL\nsystem, i.e., the data owners (viz. \\textit{participants}) and the model owner (viz. \\textit{FL server}). Let $\\mathcal{N}=\\left\\{ 1,\\ldots,N\\right\\} $\ndenote the set of $N$ data owners, each of which has\na private dataset $D_{i\\in\\mathcal{N}}$. Each data owner $i$ uses\nits dataset $D_{i}$ to train a\\emph{ local model} $\\mathbf{w}_{i}$\nand send only the local model parameters to the FL server. Then,\nall collected local models are aggregated $\\mathbf{\\mathbf{w}}=\\cup_{i\\in\\mathcal{N}}\\mathbf{w}_{i}$\nto generate a \\emph{global model} $\\mathbf{w}_{G}$. 
This\nis different from the traditional centralized training which uses\n$\\mathbf{D}=\\cup_{i\\in\\mathcal{N}}D_{i}$ to train a model $\\mathbf{w}_{T}$, i.e., data from each individual source is aggregated first before model training takes place centrally.\n\\begin{figure}[tbh]\n\\begin{centering}\n\\includegraphics[width=\\columnwidth]{Figs/flsteps}\n\\par\\end{centering}\n\\caption{\\small General FL training process involving \\textit{N} participants.\\label{fig:Federated-learning-model}}\n\\end{figure}\nA typical architecture and training process of an FL system is shown in\nFig.~\\ref{fig:Federated-learning-model}. In this system, the data\nowners serve as the FL participants which collaboratively\ntrain an ML model required by an aggregate server. An underlying assumption is that the data owners are honest, which\nmeans they use their real private data to do the training and submit the\ntrue local models to the FL server. Of course, this assumption may not always be realistic and we discuss the proposed solutions subsequently in Sections \\ref{sec:resource} and \\ref{sec:security}. \nIn general,\nthe FL training process includes the following three steps.\nNote: the \\textit{local} model refers to\nthe model trained at each participating device, whereas the \\textit{global} model refers to\nthe model aggregated by the FL server. \n\\begin{itemize}\n\\item \\textit{Step 1 (Task initialization)}: The server decides the training task, i.e., the target application, and the corresponding data requirements.\nThe server also specifies the hyperparameters of the global model\nand the training process, e.g., learning rate. Then, the server broadcasts the initialized\nglobal model $\\mathbf{w}_{G}^{0}$ and task to selected participants. 
\n\\item \\textit{Step 2 (Local model training and update)}: Based on the global model\n$\\mathbf{w}_{G}^{t}$, where $t$ denotes the current iteration index,\neach participant respectively uses its local data and device to update\nthe local model parameters $\\mathbf{w}_{i}^{t}$. The goal of participant $i$ in iteration \\emph{$t$}\nis to find optimal parameters $\\mathbf{w}_{i}^{t}$ that minimize\nthe loss function $L(\\mathbf{w}_{i}^{t})$, i.e., \n\\begin{equation}\n\\mathbf{w}_{i}^{t^{*}}=\\arg\\min_{\\mathbf{w}_{i}^{t}}L(\\mathbf{w}_{i}^{t}).\\label{eq:local_training_goal}\n\\end{equation}\nThe updated local model parameters are subsequently sent\nto the server. \n\\item \\textit{Step 3 (Global model aggregation and update)}: The server aggregates\nthe local models from participants and then sends the updated\nglobal model parameters $\\mathbf{w}_{G}^{t+1}$ back to the data owners.\n\\end{itemize}\nThe server wants to minimize the global loss function $L(\\mathbf{w}_{G}^{t})$, i.e., \n\\begin{equation}\nL(\\mathbf{w}_{G}^{t})=\\frac{1}{N}\\sum_{i=1}^{N}L(\\mathbf{w}_{i}^{t}).\\label{eq:global_goal}\n\\end{equation}\nSteps $2$-$3$ are repeated until the global loss function converges or a desirable training accuracy is achieved.\n Note\nthat the FL training process can be used for different ML\nmodels that essentially use the SGD method such as Support\nVector Machines (SVMs) , neural networks, and linear regression .\nA training dataset usually contains a set of $n$ data feature vectors\n$\\mathbf{x}=\\{\\mathbf{x}_{1},\\ldots,\\mathbf{x}_{n}\\}$ and a set of\ncorresponding data labels\\footnote{In the case of unsupervised learning, there is no data label.}\n$\\mathbf{y}=\\{y_{1},\\ldots,y_{n}\\}$. In addition, let $\\hat{y_{j}}=f(\\mathbf{x}_{j};\\mathbf{w})$\ndenote the predicted result from the model $\\mathbf{w}$ updated/trained by data\nvector $x_{j}$. Table~\\ref{tab:loss-functions} summarizes several\nloss functions of common ML models~. 
\n\\begin{table}[tbh]\n\\centering{}\\caption{\\small Loss functions of common ML models\\label{tab:loss-functions}}\n\\begin{tabular}{|>{\\centering}m{1.9cm}|>{\\centering}m{5cm}|}\n\\hline \nModel & Loss function $L(\\mathbf{w}_{i}^{t})$\\tabularnewline\n\\hline \nNeural network & $\\frac{1}{n}\\sum_{j=1}^{n}(y_{i}-f(\\mathbf{x}_{j};\\mathbf{w}))^{2}$\\linebreak (Mean\nSquared Error)\\tabularnewline\n\\hline \nLinear regression & $\\frac{1}{2}\\left\\Vert y_{j}-\\mathbf{w}^{T}\\mathbf{x}_{j}\\right\\Vert ^{2}$\\tabularnewline\n\\hline \nK-means & $\\sum_{j}\\left\\Vert \\mathbf{x}_{j}-f(\\mathbf{x}_{j};\\mathbf{w})\\right\\Vert $\\linebreak ($f(\\mathbf{x}_{j};\\mathbf{w})$\nis the centroid of all objects assigned to $x_{j}$'s class)\\tabularnewline\n\\hline \nsquared-SVM & $[\\frac{1}{n}\\sum_{j=1}^{n}\\max(0,1-y_{j}(\\mathbf{w}^{T}\\mathbf{x}_{j}-bias))]$\\linebreak $+\\lambda\\left\\Vert \\mathbf{w}^{T}\\right\\Vert ^{2}$($bias$\nis the bias parameter and $\\lambda$ is const.)\\tabularnewline\n\\hline \n\\end{tabular}\n\\end{table}\nGlobal model aggregation is an integral part of FL. A straightforward\nand classical algorithm for aggregating the local models is the \\textit{FedAvg} algorithm proposed in , which is similar to that of local SGD . 
The pseudocode for \\textit{FedAvg} is given in Algorithm~\\ref{alg:AveragingAlg}.\n\\begin{algorithm}[tbh]\n\\scriptsize\n\\begin{algorithmic}[1]\n\\Require{Local minibatch size $B$, number of participants $m$ per iteration, number of local epochs $E$, and learning rate $\\eta$.}\n\\Ensure{Global model $\\mathbf{w}_{G}$.}\n\\State{[Participant $i$]}\n\\State{\\textbf{LocalTraining}($i$, $\\mathbf{w}$):}\n\t\\State{Split local dataset $D_i$ to minibatches of size $B$ which are included into the set $\\mathcal{B}_i$.}\n\t\\For{each local epoch $j$ from $1$ to $E$}\n\t\t\\For{each $b \\in \\mathcal{B}_i$}\n\t\t\t\\State{$\\mathbf{w} \\gets \\mathbf{w} - \\eta\\Delta L(\\mathbf{w};b)$ \\qquad ($\\eta$ is the learning rate and $\\Delta L$ is the gradient of $L$ on $b$.)}\n\t\t\\EndFor\n\t\\EndFor\n\t\\State{}\n\t\\State{[Server]}\n\t\\State{Initialize $\\mathbf{w}_{G}^{0}$}\n\t\\For{each iteration $t$ from $1$ to $T$}\n\t\t\\State{Randomly choose a subset $\\mathcal{S}_t$ of $m$ participants from $\\mathcal{N}$}\n\t\t\\For{each partipant $i \\in \\mathcal{S}_t$ $\\textbf{parallely}$}\n\t\t\t\\State{$\\mathbf{w}_{i}^{t+1} \\gets \\textbf{LocalTraining}$($i$, $\\mathbf{w}_{G}^{t}$)}\n\t\t\\EndFor\n\\State{$\\mathbf{w}_{G}^{t}=\\frac{1}{\\sum_{i\\in\\mathcal{N}}D_{i}}\\sum_{i=1}^{N}D_{i}\\mathbf{w}_{i}^{t}$ \\qquad (Averaging aggregation)} \n\t\\EndFor\n\\end{algorithmic} \n\\caption{Federated averaging algorithm~\\label{alg:AveragingAlg}}\n\\end{algorithm}\n As described in Step 1 above, the server first initializes the task (lines 11-16). Thereafter, in Step 2, the participant $i$ implements the local\ntraining and optimizes the target in (\\ref{eq:local_training_goal})\non minibatches from the original local dataset (lines 2-8). Note that a minibatch refers to a randomized subset of each participant's dataset. 
At the $t^{th}$
iteration (line 17), the server minimizes the global loss in (\ref{eq:global_goal})
by the averaging aggregation which is formally defined as
\begin{equation}
\mathbf{w}_{G}^{t}=\frac{1}{\sum_{i\in\mathcal{N}}D_{i}}\sum_{i=1}^{N}D_{i}\mathbf{w}_{i}^{t}.\label{eq:averaging_aggr}
\end{equation}
The FL training process is iterated until the global loss function converges, or a desirable accuracy is achieved.
\label{stats}
Following an elaboration of the FL training process in the previous section, we now proceed to discuss the statistical challenges faced in FL. 
In traditional distributed ML, the central server has access to
the whole training dataset. As such, the server can split the dataset
into subsets that follow similar distributions. The subsets are subsequently sent to
participating nodes for distributed training. However, this approach is
impractical for FL since the local dataset is only accessible by the data
owner. 
In the FL setting, the participants may have local datasets that follow different distributions, i.e., the datasets of participants are non-IID. While the authors in show that the aforementioned \textit{FedAvg} algorithm is able to achieve desirable accuracy even when data is non-IID across participants, the authors in found otherwise. For example, a \textit{FedAvg}-trained CNN model achieves 51\% lower accuracy than a centrally-trained CNN model on CIFAR-10 .
This deterioration in accuracy is further shown to be quantified by the earth mover's distance (EMD) , i.e., the difference between an FL participant's data distribution and the population distribution. As such, when data is non-IID and highly skewed, data-sharing is proposed in which a shared dataset with uniform distribution across all classes is sent by the FL server to each FL participant. Then, the participant trains its local model on its private data together with the received data. The simulation result shows that accuracy can be increased by 30\% with 5\% shared data due to reduced EMD. However, a common dataset may not always be available for sharing by the FL server. An alternative solution to gather contributions towards building the common dataset is subsequently discussed in Section \ref{sec:resource}.
The authors in also find that global imbalance, i.e., the situation in which the collection of data held across all FL participants is class imbalanced, leads to a deterioration in model accuracy. As such, the Astraea framework is proposed. On initialization, the FL participants first send their data distribution to the FL server. A rebalancing step is introduced before training begins in which each participant performs data augmentation on the minority classes, e.g., through random rotations and shifts. After training on the augmented data, a mediator is created to coordinate intermediate aggregation, i.e., before sending the updated parameters to the FL server for global aggregation. The mediator selects participants with data distributions that best contribute to a uniform distribution when aggregated. This is done through a greedy algorithm approach to minimize the Kullback-Leibler divergence between the local data distribution and the uniform distribution.
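The mediator's selection criterion can be illustrated with a toy computation of the KL divergence of a participant's label distribution from the uniform distribution (the class counts here are invented for illustration):

```python
import math

def kl_to_uniform(counts):
    """KL divergence of a label-count distribution from the uniform one."""
    total = sum(counts)
    u = 1.0 / len(counts)
    return sum((c / total) * math.log((c / total) / u)
               for c in counts if c > 0)  # 0 * log(0) treated as 0

# Label counts over four classes for two candidate participants.
balanced = [25, 25, 25, 25]   # divergence is exactly zero
skewed = [85, 5, 5, 5]        # strictly positive divergence
```

A greedy mediator would then prefer candidates whose counts, once aggregated, keep the combined distribution closest to uniform.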
The simulation results show accuracy improvement when tested on imbalanced datasets.
Given the heterogeneity of data distribution across devices, there has been an increasing number of studies that borrow concepts from multi-task learning to learn separate, but structurally related models for each participant. Instead of minimizing the conventional loss function presented previously in Table \ref{tab:loss-functions}, the loss function is modified to also model the relationship amongst tasks . Then, the MOCHA algorithm is proposed in which an alternating optimization approach is used to approximately solve the minimization problem. Interestingly, MOCHA can also be calibrated based on the resource constraints of a participating device. For example, the quality of approximation can be adaptively adjusted based on network conditions and CPU states of the participating devices. However, MOCHA cannot be applied to non-convex DL models. 
Similarly, the authors in also borrow concepts from multi-task learning to deal with the statistical heterogeneity in FL. The FEDPER approach is proposed in which all FL participants share a set of base layers trained using the \textit{FedAvg} algorithm. Then, each participant separately trains another set of personalization layers using its own local data. In particular, this approach is suitable for building recommender systems given the diverse preferences of participants. The authors show empirically using the Flickr-AES dataset that the FEDPER approach outperforms a pure \textit{FedAvg} approach since the personalization layer is able to represent the personal preference of an FL participant. However, it is worth noting that the collaborative training of the base layers is still important to achieve a high test accuracy, since each participant has insufficient local data samples for purely personalized model training.
Apart from data heterogeneity, the convergence of a distributed learning algorithm is always
a concern.
A higher convergence rate helps to save a large amount
of time and resources for the FL participants, and also significantly increases
the success rate of the federated training since fewer communication
rounds imply reduced participant dropouts. To ensure convergence, the study in proposes \textit{FedProx}, which modifies the loss function to also include a tunable parameter that restricts how much local updates can affect the prevailing model parameters. The \textit{FedProx} algorithm can be adaptively tuned, e.g., when training loss is increasing, model updates can be tuned to affect the current parameters less. Similarly, the authors in also propose the \textit{LoAdaBoost FedAvg} algorithm to complement the aforementioned data-sharing approach in ML on medical data. In \textit{LoAdaBoost FedAvg}, participants train the model on their local data and compare the cross-entropy loss with the median loss from the \textit{previous} training round. If the current cross-entropy loss is higher, the model is retrained before global aggregation so as to increase learning efficiency. The simulation results show that faster convergence is achieved as a result. 
In fact, the statistical challenges of FL coexist with other issues that we explore in subsequent sections. For example, the communication costs incurred in FL can be reduced by faster convergence. Similarly, resource allocation policies can also be designed to solve statistical heterogeneity.
As such, we revisit these concepts in greater detail subsequently.
\label{proto}
To improve scalability, an FL protocol 
has been proposed in from the system level. This protocol deals with
issues such as unstable device connectivity and communication security.
The FL protocol (Fig.~\ref{fig:Federated-learning-protocol}) consists of three phases in each training round: 
\begin{enumerate}
\item \emph{Selection:} In the participant selection phase, the FL server chooses a subset of connected devices to participate in a training round. The selection criteria may subsequently be calibrated to the server's needs, e.g., training efficiency . In Section \ref{sec:resource}, we further elaborate on proposed participant selection methods.
\item \emph{Configuration:} The server is configured according to the preferred aggregation mechanism, e.g., simple or secure aggregation . Then, the server sends the training schedule and global model to each participant.
\item \emph{Reporting:} The server receives updates from participants. Thereafter, the updates can be aggregated, e.g., using the \textit{FedAvg} algorithm.
\end{enumerate}
In addition, to manage device connections according to the varying FL population size, pace steering is also recommended. Pace steering adaptively manages the optimal time window for participants to reconnect to the FL server .
For example, when the FL population is small, pace steering is used to ensure that there is a sufficient number of participating devices that connect to the server simultaneously. In contrast, when there is a large population, pace steering randomly chooses devices to participate to prevent the situation in which too many participating devices are connected at one point in time.
Apart from communication efficiency, communication security during local update transmission is
another problem to be resolved. Specifically,
there are mainly two aspects in communication security:
\begin{enumerate}
\item \emph{Secure aggregation}: To prevent local updates from being traced and utilized
to infer the identity of the FL participant, 
a virtual and trusted third party server is deployed for local model
aggregation . The Secret Sharing mechanism~ is also used for
transmission of local updates with authenticated encryption. 
\item \emph{Differential privacy}: Similar to secure aggregation, differential
privacy (DP) prevents the FL server from identifying the owner of a local
update. The difference is that to achieve the goal of privacy preservation,
the DP in FL~ adds a certain degree of noise to
the original local update while providing theoretical guarantees on
the model quality.
\end{enumerate}
These concepts on privacy and security are presented in detail in Section \ref{sec:security}. Recently, some open-source frameworks for FL have been developed as follows:
\begin{enumerate}
\item \emph{TensorFlow Federated (TFF)}: TFF is a framework based on TensorFlow developed by Google for decentralized ML and other distributed
computations. TFF consists of two layers: (i) FL and (ii) Federated Core (FC). The FL layer is a high-level interface that allows the implementation of FL to existing TF models without the user having to apply the FL algorithms personally.
The FC layer combines TF with communication operators to allow users to experiment with customized and newly designed FL algorithms.
\item \emph{PySyft}: PySyft~ is a framework based on PyTorch
for performing encrypted, privacy-preserving DL and implementations of
related techniques, such as Secure Multiparty Computation (SMPC) and
DP, in untrusted environments while protecting data. PySyft is developed such that it retains the native Torch interface, i.e., the ways to execute all tensor operations remain unchanged from those of PyTorch. When a SyftTensor is created, a LocalTensor is automatically created to also apply the input command to the native PyTorch tensor. To simulate FL, participants are created as \textit{Virtual Workers}. Data, i.e., in the structure of tensors, can be split and distributed to the Virtual Workers as a simulation of a practical FL setting. Then, a PointerTensor is created to specify the data owner and storage location. In addition, model updates can be fetched from the Virtual Workers for global aggregation.
\item \emph{LEAF}: An open-source framework of datasets that can be used as benchmarks in FL, e.g., Federated Extended MNIST (FEMNIST), an MNIST dataset partitioned based on the writer of each character, and Sentiment140 , a dataset partitioned based on different users. In these datasets, the writer or user is assumed to be a participant in FL, and their corresponding data is taken to be the local data held in their personal devices. The implementation of newly designed algorithms on these benchmark datasets allows for reliable comparison across studies.
\item \emph{FATE}: Federated AI Technology Enabler (FATE) is an open-source framework by WeBank that supports the federated and secure implementation of several ML models.
\n\\end{enumerate}\n\\begin{figure*}[t]\n\\begin{centering}\n\\includegraphics[width=0.85\\textwidth]{fl-system}\n\\par\\end{centering}\n\\caption{\\small Federated learning protocol .\\label{fig:Federated-learning-protocol}}\n\\end{figure*}", "id": "15b0125f-a145-4c6d-9f58-0624dc573cf0", "level": "subsection", "origin_cites_number": 11, "parent_id": "a45b8f2e-5693-4022-b460-4f6e38041188", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", "Background and Fundamentals of Federated Learning" ], [ "subsection", "FL protocols and frameworks" ] ], "subsections": [], "title": "FL protocols and frameworks" }, { "cite_extract_rate": 1, "cites": [ 1309, 623 ], "content": "Besides the statistical challenges we present in Section \\ref{stats}, FL has some unique characteristics and features as compared to other distributed ML approaches:\n\\begin{enumerate}\n\\item \\textit{Slow and unstable communication}: In the traditional distributed training\nin a data center, the communication environment can be assumed to\nbe perfect where the information transmission rate is very high and\nthere is no packet loss. However, these assumptions are not applicable to \nthe FL environment where heterogeneous devices are involved in training. For example, the Internet upload speed is typically\nmuch slower than download speed . Also, some participants with unstable wireless communication channels may consequently\ndrop out due to disconnection from the Internet.\n\\item \\textit{Heterogeneous devices}: Apart from bandwidth constraints, FL involves heterogeneous devices with varying resource constraints. For example, the devices can have different computing capabilities, i.e., CPU states and battery level. 
The devices can also have different levels of \textit{willingness} to participate, i.e., FL training is resource-consuming and, given the distributed nature of training across numerous devices, there is a possibility of free ridership.\n\item \textit{Privacy and security concerns}:\nAs we have previously discussed, data owners are increasingly privacy-sensitive. However, as will be subsequently presented in Section \ref{sec:security}, malicious participants are able to infer sensitive information from shared parameters, which potentially negates privacy preservation. In addition, we have previously assumed that all participants and FL servers are trustworthy. In reality, they may be malicious.\n\end{enumerate}\nThese unique characteristics of FL lead to several practical issues in FL implementation,\nmainly in four aspects that we now proceed to discuss, i.e., (i) statistical challenges, (ii) communication costs, (iii) resource allocation, and (iv) privacy and security. In the following sections, we review related work that addresses each of these issues.
The high dimensionality of the updates can result in high communication costs and can lead to a training bottleneck. In addition, the bottleneck can be worsened due to (i) unreliable network conditions of participating devices and (ii) the asymmetry in Internet connection speeds, in which upload speed is typically slower than download speed, resulting in delays in model uploads by participants . As such, there is a need to improve the communication efficiency of FL. The following approaches to reduce communication costs are considered:\n\begin{itemize}\n\item \textit{Edge and End Computation}: In the FL setup, the communication cost often dominates computation cost . The reason is that the on-device datasets are relatively small whereas the mobile devices of participants have increasingly fast processors. On the other hand, participants may be willing to participate in the model training only if they are connected to Wi-Fi . As such, more computation can be performed on edge nodes or on end devices before each global aggregation so as to reduce the number of communication rounds needed for the model training. In addition, algorithms that ensure faster convergence can also reduce the number of communication rounds involved, at the expense of more computation on edge servers and end devices.\n\item \textit{Model Compression}: This is a technique commonly used in distributed learning . Model or gradient compression involves the communication of an update that is transformed to be more compact, e.g., through sparsification, quantization or subsampling , rather than the communication of the full update.
However, since the compression may introduce noise, the objective is to reduce the size of the updates transferred during each round of communication while maintaining the quality of the trained models .\n\item \textit{Importance-based Updating}: This strategy involves selective communication such that only the important or relevant updates are transmitted in each communication round. In fact, besides saving on communication costs, omitting some updates from participants can even improve the global model performance.\n\end{itemize}
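To see why such reductions matter, consider a back-of-the-envelope estimate of the upload volume of uncompressed, full-precision updates; all figures below are illustrative assumptions, not measurements from any cited work:

```python
# Illustrative estimate of per-round upload volume for uncompressed
# full-precision model updates; every figure here is an assumption.
n_params = 1_000_000        # parameters in the model
bytes_per_param = 4         # 32-bit floats
n_participants = 100        # devices selected per round
n_rounds = 200              # communication rounds to convergence

upload_per_round_mb = n_params * bytes_per_param * n_participants / 1e6
total_upload_gb = upload_per_round_mb * n_rounds / 1e3
# 400 MB per round and 80 GB in total, before any compression or
# importance-based filtering is applied.
```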
This is similar to the full-batch training in centralized DL frameworks. For the proposed \textit{FedAvg} algorithm, the hyperparameters are tuned such that more local computations are performed by the participants. For example, the participant can make more passes over its dataset or use a smaller local minibatch size to increase computation before each communication round. The simulation results show that increased parallelism does not lead to significant improvements in reducing communication cost, once a certain threshold is reached. As such, more emphasis is placed on increasing computation per participant while keeping the fraction of selected participants constant. For MNIST CNN simulations, increased computation using the proposed \textit{FedAvg} algorithm can reduce communication rounds by more than $30$ times when the dataset is IID. For a non-IID dataset, the improvement is less significant ($2.8$ times) using the same hyperparameters. However, for Long Short Term Memory (LSTM) simulations , improvements are more significant even for non-IID data ($95.3$ times). In addition, \textit{FedAvg} eventually increases the accuracy since model averaging produces regularization effects similar to dropout , which prevents overfitting. \nAs an extension, the authors in also validate that a concept similar to that of works for vertical FL. In vertical FL, collaborative model training is conducted across the same set of participants with different data features. The Federated Stochastic Block Coordinate Descent (FedBCD) algorithm is proposed in which each participating device performs multiple local updates first before communication for global aggregation.
In addition, a convergence guarantee is also provided with an approximate calibration of the number of local updates per interval of communication.\n\begin{figure}[!]\n\t\centering\n\t\includegraphics[width=\columnwidth]{Figs/edgeendcompute}\n\t\caption{Approaches to increase computation at edge and end devices include (a) Increased computation at end devices, e.g., more passes over the dataset before communication (b) Two-stream training with the global model as a reference and (c) Intermediate edge server aggregation .}\n\t\label{fig:computation}\n\end{figure}\nAnother way to decrease communication cost is to modify the training algorithm to increase convergence speed, e.g., through the aforementioned \textit{LoAdaBoost FedAvg} in . Similarly, the authors in also propose increased computation on each participating device by adopting a two-stream model (Fig. \ref{fig:computation}(b)) commonly used in transfer learning and domain adaptation . During each training round, the global model is received by the participant and fixed as a reference in the training process. During training, the participant learns not just from local data, but also from other participants with reference to the fixed global model. This is done through the incorporation of Maximum Mean Discrepancy (MMD) into the loss function. MMD measures the distance between the means of two data distributions . Through minimizing the MMD loss between the local and global models, the participant can extract more generalized features from the global model, thus accelerating the convergence of the training process and reducing communication rounds. The simulation results on the CIFAR-10 and MNIST datasets using DL models such as AlexNet and 2-CNN respectively show that the proposed two-stream FL can reach the desired test accuracy in 20\% fewer communication rounds even when data is non-IID.
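In its simplest (linear-kernel) form, the MMD term described above reduces to the squared Euclidean distance between the empirical feature means of two batches; the following is a minimal sketch of this simplified form, not the kernel variant used in any particular cited work:

```python
import numpy as np

def mmd_linear(x, y):
    """Linear-kernel MMD: squared Euclidean distance between the
    empirical means of two batches of feature vectors."""
    diff = np.asarray(x, float).mean(axis=0) - np.asarray(y, float).mean(axis=0)
    return float(diff @ diff)

batch = np.array([[0.0, 1.0], [2.0, 3.0]])
same = mmd_linear(batch, batch)                        # identical batches: 0.0
apart = mmd_linear(np.zeros((4, 2)), np.ones((4, 2)))  # means differ by 1 per dim: 2.0
```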
However, while convergence speed is increased, more computation resources have to be consumed by end devices for the aforementioned approaches. Given the energy constraints of participating mobile devices in particular, this necessitates resource allocation optimization that we subsequently discuss in Section \ref{sec:resource}.\nWhile the aforementioned studies consider increasing computation on participating \textit{devices}, the authors in propose an edge computing-inspired paradigm in which proximate edge \textit{servers} can serve as intermediary parameter aggregators, given that the propagation latency from the participant to the edge server is smaller than that of the participant-cloud communication (Fig. \ref{fig:computation}(c)). A hierarchical FL (\textit{HierFAVG}) algorithm is proposed whereby for every few local participant updates, the edge server aggregates the collected local models. After a predefined number of edge server aggregations, the edge server communicates with the cloud for global model aggregation. As such, the communication between the participants and the cloud occurs only once after an interval of multiple local updates. Comparatively, for the \textit{FedAvg} algorithm proposed in , the global aggregation occurs more frequently since no intermediate edge server aggregation is involved. The authors further prove the convergence of \textit{HierFAVG} for both convex and non-convex objective functions given non-IID user data. The simulation results show that for the same number of local updates between two global aggregations, more intermediate edge aggregations before each global aggregation can lead to reduced communication overhead as compared to the \textit{FedAvg} algorithm. This result holds for both IID and non-IID data, implying that intermediate aggregation on edge servers may be implemented on top of \textit{FedAvg} so as to reduce communication costs.
However, when applied to non-IID data, the simulation results show that \textit{HierFAVG} fails to converge to the desired accuracy level (90\%) in some instances, e.g., when the edge-cloud divergence is large or when there are many edge servers involved. As such, a further study is required to better understand the tradeoffs between adjusting local and edge aggregation intervals, so as to ensure that the parameters of the \textit{HierFAVG} algorithm can be optimally calibrated to suit other settings. Nevertheless, \textit{HierFAVG} is a promising approach for the implementation of FL at mobile edge networks, since it leverages the proximity of intermediate edge servers to reduce communication costs, and potentially relieves the burden on the remote cloud.
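The two-level aggregation pattern of \textit{HierFAVG} can be mimicked with a simplified simulation sketch (not the authors' implementation): each edge server first averages the models of its own clients, and the cloud then averages the edge models, weighted by the number of clients behind each edge:

```python
import numpy as np

def hierarchical_aggregate(edge_groups):
    """Two-level (HierFAVG-style) aggregation sketch.

    edge_groups: one list of client weight vectors per edge server.
    Each edge server averages its own clients; the cloud then averages
    the edge models, weighted by the number of clients per edge."""
    edge_models, edge_sizes = [], []
    for clients in edge_groups:
        edge_models.append(np.mean(np.asarray(clients, dtype=float), axis=0))
        edge_sizes.append(len(clients))
    return np.average(np.asarray(edge_models), axis=0, weights=edge_sizes)

# Edge 1 averages [1, 3] -> 2; edge 2 holds [5]; the client-weighted
# cloud average is (2*2 + 5*1) / 3 = 3.
g = hierarchical_aggregate([[np.array([1.0]), np.array([3.0])],
                            [np.array([5.0])]])
```

In the full algorithm, each of these averaging steps occurs at its own frequency, which is exactly the calibration tradeoff discussed above.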
For the random mask structure, each participant update is restricted to be a sparse matrix following a pre-defined random sparsity pattern generated independently during each round. As such, only the non-zero entries are required to be sent to the server. \nOn the other hand, \textit{sketched updates} refer to the approach of encoding the update in a compressed form before communication with the server, which subsequently decodes the updates before aggregation. One example of a sketched update is the subsampling approach, in which each participant communicates only a random subset of the update matrix. The server then averages the subsampled updates to derive an unbiased estimate of the true average. Another example of a sketched update is the probabilistic quantization approach , in which the update matrices are vectorized and quantized scalar by scalar. To reduce the error from quantization, a structured random rotation that is the product of a Walsh-Hadamard matrix and a binary diagonal matrix can be applied before quantization. \nThe simulation results on the CIFAR-10 image classification task show that for structured updates, the random mask performs better than the low rank approach. The random mask approach also achieves higher accuracy than the sketching approaches since the latter involve the removal of some information obtained during training. However, the combination of all three sketching tools, i.e., subsampling, quantization, and rotation, can achieve a higher compression rate and faster convergence, albeit with some sacrifices in accuracy. For example, by using 2 bits for quantization and sketching out all but 6.25\% of the update data, the number of bits needed to represent updates can be reduced by 256 times while the accuracy level achieved is 85\%. In addition, sketched updates can achieve higher accuracy in training when there are more participants trained per round.
This suggests that for the practical implementation of FL where there are many participants available, more participants can be selected for training per round so that the subsampling can be more aggressive to reduce communication costs. \n\begin{figure}[!]\n\t\centering\n\t\includegraphics[width=\linewidth]{Figs/compress}\n\t\caption{\small The compression techniques considered are summarized in the diagram from the authors in . (i) Federated dropout to reduce size of model (ii) Lossy compression of model (iii) Decompression for training (iv) Compression of participant updates (v) Decompression (vi) Global aggregation}\n\t\label{fig:compress}\n\end{figure}\nThe authors in extend the studies in by proposing lossy compression and federated dropout to reduce \textit{server-to-participant} communication costs. A summary of the proposed techniques, adapted from the authors' work, is given in Fig. \ref{fig:compress}. For the participant-to-server upload of model parameters that we discussed previously, the decompressions can be averaged over many updates to obtain an unbiased estimate. However, there is no averaging for server-to-participant communications since the same global model is sent to all participants during each round of communication. Similar to , subsampling and probabilistic quantization are considered. For the application of a structured random rotation before subsampling and quantization, Kashin's representation is applied instead of the Hadamard transformation since the former is found to perform better in terms of the accuracy-size tradeoff. \nIn addition to the subsampling and quantization approaches, the federated dropout approach is also considered in which a fixed number of activation functions at each fully-connected layer is removed to derive a smaller sub-model. The sub-model is then sent to the participants for training.
The updated submodel can then be mapped back to the global model to derive a complete DNN model with all weights updated during subsequent aggregation. This approach reduces the server-to-participant communication cost, and also the size of participant-to-server updates. In addition, local computation is reduced since fewer parameters have to be updated. The simulations are performed on MNIST, CIFAR-10, and EMNIST datasets. For the lossy compression, it is shown that the subsampling approach taken by does not reach an acceptable level of performance. The reason is that the update errors can be averaged out for participant-to-server uploads but not for server-to-participant downloads. On the other hand, quantization with Kashin's representation can achieve the same performance as the baseline without compression while having communication cost reduced by nearly 8 times when the model is quantized to 4 bits. For federated dropout approaches, the results show that a dropout rate of 25\\% of weight matrices of fully-connected layers (or filters in the case of CNN) can achieve acceptable accuracy in most cases while ensuring around 43\\% reduction in size of models communicated. However, if dropout rates are more aggressive, convergence of the model can be slower. \nThe aforementioned two studies suggest useful model compression approaches in reducing communication costs for both server-to-participant and participant-to-server communications. As one may expect, the reduction in communication costs come with sacrifices in model accuracy. 
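The federated dropout construction described above can be sketched for a single fully-connected layer as follows; the function names and the keep rate below are illustrative choices, not the authors' implementation:

```python
import numpy as np

def federated_dropout(weight, keep_rate=0.75, rng=None):
    """Sketch of federated dropout on one fully-connected weight matrix:
    keep a random subset of output units (rows) to form a sub-model, and
    return the kept indices so updates can be mapped back later."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n_out = weight.shape[0]
    n_keep = max(1, int(round(keep_rate * n_out)))
    kept = np.sort(rng.choice(n_out, size=n_keep, replace=False))
    return weight[kept, :], kept

def merge_back(full_weight, sub_weight, kept):
    """Write the trained sub-matrix rows back into the full matrix."""
    updated = full_weight.copy()
    updated[kept, :] = sub_weight
    return updated

W = np.arange(12.0).reshape(4, 3)              # 4 output units, 3 inputs
sub, idx = federated_dropout(W, keep_rate=0.5) # keep 2 of the 4 rows
W2 = merge_back(W, sub + 1.0, idx)             # stand-in for a trained sub-model
```

The sub-model is smaller to download, cheaper to train locally, and smaller to upload, which is precisely the threefold saving noted above.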
It will thus be useful to formalize the compression-accuracy tradeoffs, especially since they vary across tasks and with the number of FL participants involved.
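The sketched-update pipeline discussed in this subsection (subsample, then quantize) can be illustrated with the following simplified sketch; the 1-bit stochastic quantizer is a simplification of the probabilistic quantization scheme, and the structured random rotation step is omitted:

```python
import numpy as np

def subsample(update, frac, rng):
    """Keep a random fraction of coordinates (zero out the rest);
    scaling the survivors by 1/frac keeps the estimate unbiased."""
    mask = rng.random(update.shape) < frac
    return np.where(mask, update / frac, 0.0)

def stochastic_binary_quantize(update, rng):
    """1-bit probabilistic quantization: round every value to the
    vector's min or max, with probabilities chosen so that the
    quantized vector is unbiased in expectation."""
    lo, hi = update.min(), update.max()
    if hi == lo:
        return update.copy()
    p_up = (update - lo) / (hi - lo)
    return np.where(rng.random(update.shape) < p_up, hi, lo)

rng = np.random.default_rng(0)
u = np.linspace(-1.0, 1.0, 8)                      # toy update vector
compressed = stochastic_binary_quantize(subsample(u, 0.5, rng), rng)
# Every entry is now one of at most two values, i.e., representable in 1 bit.
```

Because both steps are unbiased in expectation, errors average out across many participant uploads, which is why, as noted above, the same tricks degrade more for server-to-participant downloads.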
Since the residuals may arise from different training iterations, each update to the residual is weighted with a discount factor using the momentum correction technique . Once the accumulated residual gradient reaches a threshold, these residuals are chosen to replace the least important gradient coordinates according to the hidden weight values. The simulation results show that eSGD with a 50\% drop ratio can achieve higher accuracy than that of the thresholdSGD algorithm proposed in , which uses a fixed threshold value to determine which gradient coordinates to drop. eSGD can also save a large proportion of the gradient size communicated. However, eSGD still suffers from accuracy loss as compared to standard SGD approaches. For example, when tested on simple classification tasks using the MNIST dataset, the model accuracy converges to just 91.22\% whereas standard SGD can achieve 99.77\% accuracy. If extended to more sophisticated tasks, the accuracy can potentially deteriorate to a larger extent. In addition, the accuracy and convergence speed of the eSGD approach fluctuate arbitrarily depending on the hyperparameters used, e.g., minibatch size. As such, further studies have to be conducted to formally balance the tradeoffs between communication costs and training performance.\nWhile studies the selective communication of gradients, the authors in propose the Communication-Mitigated Federated Learning (CMFL) algorithm that uploads only relevant local model updates to reduce communication costs while guaranteeing global convergence. In each iteration, a participant's local update is first compared with the global update to identify if the update is relevant. A relevance score is computed where the score equates to the percentage of same-sign parameters in the local and global update. In fact, the global update is not known in advance before aggregation.
As such, the global update made in the previous iteration is used as an estimate for comparison since it was found empirically that more than 99\% of the normalized differences of two sequential global updates are smaller than 0.05 in both MNIST CNN and Next-Word-Prediction LSTM. An update is considered to be irrelevant if its relevance score is smaller than a predefined threshold. The simulation results show that CMFL requires 3.47 times and 13.97 times fewer communication rounds to reach 80\% accuracy for MNIST CNN and Next-Word-Prediction LSTM, respectively, as compared to the benchmark \textit{FedAvg} algorithm. In addition, CMFL can save significantly more communication rounds as compared to Gaia. Note that Gaia is a geo-distributed ML approach suggested in , which measures relevance based on the \textit{magnitude} of updates rather than the sign of parameters. When applied with the aforementioned MOCHA algorithm (Section \ref{stats}) , CMFL can reduce communication rounds by 5.7 times for the Human Activity Recognition dataset and 3.3 times for the Semeion Handwritten Digit dataset . In addition, CMFL can achieve slightly higher accuracy since it involves the elimination of irrelevant updates that are outliers which harm training. \n\begin{table*}[]\n\centering \caption{\small Approaches to communication cost reduction in FL. 
\\label{tab:communication}}\n\\begin{adjustbox}{width=\\textwidth}\n\\begin{tabular}{|l|l|l|l|}\n\\hline\n\\rowcolor[HTML]{C0C0C0} \n\\multicolumn{1}{|c|}{\\cellcolor[HTML]{C0C0C0}\\textbf{Approaches}} & \\multicolumn{1}{c|}{\\cellcolor[HTML]{C0C0C0}\\textbf{Ref.}} & \\multicolumn{1}{c|}{\\cellcolor[HTML]{C0C0C0}\\textbf{Key Ideas}} & \\multicolumn{1}{c|}{\\cellcolor[HTML]{C0C0C0}\\textbf{Tradeoffs and Shortcomings}} \\\\ \\hline\n & & \\begin{tabular}[c]{@{}l@{}}More local updates in between communication for global aggregation, to reduce \\\\ instances of communication\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Increased computation cost and poor performance\\\\ in non-IID setting\\end{tabular} \\\\ \\cline{2-4} \n & & \\begin{tabular}[c]{@{}l@{}}Similar to the ideas of , but with convergence guarantees for vertical FL\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Increased computation cost and delayed\\\\ convergence if global aggregation is too infrequent\\end{tabular} \\\\ \\cline{2-4} \n & & \\begin{tabular}[c]{@{}l@{}}Transfer learning-inspired two-stream model for FL participants to learn from the fixed \\\\global model so as to accelerate training convergence\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Increased computation cost and delayed \\\\ convergence\\end{tabular} \\\\ \\cline{2-4} \n\\multirow{-4}{*}{\\begin{tabular}[c]{@{}l@{}}Edge and End \\\\ Computation\\end{tabular}} & & \\begin{tabular}[c]{@{}l@{}}MEC-inspired edge server assisted FL that aids in intermediate parameter aggregation to \\\\ reduce instances of communication\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}System model is not scalable when there are \\\\ more edge servers\\end{tabular} \\\\ \\hline\n & & \\begin{tabular}[c]{@{}l@{}}Structured and sketched updates to compress local models communicated from participant\\\\ to FL server\\end{tabular} & Model accuracy and convergence issues \\\\ \\cline{2-4} \n\\multirow{-2}{*}{\\begin{tabular}[c]{@{}l@{}}Model \\\\ 
Compression\\end{tabular}} & & \\begin{tabular}[c]{@{}l@{}}Similar to the ideas of , but for communication from FL server \\\\ to participants\\end{tabular} & Model accuracy and convergence issues \\\\ \\hline\n & & \\begin{tabular}[c]{@{}l@{}}Selective communication of gradients that are assigned importance scores, \\\\ i.e., to reduce training loss\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Only empirically tested on simple datasets and \\\\ tasks, with fluctuating results\\end{tabular} \\\\ \\cline{2-4} \n\\multirow{-2}{*}{\\begin{tabular}[c]{@{}l@{}}Importance-based \\\\ Updating\\end{tabular}} & & \\begin{tabular}[c]{@{}l@{}}Selective communication of local model updates that have higher relevance scores when \\\\ compared to previous global model\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Difficult to implement when global aggregations\\\\ are less frequent\\end{tabular} \\\\ \\hline\n\\end{tabular}\n\\end{adjustbox}\n\\end{table*}", "id": "af37e371-e11a-40d3-86e3-4cb9ab5fa789", "level": "subsection", "origin_cites_number": 15, "parent_id": "b279d0a9-a81b-4e03-b78c-6bfca3339a27", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", "Communication Cost" ], [ "subsection", "Importance-based Updating" ] ], "subsections": [], "title": "Importance-based Updating" }, { "cite_extract_rate": 0.5, "cites": [ 582, 5986 ], "content": "In this section, we have reviewed three main approaches for communication cost reduction in FL, and for each approach, we discuss the solutions proposed in different studies. We summarize the approaches along with references in Table \\ref{tab:communication}. From this review, we gather the following lessons learned:\n\\begin{itemize}\n\\item Communication cost is a key issue to be resolved before we can implement FL at scale. In particular, the state-of-the-art DL models have high inference accuracy but are increasingly complex with millions of parameters. 
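The relevance check at the core of CMFL, as described above, amounts to counting sign agreements between a local update and the previous global update; a minimal sketch, with an arbitrary illustrative threshold:

```python
import numpy as np

def relevance_score(local_update, prev_global_update):
    """CMFL-style relevance: fraction of parameters whose signs agree
    between a local update and the previous global update."""
    agree = np.sign(local_update) == np.sign(prev_global_update)
    return float(np.mean(agree))

def is_relevant(local_update, prev_global_update, threshold=0.8):
    """Upload the local update only if its relevance score reaches the
    threshold (the value 0.8 is an arbitrary illustrative choice)."""
    return relevance_score(local_update, prev_global_update) >= threshold

g = np.array([0.2, -0.1, 0.3, -0.4])   # previous global update
l = np.array([0.1, -0.2, 0.5, 0.1])    # local update: 3 of 4 signs agree
score = relevance_score(l, g)          # 0.75, below the 0.8 threshold
```

An update failing this check is simply not uploaded, which is where the communication savings come from.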
The slow upload speed of mobile devices can thus impede the implementation of efficient FL. \n\\item This section explores several key approaches to communication cost reduction. However, many of the approaches, e.g., model compression, result in a deterioration in model accuracy or incur high computation cost. For example, when too many local updates are implemented between communication rounds, the communication cost is indeed reduced but the convergence can be significantly delayed . The tradeoff between these sacrifices and communication cost reduction thus has to be well-managed. \n\\item The current studies of this tradeoff are often mainly empirical in nature, e.g., several experiments have to be done to find the optimal number of local training iterations before communication. With more effective optimization approaches formalized theoretically and tested empirically, FL can eventually become more scalable in nature. For example, the authors in study the tradeoffs between the completion time of FL training and energy cost expended. Then, a weighted sum of completion time and energy consumption is minimized using an iterative algorithm. For delay-sensitive scenarios, the weights can be adjusted such that the FL participants consume more energy for completion time minimization. \n\\item Apart from working to directly reduce the size of model communicated, studies on FL can draw inspiration from applications and approaches in the MEC paradigm. For example, a simple case study introduced in considers the base station as an intermediate model aggregator to reduce instances of device-cloud communication. Unfortunately, there are convergence issues when more edge servers or mobile devices are considered. This is exacerbated by the non-IID distribution of data across different edge nodes. For future works, this statistical challenge can be met, e.g., through inspirations from multi-task learning as we have discussed in Section \\ref{stats}. 
In addition, more effective and innovative system models can be explored such that FL networks can utilize the wealth of computing and storage resources that are closer to the data sources to facilitate efficient FL.\n\\item For the studies that we have discussed in this section, the heterogeneity among mobile devices, e.g., in computing capabilities, is often not considered. For example, one of the ways to reduce communication cost is to increase computation on edge devices, e.g., by performing more local updates before each communication round. In fact, this does not merely lead to the expenditure of greater computation cost. The approach may also not be feasible for devices with weak processing power, and can lead to the \\textit{straggler effect}. As such, we further explore issues on resource allocation in the next section.\n\\end{itemize}", "id": "1ee6a9c5-8838-4e8d-9441-2cab8985fa6c", "level": "subsection", "origin_cites_number": 4, "parent_id": "b279d0a9-a81b-4e03-b78c-6bfca3339a27", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", "Communication Cost" ], [ "subsection", "Summary and Lessons Learned" ] ], "subsections": [], "title": "Summary and Lessons Learned" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 5987, 582 ], "content": "\\label{sec:resource}\nFL involves the participation of heterogeneous devices that have different dataset qualities, computation capabilities, energy states, and willingness to participate. Given the device heterogeneity and resource constraints, i.e., in device energy states and communication bandwidth, resource allocation has to be optimized to maximize the efficiency of the training process. 
In particular, the following resource allocation issues need to be considered:\n\\begin{itemize}\n\\item \\textit{Participant Selection:} As part of the FL protocol presented in Section \\ref{proto}, participant selection refers to the selection of devices to participate in each training round. Typically, a set of participants is randomly selected by the server to participate. Then, the server has to aggregate parameter updates from all participating devices in the round before taking a weighted average of the models . As such, the training progress of FL is limited by the training time of the slowest participating devices, i.e., stragglers . New participant selection protocols are thus investigated in order to address the training bottleneck in FL. \n\\item \\textit{Joint Radio and Computation Resource Management:} Even though computation capabilities of mobile devices have grown rapidly, many devices still face a scarcity of radio resources . Given that local model transmission is an integral part of FL, there has been a growing number of studies that focus on developing novel wireless communication techniques for efficient FL.\n\\item \\textit{Adaptive Aggregation:} As discussed in Section \\ref{fltrain}, FL involves global aggregation in which model parameters are communicated to the FL server for aggregation. The conventional approach to global aggregation is a synchronous one, i.e., global aggregations occur in fixed intervals after all participants complete a certain number of rounds of local computation. However, adaptive calibrations of global aggregation frequency can be investigated to increase training efficiency subject to resource constraints .\n\\item \\textit{Incentive Mechanism:} In the practical implementation of FL, participants may be reluctant to participate in a federation without receiving compensation since training models is resource-consuming. 
In addition, there exists information asymmetry between the FL server and participants since participants have greater knowledge of their available computation resources and data quality. Therefore, incentive mechanisms have to be carefully designed to both incentivize participation and reduce the potential adverse impacts of information asymmetry.\n \\end{itemize}", "id": "59289545-bba6-4ef3-9e35-c47a8165f934", "level": "section", "origin_cites_number": 3, "parent_id": "2a7871ed-4981-4f35-ad2d-8f4aca73a40f", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", "Resource Allocation" ] ], "subsections": [ "f946e8d6-f9e9-41a5-8e79-0bb6c5623dd3", "ab71673f-1afa-4ab4-b2ac-6ddd4c0ad067", "5eca4010-24be-4e22-9b08-93587a264909", "e51a9ff7-da03-4d21-b611-1d80c08bdbdb", "2fd067fb-1843-47b6-9c86-2aebbd838d92" ], "title": "Resource Allocation" }, { "cite_extract_rate": 0.538461538461538, "cites": [ 619, 7721, 7928, 5988, 5445, 582, 1408 ], "content": "To mitigate the training bottleneck, the authors in propose a new FL protocol called \\textit{FedCS}. This protocol is illustrated in Fig. \\ref{fig:fedcs}. The system model is a MEC framework in which the operator of the MEC is the FL server that coordinates training in a cellular network that comprises participating mobile devices that have heterogeneous resources. Accordingly, the FL server first conducts a \\textit{Resource Request} step to gather information such as wireless channel states and computing capabilities from a subset of randomly selected participants. Based on this information, the MEC operator selects the maximum possible number of participants that can complete the training within a prespecified deadline for the subsequent global aggregation phase. By selecting the maximum possible number of participants in each round, accuracy and efficiency of training are preserved. 
To solve the maximization problem, a greedy algorithm is proposed, i.e., participants that take the least time for model upload and update are iteratively selected for training. The simulation results show that compared with the FL protocol which only accounts for training deadline without performing participant selection, \\textit{FedCS} can achieve higher accuracy since \\textit{FedCS} is able to involve more participants in each training round . However, \\textit{FedCS} has been tested only on simple DNN models. When extended to the training of more complex models, it may be difficult to estimate how many participants should be selected. For example, more training rounds may be needed for the training of complex models, and the selection of too few participants may lead to poor performance considering that some participants may drop out during training. In addition, there is bias towards selecting participants with devices that have better computing capabilities. These participants may not hold data that is representative of the population distribution. In particular, we revisit the fairness issue subsequently in this section. \n\\begin{figure}[!]\n\t\\centering \n\t\\includegraphics[width=\\columnwidth]{Figs/partselection}\n\t\\caption{\\small Participant selection under the FedCS and Hybrid-FL protocol.}\n\t\\label{fig:fedcs}\n\\end{figure}\nWhile \\textit{FedCS} addresses heterogeneity of resources among participants in FL, the authors in extend their work on the \\textit{FedCS} protocol with the Hybrid-FL protocol that deals with differences in data distributions among participants. The dataset of participants participating in FL may be non-IID since it is reflective of each individual user's specific characteristics. As we have discussed in Section \\ref{stats}, the non-IID dataset may significantly degrade the performance of the \\textit{FedAvg} algorithm . 
One proposed measure to address the non-IID nature of the dataset is to distribute publicly available data to participants, such that the EMD between their on-device data distribution and the population distribution is reduced. However, such a dataset may not always exist, and participants may not download it for security reasons. Thus, an alternative solution is to construct an approximately IID dataset using inputs from a limited number of privacy insensitive participants . In the Hybrid-FL protocol, during the \textit{Resource Request} step (Fig.~\ref{fig:fedcs}), the MEC operator asks random participants if they permit their data to be uploaded. During the participant selection phase, apart from selecting participants based on computing capabilities, participants are selected such that their uploaded data can form an approximately IID dataset in the server, i.e., the amount of collected data in each class has close values (Fig.~\ref{fig:fedcs}). Thereafter, the server trains a model on the collected IID dataset and merges this model with the global model trained by participants. The simulation results show that even with just 1\% of participants sharing their data, classification accuracy for non-IID data can be significantly improved as compared to the aforementioned \textit{FedCS} benchmark where data is not uploaded at all. However, the proposed protocol can violate the privacy and security of users, especially if the FL server is malicious. In the case when participants are malicious, data can be falsified before uploading, as we will further discuss in Section \ref{sec:security}. In addition, the proposed measure can be costly especially in the case of videos and images. As such, it is unlikely that participants will volunteer for data uploading when they can free ride on the efforts of other volunteers. For feasibility, a well-designed incentive and reputation mechanism is needed to ensure that only trustworthy participants are allowed to upload their data.
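Both \textit{FedCS} and Hybrid-FL hinge on a greedy, deadline-constrained selection step in which the fastest clients are admitted first. A minimal toy sketch of this idea follows (the per-client time estimates and the deadline are hypothetical; the actual protocols additionally model sequential upload scheduling and, for Hybrid-FL, per-class data counts):

```python
def greedy_select(clients, deadline):
    """Greedily admit the clients with the smallest estimated round times
    (local update + upload) until the accumulated schedule would exceed
    the deadline.  `clients` maps a client id to its estimated time."""
    selected, elapsed = [], 0.0
    for cid, t in sorted(clients.items(), key=lambda kv: kv[1]):
        if elapsed + t <= deadline:
            selected.append(cid)
            elapsed += t
    return selected

clients = {"a": 3.0, "b": 1.0, "c": 5.0, "d": 2.0}
print(greedy_select(clients, deadline=6.0))  # ['b', 'd', 'a']
```

Admitting as many fast clients as fit within the deadline is what lets this family of protocols involve more participants per round without violating the round deadline.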
\nIn general, the mobile edge network environment in which FL is implemented on is dynamic and uncertain with variable constraints, e.g., wireless network and energy conditions. Thus, this can lead to training bottlenecks. To this end, Deep Q-Learning (DQL) can be used to optimize resource allocation for model training as proposed in . The system model is a Mobile Crowd Machine Learning (MCML) setting which enables participants in a mobile crowd network to collaboratively train DNN models required by a FL server. The participating mobile devices are constrained by energy, CPU, and wireless bandwidth. Thus, the server needs to determine proper amounts of data, energy, and CPU resources that the mobile devices use for training to minimize energy consumption and training time. Under the uncertainty of the mobile environment, a stochastic optimization problem is formulated. In the problem, the server is the agent, the state space includes the CPU and energy states of the mobile devices, and the action space includes the number of data units and energy units taken from the mobile devices. To achieve the objective, the reward function is defined as a function of the accumulated data, energy consumption, and training latency. To overcome the large state and action space issues of the server, the DQL technique based on Double Deep Q-Network (DDQN) is adopted to solve the server's problem. The simulation results show that the DQL scheme can reduce energy consumption by around 31\\% compared with the greedy algorithm, and training latency is reduced up to 55\\% compared with the random scheme. However, the proposed scheme is applicable only in federations with few participating mobile devices. \nAs an extension to , the authors in also proposes a resource allocation approach using DRL, with the added uncertainty that FL participants are mobile and so they may venture out of the network coverage range with a certain probability. 
Without prior knowledge of the mobile network, the FL server is able to optimize resource allocation across participants, e.g., channel selection and device energy consumption.
The aforementioned resource allocation approaches focus on improving the training efficiency of FL. However, this may cause some FL participants to be left out of the aggregation phase because they are stragglers with limited computing or communication resources. 
One consequence is \textit{unfair} resource allocation, a topic that is commonly explored in resource allocation for wireless networks and ML . For example, if the participant selection protocol selects mobile devices with higher computing capabilities to participate in each training round , the FL model will be overrepresented by the distribution of data owned by participants with devices that have higher computing capabilities. Therefore, the authors in and consider fairness as an additional objective in FL. Fairness is defined in to be the \textit{variance} of the performance of an FL model across participants. If the variance of the testing accuracy is large, this implies the presence of more bias or less fairness, since the learned model may be highly accurate for certain participants and less so for other underrepresented participants. The authors in propose the \textit{q}-Fair FL (\textit{q}-FFL) algorithm that reweighs the objective function in \textit{FedAvg} to assign higher weights in the loss function to devices with higher loss. The modified objective function is as follows:
\begin{equation}
\min _{w} F_{q}(w)=\sum_{k=1}^{m} \frac{p_{k}}{q+1} F_{k}^{q+1}(w)
\end{equation}
where $F_k$ refers to the standard loss functions presented in Table \ref{tab:loss-functions}, $q$ refers to the calibration of fairness in the system model, i.e., setting $q=0$ returns the formulation to the typical FL objective, and $p_{k}$ refers to the ratio of local samples to the total number of training samples.
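The reweighted objective can be evaluated directly from per-participant losses; a quick numerical sketch with hypothetical loss values and sample ratios shows how larger $q$ shifts weight toward high-loss devices:

```python
def q_ffl_objective(losses, ratios, q):
    """Evaluate F_q(w) = sum_k p_k / (q + 1) * F_k(w)^(q + 1)
    for per-participant losses F_k and sample ratios p_k."""
    return sum(p / (q + 1) * f ** (q + 1) for p, f in zip(ratios, losses))

losses, ratios = [0.5, 2.0], [0.5, 0.5]
print(q_ffl_objective(losses, ratios, q=0))  # 1.25: the usual weighted FL objective
print(q_ffl_objective(losses, ratios, q=2))  # the high-loss device now dominates
```

With $q=0$ the two devices contribute in proportion to their sample ratios; with $q=2$ almost the entire objective comes from the device with loss $2.0$, which is the fairness-promoting effect described above.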
In fact, this is a generalization of the Agnostic FL (AFL) algorithm proposed in , in which the device with the \textit{highest} loss dominates the entire loss function. The simulation results show that the proposed \textit{q}-FFL achieves a lower variance of testing accuracy and converges more quickly than the AFL algorithm. However, as expected, for some calibrations of the \textit{q}-FFL algorithm, there can be convergence slowdown since stragglers can delay the training process. As such, an asynchronous aggregation approach (to be subsequently discussed in this section) can potentially be considered for use with the \textit{q}-FFL algorithm.
In contrast, the authors in propose a neural network based approach to estimate the local models of FL participants that are left out during training. In the system model, resource blocks are first allocated by the base station to users whose models have larger effects on the global FL model. In particular, one user is selected to always be connected to the base station. This user's model parameters are then used as input to the feedforward neural network to estimate the model parameters of users who are left out during the training iteration.
This allows the base stations to be able to integrate more locally trained FL model parameters to each iteration of global aggregation, thus improving the FL convergence speed.", "id": "f946e8d6-f9e9-41a5-8e79-0bb6c5623dd3", "level": "subsection", "origin_cites_number": 13, "parent_id": "59289545-bba6-4ef3-9e35-c47a8165f934", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", "Resource Allocation" ], [ "subsection", "Participant Selection" ] ], "subsections": [], "title": "Participant Selection" }, { "cite_extract_rate": 0.5, "cites": [ 5989, 8967, 8968, 8966, 582 ], "content": "While most FL studies have previously assumed orthogonal-access schemes such as Orthogonal Frequency-division Multiple Access (OFDMA) , the authors in propose a multi-access Broadband Analog Aggregation (BAA) design for communication-latency reduction in FL. Instead of performing communication and computation separately during global aggregation at the server, the BAA scheme builds on the concept of over-the-air computation to \\textit{integrate} computation and communication through exploiting the signal superposition property of a multiple-access channel. The proposed BAA scheme allows the reuse of the whole bandwidth (Fig. \\ref{fig:aircomp}(a)) whereas OFDMA orthogonalizes bandwidth allocation (Fig. \\ref{fig:aircomp}(b)). As such, for orthogonal-access schemes, communication latency increases in direct proportion with the number of participants whereas for multi-access schemes, latency is independent of the number of participants. The bottleneck of signal-to-noise ratio (SNR) during BAA transmission is the participating device with the longest propagation distance given that devices that are nearer have to lower their transmission power for amplitude alignment with devices located further. To increase SNR, participants with longer propagation distance have to be dropped. 
However, this leads to the truncation of model parameters. As such, to manage the SNR-truncation tradeoff, three scheduling schemes are considered namely i) \\textit{Cell-interior scheduling}: participants beyond a distance threshold are not scheduled, ii) \\textit{All-inclusive scheduling}: all participants are considered, and iii) \\textit{Alternating scheduling}: edge server alternates between the two aforementioned schemes. The simulation results show that the proposed BAA scheme can achieve similar test accuracy as the OFDMA scheme while achieving latency reduction from 10 times to 1000 times. As a comparison between the three scheduling schemes, the cell-interior scheme outperforms the all-inclusive scheme in terms of test accuracy for high mobility networks where participants have rapidly changing locations. For low mobility networks, the alternating scheduling scheme outperforms cell-interior scheduling. \nAs an extension, the authors in also introduce error accumulation and gradient sparsification in addition to over-the-air computation. In , gradient vectors that are not transmitted as a result of power constraints are completely dropped. To improve the model accuracy, the untransmitted gradient vectors can first be stored in an error accumulation vector. In the next round, local gradient estimates are then corrected using the error vector. In addition, when there are bandwidth limitations, the participating device can apply gradient sparsification to keep only elements with the highest magnitudes for transmission. The elements that are not transmitted are subsequently added on to the error accumulation vector for gradient estimate correction in the next round. 
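The error-accumulation and sparsification steps just described can be sketched in a few lines (a simplified NumPy version; the vector dimension and the number of retained entries $k$ are arbitrary):

```python
import numpy as np

def sparsify_with_error_feedback(grad, error, k):
    """Transmit only the k largest-magnitude entries of (grad + accumulated
    error); the untransmitted remainder is carried over to the next round."""
    corrected = grad + error
    keep = np.argsort(np.abs(corrected))[-k:]   # indices to transmit
    transmitted = np.zeros_like(corrected)
    transmitted[keep] = corrected[keep]
    new_error = corrected - transmitted         # accumulate what was dropped
    return transmitted, new_error

g = np.array([0.1, -2.0, 0.3, 1.5])
t, e = sparsify_with_error_feedback(g, np.zeros(4), k=2)
print(t)  # only the two largest-magnitude entries survive
print(e)  # the dropped entries, fed back next round
```

Because the dropped entries are folded into the next round's gradient estimate, no information is permanently lost, only delayed, which is why error feedback recovers most of the accuracy lost to truncation.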
The simulation results show that the proposed scheme can achieve higher test accuracy than over-the-air computation without error accumulation or gradient sparsification since it corrects gradient estimates with the error accumulation vector and allows for a more efficient utilization of the bandwidth. \n\\begin{figure}[!]\n\t\\centering\n\t\\includegraphics[width=\\linewidth, height= 9cm]{Figs/aircomp}\n\t\\caption{\\small A comparison between (a) BAA by over-the-air computation which reuses bandwidth (above) and (b) OFDMA (below) which uses only the allocated bandwidth.}\n\t\\label{fig:aircomp}\n\\end{figure}\nSimilar to and , the authors in propose an integration of computation and communication via over-the-air computation. However, it is observed that aggregation error incurred during over-the-air computation can lead to a drop in model accuracy as a result of signal distortion. As such, a participant selection algorithm is proposed in which the number of devices selected for training is maximized so as to improve statistical learning performance while keeping the signal distortion below a threshold. Due to the nonconvexity of the mean-square-error (MSE) constraint and intractability of the optimization problem, a difference-of-convex functions (DC) algorithm is proposed to solve the maximization problem. The simulation results show that the proposed DC algorithm is scalable and can also achieve near-optimal performance that is comparable to global optimization, which is non-scalable due to its exponential time complexity. 
In comparison with other state-of-the-art approaches such as the semidefinite relaxation technique (SDR) proposed in , the proposed DC algorithm can also select more participants, thus also achieving higher model accuracy.", "id": "ab71673f-1afa-4ab4-b2ac-6ddd4c0ad067", "level": "subsection", "origin_cites_number": 10, "parent_id": "59289545-bba6-4ef3-9e35-c47a8165f934", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", "Resource Allocation" ], [ "subsection", "Joint Radio and Computation Resource Management" ] ], "subsections": [], "title": "Joint Radio and Computation Resource Management" }, { "cite_extract_rate": 0.75, "cites": [ 590, 1325, 3431 ], "content": "The proposed \\textit{FedAvg} algorithm synchronously aggregates parameters as shown in Fig. \\ref{fig:async}(a) and is thus susceptible to the straggler effect, i.e., each training round only progresses as fast as the slowest device since the FL server waits for \\textit{all} devices to complete local training before global aggregation can take place . In addition, the model does not account for participants that can join halfway when the training round is already in progress. As such, the asynchronous model is proposed to improve the scalability and efficiency of FL. For asynchronous FL, the server updates the global model whenever it receives a local update (Fig. \\ref{fig:async}(b)). The authors in find empirically that an asynchronous approach is robust to participants joining halfway during a training round, as well as when the federation involves participating devices with heterogeneous processing capabilities. However, the model convergence is found to be significantly delayed when data is non-IID and unbalanced. 
As an improvement, the authors in propose the \textit{FedAsync} algorithm in which each newly received local update is adaptively weighted according to staleness, which is defined as the difference between the current epoch and the iteration to which the received update belongs. For example, a stale update from a straggler is outdated since it should have been received in previous training rounds. As such, it is weighted less. In addition, the authors also prove the convergence guarantee for a restricted family of non-convex problems. However, the current hyperparameters of the \textit{FedAsync} algorithm still have to be tuned to ensure convergence in different settings. As such, the algorithm is still unable to generalize to suit the dynamic computation constraints of heterogeneous devices. In fact, given the uncertainty surrounding the reliability of asynchronous FL, synchronous FL remains the approach most commonly used today .
\begin{figure}[!]
	\centering
	\includegraphics[width=\columnwidth]{Figs/asyncnew}
	\caption{\small A comparison between (a) synchronous and (b) asynchronous FL.}
	\label{fig:async}
\end{figure}
For most existing implementations of the \textit{FedAvg} algorithm, the global aggregation phase occurs after a fixed number of training rounds. To better manage the dynamic resource constraints, the authors in propose an adaptive global aggregation scheme which varies the global aggregation frequency so as to ensure desirable model performance while ensuring an efficient use of available resources, e.g., energy, during the FL training process. In , the MEC system model used consists of (i) the local update phase where the model is trained using local data, (ii) the edge aggregation phase where the intermediate aggregation occurs, and (iii) the global aggregation phase where updated model parameters are received and aggregated by the FL server.
In particular, the authors study how the training loss is affected when the total number of edge server aggregation and local updates between global aggregation intervals vary. For this, a convergence bound of gradient descent with non-IID data is first derived. Then, a control algorithm is subsequently proposed to adaptively choose the optimal global aggregation frequency based on the most recent system state. For example, if global aggregation is too time consuming, more edge aggregations will take place before communication with the FL server is initiated. The simulation results show that the adaptive aggregation scheme outperforms the fixed aggregation scheme in terms of loss function minimization and accuracy within the same time budget. However, the convergence guarantee of the adaptive aggregation scheme is only considered for convex loss functions currently.", "id": "5eca4010-24be-4e22-9b08-93587a264909", "level": "subsection", "origin_cites_number": 4, "parent_id": "59289545-bba6-4ef3-9e35-c47a8165f934", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", "Resource Allocation" ], [ "subsection", "Adaptive Aggregation" ] ], "subsections": [], "title": "Adaptive Aggregation" }, { "cite_extract_rate": 0.42857142857142805, "cites": [ 619, 8967, 5993, 5992, 5990, 5989, 7928, 3431, 5991 ], "content": "The authors in propose a service pricing scheme in which participants serve as training service providers for a model owner. In addition, to overcome energy inefficiency in the transfer of model updates, a cooperative relay network is proposed to support model update transfer and trading. The interaction between participants and model owner is modelled as a Stackelberg game in which the model owner is the buyer and participants are the sellers. The Stackelberg game is proposed in which each rational participant can noncooperatively decide on its own profit maximization price. 
In the lower-level subgame, the model owner determines size of training data to maximize profits with consideration of the increasing concave relationship between learning accuracy of the model and size of training data. In the upper-level subgame, the participants decide the price per unit of data to maximize their individual profits. The simulation results show that the proposed mechanism can ensure uniqueness of the Stackelberg equilibrium. For example, model updates that contain valuable information are priced higher at the Stackelberg equilibrium. In addition, model updates can be transferred cooperatively, thus reducing congestion in communication and improving energy efficiency. However, the simulation environment only involves relatively few mobile devices.\nSimilar to , the authors in also model the interaction between participants and model owner as a Stackelberg game, which is well-suited to represent the FL server-participant interaction involved in FL. \nHowever, unlike the aforementioned conventional approaches to solving Stackelberg formulations, a DRL-based approach is adopted together with the Stackelberg game by the authors in . In the DRL formulation, the FL server acts as an agent that decides a payment in response to the participation level and payment history of edge nodes, with the objective of minimizing incentive expenses. Then, the edge nodes determine an optimal participation level in response to the payment policy. This learning based incentive mechanism design enables the FL server to derive an optimal policy in response to its observed state, without requiring any prior information. 
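The backward-induction logic underlying these Stackelberg formulations can be illustrated with a toy single-participant pricing game (the logarithmic accuracy-value function, the unit cost, and the price grid are all hypothetical): the follower's best response is derived in closed form, and the leader searches over prices while anticipating it.

```python
import numpy as np

A, C = 10.0, 1.0  # hypothetical accuracy-value scale and unit training cost

def owner_best_response(price):
    """Lower-level subgame: the model owner picks a data size d maximizing
    A*ln(1 + d) - price*d, giving the closed form d*(price) = A/price - 1."""
    return max(A / price - 1.0, 0.0)

def participant_profit(price):
    """Upper-level subgame: the participant earns (price - C) per data unit."""
    return (price - C) * owner_best_response(price)

# Backward induction: the leader anticipates the follower's best response
# and grid-searches for the profit-maximizing unit price.
prices = np.linspace(1.01, 10.0, 1000)
best_price = prices[np.argmax([participant_profit(p) for p in prices])]
print(best_price)  # close to the analytic optimum sqrt(A * C) ~ 3.16
```

In this toy instance the profit $(p-C)(A/p - 1)$ has a unique maximizer at $p^{*}=\sqrt{AC}$, mirroring the uniqueness of the Stackelberg equilibrium reported above.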
\n \\begin{figure}[!]\n\t\\centering\n\t\\includegraphics[width=0.9\\linewidth]{Figs/contractnew}\n\t\\caption{\\small Participants with unknown resource constraints maximize their utility only if they choose the bundle that best reflects their constraints.}\n\t\\label{fig:contract}\n\\end{figure}\nIn contrast to , the authors in propose an incentive design using a contract theoretic approach to attract participants with high-quality data for FL. In particular, well-designed contracts can reduce information asymmetry through self-revealing mechanisms in which participants select only the contracts specifically designed for their types. For feasibility, each contract must satisfy the Individual Rationality (IR) and Incentive Compatibility (IC) constraints. For IR, each participant is assured of a positive utility when the participant participates in the federation. For IC, every utility maximizing participant only chooses the contract designed for its type. The model owner aims to maximize its own profits subject to IR and IC constraints. As illustrated in Fig. \\ref{fig:contract}, the optimal contracts derived are self-revealing such that each high-type participant with higher data quality only chooses contracts designed for its type, whereas each low-type participant with lower data quality does not have the incentive to imitate high-type participants. The simulation results show that all types of participants only achieve maximum utility when they choose the contract that matches their types. In addition, the proposed contract theory approach also has better performance in terms of profit for the model owner compared with the Stackelberg game-based incentive mechanism. This is because under the contract theoretic approach, the model owner can extract more profits from the participants whereas under the Stackelberg game approach, the participants can optimize their individual utilities. 
In fact, the information asymmetry between FL servers and participants makes contract theory a powerful and efficient tool for mechanism design in FL. As an extension, the authors in introduce a \textit{multi-dimensional} contract in which each FL participant determines the optimal computation power and image quality it is willing to contribute for model training, in exchange for contract rewards in each iteration.
The authors in further introduce reputation as a metric to measure the reliability of FL participants and design a reputation-based participant selection scheme for reliable FL . In this setting, each participant has a reputation value derived from two sources, (i) direct reputation opinions from past interactions with the FL server and (ii) indirect reputation opinions from other task publishers, i.e., other FL servers. The indirect reputation opinions are stored in an open-access reputation blockchain to ensure secure reputation management in a decentralized manner. Before model training, the participants choose a contract that best fits their dataset accuracy and resource conditions. Then, the FL server chooses the participants that have reputation scores larger than a prespecified threshold. After the FL task is completed, i.e., a desirable accuracy is achieved, the FL server updates the reputation opinions, which are subsequently stored in the reputation blockchain. The simulation results show that the proposed scheme can significantly improve the accuracy of the FL model since unreliable workers are detected and not selected for FL training.
\begin{table*}[]
\centering \caption{\small Approaches to resource allocation in FL. 
\\label{tab:resource}}\n\\begin{adjustbox}{width=\\textwidth}\n\\begin{tabular}{|l|l|l|l|}\n\\hline\n\\rowcolor[HTML]{C0C0C0} \n\\multicolumn{1}{|c|}{\\cellcolor[HTML]{C0C0C0}\\textbf{Approaches}} & \\multicolumn{1}{c|}{\\cellcolor[HTML]{C0C0C0}\\textbf{Ref.}} & \\multicolumn{1}{c|}{\\cellcolor[HTML]{C0C0C0}\\textbf{Key Ideas}} & \\multicolumn{1}{c|}{\\cellcolor[HTML]{C0C0C0}\\textbf{Tradeoffs and Shortcomings}} \\\\ \\hline\n & & \\begin{tabular}[c]{@{}l@{}}FedCS to select participants based on computation capabilities so as to\\\\ complete FL training before specified deadline\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Difficult to estimate training duration accurately for\\\\ complex models\\end{tabular} \\\\ \\cline{2-4} \n & & \\begin{tabular}[c]{@{}l@{}}Following , Hybrid-FL to select participants so as to accumulate \\\\ IID, distributable data for FL model training\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Request of data sharing may defeat the \\\\ original intent of FL\\end{tabular} \\\\ \\cline{2-4} \n & & DRL to determine resource consumption by FL participants & \\\\ \\cline{2-3}\n & & \\begin{tabular}[c]{@{}l@{}}Following , DRL for resource allocation with\\\\ mobility-aware FL participants\\end{tabular} & \\multirow{-2}{*}{\\begin{tabular}[c]{@{}l@{}}DRL models are difficult to train especially\\\\ when the number of FL participants are large\\end{tabular}} \\\\ \\cline{2-4} \n\\multirow{-5}{*}{\\begin{tabular}[c]{@{}l@{}}Participant\\\\ Selection\\end{tabular}} & & Fair resource allocation to reduce variance of model performance & Convergence delays with more fairness \\\\ \\hline\n & & \\begin{tabular}[c]{@{}l@{}}Propose BAA to integrate computation and communication through\\\\ exploiting the signal superposition property of multiple-access channel\\end{tabular} & \\\\ \\cline{2-3}\n & & \\begin{tabular}[c]{@{}l@{}}Improves on by accounting for gradient vectors that are not\\\\ transmitted due to power constraints\\end{tabular} & \\\\ 
\\cline{2-3}\n\\multirow{-3}{*}{\\begin{tabular}[c]{@{}l@{}}Joint Radio and \\\\ Computation\\\\ Resource Management\\end{tabular}} & & \\begin{tabular}[c]{@{}l@{}}Improves on using the DC algorithm to minimize\\\\ aggregation error\\end{tabular} & \\multirow{-3}{*}{\\begin{tabular}[c]{@{}l@{}}Signal distortion can lead to drop in accuracy,\\\\ the scalability is also an issue when large \\\\ heterogeneous networks are involved\\end{tabular}} \\\\ \\hline\n & & \\begin{tabular}[c]{@{}l@{}}Asynchronous FL where model aggregation occurs whenever local\\\\ updates are received by FL server\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Significant delay in convergence in non-IID\\\\ and unbalanced dataset\\end{tabular} \\\\ \\cline{2-4} \n\\multirow{-2}{*}{\\begin{tabular}[c]{@{}l@{}}Adaptive \\\\ Aggregation\\end{tabular}} & & Adaptive global aggregation frequency based on resource constraints & \\begin{tabular}[c]{@{}l@{}}Convergence guarantees are limited to restrictive\\\\ assumptions\\end{tabular} \\\\ \\hline\n & & \\begin{tabular}[c]{@{}l@{}}Stackelberg game for incentivizing higher quantities of training data\\\\ or compute resource contributed \\end{tabular} & \\begin{tabular}[c]{@{}l@{}}FL server derives lower profits. 
Also, assumption\\\\ that there is only one FL server\\end{tabular} \\\\ \\cline{2-4} \n & & Contract theoretic approach to incentivize FL participants & \\\\ \\cline{2-3}\n\\multirow{-3}{*}{\\begin{tabular}[c]{@{}l@{}}Incentive \\\\ Mechanism\\end{tabular}} & & Reputation mechanism to select effective workers & \\multirow{-2}{*}{Assumption that there is only one FL server} \\\\ \\hline\n\\end{tabular}\n\\end{adjustbox}\n\\end{table*}", "id": "e51a9ff7-da03-4d21-b611-1d80c08bdbdb", "level": "subsection", "origin_cites_number": 21, "parent_id": "59289545-bba6-4ef3-9e35-c47a8165f934", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", "Resource Allocation" ], [ "subsection", "Incentive Mechanism" ] ], "subsections": [], "title": "Incentive Mechanism" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 590, 7928, 5991, 5990, 5986, 1325 ], "content": "In this section, we have discussed four main issues in resource allocation. The issues and approaches are summarized in Table \\ref{tab:resource}. From this section, the lessons learned are as follows:\n\\begin{itemize}\n\\item In heterogeneous mobile networks, the consideration of resource allocation is important to ensure efficient FL. For example, each training iteration is only conducted as quickly as the slowest FL participant, i.e., the straggler effect. In addition, the model accuracy is highly dependent on the quality of data used for training by FL participants. In this section, we have explored different dimensions of resource heterogeneity for consideration, e.g., varying computation and communication capabilities, willingness to participate, and quality of data for local model training. In addition, we have explored various tools that can be considered for resource allocation. 
For example, DRL is useful given the dynamic and uncertain wireless network conditions experienced by FL participants, whereas contract theory can serve as a powerful tool in mechanism design under the context of information asymmetry. Naturally, traditional optimization approaches have also been well explored in radio resource management for FL, given the high dependency on communications efficiency in FL.\n\\item In Section \\ref{sec: communication}, communication cost reduction comes with a sacrifice in terms of either higher computation costs or lower inference accuracy. Similarly, there exist different tradeoffs to be considered in resource allocation. A scalable model is thus one that enables customization to suit varying needs. For example, the study of allows the FL server to calibrate levels of fairness when allocating training importance, whereas the study in enables the tradeoffs between training completion time and energy expense to be calibrated by the FL system administrator.\n\\item In synchronous FL, the FL system is susceptible to the straggler effect. As such, asynchronous FL has been proposed as a solution in and . In addition, asynchronous FL also allows participants to join the FL training halfway even while a training round is in progress. This is more reflective of practical FL settings and can be an important contributing factor towards ensuring the scalability of FL. However, synchronous FL remains the most common approach used due to convergence guarantees . Given the many advantages of asynchronous FL, new asynchronous algorithms should be investigated. In particular, for future proposed algorithms, the convergence guarantee in a non-IID setting for non-convex loss functions needs to be considered.\n\\item The study of incentive mechanism design is a particularly important aspect of FL. In particular, due to data privacy concerns, the FL servers are unable to check for training data quality. 
With the use of self-revealing mechanisms in contract theory, or through modeling the interactions between FL server and participants with game theoretic concepts, high quality data can be motivated as contributions from FL participants. However, existing studies in , , and generally assume that a\nfederation enjoys a monopoly. In particular, each system model is assumed to only consist of multiple individual participants collaborating with a sole FL server. There can be exceptions to this setting as follows: (i) the participants may be competing data owners who are reluctant to share their model parameters since the competitors\nalso benefit from a trained global model and (ii) the FL servers\nmay compete with other FL servers, i.e., model owners.\nIn this case, the formulation of the incentive mechanism design\nwill be vastly different from that proposed. A relatively novel approach has been to model the regret of each FL participant in joining the various competing federations for model training. For future works, a system model with multiple competing federations can be considered together with Stackelberg games and contract theoretic approaches.\n\\item In this section, we have assumed that FL assures the privacy and security of participants. However, as discussed in the following section, this assumption may not hold in the presence of malicious participants or FL server. 
\n\\end{itemize}", "id": "2fd067fb-1843-47b6-9c86-2aebbd838d92", "level": "subsection", "origin_cites_number": 9, "parent_id": "59289545-bba6-4ef3-9e35-c47a8165f934", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", "Resource Allocation" ], [ "subsection", "Summary and Lessons Learned" ] ], "subsections": [], "title": "Summary and Lessons Learned" }, { "cite_extract_rate": 0.5, "cites": [ 614 ], "content": "\\label{sec:security}\nOne of the main objectives of FL is to protect the privacy of participants, i.e., the participants only need to share parameters of the trained model instead of sharing their actual data. However, some recent research works have shown that privacy and security concerns may arise when the FL participants or FL servers are malicious in nature. In particular, this defeats the purpose of FL since the resulting global model can be corrupted, or the participants may even have their privacy compromised during model training. In this section, we discuss the following issues:\n\\begin{itemize}\n\t\\item \\textit{Privacy}: Even though FL does not require the exchange of data for collaborative model training, a malicious participant can still infer sensitive information, e.g., gender, occupation, and location, from other participants based on their shared models. For example, in~, when training a binary gender classifier on the FaceScrub~ dataset, the authors show that they can infer if a certain participant's inputs are included in the dataset just from inspecting the shared model, with a very high accuracy of up to 90\\%. Thus, in this section, we discuss privacy issues related to the shared models in FL and review solutions proposed to preserve the privacy of participants. \n\t\\item \\textit{Security}: In FL, the participants locally train the model and share trained parameters with other participants in order to improve the accuracy of prediction. 
However, this process is susceptible to a variety of attacks, e.g., data and model poisoning, in which a malicious participant can send incorrect parameters or corrupted models to falsify the learning process during global aggregation. Consequently, the global model will be updated incorrectly, and the whole learning system becomes corrupted. This section discusses more details on emerging attacks in FL as well as some recent countermeasures to deal with such attacks.\n\\end{itemize}", "id": "ce302473-c70b-411d-815f-c3000491c151", "level": "section", "origin_cites_number": 2, "parent_id": "2a7871ed-4981-4f35-ad2d-8f4aca73a40f", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", " Privacy and Security Issues" ] ], "subsections": [ "9436f299-d77d-488c-81f2-7749bbacfea4", "4de273f1-2950-457b-a4ee-9560525e1191", "0b63118b-0dab-4522-83aa-c544dfcbfe18" ], "title": " Privacy and Security Issues" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "9436f299-d77d-488c-81f2-7749bbacfea4", "level": "subsection", "origin_cites_number": 0, "parent_id": "ce302473-c70b-411d-815f-c3000491c151", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", " Privacy and Security Issues" ], [ "subsection", "Privacy Issues" ] ], "subsections": [ "e6875e90-30f2-47e7-bb9b-64bd78cd4e74", "6d8162e0-32ab-437f-8104-83ee9b129d2f", "22f0ecb5-77e0-4892-a883-3246442b6787", "5c0985e1-35af-4ec3-a02c-d474bbb37596" ], "title": "Privacy Issues" }, { "cite_extract_rate": 0.5, "cites": [ 2676, 603, 7318 ], "content": "One of the first research works that shows the possibility of extracting information from a trained model is~. In this paper, the authors show that during the training phase, the correlations implied in the training samples are gathered inside the trained model. 
Thus, if the trained model is released, it can lead to unexpected information leakage to attackers. For example, an adversary can infer the ethnicity or gender of a user from its trained voice recognition system. In~, the authors develop a model-inversion algorithm which is very effective in exploiting information from trained decision tree-based or face recognition models. The idea of this approach is to compare the target feature vector with each of the possible values and then derive a weighted probability estimate of the correct value. The experiment results reveal that by using this technique, the adversary can reconstruct an image of the victim's face from its label with very high accuracy.\nRecently, the authors in~ show that it is even possible for an adversary to infer information of a victim through queries to the prediction model. In particular, this occurs when a malicious participant has access to make prediction queries on a trained model. Then, the malicious participant can use the prediction queries to extract the trained model from the data owner. More importantly, the authors point out that this kind of attack can successfully extract model information from a wide range of training models such as decision trees, logistic regressions, SVMs, and even complex training models including DNNs. Some recent research works have also demonstrated the vulnerabilities of DNN-based training models against model extraction attacks~. 
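As a toy illustration of such query-based extraction (our own sketch under simplifying assumptions, not the algorithm from the cited works), an adversary with black-box access to a linear model can recover its secret parameters exactly from d + 1 prediction queries:

```python
import numpy as np

# Hypothetical victim: a secret linear model y = w.x + b that is exposed
# only through a prediction API. The adversary never sees w_true or b_true.
rng = np.random.default_rng(0)
d = 4
w_true = rng.normal(size=d)   # secret weights
b_true = 1.5                  # secret bias

def victim_predict(x):
    """Black-box prediction query available to the adversary."""
    return float(w_true @ x + b_true)

# Extraction: query the zero vector (reveals the bias), then each
# standard basis vector (reveals one weight at a time).
b_est = victim_predict(np.zeros(d))
w_est = np.array([victim_predict(np.eye(d)[i]) - b_est for i in range(d)])
```

With d + 1 queries the linear model is recovered exactly; for the richer model classes mentioned above (decision trees, DNNs), the published attacks need far more queries but follow the same query-and-solve principle.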
Therefore, this raises a serious privacy concern for participants in sharing training models in FL.", "id": "e6875e90-30f2-47e7-bb9b-64bd78cd4e74", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "9436f299-d77d-488c-81f2-7749bbacfea4", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", " Privacy and Security Issues" ], [ "subsection", "Privacy Issues" ], [ "subsubsection", "Information exploiting attacks in machine learning - A brief overview" ] ], "subsections": [], "title": "Information exploiting attacks in machine learning - A brief overview" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 7727, 7608 ], "content": "In order to protect the privacy of parameters trained by DNNs, the authors in~ introduce a technique, called \\emph{differentially private stochastic gradient descent}, which can be effectively implemented on DL algorithms. The key idea of this technique is to add some ``noise'' to the trained parameters by using a differential privacy-preserving randomized mechanism~, e.g., a Gaussian mechanism, before sending such parameters to the server. In particular, at the gradient averaging step of a normal FL participant, a Gaussian distribution is used to approximate the differentially private stochastic gradient descent. Then, during the training phase, the participant keeps calculating the probability that malicious participants can exploit information from its shared parameters. Once a predefined threshold is reached, the participant will stop its training process. In this way, the participant can mitigate the risk of revealing private information from its shared parameters. \nInspired by this idea, the authors in~ develop an approach which can achieve a better privacy-protection solution for participants. In this approach, the authors propose two main steps to process data before sending trained parameters to the server. 
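Both of these DP approaches rely on the same Gaussian-mechanism primitive: clip each gradient to a norm bound and add calibrated Gaussian noise before it is shared. A minimal sketch follows (the clipping bound and noise scale are illustrative assumptions, not values from the cited papers):

```python
import numpy as np

# Sketch of the Gaussian-mechanism step used in differentially private
# SGD: clip the gradient to an assumed norm bound C, then add Gaussian
# noise with standard deviation sigma * C before the update leaves the
# participant. C (clip_norm) and sigma here are illustrative only.
def privatize_gradient(grad, clip_norm=1.0, sigma=1.1, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))  # norm clipping
    noise = rng.normal(0.0, sigma * clip_norm, size=grad.shape)
    return clipped + noise

raw_grad = np.array([3.0, 4.0])          # norm 5, will be clipped to norm 1
shared = privatize_gradient(raw_grad)    # what the participant uploads
```

Clipping bounds each participant's influence on the aggregate, which is what allows the noise scale to be calibrated to a formal privacy guarantee.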
In particular, for each learning round, the aggregate server first selects a random number of participants to train the global model. Then, if a participant is selected to train the global model in a learning round, the participant will adopt the method proposed in~, i.e., using a Gaussian distribution to add noise to the trained model before sending the trained parameters to the server. In this way, a malicious participant cannot infer information of other participants by using the parameters of shared global model as it has no information regarding who has participated in the training process in each learning round.", "id": "6d8162e0-32ab-437f-8104-83ee9b129d2f", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "9436f299-d77d-488c-81f2-7749bbacfea4", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", " Privacy and Security Issues" ], [ "subsection", "Privacy Issues" ], [ "subsubsection", "Differential privacy-based protection solutions for FL participants" ] ], "subsections": [], "title": "Differential privacy-based protection solutions for FL participants" }, { "cite_extract_rate": 0.4, "cites": [ 5994, 3477 ], "content": "While DP solutions can protect private information of a honest participant from other malicious participants in FL, they only work well if the server is trustful. If the server is malicious, it can result in a more serious privacy threat to all participants in the network. Thus, the authors in~ introduce a collaborative DL framework to render multiple participants to learn the global model without uploading their explicit training models to the server. 
The key idea of this technique is that instead of uploading the whole set of trained parameters to the server and updating its local model with the whole set of global parameters, each participant wisely selects the number of gradients to upload and the number of parameters from the global model to update as illustrated in Fig.~\\ref{fig:CollTrain}. In this way, malicious participants cannot infer explicit information from the shared model. One interesting result of this paper is that even when the participants do not share all trained parameters and do not update all parameters from the shared model, the accuracy of the proposed solution is still close to that of the case when the server has the entire dataset to train the global model. For example, for the MNIST dataset~, the accuracy of the prediction model when the participants agree to share 10\\% and 1\\% of their parameters is 99.14\\% and 98.71\\%, respectively, compared with 99.17\\% for the centralized solution when the server has full data to train. However, the approach is yet to be tested on more complex classification tasks.\n\\begin{figure}[!]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{Figs/CollTrain}\n\t\\caption{\\small Selective parameters sharing model.}\n\t\\label{fig:CollTrain}\n\\end{figure}\nAlthough selective parameter sharing and DP solutions can make information exploiting attacks more challenging, the authors in~ show that these solutions are susceptible to a new type of attack, called powerful attack, developed based on Generative Adversarial Networks (GANs) . GANs are a class of ML techniques which use two neural networks, namely a generator network and a discriminator network, that compete with each other during training. The generator network tries to generate fake data that resembles the real data, starting from random ``noise''. Then, the generated fake data is passed to the discriminator network for classification. 
After the training process, the GANs can generate new data with the same statistics as the training dataset. Inspired by this idea, the authors in~ develop a powerful attack which allows a malicious participant to infer sensitive information from a victim participant even with just a part of the shared parameters from the victim as illustrated in Fig.~\\ref{fig:GAN_attacks}. To deal with the GAN attack, the authors in~ introduce a solution using a secret sharing scheme with an extreme boosting algorithm. This approach executes a lightweight secret sharing protocol before transmitting the newly trained model in plaintext to the server at each round. Thereby, other participants in the network cannot infer information from the shared model. However, the limitation of this approach is the reliance on a trusted third party to generate signature key pairs. \n\\begin{figure*}[!]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{Figs/GAN_attacks}\n\t\\caption{\\small GAN Attack on collaborative deep learning.}\n\t\\label{fig:GAN_attacks}\n\\end{figure*}\nDifferent from all aforementioned works, the authors in~ introduce a collaborative training model in which all participants cooperate to train a federated GANs model. The key idea of this method is that the federated GANs model can generate artificial data that can replace participants' real data, thus protecting the privacy of real data for the honest participants. In particular, in order to guarantee participants' data privacy while still maintaining flexibility in training tasks, this approach produces a federated generative model. This model can output artificial data that does not belong to any real user in particular, but comes from the common cross-user data distribution. As a result, this approach can significantly reduce the possibility of malicious exploitation of information from real data. 
However, this approach inherits existing limitations of GANs, e.g., training instability due to the generated fake data, which can dramatically reduce the performance of collaborative learning models.", "id": "22f0ecb5-77e0-4892-a883-3246442b6787", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "9436f299-d77d-488c-81f2-7749bbacfea4", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", " Privacy and Security Issues" ], [ "subsection", "Privacy Issues" ], [ "subsubsection", "Collaborative training solutions" ] ], "subsections": [], "title": "Collaborative training solutions" }, { "cite_extract_rate": 0, "cites": [], "content": "Encryption is an effective way to protect data privacy of the participants when they want to share the trained parameters in FL. In~, the homomorphic encryption technique is introduced to protect privacy of participants' shared parameters from a honest-but-curious server. A honest-but-curious server is defined to be a user who wants to extract information from the participants' shared parameters, but keeps all operations in FL in proper working condition. The idea of this solution is that the participants' trained parameters will be encrypted using the homomorphic encryption technique before they are sent to the server. This approach is effective in protecting sensitive information from the curious server, and also achieves the same accuracy as that of the centralized DL algorithm. A similar concept is also presented in~ with secret sharing mechanism used to protect information of FL participants. \nAlthough both the encryption techniques presented in~ and~ can prevent the curious server from extracting information, they require multi-round communications and cannot preclude collusions between the server and participants. Thus, the authors in~ propose a hybrid solution which integrates both additively homomorphic encryption and DP in FL. 
In particular, before the trained parameters are sent to the server, they will be encrypted using the additively homomorphic encryption mechanism together with intentional noises to perturb the original parameters. As a result, this hybrid scheme can simultaneously prevent the curious server from exploiting information as well as solve the collusion problem between the server and malicious participants. However, in this paper, the authors do not compare the accuracy of the proposed approach with the case without homomorphic encryption and DP. Thus, the performance of proposed approach, i.e., in terms of model accuracy, is not clear.", "id": "5c0985e1-35af-4ec3-a02c-d474bbb37596", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "9436f299-d77d-488c-81f2-7749bbacfea4", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", " Privacy and Security Issues" ], [ "subsection", "Privacy Issues" ], [ "subsubsection", "Encryption-based Solutions" ] ], "subsections": [], "title": "Encryption-based Solutions" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:securityissues}", "id": "4de273f1-2950-457b-a4ee-9560525e1191", "level": "subsection", "origin_cites_number": 0, "parent_id": "ce302473-c70b-411d-815f-c3000491c151", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", " Privacy and Security Issues" ], [ "subsection", "Security Issues" ] ], "subsections": [ "1705aaee-44c7-4c66-aaa4-b8557c9e40df", "0ebb5bb6-388b-43b9-a52a-920b017ef82b", "0d9b23c3-967b-4457-85bf-955848611c53" ], "title": "Security Issues" }, { "cite_extract_rate": 0.5, "cites": [ 8969, 5468 ], "content": "In FL, a participant trains its data and sends the trained model to the server for further processing. In this case, it is intractable for the server to check the real training data of a participant. 
Thus, a malicious participant can poison the global model by creating \\emph{dirty-label} data to train the global model with the aim of generating falsified parameters. For example, a malicious participant can generate a number of samples, e.g., photos, under a designed label, e.g., a clothing brand, and use them to train the global model to achieve its business goal, e.g., the prediction model shows results of the targeted clothing brand. Dirty-label data poisoning attacks are demonstrated to achieve high misclassification rates in DL processes, up to 90\\%, when a malicious participant injects relatively few dirty-label samples (around 50) into the training dataset~. This calls for urgent solutions to deal with data poisoning attacks in FL. \nIn~, the authors investigate the impacts of a sybil-based data poisoning attack on a FL system. In particular, for the sybil attack, a malicious participant tries to improve the effectiveness of data poisoning in training the global model by creating multiple malicious participants. In Table~\\ref{tab:Syblil}, the authors show that with only two malicious participants, the attack success rate can reach up to 96.2\\%, at which point the FL model is unable to correctly classify images of ``1'' (instead, it always incorrectly predicts them to be images of ``7''). To mitigate sybil attacks, the authors then propose a defense strategy, namely FoolsGold. The key idea of this approach is that honest participants can be distinguished from sybil participants based on their updated gradients. Specifically, in the non-IID FL setting, each participant's training data has its own particularities, and sybil participants will contribute gradients that appear more similar to each other than those of other honest participants. With FoolsGold, the system can defend against the sybil data poisoning attack with minimal changes to the conventional FL process and without requiring any auxiliary information outside of the learning process. 
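A simplified sketch in the spirit of this similarity-based defense (our own toy version, not the published FoolsGold algorithm): participants whose gradient updates are near-duplicates of another participant's update receive a low aggregation weight.

```python
import numpy as np

# Toy similarity-based down-weighting (simplified; not the published
# FoolsGold algorithm). Sybils tend to submit near-identical gradient
# updates, so a high maximum pairwise cosine similarity lowers a
# participant's aggregation weight.
def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def similarity_weights(updates):
    n = len(updates)
    weights = np.ones(n)
    for i in range(n):
        max_sim = max(cosine(updates[i], updates[j])
                      for j in range(n) if j != i)
        weights[i] = np.clip(1.0 - max_sim, 0.0, 1.0)  # near-duplicates -> ~0
    return weights

honest_1 = np.array([1.0, 0.2, -0.5])
honest_2 = np.array([-0.3, 1.0, 0.4])
sybil_a = np.array([0.80, 0.80, 0.80])
sybil_b = np.array([0.81, 0.79, 0.80])   # near-copy of sybil_a
weights = similarity_weights([honest_1, honest_2, sybil_a, sybil_b])
# The two sybils receive aggregation weights close to zero.
```

In the non-IID setting this heuristic is cheap to compute at the server and needs no auxiliary data, which is what makes this family of defenses attractive.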
Through simulations results on 3 diverse datasets (MNIST~, KDDCup~, Amazon Reviews~), the authors show that FoolsGold can mitigate the attack under a variety of conditions, including different distributions of participant data, varying poisoning targets, and various attack strategies. \n\\begin{table}[!]\n\t\\centering\n\t\\caption{\\small The accuracy and attack success rates for no-attack scenario and attacks with 1 and 2 sybils in a FL system with MNIST dataset~.}\n\t\\label{tab:Syblil}\n\t\\begin{tabular}{||c||c|c|c||} \\hline\n\t\t& \\textbf{Baseline} & \\textbf{Attack 1} & \\textbf{Attack 2} \\\\ \n\t\t\\hline\n\t\t\\hline \t\t\n\t\tNumber of honest participants & 10 & 10 & 10 \t\t\t\\\\ \\cline{1-4}\n\t\tNumber of sybil participants & 0 & 1 & 2 \t\t\t\\\\ \\cline{1-4}\n\t\tThe accuracy (digits: 0, 2-9) & 90.2\\% & 89.4\\% & 88.8\\% \\\\ \\cline{1-4}\n\t\tThe accuracy (digit: 1) & 96.5\\% & 60.7\\% & 0.0\\% \\\\ \\cline{1-4}\n\t\t\\textbf{Attack success rate} & 0.0\\% & 35.9\\% & 96.2\\% \\\\ \\cline{1-4}\n\t\t\\hline\\end{tabular}\n\\end{table}", "id": "1705aaee-44c7-4c66-aaa4-b8557c9e40df", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "4de273f1-2950-457b-a4ee-9560525e1191", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", " Privacy and Security Issues" ], [ "subsection", "Security Issues" ], [ "subsubsection", "Data Poisoning Attacks" ] ], "subsections": [], "title": "Data Poisoning Attacks" }, { "cite_extract_rate": 1, "cites": [ 5467, 2673 ], "content": "Unlike data poisoning attacks which aim to generate fake data to cause adverse impacts to the global model, a model poisoning attack attempts to directly poison the global model that it sends to the server for aggregation. As shown in~ and , model poisoning attacks are much more effective than those of data poisoning attacks, especially for large-scale FL with many participants. 
The reason is that for data poisoning attacks, a malicious participant's updates are scaled based on its dataset and the number of participants in the federation. However, for model poisoning attacks, a malicious participant can modify the updated model, which is sent to the server for aggregation, directly. As a result, even with a single attacker, the whole global model can be poisoned. The simulation results in~ also confirm that even a highly constrained adversary with limited training data can achieve a high success rate in performing model poisoning attacks. Thus, solutions to protect the global model from model poisoning attacks have to be developed. \nIn~, some solutions are suggested to prevent model poisoning attacks. Firstly, based on an updated model shared from a participant, the server can check whether the shared model can help to improve the global model's performance or not. If not, the participant will be marked as a potential attacker, and after a few rounds of observing the updated model from this participant, the server can determine whether this is a malicious participant or not. The second solution is based on the comparison among the updated models shared by the participants. In particular, if an updated model from a participant is too different from the others, the participant can potentially be a malicious one. Then, the server will continue observing updates from this participant before it can determine whether this is a malicious user or not. However, model poisoning attacks are extremely difficult to prevent because when training with millions of participants, it is intractable to evaluate the improvement from every single participant. As such, more effective solutions need to be further investigated. \nIn~, the authors introduce a more effective model poisoning attack which is demonstrated to achieve 100\\% accuracy on the attacker's task within just a single learning round. 
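One reason a single round can suffice is that a participant who knows the aggregation rule can scale its submission so that the average lands on any model it chooses. A toy numeric sketch under equal-weight FedAvg (our own illustration, not the attack from the cited work):

```python
import numpy as np

# Toy model-replacement arithmetic under equal-weight FedAvg (our own
# illustration). If honest local models stay close to the previous
# global model g_prev, an attacker can solve for the single submission
# that moves the new average onto an arbitrary target model.
n = 10
g_prev = np.array([0.5, -0.2])                       # previous global model
honest = [g_prev + 0.01 * np.random.default_rng(i).normal(size=2)
          for i in range(n - 1)]                     # small honest updates
target = np.array([5.0, 5.0])                        # attacker's chosen model

# Attacker approximates the honest sum by (n - 1) * g_prev and solves
# (sum(honest) + l_m) / n = target for its own submission l_m.
l_m = n * target - (n - 1) * g_prev
new_global = (sum(honest) + l_m) / n                 # lands ~exactly on target
```

The honest contributions only perturb the result by their small deviations from g_prev divided by n, which is why the replacement is near-exact in a single round.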
In particular, a malicious participant can share its poisoned model which not only is trained for its intentional purpose, but which also contains a backdoor function. In this paper, the authors consider to use a semantic backdoor function to inject into the global model. The reason is that this function can make the global model misclassify even without a need to modify the input data of the malicious participant. For example, an image classification backdoor function can inject an attacker-chosen label to all images with some certain features, e.g., all dogs with black stripes can be misclassifed to be cats. In the simulations, the authors show that this attack can greatly outperform other conventional FL data poisoning attacks. For example, in a word-prediction task with 80,000 total participants, compromising just eight of them is enough to achieve 50\\% backdoor accuracy, as compared to 400 malicious participants needed to perform the data-poisoning attack.", "id": "0ebb5bb6-388b-43b9-a52a-920b017ef82b", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "4de273f1-2950-457b-a4ee-9560525e1191", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", " Privacy and Security Issues" ], [ "subsection", "Security Issues" ], [ "subsubsection", "Model Poisoning Attacks" ] ], "subsections": [], "title": "Model Poisoning Attacks" }, { "cite_extract_rate": 0.625, "cites": [ 5468, 7608, 5994, 5467, 7727 ], "content": "\\begin{figure}[!]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{Figs/BlockFL}\n\t\\caption{\\small An illustration of (a) conventional FL and (b) the proposed BlockFL architectures.}\n\t\\label{fig:BlockFL}\n\\end{figure}\nFree-riding is another attack in FL that occurs when a participant wants to benefit from the global model without contributing to the learning process. 
The malicious participant, i.e., free-rider, can pretend that it has a very small number of samples to train, or it can select a small set of its real dataset to train, e.g., to save its resources. As a result, the honest participants need to contribute more resources to the FL training process. To address this problem, the authors in~ introduce a blockchain-based FL architecture, called BlockFL, in which the participants' local learning model updates are exchanged and verified by leveraging the blockchain technology. In particular, each participant trains and sends the trained global model to its associated miner in the blockchain network and then receives a reward that is proportional to the number of trained data samples as illustrated in Fig.~\\ref{fig:BlockFL}. In this way, this framework can not only prevent the participants from free-riding, but also incentivize all participants to contribute to the learning process. A similar blockchain-based model is also introduced in~ to provide data confidentiality, computation auditability, and incentives for the participants of FL. However, utilizing the blockchain technology incurs a significant cost for implementing and maintaining the miners that operate the blockchain network. Furthermore, consensus protocols used in blockchain networks, e.g., proof-of-work (PoW), can cause a long delay in information exchange, and thus they may not be appropriate to implement on FL models. 
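The reward rule above pays each participant in proportion to its reported number of training samples; the same proportionality governs each participant's influence under FedAvg-style weighted aggregation. A generic sketch (illustrative values; not BlockFL itself):

```python
import numpy as np

# Generic sketch (not BlockFL): both the aggregation weight and the
# reward share of a participant are proportional to its sample count,
# so a free-rider reporting few samples earns and influences little.
models = {"a": np.array([1.0, 1.0]),
          "b": np.array([3.0, 1.0]),
          "c": np.array([1.0, 5.0])}
samples = {"a": 100, "b": 300, "c": 100}   # reported training-sample counts
total = sum(samples.values())

global_model = sum(samples[k] / total * models[k] for k in models)
rewards = {k: samples[k] / total for k in models}  # share of a unit reward pool
```

Of course, this only disincentivizes free-riding if the reported sample counts can be verified, which is exactly what the blockchain-based verification above targets.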
\n\\begin{table*}[!]\n\t\\caption{\\small A summary of attacks and countermeasures in FL.}\n\t\\label{tab:Sec_Pri}\n\t\\scriptsize\t\n\t\\begin{centering}\n\t\t\\begin{tabular}{|>{\\centering\\arraybackslash}m{2cm}|>{\\centering\\arraybackslash}m{4.5cm}|>{\\centering\\arraybackslash}m{10.5cm}|}\n\t\t\t\\hline\n\t\t\t\\cellcolor{mygray} & \\cellcolor{mygray} &\\cellcolor{mygray} \\tabularnewline\n\t\t\t\\cellcolor{mygray} \\multirow{-2 }{*}{\\textbf{Attack Types}} &\\cellcolor{mygray} \\multirow{-2}{*} {\\textbf{Attack Method}} &\\cellcolor{mygray} \\multirow{-2}{*} {\\textbf{Countermeasures}} \\tabularnewline\n\t\t\t\\hline\n\t\t\t\\hline\n\t\t\tInformation exploiting attacks (privacy issues) & Attackers try to illegally exploit information from the shared model. & \\begin{itemize}\n\t\t\t\t\\item \\emph{Differentially private stochastic gradient descent}: Add ``noise'' to the trained parameters by using a differential privacy-preserving randomized mechanism~.\n\t\t\t\t\\item \\emph{Differentially private and selective participants}: Add ``noise'' to the trained parameters and randomly select participants to train the global model in each round~.\n\t\t\t\t\\item \\emph{Selective parameter sharing}: Each participant wisely selects the number of gradients to upload and the number of parameters from the global model to update~.\n\t\t\t\t\\item \\emph{Secret sharing scheme with extreme boosting algorithm}: This approach executes a lightweight secret sharing protocol before transmitting the newly trained model in plaintext to the server at each round~.\n\t\t\t\t\\item \\emph{GAN model training}: All participants cooperate to train a federated GANs model~.\n\t\t\t\\end{itemize} \\\\\n\t\t\t\\hline\n\t\t\tData poisoning attacks & Attackers poison the global model by creating \\emph{dirty-label} data and use such data to train the global model. & \\begin{itemize} \\item \\emph{FoolsGold}: Distinguish honest participants based on their updated gradients. 
It is based on the fact that in the non-IID FL setting, each participant's training data has its own particularities, and malicious participants will contribute gradients that appear more similar to each other than those of the honest participants~. \\end{itemize} \\\\\n\t\t\t\\hline\n\t\t\tModel poisoning attacks & Attackers attempt to directly poison the global model that they send to the server for aggregation. & \\begin{itemize}\n\t\t\t\t\\item Based on an updated model shared from a participant, the server can check whether the shared model can help to improve the global model's performance or not. If not, the participant will be marked as a potential attacker~.\n\t\t\t\t\\item Compare among the updated global models shared by the participants, and if an updated global model from a participant is too different from others, it could be a potential malicious participant~.\n\t\t\t\\end{itemize} \\\\\n\t\t\t\\hline\n\t\t\tFree-riding attacks & Attackers benefit from the global model without contributing to the learning process, e.g., by pretending that they have a very small number of samples to train. & \\begin{itemize}\n\t\t\t\t\\item \\emph{BlockFL}: Participants' local learning model updates are exchanged and verified by leveraging blockchain technology. 
In particular, each participant trains and sends the trained global model to its associated miner in the blockchain network and then receives a reward that is proportional to the number of trained data samples~.\n\t\t\t\\end{itemize} \\\\\n\t\t\t\\hline\t\t\t\n\t\t\\end{tabular}\n\t\t\\par\\end{centering}\n\\end{table*}", "id": "0d9b23c3-967b-4457-85bf-955848611c53", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "4de273f1-2950-457b-a4ee-9560525e1191", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", " Privacy and Security Issues" ], [ "subsection", "Security Issues" ], [ "subsubsection", "Free-Riding Attacks" ] ], "subsections": [], "title": "Free-Riding Attacks" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, we have discussed two key issues, i.e., privacy and security, when trained models are exchanged in FL. In general, it is believed that FL is an effective privacy-preserving learning solution for participants to perform collaborative model training. However, in this section, we have shown that a malicious participant can exploit the process and gain access to sensitive information of other participants. Furthermore, we have also shown that by using the shared model in FL, an attacker can perform attacks which can not only breakdown the whole learning system, but also falsify the trained model to achieve its malicious goal. In addition, solutions to deal with these issues have also been reviewed, which are especially important in order to guide FL system administrators in designing and implementing the appropriate countermeasures. 
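To make the gradient-similarity intuition behind the FoolsGold countermeasure concrete, the following sketch is a deliberately simplified, single-round version (our own illustration; the actual scheme operates on accumulated historical updates and uses a softer re-weighting) that down-weights participants whose updates are suspiciously similar to another participant's:

```python
import numpy as np

def foolsgold_weights(updates):
    """Assign each participant an aggregation weight in [0, 1], penalizing
    participants whose (flattened) update is very similar to some other
    participant's update -- colluding poisoners tend to submit
    near-identical gradients, honest non-IID participants do not."""
    unit = [u / (np.linalg.norm(u) + 1e-12) for u in updates]
    n = len(updates)
    max_sim = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                # Highest cosine similarity to any *other* participant.
                max_sim[i] = max(max_sim[i], float(np.dot(unit[i], unit[j])))
    # The more a participant resembles another one, the smaller its weight.
    return 1.0 - np.clip(max_sim, 0.0, 1.0)

# Two colluding poisoners submit near-identical updates; one honest
# participant submits an unrelated update (all values hypothetical).
weights = foolsgold_weights([np.array([1.0, 1.0, 0.0]),
                             np.array([1.0, 0.9, 0.0]),
                             np.array([0.0, 0.1, -1.0])])
```

In this toy run, the two colluding updates receive near-zero aggregation weight while the honest participant keeps most of its influence.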
We summarize the key information of attacks and their corresponding countermeasures in Table VII.", "id": "0b63118b-0dab-4522-83aa-c544dfcbfe18", "level": "subsection", "origin_cites_number": 0, "parent_id": "ce302473-c70b-411d-815f-c3000491c151", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", " Privacy and Security Issues" ], [ "subsection", "Summary and Lessons Learned" ] ], "subsections": [], "title": "Summary and Lessons Learned" }, { "cite_extract_rate": 0.46666666666666606, "cites": [ 5997, 5975, 5977, 5971, 5974, 5996, 5995 ], "content": "\\label{sec:application}\nIn the aforementioned studies, we have discussed the issues pertaining to the implementation of FL as an enabling technology that allows collaborative learning at mobile edge networks. In this section, we focus instead on the applications of FL for mobile edge network optimization. \nAs highlighted by the authors in , the increasing complexity and heterogeneity of wireless networks enhance the appeal of adopting a data-driven ML based approach for optimizing system designs and resource allocation decision making for mobile edge networks. However, as discussed in previous sections, the private data of users may be sensitive in nature. As such, existing learning based approach can be combined with FL for privacy-preserving applications.\nIn this section, we consider four applications of FL in edge computing:\n\\begin{itemize}\n\\item \\textit{Cyberattack Detection:} The ubiquity of IoT devices and increasing sophistication of cyberattacks imply that there is a need to improve existing cyberattack detection tools. Recently, DL has been widely successful in cyberattack detection. 
Coupled with FL, cyberattack detection models can be learned collaboratively while maintaining user privacy.\n\\item \\textit{Edge Caching and Computation Offloading:} Given the computation and storage capacity constraints of edge servers, some computationally intensive tasks of end devices have to be offloaded to the remote cloud server for computation. In addition, commonly requested files or services should be placed on edge servers for faster retrieval, i.e., users do not have to communicate with the remote cloud when they want to access these files or services. As such, an optimal caching and computation offloading scheme can be collaboratively learned and optimized with FL.\n\\item \\textit{Base Station Association:} In a dense network, it is important to optimize base station association so as to limit interference faced by users. However, traditional learning based approaches that utilize user data often assume that such data is centrally available. Given user privacy constraints, an FL based approach can be adopted instead.\n\\item \\textit{Vehicular Networks:} The Internet of Vehicles (IoV) features smart vehicles with data collection, computation and communication capabilities for relevant functions, e.g., navigation and traffic management. However, this wealth of knowledge is, again, private and sensitive in nature since it can reveal the driver's location and personal information. In this section, we discuss the use of an FL based approach to traffic queue length prediction and to energy demand forecasting for electric vehicle charging stations at the edge of IoV networks.\n\\end{itemize}\n\\begin{table*}[]\n\\centering \\caption{\\small FL based approaches for mobile edge network optimization.
\\label{tab:application}}\n\\begin{tabular}{|l|l|l|}\n\\hline\n\\textbf{Applications} & \\textbf{Ref.} & \\textbf{Description} \\\\ \\hline\n\\multirow{3}{*}{Cyberattack Detection} & & Cyberattack detection with edge nodes as participants \\\\ \\cline{2-3} \n & & Cyberattack detection with IoT gateways as participants \\\\ \\cline{2-3} \n & & Blockchain to store model updates \\\\ \\hline\n\\multirow{4}{*}{Edge caching and computation offloading} & & DRL for caching and offloading in UEs \\\\ \\cline{2-3} \n & & DRL for computation offloading in IoT devices \\\\ \\cline{2-3} \n & & Stacked autoencoder learning for proactive caching \\\\ \\cline{2-3} \n & & Greedy algorithm to optimize service placement schemes \\\\ \\hline\n\\multirow{2}{*}{Base station assoication} & & Deep echo state networks for VR application \\\\ \\cline{2-3} \n & & Mean field game with imitation for cell association \\\\ \\hline\n\\multirow{2}{*}{Vehicular networks} & & Extreme value theory for large queue length prediction \\\\ \\cline{2-3} \n & & Energy demand learning in electric vehicular networks \\\\ \\hline\n\\end{tabular}\n\\end{table*}", "id": "a10f5c93-1fee-4fa8-a6ac-ad80e6dd1548", "level": "section", "origin_cites_number": 15, "parent_id": "2a7871ed-4981-4f35-ad2d-8f4aca73a40f", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", "Applications of Federated Learning for Mobile Edge Computing" ] ], "subsections": [ "a6ca9148-8aad-4b67-a42c-c3513f305da8", "9bb6bf60-a8e6-4f3a-928f-802af0dd72af", "bdbb9fa8-26c1-4b46-8920-41f52291e380", "090fafc1-63ae-4a0f-8017-679d11acea66" ], "title": "Applications of Federated Learning for Mobile Edge Computing" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 5998 ], "content": "Cyberattack detection is one of the most important steps to promptly prevent and mitigate serious consequences of attacks in mobile edge networks. 
Among different approaches to detect cyberattacks, DL is considered to be the most effective tool to detect a wide range of attacks with high accuracy. In~, the authors show that DL can outperform all conventional ML techniques with very high accuracy in detecting intrusions on three datasets, i.e., KDDcup 1999, NSL-KDD , and UNSW-NB15 . However, the detection accuracy of solutions based on DL depends very much on the available datasets. Specifically, DL algorithms can outperform other ML techniques only when given sufficient data to train on, yet this data may be sensitive in nature. Therefore, some FL-based attack detection models for mobile edge networks have been introduced recently to address this problem. \nIn~, the authors propose a cyberattack detection model for an edge network empowered by FL. In this model, each edge node operates as a participant that owns a set of data for intrusion detection. To improve the accuracy in detecting attacks, after training the model locally, each participant sends its trained model to the FL server. The server then aggregates all parameters from the participants and sends the updated global model back to all the participants, as illustrated in Fig.~\\ref{fig:IDS_FL}. In this way, each edge node can learn from other edge nodes without needing to share its raw data. As a result, this method can not only improve accuracy in detecting attacks, but also enhance the privacy of intrusion data at the edge nodes and reduce traffic load for the whole network. A similar idea is also presented in~ in which IoT gateways operate as FL participants and an IoT security service provider works as a server node to aggregate trained models shared by the participants.
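The server-side aggregation step described above can be sketched with the standard \textit{FedAvg} weighted average (a minimal illustration with made-up two-dimensional "models" and hypothetical sample counts; real deployments average full neural network weight tensors):

```python
import numpy as np

def federated_averaging(local_models, num_samples):
    """FedAvg aggregation rule: the global model is the average of the
    participants' parameter vectors, weighted by local sample counts."""
    total = sum(num_samples)
    return sum((n / total) * params
               for params, n in zip(local_models, num_samples))

# Three edge nodes (intrusion-detection participants) holding different
# amounts of local attack data (hypothetical parameter vectors).
models = [np.array([0.2, 0.4]), np.array([0.6, 0.0]), np.array([0.1, 0.5])]
samples = [100, 300, 100]
global_model = federated_averaging(models, samples)
```

The server would then redistribute `global_model` to all participants for the next training round.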
The authors in~ show empirically that by using FL, the system can successfully detect 95.6\\% of attacks in approximately 257 ms without raising any false alarm when evaluated in a real-world smart home deployment setting.\n\\begin{figure}[!]\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{Figs/IDS_FL}\n\t\\caption{\\small FL-based attack detection architecture for IoT edge networks.}\n\t\\label{fig:IDS_FL}\n\\end{figure}\nIn both~ and~, it is assumed that the participants, i.e., edge nodes and IoT gateways, are honest, and they are willing to contribute in training their updated model parameters. However, if some of the participants are malicious, they can make the whole intrusion detection corrupted. Thus, the authors in~ propose to use blockchain technology in managing data shared by the participants. By using the blockchain, all incremental updates to the anomaly detection ML model are stored in the ledger, and thus a malicious participant can be easily identified. Furthermore, based on shared models from honest participants stored in the ledger, the intrusion detection system can easily recover the proper global model if the current global model is poisoned.", "id": "a6ca9148-8aad-4b67-a42c-c3513f305da8", "level": "subsection", "origin_cites_number": 6, "parent_id": "a10f5c93-1fee-4fa8-a6ac-ad80e6dd1548", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", "Applications of Federated Learning for Mobile Edge Computing" ], [ "subsection", "Cyberattack Detection" ] ], "subsections": [], "title": "Cyberattack Detection" }, { "cite_extract_rate": 0.22222222222222202, "cites": [ 5977, 582 ], "content": "To account for the dynamic and time-varying conditions in a MEC system, the authors in propose the use of DRL with FL to optimize caching and computation offloading decisions in an MEC system. The MEC system consists of a set of user equipments (UEs) covered by base stations. 
For caching, the DRL agent makes the decision to cache or not to cache the downloaded file, and which local file to replace should caching occur. For computation offloading, the UEs can choose to either offload\ncomputation tasks to the edge node via wireless channels,\nor perform the tasks locally. This caching and offloading decision process is illustrated in Fig. \\ref{fig:caching}. The states of the MEC system include wireless network conditions,\nUE energy consumption, and task queuing states, whereas the reward function is\ndefined as quality of experience (QoE) of the UEs. Given the large state and action space in the MEC environment, a DDQN approach is adopted. To protect the privacy of users, an FL approach is proposed in which training can occur with data remaining on the UEs. In addition, existing FL algorithms, e.g., \\textit{FedAvg} , can also ensure that training is robust to the unbalanced and non-IID data of the UEs. The simulation\nresults show that the DDQN with FL approach achieves similar average\nutilities among UEs as compared to the centralized DDQN\napproach, while consuming less communication resources and\npreserving user privacy. However, the simulations are only performed with $10$ UEs. If the implementation is expanded to target a larger number of heterogeneous UEs, there can be significant delays in the training process especially since the training of a DRL model is computationally intensive. As an extension, transfer learning can be used to increase the efficiency of training, i.e., training is not initialized from scratch.\nSimilar to , the authors in propose the use of DRL in optimizing computation offloading decisions in IoT systems. The system model consists of IoT devices and edge nodes. The IoT devices can harvest energy units from the edge nodes to be stored in the energy queue. In addition, an IoT device also maintains a local task queue with unprocessed and unsuccessfully processed tasks. 
These tasks can be processed locally or offloaded to the edge nodes for processing, in a First In First Out (FIFO) order . In the DRL problem formulation, the network states are defined to be a function of energy queue length, task execution delay, task handover delay from edge node association, and channel gain between the IoT device and edge nodes. A task can fail to be executed, e.g., when there are insufficient energy units or insufficient communication bandwidth for computation offloading. The utility considered is a function of task execution delay, task queuing delay, the number of failed tasks, and the penalty of execution failure. The DRL agent makes the decision to either offload computation to the edge nodes or perform computation locally. To ensure privacy of users, the agent is trained without users having to upload their own data to a centralized server. In each training round, a random set of IoT devices is selected to download the model parameters of the DRL agent from the edge networks. The model parameters are then updated using the devices' own data, e.g., energy resource level, channel gain, and local sensing data. Then, the updated parameters of the DRL agent are sent to the edge nodes for model aggregation. The simulation results show that the FL based approach can achieve the same levels of total utility as the centralized DRL approach. This is robust to varying task generation probabilities. In addition, when task generation probabilities are higher, i.e., there are more tasks for computation in the IoT device, the FL based scheme can achieve a lower number of dropped tasks and shorter queuing delay than the centralized DRL scheme. However, the simulation only involves $15$ IoT devices serviced by relatively many edge nodes. To better reflect practical scenarios where fewer edge nodes have to cover several IoT devices, further studies can be conducted on optimizing the edge-IoT collaboration.
For example, the limited communication bandwidth can cause significant task handover delay during computation offloading. In addition, with more IoT devices, the DRL training will take a longer time to converge, especially since the devices have heterogeneous computation capabilities.\n\\begin{figure}[!]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{Figs/caching}\n\t\\caption{\\small FL-based (a) caching and (b) computation offloading.}\n\t\\label{fig:caching}\n\\end{figure}\nInstead of using a DRL approach, the authors in propose the use of an FL based stacked autoencoder learning model, i.e., the FL based proactive content caching scheme (FPCC), to predict content popularity for optimized caching while protecting user privacy. In the system model, each user is equipped with a mobile device that connects to the base station covering its geographical location. Using a stacked autoencoder learning model, the latent representation of a user's information, e.g., location, and file ratings, i.e., content request history, is learned. Then, a similarity matrix between the user and its historically requested files is obtained in which each element of the matrix represents the distance between the user and the file. Based on this similarity matrix, the $K$ nearest neighbours of each user are determined, and the similarity between the user's historical watch list and the neighbours' is computed. An aggregation approach is then used to predict the most popular files for caching, i.e., the files with the highest similarity scores across all users. As the most frequently requested files across users, the cached files need not be re-downloaded from their source servers every time they are demanded. To protect the privacy of users, FL is adopted to learn the parameters of the stacked autoencoder without the user having to reveal its personal information or its content request history to the FL server.
In each training round, the user first downloads a global model from the FL server. Then, the model is trained and updated using the user's local data. The updated models are subsequently uploaded to the FL server and aggregated using the \\textit{FedAvg} algorithm. The simulation results show that the proposed FPCC scheme could achieve the highest cache efficiency, i.e., the ratio of cached files matching user requests, as compared to other caching methods such as the Thompson sampling methods . In addition, privacy of the user is preserved.\nThe authors in introduce a privacy-aware service placement scheme to deploy user-preferred services on edge servers with consideration for resource constraints in the edge cloud. The system model consists of a mobile edge cloud serving various mobile devices. The user's preference model is first built based on information such as the number of requests for a service, and other user context information, e.g., age and location. However, since this can involve sensitive personal information, an FL based approach is proposed to train the preference model while keeping users' data on their personal devices. Then, an optimization problem is formulated in which the objective is to maximize the quantity of services demanded from the edge based on user preferences, subject to constraints of storage capacity, computation capability, and uplink and downlink bandwidth. The optimization problem is then solved using a greedy algorithm, i.e., the service that most improves the objective function is repeatedly added until the resource constraints are reached.
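The greedy heuristic just described can be sketched as follows (a deliberately simplified version with a single storage budget; the cited work additionally constrains computation capability and uplink/downlink bandwidth, and all names and numbers here are hypothetical):

```python
def greedy_placement(services, capacity):
    """Repeatedly place the not-yet-placed service with the largest
    predicted demand that still fits the remaining storage budget."""
    placed, remaining, used = [], dict(services), 0
    while remaining:
        # Services that would not exceed the storage budget if placed.
        feasible = {s: v for s, v in remaining.items()
                    if used + v[1] <= capacity}
        if not feasible:
            break
        # Pick the feasible service with the largest marginal gain.
        best = max(feasible, key=lambda s: feasible[s][0])
        placed.append(best)
        used += remaining.pop(best)[1]
    return placed

# Hypothetical FL-learned demand predictions: name -> (requests, size).
svcs = {"maps": (500, 4), "video": (900, 6), "chat": (300, 2)}
placed = greedy_placement(svcs, capacity=8)
```

With a budget of 8, the sketch places "video" first and then "chat", after which "maps" no longer fits.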
The simulation results show that the proposed scheme can outperform the popular service placement scheme, i.e., where only the most popular services are placed on the edge cloud, in terms of number of requests processed on edge clouds since it also considers the aforementioned resource constraints in maximizing quantity of services.", "id": "9bb6bf60-a8e6-4f3a-928f-802af0dd72af", "level": "subsection", "origin_cites_number": 9, "parent_id": "a10f5c93-1fee-4fa8-a6ac-ad80e6dd1548", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", "Applications of Federated Learning for Mobile Edge Computing" ], [ "subsection", "Edge Caching and Computation Offloading" ] ], "subsections": [], "title": "Edge Caching and Computation Offloading" }, { "cite_extract_rate": 0.4, "cites": [ 5996, 5971 ], "content": "The authors in propose an FL based deep echo state networks (ESNs) approach to minimize breaks in presence (BIPs) for users of virtual reality (VR) applications. A BIP event can be a result of delay in information transmission which can be caused when the user's body movements obstruct the wireless link. BIPs cause the user to be aware that they are in a virtual environment, thus reducing their quality of experience. As such, a user association policy has to be designed such that BIPs are minimized. The system model consists of base stations that cover a set of VR users. The base stations receive uploaded tracking information from each associated user, e.g., physical location and orientation, while the users download VR videos for their use in the VR application. For data transmission, the VR users have to associate with one of the base stations. As such, a minimization problem is formulated where BIPs are minimized with respect to expected locations and orientations of the VR user. To derive a prediction of user locations and orientations, the base station has to rely on the historical information of users. 
However, each base station only collects partial historical data from each user, i.e., a user connects to multiple base stations and its data is distributed across them. As such, an FL based approach is implemented whereby each base station first trains a local model using its partial data. Then, the local models are aggregated to form a global model capable of generalization, i.e., comprehensively predicting a user's mobility and orientations. The simulation results show that the federated ESN algorithm can achieve lower BIPs experienced by users as compared to the centralized ESN algorithm proposed in , since a centralized approach only makes partial predictions with the incomplete data of individual base stations, whereas the federated ESN approach can make predictions based on a model learned collaboratively from more complete data.\nFollowing the ubiquity of IoT devices, the traditional cloud-based approach may no longer be sufficient to cater to dense cellular networks. As computation and storage move to the edge of networks, the association of users to base stations is increasingly important to facilitate efficient ML model training among the end users. To this end, the authors in consider solving the problem of cell association in dense wireless networks with a collaborative learning approach. \nIn the system model, the base stations cover a set of users in an LTE cellular system. In a cellular system, users are likely to face similar channel conditions as their neighbours and thus can benefit from learning from their neighbours that are already associated with base stations. As such, the cell association problem is formulated as a mean-field game (MFG) with imitation in which each user maximizes its own throughput while minimizing the cost of imitation.\nThe MFG is further reduced into a single-user Markov decision process that is then solved by a neural Q-learning algorithm.
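The resulting Q-learning update with an imitation penalty can be sketched as follows (a tabular, greedy, single-state simplification with hypothetical names and numbers; the cited work learns the Q-function with a neural network and includes exploration, which is omitted here):

```python
def q_learning_step(Q, state, actions, throughput, neighbor_action,
                    alpha=0.1, gamma=0.9, imitation_cost=0.5):
    """One greedy Q-learning update for cell association: the reward is
    the achieved throughput minus a penalty for deviating from the
    action of already-associated neighbours (the imitation term)."""
    # Greedily pick the base station with the highest current Q-value.
    action = max(actions, key=lambda a: Q.get((state, a), 0.0))
    reward = throughput[action]
    if action != neighbor_action:
        reward -= imitation_cost  # cost of not imitating the neighbour
    best_next = max(Q.get((state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return action

# Hypothetical example: two candidate base stations, with the
# neighbour already associated with "bs1".
Q = {}
chosen = q_learning_step(Q, "cell_edge", ["bs1", "bs2"],
                         throughput={"bs1": 1.0, "bs2": 0.4},
                         neighbor_action="bs1")
```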
In\nmost other proposed solution for cell association, it is assumed that\nall information is known to the base stations and users. However, given\nprivacy concerns, the assumption of information sharing may\nnot be practical. As such, a collaborative learning approach can be considered where only the outcome of the learning algorithm is exchanged\nduring the learning process whereas usage data\nis kept locally in each user's own device. The simulation results show that imitating users can attain higher utility within a shorter\ntraining duration as compared to non-imitating users.", "id": "bdbb9fa8-26c1-4b46-8920-41f52291e380", "level": "subsection", "origin_cites_number": 5, "parent_id": "a10f5c93-1fee-4fa8-a6ac-ad80e6dd1548", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", "Applications of Federated Learning for Mobile Edge Computing" ], [ "subsection", "Base Station Association" ] ], "subsections": [], "title": "Base Station Association" }, { "cite_extract_rate": 0.30000000000000004, "cites": [ 5997, 5990, 5995 ], "content": "Ultra reliable low latency communication (URLLC) in vehicular networks\nis an essential prerequisite towards developing an intelligent transport system.\nHowever, existing radio resource management techniques do\nnot account for rare events such as large queue lengths at\nthe tail-end distribution. To model the occurrence of such low\nprobability events, the authors in propose the use of extreme value theory (EVT) . The approach requires\nsufficient samples of queue state information (QSI) and data\nexchange among vehicles. As such, an FL approach is proposed in which\nvehicular users (VUEs) train the learning model with data kept locally and\nupload only their updated model parameters to the roadside units (RSU). The\nRSU then averages out the model parameters and return an updated\nglobal model to the VUEs. 
In a synchronous approach, all VUEs upload their models at the end of a prespecified interval. However, the simultaneous uploading by multiple vehicles can lead to delays in communication. In contrast, in an asynchronous approach, each VUE only evaluates and uploads its model parameters after a predefined number of QSI samples are collected. The global model is also updated whenever a local update is received, thus reducing communication delays. To further reduce overhead, Lyapunov optimization for power allocation is also utilized. The simulation results show that under this framework, the number of vehicles experiencing large queue lengths is reduced, whereas FL ensures minimal data exchange relative to a centralized approach.\nApart from QSI, vehicles are also exposed to a wealth of useful captured images that can be adopted to build better inference models, e.g., for traffic optimization. However, these images are sensitive in nature since they can give away the location information of vehicular clients. As such, an FL approach can be used to facilitate collaborative ML while ensuring privacy preservation. However, the images captured by vehicles often vary in quality due to motion blur. In addition, another source of heterogeneity is the difference in computing capabilities of vehicles. Given the information asymmetry involved, the authors in propose a multi-dimensional contract design in which the FL server designs contract bundles comprising varying levels of data quality, compute resources, and contractual payoffs. Then, the vehicular client chooses the contract bundle that maximizes its utility, in accordance with its hidden type.
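The client's bundle choice can be sketched as a simple utility maximization (an illustrative utility form and hypothetical numbers of our own; a properly designed contract menu ensures each hidden type self-selects the bundle intended for it):

```python
def choose_contract(bundles, marginal_cost):
    """Pick the (quality, compute, payoff) bundle that maximizes the
    client's utility: payoff minus its private per-unit cost of
    supplying the required data quality and compute resources."""
    return max(bundles,
               key=lambda b: b[2] - marginal_cost * (b[0] + b[1]))

# Server-designed menu: higher quality/compute requirements come with
# higher contractual payoffs.
menu = [(1, 1, 5.0), (2, 2, 9.0), (3, 3, 12.0)]
efficient = choose_contract(menu, marginal_cost=1.0)    # low-cost type
inefficient = choose_contract(menu, marginal_cost=2.0)  # high-cost type
```

Here the low-cost ("efficient") type selects the most demanding bundle while the high-cost type settles for the lightest one, illustrating self-selection by hidden type.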
Similar to the results in , the simulation results show that the FL server derives the greatest utility under the proposed contract theoretic approach, in contrast to the linear pricing or Stackelberg game approach.\nThe authors in propose a federated energy demand learning (FEDL) approach to manage energy resources in charging stations (CSs) for electric vehicles (EVs). When a large number of EVs congregate at a CS, this can lead to energy transfer congestion. To resolve this, energy is supplied from the power grids and reserved in advance to meet the real-time demands from the EVs , rather than having the CSs request energy from the power grid only upon receiving charging requests. As such, there is a need to forecast energy demand for EV networks using historical charging data. However, this data is usually stored separately at each of the CSs that the EVs utilize and is private in nature. As such, an FEDL approach is adopted in which each CS trains the demand prediction model on its own dataset before sending only the gradient information to the charging station provider (CSP). Then, the gradient information from the CSs is aggregated for global model training. To further improve model accuracy, the CSs are clustered using the constrained K-means algorithm based on their physical locations. The clustering-based FEDL reduces the cost of biased prediction . The simulation results show that the root mean squared error of a clustered FEDL model is lower than that of conventional ML algorithms, e.g., the multi-layer perceptron regressor . However, the privacy of user data is still not protected by this approach, since user data is stored in each CS. As an extension, the user data can possibly be stored in each EV separately, and model training can be conducted in the EVs rather than the CSs.
This can allow more user features to be considered to enhance the accuracy of EDL, e.g., user consumption habits.\n\\textit{Summary:} In this section, we discuss that FL can also be used for mobile edge network optimization. In particular, DL and DRL approaches are suitable for modelling the dynamic environment of increasingly complex edge networks but require sufficient data for training. With FL, model training can be carried out while preserving the privacy of users. A summary of the approaches are presented in Table \\ref{tab:application}.", "id": "090fafc1-63ae-4a0f-8017-679d11acea66", "level": "subsection", "origin_cites_number": 10, "parent_id": "a10f5c93-1fee-4fa8-a6ac-ad80e6dd1548", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", "Applications of Federated Learning for Mobile Edge Computing" ], [ "subsection", "Vehicular Networks" ] ], "subsections": [], "title": "Vehicular Networks" }, { "cite_extract_rate": 0.535714285714285, "cites": [ 590, 619, 8967, 3477, 5985, 5999, 582, 6002, 2676, 5989, 7608, 1325, 6000, 7727, 6001 ], "content": "\\label{sec:challenges_open_issues}\nApart from the aforementioned issues, there are still challenges new research directions in deploying FL at scale to be discussed as follows.\n\\begin{itemize}\n\\item \\textit{Dropped participants:} The approaches discussed in Section \\ref{sec:resource}, e.g., , , and , propose new algorithms for participant selection and resource allocation to address the training bottleneck and resource heterogeneity. In these approaches, the wireless connections of participants are assumed to be always available. However, in practice, participating mobile devices may go offline and can drop out from the FL system due to connectivity or energy constraints. A large number of dropped devices from the training participation can significantly degrade the performance , e.g., accuracy and convergence speed, of the FL system. 
New FL algorithms need to be robust to device dropout in the networks and anticipate scenarios in which only a small number of participants are left connected to participate in a training round. One potential solution is that the FL model owner provides free dedicated connections, e.g., cellular connections, as an incentive for participants not to drop out. \n\\item \\textit{Privacy concerns:} FL is able to protect the privacy of each participant since the model training may be conducted locally, with just the model parameters exchanged with the FL server. However, as specified in ,~, and , communicating the model updates during the training process can still reveal sensitive information to an adversary or a third party. The current approaches propose security solutions such as DP, e.g., , , and , and collaborative training, e.g., and~. However, the adoption of these approaches sacrifices the performance, i.e., model accuracy. They also require significant computation on participating mobile devices. Thus, the tradeoff between privacy guarantee and system performance has to be well balanced when implementing the FL system. \n\\item \\textit{Unlabeled data:} It is important to note that the approaches reviewed in the survey are proposed for supervised learning tasks. This means that the approaches assume that labels exist for all of the data in the federated network. However, in practice, the data generated in the network may be unlabeled or mislabeled . This poses a significant challenge for the server in finding participants with appropriate data for model training. Tackling this challenge may in turn require the challenges of scalability, heterogeneity, and privacy in FL systems to be addressed. One possible solution is to enable mobile devices to construct labeled data by learning labels from each other. Emerging studies have also considered the use of semi-supervised learning inspired techniques .
\n\\item \\textit{Interference among mobile devices:} The existing resource allocation approaches, e.g., and , address participant selection based on the resource states of their mobile devices. In fact, these mobile devices may be geographically close to each other, i.e., in the same cell. This introduces an interference issue when they upload local models to the server. As such, channel allocation policies may need to be combined with the resource allocation approaches to address the interference issue. While studies in , , and consider multi-access schemes and over-the-air computation, it remains to be seen if such approaches are scalable, i.e., able to support a large federation of many participants. To this end, data driven learning based solutions, e.g., federated DRL, can be considered to model the dynamic environment of mobile edge networks and make optimized decisions.\n\\item \\textit{Communication security:} The privacy and security threats studied in Section \\ref{sec:securityissues} revolve mainly around data-related compromises, e.g., data and model poisoning. Due to the exposed nature of the wireless medium, FL is also vulnerable to communication security issues such as Distributed Denial-of-Service (DDoS) and jamming attacks . In particular, for jamming attacks, an attacker can transmit radio frequency jamming signals with high power to disrupt or cause interference to the communications between the mobile devices and the server. Such an attack can cause errors in the model uploads/downloads and consequently degrade the performance, i.e., accuracy, of the FL systems. Anti-jamming schemes such as frequency hopping, e.g., sending additional copies of the model update over different frequencies, can be adopted to address the issue.\n\\item \\textit{Asynchronous FL:} In synchronous FL, each training round only progresses as quickly as the slowest device, i.e., the FL system is susceptible to the straggler effect.
As such, asynchronous FL has been proposed as a solution. In addition, asynchronous FL allows participants to join the FL training halfway, even while a training round is in progress. This is more reflective of practical FL settings and can be an important contributing factor towards ensuring the scalability of FL. However, synchronous FL remains the most common approach due to its convergence guarantees. Given the many advantages of asynchronous FL, new asynchronous algorithms should be explored. In particular, for future proposed algorithms, the convergence guarantee in a non-IID setting for non-convex loss functions needs to be considered. Another approach to be considered is the possible inclusion of \\textit{backup} workers, following prior studies.\n\\item \\textit{Comparisons with other distributed learning methods:} Following the increased scrutiny on data privacy, there has been a growing effort on developing new privacy-preserving distributed learning algorithms. One study proposes \\textit{split learning}, which also enables collaborative ML without requiring the exchange of raw data with an external server. In split learning, each participant first trains the neural network up to a cut layer. Then, the outputs at the cut layer are transmitted to an external server that completes the remaining layers of training. The resultant gradients are then backpropagated down to the cut layer and eventually returned to the participants to complete the local training. In contrast, FL typically involves the communication of full model parameters. An empirical comparison between the communication efficiencies of split learning and FL shows that split learning performs well when the model size involved is larger, or when there are more participants involved, since the participants do not have to transmit the weights to an aggregating server. 
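The split learning flow described above can be sketched with a toy two-layer network (a hypothetical NumPy illustration, not the cited implementation): the client computes activations up to the cut layer, the server finishes the forward and backward passes and returns the gradient at the cut, and the client completes its own backward pass.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8)) * 0.1   # client-side layer (up to the cut)
W2 = rng.normal(size=(8, 1)) * 0.1   # server-side layer

def split_step(x, y, lr=0.05):
    """One split-learning training step on a toy regression task."""
    global W1, W2
    # --- client: forward up to the cut layer, send activations ---
    h = np.maximum(0, x @ W1)             # ReLU activations at the cut
    # --- server: finish forward pass, compute loss, backprop to the cut ---
    pred = h @ W2
    err = pred - y                         # d(loss)/d(pred) for 0.5*MSE
    grad_W2 = h.T @ err / len(x)
    grad_h = err @ W2.T                    # gradient returned to the client
    W2 -= lr * grad_W2
    # --- client: continue backprop through its own layer ---
    grad_W1 = x.T @ (grad_h * (h > 0)) / len(x)
    W1 -= lr * grad_W1
    return float(0.5 * np.mean(err ** 2))

X = rng.normal(size=(64, 4))
y = np.sin(X[:, :1])
losses = [split_step(X, y) for _ in range(200)]
# Only activations and gradients cross the client-server boundary;
# raw data and client-side weights never leave the device.
```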
However, FL is much easier to implement since the participants and the FL server run the same global model, i.e., the FL server is just in charge of aggregation, and thus FL can work with one of the participants serving as the master node. As such, more research efforts can be directed towards guiding system administrators to make an informed decision as to which scenario warrants the use of either learning method.\n\\item \\textit{Further studies on learning convergence:} One of the essential considerations of FL is the convergence of the algorithm. FL seeks the model weights that minimize the loss of the global aggregated model. This is in fact a distributed optimization problem, and convergence is not always guaranteed. Theoretical analyses and evaluations of the convergence bounds of gradient descent based FL for convex and non-convex loss functions are important research directions. While existing studies have covered this topic, many of the guarantees rely on restrictive assumptions, e.g., convexity of the loss function.\n\\item \\textit{Usage of tools to quantify statistical heterogeneity:} Mobile devices typically generate and collect data in a non-IID manner across the network. Moreover, the number of data samples among the mobile devices may vary significantly. To improve the convergence of FL algorithms, the statistical heterogeneity of the data needs to be quantified. Recent works have developed tools for quantifying statistical heterogeneity through metrics such as local dissimilarity. However, these metrics cannot be easily calculated over the federated network before training begins. The importance of these metrics motivates future directions such as the development of efficient algorithms to quickly determine the level of heterogeneity in federated networks.\n\\item \\textit{Combined algorithms for communication reduction:} Currently, there are three common techniques for communication reduction in FL, as discussed in Section \\ref{sec: communication}. 
It is important to study how these techniques can be combined with each other to further improve performance. For example, the model compression technique can be combined with edge server-assisted FL. The combination can significantly reduce the size of model updates, as well as the number of instances of communication with the FL server. However, the feasibility of this combination has not been explored. In addition, the tradeoff between accuracy and communication overhead for the combined technique needs to be further evaluated. In particular, for the simulation results discussed in Section \\ref{sec: communication}, the accuracy-communication cost reduction tradeoff is difficult to manage since it varies across settings, e.g., data distribution, data quantity, number of edge servers, and number of participants.\n\\item \\textit{Cooperative mobile crowd ML:} In the existing approaches, mobile devices need to communicate with the server directly, and this may increase the energy consumption. In fact, nearby mobile devices can be grouped into a cluster, and the model downloading/uploading between the server and the mobile devices can be facilitated by a ``cluster head'' that serves as a relay node. The model exchange between the mobile devices and the cluster head can then be done over Device-to-Device (D2D) connections. Such a model can improve the energy efficiency significantly. Efficient coordination schemes for the cluster head can thus be designed to further improve the energy efficiency of an FL system. \n\\item \\textit{Applications of FL:} Given its advantage of guaranteeing data privacy, FL has an increasingly important role to play in many applications, e.g., healthcare, finance, and transport systems. 
For most current studies on FL applications, the focus lies mainly in the federated training of the learning model, while the implementation challenges are neglected. For future studies on the applications of FL, besides the aforementioned issues in this survey, i.e., communication costs, resource allocation, and privacy and security, there is also a need to consider the specific issues related to the system model in which FL will be adopted. For example, for delay-critical applications, e.g., in vehicular networks, there will be more emphasis on training efficiency and less on energy expense.\n\\end{itemize}", "id": "297974f4-53c5-47b3-9406-058a62082d77", "level": "section", "origin_cites_number": 28, "parent_id": "2a7871ed-4981-4f35-ad2d-8f4aca73a40f", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", "Challenges and Future Research Directions" ] ], "subsections": [], "title": "Challenges and Future Research Directions" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:conclusions}\nThis paper has presented a tutorial of FL and a comprehensive survey of the issues regarding FL implementation. First, we begin with an introduction to the motivation for MEC, and how FL can serve as an enabling technology for collaborative model training at mobile edge networks. Then, we describe the fundamentals of DNN model training, FL, and system design towards FL at scale. Afterwards, we provide detailed reviews, analyses, and comparisons of approaches to emerging implementation challenges in FL. The issues include communication cost, resource allocation, data privacy, and data security. Furthermore, we also discuss the implementation of FL for privacy-preserving mobile edge network optimization. 
Finally, we discuss challenges and future research directions.\n\\bibliographystyle{IEEEtran}\n\\bibliography{FederatedLearning}\n\\end{document}", "id": "2702b63b-d507-4c14-9fa5-5bc6f5225669", "level": "section", "origin_cites_number": 0, "parent_id": "2a7871ed-4981-4f35-ad2d-8f4aca73a40f", "prefix_titles": [ [ "title", "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
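As a concrete illustration of the synchronous, server-coordinated training this survey builds on, a minimal FedAvg-style round on a toy linear model might look like the following sketch (all names and hyperparameters are illustrative, not from the surveyed works):

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """A participant's local training: plain gradient descent on squared loss."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """Server aggregates local models, weighted by local sample counts."""
    total = sum(len(y) for _, y in clients)
    return sum(len(y) / total * local_sgd(w_global, X, y) for X, y in clients)

# Toy federation: four clients holding disjoint data from the same model.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
# w approaches w_true; raw data never leaves the clients.
```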
64
[ 166, 5973, 3357, 3582, 5977, 5974, 5976, 8646, 5972, 602, 584, 2918, 547, 582, 5971, 7608, 671, 3420, 5975, 3422, 8394, 1003, 7633, 289, 1344, 5978, 652, 620, 3431, 5979, 596, 7721, 5980, 5442, 5981, 7720, 590, 619, 656, 653, 5419, 1309, 623, 627, 617, 97, 5982, 5066, 5984, 5983, 4338, 5985, 5986, 5987, 7928, 5988, 5445, 1408, 5989, 8967, 8968, 8966, 1325, 5993, 5992, 5990, 5991, 614, 2676, 603, 7318, 7727, 5994, 3477, 8969, 5468, 5467, 2673, 5997, 5996, 5995, 5998, 5999, 6002, 6000, 6001 ]
1.206159
[ "Zhihao Wang", "Jian Chen", "Steven C. H. Hoi" ]
Deep Learning for Image Super-resolution:\\A Survey
2019
2019-02-16T08:39:36Z
cs.CV
Image Super-Resolution (SR) is an important class of image processing techniques to enhance the resolution of images and videos in computer vision. Recent years have witnessed remarkable progress of image super-resolution using deep learning techniques. This article aims to provide a comprehensive survey on recent advances of image super-resolution using deep learning approaches. In general, we can roughly group the existing studies of SR techniques into three major categories: supervised SR, unsupervised SR, and domain-specific SR. In addition, we also cover some other important issues, such as publicly available benchmark datasets and performance evaluation metrics. Finally, we conclude this survey by highlighting several future directions and open issues which should be further addressed by the community in the future.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "2c20cd26-fec7-4463-b98c-798fc974562d", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ] ], "subsections": [ "e40a3a7a-c96c-4593-8997-9422d0c66615", "a6cc8412-517b-4e0a-a1f4-0cf417b15a19", "7a2c21dd-b0fb-4435-8e94-827357705759", "a53a0c61-637a-47c6-bcab-c20c491483f3", "93f3d938-3f64-432f-856b-181ab4d5675a", "d99390bf-c1ab-42f0-a206-de70277ab044", "8f0f5d78-51f4-4970-b0a8-a53685116af3" ], "title": "root" }, { "cite_extract_rate": 0.23684210526315702, "cites": [ 7029, 7028, 480, 483, 481, 482, 7030, 7254, 8358 ], "content": "}\n\\begin{figure*}\n \\centering\n {\n \\setlength{\\fboxrule}{1pt}\n \\includegraphics[width=0.95\\linewidth]{resource/fig_taxonomy}\n }\n \\caption{\n Hierarchically-structured taxonomy of this survey.\n }\n \\label{fig_sr_methodology}\n\\end{figure*}\n\\IEEEPARstart{I}{mage} super-resolution (SR), which refers to the process of recovering high-resolution (HR) images from low-resolution (LR) images, is an important class of techniques in computer vision and image processing.\nIt enjoys a wide range of real-world applications, such as medical imaging, surveillance, and security, amongst others. \nOther than improving image perceptual quality, it also helps to improve other computer vision tasks. 
\nIn general, this problem is very challenging and inherently ill-posed since there are always multiple HR images corresponding to a single LR image.\nIn the literature, a variety of classical SR methods have been proposed, including prediction-based methods, edge-based methods, statistical methods, patch-based methods, and sparse representation methods, etc.\nWith the rapid development of deep learning techniques in recent years, deep learning based SR models have been actively explored and often achieve state-of-the-art performance on various SR benchmarks.\nA variety of deep learning methods have been applied to tackle SR tasks, ranging from the early Convolutional Neural Network (CNN) based methods (e.g., SRCNN) to recent promising SR approaches using Generative Adversarial Nets (GAN) (e.g., SRGAN).\nIn general, the family of SR algorithms using deep learning techniques differ from each other in the following major aspects: the types of network architectures, the types of loss functions, the types of learning principles and strategies, etc.\nIn this paper, we give a comprehensive overview of recent advances in image super-resolution with deep learning.\nAlthough there are some existing SR surveys in the literature, our work differs in that we focus on deep learning based SR techniques, while most of the earlier works aim at surveying traditional SR algorithms, or mainly concentrate on providing quantitative evaluations based on full-reference metrics or human visual perception.\nUnlike the existing surveys, this survey takes a unique deep learning based perspective to review the recent advances of SR techniques in a systematic and comprehensive manner.\nThe main contributions of this survey are three-fold:\n\\begin{enumerate}\n \\item We give a comprehensive review of image super-resolution techniques based on deep learning, including problem settings, benchmark datasets, performance metrics, a family of SR methods 
with deep learning, domain-specific SR applications, etc.\n \\item We provide a systematic overview of recent advances of deep learning based SR techniques in a hierarchical and structural manner, and summarize the advantages and limitations of each component for an effective SR solution.\n \\item We discuss the challenges and open issues, and identify the new trends and future directions to provide an insightful guidance for the community.\n\\end{enumerate}\nIn the following sections, we will cover various aspects of recent advances in image super-resolution with deep learning.\nFig. \\ref{fig_sr_methodology} shows the taxonomy of image SR to be covered in this survey in a hierarchically-structured way.\nSection 2 gives the problem definition and reviews the mainstream datasets and evaluation metrics.\nSection 3 analyzes main components of supervised SR modularly.\nSection 4 gives a brief introduction to unsupervised SR methods.\nSection 5 introduces some popular domain-specific SR applications, and Section 6 discusses future directions and open issues.", "id": "e40a3a7a-c96c-4593-8997-9422d0c66615", "level": "section", "origin_cites_number": 38, "parent_id": "2c20cd26-fec7-4463-b98c-798fc974562d", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec_problem_setting}", "id": "a6cc8412-517b-4e0a-a1f4-0cf417b15a19", "level": "section", "origin_cites_number": 0, "parent_id": "2c20cd26-fec7-4463-b98c-798fc974562d", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Problem Setting and Terminology" ] ], "subsections": [ "53549365-a742-4965-8411-f673b3d3f2e5", "d1d57480-8772-4298-b729-a54a8a65eac9", "37eac979-1c0e-4e61-9440-a289c60a8cdd", "3f34f12a-efa9-4a1c-90a0-042ac955589a", "907bce52-c45d-42aa-9cd6-d94aed9e1c55" ], "title": "Problem Setting 
and Terminology" }, { "cite_extract_rate": 1, "cites": [ 484 ], "content": "\\label{sec_problem_definitions}\nImage super-resolution aims at recovering the corresponding HR images from the LR images.\nGenerally, the LR image $I_x$ is modeled as the output of the following degradation:\n\\begin{equation}\n I_x = \\mathcal{D} (I_y ; \\delta),\n\\end{equation}\nwhere $\\mathcal{D}$ denotes a degradation mapping function, $I_y$ is the corresponding HR image and $\\delta$ is the parameters of the degradation process (e.g., the scaling factor or noise).\nGenerally, the degradation process (i.e., $\\mathcal{D}$ and $\\delta$) is unknown and only LR images are provided.\nIn this case, also known as blind SR, researchers are required to recover an HR approximation $\\hat{I_y}$ of the ground truth HR image $I_y$ from the LR image $I_x$, following:\n\\begin{equation}\n \\hat{I_y} = \\mathcal{F} (I_x ; \\theta),\n\\end{equation}\nwhere $\\mathcal{F}$ is the super-resolution model and $\\theta$ denotes the parameters of $\\mathcal{F}$.\nAlthough the degradation process is unknown and can be affected by various factors (e.g., compression artifacts, anisotropic degradations, sensor noise and speckle noise), researchers are trying to model the degradation mapping.\nMost works directly model the degradation as a single downsampling operation, as follows:\n\\begin{equation}\n \\label{eq_degradation_impl_simple}\n \\mathcal{D} (I_y ; \\delta) = (I_y) \\downarrow_s, \\{s\\} \\subset \\delta,\n\\end{equation}\nwhere $\\downarrow_s$ is a downsampling operation with the scaling factor $s$.\nAs a matter of fact, most datasets for generic SR are built based on this pattern, and the most commonly used downsampling operation is bicubic interpolation with anti-aliasing.\nHowever, there are other works modelling the degradation as a combination of several operations:\n\\begin{equation}\n \\label{eq_degradation_impl_combine}\n \\mathcal{D} (I_y ; \\delta) = (I_y \\otimes \\kappa) \\downarrow_s 
+ n_\\varsigma, \\{\\kappa,s,\\varsigma\\} \\subset \\delta,\n\\end{equation}\nwhere $I_y \\otimes \\kappa$ represents the convolution between a blur kernel $\\kappa$ and the HR image $I_y$, and $n_\\varsigma$ is some additive white Gaussian noise with standard deviation $\\varsigma$.\nCompared to the naive definition of Eq. \\ref{eq_degradation_impl_simple}, the combinative degradation pattern of Eq. \\ref{eq_degradation_impl_combine} is closer to real-world cases and has been shown to be more beneficial for SR .\nTo this end, the objective of SR is as follows:\n\\begin{equation}\n \\hat{\\theta} = \\mathop{\\arg \\min}_{\\theta} \\mathcal{L} (\\hat{I_y}, I_y) + \\lambda \\Phi (\\theta),\n\\end{equation}\nwhere $\\mathcal{L} (\\hat{I_y}, I_y)$ represents the loss function between the generated HR image $\\hat{I_y}$ and the ground truth image $I_y$, $\\Phi (\\theta)$ is the regularization term and $\\lambda$ is the tradeoff parameter.\nAlthough the most popular loss function for SR is pixel-wise mean squared error (i.e., pixel loss), more powerful models tend to use a combination of multiple loss functions, which will be covered in Sec. \\ref{sec_loss}.", "id": "53549365-a742-4965-8411-f673b3d3f2e5", "level": "subsection", "origin_cites_number": 1, "parent_id": "a6cc8412-517b-4e0a-a1f4-0cf417b15a19", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Problem Setting and Terminology" ], [ "subsection", "Problem Definitions" ] ], "subsections": [], "title": "Problem Definitions" }, { "cite_extract_rate": 0.45454545454545403, "cites": [ 485, 7031, 7256, 7255, 7028, 7032, 482, 487, 7030, 486 ], "content": "\\label{sec_datasets}\n\\begin{table*}[t]\n \\renewcommand{\\arraystretch}{1.3}\n \\caption{List of public image datasets for super-resolution benchmarks.}\n \\label{table_datasets}\n \\centering\n \\begin{tabular}{|l|c|c|c|c|l|}\n \\hline\n Dataset & Amount & Avg. Resolution & Avg. 
Pixels & Format & Category Keywords \\\\\n \\hline\n BSDS300 & $300$ & $( 435, 367)$ & $154,401$ & JPG & animal, building, food, landscape, people, plant, etc. \\\\\n BSDS500 & $500$ & $( 432, 370)$ & $154,401$ & JPG & animal, building, food, landscape, people, plant, etc. \\\\\n DIV2K & $1000$ & $(1972, 1437)$ & $2,793,250$ & PNG & environment, flora, fauna, handmade object, people, scenery, etc. \\\\\n General-100 & $100$ & $( 435, 381)$ & $181,108$ & BMP & animal, daily necessity, food, people, plant, texture, etc. \\\\\n L20 & $20$ & $(3843, 2870)$ & $11,577,492$ & PNG & animal, building, landscape, people, plant, etc. \\\\\n Manga109 & $109$ & $( 826, 1169)$ & $966,011$ & PNG & manga volume \\\\\n OutdoorScene & $10624$ & $( 553, 440)$ & $249,593$ & PNG & animal, building, grass, mountain, plant, sky, water \\\\\n PIRM & $200$ & $( 617, 482)$ & $292,021$ & PNG & environments, flora, natural scenery, objects, people, etc. \\\\\n Set5 & $5$ & $( 313, 336)$ & $113,491$ & PNG & baby, bird, butterfly, head, woman \\\\\n Set14 & $14$ & $( 492, 446)$ & $230,203$ & PNG & humans, animals, insects, flowers, vegetables, comic, slides, etc. \\\\\n T91 & $91$ & $( 264, 204)$ & $58,853$ & PNG & car, flower, fruit, human face, etc. \\\\\n Urban100 & $100$ & $( 984, 797)$ & $774,314$ & PNG & architecture, city, structure, urban, etc. 
\\\\\n \\hline\n \\end{tabular}\n\\end{table*}\nToday there are a variety of datasets available for image super-resolution, which differ greatly in image amount, quality, resolution, and diversity.\nSome of them provide LR-HR image pairs, while others only provide HR images, in which case the LR images are typically obtained by the \\textit{imresize} function with default settings in MATLAB (i.e., bicubic interpolation with anti-aliasing).\nIn Table \\ref{table_datasets} we list a number of image datasets commonly used by the SR community, and specifically indicate their amounts of HR images, average resolution, average numbers of pixels, image formats, and category keywords.\nBesides these datasets, some datasets widely used for other vision tasks are also employed for SR, such as ImageNet, MS-COCO, VOC2012, and CelebA. \nIn addition, combining multiple datasets for training is also popular, such as combining T91 and BSDS300, or combining DIV2K and Flickr2K.", "id": "d1d57480-8772-4298-b729-a54a8a65eac9", "level": "subsection", "origin_cites_number": 22, "parent_id": "a6cc8412-517b-4e0a-a1f4-0cf417b15a19", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Problem Setting and Terminology" ], [ "subsection", "Datasets for Super-resolution" ] ], "subsections": [], "title": "Datasets for Super-resolution" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec_iqa}\nImage quality refers to visual attributes of images and focuses on the perceptual assessments of viewers.\nIn general, image quality assessment (IQA) methods include subjective methods based on humans' perception (i.e., how realistic the image looks) and objective computational methods. 
\nThe former is more in line with our need but often time-consuming and expensive, thus the latter is currently the mainstream.\nHowever, these methods aren't necessarily consistent between each other, because objective methods are often unable to capture the human visual perception very accurately, which may lead to large difference in IQA results .\nIn addition, the objective IQA methods are further divided into three types : full-reference methods performing assessment using reference images, reduced-reference methods based on comparisons of extracted features, and no-reference methods (i.e., blind IQA) without any reference images.\nNext we'll introduce several most commonly used IQA methods covering both subjective methods and objective methods.", "id": "37eac979-1c0e-4e61-9440-a289c60a8cdd", "level": "subsection", "origin_cites_number": 2, "parent_id": "a6cc8412-517b-4e0a-a1f4-0cf417b15a19", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Problem Setting and Terminology" ], [ "subsection", "Image Quality Assessment" ] ], "subsections": [ "7d05a9f9-4c3f-4087-b3d5-d8a2d8f8a89e", "dd763322-cbeb-4406-a6b4-7a02aaffa7c6", "f971a9d2-8d17-455c-93e6-da6aefec6037", "98ccac76-f3e5-49c7-8cc8-645bbdba6750", "c7f6f3a2-41a1-48d0-82c4-2fd4aa7a7551", "5a2fe187-4ad9-463c-b80d-2d2d2370e997" ], "title": "Image Quality Assessment" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec_iqa_psnr}\nPeak signal-to-noise ratio (PSNR) is one of the most popular reconstruction quality measurement of lossy transformation (e.g., image compression, image inpainting).\nFor image super-resolution, PSNR is defined via the maximum pixel value (denoted as $L$) and the mean squared error (MSE) between images.\nGiven the ground truth image $I$ with $N$ pixels and the reconstruction $\\hat{I}$, the PSNR between $I$ and $\\hat{I}$ are defined as follows:\n\\begin{align}\n \\label{equ:mse}\n {\\rm PSNR} &= 10 \\cdot \\log_{10} (\\frac 
{L^2} {\\frac{1}{N} \\sum_{i=1}^{N} (I(i) - \\hat{I}(i)) ^2}) ,\n\\end{align}\nwhere $L$ equals to $255$ in general cases using 8-bit representations.\nSince the PSNR is only related to the pixel-level MSE, only caring about the differences between corresponding pixels instead of visual perception, it often leads to poor performance in representing the reconstruction quality in real scenes, where we're usually more concerned with human perceptions.\nHowever, due to the necessity to compare with literature works and the lack of completely accurate perceptual metrics, PSNR is still currently the most widely used evaluation criteria for SR models.", "id": "7d05a9f9-4c3f-4087-b3d5-d8a2d8f8a89e", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "37eac979-1c0e-4e61-9440-a289c60a8cdd", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Problem Setting and Terminology" ], [ "subsection", "Image Quality Assessment" ], [ "subsubsection", "Peak Signal-to-Noise Ratio" ] ], "subsections": [], "title": "Peak Signal-to-Noise Ratio" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec_iqa_ssim}\nConsidering that the human visual system (HVS) is highly adapted to extract image structures , the structural similarity index (SSIM) is proposed for measuring the structural similarity between images, based on independent comparisons in terms of luminance, contrast, and structures.\nFor an image $I$ with $N$ pixels, the luminance $\\mu_I$ and contrast $\\sigma_I$ are estimated as the mean and standard deviation of the image intensity, respectively, i.e., $\\mu_I = \\frac{1}{N} \\sum_{i=1}^N I(i)$ and $\\sigma_I = (\\frac{1}{N-1} \\sum_{i=1}^N (I(i) - \\mu_I)^2)^\\frac{1}{2}$,\nwhere $I(i)$ represents the intensity of the $i$-th pixel of image $I$.\nAnd the comparisons on luminance and contrast, denoted as $\\mathcal{C}_l(I, \\hat{I})$ and $\\mathcal{C}_c(I, \\hat{I})$ respectively, are given 
by:\n\\begin{align}\n \\mathcal{C}_l(I, \\hat{I}) &= \\frac {2 \\mu_I \\mu_{\\hat{I}} + C_1} {\\mu_I^2 + \\mu_{\\hat{I}}^2 + C_1}, \\\\\n \\mathcal{C}_c(I, \\hat{I}) &= \\frac {2 \\sigma_I \\sigma_{\\hat{I}} + C_2} {\\sigma_I^2 + \\sigma_{\\hat{I}}^2 + C_2},\n\\end{align}\nwhere $C_1 = (k_1 L)^2$ and $C_2 = (k_2 L)^2$ are constants for avoiding instability, $k_1 \\ll 1$ and $k_2 \\ll 1$.\nBesides, the image structure is represented by the normalized pixel values (i.e., $(I - \\mu_I) / \\sigma_I$), whose correlations (i.e., inner product) measure the structural similarity, equivalent to the correlation coefficient between $I$ and $\\hat{I}$.\nThus the structure comparison function $\\mathcal{C}_s(I, \\hat{I})$ is defined as:\n\\begin{align}\n \\sigma_{I\\hat{I}} &= \\frac{1}{N-1} \\sum_{i=1}^{N} (I(i) - \\mu_I) (\\hat{I}(i) - \\mu_{\\hat{I}}), \\\\\n \\mathcal{C}_s(I, \\hat{I}) &= \\frac {\\sigma_{I \\hat{I}} + C_3} {\\sigma_I \\sigma_{\\hat{I}} + C_3},\n\\end{align}\nwhere $\\sigma_{I,\\hat{I}}$ is the covariance between $I$ and $\\hat{I}$, and $C_3$ is a constant for stability.\nFinally, the SSIM is given by:\n\\begin{equation}\n {\\rm SSIM}(I, \\hat{I}) = [\\mathcal{C}_l(I, \\hat{I})]^\\alpha\n [\\mathcal{C}_c(I, \\hat{I})]^\\beta\n [\\mathcal{C}_s(I, \\hat{I})]^\\gamma,\n\\end{equation}\nwhere $\\alpha$, $\\beta$, $\\gamma$ are control parameters for adjusting the relative importance.\nSince the SSIM evaluates the reconstruction quality from the perspective of the HVS, it better meets the requirements of perceptual assessment , and is also widely used.", "id": "dd763322-cbeb-4406-a6b4-7a02aaffa7c6", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "37eac979-1c0e-4e61-9440-a289c60a8cdd", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Problem Setting and Terminology" ], [ "subsection", "Image Quality Assessment" ], [ "subsubsection", "Structural Similarity" ] ], "subsections": [], "title": 
"Structural Similarity" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 489, 8359, 488, 487, 483 ], "content": "\\label{sec_iqa_mos}\nMean opinion score (MOS) testing is a commonly used subjective IQA method, where human raters are asked to assign perceptual quality scores to tested images.\nTypically, the scores are from $1$ (bad) to $5$ (good).\nAnd the final MOS is calculated as the arithmetic mean over all ratings.\nAlthough the MOS testing seems a faithful IQA method, it has some inherent defects, such as non-linearly perceived scales, biases and variance of rating criteria.\nIn reality, there are some SR models performing poorly in common IQA metrics (e.g., PSNR) but far exceeding others in terms of perceptual quality, in which case the MOS testing is the most reliable IQA method for accurately measuring the perceptual quality .", "id": "f971a9d2-8d17-455c-93e6-da6aefec6037", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "37eac979-1c0e-4e61-9440-a289c60a8cdd", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Problem Setting and Terminology" ], [ "subsection", "Image Quality Assessment" ], [ "subsubsection", "Mean Opinion Score" ] ], "subsections": [], "title": "Mean Opinion Score" }, { "cite_extract_rate": 0.75, "cites": [ 491, 8360, 490 ], "content": "\\label{sec_iqa_learning_perceptual}\nIn order to better assess the image perceptual quality while reducing manual intervention, researchers try to assess the perceptual quality by learning on large datasets.\nSpecifically, Ma \\textit{et al.} and Talebi \\textit{et al.} propose no-reference Ma and NIMA, respectively, which are learned from visual perceptual scores and directly predict the quality scores without ground-truth images.\nIn contrast, Kim \\textit{et al.} propose DeepQA, which predicts visual similarity of images by training on triplets of distorted images, objective error maps, and subjective scores.\nAnd Zhang 
\\textit{et al.} collect a large-scale perceptual similarity dataset, evaluate the learned perceptual image patch similarity (LPIPS) according to the difference in deep features extracted by trained deep networks, and show that the deep features learned by CNNs model perceptual similarity much better than measures without CNNs.\nAlthough these methods exhibit better performance at capturing human visual perception, what kind of perceptual quality we need (e.g., more realistic images, or identity consistent with the original image) remains a question to be explored, thus the objective IQA methods (e.g., PSNR, SSIM) are still the current mainstream.", "id": "98ccac76-f3e5-49c7-8cc8-645bbdba6750", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "37eac979-1c0e-4e61-9440-a289c60a8cdd", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Problem Setting and Terminology" ], [ "subsection", "Image Quality Assessment" ], [ "subsubsection", "Learning-based Perceptual Quality" ] ], "subsections": [], "title": "Learning-based Perceptual Quality" }, { "cite_extract_rate": 0.555555555555555, "cites": [ 7029, 7033, 492, 493, 483 ], "content": "Given that SR models can often help other vision tasks, evaluating reconstruction performance by means of other tasks is another effective way. 
\nSpecifically, researchers feed the original and the reconstructed HR images into trained models, and evaluate the reconstruction quality by comparing the impacts on the prediction performance.\nThe vision tasks used for evaluation include object recognition, face recognition, face alignment and parsing, etc.", "id": "c7f6f3a2-41a1-48d0-82c4-2fd4aa7a7551", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "37eac979-1c0e-4e61-9440-a289c60a8cdd", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Problem Setting and Terminology" ], [ "subsection", "Image Quality Assessment" ], [ "subsubsection", "Task-based Evaluation" ] ], "subsections": [], "title": "Task-based Evaluation" }, { "cite_extract_rate": 0, "cites": [], "content": "In addition to the above IQA methods, there are other less popular SR metrics.\nThe multi-scale structural similarity (MS-SSIM) supplies more flexibility than single-scale SSIM in incorporating the variations of viewing conditions.\nThe feature similarity (FSIM) metric extracts feature points of human interest based on phase congruency and image gradient magnitude to evaluate image quality.\nThe Natural Image Quality Evaluator (NIQE) makes use of measurable deviations from statistical regularities observed in natural images, without exposure to distorted images.\nRecently, Blau \\textit{et al.} prove mathematically that distortion (e.g., PSNR, SSIM) and perceptual quality (e.g., MOS) are at odds with each other, and show that as the distortion decreases, the perceptual quality must become worse.\nThus how to accurately measure SR quality is still an urgent problem to be solved.", "id": "5a2fe187-4ad9-463c-b80d-2d2d2370e997", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "37eac979-1c0e-4e61-9440-a289c60a8cdd", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Problem Setting and Terminology" ], [ "subsection", 
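The PSNR and SSIM definitions given in this subsection can be sketched directly in NumPy. This is an illustrative implementation, not a reference one: standard SSIM implementations average the index over local Gaussian windows rather than using global image statistics, and $C_3 = C_2/2$ is a common simplifying choice (it folds the structure term into the contrast term), which the survey's equations leave unspecified.

```python
import numpy as np

def psnr(img, ref, L=255.0):
    """PSNR: 10*log10(L^2 / MSE), per the definition above."""
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return 10 * np.log10(L ** 2 / mse)

def ssim_global(img, ref, L=255.0, k1=0.01, k2=0.03):
    """SSIM with alpha = beta = gamma = 1, computed on global statistics."""
    x, y = img.astype(np.float64), ref.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    sx, sy = x.std(ddof=1), y.std(ddof=1)
    sxy = ((x - mu_x) * (y - mu_y)).sum() / (x.size - 1)
    C1, C2 = (k1 * L) ** 2, (k2 * L) ** 2
    C3 = C2 / 2  # common simplifying choice for the structure-term constant
    l = (2 * mu_x * mu_y + C1) / (mu_x**2 + mu_y**2 + C1)   # luminance
    c = (2 * sx * sy + C2) / (sx**2 + sy**2 + C2)           # contrast
    s = (sxy + C3) / (sx * sy + C3)                          # structure
    return l * c * s
```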
"Image Quality Assessment" ], [ "subsubsection", "Other IQA Methods" ] ], "subsections": [], "title": "Other IQA Methods" }, { "cite_extract_rate": 0.777777777777777, "cites": [ 8358, 7256, 481, 493, 482, 7255, 7028 ], "content": "In addition to the commonly used RGB color space, the YCbCr color space is also widely used for SR. \nIn this space, images are represented by Y, Cb, Cr channels, denoting the luminance, blue-difference and red-difference chroma components, respectively.\nAlthough currently there is no accepted best practice regarding which color space super-resolution should be performed or evaluated in, earlier models favor operating on the Y channel of YCbCr space , while more recent models tend to operate on RGB channels .\nIt is worth noting that operating (training or evaluation) on different color spaces or channels can make the evaluation results differ greatly (up to $4$ dB) .", "id": "3f34f12a-efa9-4a1c-90a0-042ac955589a", "level": "subsection", "origin_cites_number": 9, "parent_id": "a6cc8412-517b-4e0a-a1f4-0cf417b15a19", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Problem Setting and Terminology" ], [ "subsection", "Operating Channels" ] ], "subsections": [], "title": "Operating Channels" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 490, 494 ], "content": "In this section, we will briefly introduce the two most popular challenges for image SR, NTIRE and PIRM .\n\\textbf{NTIRE Challenge.}\nThe New Trends in Image Restoration and Enhancement (NTIRE) challenge is held in conjunction with CVPR and includes multiple tasks like SR, denoising and colorization.\nFor image SR, the NTIRE challenge is built on the DIV2K dataset and consists of bicubic downscaling tracks and blind tracks with realistic unknown degradation.\nThese tracks differ in degradations and scaling factors, and aim to promote SR research under both ideal conditions and real-world adverse situations.\n\\textbf{PIRM Challenge.}\nThe 
Perceptual Image Restoration and Manipulation (PIRM) challenges are held in conjunction with ECCV and also include multiple tasks.\nIn contrast to NTIRE, one sub-challenge of PIRM focuses on the tradeoff between generation accuracy and perceptual quality, and the other focuses on SR on smartphones.\nAs is well-known , the models targeting distortion frequently produce visually unpleasing results, while the models targeting perceptual quality perform poorly on information fidelity.\nSpecifically, the PIRM divides the perception-distortion plane into three regions according to thresholds on root mean squared error (RMSE).\nIn each region, the winning algorithm is the one that achieves the best perceptual quality , evaluated by NIQE and Ma .\nIn the other sub-challenge , SR on smartphones, participants are asked to perform SR with limited smartphone hardware (including CPU, GPU, RAM, etc.), and the evaluation metrics include PSNR, MS-SSIM and MOS testing.\nIn this way, PIRM encourages advanced research on the perception-distortion tradeoff, and also drives lightweight and efficient image enhancement on smartphones.", "id": "907bce52-c45d-42aa-9cd6-d94aed9e1c55", "level": "subsection", "origin_cites_number": 7, "parent_id": "a6cc8412-517b-4e0a-a1f4-0cf417b15a19", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Problem Setting and Terminology" ], [ "subsection", "Super-resolution Challenges" ] ], "subsections": [], "title": "Super-resolution Challenges" }, { "cite_extract_rate": 0, "cites": [], "content": "Nowadays researchers have proposed a variety of super-resolution models with deep learning.\nThese models focus on supervised SR, i.e., trained with both LR images and corresponding HR images.\nAlthough the differences between these models are very large, they are essentially combinations of a set of components such as model frameworks, upsampling methods, network design, and learning strategies.\nFrom 
this perspective, researchers combine these components to build an integrated SR model for fitting specific purposes.\nIn this section, we concentrate on modularly analyzing the fundamental components (as Fig. \\ref{fig_sr_methodology} shows) instead of introducing each model in isolation, and summarizing their advantages and limitations.", "id": "7a2c21dd-b0fb-4435-8e94-827357705759", "level": "section", "origin_cites_number": 0, "parent_id": "2c20cd26-fec7-4463-b98c-798fc974562d", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ] ], "subsections": [ "689493e0-ec64-420a-b3d0-7d947940e262", "f6adcd75-0abb-4db7-a8b4-0d6afdfa320e", "40ccdbeb-49e2-41b8-849a-4470b011cae3", "d654c603-6009-4a1e-89b5-b983ee09835f", "130e5bca-8421-423a-bba5-191d29e16c10", "fc2fb2c1-1bc8-4bc6-b0f3-577d02a87eea" ], "title": "Supervised Super-resolution" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec_sr_frameworks}\n\\begin{figure}[!t]\n \\centering\n \\subfloat[Pre-upsampling SR]{\\includegraphics[width=0.4\\textwidth]{resource/fig_framework_pre}\n \\label{fig_framework_pre}} \\\\\n \\subfloat[Post-upsampling SR]{\\includegraphics[width=0.4\\textwidth]{resource/fig_framework_post}\n \\label{fig_framework_post}} \\\\\n \\subfloat[Progressive upsampling SR]{\\includegraphics[width=0.4\\textwidth]{resource/fig_framework_progressive}\n \\label{fig_framework_progressive}} \\\\\n \\subfloat[Iterative up-and-down Sampling SR]{\\includegraphics[width=0.4\\textwidth]{resource/fig_framework_up_down}\n \\label{fig_framework_up_down}} \\\\\n \\caption{\n Super-resolution model frameworks based on deep learning.\n The cube size represents the output size.\n The \\textcolor{Gray}{gray} ones denote predefined upsampling, while the \\textcolor{LimeGreen}{green}, \\textcolor{Dandelion}{yellow} and \\textcolor{CornflowerBlue}{blue} ones indicate learnable upsampling, downsampling and convolutional layers, 
respectively.\n And the blocks enclosed by dashed boxes represent stackable modules.\n }\n \\label{fig_framework}\n\\end{figure}\nSince image super-resolution is an ill-posed problem, how to perform upsampling (i.e., generating HR output from LR input) is the key issue.\nAlthough the architectures of existing models vary widely, they can be attributed to four model frameworks (as Fig. \\ref{fig_framework} shows), based on the employed upsampling operations and their locations in the model.", "id": "689493e0-ec64-420a-b3d0-7d947940e262", "level": "subsection", "origin_cites_number": 0, "parent_id": "7a2c21dd-b0fb-4435-8e94-827357705759", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Super-resolution Frameworks" ] ], "subsections": [ "519339a2-242b-44d5-84f0-42bc4f8e8d0e", "49399822-0598-45df-ac83-415cc84fbe69", "5b9686f8-a7e5-4eb6-9df5-54d91e1f02d5", "c84341e6-19ad-464b-80f1-c98cb381ad5f" ], "title": "Super-resolution Frameworks" }, { "cite_extract_rate": 0.555555555555555, "cites": [ 7031, 495, 8358, 7256, 7028 ], "content": "\\label{sec_framework_pre}\nOn account of the difficulty of directly learning the mapping from low-dimensional space to high-dimensional space, utilizing traditional upsampling algorithms to obtain higher-resolution images and then refining them using deep neural networks is a straightforward solution.\nThus Dong \\textit{et al.} first adopt the pre-upsampling SR framework (as Fig. 
\\ref{fig_framework_pre} shows) and propose SRCNN to learn an end-to-end mapping from interpolated LR images to HR images.\nSpecifically, the LR images are upsampled to coarse HR images with the desired size using traditional methods (e.g., bicubic interpolation), then deep CNNs are applied to these images to reconstruct high-quality details.\nSince the most difficult upsampling operation has been completed, the CNNs only need to refine the coarse images, which significantly reduces the learning difficulty.\nIn addition, these models can take interpolated images with arbitrary sizes and scaling factors as input, and give refined results with comparable performance to single-scale SR models .\nThus it has gradually become one of the most popular frameworks , and the main differences between these models are the posterior model design (Sec. \\ref{sec_network_design}) and learning strategies (Sec. \\ref{sec_learning_strategies}).\nHowever, the predefined upsampling often introduces side effects (e.g., noise amplification and blurring), and since most operations are performed in high-dimensional space, the cost in time and space is much higher than in other frameworks .", "id": "519339a2-242b-44d5-84f0-42bc4f8e8d0e", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "689493e0-ec64-420a-b3d0-7d947940e262", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Super-resolution Frameworks" ], [ "subsubsection", "Pre-upsampling Super-resolution" ] ], "subsections": [], "title": "Pre-upsampling Super-resolution" }, { "cite_extract_rate": 0.5, "cites": [ 7256, 482, 496 ], "content": "\\label{sec_framework_post}\nIn order to improve the computational efficiency and make full use of deep learning technology to increase resolution automatically, researchers propose to perform most computation in low-dimensional space by replacing the predefined upsampling with end-to-end 
learnable layers integrated at the end of the models.\nIn the pioneering works of this framework, namely post-upsampling SR (as Fig. \\ref{fig_framework_post} shows), the LR input images are fed into deep CNNs without increasing resolution, and end-to-end learnable upsampling layers are applied at the end of the network.\nSince the feature extraction process with huge computational cost only occurs in low-dimensional space and the resolution increases only at the end, the computational and spatial complexity are much reduced.\nTherefore, this framework has also become one of the most mainstream frameworks .\nThese models differ mainly in the learnable upsampling layers (Sec. \\ref{sec_upsampling_methods}), anterior CNN structures (Sec. \\ref{sec_network_design}) and learning strategies (Sec. \\ref{sec_learning_strategies}), etc.", "id": "49399822-0598-45df-ac83-415cc84fbe69", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "689493e0-ec64-420a-b3d0-7d947940e262", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Super-resolution Frameworks" ], [ "subsubsection", "Post-upsampling Super-resolution" ] ], "subsections": [], "title": "Post-upsampling Super-resolution" }, { "cite_extract_rate": 1, "cites": [ 7030, 489, 7254 ], "content": "\\label{sec_framework_progressive}\nAlthough the post-upsampling SR framework has immensely reduced the computational cost, it still has some shortcomings.\nOn the one hand, the upsampling is performed in only one step, which greatly increases the learning difficulty for large scaling factors (e.g., 4, 8).\nOn the other hand, each scaling factor requires training an individual SR model, which cannot cope with the need for multi-scale SR.\nTo address these drawbacks, a progressive upsampling framework is adopted by the Laplacian pyramid SR network (LapSRN) , as Fig. 
\\ref{fig_framework_progressive} shows.\nSpecifically, the models under this framework are based on a cascade of CNNs and progressively reconstruct higher-resolution images.\nAt each stage, the images are upsampled to higher resolution and refined by CNNs.\nOther works such as MS-LapSRN and progressive SR (ProSR) also adopt this framework and achieve relatively high performance.\nIn contrast to LapSRN and MS-LapSRN, which use the intermediate reconstructed images as the ``base images'' for subsequent modules, the ProSR keeps the main information stream and reconstructs intermediate-resolution images by individual heads.\nBy decomposing a difficult task into simple tasks, the models under this framework greatly reduce the learning difficulty, especially with large factors, and also cope with multi-scale SR without introducing excessive spatial and temporal cost.\nIn addition, some specific learning strategies such as curriculum learning (Sec. 
\\ref{sec_multi_supervision}) can be directly integrated to further reduce learning difficulty and improve final performance.\nHowever, these models also encounter some problems, such as the complicated model design for multiple stages and unstable training, and more modelling guidance and more advanced training strategies are needed.", "id": "5b9686f8-a7e5-4eb6-9df5-54d91e1f02d5", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "689493e0-ec64-420a-b3d0-7d947940e262", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Super-resolution Frameworks" ], [ "subsubsection", "Progressive Upsampling Super-resolution" ] ], "subsections": [], "title": "Progressive Upsampling Super-resolution" }, { "cite_extract_rate": 0.8, "cites": [ 498, 497, 7032, 7255 ], "content": "\\label{sec_framework_up_down}\nIn order to better capture the mutual dependency of LR-HR image pairs, an efficient iterative procedure named back-projection is incorporated into SR .\nThis SR framework, namely iterative up-and-down sampling SR (as Fig. 
\\ref{fig_framework_up_down} shows), tries to iteratively apply back-projection refinement, i.e., computing the reconstruction error and then fusing it back to tune the HR image intensity.\nSpecifically, Haris \\textit{et al.} exploit iterative up-and-down sampling layers and propose DBPN, which connects upsampling and downsampling layers alternately and reconstructs the final HR result using all of the intermediate reconstructions.\nSimilarly, the SRFBN employs an iterative up-and-down sampling feedback block with more dense skip connections and learns better representations.\nAnd the RBPN for video super-resolution extracts context from consecutive video frames and combines this context to produce recurrent output frames by a back-projection module.\nThe models under this framework can better mine the deep relationships between LR-HR image pairs and thus provide higher-quality reconstruction results.\nNevertheless, the design criteria of the back-projection modules are still unclear.\nSince this mechanism has just been introduced into deep learning-based SR, the framework has great potential and needs further exploration.", "id": "c84341e6-19ad-464b-80f1-c98cb381ad5f", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "689493e0-ec64-420a-b3d0-7d947940e262", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Super-resolution Frameworks" ], [ "subsubsection", "Iterative Up-and-down Sampling Super-resolution" ] ], "subsections": [], "title": "Iterative Up-and-down Sampling Super-resolution" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec_upsampling_methods}\nIn addition to the upsampling positions in the model, how to perform upsampling is of great importance.\nAlthough there have been various traditional upsampling methods , making use of CNNs to learn end-to-end upsampling has gradually become a trend. 
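Before going through the individual methods, the core channel-to-space rearrangement behind end-to-end learnable upsampling (detailed under the sub-pixel layer later in this section) can be sketched in a few lines of NumPy; this is an illustrative sketch using one common channel ordering, not tied to any particular implementation:

```python
import numpy as np

def pixel_shuffle(x, s):
    """Rearrange an (h, w, s*s*c) array into (s*h, s*w, c).

    Each group of s*s channels at one low-resolution position fills
    an s-by-s spatial block of the high-resolution output.
    """
    h, w, c_in = x.shape
    c = c_in // (s * s)
    x = x.reshape(h, w, s, s, c)     # split channels into an (s, s) block
    x = x.transpose(0, 2, 1, 3, 4)   # interleave: (h, s, w, s, c)
    return x.reshape(h * s, w * s, c)

# A 2x2 input with 4 channels becomes a 4x4 single-channel output.
x = np.arange(16, dtype=np.float32).reshape(2, 2, 4)
y = pixel_shuffle(x, 2)
```

Note how all pixels in each 2x2 output block come from the channels of a single input position, which is exactly the blocky shared receptive field discussed for the sub-pixel layer later in this section.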
\nIn this section, we'll introduce some traditional interpolation-based algorithms and deep learning-based upsampling layers.", "id": "f6adcd75-0abb-4db7-a8b4-0d6afdfa320e", "level": "subsection", "origin_cites_number": 4, "parent_id": "7a2c21dd-b0fb-4435-8e94-827357705759", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Upsampling Methods" ] ], "subsections": [ "de67ad4e-9079-4007-bf3c-d76797af7546", "e83198c5-8c15-4bd4-bf6e-cc354c2ace47" ], "title": "Upsampling Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{figure}[!t]\n \\centering\n \\subfloat[Starting]{\\includegraphics[width=0.1\\textwidth]{resource/fig_interpolation_0}\n \\label{fig_interpolation_0}}\n \\hfil\n \\subfloat[Step 1]{\\includegraphics[width=0.1\\textwidth]{resource/fig_interpolation_1}\n \\label{fig_interpolation_1}}\n \\hfil\n \\subfloat[Step 2]{\\includegraphics[width=0.1\\textwidth]{resource/fig_interpolation_2}\n \\label{fig_interpolation_2}}\n \\hfil\n \\subfloat[End]{\\includegraphics[width=0.1\\textwidth]{resource/fig_interpolation_3}\n \\label{fig_interpolation_3}}\n \\caption{\n Interpolation-based upsampling methods.\n The \\textcolor{Gray}{gray} board denotes the coordinates of pixels, and the \\textcolor{CornflowerBlue}{blue}, \\textcolor{Dandelion}{yellow} and \\textcolor{LimeGreen}{green} points represent the initial, intermediate and output pixels, respectively.\n }\n \\label{fig_interpolation}\n\\end{figure}\nImage interpolation, a.k.a. 
image scaling, refers to resizing digital images and is widely used by image-related applications.\nThe traditional interpolation methods include nearest-neighbor interpolation, bilinear and bicubic interpolation, Sinc and Lanczos resampling, etc.\nSince these methods are interpretable and easy to implement, some of them are still widely used in CNN-based SR models.\n\\textbf{Nearest-neighbor Interpolation.}\nThe nearest-neighbor interpolation is a simple and intuitive algorithm.\nIt selects the value of the nearest pixel for each position to be interpolated regardless of any other pixels.\nThus this method is very fast but usually produces blocky results of low quality.\n\\textbf{Bilinear Interpolation.}\nThe bilinear interpolation (BLI) first performs linear interpolation on one axis of the image and then on the other axis, as Fig. \\ref{fig_interpolation} shows.\nSince it results in a quadratic interpolation with a receptive field sized $2 \\times 2$, it shows much better performance than nearest-neighbor interpolation while keeping a relatively fast speed.\n\\textbf{Bicubic Interpolation.}\nSimilarly, the bicubic interpolation (BCI) performs cubic interpolation on each of the two axes, as Fig. \\ref{fig_interpolation} shows.\nCompared to BLI, the BCI takes $4 \\times 4$ pixels into account, and produces smoother results with fewer artifacts but at a much lower speed.\nIn fact, the BCI with anti-aliasing is the mainstream method for building SR datasets (i.e., degrading HR images to LR images), and is also widely used in the pre-upsampling SR framework (Sec. 
\\ref{sec_framework_pre}).\nAs a matter of fact, the interpolation-based upsampling methods improve the image resolution based only on the image's own signals, without bringing in any additional information.\nInstead, they often introduce some side effects, such as computational overhead, noise amplification and blurred results.\nTherefore, the current trend is to replace the interpolation-based methods with learnable upsampling layers.", "id": "de67ad4e-9079-4007-bf3c-d76797af7546", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "f6adcd75-0abb-4db7-a8b4-0d6afdfa320e", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Upsampling Methods" ], [ "subsubsection", "Interpolation-based Upsampling" ] ], "subsections": [], "title": "Interpolation-based Upsampling" }, { "cite_extract_rate": 0.5, "cites": [ 499, 7034, 484, 7257, 481, 7255, 496 ], "content": "\\label{sec_learning_based_upsampling}\n\\begin{figure}[!t]\n \\centering\n \\subfloat[Starting]{\\includegraphics[height=60pt]{resource/fig_deconv_0}\n \\label{fig_deconv_0}}\n \\hfil\n \\subfloat[Expanding]{\\includegraphics[height=60pt]{resource/fig_deconv_1}\n \\label{fig_deconv_1}}\n \\hfil\n \\subfloat[Convolution]{\\includegraphics[height=60pt]{resource/fig_deconv_2}\n \\label{fig_deconv_2}}\n \\caption{\n Transposed convolution layer.\n The \\textcolor{CornflowerBlue}{blue} boxes denote the input, and the \\textcolor{LimeGreen}{green} boxes indicate the kernel and the convolution output.\n }\n \\label{fig_deconv}\n\\end{figure}\n\\begin{figure}[!t]\n \\centering\n \\subfloat[Starting]{\\includegraphics[height=45pt]{resource/fig_sub_pixel_0}\n \\label{fig_sub_pixel_0}}\n \\hfil\n \\subfloat[Convolution]{\\includegraphics[height=45pt]{resource/fig_sub_pixel_1}\n \\label{fig_sub_pixel_1}}\n \\hfil\n \\subfloat[Reshaping]{\\includegraphics[height=45pt]{resource/fig_sub_pixel_2}\n \\label{fig_sub_pixel_2}}\n 
\\caption{\n Sub-pixel layer.\n The \\textcolor{CornflowerBlue}{blue} boxes denote the input, and the boxes with other colors indicate different convolution operations and different output feature maps.\n }\n \\label{fig_sub_pixel}\n\\end{figure}\n\\begin{figure}[!t]\n \\centering\n {\n \\setlength{\\fboxrule}{1pt}\n {\\includegraphics[width=0.35\\textwidth]{resource/fig_meta_upscale}\n }\n }\n \\caption{\n Meta upscale module.\n The \\textcolor{CornflowerBlue}{blue} boxes denote the projection patch, and the \\textcolor{LimeGreen}{green} boxes and lines indicate the convolution operation with predicted weights.\n }\n \\label{fig_meta_upscale}\n\\end{figure}\nIn order to overcome the shortcomings of interpolation-based methods and learn upsampling in an end-to-end manner, the transposed convolution layer and the sub-pixel layer are introduced into the SR field.\n\\textbf{Transposed Convolution Layer.}\nThe transposed convolution layer, a.k.a. deconvolution layer , tries to perform a transformation opposite to a normal convolution, i.e., predicting the possible input based on feature maps sized like the convolution output.\nSpecifically, it increases the image resolution by expanding the image with inserted zeros and then performing convolution.\nTaking $2\\times$ SR with a $3 \\times 3$ kernel as an example (as Fig. \\ref{fig_deconv} shows), the input is first expanded to twice its original size, where the added pixel values are set to $0$ (Fig. \\ref{fig_deconv_1}).\nThen a convolution with kernel sized $3 \\times 3$, stride $1$ and padding $1$ is applied (Fig. 
\\ref{fig_deconv_2}).\nIn this way, the input is upsampled by a factor of 2, in which case the receptive field is at most $2 \\times 2$.\nSince the transposed convolution enlarges the image size in an end-to-end manner while maintaining a connectivity pattern compatible with vanilla convolution, it is widely used as an upsampling layer in SR models .\nHowever, this layer can easily cause ``uneven overlapping'' on each axis , and the multiplied results on both axes further create a checkerboard-like pattern of varying magnitudes and thus hurt the SR performance.\n\\textbf{Sub-pixel Layer.}\nThe sub-pixel layer , another end-to-end learnable upsampling layer, performs upsampling by generating a plurality of channels by convolution and then reshaping them, as Fig. \\ref{fig_sub_pixel} shows.\nWithin this layer, a convolution is first applied to produce outputs with $s^2$ times the number of channels, where $s$ is the scaling factor (Fig. \\ref{fig_sub_pixel_1}).\nAssuming the input size is $h \\times w \\times c$, the output size will be $h \\times w \\times s^2 c$.\nAfter that, the reshaping operation (a.k.a. \\textit{shuffle} ) is performed to produce outputs with size $sh \\times sw \\times c$ (Fig. 
\\ref{fig_sub_pixel_2}).\nIn this case, the receptive field can be up to $3 \\times 3$.\nDue to the end-to-end upsampling manner, this layer is also widely used by SR models .\nCompared with the transposed convolution layer, the sub-pixel layer has a larger receptive field, which provides more contextual information to help generate more realistic details.\nHowever, since the distribution of the receptive fields is uneven and blocky regions actually share the same receptive field, it may result in some artifacts near the boundaries of different blocks.\nOn the other hand, independently predicting adjacent pixels in a blocky region may cause unsmooth outputs.\nThus Gao \\textit{et al.} propose PixelTCL, which replaces the independent prediction with interdependent sequential prediction, and produces smoother and more consistent results.\n\\textbf{Meta Upscale Module.}\n The previous methods need to predefine the scaling factors, i.e., training different upsampling modules for different factors, which is inefficient and not in line with real needs.\nTherefore, Hu \\textit{et al.} propose the meta upscale module (as Fig. 
\\ref{fig_meta_upscale} shows), which first solves SR of arbitrary scaling factors based on meta learning.\nSpecifically, for each target position on the HR images, this module projects it to a small patch on the LR feature maps (i.e., $k \\times k \\times c_{in}$), predicts convolution weights (i.e., $k \\times k \\times c_{in} \\times c_{out}$) according to the projection offsets and the scaling factor by dense layers, and performs convolution.\nIn this way, the meta upscale module can continuously zoom in an image with arbitrary factors using a single model.\nMoreover, due to the large amount of training data (multiple factors are simultaneously trained), the module can exhibit comparable or even better performance on fixed factors.\nAlthough this module needs to predict weights during inference, the execution time of the upsampling module only accounts for about 1\\% of the time of feature extraction .\nHowever, this method predicts a large number of convolution weights for each target pixel based on several values independent of the image contents, so the prediction result may be unstable and less efficient when faced with larger magnifications.\nNowadays, these learning-based layers have become the most widely used upsampling methods.\nEspecially in the post-upsampling framework (Sec. 
\\ref{sec_framework_post}), these layers are usually used in the final upsampling phase for reconstructing HR images based on high-level representations extracted in low-dimensional space, and thus achieve end-to-end SR while avoiding overwhelming operations in high-dimensional space.", "id": "e83198c5-8c15-4bd4-bf6e-cc354c2ace47", "level": "subsubsection", "origin_cites_number": 14, "parent_id": "f6adcd75-0abb-4db7-a8b4-0d6afdfa320e", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Upsampling Methods" ], [ "subsubsection", "Learning-based Upsampling" ] ], "subsections": [], "title": "Learning-based Upsampling" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec_network_design}\n\\begin{figure*}[!t]\n \\centering\n \\subfloat[Residual Learning]{\\includegraphics[width=0.25\\textwidth]{resource/fig_net_residual_learning}\n \\label{fig_residual_learning}}\n \\subfloat[Recursive learning]{\\includegraphics[width=0.25\\textwidth]{resource/fig_net_recursive_learning}\n \\label{fig_recursive_learning}}\n \\subfloat[Channel attention]{\\includegraphics[width=0.25\\textwidth]{resource/fig_net_channel_attention}\n \\label{fig_channel_attention}}\n \\subfloat[Dense connections]{\\includegraphics[width=0.25\\textwidth]{resource/fig_net_dense_connections}\n \\label{fig_dense_connections}} \\\\\n \\subfloat[Local multi-path learning]{\\includegraphics[width=0.25\\textwidth]{resource/fig_net_multi_path_learning}\n \\label{fig_multi_path_learning}}\n \\subfloat[Scale-specific multi-path learning]{\\includegraphics[width=0.25\\textwidth]{resource/fig_net_multi_path_learning_specific}\n \\label{fig_multi_path_learning_specific}}\n \\subfloat[Group convolution]{\\includegraphics[width=0.25\\textwidth]{resource/fig_net_group_conv}\n \\label{fig_group_conv}}\n \\subfloat[Pyramid pooling]{\\includegraphics[width=0.25\\textwidth]{resource/fig_net_pyramid_pooling}\n 
\\label{fig_pyramid_pooling}} \\\\\n \\caption{\n Network design strategies.\n }\n \\label{fig_modelling_strategies}\n\\end{figure*}\nNowadays network design has become one of the most important parts of deep learning.\nIn the super-resolution field, researchers apply all kinds of network design strategies on top of the four SR frameworks (Sec. \\ref{sec_sr_frameworks}) to construct the final networks.\nIn this section, we decompose these networks into essential principles or strategies for network design, introduce them and analyze the advantages and limitations one by one.", "id": "40ccdbeb-49e2-41b8-849a-4470b011cae3", "level": "subsection", "origin_cites_number": 0, "parent_id": "7a2c21dd-b0fb-4435-8e94-827357705759", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Network Design" ] ], "subsections": [ "e2bf59f0-9d13-40a4-aacd-65cd2ed1ed45", "8f7faf9a-44bf-4a4a-a876-6b36ac5bd458", "1dadc8f5-aa09-4f5b-9d6f-f5a916d558a5", "9d2cd66e-cbba-47ff-ac53-21cc3e8b6ed5", "d97c223a-4bc5-4a47-8979-2535eb24b334", "678fc038-cc30-476b-9384-4e4afbdc9b84", "b527778b-723c-49d8-9ab4-5333d03ca380", "991fb891-2f49-4b32-8d73-a582b4734886", "fda2defb-1f41-4ab9-ad3c-c031428143aa", "c306a6a1-8f8c-4859-987d-2a95118d2c98", "a87c4558-b49d-41c3-b3f4-d81e2530643c" ], "title": "Network Design" }, { "cite_extract_rate": 0.5, "cites": [ 97, 7031, 500, 493, 7028, 496 ], "content": "\\label{sec_residual_learning}\nSince He \\textit{et al.} proposed ResNet for learning residuals instead of a thorough mapping, residual learning has been widely employed by SR models , as Fig. 
\\ref{fig_residual_learning} shows.\nAmong them, the residual learning strategies can be roughly divided into global and local residual learning.\n\\textbf{Global Residual Learning.}\nSince image SR is an image-to-image translation task where the input image is highly correlated with the target image, researchers try to learn only the residuals between them, namely global residual learning.\nIn this case, it avoids learning a complicated transformation from one complete image to another, and instead only requires learning a residual map to restore the missing high-frequency details.\nSince the residuals in most regions are close to zero, the model complexity and learning difficulty are greatly reduced.\nThus it is widely used by SR models .\n\\textbf{Local Residual Learning.}\nLocal residual learning is similar to the residual learning in ResNet and is used to alleviate the degradation problem caused by ever-increasing network depths, reduce training difficulty and improve the learning ability.\nIt is also widely used for SR .\nIn practice, the above methods are both implemented by shortcut connections (often scaled by a small constant) and element-wise addition, and the difference is that the former directly connects the input and output images, while the latter usually adds multiple shortcuts between layers with different depths inside the network.", "id": "e2bf59f0-9d13-40a4-aacd-65cd2ed1ed45", "level": "subsubsection", "origin_cites_number": 12, "parent_id": "40ccdbeb-49e2-41b8-849a-4470b011cae3", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Network Design" ], [ "subsubsection", "Residual Learning" ] ], "subsections": [], "title": "Residual Learning" }, { "cite_extract_rate": 0.777777777777777, "cites": [ 489, 97, 7031, 495, 497, 481, 496 ], "content": "\\label{sec_recursive_learning}\nIn order to learn higher-level features without introducing 
overwhelming parameters, recursive learning, which means applying the same modules multiple times in a recursive manner, is introduced into the SR field, as Fig. \\ref{fig_recursive_learning} shows.\nAmong them, the 16-recursive DRCN employs a single convolutional layer as the recursive unit and reaches a receptive field of $41 \\times 41$, which is much larger than the $13 \\times 13$ of SRCNN , without overly many parameters.\nThe DRRN uses a ResBlock as the recursive unit for 25 recursions and obtains even better performance than the 17-ResBlock baseline.\nLater Tai \\textit{et al.} propose MemNet based on the memory block, which is composed of a 6-recursive ResBlock where the outputs of every recursion are concatenated and go through an extra $1 \\times 1$ convolution for memorizing and forgetting.\nThe cascading residual network (CARN) also adopts a similar recursive unit including several ResBlocks.\nRecently, Li \\textit{et al.} employ the iterative up-and-down sampling SR framework, and propose a feedback network based on recursive learning, where the weights of the entire network are shared across all recursions.\nBesides, researchers also employ different recursive modules in different parts.\nSpecifically, Han \\textit{et al.} propose the dual-state recurrent network (DSRN) to exchange signals between the LR and HR states.\nAt each time step (i.e., recursion), the representations of each branch are updated and exchanged for better exploring LR-HR relationships.\nSimilarly, Lai \\textit{et al.} employ the embedding and upsampling modules as recursive units, and thus much reduce the model size at the expense of little performance loss. \nIn general, recursive learning can indeed learn more advanced representations without introducing excessive parameters, but still cannot avoid high computational costs.\nMoreover, it inherently brings vanishing or exploding gradient problems; consequently, some techniques such as residual learning (Sec. 
\ref{sec_residual_learning}) and multi-supervision (Sec. \ref{sec_multi_supervision}) are often integrated with recursive learning for mitigating these problems .", "id": "8f7faf9a-44bf-4a4a-a876-6b36ac5bd458", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "40ccdbeb-49e2-41b8-849a-4470b011cae3", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Network Design" ], [ "subsubsection", "Recursive Learning" ] ], "subsections": [], "title": "Recursive Learning" }, { "cite_extract_rate": 0.777777777777777, "cites": [ 7254, 481, 482, 488, 7030, 305, 496 ], "content": "\label{sec_multi_path_learning}\nMulti-path learning refers to passing features through multiple paths, which perform different operations, and fusing them back for providing better modelling capabilities.\nSpecifically, it can be divided into global, local and scale-specific multi-path learning, as below.\n\textbf{Global Multi-path Learning.}\nGlobal multi-path learning refers to making use of multiple paths to extract features of different aspects of the images.\nThese paths can cross each other in the propagation and thus greatly enhance the learning ability.\nSpecifically, the LapSRN includes a feature extraction path predicting the sub-band residuals in a coarse-to-fine fashion and another path to reconstruct HR images based on the signals from both paths.\nSimilarly, the DSRN utilizes two paths to extract information in low-dimensional and high-dimensional space, respectively, and continuously exchanges information for further improving learning ability.\nAnd the pixel recursive super-resolution adopts a conditioning path to capture the global structure of images, and a prior path to capture the serial dependence of generated pixels.\nIn contrast, Ren \textit{et al.} employ multiple paths with unbalanced structures to perform upsampling and fuse them at the end of the 
model.\n\\textbf{Local Multi-path Learning.}\nMotivated by the inception module , the MSRN adopts a new block for multi-scale feature extraction, as Fig. \\ref{fig_multi_path_learning} shows.\nIn this block, two convolution layers with kernel size $3\\times 3$ and $5\\times 5$ are adopted to extract features simultaneously, then the outputs are concatenated and go through the same operations again, and finally an extra $1\\times 1$ convolution is applied.\nA shortcut connects the input and output by element-wise addition.\nThrough such local multi-path learning, the SR models can better extract image features from multiple scales and further improve performance.\n\\textbf{Scale-specific Multi-path Learning.}\nConsidering that SR models for different scales need to go through similar feature extraction, Lim \\textit{et al.} propose scale-specific multi-path learning to cope with multi-scale SR with a single network.\nTo be concrete, they share the principal components of the model (i.e., the intermediate layers for feature extraction), and attach scale-specific pre-processing paths and upsampling paths at the beginning and the end of the network, respectively (as Fig. 
\ref{fig_multi_path_learning_specific} shows).\nDuring training, only the paths corresponding to the selected scale are enabled and updated.\nIn this way, the proposed MDSR greatly reduces the model size by sharing most of the parameters for different scales and exhibits performance comparable to single-scale models.\nSimilar scale-specific multi-path learning is also adopted by CARN and ProSR .", "id": "1dadc8f5-aa09-4f5b-9d6f-f5a916d558a5", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "40ccdbeb-49e2-41b8-849a-4470b011cae3", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Network Design" ], [ "subsubsection", "Multi-path Learning" ] ], "subsections": [], "title": "Multi-path Learning" }, { "cite_extract_rate": 0.8571428571428571, "cites": [ 7031, 7034, 96, 481, 7255, 7258 ], "content": "\label{sec_dense_connections}\nSince Huang \textit{et al.} propose DenseNet based on dense blocks, the dense connections have become more and more popular in vision tasks.\nFor each layer in a dense block, the feature maps of all preceding layers are used as inputs, and its own feature maps are used as inputs into all subsequent layers, so that it leads to $l \cdot (l - 1) / 2$ connections in an $l$-layer dense block ($l \ge 2$).\nThe dense connections not only help alleviate gradient vanishing, enhance signal propagation and encourage feature reuse, but also substantially reduce the model size by employing a small growth rate (i.e., number of channels in dense blocks) and squeezing channels after concatenating all input feature maps.\nFor the sake of fusing low-level and high-level features to provide richer information for reconstructing high-quality details, dense connections are introduced into the SR field, as Fig. 
\\ref{fig_dense_connections} shows.\nTong \\textit{et al.} not only adopt dense blocks to construct a 69-layers SRDenseNet, but also insert dense connections between different dense blocks, i.e., for every dense block, the feature maps of all preceding blocks are used as inputs, and its own feature maps are used as inputs into all subsequent blocks.\nThese layer-level and block-level dense connections are also adopted by MemNet , CARN , RDN and ESRGAN .\nThe DBPN also adopts dense connections extensively, but their dense connections are between all the upsampling units, as are the downsampling units.", "id": "9d2cd66e-cbba-47ff-ac53-21cc3e8b6ed5", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "40ccdbeb-49e2-41b8-849a-4470b011cae3", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Network Design" ], [ "subsubsection", "Dense Connections" ] ], "subsections": [], "title": "Dense Connections" }, { "cite_extract_rate": 0.5, "cites": [ 493, 501 ], "content": "\\label{sec_attention_mechanism}\n\\textbf{Channel Attention.}\nConsidering the interdependence and interaction of the feature representations between different channels, Hu \\textit{et al.} propose a ``squeeze-and-excitation'' block to improve learning ability by explicitly modelling channel interdependence, as Fig. 
\ref{fig_channel_attention} shows.\nIn this block, each input channel is squeezed into a channel descriptor (i.e., a scalar) using global average pooling (GAP), then these descriptors are fed into two dense layers to produce channel-wise scaling factors for input channels.\nRecently, Zhang \textit{et al.} incorporate the channel attention mechanism with SR and propose RCAN, which markedly improves the representation ability of the model and SR performance.\nIn order to better learn the feature correlations, Dai \textit{et al.} further propose a second-order channel attention (SOCA) module.\nThe SOCA adaptively rescales the channel-wise features by using second-order feature statistics instead of GAP, and enables extracting more informative and discriminative representations.\n\textbf{Non-local Attention.}\nMost existing SR models have very limited local receptive fields.\nHowever, some distant objects or textures may be very important for local patch generation.\nTherefore, Zhang \textit{et al.} propose local and non-local attention blocks to extract features that capture the long-range dependencies between pixels.\nSpecifically, they propose a trunk branch for extracting features, and a (non-)local mask branch for adaptively rescaling features of the trunk branch.\nAmong them, the local branch employs an encoder-decoder structure to learn the local attention, while the non-local branch uses the embedded Gaussian function to evaluate pairwise relationships between every two position indices in the feature maps to predict the scaling weights.\nThrough this mechanism, the proposed method captures the spatial attention well and further enhances the representation ability.\nSimilarly, Dai \textit{et al.} also incorporate the non-local attention mechanism to capture long-distance spatial contextual information.", "id": "d97c223a-4bc5-4a47-8979-2535eb24b334", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "40ccdbeb-49e2-41b8-849a-4470b011cae3", 
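The squeeze-and-excitation channel attention described above can be sketched in a few lines of numpy. The layer sizes, reduction ratio `r`, and random weights below are illustrative assumptions, not the published RCAN implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation sketch for a feature map x of shape (h, w, c)."""
    # Squeeze: global average pooling turns each channel into one descriptor.
    z = x.mean(axis=(0, 1))                      # shape (c,)
    # Excitation: two dense layers produce per-channel scaling factors in (0, 1).
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))    # shape (c,)
    # Rescale every input channel by its learned factor (broadcast over h, w).
    return x * s

rng = np.random.default_rng(0)
h, w, c, r = 8, 8, 16, 4                         # r is the channel reduction ratio
x = rng.random((h, w, c))
w1 = rng.standard_normal((c // r, c))            # squeeze: c -> c/r
w2 = rng.standard_normal((c, c // r))            # excite:  c/r -> c
y = channel_attention(x, w1, w2)
assert y.shape == x.shape
```

In a real model, `w1` and `w2` are learned end-to-end, so informative channels are amplified and uninformative ones suppressed.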
"prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Network Design" ], [ "subsubsection", "Attention Mechanism" ] ], "subsections": [], "title": "Attention Mechanism" }, { "cite_extract_rate": 1, "cites": [ 503, 500, 505, 481, 502, 494, 504 ], "content": "\label{sec_advanced_convolution}\nSince convolution operations are the basis of deep neural networks, researchers also attempt to improve convolution operations for better performance or greater efficiency.\n\textbf{Dilated Convolution.}\nIt is well known that the contextual information facilitates generating realistic details for SR.\nThus Zhang \textit{et al.} replace the common convolution with dilated convolution in SR models, more than double the receptive field and achieve much better performance.\n\textbf{Group Convolution.}\nMotivated by recent advances in lightweight CNNs , Hui \textit{et al.} and Ahn \textit{et al.} propose IDN and CARN-M, respectively, by replacing the vanilla convolution with group convolution.\nAs some previous works have proven, the group convolution greatly reduces the number of parameters and operations at the expense of a small performance loss .\n\textbf{Depthwise Separable Convolution.}\nSince Howard \textit{et al.} propose depthwise separable convolution for efficient convolution, it has been extended to various fields.\nSpecifically, it consists of a factorized depthwise convolution and a pointwise convolution (i.e., $1 \times 1$ convolution), and thus greatly reduces the number of parameters and operations with only a small reduction in accuracy .\nRecently, Nie \textit{et al.} employ depthwise separable convolution and greatly accelerate the SR architecture.", "id": "678fc038-cc30-476b-9384-4e4afbdc9b84", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "40ccdbeb-49e2-41b8-849a-4470b011cae3", "prefix_titles": [ [ "title", "Deep Learning for Image 
Super-resolution:\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Network Design" ], [ "subsubsection", "Advanced Convolution" ] ], "subsections": [], "title": "Advanced Convolution" }, { "cite_extract_rate": 0.75, "cites": [ 506, 507, 488 ], "content": "\label{sec_region_recursive_learning}\nMost SR models treat SR as a pixel-independent task and thus cannot properly model the interdependence between generated pixels.\nInspired by PixelCNN , Dahl \textit{et al.} first propose pixel recursive learning to perform pixel-by-pixel generation, by employing two networks to capture global contextual information and serial generation dependence, respectively.\nIn this way, the proposed method synthesizes realistic hair and skin details when super-resolving very low-resolution face images (e.g., $8\times 8$) and far exceeds the previous methods on MOS testing (Sec. \ref{sec_iqa_mos}).\nMotivated by the human attention shifting mechanism , the Attention-FH also adopts this strategy by resorting to a recurrent policy network for sequentially discovering attended patches and performing local enhancement.\nIn this way, it is capable of adaptively personalizing an optimal searching path for each image according to its own characteristic, and thus fully exploits the global intra-dependence of images.\nAlthough these methods show better performance to some extent, the recursive process requiring a long propagation path greatly increases the computational cost and training difficulty, especially for super-resolving HR images.", "id": "b527778b-723c-49d8-9ab4-5333d03ca380", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "40ccdbeb-49e2-41b8-849a-4470b011cae3", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Network Design" ], [ "subsubsection", "Region-recursive Learning" ] ], "subsections": [], "title": "Region-recursive 
Learning" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 509, 508 ], "content": "\\label{sec_pyramid_pooling}\nMotivated by the spatial pyramid pooling layer , Zhao \\textit{et al.} propose the pyramid pooling module to better utilize global and local contextual information.\nSpecifically, for feature maps sized $h\\times w\\times c$, each feature map is divided into $M\\times M$ bins, and goes through global average pooling, resulting in $M\\times M\\times c$ outputs.\nThen a $1\\times 1$ convolution is performed for compressing the outputs to a single channel.\nAfter that, the low-dimensional feature map is upsampled to the same size as the original feature map via bilinear interpolation.\nBy using different $M$, the module integrates global as well as local contextual information effectively.\nBy incorporating this module, the proposed EDSR-PP model further improve the performance over baselines.", "id": "991fb891-2f49-4b32-8d73-a582b4734886", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "40ccdbeb-49e2-41b8-849a-4470b011cae3", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Network Design" ], [ "subsubsection", "Pyramid Pooling" ] ], "subsections": [], "title": "Pyramid Pooling" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 7035 ], "content": "\\label{sec_wavelet_transformation}\nAs is well-known, the wavelet transformation (WT) is a highly efficient representation of images by decomposing the image signal into high-frequency sub-bands denoting texture details and low-frequency sub-bands containing global topological information.\nBae \\textit{et al.} firstly combine WT with deep learning based SR model, take sub-bands of interpolated LR wavelet as input and predict residuals of corresponding HR sub-bands.\nWT and inverse WT are applied for decomposing the LR input and reconstructing the HR output, 
respectively.\nSimilarly, the DWSR and Wavelet-SRNet also perform SR in the wavelet domain but with more complicated structures.\nIn contrast to the above works processing each sub-band independently, the MWCNN adopts multi-level WT and takes the concatenated sub-bands as the input to a single CNN for better capturing the dependence between them.\nDue to the efficient representation by wavelet transformation, the models using this strategy often greatly reduce the model size and computational cost while maintaining competitive performance .", "id": "fda2defb-1f41-4ab9-ad3c-c031428143aa", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "40ccdbeb-49e2-41b8-849a-4470b011cae3", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Network Design" ], [ "subsubsection", "Wavelet Transformation" ] ], "subsections": [], "title": "Wavelet Transformation" }, { "cite_extract_rate": 0.5, "cites": [ 494 ], "content": "\label{sec_desubpixel}\nIn order to speed up inference, Vu \textit{et al.} propose to perform the time-consuming feature extraction in a lower-dimensional space, and propose desubpixel, an inverse of the shuffle operation of the sub-pixel layer (Sec. \ref{sec_learning_based_upsampling}). \nSpecifically, the desubpixel operation splits the images spatially, stacks them as extra channels and thus avoids loss of information.\nIn this way, they downsample input images by desubpixel at the beginning of the model, learn representations in a lower-dimensional space, and upsample to the target size at the end. 
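The desubpixel operation and its inverse, the sub-pixel shuffle, can be sketched directly with numpy reshapes. The channel ordering below is one illustrative convention; round-tripping recovers the input exactly, which is why the operation loses no information:

```python
import numpy as np

def desubpixel(x, s):
    """Space-to-depth: (h, w, c) -> (h/s, w/s, c*s*s)."""
    h, w, c = x.shape
    x = x.reshape(h // s, s, w // s, s, c)
    return x.transpose(0, 2, 1, 3, 4).reshape(h // s, w // s, c * s * s)

def subpixel(x, s):
    """Depth-to-space (sub-pixel shuffle): inverse of desubpixel above."""
    h, w, c = x.shape
    x = x.reshape(h, w, s, s, c // (s * s))
    return x.transpose(0, 2, 1, 3, 4).reshape(h * s, w * s, c // (s * s))

rng = np.random.default_rng(0)
img = rng.random((16, 16, 3))
down = desubpixel(img, 2)          # (8, 8, 12): smaller spatially, more channels
assert down.shape == (8, 8, 12)
assert np.allclose(subpixel(down, 2), img)   # the round trip is exact
```

Since the spatial resolution shrinks by $s^2$ while the channel count grows by the same factor, convolutions applied after desubpixel touch far fewer spatial positions, which is the source of the speed-up.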
\nThe proposed model achieves the best scores in the PIRM Challenge on Smartphones with very high-speed inference and good performance.", "id": "c306a6a1-8f8c-4859-987d-2a95118d2c98", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "40ccdbeb-49e2-41b8-849a-4470b011cae3", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Network Design" ], [ "subsubsection", "Desubpixel" ] ], "subsections": [], "title": "Desubpixel" }, { "cite_extract_rate": 1, "cites": [ 510 ], "content": "\\label{sec_xunit}\nIn order to combine spatial feature processing and nonlinear activations to learn complex features more efficiently, Kligvasser \\textit{et al.} propose xUnit for learning a spatial activation function.\nSpecifically, the ReLU is regarded as determining a weight map to perform element-wise multiplication with the input, while the xUnit directly learn the weight map through convolution and Gaussian gating.\nAlthough the xUnit is more computationally demanding, due to its dramatic effect on the performance, it allows greatly reducing the model size while matching the performance with ReLU.\nIn this way, the authors reduce the model size by nearly 50\\% without any performance degradation.", "id": "a87c4558-b49d-41c3-b3f4-d81e2530643c", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "40ccdbeb-49e2-41b8-849a-4470b011cae3", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Network Design" ], [ "subsubsection", "xUnit" ] ], "subsections": [], "title": "xUnit" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec_learning_strategies}", "id": "d654c603-6009-4a1e-89b5-b983ee09835f", "level": "subsection", "origin_cites_number": 0, "parent_id": "7a2c21dd-b0fb-4435-8e94-827357705759", "prefix_titles": [ [ "title", "Deep Learning for Image 
Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Learning Strategies" ] ], "subsections": [ "5f7b8408-097b-4ff9-9bbe-75059eeb4009", "c7f23b9b-1823-4ada-ab4a-2eeb93847ad0", "47efaa78-c242-4ecd-8e17-3885d537c601", "f7b986d5-4b1c-4a86-9dca-a3d11ae4ee40" ], "title": "Learning Strategies" }, { "cite_extract_rate": 0.628571428571428, "cites": [ 8317, 516, 513, 515, 512, 121, 514, 480, 7258, 483, 97, 481, 482, 487, 7030, 63, 78, 7254, 511, 518, 7022, 517 ], "content": "\\label{sec_loss}\nIn the super-resolution field, loss functions are used to measure reconstruction error and guide the model optimization.\nIn early times, researchers usually employ the pixel-wise L2 loss, but later discover that it cannot measure the reconstruction quality very accurately.\nTherefore, a variety of loss functions (e.g., content loss , adversarial loss ) are adopted for better measuring the reconstruction error and producing more realistic and higher-quality results.\nNowadays these loss functions have been playing an important role.\nIn this section, we'll take a closer look at the loss functions used widely.\nThe notations in this section follow Sec. 
\ref{sec_problem_definitions}, except that we ignore the subscript $y$ of the target HR image $\hat{I_y}$ and generated HR image $I_y$ for brevity.\n\textbf{Pixel Loss.}\nPixel loss measures the pixel-wise difference between two images and mainly includes L1 loss (i.e., mean absolute error) and L2 loss (i.e., mean square error):\n\begin{align}\n \mathcal{L}_{\text{pixel\_l1}} (\hat{I}, I) &= \frac{1}{hwc} \sum_{i,j,k} |\hat{I}_{i,j,k} - I_{i,j,k}| , \\\n \mathcal{L}_{\text{pixel\_l2}} (\hat{I}, I) &= \frac{1}{hwc} \sum_{i,j,k} (\hat{I}_{i,j,k} - I_{i,j,k})^2 ,\n\end{align}\nwhere $h$, $w$ and $c$ are the height, width and number of channels of the evaluated images, respectively.\nIn addition, there is a variant of the pixel L1 loss, namely Charbonnier loss , given by:\n\begin{equation}\n \mathcal{L}_{\text{pixel\_Cha}} (\hat{I}, I) = \frac{1}{hwc} \sum_{i,j,k} \sqrt{(\hat{I}_{i,j,k} - I_{i,j,k})^2 + \epsilon^2} ,\n\end{equation}\nwhere $\epsilon$ is a constant (e.g., $10^{-3}$) for numerical stability.\nThe pixel loss constrains the generated HR image $\hat{I}$ to be close enough to the ground truth $I$ on the pixel values.\nCompared with the L1 loss, the L2 loss penalizes larger errors more heavily but is more tolerant to small errors, and thus often produces overly smooth results.\nIn practice, the L1 loss shows improved performance and convergence over L2 loss .\nSince the definition of PSNR (Sec. 
\ref{sec_iqa_psnr}) is highly correlated with pixel-wise difference and minimizing the pixel loss directly maximizes PSNR, the pixel loss gradually becomes the most widely used loss function.\nHowever, since the pixel loss does not actually take image quality (e.g., perceptual quality , textures ) into account, the results often lack high-frequency details and are perceptually unsatisfying, with over-smoothed textures .\n\textbf{Content Loss.}\nIn order to evaluate perceptual quality of images, the content loss is introduced into SR .\nSpecifically, it measures the semantic differences between images using a pre-trained image classification network.\nDenoting this network as $\phi$ and the extracted high-level representations on the $l$-th layer as $\phi^{(l)}(I)$, the content loss is defined as the Euclidean distance between high-level representations of two images, as follows:\n\begin{equation}\n \mathcal{L}_{\text{content}} (\hat{I}, I ; \phi, l) = \frac{1}{h_l w_l c_l} \sqrt{\sum_{i,j,k} (\phi^{(l)}_{i,j,k}(\hat{I}) - \phi^{(l)}_{i,j,k}(I))^2} ,\n\end{equation}\nwhere $h_l$, $w_l$ and $c_l$ are the height, width and number of channels of the representations on layer $l$, respectively.\nEssentially the content loss transfers the learned knowledge of hierarchical image features from the classification network $\phi$ to the SR network.\nIn contrast to the pixel loss, the content loss encourages the output image $\hat{I}$ to be perceptually similar to the target image $I$ instead of forcing them to match pixels exactly.\nThus it produces visually more pleasing results and is also widely used in this field , where the VGG and ResNet are the most commonly used pre-trained CNNs.\n\textbf{Texture Loss.}\nSince the reconstructed image should have the same style (e.g., colors, textures, contrast) as the target image, and motivated by the style representation by Gatys \textit{et al.} , the texture loss (a.k.a. style reconstruction loss) is introduced 
into SR.\nFollowing , the image texture is regarded as the correlations between different feature channels and defined as the Gram matrix $G^{(l)} \\in \\mathbb{R}^{c_l \\times c_l}$, where $G^{(l)}_{ij}$ is the inner product between the vectorized feature maps $i$ and $j$ on layer $l$:\n\\begin{equation}\n G^{(l)}_{ij}(I) = \\operatorname{vec}(\\phi_i^{(l)}(I)) \\cdot \\operatorname{vec}(\\phi_j^{(l)}(I)) ,\n\\end{equation}\nwhere $\\operatorname{vec}(\\cdot)$ denotes the vectorization operation, and $\\phi_i^{(l)}(I)$ denotes the $i$-th channel of the feature maps on layer $l$ of image $I$.\nThen the texture loss is given by:\n\\begin{equation}\n \\mathcal{L}_{\\text{texture}} (\\hat{I}, I ; \\phi, l) = \\frac{1}{c_l^2} \\sqrt{\\sum_{i,j} (G^{(l)}_{i,j}(\\hat{I}) - G^{(l)}_{i,j}(I))^2} .\n\\end{equation}\nBy employing texture loss, the EnhanceNet proposed by Sajjadi \\textit{et al.} creates much more realistic textures and produces visually more satisfactory results.\nDespite this, determining the patch size to match textures is still empirical.\nToo small patches lead to artifacts in textured regions, while too large patches lead to artifacts throughout the entire image because texture statistics are averaged over regions of varying textures.\n\\textbf{Adversarial Loss.}\nIn recent years, due to the powerful learning ability, the GANs receive more and more attention and are introduced to various vision tasks.\nTo be concrete, the GAN consists of a generator performing generation (e.g., text generation, image transformation), and a discriminator which takes the generated results and instances sampled from the target distribution as input and discriminates whether each input comes from the target distribution.\nDuring training, two steps are alternately performed:\n(a) fix the generator and train the discriminator to better discriminate,\n(b) fix the discriminator and train the generator to fool the discriminator.\nThrough adequate iterative adversarial training, 
the resulting generator can produce outputs consistent with the distribution of real data, while the discriminator cannot distinguish between the generated data and real data.\nIn terms of super-resolution, it is straightforward to adopt adversarial learning, in which case we only need to treat the SR model as a generator, and define an extra discriminator to judge whether the input image is generated or not.\nTherefore, Ledig \textit{et al.} first propose SRGAN using adversarial loss based on cross entropy, as follows:\n\begin{align}\n \mathcal{L}_{\text{gan\_ce\_g}} (\hat{I} ; D) &= - \log D(\hat{I}) , \\\n \mathcal{L}_{\text{gan\_ce\_d}} (\hat{I}, I_s ; D) &= - \log D(I_s) - \log (1 - D(\hat{I})) ,\n\end{align}\nwhere $\mathcal{L}_{\text{gan\_ce\_g}}$ and $\mathcal{L}_{\text{gan\_ce\_d}}$ denote the adversarial loss of the generator (i.e., the SR model) and the discriminator $D$ (i.e., a binary classifier), respectively, and $I_s$ represents images randomly sampled from the ground truths.\nThe EnhanceNet also adopts a similar adversarial loss.\nMoreover, Wang \textit{et al.} and Yuan \textit{et al.} use adversarial loss based on least square error for a more stable training process and higher-quality results , given by:\n\begin{align}\n \mathcal{L}_{\text{gan\_ls\_g}} (\hat{I} ; D) &= (D(\hat{I}) - 1)^2 , \\\n \mathcal{L}_{\text{gan\_ls\_d}} (\hat{I}, I_s ; D) &= (D(\hat{I}))^2 + (D(I_s) - 1)^2 .\n\end{align}\nIn contrast to the above works focusing on the specific forms of adversarial loss, Park \textit{et al.} argue that the pixel-level discriminator causes the generation of meaningless high-frequency noise, and attach another feature-level discriminator to operate on high-level representations extracted by a pre-trained CNN which captures more meaningful attributes of real HR images. 
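As a minimal sketch, the least-square adversarial losses above translate directly into code. The toy discriminator scores below are hypothetical stand-ins for the output of a real discriminator network, and the mean over a batch generalizes the single-image formulas:

```python
import numpy as np

def lsgan_generator_loss(d_fake):
    """L_gan_ls_g: push D's scores on generated images towards 1."""
    return np.mean((d_fake - 1.0) ** 2)

def lsgan_discriminator_loss(d_fake, d_real):
    """L_gan_ls_d: push real scores towards 1 and fake scores towards 0."""
    return np.mean(d_fake ** 2) + np.mean((d_real - 1.0) ** 2)

# Toy scores: a confident discriminator vs. one that is being fooled.
d_real = np.array([0.9, 0.8])
confident = lsgan_discriminator_loss(np.array([0.1, 0.2]), d_real)
fooled = lsgan_discriminator_loss(np.array([0.9, 0.8]), d_real)
assert confident < fooled                 # D's loss rises as the generator improves
assert lsgan_generator_loss(np.array([1.0, 1.0])) == 0.0
```

In training, the two losses are minimized alternately: `lsgan_discriminator_loss` with the generator frozen, then `lsgan_generator_loss` with the discriminator frozen.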
\nXu \\textit{et al.} incorporate a multi-class GAN consisting of a generator and multiple class-specific discriminators.\nAnd the ESRGAN employs relativistic GAN to predict the probability that real images are relatively more realistic than fake ones, instead of the probability that input images are real or fake, and thus guide recovering more detailed textures.\nExtensive MOS tests (Sec. \\ref{sec_iqa_mos}) show that even though the SR models trained with adversarial loss and content loss achieve lower PSNR compared to those trained with pixel loss, they bring significant gains in perceptual quality .\nAs a matter of fact, the discriminator extracts some difficult-to-learn latent patterns of real HR images, and pushes the generated HR images to conform, thus helps to generate more realistic images.\nHowever, currently the training process of GAN is still difficult and unstable.\nAlthough there have been some studies on how to stabilize the GAN training , how to ensure that the GANs integrated into SR models are trained correctly and play an active role remains a problem.\n\\textbf{Cycle Consistency Loss.}\nMotivated by the CycleGAN proposed by Zhu \\textit{et al.} , Yuan \\textit{et al.} present a cycle-in-cycle approach for super-resolution. 
\nConcretely speaking, they not only super-resolve the LR image $I$ to the HR image $\\hat{I}$ but also downsample $\\hat{I}$ back to another LR image $I'$ through another CNN.\nThe regenerated $I'$ is required to be identical to the input $I$, thus the cycle consistency loss is introduced for constraining their pixel-level consistency:\n\\begin{equation}\n \\mathcal{L}_{\\text{cycle}} (I', I) = \\frac{1}{hwc} \\sqrt{\\sum_{i,j,k} (I'_{i,j,k} - I_{i,j,k})^2} .\n\\end{equation}\n\\textbf{Total Variation Loss.}\nIn order to suppress noise in generated images, the total variation (TV) loss is introduced into SR by Aly \\textit{et al.} .\nIt is defined as the sum of the absolute differences between neighboring pixels and measures how much noise is in the images, as follows:\n\\begin{equation}\n \\mathcal{L}_{\\text{TV}} (\\hat{I}) = \\frac{1}{hwc} \\sum_{i,j,k} \\sqrt{\n (\\hat{I}_{i,j+1,k} - \\hat{I}_{i,j,k})^2 +\n (\\hat{I}_{i+1,j,k} - \\hat{I}_{i,j,k})^2} .\n\\end{equation}\nLai \\textit{et al.} and Yuan \\textit{et al.} also adopt the TV loss for imposing spatial smoothness.\n\\textbf{Prior-Based Loss.}\nIn addition to the above loss functions, external prior knowledge is also introduced to constrain the generation.\nSpecifically, Bulat \\textit{et al.} focus on face image SR and introduce a face alignment network (FAN) to constrain the consistency of facial landmarks. 
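Both the cycle consistency loss and the total variation loss above can be written down directly from their formulas; the boundary handling of the neighboring-pixel differences below is one illustrative convention:

```python
import numpy as np

def cycle_consistency_loss(i_regen, i_orig):
    """L_cycle: pixel-level consistency between the regenerated I' and the input I."""
    h, w, c = i_orig.shape
    return np.sqrt(np.sum((i_regen - i_orig) ** 2)) / (h * w * c)

def total_variation_loss(img):
    """L_TV: penalizes differences between vertically and horizontally adjacent pixels."""
    h, w, c = img.shape
    dh = img[1:, :-1, :] - img[:-1, :-1, :]   # vertical neighbor differences
    dw = img[:-1, 1:, :] - img[:-1, :-1, :]   # horizontal neighbor differences
    return np.sum(np.sqrt(dh ** 2 + dw ** 2)) / (h * w * c)

flat = np.ones((8, 8, 3))                     # constant image: no variation
noisy = flat + 0.1 * np.random.default_rng(0).standard_normal(flat.shape)
assert total_variation_loss(flat) == 0.0      # smooth images score zero
assert total_variation_loss(noisy) > 0.0      # noise raises the TV loss
assert cycle_consistency_loss(flat, flat) == 0.0
```

This illustrates why minimizing the TV loss imposes spatial smoothness: any noise added to a smooth image strictly increases the penalty.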
\nThe FAN is pre-trained and integrated for providing face alignment priors, and then trained jointly with the SR network.\nIn this way, the proposed Super-FAN improves performance both on LR face alignment and face image SR.\nAs a matter of fact, the content loss and the texture loss, both of which introduce a classification network, essentially provide prior knowledge of hierarchical image features for SR.\nBy introducing more prior knowledge, the SR performance can be further improved.\nIn this section, we introduce various loss functions for SR.\nIn practice, researchers often combine multiple loss functions by weighted average for constraining different aspects of the generation process, especially for distortion-perception tradeoff .\nHowever, the weights of different loss functions require a lot of empirical exploration, and how to combine them reasonably and effectively remains an open problem.", "id": "5f7b8408-097b-4ff9-9bbe-75059eeb4009", "level": "subsubsection", "origin_cites_number": 35, "parent_id": "d654c603-6009-4a1e-89b5-b983ee09835f", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Learning Strategies" ], [ "subsubsection", "Loss Functions" ] ], "subsections": [], "title": "Loss Functions" }, { "cite_extract_rate": 0.7272727272727271, "cites": [ 7254, 7031, 519, 71, 482, 484, 7035, 7258 ], "content": "In order to accelerate and stabilize training of deep CNNs, Ioffe \textit{et al.} propose batch normalization (BN) to reduce internal covariate shift of networks.\nSpecifically, they perform normalization for each mini-batch and train two extra transformation parameters for each channel to preserve the representation ability.\nSince the BN calibrates the intermediate feature distribution and mitigates vanishing gradients, it allows using higher learning rates and being less careful about initialization.\nThus this technique is widely used by SR models .\nHowever, 
Lim \\textit{et al.} argue that the BN loses the scale information of each image and gets rid of range flexibility from networks.\nSo they remove BN and use the saved memory cost (up to 40\\%) to develop a much larger model, and thus increase the performance substantially.\nSome other models also adopt this experience and achieve performance improvements.", "id": "c7f23b9b-1823-4ada-ab4a-2eeb93847ad0", "level": "subsubsection", "origin_cites_number": 11, "parent_id": "d654c603-6009-4a1e-89b5-b983ee09835f", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Learning Strategies" ], [ "subsubsection", "Batch Normalization" ] ], "subsections": [], "title": "Batch Normalization" }, { "cite_extract_rate": 0.42857142857142805, "cites": [ 7254, 62, 497 ], "content": "\\label{sec_curriculum_learning}\nCurriculum learning refers to starting from an easier task and gradually increasing the difficulty.\nSince super-resolution is an ill-posed problem and always suffers adverse conditions such as large scaling factors, noise and blurring, the curriculum training is incorporated for reducing learning difficulty.\nIn order to reduce the difficulty of SR with large scaling factors, Wang \\textit{et al.} , Bei \\textit{et al.} and Ahn \\textit{et al.} propose ProSR, ADRSR and progressive CARN, respectively, which are progressive not only on architectures (Sec. 
\ref{sec_framework_progressive}) but also on training procedure.\nThe training starts with the $2\times$ upsampling, and after finishing training, the portions with $4\times$ or larger scaling factors are gradually mounted and blended with the previous portions.\nSpecifically, the ProSR blends by linearly combining the output of this level and the upsampled output of previous levels following , the ADRSR concatenates them and attaches another convolutional layer, while the progressive CARN replaces the previous reconstruction block with the one that produces the image in double resolution.\nIn addition, Park \textit{et al.} divide the $8\times$ SR problem into three sub-problems (i.e., $1\times$ to $2\times$, $2\times$ to $4\times$, $4\times$ to $8\times$) and train independent networks for each problem.\nThen two of them are concatenated and fine-tuned, and the result is then combined with the third one in the same way.\nBesides, they also decompose the $4\times$ SR under difficult conditions into $1\times$ to $2\times$, $2\times$ to $4\times$ and denoising or deblurring sub-problems.\nIn contrast, the SRFBN uses this strategy for SR under adverse conditions, i.e., starting from easy degradation and gradually increasing degradation complexity.\nCompared to the common training procedure, curriculum learning greatly reduces the training difficulty and shortens the total training time, especially for large factors.", "id": "47efaa78-c242-4ecd-8e17-3885d537c601", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "d654c603-6009-4a1e-89b5-b983ee09835f", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Learning Strategies" ], [ "subsubsection", "Curriculum Learning" ] ], "subsections": [], "title": "Curriculum Learning" }, { "cite_extract_rate": 1, "cites": [ 489, 7031, 495, 7030, 496 ], "content": "\label{sec_multi_supervision}\nMulti-supervision refers to adding multiple 
supervision signals within the model for enhancing the gradient propagation and avoiding vanishing and exploding gradients.\nIn order to prevent the gradient problems introduced by recursive learning (Sec. \\ref{sec_recursive_learning}), the DRCN incorporates multi-supervision with recursive units.\nSpecifically, they feed each output of recursive units into a reconstruction module to generate an HR image, and build the final prediction by incorporating all the intermediate reconstructions. \nSimilar strategies are also taken by MemNet and DSRN , which are also based on recursive learning.\nBesides, since the LapSRN under the progressive upsampling framework (Sec. \\ref{sec_framework_progressive}) generates intermediate results of different scales during propagation, it is straightforward to adopt multi-supervision strategy.\nSpecifically, the intermediate results are forced to be the same as the intermediate images downsampled from the ground truth HR images.\nIn practice, this multi-supervision technique is often implemented by adding some terms in the loss function, and in this way, the supervision signals are back-propagated more effectively, and thus reduce the training difficulty and enhance the model training.", "id": "f7b986d5-4b1c-4a86-9dca-a3d11ae4ee40", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "d654c603-6009-4a1e-89b5-b983ee09835f", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Learning Strategies" ], [ "subsubsection", "Multi-supervision" ] ], "subsections": [], "title": "Multi-supervision" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec_other_improvements}\nIn addition to the network design and learning strategies, there are other techniques further improving SR models.", "id": "130e5bca-8421-423a-bba5-191d29e16c10", "level": "subsection", "origin_cites_number": 0, "parent_id": 
"7a2c21dd-b0fb-4435-8e94-827357705759", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Other Improvements" ] ], "subsections": [ "f78a8e94-a3ba-457e-b64b-540649b45ed6", "11702c98-62eb-4c24-a3b8-74b1e5f297a8", "58b1b25b-13ef-43e1-b293-798744d77254", "728e90bf-43c6-4b78-99b6-c265092e2016", "d5df91d7-a2d3-4c1f-aae4-1079cb0e0b0f" ], "title": "Other Improvements" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 8358 ], "content": "Context-wise network fusion (CNF) refers to a stacking technique fusing predictions from multiple SR networks (i.e., a special case of multi-path learning in Sec. \\ref{sec_multi_path_learning}).\nTo be concrete, they train individual SR models with different architectures separately, feed the prediction of each model into individual convolutional layers, and finally sum the outputs up to be the final prediction result.\nWithin this CNF framework, the final model constructed by three lightweight SRCNNs achieves comparable performance with state-of-the-art models with acceptable efficiency .", "id": "f78a8e94-a3ba-457e-b64b-540649b45ed6", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "130e5bca-8421-423a-bba5-191d29e16c10", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Other Improvements" ], [ "subsubsection", "Context-wise Network Fusion" ] ], "subsections": [], "title": "Context-wise Network Fusion" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 500, 7032, 482, 7030, 496 ], "content": "Data augmentation is one of the most widely used techniques for boosting performance with deep learning.\nFor image super-resolution, some useful augmentation options include cropping, flipping, scaling, rotation, color jittering, etc. 
.\nIn addition, Bei \textit{et al.} also randomly shuffle RGB channels, which not only augments data, but also alleviates the color bias caused by datasets with color imbalance.", "id": "11702c98-62eb-4c24-a3b8-74b1e5f297a8", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "130e5bca-8421-423a-bba5-191d29e16c10", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Other Improvements" ], [ "subsubsection", "Data Augmentation" ] ], "subsections": [], "title": "Data Augmentation" }, { "cite_extract_rate": 0.5, "cites": [ 520, 518, 487 ], "content": "\label{sec_mtl}\nMulti-task learning refers to improving generalization ability by leveraging domain-specific information contained in training signals of related tasks, such as object detection and semantic segmentation , head pose estimation and facial attribute inference .\nIn the SR field, Wang \textit{et al.} incorporate a semantic segmentation network for providing semantic knowledge and generating semantic-specific details. \nSpecifically, they propose spatial feature transformation to take semantic maps as input and predict spatial-wise parameters of affine transformation performed on the intermediate feature maps.\nThe proposed SFT-GAN thus generates more realistic and visually pleasing textures on images with rich semantic regions. 
\nBesides, considering that directly super-resolving noisy images may cause noise amplification, the DNSR proposes to train a denoising network and an SR network separately, then concatenate them and fine-tune jointly.\nSimilarly, the cycle-in-cycle GAN (CinCGAN) combines a cycle-in-cycle denoising framework and a cycle-in-cycle SR model to jointly perform noise reduction and super-resolution.\nSince different tasks tend to focus on different aspects of the data, combining related tasks with SR models usually improves the SR performance by providing extra information and knowledge.", "id": "58b1b25b-13ef-43e1-b293-798744d77254", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "130e5bca-8421-423a-bba5-191d29e16c10", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Other Improvements" ], [ "subsubsection", "Multi-task Learning" ] ], "subsections": [], "title": "Multi-task Learning" }, { "cite_extract_rate": 1, "cites": [ 7258, 521 ], "content": "PSNR-based models produce images closer to ground truths but introduce blurring problems, while GAN-based models bring better perceptual quality but introduce unpleasant artifacts (e.g., meaningless noise making images more ``realistic'').\nIn order to better balance the distortion and perception, Wang \textit{et al.} propose a network interpolation strategy.\nSpecifically, they train a PSNR-based model and train a GAN-based model by fine-tuning, then interpolate all the corresponding parameters of both networks to derive intermediate models.\nBy tuning the interpolation weights without retraining networks, they produce meaningful results with far fewer artifacts.", "id": "728e90bf-43c6-4b78-99b6-c265092e2016", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "130e5bca-8421-423a-bba5-191d29e16c10", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ 
"section", "Supervised Super-resolution" ], [ "subsection", "Other Improvements" ], [ "subsubsection", "Network Interpolation" ] ], "subsections": [], "title": "Network Interpolation" }, { "cite_extract_rate": 0.6923076923076921, "cites": [ 7031, 7256, 497, 7257, 7255, 495, 7258, 483, 522, 7034, 493, 7032, 481, 482, 7030, 496, 501, 7254 ], "content": "\\label{sec_self_ensemble}\nSelf-ensemble, a.k.a. enhanced prediction , is an inference technique commonly used by SR models.\nSpecifically, rotations with different angles (0$^\\circ$, 90$^\\circ$, 180$^\\circ$, 270$^\\circ$) and horizontal flipping are applied on the LR images to get a set of 8 images.\nThen these images are fed into the SR model and the corresponding inverse transformation is applied to the reconstructed HR images to get the outputs.\nThe final prediction result is conducted by the mean or the median of these outputs.\nIn this way, these models further improve performance.\n\\begin{table*}[t]\n \\renewcommand{\\arraystretch}{1.3}\n \\caption{\n Super-resolution methodology employed by some representative models.\n The ``Fw.'', ``Up.'', ``Rec.'', ``Res.'', ``Dense.'', ``Att.'' represent SR frameworks, upsampling methods, recursive learning, residual learning, dense connections, attention mechanism, respectively.\n }\n \\label{tab_sota_sr_models}\n \\rowcolors{2}{gray!0}{gray!8}\n \\centering\n {\n \\setlength{\\fboxrule}{1pt}\n \\begin{tabular}{|l|l|c|c|c|c|c|c|c|c|l|}\n \\hline\n Method & Publication & Fw. & Up. & Rec. & Res. & Dense& Att. & $\\mathcal{L}_{\\text{L1}}$ & $\\mathcal{L}_{\\text{L2}}$ & Keywords \\\\\n \\hline\n SRCNN & 2014, ECCV & Pre. & Bicubic & ~ & ~ & ~ & ~ & ~ & \\yes & ~ \\\\\n DRCN & 2016, CVPR & Pre. & Bicubic & \\yes & \\yes & ~ & ~ & ~ & \\yes & Recursive layers \\\\\n FSRCNN & 2016, ECCV & Post. & Deconv & ~ & ~ & ~ & ~ & ~ & \\yes & Lightweight design \\\\\n ESPCN & 2017, CVPR & Pre. & Sub-Pixel & ~ & ~ & ~ & ~ & ~ & \\yes & Sub-pixel \\\\\n LapSRN & 2017, CVPR & Pro. 
& Bicubic & ~ & \\yes & ~ & ~ & \\yes & ~ & $\\mathcal{L}_{\\text{pixel\\_Cha}}$ \\\\\n DRRN & 2017, CVPR & Pre. & Bicubic & \\yes & \\yes & ~ & ~ & ~ & \\yes & Recursive blocks \\\\\n SRResNet & 2017, CVPR & Post. & Sub-Pixel & ~ & \\yes & ~ & ~ & ~ & \\yes & $\\mathcal{L}_{\\text{Con.}}$, $\\mathcal{L}_{\\text{TV}}$ \\\\\n SRGAN & 2017, CVPR & Post. & Sub-Pixel & ~ & \\yes & ~ & ~ & ~ & ~ & $\\mathcal{L}_{\\text{Con.}}$, $\\mathcal{L}_{\\text{GAN}}$ \\\\\n EDSR & 2017, CVPRW & Post. & Sub-Pixel & ~ & \\yes & ~ & ~ & \\yes & ~ & Compact and large-size design \\\\\n EnhanceNet & 2017, ICCV & Pre. & Bicubic & ~ & \\yes & ~ & ~ & ~ & ~ & $\\mathcal{L}_{\\text{Con.}}$, $\\mathcal{L}_{\\text{GAN}}$, $\\mathcal{L}_{\\text{texture}}$ \\\\\n MemNet & 2017, ICCV & Pre. & Bicubic & \\yes & \\yes & \\yes & ~ & ~ & \\yes & Memory block \\\\\n SRDenseNet & 2017, ICCV & Post. & Deconv & ~ & \\yes & \\yes & ~ & ~ & \\yes & Dense connections \\\\\n DBPN & 2018, CVPR & Iter. & Deconv & ~ & \\yes & \\yes & ~ & ~ & \\yes & Back-projection \\\\\n DSRN & 2018, CVPR & Pre. & Deconv & \\yes & \\yes & ~ & ~ & ~ & \\yes & Dual state \\\\\n RDN & 2018, CVPR & Post. & Sub-Pixel & ~ & \\yes & \\yes & ~ & \\yes & ~ & Residual dense block \\\\\n CARN & 2018, ECCV & Post. & Sub-Pixel & \\yes & \\yes & \\yes & ~ & \\yes & ~ & Cascading \\\\\n MSRN & 2018, ECCV & Post. & Sub-Pixel & ~ & \\yes & ~ & ~ & \\yes & ~ & Multi-path \\\\\n RCAN & 2018, ECCV & Post. & Sub-Pixel & ~ & \\yes & ~ & \\yes & \\yes & ~ & Channel attention \\\\\n ESRGAN & 2018, ECCVW & Post. & Sub-Pixel & ~ & \\yes & \\yes & & \\yes & ~ & $\\mathcal{L}_{\\text{Con.}}$, $\\mathcal{L}_{\\text{GAN}}$ \\\\\n RNAN & 2019, ICLR & Post. & Sub-Pixel & ~ & \\yes & ~ & \\yes & \\yes & ~ & Non-local attention \\\\\n Meta-RDN & 2019, CVPR & Post. & Meta Upscale & ~ & \\yes & \\yes & ~ & \\yes & ~ & Magnification-arbitrary \\\\\n SAN & 2019, CVPR & Post. 
& Sub-Pixel & ~ & \\yes & ~ & \\yes & \\yes & ~ & Second-order attention \\\\\n SRFBN & 2019, CVPR & Post. & Deconv & \\yes & \\yes & \\yes & ~ & \\yes & ~ & Feedback mechanism \\\\\n \\hline\n \\end{tabular}\n }\n\\end{table*}\n\\begin{figure*}\n \\centering\n {\n \\setlength{\\fboxrule}{1pt}\n \\includegraphics[width=0.95\\linewidth]{fig_sr_benchmark}\n }\n \\caption{\n Super-resolution benchmarking.\n The $x$-axis and the $y$-axis denote the Multi-Adds and PSNR, respectively, and the circle size represents the number of parameters.\n }\n \\label{fig_sr_benchmarking}\n\\end{figure*}", "id": "d5df91d7-a2d3-4c1f-aae4-1079cb0e0b0f", "level": "subsubsection", "origin_cites_number": 26, "parent_id": "130e5bca-8421-423a-bba5-191d29e16c10", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "Other Improvements" ], [ "subsubsection", "Self-Ensemble" ] ], "subsections": [], "title": "Self-Ensemble" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 493 ], "content": "\\label{sec_sota_sr_models}\nIn recent years, image super-resolution models based on deep learning have received more and more attention and achieved state-of-the-art performance.\nIn previous sections, we decompose SR models into specific components, including model frameworks (Sec. \\ref{sec_sr_frameworks}), upsampling methods (Sec. \\ref{sec_upsampling_methods}), network design (Sec. \\ref{sec_network_design}) and learning strategies (Sec. \\ref{sec_learning_strategies}), analyze these components hierarchically and identify their advantages and limitations.\nAs a matter of fact, most of the state-of-the-art SR models today can basically be attributed to a combination of multiple strategies we summarize above.\nFor example, the biggest contribution of the RCAN comes from the channel attention mechanism (Sec. 
\\ref{sec_attention_mechanism}), and it also employs other strategies like sub-pixel upsampling (Sec. \\ref{sec_learning_based_upsampling}), residual learning (Sec. \\ref{sec_residual_learning}), pixel L1 loss (Sec. \\ref{sec_loss}), and self-ensemble (Sec. \\ref{sec_self_ensemble}).\nIn similar manners, we summarize some representative models and their key strategies, as Table \\ref{tab_sota_sr_models} shows.\nIn addition to SR accuracy, the efficiency is another very important aspect and different strategies have more or less impact on efficiency.\nSo in the previous sections, we not only analyze the accuracy of the presented strategies, but also indicate the concrete impacts on efficiency for the ones with a greater impact on efficiency, such as the post-upsampling (Sec. \\ref{sec_framework_post}), recursive learning (Sec. \\ref{sec_recursive_learning}), dense connections (Sec. \\ref{sec_dense_connections}), xUnit (Sec. \\ref{sec_xunit}).\nAnd we also benchmark some representative SR models on the SR accuracy (i.e., PSNR), model size (i.e., number of parameters) and computation cost (i.e., number of Multi-Adds), as shown in Fig. 
\ref{fig_sr_benchmarking}.\nThe accuracy is measured by the mean of the PSNR on 4 benchmark datasets (i.e., Set5 , Set14 , B100 and Urban100 ).\nAnd the model size and computational cost are calculated with PyTorch-OpCounter , where the output resolution is 720p (i.e., $1280 \times 720$).\nAll statistics are derived from the original papers or calculated on official models, with a scaling factor of 2.\nFor better viewing and comparison, we also provide an interactive online version\footnote{https://github.com/ptkin/Awesome-Super-Resolution}.", "id": "fc2fb2c1-1bc8-4bc6-b0f3-577d02a87eea", "level": "subsection", "origin_cites_number": 6, "parent_id": "7a2c21dd-b0fb-4435-8e94-827357705759", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Supervised Super-resolution" ], [ "subsection", "State-of-the-art Super-resolution Models" ] ], "subsections": [], "title": "State-of-the-art Super-resolution Models" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec_unsupervised_sr}\nExisting super-resolution works mostly focus on supervised learning, i.e., learning with matched LR-HR image pairs.\nHowever, since it is difficult to collect images of the same scene but with different resolutions, the LR images in SR datasets are often obtained by performing predefined degradation on HR images.\nThus the trained SR models actually learn a reverse process of the predefined degradation.\nIn order to learn the real-world LR-HR mapping without introducing manual degradation priors, researchers pay more and more attention to unsupervised SR, in which case only unpaired LR-HR images are provided for training, so that the resulting models are more likely to cope with the SR problems in real-world scenarios.\nNext we'll briefly introduce several existing unsupervised SR models with deep learning, and more methods are yet to be explored.", "id": "a53a0c61-637a-47c6-bcab-c20c491483f3", "level": "section", 
"origin_cites_number": 0, "parent_id": "2c20cd26-fec7-4463-b98c-798fc974562d", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Unsupervised Super-resolution" ] ], "subsections": [ "7f5b0f8a-2798-4e0b-93a3-daff2f5aa63a", "7b9b2d2b-ed23-46f2-a7a0-ddbcbd346afd", "b5ed866c-3103-4967-9bd8-995336da6184" ], "title": "Unsupervised Super-resolution" }, { "cite_extract_rate": 0, "cites": [], "content": "Considering that the internal image statistics inside a single image have provided sufficient information for SR, Shocher \textit{et al.} propose zero-shot super-resolution (ZSSR) to cope with unsupervised SR by training image-specific SR networks at test time rather than training a generic model on large external datasets.\nSpecifically, they estimate the degradation kernel from a single image using and use this kernel to build a small dataset by performing degradation with different scaling factors and augmentation on this image.\nThen a small CNN for SR is trained on this dataset and used for the final prediction.\nIn this way, the ZSSR leverages the cross-scale internal recurrence inside every image, and thus outperforms previous approaches by a large margin ($1$ dB for estimated kernels and $2$ dB for known kernels) on images under non-ideal conditions (i.e., images obtained by non-bicubic degradation and suffering from effects like blurring, noise, compression artifacts), which are closer to real-world scenes, while giving competitive results under ideal conditions (i.e., images obtained by bicubic degradation).\nHowever, since it needs to train different networks for different images during testing, the inference time is much longer than that of other methods.", "id": "7f5b0f8a-2798-4e0b-93a3-daff2f5aa63a", "level": "subsection", "origin_cites_number": 2, "parent_id": "a53a0c61-637a-47c6-bcab-c20c491483f3", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Unsupervised 
Super-resolution" ], [ "subsection", "Zero-shot Super-resolution" ] ], "subsections": [], "title": "Zero-shot Super-resolution" }, { "cite_extract_rate": 1, "cites": [ 523, 518, 7022 ], "content": "To cope with super-resolution without introducing predefined degradation, researchers attempt to learn SR models with weakly-supervised learning, i.e., using unpaired LR-HR images.\nAmong them, some researchers first learn the HR-to-LR degradation and use it to construct datasets for training the SR model, while others design cycle-in-cycle networks to learn the LR-to-HR and HR-to-LR mappings simultaneously.\nNext we'll detail these models.\n\textbf{Learned Degradation.}\nSince the predefined degradation is suboptimal, learning the degradation from unpaired LR-HR datasets is a feasible direction.\nBulat \textit{et al.} propose a two-stage process which firstly trains an HR-to-LR GAN to learn degradation using unpaired LR-HR images and then trains an LR-to-HR GAN for SR using paired LR-HR images constructed based on the first GAN.\nSpecifically, for the HR-to-LR GAN, HR images are fed into the generator to produce LR outputs, which are required to match not only the LR images obtained by downscaling the HR images (by average pooling) but also the distribution of real LR images.\nAfter finishing training, the generator is used as a degradation model to generate LR-HR image pairs.\nThen for the LR-to-HR GAN, the generator (i.e., the SR model) takes the generated LR images as input and predicts HR outputs, which are required to match not only the corresponding HR images but also the distribution of the HR images.\nBy applying this two-stage process, the proposed unsupervised model effectively increases the quality of super-resolving real-world LR images and obtains a large improvement over previous state-of-the-art works.\n\textbf{Cycle-in-cycle Super-resolution.}\nAnother approach for unsupervised super-resolution is to treat the LR space and the HR space as two domains, and 
use a cycle-in-cycle structure to learn the mappings between each other.\nIn this case, the training objectives include pushing the mapped results to match the target domain distribution and making the images recoverable through round-trip mappings.\nMotivated by CycleGAN , Yuan \\textit{et al.} propose a cycle-in-cycle SR network (CinCGAN) composed of 4 generators and 2 discriminators, making up two CycleGANs for \\textit{noisy LR} $\\rightleftharpoons$ \\textit{clean LR} and \\textit{clean LR} $\\rightleftharpoons$ \\textit{clean HR} mappings, respectively.\nSpecifically, in the first CycleGAN, the noisy LR image is fed into a generator, and the output is required to be consistent with the distribution of real clean LR images.\nThen it's fed into another generator and required to recover the original input.\nSeveral loss functions (e.g., adversarial loss, cycle consistency loss, identity loss) are employed for guaranteeing the cycle consistency, distribution consistency, and mapping validity.\nThe other CycleGAN is similarly designed, except that the mapping domains are different.\nBecause of avoiding the predefined degradation, the unsupervised CinCGAN not only achieves comparable performance to supervised methods, but also is applicable to various cases even under very harsh conditions.\nHowever, due to the ill-posed essence of SR problem and the complicated architecture of CinCGAN, some advanced strategies are needed for reducing the training difficulty and instability.", "id": "7b9b2d2b-ed23-46f2-a7a0-ddbcbd346afd", "level": "subsection", "origin_cites_number": 3, "parent_id": "a53a0c61-637a-47c6-bcab-c20c491483f3", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Unsupervised Super-resolution" ], [ "subsection", "Weakly-supervised Super-resolution" ] ], "subsections": [], "title": "Weakly-supervised Super-resolution" }, { "cite_extract_rate": 1, "cites": [ 524 ], "content": "Considering that the CNN 
structure is sufficient to capture a great deal of low-level image statistics prior for inverse problems, Ulyanov \textit{et al.} employ a randomly-initialized CNN as handcrafted prior to perform SR.\nSpecifically, they define a generator network which takes a random vector $z$ as input and tries to generate the target HR image $I_y$.\nThe goal is to train the network to find an $\hat{I_y}$ whose downsampled version is identical to the LR image $I_x$.\nSince the network is randomly initialized and never trained, the only prior is the CNN structure itself.\nAlthough the performance of this method is still worse than the supervised methods ($2$ dB), it outperforms traditional bicubic upsampling considerably ($1$ dB).\nBesides, it shows the rationality of the CNN architecture itself, and prompts us to improve SR by combining the deep learning methodology with handcrafted priors such as CNN structures or self-similarity.", "id": "b5ed866c-3103-4967-9bd8-995336da6184", "level": "subsection", "origin_cites_number": 1, "parent_id": "a53a0c61-637a-47c6-bcab-c20c491483f3", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Unsupervised Super-resolution" ], [ "subsection", "Deep Image Prior" ] ], "subsections": [], "title": "Deep Image Prior" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec_domain_specific_apps}", "id": "93f3d938-3f64-432f-856b-181ab4d5675a", "level": "section", "origin_cites_number": 0, "parent_id": "2c20cd26-fec7-4463-b98c-798fc974562d", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Domain-Specific Applications" ] ], "subsections": [ "4249a5d4-6210-4dbd-8e01-2f372b42cfdf", "6d553ba0-1664-4a99-bd40-41c57c208850", "f10fbcca-4576-41af-b398-54da4a876ac6", "ee943b58-26cc-44a8-ad44-ee0b138dc3f4", "4f792960-ffed-4436-a2b8-5beb48b58693", "cd62f88e-a3c1-433d-b251-fc32a0c25662" ], "title": "Domain-Specific Applications" }, { 
"cite_extract_rate": 0.5, "cites": [ 527, 525, 526, 7259 ], "content": "Depth maps record the depth (i.e., distance) between the viewpoint and objects in the scene, and play important roles in many tasks like pose estimation and semantic segmentation . \nHowever, due to economic and production constraints, the depth maps produced by depth sensors are often low-resolution and suffer degradation effects such as noise, quantization and missing values.\nThus super-resolution is introduced for increasing the spatial resolution of depth maps.\nNowadays one of the most popular practices for depth map SR is to use another economical RGB camera to obtain HR images of the same scenes for guiding super-resolving the LR depth maps.\nSpecifically, Song \textit{et al.} exploit the depth field statistics and local correlations between depth maps and RGB images to constrain the global statistics and local structures.\nHui \textit{et al.} utilize two CNNs to simultaneously upsample LR depth maps and downsample HR RGB images, then use RGB features as the guidance for upsampling depth maps with the same resolution.\nAnd Haefner \textit{et al.} further exploit the color information and guide SR by resorting to the shape-from-shading technique.\nIn contrast, Riegler \textit{et al.} combine CNNs with an energy minimization model in the form of a powerful variational model to recover HR depth maps without other reference images.", "id": "4249a5d4-6210-4dbd-8e01-2f372b42cfdf", "level": "subsection", "origin_cites_number": 8, "parent_id": "93f3d938-3f64-432f-856b-181ab4d5675a", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Domain-Specific Applications" ], [ "subsection", "Depth Map Super-resolution" ] ], "subsections": [], "title": "Depth Map Super-resolution" }, { "cite_extract_rate": 0.36842105263157804, "cites": [ 529, 528, 531, 506, 7033, 492, 530 ], "content": "Face image super-resolution, a.k.a. 
face hallucination (FH), can often help other face-related tasks . \nCompared to generic images, face images have more face-related structured information, so incorporating facial prior knowledge (e.g., landmarks, parsing maps, identities) into FH is a very popular and promising approach.\nOne of the most straightforward ways is to constrain the generated images to have face-related attributes identical to those of the ground truth.\nSpecifically, the CBN utilizes the facial prior by alternately optimizing FH and dense correspondence field estimation.\nThe Super-FAN and MTUN both introduce FAN to guarantee the consistency of facial landmarks by end-to-end multi-task learning.\nAnd the FSRNet uses not only facial landmark heatmaps but also face parsing maps as prior constraints.\nThe SICNN , which aims at recovering the real identity, adopts a super-identity loss function and a domain-integrated training approach to stabilize the joint training.\nBesides explicitly using facial prior, the implicit methods are also widely studied.\nThe TDN incorporates spatial transformer networks for automatic spatial transformations and thus solves the problem of unaligned faces.\nBased on TDN, the TDAE adopts a decoder-encoder-decoder framework, where the first decoder learns to upsample and denoise, the encoder projects it back to aligned and noise-free LR faces, and the last decoder generates hallucinated HR images.\nIn contrast, the LCGE employs component-specific CNNs to perform SR on five facial components, uses k-NN search on an HR facial component dataset to find corresponding patches, synthesizes finer-grained components and finally fuses them into FH results.\nSimilarly, Yang \textit{et al.} decompose deblocked face images into facial components and background, use the component landmarks to retrieve adequate HR exemplars in external datasets, perform generic SR on the background, and finally fuse them into complete HR faces.\nIn addition, researchers also improve FH from other 
perspectives.\nMotivated by the human attention shifting mechanism , the Attention-FH resorts to a recurrent policy network for sequentially discovering attended face patches and performing local enhancement, and thus fully exploits the global interdependency of face images.\nThe UR-DGN adopts a network similar to SRGAN with adversarial learning.\nAnd Xu \\textit{et al.} propose a multi-class GAN-based FH model composed of a generic generator and class-specific discriminators.\nBoth Lee \\textit{et al.} and Yu \\textit{et al.} utilize additional facial attribute information to perform FH with the specified attributes, based on the conditional GAN .", "id": "6d553ba0-1664-4a99-bd40-41c57c208850", "level": "subsection", "origin_cites_number": 19, "parent_id": "93f3d938-3f64-432f-856b-181ab4d5675a", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Domain-Specific Applications" ], [ "subsection", "Face Image Super-resolution" ] ], "subsections": [], "title": "Face Image Super-resolution" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 533, 532 ], "content": "Compared to panchromatic images (PANs, i.e., RGB images with 3 bands), hyperspectral images (HSIs) containing hundreds of bands provide abundant spectral features and help various vision tasks . 
\nHowever, due to hardware limitations, collecting high-quality HSIs is much more difficult than PANs and the resolution is also lower.\nThus super-resolution is introduced into this field, and researchers tend to combine HR PANs and LR HSIs to predict HR HSIs.\nAmong them, Masi \\textit{et al.} employ SRCNN and incorporate several maps of nonlinear radiometric indices for boosting performance.\nQu \\textit{et al.} jointly train two encoder-decoder networks to perform SR on PANs and HSIs, respectively, and transfer the SR knowledge from PAN to HSI by sharing the decoder and applying constraints such as angle similarity loss and reconstruction loss.\nRecently, Fu \\textit{et al.} evaluate the effect of camera spectral response (CSR) functions for HSI SR and propose a CSR optimization layer which can automatically select or design the optimal CSR, and outperform the state-of-the-arts.", "id": "f10fbcca-4576-41af-b398-54da4a876ac6", "level": "subsection", "origin_cites_number": 7, "parent_id": "93f3d938-3f64-432f-856b-181ab4d5675a", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Domain-Specific Applications" ], [ "subsection", "Hyperspectral Image Super-resolution" ] ], "subsections": [], "title": "Hyperspectral Image Super-resolution" }, { "cite_extract_rate": 1, "cites": [ 534, 535, 536 ], "content": "Generally, the LR images for training SR models are generated by downsampling RGB images manually (e.g., by bicubic downsampling).\nHowever, real-world cameras actually capture 12-bit or 14-bit RAW images, and performs a series of operations (e.g., demosaicing, denoising and compression) through camera ISPs (image signal processors) and finally produce 8-bit RGB images.\nThrough this process, the RGB images have lost lots of original signals and are very different from the original images taken by the camera.\nTherefore, it is suboptimal to directly use the manually downsampled RGB image for SR.\nTo solve this 
problem, researchers study how to use real-world images for SR.\nAmong them, Chen \\textit{et al.} analyze the relationships between image resolution (R) and field-of-view (V) in imaging systems (namely R-V degradation), propose data acquisition strategies to construct a real-world dataset City100, and experimentally demonstrate the superiority of the proposed image synthesis model.\nZhang \\textit{et al.} build another real-world image dataset SR-RAW (i.e., paired HR RAW images and LR RGB images) through optical zoom of cameras, and propose contextual bilateral loss to solve the misalignment problem. \nIn contrast, Xu \\textit{et al.} propose a pipeline to generate realistic training data by simulating the imaging process and develop a dual CNN to exploit the originally captured radiance information in RAW images.\nThey also propose to learn a spatially-variant color transformation for effective color corrections and generalization to other sensors.", "id": "ee943b58-26cc-44a8-ad44-ee0b138dc3f4", "level": "subsection", "origin_cites_number": 3, "parent_id": "93f3d938-3f64-432f-856b-181ab4d5675a", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Domain-Specific Applications" ], [ "subsection", "Real-world Image Super-resolution" ] ], "subsections": [], "title": "Real-world Image Super-resolution" }, { "cite_extract_rate": 0.36842105263157804, "cites": [ 531, 7260, 538, 522, 539, 498, 537 ], "content": "For video super-resolution, multiple frames provide much more scene information, and there is not only intra-frame spatial dependency but also inter-frame temporal dependency (e.g., motions, brightness and color changes).\nThus the existing works mainly focus on making better use of spatio-temporal dependency, including explicit motion compensation (e.g., optical flow-based, learning-based) and recurrent methods, etc.\nAmong the optical flow-based methods, Liao \\textit{et al.} employ optical flow methods to 
generate HR candidates and ensemble them by CNNs.\nVSRnet and CVSRnet deal with motion compensation by the Druleas algorithm , and use CNNs to take successive frames as input and predict HR frames.\nLiu \\textit{et al.} perform rectified optical flow alignment, and propose a temporal adaptive net to generate HR frames in various temporal scales and aggregate them adaptively.\nBesides, others also try to directly learn the motion compensation.\nThe VESPCN utilizes a trainable spatial transformer to learn motion compensation based on adjacent frames, and enters multiple frames into a spatio-temporal ESPCN for end-to-end prediction.\nAnd Tao \\textit{et al.} start from an accurate LR imaging model and propose a sub-pixel-like module to simultaneously achieve motion compensation and super-resolution, and thus fuse the aligned frames more effectively.\nAnother trend is to use recurrent methods to capture the spatial-temporal dependency without explicit motion compensation.\nSpecifically, the BRCN employs a bidirectional framework, and uses CNN, RNN, and conditional CNN to model the spatial, temporal and spatial-temporal dependency, respectively.\nSimilarly, STCN uses a deep CNN and a bidirectional LSTM to extract spatial and temporal information.\nAnd FRVSR uses previously inferred HR estimates to reconstruct the subsequent HR frames by two deep CNNs in a recurrent manner.\nRecently the FSTRN employs two much smaller 3D convolution filters to replace the original large filter, and thus enhances the performance through deeper CNNs while maintaining low computational cost.\nThe RBPN extracts spatial and temporal contexts by a recurrent encoder-decoder, and combines them with an iterative refinement framework based on the back-projection mechanism (Sec. 
\\ref{sec_framework_up_down}).\nIn addition, the FAST exploits compact descriptions of the structure and pixel correlations extracted by compression algorithms, transfers the SR results from one frame to adjacent frames, and greatly accelerates the state-of-the-art SR algorithms with little performance loss. \nAnd Jo \\textit{et al.} generate dynamic upsampling filters and the HR residual image based on the local spatio-temporal neighborhoods of each pixel, and also avoid explicit motion compensation.", "id": "4f792960-ffed-4436-a2b8-5beb48b58693", "level": "subsection", "origin_cites_number": 19, "parent_id": "93f3d938-3f64-432f-856b-181ab4d5675a", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Domain-Specific Applications" ], [ "subsection", "Video Super-resolution" ] ], "subsections": [], "title": "Video Super-resolution" }, { "cite_extract_rate": 0.5, "cites": [ 540, 541, 542 ], "content": "Deep learning based super-resolution has also been applied to other domain-specific applications and shows great performance.\nSpecifically, the Perceptual GAN addresses the small object detection problem by super-resolving representations of small objects to have similar characteristics as large objects and be more discriminative for detection.\nSimilarly, the FSR-GAN super-resolves small-size images in the feature space instead of the pixel space, and thus transforms the raw poor features to highly discriminative ones, which greatly benefits image retrieval.\nBesides, Jeon \\textit{et al.} utilize a parallax prior in stereo images to reconstruct HR images with sub-pixel accuracy in registration.\nWang \\textit{et al.} propose a parallax-attention model to tackle the stereo image super-resolution problem.\nLi \\textit{et al.} incorporate the 3D geometric information and super-resolve 3D object texture maps.\nAnd Zhang \\textit{et al.} separate view images in one light field into groups, learn inherent mapping for every group 
and finally combine the residuals in every group to reconstruct higher-resolution light fields.\nAll in all, super-resolution technology can play an important role in all kinds of applications, especially when we can deal with large objects well but cannot handle small objects.", "id": "cd62f88e-a3c1-433d-b251-fc32a0c25662", "level": "subsection", "origin_cites_number": 6, "parent_id": "93f3d938-3f64-432f-856b-181ab4d5675a", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Domain-Specific Applications" ], [ "subsection", "Other Applications" ] ], "subsections": [], "title": "Other Applications" }, { "cite_extract_rate": 0, "cites": [], "content": "In this paper, we have given an extensive survey on recent advances in image super-resolution with deep learning.\nWe mainly discussed the improvement of supervised and unsupervised SR, and also introduced some domain-specific applications.\nDespite great success, there are still many unsolved problems.\nThus in this section, we will point out these problems explicitly and introduce some promising trends for future evolution.\nWe hope that this survey not only provides a better understanding of image SR for researchers but also facilitates future research activities and application developments in this field.", "id": "d99390bf-c1ab-42f0-a206-de70277ab044", "level": "section", "origin_cites_number": 0, "parent_id": "2c20cd26-fec7-4463-b98c-798fc974562d", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Conclusion and Future Directions" ] ], "subsections": [ "b189992d-d4f2-4bdb-8aa8-56df37859ed4", "6e1445d8-2320-4c43-a72b-a4e4b5144a80", "a669a929-2c70-4689-9f55-1e58edd497df", "de8974e7-5736-4a7c-8a09-1f2dba4b3fa8", "6168fcfd-be9a-4482-a1b8-fa019e1df3c7" ], "title": "Conclusion and Future Directions" }, { "cite_extract_rate": 0.625, "cites": [ 543, 544, 8361, 482, 7255 ], "content": "Good network design not only 
determines a hypothesis space with a great performance upper bound, but also helps efficiently learn representations without excessive spatial and computational redundancy.\nBelow we will introduce some promising directions for network improvements.\n\\textit{Combining Local and Global Information.}\nA large receptive field provides more contextual information and helps generate more realistic results.\nThus it is promising to combine local and global information for providing contextual information of different scales for image SR.\n\\textit{Combining Low- and High-level Information.}\nShallow layers in CNNs tend to extract low-level features like colors and edges, while deeper layers learn higher-level representations like object identities.\nThus combining low-level details with high-level semantics can be of great help for HR reconstruction.\n\\textit{Context-specific Attention.}\nIn different contexts, people tend to care about different aspects of the images.\nFor example, for the grass area people may be more concerned with local colors and textures, while in the animal body area people may care more about the species and corresponding hair details.\nTherefore, incorporating an attention mechanism to enhance the attention to key features facilitates the generation of realistic details. \n\\textit{More Efficient Architectures.}\nExisting SR models tend to pursue ultimate performance, while ignoring the model size and inference speed.\nFor example, the EDSR takes 20s per image for $4\\times$ SR on DIV2K with a Titan GTX GPU , and DBPN takes 35s for $8\\times$ SR .\nSuch long prediction time is unacceptable in practical applications, so more efficient architectures are imperative.\nHow to reduce model sizes and speed up prediction while maintaining performance remains a problem.\n\\textit{Upsampling Methods.}\nExisting upsampling methods (Sec. 
\\ref{sec_upsampling_methods}) each have their own disadvantages:\ninterpolation methods result in expensive computation and cannot be learned end-to-end, the transposed convolution produces checkerboard artifacts, the sub-pixel layer brings uneven distribution of receptive fields, and the meta upscale module may cause instability or inefficiency and has further room for improvement.\nHow to perform effective and efficient upsampling still needs to be studied, especially with high scaling factors.\nRecently, the neural architecture search (NAS) technique for deep learning has become more and more popular, greatly improving the performance or efficiency with little human intervention .\nFor the SR field, combining the exploration of the above directions with NAS is of great potential.", "id": "b189992d-d4f2-4bdb-8aa8-56df37859ed4", "level": "subsection", "origin_cites_number": 8, "parent_id": "d99390bf-c1ab-42f0-a206-de70277ab044", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Conclusion and Future Directions" ], [ "subsection", "Network Design" ] ], "subsections": [], "title": "Network Design" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 7254, 482 ], "content": "Besides good hypothesis spaces, robust learning strategies are also needed for achieving satisfactory results.\nNext we'll introduce some promising directions for learning strategies.\n\\textit{Loss Functions.}\nExisting loss functions can be regarded as establishing constraints among LR/HR/SR images, and guide optimization based on whether these constraints are met.\nIn practice, these loss functions are often combined with weights, and the best loss function for SR is still unclear.\nTherefore, one of the most promising directions is to explore the potential correlations between these images and seek more accurate loss functions.\n\\textit{Normalization.}\nAlthough BN is widely used in vision tasks, which greatly speeds up training and 
improves performance, it is proven to be sub-optimal for super-resolution .\nThus other effective normalization techniques for SR need to be studied.", "id": "6e1445d8-2320-4c43-a72b-a4e4b5144a80", "level": "subsection", "origin_cites_number": 3, "parent_id": "d99390bf-c1ab-42f0-a206-de70277ab044", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Conclusion and Future Directions" ], [ "subsection", "Learning Strategies" ] ], "subsections": [], "title": "Learning Strategies" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 483 ], "content": "Evaluation metrics are one of the most fundamental components for machine learning.\nIf the performance cannot be measured accurately, researchers will have great difficulty verifying improvements.\nMetrics for super-resolution face such challenges and need more exploration.\n\\textit{More Accurate Metrics.}\nNowadays the PSNR and SSIM are the most widely used metrics for SR.\nHowever, the PSNR tends to result in excessive smoothness and the results can vary wildly between almost indistinguishable images.\nThe SSIM performs evaluation in terms of brightness, contrast and structure, but still cannot measure perceptual quality accurately .\nBesides, the MOS is the closest to human visual response, but takes a lot of effort and is non-reproducible.\nAlthough researchers have proposed various metrics (Sec. 
\\ref{sec_iqa}), there is currently no unified and widely accepted evaluation metric for SR quality.\nThus more accurate metrics for evaluating reconstruction quality are urgently needed.\n\\textit{Blind IQA Methods.}\nToday most metrics used for SR are full-reference methods, i.e., assuming that we have paired LR-HR images with perfect quality.\nBut since it's difficult to obtain such datasets, the commonly used datasets for evaluation are often constructed by manual degradation.\nIn this case, the task we perform evaluation on is actually the inverse process of the predefined degradation.\nTherefore, developing blind IQA methods is also in great demand.", "id": "a669a929-2c70-4689-9f55-1e58edd497df", "level": "subsection", "origin_cites_number": 3, "parent_id": "d99390bf-c1ab-42f0-a206-de70277ab044", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Conclusion and Future Directions" ], [ "subsection", "Evaluation Metrics" ] ], "subsections": [], "title": "Evaluation Metrics" }, { "cite_extract_rate": 0, "cites": [], "content": "As mentioned in Sec. 
\\ref{sec_unsupervised_sr}, it is often difficult to collect images with different resolutions of the same scene, so bicubic interpolation is widely used for constructing SR datasets.\nHowever, the SR models trained on these datasets may only learn the inverse process of the predefined degradation.\nTherefore, how to perform unsupervised super-resolution (i.e., trained on datasets without paired LR-HR images) is a promising direction for future development.", "id": "de8974e7-5736-4a7c-8a09-1f2dba4b3fa8", "level": "subsection", "origin_cites_number": 0, "parent_id": "d99390bf-c1ab-42f0-a206-de70277ab044", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Conclusion and Future Directions" ], [ "subsection", "Unsupervised Super-resolution" ] ], "subsections": [], "title": "Unsupervised Super-resolution" }, { "cite_extract_rate": 0.75, "cites": [ 484, 523, 518 ], "content": "Image super-resolution is greatly limited in real-world scenarios, such as unknown degradation and missing paired LR-HR images.\nBelow we'll introduce some directions towards real-world scenarios.\n\\textit{Dealing with Various Degradation.}\nReal-world images tend to suffer degradation like blurring, additive noise and compression artifacts.\nThus the models trained on datasets constructed manually often perform poorly in real-world scenes.\nSome works have been proposed for solving this , but these methods have some inherent drawbacks, such as great training difficulty and overly idealized assumptions.\nThis issue urgently needs to be resolved.\n\\textit{Domain-specific Applications.}\nSuper-resolution can not only be used in domain-specific data and scenes directly, but also help other vision tasks greatly (Sec. 
\\ref{sec_domain_specific_apps}).\nTherefore, it is also a promising direction to apply SR to more specific domains, such as video surveillance, object tracking, medical imaging and scene rendering.\n\\appendices\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\\section*{Acknowledgment}\nProf. Jian Chen is supported by the Guangdong special branch plans young talent with scientific and technological innovation (Grant No. 2016TQ03X445), the Guangzhou science and technology planning project (Grant No. 201904010197) and Natural Science Foundation of Guangdong Province, China (2016A030313437).\n\\bibliographystyle{IEEEtran}\n\\bibliography{reference}\n\\begin{IEEEbiography}\n [{\\includegraphics[width=1.1in,clip,keepaspectratio]{resource/bio_zhihao_wang.jpg}}]\n {Zhihao Wang} received the BE degree in South China University of Technology (SCUT), China, in 2017, and is working toward the ME degree at the School of Software Engineering, SCUT.\n Now he is as a visiting student at the School of Information Systems, Singapore Management University, Singapore. His research interests are computer vision based on deep learning, including visual recognition and image super-resolution.\n\\vspace{-0.2in}\n\\end{IEEEbiography}\n\\begin{IEEEbiography}\n [{\\includegraphics[width=1.1in,clip,keepaspectratio]{resource/bio_jian_chen.png}}]\n {Jian Chen} is currently a Professor of the School of Software Engineering at South China University of Technology where she started as an Assistant Professor in 2005.\n She received her B.S. and Ph.D. degrees, both in Computer Science, from Sun Yat-Sen University, China, in 2000 and 2005 respectively.\n Her research interests can be summarized as developing effective and efficient data analysis techniques for complex data and the related applications.\n\\vspace{-0.2in}\n\\end{IEEEbiography}\n\\begin{IEEEbiography}\n [{\\includegraphics[width=1.1in,clip,keepaspectratio]{resource/bio_steven_hoi.pdf}}]\n {Steven C. 
H.~Hoi} is currently the Managing Director of Salesforce Research Asia, and an Associate Professor (on leave) of the School of Information Systems, Singapore Management University, Singapore. Prior to joining SMU, he was an Associate Professor with Nanyang Technological University, Singapore. He received his Bachelor degree from Tsinghua University, P.R. China, in 2002, and his Ph.D degree in computer science and engineering from The Chinese University of Hong Kong, in 2006.\n His research interests are machine learning and data mining and their applications to multimedia information retrieval (image and video retrieval), social media and web mining, and computational finance, etc., and he has published over 150 refereed papers in top conferences and journals in these related areas.\n He has served as the Editor-in-Chief for Neurocomputing Journal, general co-chair for ACM SIGMM Workshops on Social Media (WSM'09, WSM'10, WSM'11), program co-chair for the fourth Asian Conference on Machine Learning (ACML'12), book editor for ``Social Media Modeling and Computing'', guest editor for ACM Transactions on Intelligent Systems and Technology (ACM TIST), technical PC member for many international conferences, and external reviewer for many top journals and worldwide funding agencies, including NSF in US and RGC in Hong Kong. 
He is an IEEE Fellow and ACM Distinguished Member.\n\\vspace{-0.2in}\n\\end{IEEEbiography}\n\\clearpage\n\\if 0", "id": "6168fcfd-be9a-4482-a1b8-fa019e1df3c7", "level": "subsection", "origin_cites_number": 4, "parent_id": "d99390bf-c1ab-42f0-a206-de70277ab044", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "Conclusion and Future Directions" ], [ "subsection", "Towards Real-world Scenarios" ] ], "subsections": [], "title": "Towards Real-world Scenarios" }, { "cite_extract_rate": 0.555555555555555, "cites": [ 7031, 7256, 497, 528, 7255, 506, 7260, 7033, 500, 538, 495, 7258, 522, 489, 97, 7034, 71, 481, 493, 482, 539, 487, 7035, 7030, 496, 501, 7254, 498, 8358, 537, 96, 492, 518, 7022, 530 ], "content": "For better reading of this survey, we provide all the notations used in this survey and their detailed definitions in Table \\ref{tab_notations}.\nAnd we also list the full text of all the abbreviations used herein in Table \\ref{tab_abbreviations}.\n\\fi\n\\setcounter{table}{0}\n\\setcounter{page}{1}\n\\begin{table*}[h]\n \\renewcommand{\\arraystretch}{1.3}\n \\caption{Notations.}\n \\label{tab_notations}\n \\centering\n \\begin{tabular}{|c|l|}\n \\hline\n Notation & Description \\\\\n \\hline\n $I_x$ & LR image \\\\\n $I_y$ & ground truth HR image, abbreviated as $I$ \\\\\n $\\hat{I_y}$ & reconstructed HR image, abbreviated as $\\hat{I}$ \\\\\n $I_s$ & randomly sampled HR image from the real HR images \\\\\n \\hline\n $I(i)$ & intensity of the $i$-th pixel of image $I$ \\\\\n $D$ & discriminator network of GAN \\\\\n $\\phi$ & image classification network \\\\\n $\\phi^{(l)}$ & extracted representations on $l$-th layer by $\\phi$ \\\\\n $\\operatorname{vec}$ & vectorization operation \\\\\n $G^{(l)}$ & Gram matrix of representations on $l$-th layer \\\\\n $l$ & layer of CNN \\\\\n $h$, $w$, $c$ & width, height and number of channels of feature maps \\\\\n $h_l$, $w_l$, $c_l$ & width, height and number of 
channels of feature maps in $l$-th layer \\\\\n \\hline\n $\\mathcal{D}$ & degradation process \\\\\n $\\delta$ & parameters of $\\mathcal{D}$ \\\\\n $\\mathcal{F}$ & super-resolution process \\\\\n $\\theta$ & parameters of $\\mathcal{F}$ \\\\\n \\hline\n $\\otimes$ & convolution operation \\\\\n $\\kappa$ & convolution kernel \\\\\n $\\downarrow$ & downsampling operation \\\\\n $s$ & scaling factor \\\\\n $n$ & Gaussian noise \\\\\n $\\varsigma$ & standard deviation of $n$ \\\\\n $z$ & a random vector \\\\\n \\hline\n $\\mathcal{L}$ & loss function \\\\\n $\\mathcal{L}_{\\text{content}}$ & content loss \\\\\n $\\mathcal{L}_{\\text{cycle}}$ & content consistency loss \\\\\n $\\mathcal{L}_{\\text{pixel\\_l1}}$ & pixel L1 loss \\\\\n $\\mathcal{L}_{\\text{pixel\\_l2}}$ & pixel L2 loss \\\\\n $\\mathcal{L}_{\\text{pixel\\_Cha}}$ & pixel Charbonnier loss \\\\\n $\\mathcal{L}_{\\text{gan\\_ce\\_g}}$, $\\mathcal{L}_{\\text{gan\\_ce\\_d}}$ & adversarial loss of the generator and discriminator based on cross entropy \\\\\n $\\mathcal{L}_{\\text{gan\\_hi\\_g}}$, $\\mathcal{L}_{\\text{gan\\_hi\\_d}}$ & adversarial loss of the generator and discriminator based on hinge error \\\\\n $\\mathcal{L}_{\\text{gan\\_ls\\_g}}$, $\\mathcal{L}_{\\text{gan\\_ls\\_g}}$ & adversarial loss of the generator and discriminator based on least square error \\\\\n $\\mathcal{L}_{\\text{TV}}$ & total variation loss \\\\\n $\\Phi$ & regularization term \\\\\n $\\lambda$ & tradeoff parameter of $\\Phi$ \\\\\n $\\epsilon$ & small instant for stability \\\\\n \\hline\n $\\mu_I$ & luminance of image $I$, i.e., mean of intensity \\\\\n $\\sigma_I$ & contrast of image $I$, i.e., standard deviation of intensity \\\\\n $\\sigma_{I,\\hat{I}}$ & covariance between images $I$ and $\\hat{I}$ \\\\\n $\\mathcal{C}_l$, $\\mathcal{C}_c$, $\\mathcal{C}_s$ & comparison function of luminance, contrast, structure \\\\\n $\\alpha$, $\\beta$, $\\gamma$ & weights of $\\mathcal{C}_l$, $\\mathcal{C}_c$, $\\mathcal{C}_s$ 
\\\\\n $C_1$, $C_2$, $C_3$ & constants \\\\\n $k_1$, $k_2$ & constants \\\\\n \\hline\n $L$ & maximum possible pixel value \\\\\n $N$ & number of pixels \\\\\n $M$ & number of bins \\\\\n \\hline\n \\end{tabular}\n\\end{table*}\n\\begin{table*}[h]\n \\renewcommand{\\arraystretch}{1.3}\n \\caption{Abbreviations.}\n \\label{tab_abbreviations}\n \\centering\n \\begin{tabular}{|clcl|}\n \\hline\n Abbreviation & Full name & Abbreviation & Full name \\\\\n \\hline\n FH & face hallucination & PAN & panchromatic image \\\\\n HR & high-resolution & SR & super-resolution \\\\\n HSI & hyperspectral image & TV & total variation \\\\\n HVS & human visual system & WT & wavelet transformation \\\\\n LR & low-resolution & & \\\\\n \\hline\n FSIM & feature similarity & MS-SSIM & multi-scale SSIM \\\\\n IQA & image quality assessment & NIQE & natural image quality evaluator \\\\\n MOS & mean opinion score & PSNR & peak signal-to-noise ratio \\\\\n MSSIM & mean SSIM & SSIM & structural similarity \\\\\n \\hline\n BN & batch normalization & GAN & generative adversarial net \\\\\n CNN & convolutional neural network & LSTM & long short term memory network \\\\\n CycleGAN & cycle-in-cycle GAN & ResNet & residual network \\\\\n DenseNet & densely connected CNN & SENet & squeeze-and-excitation network \\\\\n FAN & face alignment network & SPMC & sub-pixel motion compensation \\\\\n \\hline\n ADRSR & automated decomposition and reconstruction & LCGE & learn FH via component generation and enhancement \\\\\n Attention-FH & attention-aware FH & MemNet & memory network \\\\\n BRCN & bidirectional recurrent CNN & MS-LapSRN & multi-scale LapSRN \\\\\n CARN & cascading residual network & MSRN & multiscale residual network \\\\\n CARN-M & CARM based on MobileNet & MTUN & multi-task upsampling network \\\\\n CBN & cascaded bi-network & MWCNN & multi-level wavelet CNN \\\\\n CinCGAN & cycle-in-cycle GAN & ProSR & progressive SR \\\\\n CNF & context-wise network fusion & RBPN & recurrent 
back-projection network \\\\\n CVSRnet & compressed VSRnet & RCAN & residual channel attention networks \\\\\n DBPN & deep back-projection network & RDN & residual dense network \\\\\n DNSR & denoising for SR & RNAN & residual non-local attention networks \\\\\n DRCN & deeply-recursive CNN & SAN & Second-order Attention Network \\\\\n DRRN & deep recursive residual network & SFT-GAN & GAN with spatial feature transformation \\\\\n DSRN & dual-state recurrent network & SICNN & super-identity CNN \\\\\n DWSR & deep wavelet prediction for SR & SOCA & second-order channel attention \\\\\n EDSR & enhanced deep SR network & SRCNN & SR CNN \\\\\n EDSR-PP & EDSR with pyramid pooling & SRFBN & SR feedback network \\\\\n ESPCN & efficient sub-pixel CNN & SRGAN & SR GAN \\\\\n ESRGAN & enhanced SRGAN & SRDenseNet & SR DenseNet \\\\\n FAST & free adaptive SR via transfer & STCN & spatial-temporal CNN \\\\\n FRVSR & frame-recurrent video SR & TDAE & transformative discriminative auto-encoder \\\\\n FSRCNN & fast SRCNN & TDN & transformative discriminative network \\\\\n FSR-GAN & feature SRGAN & Super-FAN & SR with FAN \\\\\n FSRNet & face SR network & UR-DGN & ultra-resolving by discriminative generative networks \\\\\n FSTRN & fast spatio-temporal ResNet & VESPCN & video ESPCN \\\\\n IDN & information distillation network & VSRnet & video SR network \\\\\n LapSRN & Laplacian pyramid SR network & ZSSR & zero-shot SR \\\\\n MDSR & multi-scale deep SR system & & \\\\\n \\hline\n \\end{tabular}\n\\end{table*}\n\\end{document}", "id": "8f0f5d78-51f4-4970-b0a8-a53685116af3", "level": "section", "origin_cites_number": 63, "parent_id": "2c20cd26-fec7-4463-b98c-798fc974562d", "prefix_titles": [ [ "title", "Deep Learning for Image Super-resolution:\\\\A Survey" ], [ "section", "" ] ], "subsections": [], "title": "" } ]
4
[ 7029, 7028, 480, 483, 481, 482, 7030, 7254, 8358, 484, 485, 7031, 7256, 7255, 7032, 487, 486, 489, 8359, 488, 491, 8360, 490, 7033, 492, 493, 494, 495, 496, 498, 497, 499, 7034, 7257, 97, 500, 305, 96, 7258, 501, 503, 505, 502, 504, 506, 507, 509, 508, 7035, 510, 8317, 516, 513, 515, 512, 121, 514, 63, 78, 511, 518, 7022, 517, 519, 71, 62, 520, 521, 522, 523, 524, 527, 525, 526, 7259, 529, 528, 531, 530, 533, 532, 534, 535, 536, 7260, 538, 539, 537, 540, 541, 542, 543, 544, 8361 ]
0.980886
[ "Feifei Shao", "Long Chen", "Jian Shao", "Wei Ji", "Shaoning Xiao", "Lu Ye", "Yueting Zhuang", "Jun Xiao" ]
Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey
2021
2021-05-26T17:15:53Z
cs.CV
Weakly-Supervised Object Detection (WSOD) and Localization (WSOL), \ie, detecting multiple and single instances with bounding boxes in an image using image-level labels, are long-standing and challenging tasks in the CV community. With the success of deep neural networks in object detection, both WSOD and WSOL have received unprecedented attention. Hundreds of WSOD and WSOL methods and numerous techniques have been proposed in the deep learning era. To this end, in this paper, we consider WSOL to be a sub-task of WSOD and provide a comprehensive survey of the recent achievements of WSOD. Specifically, we first describe the formulation and setting of WSOD, including the background, challenges, and basic framework. Meanwhile, we summarize and analyze all advanced techniques and training tricks for improving detection performance. Then, we introduce the widely-used datasets and evaluation metrics of WSOD. Lastly, we discuss the future directions of WSOD. We believe that these summaries can help pave the way for future research on WSOD and WSOL.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "2f09dfe5-fc96-4d60-bf8a-870e95820098", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ] ], "subsections": [ "9ccbcfac-47e2-47c4-b6a6-daa7425f2e36", "9e1fbaf4-764f-4115-b463-68f479f804c4", "57f26c22-b2e9-4b99-a5c9-6b9157c32986", "e2bc9197-68e8-4c34-9892-4132e9e3f384", "74ea7ff3-69a1-4ddc-a744-5eb8ef38e5ce", "80d0c9da-6233-48ab-bd98-ebb5dc2b3fd1", "ce7d5bf1-8d1b-41d8-aef3-9041d9f0eac1", "1bfb95db-ecd4-4dab-be9d-ade21af94465", "5d4d4d9b-ac69-47c5-b147-5bcd28ddb450", "55b3e435-aa9e-4a20-83de-0ebb04c9d0bf" ], "title": "root" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 514, 799, 1534, 305, 206, 486, 2933, 802, 895, 2934, 97, 209, 1230, 8607, 520 ], "content": "\\IEEEPARstart{O}{bject} detection~ is a fundamental and challenging task that aims to locate and classify object instances in an image. Object localization aims to use a bounding box (an axis-aligned rectangle tightly bounding the object) to determine the spatial location and extent of an object in an image as accurately as possible~. Object classification is to assess the presence of objects from a given set of object classes in an image. As one of the most fundamental tasks in computer vision, object detection is an indispensable technique for many high-level applications, \\eg, robot vision~, face recognition~, image retrieval~, augmented reality~, autonomous driving~, change detection~ and so on. With the development of convolutional neural networks (CNNs) in visual recognition~ and the release of large-scale datasets~, today's state-of-the-art object detectors can achieve near-perfect performance under the fully-supervised setting, \\ie, Fully-Supervised Object Detection (FSOD)~. 
Unfortunately, these fully-supervised object detection methods suffer from two inevitable limitations: 1) The large-scale instance annotations are difficult to obtain and labor-intensive. 2) The labeling process may inadvertently introduce annotation noise.\nTo avoid the aforementioned problems, the community has started to solve object detection in a weakly-supervised setting, \\ie, Weakly-Supervised Object Detection (WSOD). Different from the fully-supervised setting (cf. Fig.~\\ref{fsod_vs_wsod} (a)), WSOD aims to detect instances with only image-level labels (\\eg, categories of instances in the whole images). Meanwhile, WSOD can benefit from the large-scale datasets on the web, such as those from Facebook and Twitter. Another similar task is Weakly-Supervised Object Localization (WSOL), which only detects one instance in an image. Because WSOD and WSOL detect multiple instances and single instances respectively, we consider WSOL as a sub-task of WSOD. In the following paper, we use WSOD to represent both WSOD and WSOL. \n\\begin{figure}[t]\n  \\begin{center}\n  \\includegraphics[width=0.85\\linewidth]{./images/fsod_vs_wsod.pdf}\n  \\end{center}\n  \\caption{(a) Fully-Supervised Object Detection (FSOD) uses the \\emph{instance-level} annotations as supervision. (b) Weakly-Supervised Object Detection (WSOD) uses the \\emph{image-level} annotations as supervision.}\n  \\label{fsod_vs_wsod}\n\\end{figure}\n\\begin{figure*}[t]\n  \\begin{center}\n  \\includegraphics[width=0.9\\linewidth]{./images/sections_figure2.pdf}\n  \\end{center}\n  \\caption{The main content of this paper.}\n  \\label{sections_figure}\n\\end{figure*}\nIn this paper, we go over all typical WSOD methods and give a comprehensive survey (cf. Fig.~\\ref{sections_figure}) of recent advances in WSOD. Since the number of papers on WSOD is vast, we sincerely apologize to those authors whose research on WSOD and other related fields is not included in this survey. 
In Section~\\ref{sec:wsod}, we introduce the background, main challenges, and basic framework. In Section~\\ref{sec:milestones}, according to the development timeline of WSOD, we introduce several modern classical methods in detail. Then, in-depth analyses are provided towards the all advanced techniques and tricks for the main challenges. In Section~\\ref{sec:datasets}, we demonstrate all prevailing benchmarks and standard evaluation metrics for WSOD. In Section~\\ref{sec:directions}, we briefly discuss the future directions.", "id": "9ccbcfac-47e2-47c4-b6a6-daa7425f2e36", "level": "section", "origin_cites_number": 21, "parent_id": "2f09dfe5-fc96-4d60-bf8a-870e95820098", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:wsod}", "id": "9e1fbaf4-764f-4115-b463-68f479f804c4", "level": "section", "origin_cites_number": 0, "parent_id": "2f09dfe5-fc96-4d60-bf8a-870e95820098", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "WSOD" ] ], "subsections": [ "b6a2ddd6-3dbc-4d11-aa38-e3ad1025e91f", "cf380f17-be2b-4358-b139-2b6daf64a9e5", "f5e50a1e-ba56-4da2-80ee-6c686c039fa9" ], "title": "WSOD" }, { "cite_extract_rate": 0.6388888888888881, "cites": [ 2282, 2291, 2306, 2936, 7089, 2294, 2287, 2259, 7090, 2275, 2296, 2283, 2286, 7637, 2937, 737, 2935, 2304, 2302, 2295, 2280, 2284, 2305 ], "content": "\\blue{WSOD aims to classify and locate object instances using only image-level labels in the training phase. As shown in Fig.~\\ref{fsod_vs_wsod} (b), given an image with cat and dog, WSOD not only classifies the cat and dog but also locates their location using bounding boxes. 
Different from FSOD that can use instance-level annotations in the training phase shown in Fig.~\\ref{fsod_vs_wsod} (a), WSOD only accesses image-level labels. Because of this restriction, though hundreds of WSOD methods have been proposed, the performance gap between WSOD and FSOD is still large. For example, the mAP of state-of-the-art FSOD approach~ and WSOD approach~ is 86.9\\% and 54.9\\% on PASCAL VOC 2007 dataset~, respectively. Therefore, there are still many challenges in terms of the task of WSOD for researchers to solve, especially in the direction of improving the detection performance.}\n\\addtolength{\\tabcolsep}{-3pt}\n\\begin{table*}[t]\n \\centering\n \\caption{A summary of the state-of-the-art WSOD methods. For the proposals, SS represents selective search, EB represents edge boxes, and SW represents sliding window. The Challenges denotes the main contributions of corresponding papers.}\n \\label{table08}\n \\begin{tabular}{lcccccccc}\n \\toprule\n \\multirow{2}*{Approach}&\n \\multirow{2}*{Year} & \n \\multirow{2}*{Proposals} & \n \\multicolumn{2}{c}{Network}& \n \\multicolumn{3}{c}{Challenges} &\n \\multirow{2}*{Code on Github} \\\\\n \\cmidrule (r){4-5} \\cmidrule (r){6-8}\n &&&MIL-based&CAM-based&Discriminative Region&Multiple Instances&Speed&\\\\\n \\midrule\n \\rowcolor{mygray}\n WSDDN~&CVPR2016&EB&$\\surd$&&&&&hbilen/WSDDN\\\\\n CAM~&CVPR2016&Heatmap&&$\\surd$&&&$\\surd$&zhoubolei/CAM\\\\\n \\rowcolor{mygray}\n WSLPDA~&CVPR2016&EB&$\\surd$&&$\\surd$&&&jbhuang0604/WSL\\\\\n WELDON~&CVPR2016&SW&$\\surd$&&$\\surd$&&$\\surd$&\\\\\n \\rowcolor{mygray}\n ContextLocNet~&ECCV2016&SS&$\\surd$&&$\\surd$&&&vadimkantorov/contextlocnet\\\\\n Grad-CAM~&ICCV2017&Heatmap&&$\\surd$&$\\surd$&&$\\surd$&ramprs/grad-cam\\\\\n \\rowcolor{mygray}\n OICR~&CVPR2017&SS&$\\surd$&&$\\surd$&&&ppengtang/oicr\\\\\n WCCN~&CVPR2017&EB&$\\surd$&&$\\surd$&&&\\\\\n \\rowcolor{mygray}\n ST-WSL~&CVPR2017&EB&$\\surd$&&$\\surd$&$\\surd$&&\\\\\n 
WILDCAT~&CVPR2017&Heatmap&&$\\surd$&$\\surd$&&$\\surd$&durandtibo/wildcat.pytorch\\\\\n \\rowcolor{mygray}\n SPN~&ICCV2017&SW&$\\surd$&&&&$\\surd$&ZhouYanzhao/SPN\\\\\n TP-WSL~&ICCV2017&Heatmap&&$\\surd$&$\\surd$&&$\\surd$&\\\\\n \\rowcolor{mygray}\n PCL~&TPAMI2018&SS&$\\surd$&&$\\surd$&$\\surd$&&ppengtang/pcl.pytorch\\\\\n GAL-fWSD~&CVPR2018&EB&$\\surd$&&&&$\\surd$&\\\\\n \\rowcolor{mygray}\n W2F~&CVPR2018&SS&$\\surd$&&$\\surd$&$\\surd$&$\\surd$&\\\\\n ACoL~&CVPR2018&Heatmap&&$\\surd$&$\\surd$&&$\\surd$&xiaomengyc/ACoL\\\\\n \\rowcolor{mygray}\n ZLDN~&CVPR2018&EB&$\\surd$&&$\\surd$&&&\\\\\n TS$^2$C~&ECCV2018&SS&$\\surd$&&$\\surd$&&&\\\\\n \\rowcolor{mygray}\n SPG~&ECCV2018&Heatmap&&$\\surd$&&&$\\surd$&xiaomengyc/SPG\\\\\n WSRPN~&ECCV2018&EB&$\\surd$&&&&&\\\\\n \\rowcolor{mygray}\n C-MIL~&CVPR2019&SS&$\\surd$&&&&&WanFang13/C-MIL\\\\\n WS-JDS~&CVPR2019&EB&$\\surd$&&$\\surd$&&&shenyunhang/WS-JDS\\\\\n \\rowcolor{mygray}\n ADL~&CVPR2019&Heatmap&&$\\surd$&&&$\\surd$&junsukchoe/ADL\\\\\n Pred NET~&CVPR2019&SS&$\\surd$&&&&&\\\\\n \\rowcolor{mygray}\n WSOD2~&ICCV2019&SS&$\\surd$&&$\\surd$&&&researchmm/WSOD2\\\\\n OAILWSD~&ICCV2019&SS&$\\surd$&&$\\surd$&&&\\\\\n \\rowcolor{mygray}\n TPWSD~&ICCV2019&SS&$\\surd$&&$\\surd$&&&\\\\\n SDCN~&ICCV2019&SS&$\\surd$&&$\\surd$&&&\\\\\n \\rowcolor{mygray}\n C-MIDN~&ICCV2019&SS&$\\surd$&&$\\surd$&&&\\\\\n DANet~&ICCV2019&Heatmap&&$\\surd$&&&$\\surd$&xuehaolan/DANet\\\\\n \\rowcolor{mygray}\n NL-CCAM~&WACV2020&Heatmap&&$\\surd$&$\\surd$&&$\\surd$&Yangseung/NL-CCAM\\\\\n ICMWSD~&CVPR2020&SS&$\\surd$&&$\\surd$&&&\\\\\n \\rowcolor{mygray}\n EIL~&CVPR2020&Heatmap&&$\\surd$&$\\surd$&&$\\surd$&Wayne-Mai/EIL\\\\\n SLV~&CVPR2020&SS&$\\surd$&&$\\surd$&&&\\\\\n \\bottomrule\n \\end{tabular}\n\\end{table*}\n\\addtolength{\\tabcolsep}{3pt}", "id": "b6a2ddd6-3dbc-4d11-aa38-e3ad1025e91f", "level": "subsection", "origin_cites_number": 36, "parent_id": "9e1fbaf4-764f-4115-b463-68f479f804c4", "prefix_titles": [ [ "title", "Deep Learning for 
Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "WSOD" ], [ "subsection", "A Problem Definition" ] ], "subsections": [], "title": "A Problem Definition" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 2259, 737, 2296 ], "content": "\\label{sec:challenge}\nThe main challenges of WSOD come from two aspects: localization accuracy and speed. For localization accuracy, it consists of discriminative region problem and multiple instances with the same category problem. For speed, it is an important characteristic of real applications. In TABLE~\\ref{table08}, we summarize all typical WSOD methods and their contributions to these challenges.\n\\textbf{Discriminative Region Problem.} It is that detectors~ tend to focus on the most discriminative parts of the object. During training, there may exist more than one proposal around an object, and the most discriminative part region of the object is likely to have the highest score (\\eg, the region A is the most discriminative region in Fig.~\\ref{OICR_comparison} (left) and it has a higher score than that of other regions). If the model selects positive proposals only based on scores, it is easy to focus on the most discriminative part of the object rather than the whole object extent.\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=0.85\\linewidth]{./images/OICR_comparison}\n \\end{center}\n \\caption{Detection results between model without classifier refinement (left) and model with classifier refinement (right). The figure comes from~.} \n \\label{OICR_comparison}\n\\end{figure}\n\\textbf{Multiple Instance Problem.} It denotes that detectors~ are difficult to accurately recognize multiple instances when there may exist several objects with the same category in an image. 
Although there are multiple instances with the same category in an image, these detectors~ only select the highest score proposal of each category as the positive proposal and ignores other possible instance proposals.\n\\textbf{Speed Problem.} At present, the speed bottleneck of the WSOD approaches is mainly concentrated in proposal generation. Selective search (SS)~ and Edge boxes (EB)~ that are widely used in WSOD are too time-consuming.", "id": "cf380f17-be2b-4358-b139-2b6daf64a9e5", "level": "subsection", "origin_cites_number": 5, "parent_id": "9e1fbaf4-764f-4115-b463-68f479f804c4", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "WSOD" ], [ "subsection", "Main Challenges" ] ], "subsections": [], "title": "Main Challenges" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:framework}\nThe basic framework of WSOD methods can be categorized into MIL-based networks and CAM-based networks according to the detection of multiple instances and a single instance.", "id": "f5e50a1e-ba56-4da2-80ee-6c686c039fa9", "level": "subsection", "origin_cites_number": 0, "parent_id": "9e1fbaf4-764f-4115-b463-68f479f804c4", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "WSOD" ], [ "subsection", "Basic WSOD Framework" ] ], "subsections": [ "f071d696-05af-451b-9c94-54e3300f6807", "1ead24ef-1d81-47d2-b27d-95e21d77be3f", "a27600bd-6e4a-4fd8-b2c2-b16adb696593" ], "title": "Basic WSOD Framework" }, { "cite_extract_rate": 0.529411764705882, "cites": [ 2259, 2282, 305, 2306, 97, 514, 2275, 509, 895 ], "content": "\\label{sec:mil-based}\n\\blue{When the detection network predicts multiple instances in an image, it is considered a Multiple Instance Learning (MIL) problem~. Taking Fig.~\\ref{fsod_vs_wsod} (b) for example, an image is interpreted as a bag of proposals in the MIL problem. 
If the image is labeled cat, it means that at least one of the proposals tightly contains the cat instance. Otherwise, all of the regions do not contain the cat instance (likewise for dogs). The MIL-based network is based on the structure of WSDDN~ that consists of three components: proposal generator, backbone, and detection head. }\n\\textbf{Proposal Generator.} Numerous proposal generators are usually used in MIL-based networks. 1) \\textit{Selective search (SS)}~: it leverages the advantages of both exhaustive search and segmentation to generate initial proposals. 2) \\textit{Edge boxes (EB)}~: it uses object edges to generate proposals and is widely used in many approaches~. 3) \\textit{Sliding window (SW)}: it denotes that each point of the feature maps corresponds to one or more proposals in the relative position of the original image. And SW is faster than SS~ and EB~ in proposal generation. \n\\textbf{Backbone.} \\blue{With the development of CNNs and large scale datasets (\\eg, ImageNet~), the pretrained AlexNet~, VGG16~, GoogLeNet~, ResNet~, and SENet~ are prevailing feature representation networks for both classification and object detection.}\n\\textbf{Detection Head.} \\blue{It includes a classification stream and a localization stream. The classification stream predicts class scores for each proposal, and the localization stream predicts every proposal's existing probability score for each class. Then the two scores are aggregated to predict the confidence scores of an image as a whole, which are used to inject image-level supervision in learning.}\n\\blue{Given an image, we first feed it into the proposal generator and backbone to generate proposals and feature maps, respectively. Then, the feature maps and proposals are forwarded into a spatial pyramid pooling (SPP)~ layer to generate fixed-size regions. 
Finally, these regions are fed into the detection head to classify and localize object instances.}", "id": "f071d696-05af-451b-9c94-54e3300f6807", "level": "subsubsection", "origin_cites_number": 17, "parent_id": "f5e50a1e-ba56-4da2-80ee-6c686c039fa9", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "WSOD" ], [ "subsection", "Basic WSOD Framework" ], [ "subsubsection", "MIL-based Network" ] ], "subsections": [], "title": "MIL-based Network" }, { "cite_extract_rate": 1, "cites": [ 737 ], "content": "\\label{sec:cam-based}\n\\blue{When the detection network only predicts a single instance in an image, it is considered an object localization problem. The CAM-based network is based on the structure of CAM~, which consists of three components: backbone, classifier, and class activation maps.}\n\\textbf{Backbone.} \\blue{It is similar to that of the MIL-based network introduced in Section~\\ref{sec:mil-based}.}\n\\textbf{Classifier.} \\blue{It is designed to classify the classes of an image, which includes a global average pooling (GAP) layer and a fully connected layer. }\n\\textbf{Class Activation Maps.} \\blue{It is responsible for locating object instances by using a simple segmentation technique. Because the class activation maps are produced by matrix multiplying the weight of the fully connected layer to the feature maps of the last convolutional layer, it spotlights the class-specific discriminative regions in every activation map. Therefore, it is easy to generate bounding boxes of every class by segmenting the activation map of the class. }\n\\blue{Given an image, we first feed it into the backbone to generate feature maps of this image. Then, the feature maps are forwarded into the classifier to classify the image's classes. 
Meanwhile, we matrix multiply the weight of the fully connected layer to the feature maps of the last convolutional layer to produce class activation maps. Finally, we segment the activation map of the highest probability class to yield bounding boxes for object localization.}", "id": "1ead24ef-1d81-47d2-b27d-95e21d77be3f", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "f5e50a1e-ba56-4da2-80ee-6c686c039fa9", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "WSOD" ], [ "subsection", "Basic WSOD Framework" ], [ "subsubsection", "CAM-based Network" ] ], "subsections": [], "title": "CAM-based Network" }, { "cite_extract_rate": 0, "cites": [], "content": "\\blue{In this section, we discuss the differences between MIL-based networks and CAM-based networks.}\n\\blue{Firstly, MIL-based network leverages SS~, EB~ or SW to generate thousands of initial proposals, but CAM-based network segments the activation map to one proposal for each class. Therefore, MIL-based network is better than CAM-based network when detecting multiple instances with the same category in an image, but the training and inference speed of CAM-based network is faster than MIL-based.}\n\\blue{Secondly, because the size of the proposals produced by SS or EB is not consistent, MIL-based network leverages an SPP layer to generate fixed-size vectors followed by feeding these fixed-size vectors into the fully connected layers for later training. However, a CAM-based network leverages a GAP layer to generate a fixed-size vector on the feature maps. Then, it feeds the vector into a fully connected layer for classifying.}\n\\blue{Finally, Both MIL-based networks and CAM-based networks will face the discriminative region problem and multiple instance problem. 
In addition, MIL-based networks will face the training and test speed problem, since SS and EB are too time-consuming and yield plenty of initial proposals.}", "id": "a27600bd-6e4a-4fd8-b2c2-b16adb696593", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "f5e50a1e-ba56-4da2-80ee-6c686c039fa9", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "WSOD" ], [ "subsection", "Basic WSOD Framework" ], [ "subsubsection", "Discussions" ] ], "subsections": [], "title": "Discussions" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:milestones}\nSince 2016, there are some landmark methods (cf. Fig.~\\ref{milestones}) for the research of WSOD. In the following, we will briefly introduce these milestones.\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=0.85\\linewidth]{./images/milestones.pdf}\n \\end{center}\n \\caption{The Milestones of WSOD since 2016.}\n \\label{milestones}\n\\end{figure}", "id": "57f26c22-b2e9-4b99-a5c9-6b9157c32986", "level": "section", "origin_cites_number": 0, "parent_id": "2f09dfe5-fc96-4d60-bf8a-870e95820098", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Milestones of WSOD" ] ], "subsections": [ "600b68e9-3062-4d34-b822-403fb271cf19", "f31a26c2-85c7-4f0b-b802-55b4548075ab" ], "title": "Milestones of WSOD" }, { "cite_extract_rate": 1, "cites": [ 2259, 2937, 7090, 2296, 509 ], "content": "\\label{sec:mil-based methods}\n\\textbf{WSDDN.} \\blue{The biggest contribution of WSDDN~ is using two streams network, which aims to perform classification and localization respectively. WSDDN first uses a SPP~ on the top of the feature maps and generates a feature vector after two fully connected layer procedures. Next, the feature vector is fed into the classification stream and localization steam. 
Specifically, the classification stream is responsible for computing the class scores of each region, and the localization stream is designed to compute every region's existing probability for each class. Then, the matrix product of the class scores of each region and the existing probability for each class is considered as the final prediction scores. However, because of only accessing image-level labels in the training phase, the most discriminative part of the object will be paid more attention than the whole object instance in training. Due to the above limitation, WSDDN suffers from the discriminative region problem.}\n\\textbf{OICR.} \\blue{To alleviate the discriminative region problem, OICR~ uses WSDDN as its baseline and adds three instance classifier refinement procedures after the baseline. Every instance classifier refinement procedure, which consists of two fully connected layers, is designed to further predict the class scores for each proposal. Because the output of each instance classifier refinement procedure is the supervision of its latter refinement procedure, OICR can continue to learn so that larger area can have higher scores than WSDDN. Although the prediction of WSDDN may only focus on the discriminative part of the object, it will be refined after several instance classifier refinement procedures. }\n\\textbf{SDCN.} SDCN~ introduces a segmentation-detection collaborative mechanism. It consists of a detection branch and segmentation branch, which are responsible for detecting bounding boxes and generating segmentation masks respectively. In SDCN, the detection results will be converted to a heatmap by setting a classification score to all pixels within each proposal as the supervision mask of the segmentation branch. Meanwhile, the proposals of the highest overlap with the connected regions from the segmentation masks will be the pseudo ground-truth boxes of the detection branch. 
Both detection branch and segmentation branch are optimized alternatively and promoted each other, so SDCN achieves better detection performance than OICR.\n\\textbf{ICMWSD.} Different from SDCN which uses object detection with segmentation collaboration mechanism, ICMWSD~ addresses the problem of focusing on the most discriminative part of an object by leveraging context information. Firstly, ICMWSD obtains a dropped features by dropping the most discriminative parts. Then, maximizing the loss of the dropped features to force ICMWSD to look at the surrounding context regions.", "id": "600b68e9-3062-4d34-b822-403fb271cf19", "level": "subsection", "origin_cites_number": 5, "parent_id": "57f26c22-b2e9-4b99-a5c9-6b9157c32986", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Milestones of WSOD" ], [ "subsection", "MIL-based Methods" ] ], "subsections": [], "title": "MIL-based Methods" }, { "cite_extract_rate": 1, "cites": [ 2282, 2280, 737 ], "content": "\\textbf{CAM.} The biggest contribution of CAM~ is using class activation maps to predict instances. CAM firstly leverages a GAP layer on the last convolutional feature maps to generate a feature vector. \\blue{Then, the feature vector is fed into a classifier with a fully connected layer to generate prediction scores of an image.} Finally, CAM generates bounding boxes of each class by using a simple thresholding technique to segment the activation map of every class. \\blue{However, class activation maps of CAM spotlight the regions that are the most discriminative parts of the object, so CAM also faces the discriminative region problem as WSDDN.}\n\\textbf{WCCN.} To alleviate the discriminative region problem, WCCN~ proposes to use a cascaded network that has three cascade stages trained in an end-to-end pipeline. \\blue{The first stage is the CAM~ network that aims to generate class activation maps and initial proposals. 
The second stage is a segmentation network that uses the class activation maps to train object segmentation for refining object localization. The final stage is a MIL network that performs multiple instances of learning on proposals extracted in the second stage. Because the second and third stages refine object localization, WCCN alleviates the problem that tends to focus on the most discriminative part of the object.}\n\\textbf{ACoL.} To alleviate the discriminative region problem, ACoL~ introduces two parallel-classifiers for object localization using adversarial complementary learning. Specifically, it first leverages the first classifier to localize the most discriminative regions. Then, ACoL uses the masked feature maps by masking the most discriminative regions discovered in the first classifier as the input feature maps of the second classifier. This forces the second classifier to select the next discriminative regions. Finally, ACoL fuses the class activation maps of both classifiers to generate bounding boxes of every class by segmenting the activation map of the highest probability class.", "id": "f31a26c2-85c7-4f0b-b802-55b4548075ab", "level": "subsection", "origin_cites_number": 3, "parent_id": "57f26c22-b2e9-4b99-a5c9-6b9157c32986", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Milestones of WSOD" ], [ "subsection", "CAM-based Methods" ] ], "subsections": [], "title": "CAM-based Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:tec_discriminative}\nIn this section, we will introduce several advanced techniques for solving the discriminative region problem.", "id": "e2bc9197-68e8-4c34-9892-4132e9e3f384", "level": "section", "origin_cites_number": 0, "parent_id": "2f09dfe5-fc96-4d60-bf8a-870e95820098", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", 
"Specific Techniques for Discriminative Region Problem" ] ], "subsections": [ "cc91d332-8e08-4a84-9839-f3a6a101dc58", "e9087d2e-81e5-45fa-a18e-9311df94831e", "fe348b22-323b-4264-b4b5-36e4ad80a171", "3b4da6c9-0073-45d7-89e5-d47c9711fbcb", "89989386-09fe-4531-8cec-bff75368388f", "0144f94a-79d4-402a-a36c-b11a02b3ef8b", "dadb3521-a095-4201-8ae2-60d36e7f5b37", "54ee603e-8a88-4025-9979-69342548efab", "9e808fc8-51b0-437b-a79b-2beb0b2f1a82" ], "title": "Specific Techniques for Discriminative Region Problem" }, { "cite_extract_rate": 0.8, "cites": [ 7089, 7090, 2304, 2305 ], "content": "The context of one region is external information of this region, which can be obtained by masking the region of the feature maps with special numbers (\\eg, zero). There are two types of the strategy of using context modeling as follows.\n\\textbf{Strategy A.} It selects the regions that have a big gap between their scores and their contextual region's scores as positive proposals. For example, \\blue{WSLPDA~ first replaces the pixel values within one proposal with zero to obtain the contextual region. Then, WSLPDA compares the scores of proposals and their contextual region. If the gap between the two scores is large, it indicates that the proposal is likely positive. ContextLocNet~ subtracts the localization score of one proposal from the localization score of the external rectangle region of the proposal. Then, the subtraction is considered as the final localization score of the proposal. Similar to WSLPDA and ContextLocNet, TS$^2$C~ selects a positive proposal by comparing the mean objectness scores of the pixels in one proposal and its surrounding region. But to alleviate the impact of background pixels in the surrounding region, TS$^2$C computes the mean objectness scores only using pixels with large confidence values in the surrounding region.}\n\\textbf{Strategy B.} It selects positive proposals by leveraging the loss of context regions. 
For example, \\blue{OAILWSD~ believes that a proposal not tightly covers the object instance if the loss of the context feature maps of this proposal tends to decrease. Thus, OAILWSD first leverages the context classification loss to label regions. Then, it selects the top-scoring regions whose context class probabilities are low as positive proposals. ICMWSD~ first drops the most discriminative parts of the feature maps to obtain contextual feature maps. Then, it maximizes the loss of the contextual feature maps to force it focuses on the context regions.}", "id": "cc91d332-8e08-4a84-9839-f3a6a101dc58", "level": "subsection", "origin_cites_number": 5, "parent_id": "e2bc9197-68e8-4c34-9892-4132e9e3f384", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Specific Techniques for Discriminative Region Problem" ], [ "subsection", "Context Modeling" ] ], "subsections": [], "title": "Context Modeling" }, { "cite_extract_rate": 0, "cites": [], "content": "In the self-training algorithm, the early prediction instances are then used in the detector for latter training as the pseudo ground-truth instances. There are two types of self-training algorithms: inter-stream and inter-epoch. In inter-stream self-training, the instances of each stream supervise its later stream. In inter-epoch self-training, the instances of each epoch supervise its later epoch. 
The key idea of self-training is that even if the early top-scoring proposals may only focus on the discriminative part of the object, they will be refined after several refinement procedures.", "id": "e9087d2e-81e5-45fa-a18e-9311df94831e", "level": "subsection", "origin_cites_number": 0, "parent_id": "e2bc9197-68e8-4c34-9892-4132e9e3f384", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Specific Techniques for Discriminative Region Problem" ], [ "subsection", "Self-training Algorithm" ] ], "subsections": [ "c90a1426-0fa6-428c-bef7-f7270e86482b", "627794f6-b919-4f19-a73a-0af4d0885500" ], "title": "Self-training Algorithm" }, { "cite_extract_rate": 0.5714285714285711, "cites": [ 2305, 7090, 2296, 2287 ], "content": "OICR~ expects B, C, and D can inherit the class score of A to correctly localize objects in Fig.~\\ref{OICR_comparison} (right). So, OICR adds three refinement classifiers with two fully connected layers in WSDDN to address the issue shown in Fig.~\\ref{OICR_comparison} (left). Specifically, the supervision of the first refinement classifier is the output of WSDDN. As for other refinement classifiers, the supervision of the current refinement classifier is the output of its previous refinement classifier. Inspired by OICR, WSOD2~ consists of numerous classifiers. ICMWSD~ also inserts refinement streams in WSDDN, however, every refinement stream includes a classifier and a regressor. 
Besides, some approaches~ use OICR as their baseline.", "id": "c90a1426-0fa6-428c-bef7-f7270e86482b", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "e9087d2e-81e5-45fa-a18e-9311df94831e", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Specific Techniques for Discriminative Region Problem" ], [ "subsection", "Self-training Algorithm" ], [ "subsubsection", "Inter-stream Self-training" ] ], "subsections": [], "title": "Inter-stream Self-training" }, { "cite_extract_rate": 1, "cites": [ 2275 ], "content": "\\blue{Self-Taught-WS~ uses relative improvement (RI) of the scores of each proposal of two adjacent epochs as a criterion for selecting the positive sample. Specifically, it chooses the proposals of the previous epoch whose intersection over union (IoU) $\\geq 0.5$ with the maximal RI proposal as the positive samples of the current epoch.}", "id": "627794f6-b919-4f19-a73a-0af4d0885500", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "e9087d2e-81e5-45fa-a18e-9311df94831e", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Specific Techniques for Discriminative Region Problem" ], [ "subsection", "Self-training Algorithm" ], [ "subsubsection", "Inter-epoch Self-training" ] ], "subsections": [], "title": "Inter-epoch Self-training" }, { "cite_extract_rate": 1, "cites": [ 2282, 2304 ], "content": "The cascaded network includes several stages and the supervision of the current stage is the output of the previous stage. \\blue{Such as WCCN~ and TS$^2$C~ consist of three stages. The first stage is the CAM module that is to generate initial proposals using class activation maps. The intermediate stage is the object segmentation module that is designed to refine initial proposals. 
The final stage is a multiple instance learning module that is responsible for detecting accurate objects.}", "id": "fe348b22-323b-4264-b4b5-36e4ad80a171", "level": "subsection", "origin_cites_number": 2, "parent_id": "e2bc9197-68e8-4c34-9892-4132e9e3f384", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Specific Techniques for Discriminative Region Problem" ], [ "subsection", "Cascaded Network" ] ], "subsections": [], "title": "Cascaded Network" }, { "cite_extract_rate": 0.5714285714285711, "cites": [ 2936, 7090, 2302, 2287 ], "content": "Bounding box regression can improve object localization performance using instance-level annotations in the training phase, but the WSOD task only accesses image-level labels. To use bounding box regression for refining the initial proposals from SS~ or EB~, some approaches propose to yield high-quality pseudo ground-truth boxes as the supervision of bounding box regression.\nNow, numerous approaches~ include at least one of the bounding box regressors. The supervision of the regressor is the output of previous classifiers.", "id": "3b4da6c9-0073-45d7-89e5-d47c9711fbcb", "level": "subsection", "origin_cites_number": 7, "parent_id": "e2bc9197-68e8-4c34-9892-4132e9e3f384", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Specific Techniques for Discriminative Region Problem" ], [ "subsection", "Bounding Box Regression" ] ], "subsections": [], "title": "Bounding Box Regression" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 2259, 2280, 737, 2935 ], "content": "From Fig.~\\ref{OICR_comparison} (left), some researchers find that the highest score region only covers the most discriminative part of the object. 
To localize the whole object extent, masking the most discriminative part of the object is designed to force the detector to find the next discriminative region. \nTP-WSL~ is a two-phase learning network that detects the next discriminative regions by masking the most discriminative region. In the first phase, it yields class activation maps followed by masking the most discriminative region using a threshold among the activation map of the highest probability class. In the second phase, it multiplies the masked activation map by the feature maps of the second network to refine the feature maps for detecting the next discriminative regions.\n\\blue{Different from TP-WSL that has two backbones, ACoL~ consists of one shared backbone and two parallel-classifiers. The masked feature maps from the first classifier are fed into the second classifier to generate class activation maps. Finally, ACoL locates object instances in the fused activation maps by fusing the two-class activation maps of both classifiers. EIL~ proposes to share the weights of the two parallel-classifiers of ACoL, and it only segments the activation map of the highest probability class from the unmasked branch to yields object proposals. Comparing C-MIDN~ with ACoL, there are three differences. First, the detection network of C-MIDN is WSDDN~, but the detection network of ACoL is CAM~. Second, C-MIDN does not compute the loss of high overlap with the first detection module's top-scoring proposal in the second branch, but ACoL masks the first detection module's top-scoring proposal's region with zero in the second branch. 
Finally, C-MIDN chooses as positive proposals the top-scoring proposals of the second detection module, together with the top-scoring proposals of the first detection module that have low overlap with the already selected proposals, whereas ACoL yields positive proposals by segmenting the fused class activation maps.}", "id": "89989386-09fe-4531-8cec-bff75368388f", "level": "subsection", "origin_cites_number": 6, "parent_id": "e2bc9197-68e8-4c34-9892-4132e9e3f384", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Specific Techniques for Discriminative Region Problem" ], [ "subsection", "Discriminative Region Removal" ] ], "subsections": [], "title": "Discriminative Region Removal" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 1812 ], "content": "Low-level features usually retain richer object details, such as edges, corners, colors, pixels, and so on. We can obtain accurate object localization if we make full use of these low-level features. For example, Grad-CAM~ leverages high-resolution Guided Backpropagation~, which highlights the image's details, to create both high-resolution and class-discriminative visualizations. WSOD2~ first computes the score of a region proposal. Then, it selects the same region in low-level image features (\eg, superpixels) and computes the score of this region. 
Finally, the product of the two scores is the final score of the region proposal.", "id": "0144f94a-79d4-402a-a36c-b11a02b3ef8b", "level": "subsection", "origin_cites_number": 3, "parent_id": "e2bc9197-68e8-4c34-9892-4132e9e3f384", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Specific Techniques for Discriminative Region Problem" ], [ "subsection", "Incorporating Low-level Features" ] ], "subsections": [], "title": "Incorporating Low-level Features" }, { "cite_extract_rate": 0.5, "cites": [ 2937 ], "content": "The segmentation-detection collaborative mechanism includes a segmentation branch and a detection branch. The primary reasons for the collaborative mechanism are the following: 1) MIL (detection) can correctly distinguish an area as an object, but it is not good at detecting whether the area contains the entire object. 2) Segmentation can cover the entire object instance, but it cannot distinguish whether the area is a real object or not~. So, some models leverage deep cooperation between detection and segmentation by supervising each other to achieve accurate localization.\nWS-JDS~ first chooses the region proposals with top-scoring pixels generated by the semantic segmentation branch as the positive samples of the detection branch. Then, it assigns the classification score to all pixels within each positive proposal of the detection branch as the supervision mask of the segmentation branch. 
\blue{Similar to WS-JDS, SDCN~ also combines the detection branch with the segmentation branch, which is introduced in Section~\ref{sec:mil-based methods}.}", "id": "dadb3521-a095-4201-8ae2-60d36e7f5b37", "level": "subsection", "origin_cites_number": 2, "parent_id": "e2bc9197-68e8-4c34-9892-4132e9e3f384", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Specific Techniques for Discriminative Region Problem" ], [ "subsection", "Segmentation-detection Collaborative Mechanism" ] ], "subsections": [], "title": "Segmentation-detection Collaborative Mechanism" }, { "cite_extract_rate": 0.6923076923076921, "cites": [ 2936, 2937, 209, 2294, 2275, 2304, 2296, 802, 8429 ], "content": "\label{sec:wsod_fsod}\nTransforming WSOD to FSOD is another popular technique to achieve object detection using image-level labels, which is designed to train an FSOD model using the output of the WSOD model. The primary problem of transformation is to yield good pseudo ground-truth boxes from WSOD. There are several strategies to mine boxes as the pseudo ground-truth boxes. 1) \textit{top score}: numerous approaches~ select the top-scoring detection boxes of WSOD as the pseudo ground-truth boxes. 2) \textit{relative improvement (RI)}: ST-WSL~ selects the boxes with the maximal relative score improvement between two adjacent epochs as the pseudo ground-truth boxes. 3) \textit{mergence}: W2F~ merges several small boxes into a big candidate box and uses these merged boxes as the pseudo ground-truth boxes for later training. SLV~ first merges the scores of several boxes onto the pixels and then generates bounding boxes of each class by using a simple thresholding technique to segment the map of every class.\nIn addition, several FSOD models have been used, including Fast R-CNN~, Faster R-CNN~, and SSD~. 
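As a rough illustration of the \textit{top score} mining strategy above, a minimal sketch could look as follows (all function and variable names here are hypothetical, not taken from any cited method):

```python
# Hypothetical sketch of "top score" pseudo ground-truth mining:
# for each class present in the image-level labels, keep the WSOD
# detection with the highest score as a pseudo box for FSOD training.

def mine_pseudo_ground_truth(detections, image_labels):
    """detections: list of (box, class_id, score); image_labels: class ids in the image."""
    pseudo_gt = []
    for cls in image_labels:
        cls_dets = [d for d in detections if d[1] == cls]
        if not cls_dets:
            continue  # WSOD produced no box for this labeled class
        box, _, _ = max(cls_dets, key=lambda d: d[2])  # top-scoring detection
        pseudo_gt.append((box, cls))  # treated as ground truth when training FSOD
    return pseudo_gt
```

An FSOD model such as Fast R-CNN would then be trained on these `(box, class)` pairs as if they were human annotations.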
Numerous approaches~ use prediction boxes of WSOD as the pseudo ground-truth boxes to train Fast R-CNN. W2F~ uses prediction boxes of WSOD to train Faster R-CNN. GAL-fWSD~ uses prediction boxes of WSOD to train SSD.", "id": "54ee603e-8a88-4025-9979-69342548efab", "level": "subsection", "origin_cites_number": 13, "parent_id": "e2bc9197-68e8-4c34-9892-4132e9e3f384", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Specific Techniques for Discriminative Region Problem" ], [ "subsection", "Transforming WSOD to FSOD" ] ], "subsections": [], "title": "Transforming WSOD to FSOD" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 8429 ], "content": "\blue{In the previous sections, we individually introduced several techniques that are commonly used to improve detection performance, listing numerous approaches in detail. In this section, we will compare and discuss these techniques. }\n\blue{Firstly, context modeling and discriminative region removal are two similar techniques. Context modeling calculates the scores of the proposal and its context region separately, and then derives the positive proposal from the two scores. On the other hand, discriminative region removal directly erases top-scoring regions by setting zeros in the feature maps of the first branch and then feeding the erased feature maps into the second branch.}\n\blue{Secondly, the self-training algorithm usually co-occurs with bounding box regression. Bounding box regression is responsible for refining the initial proposals from SS~ or EB~, while the self-training algorithm is designed to refine the prediction result of the baseline. The core problem of both the self-training algorithm and bounding box regression is yielding good pseudo ground-truth boxes.}\n\blue{Thirdly, the cascaded network and segmentation-detection collaborative mechanism are two similar techniques. 
They leverage the segmentation module to improve the performance of the object detection module. A cascaded network is a sequential structure in which the previous module is responsible for training the latter module. In contrast, the segmentation-detection collaborative mechanism is a circular structure in which detection and segmentation cooperate deeply by supervising each other to achieve accurate localization.}\n\blue{Finally, the technique of incorporating low-level features leverages the high-resolution characteristics of low-level features to improve object localization. The key idea of transforming WSOD to FSOD is to make full use of the advantages of the network structure of the FSOD model (\eg, Fast R-CNN~).}", "id": "9e808fc8-51b0-437b-a79b-2beb0b2f1a82", "level": "subsection", "origin_cites_number": 3, "parent_id": "e2bc9197-68e8-4c34-9892-4132e9e3f384", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Specific Techniques for Discriminative Region Problem" ], [ "subsection", "Discussions" ] ], "subsections": [], "title": "Discussions" }, { "cite_extract_rate": 0.5, "cites": [ 2275, 2294 ], "content": "\label{sec:tec_multiple}\n\blue{In this section, we will introduce how to make full use of the spatial relationship of proposals to solve the multiple instance problem introduced in Section~\ref{sec:challenge}. Specifically, two proposals that are far away from each other are likely to correspond to two object instances, while two proposals with large overlap may correspond to the same object instance.}\n\blue{ST-WSL~ leverages a graph to detect multiple instances of the same category in an image. It first chooses the N top-scoring proposals of each positive class as the nodes of the graph. An edge between two nodes indicates a large overlap between them. 
Then it selects the nodes with the greatest degree (number of connections to other nodes) as positive proposals using the Non-Maximum Suppression (NMS) algorithm~. PCL~ introduces the proposal cluster to replace the proposal bag that includes all of the proposals of each category. PCL assigns spatially adjacent proposals with the same label to the same proposal cluster. If proposals do not overlap each other, they will be assigned to different proposal clusters. Then, PCL selects the highest-scoring proposal from each proposal cluster as the positive proposal. W2F~ iteratively merges the top-scoring, highly overlapping proposals into big proposals. Finally, these big proposals are considered positive proposals.}", "id": "74ea7ff3-69a1-4ddc-a744-5eb8ef38e5ce", "level": "section", "origin_cites_number": 4, "parent_id": "2f09dfe5-fc96-4d60-bf8a-870e95820098", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Specific Techniques for Multiple Instance problem" ] ], "subsections": [], "title": "Specific Techniques for Multiple Instance problem" }, { "cite_extract_rate": 0.5, "cites": [ 2291, 2280, 209, 737, 2284, 2935, 802, 2295, 2286 ], "content": "In this section, we will introduce several advanced techniques for solving the speed problem introduced in Section~\ref{sec:challenge}. The main reason for the slow speed is that the MIL-based method adopts SS~ or EB~ to generate a large number of initial proposals, most of which are negative.\nThe methods for improving speed can be broadly categorized into three groups: 1) \textit{Transformation-based}~: these approaches use their prediction boxes as the pseudo ground-truth boxes to train Faster R-CNN~ or SSD~ and then use Faster R-CNN or SSD at inference time. 2) \textit{Sliding-window-based}~: these approaches use the sliding window technique to quickly generate proposals by walking through every point on the feature map. 
3) \textit{Heatmap-based}~: these approaches segment the heatmap using a threshold to generate proposals, which improves the speed of proposal generation.", "id": "80d0c9da-6233-48ab-bd98-ebb5dc2b3fd1", "level": "section", "origin_cites_number": 18, "parent_id": "2f09dfe5-fc96-4d60-bf8a-870e95820098", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Specific Techniques for Speed Problem" ] ], "subsections": [], "title": "Specific Techniques for Speed Problem" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:tricks}\nBesides the techniques in the previous chapter, training tricks \blue{without changing the network structure} can also improve detection results. In this section, we will introduce several training tricks for improving object localization.", "id": "ce7d5bf1-8d1b-41d8-aef3-9041d9f0eac1", "level": "section", "origin_cites_number": 0, "parent_id": "2f09dfe5-fc96-4d60-bf8a-870e95820098", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Training Tricks" ] ], "subsections": [ "08e89b6d-afea-4b27-bd7a-52da1fd7729d", "0adf3465-774a-41a4-ac25-00d063ca6407", "63c15766-8dd2-4067-a214-0290e264c274", "d60cfdf8-d638-4c16-8c02-2aefddfe203a" ], "title": "Training Tricks" }, { "cite_extract_rate": 0.875, "cites": [ 2259, 2282, 2306, 737, 7089, 2935, 2296 ], "content": "Previous approaches~ use all of the images at once, without a training sequence, to train the detection model. The easy-to-hard strategy denotes that the model is trained using images of progressively increasing difficulty. In this way, the model can achieve better detection results. For example, ZLDN~ first computes the difficulty scores of the images. Then, all of the images are ranked in ascending order based on the difficulty scores. 
Finally, ZLDN uses the images with increasing difficulty to progressively train itself.", "id": "08e89b6d-afea-4b27-bd7a-52da1fd7729d", "level": "subsection", "origin_cites_number": 8, "parent_id": "ce7d5bf1-8d1b-41d8-aef3-9041d9f0eac1", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Training Tricks" ], [ "subsection", "Easy-to-hard Strategy" ] ], "subsections": [], "title": "Easy-to-hard Strategy" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 2284 ], "content": "Negative evidence comprises low-scoring regions, activations, and activation maps. For example, WELDON~ uses the classification scores of the $k$ top-scoring proposals and the $m$ low-scoring proposals to generate the classification scores of the image by simply summing. WILDCAT~ leverages the $k^+$ highest-probability activations and $k^-$ lowest-probability activations of the activation map to generate the prediction score. NL-CCAM~ uses the lowest-probability activation maps. Specifically, it first ranks all of the activation maps in descending order based on the probability of every class. Then, it fuses these class activation maps using a specific combinational function into one map, which is segmented to predict object instances.", "id": "0adf3465-774a-41a4-ac25-00d063ca6407", "level": "subsection", "origin_cites_number": 3, "parent_id": "ce7d5bf1-8d1b-41d8-aef3-9041d9f0eac1", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Training Tricks" ], [ "subsection", "Negative Evidence" ] ], "subsections": [], "title": "Negative Evidence" }, { "cite_extract_rate": 1, "cites": [ 2283 ], "content": "If the loss function of the model is non-convex, it tends to fall into a sub-optimal solution and falsely localizes object parts while missing the full object extent during training~. 
So C-MIL~ replaces the non-convex loss function with a series of smoothed loss functions to alleviate the problem that the model tends to get stuck in local minima. At the beginning of training, C-MIL first performs the image classification task. During the training process, the loss function of C-MIL is slowly transformed from the convex image classification loss to the non-convex object detection loss function.", "id": "63c15766-8dd2-4067-a214-0290e264c274", "level": "subsection", "origin_cites_number": 1, "parent_id": "ce7d5bf1-8d1b-41d8-aef3-9041d9f0eac1", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Training Tricks" ], [ "subsection", "Optimizing Smoothed Loss Functions" ] ], "subsections": [], "title": "Optimizing Smoothed Loss Functions" }, { "cite_extract_rate": 0, "cites": [], "content": "\blue{In the previous sections, we individually introduced several training tricks that are independent of the network structure. In this section, we will compare and discuss these tricks.}\n\blue{Firstly, the easy-to-hard strategy is applied to the data phase, which is responsible for adjusting the order of the training images. Secondly, negative evidence acts on the training phase, which is designed to refine positive proposals or feature maps. 
Finally, optimizing smoothed loss functions acts on the optimization phase, which is responsible for avoiding sub-optimal solutions.}", "id": "d60cfdf8-d638-4c16-8c02-2aefddfe203a", "level": "subsection", "origin_cites_number": 0, "parent_id": "ce7d5bf1-8d1b-41d8-aef3-9041d9f0eac1", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Training Tricks" ], [ "subsection", "Discussions" ] ], "subsections": [], "title": "Discussions" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:datasets}", "id": "1bfb95db-ecd4-4dab-be9d-ade21af94465", "level": "section", "origin_cites_number": 0, "parent_id": "2f09dfe5-fc96-4d60-bf8a-870e95820098", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Datasets and Performance Evaluation" ] ], "subsections": [ "0e25f43d-18cb-4250-9fdd-efdf225dd053", "80078116-a35b-49e5-88e4-dc487a92e339", "14b5b33f-b99a-4375-971a-7af70c7e8948" ], "title": "Datasets and Performance Evaluation" }, { "cite_extract_rate": 0.4, "cites": [ 486, 895 ], "content": "Datasets play an important role in the WSOD task. Most WSOD approaches use PASCAL VOC~, MSCOCO~, ILSVRC~, or CUB-200~ as training and test datasets.\n\textbf{PASCAL VOC.} It includes 20 categories and tens of thousands of images with instance annotations. PASCAL VOC has several versions, such as PASCAL VOC 2007, 2010, and 2012. Specifically, PASCAL VOC 2007 consists of 2,501 training images, 2,510 validation images, and 4,092 test images. PASCAL VOC 2010 consists of 4,998 training images, 5,105 validation images, and 9,637 test images. PASCAL VOC 2012 consists of 5,717 training images, 5,823 validation images, and 10,991 test images.\n\textbf{MSCOCO.} It is a large-scale object detection, segmentation, and captioning dataset. MSCOCO has 80 object categories, 330K images ($>$200K labeled), and 1.5 million object instances. 
In object detection, MSCOCO is as popular as the PASCAL VOC datasets. Because MSCOCO has more images and categories than the PASCAL VOC datasets, training on MSCOCO is more difficult than training on PASCAL VOC.\n\textbf{ILSVRC.} The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a large-scale dataset. In ILSVRC, the model usually uses 200 fully labeled categories and 1,000 categories in object detection and object localization, respectively. ILSVRC has several versions, such as ILSVRC 2013, ILSVRC 2014, and ILSVRC 2016. Specifically, ILSVRC 2013, which is usually used in object detection, has 12,125 images for training, 20,121 images for validation, and 40,152 images for testing. In addition, ILSVRC 2014 and 2016 inherit the ILSVRC 2012 dataset in object localization, which contains 1.2 million images of 1,000 categories in the training set. The ILSVRC 2012 dataset has 50,000 and 100,000 images with labels in the validation and test set, respectively.\n\textbf{CUB-200.} Caltech-UCSD Birds 200 (CUB-200) is a challenging image dataset containing 200 bird species. It focuses on the study of subordinate categorization. CUB-200-2011~ is an extended version of CUB-200, which adds many images for each category and labels new part localization annotations. 
CUB-200-2011 contains 5,994 images in the training set and 5,794 images in the test set.", "id": "0e25f43d-18cb-4250-9fdd-efdf225dd053", "level": "subsection", "origin_cites_number": 5, "parent_id": "1bfb95db-ecd4-4dab-be9d-ade21af94465", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Datasets and Performance Evaluation" ], [ "subsection", "Datasets" ] ], "subsections": [], "title": "Datasets" }, { "cite_extract_rate": 0.8, "cites": [ 2280, 737, 2284, 2295 ], "content": "In the state-of-the-art WSOD approaches, there are three standard evaluation metrics: mAP, CorLoc, and top error.\n\textbf{mAP (mean Average Precision).} Average Precision (AP) is usually used in image classification and object detection. It consists of precision and recall. If $tp$ denotes the number of correct predictions among all of the predicted positive samples, $fp$ denotes the number of wrong predictions among all of the predicted positive samples, and $fn$ denotes the number of ground-truth instances that are not detected, precision and recall can be computed as\n\begin{equation}\n    \begin{aligned}\n        \text{recall} & = tp / (tp + fn), \\\n        \text{precision} &= tp / (tp + fp),\n    \end{aligned}\n\end{equation}\nwhere a prediction is counted as correct if the IoU between it and its corresponding ground-truth box is $\geq$ 0.5. Meanwhile, the IoU is defined as\n\begin{equation}\n    {\rm IoU}(b, b^g) = area(b \cap b^g) / area(b \cup b^g),\n\end{equation}\nwhere $b$ denotes a prediction sample, $b^g$ denotes a corresponding ground-truth box, and $area$ denotes the region size of the intersection or union. 
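As a minimal sketch of the IoU definition above (assuming boxes are given as `(x1, y1, x2, y2)` corner coordinates, a convention not specified in the text):

```python
def iou(b, bg):
    """IoU of two boxes in (x1, y1, x2, y2) form:
    area(b ∩ bg) / area(b ∪ bg)."""
    ix1, iy1 = max(b[0], bg[0]), max(b[1], bg[1])
    ix2, iy2 = min(b[2], bg[2]), min(b[3], bg[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((b[2] - b[0]) * (b[3] - b[1])
             + (bg[2] - bg[0]) * (bg[3] - bg[1]) - inter)
    return inter / union if union > 0 else 0.0
```

A prediction would then count as correct when `iou(b, bg) >= 0.5`, matching the threshold used above.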
The mAP is the mean of all of the class average precisions and is a final evaluation metric of performance on the test dataset.\n\textbf{CorLoc (Correct Localization).} For every class, CorLoc denotes the percentage of images in which at least one prediction box has an IoU $\geq$ 50\% with a ground-truth box. CorLoc is a final evaluation metric of localization accuracy on the trainval dataset.\n\textbf{Top Error.} \blue{Top error consists of Top-1 classification error (1-err cls), Top-5 classification error (5-err cls), Top-1 localization error (1-err loc), and Top-5 localization error (5-err loc). Specifically, Top-1 classification error is equal to $1.0 - cls_1$, where $cls_1$ denotes the accuracy of the highest prediction score (likewise for Top-1 localization error). Top-5 classification error is equal to $1.0 - cls_5$, where $cls_5$ denotes the accuracy when a sample counts as correct if any of its five highest-scoring predictions is correct (likewise for Top-5 localization error). Numerous approaches~ use top error to evaluate the performance of the model.}", "id": "80078116-a35b-49e5-88e4-dc487a92e339", "level": "subsection", "origin_cites_number": 5, "parent_id": "1bfb95db-ecd4-4dab-be9d-ade21af94465", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Datasets and Performance Evaluation" ], [ "subsection", "Evaluation Metrics" ] ], "subsections": [], "title": "Evaluation Metrics" }, { "cite_extract_rate": 0.658536585365853, "cites": [ 2282, 2289, 2291, 2306, 2936, 7089, 2294, 8429, 2287, 2259, 7090, 2275, 2296, 802, 2283, 2286, 2937, 737, 2935, 2304, 2302, 2295, 2290, 2280, 2284, 2276, 2305 ], "content": "\noindent\textbf{Results on Pascal VOC.} The results of state-of-the-art WSOD methods on datasets Pascal VOC 2007, 2010, and 2012 are shown in Table~\ref{table_voc}. 
The WSOD methods with ``+FR\" denote that their initial predictions are fed into Fast R-CNN~ and serve as pseudo ground-truth bounding box annotations, \ie, these methods transform WSOD into FSOD problems. From the results, we can observe that the performance on all three Pascal VOC datasets has achieved unprecedented progress in the last few years (\eg, mAP 52.1\% in CVPR'20 vs. 29.1\% in CVPR'16 on Pascal VOC 2012). Meanwhile, comparing the methods and their counterparts with Fast R-CNN (\eg, OICR vs. OICR+FR), we can find that the detection performance can be further improved by using this FSOD transforming strategy.\n\textbf{Results on MSCOCO.} The results of state-of-the-art WSOD methods on the MSCOCO dataset are shown in Table~\ref{table_coco}. We only report the AP metric, and the AP$_{50}$ denotes that the IoU threshold is equal to $0.5$. Similarly, the performance on MSCOCO also doubled in the last few years (\eg, AP$_{50}$ 11.5\% vs. 24.8\% on the test set). Since MSCOCO contains more object categories than the PASCAL VOC datasets, the results on MSCOCO are still far from satisfactory. However, the performance gains from transforming WSOD to FSOD are relatively marginal (\eg, 0.7\% AP gain for the PCL model).\n\textbf{Results on ILSVRC 2012 and CUB-200.} TABLE~\ref{table_ILSVRC_CUB} summarizes the object localization performance of state-of-the-art WSOD methods on these two datasets. Compared to PASCAL VOC and MSCOCO, only a few WSOD methods have evaluated their performance on these two benchmarks. From TABLE~\ref{table_ILSVRC_CUB}, we can find the performance gains are also significant (1-err cls 35.6\% vs. 27.7\% in ILSVRC 2012).\n\addtolength{\tabcolsep}{-4.5pt} \n\begin{table}[t]\n    \centering\n    \caption{The summary of detection results (mAP (\%) and CorLoc (\%)) of state-of-the-art WSOD methods on Pascal VOC 2007, 2010, and 2012 datasets. 
The FR means Fast R-CNN~.}\n \\label{table_voc}\n \\begin{tabular}{lcccccc}\n \\toprule\n \\multirow{2}*{Approach}&\n \\multicolumn{2}{c}{2007}&\n \\multicolumn{2}{c}{2010}&\n \\multicolumn{2}{c}{2012}\\\\\n \\cmidrule (r){2-3} \\cmidrule (r){4-5} \\cmidrule (r){6-7}\n &mAP &CorLoc &mAP &CorLoc &mAP &CorLoc \\\\\n \\midrule\n WSDDN~$_{\\text{CVPR2016}}$&39.3&58.0&36.2&59.7&-&-\\\\\n \\rowcolor{mygray}\n WSLPDA~$_{\\text{CVPR2016}}$&39.5&52.4&30.7&-&29.1&-\\\\\n ContextLocNet~$_{\\text{ECCV2016}}$&36.3&55.1&-&-&35.3&54.8\\\\\n \\rowcolor{mygray}\n OICR~$_{\\text{CVPR2017}}$&42.0&61.2&-&-&38.2&63.5\\\\\n WCCN~$_{\\text{CVPR2017}}$&42.8&56.9&39.5&-&37.9&-\\\\\n \\rowcolor{mygray}\n ST-WSL~$_{\\text{CVPR2017}}$&41.7&56.1&-&-&39.0&58.8\\\\\n SPN~$_{\\text{ICCV2017}}$&-&60.6&-&-&-&-\\\\\n \\rowcolor{mygray}\n TST~$_{\\text{ICCV2017}}$&34.5&60.8&-&-&-&-\\\\\n PCL~$_{\\text{TPAMI2018}}$&45.8&63.0&-&-&41.6&65.0\\\\\n \\rowcolor{mygray}\n GAL-fWSD~$_{\\text{CVPR2018}}$&47.0&68.1&\\textbf{45.1}&\\textbf{68.3}&43.1&67.2\\\\\n W2F~$_{\\text{CVPR2018}}$&52.4&70.3&-&-&47.8&69.4\\\\\n \\rowcolor{mygray}\n ZLDN~$_{\\text{CVPR2018}}$&47.6&61.2&-&-&42.9&61.5\\\\\n MELM~$_{\\text{CVPR2018}}$&47.3&61.4&-&-&42.4&-\\\\\n \\rowcolor{mygray}\n TS$^2$C~$_{\\text{ECCV2018}}$&44.3&61.0&-&-&40.0&64.4\\\\\n C-WSL~$_{\\text{ECCV2018}}$&45.6&63.3&-&-&43.0&64.9\\\\\n \\rowcolor{mygray}\n WSRPN~$_{\\text{ECCV2018}}$&47.9&66.9&-&-&43.4&67.2\\\\\n C-MIL~$_{\\text{CVPR2019}}$&40.7&59.5&-&-&46.7&67.4\\\\\n \\rowcolor{mygray}\n WS-JDS~$_{\\text{CVPR2019}}$&45.6&64.5&39.9&63.1&39.1&63.5\\\\\n Pred NET~$_{\\text{CVPR2019}}$&53.6&\\textbf{71.4}&-&-&49.5&70.2\\\\\n \\rowcolor{mygray}\n WSOD2~$_{\\text{ICCV2019}}$&53.6&69.5&-&-&47.2&\\textbf{71.9}\\\\\n OAILWSD~$_{\\text{ICCV2019}}$&47.6&66.7&-&-&43.4&66.7\\\\\n \\rowcolor{mygray}\n TPWSD~$_{\\text{ICCV2019}}$&51.5&68.0&-&-&45.6&68.7\\\\\n SDCN~$_{\\text{ICCV2019}}$&50.2&68.6&-&-&43.5&67.9\\\\\n \\rowcolor{mygray}\n 
C-MIDN~$_{\\text{ICCV2019}}$&52.6&68.7&-&-&50.2&71.2\\\\\n ICMWSD~$_{\\text{CVPR2020}}$&\\textbf{54.9}&68.8&-&-&\\textbf{52.1}&70.9\\\\\n \\rowcolor{mygray}\n SLV~$_{\\text{CVPR2020}}$ &53.5&71.0&-&-&49.2&69.2\\\\\n \\hline\n OICR~+FR$_{\\text{CVPR2017}}$ &47.0&64.3&-&-&42.5&65.6\\\\\n \\rowcolor{mygray}\n PCL~+FR$_{\\text{TPAMI2018}}$ &48.8&66.6&-&-&44.2&68.0\\\\\n MEFF~+FR$_{\\text{CVPR2018}}$ &51.2&-&-&-&-&-\\\\\n \\rowcolor{mygray}\n C-WSL~+FR$_{\\text{ECCV2018}}$ &47.8&65.6&-&-&45.4&66.9\\\\\n WSRPN~+FR$_{\\text{ECCV2018}}$ &50.4&68.4&-&-&45.7&69.3\\\\ \n \\rowcolor{mygray} \n WS-JDS~+FR$_{\\text{CVPR2019}}$ &52.5&68.6&\\textbf{45.7}&\\textbf{68.1}&46.1&69.5\\\\\n SDCN~+FR$_{\\text{ICCV2019}}$ &53.7&\\textbf{72.5}&-&-&46.7&69.5\\\\\n \\rowcolor{mygray}\n C-MIDN~+FR$_{\\text{ICCV2019}}$ &53.6&71.9&-&-&\\textbf{50.3}&\\textbf{73.3}\\\\\n SLV~+FR$_{\\text{CVPR2020}}$ &\\textbf{53.9}&72.0&-&-&-&-\\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\\addtolength{\\tabcolsep}{4.5pt} \n\\addtolength{\\tabcolsep}{-1.5pt} \n\\begin{table}[t]\n \\centering\n \\caption{Detetion results on MSCOCO dataset comes from~. These models use VGG16 as their convolutional neural network. 
There is no difference between AP and mAP under the MSCOCO context.}\n \\label{table_coco}\n \\scalebox{0.95}{\n \\begin{tabular}{lccccc}\n \\toprule\n \\multirow{2}*{Approach}& \n \\multirow{2}*{Year}&\n \\multicolumn{2}{c}{Val}&\n \\multicolumn{2}{c}{Test}\\\\\n \\cmidrule (r){3-4} \\cmidrule (r){5-6}\n &&AP&AP$_{50}$&AP&AP$_{50}$\\\\\n \\midrule\n WSDDN~&CVPR2016&-&-&-&11.5\\\\\n \\rowcolor{mygray}\n WCCN~&CVPR2017&-&-&-&12.3\\\\\n PCL~&TRAMI2018&8.5&19.4&-&-\\\\\n \\rowcolor{mygray}\n C-MIDN~&ICCV2019&9.6&21.4&-&-\\\\\n WSOD2~&ICCV2019&10.8&22.7&-&-\\\\\n \\rowcolor{mygray}\n ICMWSD~&CVPR2020&\\textbf{11.4}&\\textbf{24.3}&\\textbf{12.1}&\\textbf{24.8}\\\\\n \\hline\n Diba et al.~+SSD~&arXiv 2017&-&-&-&13.6\\\\ \n \\rowcolor{mygray}\n OICR~+Ens+FR~&CVPR2017&7.7&17.4&-&-\\\\\n MEFF~+FR~&CVPR2018&8.9&19.3&-&-\\\\\n \\rowcolor{mygray}\n PCL~+Ens.+FR~&TPAMI2018&9.2&19.6&-&-\\\\\n \\bottomrule\n \\end{tabular}\n }\n\\end{table}\n\\addtolength{\\tabcolsep}{1.5pt}\n\\begin{table*}[t]\n \\centering\n \\caption{Object localization performance on ILSVRC 2012 and CUB-200-2011 datasets.}\n \\label{table_ILSVRC_CUB}\n \\begin{tabular}{lccccccccc}\n \\toprule\n \\multirow{2}*{Approach}& \n \\multirow{2}*{Year}&\n \\multicolumn{4}{c}{ILSVRC 2012 (top error \\%)}&\n \\multicolumn{4}{c}{CUB-200-2011 (top error \\%)}\\\\\n \\cmidrule (r){3-6} \\cmidrule (r){7-10}\n &&1-err cls&5-err cls&1-err loc&5-err loc&1-err cls&5-err cls&1-err loc&5-err loc\\\\\n \\midrule\n CAM~&CVPR2016&35.6&13.9&57.78&45.26&-&-&-&-\\\\\n \\rowcolor{mygray}\n ACoL~&CVPR2018&32.5&\\textbf{12.0}&54.17&36.66&-&-&54.08&39.05\\\\\n SPG~&ECCV2018&-&-&51.4&\\textbf{35.05}&-&-&53.36&40.62\\\\\n \\rowcolor{mygray}\n DANet~&ICCV2019&32.5&\\textbf{12.0}&54.17&40.57&\\textbf{24.6}&\\textbf{7.7}&47.48&38.04\\\\\n NL-CCAM~&WACV2020&\\textbf{27.7}&-&\\textbf{49.83}&39.31&26.6&-&47.6&\\textbf{34.97}\\\\\n \\rowcolor{mygray}\n EIL~&CVPR2020&29.73&-&53.19&-&25.23&-&\\textbf{42.54}&-\\\\\n \\bottomrule\n 
\\end{tabular}\n\\end{table*}\n\\begin{table*}[t]\n \\centering\n \\caption{Some techniques and tricks for improving detection results and the approaches that utilize\n them. 1) Cont: Context modeling, 2) Self-t: Self-training algorithm, 3) Casc: Cascaded network, 4) BboxR: Bounding box regression, 5) DisRR: Discriminative region removal, 6) Low-l: Incorporating low-level features, 7) Seg-D: Segmentation-detection collaborative mechanism, 8) Trans: Transforming WSOD to FSOD, 9) E-t-h: Easy-to-hard strategy, 10) NegE: Negative evidence, 11) SmoL: Optimizing smoothed loss functions.}\n \\label{table09}\n \\begin{tabular}{lcccccccccccc}\n \\toprule\n \\multirow{2}*{Approach}&\n \\multicolumn{8}{c}{Specific techniques for discriminative region problem} & \n \\multicolumn{3}{c}{Training tricks}\\\\\n \\cmidrule (r){2-9} \\cmidrule (r){10-12}\n &Cont&Self-t&Casc&BboxR&DisRR&Low-l&Seg-D&Trans&E-t-h&NegE&SmoL\\\\\n \\midrule\n \\rowcolor{mygray}\n WSDDN~&&&&&&&&&&&\\\\\n CAM~&&&&&&&&&&&\\\\\n \\rowcolor{mygray}\n WSLPDA~&$\\surd$&&&&&&&&&&\\\\\n WELDON~&&&&&&&&&&$\\surd$&\\\\\n \\rowcolor{mygray}\n ContextLocNet~&$\\surd$&&&&&&&&&&\\\\\n Grad-CAM~&&&&&&$\\surd$&&&&&\\\\\n \\rowcolor{mygray}\n OICR~&&$\\surd$&&&&&&$\\surd$&&&\\\\\n WCCN~&&&$\\surd$&&&&&&&&\\\\\n \\rowcolor{mygray}\n ST-WSL~&&$\\surd$&&&&&&$\\surd$&&&\\\\\n WILDCAT~&&&&&&&&&&$\\surd$&\\\\\n \\rowcolor{mygray}\n SPN~&&&&&&&&&&&\\\\\n TP-WSL~&&&&&$\\surd$&&&&&&\\\\ \n \\rowcolor{mygray}\n PCL~&&$\\surd$&&&&&&$\\surd$&&&\\\\\n GAL-fWSD~&&&&&&&&$\\surd$&&&\\\\\n \\rowcolor{mygray}\n W2F~&&$\\surd$&&&&&&$\\surd$&&&\\\\\n ACoL~&&&&&$\\surd$&&&&&&\\\\\n \\rowcolor{mygray}\n ZLDN~&&&&&&&&&$\\surd$&&\\\\\n TS$^2$C~&$\\surd$&&$\\surd$&&&&&$\\surd$&&&\\\\\n \\rowcolor{mygray}\n SPG~&&&&&&&&&&&\\\\\n WSRPN~&&&&&&&&&&&\\\\\n \\rowcolor{mygray}\n C-MIL~&&&&&&&&&&&$\\surd$\\\\\n WS-JDS~&&&&&&&$\\surd$&$\\surd$&&&\\\\\n \\rowcolor{mygray}\n ADL~&&&&&&&&&&&\\\\\n Pred NET~&&&&$\\surd$&&&&$\\surd$&&&\\\\\n 
\\rowcolor{mygray}\n WSOD2~&&$\\surd$&&$\\surd$&&$\\surd$&&&&&\\\\\n OAILWSD~&$\\surd$&$\\surd$&&&&&&&&&\\\\\n \\rowcolor{mygray}\n TPWSD~&&$\\surd$&&$\\surd$&&&&&&&\\\\\n SDCN~&&&&&&&$\\surd$&$\\surd$&&&\\\\\n \\rowcolor{mygray}\n C-MIDN~&&$\\surd$&&&$\\surd$&&&$\\surd$&&&\\\\\n DANet~&&&&&&&&&&&\\\\\n \\rowcolor{mygray}\n NL-CCAM~&&&&&&&&&&$\\surd$&\\\\\n ICMWSD~&$\\surd$&$\\surd$&&$\\surd$&&&&&&&\\\\\n \\rowcolor{mygray}\n EIL~&&&&&$\\surd$&&&&&&\\\\\n SLV~&&$\\surd$&&$\\surd$&&&&$\\surd$&&&\\\\\n \\bottomrule\n \\end{tabular}\n\\end{table*}", "id": "14b5b33f-b99a-4375-971a-7af70c7e8948", "level": "subsection", "origin_cites_number": 41, "parent_id": "1bfb95db-ecd4-4dab-be9d-ade21af94465", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Datasets and Performance Evaluation" ], [ "subsection", "Experimental Results" ] ], "subsections": [], "title": "Experimental Results" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:directions}\nAlthough we have summarized many advanced techniques and tricks for improving detection results, there are still several research directions that can be further explored.", "id": "5d4d4d9b-ac69-47c5-b147-5bcd28ddb450", "level": "section", "origin_cites_number": 0, "parent_id": "2f09dfe5-fc96-4d60-bf8a-870e95820098", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Future Directions and Tasks" ] ], "subsections": [ "5ecf381a-e969-4630-938a-820602070f9b", "806eba73-1bd2-4931-b096-1b96d4f1feac" ], "title": "Future Directions and Tasks" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 97, 2938, 2275, 799 ], "content": "\\textbf{Better Initial Proposals.} The main proposal generators of the existing methods are selective search~, edge boxes~, heatmap, and sliding window. 
Selective search and edge boxes are time-consuming and yield a large number of initial proposals, most of which are negative. Segmenting the heatmap tends to focus on the most discriminative part of the object. The performance of the sliding window is strongly dependent on the size of the objects: if the size of the object instances is roughly fixed, the sliding window works very well; otherwise, it works poorly. Because these generators have inherent disadvantages, we need to design a proposal generator that can yield fewer and more accurate initial proposals. The quality of the initial proposals directly affects the detection performance of the detector, so how to generate good initial proposals may be a new research direction.\n\\textbf{Better Positive Proposals.} Most WSOD methods select the top-scoring proposals as positive proposals, which tend to focus on the most discriminative parts of the object rather than the whole object region. To address this problem, ST-WSL~ selects positive proposals according to the number of their surrounding proposals, and Self-Taught-WS~ selects positive proposals based on the relative improvement (RI) of the score of each proposal between two adjacent epochs. Moreover, the key to self-training and cascaded networks is selecting accurate proposals as the pseudo ground-truth boxes for later training. Thus, how to design a better algorithm that accurately selects positive proposals may be an important research direction.\n\\textbf{Lightweight Network.} Today's state-of-the-art object detectors~ leverage a very deep CNN to extract image feature maps and high-dimensional fully connected layers to detect object instances, achieving satisfactory detection performance. However, the deep CNN and high-dimensional fully connected layers require large memory and strong GPU computation power. Hence, a deep network is difficult to deploy on CPU-only devices (\\eg, mobile phones). \n
With the popularity of mobile devices, lightweight network with few parameters has received more and more attention from researchers, such as Light-Head R-CNN~. Thus, designing a lightweight network in weakly supervised object detection may be a new research direction.", "id": "5ecf381a-e969-4630-938a-820602070f9b", "level": "subsection", "origin_cites_number": 6, "parent_id": "5d4d4d9b-ac69-47c5-b147-5bcd28ddb450", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Future Directions and Tasks" ], [ "subsection", "Model Directions" ] ], "subsections": [], "title": "Model Directions" }, { "cite_extract_rate": 0.65, "cites": [ 799, 2942, 206, 2939, 2933, 7407, 2320, 802, 2941, 7405, 209, 520, 2940 ], "content": "\\textbf{Medical Imaging.} \\blue{With the development of deep learning, it has evolved into cross-learning with multiple disciplines, especially the medical field. Because of lacking brain's Magnetic Resonance Imaging (MRI) and X-rays images with sufficient labels, weakly-supervised brain lesion detection~ has received attention from researchers. The purpose of weakly-supervised brain lesion detection is to give the model the ability to accurately locate lesion region and classify lesion category that helps the doctor complete the diagnosis of the disease. Weakly-supervised lesion detection is not only applied in brain disease, but also other organ diseases, such as chest, abdomen, and pelvis. In addition to lesion detection, weakly-supervised learning is applied in disease prognosis~. During hospital visits, patients need to undergo a large number of tests to pinpoint their disease. These tests are generally presented to doctors and patients in the form of graphic reports. However, these numerous graphic reports lack correct labeling information. 
So, medical imaging may be another potential research direction in a weakly supervised setting.}\n\\textbf{3D Object Detection.} \\blue{In recent years, with the continuous improvement of the accuracy of object detection in images~, 3D object detection~ has received unprecedented attention. The purpose of 3D object detection is to detect object instances in the point cloud using 3D bounding boxes. Compared with 2D object detection, 3D object detection tends to require more computation, and its supervision is more labor-intensive and more difficult to obtain. Therefore, how to train light and accurate 3D detection models in the point cloud using simple labels may be a big challenge. Fortunately, weakly-supervised object detection has been successfully applied in 2D object detection. According to the above analysis, we think that 3D weakly-supervised object detection, which uses weak supervision (\\eg, 2D bounding boxes and text labels) to train object detection models in the 3D scene, may be a hot research direction.}\n\\textbf{Video Object Detection.} \\blue{Video object detection~ aims to classify and locate object instances in a piece of video. One solution is to break the video into many frames and let the detector perform object detection on these frame images~. However, the detector then faces a big problem: the quality of these frame images may be degraded. To improve the performance of video object detection, expanding the training dataset is a good approach. Unfortunately, tagging object locations and categories in videos is much more difficult than in 2D images. \n
Therefore, training video object detection in the weakly-supervised setting is necessary.}", "id": "806eba73-1bd2-4931-b096-1b96d4f1feac", "level": "subsection", "origin_cites_number": 20, "parent_id": "5d4d4d9b-ac69-47c5-b147-5bcd28ddb450", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Future Directions and Tasks" ], [ "subsection", "Application Directions" ] ], "subsections": [], "title": "Application Directions" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:conclusion}\nIn this paper, we summarize plenty of the deep learning WSOD methods and give a lot of solutions to solve the above challenges. In summary, the main contents of this paper are listed as follows.\n\\begin{itemize}\n \\item \\blue{We analyze the background, and main challenges, and basic framework of WSOD. Furthermore, we introduce several landmark methods in detail.}\n \\item For main challenges, we analyze almost all of the WSOD methods since 2016 and summarize numerous techniques and training tricks (cf. 
TABLE~\\ref{table09}).\n \\item We introduce currently popular datasets and important evaluation metrics in the WSOD task.\n \\item We conclude and discuss valuable insights and guidelines for future progress in model and application directions.\n\\end{itemize}\n\\section*{Acknowledgment}\nThis work was supported by the National Natural Science Foundation of China (U19B2043, 61976185), Zhejiang Natural Science Foundation (LR19F020002, LZ17F020001), the Fundamental Research Funds for the Central Universities, Chinese Knowledge Center for Engineering Sciences and Technology.\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\\bibliographystyle{IEEEtran}\n\\bibliography{IEEEabrv,main_jrnl}\n\\end{document}", "id": "55b3e435-aa9e-4a20-83de-0ebb04c9d0bf", "level": "section", "origin_cites_number": 0, "parent_id": "2f09dfe5-fc96-4d60-bf8a-870e95820098", "prefix_titles": [ [ "title", "Deep Learning for Weakly-Supervised Object Detection and Object Localization: A Survey" ], [ "section", "Conclusions" ] ], "subsections": [], "title": "Conclusions" } ]
65
[ 514, 799, 1534, 305, 206, 486, 2933, 802, 895, 2934, 97, 209, 1230, 8607, 520, 2282, 2291, 2306, 2936, 7089, 2294, 2287, 2259, 7090, 2275, 2296, 2283, 2286, 7637, 2937, 737, 2935, 2304, 2302, 2295, 2280, 2284, 2305, 509, 1812, 8429, 2289, 2290, 2276, 2938, 2942, 2939, 7407, 2320, 2941, 7405, 2940 ]
0.63238
[ "Wenjie Xiong", "Jakub Szefer" ]
Survey of Transient Execution Attacks
2020
2020-05-27T15:43:04Z
cs.CR
Transient execution attacks, also called speculative execution attacks, \hl{have drawn much interest in the last few years as they can cause critical data leakage.} Since the first disclosure of transient execution attacks in January 2018, a number of new attack types or variants have been demonstrated in different processors. \hl{A transient execution attack consists of two main components: transient execution itself and a covert channel that is used to actually exfiltrate the information.} \hl{Transient execution is caused by fundamental features of modern processors that boost performance and efficiency, while covert channels are unintended channels that can be abused for information leakage, resulting from sharing of the micro-architecture components.} Given the severity of the transient execution attacks, they have motivated computer architects in both industry and academia to rethink the design of the processors and to propose hardware defenses. \hl{To help understand the transient execution attacks, this paper summarizes the phases of the attacks and the security boundaries that are broken by the attacks.} \hl{This paper further analyzes possible causes of transient execution and different types of covert channels. This paper in addition presents metrics for comparing different aspects of the transient execution attacks (security boundaries that are broken, required control of the victim's execution, etc.) and uses them to evaluate the feasibility of the different attacks -- both the existing attacks, and potential new attacks suggested by our classification used in the paper.} The paper finishes by discussing the different mitigations at the micro-architecture level that have so far been proposed.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "2f4b0bae-709d-4d09-b856-92d94c793465", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ] ], "subsections": [ "7fcf47b3-6a7c-46cc-afa5-5ea71922a855", "72738b70-3d67-4913-883a-4dac5223aebb", "fe310dc9-e89b-4c6e-91e7-5d2005110f30", "a1434b30-3403-4a4b-ae71-f8feeb87c7a5", "5c358c86-0e6e-485e-ae28-1102c5f65ba7", "ccfa843e-cbae-4445-b4cf-783a75c1f4e0", "2fa00822-ddfc-46d2-883a-0da7b331ff92", "c43469f3-8be3-4b00-9473-de68dc682fca" ], "title": "root" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 1365, 7343, 1362, 1366, 1361, 1364, 7347, 1363, 7344, 7346, 7345, 1360 ], "content": "\\label{sec:intro}\nIn the past decades, computer architects have been working hard to improve the performance of computing systems.\nDifferent optimizations have been introduced in the various processor micro-architectures to improve the performance,\nincluding pipelining, out-of-order execution, and branch prediction~.\nSome of the optimizations require aggressive speculation of the executed instructions.\nFor example, while waiting for a conditional branch to be resolved, branch prediction will predict whether the branch will be taken or not,\nand the processor begins to execute code down the predicted control flow path before the outcome of the branch is known.\nSuch speculative execution of instructions causes the micro-architectural state of the processor to be modified.\nThe execution of the instructions down the incorrect speculated path is called the {\\em transient execution} -- because\nthe instructions execute transiently and should ideally disappear with no side-effects if there was mis-speculation.\nWhen a mis-speculation is detected, the architectural and micro-architectural side effects should be cleaned up -- but it is not done so today, leading to a number of recently publicized transient execution attacks~ \\hl{that leak 
data across different security boundaries in computing systems.}\nToday's processor designs aim to ensure that the execution of a program results in architectural states as if each instruction is\nexecuted in the program order.\nAt the Instruction Set Architecture~(ISA) level, today's processors behave correctly.\nUnfortunately, the complicated underlying micro-architectural states, due to different optimizations, are modified during the transient execution, and the various transient execution attacks have shown that data can be leaked from the micro-architectural states.\nFor example, timing channels can lead to information leaks that can reveal some of the micro-architectural states which are not visible\nat the ISA level~.\n\\hl{In particular, the micro-architectural states of a processor \nare not captured today by the ISA specification, and there are micro-architectural vulnerabilities that cannot be found or analyzed by only examining the processor's ISA.}\nBesides focusing on pure performance optimization, many processors are designed to share hardware units in order to reduce area and improve power efficiency.\nFor example,\nhyper-threading allows different programs to execute concurrently on the same processor pipeline by sharing\nthe execution and other functional units among the hardware threads in the pipeline.\nAlso, because supply voltage does not scale with the size of the transistors~,\nmodern processors use multi-core designs.\nIn multi-core systems, caches, memory-related logic, and peripherals are shared among the different processor cores.\nSharing of the resources has led to numerous timing-based side and covert channels~ -- the channels can occur independently of transient execution, or together with transient execution, which is the focus of this survey. \n\\begin{figure*}[t]\n\\includegraphics[width=4.5in]{gfx/attack_phase.pdf}\n\\caption{\\small Phases of transient execution attacks. \n
}\n \\label{fig:attack_phase}\n\\end{figure*}\nTransient execution combined with covert channels results in {\\em transient execution attacks} which can\ncompromise the confidentiality of the system. As shown in Figure~\\ref{fig:attack_phase},\nduring such attacks, the secret or sensitive data is available during transient execution\n-- this differentiates the transient execution attacks from conventional covert channel attacks where the data is assumed to be always available to the sender, not just during transient execution\\footnote{There are also attacks using the timing difference in transient execution, e.g., . These attacks are still conventional covert channel attacks, where the timing difference comes from the prediction units. Thus, these attacks are not in the scope of this paper, but are listed in Section~\\ref{sec:related_attacks}.}. \nAfter the secret data is accessed during transient execution and encoded into a covert channel, the secret data\ncan later be extracted by the attacker from the covert channel.\nA number of transient execution attack variants have been demonstrated, e.g.,\nSpectre~, Meltdown~,\nForeshadow~, LazyFP~,\nMicro-architectural Data Sampling (MDS)~, Load Value Injection (LVI)~.\nThese attacks have been shown to allow data leaks across different security boundaries, e.g., system privilege level, SGX enclave, sandbox, etc.\n\\hl{The transient execution attacks have been assigned 9 Common Vulnerabilities and Exposures (CVE) IDs out of 14 CVE IDs that correspond to vulnerabilities about gaining information on Intel products in 2018, and 4 out of 9 in 2019, according to the CVE Details database}~. \\hl{These attacks also affect other vendors, such as AMD or Arm, for example. \n
}\nIn addition, these attacks have raised a lot of interest, and motivated computer architects to rethink the design of processors and propose a number of hardware defenses~ -- this survey summarizes the attacks and the hardware defenses, while software-based defenses are summarized in existing work~.", "id": "7fcf47b3-6a7c-46cc-afa5-5ea71922a855", "level": "section", "origin_cites_number": 36, "parent_id": "2f4b0bae-709d-4d09-b856-92d94c793465", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Introduction" ] ], "subsections": [ "b8107d71-c4b6-4382-bf75-5adafc902ff7", "bd2ddec6-f05f-4609-a914-7d97684f0510", "a9860064-4a88-4044-a4e8-bc7acbf8360b" ], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "This paper provides a survey of existing {\\em transient execution attacks} from Jan. 2018 to July 2020.\n\\hl{We start by providing background on the micro-architectural features that lead to the attacks.\nWe then define the transient execution attacks and summarize the phases and attack scenarios.\nWe analyze the types of transient execution and covert channels leveraged by the transient execution attacks to show the root causes of these attacks.}\nIn the end, we discuss the mitigation strategies for the transient execution and covert channels.\nThe contributions of this survey are the following:\n\\begin{itemize}\n\\item We summarize different attack scenarios and summarize the security boundaries that are broken by the attacks.\n\\item We provide a taxonomy of the existing transient execution attacks by analyzing the causes of transient execution that they leveraged,\nand we propose metrics to compare the feasibility of the attacks.\n\\item We summarize and categorize the existing and potential timing-based covert channels in micro-architectures that can\nbe used with transient execution attacks, and also propose metrics to compare these covert channels.\n\\item We discuss the feasibility of the 
existing attacks based on the metrics we propose.\n\\item We compare the different mitigation strategies that have been so far designed at the micro-architectural level in various publications.\n\\end{itemize}\n\\label{sec:background}\nThis section gives background about various optimizations in micro-architecture\nthat lead to the transient execution, and thus in turn contribute to the transient execution attacks.\nMany more details about CPU micro-architecture and pipeline details available in architecture textbooks, e.g.,~.\nThis section also discusses conventional side channels and covert channels based on timing;\nmany more details about the timing channels are given in other surveys, e.g.,~.\nThe two concepts combined lead to the transient execution attacks, which\nis the topic of the rest of this survey.", "id": "b8107d71-c4b6-4382-bf75-5adafc902ff7", "level": "subsection", "origin_cites_number": 2, "parent_id": "7fcf47b3-6a7c-46cc-afa5-5ea71922a855", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Introduction" ], [ "subsection", "Outline and Contributions" ] ], "subsections": [], "title": "Outline and Contributions" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:background_perf_opt}\nDue to the data dependencies between instructions, the CPU pipeline sometimes has to stall until the dependencies are resolved.\nTo reduce the stalls in the pipeline and to keep the pipeline full, many performance-improving and area-reducing optimizations have been proposed and implemented\nin today's commodity processors.\n\\textbf{Out-of-Order Execution (OoO):}\nIn OoO, some of the younger instructions can be executed earlier than the older instructions (based on program order) if all the dependencies of the younger instructions are available.\nOoO helps to improve instruction-level parallelism.\n\\hl{In OoO, the life cycle of instruction is: fetch, dispatch, issue (a.k.a. execute), and retire (a.k.a. 
commit).\nThe instruction is first fetched into the pipeline, and after decoding, micro-ops are dispatched.\nOnce all the dependencies of the instruction are satisfied, an instruction (or micro-op) is issued for execution. \nOoO uses a reorder buffer (ROB) to hold the instructions (or micro-ops) and execution results and to retire each instruction (or micro-op)\nin the program order. Instructions (or micro-ops) are retired (committed) when they reach the head of the ROB.}\n\\textbf{Speculative Execution:}\nSpeculative execution occurs when there is a control flow instruction for which the processor does not know the result yet.\nFor example, a branch condition needs to be computed before the branch result (taken or not taken) can be obtained.\nTo improve performance, processors often speculate the outcome of such instructions and begin to execute instructions down\nthe speculated path. Later, either speculation is confirmed to be correct, or there is a mis-speculation, and instructions that began to execute down the mis-speculated path are squashed (cleared from the pipeline). \nAfter mis-speculation, the processor should clean up all architectural and micro-architectural state\nto create an illusion as if these instructions never executed.\nIn addition to control flow, speculation is used whenever there are other types of prediction, such as prefetching\nor value prediction (not used in commodity processors).\n\\textbf{Delayed Fault Validation:}\nSimilar to speculation for control flow, processors speculate that there is no fault and continue execution even when there is a potential fault.\nIn particular, fault checking can be delayed, and some processors allow instructions to continue executing,\nduring which time the data (that should have been inaccessible due to the fault) can be accessed and micro-architecture states can be modified. \n
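This squash-yet-leak behavior can be sketched as a toy model (all names, values, and semantics below are invented for illustration; no real ISA or micro-architecture is modeled, and 0x41 simply stands in for the secret byte sitting just past the array):

```python
# Toy model of delayed fault validation: an out-of-bounds load executes
# transiently, modifies the cache model, and is only squashed afterwards.

def transient_oob_read(array, index, cache):
    """Model a load whose bounds fault is validated only at retirement."""
    # The load executes first, even when the index is (one) past the end.
    secret = (array + [0x41])[index]
    cache.add(secret)          # micro-architectural side effect (cache fill)
    if index >= len(array):    # fault detected at retirement:
        return None            # the architectural result is squashed ...
    return secret              # ... but the cache state above is not

cache = set()
assert transient_oob_read([1, 2, 3], 3, cache) is None  # squashed result
assert 0x41 in cache                                    # leaked into the cache
```

The point of the sketch is only that the architectural result disappears while the cache-model state survives, which is exactly what the covert channel later observes.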
If there is any fault, the processor should then clean up all the architectural and micro-architectural state changes.\n\\textbf{Caching:}\nOne of the key optimizations for memory-related operations is caching. Modern processors\nhave multiple levels of caches, typically L1, L2, and Last Level Cache (LLC). \nA cache hit occurs when data is found in a cache, and the processor does not need to go to the main memory to fetch the data.\nBased on the level of cache where data is found, it can be up to about $500\\times$ faster to get data if it is a hit.\nOther cache-like structures exist in processors, such as TLBs, which can also offer performance improvement by caching data.\n\\textbf{Sharing of Functional Units in Hyper-Threading within a Processor Core:}\nTo reduce the area and power consumption, functional units are shared among hardware threads in hyper-threading.\nToday, most processors are built with two-thread SMT (Simultaneous Multi-Threading) configurations.\nIn SMT, in the execution stage, the processor decides which instructions from which threads can execute\nbased on the available execution units. It is expected that not all programs need all the units at the same time, so the sharing allows for good performance\nwhile reducing the need to duplicate all units among all SMT threads.\n\\textbf{Sharing of Resources Among Cores in Multi-Core Processors:}\nSimilar to sharing within the processor core, many resources are shared among cores.\nThese include caches, prefetchers, various buffers, memory controllers, directories, and other cache coherence related structures,\nmemory busses, I/O, etc. \n
Rather than each core having a separate one of these units, they are shared to reduce hardware resources.", "id": "bd2ddec6-f05f-4609-a914-7d97684f0510", "level": "subsection", "origin_cites_number": 0, "parent_id": "7fcf47b3-6a7c-46cc-afa5-5ea71922a855", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Introduction" ], [ "subsection", "Performance Optimizations that Enable Transient Execution Attacks" ] ], "subsections": [], "title": "Performance Optimizations that Enable Transient Execution Attacks" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{figure}[t]\n\\includegraphics[width=3.2in]{gfx/side_and_covert_channel_background-crop.pdf}\n\\caption{\\small Components of a side or covert channel: (a) a sender in a covert channel, also called victim in a side channel;\n(b) and (c) a change in state of the processor; and (d) receiver in the side or a covert channel.}\n \\label{fig:side_and_covert_channels_bkg}\n\\end{figure}\nFigure~\\ref{fig:side_and_covert_channels_bkg} shows the components of a typical side or covert channel.\nFirst, there is the entity sending (or leaking) the information. Commonly this entity is called the sender in covert channels, and the victim\nin side channels. Execution of some instructions by the sender causes the state of the processor to be modified.\nThe states can be internal processor states or physical states such as temperature or EM emanations.\nFinally, the receiver observes the state change to learn about a bit of information transmitted by the sender.\nIn timing channels, which are the focus of this survey, the focus is on changes in the processor, which can be observed by\nsoftware through measurements of the execution time of some operations, Figure~\\ref{fig:side_and_covert_channels_bkg} (c).\nThe state modification can be the architectural state or the micro-architectural state. 
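The sender-state-receiver flow can be sketched as a toy simulation (the hit/miss latencies, the threshold, and the set-based cache model are invented for illustration; a real receiver times actual memory accesses to the shared micro-architectural state):

```python
# Toy simulation of a timing-based covert channel (Flush+Reload style).

HIT_CYCLES, MISS_CYCLES, THRESHOLD = 40, 300, 100  # invented latencies

def sender(secret_byte, shared_cache):
    """Encode the secret by bringing exactly one of 256 probe lines into the cache."""
    shared_cache.add(secret_byte)

def receiver(shared_cache):
    """Time all 256 probe lines; the single fast (cached) line reveals the secret."""
    fast_lines = [line for line in range(256)
                  if (HIT_CYCLES if line in shared_cache else MISS_CYCLES) < THRESHOLD]
    assert len(fast_lines) == 1, "expected exactly one cached probe line"
    return fast_lines[0]

shared_cache = set()      # stands in for the shared micro-architectural state
sender(0x5A, shared_cache)
assert receiver(shared_cache) == 0x5A
```

Here the channel transfers one byte per round; real channels additionally need synchronization between sender and receiver and handling of timing noise.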
However, because by design processors\ndo not leak any information from the architectural state (e.g., two processes in different memory spaces cannot access each other's memory),\nmajority of the timing channels abuse the micro-architectural state changes, which cannot be directly read,\nbut which can be observed through timing differences. These covert channels are the result of the various performance optimizations\ndiscussed in Section~\\ref{sec:background_perf_opt}. There are currently only limited hardware features that are designed to defend against covert channels, e.g., Intel Cache Allocation Technology (CAT)~, and many covert channels remain open.", "id": "a9860064-4a88-4044-a4e8-bc7acbf8360b", "level": "subsection", "origin_cites_number": 1, "parent_id": "7fcf47b3-6a7c-46cc-afa5-5ea71922a855", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Introduction" ], [ "subsection", "Covert and Side Channels" ] ], "subsections": [], "title": "Covert and Side Channels" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:threat_model}\nThe existing transient execution attacks have various threat models. \nIn this section, we summarize the attacker's goals and assumptions on the attacker (e.g., the levels of hardware sharing between the attacker and the victim) in the existing attacks. 
\nLater, we use the assumptions defined in this section when we discuss the attack components.", "id": "72738b70-3d67-4913-883a-4dac5223aebb", "level": "section", "origin_cites_number": 0, "parent_id": "2f4b0bae-709d-4d09-b856-92d94c793465", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Threat Model" ] ], "subsections": [ "a449f265-2e15-4dea-a5bb-c16581af2086", "d8574c88-cb25-4247-ae6f-e4c229d3f2c4", "a085e1c8-0ba0-4d1f-8eb0-ccf85b8d5e56" ], "title": "Threat Model" }, { "cite_extract_rate": 0.18181818181818102, "cites": [ 1363, 7344 ], "content": "\\label{sec:attacker_goal}\n\\begin{figure}[t]\n\\includegraphics[width=4in]{gfx/security_boundary-crop.pdf}\n\\caption{\\small Security boundaries in computer systems that are broken by transient execution attacks.}\n\\label{fig:security_boundary}\n\\end{figure}\nThere are many security boundaries (between different privilege levels or security domains) in a typical processor, as shown in Figure~\\ref{fig:security_boundary}.\nThe goal of the attacker of the transient execution attacks is to cross the security boundaries to obtain information related to the victim's secret data. \nWe categorize the possible privilege levels or security domains where the attack can originate, shown as red arrows in Figure~\\ref{fig:security_boundary}, and wherefrom it is trying to extract data as follows:\n\\begin{enumerate} \n\\item \\textbf{user-level program attacking another user-level program:}\nThe attacker and the victim are two separate user applications, \\hl{e.g., the attacker process tries to learn the memory content of another process}. 
\\hl{demonstrates how an attacker process learns the host private key when a victim OpenSSH server process is running.}\n\\item \\textbf{user-level program attacking the kernel:}\nThe attacker runs in the user level and wants to read the privileged data of the kernel, \\hl{e.g., }\\hl{ demonstrates an attack that dumps kernel memory from an unprivileged application.}\n\\item \\textbf{virtual machine attacking another virtual machine:}\nThe attacker and the victim reside in two different guest virtual machines, \\hl{e.g.,} \\hl{ shows it is possible for an attacker VM to learn the private key of an OpenSSH server in the victim VM.} \n\\item \\textbf{virtual machine attacking the hypervisor:}\n\\hl{The attacker is a guest OS and the victim is the host hypervisor, e.g.,}~\\hl{ demonstrates an attack against KVM that leaks host memory when the attacker has full control of the OS inside a VM.}\n\\item \\textbf{code running outside an enclave attacking a victim running inside an enclave:}\nThe victim runs inside a security domain protected by some hardware scheme, e.g., SGX enclaves~, XOM~, Aegis~, Bastion~, Sanctum~ or Keystone~, and the attacker code runs outside of it. \n~ \\hl{demonstrated such an attack that retrieves secrets from inside the SGX enclave.}\n\\item \\textbf{attacks across security domains protected by software:}\nThe victim runs inside the security domain protected by some software scheme, e.g., sandboxes in JavaScript, and the attacker code runs outside of it, as shown in~.\n\\end{enumerate}\nIn addition, some attacks focus on \\textbf{Stale Data} stored somewhere in hardware, e.g., data remaining in registers or buffers after it should have been cleaned up~.\nSometimes the stale data is of a different privilege level or security domain, so the attacker will break the security domain when accessing stale data.\n\\hl{Existing transient execution attacks can break all of the security boundaries above. \n
Details will be discussed in Section}~\\ref{sec:consequence}, especially in Table~\\ref{tbl:privilege_level}.", "id": "a449f265-2e15-4dea-a5bb-c16581af2086", "level": "subsection", "origin_cites_number": 11, "parent_id": "72738b70-3d67-4913-883a-4dac5223aebb", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Threat Model" ], [ "subsection", "Attacker's Goal: Breaking Security Boundaries" ] ], "subsections": [], "title": "Attacker's Goal: Breaking Security Boundaries" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:colocation_attacker}\n\\hl{The transient execution attacks relies on attacker's ability to access the micro-architectural states that contains victim's information.}\nIf there are both the attacker and the victim running, \\hl{it is assumed the attacker can co-locate with the victim. The attack can be launched in a variety of settings.}\nThe attacker and the victim could share the hardware in the following settings:\n\\begin{itemize}\n\\item \\textbf{Same thread:} running on the same logical core (hardware thread) in a time-sliced setting, and there might be context switches in between.\n\\item \\textbf{Same core, simultaneous multithreading (SMT):} running on different logical cores (hardware threads) on the same physical core.\n\\item \\textbf{Same chip, different core:} sharing LLC, memory bus, and other peripheral devices.\n\\item \\textbf{Same motherboard:} sharing memory bus and peripheral devices.\n\\end{itemize}\n\\hl{In some Spectre-type attacks, the speculation of the victim can be controlled by the attacker when the attacker is able to train the prediction unit. In this case, the attack must be able to share the prediction unit (entry) with the victim. We will discuss the level of sharing needed to use different prediction unit in Table}~\\ref{tbl:pred_share} in Section~\\ref{sec:transient_exe}.\n\\hl{Covert channels also relies on sharing of hardware components. 
When the attacker and the victim share different components, different covert channels can be used. More details will be discussed in Table}~\\ref{tbl:channel_share} in Section~\\ref{sec:covert_channel}. \n\\hl{Colocating is not a trivial task. An attack is more feasible when less sharing is required.}", "id": "d8574c88-cb25-4247-ae6f-e4c229d3f2c4", "level": "subsection", "origin_cites_number": 0, "parent_id": "72738b70-3d67-4913-883a-4dac5223aebb", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Threat Model" ], [ "subsection", "Level of Hardware Sharing" ] ], "subsections": [], "title": "Level of Hardware Sharing" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:addr_space_attacker}\nThe attacker's and the victim's address space can be one of the following:\n\\begin{itemize}\n\\item \\textbf{In the same address space:} In this case, the attacker and the victim have the same virtual to physical address mapping.\n\\item \\textbf{In different address spaces with shared memory:} In this case, the attacker and the victim have different virtual to physical address mappings. But some of the attacker's pages and the victim's pages map to the same physical pages. This can be achieved by sharing dynamic libraries (e.g., libc).\n\\item \\textbf{In different address spaces without shared memory:} The attacker and the victim have different virtual to physical address mapping. And their physical addresses do not overlap.\n\\end{itemize}\n\\hl{During the attack, the attacker and the victim need to share the same entry of hardware component to mis-train the prediction unit or for covert channel. To share the same entry, the attacker need to control the address to map to the same entry as the victim. 
The required address space for the attacker in mis-training will be listed in Table}~\\ref{tbl:pred_share} in Section~\\ref{sec:transient_exe}.", "id": "a085e1c8-0ba0-4d1f-8eb0-ccf85b8d5e56", "level": "subsection", "origin_cites_number": 0, "parent_id": "72738b70-3d67-4913-883a-4dac5223aebb", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Threat Model" ], [ "subsection", "Address Space of the Attacker" ] ], "subsections": [], "title": "Address Space of the Attacker" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:components}\nWe define transient execution attacks as attacks that access data during transient execution\nand then leverage a covert channel to leak information.\nThe phases of these attacks are shown in Figure~\\ref{fig:attack_phase}. \n\\hl{\nAlthough not indicated in the ``transient execution attacks'' name, covert channels are an essential component of transient execution attacks, because the micro-architectural states changed during transient execution are not visible at the architectural level, and are only accessible by using a covert channel to learn the state change (and thus the secret).} \n\\hl{In this section, we summarize the attack scenarios, e.g., the attacker's goal, the location of the attacker, etc.}", "id": "fe310dc9-e89b-4c6e-91e7-5d2005110f30", "level": "section", "origin_cites_number": 0, "parent_id": "2f4b0bae-709d-4d09-b856-92d94c793465", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Transient Execution Attack Scenarios" ] ], "subsections": [ "fa21ac29-586c-4e50-91de-e91a1ca0cb11", "5fa522cd-4086-49f3-84c8-dca2f9c54afa", "ae65d75e-af86-4e24-b70c-f761bc7d302d" ], "title": "Transient Execution Attack Scenarios" }, { "cite_extract_rate": 0.2, "cites": [ 1363, 7344 ], "content": "\\label{sec:attacker_goal}\nThere are many security boundaries (between different privilege levels or security domains) in a typical processor, as shown in 
Figure~\\ref{fig:security_boundary}.\nThe attacker's goal in transient execution attacks is to cross the security boundaries to obtain information related to the victim's protected data. \nIn Figure~\\ref{fig:security_boundary}, we categorize the possible privilege levels or security domains where the attack can originate and from which it tries to extract data as follows:\n\\begin{enumerate} \n\\item \\textbf{Across user-level applications:}\nThe attacker and the victim are two separate user applications, \\hl{and the attacker process tries to learn the memory content of another process, e.g.,}~\\hl{ demonstrates how an attacker process learns the private key while a victim OpenSSH server process is running.}\n\\item \\textbf{User-level program attacking the kernel:}\nThe attacker runs in the user level and wants to read the privileged data of the kernel, \\hl{e.g., }\\hl{ demonstrates an attack that allows an unprivileged application to dump kernel memory.}\n\\item \\textbf{Virtual machine attacking another virtual machine:}\nThe attacker and the victim reside in two different guest virtual machines, \\hl{e.g.,} \\hl{ shows it is possible for an attacker VM to learn the private key of an OpenSSH server in the victim VM.} \n\\item \\textbf{Virtual machine attacking the hypervisor:}\n\\hl{The attacker is a guest OS and the victim is the host hypervisor, e.g.,}~\\hl{ demonstrates an attack against KVM that leaks the hypervisor's memory when the attacker has full control of the OS inside a VM.}\n\\item \\textbf{Attacking the victim running inside an enclave:}\nThe victim runs inside a security domain protected by some hardware scheme, e.g., SGX enclaves~, XOM~, Aegis~, Bastion~, Sanctum~ or Keystone~, and the attacker code runs outside of it,\ne.g.,~ \\hl{demonstrates such an attack that retrieves a secret from inside the SGX enclave.}\n\\begin{figure}[t]\n\\includegraphics[width=1.8in]{gfx/security_boundary-crop.pdf}\n\\caption{\\small Security boundaries in 
computer systems that are broken by transient execution attacks.}\n\\label{fig:security_boundary}\n\\end{figure}\n\\item \\textbf{Across security domains protected by software:}\nThe victim runs inside the security domain protected by some software scheme, e.g., sandboxes in JavaScript, and the attacker code runs outside of it, as shown in~.\n\\end{enumerate}\n\\hl{All of the security boundaries listed above are broken by one or more of the existing transient execution attacks. The attacks have been shown to be able to retrieve coherent data, as well as non-coherent data. Details will be discussed in Section}~\\ref{sec:consequence}, especially in Table~\\ref{tbl:privilege_level}.", "id": "fa21ac29-586c-4e50-91de-e91a1ca0cb11", "level": "subsection", "origin_cites_number": 10, "parent_id": "fe310dc9-e89b-4c6e-91e7-5d2005110f30", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Transient Execution Attack Scenarios" ], [ "subsection", "Attacker's Goal: Breaking Security Boundaries" ] ], "subsections": [ "879606d1-7adf-4d1a-8361-d7d97df7b922" ], "title": "Attacker's Goal: Breaking Security Boundaries" }, { "cite_extract_rate": 0.5, "cites": [ 7345 ], "content": "\\hl{We categorize all the data in the processor state into \\textbf{coherent data} and \\textbf{non-coherent data}. \\textit{Coherent data} are those coherent with the rest of the system, e.g., data in caches are maintained by cache coherence protocol. Coherent data can be accessed by its address. \\textit{Non-coherent data} are temporarily fetched into micro-architectural buffers or registers, are not synchronized with the rest of the system, and may not be cleaned up after use, e.g., data in the STL buffer. 
Thus, non-coherent data may be {stale}.\nNon-coherent data that is left in the buffer can be of a different privilege level or security domain, so the attacker will break the security domain when accessing the non-coherent stale data.} Some attacks~ \nfocus on attacking buffers to retrieve {such non-coherent data}, which in turn breaks the security boundaries.", "id": "879606d1-7adf-4d1a-8361-d7d97df7b922", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "fa21ac29-586c-4e50-91de-e91a1ca0cb11", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Transient Execution Attack Scenarios" ], [ "subsection", "Attacker's Goal: Breaking Security Boundaries" ], [ "subsubsection", "Attacks Targeting Coherent and Non-Coherent Data" ] ], "subsections": [], "title": "Attacks Targeting Coherent and Non-Coherent Data" }, { "cite_extract_rate": 0, "cites": [], "content": "As shown in Figure~\\ref{fig:attack_phase}, we divide the transient execution attacks into three phases:\n{\\bf Setup Phase}: The processor executes a set of instructions that modify the micro-architectural\nstates such that it will later cause the transient execution of the desired code (called {\\em disclosure gadget}) to occur in a manner predictable to the attacker. An example is\nperforming indirect jumps to a specific address to ``train'' the branch predictor. \nThe setup can be done by the attacker running some code or the attacker causing the victim to run in a predictable manner so that the micro-architectural state is set up as the attacker expects.\n{\\bf Transient Execution Phase}: The transient execution is actually triggered in this phase, and the desired disclosure gadget executes due to the prior training in the setup phase. \\hl{The piece of code that accesses and transmits the secret into the covert channel is called the }{\\em disclosure gadget}, following the terminology in~. 
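To make the disclosure-gadget shape concrete, the following C sketch shows the classic Spectre-V1-style pattern; the array names, sizes, and the 512-byte per-value stride are illustrative assumptions, not taken from any particular paper's proof of concept. Architecturally, the bounds check blocks any out-of-range access; under mis-speculation the two dependent loads may still run transiently, leaving a secret-dependent `probe_array` line in the cache:

```c
#include <stddef.h>
#include <stdint.h>

#define VICTIM_LEN 16
uint8_t victim_array[VICTIM_LEN];
/* Covert-channel medium: one (assumed) 512-byte stride per possible
 * byte value, so each value maps to its own cache line. */
uint8_t probe_array[256 * 512];
volatile uint8_t sink;

/* Disclosure gadget: the bounds check prevents any out-of-range access
 * architecturally, but a mis-trained branch predictor may let the two
 * dependent loads execute transiently for an out-of-range x. */
void victim_gadget(size_t x) {
    if (x < VICTIM_LEN) {
        sink = probe_array[victim_array[x] * 512];
    }
}
```

The snippet only demonstrates the architectural behavior of the gadget; whether the out-of-bounds path actually executes transiently depends on the processor's predictor state and cannot be asserted portably.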
The instructions belonging to the disclosure gadget are eventually squashed, and the architectural states of the transient instructions are rolled back, but as many of the attacks show, the micro-architectural changes caused by the disclosure gadget remain, so secret data can be later decoded from the covert channel.\nThis phase can be either executed by the victim or by the attacker.\n{\\bf Decoding Phase}: The attacker is able to recover the data via the covert channel by running the attacker's code or by triggering the victim's code and observing the behavior or result of the execution.\n\\hl{During an attack, the \\textit{Setup Phase} and the \\textit{Transient Execution Phase} cause the transient execution of the disclosure gadget to occur. Then, the \\textit{Transient Execution Phase} and the \\textit{Decoding Phase} leverage the covert channel to transmit data to the attacker. Thus, the Transient Execution Phase is critical for both accessing the secret and encoding it into a channel.}", "id": "5fa522cd-4086-49f3-84c8-dca2f9c54afa", "level": "subsection", "origin_cites_number": 1, "parent_id": "fe310dc9-e89b-4c6e-91e7-5d2005110f30", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Transient Execution Attack Scenarios" ], [ "subsection", "Phases of the Attack" ] ], "subsections": [], "title": "Phases of the Attack" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{figure*}[t]\n\\includegraphics[width=5.5in]{gfx/attacker_location-crop.pdf}\n\\caption{\\small Possible scenarios of transient execution attacks:\na-d) the attacker triggers part of the victim code to execute transiently to leak secret through the covert channel, or\ne-h) the attacker executes transiently to access data that she does not have permission to access and encodes it into the covert channel.\n}\n\\label{fig:attacker_loc}\n\\end{figure*}\nEach phase listed above can be performed by the attacker code or by the victim code, resulting in eight 
attack scenarios in Figure~\\ref{fig:attacker_loc}. When a phase is performed by the victim, the attacker is assumed to have the ability to trigger the victim to execute the disclosure gadget.\nWe categorize the attacks based on who is executing transiently to encode the secret into the covert channel.", "id": "ae65d75e-af86-4e24-b70c-f761bc7d302d", "level": "subsection", "origin_cites_number": 0, "parent_id": "fe310dc9-e89b-4c6e-91e7-5d2005110f30", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Transient Execution Attack Scenarios" ], [ "subsection", "Transient Execution by the Victim vs. the Attacker" ] ], "subsections": [ "6713e38a-d96b-4406-a311-ec52b0304cd4", "51eda86d-c5b8-4235-b0d7-16c899cda60f", "c77580cf-3a89-4371-a1f2-3b055342929d" ], "title": "Transient Execution by the Victim vs. the Attacker" }, { "cite_extract_rate": 0, "cites": [], "content": "If the victim is the one who executes transiently, as shown in Figure~\\ref{fig:attacker_loc} (a--d), the victim is triggered to execute a disclosure gadget that encodes some secret into the covert channel during transient execution, and the attacker obtains the secret by decoding the data from the covert channel. In this scenario, the attacker is assumed to be able to control or trigger the execution of the disclosure gadget in the victim's codebase. The attacker can do this by calling some victim functions with certain parameters. For example, in SGXpectre~, the attacker can launch the target enclave~program.\nDifferent from the conventional side and covert channels, here, the encoding phase is executed transiently, and thus, the attack cannot be detected by simply analyzing the software semantics of the victim code. 
This attack vector leverages the difference between the expected semantics of software execution and the actual execution in hardware and is a fundamental problem in current computer architectures.\nThere are two options for preparing for the transient execution (i.e., setup phase). First, if the hardware component that causes transient execution, e.g., the prediction unit, is shared between the attacker and the victim, then the attacker's execution can manipulate the prediction unit to trigger the disclosure gadget in the victim code to execute transiently, as shown in Figure~\\ref{fig:attacker_loc} (c,d). The second option is that the attacker triggers a setup gadget in the victim codebase to set up the transient execution, as shown in Figure~\\ref{fig:attacker_loc} (a,b).\nFor the first option, the attacker is required to share the prediction unit with the victim and to prepare some code to set up the hardware to lure the victim into desired transient execution.\nFor the second option, the attacker is required to understand the victim's code and be able to trigger the setup gadget to execute with a controlled input, e.g., by calling a function of the victim code.\nDecoding data from the covert channel can be done by the attacker code, as shown in Figure~\\ref{fig:attacker_loc} (b,d), or by the victim code, as shown in Figure~\\ref{fig:attacker_loc} (a,c). 
For the second case, the attacker may directly query a decoding gadget in the victim code and leverage the results of the decoding gadget to infer information through the covert channel, or the attacker may trigger the execution of the decoding gadget and measure the time or other side effects of the execution.", "id": "6713e38a-d96b-4406-a311-ec52b0304cd4", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "ae65d75e-af86-4e24-b70c-f761bc7d302d", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Transient Execution Attack Scenarios" ], [ "subsection", "Transient Execution by the Victim vs. the Attacker" ], [ "subsubsection", "Victim is Executing Transiently." ] ], "subsections": [], "title": "Victim is Executing Transiently." }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 1361, 7343 ], "content": "As shown in Figure~\\ref{fig:attacker_loc} (e--h), the attacker can directly obtain the secret in transient execution.\nThe attacker will then encode the data into a covert channel and decode it to obtain the secret in the architectural state, such as in her memory. \nThe attacker can also launch different software threads for the setup or the decoding phases. \nThe attacker's code shown in Figure~\\ref{fig:attacker_loc} (e--h) might be in different threads, even on different cores.\nDuring the attack, the attacker directly obtains the secret during transient execution, and thus, the attacker should be able to have a pointer to the location of the victim data. 
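The decoding step common to these scenarios can be sketched independently of any particular attack. Assuming the attacker has already measured a reload latency for each of the 256 probe lines (in a real attack, with a cycle counter such as `rdtscp` around each load), the transmitted byte is recovered as the fastest-reloading line; the threshold-free argmin form below is an illustrative simplification:

```c
/* Decoding-phase sketch for a cache covert channel: the line cached
 * during transient execution reloads measurably faster than the
 * uncached ones, so the secret byte is the index with the minimum
 * latency. Latencies are passed in by the caller; obtaining them is
 * the timing part of a real attack and is omitted here. */
int decode_byte(const unsigned latencies[256]) {
    int best = 0;
    for (int i = 1; i < 256; i++) {
        if (latencies[i] < latencies[best]) {
            best = i;
        }
    }
    return best;
}
```

In practice, a fixed cached/uncached latency threshold is often used instead of an argmin, since noise can make a single minimum unreliable across repeated measurements.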
\nThere might be only the attacker code running, or the attacker and the victim running in parallel.\nWhen there is only the attacker code running, the victim's protected data should be addressable to the attacker or the data is in some register in the hardware, i.e., the attacker should have a way to point to the data.\nIn Meltdown~, the attacker code first loads protected data by its virtual address into a register and then transfers the data through a covert channel.\nWhen the attacker and the victim are running concurrently, the attacker should be able to partially control the victim's execution or synchronize with the victim's execution.\nFor example, in Micro-architectural Data Sampling (MDS) attacks~, the attacker needs to synchronize with the victim's execution to extract useful information from the non-coherent data of the victim in the buffers.\nIn micro-architectural implementations, transient execution allows the attacker to access more data than is allowed at the architecture (ISA) level. Thus, this type of attack is implementation-dependent and does not work on all CPUs; e.g., Meltdown~, Foreshadow~, and MDS~ are reported to work on Intel processors.\nSimilar to the case when the victim is executing transiently, the setup and decoding phases can also be done by the victim, resulting in four attack scenarios in Figure~\\ref{fig:attacker_loc} (e--h). However, in the currently known attacks, the attacker always sets up, triggers the transient execution, and decodes from the channel, which is more practical.", "id": "51eda86d-c5b8-4235-b0d7-16c899cda60f", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "ae65d75e-af86-4e24-b70c-f761bc7d302d", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Transient Execution Attack Scenarios" ], [ "subsection", "Transient Execution by the Victim vs. the Attacker" ], [ "subsubsection", "Attacker is Executing Transiently." 
] ], "subsections": [], "title": "Attacker is Executing Transiently." }, { "cite_extract_rate": 1, "cites": [ 1360 ], "content": "}\n\\label{sec:feasibility_scenario}\n\\begin{table*}[t]\n\\centering\n\\caption{\\small \\hl{Required Control of Victim Execution in Different Attack Scenarios.}}\n\\begin{threeparttable}\n\\small\n\\begin{tabular}{ |C{5em} || C{5em} | C{5em} |C{5em} || C{5em} | C{5.5em} | C{5em} | }\\hline\n\\textbf{Scenario in Figure~\\ref{fig:attacker_loc}} & \\textbf{Setup Phase} & \\textbf{Transient Execution Phase} & \\textbf{Decoding Phase} & \\textbf{Number of Victim Gadgets to~be Triggered*} & \\textbf{Sharing Required during Transient Execution**} & \\textbf{Sharing Required for Covert Channel**}\\\\\n\\hline\na& Victim & Victim & Victim & 2-3 & No&No\\\\\nb& Victim & Victim & Attacker & 1-2 & No&Yes\\\\\n\\hline\nc& Attacker & Victim & Victim & 2 &Yes&No\\\\\nd& Attacker & Victim & Attacker &1 &Yes&Yes\\\\\n\\hline\n\\hline\ne& Victim & Attacker & Victim & 2 &Yes&Yes\\\\\nf& Victim & Attacker & Attacker & 1 &Yes&No\\\\\n\\hline\ng& {Attacker} & Attacker & Victim & 1 &No&Yes\\\\\nh& {Attacker} & Attacker & Attacker & 0 & No& No\\\\\n\\hline\n\\end{tabular}\n* The number shows the number of different code gadgets in the victim's codebase to be triggered by the attacker. \nWe assume the decoding gadget is different from the disclosure gadget. The setup gadget may or may not be the same code as the disclosure gadget, \nso the two gadgets can be counted as either $1$ (same) or $2$ (different) gadgets,\ngiving a range of gadgets required, as shown in the fifth column of the table.\n** Here, we refer to sharing of hardware between the attacker and the victim. In addition, the attacker (or the victim) could also have multiple software threads running and sharing hardware between the threads. 
We assume co-location between each party's threads is possible, and do not list that here.\n\\begin{tablenotes}\n\\end{tablenotes}\n\\end{threeparttable}\n\\label{tbl:pred_ctrl_victim}\n\\end{table*}\n\\hl{The required number of gadgets in the victim codebase to be triggered and the required sharing in different transient execution scenarios are summarized in Table}~\\ref{tbl:pred_ctrl_victim}. In addition, Figure~\\ref{fig:attacker_loc} \\hl{shows the attack scenarios demonstrated in different publications. \nIn a practical attack, it is desirable for most phases to be executed by the attacker's code and for less sharing of hardware to be required.}\n\\hl{In most of the existing attacks,\nthe attacker completes the setup and decoding steps, as shown in Figure}~\\ref{fig:attacker_loc} \\hl{(d,h), because they use fewer gadgets in the victim codebase and are more practical for the attacker. Attack scenarios (a,b) in Figure}~\\ref{fig:attacker_loc} \\hl{have also been demonstrated and require less shared hardware.}\nIn Spectre V1, since the victim disclosure gadget can be reused as the setup gadget for training the predictor, triggering the victim to run the setup phase does not require additional effort for the attacker, and thus, Figure~\\ref{fig:attacker_loc}~(b) is also practical.\nThe attacker can also use the victim's code to complete both setup and decoding steps, as shown in Figure~\\ref{fig:attacker_loc} (a). In this case, the attacker can launch the attack remotely~. \nScenarios (c) and (e--g) in Figure~\\ref{fig:attacker_loc}\\hl{ require more gadgets in the victim code and are not demonstrated in the publications so far. 
However, if the attacker has the ability to trigger the victim to execute certain gadgets (as required by some of the attacks already), those scenarios are still feasible and should be considered when designing mitigations.}", "id": "c77580cf-3a89-4371-a1f2-3b055342929d", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "ae65d75e-af86-4e24-b70c-f761bc7d302d", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Transient Execution Attack Scenarios" ], [ "subsection", "Transient Execution by the Victim vs. the Attacker" ], [ "subsubsection", "\\hl{Feasibility of the Attack Scenarios" ] ], "subsections": [], "title": "\\hl{Feasibility of the Attack Scenarios" }, { "cite_extract_rate": 0.47368421052631504, "cites": [ 7343, 1362, 1366, 1361, 7347, 1363, 7344, 7345, 1360 ], "content": "\\label{sec:transient_exe}\n\\hl{Transient execution is the phenomenon where code is executed speculatively, and it is not\nknown whether the instructions will be committed or squashed until the retirement of the instruction or a pipeline squash event.\nUpon the squash, \nnot all the micro-architectural side effects are cleaned up properly, which makes \ntransient execution attacks possible. \nHence, all causes of pipeline squash are also causes of transient execution and need to be understood to know what causes transient execution attacks to occur.\nIn this section, we first discuss all possible causes of transient execution, \nthen we propose a set of metrics to evaluate the feasibility of transient \nexecution attacks.}\n\\begin{table*}[t]\n\\centering\n\\caption{\\small Data Leaked by the Transient Execution Attacks.}\n\\begin{threeparttable}\n\\centering\n\\small\n\\begin{tabular}{ l | l | l | l| l|c c c c c c |c }\n&\\multicolumn{3}{c|}{\\multirow{5}{4.5em}{\\bf Causes of Transient Execution}} &\\multicolumn{1}{c|}{\\multirow{5}{4em}{\\bf Example Attacks}} & \\multicolumn{6}{c|}{\\bf Coh. Data**} & \\multirow{2}{2em}{{\\bf Non-coh. 
Data**}}\\\\\n &\\multicolumn{3}{l|}{} && \\rotatebox{90}{hypervisor} & \\rotatebox{90}{across VM} & \\rotatebox{90}{kernel data} & \\rotatebox{90}{across user app.} & \\rotatebox{90}{SGX} & \\rotatebox{90}{sandbox} & \\rotatebox{90}{} \\\\\n\\hline\n\\multirow{10}{4.5em}{{Victim Executes {Transiently}}} \n& \\multirow{6}{2em}{{\\rotatebox{90}{Prediction}}}&\\multirow{3}{2em}{{Ctrl Flow}}&PHT&\nSpectre V1~& $\\boxtimes$ & $\\boxtimes$ &$\\boxtimes$ & $\\boxtimes$ & $\\boxtimes$ &$\\boxtimes$ & $\\Box$\\\\\n&&&BTB&Spectre V2~& $\\boxtimes$ & $\\boxtimes$ &$\\boxtimes$ & $\\boxtimes$ & $\\boxtimes$ &$\\boxtimes$ & $\\Box$\\\\\n&&&RSB&Spectre V5~& $\\boxtimes$ & $\\boxtimes$ &$\\boxtimes$ & $\\boxtimes$ & $\\boxtimes$ &$\\boxtimes$ & $\\Box$\\\\\n\\cline{3-12}\n&&\\multirow{2}{1.5em}{Addr.}& \\multirow{1}{1em}{STL}&Spectre V4, LVI~ &$\\boxtimes$ & $\\boxtimes$ &$\\boxtimes$ & $\\boxtimes$ & $\\boxtimes$ &$\\boxtimes$ & $\\boxtimes$\\\\\n&&&LFB&LVI~&$\\boxtimes$ & $\\boxtimes$ &$\\boxtimes$ & $\\boxtimes$ & $\\boxtimes$ &$\\boxtimes$& $\\boxtimes$\\\\\n\\cline{3-12}\n&&Value& \\multicolumn{8}{l}{no commercial implementation} \\\\\n\\cline{2-12}\n&\\multicolumn{2}{l|}{{Exception}}& *&LVI~&$\\boxtimes$ & $\\boxtimes$ &$\\boxtimes$ & $\\boxtimes$ & $\\boxtimes$ &$\\boxtimes$ & $\\boxtimes$\\\\\n\\cline{2-12}\n&\\multicolumn{3}{l|}{{Interrupts}}& \\multicolumn{8}{l}{no known attack}\\\\\n\\cline{2-12}\n&\\multicolumn{3}{l|}{{Load-to-load reordering}}& \\multicolumn{8}{l}{no known attack}\\\\\n\\hline\n\\multirow{12}{4.5em}{{Attacker Executes Transiently}} & \n \\multirow{5}{2em}{{\\rotatebox{90}{Prediction}}}\n&\\multirow{1}{2em}{{Ctrl Flow}} \n&*& \\multicolumn{8}{l}{no known attack}\\\\ \n&&&\\\\\n\\cline{3-12}\n&& \\multirow{2}{2em}{Addr.}\n& STL&Fallout~ &$\\Box$ &$\\Box$ &$\\Box$ &$\\Box$ &$\\Box$ &$\\Box$ & $\\boxtimes$\\\\\n&&& LFB&RIDL, ZombieLoad~ &$\\Box$ &$\\Box$ &$\\Box$ &$\\Box$ &$\\Box$ &$\\Box$ & $\\boxtimes$ \\\\\n\\cline{3-12}\n&&Value& 
\\multicolumn{8}{l}{no commercial implementation} \\\\\n\\cline{2-12}\n& \\multicolumn{2}{l|}{\\multirow{5}{4em}{Exception}}& PF-US &Meltdown (V3)~ &$\\Box$ &$\\Box$ & $\\boxtimes$ & $\\Box$&$\\Box$ &$\\Box$ & $\\Box$\\\\\n&\\multicolumn{2}{l|}{}& PF-P& Foreshadow (L1TF)~ &$\\boxtimes$ &$\\boxtimes$ &$\\boxtimes$ &$\\boxtimes$ &$\\boxtimes$ &$\\Box$ &$\\Box$ \\\\\n&\\multicolumn{2}{l|}{}& PF-RW&V1.2~ &$\\Box$ &$\\Box$ &$\\Box$ &$\\Box$ &$\\Box$ &$\\boxtimes$ &$\\Box$ \\\\\n&\\multicolumn{2}{l|}{}& NM&LazyFP ~ &$\\Box$ &$\\Box$ &$\\Box$ &$\\Box$ &$\\Box$ &$\\Box$ &$\\boxtimes$ \\\\\n&\\multicolumn{2}{l|}{}& GP&V3a~&$\\Box$&$\\Box$ &$\\boxtimes$ &$\\Box$ &$\\Box$ &$\\Box$ &$\\Box$ \\\\\n\\cline{2-12}\n&\\multicolumn{3}{l|}{{Interrupts}}& \\multicolumn{8}{l}{no known attack}\\\\\n\\cline{2-12}\n&\\multicolumn{3}{l|}{{Load-to-load reordering}}& \\multicolumn{8}{l}{no known attack}\\\\\n\\hline\n\\end{tabular}\n\\begin{tablenotes}\n$\\boxtimes$ indicates that the attack can leak the protected data; $\\Box$ indicates that the attack cannot leak the data.\\\\\n* indicates all hardware components that cause the corresponding transient execution, we combine them in the same row because the data leaked in the attacks are the same.\n**{\\em Coh. Data} is short for coherent data, {\\em Non-coh. 
Data} is short for non-coherent data.\n\\end{tablenotes}\n\\end{threeparttable}\n\\label{tbl:privilege_level}\n\\label{tbl:spec_primitive}\n\\end{table*}", "id": "a1434b30-3403-4a4b-ae71-f8feeb87c7a5", "level": "section", "origin_cites_number": 19, "parent_id": "2f4b0bae-709d-4d09-b856-92d94c793465", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Transient Execution" ] ], "subsections": [ "72b2078f-6bae-4ddf-bed7-e8b4f8710c48", "fa391ee6-dea6-4242-9950-f0977d171577", "2dc47435-a05c-4f34-bd10-83e6135da759", "280841bf-fd81-4b06-9437-6c1c55848957", "a040bcfa-22fd-4877-998c-27229a8ac0bf", "cb4aac55-51a9-481a-98d8-e0bb0a27bc7b", "db7cac0e-f182-465f-94ef-6c73c1e95ba7" ], "title": "Transient Execution" }, { "cite_extract_rate": 0.25, "cites": [ 1365, 7343 ], "content": "The following is an exhaustive list of possible causes of transient execution (i.e., causes of pipeline squashing).\n\\textbf{Mis-prediction:} The first possible cause of transient execution is mis-prediction. Modern computer architectures make predictions to make full use of the pipeline to gain performance. When the prediction is correct, the execution continues and the results of the predicted execution will be used. In this way, predictions boost performance by executing instructions earlier. If the prediction is wrong, the code (transiently) executed down the incorrect (mis-predicted path) will be squashed. \nThere are three types of predictions: control flow prediction, address speculation, and value prediction.\n\\begin{enumerate}[itemindent=24pt,leftmargin=0pt,listparindent=\\parindent]\n\\item \\textbf{Control Flow Prediction:} Control flow prediction predicts the execution path that a program will follow.\nBranch prediction unit (BPU) stores the history of past branch directions and targets and leverages the locality in the program control flow to make predictions for future branches. 
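The way this history-based prediction is abused in the setup phase can be sketched as a simple training loop; this is a simplified software model, not tied to any specific predictor implementation, and all names here are illustrative. Repeated in-bounds calls bias the conditional branch toward "taken" before a single out-of-bounds call:

```c
#include <stddef.h>
#include <stdint.h>

#define TRAIN_LEN 16
uint8_t lut[TRAIN_LEN];
volatile uint8_t observed;

/* The branch in this function is the one whose PHT entry gets trained. */
void branchy_victim(size_t x) {
    if (x < TRAIN_LEN) {
        observed = lut[x];
    }
}

/* Setup-phase sketch: many architecturally valid calls make the
 * "taken" outcome dominate the branch history before the single
 * malicious call. Whether this actually steers speculation depends on
 * the processor's predictor organization. */
void train_then_trigger(size_t malicious_x) {
    for (size_t i = 0; i < 100; i++) {
        branchy_victim(i % TRAIN_LEN);  /* always in bounds: trains "taken" */
    }
    branchy_victim(malicious_x);        /* architecturally a no-op if out of range */
}
```

Architecturally the out-of-range call has no effect, as the test below checks; any transient effect of the trained mis-prediction would only be visible micro-architecturally.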
\nThe BPU predicts whether the branch is taken or not (i.e., the branch direction) by using the pattern history table (PHT), and predicts the target address (i.e., the branch or indirect jump target) by using the branch target buffer (BTB) or the return stack buffer (RSB). The implementation details of PHT, BTB, and RSB in Intel processors will be discussed in Section~\\ref{sec:Ctrl_flow_Intel}.\n\\item \\textbf{Address Speculation:}\nAddress speculation is a prediction on an address when the physical address is not fully available yet, e.g., whether two addresses are the same. It is used to improve performance in the memory system, e.g., store-to-load (STL) forwarding in the load-store queue and the line-fill buffer (LFB) in the cache. The implementation details of STL and LFB in Intel processors will be discussed in Section~\\ref{sec:Addr_spec_Intel}.\n\\item \\textbf{Value Prediction:}\nTo further improve performance, while the pipeline is waiting for data to be loaded from the memory hierarchy on a cache miss, value prediction units have been designed to predict the data value and to continue the execution based on the prediction. \nWhile this is not known to be implemented in commercial architectures, value prediction has been proposed in the literature~. \n\\end{enumerate}\n\\textbf{Exceptions:} \nThe second possible cause of transient execution is exceptions.\n\\hl{If an instruction causes an exception, the handling of the exception is sometimes delayed until the instruction is retired, allowing code to (transiently) execute until the exception is handled. There are a number of causes of exceptions, such as a wrong permission bit (e.g., present bit, reserved bit) in a Page Table Entry (PTE), etc.} A list of all the exception types or permission bit violations is summarized in . \\hl{In addition, Xiao et al. 
developed a software framework to automatically explore the vulnerabilities on a variety of Intel and AMD processors}~.\nSometimes the exceptions are suppressed due to another fault, e.g., nested exceptions. For example, when using transactional memory (Intel TSX~), if a problem occurs during the transaction, all the architectural states in the transaction will be rolled back by a transaction abort, suppressing the exception that occurred in the middle of the transaction~. Another way is to put the instruction that would cause an exception in a mis-predicted branch.\nIn this survey, even if the exception is suppressed later, we categorize the attack as due to exceptions.\n\\textbf{Interrupts:} The third possible cause of transient execution is (external) interrupts. If a peripheral device or a different core causes an interrupt, \\hl{the processor stops executing the current program, saves the state, and transfers control to the interrupt handler. In one common implementation, when stopping execution, the oldest instruction in the ROB will finish execution, and all the remaining instructions in the ROB will be squashed; the instructions that were executed after the oldest instruction (but end up being squashed) are executed transiently.}\nAfter the interrupt is handled, the current program may continue execution, i.e., the instructions that were squashed will be fetched into the pipeline again.\n\\textbf{Load-to-Load Reordering (Multi-Core):} \nThe fourth possible cause of transient execution is load-to-load reordering. Current x86 architectures use the total store order~(TSO) memory model~. \\hl{In TSO, no observable load or store reordering is allowed except store-to-load reordering, where a load bypasses an older store to a different address.} To prevent load-to-load reordering, if a load has executed but not yet retired and the core receives a cache invalidation for the line read by the load, the pipeline will be squashed. 
Transient execution occurs between instruction issue and the detection of the load-to-load reordering.", "id": "72b2078f-6bae-4ddf-bed7-e8b4f8710c48", "level": "subsection", "origin_cites_number": 8, "parent_id": "a1434b30-3403-4a4b-ae71-f8feeb87c7a5", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Transient Execution" ], [ "subsection", "Causes of Transient Execution" ] ], "subsections": [], "title": "Causes of Transient Execution" }, { "cite_extract_rate": 0.45454545454545403, "cites": [ 7347, 1366, 1363, 1361, 7343 ], "content": "Not all transient execution can be leveraged in an attack,\nand Table~\\ref{tbl:spec_primitive} shows the causes of transient execution in existing attacks.\nMis-prediction is leveraged in Spectre-type attacks, e.g.,~. Address speculation is leveraged in MDS attacks~ and LVI~. Exceptions of loads or stores are leveraged in Meltdown attacks~, Foreshadow attacks~, and LVI~, etc.\nOther types of exceptions, interrupts, and load-to-load reordering are not considered to be exploitable, because the instructions that get squashed due to exceptions, interrupts, and load-to-load reordering are legal to be resumed later, and no extra data is accessible to the attacker during the transient execution.\nSample code for the different variants is shown in Figure~\\ref{fig:spectre_sample}. The victim code should allow a potential mis-prediction or exception to happen. In Spectre V1~, to leverage the PHT, a conditional branch should exist in the victim code, followed by the gadget. Similarly, in Spectre V2~ and V5~, the victim code should have an indirect jump (or a return from a function) that uses the BTB (or RSB) for prediction of the execution path. 
In Spectre V4~, to use STL, the victim code should have a load following a store, where the load is subject to potential address speculation.\n\hl{In LVI}~\hl{, a load that triggers a page fault (accessing} {\tt trusted\_ptr}\hl{) will forward non-coherent data in the store buffer that was injected by a malicious store (}{\tt *arg\_copy = untrusted\_ptr}\hl{), and then, the secret data addressed by the injected value (}{\tt **untrusted\_ptr}\hl{) is leaked.\nIn Meltdown}~\hl{, the attacker code should make an illegal load to cause an exception. \nIn an MDS attack}~\hl{, a faulty load (}{\tt value=*(new\_page)}\hl{) will forward non-coherent data in the buffer.}\n\begin{figure*}[t]\n\includegraphics[width=4.5in]{gfx/spectre_sample-crop.pdf}\n\caption{\small Example code of transient execution attacks. Code highlighted in orange triggers transient execution. Code highlighted in yellow with dashed frame is the disclosure gadget.}\n \label{fig:spectre_sample}\n\end{figure*}", "id": "fa391ee6-dea6-4242-9950-f0977d171577", "level": "subsection", "origin_cites_number": 11, "parent_id": "a1434b30-3403-4a4b-ae71-f8feeb87c7a5", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Transient Execution" ], [ "subsection", "Causes of Transient Execution in Known Attacks" ] ], "subsections": [], "title": "Causes of Transient Execution in Known Attacks" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:transient_metric}\nIf the attacker wants to launch a transient execution attack, the attacker should be able to cause transient execution of the disclosure gadget in a controlled manner.\nWe propose the following metrics to evaluate the different causes of transient execution:\n\begin{itemize}[itemindent=15pt,leftmargin=0pt,listparindent=\parindent]\n\item \textbf{Security Boundaries that are Broken}:\nThis metric indicates the security boundaries that are broken during the transient execution attacks -- this will be discussed in 
Section~\\ref{sec:consequence}.\n\\item \\textbf{Required Control of the Victim's Execution}:\nThis metric evaluates whether the attacker needs to control the execution of victim code -- details will be discussed in Section~\\ref{sec:mistrain_setup}.\n\\item \\textbf{Required Level of Sharing}:\nThis metric evaluates how close the attacker should co-locate with the victim and whether the attacker should share memory space with the victim to trigger the transient execution in a controlled manner -- details will be discussed in Section~\\ref{sec:mistrain}.\n\\item \\textbf{Speculative Window Size}:\nThis metric indicates how many instructions can be executed transiently -- the speculation window size will be discussed in more detail in Section~\\ref{sec:spec_window}.\n\\end{itemize}", "id": "2dc47435-a05c-4f34-bd10-83e6135da759", "level": "subsection", "origin_cites_number": 0, "parent_id": "a1434b30-3403-4a4b-ae71-f8feeb87c7a5", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Transient Execution" ], [ "subsection", "Metrics for Causes of Transient Execution" ] ], "subsections": [], "title": "Metrics for Causes of Transient Execution" }, { "cite_extract_rate": 0.30000000000000004, "cites": [ 1363, 1361, 7343 ], "content": "\\label{sec:consequence}\nAs discussed in Section~\\ref{sec:attacker_goal}\\hl{, the attacker's goal is to access the coherent or non-coherent data across the security boundaries in the system.} \nTable~\\ref{tbl:privilege_level} \\hl{lists the type of data and the security boundaries across which the data can be leaked in the known transient execution attacks, assuming all the instructions in the disclosure gadget can execute transiently and the covert channel can transmit information to the attacker.}\n\\hl{If the victim is executing transiently, the disclosure gadget can read any coherent data that the victim could access architecturally, \neven if the semantics of the victim code do not intend it to access 
the data}~. \n\hl{Hence, in these attacks, the attacker can break the isolation between the victim and the attacker and learn data in the victim's domain.}\nFor example, the SWAPGS instruction is a privileged instruction that is usually executed after switching from user-mode to kernel-mode. If SWAPGS is executed transiently in kernel-mode on the incorrect path, kernel data can be leaked~.\n\hl{When the victim is executing transiently, the attacker can also learn non-coherent data (for example, stale data) and also data that depends on non-coherent data (e.g., data at an address that is derived from non-coherent data).\nFor example, in Spectre V4}~, \hl{stale data that contains the address of the secret data in the store buffer is forwarded to the younger instructions transiently, and the disclosure gadget accesses and transmits the secret data to the attacker. \nAs another example, in the LVI attack}~\hl{, the attacker injects a malicious value through buffers, such as the STL or LFB, causing a victim's transient execution that depends on a value controlled by the attacker and potentially leaks the value at an address controlled by the attacker.} \n\hl{If the attacker is executing transiently, transient execution allows the attacker to access illegal data directly.} \nAs shown in Table~\ref{tbl:privilege_level}\hl{, the security boundaries that are broken depend on the causes of transient execution.} \nIn some processor implementations, even if a load causes an exception due to a permission violation, the coherent data might still be propagated to the following instructions and learned by the attacker.\nFor example, in Meltdown~, privileged data is accessible transiently to an unprivileged user even if the privileged bit in the page table is set.\nIn L1 terminal fault (L1TF)~, secret data in the L1 cache is accessible transiently even if the present bit in the page table is not set.\nIn Table~\ref{tbl:privilege_level}, the attacks leveraging exceptions are categorized by 
the cause of the exception, e.g., page fault (PF), and the related permission bit. \n\hl{Non-coherent data present in the micro-architecture buffers (e.g., Line Fill Buffers (LFB) or store buffer (STB)) can sometimes be accessed by the attacker in transient execution}~. In addition, in CROSSTALK~, \hl{a hardware buffer called the staging buffer is discovered. The staging buffer serves certain types of off-core reads, e.g., the {\tt RDRAND} instruction that requests data from the DRNG (Digital Random Number Generator) and the {\tt CPUID} instruction that reads from Machine-Specific Registers (MSRs).\nThe staging buffer is shared across cores, and thus, the CROSSTALK paper demonstrated a cross-core attack where the victim fetches data from the RNG, and the attacker then learns the random number in the staging buffer during transient execution.}", "id": "280841bf-fd81-4b06-9437-6c1c55848957", "level": "subsection", "origin_cites_number": 10, "parent_id": "a1434b30-3403-4a4b-ae71-f8feeb87c7a5", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Transient Execution" ], [ "subsection", "Security Boundaries that are Broken" ] ], "subsections": [], "title": "Security Boundaries that are Broken" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:mistrain_setup}\nFor the attacks leveraging mis-prediction, (mis-)training is an essential setup step to steer the control flow to execute the desired disclosure gadget.\nThe (mis-)training can be part of victim code, which is triggered by the attacker, as shown in Figure~\ref{fig:attacker_loc} (b) and Table~\ref{tbl:pred_ctrl_victim}. In the example of Spectre V1, the attacker can first provide inputs to train the branch predictor (i.e., PHT) to execute the gadget branch, because in this way the training code will always share the branch predictor with the attack code. 
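The effect of such in-domain training can be sketched with a toy two-bit saturating counter, the textbook design for a PHT entry (the states and update rule below are the classic scheme from the literature, not a verified model of any particular CPU):

```python
class TwoBitCounter:
    """Textbook 2-bit saturating counter: states 0-1 predict not-taken, 2-3 taken."""
    def __init__(self):
        self.state = 0

    def predict(self):
        return self.state >= 2          # True = predict taken

    def update(self, taken):
        # Saturate at 0 and 3 so a single opposite outcome does not flip
        # a strongly-trained prediction.
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

pht_entry = TwoBitCounter()
# Attacker-chosen in-bounds inputs make the victim's branch "taken" repeatedly.
for _ in range(3):
    pht_entry.update(taken=True)
# The next (out-of-bounds) input is now predicted taken, steering transient
# execution into the gadget even though the architectural outcome is not-taken.
trained_prediction = pht_entry.predict()
```

A fresh counter predicts not-taken; after three attacker-supplied taken outcomes, the entry predicts taken and the gadget branch is entered transiently.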
In this case, the attacker should be able to control the execution of victim code.\nThe (mis-)training code can also be a part of the attacker's code and run in parallel with the victim code, as shown in Figure~\\ref{fig:attacker_loc} (d), e.g., in Spectre V2. Then, it is required that the attacker's training thread and the victim's thread should be co-located to share the same prediction unit (e.g., BTB). Further, to share the same entry of the prediction unit, if the prediction unit is indexed by physical address, the attacker and the victim should also share the same memory space to share the entry, which will be discussed in the next subsection.\nFor the attacks that leverage exceptions, the instructions that follow the exception will be executed transiently, and thus, no mis-training is required, but the attacker needs to make sure the disclosure gadget is located in the code such that it is executed after the exception-causing instruction.", "id": "a040bcfa-22fd-4877-998c-27229a8ac0bf", "level": "subsection", "origin_cites_number": 0, "parent_id": "a1434b30-3403-4a4b-ae71-f8feeb87c7a5", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Transient Execution" ], [ "subsection", "Required Control of the Victim's Execution" ] ], "subsections": [], "title": "Required Control of the Victim's Execution" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1362, 1366, 1363, 1361, 7346, 7343 ], "content": "\\label{sec:mistrain}\n\\hl{As shown in Table}~\\ref{tbl:pred_ctrl_victim}\\hl{, in some scenarios, the setup code and the disclosure gadget are run by different parties, e.g., Figure~}\\ref{fig:attacker_loc}\\hl{ (c-f), or in attacker's different software threads, e.g., Figure}~\\ref{fig:attacker_loc}\\hl{ (g-h). These cases require that the setup code shares the same prediction unit (entry) with the disclosure gadget. 
\nOne common attack scenario is that the attacker mis-trains the prediction unit to lure the execution of the disclosure gadget of the victim, e.g., Figure}~\\ref{fig:attacker_loc}\\hl{ (d).\nHardware sharing can be as follows:} \n\\label{sec:colocation_attacker}\n\\begin{itemize}\n\\item \\textbf{Same thread:} The attacker and the victim (if both of them executing) or the attacker's software threads (if only the attacker is executing) are running on the same logical core (hardware thread) in a time-sliced setting, and there might be context switches in between.\n\\item \\textbf{Same core, different thread:} The attacker and the victim (if both of them executing) or the attacker's threads (if only the attacker is executing) are running on different logical cores (hardware threads) through simultaneous multithreading (SMT) on the same physical core.\n\\item \\textbf{Same chip, different core:} The attacker and the victim (if both of them executing) or the attacker's threads (if only the attacker is executing) are on different CPU cores, but are sharing LLC, memory bus, and other peripheral devices.\n\\item \\textbf{Same motherboard, different chip:} The attacker and the victim (if both of them executing) or the attacker's threads (if only the attacker is executing) share memory bus and peripheral devices.\n\\end{itemize}\n\\label{sec:addr_space_attacker}\n\\hl{Some prediction units have multiple entries indexed by address, and in that case, the attacker needs to share the same entry of the prediction unit with the victim during the setup. 
To share the same entry, the attacker needs to control the address to map to the same predictor entry as the victim.}\nThe address space can be one of the following:\n\\begin{itemize}\n\\item \\textbf{In the same address space:} In this case, the attacker and the victim have the same virtual to physical address mapping.\n\\item \\textbf{In different address spaces with shared memory:} In this case, the attacker and the victim have different virtual to physical address mappings. But some of the attacker's pages and the victim's pages map to the same physical pages. This can be achieved by sharing dynamic libraries (e.g., {\\tt libc}).\n\\item \\textbf{In different address spaces without shared memory:} The attacker and the victim have different virtual to physical address mapping. Further, their physical addresses do not overlap.\n\\end{itemize}\n\\begin{table*}[t]\n\\centering\n\\caption{\\small Level of Sharing and (Mis-)training the Prediction Unit on Intel Processors.}\n\\begin{threeparttable}\n\\small\n\\begin{tabular}{ l l | l l l l }\n& \\parbox{0.7in}{\\bf Prediction Unit} &\\parbox{0.5in}{\\rotatebox{30}{\\bf same thread}} & \\parbox{0.5in}{\\rotatebox{30}{\\bf same core, different thread}} & \\parbox{0.6in}{\\rotatebox{30}{\\bf same chip, different core}} & \\parbox{0.8in}{\\rotatebox{30}{\\bf same motherboard}} \\\\\n\\hline\n\\multirow{3}{0.4in}{\\rotatebox{0}{Ctrl Flow}} & PHT~ & f(virtual addr) &f(virtual addr) &-- & -- \\\\\n&BTB~ & {f(virtual addr)} & {f(virtual addr)}\\tnote{a} & -- & -- \\\\\n&RSB~ & not by address\\tnote{b}& -- &-- &-- \\\\\n\\hline\n\\multirow{3}{0.4in}{\\rotatebox{0}{Addr.}} &STL~ & f(physical addr) \\tnote{c} & -- & --& -- \\\\\n&LFB~ & not by address & not by address & --& -- \\\\\n&Other\\tnote{d} & & & & \\\\\n\\hline\nValue & no commercial impl. 
& & & & \\\\\n\\hline\n\\end{tabular}\n\\begin{tablenotes}\n\\footnotesize\n``--\" indicates the prediction unit is not possible to be trained under the corresponding sharing setting;\nOtherwise, the prediction unit can be trained and\n``f(virtual addr)\" indicates the prediction unit is indexed by a function of the virtual address,\n``f(physical addr)\" indicates the prediction unit is indexed by a function of the physical address,\n and ``not by address\" indicates the prediction unit is not indexed by addresses.\\\\\n\\item[a] Conflicting results are presented in different publications~. \\\\\n\\item[b] Most OSes overwrite RSBs on context switches.\\\\\n\\item[c] STL is possible after context switch, but not on SGX enclave exit.\\\\\n\\item[d] In~, it is indicated that there could be other structures which forward data speculatively.\n\\end{tablenotes}\n\\end{threeparttable}\n\\label{tbl:pred_share}\n\\end{table*}\n\\hl{In the following, we discuss the level of sharing required to trigger transient execution of disclosure gadget for an attack leveraging mis-prediction. \nIn particular, the scenario depends on the implementation, and thus, we discuss each of the prediction units in Intel Processors in detail.}", "id": "cb4aac55-51a9-481a-98d8-e0bb0a27bc7b", "level": "subsection", "origin_cites_number": 9, "parent_id": "a1434b30-3403-4a4b-ae71-f8feeb87c7a5", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Transient Execution" ], [ "subsection", "Required Sharing during Transient Execution" ] ], "subsections": [ "f5fceee7-2e77-4e85-8c1d-dddc6a157881", "89e6daa1-889f-4d34-b58c-b5ee0a817d8c", "007b9f6b-6222-44d1-8975-184568450082" ], "title": "Required Sharing during Transient Execution" }, { "cite_extract_rate": 0.25, "cites": [ 1363, 1366 ], "content": "\\label{sec:Ctrl_flow_Intel}\nTo predict the branch direction, modern branch predictors use a hybrid mechanism~. 
One major component of the branch predictor is the pattern history table (PHT). Typically, a PHT entry is indexed based on some bits of the branch address, so a branch at a certain virtual address will always use the same entry in the PHT. In each entry of the PHT, a saturating counter stores the history of the prior branch results, which in turn is used to make future predictions.\nTo predict the branch targets, a branch target buffer (BTB) stores the previous target address of branches and jumps. Further, a return instruction is a special indirect branch that always jumps to the top of the stack. The BTB does not give a good prediction rate on return instructions, and thus, return stack buffer (RSB) has been introduced in commercial processors. \nThe RSB stores $N$ most recent return addresses.\nIn Intel processors, the PHT and BTB\\footnote{In~, the authors did not observe BTB collision between logical cores. However, it is demonstrated that the attacker can mis-train the indirect jump of a victim when they are two hyper-threads sharing the same physical core in~. Thus, we think BTB is shared across hyper-threads in some of the processors. }\n are shared for all the processes running on the same physical core (same or different logical core in SMT).\nThe RSB is dedicated to each logical core in the case of hyper-threading~.\nTable~\\ref{tbl:pred_share} shows whether the prediction unit can be trained when the training code and the victim are running in parallel in different settings.\nThe results are implementation-dependent and Table~\\ref{tbl:pred_share} shows the result from Intel processors.\nThe prediction units sometimes have many entries, and the attacker and the victim should use the same entry for mis-training.\nThe attacker and the victim will use the same entry only if they are using the same index. 
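The same-index requirement can be illustrated with a toy indexing function (we assume a simple low-bits index and the entry count, branch addresses, and shift purely for illustration; real index functions are more complex and largely undocumented):

```python
PHT_ENTRIES = 1024  # hypothetical number of predictor entries

def pht_index(virtual_addr):
    """Toy f(virtual addr): only the low bits of the branch address are used."""
    return (virtual_addr >> 2) % PHT_ENTRIES

victim_branch = 0x401a2c                             # made-up victim branch address
aliased_branch = victim_branch + (PHT_ENTRIES << 2)  # differs only in high bits

# Because the high bits are ignored by the index function, the attacker's
# aliased branch hits the same predictor entry as the victim's branch, so
# training it from another address space mis-trains the victim's prediction.
same_entry = pht_index(victim_branch) == pht_index(aliased_branch)
```

In this model, any two branch addresses that agree in the indexed low bits collide in the same entry, which is exactly what lets an attacker in a different address space place training code at an aliased address.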
When the prediction unit is indexed by virtual address, the attacker can train the prediction unit from another address space using the same virtual address as the victim code. If only part of the virtual address is used as the index, which is shown as {\tt f(virtual addr)} in Table~\ref{tbl:pred_share}, the attacker can even train with an aliased virtual address, which maps to the same entry of the prediction unit as the victim address.\nThe RSB is not indexed by the address; rather, it overflows when more than $N$ nested calls are made, which creates conflicts and causes mis-prediction on the subsequent returns.\n\begin{figure*}[t]\n\includegraphics[width=4in]{gfx/volatile_channel.pdf}\n\caption{\small Steps for the sender and the receiver to transfer information through volatile covert channels. The yellow box shows the shared resource. The solid (dashed) arrow shows the shared resource is (is not) requested or used by the corresponding party.}\n \label{fig:volatile_channel}\n\end{figure*}", "id": "f5fceee7-2e77-4e85-8c1d-dddc6a157881", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "cb4aac55-51a9-481a-98d8-e0bb0a27bc7b", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Transient Execution" ], [ "subsection", "Required Sharing during Transient Execution" ], [ "subsubsection", "Control Flow Prediction:" ] ], "subsections": [], "title": "Control Flow Prediction:" }, { "cite_extract_rate": 0.4, "cites": [ 1361, 7343 ], "content": "\label{sec:Addr_spec_Intel}\nOne use of address speculation is in memory disambiguation, which resolves read-after-write hazards, i.e., the data dependencies between instructions in out-of-order execution.\nIn Intel processors, there are two known uses of address speculation. First, loads are assumed not to conflict with earlier stores with unknown addresses, so store-to-load (STL) forwarding speculatively does not happen. 
When the address of a store is later resolved, the addresses of younger loads will be checked. If store-to-load forwarding should have happened and the data dependence has been violated, the loads will be flushed, and the new data is reloaded from the store, as shown in the attacks~. Second, for performance, when the address of a load {\em partially} matches the address of a preceding store, the store buffer will forward the data of the store to the load speculatively, even though the full addresses of the two may not match~. In the end, if there is mis-prediction, the load will be marked as faulty, flushed, and reloaded again.\nAnother use of address speculation is in conjunction with the line-fill buffer (LFB), which is the buffer storing cache lines to be filled into the L1 cache. The LFB may forward data speculatively without knowledge of the target address~.\nAddress speculation may also be used in other hardware structures in Intel processors, as indicated in ~.\nTo trigger address speculation, the availability of the address should be delayed to force the hardware to predict the address.\nOne way is to make the address calculation depend on some uncached data, as in Spectre V4~.\nAnother way is to use a newly mapped page, so that the physical address is available only after the OS handles the page-in event, as in~. In an extreme case, the speculation can even be caused by a NULL pointer or an invalid address, and then the error is suppressed in the attacker code, as in attack~.\nIn STL, the entries are indexed by a function of physical addresses. 
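The disambiguation flow described above can be sketched as a toy model in which a load optimistically bypasses an older store whose address is still unresolved, and is squashed and re-executed once the conflict is detected (a deliberate simplification; real memory-order buffers track far more state):

```python
def run_load(memory, store_addr_resolved, store_addr, store_data, load_addr):
    """Toy memory disambiguation: the load assumes no conflict with the
    older store while the store address is unknown (address speculation)."""
    speculative_value = memory[load_addr]       # load bypasses the older store
    squashed = False
    if not store_addr_resolved:
        # The store address resolves later; younger loads are checked.
        if store_addr == load_addr:
            squashed = True                     # dependence violated: flush
    memory[store_addr] = store_data             # store eventually commits
    final_value = memory[load_addr]             # load re-executes if squashed
    return speculative_value, squashed, final_value

mem = {0x10: 1, 0x20: 7}
# Conflicting case: the load transiently observes the stale value 1 before
# the squash; architecturally it ends up with the store's value 99.
spec, squashed, final = run_load(mem, False, 0x10, 99, 0x10)
```

The gap between `spec` (the stale, transiently observed value) and `final` (the committed value) is exactly the window a Spectre V4-style disclosure gadget exploits.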
In this case, the training code needs to share memory space with the victim to achieve an attack.", "id": "89e6daa1-889f-4d34-b58c-b5ee0a817d8c", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "cb4aac55-51a9-481a-98d8-e0bb0a27bc7b", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Transient Execution" ], [ "subsection", "Required Sharing during Transient Execution" ], [ "subsubsection", "Address Speculation:" ] ], "subsections": [], "title": "Address Speculation:" }, { "cite_extract_rate": 0, "cites": [], "content": "There is no commercial processor that implements value prediction yet. \nThus, there are no known exploits that abuse value prediction. However, similar to control flow prediction, if the predictor is based on states that are shared between different threads and not cleaned up during a context switch, the prediction can be hijacked by the attacker.", "id": "007b9f6b-6222-44d1-8975-184568450082", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "cb4aac55-51a9-481a-98d8-e0bb0a27bc7b", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Transient Execution" ], [ "subsection", "Required Sharing during Transient Execution" ], [ "subsubsection", "Value Prediction:" ] ], "subsections": [], "title": "Value Prediction:" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:spec_window}\nFor an attack to succeed, there should be a large enough speculative window for the disclosure gadget to finish executing transiently, as shown in Figure~\ref{fig:attack_phase}. The speculative window size is the window from the time the transient execution starts (instruction fetch) to the time the pipeline is~squashed.\nIn attacks leveraging predictions, the speculative window depends on the time the prediction is resolved. 
In a conditional branch, the time depends on the time to resolve the branch condition; in indirect jump, this depends on the time to obtain the target address; and in address speculation, this depends on the time to get the virtual and then the physical address.\n\\hl{In}~\\hl{, a tool called \\textit{Speculator} is proposed to reverse engineer the micro-architecture using hardware performance counters. The results of the \\textit{Speculator} show the speculative window of branches that depend on uncached data is about 150 cycles on Intel Broadwell, about 300 cycles on Intel Skylake, and about 300 cycles on AMD Zen, and the speculative window of STL is about 55 cycles on Intel Broadwell.}\nIn attacks leveraging exceptions, the speculative window depends on the implementation of~exceptions.\nTo make the speculative window large enough for the disclosure gadget, the attacker can delay the obtaining of the result of the branch condition or the addresses by leveraging uncached loads from main memory, chains of dependent instructions, etc.", "id": "db7cac0e-f182-465f-94ef-6c73c1e95ba7", "level": "subsection", "origin_cites_number": 1, "parent_id": "a1434b30-3403-4a4b-ae71-f8feeb87c7a5", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Transient Execution" ], [ "subsection", "Speculative Window Size" ] ], "subsections": [], "title": "Speculative Window Size" }, { "cite_extract_rate": 0.15384615384615302, "cites": [ 1363, 7344, 7346, 1360 ], "content": "\\label{sec:covert_channel}\nTransient execution enables the attacker to access the secret data transiently,\nand\na covert channel\\footnote{The channel is considered a covert channel, not a side channel~, because the attacker has \ncontrol over the disclosure gadget, which encodes the secret. 
}~ is required\nfor the attacker to eventually obtain the secret data in architectural states.\nThere is a distinction between {\em conventional channels} where the encoding happens in the software execution path,\nand {\em transient execution channels} where the encoding phase is executed transiently.\nHere, we focus on covert channels that can be used in transient attacks -- these can also be used as conventional covert channels.\nThere are two parties in a covert channel: the sender and the receiver. \nIn the covert channels, the sender execution will change some micro-architectural state and the receiver will observe the change to extract information, e.g., by observing the execution time.\n \begin{table*}[t]\n\setlength{\tabcolsep}{3pt}\n\centering\n\caption{\small Known Covert Channels in Micro-architecture.}\n\centering\n\begin{threeparttable}\n\small\n\begin{tabular}{ l l | l l l l | c C{0.8in} }\n & \multirow{8}{1.3in}{\bf Covert Channel Type} &\multicolumn{4}{C{0.6in}|}{\bf Level of Sharing} & \multirow{8}{0.6in}{\bf Bandwidth} & \multirow{8}{0.95in}{\bf Required Time~Resolution of the Receiver (CPU cycles)} \\\n & &\parbox{0.1in}{\rotatebox{90}{same thread}} & \parbox{0.1in}{\rotatebox{90}{same core, different thread}} & \parbox{0.1in}{\rotatebox{90}{same chip, different core}} & \parbox{0.1in}{\rotatebox{90}{same motherboard}} & & \\\n\hline\n\multirow{4}{0.6in}{{Volatile Covert Channels}} & Execution Ports~ &$\boxtimes$ & $\boxtimes$ &$\Box$& $\Box$& not given & 50 vs. 80\\\n& FP division unit~ &$\boxtimes$ & $\boxtimes$ &$\Box$& $\Box$ & $\sim$70kB/s & 314 vs. 342\\\n& L1 Cache Ports~ &$\boxtimes$ & $\boxtimes$ &$\Box$& $\Box$ & not given & 36 vs. 48\\\n& Memory Bus~ &$\boxtimes$ & $\boxtimes$ & $\boxtimes$& $\boxtimes$ & $\sim$700 B/s & 2500 vs. 8000\\\n\hline\n\multirow{10}{0.6in}{{Persistent Covert Channels}} \n& AVX2 unit~ &$\boxtimes$ & $\boxtimes$ &$\Box$& $\Box$ & $>$0.02B/s & 200 vs. 
550\\\\\n& PHT~ &$\\boxtimes$ & $\\boxtimes$ &$\\Box$& $\\Box$ & not given & 65 vs. 90\\\\\n& BTB~ &$\\boxtimes$ & $\\boxtimes$ &$\\Box$& $\\Box$ & not given & 56 vs. 65\\\\\n& STL~ &$\\boxtimes$ & $\\Box$ &$\\Box$& $\\Box$ & not given & 30 vs. 300\\\\\n&TLB~ &$\\boxtimes$ & $\\boxtimes$ &$\\Box$& $\\Box$ & $\\sim$5kB/s per set & 105 vs. 130\\tnote{a}\\\\\n&L1, L2 (tag, LRU)~ &$\\boxtimes$ & $\\boxtimes$ &$\\Box$& $\\Box$ & $\\sim$1MB/s per cache entry & 5 vs. 15\\tnote{b} \\\\\n&LLC (tag, LRU)~ &$\\Box$ & $\\Box$ &$\\boxtimes$ & $\\Box$ & $\\sim$0.7MB/s per set & 500 vs. 800\\\\\n&Cache Coherence~ &$\\Box$ & $\\Box$ &$\\boxtimes$ & $\\boxtimes$ & $\\sim$1MB/s per cache entry & 100 vs. 250\\tnote{c}\\\\\n&Cache Directory~ &$\\Box$ & $\\Box$ &$\\boxtimes$ & $\\Box$ & $\\sim$0.2MB/s per slice & 40 vs. 400\\\\\n&DRAM row buffer~&$\\Box$ & $\\Box$ &$\\boxtimes$ & $\\boxtimes$ & $\\sim$2MB/s per bank& 300 vs. 350\\\\\n\\hline\n\\end{tabular}\n\\begin{tablenotes}\n$\\boxtimes$ indicates that the attack is possible to leak the protected data; $\\Box$ indicates that the attack cannot leak the data.\n\\item[a] Depending on the level of TLB used, the required time resolution varies. The biggest one is shown.\n\\item[b] Shows the time resolution for covert channel use L1 cache.\n\\item[c] Depending on the setup, the required time resolution varies. 
The biggest one is shown.\n\\end{tablenotes}\n\\end{threeparttable}\n\\label{tbl:channel_share}\n\\end{table*}", "id": "5c358c86-0e6e-485e-ae28-1102c5f65ba7", "level": "section", "origin_cites_number": 26, "parent_id": "2f4b0bae-709d-4d09-b856-92d94c793465", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Covert Channels" ] ], "subsections": [ "72b61380-4422-41e8-af0a-9f5fcca3977e", "eb0110d7-3528-44a1-9dd3-225a482d73cf", "28338247-7cee-4052-9db4-47b1f3b14832", "9c3fc637-6097-4b4a-bd99-ad36006f6bd5", "a647d7a6-bcb3-4342-9dee-6192aa5c53c3", "6fd5bdd1-7aa1-4b25-a844-269b474f77be", "2ac728bb-46fc-492e-a83e-150f463359ec" ], "title": "Covert Channels" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:receiver_obv}\nThis survey focuses on covert channels that do not require physical presence and which only require attacker's software (or software under the attacker's control) to be executing on the same system as the victim.\nThus, we do not consider physical channels, such as power~, EM field~, acoustic signals~, etc.\nThere are certain physical channels that can be accessed from software and not require physical presence, such as temperature~. 
\nHowever, thermal conduction is slow and the bandwidth is limited.\nAny sharing of hardware resources between users could lead to a covert channel between a sender and a receiver~.\nThe receiver can observe the status of the hardware with some metadata from the covert channel, such as the execution time, values of hardware performance counters (HPC), system behavior, etc.\nThe most commonly used observation by the receiver of the covert channels is the timing of execution.\nIn today's processors, components are designed to achieve better performance, and thus, the execution time contains information about whether a certain hardware unit is available during execution (e.g., a port), whether the micro-architectural states are optimal for the code (e.g., cache hits or misses), etc.\nTo observe the hardware states via timing, a timer is needed. In x86, the {\em rdtscp} instruction can be used to read a high-resolution time stamp counter of the CPU, and thus, can be used to measure the latency of a chosen piece of code. When {\em rdtscp} is not available, a counting thread can be used as a timer~.\nThe receiver can also gain information from hardware performance counters (HPCs). HPCs have information about branch prediction, cache, TLB, etc., \nand have been used in covert channel attacks~.\nHowever, HPCs must be configured in kernel mode~, and thus, are not suitable for unprivileged attackers.\nThe receiver can further observe the state of the hardware through system behaviors.\nIn the Prime+Abort attack~, for example, TSX can be exploited to allow an attacker to receive an abort (call-back) if\nthe victim process accesses a critical address. \nIn other cases, several covert channels are used in series. 
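A receiver built on such a timer typically reduces to threshold classification of measured latencies. The toy routine below (the cycle counts are invented stand-ins for timestamp-counter readings, not measurements) shows the calibrate-then-classify pattern common to timing-based receivers:

```python
def calibrate(hit_samples, miss_samples):
    """Pick a threshold halfway between the mean hit and miss latencies."""
    hit_avg = sum(hit_samples) / len(hit_samples)
    miss_avg = sum(miss_samples) / len(miss_samples)
    return (hit_avg + miss_avg) / 2

def decode(latencies, threshold):
    """A latency below the threshold means the state was 'fast' => bit 1."""
    return [1 if t < threshold else 0 for t in latencies]

# Made-up cycle counts standing in for timestamp-counter measurements.
threshold = calibrate(hit_samples=[38, 40, 42], miss_samples=[280, 300, 310])
bits = decode([41, 295, 39, 305], threshold)
```

The "Required Time Resolution" column in Table~\ref{tbl:channel_share} is essentially the gap between the two latency distributions that this threshold has to separate: the smaller the gap, the finer the timer the receiver needs.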
Here, for transient execution attacks, we only consider channels where the receiver can decode data architecturally.\nFor example, in the Fetch+Bounce covert channel~, first, the secret is encoded into the TLB states, which affect the STL forwarding, and then a cache Flush+Reload covert channel is used to observe the STL forwarding results.\nThe first channel can only be observed by instructions in transient execution and the states will be removed when the instruction retires.\nWe only consider the second covert channel to be critical for a transient execution attack because \nit allows the attacker to observe the secret architecturally.", "id": "72b61380-4422-41e8-af0a-9f5fcca3977e", "level": "subsection", "origin_cites_number": 11, "parent_id": "5c358c86-0e6e-485e-ae28-1102c5f65ba7", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Covert Channels" ], [ "subsection", "Assumptions about Covert Channels" ] ], "subsections": [], "title": "Assumptions about Covert Channels" }, { "cite_extract_rate": 0, "cites": [], "content": "We categorize the covert channels into \textbf{volatile channels} and \textbf{persistent channels}.\nIn volatile channels, the sender and the receiver share the resource on the fly and no states are changed, e.g., sharing a port or some logic concurrently. The sender and the receiver have contention when communicating using this type of channel. \nIn persistent channels, the sender changes the micro-architectural states, and the receiver can observe the state changes later, e.g., a change of cache state. Although the states may change later, we call them persistent channels to differentiate them from the volatile channels. 
\nThe persistent covert channels will be discussed in the next subsection.", "id": "eb0110d7-3528-44a1-9dd3-225a482d73cf", "level": "subsection", "origin_cites_number": 0, "parent_id": "5c358c86-0e6e-485e-ae28-1102c5f65ba7", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Covert Channels" ], [ "subsection", "Types of Covert Channels" ] ], "subsections": [], "title": "Types of Covert Channels" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 7344 ], "content": "\label{sec:volatile_channel}\nIn a \textit{volatile covert channel}, there is contention for hardware between the sender and the receiver on the fly, and thus, the two should run concurrently, for example, as two hyper-threads in SMT processors, or running concurrently on two different cores. \nAnother scenario is that the sender and the receiver are two parts of code in the same software thread whose instructions are scheduled to execute concurrently due to OoO~. \nAs shown in Figure~\ref{fig:volatile_channel}, the receiver first measures the baseline execution time when the sender is not using the shared resource. Then, the sender causes contention on the shared resource or not depending on the message to be sent, while the receiver continues to measure the execution time. If the execution time increases, the receiver knows the sender is using the shared resource at the moment.\nExecution units, ports, and buses are shared between the hyper-threads running concurrently on the same physical core, and can be used for covert channels~.\n\hl{There is also a covert channel leveraging the contention in the floating point division} unit~.\nL1 cache ports are also shared among hyper-threads.\nIn Intel processors, the L1 cache is divided into banks, and each cache bank can only handle a single (or a limited number of) requests at a time. CacheBleed~ leverages the contention on L1 cache banks to build a covert channel. 
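The baseline-then-contend protocol above can be sketched as a toy simulation; the shared port, its latencies, and the per-slot model are all invented for illustration and are not hardware-accurate:

```python
# Toy model of a volatile, contention-based covert channel: the
# receiver's request takes longer whenever the sender uses the shared
# port in the same time slot. Latency numbers are made up.

PORT_LATENCY = 10        # cycles when the shared port is free
CONTENTION_PENALTY = 8   # extra cycles when both parties issue requests

def receiver_latency(sender_uses_port):
    return PORT_LATENCY + (CONTENTION_PENALTY if sender_uses_port else 0)

def send_and_receive(message_bits):
    # The receiver first measures the baseline (sender idle), then
    # compares each time slot's latency against it.
    baseline = receiver_latency(False)
    return [1 if receiver_latency(bit == 1) > baseline else 0
            for bit in message_bits]

print(send_and_receive([1, 0, 1, 1, 0]))  # [1, 0, 1, 1, 0]
```

The key property the toy captures is that no state survives the time slot: the receiver only learns anything if it measures while the sender is contending.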
Later, Intel resolved the cache bank conflicts issue with the Haswell generation.\nHowever, the MemJam~ attack demonstrates that, for newer generations of Intel processors, there is still a false dependency of memory read-after-write requests when the addresses are of the same L1 cache set and offset. This false dependency can be used for a covert channel.\nAs shown in Table~\ref{tbl:channel_share}, contention on execution ports and L1 cache ports can lead to covert channels \hl{within the same thread when the sender and the receiver code are executed in parallel due to OoO} and between hyper-threads in an SMT setting.\nThe memory bus serves memory requests to all the cores using the main memory.\nIn~, it is shown that the memory bus can act as a high-bandwidth covert channel medium, and covert channel attacks on various virtualized x86 systems are demonstrated.", "id": "28338247-7cee-4052-9db4-47b1f3b14832", "level": "subsection", "origin_cites_number": 6, "parent_id": "5c358c86-0e6e-485e-ae28-1102c5f65ba7", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Covert Channels" ], [ "subsection", "Volatile Covert Channels" ] ], "subsections": [], "title": "Volatile Covert Channels" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:persistent_channel}\nIn a \textit{persistent channel}, the sender and the receiver share the same micro-architectural states, e.g., registers, caches, etc.\nDifferent from volatile covert channels, the state will be retained in the system for a while. 
And the sender and the receiver do not have to execute concurrently.\nDepending on whether the state can only be used by one party or can be directly accessed by different parties in the system, we further divide the persistent channels into occupancy-based and encode-based, as shown in Figure~\\ref{fig:persistent_channel}.", "id": "9c3fc637-6097-4b4a-bd99-ad36006f6bd5", "level": "subsection", "origin_cites_number": 0, "parent_id": "5c358c86-0e6e-485e-ae28-1102c5f65ba7", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Covert Channels" ], [ "subsection", "Persistent Covert Channels" ] ], "subsections": [ "0f8b9c8b-4904-4e3a-b7a5-261d34e637e3", "8e4181b0-82a2-4c27-ae7b-f904537133cc" ], "title": "Persistent Covert Channels" }, { "cite_extract_rate": 0.052631578947368, "cites": [ 7346 ], "content": "To leverage occupancy-based covert channel, the user needs to occupy the states (e.g., registers, cache, or some entries) or data to affect the execution.\n\\begin{itemize}[itemindent=15pt,leftmargin=0pt,listparindent=\\parindent]\n\\item \\textit{Eviction-based Persistent Channels:}\nIn this channel, the sender and the receiver will compete and evict the other party to occupy some states to store their data or metadata to (de-)accelerate their execution. One example of the eviction-based channel is the Prime+Probe attack~. The receiver first occupies a cache set (i.e., primes). Then, the sender may use the state for her data or not, depending on the message to be sent. And in the end, the receiver reads (i.e., probes) her data that were used to occupy the cache set in the first step to see whether those data are still in the cache by measuring the timing, as shown in the first row of Figure~\\ref{fig:persistent_channel}. 
Other examples of the eviction-based channel are the cache Evict+Time attack~ and the covert channel in the DRAM row buffer~.\n\begin{figure*}[t]\n\includegraphics[width=4.3in]{gfx/persistent_channel.pdf}\n\caption{\small Steps for the sender and the receiver to transfer information through different types of persistent covert channels.}\n \label{fig:persistent_channel}\n\end{figure*} \nAnother possible contention is that the sender needs to use the same piece of data (e.g., needs exclusive access to the data for a write), and thus, the receiver's copy of the data can be invalidated.\nSome state is used for tracking the relationship of data in different components, which can cause the data in one component to be invalidated.\nFor example, the cache coherency policy can invalidate a cache line in a remote cache, and thus results in a covert channel between threads on different cores on the same processor chip~.\nThe cache directory keeps the tags and cache coherence state of cache lines in the lower levels of cache in a non-inclusive cache hierarchy and can cause eviction of a cache line in the lower cache level (a remote cache relative to the sender) to build a covert channel~.\n\item \textit{Reuse-based Persistent Channels:}\nIn this channel, the sender and the receiver share some data or metadata, and if the data is stored in the shared state, it could (de-)accelerate both of their executions. The cache Flush+Reload attack~ transfers information by {\em reusing} the same data in the cache. The receiver first cleans the cache state. Then, the sender loads the shared data or not. And in the end, the receiver measures the execution time of loading the shared data, as in Figure~\ref{fig:persistent_channel}. 
If the sender loads the shared data in the second step, the receiver will observe a faster timing compared to the case when the sender does not load the shared data.\nThere are other reuse-based attacks, such as the Cache Collision attack~ and the cache Flush+Flush attack~.\n\hl{Prediction units can also be leveraged for such covert channels due to a longer latency for mis-prediction. \nFor example, PHT}~, BTB~, and STL~ have been demonstrated to be usable for constructing covert channels. For example, when sharing the BTB, the sender and the receiver use the same indirect jump source, ensuring the same BTB entry is used. If the receiver has the same destination address as the sender, the BTB will make a correct prediction, resulting in a faster jump.\n\end{itemize}", "id": "0f8b9c8b-4904-4e3a-b7a5-261d34e637e3", "level": "subsubsection", "origin_cites_number": 19, "parent_id": "9c3fc637-6097-4b4a-bd99-ad36006f6bd5", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Covert Channels" ], [ "subsection", "Persistent Covert Channels" ], [ "subsubsection", "Occupancy-based Persistent Covert Channels" ] ], "subsections": [], "title": "Occupancy-based Persistent Covert Channels" }, { "cite_extract_rate": 0.25, "cites": [ 1360 ], "content": "In encode-based persistent covert channels, the sender and the receiver can both directly change and probe the shared state. One example of such a channel is the AVX channel~. There are two AVX2 unit states: power-off and power-on. To save power, the CPU can power down the upper half of the AVX2 unit by default. In step 2, if the sender uses the AVX2 unit, the unit will be powered on for at least 1 ms. In step 3, the receiver can measure whether the AVX2 unit is powered on by measuring the time of using the AVX2 unit. 
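These steps can be sketched as a toy simulation; the power-state model and the cycle counts are simplified inventions, since real hardware powers the upper half of the unit up and down with more complex heuristics:

```python
# Toy simulation of an encode-based AVX2-style channel. The power state
# and all timings are modeled, not measured on real hardware.

WARM, COLD = 5, 50   # invented cycle counts for a warm/cold AVX2 unit

class ToyAVX2Unit:
    def __init__(self):
        self.powered_on = False

    def use(self):
        """Return the latency of one AVX2 op and warm the unit up."""
        latency = WARM if self.powered_on else COLD
        self.powered_on = True
        return latency

    def idle_1ms(self):
        """After ~1 ms without AVX2 use, the upper half powers down."""
        self.powered_on = False

def transfer_bit(unit, bit):
    unit.idle_1ms()                 # step 1: receiver waits for power-down
    if bit:                         # step 2: sender uses AVX2 (or not)
        unit.use()
    return 1 if unit.use() == WARM else 0   # step 3: receiver times an op

unit = ToyAVX2Unit()
print([transfer_bit(unit, b) for b in [1, 0, 1]])  # [1, 0, 1]
```

Unlike the eviction-based channels, neither party needs to evict the other here: both directly flip and read the same power state.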
In this way, the sender encodes the message into the state of the AVX2 unit, as shown in Figure~\ref{fig:persistent_channel}.\nOther examples are the covert channels leveraging cache LRU states~.", "id": "8e4181b0-82a2-4c27-ae7b-f904537133cc", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "9c3fc637-6097-4b4a-bd99-ad36006f6bd5", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Covert Channels" ], [ "subsection", "Persistent Covert Channels" ], [ "subsubsection", "Encode-based Persistent Covert Channels" ] ], "subsections": [], "title": "Encode-based Persistent Covert Channels" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:CC_metric}\nWe propose the following metrics to compare different covert channels:\n\begin{itemize}[itemindent=15pt,leftmargin=0pt,listparindent=\parindent]\n\item \textbf{Level of Sharing}:\nThis metric indicates how the sender and the receiver should co-locate.\nAs shown in Table~\ref{tbl:channel_share}, some of the covert channels only exist when the sender and the receiver share the same physical core. Other channels exist when the sender and the receiver share the same chip or even the same motherboard.\n\item \textbf{Bandwidth}:\nThis metric measures how fast the channel is. The faster the channel, the faster the attacker can transfer the secret. Table~\ref{tbl:channel_share} compares the bandwidth of different covert channels. Usually, the bandwidth is measured in a real system, considering the noise from activities by other software and the operating system.\n\item \textbf{Time Resolution of the Receiver}:\nAs shown in Figures~\ref{fig:volatile_channel} and~\ref{fig:persistent_channel}, the receiver needs to measure and differentiate different states. For a timing channel, the time resolution of the receiver's clock decides whether the receiver can observe the difference between the sender sending 0 or 1. 
The last column of Table~\ref{tbl:channel_share} shows the timing difference between states. Some channels, such as the L1 cache, require a very high-resolution clock to differentiate 5 cycles from 15 cycles, while the LLC covert channel only needs to differentiate 500 cycles from 800 cycles, so the receiver only needs a coarse-grained clock.\n\item \textbf{Retention Time}:\nThis metric measures how long the channel can keep the secret.\nIn some of the covert channels (volatile channels in Section~\ref{sec:volatile_channel}), no state is changed, e.g., the channel leveraging port contention~. The retention time of such channels is zero, and the receiver must measure the channel concurrently while the sender is sending information.\nOther covert channels (persistent channels in Section~\ref{sec:persistent_channel}) leverage state changes in the micro-architecture; the retention time depends on how long the state will stay, for example, the AVX2 unit will be powered off after about 1 ms. If the receiver does not measure the state in time, she will obtain no information. For other states, such as registers, caches, etc., the retention time depends on the usage of the unit and when the unit will be used by another user.\n\end{itemize}", "id": "a647d7a6-bcb3-4342-9dee-6192aa5c53c3", "level": "subsection", "origin_cites_number": 1, "parent_id": "5c358c86-0e6e-485e-ae28-1102c5f65ba7", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Covert Channels" ], [ "subsection", "Metrics for Covert Channels" ] ], "subsections": [], "title": "Metrics for Covert Channels" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:comp_CC}\nTable~\ref{tbl:channel_share}\hl{ lists different covert channels in the micro-architecture.\nThe existence of a covert channel depends on whether the unit is shared in that setting.}\nFor example, AVX2 units, the TLB, and the L1/L2 caches are shared among programs using the same physical core. 
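The bandwidth metric above can be related to noise with the standard information-theoretic estimate for a binary symmetric channel, $C = B\,(1 - H(p))$, where $B$ is the raw bit rate and $p$ the bit-error probability. This is a textbook estimate, not a figure from any particular attack paper; the sketch below just evaluates it:

```python
# Rough capacity estimate for a noisy covert channel, modeled as a
# binary symmetric channel: C = raw_rate * (1 - H(p)).

from math import log2

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def channel_capacity(raw_bits_per_sec, error_rate):
    return raw_bits_per_sec * (1 - binary_entropy(error_rate))

# A hypothetical 1 Mbit/s raw channel with 1% bit errors still carries
# roughly 919,000 usable bits per second.
print(round(channel_capacity(1_000_000, 0.01)))
```

This is why even fairly noisy channels remain practical: moderate error rates cost only a modest fraction of the raw bandwidth, recoverable with error-correcting codes.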
Therefore, a covert channel can be built among hyper-threads and threads sharing a logical core in a time-sliced setting. The LLC, cache coherence states, and DRAM are shared among different cores on the chip, and therefore, a covert channel can be built between different cores.\nSome covert channels may use more than one component listed in Table~\ref{tbl:channel_share}. For example, in the cache hierarchy, there could be multiple levels of caches shared among the sender and the receiver. In the Flush+Reload cache covert channel, the receiver can use the {\em clflush} instruction to flush a cache line from all the caches, and the sender may load the cache line into the L1/L2 of that core or the shared LLC. If the sender and the receiver are in the same core, then the receiver will reload the data from the L1. If the sender and the receiver are in different cores and only share the LLC, the receiver will reload the data from the LLC.\nTherefore, even with the same covert channel protocol, the location of the covert channel depends on the actual setting of the sender and the receiver.\nAs shown in Table~\ref{tbl:channel_share}, the channels in caches have relatively high bandwidth ($\sim$1MBits/s), which allows the attacker to launch efficient attacks. 
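The Flush+Reload protocol just described can be mimicked over a toy cache model; the "cache" here is simply a set of cached line addresses with invented hit/miss latencies, so no real micro-architectural state is involved:

```python
# Toy simulation of the Flush+Reload protocol over a modeled cache.

HIT, MISS = 40, 200  # made-up cycle counts

class ToyCache:
    def __init__(self):
        self.lines = set()

    def flush(self, addr):
        """Model of clflush: remove the line from every cache level."""
        self.lines.discard(addr)

    def load(self, addr):
        """Return an access latency and cache the line, as hardware would."""
        latency = HIT if addr in self.lines else MISS
        self.lines.add(addr)
        return latency

def flush_reload_bit(cache, shared_addr, sender_sends_one):
    cache.flush(shared_addr)             # step 1: receiver flushes
    if sender_sends_one:                 # step 2: sender loads (or not)
        cache.load(shared_addr)
    return 1 if cache.load(shared_addr) == HIT else 0  # step 3: reload+time

cache = ToyCache()
print([flush_reload_bit(cache, 0x1000, b) for b in [True, False, True]])
# [1, 0, 1]
```

A single `lines` set stands in for the whole hierarchy; modeling L1/L2 vs. LLC separately would only change the hit latency the receiver observes, as the text explains.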
Covert channels in AVX and the TLB are slower but still sufficient for practical attacks.", "id": "6fd5bdd1-7aa1-4b25-a844-269b474f77be", "level": "subsection", "origin_cites_number": 0, "parent_id": "5c358c86-0e6e-485e-ae28-1102c5f65ba7", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Covert Channels" ], [ "subsection", "Comparison of Covert Channels" ] ], "subsections": [], "title": "Comparison of Covert Channels" }, { "cite_extract_rate": 0, "cites": [], "content": "\begin{figure}[t]\n\includegraphics[width=3.2in]{gfx/disclosure_gadget_sample.pdf}\n\caption{\small Example disclosure gadgets for different covert channels.}\n \label{fig:disclosure_gadget}\n\end{figure}\nThe covert channel is used in the disclosure gadget to make the secret architecturally accessible to the attacker. \nA disclosure gadget usually contains two steps: 1. load the secret into a register; 2. encode the secret into a covert channel.\nAs shown in Figure~\ref{fig:disclosure_gadget}, the disclosure gadget code depends on the covert channel used. For covert channels in the memory hierarchy (e.g., a cache side channel), it will consist of a memory access whose address depends on the secret value. 
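The secret-dependent memory access can be sketched in plain Python over a toy model of "which cache lines got touched". The page-sized 4096-byte stride is the stride commonly used in published proof-of-concepts to avoid prefetcher interference; everything else here (names, addresses, the `touched` set) is invented for illustration:

```python
# Sketch of what a memory-hierarchy disclosure gadget does:
# (1) read the secret, (2) perform a memory access whose address
# depends on it. `touched` stands in for cache lines brought in.

STRIDE = 4096
touched = set()

def access(addr):
    touched.add(addr // STRIDE)

def disclosure_gadget(probe_base, secret_byte):
    access(probe_base + secret_byte * STRIDE)  # secret-dependent address

disclosure_gadget(0, 0x42)

# The receiver later probes all 256 slots and sees which one is "hot".
recovered = next(slot for slot in range(256) if slot in touched)
print(hex(recovered))  # 0x42
```

In a real attack the gadget runs transiently inside the victim, and the "probe" step is a timed Flush+Reload or Prime+Probe pass over the 256 slots.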
For AVX-based covert channels, the disclosure gadget encodes the secret by using (or not using) an AVX instruction.", "id": "2ac728bb-46fc-492e-a83e-150f463359ec", "level": "subsection", "origin_cites_number": 0, "parent_id": "5c358c86-0e6e-485e-ae28-1102c5f65ba7", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Covert Channels" ], [ "subsection", "Disclosure Gadget" ] ], "subsections": [], "title": "Disclosure Gadget" }, { "cite_extract_rate": 0.75, "cites": [ 1368, 1369, 1367 ], "content": "\label{sec:transient_attacks}\nThe transient execution attacks contain two parts: triggering transient execution to obtain data \nthat is otherwise not accessible (discussed in Section~\ref{sec:transient_exe}) \nand transferring the data via a covert channel (discussed in Section~\ref{sec:covert_channel}).\nIf the victim executes transiently, the victim will encode the secret into the channel, and the behavior cannot be analyzed from \nthe software semantics without a hardware model of prediction.\nIf the attacker executes transiently, the micro-architecture propagates data that is not allowed to propagate at the ISA level (the propagation is not visible at the ISA level, but can be reconstructed through covert channels which observe the changes in the micro-architecture).\nTo formally model and detect the behavior, a new micro-architectural model, including the transient behavior, should be used~.", "id": "ccfa843e-cbae-4445-b4cf-783a75c1f4e0", "level": "section", "origin_cites_number": 4, "parent_id": "2f4b0bae-709d-4d09-b856-92d94c793465", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Existing Transient Execution Attacks" ] ], "subsections": [ "6b17c85b-65b5-445b-a8d9-e6b9169eb092", "22a45cfa-75c7-4a2f-93ed-a85f5f76ba78", "42dca5b4-9762-48f1-a5a9-ead5b56aa9c7" ], "title": "Existing Transient Execution Attacks" }, { "cite_extract_rate": 0.47368421052631504, "cites": [ 7343, 1362, 1366, 1361, 7347, 1363, 7344,
7345, 1360 ], "content": "\\begin{table*}[t]\n\\centering\n\\caption{\\small Transient Execution Attacks Types.}\n\\begin{threeparttable}\n\\small\n\\begin{tabular}{ l | l l l l l l }\n& \\multicolumn{6}{c}{\\bf Cause of Transient Execution}\\\\\n \\parbox{0.9in}{\\bf Covert Channel} &\\parbox{0.4in}{\\rotatebox{0}{PHT}} & \\parbox{0.4in}{\\rotatebox{0}{BTB}} & \\parbox{0.4in}{\\rotatebox{0}{RSB}} & \\parbox{0.4in}{\\rotatebox{0}{STL}} & \\parbox{0.4in}{\\rotatebox{0}{LFB}} & \\parbox{0.5in}{\\rotatebox{0}{Exception}} \\\\\n\\hline\n Execution Ports &$\\Box$ & &$\\Box$&$\\Box$&$\\Box$ &$\\Box$\\\\\n L1 Cache Ports &$\\Box$ &$\\Box$&$\\Box$&$\\Box$ &$\\Box$&$\\Box$\\\\\n Memory Bus &$\\Box$ &$\\Box$&$\\Box$&$\\Box$ &$\\Box$&$\\Box$\\\\\n AVX2 unit && $\\Box$ &$\\Box$ &$\\Box$&$\\Box$&$\\Box$ \\\\\n FP div unit && $\\Box$ &$\\Box$ &$\\Box$&$\\Box$& \\\\\nTLB & $\\Box$ &$\\Box$&$\\Box$&$\\Box$ &$\\Box$ &$\\Box$\\\\\nL1, L2 (tag, LRU) && & & & &\\\\\nLLC (tag, LRU) &$\\Box$ &$\\Box$&$\\Box$&$\\Box$ &$\\Box$ &$\\Box$\\\\\nCache Coherence & &$\\Box$ &$\\Box$&$\\Box$&$\\Box$ &\\\\\nCache Directory &$\\Box$ &$\\Box$&$\\Box$&$\\Box$ &$\\Box$ &$\\Box$\\\\\nDRAM row buffer &$\\Box$ &$\\Box$&$\\Box$&$\\Box$ &$\\Box$ &$\\Box$ \\\\\nOther Channel &$\\Box$ &$\\Box$&$\\Box$&$\\Box$ &$\\Box$ &$\\Box$\\\\\n\\hline\n\\end{tabular}\n\\begin{tablenotes}\n$\\Box$ shows attacks that are possible but not demonstrated yet.\n\\end{tablenotes}\n\\end{threeparttable}\n\\label{tbl:attack_demo}\n\\end{table*}\nTo launch an attack, the attacker needs a way to cause transient execution of the victim or herself and a covert channel.\nTable~\\ref{tbl:attack_demo} shows the attacks that are demonstrated in the publications. For demonstrating different speculation primitives, researchers usually use the covert channel in caches (row L1, L2 in Table~\\ref{tbl:attack_demo}). 
This is because the cache Flush+Reload covert channel is simple and efficient.\nFor demonstrating different covert channels used in transient execution attacks, researchers usually use the PHT (Spectre V1). This is because Spectre V1 is easy to demonstrate.\nNote that every entry in the table can become an attack.\nFor mitigations, each entry of the table should be mitigated, either by mitigating all the covert channels or by preventing access to the secret data in transient execution.", "id": "6b17c85b-65b5-445b-a8d9-e6b9169eb092", "level": "subsection", "origin_cites_number": 19, "parent_id": "ccfa843e-cbae-4445-b4cf-783a75c1f4e0", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Existing Transient Execution Attacks" ], [ "subsection", "Existing Transient Execution Attacks Types" ] ], "subsections": [], "title": "Existing Transient Execution Attacks Types" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "22a45cfa-75c7-4a2f-93ed-a85f5f76ba78", "level": "subsection", "origin_cites_number": 0, "parent_id": "ccfa843e-cbae-4445-b4cf-783a75c1f4e0", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Existing Transient Execution Attacks" ], [ "subsection", "Feasibility of Existing Attacks" ] ], "subsections": [ "e2f9c31e-1144-4398-b0bd-0b8c8771af60", "a16016c0-b53b-4db3-a258-b9722e8926fa" ], "title": "Feasibility of Existing Attacks" }, { "cite_extract_rate": 0, "cites": [], "content": "As discussed in Section~\ref{sec:feasibility_scenario} and Section~\ref{sec:mistrain_setup}, Spectre attacks require the attacker to mis-train the prediction unit in the setup phase to let the victim execute the gadget speculatively. 
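The mis-training of a pattern-history-table entry can be illustrated with a toy two-bit saturating counter, the classic PHT building block. This is a software model of the predictor only (real PHTs are indexed, aliased, and more complex), and it does not model the transient execution itself:

```python
# Toy two-bit saturating counter illustrating mis-training: after the
# attacker supplies several in-bounds inputs, the counter predicts
# "taken" even for the out-of-bounds input.

class TwoBitCounter:
    def __init__(self):
        self.state = 0  # 0-1: predict not-taken, 2-3: predict taken

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

pht_entry = TwoBitCounter()
bound = 16

# Setup phase: the attacker calls the victim with valid indices, so the
# bounds check "i < bound" is repeatedly taken.
for i in [1, 2, 3]:
    pht_entry.update(i < bound)

# Attack phase: the predictor now predicts the check passes even for an
# out-of-bounds index, so the disclosure gadget would run transiently.
print(pht_entry.predict())  # True
```

The model makes the feasibility point concrete: the attacker needs a way to drive the victim's branch history (or to share the predictor entry), which is exactly the co-location requirement discussed next.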
To be able to mis-train, the attacker either needs to control part of the victim's execution to generate the desired history for prediction or needs to co-locate with the victim on the same core.\nMDS attacks also require the attacker and the victim to share the same address speculation unit.\nAs shown in Table~\ref{tbl:pred_share}, prediction units are shared only within a physical core, and some units are not even shared between hyper-threads. In practice, it is not trivial to co-locate on the same core.", "id": "e2f9c31e-1144-4398-b0bd-0b8c8771af60", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "22a45cfa-75c7-4a2f-93ed-a85f5f76ba78", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Existing Transient Execution Attacks" ], [ "subsection", "Feasibility of Existing Attacks" ], [ "subsubsection", "Feasibility of the Transient Execution" ] ], "subsections": [], "title": "Feasibility of the Transient Execution" }, { "cite_extract_rate": 0, "cites": [], "content": "As shown in Table~\ref{tbl:pred_ctrl_victim} and Section~\ref{sec:comp_CC}, in some scenarios, a covert channel across processes is required, and thus, the sharing of hardware is needed, which requires the co-location of threads. 
Furthermore, for a certain attack implementation, only one disclosure primitive is used, and the attack can be mitigated by blocking that covert channel.", "id": "a16016c0-b53b-4db3-a258-b9722e8926fa", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "22a45cfa-75c7-4a2f-93ed-a85f5f76ba78", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Existing Transient Execution Attacks" ], [ "subsection", "Feasibility of Existing Attacks" ], [ "subsubsection", "Feasibility of the Covert Channel" ] ], "subsections": [], "title": "Feasibility of the Covert Channel" }, { "cite_extract_rate": 0.25, "cites": [ 1365 ], "content": "\hl{Most of the existing studies focus on Intel processors; Table}~\ref{tbl:commercial}\hl{ lists the known attacks on processors by different vendors, such as} AMD~, Arm~, and RISC-V~\hl{. As shown in the table, Spectre-type attacks using branch prediction are found on all the platforms because branch speculation is fundamental in modern processors. 
Other types of transient execution depend on the micro-architecture implementation of speculation units and show different results on different platforms.}\n\begin{table*}[t]\n\centering\n\caption{\small \hl{Known Transient Execution Attacks on Different Platforms.}}\n\begin{threeparttable}\n\centering\n\small\n\begin{tabular}{ l l| c c c c c c c }\n \multicolumn{2}{C{18em}|}{\bf Cause of Transient Execution} & {\bf Intel} & {\bf AMD~} & {\bf Arm~} & {\bf RISC-V} \\\n\hline\n\multirow{3}{6em}{{Control Flow}} \n& PHT (V1)& $\boxtimes$ & $\boxtimes$ &$\boxtimes$ & $\boxtimes$ \\\n& BTB (V2)&$\boxtimes$ & $\boxtimes$ &$\boxtimes$ &$\boxtimes$ \\\n& RSB (V5)&$\boxtimes$ & $\Box$\ &$\boxtimes$ & $\Box$\\\n\hline\n\multirow{2}{10em}{{Address Speculation}} \n& STL (V4,MDS) &$\boxtimes$ &$\Box$ &$\boxtimes$ &$\Box$\ \\\n& LFB (MDS)&$\boxtimes$ &$\Box$ &$\Box$ &$\boxtimes$ \\\n\hline\n\multirow{6}{4em}{{Exception}} \n& PF-US (V3) &$\boxtimes$&$\Box$ & $\boxtimes$ & $\Box$\\\n& PF-P (L1TF) &$\boxtimes$ &$\Box$ &$\Box$ &$\Box$ \\\n& PF-RW (V1.2) &$\boxtimes$ &$\Box$ &$\boxtimes$ &$\Box$ \\\n& NM (LazyFP) &$\boxtimes$ &$\Box$ &$\Box$ &$\Box$ \\\n& GP (V3a) &$\boxtimes$&$\Box$ &$\boxtimes$ &$\Box$ \\\n& Other & $\boxtimes$&$\boxtimes$ &$\boxtimes$ &$\Box$\\\n\hline\n\end{tabular}\n\begin{tablenotes}\n$\boxtimes$ indicates that an attack of the type exists on the platform; $\Box$ indicates that there is no known attack.\n\end{tablenotes}\n\end{threeparttable}\n\label{tbl:commercial}\n\end{table*}", "id": "42dca5b4-9762-48f1-a5a9-ead5b56aa9c7", "level": "subsection", "origin_cites_number": 4, "parent_id": "ccfa843e-cbae-4445-b4cf-783a75c1f4e0", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Existing Transient Execution Attacks" ], [ "subsection", "Attacks on Different Commercial Platforms" ] ], "subsections": [], "title": "Attacks on Different Commercial
Platforms" }, { "cite_extract_rate": 0.5, "cites": [ 1372, 1371, 1370 ], "content": "\label{sec:mitigations}\nIn this section, we focus on micro-architectural mitigations to attacks that occur when the victim executes transiently under wrong control flow prediction. \n\hl{As shown in Table}~\ref{tbl:commercial}\hl{, attacks that leverage control flow prediction are more fundamental and affect all modern computer architectures.\nAttacks that leverage address speculation and exceptions are implementation-dependent, and we consider them implementation bugs. They can be fixed, although the performance penalty is unknown now.}\nWe focus on possible future micro-architecture designs that are safe even with control flow prediction. Thus, software mitigation schemes, such as~, \hl{and software vulnerability detection schemes}~ are not discussed in detail.", "id": "2fa00822-ddfc-46d2-883a-0da7b331ff92", "level": "section", "origin_cites_number": 6, "parent_id": "2f4b0bae-709d-4d09-b856-92d94c793465", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Mitigations of Spectre-type Attacks in Micro-architecture Design" ] ], "subsections": [ "f6f67ddd-b35f-4518-bfd6-ee749ee08553", "20ee94ca-b37a-49db-97bb-981acb8d805e" ], "title": "Mitigations of Spectre-type Attacks in Micro-architecture Design" }, { "cite_extract_rate": 0, "cites": [], "content": "The simplest mitigation is to stop any transient execution. 
However, it will come with a huge performance overhead, e.g., adding a fence after each branch to stop branch prediction causes an 88\% performance loss~.", "id": "f6f67ddd-b35f-4518-bfd6-ee749ee08553", "level": "subsection", "origin_cites_number": 1, "parent_id": "2fa00822-ddfc-46d2-883a-0da7b331ff92", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Mitigations of Spectre-type Attacks in Micro-architecture Design" ], [ "subsection", "Mitigating Transient Execution" ] ], "subsections": [ "0e7f79e9-2f26-4a74-a9c3-210855a1024c", "f4e1eece-7f34-4cb1-8a01-f6d4d123eb1a", "a17caebb-f56c-431c-973a-ab341ba3c62f" ], "title": "Mitigating Transient Execution" }, { "cite_extract_rate": 0.5, "cites": [ 1371, 1368, 1367, 1373 ], "content": "To mitigate Spectre-type attacks, one solution is to limit the attackers' ability to mis-train the prediction units, to prevent the disclosure gadget from being executed transiently (the first metric in Section~\ref{sec:transient_metric}). The prediction units (e.g., PHT, BTB, RSB, STL) should not be shared among different users. This can be achieved by static partitioning for concurrent users and by flushing the state during context switches.\nFor example, there are ISA extensions for controlling and stopping indirect branch predictions~. In~, a decode-level branch predictor isolation technique is proposed, where a special micro-op that clears the branch predictor states will be executed when the security domain switches. In~, it is proposed to use a thread-private random number to encode the branch prediction table, to build isolation between threads in the branch predictor.\nHowever, for both proposals, if the attacker can train the prediction unit by executing victim code with certain inputs (e.g., always providing valid input in Spectre V1), isolation is not enough. 
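The flush-on-domain-switch idea can be sketched with a toy predictor model; the domain tag and the clearing policy below are invented for illustration and only capture the isolation property, not any concrete hardware design:

```python
# Toy sketch of "flush prediction state on context switch": a two-bit
# counter that the (modeled) kernel clears whenever the security domain
# changes, so one domain's training cannot steer another's predictions.

class FlushingPredictor:
    def __init__(self):
        self.state = 0   # 0-1: predict not-taken, 2-3: predict taken
        self.domain = None

    def on_context_switch(self, new_domain):
        if new_domain != self.domain:
            self.state = 0          # the "clear predictor" micro-op
            self.domain = new_domain

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

p = FlushingPredictor()
p.on_context_switch("attacker")
for _ in range(3):
    p.update(True)                  # attacker trains "taken"
assert p.predict()                  # training works within the domain
p.on_context_switch("victim")
print(p.predict())  # False: the training did not survive the switch
```

Note how the toy also exposes the limitation discussed above: if the attacker can make the victim itself execute with chosen inputs (training inside the victim's own domain), flushing at the domain boundary does not help.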
\nThere is also a mitigation in software to stop speculation by making the potential secret data depend on the result of the branch condition, leveraging data dependency, e.g., masking the data with the branch condition~, because current processors do not speculate on data.\nHowever, this solution requires identifying all control flow dependencies and all disclosure gadgets, figuring out all possible control flows that could lead to the execution of the disclosure gadgets, and patching each of them. \nIt is a challenge to identify all (current and future) disclosure gadgets, because disclosure gadgets may vary due to the encoding into different covert channels, and formal methods that model the micro-architecture behavior are required~.", "id": "0e7f79e9-2f26-4a74-a9c3-210855a1024c", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "f6f67ddd-b35f-4518-bfd6-ee749ee08553", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Mitigations of Spectre-type Attacks in Micro-architecture Design" ], [ "subsection", "Mitigating Transient Execution" ], [ "subsubsection", "Mitigating the Trigger of Transient Execution" ] ], "subsections": [], "title": "Mitigating the Trigger of Transient Execution" }, { "cite_extract_rate": 0.0625, "cites": [ 1374 ], "content": "To mitigate the leak of secrets during transient execution attacks, one way is to prevent the transient execution of the disclosure gadget, i.e., to stop the loading of secrets in transient execution or to stop propagating the secret to younger instructions in the disclosure gadget transiently.\nFor Meltdown-type and MDS-type attacks, this means stopping the propagation of secret data to the younger instructions. \nFor Spectre-type attacks, however, the logic may not know which data is secret. 
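The software masking mitigation described above can be sketched in a few lines. Python has no speculation, so the sketch only demonstrates the data-flow trick itself: the index becomes data-dependent on the bounds check, so even a mis-predicted branch would see an in-bounds (zeroed) index. It is similar in spirit to the index-masking helpers used in real kernels, but the helper name and the 32-bit mask width are illustrative choices:

```python
# Sketch of branch-condition masking: collapse an out-of-bounds index
# to zero via a data dependency instead of (only) a branch.

def masked_index(i, size):
    # mask is all-ones iff i < size, else all-zeros (no data-dependent
    # branch on the secret path).
    mask = -(i < size) & 0xFFFFFFFF
    return i & mask

table = list(range(16))
print(masked_index(5, len(table)))     # 5 (in bounds: unchanged)
print(masked_index(1000, len(table)))  # 0 (out of bounds: masked to 0)
```

In C, the equivalent mask is typically built with branch-free comparisons so that the compiler cannot reintroduce a predictable branch; that concern does not translate to this Python model.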
\nTo mitigate the attacks, secret data should be tagged with metadata as in secure architecture designs, which will be discussed in Section~\ref{sec:sec_arch}.\nAnother solution is to ensure that data cannot be propagated speculatively, and thus, cannot be sent to covert channels speculatively, which can potentially prevent transient execution attacks with any covert channel.\nIn {\em Context-Sensitive Fencing}~, fences will be injected at the decoder level to stop speculative data propagation if there are potential Spectre attacks.\nIn {\em NDA}~, a set of propagation policies is designed for defending against the attacks leveraging different types of transient executions (for example, transient execution due to branch prediction or all transient execution), showing the trade-off between security and performance.\nSimilarly, in {\em SpecShield}~, different propagation policies are designed and evaluated.\nIn {\em Conditional Speculation}~, the authors propose a defense scheme targeting covert channels in the memory system and propose an architecture where data cannot be transiently propagated to instructions that lead to changes in the memory system, showing $13\%$ performance overhead. To reduce the performance overhead of the defense, they further change the design to only target Flush+Reload cache side channels, resulting in a performance overhead of $7\%$.\nFurthermore, in {\em STT}~, a dynamic information flow tracking based micro-architecture is proposed to stop the propagation of speculative data to covert channels but reduce the performance overhead by waking up instructions as early as possible.\n\hl{Speculative data-oblivious (SDO) execution}~\hl{ is based on STT. To reduce performance overhead, SDO introduces new predictions that do not depend on operands (holding data potentially depending on speculative data). Specifically, speculative data-oblivious loads are designed to allow safe speculative loads. 
}\nThe overhead of defending Spectre-like attacks is moderate, e.g., $7.7\%$ in {\em Context-Sensitive Fencing}~, $21\%$ reported in {\em SpecShield}~, $20\sim51\%$ ($113\%$ for defending all transient execution attacks) reported in {\em NDA}~, and $8.5\%$ for branch speculation ($14.5\%$ for all transient execution) in {\em STT}~, $4.19\%$ for branch speculation ($10.05\%$ for all transient execution) in {\em STT+SDO}~.\n\begin{table*}[t]\n\centering\n\caption{\small Comparison of Different Mitigation Schemes in Micro-architecture.}\n\begin{threeparttable}\n\small\n\begin{tabular}{ | p{2.2in} | p{3in} | }\n\hline\n\textbf{Mitigation Schemes} & \textbf{Performance Overhead} \\\\\n\hline\nFence after each branch & $88\%$~ \\\\\n\hline\nStop propagating all data & $30$--$55\%$~; $21\%$~; $20$--$51\%$~; $8.5\%$~; $4.19\%$~ \\\\\nStop propagating all data to cache changes& $7.7\%$~, $13\%$~\\\\\nStop propagating all data to Flush+Reload &$7\%$~\\\\\n\hline\nStop propagating all tagged secret data& $71\%$ for security-critical applications, $<1\%$~for real-world workloads~\\\\\n\hline\nPartitioned cache & $1$--$15\%$~ \\\\\n\hline\nStop (Undo) speculative change in caches &7.6\%~; 11\%~; 4\%~; 5.1\%~; 8.3\%~\\\\\n\hline\n\end{tabular}\n\begin{tablenotes}\n\end{tablenotes}\n\end{threeparttable}\n\label{tbl:mitigation}\n\end{table*}\nThere should be a large enough speculative window to let the disclosure gadget execute transiently for the attack to happen. The micro-architecture may be able to limit the speculation window size to prevent the encoding to the covert channel (the fourth metric in Section~\ref{sec:transient_metric}). However, the disclosure gadget can be very small, containing only two loads from L1~, which is only about 20 cycles in total. 
Detecting a malicious windowing gadget accurately can be~challenging.", "id": "f4e1eece-7f34-4cb1-8a01-f6d4d123eb1a", "level": "subsubsection", "origin_cites_number": 16, "parent_id": "f6f67ddd-b35f-4518-bfd6-ee749ee08553", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Mitigations of Spectre-type Attacks in Micro-architecture Design" ], [ "subsection", "Mitigating Transient Execution" ], [ "subsubsection", "Mitigating Transient Execution of Disclosure Gadget" ] ], "subsections": [], "title": "Mitigating Transient Execution of Disclosure Gadget" }, { "cite_extract_rate": 0.125, "cites": [ 1363 ], "content": "\label{sec:sec_arch}\nSecure architectures are designed to protect the confidentiality (or integrity) of certain data or code.\nThus, secure architectures usually come with ISA extensions to identify the data or code to be protected, e.g., a secret data region, and micro-architecture designs to isolate the data and code to be protected~.\nWith knowledge about the data to be protected, hardware can further stop propagating secret data during speculation.\nThe hardware can identify data that depends on the secret with taint checking, as proposed in~, and forbid tainted data to have micro-architectural side effects, or flush\nall the states on exit from the protected domain, to defend against persistent covert channels, and disable SMT to defend against volatile covert channels.\nThe overhead of such mitigation depends on the size of the secret data to be protected. For example, as reported in {\em ConTExT}~, the overhead is $71.14\%$ for OpenSSL RSA encryption and less than $1\%$ for real-world workloads.\nSimilar overhead is reported in {\em SpectreGuard}~.\nIntel also proposed a new memory type, named speculative-access protected memory (SAPM)~. 
Any access to the SAPM region will cause instruction-level serialization and speculative execution beyond the SAPM-accessing instruction will be stopped until the retirement of that instruction.", "id": "a17caebb-f56c-431c-973a-ab341ba3c62f", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "f6f67ddd-b35f-4518-bfd6-ee749ee08553", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Mitigations of Spectre-type Attacks in Micro-architecture Design" ], [ "subsection", "Mitigating Transient Execution" ], [ "subsubsection", "Mitigations in Secure Architectures" ] ], "subsections": [], "title": "Mitigations in Secure Architectures" }, { "cite_extract_rate": 0.09090909090909001, "cites": [ 1374, 1364 ], "content": "To limit the covert channels, one way is to isolate all the hardware across the sender and receiver of the channel, so the change cannot be observed by the receiver. However, this is not always possible, e.g., in some attacks, the attacker is both the sender and the receiver of the channel.\nAnother mitigation is to eliminate the sender of the covert channel in transient execution.\nFor volatile covert channels, the mitigation is challenging.\nFor permanent covert channels, there should be no speculative changes to any micro-architectural state, or any micro-architectural state changes should be rolled back when the pipeline is squashed. \nCovert channels in memory systems, such as caches and TLBs, are most commonly used. Hence, most of the existing mitigations focus on cache and TLB side channels.\n{\em InvisiSpec}~ proposed the concept of ``visibility point\" of a load, which indicates the time when a load is safe to cause micro-architecture state changes that are visible to attackers. Before the visibility point, a load may be squashed, and should not cause any micro-architecture state changes visible to the attackers. 
To reduce performance overhead, a ``speculative buffer\" is used to temporarily cache the load, without modifications in the local cache. After the ``visibility point\", the data will be fetched into the cache. For cache coherency, a new coherency policy is designed such that the data will be validated when stale data is potentially fetched. The gem5~ simulation results show a 7.6\% performance loss for the SPEC 2006 benchmark~.\nSimilarly, SafeSpec~ proposed to add ``shadow buffers\" to caches and TLBs, so that transient changes in the caches and TLBs do not happen. \nIn {\em Muontrap}~\hl{, a ``filter cache\" (L0 cache) is added to each physical thread to hold speculative data. \nThe proposed filter cache only holds data that is in the Shared state, so it will not change the timing of accessing other caches. If the shared state in L0 is not possible without causing the cache line in another cache to change state from the Modified or Exclusive state, the access will be delayed until it is at the head of the ROB.\nThe cache line will be written through to L1 when the corresponding instruction commits.\nDifferent from the buffers in InvisiSpec}~ and SafeSpec~\hl{, the filter cache is a real cache that is cleared upon a context switch, syscall, or when the execution changes security boundaries (e.g., explicit flush when exiting a sandbox) to ensure isolation between security boundaries. Muontrap results in a 4\% slowdown for SPEC 2006.}\n{\em CleanupSpec}~ proposed to use a combination of undoing the speculative changes and secure cache designs. When mis-speculation is detected and the pipeline is squashed, the changes to the L1 cache are rolled back. 
For tracking the speculative changes in caches, a 1Kbyte storage overhead is introduced.\nTo prevent the cross-core or multi-thread covert channel, partitioned L1 with random replacement policy and randomized L2/LLC are used.\nBecause only a small portion of transient executions results in mis-speculations, the method shows an average slowdown of 5.1\%.\n{\em ReversiSpec}~ \hl{proposed a comprehensive cache coherence protocol considering speculative cache accesses. The cache coherence protocol proposed an interface including three operations: 1) speculative load, 2) merge when a speculative load is safe, 3) purge when a speculative load is squashed. Compared to InvisiSpec}~\hl{, the speculative buffer only stores data when the data is not in the cache, and thus,\nless data movement will occur when a load is safe (merge). Compared to CleanupSpec}~\hl{, purge is fast as not all the changes have propagated into the cache. The performance overhead is 8.3\%.}\nMoreover, accessing speculative loads that hit in the L1 cache will not cause side effects (except LRU state updates) in the memory system. Therefore, only allowing speculative L1 hits can mitigate transient execution attacks using covert channels (other than LRU) in the memory system. \nIn {\em Selective Delay}~, to improve performance, for a speculative load that misses in L1, value prediction is used. The load will fetch from deeper layers in the memory hierarchy until the load is not speculative. 
In their solution, 11\\% performance overhead is shown.\nMeanwhile, many secure cache architectures are proposed to use randomization to mitigate the cache covert channels in general (not only the transient execution attacks).\nFor example, {\\em Random Fill cache}~ decouples the load and the data that is filled into the cache, and thus, the cache state will no longer reflect the sender's memory access pattern.\n{\\em Random Permutation (RP) cache}~, {\\em Newcache cache}~, {\\em CEASER cache}~, and {\\em ScatterCache}~ randomize memory-to-cache-set mapping to mitigate contention-based occupancy-based covert channels in the cache.\n{\\em Non Deterministic cache}~ randomizes cache access delay and de-couple the\nrelation between cache block access and cache access timing.\nSecure TLBs~ are also proposed to mitigate covert channels in TLBs.\nBut again, all the possible covert channels need to be mitigated to fully mitigate transient execution attacks.\nFurther, {\\em Cyclone}~ proposed a micro-architecture to detect cache information leaks across security domains.\nAnother mitigation is to degrade the quality of the channel or even make the channel unusable for a practical attack.\nFor example, many timing covert channels require the receiver to have a fine-grained clock to observe the channel (the second metric in Section~\\ref{sec:CC_metric}). Limiting the receiver's observation will reduce the bandwidth or even mitigate the covert channel~. Noise can also be added to the channel to reduce the bandwidth (the third metric in Section~\\ref{sec:CC_metric}).\nHowever, the above mitigations only cover covert channels in memory systems.\nTo mitigate other covert channels, there are the following challenges:\n1. Identify all possible covert channels in micro-architecture, including future covert channels.\nFormal methods are required in this process. 
For example, information flow tracking, such as methods in~, can be used to analyze the hardware components to which the data of transient execution could flow. Then, analyze if each of the components could result in a permanent or transient covert channel.\n2. Mitigate each of the possible covert channels.", "id": "20ee94ca-b37a-49db-97bb-981acb8d805e", "level": "subsection", "origin_cites_number": 22, "parent_id": "2fa00822-ddfc-46d2-883a-0da7b331ff92", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Mitigations of Spectre-type Attacks in Micro-architecture Design" ], [ "subsection", "Mitigating Covert Channels" ] ], "subsections": [ "48e84ca1-f1d3-44d6-aca3-0f6616dc7be0" ], "title": "Mitigating Covert Channels" }, { "cite_extract_rate": 0.105263157894736, "cites": [ 1375, 7346 ], "content": "\label{sec:sec_arch_covert_channel}\nWith clearly defined security domains, isolation can be designed to mitigate not only transient covert channels but also conventional covert channels.\nFor example, to defend against cache covert channels, a number of partitioned caches for different security domains are proposed, either statically~ or dynamically~. With partitioning, shared resources no longer exist between the sender and the receiver, and the receiver cannot observe secret-dependent behavior to decode the secret. \nThe above proposal assumes the hardware is isolated for each security domain. However, there is also a scenario where software outside the security domain may use the same hardware after a context switch.\nIn the {\em Mi6} processor~, cache and port partitioning are used to isolate software on different cores. Further, when there is a context switch, a security monitor flushes the architecture and micro-architecture states, which hold the information of in-flight speculation from the previously executing program. To protect the security monitor, speculation is not used in the execution of the security monitor. 
In OPTIMUS~\\hl{, a dynamic partitioning scheme in the granularity of core is proposed to achieve both security and high performance.}\n\\label{sec:related_attacks}\nAnother category of attacks that can be mistaken with the transient execution attacks is the covert channel attacks leveraging transient execution.\nDifferent from the transient execution attacks where the goal of the attacker is to compromise the confidentially of victim's secret, these attacks have the goal to build novel covert channels leveraging the hardware units for transient execution~, such as the branch prediction unit, STL, etc.\nModern computer architectures gain performance benefits from transient execution.\nCorrect predictions result in useful transient execution results and make the execution faster. When a wrong prediction is made, the results of transient execution will be discarded and sometimes cause a small penalty.\nTherefore, there is a time difference in the execution due to transient execution, and a timing-based covert channel can be built.\nAs shown in Table~\\ref{tbl:pred_share}, prediction units are shared between different users. 
The sender can train a prediction unit, and then the receiver can observe different prediction results.\nReal covert channel attacks have been demonstrated by leveraging the prediction units, such as the {\em Branchscope} attack, which uses PHT~, {\em Jump over ASLR}, which uses BTB~, and the {\em Spoiler} attack, which uses STL~.\nOther than building covert channels across processes and SGX enclaves~,\nthese attacks also break KASLR (Kernel address space layout randomization)~ and\nleak the physical address mapping~.", "id": "48e84ca1-f1d3-44d6-aca3-0f6616dc7be0", "level": "subsubsection", "origin_cites_number": 19, "parent_id": "20ee94ca-b37a-49db-97bb-981acb8d805e", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Mitigations of Spectre-type Attacks in Micro-architecture Design" ], [ "subsection", "Mitigating Covert Channels" ], [ "subsubsection", "Mitigations in Secure Architectures" ] ], "subsections": [], "title": "Mitigations in Secure Architectures" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:conclusion}\nThis paper provides a survey of the transient execution attacks.\nThis paper first defines the transient execution attacks and the three phases of the attacks. \nIt then categorizes possible causes of transient executions. \nThe security boundaries that are broken in the transient executions are discussed.\nIt also analyzes the causes of transient execution by proposing a set of metrics and using the metrics to compare the feasibility. \nFurthermore, the covert channels that can be used in the attacks are categorized and compared with a new set of metrics.\nCombining the transient execution and the covert channels, different types of attacks are compared. 
In the end, possible mitigation schemes in micro-architecture designs are discussed and compared.\n\\section*{Acknowledgements}\nThis work was supported in part by NSF grants \\nsf{1651945} and \\nsf{1813797},\nand through SRC award number 2844.001.\n\\bibliographystyle{ACM-Reference-Format}\n\\bibliography{ref}\n\\end{document}", "id": "c43469f3-8be3-4b00-9473-de68dc682fca", "level": "section", "origin_cites_number": 0, "parent_id": "2f4b0bae-709d-4d09-b856-92d94c793465", "prefix_titles": [ [ "title", "Survey of Transient Execution Attacks" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
66
[ 1365, 7343, 1362, 1366, 1361, 1364, 7347, 1363, 7344, 7346, 7345, 1360, 1368, 1369, 1367, 1372, 1371, 1370, 1373, 1374, 1375 ]
0.402729
[ "Meishan Zhang" ]
A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures
2020
2020-06-19T10:21:17Z
cs.CL
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "3041659b-6b9d-44e0-94d2-651838f3bcad", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ] ], "subsections": [ "b04284a8-3396-424c-8021-087d126ccd5d", "45541c9b-5413-4a2e-8549-37536168b30f", "516ffc24-2db1-440e-97d0-6e1eb5b3a2a5", "9321cfad-d380-45ff-ae5a-cdcb8f310bb1", "22ff2a35-25cf-48f6-b0a7-3b8fb4e5cec9", "71fdac5a-30d5-4d45-8687-4a12e85adefe", "6990ee8e-1b57-418d-aa02-ad2b46dc41f6", "7d10b76d-891b-4635-8a29-8b635d66d356", "46654902-5367-451c-b03d-c14f5b1c43e5", "3e9686ef-7d97-4a68-844d-6ac583e89a37" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section1}\nSentence-level syntactic and semantic parsing is one major topic in the natural language processing (NLP) community,\nwhich aims to uncover the internal structural relations in sentences .\nFrom the view of linguistics,\nthe goal of parsing is to disclose how words are combined to form sentences and the rules that govern the formation of sentences.\nOn the other hand, from the view of NLP applications,\nparsing can be beneficial for a number of tasks,\nsuch as machine translation, question answering, information extraction, sentiment analysis and generation ,\nand the performance of parsing matters greatly.\nParsing has been extensively studied for decades.\nThe goal of syntactic parsing is to derive the syntax information in sentences,\nsuch as the subjects, objects, modifiers and topics.\nThere have been a number of achievements for the task,\nand large-scale corpora for a range of languages have been already available.\nCompared with syntactic parsing,\nsemantic parsing is much more difficult due to the complex structure of various semantics such as predicate-argument,\nand it is also a long-range goal of NLP.\nWith the recent advance in data-driven machine learning 
models,\nsemantic parsing has received increasing interest, especially under the neural setting.\nSeveral datasets based on certain formalizations have been developed to facilitate research.\nParsing often relies on specific grammars,\nwhich are used to refine the output structures of syntax and semantics.\nThere are many sophisticated grammars for accurately expressing the syntactic and semantic information at the sentence level.\nIn this paper, we focus on the two popular grammars that have received the most attention.\nContext-free grammar (CFG), well known as \textbf{constituent parsing} (or phrase-structure parsing) \n(thus, also as constituent grammar or phrase-structure grammar), adopts hierarchical phrase-structural trees to organize sentence-level syntactic information,\nwhich has been researched intensively since very early on.\nDependency grammar is another widely-adopted grammar for syntactic and semantic parsing,\nwhere words are directly connected by dependency links, with labels indicating their syntactic or semantic relations .\nBecause of the conciseness and easy annotation of dependency structures,\n\textbf{dependency parsing} has received more attention than constituent parsing.\nBesides, there are many other notable grammars.\nThe representative topics include combinatory categorial grammar (CCG),\nhead-driven phrase structure grammar (HPSG),\nlexical functional grammar (LFG),\nabstract meaning representation (AMR),\nminimal recursion semantics (MRS),\nuniversal conceptual cognitive annotation (UCCA)\nand also several logic-targeted formalizations.\nAll these categories have been researched for a long time,\nand several of them are now developing quickly because of the power of neural networks\nas well as pretrained contextualized word representations.\nHowever, this article leaves these studies for future, more comprehensive surveys.\n\setlength{\tabcolsep}{4.0pt}\n\begin{table}[H]\n\begin{threeparttable}\n\footnotesize\n\caption{A 
comparison of representative constituent parsing models, where phrase-level F1 scores are reported, PTB and CTB are two benchmark datasets for the English and Chinese parsing, respectively. } \\label{table:const:performance}\n\\begin{tabular}{l|c|cc}\n\\hline\nModel & Main Features & PTB & CTB \\\\ \\hline \\hline\n\\multicolumn{4}{c}{ \\texttt{Chart-based, Statistical Models}} \\\\ \\hline\n\\sworkcite{collins-1997-three} & head-lexicalization & 88.2 & N/A \\\\\n\\sworkcite{Charniak2000} & max-entropy & 89.5 & 80.8 \\\\\n\\sworkcite{mcclosky-etal-2006-effective} & self-training & \\bf 92.3 & N/A \\\\\n\\sworkcite{petrov-klein-2007-improved} & PCFG & 90.1 & \\bf 83.3 \\\\\n\\sworkcite{hall-etal-2014-less} & CRF & 89.9 & N/A \\\\\n\\hline\n\\hline\n\\multicolumn{4}{c}{ \\texttt{Transition-based, Statistical Models}} \\\\ \\hline\n\\sworkcite{sagae-lavie-2005-classifier} & greedy & 86.0 & N/A \\\\\n\\sworkcite{zhu-etal-2013-fast} & global learning, beam & \\bf 91.3 & \\bf 85.6 \\\\ \\hline\n\\hline\n\\multicolumn{4}{c}{ \\texttt{Chart-based, Neural Models}} \\\\ \\hline\n\\sworkcite{socher-etal-2013-parsing} & recursive NN & 90.4 & N/A \\\\\n\\sworkcite{durrett-klein-2015-neural} & CNN & 91.1 & N/A \\\\\n\\sworkcite{stern-etal-2017-minimal} & LSTM, span & 91.8 & N/A \\\\\n\\sworkcite{kitaev-klein-2018-constituency} (a) & self-attentive & 93.5 & N/A \\\\\n\\sworkcite{kitaev-klein-2018-constituency} (b) & +ELMo & \\bf 95.1 & N/A \\\\ \\hline\n\\hline\n\\multicolumn{4}{c}{ \\texttt{Transition-based, Neural Models}} \\\\ \\hline\n\\sworkcite{wang-etal-2015-feature} & neural+discrete & 90.7 & \\bf 86.6 \\\\\n\\sworkcite{watanabe-sumita-2015-transition} & global learning, beam & 90.7 & N/A \\\\\n\\sworkcite{dyer-etal-2016-recurrent} & language modelling & 92.4 & 82.7 \\\\\n\\sworkcite{cross-huang-2016-span} & dynamic oracle & 91.3 & N/A \\\\\n\\sworkcite{liu-zhang-2017-order} & in-order & 91.8 & 86.1 \\\\\n\\sworkcite{fried-klein-2018-policy} & policy gradient & 92.6 & 
86.0 \\\\\n\sworkcite{kitaev2019tetra} & policy gradient & \bf 95.4 & 86.0 \\\\\n\hline\n\hline\n\multicolumn{4}{c}{ \texttt{Other Methods (report neural models only) }} \\\\ \hline\n\sworkcite{shen-etal-2018-straight} & distance to tree & 91.8 & 86.5 \\\\\n\sworkcite{teng-zhang-2018-two} & local classification & 92.7 & 87.3 \\\\\n\sworkcite{vilares-etal-2019-better} & sequence labeling & 91.1 & 85.6 \\\\\n\sworkcite{zhou-zhao-2019-head} & HPSG grammar & \bf 96.3 & \bf 92.2 \\\\\n\sworkcite{mrini2019rethinking} & HPSG, improved attention & \bf 96.3 & \bf N/A \\\\\n\hline\n\end{tabular}\n\end{threeparttable}\n\end{table}\nHere we make a brief survey of syntactic and semantic parsing based on \textbf{constituent grammar} and \textbf{bi-lexicalized dependency grammar}.\nIn Sections 2 and 3 we review the studies of constituent parsing and dependency parsing, respectively,\nwhere the dependency parsing is based on tree structures and specifically targets syntax.\nWe further investigate semantic-oriented dependency graph parsing in Section 4.\nSections 5 and 6 review cross-domain and cross-lingual parsing, which are hot directions.\nSection 7 reviews the joint models that target parsing as the final goal,\nwhile Section 8 reviews the parser application strategies, where parsers are evaluated on downstream applications.\nSection 9 introduces the related treebank work, which serves as the major training corpus for various parsers as well as for parser model evaluations.\nFinally, in Section 10, the conclusion and future work are summarized.", "id": "b04284a8-3396-424c-8021-087d126ccd5d", "level": "section", "origin_cites_number": 7, "parent_id": "3041659b-6b9d-44e0-94d2-651838f3bcad", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "Constituent 
parsing is one fundamental task for syntax parsing,\nwhich has received great interest for decades .\nFigure \ref{fig-constituent} shows an example constituent tree,\nwhere nodes in the constituent tree are constituent spans, also known as phrases.\nThe goal of constituent parsing is to uncover these phrases as well as their relations.\nThe standard evaluation method of constituent parsers is based on recognition of the phrases,\nwhere precision, recall and the F1-measure scores are adopted as the major metrics.\nThe mainstream approaches of constituent parsing include the chart-based and the transition-based models.\nCurrent neural models have achieved state-of-the-art performance under both kinds of methods.\nIn fact, neural constituent parsing started well before the rise of deep learning .\nIn this section, we first introduce the chart-based and transition-based constituent models,\nand then show several other models beyond the two categories.\nHere, before the detailed introduction,\nwe show an overall picture of the performance of various representative constituent parsers in Table \ref{table:const:performance},\nwhere ensemble models are excluded for fair comparisons.", "id": "45541c9b-5413-4a2e-8549-37536168b30f", "level": "section", "origin_cites_number": 4, "parent_id": "3041659b-6b9d-44e0-94d2-651838f3bcad", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Constituent Parsing" ] ], "subsections": [ "7e358581-87c3-419a-a5aa-c05934b0b72d", "70d9f357-92d8-4745-af7e-2d0fedf8a7e9", "c2138b06-43bc-43b9-9d22-96f5318c5451", "a629e9bd-054b-4c6e-abf9-3c0523065c88", "81fc8545-22eb-4125-bc59-9d98a5a0a0c7" ], "title": "Constituent Parsing" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "7e358581-87c3-419a-a5aa-c05934b0b72d", "level": "subsection", "origin_cites_number": 0, "parent_id": "45541c9b-5413-4a2e-8549-37536168b30f", "prefix_titles": [ [ 
"title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Constituent Parsing" ], [ "subsection", "Chart-Based Parsing" ] ], "subsections": [ "3ced353d-02b3-481e-a335-c7489b315b0e", "2ee018e6-3e07-48f0-9c7b-f9cb9ca3e3ab" ], "title": "Chart-Based Parsing" }, { "cite_extract_rate": 0, "cites": [], "content": "Early successful constituent parsing models exploit the productive CFG rules to guide the generation of constituent trees.\nThe chart parsing algorithms are exploited universally for decoding,\nand most of the effort is focused on the refinement of CFG rules, which serve as the major sources of parameter estimation.\n\\newcite{collins-1997-three} and \\newcite{Charniak2000} extend probabilistic context-free grammar (PCFG) with head lexicalization,\nassociating PCFG rules with head words, which can effectively boost the PCFG parsing performance.\nUnlexicalized models have also received great attention,\nby using fine-grained structural annotation or automatic latent variables\n to enrich PCFG rules,\nleading to comparable or even better performance than lexicalized models.\n\\begin{figure}[H]\n\\includegraphics[scale=0.6]{figures/constituent-example-crop.pdf}\n\\caption{An example of constituent tree.}\n\\label{fig-constituent}\n\\end{figure}\nThe above models suffer the difficulty of integrating non-local features\nsince future decisions are invisible during decoding which is critical for global inference.\nCondition random field (CRF) is one way for global modeling.\n\\newcite{hall-etal-2014-less} propose a strong constituent parsing model by adapting the standard n-gram CRF models for CFG,\nand meanwhile presenting rich sophisticated features.\nThe dependencies among adjacent CFG rules can be modeled,\nwhich are used for global inference.", "id": "3ced353d-02b3-481e-a335-c7489b315b0e", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "7e358581-87c3-419a-a5aa-c05934b0b72d", "prefix_titles": 
[ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Constituent Parsing" ], [ "subsection", "Chart-Based Parsing" ], [ "subsubsection", "Statistical Models" ] ], "subsections": [], "title": "Statistical Models" }, { "cite_extract_rate": 1, "cites": [ 7, 8385 ], "content": "\\newcite{socher2010learning} is the first work to define scores over phrases by recursive neural networks.\nThe CFG-based constituent trees can be naturally modeled in this way.\nNeural CRF parsing is accordingly proposed by \\newcite{durrett-klein-2015-neural},\nwhich can be regarded as a neural enhancing of \\newcite{hall-etal-2014-less}.\nThe work simply uses feed-forward neural networks to encode atomic features instead of human composition.\nNotice that it is different from \\newcite{socher2010learning} as no recursive composition is used here.\n\\newcite{stern-etal-2017-minimal} propose state-of-the-art chart-based neural models.\nOn the one hand, they use deep bidirectional long-short term memory (LSTM) neural networks to enhance sentence representations,\ndesigning sophisticated strategies for span representation.\nOn the other hand, they also adopt top-down incremental parsing for decoding,\nwhich dilutes the differences between chart-based and transition-based approaches.\nTheir results are very strong on par with the state-of-the-art transition-based methods at the same time.\nThe work is further followed by \\newcite{gaddy-etal-2018-whats} with extensive analysis and \\newcite{kitaev-klein-2018-constituency} with a self-attentive encoder.\nIn particular, \\newcite{kitaev-klein-2018-constituency} exploit contextualized word representation including ELMo \nand BERT ,\nleading to almost the best parsing performance in the literature.", "id": "2ee018e6-3e07-48f0-9c7b-f9cb9ca3e3ab", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "7e358581-87c3-419a-a5aa-c05934b0b72d", "prefix_titles": [ [ "title", "A 
Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Constituent Parsing" ], [ "subsection", "Chart-Based Parsing" ], [ "subsubsection", "Neural Models" ] ], "subsections": [], "title": "Neural Models" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "70d9f357-92d8-4745-af7e-2d0fedf8a7e9", "level": "subsection", "origin_cites_number": 0, "parent_id": "45541c9b-5413-4a2e-8549-37536168b30f", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Constituent Parsing" ], [ "subsection", "Transition-Based Parsing" ] ], "subsections": [ "a4b498b6-823b-4aad-bc1d-f3f3724e3a56", "1b3d47a3-c033-4c89-9cf0-e261bd71d186" ], "title": "Transition-Based Parsing" }, { "cite_extract_rate": 0.2, "cites": [ 9026 ], "content": "The transition-based models are highly promising for constituent parsing .\nThe key idea is to define a transition system with transition states and actions,\nwhere states denote partial parsing outputs,\nand actions specify next-step state-transition operations.\nTransition actions indicate the incremental tree construction process.\nFor constituent parsing, typical actions include the \emph{shift} for building terminal nodes, the \emph{unary} for building unary nodes,\nand the \emph{binary} for building binary nodes.\nThe details can be found in \newcite{sagae-lavie-2005-classifier}.\nThe model is also commonly referred to as the shift-reduce model, where \emph{unary} and \emph{binary} are actions of reduction.\nBy converting constituent parsing into predicting a sequence of transition actions,\ndiscriminative classifiers such as max-entropy and support vector machine (SVM) can be applied for the prediction,\nwith rich manually-crafted features.\nThe initial shift-reduce model classifies the sequence of actions for a single constituent tree separately,\ngreedily searching for the best output constituent 
tree,\nwhich may suffer the error propagation problem since the early step errors can affect later predictions.\nTo this end, globally modeling with beam search is proposed to alleviate the problem,\nwhich decodes the total sequence of actions for a full constituent tree as a whole .\nThe discriminative perceptron-style online learning greatly promotes this line of work ,\nwhich enables legal parameter optimizations towards inexact search.\nFor feature engineering, the contextual lexicalized words, POS tags, distances and their compositions are all extensively investigated,\nwhich can be referred to for details.", "id": "a4b498b6-823b-4aad-bc1d-f3f3724e3a56", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "70d9f357-92d8-4745-af7e-2d0fedf8a7e9", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Constituent Parsing" ], [ "subsection", "Transition-Based Parsing" ], [ "subsubsection", "Statistical Models" ] ], "subsections": [], "title": "Statistical Models" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 6489 ], "content": "\\newcite{watanabe-sumita-2015-transition} and \\newcite{wang-etal-2015-feature} could be the direct extensions of \\newcite{zhu-etal-2013-fast} by using neural networks.\nThe composition of atomic features is achieved by feed-forward neural networks.\n\\newcite{cross-huang-2016-incremental} find that the greedy style decoding can also achieve highly competitive performance when a deep LSTM encoder is exploited.\nThen, several studies suggest dynamic oracles to optimize greedy constituent parsers\n.\nThe main idea is to let models make optimum decisions when facing erroneous transition states .\nA proportion of training instances with erroneous transition states and their oracle actions are added into the original training corpus.\nThere have been several studies exploiting different transition 
strategies.\n\\newcite{dyer-etal-2016-recurrent} suggest the recurrent neural work grammar (RNNG),\nwhich is a top-down transition-based system.\n\\newcite{liu-zhang-2017-order} design an in-order transition system to make a compromise between top-down and bottom-up transitions.\n\\newcite{coavoux-etal-2019-unlexicalized} present a novel system with an additional GAP action for discontinuous constituency parsing,\nand they also find that unlexicalized models give better performance.\n\\newcite{fernandez2019faster} optimize the transition actions to facilitate the construction of non-binarized constituent nodes,\navoiding the preprocessing of binarization for constituent trees.\n\\newcite{kitaev2019tetra} suggest the tetra-tagging system, which combines sequence tagging and transition action classification.\nThe system achieves state-of-the-art performance on the benchmark PTB dataset with BERT representations.", "id": "1b3d47a3-c033-4c89-9cf0-e261bd71d186", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "70d9f357-92d8-4745-af7e-2d0fedf8a7e9", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Constituent Parsing" ], [ "subsection", "Transition-Based Parsing" ], [ "subsubsection", "Neural Models" ] ], "subsections": [], "title": "Neural Models" }, { "cite_extract_rate": 0.75, "cites": [ 6490, 11, 2640 ], "content": "Neural networks such as deep LSTM and multi-head self-attention\nare capable of encoding global features implicitly into their final representations,\nwhich weakens the role of decoding as a source of feature induction.\nBased on the observation,\nseveral studies attempt to use simple frameworks,\naiming for a wide community for parsing.\nOne representative attempt is to exploit neural sequence-to-sequence models for structural constituent parsing .\nThe key idea is to first linearize a phrase-structural constituent tree into a sequence of symbols by 
certain traversing strategies,\nand then directly feed the pair of input words and output symbols into a standard sequence-to-sequence model.\nThese models require large-scale corpora for training,\nwhich could be obtained by auto-parsed high-confidence constituent trees from other state-of-the-art parsers.\nNeural sequence labeling models have also been investigated for constituent parsing .\n\\newcite{gomez-rodriguez-vilares-2018-constituent} propose the first work of this line,\nwhich exploits the lowest common ancestor between adjacent words as clues to encode the word roles.\n\\newcite{vilares2020parsing} extend the work by language modeling and enhance parsing with pretraining.\nFurther, more direct schemes have been proposed with local modeling for constituent parsing.\n\\newcite{shen-etal-2018-straight} directly predict the distance of constituent phrases\nand then decode greedily in a top-down way for a full constituent tree.\nSimilarly, \\newcite{teng-zhang-2018-two} propose two models based on local span prediction,\nachieving highly competitive performance on par with transition-based models.\nRecently, \\newcite{zhou-zhao-2019-head} present to exploit the HPSG-based grammar for constituent parsing,\nand further power the model with XLNet word representations ,\nachieving the top performances for both CTB and PTB datasets.\n\\newcite{mrini2019rethinking} revise the multi-head self-attention mechanism in \\newcite{zhou-zhao-2019-head},\nleading to a similar performance with a smaller number of layers.\n\\setlength{\\tabcolsep}{1.5pt}\n\\begin{table}[H]\n\\begin{threeparttable}\n\\footnotesize\n\\caption{A comparison of representative dependency parsing models, where UAS are reported, PTB and CTB5.1 (CTB in the Table for short) are two benchmark datasets for the English and Chinese parsing, respectively. 
} \\label{table:dep:performance}\n\\begin{tabular}{l|c|cc}\n\\hline\nModel & Main Features & PTB & CTB \\\\ \\hline \\hline\n\\multicolumn{4}{c}{ \\texttt{Graph-based, Statistical Models}} \\\\ \\hline\n\\sworkcite{mcdonald-etal-2005-online} & 1-order & 90.9 & 83.0 \\\\\n\\sworkcite{McDonald2006} & 2-order & 91.5 & 85.2 \\\\\n\\sworkcite{koo-etal-2008-simple} & word clusters & 93.2 & N/A \\\\\n\\sworkcite{chen-etal-2009-improving} & auto subtrees & 93.2 & 86.7 \\\\\n\\sworkcite{bohnet-2010-top} & feature hashing & 92.9 & N/A \\\\\n\\sworkcite{koo-collins-2010-efficient} & 3-order & 93.0 & 86.0 \\\\\n\\sworkcite{ma-zhao-2012-fourth} & 4-order & \\bf 93.4 & \\bf 87.4 \\\\\n\\hline\n\\hline\n\\multicolumn{4}{c}{ \\texttt{Transition-based, Statistical Models}} \\\\ \\hline\n\\sworkcite{nivre-cl08} (a) & arc-standard & 89.7 & 82.7 \\\\\n\\sworkcite{nivre-cl08} (b) & arc-eager & 89.9 & 80.3 \\\\\n\\sworkcite{zhang-clark-2008-tale} & global learning, beam & 91.4 & 84.3 \\\\\n\\sworkcite{zhang-nivre-2011-transition} & rich non-local features & \\bf 92.9 & \\bf 86.0 \\\\\n\\sworkcite{goldberg-nivre-2012-dynamic} & dynamic oracle & 91.0 & 84.7 \\\\ \\hline\n\\hline\n\\multicolumn{4}{c}{ \\texttt{Graph-based, Neural Models}} \\\\ \\hline\n\\sworkcite{pei-etal-2015-effective} & feed-forward & 93.3 & N/A \\\\\n\\sworkcite{zhang-etal-2016-probabilistic} & CNN & 93.4 & 87.7 \\\\\n\\sworkcite{wang-chang-2016-graph} & 2-layer LSTM & 94.1 & 87.6 \\\\\n\\sworkcite{kiperwasser-goldberg-2016-simple} & 2-layer LSTM & 93.1 & 86.6 \\\\\n\\sworkcite{dozat2016deep} & 3-layer LSTM, biaffine & 95.7 & 88.9 \\\\\n\\sworkcite{li2019self} (a) & self-attentive & 95.9 & 92.2 \\\\\n\\sworkcite{li2019self} (b) & +ELMO & 96.6 & 90.3 \\\\\n\\sworkcite{li2019self} (c) & +BERT & \\bf 96.7 & \\bf 92.2 \\\\\n\\sworkcite{ji-etal-2019-graph} & GNN & 96.0 & N/A \\\\ \\hline\n\\hline\n\\multicolumn{4}{c}{ \\texttt{Transition-based, Neural Models}} \\\\ \\hline\n\\sworkcite{chen-manning-2014-fast} & 
feed-forward & 91.8 & 83.9 \\\\\n\\sworkcite{dyer-etal-2015-transition} & stack-LSTM & 93.1 & 87.2 \\\\\n\\sworkcite{zhou-etal-2015-neural} & global learning, beam & 93.3 & N/A \\\\\n\\sworkcite{andor-etal-2016-globally} & global learning, beam & 94.6 & N/A \\\\\n\\sworkcite{kiperwasser-goldberg-2016-simple} & 2-layer LSTM & 93.9 & 87.6 \\\\\n\\sworkcite{ballesteros-etal-2017-greedy} & char, stack-LSTM & 93.6 & 87.6 \\\\\n\\sworkcite{ma-etal-2018-stack} & 3-layer LSTM & \\bf 95.9 & \\bf 90.6 \\\\ \\hline\n\\hline\n\\multicolumn{4}{c}{ \\texttt{Other Methods (report neural models only) }} \\\\ \\hline\n\\sworkcite{kiperwasser-goldberg-2016-easy} & easy-first & 93.0 & 87.1 \\\\\n\\sworkcite{li-etal-2018-seq2seq} & sequence-to-sequence & 92.1 & 86.2 \\\\\n\\sworkcite{strzyz-etal-2019-viable} & sequence labeling & 93.7 & N/A \\\\\n\\sworkcite{zhou-zhao-2019-head} & HPSG grammar & 97.2 & \\bf 91.2 \\\\\n\\sworkcite{mrini2019rethinking} & HPSG, improved attention & \\bf 97.3 & \\bf N/A \\\\\n\\hline\n\\end{tabular}\n\\end{threeparttable}\n\\end{table}", "id": "c2138b06-43bc-43b9-9d22-96f5318c5451", "level": "subsection", "origin_cites_number": 4, "parent_id": "45541c9b-5413-4a2e-8549-37536168b30f", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Constituent Parsing" ], [ "subsection", "Other Frameworks" ] ], "subsections": [], "title": "Other Frameworks" }, { "cite_extract_rate": 0, "cites": [], "content": "The semi-supervised architecture aims to enhance a supervised model by statistical information extracted from raw text.\n\\newcite{mcclosky-etal-2006-effective} present the first work which achieves improved performance for constituent parsing by self-training,\nand \\newcite{mcclosky-etal-2008-self} study self-training empirically to show the conditions of usefulness.\n\\newcite{candito2009improving} exploit unsupervised word clusters learned from raw text to enhance constituent 
parsing.\nWhile recent studies shift to the neural network setting, the borderline between semi-supervised and supervised is becoming vague,\nas pretraining from raw text is one critical for the successfulness of neural models.", "id": "a629e9bd-054b-4c6e-abf9-3c0523065c88", "level": "subsection", "origin_cites_number": 0, "parent_id": "45541c9b-5413-4a2e-8549-37536168b30f", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Constituent Parsing" ], [ "subsection", "Semi-Supervised Models" ] ], "subsections": [], "title": "Semi-Supervised Models" }, { "cite_extract_rate": 0, "cites": [], "content": "The model ensemble is one effective way to boost the performance of constituent parsing.\nInitial work focuses on the output reranking .\nWe can take either the k-best outputs of a constituent parser or one-best outputs from heterogeneous parsers as the inputs,\nand then build a new constituent tree by using a feature-rich reranking model.\nBenefiting from sophisticated manually-crafted non-local features,\nthe framework can improve the parser performance significantly.\nHowever, related studies under the neural setting have received much less concern,\nwhich can be potentially due to that the majority of state-of-the-art neural models exploit the same sentence encoders,\nindicating that features are resemble in different kinds of models, and meanwhile homogeneous ensemble (e.g., different random seeds)\nby simply voting can achieve unsurpassable performances.\n\\begin{figure}[H]\n\\includegraphics[scale=0.75]{figures/dependency-example-crop.pdf}\n\\caption{An example of dependency tree.}\n\\label{fig-dependency}\n\\end{figure}", "id": "81fc8545-22eb-4125-bc59-9d98a5a0a0c7", "level": "subsection", "origin_cites_number": 2, "parent_id": "45541c9b-5413-4a2e-8549-37536168b30f", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency 
Structures" ], [ "section", "Constituent Parsing" ], [ "subsection", "Model Ensemble" ] ], "subsections": [], "title": "Model Ensemble" }, { "cite_extract_rate": 0, "cites": [], "content": "Dependency parsing is developed for syntax and semantic analysis by using bilexicalized dependency grammar,\nwhere all syntactic and semantic phenomena are represented by bilexicalized dependencies .\nFigure \\ref{fig-dependency} shows an example tree of dependency parsing.\nFor the evaluation of various dependency parsers, dependency accuracy is used as the major metric,\nin terms of the unlabeled attachment score (UAS) and the labeled attachment score (LAS).\nIn the early stage, dependency parsing is constrained to trees, projective or non-projective .\nRecently, several studies have devoted to dependency parsing over graphs .\nOn the one hand, initial dependency trees are mostly syntactic oriented,\nwhile recently there are growing interests focusing on semantic relations between words .\nThis section mainly focuses on dependency tree parsing,\nwhile dependency graph parsing will be discussed in the next section.\nThe majority of dependency parsing models can be divided into two types,\ngraph-based and transition-based ,\nboth of which have been extensively investigated under the traditional statistical setting and\nthe neural setting .\nThere also exist other interesting approaches for dependency parsing outside the two categories .\nTable \\ref{table:dep:performance} shows an overall picture of the performances of several representative dependency parsers,\nand all ensemble models are excluded in this table.\nThe graph-based and transition-based models are almost comparable (graph-based models are slightly higher) across both traditional statistical and neural settings,\nand other types of models achieve good performance with the support of sophisticated neural networks.\nCurrently, neural models achieve state-of-the-art performances for dependency parsing .", "id": 
"516ffc24-2db1-440e-97d0-6e1eb5b3a2a5", "level": "section", "origin_cites_number": 10, "parent_id": "3041659b-6b9d-44e0-94d2-651838f3bcad", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Dependency Parsing" ] ], "subsections": [ "def2b85a-529c-496d-a307-d10b607b90cb", "5f0b8234-d541-4cec-b500-c7a1e56fa85d", "3da5456f-dd40-404e-8baa-0ff3ea2bf96a", "1f96f995-24ab-4959-952f-55fcaac89ade", "356b7b5a-b87a-4311-a6ab-fb40f5cc38f2" ], "title": "Dependency Parsing" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "def2b85a-529c-496d-a307-d10b607b90cb", "level": "subsection", "origin_cites_number": 0, "parent_id": "516ffc24-2db1-440e-97d0-6e1eb5b3a2a5", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Dependency Parsing" ], [ "subsection", "Graph-Based Parsing" ] ], "subsections": [ "b140a2de-9b3c-4698-87b5-45748e318072", "8ff2b71e-2cb9-41c4-b4d0-6b1cfb9d55de" ], "title": "Graph-Based Parsing" }, { "cite_extract_rate": 0, "cites": [], "content": "Graph-based methods exploit the maximum spanning tree (MST) algorithm for decoding,\nwhich decomposes a full dependency tree into small factors, such as dependency edges,\nand scores the full tree by summing the scores of all the included factors.\nThe score of each factor can be calculated independently by the features extracted from it.\nThe models by using dependency edges as the basic scoring factor are referred to as first-order models,\nwhere the order indicates the maximum number of edges in a factor.\n\\newcite{mcdonald-etal-2005-online} propose a feature-rich first-order MST parser based on discriminative max-margin training.\nLater, higher-order MST parsers have been studied.\nWith larger factors, the parsing models can exploit more sophisticated features, and thus can potentially bring improved performance.\nSecond order MST parsing 
models have studied extensively ,\nwhere the newly added features include the relations from parent-sibling and parent-child-grandchild factors.\nNotice that higher-order MST decoding can have higher time complexity (i.e., from $O(n^3)$ to $O(n^4)$),\nwhich may lead to intolerable parsing speed.\nThe problem could be largely alleviated by \\newcite{Bohnet2010} with feature hashing.\n\\newcite{koo-collins-2010-efficient} propose an efficient third-order dependency parsing model,\nwhich adds grand-sibling and tri-sibling features into the model.\n\\newcite{lei-etal-2014-low} exploit low-rank tensor to alleviate the burden of feature engineering.\nFourth-order dependency parsing has been investigated by \\newcite{ma-zhao-2012-fourth}.\nAs a whole, second-order and third-order parsers could be good choices considering both performance and efficiency.", "id": "b140a2de-9b3c-4698-87b5-45748e318072", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "def2b85a-529c-496d-a307-d10b607b90cb", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Dependency Parsing" ], [ "subsection", "Graph-Based Parsing" ], [ "subsubsection", "Statistical Models" ] ], "subsections": [], "title": "Statistical Models" }, { "cite_extract_rate": 0.75, "cites": [ 8385, 8242, 7 ], "content": "\\newcite{pei-etal-2015-effective} present a graph-based neural model\nby embedding all discrete atomic features in the traditional statistical MST models and\nthen composing these embeddings with a similar feed-forward network of \\workcite{chen-manning-2014-fast}.\nConvolution neural network is then applied for neural feature composition in \\newcite{zhang-etal-2016-probabilistic}.\nFollowing, deep bidirectional LSTMs are exploited to substitute the simple neural feed-forward network .\nAs sentence-level global information can be encoded through these neural structures,\nthe performance gap between first- and 
higher-order decoding is largely reduced.\n\\newcite{dozat2016deep} propose a deep biaffine parser which achieves the impressive performances,\nboosting the UAS and LAS numbers into a new degree.\nThe parser exploits a three-layer bidirectional LSTM as the encoder,\nand a biaffine operation as the decoder to score all possible dependency edges.\nThis work adopts several tricks to reach their final performance,\ne.g., the node-level dropouts, and the same dropout mask at every recurrent timestep.\n\\newcite{li2019self} further enhance the biaffine parser with self-attentive encoder and\ncontextualized word representations such as ELMo and BERT .\n\\newcite{ji-etal-2019-graph} exploit graph neural networks to better the input sentence encoder.", "id": "8ff2b71e-2cb9-41c4-b4d0-6b1cfb9d55de", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "def2b85a-529c-496d-a307-d10b607b90cb", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Dependency Parsing" ], [ "subsection", "Graph-Based Parsing" ], [ "subsubsection", "Neural Models" ] ], "subsections": [], "title": "Neural Models" }, { "cite_extract_rate": 0, "cites": [], "content": "Transition-based models have achieved great success on dependency parsing.\nTo some extent, the transition-based framework is then received great attention to other NLP tasks involving structural learning\nbecause of the successfulness of dependency parsing.\nFor example, the transition-based constituent parsing is initially inspired by transition-based dependency parsing.\nOn the one hand, the transition-based models can obtain nearly equivalent performance compared with graph-based methods.\nOn the other hand, these models are highly efficient, which can achieve linear time complexity.\nTransition-based models convert dependency parsing into an incremental state-transition process,\nwhere states denote partial outputs and they are advanced step 
by step by predefined transition actions.", "id": "5f0b8234-d541-4cec-b500-c7a1e56fa85d", "level": "subsection", "origin_cites_number": 0, "parent_id": "516ffc24-2db1-440e-97d0-6e1eb5b3a2a5", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Dependency Parsing" ], [ "subsection", "Transition-Based Parsing" ] ], "subsections": [ "b7258df6-7ce7-4950-8147-e55659ebdf5e", "cb812e5a-0c1f-41b0-891c-0c0c8af7c185" ], "title": "Transition-Based Parsing" }, { "cite_extract_rate": 0, "cites": [], "content": "The initial work for feature-rich transition-based dependency parsing is suggested by \\newcite{nivre-iwpt03} and \\newcite{Yamada2003},\nand then the framework is extensively investigated .\nThere are two typical transition configurations, arc-standard and arc-eager, respectively,\nwhich are comparable in parsing performances.\nTypically, the transition actions include \\emph{shift} operation (aiming for starting next word processing),\n\\emph{arc-left} (aiming for building a left directional dependency),\nand \\emph{arc-right} (aiming for right directional dependencies).\nBesides, several researchers propose other transition configurations ,\nwhich can handle various complex cases, such as non-projective dependencies.\nEarly transition-based methods usually exploit discriminative classifiers for action prediction when a certain transition state is given,\nwhich processes the parsing in a local manner.\nThe scheme may suffer the error propagation problem, where early errors can influence future predictions.\nTo alleviate the problem, global learning with beam-search is one effective way.\n\\newcite{zhang-clark-2008-tale} firstly apply the strategy.\nRich global features that have been exploited in high-order graph-based dependency parsers can be also integrated\ninto the model .\nThe strategy can be also enhanced with dynamic programming further .\nAnother alternative strategy is the 
dynamic oracle,\nwhich is firstly proposed by \\newcite{goldberg-nivre-2012-dynamic} for transition-based models by using arc-eager.\nThe method defines dynamic gold-standard oracle based on a sample of erroneous states,\nand then add these instances to enhance model training.\nThus, we can minimize global performance losses when errors occur.\nAlthough the strategy gives slightly worse performance than \\newcite{zhang-nivre-2011-transition},\nit enables dependency parsing in a greedy way, greatly increasing the parsing efficiency.\nThe strategy has been investigated by several studies with different configurations,\nsuch as arc-standard and non-projective parsing .", "id": "b7258df6-7ce7-4950-8147-e55659ebdf5e", "level": "subsubsection", "origin_cites_number": 10, "parent_id": "5f0b8234-d541-4cec-b500-c7a1e56fa85d", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Dependency Parsing" ], [ "subsection", "Transition-Based Parsing" ], [ "subsubsection", "Statistical Models" ] ], "subsections": [], "title": "Statistical Models" }, { "cite_extract_rate": 0.75, "cites": [ 8243, 8242, 6492, 6491, 1171, 8244 ], "content": "\\workcite{chen-manning-2014-fast} is one millstone work for neural dependency parsing,\nwhich substitutes traditional manually-crafted discrete features with neural features.\nThe work uses simple feed-forward neural networks to compose the embeddings of all atomic features automatically,\nand thus is free of feature engineering.\nFinally, the proposed model obtained much better performance than the corresponding statistical baseline.\nPretrained word embeddings and the neural composition function are the keys to success.\nThere exist several directions to improve the performance of neural transition-based dependency parsing.\nFirst, we can exploit better neural network structures.\nStack-LSTM is presented by \\newcite{dyer-etal-2015-transition} and then followed by 
several studies ,\nwhich can represent transition states by utilizing partial structural information.\nIn parallel, deep bidirectional LSTM is also investigated .\n\\newcite{ma-etal-2018-stack} exploit a similar encoder as \\newcite{dozat2016deep}, achieving slightly better performances than \\workcite{dozat2016deep}.\nIn fact, with powerful neural encoders, especially pretrained contextualized word representations,\nthe performance difference between graph-based and transition-based is very marginal .\nSeveral researchers suggest global learning with beam-search strategy in \\workcite{zhang-nivre-2011-transition} under the neural setting.\n\\newcite{zhou-etal-2015-neural} make the pioneer attempts for this goal,\nwhich is further perfected with a theoretical guaranty by \\newcite{andor-etal-2016-globally}.\nThese models have achieved state-of-the-art performance before the biaffine parser .\nOne major drawback is that the strategy suffers from the efficiency problem due to the beam search.\nThe dynamic oracle strategy is applied as well making the greedy transition-based neural dependency parsers\n.\nRecently, both global learning and dynamic oracle are difficult to give much-improved capacity\nwhen pretrained contextualized word representations are exploited.", "id": "cb812e5a-0c1f-41b0-891c-0c0c8af7c185", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "5f0b8234-d541-4cec-b500-c7a1e56fa85d", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Dependency Parsing" ], [ "subsection", "Transition-Based Parsing" ], [ "subsubsection", "Neural Models" ] ], "subsections": [], "title": "Neural Models" }, { "cite_extract_rate": 0, "cites": [], "content": "Several interesting models outside the graph-based and transition-based framework are also concerned.\nFor example, the grammar-based framework can be applied to dependency parsing as well.\nFirst, a dependency tree is 
converted to an equivalent phrase-structural constituent tree,\nand then a grammar-based constituent parsing model can be applied for dependency parsing.\nThe method is proposed firstly by \\newcite{mcdonald-06-phd-thesis},\nand also highly emphasized in \\newcite{kubler2009dependency}.\nSeveral studies have exploited this method as one component for model ensembling .\nRecently, \\newcite{zhou-zhao-2019-head} and \\newcite{mrini2019rethinking} adopt the HPSG grammar for the same goal,\nachieving very competitive performances.\n\\newcite{goldberg-elhadad-2010-efficient} present an easy-first dependency parsing model, which processes the input sentences in a non-directional way.\nThe output dependency tree is constructed recursively, where the highest-confidence dependency arc is selected at each time.\nThe neural version of the work is exploited by \\newcite{kiperwasser-goldberg-2016-easy} by using hierarchical LSTMs.\nSequence-to-sequence learning can be also applied to neural dependency parsing, where the transition-based linearization can be served as one natural solution.\n\\newcite{li-etal-2018-seq2seq} present a strong sequence-to-sequence model by head prediction for each word.\n\\newcite{strzyz-etal-2019-viable} suggest a sequence labeling model for dependency parsing.", "id": "3da5456f-dd40-404e-8baa-0ff3ea2bf96a", "level": "subsection", "origin_cites_number": 1, "parent_id": "516ffc24-2db1-440e-97d0-6e1eb5b3a2a5", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Dependency Parsing" ], [ "subsection", "Other Frameworks" ] ], "subsections": [], "title": "Other Frameworks" }, { "cite_extract_rate": 0, "cites": [], "content": "Here we briefly offer a survey for semi-supervised dependency parsing under the traditional statistical setting,\nwhich utilizes statistical information extracted from a raw text to enhance a baseline model.\nThis scheme has received little attention 
under the neural setting because of pretraining.\nAs a whole, the semi-supervised dependency parsing models can be categorized into three types\naccording to the extracted information from the raw text,\nnamely word-level, partial-tree level, and sentence-level methods.\nFor word-level information, one representative work is \\workcite{koo-etal-2008-simple},\nwhich augments the atomic features of a baseline model with word clusters.\n\\newcite{zhou-etal-2011-exploiting} exploit selectional preference information from web texts to improve dependency parsing.\nActually, word embeddings can be also regarded as a kind of semi-supervised word-level information,\nwhich has been suggested by \\newcite{turian-etal-2010-word} for NLP, but not experimented on dependency parsing.\n\\newcite{chen-etal-2014-feature} further extend the idea into feature embeddings, embedding all features including words.\nFor the partial-tree level integration, \\newcite{chen2008dependency}\nexploit high-frequency auto-parsed bilexical dependencies to enhance the baseline supervised model.\nFurther, \\newcite{chen-etal-2009-improving} extend the work by using higher-order subtrees.\n\\newcite{chen-etal-2012-utilizing} could be regarded as a general framework for the partial tree level integration,\nby utilizing dependency language models learned from auto-parsed dependency trees.\nSelf-training, co-training as well as tri-training are straightforward methods for sentence-level\nsemi-supervised learning ,\nwhere high-confidence auto-parsed dependency trees from several baseline models,\nare used to augment the training dataset.\n\\newcite{li-etal-2014-ambiguity} propose an ambiguity-aware learning method\nto effectively model the confidence of auto-parsed dependency trees,\nleading to significant performance improvements.", "id": "1f96f995-24ab-4959-952f-55fcaac89ade", "level": "subsection", "origin_cites_number": 1, "parent_id": "516ffc24-2db1-440e-97d0-6e1eb5b3a2a5", "prefix_titles": [ [ 
"title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Dependency Parsing" ], [ "subsection", "Semi-Supervised Models" ] ], "subsections": [], "title": "Semi-Supervised Models" }, { "cite_extract_rate": 0, "cites": [], "content": "By effectively combining heterogeneous models,\nthe dependency parsing performance can be further boosted.\n\\newcite{nivre-mcdonald-2008-integrating} first analyze the differences between graph-based and transition-based models,\nand then combine the two kinds of models to utilize their complementary information,\nresulting in better performances.\n\\newcite{sun-wan-2013-data} perform parsing ensemble by including grammar-based models further,\nwhich are highly diverse with the graph-based and transition-based models.\nUnder the neural setting, simple voting can achieve very strong performances.\nThe above studies are all targeted at different parsing models based on the same treebank.\nThere are several studies aimed at the parser ensemble based on heterogeneous treebanks,\nwhose annotation guidelines are highly different.\n\\newcite{li-etal-2012-exploiting} exploit stacked learning combine with quasi-synchronous grammars for effective integration.\n\\newcite{guo-etal-2016-universal} study a similar ensemble by using deep multitask learning,\nwhere treebanks of different languages are also concerned.\n\\newcite{jiang-etal-2018-supervised} present and study the task of supervised treebank conversion,\nwhich can be served as one method for integration.", "id": "356b7b5a-b87a-4311-a6ab-fb40f5cc38f2", "level": "subsection", "origin_cites_number": 0, "parent_id": "516ffc24-2db1-440e-97d0-6e1eb5b3a2a5", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Dependency Parsing" ], [ "subsection", "Model Ensemble" ] ], "subsections": [], "title": "Model Ensemble" }, { "cite_extract_rate": 0, "cites": [], 
"content": "The dependency parsing models mentioned in the previous section are all aimed for dependency tree parsing,\nwhich majorly reflects syntactic and shallow-semantic information in sentences.\nAs there are growing demands of deep semantic parsing,\nwhich is difficult to be expressed by dependency tree only,\ndependency graph parsing has received increasing interests ,\nwhich allows multiple (including zero) heads for one word in sentences.\nNote that the semantic graph is still formalized by a set of bilexicalized dependencies,\nwith nodes corresponding to surface lexical words, and edges indicating the semantic relations between nodes.\nThere are different formalizations of the semantic dependency graph.\nWe can combine syntactic tree-based dependency parsing and semantic role labeling (SRL) to result in a dependency graph,\nwhich is referred to as joint dependency syntax and SRL .\nRecently, the conception of semantic dependency parsing (SDP) has been introduced ,\nwhich provides different views of semantic relations, such as DELPH-IN MRS (DM), \npredicate-argument structures (PDS) and Prague semantic dependencies (PSD).\nFollowing, we will review the studies of the two types of semantic dependency graph parsing.\n\\begin{figure}[H]\n\\includegraphics[scale=0.7]{figures/depsrl-example-crop.pdf}\n\\caption{An example of joint syntactic and semantic dependencies.}\n\\label{fig-depsrl}\n\\end{figure}", "id": "9321cfad-d380-45ff-ae5a-cdcb8f310bb1", "level": "section", "origin_cites_number": 5, "parent_id": "3041659b-6b9d-44e0-94d2-651838f3bcad", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Semantic Dependency Graph" ] ], "subsections": [ "340f588e-7f78-45cd-9785-9056bc28ebf7", "ded4738c-fbc0-410b-b0e9-1a35611ffb36" ], "title": "Semantic Dependency Graph" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec-dep-srl}\nFigure \\ref{fig-depsrl} shows an example 
dependency graph of joint syntactic and semantic dependencies.\nHere we do not intend to introduce the pipeline models,\nwhich train syntactic and semantic models separately,\nand then output the dependency graph either in two steps or jointly.\nAlthough these models can perform dependency graph parsing,\nthey have received less attention on this topic.\nWe focus on the models of joint learning and decoding for full dependency graph parsing.\nTable \\ref{table:jointsrl:performance} shows the performance of several studies on this line.\n\\setlength{\\tabcolsep}{2.2pt}\n\\begin{table}[H]\n\\begin{threeparttable}\n\\footnotesize\n\\caption{A comparison of typical joint dependency syntax and SRL models on the CoNLL-2008 English dataset. } \\label{table:jointsrl:performance}\n\\begin{tabular}{l|c|ccc}\n\\hline\nModel & Main Features & Syn & Sem & All \\\\ \\hline \\hline\n\\sworkcite{johansson-2009-statistical} & joint inference & 86.6 & 77.1 & 81.8 \\\\\n\\sworkcite{titov2009online} & transition-based & 87.5 & 76.1 & 81.8\\\\\n\\sworkcite{henderson-etal-2013-multilingual} & sigmoid belief network & 87.5 & 76.1 & 81.8 \\\\\n\\sworkcite{swayamdipta-etal-2016-greedy} & neural, stack-LSTM & \\bf 89.1 & \\bf 80.5 & \\bf 84.5\\\\\n\\hline\n\\end{tabular}\n\\end{threeparttable}\n\\end{table}\n\\newcite{titov2009online} extend transition-based dependency parsing with a particular \\emph{swap} operation,\nenabling the model to process non-planar multiple-head graphs,\nand thus dependency graph parsing can be performed jointly.\n\\newcite{henderson-etal-2013-multilingual} also exploit the transition-based framework to derive syntactic and semantic\ndependencies concurrently, based on a transition system similar to that of \\newcite{titov2009online},\nbut adopt a different model estimation by using an incremental sigmoid belief network with latent variables.\n\\newcite{lluis-etal-2013-joint} present a graph-based model with a dual decomposition algorithm for decoding,\nassigning syntactic and semantic dependencies concurrently.\nAll the aforementioned studies are based on the traditional statistical setting.\nUnder the neural setting, there is little work focusing on the task, with one exception.\n\\newcite{swayamdipta-etal-2016-greedy} present a transition-based stack-LSTM model for joint syntactic and semantic dependencies,\nwhere their transition system largely follows \\workcite{henderson-etal-2013-multilingual}.\nSince then, neural dependency graph parsing models have centered on other datasets.\n\\begin{figure}[H]\n\\includegraphics[scale=0.75]{figures/depgraph-example-crop.pdf}\n\\caption{An example of a semantic dependency graph.}\n\\label{fig-depgraph}\n\\end{figure}\n\\setlength{\\tabcolsep}{3.0pt}\n\\begin{table}[H]\n\\begin{threeparttable}\n\\footnotesize\n\\caption{A comparison of typical semantic dependency parsing models on the SemEval-2015 shared-task dataset, where WSJ and Brown indicate the in-domain and out-of-domain test sections. } \\label{table:sdp:performance}\n\\begin{tabular}{l|c|cc}\n\\hline\nModel & Main Features & WSJ & Brown \\\\ \\hline \\hline\n\\sworkcite{du2015peking} & tree approximations & 85.4 & 80.8 \\\\\n\\sworkcite{almeida2015lisbon} & 2-order graph & 85.2 & 81.2 \\\\\n\\sworkcite{peng-etal-2017-deep} & multi-task learning & 87.2 & 83.6 \\\\\n\\sworkcite{wang2018neural} & transition, LSTM & 86.9 & 82.8 \\\\\n\\sworkcite{dozat-manning-2018-simpler} & LSTM, biaffine & 89.5 & 86.3 \\\\\n\\sworkcite{wang-etal-2019-second} & 2-order graph, LSTM & \\bf 89.8 & \\bf 86.9 \\\\\n\\hline\n\\end{tabular}\n\\end{threeparttable}\n\\end{table}", "id": "340f588e-7f78-45cd-9785-9056bc28ebf7", "level": "subsection", "origin_cites_number": 2, "parent_id": "9321cfad-d380-45ff-ae5a-cdcb8f310bb1", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Semantic Dependency Graph" ], [ "subsection", "Joint Dependency Syntax and SRL" ] ], "subsections": 
[], "title": "Joint Dependency Syntax and SRL" }, { "cite_extract_rate": 0, "cites": [], "content": "SDP could be regarded as an extension of syntactic dependency parsing\nthat characterizes more semantic relations over the bilexical dependencies,\nand it can thus greatly benefit from the advances of dependency parsing.\nMore recently, \\newcite{oepen-etal-2014-semeval} and \\newcite{oepen2015semeval} present SDP from a different view,\nwhich converts the already available linguistically-informed semantic annotations into dependencies,\nincluding three different formalisms: DM, PAS and PSD,\nand this view has now been widely accepted for deep semantic parsing.\nFigure \\ref{fig-depgraph} shows an example of SDP.\nFor SDP, graph- and transition-based models are also the mainstream methods,\nand most of these models are adapted from dependency tree parsing.\nTable \\ref{table:sdp:performance} shows the performance of several representative SDP models.", "id": "ded4738c-fbc0-410b-b0e9-1a35611ffb36", "level": "subsection", "origin_cites_number": 2, "parent_id": "9321cfad-d380-45ff-ae5a-cdcb8f310bb1", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Semantic Dependency Graph" ], [ "subsection", "Semantic Dependency Parsing" ] ], "subsections": [ "96230d37-59d7-4a07-8274-ae88b88075ff", "849a3c64-f9bc-4060-9a9e-3c2d095a8638", "5b27ff6e-adf8-4d84-89d6-f1d86ba0df27" ], "title": "Semantic Dependency Parsing" }, { "cite_extract_rate": 0, "cites": [], "content": "There is a range of graph-based SDP models for the shared tasks of SDP in SemEval.\nGenerally, it is hard to develop a graph-based decoding algorithm targeted at arbitrary dependency graphs.\nThus, most models have imposed particular constraints.\n\\newcite{kuhlmann-jonsson-2015-parsing} present a cubic-time exact inference algorithm for non-crossing dependency graphs.\n\\newcite{cao-etal-2017-parsing} and 
\\newcite{cao-etal-2017-quasi} investigate the maximum subgraph algorithm for 1-endpoint-crossing, pagenumber-2 graphs.\n\\newcite{sun-etal-2017-parsing} attempt to solve dependency graph parsing via subgraph decomposition and merging.\n\\newcite{sun-etal-2017-semantic} propose an interesting book embedding strategy for SDP.\nAll the above models exploit manually-crafted discrete features.\nUnder the neural setting, \\newcite{peng-etal-2017-deep} present a multi-task learning framework for different views of SDP.\n\\newcite{dozat-manning-2018-simpler} extend biaffine dependency parsing to SDP.\nRecently, \\newcite{wang-etal-2019-second} propose a second-order SDP model based on \\workcite{dozat-manning-2018-simpler}.\nAs a whole, neural models can obtain better performances for SDP.", "id": "96230d37-59d7-4a07-8274-ae88b88075ff", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "ded4738c-fbc0-410b-b0e9-1a35611ffb36", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Semantic Dependency Graph" ], [ "subsection", "Semantic Dependency Parsing" ], [ "subsubsection", "Graph-based" ] ], "subsections": [], "title": "Graph-based" }, { "cite_extract_rate": 0, "cites": [], "content": "The transition-based SDP models can also achieve competitive performance; meanwhile, these models are more efficient and free of constraints,\nand thus they have received great attention.\nActually, transition-based dependency graph parsing dates back to \\newcite{sagae-tsujii-2008-shift},\nand that model is enhanced with a dynamic oracle by \\newcite{tokgoz2015transition}.\n\\newcite{sun-etal-2014-grammatical} define a K-permutation transition system to handle dependency graph generation.\n\\newcite{zhang2016transition} present two novel transition systems for deep semantic dependency parsing.\n\\newcite{gildea-etal-2018-cache} present a transition-based system that includes a cache to capture dependency graphs.\nRecently, \\newcite{wang2018neural} propose a strong transition-based SDP model by using neural networks.\nThey exploit a deep bidirectional LSTM as the sentential encoder, together with a stack-LSTM for better representation of transition states.\n\\newcite{buys-blunsom-2017-robust} present a transition-based model for general semantic graph parsing,\nwhich is also suitable for SDP.", "id": "849a3c64-f9bc-4060-9a9e-3c2d095a8638", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "ded4738c-fbc0-410b-b0e9-1a35611ffb36", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Semantic Dependency Graph" ], [ "subsection", "Semantic Dependency Parsing" ], [ "subsubsection", "Transition-based" ] ], "subsections": [], "title": "Transition-based" }, { "cite_extract_rate": 0, "cites": [], "content": "Dependency graph parsing using tree approximations and post-processing can also obtain competitive performance.\nThese kinds of models first convert dependency graphs into trees, and then tree-based parsing can be applied.\n\\newcite{du2015peking} ensemble several tree approximation strategies and achieve the top performance in SemEval 2015.\n\\newcite{agic2015semantic} conduct a comprehensive investigation of semantic dependency graph parsing using tree approximations.\n\\begin{figure}[H]\n\\includegraphics[scale=0.65]{figures/cross-domain-crop.pdf}\n\\caption{The architecture of cross-domain parsing.}\n\\label{fig-cross-domain}\n\\end{figure}", "id": "5b27ff6e-adf8-4d84-89d6-f1d86ba0df27", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "ded4738c-fbc0-410b-b0e9-1a35611ffb36", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Semantic Dependency Graph" ], [ "subsection", "Semantic Dependency Parsing" ], [ "subsubsection", "Other Methods" ] ], 
"subsections": [], "title": "Other Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "Cross-domain adaption is a hot topic in the NLP community,\nespecially for the syntactic and semantic parsing tasks,\nwhere data annotation is extremely laborious and expensive.\nCurrently supervised parsing has achieved remarkably high performance thanks to the recent advances of neural networks.\nHowever, the performance could drop significantly when well-trained parsers are applied to texts in domains different from the training corpus.\nIt is impractical to annotate training datasets for all domains.\nThus, cross-domain adaption is very important to make parsers widely applicable.\nStudies of cross-domain parsing focus mainly on two settings: unsupervised domain adaption,\nwhere no target-domain training dataset is available,\nand semi-supervised domain adaption, where a small number of training instances is available for a target domain.\nFigure \\ref{fig-cross-domain} shows the architecture of cross-domain parsing, where the differences between the two settings are illustrated.", "id": "22ff2a35-25cf-48f6-b0a7-3b8fb4e5cec9", "level": "section", "origin_cites_number": 0, "parent_id": "3041659b-6b9d-44e0-94d2-651838f3bcad", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Cross-Domain Parsing" ] ], "subsections": [ "5ed82c49-c529-4e22-9927-f3c60992ffc3", "b11ed4a2-bbb0-4651-84b4-79f6cdd0f842" ], "title": "Cross-Domain Parsing" }, { "cite_extract_rate": 0, "cites": [], "content": "Self-training is one useful strategy for cross-domain parser adaption,\nalthough it has achieved very limited gains under the in-domain semi-supervised setting.\nInitial work mostly focuses on constituent parsing.\n\\newcite{mcclosky-etal-2006-reranking} exploit a reranking strategy to\nobtain a set of high-confidence auto-parsed outputs, and then add them to the training corpus.\n\\newcite{sagae-2010-self} shows that self-training alone, without reranking, can also give significant improvements.\n\\newcite{kawahara-uchimoto-2008-learning} first apply self-training successfully to dependency parsing,\nexploiting an extra classifier to determine whether a parsed tree is reliable.\n\\newcite{chen-etal-2008-learning} exploit only high-confidence partial dependencies for next-round training.\n\\newcite{yu2015domain} propose a novel confidence estimation method,\nleading to improved performance on the out-of-domain dataset.\nBesides self-training, there are several other methods for unsupervised domain adaption.\n\\newcite{steedman-etal-2003-example} apply co-training to constituent parsing, which is similar to self-training\nbut differs in that the example selection is performed by two parsers.\n\\newcite{sagae-tsujii-2007-dependency} use a similar co-training method for dependency parsing.\nFurther, \\newcite{sogaard-rishoj-2010-semi} exploit tri-training for domain adaption of dependency parsing,\nextending the two parsers of co-training into three.\nInterestingly, \\newcite{plank-van-noord-2011-effective} select training instances\nfrom the source-domain dataset instead, where the instances most relevant to the target domain are chosen.\n\\newcite{yang-etal-2015-domain} exploit deep belief neural networks to enhance dependency parsing performance on out-of-domain test data,\nwhich can effectively extract useful information from target-domain raw texts.\nMulti-source domain adaption is also a promising direction,\nwhich assumes that training corpora of several source domains are available.\nThis setting closely matches real practical scenarios.\n\\newcite{mcclosky-etal-2010-automatic} present the first work under this setting for dependency parsing.\nThey linearly combine the parsing models of different domains\nwith weights learned from a regression model,\nconsidering the performance of each individual parser on the target domain.", "id": "5ed82c49-c529-4e22-9927-f3c60992ffc3", "level": "subsection", "origin_cites_number": 0, "parent_id": "22ff2a35-25cf-48f6-b0a7-3b8fb4e5cec9", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Cross-Domain Parsing" ], [ "subsection", "Unsupervised Domain Adaption" ] ], "subsections": [], "title": "Unsupervised Domain Adaption" }, { "cite_extract_rate": 0, "cites": [], "content": "With a small target-domain training dataset,\n\\newcite{reichart-rappoport-2007-self} show that self-training can effectively improve the performance of constituent parsing.\nRecently, most work focuses on effectively training on the mixed source and target training instances\nby separating the domain-dependent and domain-invariant features.\nBy treating these features differently, the final model can accurately transfer the useful knowledge\nfrom the source domain into the target.\n\\newcite{finkel-manning-2009-hierarchical} extend the idea with a hierarchical Bayesian model\nand evaluate it on dependency parsing, achieving better performance on the target domain\nthan training with only the target-domain data.\nUnder the neural setting, adversarial learning is one effective method for the same purpose.\n\\newcite{sano2017adversarial} first apply the method to dependency parsing.\nActive learning is another promising approach for semi-supervised domain adaption.\nConsidering that full-sentence syntax/semantic annotation is extremely expensive,\npartial annotation might be preferable.\nFor constituent parsing, \\newcite{joshi-etal-2018-extending} suggest partial annotation\nof constituent brackets to enhance domain adaption.\nFor dependency parsing, \\newcite{flannery2015combining} exploit partial annotation\ncombined with active learning for cross-domain dependency parsing in Japanese.\nRecently, \\newcite{li-etal-2019-semi-supervised} investigate the strategy comprehensively 
for Chinese dependency parsing\nunder the neural setting.\n\\begin{figure}[H]\n\\includegraphics[scale=0.65]{figures/cross-language-crop.pdf}\n\\caption{The architecture of cross-lingual parsing.}\n\\label{fig-cross-lingual}\n\\end{figure}", "id": "b11ed4a2-bbb0-4651-84b4-79f6cdd0f842", "level": "subsection", "origin_cites_number": 2, "parent_id": "22ff2a35-25cf-48f6-b0a7-3b8fb4e5cec9", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Cross-Domain Parsing" ], [ "subsection", "Semi-Supervised Domain Adaption" ] ], "subsections": [], "title": "Semi-Supervised Domain Adaption" }, { "cite_extract_rate": 0, "cites": [], "content": "Cross-lingual parsing aims to parse the sentence structures of low-resource languages\nwith the help of resource-rich languages such as English.\nThere have been a number of studies on this task,\nand the majority of work focuses on dependency parsing,\ndue to the relative structural conciseness of dependencies as well as the well-developed universal dependencies resources.\nIn particular, with the recent development of cross-lingual or universal\nword representations based on neural pretraining techniques,\nthe task has attracted increasing interest.\nThe task includes two main settings:\nthe unsupervised setting, which assumes that no training corpus is available for target languages,\nand the semi-supervised/supervised setting, where a certain amount of training data exists for the target languages.\nThe architecture of cross-lingual parsing is shown in Figure \\ref{fig-cross-lingual},\nwhere the detailed differences between the unsupervised and semi-supervised/supervised settings are illustrated as well.", "id": "71fdac5a-30d5-4d45-8687-4a12e85adefe", "level": "section", "origin_cites_number": 0, "parent_id": "3041659b-6b9d-44e0-94d2-651838f3bcad", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Cross-Lingual Parsing" ] ], "subsections": [ "1815802f-9ff6-4f30-a090-71db7228a63a", "931cc855-31f7-4342-accf-f34875ca2d00" ], "title": "Cross-Lingual Parsing" }, { "cite_extract_rate": 0, "cites": [], "content": "For unsupervised cross-lingual parsing,\nthe mainstream methods can be classified into two categories:\nmodel transferring and annotation projection.\nThe first category trains a model on the source-language training corpus,\nand then directly uses it to parse target-language texts.\nThe second category\nprojects the source-language parse annotations onto the target language by using a parallel corpus,\nresulting in a pseudo training corpus for the target language,\nand then trains a target-language parsing model on the pseudo corpus.", "id": "1815802f-9ff6-4f30-a090-71db7228a63a", "level": "subsection", "origin_cites_number": 0, "parent_id": "71fdac5a-30d5-4d45-8687-4a12e85adefe", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Cross-Lingual Parsing" ], [ "subsection", "Unsupervised Setting" ] ], "subsections": [ "856e1d07-0da2-4c54-b927-9ca9fdaeacb7", "7e8a332b-5848-44e1-bb7d-f81c702c4a2b", "a656a032-b667-4467-ba16-bca71903f6d6" ], "title": "Unsupervised Setting" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 6493 ], "content": "The model transferring approach is straightforward for cross-lingual parsing.\nMost effort is devoted to language-independent features,\nwhich function consistently across languages.\nThis line of work is initially presented by \\newcite{zeman-resnik-2008-cross}, which suggests delexicalized models for cross-lingual dependency parsing,\nand is further developed by \\newcite{mcdonald-etal-2011-multi} for multi-source transferring,\nwhere multiple source languages are used to enhance a target language.\nSeveral researchers resort to various non-lexical features to enhance the delexicalized cross-lingual models.\nRecently, \\newcite{tackstrom-etal-2012-cross} exploit cross-lingual word clusters,\nwhich are one kind of cross-lingual word representation.\nUnder the neural setting, the exploration of cross-lingual word representations is greatly facilitated.\n\\newcite{guo-etal-2015-cross} propose to use cross-lingual word embeddings for lexicalized cross-lingual dependency parsing.\nThis method has since received much attention and\ncan be further enhanced in various ways, such as better neural structures\nand multi-source adaption.\nCross-lingual pretrained contextualized word representations give the state-of-the-art performances of this category.\n\\newcite{schuster-etal-2019-cross} provide a method to learn contextual ELMO representations effectively and\nthen apply the representations to the task, achieving much better performances than cross-lingual word embeddings.\n\\newcite{wang-etal-2019-cross} and \\newcite{wu-dredze-2019-beto} apply cross-lingual mBERT\nto zero-shot cross-lingual dependency parsing.\n\\newcite{lample2019cross} introduce XLM concurrently with mBERT, which is also a kind of\nstrong multilingual contextualized word representation for cross-lingual parsing.\nAll these recent studies lead to state-of-the-art performances in the literature of this category.", "id": "856e1d07-0da2-4c54-b927-9ca9fdaeacb7", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "1815802f-9ff6-4f30-a090-71db7228a63a", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Cross-Lingual Parsing" ], [ "subsection", "Unsupervised Setting" ], [ "subsubsection", "Model Transferring" ] ], "subsections": [], "title": "Model Transferring" }, { "cite_extract_rate": 0.2, "cites": [ 6494 ], "content": "The annotation projection approach requires slightly more effort than model transferring;\nit aims to build a pseudo training corpus through bitext projection.\nWith the pseudo training corpus, the final model can capture rich target-language characteristics.\nThe method relies on a set of parallel sentences between the source and target languages.\nA source parser trained on the source treebank is used to parse the source-side sentences of the parallel corpus,\nand then the automatic source annotations are projected onto the target-language sentences according to word alignments,\nresulting in the final pseudo training corpus.\nThere is a range of strategies to achieve this goal.\nFor example, we can use different kinds of parallel corpora,\nsuch as Europarl and the Bible,\nand can also exploit various sophisticated methods to improve the projection quality.\nFor constituent parsing, \\newcite{snyder-etal-2009-unsupervised} exploit the method for unsupervised constituent parsing,\nand find that it can significantly outperform the purely-unsupervised models.\n\\newcite{jiang-etal-2011-relaxed} suggest an EM algorithm to incrementally boost the quality of the projected constituent trees with relaxed constraints.\nThe number of studies on constituent parsing is relatively small,\npossibly because the projection of constituent structures is very complex.\nFor dependency parsing, \\newcite{hwa2005bootstrapping} present the first work of this category,\nand then the approach has been extensively studied under different settings, such as\nconfidence-aware learning,\nneural network enhancing,\nand multi-source adaption.\nInterestingly, \\newcite{jiang2015joint} propose a joint model for cross-lingual constituent and dependency parsing with annotation projection.\nThe approach achieves great success for cross-lingual dependency parsing.", "id": "7e8a332b-5848-44e1-bb7d-f81c702c4a2b", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "1815802f-9ff6-4f30-a090-71db7228a63a", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Cross-Lingual Parsing" ], [ "subsection", "Unsupervised Setting" ], [ "subsubsection", "Annotation Projection" ] ], "subsections": [], "title": "Annotation Projection" }, { "cite_extract_rate": 0, "cites": [], "content": "There are also several other methods for unsupervised cross-lingual parsing.\nTreebank translation is one representative strategy,\nwhich is essentially highly similar to annotation projection.\nThe approach also aims to construct a pseudo training corpus.\nDifferent from annotation projection, it directly translates the source training corpus into the target language.\nBesides bitext projection, it requires machine translation to produce target-language sentences.\n\\newcite{tiedemann2014treebank} first propose this method,\nand it is further refined in their later studies.\n\\newcite{zhang-etal-2019-cross} study the approach under the neural setting with partial translation,\nand combine their model with model transferring.\nThe methods exploited in cross-domain parsing (e.g., self-training) may also be suitable for this setting\nthanks to cross-lingual word representations.\nHowever, these kinds of methods have been seldom studied.\n\\newcite{rasooli-collins-2017-cross} combine the advantages of model transferring, annotation projection,\ntreebank translation as well as self-training to obtain a very strong model for cross-lingual dependency parsing.\nSentence reordering is one interesting method presented recently, which aims to\nreorder the input source-language syntactic trees to make them highly similar to the target language.\nThe idea is first studied by \\newcite{wang-eisner-2018-synthetic}.\n\\newcite{rasooli-collins-2019-low} exploit the method with two strong reordering strategies,\nobtaining very competitive performance compared with even supervised parsing models.", "id": "a656a032-b667-4467-ba16-bca71903f6d6", "level": "subsubsection", "origin_cites_number": 1, "parent_id": 
"1815802f-9ff6-4f30-a090-71db7228a63a", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Cross-Lingual Parsing" ], [ "subsection", "Unsupervised Setting" ], [ "subsubsection", "Other Methods" ] ], "subsections": [], "title": "Other Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "Given the availability of treebanks for a range of languages,\nhow to effectively exploit both source- and target-language treebanks\nis an interesting problem.\nSince very early, several studies have shown that two languages are better than one for parsing.\n\\newcite{smith2004bilingual} show that jointly training English and Korean parsers can bring better performance.\n\\newcite{burkett-klein-2008-two} also demonstrate the same observation.\nUnder the neural setting, this line of work can be conducted more conveniently thanks to cross-lingual word representations.\n\\newcite{ammar-etal-2016-many} propose to use a single universal model to parse all languages.\nHowever, their final performance is still below the corresponding individual baselines.\n\\newcite{smith-etal-2018-82} train 34 models for 46 different languages.\nBy aggregating multiple treebanks from one language or closely related languages,\nwe can achieve competitive performances and meanwhile greatly reduce the number of required models.\nMost recently, \\newcite{kondratyuk-straka-2019-75} propose a sophisticated strategy to train one universal model for 75 languages by leveraging multilingual BERT self-attention,\nwhich achieves better performances than the corresponding individual models.", "id": "931cc855-31f7-4342-accf-f34875ca2d00", "level": "subsection", "origin_cites_number": 0, "parent_id": "71fdac5a-30d5-4d45-8687-4a12e85adefe", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Cross-Lingual Parsing" ], [ "subsection", "Semi-Supervised/Supervised Setting" ] ], "subsections": [], "title": "Semi-Supervised/Supervised Setting" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section we discuss joint models for parsing,\nfocusing only on models whose final goal is the parsing task.\nStudies that jointly model syntactic-semantic parsing together with a targeted downstream task will be introduced in the next section.\nThe development of joint models is mainly motivated by the error propagation problem of the prerequisite tasks.\nPOS tagging is one of the major prerequisite tasks,\nas POS tags are one kind of valuable feature source for parsing.\nBefore POS tagging, several languages such as Chinese require word segmentation as a prerequisite step.\nParsing is generally performed based on words, while sentences of these languages do not have explicit word boundaries.\nIn summary, here we briefly investigate two kinds of joint models: joint POS tagging and parsing,\nand joint segmentation \\& tagging and parsing,\nand we show their relationship in Figure \\ref{fig-joint-model}.\n\\begin{figure}[H]\n\\includegraphics[scale=0.75]{figures/joint-model-crop.pdf}\n\\caption{The architecture of joint models targeted for parsing, where word segmentation is only applicable to the Chinese language.}\n\\label{fig-joint-model}\n\\end{figure}\nNotably, there are several studies on joint syntactic and semantic parsing.\nThe dependency-based joint models have already been described in Section \\ref{sec-dep-srl}.\nThus, one can refer there for details.\nFor joint constituent parsing and semantic role labeling,\nthere are very few studies.\nThe representative work is \\newcite{li-etal-2010-joint},\nthe first work of this kind, using sophisticated manually-crafted features.\nThe work shows that the joint model is able to give better performances for both Chinese constituent parsing and SRL.", "id": "6990ee8e-1b57-418d-aa02-ad2b46dc41f6", "level": "section", 
"origin_cites_number": 0, "parent_id": "3041659b-6b9d-44e0-94d2-651838f3bcad", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Joint Models" ] ], "subsections": [ "028e35d7-a850-4e17-b54c-010b7d616325", "d00a2601-7709-4fee-83ce-23c003d103de" ], "title": "Joint Models" }, { "cite_extract_rate": 0.5, "cites": [ 9027, 1171 ], "content": "For joint POS tagging and constituent parsing,\nchart-based PCFG parsing naturally performs the two tasks concurrently,\nwhere POS tags can be directly induced from the bottom lexical rules.\nBased on the transition-based framework, joint POS tagging and constituent parsing can be\neasily achieved by the shift operation with one additional parameter to indicate the POS tag of the word being processed.\n\\newcite{wang-xue-2014-joint} investigate the joint task and present a number of non-local features.\n\\newcite{li-etal-2011-joint} propose the first joint model of POS tagging and dependency parsing based on graph factoring,\nwhere the basic scoring units are augmented with POS tags.\n\\newcite{li-etal-2012-separately} enhance the model with better learning strategies.\n\\newcite{hatori-etal-2011-incremental} present the first transition-based model for joint POS tagging and dependency parsing.\n\\newcite{bohnet-nivre-2012-transition} extend the transition-based model to non-projective dependency parsing.\nThe two kinds of models achieve comparable performances for both tasks.\nUnder the neural setting, \\newcite{alberti-etal-2015-improved} investigate\nthe model of \\newcite{bohnet-nivre-2012-transition} with neural features.\n\\newcite{zhang-weiss-2016-stack} suggest a joint POS tagging and dependency parsing model by stack propagation.\n\\newcite{yang2017joint} further investigate the neural joint task with LSTMs\nby using graph-based and transition-based frameworks, respectively.\nIn fact, the importance of joint modeling has been largely weakened,\nas parsing without POS tags can also obtain strong performance close to that of the same model with POS tags.", "id": "028e35d7-a850-4e17-b54c-010b7d616325", "level": "subsection", "origin_cites_number": 4, "parent_id": "6990ee8e-1b57-418d-aa02-ad2b46dc41f6", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Joint Models" ], [ "subsection", "Joint POS Tagging and Parsing" ] ], "subsections": [], "title": "Joint POS Tagging and Parsing" }, { "cite_extract_rate": 0, "cites": [], "content": "The task of joint segmentation, tagging and parsing is mainly targeted at Chinese parsing.\nThis line of work started very early with character-level parsing.\nLater, \\newcite{zhao-2009-character} demonstrate that character-based Chinese dependency parsing is better,\nwhich can naturally perform the three tasks.\nRecently, \\newcite{hatori-etal-2012-incremental} propose a transition-based joint model for word segmentation, POS tagging and dependency parsing.\n\\newcite{li-zhou-2012-unified} suggest a similar transition-based joint model by using indivisible subwords as well as their internal structures.\n\\newcite{zhang-etal-2013-chinese} and \\newcite{zhang-etal-2014-character} conduct character-level constituent and dependency parsing by extending word-level annotations into characters,\nachieving state-of-the-art performances for both tasks under the discrete setting.\nAll four models exploit the transition-based framework.\n\\newcite{zhang-etal-2015-randomized} propose the first work using graph-based inference, with efficient hill-climbing decoding.\n\\newcite{zheng2015character} present the first work adopting neural networks for character-level constituent parsing,\nachieving performance comparable to the state-of-the-art discrete model with a simple convolutional neural network.\n\\newcite{li2018neural} present a neural model for character-level dependency parsing.\n\\newcite{yan2019unified} propose a strong joint model for word segmentation and dependency parsing only,\nwhere the state-of-the-art biaffine parser and pretrained BERT are exploited.\nUnder the neural setting, the joint framework might be highly challenging,\nas the baselines are strong and neural networks can learn global high-level features implicitly.", "id": "d00a2601-7709-4fee-83ce-23c003d103de", "level": "subsection", "origin_cites_number": 1, "parent_id": "6990ee8e-1b57-418d-aa02-ad2b46dc41f6", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Joint Models" ], [ "subsection", "Joint Segmentation, Tagging and Parsing" ] ], "subsections": [], "title": "Joint Segmentation, Tagging and Parsing" }, { "cite_extract_rate": 0.5, "cites": [ 8412, 6495, 8005, 6496 ], "content": "When a well-trained syntactic/semantic parser is available,\nhow to use it effectively to benefit downstream applications is an important topic in the parsing community,\nwhich is also related to verifying the usefulness of syntactic and semantic parsing.\nIn fact, the topic has been extensively studied,\nand the parsing outputs have been demonstrated to be effective for a number of tasks such as semantic role labeling,\nrelation extraction, sentiment analysis\nand machine translation.\nThe exploitation methods have changed greatly from the statistical discrete models\nto the neural models.\nHere we briefly summarize the mainstream approaches of parser exploitation in terms of the two settings.", "id": "7d10b76d-891b-4635-8a29-8b635d66d356", "level": "section", "origin_cites_number": 8, "parent_id": "3041659b-6b9d-44e0-94d2-651838f3bcad", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Parser Application" ] ], "subsections": [ "a15ac61c-7a30-49c9-a36a-9c3007fc5f71", 
"d14b5ead-4320-4835-9289-f382f70c2351" ], "title": "Parser Application" }, { "cite_extract_rate": 0, "cites": [], "content": "Under the traditional statistical setting,\nthe exploration of a parser resorts to manually-crafted discrete features,\nwhich are mostly designed sophisticatedly according to the target tasks.\nWe briefly summarize the widely-adopted features here.\nFor constituent trees, such features include non-terminal categories,\nCFG rules, phrase-level word ngrams, syntax paths to the root or some other word,\nand the matching with a completed phrase.\nFor dependency trees, dependency-based ngrams, dependency labels, dependency paths\nare widely-used features.\nAll these kinds of features are further adapted to various tasks aiming\nto get the most of the parsing information effectively .\nBesides, the tree-kernel based approach can also be a good alternative .\nSeveral approaches suggest using multiple heterogeneous parsers for better performances,\nincluding the integration of constituent and dependency parsers as well as parsers trained on heterogeneous treebanks .", "id": "a15ac61c-7a30-49c9-a36a-9c3007fc5f71", "level": "subsection", "origin_cites_number": 11, "parent_id": "7d10b76d-891b-4635-8a29-8b635d66d356", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Parser Application" ], [ "subsection", "Feature-Based Statistical Methods" ] ], "subsections": [], "title": "Feature-Based Statistical Methods" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 8337, 6495, 6497, 6498, 9028, 6496, 6499, 265 ], "content": "One simple method to use parsing features based on neural networks is to embed all the atomic features,\nand then exploit sophisticated neural networks to compose them automatically.\nThe most representative method of this kind is the path-based LSTMs,\nwhich exploit LSTM over sequential-level constituent or dependency paths .\nThe recent tendency of using the end-to-end framework for the majority of NLP tasks\nleads to universal representations based on parser outputs.\nWe build a universal encoder with structural outputs of a parser, and then adapt them to different tasks by decoders, as shown by Figure \\ref{fig-parser-apply}.\nThere are several ways to build the encoder.\nHere we divide the methods into four types: recursive neural networks; linearization-based methods; implicit structural-aware word representations; and graph neural networks (GNN).\n\\begin{figure}[H]\n\\includegraphics[scale=0.75]{figures/parser-apply-crop.pdf}\n\\caption{Parser enhanced universal encoder for downstream tasks.}\n\\label{fig-parser-apply}\n\\end{figure}\nThe recursive neural network is one natural method to model tree-structural outputs,\nwhich composes a tree input bottom-up or top-down incrementally.\nWe can use various composition operations leading to more sophisticated tree-level neural networks\nsuch as tree convolutions suggested by \\newcite{mou-etal-2015-discriminative} and Tree-LSTM proposed by \\newcite{tai-etal-2015-improved}.\nAll these related studies give improved performances for a range of tasks .\nThe key idea of the linearization-based methods is to convert structural inputs into a sequence of symbols,\nand then adopt standard sequential encoders to model the new sequence directly .\nUsually, the conversion can be referred to as the linearization process of transition-based parsers,\nor we can incrementally traverse a tree or graph in different ways.\nThe method has received less attention, which might be due to its extreme simplicity,\nalthough it is both effective and highly efficient .\nThe implicit structural-aware word representations, first presented by \\newcite{zhang-etal-2017-end} for relation extraction, are similar to the idea of contextualized word representations,\nwhich exploit the hidden outputs of a well-pretrained parser as inputs for the downstream tasks .\nThis method can also efficiently represent structural information such as syntax and semantics.\nBesides, the method can be easily adapted to the multi-task-learning strategy for\nparser application ,\nalthough the parser needs to be jointly trained in multi-task learning.\nRecently, there is growing interest in the topic of graph neural networks,\nwhich can be naturally applied to encode structural syntactic and semantic graphs.\nIndeed, there have been several studies already\nby using either graph convolutional networks or graph attention networks ,\nand all these works demonstrate the effectiveness of GNN for structure encoding.", "id": "d14b5ead-4320-4835-9289-f382f70c2351", "level": "subsection", "origin_cites_number": 12, "parent_id": "7d10b76d-891b-4635-8a29-8b635d66d356", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Parser Application" ], [ "subsection", "Representation Learning with Neural Networks" ] ], "subsections": [], "title": "Representation Learning with Neural Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "Finally, we review the work of corpus development in syntactic and semantic parsing,\nwhich is critical to the performance of supervised parsing.\nThere are several classical treebanks such as the Penn Treebanks of the English and Chinese languages,\nwhich greatly promote the development of the parsing community.\nIn fact, there are treebanks for a range of languages,\nand here we focus mainly on the Chinese and English treebanks.\nIn addition, there are a number of shared tasks,\nwhich also offer valuable corpora for syntactic and semantic parsing.", "id": "46654902-5367-451c-b03d-c14f5b1c43e5", "level": "section", "origin_cites_number": 0, "parent_id": "3041659b-6b9d-44e0-94d2-651838f3bcad", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Corpus and Shared Tasks" ] ],
"subsections": [ "5d51d2ef-350c-49d2-b9d5-add04198ba87", "16e12ac4-216b-4e62-b2a3-c5d04b694d13", "d1b495a1-7827-4581-a038-52421d6a01bb", "f79bf090-8fc1-4bf0-9447-7db1d45a36b7" ], "title": "Corpus and Shared Tasks" }, { "cite_extract_rate": 0, "cites": [], "content": "The English Penn Treebank (PTB) by \\newcite{marcus-etal-1993-building} could be the most famous resource for syntactic parsing,\nwhich annotates bracketed syntactic phrase structures for over 40,000 sentences covering about 4.5 million words.\nIn addition, \\newcite{xuexia2005} annotate the Penn Treebank for the Chinese language, for short as CTB,\nand now there are over 130,000 sentences with phrase-structure annotations covering over 2 million words.\nBoth the two datasets have annotated POS tags as well, which are important to automatic syntactic parsing.\nFor Chinese, gold-standard word segmentation has been annotated in CTB as well.\nThe two datasets are also converted into dependency treebanks for dependency parsing,\nwhich could be achieved by rule-based head lexicalization over the constituent trees .\nRecently, Stanford dependencies have been exploited the most popularly especially for the English language,\nwhere the conversion rules are relatively more fine-grained and meanwhile can reflect\nmore syntactic and semantic phenomena.\nThere are several small-scale treebanks with the same annotation guideline as PTB,\nwhich can be useful resources for domain adaption studies of constituent and dependency parsing,\nregarding that PTB are focused on the news genre data.\nFor example, the Brown Treebank is exploited most frequently for cross-domain parsing as the literature genre.\n\\newcite{tateisi2005syntax} offer a treebank of the biomedical domain.\nThe two treebanks are targeted to researches on constituent parsing.\nRecently, \\newcite{kong-etal-2014-dependency} annotate a treebank for twitter texts based on dependency grammar.", "id": "5d51d2ef-350c-49d2-b9d5-add04198ba87", "level": 
"subsection", "origin_cites_number": 6, "parent_id": "46654902-5367-451c-b03d-c14f5b1c43e5", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Corpus and Shared Tasks" ], [ "subsection", "Penn Treebank" ] ], "subsections": [], "title": "Penn Treebank" }, { "cite_extract_rate": 0.14285714285714202, "cites": [ 9029 ], "content": "The advent of Universal Dependencies (UD) has received great attention for facilitating multilingual research,\nwhich aims to develop cross-linguistically consistent treebank annotation for multiple languages.\nUD can capture similarities as well as idiosyncrasies among typologically different languages such as English-like languages, morphologically-rich languages\nand pro-drop languages.\nThe development of UD is initially based on Stanford typed dependencies and the universal Google dependency scheme .\nIt has now gone through several versions , with significant changes to the guidelines,\nalso supporting language-specific extensions when necessary.\nCurrently the UD treebank version 2.5 includes 157 treebanks over 90 languages.\nBesides multilingual dependency parsing, there is an increasing tendency to exploit these datasets for evaluating monolingual dependency parsing as well .", "id": "16e12ac4-216b-4e62-b2a3-c5d04b694d13", "level": "subsection", "origin_cites_number": 7, "parent_id": "46654902-5367-451c-b03d-c14f5b1c43e5", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Corpus and Shared Tasks" ], [ "subsection", "Universal Dependencies" ] ], "subsections": [], "title": "Universal Dependencies" }, { "cite_extract_rate": 0, "cites": [], "content": "For the Chinese language, treebank development has been addressed by several studies besides the CTB.\nThe Sinica Treebank has offered phrase-structural syntactic trees over about 360,000 words in traditional Chinese .\n\\newcite{qiang2004annotation} release a constituent treebank covering about one million words for simplified Chinese.\n\\newcite{zhan2012application} also annotate constituent trees over a scale of 0.9 million words for Chinese.\nThe guidelines of all these phrase-structural treebanks are quite different.\nThere are several treebank resources directly based on the dependency structure, as it is believed that dependency grammar is simpler and easier to develop.\n\\newcite{liu2006building} and \\newcite{che2012chinese} construct a Chinese dependency treebank covering over 1.1 million words.\n\\newcite{qiu-etal-2014-multi} create a multi-view Chinese dependency treebank containing 14,463 sentences,\nwhich is further augmented with predicate-argument information by \\newcite{qiu2016dependency} for a semantic-oriented dependency treebank.\nMost recently, \\newcite{li-etal-2019-semi-supervised} release a large-scale Chinese dependency treebank covering about 3 million words as well as different domains,\nincluding news, web blogs, and literature texts.", "id": "d1b495a1-7827-4581-a038-52421d6a01bb", "level": "subsection", "origin_cites_number": 1, "parent_id": "46654902-5367-451c-b03d-c14f5b1c43e5", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Corpus and Shared Tasks" ], [ "subsection", "Chinese Treebank" ] ], "subsections": [], "title": "Chinese Treebank" }, { "cite_extract_rate": 0, "cites": [], "content": "Nearly all the shared tasks are focused on dependency parsing,\nand most of them are devoted to multilingual parsing with the support of several treebanks in different languages.\nThese shared tasks, on the one hand, can evaluate the current state-of-the-art parsing models,\nand on the other hand offer valuable datasets for parsing,\nfacilitating future research.\nCoNLL06 organizes the first shared task for dependency parsing involving 13 languages ,\nand domain adaptation is considered later in CoNLL07 .\nAt CoNLL08 and CoNLL09, semantic dependencies extracted from SRL are integrated, leading to joint syntactic-semantic parsing .\nRecently, the shared task at CoNLL 2017 starts to adopt universal dependencies for dependency parsing ,\nand at CoNLL 2018, 82 UD treebanks in 57 languages are included for evaluation .\nBesides CoNLL, SANCL 2012 organizes a shared task on parsing English web text ,\nwhich offers a benchmark dataset for cross-domain dependency parsing in English.\nIn addition, the NLPCC 2019 shared task on cross-domain dependency parsing also offers a valuable dataset in Chinese .\nThe above shared tasks focus on syntactic dependency parsing.\nFor semantic dependency parsing, \\newcite{che-EtAl:2012:STARSEM-SEMEVAL} present the first shared task for Chinese texts in SemEval,\nwhere dependency trees are used in the evaluation.\n\\newcite{che-etal-2016-semeval} start to use dependency graphs for formal semantic representation.\nFor the English language, \\newcite{oepen-etal-2014-semeval} organize a shared task for broad-coverage semantic parsing by using three distinct dependency-based semantic formalizations.\nDependency graphs are exploited to represent various semantics.\n\\newcite{oepen2015semeval} extend the shared task of \\workcite{oepen-etal-2014-semeval} with more languages including Chinese and Czech.\n\\newcite{oepen-etal-2019-mrp} cover more topics of semantic graph parsing for deep semantics, including not only dependency-based graphs,\nbut also several other formalizations such as UCCA and AMR.", "id": "f79bf090-8fc1-4bf0-9447-7db1d45a36b7", "level": "subsection", "origin_cites_number": 8, "parent_id": "46654902-5367-451c-b03d-c14f5b1c43e5", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Corpus and Shared Tasks" ], [ "subsection", "Shared Tasks" ] ], "subsections": [], "title": "Shared Tasks"
}, { "cite_extract_rate": 0, "cites": [], "content": "In this article, we have made a thorough review of past work on syntactic and semantic parsing, focusing on constituent parsing and dependency parsing.\nTraditional statistical models, as well as currently-dominant neural network methods, were both summarized.\nFirst, for the parsing models, neural network methods with pretrained contextualized word representations have achieved the top performances\nfor almost all datasets.\nThere is a growing tendency to use simple encoder-decoder frameworks for parsing,\nso that well-investigated training strategies can be applied.\nSecond, broad-coverage semantic parsing is receiving increasing attention, which might be the next hot topic.\nTask performances are now gradually becoming acceptable\nthanks to the neural network models as well as the development of linguistic resources.\nThe cross-domain and cross-lingual settings are important scenarios for parsing,\nwhich are difficult to resolve yet play a key role in real applications.\nFor the cross-domain setting, there is still a large demand for resources.\nFor cross-lingual parsing, there exist a number of methods.\nA comprehensive and fair comparison of these methods as well as their integrations might be valuable.\nIn addition, the difference between cross-domain and cross-lingual parsing is becoming smaller\nbecause of the universal word representations.\nTechnically, one can regard cross-lingual parsing as a special case of cross-domain parsing.\nThe importance of joint models is decreasing.\nBy using neural networks, global features across different tasks can be directly captured by sophisticated neural structures such as deep LSTM and self-attention,\nand on the other hand, we can build one shared encoder across tasks to reduce the influence of error propagation.\nFor parser application, which might be regarded as the reverse direction of joint models,\nneural network encoders can lead to highly effective and elegant universal representations with syntactic and semantic information.\nAlso, all current state-of-the-art methods still require a comprehensive and fair comparison.\nFinally, treebank development is the major source of the advances of syntactic and semantic parsing,\nwhich might be the most difficult and highly valuable job.\nIn particular, the semantic knowledge of one sentence can have several different views.\nComprehensive annotations require extremely high costs.\nHow to effectively perform treebank annotation is one task that deserves investigation.\nFor future directions, there is still a lot of work to follow.\nMost importantly, parsing with more complex grammars would receive increasing attention,\nalthough it is not covered in this survey.\nFor syntactic parsing, the performances of CCG, HPSG and LFG parsing are still unsatisfactory, especially for non-English languages.\nFor semantic parsing, the dependency-based grammar is not enough for rich semantics,\neven when relaxed with graph constraints.\nNon-lexicalized nodes are necessary to express several complicated semantics.\nThus, AMR, UCCA and MRS could be promising for practical deep semantic parsing.\nBased on the CFG and dependency grammars,\nthe cross-domain and cross-lingual settings deserve more attention,\nwhich can be further unified.\nLightly-supervised or zero-shot models might be practical solutions.\nFor the joint models as well as parser applications,\nmulti-task-learning and pretraining might become more popular architectures for adaptation.\n\\section*{Acknowledgments}\nThis work is supported by National Natural Science Foundation of China (NSFC) grants 61602160 and 61672211.\n\\begingroup\n \\setlength{\\bibsep}{0pt}\n \\setstretch{1}\n \\bibliographystyle{unsrtnat}\n \\bibliography{survey-syntax-semantic}\n\\endgroup\n\\renewcommand{\\thesection}{Appendix}\n\\begin{appendix}\n\\end{appendix}\n\\end{multicols}\n\\end{CJK}\n\\end{document}", "id": "3e9686ef-7d97-4a68-844d-6ac583e89a37",
"level": "section", "origin_cites_number": 0, "parent_id": "3041659b-6b9d-44e0-94d2-651838f3bcad", "prefix_titles": [ [ "title", "A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures" ], [ "section", "Conclusion and Future Directions" ] ], "subsections": [], "title": "Conclusion and Future Directions" } ]
67
[ 7, 8385, 9026, 6489, 6490, 11, 2640, 8242, 8243, 6492, 6491, 1171, 8244, 6493, 6494, 9027, 8412, 6495, 8005, 6496, 8337, 6497, 6498, 9028, 6499, 265, 9029 ]
0.955337
[ "Gaurav Menghani" ]
Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better
2021
2021-06-16T17:31:38Z
cs.LG
Deep Learning has revolutionized the fields of computer vision, natural language understanding, speech recognition, information retrieval and more. However, with the progressive improvements in deep learning models, their number of parameters, latency, resources required to train, etc. have all increased significantly. Consequently, it has become important to pay attention to these footprint metrics of a model as well, not just its quality. We present and motivate the problem of efficiency in deep learning, followed by a thorough survey of the five core areas of model efficiency (spanning modeling techniques, infrastructure, and hardware) and the seminal work there. We also present an experiment-based guide along with code, for practitioners to optimize their model training and deployment. We believe this is the first comprehensive survey in the efficient deep learning space that covers the landscape of model efficiency from modeling techniques to hardware support. Our hope is that this survey would provide the reader with the mental model and the necessary understanding of the field to apply generic efficiency techniques to immediately get significant improvements, and also equip them with ideas for further research and experimentation to achieve additional gains.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "30f20a23-cf88-46fe-a956-22f060f17d75", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better" ] ], "subsections": [ "86a5c64d-fe12-4136-a8ce-7220c13b8e60", "a050690c-cab5-46d2-acc8-5cc202f483c4", "dbea70fb-3601-4d63-9af0-7661299b4470", "bb65a187-5627-4dbe-9cb2-ae457b7b0c1f", "9f086486-ae1f-4444-8cde-1c9cffd71d03", "2501746f-3c9c-49b3-83bd-392a3e078d3e" ], "title": "root" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 7, 38, 97, 514, 679, 305 ], "content": "Deep Learning with neural networks has been the dominant methodology of training new machine learning models for the past decade. Its rise to prominence is often attributed to the ImageNet competition in 2012. That year, a University of Toronto team submitted a deep convolutional network (AlexNet , named after the lead developer Alex Krizhevsky), performed 41\\% better than the next best submission. As a result of this trailblazing work, there was a race to create deeper networks with an ever increasing number of parameters and complexity. Several model architectures such as VGGNet , Inception , ResNet etc. successively beat previous records at ImageNet competitions in the subsequent years, while also increasing in their footprint (model size, latency, etc.)\n\\begin{figure}\n \\centering\n \\subfloat[\\centering Computer Vision Models]{{\\includegraphics[width=7.5cm]{images/vision-models-growth} }}\n \\qquad\n \\subfloat[\\centering Natural Language Models]{{\\includegraphics[width=7.5cm]{images/language-models-growth} }}\n \\caption{Growth in the number of parameters in Computer Vision models over time. 
}\n \\label{fig:parameter-growth}\n\\end{figure}\nThis effect has also been noted in natural language understanding (NLU), where the Transformer architecture, based primarily on Attention layers, spurred the development of general purpose language encoders like BERT , GPT-3 , etc. BERT specifically beat 11 NLU benchmarks when it was published. GPT-3 has also been used in several places in the industry via its API. The common aspect amongst these domains is the rapid growth in the model footprint (Refer to Figure \\ref{fig:parameter-growth}), and the cost associated with training and deploying them.\nSince deep learning research has been focused on improving the state of the art, progressive improvements on benchmarks like image classification, text classification, etc. have been correlated with an increase in the network complexity, number of parameters, the amount of training resources required to train the network, prediction latency, etc. For instance, GPT-3 comprises 175 billion parameters, and costs millions of dollars to train just one iteration (). This excludes the cost of experimentation / trying combinations of different hyper-parameters, which is also computationally expensive.\nWhile these models perform well on the tasks they are trained on, they might not necessarily be efficient enough for direct deployment in the real world. A deep learning practitioner might face the following challenges when training or deploying a model.\n\\begin{itemize}\n\\item \\textbf{Sustainable Server-Side Scaling}: Training and deploying large deep learning models is costly. While training could be a one-time cost (or could be free if one is using a pre-trained model), deploying and letting inference run over a long period of time could still turn out to be expensive in terms of consumption of server-side RAM, CPU, etc. There is also a very real concern around the carbon footprint of datacenters even for organizations like Google, Facebook, Amazon, etc., which spend several billion dollars each per year in capital expenditure on their data-centers.\n\\item \\textbf{Enabling On-Device Deployment}: Certain deep learning applications need to run in real time on IoT and smart devices (where the model inference happens directly on the device), for a multitude of reasons (privacy, connectivity, responsiveness). \nThus, it becomes imperative to optimize the models for the target devices.\n\\item \\textbf{Privacy \\& Data Sensitivity}: Being able to use as little data as possible for training is critical when the user-data might be sensitive. Hence, efficiently training models with a fraction of the data means less data collection is required.\n\\item \\textbf{New Applications}: Certain new applications offer new constraints (around model quality or footprint) that existing off-the-shelf models might not be able to address.\n\\item \\textbf{Explosion of Models}: While a singular model might work well, training and/or deploying multiple models on the same infrastructure (colocation) for different applications might end up exhausting the available resources.\n\\end{itemize}", "id": "86a5c64d-fe12-4136-a8ce-7220c13b8e60", "level": "section", "origin_cites_number": 9, "parent_id": "30f20a23-cf88-46fe-a956-22f060f17d75", "prefix_titles": [ [ "title", "Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better" ], [ "section", "Introduction" ] ], "subsections": [ "6705d820-90a9-4d43-95e4-e0206d4119e2" ], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "The common theme around the above challenges is \\emph{efficiency}. We can break it down further as follows:\n\\begin{itemize}\n\\item \\textbf{Inference Efficiency}: This primarily deals with questions that someone deploying a model for inference (computing the model outputs for a given input) would ask. Is the model small? Is it fast, etc.?
More concretely, how many parameters does the model have, what is the disk size, RAM consumption during inference, inference latency, etc.\n\\item \\textbf{Training Efficiency}: This involves questions someone training a model would ask, such as How long does the model take to train? How many devices? Can the model fit in memory?, etc. It might also include questions like, how much data would the model need to achieve the desired performance on the given task?\n\\end{itemize}\nIf we were to be given two models, performing equally well on a given task, we might want to choose a model which does better in either one, or ideally both of the above aspects. If one were to be deploying a model on devices where inference is constrained (such as mobile and embedded devices), or expensive (cloud servers), it might be worth paying attention to inference efficiency. Similarly, if one is training a large model from scratch on either with limited or costly training resources, developing models that are designed for training efficiency would help. \n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=5.5cm]{images/pareto-optimality.png}\n \\caption{Pareto Optimality: Green dots represent pareto-optimal models (together forming the pareto-frontier), where none of the other models (red dots) get better accuracy with the same inference latency, or the other way around.}\n \\label{fig:pareto-optimality}\n\\end{figure}\nRegardless of what one might be optimizing for, we want to achieve \\emph{pareto-optimality}. This implies that any model that we choose is the best for the tradeoffs that we care about. As an example in Figure \\ref{fig:pareto-optimality}, the green dots represent pareto-optimal models, where none of the other models (red dots) get better accuracy with the same inference latency, or the other way around. Together, the pareto-optimal models (green dots) form our \\emph{pareto-frontier}. 
The models in the pareto-frontier are by definition more efficient than the other models, since they perform the best for their given tradeoff. Hence, when we seek efficiency, we should be thinking about discovering and improving on the pareto-frontier.\nTo achieve this goal, we propose turning towards a collection of algorithms, techniques, tools, and infrastructure that work together to allow users to train and deploy \\emph{pareto-optimal} models with respect to model quality and its footprint.", "id": "6705d820-90a9-4d43-95e4-e0206d4119e2", "level": "subsection", "origin_cites_number": 0, "parent_id": "86a5c64d-fe12-4136-a8ce-7220c13b8e60", "prefix_titles": [ [ "title", "Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better" ], [ "section", "Introduction" ], [ "subsection", "Efficient Deep Learning" ] ], "subsections": [], "title": "Efficient Deep Learning" }, { "cite_extract_rate": 0.8571428571428571, "cites": [ 684, 9139, 7199, 681, 41, 168 ], "content": "In this section we present the mental model to think about the collection of algorithms, techniques, and tools related to efficient deep learning. We propose to structure them in five major areas, with the first four focused on modeling, and the final one around infrastructure and tools.\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=9cm]{images/efficiency-areas.png}\n \\caption{A mental model for thinking about algorithms, techniques, and tools related to efficiency in Deep Learning.}\n \\label{efficiency-areas}\n\\end{figure}\n\\begin{enumerate}\n\\item \\textbf{Compression Techniques}: These are general techniques and algorithms that look at optimizing the model's architecture, typically by compressing its layers. 
A classical example is quantization , which tries to compress the weight matrices of a layer, by reducing its precision (eg., from 32-bit floating point values to 8-bit unsigned integers), with minimal loss in quality.\n\\item \\textbf{Learning Techniques}: These are algorithms which focus on training the model differently (to make fewer prediction errors, require less data, converge faster, etc.). The improved quality can then be exchanged for a smaller footprint / a more efficient model by trimming the number of parameters if needed. An example of a learning technique is distillation , which allows improving the accuracy of a smaller model by learning to mimic a larger model.\n\\item \\textbf{Automation}: These are tools for improving the core metrics of the given model using automation. An example is hyper-parameter optimization (HPO) where optimizing the hyper-parameters helps increase the accuracy, which could then be then exchanged for a model with lesser parameters. Similarly, architecture search falls in this category too, where the architecture itself is tuned and the search helps find a model that optimizes both the loss / accuracy, and some other metric such as model latency, model size, etc. \n\\item \\textbf{Efficient Architectures}: These are fundamental blocks that were designed from scratch (convolutional layers, attention, etc.), that are a significant leap over the baseline methods used before them (fully connected layers, and RNNs respectively). As an example, convolutional layers introduced parameter sharing for use in image classification, which avoids having to learn separate weights for each input pixel, and also makes them robust to overfitting. Similarly, attention layers solved the problem of Information Bottleneck in Seq2Seq models. These architectures can be used directly for efficiency gains. 
\n\\item \\textbf{Infrastructure}: Finally, we also need a foundation of infrastructure and tools that help us build and leverage efficient models. This includes the model training framework, such as Tensorflow , PyTorch , etc. (along with the tools required specifically for deploying efficient models such as Tensorflow Lite (TFLite), PyTorch Mobile, etc.). We depend on the infrastructure and tooling to leverage gains from efficient models. For example, to get both size and latency improvements with quantized models, we need the inference platform to support common neural network layers in quantized mode.\n\\end{enumerate}\nWe will survey each of these areas in depth in the following section.", "id": "a050690c-cab5-46d2-acc8-5cc202f483c4", "level": "section", "origin_cites_number": 7, "parent_id": "30f20a23-cf88-46fe-a956-22f060f17d75", "prefix_titles": [ [ "title", "Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better" ], [ "section", "A Mental Model" ] ], "subsections": [], "title": "A Mental Model" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "dbea70fb-3601-4d63-9af0-7661299b4470", "level": "section", "origin_cites_number": 0, "parent_id": "30f20a23-cf88-46fe-a956-22f060f17d75", "prefix_titles": [ [ "title", "Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better" ], [ "section", "Landscape of Efficient Deep Learning" ] ], "subsections": [ "fb6421a7-9c74-4742-bffa-7cb009bd4f61", "7121d372-dd58-44ea-99fc-0f62e1b0b471", "f1702346-e88a-407d-b33b-64ca5fa704cb", "caef380e-1cf9-4df7-abaf-8a249b487e26", "1ee9027c-97b5-4599-9fd5-673895bfabfb" ], "title": "Landscape of Efficient Deep Learning" }, { "cite_extract_rate": 0, "cites": [], "content": "Compression techniques as mentioned earlier, are usually generic techniques for achieving a more efficient representation of one or more layers in a neural network, with a possible quality trade off. 
The efficiency goal could be to optimize the model for one or more of the footprint metrics, such as model size, inference latency, training time required for convergence, etc., in exchange for as little quality loss as possible. In some cases, if the model is over-parameterized, these techniques can improve model generalization.Given a neural network $f(X, W)$, where $X$ is the input and $W$ is the set of parameters (or weights), pruning is a technique for coming up with a minimal subset $W'$ such that the rest of the parameters of $W$ are pruned (or set to 0), while ensuring that the quality of the model remains above the desired threshold. After pruning, we can say the network has been made \emph{sparse}, where the sparsity can be quantified as the ratio of the number of parameters that were pruned to the number of parameters in the original network ($s = (1 - \frac{|W'|}{|W|})$). 
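As a concrete illustration of this definition, here is a minimal NumPy sketch of magnitude-based pruning (one of the simpler saliency criteria used in the literature); the pruning fraction and the layer shape are arbitrary choices for the example, not values from any specific method:

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Zero out the `fraction` of parameters with the smallest magnitude."""
    w = weights.copy()
    k = int(fraction * w.size)
    if k > 0:
        # The k-th smallest absolute value serves as the pruning threshold.
        threshold = np.sort(np.abs(w), axis=None)[k - 1]
        w[np.abs(w) <= threshold] = 0.0
    return w

def sparsity(weights):
    """s = 1 - |W'| / |W|: the fraction of parameters that are zero."""
    return 1.0 - np.count_nonzero(weights) / weights.size

W = np.random.randn(64, 64)            # a stand-in for one layer's weights
W_pruned = magnitude_prune(W, fraction=0.75)
print(round(sparsity(W_pruned), 2))    # prints 0.75
```

In a full pruning pipeline this step would be interleaved with fine-tuning rounds, as described below.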
The higher the sparsity, the smaller the number of non-zero parameters in the pruned networks.\n\begin{figure}[h]\n \centering\n \includegraphics[width=7.5cm]{images/pruning.jpg}\n \caption{A simplified illustration of pruning weights (connections) and neurons (nodes) in a neural network comprising fully connected layers.}\n \label{simple-pruning}\n\end{figure}\nSome of the classical works in this area are the Optimal Brain Damage (OBD) paper by LeCun et al. , and the Optimal Brain Surgeon (OBS) paper by Hassibi et al. . These methods usually take a network that has been pre-trained to a reasonable quality and then iteratively prune the parameters which have the lowest `saliency' score, such that the impact on the validation loss is minimized. Once pruning concludes, the network is fine-tuned with the remaining parameters. This process is repeated a number of times until the desired number of original parameters is pruned (Algorithm~\ref{algo:pruning}).\n\begin{algorithm}[H]\n\small\n\SetAlgoLined\n\KwData{Pre-trained dense network with weights $W$, inputs $X$, number of pruning rounds $N$, fraction of parameters to prune per round $p$.}\n\KwResult{Pruned network with weights $W'$.}\n$W' \gets W$\;\n\For{$i \gets 1$ \textbf{to} $N$} {\n $S \gets \texttt{compute\_saliency\_scores}(W')$\;\n $W' \gets W' - \texttt{select\_min\_k}\large(S, p \cdot |W'|\large)$\;\n $W' \gets $\texttt{fine\_tune}($X$, $W'$)\n}\nreturn $W'$\n \caption{Standard Network Pruning with Fine-Tuning}\n \label{algo:pruning}\n\end{algorithm}\nOBD approximates the saliency score by using a second-derivative of the parameters ($\large \frac{\partial^2 L}{\partial w_{i}^2}$), where $L$ is the loss function, and $w_{i}$ is the candidate parameter for removal. 
The intuition is that the higher this value for a given parameter, the larger the increase in the loss if it were to be pruned.\nFor the purpose of speeding up the computation of the second-derivatives, OBD ignores cross-interaction between the weights ($\large \frac{\partial^2 L}{\partial w_{i} \partial w_{j}}$), and hence computes only the diagonal elements of the Hessian matrix. Otherwise, computing the full Hessian matrix is unwieldy for even a reasonable number of weights (with $n=10^4$, the matrix has $10^4 \times 10^4 = 10^8$ entries). In terms of results, LeCun et al. demonstrate that pruning reduced the parameters in a well-trained neural net by $\approx 8\times$ (combination of both automatic and manual removal) without a drop in classification accuracy. \nAcross different pruning strategies, the core algorithm could remain similar, with changes in the following aspects.\n\begin{itemize}\n \item \textbf{Saliency}: While the methods above use second-order derivatives, other methods rely on simpler magnitude-based pruning , or momentum-based pruning , etc. to determine the saliency score.\n \item \textbf{Structured vs. Unstructured}: The most flexible way of pruning is unstructured (or random) pruning, where all given parameters are treated equally. In structured pruning, parameters are pruned in blocks (such as pruning row-wise in a weight matrix, or pruning channel-wise in a convolutional filter , etc.). The latter allows easier leveraging of inference-time gains in size and latency, since these blocks of pruned parameters can be intelligently skipped for storage and inference. 
Note that unstructured pruning can also be viewed as structured pruning with block size = 1.\n \item \textbf{Distribution}: The decision about how to distribute the sparsity budget (number of parameters to be pruned) could be made either by pooling in all the parameters from the network and then deciding which parameters to prune, or by smartly selecting how much to prune in each layer individually . Some works have found that architectures like MobileNetV2 and EfficientNet have thin first layers that do not contribute significantly to the number of parameters, and pruning them leads to an accuracy drop without much gain. Hence, intuitively it would be helpful to allocate sparsity on a per-layer basis. \n \item \textbf{Scheduling}: Another question is how much to prune, and when? Should we prune an equal number of parameters every round , or should we prune at a higher pace in the beginning and gradually decrease the pace ?\n \item \textbf{Regrowth}: Some methods allow regrowing pruned connections to keep the same level of sparsity through constant cycles of prune-redistribute-regrow. Dettmers et al. estimate training time speedups between 2.7x - 5.6x by starting and operating with a sparse model throughout. However, there is a gap in terms of implementation of sparse operations on CPU, GPU, and other hardware. \n\end{itemize}\n\renewcommand{\arraystretch}{1.0}\n\begin{table}[]\n\small\n\begin{tabular}{llllll}\n\hline\n\multicolumn{1}{c}{Model Architecture} & Sparsity Type & Sparsity \% & FLOPs & Top-1 Accuracy \% & Source \\ \hline\n\multirow{6}{*}{MobileNet v2 - 1.0} \n & Dense (Baseline) & 0\% & 1x & 72.0\% & Sandler et al. \\ \cline{2-6} \n & Unstructured & 75\% & 0.27x & 67.7\% & Zhu et al. \\ \cline{2-6} \n & Unstructured & 75\% & 0.52x & 71.9\% & Evci et al. \\ \cline{2-6} \n & Structured (block-wise) & 85\% & 0.11x & 69.7\% & Elsen et al. \\ \cline{2-6} \n & Unstructured & 90\% & 0.12x & 61.8\% & Zhu et al. 
\\ \cline{2-6} \n & Unstructured & 90\% & 0.12x & 69.7\% & Evci et al. \\ \hline\n\end{tabular}\n\caption{A sample of various sparsity results on the MobileNet v2 architecture with depth multiplier = 1.0.}\n\label{tab:mnv2-sparsity}\n\end{table}\n\textbf{Beyond Model Optimization}: Frankle et al.’s work on the Lottery Ticket Hypothesis took a different look at pruning, and postulated that within every large network lies a smaller network, which can be extracted with the original initialization of its parameters, and retrained on its own to match or exceed the performance of the larger network. The authors demonstrated these results on multiple datasets, but other works were not able to replicate this on larger datasets such as ImageNet . Rather, Liu et al. demonstrate that the pruned architecture with random initialization does no worse than the pruned architecture with the trained weights.\n\textbf{Discussion}: There is a significant body of work that demonstrates impressive theoretical reduction in the model size (via number of parameters), or estimates the savings in FLOPs (Table~\ref{tab:mnv2-sparsity}). However, a large fraction of the results are on \emph{unstructured} pruning, where it is not currently clear how these improvements can lead to reduction in footprint metrics (apart from using standard file compression tools like GZip).\nOn the other hand, structured pruning with a meaningful block size is conducive to latency improvements. Elsen et al. construct sparse convolutional networks that outperform their dense counterparts by $1.3$ - $2.4 \times$ with $\approx$ 66\% of the parameters, while retaining the same Top-1 accuracy. They do this via their library to convert from the NHWC (channels-last) standard dense representation to a special NCHW (channels-first) `Block Compressed Sparse Row' (BCSR) representation which is suitable for fast inference using their fast kernels on ARM devices, WebAssembly, etc. 
They do, however, also introduce some constraints on the kinds of sparse networks that can be accelerated . Overall, this is a promising step towards practical improvements in footprint metrics with pruned networks.\nAlmost all the weights and activations of a typical network are 32-bit floating-point values. One way to reduce the model footprint is to reduce the precision of the weights and activations by \emph{quantizing} to a lower-precision datatype (often 8-bit fixed-point integers). There are two kinds of gains that we can get from quantization: (a) lower model size, and (b) lower inference latency. Often, only the model size is a constraint, and in this case we can employ a technique called weight quantization and get model size improvements , where only the model weights are in reduced precision. In order to get latency improvements, the activations need to be in fixed-point as well (Activation Quantization ), such that all the operations in the quantized graph happen in fixed-point math as well.\n\textbf{Weight Quantization}: A simple \emph{scheme} for quantizing weights to get model size improvements (similar to ) is as follows. Given a 32-bit floating-point weight matrix in a model, we can map the minimum weight value ($x_{min}$) in that matrix to $0$, and the maximum value ($x_{max}$) to $2^{b}-1$ (where $b$ is the number of bits of precision, and $b < 32$). 
Then we can linearly interpolate all values between them to an integer value in [$0, 2^{b}-1$] (Figure~\ref{fig:quantization}). Thus, we are able to map each floating-point value to a fixed-point value, where the latter requires fewer bits than the floating-point representation. This process can also be done for signed $b$-bit fixed-point integers, where the output values will be in the range [$-2^{b-1}$, $2^{b-1} - 1$]. One of the reasonable values of $b$ is $8$, since this would lead to a $32 / 8 = 4\times$ reduction in space, and also because of the near-universal support for \texttt{uint8\_t} and \texttt{int8\_t} datatypes.\nDuring inference, we go in the reverse direction where we recover a lossy estimate of the original floating-point value (\emph{dequantization}) using just $x_{min}$ and $x_{max}$. This estimate is lossy since we lost $32 - b$ bits of information when we did the rounding (another way to look at it is that a range of floating-point values maps to the same quantized value).\n\begin{figure}[h]\n \centering\n \includegraphics[width=8cm]{images/quantized-values.png}\n \caption{Quantizing floating-point continuous values to discrete fixed-point values. The continuous values are clamped to the range $x_{min}$ to $x_{max}$, and are mapped to discrete values in [$0$, $2^b - 1$] (in the above figure, $b = 3$, hence the quantized values are in the range [$0, 7$]).}\n \label{fig:quantization}\n\end{figure}\n The quantization scheme is formalized in the literature with the following two constraints:\n\begin{itemize}\n \item The quantization scheme should be linear (affine transformation), so that the precision bits are linearly distributed.\n \item $0.0$ should map exactly to a fixed-point value $x_{q_0}$, such that dequantizing $x_{q_0}$ gives us $0.0$. 
This is an implementation constraint, since $0$ is also used for padding to signify missing elements in tensors, and if dequantizing $x_{q_0}$ leads to a non-zero value, then it might be interpreted incorrectly as a valid element at that index.\n\\end{itemize}\nThe second constraint described above requires that $0$ be a part of the quantization range, which in turn requires updating $x_{min}$ and $x_{max}$, followed by clamping $x$ to lie in $[x_{min}, x_{max}]$. Following this, we can quantize $x$ by constructing a piece-wise linear transformation as follows:\n\\begin{equation}\n\\small\n\\label{eq:quantization-std}\n \\textrm{quantize}(x) = x_q = \\textrm{round}\\bigg(\\frac{x}{s}\\bigg) + z\n\\end{equation}\n$s$ is the floating-point \\emph{scale} value (can be thought of as the inverse of the slope, which can be computed using $x_{min}$, $x_{max}$ and the range of the fixed-point values). $z$ is an integer \\emph{zero-point} value which is the quantized value that is assigned to $x = 0.0$. This is the terminology followed in literature (Algorithm \\ref{algo:quantization}).\nThe dequantization step constructs $\\hat{x}$, which is a lossy estimate of $x$, since we lose precision when quantizing to a lower number of bits. We can compute it as follows:\n\\begin{equation}\n\\small\n\\label{eq:dequantization-std}\n \\textrm{dequantize}(x_q) = \\hat{x} = s (x_q - z)\n\\end{equation}\n Since $s$ is in floating-point, $\\hat{x}$ is also a floating-point value (Algorithm \\ref{algo:dequantization}). 
Note that the quantization and dequantization steps can be performed for signed integers too by appropriately changing the value $x_{q_{min}}$ (which is the lowest fixed-point value in $b$-bits) in Algorithm~\ref{algo:quantization}.\n\begin{minipage}[t]{7.5cm}\n\null \n \begin{algorithm}[H]\n \small\n \SetAlgoLined\n \KwData{Floating-point tensor to compress $\mathbf{X}$, number of precision bits $b$ for the fixed-point representation.}\n \KwResult{Quantized tensor $\mathbf{X_q}$.}\n $x_{min}, x_{max} \gets \textrm{min}(\mathbf{X}, 0), \textrm{max}(\mathbf{X}, 0)$\;\n $\mathbf{X} \gets \textrm{clamp}(\mathbf{X}, x_{min}, x_{max})$\;\n $s \gets \ddfrac{x_{max} - x_{min}}{2^b - 1}$\;\n $z \gets \textrm{round}\bigg(x_{q_{min}} - \ddfrac{x_{min}}{s}\bigg)$\;\n $\mathbf{X_q} \gets \textrm{round}\bigg(\ddfrac{\mathbf{X}}{s}\bigg) + z$\;\n return $\mathbf{X_q}$\;\n \caption{Quantizing a given weight matrix $\mathbf{X}$}\n \label{algo:quantization}\n \end{algorithm}\n \end{minipage}\n\hfill\n\begin{minipage}[t]{6.5cm}\n \null\n \begin{algorithm}[H]\n \small\n \SetAlgoLined\n \KwData{Fixed-point matrix to dequantize $\mathbf{X_q}$, along with the scale $s$, and zero-point $z$ values which were calculated during quantization.} \n \KwResult{Dequantized floating-point weight matrix $\widehat{\mathbf{X}}$.}\n $\widehat{\mathbf{X}} \gets s(\mathbf{X_q} - z)$\;\n return $\widehat{\mathbf{X}}$\;\n \caption{Dequantizing a given fixed-point weight matrix $\mathbf{X_q}$}\n \label{algo:dequantization}\n \end{algorithm}\n\end{minipage}\nWe can utilize the above two algorithms for quantizing and dequantizing the model's weight matrices. Quantizing a pre-trained model's weights for reducing the size is termed \emph{post-training quantization} in literature . 
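The two algorithms above can be sketched in NumPy as follows; this is a minimal illustration of the unsigned $b$-bit affine scheme (so $x_{q_{min}} = 0$), not the implementation used by any particular framework, and it assumes the input tensor is not identically zero (so that $s > 0$):

```python
import numpy as np

def quantize(x, b=8):
    """Affine quantization of a float tensor to unsigned b-bit integers.
    The range is extended to include 0.0, so the zero-point z dequantizes
    exactly back to 0.0."""
    x_min, x_max = min(x.min(), 0.0), max(x.max(), 0.0)
    x = np.clip(x, x_min, x_max)
    s = (x_max - x_min) / (2 ** b - 1)   # scale
    z = int(round(0 - x_min / s))        # zero-point (x_q_min = 0)
    x_q = np.clip(np.round(x / s) + z, 0, 2 ** b - 1)
    return x_q.astype(np.uint8 if b <= 8 else np.uint32), s, z

def dequantize(x_q, s, z):
    """Lossy float reconstruction: x_hat = s * (x_q - z)."""
    return s * (x_q.astype(np.float32) - z)

x = np.array([-1.0, 0.0, 0.5, 2.0], dtype=np.float32)
x_q, s, z = quantize(x)
x_hat = dequantize(x_q, s, z)  # within s/2 of x; 0.0 is recovered exactly
```

Applying this pair of functions to every weight matrix of a pre-trained model corresponds to the post-training quantization described above.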
This might be sufficient for the purpose of reducing the model size when there is sufficient representational capacity in the model.\nThere are other works in literature that demonstrate slightly different variants of quantization. XNOR-Net , Binarized Neural Networks and others use $b=1$, and thus have weight matrices with just two possible values, $-1$ or $+1$, and the quantization function there is simply the $\textrm{sign}(x)$ function (assuming the weights are symmetrically distributed around $0$).\nThe promise with such extreme quantization approaches is the theoretical $32 / 1 = 32\times$ reduction in model size without much quality loss. Some of the works claim improvements on larger networks like AlexNet , VGG , Inception etc., which might already be more amenable to compression. A more informative task would be to demonstrate extreme quantization on smaller networks like the MobileNet family . Additionally, binary quantization (and other quantization schemes like ternary , bit-shift based networks , etc.) promises latency-efficient implementations of standard operations, where multiplications and divisions are replaced by cheaper operations like addition, subtraction, etc. These claims need to be verified because even if these schemes lead to a theoretical reduction in FLOPs, the implementations still need support from the underlying hardware. A fair comparison would be using standard quantization with $b=8$, where the multiplications and divisions also become cheaper, and are supported by the hardware efficiently via SIMD instructions which allow for low-level data parallelism (for example, on x86 via the SSE instruction set, on ARM via the Neon intrinsics, and even on specialized DSPs like the Qualcomm Hexagon ).\n\textbf{Activation Quantization}: To be able to get \emph{latency improvements} with quantized networks, the math operations have to be done in fixed-point representations too. 
This means all intermediate layer inputs and outputs are also in fixed-point, and there is no need to dequantize the weight matrices since they can be used directly along with the inputs.\nVanhoucke et al. demonstrated a $3 \times$ inference speedup using a fully fixed-point model on an x86 CPU, when compared to a floating-point model on the same CPU, without sacrificing accuracy. The weights are still quantized similar to post-training quantization, however all layer inputs (except the first layer) and the activations are fixed-point. In terms of performance, the primary driver for this improvement was the availability of fixed-point SIMD instructions in Intel's SSE4 instruction set , where commonly used building-block operations like the Multiply-Accumulate (MAC) can be parallelized. Since the paper was published, Intel has released two more iterations of these instruction sets which might further improve the speedups. \n\textbf{Quantization-Aware Training (QAT)}: The network that Vanhoucke et al. mention was a 5-layer feed-forward network that was post-training quantized. However, post-training quantization can lead to quality loss during inference as the networks become more complex, as highlighted in . These could be because of: (a) outlier weights that skew the computation of the quantized values for the entire input range towards the outliers, leading to fewer bits being allocated to the bulk of the range, or (b) different distributions of weights within the weight matrix; e.g., within a convolutional layer the distribution of weights between each filter might be different, but they are quantized the same way. These effects might be more pronounced at low bit-widths due to an even worse loss of precision. Wang et al. try to retain the post-training quantization but with new heuristics to allocate the precision bits in a learned fashion. 
Tools like the TFLite Converter augment post-training quantization with a representative dataset provided by the user, to actively correct for errors at different points in the model by comparing the error between the activations of the quantized and unquantized graphs.\nJacob et al. propose (with further details by Krishnamoorthi et al. ) a training regime which is \emph{quantization-aware}. In this setting, the training happens in floating-point but the forward-pass simulates the quantization behavior during inference. Both weights and activations are passed through a function that simulates this quantization behavior (\emph{fake-quantized} is the term used by many works ).\nAssuming $\mathbf{X}$ is the tensor to be fake-quantized, Jacob et al. propose adding special quantization nodes in the training graph that collect the statistics (moving averages of $x_{min}$ and $x_{max}$) related to the weights and activations to be quantized (see Figure \ref{fig:quantization-graphs}(a) for an illustration). Once we have these values for each $\mathbf{X}$, we can derive the respective $\widehat{\mathbf{X}}$ using equations (\ref{eq:quantization-std} and \ref{eq:dequantization-std}) as follows. \n\begin{equation}\n\small\n\begin{split}\n \widehat{\mathbf{X}}{}& = \textrm{FakeQuant}(\mathbf{X}) \\\n & = \textrm{Dequantize}(\textrm{Quantize}(\mathbf{X})) \\\n & = s((\textrm{round}\bigg(\ddfrac{\textrm{clamp}(\mathbf{X}, x_{min}, x_{max})}{s}\bigg) + z) - z) \\\n & = s\bigg(\textrm{round}\bigg(\ddfrac{\textrm{clamp}(\mathbf{X}, x_{min}, x_{max})}{s}\bigg)\bigg) \\\n\end{split}\n\label{eq:fake-quant}\n\end{equation}\nSince the above equation is not directly differentiable because of the rounding behavior, to optimize a loss function $L$ w.r.t. $\mathbf{X}$, we can compute $\ddfrac{\partial{L}}{\partial{\mathbf{X}}}$ by the chain rule using the Straight-Through Estimator (STE) . 
This allows us to make the staircase function differentiable with a linear approximation (see for details).\nQuantization-Aware Training allows the network to adapt to tolerate the noise introduced by the clamping and rounding behavior during inference. Once the network is trained, tools such as the TFLite Model Converter can generate the appropriate fixed-point inference model from a network annotated with the quantization nodes.\n\textbf{Other Notable Works}:\nPolino et al. allow a non-uniform distribution of precision by learning a vector of quantization-points $p$, along with using distillation to further reduce the loss of accuracy. The results for simpler datasets like CIFAR-10 are comparable to . However, when working with the ResNet architecture on the ImageNet dataset, they achieve lower model size and faster inference by using shallower student networks. This is not a fair comparison, since other works do not mix distillation along with quantization. Fan et al. demonstrate accuracy improvements on top of standard QAT with $b < 8$. They hypothesize that the networks will learn better if the fake-quantization is not applied to the complete tensor at the same time, to allow unbiased gradients to flow (instead of the STE approximation). Instead, they apply the fake-quantization operation stochastically in a block-wise manner on the given tensor. They also demonstrate improvements over QAT on 4-bit quantized Transformer and EfficientNet networks.\n\textbf{Results}:\nRefer to Table \ref{tab:mnv2-quantization} for a comparison between the baseline floating-point model, post-training quantized, and quantization-aware trained models . The model with post-training quantization gets close to the baseline, but there is still a significant accuracy difference. The model size is $4 \times$ smaller, however the latency is slightly higher due to the need to dequantize the weights during inference. 
The model with 8-bit Quantization-Aware Training (QAT) gets quite close to the baseline floating point model while requiring $4 \times$ less disk space and being $1.64 \times$ faster.\n\renewcommand{\arraystretch}{1.0}\n\begin{table}[]\n\small\n\begin{tabular}{lllll}\n\hline\nModel Architecture & Quantization Type & Top-1 Accuracy & Size (MB) & Latency (ms, Pixel2) \\ \hline\n\multirow{3}{*}{MobileNet v2-1.0 (224)} & Baseline & 71.9\% & 14 & 89 \\ \cline{2-5} \n & Post-Training Quantization & 63.7\% & 3.6 & 98 \\ \cline{2-5} \n & Quantization-Aware Training & 70.9\% & 3.6 & 54 \\ \hline\n\end{tabular}\n\caption{\nA sample of various quantization results on the MobileNet v2 architecture for 8-bit quantization . We picked results on 8-bit, since they can be readily used with hardware and software that exists today.\n}\n\label{tab:mnv2-quantization}\n\end{table}\n\begin{figure}\n \centering\n \subfloat[\centering Quantization-Aware Training]{{\includegraphics[width=4.5cm]{images/quantization-aware-training.jpg} }}\n \qquad\n \subfloat[\centering Final fixed-point inference graph]{{\includegraphics[width=4.5cm]{images/quantization-integer-inference.jpg} }}\n \caption{(a) shows the injection of fake-quantization nodes to simulate the quantization effect and collect tensor statistics, for exporting a fully fixed-point inference graph. (b) shows the inference graph derived from the same graph as (a). Inputs and weights are in \texttt{uint8}, and results of common operations are in \texttt{uint32}. 
Biases are kept in \texttt{uint32}.}\n \label{fig:quantization-graphs}\n\end{figure}\n\textbf{Discussion}:\n\begin{itemize}\n \item Quantization is a well-studied technique for model optimization and can help with a very significant reduction in model size (often $4 \times$ when using 8-bit quantization) and inference latency.\n \item Weight quantization is straightforward enough that it can be implemented by itself for reducing model size. Activation quantization should be strongly considered because it enables both latency reduction and lower working memory required for intermediate computations in the model (which is essential for devices with low memory availability).\n \item When possible, Quantization-Aware Training should be used. It has been shown to dominate post-training quantization in terms of accuracy.\n \item However, tools like Tensorflow Lite have made it easy to rely on post-training quantization. It has been shown that often there is minimal loss when using post-training quantization, and with the help of a representative dataset this loss is further shrunk down. Wherever there is an opportunity for switching to fixed-point operations, the infrastructure allows using them.\n \item For performance reasons, it is best to consider the common operations that follow a typical layer, such as Batch-Norm, Activation, etc. 
and `fold' them into the quantization operations.\n\end{itemize}\nThere are other compression techniques like Low-Rank Matrix Factorization, K-Means Clustering, Weight-Sharing, etc. which are also actively being used for model compression and might be suitable for further compressing hotspots in a model.\nLearning techniques try to train a model differently in order to obtain better quality metrics (accuracy, F1 score, precision, recall, etc.) while supplementing, or in some cases replacing, traditional supervised learning. The improvement in quality can sometimes be traded off for a smaller footprint by reducing the number of parameters / layers in the model and achieving the same baseline quality with a smaller model. 
An incentive of paying attention to learning techniques is that they are applied only during training, without impacting inference.\nEnsembles are well known to help with generalization . The intuition is that this enables learning multiple independent hypotheses, which are likely to be better than learning a single hypothesis. Standard ensembling methods include bagging (learning models that are trained on non-overlapping data and then ensembling them), boosting (learning models that are trained to fix the classification errors of other models in the ensemble), averaging (voting by all the ensemble models), etc. Bucila et al. used large ensembles to label synthetic data that they generated using various schemes. A smaller neural net is then trained to learn not just from the labeled data but also from this weakly labeled synthetic data. They found that single neural nets were able to mimic the performance of larger ensembles, while being $1000 \times$ smaller and faster. This demonstrated that it is possible to transfer the cumulative knowledge of ensembles to a single small model, though it might not be sufficient to rely on just the existing labeled data. \nHinton et al. 
, in their seminal work explored how smaller networks (students) can be taught to extract `dark knowledge' from larger models / ensembles of larger models (teachers) in a slightly different manner. Instead of having to generate synthetic data, they use the larger teacher model to generate \emph{soft-labels} on existing labeled data. The soft-labels assign a probability to each class, instead of hard binary values in the original data. The intuition is that these soft-labels capture the relationship between the different classes which the model can learn from. For example, a truck is more similar to a car than to an apple, which the model might not be able to learn directly from hard labels.
The student network learns to minimize the cross-entropy loss on these soft labels, along with the original ground-truth hard labels. Since the probabilities of the incorrect classes might be very small, the logits are scaled down by a `temperature' value $\geq 1.0$, so that the distribution is `softened'. If the input vector is $\mathbf{X}$, and the teacher model's logits are $\mathbf{Z^{(t)}}$, the teacher model's softened probabilities with temperature $T$ can be calculated as follows using the familiar softmax function:
\begin{equation}
 \small
 \mathbf{Y}_i^{(t)} = \ddfrac{\exp(\mathbf{Z_i^{(t)}} / T)}{\sum_{j=1}^{n} \exp(\mathbf{Z_j^{(t)}} / T)}
\end{equation}
Note that as $T$ increases, the relative differences between the various elements of $Y^{(t)}$ decrease. This happens because dividing all logits by the same constant shrinks the exponentials of the larger logits proportionally more, flattening the softmax output.
Hence, as the temperature $T$ increases, we see the distribution of $Y^{(t)}$ `soften' further.\nWhen training along with labeled data ($\\mathbf{X}$, $\\mathbf{Y}$), and the student model's output ($\\mathbf{Y^{(s)}}$), we can describe the loss function as:\n\\begin{equation}\n \\small\n \\begin{split}\n L& = \\lambda_1 \\cdot L_{\\rm ground-truth} + \\lambda_2 \\cdot L_{\\rm distillation} \\\\\n & = \\lambda_1 \\cdot \\textrm{CrossEntropy}(\\mathbf{Y}, \\mathbf{Y^{(s)}}; \\theta) + \\lambda_2 \\cdot \\textrm{CrossEntropy}(\\mathbf{Y^{(t)}}, \\mathbf{Y^{(s)}}; \\theta)\n \\end{split}\n\\end{equation}\n$\\textrm{CrossEntropy}$ is the cross-entropy loss function, which takes in the labels and the output. For the first loss term, we pass along the ground truth labels, and for the second loss term we pass the corresponding soft labels from the teacher model for the same input. $\\lambda_1$ and $\\lambda_2$ control the relative importance of the standard ground truth loss and the distillation loss respectively. When $\\lambda_1 = 0$, the student model is trained with just the distillation loss. Similarly, when $\\lambda_2 = 0$, it is equivalent to training with just the ground-truth labels. Usually, the teacher network is pre-trained and frozen during this process, and only the student network is updated. Refer to Figure \\ref{fig:distillation} for an illustration of this process.\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=10cm]{images/distillation.jpg}\n \\caption{Distillation of a smaller student model from a larger pre-trained teacher model. Both the teacher and student models receive the same input. The teacher is used to generate `soft-labels' for the student, which gives the student more information than just hard binary labels. The student is trained using the regular cross-entropy loss with the hard labels, as well as using the distillation loss function which uses the soft labels from the teacher. 
In this setting, the teacher is frozen, and only the student receives the gradient updates. }
 \label{fig:distillation}
\end{figure}
In the paper, Hinton et al. were able to closely match the accuracy of a 10-model ensemble for a speech recognition task with a single distilled model. Urban et al. did a comprehensive study demonstrating that distillation significantly improves performance of shallow student networks as small as an MLP with one hidden layer on tasks like CIFAR-10. Sanh et al. use the distillation loss for compressing a BERT model (along with a cosine loss that minimizes the cosine distance between two internal vector representations of the input as seen by the teacher and student models). Their model retains 97\% of the performance of BERT-Base while being 40\% smaller and 60\% faster on CPU. 
It is possible to adapt the general idea of distillation to work on intermediate outputs of teachers and students. Zagoruyko et al. transfer intermediate `attention maps' between teacher and student convolutional networks. The intuition is to make the student focus on the parts of the image that the teacher is paying attention to. MobileBERT uses a progressive-knowledge transfer strategy where they do layer-wise distillation between the BERT student and teacher models, but they do so in stages, where the first $l$ layers are distilled in the $l$-th stage. Along with other architecture improvements, they obtain a 4.3$\times$ smaller and 5.5$\times$ faster BERT with small losses in quality. 
Another idea that has been well explored is exploiting a model trained in a supervised manner to label unlabeled data. Blum et al., in their paper from 1998, report halving the error rate of their classifiers by retraining on a subset of pseudo-labels generated using the previous classifiers.
This has been extended through distillation to use the teacher model to label a large corpus of unlabeled data, which can then be used to improve the quality of the student model . 
Overall, distillation has been empirically shown to improve both the accuracy as well as the speed of convergence of student models across many domains. Hence, it enables training smaller models which might otherwise not have an acceptable quality for deployment.
\textbf{Discussion}:
\begin{itemize}
 \item Distillation is an adaptable technique that needs minimal changes in the training infrastructure to be used. Even if the teacher model cannot be executed at the same time as the student model, the teacher model's predictions can be collected offline and treated as another source of labels.
 \item When there is sufficient labeled data, there is ample evidence that distillation is likely to improve the student model's predictions. If there is a large corpus of unlabeled data, the teacher model can be used to generate pseudo-labels on the unlabeled data, which can further improve the student model's accuracy.
 \item Strategies for intermediate-layer distillation have also been shown to be effective in the case of complex networks. In such scenarios, a new loss term minimizing the difference between the outputs of the two networks at some semantically identical intermediate point(s) needs to be added.
\end{itemize}
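The temperature-scaled softmax and the combined distillation loss above can be sketched in plain NumPy (a minimal illustration under our own function names and toy logits; a real setup would use the framework's loss functions and batched tensors):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; a larger T yields a 'softer' distribution."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(target_probs, logits, T=1.0):
    """Cross-entropy between a target distribution and the model's softmax output."""
    return float(-(np.asarray(target_probs, float) * np.log(softmax(logits, T))).sum())

def distillation_loss(student_logits, teacher_logits, hard_label,
                      T=2.0, lambda1=0.5, lambda2=0.5):
    """lambda1 * ground-truth loss + lambda2 * distillation loss, as in the text."""
    n = len(student_logits)
    y_hard = np.eye(n)[hard_label]        # one-hot ground-truth label
    y_soft = softmax(teacher_logits, T)   # teacher's softened probabilities
    return (lambda1 * cross_entropy(y_hard, student_logits)
            + lambda2 * cross_entropy(y_soft, student_logits, T))
```

Note how raising $T$ flattens the teacher distribution: `softmax([3, 1, 0], T=1.0)` is sharply peaked, while `softmax([3, 1, 0], T=10.0)` is much closer to uniform.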
\nThe transformations as described above have long been demonstrated to improve accuracy of convolutional networks . They have also been a core part of seminal works in Image Classification. A prime example is AlexNet , where such transformations were used to increase the effective size of the training dataset by 2048 $\\times$, which won the ImageNet competition in 2012. Since then it has became common to use such transformations for Image Classification models (Inception , XCeption , ResNet , etc.).\nWe can categorize data-augmentation methods as follows (also refer to Figure \\ref{fig:data-augmentation}):\n\\begin{itemize}\n \\item \\textbf{Label-Invariant Transformations}: These are some of the most common transformations, where the transformed example retains the original label. These can include simple geometric transformations such as translation, flipping, cropping, rotation, distortion, scaling, shearing, etc. However the user has to verify the label-invariance property with each transformation for the specific task at hand.\n \\item \\textbf{Label-Mixing Transformations}: Transformations such as Mixup , mix inputs from two different classes in a weighted manner and treat the label to be a correspondingly weighted combination of the two classes (in the same ratio). The intuition is that the model should be able to extract out features that are relevant for both the classes. Other transformations like Sample Pairing also seem to help .\n \\item \\textbf{Data-Dependent Transformations}: In this case, transformations are chosen such that they maximize the loss for that example , or are adversarially chosen so as to fool the classifier .\n \\item \\textbf{Synthetic Sampling}: These methods synthetically create new training examples. 
Algorithms like SMOTE allow re-balancing the dataset to make up for skew in the datasets, and GANs can be used to synthetically create new samples to improve model accuracy.
 \item \textbf{Composition of Transformations}: These are transformations that are themselves composed of other transformations, and the labels are computed depending on the nature of the transformations that are stacked.
\end{itemize}
\begin{figure}[h]
 \centering
 \includegraphics[width=10cm]{images/data-augmentation.png}
 \caption{Some common types of data augmentations. Source: }
 \label{fig:data-augmentation}
\end{figure}
\renewcommand{\arraystretch}{1.0}
\begin{table}[]
\small
\centering
\begin{tabular}{lL}
\hline
Transformation & Validation Accuracy Improvement (\%) \\ \hline
rotate & 1.3 \\ 
shear-x & 0.9 \\
shear-y & 0.9 \\
translate-x & 0.4 \\
translate-y & 0.4 \\
sharpness & 0.1 \\
autoContrast & 0.1 \\ \hline
\end{tabular}
\caption{
A breakdown of the contribution of various transformations on the validation accuracy of a model trained on the CIFAR-10 dataset. Source: .
}
\label{tab:vision-augmentation}
\end{table}
\textbf{Discussion}:
Apart from Computer Vision, Data-Augmentation has also been used in NLP and Speech. In NLP, a common idea that has been used is `back-translation' where augmented examples are created by training two translation models, one going from the source language to the target language, and the other going back from the target language to the original source language. Since the back-translation is not exact, this process is able to generate augmented samples for the given input. Other methods like WordDropout stochastically set embeddings of certain words to zero. SwitchOut introduces a similarity measure to disallow augmentations that are too dissimilar to the original input.
In Speech , the input audio samples are translated to the left / right before being passed to the decoder.
While the augmentation policies are usually hand-tuned, there are also methods such as AutoAugment where the augmentation policy is learned through a Reinforcement-Learning (RL) based search, searching for the transformations to be applied, as well as their respective hyper-parameters. Though this is shown to improve accuracy, it is also complicated and expensive to set up a separate search for augmentation, taking as many as 15000 GPU hours to learn the optimal policy on ImageNet. The RandAugment paper demonstrated that it is possible to achieve similar results while reducing the search space to just two hyper-parameters (number of augmentation methods, and the strength of the distortion) for a given model and dataset.
Overall, we see that data-augmentation leads to better generalization of the given models. Some techniques are specific to their domains, such as RandAugment (Vision), or back-translation and SwitchOut (NLP). However, the core principles behind them make it likely that similar methods can be derived for other domains too (refer to our categorization of data-augmentation methods above).
Source: }
 \label{fig:pre-training-tasks}
\end{figure}
Once the models learn generic representations that transfer well across tasks, they can be adapted to solve the target task by adding some layers that project the representation to the label space, and fine-tuning the model with the labeled data. Since the labeled data is not being used for learning rudimentary features, but rather how to map the high-level representations into the label space, the quantum of labeled data is going to be a fraction of what would have been required for training the model from scratch. From this lens, models pre-trained with Self-Supervised learning and then fine-tuned are \emph{data-efficient} (they converge faster, attain better quality for the same amount of labeled data when compared to training from scratch, etc.) ().
\begin{figure}[h]
 \centering
 \includegraphics[width=5cm]{images/ulmfit-val-error.png}
 \caption{Validation Error w.r.t. number of training examples for different training methods on IMDb (from scratch, ULMFiT Supervised: pre-training with WikiText-103 and fine-tuning using labeled data, ULMFit Semi-Supervised: Pre-Training with WikiText-103 as well as unlabeled data from the target dataset and fine-tuning with labeled data). Source: }
 \label{fig:ulmfit}
\end{figure}
An example of this two-step process of pre-training on unlabeled data and fine-tuning on labeled data has gained rapid acceptance in the NLP community. ULMFiT pioneered the idea of training a general purpose language model, where the model learns to solve the pretext task of predicting the next word in a given sentence, without the need for an associated label. The authors found that using a large corpus of preprocessed unlabeled data such as the WikiText-103 dataset (derived from English Wikipedia pages) was a good choice for the pre-training step.
This was sufficient for the model to learn general properties about the language, and the authors found that fine-tuning such a pre-trained model for a binary classification problem (IMDb dataset) required only 100 labeled examples ($\approx 10\times$ fewer labeled examples than training from scratch). Refer to Figure \ref{fig:ulmfit}. If we add a middle step of pre-training using unlabeled data from the same target dataset, the authors report needing $\approx 20\times$ fewer labeled examples.
This idea of pre-training followed by fine-tuning is also used in BERT (and other related models like GPT, RoBERTa, T5, etc.) where the pre-training step involves learning to solve two tasks. The first is the Masked Language Model task, where about 15\% of the tokens in the given sentence are masked and the model needs to predict the masked token. The second task is, given two sentences $A$ and $B$, predict if $B$ follows $A$. The pre-training loss is the mean of the losses for the two tasks. Once pre-trained, the model can then be used for classification or seq2seq tasks by adding additional layers on top of the last hidden layer. When it was published, BERT beat the State-of-the-Art on eleven NLP tasks.
Similar to NLP, the pretext tasks in Vision have been used to train models that learn general representations. extracts two patches from a training example and then trains the model to predict their relative position in the image (Refer to Figure \ref{fig:vision-pretext-tasks}(a)). They demonstrate that using a network pre-trained in this fashion improves the quality of the final object detection task, as compared to randomly initializing the network. Similarly, another task is to predict the degree of rotation for a given rotated image . The authors report that the network trained in a self-supervised manner this way can be fine-tuned to perform nearly as well as a fully supervised network.
\begin{figure}
 \centering
 \subfloat[\centering Detecting relative order of patches.
Source: .]{{\includegraphics[width=5cm]{images/obj-detection-patches.png} }}
 \subfloat[\centering Predicting the degree of rotation of a given image.]{{\includegraphics[width=5cm]{images/rotation-pretext.png} }}
 \caption{Pretext tasks for vision problems.}
 \label{fig:vision-pretext-tasks}
\end{figure}
Another common theme is Contrastive Learning, where the model is trained to distinguish between similar and dissimilar inputs. Frameworks such as SimCLR , try to learn representations $h_i$ and $h_j$ for two given inputs $\tilde{x_i}$ and $\tilde{x_j}$, where the latter two are differently augmented views of the same input, such that the cosine similarity of the projections of $h_i$ and $h_j$, $z_i$ and $z_j$ (using a separate function $g(.)$) can be maximized. Similarly, for dissimilar inputs the cosine similarity of $z_i$ and $z_j$ should be minimized. The authors report a Top-1 accuracy of $73.9\%$ on ImageNet with only 1\% labels (13 labels per class), and outperform the ResNet-50 supervised baseline with only 10\% labels.
\begin{figure}[h]
 \centering
 \includegraphics[width=4cm]{images/simclr.png}
 \caption{SimCLR framework for learning visual representations. Source: }
 \label{fig:simclr}
\end{figure}
\textbf{Discussion}:
Self-Supervised Learning (SSL) has demonstrated significant success in general representation learning with unlabeled data, followed by fine-tuning to adapt the model to the target task with a modest number of labeled examples. Yann LeCun has likened Self-Supervision to the cake, and Supervised Learning to the icing on top , implying that SSL will be the primary way of training high-quality models in the future as we move beyond tasks where labeled data is abundant. 
With unlabeled data being practically limitless, SSL's success is dependent on creating useful pretext tasks for the domain of interest.
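The cosine-similarity objective used by SimCLR-style contrastive methods can be illustrated with a small NumPy sketch (a simplification of the actual loss, which normalizes over all pairs in a batch; the function names, temperature value, and toy vectors are ours):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two projection vectors z_i and z_j."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_loss(z_i, z_j, negatives, tau=0.5):
    """Loss for one positive pair (z_i, z_j) against a list of negatives.

    Minimizing this maximizes agreement between the two augmented views
    (the positive pair) while pushing z_i away from the negatives.
    """
    pos = np.exp(cosine_sim(z_i, z_j) / tau)
    neg = sum(np.exp(cosine_sim(z_i, z_k) / tau) for z_k in negatives)
    return float(-np.log(pos / (pos + neg)))
```

Intuitively, the loss is small when the two views project to nearly the same direction, and grows as the positive pair drifts apart relative to the negatives.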
As demonstrated across NLP , Vision , Speech , etc., Self-Supervision not only speeds up and improves convergence, but also enables high quality in tasks where it was intractable to get enough labeled samples. 
Practically, for someone training Deep Learning models on a custom task (say a speech recognition model for a remote African dialect), having a pre-trained checkpoint of a model trained in a self-supervised fashion (such as wav2vec , which is pre-trained in a similar way to BERT ), enables them to spend only a tiny fraction of resources on both data labeling and on fine-tuning to a good enough quality. In some cases, such as SimCLR , SSL approaches have actually beaten previous supervised baselines with sophisticated models like ResNet-50. Hence, we are hopeful that SSL methods will be crucial for ML practitioners for training high-quality models cheaply.
It is possible to delegate some of the work around efficiency to automation, letting automated approaches search for ways of training more efficient models. Apart from reducing work for humans, this also lowers the bias that manual decisions might introduce in model training, while systematically and automatically looking for optimal solutions.
The trade-off is that these methods might require large computational resources, and hence have to be carefully applied.
One of the commonly used methods that fall under this category is Hyper-Parameter Optimization (HPO) . Hyper-parameters such as the initial learning rate, weight decay, etc. have to be carefully tuned for faster convergence . They can also decide the network architecture, such as the number of fully connected layers, the number of filters in a convolutional layer, etc.
Experimentation can help us build an intuition for the \emph{range} in which these parameters might lie, but finding the best values requires a search for the exact values that optimize the given objective function (typically the loss value on the validation set). Manually searching for these quickly becomes tedious with the growth in the number of hyper-parameters and/or their possible values. Hence, let us explore possible algorithms for automating the search. To formalize this, let us assume, without loss of generality, that we are optimizing the loss value on the given dataset's validation split. Then, let $\mathcal{L}$ be the loss function, $f$ be the model function that is learnt with the set of hyper-parameters ($\lambda$), $x$ be the input, and $\theta$ be the model parameters.
With the search, we are trying to find $\lambda^{*}$ such that,
\begin{equation}
 \small
 \lambda^{*} = \argmin_{\lambda \in \Lambda} \mathcal{L}(f_{\lambda}(x; \theta), y)
 \label{eq:hyper-param-opt}
\end{equation}
$\Lambda$ is the set of all possible hyper-parameters. In practice, $\Lambda$ can be a very large set containing all possible combinations of the hyper-parameters, which would often be intractable since hyper-parameters like learning rate are real-valued. A common strategy is to approximate $\Lambda$ by picking a finite set of \emph{trials}, $S = \{\lambda^{(1)}, \lambda^{(2)}, ..., \lambda^{(n)}\}$, such that $S \subseteq \Lambda$, and then we can approximate Equation (\ref{eq:hyper-param-opt}) with:
\begin{equation}
 \small
 \lambda^{*} \approx \argmin_{\lambda \in \{\lambda^{(1)}, ..., \lambda^{(n)}\}} \mathcal{L}(f_{\lambda}(x; \theta), y)
 \label{eq:hyper-param-opt-approx}
\end{equation}
As we see, the choice of $S$ is crucial for the approximation to work. The user has to construct a range of reasonable values for each hyper-parameter $\lambda_{i} \in \lambda$. This can be based on prior experience with those hyper-parameters.
A simple algorithm for automating HPO is \textbf{Grid Search} (also referred to as Parameter Sweep), where $S$ consists of all the distinct and valid combinations of the given hyper-parameters based on their specified ranges. Each trial can then be run in parallel since each trial is independent of the others, and the optimal combination of the hyper-parameters is found once all the trials have completed. Since this approach tries all possible combinations, it suffers from the \emph{curse of dimensionality}, where the total number of trials grows very quickly.
Another approach is \textbf{Random Search} where trials are sampled randomly from the search space . Since each trial is independent of the others, it can still be executed in parallel.
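The two strategies can be sketched over a toy objective (the quadratic surrogate for the validation loss and the parameter ranges below are invented purely for illustration):

```python
import itertools
import random

def objective(lr, wd):
    # Toy validation-loss surrogate with its optimum near lr=0.1, wd=1e-4.
    return (lr - 0.1) ** 2 + (wd - 1e-4) ** 2

lr_range, wd_range = (1e-4, 1.0), (0.0, 1e-2)

def grid_search(n_per_dim):
    """Grid Search: try all combinations of evenly spaced values per dimension."""
    lrs = [lr_range[0] + i * (lr_range[1] - lr_range[0]) / (n_per_dim - 1)
           for i in range(n_per_dim)]
    wds = [wd_range[0] + i * (wd_range[1] - wd_range[0]) / (n_per_dim - 1)
           for i in range(n_per_dim)]
    return min(itertools.product(lrs, wds), key=lambda t: objective(*t))

def random_search(n_trials, seed=0):
    """Random Search: i.i.d. samples; the trial count can change on the fly."""
    rng = random.Random(seed)
    trials = [(rng.uniform(*lr_range), rng.uniform(*wd_range))
              for _ in range(n_trials)]
    return min(trials, key=lambda t: objective(*t))
```

With 16 trials each (a 4 by 4 grid versus 16 random samples), random search tends to land closer to the optimum along the important dimension (lr here), because it does not reuse the same four lr values across trials.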
However, there are a few critical benefits of Random Search:
\begin{enumerate}
 \item Since the trials are i.i.d. (not the case for Grid Search), the resolution of the search can be changed on-the-fly (if the computational budget has changed, or certain trials have failed).
 \item The likelihood of finding the optimal $\lambda^{*}$ increases with the number of trials, which is not the case with Grid Search.
 \item If there are $K$ real-valued hyper-parameters, and $N$ total trials, grid search would pick $N^{\frac{1}{K}}$ values for each hyper-parameter. However, not all hyper-parameters might be important. Random Search picks a random value for each hyper-parameter per trial. Hence, in cases with low effective dimensionality of the search space, Random Search performs better than Grid Search.
\end{enumerate}
\begin{figure}
 \centering
 \subfloat[\centering Grid Search]{{\includegraphics[width=5cm]{images/hpo-grid-search.png} }}
 \subfloat[\centering Random Search]{{\includegraphics[width=5cm]{images/hpo-random-search.png} }}
 \subfloat[\centering Bayesian Optimization]{{\includegraphics[width=5cm]{images/hpo-bayesian-opt.png} }}
 \caption{Hyper-Parameter Search algorithms. Source: }
 \label{fig:hpo-algorithms}
\end{figure}
\textbf{Bayesian Optimization} (BO) based search is a \emph{model-based} sequential approach where the search is guided by actively estimating the value of the objective function at different points in the search space, and then spawning trials based on the information gathered so far. The estimation of the objective function is done using a \emph{surrogate function} that starts off with a prior estimate. The trials are created using an \emph{acquisition function}, which picks the next trial using the surrogate function, the likelihood of improving on the optimum so far, whether to explore or exploit, etc. As the trials complete, both these functions will refine their estimates.
Since the method keeps an internal model of how the objective function looks and plans the next trials based on that knowledge, it is model-based. Also, since the selection of trials depends on the results of the past trials, this method is sequential. BO improves over Random Search in that the search is guided rather than random, thus fewer trials are required to reach the optimum. However, it also makes the search sequential (though it is possible to run multiple trials in parallel, overall it will lead to some wasted trials).
One of the strategies to save training resources with the above search algorithms is the \textbf{Early Stopping} of trials that are not promising. Google's Vizier uses the Median Stopping Rule for early stopping, where a trial is terminated if its performance at a time step $t$ is below the median performance of all trials run till that point of time.
Other algorithms for HPO include:
\begin{enumerate}
 \item \textbf{Population Based Training (PBT)} : This method is similar to evolutionary approaches like genetic algorithms, where a fixed number of trials (referred to as the population) are spawned and trained to convergence. Each trial starts with a random set of hyper-parameters, and is trained for a pre-determined number of steps. At this point, all trials are paused, and every trial's weights and parameters might be replaced by the weights and parameters from the `best' trial in the population so far. This is the \emph{exploitation} part of the search. For \emph{exploration}, these hyper-parameters are perturbed from their original values. This process repeats till convergence. It combines both the search and training in a fixed number of trials that run in parallel. It also only works with adaptive hyper-parameters like learning rate, weight-decay, etc. but cannot be used where hyper-parameters change the model structure. Note that the criterion for picking the `best' trial does not have to be differentiable.
\n \\item \\textbf{Multi-Armed Bandit Algorithms}: Methods like Successive Halving (SHA) and Hyper-Band are similar to random search, but they allocate more resources to the trials which are performing well. Both these methods need the user to specify the total computational budget $B$ for the search (can be the total number of epochs of training, for instance). They then spawn and train a fixed number of trials with randomly sampled hyper-parameters while allocating the training budget. Once the budget is exhausted, the worse performing fraction ($\\frac{\\eta - 1}{\\eta}$) of the trials are eliminated, and the remaining trials' new budget is multiplied by $\\eta$. In the case of SHA, $\\eta$ is 2, so the bottom $\\frac{1}{2}$ of the trials are dropped, and the training budget for the remaining trials is doubled. For Hyper-Band $\\eta$ is 3 or 4. Hyper-Band differs from SHA in that the user does not need to specify the maximum number of parallel trials, which introduces a trade-off between the total budget and the per-trial allocation.\n\\end{enumerate}\n\\textbf{HPO Toolkits}: There are several software toolkits that incorporate HPO algorithms as well as an easy to use interface (UI, as well as a way to specify the hyper-parameters and their ranges). Vizier (an internal Google tool, also available via Google Cloud for blackbox tuning). Amazon offers Sagemaker which is functionally similar and can also be accessed as an AWS service. 
NNI , Tune , and Advisor are other open-source HPO software packages that can be used locally.
Neural Architecture Search can be thought of as an extension of Hyper-Parameter Optimization, wherein we are searching for parameters that change the network architecture itself.
We find that there is consensus in the literature around categorizing NAS as a system comprising the following parts:
\begin{enumerate}
 \item \textbf{Search Space}: These are the operations that are allowed in the graph (Convolution ($1\times1, 3\times3, 5\times5$), Fully Connected, Pooling, etc.), as well as the semantics of how these operations and their outputs connect to other parts of the network.
 \item \textbf{Search Algorithm \& State}: This is the algorithm that controls the architecture search itself. Typically the standard algorithms that apply in HPO (Grid Search, Random Search, Bayesian Optimization, Evolutionary Algorithms) can be used for NAS as well. However, Reinforcement Learning (RL) and Gradient Descent are popular alternatives too. 
 \item \textbf{Evaluation Strategy}: This defines how we evaluate a model for fitness. It can simply be a conventional metric like validation loss, accuracy, etc.
It can also be a compound metric, as in the case of MNasNet , which creates a single custom metric based on both accuracy and latency.
\end{enumerate}
\begin{figure}[h]
 \centering
 \includegraphics[width=7.5cm]{images/nas-controller.png}
 \caption{Neural Architecture Search: The controller can be thought of as a unit that encodes the search space, the search algorithm itself, and the state it maintains (typically the model that helps generate the candidates). The algorithm generates candidate models in the search space $S$, and receives evaluation feedback. This feedback is used to update the state, and generate better candidate models.}
 \label{fig:nas-controller}
\end{figure}
The user is supposed to either explicitly or implicitly encode the search space. Together with the search algorithm, we can view this as a `controller' which generates sample candidate networks (refer to Figure \ref{fig:nas-controller}). The evaluation stage then trains and evaluates these candidates for fitness. This fitness value is passed as feedback to the search algorithm, which uses it to generate better candidates. While the implementations of each of these blocks vary, this structure is common across the seminal work in this area.
Zoph et al.'s paper from 2016 demonstrated that end-to-end neural network architectures can be generated using Reinforcement Learning. In this case, the controller is a Recurrent Neural Network, which generates the architectural hyper-parameters of a feed-forward network one layer at a time; for example, the number of filters, stride, and filter size. They also support adding skip connections (refer to Figure \ref{fig:nasnet-controller}). The network semantics are baked into the controller, so generating a network that behaves differently requires changing the controller.
Also, training the controller itself is expensive (taking 22,400 GPU hours ), since the entire candidate network has to be trained from scratch for a single gradient update to happen. In a follow-up paper , they come up with a refined search space where, instead of searching for the end-to-end architecture, they search for \emph{cells}: a `Normal Cell' that takes in an input, processes it, and returns an output of the same spatial dimensions; and a `Reduction Cell' that processes its input and returns an output whose spatial dimensions are scaled down by a factor of 2. Each cell is a combination of $B$ blocks. The controller's RNN generates one block at a time, where it picks the outputs of two past blocks, the respective operations to apply on them, and how to combine them into a single output. The Normal and Reduction cells are stacked in alternating fashion ($N$ Normal cells followed by 1 Reduction cell, where $N$ is tunable) to construct an end-to-end network for CIFAR-10 and ImageNet. Learning these cells individually rather than learning the entire network seems to improve the search time by 7$\times$, when compared to the end-to-end network search in , while beating the state-of-the-art on CIFAR-10 at that time.
\begin{figure}[h]
 \centering
 \includegraphics[width=10cm]{images/nasnet.png}
 \caption{A NASNet controller generating the architecture, recursively making one decision at a time and generating a single block in the image (making a total of 5 decisions). Source: .}
 \label{fig:nasnet-controller}
\end{figure}
Other approaches such as evolutionary techniques , differentiable architecture search , progressive search , parameter sharing , etc. try to reduce the cost of architecture search (in some cases reducing the compute cost to a couple of GPU days instead of thousands of GPU days).
These are covered in detail in .
Most of the early papers focused on finding the architectures that performed best on quality metrics like accuracy, unconstrained by footprint metrics. However, when focusing on efficiency, we are often interested in specific trade-offs between quality and footprint. Architecture Search can help with multi-objective searches that optimize for both quality and footprint. MNasNet is one such work. It incorporates the model's latency on the target device into the objective function directly, as follows:
\begin{equation}
\small
\underset{m}{\operatorname{maximize}} \quad A C C(m) \times\left[\frac{L A T(m)}{T}\right]^{w}
\end{equation}
where $m$ is the candidate model, $ACC$ is the accuracy metric, and $LAT$ is the latency of the given model on the desired device. $T$ is the target latency, and $w$ is recommended to be $-0.07$. FBNet uses a similar approach with a compound reward function that is a weighted combination of the loss value on the validation set and the latency. However, instead of measuring the latency of the candidate model on device, they use a pre-computed lookup table to approximate the latency, which speeds up the search process. They achieve networks that are up to $2.4\times$ smaller and $1.5\times$ faster than MobileNet, while finishing the search in 216 GPU hours. Other works such as MONAS use Reinforcement Learning to incorporate power consumption into the reward function, along with hard constraints on the number of MAC operations in the model, and discover Pareto frontiers under the given constraints. 
\textbf{Discussion}: Automation plays a critical role in model efficiency. Hyper-Parameter Optimization (HPO) is now a natural step in training models and can extract significant quality improvements, while minimizing human involvement. In case the cost of HPO becomes large, algorithms like Bayesian Optimization and Hyper-Band, together with early stopping techniques, can be used.
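Returning to the MNasNet-style compound objective above, its soft-penalty behavior is easy to see in a one-function sketch (the accuracy and latency numbers below are hypothetical):

```python
def mnasnet_reward(acc, latency_ms, target_ms, w=-0.07):
    """The compound objective ACC(m) * (LAT(m) / T)^w described above.
    With w < 0, models slower than the target latency are penalized
    smoothly (a soft constraint) rather than being rejected outright."""
    return acc * (latency_ms / target_ms) ** w

on_target = mnasnet_reward(0.75, 80.0, 80.0)   # no penalty at the target
too_slow = mnasnet_reward(0.76, 160.0, 80.0)   # 2x slower: mildly penalized
```

At the target latency the factor is exactly 1, so the reward equals the accuracy; a model twice as slow is scaled by $2^{-0.07} \approx 0.95$, which lets slightly slower but notably more accurate candidates still win.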
HPO is also available in ready-to-use software packages like Tune , Vizier via Google Cloud , NNI , etc. Similarly, recent advances in Neural Architecture Search (NAS) also make it feasible to construct architectures in a learned manner, while having constraints on both quality and footprint . Assuming several hundred GPU-hours of compute for a NAS run to finish, and an approximate cost of \$3 per GPU-hour on leading cloud computing services, this makes NAS methods financially feasible and comparable in cost to manual experimentation with model architectures when optimizing for multiple objectives.
Another common theme for tackling efficiency problems is to go back to the drawing board, and design layers and models that are efficient by design, to replace the baseline. Such layers and models are typically designed with some insight that might lead to a design that is better in general, or better suited to the specific task.
In this section, we lay out examples of such efficient layers and models to illustrate this idea.
One of the classical examples of efficient layers in the Vision domain is the Convolutional layer, which improved over Fully Connected (FC) layers in Vision models. FC layers suffer from two primary issues:
\begin{enumerate}
 \item FC layers ignore the spatial information of the input pixels. Intuitively, it is hard to build an understanding of the given input by looking at individual pixel values in isolation. They also ignore the spatial locality of nearby regions. 
 \item Secondly, using FC layers leads to an explosion in the number of parameters when working with even moderately sized inputs. A $100\times100$ RGB image with 3 channels would lead to each neuron in the first layer having $3\times10^4$ connections. This also makes the network susceptible to overfitting.
\end{enumerate}
Convolutional layers avoid this by learning `filters', where each filter is a 3D weight matrix of a fixed size ($3\times3$, $5\times5$, etc.), with the third dimension being the same as the number of channels in the input. Each filter is convolved over the input to generate a feature map for that given filter.
These filters learn to detect specific features, and convolving them with a particular input patch results in a single scalar value that is higher if the feature is present in that patch. 
The learned features are simpler in lower layers (such as horizontal, vertical, or diagonal edges), and more complex in subsequent layers (textures, shapes, etc.). This happens because the subsequent layers use the feature maps generated by the previous layers, and each pixel in the input feature map of the $i$-th layer depends on the past $i-1$ layers. This increases the \emph{receptive field} of the said pixel as $i$ increases, progressively increasing the complexity of the features that can be encoded in a filter. 
The core idea behind the efficiency of Convolutional layers is that the same filter is used everywhere in the image, regardless of where the filter is applied, enforcing spatial invariance while sharing the parameters. Going back to the example of a $100\times100$ RGB image with 3 channels, a $5\times5$ filter would imply a total of $75$ ($5\times5\times3$) parameters. Each layer can learn multiple unique filters, and still be within a very reasonable parameter budget. This also has a regularizing effect, wherein the dramatically reduced number of parameters allows for easier optimization and reduces the likelihood of overfitting. 
Convolutional layers are usually coupled with Pooling layers, which allow dimensionality reduction by subsampling the input (aggregating a sliding 2-D window of pixels using functions like max, avg, etc.). Pooling leads to smaller feature maps for the next layer to process, which makes them faster to process. LeNet5 was the first Convolutional Network which included convolutional layers, pooling, etc. Subsequently, many iterations of these networks have been proposed with various improvements. AlexNet , Inception , ResNet ,
have all made significant improvements over time on well-known image classification benchmarks using Convolutional layers.
\textbf{Depth-Separable Convolutional Layers}: In the convolution operation, each filter is used to convolve over the two spatial dimensions and the third channel dimension. As a result, the size of each filter is $s_x \times s_y \times$ \texttt{input\_channels}, where $s_x$ and $s_y$ are typically equal. This is done for each filter, resulting in the convolution operation happening both spatially in the $x$ and $y$ dimensions, and depth-wise in the $z$ dimension.
\begin{figure}[h]
 \centering
 \includegraphics[width=5cm]{images/depthwise.png}
 \caption{Depth-Separable Convolution. Source: .}
 \label{fig:depthwise}
\end{figure}
Depth-separable convolution breaks this into two steps (refer to Figure \ref{fig:depthwise}):
\begin{enumerate}
 \item Doing a point-wise convolution with $1 \times 1$ filters, such that the resulting feature map now has a depth of \texttt{output\_channels}.
 \item Doing a spatial convolution with $s_x \times s_y$ filters in the $x$ and $y$ dimensions.
\end{enumerate}
These two operations stacked together (without any intermediate non-linear activation) result in an output of the same shape as a regular convolution, with far fewer parameters ($(1\times1\times$\texttt{input\_channels}$\times$ \texttt{output\_channels}$) + (s_x \times s_y \times$ \texttt{output\_channels}$)$, versus $s_x\times s_y\times $ \texttt{input\_channels} $\times$ \texttt{output\_channels} for the regular convolution). Similarly, there is an order of magnitude less computation, since the expensive $s_x \times s_y$ spatial convolution no longer runs across every input--output channel pair (for more detailed calculations, refer to ).
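The parameter savings above are easy to verify numerically; a small sketch (the layer sizes are chosen arbitrarily for illustration, and biases are ignored):

```python
def conv_params(s, c_in, c_out):
    """Parameters in a regular s x s convolution (biases ignored)."""
    return s * s * c_in * c_out

def separable_conv_params(s, c_in, c_out):
    """Parameters with the factorization above: a 1x1 point-wise
    convolution to c_out channels, followed by an s x s spatial
    convolution applied per channel (biases ignored)."""
    pointwise = 1 * 1 * c_in * c_out
    spatial = s * s * c_out
    return pointwise + spatial

regular = conv_params(3, 64, 128)              # 73728 parameters
separable = separable_conv_params(3, 64, 128)  # 9344 parameters
ratio = regular / separable                    # roughly 7.9x fewer
```

The resulting $\approx$8$\times$ reduction for this layer is in line with the roughly $7$--$10\times$ savings reported for MobileNet-style architectures.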
The Xception model architecture demonstrated that using depth-wise separable convolutions in the Inception architecture allowed the model to converge in fewer steps and reach a higher accuracy on the ImageNet dataset, while keeping the number of parameters the same. 
The MobileNet model architecture , which was designed for mobile and embedded devices, also uses depth-wise separable layers instead of regular convolutional layers. This helps reduce the number of parameters as well as the number of multiply-add operations by $7-10\times$, and allows deployment on mobile devices for Computer Vision tasks. Users can expect a latency between 10--100 ms, depending on the model. MobileNet also provides a knob, via the depth multiplier, for scaling the network, allowing the user to trade off between accuracy and latency.
\hfill
\\
\textbf{Attention Mechanism \& Transformer Family}:
One of the issues plaguing classical Sequence-to-Sequence (Seq2Seq) models for solving tasks such as Machine Translation (MT) was the information bottleneck. Seq2Seq models typically have one or more encoder layers, which encode the given input sequence ($\mathbf{x} = (x_1, x_2, ..., x_{T})$) into a fixed-length vector(s) (also referred to as the context, $\mathbf{c}$), and one or more decoder layers, which generate another sequence using this context.
In the case of MT, the input sequence can be a sentence in the source language, and the output sequence can be the sentence in the target language.
However, in classical Seq2Seq models , the decoder layers could only see the hidden state of the final encoder step ($c = h_{T}$). This is a \emph{bottleneck}, because the encoder block has to squash all the information about the sequence into a single context vector for all the decoding steps, and the decoder block has to somehow infer the entire encoded sequence from it (refer to Figure \ref{fig:information-bottleneck}). It is possible to increase the size of the context vector, but that would lead to an increase in the hidden state of all the intermediate steps, and make the model larger and slower. 
\begin{figure}
\centering
\begin{minipage}{.55\textwidth}
 \centering
 \includegraphics[width=1.0\linewidth]{images/information-bottleneck.png}
 \captionof{figure}{Information Bottleneck in a Seq2Seq model for translating from English to Hindi. The context vector $c$ that the decoder has access to is fixed, and is typically the last hidden state ($h_T$).}
 \label{fig:information-bottleneck}
\end{minipage}
\qquad
\begin{minipage}{.38\textwidth}
 \centering
 \includegraphics[width=.4\linewidth]{images/attention-bahdanau.png}
 \captionof{figure}{Attention module learning a weighted context vector for each output token from the hidden states. Source: .}
 \label{fig:bahdanau-attn}
\end{minipage}
\end{figure}
The Attention mechanism was introduced in Bahdanau et al. to create a custom context vector for each output token, by allowing all hidden states to be visible to the decoder and then creating a weighted context vector based on the output token's alignment with each input token. Essentially, the new weighted context vector is $c_i = \sum_{j=1}^{T} \alpha_{ij} h_j$, where $\alpha_{ij}$ is the learned alignment (attention weight) between the decoder hidden state $s_{i-1}$ and the hidden state of the $j$-th input token ($h_j$). $\alpha_{ij}$ can be viewed as how much attention the $j$-th input token should receive when generating the $i$-th output token. This model is generalized in some cases by having explicit Query ($Q$), Key ($K$), and Value ($V$) vectors, where we seek to learn the attention weight distribution ($\mathbf{\alpha}$) between $Q$ and $K$, and use it to compute the weighted context vector ($\mathbf{c}$) over $V$. In the above encoder-decoder architecture, $Q$ is the decoder hidden state $s_{i-1}$, and $K = V$ is the encoder hidden state $h_j$. Attention has been used to solve a variety of NLU tasks (MT, Question Answering, Text Classification, Sentiment Analysis), as well as Vision and Multi-Modal tasks . We refer the reader to for further details on the taxonomy of attention models. 
\begin{figure}[h]
 \centering
 \includegraphics[width=7.5cm]{images/transformer-encoder-decoder.png}
 \caption{Transformer with its Encoder and Decoder blocks. Source: .}
 \label{fig:transformer-encoder-decoder}
\end{figure}
The Transformer architecture , proposed in 2017, introduced Self-Attention layers for both the Encoder and the Decoder, and demonstrated that Attention layers could replace traditional RNN-based Seq2Seq models. In the Self-Attention layer, the query, key, and value vectors are all derived from the same sequence by using different projection matrices. 
Self-Attention also allows parallelizing the process of deriving relationships between the tokens in the input sequences. RNNs inherently force the process to occur one step at a time, i.e., learning long-range dependencies is $O(n)$, where $n$ is the number of tokens. With Self-Attention, all tokens are processed together and pairwise relationships can be learnt in $O(1)$ .
This makes it easier to leverage optimized training devices like GPUs and TPUs. The authors reported up to $300\times$ fewer training FLOPs required to converge to a similar quality when compared to other recurrent and convolutional models. Tay et al. discuss the computation and memory efficiency of several Transformer variants and their underlying self-attention mechanisms in detail.
As introduced earlier, the BERT model architecture beat the state-of-the-art in several NLU benchmarks. BERT is a stack of Transformer encoder layers that are pre-trained using a bi-directional masked language model training objective. It can also be used as a general-purpose encoder for other tasks. Other similar models like the GPT family have also been used for solving many NLU tasks.
\textbf{Random Projection Layers \& Models}:
Pre-trained token representations such as word2vec , GloVe , etc. are common for NLU tasks. However, since they require a $d$-dimensional vector for storing each token, the total size consumed by the embedding table quickly grows very large if the vocabulary size $V$ is substantial ($O(V \cdot d)$). 
If model size is a constraint for deployment, we can either rely on compression techniques (as illustrated earlier) to help with embedding table compression, or evaluate layers and models that can work around the need for embedding tables altogether. Random Projection based methods are one such family of models. They propose replacing the embedding table and lookup by mapping the input feature $x$ (unicode token / word token, etc.) into a lower-dimensional space. This is done using the random projection operator $\mathbb{P}$, such that $\mathbb{P}(x) \in \{0,1\}^{T \cdot r}$, which can be decomposed into $T$ individual projection operations, each generating an $r$-bit representation ($\mathbb{P}(x) = [\mathbb{P}_1(x), ..., \mathbb{P}_T(x)]$, where $\mathbb{P}_i(x) \in \{0,1\}^{r}$).
$T$ and $r$ can be chosen manually.
Each random projection operation $\mathbb{P}_i$ is implemented using Locality Sensitive Hashing (LSH) , each using a different hash function (via different seeds). For theoretical guarantees about the Random Projection operation, refer to , which demonstrates that the operation preserves the similarity between two points in the lower-dimensional space it maps them to (this is crucial for the model to learn the semantics of the inputs). If this relationship holds in the lower-dimensional space, the projection operation can be used to learn discriminative features for the given input. The core benefit of the projection operation, when compared to embedding tables, is the $O(T)$ space required instead of $O(V \cdot d)$ ($T$ seeds are required for the $T$ hash functions). On the other hand, computing the random projection is $O(T)$, versus $O(1)$ for an embedding table lookup. Hence, the projection layer is clearly useful when model size is the primary focus of optimization.
Across the various papers in the projection model family, there are subtle differences in implementation: computing complex features before versus after the projection operation, generating a ternary representation instead of a binary one, applying complex layers and networks such as Attention or QRNN on top, etc. 
\begin{figure}
 \centering
 \subfloat[\centering PRADO Model. Source: .]{{\includegraphics[width=4cm]{images/prado.png} }}
 \subfloat[\centering PQRNN Model. Source: ]{{\includegraphics[width=4cm]{images/pqrnn.png} }}
 \subfloat[\centering ProFormer Model. Source: .]{{\includegraphics[width=3.5cm]{images/proformer.png} }}
 \caption{Collection of notable Random-Projection based models.}
 \label{fig:projection}
\end{figure}
Some of the Projection-based models (refer to Figure \ref{fig:projection}) have demonstrated impressive results on NLU tasks.
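As a toy illustration of the projection operation described above, here is a simhash-style LSH sketch in pure Python; the feature set, $T$, and $r$ are illustrative choices, and this is not the exact implementation used by any of the models discussed here:

```python
import hashlib

def _signed_hash(bit_index, feature):
    # Deterministic hash of (bit_index, feature), mapped to +1 / -1.
    digest = hashlib.md5(f"{bit_index}:{feature}".encode()).digest()
    return 1 if digest[0] % 2 == 0 else -1

def project(features, T=4, r=8):
    """Map a set of string features to a T*r-bit vector.

    Each output bit is the sign of a sum of per-feature hashes
    (simhash-style LSH), so inputs with overlapping features tend
    to agree on more bits than unrelated inputs.
    """
    return [
        1 if sum(_signed_hash(b, f) for f in features) >= 0 else 0
        for b in range(T * r)
    ]

def hamming(p, q):
    return sum(x != y for x, y in zip(p, q))
```

Only the hash seeds (here, the bit indices) need to be stored, which is the $O(T)$ space cost discussed above, paid for with $O(T)$ computation per lookup instead of an $O(1)$ table read.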
PRADO generates n-gram features from the projected inputs, followed by a Multi-Headed Attention layer on top. It achieved accuracies comparable to standard LSTM models while being $\sim100\times$ smaller, and taking 20--40 ms for inference on a Nexus 5X device. PQRNN is another Projection-based model, which additionally uses a fast RNN implementation (QRNN) on top of the projected features. The authors report outperforming LSTMs while being $140\times$ smaller, and achieving $97.1\%$ of the quality of a BERT-like model while being $350\times$ smaller.
ProFormer introduces a Local Projected Attention (LPA) layer, which combines the projection operation with localized attention. It reaches $\approx$ 97.2\% of BERT-base's performance while occupying only 13\% of BERT-base's memory: ProFormer has 14.4 million parameters, compared to BERT-base's 110 million.
In order to be able to train and run inference efficiently, there has to be a robust software and hardware infrastructure foundation. In this section, we go over both these aspects. Refer to Figure \ref{fig:infrastructure} for a mental model of the software and hardware infrastructure, and how they interact with each other.
We provide a non-exhaustive but representative survey of the leading software and hardware infrastructure components that are critical to model efficiency.
\begin{figure}[h]
 \centering
 \includegraphics[width=9cm]{images/infrastructure-block-diag.png}
 \caption{A visualization of the hardware and software infrastructure with emphasis on efficiency. On the left-hand side is the model-training phase, which generates a trained model checkpoint. This model is then used on the inference side, which could either be server-side (conventional machines in the cloud or on-prem), or on-device (mobile phones, IoT, edge devices, etc.).}
 \label{fig:infrastructure}
\end{figure}
Tensorflow (TF) is a popular machine learning framework that has been used in production by many large enterprises. It has some of the most extensive software support for model efficiency.
\textbf{Tensorflow Lite for On-Device Usecases}: Tensorflow Lite (TFLite) is a collection of tools and libraries designed for inference in low-resource environments.
At a high level, we can break down the TFLite project into two core parts:
\begin{itemize}
 \item \textbf{Interpreter and Op Kernels}: TFLite provides an interpreter for running specialized TFLite models, along with implementations of common neural net operations (Fully Connected, Convolution, Max Pooling, ReLU, Softmax, etc.), each of which is referred to as an \emph{Op}. The implementation of such an operation is known as an \emph{Op Kernel}. Both the interpreter and Op Kernels are primarily optimized for inference on ARM-based processors as of the time of writing this paper. They can also leverage smartphone DSPs such as Qualcomm's Hexagon for faster execution. The interpreter also allows the user to set multiple threads for execution.
 \item \textbf{Converter}: The TFLite converter\ccite{TensorflowAuthors2021c}, as the name suggests, is used for converting the given TF model into a single flatbuffer\ccite{FlatBufferAuthors2021} file for inference by the interpreter. Apart from the conversion itself, it handles a lot of internal details like getting a graph ready for quantized inference, fusing operations\ccite{TensorflowAuthors2021d}, adding other metadata to the model, etc. With respect to quantization, it also allows post-training quantization as mentioned earlier\ccite{TensorflowAuthors2021f}, with an optional representative dataset to improve accuracy.
\end{itemize}
\textbf{Other Tools for On-Device Inference}: TF Micro \ccite{TensorflowAuthors2021e} goes further, and consists of a slimmed-down interpreter and a smaller set of ops for inference on very low-resource microcontrollers. The TF Model Optimization toolkit is a Tensorflow library for applying common compression techniques like quantization, pruning, clustering, etc. TensorflowJS (TF.JS)\ccite{TensorflowAuthors2021h} is a library within the TF ecosystem that can be used to train and run neural networks within the browser or using Node.js .
These models can also be accelerated through GPUs via the WebGL interface . It supports both importing models trained in TF, as well as creating new models from scratch in TF.JS.
\textbf{XLA for Server-Side Acceleration}: Typically, a TF model graph is executed by TF's executor process, which uses standard pre-built kernels for running it on CPU, GPU, etc. XLA is a graph compiler that can optimize linear algebra computations in a model by generating new kernels that are customized for the model graph in question. For example, certain operations which can be fused together are combined into a single composite op. This avoids having to do multiple costly writes to RAM, when the operands can be operated on directly while they are still in cheaper caches. Kanwar et al. report a 7$\times$ increase in training throughput, and a 5$\times$ increase in the maximum batch size that can be used for BERT training. This allows training a BERT model for \$32 on Google Cloud.
PyTorch is another popular machine-learning platform actively used by both academia and industry. It is often compared with Tensorflow in terms of usability and features.
\textbf{On-Device Usecases}: PyTorch also has a light-weight interpreter that enables running PyTorch models on Mobile , with native runtimes for Android and iOS. This is analogous to the TFLite interpreter and runtime introduced earlier.
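The post-training quantization offered by both ecosystems ultimately rests on an affine mapping from floats to 8-bit integers; below is a framework-free sketch of that mapping (the function names and the example range are illustrative, not either framework's API):

```python
def quantization_params(x_min, x_max, num_bits=8):
    """Affine (asymmetric) quantization parameters for an observed
    float range [x_min, x_max], as used in post-training quantization."""
    qmax = 2 ** num_bits - 1
    x_min, x_max = min(x_min, 0.0), max(x_max, 0.0)  # range must include 0
    scale = (x_max - x_min) / qmax
    zero_point = round(-x_min / scale)
    return scale, zero_point

def quantize(xs, scale, zero_point, num_bits=8):
    qmax = 2 ** num_bits - 1
    return [min(qmax, max(0, round(x / scale) + zero_point)) for x in xs]

def dequantize(qs, scale, zero_point):
    return [(q - zero_point) * scale for q in qs]

# Round-tripping through uint8 loses at most ~scale/2 per value.
scale, zp = quantization_params(-1.0, 2.0)
restored = dequantize(quantize([-1.0, 0.0, 0.5, 2.0], scale, zp), scale, zp)
```

In practice the frameworks add refinements on top of this idea, such as per-channel scales and the representative dataset mentioned above for calibrating the observed range.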
Similar to TFLite, PyTorch offers post-training quantization , and other graph optimization steps such as constant folding, fusing certain operations together, and using the channels-last (NHWC) format for optimizing convolutional layers\ccite{PyTorchAuthors2021c}.
\textbf{General Model Optimization}: PyTorch also offers the Just-in-Time (JIT) compilation facility , which might seem similar to Tensorflow's XLA, but is actually a mechanism for generating a serializable intermediate representation (a high-level IR, per ) of the model from the code in TorchScript , a subset of Python. TorchScript adds constraints on the code that it can convert, such as type-checks, which allow it to sidestep some pitfalls of typical Python programming while remaining Python-compatible. It creates a bridge from the flexible PyTorch code used for research and development to a representation that can be deployed for inference in production. For example, exporting to TorchScript is a requirement for running on mobile devices . This representation is analogous to the static inference-mode graphs generated by TensorFlow. The alternatives to XLA in the PyTorch world seem to be the Glow and TensorComprehension compilers. They help in generating the lower-level intermediate representation that is derived from the higher-level IR (TorchScript, TF Graph). These low-level deep learning compilers are compared in detail in .
PyTorch offers a model tuning guide , which details various options that ML practitioners have at their disposal. Some of the core ideas in there are:
\begin{itemize}
 \item Turn on mixed-precision training when using NVIDIA GPUs. This is described further in detail in the GPU sub-section in 3.5.4.
 \item Fusion of pointwise operations (add, subtract, multiply, divide, etc.) using PyTorch JIT.
Though this should happen automatically, adding the \texttt{torch.jit.script} decorator to methods which are completely composed of pointwise operations can force the TorchScript compiler to fuse them.
    \item Enabling buffer checkpointing allows keeping the outputs of only certain layers in memory, and computing the rest during the backward pass. This specifically helps with cheap-to-compute layers with large outputs, like activations. The reduced memory usage can be exchanged for a larger batch size, which improves utilization of the training platform (CPU, GPU, TPU, etc.).
    \item Enabling device-specific optimizations, such as the cuDNN library, and Mixed Precision Training with NVIDIA GPUs (explained in the GPU subsection).
    \item Train with Distributed Data Parallel Training, which is suitable when there is a large amount of data and multiple GPUs are available for training. Each GPU gets its own copy of the model and optimizer, and operates on its own subset of the data. Each replica's gradients are periodically accumulated and then averaged.
\end{itemize}
We can further extract efficiency by optimizing for the hardware that neural networks run on. A prime deployment target is ARM's Cortex family of processors. Cortex supports SIMD (Single-Instruction Multiple Data) instructions via the Neon architecture extension.
SIMD instructions are useful for operating on registers holding vectors of data, which is essential for speeding up linear algebra operations through vectorization. The QNNPACK and XNNPACK libraries are optimized for ARM Neon for mobile and embedded devices, and for x86 SSE2, AVX architectures, etc. QNNPACK supports several common ops in quantized inference mode for PyTorch. XNNPACK supports 32-bit floating point models and 16-bit floating point for TFLite. If a certain operation isn’t supported in XNNPACK, it falls back to the default implementation in TFLite.
Similarly, there are other low-level libraries like Accelerate for iOS , and NNAPI for Android that try to abstract away the hardware-level acceleration decision from higher-level ML frameworks.
\textbf{GPU}: Graphics Processing Units (GPUs) were originally designed for accelerating computer graphics, but began to be used for general-purpose usecases with the availability of the CUDA library in 2007, and libraries like cuBLAS for speeding up linear algebra operations. In 2009, Raina et al. demonstrated that GPUs can be used to accelerate deep learning models. In 2012, the AlexNet model's substantial improvement over the next entrant in the ImageNet competition further standardized the use of GPUs for deep learning models.
Since then, Nvidia has released several iterations of its GPU microarchitectures with an increasing focus on deep learning performance. It has also introduced Tensor Cores, dedicated execution units in its GPUs that are specialized for deep learning applications. Tensor Cores support training and inference in a range of precisions (fp32, TensorFloat32, fp16, bfloat16, int8, int4). As demonstrated earlier for quantization, switching to a lower precision is not always a significant trade-off, since the difference in model quality might often be minimal.
\begin{figure}[h]
    \centering
    \includegraphics[width=8.5cm]{images/multiply-accumulate-reduced-precision.jpg}
    \caption{Reduced Precision Multiply-Accumulate (MAC) operation: An illustration of the $\mathbf{A} = (\mathbf{B} \times \mathbf{C}) + \mathbf{D}$ operation. $\mathbf{B}$ and $\mathbf{C}$ are in a reduced precision (fp16, bfloat16, TensorFloat32, etc.), while $\mathbf{A}$ and $\mathbf{D}$ are in fp32. The speedup comes from doing the expensive matrix multiplication with a reduced precision format.}
    \label{fig:mac-reduced-precision}
\end{figure}
Tensor Cores optimize the standard Multiply-and-Accumulate (MAC) operation , $\mathbf{A} = (\mathbf{B} \times \mathbf{C}) + \mathbf{D}$, where $\mathbf{B}$ and $\mathbf{C}$ are in a reduced precision (fp16, bfloat16, TensorFloat32), while $\mathbf{A}$ and $\mathbf{D}$ are in fp32. The core speedup comes from doing the expensive matrix multiplication in a lower precision. The result of the multiplication is in fp32, which can be relatively cheaply added to $\mathbf{D}$. When training with reduced precision, NVIDIA reports between 1$\times$ and 15$\times$ training speedup, depending on the model architecture and the GPU chosen .
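To make the reduced-precision MAC concrete, here is a small pure-Python sketch (illustrative helper names, not a real Tensor Core API) that emulates bfloat16 by truncating the low 16 bits of the fp32 bit pattern, and accumulates in full precision:

```python
import struct

def to_bfloat16(x):
    # Emulate bfloat16 by truncating the low 16 bits of the fp32 bit
    # pattern (a sketch; real hardware uses round-to-nearest).
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return struct.unpack('<f', struct.pack('<I', bits & 0xFFFF0000))[0]

def mac_reduced_precision(B, C, D):
    # A = (B x C) + D: the multiply operands are cast down to bfloat16,
    # while the accumulation into the fp32 addend D stays full precision.
    rows, inner, cols = len(B), len(C), len(C[0])
    A = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = D[i][j]  # the fp32 addend
            for k in range(inner):
                acc += to_bfloat16(B[i][k]) * to_bfloat16(C[k][j])
            A[i][j] = acc
    return A
```

Small integers are exact in bfloat16 (8-bit mantissa), so `mac_reduced_precision([[1.0, 2.0]], [[3.0], [4.0]], [[0.5]])` reproduces the exact result `[[11.5]]`; a value like 3.14159 is truncated to 3.140625, which is the kind of small quality loss that the mixed-precision speedups above trade against.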
Tensor Cores in NVidia's latest Ampere architecture GPUs also support faster inference with sparsity (specifically, structured sparsity in the ratio 2:4, where 2 elements out of every block of 4 elements are sparse) . They demonstrate a speedup of up to 1.5$\times$ in inference time, and up to 1.8$\times$ in individual layers. NVIDIA also offers the cuDNN library, which contains optimized versions of standard neural network operations such as fully-connected, convolution, batch-norm, activation, etc.
\begin{figure}[h]
    \centering
    \includegraphics[width=7cm]{images/data-types.jpg}
    \caption{Common floating point formats used in Training \& Inference: \texttt{fp32} is the standard 32-bit floating point number from the IEEE-754 standard . One bit is allocated for storing the sign. The exponent controls the range of the floating point value that can be expressed with that format, and the mantissa controls the precision. Note that \texttt{fp16} reduces the precision as well as the range. The \texttt{bfloat16} format is a reasonable compromise because it keeps the same range as \texttt{fp32} while trading off precision to take up a total of 16 bits. NVidia GPUs also support the TensorFloat32 format, which allocates 3 more bits to the mantissa than \texttt{bfloat16} to achieve better precision. However, it takes up a total of 19 bits, which does not make it a trivially portable format.}
    \label{fig:data-types}
\end{figure}
\textbf{TPU}: TPUs are proprietary application-specific integrated circuits (ASICs) that Google has designed to accelerate deep learning applications with Tensorflow. Because they are not general-purpose devices, they need not cater to non-ML applications (as most GPUs have had to); hence they are finely tuned for parallelizing and accelerating linear algebra operations. The first iteration of the TPU was designed for inference with 8-bit integers, and had been used inside Google for a year prior to its public announcement in 2016 .
Subsequent iterations of the TPU architecture enabled both training and inference in floating point as well. Google also opened up access to these TPUs via its Google Cloud service in 2018 .
\begin{figure}
    \centering
    \subfloat[\centering A Systolic Array Cell implementing a Multiply-Accumulate (MAC) operation.]{{\includegraphics[width=4cm]{images/systolic-array-cell.jpg} }}
    \qquad
    \subfloat[\centering 4x4 Matrix Multiplication using Systolic Array]{{\includegraphics[width=5cm]{images/systolic-array-matmul.jpg} }}
    \caption{Systolic Arrays in TPUs: Figure (a) shows a Systolic Array cell implementing a MAC operation, where the variables $A$ and $B$ are received by the cell, and $C$ is the resident memory. $A$ is passed to the horizontally adjacent cell on the right, and $B$ is passed to the vertically adjacent cell below on the next clock tick. Figure (b) demonstrates how two 4$\times$4 matrices are multiplied using Systolic Arrays, which form a mesh of the cells constructed in Figure (a). The $i$-th row of the array is fed the $i$-th row of $A$ (preceded by $i - 1$ 0s, which act as a delay). Similarly, the $i$-th column of the array is fed the $i$-th column of $B$ (preceded by $i - 1$ 0s). The corresponding $a_{ij}$ and $b_{jk}$ are passed to the neighboring cells on the next clock tick.}
    \label{fig:systolic-array}
\end{figure}
The core architecture of the TPU chips leverages the Systolic Array design (refer to Figure \ref{fig:systolic-array}), where a large computation is split across a mesh-like topology in which each cell computes a partial result and passes it on to the next cell every clock step (in a rhythmic manner analogous to the systolic cardiac rhythm). Since there is no need to access registers for the intermediate results, the computation is not memory bound once the required data has been fetched.
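The skewed data flow of Figure \ref{fig:systolic-array} can be simulated in a few lines of Python (a simulation sketch, not TPU code): with the $i - 1$ leading zeros acting as delay, the operand pair $(a_{ik}, b_{kj})$ reaches cell $(i, j)$ on clock tick $i + j + k$.

```python
def systolic_matmul(A, B):
    # Output-stationary simulation: cell (i, j) accumulates c_ij in place.
    # The input skew means the pair (A[i][k], B[k][j]) arrives at
    # cell (i, j) on clock tick i + j + k.
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for tick in range(3 * n - 2):  # ticks needed to drain an n x n array
        for i in range(n):
            for j in range(n):
                k = tick - i - j
                if 0 <= k < n:  # an operand pair is at this cell now
                    C[i][j] += A[i][k] * B[k][j]
    return C
```

After $3n - 2$ ticks every cell holds its final $c_{ij}$, matching an ordinary matrix product.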
Each TPU chip has two Tensor Cores (not to be confused with NVidia's Tensor Cores), each of which has a mesh of systolic arrays. There are 4 inter-connected TPU chips on a single TPU board. To further scale training and inference, a larger number of TPU boards can be connected in a mesh topology to form a 'pod'. As per publicly released numbers, each TPU chip (v3) can achieve 420 teraflops, and a TPU pod can reach 100+ petaflops .
TPUs have been used inside Google for applications like training models for Google Search, training general-purpose BERT models , powering DeepMind's world-beating AlphaGo and AlphaZero models , and many other research applications . They have also set model training time records in the MLPerf benchmarks. Similar to the GPUs, TPUs support the bfloat16 data type, which is a reduced-precision alternative to training in full 32-bit floating point precision. XLA support allows transparently switching to bfloat16 without any model changes.
\textbf{EdgeTPU}: EdgeTPU is a custom ASIC chip designed by Google for running inference on edge devices, with low power requirements (4 Tera Ops / sec (TOPS) using 2 watts of power ). Like the TPU, it is specialized for accelerating linear algebra operations, but only for inference and with a much lower compute budget. It is further limited to only a subset of operations , and works only with int8 quantized Tensorflow Lite models. Google releases the EdgeTPU via the Coral platform in various form factors, ranging from a Raspberry Pi-like Dev Board to independent solderable modules . It has also been released with the Pixel 4 smartphones as the Pixel Neural Core , for accelerating on-device deep learning applications. The EdgeTPU chip itself is smaller than a US penny, making it amenable for deployment in many kinds of IoT devices.
\textbf{Jetson}: Jetson is a family of accelerators by Nvidia to enable deep learning applications for embedded and IoT devices.
It comprises the Nano, a low-powered "system on a module" (SoM) designed for lightweight deployments, as well as the more powerful Xavier and TX variants, which are based on the NVidia Volta and Pascal GPU architectures. As expected, the difference within the Jetson family is primarily the type and number of GPU cores on the accelerators. This makes the Nano suited for applications like home automation, and the rest for more compute-intensive applications like industrial robotics.
\begin{figure}[h]
    \centering
    \includegraphics[width=8.5cm]{images/quality-compression-trade-offs.jpg}
    \caption{Trade-off between Model Quality and Footprint: There exists a trade-off between model quality and model footprint. Model quality can be improved with techniques like distillation, data augmentation, hyper-parameter tuning, etc. Compression techniques can in turn help trade off some model quality for a better model footprint. Some or all of the improvement in footprint metrics can also be traded for better quality by simply adding more model capacity.}
    \label{fig:quality-compression-tradeoff}
\end{figure}
So far, we presented a broad set of tools and techniques in the Efficient Deep Learning landscape. In this section, we present a practical guide for practitioners, describing how these tools and techniques work with each other.
As mentioned earlier, what we seek are \emph{pareto-optimal} models, where we would like to achieve the best possible result in one dimension while holding the other dimensions constant. Typically, one of these dimensions is \textbf{Quality}, and the other is \textbf{Footprint}. Quality-related metrics could include Accuracy, F1, Precision, Recall, AUC, etc., while Footprint-related metrics can include Model Size, Latency, RAM, etc.
Naturally, there exists a trade-off between Quality and Footprint metrics. A higher-capacity / deeper model is more likely to achieve a better accuracy, but at the cost of model size, latency, etc. On the other hand, a lower-capacity / shallower model, while possibly suitable for deployment, is also likely to be worse in accuracy. As illustrated in Figure \ref{fig:quality-compression-tradeoff}, we can start from a model with better quality metrics and exchange some of that quality for a better footprint by naively compressing the model / reducing the model capacity (\textbf{Shrink}). Similarly, it is possible to naively improve quality by adding more capacity to the model (\textbf{Grow}). Growing can be addressed by the author of the model via appropriately increasing model capacity and tweaking other hyper-parameters to improve model quality. Shrinking can be achieved via Compression Techniques (Quantization, Pruning, Low-Rank Approximation, etc.), Efficient Layers \& Models, Architecture Search via Automation, etc. In addition, we can also \textbf{Improve} the quality metrics while keeping the footprint the same, through Learning Techniques (Distillation, Data Augmentation, Self-Supervised Tuning), Hyper-Parameter Tuning, etc.
(See Table \\ref{tab:grow-shrink-improve} for more examples.)\n\\begingroup\n\\renewcommand{\\arraystretch}{1.1} \n\\begin{table}[]\n\\small\n\\begin{tabular}{l|l|l}\n\\hline\n\\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}\\textbf{Grow}\\\\ (Model Capacity)\\end{tabular}} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}\\textbf{Shrink}\\\\ (Footprint)\\end{tabular}} &\n \\multicolumn{1}{c}{\\begin{tabular}[c]{@{}c@{}}\\textbf{Improve}\\\\ (Quality)\\end{tabular}} \\\\ \\hline\n\\multirow{4}{*}{\\begin{tabular}[c]{@{}l@{}}Add layers, width, etc.\\\\ either manually or \\\\ using width / depth / \\\\ compound scaling \\\\ multipliers\\end{tabular}} &\n \\begin{tabular}[c]{@{}l@{}}Reduce layers, width, etc. \\\\ either manually or using \\\\ width / depth / compound \\\\ scaling multipliers\\end{tabular} &\n \\begin{tabular}[c]{@{}l@{}}Manual Tuning (Architecture / \\\\ Hyper-Parameters / \\\\ Features, etc.)\\end{tabular} \\\\ \\cline{2-3} \n &\n \\begin{tabular}[c]{@{}l@{}}\\textbf{Compression Techniques}: \\\\ Quantization, Pruning, \\\\ Low-Rank Factorization, etc.\\end{tabular} &\n \\begin{tabular}[c]{@{}l@{}}\\textbf{Learning Techniques}:\\\\ Data-Augmentation, Distillation, \\\\ Unsupervised Learning, etc.\\end{tabular} \\\\ \\cline{2-3} \n &\n \\begin{tabular}[c]{@{}l@{}}\\textbf{Automation}:\\\\ Hyper-Param Optimization, \\\\ Architecture Search, etc.\\end{tabular} &\n \\begin{tabular}[c]{@{}l@{}}\\textbf{Automation}:\\\\ Hyper-Param Optimization, \\\\ Architecture Search, etc.\\end{tabular} \\\\ \\cline{2-3} \n &\n \\begin{tabular}[c]{@{}l@{}}\\textbf{Efficient Layers \\& Models}:\\\\ Projection, PQRNN, (NLP),\\\\ Separable Convolution (Vision), \\\\ etc.\\end{tabular} &\n \\begin{tabular}[c]{@{}l@{}}\\textbf{Efficient Layers \\& Models}:\\\\ Transformers (NLP), \\\\ Vi-T (Vision), etc.\\end{tabular} \\\\ \\hline\n\\end{tabular}\n\\caption{Examples of techniques to use in the Grow, Shrink, and Improve 
phases.}
\label{tab:grow-shrink-improve}
\end{table}
\endgroup
Combining these three phases, we propose two strategies towards achieving pareto-optimal models:
\begin{enumerate}
    \item \textbf{Shrink-and-Improve for Footprint-Sensitive Models}: If, as a practitioner, you want to reduce your footprint while keeping the quality the same, this could be a useful strategy for on-device deployments and server-side model optimization. Shrinking should ideally be minimally lossy in terms of quality (which can be achieved via learned compression techniques, architecture search, etc.), but in some cases even naively reduced capacity can be compensated for by the Improve phase. It is also possible to do the Improve phase before the Shrink phase.
    \item \textbf{Grow-Improve-and-Shrink for Quality-Sensitive Models}: When you want to deploy models that have better quality while keeping the same footprint, it might make sense to follow this strategy. Here, capacity is first added by growing the model as illustrated earlier. The model is then improved via learning techniques, automation, etc., and then shrunk back either naively or in a learned manner. Alternatively, the model could be shrunk in a learned manner directly after growing it.
\end{enumerate}
We consider both these strategies as ways of going from a potentially non-pareto-optimal model to one that lies on the pareto-frontier with the trade-off that is appropriate for the user. Each efficiency technique individually helps move us closer to that target model.
In order to demonstrate what we proposed above, we undertook the exercise of making a given Deep Learning model efficient. Concretely, we had the following goals with this exercise:
\begin{enumerate}
    \item Achieve a new pareto-frontier using the efficiency techniques, thereby demonstrating that these techniques can be used by ML practitioners in the real world, in isolation as well as in combination with other techniques.
    \item With various combinations of efficiency techniques and model scaling, demonstrate the tradeoffs for both the `Shrink-and-Improve' and `Grow-Improve-and-Shrink' strategies for discovering and traversing the pareto-frontier.
In other words, provide empirical evidence that it is possible for practitioners to either reduce model capacity to bring down the footprint (shrink) and then recover the model quality that was traded off (improve), or increase the model capacity to improve quality (grow) followed by model compression (shrink) to improve the model footprint.
\end{enumerate}
We picked the problem of classifying images in the CIFAR-10 dataset on compute-constrained devices such as smartphones, IoT devices, etc. We designed a deep convolutional architecture where we could scale the model capacity up or down by increasing or decreasing the `width multiplier' ($w$) value. In the implementation, $w$ scales the number of filters for the convolutional layers (except the first two). Hence, using different values of $w$ in $\{0.05, 0.1, 0.25, 0.5, 0.75, 1.0\}$, we obtain a family of models with different quality and footprint tradeoffs. We trained these models with some manual tuning to achieve a baseline of quality v/s footprint metrics. In this case, we measured quality through accuracy, and footprint through the number of parameters, model size, and latency. In terms of techniques, we used Quantization for Shrinking, and Data Augmentation and Distillation for Improving. Many other techniques could be used to further drive the point home (Automation such as Hyper-Parameter Tuning, Efficient Layers such as Separable Convolutions), but were skipped to keep the interpretation of the results simpler. We used the Tensorflow-backed Keras APIs for training, and the TFLite framework for inference. The latencies were measured on three kinds of devices, low-end (Oppo A5), mid-tier (Pixel 3XL), and high-end (Galaxy S10), in order of their increasing CPU compute power. The model size numbers reported are the sizes of the generated TFLite models, and the latency numbers are the average single-threaded CPU latency after warmup on the target device.
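The width-multiplier mechanism can be sketched as follows (the base filter counts and the floor of 8 here are illustrative assumptions, not the exact architecture used in the experiments):

```python
def scaled_filters(base_filters, w, minimum=8):
    # Scale a conv layer's filter count by the width multiplier w,
    # with a floor so tiny multipliers still leave a usable layer.
    return max(minimum, int(base_filters * w))

# Hypothetical backbone: the first two conv layers are left unscaled,
# mirroring the setup described above.
base = [32, 32, 64, 128, 256]

def widths(w):
    return base[:2] + [scaled_filters(f, w) for f in base[2:]]
```

For example, `widths(0.25)` keeps the first two layers at 32 filters and shrinks the rest to `[16, 32, 64]`, yielding a family of models whose footprint falls roughly quadratically with $w$ (since both input and output channels of the inner layers shrink).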
The code for the experiments is available via an IPython notebook \href{https://github.com/reddragon/efficient-dl-survey-paper/blob/main/CIFAR_10_End_to_End.ipynb}{here}.
Table \ref{tab:float-model-cifar10} compiles the results for the 6 width multipliers in increasing order, ranging from $0.05$ to $1.0$. Between the smallest and the largest models, the number of params grows by $\approx 91.4 \times$, and the model size grows by $\approx 80.2 \times$. The latency numbers also grow by between $3.5 - 10 \times$, based on the device. Within the same row, footprint metrics do not change, since we are not changing the model architecture. In Table \ref{tab:float-model-cifar10} we work purely with techniques that improve the model quality (Data Augmentation and Distillation). Table \ref{tab:quant-model-cifar10} reports the numbers for the Quantized versions of the corresponding models in Table \ref{tab:float-model-cifar10}. We use Quantization for the Shrink phase, to reduce model size by $\approx 4 \times$ and the average latency by $1.5 - 2.65 \times$.
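The reported sizes line up with back-of-the-envelope arithmetic: an fp32 weight takes 4 bytes and an int8 weight takes 1, so for the largest model ($\approx$1,343K params) we expect roughly 5246 KB in float and 1312 KB quantized, close to the 5251.34 KB and 1334.41 KB in the tables (the gap is graph/op metadata and the few tensors TFLite keeps in float; we assume the tables report sizes with 1 KB = 1024 bytes):

```python
def approx_size_kb(num_params, bytes_per_param):
    # Back-of-the-envelope TFLite model size: parameters only,
    # ignoring graph/op metadata. Assumes 1 KB = 1024 bytes.
    return num_params * bytes_per_param / 1024.0

float_kb = approx_size_kb(1_343_010, 4)  # fp32: ~5246 KB (table: 5251.34)
int8_kb = approx_size_kb(1_343_010, 1)   # int8: ~1312 KB (table: 1334.41)
```

This also explains why the observed size reduction is slightly under the ideal $4\times$: the metadata and residual float tensors do not shrink with quantization.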
Figures \\ref{fig:params-size} and \\ref{fig:latency-plots} plot the notable results from Tables \\ref{tab:float-model-cifar10} and \\ref{tab:quant-model-cifar10}.\n\\begingroup\n\\begin{table}[]\n\\small\n\\centering\n\\begin{tabular}{ccccccccc}\n\\midrule\n\\multicolumn{1}{c}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Width \\\\ Multiplier\\end{tabular}}} &\n \\multicolumn{1}{c}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}\\# Params\\\\ (K)\\end{tabular}}} &\n \\multicolumn{1}{c}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Model \\\\ Size\\\\ (KB)\\end{tabular}}} &\n \\multicolumn{3}{c}{\\begin{tabular}[c]{@{}c@{}}Accuracy\\\\ (\\%)\\end{tabular}} &\n \\multicolumn{3}{c}{Average Latency (ms)} \\\\\n \\cmidrule(r){4-6}\n \\cmidrule(r){7-9}\n\\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{Baseline} &\n \\multicolumn{1}{c}{Augmentation} &\n \\multicolumn{1}{c}{\\begin{tabular}[c]{@{}c@{}}Augmentation \\\\ + Distillation\\end{tabular}} &\n \\multicolumn{1}{c}{\\begin{tabular}[c]{@{}c@{}}Oppo \\\\ A5\\end{tabular}} &\n \\multicolumn{1}{c}{\\begin{tabular}[c]{@{}c@{}}Pixel \\\\ 3XL\\end{tabular}} &\n \\multicolumn{1}{c}{\\begin{tabular}[c]{@{}c@{}}Galaxy \\\\ S10\\end{tabular}} \\\\\n\\midrule \\\\\n0.05 & 14.7 & 65.45 & 70.17 & 71.71 & 72.89 & 6.72 & 0.6 & 0.78 \\\\\n0.1 & 26 & 109.61 & 75.93 & 78.22 & 78.93 & 6.85 & 1.7 & 0.85 \\\\\n0.25 & 98.57 & 392.49 & 80.6 & 84.14 & 84.51 & 8.15 & 2.02 & 0.93 \\\\\n0.5 & 350.05 & 1374.11 & 83.04 & 87.47 & 88.03 & 11.46 & 2.8 & 1.33 \\\\\n0.75 & 764.87 & 2993.71 & 83.79 & 89.06 & 89.51 & 16.7 & 4.09 & 1.92 \\\\\n1 & 1343.01 & 5251.34 & 84.42 & 89.41 & 89.92 & 24 & 5.99 & 2.68 \\\\\n\\midrule\n\\end{tabular}\n\\caption{Quality and Footprint metrics for Floating-Point models for the CIFAR-10 dataset.}\n\\label{tab:float-model-cifar10}\n\\end{table}\n\\endgroup\n\\begingroup\n\\setlength{\\tabcolsep}{5pt} \n\\renewcommand{\\arraystretch}{0.9} 
\n\\begin{table}[]\n\\small\n\\centering\n\\begin{tabular}{ccccccccc}\n\\midrule\n\\multicolumn{1}{c}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Width \\\\ Multiplier\\end{tabular}}} &\n \\multicolumn{1}{c}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}\\# Params\\\\ (K)\\end{tabular}}} &\n \\multicolumn{1}{c}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Model \\\\ Size\\\\ (KB)\\end{tabular}}} &\n \\multicolumn{3}{c}{\\begin{tabular}[c]{@{}c@{}}Accuracy\\\\ (\\%)\\end{tabular}} &\n \\multicolumn{3}{c}{Average Latency (ms)} \\\\\n \\cmidrule(r){4-6}\n \\cmidrule(r){7-9}\n\\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{Baseline} &\n \\multicolumn{1}{c}{Augmentation} &\n \\multicolumn{1}{c}{\\begin{tabular}[c]{@{}c@{}}Augmentation \\\\ + Distillation\\end{tabular}} &\n \\multicolumn{1}{c}{\\begin{tabular}[c]{@{}c@{}}Oppo \\\\ A5\\end{tabular}} &\n \\multicolumn{1}{c}{\\begin{tabular}[c]{@{}c@{}}Pixel \\\\ 3XL\\end{tabular}} &\n \\multicolumn{1}{c}{\\begin{tabular}[c]{@{}c@{}}Galaxy \\\\ S10\\end{tabular}} \\\\\n\\midrule \\\\\n0.05 & 14.7 & 26.87 & 69.9 & 71.72 & 72.7 & 4.06 & 0.49 & 0.43 \\\\\n0.1 & 26 & 38.55 & 75.98 & 78.19 & 78.55 & 4.5 & 1.25 & 0.47 \\\\\n0.25 & 98.57 & 111 & 80.76 & 83.98 & 84.18 & 4.52 & 1.31 & 0.48 \\\\\n0.5 & 350.05 & 359.31 & 83 & 87.32 & 87.86 & 6.32 & 1.73 & 0.58 \\\\\n0.75 & 764.87 & 767.09 & 83.6 & 88.57 & 89.29 & 8.53 & 2.36 & 0.77 \\\\\n1 & 1343.01 & 1334.41 & 84.52 & 89.28 & 89.91 & 11.73 & 3.27 & 1.01 \\\\\n\\midrule\n\\end{tabular}\n\\caption{Quality and Footprint metrics for \\emph{Quantized} models for the CIFAR-10 dataset. 
Each model is the quantized equivalent of the corresponding model in Table \\ref{tab:float-model-cifar10}.}\n\\label{tab:quant-model-cifar10}\n\\end{table}\n\\endgroup\n\\begin{figure}\n \\centering\n \\subfloat[\\centering Number of Params v/s Accuracy]{{\\includegraphics[width=4.5cm]{images/plot-num-params.pdf} }}\n \\qquad\n \\subfloat[\\centering Model Size v/s Accuracy]{{\\includegraphics[width=4.55cm]{images/plot-size.pdf} }}\n \\caption{Change in Accuracy with respect to Number of Params and Model Size. Each point on a curve is a model from Table \\ref{tab:float-model-cifar10} in figure (a) and from Table \\ref{tab:quant-model-cifar10} in figure (b).}\n \\label{fig:params-size}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\subfloat[\\centering Low-End Device Latency]{{\\includegraphics[width=4.5cm]{images/plot-latency-low.pdf} }}\n \\subfloat[\\centering Mid-Tier Device Latency]{{\\includegraphics[width=4.5cm]{images/plot-latency-mid.pdf} }}\n \\subfloat[\\centering High-End Device Latency]{{\\includegraphics[width=4.5cm]{images/plot-latency-high.pdf} }}\n \\caption{Average latency of models on different devices (low-, mid-, and high-end smartphones). 
The orange curve denotes models that were quantized, in addition to being trained with distillation and data augmentation.}
    \label{fig:latency-plots}
\end{figure}
Let us try to interpret the above data to validate whether our strategies can be used practically.
\textbf{Shrink-and-Improve for Footprint-Sensitive Models}: Refer to Table \ref{tab:float-model-cifar10} and Figure \ref{fig:params-size}. Suppose our goal was to deploy the model with Width Multiplier ($w$) = $1.0$ and accuracy $84.42\%$, but the bottlenecks were the model size (5.25 MB) and the latency on a low-end device (24 ms on the Oppo A5). This is the classic case of the footprint metrics not meeting the bar, so we could apply the Shrink-and-Improve strategy by first naively scaling our model down to a Width Multiplier ($w$) of $0.25$. This smaller model, when manually tuned, achieves an accuracy of $80.6\%$, as seen in Table \ref{tab:float-model-cifar10}. However, when we use a combination of Data Augmentation \& Distillation from a separately trained larger teacher model with an accuracy of $90.86\%$, the accuracy of the smaller model improves to $84.51\%$, very close to the target model that we want to deploy. The size of this smaller model is 392.49 KB, which is $13.4 \times$ smaller, and the latency is 8.15 ms, which is $2.94 \times$ faster at a comparable accuracy. It is possible to compress this model further by using Quantization for some additional shrinking.
The same smaller model ($w = 0.25$), when Quantized in Table \ref{tab:quant-model-cifar10}, is 111 KB in size (\textbf{$47.3 \times$ smaller}) and has a latency of 4.52 ms (\textbf{$5.31 \times$ faster}), while retaining an accuracy of $84.18\%$. It is possible to do this for other pairs of points on the curves.
\textbf{Grow-Improve-Shrink for Quality-Sensitive Models}: Assume our goal is to deploy a model that has footprint metrics comparable to the model with $w = 0.25$ (392.49 KB model size, 0.93 ms on a high-end Galaxy S10 device), but an accuracy better than the baseline $80.6\%$ (refer to Table \ref{tab:float-model-cifar10}). In this case, we can choose to first grow our model to $w = 0.5$. This instantly blows up the model size to 1.37 MB ($3.49 \times$ bigger), and the latency to 1.33 ms ($1.43 \times$ slower). However, we ignore that for a bit and improve our model's quality to $88.03\%$ with Data Augmentation \& Distillation. Then, using Quantization for shrinking (refer to Table \ref{tab:quant-model-cifar10}), we can get a model that is 359.31 KB in size (33 KB smaller) and has a 0.58 ms latency on the Galaxy S10 ($1.6 \times$ faster), with an accuracy of $87.86\%$, an absolute 7.26\% increase in accuracy while keeping the model size approximately the same and making the model $1.6 \times$ faster. It is also possible to apply this strategy to other pairs of models.
Thus, we've verified that the above two strategies can work both ways, whether your goal is to optimize for quality metrics or for footprint metrics. We were also able to visually inspect, through Figures \ref{fig:params-size} and \ref{fig:latency-plots}, that efficiency techniques can improve on the pareto frontiers constructed through manual tuning. To contain the scope of experimentation, we selected two sets of efficiency techniques (Compression Techniques (Quantization) and Learning Techniques (Data Augmentation \& Distillation)).
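The dominance comparisons used throughout this discussion are easy to mechanize; a pure-Python sketch, with the sample points transcribed from the tables above (model size in KB paired with accuracy in \%):

```python
def pareto_frontier(points):
    # Keep a (size_kb, accuracy) point only if no other point is at
    # least as small AND at least as accurate (i.e. not dominated).
    return sorted(
        p for p in points
        if not any(q != p and q[0] <= p[0] and q[1] >= p[1] for q in points)
    )

# Baseline float model at w = 0.25 vs. its quantized, augmented and
# distilled counterpart (values from the tables above).
candidates = [(392.49, 80.6), (111.0, 84.18)]
```

Here `pareto_frontier(candidates)` keeps only the quantized, augmented and distilled model, since it is both smaller and more accurate than the float baseline; running the same helper over all twelve models recovers the frontiers visible in the figures.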
Hence, it would be useful to explore other techniques as well such as Automation (for Hyper-Parameter Tuning to further improve on results), and Efficient Layers \\& Models (Separable Convolution as illustrated in MobileNet could be used in place of larger convolutional layers). Finally, we would also like to emphasize paying attention to performance of Deep Learning models (optimized or not) on underrepresented classes and out-of-distribution data to ensure model fairness, since quality metrics alone might not be sufficient for discovering deeper issues with models .", "id": "7adfb685-924d-4fc7-8965-fd436e834f71", "level": "subsection", "origin_cites_number": 2, "parent_id": "bb65a187-5627-4dbe-9cb2-ae457b7b0c1f", "prefix_titles": [ [ "title", "Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better" ], [ "section", "A Practitioner's Guide to Efficiency" ], [ "subsection", "Discussion" ] ], "subsections": [], "title": "Discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "In this paper, we started with demonstrating the rapid growth in Deep Learning models, and motivating the fact that someone training and deploying models today has to make either implicit or explicit decisions about efficiency. However, the landscape of model efficiency is vast. \nTo help with this, we laid out a mental model for the readers to wrap their heads around the multiple focus areas of model efficiency and optimization. The surveys of the core model optimization techniques give the reader an opportunity to understand the state-of-the-art, apply these techniques in the modelling process, and/or use them as a starting point for exploration. The infrastructure section also lays out the software libraries and hardware which make training and inference of efficient models possible. \nFinally, we presented a section of explicit and actionable insights supplemented by code, for a practitioner to use as a guide in this space. 
This section will hopefully give concrete and actionable takeaways, as well as tradeoffs to think about when optimizing a model for training and deployment. To conclude, we feel that with this survey we have equipped the reader with the necessary understanding to break-down the steps required to go from a sub-optimal model to one that meets their constraints for both quality as well as footprint.", "id": "9f086486-ae1f-4444-8cde-1c9cffd71d03", "level": "section", "origin_cites_number": 0, "parent_id": "30f20a23-cf88-46fe-a956-22f060f17d75", "prefix_titles": [ [ "title", "Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" }, { "cite_extract_rate": 0, "cites": [], "content": "We would like to thank the Learn2Compress team at Google Research for their support with this work. We would also like to thank Akanksha Saran and Aditya Sarawgi for their help with proof-reading and suggestions for improving the content.\n\\bibliographystyle{ACM-Reference-Format}\n\\small\n\\bibliography{paper}\n\\end{document}\n\\endinput", "id": "2501746f-3c9c-49b3-83bd-392a3e078d3e", "level": "section", "origin_cites_number": 0, "parent_id": "30f20a23-cf88-46fe-a956-22f060f17d75", "prefix_titles": [ [ "title", "Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better" ], [ "section", "Acknowledgements" ] ], "subsections": [], "title": "Acknowledgements" } ]
68
[ 7, 38, 97, 514, 679, 305, 684, 9139, 7199, 681, 41, 168, 847, 846, 844, 842, 839, 838, 843, 8389, 840, 845, 841, 849, 848, 850, 8375, 7299, 8391, 852, 8390, 685, 851, 855, 854, 857, 853, 7300, 856, 505, 858, 861, 859, 8393, 8392, 862, 7191, 863, 860, 7210, 131, 864, 126, 865, 7000, 866, 7041, 869, 868, 867, 870, 872, 875, 871, 544, 874, 873, 880, 728, 877, 2401, 878, 7302, 7301, 879, 876, 882, 881, 7005, 883, 884 ]
1.298299
[ "Giuseppe Marra", "Sebastijan Dumančić", "Robin Manhaeve", "Luc De Raedt" ]
From Statistical Relational to Neurosymbolic \\ Artificial Intelligence: a Survey.
2021
2021-08-25T19:47:12Z
cs.AI
\rev{This survey explores the integration of learning and reasoning in two different fields of artificial intelligence: neurosymbolic and statistical relational artificial intelligence. Neurosymbolic artificial intelligence (NeSy) studies the integration of symbolic reasoning and neural networks, while statistical relational artificial intelligence (StarAI) focuses on integrating logic with probabilistic graphical models. This survey identifies seven shared dimensions between these two subfields of AI. These dimensions can be used to characterize different NeSy and StarAI systems. They are concerned with (1) the approach to logical inference, whether model or proof-based; (2) the syntax of the used logical theories; (3) the logical semantics of the systems and their extensions to facilitate learning; (4) the scope of learning, encompassing either parameter or structure learning; (5) the presence of symbolic and subsymbolic representations; (6) the degree to which systems capture the original logic, probabilistic, and neural paradigms; and (7) the classes of learning tasks the systems are applied to. By positioning various NeSy and StarAI systems along these dimensions and pointing out similarities and differences between them, this survey contributes fundamental concepts for understanding the integration of learning and reasoning. }
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "31670094-45b4-4854-914d-94aaf932ce65", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ] ], "subsections": [ "504b42a2-053a-4d14-a0e9-a9517b24369f", "71d65d84-75cd-4c92-b14d-bddcaa016dd0", "394ef4e2-3d6b-46dc-9354-ff1b43665302", "3f30c5c1-1c7c-4ae0-b5f0-88220243574e", "3cf78a3e-98ed-45de-848f-3a61063e51b3", "bd238fce-3df5-4531-ba35-f4316c49d81b", "322e754b-6517-46c9-9006-90e863943e5b", "528b4897-8064-4ac1-a40f-dc1e7d30614f", "35970a41-0514-46a0-8d9d-63d0769ff87a", "0c29e7f0-ea36-44aa-a1f0-0b0c69b42bfb", "df605b7c-7a16-4a8a-9a34-34eff8f50d0f" ], "title": "root" }, { "cite_extract_rate": 0.14285714285714202, "cites": [ 7601, 8475 ], "content": "The integration of learning and reasoning is a key challenge in artificial intelligence and machine learning today. Various communities are addressing it, especially\n\t the field of neurosymbolic artificial intelligence (NeSy) .\n\tNeSy's goal is to integrate symbolic reasoning with neural networks.\n\t\\nesy{} already has a long tradition, and it has recently attracted a lot of attention.\nIndeed, the topic has been addressed by prominent researchers such as Y. Bengio and H. Kautz in their keynotes at AAAI 2020, by Y. Bengio and G. Marcus in the AI Debate, and Hochreiter has recently stated that \\nesy{} is ``the most promising approach to a broad AI''.\n\tAnother domain with a rich tradition in integrating learning and reasoning is that of statistical relational learning and artificial intelligence (StarAI) . StarAI focuses on integrating logical and probabilistic reasoning.\nHistorically, these two endeavours have adopted different learning paradigms, probabilistic versus neural, for integrating logic into machine learning. This in turn has resulted in two different subcommunities.
StarAI has focused on probabilistic logics, their semantics and making inference more tractable, while learning is usually based on parameter learning techniques from probabilistic graphical models.\nOn the other hand, \\nesy{} extends neural networks with symbolic knowledge, focusing on scalable approximate models, paying less attention to semantical issues. In particular, \\nesy{} techniques can often be characterized by a clear parameterization in terms of neural networks, i.e. layered structures of latent representations, and by resorting to the gradient-based backpropagation paradigm for learning.\n\\rev{Despite a different focus and approach, the two domains want to achieve the same goal, that is, to integrate learning and reasoning. \nIt is therefore surprising that there has been relatively little interaction between the two domains, but see }.\n\tThis discrepancy is the key motivation behind this survey: it aims at pointing out the similarities between these two endeavours and, in this way, it wants to stimulate \n\tcross-fertilization. \n\tWe start from the literature on StarAI, following the key concepts, \\rev{definitions} and techniques outlined in several textbooks and tutorials such as , because it turns out that the same issues and techniques that arise in StarAI apply to \\nesy{} as well. \n\t\\rev{The key contributions of this paper are:}\n\t\\begin{enumerate}\n\t\t\\item \n\t\t{\\em We identify seven dimensions that these fields have in common and that can be used to categorize both StarAI and \\nesy{} approaches}. 
\n\t\tThese seven dimensions are concerned with (1) model vs proof-based inference, \\rev{(2) logic syntax,} (3) semantics, (4) learning parameters or structure, (5) representing entities as symbols or subsymbols, (6) integrating logic with probabilistic and/or neural concepts, and \\rev{(7) learning tasks}.\n\t\t\\item We provide evidence for our claim by positioning a wide range of StarAI and \\nesy{} systems along these dimensions and pointing out analogies between them. \n\t\tThis provides not only new insights into the relationships between StarAI and NeSy, but it also allows one to carry over and adapt techniques from one field to another. These insights provide opportunities for cross-fertilization between StarAI and NeSy, by focusing on those dimensions that have not been fully exploited yet.\n\t\\item \\rev{We gently introduce key logical concepts and techniques inherited from StarAI. In this way, the paper also provides a gentle introduction to symbolic AI and StarAI techniques for the interested ``connectionist'' practitioner.} \n\t\t\\item We illustrate each dimension using existing methods, \n and in this way, also present an intuitive and concrete overview of the research field. \n\t\\end{enumerate}\n\tUnlike some other perspectives on neurosymbolic computation , the present survey limits itself to a logical perspective and to developments in neurosymbolic computation that are consistent with this perspective. \\rev{Therefore, we usually refer to symbols and symbolic algorithms as synonyms for logical representations and logical reasoning.}\n\tFurthermore, the survey focuses on representative and prototypical systems rather than aiming at completeness (which would not be possible given the fast developments in the field). \n\tSeveral other surveys about neurosymbolic AI have been proposed. An early overview of neurosymbolic computation is that of . Unlike the present survey it focuses very much on a logical and a reasoning perspective. 
Today, the focus has shifted very much to learning. More recently, analysed the intersection between NeSy and graph neural networks (GNN). described neurosymbolic systems in terms of the composition of blocks described by a few patterns, concerning processes and exchanged data. In contrast, this survey is more focused on the underlying principles that govern such a composition. Finally, exploits a neural network viewpoint by investigating in which components (i.e. input, loss or structure) symbolic knowledge is injected.", "id": "504b42a2-053a-4d14-a0e9-a9517b24369f", "level": "section", "origin_cites_number": 14, "parent_id": "31670094-45b4-4854-914d-94aaf932ce65", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Introduction" ] ], "subsections": [ "baca8b6b-5536-41e9-a0f9-bc4e2294ec62" ], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\rev{The next seven sections each describe one dimension by first introducing the main underlying concepts, either based on logic, probability or machine learning, and then showing how they are incorporated in StarAI and NeSy systems. Section \\ref{sec:proof_vs_model} presents how to use logic for inference by distinguishing between proof-based and model-based systems, while Section \\ref{sec:syntax} introduces logic at the syntactic level, in particular, propositional, relational and first-order logic. Section \\ref{sec:semantics} then introduces the semantics of logic and shows how to extend it to a continuous semantics, using fuzzy and probabilistic logics. Section \\ref{sec:struct} discusses the dimension of learning, distinguishing parameter learning from structure learning. Section \\ref{sec:symb_vs_subsymb} focuses on the representational level and to what extent neurosymbolic models use symbolic and/or subsymbolic features.
Section \\ref{sec:paradigms} positions neurosymbolic approaches along the spectrum of three main paradigms, i.e. logic, probability and neural networks. \\rev{Section \\ref{sec:tasks} describes general classes of learning tasks to which neurosymbolic systems are usually applied.} Finally, in Section 10, we conclude by introducing open challenges in the neurosymbolic landscape.}\n\tWe summarize various neurosymbolic approaches along these dimensions in Table \\ref{tab:big_table}.\n\t\\include{big_table}", "id": "baca8b6b-5536-41e9-a0f9-bc4e2294ec62", "level": "paragraph", "origin_cites_number": 0, "parent_id": "504b42a2-053a-4d14-a0e9-a9517b24369f", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Introduction" ], [ "paragraph", "Structure of the paper" ] ], "subsections": [], "title": "Structure of the paper" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:proof_vs_model}\n\tIn this paper, we focus on clausal logic as it is a standard form to which any first order logical formula can be converted. In clausal logic, theories are represented in terms of {clauses}. \n\tMore formally, a \\textit{clause} is an expression of the form $ h_1 \\vee ... \\vee h_k \\leftarrow b_1 \\wedge ... \\wedge b_n$.\n\tThe\n\t$h_k$ are \\textit{head literals} or conclusions, while the $b_i$ are \\textit{body literals} or conditions. Clauses with no conditions ($n=0$) and one conclusion ($k=1$) are \\textit{facts}.\n\tClauses with only one conclusion ($k=1$) are \\textit{definite clauses}.\n\tThe question we want to answer in this section is how to use such clausal theories to reason? And, how to infer new facts from the known clauses? Along this first dimension, we will investigate the two fundamental ways to view logical inference and determine the implications for StarAI and NeSy systems. 
In one view, we want to find \\textit{proofs} for a certain query, which leads to the proof-theoretic approach to logic. In the other view, we want to find \\textit{models} (that is, truth assignments to the logical atoms) that satisfy a given theory. This leads to the model-theoretic approach to logic.", "id": "71d65d84-75cd-4c92-b14d-bddcaa016dd0", "level": "section", "origin_cites_number": 0, "parent_id": "31670094-45b4-4854-914d-94aaf932ce65", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Proof- vs Model-theoretic View of Logic" ] ], "subsections": [ "88cc4046-116e-4648-b8c1-799e686a9f88", "de2541d6-848e-4cbb-aea2-6b2aa605f9f0", "dfea86d8-fb1c-484d-a9d2-d17c03c4d644" ], "title": "Proof- vs Model-theoretic View of Logic" }, { "cite_extract_rate": 0, "cites": [], "content": "}\n\tThe proof-theoretic approach finds proofs for a query in a logic theory. While this approach to inference is applicable to any logic theory, we focus on logic programs in this paper. Syntactically, a logic program is a \\textit{definite} clause theory, which is a theory where all the clauses are definite (i.e. only one conclusion).\n\tIn logic programs, definite clauses are interpreted as if-then \\textbf{rules} ($h$ is true if $b_1, ..., b_n$ are true). \n \\rev{A proof for a query $q$ is a sequence of logical inference steps that demonstrates the truth of a query based on the given program. A compact way of representing the set of all proofs in a logic program uses an AND/OR tree, which consists of AND and OR nodes and edges amongst them. Each node represents a goal. An AND node branches into one or more outgoing edges, each representing one of the sub-goals that need to be simultaneously satisfied for the goal in the AND node to be true. An OR node represents choices or alternatives between multiple clauses that can be used to prove a particular sub-goal. 
The OR node branches into multiple outgoing edges, each representing one of these possible choices. Leaf nodes in the AND/OR tree represent true facts.}\n\tTypically, forward or backward chaining inference is used to search for proofs for queries. \n\tWe illustrate this in Example \\ref{ex:logic_program}.\n\t\\begin{boxedexample}[label=ex:logic_program]{Logic programs and proofs}\n\t\tConsider the following logic program (adapted from ): \n\t\t\\begin{lstlisting}\n\t\tburglary.\n\t\thears_alarm_mary. \n\t\tearthquake.\n\t\thears_alarm_john. \n\t\talarm <- earthquake. \n\t\talarm <- burglary.\n\t\tcalls_mary <- alarm,hears_alarm_mary.\n\t\tcalls_john <- alarm,hears_alarm_john. \n\t\t\\end{lstlisting}\n The rules for \\textit{alarm} state that there will be an \\textit{alarm} if there is a \\textit{burglary} or an \\textit{earthquake}.\n\\noindent The set of proofs for the query \\textit{calls\\_mary} can be represented compactly as an AND/OR tree.\n \\include{to_include/and_or_tree}\n\\end{boxedexample}", "id": "88cc4046-116e-4648-b8c1-799e686a9f88", "level": "paragraph", "origin_cites_number": 1, "parent_id": "71d65d84-75cd-4c92-b14d-bddcaa016dd0", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Proof- vs Model-theoretic View of Logic" ], [ "paragraph", "\\rev{Proof-theoretic logic" ] ], "subsections": [ "565706e8-0b20-4fdb-b6e9-e14ec63f057e" ], "title": "\\rev{Proof-theoretic logic" }, { "cite_extract_rate": 0, "cites": [], "content": "} On the other hand, the model-theoretic perspective on logic is to find a model or truth assignment to the logical atoms that satisfies a given logic theory. \n\tAn \\textit{interpretation}, or possible world, is a truth-assignment to the propositions (or ground atoms) of the language, and\n\tcan be uniquely identified with the set of propositions it assigns $True$ (thus considering all the others \\textit{False}).
\n\tAn interpretation is a \\textit{model} of a clause $ h_1 \\vee ... \\vee h_k \\leftarrow b_1 \\wedge ... \\wedge b_n$ if at least one of the $h_i$ is in the interpretation when all of $b_1, ..., b_n$ are in the interpretation as well. An interpretation $I$ is a model of a theory $T$, and we write $I \\models T$, if it is a model of all clauses in the theory. We say that the theory is \\textit{satisfiable} if it has a model. The satisfiability problem, that is, deciding whether a theory has a model, is one of the most fundamental ones in computer science (cf. the SAT problem for propositional logic). \n\t\\begin{boxedexample}[label = ex:model_based]{Model-theoretic}\n\t\tConsider the following theory (adapted from ): \n\t\t\\begin{lstlisting}\n\t\tsmokes_mary $\\leftarrow$ smokes_john, influences_john_mary.\n\t\tsmokes_john $\\leftarrow$ smokes_mary, influences_mary_john.\n\t\tsmokes_mary $\\leftarrow$ stress_mary.\n\t\tsmokes_john $\\leftarrow$ stress_john.\n\t\t\\end{lstlisting}\n\t\tA model of the previous theory is the set:\n\t\t$$M = \\{\\mathtt{stress\\_john, smokes\\_john}\\}$$ \n\t\tBy considering all the elements of this set \\textit{True} and all others \\textit{False}, the four clauses are satisfied.\n\t\\end{boxedexample}\n\tIn the model-theoretic perspective, one uses the logic theory as a set of \n\t{\\em constraints} on the propositions, that is, the propositions are related to one another, without imposing a directed inference relationship between them as in forward or backward chaining. More details on these connections can be found in .", "id": "565706e8-0b20-4fdb-b6e9-e14ec63f057e", "level": "paragraph", "origin_cites_number": 3, "parent_id": "88cc4046-116e-4648-b8c1-799e686a9f88", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey."
], [ "section", "Proof- vs Model-theoretic View of Logic" ], [ "paragraph", "\\rev{Proof-theoretic logic" ], [ "paragraph", "\\rev{Model-theoretic logic" ] ], "subsections": [], "title": "\\rev{Model-theoretic logic" }, { "cite_extract_rate": 0.1, "cites": [ 5706 ], "content": "Statistical Relational AI's focus is on unifying logical and probabilistic graphical models (PGMs). A PGM is a graphical model that compactly\n\trepresents a (joint) probability distribution $P(X_1, ... , X_n)$ over $n$ discrete or continuous random variables $X_1, ... ,X_n$.\n\tThe key idea is that the joint factorizes over some factors $f_i$ specified over subsets $X^i$ of the variables $\\{X_1, ... ,X_n\\}$.\n\t$$P(X_1, ... , X_n) = \\frac{1}{Z} f_1(X^1) \\times ... \\times f_k(X^k) $$\n\tThe random variables correspond to the nodes in the graphical structure, and the factorization is determined\n\tby the edges in the graph. \n\tThere are two classes of graphical models: \\textit{directed} ones, or Bayesian networks, and \\textit{undirected} ones, or Markov Networks.\n\tIn Bayesian networks, the underlying graph structure is a directed acyclic graph,\n\tand the factors $f_i(X_i | parents(X_i))$ correspond to the conditional probabilities $P(X_i | parents(X_i))$, where $parents(X_i)$\n\tdenotes the set of random variables that are a parent of $X_i$ in the graph.\n\tIn Markov networks, the graph is undirected and the factors $f_i(X^i)$ correspond to the set of nodes $X^i$ that form (maximal) cliques in the graph. Furthermore, the factors are non-negative and $Z$ is a normalisation constant.\n\t\\begin{evidencebox}\n\t\t\\rev{The distinction between directed and undirected graphical models is parallel to the proof- vs model-theoretic view of logic. This parallel is at the very core of StarAI. In fact, by viewing each variable $X_i$ (or proposition) \\textit{at the same time} as a random and as a logical variable , clausal theories can be extended to define probabilistic models.
Clauses can then be translated into binary valued factors by labeling them with weights (or probabilities), thus parameterizing the corresponding factors.}\n\t\\end{evidencebox}\n\tIn the remainder of this section, we will show how StarAI has used this parallel to define two types of systems .\n\tThe first type of StarAI system generalizes directed models and resembles Bayesian networks. It includes well-known representations such as plate notation , probabilistic relational models (PRMs) , probabilistic logic programs (PLPs) , and Bayesian logic programs (BLPs) . \n\tToday the most typical and popular representatives of this category are the probabilistic (logic) programs. \n\tProbabilistic logic programs were introduced by \n\tPoole~ and the first learning algorithm is due to Sato~. \n\tProbabilistic logic programs are essentially definite clause programs where every fact is annotated with the probability that it is \\textit{True}. \n\tThis then results in a possible world semantics.\n\tThe reason why probabilistic logic programs are viewed as directed models is clear when looking at the derivations\n\tfor a query, cf. Example \\ref{ex:logic_program}. At the top of the AND-OR tree, there is the query that one wants to prove and the structure of the tree\n\tis that of a directed graph (even though it need not be acyclic). One can straightforwardly map\n\tdirected graphical models, that is, Bayesian networks,\n\tonto such probabilistic logic programs by associating one definite clause to every entry in the conditional probability tables,\n\tyielding factors of the form $P(X | Y_1, ... , Y_n)$. Assuming boolean random variables, each entry ${x,y_1, ...,y_n}$ with parameter value $v$ can be represented using the definite clause $X(x) \\leftarrow Y_1 (y_1) \\wedge ... \\wedge Y_n(y_n) \\wedge p_{x,y_1, ...,y_n}$ and \n\tprobabilistic fact $v::p_{x,y_1, ...,y_n}$. 
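The CPT-to-clauses translation just described can be made concrete with a small sketch. This is plain Python; the function name, the auxiliary-fact naming scheme, and the dictionary layout are our own illustrative choices, not part of any particular StarAI system:

```python
# Sketch: encode a Boolean CPT P(X | Y1, ..., Yn) as definite clauses plus
# probabilistic facts, following the scheme described above. One clause/fact
# pair is emitted per CPT entry (y1, ..., yn) with parameter value v.

def cpt_to_program(child, parents, cpt):
    """cpt maps a tuple of parent truth values to P(child=True | parents)."""
    clauses, facts = [], []
    for values, v in cpt.items():
        # Auxiliary probabilistic fact p_{x, y1, ..., yn} for this entry.
        aux = f"p_{child}_" + "_".join("t" if b else "f" for b in values)
        body = [f"{p}({str(b).lower()})" for p, b in zip(parents, values)]
        clauses.append(f"{child}(true) <- " + ", ".join(body + [aux]) + ".")
        facts.append(f"{v}::{aux}.")
    return clauses, facts

# P(alarm | burglary, earthquake): alarm is certain if either cause holds.
cpt = {(True, True): 1.0, (True, False): 1.0,
       (False, True): 1.0, (False, False): 0.0}
clauses, facts = cpt_to_program("alarm", ["burglary", "earthquake"], cpt)
for line in clauses + facts:
    print(line)
```

Each CPT row becomes one definite clause guarded by its own probabilistic fact, so the resulting program defines the same conditional distribution as the original Bayesian network node.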
\n\tA probabilistic version of Example~\\ref{ex:logic_program} is shown in Example~\\ref{ex:problog} using the syntax of ProbLog .\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]\n\\draw (160,105) .. controls (160,96.72) and (177.91,90) .. (200,90) .. controls (222.09,90) and (240,96.72) .. (240,105) .. controls (240,113.28) and (222.09,120) .. (200,120) .. controls (177.91,120) and (160,113.28) .. (160,105) -- cycle ;\n\\draw (110,40) .. controls (110,28.95) and (125.67,20) .. (145,20) .. controls (164.33,20) and (180,28.95) .. (180,40) .. controls (180,51.05) and (164.33,60) .. (145,60) .. controls (125.67,60) and (110,51.05) .. (110,40) -- cycle ;\n\\draw (220,40) .. controls (220,28.95) and (239.7,20) .. (264,20) .. controls (288.3,20) and (308,28.95) .. (308,40) .. controls (308,51.05) and (288.3,60) .. (264,60) .. controls (239.7,60) and (220,51.05) .. (220,40) -- cycle ;\n\\draw (271.04,105) .. controls (271.04,96.72) and (299.91,90) .. (335.52,90) .. controls (371.13,90) and (400,96.72) .. (400,105) .. controls (400,113.28) and (371.13,120) .. (335.52,120) .. controls (299.91,120) and (271.04,113.28) .. (271.04,105) -- cycle ;\n\\draw (100,175) .. controls (100,166.72) and (120.15,160) .. (145,160) .. controls (169.85,160) and (190,166.72) .. (190,175) .. controls (190,183.28) and (169.85,190) .. (145,190) .. controls (120.15,190) and (100,183.28) .. 
(100,175) -- cycle ;\n\\draw (200,120) -- (252.57,158.24) ;\n\\draw [shift={(255,160)}, rotate = 216.03] [fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.08] [draw opacity=0] (8.93,-4.29) -- (0,0) -- (8.93,4.29) -- cycle ;\n\\draw (200,120) -- (147.43,158.24) ;\n\\draw [shift={(145,160)}, rotate = 323.97] [fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.08] [draw opacity=0] (8.93,-4.29) -- (0,0) -- (8.93,4.29) -- cycle ;\n\\draw (335.52,120) -- (257.69,158.67) ;\n\\draw [shift={(255,160)}, rotate = 333.58000000000004] [fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.08] [draw opacity=0] (8.93,-4.29) -- (0,0) -- (8.93,4.29) -- cycle ;\n\\draw (60,120) -- (142.29,158.72) ;\n\\draw [shift={(145,160)}, rotate = 205.2] [fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.08] [draw opacity=0] (8.93,-4.29) -- (0,0) -- (8.93,4.29) -- cycle ;\n\\draw (260,60) -- (202.74,85.77) ;\n\\draw [shift={(200,87)}, rotate = 335.77] [fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.08] [draw opacity=0] (8.93,-4.29) -- (0,0) -- (8.93,4.29) -- cycle ;\n\\draw (150,60) -- (197.36,85.57) ;\n\\draw [shift={(200,87)}, rotate = 208.37] [fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.08] [draw opacity=0] (8.93,-4.29) -- (0,0) -- (8.93,4.29) -- cycle ;\n\\draw (1.04,105) .. controls (1.04,96.72) and (29.91,90) .. (65.52,90) .. controls (101.13,90) and (130,96.72) .. (130,105) .. controls (130,113.28) and (101.13,120) .. (65.52,120) .. controls (29.91,120) and (1.04,113.28) .. (1.04,105) -- cycle ;\n\\draw (210,175) .. controls (210,166.72) and (230.15,160) .. (255,160) .. controls (279.85,160) and (300,166.72) .. (300,175) .. controls (300,183.28) and (279.85,190) .. (255,190) .. controls (230.15,190) and (210,183.28) .. 
(210,175) -- cycle ;\n\\draw (119,31) node [anchor=north west][inner sep=0.75pt] [align=left] {\\texttt{burglary}};\n\\draw (228,31) node [anchor=north west][inner sep=0.75pt] [align=left] {\\texttt{earthquake}};\n\\draw (181.14,95.38) node [anchor=north west][inner sep=0.75pt] [align=left] {\\texttt{alarm}};\n\\draw (276,96.13) node [anchor=north west][inner sep=0.75pt] [align=left] {\\texttt{ hears\\_alarm\\_mary}};\n\\draw (106.92,166.13) node [anchor=north west][inner sep=0.75pt] [align=left] {\\texttt{calls\\_john}};\n\\draw (13,96.13) node [anchor=north west][inner sep=0.75pt] [align=left] {\\texttt{hears\\_alarm\\_john}};\n\\draw (216.92,166.13) node [anchor=north west][inner sep=0.75pt] [align=left] {\\texttt{calls\\_mary}};\n\\end{tikzpicture}\n\t\t\\caption{The Bayesian network corresponding to the ProbLog program in Example \\ref{ex:problog}}\n\t\t\\label{fig:bayes_net_problog}\n\t\\end{figure}\n\t\\begin{boxedexample}\n [label = ex:problog]{ProbLog}\n\t\tWe show a probabilistic extension of the alarm program using ProbLog.\n\t\t\\begin{lstlisting}\n\t\t0.1::burglary.\n\t\t0.3::hears_alarm_mary. \n\t\t0.05::earthquake.\n\t\t0.6::hears_alarm_john.\n\t\talarm <- earthquake. \n\t\talarm <- burglary.\n\t\tcalls_mary <- alarm,hears_alarm_mary.\n\t\tcalls_john <- alarm,hears_alarm_john. \n\t\t\\end{lstlisting}\n\t\tThis program can be mapped to the Bayesian network in Figure~\\ref{fig:bayes_net_problog}.\n\t\tThis probabilistic logic program defines a distribution $p$ over possible worlds $\\omega$.
Let $P$ be a ProbLog program and $F = \\{p_1::c_1, \\cdots, p_n::c_n\\}$ be the set of ground probabilistic facts $c_i$ of the program and $p_i$ their corresponding probabilities.\n\t\tProbLog defines a probability distribution over $\\omega$ in the following way:\n\t\t\\begin{equation*}\n\t\tp(\\omega) = \\begin{cases} 0, &\\mbox{if } \\omega \\not\\models P \\\\\n\t\t\\displaystyle \\prod_{c_i \\in \\omega: c_i = T} p_i \\cdot \\prod_{c_j \\in \\omega: c_j = F} (1 - p_j), & \\mbox{if } \\omega \\models P \\end{cases}\n\t\t\\end{equation*}\n\t\\end{boxedexample}\n\tThe second type of StarAI system generalizes undirected graphical models such as Markov networks or random fields.\n\tThe prototypical example is Markov Logic Networks (MLNs) , and also Probabilistic Soft Logic (PSL) follows this idea.\n\tUndirected StarAI models consist of a set of weighted clauses $w:h_1 \\vee ... \\vee h_k \\leftarrow b_1 \\wedge ... \\wedge b_m$\n that become soft constraints. The higher the weight of a ground clause, the less likely possible worlds that violate these constraints are. In the limit, when the weight is $+\\infty$, the constraint must be satisfied and becomes a purely logical constraint, a hard constraint.\n\tThe weighted clauses specify a more general relationship between the conclusion and the condition than the definite clauses of directed models.
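The ProbLog distribution above can be checked by brute force: enumerate every truth assignment to the probabilistic facts, complete each assignment into a possible world by forward chaining the definite clauses, and sum the weights of the worlds in which a query holds. The sketch below does this for the alarm program in plain Python; it is only an illustration of the semantics, since actual ProbLog systems rely on knowledge compilation rather than world enumeration:

```python
from itertools import product

# Probabilistic facts and definite clauses of the alarm program.
facts = {"burglary": 0.1, "earthquake": 0.05,
         "hears_alarm_mary": 0.3, "hears_alarm_john": 0.6}
rules = [("alarm", ["earthquake"]), ("alarm", ["burglary"]),
         ("calls_mary", ["alarm", "hears_alarm_mary"]),
         ("calls_john", ["alarm", "hears_alarm_john"])]

def least_model(true_facts):
    # Forward chaining: repeatedly add rule heads whose bodies are satisfied.
    world = set(true_facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in world and all(b in world for b in body):
                world.add(head)
                changed = True
    return world

def success_prob(query):
    # Sum p(world) over the worlds (one per fact assignment) entailing query.
    names = list(facts)
    total = 0.0
    for bits in product([True, False], repeat=len(names)):
        weight = 1.0
        for name, bit in zip(names, bits):
            weight *= facts[name] if bit else 1.0 - facts[name]
        if query in least_model(n for n, b in zip(names, bits) if b):
            total += weight
    return total

print(success_prob("calls_mary"))  # 0.3 * (1 - 0.9 * 0.95)
```

Each of the $2^4$ assignments to the probabilistic facts determines exactly one possible world (its least model), and the query probability is the sum of the weights of the worlds containing the query atom.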
\n\tWhile clauses of undirected models can still be used in (resolution) theorem provers, they are commonly viewed as constraints that relate these two sets of atoms.\n\tSuch undirected StarAI models can be mapped to an undirected probabilistic graphical model in which there is a one-to-one correspondence between grounded weighted clauses and factors, as we show in Example \\ref{ex:mln}.\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\include{to_include/markov_field}\n\t\t\\caption{The Markov Field corresponding to the Markov logic network in Example \\ref{ex:mln}}\n\t\t\\label{fig:mln}\n\t\\end{figure}\n\t\\begin{boxedexample}[label = ex:mln]{Markov Logic Networks}\n\t\tWe show a probabilistic extension (adapted from ) of the theory in Example~\\ref{ex:model_based} using the formalism of Markov Logic Networks. We use a First Order language with domain $D = \\{\\textit{john},\\textit{mary}\\}$ and weighted clauses $\\alpha_1$ and $\\alpha_2$, i.e.:\n\t\t\\begin{align*}\n\t\t\\alpha_1: \\quad &\\mathtt{\n\t\t\t2.0::smokes(Y) \\leftarrow smokes(X), influences(X,Y)} \\\\\n\t\t\\alpha_2: \\quad &\\mathtt{0.5::smokes(X) \\leftarrow stress(X)}\n\t\t\\end{align*}\n\t\tIn Figure~\\ref{fig:mln}, we show the corresponding Markov field.\n\t\tA Markov Logic Network defines a probability distribution over possible worlds as follows.\n\t\tLet $ A = [\\alpha_1, \\cdots, \\alpha_n]$ be a set of logical clauses and let $B = [\\beta_1, \\cdots, \\beta_n]$ be the corresponding positive weights. Let $\\theta_j$ be a possible assignment of constants (from the domain $D$) to the variables (e.g. $\\mathtt{X,Y}$) of the clause $\\alpha_i$, that is, a substitution. Let $\\alpha_i\\theta_j$ be the grounded clause where all variables in $\\theta_j$ have been replaced by their corresponding constants. Finally, let $\\mathbbm{1}(\\omega,\\alpha_i\\theta_j)$ be an indicator function, evaluating to 1 if the ground clause is \\textit{True} in $\\omega$, 0 otherwise.
\n\t\tThe probabilistic semantics of Markov Logic is the distribution (with $Z$ the normalization constant):\n\t\t\\begin{equation*}\n\t\tp(\\omega) = \\frac{1}{Z} \\exp \\big(\\sum_i \\beta_i \\sum_{j} \\mathbbm{1}(\\omega, \\alpha_i\\theta_j) \\big)\n\t\t\\end{equation*}\n\t\tIntuitively, in MLNs, a world is more probable if it makes many of its ground instances \\textit{True}. Notice that MLNs are usually defined on first-order clause theories, with variables and domains. We will further investigate this issue in Section \\ref{sec:syntax}.\n\t\\end{boxedexample}", "id": "de2541d6-848e-4cbb-aea2-6b2aa605f9f0", "level": "subsection", "origin_cites_number": 10, "parent_id": "71d65d84-75cd-4c92-b14d-bddcaa016dd0", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Proof- vs Model-theoretic View of Logic" ], [ "subsection", "Implications for StarAI" ] ], "subsections": [], "title": "Implications for StarAI" }, { "cite_extract_rate": 0.636363636363636, "cites": [ 2665, 5707, 3726, 5709, 3727, 3741, 8942, 7988, 5710, 5711, 4323, 7304, 3672, 5708 ], "content": "\\rev{The distinction between proof vs models and inference rules vs constraints, turns out to be fundamental for neurosymbolic systems as well.}\n\t\\begin{evidencebox}\n\t\t\\rev{In neurosymbolic AI, weighted clauses are not used to construct a probabilistic graphical model, but they are likewise used to construct neural models. More specifically, NeSy systems that exploit a proof-theoretic approach use the proofs to build the \\textit{architecture} of the neural net. On the other side of the spectrum, NeSy systems that exploit a model-theoretic approach use the constraints to build a \\textit{loss function} for the neural net. }\n\t\\end{evidencebox}\n\t\\rev{Both choices are extremely natural.\n Proof trees capture the structure of the inference process in a graphical representation. 
Therefore, they can be used as the structure of the neural network computation, which corresponds to their architecture. On the other hand, the desired behaviour of the variables is expressed in terms of constraints. \n Loss functions are the de facto standard to enforce desired behaviours on the output variables of a neural network.}\n\\rev{First, we survey proof-based NeSy models, which use theorem proving for logical inference and proofs to template the neural architecture. In particular, when proving a specific query atom, they keep track of all the used rules in a proof tree, such as the one shown in Example \\ref{ex:logic_program}.\tWeights on facts and rules are then used to label leaves or edges of the tree, respectively, while real valued activation functions are used to label the AND and OR nodes. The result is a computational graph that can be executed (or evaluated) bottom-up, starting from the leaves up to the root. Generally speaking, the output of the computational graph is a \\textit{score} for the query atom. Different semantics can be exploited in building the computational graph, ranging from relaxations of truth values (such as in fuzzy logic) to probabilities (see Section \\ref{sec:semantics}). The connection between the proof tree and the neural network suggests schemes for learning the parameters of these models. Indeed, the obtained computational graph is always differentiable. Thus, given a set of atoms that are known to be \\textit{True} (resp. \\textit{False}), one can maximize (resp. minimize) their score using the corresponding computational graphs.\n\tInference in these models is turned into \\textit{evaluation} of the computational graph. The direction of the rules indicates the direction of the evaluation, in the same way as it indicates the direction of inference in logic programming. 
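As a sketch of this bottom-up evaluation, consider the alarm program used throughout: the proof tree of the query for \textit{calls(mary)} becomes a tiny computational graph whose AND and OR nodes are soft, product-style operations over leaf scores. The leaf values below are illustrative, and no specific system's semantics is implied.

```python
# Leaf scores for the ground facts (illustrative values; in a proof-based
# NeSy system these could be fact weights or neural-net outputs in [0, 1]).
leaf = {"burglary": 0.1, "earthquake": 0.05, "hears_alarm_mary": 0.8}

def AND(*xs):
    # soft conjunction: product-style activation for AND nodes
    p = 1.0
    for x in xs:
        p *= x
    return p

def OR(*xs):
    # soft disjunction: probabilistic sum, 1 - prod(1 - x)
    q = 1.0
    for x in xs:
        q *= 1.0 - x
    return 1.0 - q

# Proof tree for the rules
#   alarm <- burglary.   alarm <- earthquake.
#   calls(mary) <- alarm, hears_alarm(mary).
# evaluated bottom-up as a computational graph yielding a query score:
alarm = OR(leaf["burglary"], leaf["earthquake"])
score = AND(alarm, leaf["hears_alarm_mary"])
```

Because every node is differentiable, the score can be maximized (or minimized) by gradient descent on the leaf parameters, exactly as described above.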
\n\tAmong this category are} systems based on Prolog or Datalog, such as TensorLog , Neural Theorem Provers (NTPs) , NLProlog , DeepProbLog , NLog and DiffLog .\n\tLifted Relational Neural Networks (LRNNs) and $\\partial$ILP are other examples of non-probabilistic directed models, where weighted definite clauses are compiled into a neural network architecture in a forward chaining fashion.\n\tThe systems that imitate logical reasoning with tensor calculus, Neural Logic Programming (NeuralLP) and Neural Logic Machines (NLM) , are likewise instances of directed logic. An example of a proof-based NeSy model is given in Example \\ref{ex:kbann}. \n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\begin{subfigure}[b]{0.55\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{prolog}\nalarm <- earthquake. \nalarm <- burglary.\ncalls_mary <- alarm, \n hears_alarm_mary.\n\t\t\t\\end{prolog}\n\t\t\t\\caption{}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[b]{0.43\\textwidth}\n\t\t\t\\centering\n\t\t\t\\include{to_include/and_or_simple}\n\t\t\t\\caption{}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[b]{0.48\\textwidth}\n\t\t\t\\centering\n\t\t\t\\include{to_include/and_or_nn}\n\t\t\t\\caption{}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[b]{0.48\\textwidth}\n\t\t\t\\centering\n\t\t\t\\include{to_include/and_or_hidden}\n\t\t\t\\caption{}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[b]{0.48\\textwidth}\n\t\t\t\\centering\n\t\t\t\\include{to_include/and_or_fully}\n\t\t\t\\caption{}\n\t\t\\end{subfigure}\n\t\t\\caption{Knowledge-Based Artificial Neural Network. Network creation process. 
(1) the initial logic program; (2) the AND-OR tree for the query \\textit{calls\\_mary}; (3) mapping the tree into a neural network; (4) adding hidden neurons; (5) adding interlayer connections.}\n\t\t\\label{fig:kbann}\n\t\\end{figure}\n\\begin{boxedexample}[label = ex:kbann]{Knowledge-Based Artificial Neural Networks}\n\t\\noindent Knowledge-Based Artificial Neural Networks (KBANN) is the first method to use definite clausal logic and theorem proving to template the architecture of a neural network. \n\t\\noindent KBANN turns a program into a neural network in several steps:\n\t\\begin{enumerate}\n\t\t\\item KBANN starts from a definite clause program and a set of queries.\n\t\t\\item The program is turned into an AND-OR tree using the proofs for the queries.\n\t\t\\item The AND-OR tree is turned into a neural network with a similar structure. Nodes are divided into layers. The weights and the biases are set such that evaluating the network returns the same outcome as querying the program.\n\t\t\\item New hidden units are added. Hidden units play the role of unknown rules that need to be learned. They are initialized with zero weights; i.e. they are inactive.\n\t\t\\item New links are added from each layer to the next one, obtaining the final neural network. \n\t\\end{enumerate}\n\t\\noindent An example of this process is shown in Figure~\\ref{fig:kbann}. KBANN imposes some restrictions on the kinds of rules. In particular, the rules are assumed to be conjunctive, non-recursive, and variable-free (or propositional). Many of these restrictions are removed by more recent systems.\n\\end{boxedexample}\n\\rev{We now survey the second class of NeSy systems, the model-based ones. These systems use logic to define a loss function (usually a regularization term) for neural networks. The networks compute scores for the set of atoms that correspond to the output neurons.
At each training step, the logic-based loss function determines the degree to which the assigned scores violate the logical theory and uses this to determine the penalty. \nLogical inference is turned into a learning problem (i.e. ``learning to satisfy'') and it is usually cast in a variational optimization scheme.\\footnote{This is reminiscent of the variational approach to probabilistic inference in probabilistic graphical models and constitutes a further parallel between the fields.}\nAs a consequence, in constraint-based models, the neural network has to solve two tasks at the same time: solving a subsymbolic learning problem (e.g. perception) as well as approximating the logical inference process .}\nA large group of NeSy approaches, including Semantic Based Regularization (SBR) , Logic Tensor Networks (LTN) , Semantic Loss (SL) and DL2 , exploits logical knowledge as a soft regularization constraint that favours solutions that satisfy the logical constraints. SBR and LTN compute atom (fuzzy) truth assignments as the output of the neural network and translate the provided logical formulas into a real valued regularization loss term using fuzzy logic. SL uses marginal probabilities of the target atoms to define the regularization term and relies on arithmetic circuits to evaluate it efficiently, as detailed in Example \\ref{ex:sl}. DL2 defines a numerical loss providing no specific fuzzy or probabilistic semantics, which allows for including numerical variables in the formulas (e.g. by using a logical term $x > 1.5$). Another group of approaches, including Neural Markov Logic Networks (NMLN) and Relational Neural Machines (RNM) extend MLNs, allowing factors of exponential distributions to be implemented as neural architectures. 
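A minimal sketch of this logic-as-regularization idea, in the style of SBR and LTN: the clause $\mathtt{alarm \leftarrow burglary \vee earthquake}$ is relaxed with the \L ukasiewicz connectives (cf. Table \ref{tab:t-norms}) and its degree of violation is used as a penalty term. The truth values stand in for hypothetical network outputs.

```python
# Lukasiewicz relaxations of the connectives (cf. the t-norm table).
def luk_or(x, y):
    return min(1.0, x + y)

def luk_implies(x, y):  # truth degree of x -> y
    return min(1.0, 1.0 - x + y)

def constraint_truth(alarm, burglary, earthquake):
    # relaxed truth degree of: alarm <- burglary v earthquake
    return luk_implies(luk_or(burglary, earthquake), alarm)

def logic_loss(alarm, burglary, earthquake):
    # zero penalty iff the relaxed formula is fully satisfied (truth 1.0)
    return 1.0 - constraint_truth(alarm, burglary, earthquake)
```

With the truth degrees of Example \ref{ex:fuzzy} (0.7, 0.6, 0.3), the constraint evaluates to 0.8, so a penalty of 0.2 would be added to the training objective.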
Finally, some approaches compute ground atom scores as dot products between relation and entity embeddings; implication rules are then translated into a logical loss through a continuous relaxation of the implication operator.\n\t\\begin{boxedexample}[label = ex:sl]{Semantic Loss}\n\t\t\\noindent The Semantic Loss is an example of an undirected model where (probabilistic) logic is exploited as a \\textit{regularization} term in training a neural model.\n\t\t\\noindent Let $p = [p_1,\\dots,p_n]$ be a vector of probabilities for a list of propositional variables $X = [X_1,\\dots,X_n]$. In particular, $p_i$ denotes the probability of variable $X_i$ being \\textit{True} and corresponds to a single output of a neural net having $n$ outputs.\n\t\tLet $\\alpha$ be a logic sentence defined over $X$.\n\t\t\\noindent Then, the \\textit{semantic loss} between $\\alpha$ and $p$ is:\n\t\t\\begin{equation*}\n\t\tLoss(\\alpha,p) \\propto - \\log \\,\\, \\sum_{x \\models \\alpha} \\,\\,\\, \\prod_{i: x \\models X_i} p_i \\,\\,\\, \\prod_{i: x \\models \\neg X_i} (1-p_i).\n\t\t\\end{equation*}\n\t\t\\noindent The authors provide the intuition behind this loss: \n\t\t\\begin{quote}\n\t\t\t\\textit{The semantic loss is proportional to the negative logarithm of the probability of generating a state that satisfies the constraint when sampling values according to $p$.}\n\t\t\\end{quote} \n\t\t\\noindent Suppose you want to solve a multi-class classification task (example adapted from ), where each input example must be assigned to a single class. Then, one would like to enforce \\textit{mutual exclusivity} among the classes. This can be easily done on supervised examples, by coupling a softmax activation layer with a cross entropy loss.
However, there is no standard way to impose this constraint for unlabeled data, which can be useful in a semi-supervised setting.\n\t\t\\noindent The solution provided by the Semantic Loss framework is to encode mutual exclusivity into the propositional constraint $\\beta$:\n\t\t\\begin{equation*}\n\t\t\\beta = (X_1 \\land \\neg X_2 \\land \\neg X_3) \\lor\n\t\t(\\neg X_1 \\land X_2 \\land \\neg X_3) \\lor\n\t\t(\\neg X_1 \\land \\neg X_2 \\land X_3)\n\t\t\\end{equation*}\n\t\tConsider a neural network classifier with three outputs $ p =[ p_1, p_2, p_3]$. Then, for each input example (whether labeled or unlabeled), we can build the semantic loss term:\n\t\t\\begin{equation*}\n\t\tL(\\beta,p) = p_1(1 - p_2)(1 - p_3) +\n\t\t(1 - p_1)p_2(1 - p_3) +\n\t\t(1 - p_1)(1 - p_2)p_3\n\t\t\\end{equation*}\n\t\t\\noindent It can then be added to the standard cross-entropy term for the labeled examples.\n\t\tUnlike for directed methods such as KBANN (Example \\ref{ex:kbann}) and TensorLog, \n the logic is turned into a loss function that is used during training. The function constrains the underlying probabilities, but there are no directed or causal relationships among them.\n\tMoreover, during inference only the probabilities $p$ are used, while the logic formula $\\beta$ is not used anymore. On the contrary, in KBANN, the logic is compiled into the architecture of the network and, therefore, it is also exploited at evaluation time. \n\t\\end{boxedexample}\n\t\\rev{To conclude, let us stress a key difference between \n the two classes of NeSy systems w.r.t. \\textit{the way they incorporate the knowledge expressed in the logical clauses}. Proof-based, directed models use logic to define the architecture of a neural symbolic network. \n Thus, logic is part of the inference of the model and acts as a structural constraint. The designer has full control of where and how the logic is used inside the network.
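The mutual-exclusivity term of Example \ref{ex:sl} can be checked numerically; the sketch below uses the $-\log$ form of the semantic loss (taking the proportionality constant to be 1) on hypothetical output probabilities.

```python
import math

def exactly_one_mass(p1, p2, p3):
    # probability mass of the worlds satisfying the constraint beta,
    # i.e. L(beta, p) from the example above
    return (p1 * (1 - p2) * (1 - p3)
            + (1 - p1) * p2 * (1 - p3)
            + (1 - p1) * (1 - p2) * p3)

def semantic_loss(p1, p2, p3):
    # Loss(beta, p) = -log of the satisfying mass (constant of proportionality 1)
    return -math.log(exactly_one_mass(p1, p2, p3))
```

A confident one-hot output incurs zero loss, while outputs that spread mass over several classes are penalized, which is exactly the regularization behaviour described above.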
Thus, logical knowledge can easily be extended or modified at test-time, without the need to retrain, leading to a high degree of modularity and out-of-distribution generalization . On the other hand, when logic is only encoded in an objective function, the neural net learns to (approximately) satisfy it. Therefore, the knowledge is only latently encoded in the weights of the network, which leads to a loss of control and interpretability. However, the latter techniques are often much more scalable, especially at inference time. The balance between control and interpretability, on the one hand, and scalability, on the other hand, is an open and important research question in the \\nesy{} community. }", "id": "dfea86d8-fb1c-484d-a9d2-d17c03c4d644", "level": "subsection", "origin_cites_number": 22, "parent_id": "71d65d84-75cd-4c92-b14d-bddcaa016dd0", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Proof- vs Model-theoretic View of Logic" ], [ "subsection", "Implications for NeSy" ] ], "subsections": [], "title": "Implications for NeSy" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:syntax}\n\tIn Section \\ref{sec:proof_vs_model}, we have introduced clausal logic, without paying much attention to the structure of the atoms and literals. This structure and its consequences for StarAI and \\nesy{} models are the topic of the present section. Consider the following example: \n\t\\begin{boxedexample}[label = ex:prop_clausal_logic]{Propositional Clausal logic}\n\t\tConsider the following set of clauses. \n\t\t\\begin{lstlisting}\n\t\tmortal_socrates $\\leftarrow$ human_socrates.\n\t\tmortal_aristotle $\\leftarrow$ human_aristotle.\n\t\thuman_socrates.\n\t\thuman_aristotle.\n\t\t\\end{lstlisting}\n\t\\end{boxedexample} \n\tHere, the literals do not possess any internal structure. 
\\rev{They are {\em propositions}, which are atoms to which we can only assign the value $True$ or $False$. We say that we are working in \\textit{propositional logic}. }\n\tThis contrasts with\n\t\\textit{first-order} logic in which the literals take the form $p(t_1, ... , t_m)$, with $p$ a predicate of arity $m$ and the $t_i$ terms, that is, constants, variables, or structured terms of the form $f(t_1, ..., t_q)$, where $f$ is a functor and the $t_i$ are again terms. \n\tIntuitively, constants represent objects or entities, functors represent functions, variables abstract over specific objects, and predicates specify relationships amongst objects.\n\tThe subset of first order logic where there are no functors is called {\em relational logic}. \n\t\\begin{boxedexample}[label=ex:fol_clausal_logic]{First order clausal logic}\n\t\tIn contrast to the previous example, we now\n\t\twrite the theory in a more compact manner using \\textit{first order logic}. By convention, constants start with a lowercase letter, while variables start with an uppercase letter.\n\t\t\\begin{lstlisting}\n\t\tmortal(X) $\\leftarrow$ human(X). \n\t\thuman(socrates).\n\t\thuman(aristotle).\n\t\t\\end{lstlisting}\n\t\\end{boxedexample} \n\t\\rev{It is interesting to understand the connection between propositional, relational and first-order logic. To this end, we introduce the concept of grounding.} \n\tWhen an expression (i.e., clause, atom or term) does not contain any variable it is called {\em ground}. \n\tA substitution $\\theta$ is an expression of the form $\\{X_1/c_1, ..., X_k/c_k \\}$ with the $X_i$ distinct variables and the $c_i$ terms. Applying a substitution $\\theta$ to an expression $e$ (term, atom or clause) yields the instantiated expression $e\\theta$ where all variables $X_i$ in $e$ have been simultaneously replaced by their corresponding terms $c_i$ in $\\theta$.
We can take for instance the clause $mortal(X) \\leftarrow human(X)$ and apply the substitution $\\{X/socrates\\}$ to yield $mortal(socrates) \\leftarrow human(socrates)$. Grounding is the process whereby all possible substitutions that ground the clauses are applied. Notice that grounding a first order logical theory may result in an infinite set of ground clauses (when there are functors), and a polynomially larger set of clauses (when working with finite domains).\n\t\\rev{Finite domains are the focus in both StarAI and \\nesy{}. In such domains,} any problem expressed in first-order logic can be equivalently expressed in relational logic and any problem expressed in relational logic can likewise be expressed in propositional logic by grounding out the clauses .", "id": "394ef4e2-3d6b-46dc-9354-ff1b43665302", "level": "section", "origin_cites_number": 2, "parent_id": "31670094-45b4-4854-914d-94aaf932ce65", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Logic - Syntax" ] ], "subsections": [ "3524cff6-f045-4e2a-be1c-cdf237557b34", "7f592cdb-5aaa-4d78-a9de-739ab8dc18a6" ], "title": "Logic - Syntax" }, { "cite_extract_rate": 0, "cites": [], "content": "StarAI typically focus on first order logic ~. \\rev{In Section \\ref{sec:proof_vs_model}, we have seen how StarAI models can be easily interpreted in terms of probabilistic graphical models (PGM). Here, we want to show that FOL is a powerful tool for building such models.} \n\t\\begin{evidencebox}\n\t\t\\rev{FOL allows for knowledge in the form of logical rules to be interpreted as a template for defining the graphical models. Grounding the theory then corresponds to \\textit{unrolling} the template. At the same time, first order logic has also an important statistical and learning advantage: a FOL rule leads to parameter sharing in the model as the parameters of a single FOL rule are tied to all its groundings. 
Parameter sharing compresses the representation of the corresponding probabilistic model, resulting in more efficient learning and better generalization.}\n\t\\end{evidencebox}\n\t\\rev{These properties are reminiscent of plate notation for probabilistic graphical models, bringing logical reasoning into the picture . In Example \\ref{ex:mln}, we have used only two first order rules but we obtained a larger graphical model with six factors (see Figure \\ref{fig:mln}) by grounding (i.e. unrolling) the rules over the domain. All factors corresponding to the same rule share the same weight.}", "id": "3524cff6-f045-4e2a-be1c-cdf237557b34", "level": "subsection", "origin_cites_number": 4, "parent_id": "394ef4e2-3d6b-46dc-9354-ff1b43665302", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Logic - Syntax" ], [ "subsection", "Implications for StarAI" ] ], "subsections": [], "title": "Implications for StarAI" }, { "cite_extract_rate": 0.75, "cites": [ 5714, 3741, 5712, 5710, 5707, 7304, 4323, 5709, 5713, 8943, 3726, 3727 ], "content": "}\n\t\\rev{NeSy exploits the internal structure of literals, resulting in many relational and first-order systems. System exploiting propositional logic are Semantic Loss (SL) ~ and DL2~. Relational logic-based systems are DiffLog~, $\\theta$ILP , Lifted Relational Neural Networks (LRNN) , Neural Theorem Provers (NTP) \n\tand NeurASP~.\n\t\tFinally, many systems are based on first-order logic or first-order logic programs, such as \n\t\tDeepProbLog , NLog , NLProlog , \n\t\tDeepStochLog ,\n\t\tLogic Tensor Networks , Semantic Based Regularization , Relational Neural Machines and Logical Neural Networks .}\n\t\\rev{The focus in \\nesy{} on structured terms is strongly related to that in StarAI and plays a fundamental role in \\nesy{}. 
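Grounding as template unrolling can be made concrete. The sketch below grounds the weighted rule of Example \ref{ex:mln} over its two-constant domain; every grounding reuses the same rule weight, which is exactly the parameter tying discussed above (the string encoding of atoms is purely illustrative).

```python
from itertools import product

# Ground the weighted rule 2.0::smokes(Y) <- smokes(X), influences(X,Y)
# over the finite domain {john, mary}. Each grounding is tied to the
# single first-order rule weight.
domain = ["john", "mary"]
rule_weight = 2.0

groundings = []
for x, y in product(domain, repeat=2):
    head = f"smokes({y})"
    body = [f"smokes({x})", f"influences({x},{y})"]
    groundings.append((rule_weight, head, body))
```

One first-order rule and a domain of size two yield $2^2 = 4$ ground clauses, matching the factors unrolled in Figure \ref{fig:mln}.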
\n\t In fact, grounding a relational or first-order theory can often be seen as unrolling either the architecture (e.g., DeepStochLog, LRNN ) or the loss function (e.g., SBR , LTN ) of the corresponding neural model. Unrolling fixed modules over multiple elements of a complex data structure is fundamental to neural networks on sequences (recurrent nets, RNN), trees (recursive nets, RvNN) and graphs (graph nets, GNN). \\nesy{} can be regarded as unrolling more complex logical structures, with similar benefits in terms of model capacity, modularization and generalization, and strong control due to the formal semantics.}\n\t\\rev{Moreover, first-order \\nesy{} models can explicitly deal with how subsymbolic data (e.g. images or audio) are fed to the neural components of the system.\n In fact, \\nesy{} systems often use subsymbolic data samples as elements of the domain of discourse. For example, the element \\textit{mary} can be used to refer to an image, e.g. $mary = \\smallimg{mary.jpeg}$. Feeding such samples as input to a neural network can then be naturally encoded as grounding a predicate over the domain of interest. When the internal structure of the literals is absent, as in SL, this mapping must be handled outside the logical framework.}\nWhile both relational logic and first-order logic have their advantages, there is a noteworthy distinction in the latter. \nFirst-order logic allows representing real valued functions through the use of functors. For example, segmentation can be modeled as a functor returning the bounding box of an object inside an image, e.g. \\textit{location(mary,image)} . 
Therefore, FOL-based systems can address regression tasks, diverging from the conventional classification tasks associated with relational logic systems.", "id": "7f592cdb-5aaa-4d78-a9de-739ab8dc18a6", "level": "subsection", "origin_cites_number": 16, "parent_id": "394ef4e2-3d6b-46dc-9354-ff1b43665302", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Logic - Syntax" ], [ "subsection", "Implications for \\nesy{" ] ], "subsections": [], "title": "Implications for \\nesy{" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:semantics}", "id": "3f30c5c1-1c7c-4ae0-b5f0-88220243574e", "level": "section", "origin_cites_number": 0, "parent_id": "31670094-45b4-4854-914d-94aaf932ce65", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Logic - Semantics" ] ], "subsections": [ "8d5d9e2a-e3dc-4e1e-ba7c-32324338d1ca", "5422e096-fdd4-4f74-a7c2-b0a811af5aa2", "09725807-0a82-437e-980b-ae2733823d35", "0c28445b-24b7-4df7-a61f-1e0537492ed8" ], "title": "Logic - Semantics" }, { "cite_extract_rate": 0, "cites": [], "content": "\\rev{The semantics of logical, probabilistic logical and neurosymbolic systems is defined in terms of a model theoretic semantics.}\n\\rev{In the present section, we will restrict our attention to Herbrand interpretations and models as is usual in logic programming and statistical relational AI (see Section \\ref{sec:proof_vs_model})}.\n\\rev{We can distinguish three different levels of semantics, which are also closely tied to the used syntax of the underlying logic.}\n\\rev{First, when the logical theory consists of definite clauses only, the semantics is given by the least Herbrand model. The least Herbrand model of a definite clause theory\nis unique and it is the smallest w.r.t. set inclusion. 
It contains all ground facts (from the Herbrand domain) that are logically entailed by the theory.\nFor instance, considering the facts $a$ and $b$ and the rules $d \\leftarrow a, b$ and $b \\leftarrow c$ would give the least Herbrand model $\\{a, b, d\\}$.}\n\\rev{Second, when the logical theory can contain any set of clauses, the semantics is given by the set of all possible Herbrand models. For instance, considering the clause $a \\vee b $\nyields the models $\\{a\\}, \\{b\\}$ and $\\{a, b\\}$. So there is not necessarily a unique model, not even when considering only minimal models, where\nwe have $\\{a\\}, \\{b\\}$.} \n\\rev{Third, while Horn-clauses are the basis for \"pure\" Prolog and logic programs, there exist several extensions to this formalism to accommodate\nnegated literals in the condition part of rules or disjunction in the head. A popular framework in this regard is answer set programming (ASP). \nIn ASP the clause $a \\vee b$ could be represented by two clauses $a \\leftarrow \\neg b$ and $b \\leftarrow \\neg a$ which would have two stable models $\\{a\\}$ and $\\{b\\}$.}", "id": "8d5d9e2a-e3dc-4e1e-ba7c-32324338d1ca", "level": "subsection", "origin_cites_number": 0, "parent_id": "3f30c5c1-1c7c-4ae0-b5f0-88220243574e", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Logic - Semantics" ], [ "subsection", "Model-theoretic semantics" ] ], "subsections": [], "title": "Model-theoretic semantics" }, { "cite_extract_rate": 0, "cites": [], "content": "\\rev{The previous three levels of semantics are based on Boolean models, i.e. models where each atom is either present (i.e. \\textit{True}) or absent (i.e. \\textit{False})}. Differently, \\textit{fuzzy logic}, and in particular t-norm fuzzy logic, assigns a truth value to atoms in the \\textit{continuous} real interval $[0,1]$. 
Logical operators are then turned into real-valued functions, mathematically grounded in the t-norm theory. A \\textit{t-norm} $t(x, y)$ is a real function $t:[0,1] \\times [0,1] \\to [0,1]$ that models the logical AND and from which the other operators can be derived. Table~\\ref{tab:t-norms} shows well-known t-norms and the functions corresponding to their connectives. A fuzzy logic formula is mapped to a real valued function of its input atoms, as we show in Example~\\ref{ex:fuzzy}. Fuzzy logic generalizes Boolean logic to continuous values. All the different t-norms are coherent with Boolean logic in the endpoints of the interval $[0,1]$, which correspond to \\textit{completely true} and \\textit{completely false} values. The concept of model in fuzzy logic can be easily recovered from an extension of the model-theoretic semantics of the Boolean logic. Any \\textit{fuzzy} interpretation is a model of a formula if the formula evaluates to 1.\n\\begin{boxedexample}[label=ex:fuzzy]{Fuzzy logic}\n\t\tLet us consider the following propositions: \\textit{alarm}, \\textit{burglary} and \\textit{earthquake}. Defining a fuzzy semantics for this language requires one to assign truth degrees to each of the propositions and selecting a particular t-norm to implement the connectives. 
\n\t\tLet us consider the \\L ukasiewicz t-norm and the following interpretation of the language:\n\t\t\\begin{minipage}{0.40\\linewidth}\n\t\t\t\\begin{align*}\n\t\t\t\\mathcal{I} = \\{&\\texttt{alarm} = 0.7, \\\\\n\t\t\t&\\texttt{burglary}=0.6, \\\\ &\\texttt{earthquake}=0.3\\}\n\t\t\t\\end{align*}\n\t\t\\end{minipage}\n\t\t\\begin{minipage}{0.60\\linewidth}\n\t\t\t\\begin{align*}\n\t\t\tt_{\\vee}(x,y) & = \\min ( 1, x + y)\\\\\n\t\t\tt_{\\rightarrow}(x,y) & = \\min(1, 1 - x + y)\n\t\t\t\\end{align*}\n\t\t\\end{minipage}\n\t\t\\newline\n\t\tOnce we have defined the semantics of the language, we can evaluate logic sentences, e.g.:\n\t\t\\begin{align*}\n\t\t& \\texttt{alarm} \\leftarrow (\\texttt{burglary} \\vee \\texttt{earthquake}) = \\\\\n\t\t& \\min(1,1 - \\min(1, \\texttt{burglary} + \\texttt{earthquake}) +\\texttt{alarm}) = 0.8\n\t\t\\end{align*}\n\t\tThis evaluation can be performed automatically by parsing the logical sentence in the corresponding \\textit{expression tree} and then compiling the expression tree using the corresponding t-norm operation: \n\t\t\\begin{center}\n\t\t\t\\include{to_include/expression_tree_fuzzy}\n\t\t\\end{center}\n\t\tThe resulting circuit represents a differentiable function and the truth degree of the sentence is computed by evaluating the circuit bottom-up.\n\t\\end{boxedexample}\n\t\\begin{table}[tb]\n\t\t\\centering\n\t\t\\begin{tabular}{c|c|c|c}\n\t\t\t& Product & \\L ukasiewicz & G\\\"{o}del \\\\ \n\t\t\t\\hline\n\t\t\t$x \\land y$ & $x \\cdot y$ & $\\max(0, x + y -1$) & $\\min(x, y$) \\\\\n\t\t\t\\hline\n\t\t\t$x \\lor y$ & $x + y - x \\cdot y$ & $\\min(1, x + y)$ & $\\max(x, y)$ \\\\\n\t\t\t\\hline\n\t\t\t$\\lnot x$ & $1 - x$ & $1 - x$ & $1 - x$ \\\\\n\t\t\t\\hline\n\t\t\t$x \\Rightarrow y \\;\\; (x>y)$ & $y/x$ & $\\min(1,1-x+y)$ & $y$ \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\t\\caption{Logical connectives on the inputs $x,y$ when using the fundamental t-norms.}\n\t\t\\label{tab:t-norms}\n\t\\end{table}", "id": 
"5422e096-fdd4-4f74-a7c2-b0a811af5aa2", "level": "subsection", "origin_cites_number": 0, "parent_id": "3f30c5c1-1c7c-4ae0-b5f0-88220243574e", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Logic - Semantics" ], [ "subsection", "Fuzzy semantics" ] ], "subsections": [], "title": "Fuzzy semantics" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 8685, 5706 ], "content": "Statistical Relational AI has extended the previous semantics by defining probability distributions $p(\\omega)$ over models, or \\textit{possible worlds}\\footnote{In this paper, we use the distribution semantics as representative of the probabilistic approach to logic. While this is the most common solution in StarAI, many other solutions exist , whose description is out of the scope of the current survey. A detailed overview of the different flavours of formal reasoning about uncertainty can be found in .}.\nThe goal is to reason about the uncertainty of logical statements. In particular, the probability that a certain formula $\\alpha$ holds is computed as the sum of the probabilities of the possible worlds that are models of $\\alpha$ (i.e. where $\\alpha$ is True):\n\t\\begin{equation}\n\t\\label{eq:wmc}\n\tp(\\alpha) = \\sum_{\\omega \\models \\alpha} p(\\omega) \n\t\\end{equation}\n\tThis is an instance of the Weighted Model Counting (WMC) problem. 
In fact, we are counting how many worlds are models of $\\alpha$ and we are weighting each of them by its probability according to the distribution $p(\\omega)$.\n\t\\begin{table}\n\t\t\\centering\n\t\t\\begin{tabular}{cccc|l}\n\t\t\tB & E & J & M & $p(\\omega)$ \\\\\n\t\t\t\\hline\n\t\t\tF&F&F&F& 0.2394 \\\\\n\t\t\tF&F&F&T& 0.1026 \\\\\n\t\t\tF&F&T&F& 0.3591 \\\\\n\t\t\tF&F&T&T& 0.1539 \\\\\n\t\t\tF&T&F&F& 0.0126 \\\\\n\t\t\tF&T&F&T& 0.0054 \\\\\n\t\t\tF&T&T&F& 0.0189 \\\\\n\t\t\tF&T&T&T& 0.0081 \\\\\n\t\t\tT&F&F&F& 0.0266 \\\\\n\t\t\tT&F&F&T& 0.0114 \\\\\n\t\t\tT&F&T&F& 0.0399 \\\\\n\t\t\tT&F&T&T& 0.0171 \\\\\n\t\t\tT&T&F&F& 0.0014 * \\\\\n\t\t\tT&T&F&T& 0.0006 * \\\\\n\t\t\tT&T&T&F& 0.0021 * \\\\\n\t\t\tT&T&T&T& 0.0009 * \n\t\t\\end{tabular}\n\t\t\\caption{A distribution over possible worlds for the four propositional variables $burglary$ (B), $earthquake$ (E), $hears\\_alarm\\_john$ (J) and $hears\\_alarm\\_mary$ (M). The $*$ indicates those worlds where $burglary \\wedge earthquake$ is \\textit{True}.}\n\t\t\\label{tab:distribution_semantics}\n\t\\end{table}\n\t\\begin{boxedexample}[label = ex:distr_semantics]{Probabilistic Logic}\n\t\tLet us consider the following set of propositions $B = burglary$, $E = earthquake$, $J = hears\\_alarm\\_john$ and $M = hears\\_alarm\\_mary$. In probabilistic logic, a probability distribution over all the possible worlds is defined. For example, Table \\ref{tab:distribution_semantics} represents a valid distribution.\n\t\tSuppose we want to compute the probability of the formula $burglary \\wedge earthquake$. This is done by summing up the probabilities of all the worlds where both $burglary$ and $earthquake$ are \\textit{True} (indicated by a $*$ in Table \\ref{tab:distribution_semantics}). \n\t\\end{boxedexample}\n The StarAI community has provided several formalisms to define such probability distributions over possible worlds using labeled logic theories. Probabilistic Logic Programs (cf. 
Example \\ref{ex:problog}) and Markov logic networks (cf. Example \\ref{ex:mln}) are two prototypical frameworks. For example, the distribution in Table~\\ref{tab:distribution_semantics} is the one modeled by the ProbLog program in Example \\ref{ex:problog}.\n\\rev{It is interesting to compare Markov Logic (Example \\ref{ex:mln}) to ProbLog (Example \\ref{ex:problog}) in terms of their model-theoretic semantics. Markov Logic is defined as a set of weighted full clauses, i.e. as an unnormalized probability distribution over full clausal theories.\nThis means that, given any subset of the theory, there can be many possible models. For instance, the theory $a \\vee b$, has three possible models. To obtain a probability distribution over models, Markov Logic needs to distribute the probability mass over its models. \nTo do this, the maximum entropy principle is used, which results in equal distributions of the probability mass.} \n\\rev{Conversely, ProbLog defines a probability distribution over definite clause theories, each obtained as subsets of the provided probabilistic facts. However, since each of these theories has a unique least Herbrand model, the probability mass corresponding to the selected facts is assigned to the corresponding unique Herbrand model. This means that when working with definite clauses only, there is no need to distribute the probability mass to multiple models and, therefore, no extra assumptions such as maximum entropy are necessary.} \nProbabilistic inference (i.e. weighted model counting) is generally intractable. That is why, in StarAI, techniques such as \\textit{knowledge compilation} (KC)~ are used.\n\tKnowledge compilation transforms a logical formula $\\alpha$ into a new representation in an offline step, which can be computationally expensive. Using this new representation a particular set of queries can be answered efficiently (i.e. in poly-time in the size of the new representation). 
\n\tFrom a probabilistic point of view, this translation solves the disjoint-sum problem, which states that one cannot simply sum up the probability \n of two disjuncts but also has to subtract the probability of the intersection.\n After the translation, the probabilities of any conjunction and of any disjunction can be simply computed by multiplying, resp. summing up, the probabilities of their operands. Thus a logical formula $\\alpha$ can be compiled into an arithmetic circuit $ac(\\alpha)$. The weighted model count of the query formula can then simply be computed by evaluating the corresponding arithmetic circuit bottom up; i.e. $p(\\alpha) = ac(\\alpha)$. \n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\noindent\n\t\t\\begin{subfigure}{0.4\\linewidth}\n\t\t\t\\include{to_include/dDNNF}\n\t\t\\end{subfigure}\n\t\t\\hfill\n\t\t\\begin{subfigure}{0.4\\linewidth}\n\t\t\t\\include{to_include/arithmetic_circuit}\n\t\t\\end{subfigure}\n\t\t\\caption{dDNNF (left) and arithmetic circuit (right) corresponding to the ProbLog program in Example \\ref{ex:problog}}\n\t\t\\label{fig:kc}\n\t\\end{figure}\n\t\\begin{boxedexample}[label = ex:kc]{Knowledge Compilation}\n\t\tLet us consider the ProbLog program in Example \\ref{ex:problog} and the corresponding tabular representation in Table \\ref{tab:distribution_semantics}. Let us consider the query $q = calls(mary)$. Now we can use Equation~\\ref{eq:wmc} to compute the probability $p(q)$. To do this, we iterate over the table and we sum all the probabilities of the worlds where $calls(mary)$ is True, which we know from Example \\ref{ex:logic_program} are those where either $burglary=T$ or $earthquake=T$ and where $hears\\_alarm(mary)=T$. This yields $p(q) = 0.0435$. This method would require us to iterate over $2^N$ terms (where $N$ is the number of probabilistic facts).\n\t\tKnowledge compilation compiles $\\alpha$ into some normal form that is logically equivalent. 
In Figure \\ref{fig:kc}, the target representation is a decomposable, deterministic negative normal form (d-DNNF)~, for which weighted model counting is poly-time in the size of the formula. Decomposability means that, for every conjunction, the two conjuncts do not share any variables. Deterministic means that, for every disjunction, the two disjuncts are \\rev{mutually exclusive, i.e., only one of the disjuncts can be true at the same time}. \n\t\tThe formula in d-DNNF can then be straightforwardly turned into an arithmetic circuit by substituting AND nodes with multiplication and OR nodes by summation. \n In Figure \\ref{fig:kc}, we show the d-DNNF and the arithmetic circuit of the distribution defined by the ProbLog program in Example \\ref{ex:problog}. The bottom-up evaluation of this arithmetic circuit computes the correct marginal probability $p(\\alpha)$ much more efficiently than the naive iterative sum that we have computed before. \n\t\\end{boxedexample}\n\tEven though probabilistic Boolean logic is the most common choice in StarAI, some approaches use probabilistic fuzzy logic. The most prominent approach is Probabilistic Soft Logic (PSL) , illustrated in Example~\\ref{ex:psl}. Similarly to Markov logic networks, Probabilistic Soft Logic (PSL) defines log linear models where features are represented by ground clauses. However, PSL uses a fuzzy semantics of the logical theory. 
Therefore, atoms are mapped to real valued variables and ground clauses to real valued factors.\n\t\\begin{boxedexample}[label = ex:psl]{Probabilistic Soft Logic}\n\t\tLet us consider the logical rule $\\alpha = smokes(X) \\leftarrow stress(X)$ with weight $\\beta$.\n\t\tAs we have seen in Example~\\ref{ex:mln}, Markov Logic translates the formula into a discrete factor by using the indicator functions $\\mathbbm{1}(\\omega, \\alpha\\theta)$:\n\t\t\\begin{equation*}\n\t\t\\phi^{MLN}(\\omega, \\alpha) = \\beta \\mathbbm{1}(\\omega, \\alpha\\{X/mary\\}) + \\beta \\mathbbm{1}(\\omega, \\alpha\\{X/john\\})\n\t\t\\end{equation*}\n\t\tInstead of discrete indicator functions, PSL translates the formula into a continuous t-norm based function:\n\t\t\\begin{equation*}\n\t\tt(\\omega, \\alpha) = \\min(1, 1 - stress(X) + smokes(X))\n\t\t\\end{equation*}\n\t\tand the corresponding potential is then translated into the continuous and differentiable function:\n\t\t\\begin{equation*}\n\t\t\\phi^{PSL}(\\omega, \\alpha) = \\beta t(\\omega, \\alpha\\{X/mary\\}) + \\beta t(\\omega, \\alpha\\{X/john\\})\n\t\t\\end{equation*}\n\t\tAnother important task in StarAI is MAP inference. In MAP inference, given the distribution $p(\\omega)$, one is interested in finding the interpretation $\\omega^\\star$ where $p$ is maximal, i.e.\n\t\t\\begin{equation}\n\t\t\\label{eq:map}\n\t\t\\omega^\\star = \\text{arg}\\max_\\omega p(\\omega)\n\t\t\\end{equation}\n\t\tWhen the $\\omega$ is a boolean interpretation, i.e. $\\omega \\in \\{0,1\\}^n$, like in ProbLog or MLNs, this problem is \\rev{related to maxSAT, which is NP-hard}. However, in PSL, $\\omega$ is a fuzzy interpretation, i.e. $\\omega \\in [0,1]^n$ and $p(\\omega) \\propto \\exp\\big(\\sum_i \\beta_i \\phi(\\omega, \\alpha_i)\\big)$ is a continuous and differentiable function. 
The MAP inference problem can thus be \\textit{approximated} more efficiently than its boolean counterpart using gradient-based techniques.\n\t\\end{boxedexample}", "id": "09725807-0a82-437e-980b-ae2733823d35", "level": "subsection", "origin_cites_number": 6, "parent_id": "3f30c5c1-1c7c-4ae0-b5f0-88220243574e", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Logic - Semantics" ], [ "subsection", "Implications for StarAI" ] ], "subsections": [], "title": "Implications for StarAI" }, { "cite_extract_rate": 0.695652173913043, "cites": [ 5712, 5707, 3726, 5718, 5717, 5715, 681, 5716, 3727, 5719, 5721, 5710, 5711, 5720, 4323, 3672 ], "content": "We have seen that in StarAI, one can turn inference tasks into the evaluation (as in KC) or gradient-based optimization (as in PSL) of a differentiable parametric circuit. The parameters are scalar values (e.g. probabilities or truth degrees) that are attached to basic elements of a logical theory (facts or clauses).\n\tA natural way of carrying over the StarAI approach to NeSy is the reparameterization method. Reparameterization substitutes the scalar values assigned to facts or formulas with the output of a neural network. One can interpret this substitution in terms of a different parameterization of the original model. Many probabilistic methods parameterize the underlying distribution in terms of neural components. In particular, as we show in Example \\ref{ex:deepproblog}, DeepProbLog exploits neural predicates to compute the probabilities of probabilistic facts as the output of neural computations over vectorial representations of the constants, which is similar to SL in the propositional counterpart (see Example \\ref{ex:sl}). 
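The continuous relaxation that makes gradient-based MAP inference possible can be sketched directly; the rule weight and truth degrees below are hypothetical values for the smokes/stress rule in the PSL example above:

```python
# Lukasiewicz relaxation of the ground rule smokes(x) <- stress(x):
# t(stress, smokes) = min(1, 1 - stress + smokes), a continuous function
# of the truth degrees, unlike the 0/1 indicator used by Markov Logic.
def lukasiewicz_impl(stress, smokes):
    return min(1.0, 1.0 - stress + smokes)

beta = 2.0  # hypothetical rule weight

def log_potential(omega):
    # Weighted sum of the relaxed groundings for mary and john.
    return beta * (lukasiewicz_impl(omega["stress_mary"], omega["smokes_mary"])
                   + lukasiewicz_impl(omega["stress_john"], omega["smokes_john"]))

omega = {"stress_mary": 0.8, "smokes_mary": 0.9,
         "stress_john": 0.2, "smokes_john": 0.1}
print(round(log_potential(omega), 6))  # 3.8
```

Because `log_potential` is continuous and (piecewise) differentiable in the truth degrees, MAP-style maximisation over `omega` can be approximated with gradient ascent rather than combinatorial search.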
NeurASP also inherits the concept of a neural predicate from DeepProbLog.\n\t\\begin{boxedexample}[label = ex:deepproblog]{Probabilistic semantics reparameterization in DeepProbLog}\n\t\tDeepProbLog is a neural extension of the probabilistic logic programming language ProbLog. DeepProbLog allows images or other subsymbolic representations as terms of the program.\n\t\tLet us consider a possible neural extension of the program in Example~\\ref{ex:problog}. We could extend the predicate $calls(X)$ with two extra inputs, i.e. $calls(B,E,X)$. $B$ is supposed to contain an image of a security camera, while $E$ is supposed to contain the time-series of a seismic sensor. We would like to answer queries like $calls(\\smallimg{burglary1.png},\\smallimg{earthquake1.png},mary)$, i.e. what is the probability that $mary$ calls, given that the security camera has captured the image $\\smallimg{burglary1.png}$ and the sensor the signal $\\smallimg{earthquake1.png}$ .\n\t\tDeepProbLog can answer this query using the following program:\n\t\t\\begin{lstlisting}\n\t\tnn(nn_burglary, [B]) :: burglary(B).\n\t\tnn(nn_earthquake, [E]) :: earthquake(E).\n\t\t0.3::hears_alarm(mary). \n\t\t0.6::hears_alarm(john). \n\t\talarm(B,_) <- burglary(B).\n\t\talarm(_,E) <- earthquake(E).\n\t\tcalls(B,E, X) <- alarm(B,E), hears_alarm(X).\n\t\t\\end{lstlisting}\n\t\tHere, the program has been extended in two ways. First, new arguments (i.e. $B$ and $E$) have been introduced in order to deal with the subsymbolic inputs. Second, the probabilistic facts $burglary$ and $earthquake$ have been turned into \\textit{neural predicates}. Neural predicates are special probabilistic facts that are annotated by neural networks instead of by scalar probabilities.\n\t\tInference in DeepProbLog mimics that of ProbLog. 
Given the query and the program, knowledge compilation is used to build the arithmetic circuit in Figure \\ref{fig:deepproblog}.\n\t\tSince the program is structurally identical to the purely symbolic one in Example \\ref{ex:kc}, the arithmetic circuit is exactly the same. The only difference is that some leaves of the tree (i.e. capturing probabilities of facts) can now also be neural networks.\n\t\tGiven a set of queries that are \\textit{True}, i.e.:\n\t\t\\begin{align*}\n\t\t\\mathcal{D} = \\{&calls(\\smallimg{burglary1.png},\\smallimg{earthquake1.png},mary),\\\\ &calls(\\smallimg{burglary2.png},\\smallimg{earthquake2.png},john), \\\\ &calls(\\smallimg{burglary3.png},\\smallimg{earthquake3.png},mary), ...\\},\n\t\t\\end{align*} we can train the parameters $\\theta$ of the DeepProbLog program (both neural networks and scalar probabilities) by maximizing the log-likelihood of the training queries using gradient descent:\n\t\t\\begin{equation*}\n\t\t\\max_{\\theta} \\sum_{q \\in \\mathcal{D}} \\log p(q)\n\t\t\\end{equation*}\n\t\\end{boxedexample}\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\include{to_include/arithmetic_circuit_dpl}\n\t\t\\caption{A neural reparametrization of the arithmetic circuit in Example \\ref{ex:kc} as done by DeepProbLog (cf. Example \\ref{ex:deepproblog}). Dashed lines indicate a negative output, i.e. 1 - x. \\rev{We use a different notation for negation than in Figure \\ref{fig:kc} to stress that both leaves are parameterized by the same neural network}.}\n\t\t\\label{fig:deepproblog}\n\t\\end{figure}\n\tSimilarly to DeepProbLog, NMLNs and RNMs use neural networks to parameterize the factors (or the weights) of a Markov Logic Network.\n\t computes marginal probabilities as logistic functions over similarity measures between embeddings of entities and relations. 
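A minimal sketch of this reparameterization pattern, with tiny hand-set linear scorers standing in for the neural predicates nn_burglary and nn_earthquake (all weights and feature values here are illustrative assumptions):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative stand-ins for the neural predicates: linear scorers over
# input features, squashed into probabilities.
w_burglary = [0.4, -0.2]
w_earthquake = [0.1, 0.3]

def nn_prob(weights, features):
    return sigmoid(sum(w * f for w, f in zip(weights, features)))

def p_calls_mary(camera_feats, seismic_feats, p_hears_mary=0.3):
    # Same arithmetic circuit as the symbolic case -- (B or E) and M --
    # but the leaf probabilities are now neural outputs, so gradients can
    # flow from the query probability back into the network weights.
    pb = nn_prob(w_burglary, camera_feats)
    pe = nn_prob(w_earthquake, seismic_feats)
    return (1.0 - (1.0 - pb) * (1.0 - pe)) * p_hears_mary

print(round(p_calls_mary([1.0, 0.5], [0.2, 0.1]), 4))  # 0.2378
```

The circuit structure is fixed by compilation; only the leaves change, which is why training reduces to ordinary gradient descent on the log-likelihood.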
An alternative solution to exploit a probabilistic semantics is to use knowledge graphs (see also \\ref{sec:kge_gnn}) to define probabilistic priors to neural network predictions, as done in .\n\tSBR and LTN reparametrize fuzzy atoms using neural networks that take as inputs the feature representation of the constants and return the corresponding truth value, as shown in Example~\\ref{ex:sbr}. Logical rules are then relaxed into soft constraints using fuzzy logic. Many other systems exploit fuzzy logic to inject knowledge into neural models . These methods can be regarded as variants of a unique conceptual framework as the differences are often minor and in the implementation details. \n\t\\begin{boxedexample}[label = ex:sbr]{Semantic-Based Regularization}\n\t\t\\noindent Semantic-Based Regularization (SBR) is an example of an undirected model where fuzzy logic is exploited as a \\textit{regularization} term when training a neural model. \n\t\tLet us consider a possible grounding for the rule in Example \\ref{ex:psl}:\n\t\t\\begin{lstlisting}\n\t\tsmokes(mary) $\\leftarrow$ stress(mary)\n\t\t\\end{lstlisting}\n\t\tFor each grounded rule $r$, SBR builds a regularization loss term $L(r)$ in the following way. First, it maps each constant $c$ (e.g. \\textit{mary}) to a set of (perceptual) features $x_c$ (e.g. a tensor of pixel intensities $x_\\texttt{mary}$). Each relation $r$ (e.g. \\textit{smokes, stress}) is then mapped to a neural network $f_r(x)$, where $x$ is the tensor of features of the input constants and the output is a truth degree in $[0,1]$. For example, the atom \\textit{smokes(mary)} is mapped to the function call $f_\\texttt{smokes}(x_\\texttt{mary})$. \n\t\tThen, a fuzzy logic t-norm is selected and logic connectives are mapped to the corresponding real valued functions. 
For example, when the \\L ukasiewicz t-norm is selected, the implication is mapped to the binary real function $f(x,y) = \\min(1, 1 - x + y)$.\n\t\tFor the rule above, the Semantic-Based Regularization loss term is (for the \\L ukasiewicz t-norm):\n\t\t\\begin{equation*}\n\t\tL^{\\text{\\L}}(r) = \\min \\Big(1, 1 - f_\\texttt{stress}(x_\\texttt{mary}) + f_\\texttt{smokes}(x_\\texttt{mary}) \\Big)\n\t\t\\end{equation*}\n\t\tThe aim of Semantic-Based Regularization is to use the regularization term together with a classical loss function for supervised learning to learn the functions associated to the relations (here $f_\\texttt{stress}$ and $f_\\texttt{smokes}$).\n\t\tIt is worth comparing this method with the Semantic Loss (Example \\ref{ex:sl}). Both methods turn a logic formula (either propositional or first-order) into a real valued function that is used as a regularization term. However, because of the different semantics, these two methods have different properties. On the one hand, SL preserves the original logical semantics, by using probabilistic logic. However, due to the probabilistic assumption, the input formula cannot be compiled directly into a differentiable loss but needs to be first translated, i.e. compiled, into an equivalent deterministic and decomposable formula. While this step is necessary for the probabilistic model to be sound, the size of the resulting formula can be exponential in the size of the grounded theory. On the other hand, in SBR, the formula can be compiled directly into a differentiable loss, whose size is linear in the size of the grounded theory. However, in order to do so, the semantics of logic is altered, by turning it into fuzzy logic.\n\t\\end{boxedexample}\n\tFuzzy logic can also be used to relax rules. For example, in LRNN, $\\partial$ILP, DiffLog and the approach of , the scores of the proofs are computed using fuzzy logic connectives. \n\tThe theory of t-norms has identified parameterized (i.e. 
weighted) classes of t-norms that are very close to standard neural computation patterns (e.g. ReLU or sigmoidal layers). This creates an interesting, still not fully understood, connection between soft logical inference and inference in neural networks. A large class of methods relaxes logical statements numerically, without explicitly defining a specific semantics.\n\tUsually, the atoms are assigned scores in $\\mathbb{R}$ computed by a neural scoring function over embeddings.\n\tNumerical approximations are then applied either to combine these scores according to logical formulas or to aggregate proof scores.\n\tThe resulting neural architecture is usually differentiable and, thus, trained end-to-end.\n\tSome NeSy methods, such as PSL, have used mixed probabilistic and fuzzy semantics. In particular, Deep Logic Models (DLM) extend PSL by adding neurally parameterized factors to the Markov field, while uses fuzzy logic to train posterior regularizers for standard deep networks using knowledge distillation . \n \\rev{The semantics of computational logic has also been explored and extended along other directions that have also been used within AI, for example, \\textit{modal} and \\textit{temporal} logics . While their analysis is out of the scope of the paper, it is worth mentioning that such formalisms have also been investigated from a neurosymbolic perspective .}", "id": "0c28445b-24b7-4df7-a61f-1e0537492ed8", "level": "subsection", "origin_cites_number": 23, "parent_id": "3f30c5c1-1c7c-4ae0-b5f0-88220243574e", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." 
], [ "section", "Logic - Semantics" ], [ "subsection", "Implications for NeSy" ] ], "subsections": [], "title": "Implications for NeSy" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:struct}\n\tLearning approaches in StarAI and NeSy are usually distinguished as to whether the structure~ or the parameters of the model are learned~.\n\tIn structure learning, the learning task is to discover the logical theory, i.e., a set of logical clauses and their corresponding probabilities or weights that reliably explains the examples. \n\tWhat \\textit{explaining the examples} exactly means depends on the learning setting.\n\tIn discriminative learning, we are interested in learning a theory that explains, or predicts, a specific target relation given background knowledge.\n\tIn generative learning, there is no specific target relation; instead, we are interested in a theory that explains the interactions between all relations in a dataset.\n\tIn contrast to structure learning, parameter learning starts with a given logical theory and only learns the corresponding probabilities or weights.\n\tStructure learning is an inherently NP-complete problem of searching for the right combinatorial structure, whereas parameter learning can be achieved with any curve fitting technique, such as gradient descent or least-squares.\n\tWhile parameter learning is, in principle, an easier problem to solve, it comes with a strong dependency on the provided user input. If the provided clauses are of low quality, the resulting model will also be of low quality.\n\tStructure learning, on the other hand, is less dependent on the user provided input, but is an inherently more difficult problem.", "id": "3cf78a3e-98ed-45de-848f-3a61063e51b3", "level": "section", "origin_cites_number": 3, "parent_id": "31670094-45b4-4854-914d-94aaf932ce65", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." 
], [ "section", "Structure versus Parameter Learning" ] ], "subsections": [ "88f64637-a474-4f43-9b9a-5ccb928849a4", "9af5d6bb-eaff-4088-a25c-009703783367" ], "title": "Structure versus Parameter Learning" }, { "cite_extract_rate": 0, "cites": [], "content": "Both structure and parameter learning are common in StarAI.\n\tStructure learning in StarA is an instance of learning by search~ and is closely connected to program synthesis.\n\tThe existing techniques are typically extensions of techniques originating in inductive logic programming (ILP) ~, which learn deterministic logical theories, and probabilistic graphical models (PGMs), which learn Bayesian or Markov networks from data.\n\tBeing an instance of learning by search, the central components of a learning framework are a space of valid structures and a search procedure.\n\tIn ILP, valid structures are logical theories; for Bayesian networks, valid structures are DAGs capturing their graph structure.\n\tThe resulting search space is then traversed with generic search procedures.\n\t StarAI structure learning techniques suffer from a combinatorial explosion.\n\tThat is especially the case with ILP techniques, in which the search space consists of programs containing several clauses. 
\n\tTherefore, it is necessary to limit the search space to make learning tractable.\n\tThe most common way to do this is to impose a \\textit{language bias} -- a set of instructions on how to construct the search space, such that it is narrowed down to a subset of the space of all logical theories.\n\tThough language bias can make the problem more tractable, it requires special care: too many restrictions might eliminate the target theory, while too few restrictions make the search space too large to traverse.\n\tAnother strategy is to leverage the compositionality of logic programs: \n adding an additional clause to a program increases its coverage and does not affect the prediction of examples covered by the initial program.\n\tThat is, we can learn a single clause at a time instead of simultaneously searching over theories containing multiple clauses.\n\tLearning clauses and their probabilities is usually treated as a two stage process.\n\tILP-based StarAI learning techniques first identify useful (deterministic) clauses, and then learn the corresponding probabilities or weights via parameter learning. 
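The clause-at-a-time strategy can be sketched as a greedy covering loop; the coverage sets below are hand-precomputed toy data rather than an actual coverage test over background knowledge:

```python
def learn_theory(examples, candidates):
    """Greedy sequential covering (a simplified, ProbFoil-style loop):
    repeatedly add the clause covering most still-uncovered examples."""
    theory, remaining = [], set(examples)
    while remaining:
        clause, covered = max(candidates.items(),
                              key=lambda kv: len(kv[1] & remaining))
        if not covered & remaining:  # no clause helps any more: stop
            break
        theory.append(clause)        # one clause at a time (compositionality)
        remaining -= covered
    return theory

# Toy data: which grandparent examples each candidate clause covers.
candidates = {
    "gp(X,Z) <- mother(X,Y), mother(Y,Z)": {"gp(jacqueline,lisa)", "gp(jacqueline,bart)"},
    "gp(X,Z) <- father(X,Y), father(Y,Z)": {"gp(abe,lisa)", "gp(abe,bart)"},
    "gp(X,Y) <- mother(X,Y)": set(),
}
examples = {"gp(abe,lisa)", "gp(abe,bart)",
            "gp(jacqueline,lisa)", "gp(jacqueline,bart)"}
print(learn_theory(examples, candidates))  # both chain clauses, one per iteration
```

The loop exploits the compositionality noted above: adding a clause can only increase coverage, so each clause can be found independently.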
Similarly, StarAI methods grounded primarily in PGMs, such as MLNs, search for frequently occurring cliques in data~, lift them into logical clauses, and then learn the weights or probabilities.\n\tParameter learning techniques are often also extensions of well known statistical approaches such as least-squares regression~, gradient descent~, and expectation maximisation~.\n\t\\begin{boxedexample}[label = ex:probfoil]{Structure learning with ProbFoil}\n\t\tAs an illustration of structure learning techniques, we will focus on ProbFoil~.\n\t\tAssume that we are interested in learning the definition of \\textit{grandparent} from kinship data.\n\t\tThat is, we are given a set of examples of grandparent relations\n\t\t\\begin{lstlisting}\n\t\tgrandparent(abe,lisa).\n\t\tgrandparent(abe,bart).\n\t\tgrandparent(jacqueline,lisa).\n\t\tgrandparent(jacqueline,bart).\n\t\t\\end{lstlisting}\n\t\tand background knowledge containing the following facts:\n \\begin{lstlisting}\n\tfather(homer,lisa). \n father(homer,bart). \n father(abe,homer).\n\tmother(jacqueline,marge). \n mother(marge,bart).\n\tmother(marge,lisa).\n\t\t\\end{lstlisting}\n\t\tProbFoil iteratively searches for a single clause that covers as many examples as possible, until all examples are covered or adding more clauses does not improve the results. \n\t\tWhile searching for the best clause, it starts from the most general one, \\textit{grandparent(X,Y)}, which effectively states that every pair of people forms a grandparent relationship.\n\t\tThen it gradually specialises the clause by adding literals to the body. 
For instance, extending \\textit{grandparent(X,Y)} with a \\textit{mother/2} predicate results in the following clauses\n\t\t\\begin{lstlisting}\n\t\tgrandparent(X,Y) <- mother(X,Y).\n\t\tgrandparent(X,Y) <- mother(X,X).\n\t\tgrandparent(X,Y) <- mother(Y,X).\n\t\tgrandparent(X,Y) <- mother(Y,Y).\n\t\t\\end{lstlisting}\n\t\tExtending the initial clause with the \\textit{father/2} results in similar clauses.\n\t\tHaving the new candidate clauses, ProbFoil scores each candidate by counting how many positive and negative examples are covered.\n\t\tThese candidate clauses would not cover any examples and ProbFoil continues to refine the candidates by adding another literal to the body.\n\t\tThis would result in clauses of the following form:\n\t\t\\begin{lstlisting}\n\t\tgrandparent(X,Z) <- mother(X,Y), father(Y,Z).\n\t\tgrandparent(X,Z) <- mother(X,Y), mother(Y,Z).\n\t\tgrandparent(X,Z) <- father(X,Y), mother(Y,Z).\n\t\t...\n\t\\end{lstlisting}\n\t\tSome of the new candidates will cover only positive examples, such as \n \\begin{lstlisting}\n grandparent(X,Z) <- mother(X,Y), mother(Y,Z)\n \\end{lstlisting}\n that covers both examples\n \\begin{lstlisting}\n grandparent(jacqueline,lisa).\n grandparent(jacqueline,bart).\n \\end{lstlisting}\n\t\tHaving found one clause, ProbFoil learns the corresponding probability labels and adds the clause to the theory.\n\t\tProbFoil then repeats the same procedure, starting with the most general clause, to cover the remaining examples.\n\t\\end{boxedexample}", "id": "88f64637-a474-4f43-9b9a-5ccb928849a4", "level": "subsection", "origin_cites_number": 8, "parent_id": "3cf78a3e-98ed-45de-848f-3a61063e51b3", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." 
], [ "section", "Structure versus Parameter Learning" ], [ "subsection", "Implications for StarAI" ] ], "subsections": [], "title": "Implications for StarAI" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 5724, 8942, 2665, 5710, 5722, 7989, 3726, 5723, 3727 ], "content": "While StarAI learning techniques are categorised exclusively as either structure or parameter learning, \\nesy{} learning techniques combine both.\n\tWe will now discuss four groups of NeSy learning approaches: neurally-guided search, structure learning via parameter learning, program sketching, and implicitly structure learning.\n\t\\textit{Neurally guided structure search} is the \\nesy{} paradigm most similar to structure learning in StarAI.\n\tIt addresses one of the major weaknesses of StarAI structure learning methods - uninformed search over valid theories.\n\tInstead, neurally guided search relies on a \\textit{recognition model}, typically a neural network, to prioritise parts of the symbolic search space so that the target model can be found faster.\n\tGenerally speaking, the recognition model predicts the probability of a certain structure, e.g. a predicate or an entire clause, to be a part of the target model.\n\tFor instance, Deepcoder~ uses input-output examples to predict the probability of each predicate appearing in the target model.\n\tTherefore, Deepcoder turns a systematic search into an informed one by introducing a ranking over predicates in the search space.\n\tLikewise, EC$^{2}$~ derives the probability of a program solving the task at hand.\n\tSeveral approaches push this direction further and explore the idea of replacing an explicit symbolic model space with an implicit generative model over symbolic models ~. 
\n\tFor instance, in , the authors learn a generative model over grammar rules, conditioned on the examples.\n\tStructure learning is then performed by sampling grammar rules from the generative model, according to their probability, and evaluating them symbolically on the provided examples. \n\tThese approaches clearly show how symbolic search can be made tractable by introducing various forms of guidance via neural models.\n\tThese guidance-based approaches reduce, to a large extent, the most important weakness of symbolic structure learning approaches - the generation of many useless clauses or models.\n\tOn the other hand, these approaches often need large amounts of data for training, sometimes millions of examples even though creating data is relatively easy by enumerating random model structures and sampling examples from them~.\n\t\\begin{boxedexample}[label = ex:ngps]{Neurally-guided structure learning}\n\t\t\\rev{To illustrate neurally-guided search, we use the approach of Zhang et al. .\n StarAI techniques for structure learning typically perform a systematic search, which results in many useless models being tested.\n Given $N$ atoms, we can construct $N^l$ clauses of length $l$; this is an enormous space that is difficult to search efficiently.} \\\\\n \\rev{Zhang et al. sidestep the systematic search by introducing a neural network that chooses which programs to explore next.\n This search space is made of clauses such that an empty clause is at the top and children are extensions of the empty clause with all possible predicates; their children are further extensions with all individual atoms.} \\\\\n \\rev{The approach follows a top-down search strategy, exploring shorter clauses before longer ones, with a twist: instead of following a predefined order, the approach uses a neural network to decide which child to expand next. \n The approach can be viewed as a best-first search with a heuristic function implemented by a neural model. 
To this end, the network (1) encodes all literals in each clause separately, (2) scores all literals, (3) pools the scores of each literal per candidate, and (4) chooses the best candidate based on their scores.\n Ordering the search space in this way leads to substantial improvements in computation time, typically several orders of magnitude.}\n\t\\end{boxedexample}\n\tAn alternative way to reduce the combinatorial complexity of learning is to learn only a part of the program.\n\tThis is known as \\textit{program sketching:} a user provides an almost complete target model with certain parts being unspecified (known as \\textit{holes}). \n\tFor instance, when learning a model in the form of a (logic) program for sorting numbers or strings, the user might leave the comparison operator unspecified and provide the rest of the program.\n\tThe learning task is then to fill in the holes.\n\tExamples of \\nesy{} systems based on sketching are DeepProbLog and $\\partial$4, which fill in the holes in a (symbolic) program via neural networks.\n\tThe advantage of sketching is that it provides a nice interface for \\nesy{} systems, as the holes can be filled either symbolically or neurally.\n\tHoles provide a clear interface in terms of inputs and outputs and are agnostic to the specific implementation.\n\tThe disadvantage of sketching is that the user still needs to know, at least approximatively, the structure of the program.\n\tThe provided structure, the sketch, acts as a strong bias.\n\tDeciding which functionality is left as a hole is a non-trivial issue: as the sketch becomes less strict, the search space becomes larger. 
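Sketching in miniature: below, the sorting routine is the user-provided sketch and the comparison operator is the hole; the one-parameter "learned" comparator is a hypothetical stand-in for a neural module:

```python
def insertion_sort(xs, less_than):
    # User-provided sketch: the control structure is fixed, while the
    # comparison operator is a hole to be filled (symbolically or neurally).
    out = []
    for x in xs:
        i = 0
        while i < len(out) and less_than(out[i], x):
            i += 1
        out.insert(i, x)
    return out

# Hypothetical learned hole: a one-parameter scorer standing in for a
# neural network that has learned to compare numbers.
w = 1.0  # trainable parameter
learned_lt = lambda a, b: w * (a - b) < 0

print(insertion_sort([3, 1, 2], learned_lt))  # [1, 2, 3]
```

The hole exposes a clean input/output interface (two items in, a decision out), which is what lets systems such as DeepProbLog and $\partial$4 plug a neural network into an otherwise symbolic program.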
\n\t\\textit{Structure learning via parameter learning} (Example \\ref{ex:difflog}) is arguably the most prominent learning paradigm in \\nesy{}, positioned in between the two StarAI learning paradigms.\n\tStructure learning via parameter learning is technically equivalent to parameter learning in that the learning task consists of learning the probabilities of a fixed set of clauses.\n\tHowever, in contrast to StarAI in which the user carefully selects the informative clauses, the clauses are typically enumerated from user-provided templates of predefined complexity.\n\tConstructed in this way, the majority of clauses are noisy and erroneous and are of little use.\n\tThey would receive very low, but non-zero, probabilities.\n\t Approaches that follow this learning principle include NTPs , $\\partial$ILP , DeepProbLog, NeuralLP and DiffLog .\n\tThe advantage of structure learning via parameter learning is that it removes the combinatorial search from the learning.\n\tHowever, the number of clauses that needs to be considered is still extremely large, which leads to difficult optimisation problems (cf. ). Furthermore, irrelevant clauses are never removed from the model and are thus always considered during inference.\n\tThis can lead to spurious interactions even when low probabilities are associated to irrelevant clauses: as the number of irrelevant clauses is extremely large, their cumulative effect can be substantial.\n\t\\begin{boxedexample}[label = ex:difflog]{Structure learning via parameter learning}\n\t\tAs an illustration of structure learning via parameter learning, we focus on DiffLog .\n\t\tDiffLog expects the candidate clauses to be provided by the user.\n\t\tThe user can either provide the rules she knows are useful or construct them by using a clause template and instantiating it. 
\n\t\tGiven a set of positive examples, DiffLog proceeds by constructing \\textit{derivation trees} for each example.\n\t\tConsider the problem of learning the connectivity relation over a graph.\n\t\tThe input tuples (background knowledge in StarAI terminology) specify edges in a graph\n\t\t\\begin{lstlisting}\n\tedge(a,b). edge(b,c). edge(b,d). edge(d,e). edge(c,f).\n\t\t\\end{lstlisting}\n\t\tThe examples indicate the connectivity relations among the nodes in the graph (for simplicity, consider only the following two examples)\n\t\t\\begin{lstlisting}\n\t\tconnected(a,b). connected(a,c). \n\t\t\\end{lstlisting}\n\t\tAlso assume that the candidate clause set contains the following clauses (with $p_1$ and $p_2$ their weights):\n\t\t\\begin{lstlisting}\n\t\t$p_1$::connected(X,Y) <- edge(X,Y). \n\t\t$p_2$::connected(X,Y) <- edge(X,Z), connected(Z,Y).\n\t\t\\end{lstlisting}\n\t\tDerivation trees are essentially proofs of individual examples that correspond to branches in the SLD-tree .\n\t\tFor instance, the example \\textit{connected(a,b)} can be proven using the first clause, whereas the example \\textit{connected(a,c)} can be proven by chaining the two clauses ($connected(a,c) \\leftarrow edge(a,b), connected(b,c)$ and $connected(b,c) \\leftarrow edge(b,c)$).\n\t\tDiffLog uses derivation trees to formulate the learning problem as numerical optimisation over the weights associated with the rules.\n\t\tMore precisely, DiffLog defines the probability of deriving an example as the product of the weights associated to the clauses used in the derivation tree of the corresponding example.\n\t\tFor instance, DiffLog would formulate the learning problem for the two examples as follows\n\t\t$$\\min_{p_1, p_2} \\ \\underbrace{(1 - p_1)}_{\\small \\tt connected(a,b)} + \\underbrace{(1 - p_1\\times p_2)}_{\\small \\tt connected(a,c)}.$$\n\t\\end{boxedexample}\n\tThe last group of approaches learns the structure of a program only \\textit{implicitly}.\n\tFor instance, Neural 
Markov Logic Networks (NMLN) , a generalisation of MLNs, extract structural features from relational data.\n\tWhereas MLNs define potentials only over cliques defined by the structure (logical formulas) of a model, NMLNs add potentials over \textit{fragments} of data (projected over a subset of constants).\n\tNMLNs thus do not necessarily depend on the symbolic structure of the model, be it learned or provided by a user, but can still learn to exploit relational patterns present in data.\n\tMoreover, NMLNs can incorporate embeddings of constants.\n\tThe benefit of this approach is that it removes combinatorial search from learning and performs learning via more scalable gradient-based methods.\n\tHowever, one loses the ability to inspect and interpret the discovered structure.\n\tAdditionally, to retain tractability, NMLNs limit the size of fragments, which imposes limits on the complexity of the discovered relational structure.", "id": "9af5d6bb-eaff-4088-a25c-009703783367", "level": "subsection", "origin_cites_number": 15, "parent_id": "3cf78a3e-98ed-45de-848f-3a61063e51b3", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Structure versus Parameter Learning" ], [ "subsection", "Implications for NeSy" ] ], "subsections": [], "title": "Implications for NeSy" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:symb_vs_subsymb}\n\tIn neurosymbolic artificial intelligence, approaches can be characterized by the way they represent entities and relationships, yielding two classes: symbolic methods, where entities are represented using symbols such as strings and natural numbers, and subsymbolic methods, where entities are represented using numerical or distributed representations.\n\tSymbolic representations include constants $(an, bob)$, numbers $(4, -3.5)$, variables $(X, Y)$ and structured terms $f(t_1, ...
,t_n)$ where $f$ is a functor and the $t_i$ are \rev{constants}, variables or structured terms. \n \rev{Structured terms are a powerful construct that can represent arbitrary structures over entities, such as relations, lists or trees.}\n\tSubsymbolic AI systems, such as neural networks, require that entities are represented numerically using vectors, matrices or tensors. Throughout this section, we will call these subsymbolic representations or subsymbols. \n\tSubsymbolic AI systems usually require that these representations have a fixed size and dimensionality. Exceptions require special architectures and are still the subject of active research (e.g. RNNs for list-like inputs or GCNs for graph-type inputs).", "id": "bd238fce-3df5-4531-ba35-f4316c49d81b", "level": "section", "origin_cites_number": 1, "parent_id": "31670094-45b4-4854-914d-94aaf932ce65", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Symbolic vs subsymbolic representations" ] ], "subsections": [ "9758e29f-afe8-4348-b5ab-e226dfae12d2", "a1474eee-df30-4fdc-964c-c5b1bb43d21d" ], "title": "Symbolic vs subsymbolic representations" }, { "cite_extract_rate": 0, "cites": [], "content": "A powerful and elegant mechanism for reasoning with symbols in logic is {\em unification}. Essentially, it calculates the most general substitution that makes two symbols syntactically equal, if it exists. \rev{This does not allow one to compare two different entities, but allows one to find what two structured terms have in common.}\n\tFor instance, the terms $p(a,Y)$ and $p(X,b)$ can be unified using the substitution $\{X=a, Y=b\}$. \n\tConversely, due to their numerical nature, calculating the similarity between subsymbols is straightforward. \n\tSimilarity metrics such as the radial-basis function or distance metrics such as the L1 and L2 norm can be used.
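Both mechanisms can be made concrete in a few lines (a minimal sketch for flat, non-nested terms only; the uppercase-variable convention and helper names are our own):

```python
import math

# Minimal sketch: syntactic unification for flat terms, next to a numeric
# similarity for subsymbols. Convention (an assumption of this sketch):
# capitalised names are variables; no nested terms, so no occurs-check.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def unify(t1, t2):
    """Return a most general unifier of two flat terms, or None."""
    (f1, args1), (f2, args2) = t1, t2
    if f1 != f2 or len(args1) != len(args2):
        return None
    subst = {}
    for a, b in zip(args1, args2):
        a, b = subst.get(a, a), subst.get(b, b)
        if a == b:
            continue
        elif is_var(a):
            subst[a] = b
        elif is_var(b):
            subst[b] = a
        else:
            return None  # two distinct constants: no unifier exists
    return subst

print(unify(("p", ["a", "Y"]), ("p", ["X", "b"])))  # {'X': 'a', 'Y': 'b'}

def l2_similarity(x, y):
    """Radial-basis-style similarity of two subsymbols (vectors)."""
    return math.exp(-math.dist(x, y) ** 2)
```

Unification either succeeds with a substitution or fails; the numeric similarity always returns a score, which is exactly why a threshold for "sameness" is needed.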
However, it is not clear when to decide that two subsymbolically represented entities are the same.", "id": "9758e29f-afe8-4348-b5ab-e226dfae12d2", "level": "paragraph", "origin_cites_number": 0, "parent_id": "bd238fce-3df5-4531-ba35-f4316c49d81b", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Symbolic vs subsymbolic representations" ], [ "paragraph", "Comparing representations" ] ], "subsections": [ "a5362428-8a2e-441c-a432-e33fed8ed15b" ], "title": "Comparing representations" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1166, 1168 ], "content": "Many systems need to translate back and forth between symbolic and subsymbolic representations. In fact, a lot of research on deep learning is devoted to efficiently representing symbols so that neural networks can properly leverage them.\n\tA straightforward example is to translate symbols to a subsymbolic representation that can serve as input for a neural network. Generally, these symbols are replaced by a one-hot encoding or by learned embeddings. \n \\rev{Note, however, that this does not imply that the system can perform symbolic manipulation on this input. Rather, it serves as an index to a set of learned, latent embeddings.}\n\tA more interesting example is encoding relations in subsymbolic space. The wide variety of methods~ developed for this purpose indicates that this is far from a solved problem. \n\tDifferent encodings have different benefits. For example, TransE~ encodes relations as vector translations from subject to object embeddings. A disadvantage is that symmetric relations are represented by the null vector, and entities in symmetric relations are pushed towards each other.\n\tMore complex structures are even harder to represent. For example, there is currently a lot of research in how to utilize graph-structured data in neural networks (cf. 
\\ref{sec:kge_gnn}).\n\tTranslating from a subsymbolic representation back to a symbolic one happens, for example, at the end of a neural network classifier. Here, a subsymbolic vector needs to be translated to discrete classes. Generally, this happens through the use of a final layer with a soft-max activation function which then models the confidence scores of these classes as a categorical distribution. However, other options are possible. For example, some methods are only interested in the most likely class, and will use an arg-max instead. Alternatively, a Gumbel-softmax activation can be used as a differentiable approximation of sampling from the categorical distribution.", "id": "a5362428-8a2e-441c-a432-e33fed8ed15b", "level": "paragraph", "origin_cites_number": 3, "parent_id": "9758e29f-afe8-4348-b5ab-e226dfae12d2", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Symbolic vs subsymbolic representations" ], [ "paragraph", "Comparing representations" ], [ "paragraph", "Translating between representations" ] ], "subsections": [], "title": "Translating between representations" }, { "cite_extract_rate": 0.7272727272727271, "cites": [ 5708, 5707, 5711, 5726, 7304, 8943, 5725, 3727 ], "content": "In StarAI systems, the input, intermediate and output representations are all using the same symbolic representations. \\rev{Although there are StarAI systems that can support numerical values, these are still treated as symbols, which is different than a latent, subsymbolic representation.}\n\tIn neural systems, the input and intermediate representations are subsymbolic. The output representation can be either symbolic (e.g. classifiers) or subsymbolic (e.g. auto-encoders, GANs).\n\tThe most important aspect of neurosymbolic systems is that they combine symbolic and subsymbolic representations.\n\tNeSy systems can be categorized by how they do this. 
\n We distinguish several approaches.\n\tIn the first approach, the inputs are symbolic, but they are translated to subsymbols in a single translation step, after which the intermediate representations used during reasoning are purely subsymbolic. This approach is followed by the majority of NeSy methods. Some examples include Logic Tensor Networks~, Semantic-based Regularization~, Neural Logic Machines~ and TensorLog~.\n\t\begin{boxedexample}[label = ex:ltn]{Logic Tensor Networks}\n\t\tLogic tensor networks~ make this translation step explicit. The authors introduce the concept of a \textit{grounding} (not to be confused with the term grounding used in logic). Here, a grounding is a mapping of all symbolic entities onto their subsymbolic counterpart. More formally, the authors define a grounding as a mapping $\mathcal{G}$ where, with $n$ the embedding dimension:\n\t\t\begin{itemize}\n\t\t\t\item $\mathcal{G}(c) \in \mathbb{R}^n$ for every constant symbol $c$\n\t\t\t\item $\mathcal{G}(f): \mathbb{R}^{n \cdot m} \rightarrow \mathbb{R}^n$ for every function $f$ of arity $m$\n\t\t\t\item $\mathcal{G}(p): \mathbb{R}^{n \cdot m} \rightarrow [0,1]$ for every predicate $p$ of arity $m$\n\t\t\end{itemize}\n\t\tThe grounding of a clause is then performed by combining the aforementioned groundings using a t-norm.\n\t\end{boxedexample}\n\tIn the second approach, intermediate representations are both symbolic and subsymbolic, but not simultaneously. This means that some parts of the reasoning work on the subsymbolic representation, and other parts deal with the symbolic representation, but not at the same time.\n\tThis is indicative of NeSy methods that implement an interface between the logic and neural aspect.\n\tThis approach is more natural for systems that originate from a logical framework such as DeepProbLog~, NeurASP~, ABL~ and NLog~.\n\t\begin{boxedexample}[label = ex:abl]{ABL}\n\t\tIn ABL~ there are three components that function in an alternating fashion.
There is a perception model, a consistency checking component and an abductive reasoning component.\n\t\tTake for example the task where there are 3 MNIST images that need to be recognized such that the last is the result of applying an operation on the first two (e.g. $\\digit{3}+\\digit{5}=\\digit{8}$). The structure of the expression is given as background knowledge, but the exact operation (addition) needs to be abduced.\n\t\tFirst, the perception model classifies the images into pseudo-labels, using the most likely prediction (i.e. arg-max). \n\t\tThe abductive reasoning component then tries to abduce a logically consistent hypothesis. \n\t\tFor example, if the digits are correctly classified as $3$, $5$ and $8$, the only logically consistent hypothesis is that the operation is an addition.\n\t\tIf this is not possible, there is an error in the pseudo-labels. \n\t\tA heuristic function is then used to determine which pseudo-labels are wrong.\n\t\tThe reasoning module then searches for logically consistent pseudo-labels. These revised pseudo-labels are then used to retrain the perception model.\n\t\\end{boxedexample}\n\tIn the final approach, intermediate representations are considered simultaneously as symbolic and subsymbolic by the reasoning mechanism.\n This is implemented in only a few methods, such as the NTP and the CTP. \n\t\\begin{boxedexample}[label = ex:ntp]{Neural Theorem Prover}\n\t\tIn the Neural Theorem Prover, two entities can be unified if they are similar, and not just if they are identical. As such, the NTP interweaves both symbols and subsymbols during inference. For each symbol $S$, there is a learnable subsymbol $T_S$.\n\t\tSoft-unification happens by applying the normal unification procedure where possible. However, if two symbols $S_1$ and $S_2$ can not be unified, the comparison is assigned a score based on the similarity between $T_{S_1}$ and $T_{S_2}$. 
The similarity is calculated using a radial basis function $\\varphi(||x-y||_2)$.\n\t\tFor example, to unify \\texttt{mother(an,bob)} and \\texttt{parent(X,bob)}, soft-unification proceeds as follows:\n\t\t\\begin{align*}\n\t\t\\{\\mathtt{mother(an,bob)} &= \\mathtt{parent(X,bob)}\\} \\\\\n\t\t&\\Downarrow \\quad \\varphi(\\mathtt{mother}, \\mathtt{parent}) \\\\\n\t\t\\{\\mathtt{an} = \\mathtt{X}&, \\mathtt{bob} = \\mathtt{bob}\\} \\\\\n\t\t&\\Downarrow \\quad X = an \\\\\n\t\t\\{\\mathtt{bob} &= \\mathtt{bob}\\} \\\\\n\t\t&\\Downarrow\\\\\n\t\t&\\{~\\}\n\t\t\\end{align*}\n\t\tSoft-unification is not only used to learn which constants and predicates are similar, but can also be used to perform rule learning. By adding new, parameterized rules with unique predicates, soft-unification allows these new predicates to become very similar to other predicates and as such behave as newly introduced rules.\n\t\tFor example, consider the program consisting of the fact \\textit{mother(an,bob)} and a single parameterized rule $r1(X,Y) \\leftarrow r2(Y,X)$. 
The Neural Theorem Prover can answer the query \textit{child(bob,an)} as follows: \n\t\t\begin{center}\n\tikzset{every picture/.style={line width=0.75pt}} \n\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]\n\draw (240,60) -- (140,100) ;\n\draw (240,60) -- (340,100) ;\n\draw (340,140) -- (340,180) ;\n\draw (201,32) node [anchor=north west][inner sep=0.75pt] [align=left] {$\mathtt{child(bob,an)}$};\n\draw (299,111) node [anchor=north west][inner sep=0.75pt] [align=left] {$\mathtt{r2(an,bob)}$};\n\draw (335,191) node [anchor=north west][inner sep=0.75pt] [align=left] { $\mathtt{T}$ };\n\draw (131,112) node [anchor=north west][inner sep=0.75pt] [align=left] { $\mathtt{T}$ };\n\draw (151,62) node [anchor=north west][inner sep=0.75pt] [align=left] {(1)};\n\draw (299,62) node [anchor=north west][inner sep=0.75pt] [align=left] {(2)};\n\draw (349,151) node [anchor=north west][inner sep=0.75pt] [align=left] {(3)};\n\end{tikzpicture}\n\t\t\end{center}\n\t\t\begin{align*}\n\t\t(1) ~& \mathtt{child} = \mathtt{mother} &\varphi(||T_{child}-T_{mother}||_2)\\\n\t\t& \mathtt{bob} = \mathtt{an} &\varphi(||T_{an}-T_{bob}||_2)\\\n\t\t(2)~ & \mathtt{child} = \mathtt{r1} &\varphi(||T_{child}-T_{r1}||_2)\\\n\t\t(3) ~& \mathtt{r2} = \mathtt{mother} &\varphi(||T_{r2}-T_{mother}||_2)\n\t\t\end{align*}\n\t\tThe figure above shows the two possible derivations the neural theorem prover can make to infer \textit{child(bob, an)}. On the one hand, it can soft-unify with the fact \textit{mother(an, bob)}, where \textit{mother} unifies with \textit{child} and \textit{an} with \textit{bob}. On the other hand, it can use the parameterized rule which encodes an inverse relation. In that case, \textit{child} unifies with \textit{r1} and \textit{r2} with \textit{mother}. If we optimize the subsymbolic embeddings for the latter, this will be equivalent to learning the rule $child(X,Y) \leftarrow mother(Y,X)$.
This example also shows that soft-unification potentially adds a lot of different proofs, which can result in computational problems. This problem was solved in later iterations of the system .\n\t\end{boxedexample}", "id": "a1474eee-df30-4fdc-964c-c5b1bb43d21d", "level": "subsection", "origin_cites_number": 11, "parent_id": "bd238fce-3df5-4531-ba35-f4316c49d81b", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey."
], [ "section", "Symbolic vs subsymbolic representations" ], [ "subsection", "Implications for StarAI and NeSy" ] ], "subsections": [], "title": "Implications for StarAI and NeSy" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:paradigms}\n\tWhen two or more paradigms are integrated, examining which of the base paradigms are preserved, and to which extent, tells us a lot about the strengths and weaknesses of the resulting paradigm.\n\tIt has been argued that when combining different perspectives in one model or framework, such as logic, probabilistic and neural ones, it is desirable to have the original paradigms as a special case.\n\t\\rev{In this section, we analyze to which extent different models in StarAI and NeSy preserve the three basic paradigms. Intuitively, with preserving we mean to which extent one can exactly replicate the model and inference algorithm of the original paradigm. We will use the capital letters \\textit{L, P} and \\textit{N} to label systems where the logic, probability and neural paradigms can be recovered in full. We will use lowercase letters (i.e. \\textit{l, p} and \\textit{n}) when a method only partially recovers these paradigms, i.e. retain some but not all of the features.} The absence of a letter means that the paradigm is not considered by an approach.", "id": "322e754b-6517-46c9-9006-90e863943e5b", "level": "section", "origin_cites_number": 1, "parent_id": "31670094-45b4-4854-914d-94aaf932ce65", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." 
], [ "section", "Logic vs Probability vs Neural" ] ], "subsections": [ "64c621c6-ac4f-48ef-b44f-303dd6942485", "b873315e-f300-4fef-8037-818fbe2be872" ], "title": "Logic vs Probability vs Neural" }, { "cite_extract_rate": 0, "cites": [], "content": "Traditionally, StarAI focused on the integration of logic and probability.\n\t\\textbf{\\textit{lP:}}\n\tThe classical knowledge-based model construction approach uses logic only to generate a probabilistic graphical model. \n\tThus the graphical model can be used to define the semantics of the model and also to perform inference.\n\tThis can make it harder to understand the effects of applying logical inference rules to the model. \n\tFor instance, in MLNs, the addition of the resolvent of two weighted rules makes it hard to predict the effect on the distribution. \n\t\\textbf{\\textit{Lp}:} On the other hand, the opposite holds for probabilistic logic programs (PLPs) and their variants. \n\tWhile the effect of a logical operation is clear, it is harder to identify and exploit properties such as conditional or contextual independencies, that are needed for efficient probabilistic inference.", "id": "64c621c6-ac4f-48ef-b44f-303dd6942485", "level": "subsection", "origin_cites_number": 0, "parent_id": "322e754b-6517-46c9-9006-90e863943e5b", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Logic vs Probability vs Neural" ], [ "subsection", "StarAI: Logic + Probability" ] ], "subsections": [], "title": "StarAI: Logic + Probability" }, { "cite_extract_rate": 0.625, "cites": [ 5727, 5710, 5711, 5719, 3726 ], "content": "In NeSy, we consider a third paradigm: neural computation.\n\tWith neural computation, we refer mainly to the set of models and techniques that allows for exploiting (deep) latent spaces to learn intermediate representations. This includes dealing with perceptual inputs and also dealing directly with embeddings of symbols. 
\n\t\\textbf{\\textit{lN}}: Many \\nesy{} approaches focus on the neural aspect (i.e., they originated as a neural method to which logical components have been added). For example, LTNs and SBRs turn the logic into a regularization function to provide a penalty whenever the logical constraints are violated. At test time the logical loss component is dropped and only the network is used to make predictions. Moreover, by using fuzzy logic, these methods do not integrate the probabilistic paradigm. \n\t\\textbf{\\textit{Ln}}: Another class of \\nesy{} methods does retain the focus on logic. \n\tThese methods usually expand an existing logical framework into a differentiable version. Examples include LRNNs , TensorLog , DiffLog , $\\partial$ILP , $\\partial$4 and NTPs .\n\tThe key inference concepts are mapped onto an analogous concept that behaves identically for the edge cases but is continuous and differentiable in non-deterministic cases. \n\tAs described in the previous sections, many such systems cast logical inference as forward or backward chaining. The focus on logic is clear if one considers that logical inference is performed symbolically to build the network and the semantics is relaxed only in a subsequent stage to learn the parameters. \\rev{While the architecture mimics the logical reasoning, it is often far from the deep-stacked architecture of neural networks.}\n \\textbf{\\textit{LN}}: \\rev{It is worth mentioning a later iteration of LRNN, where the framework has been extended to allow for tensorial weights on atoms and custom aggregation functions . In that framework, it is shown how specifying logic rules can be regarded as specifying the layers of a deep architecture. 
This provides a nice and complete integration between forward-chaining logical reasoning and neural networks\n that is able to implement any existing neural architecture.\n }\n\t\\textbf{\\textit{lPN} and \\textit{LpN}} There are two final classes of methods that start from existing StarAI methods, \\textit{lP} and \\textit{Lp} respectively, and extend them with primitives that can be interfaced with neural networks and allow for differentiable operations.\n\tIn the \\textit{lPN} class, NeSy methods such as SL, RNMs and NMLNs follow the knowledge-based model construction paradigm. In the \\textit{LpN} class, methods such as DeepProbLog and NeurASP extend PLP. \nThere is usually a trade-off that one must make: \t\nsystems in the \\textit{lN} or \\textit{Ln} classes are usually more scalable but \\textit{(i)} do not model a probability distribution and \\textit{(ii)} often relax the logic. On the contrary, \\textit{LpN} or \\textit{lPN} systems preserve the original paradigms but at the cost of more complex inference (e.g. they usually resort to exact probabilistic inference).\n\tAn aspect that significantly aids in developing a common framework, and analysing its properties, is the development of an intermediate representation language that can serve as a kind of \\emph{assembly language} . \n\tOne such idea concerns performing probabilistic inference by mapping it onto a weighted model counting (WMC) problem. \n\tThis can then in turn be solved by compiling it into a structure (e.g. an arithmetic circuit) that allows for efficient inference.\n\tThis has the added benefit that this structure is differentiable, which facilitates the integration between logic based systems and neural networks. 
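As a toy illustration of this compilation idea (our own sketch, not tied to any particular system): for a query over independent probabilistic facts, the WMC sum is a polynomial in the fact probabilities, so the compiled structure can be both evaluated and differentiated.

```python
from itertools import product

# Toy sketch of probabilistic inference via weighted model counting (WMC):
# the query probability is the total weight of the worlds (truth
# assignments) in which the query holds. The facts and query below are
# hypothetical examples, not from any benchmark.

def wmc(query, probs):
    names = list(probs)
    total = 0.0
    for values in product([True, False], repeat=len(names)):
        world = dict(zip(names, values))
        weight = 1.0
        for name in names:
            weight *= probs[name] if world[name] else 1.0 - probs[name]
        if query(world):
            total += weight
    return total

probs = {"f1": 0.3, "f2": 0.5}
p = wmc(lambda w: w["f1"] or w["f2"], probs)
print(p)  # 1 - (1 - 0.3)(1 - 0.5) = 0.65
```

Read as an arithmetic circuit over the symbols $p_1, p_2$, the same sum is $p_1 + p_2 - p_1 p_2$, whose partial derivatives (e.g. $\partial p / \partial p_1 = 1 - p_2$) are exactly what gradient-based training of the upstream neural components needs.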
\n\t StarAI based systems often use this approach.", "id": "b873315e-f300-4fef-8037-818fbe2be872", "level": "subsection", "origin_cites_number": 8, "parent_id": "322e754b-6517-46c9-9006-90e863943e5b", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Logic vs Probability vs Neural" ], [ "subsection", "NeSy: Logic + Probability + Neural" ] ], "subsections": [], "title": "NeSy: Logic + Probability + Neural" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:tasks}\n\t\\rev{In this Section, we analyze the learning tasks to which the \\nesy{} models considered in this paper have been applied.}", "id": "528b4897-8064-4ac1-a40f-dc1e7d30614f", "level": "section", "origin_cites_number": 0, "parent_id": "31670094-45b4-4854-914d-94aaf932ce65", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Tasks" ] ], "subsections": [ "d6105bd2-5f38-4f97-a7bd-ae4de168c174" ], "title": "Tasks" }, { "cite_extract_rate": 0.5714285714285711, "cites": [ 181, 7304, 1806, 3727 ], "content": "\\rev{A classical task in NeSy is to use logic as distant supervision for a learning model. Here, input $X$ is paired with label $y$. However, instead of using a single model to map $X$ to $y$, the input $X$ is firstly mapped to a set of intermediate concepts $C$ by a (set of) neural networks. Then, these concepts are used to compute $y$ in a symbolic way. Logic programs are usually exploited to map the concepts $C$, represented as logical atoms, to the label $y$, which represents the logical query. Therefore, the neural networks are not directly supervised (on $C$) but they are only distantly supervised through the label $y$ and the knowledge contained in the logic program. 
The intuition is that when the label $y$ is only weakly linked to the input, it is more convenient to break the task into several easier subtasks and then compose them using background knowledge in the form of a logic program. Notice that the logic program is fundamental for the inference. Without the program, the networks will not be able to solve their subtasks, as there is no direct supervision. Moreover, by splitting the task into subtasks, the inference done by the composite system (neural + logic) is far more explainable than a corresponding end-to-end neural network. A classical example is the MNIST addition , shown in Example \ref{ex:mnist_addition}. Distant supervision tasks are very common in prototypical systems such as DeepProbLog, DeepStochLog, NLog, NeurASP, SATNet . A downside of such tasks is that, to enable learning of untrained neural subtasks, the logic has to consider all possible combinations of concepts that are compatible with the label $y$, even though only a few (or one) are correct. The challenge is to balance the exploration of multiple combinations with a greedy strategy for scaling to larger problems . Other problems falling in this category are scene parsing, image segmentation and semantic image interpretation.}\n \begin{boxedexample}[label = ex:mnist_addition]{MNIST Addition} \n Given the classical MNIST dataset, $\mathcal{D} = \{(x_i,y_i)\}$, with $x_i$ an MNIST image, and $y_i$ its numeric label, the MNIST addition dataset is built by mapping pairs of images to the label representing their sum. In particular, $\mathcal{D}_{\text{add}} = \{(x_i,x_j,z_{ij}) : z_{ij} = y_i + y_j \land (x_i,y_i),(x_j,y_j) \in \mathcal{D}\}$. The idea is to learn to classify the digits without direct supervision on their labels, but only using distant supervision about sums of such images. \n The task is often also coupled with background knowledge of what addition is, e.g.
in Prolog syntax:\n \\begin{lstlisting}\naddition(X1, X2, Z) <- digit(X1,Y1), digit(X2,Y2), Z is Y1 + Y2. \n \\end{lstlisting}\n Such knowledge is used to reason about the (most-likely) pairs \\texttt{Y1,Y2} that sum to the provided label \\texttt{Z}. Logic is then used to link the actual outputs of the learning model \\texttt{Y1,Y2} to the distant supervision \\texttt{Z}.\n \\end{boxedexample}", "id": "d6105bd2-5f38-4f97-a7bd-ae4de168c174", "level": "paragraph", "origin_cites_number": 7, "parent_id": "528b4897-8064-4ac1-a40f-dc1e7d30614f", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Tasks" ], [ "paragraph", "Distant Supervision" ] ], "subsections": [ "d9eb4e04-ce78-4d16-9f72-94154473deea" ], "title": "Distant Supervision" }, { "cite_extract_rate": 0.5714285714285711, "cites": [ 5714, 3741, 5720, 5709 ], "content": "\\rev{A related class of tasks is semi-supervised classification } with knowledge. Here, the starting point is a standard classification task, where a set of inputs $X$ is mapped by a neural model to a set of labels $C$. However, we are also provided with some additional knowledge $y$ related to the labels $C$ of the inputs. This knowledge is often expressed in terms of logical rules and programs. The setting is very similar to distant supervision, where we have three levels: inputs $X$, concepts $C$ and additional labels $y$. However, in this case, we have also access to supervision for some (usually few) concepts $C$. Although this task could be tackled in a purely supervised way by discarding the information contained in $y$, \\nesy{} approaches can improve the predictions of several input patterns using the external knowledge. When the external knowledge is relating concepts $C$ of multiple patterns, the task is called collective classification , as one can improve the accuracy on multiple patterns by collectively predicting their classes. 
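Returning briefly to Example \ref{ex:mnist_addition}: the enumeration of all concept combinations compatible with a label can be sketched as follows (a toy sketch with hypothetical classifier outputs, not any system's actual implementation):

```python
# Distant-supervision semantics of the MNIST addition example: given the
# softmax outputs p1 and p2 of the two digit classifiers, the probability
# of observing sum Z marginalises over all digit pairs (Y1, Y2) with
# Y1 + Y2 = Z.

def prob_of_sum(p1, p2, z):
    return sum(p1[y1] * p2[y2]
               for y1 in range(10) for y2 in range(10) if y1 + y2 == z)

# Hypothetical classifier outputs: the first image is surely a 3, the
# second puts mass 0.8 on 5 and 0.2 on 6.
p1 = [0.0] * 10; p1[3] = 1.0
p2 = [0.0] * 10; p2[5] = 0.8; p2[6] = 0.2

print(prob_of_sum(p1, p2, 8))  # only the pair (3, 5) sums to 8 -> 0.8
```

The sum ranges over all 100 digit pairs even though at most a handful match the label, which is precisely the combinatorial overhead mentioned above.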
A classical example in this setting is document classification in citation networks, cf. Example \\ref{ex:citation}. By treating the information contained in $y$ as extra knowledge, these tasks are often tackled using regularization based systems, like SBR, DLM, RNM or Semantic Loss. However, logic programs can also be used to simulate a label-passing scheme along the citation network, as done in DeepStochLog . A characteristic of this class of tasks is that the additional information $y$ is often very noisy (e.g. the manifold rule in the citation network is not always valid). While this task is closely related to distant supervision, there is an important difference: in semi-supervised classification, the additional knowledge $y$ is meant to provide an additional signal, which, however, would not suffice in the absence of direct supervision on the concepts $C$. \n \\begin{boxedexample}[label = ex:citation]{Document classification in citation networks}\n In document classification in citation networks, we are provided with both labelled and unlabelled scientific papers. A label is often the domain area of the paper (e.g. Machine Learning, Artificial Intelligence, Databases, etc.). \n However, a network of citations between papers is also provided, linking papers between domains.\n The idea of the document classification task is that in many domains, a paper cited by other papers with a certain label is likely to belong to the same domain. When classifying a document, one has to balance the signal coming from the features of the document (i.e. words) and that coming from neighbors in the citation network to provide a \\textit{collective} prediction.\n In \\nesy{} systems, this is usually done by coupling the subsymbolic model with a rule of the following type:\n \\begin{lstlisting}\n w:: domain(X,Y) <- cite(X,X1), domain(X1,Y). \n \\end{lstlisting}\n The rules get a different weight according to the domain to account for the differences between them. 
\n \\end{boxedexample}", "id": "d9eb4e04-ce78-4d16-9f72-94154473deea", "level": "paragraph", "origin_cites_number": 7, "parent_id": "d6105bd2-5f38-4f97-a7bd-ae4de168c174", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Tasks" ], [ "paragraph", "Distant Supervision" ], [ "paragraph", "Semi-supervised Classification" ] ], "subsections": [ "c47d6a00-5a1b-40c6-9278-09edc1c79d9b", "d7c376c1-f6e7-41a6-9b6e-28a3f939810e", "f4d6de92-316d-4daa-a0ba-2c746822d5de" ], "title": "Semi-supervised Classification" }, { "cite_extract_rate": 0.5, "cites": [ 8942, 5710, 5711, 5720 ], "content": "\\rev{Another common task in \\nesy{} is knowledge graph completion (KGC) or link prediction. A knowledge graph (KG) is a pair $(E,R)$, where $E$ is the set of entities and $R$ the set of edges. In a KG, an edge is a triple $(e_1, r, e_2)$, where $e_1$ and $e_2$ are the head and tail of the edge and $r$ is the relation between them. In a KGC task, the goal is to predict missing edges in the input graph. Link prediction has been one of the key tasks in StarAI , and more recently also in NeSy as NeSy allows one to merge symbolic reasoning (from StarAI) with the recent geometric deep learning approaches based on Knowledge Graph Embeddings (KGE) and Graph Neural Networks . \\nesy{} systems focusing on this task include NTPs , NMLN , DLM , DiffLog , TensorLog .}", "id": "c47d6a00-5a1b-40c6-9278-09edc1c79d9b", "level": "paragraph", "origin_cites_number": 8, "parent_id": "d9eb4e04-ce78-4d16-9f72-94154473deea", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." 
], [ "section", "Tasks" ], [ "paragraph", "Distant Supervision" ], [ "paragraph", "Semi-supervised Classification" ], [ "paragraph", "Knowledge Graph Completion" ] ], "subsections": [], "title": "Knowledge Graph Completion" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 8942, 7988 ], "content": "Most previously mentioned tasks can be described as classification\\footnote{Even though many of them use a generative model to tackle the classification task, instead of a conditional one.}. \\nesy{} has recently also focused on tasks concerned with modeling the input data distribution as accurately as possible. The goal is then to sample new patterns from the learned distribution. The idea behind \\nesy{} generative approaches is that one can learn important features from data using deep generative models (e.g. variational auto-encoders or Markov Chain Monte Carlo methods). Combining symbolic features with logical reasoning can be used to control, stratify and simplify the inference. The generative modeling can either refer to the relational structure, e.g. molecule generation in NMLNs , or to the subsymbolic space, e.g. image generation in VAEL or .", "id": "d7c376c1-f6e7-41a6-9b6e-28a3f939810e", "level": "paragraph", "origin_cites_number": 3, "parent_id": "d9eb4e04-ce78-4d16-9f72-94154473deea", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Tasks" ], [ "paragraph", "Distant Supervision" ], [ "paragraph", "Semi-supervised Classification" ], [ "paragraph", "Generative Tasks" ] ], "subsections": [], "title": "Generative Tasks" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 2665, 5710, 5728, 3726, 3727 ], "content": "\\rev{Rather than exploiting symbolic knowledge in predictive tasks, one can also induce \n \\textit{symbolic knowledge}. 
However, when this is not the case, we can still apply several neurosymbolic techniques by learning the symbolic knowledge, as explored in Section \\ref{sec:struct}. The unknown symbolic knowledge is then the actual target to be learned. A classical example is \\textit{program synthesis}, where the goal is to learn the program from positive and negative examples of the desired input-output behaviour. Ideally, all positive pairs and none of the negatives should be covered. Many systems learn logic programs, e.g. NTPs , $\\partial$ILP , DeepProbLog, NeuralLP, DiffLog, DeepCoder.\n Sometimes, the input-output pairs are not part of the training dataset, but are actually generated by a black-box neural model. The induced programs then explain the behaviour of the model, which relates NeSy to the domain of \\textit{explainability} .}", "id": "f4d6de92-316d-4daa-a0ba-2c746822d5de", "level": "paragraph", "origin_cites_number": 7, "parent_id": "d9eb4e04-ce78-4d16-9f72-94154473deea", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Tasks" ], [ "paragraph", "Distant Supervision" ], [ "paragraph", "Semi-supervised Classification" ], [ "paragraph", "Knowledge Induction" ] ], "subsections": [], "title": "Knowledge Induction" }, { "cite_extract_rate": 0, "cites": [], "content": "To conclude, we list some interesting challenges for \\nesy{}.", "id": "35970a41-0514-46a0-8d9d-63d0769ff87a", "level": "section", "origin_cites_number": 0, "parent_id": "31670094-45b4-4854-914d-94aaf932ce65", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." 
], [ "section", "Open Challenges" ] ], "subsections": [ "a1d98b58-9672-4eb5-80a4-d72f0adf272e" ], "title": "Open Challenges" }, { "cite_extract_rate": 0, "cites": [], "content": "The statistical relational AI and probabilistic graphical model communities have devoted a lot of attention to the semantics of its models. This has resulted in several clear choices (such as directed vs. undirected, trace-based vs. possible world ), with corresponding strengths and weaknesses that clarify the relationships between the different models. Workshops have been held on this topic\\footnote{For instance, \\url{https://pps2018.luddy.indiana.edu/}}. Furthermore, some researchers have investigated how to transform one type of model into another . At the same time, weighted model counting has emerged as a common assembly language for inference. The situation in neurosymbolic computation today is very much like that of the early days in statistical relational learning, in which there were many competing formalisms, sometimes characterized as the statistical relational learning alphabet soup. It would be great to get more insight into the semantics of neurosymbolic approaches and their relationships. This survey hopes to contribute towards this goal.", "id": "a1d98b58-9672-4eb5-80a4-d72f0adf272e", "level": "paragraph", "origin_cites_number": 2, "parent_id": "35970a41-0514-46a0-8d9d-63d0769ff87a", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." 
], [ "section", "Open Challenges" ], [ "paragraph", "Semantics" ] ], "subsections": [ "6f37ac7d-158b-4d60-89b1-c3be4d76632d" ], "title": "Semantics" }, { "cite_extract_rate": 0, "cites": [], "content": "Although relatively few methods explore the integration of logical and neural methods from a probabilistic perspective, we believe that a probabilistic approach is a very natural way to integrate the two, since it has been shown how one can recover the single methods as special cases. However, many open questions remain. Probabilistic inference is computationally expensive, usually requiring approximations. It would be interesting to determine exactly how probabilistic approximate inference compares with other approximations based on relaxations of the logic, like fuzzy logic.", "id": "6f37ac7d-158b-4d60-89b1-c3be4d76632d", "level": "paragraph", "origin_cites_number": 1, "parent_id": "a1d98b58-9672-4eb5-80a4-d72f0adf272e", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Open Challenges" ], [ "paragraph", "Semantics" ], [ "paragraph", "Probabilistic reasoning" ] ], "subsections": [ "2225ccc8-0707-44bd-8f81-0bdebfb52224", "422c662c-f7da-4490-a64b-1d55a3d7423a", "588fd0c9-1da5-47a9-91a7-ef4be62e8832", "9539d3a1-1ce6-40b0-8897-7d2fd89a6849", "986b4d82-d655-4355-b6bd-d14bee40b72f" ], "title": "Probabilistic reasoning" }, { "cite_extract_rate": 0, "cites": [], "content": "The selection of the t-norm fuzzy logic and the corresponding translation of the connectives is very heterogeneous in the literature. It is often unclear which properties of Boolean logic a model is preserving, while there is a tendency to consider fuzzy logic as a continuous surrogate of Boolean logic without considering implications for the semantics. There is a clear need for further studies in this field. 
On the one hand, one may want to define new models which are natively fuzzy, thus not requiring a translation from Boolean logic. On the other hand, an interesting research direction concerns the characterisation of what are appropriate fuzzy approximations of Boolean logic relative to a set of properties that one wants to preserve (see Section \\ref{sec:semantics}).", "id": "2225ccc8-0707-44bd-8f81-0bdebfb52224", "level": "paragraph", "origin_cites_number": 0, "parent_id": "6f37ac7d-158b-4d60-89b1-c3be4d76632d", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Open Challenges" ], [ "paragraph", "Semantics" ], [ "paragraph", "Probabilistic reasoning" ], [ "paragraph", "Fuzzy semantics" ] ], "subsections": [], "title": "Fuzzy semantics" }, { "cite_extract_rate": 0, "cites": [], "content": "While significant progress has been made on learning the structure of purely relational models (without probabilities), learning StarAI models remains a major challenge due to the complexity of inference and the combinatorial nature of the problem.\n\tIncorporating neural aspects complicates the problem even more.\n\t\\nesy{} methods have certainly shown potential for addressing this problem (Section \\ref{sec:struct}), but the existing methods are still limited and mostly domain-specific which impedes their wide application.\n\tFor instance, the current systems that support structure learning require user effort to specify the clause templates or write a sketch of a model.", "id": "422c662c-f7da-4490-a64b-1d55a3d7423a", "level": "paragraph", "origin_cites_number": 0, "parent_id": "6f37ac7d-158b-4d60-89b1-c3be4d76632d", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." 
], [ "section", "Open Challenges" ], [ "paragraph", "Semantics" ], [ "paragraph", "Probabilistic reasoning" ], [ "paragraph", "Structure learning" ] ], "subsections": [], "title": "Structure learning" }, { "cite_extract_rate": 0, "cites": [], "content": "Scalable inference is a major challenge for StarAI and therefore also for \\nesy{} approaches with an explicit logical or probabilistic reasoning component.\n\tInvestigating to what extent neural methods can help with this challenge by means of lifted (exploiting symmetries in models) or approximate inference, as well as reasoning from intermediate representations , are promising future research directions.", "id": "588fd0c9-1da5-47a9-91a7-ef4be62e8832", "level": "paragraph", "origin_cites_number": 1, "parent_id": "6f37ac7d-158b-4d60-89b1-c3be4d76632d", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Open Challenges" ], [ "paragraph", "Semantics" ], [ "paragraph", "Probabilistic reasoning" ], [ "paragraph", "Scaling inference" ] ], "subsections": [], "title": "Scaling inference" }, { "cite_extract_rate": 0, "cites": [], "content": "A major advantage of StarAI methods, as compared to neural ones, is their data efficiency -- StarAI methods can efficiently learn from small amounts of data, whereas neural methods are data hungry.\n\tOn the other hand, StarAI methods do not scale to big data sets, while neural methods can easily handle them.\n\tWe believe that understanding how these methods can help each other to overcome their complementary weaknesses, is a promising research direction.", "id": "9539d3a1-1ce6-40b0-8897-7d2fd89a6849", "level": "paragraph", "origin_cites_number": 0, "parent_id": "6f37ac7d-158b-4d60-89b1-c3be4d76632d", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." 
], [ "section", "Open Challenges" ], [ "paragraph", "Semantics" ], [ "paragraph", "Probabilistic reasoning" ], [ "paragraph", "Data efficiency" ] ], "subsections": [], "title": "Data efficiency" }, { "cite_extract_rate": 1, "cites": [ 5729, 5730 ], "content": "The effectiveness of deep learning comes from the ability to change the representation of the data so that the target task becomes easier to solve.\n\tThe ability to change the representation also at the symbolic level would significantly increase the capabilities of \\nesy{} systems.\n\tThis is a major open challenge for which neurally inspired methods could help achieve progress .\n\t\\section*{Acknowledgements}\n\tThis work has received funding from the Research Foundation-Flanders (FWO)\n\t(G. Marra: 1239422N, S. Dumančić: 12ZE520N, R. Manhaeve: 1S61718N). \n\tLuc De Raedt has received funding from the Flemish Government (AI Research Program), from the FWO, from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 694980 SYNTH: Synthesising Inductive Data Models) and the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. This work was also supported by TAILOR, a project funded by EU Horizon 2020 research and innovation programme under GA No 952215.\n\t\\bibliographystyle{plain}\n\t\\bibliography{main}\n\t\\appendix", "id": "986b4d82-d655-4355-b6bd-d14bee40b72f", "level": "paragraph", "origin_cites_number": 2, "parent_id": "6f37ac7d-158b-4d60-89b1-c3be4d76632d", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." 
], [ "section", "Open Challenges" ], [ "paragraph", "Semantics" ], [ "paragraph", "Probabilistic reasoning" ], [ "paragraph", "Symbolic representation learning" ] ], "subsections": [], "title": "Symbolic representation learning" }, { "cite_extract_rate": 1, "cites": [ 7233 ], "content": "\\label{sec:kge_gnn}\n\tIn this appendix, we introduce two approaches, namely Knowledge Graph Embeddings and Graph Neural Networks, which are commonly used in relational tasks in the deep learning community. We analyze them in the spirit of the seven dimensions introduced in the paper. In fact, it turns out that they share many features with neurosymbolic systems. In this way, we would like to suggest that NeSy can also be found at the intersection of statistical relational and geometric deep learning approaches.\n\tOne of the most popular relational representations is that of a knowledge graph (KG). A KG is\n\ta multi-relational graph composed of entities (i.e. nodes) and relations (i.e. edges). It is common to represent an edge as a triple of the form: \\textit{(head entity, relation, tail entity)}, e.g. ($homer$, $fatherOf$, $bart$).\n\tStarAI has been extensively used to solve many tasks on KGs: prediction of missing relationships (i.e. knowledge graph completion), prediction of properties of entities, or clustering entities based on their connectivity patterns. StarAI is particularly well suited to reasoning with knowledge graphs since it models explicitly the probabilistic dependencies among different relationships. \n\tHowever, in order to scale to larger knowledge graphs, the probabilistic dependencies of StarAI models have been relaxed to give rise to a new class of scalable models based on latent features, which are particularly interesting from a neurosymbolic viewpoint. The key intuition behind relational latent feature\n\tmodels is that the relationships between entities can be more efficiently predicted by modeling simpler interactions in a latent feature space. 
Knowledge Graph Embeddings and Graph Neural Networks represent two of the ways to encode such latent representations.", "id": "0c29e7f0-ea36-44aa-a1f0-0b0c69b42bfb", "level": "section", "origin_cites_number": 1, "parent_id": "31670094-45b4-4854-914d-94aaf932ce65", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Knowledge graphs embeddings and Graph Neural Networks for neurosymbolic AI" ] ], "subsections": [ "525771a4-9400-42b9-9e72-981457921a8b", "128f81ba-b844-472a-86b3-8ca1662bba8a" ], "title": "Knowledge graphs embeddings and Graph Neural Networks for neurosymbolic AI" }, { "cite_extract_rate": 0.4, "cites": [ 5732, 3741, 3672, 5731 ], "content": "Knowledge Graph Embedding (KGE) models assume that triples (i.e. relations) are conditionally independent given a set of global latent variables, called embeddings, for entities and relations. Therefore, the existence of a triple can be predicted by a scoring function $f(e_h,e_r,e_t)$, where $e_i$ is the embedding of the corresponding object.\n\tKGE models mainly differ in terms of the scoring function and the embedding space they use. Translation or distance-based models use a scoring function measuring to what extent the tail of a triple can be obtained by a relation-specific translation of the head, i.e. $f(e_h,e_r,e_t) = || e_h + e_r - e_t||$. Semantic matching methods instead exploit similarity-based scoring\n\tfunctions, such as $f(e_h,W_r,e_t) = || e_hW_re_t^\\top||$. It is interesting to note that standard multi-layer perceptrons are often used to learn this similarity measure, i.e. $f(e_h,W_r,e_t) = nn([e_h, e_t]; W_r)$. This is similar to neural interfaces typical of $lPN$ and $LpN$ models of Section \\ref{sec:paradigms}, cf. the neural predicates of DeepProbLog in Example~\\ref{ex:deepproblog}.\n\tKGE learn from ground relations only. 
They learn the embeddings of entities and relations so as to maximize the score for a set of known \\textit{True} triples. However, some methods incorporate higher level information, like first-order logical clauses or logical queries , bridging KGE and multiple NeSy systems (like SL or SBR). \n\tIt is interesting to analyze KGE methods in terms of the dimensions we described in our paper. KGE methods mostly work as model-based, undirected methods. They constrain the embeddings to be coherent with the logical facts and rules, which are no longer used after learning. They are heavily based on subsymbolic representations. These models learn the correct parameters (i.e. embeddings) for the task at hand. Even if no explicit semantics needs to be given to the scoring function $f(e_h,W_r,e_t)$, a fuzzy logic interpretation is often used when injecting logical rules.", "id": "525771a4-9400-42b9-9e72-981457921a8b", "level": "subsection", "origin_cites_number": 10, "parent_id": "0c29e7f0-ea36-44aa-a1f0-0b0c69b42bfb", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Knowledge graphs embeddings and Graph Neural Networks for neurosymbolic AI" ], [ "subsection", "Knowledge Graph Embeddings" ] ], "subsections": [], "title": "Knowledge Graph Embeddings" }, { "cite_extract_rate": 0.7333333333333331, "cites": [ 259, 5733, 281, 208, 7332, 8475, 5734, 5735, 7045, 216, 1059 ], "content": "Graph Neural Networks (GNN) are deep learning models for dealing with graphs as input. Inference in these models can be cast as \\textit{message passing} : at each inference step (i.e. GNN layer), a node sends a message to all its outgoing neighbors and updates its state by aggregating all the messages coming from its incoming neighbors. Messages are computed by standard neural networks. Each inference step computes a transformation of the representations of the nodes. 
In the last layer, these representations are used to classify the nodes or are aggregated to classify the entire graph. \n\tThere are many connections between GNNs and neurosymbolic computation since both of them apply neural methods to relational data: graphs for GNNs, and logic representations for NeSy. While many works try to close the gap between these two representations , the application of GNN techniques to logic-based graph structures is still very limited. The most related line of work is about using GNNs on knowledge graphs . The underlying idea is to differentiate the messages exchanged by two nodes if they are related by different kinds of edges. For example, given two nodes \\texttt{homer} and \\texttt{bart}, the message corresponding to the edge \\texttt{fatherOf(homer,bart)} will be different from the one corresponding to \\texttt{sameFamily(homer,bart)}. However, these models were intended as classifiers of known relational structures (nodes and edges) and not to reason about the knowledge graph itself (e.g. determining whether an edge exists between two nodes). To perform relational reasoning, GNN-based models rely on techniques from the KGE community on top of the representations extracted by the GNN. This often takes the shape of an auto-encoding scheme: a GNN encodes an input graph in a latent representation and a KGE-based factorization technique is used to reconstruct the whole graph . \n\tAn important characteristic of GNNs is that they rely exclusively on neural computation to perform inference (i.e. to compute messages) and there is no clear direction on how to inject external knowledge about inference, e.g. as logical rules. This contrasts with NeSy, where this is one of the main goals. \n\tThere are also some interesting connections between GNNs and StarAI models. 
In one line of work, GNNs based on knowledge graphs are used not as a modeling choice but rather to approximate inference in Markov Logic Networks, which is somewhat similar to regularization based methods (see Section \\ref{sec:proof_vs_model}). Similarly, GNNs have been used to encode logical formulae expressed as graphs to approximate a weighted model counting problem.\n\tFinally, it is interesting to analyse GNNs in the spirit of some of the dimensions of NeSy. GNNs act as directed models with a proof-based inference scheme: they perform a series of inference steps to compute the final answer. In the original version of GNNs , the node states are updated until a fixed point is reached, which resembles forward-chaining in logic programming. The representation of nodes belongs to a subsymbolic numerical space. Finally, GNNs can be considered as implicit structure learners: inference rules are learned through the learning of the neural message passing functions. \n\tGraph Neural Networks have recently received a lot of attention from many different communities, thanks to the representation power of neural networks and the capability of learning in complex relational settings. It is no surprise that people have started to study the expressivity of this class of models. One of the most interesting analyses from a neurosymbolic viewpoint is measuring the expressivity of GNNs in terms of variable counting logics. Recently, it has been shown that GNNs are as expressive as the 2-variable counting logic\n\t$C_2$. This fragment of first-order logic admits formulas with at most two variables extended with counting quantifiers. The expressivity of this fragment is limited compared to many neurosymbolic models, especially those based on logic. However, GNNs learn the logical structure of the problem implicitly as part of the message passing learning scheme and they rely neither on expert-provided knowledge nor on heavy combinatorial search strategies for structure learning (see Section \\ref{sec:struct}). 
An open and challenging question that unites the GNN and NeSy communities is how to bring the expressivity to higher-order fragments , like in NeSy and StarAI, while keeping both the learning and the inference tractable, like in GNNs.", "id": "128f81ba-b844-472a-86b3-8ca1662bba8a", "level": "subsection", "origin_cites_number": 15, "parent_id": "0c29e7f0-ea36-44aa-a1f0-0b0c69b42bfb", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Knowledge graphs embeddings and Graph Neural Networks for neurosymbolic AI" ], [ "subsection", "Graph Neural Networks" ] ], "subsections": [], "title": "Graph Neural Networks" }, { "cite_extract_rate": 0.5714285714285711, "cites": [ 5736, 5737, 5707, 5706 ], "content": "Fuzzy logic, as a many-valued extension of Boolean logic, has a very long tradition . However, the use of fuzzy logic in StarAI and NeSy is not dictated by the need to deal with vagueness, but by the advantageous computational properties of t-norms. Indeed, a common use case is to have an initial theory defined in Boolean logic which is \\textit{fuzzyfied}. Inference is then carried out with the fuzzyfied theory and the answers are eventually discretized back to Boolean values (usually using a threshold at 0.5). \n\tThe reason for this approach is that one would like to exploit the differentiability of t-norms to address logical inference of FOL theories in a more scalable way than standard combinatorial optimization algorithms (e.g. SAT solvers). This is particularly important in undirected and regularization-based methods (such as PSL and LTN ). In fact, it has been shown that there are fragments of fuzzy logic that can even provide convex inference problems. 
Example \\ref{ex:bool_vs_fuzzy}, however, shows that naively approaching logical inference through a fuzzy relaxation and gradient-based optimization can introduce unexpected behaviours.\n\t\\begin{boxedexample}[label = ex:bool_vs_fuzzy]{Fuzzyfication and soft-Satisfability}\n\t\tLet us consider a disjunction, like $A \\vee B \\vee C$. In Boolean logic, if we state that the disjunction is \\textit{satisfied} (i.e. \\textit{True}), then we expect at least one among the three variables to be \\textit{True}. \n\t\tSuppose we want to find a truth assignment for all the variables that satisfies the disjunction above. The majority of fuzzy NeSy approaches proceed as follows. First, the rule is relaxed into a fuzzy real function. For example, using the \\L ukasiewicz t-norm, $F_\\oplus(A,B,C) = \\min(1, A+B+C)$. Secondly, a gradient-based algorithm (e.g. backpropagation with Adam ) is used to maximize the value of the formula with respect to the \\textit{fuzzy} truth degree of the three variables. Finally, the obtained \\textit{fuzzy} solution $A^\\star, B^\\star, C^\\star$ is translated back into a Boolean assignment using a $0.5$ threshold.\n\t\tLet us consider a possible optimal fuzzy solution, like $(A^\\star, B^\\star, C^\\star) = (0.34, 0.34, 0.34)$ and its discretized version $(A^\\star, B^\\star, C^\\star) = (False, False,$ $False)$, using a threshold at $0.5$. The discretized solution does not satisfy the initial Boolean formula, even though it is a global optimum in the fuzzyfied problem. \n\t\\end{boxedexample}\n\tSimilarly, it has been shown that, while it is very common to reason about universally quantified formulae in the form of $\\forall x: A(x) \\to B(x)$, like `all humans are mortal', using gradients and fuzzy logic to perform inference can be extremely counterintuitive, especially with specific t-norms such as the product t-norm. 
It is unclear whether there exists a generally accepted subset of properties of Boolean logic that one wants to preserve and whether one can define a t-norm that guarantees such properties.", "id": "df605b7c-7a16-4a8a-9a34-34eff8f50d0f", "level": "section", "origin_cites_number": 7, "parent_id": "31670094-45b4-4854-914d-94aaf932ce65", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Fuzzy logic, fuzzyfication and soft-Satisfability" ] ], "subsections": [ "c6063ac6-99de-4318-925a-eab4859602f1" ], "title": "Fuzzy logic, fuzzyfication and soft-Satisfability" }, { "cite_extract_rate": 0, "cites": [], "content": "Another common reason for using fuzzy logic is to exploit a differentiable semantics. Then, gradient-based methods can be used to train the parameters of a weighted logical theory such as in LRNNs . This contrasts with gradient-based training of the parameters of probabilistic logics based on the distribution semantics. Possible worlds in probabilistic logic are defined as possible assignments of truth values to all the ground atoms of a logical theory. The assignments of truth values specify the \\textit{semantics} of the logic. On the contrary, fuzzy logic assigns continuous truth degrees to formulas or proofs, which are \\textit{syntactic} structures. As a consequence, while the probability of an atom will always be equal to the sum of the probabilities of the worlds in which it is \\textit{True} (cf. 
Equation \\ref{eq:wmc}), the fuzzy degree of an atom may vary depending on how that atom has been proven or defined, as shown in Example \\ref{ex:prob_vs_fuzzy}.\n\t\\begin{boxedexample}[label = ex:prob_vs_fuzzy]{Distribution semantics vs fuzzy logic semantics}\n\t\tConsider the following annotated program (adapted from ).\n\t\t\\begin{lstlisting}\n\t\t0.3::b.\n\t\ta1 <- b.\n\t\ta2 <- b,b.\n\t\t\\end{lstlisting}\n\t\tHere, $0.3$ is the label of $\\mathtt{b}$ without any particular semantics yet. \n\t\tGiven the idempotency of the Boolean conjunction, we would expect the scores of both $\\mathtt{a1}$ and $\\mathtt{a2}$ to be identical since both $\\mathtt{b}$ and $\\mathtt{b} \\wedge \\mathtt{b}$ are \\textit{True} when $\\mathtt{b}$ is \\textit{True}. In a probabilistic approach, the score is interpreted as a probability, i.e. $p(\\mathtt{b}) = 0.3$. The probability of any atom is the sum of the probabilities of all the worlds where that atom is \\textit{True}. It is easy to see that, for both $\\mathtt{a1}$ and $\\mathtt{a2}$, these are the worlds where $\\mathtt{b}$ is \\textit{True}, thus $p(\\mathtt{a1}) = p(\\mathtt{a2}) = p(\\mathtt{b}) = 0.3$. On the other hand, in the fuzzy setting, the score of $\\mathtt{b}$ is interpreted as its truth degree, i.e. $t(\\mathtt{b}) = 0.3$. Let us consider the product t-norm, where $t(x \\wedge y) = t(x)t(y)$, then $t(\\mathtt{a1}) = t(\\mathtt{b}) = 0.3$ while $t(\\mathtt{a2}) = t(\\mathtt{b})t(\\mathtt{b}) = 0.09$. While this issue could be solved by choosing a different t-norm (e.g. the minimum t-norm), similar issues arise in different definitions.\n\t\\end{boxedexample}\n\tThe differences between the two semantics are not due to the probabilistic or the fuzzy semantics, but more to the distinction between semantics based on possible worlds and semantics based on proofs or derivations. 
In fact, a similar behaviour is observed in Stochastic Logic Programs under the name of \\textit{memoization}.\n\\end{document}", "id": "c6063ac6-99de-4318-925a-eab4859602f1", "level": "subsection", "origin_cites_number": 2, "parent_id": "df605b7c-7a16-4a8a-9a34-34eff8f50d0f", "prefix_titles": [ [ "title", "From Statistical Relational to Neurosymbolic \\\\ Artificial Intelligence: a Survey." ], [ "section", "Fuzzy logic, fuzzyfication and soft-Satisfability" ], [ "subsection", "Distribution semantics and fuzzy logic semantics" ] ], "subsections": [], "title": "Distribution semantics and fuzzy logic semantics" } ]
69
[ 7601, 8475, 5706, 2665, 5707, 3726, 5709, 3727, 3741, 8942, 7988, 5710, 5711, 4323, 7304, 3672, 5708, 5714, 5712, 5713, 8943, 8685, 5718, 5717, 5715, 681, 5716, 5719, 5721, 5720, 5724, 5722, 7989, 5723, 1166, 1168, 5726, 5725, 5727, 181, 1806, 5728, 5729, 5730, 7233, 5732, 5731, 259, 5733, 281, 208, 7332, 5734, 5735, 7045, 216, 1059, 5736, 5737 ]
1.10903
[ "Vaishak Belle" ]
Symbolic Logic meets Machine Learning: \\ A Brief Survey in Infinite Domains
2020
2020-06-15T15:29:49Z
cs.AI
The tension between deduction and induction is perhaps the most fundamental issue in areas such as philosophy, cognition and artificial intelligence (AI). The deduction camp concerns itself with questions about the expressiveness of formal languages for capturing knowledge about the world, together with proof systems for reasoning from such knowledge bases. The learning camp attempts to generalize from examples about partial descriptions about the world. In AI, historically, these camps have loosely divided the development of the field, but advances in cross-over areas such as statistical relational learning, neuro-symbolic systems, and high-level control have illustrated that the dichotomy is not very constructive, and perhaps even ill-formed. In this article, we survey work that provides further evidence for the connections between logic and learning. Our narrative is structured in terms of three strands: logic versus learning, machine learning for logic, and logic for machine learning, but naturally, there is considerable overlap. We place an emphasis on the following ``sore" point: there is a common misconception that logic is for discrete properties, whereas probability theory and machine learning, more generally, is for continuous properties. We report on results that challenge this view on the limitations of logic, and expose the role that logic can play for learning in infinite domains.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "3179fec8-9fa2-4590-ac34-0d3a7530e454", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Symbolic Logic meets Machine Learning: \\\\ A Brief Survey in Infinite Domains" ] ], "subsections": [ "fd4766f5-f661-4c93-82d9-4297dbbf91fe", "dc3d8034-6633-4d75-aa80-d0f0633c74e0", "a715b355-5229-4cca-872a-673dcade22b2", "66dfe73d-6adf-4a8f-bcd1-7c3121a3fbec", "1cda66fd-50ab-40fa-8bc2-8a5604130ee9" ], "title": "root" }, { "cite_extract_rate": 0.4, "cites": [ 1177, 3728, 3726, 8475, 3725, 3727 ], "content": "The tension between \\textit{deduction} and \\textit{induction} is perhaps the most fundamental issue in areas such as philosophy, cognition and artificial intelligence (AI). The deduction camp concerns itself with questions about the expressiveness of formal languages for capturing knowledge about the world, together with proof systems for reasoning from such knowledge bases. The learning camp attempts to generalize from examples about partial descriptions about the world. In AI, historically, these camps have loosely divided the development of the field, but advances in cross-over areas such as \\textit{statistical relational learning} , \\textit{neuro-symbolic systems} , and \\textit{high-level control} have illustrated that the dichotomy is not very constructive, and perhaps even ill-formed. Indeed, logic emphasizes high-level reasoning, and encourages structuring the world in terms of objects, properties, and relations. In contrast, much of the inductive machinery assumes random variables to be independent and identically distributed, which can be problematic when attempting to exploit symmetries and causal dependencies between groups of objects. \nBut the threads connecting logic and learning go deeper, far beyond the apparent flexibility that logic offers for modeling relations and hierarchies in noisy domains. 
At a conceptual level, for example, although there is much debate about what precisely commonsense knowledge might look like, it is widely acknowledged that concepts such as time, space, abstraction and causality are essential . In that regard, (classical, or perhaps non-classical) logic can provide the formal machinery to reason about such concepts in a rigorous way. At a pragmatic level, despite the success of methods such as deep learning, it is now increasingly recognized that owing to a number of reasons, including model re-use, transferability, causal understanding, relational abstraction, explainability and data efficiency, those methods need to be further augmented with logical, symbolic and/or programmatic artifacts . Finally, for building intelligent agents, it is recognized that low-level, data-intensive, reactive computations need to be tightly integrated with high-level, deliberative computations , the latter possibly also engaging in hypothetical and counterfactual reasoning. \nHere, a parallel is often drawn to Kahneman's so-called \\textit{System 1} versus \\textit{System 2} processing in human cognition , in the sense that experiential and reactive processing (learned behavior) needs to be coupled with cogitative processing (reasoning, deliberation and introspection) for sophisticated machine intelligence. \nThe purpose of this article is not to resolve this debate, but rather provide further evidence for the connections between logic and learning. In particular, our narrative is inspired by a recent symposium on logic and learning , where the landscape was structured in terms of three strands: {\\it\n\\begin{enumerate}\n\\item \\textbf{Logic vs. 
Machine Learning}, including the study of problems that can be solved using either logic-based techniques or via machine learning, $\\ldots$; \n\\item \\textbf{Machine Learning for Logic}, including the learning of logical artifacts, such as formulas, logic programs, $\\ldots$; and \n\\item \\textbf{Logic for Machine Learning}, including the role of logics in delineating the boundary between tractable and intractable learning problems, $\\ldots,$ and the use of logic as a declarative framework for expressing machine learning constructs.\n\\end{enumerate}\n}{} \nIn this article, we particularly focus on the following ``sore\" point: there is a common misconception that logic is for discrete properties, whereas probability theory and machine learning, more generally, is for continuous properties. It is true that logical formulas are discrete structures, but they can very easily also express properties about countably infinite or even uncountably many objects. Consequently, in this article we survey some recent results that tackle the integration of logic and learning in infinite domains. In particular, in the context of the above three strands, we report on the following developments. On (1), we discuss approaches for logic-based probabilistic inference in continuous domains. On (2), we cover approaches for learning logic programs in continuous domains, as well as learning formulas that represent countably infinite sets of objects. Finally, on (3), we discuss attempts to use logic as a declarative framework for common tasks in machine learning over discrete and continuous features, as well as using logic as a meta-theory to consider notions such as the \\textit{abstraction} of a probabilistic model. \n{We remark that this survey is undoubtedly a biased view, as the area of research is large, but we do attempt to briefly cover the major threads. 
Readers are encouraged to refer to discussions in , among others, to get a sense of the breadth of the area.\n}", "id": "fd4766f5-f661-4c93-82d9-4297dbbf91fe", "level": "section", "origin_cites_number": 15, "parent_id": "3179fec8-9fa2-4590-ac34-0d3a7530e454", "prefix_titles": [ [ "title", "Symbolic Logic meets Machine Learning: \\\\ A Brief Survey in Infinite Domains" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.23333333333333303, "cites": [ 7747, 3732, 3729, 3731, 8683, 8684, 3730 ], "content": "\\label{sec:logic_vs_machine_learning}\nTo appreciate the role and impact of logic-based solvers for machine learning systems, it is perhaps useful to consider the core computational problem underlying (probabilistic) machine learning: the problem of inference, including evaluating the partition function (or conditional probabilities) of a probabilistic graphical model such as a Bayesian network. \nWhen leveraging Bayesian networks for machine learning tasks , the networks are often learned using local search to maximize a likelihood or a Bayesian quantity. For example, given data \\( \\D \\) and the current guess for the network \\( \\N \\), we might estimate the ``goodness'' of the guess by means of a score: \\( {\\it score}(\\N ,\\D) \\propto \\log \\Pr(\\D\\mid \\N) - {\\it size}(\\N) \\). That is, we want to maximize the fit of the data wrt the current guess, but we would like to penalize the model complexity, to avoid overfitting. Then, we would opt for a second guess \\( \\N' \\) only if \\( {\\it score}(\\N',\\D) \\gt {\\it score}(\\N,\\D) \\). Needless to say, even with a reasonable local search procedure, the most significant computational effort here is that of probabilistic inference. \nReasoning in such networks becomes especially challenging with logical syntax. 
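The score-based local search described above can be sketched as follows (my own toy illustration: the candidate "networks" are stand-ins, here just two Bernoulli models with a size penalty):

```python
import math

# score(N, D) is proportional to log Pr(D | N) - size(N):
# fit to the data minus a model-complexity penalty.
def score(model, data):
    log_lik = sum(math.log(model["prob"](x)) for x in data)
    return log_lik - model["size"]

# Two hypothetical guesses for binary data: a fair coin versus a
# biased coin (counted here as the more complex model).
data = [1, 1, 1, 0]
n_simple = {"prob": lambda x: 0.5, "size": 1}
n_biased = {"prob": lambda x: 0.75 if x == 1 else 0.25, "size": 2}

# We would move to a new guess N' only if score(N', D) > score(N, D).
best = max([n_simple, n_biased], key=lambda n: score(n, data))
```

With this data the penalty outweighs the slightly better fit, so the simpler model is kept; in a real Bayesian network, evaluating each `score` is where probabilistic inference dominates the cost.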
The prevalence of large-scale social networks, machine reading domains, and other types of relational knowledge bases has led to numerous formalisms that borrow the syntax of predicate logic for probabilistic modeling . This has led to a large family of solvers for the \\emph{weighted model counting} (WMC) problem . The idea is this: given a Bayesian network, a relational Bayesian network, a factor graph, or a probabilistic program , one considers an encoding of the formalism as a \\emph{weighted propositional theory}, consisting of a propositional theory \\( \\Delta \\) and a weight function \\( w \\) that maps atoms in \\( \\Delta \\) to \\( \\real\\su + \\). Recall that SAT is the problem of finding an assignment to such a \\( \\Delta, \\) whereas \\#SAT counts the number of assignments for \\( \\Delta. \\) WMC extends \\#SAT by computing the sum of the weights of all assignments: that is, given a set of models \\( \\M(\\Delta) = \\set{ M \\mid M \\models \\Delta} \\), \nwe evaluate the quantity \\(\n\tW(\\Delta) = \\sum\\sub {M \\in \\M(\\Delta)} w(M)\n\\) \nwhere \\( w(M) \\) is factorized in terms of the atoms true at \\( M. \\) To obtain the conditional probability of a query \\( q \\) against evidence \\( e \\) (wrt the theory \\( \\Delta \\)), we define \\( \\Pr(q\\mid e) = W(\\Delta\\land q \\land e) / W(\\Delta\\land e). \\)\nThe popularity of WMC can be explained as follows. Its formulation elegantly decouples the logical or symbolic representation from the numeric representation, which is encapsulated in the weight function. When building solvers, this allows us to reason about logical equivalence and reuse SAT solving technology (such as constraint propagation and clause learning). WMC also makes it more natural to reason about deterministic, hard constraints in a probabilistic context . 
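A brute-force version of the quantity \( W(\Delta) \) fits in a few lines (enumeration only; real solvers reuse SAT technology, and the theory and weights here are invented for illustration):

```python
from itertools import product

def wmc(atoms, theory, w):
    """W(Delta) = sum over models M of theory of w(M),
    with w(M) factorized over M's literals."""
    total = 0.0
    for values in product([False, True], repeat=len(atoms)):
        model = dict(zip(atoms, values))
        if theory(model):
            weight = 1.0
            for atom, value in model.items():
                weight *= w[(atom, value)]
            total += weight
    return total

atoms = ["q", "e"]
w = {("q", True): 0.2, ("q", False): 0.8,
     ("e", True): 0.5, ("e", False): 0.5}

def delta(m):          # the theory Delta, a hard constraint
    return m["q"] or m["e"]

# Pr(q | e) = W(Delta and q and e) / W(Delta and e)
pr_q_given_e = (wmc(atoms, lambda m: delta(m) and m["q"] and m["e"], w)
                / wmc(atoms, lambda m: delta(m) and m["e"], w))
```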
Both exact solvers, based on knowledge compilation , as well as approximate solvers have emerged in the recent years, as have lifted techniques that exploit the relational syntax during inference (but in a finite domain setting). For ideas on generating such representations randomly to assess scalability and compare inference algorithms, see , for example. \nOn the point of modelling finite vs infinite properties, \nnote that owing to the underlying propositional language, the formulation is limited to discrete random variables. A similar observation can be made for SAT, which for the longest time could only be applied in discrete domains. This changed with the increasing popularity of \\emph{satisfiability modulo theories} (SMT) , which enable us to, for example, reason about the satisfiability of linear constraints over the rationals. Extending earlier insights on piecewise-polynomial weight functions , the formulation of \\emph{weighted model integration} (WMI) was proposed in . WMI extends WMC by leveraging the idea that SMT theories can represent mixtures of Boolean and continuous variables: for example, a formula such as \\( p \\land (x\\gt 5) \\) denotes the logical conjunction of a Boolean variable \\( p \\) and a real-valued variable \\( x \\) taking values greater than 5. For every assignment to the Boolean and continuous variables, the WMI problem defines a weight. The total WMI is computed by integrating these weights over the domain of solutions to \\( \\Delta \\), which is a mixed discrete-continuous (or simply \\textit{hybrid}) space. Consider, for example, the special case when \\( \\Delta \\) has no Boolean variables, and the weight of every model is 1. Then, the WMI simplifies to computing the volume of the polytope encoded in \\( \\Delta \\). When we additionally allow for Boolean variables in \\( \\Delta \\), this special case becomes the hybrid version of \\#SAT, known as \\#SMT . 
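To make the WMI computation concrete, here is a minimal hand-rolled sketch (the theory, weights and density are made up; real solvers integrate symbolically or via compilation):

```python
def integrate(f, lo, hi, n=10_000):
    # midpoint rule; exact for the constant density used below
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

def density(x):
    return 0.1   # x uniform on [0, 10]

# Delta = (p and x > 5) or (not p and x <= 5), with x ranging over [0, 10].
# Per-assignment Boolean weights: w(p) = 2, w(not p) = 1. Integrate the
# weight over the feasible x-region for each Boolean assignment.
wmi = (2.0 * integrate(density, 5, 10)    # Boolean part: p True
       + 1.0 * integrate(density, 0, 5))  # Boolean part: p False

# With all weights 1 and no Boolean variables, the same loop would just
# compute the volume of the polytope encoded in Delta (the #SMT special case).
```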
Since that proposal, numerous advances have been made on building efficient WMI solvers (e.g., ) including the development of compilation targets . \nNote that WMI proposes an extension of WMC for uncountably infinite (i.e., continuous) domains. What about countably infinite domains? The latter type is particularly useful for reasoning in (general) first-order settings, where we may say that a property such as \\( \\forall x,y,z ({\\it parent}(x,y) \\land {\\it parent}(y,z) \\supset {\\it grandparent}(x,z)) \\) applies to every possible \\( x, y\\) and $z$. Of course, in the absence of the finite domain assumption, reasoning in the first-order setting suffers from undecidability properties, and so various strategies have emerged for reasoning about an \\textit{open universe} . One popular approach is to perform \\emph{forward reasoning}, where samples needed for probability estimation are obtained from the facts and declarations in the probabilistic model . Each such sample corresponds to a possible world. But there may be (countably or uncountably) infinitely many worlds, and so exact inference is usually sacrificed. A second approach is to restrict the model wrt the query and evidence atoms and define estimation from the resulting finite sub-model , which may also be substantiated with exact inference in special cases . \nGiven the successes of logic-based solvers for inference and probability estimation, one might wonder whether such solvers would also be applicable to learning tasks in models with relational features and hard, deterministic constraints? \n These, in addition to other topics, are considered in the next section.", "id": "dc3d8034-6633-4d75-aa80-d0f0633c74e0", "level": "section", "origin_cites_number": 30, "parent_id": "3179fec8-9fa2-4590-ac34-0d3a7530e454", "prefix_titles": [ [ "title", "Symbolic Logic meets Machine Learning: \\\\ A Brief Survey in Infinite Domains" ], [ "section", "Logic vs. 
Machine Learning" ] ], "subsections": [], "title": "Logic vs. Machine Learning" }, { "cite_extract_rate": 0.30769230769230704, "cites": [ 8685, 3726, 3737, 3735, 3736, 8686, 3733, 3734 ], "content": "\\label{sec:machine_learning_for_logic}\nAt least since the time of Socrates, inductive reasoning has been a core issue for the logical worldview, as we need a mechanism for obtaining axiomatic knowledge. In that regard, the learning of logical and symbolic artifacts is an important issue in AI, and computer science more generally . There is a considerable body of work on learning propositional and relational formulas, and in the context of probabilistic information, learning weighted formulas . Approaches can be broadly lumped together as follows. \\begin{enumerate}\n\t\\item \\emph{Entailment-based scoring:} Given a logical language \\( \\L, \\) background knowledge \\( \\B \\subset \\L, \\) examples \\( \\D \\) (usually a set of \\( \\L \\)-atoms), \n\t\tfind a hypothesis \\( \\H \\in {\\overline \\H}, \\H \\subset \\L \\) such that \\( \\B \\cup \\H \\) entail the instances in \\( \\D. \\)\n\tHere, the set \\( {\\overline \\H} \\) places restrictions on the syntax of \\( \\H \\) so as to control model complexity and generalization. (For example, \\( \\H = \\D \\) is a trivial hypothesis that satisfies the entailment stipulation.)\n\t\\item \\emph{Likelihood-based scoring:} Given \\( \\L,\\B \\) and \\( \\D \\) as defined above, find \\( \\H\\subset \\L \\) such that \\( {\\it score}(\\H, \\D) \\gt {\\it score}(\\H', \\D) \\) for every \\( \\H' \\neq \\H. \\) As discussed before, we might define \\( {\\it score}(\\H ,\\D) \\propto \\log \\Pr(\\D\\mid \\H) - {\\it size}(\\H) \\). Here, like \\( {\\overline \\H} \\) above, \\( {\\it size}(\\H) \\) attempts to control model complexity and generalization. \n\\end{enumerate}\nMany recipes based on these schemes are possible. 
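Scheme 1 can be sketched with a propositional toy (the clause names such as `flies :- bird`, the background fact and the candidate set are all invented; entailment is checked by enumerating truth assignments):

```python
from itertools import combinations, product

atoms = ["bird", "flies"]

def entails(kb, query):
    """kb |= query, checked over all truth assignments to `atoms`."""
    worlds = (dict(zip(atoms, v))
              for v in product([False, True], repeat=len(atoms)))
    return all(query(w) for w in worlds if kb(w))

B = lambda w: w["bird"]        # background knowledge: bird.
D = [lambda w: w["flies"]]     # observed example: flies.

# Restricted hypothesis space (the analogue of the syntax restriction):
# single definite clauses over the two atoms.
candidates = [
    ("flies :- bird", lambda w: not w["bird"] or w["flies"]),
    ("bird :- flies", lambda w: not w["flies"] or w["bird"]),
]

# Smallest H subset of candidates with B union H entailing every example.
best = None
for r in range(1, len(candidates) + 1):
    for hyp in combinations(candidates, r):
        kb = lambda w, hyp=hyp: B(w) and all(c(w) for _, c in hyp)
        if all(entails(kb, d) for d in D):
            best = [name for name, _ in hyp]
            break
    if best:
        break
```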
For example, we may use entailment-based inductive synthesis for an initial estimate of the hypothesis, and then resort to Bayesian scoring models . The synthesis step might invoke neural machinery . We might not require that the hypothesis entails every example in \\( \\D \\) but only the largest consistent subset, which is sensible when we expect the examples to be noisy . We might compile \\( \\B \\) to an efficient data structure, and perform likelihood-based scoring on that structure , and so \\( \\B \\) could be seen as deterministic domain-specific constraints. Finally, we might stipulate the conditions under which a ``correct'' hypothesis may be inferred wrt unknown ground truth, only a subset of which is provided in \\( \\D. \\) This is perhaps best represented by the (probably approximately correct) PAC-semantics that captures the quality possessed by the output of learning algorithm whilst costing for the number of examples that need to be observed . (But other formulations are also possible, e.g., .) \nThis discussion pertained to finite domains. \nWhat about continuous spaces? By means of arithmetic fragments and \nformulations like WMI, it should be clear that it now becomes possible to extend the above schemes to learn continuous properties. For example, one could learn linear expressions from data . For an account that also tries to evaluate a hypothesis that is correct wrt unknown ground truth, see . \nIf the overall objective is to obtain a distribution of the data, other possibilities present themselves. \nIn , for example, real-valued data points are first lumped together to obtain \natomic continuous random variables. From these, relational formulas are constructed so as to yield hybrid probabilistic programs. The learning is based on likelihood scoring. In , the real-valued data points are first intervalized, and polynomials are learned for those intervals based on likelihood scoring. 
These weighted atoms are then used for learning clauses by entailment judgements .\nSuch ideas can also be extended to data structures inspired by knowledge compilation, often referred to as \\textit{circuits} . Knowledge compilation arose as a way to represent logical theories in a manner where certain kinds of computations (e.g., checking satisfiability) are significantly more effective, often polynomial in the size of the circuit. In the context of probabilistic inference, the idea was to then position probability estimation to also be computable in time polynomial in the size of the circuit . Consequently, (say) by means of likelihood-based scoring, the learning of circuits is particularly attractive because once learned, the bottleneck of inference is alleviated . In \n, along the lines of the work above on learning logical formulas in continuous domains, it is shown that the learning of circuits can also be coupled with WMI. \nWhat about countably infinite domains? In most pragmatic instances of learning logical artifacts, the difference between the uncountable and countably infinite setting is this: in the former, we see finitely many real-valued samples as being drawn from an (unknown) interval, and we could inspect these samples to crudely infer a lower and upper bound. In the latter, based on finitely many relational atoms, we would need to infer a universally quantified clause, such as \\( \\forall x,y,z ({\\it parent}(x,y) \\land {\\it parent}(y,z) \\supset {\\it grandparent}(x,z)) \\). If we are after a hypothesis that is simply guaranteed to be consistent wrt the observed examples, then standard rule induction strategies would suffice , and we could interpret the rules as quantifying over a countably infinite domain. But this is somewhat unsatisfactory, as there is no distinction between the rules learned in the standard finite setting and their supposed applicability to the infinite setting. 
What is really needed is an analysis of what rule learning would mean wrt the infinitely many examples that have \\emph{not} been observed. This was recently considered via the PAC-semantics in , by appealing to ideas on reasoning with open universes discussed earlier . \nBefore concluding this section, it is worth noting that although the above discussion is primarily related to the learning of logical artifacts, it can equivalently be seen as a class of machine learning methods that leverage symbolic domain knowledge . \nIndeed, logic-based probabilistic inference over deterministic constraints, and entailment-based induction augmented with background knowledge are instances of such a class. \nAnalogously, the automated construction of relational and statistical knowledge bases by combining background knowledge with extracted tuples (obtained, for example, by applying natural language processing techniques to large textual data) is another instance of such a class. \nIn the next section, we will consider yet another way in which logical and symbolic artifacts can influence learning: we will see how such artifacts are useful to enable tractability, correctness, modularity and compositionality.", "id": "a715b355-5229-4cca-872a-673dcade22b2", "level": "section", "origin_cites_number": 26, "parent_id": "3179fec8-9fa2-4590-ac34-0d3a7530e454", "prefix_titles": [ [ "title", "Symbolic Logic meets Machine Learning: \\\\ A Brief Survey in Infinite Domains" ], [ "section", "Machine Learning for Logic" ] ], "subsections": [], "title": "Machine Learning for Logic" }, { "cite_extract_rate": 0.268292682926829, "cites": [ 1613, 3742, 3743, 3738, 3735, 3744, 8686, 3727, 3740, 3739, 3741 ], "content": "\\label{sec:logic_for_machine_learning}\nThere are two obvious ways in which a logical framework can provide insights on machine learning theory. 
First, consider that computational tractability is of central concern when applying logic in computer science, knowledge representation, database theory and search . Thus, the natural question to wonder is whether these ideas would carry over to probabilistic machine learning. On the one hand, probabilistic extensions to tractable knowledge representation frameworks could be considered . But on the other, as discussed previously, ideas from knowledge compilation, and the use of circuits, in particular, are proving very effective for designing tractable paradigms for machine learning. While there has always been an interest in capturing tractable distributions by means of low tree-width models , knowledge compilation has provided a way to also represent high tree-width models and enable exact inference for a range of queries . See for a comprehensive view on the use of knowledge compilation for machine learning. \nThe other obvious way logic can provide insights on machine learning theory is by offering a formal apparatus to reason about \\textit{context}. \nMachine learning problems are often positioned as atomic tasks, such as a classification task where regions of images need to be labeled as cats or dogs. However, even in that limited context, we imagine the resulting classification system as being deployed as part of a larger system, which includes various modules that communicate or interface with the classification system. We imagine an implicit accountability to the labelling task in that the detected object is either a cat or a dog, but not both. If there is information available that all the entities surrounding the object of interest have been labelled as lions, we would want to accord a high probability to the object being a cat, possibly a wild cat. There is a very low chance of the object being a dog, then. 
If this is part of a vision system on a robot, we should ensure that the robot never tramples on the object, regardless of whether it is a type of cat or a dog. To inspect such patterns, and provide meta-theory for machine learning, it can be shown that symbolic, programmatic and logical artifacts are enormously useful. We will specifically consider correctness, modularity and compositionality to explore the claim.\nOn the topic of correctness, the classical framework in computer science is \\emph{verification}: can we provide a formal specification of what is desired, and \ncan the system be checked against that specification? In a machine learning context, we might ask whether the system, during or after training, satisfies a specification. \nThe specification here might mean constraints about the physical laws of the domain, or notions of perturbation in the input space while ensuring that the labels do not change, or insisting that the prediction does not label an object as being both a cat and a dog, or otherwise ensuring that outcomes are not subject to, say, gender bias. Although there is a broad body of work on such issues, touching more generally on \\textit{trust} , we discuss approaches closer to the thrust of this article. For example, show that a trained neural network can be verified by means of an SMT encoding of the network. In recent work, show that the loss function of deep learning systems can be adjusted to logical constraints by insisting that the distribution on the predictions is proportional to the weighted model count of those constraints. In , prior (logical) constraints are compiled to a circuit to be used for probability estimation. \nIn , circuits are shown to be amenable to training against probabilistic and causal prior constraints, including assertions about fairness, for example. 
\nIn , a somewhat different approach to respecting domain constraints is taken: the low-level prediction is obtained as usual from a machine learning module, which is then interfaced with a probabilistic relational language and its symbolic engine. That is, the reasoning is positioned to be tackled directly by the symbolic engine. In a sense, such approaches cut across the three strands: the symbolic engine uses weighted model counting, the formulas in the language could be obtained by (say) entailment-based scoring, and the resulting language supports modularity and compositionality (discussed below). \nWhile there is not much to be said about the distinction between finite vs infinite wrt correctness, many of these ideas are likely amenable to extensions to an infinite setting in the ways discussed in the previous sections (e.g., considering constraints of a continuous or a countably infinite nature). \nOn the topic of modularity, recall that the general idea is to reduce, simplify or otherwise abstract a (probabilistic) computation as an atomic entity, which is then to be referenced in another, possibly more complex, entity. In standard programming languages, this might mean the compartmentalization and interrelation of computational entities. For machine learning, approaches such as probabilistic programming support probabilistic primitives in the language, with the intention of making learning modules re-usable and modular. It can be shown, for example, that the computational semantics of some of these languages reduce to WMC . Thus, in the infinite case, a corresponding reduction to WMI follows . \nA second dimension to modularity is the notion of \\emph{abstraction}. Here, we seek to model, reason and explain the behavior of systems in a more tractable search space, by omitting irrelevant details. The idea is widely used in natural and social sciences. 
Think of understanding the political dynamics of elections by studying micro level phenomena (say, voter grievances in counties) versus macro level events (e.g., television advertisements, gerrymandering). In particular, in computer science, it is often understood as the process of mapping one representation onto a simpler representation by suppressing irrelevant information. \nIn fact, integrating low-level behavior with high-level reasoning, exploiting relational representations to reduce the number of inference computations, and many other search space reduction techniques can all loosely be seen as instances of abstraction . \nWhile there has been significant work on abstraction in deterministic systems , for machine learning, however, a probabilistic variant is clearly needed. In , an account of abstraction for loop-free propositional probabilistic programs is provided, where certain parts of the program (possibly involving continuous properties) can be reduced to a Bernoulli random variable. For example, suppose every occurrence of the continuous random variable $x$, drawn uniformly on the interval [0,1], in a program is either of the form $x\\leq 0.7$ or of the form $x\\gt 0.7$. Then, we could use a discrete random variable $b$ with a 0.7 probability of being true to capture $x\\leq 0.7$; and analogously, $\\neg b$ to capture $x\\gt 0.7$. The resulting program is likely to be simpler. \nIn , an account of abstraction for probabilistic relational models is considered, where the notion of abstraction also extends to deterministic constraints and complex formulas. For example, a single probabilistic variable in the abstracted model could denote a complex logical formula in the original model. Moreover, the logical properties that enable verifying and inducing abstractions are also considered, and it is shown how WMC is sufficient for the computability of these properties (also see ). 
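The Bernoulli abstraction in the example can be checked numerically (a sketch with an invented downstream variable `y`; only the uniform variable and its threshold come from the example above):

```python
import random

# Concrete program: x ~ Uniform[0,1]; if x <= 0.7 then y ~ Bern(0.5),
# else y ~ Bern(0.9). Abstraction: replace the test x <= 0.7 by b ~ Bern(0.7).
p_b = 0.7
p_y_abstract = p_b * 0.5 + (1 - p_b) * 0.9   # exact answer in the abstraction

# Monte Carlo on the concrete program agrees with the abstracted one.
random.seed(1)
trials = 200_000
hits = 0
for _ in range(trials):
    x = random.random()
    hits += random.random() < (0.5 if x <= 0.7 else 0.9)
p_y_concrete = hits / trials
```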
\nIncidentally, abstraction brings to light a reduction between finite vs infinite: it is shown in that the modelling of piecewise densities as weighted propositions, which is leveraged in WMI , is a simple case of the more general account. Therefore, it is worthwhile to investigate whether this or other accounts of abstraction could emerge as general-purpose tools that allow us to inspect the conditions under which infinitary statements reduce to finite computations. \nA broader point here is the role abstraction might play in generating explanations . For example, a user's understanding of the domain is likely to be different from the low-level data that a machine learning system interfaces with~, and so, abstractions can capture these two levels in a formal way. \nFinally, we turn to the topic of compositionality, which, of course, is closely related to modularity in that we want distinct modules to come together to form a complex composition. \nNot surprisingly, this is of great concern in AI, as it is widely acknowledged that most AI systems will involve heterogeneous components, some of which may involve learning from data, and others reasoning, search and symbol manipulation . In continuation with the above discussion, probabilistic programming is one such endeavor that purports to tackle this challenge by allowing modular components to be composed over programming and/or logical connectives . (See for ideas in deterministic systems.) However, probabilistic programming only composes probabilistic computations, but does not offer an obvious means to capture other types of search-based computations, such as SAT, and integer and convex programming. \nRecall that the computational semantics of probabilistic programs reduces to WMC . 
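The reduction to WMC is one instance of a more general sum-of-products template: keeping the enumeration fixed and swapping the semiring recovers different tasks (a sketch over an invented two-atom theory):

```python
from itertools import product

def sum_of_products(atoms, theory, plus, times, zero, one, label):
    """Fold `plus` over the models of `theory`, with each model
    folded under `times` over its per-literal labels."""
    acc = zero
    for values in product([False, True], repeat=len(atoms)):
        model = dict(zip(atoms, values))
        if theory(model):
            term = one
            for atom, value in model.items():
                term = times(term, label(atom, value))
            acc = plus(acc, term)
    return acc

atoms = ["p", "q"]
theory = lambda m: m["p"] or m["q"]

# Boolean semiring -> satisfiability.
sat = sum_of_products(atoms, theory, lambda a, b: a or b,
                      lambda a, b: a and b, False, True, lambda a, v: True)
# Counting semiring -> unweighted model counting (#SAT).
count = sum_of_products(atoms, theory, lambda a, b: a + b,
                        lambda a, b: a * b, 0, 1, lambda a, v: 1)
# Probability semiring with per-literal weights -> WMC.
w = {("p", True): 0.1, ("p", False): 0.9, ("q", True): 0.4, ("q", False): 0.6}
wmc_val = sum_of_products(atoms, theory, lambda a, b: a + b,
                          lambda a, b: a * b, 0.0, 1.0,
                          lambda a, v: w[(a, v)])
```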
Following works such as , an interesting observation made in is that by appealing to a sum of products computation over different semiring structures, we can realize a large number of tasks such as satisfiability, unweighted model counting, sensitivity analysis, gradient computations, in addition to WMC. \nIt was then shown in that the idea could be generalized further for infinite domains: by defining a measure on first-order models, WMI and convex optimization can also be captured. As the underlying language is a logical one, composition can already be defined using logical connectives. But an additional, more involved, notion of composition is also proposed, where a sum of products over different semirings can be concatenated. To reiterate, the general idea behind these proposals \nis to arrive at a principled paradigm that allows us to interface learned modules with other types of search and optimization computations for the compositional building of AI systems. See also for analogous discussions, but where a different type of coupling for the underlying computations is suggested. Overall, we observed that a formal apparatus (symbolic, programmatic and logical artifacts) help us define such compositional constructions by providing a meta-theory.", "id": "66dfe73d-6adf-4a8f-bcd1-7c3121a3fbec", "level": "section", "origin_cites_number": 41, "parent_id": "3179fec8-9fa2-4590-ac34-0d3a7530e454", "prefix_titles": [ [ "title", "Symbolic Logic meets Machine Learning: \\\\ A Brief Survey in Infinite Domains" ], [ "section", "Logic for Machine Learning" ] ], "subsections": [], "title": "Logic for Machine Learning" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:conclusions}\nIn this article, we surveyed work that provides further evidence for the connections between logic and learning. 
\nOur narrative was structured in terms of three strands: logic versus learning, machine learning for logic, and logic for machine learning, but naturally, there was considerable overlap. \nWe covered a large body of work on what these connections look like, including, for example, pragmatic concerns such as the use of hard, domain-specific constraints and background knowledge, all of which considerably eases the requirement that all of the agent's knowledge should be derived from observations alone. (See discussions in on the limitations of learned behavior, for example.) \nWhere applicable, we placed an emphasis on how extensions to infinite domains are possible. At the very least, logical artifacts can help in constraining, simplifying and/or composing machine learning entities, and in providing a principled way to study the underlying representational and computational issues. \nIn general, this type of work could help us move beyond the narrow focus of the current learning literature so as to deal with time, space, abstraction, causality, quantified generalizations, relational abstractions, unknown domains, unforeseen examples, among other things, in a principled fashion. In fact, what is being advocated is the tackling of problems that symbolic logic and machine learning might struggle to address individually. One could even think of the need for a recursive combination of strands 2 and 3: purely reactive components interact with purely cogitative elements, but then those reactive components are learned against domain constraints, and the cogitative elements are induced from data, and so on. More broadly, making progress towards a formal realization of \\emph{System 1} versus \\emph{System 2} processing might also contribute to our understanding of human intelligence, or at least capture human-like intelligence in automated systems. 
\n\\bibliographystyle{abbrv}\n\\bibliography{group}\n\\end{document}", "id": "1cda66fd-50ab-40fa-8bc2-8a5604130ee9", "level": "section", "origin_cites_number": 1, "parent_id": "3179fec8-9fa2-4590-ac34-0d3a7530e454", "prefix_titles": [ [ "title", "Symbolic Logic meets Machine Learning: \\\\ A Brief Survey in Infinite Domains" ], [ "section", "Conclusions" ] ], "subsections": [], "title": "Conclusions" } ]
70
[ 1177, 3728, 3726, 8475, 3725, 3727, 7747, 3732, 3729, 3731, 8683, 8684, 3730, 8685, 3737, 3735, 3736, 8686, 3733, 3734, 1613, 3742, 3743, 3738, 3744, 3740, 3739, 3741 ]
1.403268
[ "Zhuang Li", "Lizhen Qu", "Gholamreza Haffari" ]
Context Dependent Semantic Parsing: A Survey
2020
2020-11-02T07:51:05Z
cs.CL
Semantic parsing is the task of translating natural language utterances into machine-readable meaning representations. Currently, most semantic parsing methods are not able to utilize contextual information (e.g. dialogue and comments history), which has a great potential to boost semantic parsing performance. To address this issue, context dependent semantic parsing has recently drawn a lot of attention. In this survey, we investigate progress on the methods for the context dependent semantic parsing, together with the current datasets and tasks. We then point out open problems and challenges for future research in this area. The collected resources for this topic are available at: \url{https://github.com/zhuang-li/Contextual-Semantic-Parsing-Paper-List}.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "321e790e-018a-41e3-8473-e7e9883a1392", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ] ], "subsections": [ "b8456ffc-88d1-4dce-9af7-5cb66e55fd57", "a541cf51-c35e-4bd7-b0ec-f7b7038dbad0", "0888dee2-d5ac-4bd8-807b-f7439f676fed", "37d74331-727a-46c3-844f-3da36f456e56", "117c71f7-624f-4f92-9105-d8b545d244ba" ], "title": "root" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 6899, 6900, 6898, 6897, 6896 ], "content": "\\blfootnote{\n \\hspace{-0.65cm} \n This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: \\url{http://creativecommons.org/licenses/by/4.0/}.\n}\nSemantic parsing is concerned with mapping natural language (NL) utterances into machine-readable structured \\textit{meaning representations} (\\MRs). These representations are in the formats of formal languages, e.g. Prolog, SQL, and Python. A formal language is typically defined by means of a formal \\textit{grammar}, which consists of a set of rules. Following the convention of the chosen formal language, \\MRs are also referred to as logical forms or programs. An \\MR is often executable in a (programming) environment to yield a result (e.g. results of SQL queries) enabling automated reasoning~.\nMost research work on semantic parsing treats each NL utterance as an \\emph{independent} input, ignoring the text surrounding them~, such as interaction histories in dialogues. \nThe surrounding text varies significantly across different application scenarios. In a piece of free text, we refer to the surrounding text of a current utterance as its \\textit{context}. The context is different with respect to different utterances. 
In the sequel, we differentiate between context \\textit{independent} semantic parsing (\\CiSP) and context \\textit{dependent} semantic parsing (\\CdSP) by whether a corresponding parser utilizes context information. A knowledge base or a database (on which a \\MR is executed for the purpose of question answering) can be considered as context as well~.\nThis type of context does not change with respect to the utterances. In this survey, we only consider the former kind of context which does vary with different utterances.\n\\input{tab-example-pets.tex}\nThe utilization of context in semantic parsing presents both challenges and opportunities. \nAs shown in Table \\ref{tab:CoSQL}, one challenge is to resolve references, such as \\textit{those} in ``For each of those, what is the maximum age''. This example also shows another challenge caused by elliptical (incomplete) utterances. The sentence ``What about the average age?'' alone misses information about the database table and the column \\textit{pettype}. The incomplete meaning needs to be complemented by the discourse context. \nCompared with \\CiSP, which usually assumes that the information within the utterance is complete, \\CdSP is expected to tackle challenges posed by involving context in the parsing process \n. In addition, tackling the above challenges provides us with more opportunities to inspect the linguistic phenomena which could influence semantic parsing. Our survey on \\CdSP fills the gap in the literature, as recent surveys on semantic parsing mainly focus on \\CiSP~. \nThis paper is organised as follows. We start by providing a brief and fundamental understanding of \\CiSP in \\S2. We then present a comprehensive organization of the recent advances in \\CdSP in \\S3. We discuss current \\CdSP tasks, datasets, and resources in \\S4. 
Finally, we cover open research problems in \\S5, and conclude by providing a roadmap for future research in this area.", "id": "b8456ffc-88d1-4dce-9af7-5cb66e55fd57", "level": "section", "origin_cites_number": 7, "parent_id": "321e790e-018a-41e3-8473-e7e9883a1392", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.5, "cites": [ 6900 ], "content": "\\CiSP aims to learn a mapping $\\pi_{\\theta} : \\mathcal{X} \\rightarrow \\mathcal{Y}$, which translates an NL utterance $x \\in \\mathcal{X}$ into an \\MR $y \\in \\mathcal{Y}$. An \\MR $y$ can be executed in a programming environment (e.g. databases, knowledge graphs, etc.) to yield a result $z$, namely denotation. The structure of an \\MR takes a form of either a tree or graph, depending on its underlying formal language. The languages of \\MRs are categorized into three types of formalism\n: logic based (e.g. first order logic), graph based (e.g. AMR~), and programming languages (e.g. Java, Python)~. Some semantic parsers explicitly apply a production grammar to yield \\MRs from utterances. Such a grammar consists of a set of production rules, which define a list of candidate derivations for each NL utterance. 
Each derivation deterministically produces a grammatically valid \\MR.", "id": "a541cf51-c35e-4bd7-b0ec-f7b7038dbad0", "level": "section", "origin_cites_number": 2, "parent_id": "321e790e-018a-41e3-8473-e7e9883a1392", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Background" ] ], "subsections": [ "d1388aa1-f859-436a-97f9-1580b1fa7dad", "0accbdad-eb7e-426a-a0ca-4f0b3266d1fb" ], "title": "Background" }, { "cite_extract_rate": 1, "cites": [ 6900 ], "content": "Given an utterance $x \\in \\mathcal{X}$ and its paired \\MR $y \\in \\mathcal{Y}$, a \\CiSP model can form a \\textit{conditional} distribution $p(y | x)$.\nThe model learning can be supervised by either utterance-\\MR pairs or merely utterance-denotation pairs. If only denotations are available, a widely used approach~ is to marginalize over all possible \\MRs for a denotation $z$, which leads to a \\textit{marginal} distribution $p(z | x) = \\sum_{y} p(z, y| x)$. A parsing algorithm aims to find the optimal \\MR in the combinatorially large search space.\nWe coarsely categorize the existing models into: symbolic approaches, neural approaches, and neural-symbolic approaches based on the category of machine learning methodology and whether any production grammars are explicitly used in models.", "id": "d1388aa1-f859-436a-97f9-1580b1fa7dad", "level": "subsection", "origin_cites_number": 1, "parent_id": "a541cf51-c35e-4bd7-b0ec-f7b7038dbad0", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Background" ], [ "subsection", "Semantic Parsing Models" ] ], "subsections": [ "e551a263-fa5f-4543-97d2-4b3bf2aa80fe" ], "title": "Semantic Parsing Models" }, { "cite_extract_rate": 0.5, "cites": [ 6900, 6897, 6901 ], "content": "\\label{sec:pipe}\nA symbolic semantic parser employs production grammars to generate candidate derivations and find the most probable one via a scoring model. 
The scoring model is a statistical or machine learning model. \n Each derivation is represented by handcrafted features extracted from utterances or partial \\MRs. Let $\\Phi(x, d)$ denote the features of a pair of utterance and derivation, and $G(x)$ be the set of candidate derivations based on $x$. A widely used scoring model is the log linear model~. \n\\begin{equation}\n p( d | x) = \\frac{\\exp(\\bm{\\theta}\\Phi(x, d))}{\\sum_{d' \\in G(x)} \\exp(\\bm{\\theta}\\Phi(x, d'))}\n\\end{equation}\nwhere $\\bm{\\theta}$ denotes the model parameters. If only utterance-denotation pairs are provided at training time, a model marginalizes over all possible derivations yielding the same denotations by $p(z | x) = \\sum_{d} p(z, d| x)$~. Those corresponding parsers further differentiate between graph-based parsers~ and shift-reduce parsers~ due to the adopted parsing algorithms and the ways to generate derivations. From a machine learning perspective, these approaches are also linked to a structured prediction problem.", "id": "e551a263-fa5f-4543-97d2-4b3bf2aa80fe", "level": "paragraph", "origin_cites_number": 6, "parent_id": "d1388aa1-f859-436a-97f9-1580b1fa7dad", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Background" ], [ "subsection", "Semantic Parsing Models" ], [ "paragraph", "Symbolic Approaches" ] ], "subsections": [ "b4beea59-21ae-4c9c-befd-dc1675e2c2f8", "f6a9856b-4e78-4007-961d-d1003c3a47c0" ], "title": "Symbolic Approaches" }, { "cite_extract_rate": 1, "cites": [ 166, 2401, 6902, 3605, 38 ], "content": "\\label{sec:end}\nNeural approaches apply neural networks to translate NL utterances into \\MRs without using production grammars. These approaches formulate semantic parsing as a machine translation problem by viewing NL as the source language and the formal language of \\MRs as the target language. 
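Under this machine-translation framing, a tree-structured \\MR is first serialized into a flat token sequence for the decoder to emit, and parsed back afterwards. A minimal round-trip sketch (the \\MR below is invented for illustration):

```python
def linearize(tree):
    '''Serialize a nested-tuple MR into a bracketed token sequence.'''
    if isinstance(tree, str):
        return [tree]
    head, *args = tree
    tokens = ['(', head]
    for arg in args:
        tokens += linearize(arg)
    tokens.append(')')
    return tokens

def delinearize(tokens):
    '''Parse the bracketed token sequence back into the nested-tuple MR.'''
    def parse(i):
        if tokens[i] != '(':
            return tokens[i], i + 1
        head, args, j = tokens[i + 1], [], i + 2
        while tokens[j] != ')':
            arg, j = parse(j)
            args.append(arg)
        return (head, *args), j + 1
    tree, _ = parse(0)
    return tree

# An invented logical form for 'what is the maximum age for each pet type?'
mr = ('argmax', ('age', 'pet'), ('groupby', 'pettype'))
seq = linearize(mr)
assert delinearize(seq) == mr  # the serialization is lossless
```

Because the decoder emits tokens one at a time, nothing in this plain formulation prevents an unbalanced or otherwise invalid output sequence, which motivates the grammar-constrained variants discussed later.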
\nMost work in this category adopts \\Seq as the backbone architecture, which consists of an encoder and a decoder. The encoder projects NL utterances into hidden representations, whereas the decoder generates linearized \\MRs sequentially.\n Both encoders and decoders employ either recurrent neural networks (RNN)~ or Transformers . Note that, these methods do not apply any production grammars to filter out syntactically invalid \\MRs.\nThe variants of the \\Seq based models also explore structural information of \\MRs. \\textsc{Seq2Tree} utilizes a tree-structured RNN as the decoder, which constrains generated \\MRs to take syntactically valid tree structures. The \\textsc{Coarse2Fine} model~ adopts a two-stage generation for the task. In the first stage, a \\Seq model is applied to generate \\MR templates, which replace entities in \\MRs by slot variables for a high-level generalization. In the second stage, another \\Seq model is applied to fill the slot variables with the corresponding entities.", "id": "b4beea59-21ae-4c9c-befd-dc1675e2c2f8", "level": "paragraph", "origin_cites_number": 5, "parent_id": "e551a263-fa5f-4543-97d2-4b3bf2aa80fe", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Background" ], [ "subsection", "Semantic Parsing Models" ], [ "paragraph", "Symbolic Approaches" ], [ "paragraph", "Neural Approaches" ] ], "subsections": [], "title": "Neural Approaches" }, { "cite_extract_rate": 1, "cites": [ 6904, 9083, 4279, 6903 ], "content": "In order to ensure the generated \\MRs to be syntactically valid without compromising the generalization power of neural networks, neural-symbolic approaches fuse both symbolic and neural approaches by applying production grammars to the generated \\MRs; then the derivations are scored by neural networks. \nThe majority of these methods linearize derivations such that they are able to leverage \\Seq~. 
At each time step, the decoder of these methods emits either a parse action or a production rule, leading to a grammatically valid \\MR at the end. These works produce derivations using different grammars. \\textsc{NSM} uses a subset of Lisp syntax. \\textsc{TranX}~ defines the grammars in Abstract Syntax Description Language, while \\textsc{IRNet} considers the context-free grammar of a language called SemQL. \nThere are also neural-symbolic approaches adopting neural architectures other than \\Seq. One such example is , which adopts a dynamic neural module network (DNMN) to generate \\MRs.", "id": "f6a9856b-4e78-4007-961d-d1003c3a47c0", "level": "paragraph", "origin_cites_number": 4, "parent_id": "e551a263-fa5f-4543-97d2-4b3bf2aa80fe", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Background" ], [ "subsection", "Semantic Parsing Models" ], [ "paragraph", "Symbolic Approaches" ], [ "paragraph", "Neural-Symbolic Approaches" ] ], "subsections": [], "title": "Neural-Symbolic Approaches" }, { "cite_extract_rate": 0, "cites": [], "content": "In semantic parsing, \\textit{exact match accuracy} is the most commonly used evaluation metric. With \\textit{exact match accuracy}, the parsing results are considered correct only when the output \\MR/denotations exactly match the string of the ground truth \\MR/denotations. One flaw of this metric is that for some types of MRs (e.g., SQL), clause order is unconstrained. \\newcite{yu2018spider} proposed a metric \\textit{set match accuracy} to evaluate the semantic parsing performance over SQLs, which treats each SQL statement as a set of clauses and ignores their order.\nDue to the variety of domains and languages over different datasets, it is difficult to measure all semantic parsing methods in a unified framework. 
To address this issue, \\newcite{yu2018spider}, \\newcite{yu2019sparc} and \\newcite{yu2019cosql} built different shared-task platforms with leaderboard for semantic parsing evaluation on the common datasets and consistent evaluation metrics.", "id": "0accbdad-eb7e-426a-a0ca-4f0b3266d1fb", "level": "subsection", "origin_cites_number": 0, "parent_id": "a541cf51-c35e-4bd7-b0ec-f7b7038dbad0", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Background" ], [ "subsection", "Evaluation" ] ], "subsections": [], "title": "Evaluation" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 6905, 6906 ], "content": "Context dependent semantic parsing involves modelling of context in the parsing process. For each current NL utterance, we define its \\textit{context} as the information beyond this utterance. With this definition, there are two types of context for semantic parsing, \\textit{local} context and \\textit{global} context. The \\textit{local} context for an utterance is the text and multimedia content surrounding it, which is meaningful only for this utterance. In plain texts, the concept of local context is also quite close to discourse, which is defined as a group of collocated, structured, coherent sentences~. In contrast, its \\textit{global} context is the information accessible to more than one utterance, including databases and external text corpora, images or class environment~. The content of local context varies for each NL utterance while the global context is always static. The work in our survey is only concerned with local context. Therefore, we always refer to \"local context\" as \"context\" in the following sections. \nContext provides additional information to resolve ambiguity and vagueness in current utterances. For semantic parsing, one type of ambiguity is caused by references in current utterances, which need to be resolved to previously mentioned objects and relations. 
References may include explicit or implicit lexical triggers, such as \\textit{those} in \"For each of those, ...\" in our introductory example (Table \\ref{tab:CoSQL}). Another ambiguity illustrated by the same example results from ellipsis. The previous context provides constraints to restrict the scope of possible \\MRs indicated by current utterances. In addition, context provides information to disambiguate word senses and entities, and link them to knowledge bases to enable complex reasoning. However, semantic parsing literature largely neglects word sense disambiguation, which is regarded as an AI-complete problem~. Last but not least, context allows us to exploit discourse coherence for semantic parsing. Coherence relations characterize structural relationships between sentences and thus limit the search space of parse candidates for the following utterances of current ones.\nFormally, a context dependent parser takes both an input utterance $x_i$ and its context $C_i$, where $C_i$ could include a broad range of multimedia content. We consider a group of inter-related utterances with the union set of their context as one \\textit{interaction}, $I = (\\mathbf{x}, \\mathbf{C})$, where $\\mathbf{x} = [x_1, ...,x_i,...,x_T]$ and $\\mathbf{C} = \\cup_{i=1}^{T}C_{i}$. Currently, most \\CdSP work focuses on the research problems of context $C_i$ regarding history utterances, \\MRs, and denotations. 
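The interaction $I = (\\mathbf{x}, \\mathbf{C})$ defined above can be sketched as a simple container in which the local context of each utterance is materialized from the preceding turns (a toy sketch; the class and method names are invented):

```python
from dataclasses import dataclass, field

@dataclass
class Interaction:
    '''An interaction I = (x, C): a list of utterances with per-utterance context.'''
    utterances: list = field(default_factory=list)

    def add(self, utterance: str) -> None:
        self.utterances.append(utterance)

    def context(self, i: int) -> list:
        '''Local context C_i of utterance i: here, simply the preceding utterances.'''
        return self.utterances[:i]

dialogue = Interaction()
dialogue.add('Find the number of pets for each pet type.')
dialogue.add('For each of those, what is the maximum age?')
assert dialogue.context(1) == ['Find the number of pets for each pet type.']
```

Real systems would additionally store the predicted \\MRs and denotations of earlier turns in $C_i$, as discussed above.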
Such a parser learns a mapping from a current utterance $x_{i}$ to an \\MR $y_i$ by $\\pi_{\\theta}(x_{i}, C_i)$.", "id": "0888dee2-d5ac-4bd8-807b-f7439f676fed", "level": "section", "origin_cites_number": 3, "parent_id": "321e790e-018a-41e3-8473-e7e9883a1392", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Context Dependent Semantic Parsing" ] ], "subsections": [ "a7325d38-7cf4-41b3-bf34-266fb7445dcc", "4f2c4e8f-03a9-4d2a-875a-0025d4eaf479", "a42c11dd-cba0-4190-ab4f-e20c9525e938", "640e3315-aedb-4aa7-abfb-05ad7be7ec1b", "6d868391-5ac7-479e-9463-c4e87430bb7a" ], "title": "Context Dependent Semantic Parsing" }, { "cite_extract_rate": 0.5, "cites": [ 6907 ], "content": "Existing symbolic approaches formulate \\CdSP as a structured prediction problem by including contextual information into their feature models. Their models capture $ p( d_i | x_i, C_i)$ by including context as a condition. Both \\newcite{zettlemoyer2009learning} and \\newcite{srivastava2017parsing} divide the parsing process into two steps: i) generate initial parses using \\CiSP; ii) complete initial parses using contextual information. In contrast, \\newcite{long2016simpler} parses a sequence of utterances in one step. In all these works, symbolic features are used to represent contexts. \nIn two-step approaches, \\newcite{zettlemoyer2009learning} and \\newcite{srivastava2017parsing} differ in the details of individual steps. In the first step, \\newcite{zettlemoyer2009learning} extends \\MRs with predicates representing references, while \\newcite{srivastava2017parsing} generates a set of context independent parses for each utterance. In the second step, \\newcite{zettlemoyer2009learning} collects possible derivations by applying three heuristic rules to replace references with entities in context and extend initial \\LFs with constraints, then finds the best derivations according to a linear model. 
In~, their model expands the initial parse set with parses selected from context using heuristic rules, then finds the best parses in the expanded set. Their feature model includes a multinomial random variable indicating the current hidden state of discourse.\nThe shift-reduce parser in~ generates derivations for a whole utterance sequence. This method stores the previously generated derivations in a stack, performs a sequence of \\textit{shift} and \\textit{build} operations to generate \\LFs. In its feature model, a context is represented by a sequence of past \\LFs and a random variable denoting the current world state.\nUtterances and \\MRs histories form a context of a \\CdSP parser. The common practice is to extract handcrafted features from both utterances and \\MRs to represent contexts. Some typical feature patterns are as follows:", "id": "a7325d38-7cf4-41b3-bf34-266fb7445dcc", "level": "subsection", "origin_cites_number": 2, "parent_id": "0888dee2-d5ac-4bd8-807b-f7439f676fed", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Context Dependent Semantic Parsing" ], [ "subsection", "Symbolic Approaches" ] ], "subsections": [ "6aaa20f2-9158-4b75-92a8-7292af7a0e72" ], "title": "Symbolic Approaches" }, { "cite_extract_rate": 0, "cites": [], "content": "In~, they consider indicator features of lexical triggers, whether the current utterance is repeated, as well as the position of the current utterance in an interaction.", "id": "6aaa20f2-9158-4b75-92a8-7292af7a0e72", "level": "paragraph", "origin_cites_number": 1, "parent_id": "a7325d38-7cf4-41b3-bf34-266fb7445dcc", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Context Dependent Semantic Parsing" ], [ "subsection", "Symbolic Approaches" ], [ "paragraph", "Utterance" ] ], "subsections": [ "4ff929d2-dc36-402d-8f59-7bed1be0ef5b" ], "title": "Utterance" }, { "cite_extract_rate": 0, "cites": [], "content": "In , there is a 
feature indicating if the predicates exist in the history \\LFs. Such a feature allows the model to learn to copy the segments from the context that contain the expected predicates. \\newcite{long2016simpler} adopts the feature indicating if the argument in the current \\MR is one of the arguments in the last \\MR. \\newcite{srivastava2017parsing} uses the combinations of predicates in successive turns as the indicator features.", "id": "4ff929d2-dc36-402d-8f59-7bed1be0ef5b", "level": "paragraph", "origin_cites_number": 1, "parent_id": "6aaa20f2-9158-4b75-92a8-7292af7a0e72", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Context Dependent Semantic Parsing" ], [ "subsection", "Symbolic Approaches" ], [ "paragraph", "Utterance" ], [ "paragraph", "Meaning Representations" ] ], "subsections": [], "title": "Meaning Representations" }, { "cite_extract_rate": 1, "cites": [ 6908 ], "content": "\\begin{figure}\n \\centering\n \\includegraphics[width=1\\textwidth]{figs/SQL_example-cropped.pdf}\n \\label{fig:semantic_tree}\n \\caption{Coreference resolution architecture of . Considering the example in Table \\ref{tab:CoSQL}, \\newcite{chen2019context} first generates a \\MR template for $Q_2$ as \"SELECT max(petage), \\textit{REF} FROM pets GROUP BY \\textit{REF}\". The \\textit{REF} tokens would then be replaced with the \"pettype\" from the precedent \\MR. }\n \\label{fig:sql_example}\n\\end{figure}\nExisting neural \\CdSP methods extend the \\Seq architecture to incorporate contextual information in two ways. The first approach is to build context-aware encoders to encode historical utterances or \\MRs into neural representations, which provide decoders with contextual information to resolve ambiguity in current utterances. 
As previously predicted \\MRs provide the constraints and information missed in current utterances, the second approach is to utilize context-aware decoders to reuse or revise those predicted \\MRs for generating current \\MRs.", "id": "4f2c4e8f-03a9-4d2a-875a-0025d4eaf479", "level": "subsection", "origin_cites_number": 1, "parent_id": "0888dee2-d5ac-4bd8-807b-f7439f676fed", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Context Dependent Semantic Parsing" ], [ "subsection", "Neural Approaches" ] ], "subsections": [ "28692668-fe8c-48f8-82c5-52ec47621cd5" ], "title": "Neural Approaches" }, { "cite_extract_rate": 0.818181818181818, "cites": [ 6899, 6898, 6909, 1888, 6911, 6910, 6896, 168, 1883 ], "content": "Encoders of \\CdSP methods differentiate between utterance encoders and \\MR encoders. Utterance encoders construct neural representations for both current and historical utterances, while \\MR encoders build neural representations based on historical \\MRs.\nUtterance encoders aim to embed rich information hidden in utterances into fixed-length representations, which provide contextual information in addition to current utterances for decoders. They first apply an RNN to map each utterance into a fixed-size continuous vector. Then there are three ways to encode utterances in context into a fixed-size neural representation.\n\\begin{itemize}\n \\item \n For each utterance in a dialogue, a straightforward method is to concatenate its previous $k - 1$ utterances with the current utterance in order and encode them with the RNN~.\n As a result, decoders have access to information in at most $k$ utterances. However, this method fails to access information beyond the $k$ utterances. 
In addition, it is computationally expensive because if an utterance belongs to multiple contexts, it would be repeatedly encoded for modelling all the contexts.\n \\item To overcome the above weakness, an alternative method is to treat a sub-sequence of utterances up to time $t$ as a sequence of vectors, and project them into a \\textit{discourse state} vector by using a turn-level RNN~. In other words, those models apply hierarchical RNNs to map each context into a fixed-size vector. In this method, each utterance is encoded only once and reused for modelling different contexts. However, this approach often leads to significant information loss~ due to the challenges imposed by encoding sequences of utterances into single vectors. \n \\item In order to focus on history utterances most relevant to current decoder states or utterances, soft attention~ is applied to construct context vectors. The query vectors are either the hidden state of a decoder~ or an utterance vector~. To encode positional information, token embeddings of history utterances are concatenated with their position embeddings~, which encode the positions of history utterances relative to the current utterances. This method reflects the observation that similar utterances tend to share relevant information, such as references of the same entities. Both discourse states and attended representations are also widely used by neural dialogue models~ and thus suffer from the same problems caused by composition complexity. As a result, the trained models are found insensitive to utterance order and word order in context~.\n\\end{itemize}\n\\MR encoders construct a neural context representation at time $t$ based on the \\MRs predicted before $t$. As \\MRs are expressed in a formal language, \\MR encoders also apply RNNs to encode each \\MR or segments of \\MRs into embedding vectors. 
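The attention-based option in the list above can be sketched with plain dot-product attention: history-utterance vectors are scored against a query vector and combined into a context vector (a toy sketch; the dimensions and vectors are invented):

```python
import math

def attend(query, history):
    '''Dot-product attention over history utterance vectors.

    Returns (context_vector, attention_weights): a softmax-weighted sum of the
    history vectors, where each weight scores one vector against the query.'''
    scores = [sum(q * h for q, h in zip(query, vec)) for vec in history]
    m = max(scores)                       # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(query)
    context = [sum(w * vec[d] for w, vec in zip(weights, history)) for d in range(dim)]
    return context, weights

# Two 2-d history encodings; the query is most similar to the second one.
history = [[1.0, 0.0], [0.0, 1.0]]
context, weights = attend([0.1, 2.0], history)
assert weights[1] > weights[0]  # attention focuses on the more similar utterance
```

In the surveyed models, the query is a decoder hidden state or the current utterance vector, and position embeddings can be added to the history vectors before scoring.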
Then \\MR encoders build context representations of historical \\MRs in the same spirit as utterance encoders. In~, they only concatenate the embeddings of the $k$ most recent history \\MR tokens as they assume the current \\MR is always an extension of previous \\MRs. In~, a bidirectional RNN is applied to construct a vector for each segment, which is extracted from historical \\MRs. Soft attention is also applied in~ for building context vectors, which uses the current hidden state of their decoder as the query vector to attend over the token embeddings of the previous \\MR.", "id": "28692668-fe8c-48f8-82c5-52ec47621cd5", "level": "paragraph", "origin_cites_number": 11, "parent_id": "4f2c4e8f-03a9-4d2a-875a-0025d4eaf479", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Context Dependent Semantic Parsing" ], [ "subsection", "Neural Approaches" ], [ "paragraph", "Context-aware Encoders" ] ], "subsections": [ "8186c75f-b8c3-4431-b509-a482cbf52d9e" ], "title": "Context-aware Encoders" }, { "cite_extract_rate": 1, "cites": [ 6899, 6898, 167, 1888, 6908, 6896, 6912 ], "content": "Decoders in \\CdSP models produce \\MRs based on the neural representations provided by their encoders. Such a decoder yields an \\MR by generating a sequence of \\MR tokens according to the model distribution $P(\\mathbf{y}| \\mathbf{x}, \\mathbf{C})$, where $\\mathbf{C}$ denotes context information. There are three major ways to utilize context information.\nOne key problem of \\CdSP is incomplete information in current utterances. The straightforward way is to take neural context representations $\\mathbf{C}$ as additional input of decoders, which are yielded by context-aware encoders. Those context representations contain information from previous utterances, historical \\MRs, or both. The decoders take them as input by concatenating them with the ones from current utterances at each decoding step~. 
Thus, the quality of decoding depends tightly on the quality of contextual encoding, which is still a challenging problem~.\n\\MRs of current utterances often contain segments from previous \\MRs~. The shared parts are references to previously mentioned entities or constraints implied by context. Reuse of \\MR segments is realized by a designated \\textit{copy} component, which selects a segment to copy when the probability of copying is high. As decoders in \\Seq produce a sequence of decisions for each input, the corresponding model generates a sequence of mixed decisions, including both \\textit{copy} of segments and generation of new \\MR tokens. In a similar manner, copying of \\MR tokens from previous \\MR is proposed in~.\nCoreference resolution is explicitly addressed in~\\newcite{chen2019context}. As illustrated by the example in Figure \\ref{fig:sql_example}, a special token \\textit{REF} is introduced in the output vocabulary for denoting if an entity in the preceding \\MR is referred in that utterance. If that is the case, the corresponding entity token is copied from the previous \\MR to replace the \\textit{REF} token via a pointer network module~.", "id": "8186c75f-b8c3-4431-b509-a482cbf52d9e", "level": "paragraph", "origin_cites_number": 7, "parent_id": "28692668-fe8c-48f8-82c5-52ec47621cd5", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Context Dependent Semantic Parsing" ], [ "subsection", "Neural Approaches" ], [ "paragraph", "Context-aware Encoders" ], [ "paragraph", "Context-aware Decoders" ] ], "subsections": [], "title": "Context-aware Decoders" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 6898, 6912, 6913, 7744 ], "content": "\\begin{figure}\n \\centering\n \\includegraphics[width=1\\textwidth]{figs/copy_action.pdf}\n \\caption{The symbolic memory architecture of . 
Considering the example in Table \\ref{tab:CoSQL}, \\newcite{guo2018dialog} defines different types of actions, $A_{ac}$ and $A_{ec}$, to copy the action sequence $A_{10}, A_{12}, A_{13}$ and the entity \\textit{pettype} from the symbolic memory, respectively.}
 \\label{fig:copy_action}
\\end{figure}
Neural-symbolic approaches introduce grammar into the decoding process or utilize symbolic representations as intermediate representations, while applying neural nets for representation learning. They take advantage of both the good context representations obtained by neural nets and the reduced decoding complexity due to the constraints introduced by grammars. In existing work, those approaches regard the generation of an \\MR as the prediction of a sequence of actions. Neural-symbolic methods normally take the same methods as the neural approaches to encode the contextual information. What differentiates them is that neural-symbolic approaches can handle context by i) designing specific actions, and ii) utilizing symbolic context representations.
The context-specific actions proposed in~ adopt the \\textit{copy} mechanism to reuse the previous \\MRs. \\textsc{CAMP}~ includes three actions to copy three different SQL clauses from precedent queries. 
\\newcite{liu2020FarAwayContextModelingSP} allows copying of any actions or subtrees from precedent SQL queries. The \\textit{subsequent} action in~ adds SQL conditions from the previous query into the current semantic parse to address the ellipsis problem. Different from other approaches, \\newcite{iyyer2017search} uses \\textsc{DynSP}, which has a neural network structure similar to that of the DNMN, instead of the \\Seq, to generate the action sequences.
Production rules are also used to explicitly address coreference resolution. In~, the authors defined four actions to instantiate the entities, predicates, types and numbers. 
Then the pointer network is utilized to find mentions of the four entry semantic categories in the current and history utterances. The entities in utterances are later mapped to entities in knowledge bases by using their entity linking tool.
Instead of directly copying from previous \\MRs, the parser \\textsc{Dialog2Action}~ incorporates a dialogue memory, which maintains \\textit{symbolic} representations of entities, predicates and action subsequences from an interaction history (Figure \\ref{fig:copy_action}). That parser defines three types of designated actions to copy entities, predicates and action subsequences from the memory, respectively. Instead of decisively copying from memory, each type of action probabilistically selects the corresponding segments conditioned on the \\textit{symbolic} representations, which are later integrated into the generated action sequences. 
\\newcite{guo2019coupling} employs the same neural-symbolic models as in to capture contextual information. Different from other approaches, \\newcite{guo2019coupling} adopts a meta-learning approach to improve the generalization ability of \\CdSP models. Inspired by , \\newcite{guo2019coupling} utilizes the context from other interactions to guide the learning of \\CdSP over utterances within current interactions via MAML. \\newcite{guo2019coupling} considers an input utterance $x_i$ and its context $C_i$ as an instance. A context-aware retriever retrieves instances that are semantically close to the current instance from other interactions. 
When learning model parameters, the retrieved instances and the current instances are considered as the support set and test set, respectively, and grouped as tasks as in the common MAML paradigm.", "id": "a42c11dd-cba0-4190-ab4f-e20c9525e938", "level": "subsection", "origin_cites_number": 6, "parent_id": "0888dee2-d5ac-4bd8-807b-f7439f676fed", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Context Dependent Semantic Parsing" ], [ "subsection", "Neural-Symbolic Approaches" ] ], "subsections": [], "title": "Neural-Symbolic Approaches" }, { "cite_extract_rate": 1, "cites": [ 6898 ], "content": "In~, 13 different context modeling methods for both neural and neural-symbolic \\CdSP parsers were evaluated on two benchmark datasets. None of those methods achieves consistently superior results over the others in all experimental settings. Among them, concatenation of $k$ recent utterances for decoders and copy of parse actions from precedent \\MRs are the top-performing ones in most settings. \\newcite{liu2020FarAwayContextModelingSP} defines 12 fine-grained types, summarized in multiple hierarchies according to the contextual linguistic phenomena, and inspects how different linguistic phenomena influence model behavior. One interesting conclusion is that the methods in their experiments all perform poorly on the instances involving coreference problems that require complex inference. Note, however, that the methods in that study were not compared with the ones featuring explicit coreference resolution. 
Another interesting finding is that all the models perform better on utterances that only augment the semantics of previous utterances than on utterances that substitute part of the semantics of the precedent utterances.", "id": "640e3315-aedb-4aa7-abfb-05ad7be7ec1b", "level": "subsection", "origin_cites_number": 1, "parent_id": "0888dee2-d5ac-4bd8-807b-f7439f676fed", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Context Dependent Semantic Parsing" ], [ "subsection", "Comparison between Different \\CdSP Approaches" ] ], "subsections": [], "title": "Comparison between Different \\CdSP Approaches" }, { "cite_extract_rate": 1, "cites": [ 6899, 6914, 3063, 6916, 6915 ], "content": "Feedback/Interactive Semantic Parsing is another line of research in semantic parsing that utilizes context to refine \\MRs in an iterative manner. Most Feedback Semantic Parsing systems~ start by using an \\CiSP parser to parse a given utterance into an initial \\MR. Then the \\MR is interpreted in natural language and sent to a user. The user provides feedback, based on which the systems revise the initial parse. The process repeats until convergence. Therefore, in Feedback Semantic Parsing, interaction histories are only used to revise the parses. In contrast, \\CdSP focuses on modelling the dependencies between the utterances. \\newcite{elgohary2020speak} empirically compares \\CdSP with Feedback Semantic Parsing. They train a \\CdSP model, EditSQL~, on two \\CdSP datasets, \\Sparc and \\Cosql, and evaluate it on the test set of a feedback semantic parsing dataset, \\SPLASH. 
The accuracies are merely 3.4\\% and 3.2\\%, respectively, indicating that the two tasks are distinct, addressing different aspects of context.", "id": "6d868391-5ac7-479e-9463-c4e87430bb7a", "level": "subsection", "origin_cites_number": 5, "parent_id": "0888dee2-d5ac-4bd8-807b-f7439f676fed", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Context Dependent Semantic Parsing" ], [ "subsection", "Comparison between \\CdSP and Feedback Semantic Parsing" ] ], "subsections": [], "title": "Comparison between \\CdSP and Feedback Semantic Parsing" }, { "cite_extract_rate": 0, "cites": [], "content": "\\input{tab-example-data.tex}
Table \\ref{tab:context_data} summarizes the basic properties and statistics of existing \\CdSP datasets. There are two scenarios in the \\CdSP datasets: Single-party Scenarios and Multi-party Scenarios. In the former scenarios, the user utterances are translated into \\MRs to obtain the execution results from the programming environment. In the latter scenarios, systems respond to the users in natural language based on the user utterances and the execution results.
The user utterances are manually labeled with different types of annotations, including \\MRs, denotations, and dialogue acts. The system responses are usually annotated with the dialogue acts. 
We especially highlight those annotations that explicitly reflect contextual dependencies of utterances in the sequel.", "id": "37d74331-727a-46c3-844f-3da36f456e56", "level": "section", "origin_cites_number": 0, "parent_id": "321e790e-018a-41e3-8473-e7e9883a1392", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Datasets and Resources" ] ], "subsections": [ "f2b8d82a-6953-4c79-b18d-ac672c464017", "f428c8ab-b908-460e-bb23-7a56d331e087" ], "title": "Datasets and Resources" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "f2b8d82a-6953-4c79-b18d-ac672c464017", "level": "subsection", "origin_cites_number": 0, "parent_id": "37d74331-727a-46c3-844f-3da36f456e56", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Datasets and Resources" ], [ "subsection", "Scenarios" ] ], "subsections": [ "1b5a8458-fd5b-4876-ba50-10c54b23220a" ], "title": "Scenarios" }, { "cite_extract_rate": 0.44444444444444403, "cites": [ 3063, 8303, 6908, 6907 ], "content": "In \\Sparc~, \\Sqa~ and \\Atis, the user utterances within each interaction are around a topic described by the provided text. To collect \\Sparc and \\Sqa, crowd-workers are asked to raise questions to obtain the information that answers questions sampled from other corpora~. But the assumption for \\Sqa is that the answers to the current question must be a subset of the answers from the last turn. In \\Atis~, crowd-workers raise questions around the detailed scripts describing air travel planning scenarios. 
\\TemporalStructure~ and \\TimeExp~ particularly focus on addressing temporal-related dependencies. In \\TemporalStructure, human users or simulators raise natural language questions chronologically towards a knowledge base. The facts in the knowledge base are organized in time series. Therefore, the questions in \\TemporalStructure are rich with time expressions. 
\\TimeExp only annotates temporal mentions (text segments that describe time expressions) instead of complete questions. All the mentions are drawn from time-expression-rich corpora.
\\Scone~ and \\CSQA~ use semi-automatic approaches to simulate the contextual dependency. Each interaction in \\Scone is merely labeled with an initial denotation and an end denotation. The denotations in \\Scone are regarded as states that can be manipulated by the programs. Within each interaction, multiple candidate sequences of programs are automatically generated, while only the sequence of programs that correctly transitions the initial state to the end state is kept and described in natural language by the crowd-workers. To create the \\CSQA dataset, the crowd-workers are asked to raise questions that can be answered from single fact tuples (e.g. relation: \\textit{CEO}, subject: \\textit{Google}, object: \\textit{Sundar Pichai}) in the knowledge graph or from complex facts that are compositions of multiple tuples. To create coherent dependency among questions, the questions that share relations or entities are placed next to each other. Crowd-workers then manually modify the questions such that the sequence of questions includes contextual linguistic properties such as ambiguity, underspecification or coreference. 
It is worth mentioning that, with this method, \\CSQA includes the largest number of interactions to date, over 200k.", "id": "1b5a8458-fd5b-4876-ba50-10c54b23220a", "level": "paragraph", "origin_cites_number": 9, "parent_id": "f2b8d82a-6953-4c79-b18d-ac672c464017", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Datasets and Resources" ], [ "subsection", "Scenarios" ], [ "paragraph", "Single-party Scenarios" ] ], "subsections": [ "164c4a3e-7159-47d0-82af-d206060db598" ], "title": "Single-party Scenarios" }, { "cite_extract_rate": 0, "cites": [], "content": "Similar to the scenario of \\Sparc, to obtain the answers to the questions sampled from \\Spider~, the conversations in \\Cosql~ are conducted between two human interlocutors, who play the roles of user and system, respectively. The dialogues in \\SpaceBook~ take place in scenarios formed by routing requests. One human interlocutor pretends to be a tourist walking around Edinburgh while another interlocutor plays the role of a system responding to the tourist. The conversations in \\EmailDiag are between the human agent and an email assistant instead of two humans.", "id": "164c4a3e-7159-47d0-82af-d206060db598", "level": "paragraph", "origin_cites_number": 3, "parent_id": "1b5a8458-fd5b-4876-ba50-10c54b23220a", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Datasets and Resources" ], [ "subsection", "Scenarios" ], [ "paragraph", "Single-party Scenarios" ], [ "paragraph", "Multi-party Scenarios" ] ], "subsections": [], "title": "Multi-party Scenarios" }, { "cite_extract_rate": 0, "cites": [], "content": "The contextual linguistic phenomena in the \\CdSP corpora are quite close to the phenomena in the corpora of tasks such as document-level machine translation, question answering, and dialogue systems. 
However, in \\CdSP datasets, the contextual linguistic phenomena have a tight relation with the annotations.
\\newcite{iyyer2017search} and \\newcite{vlachos2014new} defined specific components in the \\MR languages of \\Sqa and \\SpaceBook to explicitly model the context dependency. \\Sqa introduced a keyword \\textit{subsequent}. All the answers of \\MR statements after \\textit{subsequent} must be a subset of the answers of the precedent \\MR. In the language of \\SpaceBook, to resolve the coreference problem, a special predicate \\textit{equivalent} can indicate identical entities across questions at different turns.
The context dependency can also be reflected by some properties of annotations. \\newcite{yu2019sparc} analyzed semantic changes over turns in \\Sparc by calculating the overlapping percentage of tokens between the SQL annotations at different turns. In \\Sparc, the average overlapping percentage increases at later turns within one interaction, as users tend to narrow down their topics over turns. Both \\newcite{yu2019sparc} and \\newcite{liu2020FarAwayContextModelingSP} categorized the contextual phenomena in \\Sparc into fine-grained types and calculated their frequencies. \\newcite{yu2019sparc} found that some SQL representations correspond to certain contextual phenomena types. For instance, for questions of the \\textit{theme-entity} type, which means the current question and the precedent question are around the same entities but request different properties, the corresponding SQL representations have the same \\textit{FROM} and \\textit{WHERE} clauses. But the SQL representations for other types may vary.
For the datasets \\SpaceBook and \\Cosql, \\newcite{yu2019cosql} and \\newcite{vlachos2014new} label utterances with dialogue acts along with \\MRs. Different from , \\newcite{vlachos2014new} integrated the dialogue acts into the \\MRs. 
The dialogue acts can be considered as the overall functions of the utterances, while different dialogue acts reflect different properties of utterances. For example, in \\Cosql, the unanswerable questions that cannot be parsed into SQLs are labeled with dialogue acts such as \\textit{NOT\\_RELATED}, \\textit{CANNOT\\_UNDERSTAND}, or \\textit{CANNOT\\_ANSWER}. The ambiguous questions that need to be clarified are labeled with \\textit{AMBIGUOUS}. The following questions are then labeled with \\textit{CLARIFY}. The dialogue acts could provide additional contextual information for \\CdSP.", "id": "f428c8ab-b908-460e-bb23-7a56d331e087", "level": "subsection", "origin_cites_number": 1, "parent_id": "37d74331-727a-46c3-844f-3da36f456e56", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Datasets and Resources" ], [ "subsection", "Context and Annotations" ] ], "subsections": [], "title": "Context and Annotations" }, { "cite_extract_rate": 0, "cites": [], "content": "\\CdSP is distinguished from \\CiSP by its context modelling and its use of context information in the parsing process to complete missing information in \\MRs. Despite significant progress in recent years, there are still multiple directions worth pursuing.", "id": "117c71f7-624f-4f92-9105-d8b545d244ba", "level": "section", "origin_cites_number": 0, "parent_id": "321e790e-018a-41e3-8473-e7e9883a1392", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Challenges and Future Directions" ] ], "subsections": [ "9f019292-b9cc-44f9-a0be-f8837ae153df" ], "title": "Challenges and Future Directions" }, { "cite_extract_rate": 0, "cites": [], "content": "\\newcite{yu2018spider} and \\newcite{liu2020FarAwayContextModelingSP} analyzed the influence of different types of contextual information on \\CdSP methods. 
Despite some empirical results, we still lack a thorough understanding of the pros and cons of each type of context in relation to the parsing task. For example, in which cases should parsers extract information from \\MRs in context instead of utterances? Apart from ellipsis and coreference resolution, are there other linguistically motivated problems in context that current parsers have not yet addressed?", "id": "9f019292-b9cc-44f9-a0be-f8837ae153df", "level": "paragraph", "origin_cites_number": 0, "parent_id": "117c71f7-624f-4f92-9105-d8b545d244ba", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Challenges and Future Directions" ], [ "paragraph", "Analysis of Linguistic Phenomena Benefiting from Context" ] ], "subsections": [ "73fd17e5-1eb7-4596-820a-f4fb6a6042a2" ], "title": "Analysis of Linguistic Phenomena Benefiting from Context" }, { "cite_extract_rate": 0, "cites": [], "content": "Current \\CdSP approaches fall within the scope of near-side pragmatics, in particular reference resolution, and current \\CdSP datasets (e.g. \\Cosql, \\SpaceBook) consider dialogue acts as merely the overall function of the utterances~. However, \\textit{far-side pragmatics} focuses on what happens beyond what is said, including implicatures and communicative intentions. Incorporating far-side pragmatics in semantic parsing will be especially useful for completely understanding dialogues. 
Thus, there is a need to create large corpora annotated with rich information about various aspects of pragmatics for both training and evaluation.", "id": "73fd17e5-1eb7-4596-820a-f4fb6a6042a2", "level": "paragraph", "origin_cites_number": 1, "parent_id": "9f019292-b9cc-44f9-a0be-f8837ae153df", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Challenges and Future Directions" ], [ "paragraph", "Analysis of Linguistic Phenomena Benefiting from Context" ], [ "paragraph", "Incorporating Far-side Pragmatics" ] ], "subsections": [ "e8038842-2355-4479-9735-294e34339301", "f4cff5dd-1eff-4533-adfc-1fcd6b740b89" ], "title": "Incorporating Far-side Pragmatics" }, { "cite_extract_rate": 0.75, "cites": [ 969, 6898, 3699 ], "content": "A key challenge of context-based modelling is the composition complexity caused by highly varying context. The empirical results in~ show that the SOTA models can capture nearby context information well, but capturing long-range dependencies in context remains challenging~. One possible direction is to find the underlying causal structure~, which should be sparse and explain well which contextual information leads to current utterances. If we can focus only on the key reasons in context that lead to changes of \\MRs, the influence of noisy information and the overfitting of models are expected to decrease significantly. 
Another potential benefit of understanding causal structures in context is to improve the robustness of parsers by ignoring non-robust features~.", "id": "e8038842-2355-4479-9735-294e34339301", "level": "paragraph", "origin_cites_number": 4, "parent_id": "73fd17e5-1eb7-4596-820a-f4fb6a6042a2", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Challenges and Future Directions" ], [ "paragraph", "Analysis of Linguistic Phenomena Benefiting from Context" ], [ "paragraph", "Incorporating Far-side Pragmatics" ], [ "paragraph", "Causal Structure Discovery in Context" ] ], "subsections": [], "title": "Causal Structure Discovery in Context" }, { "cite_extract_rate": 1, "cites": [ 6917 ], "content": "Since most \\CdSP datasets are small in terms of the number of utterances and interactions, addressing the low-resource problem in \\CdSP is a promising direction. Meta-learning approaches, such as the MAML \\CdSP in , could be one way to address this issue. Other typical methods for low-resource settings, including weak supervision, data augmentation, semi-supervised learning, and self-supervised learning, could be further investigated in \\CdSP scenarios.
\\section*{Acknowledgements}
We thank Xuanli He and anonymous reviewers for their useful suggestions. 
\\bibliographystyle{coling}
\\bibliography{coling2020}
\\end{document}", "id": "f4cff5dd-1eff-4533-adfc-1fcd6b740b89", "level": "paragraph", "origin_cites_number": 1, "parent_id": "73fd17e5-1eb7-4596-820a-f4fb6a6042a2", "prefix_titles": [ [ "title", "Context Dependent Semantic Parsing: A Survey" ], [ "section", "Challenges and Future Directions" ], [ "paragraph", "Analysis of Linguistic Phenomena Benefiting from Context" ], [ "paragraph", "Incorporating Far-side Pragmatics" ], [ "paragraph", "Low-resource \\CdSP" ] ], "subsections": [], "title": "Low-resource \\CdSP" } ]
71
[ 6899, 6900, 6898, 6897, 6896, 6901, 166, 2401, 6902, 3605, 38, 6904, 9083, 4279, 6903, 6905, 6906, 6907, 6908, 6909, 1888, 6911, 6910, 168, 1883, 167, 6912, 6913, 7744, 6914, 3063, 6916, 6915, 8303, 969, 3699, 6917 ]
1.188365
[ "Devis Tuia", "Michele Volpi", "Loris Copa", "Mikhail Kanevski", "Jordi Munoz-Mari" ]
A survey of active learning algorithms for supervised remote sensing image classification
2021
2021-04-15T21:36:59Z
cs.CV
\textbf{This is the pre-acceptance version; to read the final version published in 2011 in the IEEE Journal of Selected Topics in Signal Processing (IEEE JSTSP), please go to: \href{https://doi.org/10.1109/JSTSP.2011.2139193}{10.1109/JSTSP.2011.2139193}}\\ Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership and then the user is asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee, large margin and posterior probability-based. For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a good architecture are provided for new and/or inexperienced users.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "32f5c325-ad32-4a99-bf88-0a6a7380d5c1", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ] ], "subsections": [ "9f8d3c3d-dd73-4be2-949f-6388307ef8c8", "0c24f63b-6c0b-4f7c-a826-45cfbfb05ce5", "9c33e713-e531-4890-ab4a-b33425a0f7dd", "4355cda7-2c05-483f-8ba7-6a88643cc3e2", "544044ff-dee5-45d4-b4e9-335bae0209f4", "fc8866e1-005e-4ac4-be0e-43641ad50928", "16c4e1fd-cca0-4c03-98cb-9399b4dc0f51", "fba5b779-cfe8-4a33-833a-07b334d324ec", "132e66fa-8f0c-4248-b231-33f0f600a11b" ], "title": "root" }, { "cite_extract_rate": 0.05, "cites": [ 8400 ], "content": "\\PARstart{N}{owadays}, the recourse to statistical learning models~ is a common practice for remote sensing data users; models such as Support Vector Machines (SVM,~) or neural networks~ are considered as state of the art algorithms for the classification of landuse using new generation satellite imagery~. Applications of such models to very high spatial~ or spectral~ resolution have proven their efficiency for handling remote sensing data.\nHowever, the performances of supervised algorithms strongly depend on the representativeness of the data used to train the classifier~. This constraint makes the generation of an appropriate training set a difficult and expensive task requiring extensive manual analysis of the image. This is usually done by visual inspection of the scene or by field surveys and successive labeling of each sample.\nIn the case of field surveys, which is usual for medium resolution, hyperspectral or SAR images, the discovery of a new label is expensive -- both in terms of time and money -- because it involves terrain campaigns. Therefore, there is a limit to the number of pixels that can be acquired. For this reason, compact and informative training sets are needed. 
\nIn the case of visual inspection or photo-interpretation, more common in VHR imagery, it is easier to collect data samples, since the labeling can be done directly on the image. However, the labeling is often done by mass selection on screen and several neighboring pixels carrying the same information are included. As a consequence, the training set is highly redundant. Such a redundancy considerably slows down the training phase of the model. \\mitch{, because several pixels carrying the same information are evaluated.}{} Moreover, the inclusion of noisy pixels may result in a wrong representation of the class statistics, which may lead to poor classification performances and/or overfitting~. For these reasons, a training set built by photointerpretation should also be kept as small as possible and focused on those pixels effectively improving the performance of the model.\nSumming up, besides being small, a desirable training set must be constructed in a smart way, meaning it must represent correctly the class \\mitch{statistics and covering the entirety of the data variability}{boundaries by sampling discriminative pixels}. This is particularly critical in very high spatial and spectral resolution image classification, which deal with large \\mitch{}{and/or} complex features spaces using limited training information only~. \nIn the machine learning literature this approach to sampling is known as \\emph{active learning}. The leading idea is that a classifier trained on a small set of well-chosen examples can perform as well as a classifier trained on a larger number of randomly chosen examples, while it is computationally cheaper~. Active learning focuses on the interaction between the user and the classifier. In other words, the model returns to the user the pixels whose classification outcome is the most uncertain. After accurate labeling by the user, pixels are included into the training set in order to reinforce the model. 
This way, the model is optimized on well-chosen difficult examples, maximizing its generalization capabilities.
The active learning framework has demonstrated its effectiveness when applied to large datasets needing accurate selection of examples~. This is suitable for remote sensing applications, where the number of pixels among which the search is performed is large and manual definition is - as stated above - redundant and time-consuming.
As a consequence, active learning algorithms have gained increasing interest in remote sensing image processing and several approaches have been proposed to solve image classification tasks. This paper presents the general framework of active learning and reviews some of the methods that have been proposed in the remote sensing literature. Note that this survey only covers remote sensing applications of active learning principles: for a general introduction and survey of the most recent developments in the machine learning community, please refer to~.
The remainder of the paper is organized as follows: Section~\\ref{sec:AL} presents the general framework of active learning and the families of methods that will be detailed in Sections~\\ref{sec:committee} to~\\ref{sec:poster}, as well as the references to specific methods. Section~\\ref{sec:data} presents the datasets considered in the experiments. Section~\\ref{sec:res} compares the different approaches numerically. Section~\\ref{sec:disc} gives an overview and guidelines for potential users. 
Section~\\ref{sec:concl} concludes the paper.", "id": "9f8d3c3d-dd73-4be2-949f-6388307ef8c8", "level": "section", "origin_cites_number": 20, "parent_id": "32f5c325-ad32-4a99-bf88-0a6a7380d5c1", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:AL}
Let $X~=~\\{\\x_i,y_i\\}_{i=1}^l$ be a training set of labeled samples, with $\\x_i \\in \\mathcal{X}$ and $y_i \\in \\{1, ..., N\\}$. $\\mathcal{X}$ is the $d$-dimensional input space $\\in \\mathbb{R}^d$. Let also $U~=~\\{\\x_i\\}_{i=l+1}^{l+u} \\in \\mathcal{X}$, with $u \\gg l$, be the set of unlabeled pixels to be sampled, or the \\emph{pool of candidates}. 
Active learning algorithms are iterative sampling schemes, where a classification model is adapted regularly by feeding it with new labeled pixels corresponding to the ones that are most beneficial for the improvement of the model performance. These pixels are usually found in the areas of \\emph{uncertainty} of the model and their inclusion in the training set forces the model to resolve the regions of low confidence. For a given iteration $\\epsilon$, the algorithm selects from the pool $U^\\epsilon$ the $q$ candidates that will at the same time maximize the gain in performance and reduce the uncertainty of the model when added to the current training set $X^\\epsilon$. Once the batch of pixels $S^\\epsilon = \\{\\x_m\\}_{m=1}^q \\subset U$ has been selected, it is labeled by the user, i.e. the labels $\\{y_m\\}_{m=1}^q$ are discovered. Finally, the set $S^\\epsilon$ is both added to the current training set ($X^{\\epsilon+1} = X^\\epsilon \\cup S^\\epsilon$) and removed from the pool ($U^{\\epsilon+1} = U^\\epsilon \\backslash S^\\epsilon$). The process is iterated until a stopping criterion is met. 
Algorithm~\\ref{alg:AL} summarizes the active selection process. From now on, the iteration index $\\epsilon$ will be omitted in order to ease notation. 
\\begin{algorithm}[!t]
\\caption{General active learning algorithm}
\\label{alg:AL}
\\vspace{0.2cm}
\\textbf{Inputs}\\\\
- Initial training set $X^\\epsilon = \\{\\x_i,y_i\\}_{i=1}^l$ ($X \\in \\mathcal{X}$, $\\epsilon=1$).\\\\
- Pool of candidates $U^\\epsilon = \\{\\x_i\\}_{i=l+1}^{l+u}$ ($U \\in \\mathcal{X}$, $\\epsilon=1$).\\\\
- Number of pixels $q$ to add at each iteration (defining the batch of selected pixels $S$).
\\vspace{0.2cm}
\\begin{algorithmic}[1]
\\REPEAT
	\\STATE Train a model with current training set $X^\\epsilon$.
	\\FOR{each candidate in $U^\\epsilon$}
	\\STATE Evaluate a user-defined \\emph{heuristic}
	\\ENDFOR
	\\STATE Rank the candidates in $U^\\epsilon$ according to the score of the heuristic
	\\STATE Select the $q$ most interesting pixels. $S^\\epsilon = \\{\\x_k\\}_{k=1}^q$
	\\STATE The user assigns a label to the selected pixels. $S^\\epsilon = \\{\\x_k,y_k\\}_{k=1}^q$
	\\STATE Add the batch to the training set $X^{\\epsilon+1}=X^{\\epsilon} \\cup S^\\epsilon$.
	\\STATE Remove the batch from the pool of candidates $U^{\\epsilon+1}=U^{\\epsilon} \\backslash S^\\epsilon$
	\\STATE $\\epsilon = \\epsilon + 1$
\\UNTIL{a stopping criterion is met.}
\\end{algorithmic}
\\vspace{0.2cm}
\\end{algorithm}
An active learning process requires interaction between the user and the model: the former provides the labeled information and the knowledge about the desired classes, while the latter provides both its own interpretation of the distribution of the classes and the most relevant pixels that are needed in order to solve the discrepancies encountered. This point is crucial for the success of an active learning algorithm: the machine needs a strategy to rank the pixels in the pool $U$. 
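The selection process of Algorithm~\\ref{alg:AL} is model-agnostic, so it can be written once with the ranking strategy left as a pluggable function. The sketch below is our own illustration in plain Python (all names are ours, not from the surveyed systems); any scoring strategy can be passed as \\texttt{heuristic}:

```python
def active_learning(model, train, pool, oracle, heuristic, q, iterations):
    """Generic pool-based active learning loop: at each iteration the
    heuristic scores every candidate, the q highest-ranked candidates are
    labeled by the oracle (the user) and moved from the pool U into the
    training set X. `train` is a list of (sample, label) pairs."""
    for _ in range(iterations):
        if not pool:
            break
        model.fit(train)
        # rank candidates by the user-defined uncertainty heuristic
        ranked = sorted(pool, key=lambda x: heuristic(model, x), reverse=True)
        batch, pool = ranked[:q], ranked[q:]
        train = train + [(x, oracle(x)) for x in batch]   # user labels S
    model.fit(train)
    return model, train
```

Here \texttt{oracle} stands for the user who labels the selected batch $S$, and the stopping criterion is a fixed number of iterations for simplicity.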
\nThese strategies, or \\emph{heuristics}, differentiate the algorithms proposed in the next sections and can be divided into three main families~: \n\\begin{itemize}\n\t\\item[1 -] \\emph{Committee}-based heuristics (Section~\\ref{sec:committee})\n\t\\item[2 -] \\emph{Large margin}-based heuristics (Section~\\ref{sec:lm})\n\t\\item[3 -] \\emph{Posterior probability}-based heuristics (Section~\\ref{sec:poster})\n\\end{itemize}\nA last family of active learning heuristics, the \\emph{cluster}-based, has recently been proposed in the community~: cluster-based heuristics aim at pruning a hierarchical clustering tree until the resulting clusters are consistent with the labels provided by the user. Therefore, these strategies rely on an unsupervised model, rather than on a predictive model. Since the aim of these heuristics is different from that of the other families presented, they will not be detailed in this survey.", "id": "0c24f63b-6c0b-4f7c-a826-45cfbfb05ce5", "level": "section", "origin_cites_number": 2, "parent_id": "32f5c325-ad32-4a99-bf88-0a6a7380d5c1", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Active learning: concepts and definitions" ] ], "subsections": [], "title": "Active learning: concepts and definitions" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:committee}\nThe first family of active learning methods quantifies the uncertainty of a pixel by considering a committee of learners~. Each member of the committee exploits different hypotheses about the classification problem and consequently labels the pixels in the pool of candidates. The algorithm then selects the samples showing maximal disagreement between the different classification models in the committee. \nTo limit computational complexity, heuristics based on multiple classifier systems have been proposed in machine learning literature. 
In~, methods based on boosting and bagging are proposed in this sense for binary classification only. In~, results obtained by query-by-boosting and query-by-bagging are compared using several batch datasets, showing excellent performance of the heuristics proposed. Methods of this family have the advantage of being applicable to any kind of model or combination of models. \nIn the remote sensing community, committee-based approaches to active learning have been proposed exploiting two types of uncertainty: first, committees varying the pixel members have been considered in the query-by-bagging heuristic~. Then, committees based on subsets of the available feature space have been presented by Di and Crawford~. The next two sections present the algorithms proposed in these papers.", "id": "9c33e713-e531-4890-ab4a-b33425a0f7dd", "level": "section", "origin_cites_number": 7, "parent_id": "32f5c325-ad32-4a99-bf88-0a6a7380d5c1", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Committee based active learning" ] ], "subsections": [ "dd9d8542-ab43-4865-9ca1-5532a86beb4d", "d0e71651-ef54-4cae-9092-e430dfc27eb8" ], "title": "Committee based active learning" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:EQB}\nIn the implementations of , bagging is proposed to build the committee: first, $k$ training sets built on a draw with replacement of the original data are defined. These draws account for a part of the available labeled pixels only. Then, each set is used to train a classifier and to predict the $u$ labels of the candidates. At the end of the procedure, $k$ labelings are provided for each candidate pixel $\\x_i \\in U$.\nIn~, the entropy $H^{\\text{BAG}}$ of the distribution of the predictions is used as heuristic. 
In~, this measure has been subsequently normalized in order to bound it with respect to the number of classes predicted by the committee and avoid hot spots of the value of uncertainty in regions where several classes overlap. The \\emph{normalized entropy query-by-bagging} heuristic can be stated as follows:\n\\begin{equation}\n\t\\hat{\\x}^{\\text{\\emph{n}EQB}} = \\arg\\max_{\\x_i \\in U}\\Big\\{\\frac{H^{\\text{BAG}}(\\x_i)}{ \\mbox{log}(N_i)}\\Big\\}\n\\label{eq:neqb}\n\\end{equation}\nwhere\n\\begin{align}\n\t&H^{\\text{BAG}}(\\x_i) = -\\sum_{\\omega=1}^{N_i} p^{\\text{BAG}}(y_i^* = \\omega|\\x_i) \\mbox{log}\\left[p^{\\text{BAG}}(y^*_i= \\omega|\\x_i)\\right]\\\\\n\t\t& \\text{where} \\qquad p^{\\text{BAG}}(y^*_i = \\omega|\\x_i) = \\frac{\\sum_{m=1}^k \\delta(y_{i,m}^{*}, \\omega )}{\\sum_{m=1}^k\\sum_{j = 1}^{N_i} \\delta(y_{i,m}^*, \\omega_j ) } \\nonumber \n\\label{eq:entropy}\n\\end{align}\n$H^{\\text{BAG}}(\\x_i)$ is an empirical measure of entropy, $y_i^*$ is the prediction for the pixel $\\x_i$ and $p^{\\text{BAG}}(y^*_i = \\omega|\\x_i)$ is the observed probability that class $\\omega$ is predicted for the sample $\\x_i$ by the committee of $k$ models trained on $X$. $N_i$ is the number of classes predicted for pixel $\\x_i$, with $1 \\leq N_i \\leq N$. The $\\delta(y_{i,m}^*,\\omega)$ operator returns the value $1$ if the classifier using the $m$-th bag classifies the sample $\\x_i$ into class $\\omega$ and $0$ otherwise. Entropy maximization gives a naturally multiclass heuristic. A candidate for which all the classifiers in the committee agree is associated with null entropy and its inclusion in the training set does not bring additional information. 
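As an illustration, the normalized entropy of Eq.~\eqref{eq:neqb} can be computed directly from the committee votes; the vote-matrix layout below is an assumption of the example, not of the original papers.

```python
import numpy as np

def neqb_scores(votes):
    """Normalized entropy query-by-bagging (nEQB), a sketch.

    votes : (k, u) int array, votes[m, i] = class predicted for
            candidate i by the m-th bagged classifier.
    Returns one uncertainty score in [0, 1] per candidate.
    """
    k, u = votes.shape
    scores = np.zeros(u)
    for i in range(u):
        _, counts = np.unique(votes[:, i], return_counts=True)
        p = counts / k                       # empirical p(y* = w | x_i)
        H = -np.sum(p * np.log(p))           # entropy of the vote distribution
        Ni = len(counts)                     # number of classes predicted
        scores[i] = H / np.log(Ni) if Ni > 1 else 0.0
    return scores
```

A unanimous committee yields a score of 0; an evenly split one yields 1.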
On the contrary, a candidate with maximum disagreement between the classifiers results in maximum entropy, and its inclusion will be highly beneficial.", "id": "dd9d8542-ab43-4865-9ca1-5532a86beb4d", "level": "subsection", "origin_cites_number": 4, "parent_id": "9c33e713-e531-4890-ab4a-b33425a0f7dd", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Committee based active learning" ], [ "subsection", "Normalized entropy query-by-bagging (nEQB)" ] ], "subsections": [], "title": "Normalized entropy query-by-bagging (nEQB)" }, { "cite_extract_rate": 0.5, "cites": [ 8401 ], "content": "When confronted with high-dimensional data, it may be relevant to construct the committee by splitting the feature space into a number of subsets, or \\emph{views}~. Di and Crawford~ exploit this principle to generate different views of a hyperspectral image on the basis of the block-diagonal structure of the covariance matrix. 
By generating views corresponding to the different blocks, independent classifications of the same pixel can be generated and an entropy-based heuristic can be used similarly to $n$EQB.\nGiven a partition of the $d$-dimensional input space into $V$ disjoint views accounting for data subsets $\\x^v$ such that $\\bigcup_{v=1}^V \\x^v = \\x $, the `Adaptive maximum disagreement' (AMD) heuristic selects candidates according to the following rule:\n\\begin{equation}\n\t\\hat{\\x}^{\\text{AMD}} = \\arg\\max_{\\x_i \\in U} H^{\\text{MV}}(\\x_i)\n\t\\label{eq:AMD}\n\\end{equation}\nwhere the multiview entropy $H^{\\text{MV}}$ is assessed over the predictions of classifiers using a specific view $v$:\n\\begin{align}\n\t&H^{\\text{MV}}(\\x_i) = -\\sum_{\\omega=1}^{N_i} p^{\\text{MV}}(y^*_{i,v} = \\omega|\\x^v_i) \\mbox{log}\\left[p^{\\text{MV}}(y^*_{i,v}= \\omega|\\x^v_i)\\right]\\\\\n\t& \\text{where} \\qquad p^{\\text{MV}}(y^*_i = \\omega|\\x^v_i) = \\frac{\\sum_{v=1}^V W^{\\epsilon-1}(v,\\omega) \\delta(y^*_{i,v}, \\omega )}{\\sum_{v=1}^V\\sum_{j = 1}^{N_i} W^{\\epsilon-1}(v,\\omega)} \\nonumber\n\\label{eq:MVentr}\n\\end{align}\nwhere the $\\delta(y_{i,v}^*,\\omega )$ operator returns the value $1$ if the classifier using the view $v$ classifies the sample $\\x_i$ into class $\\omega$ and $0$ otherwise. $\\mathbf{W}^{\\epsilon-1}$ is an $N \\times V$ weighting matrix incorporating the abilities of discrimination between the views in the different classes. At each iteration, $\\mathbf{W}^{\\epsilon-1}$ is updated on the basis of the true labels of the pixels sampled at iteration $\\epsilon-1$:\n\\begin{equation}\nW^\\epsilon(v,\\omega) = W^{\\epsilon-1}(v,\\omega)+\\delta(y_{i,v},\\omega ), \\quad \\forall i \\in S\n\\label{eq:W}\n\\end{equation}\nand its columns are normalized to a unitary sum. This matrix weights the confidence of each view to predict a given class. 
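A simplified sketch of the weighted multiview vote behind Eqs.~\eqref{eq:AMD}--\eqref{eq:MVentr} is given below. The array layouts are assumptions of the example, and the normalization of the vote distribution is simplified with respect to the original formulation.

```python
import numpy as np

def amd_scores(view_votes, W):
    """Multiview entropy H^MV for each candidate (simplified sketch).

    view_votes : (V, u) int array, view_votes[v, i] = class predicted
                 for candidate i by the classifier trained on view v.
    W          : (N, V) weighting matrix, W[w, v] = confidence of view v
                 for class w (columns normalized to unit sum).
    """
    V, u = view_votes.shape
    N = W.shape[0]
    scores = np.zeros(u)
    for i in range(u):
        votes = np.zeros(N)
        for v in range(V):
            w = view_votes[v, i]
            votes[w] += W[w, v]          # vote of view v, weighted by W
        p = votes / votes.sum()          # simplified p^MV(y* = w | x_i)
        p = p[p > 0]
        scores[i] = -np.sum(p * np.log(p))
    return scores
```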
In~, the selection is done on a subset of $U$ containing the candidate pixels maximizing the uncertainty, which are the pixels for which the committee has predicted the highest number of classes. This way, the computational load of the algorithm is reduced.", "id": "d0e71651-ef54-4cae-9092-e430dfc27eb8", "level": "subsection", "origin_cites_number": 2, "parent_id": "9c33e713-e531-4890-ab4a-b33425a0f7dd", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Committee based active learning" ], [ "subsection", "Adaptive maximum disagreement (AMD)" ] ], "subsections": [], "title": "Adaptive maximum disagreement (AMD)" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:lm}\nThe second family of methods is specific to margin-based classifiers. \nMethods such as SVM are naturally good base methods for active learning: the distance to the separating hyperplane, that is the absolute value of the decision function without the sign operator, is a straightforward way of estimating the classifier confidence on an unseen sample. Let's first consider a binary problem: the distance of a sample $\\x_i$ from the SVM hyperplane is given by\n\\begin{equation}\nf(\\x_i)=\\sum_{j=1}^n \\alpha_j y_j K(\\x_j,\\x_i)+b\n\\label{eq:decfunct}\n\\end{equation}\nwhere $K(\\x_j, \\x_i)$ is a kernel, which defines the similarity between the candidate $\\x_i$ and the support vectors $\\x_j$, which are the pixels showing non-zero $\\alpha_j$ coefficients. The labels $y_j$ of the support vectors are $+1$ for samples of the positive class and $-1$ for those of the negative class. For additional information, see SVM literature in~. \nThis evaluation of the distance is the base ingredient of almost all large margin heuristics. 
Roughly speaking, these heuristics use the intuition that a sample away from the decision boundary (with a high $|f(\\x_i)|$) has a high confidence about its class assignment and is thus not interesting for future sampling.\nSince SVMs rely on a sparse representation of the data, large margin based heuristics aim at finding the pixels in $U$ that are most likely to receive a non-zero $\\alpha_i$ weight if added to $X$. In other words, the points more likely to become support vectors are the ones lying within the margin of the current model~. The heuristic taking advantage of this property is called margin sampling (MS)~. Recent modifications of MS, aiming at minimizing the risk of selecting points that will not become support vectors, can be found in~.\nMS is the most studied active learning algorithm in remote sensing. Its first application can be found in~. Modifications of the MS heuristic have been proposed in~. Later on, since no cross-information among the samples is considered in the MS, the questions of diversity in batches of samples have been considered in~. 
The next sections present the MS heuristic and subsequent modifications proposed in order to enhance diversity when selecting batches of samples.", "id": "4355cda7-2c05-483f-8ba7-6a88643cc3e2", "level": "section", "origin_cites_number": 12, "parent_id": "32f5c325-ad32-4a99-bf88-0a6a7380d5c1", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Large margin based active learning" ] ], "subsections": [ "46a14016-6a16-4632-a90d-002c27d86ea9", "9dde6920-321f-4866-969e-ae6a532e831e", "b4b3fbe3-f9dd-4869-b53d-b3ea1f2f3618", "7bae26d1-94ed-4e43-ac7f-09791c25101b" ], "title": "Large margin based active learning" }, { "cite_extract_rate": 0, "cites": [], "content": "As stated above, margin sampling takes advantage of SVM geometrical properties, and in particular of the fact that unbounded support vectors are labeled examples that lie on the margin with a decision function value of exactly one~. Consider the pool of candidates of Fig.~\\ref{fig:dist}(a) referring to a three classes toy problem. In a multiclass one-against-all setting, the distance to each hyperplane is represented by Figs.~\\ref{fig:dist}(d-f). \nThe `margin sampling' (MS) heuristic performs sampling of the candidates by minimizing Eq.~\\eqref{eq:minMarg}:\n\\begin{equation}\n\\hat{\\x}^{\\text{MS}} = \\arg\\min_{\\x_i \\in U } \\Big\\{\\min_{\\omega}|f(\\x_i,\\omega)|\\Big\\}\n\\label{eq:minMarg}\n\\end{equation}\nwhere $f(\\x_i,\\omega)$ is the distance of the sample to the hyperplane defined for class $\\omega$ in a one-against-all setting for multiclass problems. The MS heuristic for the toy problem is reported in~Fig.~\\ref{fig:dist}(b). 
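Given a matrix of one-against-all decision values, the selection of Eq.~\eqref{eq:minMarg} reduces to a few lines; the array layout below is an assumption of the example.

```python
import numpy as np

def margin_sampling(F):
    """Margin sampling (Eq. minMarg), a sketch.

    F : (u, N) array of one-against-all decision values f(x_i, w).
    Returns the index of the candidate lying closest to any hyperplane.
    """
    uncertainty = np.min(np.abs(F), axis=1)   # min_w |f(x_i, w)|
    return int(np.argmin(uncertainty))        # most uncertain candidate
```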
MS heuristic can be found in the literature under the names of `most ambiguous'~, `binary level uncertainty'~ or SVM$_\\text{SIMPLE}$~.\n\\begin{figure*}\n\\centering\n\\begin{tabular}{|ccc|}\n\\hline\n\\includegraphics[width=4.2cm]{\\figdir Tuia_1a}&\n\\includegraphics[width=4.2cm]{\\figdir distance_tot}&\n\\includegraphics[width=4.2cm]{\\figdir distance_MCLU}\\\\\n(a)&(b) MS, Eq.~\\eqref{eq:minMarg}&(c) MCLU, Eq.~\\eqref{eq:MCLUfct}\\\\\n\\includegraphics[width=4.2cm]{\\figdir distance_cl1}&\n\\includegraphics[width=4.2cm]{\\figdir distance_cl2}&\n\\includegraphics[width=4.2cm]{\\figdir distance_cl3}\\\\\n(d) $|f(\\x,\\omega_0)|$&(e) $|f(\\x,\\omega_1)|$&(f) $|f(\\x,\\omega_2)|$\\\\ \\hline\n\\end{tabular}\n\\caption{Large margin heuristics for a three classes toy example represented in subfigure (a). The color intensity represents the distance from the hyperplane, ranging from black (on the boundary) to white (maximal distance): (b) MS heuristic; (c) MCLU heuristic; areas in black are the areas of maximal uncertainty, minimizing Eq.~\\eqref{eq:minMarg} or Eq.~\\eqref{eq:MCLUfct} respectively. \nBottom row: absolute values of per-class distances (d)-(f). }\n\\label{fig:dist}\n\\end{figure*}", "id": "46a14016-6a16-4632-a90d-002c27d86ea9", "level": "subsection", "origin_cites_number": 5, "parent_id": "4355cda7-2c05-483f-8ba7-6a88643cc3e2", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Large margin based active learning" ], [ "subsection", "Margin sampling (MS)" ] ], "subsections": [], "title": "Margin sampling (MS)" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:MCLU}\nIn~, the idea of margin sampling is extended to multiclass uncertainty (see~). Instead of dealing with the most uncertain class of the SVM, the `multiclass level uncertainty' (MCLU) considers the difference between the distance to the margin for the two most probable classes. 
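In code, this difference between the two largest absolute decision values can be sketched as follows (array layout assumed as in the MS example above):

```python
import numpy as np

def mclu(F):
    """Multiclass level uncertainty (MCLU), a sketch.

    F : (u, N) array of one-against-all decision values.
    Returns the index of the candidate with the smallest difference
    between the two largest |f(x_i, w)| values.
    """
    A = np.abs(F)
    top2 = np.sort(A, axis=1)[:, -2:]   # two largest per candidate
    f_mc = top2[:, 1] - top2[:, 0]      # f(x_i)^MC
    return int(np.argmin(f_mc))
```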
\n\\begin{align}\n&\\hat{\\x}^{\\text{MCLU}} = \\arg \\min_{\\x_i \\in U} \\Big\\{f(\\x_i)^{\\text{MC}}\\Big\\}\\label{eq:MCLU}\\\\\n&\\text{where} \\qquad f(\\x_i)^{\\text{MC}} = \\max_{\\omega \\in N}|f(\\x_i,\\omega)|-\\max_{\\omega \\in N\\backslash \\omega^+}|f(\\x_i,\\omega)|\\label{eq:MCLUfct}\n\\end{align}\nwhere $\\omega^+$ is the class showing maximal confidence, i.e. the class maximizing the first term of Eq.~\\eqref{eq:MCLUfct}. A high value of this criterion corresponds to samples assigned with high certainty to the most confident class, while a small value represents unreliable classification. Fig.~\\ref{fig:dist}(c) illustrates the heuristic in comparison to MS. Although they are very similar, MCLU performs better in the area where the three classes mix, in the top-right area of the feature space: in this area, MCLU returns maximal uncertainty as it is evaluated on the basis of all the per-class decision values, while MS returns an uncertainty slightly lower than on the two-class boundaries.", "id": "9dde6920-321f-4866-969e-ae6a532e831e", "level": "subsection", "origin_cites_number": 2, "parent_id": "4355cda7-2c05-483f-8ba7-6a88643cc3e2", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Large margin based active learning" ], [ "subsection", "Multiclass level uncertainty (MCLU)" ] ], "subsections": [], "title": "Multiclass level uncertainty (MCLU)" }, { "cite_extract_rate": 0, "cites": [], "content": "In~, instead of using the distance to the hyperplane as a measure of uncertainty, the support vector coefficients are used to convert the multiclass classification problem into a binary support vector detection problem. 
In the `significance space construction' (SSC) heuristic, the training samples related to support vector coefficients are used to define a second classification function $f(\\x)^{\\text{SSC}}$, where training pixels with $\\alpha_j > 0$ (the support vectors) are classified against training pixels with $\\alpha_j = 0$. Once applied to the pool of candidates, this second classifier predicts which pixels are likely to become support vectors. \n\\begin{equation}\n\\hat{\\x}^{\\text{SSC}} = \\arg_{\\x_i \\in U} f(\\x_i)^{\\text{SSC}} > 0\n\\label{eq:SSC}\n\\end{equation}\nOnce the candidates more likely to become support vectors have been highlighted, a random selection among them is done.", "id": "b4b3fbe3-f9dd-4869-b53d-b3ea1f2f3618", "level": "subsection", "origin_cites_number": 1, "parent_id": "4355cda7-2c05-483f-8ba7-6a88643cc3e2", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Large margin based active learning" ], [ "subsection", "Significance space construction (SSC)" ] ], "subsections": [], "title": "Significance space construction (SSC)" }, { "cite_extract_rate": 0, "cites": [], "content": "In applicative scenarios, diversity among samples~ is highly desirable. Diversity concerns the capability of the model to reject candidates that rank well according to the heuristic, but are redundant among each other. Diversity has been studied extensively for margin-based heuristics, where the base margin sampling heuristic is constrained using a measure of diversity between the candidates (see Algorithm~\\ref{alg:diversity}). 
\n\\begin{algorithm}[!b]\n\\caption{General diversity based heuristic (for a single iteration)}\n\\label{alg:diversity}\n\\vspace{0.2cm}\n\\textbf{Inputs}\\\\\n- Current training set $X^{\\epsilon} = \\{\\x_i,y_i\\}_{i=1}^l$ ($X \\in \\mathcal{X}$).\\\\\n- Subset of the pool of candidates minimizing Eq.~\\eqref{eq:minMarg} or~\\eqref{eq:MCLU} $U^* = \\{\\x_i\\}$ $ (U^* \\in \\mathcal{X}$ and $U^* \\subset U^{\\epsilon}$).\\\\\n- Number of pixels $q$ to add at each iteration (defining the batch of selected pixels $S$).\n\\vspace{0.2cm}\n\\begin{algorithmic}[1]\n\\STATE Add the pixel minimizing Eq.~\\eqref{eq:minMarg} or~\\eqref{eq:MCLUfct} to $S$.\n\\REPEAT{}\n\t\\STATE Compute the user-defined diversity criterion between pixels in $U^*$ and in $S$ (with MAO, cSV or ABD).\n\t\\STATE Select the pixel $\\x_D$ maximizing diversity with current batch.\n\t\\STATE Add $\\x_D$ to current batch $S = S \\cup \\x_D$.\n\t\\STATE Remove $\\x_D$ from the current list of candidates $U^* = U^* \\setminus \\x_D$.\n\\UNTIL{batch $S$ contains $q$ elements.}\n\\STATE The user labels the selected pixels. $S = \\{\\x_k,y_k\\}_{k=1}^q$\n\\STATE Add the batch to the training set $X^{\\epsilon+1}=X^{\\epsilon} \\cup S$.\n\\STATE Remove the batch from the complete pool of candidates $U^{\\epsilon+1}=U^{\\epsilon} \\backslash S$\n\\end{algorithmic}\n\\vspace{0.2cm}\n\\end{algorithm}\nThe first heuristic proposing explicit diversity in remote sensing is found in~, where the margin sampling heuristic is constrained with a measure of the angle between candidates in feature space. This heuristic, called `most ambiguous and orthogonal' (MAO), is iterative: starting from the samples selected by MS, $U^{\\text{MS}}\\subset U$, it iteratively chooses the candidate minimizing the highest kernel similarity with the samples already included in the batch $S$. 
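For the MAO variant, the batch construction of Algorithm~\ref{alg:diversity} can be sketched as follows; the precomputed kernel matrix between candidates and the ordering of the MS subset are assumptions of the example.

```python
import numpy as np

def mao_select(K_uu, ms_order, q):
    """Greedy MAO batch construction, a sketch.

    K_uu     : (u, u) kernel matrix between candidate pixels
    ms_order : candidate indices sorted by increasing MS uncertainty
               (most uncertain first)
    q        : batch size
    """
    batch = [ms_order[0]]                # start from the MS minimizer
    remaining = list(ms_order[1:])
    while len(batch) < q and remaining:
        # highest similarity of each remaining candidate to the batch
        worst = [max(K_uu[i, j] for j in batch) for i in remaining]
        # keep the candidate least similar to the batch (most "orthogonal")
        batch.append(remaining.pop(int(np.argmin(worst))))
    return batch
```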
For a single iteration, this can be resumed as:\n\\begin{equation}\n\\hat{\\x}^{\\text{MAO}} = \\arg \\min_{\\x_i \\in U^{\\text{MS}} }\\Big\\{\\max_{\\x_j \\in S}K(\\x_i,\\x_j)\\Big\\}\n\\label{eq:MAO}\n\\end{equation}\nIn~, the MAO criterion is combined with the MCLU uncertainty estimation in the `multiclass level uncertainty - angle-based diversity' (MCLU-ABD) heuristic. The selection is performed among a subset of $U$ maximizing the MCLU criterion. Moreover, the authors generalize the MAO heuristic to any type of kernel by including normalization in feature space.\n\\begin{align}\n\\hat{\\x}^{\\text{MCLU-ABD}} = \\arg \\min_{\\x_i \\in U^{\\text{MCLU}} }&\\Bigg\\{\\lambda f(\\x_i)^{\\text{MC}} + \\\\\\nonumber\n&(1-\\lambda)\\max_{\\x_j \\in S}\\frac{K(\\x_i,\\x_j)}{\\sqrt{K(\\x_i,\\x_i)K(\\x_j,\\x_j)}}\\Bigg\\}\n\\label{eq:MCLU-ABD}\n\\end{align}\nwhere $f(\\x_i)^{\\text{MC}}$ is the multiclass uncertainty function defined by Eq.~\\eqref{eq:MCLUfct}.\nIn~, the diversity of the chosen candidates is enforced by constraining the MS solution to pixels associated to different closest support vectors. This approach ensures a certain degree of diversification in the MS heuristic, by dividing the margin in the feature space as a function of the geometrical distribution of the support vectors. 
Compared to the previously presented heuristics, this approach has the advantage of ensuring diversity with respect to the current model, but does not guarantee diversity of the samples between each other (since two close samples can be associated to different support vectors).\n\\begin{equation}\n\\hat{\\x}^{\\text{cSV}} = \\arg \\min_{\\x_i \\in U^{\\text{MS}} } \\Big\\{\\lvert f(\\x_i,\\omega)\\rvert \\ \\Big\\vert \\ cSV_i \\not\\in cSV_\\theta\\Big\\}\n\\label{eq:cSV}\n\\end{equation}\nwhere $\\theta = [1, \\ldots, q-1]$ are the indices of the already selected candidates and $cSV$ is the set of selected closest support vectors.\nFinally, diversity can be ensured using clustering in the feature space. In~, kernel $k$-means~ was used to cluster the samples selected by MCLU and select diverse batches. After partitioning the $U^{\\text{MCLU}}$ set into $q$ clusters with kernel $k$-means, the `multiclass level uncertainty - enhanced cluster based diversity (MCLU-ECBD)' selects a single pixel per cluster, minimizing the following query function:\n\\begin{equation}\n\\hat{\\x}^{\\text{MCLU-ECBD}} = \\arg \\min_{\\x_i \\in c_m}\\Big\\{ f(\\x_i)^{\\text{MC}}\\Big\\}\\text{,} \\quad m = [1, ... q], \\x_i \\in U^{\\text{MCLU}}\n\\label{eq:MCLU-ECBD}\n\\end{equation}\nwhere $c_m$ is one among the $q$ clusters defined with kernel $k$-means. \nIn~, a hierarchical extension of this principle is proposed to exclude from the selected batch the pixels more likely to become bounded support vectors. This way, the redundancy affecting samples close to each other in the feature space among different iterations is controlled along with the maximization of the informativeness of each pixel.\nIn the `informative hierarchical margin cluster sampling' (hMCS-i), a dataset composed by i) a subset of the pool of candidates optimizing the MCLU criterion ($U^{\\text{MCLU}}$) and ii) the bounded support vectors sampled at the previous iteration, is iteratively partitioned in a binary way. 
The partitioning always considers the biggest current cluster found and continues until $q$ clusters not containing a bounded support vector are found. Once the $q$ clusters have been defined, a search among the candidates falling in these $q$ clusters is performed.\n\\begin{equation}\n\\hat{\\x}^{\\text{hMCS-i}} = \\arg \\min_{\\x_i \\in c_m } \\Big\\{f(\\x_i)^{\\text{MC}} \\Big\\}\\text{,}\\quad m = [1, ..., q \\vert n^{bSV}_{c_m} = 0], \\x_i \\in U^{\\text{MCLU}}\n\\label{eq:hMCS}\n\\end{equation}", "id": "7bae26d1-94ed-4e43-ac7f-09791c25101b", "level": "subsection", "origin_cites_number": 8, "parent_id": "4355cda7-2c05-483f-8ba7-6a88643cc3e2", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Large margin based active learning" ], [ "subsection", "On the need for a diversity criterion" ] ], "subsections": [], "title": "On the need for a diversity criterion" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:poster}\nThe third class of methods uses the estimation of posterior probabilities of class membership (i.e. $p( y | \\x)$) to rank the candidates. The posterior probability gives an idea of the confidence of the class assignment (which is usually done by maximizing it over all the possible classes): by considering the change on the overall posterior distribution or the per-class distribution for each candidate, these heuristics use these probability estimates to focus sampling in uncertain areas. 
This section details two heuristics, the KL-max and the Breaking ties.", "id": "544044ff-dee5-45d4-b4e9-335bae0209f4", "level": "section", "origin_cites_number": 0, "parent_id": "32f5c325-ad32-4a99-bf88-0a6a7380d5c1", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Posterior probability based active learning" ] ], "subsections": [ "e1f6b836-abb9-4d52-abf9-04c6538aa022", "e604938a-7c05-4eae-a00c-0f7093b23169" ], "title": "Posterior probability based active learning" }, { "cite_extract_rate": 0, "cites": [], "content": "The first idea is to sample the pixels whose inclusion in the training set would maximize the changes in the posterior distribution. An application of these methods can be found in~, where the heuristic maximizes the Kullback-Leibler divergence between the distributions before and after adding the candidate. In remote sensing, a probabilistic method based on this strategy and using a Maximum Likelihood classifier can be found in~. In this setting, each candidate is removed from $U$ and included in the training set with the label maximizing its posterior probability. The Kullback-Leibler divergence KL is then computed between the posterior distributions of the models with and without the candidate. 
After computing this measure for all candidates, the pixel maximizing the following is chosen:\n{\\small\n\\begin{align}\n&\\hat{\\x}^{\\text{KL-max}} = \\\\\\nonumber &= \\arg \\max_{\\x_i \\in U } \\Bigg\\{\\sum_{\\omega \\in N} \\frac{1}{(u-1)} \\text{KL}\\Big(p^+(\\omega|\\x)\\Big\\vert\\Big\\vert p(\\omega|\\x)\\Big)p(y^*_i = \\omega|\\x_i)\\Bigg\\}\\label{eq:Kmax}\n\\end{align} }\nwhere\n{\\small\n\\begin{align}\n \\text{KL}\\Big(p^+( \\omega|\\x)\\Big\\vert\\Big\\vert p( \\omega|\\x)\\Big) = \\sum_{\\x_j \\in U \\setminus \\x_i} p^+(y^*_j = \\omega|\\x_j)\\log\\frac{p^+(y^*_j = \\omega|\\x_j)}{p(y^*_j = \\omega|\\x_j)}\n\\end{align} }\nand $p^+(\\omega|\\x)$ is the posterior distribution for class $\\omega$ and pixel $\\x$, estimated using the increased training set $X^+ = [X,(\\x_i,y^*_i)]$, where $y_i^*$ is the class maximizing the posterior probability. Recently, the authors of~ extended this approach, proposing to use boosting to weight pixels that were previously selected, but were no longer relevant to the current classifier. These heuristics are useful when used with classifiers with small computational cost: since each iteration implies training $u+1$ models, this type of heuristic is hardly applicable with computationally demanding methods such as SVM. 
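To fix ideas, the score of a single candidate in Eq.~\eqref{eq:Kmax} can be sketched as below; the posterior arrays are assumptions of the example, and the $N$ retrained models needed to produce them are left out (they are precisely what makes the heuristic expensive).

```python
import numpy as np

def kl_max_score(post_before, post_after_by_class, p_i):
    """KL-max score for one candidate x_i, a sketch.

    post_before         : (u-1, N) posteriors on U \\ {x_i}, current model
    post_after_by_class : dict w -> (u-1, N) posteriors after retraining
                          with (x_i, w) added to the training set
    p_i                 : (N,) current posterior p(y* = w | x_i)
    """
    eps = 1e-12                       # guards against log(0)
    u_minus_1 = post_before.shape[0]
    score = 0.0
    for w, post_after in post_after_by_class.items():
        # KL divergence between the two posterior fields for class w
        kl = np.sum(post_after[:, w] *
                    np.log((post_after[:, w] + eps) /
                           (post_before[:, w] + eps)))
        score += kl * p_i[w] / u_minus_1
    return score
```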
Moreover, a selection of batches of pixels is not possible.", "id": "e1f6b836-abb9-4d52-abf9-04c6538aa022", "level": "subsection", "origin_cites_number": 3, "parent_id": "544044ff-dee5-45d4-b4e9-335bae0209f4", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Posterior probability based active learning" ], [ "subsection", "KL-max" ] ], "subsections": [], "title": "KL-max" }, { "cite_extract_rate": 0, "cites": [], "content": "Another strategy, closer to the idea of EQB presented in Section \\ref{sec:EQB}, consists of building a heuristic exploiting the conditional probability of predicting a given label $p(y_i^* = \\omega|\\x_i)$ for each candidate $\\x_i \\in U$. In this case, note that the predictions for the single candidates $y_i^* = \\arg\\max_{\\omega \\in N}f(\\x_i,\\omega)$ are used. Such estimates are provided by several methods, including probabilistic neural networks or maximum likelihood classifiers. A possibility to obtain posterior probabilities from SVM outputs\\footnote{Even though they are not posterior probabilities from a Bayesian point of view} is to use Platt's estimation~. \nIn this case, the per-class posterior probability is assessed by fitting a sigmoid function to the SVM decision function~:\n\\begin{equation}\n\tp(y_i^* = \\omega|\\x_i) =\\frac{1}{1 + e^{(Af(\\x_i,\\omega)+B)}}\n\\end{equation}\nwhere $A$ and $B$ are parameters to be estimated (for details, see~). \nOnce the posterior probabilities are obtained, it is possible to assess the uncertainty of the class membership for each candidate in a direct way. In this case, the heuristic chooses candidates showing a near uniform probability of belonging to each class, i.e. 
$p( y_i^* = \\omega | \\x_i) = 1/N, \\ \\forall \\omega \\in N$.\nThe `Breaking ties' (BT) heuristic for a binary problem relies on the smallest difference of the posterior probabilities for each sample. In a multi-class setting, this principle can still be used, since independently of the number of classes $N$, the difference between the two highest probabilities is indicative of the way an example is handled by the classifier. When the two highest probabilities are close (``on a tie''), the classifier's confidence is the lowest. The BT heuristic can thus be formulated as:\n\\begin{equation}\n\\hat{\\x}^{\\text{BT}} = \\arg \\min_{\\x_i \\in U } \\Big\\{\\max_{\\omega \\in N} \\big\\{p(y_i^* = \\omega|\\x)\\big\\} - \\max_{\\omega \\in N\\backslash \\omega^+} \\big\\{p(y_i^* = \\omega|\\x)\\big\\}\\Big\\}\n\\label{eq:BT}\n\\end{equation}\nwhere $\\omega^+$ is the class showing maximal probability, i.e. the argument of the first term of Eq.~\\eqref{eq:BT}. 
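With the posterior probabilities arranged in a matrix, the BT selection of Eq.~\eqref{eq:BT} can be sketched as follows (the array layout is an assumption of the example):

```python
import numpy as np

def breaking_ties(P):
    """Breaking ties (BT), a sketch.

    P : (u, N) array of posterior probabilities p(y* = w | x_i).
    Returns the index of the candidate whose two highest posteriors
    are closest ("on a tie").
    """
    top2 = np.sort(P, axis=1)[:, -2:]
    ties = top2[:, 1] - top2[:, 0]    # max minus second max
    return int(np.argmin(ties))
```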
By comparing Eq.~\\eqref{eq:MCLU} with Eq.~\\eqref{eq:BT}, the close link between BT and the MCLU heuristic when using SVM classifiers (see Section~\\ref{sec:MCLU}) becomes evident.", "id": "e604938a-7c05-4eae-a00c-0f7093b23169", "level": "subsection", "origin_cites_number": 2, "parent_id": "544044ff-dee5-45d4-b4e9-335bae0209f4", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Posterior probability based active learning" ], [ "subsection", "Breaking ties (BT)" ] ], "subsections": [], "title": "Breaking ties (BT)" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:data}\nThis section details the datasets considered and the setup of the experiments performed.", "id": "fc8866e1-005e-4ac4-be0e-43641ad50928", "level": "section", "origin_cites_number": 0, "parent_id": "32f5c325-ad32-4a99-bf88-0a6a7380d5c1", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Datasets and setup" ] ], "subsections": [ "6e70c940-20c9-4ab7-ad99-8a4761d0a98e", "bbc1da7f-38e6-4e9a-a5cf-edcfa29f4d5e" ], "title": "Datasets and setup" }, { "cite_extract_rate": 0, "cites": [], "content": "Active heuristics have been tested on three challenging remote sensing classification scenarios (Fig.~\\ref{fig:data}), whose data distributions are detailed in Fig.~\\ref{fig:Distr}. 
\n\\begin{figure}[!b]\n\\centering\n\t\\begin{tabular}{cc}\n\t\\multicolumn{2}{c}{\\includegraphics[width=8.4cm]{\\figdir Pavia}}\\\\\n\t\\multicolumn{2}{c}{\\includegraphics[width=8.4cm]{\\figdir PaviaGT}}\\\\\n\t\\multicolumn{2}{c}{ROSIS Pavia}\\\\\n\t\\includegraphics[height=4cm]{\\figdir Pines_40_30_20}&\n\t\\includegraphics[height=4cm]{\\figdir pinesGT}\\\\\n\t\\multicolumn{2}{c}{AVIRIS Indian Pines}\\\\\n\\includegraphics[height=4cm]{\\figdir imageZH_MR}&\n\\includegraphics[height=4cm]{\\figdir ZH_MR_GT}\\\\\n\t\\multicolumn{2}{c}{QuickBird Zurich}\\\\\n\t\\end{tabular}\n\t\\caption{Images considered in the experiments: (top) ROSIS image of the city of Pavia, Italy (bands $[56-31-6]$ and corresponding ground survey); (middle) AVIRIS Indian Pines hyperspectral data (bands $[40-30-20]$ and corresponding ground survey); (bottom) QuickBird multispectral image of a suburb of the city of Zurich, Switzerland (bands $[3-2-1]$ and corresponding ground survey).}\n\t\\label{fig:data}\n\\end{figure}\n\\begin{figure}\n\\centering\n\\begin{tabular}{ccc}\n\\includegraphics[width=4cm]{\\figdir MeanSpectraPavia}&\n\\includegraphics[width=4cm]{\\figdir testLabels}\\\\\n(a)&(b)\\\\\n\\includegraphics[width=4cm]{\\figdir MeanSpectraPines}&\n\\includegraphics[width=4cm]{\\figdir manifold_52_102_208}\n\\\\\n(c)&(d)\\\\\n\\includegraphics[width=4cm]{\\figdir ZHspectrum_ext2}&\n\\includegraphics[width=4cm]{\\figdir ZHmanifold}\n\\\\\n(e) & (f)\\\\\n\\end{tabular}\n\\caption{Data distribution of the three images considered. First row: ROSIS image of Pavia: (a) mean spectral profiles; (b) example of data manifold in bands 55 (Red) and 77 (Near infrared). Middle row: AVIRIS Indian Pines: (c) mean spectral profiles; (d) example of data manifold in bands 52, 102 and 208. 
Bottom row: Zurich QuickBird: (e) mean spectral profiles; (f) data manifold in bands 2 (G), 3 (R) and 4 (NIR).}\n\\label{fig:Distr}\n\\end{figure}", "id": "6e70c940-20c9-4ab7-ad99-8a4761d0a98e", "level": "subsection", "origin_cites_number": 0, "parent_id": "fc8866e1-005e-4ac4-be0e-43641ad50928", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Datasets and setup" ], [ "subsection", "Datasets" ] ], "subsections": [ "0d621b05-e50b-4902-ba8d-1510e19e9756", "5744c67c-bb8c-4a84-a58d-895f062fc480", "2fba1c13-bf38-48d4-a59c-6a88de2989ac" ], "title": "Datasets" }, { "cite_extract_rate": 0, "cites": [], "content": "the two top rows of Fig.~\\ref{fig:data} show a hyperspectral 1.3m spatial resolution image of the city of Pavia (Italy) taken by the airborne ROSIS-03 optical sensor~. The image consists of 102 spectral bands of size ($1400 \\times 512$) pixels with a spectral coverage ranging from 0.43 to 0.86 $\\mu$m. $5$ classes of interest (Buildings, Roads, Water, Vegetation and Shadows) have been selected and a labeled dataset of 206`009 pixels has been extracted by visual inspection. Among the available pixels, 20`000 have been used for the training set $X$ and candidate set $U$. Ten independent experiments have been performed, starting with $5 \\times 5 = 25$ labeled pixels ($5$ per class) in $X$ and the remaining pixels in $U$. When using LDA, 150 pixels ($30$ per class) have been included in the starting set. The higher number of starting training pixels used for LDA is justified by the requirements of the model ($n$ must be greater than the dimensionality of the data). 
In each experiment, 80`000 randomly selected pixels have been used to test the generalization capabilities of the heuristics.\nThe data distribution of the five classes is illustrated in the first row of Fig.~\\ref{fig:Distr}: the mean spectra (Fig.~\\ref{fig:Distr}(a)) show that the classes are well distinguished and separable using spectral information alone, and the resulting data manifold (Fig.~\\ref{fig:Distr}(b)) shows a data distribution which can be handled by most linear and nonlinear models.", "id": "0d621b05-e50b-4902-ba8d-1510e19e9756", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "6e70c940-20c9-4ab7-ad99-8a4761d0a98e", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Datasets and setup" ], [ "subsection", "Datasets" ], [ "subsubsection", "Hyperspectral VHR" ] ], "subsections": [], "title": "Hyperspectral VHR" }, { "cite_extract_rate": 0, "cites": [], "content": "the second dataset, illustrated in the second row of Fig.~\\ref{fig:data}, is a 220-band AVIRIS image taken over Indiana's Indian Pine test site in June 1992~. The image is $145 \\times 145$ pixels, contains 16 classes representing different crops, and a total of 10`366 labeled pixels. This image is a classical benchmark to validate model accuracy and constitutes a very challenging classification problem because of the strong mixture of the class signatures. Twenty water absorption channels were removed prior to analysis.\nIn the experiments, classes with fewer than 100 labeled pixels were removed, thus resulting in a 12-class classification problem with 10`171 labeled pixels (see the ground truth pixels in Fig.~\\ref{fig:data}). Among the available labeled pixels, 7`000 were used for the $X$ and $U$ sets. Each experiment starts with $5 \\times 12 = 60$ pixels (5 per class). 
As for the previous image, the remaining 3`171 pixels have been used to test the generalization capabilities.\nVisualization of the spectral mean profiles and of the data manifold (second row of Fig.~\\ref{fig:Distr}) illustrates a completely different situation with respect to the previous image: high nonlinearity and strongly overlapping classes characterize this dataset. Therefore, linear classifiers do not perform well on this dataset and will not be considered in the experiments.", "id": "5744c67c-bb8c-4a84-a58d-895f062fc480", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "6e70c940-20c9-4ab7-ad99-8a4761d0a98e", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Datasets and setup" ], [ "subsection", "Datasets" ], [ "subsubsection", "Hyperspectral MR" ] ], "subsections": [], "title": "Hyperspectral MR" }, { "cite_extract_rate": 0, "cites": [], "content": "The third image, illustrated in the last row of Fig.~\\ref{fig:data}, is a 4-band QuickBird scene of a suburb of the city of Zurich (Switzerland) taken in 2002. The image is $329 \\times 347$ pixels with a spatial resolution of 2.4m. Nine classes of interest have been extracted by careful visual inspection, namely Residential buildings, Commercial buildings, Trees, Vegetation, Harvested fields, Bare soil, Roads, Parking lots and Water. Since some of the classes to be separated are land-use classes and have very similar responses in the spectral domain (see, for instance, the residential and commercial buildings, or roads and parking lots), 16 contextual bands, extracted using opening (8) and closing (8) morphological operators (see~), have been added to the four spectral bands, resulting in a 20-dimensional dataset. As for the Pavia dataset, 20`000 pixels have been extracted for the $X$ and $U$ sets. Each experiment starts with $5 \\times 9 = 45$ pixels ($5$ per class). 
\nThe complexity of this third dataset is confirmed by both the spectra and the manifold illustrated in the bottom row of Fig.~\\ref{fig:Distr}. Strong overlaps between the asphalt and the soil classes are observed, which is also confirmed by the similarity between the spectral profiles. However, the added spatial features improve the differentiation of the classes (see, for instance, the opening features for the vegetation classes and the closing features for the asphalt classes).", "id": "2fba1c13-bf38-48d4-a59c-6a88de2989ac", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "6e70c940-20c9-4ab7-ad99-8a4761d0a98e", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Datasets and setup" ], [ "subsection", "Datasets" ], [ "subsubsection", "Multispectral VHR" ] ], "subsections": [], "title": "Multispectral VHR" }, { "cite_extract_rate": 0, "cites": [], "content": "In the experiments, SVM classifiers with RBF kernel and LDA classifiers have been considered. When using SVM, free parameters have been optimized by 5-fold cross-validation, optimizing an accuracy criterion. The active learning algorithms have been run in two settings, adding $N+5$ and $N+20$ pixels per iteration. To reach convergence, $70$ ($40$ in the case $N + 20$) iterations have been executed for the first image, $100$ ($50$) for the second and $80$ ($50$) for the third. $n$EQB has been run with committees of $7$ models using $75\\%$ of the available training data. For the experiments using LDA, $40$ ($20$) iterations have been performed and $n$EQB using $12$ models and $85\\%$ of the data has been used. \nAn upper bound on the classification accuracy has been computed by considering a model trained on the whole $X \\cup U$ set (`Standard SVM/LDA' hereafter). 
The lower bound on performance has been obtained by assessing a model trained on an increasing training set, randomly adding the same number of pixels at each epoch (`Random Sampling' hereafter). \nEach heuristic has been run ten times with different initial training sets. All the graphics report mean and standard deviation of the ten experiments.", "id": "bbc1da7f-38e6-4e9a-a5cf-edcfa29f4d5e", "level": "subsection", "origin_cites_number": 0, "parent_id": "fc8866e1-005e-4ac4-be0e-43641ad50928", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Datasets and setup" ], [ "subsection", "Experimental setup" ] ], "subsections": [], "title": "Experimental setup" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:res}\nIn this section, some of the heuristics presented are compared on the three datasets presented above. \nThe experiments do not aim at defining which heuristic is best, since the heuristics respond unequally well to different data distributions. Rather, they attempt to illustrate the strengths and weaknesses of the different families of methods and to help the user in selecting the methodology that will perform best depending on the characteristics of the problem at hand. \nThe heuristics studied are the following: $n$EQB, MS, MCLU, MCLU-ABD and BT. Their comparison with Random sampling (RS), the base learner (Standard SVM/LDA) and between each other will show the main differences among the active learning architectures presented. 
\n\\begin{figure}[!t]\n\\begin{tabular}{ccc}\n& $N+5$ & $N+20$\\\\\n\\rotatebox{90}{ Pavia ROSIS}\n&\n\\includegraphics[width=3.8cm]{./images/Pavia_comparison_10}\n&\n\\includegraphics[width=3.8cm]{./images/Pavia_comparison_25}\\\\\n&(a)&(d)\\\\\n\\rotatebox{90}{ Indian Pines AVIRIS}\n&\n\\includegraphics[width=3.8cm]{./images/Pines_comparison_17}\n&\n\\includegraphics[width=3.8cm]{./images/Pines_comparison_30}\n\\\\\n&(b)&(e)\\\\\n\\rotatebox{90}{ Zurich QuickBird}\n&\n\\includegraphics[width=3.8cm]{./images/ZH_MR_comparison_14}\n&\n\\includegraphics[width=3.8cm]{./images/ZH_MR_comparison_25}\n\\\\\n&(c)&(f)\\\\\n\\end{tabular}\n\\caption{The three families of heuristics trained with SVMs (RS = Random Sampling).}\n\\label{fig:comp}\n\\end{figure}\nFigure~\\ref{fig:comp} compares a heuristic from each family presented when using SVM classifiers. In general, MS performs better than the two other families. This is expected, since MS ranks the candidates directly using the SVM decision function without further estimations: the slightly inferior performances of the $n$EQB and the BT algorithms are mainly due to the small size of the initial training set, which does not allow these criteria to perform optimally. $n$EQB uses bags of training pixels that are too small, and BT cannot estimate the posterior probabilities correctly, because the fit of Platt's probabilities depends on the number of samples used. As a consequence, the performances of these two heuristics in the first iterations are similar to random sampling, a behavior already observed in~. 
Summarizing, when using SVMs, the most logical choice among the families seems to be a large margin heuristic.\n\\begin{figure}[!b]\n\\begin{tabular}{ccc}\n& $N+5$ & $N+20$\\\\\n\\rotatebox{90}{ Pavia ROSIS}\n&\n\\includegraphics[width=3.8cm]{./images/Pavia_MS_MCLU_10}\n&\n\\includegraphics[width=3.8cm]{./images/Pavia_MS_MCLU_25} \\\\\n&(a)&(d)\\\\\n\\rotatebox{90}{ Indian Pines AVIRIS}\n&\n\\includegraphics[width=3.8cm]{./images/Pini_MS_MCLU_17}\n&\n\\includegraphics[width=3.8cm]{./images/Pini_MS_MCLU_40} \\\\\n&(b)&(e)\\\\\n\\rotatebox{90}{ Zurich QuickBird}\n&\n\\includegraphics[width=3.8cm]{./images/ZH_MR_MS_MCLU_14}\n&\n\\includegraphics[width=3.8cm]{./images/ZH_MR_MS_MCLU_30} \\\\\n&(c)&(f)\\\\\n\\end{tabular}\n\\caption{Large margin active learning without diversity criterion. An example comparing MS and MCLU (RS = Random Sampling).}\n\\label{fig:largeComp}\n\\end{figure}\nFor this family, Figs.~\\ref{fig:largeComp} and~\\ref{fig:diver} illustrate two concepts regarding the two stages of large margin heuristics: the uncertainty and diversity criteria. Figure~\\ref{fig:largeComp} compares the MS and MCLU criteria and shows that both describe the uncertainty of the candidates in a similar way. Therefore, both can be used for efficient active learning. The use of a diversity criterion seems to slightly improve the quality of the results (Fig.~\\ref{fig:diver}): except for the AVIRIS image -- well known for the high degree of mixture of its spectral signatures -- a spectral diversity criterion such as MCLU-ABD efficiently increases performances with little added computational cost. 
None of the solutions obtained with the inclusion of the diversity criterion degrade the ones relying on the uncertainty assumption only.\n\\begin{figure}[!t]\n\\begin{tabular}{ccc}\n& $N+5$ & $N+20$\\\\\n\\rotatebox{90}{ Pavia ROSIS}\n&\n\\includegraphics[width=3.8cm]{./images/Pavia_MCLU_ABD_10} \n&\n\\includegraphics[width=3.8cm]{./images/Pavia_MCLU_ABD_25} \\\\\n&(a)&(d)\\\\\n\\rotatebox{90}{ Indian Pines AVIRIS}\n&\n\\includegraphics[width=3.8cm]{./images/Pines_MCLU_ABD_17} \n&\n\\includegraphics[width=3.8cm]{./images/Pines_MCLU_ABD_30}\\\\\n&(b)&(e)\\\\\n\\rotatebox{90}{ Zurich QuickBird}\n&\n\\includegraphics[width=3.8cm]{./images/ZH_MR_MCLU_ABD_17}\n&\n\\includegraphics[width=3.8cm]{./images/ZH_MR_MCLU_ABD_30} \\\\\n&(c)&(f)\\\\\n\\end{tabular}\n\\caption{Effect of diversity criteria on large margin active learning. An example comparing MCLU and MCLU-ABD (RS = Random Sampling).}\n\\label{fig:diver}\n\\end{figure}\nAs stated above, other heuristics must be used for other classifiers. Figure~\\ref{fig:LDA} compares the $n$EQB and BT heuristics applied to the Pavia image using LDA. \nIn this case, both heuristics perform similarly and show a very interesting behavior: active sampling helps the LDA to estimate the subspace that better separates data. In fact, when sampling randomly, noise and outliers can make the estimation of the Fisher's ratio biased, resulting in a suboptimal linear combination of variables in the decision function. Sampling and assigning correct labels to the pixels returned by these heuristics help estimating the correct \\emph{per-class} extent (covariance) and position (mean). From 330 pixels up, the standard LDA result is improved by the active learning training sets, providing more harmonious solutions that allow a better generalization. 
\nSumming up, when using methods other than large margin-based algorithms, the performances of the heuristics are similar and the choice must be driven by the specific constraints of time and number of iterations allowed. We will come back to these issues in the next section.\n\\begin{figure}[!t]\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width=3.8cm]{./images/Pavia_comparisonLDA_10} &\n\\includegraphics[width=3.8cm]{./images/Pavia_comparisonLDA_25} \\\\\n(a) $N+5$ & (b) $N+20$\\\\\n\\end{tabular}\n\\caption{Committee-based and posterior probability heuristics trained with LDA classifiers on the Pavia ROSIS image (RS = Random Sampling).}\n\\label{fig:LDA}\n\\end{figure}", "id": "16c4e1fd-cca0-4c03-98cb-9399b4dc0f51", "level": "section", "origin_cites_number": 1, "parent_id": "32f5c325-ad32-4a99-bf88-0a6a7380d5c1", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Numerical results" ] ], "subsections": [], "title": "Numerical results" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:disc}\nThroughout the experiments presented, nearly all the algorithms compared showed fast convergence to the upper bounds represented by the Standard SVM/LDA. At convergence, all the heuristics outperformed the random selection of pixels. The interest of using multiclass uncertainty or adding a criterion of diversity has been demonstrated by the experiments above. Table~\\ref{tab:sum} summarizes the points raised in this paper and gives a general overview of the methods reviewed. \nLarge margin-based methods with a diversity criterion seem the most appropriate when using SVM, while committee-based heuristics leave the user free to exploit the model they are most confident with. Moreover, the user can build ensembles of classifiers exploiting the intrinsic advantages of specific classifiers for a given type of image. 
Weighted committees or boosting candidates are also possible (see Section~\\ref{sec:committee} for some references). Probabilistic heuristics have the advantage of speed, but cannot always provide batches of samples and, if the classifier does not return such estimates naturally, must rely on further approximations of the posterior probabilities.\nHowever, it would not be correct to base the choice of the heuristic on the model alone. The choice of the best heuristic is problem-oriented and depends on the needs of the user in terms of time, complexity and size of the batch to be provided at each iteration. \nThis section draws some guidelines to select the most appropriate heuristic. \nA first distinction can be made depending on the type of images considered: \n\\begin{itemize}\n\\item[-] when dealing with hyperspectral images, which are typically high dimensional, strategies taking direct advantage of the data structure should be preferred: typically, multi-view heuristics such as the AMD or the ECBD-ABD are particularly well-suited to this type of data. The first exploits cross-information directly in the space of the spectral bands, while the second selects the samples according to spectral angles among the candidates.\n\\item[-] when the initial training set is very small, heuristics based on posterior probabilities should be avoided, since such estimation strongly depends on the quality of the estimation of the class statistics (typically in the case of LDA). The same holds for committees based on bagging, especially if the bags contain a small share of the original samples.\n\\item[-] when dealing with complex manifolds, in which redundancy can greatly affect the quality of the sampling and of the resulting training set, approaches based on modeling the relationships among samples in the feature space can be strongly beneficial to select pixels reflecting such complexity. 
The use of kernel $k$-means in the MCLU-ECBD, in hMCS, or the distance to support vectors in the cSV heuristic provide solutions in this sense.\n\\end{itemize}\nA second distinction, more related to operational requirements, is based on the type of sampling to be performed by the user~: \n\\begin{itemize}\n\\item[-] when working by photointerpretation (typically in VHR imaging), sampling can be done on-screen directly by the user. This allows for large numbers of iterations and can thus be solved with small batches of samples. In this case complex heuristics favoring spectral diversity are to be preferred, since the complexity of the heuristics enforcing diversity strongly increases with the size of the batch considered. \n\\item[-] on the contrary, when sampling is to be done in the field (typically in hyperspectral or mid-resolution images), only a few iterations with large batches are allowed. In this case, all the heuristics seem to provide the same type of convergence and the user should prefer simple heuristics such as MCLU, BT or EQB, depending on the model used. In this case, the spatial location of samples seems to be much more important than the heuristic used: a pioneering work in this sense can be found in ~, where MS and BT are exploited with spatially adaptive cost and the sampling is optimized with respect to the spatial distance among the samples chosen.\n\\item[-] when sampling is done with moving sensors and the samples are acquired sequentially by an automatic device, batches of samples are not necessary. In this case models with small computational cost should be preferred, as they can update fast and almost instantly provide the next location to be sampled. In this case, BT and KL-max are most valuable. 
\n\\end{itemize}\n\\begin{table*}[!t]\n\\caption{Summary of active learning algorithms (B : binary, M : multiclass, $q$ : number of candidates, $k$ : members of the committee of learners, $S$ : batch, $SVs$ : support vectors, $\\checkmark$ : yes, $\\times$ : no).}\n\\label{tab:sum}\n\\begin{center}\n\\setlength{\\tabcolsep}{1pt}\n\\begin{tabular}{p{1.8cm}|p{1.6cm}|c|c|c|c|c c|p{4cm}}\n\\hline\nFamily&Heuristic& Reference &Batches&Uncer-& Classifier& \\multicolumn{2}{c|}{Diversity}&{Models to train}\\\\\n&&&& tainty&&$S$&$SV$s&\\\\\n\\hline\\hline\n\\multirow{2}{*}{Committee}&EQB&&\\checkmark&M&All&$\\times$ &$\\times$&$k$ models\\\\\n&AMD&&\\checkmark&M&All&$\\times$ &$\\times$&$k$ models\\\\\n\\hline\n\\multirow{9}{*}{Large margin}&MS&&\\checkmark&B& SVM &$\\times$ &$\\times$&Single SVM\\\\\n&MCLU&&\\checkmark&M&SVM&$\\times$ &$\\times$&Single SVM\\\\\n&SSC&&\\checkmark&B&SVM&$\\times$ &$\\times$&2 SVMs\\\\\n&cSV&&\\checkmark&B&SVM&$\\checkmark$ &$\\times$&Single SVM + distances to support vectors \\\\\n&MOA&&\\checkmark&B&SVM&\\checkmark &$\\times$&Single SVM + distances to already selected pixels\\\\\n&MCLU-ABD&&\\checkmark&M&SVM&\\checkmark&$\\times$&Single SVM + distances to already selected pixels\\\\\n&MCLU-ECBD&&\\checkmark&M&SVM&\\checkmark &$\\times$&Single SVM + nonlinear clustering of candidates\\\\\n&hMCS-i&&\\checkmark&M&SVM&\\checkmark &\\checkmark&Single SVM + nonlinear clustering of candidates and SVs\\\\\n\\hline\nPosterior&KL-max&&$\\times$&M&$p(y|\\x)$&$\\times$&$\\times$&$(q-1)$ models\\\\\nprobability&BT&&\\checkmark&M&$p(y|\\x)$&$\\times$&$\\times$&Single model\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table*}", "id": "fba5b779-cfe8-4a33-833a-07b334d324ec", "level": "section", "origin_cites_number": 10, "parent_id": "32f5c325-ad32-4a99-bf88-0a6a7380d5c1", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Discussion" ] ], "subsections": [], 
"title": "Discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:concl}\nIn this paper we presented and compared several state-of-the-art approaches to active learning for the classification of remote sensing images. A series of heuristics have been classified by their characteristics into four families. For each family, some heuristics have been detailed and then applied to three challenging remote sensing datasets for multispectral and hyperspectral classification. Advantages and drawbacks of each method have been analyzed in detail and recommendations for further improvement have been formulated. \nHowever, this review is not exhaustive and the research in the field is far from being over: \nthere is a healthy and rich research community developing new heuristics for active sampling that have been or will be presented in the remote sensing and signal processing community. \nActive learning has a strong potential for remote sensing data processing. Efficient training sets are needed by the users, especially when dealing with large archives of digital images. New problems are being tackled with active learning algorithms, guaranteeing the renewal of the field. Some recent examples can be found in the active selection of unlabeled pixels for semi-supervised classification~, spatially adaptive heuristics~ or the use of active learning algorithms for model adaptation across domains~. \nFurther steps for active learning methods are the inclusion of contextual information in the heuristics: so far, the heuristics proposed only take advantage of spectral criteria -- or at most include contextual features in the data vector -- but few heuristics directly consider positional information and/or textures. Another crucial issue is the robustness to noise: since they are based on the uncertainty of the pixels, current heuristics are useless for images affected by high levels of noise, such as SAR. 
This field remains, at present, totally unexplored.\n\\section*{Acknowledgments}\nThe authors would like to acknowledge Prof. Paolo Gamba (Univ. Pavia), who provided the Pavia dataset, as well as the authors who provided the Indian Pines data.\n\\bibliographystyle{IEEEbib}\n\\bibliography{refsALreview}\n\\end{document}", "id": "132e66fa-8f0c-4248-b231-33f0f600a11b", "level": "section", "origin_cites_number": 5, "parent_id": "32f5c325-ad32-4a99-bf88-0a6a7380d5c1", "prefix_titles": [ [ "title", "A survey of active learning algorithms for supervised remote sensing image classification" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
5
[ 8400, 8401 ]
1.496501
[ "A. Oelen", "M. Y. Jaradeh", "M. Stocker", "S. Auer" ]
Generate FAIR Literature Surveys with Scholarly Knowledge Graphs
2020
2020-06-02T16:19:00Z
cs.DL
Reviewing scientific literature is a cumbersome, time consuming but crucial activity in research. Leveraging a scholarly knowledge graph, we present a methodology and a system for comparing scholarly literature, in particular \emph{research contributions} describing the addressed problem, utilized materials, employed methods and yielded results. The system can be used by researchers to quickly get familiar with existing work in a specific research domain (e.g., a concrete research question or hypothesis). Additionally, it can be used to publish literature surveys following the FAIR Data Principles. The methodology to create a research contribution comparison consists of multiple tasks, specifically: (a) finding similar contributions, (b) aligning contribution descriptions, (c) visualizing and finally (d) publishing the comparison. The methodology is implemented within the Open Research Knowledge Graph (ORKG), a scholarly infrastructure that enables researchers to collaboratively describe, find and compare research contributions. We evaluate the implementation using data extracted from published review articles. The evaluation also addresses the FAIRness of comparisons published with the ORKG.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "3313ad50-7887-4970-bb3f-a8d5311e5f61", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ] ], "subsections": [ "b5fdf9c3-044e-45b3-8bb1-0b7d691c74d6", "b95d3fb3-5cd7-4c69-8c96-6cadc3ef7b13", "32d3f690-7c93-4310-b607-6553d1915112", "a1917066-53fd-432d-9c65-ea4825cda6c7", "def65367-1d9c-4420-92cd-cd2f5cd6b277", "9c67205a-5ea7-42a6-af06-049ac9b5d00b", "f87ef12c-02fb-41f5-9f16-69dee5190666", "4c8c53d6-d473-464f-a034-d85b2b90268e" ], "title": "root" }, { "cite_extract_rate": 0.18181818181818102, "cites": [ 6548, 6549 ], "content": "When conducting scientific research, reviewing the existing literature is an essential activity~. \nFamiliarity with the state-of-the-art is required to effectively contribute to advancing it and do relevant research. Mainly because published scholarly knowledge is unstructured~, it is currently very tedious to review existing literature. Relevant literature has to be found among hundreds and increasingly thousands of PDF articles. This activity is supported by library catalogs and online search engines, such as Scopus or Google Scholar~. Because the search is keyword based, typically large numbers of articles are returned by search engines. Researchers have to manually identify the relevant papers. Having identified the relevant papers, the relevant pieces of information need to be extracted in order to obtain an overview of the literature. Overall, these are manual and time consuming steps. We argue that a key issue is that the scholarly knowledge communicated in the literature does not meet the FAIR Data Principles~. While PDF articles can be found and accessed (assuming Open Access or an institutional subscription), the scholarly literature is insufficiently interoperable and reusable, especially for machines. 
For units more granular than the PDF article, such as a specific result, findability and accessibility score low even for humans.\nWe present a methodology and its implementation integrated into the Open Research Knowledge Graph (ORKG)~ that can be used to generate and publish literature surveys in form of machine actionable, comparable descriptions of research contributions. Machine actionability of research contributions relates to the ability of machines to access and interpret the contribution data. The benefits for researchers of such an infrastructure are (at least) two-fold. Firstly, it supports researchers in creating state-of-the-art overviews for specific research problems efficiently. Secondly, it supports researchers in publishing literature surveys that adhere to the FAIR principles, thus contributing substantially to reuse of state-of-the-art overviews and therein contained information, for both humans and machines. \nLiterature reviews are articles that focus on analysing existing literature. Among other things, reviews can be used to gain understanding about a research problem or to identify further research directions~. Reviews can be used by authors to quickly obtain an overview of either emerging or mature research topics~. Review papers are important for research fields to develop. When review papers are lacking, the development of a research field is weakened~. Compiling literature review papers is a complicated task~ and is often more time consuming than performing original research~. The structure of such articles often consists of tables that compare published research contributions. Although in the literature the terms ``literature review'' and ``literature survey'' are sometimes used interchangeably, we make the following distinction. We refer to the tables in review articles as \\textit{literature surveys}. Together with a (textual) analysis and explanation, they form the \\textit{literature review}. 
The state-of-the-art (SoTA) analysis is a special kind of literature review with the objective of comparing the latest and most relevant papers in a specific domain. \nWe implement the presented methodology in the ORKG. The ORKG is a scholarly infrastructure designed to acquire, publish and process \\emph{structured} scholarly knowledge published in the literature~. ORKG is part of a larger research agenda aiming at machine actionable scholarly knowledge that understands the ability to more efficiently compare literature as a key feature. \nWe tackle the following research questions: \n\\begin{enumerate}\n \\item How to generate literature surveys using scholarly knowledge graphs?\n \\item How to ensure that published literature surveys comply with the FAIR principles?\n \\item How to effectively specify and visualize literature surveys in a user interface?\n\\end{enumerate}\nIn support of the first research question, we present a methodology that describes the steps required to generate literature surveys. In support of the second research question, we describe how the FAIRness of the published literature review is ensured. Finally, in support of the third research question, we demonstrate how the methodology is implemented within the ORKG. \nThe paper is structured as follows. Section~\\ref{section:motivating-example} motivates the work. Section \\ref{section:related-literature} reviews related work. Section~\\ref{section:system-design} presents the system design, the underlying methodology and its implementation. Section~\\ref{section:data-collection} explains how the knowledge graph is populated with data. Section~\\ref{section:evaluation} presents the evaluation of the system, specifically system FAIRness and performance. 
Finally, Section~\\ref{section:discussion} discusses the presented work and future directions.", "id": "b5fdf9c3-044e-45b3-8bb1-0b7d691c74d6", "level": "section", "origin_cites_number": 11, "parent_id": "3313ad50-7887-4970-bb3f-a8d5311e5f61", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section:motivating-example}\nWe motivate our work by means of two use cases that underscore the usefulness of a literature survey generation system. In the first use case, a researcher wants to obtain an overview of state-of-the-art research addressing a specific problem. The second use case describes how a researcher can publish a FAIR-compliant literature review with the ORKG.", "id": "b95d3fb3-5cd7-4c69-8c96-6cadc3ef7b13", "level": "section", "origin_cites_number": 0, "parent_id": "3313ad50-7887-4970-bb3f-a8d5311e5f61", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "Use cases" ] ], "subsections": [ "86a170e1-f5ee-40dd-9553-058683d000c4" ], "title": "Use cases" }, { "cite_extract_rate": 0, "cites": [], "content": "A state-of-the-art (SoTA) analysis reviews new and emerging research. Such analyses are useful for multiple reasons. Firstly, they provide a broad overview of a research problem and support understanding. Secondly, they juxtapose different approaches for a problem. Thirdly, they can support claims on why certain research is relevant by giving an overview of the breadth of research addressing a problem. 
\nThe proposed approach enables automated generation of surveys to quickly obtain an overview of state-of-the-art research as well as sharing of surveys for others to reuse.", "id": "86a170e1-f5ee-40dd-9553-058683d000c4", "level": "paragraph", "origin_cites_number": 0, "parent_id": "b95d3fb3-5cd7-4c69-8c96-6cadc3ef7b13", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "Use cases" ], [ "paragraph", "Familiarize with the state-of-the-art" ] ], "subsections": [ "b8ac6e76-2ccc-4355-afcd-c2adb9788bf8" ], "title": "Familiarize with the state-of-the-art" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section:motivating-example:publish-reviews}\nLiterature reviews typically consist of multiple (survey) tables in which different approaches from original papers are compared based on a set of properties. These tables can be seen as the main contribution and most informative part of the review paper, since the tables juxtapose and compare existing work. Comparison tables are published in review papers as static content in PDF documents. This presentational format is generated from datasets that typically contain more (structured) information than what is presented in the published table. However, the additional information is not published. It is ``dark data'' which is not stored or indexed and likely lost over time~. Furthermore, published tables are not machine actionable. Their overall low FAIRness hinders reusability of the published content. With the presented service, it is possible to publish a literature survey with high FAIRness, i.e. that is compliant with the FAIR principles to a high degree. 
Section~\\ref{section:related-literature} discusses this aspect in more detail.", "id": "b8ac6e76-2ccc-4355-afcd-c2adb9788bf8", "level": "paragraph", "origin_cites_number": 1, "parent_id": "86a170e1-f5ee-40dd-9553-058683d000c4", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "Use cases" ], [ "paragraph", "Familiarize with the state-of-the-art" ], [ "paragraph", "Publishing of literature reviews" ] ], "subsections": [ "34cdcc07-1e15-458e-bb25-8a7ebb70bcd8" ], "title": "Publishing of literature reviews" }, { "cite_extract_rate": 0, "cites": [], "content": "The weaknesses of the current approach to literature review can be summarized as follows:\n\\begin{itemize}\n \\item {\\it Static} -- reviews are static, since once published as PDF they are rarely updated, and there are no possibilities or incentives for creating new or updated reviews for considerable time.\n \\item {\\it Lack of machine assistance} -- machine assistance is hardly possible, since the PDF representation of reviews is only human-readable and relevant raw data is mostly not published along with the review.\n \\item {\\it Delay} -- reviews are produced and published with significant delay (often years) after the original research work was done.\n \\item {\\it Coverage} -- due to the amount of work required, reviews are often only performed for relatively popular research topics and are stale or missing for less popular topics.\n \\item {\\it Lacking collaboration} -- collaboration on reviews is not possible, and reviews currently represent only the viewpoint of the few authors, not the community.\n \\item {\\it Missing overarching systematic semantic representation} -- the overlap between different reviews and related work sections in individual original research papers is not explicit and cannot be exploited.\n\\end{itemize}\nWe deem that these weaknesses of the current approach to scholarly literature review and synthesis significantly 
hinder scientific progress.", "id": "34cdcc07-1e15-458e-bb25-8a7ebb70bcd8", "level": "paragraph", "origin_cites_number": 0, "parent_id": "b8ac6e76-2ccc-4355-afcd-c2adb9788bf8", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "Use cases" ], [ "paragraph", "Familiarize with the state-of-the-art" ], [ "paragraph", "Publishing of literature reviews" ], [ "paragraph", "Summary of weaknesses of the current approach to literature review" ] ], "subsections": [], "title": "Summary of weaknesses of the current approach to literature review" }, { "cite_extract_rate": 0.04, "cites": [ 8369 ], "content": "\\label{section:related-literature}\nThe task of comparing research contributions can be reviewed in light of the more general task of comparing resources (or entities) in a knowledge graph. While this is a well-known task in multiple domains (for instance in e-commerce systems~), not much work has focused on comparison in knowledge graphs, specifically. One of the few works with this focus is by \\citeauthor{Petrova2017}~ who created a framework for comparing entities in RDF graphs using SPARQL queries. In order to compare contributions, they first have to be found. Finding is an information retrieval problem. As a well-known technique, TF-IDF~ can be used for this task. More sophisticated techniques can be used to determine the structural similarity between graphs (e.g., ) and matching semantically similar predicates. This relates to dataset interlinking~ or more generally ontology alignment~. For property alignment, techniques of interest include edit distance (e.g., Jaro-Winkler~ or Levenshtein~) and vector distance. \\citeauthor{Gromann2019}~ found that fastText~ performs best for ontology alignment. \nIn light of the FAIR Data Principles~, scholarly data should be Findable, Accessible, Interoperable and Reusable both for humans and machines. 
Due to the publication format, literature survey tables published in scholarly articles weakly adhere to the FAIR guidelines, particularly so for machines. Scholarly data should be considered first-class objects~, including data used to create literature surveys. \\citeauthor{Rodriguez-Iglesias2016}~ describe the difficulties of making data FAIR within the plant sciences. They argue that it is more complicated than reformatting data. On the other hand they suggest that most FAIR principles can be implemented relatively easily by using off-the-shelf technologies. \\citeauthor{Boeckhout2018}~ argue that the FAIR principles alone are not sufficient to lead to responsible data sharing. More applied principles are needed to ensure better scholarly data. This claim is supported by the findings of \\citeauthor{Mons2017}~ who suggest that there are very diverse interpretations of the guidelines. In their work, they try to clarify what is FAIR and what is not. \nAn efficient literature comparison relies on scholarly knowledge being represented in a structured way. There is substantial related work on representing scholarly knowledge in structured form~. Building on the work of numerous philosophers of science, \\citeauthor{Hars2001}~ proposed a comprehensive scientific knowledge model that includes concepts such as theory, methodology and statement. More recently, ontologies were engineered to describe different aspects of the scholarly communication process. Semantic Publishing and Referencing (SPAR)\\footnote{http://purl.org/spar/\\{cito,c4o,fabio,biro,pro,pso,pwo,doco,deo\\}} is a collection of ontologies that can be used to describe scholarly publishing and referencing of documents~. \\citeauthor{IniestaCorcho:SePublica2014}~ reviewed the state-of-the-art ontologies to describe scholarly articles. \\citeauthor{Sateli2015SemanticRO}~ use some of these scholarly ontologies to add semantic representations of scholarly articles to the Linked Open Data cloud. 
A literature survey comparing scholarly ontologies is available via the ORKG.\\footnote{\\url{https://www.orkg.org/orkg/comparison/R8342}} Most of these ontologies are designed to capture metadata about and structure of scholarly articles, not the content communicated in articles. \nAnother literature survey is created to compare approaches for semantically representing scholarly communication.\\footnote{\\url{https://www.orkg.org/orkg/comparison/R8364}}\nAn initial attempt for semantifying review articles was done in~. The work comprises a relatively rigid ontology for describing contributions (mainly centered around research problems, approaches, implementations and evaluations) and a prototypical implementation using Semantic MediaWiki. We relax this constraint, since we are not limited by a rigid ontology schema but rather allow arbitrary domain-specific semantic structures for research contributions. The work by \\citeauthor{DBLP:conf/ercimdl/VahdatiFALV19}~ focuses on semantic article representations for generating literature overviews. Their method is to use crowdsourcing to generate the overviews. \\citeauthor{Kohl2018}~ present CADIMA, a system that supports systematic literature reviews. 
The tool supports the formal process of performing a literature review but does not, for example, publish data in machine-actionable form for reuse.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\columnwidth]{img/methodology.pdf}\n \\caption{Research contribution comparison methodology.}\n \\label{fig:workflow-comparison}\n\\end{figure}", "id": "32d3f690-7c93-4310-b607-6553d1915112", "level": "section", "origin_cites_number": 25, "parent_id": "3313ad50-7887-4970-bb3f-a8d5311e5f61", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "Related work" ] ], "subsections": [], "title": "Related work" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section:system-design}\nWe now present the system design of the literature comparison service. It consists of a methodology that describes how to perform a comparison of research contributions. An early version of this methodology has been presented at the 3rd SciKnow workshop~. The methodology consists of five steps: 1) finding comparison candidates, 2) selecting related statements, 3) aligning contribution descriptions, 4) visualizing comparisons and 5) publishing FAIR comparisons. The methodology is depicted in Figure~\\ref{fig:workflow-comparison}. First, we discuss the data structure of the ORKG, which forms the foundation of the comparison. Then, each step of the methodology is described in more detail. 
Finally, we discuss the implementation.", "id": "a1917066-53fd-432d-9c65-ea4825cda6c7", "level": "section", "origin_cites_number": 1, "parent_id": "3313ad50-7887-4970-bb3f-a8d5311e5f61", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "System design" ] ], "subsections": [ "c64100b1-b57e-46ef-a8f6-a6c7131b671e", "d4eda852-27fa-4aa8-880a-559597b2dc69", "5d1f0f97-df75-4699-9f80-7b7af6dea650", "fbebdd62-b2b1-408f-9c69-5d4f7f13cfce", "1b062162-78e7-427e-8efd-91ae002ca800", "da78b76d-a336-4f78-9f93-2fb8a21e6591", "252651d1-6cd2-4bc9-abba-721f972b6d1e" ], "title": "System design" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section:ontology}\nIn ORKG, each paper is typed as \\textit{paper} class. A paper consists of at least one \\textit{research contribution}, which addresses at least one \\textit{research problem}. Research contributions consist of \\textit{contribution data} that describe the contribution. For instance, a paper in Computer Science might have descriptions for materials, methods, implementation and results as contribution data. These predefined core concepts can be easily extended with domain specific research problems, methods, etc. in ORKG curation using crowdsourcing or other curation approaches. The underlying data structure uses the notion of statements. Statements are triples that consist of a subject, a predicate (also called a property) and an object. The granularity of a comparison is at the research contribution, meaning that contributions are compared rather than papers. For simplicity, we use the terms ``paper comparison'' and ``contribution comparison'' interchangeably. Because a comparison happens on contribution level, it is possible to compare specific elements of a paper instead of the complete paper. The benefit of this is that a comparison does not contain data from irrelevant contributions. 
The ORKG OWL ontology is available online.\\footnote{\\url{https://gitlab.com/TIBHannover/orkg/orkg-ontology}}", "id": "c64100b1-b57e-46ef-a8f6-a6c7131b671e", "level": "subsection", "origin_cites_number": 0, "parent_id": "a1917066-53fd-432d-9c65-ea4825cda6c7", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "System design" ], [ "subsection", "ORKG ontology" ] ], "subsections": [], "title": "ORKG ontology" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section:workflow-comparison-candidates}\nTo perform a comparison, a starting contribution is needed. This contribution is called \\textit{main contribution} and is always manually selected by a user. The main contribution is compared against other \\textit{comparison contributions}. There are two different approaches for selecting the comparison contributions. The first approach automatically selects comparison contributions based on similarity. The second approach lets users manually select contributions.", "id": "d4eda852-27fa-4aa8-880a-559597b2dc69", "level": "subsection", "origin_cites_number": 0, "parent_id": "a1917066-53fd-432d-9c65-ea4825cda6c7", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "System design" ], [ "subsection", "Select comparison candidates" ] ], "subsections": [ "dceac5ad-70aa-4f9b-94ef-abad8d3758f3", "2079aa06-4aa7-4551-8f0b-5dcb0826279d" ], "title": "Select comparison candidates" }, { "cite_extract_rate": 0, "cites": [], "content": "Comparing contributions only makes sense when contributions can sensibly be compared. For example, it does not make (much) sense to compare a biology paper to a history paper. \nWe thus argue that it only makes sense to compare contributions that are similar. More specifically, contributions that share the same (or a similar set of) properties are good comparison candidates. 
For instance, a paper about question answering has the property \\textit{orkg:disambiguationTask}\\footnote{\\textit{orkg:} denotes the ontology of the ORKG system described in Section~\\ref{section:ontology}} and another paper is using the same property to describe what disambiguation tasks are performed. Since they share the same property it makes them likely candidates for comparison. Finding similar contributions is therefore based on finding contributions that share the same or similar informative description properties. To achieve this, each comparison contribution is converted into a string by concatenating all properties of the contribution. \nTF-IDF~ is used to query these strings with the string of the main contribution as query. The search returns the most similar contributions by weighting the most informative properties higher due to TF-IDF. The top-k contributions are selected and form a set of contributions that are used in the next step.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.9\\columnwidth]{img/similar-contributions.png}\n \\caption{Implementation of the first step of the methodology: the selection of comparison candidates. Showing both the similarity-based and the manual selection approaches.}\n \\label{fig:comparison-candidates}\n\\end{figure}\nFigure~\\ref{fig:comparison-candidates} displays how the similar contribution selection is implemented. As depicted, three similar contributions are suggested to the user (with the corresponding similarity percentage being displayed next to paper title). 
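The candidate ranking described above can be sketched in plain Python (a simplified, hypothetical stand-in for the actual ORKG implementation): each contribution is reduced to a bag of its property labels, the bags are TF-IDF weighted (here with sklearn-style smoothed IDF), and candidates are ranked by cosine similarity against the main contribution. All function names and the example property labels are illustrative, not taken from the ORKG code base.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute TF-IDF vectors for documents given as lists of property labels.

    Uses smoothed IDF (log((1 + n) / (1 + df)) + 1) so that no vector is all-zero.
    """
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * (math.log((1 + n) / (1 + df[t])) + 1) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors represented as dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_candidates(main, others, k=3):
    """Rank comparison candidates by TF-IDF cosine similarity to the main contribution.

    `main` is a list of property labels; `others` is a list of (id, property-list) pairs.
    Returns the ids of the top-k most similar contributions.
    """
    vectors = tfidf_vectors([main] + [doc for _, doc in others])
    main_vec, rest = vectors[0], vectors[1:]
    scored = [(cosine(main_vec, vec), cid) for (cid, _), vec in zip(others, rest)]
    return [cid for _, cid in sorted(scored, reverse=True)[:k]]
```

A contribution sharing informative (rare) properties with the main contribution scores higher than one sharing only ubiquitous properties, which mirrors the TF-IDF weighting described in the text.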
These suggested contributions can be directly compared.", "id": "dceac5ad-70aa-4f9b-94ef-abad8d3758f3", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "d4eda852-27fa-4aa8-880a-559597b2dc69", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "System design" ], [ "subsection", "Select comparison candidates" ], [ "subsubsection", "Find similar contributions" ] ], "subsections": [], "title": "Find similar contributions" }, { "cite_extract_rate": 0, "cites": [], "content": "There are scenarios where comparison based on similarity computation is not suitable or desired. For example, a researcher wants to compare a specific set of implementations to see which performs best. \nTherefore, the manual selection method is implemented in a similar fashion to an e-commerce shopping cart. When the ``Add to comparison'' checkbox is checked, a box appears listing the selected contributions (Figure~\\ref{fig:comparison-box}). \n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=.45\\columnwidth]{img/comparison-box.png}\n \\caption{Box showing the manually selected contributions.}\n \\label{fig:comparison-box}\n\\end{figure}", "id": "2079aa06-4aa7-4551-8f0b-5dcb0826279d", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "d4eda852-27fa-4aa8-880a-559597b2dc69", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "System design" ], [ "subsection", "Select comparison candidates" ], [ "subsubsection", "Manual selection" ] ], "subsections": [], "title": "Manual selection" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section:workflow-related-statements}\nThis step selects the statements from the graph related to the set of contributions selected in the previous step. Statements are selected transitively to match contributions in subject or object position. 
This search is performed until a predefined maximum transitive depth $\\delta$ has been reached. The intuition is that the deeper a property is nested, the less likely it is relevant for the comparison. The process of selecting statements is repeated until depth $\\delta=5$ is reached. This number is chosen empirically to include statements that are not directly related to the contribution, but to exclude statements that are less relevant because they are nested too deep.", "id": "5d1f0f97-df75-4699-9f80-7b7af6dea650", "level": "subsection", "origin_cites_number": 0, "parent_id": "a1917066-53fd-432d-9c65-ea4825cda6c7", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "System design" ], [ "subsection", "Select related statements" ] ], "subsections": [], "title": "Select related statements" }, { "cite_extract_rate": 1, "cites": [ 8369, 6548 ], "content": "As described in the first step, comparisons are built using shared or similar properties of contributions. When the same property is used across contributions, these properties are grouped and form one \\textit{comparison row}. However, often different properties are used to describe the same concept. This occurs for various reasons. The most obvious reason is when two different ontologies are used to describe the same property. For example, for describing the population of a city, DBpedia uses \\textit{dbo:populationTotal} while WikiData uses \\textit{WikiData:population} (actually the property identifier is P1082; for the purpose here we use the label). When comparing contributions, these properties should be considered equivalent. Especially for community-created knowledge graphs, differently identified properties likely exist that are, in fact, equivalent.\nTo overcome this problem, we use pre-trained fastText~ word embeddings to determine the similarity of properties. 
If the similarity is higher than a predetermined threshold $\\tau$, the properties are considered equivalent and are grouped. The threshold is set to $\\tau = 0.9$ (also empirically determined). In the end, each group of properties will be visualized as one row in the comparison table. The result of this step is a list of statements for each contribution, where similar properties are grouped. Based on this, the similarity matrix $\\gamma$ is generated \n\\begin{equation}\n \\gamma_{p_{i}} = \\left [ cos(\\overrightarrow{p_i}, \\overrightarrow{p_j}) \\right ]\n \\label{eq:similarity_matrix}\n\\end{equation}\nwith $cos(.)$ as the cosine similarity of vector embeddings for property pairs $(p_i, p_j) \\in \\mathcal{P}$, whereby $\\mathcal{P}$ is the set of all properties.\nFurthermore, we create a mask matrix $\\Phi$ that selects properties of contributions $c_i \\in \\mathcal{C}$, whereby $\\mathcal{C}$ is the set of contributions to be compared. Formally,\n\\begin{equation}\n \\Phi_{i,j} = \\begin{cases}\n1 & \\text{ if } p_{j} \\in c_i \\\\ \n0 & \\text{ otherwise } \n\\end{cases}\n \\label{eq:mask_matrix}\n\\end{equation}\nNext, for each selected property $p$ we create the matrix $\\varphi$ that slices $\\Phi$ to include only similar properties. Formally,\n\\begin{equation}\n\\varphi_{i,j} =(\\Phi_{i,j})_{\\substack{c_i \\in \\mathcal{C}\\\\ p_j \\in sim(p) }}\n\\label{eq:slice_mask}\n\\end{equation}\nwhere $sim(p)$ is the set of properties whose similarity value with property $p$ satisfies $\\gamma[p] \\geq \\tau$. Finally, $\\varphi$ is used to efficiently compute the common set of properties~. 
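The grouping above the threshold $\tau$ can be sketched in plain Python (hypothetical; the real system uses pre-trained fastText embeddings, which are represented here by toy vectors passed in as a lookup table). Properties whose embedding cosine similarity reaches $\tau$ are merged, transitively, into groups; each group then becomes one comparison row.

```python
import math

TAU = 0.9  # empirically determined threshold, as stated in the text

def cos(u, v):
    """Cosine similarity of two dense vectors given as sequences of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def align_properties(embeddings, tau=TAU):
    """Group property labels whose embedding similarity >= tau.

    `embeddings` maps a property label to its (toy) embedding vector.
    Grouping is transitive, implemented with a small union-find structure.
    Returns a sorted list of sorted groups.
    """
    props = list(embeddings)
    parent = {p: p for p in props}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p

    for i, p1 in enumerate(props):
        for p2 in props[i + 1:]:
            if cos(embeddings[p1], embeddings[p2]) >= tau:
                parent[find(p1)] = find(p2)  # merge the two groups

    groups = {}
    for p in props:
        groups.setdefault(find(p), []).append(p)
    return sorted(sorted(group) for group in groups.values())
```

For the DBpedia/WikiData example from the text, two near-identical embeddings for the population properties would end up in one group (one comparison row), while an unrelated property stays in its own group.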
This process is displayed in Algorithm~\\ref{alg:aligning-properties}.\n\\begin{algorithm}\n\\caption{Align contribution descriptions}\\label{alg:aligning-properties}\n\\begin{algorithmic}[1]\n\\Procedure{AlignProperties}{properties, threshold}\n\\ForEach {property $p_1 \\in properties$}\n \\ForEach {property $p_2 \\in properties$}\n \\State $similarity \\gets$ cos(Embb($p_1$), Embb($p_2$))\n \\If {$similarity > threshold$}\n \\State $similarProps \\gets similarProps \\cup \\{ p_1, p_2 \\}$\n \\EndIf\n \\EndFor\n\\EndFor\n\\Return $similarProps$\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}", "id": "fbebdd62-b2b1-408f-9c69-5d4f7f13cfce", "level": "subsection", "origin_cites_number": 2, "parent_id": "a1917066-53fd-432d-9c65-ea4825cda6c7", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "System design" ], [ "subsection", "Align contribution descriptions" ] ], "subsections": [], "title": "Align contribution descriptions" }, { "cite_extract_rate": 0, "cites": [], "content": "The next step of the workflow is to visualize the comparison and present the data in a human-understandable format. Tabular format is often appropriate for visualizing comparisons since tables provide a good overview of data. Another aspect of the visualization is determining which properties should be displayed and which ones should be hidden. A property is displayed when it is shared among a predetermined number $\\alpha$ of contributions, where $\\alpha$ mainly depends on the use of the comparison and can be determined based on the total number of contributions in the comparison. By default, only properties that are common to at least two contributions ($\\alpha \\geq 2$) are displayed. \nAnother aspect of comparison visualization is the possibility to customize the resulting table. This is needed because of the similarity-based matching of properties and the use of predetermined thresholds. 
For example, users should be able to enable or disable properties. They should also get feedback on property provenance (i.e., the property's path in the graph). Ultimately, this contributes to a better user experience, with the possibility to manually correct mistakes made by the system. \nFigure~\\ref{fig:comparison-table} displays a comparison for research contributions related to visualization tools published in the literature. In this example, four properties are displayed. Literals are displayed as plain text while resources are displayed as links. When a resource link is selected, a popup is displayed showing the statements related to this resource. The UI implements some additional features that are particularly useful to compare research contributions.", "id": "1b062162-78e7-427e-8efd-91ae002ca800", "level": "subsection", "origin_cites_number": 0, "parent_id": "a1917066-53fd-432d-9c65-ea4825cda6c7", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "System design" ], [ "subsection", "Visualize comparison" ] ], "subsections": [ "04b99168-d893-480d-972b-bd9514984b9f" ], "title": "Visualize comparison" }, { "cite_extract_rate": 0, "cites": [], "content": "Users can customize comparisons including transposing the table as well as hiding and rearranging the properties. Especially the option to hide properties is helpful when contributions with many statements are compared. Only properties considered relevant to the user can be selected to display. 
Customizing the comparison table can be useful before exporting or sharing the comparison.", "id": "04b99168-d893-480d-972b-bd9514984b9f", "level": "paragraph", "origin_cites_number": 0, "parent_id": "1b062162-78e7-427e-8efd-91ae002ca800", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "System design" ], [ "subsection", "Visualize comparison" ], [ "paragraph", "Customization" ] ], "subsections": [ "6d0b1e3c-a8a3-4dd7-a2ef-5cb37393a9a0", "0851ca07-6e0f-4b3d-8617-8e8005a1d81b" ], "title": "Customization" }, { "cite_extract_rate": 0, "cites": [], "content": "Comparisons can be shared using a persistent link. Especially when sharing the comparison for research purposes, it is important to refer to the original comparison. Since contribution descriptions may change over time, comparisons may also change. To support persistence, the whole state of the comparison is stored in a document-oriented database and retrieved when the permalink is invoked.", "id": "6d0b1e3c-a8a3-4dd7-a2ef-5cb37393a9a0", "level": "paragraph", "origin_cites_number": 0, "parent_id": "04b99168-d893-480d-972b-bd9514984b9f", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "System design" ], [ "subsection", "Visualize comparison" ], [ "paragraph", "Customization" ], [ "paragraph", "Sharing and persistence" ] ], "subsections": [], "title": "Sharing and persistence" }, { "cite_extract_rate": 0, "cites": [], "content": "It is possible to export comparisons in different output formats such as PDF, CSV, RDF and \\LaTeX. The \\LaTeX~export is useful for direct integration in research papers. Together with the \\LaTeX~table, a BibTeX file containing the bibliographic information of the papers used in the comparison is also generated. 
Also, a persistent link referring back to the comparison in ORKG is shown as a table footnote.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.95\\columnwidth]{img/comparison-table.png}\n \\caption{Comparison of research contributions related to visualization tools.}\n \\label{fig:comparison-table}\n\\end{figure}", "id": "0851ca07-6e0f-4b3d-8617-8e8005a1d81b", "level": "paragraph", "origin_cites_number": 0, "parent_id": "04b99168-d893-480d-972b-bd9514984b9f", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "System design" ], [ "subsection", "Visualize comparison" ], [ "paragraph", "Customization" ], [ "paragraph", "Export" ] ], "subsections": [], "title": "Export" }, { "cite_extract_rate": 0, "cites": [], "content": "Visualized and customized comparison tables can be stored. Storing tables is part of the publishing process and therefore only needed when a generated table is going to be used in a paper. In order to regenerate the table, the whole state of the comparison should be saved. The knowledge graph from which the comparison was generated changes over time and thus storing just the URIs of the respective papers would not suffice. While saving a comparison, the user can provide additional metadata to ensure findability, an aspect of the FAIR principles. Metadata includes a comparison title, which would normally consist of a one-sentence description of the comparison. Additionally, a longer textual description can be provided. This metadata is extended with machine-generated data, such as the creation date and the creator of the comparison. The metadata is stored in the knowledge graph to support easy access and interoperability. In Figure~\\ref{fig:RDF-comparison}, the structure of the metadata is displayed using the Dublin Core Metadata Terms\\footnote{\\url{https://dublincore.org/specifications/dublin-core/dcmi-terms}}. 
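As an illustration of such metadata, a minimal serializer could emit the description as a Turtle snippet using the Dublin Core terms mentioned above. This is a hypothetical sketch: the function name, the example values and the comparison URI are invented, and the actual ORKG export may differ.

```python
def comparison_metadata_turtle(uri, title, description, creator, created):
    """Serialize comparison metadata as a Turtle snippet using Dublin Core terms."""
    lines = [
        "@prefix dcterms: <http://purl.org/dc/terms/> .",
        "",
        f"<{uri}>",
        f'    dcterms:title "{title}" ;',
        f'    dcterms:description "{description}" ;',
        f'    dcterms:creator "{creator}" ;',
        f'    dcterms:created "{created}"^^<http://www.w3.org/2001/XMLSchema#date> .',
    ]
    return "\n".join(lines)

# Hypothetical example values for a saved comparison:
snippet = comparison_metadata_turtle(
    uri="https://orkg.org/orkg/comparison/R0000",
    title="Comparison of visualization tools",
    description="Properties of visualization tools published in the literature.",
    creator="Jane Doe",
    created="2020-01-15",
)
```

Keeping the metadata in a standard vocabulary like Dublin Core is what makes it interoperable for machine consumers, in line with the FAIR principles.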
The comparison data itself is stored in a document-oriented database. An RDF export of both the metadata and the comparison data can be generated. The comparison data is modeled with the RDF Data Cube Vocabulary\\footnote{\\url{https://www.w3.org/TR/vocab-data-cube}}. A unique identifier is attached when the comparison is saved. This ID is used when the comparison is shared or when it is referenced in a paper. The literature comparison can also be performed without publishing. Although the workflow and the steps to create a comparison stay the same, the goal is different. Instead of creating a comparison that will be published and referenced in a paper, the comparison will be used by the researcher herself. \n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=.85\\columnwidth]{img/RDF-comparison.pdf}\n \\caption{The graph structure of the metadata for a published comparison. The \\textit{dcterms:} prefix denotes the Dublin Core Metadata Terms ontology.}\n \\label{fig:RDF-comparison}\n\\end{figure}", "id": "da78b76d-a336-4f78-9f93-2fb8a21e6591", "level": "subsection", "origin_cites_number": 0, "parent_id": "a1917066-53fd-432d-9c65-ea4825cda6c7", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "System design" ], [ "subsection", "Publish comparison" ] ], "subsections": [], "title": "Publish comparison" }, { "cite_extract_rate": 0.25, "cites": [ 8253 ], "content": "The user interface of the comparison feature is seamlessly integrated with the ORKG front end, which is written in JavaScript and is publicly available\\footnote{\\url{https://gitlab.com/TIBHannover/orkg/orkg-frontend}}. The back end of the comparison feature is a service separate from the ORKG back end written in Python and also available Open Source\\footnote{\\url{https://gitlab.com/TIBHannover/orkg/orkg-similarity}}. The comparison back end is responsible for step two and three of the comparison methodology. 
The input in step two is the set of contribution IDs. The API selects the related statements and aligns the properties and returns the data needed to visualize the comparison. This data includes the list of papers, list of all properties and the values per property. \n\\begin{table*}[t]\n\\caption{List of imported survey tables in the ORKG. The paper and table reference can be used to identify the original table. }\n\\label{table:imported-review-tables}\n\\begin{adjustbox}{scale=.90}\n\\begin{tabular}{l|l|l|r|l|l}\n\\toprule\n\\textbf{Paper reference} & \\textbf{Table reference} & \\textbf{Research problem} & \\textbf{Papers} & \\textbf{ORKG representation} & \\textbf{Information loss} \n\\\\ \\midrule\n\\citeauthor{Bikakis2016}~ & \nTable 1 & Generic visualizations & \n11 &\n\\url{https://orkg.org/orkg/c/pdLJDk} & \nNo \n\\\\ \n\\citeauthor{Bikakis2016}~ & \nTable 2 & Graph visualizations & \n21 &\n\\url{https://orkg.org/orkg/c/Rx476Z} & \nNo\n\\\\ \n\\citeauthor{Diefenbach2018a}~ & \nTable 2 & \nQuestion answering evaluations & \n33 &\n\\url{https://orkg.org/orkg/c/gaVisD} & \nNo\n\\\\ \n\\citeauthor{Diefenbach2018a}~ & \nTable 3,4,5,6 & \nQuestion answering systems & \n26 &\n\\url{https://orkg.org/orkg/c/IuEWl2} & \nNo\n\\\\ \n\\citeauthor{Hussain2017a}~ & \nTable 4 &\nAuthor name disambiguation &\n5 &\n\\url{https://orkg.org/orkg/c/vDxKdr} & \nNo\n\\\\\n\\citeauthor{Hussain2017a}~ & \nTable 5 &\nAuthor name disambiguation &\n6 &\n\\url{https://orkg.org/orkg/c/XXg8Wg} & \nNo\n\\\\ \n\\citeauthor{Hussain2017a}~ & \nTable 6 &\nAuthor name disambiguation &\n9 &\n\\url{https://orkg.org/orkg/c/9rOwPV} & \nNo\n\\\\ \n\\citeauthor{Hussain2017a}~ & \nTable 7 &\nAuthor name disambiguation &\n6 &\n\\url{https://orkg.org/orkg/c/mB7kIK} & \nNo\n\\\\ \n\\citeauthor{Naidu2018}~ & \nTable 4 &\nText summarization &\n52 &\n\\url{https://orkg.org/orkg/c/OUqYB9} & \nNo\n\\\\ \n\\bottomrule\n\\end{tabular}\n\\end{adjustbox}\n\\end{table*}", "id": 
"252651d1-6cd2-4bc9-abba-721f972b6d1e", "level": "subsection", "origin_cites_number": 4, "parent_id": "a1917066-53fd-432d-9c65-ea4825cda6c7", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "System design" ], [ "subsection", "Technical details" ] ], "subsections": [], "title": "Technical details" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{section:data-collection}\nIn order to generate useful literature reviews it is crucial for the knowledge graph to contain sufficient and relevant papers. Populating the knowledge graph with high quality paper descriptions is not straightforward. Structured descriptions of papers should be created in such a way that it is possible to compare papers based on shared properties. Both published papers and papers that will be published in the future should be added to the ORKG, retrospectively or prospectively. Although a comprehensive description of how to populate the ORKG is out-of-scope here, we now briefly describe how we envision populating the ORKG in a manner that would facilitate comparing contributions.\nProspectively, authors can become part of generating structured descriptions of their papers. This should be done in a crowdsourced manner and can become part of the paper submission process. Input templates that collect relevant properties can be used to ensure structured and comparable paper descriptions. 
Retrospectively, automated (machine learning) methods can help ensure the scalability of the process of adding a paper.", "id": "def65367-1d9c-4420-92cd-cd2f5cd6b277", "level": "section", "origin_cites_number": 0, "parent_id": "3313ad50-7887-4970-bb3f-a8d5311e5f61", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "Data collection" ] ], "subsections": [ "a72860f0-4d1e-4b2f-aa43-73a96ef0f3e7" ], "title": "Data collection" }, { "cite_extract_rate": 0, "cites": [], "content": "To populate the ORKG with comparable paper descriptions, we leverage the data published in review papers. Review papers consist of high quality, curated and often structured data that is collected from a set of papers that address the same (or a similar) research problem. Hence, using reviews to populate a scholarly knowledge graph is a relatively straightforward approach to obtain high quality structured paper descriptions. We now present a methodology to convert survey paper data into a knowledge graph structure. The steps are as follows:\n\\begin{enumerate}\n \item \textbf{Survey paper selection.} The first step is the selection of survey papers that are suitable for building a knowledge graph. Firstly, the survey should compare peer-reviewed scientific articles. For instance, a comparison of different systems without a reference to peer-reviewed work is not suitable for the scholarly knowledge graph. Secondly, the review should compare the papers' content in a structured way and should not merely list work in a field. Especially reviews that present their results and literature comparisons in tabular format are suitable. The result of this step is a list of papers that will be added to the ORKG. \n \item \textbf{Table selection.} Given the selected survey papers, tables have to be selected. Some surveys contain only one table while in others multiple tables are presented. 
In some cases a collection of tables can be joined into one larger table. \n \item \textbf{Data modeling.} Given the selected tables, a suitable graph structure has to be determined. The data structure has to be modeled. For instance, when implemented systems are compared, a suitable structure could be: \texttt{[has implementation] -> System name}. The referenced system can be described with a list of properties to be compared. Additionally, a research problem has to be defined, which is typically the same for all papers that are part of the table. \n \item \textbf{Metadata collection.} Next, the metadata for the papers that are referenced in the survey table is collected. In case a referenced paper has a DOI\footnote{Digital Object Identifier}, the metadata can be automatically retrieved via a lookup service (e.g. Crossref\footnote{\url{https://www.crossref.org}}). Otherwise, at least the title, authors and publication date have to be collected.\n \item \textbf{Data ingestion.} Finally, the paper data is ingested into the knowledge graph. The paper data consists of both the paper's metadata and the extracted data from the comparison table. \n This does not result in a single description of the survey paper. Each paper referenced in the survey table is ingested individually. In order to speed up the process of adding papers, we developed a Python package\footnote{\url{https://gitlab.com/TIBHannover/orkg/orkg-pypi}} that has a function to add a paper to the knowledge graph. \n\\end{enumerate}\nThis methodology has been used to populate the ORKG with comparable paper data. The data is used to evaluate the presented literature review tool. The imported paper data is not only useful for the evaluation, but also provides significant value to the ORKG itself.\nIn total, four review papers were selected for importing into the ORKG. 
The Python script for importing the table data is available online.\\footnote{\\url{https://gitlab.com/TIBHannover/orkg/orkg-papers}} From those papers, 12 different tables were imported. Together, 169 papers were reviewed in those four survey papers. This resulted in a total amount of $3\\,750$ statements being added to the knowledge graph. Table~\\ref{table:imported-review-tables} lists the imported review papers and tables. The survey papers address different research problems. Figure~\\ref{fig:graph-example-import} depicts an excerpt of the resulting graph for one particular paper. A set of comparison tables made with the imported data is available online.\\footnote{\\url{https://orkg.org/orkg/featured-comparisons}} This list includes some alternative comparison tables that were generated with the same data. \n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{img/graph-example-import.pdf}\n \\caption{Partial graph structure of an imported paper. Orange colored resources indicate potentially interesting values for a paper comparison.}\n \\label{fig:graph-example-import}\n\\end{figure}", "id": "a72860f0-4d1e-4b2f-aa43-73a96ef0f3e7", "level": "subsection", "origin_cites_number": 0, "parent_id": "def65367-1d9c-4420-92cd-cd2f5cd6b277", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "Data collection" ], [ "subsection", "Leverage legacy review paper tables" ] ], "subsections": [], "title": "Leverage legacy review paper tables" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section:evaluation}\nIn this section, we present an evaluation of multiple aspects of the presented comparison methodology and implementation. Firstly, we evaluate information representation. Then, we evaluate the FAIRness of published reviews. 
Finally, we present a performance evaluation that tests the scalability.", "id": "9c67205a-5ea7-42a6-af06-049ac9b5d00b", "level": "section", "origin_cites_number": 0, "parent_id": "3313ad50-7887-4970-bb3f-a8d5311e5f61", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "Evaluation" ] ], "subsections": [ "52143d93-69a3-4b3d-a716-7d4ad8e48679", "c31bac6f-f294-4381-8adf-fd5c75c400a7", "34ac5be8-a18c-4ef0-bc7d-dc420b39ab8d" ], "title": "Evaluation" }, { "cite_extract_rate": 0, "cites": [], "content": "This part of the evaluation focuses on the aspect of information representation. We use the data from the imported review papers, as described in Section~\ref{section:data-collection}. In order to build and publish useful and correct literature reviews, at a minimum our service should display the same information that was originally presented in the review tables. This means that there should not be information loss when review tables are published using our service. If there is no information loss, it means our service can be used as an alternative to the current way of publishing review tables. Apart from generating the same table, the added value comes from the ability to aggregate new (tabular) views using the same data as well as the increased FAIRness of the data published via our service. For each of the imported review tables, listed in Table~\ref{table:imported-review-tables}, we can evaluate whether the same table can be generated with our service. For this, we have compared the table from the review paper to the table generated by the ORKG comparison service. A collection of 169 papers with 9 distinct literature views/tables is part of this evaluation. These tables can be viewed online; the links are listed in the ``ORKG representation'' column. The results of this evaluation are displayed in the same table, in column ``Information loss''. 
As the results show, using our service it is possible to recreate the same tabular views as originally published in the review papers.", "id": "52143d93-69a3-4b3d-a716-7d4ad8e48679", "level": "subsection", "origin_cites_number": 0, "parent_id": "9c67205a-5ea7-42a6-af06-049ac9b5d00b", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "Evaluation" ], [ "subsection", "Information representation" ] ], "subsections": [], "title": "Information representation" }, { "cite_extract_rate": 0, "cites": [], "content": "As described before, with the presented service it is possible to publish a generated comparison that adheres to the FAIR principles. Because the service leverages a knowledge graph to generate and save comparisons, complying with the FAIR principles is more straightforward for the ORKG comparison service than for tables in published PDF articles. In order to evaluate the FAIRness of a published comparison, we evaluate each of the four FAIR principles in detail. \citeauthor{Wilkinson2016}~ described each principle by assigning sub-principles.\footnote{For a more detailed definition of the FAIR principles, see: \url{https://go-fair.org/fair-principles}} We discuss the relevant sub-principles and explain how they are met. We use the term (meta)data to refer to both the actual comparison data (i.e., the data that is used to create the comparison table) and the associated metadata (e.g., the title, description and creator of a comparison). 
Table \ref{table:fair-principles-compliance} presents an overview of the evaluation of the FAIR principles.", "id": "c31bac6f-f294-4381-8adf-fd5c75c400a7", "level": "subsection", "origin_cites_number": 1, "parent_id": "9c67205a-5ea7-42a6-af06-049ac9b5d00b", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "Evaluation" ], [ "subsection", "FAIR data evaluation" ] ], "subsections": [ "ac11071f-8ae2-423d-ab33-85fce076119e" ], "title": "FAIR data evaluation" }, { "cite_extract_rate": 0, "cites": [], "content": "To make data findable for both humans and machines (i.e., agents), a unique and persistent identifier should be attached to the data (F1). Additionally, metadata should describe the data (F2). In the metadata, the unique identifier of the data should be mentioned (F3). Also, a search interface should be available to find the data (F4). To ensure the findability of comparisons, users can title and describe them. Furthermore, machine generated metadata is attached to a comparison (e.g., the number of papers and the creation date). A unique identifier is generated and attached to the data and included in the metadata. Finally, the ORKG search interface allows users to search the whole graph and has a dedicated filter to specifically find comparisons. Additionally, comparisons can be indexed and found by third-party search engines (such as Google or Bing).", "id": "ac11071f-8ae2-423d-ab33-85fce076119e", "level": "paragraph", "origin_cites_number": 0, "parent_id": "c31bac6f-f294-4381-8adf-fd5c75c400a7", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "Evaluation" ], [ "subsection", "FAIR data evaluation" ], [ "paragraph", "Findable." ] ], "subsections": [ "154af8e0-ed85-46d2-9570-9da5491ee971", "68397955-d4af-4601-8b25-5a6aef856a6b", "a1388af8-78d9-4c21-9ff4-2ccd2d4a9694" ], "title": "Findable." 
}, { "cite_extract_rate": 0, "cites": [], "content": "Having found data, agents need to know how to access it. This principle is primarily about using accessible standardised communication protocols (A1). Additionally, metadata should be available even when the data is not (A2). The metadata is part of the knowledge graph, which can be accessed via the HTTP protocol. The data can be accessed without authentication. To support A2, the metadata and the actual comparison data are stored separately. Therefore, it is possible to access only metadata when the original data is not available anymore (for example when data is retracted by the author).", "id": "154af8e0-ed85-46d2-9570-9da5491ee971", "level": "paragraph", "origin_cites_number": 0, "parent_id": "ac11071f-8ae2-423d-ab33-85fce076119e", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "Evaluation" ], [ "subsection", "FAIR data evaluation" ], [ "paragraph", "Findable." ], [ "paragraph", "Accessible." ] ], "subsections": [], "title": "Accessible." }, { "cite_extract_rate": 0, "cites": [], "content": "To ensure the interoperability of data, it should use a formal language for knowledge representation (I1) and should use vocabularies that are FAIR (I2). Finally, references or links to other (meta)data should be made (I3). As argued before, thanks to highly structured data and the integration of shared vocabularies, interoperability is an inherent feature of knowledge graphs. Data is (partially) described using the ORKG core ontology and other ontologies we use to canonicalize the representation of relevant information content types. Links to other data are present in the knowledge graph. 
For example, if a comparison uses the ``Web'' resource to specify the domain of an application, this resource is generic, can be shared among paper descriptions and comparisons, and can be described in more detail, independently of a particular comparison.", "id": "68397955-d4af-4601-8b25-5a6aef856a6b", "level": "paragraph", "origin_cites_number": 0, "parent_id": "ac11071f-8ae2-423d-ab33-85fce076119e", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "Evaluation" ], [ "subsection", "FAIR data evaluation" ], [ "paragraph", "Findable." ], [ "paragraph", "Interoperable." ] ], "subsections": [], "title": "Interoperable." }, { "cite_extract_rate": 0, "cites": [], "content": "Finally, data should be reusable. This can be accomplished by adding relevant (meta)data (R1). Required are an accessible data license (R1.1) and detailed provenance data (R1.2). Finally, (meta)data should use community standards to describe data (R1.3). It is possible to add additional metadata to a comparison, e.g. metadata about the scope of the comparison, which could be a reference to the paper in which the comparison is being used. The metadata is complemented with the metadata that is already part of the Findability principle, e.g. provenance data about the creator of the comparison. The data license of the graph data is CC BY-SA\footnote{\url{https://creativecommons.org/licenses/by-sa/2.0}} (Attribution-ShareAlike), which allows reuse of the data. There is currently no community standard to describe the comparison data. 
However, standard ontologies are used to describe metadata (e.g., Dublin Core).\n\\begin{table}[t]\n\\centering\n\\caption{Overview of FAIR principles compliance.}\n\\label{table:fair-principles-compliance}\n\\begin{adjustbox}{scale=.85}\n\\begin{tabular}{l|c|p{67mm}}\n\\toprule\n\\textbf{Principle} & \\textbf{Level} & \\textbf{Explanation} \\\\ \\midrule\n\\multicolumn{3}{l}{\\textit{Findable}} \\\\ \\midrule\nF1 & 3 & Unique IDs exist, DOI assignment for future work \\\\ \nF2 & 2 & Machine and user generated metadata is attached \\\\ \nF3 & 1 & Properties used to link data to metadata \\\\ \nF4 & 1 & Comparisons are findable via a search interface \\\\ \\midrule\n\\multicolumn{3}{l}{\\textit{Accessible}} \\\\ \\midrule\nA1 & 2 & Data is accessed over HTTP (via REST or a user interface), requires user effort to integrate the ORKG API specification \\\\ \nA1.1 & 1 & The protocol is free and widely used \\\\ \nA1.2 & 1 & No authentication is required to access the data \\\\ \nA2 & 1 & Metadata is stored in a persistent way and available without the data itself \\\\ \\midrule\n\\multicolumn{3}{l}{\\textit{Interoperable}} \\\\ \\midrule\nI1 & 1 & RDF (with type assertions) and CSV export of comparisons\\\\\nI2 & 2 & Reuse of ontologies where possible (ORKG core, Dublin core, RDF Data Cube Vocabulary). User responsible for other ontology reuse. \\\\ \nI3 & 3 & For comparisons, the compared paper metadata is linked. More references are needed and can be created by users. 
\\ \midrule\n\multicolumn{3}{l}{\textit{Reusable}} \\ \midrule\nR1 & 1 & Machine and user generated metadata is created while publishing \\\nR1.1 & 1 & CC-BY SA license \\ \nR1.2 & 1 & If a registered user publishes a comparison, the user is associated with the published data \\ \nR1.3 & 2 & Users can describe contributions using domain-relevant ontologies \\ \n\bottomrule\n\\end{tabular}\n\\end{adjustbox}\n1=Yes; 2=Yes, requires user effort; 3=Partially/future work\n\\end{table}\n\noindent\nThe evaluation of the FAIR principles shows that comparisons published with our service rank high in FAIRness, which can be even further increased with some effort from users. Users are mainly responsible for adding the correct information to the comparison and for reusing vocabularies. Otherwise, findability, accessibility and to some extent also interoperability are largely handled by the service.", "id": "a1388af8-78d9-4c21-9ff4-2ccd2d4a9694", "level": "paragraph", "origin_cites_number": 0, "parent_id": "ac11071f-8ae2-423d-ab33-85fce076119e", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "Evaluation" ], [ "subsection", "FAIR data evaluation" ], [ "paragraph", "Findable." ], [ "paragraph", "Reusable." ] ], "subsections": [], "title": "Reusable." }, { "cite_extract_rate": 0, "cites": [], "content": "In order to evaluate the performance of the overall comparison, we compared the implemented ORKG approach to a naive approach for comparing multiple resources. The naive approach compares each property against all other properties to perform the property alignment. Table~\ref{tab:sota-eval} shows the time needed to generate comparisons, for both the naive and the ORKG approach. In total, eight papers are compared with on average ten properties per paper. In the naive approach, the ``Align contribution descriptions'' step is not scaling well, since each property is compared against all others. 
If multiple contributions are selected, the number of property similarity checks grows exponentially. Table~\ref{tab:sota-eval} shows that the ORKG approach outperforms the naive approach. The total number of papers used for the evaluation is limited to eight because the naive approach does not scale to larger sets. \n\\begin{table}[t]\n\caption{Time (in seconds) to perform comparisons with 2-8 contributions using the naive and ORKG approaches.}\n\\begin{adjustbox}{scale=.9}\n\\begin{tabular}{l|lllllll}\n& \multicolumn{7}{c}{\textbf{Number of compared research contributions}} \\\n\hline\n & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{7} & \textbf{8} \\ \hline\n\textbf{Naive} & 0.00026 & 0.1714 & 0.763 & 4.99 & 112.74 & 1772.8 & 14421 \\\n\textbf{ORKG} & 0.0035 & 0.0013 & 0.01158 & 0.02 & 0.0206 & 0.0189 & 0.0204 \n\\end{tabular}\n\\end{adjustbox}\n\label{tab:sota-eval}\n\\end{table}", "id": "34ac5be8-a18c-4ef0-bc7d-dc420b39ab8d", "level": "subsection", "origin_cites_number": 0, "parent_id": "9c67205a-5ea7-42a6-af06-049ac9b5d00b", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "Evaluation" ], [ "subsection", "Performance evaluation" ] ], "subsections": [], "title": "Performance evaluation" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{section:discussion}\nOne of the aims of the contribution comparison functionality is to support literature reviews and make this activity less cumbersome and time consuming for researchers. To live up to this aim, more structured contribution descriptions are needed. Existing scholarly knowledge graph initiatives focus primarily on scholarly metadata, while with ORKG we focus on making the actual research contributions machine readable. Currently, the ORKG does not yet contain sufficient contribution descriptions in order for the comparison functionality to be practically useful for researchers. 
Furthermore, for an evaluation of the effectiveness of certain components of the methodology (such as finding related papers or aligning similar properties), more contribution data is needed. Publishing surveys does not rely on data quantity and is therefore evaluated more extensively in this work. The performance evaluation results indicate that the comparison feature performs well. This means the technical infrastructure is in place for the literature survey service.\nIn the evaluation, we focused on the aspects of the system that are necessary for researchers to use the system in practice. The information representation evaluation is a straightforward evaluation to see if existing survey tables can be regenerated with the ORKG. This is a minimal requirement for researchers when using the system, since they should at least be able to recreate tables. This evaluation does not give insight into the usefulness and usability of the system, but still provides an indication that the service can be successfully used to publish literature surveys. One of the reasons for using the service is that also ``dark data'' in comparisons is published (as discussed in Section~\ref{section:motivating-example:publish-reviews}).\nAnother interesting aspect of the service is that published literature surveys rank high in FAIRness. Therefore, the second part of the evaluation focuses on how the FAIR principles are met. Merely publishing data as RDF is not sufficient to fully meet the FAIR principles. Hence, we conducted a more detailed evaluation that describes how the service complies with each sub-principle. Since FAIR is not a \textit{standard}, the principles are permissive and not prescriptive. No technical requirements are specified. Both the implementation and evaluation of the guidelines are therefore subject to interpretation. With respect to data interoperability and reusability, certain aspects of the service can be improved. 
For example, to improve interoperability, the contribution data should reuse existing vocabularies where possible. Additionally, although most of the FAIRification is done by the system, the researcher is responsible for adding correct and relevant metadata while publishing a survey.\nAs indicated earlier, the usefulness of the presented tool depends on the number of papers present in the knowledge graph. Therefore, future work will focus on data collection, both in a crowdsourced and automated manner. We plan on extending the methodology presented in Section~\ref{section:data-collection} with automated extraction of data and tables from literature review papers. With the extracted review data, the knowledge graph can be extended more quickly than with the previously presented manual method. It could form the basis of a high quality scholarly knowledge graph that contains relevant and FAIR survey table data. Furthermore, in the future we will assign (DataCite) DOIs to published surveys. They will serve as a persistent identifier for the survey data.", "id": "f87ef12c-02fb-41f5-9f16-69dee5190666", "level": "section", "origin_cites_number": 2, "parent_id": "3313ad50-7887-4970-bb3f-a8d5311e5f61", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "Discussion \& future work" ] ], "subsections": [], "title": "Discussion \& future work" 
The presented methodology addresses multiple aspects, including finding suitable contributions, aligning contribution descriptions, visualization and publishing. The methodology is implemented within the Open Research Knowledge Graph (ORKG). Since the comparison relies on structured scholarly knowledge, we discussed how to populate the ORKG with relevant data. This is done by extracting tabular survey data from existing literature reviews. In order to evaluate whether the proposed service can be used to publish literature surveys, the original survey table representations were compared with the ones generated by our service. As the results indicate, it is possible to use the service as an addition or potentially even replacement of the current publishing approach, since the same tables can be generated. The evaluation also showed how the published literature surveys largely adhere to the FAIR data principles. This is crucial for data reusability and machine actionability. To conclude, the proposed literature comparison service addresses multiple weaknesses of the current survey publishing approach and can be used by researchers to generate, publish and reuse literature surveys. \n\\begin{acks}\nThis work was co-funded by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536) and the TIB Leibniz Information Centre for Science and Technology. We want to thank Kheir Eddine Farfar for his contributions to this work.\n\\end{acks}\n\\bibliographystyle{ACM-Reference-Format}\n\\bibliography{refs,mendeley}\n\\end{document}\n\\endinput", "id": "4c8c53d6-d473-464f-a034-d85b2b90268e", "level": "section", "origin_cites_number": 0, "parent_id": "3313ad50-7887-4970-bb3f-a8d5311e5f61", "prefix_titles": [ [ "title", "Generate FAIR Literature Surveys with Scholarly Knowledge Graphs" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
72
[ 6548, 6549, 8369, 8253 ]
2.126841
[ "Yantong Wang", "Vasilis Friderikos" ]
A Survey of Deep Learning for Data Caching in Edge Network
2020
2020-08-17T12:02:32Z
cs.NI
The concept of edge caching provision in emerging 5G and beyond mobile networks is a promising method to deal both with the traffic congestion problem in the core network as well as reducing latency to access popular content. In that respect end user demand for popular content can be satisfied by proactively caching it at the network edge, i.e, at close proximity to the users. In addition to model based caching schemes learning-based edge caching optimizations has recently attracted significant attention and the aim hereafter is to capture these recent advances for both model based and data driven techniques in the area of proactive caching. This paper summarizes the utilization of deep learning for data caching in edge network. We first outline the typical research topics in content caching and formulate a taxonomy based on network hierarchical structure. Then, a number of key types of deep learning algorithms are presented, ranging from supervised learning to unsupervised learning as well as reinforcement learning. Furthermore, a comparison of state-of-the-art literature is provided from the aspects of caching topics and deep learning methods. Finally, we discuss research challenges and future directions of applying deep learning for caching.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "347fe6bc-3506-4394-85f3-39521077022f", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ] ], "subsections": [ "c1e3d948-68a4-41aa-9453-ed02e0504669", "77d13750-5344-4e36-b9ce-8573ab07d97a", "c0dca0bc-798e-46e6-903c-5acb8a815d31", "3f275de1-b1c2-48e0-ae07-49947668a760", "bcb206fa-772b-4e9f-9da9-3ca19215514a", "9f55687b-d2fc-4fa2-a01d-e44198b97b4c" ], "title": "root" }, { "cite_extract_rate": 0.777777777777777, "cites": [ 3705, 3582, 3704, 3706, 3420, 7633, 3703 ], "content": "\\label{sec:introduction}\nUndoubtedly, future 5G and beyond mobile communication networks will have to address stringent requirements of delivering popular content at ultra high speeds and low latency due to the proliferation of advanced mobile devices and data rich applications. In that ecosystem, edge-caching has received significant research attention over the last decade as an efficient technique to reduce delivery latency and network congestion especially during peak-traffic times or during unexpected network congestion episodes by bringing popular data closer to the end users. One of the main reasons of enabling edge caching in the network is to reduce the number of requests that traverse the access and core mobile network as well as reducing the load at the origin servers that would have to, otherwise, respond to all requests directly in absence of edge caching. In that case popular content and objects can be stored and served from edge locations, which are closer to the end users. This operation is also beneficial from the end user perspective since edge caching can dramatically reduce the overall latency to access the content and increase in the sense overall user experience. 
It is also important to note that content popularity is highly skewed: requests for the top 10\% of video content on the Internet account for almost 80\% of all traffic, which corresponds to multiple requests from different end users for the same content\n.\nRecently, deep learning (DL) has attracted significant attention from both academia and industry and has been applied to diverse domains such as self-driving, medical diagnosis, and playing complex games such as Go . DL has also made its way into communication areas . In this paper, we focus on the application of DL to caching policies. Though there are some earlier surveys related to machine learning applications, they either focus on general machine learning techniques for caching , or concentrate on overall wireless applications . The work provides a big picture of applying machine learning in wireless communications. In , the authors consider machine learning for both caching and routing strategies. A comprehensive survey on machine learning applications for caching content in edge networks is provided in . The researchers provide a survey of machine learning for mobile edge caching and communication resources. On the other hand, overviews how artificial neural networks can be employed for various wireless network problems. The authors in provide a detailed survey on deep reinforcement learning (DRL) for issues in communications and networking. presents a comprehensive survey on deep learning applications and the edge computing paradigm. Our work can be distinguished from the aforementioned papers by the fact that we focus on deep learning techniques for content caching, and both wired and wireless caching are taken into account. Our main contributions are listed as follows:\n\\begin{itemize}\n\t\\item We classify the content caching problem into Layer 1 Caching and Layer 2 Caching. 
Each layer's caching consists of four tightly coupled subproblems: where to cache, what to cache, cache dimensioning and content delivery. Related research is surveyed accordingly. \n\t\\item We present the fundamentals of DL techniques that are widely used in content caching, such as convolutional neural networks, recurrent neural networks, actor-critic based deep reinforcement learning, etc.\n\t\\item We analyze a broad range of state-of-the-art literature that applies DL to content caching. These papers are compared based on the DL structure, the coupled caching subproblems and the objective of DL in each scenario. Then we discuss research challenges and potential directions for the utilization of DL in caching. \n\\end{itemize}\n\\begin{figure}[htb]\n\t\\centering\n\t\\includegraphics[trim=0mm 0mm 0mm 0mm, clip, width=\\textwidth]{figure/paper.eps}\n\t\\caption{Survey Architecture}\n\t\\label{fig:organization}\n\\end{figure}\nThe rest of this survey is organized as follows (as illustrated in Figure \\ref{fig:organization}). Section \\ref{sec:caching} presents the categories of the content caching problem. Section \\ref{sec:dl} reviews typical deep neural network structures. In Section \\ref{sec:dl_caching}, we list state-of-the-art DL-based caching strategies and compare them. Section \\ref{sec:challenges} discusses challenges as well as potential research directions. Finally, Section \\ref{sec:conclusions} concludes this paper. 
For better readability, the abbreviations used in this paper are listed in Table \\ref{tab:abbr}.\n\\begin{table}[htb]\n\t\\centering\n\t\\caption{\\label{tab:abbr} List of Abbreviations.}\n\t\\begin{tabular}{l|p{.38\\textwidth}|l|p{.38\\textwidth}}\n\t\t\\hline\n\t\t\\textbf{Abbr.} & \\textbf{Description} & \\textbf{Abbr.} & \\textbf{Description} \\\\\n\t\t\\hline\n\t\t3C & Computing, Caching and Communication & A3C & Asynchronous Advantage Actor-Critic \\\\\n\t\t\\hline\n\t\tBBU & Baseband Unit & CCN & Content-Centric Network \\\\\n\t\t\\hline\n\t\tCNN & Convolutional Neural Network & CoMP-JT & Coordinated Multi Point Joint Transmission \\\\\n\t\t\\hline\n\t\tCR & Content Router & C-RAN & Cloud-Radio Access Network \\\\\n\t\t\\hline\n\t\tCSI & Channel State Information & D2D & Device to Device \\\\\n\t\t\\hline\n\t\tDDPG & Deep Deterministic Policy Gradient & DL & Deep Learning \\\\\n\t\t\\hline\n\t\tDNN & Deep Neural Network & DQN & Deep Q Network \\\\\n\t\t\\hline\n\t\tDRL & Deep Reinforcement Learning & DT & Digital Twin \\\\\n\t\t\\hline\n\t\tED & End Device & ES & Edge Server \\\\\n\t\t\\hline\n\t\tETSI & \\multicolumn{3}{l}{European Telecommunication Standardization Institute} \\\\\n\t\t\\hline\n\t\tESN & Echo-State Network & FIFO & First In First Out\\\\\n\t\t\\hline\n\t\tFNN & Feedforward Neural Network & FBS & Femto Base Station \\\\\n\t\t\\hline\n\t\tICN & Information-Centric Network & LFU & Least Frequently Used \\\\\n\t\t\\hline\n\t\tLP & Linear Programming & LRU & Least Recently Used \\\\\n\t\t\\hline\n\t\tLSTM & Long Short-Term Memory & MAR & Mobile Augmented Reality \\\\\n\t\t\\hline\n\t\tMD & Mobile Device & MILP & Mixed Integer Linear Programming \\\\\n\t\t\\hline\n\t\tMBS & Macro Base Station & NFV & Network Function Virtualization\\\\\n\t\t\\hline\n\t\tPNF & Physical Network Function & PPO & Proximal Policy Optimization \\\\\n\t\t\\hline\n\t\tQoE & Quality of Experience & RL & Reinforcement Learning \\\\\n\t\t\\hline\n\t\tRNN & Recurrent Neural Network & RRH & Remote Radio Head \\\\\n\t\t\\hline\n\t\tSAE & Sparse Auto Encoder & SDN & Software Defined Network \\\\\n\t\t\\hline\n\t\tseq2seq & Sequence to Sequence & SNM & Shot Noise Model \\\\\n\t\t\\hline\n\t\tTRPO & Trust Region Policy Optimization & TTL & Time to Live \\\\\n\t\t\\hline\n\t\tVNF & Virtual Network Function & WSN & Wireless Sensor Network \\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{table}", "id": "c1e3d948-68a4-41aa-9453-ed02e0504669", "level": "section", "origin_cites_number": 9, "parent_id": "347fe6bc-3506-4394-85f3-39521077022f", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:caching}\nThe paradigm of data caching in edge networks is illustrated in Figure \\ref{fig:caching}. Similar to , the scope of the edge in this paper is the path between the end user and the data server, which contains Content Routers (CR), Macro Base Stations (MBS), Femto Base Stations (FBS) and End Devices (ED). In the context of a Cloud-Radio Access Network (C-RAN), both the baseband units (BBU) and remote radio heads (RRH) are considered potential candidates for hosting content, where the BBUs are clustered centrally into a BBU pool and the RRHs are deployed distributively near the BS antennas. According to the hierarchical structure of the edge network, data caching is classified into two categories: Layer 1 Caching and Layer 2 Caching. 
In this section, we illustrate the typical research topics in these two areas.\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[trim=1mm 1mm 5mm 1mm, clip, width=\\textwidth]{figure/Cache.eps}\n \\caption{Data Caching in Edge Network}\n \\label{fig:caching}\n\\end{figure}", "id": "77d13750-5344-4e36-b9ce-8573ab07d97a", "level": "section", "origin_cites_number": 2, "parent_id": "347fe6bc-3506-4394-85f3-39521077022f", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Data Caching Review" ] ], "subsections": [ "93519284-cc11-4b04-b82f-f74d3c6a42fc", "a2d45d41-6ccf-4b15-99cb-2ea399aa1ef2" ], "title": "Data Caching Review" }, { "cite_extract_rate": 0.25, "cites": [ 3709, 3707, 3710, 3708 ], "content": "In Layer 1 Caching, popular content is hosted in CRs. In the context of an Information-Centric Network (ICN), a CR plays a dual role, both as a typical router (i.e., data flow forwarding) and as a content store (i.e., a local data caching facility). Generally, CRs are connected via wired networks. Layer 1 Caching consists of four tightly coupled problems: where to cache, what to cache, cache dimensioning and content delivery.\nWhere to cache focuses on selecting the proper CRs to host the content. For instance, in Figure \\ref{fig:caching}, content replicas can be placed in lower-level CRs, such as routers B and C, as a means of reducing transmission cost at the expense of extra hosting cost; conversely, consolidating caching in CR A saves caching cost at the expense of more transmission cost, and risks violating end users' delay requirements. Here the caching/hosting cost is the cost of deploying the content, which could be measured by space utilization, energy consumption or other metrics. The transmission cost represents the price of delivering the content from the caching CR (or data server) to the end user and is basically estimated via the number of hops. 
The where-to-cache problem has usually been modelled as a Mixed Integer Linear Program (MILP):\n\\begin{subequations} \n\\label{fml:MILP}\n\\begin{align}\n\\label{MILP:obj}\n\\mathop{\\min}_{\\substack{x}}\\; & c^T x\\\\\n\\textrm{s.t.}\\quad \\label{MILP:con1}\n& Ax\\leq b \\\\\n\\label{MILP:con2}\n& x\\in\\{0,1\\} \\\\\n\\label{MILP:con3}\n\\text{or}\\quad & x\\geq 0\n\\end{align}\n\\end{subequations}\nwhere $x$ is the decision variable. Normally it is a binary variable indicating the CR assignment. In special cases, for the sake of modelling or linearization, some non-binary auxiliary variables are introduced, as constraint \\eqref{MILP:con3} shows. If caching only part of a file rather than the whole file is considered, the decision variable $x$ becomes a continuous variable representing the fraction of segments hosted in the CR; constraint \\eqref{MILP:con2} then becomes $x\\in[0,1]$ and the MILP model reduces to a linear program (LP). Many works allocate content via MILP with different objectives and limitations. The authors in propose a model to minimize the user delay and the load-balancing level of CRs subject to cache space constraints. The work in considers a trade-off between caching and transmission cost under cache space, link bandwidth and user latency constraints. In , an energy-efficient optimization model is constructed, consisting of caching energy and transport energy. provides more details of the mathematical models and related heuristic algorithms for caching deployment in wired networks. \nWhat to cache concentrates on selecting the proper contents for CRs, with the purpose of maximizing the cache hit ratio. By exploiting the statistical patterns of user requests, the popularity of requested information and user preferences can be forecast, and they play a very significant role in determining caching content. On the one hand, from the view of aggregated requested content, researchers propose many different models and algorithms for popularity estimation. 
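Returning to the where-to-cache formulation, the MILP in \eqref{fml:MILP} can be made concrete with a tiny brute-force sketch. The three-router topology, costs and demands below are hypothetical and chosen only to illustrate the caching-versus-transmission trade-off; a real instance would use an MILP solver instead of enumeration.

```python
from itertools import product

# Brute-force illustration of the where-to-cache MILP:
# minimize c^T x subject to a replica-capacity constraint (A x <= b), x in {0,1}^3.
# All numbers are hypothetical, chosen only for illustration.
caching_cost = [5, 2, 2]       # c: cost of hosting the content at CR A, B, C
transmission_cost = [1, 3, 3]  # hops from a remote copy down to each CR's users
demand = [4, 6, 5]             # requests served through CR A, B, C
capacity = 2                   # at most two replicas may be placed

best_x, best_cost = None, float("inf")
for x in product([0, 1], repeat=3):        # enumerate all binary assignments
    if sum(x) == 0 or sum(x) > capacity:   # need at least one replica, respect capacity
        continue
    # hosting cost for every placed replica ...
    cost = sum(c * xi for c, xi in zip(caching_cost, x))
    # ... plus transmission cost for CRs that must fetch the content remotely
    cost += sum(d * (0 if xi else t) for d, t, xi in zip(demand, transmission_cost, x))
    if cost < best_cost:
        best_x, best_cost = x, cost

print(best_x, best_cost)
```

With these numbers the cheapest placement replicates the content at the two lower-level routers, mirroring the trade-off discussed above.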
One widely used model in web caching is the Zipf model, based on the assumption that content popularity is static and each user's request is independent. However, this method fails to reflect the temporal and spatial correlations of content, where the temporal correlation reflects that popularity varies over time and the spatial correlation reflects that content preferences differ across geographical areas and socio-cultural media. A temporal model named the shot noise model (SNM) is built in , which enables users to estimate content popularity dynamically. Inspired by SNM, the work in considers both spatial and temporal characteristics in the caching decision. On the other hand, from the view of a specific end user during a certain period, caching his/her preferred content (which may not be popular network-wide) can also help to reduce traffic flow. Many approaches from recommendation systems can be applied in this case. Another aspect of the what-to-cache problem is the design of cache eviction strategies for when the storage space risks overflowing. Depending on the lifetime of cached contents, these policies can roughly be divided into two categories: in the first, which includes first in first out (FIFO), least frequently used (LFU), least recently used (LRU) and randomized replacement, contents are not removed until no more memory is available; the second is the time to live (TTL) strategy, where eviction happens once the related timer expires. presents an analytic model for the hit ratio of TTL-based caches under independent and identically distributed request flows. It is worth noting that in , the TTL-based cache policy is used for the consistency of dynamic contents instead of content replacement. In , the authors introduce a TTL model for cache eviction in which the timer is reset once a cache hit on the related content happens.\nCache dimensioning addresses how much storage space should be allocated. 
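The interaction between a Zipf request pattern, LRU eviction and the available cache size (which also motivates the dimensioning question) can be simulated in a few lines. The catalogue size, Zipf exponent and request counts below are arbitrary illustrative choices:

```python
import random
from collections import OrderedDict

random.seed(0)

# Static Zipf popularity over a hypothetical catalogue: p(k) proportional to 1/k.
CATALOGUE = 100
weights = [1.0 / k for k in range(1, CATALOGUE + 1)]

def lru_hit_ratio(cache_size, n_requests=20000):
    """Simulate LRU eviction under independent Zipf-distributed requests."""
    cache = OrderedDict()
    hits = 0
    for item in random.choices(range(CATALOGUE), weights=weights, k=n_requests):
        if item in cache:
            hits += 1
            cache.move_to_end(item)        # refresh recency on a hit
        else:
            cache[item] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict the least recently used item
    return hits / n_requests

# A larger cache yields a higher hit ratio, at a higher hosting cost.
small, large = lru_hit_ratio(10), lru_hit_ratio(40)
print(f"size 10: {small:.2f}, size 40: {large:.2f}")
```

The monotone but diminishing gain in hit ratio as the cache grows is exactly the trade-off that cache dimensioning balances against the cost of the extra space.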
Benefiting from softwarization and virtualization technologies, the cache size in each CR or edge cloud can be managed in a more flexible and dynamic way, which makes cache dimensioning an important feature of data caching. Technically, the cache hit ratio rises as cache memory increases, and consequently traffic congestion in the core network eases. However, excessive space allocation wastes resources, such as the energy needed to support the caching function. Hence there is a trade-off between cache size cost and network congestion. Economically, consider the scenario where a small content provider wants to rent service from a CDN provider such as Akamai or Huawei Cloud: there is likewise a balance between investment savings and network performance. In , the proper cache size of an individual CR in a Content-Centric Network (CCN) is investigated by exploiting the network topology. In , the authors consider the effect of network traffic distribution and user behaviour when designing the cache size.\nContent delivery considers how to transfer the cached content to the requesting user. The delivery traffic embraces single cached file downloading and video content streaming, and the metrics for these two scenarios differ. Regarding file downloading, the content cannot be consumed until the delivery is completed; therefore the downloading time of the entire file is viewed as a metric reflecting the quality of experience (QoE). For video streaming, especially for large videos split into several chunks, the delay limitation only applies to the first chunk. In that case, delivering the first chunk in time and keeping the transmission of the remaining chunks smooth are the key aims . Apart from those measurement metrics, another problem in content delivery is the routing policy. CCN , one implementation of the ICN architecture, employs a flooding-based name routing protocol to publish requests among caching CRs. 
On one hand, the flooding strategy reduces design complexity and maintenance cost, particularly in unstable scenarios; on the other hand, it wastes bandwidth resources. In , the authors discuss the optimal radius in scoped flooding. The delivery route is often considered jointly with the where-to-cache problem, in which case the objective function \\eqref{MILP:obj} includes both deployment and routing cost.", "id": "93519284-cc11-4b04-b82f-f74d3c6a42fc", "level": "subsection", "origin_cites_number": 16, "parent_id": "77d13750-5344-4e36-b9ce-8573ab07d97a", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Data Caching Review" ], [ "subsection", "Layer 1 Caching" ] ], "subsections": [], "title": "Layer 1 Caching" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 3712, 3716, 8682, 3714, 3717, 3713, 8681, 3715, 3711 ], "content": "In contrast to Layer 1 caching over wired connections, Layer 2 caching implements caching techniques in wireless networks. Though both need to solve the where to cache, what to cache, cache dimensioning and content delivery problems, wireless caching is more challenging, and some mature strategies from wired caching cannot be migrated directly to the wireless case. The reasons include the following: the resources in the wireless environment, such as caching storage and spectrum, are limited compared with the CRs in Layer 1 Caching; the mobility of end users and the dynamic network topologies must also be considered during the design of caching strategies; moreover, the wireless channels are uncertain, since they can be affected by fading and interference.\nIn wireless caching, where to cache focuses on finding the proper candidates among MBS, FBS, ED, and even the BBU pool and RRHs in C-RAN, to host the content. Caching at the MBS and FBS can alleviate backhaul congestion, since end users obtain the requested content directly from the BS instead of from a CR via backhaul links. 
Compared with an FBS, an MBS has wider coverage and, typically, there is no overlap among different MBSs . As mentioned above, the caching space in BSs is limited and it is impractical to cache all popular content. With the aim of improving the cache-hit ratio, an MILP-modelled collaborative caching strategy among MBSs is proposed in . If the accessed MBS does not host the content, the request is served by a neighbouring MBS that caches the file, rather than by the data server. For FBS caching, a distributed caching method is presented in ; the main idea is that an ED located in the coverage overlap of several FBSs is able to obtain content from multiple hosts. Caching at the ED can not only ease backhaul congestion but also improve the area spectral efficiency . When an end user requests content, he/she is served from local storage if the content is precached on his/her ED, or by an adjacent ED via D2D communication if the content is hosted there. In , the authors model the cache-enabled D2D network as a Poisson cluster process, where end users are grouped into several clusters and the collective performance is improved. From an individual perspective, caching content of interest to other users affects personal benefit. In , a Stackelberg game model is applied to formulate the conflict among end users, and a related incentive mechanism is designed to encourage content sharing. In the case of cache-enabled C-RAN, caching at the BBU can ease traffic congestion in the backhaul, while caching at the RRH can reduce the fronthaul communication cost. On the other hand, caching everything at the BBU raises the signaling overhead of the BBU pool, while caching everything at the RRHs weakens the processing capability. Therefore, deciding where to cache content in C-RAN makes a substantial contribution to balancing the signal processing capability at the BBU pool against the backhaul/fronthaul costs . The work in investigates caching at RRHs, jointly considering cell outage probability and fronthaul utilization. 
Due to end users' mobility, the prediction/awareness of user movement also influences proper host selection. Some works exploit user mobility in cache strategy design, such as and .\nSimilar to Layer 1, the what-to-cache decision as well as the eviction policy of Layer 2 depends on accurate prediction of content popularity or user preference in proactive caching methods. Content popularity exhibits temporal and spatial correlations, as already described for Layer 1 Caching. In Layer 2 caching, the proper spatial granularity in popular content estimation requires special attention . For example, the coverage areas of an MBS and an FBS are different, which makes the observed popularity at the MBS and FBS different as well: the former is based on a large number of users' behaviours, while an individual may prefer specific content categories. For small cells, preference estimation requires more accurate information, such as historical data . In order to capture the temporal and spatial dynamics of user preference, many different deep learning based algorithms have been proposed, which will be illustrated in Section \\ref{sec:dl_caching}.\nCache dimensioning in Layer 2 Caching involves more complicated factors: not only the network topology and content popularity, as in Layer 1 Caching, but also the backhaul transmission status and wireless channel features. Proper cache size assignment is studied in the scenario of a backhaul-limited cellular network , which also provides the closed-form boundary of the minimum cache size in the single-cell case. In the case of dense wireless networks, the work in quantifies the minimum required cache to achieve linear capacity scaling of the network throughput. The authors of also consider the scenario of dense networks. 
They derive a closed form of the optimal memory size which can reduce the consumption of backhaul capacity as well as guarantee wireless QoS.\nAccording to the number of transmitters and receivers, we divide content delivery in Layer 2 caching into three categories: one candidate serving one end user, such as unicast and D2D transmission; one candidate serving multiple users, as in multicast; and coordinated delivery, in which multiple transmitters serve one or more receivers, as in coordinated multi-point joint transmission (CoMP-JT). Once the requested content is cached locally, the BS can serve the end user via unicast, or an adjacent device can share the content via D2D transmission. Concurrent transmission risks co-channel interference in densely deployed networks. In D2D networks, link scheduling is introduced to select subsets of links to transmit simultaneously . With the aim of improving spectral efficiency, multicast is applied in content delivery when serving multiple requests for the same content simultaneously. There is therefore a trade-off between spectral efficiency and service delay: to serve more users in one transmission and achieve higher spectral efficiency, the BS waits to collect enough requests for the same content, which imposes a long waiting time on the first request. An optimal dynamic multicast scheduling is proposed in to balance these two factors. Multicast can also serve multiple requests for different contents. In , the authors provide a coded caching scheme which requires the communication link to be error-free; each user caches a part of its own content and parts of other users' content. The BS then multicasts the coded data to all users, and each user can decode his own requested content by an XOR operation between the received data and the precached parts of the other users' files. However, the coding complexity increases exponentially as the number of end users grows. 
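The two-user XOR trick behind coded multicast can be sketched in a few lines; the payloads are hypothetical placeholders of equal length, and real schemes code over file sub-segments rather than whole files:

```python
# Two-user sketch of XOR-coded multicast.
# User 1 wants file A but has precached file B; user 2 wants B but precached A.
# Instead of two unicasts, the BS multicasts a single coded message A xor B.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

file_a = b"popular-video-chunk-A"   # hypothetical payloads of equal length
file_b = b"popular-video-chunk-B"

coded = xor_bytes(file_a, file_b)   # one multicast serves both users

# Each user decodes its request by XORing the coded data with its cached file.
decoded_by_user1 = xor_bytes(coded, file_b)   # user 1 recovers A
decoded_by_user2 = xor_bytes(coded, file_a)   # user 2 recovers B

assert decoded_by_user1 == file_a and decoded_by_user2 == file_b
```

One transmission thus replaces two, which is the spectral-efficiency gain of coded caching; the exponential blow-up mentioned above comes from the number of such coded combinations as users multiply.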
CoMP-JT can also improve the spectral efficiency, by sharing channel state information (CSI) and content among BSs, but it requires high-capacity backhaul for exchanging data. In C-RAN, the BBUs are centralized in the BBU pool, which makes communication among BSs very efficient. designs CoMP-JT in C-RAN to minimize power consumption under constraints on transmission energy, link capacity and requested QoS.", "id": "a2d45d41-6ccf-4b15-99cb-2ea399aa1ef2", "level": "subsection", "origin_cites_number": 15, "parent_id": "77d13750-5344-4e36-b9ce-8573ab07d97a", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Data Caching Review" ], [ "subsection", "Layer 2 Caching" ] ], "subsections": [], "title": "Layer 2 Caching" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:dl}\nFigure \\ref{fig:DNN} shows some typical deep neural network (DNN) structures. These models are classified into three categories depending on the training method: supervised learning, unsupervised learning and reinforcement learning. 
\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[trim=0mm 0mm 0mm 0mm, clip, width=\\textwidth]{figure/DNN.eps}\n \\caption{Typical DNN Structures}\n \\label{fig:DNN}\n\\end{figure}", "id": "c0dca0bc-798e-46e6-903c-5acb8a815d31", "level": "section", "origin_cites_number": 0, "parent_id": "347fe6bc-3506-4394-85f3-39521077022f", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Deep Learning Outline" ] ], "subsections": [ "17a86b07-b7c3-41f4-9b7f-05a0e6ea6ae0", "f6c49080-7dcd-4236-b8c7-a294adda5983", "12cbfae2-b31a-489d-8357-cb9dedb5c4fa", "66cec1fd-1f07-4527-b02b-b98ccb6507c3", "cbd77acf-a82e-43b7-9bdd-bef9313a0f73" ], "title": "Deep Learning Outline" }, { "cite_extract_rate": 1, "cites": [ 166 ], "content": "FNN is a kind of DNN in which information propagates forward and there are no cycles among the neurons. In this paper, the term FNN denotes the fully connected neural network, in which every pair of adjacent layers is fully connected. According to the Universal Approximation Theorem, an FNN can approximate any continuous function on a closed and bounded domain, given enough neurons in the hidden layer . The hidden layer is applied to extract features from the input vector, which then feed the output layer, which acts as a classifier. 
Though FNN is very powerful, it runs into trouble on real-world tasks such as image recognition, due to the enormous number of weight parameters (a consequence of full connectivity) and the lack of built-in invariance to input transformations.", "id": "17a86b07-b7c3-41f4-9b7f-05a0e6ea6ae0", "level": "subsection", "origin_cites_number": 1, "parent_id": "c0dca0bc-798e-46e6-903c-5acb8a815d31", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Deep Learning Outline" ], [ "subsection", "Feedforward Neural Network (FNN)" ] ], "subsections": [], "title": "Feedforward Neural Network (FNN)" }, { "cite_extract_rate": 1, "cites": [ 810 ], "content": "To overcome the aforementioned drawback of FNN, CNN employs convolution and pooling operations: the former applies sliding convolutional filters to the input, and the latter performs down-sampling, usually via maximum or mean pooling. Generally, modern CNNs tend to contain deeper layers and smaller convolutional filters, and the structure moves towards a fully convolutional network , reducing the proportion of pooling layers as well as fully connected layers. Taxonomically, CNN belongs to the FNN family and has been broadly employed in image recognition, video analysis, natural language processing, etc. One limitation of FNNs, including CNNs, is that the output depends only on the current input. 
It is therefore hard to deal with sequential tasks.", "id": "f6c49080-7dcd-4236-b8c7-a294adda5983", "level": "subsection", "origin_cites_number": 1, "parent_id": "c0dca0bc-798e-46e6-903c-5acb8a815d31", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Deep Learning Outline" ], [ "subsection", "Convolutional Neural Network (CNN)" ] ], "subsections": [], "title": "Convolutional Neural Network (CNN)" }, { "cite_extract_rate": 0.5, "cites": [ 166 ], "content": "In order to deal with sequential tasks and exploit historical information, RNN employs neurons with self-feedback in the hidden layers. Unlike the hidden neuron in an FNN, the output of a recurrent neuron depends on both the current output of the previous layer and the last hidden state. While an FNN approximates continuous functions, an RNN with sigmoid activation functions can simulate a universal Turing machine and can thus, in principle, solve any computable problem . It is worth noting that RNN risks suffering from the long-term dependency problem, including exploding and vanishing gradients. Additionally, RNN has more parameters to train due to the added recurrent weights. 
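A minimal sketch of the recurrent update just described, with hypothetical scalar weights rather than trained ones, is:

```python
import math

# One recurrent step: the new hidden state depends on both the current input
# and the previous hidden state, h_t = tanh(W x_t + U h_{t-1} + b).
# W, U and b are small hypothetical constants, not trained values.
def rnn_step(x_t, h_prev, W=0.5, U=0.9, b=0.1):
    return math.tanh(W * x_t + U * h_prev + b)

# Feeding a sequence: the final state carries information from every earlier
# input, which is exactly what a feedforward network cannot do.
h = 0.0
for x_t in [1.0, -0.5, 0.3]:
    h = rnn_step(x_t, h)
print(h)
```

Repeatedly multiplying by the recurrent weight (here `U = 0.9`) during backpropagation is also the root of the vanishing and exploding gradient problem mentioned above.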
In the following, we introduce some RNN variants, as Figure \\ref{fig:RNN} shows.\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[trim=0mm 0mm 35mm 0mm, clip, width=\\textwidth]{figure/RNN.eps}\n \\caption{RNN Variants}\n \\label{fig:RNN}\n\\end{figure}", "id": "12cbfae2-b31a-489d-8357-cb9dedb5c4fa", "level": "subsection", "origin_cites_number": 2, "parent_id": "c0dca0bc-798e-46e6-903c-5acb8a815d31", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Deep Learning Outline" ], [ "subsection", "Recurrent Neural Network (RNN)" ] ], "subsections": [ "15615e34-4c33-4644-b6a2-80e552ca205d", "f028c925-baae-474d-ae9c-0ae54c3029f3", "bf6576fa-f9cd-4916-b753-6a9371a05fee" ], "title": "Recurrent Neural Network (RNN)" }, { "cite_extract_rate": 1, "cites": [ 166 ], "content": "As mentioned above, a simple RNN has many parameters to train, and its recurrent and input weights are difficult to learn . The basic idea of ESN is to fix these two kinds of weights and learn only the output weights (the links highlighted in Figure \\ref{fig:RNN}). The hidden layer is renamed the reservoir in ESN, where the neurons are sparsely connected and the weights are randomly assigned. 
The recurrent weights remain constant, so information from previous moments is stored in the reservoir with constant weight, like a voice echoing.", "id": "15615e34-4c33-4644-b6a2-80e552ca205d", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "12cbfae2-b31a-489d-8357-cb9dedb5c4fa", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Deep Learning Outline" ], [ "subsection", "Recurrent Neural Network (RNN)" ], [ "subsubsection", "Echo-State Network (ESN)" ] ], "subsections": [], "title": "Echo-State Network (ESN)" }, { "cite_extract_rate": 1, "cites": [ 166 ], "content": "In practice, an efficient way to cope with long-term dependencies is to employ gated RNNs, including LSTM . Compared with the recurrent neuron in a simple RNN: internally, LSTM introduces three gates to control signal propagation, where the input gate $I$ decides the portion of the input signal to be stored, the forget gate $F$ controls the ratio of the last moment's memory to be kept until the next period (the name \"forget gate\" may be a little misleading, because it actually represents the ratio to be remembered) and the output gate $O$ determines the proportion of the current state to be delivered; externally, LSTM has four inputs, comprising one input signal and three control signals for the three gates. 
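The gate computations just described can be written compactly in the common parameterization below ($\sigma$ is the logistic function, $\odot$ the element-wise product); the exact form varies across papers, so this is a representative sketch rather than the notation of any one surveyed work:

```latex
\begin{align*}
I_t &= \sigma(W_I x_t + U_I h_{t-1} + b_I) & \text{(input gate)} \\
F_t &= \sigma(W_F x_t + U_F h_{t-1} + b_F) & \text{(forget gate)} \\
O_t &= \sigma(W_O x_t + U_O h_{t-1} + b_O) & \text{(output gate)} \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) & \text{(candidate state)} \\
c_t &= F_t \odot c_{t-1} + I_t \odot \tilde{c}_t, & h_t &= O_t \odot \tanh(c_t)
\end{align*}
```

Note that $F_t$ multiplies the memory that is kept, consistent with the remark that the "forget gate" actually represents the ratio to be remembered.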
All four signals are derived from the current network input and the state delivered at the last moment.", "id": "f028c925-baae-474d-ae9c-0ae54c3029f3", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "12cbfae2-b31a-489d-8357-cb9dedb5c4fa", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Deep Learning Outline" ], [ "subsection", "Recurrent Neural Network (RNN)" ], [ "subsubsection", "Long Short-Term Memory (LSTM)" ] ], "subsections": [], "title": "Long Short-Term Memory (LSTM)" }, { "cite_extract_rate": 1, "cites": [ 167 ], "content": "A typical application of RNN is converting one sequence to another (seq2seq), as in machine translation. Conventionally, the output of a seq2seq architecture is a probability distribution over a fixed output dictionary. However, this cannot handle problems where the size of the output depends on the length of the input. In , the authors modify the output to be a distribution over the input sequence, which is analogous to pointers in C/C++. Pointer networks have been widely used in text summarization.", "id": "bf6576fa-f9cd-4916-b753-6a9371a05fee", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "12cbfae2-b31a-489d-8357-cb9dedb5c4fa", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Deep Learning Outline" ], [ "subsection", "Recurrent Neural Network (RNN)" ], [ "subsubsection", "Pointer Network" ] ], "subsections": [], "title": "Pointer Network" }, { "cite_extract_rate": 1, "cites": [ 166 ], "content": "An auto encoder is a stack of two NNs, named the encoder and the decoder respectively: the former tries to learn the representative characteristics of the input and generate a corresponding code, and the latter reads the code and reconstructs the original input. 
To prevent the auto encoder from simply copying the input, some restrictions are imposed, such as making the dimension of the code smaller than that of the input vector . The quality of an auto encoder can be measured via the reconstruction error, which estimates the similarity between input and output. In most cases, the auto encoder is used to obtain a proper representation of the input vector, so the decoder part is removed after unsupervised training. The code can then be employed as input for further deep learning models.", "id": "66cec1fd-1f07-4527-b02b-b98ccb6507c3", "level": "subsection", "origin_cites_number": 1, "parent_id": "c0dca0bc-798e-46e6-903c-5acb8a815d31", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Deep Learning Outline" ], [ "subsection", "Auto Encoder" ] ], "subsections": [], "title": "Auto Encoder" }, { "cite_extract_rate": 0, "cites": [], "content": "Reinforcement Learning (RL) is a Markov Decision Process represented by a quintuple $\\{\\mathcal{S,A,P,R},\\gamma\\}$, where $\\mathcal{S}$ is the state space controlled by the environment; $\\mathcal{A}$ is the action space determined by the agent; $\\mathcal{P}$ is the state transition function measuring the probability of moving to a new state $s_{t+1}$ given the previous state $s_t$ and action $a_t$; $\\mathcal{R}$ is the reward function calculated by the environment considering state and action; $\\gamma$ is a discount factor for estimating the total reward. During the interaction between agent and environment, the agent observes the current state $s_t$ from the environment, and then takes action $a_t$ following its policy $\\pi$. The environment moves to a new state $s_{t+1}$ stochastically based on $\\mathcal{P}(s_t,a_t)$ and returns a reward $r_t$ to the agent. The aim of RL is to find the policy $\\pi$ that maximizes the accumulated reward $\\sum_t\\gamma^tr_t$. In the early stage, RL focused on scenarios whose $\\mathcal{S}$ and $\\mathcal{A}$ are discrete and limited. 
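In that discrete, limited setting the value estimates fit into a simple lookup table updated by the standard Q-learning rule. A minimal runnable sketch on a made-up toy MDP (sizes, rewards and hyperparameters below are illustrative assumptions, not from the survey):

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3          # toy discrete MDP (illustrative sizes)
Q = np.zeros((n_states, n_actions)) # the table recording value estimates
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def step(state, action):
    """Toy environment: action 0 always pays reward 1, next state is random."""
    reward = 1.0 if action == 0 else 0.0
    return int(rng.integers(n_states)), reward

s = 0
for _ in range(5000):
    # epsilon-greedy action selection over the table
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    s_next, r = step(s, a)
    # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    s = s_next

# the table should now rank the rewarded action highest in every state
print(np.argmax(Q, axis=1))
```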
So the agent can use a table to record this information. Recently, some tasks have enormous discrete state and action spaces, such as playing Go, or even continuous values, such as self-driving, which makes table recording impractical. To address this, DRL combines RL and DL, where RL defines the problem and the optimization objective, while DL models the policy and the reward expectation. Depending on the role of the DNN in DRL, we classify DRL into 3 categories as Figure shows.", "id": "cbd77acf-a82e-43b7-9bdd-bef9313a0f73", "level": "subsection", "origin_cites_number": 0, "parent_id": "c0dca0bc-798e-46e6-903c-5acb8a815d31", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Deep Learning Outline" ], [ "subsection", "Deep Reinforcement Learning (DRL)" ] ], "subsections": [ "d2c418b7-118c-42b8-b9d8-37f74087f743", "4e5ada48-bb01-4d53-88dc-d7516f8cb54c", "92185b95-3c92-401c-8a45-161c99f6f228" ], "title": "Deep Reinforcement Learning (DRL)" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1409, 1408 ], "content": "In the value-based method, the DNN does not get involved in the policy decision but estimates the policy performance. Two functions are introduced for this measurement: $V^\\pi(s)$ represents the reward expectation of policy $\\pi$ starting from state $s$; $Q^\\pi(s,a)$ illustrates the reward expectation of policy $\\pi$ starting from state $s$ and taking action $a$. In addition, $V^\\pi(s)$ is the expected value of $Q^\\pi(s,a)$. If we can estimate $Q^\\pi(s,a)$, the policy $\\pi$ can be improved by choosing an action $a^*$ that satisfies $Q^\\pi(s,a^*)\\geq V^\\pi(s)$. So the DNN employed in the agent approximates the function $Q^\\pi(s,a)$, where the inputs are state $s$ and action $a$ and the output is the estimated value $Q^\\pi(s,a)$. 
Representative critic methods include Deep Q-Networks (DQN) and its variants Double DQN , Dueling DQN , etc.", "id": "d2c418b7-118c-42b8-b9d8-37f74087f743", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "cbd77acf-a82e-43b7-9bdd-bef9313a0f73", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Deep Learning Outline" ], [ "subsection", "Deep Reinforcement Learning (DRL)" ], [ "subsubsection", "DNN as Critic (Value-Based)" ] ], "subsections": [], "title": "DNN as Critic (Value-Based)" }, { "cite_extract_rate": 1, "cites": [ 2219, 1391 ], "content": "In the policy-based method, the DNN gets involved in the action selection directly instead of via $Q^\\pi(s,a)$. Finding the policy can be viewed as an optimization problem, where the objective function is to maximize the reward expectation and the search space is the policy space. The input of the DNN is the current state and the output is a probability distribution over potential actions. By employing gradient ascent, we can update the DNN to provide better actions and thus maximize the total reward. Some popular algorithms include Trust Region Policy Optimization (TRPO) , Proximal Policy Optimization (PPO) .", "id": "4e5ada48-bb01-4d53-88dc-d7516f8cb54c", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "cbd77acf-a82e-43b7-9bdd-bef9313a0f73", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Deep Learning Outline" ], [ "subsection", "Deep Reinforcement Learning (DRL)" ], [ "subsubsection", "DNN as Actor (Policy-Based)" ] ], "subsections": [], "title": "DNN as Actor (Policy-Based)" }, { "cite_extract_rate": 0.5, "cites": [ 1390 ], "content": "Generally, compared with the policy-based approach, the value-based method is less stable and suffers from poor convergence since the policy is derived from the $Q^\\pi(s,a)$ approximation. 
However, the value-based method is more sample efficient, while the policy-based method is more likely to fall into a local optimum because the search space is vast. The actor-critic model combines these two approaches, i.e., the agent contains two DNNs, named actor and critic respectively. In each training iteration, the actor considers the current state $s$ and the policy $\\pi$ to decide an action $a$. Then the environment changes to state $s'$ and returns a reward $r$. The critic updates its own parameters based on the feedback from the environment and outputs a mark for the actor's action. The actor updates the policy $\\pi$ according to the critic's mark. Some typical algorithms proposed in recent years include Deep Deterministic Policy Gradient (DDPG) and Asynchronous Advantage Actor-Critic (A3C) .", "id": "92185b95-3c92-401c-8a45-161c99f6f228", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "cbd77acf-a82e-43b7-9bdd-bef9313a0f73", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Deep Learning Outline" ], [ "subsection", "Deep Reinforcement Learning (DRL)" ], [ "subsubsection", "Actor-Critic Model" ] ], "subsections": [], "title": "Actor-Critic Model" }, { "cite_extract_rate": 0.162790697674418, "cites": [ 3721, 7746, 3718, 3719, 3723, 3722, 3720 ], "content": "\\label{sec:dl_caching}\nWe divide the studies regarding deep learning for data caching in edge networks into four categories depending on the DL tools employed: FNN and CNN; RNN; Auto Encoder; DRL. Recently, many works utilize more than one DL technique for jointly considered caching problems. For instance, a work may first apply an RNN to predict content popularity, and then a DRL to find suboptimal content placement solutions for the purpose of reducing time complexity. In such cases, we classify the related work under DRL since it represents the caching allocation policy. 
Unless the caching location (such as CRs, MBSs, FBSs, EDs and BBUs) is mentioned otherwise, the approaches in this section can be utilized for both Layer 1 and Layer 2 caching. Table \\ref{tab:summary} summarizes some studies of DL for caching.\n\\begin{comment}\n\\begin{longtable}[c]{p{.12\\textwidth}|l|p{.2\\textwidth}|p{.23\\textwidth}|p{.26\\textwidth}}\n \\caption{Summary of Deep Learning for Data Caching in Edge Network}\n \\label{tab:summary1}\\\\\n \\hline\n \\textbf{DL method} & \\textbf{Study} & \\textbf{Caching Problem} & \\textbf{Caching Objective} & \\textbf{DL Objective} \\\\\n \\endfirsthead\n \\hline\n \\textbf{DL method} & \\textbf{Study} & \\textbf{Caching Problem} & \\textbf{Caching Objective} & \\textbf{DL Objective} \\\\\n \\endhead\n \\hline\n CNN & & L1 where to cache \\& Content Delivery & minimize caching \\& transmission cost & nominate proper CRs for caching \\\\\n \\hline\n CNN & & L1 where to cache \\& Content Delivery & minimize caching \\& transmission cost & reduce feasible region of CRs for caching \\\\\n \\hline\n CNN\\& FNN & & L2 content delivery & minimize transmission time/energy & reduce feasible region of time slot allocation for delivery \\\\\n \\hline\n FNN & & L2 where to cache \\& content delivery & minimize energy cost & determine proper MBSs for caching \\& time duration for delivery \\\\\n \\hline\n Auto-encoder & & L1 \\& L2 what to cache & improve cache hit ratio & predict content popularity \\\\\n \\hline\n ESN & & L2 what to cache \\& content delivery & minimize transmit power & predict content popularity \\& user mobility \\\\\n \\hline\n DRL & & L2 what to cache \\& content delivery & minimize service latency & decide content caching, computing offloading and radio resource allocation \\\\\n \\hline\n LSTM & & L2 what to cache & improve cache hit ratio & predict content popularity \\\\\n \\hline\n FNN & & L2 what to cache & save traffic data & predict visiting content \\\\\n \\hline\n RNN \\& DRL & & L2 what to cache 
\\& where to cache & minimize service latency & predict popularity \\& decide content caching and offloading \\\\\n \\hline\n FNN & & L1 what to cache & improve cache hit ratio & predict content popularity \\\\\n \\hline\n Auto-encoder \\& RNN & & L2 content delivery & minimize transmission delay & reduce traffic load \\& select optimal BS subset\\\\\n \\hline\n CNN & & L2 what to cache & minimize service latency & predict visiting content \\\\\n \\hline\n ESN \\& LSTM \\& DRL & & L2 where to cache \\& what to cache \\& content delivery & minimize delay \\& power consumption & predict user mobility, content popularity \\& determine D2D link \\\\\n \\hline\n DRL & & L2 where to cache \\& content delivery & minimize content access cost & decide content caching \\& bandwidth allocation \\\\\n \\hline \n DRL & & L2 where to cache \\& what to cache & maximize cache hit ratio \\& minimize transmission delay & decide cache replacement \\\\\n \\hline\n auto-encoder \\& DRL & & L2 content delivery & minimize average delay and power consumption & decide multicast scheduling and caching jointly\\\\\n \\hline\n DRL & & L2 what to cache & maximize traffic load served by FBSs & decide which and how many contents be cached\\\\\n \\hline\n DRL & & L2 what to cache & minimize transmission power & predict users' preference and content popularity \\\\\n \\hline\n DRL & & L2 what to cache & maximize network operator's utility & decide content replacement \\\\\n \\hline\n DRL & & L2 what to cache & minimize service latency & decide cache replacement \\& power allocation \\\\\n \\hline\n RNN & & L2 what to cache & improve cache hit ratio & predict content popularity \\\\\n \\hline\n DRL & & L2 what to cache & maximize overall content quality & decide cache replacement \\\\\n \\hline \n DRL & & L2 what to cache & maximize cache hit ratio & decide caching strategy \\\\\n \\hline\n DRL & & L2 what to cache & maximize energy efficient & decide which content to be cached \\\\\n \\hline\n CNN 
\\& RNN \\& DRL & & L2 what to cache & maximize cost reduction of operators & predict popularity \\& searching best NN model \\\\\n \\hline\n LSTM & & L2 what to cache & maximize cache hit ratio & decide content replacement \\\\\n \\hline\n DRL & & L2 where to cache & maximize QoE & decide content placement \\\\\n \\hline\n DRL & & L2 what to cache & improve cache hit ratio & decide content update \\\\\n \\hline\n DRL & & L2 what to cache & minimize transmitting cost & decide cache replacement \\\\\n \\hline\n auto-encoder & & L2 what to cache & improve backhaul offloading \\& user satisfaction & predict content popularity \\\\\n \\hline\n DRL & & L2 what to cache & minimize cumulative cost & determine caching content \\\\\n \\hline \n DRL & & L2 what to cache & maximize chunk hit ratio \\& minimize service time & determine caching chunks \\\\\n \\hline\n\\end{longtable}\n\\end{comment}\n\\begin{table}[htb]\n\\caption{Summary of Deep Learning for Data Caching}\n\\label{tab:summary}\n\\begin{tabular}{p{.07\\textwidth}|p{.1\\textwidth}|p{.28\\textwidth}|p{.43\\textwidth}}\n \\hline\n \\textbf{method} & \\textbf{Study} & \\textbf{Caching Problem} & \\textbf{DL Objective}\\\\\n \\hline\n \\multirow{8}{.07\\textwidth}{FNN and CNN} & & content delivery & reduce feasible region of time slot allocation\\\\\n \\cline{2-4}\n & & where to cache, content delivery & determine MBSs for caching \\& delivery duration \\\\\n \\cline{2-4}\n & & where to cache, content delivery & nominate proper CRs for caching \\\\\n \\cline{2-4}\n & & where to cache, content delivery & reduce feasible region for caching \\\\\n \\cline{2-4}\n & & what to cache & extract video features\\\\\n \\cline{2-4}\n & & what to cache & predict requested content \\& frequency \\\\\n \\cline{2-4}\n & & what to cache & predict requested content \\\\\n \\cline{2-4}\n & & what to cache & predict content popularity\\\\\n \\hline\n \\multirow{4}{.07\\textwidth}{RNN} & & what to cache & predict requested content \\& 
user mobility \\\\\n \\cline{2-4}\n & & what to cache & predict content popularity \\\\\n \\cline{2-4}\n & & content delivery & reduce traffic load, select optimal BS subset\\\\\n \\hline\n \\multirow{3}{.07\\textwidth}{Auto Encoder} & & what to cache & predict content popularity \\\\\n \\cline{2-4}\n & & what to cache & predict top popular contents \\\\\n \\hline\n \\multirow{17}{.07\\textwidth}{DRL} & & what to cache & decide cache placement \\\\\n \\cline{2-4}\n & & what to cache & decide cache replacement \\& power allocation \\\\\n \\cline{2-4}\n & & what to cache & predict popularity \\& searching best NN model \\\\\n \\cline{2-4}\n & & where to cache & decide cache location \\\\\n \\cline{2-4}\n & & content delivery & users grouping\\\\\n \\cline{2-4}\n & & where to cache, content delivery & decide BS connection, computation offloading \\& caching location\\\\\n \\cline{2-4}\n & & what to cache, content delivery & decide caching \\& bandwidth allocation \\\\\n \\cline{2-4}\n & & what to cache, content delivery & decide caching, computing offloading \\& radio resource allocation \\\\\n \\cline{2-4}\n & & what to cache, content delivery & decide multicast scheduling \\& caching replacement\\\\\n \\cline{2-4}\n & & where \\& what to cache & predict popularity, decide caching \\& task offloading \\\\\n \\cline{2-4}\n & & where \\& what to cache, content delivery & predict user mobility \\& content popularity, determine D2D link\\\\\n \\hline\n\\end{tabular}\n\\end{table}", "id": "3f275de1-b1c2-48e0-ae07-49947668a760", "level": "section", "origin_cites_number": 43, "parent_id": "347fe6bc-3506-4394-85f3-39521077022f", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Deep Learning for Data Caching" ] ], "subsections": [ "64ba4f69-8c92-44ec-9c4d-6de443abc8ba", "8a40aabe-271f-491f-92cb-1da70950927c", "67ee9973-0fe6-40d1-b44e-d9d39d452c5f", "c63a6eb6-9774-4886-b68b-d0543cf7f42c" ], "title": "Deep Learning for 
Data Caching" }, { "cite_extract_rate": 0.2, "cites": [ 3723, 3721 ], "content": "In , the content delivery problem in wireless networks is formulated as two MILP optimization models, with the aims of minimizing the delivery time slots and the energy consumption respectively. Both models consider the data rate for content delivery. Given the computational complexity of solving a MILP, a CNN is introduced to reduce the feasible region of the decision variables, where the input is the channel coefficient matrix. The FNN in paper plays a similar role, simplifying the search space of the content delivery optimization model. \nFor the resource allocation problem, the authors of model it as linear sum assignment problems and then utilize a CNN and an FNN to solve the model. The idea is extended in and , where the authors consider the where to cache problem among potential CRs and content delivery jointly, modeled as a MILP with the aim of balancing caching and transmission cost by considering user mobility, space utilization and bandwidth limitations. The cache allocation is viewed as a multi-label classification problem and is decomposed into several independent sub-problems, where each one correlates with a CNN to predict the assignment. The input of the CNN is a grey-scale image which combines the information of user mobility, space and link utilization levels. In , a hill climbing local search algorithm is provided to improve the performance of the CNN, while in , the prediction of the CNN is used to feed a smaller MILP model. \nFor the above works , the FNN or CNN input is extracted from the optimization model. The work in trains a CNN on the original graph instead of a parameter matrix/image, which makes the process human-recognizable and interpretable. Though the authors take the traveling salesman problem rather than data caching as an example, the method can be viewed as a potential research direction. 
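The feasible-region reduction idea above can be sketched generically: a learned model scores each binary placement variable, and only the uncertain variables are left to the exact solver (the thresholds, scores and function name below are illustrative assumptions, not taken from the cited works):

```python
import numpy as np

def prune_feasible_region(scores, lo=0.1, hi=0.9):
    """Fix binary decision variables whose predicted probability is
    confidently 0 or 1; return the rest for the exact solver."""
    fixed, free = {}, []
    for i, p in enumerate(scores):
        if p >= hi:
            fixed[i] = 1          # confidently cache / allocate
        elif p <= lo:
            fixed[i] = 0          # confidently skip
        else:
            free.append(i)        # still undecided -> smaller MILP to solve
    return fixed, free

# scores as a CNN/FNN might emit for 8 candidate placements (made up)
scores = np.array([0.95, 0.02, 0.40, 0.92, 0.97, 0.08, 0.55, 0.05])
fixed, free = prune_feasible_region(scores)
print(len(fixed), free)   # 6 variables fixed; [2, 6] left for the solver
```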
\nIn , an ILP model is proposed to minimize the backhaul load of video data by determining the portion of content cached in BSs. Since the mobile users covered by a BS change frequently, predicting individual user preference is unnecessary. Instead, the authors concentrate on the popular content in general. First, a 3D CNN is introduced to extract spatio-temporal features of videos. The popularity of new content without historical information is determined by comparing similar video features. The authors of also consider the spatio-temporal features among visited contents in a mobile bus WiFi environment. By exploiting data collected over the previous 9 days, the content that a user may visit on the last day and the corresponding visiting frequency can be forecast. The social property is taken into account in . By observing users' interests in tweets during the 2016 U.S. election, a CNN-based prediction model can foresee the content category that is most likely to be requested. Such content would be cached in MBSs and FBSs.\nThe work of examines the role of the DNN in caching from another angle. The authors propose an FNN to predict content popularity as a regression problem. The results show that the FNN outperforms an RNN, though the latter is believed to be effective for sequential predictions. Moreover, replacing the FNN with a linear estimator does not degrade the performance significantly. 
The authors explain that an FNN would work better than a linear predictor in the case of incomplete information, and that an RNN is more advantageous when modeling popularity prediction as a classification rather than a regression problem.", "id": "64ba4f69-8c92-44ec-9c4d-6de443abc8ba", "level": "subsection", "origin_cites_number": 10, "parent_id": "3f275de1-b1c2-48e0-ae07-49947668a760", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Deep Learning for Data Caching" ], [ "subsection", "FNN and CNN" ] ], "subsections": [], "title": "FNN and CNN" }, { "cite_extract_rate": 0.11111111111111101, "cites": [ 3720 ], "content": "Considering that RNNs are superior in dealing with sequential tasks, the work applies a bidirectional RNN for online content popularity prediction in mobile edge networks. A simple RNN's output depends on previous and current information, but a bidirectional RNN can also take future information into account. The forecast model consists of three cascaded blocks: a CNN reads user requests and extracts features; a bidirectional LSTM learns associations of requests over time steps; an FNN is added at the end to improve the prediction performance. Content eviction is then based on the popularity prediction. \nThe authors in utilize an ESN to predict both the content request distribution and end-user mobility patterns. The user's preference is viewed as a context linked with personal information such as gender, age, job, location, etc. For the request prediction, the input of the ESN is the user's information vector and the output represents the probability distribution over contents. For mobility prediction, the input includes the user's historical and present locations and the output is the expected position for the next time duration. Eventually, the predictions influence the content caching decisions in BBUs and RRHs for the purpose of minimizing traffic load and delay in CRAN. 
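A minimal echo-state network of the kind used here can be sketched in a few lines: the reservoir weights are random and fixed, and only a linear readout is trained (all sizes, the toy sine-wave task and the ridge regularizer are illustrative assumptions, not the cited paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res = 1, 50   # one input signal, 50 reservoir neurons (toy sizes)

# fixed random weights: the reservoir itself is never trained (the "echo")
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
W_res *= 0.9 / np.abs(np.linalg.eigvals(W_res)).max()   # spectral radius < 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# toy task: predict the next value of a sine wave
u = np.sin(np.linspace(0, 20, 300))
X, y = run_reservoir(u[:-1]), u[1:]

# only the linear readout is trained, via ridge regression
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
mse = float(np.mean((X @ W_out - y) ** 2))
print(mse)   # small training error expected
```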
The authors extend their work in by introducing a conceptor-based ESN which can split users' contexts into different patterns and learn them independently, thereby achieving a more accurate prediction. \nIn , a caching decision policy named PA-Cache is proposed to predict time-variant video popularity for cache eviction when the space is full. The temporal content popularity is exploited by attaching every hidden layer representation of the RNN to an output regression. To improve the accuracy, hedge backpropagation is introduced during the training process, which decides when and how to adapt the depth of the DNN in an evolving manner. Similarly, the work in also considers cache replacement of video content. A deep LSTM network is utilized for popularity prediction, consisting of multiple stacked LSTM layers and one softmax layer, where the input of the network is request sequence data (device, timestamp, location, title of video) without any preprocessing and the output is the estimated content popularity. Another work concentrating on the prediction of, and interactions between, user mobility and content popularity can be found in .\nThe work of recognizes popularity prediction as a seq2seq modeling problem and proposes an LSTM encoder-decoder model. The input consists of past probabilities, where each vector is calculated over a predefined time window. In , the authors focus on cached content delivery with the aim of minimizing the number of BSs needed to cover all requesting users, i.e., a set cover problem, via coded caching. Unlike , an auto encoder is introduced in the coded caching stage for file conversion to reduce the transmission load. In addition, an RNN model is employed to select BSs for broadcasting. \nThe paper shows the potential of RNNs in solving the where to cache problem. In , a task allocation model is formulated as a knapsack problem and the decision variables represent whether a task is processed locally on mobile devices (MDs) or remotely on edge servers (ESs). 
The authors design a multi-pointer network structure of 3 RNNs, where 2 encoders encode MDs and ESs respectively, and 1 decoder outputs the ES-MD pairing. Considering the similarity between the where to cache optimization model and the knapsack problem, the multi-pointer network can be transferred to the caching location decision after corresponding parameter modifications.", "id": "8a40aabe-271f-491f-92cb-1da70950927c", "level": "subsection", "origin_cites_number": 9, "parent_id": "3f275de1-b1c2-48e0-ae07-49947668a760", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Deep Learning for Data Caching" ], [ "subsection", "RNN" ] ], "subsections": [], "title": "RNN" }, { "cite_extract_rate": 0, "cites": [], "content": "Generally, an auto encoder is utilized to learn efficient representations or extract features of raw data in an unsupervised manner. The work in considers cache replacement in a wireless sensor network (WSN) based on content popularity. Since a sparse auto encoder (SAE) can extract a representative expression of the input data, the authors employ an SAE followed by a classifier, where the input contains collected user content requests and the output represents the content popularity level. The authors also consider a distributed implementation via SDN/NFV techniques, i.e., the input layer is deployed on the sink node, while the remaining layers are implemented on the main controller. A related work applying an auto encoder to proactive caching in 5G networks can be found in . In , two auto encoders are utilized for extracting the features of users and content respectively. The extracted information is then exploited to estimate popularity at the core network. 
Similarly, the auto encoder in is for spatio-temporal popularity feature extraction, and auto encoders work collaboratively in to predict the top K popular videos.", "id": "67ee9973-0fe6-40d1-b44e-d9d39d452c5f", "level": "subsection", "origin_cites_number": 5, "parent_id": "3f275de1-b1c2-48e0-ae07-49947668a760", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Deep Learning for Data Caching" ], [ "subsection", "Auto Encoder" ] ], "subsections": [], "title": "Auto Encoder" }, { "cite_extract_rate": 0.23809523809523803, "cites": [ 7746, 3718, 3719, 3724, 3722 ], "content": "The work in focuses on the cooperative caching policy at FBSs with maximum distance separable coding in ultra-dense networks. A value-based model is utilized to determine the caching categories and the content quantity at FBSs during off-peak durations. The authors in study the problem of caching 360\\degree videos and virtual viewports in FBSs with unknown content popularity. The virtual viewport represents the most popular tiles of a 360\\degree video over the user population. A DQN is introduced to decide which tiles of a video are to be hosted and in which quality. Additionally, employs a DQN for content eviction decisions that offer a satisfactory quality of experience, and aims at minimizing energy consumption. In , the authors also apply a DQN to decide cache eviction at a single BS. Moreover, the critic is built by stacking an LSTM and an FNN to evaluate the Q value, and an external memory is added for recording learned knowledge. To improve the prediction accuracy, the Q value update is determined by the similarity between the critic's estimated value and the information recorded in the external memory, instead of being dominated by the critic. The paper applies DQN to two-level network caching, where a parent node linked with multiple leaf nodes caches content instead of a single BS. 
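The targets driving such DQN-style cache controllers can be sketched as follows; a random linear network stands in for the trained Q-networks, and the comparison shows how the Double DQN target tempers the vanilla target's overestimation (everything below is an illustrative assumption, not any cited paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
n_actions, n_feat = 4, 6   # e.g. which cached item to evict (illustrative)

def q_values(theta, state):
    """Stand-in linear Q-network: one weight row per action."""
    return theta @ state

theta_online = rng.normal(size=(n_actions, n_feat))   # untrained stand-ins
theta_target = rng.normal(size=(n_actions, n_feat))
s_next, r, gamma = rng.normal(size=n_feat), 1.0, 0.9

# vanilla DQN target: max over the target network's own estimates
y_dqn = r + gamma * np.max(q_values(theta_target, s_next))

# Double DQN target: the online net selects the action, the target net
# evaluates it, mitigating overestimation
a_star = int(np.argmax(q_values(theta_online, s_next)))
y_double = r + gamma * q_values(theta_target, s_next)[a_star]

# by construction a max is never below any single entry, so y_dqn >= y_double
```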
In , a DRL framework with the Wolpertinger architecture is presented for content caching at BSs. The Wolpertinger architecture is based on the actor-critic model and performs efficiently in large discrete action spaces. employs two FNNs working as actor and critic respectively, where the former determines whether requested content is cached or not and the latter estimates the reward. The whole framework consists of two phases: in the offline phase, the two FNNs are trained via supervised learning; in the online phase, the critic and actor update via interaction with the environment. The authors extend their work to a multi-agent actor-critic model for decentralized cooperative caching at multiple BSs . In , an actor-critic model is used to solve the cache replacement problem, balancing data freshness and communication cost. The aforementioned papers focus on network performance while ignoring the influence of caching on information processing and resource consumption. Therefore, the authors of design a cache policy considering both network performance during content transmission and processing efficiency during data consumption. A DQN is employed to determine the number of chunks of the requested file to be updated. The paper investigates a joint cache replacement and power allocation optimization problem to minimize latency in a downlink F-RAN. A DQN is proposed for finding a suboptimal solution. Though is regarded as solving the what to cache problem like , the reinforcement learning approach plays a different role. In , a DNN is utilized for content popularity prediction and an RL is then used for DNN hyperparameter tuning instead of determining caching content. Therefore the action space consists of choosing model architectures (e.g. 
CNN, LSTM, etc.), the number of layers and the layer configurations.\nIn , the authors build an optimization model with the aim of maximizing the network operator's utility in mobile social networks under the framework of mobile edge computing, in-network caching and D2D communications (3C). The trust value, which is estimated through social relationships among users, is also considered. A DQN model is then utilized to solve the optimization problem, including determining the video provider and subscriber association, video transcoding offloading and the video cache allocation for video providers. The DQN employs two CNNs for the training process, where one generates the target Q value and the other the estimated Q value. Unlike the conventional DQN, the authors in introduce a dueling structure, i.e., the Q value is not computed in the final fully connected layer, but is decomposed into two components whose sum serves as the estimated Q value, which helps achieve a more robust result. The authors also consider utilizing the dueling DQN model in different scenarios like cache-enabled opportunistic interference alignment and orchestrating 3C in vehicular networks . The work provides a DDPG model to cope with continuous-valued control decisions for 3C in vehicular edge networks, combining the ideas of DQN and the actor-critic model. The DDPG structure can be divided into two parts as in DQN, one for the estimated Q value and the other for the target Q value. Each part consists of two DNNs, which play the roles of actor and critic respectively. The critic updates its parameters as in DQN while the actor learns the policy via the deterministic policy gradient approach. The proposed DRL is used for deciding content caching/replacement, vehicle organization and bandwidth resource assignment over different durations. \nThe paper provides an optimization model which takes what to cache and content delivery into consideration in the fog-enabled IoT network in order to minimize service latency. 
Since the wireless signals and user requests are stochastic, an actor-critic model is employed where the actor makes decisions for the requested contents while the critic estimates the reward. Specifically, the action space consists of the decision variables and the reward function is a variant of the objective function. A cache replacement strategy and a dynamic multicast scheduling strategy are studied in . To obtain a suboptimal result, an auto encoder is used to approximate the state. Further, a weighted double DQN scheme is utilized to avoid overestimation of the Q value. applies an RNN to predict content popularity by collecting historical requests, and the output represents the popularity in the near future. The prediction is then employed for cooperative caching and computation offloading among MEC servers, which is modelled as an ILP problem. To solve it efficiently, a multi-agent DQN is applied where each user is viewed as an agent. The action space consists of the task local computing and offloading decisions as well as the local caching and cooperative caching determination. The reward is measured by accumulated latency. Each agent chooses its own action based on the current state without cooperation. The where to cache, what to cache and content delivery decisions of a D2D network are jointly modelled in . Two RNNs, an ESN and an LSTM, are considered to predict mobile users' locations and the requested content popularity. The prediction result is then used for determining content categories and cache locations. The content delivery is formulated within an actor-critic based DRL framework. The state space includes the CSI, transmission distances and communication power between the requesting user and other available candidates. 
The function of the DRL is to determine the communication links among users with the aim of minimizing power consumption and content delay.\nWe notice that most papers prefer the value-based (critic) and value-policy-based (actor-critic) models in the DRL framework, but few papers consider a purely policy-based model to solve the data caching problem. One plausible reason is that the search space of the caching problem is enormous, so a policy-based model is more likely to fall into a local optimum, resulting in poor performance. Though the value-based model is less stable, some variant structures are utilized, like Double DQN in to avoid value overestimation and dueling DQN in to improve robustness.", "id": "c63a6eb6-9774-4886-b68b-d0543cf7f42c", "level": "subsection", "origin_cites_number": 21, "parent_id": "3f275de1-b1c2-48e0-ae07-49947668a760", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Deep Learning for Data Caching" ], [ "subsection", "DRL" ] ], "subsections": [], "title": "DRL" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:challenges}\nA series of open issues on content caching and potential research directions are discussed in this section. We first extend the idea of content caching to virtual network function chains, since caching can be viewed as a specific network function. Then we consider caching for augmented reality applications. Moreover, we notice that cache dimensioning has not yet been covered by DL methods. 
Finally, we discuss the additional cost introduced by DL.", "id": "bcb206fa-772b-4e9f-9da9-3ca19215514a", "level": "section", "origin_cites_number": 0, "parent_id": "347fe6bc-3506-4394-85f3-39521077022f", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Research Challenges and Future Directions" ] ], "subsections": [ "e584c09a-75d8-4201-90c6-a0fd658b4342", "f03f2b46-f5ed-47c7-b95e-abf7079d9dd7", "76a7e5ab-c389-46f2-914c-27d36e617810", "bf719ec3-fb89-4a44-9c42-480d45208596" ], "title": "Research Challenges and Future Directions" }, { "cite_extract_rate": 0, "cites": [], "content": "The concept of Network Function Virtualization (NFV) was first discussed and proposed within the realms of the European Telecommunication Standardization Institute (ETSI)\\footnote{Network Functions Virtualisation, An Introduction, Benefits, Enablers, Challenges and Call for Action, ETSI, 2012 https://portal.etsi.org/NFV/NFV\\_White\\_Paper.pdf}. The rationale is to facilitate the dynamic provisioning of network services through virtualization technologies to decouple the service creation process from the underlying hardware. The framework allows network services to be implemented by a specific chaining and ordering of a set of functions, which can be implemented either on traditional dedicated hardware, in which case they are called Physical Network Functions (PNFs), or alternatively as Virtual Network Functions (VNFs), i.e., software running on top of virtualized general-purpose hardware. The decoupling between the hardware and the software is one important consideration; the other, equally important, is that a virtualized service lends itself naturally to dynamic, programmable service creation where VNF resources can be deployed as required.
Hence, edge cloud and network resource usage can be adapted to the instantaneous user demand whilst avoiding more static, over-provisioned configurations.\nWithin that framework, the incoming network service requests include the specification of the service function chain that needs to be created, in the form of an ordered sequence of VNFs. For example, different types of VNFs, such as a firewall or a NAT mechanism, need to be visited in a specific order. In such a constructed service chain, each independent VNF requires specific underlying resources in terms of, for example, CPU cycles and/or memory. \nUnder this framework, caching of popular content can be considered as a specialized VNF chain function, since delivery of the cached popular content to users will inevitably require a set of other supporting functions related to security, optimization of the content, etc. However, the issues of data caching and VNF chaining have evolved rather independently in the literature, and how to optimize data caching when viewing it as part of a VNF chain is still an interesting open-ended issue.", "id": "e584c09a-75d8-4201-90c6-a0fd658b4342", "level": "subsection", "origin_cites_number": 0, "parent_id": "bcb206fa-772b-4e9f-9da9-3ca19215514a", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Research Challenges and Future Directions" ], [ "subsection", "Caching as a Virtual Network Function Chain" ] ], "subsections": [], "title": "Caching as a Virtual Network Function Chain" }, { "cite_extract_rate": 0, "cites": [], "content": "Mobile augmented reality (MAR) applications can be considered as a way to augment the physical real-world environment\nwith artificial computer-generated information, and this is an area that has received significant research attention recently.
In order to successfully superimpose different digital objects on the physical world, MAR applications include several computationally and storage-intensive components such as image recognition, mobile camera calibration, and the use of advanced 2D and 3D graphics rendering. These functionalities are highly computationally intensive and as such require support from an edge cloud; in addition, the virtual objects to be embedded in the physical world are expected to be proactively cached closer to the end user so that latency is minimized. Ultra-low latency in these types of applications is of paramount importance in order to provide a photorealistic embedding of virtual objects in the video view of the end user. However, since computational and augmented reality objects need to be readily available, the caching of those objects should be considered in conjunction with the computational capabilities of the edge cloud. In addition, when MAR is considered under the lens of an NFV environment, the application might inherently require access to some VNFs, and therefore the above discussion on VNF chaining is also valid for MAR applications. \nRecently, the concept of the Digital Twin (DT) has received significant research attention due to the plethora of applications ranging from industrial manufacturing and health to smart cities. In a nutshell, a DT can be defined as an accurate digital replica of a real-world object across multiple granularity levels; this real-world object could be a machine, a robot, an industrial process or a (sub)system. Reflecting the physical status of the system under consideration in a virtual space opens up a plethora of optimization, prediction, fault-tolerance and automation processes that cannot be realized using the physical object alone. At the core of DT applications is the requirement of stringent two-way real-time communication between the digital replica and the physical object.
This requirement inevitably requires support from edge clouds to minimize latency, together with efficient storage and computational resources, including caching. In that setting, the aforementioned deep learning technologies will have a key role to play in providing high-quality real-time decision making to avoid misalignment between the digital replica and the physical object under consideration. Efficient machine-to-DT connectivity would require capabilities similar to the above-mentioned augmented reality applications, but due to the continuous real-time control-loop operation, DTs will require a completely new set of network optimization capabilities, and on that frontier efficient caching and data-driven techniques will have a central role to play. Hence, as the research regarding the interplay between low-latency communications and DTs is still at an embryonic stage, there is significant scope for the investigation of suitable data-driven deep learning techniques for the distributed allocation of caching and computing resources.", "id": "f03f2b46-f5ed-47c7-b95e-abf7079d9dd7", "level": "subsection", "origin_cites_number": 2, "parent_id": "bcb206fa-772b-4e9f-9da9-3ca19215514a", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Research Challenges and Future Directions" ], [ "subsection", "Caching for Mobile Augmented Reality (MAR) applications and Digital Twins (DTs)" ] ], "subsections": [], "title": "Caching for Mobile Augmented Reality (MAR) applications and Digital Twins (DTs)" }, { "cite_extract_rate": 0, "cites": [], "content": "As introduced in Section \\ref{sec:caching}, cache dimensioning explores the appropriate cache size allocation for content hosts such as CRs and BSs. Disappointingly, few papers apply DL to cache dimensioning decisions.
One likely reason is the lack of a training data set, in contrast to content popularity prediction, where historical user request logs are available to train a DNN. In addition, cache size allocation affects both network performance and economic investment. Recently, network slicing has been identified as an important tool enabling 5G to provide multiple services with diverse characteristics. A slice is established on physical infrastructure, including network storage. Therefore, it is a very interesting topic to consider the allocation of memory space to support content caching and other storage services in a way that guarantees QoE and satisfies task requirements. Furthermore, for cases lacking a training data set, DRL can be viewed as a promising technology to configure slicing settings as well as cache dimensioning. The action space can be designed either as discrete, by setting storage levels, or as continuous, by allocating the memory space directly. However, there remains a need to design a caching-enabled network slicing model, especially for dynamic allocation, as well as the associated DRL framework, including the state space, detailed action space, reward function and agent structure.", "id": "76a7e5ab-c389-46f2-914c-27d36e617810", "level": "subsection", "origin_cites_number": 0, "parent_id": "bcb206fa-772b-4e9f-9da9-3ca19215514a", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Research Challenges and Future Directions" ], [ "subsection", "Deep Learning for Cache Dimensioning" ] ], "subsections": [], "title": "Deep Learning for Cache Dimensioning" }, { "cite_extract_rate": 0, "cites": [], "content": "Although the application of DL brings performance gains for caching policies, the additional cost introduced by DL cannot be neglected, since training and deploying a DL model consumes not only network resources but also time.
Naturally, there is a trade-off between the cost a DL-assisted caching policy saves and the consumption required to run DL itself, which means that employing DL results in either a profit, a loss, or break-even. Therefore, where and when to apply DL should be carefully investigated. In addition, for the purpose of reducing resource consumption and accelerating the training process, knowledge transfer methods such as transfer learning can be utilized, which transfer the knowledge already learnt in a source domain to a relevant target domain.", "id": "bf719ec3-fb89-4a44-9c42-480d45208596", "level": "subsection", "origin_cites_number": 1, "parent_id": "bcb206fa-772b-4e9f-9da9-3ca19215514a", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Research Challenges and Future Directions" ], [ "subsection", "The Cost of Deep Learning" ] ], "subsections": [], "title": "The Cost of Deep Learning" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:conclusions}\nThis article presents a comprehensive study of the application of deep learning methods in the area of content caching. In particular, data caching is divided into two categories according to the caching location in the edge network. Each category covers where to cache, what to cache, cache dimensioning and content delivery. We then introduce typical DNN methods, which are categorised by training process into supervised learning, unsupervised learning and RL. Further, this paper critically compares and analyzes state-of-the-art papers on parameters such as the DL methods employed, the caching problems solved and the objective of applying DL. The challenges and research directions of DL for caching are also examined on the topics of extending caching to VNF chains, the application of caching for MAR as well as DTs, DL for cache size allocation, and the additional cost of employing DL.
Undoubtedly, DL is playing a significant role in 5G and beyond. We hope this paper will stimulate discussion of and interest in DL for caching policy design and relevant applications, which will advance future network communications. \n\\vspace{6pt} \n\\bibliographystyle{unsrt}\n\\bibliography{reference}\n\\end{document}", "id": "9f55687b-d2fc-4fa2-a01d-e44198b97b4c", "level": "section", "origin_cites_number": 0, "parent_id": "347fe6bc-3506-4394-85f3-39521077022f", "prefix_titles": [ [ "title", "A Survey of Deep Learning for Data Caching in Edge Network" ], [ "section", "Conclusions" ] ], "subsections": [], "title": "Conclusions" } ]
73
[ 3705, 3582, 3704, 3706, 3420, 7633, 3703, 3709, 3707, 3710, 3708, 3712, 3716, 8682, 3714, 3717, 3713, 8681, 3715, 3711, 166, 810, 167, 1409, 1408, 2219, 1391, 1390, 3721, 7746, 3718, 3719, 3723, 3722, 3720, 3724 ]
1.55362
[ "Dominik Sisejkovic", "Lennart M. Reimann", "Elmira Moussavi", "Farhad Merchant", "Rainer Leupers" ]
Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities
2021
2021-07-05T10:18:26Z
cs.CR
In the past decade, a lot of progress has been made in the design and evaluation of logic locking, a premier technique to safeguard the integrity of integrated circuits throughout the electronics supply chain. However, the widespread proliferation of machine learning has recently introduced a new pathway to evaluating logic locking schemes. This paper summarizes the recent developments in logic locking attacks and countermeasures at the frontiers of contemporary machine learning models. Based on the presented work, the key takeaways, opportunities, and challenges are highlighted to offer recommendations for the design of next-generation logic locking.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "34e673cd-0e21-4b9c-bcb0-f8f8f8c633ee", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ] ], "subsections": [ "4af06642-e7a5-4612-b225-f1cae2b32e2a", "75631a2a-145f-4b09-90ff-408cabadfd55", "cb342b15-09e2-41e4-9c1f-0d7558736604", "4e814a9b-2e2f-4194-94e3-5e0b2adbe038" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "The involvement of untrusted parties in the modern Integrated Circuit (IC) supply chain has given rise to a plethora of security concerns, including Intellectual Property (IP) piracy, reverse engineering, counterfeiting, and hardware Trojans~. Consequently, a variety of countermeasures have been introduced, including IC metering~, split manufacturing~, camouflaging~, and logic locking~. Among these, only logic locking can protect a design against all untrusted parties in the supply chain~.", "id": "4af06642-e7a5-4612-b225-f1cae2b32e2a", "level": "section", "origin_cites_number": 7, "parent_id": "34e673cd-0e21-4b9c-bcb0-f8f8f8c633ee", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Introduction" ] ], "subsections": [ "a08af25e-5d31-4208-aa5a-4800589c0307", "54ad821b-9b66-4cd9-b802-b507cf6fc674" ], "title": "Introduction" }, { "cite_extract_rate": 0.1, "cites": [ 8606 ], "content": "Logic locking performs design manipulations by binding the correct functionality of a hardware design to a secret key that is only known to the legitimate IP owner. Hereby, both the original functionality and the structure of the design remain concealed while passing through the hands of external design houses and the foundry.
In the past decade, various security aspects of logic locking have been thoroughly evaluated through the introduction of key-recovery attacks~, among which the Boolean satisfiability (SAT) attack has gained a lot of attention~. This has led to a division of logic locking into \\textit{pre- and post-SAT schemes}. Pre-SAT schemes focused on specific security features, such as random XOR/XNOR key-gate insertion~, thwarting the path-sensitization attack~ or maximizing output corruption for incorrect keys~. With the introduction of SAT-based attacks, the design objective has shifted towards achieving SAT-resilience, resulting in a new generation of schemes, including SARLock~, Anti-SAT~, CASLock~, SFLL~, and others~.", "id": "a08af25e-5d31-4208-aa5a-4800589c0307", "level": "subsection", "origin_cites_number": 10, "parent_id": "4af06642-e7a5-4612-b225-f1cae2b32e2a", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Introduction" ], [ "subsection", "Logic Locking: A Brief Overview" ] ], "subsections": [], "title": "Logic Locking: A Brief Overview" }, { "cite_extract_rate": 0.125, "cites": [ 8606, 7636 ], "content": "With the advent of efficient and easy-to-use Machine Learning (ML) models, ML-based techniques have been gradually introduced into various hardware-security domains~. The latest efforts in the logic locking community have been invested in challenging the security properties of locking schemes using ML. Recent works were able to efficiently attack pre- and post-SAT schemes~. {The introduction of ML-based tools for the security analysis of logic locking \\textit{has opened up a new chapter} in the design of locking schemes and attacks, thereby marking the start of the \\textit{post-ML} locking-scheme era.
Herewith, the ML ecosystem offers a novel path to uncover hidden vulnerabilities and provide new directions in the development of future ML-resilient locking schemes.\n\t\t\\textbf{Contributions}\n\t\tThe ML era has undoubtedly initiated a new stage in logic locking design and evaluation. In this paper, we review all major developments in the domain of \\textit{ML-based attacks and countermeasures in logic locking}, and analyze major challenges and research opportunities. Note that a comprehensive overview of the state of pre-ML schemes and attacks can be found in~.\n\t\tThe rest of this paper is organized as follows. Section~\\ref{sec:background} introduces the relevant background on logic locking. Section~\\ref{sec:ll-in-ml-era} reviews the major developments in ML-based logic locking attacks and compiles a summary of the open challenges and opportunities. Finally, Section~\\ref{sec:conclusion} concludes the paper.", "id": "54ad821b-9b66-4cd9-b802-b507cf6fc674", "level": "subsection", "origin_cites_number": 16, "parent_id": "4af06642-e7a5-4612-b225-f1cae2b32e2a", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Introduction" ], [ "subsection", "The Advent of Machine Learning" ] ], "subsections": [], "title": "The Advent of Machine Learning" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:background}\n\t\tThis section introduces the preliminaries on classification, working principles and attack models of logic locking.", "id": "75631a2a-145f-4b09-90ff-408cabadfd55", "level": "section", "origin_cites_number": 0, "parent_id": "34e673cd-0e21-4b9c-bcb0-f8f8f8c633ee", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Background" ] ], "subsections": [ "5c43bb1f-f9b2-45b3-995b-7a7bcddebadf", "12a9727d-dd75-4b28-bfdf-0fa62287f301", 
"0272a9b9-bb42-4a7b-b2d3-dbebdd61d188", "156b835f-9d2c-44be-96fe-edb4453da9fb" ], "title": "Background" }, { "cite_extract_rate": 0, "cites": [], "content": "Logic locking can be generally classified into two orthogonal classes: \\textit{combinational} and \\textit{sequential}~. Combinational logic locking performs key-dependent manipulations in the combinational path of a design. On the other hand, sequential logic locking focuses on transforming and obfuscating the state space of a circuit. As the reviewed work operates in the domain of combinational locking, in the rest of this work, the term \\textit{logic locking} refers to combinational locking schemes.", "id": "5c43bb1f-f9b2-45b3-995b-7a7bcddebadf", "level": "subsection", "origin_cites_number": 1, "parent_id": "75631a2a-145f-4b09-90ff-408cabadfd55", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Background" ], [ "subsection", "Classification" ] ], "subsections": [], "title": "Classification" }, { "cite_extract_rate": 0, "cites": [], "content": "The idea of logic locking lies in the functional and structural manipulation of a hardware design that creates a dependency to an activation key, hereby trading area, power, and delay for security. If the correct key is provided, the locked design will perform as originally intended \\textit{for all input patterns}. Otherwise, an incorrect key will yield an incorrect output for at least \\textit{some input patterns}. Logic locking can be performed on different design levels. However, typically logic locking is deployed on a gate-level netlist through the insertion of additional gates (known as \\textit{key gates}) or more complex structures. A visual example of a locked design is shown in Fig.~\\ref{fig:enc-example}~(b). 
Here, the original netlist in Fig.~\\ref{fig:enc-example}~(a) is locked through the insertion of two key-controlled gates in the form of an XOR and XNOR (XOR + INV) gate, marked as $KG_{1}$ and $KG_{2}$, respectively. To understand the functional implications of the key gates, let us consider the gate $KG_{1}$. This key gate takes two inputs: the original wire $x$ (output of gate $G_{1}$) and the input key bit $k_{1}$. If a correct key value is set, i.e., $k_{1}=0$, the value of $x$ is preserved and forwarded to $x'$. However, if an incorrect key value is set, i.e., $k_{1}=1$, the value of $x$ is inverted, leading to incorrect output values. Based on this concept, throughout the past decade, a variety of locking schemes have been introduced, based on XOR, XNOR, AND, OR, and MUX gates as well as more elaborate structures~. \n\t\t\\begin{figure}[t]\n\t\t\t\\centering\n\t\t\t\\subfloat[Original Circuit]{\n\t\t\t\t\\includegraphics[width=0.35\\columnwidth]{epic_example.eps}\n\t\t\t}\n\t\t\t\\subfloat[Locked Circuit]{\n\t\t\t\t\\includegraphics[width=0.4\\columnwidth]{epic_example_locked.eps}\n\t\t\t}\n\t\t\t\\caption{Example: logic locking using XOR/XNOR gates.}\n\t\t\t\\label{fig:enc-example}\n\t\t\t\\vspace{-0.1in}\n\t\t\\end{figure}", "id": "12a9727d-dd75-4b28-bfdf-0fa62287f301", "level": "subsection", "origin_cites_number": 1, "parent_id": "75631a2a-145f-4b09-90ff-408cabadfd55", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Background" ], [ "subsection", "Working Principles" ] ], "subsections": [], "title": "Working Principles" }, { "cite_extract_rate": 0, "cites": [], "content": "The role of logic locking in the IC supply chain is demonstrated in Fig.~\\ref{fig:supplychain}. Based on a trusted Register-Transfer Level (RTL) design, the legitimate IP owner performs logic synthesis to generate a gate-level netlist. 
At this point, logic locking is deployed, resulting in a locked netlist and an activation key. Typically, after the netlist is locked, another synthesis round is performed to facilitate the structural integration of scheme-induced netlist changes. Therefore, we can differentiate between the pre-resynthesis and post-resynthesis netlist. The former is locked but not resynthesized, while the latter is locked and resynthesized. In the next step, the locked netlist proceeds into the untrusted part of the supply chain. This often includes an untrusted external design house (for layout synthesis) and the foundry. After fabrication, the produced IC is returned to the IP owner for activation. Herewith, logic locking protects a design by concealing its functional and structural secrets in the activation key, thereby bridging the untrusted regime gap. \n\t\tIn terms of hardware Trojans, it is assumed that a sound understanding of the design's functionality and structure is required to insert an intelligible, controllable and design-specific Trojan (e.g., a targeted denial-of-service attack). However, functionality-independent Trojans remain viable. These include, e.g., the manipulation of the circuit's physical characteristics, leading to performance or reliability degradation. In the former case, finding the activation key is a prerequisite for successfully performing the reverse-engineering process.
\n\t\t\\begin{figure}[!t]\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\columnwidth]{flow4.eps}\n\t\t\t\\caption{Logic locking in the IC design and fabrication flow.}\n\t\t\t\\label{fig:supplychain}\n\t\t\t\\vspace{-0.15in}\n\t\t\\end{figure}\n\t\t\\begin{figure*}[t]\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\textwidth]{all_attacks.eps}\n\t\t\t\\caption{ML-based attacks on logic locking (deployment phase).}\n\t\t\t\\label{fig:attack-visualization}\n\t\t\t\\vspace{-0.1in}\n\t\t\\end{figure*}", "id": "0272a9b9-bb42-4a7b-b2d3-dbebdd61d188", "level": "subsection", "origin_cites_number": 0, "parent_id": "75631a2a-145f-4b09-90ff-408cabadfd55", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Background" ], [ "subsection", "Logic Locking in the IC Supply Chain" ] ], "subsections": [], "title": "Logic Locking in the IC Supply Chain" }, { "cite_extract_rate": 0, "cites": [], "content": "The attack model includes the following: ($i$) the attacker has access to the locked netlist, either as an untrusted design house or by reverse engineering the layout, ($ii$) the location of the key inputs (pins) is known, ($iii$) the deployed locking scheme is known, and ($iv$) the attacker has access to an activated IC to use as oracle for retrieving golden Input/Output (I/O) patterns.\n\t\tThe fourth assumption is the differentiating factor that classifies all key-retrieval attacks into \\textit{oracle-less} and \\textit{oracle-guided} attacks. Oracle-less attacks assume that an activated design is not available. This is often the case in low-volume production for security-critical applications~. On the contrary, the oracle-guided scenario assumes the availability of an activated IC, thereby representing a high-volume production setting~.\n\t\tMoreover, sometimes, an attack assumes the availability of only a few golden I/O patterns, e.g., in the form of test vectors. 
Due to the available knowledge that is provided by these patterns, these attacks fall into the oracle-guided class. As discussed in the next section, ML-based attacks have been explored in both classes. Henceforth, we refer to the gate-level netlist under attack as the \\textit{target} netlist.", "id": "156b835f-9d2c-44be-96fe-edb4453da9fb", "level": "subsection", "origin_cites_number": 2, "parent_id": "75631a2a-145f-4b09-90ff-408cabadfd55", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Background" ], [ "subsection", "Attack Model" ] ], "subsections": [], "title": "Attack Model" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:ll-in-ml-era}\n\t\tThis section describes the recent developments of ML-based applications in the logic locking domain.", "id": "cb342b15-09e2-41e4-9c1f-0d7558736604", "level": "section", "origin_cites_number": 0, "parent_id": "34e673cd-0e21-4b9c-bcb0-f8f8f8c633ee", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Logic Locking in the Machine Learning Era" ] ], "subsections": [ "8a41fbf8-901a-45b8-9e50-9a86985de29d", "b5e83d76-9451-4150-b043-b622bb4e368c", "ca9a962e-8ebf-4b88-8027-88e803e66e7d", "1d12de22-82f0-4f45-8eba-64bb646b3352" ], "title": "Logic Locking in the Machine Learning Era" }, { "cite_extract_rate": 0, "cites": [], "content": "Previous work in this domain has mostly been focusing on the development of novel ML-based attacks on logic locking, both for the oracle-less and oracle-guided model. A simplified visualization of all reviewed attack flows is presented in Fig.~\\ref{fig:attack-visualization}. Hereby, the attacks are presented in the \\textit{deployment} stage (after training). An exhaustive comparison of the attacks is summarized in Table~\\ref{table:attacks}. 
The comparison lists the attacks in order of appearance per attack class. For convenience, a glossary of the used acronyms is given in Table~\\ref{table:glossary}. \n\t\tThe following review reflects the descriptions in Fig.~\\ref{fig:attack-visualization} and Table~\\ref{table:attacks}, thereby only focusing on the \\textit{basic attack mechanisms}. All other details can be found in the mentioned \\textit{comparison table} and the provided references.", "id": "8a41fbf8-901a-45b8-9e50-9a86985de29d", "level": "subsection", "origin_cites_number": 0, "parent_id": "cb342b15-09e2-41e4-9c1f-0d7558736604", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Logic Locking in the Machine Learning Era" ], [ "subsection", "ML-Based Attacks" ] ], "subsections": [ "50b37d47-0116-40d6-afbd-43bf9dacb32b", "eda58d36-635e-44a7-9a87-c2f7c7682303" ], "title": "ML-Based Attacks" }, { "cite_extract_rate": 0.103448275862068, "cites": [ 2932, 7636, 2931 ], "content": "ML-based attacks in this class have exploited \\textit{scheme-related structural residue} to identify a correct key-bit value or the locking circuitry itself. This category includes SAIL~, SnapShot~, and GNNUnlock~.\n\t\t\\textbf{SAIL:} This attack deploys ML algorithms to retrieve local logic structures from the post-resynthesis target netlist to predict the correct key values (Fig.~\\ref{fig:attack-visualization}~(a)). The attack targets XOR/XNOR-based logic locking exclusively. Hereby, the attack exploits two leakage points of the locking flow: ($i$) the deterministic structural changes induced by logic synthesis around the key-affected netlist gates and ($ii$) the nature of XOR/XNOR-based locking (XOR for key bit 0 and XNOR for key bit 1). Therefore, the attack encloses two components: the Change Prediction Model (CPM) and the Reconstruction Model (RM). 
For each netlist subgraph around a key input (known as \\textit{locality}), CPM is trained to predict whether a synthesis-induced change has occurred. If a change is predicted, RM is deployed to reconstruct the pre-resynthesis netlist structures, i.e., it reverses the effect of the synthesis process. Finally, based on the intrinsic nature of XOR/XNOR-based locking, SAIL extracts the correct key value.\n\t\t\\textbf{SnapShot:} The attack utilizes a neuroevolutionary approach to automatically design suitable neural networks to \\textit{directly} predict correct key values from a locked post-resynthesis target netlist (Fig.~\\ref{fig:attack-visualization}~(b)). The attack exploits the structural alterations induced by locking schemes to learn a correlation between the key-induced structural patterns and the key value. Compared to SAIL, SnapShot implements an end-to-end ML approach, thereby having the advantage of being applicable to any locking scheme as well as not relying on learning specific transformation rules of the logic synthesis process.\n\t\t\\textbf{GNNUnlock:} The attack leverages a graph neural network to identify all the gates in a post-resynthesis target netlist that belong to the locking circuitry (Fig.~\\ref{fig:attack-visualization}~(c)). Therefore, compared to SAIL and SnapShot, GNNUnlock learns to differentiate locking gates from regular gates instead of learning a correlation between the netlist sub-graphs and the correct key. To enhance the removal accuracy, after identification, a deterministic post-processing mechanism is deployed to remove the gates depending on the intrinsic features of the underlying schemes. Hereby, GNNUnlock has specifically been trained and deployed to target SAT-attack-resilient schemes, i.e., Provably Secure Logic Locking (PSLL). Herein lies the success of the attack: PSLLs often induce isolable structural netlist changes to produce a specific SAT-resilient circuit behavior. 
In the pre-ML era, it has been long assumed that PSLLs can be protected through additional locking and resynthesis.\n\t\t\\begin{table}[b]\\footnotesize\n\t\t\t\\centering\n\t\t\t\\vspace{-0.2in}\n\t\t\t\\caption{Glossary}\n\t\t\t\\label{table:glossary}\n\t\t\t\\resizebox{\\columnwidth}{!}{\n\t\t\t\t\\begin{tabular}\n\t\t\t\t\t{l|l||l|l}\n\t\t\t\t\t\\toprule\n\t\t\t\t\t\\textbf{Acronym} & \\textbf{Definition}&\\textbf{Acronym} & \\textbf{Definition}\\\\\n\t\t\t\t\t\\midrule\n\t\t\t\t\t\\midrule\n\t\t\t\t\tAND-OR & AND/OR-based LL~ & LUT & Lookup table \\\\\n\t\t\t\t\tAnti-SAT & Anti-Boolean satisfiability~ & MLP & Multi-layer perceptron\\\\\n\t\t\t\t\tC & Combinational circuits & MPNN & Message-passing neural network\\\\\n\t\t\t\t\tCNN & Convolutional neural network & OG & Oracle-guided attack\\\\\n\t\t\t\t\tCS & Logic-cone-size-based LL~ & OL & Oracle-less attack\\\\\n\t\t\t\t\tD-RNN & Deep recurrent neural network & OLL & Optimal LL~\\\\\n\t\t\t\t\tFLL & Fault analysis-based LL~ & PSO & Particle swarm optimization\\\\\n\t\t\t\t\tFU & Functional & RLL & Random LL~\\\\\n\t\t\t\t\tGA & Genetic algorithm & S & Sequential circuits\\\\\n\t\t\t\t\tGL & Gate level & SFLL-HD & Stripped functionality LL~\\\\\n\t\t\t\t\tGNN & Graph neural network & SLL & Strong (secure) LL~\\\\\n\t\t\t\t\tLL & Logic locking & ST & Structural\\\\\n\t\t\t\t\tLSTM & Long short-term memory & TTLock & Tenacious and traceless LL~\\\\\n\t\t\t\t\t\\bottomrule\n\t\t\t\t\t\\bottomrule\n\t\t\t\t\\end{tabular}\n\t\t\t}\n\t\t\\end{table}\n\t\t\\begin{table*}[hbtp]\\footnotesize\n\t\t\t\\setlength{\\tabcolsep}{2pt}\n\t\t\t\\caption{Overview of ML-Based Attacks on Logic 
Locking}\n\t\t\t\\label{table:attacks}\n\t\t\t\\resizebox{\\textwidth}{!}{\n\t\t\t\t\\begin{threeparttable}\n\t\t\t\t\t\\centering\n\t\t\t\t\t\\begin{tabular}{l||>{\\centering\\arraybackslash}m{1.3cm}>{\\centering\\arraybackslash}m{0.5cm}>{\\centering\\arraybackslash}m{1.7cm}>{\\centering\\arraybackslash}m{1.2cm}>{\\centering}m{1.7cm}>{\\centering\\arraybackslash}m{1.2cm}>{\\centering}m{1cm}>{\\centering}m{1.2cm}>{\\centering\\arraybackslash}m{2.1cm}>{\\centering\\arraybackslash}m{1.7cm}>{\\centering\\arraybackslash}m{1.5cm}>{\\centering\\arraybackslash}m{1.7cm}}\n\t\t\t\t\t\t\\toprule\n\t\t\t\t\t\t\\textbf{Attack}& \\textbf{Objective}&\\textbf{Class}& \\textbf{Level/IC Type/Attack Basis}&\\textbf{ML Model}&\\textbf{Benchmarks}\\textsuperscript{16}&\\textbf{Evaluated Schemes}&\\textbf{Scheme Independent}&\\textbf{Exact Output}\\textsuperscript{15}&\\textbf{Evaluate Key Length in Bits}&\\textbf{\\% Accuracy [min, max]}&\\textbf{Time Complexity\\textsuperscript{17}}&\\textbf{Known Protection}\\\\\n\t\t\t\t\t\t\\midrule\n\t\t\t\t\t\tSAIL~& key retrieval & OL&GL/C,S\\textsuperscript{1}/ST&Random Forest& ISCAS'85&RLL, SLL, CS& \\xmark&\\xmark&$\\{4,\\cdots,192\\}$\\textsuperscript{4}&$[66.89, 73.88]$&$\\bigO(l)$\\textsuperscript{13}& UNSAIL~, SARO~\\textsuperscript{22}\\\\\\hline\n\t\t\t\t\t\tSnapShot~& ($i$) GSS and ($ii$) SRS key retrieval\\textsuperscript{14}& OL&GL/C,S/ST&MLP, CNN, GA& ISCAS'85, ITC'99, Ariane RV&RLL& \\cmark&\\xmark&$\\{64\\}$&$[57.71, 61.56]^{(i)}$, $[71.57, 81.67]^{(ii)}$&$\\bigO(l)$\\textsuperscript{13}& D-MUX~\\\\\\hline\n\t\t\t\t\t\tGNNUnlock~&key gates removal&OL&GL/C,S/ST&GNN&ISCAS'85, ITC'99&Anti-SAT, TTLock, SFLL-HD&\\xmark\\textsuperscript{12}&\\xmark&$\\{8,\\cdots,128\\}$&$[100.00]$\\textsuperscript{11}&$\\bigO(n)$\\textsuperscript{13}&\\omark\\\\\n\t\t\t\t\t\t\\midrule\n\t\t\t\t\t\t\\midrule\n\t\t\t\t\t\tBOCANet~& key retrieval & OG\\textsuperscript{2}&GL/C/FU&D-RNN \\& LSTM& Trust-Hub, ISCAS'85&RLL&\\cmark&\\xmark & 
$\\{32,64,128,256\\}$&$[89.00, 100.00]$&$\\bigO(\\alpha)$\\textsuperscript{19}&\\omark\\\\\\hline\n\t\t\t\t\t\tSURF~&key retrieval& OG &GL/C,S\\textsuperscript{1}/ST,FU&SAIL \\& heuristic optimization&ISCAS'85&RLL, SLL, CS&\\xmark\\textsuperscript{3}&\\xmark&$\\{4,\\cdots,192\\}$\\textsuperscript{4} &$[90.58,98.83]$&$\\bigO(t)$\\textsuperscript{13}&UNSAIL~\\\\\\hline\n\t\t\t\t\t\tGenUnlock~&key retrieval& OG &GL/C,S/FU&GA&ISCAS'85, MCNC&RLL,SLL, AND-OR, FLL&\\cmark&\\xmark&$\\{8,\\cdots, 1618\\}$\\textsuperscript{7}&$[100.00]$\\textsuperscript{8}&$\\bigO(\\beta)$\\textsuperscript{20}&\\omark\\\\\\hline\n\t\t\t\t\t\tNNgSAT~&key retrieval& OG &GL/C,S/FU&MPNN&ISCAS'85, ITC'99&SAT-hard\\textsuperscript{5}&\\cmark&\\cmark&n/a\\textsuperscript{5}&$[93.50]$\\textsuperscript{6}&$\\bigO(\\lambda)$\\textsuperscript{18}&\\omark\\\\\t\\hline\n\t\t\t\t\t\tPSO Attack~&key retrieval& OG\\textsuperscript{9} &GL/C,S/FU&PSO&ISCAS'85, ITC'99&RLL, OLL&\\cmark&\\xmark&$\\{64,128\\}$&$[82.07,99.80]$\\textsuperscript{10}&$\\bigO(\\gamma)$\\textsuperscript{21}&point-function locking~\\\\\n\t\t\t\t\t\t\\bottomrule\n\t\t\t\t\t\\end{tabular}\n\t\t\t\t\t\\begin{tablenotes}\n\t\t\t\t\t\t\\scriptsize\n\t\t\t\t\t\t\\item [1]~~The attack is in theory applicable to sequential circuits, however no evaluation has been performed yet.\n\t\t\t\t\t\t\\item [2]~~The attack relies on having access to at least some golden input/output patterns ($<0.5\\%$ of total I/O pairs).\n\t\t\t\t\t\t\\item [3]~~In theory, the key-refinement search algorithm could be utilized based on any seed key. 
However, this has not been addressed thus far.\n\t\t\t\t\t\t\\item [4]~~The following key lengths have been evaluated: $\\{4,8,16,24,32,64,96,128,192\\}$.\n\t\t\t\t\t\t\\item [5]~~$n\\times m$ bitwise multipliers $(8<m,n<32)$, $n\\times m$ crossbar network of 2-to-1 MUXes $(16<m, n<36)$, $n$-input LUTs built by 2-to-1 MUXes $(n<16)$, and $n$-to-1 AND-trees.\n\t\t\t\t\t\t\\item [6]~~Indicates the percentage of successfully de-obfuscated circuits compared to the baseline~.\n\t\t\t\t\t\t\\item [7]~~The key length is selected based on an area overhead of 5\\%, 10\\% or 25\\%.\n\t\t\t\t\t\t\\item [8]~~The quality of the retrieved approximate keys is quantified by a user-defined output-fidelity measure.\n\t\t\t\t\t\t\\item [9]~~The attack relies on an oracle without access to the scan chain.\n\t\t\t\t\t\t\\item [10]~Accuracy refers to the average number of cases where the retrieved key results in 0\\% erroneous outputs for $10^{6}$ random patterns.\n\t\t\t\t\t\t\\item [11]~The accuracy refers to the successful removal of the locking circuitry (not the key retrieval).\n\t\t\t\t\t\t\\item [12]~The attack has not been evaluated for other locking schemes so far and the post-processing steps are scheme-specific.\n\t\t\t\t\t\t\\item [13]~Notation: the key length $l$, the number of netlist nodes $n$, the total number of iterations $t=p\\cdot{n} + p\\cdot{w}+r\\cdot{l}\\cdot{i}\\cdot{n}+r\\cdot{l}\\cdot{i}\\cdot{p}$, where $p$ is the number of output pins, $i$ is the number of IO pairs, $r$ is the number of runs, and $w$ is the number of wires.\n\t\t\t\t\t\t\\item [14]~GSS refers to the generalized set scenario which trains the ML model based on a set of locked benchmarks that are different from the target. 
SRS captures the self-referencing scenario where the training data is generated by re-locking the target benchmark.\n\t\t\t\t\t\t\\item [15]~If the output of the attack is an exact result, the attack can guarantee a 100\\% correct deobfuscation for the complete I/O space.\n\t\t\t\t\t\t\\item [16]~ISCAS'85~, MCNC~, ITC'99~, RISC-V Ariane core~, and Trust-Hub~.\n\t\t\t\t\t\t\\item [17]~If the time complexity can be clearly determined, it refers to the time complexity after the training process.\n\t\t\t\t\t\t\\item [18]~The execution time of the SAT attack is $\\sum_{i=1}^{\\lambda}{t_{i}}$, where $\\lambda$ is the number of iterations and $t_{i}$ the time required for one SAT-solver call. $\\lambda$ depends on the characteristics of the search space and the branching preferences of the SAT solver. Therefore, the time complexity of the attack is typically measured in terms of $\\lambda$. More details can be found in~.\n\t\t\t\t\t\t\\item [19]~The complexity is linear to the number of training samples $\\alpha$, as the final key is determined based on the MSE of the trained outputs and the key-induced generated outputs.\n\t\t\t\t\t\t\\item [20]~$\\beta=g\\cdot{p}\\cdot{l}$, where $g$ is the number of generations, $p$ is the population size, and $l$ is the key length. 
Note that the complexity changes with any adaptations of the GA.\n\t\t\t\t\t\t\\item [21]~$\\gamma=g\\cdot{p}\\cdot{\\delta}$, where $g$ is the number of generations, $p$ is the population size, and $\\delta$ the complexity of performing circuit simulation for the fitness evaluation.\n\t\t\t\t\t\t\\item [22]~An empirical evaluation of the resilience against SAIL has not been presented in the paper; only a discussion based on a proposed metric system has been provided.\n\t\t\t\t\t\\end{tablenotes}\n\t\t\t\t\\end{threeparttable}\n\t\t\t}\n\t\t\t\\vspace{-0.1in}\n\t\t\\end{table*}", "id": "50b37d47-0116-40d6-afbd-43bf9dacb32b", "level": "subsubsection", "origin_cites_number": 29, "parent_id": "8a41fbf8-901a-45b8-9e50-9a86985de29d", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Logic Locking in the Machine Learning Era" ], [ "subsection", "ML-Based Attacks" ], [ "subsubsection", "Oracle-Less Attacks" ] ], "subsections": [], "title": "Oracle-Less Attacks" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 7636 ], "content": "Due to the availability of an oracle, existing ML-based attacks in this class have mostly exploited functional features of the target IC. This class includes the following attacks: BOCANet~, SURF~, GenUnlock~, NNgSAT~, and the PSO-guided attack~.\n\t\t\\textbf{BOCANet:}~This attack leverages Recurrent Neural Networks (RNN) based on long short-term memory to construct a correct activation key (Fig.~\\ref{fig:attack-visualization}~(d)). The ML model is trained on a sequence of I/O observations taken from an activated IC, thereby learning the functional I/O mapping of the circuit, i.e., its Boolean function. Once trained, the key retrieval consists of two steps. First, a random key is applied to the model as input. 
Second, the initial key value is subsequently updated based on the Mean-Squared-Error (MSE) of the trained outputs and the newly generated outputs that are affected by the introduced key. Note that the ML model can be utilized to predict correct inputs or outputs as well. BOCANet exploits the functional effect a correct key has on generating a correct I/O mapping.\n\t\t\\textbf{SURF:}~This attack is based on a joint structural and functional analysis of the circuit to retrieve the activation key (Fig.~\\ref{fig:attack-visualization}~(e)). The ML-aspect of the attack lies in \\textit{utilizing the SAIL attack} to generate the pre-resynthesis netlist structures and a seed key. Afterwards, based on the outputs of SAIL, SURF iteratively refines the key by means of a structure-aware greedy optimization algorithm guided by a functional simulation of the obfuscated netlist and a set of golden I/O pairs. The optimization is guided by the observation that specific key gates only affect a specific set of outputs. Thus, the key bits can be partitioned based on which output they affect. Performing a systematic perturbation of the key bits can lead to a more refined key. The success of the heuristic is grounded in the limited local effects that traditional locking schemes have on the value of the output for incorrect keys.\n\t\t\\textbf{GenUnlock:}~The attack flow of GenUnlock leverages a Genetic Algorithm (GA)-based exploration of suitable activation keys for a locked circuit (Fig.~\\ref{fig:attack-visualization}~(f)). The heuristic search is steered by the key fitness that is computed based on the matching ratio of the key on the golden I/O training set. Through multiple generations, the fitness of the key population is subsequently improved through the application of genetic operators (selection, crossover, and mutation). Once the accepted tolerance for the correctness of the key is reached, the algorithm returns the set of the fittest keys. 
Similarly to SURF, GenUnlock exploits the fact that a heuristic key-refinement procedure eventually leads to more accurate activation keys. \n\t\t\\textbf{NNgSAT:}~The main objective of NNgSAT is the deployment of a Message-Passing Neural Network (MPNN) to facilitate the resolution of SAT-hard circuit structures during the application of a SAT-based attack (Fig.~\\ref{fig:attack-visualization}~(g)). The motivation is driven by the fact that common SAT solvers run into scalability issues when tackling hard-to-solve locked circuit structures, e.g., multipliers and AND trees. Therefore, in NNgSAT, a neural network is trained to predict the satisfying assignment on a set of SAT-hard cases. In deployment, the SAT-attack flow offloads the SAT-hard problems to the trained model to speed up the attack procedure. The effectiveness of NNgSAT lies in the fact that it is possible to transfer prediction knowledge from learned clauses to unseen problems. \n\t\t\\textbf{PSO-guided Attack:}~This attack is based on a Particle Swarm Optimization (PSO) heuristic that searches through the key space directed by a selected cost function (Fig.~\\ref{fig:attack-visualization}~(f)). The cost function is modeled as the Hamming distance between the golden and the obtained output responses (for a selected key). Therefore, the search algorithm relies on having access to an activated IC to compare against. A major motivator for this attack is its applicability without having access to an open scan chain, as this is often a limiting factor for SAT-based attacks. 
In essence, the PSO-guided attack and GenUnlock are similar in nature, as both rely on black-box evolutionary procedures guided by a functionality-driven objective function.", "id": "eda58d36-635e-44a7-9a87-c2f7c7682303", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "8a41fbf8-901a-45b8-9e50-9a86985de29d", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Logic Locking in the Machine Learning Era" ], [ "subsection", "ML-Based Attacks" ], [ "subsubsection", "Oracle-Guided Attacks" ] ], "subsections": [], "title": "Oracle-Guided Attacks" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 2932, 2931 ], "content": "\\textbf{UNSAIL:}~This logic locking scheme has been developed to thwart attacks that target the resolution of structural transformations of logic synthesis~. The core idea of UNSAIL is to generate confusing training data that leads to false predictions in the CPM and RM modules of SAIL. This is realized through the additional manipulation of the netlist after synthesis to force the existence of \\textit{equivalent} netlist sub-graph observations that are linked to \\textit{different} key values.\n\t\t\\textbf{SARO:}~The Scalable Attack-Resistant Obfuscation (SARO) operates in two steps~. First, SARO splits the design into smaller partitions to maximize the structural alterations in the netlist. Second, a systematic truth table transformation is deployed to lock the partitions. In order to increase the complexity of pattern recognition attacks (such as SAIL), the transformations aim to maximize randomness in the netlist.\n\t\t\\textbf{Point-Functions and PSO:} As mentioned in Table~\\ref{table:attacks}, the PSO-guided attack is not applicable to point-function-based locking schemes. The reason is that this type of locking yields SAT-resilient behavior in which any incorrect key corrupts only \\textit{a very limited amount of outputs}. 
Consequently, this behavior offers no advantage in the guidance of the heuristic search, as it does not yield differentiating fitness values. Note that the applicability of point functions depends on the design of the fitness function that is used to guide the heuristic.\n\t\t\\textbf{D-MUX:} The recently introduced Deceptive Multiplexer (D-MUX) LL scheme builds on the concept of multiple MUX-based insertion strategies that create structural paths that are equally likely to be driven by 0 or 1 key values~. Hence, D-MUX offers efficient protection against data-driven attacks.", "id": "b5e83d76-9451-4150-b043-b622bb4e368c", "level": "subsection", "origin_cites_number": 3, "parent_id": "cb342b15-09e2-41e4-9c1f-0d7558736604", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Logic Locking in the Machine Learning Era" ], [ "subsection", "ML-Resilient Schemes" ] ], "subsections": [], "title": "ML-Resilient Schemes" }, { "cite_extract_rate": 0, "cites": [], "content": "\\textbf{Deobfuscation Runtime Prediction:}~Apart from ML-based attacks, machine learning has also found its way into other aspects of logic locking. A recent work has designed a framework named ICNet for the prediction of the key-retrieval runtime for SAT-based attacks using graph neural networks~. The framework obtains the predicted deobfuscation runtime based on the characteristics of the circuit topology. ICNet offers an end-to-end approach to evaluate the hardness of logic locking with respect to the SAT attack, thereby increasing the development efficiency of novel locking schemes. \n\t\t\\textbf{ML-Attack on Analog IC Locking:}~ML has started to have an impact on locking mechanisms even beyond digital circuits. The authors in~ have developed an oracle-guided attack on locked analog ICs using genetic algorithms. 
The approach has successfully broken all known analog logic locking techniques.", "id": "ca9a962e-8ebf-4b88-8027-88e803e66e7d", "level": "subsection", "origin_cites_number": 2, "parent_id": "cb342b15-09e2-41e4-9c1f-0d7558736604", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Logic Locking in the Machine Learning Era" ], [ "subsection", "Other Applications of ML" ] ], "subsections": [], "title": "Other Applications of ML" }, { "cite_extract_rate": 0, "cites": [], "content": "The presented efforts gather around two focal points: oracle-less and oracle-guided attacks. The intrinsic mechanisms of these attacks shed light on major vulnerabilities in existing logic locking that are exploitable by ML. We summarize the observations as follows:", "id": "1d12de22-82f0-4f45-8eba-64bb646b3352", "level": "subsection", "origin_cites_number": 0, "parent_id": "cb342b15-09e2-41e4-9c1f-0d7558736604", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Logic Locking in the Machine Learning Era" ], [ "subsection", "Lessons Learned - Challenges and Opportunities" ] ], "subsections": [ "f1a22310-9774-4923-88e9-39031f0c57c0", "45fa94fe-3444-4d74-801c-6eb313e3eb65", "0f226960-767d-435e-ba81-0709f1f4a0d8", "058cfcba-944a-40ba-a6a6-3673887f8227", "7bee07b9-543e-4474-ab4e-8d349ee21c61", "4f146a54-7ec3-42e7-9ae0-dbd895f2b463", "71768a97-ef1a-4c84-ad10-da8f3f475652", "6c43d9c7-1dfb-4c58-8706-caf10292d0cd" ], "title": "Lessons Learned - Challenges and Opportunities" }, { "cite_extract_rate": 0, "cites": [], "content": "Oracle-guided attacks focus on functional aspects of schemes, whereas structural leakage is exploited in the oracle-less model due to the absence of an activated IC. This indicates two pitfalls. 
First, the evaluated schemes have a predictable effect on the functionality of the circuit for incorrect keys, enabling the possibility to perform a guided heuristic search of the key space. Second, the existing schemes induce structural changes which strongly correlate with the value of the key. A mitigation depends on what an attempted attack tries to exploit. For example, to overcome SAIL or SnapShot, a scheme must not reveal anything about the correctness of the key through the induced structural change. This is achieved if the inserted change does not differ depending on the key value. Similarly, to protect against GNNUnlock, the deployed schemes have to ensure resilience against isolation, i.e., the locking circuitry must not be structurally independent from the original design. In the case of GenUnlock and the PSO attack, the behavior of the underlying scheme must be constant or fully random for all incorrect key values; disabling any chance for a guided heuristic convergence towards a correct key. Similar observations can be made for the other attacks as well. Nevertheless, conveying all necessary security objectives into a uniform locking scheme remains an open challenge.", "id": "f1a22310-9774-4923-88e9-39031f0c57c0", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "1d12de22-82f0-4f45-8eba-64bb646b3352", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Logic Locking in the Machine Learning Era" ], [ "subsection", "Lessons Learned - Challenges and Opportunities" ], [ "subsubsection", "Structural vs. Functional Analysis" ] ], "subsections": [], "title": "Structural vs. Functional Analysis" }, { "cite_extract_rate": 0, "cites": [], "content": "The reliance on logic synthesis transformations to enable the security of logic locking schemes has to be revised. 
As shown in SAIL, the synthesis rules are predictable and reversible, thereby having little impact on deducing a correct key for traditional schemes. Nevertheless, resynthesis can increase the difficulty of reverse engineering the functionality of a design. However, this needs further evaluation in the context of novel locking policies.", "id": "45fa94fe-3444-4d74-801c-6eb313e3eb65", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "1d12de22-82f0-4f45-8eba-64bb646b3352", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Logic Locking in the Machine Learning Era" ], [ "subsection", "Lessons Learned - Challenges and Opportunities" ], [ "subsubsection", "Logic Synthesis for Security" ] ], "subsections": [], "title": "Logic Synthesis for Security" }, { "cite_extract_rate": 0, "cites": [], "content": "PSLL schemes induce isolable changes which create a clear structural distinction between the locking-related gates and the original (regular) gates. The main reason for this vulnerability is the requirement of achieving SAT-resistant behavior for incorrect keys. 
This specific functional pattern requires significant structural netlist changes consolidated in isolable components (e.g., the tree of complementary logic in Anti-SAT or the restore/perturb units in TTLock and SFLL-HD).", "id": "0f226960-767d-435e-ba81-0709f1f4a0d8", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "1d12de22-82f0-4f45-8eba-64bb646b3352", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Logic Locking in the Machine Learning Era" ], [ "subsection", "Lessons Learned - Challenges and Opportunities" ], [ "subsubsection", "Structural Isolation of Post-SAT Schemes" ] ], "subsections": [], "title": "Structural Isolation of Post-SAT Schemes" }, { "cite_extract_rate": 0, "cites": [], "content": "Regardless of the attack type, ML-based attacks tend to generate an approximate output (as marked in Table~\\ref{table:attacks}), meaning that the retrieved key or deobfuscated circuit cannot be evaluated as correct with an exact certainty. Note that this is not the case for NNgSAT. Nevertheless, the approximate output can be utilized as seed for other attacks to speed up the reverse engineering procedure.", "id": "058cfcba-944a-40ba-a6a6-3673887f8227", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "1d12de22-82f0-4f45-8eba-64bb646b3352", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Logic Locking in the Machine Learning Era" ], [ "subsection", "Lessons Learned - Challenges and Opportunities" ], [ "subsubsection", "Output Uncertainty" ] ], "subsections": [], "title": "Output Uncertainty" }, { "cite_extract_rate": 0, "cites": [], "content": "As shown in Table~\\ref{table:attacks}, the existing attacks focus on extrapolating information leakage from gate-level netlists. 
So far, it is unclear if the same leakage is present in RTL-based locking schemes as well.", "id": "7bee07b9-543e-4474-ab4e-8d349ee21c61", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "1d12de22-82f0-4f45-8eba-64bb646b3352", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Logic Locking in the Machine Learning Era" ], [ "subsection", "Lessons Learned - Challenges and Opportunities" ], [ "subsubsection", "RTL vs Gate-Level" ] ], "subsections": [], "title": "RTL vs Gate-Level" }, { "cite_extract_rate": 0, "cites": [], "content": "Available mitigation schemes suffer from overspecialization for thwarting specific attack vectors. So far, it has not been evaluated which form of logic locking offers a comprehensive protection against any form of ML-based guessing attack. Hence, a potentially fruitful opportunity lies within the automatic design of resilient schemes~.", "id": "4f146a54-7ec3-42e7-9ae0-dbd895f2b463", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "1d12de22-82f0-4f45-8eba-64bb646b3352", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Logic Locking in the Machine Learning Era" ], [ "subsection", "Lessons Learned - Challenges and Opportunities" ], [ "subsubsection", "Overspecialization" ] ], "subsections": [], "title": "Overspecialization" }, { "cite_extract_rate": 0, "cites": [], "content": "To the best of our knowledge, ML-based techniques have not been deployed yet for the evaluation of sequential logic locking techniques.", "id": "71768a97-ef1a-4c84-ad10-da8f3f475652", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "1d12de22-82f0-4f45-8eba-64bb646b3352", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Logic Locking in the 
Machine Learning Era" ], [ "subsection", "Lessons Learned - Challenges and Opportunities" ], [ "subsubsection", "ML for Sequential Locking" ] ], "subsections": [], "title": "ML for Sequential Locking" }, { "cite_extract_rate": 0, "cites": [], "content": "There is still a need to have access to a rich benchmark set for data-driven analysis~.", "id": "6c43d9c7-1dfb-4c58-8706-caf10292d0cd", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "1d12de22-82f0-4f45-8eba-64bb646b3352", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Logic Locking in the Machine Learning Era" ], [ "subsection", "Lessons Learned - Challenges and Opportunities" ], [ "subsubsection", "The Need for Benchmarks" ] ], "subsections": [], "title": "The Need for Benchmarks" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:conclusion}\n\t\tThis work summarizes the recent developments of logic locking at the frontiers of machine learning. The presented ML-based attacks indicate the presence of structural and functional leakage in contemporary locking schemes which has been overlooked by the traditional interpretation of security. We show that an ML-based analysis is able to pinpoint novel vulnerabilities and challenge existing security assumptions. The offered discussion consolidates the major ML-induced challenges in logic locking design, offering a fruitful ground for future research directions in the post-ML era.\n\t\t\\bibliographystyle{IEEEtran}\n\t\t\\bibliography{bibliography_short}\n\t\\end{document}", "id": "4e814a9b-2e2f-4194-94e3-5e0b2adbe038", "level": "section", "origin_cites_number": 0, "parent_id": "34e673cd-0e21-4b9c-bcb0-f8f8f8c633ee", "prefix_titles": [ [ "title", "Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
74
[ 8606, 7636, 2932, 2931 ]
1.070896
[ "Sicheng Zhao", "Shangfei Wang", "Mohammad Soleymani", "Dhiraj Joshi", "Qiang Ji" ]
Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey
2019
2019-10-03T21:22:47Z
cs.MM
The wide popularity of digital photography and social networks has generated a rapidly growing volume of multimedia data (\textit{i.e.}, image, music, and video), resulting in a great demand for managing, retrieving, and understanding these data. Affective computing (AC) of these data can help to understand human behaviors and enable wide applications. In this article, we survey the state-of-the-art AC technologies comprehensively for large-scale heterogeneous multimedia data. We begin this survey by introducing the typical emotion representation models from psychology that are widely employed in AC. We briefly describe the available datasets for evaluating AC algorithms. We then summarize and compare the representative methods on AC of different multimedia types, \textit{i.e.}, images, music, videos, and multimodal data, with the focus on both handcrafted features-based methods and deep learning methods. Finally, we discuss some challenges and future directions for multimedia affective computing.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "36832e43-191f-4e88-aa11-6c7b91fe1bf8", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey" ] ], "subsections": [ "6716f171-5279-4be6-829f-92b9ec4bb350", "62f1d80f-e95a-45c5-ae8a-d6a71fc0fefa", "21f5b1a5-8481-4ed1-83d5-a0396cc43cfa", "50a2810e-2777-44d0-85b6-12b160fc2d4d", "6a4a2789-5122-4ca2-8207-b0cf83ceda85", "14b7a32f-91ab-4f67-baaf-fabb6149902f", "20c98c17-ea71-4ba4-a9aa-815a42eced5b", "5da3fa17-09f5-4d0c-9b55-5610d97ddda5", "fc0b3d97-cde9-409b-91c4-fe25e8140878" ], "title": "root" }, { "cite_extract_rate": 0.08333333333333301, "cites": [ 1016, 1015 ], "content": "\\label{sec:Introduction}\nUsers are increasingly recording their daily activities, sharing interesting experiences, and expressing personal viewpoints using mobile devices on social networks, such as Twitter\\footnote{\\url{https://twitter.com}}, Facebook\\footnote{\\url{https://www.facebook.com}}, and Weibo\\footnote{\\url{https://www.weibo.com}}, \\textit{etc}. As a result, a rapidly growing volume of multimedia data (\\textit{i.e.}, image, music, and video) has been generated, as shown in Figure~\\ref{fig:multimedia}, which results in a great demand for the management, retrieval, and understanding of these data. Most existing work on multimedia analysis focuses on the cognitive aspects, \\textit{i.e.}, understanding the objective content, such as object detection in images~, speaker recognition in speech~, and action recognition in videos~. Since what people feel has a direct influence on their decision making, affective computing (AC) of these multimedia data is of significant importance and has attracted increasing attention. 
For example, companies would like to know how customers evaluate their products and can thus improve their services~; depression and anxiety detection from social media can help understand psychological distress and thus potentially prevent suicidal actions~.\nWhile sentiment analysis in text~ has long been a standard task, AC from other modalities, such as image and video, has only recently begun to be considered. In this article, we aim to review the existing AC technologies comprehensively for large-scale heterogeneous multimedia data, including image, music, video, and multimodal data.\nAffective computing of multimedia (ACM) aims to recognize the emotions that are expected to be evoked in viewers by a given stimulus. Similar to other supervised learning tasks, ACM is typically composed of three steps: data collection and annotation, feature extraction, and mapping learning between features and emotions~. One main challenge for ACM is the affective gap, \\textit{i.e.}, ``the lack of coincidence between the features and the expected affective state in which the user is brought by perceiving the signal''~. In the early stage, various hand-crafted features were designed to bridge this gap with traditional machine learning algorithms, while more recently researchers have focused on end-to-end deep learning from raw multimedia data to recognize emotions. Existing ACM methods mainly assign the dominant (average) emotion category (DEC) to an input stimulus, based on the assumption that different viewers have similar reactions to the same stimulus. We can usually formulate this task as a single-label learning problem.\nHowever, emotions are influenced by subjective and contextual factors, such as the educational background, cultural diversity, and social interaction~. As a result, different viewers may react differently to the same stimulus, which creates the subjective perception challenge. 
Therefore, the perception inconsistency makes it insufficient to simply predict the DEC for the highly subjective variable. As stated in~, we can perform two kinds of ACM tasks to deal with the subjectivity challenge: predicting personalized emotion perception for each viewer and assigning multiple emotion labels to each stimulus. For the latter, we can either assign multiple labels to each stimulus with equal importance using multi-label learning methods, or predict the emotion distribution, which learns the degree of each emotion~.\nIn this article, we concentrate on surveying the existing methods on ACM and analyzing potential research trends. Section~\\ref{sec:EmotionModels} introduces the widely-used emotion representation models from psychology. Section~\\ref{sec:Datasets} summarizes the existing available datasets for evaluating ACM tasks. Section~\\ref{sec:Image}, Section~\\ref{sec:Music}, Section~\\ref{sec:Video}, and Section~\\ref{sec:multimodal} survey the representative methods on AC of images, music, videos, and multimodal data, respectively, including both handcrafted features-based methods and deep learning methods. Section~\\ref{sec:FutureDirections} provides some suggestions for future research, followed by the conclusion in Section~\\ref{sec:Conclusion}.\nTo the best of our knowledge, this article is among the first to provide a comprehensive survey of affective computing of multimedia data from different modalities. Previous surveys mainly focus on a single modality, such as images~, speech~, music~, video~, and multimodal data~. From this survey, readers can more easily compare the correlations and differences among different AC settings. 
We believe that this will be instrumental in generating novel research ideas.", "id": "6716f171-5279-4be6-829f-92b9ec4bb350", "level": "section", "origin_cites_number": 24, "parent_id": "36832e43-191f-4e88-aa11-6c7b91fe1bf8", "prefix_titles": [ [ "title", "Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:EmotionModels}\nThere are two dominant emotion representation models deployed by psychologists: categorical emotions (CE) and dimensional emotion space (DES). CE models classify emotions into a few basic categories, such as \\emph{happiness} and \\emph{anger}, \\emph{etc}. Some commonly used models include Ekman's six basic emotions and Mikels's eight emotions~. When classifying emotions into \\emph{positive} and \\emph{negative} (polarity)~, sometimes including \\emph{neutral}, ``emotion'' is called ``sentiment''. However, sentiment is usually defined as an attitude held toward an object. Emotions are usually represented by DES models as continuous coordinate points in a 3D or 2D Cartesian space, such as valence-arousal-dominance (VAD) and activity-temperature-weight~. VAD is the most widely used DES model, where valence represents the pleasantness ranging from positive to negative, arousal represents the intensity of emotion ranging from excited to calm, and dominance represents the degree of control ranging from controlled to in control. Dominance is difficult to measure and is often omitted, leading to the commonly used two-dimensional VA space~.\nThe relationship between CE and DES and the transformation from one to the other are studied in~. For example, positive valence relates to a happy state, while negative valence relates to a sad or angry state.
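The CE--DES correspondence just described can be illustrated with a toy quadrant mapping in the VA plane. The quadrant-to-label assignment below is an illustrative assumption, not a mapping taken from the studies cited here:

```python
# Toy sketch of one CE-DES correspondence: mapping a valence-arousal
# (VA) coordinate to a coarse categorical emotion by quadrant.
# The quadrant-to-label assignment is an illustrative assumption,
# not a mapping taken from the surveyed literature.

def va_to_category(valence: float, arousal: float) -> str:
    """Map a VA point (each dimension in [-1, 1]) to a coarse label."""
    if valence >= 0:
        return "happiness" if arousal >= 0 else "contentment"
    return "anger" if arousal >= 0 else "sadness"

# Positive valence with high arousal corresponds to a happy state:
print(va_to_category(0.7, 0.6))   # happiness
print(va_to_category(-0.5, 0.8))  # anger
```

In practice, CE--DES conversions are learned or calibrated rather than hard-coded, but the quadrant view captures the intuition that, for example, negative valence with high arousal suggests anger.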
CE models are easier for users to understand and label, but the limited set of categories may not well reflect the subtlety and complexity of emotions. DES can better describe detailed emotions with subtle differences flexibly, but it is difficult for users to distinguish the absolute continuous values, which may also be problematic. \nCE and DES are mainly employed in classification and regression tasks, respectively, with discrete and continuous emotion labels. If we discretize DES into several constant values, we can also use it for classification~. Ranking-based labeling can be applied to ease these comprehension difficulties for raters.\nAlthough less explored in this context, one of the most well-known theories that explains the development of emotional experience is appraisal theory. According to this theory, cognitive evaluation or appraisal of a situation (or, in the case of multimedia, of content) results in the emergence of emotions~. According to Ortony, Clore and Collins (OCC)~, emotions are experienced following a scenario comprising a series of phases. First, there is a perception of an event, object or action. Then, there is an evaluation of the events, objects or actions according to personal wishes or norms. Finally, perception and evaluation result in a specific emotion or emotions arising. Certain appraisal dimensions such as novelty and complexity can be labeled and detected from content. For example, \\citeauthor{Soleymani2015} automatically recognized image novelty and complexity, which are related to interestingness. There are also domain-specific emotion taxonomies and scales. The Geneva Emotional Music Scale is a music-specific emotion model for describing emotions induced by music.
It consists of a hierarchical structure with 45 emotions, nine emotional categories and three superfactors that can describe emotion in music.\n\\begin{table}[!t]\n\\centering\\scriptsize\n\\caption{Representative emotion models employed in ACM.}\n\\begin{tabular}\n{cccc}\n\\hline\n\\textbf{Model} & \\textbf{Ref} & \\textbf{Type} & \\multicolumn{1}{c}{\\textbf{Emotion states/dimensions}} \\\\\n\\hline\nEkman & & CE & happiness, sadness, anger, disgust, fear, surprise\\\\\nMikels & & CE & amusement, anger, awe, contentment, disgust, excitement, fear, sadness\\\\\nPlutchik & & CE & ($\\times$ 3 scales) anger, anticipation, disgust, joy, sadness, surprise, fear, trust\\\\\nClusters & & CE & 29 discrete labels are consistently grouped into 5 clusters at a similar distance level \\\\\nSentiment & & CE & positive, negative, (or neutral)\\\\\nVA(D) & & DES & valence-arousal(-dominance)\\\\\nATW & & DES & activity-temperature-weight\\\\\n\\hline\n\\end{tabular}\n\\label{tab:EmotionModels}\n\\end{table}\nAnother relevant distinction worth mentioning is that emotion in response to multimedia can be expected, induced, or perceived emotion. \nExpected emotion is the emotion that the multimedia creator intends to make people feel, perceived emotion is what people perceive as being expressed, while induced/felt emotion is the actual emotion that is felt by a viewer. Discussing the difference or correlation of various emotion models is beyond the scope of this article.
The typical emotion models that have been widely used in ACM are listed in Table~\\ref{tab:EmotionModels}.", "id": "62f1d80f-e95a-45c5-ae8a-d6a71fc0fefa", "level": "section", "origin_cites_number": 14, "parent_id": "36832e43-191f-4e88-aa11-6c7b91fe1bf8", "prefix_titles": [ [ "title", "Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey" ], [ "section", "Emotion Models from Psychology" ] ], "subsections": [], "title": "Emotion Models from Psychology" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:Datasets}", "id": "21f5b1a5-8481-4ed1-83d5-a0396cc43cfa", "level": "section", "origin_cites_number": 0, "parent_id": "36832e43-191f-4e88-aa11-6c7b91fe1bf8", "prefix_titles": [ [ "title", "Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey" ], [ "section", "Datasets" ] ], "subsections": [ "31b031e5-4c1a-41d6-91e4-966f262d426b", "6be4124c-0549-4c40-a1cf-f2efaaa9747b", "d278bc85-2fef-47c9-b4d6-d1b6bb157aad", "72c21e7b-1086-4048-bb25-1bc86a5c6b31" ], "title": "Datasets" }, { "cite_extract_rate": 0.08333333333333301, "cites": [ 1017 ], "content": "\\label{sec:ImageDatasets}\nThe early datasets for AC of images mainly come from the psychology community and are small in scale. The International Affective Picture System (\\textbf{IAPS}) is an image set that is widely used in psychology to evoke emotions~. Each image that depicts complex scenes is associated with the mean and standard deviation (STD) of VAD ratings on a 9-point scale by about 100 college students. The \\textbf{IAPSa} dataset is selected from IAPS with 246 images~, which are labeled by 20 undergraduate students. The \\textbf{Abstract} dataset consists of 279 abstract paintings without contextual content. Approximately 230 people peer-rated these paintings. The Artistic dataset (\\textbf{ArtPhoto}) includes 806 artistic photographs from a photo sharing site~, with emotions determined by the artist uploading the photos.
The Geneva affective picture database (\\textbf{GAPED}) is composed of 520 negative, 121 positive, and 89 neutral images~. Besides, these images are also rated with valence and arousal values, ranging from 0 to 100 points. There are 500 abstract paintings in both \\textbf{MART} and \\textbf{devArt} datasets, which are collected from the Museum of Modern and Contemporary Art of Trento and Rovereto~, and the ``DeviantArt'' online social network~, respectively.\n\\begin{table*}[!t]\n\\centering\\scriptsize\n\\caption{Released and freely available datasets for AC of images, where `Ref' is short for Reference, `\\# Images' and `\\# Annotators' respectively represent the total number of images and annotators (f: female, m: male), `Labeling' represents the method to obtain labels, such as human annotation (annotation) and keyword searching (keyword), and `Labels' means the detailed labels in the dataset, such as dominant emotion category (dominant), average dimension values (average), personalized emotion (personalized), and emotion distribution (distribution).}\n\\begin{tabular}\n{cccccccc}\n\\hline\n\\textbf{Dataset} & \\textbf{Ref} & \\textbf{\\# Images} & \\textbf{Type} & \\textbf{\\# Annotators}& \\textbf{Emotion model} & \\textbf{Labeling} & \\textbf{Labels}\\\\\n\\hline\nIAPS & & 1,182 & natural & $\\approx$100 (half f) & VAD & annotation & average \\\\\nIAPSa & & 246 & natural & 20 (10f,10m) & Mikels & annotation & dominant \\\\\nAbstract & & 279 & abstract & $\\approx$230 & Mikels & annotation & dominant \\\\\nArtPhoto & & 806 & artistic & -- & Mikels & keyword & dominant \\\\\nGAPED & & 730 & natural & 60 & Sentiment, VA & annotation & dominant, average \\\\\nMART & & 500 & abstract & 25 (11f,14m) & Sentiment & annotation & dominant\\\\\ndevArt & & 500 & abstract & 60 (27f,33m) & Sentiment & annotation & dominant\\\\\nTweet & & 603 & social & 9 & Sentiment & annotation & dominant \\\\\nFlickrCC & & $\\approx$500,000 & social & -- & Plutchik & keyword & dominant 
\\\\\nFlickr & & 301,903 & social & 6,735 & Ekman & keyword & dominant \\\\\nEmotion6 & & 1,980 & social & 432 & Ekman+neutral & annotation & distribution \\\\\nFI & & 23,308 & social & 225 & Mikels & annotation & dominant \\\\\nIESN & & 1,012,901 & social & 118,035 & Mikels, VAD & keyword & personalized \\\\\nFlickrLDL & & 10,700 & social & 11 & Mikels & annotation & distribution \\\\\nTwitterLDL & & 10,045 & social & 8 & Mikels & annotation & distribution \\\\\n\\hline\n\\end{tabular}\n\\label{tab:ImageDataset}\n\\end{table*}\nRecent datasets, especially the large-scale ones, are constructed using images from social networks. The Tweet dataset (\\textbf{Tweet}) consists of 470 and 113 tweets for positive and negative sentiments, respectively~. The \\textbf{FlickrCC} dataset includes about 500k Flickr Creative Commons (CC) images which are generated based on 1,553 adjective-noun pairs (ANPs)~. The images are mapped to Plutchik's Wheel of Emotions with 8 basic emotions, each with 3 scales. The \\textbf{Flickr} dataset contains about 300k images~ with the emotion category defined by the synonym word list that shares the most words with the adjective words of an image's tags and comments. The \\textbf{FI} dataset consists of 23,308 images which are collected from Flickr and Instagram by searching the emotion keywords~ and labeled by 225 Amazon Mechanical Turk (MTurk) workers. The number of images in each Mikels emotion category is larger than 1,000. The \\textbf{Emotion6} dataset~ consists of 1,980 images collected from Flickr with 330 images for each of Ekman's emotion categories. Each image was scored by 15 MTurk workers to obtain the discrete emotion distribution information. The \\textbf{IESN} dataset, constructed for personalized emotion prediction, contains about 1M images from Flickr. Lexicon-based methods and VAD averaging~ are applied to the metadata text from uploaders for expected emotions and to the comments from viewers for personalized emotions.
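As a minimal sketch of the lexicon-based VAD averaging just described, the snippet below averages per-word VAD scores over the affective words found in a text. The toy lexicon entries are invented for illustration and stand in for a real affective lexicon:

```python
# Toy sketch of lexicon-based VAD averaging: each word carries
# (valence, arousal, dominance) scores from an affective lexicon, and
# a text's emotion is the mean over its words found in the lexicon.
# The lexicon entries below are invented for illustration only.

TOY_VAD_LEXICON = {
    "happy":  (8.2, 6.5, 6.9),
    "sad":    (2.1, 3.8, 3.5),
    "sunset": (7.0, 4.2, 5.1),
}

def text_to_vad(text):
    """Return the mean (V, A, D) of the lexicon words in `text`, or None."""
    scores = [TOY_VAD_LEXICON[w] for w in text.lower().split()
              if w in TOY_VAD_LEXICON]
    if not scores:
        return None  # no affective words found in the text
    return tuple(round(sum(dim) / len(scores), 2) for dim in zip(*scores))

print(text_to_vad("happy sunset photo"))  # (7.6, 5.35, 6.0)
```

Applied to uploader metadata this yields an expected-emotion estimate, and applied to each viewer's comments it yields a personalized one, as in the labeling procedure described above.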
There are 7,723 active users with more than 50 involved images. We can also easily obtain the DEC and emotion distribution for each image. The \\textbf{FlickrLDL} and \\textbf{TwitterLDL} datasets~ are constructed for discrete emotion distribution learning. The former is a subset of FlickrCC labeled by 11 viewers. The latter consists of 10,045 images which are collected by searching various sentiment keywords from Twitter and labeled by 8 viewers. These datasets are summarized in Table~\\ref{tab:ImageDataset}.", "id": "31b031e5-4c1a-41d6-91e4-966f262d426b", "level": "subsection", "origin_cites_number": 12, "parent_id": "21f5b1a5-8481-4ed1-83d5-a0396cc43cfa", "prefix_titles": [ [ "title", "Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey" ], [ "section", "Datasets" ], [ "subsection", "Datasets for AC of Images" ] ], "subsections": [], "title": "Datasets for AC of Images" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:MusicDatasets}\nA notable benchmark for music emotion recognition is the music mood classification (AMC) task, organized by the annual Music Information Retrieval Evaluation eXchange\\footnote{\\url{http://www.music-ir.org/mirex/wiki/}} (MIREX)~. In the MIREX mood classification task, 600 songs were initially shared with the participants. Starting from 2013, 1,438 30-second excerpts from Korean pop songs have been added to MIREX. The MIREX benchmark aims to automatically classify songs into five emotion clusters derived from cluster analysis of online tags. The emotional representation of the MIREX mood challenge has been debated in the literature due to its data-driven origin rather than a grounding in the psychology of emotion. For example, in~, semantic and acoustic overlaps have been found between clusters.
The MIREX mood challenge considers only one label for the whole song and disregards the dynamic, time-evolving nature of music.\nComputer Audition Lab 500 (CAL500) is a dataset of 500 popular songs which is labeled with multiple tags including emotions~. The dataset is labeled in the lab by 66 labelers. The Soundtracks dataset~ for music and emotion was developed by~\\citeauthor{Eerola2011} and contains instrumental music from the soundtracks of 60 movies. The expert annotators selected songs based on five basic discrete categories (\\textit{anger}, \\textit{fear}, \\textit{sadness}, \\textit{happiness}, and \\textit{tenderness}) and the dimensional VA representation of emotions. Although not developed with music content analysis in mind, the Database for Emotion Analysis using Physiological Signals or DEAP dataset~ also includes valence, arousal and dominance ratings for 120 one-minute music video clips of western pop music. Each video clip is annotated by 14--16 participants who were asked to report their felt valence, arousal, and dominance on a 9-point scale. AMG1608~ is another music dataset that contains arousal and valence ratings for 1,608 Western songs in different genres and is annotated through MTurk.\nMusic datasets with emotion labels usually consider one emotion label per song (static). The MoodSwings dataset~ was the first to annotate music dynamically over time. MoodSwings was developed by Schmidt \etal and includes 240 15s excerpts of western pop songs with per-second valence and arousal labels, collected on MTurk. The MediaEval ``Emotion in Music'' challenge was organized from 2013 to 2015 within the MediaEval Multimedia Evaluation initiative\\footnote{http://www.multimediaeval.org}. MediaEval is a community-driven benchmarking campaign dedicated to evaluating algorithms for social and human-centered multimedia access and retrieval~. Unlike MIREX, the ``Emotion in Music'' task focused on dynamic emotion recognition in music, tracking arousal and valence over time~.
The data from MediaEval tasks were compiled in MediaEval Database for Emotional Analysis in Music (DEAM) which is the largest available dataset with dynamic annotations, at 2Hz, with valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons license. PMEmo is a dataset of 794 songs with dynamic and static arousal and valence annotations in addition to electrodermal responses from ten participants~.\nThese datasets are summarized in Table~\\ref{tab:MusicDataset}. For a more detailed review of available music datasets with emotional labels, we refer the readers to~.\n\\begin{table*}[!t]\n\\centering\\scriptsize\n\\caption{Released and freely available datasets for music emotion recognition, where `\\# Songs' and `\\# Annotators' respectively represent the total number of songs and annotators per song, `Labeling' represents the method to obtain labels, such as human annotation (annotation), and `Labels' means the detailed labels in the dataset, such as dominant emotion category (dominant), average dimension values (average), personalized emotion (personalized), and emotion distribution (distribution).}\n\\begin{tabular}\n{cccccccc}\n\\hline\n\\textbf{Dataset} & \\textbf{Ref} & \\textbf{\\# Songs} & \\textbf{Type} & \\textbf{\\# Annotators}& \\textbf{Emotion model} & \\textbf{Labeling} & \\textbf{Labels}\\\\\n\\hline\nMIREX mood& & 2,038 & western and kpop & 2--3 & Clusters & annotation & dominant, distribution\\\\\nCAL500 & & 500 & western & >2 & -- & annotation & dominant\\\\\nSoundtracks& & 110 & instrumental &110 & self-defined, VA & annotation & distribution \\\\\nMoodSwings& & 240 & western & Unknown & VA & annotation & distribution\\\\\nDEAP & & 120 & western & 14--16 & VAD & annotation & average \\\\\nAMG1608& & 1,608 & western & 15 & VA & annotation & distribution\\\\\nDEAM & & 1,802 & diverse & 5--10 & VA & annotation & average, distribution \\\\\nPMEmo & & 794 & western & 10 & VA & annotation & distribution 
\\\\\n\\hline\n\\end{tabular}\n\\label{tab:MusicDataset}\n\\end{table*}", "id": "6be4124c-0549-4c40-a1cf-f2efaaa9747b", "level": "subsection", "origin_cites_number": 12, "parent_id": "21f5b1a5-8481-4ed1-83d5-a0396cc43cfa", "prefix_titles": [ [ "title", "Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey" ], [ "section", "Datasets" ], [ "subsection", "Datasets for AC of Music" ] ], "subsections": [], "title": "Datasets for AC of Music" }, { "cite_extract_rate": 0, "cites": [], "content": "The target of video affective content computing is to recognize the emotions evoked by videos. In this field, it is necessary to construct a large benchmark dataset with precise emotional tags. However, the majority of existing research evaluate their proposed methods on their own collected datasets. The scarce video resources in those self-collected datasets, combined with the copyright restrictions result in limited accessibility for other researchers to reproduce existing work. Therefore, it is beneficial to summarize some publicly available datasets in this field. In general, publicly available datasets can be classified into two types: datasets consisting of videos only, such as movie clips or user generated videos, and datasets including both videos and audience's information.", "id": "d278bc85-2fef-47c9-b4d6-d1b6bb157aad", "level": "subsection", "origin_cites_number": 0, "parent_id": "21f5b1a5-8481-4ed1-83d5-a0396cc43cfa", "prefix_titles": [ [ "title", "Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey" ], [ "section", "Datasets" ], [ "subsection", "Datasets for AC of Videos" ] ], "subsections": [ "d6fc556f-5306-4560-b61f-84603258d71a", "7eba031f-c063-4798-bf20-621f78f3118a" ], "title": "Datasets for AC of Videos" }, { "cite_extract_rate": 0.1, "cites": [ 1018 ], "content": "The LIRIS-ACCEDE dataset~ is one of the largest datasets in this area. 
Because it is collected under Creative Commons licenses, there are no copyright issues. The LIRIS-ACCEDE dataset is a living database in development. In order to fulfill the requirements of different tasks, new data, features and tags are included. The LIRIS-ACCEDE dataset includes the Discrete LIRIS-ACCEDE collection and the Continuous LIRIS-ACCEDE collection in 2015 and was used for the MediaEval Emotional Impact of Movies tasks from 2015 to 2018.\nThe Discrete LIRIS-ACCEDE collection~ includes 9,800 clips, which are derived from 40 feature films and 120 short films. Specifically, the majority of the 160 films are collected from the video platform VODO. The duration of all 160 films is about 73.6 hours in total. All of the 9,800 video clips last about 27 hours in total and the duration of each clip is between 8 and 12 seconds, which is long enough for viewers to feel emotions.\nIn this collection, all the 9,800 video clips are labeled with values of valence and arousal.\nThe Continuous LIRIS-ACCEDE collection~ differs from the Discrete LIRIS-ACCEDE collection in annotation type. Roughly speaking, the annotations for movie clips in the Discrete LIRIS-ACCEDE collection are global. It means that a whole 8- to 12-second video clip is represented by a single value of valence and arousal. This annotation type limits the possibility of tracking emotions. To address this issue, 30 longer films are selected from the 160 films mentioned above. The total duration of all the selected films is about 7.4 hours. There are emotional annotations according to valence and arousal for each second of the films in the collection.\nThe MediaEval Affective Impact of Movies collections between 2015 and 2018 are used for the MediaEval Affective Impact of Movies tasks in each corresponding year. Specifically, the MediaEval 2015 Affective Impact of Movies~ includes two sub-tasks: affect detection and violence detection. The Discrete LIRIS-ACCEDE collection was used as the development set.
An additional 1,100 video clips were extracted from 39 new movies and included. All the newly collected data were shared under Creative Commons licenses. In addition, three values were used to label the 10,900 video clips: a binary signal representing the presence of violence, a class tag of the excerpt for felt arousal and an annotation for felt valence.\nThe MediaEval 2016 Affective Impact of Movies Task~ also includes two sub-tasks: global emotion prediction and continuous emotion prediction. The Discrete LIRIS-ACCEDE collection and the Continuous LIRIS-ACCEDE collection were used as the development sets for the first and second sub-tasks, respectively. In addition, 49 new movies were chosen as the test sets: 1,200 short video clips from the new movies were extracted for the first task, and 10 long movies were selected for the second task. For the first sub-task, the tags include scores of valence and arousal for each whole movie clip. For the second sub-task, scores of valence and arousal for each second of the movies are evaluated.\n\\label{sec:VideoDatasets}\nThe MediaEval 2017 Affective Impact of Movies Task~ is focused on long movies for two sub-tasks: valence/arousal prediction and fear prediction. The Continuous LIRIS-ACCEDE collection was selected as the development set, and an additional 14 new movies were collected as the test set. The annotations contain a valence value and an arousal value. In addition, there is a binary value for each 10-second segment representing whether the segment is supposed to induce fear or not.\nThe MediaEval 2018 Affective Impact of Movies task~ is also dedicated to valence/arousal prediction and fear prediction. The Continuous LIRIS-ACCEDE collection and the test set of the MediaEval 2017 Emotional Impact of Movies task were used as the development set. In addition, 12 other movies selected from the set of the 160 movies mentioned in the Discrete LIRIS-ACCEDE part were used as the test set.
Specifically, for the first sub-task, there are annotations containing valence and arousal values for each second of the movies. The beginning and ending times of each sequence in the movies that induces fear are recorded for the second sub-task. \nThe VideoEmotion dataset~ is a well-designed user-generated video collection. It contains 1,101 videos downloaded from web platforms, such as YouTube and Flickr. The annotations of the videos in this dataset are based on Plutchik's wheel of emotions~.\nBoth the YF-E6 Dataset and the VideoStory-P14 Dataset are introduced in~. In order to collect the YF-E6 emotion dataset, six basic emotion types were used as keywords to search videos on YouTube and Flickr. In total, 3,000 videos were collected for the YF-E6 dataset. Ten annotators then performed the labeling; a video clip was added to the dataset only when its tags were more than 50 percent consistent. Finally, the dataset includes 1,637 videos labeled with six basic emotion types. The VideoStory-P14 Dataset is based on the VideoStory dataset. Similar to the VideoEmotion Dataset, the keywords in Plutchik's Wheel of Emotions were used to search for videos when constructing the VideoStory dataset.
Finally, there are 626 videos in the VideoStory-P14 dataset, each having a unique emotion tag.", "id": "d6fc556f-5306-4560-b61f-84603258d71a", "level": "subsubsection", "origin_cites_number": 10, "parent_id": "d278bc85-2fef-47c9-b4d6-d1b6bb157aad", "prefix_titles": [ [ "title", "Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey" ], [ "section", "Datasets" ], [ "subsection", "Datasets for AC of Videos" ], [ "subsubsection", "Datasets consisting of videos only" ] ], "subsections": [], "title": "Datasets consisting of videos only" }, { "cite_extract_rate": 0.07692307692307601, "cites": [ 1018 ], "content": "The DEAP dataset~ includes the EEG and peripheral physiological signals collected from 32 participants while watching 40 one-minute excerpts of music videos. In addition, frontal face videos were recorded for 22 of the 32 participants. Annotators labeled each video according to the level of like/dislike, familiarity, arousal, valence, and dominance. Though the DEAP dataset is publicly available, it should be noted that it does not include the actual videos because of licensing issues, but links to the videos are provided.\nThe MAHNOB-HCI~ is a multimodal dataset including multi-class information recorded in response to video affective stimuli. In particular, speech, face videos, and eye gaze are recorded. In addition, two experiments were conducted to record both peripheral and central nervous system physiological signals from 27 subjects. In the first experiment, subjects were asked to report their emotional responses to 20 emotion-inducing videos, including the level of arousal, valence, dominance, and predictability as well as emotion categories. In the second experiment, the participants evaluated whether they agreed with the displayed labels after watching short videos and images.
The dataset is available for academic use through a web interface.\n\\begin{table*}[!t]\n\\centering\\scriptsize\n \\caption{Released and freely available datasets for video emotion recognition, where `\\#Clips' and `Hours' respectively represent the total number and hours of video clips, `Type' means the genre of the videos in the dataset, `Emotion model' represents the labeling type, `Labeling' represents the method to obtain labels, such as human annotation (annotation) and keyword searching (keyword), and `Labels' means the detailed labels in the dataset, such as dominant emotion category (dominant), average dimension values (average), personalized emotion (personalized), and emotion distribution (distribution).}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}\n{C{1.8 cm} cccccccc}\n\\hline\n\\textbf{Dataset} & \\textbf{Ref} & \\textbf{\\#Clips}& \\textbf{Hours} & \\textbf{Type} & \\textbf{\\# Annotators} & \\textbf{Emotion model} & \\textbf{Labeling} & \\textbf{Labels} \\\\\n\\hline\nDiscrete LIRIS-ACCEDE & & 9,800 & 26.9 & film & - & VA & annotation & dominant \\\\\nContinuous LIRIS-ACCEDE & & 30 & 7.4 & film & 10 (7f,3m) & VA & annotation & average \\\\\nMediaEval 2015 & & 1,100 & - & film & - & 3 discrete VA values & annotation & dominant \\\\\nMediaEval 2016 & & 1,210 & - & film & - & VA & annotation & distribution, average \\\\\nMediaEval 2017 & & 14 & 8 & film & - & VA, fear & annotation & average \\\\\nMediaEval 2018 & & 12 & 9 & film & - & VA, fear & annotation & average \\\\\nVideoEmotion & & 1,101 & 32.7 & user-generated & 10 (5f,5m) & Plutchik & annotation & dominant \\\\\nYF-E6 & & 1,637 & 50.9 & user-generated & 10 (5f,5m) & Ekman & annotation & dominant \\\\\nVideoStory-P14 & & 626 & - & user-generated & - & Plutchik & keyword & dominant \\\\\nDEAP & & 120 & 2 & music video & - & VAD & annotation & personalized \\\\\nMAHNOB-HCI & & - & - & multiple types & - & VAD, Ekman+neutral & annotation & personalized \\\\\nDECAF & & 76 & - & music
video/movies & - & VAD & annotation & personalized \\\\\nAMIGOS & & 20 & - & movies collection & - & VAD, Ekman & annotation & personalized \\\\\nASCERTAIN & & 36 & - & movies collection & 58 (21f,37m) & VA & annotation & personalized \\\\\n\\hline\n\\end{tabular}\n}\n\\label{tab:VideoDataset}\n\\end{table*}\nThe DECAF dataset~ consists of infrared facial video signals, Electrocardiogram (ECG), Magnetoencephalogram (MEG), horizontal Electrooculogram (hEOG) and Trapezius Electromyogram (tEMG), recorded from 30 participants watching 36 movie clips and 40 one-minute music videos, which are derived from the DEAP dataset~. The subjective feedback is based on the valence, arousal, and dominance space. In addition, time-continuous emotion annotations for movie clips are also included in the dataset.\nThe AMIGOS dataset~ includes multi-class affective data: individual and group viewers' responses to both short and long videos. The EEG, ECG, GSR, frontal, and full-body video were recorded in two experimental settings, \\textit{i.e.}, 40 participants watching 16 short emotional clips and 4 long clips. The duration of each selected short video is between 51 and 150 seconds, and the duration of each long excerpt is about 20 minutes. Finally, participants annotated the affective level of valence, arousal, control, familiarity, liking, and basic emotions.\nBig-five personality scales and affective self-ratings of 58 users, together with their EEG, ECG, GSR, and facial activity data, were included in the ASCERTAIN dataset~. The number of videos used as stimuli is 36 and the length of each video clip is between 51 and 128 seconds.
It is the first physiological dataset that is useful for both affective and personality recognition.\nThe publicly available datasets for video affective content analysis are summarized in Table~\\ref{tab:VideoDataset}.", "id": "7eba031f-c063-4798-bf20-621f78f3118a", "level": "subsubsection", "origin_cites_number": 13, "parent_id": "d278bc85-2fef-47c9-b4d6-d1b6bb157aad", "prefix_titles": [ [ "title", "Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey" ], [ "section", "Datasets" ], [ "subsection", "Datasets for AC of Videos" ], [ "subsubsection", "Datasets including both videos and audience's reactions" ] ], "subsections": [], "title": "Datasets including both videos and audience's reactions" }, { "cite_extract_rate": 0.44444444444444403, "cites": [ 1019, 1020, 1021, 7322 ], "content": "\\label{sec:multimodalDatasets}\nIn addition to audiovisual content and viewers' reactions, other modalities, such as language, also contain significant information for affective understanding of multimedia content. \nVisual sentiment is the sentiment associated with the concepts depicted in images. Two datasets were developed through mining images associated with adjective-noun pair (ANP) representations that have affective significance~. ANPs were generated by first using seed terms from Plutchik's Wheel of Emotion~ to query Flickr\\footnote{\\url{https://www.flickr.com}} and YouTube\\footnote{\\url{https://www.youtube.com/}}. After mining the tags associated with visual content on YouTube and Flickr, adjective and noun candidates were identified through part-of-speech tagging. Then adjectives and nouns were paired to create ANP candidates, which were filtered by sentiment strength, named entities, and popularity. The Visual Sentiment Ontology (VSO)\\footnote{\\url{https://visual-sentiment-ontology.appspot.com}} is the result of this process.
SentiBank led to the creation of a photo-tweet sentiment dataset, with both visual and textual data carrying polarity labels, collected on Amazon Mechanical Turk\\footnote{\\url{http://www.ee.columbia.edu/ln/dvmm/vso/download/twitter_dataset.html}}. This work was later extended to form a multilingual ANP set and its dataset in~\\footnote{\\url{http://mvso.cs.columbia.edu}}, containing 15,630 ANPs from 12 major languages and 7.37M images~. The My Reaction When (MRW) dataset contains 50,107 video-sentence pairs crawled from social media, depicting physical or emotional reactions to the situations described in the sentences~. The GIFs are sourced from Giphy\\footnote{\\url{https://giphy.com/}}. Even though there are no emotion labels, the language and visual associations are mainly based on sentiment, which makes this dataset an interesting resource for affective content analysis.\nCMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) is a collection of multiple datasets for multimodal sentiment analysis and emotion recognition. This collection includes more than 23,500 sentence utterance videos from more than 1,000 people from YouTube~\\footnote{\\url{https://github.com/A2Zadeh/CMU-MultimodalSDK}}. All the videos are transcribed and aligned with the audiovisual modalities. \nA multimodal multi-party dataset for emotion recognition in conversation (MELD) was primarily developed for emotion recognition in multiparty interactions~. MELD contains visual, audio, and textual modalities and includes 13,000 utterances from 1,433 dialogues from the TV series Friends, with each utterance labeled with emotion and sentiment. \n\\begin{table*}[!t]\n\\centering\\scriptsize\n \\caption{Released and freely available datasets for multimodal emotion recognition. Disc. for MELD corresponds to six Ekman emotions in addition to neutral.
`Labeling' represents the method used to obtain labels, such as human annotation (annotation), self-reported felt emotion (self-report) and keyword searching (keyword); `Labels' means the detailed labels in the dataset, such as dominant emotion category (dominant), average dimension values (average), personalized emotion (personalized), and emotion distribution (distribution).} \n
\resizebox{\textwidth}{!}{\n\begin{tabular}\n{c c c c c c c c}\n\hline\n
\textbf{Dataset} & \textbf{Ref} & \textbf{\#Samples} & \textbf{Modalities} & \textbf{Type} & \textbf{Emotion model} & \textbf{Labeling} & \textbf{Labels}\\\n\hline\n
SentiBank tweet & & 603 & images, text & images & Sentiment & annotation & dominant\\\n
MVSO & & 7.36M & image, metadata & photos & Sentiment & automatic & average\\\n
CMU-MOSEI & & 23,500 & video, audio, text & YouTube videos & Sentiment & annotation & average\\\n
MELD & & 13,000 & video, audio, text & TV series & Sentiment, Disc. & annotation & dominant\\\n
COGNIMUSE & & 3.5h & video, audio, text & movies & VA & annotation, self-report & dominant\\\n
VR & & 73 & video, audio & VR videos & VA & self-report & average \\\n
\hline\n\end{tabular}\n}\n\label{tab:MMDataset}\n\end{table*}\n
COGNIMUSE is a collection of videos annotated with sensory and semantic saliency, events, cross-media semantics, and emotions~. A subset of 3.5 hours extracted from movies, including the textual modality, is annotated with arousal and valence. \n
\citeauthor{Li_VR} collected a dataset of 360-degree virtual reality videos that can elicit different emotions~. Even though the dataset consists of only 73 short videos, 183s long on average, it is one of the first datasets of its kind, for a medium whose affective understanding remains limited. 
These multimodal datasets are summarized in Table~\\ref{tab:MMDataset}.", "id": "72c21e7b-1086-4048-bb25-1bc86a5c6b31", "level": "subsection", "origin_cites_number": 9, "parent_id": "21f5b1a5-8481-4ed1-83d5-a0396cc43cfa", "prefix_titles": [ [ "title", "Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey" ], [ "section", "Datasets" ], [ "subsection", "Datasets for AC of Multimodal Data" ] ], "subsections": [], "title": "Datasets for AC of Multimodal Data" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:Image}\nIn the early stages, AC researchers mainly worked on designing handcrafted features to bridge the affective gap. Recently, with the advent of deep learning especially convolutional neural networks (CNNs), current methods have shifted to an end-to-end deep representation learning. Motivated by the fact that the perception of image emotions may be dependent on different types of features~, some methods employ fusion strategies to jointly consider multiple features. In this section, we summarize and compare these methods. Please note that here we classify the directly extracted CNN features based on pre-trained deep models into handcrafted features category.", "id": "50a2810e-2777-44d0-85b6-12b160fc2d4d", "level": "section", "origin_cites_number": 1, "parent_id": "36832e43-191f-4e88-aa11-6c7b91fe1bf8", "prefix_titles": [ [ "title", "Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey" ], [ "section", "Affective Computing of Images" ] ], "subsections": [ "6469fc7b-8c4d-423d-88c3-f522313e7cbb", "c7360062-0ae0-4488-a262-c17a829a8a11" ], "title": "Affective Computing of Images" }, { "cite_extract_rate": 0.043478260869565, "cites": [ 1022 ], "content": "\\label{sec:ImageHandcrafted}\n\\textbf{Low-level Features} are difficult to be understood by viewers. These features are often directly derived from other computer vision tasks. 
Some widely extracted features include GIST, HOG2x2, self-similarity and geometric context color histogram features as in~, because of their individual descriptive power and their ability to capture distinct visual phenomena from a scene perspective.\n
Compared with the above generic features, some specific features derived from art theory and psychology have been designed. For example, \citeauthor{machajdik2010affective}~\shortcite{machajdik2010affective} extracted elements-of-art features, including \emph{color} and \emph{texture}. The MPEG-7 visual descriptors are employed in~, which include four color-related descriptors and two texture-related descriptors. How shape features in natural images influence emotions is investigated in~ by modeling the concepts of roundness-angularity and simplicity-complexity. \citeauthor{sartori2015s}~\shortcite{sartori2015s} designed two kinds of visual features to represent different color combinations based on Itten's color wheel.\n
\textbf{Mid-level Features} contain more semantics, are more easily interpreted by viewers than low-level features, and thus are more relevant to emotions. \citeauthor{patterson2012sun}~\shortcite{patterson2012sun} proposed to detect 102 attributes in 5 different categories, including materials, surface properties, functions or affordances, spatial envelope attributes, and object presence. Besides these attributes, eigenfaces, which are informative for images containing faces, are also incorporated in~. More recently, in~, SIFT features are first extracted as basic features, which are encoded with a bag-of-visual-words (BoVW) model to represent multi-scale blocks. Another mid-level representation is the latent topic distribution estimated by probabilistic latent semantic analysis.\n
Harmonious composition is essential in an artwork. Several compositional features, such as low depth of field, are designed to analyze such characteristics of an image~. 
Based on the fact that figure-ground relationships, color patterns, shapes and their diverse combinations are often jointly employed by artists to express emotions in their artworks, \citeauthor{wang2013interpretable}~\shortcite{wang2013interpretable} proposed to extract interpretable aesthetic features.\n
Inspired by principles-of-art, \citeauthor{zhao2014exploring}~\shortcite{zhao2014exploring} designed corresponding mid-level features, including \emph{balance}, \emph{emphasis}, \emph{harmony}, \emph{variety}, \emph{gradation}, and \emph{movement}. For example, Itten's color contrasts and the rate of focused attention are employed to measure \textit{emphasis}.\n
\begin{table*}\n\centering\scriptsize\n\caption{Summary of the hand-crafted features at different levels for AC of images. `\# Feat' indicates the dimension of each feature.}\n
\resizebox{\textwidth}{!}{\n\begin{tabular}{ccc p{8cm} c}\n\hline\n
\textbf{Feature} & \textbf{Ref} & \textbf{Level} & \multicolumn{1}{c}{\textbf{Short description}} & \textbf{\# Feat} \\\n\hline\n
LOW\_C & & low & GIST, HOG2x2, self-similarity and geometric context color histogram features & 17,032\\\n
Elements & & low & color: mean saturation, brightness and hue, emotional coordinates, colorfulness, color names, Itten contrast, Wang's semantic descriptions of colors, area statistics; texture: Tamura, Wavelet and gray-level co-occurrence matrix & 97\\\n
MPEG-7 & & low & color: layout, structure, scalable color, dominant color; texture: edge histogram, texture browsing & $\approx$200 \\\n
Shape & & low & line segments, continuous lines, angles, curves & 219\\\n
IttenColor & & low & color co-occurrence features and patch-based color-combination features & 16,485\\\n\hline\n
Attributes & & mid & scene attributes & 102 \\\n
Sentributes & & mid & scene attributes, eigenfaces & 109 \\\n
Composition & & mid & level of detail, low depth of field, dynamics, rule of thirds & 45 \\\n
Aesthetics & & mid & 
figure-ground relationship, color pattern, shape, composition & 13 \\\n
Principles & & mid & principles-of-art: balance, contrast, harmony, variety, gradation, movement & 165\\\n
BoVW & & mid & bag-of-visual-words on SIFT, latent topics & 330 \\\n\hline\n
FS & & high & number of faces and skin pixels, size of the biggest face, amount of skin w.r.t. the size of faces & 4 \\\n
ANP & & high & semantic concepts based on adjective noun pairs & 1,200 \\\n
Expressions & & high & automatically assessed facial expressions (anger, contempt, disgust, fear, happiness, sadness, surprise, neutral) & 8 \\\n
\hline\n\end{tabular}\n}\n\label{tab:ImageHandCraftedFeatures}\n\end{table*}\n
\textbf{High-level Features} represent the semantic content contained in images and can be easily understood by viewers. Viewers can also readily recognize the conveyed emotions in images through these semantics. In the early years, simple semantic content, such as the faces and skin contained in images, is extracted in~. For images that contain faces, facial expressions may directly determine the emotions. \citeauthor{yang2010exploring}~\shortcite{yang2010exploring} extracted 8 kinds of facial expressions as high-level features. They built compositional features of local Haar appearances by a minimum error based optimization strategy, which are embedded into an improved AdaBoost algorithm. For images detected without faces, the expressions are simply set as \emph{neutral}. 
Finally, they generated an 8-dimensional vector with each element representing the number of faces showing the corresponding facial expression.\n
\begin{table*}[!t]\n\centering\scriptsize\n\caption{Representative work on AC of images using hand-crafted features, where `Fusion' indicates the fusion strategy of different features; `cla, reg, ret, cla\_p, dis\_d, dis\_c' in the Task column are short for classification, regression, retrieval, personalized classification, discrete distribution learning, and continuous distribution learning (the same below), respectively; `Result' is the reported best accuracy for classification, mean squared error for regression, discounted cumulative gain for retrieval, F1 for personalized classification, and KL divergence for distribution learning (the first line~ is the result on sum of squared difference) on the corresponding datasets.}\n
\begin{tabular}\n{l p{2.5cm} l p{1.5cm} p{2.5cm} l R{2.5cm}} \n\hline\n
\textbf{Ref} & \textbf{Feature} & \textbf{Fusion} & \textbf{Learning} & \textbf{Dataset} & \textbf{Task} & \textbf{Result} \\\n\hline\n
 & Elements, Composition, FS & early & NB & IAPSa, Abstract, ArtPhoto & cla & 0.471, 0.357, 0.495\\\n
 & MPEG-7 & -- & KNN & unreleased & cla & 0.827\\\n
 & Shape, Elements & early & SVM, SVR & IAPSa; IAPS & cla; reg & 0.314; V-1.350, A-0.912\\%shape:0.299, V: 1.708, A: 0.943\\\n
 & Segmented objects & -- & SL & IAPS, ArtPhoto & cla & 0.612, 0.610\\\n
 & Sentributes & -- & SVM, LR & Tweet & cla & 0.824\\\n
 & Aesthetics & -- & NB & Abstract, ArtPhoto & cla & 0.726, 0.631\\\n
 & Principles & -- & SVM, SVR & IAPSa, Abstract, ArtPhoto; IAPS & cla; reg & 0.635, 0.605, 0.669; V-1.270, A-0.820\\\n
 & LOW\_C, Elements, Attributes, Principles, ANP, Expressions & graph & MGL & IAPSa, Abstract, ArtPhoto, GAPED, Tweet & ret & 0.773, 0.735, 0.658, 0.811, 0.701\\\n
 & IttenColor & -- & SL & MART, devArt & cla & 0.751, 0.745\\\n
 & BoVW & -- & MIL & IAPSa, Abstract, ArtPhoto & cla & 0.699, 0.636, 
0.707\\\n
 & IttenColor & -- & MC & MART, devArt & cla & 0.728, 0.761\\\n\hline\n
 & GIST, Elements, Attributes, Principles, ANP, Expressions & graph & RMTHG & IESN & cla\_p & 0.582\\\n\hline\n
 & GIST, Elements, Principles & -- & SSL & Abstract & dis\_d & 0.134\\\n
 & GIST, Elements, Attributes, Principles, ANP, deep features from AlexNet & weighted & WMMSSL & Abstract, Emotion6, IESN & dis\_d & 0.482, 0.479, 0.478 \\\n
 & ANP, VGG16 & -- & ACPNN & Abstract, Emotion6, FlickrLDL, TwitterLDL & dis\_d & 0.480, 0.506, 0.469, 0.555\\\n
 & GIST, Elements, Attributes, Principles, ANP, AlexNet & weighted & WMMCPNN & Abstract, Emotion6, IESN & dis\_d & 0.461, 0.464, 0.470\\\n
 & GIST, Elements, Attributes, Principles, ANP, AlexNet & -- & MTSSR & IESN & dis\_c & 0.436\\\n
\hline\n\end{tabular}\n\label{tab:ImageHandCraftedMethods}\n\end{table*}\n
More recently, the semantic concepts are described by adjective noun pairs (ANPs)~, which are detected by SentiBank~ or DeepSentiBank~. The advantage of ANPs is that they turn a neutral noun into a concept with strong emotion and make the concepts more detectable than adjectives or nouns alone. A 1,200-dimensional vector representing the probabilities of the ANPs can form a feature vector.\n
Table~\ref{tab:ImageHandCraftedFeatures} summarizes the above-mentioned hand-crafted features at different levels for AC of images. Some recent methods also extract CNN features from pre-trained deep models, such as AlexNet~ and VGGNet~.\n
To map the extracted handcrafted features to emotions,\n\textbf{Machine Learning Methods} are commonly employed. 
Some typical learning models include Naive Bayes (NB)~, support vector machine (SVM)~, $K$ nearest neighbor (KNN)~, sparse learning (SL)~, logistic regression (LR)~, multiple instance learning (MIL)~, and matrix completion (MC)~ for emotion classification, support vector regression (SVR)~ for emotion regression, and multi-graph learning (MGL)~ for emotion retrieval.\n
Instead of assigning the DEC to an image, some recent methods began to focus on the perception subjectivity challenge, \textit{i.e.}, predicting personalized emotions for each viewer or learning emotion distributions for each image. The personalized emotion perceptions of a specific user after viewing an image are predicted in~, in the context of online social networks. They considered different types of factors that may contribute to emotion recognition, including the images' visual content, the social context related to the corresponding users, the emotions' temporal evolution, and the images' location information. To jointly model these factors, they proposed rolling multi-task hypergraph learning (RMTHG), which can also easily handle the data incompleteness issue.\n
Generally, the distribution learning task can be formulated as a regression problem, which slightly differs for different distribution categories (\textit{i.e.}, discrete or continuous). For example, if emotion is represented by CE, the regression problem targets predicting the discrete probability of each emotion category with the sum equal to 1; if emotion is represented based on DES, the regression problem is typically transformed into the prediction of the parameters of specified continuous probability distributions. For the latter, one usually needs to first determine the form of the continuous distributions, such as the exponential distribution or the Gaussian distribution. 
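The discrete case above can be made concrete with a minimal numpy sketch (not tied to any particular method discussed here): raw per-category regression outputs are clipped to be non-negative and normalized so they sum to 1, and a prediction is scored with the KL divergence commonly reported for this task. All numbers are illustrative.

```python
import numpy as np

def scores_to_distribution(scores, eps=1e-8):
    """Turn raw per-category regression scores into a valid discrete
    emotion distribution: clip negatives, then normalize to sum to 1."""
    s = np.clip(np.asarray(scores, dtype=float), 0.0, None)
    return (s + eps) / (s + eps).sum()

def kl_divergence(p, q, eps=1e-8):
    """KL(p || q), the usual evaluation metric for distribution learning."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Six raw regressor outputs (one per Emotion6-style category), some negative.
raw = [0.9, -0.2, 0.4, 0.0, 0.1, 0.6]
pred = scores_to_distribution(raw)
truth = np.array([0.5, 0.0, 0.2, 0.0, 0.1, 0.2])
score = kl_divergence(truth, pred)
```

The clip-and-normalize step is exactly what independent per-category regressors need as post-processing, since nothing in their training guarantees non-negativity or a unit sum.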
Some representative learning methods for emotion distribution learning of discrete emotions include shared sparse learning (SSL)~, weighted multimodal SSL (WMMSSL)~, augmented conditional probability neural network (ACPNN)~, and weighted multi-model CPNN (WMMCPNN)~. Both SSL and WMMSSL can only model one test image at a time, which is computationally inefficient. After the parameters are learned, ACPNN and WMMCPNN can easily predict the emotion distribution of a test image. Based on the assumption that the VA emotion labels can be well modeled by a bidimensional Gaussian mixture model (GMM) with 2 components, \citeauthor{zhao2017continuous}~\shortcite{zhao2017continuous} proposed to learn continuous emotion distributions in VA space by multi-task shared sparse regression (MTSSR). Specifically, the parameters of the GMM are regressed, including the mean vector and covariance matrix of the 2 Gaussian components as well as the mixing coefficients.\n
Table~\ref{tab:ImageHandCraftedMethods} summarizes some representative work based on hand-crafted features. 
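As an illustration of this GMM parameterization of continuous VA distributions, the sketch below evaluates a mixture of two bidimensional Gaussians over the VA plane. The mixing coefficients, means, and covariances are made-up stand-ins for the regressed parameters, not values from any real model.

```python
import numpy as np

def gaussian2d(x, mean, cov):
    """Density of a bidimensional Gaussian at point x."""
    d = x - mean
    inv = np.linalg.inv(cov)
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * d @ inv @ d)

def gmm_density(x, weights, means, covs):
    """Mixture of bidimensional Gaussians over the (valence, arousal) plane."""
    return sum(w * gaussian2d(x, m, c)
               for w, m, c in zip(weights, means, covs))

# Hypothetical regressed parameters for one image:
weights = [0.6, 0.4]                      # mixing coefficients, sum to 1
means = [np.array([0.3, 0.5]),            # (valence, arousal) component means
         np.array([-0.4, 0.2])]
covs = [0.05 * np.eye(2), 0.08 * np.eye(2)]  # component covariance matrices

# Density at the first component mean should dominate far-away points.
p = gmm_density(np.array([0.3, 0.5]), weights, means, covs)
```

A regression method in this setting predicts exactly these quantities (weights, means, covariances) from image features, after which the full continuous distribution is recovered as above.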
Generally, high-level features (such as ANP) can achieve better recognition performance for images with rich semantics, mid-level features (such as Principles) are more effective for artistic photos, while low-level features (such as Elements) perform better for abstract paintings.", "id": "6469fc7b-8c4d-423d-88c3-f522313e7cbb", "level": "subsection", "origin_cites_number": 23, "parent_id": "50a2810e-2777-44d0-85b6-12b160fc2d4d", "prefix_titles": [ [ "title", "Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey" ], [ "section", "Affective Computing of Images" ], [ "subsection", "Handcrafted Features-Based Methods for AC of Images" ] ], "subsections": [], "title": "Handcrafted Features-Based Methods for AC of Images" }, { "cite_extract_rate": 0.27272727272727204, "cites": [ 1023, 1017, 1024 ], "content": "\\label{sec:ImageDeep}\nTo deal with the situation where images are weakly labeled, a potentially cleaner subset of the training instances are selected progressively~. First, they trained an initial CNN model based on the training data. Second, they selected the training samples with distinct sentiment scores between the two classes with a high probability based on the prediction score of the trained model on the training data itself. Finally, the pre-trained AlexNet on ImageNet is fine-tuned to classify emotions into 8 categories by changing the last layer of the CNN from 1000 to 8~. Besides using the fully connected layer as classifier, they also trained an SVM classifier based on the extracted features from the second to the last layer of the pre-trained AlexNet model.\nMulti-level deep representations (MldrNet) are learned in~ for image emotion classification. They segmented the input image into 3 levels of patches, which are input to 3 different CNN models, including Alexnet, aesthetics CNN (ACNN), and texture CNN (TCNN). The fused features are fed into multiple instance learning (MIL) to obtain the emotion labels. 
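The fine-tuning recipe described above, replacing the 1000-way ImageNet output layer with an 8-way emotion layer, can be sketched schematically as follows. This is a framework-free numpy illustration with random stand-in weights, not the actual AlexNet implementation; a real pipeline would load pre-trained weights in a deep learning framework and train only (or mostly) the new head.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

# 4,096-d penultimate (fc7) feature of an AlexNet-like backbone;
# a random stand-in here, in place of a real forward pass.
feature = rng.normal(size=4096)

# Original ImageNet head: 4096 -> 1000 classes (discarded when fine-tuning).
w_imagenet = rng.normal(size=(1000, 4096)) * 0.01

# Replaced head for fine-tuning: 4096 -> 8 Mikels emotion categories.
w_emotion = rng.normal(size=(8, 4096)) * 0.01
b_emotion = np.zeros(8)

probs = softmax(w_emotion @ feature + b_emotion)  # emotion probabilities
pred = int(np.argmax(probs))                      # predicted category index
```

The alternative mentioned above, training an SVM on the 4,096-d penultimate features, simply swaps this softmax head for an external classifier while keeping the backbone fixed.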
\citeauthor{zhu2017dependency}~\shortcite{zhu2017dependency} proposed to integrate the different levels of features by a bidirectional GRU model (BiGRU) to exploit their dependencies based on MldrNet. They generated two features from the BiGRU model and concatenated them as the final feature representations. To enforce the feature vectors extracted from each pair of images from the same category to be close enough, and those from different categories to be far away, they proposed to jointly optimize a contrastive loss together with the traditional cross-entropy loss.\n
More recently, \citeauthor{yang2018retrieving}~\shortcite{yang2018retrieving} employed deep metric learning to explore the correlation of emotional labels with the same polarity, and proposed a multi-task deep framework to optimize both retrieval and classification tasks. By considering the relations among emotional categories in the Mikels' wheel, they jointly optimized a novel sentiment constraint with the cross-entropy loss. Extending triplet constraints to a hierarchical structure, the sentiment constraint employs a sentiment vector based on the texture information from the convolutional layer to measure the difference between affective images. In~, \citeauthor{yang2018retrieving} proposed a weakly supervised coupled convolutional neural network to exploit the discriminability of localized regions for emotion classification. Based on the image-level labels, a sentiment map is first detected in one branch with the cross spatial pooling strategy. Then the holistic and localized information is jointly combined in the other branch to conduct a classification task. The detected sentiment map can easily explain which regions of an image determine the emotions.\n
The above deep methods mainly focused on the dominant emotion prediction. There is also some work on emotion distribution learning based on deep models. 
The very first work is ``a mixed bag of emotions''~, which trains a deep CNN regressor (CNNR) for each emotion category in Emotion6~ based on the AlexNet architecture. They changed the number of output nodes to 1 to predict a real value for each emotion category and replaced the Softmax loss with the Euclidean loss. To ensure that the probabilities of the different emotion categories sum to 1, they normalized the predicted probabilities of all emotion categories. However, CNNR has some limitations. First, the predicted probability cannot be guaranteed to be non-negative. Second, the probability correlations among different emotions are ignored, since the regressor for each emotion category is trained independently. In~, \citeauthor{yang2017joint} designed a multi-task deep framework based on VGG16 by jointly optimizing the cross-entropy loss for emotion classification and the Kullback-Leibler (KL) divergence loss for emotion distribution learning. To match single emotion datasets to the emotion distribution learning setting, they transformed each single label into an emotion distribution with emotion distances computed on Mikels' wheel~. 
By extending the size of the training samples, this method achieves state-of-the-art performance for discrete emotion distribution learning.\n
\begin{table*}[!t]\n\centering\scriptsize\n\caption{Representative work on deep learning based AC of images, where `Pre' indicates whether the network is pre-trained using ImageNet, `\#Feat' indicates the dimension of the last feature mapping layer before the emotion output layer, `Cla' indicates the classifier used after the last feature mapping with default Softmax, `Loss' indicates the loss objectives (besides the common cross-entropy loss for classification), and `Result' is the reported best accuracy for classification, discounted cumulative gain for retrieval, and KL divergence for distribution learning on the corresponding datasets.}\n
\resizebox{\textwidth}{!}{\n\begin{tabular}\n{c C{1.5cm} c C{1cm} cc C{3cm} C{0.4cm} C{2cm}} \n\hline\n
\textbf{Ref} & \textbf{Base net} & \textbf{Pre} & \textbf{\#Feat} & \textbf{Cla} & \textbf{Loss} & \textbf{Dataset} & \textbf{Task} & \textbf{Result} \\\n\hline\n
 & self-defined & no & 24 & -- & -- & FlickrCC & cla & 0.781\\\n
 & AlexNet & yes & 4,096 & SVM & -- & FI, IAPSa, Abstract, ArtPhoto & cla & 0.583, 0.872, 0.776, 0.737\\\n
 & AlexNet, ACNN, TCNN & yes & 4,096, 256, 4,096 & MIL & -- & \tiny{FI, IAPSa, Abstract, ArtPhoto, MART} & cla & \tiny{0.652, 0.889, 0.825, 0.834, 0.764}\\\n
 & self-defined & no & 512 & -- & contrastive & FI, IAPSa, ArtPhoto & cla & 0.730, 0.902, 0.855\\\n
 & GoogleNet-Inception & yes & 1,024 & -- & sentiment & FI, IAPSa, Abstract, ArtPhoto & cla; ret & 0.676, 0.442, 0.382, 0.400; 0.780, 0.819, 0.788, 0.704\\\n
 & ResNet-101 & yes & 2,048 & -- & -- & FI, Tweet & cla & 0.701, 0.814\\\n\hline\n
 & AlexNet & yes & 4,096 & -- & Euclidean & Emotion6 & dis\_d & 0.480\\\n
 & VGG16 & yes & 4,096 & -- & KL & Emotion6, FlickrLDL, TwitterLDL & dis\_d & 0.420, 0.530, 
0,530\\\\\n\\hline\n\\end{tabular}\n}\n\\label{tab:ImageDeepMethods}\n\\end{table*}\nThe representative deep learning based methods are summarized in Table~\\ref{tab:ImageDeepMethods}. The deep representation features generally perform better than the hand-crafted ones, which are intuitively designed for specific domains based on several small-scale datasets. However, how the deep features correlate to specific emotions is unclear.", "id": "c7360062-0ae0-4488-a262-c17a829a8a11", "level": "subsection", "origin_cites_number": 11, "parent_id": "50a2810e-2777-44d0-85b6-12b160fc2d4d", "prefix_titles": [ [ "title", "Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey" ], [ "section", "Affective Computing of Images" ], [ "subsection", "Deep Learning-Based Methods for AC of Images" ] ], "subsections": [], "title": "Deep Learning-Based Methods for AC of Images" }, { "cite_extract_rate": 0.0625, "cites": [ 1025 ], "content": "\\label{sec:Music}\nMusic emotion recognition (MER) strives to identify emotion expressed by music and subsequently predict listener's felt emotion from acoustic content and music metadata, \\textit{e.g.}, lyrics, genre, \\textit{etc.} Emotional understanding of music have applications in music recommendation and is particularly useful for producing music retrieval. An analysis of search queries from creative professionals showed that 80\\% contain emotional terms, showing emotions prominence in that field~. A growing number of work have tried to address emotional understanding of music from acoustic content and metadata (see~ for earlier reviews on this topic). \nEarlier work on emotion recognition from music relied on extracting acoustic features similar to the ones used in speech analysis, such as audio energy and formants. Acoustic features describe attributes related to musical dimensions. 
Musical dimensions include melody, harmony, rhythm, dynamics, timbre (tone color), expressive techniques, musical texture, and musical form~, as shown in Table~\ref{tab:MusicFeatures}. Some also add energy as a musical dimension, which is important for MER~. \n
Melody is a linear succession of tones and can be captured by features representing key, pitch and tonality. Among others, chroma is often used to represent melodic features~. \n
Harmony is how the combination of various pitches is processed during hearing. Understanding harmony involves chords, or multiple notes played together. Examples of acoustic features capturing harmony include chromagram, key, mode, and chords~.\n
Rhythm consists of repeated patterns of musical sounds, \textit{i.e.}, notes and pulses, that can be described in terms of tempo and meter. Higher-tempo songs often induce higher arousal; fluent rhythm is associated with higher valence, while firm rhythm is associated with sad songs~. Mid-level acoustic features, such as onset rate, tempo and beat histogram, can represent the rhythmic characteristics of music. \n
Dynamics of music involve the variation in softness or loudness of notes, which includes change of loudness (contrast) and emphasis on individual sounds (accent)~. Dynamics of music can be captured by changes in acoustic features related to energy, such as root mean square (RMS) energy.\n
Timbre is the perceived sound quality of musical notes; it is what differentiates different voices and instruments playing the same sound. Acoustic features capturing timbre describe sound quality~; they include MFCC, spectral features (centroid, contrast, flatness), and zero crossing rate~.\n
Expressive techniques are the way a musical piece is played, including tempo and articulation~. 
Acoustic features, such as tempo, attack slope, and attack time, can be used to describe this dimension.\n
Musical texture is how rhythmic, melodic, and harmonic features are combined in music production~. It is related to the range of tones played at the same time. Musical form describes how a song is structured, such as introduction, verse and chorus~. \n
Energy, whose dynamics are captured by the dynamics-related features, is strongly associated with arousal perception. \n
\begin{table*}[!t]\n\centering\scriptsize\n \caption{Musical dimensions and acoustic features describing them.}\n
\begin{tabular}\n{ l p{10cm} }\n\hline\n
\textbf{Musical dimension} & \textbf{Acoustic features} \\\n\hline\n
Melody & pitch \\%\hline\n
Harmony & chromagram, chromagram peak, key, mode, key clarity, harmonic change, chords \\%\hline\n
Rhythm & tempo, beat histograms, rhythm regularity, rhythm strength, onset rate\\%\hline\n
Dynamics and loudness & RMS energy, loudness, timbral width\\%\hline\n
Timbre & MFCC, spectral shapes (centroid, shape, spread, skewness, kurtosis, contrast and flatness), brightness, rolloff frequency, zero crossing rate, spectral contrast, auditory modulation features, inharmonicity, roughness, dissonance, odd to even harmonic ratio \\%\hline\n
Musical form & similarity matrix (similarity between all possible frames) \\%\hline\n
Texture & attack slope, attack time\\\n
\hline\n\end{tabular}\n\label{tab:MusicFeatures}\n\end{table*}\n
There are a number of toolboxes available for extracting acoustic features from music that can be used for music emotion recognition. 
\nMusic Analysis, Retrieval and Synthesis for Audio Signals (Marsyas)~ is an open source framework developed in C++ that supports extracting a large range of acoustic features with music information retrieval applications in mind, including time-domain zero-crossings, spectral centroid, rolloff, flux, and Mel-frequency cepstral coefficients (MFCC), \textit{etc.}\n
MIRToolbox~ is an open source toolbox implemented in MATLAB for music information retrieval applications. MIRToolbox offers the ability to extract a comprehensive set of acoustic features at different levels, including features related to tonality, rhythm, and structure.\n
Speech and Music Interpretation by Large-space Extraction (openSMILE)~ is open source software developed in C++ with the ability to extract a large number of acoustic features for speech and music analysis in real time. \n
LibROSA~ is a Python package for music and audio analysis. It is mainly developed with music information retrieval applications in mind and supports importing from different audio sources and extracting musical features such as onsets, chroma and tempo, in addition to the low-level acoustic features. \n
ESSENTIA~ is an open source library developed in C++ with a Python interface for audio analysis. ESSENTIA contains an extensive collection of algorithms supporting audio input/output functionality, standard digital signal processing blocks, statistical characterization of data, and a large set of spectral, temporal, tonal and high-level music descriptors. \n
Music emotion recognition either attempts to classify songs or excerpts into categories (classification) or to estimate their expressed emotions on continuous dimensions (regression). The choice of machine learning model in music emotion recognition depends on the emotional representation used. Mood clusters~, dimensional representations such as arousal, tension and valence, as well as music-specific emotion representations can be used. 
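As a minimal illustration of two of the low-level acoustic features listed in Table~\ref{tab:MusicFeatures}, the following numpy sketch computes frame-wise zero crossing rate (a crude timbre/pitch-content cue) and RMS energy (a dynamics cue) on a synthetic signal. Real systems would extract such features with one of the toolboxes above; the frame length, hop size, and test tones here are arbitrary choices.

```python
import numpy as np

def frame_features(signal, frame_len=1024, hop=512):
    """Frame-wise zero crossing rate and RMS energy."""
    zcr, rms = [], []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        # fraction of adjacent sample pairs whose sign differs
        zcr.append(np.mean(np.signbit(frame[:-1]) != np.signbit(frame[1:])))
        rms.append(np.sqrt(np.mean(frame ** 2)))
    return np.array(zcr), np.array(rms)

# One loud high-pitched second followed by one quiet low-pitched second.
sr = 22050
t = np.arange(sr) / sr
loud_high = 0.9 * np.sin(2 * np.pi * 880 * t)
quiet_low = 0.1 * np.sin(2 * np.pi * 110 * t)
zcr, rms = frame_features(np.concatenate([loud_high, quiet_low]))
```

On this signal both feature tracks drop in the second half, matching the intuition that higher energy and brighter content tend to be associated with higher arousal.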
An analysis of the methods proposed for MediaEval ``Music in Emotion'' task submissions revealed that using deep learning accounted for the superior performance for emotion recognition much more than the choice features~. Recent methods for emotion recognition in music rely on deep learning and often use spectrogram features that are converted to images~. \\citeauthor{aljanaki2018data} proposed learning musically meaningful mid-level perceptual features that can describe emotions in music~. They demonstrated that perceptual features such as melodiousness, modality, rhythmic complexity and dissonance can describe a large portion of emotional variance in music both in dimensional representation and MIREX clusters. They also trained a deep convolutional neural network to recognize these mid-level attributes. There have been also work attempting to use lyrics in addition to acoustic content for recognizing emotion in music~. However, lyrics are copyrighted and not easily available which hinders further work in this direction.", "id": "6a4a2789-5122-4ca2-8207-b0cf83ceda85", "level": "section", "origin_cites_number": 16, "parent_id": "36832e43-191f-4e88-aa11-6c7b91fe1bf8", "prefix_titles": [ [ "title", "Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey" ], [ "section", "Affective Computing of Music" ] ], "subsections": [], "title": "Affective Computing of Music" }, { "cite_extract_rate": 0.5, "cites": [ 1026 ], "content": "\\label{sec:Video}\nCurrently, the features used in affective video content analysis are mainly from two categories~. One is considering the stimulus of video content and extracting the features reflecting the emotions conveyed by the video content itself. And the other is extracting features from the viewers. 
Features extracted from the video content are content-based features, and features formed from the signals of the viewers' responses are viewer-related features.", "id": "14b7a32f-91ab-4f67-baaf-fabb6149902f", "level": "section", "origin_cites_number": 2, "parent_id": "36832e43-191f-4e88-aa11-6c7b91fe1bf8", "prefix_titles": [ [ "title", "Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey" ], [ "section", "Affective Computing of Videos" ] ], "subsections": [ "0091d8de-1e85-4000-afb6-6cc1c3dcc3de", "849377a8-415a-42d7-8a8d-eb0aab34165c", "930f941a-bd9c-4939-bb1e-c81c0b2dac96", "cefa92e9-5af7-4000-8441-381e9420e9cf", "dc8e2b0f-60af-45ac-bc34-2b55fa2b0f2b" ], "title": "Affective Computing of Videos" }, { "cite_extract_rate": 0, "cites": [], "content": "Generally speaking, the video content comprise of a series of ordered frames as well as corresponding audio signals. Therefore, it is natural to extract features from these two modalities. The audiovisual features can further be divided into low-level and mid-level according to their ability to describe the semantics of video content.", "id": "0091d8de-1e85-4000-afb6-6cc1c3dcc3de", "level": "subsection", "origin_cites_number": 0, "parent_id": "14b7a32f-91ab-4f67-baaf-fabb6149902f", "prefix_titles": [ [ "title", "Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey" ], [ "section", "Affective Computing of Videos" ], [ "subsection", "Content-related Features" ] ], "subsections": [ "688b2f4a-8d63-4f40-b57e-7f57ac458bc1", "1257a6cb-e4fc-4f9d-88bf-5637943576d0" ], "title": "Content-related Features" }, { "cite_extract_rate": 0.05, "cites": [ 1018 ], "content": "Commonly, the low-level features are directly computed from the raw visual and audio content, and usually carry no semantic information. As for visual content, color, lighting, and tempo are important elements that can endow the video with strong emotional rendering and further give viewers direct visual stimuli. 
In many cases, features are computed over each frame of the video, and their averages over the whole video are taken as visual features. Specifically, the color-related features often contain the histogram and variance of color~, the proportions of color~, the number of white frames and fades~, the grayness~, darkness ratio, color energy~, brightness ratio and saturation~, \textit{etc}. In addition, the difference between dark and light can be reflected by the lighting key, which is used to evoke emotions in video and draw the attention of viewers by creating an emotional atmosphere~. As for the tempo-related features, properties of shots can reinforce the expression of a video, such as the shot change rate and shot length variance~ according to movie grammar. To better take advantage of the temporal information of the video, motion vectors have been computed as features in~. Since optical flow can characterize the influence of camera motions, the histogram of optical flow (HOF) has been computed as a feature in~. Additionally, \citeauthor{yi2018multi}~\shortcite{yi2018multi} traced motion key points at multiple spatial scales and computed the mean motion magnitude of each frame as features.
To represent audio content, pitch, zero crossing rate (ZCR), Mel frequency cepstrum coefficients (MFCC), and energy are the most popular features. In particular, the MFCC~ and its $\Delta{MFCC}$ are frequently used to characterize emotions in video clips, while the derivatives and statistics (min, max, mean) of MFCC or $\Delta{MFCC}$ are also widely explored. As for pitch, ~ shows that the pitch of sound is closely associated with some emotions, such as anger with a higher pitch and sadness with a lower standard deviation of pitch. A similar situation occurs with energy~. For example, the total energy of anger or happiness is higher than that of unexciting emotions.
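As a toy illustration of how such frame-level audio statistics are obtained, short-time energy and zero crossing rate can be sketched with plain NumPy; the frame length, hop size, and synthetic test tone below are arbitrary choices for illustration, not parameters from any of the surveyed systems:

```python
import numpy as np

def frame_signal(x, frame_len=1024, hop=512):
    """Split a 1-D signal into overlapping frames."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop: i * hop + frame_len] for i in range(n_frames)])

def zero_crossing_rate(frames):
    """Fraction of adjacent-sample sign changes per frame."""
    signs = np.sign(frames)
    return np.mean(np.abs(np.diff(signs, axis=1)) > 0, axis=1)

def short_time_energy(frames):
    """Mean squared amplitude per frame."""
    return np.mean(frames ** 2, axis=1)

# Toy signal: a 440 Hz tone whose amplitude grows over one second.
sr = 16000
t = np.arange(sr) / sr
x = np.linspace(0.1, 1.0, sr) * np.sin(2 * np.pi * 440 * t)

frames = frame_signal(x)
zcr = zero_crossing_rate(frames)
energy = short_time_energy(frames)

# Clip-level statistics of the kind used as low-level audio features.
features = {
    "zcr_mean": zcr.mean(), "zcr_std": zcr.std(),
    "energy_mean": energy.mean(), "energy_max": energy.max(),
}
```

For a pure tone, the mean ZCR approximates twice the tone frequency divided by the sampling rate, and the frame energies track the growing amplitude, which is exactly the kind of clip-level summary fed to the classifiers discussed below.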
ZCR~ is used to separate different types of audio signals, such as music, environmental sound and human speech. Besides these frequently used features, audio flatness~, spectral flux~, delta spectrum magnitude, harmony~, band energy ratio, spectral centroid~, and spectral contrast~ are also utilized.
Evidently, the aforementioned features are mostly handcrafted. With the emergence of deep learning, features can be automatically learned through deep neural networks. Some pre-trained convolutional neural networks (CNNs) are used to learn static representations from every frame or from selected key frames, while a long short-term memory (LSTM) network is exploited to capture the dynamic representations existing in videos.
For instance, in~, an AlexNet with seven fully-connected layers trained on 2600 ImageNet classes is used to learn features. A Convolutional Auto-Encoder (CAE) is designed to ensure that the CNNs can extract the visual features effectively in~. \citeauthor{ben2018deep}~ first used the pre-trained ResNet-152 to extract feature vectors. These vectors are then fed into an LSTM according to their temporal order to extract high-order representations. The pre-trained SoundNet model is utilized to learn audio features. Because in many cases the emotions expressed in a video are induced and communicated by its protagonist, features of the protagonist are extracted from the key frame by a pre-trained CNN and used in video affective analysis in~. In addition to the protagonist, other objects in each frame of a video also give insights into its emotional expression. For example, in~, \citeauthor{shukla2018looking} removed the non-gaze regions from video frames (Eye ROI) and built a coarse-grained scene structure retaining gist information using a Gaussian filter with a chosen variance.
After these operations~, subsequent video affective analysis can pay more attention to the important information and suffer less from unnecessary noise.
Unlike low-level features, mid-level features often contain semantic information. For example, the EmoBase10 feature set, depicting audio cues, is computed in~. \citeauthor{hu2016multi}~ proposed a method that combines audio and visual features to model contextual structures of the key frames selected from a video, producing a multi-instance sparse coding (MI-SC) representation for subsequent analysis. In addition, lexical features~ are extracted from the dialogues of speakers by using a natural language toolkit. These features can reflect the emotional changes within a video and can also represent the overall emotional expression of a video. \citeauthor{muszynski2019recognizing}~ used aesthetic movie highlights, related to occurrences of meaningful movie scenes, to define expert features. Such expert features are more knowledgeable and abstract for video affective analysis, especially for movies.
HHTC features, computed by combining Hilbert-Huang Transform features in the visual and audio domains with cross-correlation features, are proposed in~.
\label{sec:Viewer-relatedFeatures}
Besides the content-related features, viewers' facial expressions and the changes of physiological signals evoked by video content are the most common sources for extracting viewer-related features. \citeauthor{mcduff2017large}~ coded the facial actions of viewers for further affective video analysis. Among various physiological signals, electrocardiography (ECG), galvanic skin response (GSR), and electroencephalography (EEG) are the most frequently used ones, and their statistical measures, such as mean, median, spectral power bands, \textit{etc}., are often recommended as features. \citeauthor{Wang2015Emotion}~ used EEG signals to construct a new EEG feature with the assistance of video content by exploiting canonical correlation analysis (CCA).
In~, some viewer-related features are extracted from the whole pupil dilation ratio time-series, regardless of the differences in pupil diameter among human eyes: its average and standard deviation serve as global features, and the four spectral power bands as local features.
In addition to the viewers' responses mentioned above, the comments or other textual information produced by viewers can also reflect their attitudes or emotional reactions toward the videos.
In light of this, it is reasonable to extract features from users' comments or other textual information. In~, a ``sentiment analysis'' module using unigrams and bigrams is built to learn comment-related features of the data collected via the YouTube links provided by the DEAP dataset.
After feature extraction, a classifier or a regressor is used to obtain emotional analysis results. For classification, there are several frequently used classifiers, including support vector machines (SVM)~, Naive Bayes (NB)~, Linear Discriminant Analysis (LDA)~, logistic regression (LR)~, and ensemble learning~, \textit{etc}.
Recent work shows that SVM-based methods are very popular for affective video content analysis due to their simplicity, max-margin training property, and use of kernels~. For example, \citeauthor{yi2018multi}'s work~ demonstrated that the linear SVM is more suitable for classification than RBM, MLP, and LR. In~, LDA, linear SVM (LSVM), and radial basis SVM (RSVM) classifiers are employed in emotion recognition experiments, and the RSVM obtained the best F1 scores. In~, both Naive Bayes and SVM are used as classifiers in unimodal and multimodal conditions. In the unimodal condition, NB is not better than SVM, and the fusion results showed that SVM is much better than NB in multimodal situations. However, SVM also has its shortcomings, such as the difficulty of selecting suitable kernel functions. Indeed, SVM is not always the best choice.
In~, the results demonstrated that ensemble learning outperforms SVM in terms of classification accuracy. Ensemble learning has attracted a lot of attention in many fields because of its accuracy, simplicity, and robustness. In addition, in~, LR is adopted as the classifier for its effectiveness and simplicity. In fact, LR is frequently used in many transfer learning tasks.
However, none of the classifiers mentioned above is able to capture temporal information. Some other methods try to exploit it. For example, \citeauthor{gui2018implicit}~ combined SVM and LSTM to predict emotion labels. Specifically, global features and sequence features are proposed to represent the pupillary response signals. Then an SVM classifier is trained with the global features and an LSTM classifier is trained with the sequence features. Finally, a decision fusion strategy is proposed to combine these two classifiers.
\begin{table*}[!t]
\centering\scriptsize
\caption{Representative work on AC of videos using various features, where $P_{a}$, $P_{v}$, $MSE_{a}$, $MSE_{v}$, $Acc_{a}$, $Acc_{v}$, $Acc$ and $MAP$ indicate the Pearson correlation coefficients of arousal and valence, the mean sum error of arousal and valence, the accuracy of arousal and valence, the average accuracy and mean average precision, respectively.
`statistics' means (min, max, mean).}
\resizebox{\textwidth}{!}{
\begin{tabular}
{p{0.4 cm} p{4 cm} p{0.7 cm} p{2 cm} p{2 cm} p{0.3 cm} R{2cm}} 
\hline
\textbf{Ref} & \textbf{Feature} & \textbf{Fusion} & \textbf{Learning} & \textbf{Dataset} & \textbf{Task} & \textbf{Result} \\
\hline
 & Mel frequency spectral; MFCC, Chroma and their derivatives and statistics; Audio compressibility; Harmonicity; Shot frequency; HOF and statistics; Histogram of 3D HSV and statistics; Video compressibility; Histogram of facial area & decision & LSTM & Dataset described by Malandrakis & reg & $P_{a}$:$0.84 \pm 0.06$ $MSE_{a}$:$0.08 \pm 0.04$ $P_{v}$:$0.50 \pm 0.14$ $MSE_{v}$:$0.21 \pm 0.06$\\
 & Average, standard deviation and four spectral power bands of pupil dilation ratio time-series & decision & SVM, LSTM & MAHNOB-HCI & cla & $Acc_{a}$:0.730, $Acc_{v}$:0.780\\
 & Audio and visual deep features from pretrained model & feature & CNN, LSTM, SVM & PMIT & cla & $MAP$:0.2122\\
 & MFCC; Color values; HoG; Dense trajectory descriptor; CNN-learned features & decision & CNN, SVM, Ensemble & DEAP & cla & $Acc$: 0.81, 0.49 \\
 & HHTC features & feature & SVR & Discrete LIRIS-ACCEDE & reg & $MSE_{a}$:0.294, $MSE_{v}$: 0.290 \\
 & Statistical measures (such as mean, median, skewness, kurtosis) for EEG data, power spectral features, ECG, GSR, Face/Head-pose & decision & SVM/NB & music excerpts~ & cla & F1 (v: 0.59, 0.58, a: 0.60, 0.57) \\
 & Time-span audio and visual features & feature & CNN Opensmile toolbox & music excerpts~ & cla & $MSE_{a}$:0.082 $MSE_{v}$:0.071\\
 & tempo; pitch; zero cross; roll off; MFCCs; Saturation; Color heat; Shot length feature; General preferences; Visual excitement; Motion feature; fMRI feature & feature & DBM SVM & TRECVID & cla & - \\
 & Colorfulness; MFCC; CNN-learned features from the keyframes containing protagonist & decision & CNN, SVM, SVR & LIRIS-ACCEDE, PMSZU & cla/reg & - \\
 & Multi-frame motion
vectors & decision & CNN & SumMe, TVsum, Continuous LIRIS-ACCEDE & reg & - \\
 & The median of the L values in Luv space; means and variances of components in HSV space; texture feature; mean and standard deviation of motions between frames in a shot; MFCC; Spectral power; mean and variance of the spectral centroids; Time domain zero crossing rate; Multi-instance sparse coding & feature & SVM & Musk1, Musk2, Elephant, Fox, Tiger & cla & $Acc$: 0.911, 0.906, 0.885, 0.627, 0.868\\
 & Lighting key; Color; Motion vectors; ZCR; energy; MFCC; pitch; Textual features & decision & SMO, Naive Bayes & DEAP & cla & F1:0.849 0.811 $Acc$: 0.911 0.883 \\
 & Key lighting; Grayness; Fast motion; Shot change rate; Shot length variation; MFCC; CNN-learned features; power spectral density; EEG; ECG; respiration; galvanic skin resistance & feature & SVM & DEAP & cla & $Acc_v$:0.7 0.7 0.7125, $Acc_a$:0.6876 0.7 0.8 F1 (A:0.664 0.687 0.789) \\
 & MFCC; ZCR; energy; pitch; color histograms; lighting key; motion vectors & decision & SVM, Naive Bayes & DEAP & cla & F1: 0.869, 0.846 $Acc$: 0.925, 0.897 \\
 & CNN feature, low-level audio visual features, EEG & decision & LDA, LSVM, RSVM & Dataset introduced by the authors & cla & - \\
 & MKT; ConvNets feature; EmoLarge; IS13; MFCC; EmoBase10; DSIFT; HSH & decision & SVM, LR, RBM, MLP & MediaEval 2015, 2016 Affective Impact of Movies & cla & $Acc_a$:0.574, $Acc_v$:0.462\\
 & CNN feature & - & SVM, LDA & dataset in~ & cla & - \\
 & CNN feature & - & SVR & LIRIS-ACCEDE & reg & $MSE_{a}$: 0.021, $MSE_{v}$: 0.027\\
\hline
\end{tabular}
}
\label{tab:VideoHandCraftedMethods}
\end{table*}
A regressor is needed when mapping the extracted features to the continuous dimensional emotion space. Recently, one of the most popular regression methods is support vector regression (SVR)~. For example, in~, video features such as audio, color, and aesthetic features are fed into an SVR in the SVR-Standard experiment.
In the SVR-Transfer learning experiment, the pre-trained CNN is treated as a feature extractor, and the CNN's outputs are used as the input to the SVR. The experimental results showed that SVR-Transfer learning outperforms other methods. Indeed, the various kernel functions in SVR provide strong adaptability.
In general, there are two fusion strategies for multimodal information: feature-level fusion and decision-level fusion. Feature-level fusion means that the multimodal features are combined and then used as the input of a classifier or a regressor. Decision-level fusion combines the outputs of several different classifiers, and the final result is computed according to the fusion method.
One way of feature-level fusion is implemented by feature accumulation or concatenation~. In~, two feature vectors for visual and audio data are averaged as the global genre representations. In~, multi-class features are concatenated to generate a high-dimensional joint representation. Some machine learning methods are also employed to learn joint features~. In~, a two-branch network is used to combine the visual and audio features. The outputs of the two-branch network are then fed into a classifier, and the experimental results showed that the joint features outperform other methods. In~, the low-level audio-visual features and fMRI-derived features are fed into a multimodal DBM to learn joint representations. The target of this method is to learn the relation between audio-visual features and fMRI-derived features.
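The two fusion strategies can be contrasted in a minimal NumPy sketch; the feature dimensions, linear classifier stubs, and fusion weights below are illustrative placeholders rather than values taken from any of the surveyed systems:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unimodal features for one video clip (dimensions are arbitrary here).
audio_feat = rng.normal(size=64)    # e.g. MFCC statistics
visual_feat = rng.normal(size=128)  # e.g. color/motion descriptors

def linear_classifier(x, w, b=0.0):
    """Stand-in for any trained classifier returning class scores."""
    return x @ w + b

# --- Feature-level fusion: concatenate, then apply a single classifier. ---
joint_feat = np.concatenate([audio_feat, visual_feat])   # shape (192,)
w_joint = rng.normal(size=(joint_feat.size, 3))          # 3 emotion classes
joint_scores = linear_classifier(joint_feat, w_joint)

# --- Decision-level fusion: per-modality classifiers, weighted sum. ---
w_audio = rng.normal(size=(64, 3))
w_visual = rng.normal(size=(128, 3))
audio_scores = linear_classifier(audio_feat, w_audio)
visual_scores = linear_classifier(visual_feat, w_visual)

weights = np.array([0.4, 0.6])  # per-classifier weights in linear fusion
fused_scores = weights[0] * audio_scores + weights[1] * visual_scores
predicted_class = int(np.argmax(fused_scores))
```

The weighted sum in the last step corresponds to the linear (decision-level) fusion described above, where each classifier's output carries its own weight.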
In~, PCA is used to learn the multimodal joint features. In~, canonical correlation analysis (CCA) is used to construct a new video feature space with the help of EEG features and a new EEG feature space with the assistance of video content, so that only one modality is needed to predict emotion during the testing process.
By combining the results of different classifiers, the decision-level fusion strategy is able to achieve better results~. In~, linear fusion and SVM-based fusion techniques are explored to combine the outputs of several classifiers. Specifically, in linear fusion, the output of each classifier has its own weight, and the final result is the weighted sum of all outputs. In SVM-based fusion, the outputs of the unimodal classifiers are concatenated together, and the resulting higher-level representations for each video clip are fed into a fusion SVM to predict the emotion. Based on these results, linear fusion is better than SVM-based fusion. In~, linear fusion is also used to fuse the outputs of multiple classifiers. The differences among these linear fusion methods lie in the distribution of weights.
\label{sec:Deep learning method}
Traditionally, video emotion recognition includes two steps, \textit{i.e.}, a feature extraction step and a regression or classification step. Because of the lack of consensus on the most relevant emotional features, we may not be able to extract the best features for the problem at hand. As a result, this two-step mode has hampered the development of affective video content analysis.
To solve this problem, some methods based on end-to-end training frameworks have been proposed. \citeauthor{khorrami2016deep} combined a CNN and an RNN to recognize the emotional information of videos. In their method, a CNN is trained on facial images sampled from video frames to extract features. Then the features are fed into an RNN to perform continuous emotion recognition. In~, a single network using ConvLSTM is proposed, where videos are input to the network and the predicted emotional information is output directly. In fact, due to the complexity of CNNs and RNNs, training these frameworks needs large amounts of data. However, in video affective content analysis, the samples in existing datasets are usually limited. This is the reason why end-to-end methods are still less common than the traditional two-step methods, despite their potential.
\label{sec:multimodal}
In this section, we survey the work that analyzes multimodal data beyond audiovisual content. Most of the existing work on affective understanding of multimedia relies on one modality, even when additional modalities are available, for example in videos~. Earlier work on emotional understanding of multimedia used handcrafted features from different modalities that are fused at the feature or decision level~. More recent work mainly uses deep learning models~.
Language is a commonly used modality in addition to vision and audio.
There is a large body of work on text-based sentiment analysis~. Sentiment analysis from text is well established and is deployed at scale in industry in a broad set of applications involving opinion mining~. With the shift toward an increasingly multimodal social web, multimodal sentiment analysis is becoming more relevant. For example, vloggers post their opinions on YouTube, and photos commonly accompany user posts on Instagram and Twitter.
Analyzing text for emotion recognition requires representing terms by features. Lexicon-based approaches are among the most popular methods for text-based emotion recognition. They involve using knowledge of words' affective associations to estimate a document's or content's affect. Linguistic Inquiry and Word Count (LIWC) is a well-known lexical tool that matches the terms in a document with its dictionary and generates scores along different dimensions, including affective and cognitive constructs such as ``present focus'' and ``positive emotion''~. The terms in each category are selected by experts and extensively validated on different content. AffectNet is another notable lexical resource, which includes a semantic network of 10,000 items with representations for ``pleasantness'', ``attention'', ``sensitivity'', and ``aptitude''~. These continuous representations can be mapped to 24 distinct emotions. DepecheMood is a lexicon created through a data-driven method mining a news website annotated with its particular set of discrete emotions, namely, ``afraid'', ``amused'', ``anger'', ``annoyed'', ``don't care'', ``happy'', and ``inspired''~. DepecheMood has been extended to DepecheMood++ by including Italian~.
The more recent development in text-based affective analysis is models powered by deep learning. Leveraging large-scale data, deep neural networks are able to learn representations that are relevant for affective analysis of language. Word embeddings are one of the most common representations used to represent language.
Word embeddings, such as Word2Vec~ or GloVe~, learn the linguistic context of a word as a vector representation that captures semantic and syntactic similarities. More recently, representation learning models that encode a whole sequence of terms (sentences, documents) have shown impressive performance on different language understanding tasks, including sentiment and emotion analysis.
Bidirectional Encoder Representations from Transformers (BERT)~ is a method for learning a language model that can be trained on large amounts of data in an unsupervised manner. This pre-trained model is very effective in representing a sequence of terms as a fixed-length vector. The BERT architecture is a multi-layer bidirectional Transformer network that encodes the whole sequence at once. BERT representations achieve state-of-the-art results on multiple natural language understanding tasks.
The audiovisual features used for multimodal understanding of affect are similar to the ones discussed in previous sections. The main distinction between multimodal models lies in their methods for multimodal fusion. Multimodal methods involve extracting features from multiple modalities, \textit{e.g.}, audiovisual, and training joint or separate machine learning models for fusion~. Multimodal fusion can be done in model-based and model-agnostic ways. The model-agnostic fusion methods do not rely on a specific classification or regression method and include feature-level, decision-level, or hybrid fusion techniques. Model-based methods address multimodal fusion in model construction. Examples of model-based fusion methods include Multiple Kernel Learning (MKL)~, graphical models such as Conditional Random Fields~, and neural networks~.
\citeauthor{Pang2015}~ used a Deep Boltzmann Machine (DBM) to learn a joint representation across text, vision, and audio to recognize expected emotions from social media videos.
Each modality is separately encoded by stacking multiple Restricted Boltzmann Machines (RBMs), and the pathways are merged into a joint representation layer. The model was evaluated for recognizing eight emotion categories on 1,101 videos from~. \citeauthor{muszynski2019recognizing}~ studied perceived vs. induced emotion in movies. To this end, they collected additional labels on a subset of the LIRIS-ACCEDE dataset~. They found that perceived and induced emotions do not always agree. Using multimodal Deep Belief Networks (DBN), they demonstrated that fusing electrodermal responses with audiovisual content features improves the overall accuracy of emotion recognition~.
In~, the authors performed regression to estimate intended arousal and valence levels (as judged by experts in~). LSTM recurrent neural networks are used for unimodal regressions and fused via early and late fusion for audiovisual estimation, with late fusion achieving the best results. \citeauthor{Tarvainen2018}~ performed an in-depth analysis of how emotions are constructed in movies. They identified scene type as a major factor in emotions in movies. They then used content features to recognize emotions along three dimensions: hedonic tone (valence), energetic arousal (awake--tired) and tense arousal (tense--calm).
Bilinear fusion is a method proposed to model inter- and intra-modality interactions by performing an outer product between unimodal embeddings~. \citeauthor{zadeh2017tensor}~ extended this to a Tensor Fusion Network to model intra-modality and inter-modality dynamics in multimodal sentiment analysis. The tensor fusion network includes modality embedding sub-networks, a tensor fusion layer modeling the unimodal, bimodal and trimodal interactions using a three-fold Cartesian product of modality embeddings, along with a final sentiment inference sub-network conditioned on the tensor fusion layer.
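The growth in dimensionality caused by such fusion can be seen in a minimal NumPy sketch of the three-fold outer product; the toy embedding sizes are arbitrary, and a constant 1 is appended to each modality embedding so that unimodal and bimodal terms are retained alongside the trimodal ones:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy unimodal embeddings (language, visual, acoustic).
h_l = rng.normal(size=3)
h_v = rng.normal(size=4)
h_a = rng.normal(size=5)

# Append 1 to each embedding before taking the outer product.
z_l = np.append(h_l, 1.0)
z_v = np.append(h_v, 1.0)
z_a = np.append(h_a, 1.0)

# Three-fold outer product: captures unimodal, bimodal and trimodal terms.
fusion_tensor = np.einsum('i,j,k->ijk', z_l, z_v, z_a)

# Flattened, this becomes the input to the inference sub-network:
# (3+1) * (4+1) * (5+1) = 120 dimensions from 3+4+5 = 12 input dimensions.
fused = fusion_tensor.reshape(-1)
```

Even in this toy setting, 12 input dimensions expand to 120 fused dimensions, which illustrates the multiplicative blow-up discussed next.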
The main drawback of such methods is the increase in the dimensionality of the resulting multimodal representation.
\label{sec:FutureDirections}
Although remarkable progress has been made on affective computing of multimedia (ACM) data, there are still several open issues and directions that can boost the performance of ACM.
\textbf{Multimedia Content Understanding.} As emotions may be directly evoked in viewers by the multimedia content, accurately understanding what is contained in multimedia data can significantly improve the performance of ACM. Sometimes it is even necessary to analyze subtle details. For example, we may feel ``amused'' by a video with a laughing baby; but if the laugh comes from a negative character, it is more likely that we feel ``angry''. In such cases, besides the common property, such as ``laugh'', we may need to further recognize the identity, such as ``a lovely baby'' or ``an evil antagonist''.
\textbf{Multimedia Summarization.} Emotions can play a vital role in the selection of multimedia for the creation of summaries or highlights. This is an important application in the entertainment and sports industries (\textit{e.g.}, movie trailers, sports highlights). There has been some recent work in this direction, where affect information from audiovisual cues has led to the successful creation of video summaries~. In particular, the work reported in~ used audiovisual emotions in part to create an AI trailer for a $20^{th}$ Century Fox film in 2016.
Similarly, the AI Highlights system described in~ hinged on audiovisual emotional cues and has successfully been employed to create the official highlights at Wimbledon and the US Open since 2017. This is a very promising direction for affective multimedia computing, which can have a direct impact on real-world media applications.
\textbf{Contextual Knowledge Modeling.} The context in which a viewer watches multimedia is very important. Similar multimedia data under different contexts may evoke totally different emotions. For example, we may feel ``happy'' when listening to a song about love at a wedding; but if the same song is played when two lovers are parting, it is more likely that we feel ``sad''. The prior knowledge of viewers or multimedia data may also influence emotion perception. An optimistic viewer and a pessimistic viewer may have totally different emotions about the same multimedia data.
\textbf{Group Emotion Clustering.} It is too generic to simply recognize the dominant emotion, while it is too specific to predict personalized emotion. It would make more sense to model emotions for groups or cliques of viewers with similar interests and backgrounds. Clustering different viewers into corresponding groups, possibly based on user profiles, may provide a feasible solution to this problem.
\textbf{New AC Setting Adaptation.} Because of domain shift~, deep learning models trained on one labeled source domain may not work well on another unlabeled or sparsely labeled target domain, which results in the models' low transferability to new domains. Exploring domain adaptation techniques that fit well on AC tasks is worth investigating. One possible solution is to translate the source data to an intermediate domain that is indistinguishable from the target data while preserving the source labels~ using Generative Adversarial Networks~.
How to deal with more practical settings, such as multiple labeled source domains and heterogeneous emotion models, is even more challenging.
\textbf{Regions-of-Interest Selection.} Different regions of a given piece of multimedia may contribute differently to emotion recognition. For example, the regions that contain the most important semantic information in images are more discriminative than the background, and some video frames are of no use for emotion recognition. Detecting and selecting the regions of interest may significantly improve the recognition performance as well as the computational efficiency.
\textbf{Viewer-Multimedia Interaction.}
Instead of direct analysis of the multimedia content or implicit consideration of viewers' physiological signals (such as facial expressions, electroencephalogram signals, \textit{etc}.), joint modeling of both multimedia content and viewers' responses may better bridge the affective gap and result in superior performance. We should also study how to deal with missing or corrupted data, \textit{e.g.}, when some physiological signals are unavailable during the data collection stage.
\textbf{Affective Computing Applications.} Although AC is claimed to be important in real-world applications, few practical systems have been developed due to the relatively low performance. With the availability of larger datasets and improvements in self-supervised and semi-supervised learning, we foresee the deployment of ACM in real-world applications. For example, in media analytics, content understanding methods will identify the emotional preferences of users and the emotional nuances of social media content to better target advertising efforts; in fashion recommendation, intelligent customer service, such as customer-multimedia interaction, can provide a better experience to customers; in advertisement, generating or curating multimedia that evokes strong emotions can attract more attention.
We believe that an emotional artificial intelligence will become a significant component of mainstream multimedia applications.\n\\textbf{Benchmark Dataset Construction.}\nExisting studies on ACM mainly adopt small-scale datasets or construct relatively larger-scale ones using a keyword searching strategy without guaranteed annotation quality. To advance the development of ACM, creating a large-scale and high-quality dataset is urgently needed. It has been shown that there are three critical factors for dataset construction of ACM, \\textit{i.e.}, the context of viewer response, personal variation among viewers, and the effectiveness and efficiency of corpus creation~. In order to include a large number of samples, we may exploit online systems and crowdsourcing platforms to recruit large numbers of viewers with a representative spread of backgrounds to annotate multimedia and provide contextual information on their emotional responses. Since emotion is a subjective variable, personalized emotion annotation would make more sense, from which we can obtain the dominant emotion and emotion distribution. Further, accurate understanding of multimedia content can boost the affective computing performance. Inferring emotional labels from social media users' interaction with data, \\textit{e.g.}, likes, comments, in addition to their spontaneous responses, \\textit{e.g.}, facial expressions, where possible, will provide new avenues for enriching affective datasets.", "id": "5da3fa17-09f5-4d0c-9b55-5610d97ddda5", "level": "section", "origin_cites_number": 8, "parent_id": "36832e43-191f-4e88-aa11-6c7b91fe1bf8", "prefix_titles": [ [ "title", "Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey" ], [ "section", "Future Directions" ] ], "subsections": [], "title": "Future Directions" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:Conclusion}\nIn this article, we have surveyed affective computing (AC) methods for heterogeneous multimedia data. 
For each multimedia type, \\textit{i.e.}, image, music, video, and multimodal data, we summarized and compared available datasets, handcrafted features, machine learning methods, deep learning models, and experimental results. We also briefly introduced the commonly employed emotion models and outlined potential research directions in this area. Although deep learning-based AC methods have achieved remarkable progress in recent years, an efficient and robust AC method that is able to obtain high accuracy under unconstrained conditions is yet to be achieved. With the advent of deep understanding of emotion evocation in brain science, accurate emotion measurement in psychology, and novel deep learning network architectures in machine learning, affective computing of multimedia data will remain an\nactive research topic for a long time.\n\\begin{acks}\nThis work was supported by Berkeley DeepDrive, the National Natural Science Foundation of China (Nos. 61701273, 91748129), and the National Key R\\&D Program of China (Grant No. 2017YFC011300). The work of MS is supported in part by the U.S. Army. Any opinion, content or information presented does not necessarily reflect the position or the policy of the United States Government, and no official endorsement should be inferred.\n\\end{acks}\n\\bibliographystyle{ACM-Reference-Format}\n\\bibliography{references}\n\\end{document}", "id": "fc0b3d97-cde9-409b-91c4-fe25e8140878", "level": "section", "origin_cites_number": 0, "parent_id": "36832e43-191f-4e88-aa11-6c7b91fe1bf8", "prefix_titles": [ [ "title", "Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
75
[ 1016, 1015, 1017, 1018, 1019, 1020, 1021, 7322, 1022, 1023, 1024, 1025, 1026, 1028, 1027, 1029, 7, 1030, 1032, 1684, 1031, 7022, 1033 ]
1.336732
[ "Dingwen Zhang", "Junwei Han", "Gong Cheng", "Ming-Hsuan Yang" ]
Weakly Supervised Object Localization and Detection: A Survey
2021
2021-04-16T06:44:50Z
cs.CV
As an emerging and challenging problem in the computer vision community, weakly supervised object localization and detection plays an important role for developing new generation computer vision systems and has received significant attention in the past decade. As methods have been proposed, a comprehensive survey of these topics is of great importance. In this work, we review (1) classic models, (2) approaches with feature representations from off-the-shelf deep networks, (3) approaches solely based on deep learning, and (4) publicly available datasets and standard evaluation metrics that are widely used in this field. We also discuss the key challenges in this field, development history of this field, advantages/disadvantages of the methods in each category, the relationships between methods in different categories, applications of the weakly supervised object localization and detection methods, and potential future directions to further promote the development of this research field.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "3a2d911b-3708-4f3a-880a-851136519e79", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ] ], "subsections": [ "6019d392-fb6b-4290-8c58-a72268a80ff9", "a3e94559-bb78-4175-a692-379986435e13", "bba63d48-815b-4407-9de8-7df72823a004", "ce302070-cf4f-4d53-a26c-8a51743b26f9", "930ae35a-10a8-4a65-83e8-a12f89f6fe36", "79d342a2-5acc-4290-aaff-2761ce99e494", "5cef946e-16aa-4550-aeb5-c885a8308ec2", "d8feaee7-5d65-4327-96a9-38b968bfe1c4", "92209b0e-e020-4aa4-8af9-77a6bcf0b9af" ], "title": "root" }, { "cite_extract_rate": 0.2, "cites": [ 2259, 209, 2260 ], "content": "\\label{sec:introduction}\n\\IEEEPARstart{W}{eakly} supervised learning (WSL) has recently received much attention in the computer vision community.\nA plethora of methods on this topic have been proposed in the past decade to address challenging computer vision tasks including semantic segmentation~, object detection~, and 3D reconstruction~, to name a few.\nAs shown in Fig.~\\ref{illustration}, a WSL problem is defined as the learning process in which only partial information regarding the task (e.g., a class label or object location) on a small subset of the data points is at our disposal.\nCompared to the conventional learning framework, e.g., fully supervised learning approaches, the WSL framework needs to operate on a small amount of weakly-labelled training data to learn the target model, which saves a huge amount of human labor in annotating training samples.\nIt can also facilitate the learning process when fine-grained annotation\nis so labor intensive and time consuming that obtaining all the labeled data required by fully-supervised approaches is impractical.\nWhile a plethora of WSL-based vision methods have been developed, this survey mainly focuses on the task of weakly supervised object localization and detection, which is shown 
as the red dot in Fig.~\\ref{illustration}.\nIt is well-known that object localization and detection is a fundamental research problem in computer vision.\nLearning object localization and detection models under weak supervision has attracted much attention in the past decades.\nWhile existing methods treat weakly supervised object localization (WSOL) and weakly supervised object detection (WSOD) as two different tasks\\footnote{ {The difference between WSOL and WSOD mainly lies in that WSOL mainly aims at localizing a single known (or unknown) object from each given image scene. The goal of WSOD is to instead detect every possible object instance from the given image scene. This makes WSOD a little more difficult than WSOL.}}, we consider these as a common task for several reasons: 1) these tasks learn with the same image-level human annotation; 2) these two tasks need certain supervision as input and usually aim to localize objects on the bounding-box level as output; 3) the WSOD task can be accomplished by directly training off-the-shelf fully supervised object detectors on the object locations obtained from WSOL.\nDuring the last decade, considerable efforts have been made to develop various approaches for learning object detectors with weak supervision.\nSome of the existing algorithms only learn weakly supervised object detectors for one or several certain object categories, such as vehicles~, traffic signs~, pedestrians~, faces~, tuberculosis bacilli~, aircraft~, and human actions~.\nOther approaches, e.g., , focus on developing weakly supervised learning frameworks for unconstrained object categories, i.e., the learning frameworks can be extended to learn an object detector from the given category-specific weakly-labelled training images.\nAs numerous methods have been developed for these important tasks, a comprehensive review of the literature concerning weakly supervised object localization and detection is of great importance.\nAs weakly supervised object 
localization and detection methods mainly exploit the image-level manual annotation, the learning frameworks not only need to address the typical issues, such as the intra-class variations in appearance, transformation, scale and aspect ratio, encountered in conventional fully supervised object localization and detection tasks, but also the \\textbf{learning under uncertainty} challenges caused by the inconsistency between human annotations and real supervisory signals.\nIn weakly supervised object localization and detection, the accuracy of object locations and learning processes are closely related.\nThe key is to propagate the image-level supervisory signals to the instance-level (bounding-box-level) training data for the learning processes.\nAs each training image can be labeled by numerous bounding boxes of different accuracy, propagating such weak supervision inevitably involves a large amount of ambiguous and noisy information in each training instance.\nMore specifically, the \\textbf{learning under uncertainty} issue raises the following challenges for the weakly supervised learning process:\n\\begin{itemize}\n \\item \\textbf{Learning with inaccurate instance locations:} This issue is mainly caused by the definition ambiguity in object parts and context.\n Without precise annotation or definition, it is difficult for a learner to decide whether an object category label is associated with a discriminative object part, the whole object region, or the object with a certain context region.\n As a result, the bounding-box instance locations inferred by the learner may contain many inaccurate samples, including ones with local object parts or undesired contextual regions.\n These samples would negatively affect the performance of WSL-based detectors.\n \\item \\textbf{Learning with noisy samples:} Even when the bounding-box locations can be precisely labeled, the training examples enclosed by bounding-boxes may still be noisy, as background 
pixels are usually included.\n As there is no additional information to separate foreground objects from the background, the learner may tag a ``background'' label to an object region when it fails to recognize the object category.\n In addition, the learner may mistakenly label a bounding-box that contains a bicycle as a motorcycle, as these two object categories share many similar features.\n \\item \\textbf{Learning with domain shifts:} For a certain object category, the image regions localized during the learning process may only contain samples with limited diversity in object shape, appearance, scale, and view angle.\n This makes the subsequent learning process biased toward limited knowledge of the object category, and the resulting model does not generalize well to test samples.\n For instance, a learner can hardly localize or detect a flying swan when all the training samples contain swimming ones on lakes.\n This issue happens frequently in the weakly supervised learning process when there is a large gap between the training and testing domains.\n \\item \\textbf{Learning with insufficient instance samples:}\n Similar to the issues in conventional learning methods, it is difficult to train effective object detectors under the weakly-supervised setting when the amount of training samples is limited.\n In addition, the number of positive samples is usually much smaller than that of negative samples for binary classes.\n Furthermore, the data distributions for a large number of categories are usually long-tailed.\n These issues are significantly exacerbated for WSL-based methods using deep learning.\n\\end{itemize}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=1\\linewidth]{illustration.jpg}\n \\caption{Illustration of the weakly supervised learning tasks in the computer vision community. The blue area in the top block indicates the conventional fully-supervised learning tasks, while the red area in the top block indicates the weakly supervised learning tasks. 
The coordinate axes show different levels of human annotation or supervision requirement, from low cost to high cost. Notice that high-cost annotation can easily be transformed to low-cost annotation, e.g., from bounding-box level to image level, whereas low-cost annotation is hard to transform into high-cost annotation. In the bottom block, we also show the label cost, in terms of annotation time, and examples of different types of annotations. In this survey, we mainly focus on reviewing the research progress in weakly supervised object localization and detection, i.e., the red dot in the top block.}\n \\label{illustration}\n\\end{figure}\nTo address the above-mentioned issues in learning weakly supervised object detectors, existing methods are usually constructed based on two steps: initialization and refinement.\nThe initialization stage leverages certain prior knowledge to propagate image-level annotation to the instance level, and thus can generate instance-level annotation (but with label noise, sample bias, and limited quality in location accuracy) for the learning process.\nThe refinement stage leverages the new instance samples obtained from the first stage to gradually mine truthful knowledge about the objects of interest and finally obtain the desired object models for localization and detection.\nThese two learning stages need to collaborate to address the aforementioned five-fold challenges.\nIn the initialization stage, efforts should be made to improve the annotation quality as much as possible to generate training instances with proper locations, accurate labels, high diversity, and a high recall rate.\nAs the annotation quality obtained in the initialization stage cannot be perfect, in the refinement stage, further efforts should be made to improve the learner's robustness to cope with the inaccurate instance location, noisy example, biased instance sample, and insufficient instance sample issues, as well as its capacity to take 
advantage of the unlabelled instance samples.\nWhen properly addressing the problems in each learning stage,\ngood weakly supervised object detectors can be learned.\nIn this work, we review the existing weakly supervised object localization and detection approaches\\footnote{Some early methods, such as~, learn to localize category-wise key points under weak supervision, while this survey mainly focuses on the methods for localizing instances with bounding-boxes.}, which are divided into three main categories and eight subcategories.\nThese three main categories are based on classic approaches, feature representations from off-the-shelf deep models, and deep learning frameworks.\nThe eight subcategories include approaches for initialization, refinement, initialization and refinement, pre-trained deep features, inherent cues in deep models, fine-tuned deep models, single-network training, and multi-network training.\nWe further discuss the relationship between the approaches in different categories.\nIn addition, we also discuss open problems and challenges of current studies and propose several promising future research directions for constructing more effective weakly supervised object localization and detection frameworks.\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=1\\linewidth]{category2.jpg}\n \\caption{In the left block, taxonomy of the existing approaches for weakly supervised object localization and detection, which includes three main\n categories and eight subcategories.\n In the right block, the relationships between the approaches in different categories are shown.\n}\n \\label{taxonomy}\n\\end{figure*}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=1\\linewidth]{history.JPG}\n \\caption{Developments of weakly supervised localization and detection methods. 
The yellow histogram shows the number of publications in this research field in each year, and the curves show the number of proposed methods each year for a particular category of approach.}\n \\label{history}\n\\end{figure}", "id": "6019d392-fb6b-4290-8c58-a72268a80ff9", "level": "section", "origin_cites_number": 15, "parent_id": "3a2d911b-3708-4f3a-880a-851136519e79", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.1, "cites": [ 2261, 2262, 2263 ], "content": "\\label{Taxonomy}\nIn the last decade, a plethora of methods have been developed for weakly supervised object localization and detection.\nWe can generally categorize existing methods based on classic formulations, feature representations from off-the-shelf deep models, and deep weakly supervised learning algorithms.\nWithin each main category, we further divide the approaches into two or three subcategories.\nFig.~\\ref{taxonomy} shows our taxonomy of the studies in the research field of weakly supervised object localization and detection.\nIn addition, Fig. 
\\ref{history} reviews the development history of each main category as well as that of the whole research field.\nA few approaches based on classic formulations appeared around 2002.\nFrom 2002 to 2009, research in this field progressed at a very slow pace.\nSince 2014, numerous approaches based on both classic formulations and learned feature representations from deep models have been developed and received much attention.\nIn the last few years, approaches solely based on deep learning have become the mainstream for addressing the problems of weakly supervised object localization and detection.\nAs a plethora of methods have been developed to address different aspects of these problems over the past decades, this field is gaining increasing attention.\n\\renewcommand{\\multirowsetup}{\\centering}\n \\begin{table*} [t]\n \\tiny\n \\caption{Summary of the approaches for initialization, which is a subcategory in the weakly supervised object localization and detection approaches that learn by classic models. An approach is considered for general object category when it is tested for detecting more than five object categories in the corresponding literature. 
The approaches with None detector indicate the weakly supervised object localization approaches.}\n \\label{table initialization}\n \\resizebox{\\linewidth}{!}{\\begin{tabular}{m{2.5cm}<{\\centering}m{1.2cm}<{\\centering}m{1.5cm}<{\\centering}m{3.1cm}<{\\centering}m{1.8cm}<{\\centering}m{1.5cm}<{\\centering}m{3.5cm}<{\\centering}m{1.8cm}<{\\centering}}\n \\hline\n \\multirow{2}{*}{\\textbf{Methods}} & \\multirow{2}{*}{\\textbf{Detector}} & \\multirow{2}{*}{\\textbf{Descriptor}} & \\multirow{2}{*}{\\textbf{Prior knowledge}} & \\multirow{2}{*}{\\textbf{Extra training data}} & \\multirow{2}{*}{\\textbf{Learning model}} & \\multirow{2}{*}{\\textbf{Learning strategy}} & \\multirow{2}{*}{\\textbf{Object category}} \\\\\n & & & & & & &\\\\\n\\hline \\hline\nCao-PR-2017 & SVM & HOG+PCA & Road map prior + density prior & None & SVM & MIL with density estimation & Vehicle in satellite imagery \\\\\nShi-TPAMI-2015 & DPM & SIFT+Lab+LBP, BOW & Appearance prior + geometry prior & None & Topic model & Bayesian inference & General objects \\\\\nTang-ICIP-2014 & DPM & HOG & Saliency (objectness score) & None & DPM & Select initial boxes + DPM training & General objects \\\\\n Xie-VCIP-2013 & None & SIFT & Intra-class consistency & None & None & Low-rank and sparse coding & General objects \\\\\nSiva-CVPR-2013 & DPM & Lab+SIFT & Saliency & Labelme + PASCAL07, 12 (unlabelled) & None & None & General objects \\\\\n Shi-ICCV-2013 & DPM & SIFT & Appearance prior (spatial distribution, size, saliency) & None & Topic model & Bayesian inference & General objects \\\\\n Sikka-FG-2013 & None & SIFT, LLC, BOW & None & None & MilBoost & Generating multiple segments for initialization and using MilBoost for learning & Pain (on face) \\\\\n Siva-ECCV-2012 & None & SIFT+BOW & Inter-class variance + saliency & None & None & Negative mining & General objects \\\\\n Shi-BMVC-2012 & None & BOW & Mapping relationship between the box overlap and appearance similarity & Part of PASCAL 07 (box annotation) & 
RankSVM & Transfer learning by ranking & General objects \\\\\n Khan-AAPRW-2011 & DPM & Phog/phow & None & Internet image (weakly annotated) & MIL & Learning from internet image & Pascal@8 \\\\\nZhang-BMVC-2010 & SVM & IHOF & Co-occurrence & None & SVM & High order feature learning by exploring co-occurrence & General objects \\\\\n\\hline\n \\end{tabular}}\n\\end{table*}\n\\renewcommand{\\multirowsetup}{\\centering}\n\\begin{table*} [t]\n \\tiny\n\t\\caption{Summary of the approaches for refinement, which is a subcategory in the weakly supervised object localization and detection approaches that learn by classic models.\n\tHere, * indicates a certain variation of the corresponding model. An approach is considered for general object category when it is tested for detecting more than five object categories in the corresponding literature. The approaches with None detector indicate the weakly supervised object localization approaches.}\n\t \\label{table refinement} \\resizebox{\\linewidth}{!}{\\begin{tabular}{m{2.5cm}<{\\centering}m{1.5cm}<{\\centering}m{3cm}<{\\centering}m{2cm}<{\\centering}m{2cm}<{\\centering}m{2cm}<{\\centering}m{3.1cm}<{\\centering}m{2.5cm}<{\\centering}}\n\\hline\n \\multirow{2}{*}{\\textbf{Methods}} &\n \\multirow{2}{*}{\\textbf{Detector}} &\n \\multirow{2}{*}{\\textbf{Descriptor}} &\n \\multirow{2}{*}{\\textbf{Prior knowledge} } &\n \\multirow{2}{*}{\\textbf{Extra training data}} &\n \\multirow{2}{*}{\\textbf{Learning model}} &\n \\multirow{2}{*}{\\textbf{Learning strategy}} &\n \\multirow{2}{*}{\\textbf{Object category}} \\\\\n & & & & & & &\\\\\n\\hline\n\\hline\n Wang-TMI-2018 & None & Color & None & None & Low-rank model & Low-rank Factorization & General lesion \\\\\n Wang-TC-2017 & None & SIFT+LAB & None & None & Probability model & BOW learning+instance labeling & General objects \\\\\n Cholakkal-CVPR-2016 & None & SIFT & Saliency & None & SVM* & ScSPM-based top down saliency & Salient objects \\\\\n Zadrija-GCPR-2015 & None & SIFT+FV & 
None & None & GMM + linear classifier & Patch-level spatial layout learning & Traffic sign \\\\\n Krapac-ICCVTA-2015 & None & SIFT+FV & None & None & Sparse logistic regression & Sparse classification & Traffic sign \\\\\n Cinbis-CVPR-2014 & SVM & FV & Center prior & None & SVM & Multi-fold MIL & General objects \\\\\n Wang-ICIP-2014 & SVM + graph model & SIFT & None & None & SVM + graph model & Maximal entropy random walk & Car, dog \\\\\n Wang-ICIP-2014 & SVM & SIFT & None & None & SVM & Clustering for window mining & General objects \\\\\n Tang-CVPR-2014 & None & SIFT & Saliency & None & Boolean constrained quadratic program & Mine similarity and discriminativeness both for image and box & General objects \\\\\n Hoai-PR-2014 & SVM & SIFT, BOW & None & None & SVM* & Localization-classification SVM & Face, car, human motion \\\\\n Wang-WACV-2013 & Task-specific detectors & HOG/SC & Background\\ saliency & None & MIL+AdaBoost* & Soft-label Boosting after MIL & Vehicle, pedestrian\\\\\n Kanezaki-MM-2013 & Linear classifiers & 3D voxel feature (color+C3HLAC +Intensity, texture, GRSD) & None & None & Linear classifiers & Multi-class MIL & Balls, tools \\\\\n Pandey-ICCV-2011 & DPM* & HOG & None & None & DPM* & Learning DPM with fully latent variable & General objects \\\\\nBlaschko-NIPS-2010 & None & BOW/HOG & None & None & SVM* & Learning SVM with structured output ranking objective & Cat, pedestrian \\\\\nHoai-ICCV-2009 & SVM & SIFT, BOW & None & None & SVM* & Localization-classification SVM & Face, car, human motion \\\\\n Galleguillos-ECCV-2008 & None & SIFT+BOW & None & None & MilBoost & Train MIL classifier for localization & Landmarks, faces, airplanes, leopard, motorbike, car \\\\\n Rosenberg-BMVC-2002 & GMM & Orientation derivative filters & Exemplar prior & Training exemplars (box annotation) & GMM & Learning from exemplar training data to weakly labelled training data & Telephone \\\\\n\\hline\n 
\\end{tabular}}\n\\end{table*}\n\\renewcommand{\\multirowsetup}{\\centering}\n\\begin{table*} [t]\n \\tiny\n\t\\caption{Approaches for both initialization and refinement, which is a subcategory in the weakly supervised object localization and detection approaches that learn by classic models. An approach is considered for general object category when it is tested for detecting more than five object categories in the corresponding literature. The approaches with None detector indicate the weakly supervised object localization approaches.}\n\t \\resizebox{\\linewidth}{!}{\\begin{tabular}{m{1.8cm}<{\\centering}m{1.2cm}<{\\centering}m{1.5cm}<{\\centering}m{2cm}<{\\centering}m{1.8cm}<{\\centering}m{1.5cm}<{\\centering}m{3cm}<{\\centering}m{1.5cm}<{\\centering}}\n\\hline\n \\multirow{2}{*}{\\textbf{Methods}} &\n \\multirow{2}{*}{\\textbf{Detector}} &\n \\multirow{2}{*}{\\textbf{Descriptor}} &\n \\multirow{2}{*}{\\textbf{Prior knowledge} } &\n \\multirow{2}{*}{\\textbf{Extra training data}} &\n \\multirow{2}{*}{\\textbf{Learning model}} &\n \\multirow{2}{*}{\\textbf{Learning strategy}} &\n \\multirow{2}{*}{\\textbf{Object category}} \\\\\n & & & & & & &\\\\\n\\hline\n\\hline\n Wang-cvpr-2013 & None & Color+SIFT & None & None & HST+SVM & Joint parsing and attribute localization & Scene attributes \\\\\n Deselaers-IJCV-2012 & DPM & GIST+CH+BOW+HOG & Generic knowledge & Meta-training data with box annotation & CRF+DPM & Learning appearance model by transferring generic knowledge & General objects \\\\\n Siva-ICCV-2011 & DPM & SIFT+BOW+HOG & Inter-class prior + intra-class prior & None & DPM & Model drift learning & General objects \\\\\nDeselaers-ECCV-2010 & DPM & GIST+CH+BOW+HOG & Generic knowledge & Meta-training data with box annotation & CRF+DPM & Learning appearance model by transferring generic knowledge & General objects \\\\ \\hline\n \\end{tabular}}\n\\end{table*}\nAs shown in the right block of Fig.~\\ref{taxonomy}, methods in\nmain categories are related in 
several aspects.\nNumerous methods have been developed based on classic formulations together with the advances in feature representations from deep models.\nSimilarly, a number of methods based solely on deep models are end-to-end trainable by considering classic formulations and feature extraction schemes.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=7 cm]{subcategory1.jpg}\n \\caption{Flowchart of the weakly supervised object localization and detection approaches using classic models.}\n \\label{nondeep flowchart}\n\\end{figure}", "id": "a3e94559-bb78-4175-a692-379986435e13", "level": "section", "origin_cites_number": 30, "parent_id": "3a2d911b-3708-4f3a-880a-851136519e79", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Taxonomy" ] ], "subsections": [], "title": "Taxonomy" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 2261, 2262, 2263 ], "content": "\\label{Classic Models}\nIn this section, we review the classic approaches that learn a weakly supervised object localizer or detector without using deep features.\nThese methods typically consist of an initialization module followed by a refinement process, as shown in\nFig.~\\ref{nondeep flowchart}.\nIn some approaches, the detector is based on the deformable part model (DPM)~.\nIn other approaches, the detector is based on the support vector machine (SVM) classifier.\nThe features used by these approaches are hand-crafted feature descriptors, such as HOG, SIFT, and Lab color, which are sometimes used to build higher-level representations such as bag-of-words (BOW), Fisher vector representations, and subspace-based representations.\nIn the following, we divide these approaches into those for the initialization and those for the refinement process.", "id": "bba63d48-815b-4407-9de8-7df72823a004", "level": "section", "origin_cites_number": 18, "parent_id": "3a2d911b-3708-4f3a-880a-851136519e79", "prefix_titles": [ [ "title", "Weakly Supervised Object 
Localization and Detection: A Survey" ], [ "section", "Classic Models" ] ], "subsections": [ "bfae7607-5833-4424-b0e8-9e0a00e92605", "8f1d3bcb-1c87-4161-b6dc-8cbe17a169ff", "31104e17-5d59-42f1-88ae-903542c63cd2", "19588196-93c9-4245-8a54-e9fb29d5fd1e" ], "title": "Classic Models" }, { "cite_extract_rate": 0.40740740740740705, "cites": [ 2261, 2262, 2267, 2263, 2269, 2270, 2266, 2264, 2265, 8537, 2268 ], "content": "Numerous methods have been developed to mine reliable instance samples, using prior knowledge, as weak supervision for the following processes.\nA brief summary of these approaches is shown in Table \\ref{table initialization}.\nZhang et al.~ leverage the prior knowledge of object co-occurrence to identify translation and scale invariant high order features for weakly supervised object localization.\nIn~ Shi et al. propose a transfer learning paradigm to first use a RankSVM to learn the mapping relationship between the box overlap and appearance similarity from auxiliary training data (with bounding-box level annotation) and then transfer the learned prior knowledge for localizing objects of interest in the given weakly labeled images.\nA simple yet effective approach, named negative mining, is developed by Siva et al. to explore the inter-class variance among the object regions in weakly labeled training images.\nThe final object locations are obtained by\nusing a linear combination of the inter-class variance and saliency prior.\nSimilarly, Tang et al.~ and Xie et al. use the saliency prior and intra-class consistency to mine the initial object locations, respectively.\nIn , Shi et al. 
explore the appearance prior and geometry prior in their topic model to build a Bayesian joint modeling framework for weakly supervised object localization.\nOn the other hand, Cao et al.~ exploit the road map prior and density prior to mine the initial vehicle locations from the weakly labeled satellite images and then trained the vehicle detector under a modified multiple-instance learning (MIL) model.\n\\renewcommand{\\multirowsetup}{\\centering}\n\\begin{table*} [t]\n \\tiny\n\t\\caption{Summary of the approaches using pre-trained feature representations, which is a subcategory in the weakly supervised object localization and detection approaches based on the off-the-shelf deep models. * indicates a certain variation of the corresponding model. An approach is considered for general object category when it is tested for detecting more than five object categories in the corresponding literature. The approaches with None detector indicate the weakly supervised object localization approaches.}\n \\label{table off-the-shelf feature}\n \\resizebox{\\linewidth}{!}{\\begin{tabular}{m{2cm}<{\\centering}m{1cm}<{\\centering}m{1.6cm}<{\\centering}m{2.8cm}<{\\centering}m{3.8cm}<{\\centering}m{1.2cm}<{\\centering}m{3.5cm}<{\\centering}m{2.2cm}<{\\centering}}\n\\hline\n \\multirow{2}{*}{\\textbf{Methods}} &\n \\multirow{2}{*}{\\textbf{Detector}} &\n \\multirow{2}{*}{\\textbf{Descriptor}} &\n \\multirow{2}{*}{\\textbf{Prior knowledge} } &\n \\multirow{2}{*}{\\textbf{Extra training data}} &\n \\multirow{2}{*}{\\textbf{Learning model}} &\n \\multirow{2}{*}{\\textbf{Learning strategy}} &\n \\multirow{2}{*}{\\textbf{Object category}} \\\\\n & & & & & & &\\\\\n\\hline\n\\hline\n Gonthier-arxiv-2018 & None & CNN & Supervised objectness score (Fast R-CNN(Resnet)) & ImageNet(tag label) & SVM & MIL & Objects in art (watercolor2K, people-art) \\\\\nZadrija-CVIU-2018 & None & VGG19 Conv 5\\_4. 
SIFT, Fisher vector & None & ImageNet(tag label) & Sparse model & None & Zebra crossings, traffic signs \\\\\n Cinbis-TPAMI-2017 & SVM* & FV+CNN & Center prior & ImageNet(tag label) & SVM* & Multi-fold MIL & General objects \\\\\nWei-IJCAI-2017 & None & CNN & None & ImageNet(tag label) & None & Deep descriptor transforming & General objects \\\\\n Zhang-IJCAI-2016 & SVM & CNN & Saliency prior & ImageNet(tag label) & SVM & Easy-to-hard(SPL+CL) & General objects \\\\\n Li-ECCV-2016 & None & FC6 & Strong detector prior (sparsity) & ImageNet(tag label) & SVM & Regularizing score distribution & General objects \\\\\n Ren-TPAMI-2016 & SVM* & FC6 & None & ImageNet(tag label) & SVM* & MIL+bag splitting & General objects \\\\\n Wan-ICIP-2016 & SVM* & FC7 & None & ImageNet(tag label) & SVM* & Correlation suppression+part suppression & General objects \\\\\n Rochan-IVC-2016 & None & Color histogram +CNN & Saliency & PASCAL (edge box), ImageNet & SVM & None & General objects \\\\\n Shi-ECCV-2016 & SVM & FC7(Alexnet) & Size prior & ImageNet(tag label), PASCAL2012 (object size) & SVM & Easy-to-hard(curriculum) & General objects \\\\\n Wang-TIP-2015 & SVM & FC6 & None & ImageNet(tag label) & pLSA, SVM & Online latent category learning & General objects \\\\\nRochan-CVPR-2015 & SVM & CNN & Objectness score, word embedding prior & YouTube-Objects (for parameter validation), Familiar object categories(detector) & SVM, Sparse reconstruction & Appearance transfer from text representation & General objects \\\\\nBilen-CVPR-2015 & LSVM & FC7+spatial features & None & ImageNet(tag label) & LSVM & Convex clustering & General objects \\\\\nWang-ICCV-2015 & None & FC6 & None & PASCAL(edge box), ImageNet & SVM* & Relaxed multiple-Instance SVM & General objects \\\\\nZhou-ICMBD-2015 & SVM & FC7 & Saliency prior & ImageNet(tag label) & SVM & Negative Bootstrapping & Airplanes in remote sensing \\\\\nHan-TGRS-2015 & SVM & DBM & Saliency, intra-class compactness, inter-class separability & None 
& DBM + SVM & Bayesian framework for initialization + refinement detector training & Objects in remote sensing \\\\\nMathe-Arxiv-2014 & Sequential detector & FC6 & Human fixation & ImageNet(tag label) & MIL*+RL & Constrained multiple instance SVM learning + reinforcement learning of detector & Human actions \\\\\n Wang-ECCV-2014 & SVM & FC6 & None & ImageNet(tag label) & pLSA, SVM & Online latent category learning & General objects \\\\\n Bilen-BMVC-2014 & LSVM & DeCAF & None & ImageNet(tag label) & LSVM* & LSVM with posterior regularization on symmetry and mutual exclusion & General objects \\\\\nSong-NIPS-2014 & DPM & FC7 & Objectness score & ImageNet(tag label) & LSVM & Frequent configuration mining+detector training & General objects \\\\\n Song-ICML-2014 & LSVM & DeCAF & None & ImageNet(tag label) & Graph model+LSVM* & Initialization via discriminative submodular cover+smoothed LSVM learning & General objects \\\\\n\\hline\n \\end{tabular}}\n\\end{table*}", "id": "bfae7607-5833-4424-b0e8-9e0a00e92605", "level": "subsection", "origin_cites_number": 27, "parent_id": "bba63d48-815b-4407-9de8-7df72823a004", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Classic Models" ], [ "subsection", "Initialization" ] ], "subsections": [], "title": "Initialization" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 2272, 2271, 2274, 2273 ], "content": "After potential object instances are obtained, these hypotheses are verified in the following refinement processes.\nThe goals of these approaches are to design learning objective functions, optimization strategies, or learning mechanisms to gradually determine objects of interest from the extracted initial instance training samples.\nA brief summary of these approaches is shown in Table \\ref{table refinement}.\n Hoai et al. 
propose an approach which localizes the instances of the positive class and learns a sub-window classifier to recognize the corresponding object class.\nBlaschko et al. use a structured output SVM to learn a regressor from the weakly labeled training images to object locations that are parameterized by the coordinates of the bounding boxes.\nThe object locations are treated as latent variables, while the image-level annotation is used to constrain the set of values the latent variables can take.\nSimilarly, Pandey et al. learn weakly supervised object detectors by using DPMs with latent SVM training.\nIn , a soft-label boosting approach is developed to exploit the soft labels that are estimated during the MIL process to train object detectors based on the Boosting algorithm.\nIn , Tang et al. treat the weakly supervised object localization problem as an object co-localization task, and present a joint image-box formulation to mine reliable object locations via a Boolean constrained quadratic program.\nThis approach can handle noisy labels in the image-level annotations.\nTo address the issue that the MIL process may converge to poor local optima after the initialization, Cinbis et al. design a multi-fold MIL training paradigm.\nThis method divides the weakly labelled training images into multiple folds and implements the detector training process and object re-localization process in different folds, thereby alleviating convergence to poor local optima.\n\\renewcommand{\\multirowsetup}{\\centering}\n\\begin{table*} [t]\n \\tiny\n\t\\caption{Summary of the approaches using visual cues, which is a subcategory in the weakly supervised object localization and detection approaches based on the off-the-shelf deep models. * indicates a certain variation of the corresponding model. An approach is considered for general object category when it is tested for detecting more than five object categories in the corresponding literature.
The approaches with None detector indicate the weakly supervised object localization approaches.}\n\t \\label{table pre-learning} \\resizebox{\\linewidth}{!}{\\begin{tabular}{m{2.5cm}<{\\centering}m{1.5cm}<{\\centering}m{1.5cm}<{\\centering}m{2cm}<{\\centering}m{3cm}<{\\centering}m{1.5cm}<{\\centering}m{3.1cm}<{\\centering}m{1.8cm}<{\\centering}}\n\\hline\n \\multirow{2}{*}{\\textbf{Methods}} &\n \\multirow{2}{*}{\\textbf{Detector}} &\n \\multirow{2}{*}{\\textbf{Descriptor}} &\n \\multirow{2}{*}{\\textbf{Prior knowledge} } &\n \\multirow{2}{*}{\\textbf{Extra training data}} &\n \\multirow{2}{*}{\\textbf{Learning model}} &\n \\multirow{2}{*}{\\textbf{Learning strategy}} &\n \\multirow{2}{*}{\\textbf{Object category}} \\\\\n & & & & & & &\\\\\n\\hline\n\\hline\n Li-ISPRS-2018 & CAM* (VGG-F) & CNN & None & ImageNet(tag label) & VGG-F* & Learning learning + CAM learning (patch level) & Remote sensing objects \\\\\n Wilhelem-DICTA-2017 & None & CNN & None & ImageNet(tag label) & CAM & CAM+KDE refine & General objects \\\\\n Tang-TMM-2017 & DPM & CNN & Saliency + objectness score & ImageNet(tag label) & DPM+ CNN & Region initialization+DPM and feature learning+bounding box modification & General objects \\\\\n Kolesnikov-BMVC-2016 & None & CNN & Human feedback annotation & ImageNet(tag label) & CAM & Active learning for identifying object cluster & General objects \\\\\n Bency-ECCV-2016 & None & CNN & None & ImageNet(tag label) & VGG16 & Beam-search based on CNN classifier & General objects \\\\\nZhou-MSSP-2016 & SVM & FC7 & Saliency prior & ImageNet(tag label), remote sensing data(unlabelled) & CNN (AlexNet), SVM & Deep feature transfer +MIL & Remote sensing objects (airplane, car, airport) \\\\\nBergamo-WACV-2016 & SVM & CNN & None & ImageNet(tag label) & CNN, SVM & Mask out initialization + SVM detector training & General objects \\\\\nHoffman-CVPR-2015 & SVM & FC7 & Detector prior+ representation prior & ImageNet(tag label), ILSVRC13 validation subset(box
annotation) & CNN, Latent SVM & Transferring detectors and representation from auxiliary data & General objects \\\\\n\\hline\n \\end{tabular}}\n\\end{table*}", "id": "8f1d3bcb-1c87-4161-b6dc-8cbe17a169ff", "level": "subsection", "origin_cites_number": 14, "parent_id": "bba63d48-815b-4407-9de8-7df72823a004", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Classic Models" ], [ "subsection", "Refinement" ] ], "subsections": [], "title": "Refinement" }, { "cite_extract_rate": 0, "cites": [], "content": "A number of iterative approaches have been developed that take both initialization and refinement into account.\nIn , Siva et al. propose an intra-class metric and an inter-class metric to initialize the potential object locations.\nAfter obtaining the initial object locations, this method iteratively trains a DPM object detector and uses a model drift detection approach to determine the termination of the refinement dynamically.\nDeselaers et al. present a conditional random field (CRF) model, which is first used to learn generic prior knowledge of the objects from meta-training data to localize the potential objects of interest in the weakly labelled training images.\nThis algorithm updates the CRF model to learn the appearance and shape models for the target object category and localizes the objects of interest in the refinement stage. \nThe alternation of localization and learning processes progressively transforms the CRF model from class-generic prior knowledge into the specialized knowledge for a certain target class.\nFor learning a weakly supervised attribute localizer, Wang et al.
initialize the learning process by building a Hierarchical Space Tiling (HST) scene configuration model; the corresponding appearance models are then trained based on the HST.\nA joint inference and learning process is designed to update the scene attributes and the correlations between the scene parts and attributes gradually.", "id": "31104e17-5d59-42f1-88ae-903542c63cd2", "level": "subsection", "origin_cites_number": 4, "parent_id": "bba63d48-815b-4407-9de8-7df72823a004", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Classic Models" ], [ "subsection", "Initialization and Refinement" ] ], "subsections": [], "title": "Initialization and Refinement" }, { "cite_extract_rate": 0, "cites": [], "content": "Although the classic weakly supervised learning models were studied early on, the two-stage learning frameworks, i.e., the learning initialization stage and refinement stage, built by these methods have been widely applied in subsequent works. In the learning initialization stage, these methods provide two kinds of information cues to infer the candidate object regions. The first is bottom-up cues, including the region saliency, objectness, intra-class consistency, inter-class discriminability, etc. The second is top-down cues, which usually provide the appearance prior for the learning process. Note that since such top-down cues are hard to obtain from the weakly labeled data, auxiliary training data (with instance-level manual annotation) are usually leveraged to explore the top-down cues, which are then transferred to the weakly labeled target data.
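The initialization stage described above, where candidate regions are ranked by a weighted combination of bottom-up cues before any refinement, can be sketched with a toy example. Everything below is hypothetical and purely illustrative: the `rank_proposals` helper, the cue weights, and the scores do not reproduce any specific method from the literature.

```python
# Toy sketch of bottom-up initialization: each candidate region carries
# pre-computed saliency and objectness scores in [0, 1], and regions are
# ranked by a weighted sum of the two cues. Weights are illustrative.

def rank_proposals(proposals, w_saliency=0.5, w_objectness=0.5):
    """Sort candidate regions by a combined bottom-up score (highest first)."""
    def combined(p):
        return w_saliency * p["saliency"] + w_objectness * p["objectness"]
    return sorted(proposals, key=combined, reverse=True)

proposals = [
    {"box": (10, 10, 80, 80), "saliency": 0.9, "objectness": 0.7},
    {"box": (0, 0, 40, 40),   "saliency": 0.2, "objectness": 0.3},
    {"box": (30, 30, 90, 90), "saliency": 0.6, "objectness": 0.8},
]

# The top-ranked region would seed the subsequent refinement (e.g., MIL).
best = rank_proposals(proposals)[0]
```

Relying on a single cue, as some of the surveyed methods do, simply corresponds to setting the other weight to zero.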
In the refinement stage, classic machine learning models, such as SVM and CRF, are adopted to gradually refine both the appearance model and locations of the objects of interest.\nThe advantage of the classic weakly supervised learning methods is that the learning processes can be implemented on small-scale training data and the whole frameworks are fast in both the training phase and the testing phase. The disadvantage is that their performance is not satisfactory, due to limitations in feature representation and model complexity.", "id": "19588196-93c9-4245-8a54-e9fb29d5fd1e", "level": "subsection", "origin_cites_number": 0, "parent_id": "bba63d48-815b-4407-9de8-7df72823a004", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Classic Models" ], [ "subsection", "Discussion" ] ], "subsections": [], "title": "Discussion" }, { "cite_extract_rate": 0.5714285714285711, "cites": [ 1223, 514, 2272, 8429, 2275, 895, 2278, 2264, 2273, 2276, 2279, 2277 ], "content": "In this section, we review the approaches that learn weakly supervised object localizers or detectors based on classic formulations combined with feature representations from deep neural networks, either pre-trained on the ImageNet dataset (with image tag annotation) or further fine-tuned on the weakly supervised training images in the target domain.\nThe feature representations are based on the widely used deep models for image classification, such as AlexNet and VGG .\nThe detectors are constructed based on classic formulations such as DPM and SVM~, or recent models such as RCNN and fast RCNN .\nWe further divide these approaches into three subcategories using pre-trained deep features, inherent cues in deep models, and fine-tuned deep models as shown in Fig.
\\ref{off-the-shelf-flowchart}.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=8.5 cm]{subcategory.jpg}\n \\caption{Illustration of different usages of the off-the-shelf deep neural networks by the weakly supervised object localization and detection approaches based on the off-the-shelf deep models. }\n \\label{off-the-shelf-flowchart}\n\\end{figure}\n\\renewcommand{\\multirowsetup}{\\centering}\n\\begin{table*} [t]\n \\tiny\n\t\\caption{Summary of the approaches with fine-tuned deep models, which is a subcategory in the weakly supervised object localization and detection approaches based on the off-the-shelf deep models. * indicates a certain variation of the corresponding model. An approach is considered for general object category when it is tested for detecting more than five object categories in the corresponding literature. }\n\\label{table prepost-learning}\n \\resizebox{\\linewidth}{!}{\\begin{tabular}{m{2cm}<{\\centering}m{2cm}<{\\centering}m{1.6cm}<{\\centering}m{2.8cm}<{\\centering}m{2.5cm}<{\\centering}m{1.5cm}<{\\centering}m{3.7cm}<{\\centering}m{1.5cm}<{\\centering}}\n\\hline\n \\multirow{2}{*}{\\textbf{Methods}} &\n \\multirow{2}{*}{\\textbf{Detector}} &\n \\multirow{2}{*}{\\textbf{Descriptor}} &\n \\multirow{2}{*}{\\textbf{Prior knowledge} } &\n \\multirow{2}{*}{\\textbf{Extra training data}} &\n \\multirow{2}{*}{\\textbf{Learning model}} &\n \\multirow{2}{*}{\\textbf{Learning strategy}} &\n \\multirow{2}{*}{\\textbf{Object category}} \\\\\n & & & & & & &\\\\\n\\hline\n\\hline\n Zhang-IJCV-2019 & Fast RCNN (VGG16) & Pre-trained FC7 & Tag number + mask out prior(AlexNet) & ImageNet(tag label) & SVM & Easy-to-hard & General objects \\\\\n Uijlings-CVPR-2018 & None & CNN & Semantic objectness(SSD*) & ImageNet(tag label), ILSVRC(full annotation) & SSD*+SVM+ Fast RCNN & MIL+knowledge transfer & General objects \\\\\n Jie-CVPR-2017 & Fast RCNN (VGG16) & CNN & Image-to-object transfer prior & ImageNet(tag label) & Fast RCNN (VGG16) &
Initialization based on classification network and subgraph discovery + iterative Fast RCNN learning & General objects \\\\\nShi-ICCV-2017 & Fast RCNN & CNN & Things and stuff prior & ImageNet(tag label), PASCAL Context (full annotation) & FCN,Fast RCNN & Localizing objects based on things and stuff prior and training Fast RCNN iteratively & General objects \\\\\nSingh-CVPR-2016 & Fast RCNN & CNN & Tracking prior & ImageNet(tag label), Youtube-objects (unlabelled) & Fast RCNN & Discriminative region mining+transferring tracking object pattern + learn object detector & General objects \\\\\nLi-CVPR-2016 & VGG* & CNN & Mask out prior(AlexNet) & ImageNet(tag label) & VGG*, SVM & Progressive Domain Adaptation & General objects \\\\\nLiang-ICCV-2015 & CNN & CNN & Instance example, motion prior & ImageNet(tag label) & CNN,R-CNN & Seed selection based on instance example and instance tracking & General objects \\\\\nChen-ICCV-2015 & RCNN & CNN & Online data type & Web data (weak label) & BVLC net +E-LDA + RCNN & Simple image initialization + graph-based representation adaptation on hard image & General objects \\\\\nZhou-CVPR-2015 & RCNN & FC7 & None & ImageNet(tag label) & SVM, R-CNN & Max-margin visual concept discovery + Domain-specific detector selection & General objects\\\\\n\\hline\n \\end{tabular}}\n\\end{table*}
2296, 2284, 2292, 2264, 2265, 737, 8537, 2295, 2298, 2281, 2290, 2280 ], "content": "The methods of this category replace the hand-crafted feature representations with pre-trained deep features, typically from AlexNet and VGG.\nA brief summary of these approaches is shown in Table \\ref{table off-the-shelf feature}.\nSong et al. determine discriminative feature configurations of an object class via graph modeling, and train object detectors within the multiple-instance learning paradigm.\nThe deep features of this work are extracted based on the DeCAF scheme~ and AlexNet~.\nBy using the deep features and spatial features to represent each proposal region, Bilen et al. propose a convex clustering process for learning the object models under weak supervision.\nThe learning objective is able to enforce the similarity among the selected proposal windows.\nIn , Shi and Ferrari develop a curriculum learning strategy to feed training images into the MIL loop in a pre-defined order, where images containing larger objects are learned at the early stages while images containing smaller objects are learned at later stages.\nRen et al. present a bag-splitting-based MIL mechanism that iteratively generates new negative bags from the positive ones.\nThis algorithm can gradually reduce the ambiguity in positive images and thus facilitate the learning of more reliable training instance samples.\nIn , Wei et al. leverage the pre-trained CNN model to implement a Deep Descriptor Transforming process, which can obtain the category-consistent image regions by evaluating the correlations of the descriptors in the convolutional activations of the CNN model.\n\\renewcommand{\\multirowsetup}{\\centering}\n\\begin{table*} [t]\n \\tiny\n \t\\caption{A brief summary of the approaches using single-network training scheme, which is a subcategory in the weakly supervised object localization and detection approaches with deep weakly supervised learning algorithms.
* indicates a certain variation of the corresponding model. An approach is considered for general object category when it is tested for detecting more than five object categories in the corresponding literature. The approaches with None detector indicate the weakly supervised object localization approaches.}\n \\label{table endtoend}\n \\resizebox{\\linewidth}{!}{\\begin{tabular}{m{2.5cm}<{\\centering}m{1.5cm}<{\\centering}m{1cm}<{\\centering}m{1cm}<{\\centering}m{2.5cm}<{\\centering}m{1.5cm}<{\\centering}m{2.8cm}<{\\centering}m{1.8cm}<{\\centering}}\n\\hline\n \\multirow{2}{*}{\\textbf{Methods}} &\n \\multirow{2}{*}{\\textbf{Detector}} &\n \\multirow{2}{*}{\\textbf{Descriptor}} &\n \\multirow{2}{*}{\\textbf{Prior knowledge} } &\n \\multirow{2}{*}{\\textbf{Extra training data}} &\n \\multirow{2}{*}{\\textbf{Learning model}} &\n \\multirow{2}{*}{\\textbf{Learning strategy}} &\n \\multirow{2}{*}{\\textbf{Object category}} \\\\\n & & & & & & &\\\\\n\\hline\n\\hline\nHuang-NIPS-2020&Faster RCNN &CNN &None &ImageNet(tag label) &Faster RCNN*(VGG16/ResNet50) &Proposal attention aggregation and distillation & General objects \\\\\nShen-CVPR-2020&Faster RCNN* &CNN &None &ImageNet(tag label) + Flickr &Faster RCNN*(VGG16) &bagging-mixup + background noise decomposition + clean data modelling& General objects \\\\\nMai-CVPR-2020&None &CNN &None &ImageNet(tag label) &VGG/InceptionV3 &Integrating discriminative region mining and adversarial erasing & General objects \\\\\nZhang-ECCV-2020 &None &CNN &Cross-image consistency &ImageNet(tag label) &VGG/InceptionV3/ ResNet50 &Inter-image stochastic consistency and global consistency & General objects \\\\\nYang-WACV-2020&None &CNN &None &ImageNet(tag label) &VGG &Weighted classification activation map combination & General objects \\\\\nYang-ICCV-2019 & Faster RCNN* (VGG16) & CNN & None & ImageNet(tag label) & Faster RCNN* (VGG16) + CAM & Online classifier learning with bounding box regression & General objects \\\\\n Wan-CVPR-2019 &
Faster RCNN* (VGG16) & CNN & None & ImageNet(tag label) & Faster RCNN* (VGG16) & Continuation MIL & General objects \\\\\nShen-CVPR-2019 & WSDDN* (VGG16) & CNN & None & ImageNet(tag label) & Two-stream CNN (WSDDN+DeepLab) & Joint detection and segmentation with cyclic guidance & General objects \\\\\nWan-CVPR-2019 & Fast RCNN & CNN & None & ImageNet(tag label) & Two-stream CNN & Continuation instance selection and detector estimation & General objects \\\\\nChoe-CVPR-2019 & None & CNN & None & ImageNet(tag label) & CNN & Feature learning by attention-based dropout & General objects \\\\\nJiang-ICCV-2019 & None &CNN & None & ImageNet(tag label) & VGG16/Resnet101 & Online attention accumulation on CAM & General objects\\\\\nSangineto-TPAMI-2018 & Fast RCNN (VGG16) & CNN & None & ImageNet(tag label) & Fast RCNN (VGG16) & Easy-to-hard & General objects \\\\\nInoue-CVPR-2018 & SSD & CNN & None & PASCAL (full label as source domain),ImageNet & SSD & Domain transfer + pseudo-labeling & Cartoon objects \\\\\n Wan-CVPR-2018 & Faster RCNN* (VGG16) & CNN & None & ImageNet(tag label) & Faster RCNN* (VGG16) & Min-entropy latent modeling & General objects \\\\\nShen-TNNLS-2018 & VGG16* & CNN & None & ImageNet (tag label) & VGG16* & Object-specific pixel gradient mapping+Iterative component mining & General objects \\\\\nTang-TPAMI-2018 & Fast RCNN* (model ensemble) & CNN & None & ImageNet(tag label) & Fast RCNN* (model ensemble) & MIL+OICR+multi-scale+proposal cluster learning & General objects \\\\\n Zhang-CVPR-2018 & None & CNN & None & ImageNet(tag label) & VGG16* & Adversarial complementary erasing & General objects (for ILSVRC) \\\\\n Choe-BMVC-2018 & ResNet & CNN & None & Tiny ImageNet(tag label) & ResNet & GoogLeNet Resize (GR) augmentation & General objects (for Tiny ImageNet) \\\\\n Zhang-ECCV-2018 & Inception-v3+CAM & CNN & None & ImageNet(tag label) & Inception-v3+CAM & Self-produced guidance learning & General objects (for ILSVRC and CUB) \\\\\n Gao-ECCV-2018 & Fast
RCNN* & CNN & Count (human label) & ImageNet(tag label) & Fast RCNN*+Fast RCNN & WSL with count-based region selection & General objects \\\\\n Singh-ICCV-2017 & CAM* (GoogLeNet)& CNN & None & ImageNet (tag label) & CAM* (GoogLeNet) & Random hiding patches & General objects (for ILSVRC) \\\\\n Zhu-ICCV-2017 & None & CNN & None & ImageNet(tag label) & GoogLeNet* & Soft proposal layer+CAM & General objects \\\\\nWan-ICIP-2017 & None & CNN & None & ImageNet(tag label) & CAM*(VGG) & CAM with spatial pyramid pooling layer & General objects \\\\\n Durand-CVPR-2017 & None & CNN & None & ImageNet(tag label) & CAM* (ResNet101) & CAM with multi-map transfer layer & General objects \\\\\nTang-CVPR-2017 & Fast RCNN* (model ensemble) & CNN & None & ImageNet(tag label) & Fast RCNN*+Fast RCNN & MIL+OICR+multi-scale & General objects \\\\\n Jiang-CVPR-2017 & Fast RCNN* (AlexNet) & CNN & None & PASCAL (edge box),ImageNet & AlexNet+ ROIpool & Region classification+region selection+multi-scale & General objects \\\\\nDiba-CVPR-2017 & Faster RCNN* (VGG16) & CNN & None & ImageNet(tag label) & Multi-stream CNN & Cascading LocNet (CAM), SegNet, and MILNet+multi-scale & General objects \\\\\nSelvaraju-ICCV-2017 & None & CNN & None & ImageNet(tag label) & VGG* & Gradient-based class activation mapping & General objects \\\\\nTang-PR-2017 & None & CNN & None & ImageNet(tag label) & Fast RCNN* (VGG-16) & SPP with discovery block and classification block & General objects \\\\\nGudi-BMVC-2017 & None & CNN & None & ImageNet(tag label) & CAM* (VGG-16) & CAM with Spatial Pyramid Averaged Max (SPAM) Pooling & General objects\\\\\n Bilen-CVPR-2016 & Fast RCNN* (model ensemble) & CNN & None & PASCAL (edge box),ImageNet & Fast RCNN* & MIL+multi-scale & General objects \\\\\n Kantorov-ECCV-2016 & Fast RCNN* (VGG-F) & CNN & Context & ImageNet(tag label) & Fast RCNN* & MIL+multi-scale & General objects \\\\\n Teh-BMVC-2016 & CNN & CNN & None & PASCAL (edge box),ImageNet & CNN & Proposal attention
learning & General objects\\\\\n Durand-CVPR-2016 & None & CNN & None & ImageNet(tag label) & CNN & Feature extraction network+weakly supervised prediction module & General objects \\\\\n Zhou-CVPR-2016 & None & CNN & None & ImageNet(tag label) & GoogLeNet* & Class activation mapping & General objects\\\\\n Oquab-CVPR-2015 & None & CNN & None & ImageNet(tag label) & CNN & Fully convolutional CNN with global max pooling & General objects \\\\\n Wu-CVPR-2015 & None & CNN & None & PASCAL (BING),ImageNet & CNN & Deep multiple instance learning network & General objects\\\\\n\\hline\n \\end{tabular}}\n\\end{table*}", "id": "72d7b986-53df-40a6-a211-40fcbb5e65ab", "level": "subsection", "origin_cites_number": 44, "parent_id": "ce302070-cf4f-4d53-a26c-8a51743b26f9", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Off-the-shelf Deep Models" ], [ "subsection", "Pre-trained Deep Features" ] ], "subsections": [], "title": "Pre-trained Deep Features" }, { "cite_extract_rate": 1, "cites": [ 2271, 2273, 2274, 2272 ], "content": "Instead of using the pre-trained deep models as feature extractor, the methods of this category obtain useful information cues (such as the activations in the intermediate network layers and the semantic scores in the output network layer) from the pre-trained deep neural networks to facilitate the weakly supervised learning process.\nThe focus of these approaches mainly lies in the initialization stage of the weakly supervised learning process.\nA brief summary of these approaches is shown in Table \\ref{table pre-learning}.\nBergamo et al. 
propose a self-taught deep learning approach for localizing objects of interest under weak supervision.\nIn the initialization stage, they design a mask-out strategy based on the deep semantic cues from a pre-trained classification network.\nSpecifically, this method first calculates the drop in the image-level classification score when masking out a candidate object proposal region, and then selects the proposals with the largest drops as the object regions of interest.\nAfter the initialization stage, this method trains an SVM-based object detector in the subsequent refinement stage for final object localization.\nSimilar to , Bency et al. propose a beam search algorithm to leverage the activation maps of a pre-trained classification network to localize the objects of interest.\nThis method is based on the observation that when image regions centered around objects of interest are classified by a pre-trained DNN, they obtain higher semantic scores than other image regions.\nHoffman et al. develop a transfer learning-based algorithm, where the deep neural network is first trained on both the weakly labeled auxiliary training data and the strongly labeled training data to obtain the background detector.\nThen, an SVM-MIL model is adopted to learn the object detectors based on the potential foreground regions that are obtained by using the pre-trained DNN to screen the image background regions.\nTo overcome the problem that the objects of interest would sometimes co-occur with the distracting image background, Kolesnikov et al.
propose a user-guided weakly supervised learning framework to improve the localization capacity.\nThis approach first trains a classification network under image-level annotation.\nFor each image, the intermediate feature maps of the pre-trained network are clustered, and the object cluster is identified by a user.", "id": "a62482ff-4ed2-4ba2-b72d-45432f53b8f2", "level": "subsection", "origin_cites_number": 4, "parent_id": "ce302070-cf4f-4d53-a26c-8a51743b26f9", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Off-the-shelf Deep Models" ], [ "subsection", "Inherent Cues in Deep Models" ] ], "subsections": [], "title": "Inherent Cues in Deep Models" }, { "cite_extract_rate": 0.5714285714285711, "cites": [ 8538, 2306, 2303, 2301, 802, 2302, 2309, 2300, 2304, 2308, 2276, 2307, 2279, 2310, 2305, 2277 ], "content": "The methods of this category fine-tune the off-the-shelf DNN models during the weakly supervised learning process to obtain strong object detectors .\nA brief summary of these approaches is shown in Table \\ref{table prepost-learning}.\nChen et al. first train CNNs from the web image data via an easy-to-hard learning scheme.\nThe learned deep features are used to mine object locations by using the exemplar-LDA detector .\nThe off-the-shelf RCNN detector is then adopted to learn object models based on their localization results.\nIn , Li et al. first use the mask-out strategy based on the pre-trained classification network to obtain the class-specific object proposals.\nAn SVM-based MIL process is used to localize object instances and\nthe classification network is further fine-tuned on the localized object instances for better performance.\nShi et al.
propose to transfer the prior knowledge of \\textit{things} and \\textit{stuff} to help the weakly supervised learning process.\nA semantic segmentation network is trained from the source set (with available bounding-box annotations) to generate the stuff map and thing map for the weakly labeled images in the target set.\nThese maps are used to obtain potential object locations, and the fast RCNN model is adopted in a Deep MIL scheme to train the object detectors.\nA recent work revisits knowledge transfer for training the weakly supervised object detector.\nIn this method, a DNN-based proposal extractor is first learned from the source data.\nThe DNN is designed based on the SSD architecture and trained with a semantic hierarchy.\nThe network is then used to provide proposals and other prior knowledge for the weakly labeled images in the target set.\nAn MIL process is used to determine the proposals that cover the objects of interest,\nbased on which the fast RCNN model is adopted to learn the final object detectors.\nIn , Zhang et al. first learn to localize the objects of interest via a collaborative self-paced curriculum learning mechanism based on pre-trained deep features.\nThe fast RCNN model is applied to learn object detectors.\n\\renewcommand{\\multirowsetup}{\\centering}\n\\begin{table*} [t]\n \\tiny\n\t\\caption{A brief summary of the approaches with multi-network training, which is a subcategory in the weakly supervised object localization and detection approaches with deep weakly supervised learning algorithms. * indicates a certain variation of the corresponding model. An approach is considered for general object category when it is tested for detecting more than five object categories in the corresponding literature.
The approaches with None detector indicate the weakly supervised object localization approaches.}\n\t\\label{table multinet} \\resizebox{\\linewidth}{!}{\\begin{tabular}{m{2cm}<{\\centering}m{1.5cm}<{\\centering}m{1.5cm}<{\\centering}m{1.8cm}<{\\centering}m{3cm}<{\\centering}m{2.6cm}<{\\centering}m{3cm}<{\\centering}m{1.5cm}<{\\centering}}\n\\hline\n \\multirow{2}{*}{\\textbf{Methods}} &\n \\multirow{2}{*}{\\textbf{Detector}} &\n \\multirow{2}{*}{\\textbf{Descriptor}} &\n \\multirow{2}{*}{\\textbf{Prior knowledge} } &\n \\multirow{2}{*}{\\textbf{Extra training data}} &\n \\multirow{2}{*}{\\textbf{Learning model}} &\n \\multirow{2}{*}{\\textbf{Learning strategy}} &\n \\multirow{2}{*}{\\textbf{Object category}} \\\\\n & & & & & & &\\\\\n\\hline\n\\hline\nZhang-CVPR-2020 &None &CNN &Common object co-localization &ImageNet(tag label) &VGG/InceptionV3/ResNet50/ DenseNet161 &Classification + pseudo supervised object localization & General objects \\\\\nZhong-ECCV-2020 &Faster RCNN &CNN &Location prior &ImageNet(tag label) + COCO (box label) &One-class universal detector + MIL classifier (on ResNet50) &Progressive knowledge transfer & General objects \\\\\nKosugi-ICCV-2019 & Fast RCNN* &CNN & Mask-out prior & ImageNet(tag label) & Mask-out net + OICR* & Mask-out prior-guided label refinement & General objects \\\\\nSingh-CVPR-2019 & Fast RCNN* & CNN & Motion prior & ImageNet(tag label), videos & RPN+WSDDN/OICR (VGG16) & Training RPN using weakly-labeled videos for WSOD & General objects \\\\\nArun-CVPR-2019 & Fast RCNN & CNN & None & ImageNet(tag label) & Fast RCNN (VGG16) + Fast RCNN* (VGG16) & Employ dissimilarity coefficient for modeling uncertainty & General objects \\\\\nLi-TPAMI-2019 & Faster RCNN* (VGG16) & CNN & Objectness (classifier) prior & ImageNet(tag label), ILSVRC2013(box label for unseen categories) & Faster RCNN* (VGG16) *2 & Objectness transfer+MIL +multi-scale & General objects \\\\\n Zhang-ECCV-2018 & Fast RCNN* (VGG16) & CNN & None & PASCAL (edge box),
ImageNet(tag label) & Multi-view WSDDN+multi-view Fast RCNN & Two-phase multi-view learning & General objects \\\\\nShen-CVPR-2018 & SSD & CNN & None & ImageNet(tag label) & SSD+Fast RCNN* & MIL+GAN & General objects \\\\\n Zhang-CVPR-2018 & Faster RCNN (VGG16) & CNN & None & ImageNet(tag label) & MIDN+Faster RCNN & WSOD+Pseudo Ground-truth Mining+FOD & General objects \\\\\nZhang-CVPR-2018 & Fast RCNN* (VGG16) & CNN & None & PASCAL (edge box), ImageNet & WSDDN + Fast RCNN* & WSDDN+easy-to-hard FOD & General objects \\\\\nTang-ECCV-2018 & Fast RCNN* (VGG16) & CNN & None & ImageNet(tag label) & Fast RCNN* (VGG16) & Alternating training of WSRPN and WSOD+multi-scale & General objects \\\\\nTao-TMM-2018 & Fast RCNN* (VGG16) & CNN & Web image & Web dataset(weak label), ImageNet(tag label) & MIDN & Easy-to-hard & General objects \\\\\n Wang-IJCAI-2018 & Faster RCNN (VGG16) & CNN & Model consistency & ImageNet(tag label) & Faster RCNN+Fast RCNN* & MIL+collaborative learning & General objects \\\\\nWei-ECCV-2018 & Faster RCNN* (VGG16) & CNN & Shape prior+ context prior & ImageNet(tag label) & MIDN+CAM+ Deeplab & Tight Box Mining+MIL +OICR+multi-scale & General objects \\\\\n Ge-CVPR-2018 & Faster RCNN (VGG16) & CNN & Local objectness and global attention & ImageNet(tag label) & MIDN, TripNet, GoogleNet, FCN, Fast RCNN & Multi-evidence fusion+ outlier filtering +pixel label prediction +box generation+multi-scale & General objects \\\\\nDong-MM-2017 & Fast RCNN* & CNN & None & ImageNet(tag label) & Fast RCNN*+R-FCN & Easy-to-hard & General objects \\\\\n Li-BMVC-2017 & Fast RCNN* & CNN & Shape prior & ImageNet(tag label) & Fast RCNN* +CAM+DeepLab & Easy-to-hard & General objects \\\\\nWang-CVPR-2017 & CNN & CNN & None & ImageNet(tag label) & CAM* & Image-level training+pixel-level fine tuning & Salient objects \\\\\n Sun-CVPR-2016 & None & CNN & None & ImageNet(tag label) & Multi-scale FCN+ CNN (VGG16) & Cascade localization and recognition & General objects \\\\\n 
Zhang-TGRS-2016 & CNN & CNN & None & ImageNet(tag label), auxiliary data(image label) & CPRNet+LocNet & Alternating training of CPRNet and LocNet & Aircraft \\\\\n\\hline\n \\end{tabular}}\n\\end{table*}", "id": "eab65442-451f-481c-9954-9b19dd3d9a08", "level": "subsection", "origin_cites_number": 28, "parent_id": "ce302070-cf4f-4d53-a26c-8a51743b26f9", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Off-the-shelf Deep Models" ], [ "subsection", "Fine-tuned Deep Models" ] ], "subsections": [], "title": "Fine-tuned Deep Models" }, { "cite_extract_rate": 0, "cites": [], "content": "Introducing the off-the-shelf deep models into the weakly supervised object localization or detection framework is the most straightforward approach to integrate deep learning and weakly supervised learning.\nThe methods in this category show that: 1) feature learning is an important factor to improve the weakly supervised learning process; 2) DCNN models can infer discriminative spatial locations when learned under image-level supervision; 3) pre-training DNN models on large-scale auxiliary training data is a simple but effective way to encode useful cues for the weakly supervised learning process.\nCompared with the classic models,\nthe methods of this category exploit large-scale auxiliary training data to learn powerful feature representations and top-down cues.\nBy using DNN models as the object detector or localizer, a significant performance gain can be obtained.\nHowever, more effective feature learning models can be exploited in the weakly supervised learning process.", "id": "0873c3d1-8648-437a-a297-c566cfb8420e", "level": "subsection", "origin_cites_number": 0, "parent_id": "ce302070-cf4f-4d53-a26c-8a51743b26f9", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Off-the-shelf Deep Models" ], [ "subsection", "Discussion" ] ], "subsections": [], "title": 
"Discussion" }, { "cite_extract_rate": 0.529411764705882, "cites": [ 2282, 2291, 7089, 2300, 2312, 2284, 2304, 2311, 2303 ], "content": "In this section, we review the methods that learn weakly supervised object localizers or detectors by designing novel deep weakly supervised learning frameworks\\footnote{Notice that here we mainly indicate that the core learning blocks used in the learning frameworks are based on deep learning, while some minor computational components in pre-processing or post-processing, such as proposal extraction and bounding-box modification, etc., are not necessarily implemented by deep learning.}.\nDifferent from the approaches discussed in previous sections, both the feature representations and the object detectors of the approaches in this category are learned by newly-designed deep neural networks.\nThe whole weakly supervised learning framework may be designed in a compact network model, such as in , or contain several function-distinct DNN components, such as in .\nWe categorize these approaches into two groups, using single-network training and multi-network training, respectively.", "id": "930ae35a-10a8-4a65-83e8-a12f89f6fe36", "level": "section", "origin_cites_number": 17, "parent_id": "3a2d911b-3708-4f3a-880a-851136519e79", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Deep Weakly Supervised Learning" ] ], "subsections": [ "352d0dfb-6a12-4bf2-b2bf-5544ab013008", "65da26f9-ce63-494b-9e38-4a72e74d3aa0", "c9fa8b67-133f-4688-92de-bc8e20c858b9" ], "title": "Deep Weakly Supervised Learning" }, { "cite_extract_rate": 0.777777777777777, "cites": [ 2259, 2282, 2298, 2290, 2314, 2280, 2315, 737, 107, 7090, 2286, 687, 2313, 2295 ], "content": "The methods of this category are designed with a single deep neural network using the training images (or together with the extracted object proposals) as inputs and the image-level classification labels as the outputs.\nThese 
approaches do not usually rely on meticulously designed initialization processes to obtain the potential object regions.\nInstead, these methods discover the object regions of interest solely based on the end-to-end learning process of the designed DNN models.\nThe DNN models used in these approaches usually have similar feature learning layers to conventional image classification networks, e.g., AlexNet, VGG, GoogleNet, and ResNet, followed by the instance label inferring and image label propagation layers to inherently predict the labels of each proposal region and generate the final image labels from the predicted proposal labels, respectively. Some of the methods contain multiple network streams for inferring multiple informative cues online.\nA brief summary of these approaches is\nshown in Table \\ref{table endtoend}.\nIn the DNN models proposed by Wu et al. and Bilen et al. , the network inputs are the training images and extracted object proposal regions while the outputs are the image-level semantic scores.\nThe first parts of these networks extract the features and infer the labels for each proposal region, and the second parts propagate the proposal scores to the image-level via the max pooling scheme or the two-stream score regularization method .\nZhou et al. present an end-to-end weakly supervised deep learning method based on class activation mapping (CAM).\nThe weights of the feature maps in the intermediate layers are inferred based on the correspondence between the feature map and a certain object category.\nThe feature maps are then combined to form the class activation maps based on the inferred weights, which highlight the locations of the objects of interest. {Notably, this method is demonstrated to be highly efficient in recent work . }\nBuilt on CAM , Durand et al. 
introduce the multi-map transfer layer and the WILDCAT pooling layer to facilitate a more accurate deep MIL process.\nRecently, a number of two-branch MIL models have been developed in which one branch is based on a typical deep network and the other one is introduced for weakly supervised learning.\nBased on the WSDDN , Tang et al. integrate a MIL branch and an instance classifier refinement branch into a unified deep learning framework such that more accurate online instance classifier learning is realized under weak supervision.\nIn , Diba et al. propose a weakly supervised cascaded convolutional network, which contains three branches.\nThe first branch adopts the CAM module to generate the class activation maps. \nThe second branch uses the generated class activation maps as the supervision signal to learn a segmentation module to generate the segmentation masks of the objects of interest.\nUsing the candidate object proposals selected based on the obtained segmentation masks as supervision, the third network branch uses a MIL process to mine accurate object locations from the candidate object regions.\nIn , Zhang et al. present a CAM-based network architecture which contains a classification branch and a counterpart classifier branch for object localization.\nSpecifically, the classification branch is used to localize the discriminative object regions, which drives the counterpart classifier branch to discover new and complementary object regions by erasing its discovered regions from the feature maps.\nWan et al. propose a min-entropy latent model (MELM) for weakly supervised object detection based on the assumption that minimizing entropy results in minimum randomness of a system.\nThe network architecture is similar to , but global min-entropy and local min-entropy losses are introduced to train a DNN model to select the proposal cliques with the largest object probability and mine true object locations from the selected proposal cliques.\nZhang et al. 
develop a self-generated guidance method for weakly supervised object localization.\nIn this work, a self-generated guidance map is derived from a CAM {layer to help learn features and object location masks from the previous network layers.}\n {More recently, Gao et al. propose a token semantic coupled attention mapping for WSOL, which models the long-range visual dependency of the image regions and thus avoids partial activation. Ren et al. introduce the instance-associative spatial diversification constraints and build the parametric spatial dropout block to address the instance ambiguity and incomplete localization problems. Besides, they additionally adopt a sequential batch back-propagation algorithm, which enables their model to use a large ResNet as the backbone\\footnote{{As discussed in , it is non-trivial to introduce the deeper network backbones, e.g., the deep residual network, into the weakly supervised object detection frameworks as it would encounter dramatic deterioration of detection accuracy and training non-convergence. }}. Although there are other methods that use ResNet as the backbone , there is very limited exploration of using more recent backbone architectures, e.g., DenseNet and Res2net , in both WSOL and WSOD frameworks. 
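To make the CAM mechanism recurring throughout this subsection concrete, the following minimal sketch computes a class activation map as the class-weighted sum of the final convolutional feature maps and derives a bounding box by thresholding the normalized map; the list-based tensor layout, the function names, and the 0.2 threshold are illustrative assumptions of ours, not the exact settings of Zhou et al.

```python
def class_activation_map(feature_maps, fc_weights, class_idx):
    """feature_maps: list of C maps, each an H x W grid of floats.
    fc_weights: num_classes x C classifier weights.
    Returns the class-weighted sum of the maps, normalized to [0, 1]."""
    w = fc_weights[class_idx]
    h, wd = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[sum(w[c] * feature_maps[c][y][x] for c in range(len(w)))
            for x in range(wd)] for y in range(h)]
    lo = min(min(row) for row in cam)
    hi = max(max(row) for row in cam)
    scale = (hi - lo) or 1.0  # guard against a constant map
    return [[(v - lo) / scale for v in row] for row in cam]

def cam_to_box(cam, thresh=0.2):
    """Tight (x0, y0, x1, y1) box around cells with activation >= thresh."""
    pts = [(x, y) for y, row in enumerate(cam)
           for x, v in enumerate(row) if v >= thresh]
    if not pts:
        return None
    xs, ys = zip(*pts)
    return min(xs), min(ys), max(xs), max(ys)
```

In practice the maps come from the network's last convolutional layer and the weights from the classifier trained after global average pooling; the threshold would be tuned on held-out data.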
}", "id": "352d0dfb-6a12-4bf2-b2bf-5544ab013008", "level": "subsection", "origin_cites_number": 18, "parent_id": "930ae35a-10a8-4a65-83e8-a12f89f6fe36", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Deep Weakly Supervised Learning" ], [ "subsection", "Single-Network Training" ] ], "subsections": [], "title": "Single-Network Training" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 2259, 2298, 2306, 737, 2301, 2304, 2310, 209, 8429, 2309 ], "content": "The methods in this school coordinate multiple networks, either in one training stage or in multiple training stages, to accomplish the weakly supervised object localization or detection task.\nThe approaches of this category usually train a network to mine the initial object regions and another network for the detection task under the MIL framework .\nAn additional object detection network, e.g., Fast RCNN, may also be used to train the final object detectors .\nBy integrating multiple networks, these methods tend to achieve better performance in both object localization and detection.\nA brief summary of these approaches is shown in Table \\ref{table multinet}.\nLi et al. propose a multiple instance curriculum learning method, where a network based on the WSDDN model is used to mine candidate object proposals and another one based on the CAM algorithm to generate saliency maps from the selected proposals.\nA curriculum is designed to select confident training examples based on the consistency between the regions outputted by the two networks.\nThe object detectors are then trained by using the confident training examples iteratively.\nDong et al. 
present a dual-network progressive approach for weakly supervised object detection, where a positive instance selection network and a region refinement network are adopted to minimize the classification error and modify object localization, respectively.\nThese two networks work under a co-training paradigm.\nIn , Ge et al. first obtain intermediate object localization and pixel labeling results using a classification network.\nA triplet-loss network and an instance classification network are then constructed to detect outliers and filter object instances.\nFinally, the filtered object instances are used as the supervision to train another Fast RCNN-based detection network.\n {In order to overcome the limitations brought by the imprecision of the extracted object proposals,} Wei et al. propose to mine object proposals with tight boxes to learn weakly supervised object detectors. {The assumption is that the proposals with tight boxes are more likely to contain the objects of interest, and thus mining such proposals would help screen out the cluttered background regions. In their approach, }\na semantic segmentation network is first learned using the object localization map generated by CAM as the pseudo ground-truth.\nThe predicted segmentation masks are used to mine object proposals with tight boxes, and fed into the online instance classifier refinement (OICR) network to learn weakly supervised object detectors.\n {With the same motivation as , Tang et al. propose to combine a two-stage region proposal network with an OICR network to learn the weakly supervised object detectors. Instead of mining proposals based on the semantic segmentation cue, Tang et al. use both the low-level cues (feature maps in shallow layers) and the high-level cues (semantic scores in deep layers) to mine reliable proposals in the two stages, respectively. The parameters of the region proposal network are obtained based on the network trained by .}\n {Zhang et al. 
explore the reliability of each training image by evaluating the image difficulty and then feed the images into the learning procedure in an easy-to-hard order. Specifically, the image difficulty is evaluated by diagnosing the localization outputs of the pre-trained WSDDN model based on the concentration of the high-scored proposal locations. }\nIn , Zhang et al. use three networks to learn weakly supervised object detectors.\nFirst, an OICR network is trained to generate the initial object regions.\nThen, they train an RPN based on {the pseudo ground-truth boxes obtained after post-processing the initial object regions, and use the learned RPN} to generate more accurate object locations.\nFinally, fully supervised object detectors are trained based on the obtained object locations.", "id": "65da26f9-ce63-494b-9e38-4a72e74d3aa0", "level": "subsection", "origin_cites_number": 15, "parent_id": "930ae35a-10a8-4a65-83e8-a12f89f6fe36", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Deep Weakly Supervised Learning" ], [ "subsection", "Multi-Network Training " ] ], "subsections": [], "title": "Multi-Network Training " }, { "cite_extract_rate": 0, "cites": [], "content": "Compared with the off-the-shelf deep model-based weakly supervised object detection and localization methods, the deep weakly supervised learning methods exploit the merits of deep learning and weakly supervised learning approaches.\nWithout complex design in the learning initialization stage, the end-to-end deep weakly supervised learning methods have been shown to perform well by introducing the MIL mechanism into the network design of the DCNN models.\nMeanwhile, the multi-network training methods can further improve the learning performance by combining multiple function-specific networks.\nOn the other hand, the performance of these methods is limited by whether the information extracted from the weakly-supervised module is 
effective or not.\nAs such, prior knowledge may be useful to guide the deep learning process without solely relying on the weakly-supervised network.", "id": "c9fa8b67-133f-4688-92de-bc8e20c858b9", "level": "subsection", "origin_cites_number": 0, "parent_id": "930ae35a-10a8-4a65-83e8-a12f89f6fe36", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Deep Weakly Supervised Learning" ], [ "subsection", "Discussion" ] ], "subsections": [], "title": "Discussion" }, { "cite_extract_rate": 0.14285714285714202, "cites": [ 2286 ], "content": "During the last decades, significant efforts have been made to develop various methods for learning weakly supervised object localizers or detectors.\nFor fair performance evaluation, it is of great importance to introduce some publicly available benchmark datasets and evaluation metrics.\n\\begin{table}[t]\n \\centering\n \\caption{ {Brief summarization of the characteristics of the datasets. The top three are the datasets usually used for the WSOD task, while the bottom two are the common datasets for the WSOL task. \\#Categories indicates the number of object categories. \\#Images indicates the number of images. ``GO'' is short for Generic Objects. 
}}\n \\begin{tabular}{ccccc}\n \\hline\n Dataset & \\multicolumn{1}{c}{Content} & \\multicolumn{1}{c}{\\#Categories} & \\multicolumn{1}{c}{\\#Images} & \\multicolumn{1}{c}{Metrics} \\\\\n \\hline\\hline\n PASCAL VOC 07 & \\multirow{3}[2]{*}{GO} & \\multirow{3}[2]{*}{20} & 9,962 & \\multirow{3}[2]{*}{mAP, CorLoc} \\\\\n PASCAL VOC 10 & & & 21,738 & \\\\\n PASCAL VOC 12 & & & 22,531 & \\\\\n \\hline\n CUB-200-2011 & Birds & 200 & 11,788 & Top-1/5 Loc \\\\\n ILSVRC 2016 & GO & 1000 & 1.2 M & GT Loc \\\\\n \\hline\n \\end{tabular}\n \\label{tab:dataset}\n\\end{table}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=9.5 cm]{dataset.jpg}\n \\caption{ {Illustration of examples from the PASCAL VOC (top block), CUB-200-2011 (bottom-left block), and ILSVRC 2016 (bottom-right block) datasets.}}\n \\label{dataset}\n\\end{figure}\nExisting weakly supervised object detection methods are usually evaluated on the PASCAL VOC datasets, including the PASCAL VOC 2007, 2010, and 2012 sets.\nThe PASCAL VOC 2007 , PASCAL VOC 2010 and PASCAL VOC 2012 datasets contain 9,962, 21,738, and 22,531 images of 20 object classes, respectively.\nThese three datasets are divided into train, val, and test sets, where the trainval sets (5,011 images for PASCAL VOC 2007, 10,869 images for PASCAL VOC 2010, and 11,540 images for PASCAL VOC 2012) are used to train the weakly supervised object detector, and the rest are used for evaluation.\nThe mean average precision (mAP) metric is used to measure the performance, where one object is successfully detected if the intersection over union (IoU) between the ground-truth and predicted boxes is more than 50\\%.\nThe weakly supervised object localization performance is usually evaluated on the PASCAL VOC, ILSVRC, and CUB datasets.\nOn the PASCAL VOC datasets , weakly supervised object localization methods only use the trainval sets, which are different from the setting in weakly supervised object detection.\nThat is, both the weakly supervised learning process and localization process 
are implemented on the same image data.\nTo evaluate the localization performance on the PASCAL VOC datasets, the correct localization metric (CorLoc) is adopted, where the bounding-box with the highest class-specific score from each image is examined to determine whether it is correct (with more than 50\\% overlap with the ground-truth box) or not.\nIn addition to the PASCAL VOC datasets, the ILSVRC 2016 dataset \\cite{russakovsky2015imagenet} (i.e., the ImageNet) and CUB-200-2011 dataset are also widely used for performance evaluation.\nThe ILSVRC 2016 dataset contains more than 1.2 million images of 1,000 classes for training, while the validation set, which contains 50,000 images, is used for testing.\nThe CUB-200-2011 dataset contains 11,788 images of 200 categories with 5,994 images for training and 5,794 for testing.\nFor these two datasets, {the commonly-used evaluation metrics are GT-known localization accuracy (GT Loc), Top-1 localization accuracy (Top-1 Loc), and Top-5 localization accuracy (Top-5 Loc). Specifically, GT Loc judges the localization results as correct when the intersection over union (IoU) between the ground-truth bounding box and the estimated box is no less than 50\\%, while Top-1 Loc considers the localization results as correct when the class predicted with the highest score is equal to the ground-truth class and the estimated bounding box has no less than 50\\% IoU with the ground-truth bounding box . Top-5 Loc differs from Top-1 Loc in that it checks if the target label is one of the top 5 predictions. As can be seen, the Top-1 Loc is a harder metric than the GT Loc as it needs to additionally predict the class label correctly. This will dramatically increase the task difficulty when performing under very large or fine-grained semantic spaces. 
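As a concrete illustration of these criteria, a minimal sketch of the IoU test shared by GT Loc and Top-1 Loc follows; the (x0, y0, x1, y1) box convention and the helper names are our own assumptions, not part of any benchmark toolkit.

```python
def iou(box_a, box_b):
    """Intersection over union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix1, iy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)

    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])

    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def top1_loc_correct(pred_label, pred_box, gt_label, gt_box):
    """Top-1 Loc: the predicted class must match AND IoU >= 0.5;
    GT Loc would drop the label check and keep only the IoU test."""
    return pred_label == gt_label and iou(pred_box, gt_box) >= 0.5
```

Dropping the `pred_label == gt_label` condition turns the same check into GT Loc, which is why GT Loc upper-bounds Top-1 Loc on any fixed set of predictions.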
The difficulty of Top-5 Loc is between Top-1 Loc and GT Loc as it requires the model to predict the class label but does not require the top-1 prediction to be correct.}\n {We provide a brief summarization of the characteristics of the aforementioned datasets in Table \\ref{tab:dataset}. We additionally show some examples from each dataset to illustrate the bias of image content in different datasets. As displayed in Fig. \\ref{dataset}, the PASCAL datasets contain relatively more complex image content, where multiple object instances and categories may appear in a single image and different images would contain objects with significant scale variations. Although it only contains 20 object categories, its category diversity is higher than that of the CUB-200-2011 dataset as all the 200 categories in the CUB-200-2011 dataset are related to birds. ILSVRC 2016 contains far more images and categories than the PASCAL datasets. However, the content of the images from the ILSVRC 2016 dataset tends to be simpler than that of the PASCAL datasets: each image from the ILSVRC 2016 dataset typically contains only a single object, and the objects have more consistent sizes and are placed in a clearer background context relative to those from the PASCAL datasets. }\\\\\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=9.5 cm]{videoapp.jpg}\n \\caption{Weakly supervised object localization or detection methods for video understanding. The examples are from . 
}\n \\label{videoap}\n\\end{figure}", "id": "79d342a2-5acc-4290-aaff-2761ce99e494", "level": "section", "origin_cites_number": 7, "parent_id": "3a2d911b-3708-4f3a-880a-851136519e79", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Datasets and Evaluation Metrics" ] ], "subsections": [], "title": "Datasets and Evaluation Metrics" }, { "cite_extract_rate": 0, "cites": [], "content": "In recent years, weakly supervised object localization and detection techniques have been used in numerous vision problems, especially when it is difficult to collect ground-truth labels.", "id": "5cef946e-16aa-4550-aeb5-c885a8308ec2", "level": "section", "origin_cites_number": 0, "parent_id": "3a2d911b-3708-4f3a-880a-851136519e79", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Applications" ] ], "subsections": [ "2093fbc7-1f16-499f-9309-a24b9da6de5b", "5bcd51f6-c4c7-4e80-b0c3-9cec6a4e37d4", "6c796257-03d1-4c11-b23e-711e3eea5646", "ec87da5b-525f-4f14-a7b7-8c787fb56bb5" ], "title": "Applications" }, { "cite_extract_rate": 0.5, "cites": [ 2319, 2317, 2316, 2318, 8539 ], "content": "As it is time-consuming to obtain object-level annotations for each video frame, weakly supervised object localization and detection methods have also been applied in the field of video understanding (see Fig. \\ref{videoap}).\nFor example, Chanda et al. build a two-stream learning framework, which adapts the information from the labeled images (source domain) to the weakly labeled videos (target domain).\nIn Zhang et al. 
propose a self-paced fine-tuning network for learning two network heads to localize and segment the object of interest from the weakly labeled training videos.\nThe network is equipped with the multi-task self-paced learning function which can integrate confident knowledge from each single task (localization or segmentation) and use it to build stronger deep feature representations for both tasks.\nOn the other hand, some works develop methods to localize temporal actions in the given untrimmed videos, where the main goal is to predict the temporal boundary of each action instance contained in the weakly labeled training videos.\n {Essentially, such a weakly supervised action localization (WSAL) task is an emerging yet rapidly developing topic in recent years, and the methods for solving this task are highly related to the weakly supervised object detection and localization methods. The additional challenges are: (i) the duration of the action of interest varies widely, i.e., from a few seconds to thousands of seconds; and (ii) the features extracted to represent the action of interest would be entangled with those of the complex scenes of the video frame. }\n {Notice that when applied to video understanding, there are strong correlations among adjacent video frames. So, additional informative constraints can be introduced to facilitate the weakly supervised object detection or localization under this scenario. 
}", "id": "2093fbc7-1f16-499f-9309-a24b9da6de5b", "level": "subsection", "origin_cites_number": 10, "parent_id": "5cef946e-16aa-4550-aeb5-c885a8308ec2", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Applications" ], [ "subsection", "Video Understanding" ] ], "subsections": [], "title": "Video Understanding" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 2267, 2299 ], "content": "One interesting application of the weakly supervised object localization and detection techniques is the analysis of art images (see Fig. \\ref{art}).\nInoue et al. propose a cross-domain weakly-supervised object detection framework for learning the object detectors from weakly labeled watercolor images.\nA progressive domain adaptation method to transfer the style of the fully-labeled data from the source domain (the normal RGB domain) to the target domain (the watercolor domain) is developed.\nIn Gonthier et al. propose a weakly supervised learning algorithm for detecting objects in paintings.\nThe IconArt database, which contains object classes that are absent from daily-life photographs, is developed for performance evaluation.\nIn addition, Crowley\nand Zisserman adopt a weakly supervised object localization scheme for automatically annotating images of gods and animals in decorations on classical Greek vases.\n {When applied to art image analysis, a key challenge arises due to the distinctiveness of the content domain---even the same semantics and image scenes would be presented differently from those in the natural environment. 
Under this scenario, models with stronger self-domain adaptation capacity would be required for the task.}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=9.5 cm]{artapp.jpg}\n \\caption{Examples of the application of weakly supervised object localization or detection approaches for the analysis of art images.\n The examples are from , where detection results in different colors in the painting images indicate different types of objects. }\n \\label{art}\n\\end{figure}", "id": "5bcd51f6-c4c7-4e80-b0c3-9cec6a4e37d4", "level": "subsection", "origin_cites_number": 3, "parent_id": "5cef946e-16aa-4550-aeb5-c885a8308ec2", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Applications" ], [ "subsection", "Art Image Analysis" ] ], "subsections": [], "title": "Art Image Analysis" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 2321, 2322, 2320 ], "content": "As shown in Fig. \\ref{bio}, medical image analysis is one area where the weakly supervised object localization and detection methods are of critical importance, as only a few annotations of target objects (e.g., organs or tissues) in bio-images can be provided by trained experts.\nTo alleviate this problem, Hwang and Kim develop a two-stream DNN model to localize the tuberculosis regions from the chest X-ray images.\n {Considering that medical image-based applications usually do not have pre-trained networks available, this work proposes a weakly supervised learning scheme without requiring any pre-trained network parameters. The proposed network contains a fully connected layer-based classification branch and a CAM-based localization branch with shared convolutional layers for feature extraction. 
Both branches are supervised by the image-level label annotation, where a weighting parameter is introduced to dynamically control the relative importance between them to gradually switch the focus of the learning process from the classification branch to the localization branch. The authors demonstrate that the features learned from the classification layer at the early stage can provide informative cues to learn the localization branch at the late stage.}\n {For detecting a general type of lesions, Wang et al. model the normal image as the combination of background and noise, while modeling the abnormal images as the combination of background, blood vessels, and noise. With the assumption that the noise for the normal image and abnormal image follows the same distribution, the image data can then be decomposed by the low-rank subspace learning technique to obtain the vessel areas. }\nIn , Gondal et al. apply the weakly supervised object detection network to the retina images and achieve few false positives with high sensitivity on the lesion-level prediction.\nLi et al. apply a sparse coding-based weakly supervised learning method for localizing actinophrys in microscopic images.\nDubost et al. propose weakly supervised regression neural networks for detecting brain lesions. Besides, some recent works also show great research interest in weakly supervised learning-based brain image analysis, such as brain disease prognosis , brain tumor or lesion segmentation , brain structure estimation , etc.\n {Notice that compared to common images, medical imaging data usually suffers from issues of low contrast and limited texture. Fortunately, some spatial priors could be obtained for different organs or lesions. 
These priors can be used to guide the weakly supervised learning process on medical imaging data.}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=9.5 cm]{bioimage.jpg}\n \\caption{Examples of the application of weakly supervised object localization or detection approaches in medical image analysis. The examples are from . }\n \\label{bio}\n\\end{figure}", "id": "6c796257-03d1-4c11-b23e-711e3eea5646", "level": "subsection", "origin_cites_number": 9, "parent_id": "5cef946e-16aa-4550-aeb5-c885a8308ec2", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Applications" ], [ "subsection", "Medical Imaging" ] ], "subsections": [], "title": "Medical Imaging" }, { "cite_extract_rate": 0, "cites": [], "content": "Remote sensing imagery analysis is one of the most widely studied applications based on weakly supervised object localization and detection, where the input images are usually of large scale and the annotation process tends to be very time-consuming (see Fig. \\ref{remote}).\nZhang et al. propose a saliency-based weakly supervised detector learning method to learn detectors for airplanes, vehicles, and airports from remote sensing images collected from different sensors.\nIn this work, a weakly supervised object detection benchmark dataset for remote sensing imagery analysis is developed.\nHan et al. and Zhou et al. introduce the Bayesian inference and negative bootstrapping methods, respectively, for learning effective weakly supervised detectors for remote sensing images.\nIn , Zhang et al. propose a coupled CNN method which combines a candidate region proposal network and a localization network to detect aircraft in images.\n {When applied to remote sensing imagery analysis, the target objects are usually very small in size, which would dramatically increase the localization and detection difficulty given only weak supervision. 
}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=9.5 cm]{remote.jpg}\n \\caption{Examples of the application of weakly supervised object localization or detection approaches in remote sensing imagery analysis. The examples are from . }\n \\label{remote}\n\\end{figure}", "id": "ec87da5b-525f-4f14-a7b7-8c787fb56bb5", "level": "subsection", "origin_cites_number": 8, "parent_id": "5cef946e-16aa-4550-aeb5-c885a8308ec2", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Applications" ], [ "subsection", "Remote Sensing Imagery Analysis" ] ], "subsections": [], "title": "Remote Sensing Imagery Analysis" }, { "cite_extract_rate": 0, "cites": [], "content": "We discuss the issues to be addressed in this field for future research.", "id": "d8feaee7-5d65-4327-96a9-38b968bfe1c4", "level": "section", "origin_cites_number": 0, "parent_id": "3a2d911b-3708-4f3a-880a-851136519e79", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Future Directions" ] ], "subsections": [ "e95954d7-ff46-4ac2-807f-3d34b6e59526", "abffd2fa-2c1a-4f41-8253-4df7d27f9686", "56c3e88b-fc74-49e6-aa73-cff57609f8ef", "47c758a2-7983-451c-ad2b-fa45ee0b2b84", "8b5b77db-9af6-4869-b487-c1496c6cc1b1" ], "title": "Future Directions" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 2269, 2323, 737 ], "content": "Weakly supervised object localization or detection methods can be easily formulated within the MIL framework.\nEarly methods in this field usually add prior knowledge or post regularization on the classic MIL models, such as LSVM , while the current research achieves breakthroughs by building deep MIL models . 
To further improve the weakly supervised learning performance, efforts should be made to introduce the most advanced ideas and techniques from the research field of MIL, such as the set-level problem , the key instance shift issue , and the scalability issue in MIL.\nFurther research towards more advanced MIL techniques would also bring helpful insights for the WSOL and WSOD in the future.", "id": "e95954d7-ff46-4ac2-807f-3d34b6e59526", "level": "subsection", "origin_cites_number": 9, "parent_id": "d8feaee7-5d65-4327-96a9-38b968bfe1c4", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Future Directions" ], [ "subsection", "Multiple Instance Learning" ] ], "subsections": [], "title": "Multiple Instance Learning" }, { "cite_extract_rate": 0.5, "cites": [ 7091, 2260, 2324, 1751, 8429, 8540 ], "content": "Another future direction is to combine multiple weakly supervised learning tasks into a unified learning framework. These tasks may include object detection , semantic segmentation , instance segmentation , 3D shape reconstruction , and depth estimation .\nEssentially, efforts for simultaneously accomplishing multiple aforementioned tasks have been made in the conventional fully supervised learning scenario , which has demonstrated that such a learning mechanism can bring helpful information from one task to the others.\nThe method proposed by Zhang et al. is an early attempt to implement such a weakly supervised multi-task learning mechanism, and the experimental results show that training object segmentation and 3D shape reconstruction models jointly indeed benefits both weakly supervised learning tasks. In a similar spirit to , Zhang et al. and Shen et al. establish a self-paced fine-tuning network and a cyclic guidance network to jointly learn object localization and segmentation models under weak supervision, respectively. 
{Under the multi-task weakly supervised learning scenario, one key problem is that the learning ambiguity of each individual task might be aggregated, and the imprecise prediction on one task might affect the learning of other tasks. To deal with this problem, one needs to disentangle the complex multi-task learning, separately learning each individual task first, and then leveraging the confident knowledge from each task to provide informative priors to guide the learning processes of the other tasks. }", "id": "abffd2fa-2c1a-4f41-8253-4df7d27f9686", "level": "subsection", "origin_cites_number": 12, "parent_id": "d8feaee7-5d65-4327-96a9-38b968bfe1c4", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Future Directions" ], [ "subsection", "Multi-Task Learning" ] ], "subsections": [], "title": "Multi-Task Learning" }, { "cite_extract_rate": 0.4, "cites": [ 2306, 8537 ], "content": "To address the issue of learning under uncertainty that inherently exists in the weakly supervised learning process, robust learning strategies will become one of the key techniques in the future. The goal is to alleviate the influence of noisy samples during the learning process. In implementation, such a learning strategy is usually achieved by selecting easy and confident training samples in the early learning stages while using hard and more ambiguous training samples in the late learning stages. Essentially, a number of recent methods have already introduced robust learning strategies into their learning frameworks. For example, Shi and Ferrari propose a curriculum learning strategy to feed training images into the WSOL learning loop in order from images containing bigger objects down to smaller ones. The training order is determined by the size of the object, which is estimated based on a regression model. Similarly, Zhang et al. 
design a zigzag learning strategy, where they first develop a criterion to automatically rank the localization difficulty of an image, and then learn the detector progressively by feeding examples with increasing difficulty. As can be seen, these methods are just intuitive ways to introduce a robust learning strategy into the weakly supervised object localization and detection frameworks, yet they have already achieved clear performance gains when compared with the conventional learning strategy. Along this line, Zhang et al. propose a self-paced curriculum learning framework for weakly supervised object detection. By integrating curriculum learning with self-paced learning , the established learning framework provides a more theoretically sound way to improve the learning robustness. However, a solid robust learning theory is still lacking in this research field.", "id": "56c3e88b-fc74-49e6-aa73-cff57609f8ef", "level": "subsection", "origin_cites_number": 5, "parent_id": "d8feaee7-5d65-4327-96a9-38b968bfe1c4", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Future Directions" ], [ "subsection", "Robust Learning Theory" ] ], "subsections": [], "title": "Robust Learning Theory" }, { "cite_extract_rate": 0.5714285714285711, "cites": [ 2329, 2328, 2326, 2280, 2325, 2330, 2327, 8541 ], "content": "Besides the conventional CNN models, it is also worth applying more advanced learning models to the learning process of the weakly supervised object detector. Here we give two examples. The first is deep reinforcement learning.\nAccording to~, biological vision systems are believed to have a sequential process with changing retinal fixations that gradually accumulate evidence of certainty when searching for or localizing objects. 
Several existing methods~ have also demonstrated that designing deep reinforcement learning frameworks to model such a sequential searching process can indeed help to address the object localization, detection, and tracking problems in the computer vision community. Thus, it is highly desirable, both biologically and computationally, to explore deep reinforcement learning models that facilitate weakly supervised object localization and detection systems in such a sequential searching process .\nThe second is generative adversarial learning. As we know, generative adversarial learning has been demonstrated to have advantages in unsupervised and semi-supervised learning scenarios . It can generate the desired data distribution based on very weak supervision, i.e., ``real'' or ``fake''. Such capacity endows generative adversarial learning with great potential for solving the weakly supervised object localization and detection problems. Although existing methods, such as , have already made efforts to introduce such a learning mechanism into weakly supervised object localization and detection, there is still much room for improvement along this research direction.", "id": "47c758a2-7983-451c-ad2b-fa45ee0b2b84", "level": "subsection", "origin_cites_number": 14, "parent_id": "d8feaee7-5d65-4327-96a9-38b968bfe1c4", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Future Directions" ], [ "subsection", "Reinforcement and Adversarial Learning" ] ], "subsections": [], "title": "Reinforcement and Adversarial Learning" }, { "cite_extract_rate": 0.533333333333333, "cites": [ 2332, 2289, 8538, 2334, 2307, 2331, 2333, 2309 ], "content": "From Table \\ref{table endtoend} and Table \\ref{table multinet}, we can observe that most of the current deep weakly supervised object detection methods have not introduced any prior knowledge into their learning frameworks. However, from our review on classic models (see Sec. 
\\ref{Classic Models}), prior knowledge actually plays an important role in preventing the weakly supervised learning process from drifting to trivial solutions. Considering this issue, some recent works utilize prior knowledge of saliency , objectness , shape , count , human action , human object interaction , and mask-out scoring in their frameworks. However, research towards building effective deep MIL frameworks (such as ones with prior knowledge distillation or cross domain adaptation ) to embed helpful prior knowledge into the weakly supervised learning process needs to be further explored in the future. In addition, the co-occurring patterns mined in co-saliency detection and object co-localization approaches can also be used as informative priors to guide the deep multiple instance learning process in weakly supervised object localization and detection.", "id": "8b5b77db-9af6-4869-b487-c1496c6cc1b1", "level": "subsection", "origin_cites_number": 15, "parent_id": "d8feaee7-5d65-4327-96a9-38b968bfe1c4", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Future Directions" ], [ "subsection", "Prior-guided Deep MIL" ] ], "subsections": [], "title": "Prior-guided Deep MIL" }, { "cite_extract_rate": 0, "cites": [], "content": "In this paper, we provide a comprehensive survey of the existing literature in the research field of weakly supervised object localization and detection. We start with the introduction of the definition of the task and the key challenges that make the weakly supervised learning process hard to implement. Then, we introduce the development history of this field, the taxonomy of methods for weakly supervised object localization and detection, and the relationship between different categories. 
After reviewing the existing literature in each category of methodology, we introduce the benchmark datasets and evaluation metrics that are widely used in this field, followed by a review of the applications of existing weakly supervised object localization and detection algorithms. Finally, we point out several future directions that may further promote the development of this research field.\n{\\small\n\\bibliographystyle{ieee}\n\\bibliography{WSsurvey}\n}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{dingwen_zhang}}]{Dingwen Zhang}\nreceived his Ph.D. degree from Northwestern Polytechnical University, Xi'an, China, in 2018. He is currently a full professor in the School of Automation, Northwestern Polytechnical University. From 2015 to 2017, he was a visiting scholar at the Robotics Institute, Carnegie Mellon University. His research interests include computer vision and multimedia processing, especially saliency detection, video object segmentation, and weakly supervised learning.\n\\vspace*{-30pt}\n\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{HJW}}]{Junwei Han}\n(M'12-SM'15) is a Professor with Northwestern Polytechnical University, Xi'an, China. He received his Ph.D. degree from Northwestern Polytechnical University in 2003. He was a Research Fellow at Nanyang Technological University, The Chinese University of Hong Kong, and the University of Dundee. His research interests include computer vision and brain imaging analysis. He has published over 100 papers in IEEE TRANSACTIONS and top tier conferences. He is currently an Associate Editor of IEEE Trans. on Neural Networks and Learning Systems, IEEE Trans. on Circuits and Systems for Video Technology, IEEE Trans. Cognitive and Developmental Systems, IEEE Trans. 
on Human-Machine Systems, Neurocomputing, and Machine Vision and Applications.\n\\vspace*{-30pt}\n\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{GC}}]{Gong Cheng}\nreceived the B.S. degree from Xidian University, Xi'an, China, in 2007, and the M.S. and Ph.D. degrees from Northwestern Polytechnical University, Xi'an, in 2010 and 2013, respectively. He is currently a Professor with Northwestern Polytechnical University. His main research interests are computer vision and pattern recognition.\n\\vspace*{-30pt}\n\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{mhyang}}]{Ming-Hsuan Yang}\nreceived the PhD degree in computer science from the University of Illinois at Urbana-Champaign, in 2000. He is a professor of Electrical Engineering and Computer Science at the University of California, Merced. He served as an associate editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence from 2007 to 2011, and is an associate editor of the International Journal of Computer Vision, the Computer Vision and Image Understanding, the Image and Vision Computing, and the Journal of Artificial Intelligence Research. He received the NSF CAREER award in 2012, and the Google Faculty Award in 2009. He is a Fellow of the IEEE and Senior Member of the ACM.\n\\vspace*{-30pt}\n\\end{IEEEbiography}\n\\end{document}", "id": "92209b0e-e020-4aa4-8af9-77a6bcf0b9af", "level": "section", "origin_cites_number": 0, "parent_id": "3a2d911b-3708-4f3a-880a-851136519e79", "prefix_titles": [ [ "title", "Weakly Supervised Object Localization and Detection: A Survey" ], [ "section", "Conclusions" ] ], "subsections": [], "title": "Conclusions" } ]
76
[ 2259, 209, 2260, 2261, 2262, 2263, 2267, 2269, 2270, 2266, 2264, 2265, 8537, 2268, 2272, 2271, 2274, 2273, 1223, 514, 8429, 2275, 895, 2278, 2276, 2279, 2277, 2282, 2297, 2288, 2289, 2291, 2286, 2294, 7089, 2285, 2293, 2287, 2283, 2299, 2296, 2284, 2292, 737, 2295, 2298, 2281, 2290, 2280, 8538, 2306, 2303, 2301, 802, 2302, 2309, 2300, 2304, 2308, 2307, 2310, 2305, 2312, 2311, 2314, 2315, 107, 7090, 687, 2313, 2319, 2317, 2316, 2318, 8539, 2321, 2322, 2320, 2323, 7091, 2324, 1751, 8540, 2329, 2328, 2326, 2325, 2330, 2327, 8541, 2332, 2334, 2331, 2333 ]
0.697224
[ "Manish Gupta", "Puneet Agrawal" ]
Compression of Deep Learning Models for Text: A Survey
2020
2020-08-12T10:42:14Z
cs.CL
In recent years, the fields of natural language processing (NLP) and information retrieval (IR) have made tremendous progress thanks to deep learning models like Recurrent Neural Networks (RNNs), Gated Recurrent Units (GRUs) and Long Short-Term Memory (LSTMs) networks, and Transformer~\cite{vaswani2017attention} based models like Bidirectional Encoder Representations from Transformers (BERT)~\cite{devlin2018bert}, Generative Pre-training Transformer (GPT-2)~\cite{radford2019language}, Multi-task Deep Neural Network (MT-DNN)~\cite{liu2019multi}, Extra-Long Network (XLNet)~\cite{yang2019xlnet}, Text-to-text transfer transformer (T5)~\cite{raffel2019exploring}, T-NLG~\cite{rosset2019turing} and GShard~\cite{lepikhin2020gshard}. But these models are humongous in size. On the other hand, real world applications demand small model size, low response times and low computational power wattage. In this survey, we discuss six different types of methods (Pruning, Quantization, Knowledge Distillation, Parameter Sharing, Tensor Decomposition, and Sub-quadratic Transformer based methods) for compression of such models to enable their deployment in real industry NLP projects. Given the critical need of building applications with efficient and small models, and the large amount of recently published work in this area, we believe that this survey organizes the plethora of work done by the `deep learning for NLP' community in the past few years and presents it as a coherent story.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "3efda6e0-cf0a-4f26-8cb1-3dde0497c32c", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ] ], "subsections": [ "57df7b91-17f0-42ac-b043-8a5e492f9515", "92ac6101-b828-4042-8708-2e0386e701bb", "c0c0e7e6-55dc-426b-8669-a8adfb8fe243", "2c5b1b2b-3f74-442d-8ee6-503425cf4207", "57903397-1a0f-4365-9fd9-a6bf95e76421", "a059247f-141a-43b7-a6a9-f3440e7622cd", "65404f84-cbf0-4721-8f05-178f8230aa55", "d06ca9cc-bcd7-4f53-9be8-ee86f8754664" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:overview}\nIn this survey, we discuss compression methods using pruning, quantization, knowledge distillation, parameter sharing, tensor decomposition, and sub-quadratic Transformers. \nThe most obvious way to reduce model size is to sparsify weight matrices. Pruning methods differ based on what is pruned and the actual logic used to prune. Given a matrix, one can prune \n\\begin{itemize}\n \\item Some weight entries\n \\item Rows or columns (i.e., neurons)\n \\item Weight blocks\n \\item Attention heads (in case of Transformer based methods)\n \\item Layers or a combination of the structures.\n\\end{itemize}\nOther interesting aspects of pruning methods include the following.\n\\begin{itemize}\n \\item How to decide which weights/neurons/blocks/heads to prune?\n \\item Should you prune large networks or build small networks?\n \\item Should you do one-shot pruning versus iterative/gradual pruning? \n \\item How does regularization help when pruning?\n\\end{itemize}\n We discuss these aspects of pruning based methods in Section~\\ref{sec:pruning}.\nBesides removing the weights themselves, another way to reduce the size of weight matrices is to reduce the number of bits needed to represent each weight. Typically, weights are stored as 32-bit floating-point values. 
In an extreme case, weights can be quantized to two values (binary, 1 bit). But other popular ways include quantization to three values (ternary) or multiple bits. Quantization can be uniform or non-uniform. Quantization methods can be deterministic or stochastic. Quantization can be performed in a loss-aware or loss-unaware manner. Quantization bin ranges can be trained versus tuned. Finally, the level of quantization needs to be different across layers of RNNs, LSTMs or Transformers to attain a favorable model size versus accuracy tradeoff. We discuss these aspects of quantization based methods in Section~\\ref{sec:quantization}.\nA third way of doing model compression is using knowledge distillation (also broadly known as student-teacher networks). In such methods, the idea is to first train a deep teacher model using labeled data, and then transfer ``dark knowledge'' from the teacher to train a shallow student network. Thus, the student model is trained to mimic a pre-trained, larger teacher. After the student is trained, it is deployed while the teacher network can be thrown away. Distillation methods vary based on the following design choices.\n\\begin{itemize}\n \\item Different types of teacher model\n \\item Different types of loss function like squared error between the logits of the models, KL divergence between the predictive distributions, or some other measure of agreement between the model predictions\n \\item Different choices for what dataset the student model trains on (a large unlabeled dataset, a held-out data set, or the original training set)\n \\item Different choices for what to mimic from the teacher -- teacher's class probabilities, teacher's feature representation.\n \\item Learn from whom -- teacher, teacher assistant, or other fellow students. 
\n\\end{itemize}\nWe discuss these aspects of knowledge distillation based methods in Section~\\ref{sec:knowledgeDistillation}.\nAnother way that reduces overall weights is to have parameters shared across multiple weight structures. Broadly, the method is called parameter sharing. Methods differ depending on the following aspects.\n\\begin{itemize}\n \\item Which parameters are shared\n \\item Technique used to share parameters\n \\item The level at which sharing is performed\n\\end{itemize}\nWe discuss these aspects of parameter sharing based methods in Section~\\ref{sec:parameterSharing}.\nYet another way to avoid large matrices is to approximate them using a combination of smaller matrices. Such tensor decomposition methods for model compression factorize large tensors into multiple smaller components. Methods differ based on the following aspects.\n\\begin{itemize}\n \\item Type of factorization technique\n \\item Matrices being factorized\n \\item The property of the weight matrix being exploited\n\\end{itemize}\nWe discuss these aspects of tensor decomposition methods in Section~\\ref{sec:matrixDecomposition}.\nIn Transformer based models, besides the model size, latency is a concern because of quadratic complexity in terms of the input sequence size. Also, the RAM needed by Transformer models grows quadratically with the input sequence size. Hence, very recently, there have been several efforts on designing Transformers with sub-quadratic complexity. Some of these methods are super-linear while many are linear. Linear complexity methods use various techniques for enforcing linearity -- the broad idea is to compute a transformed representation for every token using attention over a fixed small number of other tokens. Methods differ in terms of defining the set of other tokens to be used to compute a transformed representation for the current token. 
We discuss such methods in Section~\\ref{sec:linearTransformers}.", "id": "57df7b91-17f0-42ac-b043-8a5e492f9515", "level": "section", "origin_cites_number": 0, "parent_id": "3efda6e0-cf0a-4f26-8cb1-3dde0497c32c", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Model Compression Methods: Overview" ] ], "subsections": [], "title": "Model Compression Methods: Overview" }, { "cite_extract_rate": 0.5, "cites": [ 844 ], "content": "\\label{sec:pruning} \nThe first proposed methods for model compression were based on pruning. Pruning can be done in two ways: structured versus unstructured. In unstructured pruning, individual weight connections are removed from a network by setting them to 0. One can prune away weights from a weight matrix depending on various criteria (e.g., prune away low magnitude weights). Such unstructured pruning methods lead to sparse matrices and need special sparse matrix manipulation libraries at inference time. Hence, various structured pruning methods have also been proposed which aim to prune away structures like neurons, weight matrix blocks, attention heads or layers. Fig.~\\ref{fig:pruning} provides a broad overview of various pruning styles. Fig~\\ref{fig:pruning}(B) depicts unstructured pruning. Fig~\\ref{fig:pruning}(C)-(G) shows various structured pruning methods. In this section, we discuss unstructured weight pruning methods in Section~\\ref{subsec:PruningWeights} and structured pruning methods in Sections~\\ref{subsec:PruningNeurons}-\\ref{subsec:PruningHeadsLayers}.\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\textwidth]{pruning}\n \\caption{Different Types of Pruning: (A) represents no pruning. Filled cells represent pruned entries. (B), (H) and (I) are unstructured pruning methods discussed in Section~\\ref{subsec:PruningWeights}. 
The remaining are structured pruning methods: (C) is discussed in Section~\\ref{subsec:PruningNeurons}, (D) in Section~\\ref{subsec:PruningBlock}, (E), (F) and (G) in Section~\\ref{subsec:PruningHeadsLayers}.}\n \\label{fig:pruning}\n\\end{figure*}\nIn pruning, the main idea is to grow a large model and then prune away weights to end up with a much smaller but effective model. This is inspired by the following biological observation. Trillions of synapses are generated in the human brain during the first few months after birth. At one year old, synapse count peaks at 1000 trillion. And then natural pruning begins to occur. A ten-year-old child has nearly 500 trillion synapses. This `pruning' mechanism removes redundant connections in the brain~. \nOne natural question is: should you prune large networks or build small dense networks? Pruning involves extra processing, and sparse matrices need special handling. Can we avoid it by training smaller models? Zhu et al.~ experimented with models of various sizes, with and without pruning, on stacked LSTM models for language modeling, and seq2seq models for Neural Machine Translation (NMT). 
They found that large-sparse models consistently outperform small-dense models and achieve up to a 10x reduction in the number of non-zero parameters with minimal loss in accuracy.", "id": "92ac6101-b828-4042-8708-2e0386e701bb", "level": "section", "origin_cites_number": 2, "parent_id": "3efda6e0-cf0a-4f26-8cb1-3dde0497c32c", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Pruning" ] ], "subsections": [ "2362b1c7-732c-4073-87eb-1e3f045414b0", "56b21370-ac94-4d19-9ed7-985faad42cc8", "49a2e5f4-1aec-4e81-ac60-4671c95948a6", "10a33a73-e952-454b-b5ce-09086a71de83", "80f88ea7-37a5-4100-9fe4-121a616869c9" ], "title": "Pruning" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec:PruningWeights}\nThe weights to be pruned can be chosen using two heuristics: (1) using Hessian matrix computation, or (2) using the magnitude of the weights. Further, magnitude based pruning methods could do pruning in one shot (typically statically after training is done), or iteratively (also called dynamic or gradual pruning), or iteratively with pruning and densification. Accordingly, there are four main ways of performing unstructured weight pruning: (1) Hessian based methods, (2) magnitude pruning, (3) iterative magnitude pruning, and (4) iterative magnitude pruning and densification. 
In this subsection, we discuss these methods in detail.", "id": "2362b1c7-732c-4073-87eb-1e3f045414b0", "level": "subsection", "origin_cites_number": 0, "parent_id": "92ac6101-b828-4042-8708-2e0386e701bb", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Pruning" ], [ "subsection", "Pruning Weights" ] ], "subsections": [ "17e91dfe-059c-4ae9-8bb2-0fdcc9c148d7", "db3c2850-1885-440e-a725-4d53c91cb9a3", "bdf39c62-744f-439a-a336-cc2914c01b9e", "8ead8d95-de3b-4322-8928-4d2d5dd4066f" ], "title": "Pruning Weights" }, { "cite_extract_rate": 0, "cites": [], "content": "In their seminal work (Optimal Brain Damage or OBD) on proposing weight pruning as a method for model compression, LeCun et al.~ defined the saliency of a weight as the change in the objective function $E$ caused by deleting that parameter. Using a Taylor series and making multiple assumptions, they propose to use the following as a measure of saliency of weight $u_i$.\n\\begin{eqnarray}\n s(u_i)=\\frac{1}{2} h_{ii} u_i^2\n\\end{eqnarray}\n\\noindent where $h_{ii}=\\frac{\\partial^2 E}{\\partial u_i^2}$ is the $i^{th}$ element on the diagonal of the Hessian matrix. Weights with low saliency can be pruned and the pruned network can be retrained to adjust the remaining weights. \nThe procedure for computation of the diagonal of the Hessian is very similar to the back-propagation algorithm used for computing the first derivatives. Hence, computing the diagonal of the Hessian is of the same order of complexity as computing the gradient. \nOBD ignores cross terms in the Hessian matrix. But on most real datasets, the Hessian is strongly non-diagonal. Hence, to avoid pruning of important weights, Hassibi et al.~ proposed a method called Optimal Brain Surgeon (OBS) which considers cross terms as well. 
Using a similar Taylor expansion of $E$ with respect to weight $w_i$, the saliency of the weight is computed as follows.\n\\begin{eqnarray}\nL_i=\\frac{1}{2}\\frac{w_i^2}{[H^{-1}]_{ii}} \n\\end{eqnarray}\nComputing $H^{-1}$ is difficult. Hence, they provide a faster recursion relation for calculating $H^{-1}$ from training data and structural information of the network. Also, unlike other methods (like OBD or magnitude pruning), OBS does not demand (typically slow) retraining after the pruning of a weight. In every step, we delete the $w_i$ with minimum $L_i$ and update all weights as follows.\n\\begin{eqnarray}\n \\delta w=-\\frac{w_i}{[H^{-1}]_{ii}}H^{-1}e_i\n\\end{eqnarray}\n\\noindent where $e_i$ is the unit vector in weight space corresponding to (scalar) weight $w_i$. Unfortunately, these methods (OBD and OBS) are computationally prohibitive as second derivative (i.e., Hessian) computations are expensive.", "id": "17e91dfe-059c-4ae9-8bb2-0fdcc9c148d7", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "2362b1c7-732c-4073-87eb-1e3f045414b0", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Pruning" ], [ "subsection", "Pruning Weights" ], [ "subsubsection", "Hessian based Methods" ] ], "subsections": [], "title": "Hessian based Methods" }, { "cite_extract_rate": 0.5, "cites": [ 4478 ], "content": "A more computationally feasible method for pruning connections and relearning weights based solely on the magnitude of the original weights is to simply prune away weights with small magnitudes. The idea was first proposed by Han et al.~. Pruning away low-magnitude weights makes the matrix sparse. Sparse matrices can be stored in Compressed Sparse Row/Column (CSR/CSC) formats. Further space can be saved by storing the index difference instead of the absolute position, and encoding this difference using a small fixed number of bits. 
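As a concrete illustration of this idea (a minimal sketch, not the implementation from Han et al.; the function name is hypothetical and NumPy is assumed), magnitude pruning of a weight matrix can be written as:

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Zero out the `fraction` of entries with the smallest absolute values.
    In practice the surviving weights would then be stored in a sparse
    CSR/CSC format, with index differences encoded in a few bits."""
    k = int(fraction * weights.size)       # number of entries to prune
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0   # ties may prune slightly more than k
    return pruned

w = np.array([[0.1, -2.0, 0.3],
              [0.05, 3.0, -0.2]])
pruned = magnitude_prune(w, 0.5)   # keeps only the 3 largest-magnitude weights
```

After such static pruning, the pruned network is typically retrained so that the surviving weights can compensate for the removed connections.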
See et al.~ experimented with these pruning methods on encoder-decoder LSTM NMT (neural machine translation) models. They perform magnitude pruning on all weight matrices of a 4-layer LSTM. They find that higher layers, attention and softmax weights are the most important, while lower layers and the embedding weights hold a lot of redundancy. At the lower layers the parameters for the input are most crucial, but at higher layers the parameters for the gates also become important. These methods typically have a target pruning percent as a hyper-parameter, and pruning is either performed statically (after training the full model) or dynamically (while training itself). Retraining the sparse pruned network helps in improving accuracy.\nIn a typical encoder-decoder LSTM model, there are these weight classes: source embedding weights, target embedding weights, source layer weights, target layer weights, attention weights and softmax weights. An important consideration related to magnitude pruning is: how do we distribute the pruning over these different weight classes of a model, given a target $x$\\% pruning? Three ways suggested by See et al.~ include class-blind, class-uniform and class-distribution. In the class-blind way, we take all parameters, sort them by magnitude and prune the $x$\\% with smallest magnitude, regardless of the weight class. So some classes are pruned proportionally more than others. In the class-uniform way, within each class, we sort the weights by magnitude and prune the $x$\\% with smallest magnitude. So all classes have exactly $x$\\% of their parameters pruned. In the class-distribution scheme, for each class $c$, weights with magnitude less than $\\lambda\\sigma_c$ are pruned. Here, $\\sigma_c$ is the standard deviation of that class and $\\lambda$ is a universal parameter chosen such that in total, $x$\\% of all parameters are pruned. 
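To make the three schemes concrete, here is a minimal sketch (hypothetical helper, NumPy assumed; $\lambda$ for the class-distribution scheme is found by simple bisection, and each class is assumed large enough that at least one of its weights gets pruned):

```python
import numpy as np

def prune_masks(classes, x):
    """Return keep-masks (True = keep) per weight class for a target pruning
    fraction x, under the class-blind, class-uniform and class-distribution
    schemes. `classes` maps a class name to its 1-D weight array."""
    all_abs = np.concatenate([np.abs(w).ravel() for w in classes.values()])
    k = int(x * all_abs.size)                    # total number of weights to prune

    # Class-blind: one global magnitude threshold across all classes.
    t = np.partition(all_abs, k - 1)[k - 1]
    blind = {c: np.abs(w) > t for c, w in classes.items()}

    # Class-uniform: prune x% of the weights inside every class.
    uniform = {}
    for c, w in classes.items():
        kc = int(x * w.size)
        tc = np.partition(np.abs(w).ravel(), kc - 1)[kc - 1]
        uniform[c] = np.abs(w) > tc

    # Class-distribution: prune |w| <= lambda * sigma_c, with lambda chosen by
    # bisection so that x% of all parameters are pruned overall.
    lo, hi = 0.0, 10.0                           # assumes max |w|/sigma_c < 10
    for _ in range(60):
        mid = (lo + hi) / 2
        pruned = sum(int((np.abs(w) <= mid * w.std()).sum())
                     for w in classes.values())
        lo, hi = (mid, hi) if pruned < k else (lo, mid)
    dist = {c: np.abs(w) > hi * w.std() for c, w in classes.items()}
    return blind, uniform, dist

classes = {"embedding": np.array([0.1, 0.2, 5.0, 6.0]),
           "softmax": np.array([3.0, 4.0, 0.3, 0.4])}
blind, uniform, dist = prune_masks(classes, 0.5)   # prune half the parameters
```

The returned boolean masks mark the weights each scheme would keep; in this toy example all three schemes happen to retain the same large-magnitude weights.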
Class-blind pruning is the simplest and adheres to the principle that pruning weights (or equivalently, setting them to zero) is least damaging when those weights are small, regardless of their locations in the architecture. Class-uniform pruning and class-distribution pruning both seek to prune proportionally within each weight class, either absolutely, or relative to the standard deviation of that class. They observe that class-blind pruning outperforms both other schemes.", "id": "db3c2850-1885-440e-a725-4d53c91cb9a3", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "2362b1c7-732c-4073-87eb-1e3f045414b0", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Pruning" ], [ "subsection", "Pruning Weights" ], [ "subsubsection", "Magnitude Pruning Methods" ] ], "subsections": [], "title": "Magnitude Pruning Methods" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 4331, 4479, 844 ], "content": "Typically, it has been seen that rather than pruning in one-shot, it is a good idea to prune gradually over epochs. This way of pruning is called iterative or gradual pruning (see Fig.~\\ref{fig:pruning}(H)). For starting proportion $x$\\% and ending proportion $y$\\%, iterative magnitude pruning procedure prunes $x$\\% of each of the weights, does re-training, and then prunes $(y-x)/T$\\% of the weights every $K$ iterations. $T$ is the number of times, pruning is done. Sometimes, pruning is started after few warmup iterations have already been performed. Magnitude pruning has been seen to be very effective with regularization ($L_1/L_2$) while training. Dropouts also help in effective pruning. \nIn some pruning methods, a weight once pruned cannot be a part of the network in future iterations. On the other hand, other methods do not modify the gradients of a pruned weight in the back-propagation step. 
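The difference between the first two schemes can be sketched as follows (a toy illustration with made-up weights and class names; class-distribution is analogous, with a per-class threshold $\lambda\sigma_c$):

```python
import numpy as np

def prune_class_blind(classes, frac):
    """Class-blind: sort ALL weights together and prune the frac with
    smallest magnitude, regardless of which class they belong to."""
    all_w = np.concatenate([w.ravel() for w in classes.values()])
    k = int(frac * all_w.size)
    thresh = np.sort(np.abs(all_w))[k - 1]
    return {n: np.where(np.abs(w) <= thresh, 0.0, w) for n, w in classes.items()}

def prune_class_uniform(classes, frac):
    """Class-uniform: prune the frac with smallest magnitude inside each
    class separately, so every class loses exactly frac of its weights."""
    out = {}
    for n, w in classes.items():
        k = int(frac * w.size)
        thresh = np.sort(np.abs(w).ravel())[k - 1]
        out[n] = np.where(np.abs(w) <= thresh, 0.0, w)
    return out

classes = {'embedding': np.array([0.1, 0.2, 0.3, 0.4]),
           'softmax':   np.array([1.0, 2.0, 3.0, 4.0])}
blind = prune_class_blind(classes, 0.5)      # wipes out the embedding class
uniform = prune_class_uniform(classes, 0.5)  # prunes 50% inside each class
```

Note how class-blind pruning concentrates all pruning on the small-magnitude embedding class, while class-uniform pruning spreads it evenly.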
In that case, it is possible for the updates of a pruned weight to be larger than the threshold of that layer, and then the weight will be involved in the forward pass again. Also, in every pruning iteration, we could either use a fixed threshold~ or monotonically increase it~. \nIn the case of gradual pruning~, where the pruning threshold $\epsilon$ is monotonically increased, $\epsilon$ is determined as follows in every iteration $i$. Let $f$ be the number of iterations after which $\epsilon$ is updated. Also, after a few warmup iterations, weights are sorted using absolute values and we pick the weight corresponding to the $90^{th}$ percentile as $q$. The pruning threshold $\epsilon$ is increased in two stages. In the first stage (which starts at start iteration $s$ and continues until ramp iteration $r$), we use $\theta$ as the initial slope to prune weights. In the second stage (which starts at ramp iteration $r$ and continues until end iteration $e$), we use $\phi$ as the ramp slope to change the rate of pruning. Typically, $\phi$ is set to $1.5\theta$ where $\theta$ is calculated as follows.\n\begin{eqnarray}\n \theta=\frac{2qf}{2(r-s)+3(e-r)}\n\end{eqnarray}\nThus, from iteration $s$ to $r$, we set the pruning threshold as follows.\n\begin{eqnarray}\n\epsilon=\frac{\theta(i-s+1)}{f} \n\end{eqnarray}\nFrom iterations $r+1$ to $e$, we set the pruning threshold as follows.\n\begin{eqnarray}\n\epsilon=\frac{\theta(r-s+1)+\phi(i-r+1)}{f}\n\end{eqnarray}\nTypically, biases are not pruned since they are much fewer in number. Overall, RNN/LSTM model size can be reduced by 90\% with a speed-up of around 2x to 7x using gradual pruning, with no deterioration in accuracy. Also, layers closer to the input are pruned more aggressively compared to the final layers.\nAnother way of performing iterative pruning is to set a pruning target per iteration~. In this scheme, we start with an initial sparsity value $s_0$.
To achieve a final sparsity value of $s_f$ after $n$ pruning steps with pruning frequency $f$, the pruning target in iteration $i$ can be computed as follows.\n\begin{eqnarray}\ns_i=s_f+(s_0-s_f)\left(1-\frac{i}{nf}\right)^3\n\end{eqnarray}\nThus, the sparsity of the network is gradually increased while allowing the network training steps to recover from any pruning-induced loss in accuracy. We prune the network rapidly in the initial phase when the redundant connections are abundant, and gradually reduce the number of weights being pruned each time as fewer and fewer weights remain in the network.\nCheong et al.~ found that iterative pruning leads to poor results when pruning Transformer models like BERT. Guo et al.~ found that there are two problems with pruning, especially when done with regularization. \n\begin{itemize}\n \item The larger weights $w_j$ are penalized more heavily than smaller weights $w_i$ in $L_1$ regularization, which violates the original intention of weight pruning, ``removing the unimportant connections''. \n \item Direct optimization of a regularization penalty term causes divergence from the original loss function and has a negative effect on the effectiveness of gradient-based updates.\n\end{itemize}\nThey propose to perform reweighted $L_1$ minimization where the $\alpha_i>0$ are inversely proportional to the magnitude of the corresponding weights $|w_i|$. Thus, they solve the following optimization problem:\n\begin{eqnarray}\n \min_w f(w)+\gamma \sum_i \alpha_i |w_i|\n\end{eqnarray}\n\noindent where $f(w)$ is the original loss function for the network. This optimization is solved using a reweighted proximal pruning (RPP) method (which depends on proximal operators).
RPP decouples the goals of high sparsity from minimizing loss, and hence leads to improved accuracy even with high levels of pruning for BERT.", "id": "bdf39c62-744f-439a-a336-cc2914c01b9e", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "2362b1c7-732c-4073-87eb-1e3f045414b0", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Pruning" ], [ "subsection", "Pruning Weights" ], [ "subsubsection", "Iterative Magnitude Pruning Methods" ] ], "subsections": [], "title": "Iterative Magnitude Pruning Methods" }, { "cite_extract_rate": 0.5, "cites": [ 4480 ], "content": "Further, the effectiveness of pruning can be improved by performing pruning and densification~ alternately across multiple iterations (see Fig.~\\ref{fig:pruning}(I)). There are two ways of doing this. In the first method~, in each iteration, either pruning is performed or densification. The sparse training regularizes the model, and the dense training restores the pruned weights, increasing the model capacity without overfitting. Sparsification helps the optimizer escape saddle points, and leads to regularized training which converges to a significantly better minima. In the second method~, in every iteration some dormant weights can reappear in the network while other active ones can get pruned out. A dormant $w\\in W$ is activated iff $|w.grad|$ is larger than the $(100\\alpha)^{th}$ percentile of all elements in $|W.grad|$. A $w\\in W$ is removed iff $|w|$ is smaller than the $(100\\beta)^{th}$ percentile of all elements in $|W|$. 
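As an illustration of the gradual sparsity schedule $s_i=s_f+(s_0-s_f)\left(1-i/(nf)\right)^3$ discussed earlier, here is a minimal sketch (variable names are ours):

```python
def sparsity_target(i, s0, sf, n, f):
    """Gradual sparsity schedule: ramp sparsity from s0 to sf over n
    pruning steps performed every f iterations (cubic interpolation)."""
    return sf + (s0 - sf) * (1.0 - i / (n * f)) ** 3

# ramp from 0% to 90% sparsity over n=10 steps, one step every f=100 iterations
schedule = [sparsity_target(i, 0.0, 0.9, 10, 100) for i in range(0, 1001, 100)]
```

The cubic exponent makes the schedule rise steeply at first (when redundant connections are abundant) and flatten near the end.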
$\\alpha$ and $\\beta$ refer to growth ratio, and pruning ratio, respectively.", "id": "8ead8d95-de3b-4322-8928-4d2d5dd4066f", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "2362b1c7-732c-4073-87eb-1e3f045414b0", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Pruning" ], [ "subsection", "Pruning Weights" ], [ "subsubsection", "Iterative Magnitude Pruning and Densification" ] ], "subsections": [], "title": "Iterative Magnitude Pruning and Densification" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec:PruningNeurons}\nIt is difficult to implement unstructured pruning practically since, at inference time, special support is needed for matrix multiplication in the sparse space. Pruning away neurons leads to removal of a row or a column from a weight matrix, thereby avoiding sparse matrix handling (see Fig.~\\ref{fig:pruning}(C)). However, compared to pruning weights, pruning neurons is less flexible since we need to find entire rows/columns for deletion. In this section, we discuss ways of determining neurons that can be pruned.", "id": "56b21370-ac94-4d19-9ed7-985faad42cc8", "level": "subsection", "origin_cites_number": 0, "parent_id": "92ac6101-b828-4042-8708-2e0386e701bb", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Pruning" ], [ "subsection", "Pruning Neurons" ] ], "subsections": [ "60bed51b-08a1-4c61-9777-2265a58630d5", "e7c0d686-3915-4133-8281-2cebccd6dd52" ], "title": "Pruning Neurons" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 4481, 4482 ], "content": "He et al.~ proposed three node importance functions to determine importance score for neurons. \n\\begin{itemize}\n \\item Entropy: For a neuron $i$, let $a_i$ ($d_i$) be the \\#instances with node output $> (or\\leq)$ 0.5 for binary classification with a sigmoid activation. 
Then the entropy for node $i$ can be written as follows.\n \begin{eqnarray}\n \text{Entropy}(i)=\frac{d_i}{a_i+d_i}\log_2 \frac{d_i}{a_i+d_i}+\frac{a_i}{a_i+d_i}\log_2 \frac{a_i}{a_i+d_i} \n \end{eqnarray}\n The intuition is that if one node's outputs are almost identical on all training data, these outputs do not generate variations to later layers and consequently the node may not be useful. \n \item Output-weights norm (onorm): average $L_1$-norm of the weights of its outgoing links.\n \item Input-weights norm (inorm): average $L_1$-norm of the weights of its incoming links.\n\end{itemize}\nAll the neurons are sorted by their scores, and nodes with lower importance values are removed. In most cases, onorm has been found to be the best among these importance functions. \nSpecial regularizers have also been proposed to force neurons to push either all incoming or all outgoing connection weights towards zero~. Specifically, for handling incoming connections, the following two regularizers are popular.\n\begin{itemize}\n \item $L_2$ norm on weight matrix $W$ defined as follows.\n \begin{eqnarray}\n \sum_i ||W_{i:}||_2=\sum_i \left(\sum_j W^2_{ij}\right)^{1/2}\n \end{eqnarray}\n This puts equal pressure on each row, but within each row, the larger values contribute more, and therefore there is more pressure on larger values towards zero.
\n \\item $L_\\infty$ norm on weight matrix $W$ defined as follows.\n \\begin{eqnarray}\n \\sum_i ||W_{i:}||_\\infty=\\sum_i \\max_j |W_{ij}|\n \\end{eqnarray}\n This puts equal pressure on each row, but within each row, only the maximum value (or values) matter, and therefore the pressure towards zero is entirely on the maximum value(s).\n\\end{itemize}\nSimilar regularizers can easily be defined for outgoing connections as well.", "id": "60bed51b-08a1-4c61-9777-2265a58630d5", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "56b21370-ac94-4d19-9ed7-985faad42cc8", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Pruning" ], [ "subsection", "Pruning Neurons" ], [ "subsubsection", "Removing Low Importance Neurons" ] ], "subsections": [], "title": "Removing Low Importance Neurons" }, { "cite_extract_rate": 1, "cites": [ 4483 ], "content": "Consider a simple network with one hidden layer with $n$ neurons. Thus, the output can be computed as follows.\n\\begin{eqnarray}\nz=a_1 h(W_1^TX)+a_2 h(W_2^TX)+...+a_n h(W_n^TX)\n\\end{eqnarray}\n\\noindent where $a_i$ and $W_i$ indicate weights. In case $W_1==W_2$, $h(w_1^TX)=h(w_2^TX)$. Thus, we can compute output as follows.\n\\begin{eqnarray}\nz=(a_1+a_2) h(W_1^TX)+0. h(W_2^TX)+...+a_n h(W_n^TX)\n\\end{eqnarray} \nIn general, whenever two weight sets ($W_1$, $W_2$) are equal, one of them can effectively be removed. This should be done with a surgery step, i.e., we need to alter the co-efficient $a_1$ to $a_1+a_2$. Of course, for many pairs of weight sets (i.e., neurons), $W_1$ and $W_2$ are not exactly the same. Hence, Srinivas et al.~ proposed this 3 step method for redundant neuron identification and removal. 
\n\\begin{itemize}\n \\item Compute saliency $s_{ij}$ for all possible neuron pairs (i, j) as follows.\n \\begin{eqnarray}\n s_{ij}=\\langle a_j^2 \\rangle||\\epsilon_{ij}||_2^2\n \\end{eqnarray} \n \\noindent where $\\langle a_j^2 \\rangle$ denotes the average of the quantity over all output neurons. Let $S$ be the matrix with all $s_{ij}$ values. \\item Pick the indices $(i',j')$ corresponding to the minimum $s_{ij}$. Delete the $j'$ neuron, and update $a_i'$ as follows.\n \\begin{eqnarray}\n a_i'\\leftarrow a_i'+a_j' \n \\end{eqnarray}\n \\item Update $S$ by removing the $j'^{th}$ column and row, and updating the $i'^{th}$ column (to account for the updated $a_i'$).\n\\end{itemize}", "id": "e7c0d686-3915-4133-8281-2cebccd6dd52", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "56b21370-ac94-4d19-9ed7-985faad42cc8", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Pruning" ], [ "subsection", "Pruning Neurons" ], [ "subsubsection", "Removing Redundant Neurons" ] ], "subsections": [], "title": "Removing Redundant Neurons" }, { "cite_extract_rate": 0.5, "cites": [ 7826 ], "content": "\\label{subsec:PruningBlock}\nIn weight pruning, irregularity of sparse matrices limits the maximum performance and energy efficiency achievable on hardware accelerators. Pruning neurons avoids sparse matrix issues but is limited in term of overall pruning possible. Hence, block based pruning methods were introduced (see Fig.~\\ref{fig:pruning}(D)).\nBlock-sparse formats store blocks contiguously in memory reducing irregular memory accesses. If the maximum magnitude weight of a block is below the current threshold, we set all the weights in that block to zeros. Similar to iterative weight pruning, block pruning~ can also be done iteratively using a monotonically growing threshold. Any blocks that had been zeroed out are held at zero even after pruning has ended resulting in a sparse model at the end of training. 
Just like weight pruning (as discussed in Section~\\ref{subsec:PruningWeights}), the start slope $\\theta$ and ramp slope $\\phi$ determine the rate at which the threshold increases. For block pruning, we need to modify the start slope to account for the number of elements in a block ($N_b$). Thus, the start slope for block pruning is typically set as follows.\n\\begin{eqnarray}\n\\theta_b=\\theta\\times\\sqrt[\\leftroot{-2}\\uproot{2}4]{N_b} \n\\end{eqnarray}\nFurther, to enable effective removal of blocks, Narang et al.~ propose group Lasso regularization method. Group lasso is a type of weight regularization that works on groups of weights and can zero out all the weights in a group. For each block, we add a loss term proportional to the $L_2$ norm of the block. Thus, we optimize for the following.\n\\begin{eqnarray}\n\\min_w f(w)+\\lambda_g \\sum_{g=1}^{G} ||w^{(g)}||_2 \n\\end{eqnarray}\nWhen we combine group lasso with block pruning, group lasso guides the selection of blocks to prune. Group lasso regularization is applied to coincide with the pruning schedule, i.e., we turn off regularization when the pruning schedule ends. Typically, inducing block sparsity with 4x4 blocks in vanilla RNNs and GRUs works well, compared to larger block sizes. Larger blocks require lower sparsity to maintain similar accuracy. \nUnfortunately, it becomes challenging to maintain the same model accuracy when block sparsity is applied. Also, block sizes (i.e., pruning granularity) are application-sensitive, making it another hyper-parameter to tune. To avoid these problems, Cao et al.~ proposed a new method called Bank-Balanced Sparsity (BBS). BBS splits each weight matrix row into multiple equal-sized banks, and adopts fine-grained pruning to each bank independently to obtain identical sparsity among banks. Each bank has the same number of non-zero values. For example, retaining top two weights in each bank of size 4 implies a sparsity of 50\\%. 
We apply the BBS pruning method iteratively to a pre-trained network, and fine-tune the network after each pruning iteration to restore the model accuracy. BBS achieves almost the same model accuracy as unstructured sparsity and significantly outperforms block sparsity when pruning weights at the same sparsity level. BBS is also amenable to FPGA (Field Programmable Gate Arrays) acceleration because it inherently provides a balanced matrix partitioning for parallel computing.", "id": "49a2e5f4-1aec-4e81-ac60-4671c95948a6", "level": "subsection", "origin_cites_number": 2, "parent_id": "92ac6101-b828-4042-8708-2e0386e701bb", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Pruning" ], [ "subsection", "Pruning Blocks" ] ], "subsections": [], "title": "Pruning Blocks" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec:PruningHeadsLayers}\nBesides neurons and blocks, for Transformer based models, structured pruning can also be applied to attention heads and entire layers. In this section, we discuss such methods.", "id": "10a33a73-e952-454b-b5ce-09086a71de83", "level": "subsection", "origin_cites_number": 0, "parent_id": "92ac6101-b828-4042-8708-2e0386e701bb", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Pruning" ], [ "subsection", "Pruning Heads and Layers" ] ], "subsections": [ "bc2b6e76-0a87-4b66-8b3c-2bc732d0a821", "c4355812-9f8a-4da4-86ff-8739c39b93e8", "7dc1a143-838c-41ca-90a0-ebe43039cc74" ], "title": "Pruning Heads and Layers" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 4485, 4484 ], "content": "BERT-base model consists of 12 layers each with 12 attention heads. Similarly, a typical NMT encoder-decoder Transformer with 6 layers each for encoder as well as decoder contains 16 attention heads per layer. Michel et al.~ found that majority of attention heads can be removed without deviating too much from the original score. 
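The bank-balanced pruning of a single weight row can be sketched as follows (our simplification of the idea; bank count and keep ratio are illustrative):

```python
import numpy as np

def bbs_prune_row(row, n_banks, keep_per_bank):
    """Bank-Balanced Sparsity: split a weight row into equal-sized banks
    and keep only the largest-magnitude weights inside each bank, so
    every bank ends up with the same number of non-zeros."""
    banks = np.split(row.copy(), n_banks)
    for b in banks:
        drop = np.argsort(np.abs(b))[:len(b) - keep_per_bank]
        b[drop] = 0.0          # zero the smallest-magnitude weights in this bank
    return np.concatenate(banks)

row = np.array([0.9, -0.1, 0.2, -0.8, 0.05, 0.6, -0.3, 0.02])
pruned = bbs_prune_row(row, n_banks=2, keep_per_bank=2)   # 50% sparsity
```

Because every bank retains exactly the same number of non-zeros, the rows partition naturally across parallel compute units, which is what makes BBS hardware-friendly.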
Surprisingly, in some cases removing an attention head (see Fig.~\ref{fig:pruning}(E)) results in an increase in accuracy. When these heads are removed individually, only 8 (out of 96) heads in the 6-layer WMT NMT Transformer (16 heads/layer) cause a statistically significant change in performance when they are removed from the model, half of which actually result in a higher BLEU score. For most layers, one head is indeed sufficient at test time, even though the network was trained with 12 (BERT) or 16 (WMT Transformer) attention heads. One can also do iterative pruning of multiple heads (rather than just one at a time) across layers. For iterative pruning, the head importance score is defined using the expected sensitivity of the model to the mask variables $\xi_h$ (multipliers applied to each head's output) as follows.\n\begin{equation}\n I_h=E_{x\sim X}\left|\frac{\partial L(x)}{\partial \xi_h}\right|=E_{x\sim X}\left|\text{Att}_h(x)^T\frac{\partial L(x)}{\partial \text{Att}_h(x)}\right|\n\end{equation}\n\noindent where $X$ is the data distribution, $L(x)$ is the loss on sample $x$, and $\text{Att}_h(x)$ is the output of the attention head $h$ for instance $x$. Intuitively, if $I_h$ has a high value then changing $\xi_h$ is liable to have a large effect on the model. Hence, in every iteration heads with low $I_h$ values are pruned out. Michel et al.~ observed that pruning up to 20\% and 40\% of heads from NMT and BERT models respectively, did not lead to any noticeable negative impact on accuracy.\nVoita et al.~ used two other head importance scores to prune attention heads from the NMT model. The two scoring methods were: (1) Layer-wise relevance propagation (LRP)~. LRP is a method for computing the relative contribution of neurons at one point in a network to neurons at another. (2) ``confidence'' of a head, which is computed as the average of its maximum attention weight excluding the end of sentence symbol, where the average is taken over tokens in a set of sentences used for evaluation. 
For pruning the heads, they propose a method based on stochastic gates and a differentiable relaxation of the $L_0$ penalty. $L_0$ norm equals the number of non-zero components and pushes the model to switch off less important heads. They find that only a small subset of heads are important for translation. On the English-Russian WMT dataset, pruning 38 out of 48 encoder heads results in a drop of only 0.15 BLEU.", "id": "bc2b6e76-0a87-4b66-8b3c-2bc732d0a821", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "10a33a73-e952-454b-b5ce-09086a71de83", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Pruning" ], [ "subsection", "Pruning Heads and Layers" ], [ "subsubsection", "Pruning Attention Heads" ] ], "subsections": [], "title": "Pruning Attention Heads" }, { "cite_extract_rate": 1, "cites": [ 4486 ], "content": "Note that dropping attention heads does not reduce runtime as they are usually computed in parallel. While one can prune weights, neurons or attention heads, how can we design a scheme to prune away layers (see Fig.~\\ref{fig:pruning}(G))? The LayerDrop idea proposed in~ is inspired by DropConnect. DropConnect randomly drops weights while training on a batch. LayerDrop does structured dropout: it drops groups of weights, heads, feed forward network (FFN) matrices, or layers. The layers to be pruned can be decided using one of these ways: \n\\begin{itemize}\n \\item Every Other: Prune every other layer (with rate $p$), e.g., every $3^{rd}$ layer in a 12-layer BERT model. \n \\item Search on Validation: Search for a set of layers to be pruned by checking their impact on a validation set. This entails trying various combinations. \n \\item Data Driven Pruning: Learn the drop rate $p_d$ of each layer in a data driven manner.\n\\end{itemize}\nGiven a target drop rate $p$, we learn an individual drop rate $p_d$ for the layer at depth $d$ such that the average rate over layers is equal to $p$. 
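The sensitivity-based head importance score $I_h$ reduces to a per-head dot product averaged over samples; a sketch of estimating it from a batch (the array layout is our assumption):

```python
import numpy as np

def head_importance(att_out, att_grad):
    """I_h = E_x | Att_h(x)^T dL/dAtt_h(x) |, estimated over a batch.
    att_out, att_grad: (n_samples, n_heads, d) arrays holding each head's
    output and the loss gradient w.r.t. that output."""
    dot = np.einsum('shd,shd->sh', att_out, att_grad)  # per-sample, per-head
    return np.abs(dot).mean(axis=0)                    # shape (n_heads,)

def heads_to_prune(importance, frac):
    """Indices of the lowest-importance heads, frac of all heads."""
    k = int(frac * importance.size)
    return np.argsort(importance)[:k]

# toy batch: 2 samples, 4 heads, head h has gradient magnitude (h+1)
att_out = np.ones((2, 4, 3))
att_grad = np.arange(1, 5).reshape(1, 4, 1) * np.ones((2, 4, 3))
imp = head_importance(att_out, att_grad)
pruned = heads_to_prune(imp, 0.5)
```

In practice these gradients come from backpropagation on held-out data; heads with low $I_h$ are masked out iteratively.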
At inference time, we forward only the fixed top-$k$ highest scoring layers based on the softmax output. Across the three methods, ``Every Other'' strategy works surprisingly well across many tasks and configurations. ``Search on Validation'' and ``Data Driven Pruning'' only offer marginal gains.", "id": "c4355812-9f8a-4da4-86ff-8739c39b93e8", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "10a33a73-e952-454b-b5ce-09086a71de83", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Pruning" ], [ "subsection", "Pruning Heads and Layers" ], [ "subsubsection", "Pruning Layers" ] ], "subsections": [], "title": "Pruning Layers" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 4331, 4479, 4487, 4478, 4486, 844, 4482, 4480, 4485, 7826 ], "content": "Lastly, Prasanna et al.~ experiment with pruning both the FFN layers (see Fig.~\\ref{fig:pruning}(F)) as well as attention heads (see Fig.~\\ref{fig:pruning}(E)) in a BERT network. Just like~, they assign a mask variable to each of these structures. To decide which structures to prune, we look at the expected sensitivity of the model to the mask variables. High sensitivity implies large impact on the model output and hence corresponding structures should be retained. They find that it is possible to find a subnetwork of elements that achieves performance comparable with that of the full model, and similarly-sized subnetworks sampled from the rest of the model perform worse. \n\\begin{table}\n \\centering\n \\scriptsize\n \\begin{tabular}{|l|l|l|l|l|l|l|l|l|}\n \\hline\nTask&Dataset&Model&Method&Size (Pruned; Orig)&Eval. (Pruned; Orig)&Metric\\\\\n\\hline\n\\hline\nLanguage modeling&Europarl v7 English &2-layer MLP&Prune Neurons~&5.06M; 5.1M&57; 100&Perplexity (L)\\\\\n\\hline\nLanguage modeling&PTB&2-layer LSTM&Iter. Mag.~&6.6M; 66M&80.24; 78.45&Perplexity (L)\\\\\n\\hline\nLanguage modeling&PTB&LSTM&Iter. 
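The "Every Other" strategy is simple enough to sketch directly; note that the exact depth-indexing convention below is our assumption, not taken from the paper:

```python
def every_other_keep(n_layers, p):
    """'Every Other' LayerDrop at inference: with drop rate p, drop the
    layers at depths that are multiples of round(1/p) (1-indexed here)
    and keep the rest."""
    stride = round(1 / p)
    return [d for d in range(1, n_layers + 1) if d % stride != 0]

kept = every_other_keep(12, 1/3)   # drop every 3rd layer of a 12-layer model
```

With $p=1/3$ on a 12-layer model, 8 layers remain, spaced evenly through the depth of the network.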
Mag.~&20M; 66M&78.5; 78.8&Perplexity (L)\\\\\n\\hline\nLanguage modeling&PTB&LSTM&Block Sparsity~&20M; 66M&83; 78.8&Perplexity (L)\\\\\n\\hline\nLanguage modeling&PTB&LSTM&BBS~&20M; 66M&78.5; 78.8&Perplexity (L)\\\\\n\\hline\nLanguage modeling&Wikitext-103&Transformer&LayerDrop~&22M; 44M&19.5; 18.2&Perplexity (L)\\\\\n\\hline\nLanguage modeling&AFP from English Gigaword&2-layer MLP&Prune Neurons~&5.07M; 5.1M&107; 100&Perplexity (L)\\\\\n\\hline\nLinguistic acceptability&CoLA&BERT-large&RPP/Iter. Mag.~&138M/170M; 340M&82.8/76.3; 81.5&Matthews (H)\\\\\n\\hline\nMachine reading comp.&MRPC&BERT-large&RPP/Iter. Mag.~&138M/170M; 340M&88.1/83.5; 89.3&Acc (H)\\\\\n\\hline\nMachine reading comp.&MRPC&BERT-base&LayerDrop~&66M; 110M&85.3; 88.9&Acc (H)\\\\\n\\hline\nNMT (en$\\rightarrow$de) &WMT14&Multi-layer LSTM&Mag.~&43M; 216M&20.91; 20.5&BLEU (H)\\\\\n\\hline\nNMT (en$\\rightarrow$de) &WMT16&4-layer LSTM&Iter. Mag.~&23M; 211M&26.19; 26.77&BLEU (H)\\\\\n\\hline\nNMT (en$\\rightarrow$de) &WMT16 &Transformer&LayerDrop~&22M; 44M&29; 29&BLEU (H)\\\\\n\\hline\nNMT (en$\\rightarrow$de) &WMT17&Transformer&Iter. Mag.~&22M; 44M&26.4; 28.09&BLEU (H)\\\\\n\\hline\nParaphrasing&QQP&BERT-large&RPP/Iter. Mag.~&138M/170M; 340M&91.2/85.1; 91.2&Acc (H)\\\\\n\\hline\nQuestion answering&SQuAD 1.1&BERT-large&RPP/Iter. Mag.~&138M/170M; 340M&90.23/85.3; 90.9&Acc (H)\\\\\n\\hline\nQuestion answering&SQuAD 2.0&BERT-large&RPP/Iter. Mag.~&138M/170M; 340M&81.3/75.3; 81.9&Acc (H)\\\\\n\\hline\nSentiment analysis&SST-2&BERT-large&RPP/Iter. Mag.~&138M/170M; 340M&92.4/91.3; 93.2&Acc (H)\\\\\n\\hline\nSentiment analysis&SST-2&BERT-base&LayerDrop~&66M; 110M&92.5; 93.5&Acc (H)\\\\\n\\hline\nSpeech recognition&2100 hours English Speech&2 CONV+7 BiRNNs+CTC&Iter. Mag.~&11.1M; 67M&10.59; 10.67&CER (L) on dev\\\\\n\\hline\nSpeech recognition&2100 hours English Speech&2 CONV+7 BiGRUs+CTC&Iter. 
Mag.~&17.8M; 115M&9.76; 9.55&CER (L) on dev\\\\\n\\hline\nSpeech recognition&2100 hours English speech&2 CONV+7 BiRNNs+CTC&Block Sparsity~&25.8M; 67M&15.66; 15.36&CER (L) on test\\\\\n\\hline\nSpeech recognition&2100 hours English speech&2 CONV+7 BiRNNs+CTC&Block Sparsity+Group Lasso~&12.9M; 67M&15.89; 15.36&CER (L) on test\\\\\n\\hline\nSpeech recognition&2100 hours English speech&2 CONV+3 BiGRUs+CTC&Block Sparsity~&25.6M; 115M&16.23; 15.42&CER (L) on test\\\\\n\\hline\nSpeech recognition&2100 hours English speech&2 CONV+3 BiGRUs+CTC&Block Sparsity+Group Lasso~&10.8M; 115M&16.78; 15.42&CER (L) on test\\\\\n\\hline\nSpeech recognition&AN4&2 CONV+3 HLSTMs+CTC&Grow and Prune~&2.6M; 44.72M&10.37; 8.92&WER (L)\\\\\n\\hline\nSpeech recognition&Switchboard (swb/fsh)&7-layer MLP&Prune Neurons~&12.2M; 32.19M&25.5/28.8; 25.7/28.8&WER (L)\\\\\n\\hline\nSpeech recognition&TIMIT&5-layer MLP&Prune Neurons~&3.5M; 5.76M&20.7; 20.79&PER (L)\\\\\n\\hline\nSpeech recognition&TIMIT&LSTM&Iter. Mag.~&0.32M; 3.2M&23.5; 23.5&PER (L)\\\\\n\\hline\nSpeech recognition&TIMIT&LSTM&Block Sparsity~&0.32M; 3.2M&26.5; 23.5&PER (L)\\\\\n\\hline\nSpeech recognition&TIMIT&LSTM&BBS~&0.32M; 3.2M&23.5; 23.5&PER (L)\\\\\n\\hline\nSpeech recognition&WSJ 92&1 CONV+3 FC+1 BiRNN&DSD~&4.07M; 8.14M&27.9; 27.45&WER (L)\\\\\n\\hline\nSpeech recognition&WSJ 92&2 CONV+7 BiRNNs+CTC&DSD~&33.5M; 67M&10.65; 9.02&WER (L)\\\\\n\\hline\nSpeech recognition&WSJ 93&1 CONV+3 FC+1 BiRNN&DSD~&4.07M; 8.14M&32.99; 31.6&WER (L)\\\\\n\\hline\nSpeech recognition&WSJ 93&2 CONV+7 BiRNNs+CTC&DSD~&33.5M; 67M&14.84; 13.44&WER (L)\\\\\n\\hline\nSummarization&CNN-Dailymail&Transformer&LayerDrop~&22M; 44M&39; 40&ROUGE (H)\\\\\n\\hline\nTextual entailment&MNLI&BERT-large&RPP/Iter. Mag.~&138M/170M; 340M&86.1/77; 86.1&Acc (H)\\\\\n\\hline\nTextual entailment&MNLI-m&BERT-large&RPP/Iter. 
Mag.~&138M/170M; 340M&85.7/82.5; 85.9&Acc (H)\\\\\n\\hline\nTextual entailment&MNLI-m&BERT-base&LayerDrop~&66M; 110M&82.9; 84.6&Acc (H)\\\\\n\\hline\nTextual entailment&QNLI&BERT-large&RPP/Iter. Mag.~&138M/170M; 340M&92.3/90.2; 91.3&Acc (H)\\\\\n\\hline\nTextual entailment&QNLI&BERT-base&LayerDrop~&66M; 110M&89.4; 90.5&Acc (H)\\\\\n\\hline\nTextual entailment&RTE&BERT-large&RPP/Iter. Mag.~&138M/170M; 340M&70.1/68.6; 70.1&Acc (H)\\\\\n\\hline\n \\end{tabular}\n \\caption{Comparison of various pruning methods (sorted by Task and then Dataset). CONV=Convolution. CTC=Connectionist temporal classification. FC=Fully connected. HLSTM=hidden-layer LSTM~. In the metric column, H means high is better while L means low is better. PER/CER/WER=Phone/Character/Word error rate. For~, embedding weights have not been considered when computing model size in the table. Block sparsity methods use block size of 4x4. BBS uses 64 banks.}\n \\label{tab:pruningSummary}\n\\end{table}", "id": "7dc1a143-838c-41ca-90a0-ebe43039cc74", "level": "subsubsection", "origin_cites_number": 14, "parent_id": "10a33a73-e952-454b-b5ce-09086a71de83", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Pruning" ], [ "subsection", "Pruning Heads and Layers" ], [ "subsubsection", "Pruning General Structures" ] ], "subsections": [], "title": "Pruning General Structures" }, { "cite_extract_rate": 0.75, "cites": [ 4331, 4484, 4479, 4478, 4486, 844, 1568, 4485, 7826 ], "content": "Table~\\ref{tab:pruningSummary} compares various pruning methods across different tasks and datasets. Size and accuracy of both the original and the pruned model are shown. The papers report multiple (model size, model accuracy) pairs but we carefully chose the pair such that accuracy is typically within 10\\% of the original or the best pruned model accuracy reported. 
For language modeling, the most popular datasets include PTB, Europarl v7 English, Wikitext-103 and AFP from English Gigaword. On PTB using LSTMs, we observe that the Bank-Balanced Sparsity method~ leads to the lowest perplexity. For the Linguistic Acceptability CoLA task, RPP~ resulted in a smaller and more accurate model compared to iterative magnitude pruning. This is expected to some extent, since pruning has a regularizing effect, and so the pruned model can provide better accuracy than the unpruned one. For the machine reading comprehension, question answering and paraphrasing tasks also, RPP seems to work better than iterative magnitude pruning. For the machine translation (NMT) task, on English-German WMT datasets, pruned Transformer models provide better accuracy than pruned LSTMs with a comparable number of parameters. For the sentiment analysis task, on SST-2, although RPP leads to a better pruned model compared to iterative pruning, LayerDrop~ improves further on it with a model less than half the size of the RPP-pruned model. For the speech recognition task, experiments have been reported on the 2100 hours English speech data, TIMIT, WSJ, Switchboard and AN4 datasets. On the 2100 hours English speech data, Block Sparsity+Group Lasso is better than block sparsity without regularization. Also, it is better than plain iterative magnitude pruning. On TIMIT, BBS~ yields a 90\% pruned LSTM model that matches the accuracy of the unpruned one. For the summarization task, LayerDrop~ can compress the model to half without any noticeable accuracy change. Finally, for the textual entailment task, experiments have been done on the GLUE~ datasets: MNLI, MNLI-m, QNLI and RTE. Models pruned from BERT-large perform better than models pruned from BERT-base; RPP performs better than iterative magnitude pruning.\nWhile older methods~ claimed that Hessian based methods were more effective than magnitude based pruning, almost all recent methods have been based on magnitude based pruning. 
See et al.~ proposed three pruning schemes. They make the following observations: (1) Class-blind pruning outperforms both other schemes. Further, the overall performance loss is caused disproportionately by a few classes: the softmax weights, and the source and target embedding weights. (2) It seems that higher layers are more important than lower layers, and that attention and softmax weights are crucial in LSTMs. (3) After retraining the pruned NMT models, baseline performance (20.48 BLEU) is both recovered and improved upon, up to 80\% pruning (20.91 BLEU), with only a small performance loss at 90\% pruning (20.13 BLEU). (4) In LSTMs, the parameters corresponding to the less common words are more dispensable. Weights connecting to the input are most crucial, followed by the input gate, then the output gate, then the forget gate. This is particularly true of the lower layers, which focus primarily on the input. However for higher layers, especially on the target side, weights connecting to the gates are as important as those connecting to the input. \nNarang et al.~ observe that for approximately the same number of parameters, gradual/iterative pruning is 7\% to 9\% better than hard pruning. They also conclude that the initial layers are pruned more aggressively compared to the final layers. Zhu et al.~ advise that in order to get the best-performing sparse model of a certain size, we should train a dense model that is 5x-10x larger and then prune it to the desired number of parameters, rather than taking the largest and best-performing dense model and pruning this model by 20x or more to the desired number of parameters. Guo et al.~ find that RPP is much better than typical iterative pruning. In their experiments with BERT they find that for both the original BERT and BERT pruned with RPP, the low-dimensional manifolds of the language representations are similar, with similar projections. 
This implies that BERT pruned with RPP retains most of the language representation information of the original BERT.
For block pruning, Narang et al.~ make the following observations: (1) We can create block-sparse RNNs with sparsity ranging from 80\% to 90\% with a small loss in accuracy. This allows us to reduce the model size by roughly 10×. Block sparsity works with a variety of block sizes up to 32×32. (2) For block size 4×4, models with sparsity greater than 90\% yield a relative accuracy loss of 30\% or higher. Similarly, for blocks of 16×16, models with sparsity greater than 86\% have 30\% or more accuracy loss. A similar trend is observed for block size 32×32. This indicates that there is a tradeoff between sparsity, block size and accuracy of the model. (3) For both block pruning and weight pruning, we see that the initial layers are pruned more aggressively compared to the final layers. Increasing sparsity in the layers closer to the output results in poor accuracy. Additionally, the variance in sparsity across the layers increases with the block size. Further, comparing block sparsity with BBS, Cao et al.~ observe that BBS achieves almost the same model accuracy regardless of the bank size, whereas for block sparsity, increasing the block size adversely affects model accuracy. 
For pruning of attention heads, Michel et al.~ observe that one can prune up to 20\% and 40\% of the heads from a 6-layer NMT Transformer and BERT respectively, without incurring any noticeable negative impact. When removing one head at a time, only 8 (out of 96) heads in the 6-layer NMT Transformer (16 heads/layer) cause a statistically significant change in performance when they are removed from the model, half of which actually result in a higher BLEU score. Further, Voita et al.~ find that on the English-Russian WMT dataset, pruning 38 out of 48 encoder heads results in a drop of only 0.15 BLEU. 
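The head-pruning experiments above amount to zeroing a head's output before the output projection and re-measuring the metric. A minimal sketch of this masking follows; the shapes and names are hypothetical, not tied to any particular implementation.

```python
import numpy as np

def masked_attention_output(head_outputs, w_o, head_mask):
    # head_outputs: (n_heads, seq_len, d_head); w_o: (n_heads * d_head, d_model).
    # Pruning head i corresponds to head_mask[i] = 0: its output no longer
    # contributes to the final projection.
    n_heads, seq_len, d_head = head_outputs.shape
    masked = head_outputs * head_mask[:, None, None]
    concat = masked.transpose(1, 0, 2).reshape(seq_len, n_heads * d_head)
    return concat @ w_o
```

Evaluating the metric with each single-head mask in turn is exactly the "remove a head at a time" experiment described above.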
\nOverall, to summarize, pruning has been the most popular method for model compression. Pruning methods can be unstructured (prune weights) or structured (prune neurons, blocks, attention heads, layers). While weight pruning theoretically leads to pruning to a large extent, practical implementation of sparse data structures is difficult. Pruning and regularization need to be done together carefully. Also, it is critical to define the importance functions for various structures carefully. Among weight pruning methods, while iterative magnitude pruning with regularization works well for RNNs and LSTMs, RPP performs better for Transformer based models. Pruning blocks using BBS is better than pruning neurons. For Transformer models, pruning just the heads do not provide much model compression, but dropping a combination of attention heads and layers is better.", "id": "80f88ea7-37a5-4100-9fe4-121a616869c9", "level": "subsection", "origin_cites_number": 12, "parent_id": "92ac6101-b828-4042-8708-2e0386e701bb", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Pruning" ], [ "subsection", "Summary" ] ], "subsections": [], "title": "Summary" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:quantization}\nWhile pruning saves on the model size by removing weights, quantization aims to reduce the number of bits needed to store weights. Most computer architectures use 32 bits to represent weights. However, estimated precision of the brain (hippocampal spine) synapses is around 4.6 bits~. Empirical evidence suggests that most quantities in the nervous system (for instance, firing of the neurons) have variability of a few percent due to biological noise, or a precision of 1 in 100 at best~. Thus, each decision could depend on $\\log_2 (100)$=6.64 bits. Thus, we should be able to store weights in our artificial neural networks on average in a space of 4--7 bits. 
Given this motivation, various methods have been proposed which perform 1-bit (binary) quantization, ternary quantization, and general quantization exploring the spectrum between 3 and 32 bits. We discuss such methods in this section. Fig.~\ref{fig:quantization} provides a broad overview of various quantization styles.
\begin{figure}
    \centering
    \includegraphics[width=\columnwidth]{quantization}
    \caption{Different Types of Quantization: Binary (A), Ternary (B) and General Quantized (C and D). Note that the X axis denotes the weight value while the Y axis denotes frequency.}
    \label{fig:quantization}
\end{figure}
\label{subsec:binQuant}
Binarized networks use binary quantization (see Fig.~\ref{fig:quantization}(A)), which quantizes weights using 1 bit. Quantizing weights to 1 bit provides a compression of 32x but leads to a significant drop in accuracy across many tasks. However, in a hybrid quantization scheme, such binary quantization can be very helpful for some layers in a network. Binarization can be done using deterministic or stochastic methods. Also, while na\"ive binarization fixes the binary boundary threshold in a very simple way, one could also perform a more complex loss-aware binarization. 
We discuss these variants of binarization in this section.
The simplest form of binary quantization sets the weight to +1 for non-negative weights, and to -1 for negative weights. This leads to 32x compression. Also, matrix multiplication for binary matrices is $\sim$7x faster~, leading to faster model inference. In the forward pass, binary networks drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations, which greatly improves power efficiency. Also, in the simplest version, binarization can be performed in a static manner, i.e., after training is done. However, this method leads to a large loss in accuracy. 
A variant of this simple method is to set the weight to a constant $c_1$ for non-negative weights, and to another constant $c_2$ for negative weights. The Binary Scheme (BS)-Fixed method~ stores the original weights and, during the forward pass, replaces each value with a masked value of $c_1$ or $c_2$, where $c_1$ and $c_2$ are fixed and chosen via hyperparameter tuning. Full precision weights are used during training. At the end of training, each weight is replaced with the index of its masked value. Choosing the values of $c_1$ and $c_2$ can be difficult and time-consuming in BS-Fixed. 
Thus, in the BS-Flexible method~, we initialize $c_1$ and $c_2$ using KMeans with two centroids over the weights, and then update $c_1$ and $c_2$ using back-propagation. Also, in the BS-Flexible method, weights are quantized as follows.
\begin{equation}
 w_b= \begin{cases}
c_1 &\text{if } w \geq (c_1+c_2)/2\\
c_2 &\text{if } w < (c_1+c_2)/2
\end{cases}
\end{equation}
Note that $w$ is the original weight value while $w_b$ is the binarized weight value. These changes eliminate the need for hyper-parameter tuning.
Stochastic~ binarization is performed as follows. 
\begin{equation}
 w_b= \begin{cases}
+1 &\text{with probability } p=\sigma(w)\\
-1 &\text{with probability } 1-p
\end{cases}
\end{equation}
\noindent where
\begin{eqnarray}
 \sigma(w)=\text{clip}\left(\frac{w+1}{2},0,1\right)=\max\left(0, \min\left(1, \frac{w+1}{2}\right)\right)
\end{eqnarray}
We only binarize the weights during the forward and backward propagations but not during the parameter update. Keeping full-precision weights during the updates is necessary for Stochastic Gradient Descent (SGD). This is made possible by the ``Straight-Through Estimator'' (STE) trick~. Since the quantized value is an approximation of the original value, STE substitutes the gradient with respect to the quantized value for the gradient with respect to the original value. 
The trick allows the inclusion of quantization into the computation graph of back-propagation and allows quantized neural networks (QNNs) to represent parameters, activations and gradients with low bitwidth numbers. For test-time inference, there are three options with such a quantization method: 
\begin{itemize}
 \item Use the resulting binary weights $w_b$ (this makes most sense with deterministic binarization). 
 \item In the stochastic case, many different networks can be sampled by sampling a $w_b$ for each weight. The ensemble output of these networks can then be obtained by averaging the outputs from the individual networks. 
 \item Use the original weights. But this does not reduce model size. 
\end{itemize}
Besides this, there have been further efforts that make training and testing faster but do not reduce model size. For example, Lin et al.~ convert multiplications in the backward pass into bit-shifts by restricting the activations to be power-of-two integers. Hubara et al.~ binarize weights and activations both at the inference phase and during the entire training phase of a deep network.
\label{subsubsec:lossAwareBin}
The na\"ive binary quantization methods divide the real number line into two parts, and each part is mapped to a quantized weight value. Can we decide per weight value which of the two weights it should be quantized to? 
Thus the idea behind Binary Weight Networks (BWN)~ is to approximate the weight vector $W\in R^n$ using a binary vector $B\in\{+1,-1\}^n$ and a scaling factor $\alpha\in R^+$ such that $W\approx\alpha B$. This can be expressed as an optimization problem as follows.
\begin{eqnarray}
 \alpha^*, B^*=\argmin_{\alpha, B}||W-\alpha B||^2
\end{eqnarray}
We can expand and write this as follows.
\begin{eqnarray}
 ||W-\alpha B||^2=\alpha^2B^TB-2\alpha W^TB+W^TW
\end{eqnarray}
Since $B\in\{+1,-1\}^n$, $B^TB=n$. Also $W^TW$ is a constant. Thus $B^*=\argmax_B W^TB$ such that $B\in\{+1,-1\}^n$. This optimization can be solved by simply assigning $B_i=+1$ when $W_i\geq 0$, and $B_i=-1$ otherwise. To compute $\alpha^*$, we set the derivative of $||W-\alpha B||^2$ with respect to $\alpha$ to 0 and get the solution as follows.
\begin{eqnarray}
 \alpha^*=\frac{\sum|W_i|}{n}
\end{eqnarray}
Thus, besides the binarized weight matrix, a scaling parameter is also learned in BWN.
To take this idea further, can we learn $\alpha$ and $B$ to minimize the overall network's loss function? With this goal, weight binarization can be formulated as the following optimization problem.
\begin{eqnarray}
 &&\min_{\hat{w}} \text{loss}(\hat{w})\\
 &&\text{such that } \hat{w}_l=\alpha_l b_l; \alpha_l>0; b_l\in\{+1,-1\}^{n_l}; l=1,..., L
\end{eqnarray}
\noindent where $L$ is the number of layers and $n_l$ is the number of weights in layer $l$. 
This loss aware binarization~ problem can be solved using the proximal Newton algorithm~ to find the best $\alpha_l$ and $b_l$.
\label{subsec:ternaryQuant}
Unfortunately, binary quantization of the recurrent weights in RNNs/LSTMs has never worked well~. When the true value of a weight is near zero, its quantized value is set to either -1 or 1. This results in an artificial increase in the magnitude of the weights, and the vanishing/exploding gradients problem becomes more severe. Hence, another popular form of quantization is ternary quantization (see Fig.~\ref{fig:quantization}(B)). Ternary quantization can help achieve a minimum of 16x compression (up to 32x compression if the hardware allows zeros not to be stored). 
In this section, we discuss different variants of ternary quantization, from the simplest ternary connect networks to hybrid ternary networks like HitNets.
The simplest method for ternary quantization is ternary connect~, whose deterministic form is as follows.
\begin{equation}
 w_t= \begin{cases}
+1 &\text{if } w > 0.5\\
0 &\text{if } -0.5 < w \leq 0.5\\
-1 &\text{if } w \leq -0.5\\
\end{cases}
\end{equation}
Note that $w$ is the original weight value while $w_t$ is the ternarized weight value. Like binary connect, ternary connect also eliminates all multiplications in the forward pass. In the stochastic form, assuming the original weights have been normalized to be in the range [-1,1], ternary quantization is done as follows.
\begin{equation}
 w_t= \begin{cases}
+1 &\text{with prob } w \text{ if } w\in (0,1]\\
0 &\text{with prob } 1-w \text{ if } w\in (0,1]\\
0 &\text{with prob } 1+w \text{ if } w\in[-1,0]\\
-1 &\text{with prob } -w \text{ if } w \in [-1,0]\\
\end{cases}
\end{equation}
A closely related scheme is Bernoulli Ternary Quantization, where $w_t$ is set to $+1$ (or $-1$) with probability $p$ if $w>0$ (or $w<0$), and set to 0 with probability $1-p$, where the decision is drawn from a Bernoulli distribution with parameter $p=|w|$. 
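The deterministic and stochastic ternary connect rules above can be sketched in NumPy as follows. This is an illustrative sketch, assuming the weights have already been normalized to $[-1,1]$.

```python
import numpy as np

def ternary_connect_det(w):
    # Deterministic ternary connect: thresholds at +/- 0.5.
    return np.where(w > 0.5, 1.0, np.where(w <= -0.5, -1.0, 0.0))

def ternary_connect_stoch(w, rng):
    # Stochastic ternary connect: round towards sign(w) with
    # probability |w|, and towards 0 otherwise.
    return np.sign(w) * (rng.random(w.shape) < np.abs(w))
```

As with stochastic binarization, the stochastic rule is used in the forward and backward passes while full-precision weights are kept for the parameter updates.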
Yet another way to set the boundaries for the three ranges is to use Gaussian based ternary weights~ as follows.
\begin{equation}
 w_t= \begin{cases}
+1 &\text{if } w > (\mu+\sigma/2)\\
0 &\text{if } -(\mu+\sigma/2)<w\leq (\mu+\sigma/2)\\
-1 &\text{if } w \leq -(\mu+\sigma/2)\\
\end{cases}
\end{equation}
\noindent where $\mu$ and $\sigma$ are the mean and standard deviation of the weight matrix being quantized.
Rather than using the rules for ternary quantization as mentioned above, one can learn the boundary ranges or the quantized values for individual weights. One way of learning the right ternary representation per weight value is to minimize the Euclidean distance between the full precision weights $W$ and the ternary weights $T$ along with a scaling factor~. This can be expressed as the following optimization problem.
\begin{eqnarray}
\alpha^*, T^*=\argmin_{\alpha,T}||W-\alpha T||_2^2\\
\text{such that } \alpha\geq 0; T_i\in \{-1,0,1\}; i=1,2,..., n
\end{eqnarray}
Note that this is analogous to the BWN method~. This does not lead to a closed form solution. Hence, we approximate the solution with a threshold-based ternary function. 
\n\\begin{equation}\n w_t= \\begin{cases}\n+1 &\\text{if } w > \\Delta\\\\\n0 &\\text{if } -\\Delta < w \\leq \\Delta\\\\\n-1 &\\text{if } w \\leq -\\Delta\\\\\n\\end{cases}\n\\label{eq:eq2}\n\\end{equation}\nThe approximation works when we set $\\Delta$ as follows.\n\\begin{eqnarray}\n\\Delta^*=\\argmax_{\\Delta>0}\\frac{1}{|I_\\Delta|}\\left(\\sum_{i\\in I_\\Delta}|W_i|\\right)^2\n\\end{eqnarray}\n\\noindent where $I_\\Delta$ is the number of weights with magnitude$>\\Delta$. Again, this has no straightforward solution, unless we assume that original weights $W_i$'s are generated from uniform or normal distribution. When $W_i$'s are uniformly distributed in $[-a, a]$ and $\\Delta$ lies in $(0, a]$, the approximated $\\Delta^*$ is $a/3$, which equals to $\\frac{2}{3}E(W)$. When $W_i$'s are generated from normal distributions $N(0,\\sigma^2)$, the approximated $\\Delta^*$ is 0.6$\\sigma$ which equals to 0.75$E(|W|)$. Thus, we can use the following rule of thumb for fast and easy computation.\n\\begin{eqnarray}\n\\Delta^*\\approx 0.7 E(W)=\\frac{0.7}{n}\\sum_{i=1}^n |W_i|\n\\end{eqnarray}\nAnother way to learn the quantization step size $\\Delta$ in Eq.~\\ref{eq:eq2} is to learn in a loss-aware manner~, i.e., tuning it to minimize the overall network loss. Given a multi-layered network, we need to perform such quantization layer by layer in a greedy manner. We first train the network with full precision weights. We quantize all input data and signals of hidden layers. Next, we start with the weight quantizer between the input layer and the first hidden layer, try several step sizes around the initial step size and measure the output error of the network with the training set. The initial step size is determined using Lloyd-Max algorithm~. Choose the step size that minimizes the output error and quantize the weights. Further, we perform these steps for the next few layers until the output layer. 
Finally, the quantized neural network is retrained.
Yet another way of training ternary quantization~ is to quantize weights to one of $\{-W_l^n, 0, W_l^p\}$ for each layer $l$, where $W_l^n$ and $W_l^p$ are trainable parameters, learned using back-propagation. First, we normalize the full-precision weights to the range [-1, +1] by dividing each weight by the maximum weight. During SGD, we back-propagate the gradient to both $W_l^n$ and $W_l^p$ and to the latent full-precision weights. This makes it possible to adjust the ternary assignment (i.e., which of the three values a weight is assigned to). To decide the quantization step size $\Delta_l$ for a layer $l$, two heuristics can be used: (1) set $\Delta_l=t\times \max(|w_l|)$ where $t$ is a constant and $w_l$ are the full precision weights in layer $l$; (2) maintain a constant sparsity $r$ for all layers throughout training. By adjusting the hyper-parameter $r$ we can obtain ternary weight networks with various sparsities.
Given the various ternary quantization methods proposed so far, one can combine them and use different methods for different layers. Wang et al.~ found that threshold ternary quantization (TTQ) (Eq.~\ref{eq:eq2}) is preferable for weights in an RNN while Bernoulli Ternary Quantization (BTQ) is preferable for activations. 
This is based on the observation that in an RNN, the distribution of weights follows a normal distribution (with different ranges across different weight matrices), while for activations, the range is [0,1] and most of the values are located near the two poles instead of the middle of the range. In the training phase (where we need to store the full precision weights), ternary quantization of weights only saves 1.4x memory consumption, but quantizing both weights and activations can achieve up to 16x memory savings.
The HitNet architecture~ with this hybrid ternary quantization can be defined using these equations, where $i_t, f_t, o_t$ are the input, forget and output gates; $g_t$ is the candidate cell update; $x_t$ is the input at time $t$; $c_t$ is the cell output; $h_t$ is the hidden layer output; and $W_x$, $W_h$, $b_x$, $b_h$ are the weights and biases. 
\begin{eqnarray}
 \nonumber i_t,f_t,g_t,o_t&=&\sigma(\text{TTQ}(W_x)x_t+\text{TTQ}(b_x)\\
 \nonumber &+&\text{TTQ}(W_h)h_{t-1}+\text{TTQ}(b_h))\\
 \nonumber c_t&=&f_t\times c_{t-1}+i_t\times g_t\\
 h_t&=&\text{BTQ}(o_t\times \sigma(c_t))
\end{eqnarray}
\label{subsec:generalQuant}
So far we discussed methods designed specifically for binary and ternary quantization. Now, we discuss general $k$-bit quantization methods. 
We will discuss (1) uniform quantization methods, which perform equal-width binning, (2) non-uniform methods, which are closer to equal-frequency binning, (3) loss-aware quantization methods, and (4) methods specifically designed for Transformer models.
Uniform $k$-bit quantization simply splits the range of the original weights into $2^k-1$ equal-size intervals~. Refer Fig.~\ref{fig:quantization}(C).
If the original weights are in the range [-1,1], they can be quantized as follows.
\begin{equation}
 q_k(x)=2\left(\frac{\text{round}[(2^k-1)\left(\frac{x+1}{2}\right)]}{2^k-1}-\frac{1}{2}\right)
 \label{eq:uq}
\end{equation}
Similarly, if the entries are in the range [0,1], we could use the following formula.
\begin{eqnarray}
 q_k(x)=\frac{1}{2^k-1}\lfloor(2^k-1)x+\frac{1}{2}\rfloor
\end{eqnarray}
When the weights in matrix $X$ are not in the range [0,1], we can first scale them as follows.
\begin{eqnarray}
 \tilde{X}=\frac{X-\beta}{\alpha}
\end{eqnarray}
\noindent where $\alpha=\max(X)-\min(X)$ and $\beta=\min(X)$. After quantization, we can apply a reverse transform to approximate the original values. 
Overall, the quantized result can be written as follows.
\begin{eqnarray}
 Q(X)=\alpha q_k(\tilde{X})+\beta
\end{eqnarray}
Given any quantization function $q_k(x)$, one can use it for quantizing weight matrices of various recurrent models like RNNs, GRUs and LSTMs~. Typical inference equations for a GRU can be written as follows.
\begin{eqnarray}
 z_t=\sigma(W_z.[h_{t-1},x_t]); r_t=\sigma(W_r.[h_{t-1},x_t])\\
 \tilde{h_t}=\text{tanh}(W.[r_t\times h_{t-1},x_t]); h_t=(1-z_t)h_{t-1}+z_t \tilde{h_t}
\end{eqnarray}
Besides the matrix multiplications needed to compute $z_t$, $r_t$ and $\tilde{h_t}$, the gate structure of $\tilde{h_t}$ and $h_t$ brings in the need for element-wise multiplications. 
As $\tilde{h_t}$ and $h_t$ are also the inputs to computations at the next timestep, and noting that a quantized value multiplied by a quantized value will have a larger bit-width, we need to insert additional quantization steps after the element-wise multiplications.
Another problem with quantization of the GRU structure lies in the different value ranges of the gates. The range of tanh is [-1, 1], which is different from the value range [0, 1] of $z_t$ and $r_t$. 
Keeping in mind these observations, the equations for a quantized GRU can be written as follows, after the weights $W_z$, $W_r$ and $W$ and the input $x_t$ have already been quantized to [-1,1].
\begin{eqnarray}
 z_t&=&\sigma(W_z.[h_{t-1},x_t])\\
 r_t&=&\sigma(W_r.[h_{t-1},x_t])\\
 \tilde{h_t}&=&\text{tanh}\left(W.\left[2q_k\left(\frac{1}{2}(r_t h_{t-1})+\frac{1}{2}\right)-1,x_t\right]\right)\\
 h_t&=&2q_k\left(\frac{1}{2}((1-z_t) h_{t-1}+z_t \tilde{h_t})+\frac{1}{2}\right)-1
\end{eqnarray}
Following a similar method, we can also quantize LSTM networks.
Uniform quantization is easy to implement but far from optimal when quantizing non-uniform data, and the trained weights and activations of deep neural networks are believed to be non-uniformly distributed. One way of performing non-uniform quantization is exponential quantization~. It quantizes the weight values to an integer power of 2. 
If we let 
\begin{eqnarray}
 p=\frac{|W|}{2^{\lfloor \log_2|W|\rfloor}}-1
\end{eqnarray}
deterministic exponential quantization can be written as follows.
\begin{equation}
 \log_2 W_q= \begin{cases}
\lceil \log_2|W| \rceil &\text{if } p>0.5\\
\lfloor \log_2|W| \rfloor &\text{otherwise }
\end{cases}
\end{equation}
Similarly, stochastic exponential quantization can be written as follows.
\begin{equation}
 \log_2 W_q= \begin{cases}
\lceil \log_2|W| \rceil &\text{with prob } p\\
\lfloor \log_2|W| \rfloor &\text{with prob } 1-p
\end{cases}
\end{equation}
Exponential quantization enables storing weights in low precision and eliminates multiplications. However, it still does not perform quantization in a way that is sensitive to the distribution of the weights. Distributions of parameters in neural networks are often imbalanced, such that a uniform quantization determined from extremal values may underutilize the available bitwidth. When we quantize values, it may be desirable to make the quantized values have balanced distributions, to take full advantage of the available parameter space. The balanced quantization method~ starts by partitioning numbers into $2^{k}$ bins containing roughly the same number of entries (percentiles). Refer Fig.~\ref{fig:quantization}(D). Each partition is then mapped to an evenly-divided interval in the closed interval [0, 1]. Finally, the quantization step maps intervals into discrete values using Eq.~\ref{eq:uq} and transforms the value range to be approximately the same as the input.
A na\"ive implementation using percentiles as thresholds would require sorting of the weight values during each forward operation in back-propagation, which may slow down the training process. The $2^{k}$ evenly spaced percentiles required in histogram equalization can be computed from the recursive application of partitioning of numbers by medians. 
Further, the mean $\mu$ can be used to approximate the median $m$. Thus, we can perform approximate histogram equalization without doing sorting.
Yet another way of performing non-uniform quantization is to decide bin boundaries using clustering in a static manner. In this static-KMeans method~, we first train the neural network with full-precision parameters. We then apply KMeans to the weights. After clustering, the value of each weight is set to the value of the center of the cluster it belongs to. We also need to store the mapping from integers to cluster centers. Given $k$ clusters, we only need $\log_2 k$ bits to code the clusters.
A better approach is to perform KMeans clustering during training. In this method~, multiple connections (belonging to the same cluster) share the same weight, and we fine-tune those shared weights. For the forward pass, the cluster index stored for each connection is mapped to a centroid which is then used as the weight. For back-propagation, during the update, all the gradients are grouped by the cluster index and summed together, multiplied by the learning rate and subtracted from the shared centroids from the last iteration. We use KMeans clustering to identify the shared weights for each layer of a trained network, so that all the weights that fall into the same cluster will share the same weight. Weights are not shared across layers. To calculate the compression rate, given $k$ clusters, we only need $\log_2 k$ bits to encode the index. 
In general, for a network with $n$ connections where each connection is represented with $b$ bits, constraining the connections to have only $k$ shared weights will result in a compression rate of
\begin{eqnarray}
 r=\frac{nb}{n\log_2 k+kb} 
\end{eqnarray}
There are two other ways of using KMeans for non-uniform quantization: Product Quantization (PQ) and Residual Quantization (RQ)~. In product quantization (PQ), we partition the vector space into many disjoint subspaces, and perform quantization (KMeans) in each subspace. The weight matrix $W$ is partitioned columnwise: $W=[W^1, W^2, ..., W^s]$ where $W^i\in R^{m\times n/s}$, assuming $n$ is divisible by $s$. Then we perform KMeans on each submatrix $W^i$ to obtain clusters $c^i_1, ..., c^i_k$. Thus, we get $s$ codebooks. The reconstructed matrix is $\hat{W}=[\hat{W}^1, \hat{W}^2, ..., \hat{W}^s]$ where $\hat{W}^i_j$ is the closest centroid $c^i_j$. PQ can be applied to either the x-axis or the y-axis of the matrix. We need to store the cluster indexes and codebooks for each subvector. The compression rate for this method can be written as follows.
\begin{eqnarray}
 r=\frac{32mn}{32kn + \log_2(k)ms} 
\end{eqnarray}
Residual quantization (RQ) is similar. In RQ, we first quantize the vectors onto $k$ centers. Next, we find the residual for each data point ($w-c$) and perform KMeans on the residuals. We repeat this recursively $t$ times.
The resultant weight vectors are then calculated as $\hat{W}_z=c^1_j+c^2_j+...+c^t_j$ after $t$ recursive iterations. We need to store all the codebooks for each iteration, which potentially needs a large amount of memory. 
The compression rate for this method can be written as follows.\n\\begin{eqnarray}\n r=\\frac{m}{tk + \\log_2(k)tn} \n\\end{eqnarray}", "id": "4b1349e1-295c-4789-9acd-ec92a1cf9f02", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "be929a80-0b3b-4005-bf19-e84fee8ab111", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Quantization" ], [ "subsection", "General Quantized Networks" ], [ "subsubsection", "KMeans based Quantization Schemes" ] ], "subsections": [], "title": "KMeans based Quantization Schemes" }, { "cite_extract_rate": 1, "cites": [ 4498, 4497, 851 ], "content": "Generalizing the loss aware binarization approach (Sec.~\\ref{subsubsec:lossAwareBin})~, we can perform $k$-bit quantization~ by attempting to solve the following problem. \n\\begin{eqnarray}\n \\min_{\\{\\alpha_i,b_i\\}_{i=1}^k} \\left|\\left|w-\\sum_{i=1}^k \\alpha_i b_i\\right|\\right|^2 \n\\end{eqnarray}\nwhere $w\\in R^n$ is the original weight vector, $\\alpha_i\\in R$ and $b_i\\in\\{-1,+1\\}^n$ are variables to be learned. This NP-hard problem can be solved using an iterative greedy approximation which sequentially minimizes the residue. In each iteration, first the residue is computed as \n\\begin{eqnarray}\nr_{i-1}=w-\\sum_{j=1}^{i-1}\\alpha_j b_j,\n\\end{eqnarray}\nand then $\\alpha_i$ and $b_i$ are computed as $\\alpha_i=\\frac{1}{n}||r_{i-1}||_1$ and $b_i=\\text{sign}(r_{i-1})$. The refined greedy approximation~ extends this to further decrease the quantization error. In the $j^{th}$ iteration, after $\\alpha_j$ and $b_j$ have been updated, the method adds one extra step to refine all computed $\\{\\alpha_i\\}_{i=1}^j$ with the least squares solution as follows.\n\\begin{eqnarray}\n[\\alpha_1, ..., \\alpha_j]=((B_j^TB_j)^{-1}B_j^Tw)^T\n\\end{eqnarray}\nwhere $B_j=[b_1,...,b_j]$. Typically, refined greedy is more accurate than the greedy approach. 
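The greedy approximation can be sketched in a few lines of numpy (illustrative only): each step sets $b_i=\text{sign}(r_{i-1})$ and $\alpha_i=\frac{1}{n}||r_{i-1}||_1$, then subtracts $\alpha_i b_i$ from the residue.

```python
import numpy as np

def greedy_multibit(w, k):
    """Greedy k-bit quantization: sequentially minimize the residue with
    alpha_i = mean(|residue|) and b_i = sign(residue)."""
    residue = w.astype(float).copy()
    alphas, bs = [], []
    for _ in range(k):
        b = np.where(residue >= 0, 1.0, -1.0)   # b_i = sign(r_{i-1})
        alpha = np.abs(residue).mean()          # alpha_i = ||r_{i-1}||_1 / n
        alphas.append(alpha)
        bs.append(b)
        residue = residue - alpha * b
    w_hat = sum(a * b for a, b in zip(alphas, bs))
    return w_hat, alphas, bs

w = np.array([0.8, -0.3, 0.5, -0.9])
w1, _, _ = greedy_multibit(w, k=1)  # 1-bit approximation
w2, _, _ = greedy_multibit(w, k=2)  # 2-bit approximation
# the quantization error shrinks as more (alpha_i, b_i) pairs are added
```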
In refined greedy approximation, after the computed $\\alpha$'s are modified, the $b$'s are no longer optimal, yet the method keeps all of them fixed. To improve the refined greedy approximation, alternately minimizing over the $\\alpha$'s and $b$'s becomes a natural choice. Xu et al.~ find that only two alternating cycles are enough to find a high-precision quantization. Further, similar to~, for an LSTM, we can combine overall network loss minimization with the multi-bit quantization loss minimization using this bi-level optimization.\n\\begin{eqnarray}\n\\min_{w,\\{\\alpha_i, b_i\\}_{i=1}^k} \\text{LSTM}\\left(\\sum_{i=1}^k\\alpha_i b_i\\right)\\\\\n\\text{such that }\\{\\alpha_i, b_i\\}_{i=1}^k=\\argmin_{\\{\\alpha_i',b_i'\\}_{i=1}^k}\\left|\\left|w-\\sum_{i=1}^k \\alpha_i' b_i'\\right|\\right|^2\n\\end{eqnarray}", "id": "96bae440-819d-4a68-8df5-216ce70bd5a8", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "be929a80-0b3b-4005-bf19-e84fee8ab111", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Quantization" ], [ "subsection", "General Quantized Networks" ], [ "subsubsection", "Loss Aware Quantization (LAQ)" ] ], "subsections": [], "title": "Loss Aware Quantization (LAQ)" }, { "cite_extract_rate": 0.25, "cites": [ 2488 ], "content": "Each word vector is typically represented as a 300--500 dimensional vector, with each parameter being 32 bits. As there are millions of words, word vectors may take up to 3--6 GB of memory/storage. Can we quantize word vectors? We can clearly quantize them after training. But we could also quantize when learning word embeddings. For example, Lam et al.~ perform 1-bit and 2-bit quantization while performing word2vec~ training using the Continuous Bag of Words (CBOW) method. They observe that quantization while training leads to better results compared to quantization after training. 
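To make the memory arithmetic concrete, here is a hypothetical sketch (not Lam et al.'s method) of post-training 1-bit quantization of an embedding matrix, storing one sign bit per dimension plus a per-row float scale:

```python
import numpy as np

def binarize_embeddings(E):
    """Store each embedding row as sign bits plus one float scale
    (the row's mean absolute value)."""
    scales = np.abs(E).mean(axis=1, keepdims=True)
    signs = np.where(E >= 0, 1.0, -1.0)
    return signs, scales

# a toy 1000-word vocabulary with 300-dimensional vectors
E = np.random.default_rng(0).normal(size=(1000, 300)).astype(np.float32)
signs, scales = binarize_embeddings(E)
E_hat = signs * scales  # dequantized approximation

# storage: 1 bit per dimension + one 32-bit scale per word,
# versus 32 bits per dimension for full precision
orig_bits = E.size * 32
quant_bits = E.size * 1 + E.shape[0] * 32
ratio = orig_bits / quant_bits  # close to 29x smaller
```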
\nCheong et al.~ applied BS-Fixed and BS-Flexible binary quantization to Transformer models. They observed that the Transformer architecture is highly resistant to quantization, and is able to match the original model up to a 4-bit representation. Simple iterative pruning is much worse compared to quantization. Lastly, Shen et al.~ propose mixed-precision quantization for BERT based on the observation that different encoder layers should use different number of bits for quantization. Layers that exhibit flatter curvature of the loss gradient surface can be quantized to lower bit precision. Thus, they use different number of bits at different levels of granularity: layers, attention heads and groups of neurons. They observe that quantizing embedding layers with 8 bits and other weight matrices with 2--4 bits leads to results comparable with full-precision BERT.", "id": "c55e72b6-a137-4167-a584-9f7c420c0df0", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "be929a80-0b3b-4005-bf19-e84fee8ab111", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Quantization" ], [ "subsection", "General Quantized Networks" ], [ "subsubsection", "Quantization for Word Embeddings and Transformers" ] ], "subsections": [], "title": "Quantization for Word Embeddings and Transformers" }, { "cite_extract_rate": 0.7333333333333331, "cites": [ 4498, 8797, 4494, 4497, 2670, 4490, 4351, 4491, 4492, 2488, 851 ], "content": "\\begin{table}\n \\centering\n \\scriptsize\n \\begin{tabular}{|l|l|l|l|l|l|l|l|}\n \\hline\nTask&Dataset&Model&Method&Eval&Bits (weights)&Bits (activation)&Metric\\\\\n\\hline\n\\hline\nLanguage modeling&IMDB&GRU&Uniform Q.~&0.882; 0.905&4&4&Acc (H)\\\\\n\\hline\nLanguage modeling&Linux Kernel&LSTM&BinaryConnect~&3.532; 1.329&1&FP&CE (L)\\\\\n\\hline\nLanguage modeling&Linux Kernel&LSTM&Loss Aware B.~&1.305/1.409; 1.329&1&FP/1&CE (L)\\\\\n\\hline\nLanguage modeling&Linux Kernel&LSTM&BNN~&3.624; 
1.329&1&1&CE (L)\\\\\n\\hline\nLanguage modeling&Linux Kernel&LSTM&BWN~&1.307; 1.329&1&FP&CE (L)\\\\\n\\hline\nLanguage modeling&PTB&GRU&Uniform Q.~&102; 100&4&4&PPW (L)\\\\\n\\hline\nLanguage modeling&PTB&GRU&Balanced Q.~&116; 100&4&4&PPW (L)\\\\\n\\hline\nLanguage modeling&PTB&LSTM&Greedy~&118.9; 89.8&2&FP&PPW (L)\\\\\n\\hline\nLanguage modeling&PTB&LSTM&Refined Loss Aware~&95.6; 89.8&2&3&PPW (L)\\\\\n\\hline\nLanguage modeling&PTB&LSTM&Uniform Q.~&152/114; 109&2/4&2/4&PPW (L)\\\\\n\\hline\nLanguage modeling&PTB&LSTM&BNN~&100; 97&4&4&PPW (L)\\\\\n\\hline\nLanguage modeling&PTB&LSTM&HitNet~&110.3; 97.2&2&2&PPW (L)\\\\\n\\hline\nLanguage modeling&PTB&LSTM&Alternating LAQ~&103.1/91.4; 89.8&2/4&FP&PPW (L)\\\\\n\\hline\nLanguage modeling&PTB&LSTM&Alternating LAQ~&91.9; 89.8&2&3&PPW (L)\\\\\n\\hline\nLanguage modeling&PTB&LSTM&Balanced Q.~&126/123; 106&2&2/3&PPW (L)\\\\\n\\hline\nLanguage modeling&Text-8&LSTM&Refined Loss Aware~&122.3; 101.1&2&3&PPW (L)\\\\\n\\hline\nLanguage modeling&Text-8&LSTM&HitNet~&169.1; 151.4&2&2&PPW (L)\\\\\n\\hline\nLanguage modeling&Text-8&LSTM&Alternating LAQ~&105.1; 101.1&2&3&PPW (L)\\\\\n\\hline\nLanguage modeling&Text-8&RNN&Exponential Q.~&1.639; 1.588&2&FP&BPC (L)\\\\\n\\hline\nLanguage modeling&War and Peace&LSTM&BinaryConnect~&2.942; 1.268&1&FP&CE (L)\\\\\n\\hline\nLanguage modeling&War and Peace&LSTM&Loss Aware B.~&1.291/1.376; 1.268&1&FP/1&CE (L)\\\\\n\\hline\nLanguage modeling&War and Peace&LSTM&BNN~&3.05; 1.268&1&1&CE (L)\\\\\n\\hline\nLanguage modeling&War and Peace&LSTM&BWN~&1.313; 1.268&1&FP&CE (L)\\\\\n\\hline\nLanguage modeling&WikiText-2&LSTM&HitNet~&126.72; 114.37&2&2&PPW (L)\\\\\n\\hline\nLanguage modeling&WikiText-2&LSTM&Refined Loss Aware~&105.8; 114.37&2&3&PPW (L)\\\\\n\\hline\nLanguage modeling&WikiText-2&LSTM&Alternating LAQ~&102.7; 100.1&2&3&PPW (L)\\\\\n\\hline\nNamed Entity Recognition&CoNLL-03&BERT-base&QBERT~&91.06; 95&2w8e&8&F1 (H)\\\\\n\\hline\nNamed Entity Recognition&CoNLL-03&BERT-base&Mixed-precision 
Q.~&94.37; 95&2-3w8e&8&F1 (H)\\\\\n\\hline\nNMT (en$\\rightarrow$de)&WMT17&Transformer&BS-Fixed/BS-Flexible~&11.61/12.11; 28.09&1&FP&BLEU (H)\\\\\n\\hline\nNMT (en$\\rightarrow$de)&WMT17&Transformer&K-Means 1/4-bit Q.~&12.07/27.65; 28.09&1&FP&BLEU (H)\\\\\n\\hline\nNMT (en$\\rightarrow$de)&WMT17&Transformer&K-Means 1-bit att-Q.~&24.96; 28.09&1&FP&BLEU (H)\\\\\n\\hline\nNMT (en$\\rightarrow$de)&WMT17&Transformer&BS-Flexible 1-bit att-Q.~&25.54; 28.09&1&FP&BLEU (H)\\\\\n\\hline\nQuestion answering&SQuAD&BERT-base&QBERT~&79.6; 88.69&2w8e&8&F1 (H)\\\\\n\\hline\nQuestion answering&SQuAD&BERT-base&Mixed-precision Q.~&86.95; 88.69&2-3w8e&8&F1 (H)\\\\\n\\hline\nQuestion answering&SQuAD&Facebook’s DrQA&BS-Fixed~&77.04; 75.28&2&FP&F1 (H)\\\\\n\\hline\nSentiment analysis&IMDB&LSTM&Gaussian Q.~&79.64; 82.87&1&FP&Acc (H)\\\\\n\\hline\nSentiment analysis&IMDB&LSTM&Gaussian T./B.~&76.86/76.25; 82.87&2&FP&Acc (H)\\\\\n\\hline\nSentiment analysis&SST-2&BERT-base&QBERT~&84.63; 93&2w8e&8&Acc (H)\\\\\n\\hline\nSentiment analysis&SST-2&BERT-base&Mixed-precision Q.~&92.08; 93&2-3w8e&8&Acc (H)\\\\\n\\hline\nSpeech recognition&TIDIGITS&GRU&Pow2 T.~&99.18; 99.1&2&FP&Acc (H)\\\\\n\\hline\nSpeech recognition&TIMIT&4-layer MLP&Loss Aware T.~&29.97/28.35; 26.24&1/2&1&FER (L)\\\\\n\\hline\nSpeech recognition&WSJ&4-layer BiLSTM&Pow2 T.~&10.49; 11.16&2&FP&WER (L)\\\\\n\\hline\nTextual entailment&MNLI&BERT-base&QBERT~&77.02; 84.4&2w8e&8&Acc (H)\\\\\n\\hline\nTextual entailment&MNLI&BERT-base&Mixed-precision Q.~&82.29; 84.4&2-3w8e&8&Acc (H)\\\\\n\\hline\nTextual entailment&MNLI-m&BERT-base&QBERT~&76.56; 84&2w8e&8&Acc (H)\\\\\n\\hline\nTextual entailment&MNLI-m&BERT-base&Mixed-precision Q.~&81.75; 84&2-3w8e&8&Acc (H)\\\\\n\\hline\nWord similarity&M. 
Turk &Word embeddings&BS-Fixed~&0.602; 0.617&2&FP&CHR (H)\\\\\n\\hline\nWord similarity&MEN &Word embeddings&BS-Fixed~&0.764; 0.745&2&FP&CHR (H)\\\\\n\\hline\nWord similarity&Rare Words &Word embeddings&BS-Fixed~&0.362; 0.4&2&FP&CHR (H)\\\\\n\\hline\nWord similarity&SimLex&Word embeddings&BS-Fixed~&0.387; 0.358&2&FP&CHR (H)\\\\\n\\hline\nWord similarity&WordSim Relatedness &Word embeddings&BS-Fixed~&0.594; 0.529&2&FP&CHR (H)\\\\\n\\hline\nWord similarity&WordSim Similarity&Word embeddings&BS-Fixed~&0.752; 0.741&2&FP&CHR (H)\\\\\n\\hline\n \\end{tabular}\n\\caption{Comparison of various quantization methods (sorted by Task and then Dataset). Q.=Quantization, B.=Binarization, T.=Ternarization, PPW=Perplexity per word, BPC=Bits per character, CE=cross-entropy, FER=frame error rate, CHR=correlation with human rankings. FP=full precision (32 bits). For~, we report results with word embedding dimensions set to 1000. In the metric column, H means high is better while L means low is better. For quantization of the BERT-base model~, we report number of bits used for encoders as well as for embeddings. `2-3w8e' means 2 or 3 bits were used for encoder weights while 8 bits were used for embeddings. For NMT results by Cheong et al.~, ``att-Q'' means only attention layers were quantized.}\n \\label{tab:quantizationSummary}\n\\end{table}\nOtt et al.~ observed that the weight binarization methods do not work with RNNs. Hubara et al.~ were the first to attempt to quantize both weights and activations by trying to evaluate the accuracy of quantized recurrent models trained on the Penn Treebank dataset. Similar to~, Hubara et al.~ found that binarization of weight matrices lead to large accuracy degradation. Later techniques like the one by Xu et al.~ with 2 bits for weights and 3 bits for activations showed better results.\nTable~\\ref{tab:quantizationSummary} compares various quantization methods across different tasks and datasets. 
The accuracy of both the original and the quantized model is shown. Also, we report the number of bits used for weights (which indicates the model size) as well as activations. For the same task, dataset and model combination, different papers report different accuracy of the full precision model because of slight changes in training hyper-parameters; hence we report the accuracy of the full precision model for each row. \nFor language modeling, PTB, Text-8, WikiText-2, Linux Kernel, IMDB and ``War and Peace'' are the popular datasets. Across all the datasets, loss aware binarization outperforms other weight binarization schemes. On the Linux Kernel dataset, it is even better than the full-precision network. BinaryConnect does not work well here because of the problem of exploding gradients. On PTB, Xu et al.'s Alternating LAQ~ with 2 bits for weights and 3 bits for activations leads to an LSTM which is just 2.1 points worse in terms of perplexity per word. With 3-bit quantization, Alternating LAQ can achieve $\\sim$10.5x memory saving and $\\sim$3x real inference acceleration. Uniform and Balanced quantization are rule-based and do not aim at minimizing the error. Balanced quantization proposed by Zhou et al.~ performs better than HitNet~ and uniform quantization~. Balanced quantization leads to better results compared to unbalanced counterparts, especially when quantizing to 2-bit weights. However, for 4-bit weights, there is no clear gap between scaling by mean and scaling by max (i.e., balanced and unbalanced quantization).\nAcross multiple tasks like named entity recognition with CoNLL-03, question answering with SQuAD, sentiment analysis with SST-2, and textual entailment using MNLI, Shen et al.~ show that mixed-precision quantization (where different numbers of bits are used for different groups of neurons -- 128 groups in each layer) of BERT is better than QBERT. The reason is that not all layers have the same sensitivity to quantization, whereas QBERT uses the same precision everywhere. 
For more sensitive layers, higher bit precision needs to be set, while for less sensitive layers, a 2-bit setting is already sufficient. With only 5MB of additional memory storage, 2/3-bit mixed-precision Q-BERT is able to keep the performance drop within 2.3\\% for MNLI and SQuAD and within 1.1\\% for SST-2 and CoNLL-03, with up to 13x compression ratio in weights. For machine translation, Cheong et al.~ observe that the Transformer architecture is highly resistant to quantization (unlike pruning), and is, in essence, able to match the original model up to a 4-bit representation. Binary Scheme Flexible outperforms 1-bit k-means in both pure 1-bit compression and quantizing only the attention layers, suggesting that not tying the weights to particular centroids improves performance; it also outperforms Binary Scheme Fixed, indicating that learning the quantized values is a superior method. After binarizing only the attention layers, the model is still able to recover 90\\% of its performance. Lam et al.~ experimented with quantization of word embeddings and showed that 2-bit quantized word vectors outperform full precision vectors on word similarity tasks, but do worse on word analogy tasks. Intuitively, they reason that full precision Word2Vec is prone to overfitting with increased epochs of training; quantized training does not seem to suffer as much from this. For sentiment analysis on IMDB, Alom et al.~ show that quantization to 4 bits is better than to 3 or 2 bits, which is expected. They also show that the normal distribution performs better than the uniform distribution with quantized weights. For speech recognition, Ott et al.~ show that pow-2 ternarization is the best. \nCheong et al.~ were the first to quantize Transformers. 
They observed that in the last attention layer of the decoder over the encoder hidden states, the attention distributions of the original and 4-bit models are highly similar, indicating that 4-bit weights, i.e., weights that take on one of 16 values, are enough to get the full effect of attention. Attention distributions in the encoder layers of the Transformer for the original and 4-bit models are almost indistinguishable from one another. This again highlights the idea that self-attention is highly resistant to quantization and could be heavily compressed. Later, Shen et al.~ showed that performance comparable to full precision BERT can be achieved with at most 2.3\\% performance degradation across many tasks, even with ultra-low precision quantization down to 2 bits, corresponding to up to 13x compression of the model parameters, and up to 4x compression of the embedding table as well as activations. \nOverall, quantization performs model compression by reducing the number of bits per weight value. Binary quantization does not work well by itself for text-based neural models. But ternary and higher-bit quantization lead to significant model size reduction without loss in accuracy across tasks. One consideration for quantization is that 3-bit quantized execution is typically not supported in hardware. It is however possible to load 3-bit quantized values and cast them to higher bit precision such as 4 or 8 bits in the execution units. This would still have the benefit of reduced memory volume to/from DRAM. Non-uniform quantization methods like balanced quantization or KMeans based quantization methods are better than uniform quantization methods. Loss aware quantization done while training is better than static loss-unaware quantization. 
Mixed-precision quantization combined with pruning is highly effective for Transformer based models.", "id": "97e94011-bad6-4d07-98cb-26de46e32fe4", "level": "subsection", "origin_cites_number": 15, "parent_id": "c0c0e7e6-55dc-426b-8669-a8adfb8fe243", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Quantization" ], [ "subsection", "Summary" ] ], "subsections": [], "title": "Summary" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:knowledgeDistillation}\nKD methods are the most popular model compression methods for Transformer networks. Also called student-teacher networks, the main idea is to first train a deep teacher network, and then learn a shallow student network that mimics the teacher. After training, the student model is deployed. What information (``dark knowledge'') from the teacher can be used to train the student? What loss functions can be used to ensure right flow of information from teacher to student? Can we have an ensemble of teachers, or teacher assistants or rather fellow students who can train the student? 
\nWe discuss these aspects in this section.", "id": "2c5b1b2b-3f74-442d-8ee6-503425cf4207", "level": "section", "origin_cites_number": 0, "parent_id": "3efda6e0-cf0a-4f26-8cb1-3dde0497c32c", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Knowledge Distillation (KD)" ] ], "subsections": [ "5addca6a-ae25-4e99-8ffc-a608a63231c7", "d2899d20-743a-43e0-84c7-0fc90c1a975b", "e1a31918-0804-4d5f-9f78-0ea43f83fc23", "8fabba12-aca9-4230-b6a7-cc461dfdceaa", "4cd2b9eb-147c-41ba-996b-a30e824f168f" ], "title": "Knowledge Distillation (KD)" }, { "cite_extract_rate": 0.9166666666666661, "cites": [ 8798, 4499, 4500, 4503, 4504, 681, 8375, 8799, 8390, 4501, 4502 ], "content": "Ba and Caruna~ proposed Student Teacher networks (or mimic models) where the student uses the logits before softmax from the teacher network for training (see Fig.~\\ref{fig:kd}(A)). The student model is not trained on the original labels; it is trained to learn the function that was learned by the teacher model. Thus, the student model is optimized to minimize the L2 loss between the teacher logits and the student logits across all training instances. Such distilled student models are more accurate than the same shallow student trained directly on the original labeled training data mainly because: (1) Teacher removes noisy labels, if any. (2) The uncertainty from the teacher is more informative to the student than the original 0/1 labels. (3) The original targets may depend in part on features not available as inputs for learning, but the student sees targets that depend only on the input features. The dependence on unavailable features has been eliminated by filtering targets through the teacher.\nYet another way of utilizing logits is to have the student learn from noisy teacher logits~. After obtaining logits from the teacher, Gaussian noise with mean 0 and standard deviation $\\sigma$ is added to teacher’s logits. 
This perturbation can be applied to samples selected with probability $\\alpha$. The perturbed outputs produce the effect of a regularizer. \n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{kd}\n \\caption{Different Types of Knowledge Distillation methods.}\n \\label{fig:kd}\n\\end{figure}\nWhile Ba and Caruna~ suggested using only the logits, Hinton et al.~ suggested training the student by minimizing the cross entropy loss between the teacher softmax output and the student softmax output, besides minimizing the cross entropy between student prediction and actual label. The first part is called the soft loss and the second one is called the hard loss. Typically hard loss is given much lower weight compared to the soft loss term. To make the softmax output non-peaked and thereby transfer more useful information from teacher to student, softmax with temperature $>$1 should be used. The same temperature should be used for training both the teacher and the student, but after the student has been trained the temperature can be set to 1 at test time. Besides logits and softmax output, Sobolev training for neural networks is a method for incorporating target derivatives in addition to the target values while training student network (see Fig.~\\ref{fig:kd}(C)). Czarnecki et al.~ experiment with first two derivatives of the targets.\nKD has also been used along with quantization for better model compression~. We start with a trained full-precision large teacher network and an apprentice (student) network that has been initialised with full-precision weights. The apprentice network's precision is lowered and is fine-tuned using KD. \nWhy just use the output from the last layer of the teacher for training the student? 
In FitNets~, the student performs hint-based training, i.e., the student is trained using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student (see Fig.~\\ref{fig:kd}(B)). We choose a hidden layer of the FitNet, the guided layer, to learn from the teacher's hint layer. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. \nWhile the methods discussed so far use logits, softmax output or their derivatives to transfer knowledge, Yim et al.~ proposed a ``flow of solution procedure (FSP)'' method where the distilled knowledge is transferred in terms of flow between layers, which is calculated by computing the inner product between features from two layers (see Fig.~\\ref{fig:kd}(D)). What does this ``flow'' capture intuitively? If we view the input of a deep network as the question and the output as the answer, we can think of the generated features at the middle of the network as the intermediate result in the solution process. There are many ways to solve the problem of generating the output from the input. Hence, mimicking the generated features of the teacher can be a hard constraint for the student. Learning the solution process from the teacher is important. More concretely, the student is trained to minimize the L2 difference between the teacher and student FSP matrices computed across various pairs of layers and across multiple training instances. A similar method called Representational distance learning (RDL) has also been proposed in~.\nLastly, multiple KD variants have been proposed for sequence-level predictions~, e.g., for neural machine translation (NMT). 
In word-level KD, cross-entropy is minimized between the student/teacher distributions for each word in the actual target sequence, as well as between the student distribution and the degenerate data distribution, which has all of its probability mass on one word. In sequence-level KD (Seq-KD) the student network is trained on the output from beam search of the teacher network that had the highest score. In sequence-level interpolation (Seq-Inter) the student is trained on the output from beam search of the teacher network that had the highest similarity (say using BLEU score) with the target sequence.", "id": "5addca6a-ae25-4e99-8ffc-a608a63231c7", "level": "subsection", "origin_cites_number": 12, "parent_id": "2c5b1b2b-3f74-442d-8ee6-503425cf4207", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Knowledge Distillation (KD)" ], [ "subsection", "Various Distillation Architectures" ] ], "subsections": [], "title": "Various Distillation Architectures" }, { "cite_extract_rate": 1, "cites": [ 4506, 4505 ], "content": "Can multiple students learn from each other? Is a powerful teacher really required? In the deep mutual learning (DML) method~, different from the one-way transfer between a static pre-defined teacher and a student in model distillation, with DML, an ensemble of students learn collaboratively and teach each other throughout the training process. Surprisingly, no prior powerful teacher network is necessary -- mutual learning of a collection of simple student networks works, and moreover outperforms distillation from a more powerful yet static teacher. Specifically, each student is trained with two losses: a conventional supervised learning loss, and a mimicry loss that aligns each student's class posterior with the class probabilities of other students. 
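The two-term DML objective for one student can be sketched as follows (illustrative numpy only, not a training loop from the paper):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dml_loss(logits_a, logits_b, label):
    """Loss for student A in deep mutual learning: supervised
    cross-entropy plus a KL mimicry term pulling A's posterior
    toward fellow student B's posterior."""
    p_a, p_b = softmax(logits_a), softmax(logits_b)
    ce = -np.log(p_a[label])
    kl = np.sum(p_b * (np.log(p_b) - np.log(p_a)))  # KL(p_b || p_a)
    return ce + kl

logits_a = np.array([2.0, 0.5, -1.0])  # student A
logits_b = np.array([1.5, 1.0, -0.5])  # student B
loss_a = dml_loss(logits_a, logits_b, label=0)
loss_b = dml_loss(logits_b, logits_a, label=0)  # students teach each other
```

Each student minimizes its own loss, so the mimicry term is symmetric in effect: both posteriors are pulled toward each other over training.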
\nAnil et al.~ propose a similar method but suggest letting the students learn independently just using the conventional supervised learning (hard) loss at least for a few burn-in iterations (see Fig.~\\ref{fig:kd}(E)). After this, the mutual learning can be done as in DML. They also propose a variant of their Co-Distillation method to perform this training in a distributed scenario where communication efficiency is also important. To update the parameters of one network using co-distillation, one only needs the predictions of the other networks, which can be computed locally from copies of the other networks' weights. Empirically, using stale predictions instead of up-to-date predictions for the other neural networks has little to no adverse effect on the quality of the final trained model produced by co-distillation.", "id": "d2899d20-743a-43e0-84c7-0fc90c1a975b", "level": "subsection", "origin_cites_number": 2, "parent_id": "2c5b1b2b-3f74-442d-8ee6-503425cf4207", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Knowledge Distillation (KD)" ], [ "subsection", "Collaborative Learning" ] ], "subsections": [], "title": "Collaborative Learning" }, { "cite_extract_rate": 0.75, "cites": [ 4508, 4507, 681 ], "content": "So far we have talked about a student mimicking a single teacher. However, it is interesting to explore if the student can learn better in the presence of multiple teachers or from a teacher assistant.\nIntuitively, and as also observed empirically, student network performance degrades when the gap between student and teacher is large. Given a fixed student network, one cannot employ an arbitrarily large teacher, or in other words, a teacher can effectively transfer its knowledge to students up to a certain size, not smaller. 
To alleviate this shortcoming, Mirzadeh et al.~ introduced multi-step KD, which employs an intermediate-sized network (teacher assistant) to bridge the gap between the student and the teacher (see Fig.~\\ref{fig:kd}(F)).\nThe teacher assistant (TA) models are distilled from the teacher, and the student is then only distilled from the TAs. One could also perform multi-step TA distillation, for example, distillation path could be $10 \\rightarrow 6 \\rightarrow 4 \\rightarrow 2$.\nA simple way to do KD with multiple teachers is to train student with cross entropy loss between student predictions and average prediction from multiple teachers (see Fig.~\\ref{fig:kd}(G)). A more effective method is to augment this with a relative dissimilarity (RD) loss~ defined over intermediate layer outputs generated for a triplet of instances between the student and an ensemble of teachers. For the student, the middle layer is selected. For each teacher, we select the layer such that most teachers are consistent with the resulting order relationships under the voting strategy. We discuss the RD loss given a student and a teacher. Consider a triplet of instances ($x_i$, $x_i^+$, $x_i^-$) such that at an intermediate layer of the teacher network, distance between activations for $x_i^+$ and $x_i$ is smaller than the distance between activations for $x_i^-$ and $x_i$. Let $p_i$ be the intermediate output from student for example $x_i$. Then the RD loss for the triplet ($x_i$, $x_i^+$, $x_i^-$) can be written as follows. \n\\begin{eqnarray}\n\\text{Loss}=\\max(0,d(p_i, p_i^+)-d(p_i, p_i^-)+\\delta) \n\\end{eqnarray}\nwhere $d$ is the distance function, and $\\delta$ is a small constant to prevent the trivial solution. To extend this loss function definition to multiple teachers, the order between the instances $x_i^+$ and $x_i^-$ given $x_i$ is decided based on majority voting between the teachers. 
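The RD hinge loss above can be written directly from the formula; a small sketch for a single teacher-ordered triplet:

```python
import numpy as np

def rd_loss(p, p_pos, p_neg, delta=0.1):
    """Relative dissimilarity hinge loss on the student's intermediate
    outputs for a triplet (x_i, x_i+, x_i-), whose ordering is given
    by the teacher(s)."""
    d_pos = np.linalg.norm(p - p_pos)  # should be the smaller distance
    d_neg = np.linalg.norm(p - p_neg)  # should be the larger distance
    return max(0.0, d_pos - d_neg + delta)

p = np.array([1.0, 0.0])
p_pos = np.array([0.9, 0.1])  # teachers say: close to p
p_neg = np.array([0.0, 1.0])  # teachers say: far from p
loss = rd_loss(p, p_pos, p_neg)       # 0.0: ordering already respected
violation = rd_loss(p, p_neg, p_pos)  # positive: ordering violated
```

The loss is zero whenever the student's intermediate representations already respect the teachers' ordering by at least the margin $\delta$.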
\nThere are also specific settings when distilling from multiple teachers becomes natural, e.g., when the number of classes is large~ or in multi-lingual settings~. When the number of classes is very large, the teacher model could be an ensemble that contains one generalist model trained on all the data and many ``specialist'' models, each of which is trained on data that is highly enriched in examples from a very confusable subset of the classes (like different types of mushroom). The softmax distribution vector of this type of specialist can be made much smaller by combining all of the classes it does not care about into a single dustbin class. Each specialist model is initialized with the weights of the generalist model. These weights are then slightly modified by training the specialist with half its examples coming from its special subset and half sampled at random from the remainder of the training set. To derive groupings of object categories for the specialists, we focus on categories that the full generalist network often confuses. When training the student, for each instance, we first find the set $k$ of the $n$ most probable classes according to the generalist model. Then, we take all the specialist models, $m$, whose special subset of confusable classes has a non-empty intersection with $k$ and call this the active set of specialists $A_k$. Given the student's full probability distribution $q$ over all the classes, we minimize the following.\n\\begin{eqnarray}\n\\text{Loss}=KL(p^g,q)+\\sum_{m\\in A_k} KL(p^m, q) \n\\end{eqnarray}\nwhere $p^g$ is the output distribution from the generalist model, and $p^m$ is the output distribution from the $m^{th}$ specialist model.\nAn ensemble of teachers is also very useful in a multi-lingual NMT setting~. Individual models for each language pair are first trained and regarded as teachers, and then the multilingual model is trained to fit the training data and match the outputs of individual models simultaneously through KD. 
When the accuracy of the multilingual model surpasses that of the individual model by the accuracy threshold $\\tau$ on a certain language pair, we remove the distillation loss and just train the model with the original negative log-likelihood loss for this pair. Lastly, when learning from a teacher ensemble, it is burdensome to load all the teacher models in the GPU memory for distillation. Alternatively, we first generate the output probability distribution of each teacher model for each instance offline, and then just load the top-$K$ probabilities of the distribution into memory and normalize them so that they sum to 1 for distillation. This reduces the memory cost from the scale of $|V|$ (the vocabulary size) to $K$.", "id": "e1a31918-0804-4d5f-9f78-0ea43f83fc23", "level": "subsection", "origin_cites_number": 4, "parent_id": "2c5b1b2b-3f74-442d-8ee6-503425cf4207", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Knowledge Distillation (KD)" ], [ "subsection", "Multiple Teachers" ] ], "subsections": [], "title": "Multiple Teachers" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 4513, 4509, 4511, 2481, 4512, 4514, 4510, 854 ], "content": "Recently, there has been a lot of work around distilling Transformers to smaller Transformers with fewer layers or to Bidirectional LSTMs. Some of these methods aim at improving the accuracy versus model size tradeoff, while others focus on complex settings like mismatched student-teacher vocabularies~ or mismatched numbers of attention heads. \nZhao et al.~ learn a student with a smaller vocabulary than the teacher using a dual training method. During distillation, for a given training sequence input to the teacher model, they mix the teacher and student vocabularies by randomly selecting tokens from the sequence to segment using the student vocabulary, with the other tokens segmented using the teacher vocabulary. 
As part of the masked language model (MLM) task, the model now needs to learn to predict words from the student vocabulary using context words segmented using the teacher vocabulary, and vice versa. The expectation is that the student embeddings can be learned effectively this way from the teacher embeddings as well as teacher model parameters. We perform dual training only for the teacher model inputs. The student model receives words segmented exclusively using the student vocabulary. Also, during MLM, the model uses different softmax layers for the teacher and the student vocabularies depending on which one was used to segment the word in question. Instead of distilling solely on the teacher model's final-layer outputs, layer-wise teacher model parameters can also be leveraged to directly optimize parameters of corresponding layers in the student model.\nIn Patient KD (PKD)~, the student learns from the teacher's output after every $k$ layers (see Fig.~\\ref{fig:kd}(H)) or the output from the last few layers of the teacher (see Fig.~\\ref{fig:kd}(I)). The student BERT is initialized using some layers of the pre-trained teacher BERT. TinyBERT~ further extends this idea by using extensive knowledge from embedding layer, and attention and hidden sub-layers of multiple teacher layers, and also the overall teacher output (see Fig.~\\ref{fig:kd}(J)). Each student layer is first mapped to a teacher layer before the student training. Liu et al.~ distill a multi-task student from a multi-task teacher, given the soft targets of the training data across multiple tasks. If task $t$ has a teacher, the task-specific loss is the average of two objective functions, one for the correct targets and the other for the soft targets assigned by the teacher. In MiniLM~, the student is trained by deeply mimicking the self-attention behavior of the last Transformer layer of the teacher (see Fig.~\\ref{fig:kd}(K)). 
Besides self-attention distributions, MiniLM introduces the self-attention value-relation transfer to help the student achieve a deeper mimicry. The value-relation is computed as the pairwise correlation between different components of the value matrix across various attention heads of the final layer. Pretrained Distillation~ pretrains the student model with a self-supervised masked LM objective on a large corpus first, then performs a standard KD on supervised tasks. \nMost of these models learn a one-to-one layer mapping, where each student layer is guided by only one specific teacher layer. Li et al.~ propose a method where each student intermediate layer learns from every teacher intermediate layer with learnable attention weights. Both the embedding-layer distillation and the prediction-layer distillation employ the one-to-one layer mapping as in TinyBERT and BERT-PKD.\nTang et al.~ propose distillation of a BERT model to a single-layer BiLSTM using KL divergence between student and teacher logits (see Fig.~\\ref{fig:kd}(L)). Mukherjee et al.~ also distill a multi-lingual BERT (mBERT) model to a BiLSTM. Representation transfer is done from the Transformer-based teacher model to the BiLSTM-based student model with different embedding dimensions and disparate output spaces. Distillation features include teacher logits and internal teacher representations for one teacher layer. To make the output spaces compatible, the student representation for each token is passed through a non-linear projection so that it has the same shape as the teacher representation. The projection parameters are learned by minimizing the KL-divergence (KLD) between the representations of the student and the chosen layer from the teacher. Overall there are multiple loss functions for the student training: the supervised hard loss, a soft loss wrt the output logits, and a soft loss wrt the internal teacher layer. 
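These three training signals can be combined as in the following sketch (toy vectors and uniform weights are illustrative assumptions; MSE stands in here for the projection-plus-KLD machinery of the actual method):

```python
import math

def softmax(z):
    e = [math.exp(x) for x in z]
    s = sum(e)
    return [x / s for x in e]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def student_loss(s_logits, t_logits, s_repr, t_repr, gold, w=(1.0, 1.0, 1.0)):
    """Weighted sum of the supervised hard loss, a soft loss on the
    output logits, and a soft loss on an internal teacher-layer
    representation."""
    hard = -math.log(softmax(s_logits)[gold])
    soft_logits = mse(s_logits, t_logits)
    soft_repr = mse(s_repr, t_repr)
    return w[0] * hard + w[1] * soft_logits + w[2] * soft_repr

loss = student_loss([2.0, -1.0], [1.5, -0.5], [0.3, 0.1, 0.4], [0.2, 0.0, 0.5], gold=0)
```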
Rather than optimizing for all loss functions jointly, stage-wise training is performed where each loss function is sequentially used for optimization. \nLastly, there have been recent efforts~ to distill Transformer models to slightly modified Transformer architectures. MobileBERT~ is a thin version of BERT-large, equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks (FFN). To train MobileBERT, we first train a specially designed teacher model, an inverted-bottleneck incorporated BERT-large model (IB-BERT). The IB-BERT uses the inverted-bottleneck structures to adjust its feature map size to 512. Thus, in the bottleneck structure, the inputs to the multi-head attention (MHA) module are from wider feature maps (of inter-block size), while the inputs to the FFN are from narrower bottlenecks (of intra-block size). This disturbs the usual parameter balance between the MHA and FFN modules; to restore it, MobileBERT uses stacked feed-forward networks to re-balance the relative size between MHA and FFN. Each MobileBERT layer contains one MHA but 4 stacked FFNs after each MHA. Then, we conduct knowledge transfer from this teacher to MobileBERT using feature map transfer and attention transfer across all layers. Also, while distilling they perform progressive knowledge transfer, i.e., they progressively train each layer in $L$ stages, where $L$ is the number of layers. When the $l$-th layer is trained, all the trainable parameters in the layers below are frozen. Another difference in the MobileBERT student architecture is the usage of high information flow residual connections between the high-channel-count layers. MobileBERT is 4.3x smaller and 5.5x faster than BERT-base while achieving competitive results on well-known benchmarks. Iandola et al.~ propose a new Transformer model architecture called SqueezeBERT which is much like BERT-base, but with the position-wise fully connected layers implemented as convolutions, and grouped convolutions for many of the layers. 
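The parameter saving from grouped convolutions can be seen with a quick count (the sizes below are illustrative, not SqueezeBERT's exact configuration; with $g$ groups, each filter sees only $C_{in}/g$ input channels):

```python
def conv1d_params(c_in, c_out, kernel=1, groups=1):
    """Weight count of a 1D convolution (bias omitted): each of the
    c_out filters connects to only c_in/groups input channels."""
    assert c_in % groups == 0
    return c_out * (c_in // groups) * kernel

dense = conv1d_params(768, 768)              # a position-wise FC layer as a k=1 conv
grouped = conv1d_params(768, 768, groups=4)  # the same layer with 4 groups
```

With 4 groups the layer needs a quarter of the weights of its dense counterpart.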
Distillation for SqueezeBERT is rather straightforward. Distillation is applied only to the final layer, and only during finetuning, using a soft cross entropy loss with respect to a weighted sum of the teacher's logits and a one-hot encoding of the ground-truth. The teacher model is a BERT-base model finetuned independently on each GLUE task, and these task-specific teacher weights are used for distillation. Xu et al.~ propose a method for progressive module replacement for compressing BERT. Their approach first divides the original BERT into several modules and builds their compact substitutes. Then, the original modules are randomly replaced with their substitutes to train the compact modules to mimic the behavior of the original modules. The probability of replacement is progressively increased through the training. In this way, their approach brings a deeper level of interaction between the original and compact models.\n\\begin{table}\n \\centering\n \\scriptsize\n \\begin{tabular}{|l|l|l|l|l|l|p{0.5in}|}\n \\hline\nTask&Dataset&Teacher/Student models&Method&Metric&Size (Distilled; Orig)&Eval. (Distilled; Orig)\\\\\n\\hline\n\\hline\nAbs. 
summarization&CNN/DailyMail&UniLM-large/12-layer BERT&MiniLM~&ROUGE-L (H)&33M; 340M&39.73; 40.34\\\\\n\\hline\nAd CTR prediction&Criteo&No teacher/DNN&CoDistillation~&MAE (L)&-&0.019; 0.022\\\\\n\\hline\nCross-lingual NLI&XNLI (15 langs.)&XLM-R-base/12-layer BERT&MiniLM~&Acc (H)&21M; 85M&71.1; 74.5\\\\\n\\hline\nCross-lingual QA&MLQA (7 langs.)&XLM-R-base/12-layer BERT&MiniLM~&F1 (H)&21M; 85M&63.2; 64.9\\\\\n\\hline\nIntent detection&SNIPS (7-class)&BERT-base/6-layer BERT&Mixed-vocabulary training~&Acc (H)&6.2M; 109M&98.7; 98.6\\\\\n\\hline\nLanguage modeling&Common Crawl&No teacher/2-layer LSTM&CoDistillation~&CE (L)&-&3.91; 3.92\\\\\n\\hline\nMachine reading comp.&RACE&BERT-base/6-layer BERT&Patient KD~&Acc (H)&67M; 109M&60.34; 65.30\\\\\n\\hline\nMachine reading comp.&RACE&BERT-base/6-layer BERT&Vanilla KD~&Acc (H)&67M; 109M&58.74; 65.30\\\\\n\\hline\nNER (41 langs.)&Wikiann-41&Multiple mBERT/BiLSTM&XtremeDistill~&F1 (H)&31.8M; 179M*41&88.64; 90.76\\\\\n\\hline\nNMT (de$\\rightarrow$en)&OpenNMT&4-layer LSTM/2-layer LSTM&Vanilla KD~&BLEU (H)&64.8M; 84.8M&15.48; 15.88\\\\\n\\hline\nNMT (de$\\rightarrow$en)&OpenNMT&4-layer LSTM/2-layer LSTM&Quantized Distillation (4 bits)~&BLEU (H)&64.8M; 84.8M&15.26; 15.88\\\\\n\\hline\nNMT (de$\\rightarrow$en)&WMT13&4-layer LSTM/2-layer LSTM&Quantized Distillation (4 bits)~&BLEU (H)&81.6M; 84.8M&35.32; 34.7\\\\\n\\hline\nNMT (de$\\rightarrow$en)&WMT13&4-layer LSTM/2-layer LSTM&Vanilla KD~&BLEU (H)&81.6M; 84.8M&30.21; 34.7\\\\\n\\hline\nNMT (en$\\rightarrow$Others)&IWSLT (13 langs.)&Multiple Transformers/Transformer&Multiple teachers~&BLEU (H)&44M; 44M*12&22.96; 22.72\\\\\n\\hline\nNMT (en$\\rightarrow$Others)&IWSLT (13 langs.)&Multiple Transformers/Transformer&Seq-KD with Multiple teachers~&BLEU (H)&44M; 44M*12&21.98; 22.72\\\\\n\\hline\nNMT (en$\\rightarrow$Others)&WMT (7 langs.)&Multiple Transformers/Transformer&Multiple teachers~&BLEU (H)&44M; 44M*6&24.47; 24.50\\\\\n\\hline\nNMT (en$\\rightarrow$de)&WMT14&4-layer LSTM/2-layer 
LSTM&Word-KD~&BLEU (H)&49M; 221M&14.9; 17.7\\\\\n\\hline\nNMT (en$\\rightarrow$de)&WMT14&4-layer LSTM/2-layer LSTM&Seq-KD~&BLEU (H)&49M; 221M&18.1; 17.7\\\\\n\\hline\nNMT (en$\\rightarrow$de)&WMT14&4-layer LSTM/2-layer LSTM&Seq-KD + Seq-Inter + Word-KD~&BLEU (H)&49M; 221M&18.5; 17.7\\\\\n\\hline\nNMT (en$\\rightarrow$de)&WMT14&4-layer LSTM/2-layer LSTM&Pruned Seq-KD + Seq-Inter~&BLEU@5 (H)&8M/17M; 221M&18.5/19.1; 19.5\\\\\n\\hline\nNMT (Others$\\rightarrow$en)&IWSLT (13 langs.)&Multiple Transformers/Transformer&Multiple teachers~&BLEU (H)&44M; 44M*12&30.34; 29.7\\\\\n\\hline\nNMT (Others$\\rightarrow$en)&Ted Talk (45 langs.)&Multiple Transformers/Transformer&Multiple teachers~&BLEU (H)&44M; 44M*43&28.95; 25.17\\\\\n\\hline\nNMT (Others$\\rightarrow$en)&WMT (7 langs.)&Multiple Transformers/Transformer&Multiple teachers~&BLEU (H)&44M; 44M*6&28.61; 27.07\\\\\n\\hline\nNMT (th$\\rightarrow$en)&IWSLT15&4-layer LSTM/2-layer LSTM&Word-KD~&BLEU (H)&8M; 47M&11.8; 14.3\\\\\n\\hline\nNMT (th$\\rightarrow$en)&IWSLT15&4-layer LSTM/2-layer LSTM&Seq-KD~&BLEU (H)&8M; 47M&12.8; 14.3\\\\\n\\hline\nNMT (th$\\rightarrow$en)&IWSLT15&4-layer LSTM/2-layer LSTM&Seq-KD + Seq-Inter + Word-KD~&BLEU (H)&8M; 47M&14.2; 14.3\\\\\n\\hline\nQuestion generation&SQuAD 1.1&UniLM-large/12-layer BERT&MiniLM~&BLEU@4 (H)&33M; 340M&23.27; 24.32\\\\\n\\hline\nSlot filling&SNIPS (39 slots)&BERT-base/6-layer BERT&Mixed-vocabulary training~&F1 (H)&6.2M; 109M&95.0; 97.0\\\\\n\\hline\n \\end{tabular}\n\\caption{Comparison of various knowledge distillation methods (sorted by Task and then Dataset). CE=cross entropy, MAE=mean absolute error. en=English, th=Thai. MRC=Machine Reading Comprehension. NLI=Natural Language Inference. QA=Question Answering. NER=Named Entity Recognition. In the metric column, H means high is better while L means low is better.}\n \\label{tab:kdSummary1}\n\\end{table}\nTable~\\ref{tab:kdSummary1} compares various knowledge distillation methods across different tasks and datasets. 
Accuracy of both the original and the distilled model is shown in the Eval. column. Also, we report model size for both the student and the teacher models. Note that sometimes smaller student models perform better than the teacher models. This could be because (1) for some (task, dataset) pairs, the smaller models are a better regularized fit compared to potentially overfitted teacher models, and (2) student models can be rigorously trained using additional semi-supervision while teacher models depend on limited labeled training data.\nWhile knowledge distillation has been used for distilling NLP models across many applications, NMT is the most popular one. For abstractive summarization, MiniLM~ leads to a student model which is less than one tenth the size of the teacher without much loss in ROUGE-L. Similarly, MiniLM shows good results for cross-lingual NLI and multi-lingual QA as well. For Ad click through rate (CTR) prediction and language modeling, Anil et al.~ show that co-distillation leads to lower MAE and cross entropy respectively compared to the individually trained models. Zhao et al.~'s mixed-vocab training leads to a 6-layer model that retains over 95\\% of the BERT-base model's slot filling F1 score while being 30x smaller ($<$10 MB without quantization) and 57x faster on a mobile device, yet task-agnostic. For Named Entity Recognition (NER), Mukherjee et al.~ show that XtremeDistil leads to massive compression of teacher models like mBERT by up to 35x in terms of parameters and 51x in terms of latency for batch inference while retaining 95\\% of its F1-score for NER over 41 languages. \nFor NMT, experiments have been done on OpenNMT, WMT, IWSLT and TedTalk datasets. Kim et al.~ make the following observations: (1) Sequence-level knowledge distillation (Seq-KD) does better than word-level knowledge distillation (Word-KD) on English $\\rightarrow$ German and performs similarly on Thai $\\rightarrow$ English. 
(2) Combining them (Seq-KD + Word-KD) results in further gains, indicating that these methods provide orthogonal means of transferring knowledge from the teacher to the student: Word-KD is transferring knowledge at the local (i.e. word) level while Seq-KD is transferring knowledge at the global (i.e. sequence) level. (3) Applying weight pruning on top of knowledge distillation results in a student model that has 13x fewer parameters than the original teacher model, with a decrease of 0.4 BLEU. Tan et al.~ show that one model is enough to handle multiple languages (up to 44 languages), with comparable or even better accuracy than individual models. Their method achieves larger improvements on some languages, such as Da, Et, Fi, Hi and Hy, than others. This is correlated with the data size of the languages. When a language is of smaller data size, it may get more improvement due to the benefit of multilingual training.\n\\begin{table}\n \\centering\n \\scriptsize\n \\begin{tabular}{|p{0.3in}|l|l|l|l|l|l|l|l|l|l|l|l|l|l|}\n\\hline\nMethod&Method&Teacher/Student&Size&MRPC&MNLI&MNLI-m&SST-2&QQP&QNLI&RTE&CoLA&STS-B&SQuAD&SQuAD\\\\ \nCategory&& models&& F1& Acc& Acc& Acc&F1& Acc& Acc& MCC&Spearman& 1.1 F1& 2.0 F1\\\\\n\\hline\n\\hline\n\\multirow{3}{0.3in}{Original Models}&BERT-B~&-& 109M&88.9&83.4&84.6&93.5&71.2&90.5&66.4&52.1&85.8&88.4&77.7\\\\\n\\cline{2-15}\n&BERT-L~&-& 340M&89.3&85.9&86.7&94.9&72.1&92.7&70.1&60.5&86.5&90.9&81.9\\\\\n\\cline{2-15}\n&MTDNN (ensemble)~&-& 340*4M&90.0&87.2&86.7&95.6&72.4&93.9&80.9&61.5&88.3&-&-\\\\\n\\hline\n\\hline\n\\multirow{19}{*}{\\rotatebox{90}{Knowledge Distillation Methods}}&Distilled-BiLSTM~&BERT-L/BiLSTM&0.96M&-&72.6&73.0&90.7&68.2&-&-&-&-&-&-\\\\\n\\cline{2-15}\n&Mixed-vocab. 
training~&BERT-L/BERT-12&10.9M&87.2&80.5&80.7&90.6&-&-&-&-&-&-&-\\\\\n\\cline{2-15}\n&TinyBERT~&BERT-B/BERT-4&14.5M&86.4&81.8&82.5&92.6&71.3&87.7&66.6&44.1&80.4&82.1&71.8\\\\\n\\cline{2-15}\n&BERT-EMD~&BERT-B/BERT-4&14.5M&87.6&80.6&82.1&91.0&69.3&87.2&66.2&25.6&82.3&-&-\\\\\n\\cline{2-15}\n&MobileBERT~&BERT-L/BERT-6&25.3M&88.8&82.6&83.3&92.8&70.2&90.6&66.2&50.5&84.4&90.0&79.2\\\\\n\\cline{2-15}\n&MobileBERT+Quantization~&BERT-L/BERT-6&25.3M&87.0&-&83.9&91.9&-&90.8&-&-&-&90.0&-\\\\\n\\cline{2-15}\n&MiniLM~&BERT-B/BERT-12&33M&89.5&-&85.7&93.0&91.3&91.5&73.3&58.5&-&-&81.7\\\\\n\\cline{2-15}\n&SqueezeBERT~&BERT-B/SqueezeBERT&51.1M&87.8&81.1&82.0&91.4&80.3&90.1&73.2&46.5&86.7&-&-\\\\\n\\cline{2-15}\n&DistilBERT~&BERT-B/BERT-4&52.2M&82.4&78.0&78.9&91.4&68.5&85.2&54.1&32.8&76.1&81.2&64.1\\\\\n\\cline{2-15}\n&Patient KD~&BERT-B/BERT-4&52.2M&82.6&79.3&79.9&89.4&70.2&85.1&62.3&24.8&79.8&79.5&64.6\\\\\n\\cline{2-15}\n&BERT-of-Theseus~&BERT-B/BERT-6&66M&87.6&82.1&82.4&92.2&71.6&89.6&66.2&47.8&84.1&-&-\\\\\n\\cline{2-15}\n&MiniLM~&BERT-B/BERT-6&66M&88.4&-&84.0&92.0&91.0&91.0&71.5&49.2&-&-&76.4\\\\\n\\cline{2-15}\n&BERT-EMD~&BERT-B/BERT-6&66M&89.8&83.5&84.7&93.3&72.0&90.7&71.7&47.5&86.8&-&-\\\\\n\\cline{2-15}\n&Patient KD~&BERT-B/BERT-6&67M&85.0&81.0&81.5&92.0&70.7&89.0&65.5&43.5&81.6&85.3&69.8\\\\\n\\cline{2-15}\n&Vanilla KD~&BERT-B/BERT-6&67M&86.2&79.8&80.2&91.5&70.1&88.3&64.7&-&-&-&-\\\\\n\\cline{2-15}\n&DistilBERT~&BERT-B/BERT-6&67M&86.9&81.3&82.6&92.5&70.1&88.9&58.4&49.0&81.3&86.2&69.5\\\\\n\\cline{2-15}\n&TinyBERT~&BERT-B/BERT-6&67M&87.3&83.2&84.6&93.1&71.6&90.4&70.0&51.1&83.7&87.5&77.7\\\\\n\\cline{2-15}\n&Pretrained Distillation~&BERT-B/BERT-6&67M&86.8&82.2&82.8&91.8&70.4&88.9&65.3&-&-&-&-\\\\\n\\cline{2-15}\n&MTDNN-KD~&MTDNN/MTDNN-KD&340M&91.1&86.7&87.5&95.6&72.7&96.0&85.1&65.4&89.6&-&-\\\\\n\\hline\n\\hline\nParam.&ALBERT-B~ (dev)&-&12M&-&-&81.6&90.3&-&-&-&-&-&89.3&80\\\\\n\\cline{2-15}\nSharing&ALBERT-L~ 
(dev)&-&18M&-&-&83.5&91.7&-&-&-&-&-&90.6&82.3\\\\\n\\hline\nTensor Decomp.&FLOP~&-&80M&88.61&-&-&92.09&-&89.05&-&-&88.18&-&-\\\\\n\\hline\n\\multirow{3}{*}{Pruning}&RPP Iterative Magnitude Pruning~&-&138M&88.1&86.1&85.7&92.4&91.2&92.3&70.1&82.8&-&90.23&75.3\\\\\n\\cline{2-15}\n&Iterative Magnitude Pruning~&-&170M&83.5&77&82.5&91.3&85.1&90.2&68.6&76.3&-&85.3&-\\\\\n\\cline{2-15}\n&LayerDrop~&-&66M&85.3&-&82.9&92.5&-&89.4&-&-&-&-&-\\\\\n\\hline\nQuant.&Mixed-precision quant. (QBERTMP)~&-&-&-&82.29&81.75&92.08&-&-&-&-&-&86.95&-\\\\\n\\hline\nSubQuad Trans.&Linformer~&-&-&-&-&-&93.1&90.8&91.2&-&-&-&-&-\\\\\n\\hline\n \\end{tabular}\n\\caption{Comparison of various methods across various GLUE~ and SQuAD tasks. Please refer~ for detailed description of tasks. Top part (first three rows) shows results for basic Transformer methods (teacher models). Middle part shows results for knowledge distillation methods. Bottom part shows results for a mix of other methods across categories. BERT-L=BERT-large, BERT-B=BERT-base, BERT-$i$=$i$-layer BERT. MCC refers to Matthews correlation. Results for SQuAD are on dev set. Empty entries indicate that the papers do not report those results, or NA entries.}\n \\label{tab:kdSummary2}\n\\end{table}\nTable~\\ref{tab:kdSummary2} compares various knowledge distillation methods (besides other model compression methods) across various GLUE~ and SQuAD tasks. Also, we report model size for both the student as well as the teacher models. Different distillation methods use one of these as the teacher: BERT-large, BERT-base or MTDNN. PKD~ outperforms the Vanilla KD~ on almost all the datasets except for MRPC. TinyBERT-4 significantly outperforms the 4-layer BERT-PKD and DistilBERT by a margin of at least 4.4\\%, with ~28\\% parameters and 3.1x inference speedup. Compared with the teacher BERT-base, 4-layer TinyBERT is 7.5x smaller and 9.4x faster in the model efficiency, while maintaining competitive performances. 
The 6-layer TinyBERT achieves comparable results with the teacher. Overall, TinyBERT consistently outperforms both the 4-layer and 6-layer baselines like PKD, DistilBERT and MiniLM.\nTurc et al.~ show how appropriate pretraining can improve the quality of distillation. MiniLM outperforms DistilBERT and TinyBERT across most tasks. The 6-layer MiniLM is 2.0x faster than original BERTBASE, while retaining more than 99\\% performance on a variety of tasks, such as SQuAD 2.0 and MNLI. Distilled MT-DNN significantly outperforms the original MT-DNN on 7 out of 9 GLUE tasks. 4-layer BERT-EMD outperforms 4-layer DistilBERT and BERT-PKD by a substantial margin, even with only 30\\% parameters and inference time. Furthermore, it exceeds the TinyBERT model by 2.3\\% accuracy on RTE, 2.2\\% F1 on MRPC, and 1.9\\% Spearman correlation on STS-B. 6-layer BERT-EMD performs better than the 12-layer BERT-base model on 7 out of 9 tasks, with only about 50\\% parameters and inference time of the original BERT-base model. Tang et al.~ distill BERT to BiLSTMs. They observe that the distilled BiLSTM model uses 349 times fewer parameters than BERT-large and is 434 times faster. Also, mixed vocab training by Zhao et al.~ produces a small 12-layer model that performs competitively with 6-layer PKD and 4-layer DistilBERT while being $\\sim$5-6x smaller.\nMobileBERT is 4.3x smaller and 5.5x faster than BERT-base. On the SQuAD v1.1/v2.0 question answering task, MobileBERT achieves a dev F1 score of 90.0/79.2 (1.5/2.1 higher than BERT-base). On the natural language inference tasks of GLUE, MobileBERT can achieve a GLUE score of 77.7, which is only 0.6 lower than BERT-base, with a latency of 62 ms on a Pixel 4 phone. While quantization can further compress MobileBERT by 4x, there is nearly no performance degradation from it. SqueezeBERT is approximately half the size of BERT-base. MobileBERT is half the size of SqueezeBERT. 
SqueezeBERT is 4.3x faster than BERT-base, while MobileBERT is 3.0x faster than BERT-base. On 4 of the GLUE tasks SqueezeBERT outperforms MobileBERT, while on the other 4 MobileBERT outperforms SqueezeBERT. MobileBERT and SqueezeBERT outperform BERT-base significantly across all tasks.\nTo summarize, KD is a popular method for text-based model compression. Various methods have proposed information copying using logits, softmax output, attention sub-layer output, value relation, and relative dissimilarity information, from both the last layer as well as intermediate layers of the teacher. Many methods have been proposed to handle complex teacher-student configuration mismatches in terms of vocabulary, number of attention heads, and hidden layer sizes. Also, KD has been found to be very effective in complex problem settings like multi-lingual tasks and tasks with a large number of classes. Learning from noisy teachers, teacher assistants, or an ensemble of teachers has been found to be effective as well. KD is the best model compression method especially in settings where a large amount of unlabeled data exists; distillation with data pseudo-labeled by the teacher leads to very effective students.\n\\label{sec:parameterSharing}\nRather than removing weights or reducing \\#bits to store them, parameter sharing methods reduce model size by finding weight blocks that can share the same weight. Character-based language models learn embeddings for characters and use them to compose word embeddings. 
In a sense, we can think of various words sharing these character embedding parameters. Further, various parameter sharing methods have been proposed to reduce the large word embedding matrix size. Finally, there are multiple Transformer architectures which benefit from the parameter sharing philosophy. We discuss these methods in this section.\nFig.~\\ref{fig:charCNN} illustrates various character-aware language model architectures. Ling et al.~ proposed their character to word (C2W) model which constructs vector representations of words by composing characters using BiLSTMs. Relative to traditional word representation models that have independent vectors for each word type, C2W requires only a single vector per character type and a fixed set of parameters for the compositional model. As input, we define an alphabet of characters $C$. For English, this vocabulary would contain an entry for each uppercase and lowercase letter as well as numbers and punctuation. Thus, compared to the word embedding matrix, this model is much smaller. Despite the compactness of this model, these ``composed'' word representations yield comparable results across multiple text classification tasks. \nJozefowicz et al.~ propose two variants for composing word embeddings using character embeddings. 
In the first CNN-Softmax variant, they use character CNNs (Convolutional Neural Networks) to compose word embeddings from character embeddings both at the input side as well as at the output softmax layer. The character-CNN sub-networks at the input or the output do not share weights. The composed word embeddings are fed to an LSTM to generate the output. In the second Char-LSTM variant, character CNN is used to compose word embeddings on the input side. The composed word embeddings are fed to an LSTM to generate an output which is further fed to a small LSTM that predicts the target word one character at a time. Thus, the word and character-level models are combined, and predictions are made one character at a time, thus allowing to compute probabilities over a much smaller vocabulary. Kim et al.~ propose another variant where at the output side they continue to use word embeddings, but at the input side they compose word embeddings using a highway network on top of a character CNN. The highway network's output is used as the input to a multi-layer LSTM, whose last hidden state output is fed to the output softmax layer.\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\columnwidth]{charCNN}\n \\caption{Character-aware Language Models}\n \\label{fig:charCNN}\n\\end{figure}", "id": "1d632f6e-824d-41b6-8395-ec40e6c541ab", "level": "subsection", "origin_cites_number": 3, "parent_id": "57903397-1a0f-4365-9fd9-a6bf95e76421", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Parameter sharing" ], [ "subsection", "Character-aware Language Models" ] ], "subsections": [], "title": "Character-aware Language Models" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 4522, 4520, 4036, 4518, 4521, 4519 ], "content": "Given a weight matrix $W$ and a budget $K$, we want to share weights within $W$ to have a max of $K$ unique values. 
A na\\\"ive implementation of random weight sharing can be trivially achieved by maintaining a secondary matrix consisting of each connection's group assignment. But this secondary matrix itself consumes memory. Hence, Chen et al.~ propose to use hashing. HashedNets use a low-cost hash function (like xxhash\\footnote{\\url{https://code.google.com/p/xxhash/}}) to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value.\nUnlike HashedNets where weights are randomly grouped, parameter sharing mechanisms in Toeplitz-like structured matrices~ are highly specific and deterministic. Toeplitz matrices have parameters tied along diagonals. The displacement rank of all Toeplitz matrices is up to 2. Toeplitz-like matrices allow the displacement rank $r$ to be higher. They include products and inverses of Toeplitz matrices, and their linear combinations. The displacement rank $r$ serves as a knob on modeling capacity. High displacement rank matrices are increasingly unstructured. With displacement rank $r$, there are $2nr$ free parameters in the Toeplitz-like structured matrix. Toeplitz transforms can be applied not just to the embedding matrix but to all weight matrices in an RNN model. Tay et al.~ use a similar Toeplitz-like structured matrix method with Hamilton Products in Quaternion Algebra to propose Quaternion Transformers which lead to a 75\\% parameter reduction in the Transformer architecture.\nAnother method for parameter sharing is to share low-rank factors across layers in a recurrent model. In this method, we first represent a weight matrix $W$ using matrix factorization as $W=W_a W_b$. Thus, the hidden layer output for layer $l$ at time $t$ can be written as follows.\n\\begin{eqnarray}\nh_t^l=\\sigma\\left[W_a^l W_b^l h_t^{l-1}+U_a^l U_b^l h_{t-1}^l + b^l\\right] \n\\end{eqnarray}\nBut we can share some low-rank factors by setting $W_b^l=U_b^{l-1}$. 
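A back-of-the-envelope sketch of the resulting savings (sizes are illustrative; each $n \times n$ recurrent matrix is factored into an $n \times r$ and an $r \times n$ factor, and sharing $W_b^l = U_b^{l-1}$ removes one factor per layer after the first):

```python
def factored_rnn_params(n, r, layers, share=False):
    """Parameter count for a stacked RNN whose matrices W and U (each
    n x n) are factored as W_a W_b and U_a U_b with rank r. With
    share=True, W_b of layer l reuses U_b of layer l-1, saving one
    (r x n) factor in every layer except the first."""
    total = layers * 4 * n * r          # W_a, W_b, U_a, U_b per layer
    if share:
        total -= (layers - 1) * n * r
    return total

unfactored = 3 * 2 * 1024 * 1024                       # two full 1024 x 1024 matrices, 3 layers
factored = factored_rnn_params(1024, 64, 3)            # 786,432 parameters
shared = factored_rnn_params(1024, 64, 3, share=True)  # 655,360 parameters
```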
The combination of matrix factorization and parameter sharing leads to large model compression.\nAnother way of compressing the embedding matrix is to divide the vocabulary $V$ into frequent and infrequent word sets $B$ and $C$ respectively. Infrequent words' embeddings are represented as sparse linear combinations of frequent words' embeddings~. This is inspired by the observation that, in a dictionary, an unfamiliar word is typically defined by common words. A dense embedding is assigned to each common word; an infrequent word, on the other hand, computes its vector representation by a sparse combination of common words' embeddings. This compression is useful both for the word embedding matrix and for the output layer of RNNs/LSTMs. Let $U\\in R^{E\\times |B|}$ be the learned embedding matrix of common words where $E$ is the embedding dimension. For a word $w\\in C$, we shall learn a sparse vector $x\\in R^{|B|}$ as the sparse code of the word. Once we know $x$, the embedding for a word $w\\in C$ can be written as follows.\n\\begin{eqnarray}\n \\text{embedding}(w)=\\sum_{j=1}^{|B|} x_jU_j\n\\end{eqnarray}\nwhere $U_j$ is the $j^{th}$ column of $U$. To learn the sparse representation of word $w\\in C$, the following problem needs to be solved.\n\\begin{eqnarray}\n\\min_x ||Ux-A||_2^2+\\alpha ||x||_1+\\beta |1^Tx-1|+\\gamma 1^T \\max(0,-x)\n\\end{eqnarray}\nwhere $A$ is the embedding for the rare word $w$. The last two regularization terms favor a solution that sums to 1 and that is non-negative (for psychological interpretation concerns), respectively. \nLightRNN~ compresses the word embedding matrix from $O(|V|)$ to $O(\\sqrt{|V|})$. It uses a 2-Component shared embedding for word representations. We allocate every word in the vocabulary into a word-allocation table, each row of which is associated with a learned vector, and each column associated with another learned vector. Table~\\ref{tab:wordAllocation} shows an example of a word allocation table. 
Depending on its position in the table, a word is jointly represented by two components: a row vector and a column vector. Thus, we only need $2\\sqrt{|V|}$ vectors to represent a vocabulary of $|V|$ unique words, which is far fewer than $|V|$ vectors. \nThe input and output use different embedding row/column vectors but they share the same word-allocation table. The word-allocation table is created using a bootstrap procedure that iteratively refines the word allocation based on the learned word embeddings. The embeddings (i.e. row and column vectors) are learned with a language modeling loss, using an RNN on top of the embedding layer.\n\\begin{table}\n \\centering\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\nEmbedding&$x_1^c$&$x_2^c$&$x_3^c$&$x_4^c$\\\\\n\\hline\n$x_1^r$&january&february&...&...\\\\\n\\hline\n$x_2^r$&one&two&...&...\\\\\n\\hline\n$x_3^r$&...&...&...&...\\\\\n\\hline\n$x_4^r$&...&...&...&...\\\\\n\\hline\n\\end{tabular}\n \\caption{An Example of a Word Allocation Table}\n \\label{tab:wordAllocation}\n\\end{table}\nFinally, Suzuki et al.~ propose a Skipgram~ training method with parameter sharing as follows. Split every embedding vector of size $D$ into $B$ equal sub-vectors of size $C$. Thus $D=B\\times C$. We assign a limited number of reference vectors to each block of block-splitting vectors. E.g., the number of reference vectors becomes $K\\times B$ if we assign $K$ reference vectors to each block. Each reference vector is of size $C$. Skipgram training optimization remains the same except for these extra parameter sharing constraints (applied to both the input and output embedding vectors). Liu et al.~ propose a very similar method, Slim Embeddings, where the embeddings are learned using an RNN language model rather than the Skipgram method. Slim Embeddings is closely related to HashedNets~, LightRNN~ and the Character-Aware Language model~. In HashedNets, all elements in a parameter matrix are mapped into a vector through a hash function. 
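This hashing trick can be sketched in a few lines (here `zlib.crc32` stands in for the low-cost xxhash function; the bucket count and virtual matrix size are illustrative assumptions):

```python
import zlib

K = 8                                          # number of hash buckets
shared_weights = [0.1 * k for k in range(K)]   # the only stored parameters

def bucket(i, j):
    """Map a connection position (i, j) to a bucket id; the assignment is
    recomputed on the fly, so no secondary index matrix is stored."""
    return zlib.crc32(f"{i},{j}".encode()) % K

def weight(i, j):
    return shared_weights[bucket(i, j)]

# A "virtual" 100 x 100 weight matrix backed by only K real parameters
row0 = [weight(0, j) for j in range(100)]
```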
In the Slim Embeddings approach, however, we randomly share sub-vectors instead of single elements. Slim Embeddings differs from LightRNN in that it is able to control the compression ratio to any arbitrary value, while LightRNN can only compress at the rate of the square or cube root of the vocabulary size, which could be too harsh in practical applications. In the character-aware language model, if we treat the sequence of sub-vector ids (virtual characters) as each word's representation, the word embedding can then be treated as concatenated unigram character feature vectors. The drawback of that approach is that the model is more complicated and, to speed up inference, it needs to pre-compute the word embeddings, so it cannot stay in its compact form during inference. The Slim Embeddings model is much simpler and easier to tune, and during inference it uses much less space and can even decrease the complexity of inference.
A standard Transformer does not share parameters across layers and also has a fixed number of encoder layers. ALBERT~ incorporates two parameter reduction techniques: 
\begin{itemize}
 \item Factorized embedding parameterization. That is, it decomposes the large vocabulary embedding matrix into two small matrices.
Thus, it reduces the embedding parameters from $O(V \times H)$ to $O(V \times E + E \times H)$ where $H \gg E$.
 \item Cross-layer parameter sharing: There are multiple ways to share parameters, e.g., only sharing feed-forward network (FFN) parameters across layers, or only sharing attention parameters. The default choice for ALBERT is to share all parameters across layers.
\end{itemize}
An ALBERT configuration similar to BERT-large has 18x fewer parameters and can be trained about 1.7x faster. Dehghani et al.~ propose Universal Transformers, where the number of encoder layers is not pre-decided and all encoder layers share the same parameters. Certain symbols (e.g. some words or phonemes) are usually more ambiguous than others. It is therefore reasonable to allocate more processing resources to these more ambiguous symbols, letting them undergo more self-attention transformations than non-ambiguous ones. To this end, they provide a dynamic per-position halting mechanism for modulating the number of computational steps needed to process each input symbol (called the ``ponder time'') before the representation is passed on as input to the decoder. The idea of sharing weights across layers in Transformers has also been explored in~.
\begin{table}
 \centering
 \scriptsize
 \begin{tabular}{|l|p{0.8in}|l|l|l|l|l|l|}
 \hline
Task&Dataset&Method&Base Model&Metric&Size (Comp; Orig)&Eval.
(Comp; Orig)\n\\\\\n\\hline\n\\hline\nLanguage modeling&1B Word Benchmark&Char-CNN (input embeddings)~&2-layer word-BiLSTM&Perplexity (L)&1.04B; 1.8B&30.0; 30.6\\\\\n\\hline\nLanguage modeling&1B Word Benchmark&Char-CNN (in/output embeddings)~&2-layer word-BiLSTM&Perplexity (L)&0.39B; 1.8B&35.8; 30.6\\\\\n\\hline\nLanguage modeling&1B Word Benchmark&LightRNN~&word-LSTM&Perplexity (L)&41M; 1.6G&66; 85\\\\\n\\hline\nLanguage modeling&1B Word Benchmark&Slim Embeddings~&2-layer word-BiLSTM&Perplexity (L)&0.25B; 1.8B&38.3; 30.6\\\\\n\\hline\nLanguage modeling&ACLW-Czech&CNN+Highway network~&word-LSTM&Perplexity (L)&64M; 83M&578; 701\\\\\n\\hline\nLanguage modeling&ACLW-Czech&LightRNN~&word-LSTM&Perplexity (L)&18M; 83M&558; 701\\\\\n\\hline\nLanguage modeling&ACLW-Czech&Slim Embeddings~&word-LSTM&Perplexity (L)&17M; 83M&528; 701\\\\\n\\hline\nLanguage modeling&ACLW-English&CNN+Highway network~&word-LSTM&Perplexity (L)&20M; 25M&216; 236\\\\\n\\hline\nLanguage modeling&ACLW-English&LightRNN~&word-LSTM&Perplexity (L)&17M; 25M&191; 236\\\\\n\\hline\nLanguage modeling&ACLW-English&Slim Embeddings~&word-LSTM&Perplexity (L)&7M; 25M&187; 236\\\\\n\\hline\nLanguage modeling&ACLW-French &CNN+Highway network~&word-LSTM&Perplexity (L)&44M; 56M&190; 202\\\\\n\\hline\nLanguage modeling&ACLW-French &LightRNN~&word-LSTM&Perplexity (L)&17M; 56M&176; 202\\\\\n\\hline\nLanguage modeling&ACLW-French &Slim Embeddings~&word-LSTM&Perplexity (L)&12M; 56M&162; 202\\\\\n\\hline\nLanguage modeling&ACLW-German &CNN+Highway network~&word-LSTM&Perplexity (L)&104M; 137M&305; 347\\\\\n\\hline\nLanguage modeling&ACLW-German &LightRNN~&word-LSTM&Perplexity (L)&18M; 137M&281; 347\\\\\n\\hline\nLanguage modeling&ACLW-German &Slim Embeddings~&word-LSTM&Perplexity (L)&17M; 137M&261; 347\\\\\n\\hline\nLanguage modeling&ACLW-Russian&CNN+Highway network~&word-LSTM&Perplexity (L)&152M; 200M&313; 353\\\\\n\\hline\nLanguage modeling&ACLW-Russian&LightRNN~&word-LSTM&Perplexity (L)&19M; 200M&288; 
353\\\\\n\\hline\nLanguage modeling&ACLW-Russian&Slim Embeddings~&word-LSTM&Perplexity (L)&19M; 200M&274; 353\\\\\n\\hline\nLanguage modeling&ACLW-Spanish&CNN+Highway network~&word-LSTM&Perplexity (L)&48M; 61M&169; 186\\\\\n\\hline\nLanguage modeling&ACLW-Spanish&LightRNN~&word-LSTM&Perplexity (L)&18M; 61M&157; 186\\\\\n\\hline\nLanguage modeling&ACLW-Spanish&Slim Embeddings~&word-LSTM&Perplexity (L)&8M; 61M&149; 186\\\\\n\\hline\nLanguage modeling&PTB&CNN+Highway network~&word-LSTM&Perplexity (L)&5M; 20M&92.3; 85.4\\\\\n\\hline\nLanguage modeling&Wikipedia articles (ca)&C2W~&word emb.&Perplexity (L)&182K; 4.3M&34.92; 35.34\\\\\n\\hline\nLanguage modeling&Wikipedia articles (de)&C2W~&word emb.&Perplexity (L)&183K; 6.3M&41.94; 43.02\\\\\n\\hline\nLanguage modeling&Wikipedia articles (en)&C2W~&word emb.&Perplexity (L)&180K; 4.3M&57.39; 59.38\\\\\n\\hline\nLanguage modeling&Wikipedia articles (pt)&C2W~&word emb.&Perplexity (L)&178K; 4.2M&40.92; 46.17\\\\\n\\hline\nLanguage modeling&Wikipedia articles (tr)&C2W~&word emb.&Perplexity (L)&174K; 5.7M&32.88; 44.01\\\\\n\\hline\nMachine reading comp.&ReAding Comprehension from Examinations&ALBERT~&BERT-B&Acc (H)&18M; 108M&68.5; 68.2\\\\\n\\hline\nNLI&MNLI&Quaternion Attention~&2-layer att-GloVe&Acc (H)&200K; 700K&72.3; 73.6\\\\\n\\hline\nNLI&SciTail&Quaternion Attention~&2-layer att-GloVe&Acc (H)&200K; 700K&79.6; 79.0\\\\\n\\hline\nNLI&SNLI&Quaternion Attention~&2-layer att-GloVe&Acc (H)&200K; 700K&85.4; 86.2\\\\\n\\hline\nNMT (en$\\rightarrow$et)&IWSLT15&Quaternion Transformer~&Transformer&BLEU (H)&11M; 44M&13.1; 14.1\\\\\n\\hline\nNMT (en$\\rightarrow$ro)&IWSLT15&Quaternion Transformer~&Transformer&BLEU (H)&11M; 44M&18.5; 22.8\\\\\n\\hline\nNMT (en$\\rightarrow$vi)&IWSLT15&Quaternion Transformer~&Transformer&BLEU (H)&11M; 44M&28.0; 28.4\\\\\n\\hline\nPOS Tagging&PTB (ca)&C2W~&word-BiLSTM&Acc (H)&150K; 2M&98.92; 98.09\\\\\n\\hline\nPOS Tagging&PTB (de)&C2W~&word-BiLSTM&Acc (H)&150K; 2M&98.08; 97.51\\\\\n\\hline\nPOS 
Tagging&PTB (en)&C2W~&word-BiLSTM&Acc (H)&150K; 2M&97.36; 96.97\\\\\n\\hline\nPOS Tagging&PTB (pt)&C2W~&word-BiLSTM&Acc (H)&150K; 2M&97.47; 95.67\\\\\n\\hline\nPOS Tagging&PTB (tr)&C2W~&word-BiLSTM&Acc (H)&150K; 2M&91.59; 83.43\\\\\n\\hline\nQuestion answering&BABI&Universal Transformer~&Transformer&Avg error (L)&7.3M; 44M&0.21; 15.2\\\\\n\\hline\nQuestion answering&WikiQA&Quaternion Attention~&2-layer att-GloVe&MAP (H)&200K; 700K&66.2; 67.2\\\\\n\\hline\nSentiment analysis&IMDB&Quaternion Transformer~&2-layer Transformer&Acc (H)&100K; 400K&83.9; 82.6\\\\\n\\hline\nSentiment analysis&SST&Quaternion Transformer~&2-layer Transformer&Acc (H)&100K; 400K&80.5; 78.9\\\\\n\\hline\nSpeech recognition&2000 hour En. Speech&Toeplitz-like~&3-layer RNN&WER (L)&790K; 1.85M&48.4; 43.5\\\\\n\\hline\nSpeech recognition&2000 hour En. Speech&Toeplitz-like~&5-layer LSTM&WER (L)&2.00M; 9.12M&33.5; 33.1\\\\\n\\hline\nSubject verb agreement&SVA dataset&Quaternion Transformer~&2-layer Transformer&Acc (H)&100K; 400K&94.7; 94.8\\\\\n\\hline\nSubject verb agreement&SVA dataset&Universal Transformer~&Transformer&Acc (H)&7.3M; 44M&0.992; 0.962\\\\\n\\hline\nWord analogy &GSEm, GSYN, MSYN&Shared ref. vec. ($K$=16/256, $B$=256)~&SGNS&Acc (H)&98/196MB; 1565MB&64.6/64.5; 64.7\\\\\n\\hline\nWord analogy &GSEm, GSYN, MSYN&k-means quant. ($K$=16/256, $B$=256)~&SGNS&Acc (H)&98/196MB; 1565MB&64.0/64.5; 64.7\\\\\n\\hline\nWord similarity&7 datasets&Shared ref. vec. ($K$=16/256, $B$=256)~&SGNS&Acc (H)&98/196MB; 1565MB&65.5/67.1; 67.2\\\\\n\\hline\nWord similarity&7 datasets&k-means quant. ($K$=16/256, $B$=256)~&SGNS&Acc (H)&98/196MB; 1565MB&64.4/67.0; 67.2\\\\\n\\hline\n \\end{tabular}\n\\caption{Comparison of various parameter sharing methods (sorted by Task and then Dataset). 7 datasets for word similarity are MEN, MTurk, RARE, SLex, SCWS, WSR, WSS. SGNS=SkipGram with negative sampling. 
In the metric column, H means high is better while L means low is better.}
 \label{tab:paramSharingSummary}
\end{table}
Table~\ref{tab:paramSharingSummary} compares various parameter sharing methods across different tasks and datasets. Accuracies of both the original and the compressed (comp.) models are shown. Also, we report model size (in terms of number of parameters or memory size) for both the original as well as the compressed models. For the same task, dataset and model combination, different papers report different accuracy of the original model because of slight changes in training hyper-parameters; hence we report accuracy of the original model for each row. Since many parameter sharing methods have been used to compress word embeddings, the most common application is language modeling. 
For language modeling, experiments have been done on the One Billion Word Benchmark, ACLW, PTB and Wikipedia articles datasets across multiple languages using parameter sharing methods like Char-CNN~, LightRNN~, Slim embeddings~, CNN+Highway networks~ and C2W~. C2W~ always outperforms word lookup tables and improvements are especially pronounced in Turkish, which is a highly morphological language, where word meanings differ radically depending on the suffixes used. Jozefowicz et al.~ observe that Char-CNNs, especially with character embeddings used at both the input and the output, can lead to a 4.6x model size reduction with a slight increase in perplexity. On the One-Billion-Word benchmark, LightRNN~ achieves much lower perplexity compared to word-LSTMs, whilst reducing the model size by a factor of 40-100, and speeding up the training process by a factor of 2. LightRNN~ significantly reduces the model size while at the same time outperforming the CNN+Highway network~ method. While the model sizes of the CNN+Highway network method increase linearly with respect to the vocabulary size, the model size of LightRNN stays almost constant on the ACLW datasets.
Slim Embeddings~ is substantially better than LightRNN (lower perplexity with smaller models). On the PTB and 44M giga-word corpus datasets, Slim Embeddings applied at the input layer can maintain the same perplexity for a word-LSTM using just 1\% (and 0.2\%) of the trainable parameters respectively. 
k-means quantization and shared reference vectors are also methods for compression of word embeddings using parameter sharing. Suzuki et al.~ showed significant gains over typical skipgram (SGNS) embeddings in terms of model size reduction while retaining similar accuracies for word analogy and similarity tasks. The model size of their method with shared reference vectors with `K = 16, B = 64' was just 24MB, approximately 65 times smaller than that of the original SGNS method. They also showed that SGNS with shared reference vectors was better than SGNS with the block-wise k-means post-processing method. Unfortunately, there exists no good comparison between the Slim Embeddings and the shared reference vectors methods.
For the MRC task, ALBERT~ pushed the accuracy from 68.2 to 68.5 using 6x fewer parameters compared to BERT-base. On the NLI task, a tiny Quaternion-Att model (50 dimensions) achieves comparable (or occasionally marginally better or worse) performance compared to typical attention over GloVe (200 dimensions), gaining a 68\% parameter saving across three datasets. For sentiment analysis, Quaternion Transformers~ lead to +1.3\%/+1.6\% gains on the IMDb and SST datasets respectively while maintaining a 75\% reduction in parameter cost. Quaternion Transformers~ have been shown to outperform the vanilla Transformer for the mathematical language understanding task as well, with 25\% of the parameters. At the same compression rate, Quaternion Transformers lose only very little BLEU on the IWSLT 2015 NMT datasets.
For POS tagging, we can observe that the model using word lookup tables performs consistently worse than the C2W model~.
Universal Transformers~ (while being 1/6th the size) outperform standard Transformers on a wide range of algorithmic and language understanding tasks, including the challenging LAMBADA language modeling task. On speech recognition, Lu et al.~ study mechanisms for learning compact RNNs and LSTMs via low-rank factorizations and parameter sharing schemes. A hybrid strategy of using structured matrices in the bottom layers and shared low-rank factors on the top layers is found to be particularly effective, reducing the parameters of a standard LSTM by 75\%, at a small cost of 0.3\% increase in WER, on a 2000-hr English Voice Search task. For LSTM parameter reduction, architecting upper layers with projection nodes to moderate rank, and bottom layers with Toeplitz-like transforms was found to be a particularly effective strategy. 
Overall, besides model compression, parameter sharing methods also act as a good regularizer. Parameter sharing in Transformers has been very successful. ALBERT was at the top of the GLUE leaderboard when it was proposed. Parameter sharing methods have also been widely used for compressing the embedding matrix, with Slim Embeddings being the most effective among them.
\label{sec:matrixDecomposition}
Sparse matrix decomposition has traditionally been used for applications like feature selection, collaborative filtering, and topic mining from text.
In this section, we discuss how various popular tensor decomposition methods like Singular Value Decomposition (SVD), Tensor-Train~, CP (CANDECOMP/PARAFAC)~ and Tucker~ can be used for model compression.
In this part, we will discuss methods where a matrix is factorized into two low-rank factors. Specifically, we replace a weight matrix $W$ with $W_1\times W_2$ such that the total number of parameters is significantly smaller.
A multi-layer RNN can be represented as follows.
\begin{eqnarray}
 h_t^l=\sigma(W_x^{l-1}h_t^{l-1}+W_h^l h_{t-1}^l +b^l)\\
 h_t^{l+1}=\sigma(W_x^l h_t^l + W_h^{l+1}h_{t-1}^{l+1}+b^{l+1})
\end{eqnarray}
Thus, there are two important weight matrices: the recurrent matrix $W_h^l$ and the inter-layer matrix $W_x^l$. Prabhavalkar et al.~ propose a method to jointly compress the recurrent and inter-layer matrices corresponding to a specific layer $l$ by determining a suitable recurrent projection matrix, denoted by $P^l\in R^{r^l\times N^l}$ of rank $r^l<N^l$, such that $W_h^l=Z_h^l P^l$ and $W_x^l=Z_x^l P^l$. First, $P^l$ is determined by computing an SVD of the recurrent weight matrix, which is then truncated, retaining only the top $r^l$ singular values. Thus, $W_h^l$ can be written as follows.
\begin{eqnarray}
 W_h^l=(U_h^l \Sigma_h^l)(V_h^l)^T=Z_h^lP^l
\end{eqnarray}
Thus, $P^l$ is set to $(V_h^l)^T$.
Further, we determine $Z_x^l$ as the solution to the following least-squares problem.
\begin{eqnarray}
 Z_x^l=\argmin_Y ||YP^l-W_x^l||_2^2
\end{eqnarray}
This solution can also be easily extended to LSTMs. Sak et al.~ also proposed a similar solution based on a combination of parameter sharing and matrix decomposition, but without SVD initialization. However, SVD initialization has typically been found to perform better.
Besides SVD, another way of matrix decomposition is sparse coding. Faruqui et al.~ propose using sparse coding to decompose word embedding matrices. Given a vocabulary of size $V$ and a word embedding matrix $X\in R^{L\times V}$, sparse coding aims at representing each input vector $x_i$ as a sparse linear combination, with sparse code $a_i$, over the basis vectors (columns) of a dictionary $D$, by solving the following problem. 
\begin{eqnarray}
 \argmin_{D,A} \sum_{i=1}^V ||x_i-Da_i||_2^2+\lambda||a_i||_1+\tau ||D||_2^2
\end{eqnarray}
where $D\in R^{L\times K}$ and $A\in R^{K\times V}$. Further, for interpretability, one can enforce all elements of $A$ and $D$ to be non-negative. For further compression, one can also enforce $A$ to be binary or ensure that each column of $A$ is a $K$-sized one-hot vector~. 
Lastly, Wang et al.~ combine pruning with matrix factorization for model compression and propose the FLOP (Factorized Low-rank Pruning) method. Let $W$ be a weight matrix. Structured pruning (removing a neuron, i.e., removing a column from the weight matrix) can be achieved by replacing the computation $Wx$ by $WGx$, where the diagonal sparsity-inducing matrix $G$, whose diagonal entries are pruning variables $z_k$, is learned using $L_0$ regularization over $WG$ along with the supervised loss. This effectively removes the subset of columns of $W$ with column indices $k$ such that $z_k=0$. One limitation is that this structured pruning method tends to produce lower performance than its unstructured counterpart. Hence, in the FLOP model, we first factorize $W=PQ$.
Let $r$ be \\#columns of $P$ (or equivalently \\#rows of $Q$), $p_k$ and $q_k$ be the $k$-th column of $P$ and $k$-th row of $Q$ respectively. We achieve structured pruning by introducing a pruning variable $z_k$ for each component. Thus, now, we can write $W$ as follows.\n\\begin{eqnarray}\n W=PGQ=\\sum_{k=1}^r z_k\\times (p_kq_k)\n\\end{eqnarray}\nwhere $G$ is again the diagonal matrix of pruning variables. After training, only columns and rows corresponding to non-zero diagonal values need to be stored, resulting in much smaller (but still dense) matrices $P$ and $Q$. The nonzero values of $G$ can be absorbed into either $P$ or $Q$. This structured pruning with factorization is much more effective compared to the vanilla structured pruning.", "id": "da1505f4-583c-4b3b-a84f-338a8c53f73d", "level": "subsection", "origin_cites_number": 5, "parent_id": "a059247f-141a-43b7-a6a9-f3440e7622cd", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Tensor decomposition" ], [ "subsection", "Two Low-Rank Factors" ] ], "subsections": [], "title": "Two Low-Rank Factors" }, { "cite_extract_rate": 1, "cites": [ 4526, 8800 ], "content": "The last layer of a language model is very large of the size $HV$ where $H$ is the size of the hidden layer and $V$ is vocabulary size. Each word by an output embedding of the same size $H$. Chen et al.~ propose a differentiated softmax method which varies the dimension of the output embeddings across words depending on how much model capacity is deemed suitable for a given word. In particular, it is meaningful to assign more parameters to frequent words than to rare words. By definition, frequent words occur more often in the training data than rare words and therefore allow to fit more parameters. They define partitions of the output vocabulary based on word frequency and the words in each partition share the same embedding size. 
Partitioning results in a sparse final weight matrix which arranges the embeddings of the output words in blocks, each one corresponding to a separate partition. The size of the final hidden layer $H$ is the sum of the embedding sizes of the partitions. While this method does not involve creation of multiple factors, it factorizes the original matrix into multiple blocks while setting the remaining part of the matrix to 0.\nVariani et al.~ propose a method called Word Encoded Sequence Transducers (WEST) which factorizes a matrix $E=C\\times D$ where $D$ is constrained to be a block diagonal matrix. The block diagonal nature of the second factor leads to large compression rates.", "id": "1b3ed6c2-9088-4bfb-8866-160b3884224a", "level": "subsection", "origin_cites_number": 2, "parent_id": "a059247f-141a-43b7-a6a9-f3440e7622cd", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Tensor decomposition" ], [ "subsection", "Factorizing into Block Diagonal Matrices" ] ], "subsections": [], "title": "Factorizing into Block Diagonal Matrices" }, { "cite_extract_rate": 0.8, "cites": [ 4529, 4527, 4530, 4528 ], "content": "Tensor train decomposition (TTD)~ is a standard tensor decomposition technique which decomposes a high dimensional tensor into multiple 2D and 3D tensors which can be multiplied together to reconstruct the original tensor. These factors are called TT-cores and the other dimensions are referred to as TT-ranks. TTD can be leveraged to compress various weight matrices in RNNs and LSTMs~. The first step is to represent a matrix as a multi-dimensional tensor by simple reshaping transformation and then use TTD on it. The values of TT–ranks directly define the compression ratio, so choosing them to be too small or too large will result into either significant performance drop or little reduction of the number of parameters. 
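To make the role of the TT-ranks in the compression ratio concrete, here is a small parameter-counting sketch. The reshaping into three factors and the chosen matrix sizes are illustrative assumptions, not taken from a specific paper.

```python
def tt_matrix_params(in_factors, out_factors, rank):
    """Parameters in a TT-matrix whose k-th core has shape
    (r_{k-1}, in_k * out_k, r_k), with boundary ranks r_0 = r_d = 1."""
    d = len(in_factors)
    ranks = [1] + [rank] * (d - 1) + [1]
    return sum(ranks[k] * in_factors[k] * out_factors[k] * ranks[k + 1]
               for k in range(d))

# A 32768 x 512 weight matrix reshaped as (32*32*32) x (8*8*8).
dense = 32768 * 512
for r in (4, 16, 64):          # larger TT-rank => more parameters kept
    tt = tt_matrix_params([32, 32, 32], [8, 8, 8], rank=r)
    print(r, tt, dense // tt)
```

The parameter count grows roughly quadratically in the TT-rank, which is why too large a rank gives little reduction while too small a rank loses accuracy.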
Typically, TT-ranks around 16 for small matrices and 64-192 for larger matrices result in a good trade-off between compression ratio and the accuracy metric of interest. Also, when we use TTD for weight matrices, we also need to reshape the inputs appropriately to be compatible in terms of dimensions. 
Compared with TT-RNN, Block-Term RNN (BTRNN)~ is not only more concise (when using the same rank), but also able to attain a better approximation to the original RNNs with far fewer parameters. BTD decomposes a high order tensor into a sum of multiple Tucker decomposition models. The redundant dense connections between input and hidden state are first tensorized to a $d$-dimensional tensor and then decomposed using low-rank BTD into a sum of $N$ different Tucker decompositions, where $N$ is the CP-rank. Each Tucker decomposition in turn consists of a core $d$-dimensional tensor and $d$ 3-dimensional factor tensors. While Ye et al.~ used BTD to compress RNNs, Ma et al.~ used BTD to compress the self-attention matrix in Transformers. They first build a single-block attention based on the Tucker decomposition where the query, key and value are mapped into three factor matrices and the core tensor is trainable and randomly initialized.
It is then straightforward to represent the multi-head attention using BTD.", "id": "4c1e5567-aabc-47e9-8344-624bb81f275f", "level": "subsection", "origin_cites_number": 5, "parent_id": "a059247f-141a-43b7-a6a9-f3440e7622cd", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Tensor decomposition" ], [ "subsection", "Tensor Train and Block Term Decomposition" ] ], "subsections": [], "title": "Tensor Train and Block Term Decomposition" }, { "cite_extract_rate": 0.777777777777777, "cites": [ 4527, 4530, 1568, 4515, 4524, 4525, 4526 ], "content": "\\begin{table}\n \\centering\n \\scriptsize\n \\begin{tabular}{|l|l|l|l|l|l|l|l|}\n \\hline\nTask&Dataset&Method&Base Model&Metric&Size (Comp; Orig)&Eval. (Comp; Orig)\n\\\\\n\\hline\n\\hline\nCTR prediction&Criteo CTR Challenge&TT-embedding~&MLP&LogLoss (L)&4.7M; 41.2M&0.4433; 0.4440\\\\\n\\hline\nLanguage modeling&1B Word Benchmark&BTD~&Transformer-XL Large&Perplexity (L)&0.16B; 0.8B&19.5; 21.8\\\\\n\\hline\nLanguage modeling&Enwiki-8&FLOP~&Transformer&BPC (L)&8M; 41M&1.13; 1.08\\\\\n\\hline\nLanguage modeling&PTB&WEST~&LSTM&Perplexity (L)&3.5M; 4.51M&116.84; 115.91\\\\\n\\hline\nLanguage modeling&PTB&BTD~&Transformer-XL-Base&Perplexity (L)&12M; 24M&49.8; 54.52\\\\\n\\hline\nLanguage modeling&PTB&TT-embedding~&Transformer-XL-Base&Perplexity (L)&18M; 24M&55.4; 54.52\\\\\n\\hline\nLanguage modeling&WikiText-103&FLOP~&Transformer&Perplexity (L)&50M; 151M&25.3; 24.1\\\\\n\\hline\nLanguage modeling&WikiText-103&TT-embedding~&Transformer-XL&Perplexity (L)&91M; 192M&25.67; 24.37\\\\\n\\hline\nLanguage modeling&WikiText-103&BTD~&Transformer-XL-Base&Perplexity (L)&85.3M; 151M&20.9; 24.0\\\\\n\\hline\nLanguage modeling&WikiText-103&TT-embedding~&Transformer-XL-Base&Perplexity (L)&130M; 151M&25.7; 24.0\\\\\n\\hline\nNMT (en$\\rightarrow$ja)&ASPEC&Compositional codes~&LSTM&BLEU (H)&2.97MB; 274MB&38.89; 37.93\\\\\n\\hline\nNMT (de$\\rightarrow$en)&IWSLT14&Compositional 
codes~&LSTM&BLEU (H)&2.11MB; 35MB&29.56; 29.45\\\\\n\\hline\nNMT (en$\\rightarrow$de)&WMT14&TT-embedding~&Transformer-Big&BLEU (H)&179M; 210M&28.53; 28.84\\\\\n\\hline\nNMT (en$\\rightarrow$de)&WMT16&BTD~&Transformer&BLEU (H)&21.2M; 52M&34.91; 34.5\\\\\n\\hline\nNP bracketing&Subset of PTB&Sparse coding~&Logistic regression&Acc (H)&120M; 120M&82.3; 77.9\\\\\n\\hline\nQuestion classification&TREC Questions&Sparse coding~&Logistic regression&Acc (H)&120M; 120M&81.5; 76.2\\\\\n\\hline\nSentiment analysis&IMDB&Compositional codes~&LSTM&Acc (H)&1.23MB; 78MB&87.37; 87.18\\\\\n\\hline\nSentiment analysis&IMDB&TT-embedding~&LSTM&Acc (H)&0.81M; 7.19M&89.7; 88.6\\\\\n\\hline\nSentiment analysis&SST&Sparse coding~&Logistic regression&Acc (H)&120M; 120M&81.4; 77.7\\\\\n\\hline\nSpeech recognition&3M Google voice utterances&Joint-SVD~&5-layer RNN&WER (L)&3.1M; 9.7M&12.9; 12.4\\\\\n\\hline\nSpeech recognition&3M Google voice utterances&Projections~&LSTM&WER (L)&2M; 2.2M&14.8; 17.5\\\\\n\\hline\nSpeech recognition&Live traffic utterances&WEST~&3-layer LSTM&WER (L)&4.75MB; 15MB&13.6; 13.7\\\\\n\\hline\nSpeech recognition&Live traffic utterances&WEST+Quantization~&3-layer LSTM&WER (L)&1.35MB; 15MB&13.7; 13.7\\\\\n\\hline\nText classification&20 Newsgroup (Computers)&Sparse coding~&Logistic regression&Acc (H)&120M; 120M&87.0; 79.7\\\\\n\\hline\nText classification&20 Newsgroup (Religion)&Sparse coding~&Logistic regression&Acc (H)&120M; 120M&88.8; 86.7\\\\\n\\hline\nText classification&20 Newsgroup (Sports)&Sparse coding~&Logistic regression&Acc (H)&120M; 120M&96.3; 95.9\\\\\n\\hline\nWord similarity&Simlex-999&Sparse coding~&Logistic regression&Correlation (H)&120M; 120M&38.9; 36.9\\\\\n\\hline\n \\end{tabular}\n\\caption{Comparison of various tensor decomposition methods (sorted by Task and then Dataset). In the metric column, H means high is better while L means low is better. For compositional codes, 16x32 coding was used. For BTD, two block term tensors were used. 
In~, logistic regression uses GloVe embeddings.}
 \label{tab:tensorDecompSummary}
\end{table}
Table~\ref{tab:tensorDecompSummary} compares various tensor decomposition methods across different tasks and datasets. Accuracies of both the original and the compressed (comp.) models are shown. Also, we report model size (in terms of number of parameters or memory size) for both the original as well as the compressed models. For the same task, dataset and model combination, different papers report different accuracy of the original model because of slight changes in training hyper-parameters; hence we report accuracy of the original model for each row. 
For CTR prediction, the TT-embedding method~ in Table~\ref{tab:tensorDecompSummary} uses 3 factors with a TT-rank of 16. It actually leads to an embedding compression of 16. With 4 factors and TT-rank=2, test loss increases to 0.4530 with a massive embedding compression of 4193 and an overall model size of 0.53M. Thus, substitution of large embedding layers with TT-embeddings leads to significant compression ratios (up to 2011 times) with a slight improvement in the test loss, and up to 4200 with a small drop in the test loss. 
For language modeling, BTD~ leads to an improved model with 20\% of the parameters of the Transformer-XL large model. For character-level language modeling using FLOP~, an 8M-sized FLOP model achieves 1.13 BPC on Enwiki8 while gradual pruning achieves 1.14. Thus, FLOP-based pruning, which combines pruning with matrix factorization, is better than both structured neuron pruning and gradual unstructured pruning. On PTB, BTD is clearly much better than TT-embedding. With half the model size compared to Transformer-XL-Base, BTD leads to a model with $\sim$10\% lower perplexity. On WikiText-103, while the FLOP-pruned model achieves 25.3 perplexity with a model size of 50M, gradual unstructured pruning and neuron pruning achieve 25.7 and 26.7 respectively with the same model size.
Thus, FLOP is better than other pruning methods. Again on Wiki-103, BTD is superior to TT-embedding. \nFor NMT, the loss-free compression rate reaches 92\\% on ASPEC dataset by pruning 90\\% of the connections. However, with the same pruning ratio, a modest performance loss is observed in IWSLT14 dataset. For the models using compositional coding~, the loss-free compression rate is 94\\% for the IWSLT14 dataset and 99\\% for the ASPEC dataset. Thus, compositional codes are much better compared to pruning. TT-embedding with TT-rank=64 leads to embedding compression of 15.3 on WMT14 dataset with marginal loss in the BLEU score. Lastly, BTD has been used to reduce Transformer size by more than half with improved BLEU on WMT16 (en$\\rightarrow$de) data.\nSparse coding for GloVe vectors~ has led to improved accuracies for multiple tasks like Noun Phrase (NP) bracketing, question classification, text classification and word similarity, while retaining the same model size. For sentiment analysis on the IMDB dataset, compositional codes method achieves a compression rate of 98\\% without performance loss. Further, TT-embedding method leads to a much smaller model compared to compositional codes with better accuracy. In the TT-embedding case, embedding compression rate is 441 with TT-rank=16.\nFor Speech recognition, Prabhavalkar et al.~ experimented with a 3M Google voice utterances dataset and found that a joint SVD with explained variance retained after the SVD as 0.6 leads to a 3x smaller RNN model without much performance loss. They improved upon Sak et al.'s projections method which led to a much higher WER on the same dataset. On live traffic utterances dataset, WEST~ leads to a 3x smaller model with a slightly reduced word error rate when using 3-layer LSTMs. Variani et al.~ further compress this model using quantization to obtain a 11x compressed 3-layer LSTM model without any performance loss. 
Thus, using quantization along with matrix decomposition seems to work well.\nOverall, matrix decomposition techniques are usually used in combination with parameter sharing and sometimes with quantization. They have been very effective in dealing with large input/output embedding matrices in RNNs and LSTMs. SVD, Tensor Train, CP, Tucker, BTD have been the most popular decomposition techniques found to be useful for model compression.", "id": "2c213350-b4bf-49d2-a046-a4fe16413654", "level": "subsection", "origin_cites_number": 9, "parent_id": "a059247f-141a-43b7-a6a9-f3440e7622cd", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Tensor decomposition" ], [ "subsection", "Summary" ] ], "subsections": [], "title": "Summary" }, { "cite_extract_rate": 1, "cites": [ 4531 ], "content": "\\label{sec:linearTransformers}\nMemory usage throughout neural network training can be categorized into three main types: (1) Model memory is used to store model parameters; (2) Optimizer memory is the additional memory used by the specific learning algorithm during the process; (3) Activation memory consists of the outputs of each layer, which are cached for reuse in backpropagation to compute gradients. For a BERT-base model, model memory is around 0.2 GB, optimizer memory is around 1GB, while activation memory is around 8.5GB~. Time and activation memory in Transformers grows quadratically with the sequence length. This is because in every layer, every attention head attempts to come up with a transformed representation for every position by ``paying attention'' to tokens at every other position. Quadratic complexity implies that practically the maximum input size is rather limited. 
Thus, we cannot extract semantic representation for long documents by passing them as input to Transformers.", "id": "65404f84-cbf0-4721-8f05-178f8230aa55", "level": "section", "origin_cites_number": 1, "parent_id": "3efda6e0-cf0a-4f26-8cb1-3dde0497c32c", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Transformers with Sub-Quadratic Complexity" ] ], "subsections": [ "4db79252-def3-4c73-83ab-c7863b1703fc", "f00ad6f8-f45a-4b14-ab1c-135e291338eb", "2202feab-4da3-4fb5-8d59-2d1d4fbac3f9" ], "title": "Transformers with Sub-Quadratic Complexity" }, { "cite_extract_rate": 1, "cites": [ 4531, 793, 794 ], "content": "A few efforts like BlockBERT~ try to reduce this quadratic complexity by a constant factor by introducing sparse block structures into the attention matrix. If we split the length-$N$ input sequence into $n$ blocks, $N\\times N$ attention matrix gets partitioned into $n\\times n$ blocks, where each block matrix is of the size $\\frac{N}{n}\\times\\frac{N}{n}$. Thus, BlockBERT reduces $O(N^2)$ memory consumption by a factor of $n$. \nChild et al.~ propose sparse transformers where sparse factorizations of the attention matrix reduce the quadratic complexity to $O(n\\sqrt{n})$. The key idea is to reduce the dense attention matrix to a sparse version by only computing attention on a sparse number of (query, key) pairs. They propose two kinds of sparse factorizations: strided and fixed. Strided attention implies having one head attend to the previous $l$ locations, and the other head attend to every $l^{th}$ location, where $l$ is the stride and chosen to be close to $\\sqrt{n}$. More heads could be used with a different stride value. Fixed attention assumes that specific positions summarize previous locations and propagate that information to all future positions. 
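To make the strided pattern concrete, the two head-specific index sets can be written down as boolean attention masks. The following is a minimal NumPy sketch; the function name and the exact index sets are illustrative, as the paper defines several closely related variants:

```python
import numpy as np

def strided_attention_masks(n, l):
    """Illustrative boolean masks for the two strided-attention heads.

    Head 1 (local): position i attends to the previous l positions.
    Head 2 (strided): position i attends to every l-th earlier position.
    With l chosen close to sqrt(n), each row keeps O(sqrt(n)) of the n
    possible entries, so per-layer cost drops from O(n^2) to O(n*sqrt(n)).
    """
    i = np.arange(n)[:, None]  # query positions (column)
    j = np.arange(n)[None, :]  # key positions (row)
    causal = j <= i
    local = causal & (i - j < l)           # previous l locations (incl. self)
    strided = causal & ((i - j) % l == 0)  # every l-th location back
    return local, strided

n, l = 16, 4  # stride l chosen close to sqrt(n)
local, strided = strided_attention_masks(n, l)
assert local[7].sum() == l    # row 7 attends to positions 4..7
assert strided[7].sum() == 2  # row 7 attends to positions 3 and 7
```

Combining the two heads, any position can reach any earlier position in at most two hops (a local window step followed by a strided jump), which is what lets the sparse pattern approximate full attention.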
\nThe Reformer architecture~ replaces the dot-product attention in a typical Transformer by one that uses locality-sensitive hashing (LSH), changing its complexity from $O(n^2)$ to $O(n\\log n)$, where $n$ is the length of the sequence. In a standard Transformer, we compute scaled dot-product attention as follows.\n\\begin{eqnarray}\n \\text{Attention}(Q,K,V)=\\text{softmax}\\left(\\frac{QK^T}{\\sqrt{d}}\\right)V\n\\end{eqnarray}\nwhere $Q$, $K$ and $V$ are the standard query, key and value components and $d$ is a scaling factor. Reformer uses a Shared QK Transformer, i.e., $Q=K$ enabled by sharing the matrix that projects words/hidden layer to $Q$ or $K$. Further, note that we are actually only interested in $\\text{softmax}(QK^T)$. Since softmax is dominated by the largest elements, for each query $q_i$ we only need to focus on the keys in $K$ that are closest to $q_i$. How can we find the nearest neighbors among the keys? Reformer uses LSH. LSH is used to cluster (hash-bucket) the positions into various groups, and then every position needs to focus only on others within the same bucket.", "id": "4db79252-def3-4c73-83ab-c7863b1703fc", "level": "subsection", "origin_cites_number": 3, "parent_id": "65404f84-cbf0-4721-8f05-178f8230aa55", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Transformers with Sub-Quadratic Complexity" ], [ "subsection", "Transformers with Super-Linear Complexity" ] ], "subsections": [], "title": "Transformers with Super-Linear Complexity" }, { "cite_extract_rate": 1, "cites": [ 7333, 7298, 798, 7371, 1473, 7827 ], "content": "Even better, there have been several efforts recently to reduce this quadratic complexity to linear. Most of these efforts choose a constant number of other positions to ``pay attention'' to so as to compute a transformed representation for any given position. They can model sequences tens of thousands of timesteps long using hundreds of layers. 
The methods differ in their approach towards selecting this constant number of other positions. We discuss a few such recently proposed methods in this section.\nIn Star-Transformers~, to reduce model complexity from $O(n^2)$ to linear, we replace the fully-connected attention matrix structure with a star-shaped topology, in which every two non-adjacent nodes are connected through a shared relay node. While ring connections connect a satellite node with two other satellite nodes, a radical connection connects a satellite node with the relay node. The idea is to update the star-center relay node based on the satellite nodes, and then update the satellite nodes using information from the star node and adjacent satellite nodes. \nThe Linformer architecture~ exploits low-rank factorization of the self-attention matrix to reduce overall self-attention complexity from $O(n^2)$ to $O(n)$ in both time and space. The main idea is to add two linear projection matrices $E_i, F_i\in R^{n\times k}$ when computing key and value. We first project the original $(n\times d)$-dimensional key and value layers into $(k\times d)$-dimensional projected key and value layers. We then compute an $(n \times k)$-dimensional context mapping matrix using scaled dot-product attention. If we can choose a very small projected dimension $k$, such that $k\ll n$, then we can significantly reduce the memory and time consumption. Overall, the complexity is $O(nk)$. Further, we can do three other forms of parameter sharing: (1) Headwise sharing: $E_i=E$ and $F_i=F$ across all heads $i$ in a layer. (2) Key-value sharing: $E_i=F_i=E$ across all heads $i$ in a layer. (3) Layerwise sharing: a single projection matrix $E$ is used across all layers and all heads for both key and value.\nThe Sparse Sinkhorn Attention based Transformer~ is based on differentiable sorting of internal representations. First, they divide the input sequence into $B$ equal-sized blocks, each of size $n/B$.
A meta sorting network learns to generate latent permutations over these block sequences. Given sorted sequences, we are then able to compute quasi-global attention with only local windows, improving the memory efficiency of the attention module. They also propose Causal Sinkhorn Balancing and SortCut algorithms for causal scenarios, tailoring Sinkhorn Attention to encoding and/or decoding purposes. Their method reduces the memory complexity from $O(n^2)$ to $O(B^2+(n/B)^2)$. The SortCut variant further reduces complexity to linear-time, i.e., $O(nk)$ where $k$ is a user-defined budget hyper-parameter much smaller than $n$.\nShen et al.~ propose a very simple mathematical trick to reduce quadratic complexity to linear. A typical dot-product attention can be written as $\text{softmax}(QK^T)V$ ignoring the scale factor. This is quadratic because $QK^T$ is $n^2$ in size. This can be rewritten as follows.\n\begin{eqnarray}\n \text{Attention}(Q,K,V)=\text{softmax}_r(Q)\left(\text{softmax}_c(K)^TV\right)\n\end{eqnarray}\nwhere $\text{softmax}_r$ and $\text{softmax}_c$ are softmax applied to rows and columns respectively. Since $\text{softmax}_c(K)^TV$ is only $d\times d$ in size, every term in this revised formulation is linear in $n$. Similarly, Katharopoulos et al.~ express the self-attention as a linear dot-product of kernel feature maps and make use of the associativity property of matrix products to reduce the complexity from $O(n^2)$ to $O(n)$, where $n$ is the sequence length.\nFinally, Longformer~ proposes to reduce Transformer complexity to linear by sparsifying the full self-attention matrix using multiple kinds of attention patterns. The simplest attention pattern is the sliding window pattern, which employs a fixed-size window attention surrounding each token. Given a fixed window size $w$, each token attends to $0.5w$ tokens on each side. The computation complexity of this pattern is $O(nw)$, which scales linearly with input sequence length $n$.
To further increase the receptive field without increasing computation, the sliding window can be ``dilated''. This is analogous to dilated CNNs where the window has gaps of size dilation $d$. The windowed and dilated attention are not flexible enough to learn task-specific representations. Accordingly, Longformer also has ``global attention'' on few pre-selected input locations. Moreover, this attention operation is symmetric: that is, a token with a global attention attends to all tokens across the sequence, and all tokens in the sequence attend to it. Further, two different sets of projections are used: $Q_s$, $K_s$, $V_s$ to compute attention scores of sliding window attention, and $Q_g$, $K_g$, $V_g$ to compute attention scores for the global attention. The pretrained Longformer consistently outperforms RoBERTa on multiple downstream long document tasks.", "id": "f00ad6f8-f45a-4b14-ab1c-135e291338eb", "level": "subsection", "origin_cites_number": 6, "parent_id": "65404f84-cbf0-4721-8f05-178f8230aa55", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Transformers with Sub-Quadratic Complexity" ], [ "subsection", "Transformers with Linear Complexity" ] ], "subsections": [], "title": "Transformers with Linear Complexity" }, { "cite_extract_rate": 1, "cites": [ 7333, 4531, 7298, 793, 798, 7371, 794, 1473 ], "content": "\\begin{table}\n \\centering\n \\scriptsize\n \\begin{tabular}{|l|l|l|l|l|l|}\n \\hline\nTask&Dataset&Model&Base Model&Metric&Eval. 
(Opt.; Orig)\\\\\n\\hline\n\\hline\nChar-level language modeling&1B Word Benchmark&Sinkhorn Mixture Big~&Transformer Big&BPC (L)&1.134; 1.825\\\\\n\\hline\nChar-level language modeling&1B Word Benchmark&Sparse Transformer~&Transformer Big&BPC (L)&1.119; 1.825\\\\\n\\hline\nChar-level language modeling&Enwik8&Longformer~&Transformer&BPC (L)&1.00; 1.11\\\\\n\\hline\nChar-level language modeling&Enwik8&Reformer~&Transformer&BPC (L)&1.05; 1.11\\\\\n\\hline\nChar-level language modeling&text8&Longformer~&Transformer&BPC (L)&1.10; 1.18\\\\\n\\hline\nCoreference resolution&OntoNotes&Longformer-base~&RoBERTa-base&F1 (H)&78.6; 78.4\\\\\n\\hline\nLanguage modeling&1B Word Benchmark&Sinkhorn Mixture Big~&Transformer Big&Perplexity (L)&27.34; 27.59\\\\\n\\hline\nLanguage modeling&1B Word Benchmark&Sparse Transformer~&Transformer Big&Perplexity (L)&28.77; 27.59\\\\\n\\hline\nNamed Entity Recognition&CoNLL2003&Star Transformer~&Transformer&Acc (H)&90.93; 86.48\\\\\n\\hline\nNamed Entity Recognition&CoNLL2012&Star Transformer~&Transformer&Acc (H)&86.30; 83.57\\\\\n\\hline\nNLI&MNLI&SortCut Sinkhorn~&Transformer&Acc (H)&55.80; 53.69\\\\\n\\hline\nNLI&QNLI&Linformer~&BERT-base&Acc (H)&91.2; 91.8\\\\\n\\hline\nNLI&SNLI&Star Transformer~&Transformer&Acc (H)&86.0; 82.2\\\\\n\\hline\nNLI&SNLI&SortCut Sinkhorn~&Transformer&Acc (H)&80.30; 78.87\\\\\n\\hline\nNMT (en$\\rightarrow$de)&WMT14&Reformer big~&Transformer Big&BLEU (H)&29.1; 27.3\\\\\n\\hline\nPOS Tagging&PTB&Star Transformer~&Transformer&Acc (H)&97.14; 96.31\\\\\n\\hline\nQuestion answering&HotpotQA&BlockBERT (n=2, N=1024)~&BERT&F1 (H)&78.94; 77.08\\\\\n\\hline\nQuestion answering&HotpotQA&SparseBERT~&BERT&F1 (H)&76.02; 77.08\\\\\n\\hline\nQuestion answering&NaturalQA&BlockBERT (n=2, N=1024)~&BERT&F1 (H)&79.39; 78.29\\\\\n\\hline\nQuestion answering&NaturalQA&SparseBERT~&BERT&F1 (H)&77.31; 78.29\\\\\n\\hline\nQuestion answering&NewsQA&BlockBERT (n=2, N=1024)~&BERT&F1 (H)&70.08; 66.25\\\\\n\\hline\nQuestion 
answering&NewsQA&SparseBERT~&BERT&F1 (H)&67.16; 66.25\\\\\n\\hline\nQuestion answering&SearchQA&BlockBERT (n=2, N=1024)~&BERT&F1 (H)&83.51; 80.37\\\\\n\\hline\nQuestion answering&SearchQA&SparseBERT~&BERT&F1 (H)&80.54; 80.37\\\\\n\\hline\nQuestion answering&SQuAD 1.1&BlockBERT (n=2, N=1024)~&BERT&F1 (H)&90.74; 88.45\\\\\n\\hline\nQuestion answering&SQuAD 1.1&SparseBERT~&BERT&F1 (H)&88.37; 88.45\\\\\n\\hline\nQuestion answering&SQuAD 2.0&BlockBERT (n=2, N=1024)~&BERT&F1 (H)&81.45; 77.16\\\\\n\\hline\nQuestion answering&SQuAD 2.0&SparseBERT~&BERT&F1 (H)&77.57; 77.16\\\\\n\\hline\nQuestion answering&TriviaQA&BlockBERT (n=2, N=1024)~&BERT&F1 (H)&79.41; 75.35\\\\\n\\hline\nQuestion answering&TriviaQA&SparseBERT~&BERT&F1 (H)&75.34; 75.35\\\\\n\\hline\nQuestion answering&TriviaQA&Longformer-base~&RoBERTa-base&F1 (H)&75.2; 74.3\\\\\n\\hline\nQuestion answering&WikiHop&Longformer-base~&RoBERTa-base&Acc (H)&75.0; 72.4\\\\\n\\hline\nSentiment analysis&IMDB&Linformer~&BERT-base&Acc (H)&94.1; 93.5\\\\\n\\hline\nSentiment analysis&IMDB&Longformer-base~&RoBERTa-base&Acc (H)&95.7; 95.3\\\\\n\\hline\nSentiment analysis&SST&Star Transformer~&Transformer&Acc (H)&52.9; 50.4\\\\\n\\hline\nSentiment analysis&SST&Sinkhorn ~&Transformer&Acc (H)&77.52; 76.83\\\\\n\\hline\nSentiment analysis&SST-2&Linformer~&BERT-base&Acc (H)&93.1; 92.7\\\\\n\\hline\nSpeech recognition&WSJ&Linear Transformer~&Reformer&Phoneme Error Rate (L)&8.08; 9.33\\\\\n\\hline\nText classification&Hyperpartisan&Longformer-base~&RoBERTa-base&F1 (H)&94.8; 87.4\\\\\n\\hline\nText classification&MTL-16&Star Transformer~&Transformer&Acc (H)&86.98; 82.78\\\\\n\\hline\nTextual similarity&QQP&Linformer~&BERT-base&Acc (H)&90.8; 89.6\\\\\n\\hline\n \\end{tabular}\n\\caption{Comparison of various sub-quadratic complexity Transformer methods (sorted by Task and then Dataset). In the metric column, H means high is better while L means low is better. 
Note that in this case, model sizes do not reduce much; activation memory reduces as described in Section~\ref{sec:linearTransformers} (with comparable or better accuracy) compared to the standard Transformer.}\n \label{tab:linearTransSummary}\n\end{table}\nTable~\ref{tab:linearTransSummary} compares various sub-quadratic Transformer methods across different tasks and datasets. Accuracies of both the original Transformer and the optimized (opt.) models are shown. These models have been applied to various applications like language modeling (both word level as well as character level), coreference resolution, NER, NLI, NMT, POS, question answering, sentiment analysis, speech recognition, text classification and text similarity analysis. For the same task, dataset and model combination, different papers report different accuracy of the original model because of slight changes in training hyper-parameters; hence we report accuracy of the original model for each row. \nStar Transformers were shown to outperform the vanilla Transformer model across various tasks like sentiment analysis, text classification, NLI, POS and NER. On SST, the Star Transformer achieves a 2.5 point improvement over the standard Transformer. On MTL-16, the Star-Transformer outperforms the standard Transformer on all 16 datasets, with an improvement in average accuracy of 4.2 points. Average test times are 10.94 and 49.31ms per batch with batch-size=128 for the Star Transformer and the standard Transformer respectively.\nLongformer achieves a new state-of-the-art on both text8 and enwik8 using the small models, with BPC of 1.10 and 1.00 on text8 and enwik8 respectively. A Longformer-large model of the same size as Sparse Transformer achieves a BPC of 0.99 on Enwik8, which is the same as that obtained using Sparse Transformer.
Also, for both character-level as well as word-level language modeling on the 1B word benchmark, we observe that the Sinkhorn mixture (which mixes Sinkhorn attention with the vanilla dot-product attention) performs better than the Sparse Transformer. \nOn question answering, results have been reported across multiple datasets like HotpotQA, NaturalQA, NewsQA, SearchQA, TriviaQA, WikiHop, SQuAD 1.1 and SQuAD 2.0. We observe that BlockBERT performs better than SparseBERT. Also, it is not surprising that BlockBERT with 2 blocks (n = 2) performs better than that with 3 blocks (n = 3), because it keeps more attention matrix entries. Also, though not shown in the table, Longformer-large achieves scores of 81.9 and 73.2 for WikiHop and HotpotQA, beating state-of-the-art results by 3.6 and 4 points respectively. \nLongformer consistently outperforms the RoBERTa baseline across many tasks like coreference resolution, question answering, sentiment analysis and text classification. Its performance gain is especially obvious for tasks that require long context such as WikiHop (question answering) and Hyperpartisan (text classification). For TriviaQA (question answering), the improvement is more modest as the local context is often sufficient to answer the question. In the case of HotpotQA (question answering), the supporting fact auxiliary supervision allows models to easily find relevant contexts and then focus on local context, leading to smaller gains. On the IMDB (sentiment analysis) and OntoNotes (coreference resolution) datasets the performance gains are smaller. For IMDB, the majority of the dataset consists of short documents and thus it is expected to see smaller improvements.
For OntoNotes, the distance between any two mentions is typically quite small so that a baseline that processes smaller chunks separately is able to stitch together mentions into coreference chains without considering cross chunk interactions.\nOn speech recognition task, Linear transformers achieve similar performance to vanilla transformers and they are up to 4000x faster on autoregressive prediction of very long sequences. The linear Transformer model outperforms the LSTM and Reformer while being faster to train and evaluate. Reformer takes 2250 seconds per epoch, while Linear Transformers take just 824s/epoch.\nTo summarize, multiple methods have been proposed to reduce the quadratic complexity of the standard Transformer model. While Sparse Transformers reduce it to $O(n\\sqrt{n})$, Reformers reduce it to $O(n\\log n)$. Other methods like Star Transformer, Linformer, Sparse Sinkhorn Transformer, Efficient Attention, Linear Transformers and Longformer promise linear complexity. In particular Sparse Sinkhorn Transformer and Longformer have been shown to result into very good accuracy, latency and RAM tradeoff across many tasks.", "id": "2202feab-4da3-4fb5-8d59-2d1d4fbac3f9", "level": "subsection", "origin_cites_number": 8, "parent_id": "65404f84-cbf0-4721-8f05-178f8230aa55", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Transformers with Sub-Quadratic Complexity" ], [ "subsection", "Summary" ] ], "subsections": [], "title": "Summary" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:summary}\nWe discussed various methods for compression of deep learning models for text. Broadly, we discussed pruning, quantization, knowledge distillation, parameter sharing, tensor decomposition, and sub-quadratic Transformer based methods. These methods not just help reduce the model size, but also lead to lower prediction latencies and low power consumption due to reduced computations. 
Pruning is the oldest method, but not commonly applied for Transformers. Quantization is effective however it is important to use mixed precision balanced quantization with GPU architectures that support efficient low-bit computations. Knowledge Distillation is the most popular method for compression of Transformer models. Parameter sharing is a very useful method but often needs to be combined with other techniques. Matrix decomposition is not very common but has lots of potential, especially the BTD method. Sub-quadratic Transformers are very important to enable processing long documents in applications like query-document similarity, long document summarization, etc.", "id": "d06ca9cc-bcd7-4f53-9be8-ee86f8754664", "level": "section", "origin_cites_number": 0, "parent_id": "3efda6e0-cf0a-4f26-8cb1-3dde0497c32c", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Summary and Future Directions" ] ], "subsections": [ "ab02f8bb-9d54-4f84-82fc-3f831c1bc39a", "fe1a6e49-c5b1-4394-b0be-1a452c8b0f80", "7ea3f834-f595-4c1b-afed-044d53e835ea" ], "title": "Summary and Future Directions" }, { "cite_extract_rate": 0.76, "cites": [ 4479, 4478, 7298, 4515, 1150, 4517, 4480, 4521, 4514, 4485, 4492, 1473, 4491, 8375, 9120, 4522, 4536, 4497, 4533, 8799, 4525, 4519, 4526, 854, 4484, 7829, 7333, 8797, 4486, 4516, 4527, 4500, 793, 4530, 1568, 4036, 4523, 7371, 4532, 4524, 8390, 4499, 4331, 4509, 4534, 4511, 2481, 844, 4535, 4482, 7828, 4518, 794, 4351, 4507, 2488, 4510 ], "content": "In the previous six sections, we have compared across multiple methods within each of the six broad types of model compression. In this subsection, we attempt to compare across multiple model compression method types.\nTable~\\ref{tab:kdSummary2} provides a comparison of model size versus accuracy for various methods across various GLUE~ and SQuAD tasks. We observe that most of the methods tried on GLUE datasets have been based on knowledge distillation. 
However, some pruning methods (iterative magnitude pruning, RPP and LayerDrop), quantization (mixed precision QBERT), parameter sharing (ALBERT), tensor decomposition (FLOP) and linear Transformers (Linformer) have also been tried. Quantization methods do not reduce the number of parameters but reduce the number of bits per parameter. In particular, mixed precision QBERT uses 8 bits for embeddings and 2/3/4 bits for encoder layer weights. Similarly, Linformer does not reduce the number of weights but reduces the activation memory as well as the latency of the overall Transformer model. \nFor each of the remaining models, we first computed an average GLUE score based on any of the 9 tasks for which scores have been reported. Next, we computed the ratio GLUE score/model size (in millions). We find the following to be the top three:\n\begin{itemize}\n \item Distilled-BiLSTM~ (ratio=79.3)\n \item Mixed-vocab. training~ (ratio=7.8)\n \item ALBERT-B~ (ratio=7.2)\n\end{itemize}\nThis clearly tells us that Distilled BiLSTMs provide the best accuracy versus size tradeoff on GLUE. However, each of these three models actually reports results on only 4 out of 9 GLUE tasks. Hence, we further considered only those methods for which results on at least 5 tasks have been reported and computed the GLUE score/model size ratio. We find the following to be the top three:\n\begin{itemize}\n\item TinyBERT~ (ratio=5.31)\n\item BERT-EMD~ (ratio=5.15)\n\item MobileBERT+Quantization~ (ratio=3.49)\n\end{itemize}\nThus, on GLUE tasks, it is clear that distillation-based methods (combined with quantization) are better than other types of methods. 
\nTables~\ref{tab:pruningSummary},~\ref{tab:quantizationSummary},~\ref{tab:kdSummary1},~\ref{tab:paramSharingSummary},~\ref{tab:tensorDecompSummary} and~\ref{tab:linearTransSummary} compare various pruning, quantization, knowledge distillation, parameter sharing, tensor decomposition and sub-quadratic Transformer methods across different tasks and datasets. Overall, there are 123 (task, dataset) combinations across these six tables, which implies that unlike GLUE, not many methods have been applied to the same set of (task, dataset) combinations\footnote{We make the complete statistics available as an Excel file at \url{https://bit.ly/3vmaxZ9}.}. The most popular tasks are language modeling (on PTB and 1B word benchmark), sentiment analysis (on SST) and NMT (WMT14 en$\rightarrow$de). \nFor language modeling on PTB, on LSTM models, the bank balanced sparsity~ based pruning method worked best. With Transformer models, the Block Term Decomposition (BTD)~ method seems to work best. Among various methods like parameter sharing, tensor decomposition and sub-quadratic complexity Transformers which have been tried for language modeling on the 1B Word Benchmark, again, the BTD~ method seems to work best, leading to a model with 0.16B parameters and a perplexity as low as 19.5. Multiple datasets have been used for Neural machine translation (NMT). Datasets from WMT and IWSLT are the most popular. Among these, the en$\rightarrow$de from WMT14 is the most popular dataset used for testing various NMT models. For en$\rightarrow$de NMT with WMT14, using 2-layer LSTMs, the best accuracy versus size tradeoff is obtained using Pruned Seq-KD + Seq-Inter~, which gives an 8M model achieving 18.5 BLEU. 
Among Transformer based models, BTD~ leads to a Transformer model which provides 34.91 BLEU with 21.2M model size.\n\\begin{table*}\n \\centering\n \\scriptsize\n \\begin{tabular}{|p{1in}|p{2in}|p{2.7in}|}\n \\hline\nTask&Popular Datasets&References\\\\\n\\hline\n\\hline\nLanguage modeling&Penn TreeBank Corpus, One billion word benchmark, Europarl, WikiText-103, text8, source code of Linux kernel, 2013 ACL Workshop Morphological Language Datasets (ACLW), Arabic news commentary corpus, 2013 ACL workshop on MT, enwik8 (from Wikipedia), Lambada&Neuron Pruning~, Iterative magnitude pruning~, Block sparsity~,Loss -Aware Quantization~, Uniform Quantization~, Binary Quantization~, HitNet~, Sparse Word Representations~, LightRNN~, Slim Embeddings~, C2W~, LayerDrop~, Reformer~, Linformer~, Char-CNN~, CNN+Highway Network~, SparseBERT~, FLOP~, Deep Equilibrium Models~, WEST~, Sparse Sinkhorn Attention~, BTD~, Universal Transformers~, TT-embedding~, multiple methods~\\\\\n\\hline\nNeural Machine translation (NMT)&IWSLT German-English, IWSLT Thai-English, ASPEC English-Japanese, WMT English-German, WMT German-English, WMT English-Russian, IWSLT English Vietnamese, WMT English-Romanian, WMT English-Estonian, Ted Talk&Compositional codes~, LayerDrop~, Pruning attention heads~, Neuron Pruning~, Magnitude Pruning~, Iterative magnitude pruning~, Pruned Seq-KD + Seq-Inter~, Quantized Distillation~, Teacher ensembles KD~, Multiple teachers KD~, BTD~, Quaternion Attention~, Universal Transformers~, TT-embedding~\\\\\n\\hline\nSentiment Analysis&IMDB movie review, SST, SST-2, Elec (electronic product reviews)&Compositional codes~, Star Transformer~, TinyBERT~, MiniLM~, Linformer~, XtremeDistil~, Gaussian Quantization~, Uniform Quantization~, Sparse coding~, Quaternion Attention~, Sparse Sinkhorn Attention~, RPP~, ALBERT~, Patient KD~, Mixed-vocabulary KD training~, Distilled-BiLSTM~, MTDNN~, TT-embedding~\\\\\n\\hline\nQuestion Answering&SQuAD1.1, SQuAD2.0, ELI5, SemEval, 
BABI&LayerDrop~, MiniLM~, RPP~, BS-Fixed Quantization~, ALBERT~, KD~, Universal Transformers~\\\\\n\\hline\nNatural Language Inference&SNLI, MNLI-m, MNLI-mm, QNLI, RTE, WNLI, XNLI&Star Transformer~, LayerDrop~, TinyBERT~, MiniLM~, Linformer~, Sparse Sinkhorn Attention~, RPP~, ALBERT~, Patient KD~, Mixed-vocabulary KD training~, Distilled-BiLSTM~, MTDNN~\\\\\n\\hline\nParaphrasing&QQP, STS-B&TinyBERT~, MiniLM~, Linformer~, RPP~, ALBERT~, Patient KD~, Distilled-BiLSTM~, MTDNN~\\\\\n\\hline\nImage captioning&MSCOCO&Grow and Prune~, Magnitude Pruning~, Iterative Magnitude Pruning and Densification~\\\\\n\\hline\nHandwritten character recognition&ICDAR&SVD and Pruning\\\\\n\\hline\nPart-of-speech (POS) tagging&Wall Street Journal of the Penn Treebank dataset, WikiAnn NER corpus&C2W~, XtremeDistil~\\\\\n\\hline\nSummarization&CNN-DailyMail, XSum&LayerDrop~, MiniLM~\\\\\n\\hline\nMachine Reading Comprehension& Microsoft Research Paraphrase Corpus (MRPC), ReAding Comprehension from Examinations (RACE)&LayerDrop~, TinyBERT~, MiniLM~, RPP~, ALBERT~, Patient KD~, Mixed-vocabulary KD training~, MTDNN~\\\\\n\\hline\nLinguistic Acceptability&CoLA&TinyBERT~, MiniLM~, RPP~, ALBERT~, MTDNN~\\\\\n\\hline\nTopic Classification&DbPedia, Ag News,20 Newsgroup&XtremeDistil~, Sparse coding~\\\\\n\\hline\nQuestion Type Classification&TREC&Sparse coding~\\\\\n\\hline\nNoun Phrase Bracketing&Lazaridou~&Sparse coding~\\\\\n\\hline\nWord Similarity&SimLex-999, MEN, MTurk, RARE, SCWS, WSR, WSS&Sparse coding~, Shared reference vectors~\\\\\n\\hline\nMathematical Language Understanding&Wangperawong's MLU~&Quaternion Attention~\\\\\n\\hline\nSubject Verb Agreement&Linzen~&Quaternion Attention~, Universal Transformers~\\\\\n\\hline\nWord Analogy&GSEM, GSYN, MSYN&Shared reference vectors~\\\\\n\\hline\nSentence Completion&MSC&Shared reference vectors~\\\\\n\\hline\nLearning to execute&Zaremba and Sutskever~&Universal Transformers~\\\\\n\\hline\nAd Click Through Rate Prediction&Criteo 
Kaggle&TT-embedding~\\\\\n\\hline\nSpeech Recognition&2100 hours English Speech, AN4, Switchboard, TIMIT, WSJ 92, WSJ 93, TIDIGITS, 3M Google voice utterances, Live traffic utterances&Iterative Magnitude Pruning~, Grow and Prune~, Neuron Pruning~, Block Sparsity~, BBS~, DSD~, Pow2 Ternarization~, Loss Aware Quantization~, Toeplitz-like~, Joint-SVD~, Projections~, WEST~\\\\\n\\hline\nNamed entity recognition (NER)&CoNLL2003, Wikiann-41, CoNLL2012&QBERT~, XtremeDistill~, Star Transformer~\\\\\n\\hline\nIntent Detection&SNIPS&Mixed-vocabulary KD training~\\\\\n\\hline\nQuestion Generation&SQuAD 1.1&MiniLM~\\\\\n\\hline\nSlot Filling&SNIPS&Mixed-vocabulary KD training~\\\\\n\\hline\nText classification&20 Newsgroup, Hyperpartisan, MTL-16&Sparse coding~, Longformer~, Star Transformer~\\\\\n\\hline\nCoreference Resolution&OntoNotes&Longformer~\\\\\n\\hline\n \\end{tabular}\n \\caption{Applications of Model Compression Methods for Text}\n \\label{tab:applications}\n\\end{table*}\nCombinations of multiple model compression method types has also been experimented with and found to be effective. Some examples of such combinations include the following:\n\\begin{itemize}\n \\item Pruning + Tensor Decomposition ~\n \\item Pruning + Quantization~\n \\item Knowledge distillation + Quantization~\n \\item Knowledge distillation + Pruning~\n \\item Tensor decomposition + Parameter sharing~\n \\item Tensor decomposition + Quantization~\n\\end{itemize}\nRecently, Kim et al.~ combined knowledge distillation, structured pruning and quantization leading to drastic improvements on inference efficiency. First, they investigate the efficacy of various Knowledge Distillation techniques to significantly reduce the size of the models with respect to the depth and hidden state sizes while preserving the\naccuracy. 
Second, they explore Structured Pruning that further reduces the size of the models by reducing the number of self-attention heads and the number of intermediate hidden states in the feedforward layers to achieve more efficiency while\ntrying to preserve the accuracy as well. Finally, they \nexplore model quantization which enables faster model executions by optimally utilizing hardware acceleration capabilities. Such a combined method leads to heavily reduced model size, 12.4x GPU speed-up and 6.9x-125.8x reduction in energy consumption.", "id": "ab02f8bb-9d54-4f84-82fc-3f831c1bc39a", "level": "subsection", "origin_cites_number": 75, "parent_id": "d06ca9cc-bcd7-4f53-9be8-ee86f8754664", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Summary and Future Directions" ], [ "subsection", "Comparison across model compression method types." ] ], "subsections": [], "title": "Comparison across model compression method types." }, { "cite_extract_rate": 0, "cites": [], "content": "The model compression methods mentioned in this survey have been used across a wide variety of text processing tasks. In Table~\\ref{tab:applications}, we list down the tasks, popular datasets and references where the readers can find more discussion around model size versus accuracy tradeoff.", "id": "fe1a6e49-c5b1-4394-b0be-1a452c8b0f80", "level": "subsection", "origin_cites_number": 0, "parent_id": "d06ca9cc-bcd7-4f53-9be8-ee86f8754664", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Summary and Future Directions" ], [ "subsection", "Summary of Applications" ] ], "subsections": [], "title": "Summary of Applications" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 849, 4536 ], "content": "Although there has been so much of work already in this field, there is a lot more work to be done. 
\n\\begin{itemize}\n \\item With linear Transformer models, one can afford inputs with tens of thousands of tokens. Hence, many tasks need to be redesigned so that large contexts can now be included as input to improve accuracy. \n \\item Combinations of several methods have not been tested well. Recently, the FastFormers method~ showed that combining multiple methods like knowledge distillation, 16-bit quantization, structured pruning and numerical optimizations can lead to drastic improvements. However, many more experiments are needed to further check how models respond to combinations of model compression methods.\n \\item Latency results vary based on GPU architectures. With new GPU architectures (Nvidia RTX 3080, Nvidia T4), some methods like quantization may become more impactful.\n \\item Real-world settings are often complex: multi-modal, multi-task, multi-label, small-data, noisy labels, multiple teachers, mismatched teacher-student architectures. Efficient ways of recommending the most promising method are necessary.\n \\item Different components/structures of a model may respond to different kinds of compression methods with specific hyper-parameters. A generic method to choose the right compression method for various structures is needed.\n \\item How does compression of models impact their interpretability? Can we design model compression mechanisms that explicitly trade off model accuracy, size, latency and interpretability?\n \\item None of the model compression methods performs any application-specific compression. Can we obtain further compression by exploiting some task-specific patterns?\n \\item Recently, deep reinforcement learning based methods have been proposed in the computer vision community~. It would be worthwhile to check the effectiveness of such methods for NLP tasks.\n\\end{itemize}\nWe hope that this survey acts as a useful guide for researchers and practitioners across academia and industry. 
Also, we hope that a significantly large chunk of research gets done in the area of model compression to enable good accuracy across many NLP tasks while keeping model sizes and latencies in check. \n\\begin{thebibliography}{100}\n\\scriptsize\n\\providecommand{\\url}[1]{#1}\n\\csname url@samestyle\\endcsname\n\\providecommand{\\newblock}{\\relax}\n\\providecommand{\\bibinfo}[2]{#2}\n\\providecommand{\\BIBentrySTDinterwordspacing}{\\spaceskip=0pt\\relax}\n\\providecommand{\\BIBentryALTinterwordstretchfactor}{4}\n\\providecommand{\\BIBentryALTinterwordspacing}{\\spaceskip=\\fontdimen2\\font plus\n\\BIBentryALTinterwordstretchfactor\\fontdimen3\\font minus\n \\fontdimen4\\font\\relax}\n\\providecommand{\\BIBforeignlanguage}[2]{{\n\\expandafter\\ifx\\csname l@#1\\endcsname\\relax\n\\typeout{** WARNING: IEEEtran.bst: No hyphenation pattern has been}\n\\typeout{** loaded for the language `#1'. Using the pattern for}\n\\typeout{** the default language instead.}\n\\else\n\\language=\\csname l@#1\\endcsname\n\\fi\n#2}}\n\\providecommand{\\BIBdecl}{\\relax}\n\\BIBdecl\n\\bibitem{vaswani2017attention}\nA.~Vaswani, N.~Shazeer, N.~Parmar, J.~Uszkoreit, L.~Jones, A.~N. Gomez,\n {\\L}.~Kaiser, and I.~Polosukhin, ``Attention is all you need,'' in\n \\emph{NIPS}, 2017, pp. 5998--6008.\n\\bibitem{devlin2018bert}\nJ.~Devlin, M.-W. Chang, K.~Lee, and K.~Toutanova, ``Bert: Pre-training of deep\n bidirectional transformers for language understanding,''\n \\emph{arXiv:1810.04805}, 2018.\n\\bibitem{wang2019glue}\nA.~Wang, A.~Singh, J.~Michael, F.~Hill, O.~Levy, and S.~R. Bowman, ``{GLUE}: A\n multi-task benchmark and analysis platform for natural language\n understanding,'' in \\emph{ICLR}, 2019.\n\\bibitem{wang2019superglue}\nA.~Wang, Y.~Pruksachatkun, N.~Nangia, A.~Singh, J.~Michael, F.~Hill, O.~Levy,\n and S.~R. 
Bowman, ``Super{GLUE}: A stickier benchmark for general-purpose\n language understanding systems,'' \\emph{1905.00537}, 2019.\n\\bibitem{radford2019language}\nA.~Radford, J.~Wu, R.~Child, D.~Luan, D.~Amodei, and I.~Sutskever, ``Language\n models are unsupervised multitask learners,'' \\emph{OpenAI Blog}, vol.~1,\n no.~8, 2019.\n\\bibitem{liu2019multi}\nX.~Liu, P.~He, W.~Chen, and J.~Gao, ``Multi-task deep neural networks for\n natural language understanding,'' \\emph{arXiv:1901.11504}, 2019.\n\\bibitem{yang2019xlnet}\nZ.~Yang, Z.~Dai, Y.~Yang, J.~Carbonell, R.~Salakhutdinov, and Q.~V. Le,\n ``Xlnet: Generalized autoregressive pretraining for language understanding,''\n \\emph{arXiv:1906.08237}, 2019.\n\\bibitem{shoeybi2019megatron}\nM.~Shoeybi, M.~Patwary, R.~Puri, P.~LeGresley, J.~Casper, and B.~Catanzaro,\n ``Megatron-lm: Training multi-billion parameter language models using gpu\n model parallelism,'' \\emph{arXiv:1909.08053}, 2019.\n\\bibitem{raffel2019exploring}\nC.~Raffel, N.~Shazeer, A.~Roberts, K.~Lee, S.~Narang, M.~Matena, Y.~Zhou,\n W.~Li, and P.~J. Liu, ``Exploring the limits of transfer learning with a\n unified text-to-text transformer,'' \\emph{arXiv:1910.10683}, 2019.\n\\bibitem{rosset2019turing}\nC.~Rosset, ``Turing-nlg: A 17-billion-parameter language model by microsoft,''\n \\emph{Microsoft Blog}, 2019.\n\\bibitem{lepikhin2020gshard}\nD.~Lepikhin, H.~Lee, Y.~Xu, D.~Chen, O.~Firat, Y.~Huang, M.~Krikun, N.~Shazeer,\n and Z.~Chen, ``Gshard: Scaling giant models with conditional computation and\n automatic sharding,'' \\emph{arXiv:2006.16668}, 2020.\n\\bibitem{bianco2018benchmark}\nS.~Bianco, R.~Cadene, L.~Celona, and P.~Napoletano, ``Benchmark analysis of\n representative deep neural network architectures,'' \\emph{IEEE Access},\n vol.~6, pp. 
64\\,270--64\\,277, 2018.\n\\bibitem{sanh2019distilbert}\nV.~Sanh, L.~Debut, J.~Chaumond, and T.~Wolf, ``Distilbert, a distilled version\n of bert: smaller, faster, cheaper and lighter,'' \\emph{arXiv:1910.01108},\n 2019.\n\\bibitem{han2016eie}\nS.~Han, X.~Liu, H.~Mao, J.~Pu, A.~Pedram, M.~A. Horowitz, and W.~J. Dally,\n ``Eie: efficient inference engine on compressed deep neural network,''\n \\emph{ACM SIGARCH Computer Architecture News}, vol.~44, no.~3, pp. 243--254,\n 2016.\n\\bibitem{diamos2016persistent}\nG.~Diamos, S.~Sengupta, B.~Catanzaro, M.~Chrzanowski, A.~Coates, E.~Elsen,\n J.~Engel, A.~Hannun, and S.~Satheesh, ``Persistent rnns: Stashing recurrent\n weights on-chip,'' in \\emph{ICML}, 2016, pp. 2024--2033.\n\\bibitem{denil2013predicting}\nM.~Denil, B.~Shakibi, L.~Dinh, M.~Ranzato, and N.~De~Freitas, ``Predicting\n parameters in deep learning,'' in \\emph{NIPS}, 2013, pp. 2148--2156.\n\\bibitem{cheng2017survey}\nY.~Cheng, D.~Wang, P.~Zhou, and T.~Zhang, ``A survey of model compression and\n acceleration for deep neural networks,'' \\emph{arXiv:1710.09282}, 2017.\n\\bibitem{deng2020model}\nL.~Deng, G.~Li, S.~Han, L.~Shi, and Y.~Xie, ``Model compression and hardware\n acceleration for neural networks: A comprehensive survey,'' \\emph{IEEE}, vol.\n 108, no.~4, pp. 485--532, 2020.\n\\bibitem{walsh2013peter}\nC.~A. Walsh, ``Peter huttenlocher (1931--2013),'' \\emph{Nature}, vol. 502, no.\n 7470, pp. 172--172, 2013.\n\\bibitem{zhu2017prune}\nM.~Zhu and S.~Gupta, ``To prune, or not to prune: exploring the efficacy of\n pruning for model compression,'' \\emph{arXiv:1710.01878}, 2017.\n\\bibitem{lecun1990optimal}\nY.~LeCun, J.~S. Denker, and S.~A. Solla, ``Optimal brain damage,'' in\n \\emph{NIPS}, 1990, pp. 598--605.\n\\bibitem{hassibi1993second}\nB.~Hassibi and D.~G. Stork, ``Second order derivatives for network pruning:\n Optimal brain surgeon,'' in \\emph{NIPS}, 1993, pp. 164--171.\n\\bibitem{han2015deep}\nS.~Han, H.~Mao, and W.~J. 
Dally, ``Deep compression: Compressing deep neural\n networks with pruning, trained quantization and huffman coding,''\n \\emph{arXiv:1510.00149}, 2015.\n\\bibitem{see2016compression}\nA.~See, M.-T. Luong, and C.~D. Manning, ``Compression of neural machine\n translation models via pruning,'' \\emph{arXiv:1606.09274}, 2016.\n\\bibitem{han2015learning}\nS.~Han, J.~Pool, J.~Tran, and W.~Dally, ``Learning both weights and connections\n for efficient neural network,'' in \\emph{NIPS}, 2015, pp. 1135--1143.\n\\bibitem{narang2017exploring}\nS.~Narang, E.~Elsen, G.~Diamos, and S.~Sengupta, ``Exploring sparsity in\n recurrent neural networks,'' \\emph{arXiv:1704.05119}, 2017.\n\\bibitem{cheong2019transformers}\nR.~Cheong and R.~Daniel, ``transformers. zip: Compressing transformers with\n pruning and quantization,'' Technical report, Stanford University, Stanford,\n California, 2019., Tech. Rep., 2019.\n\\bibitem{guo2019reweighted}\nF.-M. Guo, S.~Liu, F.~S. Mungall, X.~Lin, and Y.~Wang, ``Reweighted proximal\n pruning for large-scale language representation,'' \\emph{arXiv:1909.12486},\n 2019.\n\\bibitem{han2016dsd}\nS.~Han, J.~Pool, S.~Narang, H.~Mao, E.~Gong, S.~Tang, E.~Elsen, P.~Vajda,\n M.~Paluri, J.~Tran \\emph{et~al.}, ``Dsd: Dense-sparse-dense training for deep\n neural networks,'' \\emph{arXiv:1607.04381}, 2016.\n\\bibitem{dai2018grow}\nX.~Dai, H.~Yin, and N.~K. Jha, ``Grow and prune compact, fast, and accurate\n lstms,'' \\emph{arXiv:1805.11797}, 2018.\n\\bibitem{he2014reshaping}\nT.~He, Y.~Fan, Y.~Qian, T.~Tan, and K.~Yu, ``Reshaping deep neural network for\n fast decoding by node-pruning,'' in \\emph{ICASSP}.\\hskip 1em plus 0.5em minus\n 0.4em\\relax IEEE, 2014, pp. 
245--249.\n\\bibitem{murray2015auto}\nK.~Murray and D.~Chiang, ``Auto-sizing neural networks: With applications to\n n-gram language models,'' \\emph{arXiv:1508.05051}, 2015.\n\\bibitem{pan2016dropneuron}\nW.~Pan, H.~Dong, and Y.~Guo, ``Dropneuron: Simplifying the structure of deep\n neural networks,'' \\emph{arXiv:1606.07326}, 2016.\n\\bibitem{srinivas2015data}\nS.~Srinivas and R.~V. Babu, ``Data-free parameter pruning for deep neural\n networks,'' \\emph{arXiv:1507.06149}, 2015.\n\\bibitem{narang2017block}\nS.~Narang, E.~Undersander, and G.~Diamos, ``Block-sparse recurrent neural\n networks,'' \\emph{arXiv:1711.02782}, 2017.\n\\bibitem{cao2019efficient}\nS.~Cao, C.~Zhang, Z.~Yao, W.~Xiao, L.~Nie, D.~Zhan, Y.~Liu, M.~Wu, and\n L.~Zhang, ``Efficient and effective sparse lstm on fpga with bank-balanced\n sparsity,'' in \\emph{SIGDA Intl. Symp. on FPGA}.\\hskip 1em plus 0.5em minus\n 0.4em\\relax ACM, 2019, pp. 63--72.\n\\bibitem{michel2019sixteen}\nP.~Michel, O.~Levy, and G.~Neubig, ``Are sixteen heads really better than\n one?'' \\emph{arXiv:1905.10650}, 2019.\n\\bibitem{voita2019analyzing}\nE.~Voita, D.~Talbot, F.~Moiseev, R.~Sennrich, and I.~Titov, ``Analyzing\n multi-head self-attention: Specialized heads do the heavy lifting, the rest\n can be pruned,'' \\emph{arXiv:1905.09418}, 2019.\n\\bibitem{ding2017visualizing}\nY.~Ding, Y.~Liu, H.~Luan, and M.~Sun, ``Visualizing and understanding neural\n machine translation,'' in \\emph{ACL}, 2017, pp. 1150--1159.\n\\bibitem{fan2019reducing}\nA.~Fan, E.~Grave, and A.~Joulin, ``Reducing transformer depth on demand with\n structured dropout,'' \\emph{arXiv:1909.11556}, 2019.\n\\bibitem{prasanna2020bert}\nS.~Prasanna, A.~Rogers, and A.~Rumshisky, ``When bert plays the lottery, all\n tickets are winning,'' \\emph{arXiv:2005.00561}, 2020.\n\\bibitem{bartol2015hippocampal}\nT.~M. Bartol, C.~Bromer, J.~Kinney, M.~A. Chirillo, J.~N. Bourne, K.~M. Harris,\n and T.~J. 
Sejnowski, ``Hippocampal spine head sizes are highly precise,''\n \\emph{bioRxiv}, p. 016329, 2015.\n\\bibitem{linden2018think}\nD.~J. Linden, \\emph{Think Tank: Forty Neuroscientists Explore the Biological\n Roots of Human Experience}.\\hskip 1em plus 0.5em minus 0.4em\\relax Yale\n University Press, 2018.\n\\bibitem{hubara2017quantized}\nI.~Hubara, M.~Courbariaux, D.~Soudry, R.~El-Yaniv, and Y.~Bengio, ``Quantized\n neural networks: Training neural networks with low precision weights and\n activations,'' \\emph{JMLR}, vol.~18, no.~1, pp. 6869--6898, 2017.\n\\bibitem{lam2018word2bits}\nM.~Lam, ``Word2bits-quantized word vectors,'' \\emph{arXiv:1803.05651}, 2018.\n\\bibitem{courbariaux2015binaryconnect}\nM.~Courbariaux, Y.~Bengio, and J.-P. David, ``Binaryconnect: Training deep\n neural networks with binary weights during propagations,'' in \\emph{NIPS},\n 2015, pp. 3123--3131.\n\\bibitem{bengio2013estimating}\nY.~Bengio, N.~L{\\'e}onard, and A.~Courville, ``Estimating or propagating\n gradients through stochastic neurons for conditional computation,''\n \\emph{arXiv:1308.3432}, 2013.\n\\bibitem{lin2015neural}\nZ.~Lin, M.~Courbariaux, R.~Memisevic, and Y.~Bengio, ``Neural networks with few\n multiplications,'' \\emph{arXiv:1510.03009}, 2015.\n\\bibitem{hubara2016binarized}\nI.~Hubara, M.~Courbariaux, D.~Soudry, R.~El-Yaniv, and Y.~Bengio, ``Binarized\n neural networks,'' in \\emph{NIPS}, 2016, pp. 4107--4115.\n\\bibitem{rastegari2016xnor}\nM.~Rastegari, V.~Ordonez, J.~Redmon, and A.~Farhadi, ``Xnor-net: Imagenet\n classification using binary convolutional neural networks,'' in\n \\emph{ECCV}.\\hskip 1em plus 0.5em minus 0.4em\\relax Springer, 2016, pp.\n 525--542.\n\\bibitem{hou2016loss}\nL.~Hou, Q.~Yao, and J.~T. Kwok, ``Loss-aware binarization of deep networks,''\n \\emph{arXiv:1611.01600}, 2016.\n\\bibitem{lee2014proximal}\nJ.~D. Lee, Y.~Sun, and M.~A. Saunders, ``Proximal newton-type methods for\n minimizing composite functions,'' \\emph{J. 
Optimization}, vol.~24, no.~3, pp.\n 1420--1443, 2014.\n\\bibitem{ott2016recurrent}\nJ.~Ott, Z.~Lin, Y.~Zhang, S.-C. Liu, and Y.~Bengio, ``Recurrent neural networks\n with limited numerical precision,'' \\emph{arXiv:1608.06902}, 2016.\n\\bibitem{alom2018effective}\nM.~Z. Alom, A.~T. Moody, N.~Maruyama, B.~C. Van~Essen, and T.~M. Taha,\n ``Effective quantization approaches for recurrent neural networks,'' in\n \\emph{IJCNN}.\\hskip 1em plus 0.5em minus 0.4em\\relax IEEE, 2018, pp. 1--8.\n\\bibitem{li2016ternary}\nF.~Li, B.~Zhang, and B.~Liu, ``Ternary weight networks,''\n \\emph{arXiv:1605.04711}, 2016.\n\\bibitem{hwang2014fixed}\nK.~Hwang and W.~Sung, ``Fixed-point feedforward deep neural network design\n using weights+ 1, 0, and- 1,'' in \\emph{SiPS}.\\hskip 1em plus 0.5em minus\n 0.4em\\relax IEEE, 2014, pp. 1--6.\n\\bibitem{lloyd1982least}\nS.~Lloyd, ``Least squares quantization in pcm,'' \\emph{Tran. on information\n theory}, vol.~28, no.~2, pp. 129--137, 1982.\n\\bibitem{zhu2016trained}\nC.~Zhu, S.~Han, H.~Mao, and W.~J. Dally, ``Trained ternary quantization,''\n \\emph{arXiv:1612.01064}, 2016.\n\\bibitem{wang2018hitnet}\nP.~Wang, X.~Xie, L.~Deng, G.~Li, D.~Wang, and Y.~Xie, ``Hitnet: hybrid ternary\n recurrent neural network,'' in \\emph{NIPS}, 2018, pp. 604--614.\n\\bibitem{he2016effective}\nQ.~He, H.~Wen, S.~Zhou, Y.~Wu, C.~Yao, X.~Zhou, and Y.~Zou, ``Effective\n quantization methods for recurrent neural networks,''\n \\emph{arXiv:1611.10176}, 2016.\n\\bibitem{zhou2017balanced}\nS.-C. Zhou, Y.-Z. Wang, H.~Wen, Q.-Y. He, and Y.-H. Zou, ``Balanced\n quantization: An effective and efficient approach to quantized neural\n networks,'' \\emph{J. of Computer Science and Technology}, vol.~32, no.~4, pp.\n 667--682, 2017.\n\\bibitem{muller2015rounding}\nL.~K. 
Muller and G.~Indiveri, ``Rounding methods for neural networks with low\n resolution synaptic weights,'' \\emph{arXiv:1504.05767}, 2015.\n\\bibitem{gong2014compressing}\nY.~Gong, L.~Liu, M.~Yang, and L.~Bourdev, ``Compressing deep convolutional\n networks using vector quantization,'' \\emph{arXiv:1412.6115}, 2014.\n\\bibitem{guo2017network}\nY.~Guo, A.~Yao, H.~Zhao, and Y.~Chen, ``Network sketching: Exploiting binary\n structure in deep cnns,'' in \\emph{CVPR}, 2017, pp. 5955--5963.\n\\bibitem{xu2018alternating}\nC.~Xu, J.~Yao, Z.~Lin, W.~Ou, Y.~Cao, Z.~Wang, and H.~Zha, ``Alternating\n multi-bit quantization for recurrent neural networks,''\n \\emph{arXiv:1802.00150}, 2018.\n\\bibitem{mikolov2013word2vec}\nT.~Mikolov, K.~Chen, G.~Corrado, J.~Dean, L.~Sutskever, and G.~Zweig,\n ``word2vec,'' \\emph{URL https://code. google. com/p/word2vec}, vol.~22, 2013.\n\\bibitem{shen2019q}\nS.~Shen, Z.~Dong, J.~Ye, L.~Ma, Z.~Yao, A.~Gholami, M.~W. Mahoney, and\n K.~Keutzer, ``Q-bert: Hessian based ultra low precision quantization of\n bert,'' \\emph{arXiv:1909.05840}, 2019.\n\\bibitem{ba2014deep}\nJ.~Ba and R.~Caruana, ``Do deep nets really need to be deep?'' in \\emph{NIPS},\n 2014, pp. 2654--2662.\n\\bibitem{sau2016deep}\nB.~B. Sau and V.~N. Balasubramanian, ``Deep model compression: Distilling\n knowledge from noisy teachers,'' \\emph{arXiv:1610.09650}, 2016.\n\\bibitem{hinton2015distilling}\nG.~Hinton, O.~Vinyals, and J.~Dean, ``Distilling the knowledge in a neural\n network,'' \\emph{arXiv:1503.02531}, 2015.\n\\bibitem{czarnecki2017sobolev}\nW.~M. 
Czarnecki, S.~Osindero, M.~Jaderberg, G.~Swirszcz, and R.~Pascanu,\n ``Sobolev training for neural networks,'' in \\emph{NIPS}, 2017, pp.\n 4278--4287.\n\\bibitem{mishra2017apprentice}\nA.~Mishra and D.~Marr, ``Apprentice: Using knowledge distillation techniques to\n improve low-precision network accuracy,'' \\emph{arXiv:1711.05852}, 2017.\n\\bibitem{polino2018model}\nA.~Polino, R.~Pascanu, and D.~Alistarh, ``Model compression via distillation\n and quantization,'' \\emph{arXiv:1802.05668}, 2018.\n\\bibitem{romero2014fitnets}\nA.~Romero, N.~Ballas, S.~E. Kahou, A.~Chassang, C.~Gatta, and Y.~Bengio,\n ``Fitnets: Hints for thin deep nets,'' \\emph{arXiv:1412.6550}, 2014.\n\\bibitem{yim2017gift}\nJ.~Yim, D.~Joo, J.~Bae, and J.~Kim, ``A gift from knowledge distillation: Fast\n optimization, network minimization and transfer learning,'' in \\emph{CVPR},\n 2017, pp. 4133--4141.\n\\bibitem{mcclure2016representational}\nP.~McClure and N.~Kriegeskorte, ``Representational distance learning for deep\n neural networks,'' \\emph{Frontiers in computational neuroscience}, vol.~10,\n p. 131, 2016.\n\\bibitem{kim2016sequence}\nY.~Kim and A.~M. Rush, ``Sequence-level knowledge distillation,''\n \\emph{arXiv:1606.07947}, 2016.\n\\bibitem{freitag2017ensemble}\nM.~Freitag, Y.~Al-Onaizan, and B.~Sankaran, ``Ensemble distillation for neural\n machine translation,'' \\emph{arXiv:1702.01802}, 2017.\n\\bibitem{zhang2018deep}\nY.~Zhang, T.~Xiang, T.~M. Hospedales, and H.~Lu, ``Deep mutual learning,'' in\n \\emph{CVPR}, 2018, pp. 4320--4328.\n\\bibitem{anil2018large}\nR.~Anil, G.~Pereyra, A.~Passos, R.~Ormandi, G.~E. Dahl, and G.~E. Hinton,\n ``Large scale distributed neural network training through online\n distillation,'' \\emph{arXiv:1804.03235}, 2018.\n\\bibitem{mirzadeh2019improved}\nS.-I. 
Mirzadeh, M.~Farajtabar, A.~Li, N.~Levine, A.~Matsukawa, and\n H.~Ghasemzadeh, ``Improved knowledge distillation via teacher assistant,''\n \\emph{arXiv:1902.03393}, 2019.\n\\bibitem{you2017learning}\nS.~You, C.~Xu, C.~Xu, and D.~Tao, ``Learning from multiple teacher networks,''\n in \\emph{KDD}, 2017, pp. 1285--1294.\n\\bibitem{tan2019multilingual}\nX.~Tan, Y.~Ren, D.~He, T.~Qin, Z.~Zhao, and T.-Y. Liu, ``Multilingual neural\n machine translation with knowledge distillation,'' \\emph{arXiv:1902.10461},\n 2019.\n\\bibitem{heo2019knowledge}\nB.~Heo, M.~Lee, S.~Yun, and J.~Y. Choi, ``Knowledge distillation with\n adversarial samples supporting decision boundary,'' in \\emph{AAAI}, vol.~33,\n 2019, pp. 3771--3778.\n\\bibitem{xu2017training}\nZ.~Xu, Y.-C. Hsu, and J.~Huang, ``Training shallow and thin networks for\n acceleration via knowledge distillation with conditional adversarial\n networks,'' \\emph{arXiv:1709.00513}, 2017.\n\\bibitem{wang2018kdgan}\nX.~Wang, R.~Zhang, Y.~Sun, and J.~Qi, ``Kdgan: Knowledge distillation with\n generative adversarial networks,'' in \\emph{NIPS}, 2018, pp. 
775--786.\n\\bibitem{zhao2019extreme}\nS.~Zhao, R.~Gupta, Y.~Song, and D.~Zhou, ``Extreme language model compression\n with optimal subwords and shared projections,'' \\emph{arXiv:1909.11687},\n 2019.\n\\bibitem{sun2019patient}\nS.~Sun, Y.~Cheng, Z.~Gan, and J.~Liu, ``Patient knowledge distillation for bert\n model compression,'' \\emph{arXiv:1908.09355}, 2019.\n\\bibitem{jiao2019tinybert}\nX.~Jiao, Y.~Yin, L.~Shang, X.~Jiang, X.~Chen, L.~Li, F.~Wang, and Q.~Liu,\n ``Tinybert: Distilling bert for natural language understanding,''\n \\emph{arXiv:1909.10351}, 2019.\n\\bibitem{liu2019improving}\nX.~Liu, P.~He, W.~Chen, and J.~Gao, ``Improving multi-task deep neural networks\n via knowledge distillation for natural language understanding,''\n \\emph{arXiv:1904.09482}, 2019.\n\\bibitem{wang2020minilm}\nW.~Wang, F.~Wei, L.~Dong, H.~Bao, N.~Yang, and M.~Zhou, ``Minilm: Deep\n self-attention distillation for task-agnostic compression of pre-trained\n transformers,'' \\emph{arXiv:2002.10957}, 2020.\n\\bibitem{tang2019distilling}\nR.~Tang, Y.~Lu, L.~Liu, L.~Mou, O.~Vechtomova, and J.~Lin, ``Distilling\n task-specific knowledge from bert into simple neural networks,''\n \\emph{arXiv:1903.12136}, 2019.\n\\bibitem{mukherjee2020xtremedistil}\nS.~Mukherjee and A.~H. Awadallah, ``Xtremedistil: Multi-stage distillation for\n massive multilingual models,'' in \\emph{ACL}, 2020, pp. 2221--2234.\n\\bibitem{ling2015finding}\nW.~Ling, T.~Lu{\\'\\i}s, L.~Marujo, R.~F. Astudillo, S.~Amir, C.~Dyer, A.~W.\n Black, and I.~Trancoso, ``Finding function in form: Compositional character\n models for open vocabulary word representation,'' \\emph{arXiv:1508.02096},\n 2015.\n\\bibitem{jozefowicz2016exploring}\nR.~Jozefowicz, O.~Vinyals, M.~Schuster, N.~Shazeer, and Y.~Wu, ``Exploring the\n limits of language modeling,'' \\emph{arXiv:1602.02410}, 2016.\n\\bibitem{kim2016character}\nY.~Kim, Y.~Jernite, D.~Sontag, and A.~M. 
Rush, ``Character-aware neural\n language models,'' in \\emph{AAAI}, 2016.\n\\bibitem{chen2015compressing}\nW.~Chen, J.~Wilson, S.~Tyree, K.~Weinberger, and Y.~Chen, ``Compressing neural\n networks with the hashing trick,'' in \\emph{ICML}, 2015, pp. 2285--2294.\n\\bibitem{lu2016learning}\nZ.~Lu, V.~Sindhwani, and T.~N. Sainath, ``Learning compact recurrent neural\n networks,'' in \\emph{ICASSP}.\\hskip 1em plus 0.5em minus 0.4em\\relax IEEE,\n 2016, pp. 5960--5964.\n\\bibitem{tay2019lightweight}\nY.~Tay, A.~Zhang, L.~A. Tuan, J.~Rao, S.~Zhang, S.~Wang, J.~Fu, and S.~C. Hui,\n ``Lightweight and efficient neural natural language processing with\n quaternion networks,'' \\emph{arXiv:1906.04393}, 2019.\n\\bibitem{chen2016compressing}\nY.~Chen, L.~Mou, Y.~Xu, G.~Li, and Z.~Jin, ``Compressing neural language models\n by sparse word representations,'' \\emph{arXiv:1610.03950}, 2016.\n\\bibitem{li2016lightrnn}\nX.~Li, T.~Qin, J.~Yang, and T.-Y. Liu, ``Lightrnn: Memory and\n computation-efficient recurrent neural networks,'' in \\emph{NIPS}, 2016, pp.\n 4385--4393.\n\\bibitem{suzuki2016learning}\nJ.~Suzuki and M.~Nagata, ``Learning compact neural word embeddings by parameter\n space sharing.'' in \\emph{IJCAI}, 2016, pp. 2046--2052.\n\\bibitem{li2018slim}\nZ.~Li, R.~Kulhanek, S.~Wang, Y.~Zhao, and S.~Wu, ``Slim embedding layers for\n recurrent neural language models,'' in \\emph{AAAI}, 2018.\n\\bibitem{lan2019albert}\nZ.~Lan, M.~Chen, S.~Goodman, K.~Gimpel, P.~Sharma, and R.~Soricut, ``Albert: A\n lite bert for self-supervised learning of language representations,''\n \\emph{arXiv:1909.11942}, 2019.\n\\bibitem{dehghani2018universal}\nM.~Dehghani, S.~Gouws, O.~Vinyals, J.~Uszkoreit, and {\\L}.~Kaiser, ``Universal\n transformers,'' \\emph{arXiv:1807.03819}, 2018.\n\\bibitem{bai2019deep}\nS.~Bai, J.~Z. Kolter, and V.~Koltun, ``Deep equilibrium models,''\n \\emph{arXiv:1909.01377}, 2019.\n\\bibitem{oseledets2011tensor}\nI.~V. 
Oseledets, ``Tensor-train decomposition,'' \\emph{SIAM J. on Scientific\n Computing}, vol.~33, no.~5, pp. 2295--2317, 2011.\n\\bibitem{carroll1970analysis}\nJ.~D. Carroll and J.-J. Chang, ``Analysis of individual differences in\n multidimensional scaling via an n-way generalization of “eckart-young”\n decomposition,'' \\emph{Psychometrika}, vol.~35, no.~3, pp. 283--319, 1970.\n\\bibitem{tucker1966some}\nL.~R. Tucker, ``Some mathematical notes on three-mode factor analysis,''\n \\emph{Psychometrika}, vol.~31, no.~3, pp. 279--311, 1966.\n\\bibitem{prabhavalkar2016compression}\nR.~Prabhavalkar, O.~Alsharif, A.~Bruguier, and L.~McGraw, ``On the compression\n of recurrent neural networks with an application to lvcsr acoustic modeling\n for embedded speech recognition,'' in \\emph{ICASSP}.\\hskip 1em plus 0.5em\n minus 0.4em\\relax IEEE, 2016, pp. 5970--5974.\n\\bibitem{sak2014long}\nH.~Sak, A.~Senior, and F.~Beaufays, ``Long short-term memory based recurrent\n neural network architectures for large vocabulary speech recognition,''\n \\emph{arXiv:1402.1128}, 2014.\n\\bibitem{faruqui2015sparse}\nM.~Faruqui, Y.~Tsvetkov, D.~Yogatama, C.~Dyer, and N.~Smith, ``Sparse\n overcomplete word vector representations,'' \\emph{arXiv:1506.02004}, 2015.\n\\bibitem{shu2017compressing}\nR.~Shu and H.~Nakayama, ``Compressing word embeddings via deep compositional\n code learning,'' \\emph{arXiv:1711.01068}, 2017.\n\\bibitem{wang2019structured}\nZ.~Wang, J.~Wohlwend, and T.~Lei, ``Structured pruning of large language\n models,'' \\emph{arXiv:1910.04732}, 2019.\n\\bibitem{chen2015strategies}\nW.~Chen, D.~Grangier, and M.~Auli, ``Strategies for training large vocabulary\n neural language models,'' \\emph{arXiv:1512.04906}, 2015.\n\\bibitem{variani2019west}\nE.~Variani, A.~T. Suresh, and M.~Weintraub, ``West: Word encoded sequence\n transducers,'' in \\emph{ICASSP}.\\hskip 1em plus 0.5em minus 0.4em\\relax IEEE,\n 2019, pp. 
7340--7344.\n\\bibitem{tjandra2017compressing}\nA.~Tjandra, S.~Sakti, and S.~Nakamura, ``Compressing recurrent neural network\n with tensor train,'' in \\emph{IJCNN}.\\hskip 1em plus 0.5em minus 0.4em\\relax\n IEEE, 2017, pp. 4451--4458.\n\\bibitem{khrulkov2019tensorized}\nV.~Khrulkov, O.~Hrinchuk, L.~Mirvakhabova, and I.~Oseledets, ``Tensorized\n embedding layers for efficient model compression,'' \\emph{arXiv:1901.10787},\n 2019.\n\\bibitem{ye2018learning}\nJ.~Ye, L.~Wang, G.~Li, D.~Chen, S.~Zhe, X.~Chu, and Z.~Xu, ``Learning compact\n recurrent neural networks with block-term tensor decomposition,'' in\n \\emph{CVPR}, 2018, pp. 9378--9387.\n\\bibitem{ma2019tensorized}\nX.~Ma, P.~Zhang, S.~Zhang, N.~Duan, Y.~Hou, D.~Song, and M.~Zhou, ``A\n tensorized transformer for language modeling,'' \\emph{arXiv:1906.09777},\n 2019.\n\\bibitem{child2019generating}\nR.~Child, S.~Gray, A.~Radford, and I.~Sutskever, ``Generating long sequences\n with sparse transformers,'' \\emph{arXiv:1904.10509}, 2019.\n\\bibitem{guo2019star}\nQ.~Guo, X.~Qiu, P.~Liu, Y.~Shao, X.~Xue, and Z.~Zhang, ``Star-transformer,'' in\n \\emph{NAACL-HLT}, 2019, pp. 1315--1325.\n\\bibitem{kitaev2020reformer}\nN.~Kitaev, {\\L}.~Kaiser, and A.~Levskaya, ``Reformer: The efficient\n transformer,'' \\emph{arXiv:2001.04451}, 2020.\n\\bibitem{wang2020linformer}\nS.~Wang, B.~Li, M.~Khabsa, H.~Fang, and H.~Ma, ``Linformer: Self-attention with\n linear complexity,'' \\emph{arXiv:2006.04768}, 2020.\n\\bibitem{tay2020sparse}\nY.~Tay, D.~Bahri, L.~Yang, D.~Metzler, and D.-C. 
Juan, ``Sparse sinkhorn\n attention,'' \\emph{arXiv:2002.11296}, 2020.\n\\bibitem{shen2018efficient}\nZ.~Shen, M.~Zhang, H.~Zhao, S.~Yi, and H.~Li, ``Efficient attention: Attention\n with linear complexities,'' \\emph{arXiv:1812.01243}, 2018.\n\\bibitem{katharopoulos2020transformers}\nA.~Katharopoulos, A.~Vyas, N.~Pappas, and F.~Fleuret, ``Transformers are rnns:\n Fast autoregressive transformers with linear attention,''\n \\emph{arXiv:2006.16236}, 2020.\n\\bibitem{kapur2017low}\nS.~Kapur, A.~Mishra, and D.~Marr, ``Low precision rnns: Quantizing rnns without\n losing accuracy,'' \\emph{arXiv:1710.07706}, 2017.\n\\bibitem{hou2018loss}\nL.~Hou and J.~T. Kwok, ``Loss-aware weight quantization of deep networks,''\n \\emph{arXiv:1802.08635}, 2018.\n\\bibitem{grachev2019compression}\nA.~M. Grachev, D.~I. Ignatov, and A.~V. Savchenko, ``Compression of recurrent\n neural networks for efficient language modeling,'' \\emph{Applied Soft\n Computing}, vol.~79, pp. 354--362, 2019.\n\\bibitem{damani20_pakdd}\nS.~Damani, K.~N. Narahari, A.~Chatterjee, M.~Gupta, and P.~Agrawal, ``Optimized\n transformer models for faq answering,'' in \\emph{PAKDD}, 2020, p. To appear.\n\\bibitem{yang2018accelerating}\nY.~Yang, K.~Liang, X.~Xiao, Z.~Xie, L.~Jin, J.~Sun, and W.~Zhou, ``Accelerating\n and compressing lstm based model for online handwritten chinese character\n recognition,'' in \\emph{ICFHR}.\\hskip 1em plus 0.5em minus 0.4em\\relax IEEE,\n 2018, pp. 110--115.\n\\bibitem{lazaridou2013fish}\nA.~Lazaridou, E.~M. Vecchi, and M.~Baroni, ``Fish transporters and miracle\n homes: How compositional distributional semantics can help np parsing,'' in\n \\emph{EMNLP}, 2013, pp. 
1908--1913.\n\\bibitem{wangperawong2018attending}\nA.~Wangperawong, ``Attending to mathematical language with transformers,''\n \\emph{arXiv:1812.02825}, 2018.\n\\bibitem{linzen2016assessing}\nT.~Linzen, E.~Dupoux, and Y.~Goldberg, ``Assessing the ability of lstms to\n learn syntax-sensitive dependencies,'' \\emph{TACL}, vol.~4, pp. 521--535,\n 2016.\n\\bibitem{zaremba2014learning}\nW.~Zaremba and I.~Sutskever, ``Learning to execute,'' \\emph{arXiv:1410.4615},\n 2014.\n\\end{thebibliography}\n\\bibliographystyle{ACM-Reference-Format}\n\\bibliography{referencesShort}\n\\end{document}\n\\endinput", "id": "7ea3f834-f595-4c1b-afed-044d53e835ea", "level": "subsection", "origin_cites_number": 3, "parent_id": "d06ca9cc-bcd7-4f53-9be8-ee86f8754664", "prefix_titles": [ [ "title", "Compression of Deep Learning Models for Text: A Survey" ], [ "section", "Summary and Future Directions" ], [ "subsection", "Future Trends" ] ], "subsections": [], "title": "Future Trends" } ]
77
[ 844, 4478, 4331, 4479, 4480, 4481, 4482, 4483, 7826, 4485, 4484, 4486, 4487, 1568, 4351, 685, 4488, 8375, 2670, 4489, 4490, 851, 4491, 4492, 852, 4493, 8797, 4494, 4496, 4495, 4498, 4497, 2488, 8798, 4499, 4500, 4503, 4504, 681, 8799, 8390, 4501, 4502, 4506, 4505, 4508, 4507, 4513, 4509, 4511, 2481, 4512, 4514, 4510, 854, 4515, 1150, 9120, 856, 7333, 7, 4516, 4517, 4036, 4522, 4520, 4518, 4521, 4519, 4523, 4524, 4525, 4526, 8800, 4529, 4527, 4530, 4528, 4531, 793, 794, 7298, 798, 7371, 1473, 7827, 4536, 4533, 7829, 4532, 4534, 4535, 7828, 849 ]
1.403818
[ "Akshay L Chandra", "Sai Vikas Desai", "Wei Guo", "Vineeth N Balasubramanian" ]
Computer Vision with Deep Learning for Plant Phenotyping in Agriculture: A Survey
2020
2020-06-18T14:21:19Z
cs.CV
In light of growing challenges in agriculture with ever growing food demand across the world, efficient crop management techniques are necessary to increase crop yield. Precision agriculture techniques allow the stakeholders to make effective and customized crop management decisions based on data gathered from monitoring crop environments. Plant phenotyping techniques play a major role in accurate crop monitoring. Advancements in deep learning have made previously difficult phenotyping tasks possible. This survey aims to introduce the reader to the state of the art research in deep plant phenotyping.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "40b8b988-fe57-4e55-9221-27ac302d3a6f", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Computer Vision with Deep Learning for Plant Phenotyping in Agriculture: A Survey" ] ], "subsections": [ "50e565bb-2ea0-4b95-baa0-7a3211a9fe18", "61c27a51-9e54-4459-89d0-6fb5a235e790", "be6f63d3-6b39-458c-918e-c883b38ba7a8", "fcb14fe1-1d8d-475b-8ed7-cf7547edb6ff", "6bf986f3-5fd9-4afa-a45b-7964c94d86eb", "1cab82e7-7d3c-4e75-913b-94d66346fc31" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "Population growth, increasing incomes, and rapid urbanization in developing countries are expected to cause a drastic hike in food demand. This projected rise in food demand poses several challenges to agriculture. Owing to a continuous decline in global cultivable land , increasing the productivity of the existing agricultural land is highly necessary. This need has led to the scientific community focusing their efforts on developing efficient and sustainable ways to increase crop yield. To this end, precision agriculture techniques have attracted a lot of attention. Precision agriculture is a set of methods to monitor crops, gather data, and carry out informed crop management tasks such as applying the optimum amount of water, selecting suitable pesticides, and reducing environmental impact. These methods involve the usage of specialized devices such as sensors, UAVs, and static cameras to monitor the crops. Accurate crop monitoring goes a long way in assisting farmers in making the right choices to obtain the maximum yield. Plant phenotyping, a rapidly emerging research area, plays a significant role in understanding crop-related traits. Plant phenotyping is the science of characterizing and quantifying the physical and physiological traits of a plant. 
It provides a quantitative assessment of the plant's properties and its behavior in various environmental conditions. Understanding these properties is crucial in performing effective crop management. \nResearch in plant phenotyping has grown rapidly thanks to the availability of cost-effective and easy-to-use digital imaging devices such as RGB, multispectral, and hyperspectral cameras, which have facilitated the collection of large amounts of data. This influx of data coupled with the usage of machine learning algorithms has fueled the development of various high throughput phenotyping tools [refs] for tasks such as weed detection, fruit/organ counting, disease detection and yield estimation. A machine learning pipeline typically consists of feature extraction followed by a classification/regression module for prediction. While machine learning techniques have helped build sophisticated phenotyping tools, they are known to lack robustness. They rely heavily on handcrafted feature extraction techniques and manual hyperparameter tuning methods. As a result, if feature extraction is not carefully done under a domain expert's supervision, they tend to perform poorly in uncontrolled environments such as agricultural fields, where factors such as lighting, weather, exposure, etc. often cannot be regulated. Hence, feature extraction from data has been one of the major bottlenecks in the development of efficient high throughput plant phenotyping systems.\nAdvancements in deep learning, a sub-field of machine learning which allows for automatic feature extraction and prediction on large-scale data, have led to a surge in the development of visual plant phenotyping methods. Deep learning is particularly well-known for its effectiveness in handling vision-based tasks such as image classification, object detection, semantic segmentation, and scene understanding. 
Coincidentally, many of these tasks form the backbone for various plant phenotyping tasks such as disease detection, fruit detection, and yield estimation. Fig. \\ref{fig-ml-dl} illustrates the difference between machine learning based plant phenotyping and deep learning based plant phenotyping. We believe that the expressive power and robustness of deep learning systems can be effectively leveraged by plant researchers to identify complex patterns from raw data and devise efficient precision agriculture methodologies. The purpose of this survey is to enable the readers to get a bird's eye view of the advancements in the field of deep learning based plant phenotyping, understand the existing issues, and become familiar with some of the open problems which warrant further research.\n\\begin{figure}\n\\centerline{\\includegraphics[scale=0.7]{figures/ml-dl.pdf}}\n\\caption{Difference between ML and DL based Plant Phenotyping.}\n\\label{fig-ml-dl}\n\\end{figure}", "id": "50e565bb-2ea0-4b95-baa0-7a3211a9fe18", "level": "section", "origin_cites_number": 5, "parent_id": "40b8b988-fe57-4e55-9221-27ac302d3a6f", "prefix_titles": [ [ "title", "Computer Vision with Deep Learning for Plant Phenotyping in Agriculture: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "61c27a51-9e54-4459-89d0-6fb5a235e790", "level": "section", "origin_cites_number": 0, "parent_id": "40b8b988-fe57-4e55-9221-27ac302d3a6f", "prefix_titles": [ [ "title", "Computer Vision with Deep Learning for Plant Phenotyping in Agriculture: A Survey" ], [ "section", "Background" ] ], "subsections": [ "3bee11d8-8a06-40e7-a105-9445d5a61266", "cd19296d-3914-418b-a3bf-11f032138dc2", "ad075082-3735-469d-88a5-712030a4f4ec" ], "title": "Background" }, { "cite_extract_rate": 0, "cites": [], "content": "Plant phenotyping is the science of quantifying the physical and physiological traits of a plant. 
Plant phenotyping mainly benefits two communities: farmers and plant breeders. By better understanding the traits of the crop, a farmer can optimize crop yield by making informed crop management decisions. Similarly, understanding the crop's behavior is crucial for plant breeders to select the best possible crop variety for a given location and environment. In the past, plant phenotyping was a manual endeavor. The process of manually observing a small set of crop samples and reporting observations periodically was slow, labor-intensive and inefficient. The low throughput nature of these methods has impeded the progress in plant breeding research. However, the advent of modern data acquisition methods with various sensors, cameras and UAVs (Unmanned Aerial Vehicles), coupled with advances in machine learning techniques, has resulted in the development of high-throughput plant phenotyping methods that can be effectively used for precision agriculture. \nDepending on the method of data collection, plant phenotyping techniques can be classified into ground-based, aerial, and satellite-based methods. In ground-based phenotyping, high precision sensors are embedded in handheld devices or mounted on movable vehicles to measure useful traits such as plant height, plant biomass, crop development stage, crop yield, etc. Fig. \ref{fig-three-modes} contrasts the discussed classifications. Movable phenotyping vehicles like BoniRob have been developed, on which RGB cameras, hyperspectral cameras, LIDAR sensors, GPS receivers and other sensors can be mounted. Aerial-based methods typically involve the usage of Unmanned Aerial Vehicles (UAVs) for crop monitoring. The recent advancements in UAVs and high resolution cameras have allowed researchers to obtain high quality crop images. Tasks such as weed mapping, crop yield estimation, plant disease detection and pesticide spraying have been effectively carried out by UAVs. 
Satellite-based plant phenotyping involves remote sensing of agricultural plots from satellites such as Landsat-8 and WorldView-3. Satellite-based methods have typically been used for crop health monitoring over a large-scale area such as a region/country. \nHowever, the cost of obtaining satellite images, the effect of clouds and the time gap between capturing and obtaining images inhibit the applicability of satellite imagery for high-throughput plant phenotyping in precision agriculture. \n\smallskip\nWith a variety of data collection tools at our disposal, large amounts of image and sensor data have been made available for plant phenotyping research. The next section introduces deep learning, a set of methods which can effectively recognize useful patterns in huge datasets.", "id": "3bee11d8-8a06-40e7-a105-9445d5a61266", "level": "subsection", "origin_cites_number": 1, "parent_id": "61c27a51-9e54-4459-89d0-6fb5a235e790", "prefix_titles": [ [ "title", "Computer Vision with Deep Learning for Plant Phenotyping in Agriculture: A Survey" ], [ "section", "Background" ], [ "subsection", "Plant Phenotyping" ] ], "subsections": [], "title": "Plant Phenotyping" }, { "cite_extract_rate": 0, "cites": [], "content": "Machine Learning (ML) is a subset of Artificial Intelligence (AI) that deals with an algorithmic approach of learning from observational data without being explicitly programmed. ML has revolutionized several fields in the last few decades. Neural Networks (NN) are a sub-field of ML, and it was this sub-field that spawned Deep Learning (DL). Among the most prominent factors that contributed to the huge boost of deep learning are the appearance of large, high-quality, publicly available labelled datasets, along with the empowerment of parallel GPU computing, which enabled the transition from CPU-based to GPU-based training, thus allowing for significant acceleration in deep models’ training. 
Since its resurgence in 2006 , the DL community has been creating ever more complex and intelligent algorithms, showing better-than-human performance on several intelligent tasks. The \textit{deep} in deep learning comes from the deep architectures of learning or the hierarchical nature of its algorithms. DL algorithms stack several layers of non-linear information processing units, called Artificial Neurons (AN), between the input and output layers. The stacking of these ANs in a hierarchical fashion allows for exploitation of feature learning and pattern recognition through efficient learning algorithms. It is proven that NNs are universal approximators of any function, making DL task agnostic . Fig. \ref{fig-taxonomy-of-ai} depicts the taxonomy of AI. \n\begin{figure}\n\centerline{\includegraphics[scale=0.35]{figures/taxonomy-of-ai.pdf}}\n\caption{The taxonomy of AI . AI: Artificial Intelligence; ML: Machine Learning; NN: Neural Networks; DL: Deep Learning; SNN: Spiking Neural Networks.}\n\label{fig-taxonomy-of-ai}\n\end{figure}\nDeep learning approaches may be categorized as follows: supervised, semi-supervised or partially supervised, and unsupervised\footnote{Reinforcement Learning (RL) or Deep RL (DRL) is often treated as a semi-supervised or sometimes unsupervised approach.}. Supervised learning techniques use labeled data. In supervised DL, the environment includes sets of input and corresponding output pairs (often in large amounts), a criterion that evaluates model performance at all times, called the cost or loss function, and an optimizing algorithm that minimizes the cost function with respect to the given data. Semi-supervised learning techniques use only partially labeled datasets (usually small amounts of labeled data, large amounts of unlabeled data). The popular Generative Adversarial Networks (GAN) are semi-supervised learning techniques. Unsupervised learning systems function without the presence of labeled data. 
In this case, the system learns the internal representation or important features to discover unknown relationships or structure within the input data. Often clustering, dimensionality reduction, and generative techniques are considered as unsupervised learning approaches.", "id": "cd19296d-3914-418b-a3bf-11f032138dc2", "level": "subsection", "origin_cites_number": 7, "parent_id": "61c27a51-9e54-4459-89d0-6fb5a235e790", "prefix_titles": [ [ "title", "Computer Vision with Deep Learning for Plant Phenotyping in Agriculture: A Survey" ], [ "section", "Background" ], [ "subsection", "Deep Learning" ] ], "subsections": [], "title": "Deep Learning" }, { "cite_extract_rate": 0.22222222222222202, "cites": [ 1034, 486 ], "content": "\subsubsection*{Convolutional Neural Networks} \n\begin{figure}\n\centerline{\includegraphics[scale=0.4]{figures/cnn.pdf}}\n\caption{The structure of a CNN , consisting of convolutional, pooling, and fully-connected layers.}\n\label{fig-cnn}\n\end{figure}\nConvolutional Neural Networks (CNNs) are a subclass of neural networks that take advantage of the spatial structure of the inputs. This network structure was first proposed by Fukushima in 1988 . It was not widely used then, however, due to the limits of the computing hardware available for training the network. In the 1990s, LeCun et al. applied a gradient-based learning algorithm to CNNs and obtained successful results for the handwritten digit classification problem. CNNs have been extremely successful in computer vision applications, such as face recognition, object detection, powering vision in robotics, and self-driving cars. CNN models have a standard structure consisting of alternating convolutional layers and pooling layers (often each pooling layer is placed after a convolutional layer). The last layers are a small number of fully-connected layers, and the final layer is a softmax classifier as shown in Fig. \ref{fig-cnn}. 
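The alternating structure just described (convolution and pooling, then fully connected layers ending in a softmax) can be sketched with plain numpy. This is an illustrative, untrained forward pass with randomly initialized weights, not a reproduction of any architecture cited in this survey; shapes and layer sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2D convolution (cross-correlation, as in most DL libraries)."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling (spatial downsampling)."""
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Forward pass: 28x28 input -> 3x3 conv -> ReLU -> 2x2 max pool -> flatten -> dense -> softmax
x = rng.random((28, 28))                     # toy single-channel "image"
kernel = rng.standard_normal((3, 3))
feat = np.maximum(conv2d(x, kernel), 0.0)    # 26x26 feature map
pooled = max_pool(feat)                      # 13x13 after pooling
vec = pooled.ravel()                         # 1D feature vector, length 169
W = rng.standard_normal((10, vec.size)) * 0.01
probs = softmax(W @ vec)                     # 10 class probabilities, sums to 1
```

Note how pooling shrinks only the spatial dimensions, while the fully connected layer maps the flattened feature vector to class scores, exactly as in the layer descriptions that follow.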
Every layer of a CNN transforms the input volume to an output volume of neuron activations, eventually leading to the final fully connected layers, resulting in a mapping of the input data to a 1D feature vector. In a nutshell, a CNN comprises three main types of neural layers, namely, (i) convolutional layers, (ii) pooling layers, and (iii) fully connected layers. Each type of layer plays a different role.\n\smallskip\n\begin{figure}\n\centerline{\includegraphics[scale=0.3]{figures/fcn-cnn.pdf}}\n\caption{In a fully connected layer (left), each unit is connected to all units of the previous layers. In a convolutional layer (right), each unit is connected to a constant number of units in a local region of the previous layer. Furthermore, in a convolutional layer, the units all share the weights for these connections, as indicated by the shared linetypes. Figure and description are taken from .}\n\label{fig-fcnn-cnn}\n\end{figure}\n\hspace{-5mm}\textit{(i) Convolution Layers}. In the convolutional layers, a CNN\nconvolves the whole image as well as the intermediate feature maps with different kernels, generating various feature maps. Exploiting the advantages of the convolution operation, several works have proposed it as a substitute for fully connected layers with a view to attaining faster learning times. The difference between a fully connected layer and a convolutional layer is shown in Fig. \ref{fig-fcnn-cnn}. \n\smallskip\n\hspace{-5mm}\textit{(ii) Pooling Layers}. Pooling layers handle the reduction of the spatial dimensions of the input volume for the convolutional layers that immediately follow. The pooling layer does not affect the depth dimension of the volume. The operation performed by this layer is also called subsampling or downsampling, as the reduction of size leads to a simultaneous loss of information. However, such a loss is beneficial for the network because the network is forced to learn only meaningful feature representations. 
On top of that, the decrease in size leads to less computational overhead for the upcoming layers of the network, and it also works against overfitting. Average pooling and max pooling are the most commonly used strategies. In , a detailed theoretical analysis of max pooling and average pooling performance is given, whereas in it was shown that max pooling can lead to faster convergence, select superior invariant features, and improve generalization. \n\smallskip\n\hspace{-5mm}\textit{(iii) Fully Connected Layers}.\nFollowing several convolutional and pooling layers, the high-level reasoning in the neural network is performed via fully connected layers. Neurons in a fully connected layer have full connections to all activations in the previous layer, as their name implies. Their activations can hence be computed with a matrix multiplication followed by a bias offset. Fully connected layers eventually convert the 2D feature maps into a 1D feature vector. The learned vector representations could either be fed forward for classification or be used as feature vectors for further processing.\n\subsubsection*{Object Detection and Segmentation}\nObject detection and segmentation are two of the most important and challenging branches of computer vision, which have been widely applied in real-world applications, such as security monitoring, autonomous driving and so on, with the purpose of locating instances of semantic objects of a certain class. In a nutshell, object detection is the task of identifying and locating objects (with bounding boxes) in images, while the task of segmentation is to classify each pixel of an image into object classes (dog, cat, airplane, etc.). We refer readers to for more information on these tasks. Fig. 
\\ref{fig-class-det-seg} visually contrasts the difference between these tasks.\n\\begin{figure}\n\\centerline{\\includegraphics[scale=0.2]{figures/class_det_seg.png}}\n\\caption{Visual illustration of difference between tasks - Image Classification, Object Detection and Instance Segmentation. Example taken from MS-COCO Dataset .}\n\\label{fig-class-det-seg}\n\\end{figure}", "id": "ad075082-3735-469d-88a5-712030a4f4ec", "level": "subsection", "origin_cites_number": 9, "parent_id": "61c27a51-9e54-4459-89d0-6fb5a235e790", "prefix_titles": [ [ "title", "Computer Vision with Deep Learning for Plant Phenotyping in Agriculture: A Survey" ], [ "section", "Background" ], [ "subsection", "Deep Learning for Computer Vision" ] ], "subsections": [], "title": "Deep Learning for Computer Vision" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "be6f63d3-6b39-458c-918e-c883b38ba7a8", "level": "section", "origin_cites_number": 0, "parent_id": "40b8b988-fe57-4e55-9221-27ac302d3a6f", "prefix_titles": [ [ "title", "Computer Vision with Deep Learning for Plant Phenotyping in Agriculture: A Survey" ], [ "section", "Application of Deep Learning in Plant Phenotyping" ] ], "subsections": [ "7b5a3160-2cf3-4478-90b2-e4ffb2d28171", "a449abb1-6b4c-48cd-a598-c9b18f11bfc1", "3f927eff-6ed6-4137-949b-bb7a6ee40c3a" ], "title": "Application of Deep Learning in Plant Phenotyping" }, { "cite_extract_rate": 0.104477611940298, "cites": [ 1036, 7043, 7324, 1037, 1035, 7323 ], "content": "Automation in agriculture and robotic precision agriculture activities demand a lot of information about the environment, the field, the condition and the phenotype of individual plants. An increase in availability of data allowed for successful usage of such robotic tools in real-world conditions. 
Taking advantage of the available data, combined with the availability of robots such as BoniRob that navigate autonomously in fields, computer vision with deep learning has played a prominent role in realizing autonomous farming. Previously laborious jobs of actively tracking certain measurements of interest such as plant growth rate, plant stem position, biomass amount, leaf count, leaf area, inter-crop spacing, crop plant count and others can now be done almost seamlessly. \n\subsubsection*{Crop Identification and Classification}\nA crucial prerequisite for selective and plant-specific treatments is that farming robots need to be equipped with an effective plant identification and classification system providing the robot with information on where and when to trigger its actuators to perform the desired action in real-time. For example, weeds generally have no useful value in terms of food, nutrition or medicine, yet they have accelerated growth and parasitically compete with actual crops for nutrients and space. Inefficient processes such as hand weeding have led to significant losses and increasing costs due to manual labour , which is why a lot of research is being done on crop vs. weed classification and weed identification and plant seedling classification . This is extremely useful in improving the efficiency of precision farming techniques for weed control by modulating herbicide spraying according to the level of weed infestation. \n\begin{figure}\n\centerline{\includegraphics[scale=0.3]{figures/three-modes.pdf}}\n\caption{Top row of \textbf{(a)} shows BoniRob, a ground-based remote sensing robot, \textbf{(b)} shows an unmanned aerial vehicle , \textbf{(c)} shows a satellite scanning large areas of land, respectively. Bottom row across \textbf{(a)}, \textbf{(b)}, and \textbf{(c)} shows corresponding example images acquired. 
Satellite Image Credits: NASA.}\n\label{fig-three-modes}\n\end{figure}\n\subsubsection*{Crop Detection and Segmentation}\nCrop detection in the wild is arguably the most crucial step in the pipeline of several farm management tasks such as visual crop categorization , real-time plant disease and pest recognition , automatic picking and harvesting robots , health and quality monitoring of crop growth and yield estimation . However, existing deep learning networks achieving state-of-the-art performance in other research fields are not suitable for agricultural tasks of crop management such as irrigation , picking , pesticide spraying , and fertilization . The dominant cause is the lack of a diverse set of public benchmark datasets that are specifically designed for various agricultural missions. Some of the few rich datasets available are CropDeep for detection, multi-modal datasets like the Rosette plant or \textit{Arabidopsis} datasets , Sorghum-Head , Wheat-Panicle , Crop/Weed segmentation , and Crop/Tassle segmentation . Fig. \ref{fig-cropdeep} contains some examples from the CropDeep dataset. Fig. \ref{fig-rosette} depicts multi-modal annotations provided in the Rosette Plant Phenotyping dataset, i.e., annotations for detection, segmentation, and leaf center, along with otherwise rarely found metadata. \n\begin{figure}\n\centerline{\includegraphics[scale=0.3]{figures/cropdeep.png}}\n\caption{Some annotation examples from the CropDeep dataset .}\n\label{fig-cropdeep}\n\end{figure}\nEfficient yield estimation from images is also one of the key tasks for farmers and plant breeders to accurately quantify the overall throughput of their ecosystem. Recent efforts in panicle or spike detection , leaf counting , fruit detection as well as pixel-wise segmentation-based tasks such as panicle segmentation show very promising results in this direction. 
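As a generic illustration of how counting-based yield estimation sits on top of a detector's output (independent of the specific detectors cited above), the sketch below counts fruit from hypothetical bounding-box detections using confidence thresholding and greedy non-maximum suppression; the boxes, scores, and thresholds are made up for illustration.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def count_objects(detections, score_thr=0.5, iou_thr=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap a
    kept box, repeat. `detections` is a list of (box, score) pairs; the
    surviving boxes are the counted instances."""
    boxes = sorted((d for d in detections if d[1] >= score_thr),
                   key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in boxes:
        if all(iou(box, k) < iou_thr for k, _ in kept):
            kept.append((box, score))
    return kept

# Hypothetical detector output: two overlapping hits on the same fruit,
# one distinct fruit, and one low-confidence false alarm.
dets = [((10, 10, 50, 50), 0.9),
        ((12, 12, 52, 52), 0.8),      # duplicate hit on the first fruit
        ((100, 100, 140, 140), 0.7),
        ((200, 200, 220, 220), 0.3)]  # below the score threshold
print(len(count_objects(dets)))  # 2 fruits counted
```

The duplicate box is suppressed because its IoU with the kept box exceeds the threshold, which is why NMS is the usual bridge between raw detections and an instance count.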
\n\begin{figure}\n\centerline{\includegraphics[scale=0.7]{figures/RosettePlant.pdf}}\n\caption{Visual illustration of all types of annotations available in the dataset.}\n\label{fig-rosette}\n\end{figure}\n\subsubsection*{Crop Disease and Pest Recognition}\nModern technologies have given human society the ability to produce enough food to meet the demand of more than 7 billion people. However, food security remains threatened by a number of factors including climate change , the decline in pollinators , plant diseases , and others. Plant diseases are not only a threat to food security at the global scale, but can also have disastrous consequences for smallholder farmers whose livelihoods depend on healthy crops. India loses 35\% of the annual crop yield due to plant diseases . In the developing world, more than 80 percent of the agricultural production is generated by smallholder farmers , and reports of yield loss of more than 50\% due to pests and diseases are frequent . Furthermore, the largest fraction of hungry people (50\%) live in smallholder farming households , making smallholder farmers a group that’s particularly vulnerable to pathogen-derived disruptions in food supply.\nOwing to these factors, timely disease and pest recognition becomes a priority task for farmers. In addition, farmers do not have many options other than consulting fellow farmers or seeking help from government-funded helplines . The availability of public datasets such as PlantVillage , PlantDoc has allowed for progress in the area of disease and pest detection. Recent research works in pest and insect detection , invasive species detection in marine aquaculture and disease detection in plant leaves , Rice , Tomato , Banana , Grape , Sugarcane , Eggplant , Cucumber , Soybean , Olive , Tea , Coffee and other similar works take encouraging steps towards disease-free agriculture. Fig. \ref{fig-disease} depicts banana disease and pest detection outputs from . 
This work reports solutions to extant limitations in plant disease detection. \n\begin{figure}\n\centerline{\includegraphics[scale=0.3]{figures/banana.png}}\n\caption{Detected classes and expected output of the trained disease detection model. \textbf{a} Entire plant affected by banana bunchy top virus (BBTV), \textbf{b} leaves affected by black sigatoka (BS), \textbf{c} cut pseudostem of Xanthomonas wilt (BXW) affected plant showing yellow bacterial ooze, \textbf{d} fruit bunch affected by Xanthomonas wilt (BXW), \textbf{e} cut fruit affected by Xanthomonas wilt (BXW), \textbf{f} corm affected by banana corm weevil (BCW). Figure and description taken from .}\n\label{fig-disease}\n\end{figure}", "id": "7b5a3160-2cf3-4478-90b2-e4ffb2d28171", "level": "subsection", "origin_cites_number": 67, "parent_id": "be6f63d3-6b39-458c-918e-c883b38ba7a8", "prefix_titles": [ [ "title", "Computer Vision with Deep Learning for Plant Phenotyping in Agriculture: A Survey" ], [ "section", "Application of Deep Learning in Plant Phenotyping" ], [ "subsection", "Ground-Based Remote Sensing for Plant Phenotyping" ] ], "subsections": [], "title": "Ground-Based Remote Sensing for Plant Phenotyping" }, { "cite_extract_rate": 0.17391304347826, "cites": [ 8406, 1040, 1039, 1038 ], "content": "The past few decades have witnessed the great progress of unmanned aircraft vehicles (UAVs) in civilian fields, especially in photogrammetry and remote sensing. In contrast with manned aircraft and satellite platforms, the UAV platform holds many promising characteristics: flexibility, efficiency, high spatial/temporal resolution, low cost, easy operation, etc., which make it an effective complement to other remote-sensing platforms and a cost-effective means for remote sensing. 
We refer readers to the literature for detailed reports of the techniques and applications of UAVs in precision agriculture, remote sensing, search and rescue, and construction and infrastructure inspection, as well as discussions of other market opportunities. UAVs can be utilized in precision agriculture (PA) for crop management and monitoring , weed detection , irrigation scheduling , agricultural pattern detection , pesticide spraying , cattle detection , disease detection , insect detection and data collection from ground sensors (moisture, soil properties, etc.) . The deployment of UAVs in PA is a cost-effective and time-saving technology which can help improve crop yields, farm productivity and profitability in farming systems. Moreover, UAVs facilitate agricultural management, weed monitoring, and pest damage control, thereby helping to meet these challenges quickly . \nUAVs can also be utilized to monitor and quantify several factors of irrigation such as the availability of soil water, crop water need (which represents the amount of water needed by the various crops to grow optimally), rainfall amount, and the efficiency of the irrigation system . In this work , UAVs are utilized to estimate the spatial distribution of surface soil moisture using high-resolution multi-spectral imagery in combination with ground sampling. UAVs are also being used for thermal remote sensing to monitor the spatial and temporal patterns of crop diseases during various disease development phases, which reduces crop losses for farmers. This work detects early-stage development of soil-borne fungus in UAV imagery. Soil texture can be indicative of soil quality, which in turn influences crop productivity. Hence, UAV thermal images are being utilized to quantify soil texture at a regional scale by measuring the differences in land surface temperature under a relatively homogeneous climatic condition . 
Accurate assessment of crop residue is crucial for proper implementation of conservation tillage practices since crop residues provide a protective layer on agricultural fields that shields soil from wind and water. In , the authors demonstrated that aerial thermal images can explain more than 95\% of the variability in crop residue cover amount, compared to 77\% using visible and near IR images.\nFarmers must monitor crop maturity to determine the harvesting time of their crops. UAVs can be a practical solution to this problem . Farmers require accurate, early estimation of crop yield for a number of reasons, including crop insurance, planning of harvest and storage requirements, and cash flow budgeting. In , UAV images were utilized to estimate yield and total biomass of rice crops in Thailand. In , UAV images were also utilized to predict corn grain yields in the early-to-midseason crop growth stages in Germany. \nThere have also been successful efforts that seamlessly combine aerial and ground-based systems for precision agriculture . With relaxed flight regulations and drastic improvements in machine learning techniques, geo-referencing, mosaicing, and other related algorithms, UAVs hold great potential for soil and crop monitoring . 
More precision agriculture researchers are encouraged to design and implement special types of cameras and sensors on board UAVs, with the ability to monitor crops remotely and detect soil and other agricultural characteristics in real-time scenarios.", "id": "a449abb1-6b4c-48cd-a598-c9b18f11bfc1", "level": "subsection", "origin_cites_number": 23, "parent_id": "be6f63d3-6b39-458c-918e-c883b38ba7a8", "prefix_titles": [ [ "title", "Computer Vision with Deep Learning for Plant Phenotyping in Agriculture: A Survey" ], [ "section", "Application of Deep Learning in Plant Phenotyping" ], [ "subsection", "Unmanned Aircraft Vehicles for Plant Phenotyping" ] ], "subsections": [], "title": "Unmanned Aircraft Vehicles for Plant Phenotyping" }, { "cite_extract_rate": 0, "cites": [], "content": "The impact of climate change and its unforeseeable nature has caused the majority of agricultural crops to be affected in terms of their production and maintenance. With more than seven billion mouths to feed, greater demands are being put on agriculture than ever before, at the same time as land is being degraded by factors such as soil erosion, mineral exhaustion and drought. It becomes the utmost priority for governments to support farmers by providing crucial information about changing weather conditions, soil conditions and more. Currently, satellite imagery is making agriculture more efficient by reducing the scouting efforts of farmers, optimizing the use of nitrogen based on variable rates of application, optimizing water schedules, identifying field performance and benchmark fields, etc . India alone has 7 satellites specially designed for the benefit of farmers .\nSatellites and their imagery are being applied to agriculture in several ways, initially as a means of estimating crop yields and crop types , soil salinity, soil moisture, and soil pH . 
Optical and radar sensors can provide an accurate picture of the acreage being cultivated, while also differentiating between crop types and determining their health and maturity. Optical satellite sensors can detect visible and near-infrared wavelengths of light reflected from the agricultural land below. It is these wavelengths which, combined, can be manipulated to help us understand the condition of the crops. This information helps to inform the market, and provides early warning of crop failure or famine.\nBy extension, satellites are also used as a management tool through the practice of PA, where satellite images are used to characterise a farmer's fields in detail, often in combination with geographical information systems (GIS), to allow more intensive and efficient cultivation practices. For instance, different crops might be recommended for different fields while the farmer's use of fertiliser is optimised in a more economic and environmentally-friendly fashion. Providing access to satellite imagery also becomes very important for building trust among the involved parties (farmers, and government and private bodies). Web-based platforms such as Google Earth Engine, Planet.com, Earth Data Search by NASA, LandViewer by Earth Observing System, Geocento and others provide access to past and present (even daily) satellite imagery of interest. 
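One standard way these visible and near-infrared bands are combined is the Normalized Difference Vegetation Index, NDVI = (NIR - Red)/(NIR + Red): healthy vegetation reflects strongly in the near infrared and absorbs red, so higher values indicate a denser, healthier canopy. A minimal sketch over synthetic band arrays follows; the reflectance values and the 0.4 threshold are illustrative, not taken from this survey.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), bounded in [-1, 1]."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Synthetic 2x2 reflectance bands: top row vegetated, bottom row bare soil/water.
nir = np.array([[0.50, 0.60], [0.20, 0.10]])
red = np.array([[0.08, 0.10], [0.18, 0.30]])
v = ndvi(nir, red)
vegetated = v > 0.4  # illustrative threshold for "healthy vegetation"
```

Applied per pixel over a whole scene, such an index map is the kind of product the platforms above derive from raw multispectral bands before any further analysis.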
\nAgricultural monitoring is also increasingly being applied to forestry, both for forest management and as a way of characterising forests as carbon sinks to help minimise climate change – notably as part of the UN's REDD programme .", "id": "3f927eff-6ed6-4137-949b-bb7a6ee40c3a", "level": "subsection", "origin_cites_number": 10, "parent_id": "be6f63d3-6b39-458c-918e-c883b38ba7a8", "prefix_titles": [ [ "title", "Computer Vision with Deep Learning for Plant Phenotyping in Agriculture: A Survey" ], [ "section", "Application of Deep Learning in Plant Phenotyping" ], [ "subsection", "Satellites for Plant Phenotyping" ] ], "subsections": [], "title": "Satellites for Plant Phenotyping" }, { "cite_extract_rate": 0.5, "cites": [ 7217, 1050, 1051, 1048, 1045, 1041, 1047, 1043, 1049, 1044, 1042, 1046, 7323 ], "content": "While deep learning based plant phenotyping has shown great promise, the requirement of large labeled datasets still remains the bottleneck. Phenotyping tasks are often specific to the environmental and genetic conditions, and finding large datasets with such conditions is not always possible. This results in researchers needing to acquire their own dataset and label it, which is often an arduous and expensive affair. Moreover, small datasets often lead to models that overfit. Deep learning approaches optimized for working with limited labeled data would immensely help the plant phenotyping community, since this would encourage many more farmers, breeders, and researchers to employ reliable plant phenotyping techniques to optimize crop yield. To this end, we list some of the recent efforts in the area of deep plant phenotyping with limited labeled data.\n\subsection*{Data Augmentation}\nThe computer vision community has long been employing dataset augmentation techniques to grow the amount of data using artificial transformations. 
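Such artificial transformations can be sketched with plain numpy: flips, 90-degree rotations, and small circular shifts each produce a label-preserving variant of the same image. Production pipelines usually rely on library transforms; the probabilities and shift range below are arbitrary illustrative choices.

```python
import numpy as np

def augment(img, rng):
    """Return a randomly transformed copy of `img` (an H x W array):
    horizontal/vertical flips, a 90-degree rotation, and a small shift."""
    out = img
    if rng.random() < 0.5:
        out = np.fliplr(out)                 # horizontal flip
    if rng.random() < 0.5:
        out = np.flipud(out)                 # vertical flip
    out = np.rot90(out, k=rng.integers(4))   # rotate by 0/90/180/270 degrees
    dy, dx = rng.integers(-2, 3, size=2)     # small circular translation
    out = np.roll(out, (dy, dx), axis=(0, 1))
    return out

rng = np.random.default_rng(42)
img = np.arange(64).reshape(8, 8)            # toy single-channel "image"
augmented = [augment(img, rng) for _ in range(10)]  # 10 perturbed copies
```

Every output keeps exactly the original pixel values rearranged, which is what makes these transformations safe to apply without re-labeling.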
Artificially perturbing the original dataset with affine transformations (e.g., rotation, scale, translation) is considered a common practice now. However, this approach has some constraints: the augmented data only capture the variability of the available training set (e.g., if the dataset doesn't include a unique colored fruit, that particular case will never be learnt). To overcome this, several recently proposed data augmentation methods take advantage of advancements in the image generation space. In this work , the authors use a Generative Adversarial Network (GAN) to generate \textit{Arabidopsis} plant images (called ARIGAN) with unique desirable traits (over 7 leaves) that were originally less frequent in the dataset. Fig. \ref{fig-generated} \textbf{(a)} shows examples of images generated by ARIGAN. Other recent works use more advanced variants of GANs to generate realistic plant images with particularly favorable leaf segmentations of interest to boost the leaf counting accuracy of the learning models. In , the authors proposed an unsupervised image translation technique to improve plant disease recognition performance. LeafGAN , an image-to-image translation model, generates leaf images with various plant diseases and boosts diagnostic performance by a great margin. Two sets of example images generated by LeafGAN are shown in Fig. \ref{fig-generated} \textbf{(b)}. Other data enhancement techniques are also being employed by researchers to train plant disease diagnosis models on generated lesions . \nThe effort to provide finely annotated data has enabled great improvement of the state of the art in segmentation performance. Some researchers have started working on effectively transferring the knowledge obtained from annotated plants in RGB images either to other species or to other modalities of imaging. 
In this work , the authors successfully transfer the knowledge gained from annotated leaves of \textit{Arabidopsis thaliana} in RGB to images of the same plant in chlorophyll fluorescence imaging. \n\begin{figure}\n\centerline{\includegraphics[scale=0.55]{figures/generated.pdf}}\n\caption{\textbf{(a)} shows \textit{Arabidopsis} plant images generated by ARIGAN . Bottom-right numbers refer to the leaf count. \textbf{(b)} shows two sets of healthy leaves and their corresponding disease-prone leaves generated by LeafGAN .}\n\label{fig-generated}\n\end{figure}\n\subsection*{Weakly Supervised Learning}\nFruit/organ counting is a task well explored by the plant phenotyping community. However, many current vision-based solutions require highly accurate instance and density labels of fruits and organs in a diverse set of environments. The labeling procedures are often very burdensome and error prone and, in many agricultural scenarios, it may be impossible to acquire a sufficient number of labelled samples to achieve consistent performance that is robust to image noise or other forms of covariate shift. This is why using only weak labels can be crucial for cost-effective plant phenotyping. \nRecently, a lot of attention has been placed on engineering weakly supervised learning frameworks for plant phenotyping. In , the authors created a weakly supervised framework for the sorghum head detection task where annotators label the data only until the model reaches a desired performance level. After that, model outputs are directly passed as data labels, leading to an exponential reduction in annotation costs with minimal loss in model accuracy. In other work , the authors proposed a strategy which is able to learn to count fruits without requiring task-specific supervision labels, such as manually labelled object bounding boxes or total instance count. 
In , the authors use a CNN trained on defect classification data and use its activation maps to segment infected regions on potatoes. The segmentation task requires very rich labels (each pixel of the image is annotated), so this approach effectively bypasses the labeling for segmentation altogether. On another note, rice heading date estimation greatly assists breeders in understanding the adaptability of the crop to various environmental and genetic conditions. Accurate estimation of heading date requires monitoring the increase in the number of rice panicles in the crop. Detecting rice panicles from crop images usually requires training an object detection model such as Faster R-CNN or YOLO, which requires costly bounding box annotations. However, a recently proposed method uses a sliding window based detector which requires training only an image classifier, for which annotations are much easier to obtain. \n\begin{figure}\n\centerline{\includegraphics[scale=0.3]{figures/bmvc.jpg}}\n\caption{\textbf{(a)} Standard pool-based active learning framework. \textbf{(b)} Proposed framework which interleaves weak supervision in the active learning process. This novel framework includes an adaptive supervision module which allows switching to a stronger form of supervision as required when training the model. The \textit{oracle} is the source of labels, \textit{a.k.a.} the annotator.}\n\label{fig-bmvc}\n\end{figure}\n\subsection*{Transfer Learning}\nTransfer learning is a type of learning that enables using knowledge gained while solving one problem and applying it to a different but related problem, i.e., a model trained on one phenotyping task (say, potato leaf classification) being able to assist another phenotyping task (tomato leaf classification). Transfer learning is a very well explored area of machine learning. 
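A minimal numeric sketch of this recipe, in which a pretrained feature extractor is frozen and only a new task head is fitted (the backbone, data and dimensions here are synthetic stand-ins, not any model from the cited works):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen 'pretrained backbone' (stand-in for e.g. an ImageNet CNN): transfer
# learning reuses it as-is and trains only a small new head on the target task.
W_backbone = rng.normal(size=(32, 8))
def features(x):
    return np.tanh(x @ W_backbone)

# Tiny synthetic target dataset, e.g. flattened leaf descriptors with labels.
X = rng.normal(size=(100, 32))
y = (X[:, 0] > 0).astype(float)

w_head = np.zeros(8)                 # the only trainable parameters
def loss(w):
    p = 1.0 / (1.0 + np.exp(-(features(X) @ w)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

before = loss(w_head)
for _ in range(300):                 # plain gradient descent on the head only
    p = 1.0 / (1.0 + np.exp(-(features(X) @ w_head)))
    w_head -= 0.5 * features(X).T @ (p - y) / len(X)
after = loss(w_head)
assert after < before                # head adapts while backbone stays frozen
```

The backbone supplies transferable features, so only the lightweight head must be learned from the small target dataset.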
As part of the first steps of adopting existing transfer learning techniques for plant phenotyping, the authors of use CNNs (AlexNet, GoogleNet and VGGNet) pretrained on the ImageNet dataset and fine-tune them on the plant dataset used in the LifeCLEF 2015 challenge. With the help of transfer learning, they were able to beat the then existing state-of-the-art LifeCLEF performance by 15 percentage points. Similarly, in , the authors report better-than-human results on a segmentation task with the help of transfer learning, where they transfer a model trained on a peanut root dataset to a switchgrass root dataset (they also report results using ImageNet pretrained models). Leaf disease detection and treatment recommendation performance is also shown to be boosted with transfer learning . In , the authors interestingly combined a state-of-the-art weakly-supervised fruit counting model with an unsupervised style transfer method for fruit counting. They used a Cycle-Generative Adversarial Network (C-GAN) to perform unsupervised domain adaptation from one fruit dataset to another and trained it alongside a Presence-Absence Classifier (PAC) that discriminates between images that contain fruits and those that do not, ultimately achieving better performance than fully supervised models. \n\begin{figure}\n\centerline{\includegraphics[scale=0.15]{figures/bmc.jpg}}\n\caption{Proposed point supervision framework integrated into the pool-based active learning cycle. In this framework, strong supervision is queried for images only after they are deemed \textit{informative} based on point supervision of those images.}\n\label{fig-bmc}\n\end{figure}\n\subsection*{Active Learning}\nActive learning , an iterative training approach that selectively queries the most informative samples to train on, has been shown to reduce the labeled data requirement when training deep classification networks . Research in the area of active learning for object detection has been, arguably, limited. 
However, numerous plant phenotyping tasks, such as the detection and quantification of crop yield and fruit counting, are directly dependent on object detection. Keeping this in mind, an active learning method has been proposed for training deep object detection models where the model can selectively query either weak labels (pointing at the object) or strong labels (drawing a box around the object). By introducing a switching module for weak labels and strong labels, the authors were able to save 24\% of annotation time while training a wheat head detection model. Fig. \ref{fig-bmvc} illustrates the difference between the regular and the proposed active learning cycles. This method demonstrates the applicability of active learning to plant phenotyping methods where obtaining labeled data is often difficult. Along the same lines, to alleviate the labeled data requirement for training object detection models for cereal crop detection, a weak-supervision-based active learning method was proposed recently. In this active learning approach, the model constantly interacts with a human annotator by iteratively querying the labels for only the most informative images, as opposed to all images in a dataset. Fig. \ref{fig-bmc} visually illustrates the proposed framework. The active query method is specifically designed for cereal crops, which usually tend to have panicles with low variance in appearance. This training method has been shown to reduce annotation costs by over 50\% on sorghum head and wheat spike detection datasets. 
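Stripped of the detection-specific details, the pool-based cycle shown in Fig. \ref{fig-bmvc} \textbf{(a)} reduces to: train on the current labelled pool, score the unlabelled pool, and query the oracle only for the most uncertain samples. A toy sketch (the classifier and uncertainty score are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def fit(X, y):
    '''Toy probabilistic classifier standing in for the model being trained.'''
    mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
    w = mu1 - mu0
    return lambda Q: 1.0 / (1.0 + np.exp(-(Q @ w)))

# Synthetic unlabelled pool: 500 samples forming two loose clusters.
X = rng.normal(size=(500, 5)); X[:250] += 1.0
y_true = (np.arange(500) < 250).astype(int)

labelled = list(range(5)) + list(range(495, 500))  # small seed set, both classes
for _ in range(5):                                 # active learning cycles
    idx = np.array(labelled)
    model = fit(X[idx], y_true[idx])
    p = model(X)
    uncertainty = -np.abs(p - 0.5)   # highest near the decision boundary
    # Query the oracle (annotator) only for the most uncertain unlabelled samples.
    ranked = [int(i) for i in np.argsort(uncertainty)[::-1] if int(i) not in labelled]
    labelled += ranked[:10]
```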
We expect to see more research on active learning for plant phenotyping with limited labeled data in the near future.", "id": "fcb14fe1-1d8d-475b-8ed7-cf7547edb6ff", "level": "section", "origin_cites_number": 28, "parent_id": "40b8b988-fe57-4e55-9221-27ac302d3a6f", "prefix_titles": [ [ "title", "Computer Vision with Deep Learning for Plant Phenotyping in Agriculture: A Survey" ], [ "section", "Plant Phenotyping with Limited Labeled Data" ] ], "subsections": [], "title": "Plant Phenotyping with Limited Labeled Data" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, we describe some of the challenges present in plant phenotyping methods which warrant further research. \n\subsection*{The Training Data Bottleneck} \nModern phenotyping methods rely on deep learning, which is notorious for requiring large amounts of labeled data. While some progress has been made in developing data-efficient models for phenotyping, reducing the labeling efforts for training efficient phenotyping tools is still an open problem. We believe that effectively adapting techniques from deep learning such as unsupervised, self-supervised, weakly supervised, active and semi-supervised learning will greatly benefit the phenotyping community in observing plant traits with small datasets. \n\subsection*{Explainability}\nDeep neural networks are generally considered black boxes which produce predictions without sufficient justification. This makes debugging a neural network difficult, i.e., it can be tough to understand what caused a wrong prediction. Crop management decisions based on incorrect phenotyping results can cause financial losses. Hence, developing explainable models for plant phenotyping is one of the open problems in this field. Obtaining the reasons behind a given set of plant traits using explainable models has the potential to achieve breakthroughs in our understanding of plant behavior in various genetic and environmental conditions. 
\n\\subsection*{Data collection}\nVision based plant phenotyping suffers from challenges such as occlusion, inaccuracies in 3D reconstruction of crops and bad lighting conditions caused by the changing weather. It is therefore necessary to develop phenotyping tools which are robust to visual variations.", "id": "6bf986f3-5fd9-4afa-a45b-7964c94d86eb", "level": "section", "origin_cites_number": 0, "parent_id": "40b8b988-fe57-4e55-9221-27ac302d3a6f", "prefix_titles": [ [ "title", "Computer Vision with Deep Learning for Plant Phenotyping in Agriculture: A Survey" ], [ "section", "Challenges and Open Problems" ] ], "subsections": [], "title": "Challenges and Open Problems" }, { "cite_extract_rate": 0, "cites": [], "content": "High throughput plant phenotyping methods have shown great promise in efficiently monitoring crops for plant breeding and agricultural crop management. Research in deep learning has accelerated the progress in plant phenotyping research which resulted in the development of various image analysis tools to observe plant traits. However, wide applicability of high throughput phenotyping tools is limited by some issues such as 1) dependence of deep networks on large datasets, which are difficult to curate, 2) large variations of field environment which cannot always be captured, and 3) capital and maintenance which can be prohibitively expensive to be widely used in developing countries. With many open problems in plant phenotyping warranting further studies, it is indeed a great time to study plant phenotyping and achieve rapid progress by utilizing the advances in deep learning. 
\n\\bibliographystyle{splncs}\n\\bibliography{egbib}\n\\end{document}", "id": "1cab82e7-7d3c-4e75-913b-94d66346fc31", "level": "section", "origin_cites_number": 0, "parent_id": "40b8b988-fe57-4e55-9221-27ac302d3a6f", "prefix_titles": [ [ "title", "Computer Vision with Deep Learning for Plant Phenotyping in Agriculture: A Survey" ], [ "section", "Conclusions" ] ], "subsections": [], "title": "Conclusions" } ]
78
[ 1034, 486, 1036, 7043, 7324, 1037, 1035, 7323, 8406, 1040, 1039, 1038, 7217, 1050, 1051, 1048, 1045, 1041, 1047, 1043, 1049, 1044, 1042, 1046 ]
1.059429
[ "Yuqiao Liu", "Yanan Sun", "Bing Xue", "Mengjie Zhang", "Gary G. Yen", "Kay Chen Tan" ]
A Survey on Evolutionary Neural Architecture Search
2020
2020-08-25T11:00:46Z
cs.NE
Deep Neural Networks (DNNs) have achieved great success in many applications. The architectures of DNNs play a crucial role in their performance, which is usually manually designed with rich expertise. However, such a design process is labour intensive because of the trial-and-error process, and also not easy to realize due to the rare expertise in practice. Neural Architecture Search (NAS) is a type of technology that can design the architectures automatically. Among different methods to realize NAS, Evolutionary Computation (EC) methods have recently gained much attention and success. Unfortunately, there has not yet been a comprehensive summary of the EC-based NAS algorithms. This paper reviews over 200 papers of most recent EC-based NAS methods in light of the core components, to systematically discuss their design principles as well as justifications on the design. Furthermore, current challenges and issues are also discussed to identify future research in this emerging field.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "40bf4670-36c8-4014-b074-8f86f999eff7", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ] ], "subsections": [ "e9b844de-0b2f-4ecf-94b9-874602da0564", "2857eea5-3541-4f2f-8282-08d5126a14b3", "7fb1f2d3-d962-4d8c-8b81-c32933ee534a", "13bf9942-9456-4154-adf6-fca4ab8f93d7", "d3650a8a-21c2-4aa1-9b61-b69c455614a3", "fa11702e-7516-4849-b9c7-1422e93b4202", "390da3df-188a-48a9-991a-966d654bdd4d", "e3dda7bf-8a69-4f93-8f3f-07b17269af8c", "d338e930-f930-4ee2-9cea-b0e47084ac48" ], "title": "root" }, { "cite_extract_rate": 0.5714285714285711, "cites": [ 96, 1110, 166, 6822, 514, 8441, 6821, 6823, 8247, 97, 6824, 7, 684, 872, 6820, 6825 ], "content": "\IEEEPARstart{D}{eep} Neural Networks (DNNs), as the cornerstone of deep learning~, have demonstrated their great success in diverse real-world applications, including image classification~, natural language processing~, and speech recognition~, to name a few. The promising performance of DNNs has been widely documented due to their deep architectures~, which can learn meaningful features directly from the raw data almost without any explicit feature engineering. Generally, the performance of DNNs depends on two aspects: their architectures and the associated weights. Only when both reach their optimal status simultaneously can the performance of the DNNs be expected to be promising. The optimal weights are often obtained through a learning process: a continuous loss function is used to measure the discrepancies between the actual output and the desired one, and gradient-based algorithms are then used to minimize the loss. When the termination condition is satisfied, which is commonly a maximum number of iterations, the algorithm can often find a good set of weights~. 
This kind of process has been very popular owing to its effectiveness in practice, and has become the dominant practice for weight optimization~, although such gradient-based algorithms are principally local-search~ algorithms. On the other hand, obtaining the optimal architectures cannot be directly formulated by a continuous function, and there is even no explicit function to measure the process of finding optimal architectures.\nTo this end, for a long time the promising architectures of DNNs have been manually designed with rich expertise. This can be evidenced from the state of the arts, such as VGG~, ResNet~ and DenseNet~. These promising Convolutional Neural Network (CNN) models are all manually designed by researchers with rich knowledge in both neural networks and image processing. However, in practice, most end users do not possess such knowledge. Moreover, DNN architectures are often problem-dependent. If the distribution of the data changes, the architectures must be redesigned accordingly. Neural Architecture Search (NAS), which aims to automate the architecture design of deep neural networks, has been identified as a promising way to address the aforementioned challenge. \nMathematically, NAS can be modeled by an optimization problem formulated by Equation~(\ref{equ_defination_NAS}):\n\begin{equation}\n\label{equ_defination_NAS}\n\left\{\n\centering\n\begin{array}{c}\nA^{*} = \argmin_{A} \mathcal{L}(A, \mathcal{D}_{train}, \mathcal{D}_{fitness}) \\\ns.t.\quad A \in \mathcal{A} \\\n\end{array}\n\right.\n\end{equation}\nwhere $\mathcal{A}$ denotes the search space of the potential neural architectures, and $\mathcal{L}(\cdot)$ measures the performance of the architecture $A$ on the fitness evaluation dataset $\mathcal{D}_{fitness}$ after being trained on the training dataset $\mathcal{D}_{train}$. The $\mathcal{L}(\cdot)$ is usually non-convex and non-differentiable~. 
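Since $\mathcal{L}(\cdot)$ is non-convex and non-differentiable, Equation~(\ref{equ_defination_NAS}) is typically treated as a black box, which population-based methods can optimize directly. A toy sketch of such an evolutionary loop over a hypothetical layer-list encoding (the score function is synthetic, not a trained network's accuracy):

```python
import random
random.seed(0)

CHOICES = ['conv3', 'conv5', 'pool', 'identity']

def L(A):
    # Synthetic black-box score (lower is better): prefer depth 6, no identity.
    return abs(len(A) - 6) + sum(c == 'identity' for c in A)

def mutate(A):
    A = list(A)
    op = random.random()
    if op < 0.3 and len(A) > 1:
        A.pop(random.randrange(len(A)))                             # remove a layer
    elif op < 0.6:
        A.insert(random.randrange(len(A) + 1), random.choice(CHOICES))  # add a layer
    else:
        A[random.randrange(len(A))] = random.choice(CHOICES)        # alter a layer
    return A

population = [[random.choice(CHOICES) for _ in range(3)] for _ in range(20)]
best0 = min(L(a) for a in population)
for _ in range(50):                                  # evolutionary iterations
    parent = min(random.sample(population, 3), key=L)    # tournament selection
    child = mutate(parent)
    population.remove(max(population, key=L))        # drop the worst individual
    population.append(child)

best = min(population, key=L)
assert L(best) <= best0                              # best never gets worse
```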
In principle, NAS is a complex optimization problem facing several challenges, e.g., complex constraints, discrete representations, bi-level structures, computationally expensive characteristics and multiple conflicting criteria. NAS algorithms refer to the optimization algorithms which are specifically designed to effectively and efficiently solve the problem represented by Equation~(\ref{equ_defination_NAS}). The initial work on NAS algorithms is commonly viewed as the work in~, which was proposed by Google. The pre-print version of this work was first released on the arXiv website\footnote{\url{https://arxiv.org/abs/1611.01578}} in 2016, and then formally accepted for publication by the International Conference on Learning Representations (ICLR) in 2017. Since then, a large number of researchers have been investing tremendous efforts in developing novel NAS algorithms.\nBased on the optimizer employed, existing NAS algorithms can be broadly classified into three different categories: Reinforcement Learning (RL)~ based NAS algorithms, gradient-based NAS algorithms, and Evolutionary Computation (EC)~ based NAS algorithms (ENAS). Specifically, RL based algorithms often require thousands of Graphics Processing Units (GPUs) running for several days even on a medium-scale dataset, such as the CIFAR-10 image classification benchmark dataset~. Gradient-based algorithms are more efficient than RL based algorithms. However, they often find ill-conditioned architectures due to the improper relaxation used to adapt the discrete architectures to gradient-based optimization. Unfortunately, the validity of this relaxation has not been mathematically proven. In addition, the gradient-based algorithms require constructing a supernet in advance, which also highly requires expertise. ENAS algorithms solve NAS by exploiting EC techniques. 
Specifically, EC is a class of population-based computational paradigms, simulating the evolution of species or the behaviors of the population in nature, to solve challenging optimization problems. In particular, Genetic Algorithms (GAs)~, Genetic Programming (GP)~, and Particle Swarm Optimization (PSO)~ are widely used EC methods in practice. Owing to the promising characteristics of EC methods, such as insensitivity to local minima and no requirement for gradient information, EC has been widely applied to solve complex non-convex optimization problems~, even when the mathematical form of the objective function is not available~.\nIn fact, EC methods were frequently used more than twenty years ago to search for not only the optimal neural architectures but also the weights of neural networks simultaneously, which is also termed neuroevolution~. The major differences between ENAS and neuroevolution lie in two aspects. Firstly, neuroevolution often uses EC to search for both the neural architectures and the optimal weight values, while ENAS for now focuses mainly on searching for the architectures, with the optimal weight values obtained by using the gradient-based algorithms\footnote{Please note that we still categorize some existing algorithms as ENAS algorithms, such as API~, EvoCNN~ and EvoDeep~, although they also concern the weights. This is because the optimal weight values of the DNNs searched by them are still obtained by the gradient-based algorithms. They only searched for the best weight initialization values or the best weight initialization method of the DNNs.} immediately after. Secondly, neuroevolution commonly applies to small- and medium-scale neural networks, while ENAS generally works on DNNs, such as the deep CNNs~ and deep stacked autoencoders~, which are stacked by the building blocks of deep learning techniques~. 
Generally, the first work on ENAS is often viewed as the LargeEvo algorithm~, which was proposed by Google, who released its early version on arXiv in March 2017. Afterwards, this paper was accepted by the 34th International Conference on Machine Learning in June 2017. The LargeEvo algorithm employed a GA to search for the best architecture of a CNN, and the experimental results on CIFAR-10 and CIFAR-100~ have demonstrated its effectiveness. Since then, a large number of ENAS algorithms have been proposed. Fig.~\ref{fig_histogram} shows the number of similar works\footnote{These ``submissions\" include the ones which have been accepted for publication after the peer-review process, and also the ones which are only available on the arXiv website without the peer-review process.} published from 2017 to 2020, when this survey paper was ready for submission. As can be seen from Fig.~\ref{fig_histogram}, from 2017 to 2020, the number of submissions grows rapidly year by year.\n\begin{figure}\n\t\centering\n\t\includegraphics[width=0.8\linewidth]{histogram}\n\t\caption{The number of submissions refers to the works of evolutionary neural architecture search. The data is collected from Google Scholar with the keywords of ``evolutionary\" OR ``genetic algorithm'' OR ``particle swarm optimization\" OR ``PSO\" OR ``genetic programming\" AND ``architecture search\" OR ``architecture design'' OR ``CNN'' OR ``deep learning'' OR ``deep neural network'' and the literature on Neural Architecture Search collected from the AutoML.org website by the end of 2020. With these initially collected data, we have then carefully checked each manuscript to ensure its scope falls accurately within evolutionary neural architecture search.}\n\t\label{fig_histogram}\n\t\vspace{-0.3cm}\n\end{figure}\nA large number of related submissions have been made available publicly, but there is no comprehensive survey of the literature on ENAS algorithms. 
Although recent reviews on NAS have been made in~, they mainly focus on reviewing different methods to realize NAS, rather than concentrating on ENAS algorithms. Specifically, Elsken \textit{et al.}~ divided NAS into three stages: search space, search strategy, and performance estimation strategy. Similarly, Wistuba \textit{et al.}~ followed these three stages with an additional review of the multiple objectives in NAS. Darwish \textit{et al.}~ made a summary of Swarm Intelligence (SI) and Evolutionary Algorithms (EAs) approaches for deep learning, with a focus on both NAS and other hyperparameter optimization. Stanley \textit{et al.}~ reviewed neuroevolution, focusing on weight optimization rather than the architectures of neural networks. Besides, most of the references in these surveys are pre-2019 and do not cover the papers published during the past two years, when most ENAS works were published. This paper presents a survey involving a large number of ENAS papers, with the expectation of inspiring new ideas for enhancing the development of ENAS. To allow readers to easily concentrate on the technical part of this survey, we also follow the three stages to introduce ENAS algorithms, which has been widely adopted by existing NAS survey papers~, but with essential modifications made to specifically suit ENAS algorithms.\nThe remainder of this paper is organized as follows. The background of ENAS is discussed in Section~\ref{sec_background}. Section~\ref{sec_encoding_space} documents different encoding spaces, initial spaces and search spaces of ENAS algorithms. In Section~\ref{sec_architectureEncoding}, the encoding strategy and architecture representation are introduced. Section~\ref{sec_Population_operators} summarizes the process of population updating, including evolutionary operators and selection strategies. Section~\ref{sec_evaluation} shows multiple ways to speed up the evolution. 
Section~\\ref{sec_application} presents the applications of ENAS algorithms. Section~\\ref{sec_challenges_and_issues} discusses the challenges and prospects, and finally Section~\\ref{sec_conclusion} presents conclusions.", "id": "e9b844de-0b2f-4ecf-94b9-874602da0564", "level": "section", "origin_cites_number": 28, "parent_id": "40bf4670-36c8-4014-b074-8f86f999eff7", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec_background}\nAs discussed above, ENAS is a process of searching for the optimal architectures of DNNs by using EC methods. In this section, we will first introduce the unified flow chart of ENAS in Subsection~\\ref{sec2_3}. Then, the categories of EC methods based on their search strategies and the categories of DNN architectures in Subsections~\\ref{sec2_1} and \\ref{sec2_2}, respectively.", "id": "2857eea5-3541-4f2f-8282-08d5126a14b3", "level": "section", "origin_cites_number": 0, "parent_id": "40bf4670-36c8-4014-b074-8f86f999eff7", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Background" ] ], "subsections": [ "0869d4e9-c981-472b-b025-5a8bbce117b1", "fb246415-ce9a-4259-827b-2d75a5822aa3", "a76618db-7274-4a81-a316-4d0b05a79b1e" ], "title": "Background" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec2_3}\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.9\\linewidth]{flowchart}\n\t\\caption{The flowchart of a common ENAS algorithm.}\n\t\\label{fig_flowchart}\n\t\\vspace{-0.3cm}\n\\end{figure}\nFig.~\\ref{fig_flowchart} shows an illustration of the flowchart of an ENAS algorithm. Specifically, the evolutionary process takes place in the initial space and the search space sequentially. First of all, a population is initialized within the initial space that is defined in advance. 
Each individual in the population represents a solution for ENAS, i.e., a DNN architecture. Each architecture needs to be encoded as an individual before it joins the population. Second, the fitness of the generated individuals is evaluated. Note that there are two fitness evaluation steps as shown in Fig.~\ref{fig_flowchart}, which commonly employ the same evaluation criterion. Third, after the fitness evaluation of the initial population, the whole population starts the evolutionary process within the search space, which is shown by the dashed box in Fig.~\ref{fig_flowchart}. In the evolutionary process, the population is updated by the selection and the evolutionary operators in each iteration, until the stopping criterion is met. Please note that the selection stage is not necessary for some other EC paradigms like SI. Finally, a population that has finished the evolution is obtained. In the following sections, these key steps are documented in detail.", "id": "0869d4e9-c981-472b-b025-5a8bbce117b1", "level": "subsection", "origin_cites_number": 0, "parent_id": "2857eea5-3541-4f2f-8282-08d5126a14b3", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Background" ], [ "subsection", "Flow Chart of ENAS" ] ], "subsections": [], "title": "Flow Chart of ENAS" }, { "cite_extract_rate": 0.4, "cites": [ 9078, 6826 ], "content": "\label{sec2_1}\nENAS is distinguished from other NAS methods by its employed optimization approach, i.e., the EC methods, which can be further subdivided based on the search strategy each optimization approach adopts. Fig.~\ref{fig_EC_taxonomy} provides such an illustration from three aspects: EAs, Swarm Intelligence (SI), and others, and more detailed statistics can be observed in Table~\ref{table_EC}. In practice, the EA-based methods account for the majority of existing ENAS algorithms, where the GA takes a large part of EA-based methods. 
The other categories of EC methods are also important parts for realizing ENAS algorithms, such as GP, Evolutionary Strategy (ES), PSO, Ant Colony Optimization (ACO)~, Differential Evolution (DE)~, and the Firefly Algorithm (FA)~. In this paper, the Hill-Climbing Algorithm (HCA) is classified into the EC paradigm because it can be regarded as an EA with a simple selection mechanism and without the crossover operation~. HCA has been well known as a widely used local search algorithm in memetic algorithms~.\n\begin{figure}\n\t\centering\n\t\includegraphics[width=1\linewidth]{taxonomy1}\n\t\caption{The categories of ENAS from EC methods regarding the search strategies.}\n\t\label{fig_EC_taxonomy}\n\t\vspace{-0.3cm}\n\end{figure}", "id": "fb246415-ce9a-4259-827b-2d75a5822aa3", "level": "subsection", "origin_cites_number": 5, "parent_id": "2857eea5-3541-4f2f-8282-08d5126a14b3", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Background" ], [ "subsection", "Evolutionary Search Strategy" ] ], "subsections": [], "title": "Evolutionary Search Strategy" }, { "cite_extract_rate": 0.47368421052631504, "cites": [ 6822, 96, 514, 6821, 870, 8247, 97, 6827, 166 ], "content": "\label{sec2_2}\n\begin{figure}\n\t\centering\n\t\includegraphics[width=0.9\linewidth]{taxonomy2}\n\t\caption{The categories of ENAS from the neural network perspective.}\n\t\label{fig_NN_taxonomy}\n\t\vspace{-0.3cm}\n\end{figure}\n\begin{figure}\n\t\centering\n\t\includegraphics[width=1\linewidth]{NN}\n\t\caption{Examples of different neural architectures in ENAS. (a) CNN. (b) DBN. (c) AE. (d) RNN. }\n\t\label{fig_NN}\n\t\vspace{-0.3cm}\n\end{figure}\nThe end product of ENAS is all about DNNs, which can be broadly divided into five different categories: CNN, Deep Belief Network (DBN), Stacked Auto-Encoder (SAE), Recurrent Neural Network (RNN) and others. 
This can be briefly seen from Fig.~\ref{fig_NN_taxonomy}, and a detailed illustration will be shown in Table~\ref{table_EC}. \nMost ENAS methods are proposed for searching for the optimal CNN architectures. This is because many hand-crafted CNNs, such as VGG~, ResNet~ and DenseNet~, have demonstrated their superiority in handling image classification tasks, which are among the most successful applications in the deep learning field. Generally, the CNN architecture is composed of the convolutional layers, the pooling layers and the fully-connected layers, and a common example of CNNs is shown in Fig.~\ref{fig_NN} (a). The optimization of a CNN architecture mainly involves three aspects: the hyperparameters of each layer~, the depth of the architecture~ and the connections between layers~. In practice, the majority of the ENAS methods consider the above three aspects collectively~.\nA DBN~ is made up by stacking multiple Restricted Boltzmann Machines (RBMs), and an example can be seen in Fig.~\ref{fig_NN} (b). Specifically, RBMs have connections only between layers, without any intra-layer connection. Meanwhile, RBMs allow the DBN to be trained in an unsupervised manner to obtain a good weight initialization. There are two types of hyperparameters that commonly need to be optimized in a DBN: the number of neurons in each layer~ and the number of layers~.\nAn SAE is constructed by stacking multiple of its building blocks, i.e., AEs. An AE aims to learn meaningful representations from the raw input data, with the restoration of its input data as its objective~. An AE is typically composed of two symmetrical components: the encoder and the decoder, and an example including both parts is shown in Fig.~\ref{fig_NN} (c). Generally, some ENAS methods only take the encoder into evolution~ because the decoder part is symmetric with the encoder and can be derived accordingly. 
Yet, some other ENAS methods optimize the hyperparameters of encoders and decoders separately~.\nThe most significant difference between RNNs and the neural networks introduced above lies in the recurrent connections. Fig.~\\ref{fig_NN} (d) shows the time-expanded structure of an RNN, where the value of the current hidden layer $h^{t}$ is influenced by the value at its previous time slot $h^{t-1}$ and the output value of its previous layer. Because these layers are reused, all the weights (i.e., \\textit{U}, \\textit{W} and \\textit{V} in the figure) are shared. Different from focusing on the number of neurons and the number of layers in the feedforward neural network, some ENAS methods are concerned with how many times the RNN should be unfolded~. In addition, there remain other neural networks like typical DNNs, which are made up of only fully-connected layers, where the connections are formed by all the neurons in the two adjacent layers. Because such DNNs are not the major target investigated by NAS, we do not introduce them in detail.\nWhen the ENAS algorithms are applied to these DNNs, the goal is to find the best architecture-related parameters. Specifically, for CNNs, they are the number of convolutional layers, pooling layers and fully-connected layers, the parameters related to these layers (such as the kernel size of the convolutional layers, the pooling type, the number of neurons of the fully-connected layers, and so on), as well as the connections between layers (such as the dense connection and the skip connection). For DBN and SAE, they are the number of their building blocks, i.e., the RBM for DBN and the AE for SAE, and the number of neurons in each layer. For RNN, in addition to the architecture-related parameters mentioned above, the number of time slots is also an important parameter to be optimized by ENAS algorithms. For the traditional DNNs, the ENAS algorithms are often concerned with the number of neurons in each layer. 
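The architecture-related parameters listed above can be collected into a simple genotype. A minimal sketch follows; the layer types, parameter names and value ranges are illustrative assumptions rather than the encoding of any cited ENAS method.

```python
import random

# Hypothetical search ranges; real ENAS methods define their own parameter sets.
CONV_FILTER_SIZES = [1, 3, 5]
CONV_FEATURE_MAPS = [16, 32, 64]
POOL_TYPES = ["max", "mean"]
FC_NEURONS = [64, 128, 256]

def random_layer(rng):
    """Sample one layer gene: a layer type plus its type-specific hyperparameters."""
    if rng.random() < 0.5:
        return {"type": "conv",
                "filter_size": rng.choice(CONV_FILTER_SIZES),
                "feature_maps": rng.choice(CONV_FEATURE_MAPS)}
    return {"type": "pool",
            "filter_size": rng.choice([2, 3]),
            "pool_type": rng.choice(POOL_TYPES)}

def random_genotype(rng, min_depth=2, max_depth=8):
    """A CNN genotype as a list of layer genes, ending with a
    fully-connected tail as is common in classification tasks."""
    depth = rng.randint(min_depth, max_depth)
    genotype = [random_layer(rng) for _ in range(depth)]
    genotype.append({"type": "fc", "neurons": rng.choice(FC_NEURONS)})
    return genotype

genotype = random_genotype(random.Random(0))
```

Each dictionary in the list is one gene, so the depth of the network is simply the length of the genotype.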
In addition, some ENAS algorithms also consider the weights, such as the weight initialization method and the initial weight values. Table~\\ref{table_parameters} summarizes the common parameters optimized by ENAS methods in different types of neural networks, including the CNNs as the most popular type. The details of these ENAS algorithms will be documented in the following sections. \n\\begin{table}[]\n\t\\renewcommand\\arraystretch{0.8}\n\t\\caption{Common Parameters Optimized in Different DNNs.} \n\t\\label{table_parameters}\n\t\\centering\n\t\\begin{tabular}{p{1.2cm}|p{1.5cm}|p{4.5cm}}\n\t\t\\hline\n\t\t& \\multicolumn{2}{c}{Parameters} \\\\ \\hline\n\t\t\\multirow{4}{*}{CNN} & global parameters & number of layers, connections between layers \\\\ \\cline{2-3} \n\t\t& convolution layer & filter size (width and height), stride size (width and height), feature map size, convolution type, standard deviation and mean value of the filter elements \\\\ \\cline{2-3} \n\t\t& pooling layer & filter size (width and height), stride size (width and height), pooling type \\\\ \\cline{2-3} \n\t\t& fully-connected layer & number of neurons, standard deviation and mean value of weights \\\\ \\hline\n\t\tDBN, AE & \\multicolumn{2}{l}{number of hidden layers, neurons per layer} \\\\ \\hline\n\t\tRNN & \\multicolumn{2}{l}{\\shortstack[l]{number of hidden layers, neurons per layer, number of\\\\ time slots}} \\\\ \\hline\n\t\\end{tabular}\n\\vspace{-0.3cm}\n\\end{table}", "id": "a76618db-7274-4a81-a316-4d0b05a79b1e", "level": "subsection", "origin_cites_number": 19, "parent_id": "2857eea5-3541-4f2f-8282-08d5126a14b3", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Background" ], [ "subsection", "Common Neural Networks in ENAS" ] ], "subsections": [], "title": "Common Neural Networks in ENAS" }, { "cite_extract_rate": 0.330275229357798, "cites": [ 6851, 6838, 6837, 6844, 6841, 8955, 6829, 6832, 9079, 6850, 6836, 6835, 
6843, 6821, 6846, 8247, 6847, 9080, 6834, 6845, 6842, 6826, 6830, 6822, 6848, 870, 6853, 6840, 6833, 6852, 6831, 6849, 6824, 6827, 6839, 6828 ], "content": "\\label{sec_encoding_space}\n\\begin{table*}[]\n\t\\renewcommand\\arraystretch{0.8}\n\t\\caption{Different Encoding Spaces and Their Constraints} \n\t\\label{table_constrains}\n\t\\vspace{-0.3cm}\n\t\\begin{center}\n\t\t\\begin{tabular}{p{3.8cm}|p{1.8cm}|p{2cm}|p{2.5cm}|p{5.5cm}}\n\t\t\t\\hline\n\t\t\t& Fixed depth & Rich initialization & Partial fixed structure & Relatively few constraints\\\\\n\t\t\t\\hline\n\t\t\tLayer-based encoding space & ~ & ~ & ~ & ~\\\\\n\t\t\t\\hline\n\t\t\tBlock-based encoding space & ~ & ~ & ~ & ~ \\\\\n\t\t\t\\hline\n\t\t\tCell-based encoding space & & & ~ & ~\\\\\n\t\t\t\\hline\n\t\t\tTopology-based encoding space & & ~ & & ~ \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\vspace{-0.5cm}\n\\end{table*}\nThe encoding space contains all the valid individuals encoded in the population. The encoding space can be divided into three categories according to the basic units it adopts: the layer-based encoding space, the block-based encoding space and the cell-based encoding space. In addition, some ENAS methods do not take the configuration of the basic unit into consideration, but only care about the connections between units. Such an encoding space is often called the topology-based encoding space.\nIn addition, the constraints on the encoding space are important, because they represent the human intervention which restricts the encoding space and lightens the burden of the evolutionary process. A method with a mass of constraints can obtain a promising architecture easily, but is prevented from designing any novel architecture that does not follow the constraints. Furthermore, the size of the search space greatly affects the efficiency of the evolution. 
On the other hand, the effectiveness of the ENAS methods cannot be guaranteed if there is no constraint on the search space: one extreme case is that all the individuals in the search space are mediocre; in this case, no excellent individual can be obtained no matter how effective the selection is. Table~\\ref{table_constrains} shows different kinds of encoding spaces and the constraints of existing ENAS algorithms, where the first row shows the constraints and the first column displays the encoding spaces. In the following, we will discuss them at length.", "id": "7fb1f2d3-d962-4d8c-8b81-c32933ee534a", "level": "section", "origin_cites_number": 109, "parent_id": "40bf4670-36c8-4014-b074-8f86f999eff7", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Encoding Space" ] ], "subsections": [ "c45304b4-d500-4803-95c8-a1b032fcb338", "6c612948-3e12-4637-8d86-3b3c5b4c17d5", "0e3df321-149a-48ba-be12-da3f243c2af6" ], "title": "Encoding Space" }, { "cite_extract_rate": 0.72, "cites": [ 96, 6524, 514, 6821, 6848, 870, 305, 871, 544, 6831, 97, 6849, 6842, 872, 6827, 6839, 6854, 6826 ], "content": "The layer-based encoding space denotes that the basic units in the encoding space are the primitive layers, such as convolution layers and fully-connected layers. This leads to a huge search space, since so much information needs to be encoded. As a result, it may take more time to search for a promising individual, because there are more possibilities to construct a well-performed DNN from the primitive layers. For example, a promising CNN is commonly composed of hundreds of primitive layers; accordingly, the search process will consume more time to find such a deep architecture by iteratively stacking the primitive layers. In addition, the promising DNN may not be found at all if only the primitive layers are used. 
For instance, the promising performance of ResNet is widely recognized due to its skip connections, which cannot be represented by the primitive layers.\nTo alleviate the above problems, the block-based encoding space is developed, where various layers of different types are combined as blocks to serve as the basic units of the encoding space. Traditional blocks are ResBlock~, DenseBlock~, ConvBlock (Conv2d + BatchNormalization + Activation)~ and InceptionBlock~, etc. Specifically, the layers in a block have a specific topological relationship, such as the residual connection in ResBlock and the dense connections in DenseBlock. These blocks have promising performance and often require fewer parameters to build the architecture, so it is in principle easier to find a good architecture in the block-based encoding space compared to the layer-based encoding space. Some ENAS algorithms used these blocks directly, such as~, while other methods proposed different blocks for different purposes. For example, Chen \\textit{et al.}~ proposed eight blocks including ResBlock and InceptionBlock encoded in a 3-bit string, and used the Hamming distance to identify similar blocks. Song \\textit{et al.}~ proposed three residual-dense-mixed blocks to reduce the amount of computation caused by the convolution operations in image super-resolution tasks. \nThe cell-based encoding space is similar to the block-based one, and can be regarded as a special case of the block-based space where all the blocks are the same. The ENAS algorithms employing this space build the architectures by stacking repeated motifs. Chu \\textit{et al.}~ divided the cell-based space into two independent parts: the micro part containing the parameters of cells, and the macro part defining the connections between different cells. To be more specific, the cell-based encoding space concentrates more on the micro part. 
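The contrast in size between the layer-based encoding space and the cell-based encoding space can be made concrete with a toy count; the option numbers below are illustrative assumptions, not figures from any cited method.

```python
def layer_based_size(options_per_layer, max_depth):
    """Number of distinct linear architectures when every position is filled
    independently from a set of primitive-layer options, for depths 1..max_depth."""
    return sum(options_per_layer ** depth for depth in range(1, max_depth + 1))

def cell_based_size(options_per_cell):
    """With one repeated cell and a fixed macro skeleton, the encoding space
    collapses to the number of distinct cells, independent of network depth."""
    return options_per_cell

# Toy comparison: 8 primitive-layer options at depths up to 20, versus the
# same 8-option budget spent on a single cell that is stacked 20 times.
layer_space = layer_based_size(8, 20)   # astronomically large
cell_space = cell_based_size(8)         # just 8
```

The count also illustrates why constraints such as a fixed depth shrink the layer-based space so dramatically: fixing the depth keeps only one term of the sum.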
Different from the block-based space, in the cell-based space the layers within a cell can be combined more freely, and the macro part is always determined by human expertise~. Some widely used encoding spaces are classified into this category. For example, NAS-Bench-101~ and NAS-Bench-201~ search for different combinations of layers and connections in cells, and then stack the cells sequentially. In addition, NASNet~ and DARTS~ search for two kinds of cells, namely the normal cell and the reduction cell, and each stacked cell is connected to the two previous cells. The cell-based space greatly reduces the size of the encoding space, because all the basic units in the encoding space are the same and much fewer parameters are needed to construct a promising DNN. However, Frachon \\textit{et al.}~ argued that there is no theoretical basis for the claim that the cell-based space helps to obtain a good architecture. \nIn contrast, the topology-based space does not consider the parameters or the structure of each unit (layer or block), but is only concerned with the connections between units. One classical example is the one-shot method, which treats all the architectures as different subgraphs of a supergraph~. Yang \\textit{et al.}~ proposed CARS to search for the architecture by choosing different connections in the supergraph, and then the architecture was built from the chosen subgraph. CARS can be classified into the topology-based category because it aims at deciding whether or not to keep the connections in the supergraph. But from the perspective of building the architecture, CARS can also be classified into the cell-based category, because the subgraph cell was stacked several times to build the architecture. Another typical case is pruning. 
Wu \\textit{et al.}~ employed a shallow VGGNet~ on CIFAR-10, and the aim was to prune unimportant weight connections from the VGGNet.\nAs observed from Table~\\ref{table_constrains}, the constraints on the encoding space mainly focus on three aspects: fixed depth, rich initialization and partial fixed structure. The fixed depth means all the individuals in the population have the same depth. It is a strong constraint and largely reduces the size of the encoding space. Please note that the fixed depth is different from the \\textit{fixed-length encoding strategy}, which will be introduced in Section~\\ref{sec_architectureEncoding}. In Genetic CNN~, for example, the \\textit{fixed-length encoding strategy} only limits the maximum depth. Any isolated node (i.e., one without connections) is simply ignored; in this way, the individuals can obtain different depths. The second constraint is rich initialization (i.e., the \\textit{well-designed space} to be discussed in Section~\\ref{sec_initial_space}), which is also a practical limitation because it requires a lot of expertise. In this case, the initialized architectures are manually designed, which goes against the original intention of NAS. The partial fixed structure means the architecture is partially settled. For example, in~, a max-pooling layer is added to the network after every set of four convolution layers.\nIn Table~\\ref{table_constrains}, the relatively few constraints category means that those methods have no restrictions like the three aspects discussed above. However, it does not imply there is no constraint at all. For example, in the classification task, the fully-connected layer is often used as the tail of the whole DNN in some methods~. Moreover, the maximum length is predefined in many methods, including both \\textit{fixed-length encoding strategy} methods~ and \\textit{variable-length encoding strategy} methods~, which prevents these methods from discovering deeper architectures. 
Wang \\textit{et al.}~ tried to break the limit of maximum length by using a Gaussian distribution initialization mechanism. Irwin \\textit{et al.}~ broke the limit by using the evolutionary operators, i.e., the crossover and mutation operators, to extend the depth to any size.\nGenerally, the encoding space serves as the basis of the search space and the initial space. In practice, the encoding space is often the same as the search space, while the initial space is often a subspace of the encoding space. Fig.~\\ref{fig_spaces} shows the relationship between the three spaces. The search space is larger than the initial space when some manual constraints are added to the population initialization. When there are no such manual constraints, the search space and the initial space are equivalent. Furthermore, the initial space determines what kind of individuals may appear in the initial population, and the search space determines what kind of individuals may appear in the evolutionary search process, as illustrated in Fig.~\\ref{fig_flowchart}. 
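The earlier point that the evolutionary operators themselves can extend an architecture beyond any preset depth can be sketched with a crossover redesigned for variable-length genotypes; the list-based genotype below is an illustrative assumption.

```python
import random

def variable_length_crossover(parent_a, parent_b, rng):
    """Cut each parent at its own independent point, so an offspring's depth
    can differ from both parents' depths; repeated application can grow
    genotypes to any size without a predefined maximal length."""
    cut_a = rng.randrange(1, len(parent_a))
    cut_b = rng.randrange(1, len(parent_b))
    child_1 = parent_a[:cut_a] + parent_b[cut_b:]
    child_2 = parent_b[:cut_b] + parent_a[cut_a:]
    return child_1, child_2

rng = random.Random(0)
# Toy genotypes: each symbol stands for one layer gene.
a = ["conv"] * 4
b = ["pool"] * 7
c1, c2 = variable_length_crossover(a, b, rng)
```

Because the two cut points are chosen independently, the total gene count is preserved across the pair of children while the individual depths are free to drift.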
In the following subsections, we will discuss the initial space and the search space in Subsections~\\ref{sec_initial_space} and~\\ref{sec_search_space}, respectively.\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{spaces}\n\t\\caption{The relationship between encoding space, search space and initial space.}\n\t\\label{fig_spaces}\n\t\\vspace{-0.3cm}\n\\end{figure}", "id": "c45304b4-d500-4803-95c8-a1b032fcb338", "level": "subsection", "origin_cites_number": 25, "parent_id": "7fb1f2d3-d962-4d8c-8b81-c32933ee534a", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Encoding Space" ], [ "subsection", "Encoding Space and Constraints" ] ], "subsections": [], "title": "Encoding Space and Constraints" }, { "cite_extract_rate": 0.5714285714285711, "cites": [ 6829, 8247, 6821, 6827 ], "content": "\\label{sec_initial_space}\nIn general, there are three types of architecture initialization approaches in the initial space: starting from trivial initial conditions~, random initialization in the encoding space~ and starting from a well-designed architecture (also termed rich initialization)~. These three types of initialization correspond to three different initial spaces: \\textit{trivial space}, \\textit{random space} and \\textit{well-designed space}.\nThe \\textit{trivial space} contains only a few primitive layers. For example, the LargeEvo algorithm~ initialized the population in a \\textit{trivial space} where each individual constitutes just a single-layer model with no convolutions. Xie \\textit{et al.}~ experimentally demonstrated that a \\textit{trivial space} can evolve to a competitive architecture. The reason for using as little experience as possible is to justify the advantage of EC-based methods in discovering novel architectures, most of which are different from the manually designed DNN architectures. 
On the contrary, the \\textit{well-designed space} contains the state-of-the-art architectures. In this way, a promising architecture can be obtained at the beginning of the evolution, whereas it can hardly evolve to other novel architectures. Actually, many of the ENAS methods adopting this initial space focus on improving the performance upon the well-designed architecture. For example, architecture pruning aims at compressing DNNs by removing less important connections~. For a \\textit{random space}, all the individuals in the initial population are randomly generated in the limited space, and it has been adopted by many methods, such as~. The aim of this type of initial space is also to reduce the intervention of human experience in the initial population.", "id": "6c612948-3e12-4637-8d86-3b3c5b4c17d5", "level": "subsection", "origin_cites_number": 7, "parent_id": "7fb1f2d3-d962-4d8c-8b81-c32933ee534a", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Encoding Space" ], [ "subsection", "Initial Space" ] ], "subsections": [], "title": "Initial Space" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec_search_space}\nAfter the initialization of the population, the ENAS methods start to search for architectures in the search space. Generally speaking, the search space is the same as the initial space when the random initial space is adopted. For the other two types of initial spaces, however, due to the relatively small initial space, the search space will become much larger with the expectation that the promising architecture is included. It is worth noting that many methods do not directly define the search space, but restrict the search space by using evolutionary operators. 
For example, Irwin \\textit{et al.}~ did not specify the maximal depth, instead, they used the evolutionary operations to extend the architecture to any depth.", "id": "0e3df321-149a-48ba-be12-da3f243c2af6", "level": "subsection", "origin_cites_number": 1, "parent_id": "7fb1f2d3-d962-4d8c-8b81-c32933ee534a", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Encoding Space" ], [ "subsection", "Search Space" ] ], "subsections": [], "title": "Search Space" }, { "cite_extract_rate": 0.5, "cites": [ 6821, 6827 ], "content": "\\label{sec_architectureEncoding}\nThis section will discuss how to encode a network architecture into an individual of the EC methods. Each ENAS method needs to determine its encoding strategy before starting the first stage of the ENAS method, i.e., the population initialization. The most intuitive difference in different encoding strategies of ENAS methods is the length of the encoded individuals. \nGenerally, the encoding strategies can be divided into two different categories according to whether the length of an individual changes or not during the evolutionary process. They are the \\textit{fixed-length encoding strategy} and the \\textit{variable-length encoding strategy}. Particularly, the individuals have the same length during the evolutionary process when the \\textit{fixed-length encoding strategy} is used. In contrast, the individuals are with different lengths during the evolutionary process if the \\textit{variable-length encoding strategy} is employed. The advantage of the \\textit{fixed-length encoding strategy} is that it is easy to use standard evolutionary operations which are originally designed for the individuals with equal lengths. In Genetic CNN~, for example, the fixed-length binary string helped the evolutionary operators (especially the crossover) with easy implementation. 
Another example is~, where Loni \\textit{et al.} used a fixed-length string of genes to represent the architecture, which led to an easy implementation of the one-point crossover on the corresponding encoded information. However, a proper maximal length must be predefined for the fixed-length encoding strategy. Because the maximal length relates to the optimal depth of the DNN architecture, which is unknown in advance, the corresponding ENAS algorithms still rely on expertise and experience. \nCompared with the \\textit{fixed-length encoding strategy}, the \\textit{variable-length encoding strategy} does not require human expertise regarding the optimal depth in advance, and thus has the potential to be fully automated. In addition, the advantage of this encoding strategy is that it can freely define more details of the architecture. For example, when solving a new task where there is no expertise in knowing the optimal depth of the DNN, we can just initialize the individuals with random depths, and the optimal depth can be found through the variable-length encoding strategy, where the depths of the corresponding DNNs can be changed during the evolutionary process. However, the variable-length encoding strategy also brings some drawbacks. Because the traditional evolutionary operators might not be suitable for this kind of encoding strategy, the corresponding evolutionary operators need to be redesigned, where an example can be seen in~. Another disadvantage is that, due to the flexibility of variable-length encoding, it could generally result in overly deep architectures, which sometimes further leads to more time-consuming fitness evaluations. Please note that some works claimed their use of the variable-length encoding strategy, where each individual is designed with a maximal length, and a placeholder is used to indicate whether each gene is valid~. 
For example, the individual is designed to have $1,000$ genes, where the genes with zero values do not participate in the evolutionary process. In this paper, we also categorize these methods as using the fixed-length encoding strategy.\nIn addition, most of the DNN architectures can be represented as directed graphs, which are made up of different basic units and the connections between the units. Therefore, the encoding for an architecture can be divided into two aspects: configurations of basic units and connections, which will be discussed in the following subsections.", "id": "13bf9942-9456-4154-adf6-fca4ab8f93d7", "level": "section", "origin_cites_number": 4, "parent_id": "40bf4670-36c8-4014-b074-8f86f999eff7", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Encoding Strategy" ] ], "subsections": [ "c0d67ff4-2346-4359-81e9-cfc247725ee8", "2960167f-5dd3-47f9-b647-47b27f2be6ee" ], "title": "Encoding Strategy" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 6821, 870 ], "content": "In practice, different basic units have different configurations, such as layers, blocks and cells, which are demonstrated in Section~\\ref{sec_encoding_space}. For example, in CNNs, there are multiple parameters in the primitive layers, as can be seen in Table~\\ref{table_parameters}. As for the DenseBlock implemented by Sun \\textit{et al.}~, only two parameters are needed to build the block. The configuration of a cell is more flexible than that of a block in CNNs, as a cell can be regarded as a microcosm of a complete neural network. For example, the cells in~ are made up of a combination of 10 layers selected from 8 different primitive layers. 
But the cells do not include some of the configurations of primitive layer, such as the feature map size which is an important parameter in the primitive layer~.", "id": "c0d67ff4-2346-4359-81e9-cfc247725ee8", "level": "subsection", "origin_cites_number": 3, "parent_id": "13bf9942-9456-4154-adf6-fca4ab8f93d7", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Encoding Strategy" ], [ "subsection", "Encoding for Configurations of Basic Units" ] ], "subsections": [], "title": "Encoding for Configurations of Basic Units" }, { "cite_extract_rate": 0, "cites": [], "content": "When the parameters (configurations) of the basic units in the architecture have been determined, the corresponding architecture cannot be built up immediately. Since edges are indispensable in directed graph, connections are also part of the architecture. The connections discussed in this section include not only connections between basic units, but also the connections within the basic units.\nGenerally, the architectures of neural networks can be divided into two categories: \\textit{linear architecture} and \\textit{non-linear architecture}. The former denotes the architectures containing sequential basic units. The latter indicates that there are skip-connections or loop-connections in the architecture. 
Please note that, the structure could be the macro structure consisting of basic units, or the micro structure within the basic units.", "id": "2960167f-5dd3-47f9-b647-47b27f2be6ee", "level": "subsection", "origin_cites_number": 0, "parent_id": "13bf9942-9456-4154-adf6-fca4ab8f93d7", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Encoding Strategy" ], [ "subsection", "Encoding for Connections" ] ], "subsections": [ "6fd9ba7f-54c9-40c3-908e-224aaa013157", "a7c80d6f-1308-48d5-9a45-859a5cedcd95" ], "title": "Encoding for Connections" }, { "cite_extract_rate": 0.75, "cites": [ 96, 6821, 97 ], "content": "The linear architecture can be found in different kinds of architectures including the one generated from the layer-based encoding space and the block-based encoding space. Its widespread use in ENAS stems from its simplicity. No matter how complex the internal of the basic units is, many ENAS methods stack the basic units one by one to build up the skeleton of the architecture which is linear. For example, in AE-CNN~, Sun \\textit{et al.} stacked different kinds of blocks to generate the architecture.\nOne special case is a linear architecture generated from the layer-based encoding space. In this case, there is no need to solely encode the connections, and only the parameters in each basic unit are enough to build an architecture. One classical example can be seen in~ where Sun \\textit{et al.} explored a great number of parameters based on a linear CNN architecture. However, most architectures are not designed to be linear. 
The skip connections in ResNet~ and the dense connections in DenseNet~ show the ability to build a good architecture.", "id": "6fd9ba7f-54c9-40c3-908e-224aaa013157", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "2960167f-5dd3-47f9-b647-47b27f2be6ee", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Encoding Strategy" ], [ "subsection", "Encoding for Connections" ], [ "subsubsection", "Linear Architecture" ] ], "subsections": [], "title": "Linear Architecture" }, { "cite_extract_rate": 0.4, "cites": [ 870, 6827 ], "content": "Firstly, we will introduce two approaches to encoding a non-linear architecture in this subsection. Specifically, the adjacency matrix is the most popular way to represent the connections for non-linear architectures. Genetic CNN~ used a binary string to represent the connections, and the string can be transformed into a triangular matrix. In the binary string, ``1'' denotes that there is a connection between the two nodes while ``0'' denotes no connection in between. Lorenzo \\textit{et al.}~ used a matrix to represent the skip connections, and this work revolved around the adjacency matrix. Back in the 1990s, Kitano \\textit{et al.}~ began to study the use of the adjacency matrix to represent network connections, and explained the process from the connectivity matrix, to the bit-string genotype, to the network architecture phenotype. Another way to represent the connections is to use an ordered pair $G = (V,E)$, with vertices $V$ and directed edges $E$, to represent a directed acyclic graph. Irwin \\textit{et al.}~ also used this strategy to encode the connections.\nSecondly, the non-linear architecture is a more common case in both the macro architecture and the micro architecture. AmoebaNet-A~, as an example of a non-linear macro architecture, stacked its two kinds of cells several times to build up an architecture. 
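The binary-string-to-triangular-matrix decoding described above can be sketched as follows; the node count and the row-major bit ordering are illustrative assumptions rather than the exact scheme of any cited method.

```python
def decode_connections(bit_string, num_nodes):
    """Decode a binary string into the upper-triangular part of an adjacency
    matrix for a directed acyclic graph: bit k decides whether an edge runs
    from node i to node j (i < j), with '1' meaning connected and '0' not."""
    assert len(bit_string) == num_nodes * (num_nodes - 1) // 2
    edges = []
    k = 0
    for i in range(num_nodes):
        for j in range(i + 1, num_nodes):
            if bit_string[k] == "1":
                edges.append((i, j))
            k += 1
    return edges

# "101" over 3 nodes: edges (0,1) and (1,2) exist, (0,2) does not.
edges = decode_connections("101", 3)
```

Restricting the bits to the upper triangle guarantees acyclicity, since every edge points from a lower-indexed node to a higher-indexed one.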
In addition, each cell receives two inputs from the previous two cells separately, which means that the direct connection and the skip-connection are both used in this macro structure. Also in AmoebaNet, each cell is a non-linear architecture inside, which means the micro architecture is also non-linear. The non-linear architecture provides the architecture with more flexibility, which makes it more likely to build up a promising architecture than that of the linear ones.", "id": "a7c80d6f-1308-48d5-9a45-859a5cedcd95", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "2960167f-5dd3-47f9-b647-47b27f2be6ee", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Encoding Strategy" ], [ "subsection", "Encoding for Connections" ], [ "subsubsection", "Non-linear Architecture" ] ], "subsections": [], "title": "Non-linear Architecture" }, { "cite_extract_rate": 0.318840579710144, "cites": [ 6861, 6851, 6838, 6837, 6863, 6844, 6841, 8955, 6829, 6832, 6856, 6857, 9079, 6858, 6836, 6835, 6843, 6860, 6821, 6846, 6855, 8247, 6847, 6864, 6865, 9080, 6845, 6842, 6859, 6826, 6822, 6830, 6848, 870, 6853, 6840, 6833, 6862, 6831, 6849, 6824, 6827, 6839, 6828 ], "content": "\\label{sec_Population_operators}\n\\begin{table*}[ht]\n\t\\renewcommand\\arraystretch{0.8}\n\t\\caption{Categorization of EC and Different Types of Neural Network} \n\t\\label{table_EC}\n\t\\vspace{-0.3cm}\n\t\\begin{center}\n\t\t\\begin{tabular}{|p{1cm}|p{0.5cm}|p{1.8cm}|p{4.5cm}|p{1.4cm}|p{2cm}|p{1.3cm}|p{2cm}|}\n\t\t\t\\hline\n\t\t\t\\multicolumn{3}{|l|}{} & CNN & DBN & RNN & AE & Others \\\\ \\hline\n\t\t\t\\multirow{12}{*}{\\shortstack[l]{Single\\\\objective}} & \\multirow{3}{*}{EA} & GAs &~ & ~ & ~ & ~ & ~ \\\\ \\cline{3-8} \n\t\t\t& & GP & ~ & & ~ & ~ & ~ \\\\ \\cline{3-8} \n\t\t\t& & ES & ~ & & ~ & ~ & ~ \\\\ \\cline{2-8} \n\t\t\t& \\multirow{3}{*}{SI} & ACO & ~ & & ~ & & \\\\ \\cline{3-8} \n\t\t\t& & PSO & ~ & ~ & & ~ & ~ \\\\ \\cline{2-8} 
\n\t\t\t& \\multirow{6}{*}{Other} & Memetic & ~ & & & & \\\\ \\cline{3-8} \n\t\t\t& & DE & ~ & & ~ & ~ & ~ \\\\ \\cline{3-8} \n\t\t\t& & HCA & ~ & & & & \\\\ \\cline{3-8} \n\t\t\t& & CVOA & & & ~ & & \\\\ \\cline{3-8} \n\t\t\t& & Hyper-heuristic & & ~ & & & \\\\ \\cline{3-8} \n\t\t\t& & FA & ~ & & & & \\\\ \\cline{3-8} \n\t\t\t& & AIS & ~ & & & & \\\\ \\hline\n\t\t\t\\multirow{2}{*}{\\shortstack[l]{Multi-\\\\objective}} & \\multicolumn{2}{l|}{EA} & ~ & ~ & ~ & & ~ \\\\ \\cline{2-8} \n\t\t\t& \\multicolumn{2}{l|}{SI} & ~ & ~ & & & \\\\ \\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\\vspace{-0.5cm}\n\\end{table*}\nThis section discusses the population updating process as shown in Fig.~\\ref{fig_flowchart}. Generally, the population updating varies greatly among existing ENAS algorithms because they may employ different EC methods which are with different updating mechanisms. Table~\\ref{table_EC} shows the ENAS algorithms which are classified according to the EC methods that they employ and different types of DNNs that they target at. Obviously, the EA-based ENAS algorithms dominate the ENAS. To be more specific, the GA-based ENAS is the most popular approach which largely owes to the convenience of architecture representation in GA. As a result, we give a detailed introduction to the EA-based ENAS including the selection strategy at first. Immediately after, we present a summary of the SI-based ENAS methods and others separately. 
The corresponding multi-objective ENAS algorithms are also introduced at the end of each subsection.", "id": "d3650a8a-21c2-4aa1-9b61-b69c455614a3", "level": "section", "origin_cites_number": 138, "parent_id": "40bf4670-36c8-4014-b074-8f86f999eff7", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Population Updating" ] ], "subsections": [ "a152dfdd-9b9e-46a8-8504-9097a22a6736", "f362ecc4-6be9-45c6-b9e4-700fb0393594", "670f0571-da2a-4b7f-ab3e-86d8a80bfc64" ], "title": "Population Updating" }, { "cite_extract_rate": 0.4, "cites": [ 6851, 6843, 6869, 6821, 870, 9079, 6868, 6867, 6837, 8247, 6847, 6863, 6853, 9080, 6844, 6841, 6834, 6845, 6829, 6831, 6849, 6824, 6866, 6842, 6827, 6839, 6832, 8302, 6858, 6826 ], "content": "\\begin{table}[]\n\t\\renewcommand\\arraystretch{0.8}\n\t\\caption{Selection Strategy} \n\t\\label{table_selection_strategy}\n\t\\vspace{-0.3cm}\n\t\\begin{center}\n\t\t\\begin{tabular}{p{2cm}|p{5cm}}\n\t\t\t\\hline\n\t\t\tElitism & ~ \\\\\n\t\t\t\\hline\n\t\t\tDiscard the worst or the oldest & ~\\\\\n\t\t\t\\hline\n\t\t\tRoulette & ~ \\\\\n\t\t\t\\hline\n\t\t\tTournament selection & ~\\\\\n\t\t\t\\hline\n\t\t\tOthers & ~ \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\\vspace{-0.5cm}\t\n\\end{table}\nThe dash box in Fig.~\\ref{fig_flowchart} shows the general flow of population updating in EA-based ENAS. In this section, we will introduce the selection strategy and the evolutionary operations which collectively update the population. Specifically, the first stage of population updating is selection. The selection strategies can be classified into several types, where Table~\\ref{table_selection_strategy} shows the main kinds of selection strategies. 
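Two of the strategies in Table~\ref{table_selection_strategy} can be sketched in a few lines of generic code (an illustration of the standard operators, not taken from any particular ENAS implementation):

```python
import random

def tournament_select(population, fitness, k=3):
    """Tournament selection: sample k individuals uniformly at random
    and return the fittest among them."""
    contestants = random.sample(population, k)
    return max(contestants, key=fitness)

def roulette_select(population, fitness):
    """Roulette wheel selection: survival probability proportional to
    fitness (all fitness values are assumed positive)."""
    weights = [fitness(ind) for ind in population]
    return random.choices(population, weights=weights, k=1)[0]
```

With $k$ equal to the population size, tournament selection degenerates to elitism, while $k=1$ gives uniform random selection; the tournament size thus controls the selection pressure.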
Note that the selection strategy can be used not only for choosing individuals as parents to generate offspring with the evolutionary operators, but also in the environmental selection stage, which chooses the individuals that make up the next population. Zhang \textit{et al.}~ termed these two selections mate selection and environmental selection, respectively.\nExisting selection strategies can be divided into five categories: elitism, discarding the worst, roulette wheel selection, tournament selection and others. The simplest strategy is elitism, which retains the individuals with higher fitness. However, this can cause a loss of diversity in the population, which may make the population fall into local optima. Discarding the worst is similar to elitism: it removes the individuals with poor fitness values from the population. Real \textit{et al.}~ used aging evolution, which discards the oldest individual in the population. Aging evolution can explore the search space more, instead of zooming in on good models too early as non-aging evolution would. The same selection strategy was also used in~. Zhu \textit{et al.}~ combined these two approaches, discarding both the worst individual and the oldest individual at the same time. Roulette wheel selection gives every individual a probability of surviving (or being discarded) according to its fitness among the population, regardless of whether it is the best or not. Tournament selection selects the best one from an equally likely sampling of individuals. Furthermore, Johner \textit{et al.}~ used a ranking function to choose individuals by rank. A selection trick termed niching was used in~ to avoid getting stuck in local optima. 
This trick allows offspring worse than their parents to survive for several generations until they evolve into better ones.\nMost of the methods focus on preserving well-performing individuals; however, Liu \textit{et al.}~ emphasized the genes more than the surviving individuals, where a gene can represent any component of the architecture. They believed that individuals composed of genes from the fine-gene set are more likely to have promising performance.\nSome selection methods aim at preserving the diversity of the population. Elsken \textit{et al.}~ selected individuals in inverse proportion to their density. Javaheripi \textit{et al.}~ chose the parents based on their distance (difference) during mate selection: the two individuals with the largest distance are chosen to promote exploration.\nIn terms of evolutionary operations, mutation and crossover are the two most commonly used operations in EA-based ENAS algorithms. In particular, mutation is performed on a single individual, while crossover takes two individuals to generate offspring.\nThe mutation operator aims to search for the global optimum in the neighborhood of an individual. A simple idea is to allow the encoded information to vary within a given range. Sun \textit{et al.}~ used the polynomial mutation~ on the parameters of layers, which are expressed as real numbers.\nTo make mutation less random, Lorenzo \textit{et al.}~ proposed a novel Gaussian mutation based on Gaussian regression to guide the mutation, i.e., the Gaussian regression predicts which architectures may be good, and the newly generated individuals are sampled in the regions of the search space where the fitness values are likely to be high. This gives the mutation a ``direction''.\nMoreover, Maziarz \textit{et al.}~ used an RNN to guide the mutation operation. In this work, the mutation operations were not sampled at random among the possible architectural choices, but were sampled from distributions inferred by an RNN. 
Using an RNN to control the mutation operation can also be seen in other methods such as~.\nSome studies investigated the diversity of the population after mutation. Qiang \textit{et al.}~ used a variable mutation probability: a higher probability in the early stage for better exploration and a lower probability in the later stage for better exploitation. This scheme has been effectively applied in many other methods~. To maintain the diversity of the population after the mutation operation, Tian \textit{et al.}~ used \textit{force mutation} and distance calculation, which ensures that an individual in the population is not overly similar to the other individuals (especially the best one). Kramer \textit{et al.}~ used the (1+1)-evolutionary strategy, which generates an offspring from a single parent with bit-flip mutation, together with mutation rate control and niching to overcome local optima.\nThe intensive computational cost of ENAS presents a bottleneck, which will be discussed in later sections. To reduce the unaffordable computational cost and time, special kinds of mutation have also been designed. Zhang \textit{et al.}~ proposed an exchange mutation which exchanges the positions of two genes of an individual, i.e., exchanges the order of layers. This introduces no new layers, and the weights of the neural network can be completely preserved, which means that the offspring do not have to be trained from scratch. \nChen \textit{et al.}~ introduced two function-preserving operators for DNNs, termed network morphisms~. Network morphisms aim to change the DNN architecture without losing the acquired experience. A network morphism changes the architecture from $F(\cdot)$ to $G(\cdot)$ such that the condition formulated by Equation~(\ref{equ_function-preserving}) is satisfied:\n\begin{equation}\n\label{equ_function-preserving}\n\forall x,\quad F(x)=G(x)\n\end{equation}\nwhere $x$ denotes the input of the DNN. 
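As a toy numerical check of this property, written in plain Python rather than any deep learning framework, inserting an identity-initialised layer after a ReLU activation leaves the network output unchanged; this mimics the spirit of \textit{net2deepernet} (the tiny network and its weights below are made up for illustration):

```python
def matvec(w, x):
    """Matrix-vector product over nested Python lists."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]

def forward(layers, x):
    """A minimal MLP: alternate matrix-vector products and ReLU."""
    for w in layers:
        x = [max(0.0, v) for v in matvec(w, x)]
    return x

def net2deeper(layers, pos):
    """Insert an identity-initialised layer at position pos >= 1.
    Function-preserving because ReLU(I * h) == h whenever h is the
    (non-negative) output of a preceding ReLU layer."""
    n = len(layers[pos - 1])  # width of the activation at the insertion point
    identity = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    return layers[:pos] + [identity] + layers[pos:]
```

The deeper network computes exactly the same function, so training can continue from the inherited weights instead of restarting from scratch.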
Network morphisms can thus be regarded as function-preserving mutation operations. With such an operation, the mutated individuals will not perform worse than their parents. To be more specific, Chen \textit{et al.}~ proposed \textit{net2widernet} to obtain a wider net and \textit{net2deepernet} to obtain a deeper net. Elsken \textit{et al.}~ extended the network morphisms with two popular network operations: skip connections and batch normalization. Zhu \textit{et al.}~ proposed five well-designed function-preserving mutation operations that guide the evolutionary process with the information that has already been learned. To avoid local optima, Chen \textit{et al.}~ added noise to some function-preserving mutations, and in their experiments they found that adding noise to pure network morphisms, far from compromising efficiency, in fact improved the final classification accuracy.\nPlease note that network morphisms can only increase the capacity of a network, because if one decreased the network's capacity, the function-preserving property could not be guaranteed~. As a result, the architectures generated by network morphisms only grow larger and deeper, which is not suitable for devices with limited computing resources, such as mobile phones. To allow network architectures to shrink as well, Elsken \textit{et al.}~ proposed the approximate network morphism, which satisfies Equation~(\ref{equ_approximate_network_morphism}):\n\begin{equation}\n\label{equ_approximate_network_morphism}\n\forall x,\quad F(x)\approx G(x)\n\end{equation}\nFor the crossover operator, single-point crossover~ is the most popular method in EA-based ENAS~ because of its implementation simplicity. However, single-point crossover can typically be applied only to two individuals of equal length, and therefore cannot handle variable-length individuals. 
To this end, Sun \textit{et al.}~ proposed an efficient crossover operator for individuals of variable lengths. Sapra \textit{et al.}~ proposed a disruptive crossover that swaps a whole cluster (a sequence of layers) between the two individuals at the corresponding positions, rather than focusing only on the parameters of layers. Sun \textit{et al.}~ used Simulated Binary Crossover (SBX)~ to combine the encoded parameters of two matched layers. Please note that the encoded parameters after SBX differ from those of both parents, which distinguishes SBX from other crossover operators.\nEAs for multi-objective ENAS are gaining more and more attention from researchers. Single-objective ENAS algorithms are concerned with only one objective, e.g., the classification accuracy, and have a single goal: searching for the architecture with the highest accuracy. In general, most multi-objective ENAS algorithms aim to optimize both the performance of the neural network and the number of parameters simultaneously~.\nHowever, these objective functions are often in conflict with each other. For example, achieving higher accuracy often requires a more complicated architecture and hence more computational resources, whereas a device with limited computational resources, e.g., a mobile phone, cannot afford such sophisticated architectures. \nThe simplest way to tackle a multi-objective optimization problem is to convert it into a single-objective optimization problem with weighting factors, i.e., the weighted summation method. Equation~(\ref{equ_multi-fitness})\n\begin{equation}\n\label{equ_multi-fitness}\nF = \lambda f_{1} + (1-\lambda) f_{2}\n\end{equation}\nis the classical linear form that weights two objective functions $f_{1}$ and $f_{2}$ into a single objective function, where $\lambda \in (0, 1)$ denotes the weighting factor. 
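Read directly as code, Equation~(\ref{equ_multi-fitness}) amounts to the following sketch; the mapping of the model-size objective into $[0,1]$ and the particular value of $\lambda$ are illustrative assumptions, not taken from any cited work:

```python
def weighted_fitness(accuracy, num_params, lam=0.7, param_budget=10_000_000):
    """Scalarise two conflicting objectives into a single fitness F:
    f1 rewards accuracy, f2 rewards small models (both mapped to [0, 1])."""
    f1 = accuracy
    f2 = 1.0 - min(num_params / param_budget, 1.0)  # smaller model -> higher score
    return lam * f1 + (1.0 - lam) * f2
```
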
In~, the multi-objective optimization problem was solved with available single-objective optimization methods via the weighted summation of Equation~(\ref{equ_multi-fitness}). Chen \textit{et al.}~, instead of adopting the linear addition as the objective function, used a nonlinear penalty term. However, manually defined weights may introduce bias~.\nSome algorithms, such as NSGA-II~ and MOEA/D~, have been designed for and widely used in multi-objective optimization, and they have also been adopted in ENAS methods such as~. These methods aim to find a Pareto-front set (or non-dominated set). Only such methods are placed in the multi-objective category of Table~\ref{table_EC}. Some works have improved these multi-objective optimization methods for better use in ENAS.\nBaldeon \textit{et al.}~ chose the penalty-based boundary intersection approach in MOEA/D because training a neural network involves nonconvex optimization and the shape of the Pareto front is unknown.\nLEMONADE~ divided the objective functions into two categories: $f_{exp}$ and $f_{cheap}$. $f_{exp}$ denotes the expensive-to-evaluate objectives (e.g., the accuracy), while $f_{cheap}$ denotes the cheap-to-evaluate objectives (e.g., the model size). In every iteration, parent networks are sampled so as to favor sparsely populated regions with respect to the cheap objectives $f_{cheap}$, and offspring are generated from them. Therefore, $f_{cheap}$ is evaluated more often than $f_{exp}$ to save time. 
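The Pareto-dominance test at the heart of NSGA-II-style methods can be sketched as follows, assuming for simplicity that every objective is to be minimised:

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b`: no worse in
    every objective and strictly better in at least one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For instance, over (error, model size) pairs, a large-but-accurate model and a small-but-weaker model can both survive on the front, which is exactly the trade-off these algorithms expose.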
Schoron \\textit{et al.}~ also took the use of the LEMONADE proposed by Elsken \\textit{et al.}~.\nDue to that NSGA-III~ may fall into the small model trap (this algorithm prefers small models), Yang \\textit{et al.}~ have made some improvements to the conventional NSGA-III for favoring larger models.", "id": "a152dfdd-9b9e-46a8-8504-9097a22a6736", "level": "subsection", "origin_cites_number": 75, "parent_id": "d3650a8a-21c2-4aa1-9b61-b69c455614a3", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Population Updating" ], [ "subsection", "EAs for ENAS" ] ], "subsections": [], "title": "EAs for ENAS" }, { "cite_extract_rate": 0.18181818181818102, "cites": [ 6859, 6835 ], "content": "PSO is inspired by the bird flocking or fish schooling~, and is easy to implement compared with other SI algorithms.\nJunior \\textit{et al.}~ used their implementation of PSO to update the particles based on the layer instead of the parameters of the layer. Gao \\textit{et al.}~ developed a gradient-priority particle swarm optimization algorithm to handle issues including the low convergence efficiency of PSO when there are a large number of hyper-parameters to be optimized. They expected the particle to find the locally optimal solution at first, and then move to the global optimal solution.\nFor ACO, the individuals are generated in a quite different way. Several ants are in an ant colony\\footnote{The population in ACO also termed as colony.}. Each ant moves from node to node following the pheromone instructions to build an architecture. The pheromone is updated every generation. The paths of well-performed architecture will maintain more pheromone to attract the next ant for exploitation and at the same time, the pheromone is also decaying (i.e., pheromone evaporation), which encourages other ants to explore other areas. 
Byla \\textit{et al.}~ let the ants choose the path from the node to node in a graph whose depth increases gradually. Elsaid \\textit{et al.}~ introduced different ant agent types to act according to specific roles to serve the needs of the colony, which is inspired by the real ants species.\nSI for multi-objective ENAS started only in the last two years and the research of this field is scarce which can be seen from Table~\\ref{table_EC}.\nLi \\textit{et al.}~ used the bias-variance framework on their proposed multi-objective PSO to get a more accurate and stable architecture.\nWu \\textit{et al.}~ used the MOPSO~ for neural networks pruning. The $G_{best}$ is selected according to the crowding distance in the non-dominant solutions set. Wang \\textit{et al.}~ used the OMOPSO~, which selects the leaders using a crowding factor and the $G_{best}$ is selected from the leaders. To better control the balance between convergence and diversity, Jiang \\textit{et. al.}~ proposed a MOPSO/D algorithm based on an adaptive penalty-based boundary intersection.", "id": "f362ecc4-6be9-45c6-b9e4-700fb0393594", "level": "subsection", "origin_cites_number": 11, "parent_id": "d3650a8a-21c2-4aa1-9b61-b69c455614a3", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Population Updating" ], [ "subsection", "SI for ENAS" ] ], "subsections": [], "title": "SI for ENAS" }, { "cite_extract_rate": 0.42857142857142805, "cites": [ 6831, 6865, 6826 ], "content": "Different from GAs, the mutation of DE exploits the information from three individuals. \nSome ENAS methods like~ chose DE to guide the offspring generation. However, there is little difference between different DE-based ENAS algorithms.\nWang \\textit{et al.}~ proposed a hybrid PSO-GA method. They used PSO to guide the evolution of the parameters in each block encoded in decimal notation. 
Meanwhile, GA was used to guide the evolution of the shortcut connections, which are encoded in binary notation. Because PSO performs well on continuous optimization and GA is suitable for optimization over binary values, this hybrid method can search for architectures effectively.\nHCA can be interpreted as a very simple evolutionary algorithm. For example, in~ the evolutionary operators contain only mutation and no crossover, and the selection strategy is relatively simple. The memetic algorithm is a hybrid of EAs and local search. Evans \textit{et al.}~ integrated local search (in the form of gradient descent) into GP as a fine-tuning operation. The CVOA~ was inspired by a new respiratory virus, COVID-19: the architecture is found by simulating how the virus spreads and infects healthy individuals. Hyper-heuristics contain two levels, a high-level strategy and low-level heuristics, with a domain barrier between them. Hence the high-level strategy remains useful even when the application domain changes. 
AIS was inspired by theories of the mammalian immune system and, unlike GA, does not require a crossover operator~.", "id": "670f0571-da2a-4b7f-ab3e-86d8a80bfc64", "level": "subsection", "origin_cites_number": 7, "parent_id": "d3650a8a-21c2-4aa1-9b61-b69c455614a3", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Population Updating" ], [ "subsection", "Other EC Techniques for ENAS" ] ], "subsections": [], "title": "Other EC Techniques for ENAS" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 6822, 6851, 6843, 6821, 6848, 9079, 6846, 8247, 6871, 6841, 6833, 6845, 8955, 6862, 6831, 6842, 6872, 6870, 1461, 6826, 6835 ], "content": "\label{sec_evaluation}\nIn this section, we will discuss strategies to improve the efficiency of evaluation, considering that evaluation is often the most time-consuming stage of ENAS algorithms~.\nReal \textit{et al.}~ used 250 computers to finish the LargeEvo algorithm over 11 days. Such computational resources are not available to everyone interested in NAS. Almost all of the methods evaluate individuals by training them first and then evaluating them on the validation/test dataset. Since architectures are becoming more and more complex, training each architecture to convergence takes a long time, so new methods are needed to shorten the evaluation time and to reduce the dependency on large amounts of computational resources.\nTable~\ref{table_shorten_time} lists five of the most common methods to reduce the time: weight inheritance, early stopping policy, reduced training set, reduced population, and population memory. 
We introduce these five kinds of methods first, then other promising methods, and finally the surrogate-assisted methods at the end of this section.\nBecause the evolutionary operators do not completely disrupt the architecture of an individual, some parts of a newly generated individual are the same as those of its parents, and the weights of these unchanged parts can easily be inherited. With weight inheritance, the neural networks no longer need to be trained completely from scratch. This method was already used in~ 20 years ago.\nMoreover, as mentioned in Section~\ref{sec_Population_operators}, the network morphisms change the network architecture without loss of the acquired experience. This can be regarded as the ultimate weight inheritance because it solves the weight inheritance problem for the changed part of the architecture. The ultimate weight inheritance lets the new individuals completely inherit the knowledge of their parents, which saves a lot of time.\nThe early stopping policy is another method that has been widely used in NAS. The simplest way is to set a fixed and relatively small number of training epochs. This method is used in~, where training the individuals for a small number of epochs is sufficient. Similarly, Assunccao \textit{et al.}~ let the individuals undergo training for the same, short time in each epoch (although this time is not fixed and increases with the epochs) to allow the promising architectures more training time and thus a more precise evaluation. So \textit{et al.}~ set hurdles after a fixed number of epochs: the weak individuals stop training early to save time.\nHowever, the early stopping policy can lead to inaccurate estimates of individuals' performance (especially for large and complicated architectures), as can be seen in Fig.~\ref{fig_learning_curve}. 
In Fig.~\\ref{fig_learning_curve}, \\emph{individual2} performs better than \\emph{individual1} before epoch \\emph{t1}, whereas \\emph{individual1} performs better in the end. Yang \\textit{et al.}~ also discussed this phenomenon. So, it is crucial to determine at which point to stop.\nNote that neural networks can converge or hardly improve its performance after several epochs, as seen in the \\emph{t1} for \\emph{individual2} and the \\emph{t2} for \\emph{individual1} in Fig.~\\ref{fig_learning_curve}. Using the performance estimated at this point can evaluate an individual's relatively accurately with less training time. Therefore, some methods such as~ stopped training when observing there is no significant performance improvement.\nSuganuma \\textit{et al.}~ used the early stopping policy based on a reference curve. If the accuracy curve of an individual is under the reference curve for successive epochs, then the training will be terminated and this individual is regarded as a poor one. After every epoch, the reference curve is updated by the accuracy curve of the best offspring. \nThe reduced training set, i.e., using a subset of data that assuming similar properties to a large dataset, can also shorten the time effectively. Liu \\textit{et al.}~ explored promising architectures by training on a subset and used transfer learning on the large original dataset. Because there are so many benchmark datasets in the image classification field, the architecture can be evaluated on a smaller dataset (e.g., CIFAR-10) first and then applied on a large dataset (such as CIFAR-100 and ImageNet~). The smaller dataset can be regarded as the proxy for the large one.\nThe reduced population is a unique method of ENAS since other NAS approaches do not have a population. Assunccao \\textit{et al.}~ reduced the population based on their previous algorithm~ to speed up the evolution. 
However, simply reducing the population may not explore the search space well in each epoch and may sacrifice the global search ability. Another way is to reduce the population dynamically. For instance, Fan \textit{et al.}~ used the ($\mu+\lambda$) evolution strategy and divided the evolution into three stages with successive population reductions, aiming to find a balance between the limited computing resources and the efficiency of the evolution. The large population in the first stage ensures the global search ability, while the small population in the last stage shortens the evolution time. Instead of reducing the population, Liu \textit{et al.}~ evaluated downsized architectures of smaller size at an early stage of evolution. Similarly, Wang \textit{et al.}~ did not evaluate the whole architecture but started with a single block; the blocks were then stacked to build the architecture as evolution proceeded.\n\begin{table}[]\n\t\renewcommand\arraystretch{0.8}\n\t\caption{Different Methods to Shorten the Evaluation Time} \n\t\label{table_shorten_time}\n\t\vspace{-0.2cm}\n\t\begin{center}\n\t\t\begin{tabular}{p{2.5cm}|p{5cm}}\n\t\t\t\hline\n\t\t\tWeight inheritance & ~ \\\n\t\t\t\hline\n\t\t\tEarly stopping policy & ~\\\n\t\t\t\hline\n\t\t\tReduced training set & ~ \\\n\t\t\t\hline\n\t\t\tReduced population & ~\\\n\t\t\t\hline\n\t\t\tPopulation memory & ~\\\n\t\t\t\hline\n\t\t\end{tabular}\n\t\end{center}\n\vspace{-0.3cm}\n\end{table}\n\begin{figure}\n\t\centering\n\t\includegraphics[width=0.8\linewidth]{learning_curve}\n\t\caption{Learning curves of two different individuals.}\n\t\label{fig_learning_curve}\n\t\vspace{-0.2cm}\n\end{figure}\nPopulation memory is another category of methods unique to ENAS. It works by reusing the corresponding architectural information that has previously appeared in the population.
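A minimal fitness cache of this kind, keyed by the architecture encoding, can be sketched as follows (the keying scheme is an illustrative assumption rather than the exact hashing used in the cited works):

```python
fitness_cache = {}

def evaluate_with_memory(encoding, train_and_eval):
    """Return the cached fitness if this exact architecture has been
    evaluated before; otherwise train/evaluate once and remember it."""
    key = tuple(encoding)  # encodings are assumed hashable as tuples
    if key not in fitness_cache:
        fitness_cache[key] = train_and_eval(encoding)
    return fitness_cache[key]
```

Every repeated appearance of an architecture then costs a dictionary look-up instead of a full training run.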
In population-based methods, especially GA-based methods (e.g., in~), it is natural to maintain well-performing individuals in the population across successive generations. Sometimes the individuals in the next population directly inherit all the architectural information of their parents without any modification, so it is not necessary to evaluate them again. Fujino \textit{et al.}~ used memory to record the fitness of individuals; if the same architecture encoded in an individual appears again, the fitness value is retrieved from memory instead of being reevaluated. Similarly, Miahi \textit{et al.}~ and Sun \textit{et al.}~ employed a hashing method to save pairs of architecture and fitness for each individual and to reuse them when the same architecture appears again. Johner \textit{et al.}~ prohibited architectures that had appeared before from being generated as offspring. This does reduce the time; however, the best individuals are then forbidden from remaining in the population, which may lead the population to evolve in a bad direction.\nThere are many other well-performing methods to reduce the time in ENAS. Rather than training thousands of different architectures, the one-shot model~ trains only one SuperNet to save time. Different architectures, i.e., the SubNets, are sampled from the SuperNet with shared parameters. Yang \textit{et al.}~ argued that traditional ENAS methods, which do not use a SuperNet and optimize each model separately, are less efficient. In contrast, the one-shot model optimizes the architecture and the weights alternately. However, the weight-sharing mechanism makes it difficult to evaluate architectures accurately. Chu \textit{et al.}~ scrutinized weight-sharing NAS from a fairness perspective and demonstrated its effectiveness. Even so, some doubts about the one-shot model remain: the weights in the SuperNet are coupled. 
It is unclear why inherited weights for a specific architecture are still effective~.\nMaking better use of hardware can also reduce the time. Jiang \textit{et al.}~ used a distributed asynchronous system which contains a major computing node with 20 individual workers. Each worker is responsible for training a single block and uploads its result to the major node in every generation. Wang \textit{et al.}~ designed an infrastructure able to leverage all of the available GPU cards across multiple machines to concurrently perform the objective evaluations for a batch of individuals.\nColangelo \textit{et al.}~ designed a reconfigurable hardware framework that fits ENAS. As they claimed, this was the first work on NAS and hardware co-optimization.\nFurthermore, Lu \textit{et al.}~ adopted the concept of proxy models, which are small-scale versions of the intended architectures. For example, in a CNN architecture, the number of layers and the number of channels in each layer are reduced. However, the drawback of this method is obvious: a loss of prediction accuracy. Therefore, they performed an experiment to determine the smallest proxy model that can still provide a reliable estimate of performance at a larger scale.\nAll the above methods obtain the fitness of individuals by directly evaluating their performance on the validation dataset. An alternative is to use indirect methods, namely performance predictors. As summarized in~, performance predictors can be divided into two categories: predictors based on the learning curve and end-to-end performance predictors, both of which follow the training-predicting learning paradigm. 
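As a self-contained illustration of the end-to-end variant, the sketch below replaces the regression models used in the literature (e.g., a random forest) with a nearest-neighbour look-up over architecture encodings; the encoding and data are made up, but the train-then-predict workflow is the same:

```python
def fit_predictor(archive):
    """`archive` is a list of (encoding, measured_fitness) pairs obtained
    by actually training a small number of architectures."""
    return list(archive)

def predict_fitness(model, encoding):
    """Predict the fitness of an unseen architecture as that of the most
    similar previously evaluated encoding (squared Euclidean distance)."""
    def dist(other):
        return sum((a - b) ** 2 for a, b in zip(other, encoding))
    _, fitness = min(model, key=lambda pair: dist(pair[0]))
    return fitness
```

Once fitted, each prediction costs a distance scan over the archive rather than a full training run.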
This does not mean that the performance predictor bypasses the training phase entirely; rather, it learns from the information obtained during the training phase and uses the acquired knowledge to make reasonable predictions for other architectures.\nRalwal \textit{et al.}~ used a learning-curve-based predictor, where the fitness is not computed at the last epoch but predicted from the sequence of fitness values of the first epochs. Specifically, they used a Long Short-Term Memory (LSTM)~ as a sequence-to-sequence model, predicting the final performance from the learning results of the first several epochs on the validation dataset.\nSun \textit{et al.}~ adopted a surrogate-assisted method called an end-to-end performance predictor. The predictor does not need any extra information about the performance of the individuals to be evaluated. The performance predictor is essentially a regression model mapping an architecture to its performance. The regression model first needs to be trained with sufficient training data pairs, where each pair consists of an architecture and its corresponding performance. Specifically, they chose random forest~ as the regression model to accelerate the fitness evaluations in ENAS. When the random forest receives a newly generated architecture as input, the adaptive combination of a large number of regression trees, trained in advance, gives the prediction.", "id": "fa11702e-7516-4849-b9c7-1422e93b4202", "level": "section", "origin_cites_number": 63, "parent_id": "40bf4670-36c8-4014-b074-8f86f999eff7", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Efficient Evaluation" ] ], "subsections": [], "title": "Efficient Evaluation" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec_application}\nThis section discusses the different application fields in which ENAS has been involved. 
Generally, the ENAS algorithms can be applied to wherever DNNs can be applied. Table~\\ref{table_application} shows the wide range of applications and Table~\\ref{table_CIFAR} displays the performance of extraordinary ENAS methods on two popular and challenging datasets for image classification tasks, namely CIFAR-10 and CIFAR-100. Both of these two tables can show what ENAS has achieved so far.", "id": "390da3df-188a-48a9-991a-966d654bdd4d", "level": "section", "origin_cites_number": 0, "parent_id": "40bf4670-36c8-4014-b074-8f86f999eff7", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Applications" ] ], "subsections": [ "06bc3345-4a23-45a7-a0e1-7338ed3b2e6b", "ce4d03d7-51ce-4271-828b-0f60a203e534" ], "title": "Applications" }, { "cite_extract_rate": 0.3671875, "cites": [ 6851, 6838, 6837, 6863, 6844, 6841, 8955, 6829, 6876, 6832, 6857, 9079, 6875, 6874, 6850, 6870, 6836, 6835, 6869, 6843, 6860, 6821, 6846, 8247, 6847, 6864, 6873, 6865, 9080, 6834, 6845, 6842, 6866, 6830, 6822, 6848, 870, 6853, 6840, 6852, 6833, 6862, 6831, 6849, 6827, 6839, 6828 ], "content": "Table~\\ref{table_application} shows the applications of existing ENAS algorithms, which contains a wide range of real-world applications. 
\n\\begin{table}[]\n\t\\renewcommand\\arraystretch{0.8}\n\t\\caption{Applications of Existing ENAS algorithms.} \n\t\\label{table_application}\n\t\\begin{tabular}{|p{1cm}|p{3.5cm}|p{3cm}|}\n\t\t\\hline\n\t\tCategory & Applications & References \\\\ \\hline\n\t\t(1) & Image classification & ~ \\\\ \\hline\n\t\t(1) & Image to image & ~ \\\\ \\hline\n\t\t(1) & Emotion recognition & ~ \\\\ \\hline\n\t\t(1) & Speech recognition & ~ \\\\ \\hline\n\t\t(1) & Language modeling & ~ \\\\ \\hline\n\t\t(1) & Face De-identification & ~ \\\\ \\hline\n\t\t(2) & Medical image segmentation & ~ \\\\ \\hline\n\t\t(2) & Malignant melanoma detection & ~ \\\\ \\hline\n\t\t(2) & Sleep heart study & ~ \\\\ \\hline\n\t\t(2) & Assessment of human sperm & ~ \\\\ \\hline\n\t\t(3) & Wind speed prediction & ~ \\\\ \\hline\n\t\t(3) & Electricity demand time series forecasting & ~ \\\\ \\hline\n\t\t(3) & Traffic flow forecasting & ~ \\\\ \\hline\n\t\t(3) & Electricity price forecasting & ~ \\\\ \\hline\n\t\t(3) & Car park occupancy prediction & ~ \\\\ \\hline\n\t\t(3) & Energy consumption prediction & ~ \\\\ \\hline\n\t\t(3) & Time series data prediction & ~ \\\\ \\hline\n\t\t(3) & Financial prediction & ~ \\\\ \\hline\n\t\t(3) & Usable life prediction & ~ \\\\ \\hline\n\t\t(3) & Municipal waste forecasting & ~ \\\\ \\hline\n\t\t(4) & Engine vibration prediction & ~ \\\\ \\hline\n\t\t(4) & UAV & ~ \\\\ \\hline\n\t\t(4) & Bearing fault diagnosis & ~ \\\\ \\hline\n\t\t(4) & Predicting general aviation flight data & ~ \\\\ \\hline\n\t\t(5) & Crack detection of concrete & ~ \\\\ \\hline\n\t\t(5) & Gamma-ray detection & ~ \\\\ \\hline\n\t\t(5) & Multitask learning & ~ \\\\ \\hline\n\t\t(5) & Identify Galaxies & ~ \\\\ \\hline\n\t\t(5) & Video understanding & ~ \\\\ \\hline\n\t\t(5) & Comics understanding & ~ \\\\ \\hline\n\t\\end{tabular}\n\\vspace{-0.5cm}\n\\end{table}\nGenerally, these applications can be grouped into the following five different categories:\n(1) Image and signal processing, 
including image classification which is the most popular and competitive field, image to image processing (including image restoration, image denoising, super-resolution and image inpainting), emotion recognition, speech recognition, language modelling, and face de-identification.\n(2) Biological and biomedical tasks, including medical image segmentation, malignant melanoma detection, sleep heart study, and assessment of human sperm.\n(3) Predictions and forecasting about all sorts of data, including the prediction of wind speed, car park occupancy, time-series data, financial and usable life, the forecasting of electricity demand time series, traffic flow, electricity price, and municipal waste.\n(4) Engineering, including engine vibration prediction, Unmanned Aerial Vehicle (UAV), bearing fault diagnosis and predicting general aviation flight data.\n(5) Others, including crack detection of concrete, gamma-ray detection, multitask learning, identify galaxies, video understanding, and comics understanding.", "id": "06bc3345-4a23-45a7-a0e1-7338ed3b2e6b", "level": "subsection", "origin_cites_number": 128, "parent_id": "390da3df-188a-48a9-991a-966d654bdd4d", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Applications" ], [ "subsection", "Overview" ] ], "subsections": [], "title": "Overview" }, { "cite_extract_rate": 0.538461538461538, "cites": [ 6869, 6851, 6877, 6821, 870, 6837, 8247, 6847, 6863, 6878, 6844, 6834, 6845, 6829, 6831, 97, 6842, 6827, 8319, 6836, 6826 ], "content": "\\label{sec_CIFAR}\n\\begin{table*}[!htbp]\n\t\\renewcommand\\arraystretch{0.8}\n\t\\caption{The Comparison of the Classification Error Rate on CIFAR-10, CIFAR-100 and ImageNet}\n\t\\label{table_CIFAR}\n\t\\vspace{-0.2cm}\n\t\\begin{tabular}{|p{3.7cm}|p{1.3cm}|p{1.8cm}|p{2.4cm}|p{2.4cm}|p{3cm}|p{0.5cm}|}\n\t\t\\hline\n\t\t\\textbf{ENAS Methods} & \\textbf{GPU Days} & \\textbf{Parameters(M)} & \\textbf{CIFAR-10(\\%)} & 
\\textbf{CIFAR-100(\\%)} & {\\textbf{ImageNet(Top1/Top5 \\%)}} & \\textbf{Year} \\\\ \\hline\n\t\tCGP-DCNN~ & --- & 1.1 & 8.1 & --- & --- & 2018 \\\\ \\hline\n\t\tEPT~ & 2 & --- & 7.5 & --- & --- & 2020 \\\\ \\hline\n\t\t\\multirow{2}{*}{GeNet~} & \\multirow{3}{*}{17} & --- & 7.1 & --- & --- & \\multirow{3}{*}{2017} \\\\ \\cline{3-6}\n\t\t& & --- & --- & 29.03 & --- & \\\\ \\cline{3-6}\n\t\t& & 156 & --- & --- & {27.87 / 9.74} &\n\t\t\\\\ \\hline\n\t\tEANN-Net~ & --- & --- & 7.05 $\\pm$ 0.02 & --- & --- & 2019 \\\\ \\hline\n\t\t\\multirow{2}{*}{DeepMaker~} & \\multirow{2}{*}{3.125} & 1 & 6.9 & --- & --- &\\multirow{2}{*}{2020} \\\\ \\cline{3-6}\n\t\t& & 1.89 & --- & 24.87 & --- & \\\\ \\hline\n\t\t\\multirow{2}{*}{GeneCai (ResNet-50)~ } & \\multirow{2}{*}{0.024} & --- & 6.4 & --- & --- & \\multirow{2}{*}{2020} \\\\ \\cline{3-6}\n\t\t& & --- & --- & --- & {25.7 / 7.9} & \\\\ \\hline\n\t\tCGP-CNN (ConvSet)~ & --- & 1.52 & 6.75 & --- & --- &2017 \\\\ \\hline\n\t\tCGP-CNN (ResSet)~ & --- & 1.68 & 5.98 & --- & --- &2017 \\\\ \\hline\n\t\tMOPSO/D-Net~ & 0.33 & 8.1 & 5.88 & --- & --- &2019 \\\\ \\hline\n\t\tReseNet-50 (20\\% pruned)~ & --- & 6.44 & 5.85 & --- & --- &2018 \\\\ \\hline\n\t\tImmuNeCS~ & 14 & --- & 5.58 & --- & --- &2019 \\\\ \\hline\n\t\t\\multirow{2}{*}{EIGEN~} & 2 & 2.6 & 5.4 & --- & --- & \\multirow{2}{*}{2018} \\\\ \\cline{2-6}\n\t\t& 5 & 11.8 & --- & 21.9 & --- & \\\\ \\hline\n\t\t\\multirow{2}{*}{LargeEvo~} & \\multirow{2}{*}{2750} & 5.4 & 5.4 & --- & --- & \\multirow{2}{*}{2017} \\\\ \\cline{3-6}\n\t\t& & 40.4 & --- & 23 & --- & \\\\ \\hline\n\t\t\\multirow{2}{*}{CGP-CNN (ConvSet)~} & 31 & 1.5 & 5.92 (6.48 $\\pm$ 0.48) & --- & --- & \\multirow{4}{*}{2019} \\\\ \\cline{2-6}\n\t\t& --- & 2.01 & --- & 26.7 (28.1 $\\pm$ 0.83) & --- & \\\\ \\cline{1-6}\n\t\t\\multirow{2}{*}{CGP-CNN (ResSet)~} & 30 & 2.01 & 5.01 (6.10 $\\pm$ 0.89) & --- & --- & \\\\ \\cline{2-6}\n\t\t& --- & 4.6 & --- & 25.1 (26.8 $\\pm$ 1.21) & --- & \\\\ \\hline\n\t\tMOCNN~ & 24 & --- & 
4.49 & --- & --- & 2019 \\\\ \\hline\n\t\t\\multirow{2}{*}{NASH~} & 4 & 88 & 4.4 & --- & --- & \\multirow{2}{*}{2017} \\\\ \\cline{2-6}\n\t\t& 5 & 111.5 & --- & 19.6 & --- & \\\\ \\hline\n\t\tHGAPSO~ & 7+ & --- & 4.37 & --- & --- & 2018 \\\\ \\hline\n\t\t\\multirow{3}{*}{DPP-Net~,~} & \\multirow{3}{*}{2} & 11.39 & 4.36 & --- & --- & \\multirow{3}{*}{2018} \\\\ \\cline{3-6}\n\t\t& & 0.45 & 5.84 & --- & --- & \\\\ \\cline{3-6}\n\t\t& & 4.8 & --- & --- & {26.0 / 8.2} & \\\\ \\hline\n\t\t\\multirow{2}{*}{AE-CNN~} & 27 & 2 & 4.3 & --- & --- &\\multirow{2}{*}{2019} \\\\ \\cline{2-6}\n\t\t& 36 & 5.4 & --- & 20.85 & --- & \\\\ \\hline\n\t\tSI-ENAS~ & 1.8 & --- & 4.07 & 18.64 & --- & 2020 \\\\ \\hline\n\t\tEPSOCNN~ & 4- & 6.77 & 3.69 & --- & --- & 2019 \\\\ \\hline\n\t\t\\multirow{2}{*}{Hierarchical Evolution~ } & \\multirow{2}{*}{300} & --- & 3.63 $\\pm$ 0.10 & --- & --- & \\multirow{2}{*}{2017} \\\\ \\cline{3-6}\n\t\t& & --- & --- & --- & {20.3 / 5.2} & \\\\ \\hline\n\t\t\\multirow{2}{*}{EA-FPNN~} & 0.5 & 5.8 & 3.57 & --- & --- & \\multirow{2}{*}{2018} \\\\ \\cline{2-6}\n\t\t& 1 & 7.2 & --- & 21.74 & --- & \\\\ \\hline\n\t\t\\multirow{2}{*}{AmoebaNet-A~} & \\multirow{2}{*}{3150} & 3.2 & 3.34 $\\pm$ 0.06 & --- & --- & \\multirow{2}{*}{2018} \\\\ \\cline{3-6}\n\t\t&& 86.7 & --- & --- & {17.2 / 3.9} & \\\\ \\hline\n\t\tFirefly-CNN~ & --- & 3.21 & 3.3 & 22.3 & --- & 2019 \\\\ \\hline\n\t\t\\multirow{3}{*}{CNN-GA~} & 35 & 2.9 & 3.22 & --- & --- & \\multirow{3}{*}{2018} \\\\ \\cline{2-6}\n\t\t& 40 & 4.1 & --- & 20.53 & --- & \\\\ \\cline{2-6}\n\t\t& 35 & --- & --- & --- & {25.2 / 7.7} & \\\\ \\hline\n\t\t\\multirow{3}{*}{JASQNet~} & \\multirow{3}{*}{3} & 3.3 & 2.9 & --- & --- & \\multirow{3}{*}{2018} \\\\ \\cline{3-6}\n\t\t& & 1.8 & 2.97 & --- & --- & \\\\ \\cline{3-6} \n\t\t& & 4.9 & --- & --- & {27.2 / ---} & \\\\ \\hline\n\t\t\\multirow{2}{*}{RENASNet~} & \\multirow{2}{*}{6} & 3.5 & 2.88 $\\pm$ 0.02 & --- & --- & \\multirow{2}{*}{2018} \\\\ \\cline{3-6}\n\t\t& & 5.36 & --- & 
--- & {24.3 / 7.4} & \\\\ \\hline\n\t\tNSGA-Net~ & 4 & 3.3 & 2.75 & --- & --- & 2018 \\\\ \\hline\n\t\t\\multirow{4}{*}{CARS~} & \\multirow{4}{*}{0.4} & 2.4 & 3 & --- & --- & \\multirow{4}{*}{2019} \\\\ \\cline{3-6}\n\t\t& & 3.6 & 2.62 & --- & --- & \\\\ \\cline{3-6} \n\t\t& & 3.7 & --- & --- & {27.2 / 9.2} & \\\\ \\cline{3-6}\n\t\t& & 5.1 & --- & --- & {24.8 / 7.5} & \\\\ \\hline\n\t\t\\multirow{3}{*}{LEMONADE~} & \\multirow{3}{*}{80} & 13.1 & 2.58 & --- & --- & \\multirow{3}{*}{2018} \\\\ \\cline{3-6}\n\t\t& & 0.5 & 4.57 & --- & --- & \\\\ \\cline{3-6}\n\t\t& & --- & --- & --- & {28.3 / 9.6} & \\\\ \\hline\n\t\t\\multirow{2}{*}{EENA~} & \\multirow{2}{*}{0.65} & 8.47 & 2.56 & --- & --- & \\multirow{2}{*}{2019} \\\\ \\cline{3-6}\n\t\t& & 8.49 & --- & 17.71 & --- & \\\\ \\hline\n\t\tEEDNAS-NNMM~ & 0.5 & 4.7 & 2.55 & --- & --- & 2019 \\\\ \\hline\n\t\t\\multirow{5}{*}{NSGANet~} & \\multirow{5}{*}{27} & 0.2 & 4.67 & --- & --- & \\multirow{5}{*}{2019} \\\\ \\cline{3-6}\n\t\t& & 4 & 2.02 & --- & --- & \\\\ \\cline{3-6}\n\t\t& & 0.2 & --- & 25.17 & --- & \\\\ \\cline{3-6}\n\t\t& & 4.1 & --- & 14.38 & --- & \\\\ \\cline{3-6}\n\t\t& & 5.0 & --- & --- & {23.8 / 7.0} & \\\\ \\hline\n\t\\end{tabular}\n\\vspace{-0.3cm}\n\\end{table*}\nIn Table~\\ref{table_application}, it is obvious to see that many ENAS methods are applied to the image classification tasks. The benchmark dataset, CIFAR-10 which contains a total of ten classes, and the CIFAR-100 is the advanced dataset including a hundred classes. These two datasets have been widely used in image classification tasks, and the accuracy on these two challenging datasets can represent the ability of the architecture searched. In addition, ImageNet~ is a more challenging benchmark dataset, which contains 1,000 classes and more than one million images. Because CIFAR-10 and CIFAR-100 are relatively small and easy to be over-fitting nowadays~, the results of the ENAS methods on ImageNet are also included. 
We have collected the well-performing ENAS methods tested on these three datasets and show the results in Table~\\ref{table_CIFAR}, where the methods are ranked in ascending order of their best accuracy on CIFAR-10, i.e., in descending order of their error rate, following convention. The data shown under the ``CIFAR-10\" and ``CIFAR-100\" columns denote the error rates of each method on the corresponding datasets. For ImageNet in particular, we report both top-1 and top-5 error rates. Furthermore, ``GPU Days\", which was initially proposed in~, denotes the total search time of each method; it can be calculated by Equation~(\\ref{equ_GPUDays})\n\\begin{equation}\n\\label{equ_GPUDays}\nGPU\\ Days = The\\ number\\ of\\ GPUs \\times t\n\\end{equation}\nwhere $t$ denotes the number of days each method searched for (e.g., a search that occupies four GPUs for seven days costs 28 GPU Days). In Table~\\ref{table_CIFAR}, ``Parameters\" denotes the total number of parameters, which reflects the capability and complexity of the architecture. In addition, the symbol ``---\" in Table~\\ref{table_CIFAR} indicates that no result was publicly reported by the corresponding paper. The year reported in this table is the earliest time the method was made public. Furthermore, there are additional notes provided for Table~\\ref{table_CIFAR}. Firstly, if several results are reported in the literature, such as for CGP-DCNN~ and CARS~, we choose to report one or two of the most representative architectures. Secondly, we name two algorithms without proper names in Table~\\ref{table_CIFAR} (i.e., Firefly-CNN~ and EEDNAS-NNMM~) based on the EC methods they used and the first letter of the title. Thirdly, we report the classification errors of CGP-CNN~ in the format of ``best (mean $\\pm$ std)\". Finally, the symbols ``+\" and ``-\" in the ``GPU Days\" column mean ``more than\" and ``less than\", respectively.\nIn principle, there is no totally fair comparison. 

This is due to the following two reasons: (1) The encoding spaces, including the initial space and the search space, differ from method to method. There are two extreme cases in the initial space: trivial initialization, which starts from the simplest architecture, and rich initialization, which starts from a well-designed architecture (e.g., ResNet-50~). Meanwhile, the sizes of the search spaces differ greatly; e.g., Ref~ only takes the kernel size into the search space. (2) The different tricks exploited by the methods, e.g., ``cutout\", can also make the comparisons unfair. ``Cutout\" refers to a regularization method~ used in the training of CNNs, which can improve the final performance appreciably.\nTable~\\ref{table_CIFAR} shows the progress of ENAS for image classification according to the accuracy on CIFAR-10: the LargeEvo algorithm~ (5.4\\%, 2017), LEMONADE~ (2.58\\%, 2018), and NSGANet~ (2.02\\%, 2019). Many ENAS methods achieve a lower error rate on CIFAR-10 than ResNet-110~ (6.43\\%), a manually well-designed architecture. Therefore, the architectures found by ENAS can reach the same level as, or exceed, the architectures designed by experts. This shows that ENAS is reliable and can be used in other application fields.", "id": "ce4d03d7-51ce-4271-828b-0f60a203e534", "level": "subsection", "origin_cites_number": 39, "parent_id": "390da3df-188a-48a9-991a-966d654bdd4d", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Applications" ], [ "subsection", "Comparisons on CIFAR-10, CIFAR-100 and ImageNet" ] ], "subsections": [], "title": "Comparisons on CIFAR-10, CIFAR-100 and ImageNet" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec_challenges_and_issues}\nDespite the positive results of the existing ENAS methods, there are still some challenges and issues that need to be addressed. 

In the following, we will briefly discuss them.", "id": "e3dda7bf-8a69-4f93-8f3f-07b17269af8c", "level": "section", "origin_cites_number": 0, "parent_id": "40bf4670-36c8-4014-b074-8f86f999eff7", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Challenges and Issues" ] ], "subsections": [ "d6c13ce5-ab8c-4d29-a77d-718dbcbb0cf9", "8f512a41-ca51-4724-a511-f83f8ff395ee", "0511a9dd-9340-4ba1-ba4f-5c1c82725c9e", "97b2a2f7-bddb-45d3-819a-f3dcbad41cba", "ed6f44db-5133-4077-be4f-e9675dc00180", "14a9080c-0216-4f7c-899b-00f8965feb35" ], "title": "Challenges and Issues" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 8955, 6831, 8247, 6880, 870, 6854, 6825, 6879, 6844, 6830 ], "content": "The effectiveness of ENAS has been questioned by many researchers. Wistuba \\textit{et al.}~ noticed that random search can obtain a well-performing architecture and has proven to be an extremely strong baseline. Yu \\textit{et al.}~ showed that state-of-the-art NAS algorithms performed similarly to a random policy on average. Liashchynskyi \\textit{et al.}~ compared grid search, random search, and GA for NAS, showing that the architectures obtained by GA and by random search have similar performance. There is no need to use complicated algorithms to guide the search process if random search can outperform EC-based NAS.\nHowever, the evolutionary operator in~ only contains a recombination operator, which limits the performance of ENAS algorithms. Although random search can find a well-performing architecture in these experiments, it cannot guarantee that it will find a good architecture every time. Moreover, recent research~ has also shown that evolutionary search is more effective than random search. Furthermore, the experiments on AmoebaNet~ and NAS-Bench-201~ also showed that ENAS can search for and find better architectures. 

Thus, there is an urgent need to design elaborate experiments to reveal which components of ENAS are responsible for the effectiveness of the algorithm, especially in a large encoding space.\nIn Section~\\ref{sec_Population_operators}, two types of operators have been introduced. We note that some methods, like the LargeEvo algorithm~, only use a single-individual-based operator (mutation) to generate offspring. They did not involve the crossover operator in their method for two reasons: the first is simplicity~, and the second is that simply combining a section of one individual with a section of another individual seems ``ill-suited\" to the neural network paradigm~. In addition, in~, the authors believed that there was no indication that a recombination operation applied to two individuals with high fitness would result in an offspring with similar or better fitness.\nHowever, the supplemental materials in~ demonstrated the effectiveness of the crossover operator in this method: it can find a good architecture with the help of the crossover operation, whereas when crossover is not used, the architecture found is not promising unless the search runs for a long time. In fact, the mutation operator lets an individual explore its neighbouring region; it is a gradual, incremental search process that proceeds step by step. Crossover (recombination) can generate offspring dramatically different from the parents, which is more like a stride, so this operator has the ability to efficiently find a promising architecture. Chu \\textit{et al.}~ held that while crossover mainly contributes to exploitation, mutation usually aims to introduce exploration. These two operators play different roles in the evolutionary process, but there is no sufficient explanation of how the crossover operator works. Additional experiments may need to be done on the methods without the crossover operator. 

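To make the different roles of the two operators concrete, the following minimal sketch contrasts a single-gene mutation with one-point crossover. The fixed-length, layer-type encoding used here is hypothetical and not taken from any of the surveyed methods:

```python
import random

# Hypothetical gene alphabet: each gene names one layer type.
LAYER_TYPES = ["conv3x3", "conv5x5", "maxpool", "skip"]

def mutate(arch, rng):
    """Single-individual operator: tweak one gene (a local, step-by-step move)."""
    child = list(arch)                     # leave the parent untouched
    i = rng.randrange(len(child))
    child[i] = rng.choice(LAYER_TYPES)
    return child

def crossover(parent_a, parent_b, rng):
    """Multi-individual operator: one-point recombination (a larger 'stride')."""
    point = rng.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

rng = random.Random(0)
a = ["conv3x3"] * 6
b = ["maxpool"] * 6
child = crossover(a, b, rng)   # mixes large sections of both parents
mutant = mutate(a, rng)        # differs from the parent in at most one gene
```

As the sketch suggests, a mutant stays within one step of its parent, while a crossover child can combine large, dissimilar sections of two parents in a single application.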
\nThe EC approach generally performs well when dealing with practical problems (e.g., the vehicle routing problem). This is mainly because, in most cases, EC approaches are designed by collectively considering domain knowledge~. Similarly, another key to ensuring the effectiveness of an ENAS algorithm is that the EC approach can effectively consider (and embed) some domain knowledge of the neural architecture. Since the original intention of ENAS is to design neural architectures automatically without manual experience, the future development of ENAS will need to continue reducing reliance on artificial experience, including the prior knowledge built into the encoding space. In this case, how to effectively embed domain knowledge into ENAS is a big challenge.", "id": "d6c13ce5-ab8c-4d29-a77d-718dbcbb0cf9", "level": "subsection", "origin_cites_number": 14, "parent_id": "e3dda7bf-8a69-4f93-8f3f-07b17269af8c", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Challenges and Issues" ], [ "subsection", "The Effectiveness" ] ], "subsections": [], "title": "The Effectiveness" }, { "cite_extract_rate": 0.555555555555555, "cites": [ 6829, 6851, 6881, 6866, 870 ], "content": "The scale of the datasets used in most ENAS methods is relatively large. Taking the image classification task as an example, the MNIST dataset~ is one of the earliest datasets; in total, it contains $70,000$ $28\\times28$ grayscale images. Later, in 2009, came CIFAR-10 and CIFAR-100~, medium-scale datasets each containing $60,000$ $32\\times32$ color images. One of the most well-known large-scale datasets is ImageNet~, which provides more than 14 million manually annotated high-resolution images. Although CIFAR-10 and CIFAR-100 are commonly used in ENAS, fewer methods choose to verify their performance on ImageNet~. This can be explained by the data shown in Table~\\ref{table_CIFAR}, where the GPU Days are usually tens or even thousands. 

The cost becomes unaffordable for most researchers when moving from a medium-scale dataset to a larger one.\nHowever, Chen \\textit{et al.}~ believed that results on larger datasets like ImageNet are more convincing because CIFAR-10 is prone to over-fitting. A popular way to deal with this is to use a proxy on CIFAR-10 and transfer to ImageNet~. An alternative approach is to use a down-scaled version of a large-scale dataset, such as ImageNet$64\\times64$~.", "id": "8f512a41-ca51-4724-a511-f83f8ff395ee", "level": "subsection", "origin_cites_number": 9, "parent_id": "e3dda7bf-8a69-4f93-8f3f-07b17269af8c", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Challenges and Issues" ], [ "subsection", "Scalability" ] ], "subsections": [], "title": "Scalability" }, { "cite_extract_rate": 0.5, "cites": [ 8247 ], "content": "Section~\\ref{sec_evaluation} has introduced the most popular and effective ways to reduce the fitness evaluation time and the computational cost. In a nutshell, the question is how to strike a balance between the time spent and the accuracy of the evaluation. Because fully training every architecture takes an unbearable amount of time, without sufficient computing resources we must trade a small, controlled loss in evaluation accuracy for a significant reduction in evaluation time.\nA lot of ENAS methods have adopted various means to shorten the evaluation time, and Sun \\textit{et al.}~ even specifically proposed a method for acceleration, but the research direction of search acceleration is just getting started. The current approaches have many limitations that need to be addressed. For example, although the LargeEvo algorithm~ used weight inheritance to shorten the evaluation time and reduce the computational cost, it still ran for several days using large amounts of computational resources that cannot be easily accessed by many researchers. 

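The weight inheritance idea mentioned above can be sketched as follows. This is a hypothetical, simplified illustration (the flat per-layer weight lists, layer names, and initializer are made up), not the actual LargeEvo implementation:

```python
import random

def fresh_init(size, rng):
    """Randomly initialize a flat weight vector for a layer of a given size."""
    return [rng.gauss(0.0, 0.01) for _ in range(size)]

def inherit_weights(offspring_shapes, parent_weights, rng):
    """Copy the parent's trained weights wherever the layer name and size
    still match after mutation; initialize the rest from scratch."""
    weights = {}
    for name, size in offspring_shapes.items():
        parent_layer = parent_weights.get(name)
        if parent_layer is not None and len(parent_layer) == size:
            weights[name] = parent_layer            # inherited: keeps training progress
        else:
            weights[name] = fresh_init(size, rng)   # new or resized layer starts over
    return weights

rng = random.Random(0)
parent = {"conv1": [0.5] * 9, "fc": [0.2] * 8}
# The offspring widened its fc layer from 8 to 16 weights; conv1 is untouched.
offspring = inherit_weights({"conv1": 9, "fc": 16}, parent, rng)
```

The point of the technique is that only the changed layers must be retrained from scratch, so each offspring needs far fewer epochs to reach a meaningful fitness estimate.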
Furthermore, there are no baselines or common assessment criteria for search acceleration methods. It is a major challenge to propose a novel method that evaluates architectures accurately and quickly.", "id": "0511a9dd-9340-4ba1-ba4f-5c1c82725c9e", "level": "subsection", "origin_cites_number": 2, "parent_id": "e3dda7bf-8a69-4f93-8f3f-07b17269af8c", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Challenges and Issues" ], [ "subsection", "Efficient Evaluation Method and Reduce Computational Cost" ] ], "subsections": [], "title": "Efficient Evaluation Method and Reduce Computational Cost" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 499, 6865 ], "content": "CNNs are known as black-box-like solutions, which are hard to interpret~. Although some work has been done to visualize the process of feature extraction~, the results remain uninterpretable due to the large number of learned features~. The low interpretability of manually designed architectures becomes a big obstacle to the development of neural networks. To overcome this obstacle, some studies~ used GP to automatically design the neural network. Being well-known for its potential interpretability, GP aims at solving problems by automatically evolving computer programs~.\nAll the above studies~ provided further analysis to demonstrate interpretability. Specifically, Evans \\textit{et al.}~ presented a visualization on the JAFFE dataset~ to expound how the evolved convolution filter served as a form of edge detector, and how the large presence of white color in the convolution output can help the classification. In their subsequent work~, they visualized the automatically evolved model on the Hands dataset~, where the aggregation function extracts the minimum value of a specific area in the hand image to determine whether the hand is open or closed. 

Furthermore, Bi \\textit{et al.}~ displayed the features described by the evolved functions like convolution, max-pooling and addition, and the generated salient features are discriminative for face classification.\nDespite the interpretability the existing work made, all these GP-based ENAS methods only aim at shallow NNs and the number of the generated features is relatively small. However, all the most successful NNs have a deep architecture. This is due partly to the fact that very few GP based methods can be run on GPUs. It is necessary to use deep-GP to evolve a deeper GP tree and make a further analysis on the deeper architecture in the future.", "id": "97b2a2f7-bddb-45d3-819a-f3dcbad41cba", "level": "subsection", "origin_cites_number": 7, "parent_id": "e3dda7bf-8a69-4f93-8f3f-07b17269af8c", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Challenges and Issues" ], [ "subsection", "Interpretability" ] ], "subsections": [], "title": "Interpretability" }, { "cite_extract_rate": 0.8, "cites": [ 6882, 39, 6876, 9081 ], "content": "Table~\\ref{table_application} shows various applications which have been explored by ENAS. But these are just a small part of all areas of neural network applications. ENAS can be applied wherever neural networks can be applied, and automate the process of architecture designed which should have been done by experts. Moreover, plenty of the image classification successes of ENAS have proven that ENAS has the ability to replace experts in many areas. The automated architecture design is a trend.\nHowever, this process is not completely automated. The encoding space (search space) still needs to be designed by experts for different applications. For example, for the image processing tasks, CNNs are more suitable, so the encoding space contains the layers including convolution layers, pooling layers and fully connected layers. 
For the time-series data processing, RNNs are more suitable, so the encoding space may contain the cells including $\\Delta$-RNN cell, LSTM~, Gated Recurrent Unit (GRU)~, Minimally-Gated Unit (MGU)~, and Update-Gated RNN (UGRNN)~. The two manually determined encoding spaces already contain a great deal of artificial experience and the components without guaranteed performance are excluded. The problem is: can a method search the corresponding type of neural network for multiple tasks in a large encoding space, including all the popular widely used components? Instead of searching one multitask network~ which learns several tasks at once with the same neural network, the aim is to find appropriate networks for different tasks in one large encoding space.", "id": "ed6f44db-5133-4077-be4f-e9675dc00180", "level": "subsection", "origin_cites_number": 5, "parent_id": "e3dda7bf-8a69-4f93-8f3f-07b17269af8c", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Challenges and Issues" ], [ "subsection", "Future Applications" ] ], "subsections": [], "title": "Future Applications" }, { "cite_extract_rate": 0.875, "cites": [ 871, 6524, 6881, 8319, 6854, 870, 9080 ], "content": "Section~\\ref{sec_CIFAR} gives a brief introduction of the unfair comparisons. The unfairness mainly comes from two aspects: (1) the tricks including cutout~, ScheduledDropPath~, etc. (2) The different encoding spaces. For aspect (1), some ENAS methods~ have reported the results with and without the tricks. For aspect (2), the well-designed search space is widely used in different ENAS methods. For instance, the NASNet search space~ is also used in~ because it is well-constructed so that even random search can perform well. The comparison under the same condition can tell the effectiveness of different search methods.\nFortunately, the first public benchmark dataset for NAS, the NAS-Bench-101~ has been proposed. 
The dataset contains 432K unique convolutional architectures based on a cell-based encoding space. For each architecture, the corresponding metrics, including test accuracy, training time, etc., can be queried directly from the dataset without large-scale computation. NAS-Bench-201~ was proposed recently and is based on another cell-based encoding space, which places no limits on edges. Compared with NAS-Bench-101, which was only tested on CIFAR-10, this dataset collects the test accuracy on three different image classification datasets (CIFAR-10, CIFAR-100, ImageNet-16-120~). But the encoding space is relatively small and only contains 15.6K architectures. Experiments with different ENAS methods on these benchmark datasets allow a fair comparison without taking too much time. However, these datasets are only based on cell-based encoding spaces and cannot cover the full search spaces of existing methods, because the other basic units (layers and blocks) are built using more hyper-parameters, which may lead to a larger encoding space.\nIn the future, a common platform for making fair comparisons needs to be built. This platform should provide several benchmark encoding spaces, such as the NASNet search space, NAS-Bench-101 and NAS-Bench-201, so that all ENAS methods can be tested directly on the platform. Furthermore, this platform also needs to solve the problem that different kinds of GPUs have different computing power, which may make GPU Days figures inaccurate when they are based on different standards. 

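One conceivable way to put GPU Days from different hardware on a common footing is to rescale the raw figure by each GPU model's measured throughput relative to a chosen reference GPU. The sketch below is purely illustrative: the model names and speed ratios are invented, and in practice the ratios would have to be measured on the actual search workload:

```python
# Hypothetical throughput figures, relative to a chosen reference GPU.
RELATIVE_SPEED = {"reference": 1.0, "fast_gpu": 2.0, "slow_gpu": 0.5}

def normalized_gpu_days(num_gpus, days, gpu_model):
    """Convert raw GPU Days (num_gpus * days) into reference-GPU days."""
    return num_gpus * days * RELATIVE_SPEED[gpu_model]

# 8 slow GPUs for 4 days amounts to the same normalized budget
# as 2 fast GPUs for 4 days: 16 reference-GPU days in both cases.
cost_slow = normalized_gpu_days(8, 4, "slow_gpu")
cost_fast = normalized_gpu_days(2, 4, "fast_gpu")
```

Under such a scheme, two searches reporting very different raw GPU Days could still be recognized as having spent comparable compute.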
The GPU Days cannot be compared directly until they have a common baseline of computing power.", "id": "14a9080c-0216-4f7c-899b-00f8965feb35", "level": "subsection", "origin_cites_number": 8, "parent_id": "e3dda7bf-8a69-4f93-8f3f-07b17269af8c", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Challenges and Issues" ], [ "subsection", "Fair Comparisons" ] ], "subsections": [], "title": "Fair Comparisons" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec_conclusion}\nThis paper provides a comprehensive survey of ENAS. We introduced ENAS from four aspects: population representation, encoding space, population updating, and fitness evaluation following the unified flow, which can be seen in Fig.~\\ref{fig_flowchart}. The various applications and the performance of the state-of-the-art methods on image classification are also summarized in tables to demonstrate the wide applicability and the promising ability. Challenges and issues are also discussed to identify future research direction in this field.\nTo be specific, firstly, the encoding space is introduced by categories. We divide the encoding space into two parts: initial space and search space, where the former one defines the initial conditions whereas the latter one defines the architectures that can be found in the evolutionary process. Also, different encoding strategies and architecture representations are discussed. Secondly, the process of population updating including the evolutionary operators, the multi-objective search strategy, and the selection strategy are presented. A variety of EC paradigms use respective metaphors to generate new individuals. Based on standard algorithms, many improved methods have been proposed to obtain a stable and reliable search capability. 
Furthermore, we introduce the existing methods for reducing the need for large amounts of time and computing resources, which is a huge obstacle to efficiency.\nAlthough the state-of-the-art methods have achieved some success, ENAS still faces challenges and issues. The first important issue is whether the EC-based search strategy has advantages: if the result is at the same level as the baseline (e.g., random search), it is unnecessary to design complex evolutionary operators. A carefully designed experiment is urgently needed to verify the effectiveness, especially in a large encoding space. Secondly, the crossover operator is a multi-individual-based operator, and there is no sufficient explanation of how the crossover operator works well in ENAS. Besides, ENAS is just beginning a new era, so there is a lot of uncharted territory to be explored.\nMoreover, a unified standard or platform is needed to make fair comparisons.\n\\bibliographystyle{IEEEtran}\n\\bibliography{IEEEabrv,mybibfile}\n\\end{document}", "id": "d338e930-f930-4ee2-9cea-b0e47084ac48", "level": "section", "origin_cites_number": 0, "parent_id": "40bf4670-36c8-4014-b074-8f86f999eff7", "prefix_titles": [ [ "title", "A Survey on Evolutionary Neural Architecture Search" ], [ "section", "Conclusions" ] ], "subsections": [], "title": "Conclusions" } ]
79
[ 96, 1110, 166, 6822, 514, 8441, 6821, 6823, 8247, 97, 6824, 7, 684, 872, 6820, 6825, 9078, 6826, 870, 6827, 6851, 6838, 6837, 6844, 6841, 8955, 6829, 6832, 9079, 6850, 6836, 6835, 6843, 6846, 6847, 9080, 6834, 6845, 6842, 6830, 6848, 6853, 6840, 6833, 6852, 6831, 6849, 6839, 6828, 6524, 305, 871, 544, 6854, 6861, 6863, 6856, 6857, 6858, 6860, 6855, 6864, 6865, 6859, 6862, 6869, 6868, 6867, 6866, 8302, 6871, 6872, 6870, 1461, 6876, 6875, 6874, 6873, 6877, 6878, 8319, 6880, 6879, 6881, 499, 6882, 39, 9081 ]
1.154587
[ "Zhifeng Jiang", "Wei Wang", "Bo Li", "Qiang Yang" ]
Towards Efficient Synchronous\\Federated Training: A Survey on\\System Optimization Strategies
2021
2021-09-09T02:31:29Z
cs.DC
The increasing demand for privacy-preserving collaborative learning has given rise to a new computing paradigm called federated learning (FL), in which clients collaboratively train a machine learning (ML) model without revealing their private training data. Given an acceptable level of privacy guarantee, the goal of FL is to minimize the \emph{time-to-accuracy} of model training. Compared with distributed ML in data centers, there are four distinct challenges to achieving short time-to-accuracy in FL training, namely the lack of information for optimization, the tradeoff between statistical and system utility, client heterogeneity, and large configuration space. In this paper, we survey recent works in addressing these challenges and present them following a typical training workflow through three phases: client selection, configuration, and reporting. We also review system works including measurement studies and benchmarking tools that aim to support FL developers.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "419df863-69f4-4c35-93d0-018d167ed4ec", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ] ], "subsections": [ "428f0e83-f11e-4df6-8b69-0c0beb42e5b5", "498f24aa-02f6-4623-986c-cb8907d7dc9c", "a91a3aeb-ed4e-41b8-921a-e2c112807c50", "52489191-afca-4640-b251-831b75c284de", "24e6f906-e64a-4296-a2e5-eb81d7da883e" ], "title": "root" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 585, 589, 584, 581, 587, 580, 583, 586, 7262, 582, 7261, 588 ], "content": "\\label{chap:introduction}}\n\\IEEEPARstart{B}{uilding} high-quality machine learning (ML) models demands a massive amount of training data. Yet, the communication cost and privacy concerns impinge on the process of collecting large volumes of data from diverse sources. It is not until recently that governments started to regulate the commercial use of data with privacy-preserving legislation (e.g., GDPR~, HIPAA~, and CCPA~). Compliance violations can be costly, with hefty fines up to hundreds of millions of dollars a year~. As such, the desire for multiple entities (e.g., mobile devices or large organizations) to collaboratively train a shared model efficiently and privately gives birth to a new ML paradigm called federated learning (FL)~. FL promises not to expose the clients' raw data, and has been widely adopted in many industries with applications ranging from mobile devices~ to financial management~ and medical care~.\nApart from providing strong privacy guarantees, the key to the success of a federated training system lies in its efficiency. A typical efficiency metric is the \\emph{time-to-accuracy}, which is the wall clock time taken to train a model until it reaches the target accuracy. 
Despite the rich body of work that explored various optimization strategies, there is still plenty of room for further improvement due to the following distinct challenges posed to FL (\\cref{chap:background}): (1) the \\textit{lack of information for optimization}: the information needed for optimally configuring the system is usually outdated or simply unavailable due to privacy constraints and scaling issues; (2) the \\textit{tradeoff between statistical and system utility}: statistical utility (the number of iterations taken to reach a plausible target accuracy) and system utility (the duration of an iteration), the two determining factors for time-to-accuracy, are usually at odds in FL; (3) \\textit{client heterogeneity}: clients cannot be treated uniformly due to the intrinsic differences in terms of resources, data, and states; and (4) a \\textit{large configuration space}: the operational dimensions for system developers are too many to explore within an acceptable time. Given these challenges, it is worth summarizing existing research efforts to give researchers a holistic view of the lessons learned and to solicit further explorations.\nTo position existing research attempts in optimizing the time-to-accuracy\nperformance in FL, we propose a layered approach that categorizes them by the\ntraining phases in which they take effect: selection, configuration, and\nreporting (\\cref{chap:prior}). For the \\textit{selection phase} where the\nserver chooses clients for participation, there are mainly two lines of\noptimization efforts: (1) prioritizing clients either with\nhigh statistical utility or system utility~\\cite\n{zhang2021client, nishio2019client, wang2020optimizing}, and (2) explicitly considering\nboth utilities and developing a more informed solution in response to client\ndynamics in practice~. 
\nAs for the \\textit{configuration phase} where the server sends the global\nmodel to the selected clients with auxiliary configuration information, and\nclients perform local training, we sort out four lines of work: the first\ntwo advocate mitigating the communication cost by reducing the model\nsize~ and\ndecreasing the synchronization frequency~\\cite\n{hsieh2017gaia, kamp2018efficient, luping2019cmfl, chen2019communication,\nchen2021communication}; the last two minimize the computational\noverhead by accelerating the training speed in each round~\\cite\n{anh2019efficient, nguyen2020resource, wang2020towards, li2018federated, diao2020heterofl, ren2020accelerating} and reducing the number of training\nrounds~. \nIn terms of the \\textit{reporting phase}, we focus on the aggregation step and\noutline two related optimizations: (1) reducing the aggregation\nlatency by adopting hierarchical methods~\\cite\n{liu2020client, abad2020hierarchical, wu2020accelerating} and developing\nlightweight privacy-preserving methods~\\cite\n{so2021turbo, zhang2020batchcrypt, jiang2021flashe}, and (2)\nimproving the long-term convergence rate with adaptive\noptimizers on the server~. For each of the attempted optimizations, our discussion\nincludes necessary details for readers to understand the motivation,\nmechanisms, and major results. In addition, works such as measurement\nstudies~ and benchmarking tools~\\cite\n{caldas2018leaf, hu2020oarf, lai2021fedscale, fate, he2020fedml,\nbeutel2020flower, plato} are indispensable in system research. We also survey the\nstatus quo as a tutorial on FL practice (\\cref{chap:concerstone}).\nOur work focuses on the system-level efforts made in improving the time-to-accuracy performance for synchronous federated training. We also share some implications derived from the literature and our survey process. It thus differs from existing surveys, which mainly focus on ML algorithms and privacy-preserving algorithms. 
We expect this work to be an initial attempt to bridge the gap of system-oriented surveys in FL literature, as well as soliciting more contributions to related research.", "id": "428f0e83-f11e-4df6-8b69-0c0beb42e5b5", "level": "section", "origin_cites_number": 20, "parent_id": "419df863-69f4-4c35-93d0-018d167ed4ec", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{chap:background}\nIn this section, we give a detailed introduction to the system optimization problem in federated training. We start with a quick primer on the execution workflow of federated training (\\cref{sec:background_federated}), followed by the problem statement and the scope of this survey (\\cref{sec:background_problem}). We next outline two challenges that make the problem difficult: optimality and practicality (\\cref{sec:background_challenge}), which also serves as a summary of criteria for evaluating existing solutions presented thereafter.", "id": "498f24aa-02f6-4623-986c-cb8907d7dc9c", "level": "section", "origin_cites_number": 0, "parent_id": "419df863-69f4-4c35-93d0-018d167ed4ec", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Background, Problem and Challenges" ] ], "subsections": [ "74ace61b-35de-40e1-bc2b-a3118bd28783", "e9b69d5b-e1b1-49ab-9b47-79a64bec7fd4", "a9660a24-bfd4-4549-8a91-101fb1f84585" ], "title": "Background, Problem and Challenges" }, { "cite_extract_rate": 0.636363636363636, "cites": [ 590, 587, 7261, 582, 583, 588, 591 ], "content": "\\label{sec:background_federated}\nFederated learning (FL)~ has recently emerged\nas a new paradigm of collaborative machine learning (ML) that allows multiple\ndistributed clients (e.g., mobile devices or business 
organizations) to\ncollaboratively train or evaluate a model with decentralized data. Compared\nto traditional distributed learning in datacenter environments, FL mainly\ndiffers in orchestration, resource constraints, data distribution, and\nparticipation scale~. At its core, FL keeps private\ndata on-premises, while using a central server to maintain a global model and\niteratively refine it by aggregating each client's local updates. This design\nreduces not only the communication cost but also the privacy risk in\ngathering clients' raw data. Owing to its privacy guarantees, FL has found\nwide applications in various domains. On mobile devices, Google runs FL to\nimprove the user experience for Google Keyboard~\\cite\n{yang2018applied, hard2018federated, ramaswamy2019federated,\nchen2019federated} and Assistant~, while Apple deploys FL to\nevaluate and tune speech recognition models~; in\nfintech, both IBM~ and WeBank~ utilize\nFL to detect financial misconducts; in healthcare, NVIDIA applies FL\nto create medical imaging AI~ and predict patients' needs\nfor oxygen~.\nWhile both model training and evaluation play important roles in the development of an FL model, they have different criteria in system design. In this survey, we limit the scope to the \\textit{training} process, which is the most time-consuming and resource-intensive stage throughout the development of an FL model. Due to its predominance in practice, we focus on the support for the \\textit{synchronous} mode, wherein an ML model is trained across a pool of candidate clients in rounds, and in each round, the server needs to wait until a predefined deadline or receiving a sufficient number of clients' updates prior to deriving an aggregated update. 
In more detail, each round consists of the following three phases (Fig.~\\ref{fig:fl}).\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=1.0\\columnwidth]{standard-fl-protocol}\n \\caption{Standard synchronous federated training protocol~.}\n \\label{fig:fl}\n\\end{figure}\n\\begin{itemize}\n \\item \\textit{Client Selection}. At the beginning of each round, the server waits for a sufficient number of clients with eligible status (i.e., currently charging and connected to an unmetered network) to check in. The server then selects a subset of them based on certain strategies (e.g., randomly or selectively) for participation, and notifies the others to reconnect later.\n \\item \\textit{Configuration}. The server next sends the global model status and configuration profiles (e.g., the number of local epochs or the reporting deadline) to each of the selected clients. Based on the instructed configuration, the clients perform local model training independently with their private data.\n \\item \\textit{Reporting}. The server then waits for the participating clients to report local updates until reaching the predefined deadline. The current round is aborted if not enough clients report in time. Otherwise, the server aggregates the received local updates, uses the aggregate to update the global model status, and concludes the round.\n\\end{itemize}", "id": "74ace61b-35de-40e1-bc2b-a3118bd28783", "level": "subsection", "origin_cites_number": 11, "parent_id": "498f24aa-02f6-4623-986c-cb8907d7dc9c", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Background, Problem and Challenges" ], [ "subsection", "Federated Training" ] ], "subsections": [], "title": "Federated Training" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nThe primary goal of system optimization in federated training is to minimize the end-to-end resource usage of performing a task. 
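To make the three-phase round structure above concrete, the following minimal server-side sketch emulates one synchronous round. The function names, the random sampling policy, and the sample-count-weighted averaging are illustrative simplifications, not the exact protocol of any deployed FL system:

```python
import random

def run_round(global_model, candidates, num_select, local_train, deadline_ok):
    """One synchronous round: selection -> configuration -> reporting."""
    # Selection: pick a subset of checked-in clients (randomly here).
    participants = random.sample(candidates, num_select)

    # Configuration: broadcast the global model; each client trains locally
    # on its private data and returns (weights, number of local samples).
    updates = []
    for client in participants:
        weights, n_samples = local_train(client, dict(global_model))
        # Reporting: keep only updates that arrive before the deadline.
        if deadline_ok(client):
            updates.append((weights, n_samples))

    if not updates:
        return global_model  # round aborted: not enough timely reports

    # Aggregation: sample-count-weighted average (FedAvg-style).
    total = sum(n for _, n in updates)
    return {name: sum(w[name] * n for w, n in updates) / total
            for name in global_model}
```

Here `local_train` stands in for a client's local SGD and `deadline_ok` for the reporting-deadline check; a real system would run clients concurrently and handle drop-outs and reconnects.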
The most common metric is the wall clock time, which is typically measured from the very beginning to a certain desirable checkpoint (e.g., convergence or reaching target accuracy). When a metered network is in use (e.g., when clients are on-demand virtual machines in a public cloud), the overall monetary cost becomes another relevant metric that deserves special attention. When battery-powered devices are involved, the power consumption should also be considered. Because the cost and energy consumption generally grow linearly over time, in this survey, we are particularly interested in reducing the \\textit{time-to-accuracy}, i.e., the wall clock time for achieving a preset accuracy target.\n\\begin{figure*}[t]\n \\centering\n \\begin{subfigure}[b]{0.19\\linewidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{time-to-acc.pdf}\n \\caption{Time to accuracy performance.}\n \\label{fig:toy_time_to_acc}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.19\\linewidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{./round-to-acc.pdf}\n \\caption{Round to accuracy performance.}\n \\label{fig:toy_round_to_acc}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.19\\linewidth }\n \\centering\n \\includegraphics[width=\\columnwidth]{./round-time}\n \\caption{Average round time w/ standard deviation.}\n \\label{fig:toy_round_time}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.19\\linewidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{./dirichlet}\n \\caption{Data distributions of clients in NonIID.}\n \\label{fig:toy_noniid}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.19\\linewidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{./figure/comp_zipf.pdf}\n \\caption{Computational speed of clients in Straggler.}\n \\label{fig:toy_straggler}\n \\end{subfigure}\n \\caption{A case study for illustrating system utility and statistical utility with FL emulation.} \n \\label{fig:toy}\n \\end{figure*}", "id": "e9b69d5b-e1b1-49ab-9b47-79a64bec7fd4", "level": 
"subsection", "origin_cites_number": 0, "parent_id": "498f24aa-02f6-4623-986c-cb8907d7dc9c", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Background, Problem and Challenges" ], [ "subsection", "System Optimization: The Problem\\label{sec:background_problem" ] ], "subsections": [ "172d329c-9179-421a-a850-1b10255f24b6", "1485b3df-12ae-4a59-a7c0-b7477f1b132f" ], "title": "System Optimization: The Problem\\label{sec:background_problem" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 580, 582 ], "content": "}\nIntuitively, the time-to-accuracy performance of federated training is determined by two factors: the number of rounds taken to reach the accuracy and the average duration of training rounds. As in , we regard the former as \\textit{statistical utility} and the latter as \\textit{system utility} throughout this survey.\nTo illustrate, we conduct an FL case study where 16 clients collaboratively train the LeNet5~ model to classify the images from the MNIST~ dataset. To emulate the cross-device environment, each client is run atop an AWS EC2 \\texttt{c5.xlarge} instance (4 vCPUs and 8 GB memory) and the network bandwidth is throttled to 21 Mbps to match the average mobile connection speed as of 2021~. We run FedAvg~ for model aggregation without loss of generality, atop which we enforce clients' privacy by implementing the SecAgg protocol~ (also see~\\cref{sec:prior_configuration_latency}) which makes the server blind to each client's local updates. Essentially, we design three schemes to compare:\n\\begin{itemize}\n \\item \\textit{Baseline}: all clients' data is independent and identically distributed (IID). They also share the same computational speed.\n \\item \\textit{NonIID}: the same as \\textit{Baseline} except that clients' data is non-IID as depicted by Fig.~\\ref{fig:toy_noniid}. 
This is achieved by latent Dirichlet allocation (described in~\\cref{sec:cornerstone_benchmarking_datasets}) with the concentration vectors being all 0.1's.\n \\item \\textit{Straggler}: the same as \\textit{Baseline} except that clients' computation speeds follow a Zipf distribution (with $\\alpha=1.2$, i.e., moderately skewed). As shown in Fig.~\\ref{fig:toy_straggler}, the straggler is 5$\\times$ slower than the fastest client, which shares the same speed as clients in \\textit{Baseline}.\n\\end{itemize}\nWe first study the impact of statistical utility by comparing \\textit{Baseline} and \\textit{NonIID}. As indicated in Fig.~\\ref{fig:toy_round_time}, they proceed through each round at the same speed. However, as the learning curves in Fig.~\\ref{fig:toy_time_to_acc} show, \\textit{Baseline} reaches the target accuracy (96.1\\%) within only 8.4 min, while \\textit{NonIID} takes 43.5 min, which is about 5$\\times$ slower than \\textit{Baseline}. Referring to the round-to-accuracy performance as in~Fig.~\\ref{fig:toy_round_to_acc}, one can see that the reason for this contrast lies in the difference in the number of rounds taken to converge: 10 for \\textit{Baseline} versus 40 for \\textit{NonIID}. Hence, despite having the same system utility, \\textit{Baseline} achieves much better time-to-accuracy performance for having greater statistical utility due to more evenly distributed data across clients. Of course, data distribution is uncontrollable in FL practice and developers can barely end up with IID cases as in \\textit{Baseline}. This adds to the challenges of system-level optimizations in FL training, as later discussed in~\\cref{sec:background_challenge_optimality}.
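The Dirichlet-based label partition used by the \textit{NonIID} scheme can be sketched as follows. This is a common emulation recipe; the function name and the pure-Python, Gamma-based Dirichlet sampling are our illustrative choices:

```python
import random

def dirichlet_partition(labels, num_clients, alpha=0.1, seed=0):
    """Partition sample indices across clients with per-class Dirichlet shares.

    Smaller alpha -> more skewed per-client label distributions.
    """
    rng = random.Random(seed)
    per_class = {}
    for i, y in enumerate(labels):
        per_class.setdefault(y, []).append(i)
    client_indices = [[] for _ in range(num_clients)]
    for idx in per_class.values():
        rng.shuffle(idx)
        # A Dirichlet(alpha, ..., alpha) sample via normalized Gamma draws.
        gams = [rng.gammavariate(alpha, 1.0) for _ in range(num_clients)]
        total = sum(gams)
        shares = ([g / total for g in gams] if total > 0
                  else [0.0] * num_clients)
        start = 0
        for cid in range(num_clients):
            if cid == num_clients - 1:
                take = len(idx) - start      # last client gets the remainder
            else:
                take = int(shares[cid] * len(idx))
            client_indices[cid].extend(idx[start:start + take])
            start += take
    return client_indices
```

With `alpha=0.1`, as in the case study, most clients end up dominated by a handful of classes; larger `alpha` approaches an IID split.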
However, it takes \\textit{Straggler} considerably longer (15.8 min, i.e., 1.9$\\times$) to reach the target accuracy, as demonstrated in Fig.~\\ref{fig:toy_time_to_acc}. By observing Fig.~\\ref{fig:toy_round_to_acc} and Fig.~\\ref{fig:toy_round_time}, we know that the source of the difference does not stem from the number of rounds used but from the makespan of each round. As all clients have to proceed at the same speed as the slowest client in synchronous training, \\textit{Straggler} features lower system utility than \\textit{Baseline} does due to the presence of slow clients, rendering worse time-to-accuracy. Similarly, the disparity of clients' capabilities is inevitable in FL practice, which also complicates the design of system-level optimizations as mentioned in~\\cref{sec:background_challenge_optimality}.", "id": "172d329c-9179-421a-a850-1b10255f24b6", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "e9b69d5b-e1b1-49ab-9b47-79a64bec7fd4", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Background, Problem and Challenges" ], [ "subsection", "System Optimization: The Problem\\label{sec:background_problem" ], [ "subsubsection", "Statistical Utility and System Utility: A Case Study~\\label{sec:background_problem_utility" ] ], "subsections": [], "title": "Statistical Utility and System Utility: A Case Study~\\label{sec:background_problem_utility" }, { "cite_extract_rate": 0.625, "cites": [ 593, 595, 596, 600, 598, 599, 594, 592, 7038, 597 ], "content": "}\nWe emphasize that the solutions discussed in this survey operate at the \\textit{system-level}. 
Therefore, the following directions of work are excluded from discussion, though they could effectively improve the statistical and/or system efficiency in synchronous federated training:\n\\begin{itemize}\n \\item Hardware updates, e.g., adapting programmable switches to enjoy the communication efficiency brought by in-network aggregation~.\n \\item Security mechanisms, e.g., employing robust aggregation methods to protect the statistical utility from being impaired by model poisoning attacks~.\n \\item Paradigm innovations, e.g., (1) letting clients fit different sets of model parameters by personalization~ or models of heterogeneous architectures through knowledge distillation~ to tackle data heterogeneity (a concept mentioned in~\\cref{sec:background_challenge_optimality}), (2) allowing clients to exchange data representations for realizing prototype learning~ to reduce communication burdens, or (3) permitting clients to send encoded versions of local datasets to the server to reduce the computational complexity~.\n \\item Optimizations on upstream parts of the FL pipeline, e.g., searching for neural architectures that yield better predictive accuracy~.\n\\end{itemize}", "id": "1485b3df-12ae-4a59-a7c0-b7477f1b132f", "level": "subsubsection", "origin_cites_number": 16, "parent_id": "e9b69d5b-e1b1-49ab-9b47-79a64bec7fd4", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Background, Problem and Challenges" ], [ "subsection", "System Optimization: The Problem\\label{sec:background_problem" ], [ "subsubsection", "What is beyond the Scope~\\label{sec:background_problem_beyond" ] ], "subsections": [], "title": "What is beyond the Scope~\\label{sec:background_problem_beyond" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nDespite the clear objective, it is non-trivial to work out a feasible solution due to the following two challenges.", "id": 
"a9660a24-bfd4-4549-8a91-101fb1f84585", "level": "subsection", "origin_cites_number": 0, "parent_id": "498f24aa-02f6-4623-986c-cb8907d7dc9c", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Background, Problem and Challenges" ], [ "subsection", "What Makes It Hard: The Challenges\\label{sec:background_challenge" ] ], "subsections": [ "e9339976-85db-45bd-8a4a-f9969c5967b9", "b7818ca2-24b7-43f4-a5b2-dddedbcf0e1e" ], "title": "What Makes It Hard: The Challenges\\label{sec:background_challenge" }, { "cite_extract_rate": 0.92, "cites": [ 3469, 618, 617, 608, 591, 605, 602, 603, 606, 613, 616, 612, 580, 607, 610, 614, 604, 583, 611, 615, 601, 4338, 609 ], "content": "}\nFirst, the information needed in decision-making may be \\textit{outdated or even unavailable}. For example, to estimate a client's system utility, it is common to refer to its most recent response latency~. However, due to the dynamics over time, such information may not accurately reflect the client's current status. There also exists a cold-start issue, where we are unaware of a client's system capabilities until its first participation. As for estimating a client's statistical utility, the amount of available information is further limited by privacy concerns. According to the recent FL literature, exploratory attacks such as property inference~, membership inference~, and data reconstruction~ can be made possible with model updates. As such, even exposing model updates can discourage clients from participation, let alone inquiring about their data distributions or even raw data~. Note that the uncertainty in clients' statistical utility and system utility can be accumulated over time.\nEven given a holistic view of the environment, the problem remains hard due to the \\textit{coupled nature} of statistical utility and system utility. 
Intuitively, improving system utility is equivalent to minimizing the average resource consumption (e.g., time or bandwidth) per task unit. On the other hand, reducing the resources invested in a task unit inevitably downgrades the quality of the outcome (e.g., statistical utility) as long as no resource is redundant. To exemplify, by constantly picking the fastest clients in client selection, the average duration of each round indeed decreases, whereas the number of rounds taken to target accuracy may increase when other clients' data are under-represented in the global model. Another example can be found in model compression. To improve communication efficiency, a client can send only an important subset of model updates by sparsification~, or a low-bit representation of them by quantization~. Although the per-round communication duration can be significantly reduced by adopting a higher compression ratio, convergence takes more rounds due to the loss of arithmetic precision.\nThe problem is further complicated by \\textit{client heterogeneity.} Federated training involves tens to potentially millions of clients, which intrinsically differ from one another in the following three aspects: \n\\begin{itemize}\n \\item \\textit{Resource Heterogeneity}: due to the variability in hardware specifications and system-level constraints, clients in federated training typically possess different capabilities in computation (CPU/GPU/NPU, memory, and storage), communication (connectivity and bandwidth) and power (battery level and lifespan)~. These types of heterogeneity complicate the optimization of the overall system utility. 
For example, merely improving the communication speed does not necessarily lead to shorter end-to-end latency, especially when the straggler is bottlenecked by the computation~.\n \\item \\textit{Data Heterogeneity}: as the training datasets of clients are typically generated based on their local activities and contexts, they are not IID. More specifically, clients' datasets mainly differ in two aspects\\footnote{See a more complete categorization of non-IID scenarios in \\cref{sec:cornerstone_benchmarking_datasets}.}: (1) sample quantity (i.e., the number of data samples), and (2) label partition (i.e., the distribution of data labels)~. As a result, not all of them are representative of the population distribution. In case we do not include all the clients in the federation, optimization for statistical utility has to additionally account for such heterogeneity.\n \\item \\textit{State Heterogeneity}: as observed from real-world traces~, the available slots of mobile device clients vary significantly in temporal distribution due to different user behaviors (e.g., screen locking or battery charging). Therefore, in each round, there can be different sets of candidate clients to choose from, as well as different client drop-out outcomes. On top of the non-IID distribution of clients' data, this type of heterogeneity further complicates statistical utility optimization. Nevertheless, in the cross-silo settings, it may be less of a concern due to the stable and dedicated nature of clients' computing power~.\n\\end{itemize}\nLast but not least, it is \\textit{infeasible to search through the entire configuration space} for the global optimum. 
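To make the compression tradeoff mentioned earlier concrete, here is a toy sketch of top-k sparsification and uniform quantization of an update vector. The formats are simplified assumptions of ours; practical systems add error feedback and careful bit packing:

```python
def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude coordinates of an update vector."""
    ranked = sorted(range(len(update)), key=lambda i: abs(update[i]),
                    reverse=True)[:k]
    return {i: update[i] for i in sorted(ranked)}  # index -> value pairs

def quantize(update, bits=8):
    """Uniform quantization of values into 2**bits levels over their range."""
    lo, hi = min(update), max(update)
    scale = (hi - lo) / (2 ** bits - 1) or 1.0     # guard hi == lo
    codes = [round((v - lo) / scale) for v in update]
    return codes, lo, scale                        # send small ints + 2 floats

def dequantize(codes, lo, scale):
    """Server-side reconstruction; error is bounded by scale / 2 per value."""
    return [lo + c * scale for c in codes]
```

Raising the compression ratio (smaller `k`, fewer `bits`) shrinks each upload but loses precision, which is exactly why convergence may then need more rounds.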
On the one hand, the space is prohibitively large, as a federated training task typically spans $10^1$--$10^6$ users and $10^2$--$10^4$ rounds~, wherein each phase of a round (\\cref{sec:background_federated}) has multiple configurable hyperparameters and alternative policies (e.g., client selection choices in the selection phase, or the number of local steps in configuration phase). On the other hand, most of the online decisions are made on the critical path of the task, meaning that the time spent on working out a solution also counts towards the end-to-end runtime performance, the very objective of the optimization. As a result, it is desirable to be guided by efficient and effective heuristic algorithms, especially balancing the exploration and exploitation efforts made in the configuration space.", "id": "e9339976-85db-45bd-8a4a-f9969c5967b9", "level": "subsubsection", "origin_cites_number": 25, "parent_id": "a9660a24-bfd4-4549-8a91-101fb1f84585", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Background, Problem and Challenges" ], [ "subsection", "What Makes It Hard: The Challenges\\label{sec:background_challenge" ], [ "subsubsection", "On the Optimality of a Solution\\label{sec:background_challenge_optimality" ] ], "subsections": [], "title": "On the Optimality of a Solution\\label{sec:background_challenge_optimality" }, { "cite_extract_rate": 1, "cites": [ 580, 591 ], "content": "}\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{skeleton}\n \\caption{Taxonomy of the approaches discussed in \\cref{chap:prior}.}\n \\label{fig:opt}\n\\end{figure*}\nApart from navigating the performance-accuracy-privacy trade-off, the design process of a practical optimization solution should mitigate the accompanying side-effects on other aspects such as the loss of \\textit{robustness} to attacks and failures~. 
For example, to evaluate the statistical utility of a client, the server may require it to report the loss values generated in local training~. However, a malicious or free-rider client may intentionally respond with arbitrary values in the hope of messing with the orchestration or reaping the benefits of the federation without making solid contributions. Since such backdoors are introduced by the optimization solution itself, developers must take responsibility for preventing these security loopholes from being exploited. Other possible concerns that may arise as a result of a system optimization solution include but are not limited to \\textit{fairness} (e.g., whether participant bias is introduced in the solution), \\textit{generality} (e.g., whether the solution applies to diverse tasks), and \\textit{ease of deployment} (e.g., whether the solution can be implemented with moderate engineering efforts). In other words, a mature system optimization solution should not only improve the time-to-accuracy performance by enhancing statistical and system utility but also minimize the adverse impacts on other aspects that federated training also values in practice.", "id": "b7818ca2-24b7-43f4-a5b2-dddedbcf0e1e", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "a9660a24-bfd4-4549-8a91-101fb1f84585", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Background, Problem and Challenges" ], [ "subsection", "What Makes It Hard: The Challenges\\label{sec:background_challenge" ], [ "subsubsection", "On the Practicality of a Solution\\label{sec:background_challenge_practicality" ] ], "subsections": [], "title": "On the Practicality of a Solution\\label{sec:background_challenge_practicality" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nIn the past few years, considerable research efforts have been put into tackling the above challenges for fully 
unleashing the performance potential of FL training. In this section, we organize them by the training phases, i.e., selection (\\cref{sec:prior_selection}), configuration (\\cref{sec:prior_configuration}), and reporting (\\cref{sec:prior_reporting}), as visualized in Fig.~\\ref{fig:opt}.", "id": "a91a3aeb-ed4e-41b8-921a-e2c112807c50", "level": "section", "origin_cites_number": 0, "parent_id": "419df863-69f4-4c35-93d0-018d167ed4ec", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Recent Optimization Approaches\\label{chap:prior" ] ], "subsections": [ "6750018e-2b8c-43be-8977-a671addc47f2", "ded2658f-08bb-4199-8696-66ba4578329c", "d6884228-aa74-48a6-9eb1-ee9a7ae15668" ], "title": "Recent Optimization Approaches\\label{chap:prior" }, { "cite_extract_rate": 1, "cites": [ 590 ], "content": "}\nDue to the (potentially) large population size and the heterogeneity across clients, the effectiveness of the participant selection algorithm plays a critical role in the time-to-accuracy performance in federated training. However, state-of-the-practice systems still rely on randomly picking participants~, which inevitably leads to wasted resources and suboptimal convergence speed. 
In response, there is an array of work to guide the selection, which can be roughly categorized by the target utilities that they improve.", "id": "6750018e-2b8c-43be-8977-a671addc47f2", "level": "subsection", "origin_cites_number": 1, "parent_id": "a91a3aeb-ed4e-41b8-921a-e2c112807c50", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Recent Optimization Approaches\\label{chap:prior" ], [ "subsection", "Optimizing the Selection Phase\\label{sec:prior_selection" ] ], "subsections": [ "2c47120e-34e5-44cd-b940-4eb997711491", "c9c46878-34a7-4fcb-8626-cb64857d9ade" ], "title": "Optimizing the Selection Phase\\label{sec:prior_selection" }, { "cite_extract_rate": 0.5, "cites": [ 619, 620 ], "content": "}\nThis line of work does not consider the interplay between statistical utility and system utility. Instead, these approaches mainly focus on lifting one utility while leaving the other ignored or only loosely controlled.\n\\PHM{Statistics-Oriented.} To approach the convergence rate in centralized settings where the data is IID, CSFedAvg~ advocates that clients with a lower degree of non-IID data should participate more often. To this end, the authors propose weight divergence to capture the non-IID degree of data owned by a client. More precisely, it measures the normalized Euclidean distance between a client's model and the reference model trained by the server with auxiliary IID data. According to the 500-client simulation over CIFAR-10 and Fashion MNIST, CSFedAvg reduces the time-to-accuracy by up to 4.0$\\times$ and 2.7$\\times$, respectively, compared to random selection.\nBesides heuristic methods, some researchers use reinforcement learning (RL) algorithms to learn which clients to select in the presence of data heterogeneity. For example, FAVOR~ seeks to reduce the number of rounds to reach a target accuracy with a deep Q-network (DQN)~. 
To capture each client's statistical characteristics, it takes the low-dimension representations of local models as the RL states. Compared to random selection, FAVOR can reduce the communication rounds by up to 49\\% in three image classification tasks. On the other hand, the training overhead for the RL agent may become an obstacle to FAVOR's real-world applications. Specifically, it is reported to take more than 100 episodes to train an agent suited for a specific learning task, which could be prohibitive as each episode corresponds to an entire FL process, i.e., training a global model from scratch for reaching a target accuracy.\n\\PHM{System-Oriented.} In synchronous training, clients with the lowest system utility bottleneck the speed of a federation round. A straightforward way to bound the time usage is to set a deadline for randomly selected clients to report updates and ignore any update submitted after the deadline. To avoid waste of computing resources, FedCS~ takes a step further by proactively selecting a set of clients whose participation is not likely to miss the deadline according to latency estimation results. As there can be multiple eligible sets, FedCS further favors the solution with the largest scale of participation, which reduces part of the loss in statistical utility. Technically, the whole problem is formalized as a complex combinatorial optimization, and the authors resort to a greedy algorithm for efficient approximation. 
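A deadline-aware greedy selection in the spirit of FedCS can be sketched as follows. The latency model (parallel local training plus sequential uploads) and the greedy rule are simplified assumptions drawn from the description above, not FedCS's exact formulation:

```python
def greedy_select(clients, deadline):
    """Greedily admit clients while the estimated round span fits the deadline.

    `clients` maps client id -> (compute_time, upload_time).  Local training
    runs in parallel across clients, while uploads are assumed sequential,
    so the round span is the slowest local training plus accumulated uploads.
    """
    selected, upload_sum, max_compute = [], 0.0, 0.0
    # Try clients with cheap uploads first to maximize participation.
    for cid, (comp, up) in sorted(clients.items(), key=lambda kv: kv[1][1]):
        span = max(max_compute, comp) + upload_sum + up
        if span <= deadline:
            selected.append(cid)
            upload_sum += up
            max_compute = max(max_compute, comp)
    return selected
```

Under this sketch, a slow client is simply skipped whenever admitting it would push the estimated round span past the deadline, mirroring FedCS's preference for large yet deadline-feasible participant sets.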
As indicated in their 1000-client simulation, FedCS outperforms FedLim (modified FedAvg with per-round deadlines imposed) by up to 1.2$\\times$ and 1.8$\\times$ in the time-to-accuracy when training over the non-IID CIFAR-10 and Fashion MNIST datasets, respectively.", "id": "2c47120e-34e5-44cd-b940-4eb997711491", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "6750018e-2b8c-43be-8977-a671addc47f2", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Recent Optimization Approaches\\label{chap:prior" ], [ "subsection", "Optimizing the Selection Phase\\label{sec:prior_selection" ], [ "subsubsection", "Partial Optimization Attempts\\label{sec:prior_selection_either" ] ], "subsections": [], "title": "Partial Optimization Attempts\\label{sec:prior_selection_either" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 7262, 580, 585 ], "content": "}\nGiven the coupled nature of clients' system utility and statistical utility, it is more desirable to navigate the sweet point of jointly maximizing both of them.\n\\PHM{Coarse-Grained.} TiFL~ first considers increasing the system utility. To that end, it divides clients into different tiers based on the observed runtime performance, and at each round only selects clients from the same tier for mitigating the waste of resources due to idle waiting for stragglers. To reduce the average iteration span, it also limits the number of times a (slow) tier can be selected. On top of that, the statistical utility is respected by prioritizing tiers with lower testing accuracy whenever there is more than one electable tier. Compared with FedCS, TiFL bears some resemblance in limiting the participation of less capable clients, while being more aware of the statistical utility. 
As reported in a 50-client cluster with 5 client tiers, TiFL achieves up to a 3$\\times$ speedup in overall training time and a 6\\% accuracy improvement over random selection.\n\\PHM{Fine-Grained.} Compared to TiFL, Oort~ reconciles the demand for enhancing both system utility and statistical utility at a finer granularity. Specifically, it associates each client with a continuous score and prioritizes those clients with higher scores. The score is meant to be a principled measurement of both the statistical utility (determined by the training loss) and the system utility (estimated from historical response latency). As some components of the score cannot be known in advance until the corresponding client's first participation, or cannot be guaranteed to be stable due to the client's runtime dynamics, the score estimation process is modeled as a Multi-Armed Bandit (MAB) problem and tackled by the Upper Confidence Bound (UCB) algorithm~. Apart from the scoring backbone, Oort also aims to address other practical issues like staleness and robustness. Oort was evaluated on a 1300-client GPU cluster with realistic datasets and simulated client heterogeneity. Compared to random selection, it reduces the training time by up to 14$\\times$ and improves model accuracy by up to 9.8\\%.\nBesides the UCB algorithm, researchers also experiment with more sophisticated RL methods to cherry-pick participants. Notably, AutoFL~ learns to select participants and execution targets for each individual based on the Q-Learning algorithm~. To achieve efficient FL execution, it identifies RL states that are critical to energy efficiency, convergence time, and accuracy. It also defines RL rewards that track the energy consumption of clients and model accuracy. Compared to random selection, AutoFL is reported to improve the average FL energy efficiency by up to 4.3$\\times$, while also exhibiting better training accuracy. 
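To make the score-based ranking concrete, below is a minimal Python sketch in the spirit of Oort's utility definition: a statistical term (derived from training loss) is discounted by a penalty for clients slower than a preferred round duration, plus a UCB-style exploration bonus. All field names, the penalty exponent, and the exact combination rule are illustrative assumptions, not Oort's actual implementation.

```python
import math

def utility_score(client, current_round, pref_duration, alpha=2.0):
    """Toy Oort-flavored client score: statistical utility x system penalty
    plus a UCB-style exploration bonus. All names are illustrative."""
    # Penalize clients whose last observed latency exceeds the preferred
    # round duration; faster clients keep their full statistical utility.
    sys_factor = 1.0
    if client["latency"] > pref_duration:
        sys_factor = (pref_duration / client["latency"]) ** alpha
    # Exploration bonus shrinks as a client gets picked more often.
    bonus = math.sqrt(2 * math.log(current_round) / max(1, client["times_picked"]))
    return client["stat_util"] * sys_factor + bonus

def select_clients(clients, k, current_round, pref_duration):
    """Pick the k highest-scoring clients for the next round."""
    ranked = sorted(clients,
                    key=lambda c: utility_score(c, current_round, pref_duration),
                    reverse=True)
    return ranked[:k]
```

With equal statistical utility, a client far slower than the preferred duration is demoted, which mirrors the intuition of jointly weighing both utilities.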
Similar to FAVOR (mentioned in \\cref{sec:prior_selection_either}), however, training the Q-Learning model to converge needs multiple episodes, each of which corresponds to an entire FL training process. Its practicality could thus be challenged whenever offline training is infeasible or costly.", "id": "c9c46878-34a7-4fcb-8626-cb64857d9ade", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "6750018e-2b8c-43be-8977-a671addc47f2", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Recent Optimization Approaches\\label{chap:prior" ], [ "subsection", "Optimizing the Selection Phase\\label{sec:prior_selection" ], [ "subsubsection", "Co-Optimizing Statistical/System Utility\\label{sec:prior_selection_both" ] ], "subsections": [], "title": "Co-Optimizing Statistical/System Utility\\label{sec:prior_selection_both" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nIn the configuration phase, there are mainly two processes that are responsible for the time-to-accuracy performance. One is downlink (i.e., server-to-client) model transmission and the other is local model training. Thus, both aspects can be investigated for system optimization. As for communication overhead reduction, one can reduce the size of model updates (\\cref{sec:prior_configuration_size}) and decrease the synchronization frequency (\\cref{sec:prior_configuration_freq}). To lower computational overhead, one can shorten the training latency by balancing the workload across clients (\\cref{sec:prior_configuration_latency}), as well as reducing the number of rounds taken to converge by adopting heterogeneity-aware training algorithms (\\cref{sec:prior_configuration_round}). 
As the uplink model submission (i.e., client-to-server) that takes place in the reporting phase shares the same operational space as the downlink one, we combine the discussion on both of them in this section for brevity.", "id": "ded2658f-08bb-4199-8696-66ba4578329c", "level": "subsection", "origin_cites_number": 0, "parent_id": "a91a3aeb-ed4e-41b8-921a-e2c112807c50", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Recent Optimization Approaches\\label{chap:prior" ], [ "subsection", "Optimizing the Configuration Phase\\label{sec:prior_configuration" ] ], "subsections": [ "0b02d7a9-809d-41bd-95d3-97624f811da6", "77437726-b7b4-48cb-ac0d-7210459652b2", "75004bfc-b89a-4773-8188-0896026459ce", "c1657bca-2a2d-4dc4-a8d5-1e81d9a7c3f6" ], "title": "Optimizing the Configuration Phase\\label{sec:prior_configuration" }, { "cite_extract_rate": 0.7307692307692301, "cites": [ 618, 617, 626, 628, 624, 622, 606, 613, 616, 621, 625, 610, 611, 8366, 627, 615, 601, 623, 4338 ], "content": "}\nPrior arts of model update size reduction mainly fall into three camps: quantization, sketching, and sparsification. Ahead of the emergence of FL, the exploration of these directions has already been initiated in the context of traditional distributed learning. While their communication merits are mostly reproducible in FL, they also face new challenges due to the privacy regulations and client heterogeneity, which we will also point out hereafter.\n\\PHM{Quantization.} Quantization converts each scalar in a model update to its low-bit representation which takes up less space. While quantization has already gained its fame in traditional distributed learning and we refer the readers to dedicated surveys like~ for more details, here we only introduce the most representative work. 
As the first quantization work in model training with a rigorous convergence proof, QSGD~ performs unbiased quantization with standard random dithering, a technique borrowed from image processing. Since its birth, related works have been emerging with more aggressive quantization bit-widths and more appealing empirical performance. For example, TernGrad~ advocates using only ternary values (0, $\\pm$1) in the uplink direction, while signSGD~ can use only binary signs ($\\pm$) in both uplink and downlink communication. It is worth mentioning that a popular technique in tackling the precision loss brought by quantization is error feedback, whose basic idea is to accumulate the previous quantization errors and compensate for them in the current round. Leveraging this technique, ECQ-SGD~ performs consistently better than QSGD in terms of both convergence speed and accuracy, while EF-SGD~ also achieves a narrower generalization gap from centralized training compared to signSGD.\nDespite their generality, there are some practical concerns about applying these general quantization strategies to FL due to privacy constraints and client heterogeneity. For example, determining the clipping threshold for quantization needs to exploit the knowledge about its inputs (i.e., local model updates) for reducing the induced error as in dACIQ~. However, an FL client neither possesses a priori knowledge of others' model updates nor should it request these sensitive values. To work out a globally applicable clipping threshold, we may need to share some less sensitive information (e.g., the maximum and minimum values in local updates) across clients for threshold estimation as in BatchCrypt~. 
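The random-dithering idea behind QSGD can be sketched in a few lines of Python. The snippet below quantizes a vector onto $s$ levels per sign with stochastic rounding, so the dequantized value is unbiased in expectation; it deliberately omits QSGD's bucketing and Elias coding, so it illustrates the principle rather than the full scheme.

```python
import math, random

def quantize(v, s, rng):
    """QSGD-style unbiased stochastic quantization onto s levels
    (simplified sketch: no bucketing or Elias coding as in the full scheme)."""
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    levels = []
    for x in v:
        r = abs(x) / norm * s            # real-valued level in [0, s]
        low = math.floor(r)
        # Round up with probability equal to the fractional part: unbiased.
        level = low + (1 if rng.random() < r - low else 0)
        levels.append((1 if x >= 0 else -1) * level)
    return norm, levels                  # small ints + one float to transmit

def dequantize(norm, levels, s):
    return [norm * l / s for l in levels]
```

Each coordinate then travels as a small signed integer plus one shared scalar, instead of a full-precision float.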
Still, whether such a circumvention guarantees robust estimation and immunity to privacy attacks remains an open question.\n\\PHM{Sketching.} Existing quantization approaches assume the input values follow a certain distribution (e.g., uniform or bell-shaped), which may not always be the case in model updates~. To be more general, some researchers introduce sketching methods where some memory-saving data structures are used to approximate the exact distribution of model update values in a single processing pass over the values. For example, SketchML~ utilizes a quantile sketch method to generate a non-uniform mapping from gradient values to low-bit integers. SketchML achieves empirical success such as decreasing the gradient size by around 7$\\times$ and is the first effort to introduce sketching for compressing model updates in ML training. Similar to quantization, sketch algorithms can also make use of error feedback techniques to efficiently amend the errors induced by the approximation, as demonstrated in SketchedSGD~ and FetchSGD~. There is also sketching practice that compresses auxiliary variables apart from model updates, such as sketching clients' momenta and per-coordinate learning rates as in~.\n\\PHM{Sparsification.} While quantization and sketching operate at the precision level in terms of model size reduction, sparsification operates at the coordinate level. Specifically, sparsification allows each client to transmit only a sparse subset of its model updates, while the rest are accumulated and incorporated into future training. Technically, the sparsified gradient is obtained by first performing element-wise multiplication on the original gradient with a certain 0/1 mask and then discarding zero elements. The mask is typically randomly generated as in~, while another commonly used variant is the top $s$\\% scheme where 1 is given to the coordinates that rank top $s$\\% in absolute magnitude and 0 otherwise~. 
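The top $s$\% scheme with local error accumulation can be sketched as follows; the function names and the plain-list representation are illustrative.

```python
def sparsify_top_s(grad, residual, s_percent):
    """Top-s% sparsification with error accumulation (sketch): send only the
    coordinates largest in magnitude; keep the rest in a local residual that
    is merged back into the next round's gradient, so nothing is lost."""
    merged = [g + r for g, r in zip(grad, residual)]
    k = max(1, int(len(merged) * s_percent / 100))
    top = sorted(range(len(merged)), key=lambda i: -abs(merged[i]))[:k]
    sent = {i: merged[i] for i in top}               # sparse index->value map
    new_residual = [0.0 if i in sent else merged[i]
                    for i in range(len(merged))]
    return sent, new_residual
```

Coordinates left out of `sent` survive in the residual, so a small but steadily growing coordinate is eventually transmitted rather than silently dropped.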
The top $s$\\% method is reported to reduce network traffic by up to three orders of magnitude, while still preserving model quality~.\nWhile similar cost savings are transferable to plaintext FL, it is unclear whether sparsification can be further compatible with cryptographic techniques that are widely adopted for privacy enforcement in FL. For example, apart from the uplink model updates, it is also desirable to sparsify the downlink global update for fully releasing the potential for communication improvement. However, implementing the downlink sparsification may not be feasible when the server is not aware of the plaintext values of the aggregated update as a result of the applied Secure Multi-Party Computation (SMPC)~ or Homomorphic Encryption (HE)~ techniques.\nIt is noteworthy that as quantization (or sketching) and sparsification are orthogonal to each other, they can be combined to reap the most benefits in terms of model size reduction~.", "id": "0b02d7a9-809d-41bd-95d3-97624f811da6", "level": "subsubsection", "origin_cites_number": 26, "parent_id": "ded2658f-08bb-4199-8696-66ba4578329c", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Recent Optimization Approaches\\label{chap:prior" ], [ "subsection", "Optimizing the Configuration Phase\\label{sec:prior_configuration" ], [ "subsubsection", "Model Update Size Reduction\\label{sec:prior_configuration_size" ] ], "subsections": [], "title": "Model Update Size Reduction\\label{sec:prior_configuration_size" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 630, 629 ], "content": "}\nAt its core, the reduction in the synchronization frequency is achieved by identifying and precluding redundant synchronization efforts. 
This can be operated at different granularities, ranging from whole clients and layers down to individual parameters in a model update.\n\\PHM{Client-Level.} The importance of an entire model update is usually measured by some numerical features. In the most intuitive form, a model update in Gaia~ is considered significant if its magnitude relative to the current value $\\left|\\frac{Update}{Value}\\right|$ exceeds a specific threshold such as 1\\%. While the magnitude may serve as a good indicator for how data center learning performs, it does not work in FL where determining an appropriate threshold is hard due to clients' heterogeneity. As such, some researchers propose comparing against reference points for more robust measurement of the importance. For example,~ tracks the Euclidean distance between the local model and a reference model, while CMFL~ focuses on the number of coordinates with the same sign in the local model and the most recent global model.\n\\PHM{Layer-Level.} Apart from considering a model update as a whole, another line of work tries to reduce the synchronization frequency on a layer basis. A representative work done in this direction is TWAFL~ where the model aggregation is conducted layer-wise. As observed in deep neural network (DNN) fine-tuning~, shallow layers in a DNN learn general features across different datasets while deep layers learn ad hoc ones. TWAFL hence proposes to update shallow layers more frequently than deep ones as they are more responsible for the overall quality of the global model.\n\\PHM{Parameter-Level.} Some industrial practitioners also consider whether to synchronize for each round at the level of individual parameters. 
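Before moving on to the parameter level, the client-level relevance check used by CMFL can be sketched as below: a local update is uploaded only if enough of its coordinates agree in sign with the latest global update. The 0.8 threshold is an illustrative choice, not CMFL's prescribed value.

```python
def is_relevant_update(local_update, global_update, threshold=0.8):
    """CMFL-flavored relevance check (sketch): upload the local update only
    if a large enough fraction of its coordinates agree in sign with the
    most recent global update. The default threshold is illustrative."""
    agree = sum(1 for l, g in zip(local_update, global_update)
                if (l >= 0) == (g >= 0))
    return agree / len(local_update) >= threshold
```

A client that fails the check simply skips uploading for that round, saving uplink bandwidth on updates unlikely to move the global model.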
Noticing that each parameter usually evolves in a transient-then-stable manner, i.e., it first varies drastically and then settles down around a certain value with slight oscillation, APF~ proposes to stop synchronizing those parameters whose evolution has reached a stationary phase.", "id": "77437726-b7b4-48cb-ac0d-7210459652b2", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "ded2658f-08bb-4199-8696-66ba4578329c", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Recent Optimization Approaches\\label{chap:prior" ], [ "subsection", "Optimizing the Configuration Phase\\label{sec:prior_configuration" ], [ "subsubsection", "Synchronization Frequency Reduction\\label{sec:prior_configuration_freq" ] ], "subsections": [], "title": "Synchronization Frequency Reduction\\label{sec:prior_configuration_freq" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 631, 632 ], "content": "}\nA client's training latency is determined by both its computational workload and resource capabilities. While the latter cannot be altered, the former still leaves room for optimization innovations. We discuss one major line of such efforts.\n\\PHM{Load Balancing.} Given the variations in computing power and data volume, clients may not finish the training process at the same time. To mitigate the resulting straggler effects, some researchers suggest balancing the amount of training data across clients. Specifically, they turn to RL techniques for determining the optimal number of data units used in a communication round for each participant, intending to minimize the time and energy consumption and maximize the volume of involved data. 
While these solutions can reduce the round latency, the number of rounds needed for convergence does not necessarily remain unchanged because they do not account for data heterogeneity and thus the contribution of slow clients with important data could be throttled. To jointly consider heterogeneous system speed and data distribution during load balancing, one study carefully formulates an optimization problem where each client is associated with a weight that diminishes exponentially with the disparity between its number of labels and that of the population. Compared to even data allocation, this work manages to reduce the end-to-end time-to-accuracy.\nLoad balancing can also be achieved by varying the number of optimization steps or the complexity of local models. For example, FedProx~ balances the system load across clients by formulating an inexact learning problem that allows variable steps of local solvers. On the other hand, HeteroFL~ assigns sub-models with different widths of hidden channels to clients so that clients with fewer capabilities can train smaller sub-models. All sub-models share the same model architecture, and thus normal model aggregation is still possible. The authors empirically show that the quality of the global model trained with heterogeneous sub-models is comparable to that trained with full-sized local models.\nApart from the computational load, it is sometimes also beneficial to balance the communication load across clients, especially when the network conditions are complicated as in wireless connections. For example, targeting a mobile edge computing (MEC) scenario where Time Division Multiple Access (TDMA) is implemented, one study optimizes both the data batch size and uplink/downlink frame time slots for each client to achieve the maximum learning efficiency. 
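As a toy illustration of computation-side load balancing (not the formulation of any particular work above), the sketch below assigns per-client batch counts proportional to measured throughput so that, under an idealized cost model, all clients finish a round simultaneously.

```python
def balance_batches(speeds, total_batches):
    """Toy computation-side load balancing: give each client a share of the
    batch budget proportional to its measured throughput (batches/second),
    so every client finishes the round at roughly the same time. Real
    systems must also respect local data availability and heterogeneity."""
    total_speed = sum(speeds)
    alloc = [max(1, round(total_batches * s / total_speed)) for s in speeds]
    # Idealized per-client round latency: batches / throughput.
    latencies = [a / s for a, s in zip(alloc, speeds)]
    return alloc, latencies
```

In this idealized model the straggler effect disappears, which is exactly what the data-balancing line of work aims at before layering on statistical considerations.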
In addition to coping with CPU computing, the authors further extend the optimization problem to the scenario where devices are equipped with GPUs for training.", "id": "75004bfc-b89a-4773-8188-0896026459ce", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "ded2658f-08bb-4199-8696-66ba4578329c", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Recent Optimization Approaches\\label{chap:prior" ], [ "subsection", "Optimizing the Configuration Phase\\label{sec:prior_configuration" ], [ "subsubsection", "Training Latency Reduction\\label{sec:prior_configuration_latency" ] ], "subsections": [], "title": "Training Latency Reduction\\label{sec:prior_configuration_latency" }, { "cite_extract_rate": 0.785714285714285, "cites": [ 634, 638, 635, 640, 636, 639, 641, 633, 582, 632, 637 ], "content": "}\nIn the settings with heterogeneous data, more local computation in a communication round does not necessarily lead to fewer numbers of rounds for reaching satisfying optima~. Thus, adopting heterogeneity-aware techniques such as adaptive optimization and bias reduction can help guarantee the convergence speed in FL practice.\n\\PHM{Optimizer State Synchronization.} It is common practice for first-order moment optimization to apply momentum to dampen oscillations~. However, in the federated settings, if the clients' momenta are separately updated, they may deviate from each other due to non-IID data distributions. Thus, there are researchers proposing optimizer state synchronization frameworks where clients' optimizer states are synchronized by the server periodically. PR-SGD-Momentum~ is aligned with this direction and also gives proof of the linear speedup of convergence w.r.t. the number of workers. 
FedAC~ also applies momentum at clients with periodic synchronization, while it is proven to obtain the same linear speedup property with asymptotically fewer rounds of synchronization. MFL~ is another similar idea with theoretical guarantees, but it focuses on accelerating deterministic gradient descent (DGD) instead of SGD, unlike the previous two studies.\n\\PHM{Client Bias Reduction.} Due to data heterogeneity, clients' model updates can be biased towards the minima of local objectives, known as ``client drift'' in the literature~, which hinders the convergence of the global model. To reduce the variance across clients, a natural idea is to regularize local objective functions for minimizing the drift. For example, assuming bounded dissimilarity between local functions, FedProx~ limits the Euclidean distance between local models and the global one by regularizing the square of the distance. FedDANE~ uses the same regularization term, while it formulates the other part of the objective function following the Distributed Approximate NEwton (DANE) method~ in classic distributed optimization. Despite having a theoretical convergence rate similar to that of FedProx, FedDANE underperforms FedProx in practice where the data heterogeneity is high and the participation rate is low, suggesting a discrepancy between theory and practice which needs further investigation. The objective function in FedDyn~ also considers the combination of a linear term and an L2 regularizer, with a different linear term which is formulated to align the empirical loss surfaces of clients. In theory, FedDyn ensures that the consensus point of model convergence across clients will be consistent with the global stationary solution as long as local models converge, regardless of the degree of data heterogeneity.\nApart from regularizing local objectives, variance in clients' local updates can also be reduced by leveraging the idea of control variates borrowed from the convex optimization literature. 
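In its simplest form, such a correction adjusts the raw local gradient by the difference between a server-side estimate of the global update direction and the client's own historical estimate, cancelling the client-specific drift term on average. The sketch below shows one drift-corrected step in this spirit; the variable names and the step structure are illustrative, not the full protocol of any specific method.

```python
def corrected_local_step(w, grad, c_local, c_global, lr=0.1):
    """One drift-corrected local step (sketch): replace the raw gradient g
    with g - c_local + c_global, so per-client bias cancels on average.
    c_local / c_global are the client's and the server's control variates."""
    return [wi - lr * (g - cl + cg)
            for wi, g, cl, cg in zip(w, grad, c_local, c_global)]
```

If a client's gradient exactly matches its own control variate, the step follows the server's estimated global direction only, i.e., the local drift is fully removed.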
For example, in SCAFFOLD~, each of the clients and the server maintains a control variate, and at each local step, a client de-biases its local updates with two control variates: one of its own and the other broadcast by the server. SCAFFOLD converges provably faster than FedAvg~ without any assumption made on the client selection or data heterogeneity. MIME~ considers a similar idea but makes a different choice on the specific definition of control variates.\nWhile the use of control variates requires persistent client states, there exists another line of work that works for stateless clients: posterior averaging. Instead of approaching FL as optimization, this line of work formulates the problem as a posterior inference one. Compared to traditional federated optimization, posterior inference can benefit from an increased amount of local computation without risking stagnating at inferior optima. FedPA~ instantiates this idea with an efficient algorithm to conduct federated posterior inference with linear computation and communication costs.", "id": "c1657bca-2a2d-4dc4-a8d5-1e81d9a7c3f6", "level": "subsubsection", "origin_cites_number": 14, "parent_id": "ded2658f-08bb-4199-8696-66ba4578329c", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Recent Optimization Approaches\\label{chap:prior" ], [ "subsection", "Optimizing the Configuration Phase\\label{sec:prior_configuration" ], [ "subsubsection", "Training Round Reduction\\label{sec:prior_configuration_round" ] ], "subsections": [], "title": "Training Round Reduction\\label{sec:prior_configuration_round" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nIn this phase, the operation room for system optimization is limited to either model uploading or model aggregation. 
As the former has already been combined in the last section, we hereafter focus on optimizing the aggregation process with two main directions explored in the literature: (1) directly reducing the aggregation latency at each round (\\cref{sec:prior_reporting_round}), and (2) expediting the convergence rate in the long run through conducting adaptive aggregation (\\cref{sec:prior_reporting_end}).", "id": "d6884228-aa74-48a6-9eb1-ee9a7ae15668", "level": "subsection", "origin_cites_number": 0, "parent_id": "a91a3aeb-ed4e-41b8-921a-e2c112807c50", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Recent Optimization Approaches\\label{chap:prior" ], [ "subsection", "Optimizing the Reporting Phase\\label{sec:prior_reporting" ] ], "subsections": [ "542ef1b6-edee-4a8a-bd3d-68e0c8f5c762", "cc07a19d-bfd0-4b7a-9057-e912fa55b00d" ], "title": "Optimizing the Reporting Phase\\label{sec:prior_reporting" }, { "cite_extract_rate": 0.533333333333333, "cites": [ 643, 646, 644, 621, 645, 647, 7039, 642 ], "content": "}\nCompared to local training in the configuration phase, model aggregation involves less intensive computation. However, its latency can still be salient because (1) large-scale participation can put pressure on the communication, and (2) the deployment of security methods can complicate the computation. We hereafter introduce the respective optimization efforts in the literature.\n\\PHM{Hierarchical Aggregation.} The downsides of the traditional two-layer (server-client) system include (1) instability: the network bandwidth may be slow or unpredictable, especially in public networks and/or under geo-distributed settings; (2) risk of scalability: the cloud server may suffer from network congestion when concurrently receiving too many local updates; (3) heterogeneity: the straggler effects could be exacerbated by imbalanced network bandwidth. 
\nTo address these issues, some researchers resort to a hierarchical design of model aggregation by introducing an extra level of edge servers, each of which is typically responsible for a small number of clients with proximity. For instance, in HierFAVG~, after a fixed number of local updates on clients, each edge server aggregates its own clients' models. Subsequently, after another fixed interval of edge aggregation, the cloud server aggregates all the edge servers' models. It is proven that HierFAVG still guarantees convergence, and empirical studies with synthetic FL datasets show that it reduces the time-to-accuracy by up to 2.7$\\times$ in a simulated cloud-edge-client environment. A concurrent work HFL~ also considers a similar design, while it does not attach theoretical analysis on its convergence behaviors. HybridFL~ further extends this primary design with two ideas: (1) quota-triggered edge-level aggregation: edge nodes stop waiting for more local updates once receiving a sufficient number of them; and (2) immediate cloud aggregation: cloud-level aggregation is conducted right after the edge-level one is completed. This decouples each pair of interactions (i.e., cloud-edge and edge-client), thereby further mitigating the impact of client drop-out and stragglers.\nDespite changing the aggregation rules, the hierarchical designs mentioned above still focus on establishing the convergence to a single global model, which does not deviate from the learning paradigm discussed throughout this survey. On the other hand, there also exist other formulations of collaborative learning such as clustered FL where clients are assigned to different groups and aggregation takes place within a group . 
As they aim to build personalized models for each group of clients, they are considered paradigm innovations to standard FL and thus fall out of the scope of this survey (\\cref{sec:background_problem}).\n\\PHM{Lightweight Privacy-Preserving Aggregation.} As mentioned in \\cref{sec:background_challenge_optimality}, uploading model updates in the clear may be vulnerable to exploratory attacks which plague clients' privacy. Therefore, model aggregation is preferably safeguarded by cryptographic techniques, which inevitably induces extra computation and communication overhead. \nOne line of this work leverages secure multiparty computation (SMPC). One of the most impactful works is Google's secure aggregation (SecAgg) protocol, where pseudorandom masks are used for data confidentiality and Shamir's secret sharing scheme~ is used to accommodate client dropout~. It has provable rigorous privacy guarantees under both passive and active adversary models. Given its quadratic communication overhead during secret sharing, SecAgg has inspired many other SMPC protocols with improved performance. For example, SecAgg+~ optimizes SecAgg in both communication and computation by replacing the complete graph with a random sparse one of logarithmic degree. TurboAgg~, on the other hand, attempts to optimize SecAgg through the use of multi-group circular topology, additive secret sharing, and Lagrange coding. Compared to SecAgg+, TurboAgg incurs a slightly higher overhead in the asymptotic sense for both communication and computation, and only focuses on tackling honest-but-curious adversaries. 
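The pairwise-masking core of these protocols can be illustrated with the toy sketch below: every pair of clients derives a common mask from a shared seed, one adds it and the other subtracts it, so all masks cancel when the server sums the masked updates. Real SecAgg derives seeds via key agreement and adds Shamir secret sharing to tolerate dropouts; the seed derivation and modulus here are illustrative only.

```python
import random

def masked_update(client_id, update, n_clients, modulus=1 << 16, seed_base=42):
    """SecAgg-flavored pairwise masking over integers mod 2**16 (toy sketch).
    Each pair of clients derives the same mask from a shared seed; the lower
    id adds it, the higher id subtracts it, so masks cancel in the sum.
    (Real SecAgg uses key agreement plus secret sharing for dropouts.)"""
    masked = list(update)
    for other in range(n_clients):
        if other == client_id:
            continue
        lo, hi = min(client_id, other), max(client_id, other)
        rng = random.Random(seed_base + lo * n_clients + hi)  # shared per pair
        mask = [rng.randrange(modulus) for _ in update]
        sign = 1 if client_id == lo else -1
        masked = [(x + sign * m) % modulus for x, m in zip(masked, mask)]
    return masked

def aggregate(masked_updates, modulus=1 << 16):
    """Server-side sum: pairwise masks cancel, revealing only the total."""
    dim = len(masked_updates[0])
    return [sum(u[i] for u in masked_updates) % modulus for i in range(dim)]
```

The server sees only uniformly masked values per client, yet recovers the exact sum, which is all that model aggregation requires.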
While all of the above schemes rely on Shamir's scheme for sharing clients' secrets, there also exist other secret sharing techniques that offer different trade-offs across performance, privacy, and dropout resilience, e.g., encoding using Maximum Distance Separable matrices as in LightSecAgg~, or Fast Fourier Transform as in FastSecAgg~.\nBesides SMPC, Homomorphic Encryption (HE) is another commonly used privacy-preserving aggregation technique, which, however, comes with prohibitively high message inflation and runtime overhead. In response, BatchCrypt~ implements an end-to-end solution for batching multiple plaintexts into one large plaintext so that HE-related operations can be performed in a data-parallel manner. BatchCrypt is shown to speed up the training by 23$\\times$-93$\\times$ compared to plain Paillier~ (a prevalent HE scheme), but it still leaves the message inflation suboptimal and is incompatible with top s\\% sparsification approaches~. Instead of optimizing traditional HE schemes by batching, FLASHE~ proposes a lightweight HE scheme that is tailored for cross-silo FL. It induces negligible ($\\leq6\\%$) computational overhead and no network communication overhead compared to plaintext FL, owing to its symmetric design. 
Its performance is also optimized when interacting with sparsification techniques.", "id": "542ef1b6-edee-4a8a-bd3d-68e0c8f5c762", "level": "subsubsection", "origin_cites_number": 15, "parent_id": "d6884228-aa74-48a6-9eb1-ee9a7ae15668", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Recent Optimization Approaches\\label{chap:prior" ], [ "subsection", "Optimizing the Reporting Phase\\label{sec:prior_reporting" ], [ "subsubsection", "Aggregation Latency Reduction\\label{sec:prior_reporting_round" ] ], "subsections": [], "title": "Aggregation Latency Reduction\\label{sec:prior_reporting_round" }, { "cite_extract_rate": 0.555555555555555, "cites": [ 635, 8367, 582, 648, 649 ], "content": "}\nIn FedAvg~, the de facto standard aggregation method, local model updates simply get weighted by the corresponding numbers of training samples and then added up. While it guarantees convergence even when dealing with non-convex empirical risk functions in IID data settings~, it is observed to yield unstable convergence behavior or even divergence when it faces models trained with arbitrarily non-IID data. There is thus rising interest in whether the aggregation can be more adaptive w.r.t. the data heterogeneity across clients.\n\\PHM{Server-Side Optimizers.} Other than accelerating convergence with local momentum (\\cref{sec:prior_configuration_round}), there are also exploration efforts on server-side momentum. As there is originally no optimizer at the server in FL, these methods first need to generalize the existing aggregation algorithm. Specifically, at each round, instead of collecting local model weights, the server collects their changes and treats these changes as the ``pseudo-gradient'' for the server, which the server can use to update the global model with adaptive optimizers. 
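The pseudo-gradient trick can be sketched as follows, using plain momentum at the server; the exact update rule varies across papers, and the signs here assume each client reports the change `new_local_model - global_model`.

```python
def server_momentum_update(w_global, client_deltas, weights, velocity,
                           lr=1.0, beta=0.9):
    """Server-side momentum sketch: the weighted mean of clients' model
    changes serves as a pseudo-gradient, to which a plain momentum update
    is applied. Exact formulations differ across papers."""
    total = sum(weights)
    pseudo_grad = [sum(wt * d[i] for d, wt in zip(client_deltas, weights)) / total
                   for i in range(len(w_global))]
    velocity = [beta * v + g for v, g in zip(velocity, pseudo_grad)]
    # Deltas point from the global model toward local optima, so move along them.
    new_global = [w + lr * v for w, v in zip(w_global, velocity)]
    return new_global, velocity
```

Swapping the momentum step for an adaptive rule (AdaGrad-, Adam-, or YOGI-style) on the same pseudo-gradient yields the adaptive server optimizers discussed next.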
FedAvgM~ initiates the empirical studies with the simplest form of momentum applied at the server, while SlowMo~ independently proposes a similar scheme and also attaches the theoretical analysis for its convergence behaviors. A recent work~ makes more sophisticated use of momentum such as adopting AdaGrad~, Adam~ and YOGI~ optimizers (which correspond to FedAdaGrad, FedAdam, and FedYOGI, respectively). It is shown that FedYOGI consistently outperforms FedAvgM in terms of validation performance for both sparse- and dense-gradient FL tasks. Server-side momentum methods feature no need for persistent states or computation burdens at the client end, making them well suited to cross-device scenarios.", "id": "cc07a19d-bfd0-4b7a-9057-e912fa55b00d", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "d6884228-aa74-48a6-9eb1-ee9a7ae15668", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Recent Optimization Approaches\\label{chap:prior" ], [ "subsection", "Optimizing the Reporting Phase\\label{sec:prior_reporting" ], [ "subsubsection", "Adaptive Aggregation\\label{sec:prior_reporting_end" ] ], "subsections": [], "title": "Adaptive Aggregation\\label{sec:prior_reporting_end" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nAside from innovating optimization solutions, there are also researchers contributing with cornerstone works that benefit the community with informative insights from systematical measurement studies (\\cref{sec:cornerstone_measurement}) and grounding benchmarking tools (\\cref{sec:cornerstone_benchmarking}), as visualized in Fig.~\\ref{fig:cornerstone}.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.85\\columnwidth]{skeleton-4}\n \\caption{Taxonomy of the work introduced in \\cref{chap:concerstone}.}\n \\label{fig:cornerstone}\n\\end{figure}", "id": "52489191-afca-4640-b251-831b75c284de", "level": "section", 
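The FedAvg weighted average and the server-side "pseudo-gradient" momentum step described above can be sketched in plain Python (a minimal sketch: the function names, the flat-list representation of model weights, and the single momentum buffer are our illustrative simplifications; adaptive variants such as FedAdam would additionally keep per-coordinate statistics):

```python
def fedavg(client_weights, client_sizes):
    """FedAvg: average local model vectors, weighted by sample counts."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

def server_momentum_step(global_w, client_weights, client_sizes,
                         momentum, lr=1.0, beta=0.9):
    """FedAvgM-style step: the gap between the current global model and the
    aggregated local models acts as a "pseudo-gradient" that the server
    feeds into an SGD-with-momentum update."""
    aggregated = fedavg(client_weights, client_sizes)
    pseudo_grad = [g - a for g, a in zip(global_w, aggregated)]
    momentum = [beta * m + p for m, p in zip(momentum, pseudo_grad)]
    new_global = [g - lr * m for g, m in zip(global_w, momentum)]
    return new_global, momentum
```

With `beta=0` and `lr=1`, the step reduces exactly to plain FedAvg, which is one way to sanity-check an implementation.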
"origin_cites_number": 0, "parent_id": "419df863-69f4-4c35-93d0-018d167ed4ec", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Measurement and Benchmarking Tools\\label{chap:concerstone" ] ], "subsections": [ "b59ba9be-d908-40e7-8000-731ab0a60a79", "271672fe-a947-4489-af72-cdac79330349" ], "title": "Measurement and Benchmarking Tools\\label{chap:concerstone" }, { "cite_extract_rate": 0.75, "cites": [ 601, 583, 616 ], "content": "}\nDue to the complicated interplay between statistical utility and system utility, there are a few measurement studies dedicated to conducting thorough investigations and providing actionable implications for interested researchers.\n\begin{itemize}\n \item \textit{FLASH}\footnote{We name it after the corresponding GitHub repository handle~.}~: FLASH particularly studies the impacts that heterogeneity has on both the statistical utility and system utility. This work contributes to the community mainly with three aspects: (1) it establishes a novel large-scale dataset that reflects the system and state heterogeneity among 136,000 real-world smartphones; (2) it implements and open-sources an FL platform that provides a heterogeneity-aware environment for experiments; (3) it reports comprehensive findings that stem from the authors' extensive measurements atop the platform.\n Among the authors' numerous implications, there are two findings that were fresh at the time the study was written. First, gradient compression methods (e.g., Gradient Dropping~ and SignSGD~) can hardly shorten the convergence time under heterogeneous cross-device settings. Second, advanced aggregation algorithms that overlook some aspects of heterogeneity will be less effective in realistic settings. 
Readers can refer to the paper for more details.\n\\end{itemize}", "id": "b59ba9be-d908-40e7-8000-731ab0a60a79", "level": "subsection", "origin_cites_number": 4, "parent_id": "52489191-afca-4640-b251-831b75c284de", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Measurement and Benchmarking Tools\\label{chap:concerstone" ], [ "subsection", "Measurement-Based Research\\label{sec:cornerstone_measurement" ] ], "subsections": [], "title": "Measurement-Based Research\\label{sec:cornerstone_measurement" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nRealistic benchmark suites are necessary for enabling fair, insightful, and reproducible evaluation of the effectiveness of system optimizations. As the time-to-accuracy performance relies on both the statistical utility and system utility (\\cref{sec:background_problem}), in the following summary of existing benchmarking tools, we aim to cover diverse aspects of simulating practical FL: data characteristics, client capabilities, and availability.", "id": "271672fe-a947-4489-af72-cdac79330349", "level": "subsection", "origin_cites_number": 0, "parent_id": "52489191-afca-4640-b251-831b75c284de", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Measurement and Benchmarking Tools\\label{chap:concerstone" ], [ "subsection", "Benchmarking Suites\\label{sec:cornerstone_benchmarking" ] ], "subsections": [ "e04c90d3-4295-47a6-81b9-5f869d19382c", "6b106f87-b95f-4ecf-a35e-e685df38e3b3" ], "title": "Benchmarking Suites\\label{sec:cornerstone_benchmarking" }, { "cite_extract_rate": 0.7333333333333331, "cites": [ 605, 591, 8367, 633, 652, 653, 582, 650, 637, 648, 651 ], "content": "}\nThere are two prevalent categories of training datasets. 
One line of work is \\textit{derived from conventional ML datasets} (e.g., \\texttt{CIFAR}~, \\texttt{MNIST}~, and \\texttt{Fashion-MNIST}~). To synthesize the non-IID nature as in real FL scenarios, the data partitions in these datasets are typically formed by restricting the number of data classes each client has (e.g., partitioning by shard-based methods as in~ or latent Dirichlet allocation (LDA) processes as in~). Although the data generated in such a way are indeed non-IID, they may not perfectly represent the real-world characteristics. For instance, besides the label distribution skew, in reality, non-IID data may also involve feature distribution skew (e.g., same words with different stroke widths), same labels with different features (e.g., images of clothing vary due to regional differences) and same features with different labels (e.g., the same context mapped to different next words due to personal habits)~.\nIn contrast, the other type of datasets is \\textit{collected in real distributed scenarios} and thus naturally captures the FL features. We briefly introduce existing open-source attempts in curating such datasets as follows.\n\\begin{itemize}\n \\item \\textit{LEAF}~: LEAF is an actively maintained project, which currently consists of 6 datasets spanning multiple applications such as computer vision (CV) (\\texttt{FEMNIST} and \\texttt{CelebA}) and natural language processing (NLP) (\\texttt{Shakespeare}, \\texttt{Reddit}, and \\texttt{Sentiment140}). Each dataset is generally formed by splitting the corresponding public dataset by the original contributors of the data samples. In other words, the non-IID nature comes from the unique behavior style of each contributor.\n \\item \\textit{FedScale}~: Similar to LEAF, FedScale also collects realistic datasets and partitions them by unique client identification. FedScale currently includes 18 datasets (including \\texttt{iNature}, \\texttt{OpenImage}, \\texttt{Google Landmark} etc.) 
which span 10 FL tasks. Apart from the comprehensive coverage of tasks, FedScale has further made four contributions to the community: (1) it has the training, validation, and testing sets well established; (2) it streamlines different datasets into a unified format; (3) it accounts for various participation scales from hundreds to millions of clients; and (4) it provides handy APIs for users to customize their datasets.\n \item \textit{OARF}~: for a specific FL task, OARF assembles real-world datasets from different sources to realize data heterogeneity. For example, for sentiment analysis, it combines both the \texttt{IMDB Movie Review} and \texttt{Amazon Movie Review} datasets. In training, datasets belonging to different sources are distributed to different parties. As such, the data are split in a dataset-wise manner instead of a sample-wise one. OARF currently covers 9 tasks in CV and NLP.\n\end{itemize}\nBesides the above systematic collections, there are also other separately maintained realistic datasets, such as \texttt{Stackoverflow}~ and \texttt{PERSONA-CHAT}~.", "id": "e04c90d3-4295-47a6-81b9-5f869d19382c", "level": "subsubsection", "origin_cites_number": 15, "parent_id": "271672fe-a947-4489-af72-cdac79330349", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Measurement and Benchmarking Tools\\label{chap:concerstone" ], [ "subsection", "Benchmarking Suites\\label{sec:cornerstone_benchmarking" ], [ "subsubsection", "Training Datasets\\label{sec:cornerstone_benchmarking_datasets" ] ], "subsections": [], "title": "Training Datasets\\label{sec:cornerstone_benchmarking_datasets" }, { "cite_extract_rate": 0.529411764705882, "cites": [ 656, 655, 612, 41, 7199, 583, 653, 654, 651 ], "content": "}\nIn addition to setting up data heterogeneity, we also need to incorporate system heterogeneity in realistic benchmarking. 
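As context for the shard- and LDA-based synthetic partitioning mentioned in the dataset discussion above, here is a minimal plain-Python sketch of Dirichlet (LDA) label partitioning; the function name and parameters are illustrative, and real pipelines typically use NumPy's Dirichlet sampler instead:

```python
import random
from collections import defaultdict

def lda_partition(labels, num_clients, alpha, seed=0):
    """Assign sample indices to clients: for each class, draw client
    proportions from Dirichlet(alpha). Smaller alpha => more label skew."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    clients = [[] for _ in range(num_clients)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        # Dirichlet draw via normalized Gamma(alpha, 1) variates
        gammas = [rng.gammavariate(alpha, 1.0) for _ in range(num_clients)]
        total = sum(gammas)
        props = [g / total for g in gammas]
        # convert proportions into contiguous slices of this class's samples
        cuts, acc = [0], 0.0
        for p in props[:-1]:
            acc += p
            cuts.append(int(acc * len(idxs)))
        cuts.append(len(idxs))
        for c in range(num_clients):
            clients[c].extend(idxs[cuts[c]:cuts[c + 1]])
    return clients
```

Setting `alpha` to a large value (e.g., 100) approaches an IID split, while `alpha` below 1 concentrates most of a class on a few clients.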
The most straightforward way to study FL designs with system utility borne in mind is to deploy in \textit{production-oriented systems}. Such systems not only embed the ML backbone but also address practical problems like authentication, communication, encryption, and deployment to physical distributed environments. We sketch the open-source representatives.\n\begin{itemize}\n \item \textit{FATE}~: FATE is an FL framework that can be deployed in distributed environments. In addition to its flexible ML pipeline, FATE also features several aspects that further facilitate the research on various goals of practical FL: (1) it supports privacy-preserving computation by implementing cryptographic algorithms such as the Diffie-Hellman key agreement~ and homomorphic encryption~; (2) it covers different training architectures including horizontal FL, vertical FL, and federated transfer learning; and (3) it allows a certain degree of customization on the FL pipeline such as the aggregation step. Given its heavy resource consumption, FATE is preferable for powering cross-silo applications instead of cross-device ones.\n \item \textit{FedML}~: FedML is also a secure and versatile FL framework that supports distributed mode. Compared to FATE, FedML is more flexible in communication engineering due to the ease of customizing message flow and topology definitions. It is also more lightweight and can thus accommodate training on mobile or IoT devices. Moreover, it can be accelerated with GPUs, while FATE is currently not compatible with hardware accelerators.\n \item \textit{Flower}~: Flower is a concurrent work with FedML, and it concentrates on providing a unified approach for FL with mobile devices. Similar to FedML, Flower bears in mind the goals of being (1) lightweight, (2) extensible, (3) scalable, and (4) compatible with diverse mobile platforms (e.g., Android and iOS) and ML frameworks (e.g., PyTorch~ and Tensorflow~). 
It also implements secure aggregation methods like SecAgg (cf. \cref{sec:prior_reporting_round}) with easily configurable APIs~.\n \item \textit{Plato}~: similar to Flower, Plato also aims to facilitate FL research atop multiple ML backends including PyTorch, Tensorflow, and MindSpore~. Different from its counterparts, Plato optimizes the development process, e.g., by making extensive use of plugin mechanisms to maximize extensibility and maintainability. Apart from lightweight distributed deployment, Plato also offers solutions to integrate secure communication, reverse proxy, and load balancing to best fit production environments.\n\end{itemize}\nAlthough using production systems yields the most realistic insights, it may not be practical for researchers with limited resources and time budgets. To meet the growing demand for conducting agile FL research, several platforms \textit{that enable system-aware simulation} have been developed. As opposed to system-unaware simulators (e.g., Tensorflow Federated~, PySyft~, LEAF~, OARF~, and FedEval~), these platforms respect the impact of client system heterogeneity by associating each client with her computation and communication speed, as well as availability dynamics, which are set either manually by developers or by replaying realistic traces. In addition, these platforms also excel in producing comprehensive metrics needed in performance analysis. Compared to real deployment, on the other hand, these systems allow researchers to make fast-forward progress without being blocked by real-world bottlenecks in computation and communication.\n\begin{itemize}\n \item \textit{Flower}~: besides deployment on real mobile devices as just mentioned, Flower also supports simulation in the cloud with configurable system-level parameters such as bandwidth constraints and computational capabilities. 
With that, researchers can experiment with larger and more compute-intensive FL workloads that cannot be run on today's mobile devices.\n \\item \\textit{FedScale}~: aside from curating real-world datasets (\\cref{sec:cornerstone_benchmarking_datasets}), FedScale also builds an automated runtime to simulate FL in realistic settings. By design, FedScale integrates \\texttt{AI Benchmark} and \\texttt{MobiPerf Measurements} system traces to simulate clients' heterogeneous training speed and network throughput, respectively. It also incorporates a large-scale user behavior dataset that was formulated in~ to emulate clients' availability dynamics. Compared to Flower, it lacks support for deployment on real distributed devices. Still, it broadly simulates realistic cross-device heterogeneity and can embrace new behavior traces with its APIs.\n\\end{itemize}", "id": "6b106f87-b95f-4ecf-a35e-e685df38e3b3", "level": "subsubsection", "origin_cites_number": 17, "parent_id": "271672fe-a947-4489-af72-cdac79330349", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Measurement and Benchmarking Tools\\label{chap:concerstone" ], [ "subsection", "Benchmarking Suites\\label{sec:cornerstone_benchmarking" ], [ "subsubsection", "Production Systems and Simulation Platforms\\label{sec:cornerstone_benchmarking_platforms" ] ], "subsections": [], "title": "Production Systems and Simulation Platforms\\label{sec:cornerstone_benchmarking_platforms" }, { "cite_extract_rate": 0, "cites": [], "content": "}", "id": "24e6f906-e64a-4296-a2e5-eb81d7da883e", "level": "section", "origin_cites_number": 0, "parent_id": "419df863-69f4-4c35-93d0-018d167ed4ec", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Related Work and Concluding Remarks\\label{chap:conclusion" ] ], "subsections": [ 
"526fca83-6724-45b6-99ae-76110ef046e4", "1404800d-4173-49f1-b574-96677b669f93", "02bbb984-68d7-4c50-804c-74430a998104", "a4f483b3-14e7-4959-a5a0-300b337d0549" ], "title": "Related Work and Concluding Remarks\\label{chap:conclusion" }, { "cite_extract_rate": 0.6923076923076921, "cites": [ 661, 636, 657, 658, 8368, 659, 660, 591, 628 ], "content": "}\nThe motivation for this survey stems from three observations. First, few in-depth surveys focus on system optimization for FL. As FL features strict compliance to privacy regulations as opposed to traditional distributed learning, many survey efforts are directed to the unique challenges such as enforcing data privacy and model robustness~, while the system optimization issues receive less attention in dedicated surveys.\nAs for common system issues shared by FL and traditional distributed learning, there do exist extensive surveys with detailed discussion. In~, the authors discuss the realm of mobile distributed machine learning, where algorithms are classified into three categories: 1) machine learning optimizers, 2) distributed optimization algorithms, and 3) data aggregation methods. In , the authors provide a detailed survey of communication-efficient distributed training algorithms. They identify four key factors that affect the communication cost in distributed learning and organize the literature accordingly: 1) synchronous scheme, 2) system architecture, 3) compression techniques and 4) parallelism of communication and computation.~ also studies the communication issues in distributed deep learning, however, with a focus on the commonly used lossless methods. It also contains a quantitative analysis of these methods based on real-world experiments. Compared to our survey, the scope of these surveys is not narrowed down to FL and thus does not fully capture all the system optimization challenges that are unique in FL. 
Notably, FL has in its standard workflow the selection phase, which needs particular investigation due to client heterogeneity, while traditional distributed learning does not even have the notion of clients.\nWe notice that there are comprehensive surveys that cover a wide range of FL techniques including system-level optimizations. In~, the authors discuss in-depth the advances and open problems in FL at the time of writing. The problems and solutions are presented in four categories: 1) efficiency and effectiveness, 2) privacy preservation, 3) robustness enforcement and 4) fairness establishment. In~, the authors extensively introduce the challenges and research directions of FL at and for mobile edge networks. As for FL at mobile edge networks, they particularly elaborate on techniques for 1) communication efficiency, 2) resource allocation, and 3) privacy and security. In~, the authors provide recommendations on algorithm-level optimization of FL applications, while they also briefly sketch the system-level constraints and practices. They discuss the reduction of communication, computation, and memory costs, as well as propose a basic model to estimate the communication efficiency of cross-device FL deployment. In~, the authors present a fine-grained multi-level classification of the FL literature, spanning six major topics: 1) statistical challenges, 2) communication efficiency, 3) client selection and scheduling, 4) security concerns, 5) privacy concerns, and 6) service pricing. Compared to this survey, the above-mentioned works neither focus on system-level optimizations nor categorize the literature by the phases in the synchronous training process. The survey most closely related to ours is~, which is dedicated to summarizing system-level optimizations used in asynchronous FL. As their studied architecture is fundamentally different from ours, their survey could be a good complement to this survey. 
It is also noteworthy that their classification mechanism on existing techniques is built upon the type of client heterogeneity, which also differs from our taxonomy.\nIn short, this survey aims to provide a succinct yet complete view of an essential domain in the FL literature: system optimization in synchronous federated training. We also summarize in Tab.~\\ref{tab:survey} the main similarities and differences between our survey and the existing surveys that are closely related.\n\\begin{table*}[h]\n \\begin{center}\n \\caption{Comparative summary between our survey and the most related work.}\n \\label{tab:survey}\n \\resizebox{1.0\\linewidth}{!}{\n \\begin{tabular}{cm{9cm}m{9cm}}\n \\toprule\n Survey & \\multicolumn{1}{c}{Similarities} & \\multicolumn{1}{c}{Differences} \\\\\n \\midrule\n Gu et al.~\n &\n Similar to our survey, this survey addresses some common challenges of synchronous federated training such as communication efficiency.\n & \\hspace{-5mm}\n \\begin{minipage}[b]{0.52\\textwidth}\n \\vspace{1mm}\n \\begin{itemize}\n \\item As for machine learning optimizers, it does not discuss the server-side ones which are FL-specific.\n \\item Our work addresses additional challenges that are not mentioned in this survey such as participant selection and aggregation efficiency.\n \\end{itemize}\n \\vspace{1mm}\n \\end{minipage} \\\\\n Tang et al.~\n & Similar to our survey, this survey addresses some common challenges of synchronous federated training such as communication efficiency.\n & \\hspace{-5mm}\n \\begin{minipage}[b]{0.52\\textwidth}\n \\vspace{1mm}\n \\begin{itemize}\n \\item It considers distributed vanilla SGD, which is different in convergence characteristics from FL, a special case of local SGD.\n \\item Our work addresses additional challenges that are not mentioned in this survey such as participant selection and client bias reduction.\n \\end{itemize}\n \\vspace{1mm}\n \\end{minipage} \\\\\n Shi et al.~\n & Similar to our survey, this 
survey addresses some common challenges of synchronous federated training such as communication efficiency.\n & \hspace{-5mm}\n \begin{minipage}[b]{0.52\textwidth}\n \vspace{1mm}\n \begin{itemize}\n \item It focuses on controlling the system architecture and scheduling algorithms in data center learning, which is not applicable to synchronous FL.\n \item Our work addresses additional challenges that are not mentioned in this survey such as participant selection and data heterogeneity.\n \end{itemize}\n \vspace{1mm}\n \end{minipage} \\\n Kairouz et al.~\n & Similar to our survey, this survey addresses some common challenges of synchronous federated training such as client bias reduction and communication efficiency.\n & \hspace{-5mm}\n \begin{minipage}[b]{0.52\textwidth}\n \vspace{1mm}\n \begin{itemize}\n \item The scope of this survey is quite broad, where, for example, theoretical analyses of convergence and privacy-preserving mechanisms are also extensively discussed.\n \item Our work addresses additional challenges that are not mentioned in this survey such as participant selection and load balancing.\n \end{itemize}\n \vspace{1mm}\n \end{minipage} \\\n Lim et al.~\n & Similar to our survey, this survey addresses some common challenges of FL such as communication efficiency and participant selection. 
It also mentions the standard workflow of synchronous training.\n & \hspace{-5mm}\n \begin{minipage}[b]{0.52\textwidth}\n \vspace{1mm}\n \begin{itemize}\n \item The scope of this survey is quite broad, where many applications of FL in mobile edge networks, e.g., cyberattack detection, are also elaborated.\n \item They do not organize system-level optimizations by the phases in the standard training workflow.\n \item Our work addresses additional challenges that are not mentioned in this survey such as client bias reduction and lightweight privacy-preserving aggregation.\n \end{itemize}\n \vspace{1mm}\n \end{minipage} \\\n Wang et al.~\n & Similar to our survey, this survey addresses some common challenges of FL such as communication efficiency and lightweight privacy-preserving aggregation.\n & \hspace{-5mm}\n \begin{minipage}[b]{0.52\textwidth}\n \vspace{1mm}\n \begin{itemize}\n \item It centers on the algorithm-level optimizations as well as their theoretical guarantees, while the challenges and directives of system-level issues are briefly introduced.\n \item Our work addresses additional challenges that are not mentioned in this survey such as participant selection and load balancing.\n \end{itemize}\n \vspace{1mm}\n \end{minipage} \\\n Wahab et al.~\n & Similar to our survey, this survey addresses some common challenges of FL such as participant selection and communication efficiency.\n & \hspace{-5mm}\n \begin{minipage}[b]{0.52\textwidth}\n \vspace{1mm}\n \begin{itemize}\n \item The scope of this survey is quite broad, where even training service pricing is included.\n \item Our work contains additional exploratory work in the system community that is not mentioned in this survey including measurement studies and benchmarking suites.\n \end{itemize}\n \vspace{1mm}\n \end{minipage} \\\n Xu et al.~\n & Similar to our survey, this survey focuses on optimizing the FL training process in the presence of client 
heterogeneity.\n & \hspace{-5mm}\n \begin{minipage}[b]{0.52\textwidth}\n \vspace{1mm}\n \begin{itemize}\n \item Its target architecture is asynchronous and thus the intersection of its introduced techniques and ours is fairly small.\n \item Our work sorts the literature by the phases in the standard synchronous training workflow, which is distinct from the classification mechanism used in this survey.\n \end{itemize}\n \vspace{1mm}\n \end{minipage} \\\n \bottomrule\n \end{tabular}\n }\n \end{center}\n\end{table*}", "id": "526fca83-6724-45b6-99ae-76110ef046e4", "level": "subsection", "origin_cites_number": 13, "parent_id": "24e6f906-e64a-4296-a2e5-eb81d7da883e", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Related Work and Concluding Remarks\\label{chap:conclusion" ], [ "subsection", "Related Surveys\\label{sec:conclusion_related" ] ], "subsections": [], "title": "Related Surveys\\label{sec:conclusion_related" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nThe primary goal of this survey is to help researchers design future optimization solutions. 
To provide more directives for FL practitioners, we discuss in the following some possible future directions that we derive from the literature as well as our development practice.", "id": "1404800d-4173-49f1-b574-96677b669f93", "level": "subsection", "origin_cites_number": 0, "parent_id": "24e6f906-e64a-4296-a2e5-eb81d7da883e", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Related Work and Concluding Remarks\\label{chap:conclusion" ], [ "subsection", "Future Research Directions\\label{sec:conclusion_future" ] ], "subsections": [ "e4f64951-7cd3-49d1-bb82-2ae75da77f76", "3dbd150d-cc33-4585-ba9e-a4c4d2278414", "0e591dcb-87fb-4bba-a561-d583bde20c52" ], "title": "Future Research Directions\\label{sec:conclusion_future" }, { "cite_extract_rate": 0.4, "cites": [ 585, 582 ], "content": "}\nWhile RL agents are commonly used in guiding client selection for their beneficial balance between exploitation and exploration~, their training costs are inherently high. Before an RL agent is ready for use, it has to learn from tens to hundreds of full-sized FL training processes, whose cost may not be justifiable in practice. Inspired by the experience from container management~, we are interested in whether certain techniques, e.g., transfer learning~, can be adopted so that an RL agent can be trained from prior FL tasks, instead of learning from scratch for every single task.\nMoreover, existing algorithms all assume the participation scale (i.e., the number of clients training simultaneously) to be on the order of a hundred, because involving more clients in a round is observed to have marginal benefits under primary aggregation methods (e.g., FedAvg~). On the other hand, the number of available clients at each minute can be as many as thousands in the cross-device practice. 
It is still desirable to explore the selection of a larger crowd for boosting the time-to-accuracy performance.", "id": "e4f64951-7cd3-49d1-bb82-2ae75da77f76", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "1404800d-4173-49f1-b574-96677b669f93", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Related Work and Concluding Remarks\\label{chap:conclusion" ], [ "subsection", "Future Research Directions\\label{sec:conclusion_future" ], [ "subsubsection", "On the Selection Phase\\label{sec:conclusion_future_selection" ] ], "subsections": [], "title": "On the Selection Phase\\label{sec:conclusion_future_selection" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 662 ], "content": "}\nAligned with the observations in the literature~, most prior arts apply identical configurations to different clients. Although there exist some heterogeneity-aware efforts like load balancing where the number of batches, batch size, and number of local epochs can vary across clients (\cref{sec:prior_configuration_latency}), we anticipate that the design space for heterogeneity-aware client configurations could be larger, e.g., using different compression ratios or synchronization frequencies.\nTo achieve efficient and secure communication, it is also worth studying how to combine sparsification techniques with privacy-preserving methods. For example, model sparsification is by nature not compatible with secure multi-party computation protocols such as SecAgg~, because the coordinates of sparsified model values often vary across clients, preventing the pairwise masks from being canceled out during model aggregation as required by SecAgg. 
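As a toy illustration of this incompatibility (plain Python with made-up values; `top_s` and the fixed mask are our own illustrative constructs, not SecAgg's actual protocol), note how a coordinate shared by both sparsified supports still cancels, while an unshared one retains a residual mask:

```python
def top_s(update, s):
    """Keep only the s largest-magnitude coordinates of a sparse update."""
    keep = sorted(update, key=lambda k: abs(update[k]), reverse=True)[:s]
    return {k: update[k] for k in keep}

def add_mask(update, mask, sign):
    """Apply a pairwise mask: +mask on one client, -mask on its peer."""
    return {k: v + sign * mask[k] for k, v in update.items()}

# Two clients sharing one pairwise mask (fixed values for determinism).
mask = {0: 0.7, 1: -0.3, 2: 0.2, 3: 0.5}
a = {0: 0.9, 1: 0.1, 2: 0.5, 3: 0.05}
b = {0: 0.1, 1: 0.8, 2: 0.6, 3: 0.02}

# Dense case: the +mask and -mask cancel exactly in the sum.
dense_sum = {k: add_mask(a, mask, +1)[k] + add_mask(b, mask, -1)[k] for k in mask}

# Sparsified case: the top-2 supports differ ({0, 2} vs {1, 2}), so only
# the shared coordinate 2 cancels; coordinate 0 keeps a residual +mask.
sa = add_mask(top_s(a, 2), mask, +1)
sb = add_mask(top_s(b, 2), mask, -1)
```
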
A recent work named SparseSecAgg~ attempts to tackle this issue; however, it implies the need for sharing some sparsified locations between each pair of clients, which cannot be directly extended to conventional sparsification schemes such as the top $s\%$ scheme (\cref{sec:prior_configuration_size}). More general reconciliation between the two techniques is still ripe for future investigation.", "id": "3dbd150d-cc33-4585-ba9e-a4c4d2278414", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "1404800d-4173-49f1-b574-96677b669f93", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Related Work and Concluding Remarks\\label{chap:conclusion" ], [ "subsection", "Future Research Directions\\label{sec:conclusion_future" ], [ "subsubsection", "On the Configuration Phase\\label{sec:conclusion_future_configuration" ] ], "subsections": [], "title": "On the Configuration Phase\\label{sec:conclusion_future_configuration" }, { "cite_extract_rate": 0.5454545454545451, "cites": [ 664, 663, 666, 7038, 665, 591 ], "content": "}\nAs the system bottleneck is usually assumed to lie in the clients instead of the server, most of the existing optimization efforts focus on improving the utility (system and statistical) of clients. It is thus interesting to investigate whether such an assumption holds in practice, especially when the scalability of the server is restricted due to rigid capabilities or limited budgets, or when its aggregation latency is not negligible due to the presence of large-scale models. \nBesides, existing lightweight privacy-preserving aggregation methods are not compatible with robustness enforcement techniques~. Because the true values of local models are concealed from the server, it has no way of inspecting their numerical features~ or validating their performance for anomaly detection~. This raises the tension between privacy and robustness. 
Leveraging Trusted Execution Environment (TEE) to perform model inspection securely might be a helpful workaround. However, due to current limits on TEE's hardware capabilities, doing so could incur a foreseeably large performance downgrade. It thus remains largely unexplored as to how to navigate the sweet spot of jointly maximizing accuracy, performance, privacy, and robustness in synchronous federated training.", "id": "0e591dcb-87fb-4bba-a561-d583bde20c52", "level": "subsubsection", "origin_cites_number": 11, "parent_id": "1404800d-4173-49f1-b574-96677b669f93", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Related Work and Concluding Remarks\\label{chap:conclusion" ], [ "subsection", "Future Research Directions\\label{sec:conclusion_future" ], [ "subsubsection", "On the Reporting Phase\\label{sec:conclusion_future_reporting" ] ], "subsections": [], "title": "On the Reporting Phase\\label{sec:conclusion_future_reporting" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nWe further discuss the applicability of the above-introduced optimization techniques in different contexts.", "id": "02bbb984-68d7-4c50-804c-74430a998104", "level": "subsection", "origin_cites_number": 0, "parent_id": "24e6f906-e64a-4296-a2e5-eb81d7da883e", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Related Work and Concluding Remarks\\label{chap:conclusion" ], [ "subsection", "Discussion\\label{sec:conclusion_dicussion" ] ], "subsections": [ "19eb0c2c-4b7e-4dcc-9766-a74abafd9f2f", "23cebd68-fb90-4604-b70b-25f0eba0e52b" ], "title": "Discussion\\label{sec:conclusion_dicussion" }, { "cite_extract_rate": 0.75, "cites": [ 590, 591, 583 ], "content": "}\nFL applications are often categorized as either cross-device FL (where the participants are a mass of less capable mobile or IoT 
devices) or cross-silo FL (where the participants are 2-100 organizational entities)~. While the FL workflow on which we base this survey is primarily proposed for cross-device FL~, it also generalizes to cross-silo settings. Hence, the scope of this survey does not preclude cross-silo FL, and many practical methods mentioned here should apply to both settings. For those techniques that are suitable for only one setting, we have emphasized their limitations and stated the practical reasons behind them.", "id": "19eb0c2c-4b7e-4dcc-9766-a74abafd9f2f", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "02bbb984-68d7-4c50-804c-74430a998104", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Related Work and Concluding Remarks\\label{chap:conclusion" ], [ "subsection", "Discussion\\label{sec:conclusion_dicussion" ], [ "subsubsection", "Cross-Device FL and Cross-Silo FL\\label{sec:conclusion_dicussion_cross" ] ], "subsections": [], "title": "Cross-Device FL and Cross-Silo FL\\label{sec:conclusion_dicussion_cross" }, { "cite_extract_rate": 0.625, "cites": [ 669, 670, 667, 671, 668 ], "content": "}\nBased on the characteristics of data distribution, FL applications can also be broadly classified as horizontal FL (where clients' data have the same feature space but different samples) or vertical FL (where clients' data share the same sample space but have different features)~. Horizontal FL is typically initiated by a service provider who wants to improve the quality of an ML application to better fit the end users' data of a specific domain. 
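The two settings can be contrasted with a toy data partition. In this sketch (user records and feature names are made up for illustration), each horizontal-FL client holds complete rows for its own samples, while each vertical-FL client holds a different subset of columns for the same samples.

```python
# Toy sketch contrasting the two partitioning schemes; the user records
# and feature names below are made up for illustration.

table = {  # sample_id -> {feature_name: value}
    "u1": {"age": 31, "income": 50, "clicks": 7},
    "u2": {"age": 45, "income": 80, "clicks": 2},
}

# Horizontal FL: same feature space, different samples per client.
horizontal = [{"u1": table["u1"]}, {"u2": table["u2"]}]

# Vertical FL: same samples, different feature subsets per client.
vertical = [
    {uid: {"age": row["age"], "income": row["income"]}
     for uid, row in table.items()},
    {uid: {"clicks": row["clicks"]} for uid, row in table.items()},
]
```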
For example, by combining the statistics of users' input habits, the developers of a mobile keyboard application could achieve a more intelligent prediction of the next words typed by the users, thus enhancing their experience.\nOn the other hand, vertical FL participants are usually a few organizations who hold data in different domains while being likely to achieve win-win cooperation by knowledge sharing. For instance, a regional e-commerce company and a bank may share the same set of clients. By incorporating the knowledge of the clients' revenue and expenditure recorded at the bank as well as the purchasing traces collected by the e-commerce company, they can build a more accurate prediction model on the clients' purchasing behaviors, benefiting both of their businesses.\nGiven that there exist diverse training workflows of vertical FL and the community has not yet achieved a consensus on the use of any one~, much discussion in this survey is biased towards horizontal FL, which has a widely acknowledged architecture (\\cref{sec:background_federated}). Still, we can provide some insights for optimizing vertical FL training in general:\n\\begin{enumerate}\n \\item As the data is feature-partitioned, vertical FL needs the participation of all clients in each model update attempt. Thus, optimizations on participant selection (\\cref{sec:prior_selection}) are not applicable to vertical FL.\n \\item In terms of communication, model-agnostic compression techniques (\\cref{sec:prior_configuration_size}) should still apply to plaintext variants of vertical FL such as~. 
However, synchronization frequency reduction (\\cref{sec:prior_configuration_freq}) is not possible in vertical FL, as each client does not have a complete model and cannot conduct training independently.\n \\item As for local training, as all participants need to process data from the same sample space, load balancing techniques that assign different amounts of data samples to each client (\\cref{sec:prior_configuration_latency}) are not feasible in vertical FL. Moreover, as clients inherently have different local models, adaptive optimization and bias reduction methods (\\cref{sec:prior_configuration_round}) from horizontal FL do not generalize well to vertical FL. \n \\item Unlike horizontal FL, the forms of aggregation in vertical FL differ significantly with the model architecture. Thus, the mentioned optimizations (\\cref{sec:prior_reporting}) can hardly be a generic remedy.\n\\end{enumerate}\nBesides horizontal FL and vertical FL, a novel collaborative learning paradigm called Federated Transfer Learning (FTL)~ has recently emerged, which can cope with more relaxed assumptions of client data distribution using transfer learning techniques~. Specifically, it deals with the cases where two clients' data differ not only in sample space but also in feature space. Toward this end, FTL learns a common feature representation between the two clients and minimizes the empirical errors in predicting one client's labels by leveraging the labels of other clients. 
As FTL's training workflow resembles some variants of vertical FL~, we believe that the above-mentioned insights for vertical FL also hold for FTL.", "id": "23cebd68-fb90-4604-b70b-25f0eba0e52b", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "02bbb984-68d7-4c50-804c-74430a998104", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Related Work and Concluding Remarks\\label{chap:conclusion" ], [ "subsection", "Discussion\\label{sec:conclusion_dicussion" ], [ "subsubsection", "Horizontal FL and Vertical FL\\label{sec:conclusion_dicussion_horizontal" ] ], "subsections": [], "title": "Horizontal FL and Vertical FL\\label{sec:conclusion_dicussion_horizontal" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nIn this survey, we focus on system optimization in synchronous federated training and propose a natural taxonomy that categorizes existing solutions based on both the training phase and the type of utility that they target. Apart from problem-driven attempts, we also include related cornerstone efforts, including measurement studies and benchmarking suites. We expect this manuscript to be a \\textit{useful guideline} for the design and implementation of federated learning systems.\n\\ifCLASSOPTIONcompsoc\n \\section*{Acknowledgments}\n\\else\n \\section*{Acknowledgment}\n\\fi\nThe research was supported in part by RGC RIF grant R6021-20, and RGC GRF grants under the contracts 16209120, 16200221, and 16213120. We also thank the anonymous reviewers for their valuable feedback.\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\\Urlmuskip=0mu plus 1mu\\relax\n\\bibliographystyle{IEEEtran}\n\\bibliography{./ref.bib}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./biophoto/zjiangaj.jpg}}]{Zhifeng Jiang}\nreceived the BEng (Hons.) 
degree from the Department of Computer Science and Technology at Zhejiang University (ZJU) in 2019. Since then, he has been working towards a Ph.D. degree in the Department of Computer Science and Engineering, Hong Kong University of Science and Technology (HKUST). He is mainly interested in machine learning systems, with a special focus on efficient and scalable federated learning. He was a recipient of the Best Paper Runner-up Award at IEEE ICDCS 2021. He served on the artifact evaluation committees of ACM SOSP 2021, USENIX OSDI 2022, and USENIX ATC 2022.\n\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./biophoto/weiwa.jpeg}}]{Wei Wang} (Member, IEEE) received his B.Engr. and M.Engr. degrees from the Department of Electrical Engineering, Shanghai Jiao Tong University, China, in 2007 and 2010, respectively and his Ph.D. degree from the Department of Electrical and Computer Engineering, University of Toronto, Canada, in 2015. Since 2015, he has been with the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology (HKUST), where he is currently an Associate Professor. He is also affiliated with the HKUST Big Data Institute. Dr. Wang's research interests cover the broad area of distributed systems, with focus on serverless computing, machine learning systems, and cloud resource management. He published extensively in the premier conferences and journals of his fields. His research has won the Best Paper Runner Up awards of IEEE ICDCS 2021 and USENIX ICAC 2013.\n\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./biophoto/bli.jpg}}]{Bo Li} is a Chair Professor in Department of Computer Science and Engineering, Hong Kong University of Science and Technology. 
He was a Cheung Kong Scholar Visiting Chair Professor in Shanghai Jiao Tong University (2010-2016), and was the Chief Technical Advisor for ChinaCache Corp. (NASDAQ:CCIH), a leading CDN provider. He made pioneering contributions in multimedia communications and the Internet video broadcast, which attracted significant investment and received the Test-of-Time Best Paper Award from IEEE INFOCOM (2015). He received 6 Best Paper Awards from IEEE including IEEE INFOCOM (2021). He was the Co-TPC Chair for IEEE INFOCOM (2004).\nHe is a Fellow of IEEE. He received his PhD in the ECE Department, University of Massachusetts at Amherst, and his B. Eng. (summa cum laude) in the Computer Science from Tsinghua University, Beijing, China.\n\\end{IEEEbiography}\n\\vfill\n\\newpage\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./biophoto/qyang.jpeg}}]{Qiang Yang} received the B.Sc. degree in astrophysics from Peking University, Beijing, China, in 1982, and the Ph.D. degree in computer science and the M.Sc. degree in astrophysics from the University of Maryland at College Park, College Park, MD, USA, in 1985 and 1989, respectively.\nHe was a Faculty Member with the University of Waterloo, Waterloo, ON, Canada, from 1989 to 1995, and Simon Fraser University, Burnaby, BC, Canada, from 1995 to 2001. He was the Founding Director of the Noah’s Ark Laboratory, Huawei, Hong Kong, from 2012 to 2014, and a Co-Founder of 4Paradigm Corporation, Beijing, an AI platform company. He is currently the Head (Chief AI Officer) with the AI Department, WeBank, Shenzhen, China, and the Chair Professor with the Department of Computer Science and Engineering (CSE), The Hong Kong University of Science and Technology, Hong Kong, where he was the Former Head of the Department of CSE and the Founding Director of the Big Data Institute, Hong Kong, from 2015 to 2018. 
He has authored several books, including Intelligent Planning (Springer), Crafting Your Research Future (Morgan and Claypool), and Constraint-Based Design Recovery for Software Engineering (Springer). His research interests include artificial intelligence, machine learning, and data mining, with an emphasis on transfer learning, automated planning, federated learning, and case-based reasoning.\nDr. Yang is a fellow of several international societies, including the ACM, AAAI, IAPR, and AAAS. He served as an Executive Council Member for the Association for the Advancement of AI from 2016 to 2020 and the President for the International Joint Conference on AI from 2017 to 2019. He was a recipient of several awards, including the 2004/2005 ACM KDDCUP Championship, the AAAI Innovative AI Applications Award in 2016, and the ACM SIGKDD Distinguished Service Award in 2017. He was the Founding Editor-in-Chief of the ACM Transactions on Intelligent Systems and Technology and the IEEE TRANSACTIONS ON BIG DATA.\n\\end{IEEEbiography}\n\\end{document}", "id": "a4f483b3-14e7-4959-a5a0-300b337d0549", "level": "subsection", "origin_cites_number": 0, "parent_id": "24e6f906-e64a-4296-a2e5-eb81d7da883e", "prefix_titles": [ [ "title", "Towards Efficient Synchronous\\\\Federated Training: A Survey on\\\\System Optimization Strategies" ], [ "section", "Related Work and Concluding Remarks\\label{chap:conclusion" ], [ "subsection", "Conclusion\\label{sec:conclusion_conclusion" ] ], "subsections": [], "title": "Conclusion\\label{sec:conclusion_conclusion" } ]
80
[ 585, 589, 584, 581, 587, 580, 583, 586, 7262, 582, 7261, 588, 590, 591, 593, 595, 596, 600, 598, 599, 594, 592, 7038, 597, 3469, 618, 617, 608, 605, 602, 603, 606, 613, 616, 612, 607, 610, 614, 604, 611, 615, 601, 4338, 609, 619, 620, 626, 628, 624, 622, 621, 625, 8366, 627, 623, 630, 629, 631, 632, 634, 638, 635, 640, 636, 639, 641, 633, 637, 643, 646, 644, 645, 647, 7039, 642, 8367, 648, 649, 652, 653, 650, 651, 656, 655, 41, 7199, 654, 661, 657, 658, 8368, 659, 660, 662, 664, 663, 666, 665, 669, 670, 667, 671, 668 ]
0.985453
[ "Ji Liu", "Jizhou Huang", "Yang Zhou", "Xuhong Li", "Shilei Ji", "Haoyi Xiong", "Dejing Dou" ]
From Distributed Machine Learning to Federated Learning: A Survey
2021
2021-04-29T14:15:11Z
cs.DC
In recent years, data and computing resources are typically distributed in the devices of end users, various regions or organizations. Because of laws or regulations, the distributed data and computing resources cannot be \liu{aggregated or} directly shared among different regions or organizations for machine learning tasks. Federated learning emerges as an efficient approach to exploit distributed data and computing resources, so as to collaboratively train machine learning models\liu{. At the same time, federated learning obeys} the laws and regulations and ensures data security and data privacy. In this paper, we provide a comprehensive survey of existing works for federated learning. \liu{First, we propose a functional architecture of federated learning systems and a taxonomy of related techniques. Second, we explain the federated learning systems from four aspects: diverse types of parallelism, aggregation algorithms, data communication, and the security of federated learning systems. Third, we present four widely used federated systems based on the functional architecture. Finally, we summarize the limitations and propose future research directions.} \keywords{Federated learning \and Distributed system \and Parallel computing \and Security, Privacy}
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "42e77c1c-0c8b-4e28-9d53-60920d7558be", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "From Distributed Machine Learning to Federated Learning: A Survey" ] ], "subsections": [ "bda39eab-e9c6-47ab-8897-26d79998749b", "2371347b-6739-4c7c-b6c1-b23b3b02d8ed", "5dfeced7-075b-4276-abd0-1ff3e9bb48a7", "cd0b3f18-6523-488f-aaa8-addaf533ff9f", "727148cb-019c-4e5c-9d02-e8f0c8fb671c", "226ea081-07c1-4d87-ad04-18733a7f7202", "4ff5262c-b6d9-4162-aba6-d016f80c7306" ], "title": "root" }, { "cite_extract_rate": 0.47222222222222204, "cites": [ 3469, 1315, 5419, 3477, 591, 8917, 659, 5422, 5423, 1311, 5424, 671, 627, 5420, 600, 582, 5421 ], "content": "With billions of connected Internet of Things (IoT) devices , smartphones and large websites around the world, in recent years, we have witnessed huge amounts of data generated and dispersed over various mobile devices of end users, or the data centers of different organizations. As the data contain sensitive information of end users or organizations, such as facial images, location-based services, health information , or personal economic status, moving the raw data from personal devices or data centers of multiple organizations to a centralized server or data center may pose immediate or potential risks of information leakage. Due to the concerns of data security and data privacy, legal restrictions, such as the Cybersecurity Law of the People’s Republic of China (CLPR) , the General Data Protection Regulation (GDPR) in the European Union, the Personal Data Protection Act (PDPA) in Singapore, the California Consumer Privacy Act (CCPA) , and the Consumer Privacy Bill of Rights (CPBR) in the United States, have been introduced and put into practice, which makes data aggregation from distributed devices, multiple regions, or organizations almost impossible . 
In addition, computing and storage resources are also typically distributed in multiple regions and organizations , and they cannot be aggregated in a single data center.\nFederated Learning (FL) emerges as an efficient approach to exploit the distributed resources to collaboratively train a machine learning model. FL is a distributed machine learning approach where multiple users collaboratively train a model, while keeping the raw data decentralized without being moved to a single server or data center .\nFL not only exploits the distributed resources to efficiently carry out the training process of machine learning, but also promises to provide security and privacy for the decentralized raw data. Within FL, the raw data, or the data generated based on the raw data with security processing, serves as the training data. FL only allows the intermediate data to be transferred among the distributed computing resources while avoiding the transfer of training data. \nThe distributed computing resources refer to mobile devices of end users or servers of multiple organizations.\nFL brings the code to the data, instead of bringing the data to the code, and it addresses the fundamental problems of privacy, ownership, and locality of data .\nIn this way, FL can enable multiple users to train a model while satisfying the legal data restrictions.\nTraditional centralized machine learning approaches typically gather the distributed raw data generated on different devices or in different organizations into a single server or a cluster with shared data storage, which may bring serious data privacy and security concerns . \nThe centralized approaches, in general, are associated with diverse challenges, including computational power and training time, and most importantly, security and privacy with respect to distributed data .\nFL differs from the centralized approach in three aspects. First, FL does not allow direct raw data communication, while the centralized approach has no restriction. 
Second, FL exploits the distributed computing resources in multiple regions or organizations, while the centralized approach generally only utilizes a single server or a cluster in a single region, which belongs to a single organization. Third, FL generally takes advantage of encryption or other defense techniques to ensure the data privacy or security, while the centralized approach pays little attention to these security issues .\nThe term ``federated learning'' was first introduced in 2016 , in work focusing on the unbalanced and non-Independent and Identically Distributed (non-IID) data in mobile devices. \nThe concept of FL was extended to three data scenarios, i.e., horizontal, vertical, and hybrid . \nHorizontal FL addresses decentralized data with the same features but different identifications. Vertical FL handles decentralized data with the same identifications but different features. Hybrid FL deals with data of different identifications and different features. Then, FL is formally defined as a machine learning approach where multiple clients collaborate in solving a machine learning problem while the raw data is stored locally and is neither exchanged nor transferred . \nAn FL system is an efficient tool to carry out FL with decentralized data and resources. Several open-source FL systems, e.g., FATE , PaddleFL , TensorflowFL , and Pysyft , are now intensively used by both research communities, e.g., healthcare , and computer vision , and by industrial groups, e.g., WeBank . Although various FL systems exist, the architecture of FL systems has common features: In particular, they share the capability to collaboratively train a machine learning model. Most FL systems are composed of four layers, i.e., presentation, user services, FL training, and infrastructure. 
These four layers enable FL system users to design, execute, and analyze machine learning models with distributed data.\nAlthough FL differs from the centralized machine learning approaches, it not only utilizes novel techniques designed for FL, but also takes advantage of the techniques designed for distributed machine learning. \nFL exploits parallelization techniques designed for distributed machine learning.\nFor instance, horizontal FL exploits data parallelism, which trains multiple instances of the same model on different subsets of the training dataset . Vertical FL utilizes model parallelism to distribute parallel paths of a single model to multiple devices in order to handle the data of different features . \nMultiple aggregation algorithms have been proposed to aggregate the models in distributed computing resources.\nData transfer techniques are also utilized in FL, e.g., model compression . \nAs FL promises to provide data security and data privacy, diverse defense techniques, e.g., differential privacy , homomorphic encryption , and Robustness Aggregation , are designed to address the possible attacks .\nThere have been a few surveys of FL. Some works provide a comprehensive study of FL, from the taxonomy of FL to the techniques, e.g., the efficiency, data privacy, security, and applications of FL. Some surveys focus on the data privacy and security of FL. Other surveys present the application of FL in a specific area, e.g., healthcare informatics , mobile edge networks , and neural architecture search , and they personalize global models to work better for individual clients . However, few of them present the architecture of FL or analyze parallelization techniques in FL.\nIn this paper, we provide a survey of federated learning and the related parallelization techniques. The main contributions of this paper are:\n\\begin{itemize}\n \\item A four-layer FL system architecture, which is useful for discussing the techniques for FL. 
This architecture can also be a baseline for other work and can help with the assessment and comparison of FL systems.\n \\item A taxonomy of FL-related techniques, including the parallelization techniques, the aggregation algorithms, and the techniques for data communication and security, with a comparative analysis of the existing solutions.\n \\item A discussion of research issues to improve the efficiency and security of FL systems.\n\\end{itemize}\nThis paper is organized as follows. Section \\ref{sec:overview} gives an overview of the execution of FL, including the FL system architectures and basic functional architecture of FL systems. Section \\ref{sec:pexec} focuses on the techniques used for distributed training of FL and aggregation methods. Section \\ref{sec:security} presents the techniques for distributed execution, data communication and data security of FL. Section \\ref{sec:frameworks} demonstrates the existing FL frameworks.\nSection \\ref{sec:future} discusses the open issues raised for the execution of FL with distributed resources. Section \\ref{sec:con} summarizes the main findings of this study.", "id": "bda39eab-e9c6-47ab-8897-26d79998749b", "level": "section", "origin_cites_number": 36, "parent_id": "42e77c1c-0c8b-4e28-9d53-60920d7558be", "prefix_titles": [ [ "title", "From Distributed Machine Learning to Federated Learning: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:overview}\nIn this section, we introduce the basic concepts of federated learning. Then, we present the life cycle of FL models. 
Afterwards, we detail the functional architecture and the corresponding functionality of FL systems.", "id": "2371347b-6739-4c7c-b6c1-b23b3b02d8ed", "level": "section", "origin_cites_number": 0, "parent_id": "42e77c1c-0c8b-4e28-9d53-60920d7558be", "prefix_titles": [ [ "title", "From Distributed Machine Learning to Federated Learning: A Survey" ], [ "section", "An Overview of Federated Learning" ] ], "subsections": [ "4f50be54-11aa-4ae8-ae5e-fb6d0e6797e6", "cd1b4108-9163-49ab-bfbc-9fe67cc3a340", "78f72b80-7c28-4237-9e16-d97c8c144603" ], "title": "An Overview of Federated Learning" }, { "cite_extract_rate": 0.37037037037037, "cites": [ 1315, 591, 5426, 659, 5425, 671, 600, 166, 582 ], "content": "Machine learning is the process of automatically extracting models or patterns from data . \nThe models or patterns are expressed as machine learning models.\nA machine learning model is an ensemble of a model structure, which is typically expressed as a Directed Acyclic Graph (DAG), data processing units, e.g., activation functions in Deep Neural Networks (DNNs), and the associated parameters or hyper-parameters. \nThe input data can be processed through a machine learning model to generate the output, e.g., the prediction results or the classification results, which is the inference process.\nThe machine learning model is generated based on the training data, which is the training process.\nDuring the training process, the parameters or the model structure of the machine learning model are adjusted based on a training algorithm in order to improve the performance, e.g., the accuracy or the generalization capacity. The training algorithm is also referred to as a machine learning algorithm. The duration of the training process is the training time.\nAccording to whether the training data have labels, the training process of machine learning can be classified into four types , i.e., supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. 
Supervised learning means that a machine learning task exploits the training data composed of input features and the corresponding labels . In this paper, we focus on this type of training data. For instance, each data point in the training dataset contains $(x,y)$, where $x$ represents the input features and $y$ represents the desired output value. Unsupervised learning means that a machine learning task exploits the training data, which only consists of input features without output values; i.e., each data point only contains $x$ and does not have $y$. Semi-supervised learning means that one (generally small) part of the training data contains output values, while the other (generally large) part of the training data does not. Reinforcement learning means that each iteration in the training process considers its observation of the environment from the last iteration. \nWhen the training data becomes huge, e.g., on the order of terabytes , or when the training data is inherently distributed or too big to store on single machines , the training process is carried out using distributed resources, which is distributed machine learning. One of the important features of distributed machine learning is that it can significantly accelerate the training speed so as to reduce the training time. Diverse parallelization techniques are used in distributed machine learning. For instance, Graphics Processing Units (GPUs) using Single Instruction Multiple Data (SIMD) and Tensor Processing Units (TPUs) using Multiple Instruction Multiple Data (MIMD) are exploited . In addition, distributed machine learning takes advantage of three types of parallelism to parallelize the training process, i.e., data parallelism , model parallelism , and pipeline parallelism . 
With the data parallelism approach, the training data is partitioned into as many chunks as the number of computing resources, and all computing resources subsequently apply the same machine learning algorithm to process different chunks of the data sets . With the model parallelism approach, exact copies of the entirety of the data (the training data or the intermediate data)\nare processed by each computing resource, each of which exploits different parts of the machine learning model . The pipeline parallelism approach combines the data parallelism and the model parallelism. With this approach, each computing resource processes a part of the training data with a part of the machine learning model, while the processing, e.g., computation or communication, at each node can be parallelized . \nFL is a distributed machine learning approach where multiple users collaboratively train a model, while keeping the raw data distributed without being moved to a single server or data center .\nThe model used for FL is referred to as the FL model.\nFL was first proposed to handle the unbalanced and non-Independent and Identically Distributed (non-IID) data of the same features in mobile devices .\nThen, the concept of FL was extended to the distributed data of diverse features in multiple organizations or various regions .\nFL systems are used within one or multiple phases of the life cycle of FL models. An FL system is a distributed system to manage the distributed training process with distributed resources.\nFL is a special type of distributed machine learning, which differs from other distributed machine learning approaches in the following three points.\nFirst, FL does not allow direct raw data communication, while other approaches have no restriction. 
\nAs the raw data are of multiple ownerships, FL approaches with this restriction can meet the requirements defined by the related laws, e.g., CLPR , GDPR , PDPA , CCPA , and CPBR .\nIn particular, the consent (GDPR Article 6) and the data minimization principle (GDPR Article 5) limit data collection and storage to only what is consumer-consented and what is absolutely necessary for processing .\nSecond, FL exploits the distributed computing resources in multiple regions or organizations, while the other approaches generally only utilize a single server or a cluster in a single region, which belongs to a single organization. FL enables the collaboration among multiple organizations.\nThird, FL generally takes advantage of encryption or other defense techniques to ensure the data privacy or security, while the other approaches pay little attention to this security issue . FL promises to ensure the privacy and security of the raw data, as the leakage of information may incur significant financial and reputational losses. \nDuring the training process of FL, an optimization problem is solved as shown in Formula \\ref{eq:problem}. Given $n$ training datasets $\\mathcal{D} = \\{D_1, D_2, ..., D_n\\}$, where each data point $(x,y)\\sim\\mathcal{D}$, the problem of FL is to learn a function $\\widehat F$ from all possible hypotheses $\\mathcal{H}$, while minimizing the expectation of loss over the distribution of all the datasets $\\mathcal{D}$. \n\\begin{equation}\n\\label{eq:problem}\n \\widehat F = \\underset{F\\in\\mathcal{H}}{\\mathrm{argmin}} \\underset{(x,y)\\in\\mathcal{D}}{\\mathbb{E}} L(y, F(x)),\n\\end{equation}\nwhere $L(y, F(x))$ refers to the loss of $F(x)$ to the label $y$. 
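As a toy numerical illustration of this objective, the empirical counterpart of the expected loss averages the loss of a candidate hypothesis over every data point in all decentralized datasets. The squared loss and the scalar linear hypotheses below are illustrative simplifications, not the general setting.

```python
# Toy sketch of the empirical counterpart of the objective above:
# average the loss L(y, F(x)) of a candidate hypothesis F over every
# data point in all decentralized datasets. The squared loss and the
# scalar linear hypotheses are illustrative simplifications.

def squared_loss(y, prediction):
    return (y - prediction) ** 2

def empirical_risk(F, datasets):
    points = [(x, y) for D in datasets for (x, y) in D]
    return sum(squared_loss(y, F(x)) for x, y in points) / len(points)

# Two local datasets, both sampled from the line y = 2x.
D1 = [(1.0, 2.0), (2.0, 4.0)]
D2 = [(3.0, 6.0)]
perfect = lambda x: 2.0 * x  # zero-risk hypothesis
biased = lambda x: x         # under-predicting hypothesis

assert empirical_risk(perfect, [D1, D2]) == 0.0
```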
\nDuring the training process, the Stochastic Gradient Descent (SGD) approach is generally used to minimize the loss function using Formula \\ref{eq:gd}.\n\\begin{equation}\n\\label{eq:gd}\n F_{k+1}(x)\\gets F_k(x) - \\eta_k \\nabla F_k(x),\n\\end{equation}\nwhere $F_k(x)$ refers to the learned model in the $k^{th}$ iteration, $\\nabla F_k(x)$ refers to the gradient of the model at the $k^{th}$ iteration based on the model already obtained $F_k(x)$ and the training dataset, $\\eta_k$ refers to the learning rate, and $F_{k+1}(x)$ refers to the update model of the $k^{th}$ iteration. Within each iteration, there are two phases, i.e., forward propagation and backward propagation. The forward propagation calculates the output based on the input data $x$ using the model, while the backward propagation calculates the gradients $\\nabla F_k(x)$ and updates the model.\nWhen the calculation is distributed among multiple computing resources, the gradients or models of each computing resource are aggregated using an aggregation algorithm (see details in Section \\ref{sebsec:aggre}), in order to achieve consensus of multiple models and to generate a global model. \\liu{The learning rate can be dynamically adapted using a local adaptive optimizer, e.g., Adam, and/or cross-round learning rate schedulers .}", "id": "4f50be54-11aa-4ae8-ae5e-fb6d0e6797e6", "level": "subsection", "origin_cites_number": 27, "parent_id": "2371347b-6739-4c7c-b6c1-b23b3b02d8ed", "prefix_titles": [ [ "title", "From Distributed Machine Learning to Federated Learning: A Survey" ], [ "section", "An Overview of Federated Learning" ], [ "subsection", "Basic Concepts" ] ], "subsections": [], "title": "Basic Concepts" }, { "cite_extract_rate": 0.8, "cites": [ 591, 612, 671, 5427 ], "content": "The life cycle of an FL model is a description of the state transitions of an FL model from creation to completion . 
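Before walking through the life cycle, the local SGD update and cross-resource aggregation just described can be sketched end to end. This is a toy example with a scalar model and a squared loss; unweighted averaging is only one of several possible aggregation algorithms.

```python
# Minimal sketch of the SGD update and cross-resource aggregation from
# the Basic Concepts subsection: each computing resource performs one
# gradient step on its local data, and the local models are combined by
# unweighted averaging (one of several possible aggregation algorithms).
# The scalar model and squared loss are illustrative assumptions.

def local_sgd_step(w, data, lr=0.1):
    """One step of F_{k+1} = F_k - eta * grad on loss mean((w*x - y)^2)."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def aggregate(models):
    """Average local models into a global model."""
    return sum(models) / len(models)

w_global = 0.0
local_data = [[(1.0, 2.0)], [(2.0, 4.0)]]  # both consistent with w = 2
for _ in range(50):
    local_models = [local_sgd_step(w_global, d) for d in local_data]
    w_global = aggregate(local_models)
```

In this toy run the global model converges to the shared solution w = 2 without either resource ever exchanging its raw data.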
Lo \\textit{et al.} propose that the life cycle of an FL model consists of 8 phases: initiated, broadcast, trained, transmitted, aggregated, evaluated, deployed, and monitored. Kairouz \\textit{et al.} propose that the life cycle of an FL model includes 6 phases: problem identification, client instrumentation, simulation prototyping, federated model training, model evaluation, and deployment. However, they focus on the FL with distributed data in mobile devices. In this paper, we adopt a combination of workflow life cycle views with a few variations , condensed into four phases:\n\\begin{enumerate}\n \\item The composition phase is for the creation of an FL model, which is used to address a specific machine learning problem, e.g., classification. First, a machine learning model is created to address the problem with certain requirements, e.g., the requirement of accuracy. Then, the machine learning model is adapted to FL scenarios. For instance, if the distributed data is of different features, the machine learning model is partitioned (see details in Section \\ref{subsubsec:modelP}) to process the distributed data. \n \\item The FL training phase is for the training phase of the FL model. During this phase, a training strategy, which includes parallelism and aggregation algorithms (see details in Section \\ref{sec:pexec}), is used to update the parameters, hyper parameters, and even the structure of the network, in order to improve the accuracy and the generalization capacity of the FL model. \n \\item The FL model evaluation phase is to apply the trained FL models, in order to analyze the performance of the trained FL models on a simulation platform or a real distributed system . As a result, the FL models with the best performance are selected. 
If the FL models do not meet the requirements, the FL model should be modified, or the training phase should be carried out again.\n \\item The FL model deployment phase is to deploy the FL model in a real-life scenario to process the data. If the final model can be shared without restriction, there is no difference between the FL model deployment and the model generated from a traditional centralized approach. Otherwise, the deployment of the final model should consider the ownership of the corresponding parts. \n\\end{enumerate}", "id": "cd1b4108-9163-49ab-bfbc-9fe67cc3a340", "level": "subsection", "origin_cites_number": 5, "parent_id": "2371347b-6739-4c7c-b6c1-b23b3b02d8ed", "prefix_titles": [ [ "title", "From Distributed Machine Learning to Federated Learning: A Survey" ], [ "section", "An Overview of Federated Learning" ], [ "subsection", "FL Model Life Cycle" ] ], "subsections": [], "title": "FL Model Life Cycle" }, { "cite_extract_rate": 0, "cites": [], "content": "The functional architecture of an FL system can be layered as follows : presentation, user services, FL training, and infrastructure. Figure \\ref{fig:layers} shows this architecture. The higher layers exploit the lower layers to provide their own functionality. A user interacts with an FL system through the presentation layer and realizes independent functionalities at the user services layer. During the training phase of FL models, a Federated Learning Execution Plan (FLEP) is generated, and the corresponding distributed training is carried out at the FL training layer. The FLEP is composed of a type of parallelism, a scheduling strategy, and a fault-tolerance mechanism. The FL system manages the physical resources through the infrastructure layer for the distributed training. 
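As a minimal illustration of the FLEP composition described above (our own sketch; the field names and values are hypothetical and do not correspond to the API of any FL system), an FLEP can be represented as a small configuration object bundling the parallelism type, the scheduling strategy, and the fault-tolerance mechanism:

```python
from dataclasses import dataclass

# Illustrative representation of a Federated Learning Execution Plan (FLEP):
# a parallelism type, a scheduling strategy, and a fault-tolerance mechanism.
# All field names and values are hypothetical, for illustration only.
@dataclass(frozen=True)
class FLEP:
    parallelism: str        # e.g., "data", "model", or "pipeline"
    scheduling: str         # e.g., "centralized" or "decentralized"
    fault_tolerance: str    # e.g., "checkpoint-restart" or "task-replication"

plan = FLEP(parallelism="data",
            scheduling="centralized",
            fault_tolerance="checkpoint-restart")
```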
\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{figures/layers.pdf}\n \\caption{\\liu{Functional architecture of an FL system.}}\n \\label{fig:layers}\n\\end{figure}", "id": "78f72b80-7c28-4237-9e16-d97c8c144603", "level": "subsection", "origin_cites_number": 1, "parent_id": "2371347b-6739-4c7c-b6c1-b23b3b02d8ed", "prefix_titles": [ [ "title", "From Distributed Machine Learning to Federated Learning: A Survey" ], [ "section", "An Overview of Federated Learning" ], [ "subsection", "Functional Architecture of FL Systems" ] ], "subsections": [ "afff657f-2ca3-4a0d-b609-6427f5eee76c", "8ca8a082-7841-461b-ad59-49e7d8fef8fe", "6af71683-cb7e-4e75-87a8-c21ef2982fdc", "cfb855a7-c817-4897-9235-30d180850302" ], "title": "Functional Architecture of FL Systems" }, { "cite_extract_rate": 0.2, "cites": [ 656 ], "content": "The presentation layer is a User Interface (UI) for the interaction between Users and FL systems at one or multiple stages of the FL model life cycle. The UI can be textual or graphical. This interface is responsible for designing a new FL model or choosing an existing machine learning model as an FL model. In addition, this layer also supports the modules at the user services layer, e.g., shows the status of the distributed training process. The textual UI is largely used for designing FL models \\liu{based on the command line or scripts }. The models can be directly expressed using Python, with the textual interface in PaddleFL , TensorFlowFL , PySyft , and FATE . A graphic UI can make the interaction more practical, while the users can drag or drop the data processing element to design an FL model. For instance, FATE provides a Graphic UI (GUI) through a web portal. 
However, the graphic portal also exploits textual programming languages as inner representations of an FL model.
The user service layer supports the expected functionalities, i.e., \liu{monitoring, steering, and logging; interpretability and explainability; and graph data.} Monitoring enables the users to get the real-time status of the distributed training process. As the training process of FL models can be very long, e.g., from several hours to days, it is of much importance to track the execution status, which allows the user to verify whether the training proceeds normally. The logging service is generally supported by major FL systems and can be used to analyze the training process. In addition, the log generated during the training process can be used to debug the system or adjust the FL model. FATE provides a visual monitoring board to users through its GUI. When there are unexpected results or errors during the training process, steering enables users to adjust the training process, in order to reduce the time necessary to carry out the distributed training from scratch. Most FL systems enable the users to stop the training, while the adjustment of parameters is not fully supported by major FL systems. The interpretability of FL is to describe the internals of an FL system in a way that is understandable to humans. 
The explainability focuses on explaining the representation of data inside an FL model. With interpretability and explainability, the FL system can \liu{provide} a description of the results of the trained FL model based on the training data and the distributed training process. Shapley values have been used to provide interpretability, while both interpretability and explainability remain open challenges, as each is hard to fully support. 
\liu{In the real world, graph data widely exist in multiple domains, and a number of FL approaches have been proposed to handle decentralized graph data for community detection, financial crime, and especially knowledge graph completion.
FL is particularly useful in the field of knowledge graph completion, as a knowledge graph may contain not only text but also images or other types of data, i.e., multimodal knowledge graphs; the completion is realized in a collaborative fashion within an FL system.
The decentralized graphs can be inter-graph, i.e., the decentralized data belongs to multiple graphs, or intra-graph, i.e., the decentralized data is within one big graph, while most of the existing works focus on the intra-graph situation.
Horizontal FL techniques can be applied to Graph Neural Networks (GNNs) with encryption techniques (see details in Section \ref{subsubsec:infrastructure}), while the performance of FL may be much worse than that of centralized GNNs.
Aggregation algorithms (see details in Section \ref{subsubsec:centralized}) have also been proposed based on FedAvg or optimal transportation. 
\nIn addition, decentralized aggregation algorithms (see details in Section \\ref{subsubsec:decentralized}) are also proposed to deal with the decentralized graph data for social network and drug discovery .\nWhile the fine-tuning of the FL system is time-consuming, Bayesian optimization and evolutionary optimization strategies are utilized to automatically tune the hyper-parameters and the network structure, respectively. \nGraph data can be vertically distributed where the features of the nodes are distributed across multiple data owners, and a data owner may or may not have the edges. Vertical FL exploits embeddings or autoencoders to represent the nodes, which can be transferred to train a GNN. In addition, the differential privacy (see details in Section \\ref{subsubsec:infrastructure}) is combined with the embeddings to protect the data privacy of knowledge graphs. }", "id": "8ca8a082-7841-461b-ad59-49e7d8fef8fe", "level": "subsubsection", "origin_cites_number": 18, "parent_id": "78f72b80-7c28-4237-9e16-d97c8c144603", "prefix_titles": [ [ "title", "From Distributed Machine Learning to Federated Learning: A Survey" ], [ "section", "An Overview of Federated Learning" ], [ "subsection", "Functional Architecture of FL Systems" ], [ "subsubsection", "User Services Layer" ] ], "subsections": [], "title": "User Services Layer" }, { "cite_extract_rate": 0.5, "cites": [ 590 ], "content": "The FL training layer carries out the distributed training process with distributed data and computing resources. This layer consists of three modules: parallelization, scheduling, and fault-tolerance. FL parallelization exploits diverse types of parallelism, e.g., data parallelism, model parallelism, and pipeline parallelism, to generate executable tasks. Through the FL scheduling module, an FL system produces a Scheduling Plan (SP) of executable tasks, which aims at fully exploiting distributed computing resources and preventing training stalling. 
During the training process, the SP is generally defined by a training algorithm, which aggregates the updates, i.e., gradients or models, from each computing resource in order to generate a final machine learning model. The FL fault-tolerance mechanism handles the failures or errors of task execution and of the connection of distributed resources. Reactive approaches are generally exploited, e.g., checkpointing, restart, and task replication. A reactive approach reduces the effect of failures after detecting them. An FLEP, which captures the execution directives, typically the result of compiling and optimizing the training process of FL models, is generated at this layer.
\label{subsubsec:infrastructure}
The infrastructure layer provides the interaction between an FL system and the distributed resources, including the computing resources, storage resources, network resources, and data resources. This layer contains three modules: a data security module, a data transfer module, and a distributed execution module. The data security module generally exploits Differential Privacy (DP) and encryption techniques, e.g., homomorphic encryption, to protect the raw data used during the training process. Although the raw data cannot be directly transferred, intermediate data, e.g., the gradients or models, can be communicated among distributed computing resources. 
The data transfer module exploits data compression techniques to improve the data transfer efficiency. 
At this layer, the FLEP generated at the FL training layer is carried out within the distributed execution module; i.e., concrete tasks are executed in distributed computing resources.
\label{sec:pexec}
In this section, we present the distributed training process for FL. First, we present three types of parallelism in distributed training and their application within FL. The parallelism approaches are generally implemented in the parallelization module. Then, we discuss existing aggregation algorithms for the distributed training, which are implemented in the scheduling module.
Three types of parallelism exist for distributed machine learning: data parallelism, model parallelism, and pipeline parallelism. FL can be classified into three types, i.e., horizontal, vertical, and hybrid. 
The horizontal FL generally exploits data parallelism, and the vertical FL typically takes advantage of model parallelism. However, the hybrid FL relies on transfer learning, which is not a parallelism approach and is out of the scope of this paper. 
\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.65\textwidth]{figures/dnn.pdf}
    \caption{An example of a neural network.}
    \label{fig:dnn}
\end{figure}
In this section, we take the example of the neural network shown in Figure \ref{fig:dnn} to explain the parallelism. In the example, we assume that the model contains eight data processing nodes (neurons) organized in layers, i.e., $A_1$, $A_2$, $B_1$, $B_2$, $B_3$, $C_1$, $C_2$, $D$. The arrows represent the data flow among different data processing nodes. The execution of the data processing nodes at each layer can be carried out in parallel, while the execution of different layers should be performed sequentially. The input data contains 4 data points. We assume two (or three) computing resources owned by two (or three) users, each holding a part of the input data.
\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.6\textwidth]{figures/datap.pdf}
    \caption{Data parallelism. The forward and backward process of $I^1$ and $I^2$ is performed in computing resource 1, while that of $I^3$ and $I^4$ is performed in computing resource 2 at the same time. 
Then, the local models or gradients are transferred to calculate an average model or gradient, which is sent back to each computing resource for the following training.}
    \label{fig:datap}
\end{figure}
Data parallelism is realized by having the data processing performed in parallel at different computing resources, with the same model, on different data points. As shown in Figure \ref{fig:datap}, data parallelism is exploited when the ensemble of data points is distributed among different computing resources. During the training process of FL, the training data is not transferred among different computing resources, while the intermediate data, e.g., the models or the gradients $\nabla F_k(x)$ in Formula \ref{eq:gd}, are transferred. 
The data in each computing resource can be Independent and Identically Distributed (IID) or non-IID. FL focuses on non-IID data, while other distributed machine learning approaches mainly focus on IID data. 
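The data-parallel training step described above can be sketched as follows (a minimal example of our own, with a hypothetical linear model and random data; in an actual FL system only the gradients, not the raw data, would leave each computing resource):

```python
import numpy as np

def local_grad(w, X, y):
    # Gradient of the mean squared error of a linear model y ~ X @ w,
    # computed locally on one computing resource's data shard.
    return 2.0 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(4, 2)), rng.normal(size=4)
# I^1, I^2 on computing resource 1; I^3, I^4 on computing resource 2.
shards = [(X[:2], y[:2]), (X[2:], y[2:])]

w = np.zeros(2)
local_grads = [local_grad(w, Xi, yi) for Xi, yi in shards]  # computed in parallel
avg_grad = np.mean(local_grads, axis=0)                     # aggregation step
w = w - 0.1 * avg_grad                                      # one SGD update with the averaged gradient
```

With equally sized shards, the averaged shard gradients equal the full-batch gradient, so each step matches centralized training on the pooled data.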
\nWith the data parallelism, the FL is horizontal , i.e., the data and the calculation are horizontally distributed among multiple computing resources.\nIn addition, this parallelism generally corresponds to the cross-device FL , where a large number of devices (mobiles or edge devices) collaboratively participate in training a single global model to have good accuracy.\nWhen the number of devices is small, e.g., 2-100, and the computing resources are from diverse organizations, this parallelism also corresponds to cross-silo FL . In addition to the general data-parallel schemes for federated learning, some specific privacy-preserved distributed statistical tricks have been invented for federated sparse models~.\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=0.75\\textwidth]{figures/modelp.pdf}\n \\caption{Model parallelism. The dashed arrows represent inter-computing resource communication. For each input data point $I$, different parts, i.e., $I_{A_1}$ and $I_{A_2}$, are distributed at different computing resources.}\n \\label{fig:modelp}\n\\end{figure}", "id": "0430fcfa-cd31-4b1c-bf51-10d342c0d942", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "9be6bb25-a2cc-4134-abfd-4e31f8d8b0b1", "prefix_titles": [ [ "title", "From Distributed Machine Learning to Federated Learning: A Survey" ], [ "section", "Distributed Training" ], [ "subsection", "Parallelism \\& FL Types" ], [ "subsubsection", "Data Parallelism" ] ], "subsections": [], "title": "Data Parallelism" }, { "cite_extract_rate": 0.625, "cites": [ 600, 5439, 671, 668, 591 ], "content": "\\label{subsubsec:modelP}\nModel parallelism is realized by having independent data processing nodes distributed at different computing resources, so as to process the data points of specific features. 
\nTwo data processing nodes can be either independent, i.e., the execution of any node does not depend on the output of the other one; or dependent, i.e., there is a data dependency between them .\nAs shown in Figure \\ref{fig:modelp}, model parallelism is achieved when different parts of each data point are distributed at different computing resources. \nFor instance, the data process on Node $A_1$ and that of $A_2$ can be carried out in parallel.\nWith the model parallelism, vertical FL, where the data points and calculation are vertically distributed among multiple computing resources , is realized.\nIn this case, the original model needs to be partitioned to be distributed at different computing resources.\nTwo organizations generally apply this type of FL when each organization owns parts of the features of users and they would like to collaboratively train a model using the data of all the features, which corresponds to cross-silo FL .\nMost studies of vertical federated learning only support two parties (with or without a central coordinator) . \nFor instance, SecureGBM is proposed to train a tree-based Gradient Boosting Machine (GBM).\nIn order to support multiple parties, the idea of multi-view learning is exploited in a multi-participant, multi-class vertical federated learning framework .\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=0.7\\textwidth]{figures/pipelinep.pdf}\n \\caption{Pipeline parallelism. 
The dashed arrows represent inter-computing resource communication.}
    \label{fig:pipelinep}
\end{figure}
Pipeline parallelism is realized by having dependent data processing nodes distributed at different computing resources. As shown in Figure \ref{fig:pipelinep}, the data processing nodes are distributed at multiple computing resources. While data point $I^3_A$ is processed in computing resource 1, the outputs of $A_1$ and $A_2$ are processed in computing resource 2, and the outputs of $B_1$, $B_2$, and $B_3$ are processed in computing resource 3. With this type of parallelism, the dependent data processing nodes can process the data in parallel. As this parallelism may incur many inter-computing resource data transfers, it is not widely used for FL.
\label{sebsec:aggre}
With horizontal FL and data parallelism, aggregation algorithms are used to aggregate the models or gradients generated from the forward and backward propagation in each computing resource. 
The aggregation algorithms can be centralized, hierarchical, or decentralized. The centralized aggregation algorithms generally rely on a centralized server, i.e., a parameter server, to synchronize or schedule the execution of distributed computing resources, while hierarchical aggregation algorithms rely on multiple parameter servers for the model aggregation.
The decentralized aggregation algorithms make each computing resource equally perform the calculation based on a predefined protocol, without relying on a centralized server. \liu{Please refer to the related literature for the details of federated optimization. The characteristics are summarized in Table \ref{tal:aggregation}, which can be used to choose appropriate algorithms in a specific situation. }
\begin{table}[ht]
\centering
\caption{\liu{Comparison among diverse types of aggregation algorithms. ``Complexity'' represents the complexity of implementing the algorithms (``H'' represents high complexity, ``M'' represents medium complexity, and ``L'' represents low complexity). ``Trust'' represents whether the aggregation algorithms require that the data owners trust the centralized server. ``Imbalance'' represents whether the algorithms can address unbalanced data. ``High-latency'' represents whether the algorithms can support high-latency model or gradient data transfer. 
``Y'' represents that the algorithms support the functionality, while ``N'' represents that the algorithms do not support the functionality.}}
\label{tal:aggregation}
\begin{tabular}{|c|c|c|c|c|}
\hline
Type & Complexity & Trust & Imbalance & High-latency \\
\hline
Centralized & L & Y & N & N \\
\hline
Hierarchical & M & Y & Y & Y \\
\hline
Decentralized & H & N & N & Y \\
\hline
\end{tabular}
\end{table}
\label{subsubsec:centralized}
\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.7\textwidth]{figures/cenagg.png}
    \caption{The architecture of centralized aggregation. ``CR'' represents computing resource. $\omega$ represents the local parameters or the weights of the model calculated in each computing resource. $\mathrm{g}$ represents the local gradients in backward propagation in each computing resource. $\overline{\omega}$ represents the global model calculated in the parameter server. 
$\\overline{\\mathrm{g}}$ represents the global gradients calculated in the parameter server.}\n \\label{fig:centra}\n\\end{figure}\nAs shown in Figure \\ref{fig:centra}, a single parameter server is used to calculate the average models or gradients sent from multiple computing resources (mobiles).\nThe weights of the model (model) or the gradients are calculated in each computing resource, which are transferred to a parameter server. \nThe parameter server calculates global gradients or global models according to a centralized aggregation algorithm.\nThe global gradients or global models are transferred to each computing resource for the following computation. \nThe update of the model is based on the SGD defined in Formula \\ref{eq:gd} in both computing resources, or on the parameter server.\nA bunch of centralized aggregation algorithms have been proposed.\nFederated Averaging (FedAvg) algorithm is introduced as the aggregation method in Google's implementation of an FL system. A centralized server aggregates the machine learning models from selected users. Then, a global model is generated using a weighted sum of each aggregated machine learning model. Afterward, the global model is shared with selected users, and the training process is continued in the computing resource of selected users. \nHowever, the trained model of FedAvg may be biased towards computing resources with favorable network conditions . \nWhile FedAvg is a straightforward approach, some other methods are proposed to address additional problems. \nA Federated Stochastic Block Coordinate Descent (FedBCD) algorithm is proposed to reduce the number of communication rounds by enabling multiple local updates before the model communication between a user and the server. In addition, FedBCD also considers the regularization during the training process. 
The training problem with regularization can be formulated as:
\begin{equation}
\label{eq:regularizer}
 \widehat F = \underset{F_\theta\in\mathcal{H}}{\mathrm{argmin}} \underset{(x,y)\sim\mathcal{D}}{\mathbb{E}} L(y, F_\theta(x)) + \lambda \cdot \gamma(\theta),
\end{equation}
where $F$, $\mathcal{D}$, and $\mathcal{H}$ are the same as those in Formula \ref{eq:problem}, while $\gamma(\cdot)$ denotes the regularizer and $\lambda$ is the corresponding hyper-parameter. The regularization is exploited to improve the generalization capacity of the trained machine learning model. As the fairness among multiple users is important for an FL system, the Stochastic Agnostic Federated Learning (SAFL) algorithm and the FedMGDA+ algorithm are proposed to achieve fairness during the training process of FL. Fairness here means that the data distribution among multiple users is equally considered, without the influence of unrelated factors. 
Fairness may also refer to two other concepts: (1) a user gets a final model according to its contribution; and/or (2) a uniform accuracy distribution among all the distributed computing resources, which are out of the scope of this paper.
In addition, as the computing resources may be heterogeneous, FedProx is proposed to tackle the heterogeneity in an FL system. FedProx enables multiple iterations in each computing resource, while minimizing a cost function based on the local loss function and the global model. Furthermore, in order to address the permutation of data processing nodes during the training process, Federated Matched Averaging (FedMA) is proposed. FedMA exploits an existing approach, i.e., BBP-MAP, to generate a matrix, in order to align the data processing nodes of the models from the computing resources and the server. SCAFFOLD is proposed to reduce the communication rounds, using stateful variables in the distributed computing resources. 
An attention-augmented mechanism is exploited in Attentive Federated Aggregation (FedAttOpt) to aggregate the knowledge generated from each computing resource (client), based on the contribution of the model from each client. The attention-augmented mechanism helps reduce the communication rounds.
When the data distribution is heterogeneous among users, personalization remains an open problem. In order to address this problem, the model can be split into local layers and global layers, which has been proposed in adaptive personalized federated learning (APFL), FedPer, and pFedMe. The local layers are trained with the decentralized data in each computing resource of users, while the global layers are trained in the computing resources of users and the server. However, it is difficult to choose a dataset and its partition among clients to measure the personalization brought by APFL or FedPer, so as to prove the improvement compared with FedAvg.
In addition, knowledge distillation can also be exploited to aggregate the models, while requiring that there is data in the centralized server.
All these algorithms can handle non-IID data. A comparison among the aforementioned algorithms is presented in Table \ref{tal:aggre}. 
\begin{table}[ht]
\centering
\caption{Comparison among aggregation algorithms. ``Reg'' represents regularization. ``Heterogeneity'' represents that the computing resources are heterogeneous. ``Fairness'' represents that the data distribution among multiple users can be equally considered without the influence of unrelated factors. ``Permutation'' refers to the permutation of data processing nodes during the training process. ``C-E'' represents communication efficiency. ``S'' represents that the algorithm supports the functionality, while ``N'' represents that the algorithm does not support it. 
}
\label{tal:aggre}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Algorithm & Reg & Fairness & Heterogeneity & Permutation & C-E \\
\hline
FedAvg & N & N & N & N & N \\
\hline
FedBCD & S & N & N & N & S \\
\hline
SAFL & N & S & N & N & N \\
\hline
FedMGDA+ & N & S & N & N & S \\
\hline
FedProx & N & N & S & N & S \\
\hline
FedMA & N & N & N & S & S \\
\hline
SCAFFOLD & N & N & N & N & S \\
\hline
FedAttOpt & N & N & N & N & S \\
\hline
\end{tabular}
\end{table}
\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.95\textwidth]{figures/hier.png}
    \caption{The architecture of hierarchical aggregation. ``CR'' represents computing resource. $\omega$ represents the local parameters or the weights of the model calculated in each computing resource. $\mathrm{g}$ represents the local gradients in backward propagation in each computing resource. $\overline{\omega}$ represents the region or global model calculated in each parameter server. $\overline{\mathrm{g}}$ represents the region or global gradients calculated in each parameter server. The region model or gradients are calculated by a region parameter server, while the global model or gradients are calculated by a global parameter server.}
    \label{fig:hier}
\end{figure}
As shown in Figure \ref{fig:hier}, a hierarchical architecture is also exploited using multiple parameter servers. 
A two-layer hierarchical architecture is proposed to reduce the time to transfer models between a parameter server and computing resources . The hierarchical architecture uses a global parameter server (GPS) and multiple region parameter servers. Each region parameter server (RPS) is implemented in a cell base station to which the computing resources (mobiles) can connect with low latency. A hierarchical algorithm, i.e., Hierarchical Federated Learning (HFL), is deployed to realize the model aggregation. Within each iteration of HFL, each RPS calculates an average model using the models of the computing resources within its cluster. It sends the averaged model to the GPS, and it receives a globally averaged model every few iterations. Afterward, it broadcasts the averaged model to all its computing resources. Some other algorithms, e.g., HierFAVG , HFEL , and LanFL , are similar to HFL, except that the region-level parameter server is an edge or Local-Area Network (LAN) parameter server and the global parameter server is implemented on the cloud or over a Wide-Area Network (WAN). These algorithms take advantage of the hierarchical architecture to reduce high-latency model or gradient transfers, so as to accelerate the training process. 
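The two-level averaging of HFL-style algorithms can be sketched in a few lines of Python. Weighting each region by its client count is a simplifying assumption of this sketch, and all function names and values are illustrative rather than taken from any cited system.

```python
def region_average(models):
    """Each region parameter server (RPS) averages its clients' models."""
    dim = len(models[0])
    return [sum(m[d] for m in models) / len(models) for d in range(dim)]

def global_average(region_models, region_sizes):
    """The global parameter server (GPS) averages the region models,
    weighted by the number of clients per region (a simplifying choice
    made for this sketch)."""
    total = sum(region_sizes)
    dim = len(region_models[0])
    return [sum((n / total) * m[d] for m, n in zip(region_models, region_sizes))
            for d in range(dim)]

# Two regions with 2 and 3 clients (illustrative one-parameter models).
region_a = [[1.0], [3.0]]            # RPS average: [2.0]
region_b = [[4.0], [5.0], [6.0]]     # RPS average: [5.0]
regions = [region_average(region_a), region_average(region_b)]
global_model = global_average(regions, [len(region_a), len(region_b)])
# weighted result: 0.4 * 2.0 + 0.6 * 5.0 = 3.8
```

Because only the region averages cross the high-latency link to the GPS, the per-iteration wide-area traffic is proportional to the number of regions rather than the number of clients.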
In addition, by clustering the computing resources into well-chosen groups, the hierarchical architecture is also exploited to address unbalanced data distributed among multiple computing resources , or to address data privacy .
\label{subsubsec:decentralized}
\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.8\textwidth]{figures/decentral.png}
    \caption{The architecture of decentralized aggregation. ``CR'' represents computing resource. $\omega$ represents the local parameters, or the weights of the model calculated in each computing resource. $\mathrm{g}$ represents the local gradients in backward propagation in each computing resource. When two computing resources are neighbors, they can communicate with each other.}
    \label{fig:decentralized}
\end{figure}
While collaboratively training a machine learning model with a decentralized aggregation algorithm, the computing resources can be organized with a {\em connected} topology and can communicate in a peer-to-peer manner, as shown in Figure \ref{fig:decentralized}. The degree and connectivity of the topology affect the communication efficiency and the convergence rate of the aggregation algorithm. 
For a given topology, we define $w_{i,j}$, the weight that scales the information flowing from node $j$ to node $i$, as follows\vspace{-2mm}
\begin{align}\label{wij}
w_{ij}
\begin{cases}
> 0 & \mbox{if node $j$ is connected to node $i$, or $i=j$;} \\
= 0 & \mbox{otherwise.}\vspace{-2mm}
\end{cases}
\end{align}
We further define the {\bf topology matrix} $W = [w_{ij}]_{i,j=0}^{n-1} \in \mathbb{R}^{n\times n}$ as the matrix that represents the topology. 
In the remainder of this paper, 
we assume that $W$ satisfies 
$W\mathds{1} = \mathds{1}$ 
and $\mathds{1}^T W = \mathds{1}^T$, 
i.e., both the row sum and the column sum of $W$ are equal to $1$, 
so as to guarantee that the neighborhood averaging will asymptotically approach the global averaging . 
When a computing resource $j$ is directly connected to computing resource $i$, i.e., $w_{i,j} \neq 0$, computing resource $j$ is a neighbor of computing resource $i$.
\liu{Please note that the weight $w_{i,j}$ denotes the confidence node $i$ has in the information it receives from node $j$ , which is different from the bandwidth or data transfer capacity in a network.}
The centralized aggregation algorithm is a special type of decentralized aggregation with a star topology, in which only the centralized server communicates with its neighbors.
\liu{A well designed topology, e.g., an exponential graph , can improve the convergence rate, which accelerates the training speed.}
With decentralized SGD (D-SGD), each computing resource maintains a local copy of 
the global model parameters, and it updates the local copy using the models of its neighbors. 
According to the order in which neighborhood averaging and gradient descent are conducted, D-SGD has two common types of realizations: Average-With-Communication (AWC) and Average-Before-Communication (ABC) .
AWC can overlap communication and gradient computation, while ABC needs to sequentially calculate and communicate the gradient or model. 
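The neighborhood-averaging step shared by these D-SGD variants can be sketched with a doubly stochastic topology matrix $W$; the matrix and values below are illustrative, not drawn from any cited system.

```python
# A doubly stochastic topology matrix for a fully connected 3-node
# graph: every row and column sums to 1, so repeated neighborhood
# averaging approaches the global average.
W = [[0.5,  0.25, 0.25],
     [0.25, 0.5,  0.25],
     [0.25, 0.25, 0.5]]

def neighborhood_average(W, x):
    """One mixing step of D-SGD: node i replaces its local value with
    sum_j w_ij * x_j over itself and its neighbors."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

x = [0.0, 3.0, 6.0]          # one scalar parameter per node
for _ in range(50):          # repeated mixing converges to the mean
    x = neighborhood_average(W, x)
# each entry of x is now close to 3.0, the global average
```

In a full D-SGD iteration, each node would interleave this mixing step with a local gradient step; the convergence speed of the mixing depends on the second-largest eigenvalue of $W$, which is why the topology matters.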
However, ABC is robust , and it converges fast in terms of iterations by exploiting a large learning rate.
In addition, the decentralized aggregation algorithms can be classified into Full Communication (FC) and Partial Communication (PC) according to the number of neighbors involved in each averaging step. Within the iterations of FC, each computing resource calculates an averaged model or gradient based on the latest models or gradients from all its neighbors. However, within the iterations of PC, each computing resource calculates an averaged model or gradient based on one or multiple chosen neighbors. With PC, the selection of the neighbors can be based on a gossip algorithm . For instance, a random neighbor can be selected ; alternatively, only the neighbors that provide benign models are selected, in order to avoid attacks .
\label{sec:security}
At the infrastructure layer of an FL system, there are three types of data manipulation: data security mechanisms, data transfer, and distributed data processing within the distributed execution module. 
We first present the techniques for the distributed execution in an FL system. 
Then, we present the techniques for data transfer during the training process of an FL system.
Finally, as data security is of much importance to an FL system , we present the techniques to protect data security.
While the bandwidth within a single data center is high, e.g., InfiniBand, the High Performance Computing (HPC) libraries, e.g., Message Passing Interface (MPI) or NVIDIA Collective Communications Library (NCCL) , are widely exploited for distributed data processing . With MPI or NCCL, the gradients or models in each computing resource can be easily aggregated using the ring-AllReduce algorithm . However, one of the drawbacks of the HPC libraries is that they lack support for fault-tolerance, as the HPC libraries are designed for high-performance servers with high-quality networks. When any computing resource within the network becomes unavailable, the distributed training process may be broken.
However, as an FL system is generally implemented for the collaboration of large numbers of mobile device users or different organizations, the network connection among computing resources is of moderate quality, i.e., the bandwidth is not as good as that within a single data center, and the latency is high.
For instance, the Internet upload speed is typically much slower than the download speed . 
Also, some users with unstable wireless communication channels may consequently drop out due to disconnection from the Internet .
In this environment, the connection between computing resources and parameter servers is likely to fail. Thus, Remote Procedure Call (RPC) frameworks are widely exploited, as such frameworks can ignore the disconnected computing resources and continue the distributed training of an FL system , e.g., PaddleFL , PySyft , or TensorflowFL .
As the network connection is of moderate quality, the data transfer module mainly focuses on data compression to transfer intermediate data, e.g., gradients or models. 
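Before surveying specific compression schemes, the round-trip of one such scheme, top-$k$ sparsification, can be sketched in a few lines; the function names are ours, not from any cited system.

```python
def topk_sparsify(grad, k):
    """Client side: keep only the k largest-magnitude entries and send
    (index, value) pairs instead of the dense gradient."""
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]))[-k:]
    return [(i, grad[i]) for i in idx]

def desparsify(pairs, dim):
    """Server side: rebuild a dense (mostly zero) gradient before
    aggregation."""
    dense = [0.0] * dim
    for i, v in pairs:
        dense[i] = v
    return dense

g = [0.01, -2.0, 0.3, 1.5, -0.02]
pairs = topk_sparsify(g, k=2)        # only 2 of 5 entries are sent
g_hat = desparsify(pairs, len(g))
# g_hat keeps the two dominant entries -2.0 and 1.5, zeros elsewhere
```

The bandwidth saving grows with the model size: for a model with millions of parameters, transferring only the top fraction of a percent of entries per round can cut the traffic by orders of magnitude, at the price of some information loss per round.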
Sketched updates are proposed for gradient compression to accelerate the data transfer during the distributed training within a single data center .
With data parallelism and a centralized aggregation algorithm, before being sent, the intermediate data can be sketched with subsampling , quantization , sparsification , or projection to lower dimensional spaces , in each computing resource, in order to reduce the cost of transferring data.
Subsampling refers to transferring only a random subset of the intermediate data .
Quantization methods encode each value using a fixed number of bits, so as to reduce the length of gradients or models .
With the sparsification approach, only selected parts of the intermediate data are transferred, while the selection is based on a threshold, e.g., the gradients larger than a threshold are selected . Then, when the intermediate data are received at the server, they are decompressed to be aggregated according to the aggregation algorithms presented in Section \ref{subsubsec:centralized}. The convergence of the quantization approach is analyzed in , which shows that this approach can also provide good convergence rates . In addition, irrelevant intermediate data can be precluded from being transferred to the server, in order to substantially reduce the communication overhead .
Data security is of much importance for data processing. Lapses in data security can incur significant financial and reputational losses. 
For instance, Uber had to pay \$148,000,000 to settle the investigation into a breach of 600,000 drivers’ personal information in 2016 . 
Data security mainly includes two aspects, i.e., data privacy and model security.
Data privacy refers to the protection of raw data to avoid raw data information leakage during or after the distributed training of FL systems.
Model security refers to the protection of trained models, in order to avoid incorrect outputs incurred by malicious attacks.
In this section, we first present the techniques to protect data privacy. Then we present the defense methods for model security.
The techniques to protect data privacy consist of four types: Trusted Execution Environment (TEE), encryption, Differential Privacy (DP), and anti-Generative Adversarial Network (GAN) methods. These techniques can be combined in FL systems, e.g., the combination of DP and TEE in , the combination of encryption and DP , and the combination of DP and anti-GAN .
A TEE is an environment where the execution is secured and no information can be leaked to unauthorized users. 
The Intel SGX technique was first proposed as a secure environment, providing a set of security-related instruction codes built into Intel Central Processing Units (CPUs). 
\nThen, the implementation of machine learning models has been carried out in the TEE, i.e., Intel SGX, in order to enable collaborative data analysis based on machine learning algorithms while providing a security guarantee . Afterwards, the TEE has been exploited in FL systems, in order to protect the privacy of data in two ways. The first way is to put the entire training process in the TEE of each distributed computing resource to protect the data privacy during the distributed training . The second way is to use TEE to check a small part of the distributed training, while exploiting insecure computing resources, e.g., GPUs, to reduce the training time . \nAs an encryption technique, homomorphic encryption has been used to ensure the data privacy for FL systems .\nHomomorphic encryption allows specific types of computations to be carried out on encrypted input data, and to generate an encrypted result, which matches the result of the same computations on the decrypted input data. \n\\liu{Two main branches of homomorphic encryption exist, i.e., fully homomorphic encryption and partially homomorphic encryption. \nThe fully homomorphic encryption supports both addition and multiplication on ciphertext, while partially homomorphic encryption only supports either an addition or a multiplication operation on ciphertext, which corresponds to less computational flexibility and better runtime efficiency. \nBoth the fully and partially homomorphic encryptions can be exploited with the horizontal and vertical federated learning.}\nAs sharing gradients also leaks the information of training data \\liu{in horizontal federated learning} , it is of much importance to protect the privacy of the intermediate data.\nThus, the intermediate data can be encrypted using a homomorphic encryption algorithm before being sent to a parameter server . 
In this way, the intermediate data remain encrypted during the aggregation process, while only the computing resources can decrypt the encrypted intermediate data. 
Even if the transferred encrypted intermediate data is leaked, the information of gradients or models remains safe, and the privacy of the training data is ensured.
\liu{In addition, partial homomorphic encryption, e.g., Paillier , is exploited in vertical federated learning .}
However, homomorphic encryption incurs significant costs in computation and communication during distributed training . In order to reduce the overhead of homomorphic encryption, a set of quantized gradients is encrypted . 
Differential Privacy (DP) protects the data privacy by adding artificial noise to a small part of raw data, while ensuring that the modification does not substantially affect the performance of the machine learning models . 
DP is widely used in FL systems as the first step to process the raw data, and the output is the training data to be used for the distributed training . With more added noise, the privacy is better protected, i.e., there is less possibility of leaking raw data information, while the machine learning models take more time to converge . A trade-off between privacy protection and convergence performance can be made by selecting a certain number of distributed resources . However, DP may not be able to ensure the data privacy under certain attacks, e.g., Generative Adversarial Network (GAN) attacks .
A well-trained machine learning model can leak information about the training data based on the intermediate data, e.g., gradients . GANs can be used to generate data similar to the training data based on a well-trained machine learning model in either a parameter server or a distributed computing resource . 
Using GANs, the adversary can reconstruct other participating clients’ private data, even without knowledge of the label information.
Thus, during the distributed training process of FL systems, a malicious user can exploit GANs to infer the training data of other users. DP can be used to prevent the GAN-based attack . In addition, fake training data can be generated based on a GAN and original raw data, which is then used during the distributed training process to prevent the GAN-based attack .
We mainly focus on poisoning attacks in this section. 
The objective of poisoning attacks is to reduce the accuracy of machine learning models using artificially designed data, i.e., data poisoning, or models, i.e., model poisoning, in one or several distributed computing resources during the model aggregation process (see details in Section \ref{subsubsec:centralized}). There are two ways to carry out poisoning attacks, i.e., data poisoning and model poisoning. 
Data poisoning can be realized by modifying the features or the labels of the input data. For instance, malicious users can modify the data points of a certain class $C$ to other classes, and they can then use the modified data points to participate in the distributed training. 
The modification of the labels is known as the label flipping attack.
As a result, the trained model has low accuracy on Class $C$ . 
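A label-flipping attack requires only a relabeling step on the malicious client before local training, as the following sketch shows; the names and values are illustrative.

```python
def flip_labels(labels, source_class, target_class):
    """Malicious client: relabel every sample of source_class as
    target_class before local training (a label-flipping attack)."""
    return [target_class if y == source_class else y for y in labels]

y = [0, 1, 2, 1, 0, 2]
y_poisoned = flip_labels(y, source_class=2, target_class=0)
# a model trained with y_poisoned pushes the global model toward
# misclassifying class 2 as class 0
```

Because the honest clients' updates partially counteract the poisoned ones, the attack's strength in practice depends on the fraction of malicious clients and on how often they are selected for aggregation.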
\nModel poisoning refers to the attacks in which the updated intermediate data, e.g., gradients or models, are poisoned before being sent to a parameter server in order to reduce the accuracy of the trained model . The goal of the model poisoning is to reduce the performance of the trained model on targeted tasks or classes, while the performance of the model remains unchanged in terms of other tasks or classes . \nData poisoning eventually realizes the model poisoning, as it enables some computing resources to update poisoned intermediate data based on the calculation of poisoned training data . \nHowever, model poisoning can be more powerful than data poisoning, as model poisoning directly influences the weights of the models and trains in a way that benefits the attack .\nBoth the data poisoning and the model poisoning rely on the backdoor attacks to modify the training data or the intermediate data . Backdoor attacks are performed by embedding the hidden instructions into machine learning models, so that the infected model performs well on benign testing samples when the backdoor is not activated, while its prediction will be changed to the attacker-specified target label when the backdoor is activated by the attacker . \nIn order to defend against these data attacks or model attacks, the malicious users should be identified by analyzing the updated intermediate data using dimensionality reduction methods, e.g., Principal Component Analysis (PCA) , anomaly detection , or interpretability techniques .\nIn addition, the model poisoning can be incurred by Byzantine failures of certain distributed computing resources . With Byzantine failures, some computing resources (bad users) are manipulated by attackers during the distributed training process, which significantly degrades the performance of the global model in terms of test error . 
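To make the effect concrete, the sketch below contrasts plain averaging with a coordinate-wise median under a single Byzantine update; the median is a standard robust aggregation rule, used here purely for illustration.

```python
from statistics import median

def mean_aggregate(updates):
    """Plain averaging: a single Byzantine update can drag the result
    arbitrarily far from the honest models."""
    dim = len(updates[0])
    return [sum(u[d] for u in updates) / len(updates) for d in range(dim)]

def median_aggregate(updates):
    """Coordinate-wise median: extreme values from a minority of
    clients never become the middle element, so the result stays near
    the honest models."""
    dim = len(updates[0])
    return [median(u[d] for u in updates) for d in range(dim)]

honest = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1]]
byzantine = [1000.0, -1000.0]        # one manipulated client
updates = honest + [byzantine]

corrupted = mean_aggregate(updates)  # pulled far from [1.0, 1.0]
robust = median_aggregate(updates)   # stays near [1.0, 1.0]
```

The median tolerates fewer than half of the clients being Byzantine; beyond that threshold, no aggregation rule over the updates alone can recover the honest average.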
In order to make the training process robust against Byzantine failures, the bad users can be identified by analyzing the updated intermediate data using a hidden Markov model or via secure aggregation protocols.
\label{sec:frameworks}
FL systems are widely applied in diverse domains, e.g., 
mobile services, healthcare , and finance .
An FL system generally exploits an FL framework, which is deployed on distributed resources.
In this section, we present four widely used FL frameworks: PaddleFL , TensorFlowFederated , FATE , and PySyft .
PaddleFL is an open-source federated learning framework based on PaddlePaddle , which is supported by Baidu. At the presentation layer, PaddleFL provides a textual UI for the interaction between users and the FL system. 
At the User Services layer, PaddleFL provides logging and monitoring support, and it can leverage the interpretability module of PaddlePaddle in the future. At the FL training layer, PaddleFL can realize data parallelism (horizontal FL) and model parallelism (vertical FL). It supports multiple aggregation algorithms, e.g., FedAvg, and fault-tolerance. At the infrastructure layer, PaddleFL exploits RPC for the distributed execution. PaddleFL exploits DP to protect data security. PaddleFL is widely used in multiple domains, e.g., Natural Language Processing (NLP), Computer Vision (CV) , and recommendation.
TensorFlow Federated (TFF) is an open-source framework for federated learning on decentralized data, which is supported by Google. TFF also provides a textual UI through Python. TFF supports monitoring and logging functionality at the user service layer. TFF supports data parallelism (horizontal FL), multiple aggregation algorithms, and fault-tolerance of mobile devices. 
TFF exploits RPC for the distributed execution and DP for the protection of data privacy.
TFF enables Android mobile users to predict the next word while typing on the keyboards of their mobile phones .
FATE is an open-source FL framework supported by WeBank. 
FATE provides both graphical and textual UIs. 
FATE can support the monitoring of distributed training through a web portal. 
FATE takes advantage of database management systems (DBMS) to track the execution status.
FATE can enable horizontal (data parallelism), vertical (model parallelism), and hybrid federated learning. 
FATE exploits both DP and HE (homomorphic encryption) to protect data privacy. 
In addition, FATE exploits RPC to perform the distributed execution.
PySyft is an open-source FL framework based on PyTorch .
PySyft is written in Python and provides a Python-based textual UI.
PySyft mainly supports data parallelism and model parallelism based on an aggregator or orchestrating server. 
The aggregator or orchestrating server sends a part of the model to participating clients to process local data, and it collects the results for federated averaging. PySyft exploits DP and encryption techniques to protect data security. PySyft exploits multiple communication protocols for distributed execution, e.g., RPC and WebSocket.
}
\liu{
Diverse FL frameworks exist, each with its own advantages. We summarize the characteristics of each framework in Table \ref{tal:FLFramework}, so as to help select a proper framework for use. From the table, we can see that all the frameworks implement centralized aggregation algorithms, while employing DP and HE for data security. PaddleFL can exploit Paddle to realize data, model, and pipeline parallelism. FATE and TFF use TensorFlow as the engine, while FATE provides a Web portal UI, which is convenient for novices. PySyft is compatible with PyTorch, which makes it easy to handle PyTorch-based tasks, while PaddleFL is compatible with Paddle, which makes it easy to use the rich pre-trained models published in PaddleHub .}
\begin{table}[ht]
\centering
\caption{\liu{Comparison among diverse frameworks. ``Aggregation'' represents the type of aggregation algorithms. 
``Textual'' represents the textual UI, while ``Web'' represents a Web portal.}}
\label{tal:FLFramework}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Framework & Engine & Aggregation & UI & Parallelism & Security \\
\hline
PaddleFL & Paddle & Centralized & textual & Data/Model/Pipeline & DP/HE \\
\hline
TFF & TensorFlow & Centralized & textual & Data/Model & DP/HE \\
\hline
FATE & TensorFlow & Centralized & Web & Data/Model & DP/HE \\
\hline
PySyft & PyTorch & Centralized & textual & Data & DP/HE \\
\hline
\end{tabular}
\end{table}
\liu{Table \ref{tal:FrameworkSupport} summarizes the support of diverse types of FL in terms of data distribution. All the frameworks support horizontal FL, while vertical FL is supported by all frameworks except TFF. PySyft cannot directly support vertical FL, while PyVertical , which is built upon PySyft, can be used to support vertical FL with compatibility for PyTorch models. Hybrid FL is only supported by PaddleFL and FATE. In addition, all the frameworks support execution with GPUs. In practice, although PaddleFL may take slightly longer to train, the accuracy of the trained model can be higher than that of TFF and FATE, while PySyft may generate ``out of memory'' errors .}
\begin{table}[ht]
\centering
\caption{\liu{Comparison among diverse frameworks for the support of diverse FL types, e.g., horizontal FL, vertical FL, and hybrid FL, and GPU. 
$\\|$\\checkmark$\\|$ represents that the support is not realized by itself but a close one.}}\n\\label{tal:FrameworkSupport}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{} & PaddleFL & TFF & FATE & PySyft \\\\\n\\hline\n\\multirow{3}{*}{Types} & Horizontal & \\checkmark & \\checkmark & \\checkmark & \\checkmark \\\\\n\\cline{2-6}\n& Vertical & \\checkmark & \\xmark & \\checkmark & $\\|$\\checkmark$\\|$ \\\\\n\\cline{2-6}\n& Hybrid & \\checkmark & \\xmark & \\checkmark & \\xmark \\\\\n\\hline\n\\multicolumn{2}{|c|}{GPU} & \\checkmark & \\checkmark & \\checkmark & \\checkmark \\\\\n\\hline\n\\end{tabular}\n\\end{table}", "id": "c6708387-e962-4e76-9ea7-b422245775eb", "level": "subsection", "origin_cites_number": 3, "parent_id": "727148cb-019c-4e5c-9d02-e8f0c8fb671c", "prefix_titles": [ [ "title", "From Distributed Machine Learning to Federated Learning: A Survey" ], [ "section", "Federated Learning Frameworks" ], [ "subsection", "\\liu{Concluding Remarks" ] ], "subsections": [], "title": "\\liu{Concluding Remarks" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:future}\nAlthough much work has been done on the FL systems, there remain\nsome limitations, e.g., interpretability of FL, decentralized aggregation, FL on graphs, benchmarks of FL systems, and applications to distributed intelligent systems. 
This section discusses the limitations of the existing frameworks and proposes new research directions.
Several datasets exist for experiments on FL systems. For instance, Federated Extended MNIST (FEMNIST) is built by partitioning the data in Extended MNIST based on each writer.
Shakespeare is built from The Complete Works of William Shakespeare based on each speaking role. 
Both of these datasets can be used for horizontal FL. However, no public datasets exist for vertical FL or transfer FL. In addition, no open decentralized IID or non-IID distributions of popular datasets, e.g., ImageNet , exist for FL systems.
Deep neural networks have excellent performance in various areas, while it is often difficult to understand the results of deep neural network models, especially within FL systems.
Shapley values have been used to provide interpretability , though that work focuses on vertical FL. 
When multiple users collaboratively train an FL model, it remains an open problem to evaluate the contributions of each user, which helps provide evidence for the incentive of each user. \nThe primary incentive for clients to participate in federated learning is obtaining better models , while the benefit of participating in federated learning for clients who have sufficient private data to train accurate local models is disputable. \nInterpretability can help understand the contributions of each user and provide an objective opinion on the incentive strategy within an FL system.\nIn addition, the interpretability helps domain experts to understand the relationship between data and the final trained model in critical domains, e.g., healthcare and finance.\nHowever, the interpretability within FL systems remains an open problem.", "id": "b6673d3f-9d3e-4599-b8f4-93fdf908307f", "level": "subsection", "origin_cites_number": 2, "parent_id": "226ea081-07c1-4d87-ad04-18733a7f7202", "prefix_titles": [ [ "title", "From Distributed Machine Learning to Federated Learning: A Survey" ], [ "section", "Research Directions" ], [ "subsection", "Interpretability" ] ], "subsections": [], "title": "Interpretability" }, { "cite_extract_rate": 0, "cites": [], "content": "Current aggregation algorithms of FL systems focus on the full connection or star connection topology, while other topologies, e.g., dynamic exponential-2 graph, may help accelerate the distributed training with FL systems . \\liu{In addition, well-known graph algorithms, e.g., graph partitioning algorithms, and ad-hoc policies can be exploited to help better distribute computing resources with the topology defined in Section \\ref{subsubsec:decentralized} in order to improve the efficiency of FL systems. 
} While peer-to-peer communication enables FL with an arbitrary topology matrix, the data security under diverse attacks, e.g., data or model poisoning and GAN-based attacks, remains an open problem and deserves further investigation.", "id": "9c3427af-198a-448e-bda6-b7fbe346fc0a", "level": "subsection", "origin_cites_number": 1, "parent_id": "226ea081-07c1-4d87-ad04-18733a7f7202", "prefix_titles": [ [ "title", "From Distributed Machine Learning to Federated Learning: A Survey" ], [ "section", "Research Directions" ], [ "subsection", "Decentralized Aggregation" ] ], "subsections": [], "title": "Decentralized Aggregation" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 7935, 5438, 180 ], "content": "Graphs or graph neural networks (GNN) have gained increasing popularity in multiple domains, e.g., social networks, knowledge graphs, and recommender systems. FL frameworks for graphs, i.e., GraphFL , and GNN, i.e., SGNN , have been proposed to train a model with decentralized graphs. However, the data security of FL on graphs remains an open problem. \\liu{In addition, since a multimodal knowledge graph can contain not only text but also images or other types of data , it is worth further exploration to efficiently support multimodal knowledge graph construction within an FL system .}", "id": "35e51ec5-ea11-4e9b-a3ee-cbf4853180ca", "level": "subsection", "origin_cites_number": 5, "parent_id": "226ea081-07c1-4d87-ad04-18733a7f7202", "prefix_titles": [ [ "title", "From Distributed Machine Learning to Federated Learning: A Survey" ], [ "section", "Research Directions" ], [ "subsection", "Federated Learning on Graphs" ] ], "subsections": [], "title": "Federated Learning on Graphs" }, { "cite_extract_rate": 0.375, "cites": [ 8921, 5472, 5473 ], "content": "}\n\\liu{\nAlthough FL focuses on non-IID data, real-world decentralized data usually exhibit an imbalanced distribution . 

While imbalanced data exist in multiple areas, such as computer vision , bioinformatics, and biomedicine , learning from such data requires special attention to data sampling , data augmentation , and loss function designs . Imbalanced data arise in diverse tasks, e.g., two-class or multi-class classification . However, approaches optimized for imbalanced data within FL systems remain to be developed.}", "id": "e54c8539-ee3c-4e42-b5a8-6e639fe2d31f", "level": "subsection", "origin_cites_number": 8, "parent_id": "226ea081-07c1-4d87-ad04-18733a7f7202", "prefix_titles": [ [ "title", "From Distributed Machine Learning to Federated Learning: A Survey" ], [ "section", "Research Directions" ], [ "subsection", "\\liu{Imbalanced Data" ] ], "subsections": [], "title": "\\liu{Imbalanced Data" }, { "cite_extract_rate": 0, "cites": [], "content": "Machine learning algorithms have been widely used to boost the performance of intelligent systems, while FL systems could further enhance intelligent systems~ in distributed computing environments with privacy and security ensured. An intelligent system is a group of machines that has the capacity to gather data, analyze the data, and respond to other systems or the world around them. 

\nWith FL systems, the distributed data can be exploited to generate models of high performance so as to produce smart responses.", "id": "31802936-db7d-403f-b909-ee8ee4d7b0c8", "level": "subsection", "origin_cites_number": 3, "parent_id": "226ea081-07c1-4d87-ad04-18733a7f7202", "prefix_titles": [ [ "title", "From Distributed Machine Learning to Federated Learning: A Survey" ], [ "section", "Research Directions" ], [ "subsection", "Applications to Distributed Intelligent Systems" ] ], "subsections": [], "title": "Applications to Distributed Intelligent Systems" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:con}\nIn this paper, we discussed the current state of the art of FL systems, including the functional architecture of FL systems, distributed training, and data manipulation. \nFirst, we presented an overview of FL systems. In particular, we introduced the life cycle of FL models, including four phases. Then, we presented the four-layer functional architecture of FL systems, including presentation, user services, FL training, and infrastructure, and we presented each layer in detail. \nSecond, we detailed the distributed training with two parts, i.e., parallelism and aggregation algorithms. We presented three types of parallelism, including data parallelism, model parallelism, and pipeline parallelism. We associate each parallelism to a corresponding type of FL. For instance, data parallelism is associated with the horizontal FL, which corresponds to cross-device or cross-silo FL. Model parallelism is related to vertical FL and cross-silo FL. We presented the features of different aggregation algorithms in three types: centralized aggregation, hierarchical aggregation, and decentralized aggregation. \nThird, we presented the techniques for data manipulation within FL systems. We showed that FL systems prefer RPC for the distributed execution, to handle the fault-tolerance because of moderate network connection. 
Intermediate data are compressed with sketching techniques so as to reduce the data communication time. In addition, we presented the data privacy and model security attacks and the corresponding defense techniques, e.g., DP, HE, TEE, and the analysis of updated intermediate data for malicious user identification. \nWe mainly introduced four FL systems: PaddleFL, TensorFlow Federated, FATE, and PySyft. The current solutions primarily focus on horizontal FL. We identified six research directions that deserve further investigation: benchmarks, interpretability, decentralized aggregation, FL on graphs, imbalanced data, and the applications of FL systems to distributed intelligent systems.\n\\bibliographystyle{plain}\n\\bibliography{references}\n\\end{document}", "id": "4ff5262c-b6d9-4162-aba6-d016f80c7306", "level": "section", "origin_cites_number": 0, "parent_id": "42e77c1c-0c8b-4e28-9d53-60920d7558be", "prefix_titles": [ [ "title", "From Distributed Machine Learning to Federated Learning: A Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
81
[ 3469, 1315, 5419, 3477, 591, 8917, 659, 5422, 5423, 1311, 5424, 671, 627, 5420, 600, 582, 5421, 5426, 5425, 166, 612, 5427, 656, 5429, 5434, 8918, 5437, 5435, 5433, 5432, 7927, 5428, 5430, 7926, 8502, 5436, 5438, 5431, 590, 617, 7608, 5439, 668, 636, 635, 602, 7928, 5440, 5443, 5441, 5445, 8919, 5442, 5444, 5446, 632, 646, 644, 5448, 5447, 5449, 5452, 8920, 5450, 7930, 5451, 7929, 5454, 623, 655, 5453, 613, 7932, 622, 7931, 5456, 5457, 5455, 625, 8366, 5460, 892, 5459, 5461, 5462, 5458, 3478, 7933, 614, 5463, 5464, 7727, 2673, 5468, 5470, 5465, 5471, 5467, 7038, 3415, 597, 5469, 5466, 7934, 653, 7935, 180, 8921, 5472, 5473 ]
0.907733
[ "Chongyi Li", "Chunle Guo", "Linghao Han", "Jun Jiang", "Ming-Ming Cheng", "Jinwei Gu", "Chen Change Loy" ]
Low-Light Image and Video Enhancement \\Using Deep Learning: A Survey
2021
2021-04-21T19:12:19Z
cs.CV
\label{sec:Abstrat} Low-light image enhancement (LLIE) aims at improving the perception or interpretability of an image captured in an environment with poor illumination. Recent advances in this area are dominated by deep learning-based solutions, where many learning strategies, network structures, loss functions, training data, etc. have been employed. In this paper, we provide a comprehensive survey to cover various aspects ranging from algorithm taxonomy to unsolved open issues. To examine the generalization of existing methods, we propose a low-light image and video dataset, in which the images and videos are taken by different mobile phones' cameras under diverse illumination conditions. Besides, for the first time, we provide a unified online platform that covers many popular LLIE methods, of which the results can be produced through a user-friendly web interface. In addition to qualitative and quantitative evaluation of existing methods on publicly available and our proposed datasets, we also validate their performance in face detection in the dark. This survey together with the proposed dataset and online platform could serve as a reference source for future study and promote the development of this research field. The proposed platform and dataset as well as the collected methods, datasets, and evaluation metrics are publicly available and will be regularly updated. Project page: \href{https://www.mmlab-ntu.com/project/lliv\_survey/index.html}{https://www.mmlab-ntu.com/project/lliv\_survey/index.html}.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "43aba542-7f6b-493f-9beb-9ae4164ca84b", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ] ], "subsections": [ "c6721c4f-f568-49fa-a376-8b26124c3f6f", "8eda68ea-596d-47b0-8c59-3c11fbc66273", "3ad6854d-28fe-41e8-80bd-198011ce8b19", "62b1377f-b6e1-4398-be9f-a1fa2952056b", "c43fed76-06de-4d00-89a5-701cdf5d52e3", "0f160045-16e9-4579-89b0-811073538246" ], "title": "root" }, { "cite_extract_rate": 1, "cites": [ 1875 ], "content": "\\label{sec:Introduction}}\n\\IEEEPARstart{I}{mages} are often taken under sub-optimal lighting conditions, under the influence of backlit, uneven light, and dim light, due to inevitable environmental and/or technical constraints such as insufficient illumination and limited exposure time. Such images suffer from the compromised aesthetic quality and unsatisfactory transmission of information for high-level tasks such as object tracking, recognition, and detection. Figure \\ref{fig:example} shows some examples of the degradations induced by sub-optimal lighting conditions. \nLow-light enhancement enjoys a wide range of applications in different areas, including visual surveillance, autonomous driving, and computational photography. In particular, smartphone photography has become ubiquitous and prominent. Limited by the size of the camera aperture, the requirement of real-time processing, and the constraint of memory, taking photographs with a smartphone's camera in a dim environment is especially challenging. 
There is an exciting research arena of enhancing low-light images and videos in such applications.\n\\begin{figure}[!t]\n\t\\begin{center}\n\t\t\\begin{tabular}{c@{ }c@{ }c@{ }}\n\t\t\t\\includegraphics[width=0.3\\linewidth,height=0.2\\linewidth]{figures/Fig1_backlit.png}&\n\t\t\t\\includegraphics[width=0.3\\linewidth,height=0.2\\linewidth]{figures/Fig1_nonuniform.png}&\n\t\t\t\\includegraphics[width=0.3\\linewidth,height=0.2\\linewidth]{figures/Fig1_weak.png}\\\\\n\t\t\t(a) back lit & (b) uneven light &(c) dim light\\\\\n\t\t\t\\includegraphics[width=0.3\\linewidth,height=0.2\\linewidth]{figures/Fig1_dark.png}&\n\t\t\t\\includegraphics[width=0.3\\linewidth,height=0.2\\linewidth]{figures/0027.png}&\n\t\t\t\\includegraphics[width=0.3\\linewidth,height=0.2\\linewidth]{figures/Fig1_noise.png}\\\\\n\t\t\t(d) extremely low &\t(e) colored light & (f) boosted noise\\\\\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\vspace{-3pt}\n\t\\caption{Examples of images taken under sub-optimal lighting conditions. These images suffer from the buried scene content, reduced contrast, boosted noise, and inaccurate color. }\n\t\\label{fig:example}\n\t\\vspace{-5pt}\n\\end{figure}\n\\begin{figure*}[thb]\n\t\\centering \\centerline{\\includegraphics[width=1\\linewidth]{figures/chronology.png}}\n\t\\vspace{-2pt}\n\t\\caption{A concise milestone of deep learning-based low-light image and video enhancement methods. \\textbf{Supervised learning-based methods}: LLNet , Chen et al. , MBLLEN , Retinex-Net , LightenNet , SCIE , DeepUPE , Chen et al. , Jiang and Zheng , Wang et al. , KinD , Ren et al. , Xu et al. , Fan et al. , Lv et al. , EEMEFN~, SIDGAN. , LPNet , DLN , TBEFN , DSLR , Zhang et al. , PRIEN , and Retinex-Net . \\textbf{Reinforcement learning-based method}: DeepExposure . \\textbf{Unsupervised learning-based method}: EnlightenGAN . \\textbf{Zero-shot learning-based methods}: ExCNet , Zero-DCE , RRDNet , Zero-DCE++ , RetinexDIP , and RUAS . 
\\textbf{Semi-supervised learning-based method}: DRBN and DRBN .}\n\t\\label{fig:milestones}\n\t\\vspace{-4pt}\n\\end{figure*}\nTraditional methods for low-light enhancement include Histogram Equalization-based methods and Retinex model-based methods . The latter received relatively more attention. A typical Retinex model-based approach decomposes a low-light image into a reflection component and an illumination component by priors or regularizations. The estimated reflection component is treated as the enhanced result.\nSuch methods have some limitations:\n\\textbf{1)} the ideal assumption that treats the reflection component as the enhanced result does not always hold, especially given various illumination properties, which could lead to unrealistic enhancement such as loss of details and distorted colors, \\textbf{2)} the noise is usually ignored in the Retinex model, thus it remains or is even amplified in the enhanced results, \\textbf{3)} finding an effective prior or regularization is challenging. An inaccurate prior or regularization may result in artifacts and color deviations in the enhanced results, and \\textbf{4)} the runtime is relatively long because of their complicated optimization process.\nRecent years have witnessed the compelling success of deep learning-based LLIE since the first seminal work . Deep learning-based solutions enjoy better accuracy, robustness, and speed over conventional methods, thus attracting increasing attention. A concise milestone of deep learning-based LLIE methods is shown in Figure \\ref{fig:milestones}. As shown, \nsince 2017, the number of deep learning-based solutions has grown year by year. Learning strategies used in these solutions cover Supervised Learning (SL), Reinforcement Learning (RL), Unsupervised Learning (UL), Zero-Shot Learning (ZSL), and Semi-Supervised Learning (SSL). Note that we only report some representative methods in Figure \\ref{fig:milestones}. 

In fact, there are more than 100 papers on deep learning-based methods from 2017 to 2021. Moreover, although some general photo enhancement methods can improve the brightness of images to some extent, we omit them in this survey as they are not designed to handle diverse low-light conditions. We concentrate on deep learning-based solutions that are specially developed for low-light image and video enhancement.\n\tAlthough deep learning has dominated the research of LLIE, an in-depth and comprehensive survey on deep learning-based solutions is lacking. There are two reviews of LLIE . Wang et al. mainly review conventional LLIE methods, while our work systematically and comprehensively reviews recent advances of deep learning-based LLIE. In comparison to Liu et al. that reviews existing LLIE algorithms, measures the machine vision performance of different methods, provides a low-light image dataset serving both low-level and high-level vision enhancement, and develops an enhanced face detector, our survey reviews low-light image and video enhancement from different aspects and has the following unique characteristics.\n\t\\textbf{1)} Our work mainly focuses on recent advances of deep learning-based low-light image and video enhancement, where we provide in-depth analysis and discussion in various aspects, covering learning strategies, network structures, loss functions, training datasets, test datasets, evaluation metrics, model sizes, inference speed, enhancement performance, etc. Thus, this survey centers on deep learning and its applications in low-light image and video enhancement.\n\t\\textbf{2)} We propose a dataset that contains images and videos captured by different mobile phones' cameras under diverse illumination conditions to evaluate the generalization of existing methods. This new and challenging dataset is a supplement to existing low-light image and video enhancement datasets, as such a dataset is lacking in this research area. 

Besides, we are the first, to the best of our knowledge, to compare the performance of deep learning-based low-light image enhancement methods on this kind of data. \n\t\\textbf{3)} We provide an online platform that covers many popular deep learning-based low-light image enhancement methods, where the results can be produced by a user-friendly web interface. With our platform, one without any GPUs can assess the results of different methods for any input images online, which speeds up the development of this research field and helps to create new research.\n\tWe hope that our survey could provide novel insights and inspiration to facilitate the understanding of deep learning-based LLIE, foster research on the raised open issues, and speed up the development of this research field.", "id": "c6721c4f-f568-49fa-a376-8b26124c3f6f", "level": "section", "origin_cites_number": 8, "parent_id": "43aba542-7f6b-493f-9beb-9ae4164ca84b", "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:Solution}", "id": "8eda68ea-596d-47b0-8c59-3c11fbc66273", "level": "section", "origin_cites_number": 0, "parent_id": "43aba542-7f6b-493f-9beb-9ae4164ca84b", "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], [ "section", "Deep Learning-Based LLIE" ] ], "subsections": [ "b406f65d-5d41-4498-9ddb-95caec62c1be", "44126fdb-481e-4904-b2e2-6e15d767dcdd" ], "title": "Deep Learning-Based LLIE" }, { "cite_extract_rate": 0, "cites": [], "content": "We first give a common formulation of the deep learning-based LLIE problem. 
For a low-light image $I\\in \\mathbb{R}^{W\\times H\\times3}$ of width $W$ and height $H$, the process can be modeled as:\n\\begin{equation}\n\t\\label{equ_1}\n\t\\widehat{R}=\\mathcal{F}(I;\\theta),\n\\end{equation}\nwhere $\\widehat{R}\\in \\mathbb{R}^{W\\times H\\times3}$ is the enhanced result and $\\mathcal{F}$ represents the network with trainable parameters $\\theta$. The purpose of deep learning is to find optimal network parameters $\\widehat{\\theta}$ that minimizes the error:\n\\begin{equation}\n\t\\label{equ_2}\n\t\\widehat{\\theta}=\\argmin_{\\theta}\\mathcal{L}(\\widehat{R},R),\n\\end{equation}\nwhere $R\\in \\mathbb{R}^{W\\times H\\times3}$ is the ground truth, and the loss function $\\mathcal{L}(\\widehat{R},R)$ drives the optimization of network.\nVarious loss functions such as supervised loss and unsupervised loss can be used. More details will be presented in Section \\ref{sec:Technical}.", "id": "b406f65d-5d41-4498-9ddb-95caec62c1be", "level": "subsection", "origin_cites_number": 0, "parent_id": "8eda68ea-596d-47b0-8c59-3c11fbc66273", "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], [ "section", "Deep Learning-Based LLIE" ], [ "subsection", "Problem Definition" ] ], "subsections": [], "title": "Problem Definition" }, { "cite_extract_rate": 1, "cites": [ 1875 ], "content": "According to different learning strategies, we categorize existing LLIE methods into supervised learning, reinforcement learning, unsupervised learning, zero-shot learning, and semi-supervised learning. \nA statistic analysis from different perspectives is presented in Figure \\ref{fig:statistic}. In what follows, we review some representative methods of each strategy. 
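To make the abstract objective above concrete, the following is a minimal sketch of the optimization in the problem definition, in which the network $\mathcal{F}(I;\theta)$ is deliberately reduced to a single learnable gain and $\mathcal{L}$ is a mean squared error. The gain model, the synthetic data, and all names below are illustrative assumptions, not taken from any surveyed method.

```python
import numpy as np

# Toy instance of the LLIE objective: F(I; theta) is a one-parameter
# "network" (a brightness gain), and L is the mean squared error to a
# reference image R. Everything here is an illustrative assumption.

def enhance(image, theta):
    """F(I; theta): scale brightness, clipped to the valid range [0, 1]."""
    return np.clip(theta * image, 0.0, 1.0)

def fit(low, ref, lr=10.0, steps=300):
    """Gradient descent on theta minimizing L(F(I; theta), R)."""
    theta = 1.0
    for _ in range(steps):
        pred = theta * low                       # unclipped for a smooth gradient
        grad = 2.0 * np.mean((pred - ref) * low)  # d/dtheta of the MSE
        theta -= lr * grad
    return theta

rng = np.random.default_rng(0)
low = rng.uniform(0.0, 0.2, size=(32, 32, 3))   # synthetic dim image I
ref = np.clip(4.0 * low, 0.0, 1.0)              # synthetic ground truth R
theta_hat = fit(low, ref)                       # converges to roughly 4
```

In the surveyed methods, $\mathcal{F}$ is a deep network with many parameters and $\mathcal{L}$ may combine several supervised or non-reference terms, but the fitting principle is the same as in this sketch.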
\n\\noindent\n\\textbf{Supervised Learning.} \nFor supervised learning-based LLIE methods, they are further divided into end-to-end, deep Retinex-based, and realistic data-driven methods.\nThe first deep learning-based LLIE method LLNet~ employs a variant of stacked-sparse\ndenoising autoencoder to brighten and denoise low-light images simultaneously. \nThis pioneering work inspires the usage of end-to-end networks in LLIE. \nLv et al. propose an end-to-end multi-branch enhancement network (MBLLEN). The MBLLEN improves the performance of LLIE via extracting effective feature representations by a feature extraction module, an enhancement module, and a fusion module. \nThe same authors propose other three subnetworks including an Illumination-Net, a Fusion-Net, and a Restoration-Net to further improve the performance. \nRen et al. design a more complex end-to-end network that comprises an encoder-decoder network for image content enhancement and a recurrent neural network for image edge enhancement. \nSimilar to Ren et al. , Zhu et al. propose a method called EEMEFN. The EEMEFN consists of two stages: multi-exposure fusion and edge enhancement. \nA multi-exposure fusion network, TBEFN~, is proposed for LLIE. The TBEFN estimates a transfer function in two branches, of which two enhancement results can be obtained. At last, a simple average scheme is employed to fuse these two images and further refine the result via a refinement unit. \nIn addition, pyramid network (LPNet) , residual network , and Laplacian pyramid (DSLR)\nare introduced into LLIE. These methods learn to effectively and efficiently integrate feature representations via commonly used end-to-end network structures for LLIE.\nBased on the observation that noise exhibits different levels of contrast in different frequency layers, Xu et al. proposed a frequency-based decomposition-and-enhancement network. 
This network recovers image contents with noise suppression in the low-frequency layer while inferring the details in the high-frequency layer. \nRecently, a progressive-recursive low-light image enhancement network is proposed, which uses a recursive unit to gradually enhance the input image.\nTo solve temporal instability when handling low-light videos, Zhang et al. propose to learn and infer motion field from a single image then enforce temporal consistency.\nIn comparison to directly learning an enhanced result in an end-to-end network, deep Retinex-based methods enjoy better enhancement performance in most cases owing to the physically explicable Retinex theory . Deep Retinex-based methods usually separately enhance the illuminance component and the reflectance components via specialized subnetworks.\nA Retinex-Net is proposed, which includes a Decom-Net that splits the input image into light-independent reflectance and structure-aware smooth illumination and an Enhance-Net that adjusts the illumination map for low-light enhancement. Recently, the Retinex-Net is extended by adding new constraints and advanced network designs for better enhancement performance .\nTo reduce the computational burden, Li et al. propose a lightweight LightenNet for weakly illuminated image enhancement, which only consists of four layers. \nThe LightenNet takes a weakly illuminated image as the input and then estimates its illumination map. Based on the Retinex theory , the enhanced image is obtained by dividing the input image by the illumination map.\nTo accurately estimate the illumination map, Wang et al. extract the global and local features to learn an image-to-illumination mapping by their proposed DeepUPE network.\nZhang et al. separately develop three subnetworks for layer decomposition, reflectance restoration, and illumination adjustment, called KinD. 
\nFurthermore, the authors alleviate the visual defects left in the results of KinD by a multi-scale illumination attention module. The improved KinD is called KinD++ .\nTo solve the issue that the noise is omitted in the deep Retinex-based methods,\nWang et al. propose a progressive Retinex network, where an IM-Net estimates the illumination and a NM-Net estimates the noise level. These two subnetworks work in a progressive mechanism until obtaining stable results. \nFan et al. integrate semantic segmentation and Retinex model for further improving the enhancement performance in real cases. The core idea is to use semantic prior to guide the enhancement of both the illumination component and the reflectance component. \n\tAlthough some methods can achieve decent performance, they show poor generalization capability in real low-light cases due to the usage of synthetic training data. To solve this issue, some works attempt to generate more realistic training data or capture real data.\n\tCai et al. build a multi-exposure image dataset, where the low-contrast images of different exposure levels have their corresponding high-quality reference images. \n\tEach high-quality reference image is obtained by subjectively selecting the best output from 13 results enhanced by different methods. Moreover, a frequency decomposition network is trained on the built dataset and separately enhances the high-frequency layer and the low-frequency layer via a two-stage structure. \n\tChen et al. collect a real low-light image dataset (SID) and \n\ttrain the U-Net to learn a mapping from low-light raw data to the corresponding long-exposure high-quality reference image.\n\tFurther, Chen et al. extend the SID dataset to low-light videos (DRV). The DRV contains static videos with the corresponding long-exposure ground truths. To ensure the generalization capability of processing the videos of dynamic scenes, a siamese network is proposed. 
\n\tTo enhance the moving objects in the dark, Jiang and Zheng~ design a co-axis optical system to capture temporally synchronized and spatially aligned low-light and well-lighted video pairs (SMOID). \n\tUnlike the DRV video dataset , the SMOID video dataset contains dynamic scenes. To learn the mapping from raw low-light video to well-lighted video, a 3D U-Net-based network is proposed.\n\tConsidering the limitations of previous low-light video datasets, such as the DRV dataset only containing static videos and the SMOID dataset only having 179 video pairs, \n\tTriantafyllidou et al.~ propose a low-light video synthesis pipeline, dubbed SIDGAN. The SIDGAN can produce dynamic video data (raw-to-RGB) by a semi-supervised dual CycleGAN with intermediate domain mapping. \n\tTo train this pipeline, the real-world videos are collected from the Vimeo-90K dataset . The low-light raw video data and the corresponding long-exposure images are sampled from the DRV dataset . \n\tWith the synthesized training data, this work adopts the same U-Net network as Chen et al. for low-light video enhancement. \n\t\\noindent\n\t\\textbf{Reinforcement Learning.}\n\tWithout paired training data, Yu et al. learn to expose photos with reinforcement adversarial learning, named DeepExposure. Specifically, an input image is first segmented into sub-images according to exposures. For each sub-image, local exposure is learned by the policy network sequentially based on reinforcement learning. The reward evaluation function is approximated by adversarial learning. At last, each local exposure is employed to retouch the input, thus obtaining multiple retouched images under different exposures. The final result is achieved by fusing these images. \n\t\\noindent\n\t\\textbf{Unsupervised Learning.}\n\tTraining a deep model on paired data may result in overfitting and limited generalization capability. 

To solve this issue, an unsupervised learning method named EnlightenGAN is proposed.\n\tThe EnlightenGAN adopts an attention-guided U-Net as the generator and uses the global-local discriminators to ensure the enhanced results look like realistic normal-light images. \n\tIn addition to global and local adversarial losses, the global and local self feature preserving losses are proposed to preserve the image content before and after the enhancement. This is a key point for the stable training of such a one-path Generative Adversarial Network (GAN) structure. \n\t\\noindent\n\t\\textbf{Zero-Shot Learning.} \n\tThe supervised learning, reinforcement learning, and unsupervised learning methods either have limited generalization capability or suffer from unstable training. To remedy these issues, zero-shot learning is proposed to learn the enhancement solely from the testing images. \n\tNote that the concept of zero-shot learning in low-level vision tasks is used to emphasize that the method does not require paired or unpaired training data, which is different from its definition in high-level visual tasks. \n\tZhang et al. propose a zero-shot learning method, called ExCNet, for back-lit image restoration. \n\tA network is first used to estimate the S-curve that best fits the input image. Once the S-curve is estimated, the input image is separated into a base layer and a detail layer using the guided filter . Then the base layer is adjusted by the estimated S-curve. Finally, the Weber contrast is used to fuse the detail layer and the adjusted base layer. To train the ExCNet, the authors formulate the loss function as a block-based energy minimization problem. \n\tZhu et al. propose a three-branch CNN, called RRDNet, for underexposed image restoration. The RRDNet decomposes an input image into illumination, reflectance, and noise via iteratively minimizing specially designed loss functions. 

\n\tTo drive the zero-shot learning, a combination of Retinex reconstruction loss, texture enhancement loss, and illumination-guided noise estimation loss is proposed. \n\tZhao et al. perform Retinex decomposition via neural networks and then enhance the low-light image based on the Retinex model, called RetinexDIP. Inspired by Deep Image Prior (DIP) , RetinexDIP generates the reflectance component and illumination component of an input image by randomly sampled white noise, in which the component characteristics-related losses such as illumination smoothness are used for training.\n\tLiu et al. propose a Retinex-inspired unrolling method for LLIE, in which the cooperative architecture search is used to discover lightweight prior architectures of basic blocks and non-reference losses are used to train the network.\n\tDifferent from the image reconstruction-based methods , \n\ta deep curve estimation network, Zero-DCE , is proposed. Zero-DCE formulates the light enhancement as a task of image-specific curve estimation, which takes a low-light image as input and produces high-order curves as its output. These curves are used for pixel-wise adjustment on the dynamic range of the input to obtain an enhanced image. Further, an accelerated and lightweight version is proposed, called Zero-DCE++ . \n\tSuch curve-based methods do not require any paired or unpaired data during training. They achieve zero-reference learning via a set of non-reference loss functions. Besides, unlike the image reconstruction-based methods that need high computational resources, the image-to-curve mapping only requires lightweight networks, thus achieving a fast inference speed. 
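To make the image-to-curve idea concrete, the following is a hand-coded sketch of the quadratic light-enhancement curve popularized by Zero-DCE, $LE(x) = x + \alpha\, x(1-x)$, applied iteratively. In the actual method, $\alpha$ is a per-pixel map predicted by a lightweight network; the fixed scalar used here is an illustrative simplification.

```python
import numpy as np

# Minimal sketch of curve-based adjustment in the spirit of Zero-DCE:
# each iteration applies LE(x) = x + alpha * x * (1 - x), which maps
# [0, 1] into [0, 1] for alpha in [-1, 1]. Using one scalar alpha
# instead of a learned per-pixel map is an illustrative assumption.

def apply_curves(image, alpha, iterations=8):
    x = np.asarray(image, dtype=np.float64)
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)   # brightens dark pixels, preserves range
    return x

dark = np.full((4, 4, 3), 0.1)          # a flat, under-exposed toy image
brighter = apply_curves(dark, alpha=0.5)
```

Because the adjustment is only a few element-wise operations per pixel, inference is fast, which is one reason such curve-based methods suit resource-limited devices.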
\n\t\\begin{figure*}[!t]\n\t\t\\begin{center}\n\t\t\t\\begin{tabular}{c@{ }c@{ }c@{ }c@{ }}\n\t\t\t\t\\includegraphics[width=.23\\textwidth]{figures/Picture1.png}&\n\t\t\t\t\\includegraphics[width=.23\\textwidth]{figures/Picture2.png}&\n\t\t\t\t\\includegraphics[width=.23\\textwidth]{figures/Picture3.png}&\n\t\t\t\t\\includegraphics[width=.23\\textwidth]{figures/Picture4.png}\\\\\n\t\t\t\t(a) learning strategy & (b) network structure & (c) Retinex model & (d) data format \\\\\n\t\t\t\t\\includegraphics[width=.23\\textwidth]{figures/Picture5.png}&\n\t\t\t\t\\includegraphics[width=.23\\textwidth]{figures/Picture6.png}&\n\t\t\t\t\\includegraphics[width=.23\\textwidth]{figures/Picture7.png}&\n\t\t\t\t\\includegraphics[width=.23\\textwidth]{figures/Picture8.png}\\\\\n\t\t\t\t(e) loss function & (f) training dataset & (g) testing dataset & (h) evaluation metric \\\\\n\t\t\t\\end{tabular}\n\t\t\\end{center}\n\t\t\\vspace{-2pt}\n\t\t\\caption{A statistical analysis of deep learning-based LLIE methods, including learning strategy, network structure, Retinex model, data format, loss function, training dataset, testing dataset, and evaluation metric. Best viewed by zooming in.}\n\t\t\\label{fig:statistic}\n\t\t\\vspace{-4pt}\n\t\\end{figure*}\n\t\\noindent\n\t\\textbf{Semi-Supervised Learning.}\n\tTo combine the strengths of supervised learning and unsupervised learning, semi-supervised learning has been proposed in recent years.\n\tYang et al. propose a semi-supervised deep recursive band network (DRBN). The DRBN first recovers a linear band representation of an enhanced image under supervised learning, and then obtains an improved one by recomposing the given bands via a learnable linear transformation based on unsupervised adversarial learning. 
\n\tThe DRBN is extended by introducing Long Short Term Memory (LSTM) networks and an image quality assessment network pre-trained on an aesthetic visual analysis dataset, which achieves better enhancement performance .\n\tAs shown in Figure \\ref{fig:statistic}(a), supervised learning is the mainstream among deep learning-based LLIE methods, accounting for 73\\% of them. This is because supervised learning is relatively easy when paired training data such as LOL , SID and diverse low-/normal-light image synthesis approaches are used. \n\tHowever, supervised learning-based methods suffer from some challenges: \n\t\\textbf{1)} collecting a large-scale paired dataset that covers diverse real-world low-light conditions is difficult, \n\t\\textbf{2)} synthetic low-light images do not accurately represent real-world illuminance conditions such as spatially varying lighting and different levels of noise, and \n\t\\textbf{3)} training a deep model on paired data may result in limited generalization to real-world images of diverse illumination properties. \n\tTherefore, some methods adopt unsupervised learning, reinforcement learning, semi-supervised learning, and zero-shot learning to bypass the challenges in supervised learning. Although these methods achieve competitive performance, they still suffer from some limitations: \n\t\\textbf{1)} for unsupervised learning/semi-supervised learning methods, implementing stable training, avoiding color deviations, and building relations of cross-domain information remain challenging for current methods,\n\t\\textbf{2)} for reinforcement learning methods, designing an effective reward mechanism and implementing efficient and stable training are intricate, and \n\t\\textbf{3)} for zero-shot learning methods, the design of non-reference losses is non-trivial when color preservation, artifact removal, and gradient back-propagation must be taken into account. 
\n\t\\begin{table*}\n\t\t\\rowcolors{1}{gray!20}{white}\n\t\t\\centering\n\t\t\\caption{\n\t\t\t{Summary of essential characteristics of representative deep learning-based methods. ``Retinex'' indicates whether the models are Retinex-based or not. ``simulated'' means the testing data are simulated by the same approach as the synthetic training data. ``self-selected'' stands for the real-world images selected by the authors. ``\\#P'' represents the number of trainable parameters. ``-'' means this item is not available or not indicated in the paper.}\n\t\t}\n\t\t\\vspace{-6pt}\n\t\t\\label{table:methods}\n\t\t\\begin{threeparttable}\n\t\t\t\\resizebox{1\\textwidth}{!}{\n\t\t\t\t\\setlength\\tabcolsep{2pt}\n\t\t\t\t\\renewcommand\\arraystretch{0.98}\n\t\t\t\t\\begin{tabular}{c|r||c|c|c|c|c|c|c|c|c}\n\t\t\t\t\t\\hline\n\t\t\t\t\t&\\textbf{Method}~~~~~~~~~&\\textbf{Learning} &\\textbf{Network Structure} &\\textbf{Loss Function}\n\t\t\t\t\t&\\textbf{Training Data} & \\textbf{Testing Data} &\\textbf{Evaluation Metric} & \\textbf{Format} &\\textbf{Platform} &\\textbf{Retinex}\\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\multirow{1}{*}{\\rotatebox{90}{\\textbf{2017}}}\n\t\t\t\t\t&LLNet~ &SL &SSDA &SRR loss\n\t\t\t\t\t&\\tabincell{c}{simulated by \\\\Gamma Correction \\& \\\\Gaussian Noise} &\\tabincell{c}{simulated \\\\ self-selected} &\\tabincell{c}{PSNR SSIM} &RGB &Theano &\\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\multirow{1}{*}{\\rotatebox{90}{\\textbf{2018}}}\n\t\t\t\t\t&LightenNet~ &SL &four layers &$L_{2}$ loss\n\t\t\t\t\t&\\tabincell{c}{simulated by \\\\random illumination values} &\\tabincell{c}{simulated\\\\self-selected} &\\tabincell{c}{PSNR MAE\\\\SSIM \\\\User Study} &RGB &\\tabincell{c}{Caffe\\\\MATLAB}&\\checkmark\\\\\n\t\t\t\t\t&Retinex-Net~ &SL &\n\t\t\t\t\tmulti-scale network &\\tabincell{c}{$L_{1}$ loss smoothness loss \\\\invariable reflectance loss}\n\t\t\t\t\t&\\tabincell{c}{LOL\\\\simulated by \\\\adjusting histogram} 
&self-selected &- &RGB &TensorFlow&\\checkmark\\\\\n\t\t\t\t\t&MBLLEN~ &SL &multi-branch fusion &\\tabincell{c}{SSIM loss region loss\\\\perceptual loss}\n\t\t\t\t\t&\\tabincell{c}{simulated by \\\\Gamma Correction \\& \\\\Poisson Noise} &\\tabincell{c}{simulated\\\\ self-selected} &\\tabincell{c}{PSNR SSIM\\\\AB VIF\\\\LOE TOMI} &RGB &\\tabincell{c}{TensorFlow} & \\\\\n\t\t\t\t\t&SCIE~ &SL &frequency decomposition &\\tabincell{c}{$L_{2}$ loss $L_{1}$ loss SSIM loss}\n\t\t\t\t\t&SCIE &SCIE &\\tabincell{c}{PSNR FSIM\\\\Runtime FLOPs} &RGB&\\tabincell{c}{Caffe\\\\MATLAB}&\\\\\n\t\t\t\t\t&Chen et al.~ &SL &U-Net &$L_{1}$ loss\n\t\t\t\t\t&SID &SID &\\tabincell{c}{PSNR SSIM} &raw&TensorFlow&\\\\\n\t\t\t\t\t&Deepexposure~ &RL &\\tabincell{c}{policy network\\\\GAN} &\\tabincell{c}{deterministic policy gradient\\\\adversarial loss}\n\t\t\t\t\t&MIT-Adobe FiveK&MIT-Adobe FiveK &\\tabincell{c}{PSNR SSIM} &raw&TensorFlow &\\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\multirow{1}{*}{\\rotatebox{90}{\\textbf{2019}}}\n\t\t\t\t\t&Chen et al.~ &SL &siamese network &\\tabincell{c}{$L_{1}$ loss\\\\ self-consistency loss}\n\t\t\t\t\t&DRV &DRV &\\tabincell{c}{PSNR\\\\ SSIM\\\\ MAE} &raw &TensorFlow&\\\\\n\t\t\t\t\t&Jiang and Zheng~ &SL &3D U-Net &$L_{1}$ loss\n\t\t\t\t\t&SMOID &SMOID &\\tabincell{c}{PSNR SSIM MSE} &raw &TensorFlow &\\\\\n\t\t\t\t\t&DeepUPE~ &SL &illumination map &\\tabincell{c}{$L_{1}$ loss color loss\\\\smoothness loss}\n\t\t\t\t\t&retouched image pairs&MIT-Adobe FiveK&\\tabincell{c}{PSNR SSIM\\\\User Study} &RGB &TensorFlow &\\checkmark\\\\\n\t\t\t\t\t&KinD~ &SL &\\tabincell{c}{three subnetworks\\\\U-Net} &\\tabincell{c}{reflectance similarity loss\\\\illumination smoothness loss\\\\mutual consistency loss\\\\$L_{1}$ loss $L_{2}$ loss SSIM loss\\\\texture similarity loss\\\\illumination adjustment loss}\n\t\t\t\t\t&LOL &\\tabincell{c}{LOL LIME\\\\NPE MEF} &\\tabincell{c}{PSNR SSIM\\\\LOE NIQE} &RGB &TensorFlow&\\checkmark\\\\\n\t\t\t\t\t&Wang et al.~ &SL 
&\\tabincell{c}{two subnetworks\\\\pointwise Conv} &$L_{1}$ loss \n\t\t\t\t\t&\\tabincell{c}{simulated by \\\\camera imaging model} &\\tabincell{c}{IP100 FNF38\\\\MPI LOL NPE} &\\tabincell{c}{PSNR SSIM\\\\NIQE} &RGB &Caffe &\\checkmark \\\\\n\t\t\t\t\t&Ren et al.~ &SL &\\tabincell{c}{U-Net like network\\\\RNN dilated Conv} &\\tabincell{c}{$L_{2}$ loss perceptual loss\\\\adversarial loss} \n\t\t\t\t\t&\\tabincell{c}{MIT-Adobe FiveK\\\\with Gamma correction \\\\ \\& Gaussian noise} &\\tabincell{c}{simulated \\\\self-selected DPED} &\\tabincell{c}{PSNR SSIM\\\\Runtime} &RGB &Caffe&\\\\\n\t\t\t\t\t&EnlightenGAN~ &UL &U-Net like network &\\tabincell{c}{adversarial loss\\\\self feature preserving loss}\n\t\t\t\t\t&unpaired real images &\\tabincell{c}{NPE LIME\\\\MEF DICM\\\\VV BBD-100K\\\\ExDARK} &\\tabincell{c}{User Study NIQE\\\\Classification} &RGB &PyTorch&\\\\\n\t\t\t\t\t&ExCNet~ &ZSL &\n\t\t\t\t\t\\tabincell{c}{fully connected layers} &energy minimization loss &real images\n\t\t\t\t\t&$IE_{ps}D$ &\\tabincell{c}{User Study\\\\CDIQA LOD} &RGB &PyTorch&\\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\multirow{1}{*}{\\rotatebox{90}{\\textbf{2020}}}\n\t\t\t\t\t&Zero-DCE~ &ZSL &U-Net like network &\\tabincell{c}{spatial consistency loss\\\\exposure control loss\\\\color constancy loss\\\\illumination smoothness loss}\n\t\t\t\t\t&SICE &\\tabincell{c}{SICE NPE\\\\LIME MEF\\\\DICM VV\\\\ DARK FACE} &\\tabincell{c}{User Study PI\\\\PSNR SSIM\\\\MAE Runtime\\\\Face detection} &RGB &PyTorch &\\\\\n\t\t\t\t\t&DRBN~ &SSL &recursive network &\\tabincell{c}{SSIM loss perceptual loss\\\\adversarial loss}\n\t\t\t\t\t&\\tabincell{c}{LOL\\\\images selected by MOS} &LOL &\\tabincell{c}{PSNR SSIM\\\\SSIM-GC} &RGB &PyTorch &\\\\\n\t\t\t\t\t&Lv et al.~&SL &U-Net like network &\\tabincell{c}{Huber loss\\\\SSIM loss\\\\perceptual loss\\\\illumination smoothness loss}\n\t\t\t\t\t&\\tabincell{c}{simulated by a\\\\ retouching module} &\\tabincell{c}{LOL SICE \\\\DeepUPE} 
&\\tabincell{c}{User Study PSNR\\\\SSIM VIF\\\\LOE NIQE\\\\ \\#P Runtime\\\\Face detection} &RGB&\\tabincell{c}{TensorFlow}&\\checkmark\\\\\n\t\t\t\t\t&Fan et al.~ &SL &\\tabincell{c}{four subnetworks\\\\U-Net like network\\\\feature modulation} &\\tabincell{c}{mutual smoothness loss\\\\reconstruction loss\\\\illumination smoothness loss\\\\cross entropy loss\\\\ consistency loss\\\\SSIM loss\\\\gradient loss\\\\ratio learning loss}\n\t\t\t\t\t&\\tabincell{c}{simulated by \\\\illumination adjustment, \\\\slight color distortion, \\\\and noise simulation} &\\tabincell{c}{simulated\\\\self-selected} &\\tabincell{c}{PSNR SSIM\\\\NIQE} &RGB & - &\\checkmark\\\\\n\t\t\t\t\t&Xu et al.~&SL &\\tabincell{c}{frequency decomposition\\\\U-Net like network} &\\tabincell{c}{$L_{2}$ loss\\\\perceptual loss}\n\t\t\t\t\t&SID in RGB&\\tabincell{c}{SID in RGB \\\\self-selected} &\\tabincell{c}{PSNR SSIM} &RGB &PyTorch&\\\\\n\t\t\t\t\t&EEMEFN~&SL &\\tabincell{c}{U-Net like network\\\\edge detection network} & \\tabincell{c}{$L_{1}$ loss\\\\weighted cross-entropy loss}\n\t\t\t\t\t&SID &SID &\\tabincell{c}{PSNR SSIM} &raw& \\tabincell{c}{TensorFlow\\\\PaddlePaddle}&\\\\\n\t\t\t\t\t&DLN~&SL &\\tabincell{c}{residual learning\\\\interactive factor\\\\back projection network} &\\tabincell{c}{SSIM loss\\\\total variation loss}\n\t\t\t\t\t&\\tabincell{c}{simulated by \\\\illumination adjustment, \\\\slight color distortion,\\\\and noise simulation} &\\tabincell{c}{simulated \\\\LOL} &\\tabincell{c}{User Study PSNR\\\\SSIM NIQE} &RGB &PyTorch&\\\\\n\t\t\t\t\t&LPNet~&SL &\\tabincell{c}{pyramid network} &\\tabincell{c}{$L_{1}$ loss\\\\perceptual loss\\\\luminance loss}\n\t\t\t\t\t&\\tabincell{c}{LOL SID in RGB\\\\MIT-Adobe FiveK} &\\tabincell{c}{LOL SID in RGB\\\\MIT-Adobe FiveK\\\\ MEF NPE DICM VV} &\\tabincell{c}{PSNR SSIM\\\\NIQE \\#P\\\\FLOPs Runtime} &RGB &PyTorch&\\\\\n\t\t\t\t\t&SIDGAN~ &SL &U-Net &CycleGAN loss\n\t\t\t\t\t&SIDGAN &SIDGAN &\\tabincell{c}{PSNR SSIM\\\\TPSNR TSSIM ATWE} 
&raw &\\tabincell{c}{TensorFlow}&\\\\\n\t\t\t\t\t&RRDNet &ZSL&three subnetworks &\\tabincell{c}{Retinex reconstruction loss\\\\texture enhancement loss\\\\noise estimation loss}\n\t\t\t\t\t&-&\\tabincell{c}{NPE LIME\\\\MEF DICM} &\\tabincell{c}{NIQE CPCQI} &RGB&PyTorch &\\checkmark\\\\\n\t\t\t\t\t&TBEFN~&SL&\\tabincell{c}{three stages\\\\U-Net like network} &\\tabincell{c}{SSIM loss\\\\perceptual loss\\\\smoothness loss}\n\t\t\t\t\t&\\tabincell{c}{SCIE \\\\LOL} &\\tabincell{c}{SCIE LOL\\\\DICM MEF\\\\NPE VV} &\\tabincell{c}{PSNR SSIM\\\\NIQE Runtime\\\\ \\#P FLOPs} &RGB &TensorFlow&\\checkmark\\\\\n\t\t\t\t\t&DSLR~&SL&\\tabincell{c}{Laplacian pyramid\\\\U-Net like network} &\\tabincell{c}{$L_{2}$ loss\\\\Laplacian loss\\\\color loss}\n\t\t\t\t\t&MIT-Adobe FiveK &\\tabincell{c}{MIT-Adobe FiveK\\\\self-selected} &\\tabincell{c}{PSNR SSIM\\\\NIQMC NIQE\\\\BTMQI CaHDC} &RGB& PyTorch&\\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\multirow{1}{*}{\\rotatebox{90}{\\textbf{2021}}}\n\t\t\t\t\t&RUAS~ &ZSL &neural architecture search &\\tabincell{c}{cooperative loss\\\\similar loss\\\\total variation loss}\n\t\t\t\t\t&\\tabincell{c}{LOL\\\\MIT-Adobe FiveK} &\\tabincell{c}{LOL\\\\MIT-Adobe FiveK} &\\tabincell{c}{PSNR SSIM\\\\ Runtime \\#P FLOPs} &RGB &PyTorch &\\checkmark\\\\\n\t\t\t\t\t&Zhang et al.~&SL &U-Net &\\tabincell{c}{$L_{1}$ loss\\\\ consistency loss}\n\t\t\t\t\t&\\tabincell{c}{simulated by illumination \\\\adjustment and noise simulation} &\\tabincell{c}{simulated \\\\self-selected} &\\tabincell{c}{User Study PSNR\\\\SSIM AB\\\\MABD WE} &RGB&\\tabincell{c}{PyTorch}&\\\\\n\t\t\t\t\t&Zero-DCE++~ &ZSL &U-Net like network &\\tabincell{c}{spatial consistency loss\\\\exposure control loss\\\\color constancy loss\\\\illumination smoothness loss}\n\t\t\t\t\t&SICE &\\tabincell{c}{SICE NPE\\\\LIME MEF\\\\DICM VV\\\\ DARK FACE} &\\tabincell{c}{User Study PI\\\\PSNR SSIM \\#P\\\\MAE Runtime\\\\Face detection FLOPs} &RGB &PyTorch &\\\\\n\t\t\t\t\t&DRBN~ &SSL &recursive 
network &\\tabincell{c}{perceptual loss\\\\detail loss quality loss}\n\t\t\t\t\t&LOL &LOL &\\tabincell{c}{PSNR SSIM\\\\SSIM-GC} &RGB &PyTorch &\\\\\n\t\t\t\t\t&Retinex-Net~ &SL &\n\t\t\t\t\tthree subnetworks &\\tabincell{c}{$L_{1}$ loss $L_{2}$ loss\\\\SSIM loss\\\\total variation loss}\n\t\t\t\t\t&\\tabincell{c}{LOL\\\\simulated by adjusting histogram} &\\tabincell{c}{LOL simulated\\\\NPE DICM VV} &\\tabincell{c}{PSNR SSIM\\\\UQI OSS User Study} &RGB &PyTorch&\\checkmark\\\\\n\t\t\t\t\t&RetinexDIP &ZSL & encoder-decoder networks &\\tabincell{c}{reconstruction loss\\\\illumination-consistency loss\\\\reflectance loss \\\\illumination smoothness loss}&- &\\tabincell{c}{DICM, ExDark\\\\Fusion LIME\\\\ NASA NPE VV} &\\tabincell{c}{NIQE \\\\NIQMC CPCQI} &RGB &PyTorch&\\checkmark\\\\\n\t\t\t\t\t&PRIEN~ &SL &recursive network &SSIM loss&\\tabincell{c}{MEF LOL\\\\simulated by adjusting histogram} &\\tabincell{c}{LOL LIME\\\\NPE MEF VV} &\\tabincell{c}{PSNR SSIM\\\\LOE TMQI} &RGB &PyTorch &\\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\\end{tabular}\n\t\t\t}\n\t\t\\end{threeparttable}\n\t\\end{table*}", "id": "44126fdb-481e-4904-b2e2-6e15d767dcdd", "level": "subsection", "origin_cites_number": 6, "parent_id": "8eda68ea-596d-47b0-8c59-3c11fbc66273", "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], [ "section", "Deep Learning-Based LLIE" ], [ "subsection", "Learning Strategies" ] ], "subsections": [], "title": "Learning Strategies" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:Technical}\nIn this section, we first summarize the representative deep learning-based LLIE methods in Table \\ref{table:methods}, then analyze and discuss their technical characteristics.", "id": "3ad6854d-28fe-41e8-80bd-198011ce8b19", "level": "section", "origin_cites_number": 0, "parent_id": "43aba542-7f6b-493f-9beb-9ae4164ca84b", "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], 
[ "section", "Technical Review and Discussion" ] ], "subsections": [ "68e1f02c-799a-4eae-8da7-114a11e886b0", "6a2b6ca0-a22f-4003-8b94-e1d5297a6a5e", "ae53f5e7-73c5-4548-9f34-2ad3930fd293", "30a23ace-8ed7-4b68-a78a-02ea2fbd6679", "b98d1366-433d-4a92-8dac-6386ad39398e", "e339615c-7f2d-4a6a-95fe-00d1129c19c1", "afdfe29b-5291-42fa-b702-0e326d794db8" ], "title": "Technical Review and Discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "Diverse network structures and designs have been used in the existing models, spanning from the basic U-Net, pyramid network, multi-stage network to frequency decomposition network. \nAfter analyzing Figure \\ref{fig:statistic}(b), it can be observed that the U-Net and U-Net-like networks are the most commonly adopted network structures in LLIE. \nThis is because U-Net can effectively integrate multi-scale \nfeatures and employ both low-level and high-level features. Such characteristics are essential for achieving satisfactory low-light enhancement.\nNevertheless, some key issues may be ignored in the current LLIE network structures:\n\\textbf{1)} after going through several convolutional layers, the gradients of an extremely low-light image may vanish during the gradient back-propagation due to its small pixel values. This would degrade the enhancement performance and affect the convergence of network training, \n\\textbf{2)} the skip-connections used in the U-Net-like networks might introduce noise and redundant features into the final results. How to effectively filter out the noise and integrate both low-level and high-level features should be carefully considered, and \n\\textbf{3)} although some designs and components are proposed for LLIE, most of them are borrowed or modified from related low-level visual tasks. 
The characteristics of low-light data should be considered when designing the network structures.", "id": "68e1f02c-799a-4eae-8da7-114a11e886b0", "level": "subsection", "origin_cites_number": 0, "parent_id": "3ad6854d-28fe-41e8-80bd-198011ce8b19", "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], [ "section", "Technical Review and Discussion" ], [ "subsection", "Network Structure" ] ], "subsections": [], "title": "Network Structure" }, { "cite_extract_rate": 0, "cites": [], "content": "As presented in Figure \\ref{fig:statistic}(c), almost 1/3 of methods combine the designs of deep networks with the Retinex theory, e.g., designing different subnetworks to estimate the components of the Retinex model and estimating the illumination map to guide the learning of networks. Although such a combination can bridge deep learning-based and model-based methods, their respective weaknesses may be introduced into the final models: \n\\textbf{1)} the ideal assumption that the reflectance is the final enhanced result used in Retinex-based LLIE methods would still affect the final results, and \n\\textbf{2)} the risk of overfitting in deep networks still exists despite the use of Retinex theory. 
How to retain the strengths of both paradigms while filtering out their weaknesses should be carefully considered when combining deep learning with the Retinex theory.", "id": "6a2b6ca0-a22f-4003-8b94-e1d5297a6a5e", "level": "subsection", "origin_cites_number": 0, "parent_id": "3ad6854d-28fe-41e8-80bd-198011ce8b19", "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], [ "section", "Technical Review and Discussion" ], [ "subsection", "Combination of Deep Model and Retinex Theory" ] ], "subsections": [], "title": "Combination of Deep Model and Retinex Theory" }, { "cite_extract_rate": 0, "cites": [], "content": "As shown in Figure \\ref{fig:statistic}(d), RGB data format dominates most methods as it is commonly found as the final imagery form produced by smartphone cameras, Go-Pro cameras, and drone cameras. Although raw data are limited to specific sensors such as those based on Bayer patterns, the data cover a wider color gamut and a higher dynamic range. Hence, deep models trained on raw data usually recover clear details and high contrast, obtain vivid color, reduce the effects of noise and artifacts, and improve the brightness of extremely low-light images. 
In future research, a smooth transformation from raw data of different patterns to RGB format would have the potential to combine the convenience of RGB data and the advantage of high-quality enhancement of raw data for LLIE.", "id": "ae53f5e7-73c5-4548-9f34-2ad3930fd293", "level": "subsection", "origin_cites_number": 0, "parent_id": "3ad6854d-28fe-41e8-80bd-198011ce8b19", "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], [ "section", "Technical Review and Discussion" ], [ "subsection", "Data Format" ] ], "subsections": [], "title": "Data Format" }, { "cite_extract_rate": 0, "cites": [], "content": "As shown in Figure~\\ref{fig:statistic}(e), the commonly adopted loss functions in LLIE models include reconstruction loss ($L_{1}$, $L_{2}$, SSIM), perceptual loss, and smoothness loss. Besides, according to different demands and formulations, color loss, exposure loss, adversarial loss, etc. are also adopted. We detail representative loss functions as follows.\n\\noindent\n\\textbf{Reconstruction Loss.}\n\tDifferent reconstruction losses have their advantages and disadvantages. $L_{2}$ loss tends to penalize larger errors, but is tolerant to small errors. $L_{1}$ loss preserves colors and\n\tluminance well since an error is weighted equally regardless of the local structure. SSIM loss preserves the structure and texture well. Refer to this research paper for detailed analysis.\n\\noindent\n\\textbf{Perceptual Loss.}\n\tPerceptual loss , particularly the feature reconstruction loss, is proposed to constrain the results to be similar to the ground truth in the feature space. The loss improves the visual quality of results. It is defined as the Euclidean distance between the feature representations of an enhanced result and those of the corresponding ground truth. The feature representations are typically extracted from the VGG network pre-trained on the ImageNet dataset . 
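To make the reconstruction and perceptual losses above concrete, a minimal NumPy sketch follows; `toy_features` is a hypothetical stand-in for the pre-trained VGG feature extractor used in practice, and the loss weight is an illustrative assumption:

```python
import numpy as np

def l1_loss(pred, gt):
    """L1 reconstruction loss: weights every error equally."""
    return np.mean(np.abs(pred - gt))

def l2_loss(pred, gt):
    """L2 reconstruction loss: penalizes larger errors more heavily."""
    return np.mean((pred - gt) ** 2)

def perceptual_loss(pred, gt, feature_fn):
    """Distance between feature representations of result and ground truth.

    feature_fn stands in for a pre-trained network (e.g., VGG features
    from an ImageNet-trained model); any callable mapping an image to a
    feature array works for this sketch.
    """
    return np.mean((feature_fn(pred) - feature_fn(gt)) ** 2)

# Hypothetical feature extractor: vertical gradients as crude "features"
toy_features = lambda x: np.diff(x, axis=0)

pred = np.linspace(0.0, 1.0, 16).reshape(4, 4)
gt = np.clip(pred + 0.1, 0.0, 1.0)
total = l1_loss(pred, gt) + 0.1 * perceptual_loss(pred, gt, toy_features)
```

In practice such terms are combined with task-specific weights and minimized jointly during training.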
\n\\noindent\n\\textbf{Smoothness Loss.}\n\tTo remove noise in the enhanced results or preserve the relationship of neighboring pixels, smoothness loss (TV loss) is often used to constrain the enhanced result or the estimated illumination map.\n\\noindent\n\\textbf{Adversarial Loss.}\n\tTo encourage enhanced results to be indistinguishable from reference images, adversarial learning solves a max-min optimization problem .\n\\noindent\n\\textbf{Exposure Loss.}\n\tAs one of the key non-reference losses, exposure loss measures the exposure levels of enhanced results without paired or unpaired images as reference images.\n\tThe commonly used loss functions in LLIE networks are also employed in image reconstruction networks for image super-resolution , image denoising , image deraining , and image deblurring .\n\tDifferent from these versatile losses, the specially designed exposure loss for LLIE inspires the design of non-reference losses. A non-reference loss allows a model to enjoy better generalization capability.\n\tConsidering image characteristics in the design of loss functions is an on-going research direction.", "id": "30a23ace-8ed7-4b68-a78a-02ea2fbd6679", "level": "subsection", "origin_cites_number": 0, "parent_id": "3ad6854d-28fe-41e8-80bd-198011ce8b19", "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], [ "section", "Technical Review and Discussion" ], [ "subsection", "Loss Function" ] ], "subsections": [], "title": "Loss Function" }, { "cite_extract_rate": 1, "cites": [ 1875 ], "content": "Figure \\ref{fig:statistic}(f) reports the usage of a variety of paired training datasets for training low-light enhancement networks. These datasets include real-world captured datasets and synthetic datasets. We list them in Table \\ref{table:training}. 
\n\\noindent\n\\textbf{Simulated by Gamma Correction.}\nOwing to its nonlinearity and simplicity, \nGamma correction is used to adjust the luminance or tristimulus values in video or still image systems. It is defined by a power-law expression:\n\\begin{equation}\n\t\\label{equ_12}\n\tV_{\\text{out}}=AV_{\\text{in}}^{\\gamma},\n\\end{equation}\nwhere the input $V_{\\text{in}}$ and output $V_{\\text{out}}$ are typically in the range of [0,1]. The constant $A$ is set to 1 in the common case. The power $\\gamma$ controls the luminance of the output. Intuitively, the input is brightened when $\\gamma<$1 while the input is darkened when $\\gamma>$1. The input can be the three RGB channels of an image or the luminance-related channels such as $L$ channel in the CIELab color space and $Y$ channel in the YCbCr color space. After adjusting the luminance-related channel using Gamma correction, the corresponding channels in the color space are adjusted by equal proportion to avoid producing artifacts and color deviations. \nTo simulate images taken in real-world low-light scenes, Gaussian noise, Poisson noise, or realistic noise is added to the Gamma corrected images. The low-light image synthesized using Gamma correction can be expressed as:\n\\begin{equation}\n\t\\label{equ_13}\n\tI_{\\text{low}}=n(g(I_{\\text{in}};\\gamma)),\n\\end{equation}\nwhere $n$ represents the noise model, $g(I_{\\text{in}};\\gamma)$ represents the Gamma correction function with Gamma value $\\gamma$, $I_{\\text{in}}$ is a normal-light and high-quality image or luminance-related channel. Although this function produces low-light images of different lighting levels by changing the Gamma value $\\gamma$, \nit tends to introduce artifacts and color deviations into the synthetic low-light images due to the nonlinear adjustment. \n\\begin{table}\n\t\\centering\n\t\\setlength\\tabcolsep{5pt}\n\t\\caption{\n\t\t{Summary of paired training datasets. 
`Syn' represents Synthetic.}\n\t}\n\t\\vspace{-6pt}\n\t\\label{table:training}\n\t\\begin{tabular}{r|c|c|c|c}\n\t\t\\hline\n\t\t\\textbf{Name}~~~~~~~~~~~~~~~~~~~~~&\\textbf{Number}& \n\t\t\\textbf{Format} & \\textbf{Real/Syn}&\\textbf{Video} \\\\\n\t\t\\hline\n\t\tGamma Correction &+$\\infty$ &RGB \n\t\t&Syn & \\\\\n\t\tRandom Illumination &+$\\infty$ &RGB \n\t\t&Syn & \\\\\n\t\t\\hline\n\t\tLOL &500 &RGB \n\t\t&Real &\\\\\n\t\tSCIE &4,413 &RGB \n\t\t&Real &\\\\\n\t\tVE-LOL-L &2,500 &RGB \n\t\t&Real+Syn &\\\\\n\t\tMIT-Adobe FiveK &5,000 &raw \n\t\t&Real &\\\\\n\t\tSID &5,094 &raw \n\t\t&Real &\\\\\n\t\tDRV &202 &raw \n\t\t&Real &\\checkmark\\\\\n\t\tSMOID &179 &raw \n\t\t&Real &\\checkmark\\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\vspace{-4pt}\n\\end{table}\n\\noindent\n\\textbf{Simulated by Random Illumination.}\nAccording to the Retinex model, an image can be decomposed into a reflectance component and an illumination component. Assuming the image content is independent of the illumination component and local regions in the illumination component have the same intensity, a low-light image can be obtained by \n\\begin{equation}\n\t\\label{equ_14}\n\tI_{\\text{low}}=I_{\\text{in}}L,\n\\end{equation}\nwhere $L$ is a random illumination value in the range of [0,1]. Noise can be added to the synthetic image. Such a linear function avoids artifacts, but the strong assumption requires the synthesis to operate only on image patches where local regions have the same brightness. A deep model trained on such image patches may lead to sub-optimal performance due to the neglect of context information.\n\\noindent\n\\textbf{LOL.}\nLOL is the first paired low-/normal-light image dataset taken in real scenes. The low-light images are collected by changing the exposure time and ISO. LOL contains 500 pairs of low-/normal-light images of size 400$\\times$600 saved in RGB format. 
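The two synthesis pipelines described above (Gamma correction and random illumination) can be sketched as follows; the gamma value, noise level, and illumination range are illustrative assumptions rather than values prescribed by any particular method:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def synthesize_gamma(img, gamma=2.5, noise_sigma=0.02):
    """Low-light synthesis I_low = n(g(I_in; gamma)):
    Gamma correction (gamma > 1 darkens) plus additive Gaussian noise."""
    dark = np.power(img, gamma)          # V_out = A * V_in^gamma, with A = 1
    noisy = dark + rng.normal(0.0, noise_sigma, size=img.shape)
    return np.clip(noisy, 0.0, 1.0)

def synthesize_random_illumination(img):
    """Low-light synthesis I_low = I_in * L with a random illumination
    value L; valid for patches whose local regions share one brightness."""
    L = rng.uniform(0.05, 0.5)           # illustrative illumination range
    return img * L

normal = np.full((4, 4, 3), 0.8)         # uniform normal-light patch
dark_gamma = synthesize_gamma(normal)
dark_illum = synthesize_random_illumination(normal)
```

As noted above, the nonlinear Gamma pipeline can introduce artifacts and color deviations, while the linear illumination pipeline avoids them at the cost of a strong patch-level assumption.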
\n\\noindent\n\\textbf{SCIE.}\nSCIE is a multi-exposure image dataset of low-contrast and good-contrast image pairs. It includes multi-exposure sequences of 589 indoor and outdoor scenes. Each sequence has 3 to 18 low-contrast images of different exposure levels, thus containing 4,413 multi-exposure images in total. The 589 high-quality reference images are obtained by selecting from the results of 13 representative enhancement algorithms. That is, many multi-exposure images have the same high-contrast reference image.\nThe image resolutions are between 3,000$\\times$2,000 and \n6,000$\\times$4,000. The images in SCIE are saved in RGB format.\n\\noindent\n\\textbf{MIT-Adobe FiveK.}\nMIT-Adobe FiveK was collected for global tone adjustment but has been used in LLIE. This is because the input images have low light and low contrast. MIT-Adobe FiveK contains 5,000 images, each of which is retouched by 5 trained photographers towards visually pleasing renditions, akin to a postcard. The images are all in raw format. To train the networks that can handle images of RGB format, one needs to use Adobe Lightroom to pre-process the images and save them as RGB format following a dedicated pipeline\\footnote{\\url{https://github.com/nothinglo/Deep-Photo-Enhancer/issues/38\\#issuecomment-449786636}}.\nThe images are commonly resized to have a long edge of 500 pixels.\n\\noindent\n\\textbf{SID.}\nSID contains 5,094 raw short-exposure images, each with a corresponding long-exposure reference image. The number of distinct long-exposure reference images is 424. In other words, multiple short-exposure images correspond to the same long-exposure reference image. \nThe images were taken using two cameras: Sony $\\alpha$7S II and Fujifilm X-T2 in both indoor and outdoor scenes. Thus, the images have different sensor patterns (Sony camera's Bayer sensor and Fuji camera's APS-C X-Trans sensor). \nThe resolution is 4,240$\\times$2,832 for Sony and 6,000$\\times$4,000 for Fuji. 
Usually, the long-exposure images are processed by libraw (a raw image processing library) and saved in the RGB color space, and 512$\\times$512 patches are randomly cropped for training.\n\\noindent\n\\textbf{VE-LOL.} VE-LOL consists of two subsets: paired VE-LOL-L that is used for training and evaluating LLIE methods and unpaired VE-LOL-H that is used for evaluating the effect of LLIE methods on face detection. Specifically, VE-LOL-L includes 2,500 paired images. Among them, 1,000 pairs are synthetic, while 1,500 pairs are real. VE-LOL-H includes 10,940 unpaired images, where human faces are manually annotated with bounding boxes. \n\\noindent\n\\textbf{DRV.}\nDRV contains 202 static raw videos, each of which has a corresponding long-exposure ground truth. Each video was taken at approximately 16 to 18 frames per second in a continuous shooting mode and contains up to 110 frames. The images were taken by a Sony RX100 VI camera in both indoor and outdoor scenes, thus all in raw format of Bayer pattern. The resolution is 3,672$\\times$5,496. \n\\noindent\n\\textbf{SMOID.}\nSMOID contains 179 pairs of videos taken by a co-axis optical system, each of which has 200 frames. Thus, SMOID includes 35,800 extremely low-light raw frames of Bayer pattern and their corresponding well-lit RGB counterparts. SMOID consists of moving vehicles and pedestrians under different illumination conditions.\nSome issues challenge the aforementioned paired training datasets: \n\\textbf{1)} deep models trained on synthetic data may introduce artifacts and color deviations when processing real-world images and videos due to the gap between synthetic data and real data, \n\\textbf{2)} the scale and diversity of real training data are unsatisfactory, thus some methods incorporate synthetic data to augment the training data. 
This may lead to sub-optimal enhancement, and \n\t\\textbf{3)} the input images and corresponding ground truths may be misaligned due to the effects of motion, hardware, and environment. This would affect the performance of deep networks trained using pixel-wise loss functions.", "id": "b98d1366-433d-4a92-8dac-6386ad39398e", "level": "subsection", "origin_cites_number": 2, "parent_id": "3ad6854d-28fe-41e8-80bd-198011ce8b19", "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], [ "section", "Technical Review and Discussion" ], [ "subsection", "Training Datasets" ] ], "subsections": [], "title": "Training Datasets" }, { "cite_extract_rate": 1, "cites": [ 1875 ], "content": "In addition to the testing subsets in the paired datasets , there are several testing datasets collected from related works or commonly used for experimental comparisons. \nBesides, some datasets such as face detection in the dark and detection and recognition in low-light images are employed to test the effects of LLIE on high-level visual tasks. We summarize the commonly used testing datasets in Table \\ref{table:testing} and introduce the representative testing datasets as follows. 
\n\\begin{table}\n\t\\centering\n\t\\caption{\n\t\t{Summary of testing datasets.}\n\t}\n\t\\vspace{-6pt}\n\t\\label{table:testing}\n\t\\begin{tabular}{r|c|c|c|c}\n\t\t\\hline\n\t\t\\textbf{Name}~~~~~~~~~&\\textbf{Number}& \n\t\t\\textbf{Format} & \\textbf{Application}&\\textbf{Video} \\\\\n\t\t\\hline\n\t\tLIME &10 &RGB & & \\\\\n\t\tNPE &84 &RGB & & \\\\\n\t\tMEF &17 &RGB & &\\\\\n\t\tDICM &64 &RGB & &\\\\\n\t\tVV\\tablefootnote{\\url{https://sites.google.com/site/vonikakis/datasets}} &24 &RGB & &\\\\\n\t\t\\hline\n\t\tBDD100K &10,000 &RGB &\\checkmark &\\checkmark\\\\\n\t\tExDARK &7,363 &RGB &\\checkmark &\\\\\n\t\tDARK FACE &6,000 &RGB &\\checkmark &\\\\\n\t\tVE-LOL-H &10,940 &RGB &\\checkmark &\\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\vspace{-4pt}\n\\end{table}\n\\noindent\n\\textbf{BDD100K.}\nBDD100K is the largest driving video dataset, with 10,000 videos spanning over 1,100 hours of driving across different times of day, weather conditions, and driving scenarios, along with annotations for 10 tasks. The videos taken at nighttime in BDD100K are used to validate the effects of LLIE on high-level visual tasks and the enhancement performance in real scenarios.\n\\noindent\n\\textbf{ExDARK.}\nThe ExDARK~ dataset is built for object detection and recognition in low-light images. It contains 7,363 low-light images, ranging from extremely low-light environments to twilight, with 12 object classes annotated with image-class labels and local object bounding boxes.\n\\noindent\n\\textbf{DARK FACE.}\nThe DARK FACE dataset contains 6,000 low-light images captured during the nighttime, each of which is labeled with bounding boxes of human faces.\nFrom Fig. \\ref{fig:statistic}(g) and Table \\ref{table:methods}, we can observe that most works prefer to use self-collected testing data in their experiments. 
The main reasons are three-fold: \n\\textbf{1)} besides the test partition of paired datasets, there is no acknowledged benchmark for evaluations, \\textbf{2)}\nthe commonly used test sets suffer from some shortcomings such as small scale (some test sets contain only 10 images), repeated content and illumination properties, and unknown experimental settings, and \\textbf{3)} some of the commonly used testing data are not originally collected for evaluating LLIE. In general, current testing datasets may lead to bias and unfair comparisons.", "id": "e339615c-7f2d-4a6a-95fe-00d1129c19c1", "level": "subsection", "origin_cites_number": 4, "parent_id": "3ad6854d-28fe-41e8-80bd-198011ce8b19", "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], [ "section", "Technical Review and Discussion" ], [ "subsection", "Testing Datasets" ] ], "subsections": [], "title": "Testing Datasets" }, { "cite_extract_rate": 0, "cites": [], "content": "Besides human perception-based subjective evaluations, image quality assessment (IQA) metrics, including both full-reference and non-reference IQA metrics, are able to evaluate image quality objectively. In addition, user studies, the number of trainable parameters, FLOPs, runtime, and applications also reflect the performance of LLIE models, as shown in Fig. \\ref{fig:statistic}(h). \nWe detail them as follows.\n\\noindent\n\\textbf{PSNR and MSE.}\nPSNR and MSE are widely used IQA metrics. Both are non-negative; a higher PSNR and an MSE closer to zero indicate better quality.\nNevertheless, the pixel-wise PSNR and MSE may provide an inaccurate indication of the visual perception of image quality since they neglect the relations between neighboring pixels.\n\\noindent\n\\textbf{MAE.}\nMAE represents the mean absolute error, serving as a measure of errors between paired observations. 
\nThe smaller the MAE value is, the better the similarity is.\n\\noindent\n\\textbf{SSIM.}\nSSIM is used to measure the similarity between two images. It is a perception-based model that considers image degradation as a perceived change in structural information. \nThe value 1 is only reachable for two identical sets of data, indicating perfect structural similarity. \n\\noindent\n\\textbf{LOE.}\nLOE represents the lightness order error, which reflects the naturalness of an enhanced image.\nThe smaller the LOE value is, the better the lightness order is preserved.\n\\noindent\n\\textbf{Application.} \nBesides improving the visual quality, one of the purposes of image enhancement is to serve high-level visual tasks. Thus, the effects of LLIE on high-level visual applications are commonly examined to validate the performance of different methods.\nThe current evaluation approaches used in LLIE need to be improved in several aspects: \n\\textbf{1)} although PSNR, MSE, MAE, and SSIM are classic and popular metrics, they are still far from capturing the real visual perception of humans,\n\\textbf{2)} some metrics are not originally designed for low-light images. They are used for assessing the fidelity of image information and contrast. These metrics may partially reflect image quality, but they are far from the real purpose of low-light enhancement,\n\\textbf{3)} metrics specially designed for low-light images are lacking, with the LOE metric as the only exception. 
Moreover, there is no metric for evaluating low-light video enhancement, and\n\\textbf{4)} a metric that can balance both the human vision and the machine perception is expected.", "id": "afdfe29b-5291-42fa-b702-0e326d794db8", "level": "subsection", "origin_cites_number": 0, "parent_id": "3ad6854d-28fe-41e8-80bd-198011ce8b19", "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], [ "section", "Technical Review and Discussion" ], [ "subsection", "Evaluation Metrics" ] ], "subsections": [], "title": "Evaluation Metrics" }, { "cite_extract_rate": 1, "cites": [ 1875 ], "content": "\\label{sec:evaluation}\nThis section provides empirical analysis and highlights some key challenges in deep learning-based LLIE. To facilitate the analysis, we propose a low-light image and video dataset to examine the performance of different solutions. We also develop the first online platform, where the results of LLIE models can be produced via a user-friendly web interface. In this section, we conduct extensive evaluations on several benchmarks and our proposed dataset. \nIn the experiments, we compare 13 representative RGB format-based methods, including eight supervised learning-based methods (LLNet , LightenNet , Retinex-Net , MBLLEN , KinD , KinD++ , TBEFN , DSLR ), one unsupervised learning-based method (EnlightenGAN ), one semi-supervised learning-based method (DRBN ), and three zero-shot learning-based methods (ExCNet , Zero-DCE , RRDNet ). Besides, we also compare two raw format-based methods, including SID and EEMEFN . Note that RGB format-based methods dominate LLIE. Moreover, most raw format-based methods do not release their code. Thus, we choose two representative methods to provide empirical analysis and insights. For all compared methods, we use the publicly available code to produce their results for fair comparisons. \n\\begin{table}\n\t\\centering\n\t\\caption{{Summary of LLIV-Phone dataset. 
LLIV-Phone dataset contains 120 videos (45,148 images) taken by 18 different mobile phones' cameras. ``\\#Video'' and ``\\#Image'' represent the number of videos and images, respectively.}}\n\t\\vspace{-6pt}\n\t\\label{table:LLIVPhone}\n\t\\begin{tabular}{r|c|c|c}\n\t\t\\hline\n\t\t\\textbf{Phone's Brand}&\\textbf{\\#Video}& \n\t\t\\textbf{\\#Image} &\\textbf{Resolution}\\\\\n\t\t\\hline\n\t\tiPhone 6s &4 &1,029 &1920$\\times$1080\\\\\n\t\tiPhone 7 &13 &6,081 &1920$\\times$1080\\\\\n\t\tiPhone7 Plus &2 &900 &1920$\\times$1080\\\\\n\t\tiPhone8 Plus &1 &489 &1280$\\times$720\\\\\n\t\tiPhone 11 &7 &2,200 &1920$\\times$1080\\\\\n\t\tiPhone 11 Pro &17 &7,739 &1920$\\times$1080\\\\\n\t\tiPhone XS &11 &2,470 &1920$\\times$1080\\\\\n\t\tiPhone XR &16 &4,997 &1920$\\times$1080\\\\\n\t\tiPhone SE &1 &455 &1920$\\times$1080\\\\\n\t\tXiaomi Mi 9 &2 &1,145 &1920$\\times$1080\\\\\n\t\tXiaomi Mi Mix 3 &6 &2,972 &1920$\\times$1080\\\\\n\t\tPixel 3&4 &1,311 &1920$\\times$1080\\\\\n\t\tPixel 4&3 &1,923 &1920$\\times$1080\\\\\n\t\tOppo R17 &6 &2,126 &1920$\\times$1080\\\\\n\t\tVivo Nex &12 &4,097 &1280$\\times$720\\\\\n\t\tLG M322&2 &761 &1920$\\times$1080\\\\\n\t\tOnePlus 5T &1 &293 &1920$\\times$1080\\\\\n\t\tHuawei Mate 20 Pro & 12&4,160 &1920$\\times$1080\\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\vspace{-4pt}\n\\end{table}", "id": "62b1377f-b6e1-4398-be9f-a1fa2952056b", "level": "section", "origin_cites_number": 3, "parent_id": "43aba542-7f6b-493f-9beb-9ae4164ca84b", "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], [ "section", "Benchmarking and Empirical Analysis" ] ], "subsections": [ "d8e5b8c1-ee06-4aad-800d-232c4dd98922", "7ee6e01b-208a-4fef-a9a1-3cc6a973f6c4", "67f70829-d734-4fe0-baa6-b7f64f6f5b82", "e142566f-157a-4609-a670-8df6dbfe9874", "8ea7b52c-e5f5-46c9-8ff5-69763784bc99" ], "title": "Benchmarking and Empirical Analysis" }, { "cite_extract_rate": 1, "cites": [ 1875 ], "content": "We propose a Low-Light Image and 
Video dataset, called LLIV-Phone, to comprehensively validate the performance of LLIE methods. LLIV-Phone is the largest and most challenging real-world testing dataset of its kind. In particular, the dataset contains 120 videos (45,148 images) taken by 18 different mobile phones' cameras, including iPhone 6s, iPhone 7, iPhone7 Plus, iPhone8 Plus, iPhone 11, iPhone 11 Pro, iPhone XS, iPhone XR, iPhone SE, Xiaomi Mi 9, Xiaomi Mi Mix 3, Pixel 3, Pixel 4, Oppo R17, Vivo Nex, LG M322, OnePlus 5T, and Huawei Mate 20 Pro, under diverse illumination conditions (e.g., weak lighting, underexposure, moonlight, twilight, dark, extremely dark, back-lit, non-uniform light, and colored light) in both indoor and outdoor scenes. A summary of the LLIV-Phone dataset is provided in Table \\ref{table:LLIVPhone}. \nWe present several samples of the LLIV-Phone dataset in Figure \\ref{fig:dataset_samples}. \nThe LLIV-Phone dataset is available at the project page. \nThis challenging dataset is collected in real scenes and contains diverse low-light images and videos. Consequently, it is suitable for evaluating the generalization capability of different low-light image and video enhancement models. Notably, the dataset can be used as the training dataset for unsupervised learning and the reference dataset for synthesis methods to generate realistic low-light data. \n\\begin{figure}[!t]\n\t\\centering \\centerline{\\includegraphics[width=1\\linewidth]{figures/dataset_samples.png}}\n\t\\vspace{-2pt}\n\t\\caption{Several images sampled from the proposed LLIV-Phone dataset. 
The images and videos are taken by different devices under diverse lighting conditions and scenes.}\n\t\\label{fig:dataset_samples}\n\t\\vspace{-4pt}\n\\end{figure}\n\\begin{figure*} [!t]\n\t\\begin{center}\n\t\t\\begin{tabular}{c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c}\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/LOL_input.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/LOL_LLNet.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/LOL_LightenNet.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/LOL_RetinexNet.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/LOL_MBLLEN.png}\\\\\n\t\t\t(a) input & (b) LLNet & (c) LightenNet & (d) Retinex-Net & (e) \tMBLLEN \\\\\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/LOL_KinD.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/LOL_KinD++.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/LOL_TBEFN.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/LOL_DSLR.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/LOL_EnlightenGAN.png}\\\\\n\t\t\t(f) KinD & (g) KinD++ & (h) TBEFN & (i) \tDSLR & (j) EnlightenGAN \\\\\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/LOL_DRBN.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/LOL_ExCNet.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/LOL_ZeroDCE.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/LOL_RRDNet.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/LOL_GT.png}\\\\\n\t\t\t(k) DRBN & (l)\tExCNet & (m) Zero-DCE & (n) \tRRDNet & (o) GT \\\\\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\vspace{-2pt}\n\t\\caption{Visual results of different methods on a low-light image sampled from LOL-test dataset.}\n\t\\label{fig:LOL}\n\t\\vspace{-4pt}\n\\end{figure*}\n\\begin{figure*}[!t]\n\t\\begin{center}\n\t\t\\begin{tabular}{c@{ }c@{ }c@{ }c@{ }c@{ }c@{ 
}c}\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/5K_input.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/5K_LLNet.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/5K_LightenNet.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/5K_RetinexNet.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/5K_MBLLEN.png}\\\\\n\t\t\t(a) input & (b) LLNet & (c) LightenNet & (d) Retinex-Net & (e) \tMBLLEN \\\\\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/5K_KinD.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/5K_KinD++.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/5K_TBEFN.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/5K_DSLR.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/5K_EnlightenGAN.png}\\\\\n\t\t\t(f) KinD & (g) KinD++ & (h) TBEFN & (i) \tDSLR & (j) EnlightenGAN \\\\\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/5K_DRBN.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/5K_ExCNet.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/5K_ZeroDCE.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/5K_RRDNet.png}&\n\t\t\t\\includegraphics[width=0.18\\linewidth]{figures/5K_GT.png}\\\\\n\t\t\t(k) DRBN & (l)\tExCNet & (m) Zero-DCE & (n) \tRRDNet & (o) GT \\\\\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\vspace{-2pt}\n\t\\caption{Visual results of different methods on a low-light image sampled from MIT-Adobe FiveK-test dataset.}\n\t\\label{fig:5K}\n\t\\vspace{-4pt}\n\\end{figure*}", "id": "d8e5b8c1-ee06-4aad-800d-232c4dd98922", "level": "subsection", "origin_cites_number": 3, "parent_id": "62b1377f-b6e1-4398-be9f-a1fa2952056b", "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], [ "section", "Benchmarking and Empirical Analysis" ], [ "subsection", "A New Low-Light Image and Video Dataset" ] ], "subsections": [], "title": "A New Low-Light Image and Video Dataset" }, 
{ "cite_extract_rate": 1, "cites": [ 1875 ], "content": "Different deep models may be implemented in different platforms such as Caffe, Theano, TensorFlow, and PyTorch. As a result, different algorithms demand different configurations, GPU versions, and hardware specifications. Such requirements are prohibitive for many researchers, especially beginners who are new to this area and may not even have GPU resources. To resolve these problems, we develop an LLIE online platform, called LLIE-Platform, which is available at \\url{http://mc.nankai.edu.cn/ll/}. \nAs of the date of this submission, the LLIE-Platform covers 14 popular deep learning-based LLIE methods including LLNet , LightenNet , Retinex-Net , EnlightenGAN , MBLLEN , KinD , KinD++ , TBEFN , DSLR , DRBN , ExCNet , Zero-DCE , Zero-DCE++ , and RRDNet , where the results of any input can be produced through a user-friendly web interface. We will regularly add new methods to this platform.\nWe hope that this LLIE-Platform can serve the\ngrowing research community by providing users with a flexible interface\nto run existing deep learning-based LLIE methods and develop their own new LLIE methods. 
\n\\newcommand{\\addImg}[1]{\\includegraphics[width=0.14\\linewidth,height=0.2\\linewidth]{figures/#1}}\n\\begin{figure*}[!t]\n\t\\centering\n\t\\setlength\\tabcolsep{0.5pt}\n\t\\begin{tabular}{ccccccccc}\n\t\t\\addImg{Phone_input1.png} &\n\t\t\\addImg{Phone_LLNet1.png}&\n\t\t\\addImg{Phone_LightenNet1.png}&\n\t\t\\addImg{Phone_RetinexNet1.png}&\n\t\t\\addImg{Phone_MBLLEN1.png}&\n\t\t\\addImg{Phone_KinD1.png}&\n\t\t\\addImg{Phone_KinD++1.png}&\\\\\n\t\t\t(a) & (b) & (c) & (d) & (e) & (f) & (g) \\\\\n\t\t\\addImg{Phone_TBEFN1.png}&\n\t\t\\addImg{Phone_DSLR1.png}&\n\t\t\\addImg{Phone_EnlightenGAN1.png}&\n\t\t\\addImg{Phone_DRBN1.png}&\n\t\t\\addImg{Phone_ExCNet1.png}&\n\t\t\\addImg{Phone_ZeroDCE1.png}&\n\t\t\\addImg{Phone_RRDNet1.png}\\\\\n\t\t\t(h) & (i) \t & (j) & \t(k) & (l) & (m) & (n) \t \\\\\n\t\\end{tabular}\n\t\\vspace{-12pt}\n\t\\caption{Visual results of different methods on a low-light image sampled from LLIV-Phone-imgT dataset. (a) input. (b) LLNet . (c) LightenNet . (d) Retinex-Net . (e) \tMBLLEN . (f) KinD . (g) KinD++ . (h) TBEFN . (i) \tDSLR . (j) EnlightenGAN .\t(k) DRBN . (l)\tExCNet . (m) Zero-DCE . 
(n) \tRRDNet .}\n\t\\label{fig:Phone1}\n\t\\vspace{-4pt}\n\\end{figure*}\n\\begin{figure*}[!t]\n\t\\setlength\\tabcolsep{0.5pt}\n\t\\centering\n\t\\begin{tabular}{ccccccccc}\n\t\t\\addImg{Phone_input2.png}&\n\t\t\\addImg{Phone_LLNet2.png}&\n\t\t\\addImg{Phone_LightenNet2.png}&\n\t\t\\addImg{Phone_RetinexNet2.png}&\n\t\t\\addImg{Phone_MBLLEN2.png}&\n\t\t\\addImg{Phone_KinD2.png}&\n\t\t\\addImg{Phone_KinD++2.png}&\\\\\n\t\t\t(a) & (b) & (c) & (d) & (e) \t & (f) & (g) \\\\\n\t\t\\addImg{Phone_TBEFN2.png}&\n\t\t\\addImg{Phone_DSLR2.png}&\n\t\t\\addImg{Phone_EnlightenGAN2.png}&\n\t\t\\addImg{Phone_DRBN2.png}&\n\t\t\\addImg{Phone_ExCNet2.png}&\n\t\t\\addImg{Phone_ZeroDCE2.png}&\n\t\t\\addImg{Phone_RRDNet2.png}\\\\\n\t\t\t(h) & (i) \t & (j) & \t(k) & (l)\t & (m) & (n) \t \\\\\n\t\\end{tabular}\n\t\\vspace{-12pt}\n\t\\caption{Visual results of different methods on a low-light image sampled from LLIV-Phone-imgT dataset. (a) input. (b) LLNet . (c) LightenNet . (d) Retinex-Net . (e) \tMBLLEN . (f) KinD . (g) KinD++ . (h) TBEFN . (i) \tDSLR . (j) EnlightenGAN .\t(k) DRBN . (l)\tExCNet . (m) Zero-DCE . (n) \tRRDNet .}\n\t\\label{fig:Phone2}\n\t\\vspace{-4pt}\n\\end{figure*}\n\\begin{figure*}[!t]\n\t\\begin{center}\n\t\t\\begin{tabular}{c@{ }c@{ }c@{ }c@{ }c@{ }}\n\t\t\t\\rotatebox{90}{~~~~~~~~~Bayer~~~~~~~~~~~~APS-C X-Trans}~&\n\t\t\t\\includegraphics[width=0.21\\linewidth]{figures/raw_input.png}&~~\n\t\t\t\\includegraphics[width=0.21\\linewidth]{figures/raw_SID.png}&~~\n\t\t\t\\includegraphics[width=0.21\\linewidth]{figures/raw_EEMEFN.png}&~~\n\t\t\t\\includegraphics[width=0.21\\linewidth]{figures/raw_GT.png}\\\\\n\t\t\t&~\t(a) inputs &~~ (b) SID &~~ (c) EEMEFN &~~ (d) GT \\\\\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\vspace{-2pt}\n\t\\caption{Visual results of different methods on two raw low-light images sampled from SID-test-Bayer and SID-test-X-Trans test datasets. 
The inputs are amplified for visualization.}\n\t\\label{fig:raw}\n\t\\vspace{-4pt}\n\\end{figure*}", "id": "7ee6e01b-208a-4fef-a9a1-3cc6a973f6c4", "level": "subsection", "origin_cites_number": 4, "parent_id": "62b1377f-b6e1-4398-be9f-a1fa2952056b", "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], [ "section", "Benchmarking and Empirical Analysis" ], [ "subsection", "Online Evaluation Platform" ] ], "subsections": [], "title": "Online Evaluation Platform" }, { "cite_extract_rate": 1, "cites": [ 1875 ], "content": "To qualitatively and quantitatively evaluate different methods, in addition to the proposed LLIV-Phone dataset, we also adopt the commonly used LOL and MIT-Adobe FiveK datasets for RGB format-based methods, and the SID dataset for raw format-based methods. More visual results can be found in the supplementary material. The comparative results on the real low-light videos taken by different mobile phones' cameras can be found on YouTube \\url{https://www.youtube.com/watch?v=Elo9TkrG5Oo&t=6s}.\nWe select five images on average from each video of the LLIV-Phone dataset, forming an image testing dataset with a total of 600 images (denoted as LLIV-Phone-imgT). Furthermore, we randomly select one video of each phone brand in the LLIV-Phone dataset, forming a video testing dataset with a total of 18 videos (denoted as LLIV-Phone-vidT). We halve the resolutions of the frames in both LLIV-Phone-imgT and LLIV-Phone-vidT because some deep learning-based methods cannot process the full resolution of test images and videos. \nFor the LOL dataset, we adopt the original test set including 15 low-light images captured in real scenes for testing, denoted as LOL-test. For the MIT-Adobe FiveK dataset, we follow the protocol in Chen et al. to decode the images into PNG format and resize them to have a long edge of 512 pixels using Lightroom. We adopt the same testing dataset as Chen et al. 
, MIT-Adobe FiveK-test, including 500 images with the retouching results by expert C as the corresponding ground truths. For the SID dataset, we use the default test set used in EEMEFN for fair comparisons, denoted as SID-test (SID-test-Bayer and SID-test-X-Trans), which is a partial test set of SID . The SID-test-Bayer includes 93 images of the Bayer pattern while the SID-test-X-Trans includes 94 images of the APS-C X-Trans pattern. \n\\begin{table*}[t]\n\t\\caption{Quantitative comparisons on LOL-test and MIT-Adobe FiveK-test test datasets in terms of MSE ($\\times10^3$), PSNR (in dB), SSIM , and LPIPS . The best result is in {\\color{red}{red}} whereas the second and third best results are in {\\color{blue}{blue}} and {\\color{purple}{purple}} under each case, respectively.}\n\t\\vspace{-6pt}\n\t\\begin{center}\n\t\t\\begin{tabular}{c|c|c|c|c|c||c|c|c|c}\n\t\t\t\\hline\n\t\t\t\\multirow{2}{*}{\\textbf{Learning}} &\\multirow{2}{*}{\\textbf{Method}} & \\multicolumn{4}{c||}{\\textbf{LOL-test}} & \\multicolumn{4}{c}{\\textbf{MIT-Adobe FiveK-test}} \\\\\n\t\t\t\\cline{3-10}\n\t\t\t\\cline{3-10}\n\t\t\t& &\t\\textbf{MSE}$\\downarrow$ & \\textbf{PSNR} $\\uparrow$ & \\textbf{SSIM}$\\uparrow$ & \\textbf{LPIPS}$\\downarrow$ & \\textbf{MSE}$\\downarrow$ & \\textbf{PSNR} $\\uparrow$ & \\textbf{SSIM}$\\uparrow$ & \\textbf{LPIPS}$\\downarrow$ \\\\\n\t\t\t\\hline\n\t\t\t&input & 12.613 & 7.773 &0.181 & 0.560& {\\color{blue}{1.670}}& {\\color{blue}{17.824}} & {\\color{purple}{0.779}} &{\\color{blue}{0.148}} \\\\\n\t\t\t\\hline\n\t\t\t&LLNet & {\\color{red}{1.290}} & {\\color{red}{17.959}} &0.713 &0.360&4.465 & 12.177 &0.645 &0.292\\\\\n\t\t\t&LightenNet & 7.614 & 10.301 & 0.402 & 0.394 & 4.127 & 13.579 & 0.744 &{\\color{purple}{0.166}}\\\\\n\t\t\t&Retinex-Net & 1.651 & 16.774& 0.462 & 0.474 & 4.406 & 12.310 & 0.671 &0.239\\\\\n\t\t\tSL&MBLLEN &1.444&{\\color{blue}{17.902}}&0.715& 0.247& {\\color{red}{1.296}} & {\\color{red}{19.781}} & {\\color{red}{0.825}}&{\\color{red}{0.108}} 
\\\\\n\t\t\t&KinD & {\\color{purple}{1.431}} & 17.648& {\\color{blue}{0.779}}& {\\color{red}{0.175}} & 2.675 & 14.535 & 0.741 &0.177 \\\\\n\t\t\t&KinD++ & {\\color{blue}{1.298}}&{\\color{purple}{17.752}}& {\\color{purple}{0.760}}& {\\color{blue}{0.198}} & 7.582 & 9.732 &0.568 & 0.336 \\\\\n\t\t\t&TBEFN &1.764&17.351& {\\color{red}{0.786}}& {\\color{purple}{0.210}} & 3.865 & 12.769 &0.704 &0.178 \\\\\n\t\t\t&DSLR &3.536&15.050& 0.597&0.337 &{\\color{purple}{1.925}} & {\\color{purple}{16.632}} &{\\color{blue}{0.782}} & 0.167 \\\\\n\t\t\t\\hline\n\t\t\tUL\t& EnlightenGAN &1.998&17.483& 0.677&0.322 &3.628&13.260&0.745& 0.170 \\\\\n\t\t\t\\hline\n\t\t\tSSL\t&DRBN &2.359&15.125& 0.472&0.316 &3.314& 13.355&0.378 & 0.281 \\\\\n\t\t\t\\hline\n\t\t\t&\tExCNet &2.292&15.783& 0.515&0.373 &2.927& 13.978&0.710 &0.187 \\\\\t\t\t\n\t\t\tZSL\t&Zero-DCE &3.282&14.861& 0.589&0.335 &3.476&13.199&0.709 &0.203\\\\\t\t\t\t\n\t\t\t&\tRRDNet &6.313&11.392&0.468&0.361 &7.057&10.135&0.620 &0.303\\\\\t\t\t\t\t\t\t\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\label{Table:Quan_1}\n\t\\vspace{-4pt}\n\\end{table*}\n\\begin{table*}[t]\n\t\\caption{Quantitative comparisons on SID-test test dataset in terms of MSE ($\\times10^3$), PSNR (in dB), SSIM , and LPIPS . The best result is in {\\color{red}{red}} under each case. To compute the quantitative scores of input raw data, we use the corresponding camera ISP pipelines provided by Chen et al. 
to transfer raw data to RGB format.}\n\t\\vspace{-6pt}\n\t\\begin{center}\n\t\t\\begin{tabular}{c|c|c|c|c|c||c|c|c|c}\n\t\t\t\\hline\n\t\t\t\\multirow{2}{*}{\\textbf{Learning}} &\\multirow{2}{*}{\\textbf{Method}} & \\multicolumn{4}{c||}{\\textbf{SID-test--Bayer}} & \\multicolumn{4}{c}{\\textbf{SID-test--X-Trans}} \\\\\n\t\t\t\\cline{3-10}\n\t\t\t\\cline{3-10}\n\t\t\t& &\t\\textbf{MSE}$\\downarrow$ & \\textbf{PSNR} $\\uparrow$ & \\textbf{SSIM}$\\uparrow$ & \\textbf{LPIPS}$\\downarrow$ & \\textbf{MSE}$\\downarrow$ & \\textbf{PSNR} $\\uparrow$ & \\textbf{SSIM}$\\uparrow$ & \\textbf{LPIPS}$\\downarrow$ \\\\\n\t\t\t\\hline\n\t\t\t&input &5.378 &11.840 &0.063 &0.711 &4.803 &11.880 &0.075 &0.796 \\\\\n\t\t\t\\hline\n\t\t\tSL&SID &0.140 &28.614 &0.757 &0.465 &0.235 &26.663 &0.680&0.586\\\\\n\t\t\t&EEMEFN &{\\color{red}{0.126}} &{\\color{red}{29.212}} &{\\color{red}{0.768}} &{\\color{red}{0.448}}&{\\color{red}{0.191}} &{\\color{red}{27.423}} &{\\color{red}{0.695}} &{\\color{red}{0.546}}\\\\\t\t\t\t\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\label{Table:Quan_11}\n\t\\vspace{-4pt}\n\\end{table*}\n\\noindent\n\\textbf{Qualitative Comparison.}\nWe first present the results of different methods on the images sampled from LOL-test and MIT-Adobe FiveK-test datasets in Figures \\ref{fig:LOL} and \\ref{fig:5K}. \nAs shown in Figure \\ref{fig:LOL}, all methods improve the brightness and contrast of the input image. However, none of them successfully recovers the accurate color of the input image when the results are compared with the ground truth. In particular, LLNet produces blurring result. LightenNet and RRDNet produce under-exposed results while MBLLEN and ExCNet over-expose the image. KinD , KinD++ , TBEFN , DSLR , EnlightenGAN , and DRBN introduce obvious artifacts.\nIn Figure \\ref{fig:5K}, LLNet , KinD++ , TBEFN , and RRDNet produce over-exposed results. Retinex-Net , KinD++ , and RRDNet yield artifacts and blurring in the results. 
\nWe found that the ground truths of the MIT-Adobe FiveK dataset still contain some dark regions. This is because the dataset was originally designed for global image retouching, where restoring low-light regions is not the main priority. \nWe also observed that the input images in the LOL and MIT-Adobe FiveK datasets are relatively noise-free, which is different from real low-light scenes. \nAlthough some methods take the MIT-Adobe FiveK dataset as the training or testing dataset, we argue that this dataset is not appropriate for the task of LLIE due to its mismatched and unsatisfactory ground truths.\nTo examine the generalization capability of different methods, we conduct comparisons on the images sampled from our LLIV-Phone-imgT dataset. The visual results of different methods are shown in Figures \\ref{fig:Phone1} and \\ref{fig:Phone2}.\nAs presented in Figure \\ref{fig:Phone1}, none of the methods can effectively improve the brightness or remove the noise of the input low-light image. Moreover, Retinex-Net , MBLLEN , and DRBN produce obvious artifacts.\nIn Figure \\ref{fig:Phone2}, all methods enhance the brightness of this input image. However, only \nMBLLEN and RRDNet obtain visually pleasing enhancement without color deviation, artifacts, and over-/under-exposure. Notably, for regions with a light source, none of the methods can brighten the image without amplifying the noise around these regions. \nTaking light sources into account would be an interesting direction for LLIE to explore.\nThe results suggest the difficulty of enhancing the images of the LLIV-Phone-imgT dataset. \nMost existing LLIE methods fail on real low-light images due to their limited generalization capability. 
The potential reasons are the use of synthetic training data, small-scale training data, or unrealistic assumptions such as local illumination consistency and treating the reflectance component of the Retinex model as the final result.\nWe further present the visual comparisons of raw format-based methods in Figure \\ref{fig:raw}. As shown, the input raw data contain obvious noise. Both SID and EEMEFN can effectively remove the noise. In comparison to the simple U-Net structure used in SID , the more complex structure of EEMEFN obtains better brightness recovery. However, their results are far from the corresponding GT, especially for inputs of the APS-C X-Trans pattern.\n\\noindent\n\\textbf{Quantitative Comparison.}\nFor test sets with ground truths, i.e., LOL-test, MIT-Adobe FiveK-test, and SID-test, we adopt the MSE, PSNR, SSIM , and LPIPS metrics to quantitatively compare different methods. LPIPS is a deep learning-based image quality assessment metric that measures the perceptual similarity between a result and its corresponding ground truth using deep visual representations. For LPIPS, we employ the AlexNet-based model to compute the perceptual similarity. A lower LPIPS\nvalue suggests a result that is closer to the corresponding ground truth in terms of perceptual similarity. \nIn Table \\ref{Table:Quan_1} and Table \\ref{Table:Quan_11}, we show the quantitative results of RGB format-based methods and raw format-based methods, respectively.\nAs presented in Table \\ref{Table:Quan_1}, the quantitative scores of supervised learning-based methods are better than those of unsupervised learning-based, semi-supervised learning-based, and zero-shot learning-based methods on the LOL-test and MIT-Adobe FiveK-test datasets. Among them, LLNet obtains the best MSE and PSNR values on the LOL-test dataset; however, its performance drops on the MIT-Adobe FiveK-test dataset. 
\nThis may be caused by the bias of LLNet towards the LOL dataset since it was trained using the LOL training dataset. \nFor the LOL-test dataset, TBEFN obtains the highest SSIM value while KinD achieves the lowest LPIPS value. There is no winner across these four evaluation metrics on the LOL-test dataset despite the fact that some methods were trained on the LOL training dataset. \nFor the MIT-Adobe FiveK-test dataset, MBLLEN outperforms all compared methods under the four evaluation metrics in spite of being trained on synthetic training data. Nevertheless, MBLLEN still does not achieve the best performance on both test datasets. \nAs presented in Table \\ref{Table:Quan_11}, both SID and EEMEFN improve the quality of input raw data. Compared with the quantitative scores of SID , EEMEFN achieves consistently better performance across different raw data patterns and evaluation metrics.\nFor the LLIV-Phone-imgT test set, we use the non-reference IQA metrics, i.e., NIQE , perceptual index (PI) , LOE , and SPAQ , to quantitatively compare different methods. In terms of LOE, the smaller the LOE value is, the better the lightness order is preserved. For NIQE, the smaller the NIQE value is, the better the visual quality is. A lower PI value indicates better perceptual quality. SPAQ is devised for the perceptual quality assessment of smartphone photography. A larger SPAQ value suggests better perceptual quality of smartphone photography. The quantitative results are provided in Table \\ref{Table:Quan_2}.\n\\begin{table}[ht]\n\t\\caption{Quantitative comparisons on the LLIV-Phone-imgT dataset in terms of NIQE , LOE , PI , and SPAQ . 
The best result is in {\\color{red}{red}} whereas the second and third best results are in {\\color{blue}{blue}} and {\\color{purple}{purple}} under each case, respectively.}\n\t\\begin{center}\n\t\t\\setlength{\\tabcolsep}{1.7mm}{\n\t\t\t\\begin{tabular}{c|c|c|c|c|c}\n\t\t\t\t\\hline\n\t\t\t\t\\multirow{2}{*}{\\textbf{Learning}}&\\multirow{2}{*}{\\textbf{Method}} & \\multicolumn{4}{c}{\\textbf{LoLi-Phone-imgT}}\\\\\n\t\t\t\t\\cline{3-6}\n\t\t\t\t\\cline{3-6}\n\t\t\t\t&\t& \\textbf{NIQE}$\\downarrow$ & \\textbf{LOE} $\\downarrow$ & \\textbf{PI}$\\downarrow$ & \\textbf{SPAQ}$\\uparrow$ \\\\\n\t\t\t\t\\hline\n\t\t\t\t&\tinput &6.99 &{\\color{red}{0.00}} &5.86 &44.45 \\\\\n\t\t\t\t\\hline\n\t\t\t\t&\tLLNet &5.86 &{\\color{blue}{5.86}} &5.66 &40.56 \\\\\n\t\t\t\t&\tLightenNet &5.34 &952.33 &4.58 &45.74 \\\\\n\t\t\t\t&\t\n\t\t\t\tRetinex-Net & 5.01 &790.21 &{\\color{red}{3.48}} &{\\color{red}{50.95}} \\\\\n\t\t\t\tSL&MBLLEN &5.08 &220.63 & 4.27 &42.50 \\\\\n\t\t\t\t&\tKinD &4.97 &405.88 &4.37 &44.79 \\\\\n\t\t\t\t&\tKinD++ &{\\color{red}{4.73}}\n\t\t\t\t&681.97 &{\\color{blue}{3.99}} &{\\color{blue}{46.89}} \\\\\n\t\t\t\t&\tTBEFN &4.81 &552.91 & 4.30 &44.14 \\\\\n\t\t\t\t&\tDSLR &{\\color{blue}{4.77}} &447.98 &4.31 & 41.08\\\\\n\t\t\t\t\\hline\n\t\t\t\tUL &EnlightenGAN &{\\color{purple}{4.79}} &821.87 &{\\color{purple}{4.19}} &45.48 \\\\\n\t\t\t\t\\hline\n\t\t\t\tSSL&\tDRBN &5.80 &885.75 &5.54 &42.74 \\\\\t\n\t\t\t\t\\hline\n\t\t\t\t&\tExCNet &5.55 &723.56 &4.38 &46.74 \\\\\t\t\t\n\t\t\t\tZSL &\tZero-DCE &5.82 &307.09 & 4.76 &{\\color{purple}{46.85}} \\\\ \t\t\t\n\t\t\t\t&\tRRDNet &5.97 &{\\color{purple}{142.89}} &4.84 &45.31 \\\\ \t\t\t\t\t\t\t\n\t\t\t\t\\hline\n\t\t\\end{tabular}}\n\t\\end{center}\n\t\\label{Table:Quan_2}\n\t\\vspace{-4pt}\n\\end{table}\n\\begin{table}[t]\n\t\\caption{Quantitative comparisons on LLIV-Phone-vidT dataset in terms of average luminance variance (ALV) score. 
The best result is in {\\color{red}{red}} whereas the second and third best results are in {\\color{blue}{blue}} and {\\color{purple}{purple}}.}\n\t\\begin{center}\n\t\t\\begin{tabular}{c|c|c}\n\t\t\t\\hline\n\t\t\t\\multirow{2}{*}{\\textbf{Learning}}&\\multirow{2}{*}{\\textbf{Method}} & \\multicolumn{1}{c}{\\textbf{LoLi-Phone-vidT}}\\\\\n\t\t\t\\cline{3-3}\n\t\t\t\\cline{3-3}\n\t\t\t&\t& \\textbf{ALV}$\\downarrow$\\\\\n\t\t\t\\hline\n\t\t\t&\tinput &185.60 \\\\\n\t\t\t\\hline\n\t\t\t&\tLLNet &{\\color{blue}{85.72}} \\\\\n\t\t\t&\tLightenNet &643.93\\\\\n\t\t\t&\t\n\t\t\tRetinex-Net &94.05 \\\\\n\t\t\tSL&MBLLEN &113.18 \\\\\n\t\t\t&\tKinD &98.05 \\\\\n\t\t\t&\tKinD++ &115.21\\\\\n\t\t\t&\tTBEFN &{\\color{red}{58.69}} \\\\\n\t\t\t&\tDSLR &175.35 \\\\\n\t\t\t\\hline\n\t\t\tUL &EnlightenGAN &{\\color{purple}{90.69}} \\\\\n\t\t\t\\hline\n\t\t\tSSL&\tDRBN &115.04 \\\\\t\n\t\t\t\\hline\n\t\t\t&\tExCNet &1375.29 \\\\\t\t\t\n\t\t\tZSL &\tZero-DCE &117.22 \\\\ \t\t\t\n\t\t\t&\tRRDNet &147.11 \\\\ \t\t\t\t\t\t\t\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\label{Table:Quan_33}\n\t\\vspace{-4pt}\n\\end{table}\nObserving Table \\ref{Table:Quan_2}, we can find that the performance of Retinex-Net , KinD++ , and EnlightenGAN is relatively better than the other methods. Retinex-Net achieves the best PI and SPAQ scores. The scores suggest the good perceptual quality of the results enhanced by Retinex-Net . However, from Figure~\\ref{fig:Phone1}(d) and Figure~\\ref{fig:Phone2}(d), the results of Retinex-Net evidently suffer from artifacts and color deviations.\nMoreover, KinD++ attains the lowest NIQE score while the original input achieves the lowest LOE score. For the de-facto standard LOE metric, we question if the lightness order can effectively reflect the enhancement performance. Overall, the non-reference IQA metrics experience biases on the evaluations of the quality of enhanced low-light images in some cases. 
\nTo prepare videos in the LLIV-vidT testing set, we first discard videos without obvious objects in consecutive frames. A total of 10 videos are chosen. For each video, we select one object that appears in all frames. We then use a tracker to track the object in consecutive frames of the input video and ensure the same object appears in the bounding boxes. We discard the frames with inaccurate object tracking. The coordinates of the bounding box in each frame are collected. We employ these coordinates to crop the corresponding regions in the results enhanced by different methods and compute the average luminance variance (ALV) scores of the object in the consecutive frames as: $ALV=\\frac{1}{N}\\sum\\limits_{i=1}^{N}(L_{i}-L_{\\text{avg}})^2$, where $N$ is the number of frames of a video, $L_{i}$ represents the average luminance value of the region of bounding box in the $i$th frame, and $L_{\\text{avg}}$ denotes the average luminance value of all bounding box regions in the video. A lower ALV value suggests better temporal coherence of the enhanced video. \nThe ALV values of different methods averaged over the 10 videos of the LLIV-vidT testing set are shown in Table \\ref{Table:Quan_33}. \nThe ALV values of different methods on each video can be found in the supplementary material. Besides, we follow Jiang and Zheng to plot their luminance curves in the supplementary material.\nAs shown in Table \\ref{Table:Quan_33}, \tTBEFN obtains the best temporal coherence in terms of ALV value whereas LLNet and EnlightenGAN rank the second and third best, respectively. In contrast, the ALV value of ExCNet , as the worst performer, reaches 1375.29. This is because the performance of the zero-reference learning-based ExCNet is unstable for the enhancement of consecutive frames. 
ExCNet can effectively improve the brightness of some frames while it does not work well on other frames.\n\\begin{table*}[t]\n\t\\caption{Quantitative comparisons of computational complexity in terms of runtime (in second), number of trainable parameters (\\#Parameters) (in M), and FLOPs (in G). The best result is in {\\color{red}{red}} whereas the second and third best results are in {\\color{blue}{blue}} and {\\color{purple}{purple}} under each case, respectively. `-' indicates the result is not available.}\n\t\\vspace{-6pt}\n\t\\begin{center}\n\t\t\\begin{tabular}{c|c|c|c|c|c}\n\t\t\t\\hline\n\t\t\t\\textbf{Learning}&\t\\textbf{Method} \n\t\t\t& \\textbf{RunTime}$\\downarrow$ & \\textbf{\\#Parameters} $\\downarrow$ & \\textbf{FLOPs}$\\downarrow$ & \\textbf{Platform}\\\\\n\t\t\t\\hline\n\t\t\t&\tLLNet &36.270 &17.908 &4124.177 & Theano\\\\\n\t\t\t&\tLightenNet & -&{\\color{red}{0.030}} &{\\color{red}{30.540}} &MATLAB\\\\\n\t\t\t&\tRetinex-Net &0.120 &0.555 &587.470 &TensorFlow\\\\\n\t\t\tSL&\tMBLLEN &13.995&{\\color{purple}{0.450}}&301.120 &TensorFlow\\\\\n\t\t\t&\tKinD &0.148 &8.160&574.954 &TensorFlow\\\\\n\t\t\t&\tKinD++ &1.068&8.275&12238.026 &TensorFlow\\\\\n\t\t\t&\tTBEFN &{\\color{purple}{0.050}}&0.486&108.532 &TensorFlow\\\\\n\t\t\t&\tDSLR &0.074&14.931&{\\color{purple}{96.683}} &PyTorch\\\\\n\t\t\t\\hline\n\t\t\tUL&\tEnlightenGAN &{\\color{blue}{0.008}}&8.637& 273.240 &PyTorch\\\\\n\t\t\t\\hline\n\t\t\tSSL\t&\tDRBN &0.878&0.577&196.359 &PyTorch\\\\\t\n\t\t\t\\hline\n\t\t\t&\tExCNet &23.280&8.274&- &PyTorch\\\\\t\t\t\n\t\t\tZSL\t&\tZero-DCE &{\\color{red}{0.003}}&{\\color{blue}{0.079}}&{\\color{blue}{84.990}} &PyTorch \\\\\t\t\t\t\n\t\t\t&\tRRDNet &167.260 &0.128&- &PyTorch\\\\\t\t\t\t\t\t\t\t\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\label{Table:Quan_4}\n\t\\vspace{-4pt}\n\\end{table*}", "id": "67f70829-d734-4fe0-baa6-b7f64f6f5b82", "level": "subsection", "origin_cites_number": 6, "parent_id": "62b1377f-b6e1-4398-be9f-a1fa2952056b", 
"prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], [ "section", "Benchmarking and Empirical Analysis" ], [ "subsection", "Benchmarking Results" ] ], "subsections": [], "title": "Benchmarking Results" }, { "cite_extract_rate": 1, "cites": [ 1875 ], "content": "In Table \\ref{Table:Quan_4}, we compare the computational complexity of RGB format-based methods, including runtime, trainable parameters, and FLOPs averaged over 32 images of size 1200$\\times$900$\\times$3 using an NVIDIA 1080Ti GPU. We omit LightenNet for fair comparisons because only the CPU version of its code is publicly available.\nBesides, we do not report the FLOPs of ExCNet and RRDNet as the number depends on the input images (different inputs require different numbers of iterations).\nAs presented in Table \\ref{Table:Quan_4}, Zero-DCE has the shortest runtime because it only estimates several curve parameters via a lightweight network. As a result, its number of trainable parameters and FLOPs are much fewer. Moreover, the number of trainable parameters and FLOPs of LightenNet are the least among the compared methods. This is because LightenNet estimates the illumination map of the input image via a tiny network of four convolutional layers. In contrast, the FLOPs of LLNet and KinD++ are extremely large, reaching 4124.177G and 12238.026G, respectively. The runtime of ZSL-based ExCNet and RRDNet is long due to the time-consuming optimization process.\n\\begin{figure}[!t]\n\t\\centering \\centerline{\\includegraphics[width=1\\linewidth]{figures/face.png}}\n\t\\vspace{-2pt}\n\t\\caption{The P-R curves of face detection in the dark.}\n\t\\label{fig:face}\n\t\\vspace{-4pt}\n\\end{figure}\n\\begin{table}[!t]\n\t\\caption{Quantitative comparisons of AP under different IoU thresholds of face detection in the dark. 
The best result is in {\\color{red}{red}} whereas the second and third best results are in {\\color{blue}{blue}} and {\\color{purple}{purple}} under each case, respectively.}\n\t\\begin{center}\n\t\t\\begin{tabular}{c|c|c|c|c}\n\t\t\t\\hline\n\t\t\t\\multirow{2}{*}{\\textbf{Learning}}&\\multirow{2}{*}{\\textbf{Method}}&\\multicolumn{3}{c}{\\textbf{IoU thresholds}} \\\\\n\t\t\t\\cline{3-5}\n\t\t\t\\cline{3-5}\n\t\t\t&\t & \\textbf{0.5} & \\textbf{0.6} & \\textbf{0.7}\\\\\n\t\t\t\\hline\n\t\t\t&input &0.195 &0.061 &0.007\\\\\n\t\t\t\\hline\n\t\t\t&\tLLNet &0.208 &0.063&0.006\\\\\n\t\t\t&\tLightenNet &0.249 &0.085&{\\color{purple}{0.010}}\\\\\n\t\t\t&\tRetinex-Net &{\\color{blue}{0.261}} &{\\color{red}{0.101}}&{\\color{red}{0.013}}\\\\\n\t\t\tSL&\tMBLLEN &0.249 &{\\color{purple}{0.092}}&{\\color{purple}{0.010}}\\\\\n\t\t\t&\tKinD &0.235 &0.081&{\\color{purple}{0.010}}\\\\\n\t\t\t&\tKinD++ &0.251 &0.090&{\\color{blue}{0.011}}\\\\\n\t\t\t&\tTBEFN &{\\color{red}{0.268}} &{\\color{blue}{0.099}}&{\\color{blue}{0.011}}\\\\\n\t\t\t&\tDSLR &0.223 &0.067&0.007\\\\\n\t\t\t\\hline\n\t\t\tUL&\tEnlightenGAN &0.246 &0.088&{\\color{blue}{0.011}}\\\\\n\t\t\t\\hline\n\t\t\tSSL\t&\tDRBN &0.199 &0.061&0.007\\\\\t\n\t\t\t\\hline\n\t\t\t&\tExCNet &0.256 &{\\color{purple}{0.092}}&{\\color{purple}{0.010}}\\\\\t\t\t\n\t\t\tZSL\t&\tZero-DCE &{\\color{purple}{0.259}} &{\\color{purple}{0.092}}&{\\color{blue}{0.011}}\\\\\t\t\t\t\n\t\t\t&\tRRDNet &0.248 &0.083&{\\color{purple}{0.010}}\\\\\t\t\t\t\t\t\t\t\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\label{Table:Quan_8}\n\t\\vspace{-4pt}\n\\end{table}\n\\subsection {Application-Based Evaluation}\nWe investigate the performance of low-light image enhancement methods on face detection in the dark. \nFollowing the setting presented in Guo et al. , we use the DARK FACE dataset that is composed of images with faces taken in the dark. 
Since the bounding boxes of the test set are not publicly available, we perform the evaluation on 500 images randomly sampled from the training and validation sets.\nThe Dual Shot Face Detector (DSFD) trained on WIDER FACE dataset is used as the face detector. We feed the results of different LLIE methods to the DSFD and depict the precision-recall (P-R) curves under 0.5 IoU threshold in Figure \\ref{fig:face}. We compare the average precision (AP) under different IoU thresholds using the evaluation tool\\footnote{\\url{https://github.com/Ir1d/DARKFACE_eval_tools}} provided in DARK FACE dataset in Table \\ref{Table:Quan_8}. \nAs shown in Figure \\ref{fig:face}, all the deep learning-based solutions improve the performance of face detection in the dark, suggesting the effectiveness of deep learning-based LLIE solutions for face detection in the dark. As shown in Table \\ref{Table:Quan_8}, the AP scores of best performers under different IoU thresholds range from 0.268 to 0.013 and the AP scores of input under different IoU thresholds are very low. The results suggest that there is still room for improvement. It is noteworthy that Retinex-Net , Zero-DCE , and TBEFN achieve relatively robust performance on face detection in the dark. We show the visual results of different methods in Figure \\ref{fig:face_visual}. Although Retinex-Net performs better than other methods on the AP score, its visual result contains obvious artifacts and unnatural textures. In general, Zero-DCE obtains a good balance between the AP score and the perceptual quality for face detection in the dark. \tNote that the results of face detection in the dark are related to not only the enhanced results but also the face detector including the detector model and the training data of the detector. 
Here, we only take the pre-trained DSFD as an example to validate the low-light image enhancement performance of different methods to some extent.", "id": "e142566f-157a-4609-a670-8df6dbfe9874", "level": "subsection", "origin_cites_number": 3, "parent_id": "62b1377f-b6e1-4398-be9f-a1fa2952056b", "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], [ "section", "Benchmarking and Empirical Analysis" ], [ "subsection", "Computational Complexity" ] ], "subsections": [], "title": "Computational Complexity" }, { "cite_extract_rate": 1, "cites": [ 1875 ], "content": "From the experimental results, we obtain several interesting observations and insights:\n\\begin{figure*}[!t]\n\t\\begin{center}\n\t\t\\begin{tabular}{c@{ }c@{ }c@{ }c@{ }}\n\t\t\t\\includegraphics[width=0.24\\linewidth]{figures/face_input.png}&\n\t\t\t\\includegraphics[width=0.24\\linewidth]{figures/face_LightenNet.png}&\n\t\t\t\\includegraphics[width=0.24\\linewidth]{figures/face_RetinexNet.png}&\n\t\t\t\\includegraphics[width=0.24\\linewidth]{figures/face_MBLLEN.png}\\\\\n\t\t\t(a) input & (b) LightenNet & (c) Retinex-Net & (d) \tMBLLEN \\\\\n\t\t\t\\includegraphics[width=0.24\\linewidth]{figures/face_KinD++.png}&\n\t\t\t\\includegraphics[width=0.24\\linewidth]{figures/face_TBEFN.png}&\n\t\t\t\\includegraphics[width=0.24\\linewidth]{figures/face_DSLR.png}&\n\t\t\t\\includegraphics[width=0.24\\linewidth]{figures/face_EnlightenGAN.png}\\\\\n\t\t\t(e) KinD++ &\t(f) TBEFN & (g) \tDSLR \t & (h) EnlightenGAN \\\\\n\t\t\t\\includegraphics[width=0.24\\linewidth]{figures/face_DRBN.png}&\n\t\t\t\\includegraphics[width=0.24\\linewidth]{figures/face_ExCNet.png}&\n\t\t\t\\includegraphics[width=0.24\\linewidth]{figures/face_ZeroDCE.png}&\n\t\t\t\\includegraphics[width=0.24\\linewidth]{figures/face_RRDNet.png}\\\\\n\t\t\t(i) DRBN & (j)\tExCNet & (k) Zero-DCE & (l) RRDNet \t \\\\\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\vspace{-2pt}\n\t\\caption{Visual results of 
different methods on a low-light image sampled from DARK FACE dataset. Best viewed with zoom in on the bounding boxes of faces.}\n\t\\label{fig:face_visual}\n\t\\vspace{-4pt}\n\\end{figure*}\n1) The performance of different methods significantly varies based on the test datasets and evaluation metrics. In terms of the full-reference IQA metrics on commonly used test datasets, MBLLEN , KinD++ , and DSLR are generally better than other compared methods. For real-world low-light images taken by mobile phones, supervised learning-based Retinex-Net and KinD++ obtain better scores measured in the non-reference IQA metrics. For real-world low-light videos taken by mobile phones, TBEFN preserves the temporal coherence better. When it comes to computational efficiency, LightenNet and Zero-DCE are outstanding. From the aspect of face detection in the dark, TBEFN , Retinex-Net , and Zero-DCE rank as the top three. No method always wins. Overall, Retinex-Net , Zero-DCE , and DSLR are better choices in most cases. For raw data, EEMEFN obtains relatively better qualitative and quantitative performance than SID . However, from the visual results, EEMEFN and SID cannot recover the color well when compared with the corresponding ground truth.\n2) The LLIV-Phone dataset fails most methods. The generalization capability of existing methods needs further improvement. It is worth noting that it is inadequate to use only the average luminance variance to evaluate the performance of different methods for low-light video enhancement. More effective and comprehensive assessment metrics would guide the development of low-light video enhancement towards the right track.\n3) Regarding learning strategies, supervised learning achieves better performance in most cases, but requires high computational resources and paired training data. In comparison, zero-shot learning is more appealing in practical applications because it does not require paired or unpaired training data. 
Consequently, zero-shot learning-based methods enjoy better generalization capability. However, their quantitative performance is inferior to other methods. \n4) There is a gap between visual results and quantitative IQA scores. In other words, a good visual appearance does not always yield a good IQA score. The relationships between human perception and IQA scores are worth more investigation. Pursuing better visual perception or quantitative scores depends on specific applications. For instance, to show the results to observers, more attention should be paid to visual perception. In contrast, accuracy is more important when LLIE methods are applied to face detection in the dark. Thus, more comprehensive comparisons should be performed when comparing different methods.\n5) Deep learning-based LLIE methods are beneficial to face detection in the dark. Such results further support the significance of enhancing low-light images and videos. However, in comparison to the high accuracy of face detection in normal-light images, the accuracy of face detection in the dark is extremely low, despite using LLIE methods.\n6) In comparison to RGB format-based LLIE methods, raw format-based LLIE methods usually recover details better, obtain more vivid color, and reduce noises and artifacts more effectively. This is because raw data contain more information such as wider color gamut and higher dynamic range. However, raw format-based LLIE methods are limited to specific sensors and formats such as the Bayer pattern of the Sony camera and the APS-C X-Trans pattern of the Fuji camera. In contrast, RGB format-based LLIE methods are more convenient and versatile since RGB images are commonly found as the final imagery form produced by mobile devices. 
However, RGB format-based LLIE methods cannot cope well with cases that exhibit low light and excessive noise.", "id": "8ea7b52c-e5f5-46c9-8ff5-69763784bc99", "level": "subsection", "origin_cites_number": 2, "parent_id": "62b1377f-b6e1-4398-be9f-a1fa2952056b", "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], [ "section", "Benchmarking and Empirical Analysis" ], [ "subsection", "Discussion" ] ], "subsections": [], "title": "Discussion" }, { "cite_extract_rate": 1, "cites": [ 1875 ], "content": "\\label{sec:Issue}\nIn this section, we summarize the open issues in low-light image and video enhancement as follows.\n\\noindent\n\\textbf{Generalization Capability.} Although existing methods can produce some visually pleasing results, they have limited generalization capability. For example, a method trained on MIT-Adobe FiveK dataset cannot effectively enhance the low-light images of LOL dataset . Although synthetic data are used to augment the diversity of training data, the models trained on the combination of real and synthetic data cannot solve this issue well. Improving the generalization capability of LLIE methods is an unsolved open issue.\n\\noindent\n\\textbf{Removing Unknown Noises.} Observing the results of existing methods on the low-light images captured by different types of phones' cameras, we can find that these methods cannot remove the noises well and even amplify the noises, especially when the types of noises are unknown. Although some methods add Gaussian and/or Poisson noises to their training data, the noise types are different from real noises, thus the performance of these methods is unsatisfactory in real scenarios. Removing unknown noises is still unsolved.\n\\noindent\n\\textbf{Removing Unknown Artifacts.} One may enhance a low-light image downloaded from the Internet. The image may have gone through a series of degradations such as JPEG compression or editing. 
Thus, the image may contain unknown artifacts. Suppressing unknown artifacts still challenges existing low-light image and video enhancement methods.\n\\noindent\n\\textbf{Correcting Uneven Illumination.} Images taken in real scenes usually exhibit uneven illumination. For example, an image captured at night has both dark regions and normal-light or over-exposed regions such as the regions of light sources. Existing methods tend to brighten both the dark regions and the light source regions, affecting the visual quality of the enhanced result. It is expected to enhance dark regions but suppress over-exposed regions. However, this open issue is not solved well in existing LLIE methods.\n\\noindent\n\\textbf{Distinguishing Semantic Regions.} Existing methods tend to enhance a low-light image without considering the semantic information of its different regions. For example, the black hair of a man in a low-light image is enhanced to be off-white as the black hair is treated as the low-light regions. An ideal enhancement method is expected to only enhance the low-light regions induced by external environments. How to distinguish semantic regions is an open issue. \n\\noindent\n\\textbf{Using Neighbouring Frames.} \n\tAlthough some methods have been proposed to enhance low-light videos, they commonly process a video frame-by-frame. How to make full use of the neighboring frames to improve the enhancement performance and increase the processing speed is an unsolved open issue. For example, the well-lit regions of neighboring frames are used to enhance the current frame. 
For another example, the estimated parameters for processing neighboring frames can be reused to enhance the current frame for reducing the time of parameter estimation.", "id": "c43fed76-06de-4d00-89a5-701cdf5d52e3", "level": "section", "origin_cites_number": 1, "parent_id": "43aba542-7f6b-493f-9beb-9ae4164ca84b", "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], [ "section", "Open Issues" ] ], "subsections": [], "title": "Open Issues" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:Future}\nLow-light enhancement is a challenging research topic. As can be observed from the experiments presented in Section~\\ref{sec:evaluation} and the unsolved open issues in Section~\\ref{sec:Issue}, there is still room for improvement. We suggest potential future research directions as follows.\n\\noindent\n\\textbf{Effective Learning Strategies.}\nAs aforementioned, current LLIE models mainly adopt supervised learning that requires massive paired training data and may overfit on a specific dataset. Although some researchers attempted to introduce unsupervised learning into LLIE, the inherent relationships between LLIE and these learning strategies are not clear and their effectiveness in LLIE needs further improvements. Zero-shot learning has shown robust performance for real scenes while not requiring paired training data. The unique advantage suggests zero-shot learning as a potential research direction, especially on the formulation of zero-reference losses, deep priors, and optimization strategies. \n\\noindent\n\\textbf{Specialized Network Structures.}\nA network structure can significantly affect enhancement performance. As previously analyzed, most LLIE deep models employ U-Net or U-Net-like structures. 
Though they have achieved promising performance in some cases, whether such an encoder-decoder network structure is the most suitable for the LLIE task has not been carefully investigated.\nSome network structures require a high memory footprint and long inference time due to their\nlarge parameter space. Such network structures are unacceptable for practical applications. \nThus, it is worthwhile to investigate a more effective network structure for LLIE, considering the characteristics of low-light images such as non-uniform illumination, small pixel values, noise suppression, and color constancy. \nOne can also design more efficient network structures by taking into account the local similarity of low-light images or considering more efficient operations such as depthwise separable convolution layer and self-calibrated convolution . Neural architecture search (NAS) techniques may be considered to obtain more effective and efficient LLIE network structures. Adapting the transformer architecture into LLIE may be a potential and interesting research direction.\n\\noindent\n\\textbf{Loss Functions.}\nLoss functions constrain the relationships between an input image and ground truth and drive the optimization of deep networks. In LLIE, the commonly used loss functions are borrowed from related vision tasks. \nThus, designing loss functions that are better suited for LLIE is desired. Recent studies have shown the possibility of using deep neural networks to approximate human visual perception of image quality . These ideas and fundamental theories could be used to guide the design of loss functions for low-light enhancement networks. \n\\noindent\n\\textbf{Realistic Training Data.}\nAlthough there are several training datasets for LLIE, their authenticity, scales, and diversities fall behind real low-light conditions. 
Thus, as shown in Section \\ref{sec:evaluation}, current LLIE deep models cannot achieve satisfactory performance when encountering low-light images captured in real-world scenes.\nMore efforts are needed to study the collection of large-scale and diverse real-world paired LLIE training datasets or to generate more realistic synthetic data. \n\\noindent\n\\textbf{Standard Test Data.}\nCurrently, there is no well-accepted LLIE evaluation benchmark. Researchers prefer selecting their test data that may be biased towards their proposed methods. Although some researchers leave some paired data as test data, the division of training and test partitions is mostly ad-hoc across the literature.\nConsequently, conducting a fair comparison among different methods is often laborious if not impossible. Besides, some test data are either easy to handle or not originally collected for low-light enhancement. \nIt is desired to have a standard low-light image and video test dataset, which includes a large number of test samples with the corresponding ground truths, covering diverse scenes and challenging illumination conditions. \n\\noindent\n\\textbf{Task-Specific Evaluation Metrics.}\nThe commonly adopted evaluation metrics in LLIE can reflect the image quality to some extent. However, how to measure how well a result is enhanced by an LLIE method still challenges current IQA metrics, especially for non-reference measurements.\nThe current IQA metrics either focus on human visual perception such as subjective quality or emphasize machine perception such as the effects on high-level visual tasks. Therefore, \nmore work is expected in this research direction on designing more accurate and task-specific evaluation metrics for LLIE.\n\\noindent\n\\textbf{Robust Generalization Capability.}\nObserving the experimental results on real-world test data, most methods fail due to their limited generalization capability. 
The poor generalization is caused by several factors such as synthetic training data, small-scale training data, ineffective network structures, or unrealistic assumptions.\nIt is important to explore ways to improve the generalization.\n\\noindent\n\\textbf{Extension to Low-Light Video Enhancement.}\nUnlike the rapid development of video enhancement in other low-level vision tasks such as video deblurring , video denoising , and video super-resolution , low-light video enhancement has received less attention.\nA direct application of existing LLIE methods to videos often leads to unsatisfactory results and flickering artifacts. \nMore efforts are needed to remove visual flickering effectively, exploit the temporal information between neighboring frames, and speed up the enhancement. \n\\noindent\n\\textbf{Integrating Semantic Information.}\nSemantic information is crucial for low-light enhancement. It guides the networks to distinguish different regions in the process of enhancement. \nA network without access to semantic priors can easily deviate from the original color of a region, e.g., turning black hair to gray color after enhancement. Therefore, integrating semantic priors into LLIE models is a promising research direction. Similar work has been done on image super-resolution and face restoration .\n\t\\ifCLASSOPTIONcompsoc\n\t\\section*{Acknowledgments}\n\t\\else\n\t\\section*{Acknowledgment}\n\t\\fi\nThis study is supported under the RIE2020 Industry Alignment Fund Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). It is also partially supported by the NTU SUG and NAP grant. 
Chunle Guo is sponsored by CAAI-Huawei MindSpore Open Fund.\n\t\\ifCLASSOPTIONcaptionsoff\n\t\\newpage\n\t\\fi\n\t{\n\t\t\\bibliographystyle{IEEEtran}\n\t\t\\bibliography{bibliography}\n\t}\n\t\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{figures/lichongyi.jpg}}]{Chongyi Li} is a Research Assistant Professor with the School of Computer Science and Engineering, Nanyang Technological University, Singapore. He received the Ph.D. degree from Tianjin University, China in 2018. From 2016 to 2017, he was a joint-training Ph.D. Student with Australian National University, Australia. Prior to joining NTU, he was a postdoctoral fellow with City University of Hong Kong and Nanyang Technological University from 2018 to 2021. His current research focuses on image processing, computer vision, and deep learning, particularly in the domains of image restoration and enhancement. He serves as an associate editor of the Journal of Signal, Image and Video Processing and a lead guest editor of the IEEE Journal of Oceanic Engineering.\n\t\\end{IEEEbiography}\n\t\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{figures/guochunle.jpg}}]{Chunle Guo} received his PhD degree from Tianjin University in China under the supervision of Prof. Jichang Guo. He conducted the Ph.D. research as a Visiting Student with the School of Electronic Engineering and Computer Science, Queen Mary University of London (QMUL), UK. He continued his research as a Research Associate with the Department of Computer Science, City University of Hong Kong (CityU), from 2018 to 2019. Now he is a postdoc research fellow working with Prof. Ming-Ming Cheng at Nankai University. 
His research interests lie in image processing, computer vision, and deep learning.\n\t\\end{IEEEbiography}\n\t\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{figures/linghaohan.jpg}}]{Linhao Han} is currently a master student at the College of Computer Science, Nankai\n\t\tUniversity, under the supervision of Prof. Ming-Ming Cheng. His research interests include\n\t\tdeep learning and computer vision.\n\t\\end{IEEEbiography}\n\t\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{figures/jiangjun.jpg}}]{Jun Jiang} received the PhD degree in Color Science from Rochester Institute of Technology in 2013. He is a Senior Researcher in SenseBrain focusing on algorithm development to improve image quality on smartphone cameras. His research interest includes computational photography, low-level computer vision, and deep learning.\n\t\\end{IEEEbiography}\n\t\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{figures/chengmingming.jpg}}]{Ming-Ming Cheng} (Senior Member, IEEE) received the Ph.D. degree from Tsinghua University in 2012. Then he did two years research fellowship with Prof. Philip Torr at Oxford. He is currently a Professor at Nankai University and leading the Media Computing Laboratory. His research interests include computer graphics, computer vision, and image processing. He received research awards, including the ACM China Rising Star Award, the IBM Global SUR Award, and the CCF-Intel Young Faculty Researcher Program. He is on the Editorial Board Member of IEEE Transactions on Image Processing (TIP).\n\t\\end{IEEEbiography}\n\t\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{figures/jinweigu.jpg}}]{Jinwei Gu} (Senior Member, IEEE) is the R\\&D Executive Director of SenseTime USA. 
His current research focuses on low-level computer vision, computational photography, smart visual sensing and perception, and robotics. He obtained his Ph.D. degree in 2010 from Columbia University, and his B.S and M.S. from Tsinghua University, in 2002 and 2005 respectively. Before joining SenseTime, he was a senior research scientist in NVIDIA Research from 2015 to 2018. Prior to that, he was an assistant professor in Rochester Institute of Technology from 2010 to 2013, and a senior researcher in the media lab of Futurewei Technologies from 2013 to 2015. He is an associate editor for IEEE Transactions on Computational Imaging and an IEEE senior member since 2018.\n\t\\end{IEEEbiography}\n\t\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{figures/cavan.jpg}}]{Chen Change Loy} (Senior Member, IEEE) is an Associate Professor with the School of Computer Science and Engineering, Nanyang Technological University, Singapore. He is also an Adjunct Associate Professor at The Chinese University of Hong Kong. He received his Ph.D. (2010) in Computer Science from the Queen Mary University of London. Prior to joining NTU, he served as a Research Assistant Professor at the MMLab of The Chinese University of Hong Kong, from 2013 to 2018. He was a postdoctoral researcher at Queen Mary University of London and Vision Semantics Limited, from 2010 to 2013. He serves as an Associate Editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence and International Journal of Computer Vision. He also serves/served as an Area Chair of major conferences such as ICCV, CVPR, ECCV and AAAI. His research interests include image/video restoration and enhancement, generative tasks, and representation learning. 
\n\t\\end{IEEEbiography}\n\\end{document}", "id": "0f160045-16e9-4579-89b0-811073538246", "level": "section", "origin_cites_number": 0, "parent_id": "43aba542-7f6b-493f-9beb-9ae4164ca84b", "prefix_titles": [ [ "title", "Low-Light Image and Video Enhancement \\\\Using Deep Learning: A Survey" ], [ "section", "Future Research Directions" ] ], "subsections": [], "title": "Future Research Directions" } ]
82
[ 1875 ]
null
[ "Jer Shyuan Ng", "Wei Yang Bryan Lim", "Nguyen Cong Luong", "Zehui Xiong", "Alia Asheralieva", "Dusit Niyato", "Cyril Leung", "Chunyan Miao" ]
A Survey of Coded Distributed Computing
2020
2020-08-20T16:02:35Z
cs.DC
Distributed computing has become a common approach for large-scale computation of tasks due to benefits such as high reliability, scalability, computation speed, and cost-effectiveness. However, distributed computing faces critical issues related to communication load and straggler effects. In particular, computing nodes need to exchange intermediate results with each other in order to calculate the final result, and this significantly increases communication overheads. Furthermore, a distributed computing network may include straggling nodes that run intermittently slower. This results in a longer overall time needed to execute the computation tasks, thereby limiting the performance of distributed computing. To address these issues, coded distributed computing (CDC), i.e., a combination of coding theoretic techniques and distributed computing, has been recently proposed as a promising solution. Coding theoretic techniques have proved effective in WiFi and cellular systems to deal with channel noise. Therefore, CDC may significantly reduce communication load, alleviate the effects of stragglers, provide fault-tolerance, privacy and security. In this survey, we first introduce the fundamentals of CDC, followed by basic CDC schemes. Then, we review and analyze a number of CDC approaches proposed to reduce the communication costs, mitigate the straggler effects, and guarantee privacy and security. Furthermore, we present and discuss applications of CDC in modern computer networks. Finally, we highlight important challenges and promising research directions related to CDC.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "43fb2934-e7bf-4933-946e-becc16dd4285", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ] ], "subsections": [ "023f3c5d-7805-45c9-b902-bdd9101e05cb", "fc3d5a29-85c5-4d14-9457-330462c762f2", "9ba1f087-de1d-45e2-9661-620859b12846", "f06fb9ca-5252-413b-b98f-08903f106a9c", "79250ecd-4bf6-4689-8a12-f801fbeb39c1", "ee3d99f5-815c-4ba8-b336-968d0def830a", "825a0578-3fe6-4ffc-8428-dfabbdc2d42a", "8c1cb679-0c49-4078-b259-687a2df3303d", "5a390405-bf12-44a0-aa67-38cecb4ab9be" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:intro}\nIn recent years, distributed computing has been used for large-scale computation since it offers several advantages over centralized computing. First, distributed computing is able to provide computing services with high reliability and fault-tolerance. In particular, distributed computing systems can work efficiently and reliably even if some of the computing nodes, i.e., computers or workers, fail. Second, distributed computing achieves high computation speed as the computation load is shared among various computing nodes. Third, distributed computing systems are scalable since computing nodes can easily be added. Fourth, distributed computing is economical since it can be built from computing nodes with low-cost hardware. As such, distributed computing has been adopted in cloud computing and other emerging services. 
Given the aforementioned advantages, distributed computing has been applied in numerous real-life applications such as telecommunication networks (e.g., telephone networks and wireless sensor networks), network applications (e.g., world-wide web networks, massively multiplayer online games and virtual reality communities, distributed database management systems, and network file systems), real-time process control (e.g., aircraft control systems), and parallel computation (e.g., cluster computing, grid computing, and computer graphics).\nHowever, distributed computing faces serious challenges. Let us consider one of the most common distributed computation frameworks, i.e., MapReduce. MapReduce is a software framework and programming\nmodel for processing a computation task across large datasets using a large number of computing nodes, i.e., workers. A set of computing nodes is referred to as a cluster or a grid. In general, the overall computation task is decomposed into three phases, i.e., the ``Map'' phase, ``Shuffle'' phase, and ``Reduce'' phase. In the Map phase, a master node splits the computation task into multiple subtasks and assigns the subtasks to the computing nodes. The computing nodes compute the subtasks according to the allocated Map functions to generate intermediate results. Then, the intermediate results are exchanged among the computing nodes, namely ``data shuffling'', during the Shuffle phase. In the Reduce phase, the computing nodes use these results to compute the final result in a distributed manner by using their allocated Reduce functions. \nDistributed computing has two major challenges. First, the computing nodes need to exchange a number of intermediate results over the network with each other in order to calculate the final result; this significantly increases communication overheads and limits the performance of distributed computing applications such as Self-Join, TeraSort, and machine learning. 
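The Map/Shuffle/Reduce pipeline described above can be mimicked with a minimal, single-process sketch of a MapReduce-style word count. This is an illustration only; the function names and the hash-based key partitioning rule are our own assumptions, not part of any framework discussed here:

```python
from collections import defaultdict

def map_phase(subfiles):
    # Each worker emits a (word, 1) pair for every word in its allocated subfiles.
    return [[(word, 1) for text in worker_files for word in text.split()]
            for worker_files in subfiles]

def shuffle_phase(intermediate, num_workers):
    # Intermediate pairs are exchanged so that every key lands on the
    # worker responsible for reducing it (here: hash partitioning).
    buckets = [defaultdict(list) for _ in range(num_workers)]
    for pairs in intermediate:
        for key, value in pairs:
            buckets[hash(key) % num_workers][key].append(value)
    return buckets

def reduce_phase(buckets):
    # Each worker aggregates the values of its assigned keys.
    result = {}
    for bucket in buckets:
        for key, values in bucket.items():
            result[key] = sum(values)
    return result

subfiles = [["tree car", "tree tree"], ["car house", "tree house"]]  # 2 workers
counts = reduce_phase(shuffle_phase(map_phase(subfiles), num_workers=2))
print(counts)  # -> {'tree': 4, 'car': 2, 'house': 2} (key order may vary)
```

In a real deployment, each list comprehension and bucket would live on a separate machine, and `shuffle_phase` is where the network traffic discussed in this survey is incurred.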
For example, for the Hadoop cluster at Facebook, it is observed that on average, the data shuffling phase accounts for 33\\% of the overall job execution time. Moreover, 65\\% and 70\\% of the overall job execution time are spent on the Shuffle phase when running TeraSort and Self-Join applications, respectively, on a heterogeneous Amazon EC2 cluster. In fact, the communication bottleneck is worse in the training of convolutional neural networks (CNNs), e.g., ResNet-50 and AlexNet, which involves updates of millions of model parameters. Second, distributed computing is executed by a large number of computing nodes which may have very different computing and networking resources. As a result, there are straggling nodes or stragglers, i.e., computing nodes which run unintentionally slower than others, thereby increasing the overall time needed to complete the computing tasks. To address the straggler effects, traditional approaches such as work exchange and naive replication have been adopted for distributed computing. However, such approaches either introduce redundancy or require coordination among the nodes, which significantly increases communication costs and computation loads. This motivates the need for a novel technique that is able to more effectively and completely address the straggler effects and communication load of distributed computing. \nCoding theoretic techniques, e.g., channel coding such as low-density parity-check (LDPC) codes, have been widely used in WiFi and cellular systems to combat the impact of channel noise and impairments. They have also been applied in distributed storage systems and cache networks to reduce storage cost and network traffic. The basic principle of the coding theoretic techniques is that redundant information, i.e., redundancy, is introduced in messages/signals before they are transmitted to a receiver. 
The redundancy is included in the messages in a controlled manner such that it can be utilized by the receiver to correct errors caused by the channel noise. Coding theoretic techniques have recently been regarded as promising solutions to cope with the challenges in distributed computing. For example, coding theoretic techniques can be used to encode the Map tasks of the computing nodes such that the master node is able to recover the final result from partially finished nodes, thus alleviating the straggler effects. Another example is that coding theoretic techniques enable coding opportunities across intermediate results of the distributed computation tasks, which significantly reduces the communication load by reducing the number and the size of data transmissions among the processing nodes. The combination of coding techniques and distributed computing is called coded distributed computing (CDC). Apart from reducing communication load and alleviating the effects of stragglers, CDC can provide fault-tolerance, preserve privacy, and improve security in distributed computing. As a result, CDC approaches have recently received a lot of attention.\nCDC schemes can be applied in modern networks such as Network Function Virtualization (NFV) and edge computing. With data mainly generated by end devices, e.g., Internet of Things (IoT) devices, which have significant sensing as well as computational and storage capabilities, it is natural to perform some computations at the end devices instead of the cloud, which may not be able to handle the large amounts of data generated. As such, edge computing has been proposed as a solution to perform distributed computation tasks. In order to perform complex computations, e.g., the training of deep neural networks that involve a large number of training layers, resource-constrained devices may need to pool their resources to perform their computations collaboratively. 
This results in high communication costs and computation latency. CDC schemes can be implemented to overcome these challenges. Furthermore, CDC schemes can be implemented on edge computing networks that involve constantly-moving end devices , e.g., vehicles and smartphones, which imposes additional communication constraints.\nTo the best of our knowledge, although there are several surveys and books related to distributed computing, there is no survey paper on CDC. In particular, large-scale distributed computing and applications are discussed in . Surveys related to distributed computing include grid resource management systems for distributed computing~, resource allocation in high performance distributed computing~, wireless distributed computing~, and wireless grid computing~. This motivates the need for this survey on CDC. In summary, our survey has the following contributions:\n\\begin{itemize}\n\\item We describe the fundamentals of CDC. In particular, we introduce the commonly used distributed computation frameworks for the implementation of coding techniques and algorithms. We then discuss basic CDC schemes.\n\\item We review and discuss a number of CDC schemes to reduce the communication costs for distributed computing. The approaches include file allocation, coded shuffling design, and function allocation. We further analyze and compare the advantages and disadvantages of the CDC schemes. \n\\item We review, discuss, and analyze CDC schemes which mitigate the straggler effects of distributed computing. The approaches include computation load allocations, approximate coding, and exploitation of stragglers. \n\\item We review and present CDC schemes that can improve the privacy and security in distributed computing.\n\\item We analyze and provide insights into the existing approaches and solutions in the CDC literature.\n\\item We present and discuss applications of CDC in modern networks such as NFV and edge computing. 
\n\\item We highlight challenges and discuss promising research directions related to CDC.\n\\end{itemize}\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\linewidth]{structure.pdf}\n\\caption{\\small Structure of survey.}\n\\label{fig:structure}\n\\end{figure}\nFor the reader's convenience, we classify the related CDC studies according to the challenges that need to be handled. In particular, the issues are communication costs, straggler effects, and security. As such, readers who are interested in or working on related issues will benefit greatly from our insightful reviews and in-depth discussions of existing approaches, remaining/open problems, and potential solutions. The rest of this paper is organized as follows. Section~\\ref{sec:fundamentals} introduces the fundamentals of CDC. Section~\\ref{sec:schemes} presents basic CDC schemes. Section~\\ref{sec:comms} reviews CDC approaches that have been proposed to reduce communication costs. Section~\\ref{sec:stragglers} discusses CDC approaches that have been proposed to mitigate straggler effects. Section~\\ref{sec:security} presents CDC approaches that have been proposed to enhance privacy and security in distributed computing.~Section~\\ref{sec:applications} discusses applications of CDC. Section~\\ref{sec:challenges} highlights important challenges and promising research directions. The structure of the survey is presented in Figure~\\ref{fig:structure}. Section~\\ref{sec:conclusion} concludes the paper. 
A list of abbreviations commonly used in this paper is given in Table~\\ref{tab:table_abb}.\n\\begin{table}[h!]\n\\scriptsize\n \\caption{\\small List of common abbreviations used in this paper.}\n \\label{tab:table_abb}\n \\centering\n \\begin{tabularx}{8.7cm}{|Sl|X|}\n \\hline\n \\rowcolor{mygray}\n \\textbf{Abbreviation} & \\textbf{Description} \\\\ \\hline\nARIMA & Auto Regressive Integrated Moving Average \\\\ \\hline\nBCC & Batch Coupon's Collector\\\\ \\hline\nBGC & Bernoulli Gradient Code\\\\ \\hline\nBGW & Ben-Or, Goldwasser, and Wigderson \\\\ \\hline\nBPCC & Batch-Processing Based Coded Computing\\\\ \\hline\nC3P & Coded Cooperative Computation Protocol\\\\ \\hline\nCDC & Coded Distributed Computing\\\\ \\hline\nCNN & Convolutional Neural Network\\\\ \\hline\nCPGC & Coded Partial Gradient Computation\\\\ \\hline\nDAG & Directed Acyclic Graphs\\\\ \\hline\nDNN & Deep Neural Network \\\\ \\hline\nFRC & Fractional Repetition Coding\\\\ \\hline\nHCMM & Heterogeneous Coded Matrix Multiplication\\\\ \\hline\nIoT & Internet of Things\\\\ \\hline\nLCC & Lagrange Coded Computing\\\\ \\hline\nLDPC & Low-Density Parity-Check\\\\ \\hline\nLT & Luby Transform\\\\ \\hline\nMDS & Maximum Distance Separable\\\\ \\hline\nMMC & Multi-Message Communication\\\\ \\hline\nMPC & Multi-Party Computation \\\\ \\hline\nNFV & Network Function Virtualization \\\\ \\hline\nPCR & Polynomially Coded Regression\\\\ \\hline\nPDA & Placement Delivery Array \\\\ \\hline\nSDMM & Secure Distributed Matrix Multiplication\\\\ \\hline\nSGC & Stochastic Gradient Coding \\\\ \\hline\nSGD & Stochastic Gradient Descent\\\\ \\hline\nSVD & Singular Vector Decomposition\\\\ \\hline\nUAV & Unmanned Aerial Vehicle\\\\ \\hline\n\\end{tabularx}\n\\end{table}", "id": "023f3c5d-7805-45c9-b902-bdd9101e05cb", "level": "section", "origin_cites_number": 9, "parent_id": "43fb2934-e7bf-4933-946e-becc16dd4285", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Introduction" 
] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:fundamentals}\nDistributed computing has been an important solution to large-scale, complex computation problems that involve massive amounts of data. Various distributed computing models, e.g., cluster computing, grid computing and cloud computing, have been developed to perform the distributed computation tasks while providing high quality of service (QoS) to the users. Among the distributed computing models, cloud computing has gained much popularity recently as it eliminates the need for users to purchase expensive hardware and software resources, since the users only need to pay for the cloud services on an on-demand basis according to their usage needs. A comparison between cluster, grid and cloud computing models is summarized in Table~\\ref{tab:compare}. \nDistributed computing has been widely implemented in a variety of applications, e.g., sensor networks, healthcare applications, the development of smart cities, automated manufacturing processes and vehicular applications. In order to improve the performance of the distributed computing systems, various aspects such as resource allocation strategies, task allocation strategies, scheduling algorithms, incentive mechanisms, energy efficiency, network security and the performance modelling of the distributed computing systems have been extensively studied in the literature. However, the performance of the distributed computing systems is still limited by the high communication costs and straggler effects, which lead to a longer time needed to execute the computation tasks. 
As a result, recent research has focused on coding techniques to overcome these implementation challenges of the distributed computing systems, the aims of which are to minimize the communication load as well as to mitigate the straggler effects.\nIn this section, we discuss commonly used distributed computation frameworks for the implementation of coding techniques and algorithms. Note that while the different computation frameworks are useful for different computing applications, we focus specifically on the MapReduce framework as the majority of the research works on CDC schemes are based on the MapReduce computation framework. We also introduce the two main lines of work in CDC, i.e., to reduce communication load and to mitigate the straggler effects, which aim to solve the challenges in distributed computing.\n\\begin{table*}[t]\n\\caption{\\small Comparison between cluster, grid and cloud computing models. }\n\\label{tab:compare}\n\\centering\n\\begin{tabular}{|l |c| c| c|}\n\\hline\n\\rowcolor{mygray}\n\\multicolumn{1}{c|}{\\textbf{Feature}} & \\textbf{Cluster} & \\textbf{Grid} & \\textbf{Cloud} \\\\ \\hline\nSize & Small to medium & Large & Small to large\\\\ \\hline\nNetwork type & Private, LAN & Private, WAN & Public, WAN\\\\ \\hline\nJob management and scheduling & Centralized & Decentralized & Both\\\\ \\hline\nCoupling & Tight & Loose/tight & Loose \\\\ \\hline\nResource reservation & Pre-reserved & Pre-reserved & On-demand\\\\ \\hline\nService-level agreement (SLA) constraint & Strict & High & High\\\\ \\hline\nResource support & Homogeneous and heterogeneous (GPU) & Heterogeneous & Heterogeneous\\\\ \\hline\nVirtualization & Semi-virtualized & Semi-virtualized & Completely virtualized\\\\ \\hline\nSecurity type & Medium & High & Low \\\\ \\hline\nService-oriented architecture and heterogeneity support & Not supported & Supported & Supported\\\\ \\hline\nUser interface & Single system image & Diverse and dynamic & Single system image \\\\ 
\\hline\nInitial infrastructure cost & Very high & High & Low \\\\ \\hline\nSelf service and elasticity & No & No & Yes\\\\ \\hline\nAdministrative domain & Single & Multi & Both\\\\ \\hline\n\\end{tabular}\n\\end{table*}", "id": "fc3d5a29-85c5-4d14-9457-330462c762f2", "level": "section", "origin_cites_number": 12, "parent_id": "43fb2934-e7bf-4933-946e-becc16dd4285", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Fundamentals of Coded Distributed Computing" ] ], "subsections": [ "9766cc05-52f0-40d7-8bb2-882e1f23113b", "e5ec24a4-049a-49ee-b4fa-e11576083168" ], "title": "Fundamentals of Coded Distributed Computing" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec:mapreduce}\nWhile the distributed computation frameworks have moved beyond a simple MapReduce framework, the majority of the studies on CDC have focused on the MapReduce framework. \n\\begin{figure}\n\\centering\n\\includegraphics[width=\\linewidth]{conventional.pdf}\n\\caption{\\small Illustration of conventional MapReduce framework. The intermediate output pairs are represented by (key,frequency)[book number] and the output pairs are represented by (key,frequency).}\n\\label{fig:conventional}\n\\end{figure}\nMapReduce is a software framework and programming model that runs on a large cluster of commodity machines for the processing of large-scale datasets in a distributed computing environment. The cluster of computers is modelled as a master-worker system which consists of a single master node and multiple workers that store and analyze massive amounts of unstructured data. Due to its scalability and its ability to tolerate machine failures, the MapReduce framework is commonly used in a wide range of applications, e.g., the analysis of web access logs, the clustering of documents, the construction of a web-link graph that maps all source URLs to a target URL, and the development of machine learning algorithms. 
Generally, the MapReduce computation framework involves the processing of a large input file to generate multiple output pairs, each of which consists of a key and a corresponding value. Figure~\\ref{fig:conventional} demonstrates the implementation of the conventional MapReduce framework to determine the frequency of occurrence of 4 specific words in the books, where the 4 processing nodes, i.e., the workers, compute the 4 output pairs. There are three important phases in the MapReduce computation framework:\n\\begin{enumerate}\n\\item In the \\textbf{\\emph{Map}} phase, there are two stages, namely the allocation of Map tasks and the execution of Map tasks. Generally, a Map task is a function that generates a key-value output pair based on the allocated subfiles. Firstly, as seen in Fig.~\\ref{fig:conventional}, the master node splits the input file into 8 subfiles of smaller sizes and allocates the subfiles to the 4 workers. Secondly, each worker produces 4 intermediate key-value pairs for each allocated subfile using the map functions. Since the workers are allocated 2 subfiles each, each worker generates 8 computed intermediate results.\n\\item In the \\textbf{\\emph{Shuffle}} phase, the workers exchange their computed intermediate results to obtain the required intermediate results for the computation of the Reduce functions. In particular, in each time slot, one of the workers creates a message that contains information of the intermediate output pairs from the Map phase and transmits the message to other workers. The shuffling process continues until all workers have received the required intermediate output pairs for the Reduce phase. \n\\item In the \\textbf{\\emph{Reduce}} phase, the workers aggregate the 8 intermediate key-value pairs obtained from the Shuffle phase and compute the final result, which is a smaller set of key-value pairs, using the reduce functions. In particular, the reduction tasks are evenly distributed among the workers. 
Each reduce function is responsible for the evaluation of a key. For example, node 1 in Fig.~\\ref{fig:conventional} is responsible for the evaluation of ``Tree''. Therefore, the total number of reduce functions needed equals the total number of keys of the output, i.e., 4 reduce functions are needed to compute 4 output pairs.\n\\end{enumerate}\nApart from the MapReduce framework, there are other distributed computation frameworks that provide support for the processing of large-scale datasets, such as:\n\\begin{itemize}\n\\item \\emph{Spark: }It supports applications that need to reuse a working dataset across multiple parallel operations. These applications cannot be expressed efficiently as the acyclic data flows required in popular computation frameworks such as MapReduce. There are two use cases for the implementation of the Spark computation framework: (i) iterative machine learning algorithms which operate on the same dataset repeatedly, and (ii) interactive data analysis tools, where different users query for a subset of data from the same dataset.\n\\item \\emph{Dryad: }By allowing the developers to construct their own communication graphs and the subroutines at the vertices through a simple, high-level programming language, Dryad executes large-scale data-intensive computations over clusters consisting of multiple computers. It does not require the developers to express their code in Map, Shuffle and Reduce phases in order to adopt the MapReduce framework for computations. 
Besides, the Dryad execution engine, which is based on the constructed data flow graph, takes care of the implementation issues of the distributed computation tasks such as the scheduling of tasks, allocation of resources and the recovery from communication and computation failures.\n\\item \\emph{CIEL: }The main characteristic of the CIEL computation framework is that it allows data-dependent data flows where the directed acyclic graphs (DAGs) are built dynamically based on the execution of previous computations, rather than being statically predetermined. Instead of maximizing throughput, CIEL aims to minimize the latency of individual tasks, which is very useful for the implementation of iterative algorithms, where latency grows significantly as the number of iterations increases.\n\\end{itemize}\nA comparison between the distributed computation frameworks is presented in Table~\\ref{tab:frameworks}.\n\\begin{table}[t]\n\\caption{\\small Comparison between distributed computation frameworks. 
}\n\\label{tab:frameworks}\n\\centering\n\\begin{tabular}{|m{2.8cm} | >{\\centering\\arraybackslash}m{1.5cm} |>{\\centering\\arraybackslash}m{1.5cm} |>{\\centering\\arraybackslash}m{1.3cm} |}\n\\hline\n\\rowcolor{mygray}\n\\multicolumn{1}{c|}{\\textbf{Feature}} & \\textbf{MapReduce} & \\textbf{Dryad} & \\textbf{CIEL} \\\\ \\hline\nDynamic Control Flow & No & No & Yes \\\\ \\hline\nTask Dependencies & Fixed (2-stage) & Fixed (DAG) & Dynamic \\\\ \\hline\nFault Tolerant & Transparent & Transparent & Transparent \\\\ \\hline\nData Locality & Yes & Yes & Yes \\\\ \\hline\nTransparent Scaling & Yes & Yes & Yes \\\\ \\hline\n\\end{tabular}\n\\end{table}", "id": "9766cc05-52f0-40d7-8bb2-882e1f23113b", "level": "subsection", "origin_cites_number": 4, "parent_id": "fc3d5a29-85c5-4d14-9457-330462c762f2", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Fundamentals of Coded Distributed Computing" ], [ "subsection", "Coded Distributed Computation Frameworks" ] ], "subsections": [], "title": "Coded Distributed Computation Frameworks" }, { "cite_extract_rate": 0, "cites": [], "content": "There are two main lines of work in CDC. Firstly, the CDC schemes are implemented to minimize the \\emph{communication load} in distributed computing systems. Secondly, the CDC schemes aim to mitigate the \\emph{straggler effects}, which cause a delay in the computation of the distributed tasks. For each of the objectives, we discuss the importance of solving these issues to improve the performance of the distributed computing systems. Then, we briefly discuss the existing solutions that have been proposed in the literature to meet these objectives. However, these existing solutions do not adopt coding approaches. Different from the existing solutions, the CDC schemes are able to meet these objectives by introducing coded redundancy. 
In fact, the CDC schemes outperform the replication methods, e.g., naive replication and the fork-join model, which introduce redundancy without coding techniques, in terms of the time taken to execute the tasks.", "id": "e5ec24a4-049a-49ee-b4fa-e11576083168", "level": "subsection", "origin_cites_number": 2, "parent_id": "fc3d5a29-85c5-4d14-9457-330462c762f2", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Fundamentals of Coded Distributed Computing" ], [ "subsection", "Objectives of CDC Schemes" ] ], "subsections": [ "109ae9de-6803-47dc-a8d7-f4437916dc6b", "274c5abc-70eb-46be-bb88-eff1a21724b3" ], "title": "Objectives of CDC Schemes" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsub:comms} Among the three phases of the MapReduce computation framework, the Shuffle phase dominates the time required to complete the computation tasks since multiple communications between the processing nodes are needed to exchange their intermediate results. For the Hadoop cluster at Facebook, it is observed that on average, the data shuffling phase accounts for 33\\% of the overall job execution time. In fact, the data shuffling phase is more time consuming when running on heterogeneous clusters with diverse computational, communication and storage capabilities. When running TeraSort, which is a conventional distributed sorting algorithm for large amounts of data, and Self-Join applications on heterogeneous Amazon EC2 clusters, 65\\% and 70\\% of the overall job execution time are spent on the Shuffle phase, respectively. The data shuffling process is also an important step in implementing distributed learning algorithms. In particular, to train machine learning models with distributed algorithms, it is common to shuffle the data randomly and run the algorithms iteratively for a number of times such that the processing nodes compute a different subset of the data at each iteration until there is convergence. 
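The epoch-wise random reshuffling just described can be sketched as follows. This is a simplified, single-process illustration; the strided partitioning rule and function name are our own assumptions, and in a real system each permuted partition would have to be shipped over the network, which is exactly the shuffling cost discussed above:

```python
import random

def reshuffle_partitions(num_samples, num_workers, num_epochs, seed=0):
    # Before every epoch, the full index set is randomly permuted and
    # re-partitioned among the workers, so each worker trains on a
    # different subset of the data in each iteration.
    rng = random.Random(seed)
    indices = list(range(num_samples))
    for _ in range(num_epochs):
        rng.shuffle(indices)
        # Strided split of the permuted indices among the workers.
        yield [indices[w::num_workers] for w in range(num_workers)]

for epoch, parts in enumerate(reshuffle_partitions(8, num_workers=2, num_epochs=3)):
    # Each epoch yields a true partition: disjoint subsets covering all samples.
    assert sorted(i for p in parts for i in p) == list(range(8))
    print(epoch, parts)
```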
\\emph{For the logistic regression application, which requires at least 100 iterations to converge, 42\\% of the iteration time is spent on communication}. For each time the data shuffling process is performed, the entire training dataset is communicated over the network, resulting in high communication costs which limit the performance of the distributed computing systems. \nSince the performance of the data shuffling process has a significant impact on the overall performance of the distributed computing systems, it has been extensively studied in the literature. Various data shuffling strategies are proposed to achieve different objectives such as minimizing the job execution time, maximizing the utilization of resources and accommodating interactive workloads. While the overlap between the map computations and the shuffle communication helps to reduce the latency of the distributed computation tasks, the computing nodes require large storage capacities for buffering. An efficient and adaptive data shuffling strategy has been proposed to manage the tradeoff between the accumulation of the shuffle blocks and the minimization of the utilization of memory space, so as to reduce the overall job execution time and improve the scalability of the distributed computing systems. In another work, the authors propose a virtual data shuffling strategy which reduces storage space and traffic load in the network by delaying the actual movement of the data until it is needed to complete the computations in the Reduce phase. \nTo improve the performance of the data shuffling process, task scheduling algorithms such as the Quincy scheduler, Hadoop Fair Scheduler and delay scheduling algorithm are also designed to allocate tasks to the workers. 
In the design of optimal task scheduling and task selection algorithms, the communication load can be minimized through various approaches such as optimizing the placement of computation tasks, distributing the computing resources fairly to the nodes and maximizing the resource utilization of the systems. \nSince the task scheduling schemes are not the focus of this survey, we refer interested readers to the related literature and the references therein for more detailed information on the scheduling techniques.\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\linewidth]{uncoded.pdf}\n\\caption{\\small Illustration of naive replication MapReduce framework.}\n\\label{fig:uncoded}\n\\end{figure}\nOne of the ways to reduce communication load in the shuffling phase is by repeating the computation tasks. Figure~\\ref{fig:uncoded} illustrates the implementation of the MapReduce framework by using the naive replication method in which 4 processing nodes are required to compute 4 output pairs, which is the same computation task illustrated in Fig.~\\ref{fig:conventional}. By simply replicating the Map tasks such that each worker is required to compute more intermediate output pairs, i.e., 16 in Fig.~\\ref{fig:uncoded} instead of 8 intermediate output pairs in Fig.~\\ref{fig:conventional}, the communication load is reduced as fewer intermediate output pairs are communicated. For example, in the conventional MapReduce framework in Fig.~\\ref{fig:conventional}, node 1 needs to obtain 6 intermediate output pairs from other workers whereas in the naive replication scheme in Fig.~\\ref{fig:uncoded}, node 1 only needs to obtain 4 intermediate output pairs from other workers. \nHowever, the aforementioned non-coding methods have limits to which the communication load in the data shuffling phase can be minimized. 
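Under the setting of the two figures (8 subfiles, 4 workers, one reduce key per worker, one intermediate pair per subfile per key), the saving from replication can be counted directly: a worker needs one pair per subfile for its key and already holds the pairs for the subfiles it mapped. The following sketch is our own back-of-the-envelope illustration of this uncoded computation-communication tradeoff, not an algorithm from the literature:

```python
def uncoded_shuffle_load(num_subfiles, num_workers, r):
    """Pairs each worker must fetch in the Shuffle phase when every
    subfile is mapped by r workers and each worker reduces one key."""
    mapped_per_worker = r * num_subfiles // num_workers
    per_worker = num_subfiles - mapped_per_worker
    return per_worker, num_workers * per_worker  # (per worker, total)

# Conventional MapReduce (r = 1): each node fetches 6 pairs, 24 in total.
print(uncoded_shuffle_load(8, 4, r=1))  # -> (6, 24)
# Naive replication (r = 2): each node fetches only 4 pairs, 16 in total.
print(uncoded_shuffle_load(8, 4, r=2))  # -> (4, 16)
```

The counts 6 and 4 match the per-node loads quoted in the text for the conventional and naive replication schemes, and show that doubling the Map computation load saves only a third of the shuffle traffic, which is why coded approaches are needed to go further.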
Given that the naive replication method reduces the communication load in the data shuffling phase by introducing redundancy to the systems (which is also discussed in Section~\ref{subsec:cdccommunication}), coding techniques can be used to introduce redundancy that further minimizes the communication load, which will be discussed in-depth in Section~\ref{sec:comms}.
In distributed computing systems, the processing nodes may have heterogeneous computing capabilities, i.e., different processing speeds. As such, another line of work in the CDC literature is to solve the bottleneck that results from a variation in the time taken to complete the allocated tasks. In distributed computing systems, there are \emph{stragglers}, which are processing nodes that run unexpectedly slower than the average or nodes that may be disconnected from the network due to several factors such as insufficient power, contention of shared resources, imbalanced work allocation and network congestion . As a result, the overall time needed to execute the tasks is determined by the slowest processing node. 
We briefly discuss the existing approaches (which are summarized in Table~\\ref{tab:priorstragglers}) to handle the straggler effects as follows:\n\\begin{table*}[t]\n\\caption{\\small Approaches to mitigate the straggler effects.} \n\\label{tab:priorstragglers}\n\\centering\n\\begin{tabular}{|>{\\centering\\arraybackslash}m{2.6cm} |m{6.5cm} |m{6.5cm}|}\n\\hline\n\\rowcolor{mygray}\n\\textbf{Approach} & \\multicolumn{1}{c|}{\\textbf{Key Ideas}} & \\multicolumn{1}{c|}{\\textbf{Shortcomings}} \\\\ \\hline\nStragglers Detection & Detects the straggling nodes, determines the cause of delay and implements targeted solutions & Difficult to identify the cause of delays \\\\ \\hline\nWork Stealing & Reallocates the remaining computation tasks from the slower workers to the faster workers & The capabilities of the slower workers are not maximized\\\\ \\hline\nWork Exchange & Reallocates the computation tasks every time a worker completes its tasks & Incurs high communication costs due to the feedback from the workers to the master node as well as the reallocation of data \\\\ \\hline\nNaive Replication & Introduces redundancy where each computation subtask is performed by more than one processing node & Incurs high communication costs and computation load\\\\ \\hline\nCoded Redundancy & Uses coding techniques to introduce redundancy such that the master node can recover the final result from any decodable set of workers & Still incurs high communication costs and computation load, but lower than that of naive replication \\\\ \\hline\n\\end{tabular}\n\\end{table*}\n\\begin{itemize}\n\\item \\emph{Stragglers Detection: }The most direct approach to mitigate the straggler effects is to detect the stragglers and act on them early in their lifetime. For example, Mantri detects the stragglers by identifying the tasks that are processed at a rate slower than the average. The system determines the cause of the delay and implements targeted solutions to mitigate the stragglers. 
The solutions include restarting the tasks allocated to the stragglers at other processing nodes, optimally allocating the tasks based on network resources, as well as protecting against interim data loss by replicating the outputs of valuable tasks. 
\item \emph{Work Stealing: }The basic idea of the work stealing algorithm is to allow the faster processing nodes to take over the remaining computation tasks from the slower processing nodes so that the overall job execution time is minimized. By adopting this approach, the faster processing nodes operate continuously while the slower processing nodes are left idle, after their jobs are taken over, until the end of the computation session. 
\item \emph{Work Exchange: }By leveraging on the information of computational heterogeneity in the system, the master node first allocates the tasks to the workers based on their computational capabilities. Upon receiving the first computed result from any of the workers, the master node pauses the computation process and redistributes the remaining incomplete work to be computed by the workers. The process is performed for a number of iterations until all work is done. Since the workers need to inform the master node of the amount of work done each time the computation process is paused, additional communication costs are incurred. The higher communication costs are also a result of the reallocation of data to the workers. 
\item \emph{Naive Replication: }One of the solutions to handle stragglers in the distributed computing systems is to introduce redundancy to minimize computation latency. The computation task is replicated and executed over multiple processing nodes. Since all processing nodes are working on the same computation task, the time required to complete the computation task is determined by the fastest processing node. The partial computations of the remaining processing nodes are discarded. 
Experiments on Google Trace data have shown the effectiveness of the use of redundancy in minimizing computation latency by eliminating the need to wait for the computed results from the stragglers. However, the introduction of redundancy comes at the expense of higher costs such as increased communication bandwidth and computation load . Various redundancy strategies have been analyzed to derive the limiting distribution of the state of the systems . Although the introduction of redundancy helps to reduce latency, the performance varies under different settings. In fact, in some settings, it is optimal not to use any redundancy strategy. Looking into this, the work in presents the optimal redundant-requesting policies under diverse settings.
\end{itemize}
Similar to the existing methods to reduce communication costs as discussed in the previous section (Section~\ref{subsub:comms}), the existing methods to mitigate the straggler effects do not adopt coding approaches. Coding techniques can also be used to introduce redundancy into the systems to mitigate the straggler effects. The authors in investigate the tradeoff between latency and cost for both replication-redundancy systems and coded-redundancy systems. Coded-redundancy systems outperform the replication-redundancy systems in both latency and cost. In other words, by using coding techniques, the latency and cost incurred are lower than those of naive replication. 
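The latency gain of waiting only for a decodable subset of workers can be illustrated with a small Monte Carlo sketch. This is a toy model, not the analysis of the cited works: it assumes i.i.d. exponential worker runtimes and a hypothetical recovery threshold of 8 out of 12 workers.

```python
import random

random.seed(0)
K = 12          # workers
k_recover = 8   # hypothetical recovery threshold of a coded-redundancy scheme

def avg_completion_time(wait_for, trials=20_000):
    """Average job latency when the master must wait for the `wait_for`
    fastest workers, assuming i.i.d. exponential runtimes (a toy model)."""
    total = 0.0
    for _ in range(trials):
        runtimes = sorted(random.expovariate(1.0) for _ in range(K))
        total += runtimes[wait_for - 1]   # time of the wait_for-th fastest worker
    return total / trials

uncoded = avg_completion_time(K)       # must wait for the slowest straggler
coded = avg_completion_time(k_recover) # any 8 decodable results suffice
assert coded < uncoded
```

Under this toy model, waiting for the 8th-fastest of 12 workers is roughly three times faster on average than waiting for all 12, which mirrors the qualitative conclusion above.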
The use of coding techniques to mitigate the straggler effects is discussed in more detail in Section~\ref{subsec:codingtech} and Section~\ref{sec:stragglers}.
To better understand the proposed CDC schemes, some of the commonly used performance metrics of the distributed computing systems are defined as follows:
\begin{enumerate}
\item \emph{Storage space} is defined as the total number of files stored across the $K$ processing nodes, normalized by the total number of subfiles $N$ .
\item \emph{Computation load}, represented by $r$ where $1\leq r \leq K$, is defined as the total number of Map functions computed across the $K$ processing nodes, normalized by the total number of subfiles $N$ . In particular, when $r=1$, each Map function is computed by only a single processing node. When $r=2$, each Map function is computed by two processing nodes on average. 
\item \emph{Communication load}, represented by $L$ where $0\leq L \leq 1$, is defined as the total number of bits communicated by the $K$ processing nodes in the Shuffle phase, normalized by the total number of subfiles $N$ .
\end{enumerate}
Given that coding techniques are able to solve the aforementioned implementation challenges of the distributed computing systems, we review various proposed CDC schemes, which is the main focus of this paper. 
In the following section, we present a tutorial on the simple CDC schemes along these two lines of work, i.e., minimizing the communication load and mitigating the straggler effects, which is useful for better understanding the related works discussed in Sections~\ref{sec:comms}, \ref{sec:stragglers} and \ref{sec:security}.
\label{sec:schemes}
Recently, coding techniques have become a popular approach to solve the challenges of the distributed computing systems. As mentioned previously, there are two main lines of work in CDC: (i) to reduce the communication costs and (ii) to mitigate the straggler effects. In this section, we introduce the two basic CDC schemes which are the first works that show the effectiveness of using coding techniques to solve these two challenges separately. 
Then, we discuss a unified CDC scheme that characterizes the tradeoff between computation latency and communication load.
\label{subsec:cdccommunication}
In the conventional MapReduce computation framework as shown in Fig.~\ref{fig:conventional}, after the split of the input file into multiple subfiles, each subfile is mapped to only one of the processing nodes, i.e., the workers. The naive replication scheme, i.e., the uncoded data shuffling scheme, which relaxes this restriction, can reduce the communication costs of the system by allowing each subfile to be replicated and mapped to more than one processing node. In the example illustrated in Fig.~\ref{fig:uncoded}, each subfile is repeated twice. Hence, as compared to the conventional MapReduce framework, each processing node has more Map tasks to perform in the naive replication scheme. However, by simple replication, the communication load in the Shuffle phase decreases and this gain is known as the \emph{repetition gain}. Specifically, the communication load for uncoded schemes, which include both the conventional MapReduce framework and the naive replication scheme, is given as follows :
\begin{equation}
L_{uncoded}(r)=1-\frac{r}{K},
\label{eqn:uncoded}
\end{equation}
where $K$ is the number of processing nodes in the network. 
Based on Equation (\\ref{eqn:uncoded}), the communication loads achieved by the conventional MapReduce framework in Fig.~\\ref{fig:conventional} and the naive replication scheme in Fig.~\\ref{fig:uncoded} are $\\frac{3}{4}$ where $r=1$ and $\\frac{1}{2}$ where $r=2$ respectively.\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\linewidth]{coded.pdf}\n\\caption{\\small Illustration of Coded MapReduce framework.}\n\\label{fig:coded}\n\\end{figure}\nTo further reduce the communication load, i.e., to increase the repetition gain, the Coded MapReduce computation framework is proposed in where the Map tasks are carefully distributed among the processing nodes and the messages are encoded for transmission in the Shuffle phase by using coding theory. Figure~\\ref{fig:coded} illustrates the Coded MapReduce framework with 4 processing nodes to determine the 4 output pairs. After the Map phase, the processing node multicasts a bit-wise XOR of two computed intermediate pairs, denoted by $\\oplus$, satisfying the requirements of two other processing nodes simultaneously. For example, node 1 multicasts a bit-wise XOR of ``Bear'' and ``Fork'' to both nodes 2 and 3, which involves the transmission of only one packet of information, instead of two packets if the information is sent separately to the nodes in a unicast manner. Since the intermediate output pairs are now coded, there is an additional step of decoding before the reduce functions are applied. Given the coded ``BearFork'' information, node 2 is able to decode and recover the required ``Bear'' information by cancelling the ``Fork'' information since node 2 has also computed the same ``Fork'' information. Similarly, node 3 can recover ``Fork'' information by cancelling the ``Bear'' information. 
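The XOR cancellation in this example can be verified in a few lines of Python. This is an illustrative sketch of the multicast step only, not an implementation of Coded MapReduce; the string values stand in for intermediate key-value pairs.

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Bit-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

bear, fork = b"Bear", b"Fork"   # intermediate values computed at node 1

packet = xor(bear, fork)        # one multicast packet serves two nodes at once

# Node 2 already computed "Fork" in its Map phase, so it cancels it out...
assert xor(packet, fork) == bear
# ...while node 3 cancels its locally computed "Bear" to recover "Fork".
assert xor(packet, bear) == fork
```

Because XOR is its own inverse, a single coded packet replaces two unicast transmissions, which is exactly the multicast gain described above.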
The simulation results in show that the Coded MapReduce reduces the communication load by 66\\% and 50\\% as compared to the conventional MapReduce framework and the naive replication scheme respectively.\nSince the use of coding techniques reduces both the latency and cost of the distributed computing systems , a more generalized framework known as the Coded Distributed Computing (CDC) scheme is introduced in . \nThe study of presents the fundamental inverse relationship between computation load and communication load. Specifically, the communication load in the Shuffle phase can be reduced by a factor $r$ by increasing the computation load in the Map phase by the same factor $r$, as shown in Fig.~\\ref{fig:tradeoff}. The communication load achieved by the CDC framework, $L_{coded}$, is given as follows:\n\\begin{equation}\nL_{coded}(r)=\\frac{1}{r}\\left(1-\\frac{r}{K}\\right).\n\\label{eqn:coded}\n\\end{equation}\nNote that the information-theoretic lower bound derived on the minimum communication load $L^*(r)$ equals $L_{coded}(r)$ of the CDC framework. As such, the optimal tradeoff between the computation load and communication load is characterized as follows :\n\\begin{equation}\nL^*(r)=L_{coded}(r)=\\frac{1}{r}\\left(1-\\frac{r}{K}\\right), r \\in\\mathcal{K}.\n\\end{equation}\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\linewidth]{tradeoff.pdf}\n\\caption{\\small Comparison of communication load between the CDC scheme and the uncoded scheme .}\n\\label{fig:tradeoff}\n\\end{figure}\nFrom Equation (\\ref{eqn:uncoded}) which applies to the uncoded computation schemes, the communication load $L$ decreases linearly as the computation load $r$ increases. However, when the number of processing nodes $K$ becomes large, there is no significant impact of increasing computation load on the communication load. 
On the other hand, for the proposed CDC framework, the communication load is inversely proportional to the computation load (Equation (\ref{eqn:coded})). Even when $K$ becomes large, the increase in computation load still significantly reduces the communication load. 
Since the proposed CDC framework can be applied to any distributed computation framework with an underlying MapReduce structure, the performance of the CDC framework on TeraSort is evaluated. Experimental results on Amazon EC2 clusters show that the Coded TeraSort scheme, which is a coded distributed sorting algorithm, achieves a reduction in the overall job execution time by factors of 2.16 and 3.39 with 16 processing nodes and computation loads of $r=3$ and $r=5$, respectively, as compared to the uncoded TeraSort scheme.
Previously in , computation load is linearly dependent on the number of replicated Map tasks, i.e., the load redundancy, as each processing node is assumed to compute all intermediate values for all subfiles allocated in its memory. However, the processing nodes can be selective in choosing the intermediate values to compute. As such, the load redundancy is no longer a direct measure of computation load. In other words, the storage constraints do not necessarily imply computation constraints. Building on the work in , the authors in propose an alternative tradeoff between computation load and communication load under a predefined storage constraint. In fact, the computation load is quadratic in terms of load redundancy. By taking load redundancy into consideration, an alternative computation-communication tradeoff curve is derived. In particular, this alternative tradeoff curve is especially relevant to the processing nodes that do not have sufficient resources or time to perform computations for all the allocated subfiles. 
Given that the processing nodes can only perform a limited amount of computations below the computation load threshold, the alternative tradeoff curve proposed in accurately defines the amount of communication load needed for the distributed computation tasks.
Since the processing nodes are not required to compute all intermediate results that can be obtained from their locally stored data, the storage capabilities of the processing nodes should be considered in the CDC computation framework in . The study of characterizes the tradeoff between storage, computation and communication where the minimum communication load is determined given the storage and computational capabilities of the processing nodes. In particular, the optimal computation curve is obtained by characterizing the optimal storage-communication tradeoff given the minimum computation load. As a result, the triangles between the optimal communication curve and the optimal computation curve reflect the Pareto-optimal surface of all achievable storage-computation-communication triples. However, as the number of processing nodes in the system increases, the number of input files required increases exponentially, resulting in an increase in the number of transmissions needed and hence high communication costs. As such, it is important to reduce the number of input files, which is discussed in Section~\ref{subsubsec:packet} later. 
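To make Equations (\ref{eqn:uncoded}) and (\ref{eqn:coded}) concrete, the following sketch evaluates both loads for $K=16$ nodes (the cluster size used in the Coded TeraSort experiments above) and checks that coding gains exactly a factor of $r$ over the uncoded scheme.

```python
K = 16  # processing nodes, as in the Coded TeraSort experiments above

def L_uncoded(r):
    """Uncoded shuffle load: repetition gain only."""
    return 1 - r / K

def L_coded(r):
    """CDC shuffle load: an extra multicast gain of 1/r on top of repetition."""
    return (1 / r) * (1 - r / K)

# The multicast gain of coding is exactly a factor of r.
for r in (1, 3, 5, 8):
    assert abs(L_coded(r) * r - L_uncoded(r)) < 1e-12

assert L_coded(1) == L_uncoded(1)   # the two schemes coincide at r = 1
assert L_coded(5) < L_uncoded(5)    # e.g. 0.1375 vs 0.6875 at r = 5
```

At $r=K$ both loads vanish, since every node can then compute all intermediate values locally.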
\n\\begin{table*}[!ht]\n\\caption{\\small Coding techniques to mitigate the straggler effects.} \n\\label{tab:stragglers}\n\\centering\n\\begin{tabular}{|m{2.0cm} |>{\\centering\\arraybackslash}m{1.5cm} |>{\\centering\\arraybackslash}m{3.3cm} |m{8cm}|}\n\\hline\n\\rowcolor{mygray}\n\\multicolumn{1}{|c|}{\\textbf{Problems}} & \\textbf{Ref.} & \\textbf{Coding Schemes} & \\multicolumn{1}{c|}{\\textbf{Key Ideas}} \\\\ \\hline\n\\multirow{8}{=}{Matrix-Vector} & & MDS Codes & Reduce the computation latency as the master node is able to recover the final result without waiting for the slowest processing node \\\\\n\\cline{2-4}\n & & LT Codes & Exploit the rateless property to generate unlimited number of encoded symbol from a finite set of source symbols\\\\\n\\cline{2-4}\n & & Short-Dot Codes & Reduce the length of dot-products computed at the processing nodes by introducing sparsity to the encoded matrices\\\\\n\\cline{2-4}\n & & s-diagonal codes & Exploit the diagonal structure of the matrices to achieve both optimal recovery threshold and optimal computation load\\\\\n\\hline\n\\multirow{16}{=}{Matrix-Matrix} & & Product Codes & Instead of encoding the matrices along one dimension as in the MDS-coded schemes, the matrices are encoded with MDS codes along both dimensions, i.e. 
row and column \\
\cline{2-4}
 & & Polynomial Codes & - Design the algebraic structure of the encoded matrices such that the MDS structure is found in both the encoded matrices and the intermediate computations\\
 & & & - Reconstruct the final results by solving the polynomial interpolation problem\\
\cline{2-4}
 & & MatDot Codes & Achieve lower recovery threshold than Polynomial Codes at the expense of higher communication costs by computing only the relevant cross-products\\
\cline{2-4}
 & & PolyDot Codes & Characterize the tradeoff between recovery threshold and communication costs where Polynomial Codes and MatDot Codes are the two extreme ends on this tradeoff curve\\
\cline{2-4}
 & & Sparse Codes & Exploit sparsity in both input and output matrices to reduce computation load, while achieving near-optimal threshold\\
\hline
\multirow{15}{=}{Gradient Descent} & & Fractional Repetition Coding & - Workers are divided into multiple groups and data is divided among the workers in each group \\
 & & & - Each partition of data is processed by more than one worker\\
\cline{2-4}
 & & Cyclic Repetition Coding & Data is allocated based on a cyclic assignment strategy\\
\cline{2-4}
 & & Cyclic MDS Codes & The entries to the columns of the encoding matrix are cyclic shifts of the entries to the first column \\
\cline{2-4}
 & & Reed-Solomon Codes & Use a balanced mask matrix and choose appropriate codewords from the RS codes to construct the encoding matrix \\
\cline{2-4}
 & & Batch Coupons Collector & - Divide the data into multiple batches which are allocated randomly to the workers for computations\\
 & & & - Communication costs are significantly reduced as there is no need for communication between the workers and for feedback from the master node to the workers\\
\cline{2-4}
 & & Polynomially Coded Regression & Encode the data batches directly instead of the computed intermediate 
results\\\\\n\\hline\nConvolution & & Coded Convolution & Split both vectors into multiple parts of specified length and encodes one of the vectors with MDS codes \\\\\n\\hline\nFourier Transform & & Coded Fourier Transform & Leverage on recursive structure and the linearity of the discrete Fourier Transform operations\\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\linewidth]{codedcomputation.pdf}\n\\caption{\\small Illustration of coded computation with 3 workers. The master node is able to recover the final result upon receiving the computed results from any 2 workers.}\n\\label{fig:codedcomputation}\n\\end{figure}", "id": "e9e65886-933d-4d54-aa7f-a7c03b8cc0f9", "level": "subsection", "origin_cites_number": 5, "parent_id": "9ba1f087-de1d-45e2-9661-620859b12846", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Coded Distributed Computing (CDC) Schemes" ], [ "subsection", "CDC to Minimize Communication Load" ] ], "subsections": [], "title": "CDC to Minimize Communication Load" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec:codingtech}\nApart from reducing communication load in the Shuffle phase of the MapReduce framework, coding techniques can also be used to alleviate the straggler effects. Since matrix multiplication is one of the most basic linear operations used in distributed computing systems, a coded computation framework is proposed in to minimize computation latency of distributed matrix multiplication tasks. The coded computation framework uses erasure codes to generate redundant intermediate computations. In particular, \nthe master node encodes the equal-sized data blocks, i.e., submatrices and distributes them to the workers to compute the local functions. Upon completion, the workers transmit the computed results to the master node. 
The master node can recover the final result by using the decoding functions once the local computations from any of the decodable sets are completed. As seen in Fig.~\ref{fig:codedcomputation}, the master node can recover the final result upon receiving the computed results from any 2 workers, instead of all 3 workers. As such, the total computation time is not determined by the slowest straggler, but by the time when the master node receives computed results from some decodable set of indices. In this work , the authors explore the effectiveness of encoding the submatrices by using maximum distance separable (MDS) codes to mitigate the effects of stragglers.
Considering $K$ workers and a shifted-exponential distribution for the job execution time of the distributed algorithm, the simulation results show that the optimal repetition-coded distributed algorithm achieves a lower average job execution time when the straggling parameter is smaller than one, i.e., $\mu<1$, but is still slower than the optimal MDS-coded distributed algorithm by a factor of $\Theta (\log K)$. However, the storage cost of the coded distributed algorithm is higher than that of the uncoded distributed algorithm as more data is required to be stored at the workers' sites for the coded distributed algorithm. The proposed algorithm is tested on an Amazon EC2 cluster and is compared against various parallel matrix multiplication algorithms, e.g., block matrix multiplication, column-partition matrix multiplication and row-partition matrix multiplication. The simulation results show that the proposed algorithm in performs better, with the coded matrix multiplication achieving $40\%$ and $39.5\%$ reductions in average job execution time on clusters of m1-small and c1-medium instances with 10 workers each, respectively, as compared to the best of the three uncoded distributed algorithms. 
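As a toy instance of MDS-coded computation, the following Python sketch mirrors the three-worker example of Fig.~\ref{fig:codedcomputation}: a matrix $A$ is split into two halves plus one parity block, and $A\mathbf{x}$ is recovered from any two workers. The matrices and the simple sum parity are illustrative choices, not those of the cited work.

```python
def matvec(M, x):
    """Plain matrix-vector product over nested lists."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

# Split A into two halves and add one parity block A1 + A2 -- a (3, 2) MDS
# code over three workers (illustrative data, not from the cited experiments).
A1 = [[1, 2], [3, 4]]
A2 = [[5, 6], [7, 8]]
parity = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(A1, A2)]
x = [1, -1]

results = {0: matvec(A1, x), 1: matvec(A2, x), 2: matvec(parity, x)}
full = matvec(A1, x) + matvec(A2, x)   # the desired A.x (list concatenation)

def recover(i, j):
    """Decode A.x from the results of any two of the three workers."""
    got = {w: results[w] for w in (i, j)}
    if 0 in got and 1 in got:
        return got[0] + got[1]
    present = 0 if 0 in got else 1  # the systematic half that was received
    other = [p - y for p, y in zip(got[2], got[present])]
    return got[present] + other if present == 0 else other + got[present]

for pair in [(0, 1), (0, 2), (1, 2)]:
    assert recover(*pair) == full   # any 2 of 3 workers suffice
```

Whichever worker straggles, the other two form a decodable set, so the job completes without waiting for the slowest node.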
\nAlthough the MDS codes proposed in is able to mitigate the straggler effects, it cannot be generalized to all types of computation tasks. In order to mitigate the straggler effects of different distributed computation tasks, the coding techniques can be designed by exploiting the algebraic structures of the specific operations. An important performance metric that is introduced in the proposed CDC schemes is the recovery threshold, which refers to the worst-case required number of workers the master needs to wait to recover the final result for job completion . The smaller the recovery threshold, the shorter the computation latency. The objective is to reduce the recovery threshold so that the final result can be recovered by waiting for a smaller number of workers, thus contributing to a reduction in computation latency. Here, we discuss the coding techniques for various types of computation tasks, namely (i) matrix-vector multiplications, (ii) matrix-matrix multiplications, (iii) gradient descent, (iv) convolution and Fourier transform. Table~\\ref{tab:stragglers} summarizes the coding techniques designed for different distributed computation tasks.", "id": "9f1f1f0b-c93c-42b6-815f-1a27a54d04ee", "level": "subsection", "origin_cites_number": 2, "parent_id": "9ba1f087-de1d-45e2-9661-620859b12846", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Coded Distributed Computing (CDC) Schemes" ], [ "subsection", "CDC to Mitigate the Straggler Effects" ] ], "subsections": [ "8d5bcfc5-309d-4be0-81a5-8bdc9010e406", "795c9420-d6c1-4f57-b9b5-1d340af6f0f3", "c272fc00-2e47-4ea2-92bd-9be23a93633e", "bc8c4799-cb61-4844-9fd9-6fd743d773dc" ], "title": "CDC to Mitigate the Straggler Effects" }, { "cite_extract_rate": 0, "cites": [], "content": "Distributed matrix-vector multiplications are the building blocks of linear transformation computations which are an important step in machine learning and signal processing applications. 
In particular, the computation of linear transformations on high-dimensional vectors is required for popular dimensionality reduction techniques such as Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) .
Instead of using the MDS codes that are proposed in , the authors in propose the use of Luby Transform (LT) codes to mitigate the straggler effects in distributed matrix-vector multiplication problems. Different from the works in and which use LT codes in fixed-rate settings, the rateless property of the LT codes can be exploited to generate an unlimited number of encoded symbols from a finite set of source symbols. There are several advantages of using rateless codes: (i) near-ideal load balancing, (ii) negligible redundant computation, (iii) maximum straggler tolerance, and (iv) low decoding complexity. To further reduce the latency for practical implementations, blockwise communication can be used to transmit the submatrix-vector products. Instead of transmitting each encoded row-vector product separately to the master node, the workers are allowed to transmit the computed results in blocks where each block comprises a few row-vector products, reducing the number of communication rounds needed and hence minimizing the time needed to complete the computation tasks. 
The authors in propose Short-Dot codes to perform the computation of linear transforms reliably and efficiently in the presence of straggling nodes. Specifically, the processing nodes compute shorter dot products by imposing sparsity on the encoded submatrices. However, there is a tradeoff between the recovery threshold and the length of the dot-products, where the master node needs to wait for computed results from more processing nodes when the length of the dot-products is shorter. 
The experimental results on the classification of hand-written digits from the MNIST dataset show that the Short-Dot codes achieve a 32\% faster expected computation time than the MDS codes .
Although the Short-Dot codes can offer a lower recovery threshold, the greater length of the dot-products means a greater computation load for the processing nodes. With this concern, s-diagonal codes are proposed to achieve both the optimal recovery threshold and the optimal computation load by exploiting the diagonal structure of the encoding matrix. The computation time can be further reduced by using a low-complexity hybrid decoding algorithm which combines the peeling decoding algorithm and Gaussian elimination techniques.
For large-scale distributed matrix-matrix multiplications, the coded computation schemes based on MDS codes are no longer suitable as the encoding and decoding processes scale with the system size. Besides, the size of one of the matrices is assumed to be small enough to allow individual workers to perform the computations , restricting the implementation of MDS codes in large-scale multiplications. Hence, for large-scale problems, coded schemes not only need to achieve low computation time, but also require efficient encoding and decoding algorithms in order to minimize the overall job execution time. 
To deal with the straggler effects in high-dimensional distributed matrix multiplications, four types of coded computation schemes are proposed:
\begin{itemize}
\item \emph{Product codes: }Product codes are implemented by building a larger code upon smaller MDS codes. Instead of encoding computations along only one dimension as in MDS-coded schemes, the product codes encode computations along both dimensions, i.e., rows and columns of the matrices. When the number of backup workers increases sub-linearly with the number of subtasks, the product-coded schemes outperform the MDS-coded schemes in terms of average computation time and decoding time. In the linear regime, the one-dimensional decoding of the MDS-coded schemes is sufficient to recover the missing entries of the computation results. By allowing each row and column of the MDS constituent codes to have different code rates , the average computation time can be further reduced, contributing to a decrease in the overall job execution time. Product codes can also be used to solve higher-dimensional linear operations such as tensor operations by exploiting the tensor-structured encoding matrix . To reduce the decoding time of the product codes, efficient decoding algorithms such as Reed-Solomon codes and LDPC codes can be explored.
\item \emph{Polynomial codes: }The key advantage of the polynomial codes in solving large-scale matrix multiplication problems is that they provide a lower bound to the optimal recovery threshold. For polynomial codes, the recovery threshold does not scale with the number of workers involved, whereas for MDS codes and the product codes, the recovery thresholds scale proportionally with the number of workers. By taking advantage of the algebraic structure of the polynomial codes, the master node can recover the final result by using polynomial interpolation algorithms, e.g., the Reed-Solomon codes, to decode the computation results from the workers. 
In addition to the optimal recovery threshold, the polynomial-coded schemes achieve minimum possible computation latency and communication load for distributed matrix multiplications. However, as the number of workers increases, the encoding and decoding costs are much higher than that of the product codes. Furthermore, by implementing Reed-Solomon codes, there is a limit to the number of workers that can be handled, which is not useful for practical implementations where the systems may involve up to thousands of nodes. As an extension to the polynomial codes proposed in , the entangled polynomial code that is proposed in achieves a lower recovery threshold which is only half of that achieved by the PolyDot codes , to be discussed later. Different from the polynomial codes which only allow column-wise partitioning of the matrices, the entangled polynomial codes allow arbitrary partitioning of the input matrices and evaluate only a subspace of bilinear functions such that unnecessary multiplications are avoided. The issue of numerical stability has also received attention to ensure the scalability of the polynomial-coded schemes . \n\\item \\emph{PolyDot codes : }PolyDot codes characterize the tradeoff between the recovery threshold and communication costs where the polynomial codes and the MatDot codes are special instances of this coding framework by considering two extreme ends of this tradeoff: minimizing either recovery threshold or communication costs. In particular, the MatDot codes achieve lower recovery threshold than the polynomial codes at the expense of much higher communication costs. This is achieved by only computing the relevant cross-products of the submatrices. Building on the work of PolyDot codes , the Generalized PolyDot codes are used to compute matrix-vector multiplications and achieve the same recovery threshold as the entangled polynomial codes . 
More importantly, the Generalized PolyDot codes can be extended for the training of large deep neural networks (DNNs), which consist of multiple non-linear layers. 
\item \emph{Sparse codes : }Although the polynomial codes achieve the optimal recovery threshold, the computation loads of the workers increase due to the increased density of the input matrix, resulting in an increase in the overall job execution time which is not desirable. By exploiting sparsity, i.e., the number of zero entries of the encoded matrix, not only is the recovery threshold kept low, but the computation loads of the workers also decrease while maintaining a nearly linear decoding time , jointly contributing to the shorter overall job execution time. The basic idea of the algorithm proposed in is to allow the master node to find the linear combination of row vectors such that only the particular relevant sub-blocks are recovered. Then, the entire block of the matrix can be recovered by aggregating the partial recovery of sub-blocks. Simulation results show that the sparse codes require the shortest overall time to complete the job as compared to other computation schemes, e.g., uncoded scheme, product codes , polynomial codes , sparse MDS codes and LT codes . 
A further analysis of the different components of subtasks, i.e., communication time, computation time and decoding time, shows that the sparse codes require a much shorter time to decode, thus contributing significantly to the shorter overall job execution time.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{frc.pdf}
\caption{\small Fractional repetition coding with 6 workers and 2 stragglers.}
\label{fig:frc}
\end{figure}
Apart from matrices, coding techniques can be applied to recover batch gradients of any loss function of the distributed gradient descent tasks. In , the authors have introduced the idea of gradient coding which is useful to mitigate stragglers that may slow down the computation tasks. Two gradient coding schemes are proposed, namely (i) fractional repetition coding (FRC) and (ii) cyclic repetition coding. In the FRC scheme, the workers are first divided into several groups. In each group, the data is equally divided and allocated to the workers. As a result, all groups of workers are replicas of each other as shown in Fig.~\ref{fig:frc}. Upon completing their subtasks, the workers in each group transmit the sum of partial gradients to the master node. In the cyclic repetition coding scheme, the data partitions are allocated to the workers based on a cyclic assignment strategy. 
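The FRC scheme just described admits a compact sketch. The setup below is a toy least-squares problem with assumed parameters; with $s+1$ replica groups, any $s$ stragglers leave at least one group intact, from which the master recovers the exact full gradient:

```python
import numpy as np

# Hypothetical FRC sketch: s + 1 replica groups of workers, each group
# partitioning the whole dataset; toy least-squares gradient recovery.
n_workers, s = 6, 2
groups = np.split(np.arange(n_workers), s + 1)   # 3 replica groups of 2

rng = np.random.default_rng(2)
X = rng.standard_normal((12, 3))                 # assumed toy dataset
y = rng.standard_normal(12)
w = rng.standard_normal(3)

def partial_gradient(rows):
    Xp, yp = X[rows], y[rows]
    return Xp.T @ (Xp @ w - yp)                  # gradient over a data slice

# Within each group, the dataset is split equally among its workers; each
# worker transmits the sum of partial gradients over its slice.
assignment = {}
for group in groups:
    slices = np.split(np.arange(len(y)), len(group))
    for worker, rows in zip(group, slices):
        assignment[int(worker)] = rows
messages = {wk: partial_gradient(rows) for wk, rows in assignment.items()}

# Workers 0 and 3 straggle; the group {4, 5} is still complete, and summing
# its two messages reproduces the exact full gradient.
stragglers = {0, 3}
intact = next(g for g in groups if not set(g.tolist()) & stragglers)
recovered = sum(messages[int(wk)] for wk in intact)

assert np.allclose(recovered, X.T @ (X @ w - y))
```

Since each of the $s+1$ groups holds a full replica of the data, $s$ stragglers can disable at most $s$ groups, so one complete group always survives.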
The partial gradients computed by each worker are encoded by linearly combining them, and the resulting single coded message is transmitted to the master node. By applying the gradient coding schemes, the distributed computation tasks do not suffer from delays incurred by the straggling nodes as the master node is able to recover the final result with the results from the non-straggling nodes. Other coding theories such as the cyclic MDS codes and the Reed-Solomon codes can be used to compute exact gradients of the distributed gradient descent problems. 
To efficiently mitigate the straggler effects in distributed gradient descent algorithms, the Batch Coupon's Collector (BCC) scheme is proposed in . In BCC, there are two important steps, namely (i) batching and (ii) coupon collecting. In batching, the training set is partitioned into batches which are distributed to the workers randomly whereas in coupon collecting, the master node collects the computed results from the workers until the results from all batches of data are received. This decentralized BCC scheme does not require any communication between the worker nodes and each worker is allocated data batches independently of other workers. As a result, it is easy to implement the BCC scheme in practical scenarios. Another important advantage of the BCC scheme is its universality. Different from other coding schemes that are designed to guarantee their robustness to a fixed number of straggling nodes, the BCC scheme does not require any prior knowledge about the straggling nodes which is more practical as it is difficult to estimate the number of straggling nodes present in the clusters. Furthermore, the BCC scheme can be easily extended to solve gradient descent problems over heterogeneous clusters where the workers have different computational and communication capabilities. 
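The coupon-collecting step lends itself to a toy simulation (all parameters below are assumed): each finishing worker returns the result of some batch, and the master simply keeps counting results until every batch has been seen at least once.

```python
import random

# Toy simulation of the coupon-collecting step of the BCC idea: batches are
# held by workers uniformly at random; the master stops once all are covered.
random.seed(5)
num_batches = 4                              # assumed number of data batches

covered, results_collected = set(), 0
while covered != set(range(num_batches)):
    covered.add(random.randrange(num_batches))   # batch of the next finisher
    results_collected += 1

assert covered == set(range(num_batches))
assert results_collected >= num_batches      # at least one result per batch
```

No coordination between workers is needed; the master's stopping rule depends only on which batches have been covered so far, which reflects the decentralized, prior-knowledge-free character of the scheme.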
The simulation results show that the BCC scheme speeds up the overall job execution time by up to 85.4\\% and 69.9\\% over the uncoded scheme and the cyclic repetition coding scheme respectively.\nThe gradient coding schemes proposed in illustrate the tradeoff between computation load and straggler tolerance. However, in non-linear learning tasks, communication costs dominate the overall job execution time as the number of iterations increases. As such, to generalize the coding schemes in , the authors in incorporate communication costs into their framework and present a fundamental tradeoff between the three parameters, namely computation load, straggler tolerance and communication costs. In particular, for a fixed computation load, the communication costs can be reduced by waiting for more workers.\nInstead of encoding the partial gradients computed based on uncoded data as seen in the studies of , coding techniques can be applied directly to the data batches to reduce the straggler effects and the overall job execution time. Considering the gradient computations for least-square regression problems, the polynomially coded regression (PCR) scheme exploits the underlying algebraic property to generate coded submatrices such that they are linear combinations of the uncoded input matrices. The master node can evaluate the final gradient by interpolating the polynomials from the computed partial gradients by the workers. 
Compared to the gradient coding schemes proposed in , the simulation results show that the PCR scheme achieves a much lower recovery threshold and hence shorter computation and communication time, resulting in a shorter time needed for overall job execution.
The polynomial codes proposed in both the studies of and can be extended to the applications of distributed coded convolution based on the coded convolution scheme proposed in . The work in explores the use of MDS codes to encode the pre-specified vectors such that fast convolution is performed under a deadline constraint. 
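Because convolution is linear in each argument, the same MDS-style encoding used for matrix products carries over directly. The following toy sketch (assumed parameters, with a real Vandermonde encoding standing in for a generic MDS code) recovers a full convolution from any $k$ of $n$ workers:

```python
import numpy as np

# Hypothetical sketch of MDS-coded convolution: one input is split into k
# chunks, the chunks are encoded across n workers, and any k finished
# workers suffice to decode every chunk convolution and assemble the result.
n, k = 5, 3                                  # 5 workers, any 3 suffice
rng = np.random.default_rng(4)
a = rng.standard_normal(6)                   # length divisible by k
b = rng.standard_normal(4)
chunks = np.split(a, k)                      # 3 chunks of length 2

V = np.vander(np.arange(1.0, n + 1), k, increasing=True)
encoded = [sum(V[i, j] * chunks[j] for j in range(k)) for i in range(n)]

# Workers convolve their encoded chunk with b; workers 1 and 3 straggle.
finished = [0, 2, 4]
results = np.stack([np.convolve(encoded[i], b) for i in finished])

# Convolution is linear in its first argument, so decoding is a linear solve
# whose rows are the uncoded chunk convolutions chunks[j] * b.
decoded = np.linalg.solve(V[finished, :], results)

full = np.zeros(len(a) + len(b) - 1)
for j in range(k):
    off = j * (len(a) // k)
    full[off:off + decoded[j].size] += decoded[j]    # shift-and-add chunks

assert np.allclose(full, np.convolve(a, b))
```

The same linearity argument applies to the discrete Fourier transform, which is why MDS-style coding extends naturally to distributed DFT computations.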
In addition to that, MDS codes can be used to mitigate the straggler effects in widely-implemented distributed discrete Fourier Transform operations which are used in many applications such as machine learning algorithms and signal processing frameworks.", "id": "bc8c4799-cb61-4844-9fd9-6fd743d773dc", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "9f1f1f0b-c93c-42b6-815f-1a27a54d04ee", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Coded Distributed Computing (CDC) Schemes" ], [ "subsection", "CDC to Mitigate the Straggler Effects" ], [ "subsubsection", "Convolution and Fourier transform" ] ], "subsections": [], "title": "Convolution and Fourier transform" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec:unified} \nGiven the aforementioned coding schemes of and , we can observe that coding techniques are used to speed up distributed computing applications with two different approaches. On one hand, the authors in propose the ``Minimum Bandwidth Code\" that minimizes the communication load by repeating the computation tasks in the Map phase to introduce multicasting opportunities in the Shuffle phase. On the other hand, the authors in propose the ``Minimum Latency Code\" which minimizes the computation latency by encoding the Map tasks such that the master node is able to recover the final result without waiting for the straggling processing nodes.\nInspired by the aforementioned approaches, a unified coded scheme that characterizes the tradeoff between computation latency and communication load is proposed in given the computation load. The coding schemes in and are considered to be the extreme cases of this unified scheme. The unified coded scheme exploits the advantages of the two coding approaches by applying the MDS codes to the Map tasks and replicating the encoded Map tasks. 
Specifically, the unified coded scheme first encodes the rows of the matrix, following which the coded rows of the matrix are replicated and stored at the processing nodes in a specific pattern. Then, the processing nodes perform the computation until a certain number of the fastest processing nodes complete their tasks. To reduce communication load in the Shuffle phase, coded multicasting is used to exchange the intermediate results that are needed to recover the final results in the Reduce phase. An improvement to the latency-communication tradeoff presented in the unified coded scheme is proposed in by leveraging on the redundancy created by the repetition code. By increasing the redundancy rate of the repetition code, both the communication load in the Shuffle phase and the computation latency in the Map phase can be simultaneously improved, thus contributing to an improved latency-communication tradeoff.
The aforementioned initial works of coding schemes have shown their effectiveness in minimizing communication costs and alleviating the straggler effects. In the following sections, we review related works that leverage on coding techniques to address the implementation challenges of the distributed computing systems.
\label{sec:comms}
With more computing nodes that are equipped with greater capabilities to collect and process data, massive amounts of data are generated for the execution of user-defined computation tasks. 
Since the computations are scaled out across a large number of distributed computing nodes, a large number of intermediate results need to be exchanged between the computing nodes in the Shuffle phase of the MapReduce framework to complete the computation tasks, resulting in significant data movement. Oftentimes, for the training of a model with distributed learning algorithms, data is shuffled at each iteration, contributing to high communication costs, which is a bottleneck of the distributed computing systems. As a result, there is a need to reduce the communication costs in order to speed up the distributed computation tasks. In this section, we present four approaches to reduce communication costs: 
\begin{itemize}
\item \emph{File Allocation: }In this approach, the studies aim to design an optimal file allocation strategy that considers the heterogeneities of the processing nodes' capabilities in the systems, maximizing data locality or reducing subpacketization level, which refers to the number of subfiles generated , . These different approaches work towards reducing the communication load in the distributed computing systems.
\item \emph{Coded Shuffling Design: }Since the data shuffling phase incurs a large proportion of the communication costs, data is encoded before it is transmitted so that the communication load can be minimized. Apart from combining coding with different techniques, e.g., compression and randomization techniques to improve the performance of the shuffling phase, the coding techniques are also designed to solve different computation problems, e.g., distributed graph computation problems , and multistage MapReduce computations .
\item \emph{Consideration of Underlying Network Architecture: }Generally, the communications between the workers as well as between the workers and the master node are affected by the way that they are connected to each other. 
For example, server-rack architecture (Fig.~\ref{fig:server}) is one of the most commonly used methods to connect the various servers. By taking the underlying architecture into consideration, the effectiveness of the coding implementation in reducing communication costs can be greatly improved.
\item \emph{Function Allocation: }Similar to the allocation of files, the studies apply this approach to heterogeneous systems. In addition, some studies consider a cascaded system where each Reduce function is allowed to be computed at multiple processing nodes. In some cases where the data is randomly stored at the processing nodes, e.g., when the processing nodes are constantly moving, an optimal function allocation strategy is useful in reducing the number of broadcast transmissions and thus minimizing the communication load. 
\end{itemize}
\label{subsec:files}
The design of file allocation at each processing node is one of the major steps for the implementation of a CDC scheme. 
There are a few approaches to an optimal file allocation strategy: (i) considering heterogeneous systems, (ii) maximizing data locality, and (iii) reducing subpacketization level.", "id": "a30f2979-178f-4897-ab80-a68948bc9c5f", "level": "subsection", "origin_cites_number": 0, "parent_id": "f06fb9ca-5252-413b-b98f-08903f106a9c", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Minimization of Communication Load" ], [ "subsection", "File Allocation" ] ], "subsections": [ "97abadeb-ed93-480e-acfa-dc9e87a6bd77", "c8d7498f-a451-4053-980e-15ca23008379", "73c478e0-3e12-49d2-8043-871c00017ef4" ], "title": "File Allocation" }, { "cite_extract_rate": 0, "cites": [], "content": "}As discussed in Section~\\ref{subsec:cdccommunication}, although the CDC scheme proposed in carefully allocates the subfiles to the processing nodes in order to introduce coded multitasking opportunities, it considers a homogeneous system which may not be useful for practical implementation. In order to appropriately allocate the files to the distributed computing nodes, heterogeneous systems where the processing nodes have diverse storage, computational and communication capabilities, should be considered in determining the optimal file allocation strategy and coding scheme that minimize the communication load . \nBy leveraging on the extra storage capacity of the workers, the communication costs between the master node and the workers in the process of data shuffling are minimized. The reason is that if more data can be stored at the workers, fewer communication rounds are needed for the workers to receive shuffled data from the master node. In the extreme case, if the worker can store the entire dataset, there is no communication needed for the worker to receive shuffled data in any iteration. As a result, there is a tradeoff between the storage capacity of the workers and the communication overhead in the data shuffling process. 
In the data shuffling process, there are two phases, namely data delivery and storage update. Instead of a random storage placement , a deterministic and systematic storage update strategy creates more coding opportunities in transmitting data to the workers at each iteration, reducing the communication load.", "id": "97abadeb-ed93-480e-acfa-dc9e87a6bd77", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "a30f2979-178f-4897-ab80-a68948bc9c5f", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Minimization of Communication Load" ], [ "subsection", "File Allocation" ], [ "subsubsection", "Considering Heterogeneous Systems" ] ], "subsections": [], "title": "Considering Heterogeneous Systems" }, { "cite_extract_rate": 0, "cites": [], "content": "}One of the important factors in determining the optimal file allocation strategy is data locality. Data locality is defined as the percentage of local tasks over the total number of Map tasks, i.e., the fraction of Map tasks that are allocated to the processing nodes having the required data for computations such that no communication is needed to obtain the data. High data locality means that less communication bandwidth is needed for the transmission of subfiles, which is required if the processing node does not have the needed subfiles for the execution of the Map tasks. 
In order to maximize data locality, the problem of allocation of Map tasks to different processing nodes can be tackled by solving a constrained integer optimization problem .", "id": "c8d7498f-a451-4053-980e-15ca23008379", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "a30f2979-178f-4897-ab80-a68948bc9c5f", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Minimization of Communication Load" ], [ "subsection", "File Allocation" ], [ "subsubsection", "Maximizing Data Locality" ] ], "subsections": [], "title": "Maximizing Data Locality" }, { "cite_extract_rate": 0, "cites": [], "content": "} \\label{subsubsec:packet}As the number of processing nodes in the network increases, the input file needs to be split into a large number of subfiles. Specifically, the number of subfiles generated increases exponentially in the number of processing nodes . However, there is a maximum allowable subpacketization level, i.e., number of subfiles, where the dataset can only be partitioned into a limited number of packets, beyond which the communication load increases due to more transmissions required and the unevenly-sized intermediate results. Hence, there are several reasons for the reduction of subpacketization level: (i) to reduce the communication load in the Shuffle phase even when there is a large number of processing nodes, (ii) to reduce the packet overheads which increases with the number of broadcast transmissions, and (iii) to reduce the number of unevenly-mapped outputs which require zero padding. To keep the subpacketization level below the maximum allowable level, Group-based Coded MapReduce allocates the dataset based on the random groupings of the processing nodes and allows the processing nodes to cooperate in the transmission of messages. 
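To get a feel for how fast the subpacketization level grows, assume (as a rough, illustrative model of the canonical CDC scheme) that the dataset is split into one batch per size-$r$ subset of the $K$ nodes, giving $\binom{K}{r}$ subfiles:

```python
from math import comb

# Illustrative (assumed) subpacketization model: one subfile per size-r
# subset of the K processing nodes, i.e. binomial(K, r) subfiles in total.
def subpacketization(K, r):
    return comb(K, r)

levels = {K: subpacketization(K, K // 2) for K in (10, 20, 40)}
# K=10 -> 252 subfiles, K=20 -> 184756, K=40 -> over 10^11
assert levels[10] == 252 and levels[20] == 184756
assert levels[40] > 10**11
```

Even at moderate network sizes the required number of subfiles becomes impractical, which is precisely the motivation for the subpacketization-reduction schemes discussed next.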
\nTo avoid splitting the input file too finely, the authors in use an appropriate resolvable design , which is based on linear error correcting codes, to determine the number of subfiles, the allocation of the subfiles to the processing nodes and the construction of the coded messages in the Shuffle phase. Building on this initial work, the authors in use the resolvable design based scheme to solve the limitation of the compressed CDC scheme that uses both compression and coding techniques. Although the compressed CDC scheme helps to reduce communication load, it requires large number of jobs to be processed simultaneously. Hence, the resolvable design based scheme is used to reduce the number of subfiles generated. Specifically, for each job in the compressed CDC scheme, the single-parity code is used to split the input file and the resolvable design based scheme is used to allocate the subfiles to the processing nodes. By aggregating the underlying functions and applying the resolvable design based scheme, multiple jobs can be processed in parallel while minimizing the execution time in the Shuffle phase, contributing to the reduction of the overall job execution time. Although the number of subfiles or number of jobs generated still increases exponentially with some of the system parameters, e.g., number of computing nodes and number of output functions, the exponent is much smaller when the resolvable design based scheme is implemented.\nIn addition to the exponential increase in the number of subfiles required, the number of output functions required also increases exponentially as the number of processing nodes in the network increases. There are other methods to reduce the number of subfiles and the number of output functions such as the hypercube computing scheme and the placement delivery array (PDA) . However, most of the CDC schemes consider non-cascaded systems, i.e., each Reduce function is computed at exactly one processing node . 
In , a cascaded system is considered, but the number of processing nodes that perform each Reduce function is restricted to only two values. By applying the concept of PDA to the distributed computation framework, the performance of the proposed computation scheme is evaluated for different numbers of processing nodes that compute the Reduce functions . Although the implementation of these various methods reduces the number of subfiles generated, it may come at the expense of higher communication load , .
In the design of coded shuffling algorithms, we have classified the approaches into three different categories: (i) compression and randomization, (ii) coding across multiple iterations and (iii) problem-specific coding approaches.
To further reduce the communication costs of the distributed computation tasks, the design of the coded data shuffling scheme can incorporate different techniques to create more coded multicasting opportunities. 
Besides, the coded shuffling schemes are designed to minimize communication costs for different distributed computation problems such as iterative algorithms, graph computations and multistage dataflow problems. \nThe work in generates replications of the computation tasks in the Map phase in order to reduce communication load in the Shuffle phase by coding and multicasting the intermediate results. To further reduce the communication load, compression and randomization techniques can be applied to the design of the coded shuffling algorithms. \n\\begin{itemize}\n\\item \\emph{Compression Techniques: }The compressed CDC computation scheme is proposed in by jointly using two techniques, i.e., compression and coding techniques. Each processing node first computes the allocated Map tasks and generates the intermediate results. By using the compression techniques, several intermediate results of a single computation task are compressed into a single pre-combined value. The communication bandwidth needed to transmit a single pre-combined value is much smaller than that of transmitting several uncombined intermediate values since the size of the pre-combined value equals the size of only one intermediate value. With the pre-combined values from different computation tasks, the processing node codes them for multicasting to other processing nodes simultaneously. There are two advantages to this compressed CDC scheme: (i) the communication load is reduced proportional to the storage capacity of each processing node, and (ii) the communication load does not scale linearly, i.e., slower than linear, with the size of the dataset.\nIn some cases, e.g., parallel stochastic gradient descent (SGD) algorithms, instead of transmitting intermediate results, computed gradient updates are exchanged among the workers. In such cases, Quantized SGD , a compression technique, can be used to reduce communication bandwidth used during the gradient updates between the processing nodes. 
In each iteration, the processing nodes are allowed to adjust the number of transmitted bits by quantizing each component to a discrete set of values and encoding these quantized gradients.\n\\item \\emph{Randomization Techniques: }Instead of introducing coded multicasting opportunities to reduce the communication load in the Shuffle phase, there are other coding techniques that can be applied to increase efficiency of data shuffling. One of the ways is to perform a semi-random data shuffling and coding scheme based on pliable index coding which introduces randomization in the data shuffling process . There are two important modifications made to the conventional pliable index coding scheme , which is used to minimize the number of broadcast transmissions while satisfying users' demands. Firstly, the correlation of messages between workers is reduced. In order to do so, a message should only be transmitted to a fraction of the workers so that the same message is not held by all workers. As such, the pliable index coding problem is formulated with an objective where the goal is to minimize the number of broadcast transmissions under the constraint of a maximum number of workers that can receive the same message. Secondly, the correlation of messages between iterations is reduced. The reduction of correlation of messages prevents the workers from performing computations on the same dataset after shuffling, which may be redundant. A two-layer hierarchical structure is proposed for data shuffling. In the upper layer, the messages are partitioned into multiple groups of which each group of messages is transmitted to a fraction of workers. In the lower layer, each group of messages and the corresponding allocated workers are formulated as a constrained pliable index coding problem. 
Randomization occurs in two stages: (i) when the master node selects the messages in each group and transmits them to the workers, and (ii) when the workers discard old messages from their cache. Experimental results \nshow that the proposed pliable index coding requires only 12\\% of broadcast transmissions needed by an uncoded scheme, i.e., random shuffling with replacement scheme.\n\\end{itemize}", "id": "de2801c0-f5ea-4b6a-9559-1daeed4b4d16", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "08339afb-55ae-4aba-a167-25ccaafe6521", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Minimization of Communication Load" ], [ "subsection", "Coded Shuffling Design" ], [ "subsubsection", "Compression and randomization" ] ], "subsections": [], "title": "Compression and randomization" }, { "cite_extract_rate": 0, "cites": [], "content": "Most works on coded iterative algorithms focus on the optimization of a single computation iteration or the minimization of communication load in a single communication round . However, multiple rounds of communications are generally required to solve distributed iterative problems. In the studies of and , the results of multiple iterations are transmitted in a single round of communication by jointly coding across several iterations of the distributed computation task. By leveraging on the computation and storage redundancy of the workers, the number of communication rounds between the master node and the workers is greatly reduced, resulting in a reduction in the communication costs. 
However, the computation and storage costs may not be optimal as compared to uncoded computing schemes, e.g., and , which achieve near-optimal computation and storage costs.
Apart from the typical distributed computation problems, the MapReduce framework can also be used to solve distributed graph computation problems, . However, for graph computing systems, the computation at each vertex is a function of the graph structure which means that each computation only needs data from its neighbouring vertices. More specifically, the communication load in the Shuffle phase depends on the connectivity probabilities of vertices in the graph, in which each vertex is only allowed to communicate reliably with a subset of random vertices . As a result, the CDC scheme proposed in (which was previously discussed in Section~\ref{subsec:cdccommunication}) is not applicable to solving the graph computation problems. Looking into this, the authors in propose a coded scheme to solve the problem of random Erdös-Rényi graph while minimizing the communication load in the Shuffle phase. A similar inverse tradeoff curve between the computation load and average communication load is obtained by using coding techniques in solving distributed graph computations. 
Moreover, many distributed computing applications consist of multiple stages of MapReduce computations. 
The multistage data flow can be represented by a layered DAG in which the processing nodes, i.e., the vertices of a particular computation stage, are grouped into a single layer. Each vertex computes a user-defined function, i.e., a Map or Reduce computation, that transforms the given input files into intermediate results, whereas the edges represent the data flow between the processing nodes. By exploiting the redundancy of the computing nodes, coding techniques are applied to the processing nodes individually to minimize the communication costs. The proposed work in considers a uniform resource allocation strategy where the computation of each vertex is distributed across all processing nodes. However, the communication load can be further reduced by reducing the number of processing nodes that are used to compute each vertex. Given that fewer processing nodes are used to compute each vertex, the computation load performed by each processing node increases and the processing nodes have more local information, thus reducing the need for communication to obtain the required information. 
Therefore, a dynamic resource allocation strategy is needed to further minimize the communication load in the multistage MapReduce problems.\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\linewidth]{server.pdf}\n\\caption{\\small Server-rack architecture where multiple servers in each rack are connected via a Top of Rack switch and the Root switch connects multiple Top of the Rack switches.}\n\\label{fig:server}\n\\end{figure}", "id": "4b18764b-6d94-4230-8ec2-e099673e37eb", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "08339afb-55ae-4aba-a167-25ccaafe6521", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Minimization of Communication Load" ], [ "subsection", "Coded Shuffling Design" ], [ "subsubsection", "Problem-specific coding approaches" ] ], "subsections": [], "title": "Problem-specific coding approaches" }, { "cite_extract_rate": 0, "cites": [], "content": "Although various techniques can be used jointly with the coding techniques to increase the efficiency of data shuffling phase, it is important to consider the underlying network architecture, i.e., how the servers are connected to each other, in designing the coded shuffling algorithms. \nIn , Hybrid Coded MapReduce is proposed by considering the server-rack architecture (Fig.~\\ref{fig:server}) in the distributed computing systems. There are two types of communications in the Shuffle phase: (i) the cross-rack communication where the data is shuffled across different rack layers, and (ii) the intra-rack communication where the data is shuffled within the rack. In the first stage where cross-rack communication takes place, the Coded MapReduce algorithm (Fig.~\\ref{fig:coded}) is used to create multicasting opportunities for the transmission of messages. In the second stage where the intra-rack communication is performed, data is shuffled in a unicast manner where no coding technique is applied. 
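The layered-DAG view of a multistage job described above can be sketched in a few lines; the shards, vertex functions and two-layer structure below are illustrative assumptions, not taken from the cited work.

```python
from collections import Counter

# Toy layered DAG: layer 1 vertices each run a Map over one input shard,
# layer 2 has a single Reduce vertex; the edges carry intermediate counts.
shards = ["a b a", "b c"]

def map_vertex(shard):            # a user-defined Map computation
    return Counter(shard.split())

def reduce_vertex(partials):      # a user-defined Reduce computation
    total = Counter()
    for p in partials:
        total.update(p)
    return total

# Execute the DAG layer by layer (topological order).
intermediates = [map_vertex(s) for s in shards]   # layer 1
result = reduce_vertex(intermediates)             # layer 2
assert result == Counter({"a": 2, "b": 2, "c": 1})
```

In a coded multistage scheme, the edges between layers are where the coded (multicast) transmissions of the intermediate results take place.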
The simulation results show that the cross-rack communication costs incurred by the hybrid scheme are lower than those of both Coded MapReduce and the uncoded scheme, at the expense of higher intra-rack communication costs. Although the Coded MapReduce scheme still achieves the lowest total communication costs, the overall communication costs for the hybrid scheme can be further reduced by parallelizing the intra-rack operations, which would also provide a more accurate comparison between the different computation schemes.
The CDC computation scheme proposed in is useful for networks with processing nodes that are closely located to each other and connected via a common communication bus. However, in practical distributed computing networks, it is hard to implement a common-bus topology for the physically separated processing nodes. Hence, to reduce communication load for the distributed computation tasks, it is important to consider a practical data center network topology to reap the coding benefits of the CDC schemes. As such, in , the authors propose a CDC scheme based on a widely-used low-cost network topology which is the t-ary fat-tree topology . It has the characteristics of network symmetry and connections between any two processing nodes in the network. Given a practical network topology design, the proposed topological CDC scheme achieves the optimal max-link communication load over all links in the topology, and the optimal tradeoff between the max-link communication load and the computation load is characterized. 
Although the coded shuffling algorithm proposed in reduces the communication load of the data shuffling process, there are two main limitations: (i) it assumes a perfect broadcast channel between the master node and the workers, and (ii) the theoretical guarantee on the number of broadcast transmissions only holds when the number of data points approaches infinity. 
To overcome the limitations of the coded shuffling algorithm proposed in , UberShuffle fills the missing entries in the encoding tables by reallocating the data points between the workers. This reduces the number of transmitted encoded packets, resulting in a reduction of communication load. The performance of the UberShuffle algorithm is evaluated when applied to different problem settings. The experimental results show that in comparison to the coded shuffling algorithm in , the UberShuffle algorithm reduces the shuffling time by up to 47.2\% and 35.7\% when implemented with the distributed SGD algorithm for a low-rank matrix completion problem and the parallel SGD algorithm for a linear regression problem, respectively.
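As a point of reference for the schemes in this subsection, the well-known computation-communication tradeoff of the basic CDC scheme can be written down directly: with computation load $r$ on $K$ nodes, the coded shuffle load is $\frac{1}{r}(1-\frac{r}{K})$, an $r$-fold reduction over the uncoded load $1-\frac{r}{K}$. A small sketch:

```python
def uncoded_load(r: int, K: int) -> float:
    """Communication load of uncoded shuffling with computation load r."""
    return 1 - r / K

def coded_load(r: int, K: int) -> float:
    """Optimal CDC communication load: a factor-r reduction over uncoded."""
    return (1 - r / K) / r

K = 10
for r in range(1, K + 1):
    assert coded_load(r, K) <= uncoded_load(r, K)

# e.g., repeating each Map task r = 5 times on K = 10 nodes cuts the
# shuffle load from 0.5 (uncoded) to 0.1 (coded).
assert abs(coded_load(5, 10) - 0.1) < 1e-12
```

The rack-aware and topology-aware schemes above can be seen as adapting this idealized common-bus tradeoff to links with heterogeneous costs.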
This heterogeneous Reduce function allocation creates a symmetry among the multicast groups such that each processing node in a group requests the same number of intermediate outputs from the other processing nodes in the same group. The heterogeneous CDC system achieves a lower communication load than an equivalent homogeneous CDC system.
However, it is desirable for the Reduce functions to be computed at multiple processing nodes in some applications, e.g., iterative algorithms in the Spark model where the output of a MapReduce procedure acts as the input to the MapReduce procedure in the next iteration. Although the work in generalizes the CDC framework by allowing each Reduce function to be computed at more than one processing node, it only applies to homogeneous systems. Similar to , heterogeneities of the systems are also considered in . However, instead of a non-cascaded system, the authors propose a more general framework of cascaded CDC on heterogeneous systems. In other words, each Reduce function is allowed to be computed at multiple processing nodes. Since the processing nodes have different storage capacities, the number of subfiles stored at each processing node differs. The processing nodes with larger storage capacities are allocated more subfiles and thus compute more intermediate results. Instead of allocating the files and functions randomly as in the work in , this cascaded CDC scheme uses a hypercube approach to allocate the files and functions to the processing nodes. The simulation results show that for the same number of processing nodes in the network, the proposed cascaded CDC scheme achieves smaller communication load than the state-of-the-art schemes that consider homogeneous systems , . 
In the study of , the allocation of the functions to the processing nodes does not depend on their capabilities but on the data stored at the nodes, where the data placement is assumed to be random. 
This is particularly useful for applications in which the processing nodes are mobile and collect data on the move. If the probability that the processing nodes contain data is higher than a pre-defined threshold, it is possible to allocate Reduce functions such that the processing nodes do not need to exchange their intermediate results for the computations of Reduce functions, i.e., each processing node can compute the Reduce functions based on its locally stored data. This reduces the number of broadcast transmissions in the Shuffle phase, thus minimizing the communication load.
Although heterogeneities of the systems are taken into consideration in some of the works , , , they merely focus on either file allocation or function allocation. On one hand, the work in proposes an optimal file allocation strategy in consideration of the heterogeneous storage capabilities of the processing nodes. However, the Reduce functions are distributed uniformly among the processing nodes. On the other hand, the works in and propose the allocation of Reduce functions based on the computational capabilities of the processing nodes. However, the input file is split equally and distributed among the processing nodes. Considering a more generalized heterogeneous system, a joint file and function allocation strategy is proposed in to reduce the communication load in the Shuffle phase. The file allocation and function assignment strategies allocate more subfiles and Reduce functions, respectively, to the processing nodes with higher computational and storage capabilities. Generally, the Reduce function assignment is related to the input file allocation as the processing nodes with more input files have greater storage and computational capabilities and hence are more capable of computing more output functions. In particular, there are two proposed schemes of function assignment, i.e., computation-aware function assignment and shuffle-aware function assignment. 
For the computation-aware function assignment strategy, the number of functions allocated is proportional to the computational capabilities of the processing nodes in order to reduce computation latency. For the shuffle-aware function assignment strategy, the functions are mostly allocated to the processing nodes with high computational capabilities so that the communication load in the Shuffle phase is minimized. The simulation results show that the communication loads achieved by both computation-aware and shuffle-aware function assignment strategies are lower than that of the uniform function allocation strategy. Besides, the computation-aware function assignment strategy requires fewer output functions as compared to the proposed schemes in and . However, the number of input files required is much larger, especially when the number of processing nodes in the system increases.
While much attention has been paid to the design of file allocation, coded shuffling algorithms and function allocation, all these works assume a fixed number of processing nodes in the distributed computing systems. For a given computation task that specifies the number of subfiles and the number of outputs, the allocation of functions and the data shuffling schemes can be implemented with the minimum number of processing nodes. In the study of , the resource allocation problem is formulated as an optimization problem that minimizes the overall job execution time with the optimal number of processing nodes used. 
For more practical implementation, the resource allocation strategy should consider the heterogeneity in the processing speed of the nodes, since the straggler effects cause an increase in computation latency which increases the overall job execution time.\nBy exploiting the fact that the processing nodes have time-varying computing resources, e.g., the processing nodes may have different central processing unit (CPU) power over time, an optimal computation task scheduling scheme helps to reduce the communication load. In the scheduling of tasks under dynamic computing resources, the total communication load is minimized by optimizing the number of allocated tasks and load redundancy at each time slot .\n\\begin{table*}[t]\n\\caption{\\small CDC schemes to reduce communication costs.} \n\\label{tab:communication}\n\\centering\n\\begin{tabular}{|>{\\centering\\arraybackslash}m{2cm} |>{\\centering\\arraybackslash}m{1.4cm} |>{\\centering\\arraybackslash}m{3.5cm} |m{6.3cm} |>{\\centering\\arraybackslash}m{2.3cm}|}\n\\hline\n\\rowcolor{mygray}\n\\textbf{Approach} & \\textbf{Ref.} & \\textbf{Coding Schemes} & \\multicolumn{1}{c|}{\\textbf{Key Ideas}} & \\textbf{Platform Support} \\\\ \\hline\n\\multirow{12}{=}{File Allocation} & & - & Deterministic and systematic storage update strategy & Heterogeneous\\\\\n\\cline{2-5}\n& & Hybrid Coded MapReduce & Allocates Map tasks such that data locality is maximized & Homogeneous\\\\\n\\cline{2-5}\n& & Group-based Coded MapReduce & Allocates dataset by using a group-based method in order to avoid high subpacketization level and allow processing nodes to cooperate in the transmission of messages& Homogeneous\\\\\n\\cline{2-5}\n& & Resolvable Design & Uses single-parity code to determine the number of subfiles and allocate the subfiles based on the corresponding resolvable design & Homogeneous\\\\\n\\cline{2-5}\n& & Placement Delivery Arrays & Construction of CDC schemes based on PDA which has the property of illustrating the 
placement and delivery phase in a single array & Homogeneous\\ 
\hline
\multirow{18}{=}{Coded Shuffling Design} & & Compressed CDC & Pre-combines computed intermediate values of the same function, followed by coding the pre-combined packets for communication between different processing nodes & Homogeneous\\
\cline{2-5}
& & Quantized SGD & Quantizes the component of the gradient vector to a discrete set of values and encodes the quantized gradients given their statistical properties & Homogeneous\\
\cline{2-5}
& & Pliable Index Coding & Semi-random data shuffling scheme based on modified pliable index coding to reduce the number of communication rounds & Homogeneous\\
\cline{2-5}
& & Cross-iteration Coded Computing & Jointly codes across multiple iterations for a single communication round & Homogeneous\\
\cline{2-5}
& & CDC for distributed graph computing systems & Instead of communicating with all other processing nodes, each processing node only needs to communicate with a subset of processing nodes to obtain the required data to complete its computation tasks & Homogeneous\\
\cline{2-5}
& & CDC for multistage dataflow & A more generalized CDC scheme is proposed to handle multistage dataflow computation tasks which are represented by layered DAGs & Homogeneous\\
\hline
\multirow{12}{=}{Consideration of Underlying Network Architecture} & & Hybrid Coded MapReduce & Reduces cross-rack communication at the expense of higher intra-rack communication based on the server-rack architecture & Homogeneous\\
\cline{2-5}
 & & Topological CDC & Considers t-ary fat-tree topology which is a more practical topology to connect the physically separated processing nodes in data center networks & Homogeneous\\
\cline{2-5}
 & & UberShuffle & Considers imperfect communication channel between the workers and the master node & Homogeneous\\
\cline{2-5}
& & - & Considers multicore setup where each computing node can have multiple 
cores, e.g., the CPU instances of publicly available cloud infrastructure can deliver up to 128 cores & Homogeneous\\
\hline
\multirow{12}{=}{Function Allocation} & & - & Considers a non-cascaded system and allocates Reduce functions over a simplified heterogeneous network which comprises multiple homogeneous networks & Heterogeneous \\
\cline{2-5}
& & Cascaded CDC & Reduce functions are computed at more than one processing node and they are allocated based on the combinatorial design in & Heterogeneous\\
\cline{2-5}
 & & - & Allocates functions to maximize data locality such that the number of communication rounds required is reduced & Homogeneous\\
\cline{2-5}
 & & - & Joint file and function allocation strategy & Heterogeneous\\
\cline{2-5}
 & & - & Considers the availability of time-varying computing resources & Homogeneous\\
\hline
\end{tabular}
\end{table*}
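The computation-aware function assignment discussed above can be approximated with a simple proportional allocation; the rounding rule (largest remainder) and the capability values below are illustrative assumptions, not the exact allocation used in the cited work.

```python
def computation_aware_assignment(capabilities, num_functions):
    """Allocate Reduce functions roughly proportional to node capability
    (a sketch of computation-aware assignment; largest-remainder rounding
    is an illustrative choice)."""
    total = sum(capabilities)
    raw = [num_functions * c / total for c in capabilities]
    alloc = [int(x) for x in raw]
    # Hand out the leftover functions to the largest fractional parts.
    leftover = num_functions - sum(alloc)
    order = sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i], reverse=True)
    for i in order[:leftover]:
        alloc[i] += 1
    return alloc

# Four nodes where node 0 is 3x as capable as node 3.
alloc = computation_aware_assignment([6, 4, 2, 2], num_functions=14)
assert sum(alloc) == 14
assert alloc[0] >= alloc[1] >= alloc[2] >= alloc[3]
```

A shuffle-aware variant would instead bias the allocation even more heavily toward the most capable nodes, trading load balance for a smaller shuffle.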
With more processing nodes connected to the network, more rounds of communication of the computed intermediate results are required, resulting in higher communication costs which lead to a longer job execution time. Besides, the high communication costs in the data shuffling phase impede the implementation of efficient distributed iterative algorithms which are useful for the training of machine learning models. As such, the minimization of communication costs is a key step to achieve the objective of reducing the overall job execution time of the distributed computation tasks.
\item While having more processing nodes to perform the computation tasks in parallel helps to reduce the computation load of each processing node, the communication costs may increase, slowing down the entire computation process. Instead of generating an infinitely large number of subfiles and distributing them among a large number of processing nodes, several studies focus on determining the optimal number of subfiles to avoid splitting the input file too finely. For example, the resolvable design based schemes and PDA approaches are adopted to split the input files. Hence, it may not be optimal to use all the processing nodes that are connected to the network. In fact, the authors in propose an optimal resource allocation scheme that determines the minimum number of processing nodes needed to achieve the minimum overall job execution time. 
\item Coding techniques have been shown to be effective in reducing communication load in the data shuffling phase at the expense of higher computation load . However, the two-dimensional tradeoff is insufficient to fully evaluate the performance of the CDC schemes. Apart from leveraging the computational capabilities of the processing nodes, their storage capabilities can be exploited. For example, more data can be stored at processing nodes with higher storage capacities such that the number of communication rounds is reduced . 
Moreover, by considering the data stored at the processing nodes, the allocation of functions that maximizes data locality helps to reduce the need for communication bandwidth . Hence, instead of the two-dimensional tradeoff between computation load and communication load, the three-dimensional tradeoff between computation, communication and storage costs has to be carefully managed for the implementation of efficient CDC schemes.
\item For effective implementation of CDC schemes in practical distributed computing systems, the underlying architecture has to be considered. Generally, the distributed computing systems operate under the server architecture which consists of multiple racks where each rack has several servers. The Hybrid Coded MapReduce scheme reduces cross-rack communication at the expense of higher intra-rack communication. Besides, the communication channels between the master node and workers are imperfect. As a result, the theoretical gains of the coded data shuffling schemes are not achievable under practical setups. In order to design and analyze the performance of the CDC schemes, the limitations of the distributed computing systems should be taken into consideration. 
\item Most of the CDC schemes focus on the minimization of communication costs in the data shuffling phase at the expense of a reasonable increase in computation load. However, the computational overhead of the algorithm is not negligible under some settings. For example, under a much faster broadcast environment, the UberShuffle algorithm incurs significant encoding and decoding costs such that the shuffling gain is offset by the high computational overhead. For future work, more practical CDC schemes can be proposed such that the communication costs are minimized while maintaining low computational costs to improve the performance of the distributed computing systems. 
Given that uncoded computing schemes, e.g., and , achieve near-optimal computation and storage costs, one possible research direction is to merge the proposed communication-efficient schemes with the uncoded computing schemes to reduce the computation and storage costs.
\item Although some of the works consider heterogeneities in the capabilities of the processing nodes to allocate files and functions, the presence of stragglers which have slower processing speeds still hinders the performance of the distributed computing systems. Therefore, we further discuss the approaches to mitigate the straggler effects in the next section.
\end{itemize}
Recently, coding approaches have been shown to outperform the aforementioned methods in reducing the computation latency of the distributed computing systems. In this section, we discuss three approaches to mitigate the straggler effects: 
\begin{itemize}
\item \emph{Computation Load Allocation: }Coding techniques can be implemented together with computation load allocation strategies to reduce the computation latency in the distributed computing systems. It is important to take into account the variation in the computational capabilities of the processing nodes when allocating the computation load. As such, different prediction methods such as a long short-term memory (LSTM) algorithm , an Auto Regressive Integrated Moving Average (ARIMA) model and a Markov model are used to estimate the processing speeds of the nodes. Generally, the objective of the load allocation strategies is to minimize computation latency. However, in some applications where strict deadlines are given, the load allocation strategies aim to maximize timely computation throughput .
\item \emph{Approximate Coding: }For some applications, e.g., location-based recommendation systems, exact solutions are not necessary. The studies explore different coding approaches to obtain approximate solutions to the problems. The approximate coding methods relax the requirement for convergence and thus reduce the number of workers that are required to return their computed results. This prevents stragglers from having an adverse effect on the system.
\item \emph{Exploitation of Stragglers: }The straggling nodes may have completed a fraction of their allocated computation tasks, which would be wasted if ignored completely. In fact, the stragglers may not be persistent over the entire computation process and thus their partial computed results can be useful to recover the final result. 
In order to maximize the resource utilization of the straggling nodes, the workers are allowed to sequentially process their allocated subtasks and transmit their partial computed results continuously . However, this may come at the expense of higher communication load, which needs to be carefully managed. \n\\end{itemize}", "id": "79250ecd-4bf6-4689-8a12-f801fbeb39c1", "level": "section", "origin_cites_number": 6, "parent_id": "43fb2934-e7bf-4933-946e-becc16dd4285", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Mitigation of Stragglers" ] ], "subsections": [ "bbc9eda4-c6aa-4f51-abdb-b74f0a6b37df", "99bd280c-f0a8-4997-9dfb-23945fa49c2a", "775a5f6a-bf8f-475d-ab6b-0bf91f59fd88", "07a5860c-2192-4141-a8ce-fe22c5666e51" ], "title": "Mitigation of Stragglers" }, { "cite_extract_rate": 0, "cites": [], "content": "Apart from having different straggling rates that affect the completion of tasks, the processing nodes have different capabilities, e.g., storage capacities, computing resources and communication bandwidths. To better handle the straggling nodes in the distributed computing systems, an optimal load allocation strategy that takes into account these heterogeneities is necessary to minimize the overall job execution time. Given the computation time parameters, i.e., the straggling and shift parameters of each worker, the Heterogeneous Coded Matrix Multiplication (HCMM) algorithm determines the allocation of computation load to each worker. The HCMM scheme exploits the benefits of both coding techniques and computation load allocation strategy to minimize the average computation time of the computation tasks. Given that it is difficult to derive closed-form expressions of expected computation time of the heterogeneous processing nodes, a two-step alternative problem formulation is proposed. In the first step, given a time period, the number of computed results by the workers is maximized by optimizing the load allocation. 
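The core coding idea behind these approaches can be sketched for distributed matrix-vector multiplication with a toy $(3,2)$ parity code (the block sizes and values below are illustrative): any two of the three worker results suffice to recover $A\mathbf{x}$, so one straggler can simply be ignored.

```python
def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

# Split A into two row blocks and add a parity block A1 + A2: a (3, 2)
# MDS code, so any 2 of the 3 worker results recover A @ x.
A1 = [[1, 2], [3, 4]]
A2 = [[5, 6], [7, 8]]
x = [1, -1]
parity = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(A1, A2)]

results = {0: matvec(A1, x), 1: matvec(A2, x), 2: matvec(parity, x)}

# Suppose worker 1 straggles: recover A2 @ x as (parity @ x) - (A1 @ x).
y1, yp = results[0], results[2]
y2_recovered = sub(yp, y1)
assert y2_recovered == matvec(A2, x)
```

The redundancy (one extra worker here) is exactly the computation overhead that the load allocation strategies in this section trade against straggler tolerance.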
In the second step, given the derived load allocation in the first step, the time needed to ensure sufficient results are returned at a pre-defined probability is minimized. The simulation results show that when workers' computation time is assumed to follow a shifted exponential runtime distribution, HCMM reduces the average computation time by up to 71\\%, 53\\% and 39\\% over uniform uncoded, load-balanced uncoded and uniform coded load allocation schemes, respectively. In practical experiments over Amazon EC2 clusters, the combination of HCMM and LT codes outperforms the uniform uncoded, load-balanced uncoded and uniform coded load allocation schemes by up to 61\\%, 46\\% and 36\\%, respectively.\nAlthough HCMM achieves asymptotically optimal computation time, the decoding complexity is high, which suggests the opportunity to further speed up the overall computation tasks. In practical distributed computing systems, some processing nodes have the same computational capabilities, in terms of the same distributions of computation time, and thus they can be grouped together. By exploiting the group structure and heterogeneities among different groups of processing nodes , , the implementation of a combination of group codes and an optimal load allocation strategy not only approaches the optimal computation time that is achieved by the MDS codes, but also has low decoding complexity. In addition, by varying the number of allocated rows of the matrix to the workers , the computation latency can be reduced by orders of magnitude over the MDS codes with fixed computation load allocation as the number of workers increases. The load allocation strategy proposed in focuses mainly on the design of an optimal MDS code. Other efficient coding algorithms such as LT codes can also be explored in future studies.\nIn addition to the heterogeneous capabilities of the processing nodes, the amount of available resources of the processing nodes may vary over time. 
By always allocating computation tasks to the processing nodes with higher capabilities, delays may be incurred in completing the allocated tasks if the processing nodes start to work on the newly-allocated computation tasks in parallel. At the same time, the resources of the processing nodes with lower capabilities may be under-utilized. In order to maximize the resource utilization of the processing nodes, dynamic workload allocation algorithms which are adaptive to the time-varying capabilities of the processing nodes are proposed , , . To provide robustness against the straggling nodes, the design of the load allocation algorithms often depends on the historical data of the processing nodes such as computation time, which can be used to predict the processing speeds by using an LSTM algorithm , an ARIMA model or a Markov model . The best performing LSTM model achieves 5\\% lower prediction error than an ARIMA (1,0,0) model .\nIn the study of , the authors propose Coded Cooperative Computation Protocol (C3P) which is a dynamic and adaptive coded cooperation framework that efficiently utilizes the available resources of the processing node while minimizing the overall job execution time. Specifically, the master node determines the coded packet transmission intervals based on the responsiveness of the processing nodes. For the processing nodes which are not able to complete the tasks within the given transmission interval, they suffer from longer waiting time for the next coded packets. In comparison to the HCMM scheme which does not consider the dynamic resource heterogeneity in workers, the C3P framework achieves more than 30\\% improvement in task completion delay.\nThe dynamic and adaptive load allocation algorithms are especially useful in providing timely services with deadline constraints which are common in many IoT applications. 
For such applications, instead of minimizing task completion delay, the objective of the load allocation algorithms is to maximize timely computation throughput, i.e., the average number of computation tasks that are successfully completed before the given deadline . \nFor some applications that may need timely but not necessarily optimal results, it is more important to recover the final result with the highest accuracy possible by the stipulated deadlines than to solve for an exact solution. The algorithm to solve for an approximate solution requires significantly shorter computation time than that of an algorithm that solves for an exact solution . In the study of , the batch-processing based coded computing (BPCC) algorithm is proposed. The workers first partition the allocated encoded matrix into several batches, i.e., submatrices, for computations. As soon as the workers complete the computation of each batch of submatrix with a given vector, they return the partial computation results which are used to generate the approximate solution. Based on the computation time parameters, i.e., the straggling and shift parameters of the workers, the BPCC algorithm is used to optimally allocate the computation load to each worker by considering two cases: (i) negligible batching overheads, and (ii) linear batching overheads. Hence, the allocation of computation load is optimized by jointly minimizing both the overall job execution time as well as the potential overhead of batch processing. In addition to reducing computation latency, the BPCC algorithm exploits the partial computation results of all processing nodes, including the straggling nodes, which contribute to approximate solutions of higher accuracy . The simulation results show that BPCC algorithm with negligible batching overheads achieves up to 73\\%, 56\\% and 34\\% reduction in average job execution time over uniform coded, load-balanced uncoded and HCMM load allocation schemes respectively. 
The experimental results on Amazon EC2 clusters and an Unmanned Aerial Vehicle (UAV)-based airborne computing platform also demonstrate similar results. 
\begin{table*}[!ht]
\caption{\small Approximate CDC schemes to mitigate the straggler effects.} 
\label{tab:approximate}
\centering
\begin{tabular}{|>{\centering\arraybackslash}m{2.0cm} |>{\centering\arraybackslash}m{1.5cm} |>{\centering\arraybackslash}m{3.3cm} |m{8cm}|}
\hline
\rowcolor{mygray}
\multicolumn{1}{|c|}{\textbf{Problems}} & \textbf{Ref.} & \textbf{Coding Schemes} & \multicolumn{1}{c|}{\textbf{Key Ideas}} \\ \hline
\multirow{5}{=}{Matrix-Vector} & & Anytime Coding & Computations can be stopped anytime and the approximate solution is derived from the processing nodes that have completed their tasks\\
\cline{2-4}
 & & Coded Sequential Computation Scheme & A sequence of approximated problems is designed such that the time required to solve these problems is shorter than solving the original problem directly\\
\hline
\multirow{3}{=}{Matrix-Matrix} && CodedSketch & Use a combination of the count-sketch technique and structured polynomial codes \\
\cline{2-4}
 & & OverSketch & Divide the sketched matrices into blocks for computations\\
\hline
\multirow{8}{=}{Gradient Descent} & & Bernoulli Gradient Codes & Use Bernoulli random variables as entries of the function assignment matrix\\
\cline{2-4}
 && Stochastic Block Codes & Interpolation between BGC and FRC \\
\cline{2-4}
 & & Stochastic Gradient Coding & Distribute data based on a pair-wise balanced scheme and provide a rigorous convergence analysis of the proposed coding scheme\\
\cline{2-4}
 & & LDPC Codes & Encode the second moment of the data points\\
\cline{2-4}
 & & Encoded Optimization & Encode both the labels and data such that the redundancy is introduced in the formulation of the optimization problems \\
\hline
\multirow{4}{=}{Non-Linear} & & Generalized PolyDot & 
A generalization of PolyDot codes that is used for the training of DNNs\\
\cline{2-4}
 & & Learning-based Approach & Design neural network architectures to learn and train the encoding and decoding functions to approximate unavailable outputs\\
\hline
\end{tabular}
\end{table*}
More important subtasks are allocated more processing nodes for computations as they improve the accuracy of the approximation. To allow the users to receive useful information from time to time, approximate solutions can be transmitted to the users sequentially. This can be achieved by solving a sequence of approximated problems , instead of solving the original problem directly. 
To further reduce the computation time of approximate matrix multiplications, sketching techniques , can be used to remove redundancy in the structure of the matrices through dimensionality reduction. However, by using sketching techniques, the recovery threshold increases as the redundancy is removed. In contrast, coding techniques reduce the recovery threshold by introducing redundancy. As such, a combination of both techniques that carefully manages the tradeoff between the recovery threshold and the amount of redundancy can be implemented to minimize computation latency. In particular, the count-sketch technique is combined with structured codes to mitigate the straggler effects by preserving a certain amount of redundancy, thereby achieving the optimal recovery threshold and hence computation latency , .
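To make the sketching idea concrete, the following sketch compresses the shared (inner) dimension of a matrix product with a count-sketch matrix, giving an unbiased approximation of $AB$. The tiny dimensions and plain-Python matrix helpers are illustrative assumptions, not the CodedSketch or OverSketch constructions.

```python
import random

# Illustrative sketch: compressing the shared (inner) dimension of A @ B with
# a count-sketch matrix S (one random +-1 entry per column) yields
# (A S^T)(S B), an unbiased approximation of the exact product.

def count_sketch(d, k, rng):
    """Return a k x d count-sketch matrix: exactly one random +-1 per column."""
    S = [[0] * d for _ in range(k)]
    for col in range(d):
        S[rng.randrange(k)][col] = rng.choice((-1, 1))
    return S

def matmul(X, Y):
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][t] * Y[t][j] for t in range(inner)) for j in range(cols)]
            for i in range(rows)]

def transpose(X):
    return [list(r) for r in zip(*X)]

rng = random.Random(0)
A = [[1.0, 2.0, 0.0], [0.0, 1.0, 1.0]]        # 2 x 3
B = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]      # 3 x 2
S = count_sketch(3, 2, rng)                    # compress inner dimension 3 -> 2
approx = matmul(matmul(A, transpose(S)), matmul(S, B))
exact = matmul(A, B)
assert all(sum(1 for r in S if r[c] != 0) == 1 for c in range(3))
```

When the sketch dimension equals the inner dimension and no columns collide, the sketch acts as a signed permutation and the product is recovered exactly; a smaller sketch dimension trades accuracy for fewer multiplications.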
Instead of constructing the gradient codes based on expander graphs, which are difficult to compute due to their high complexity, a simpler and more efficient Bernoulli Gradient Code (BGC) is proposed by using sparse random graphs which introduce randomness into the entries of the function assignment matrix. Since the performance of the gradient codes depends on the efficiency of the decoding algorithms, the authors also present two decoding techniques to recover the approximate solution from the outputs of the non-straggling nodes. The simulation results show that the BGC schemes can handle adversaries with polynomial-time computations, but at a cost of higher decoding error than the FRC schemes . Besides, the optimal decoding algorithm always achieves a lower decoding error than that of the one-step decoding algorithm across various gradient coding schemes. A rigorous convergence analysis of the approximate gradient codes and the performance of BGC on different practical datasets, such as the Amazon, Covertype and KC Housing datasets, are presented in . The stochastic block code, which is based on the stochastic block model from random graph theory, is an interpolation between FRC and BGC . On the one hand, the FRC schemes achieve small reconstruction errors under random straggler selection, while on the other hand, the BGC schemes are robust against polynomial-time adversarial stragglers.
Other approximate gradient coding methods such as Stochastic Gradient Coding (SGC) and LDPC codes are used to obtain an estimate of the true gradient. Similar to the idea of encoding data batches in the PCR schemes , the data variables in the optimization formulation can be efficiently encoded to mitigate the straggler effects in more general large-scale optimization problems such as support vector machines, linear regression and compressed sensing , .
Generally, in solving for approximate solutions to reduce the computation latency of the distributed tasks, there is an inherent tradeoff between the recovery threshold and the approximation error, where the recovery threshold can be reduced at the expense of a larger approximation error .
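The way an approximate gradient survives stragglers can be sketched with a replication-based assignment, a simplified stand-in for the Bernoulli and stochastic constructions above: each partition's gradient is kept if any of its $r$ copies survives, and dropped otherwise.

```python
# Illustrative sketch: replication-based approximate gradient coding, a
# simplified stand-in for the Bernoulli/stochastic constructions above. Each
# data partition is assigned to r workers; the master sums one surviving copy
# per partition and drops partitions whose copies all straggle.

def assign(num_parts, num_workers, r):
    """Cyclically assign each partition to r workers."""
    return {p: [(p + i) % num_workers for i in range(r)] for p in range(num_parts)}

def approximate_gradient(partial_grads, assignment, stragglers):
    """Sum one surviving copy of each partition's gradient; drop lost partitions."""
    total, recovered = 0.0, 0
    for part, workers in assignment.items():
        if any(w not in stragglers for w in workers):
            total += partial_grads[part]
            recovered += 1
    return total, recovered

grads = [1.0, 2.0, 3.0, 4.0]                  # per-partition gradients
assignment = assign(num_parts=4, num_workers=4, r=2)
approx, recovered = approximate_gradient(grads, assignment, stragglers={1})
assert recovered == 4 and approx == 10.0      # r = 2 copies survive one straggler
```

With two stragglers a partition can be lost entirely, and the returned sum becomes an approximation of the true gradient, which mirrors the recovery-threshold versus approximation-error tradeoff noted above.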
In other words, decoding can still be performed correctly provided that the number of errors does not exceed the maximum error tolerance level. The effectiveness of coding techniques in mitigating the straggler effects of different neural network architectures such as AlexNet and Visual Geometry Group (VGG) in an IoT system is illustrated in . However, this unified coded DNN training strategy may not be relevant to the training of other neural networks which have a large number of non-linear functions. As such, the authors in propose a learning-based approach for designing codes. Based on dilated CNNs and multilayer perceptrons (MLPs), neural network architectures and a training methodology are proposed to learn the encoding and decoding functions. The outputs of the decoding functions are used to approximate the unavailable outputs of any differentiable non-linear function. The simulation results show that the learning-based approach to designing the encoding and decoding functions can accurately reconstruct 95.71\%, 82.77\% and 60.74\% of the unavailable ResNet-18 classifier outputs on the MNIST, Fashion-MNIST and CIFAR-10 datasets, respectively.
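Since the Generalized PolyDot approach codes the linear operations at each layer, the idea can be illustrated with a toy single-parity code on a layer's weight matrix. One parity block tolerating one straggler is an illustrative simplification, not the actual PolyDot construction.

```python
# Illustrative sketch of coding the *linear* part of a neural-network layer:
# the weight matrix is split into two row blocks plus one parity block (their
# sum), so the layer's output survives any single straggler. A single parity
# is a toy stand-in for the Generalized PolyDot construction in the text.

def matvec(rows, x):
    return [sum(w * v for w, v in zip(row, x)) for row in rows]

W1 = [[1.0, 2.0]]                          # row block held by worker 1
W2 = [[3.0, 4.0]]                          # row block held by worker 2
parity = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(W1, W2)]
x = [1.0, 1.0]                             # layer input

# Suppose worker 1 (holding W1) straggles; its share of the output is
# recovered from the parity worker's result minus worker 2's result.
y2 = matvec(W2, x)
yp = matvec(parity, x)
y1_recovered = [p - b for p, b in zip(yp, y2)]
assert y1_recovered == matvec(W1, x)       # output recovered without worker 1
```

The recovery works because matrix-vector multiplication is linear in the matrix; the non-linear activation is then applied to the recovered output, which is why only the linear operations of each layer are coded.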
However, the amount of work that has been completed by the straggling nodes, especially the non-persistent stragglers, is non-negligible and can be better utilized.
In order to exploit the computational capacities of these non-persistent stragglers, multi-message communication (MMC) is used, where the workers are allowed to send multiple messages to the master node at each iteration. This allows the workers to transmit their partial computed results whenever they complete a fraction of the allocated task, rather than completing the entire computation task before transmitting the computed result in a single communication round. The work in considers the implementation of Lagrange Coded Computing (LCC) with MMC to minimize the average job execution time at the expense of a higher communication load, due to the increase in the number of messages transmitted by the workers to the master node. Since the LCC scheme utilizes polynomial interpolation to recover the final result, the decoding complexity and the number of transmissions can be further reduced by increasing the number of polynomials used in decoding the computed results returned by the workers. The simulation results show that by exploiting the computing resources of the non-persistent stragglers via MMC, the average job execution time decreases as the computation load of each worker increases. However, since the communication load of the LCC-MMC scheme is constant as the computation load increases, it is suitable to be implemented only when computation time dominates the overall execution time of the distributed tasks, where the total execution time includes the time needed for both computation and communication.
Otherwise, if the communication load is the bottleneck of the network, LCC without MMC should be used instead, since the communication load can be reduced at the expense of a higher computation load.
When MMC is allowed, i.e., the workers perform more than one round of communication per iteration, the computation work done by the straggling nodes can be exploited by allowing sequential processing, where the workers need to transmit the computation results of their completed subtask before working on the next subtask. To fully exploit the useful information provided by the straggling nodes, the hierarchical coded computation scheme is proposed in to utilize the computations from all workers. Each worker first divides the allocated computation task into layers of sub-computations, which are processed sequentially, i.e., the result of a layer of sub-computation is transmitted to the master node before the computation of the next layer starts. Since the workers have different processing speeds and the sub-computations are performed sequentially, the finishing time for each layer is different. MDS codes are used to encode the layers so that the finishing time of each layer is approximately the same. The top layers, which have a lower probability of erasure, are encoded with higher-rate MDS codes, whereas the bottom layers are encoded with lower-rate MDS codes. The simulation results show that for the same amount of tasks to be completed, the proposed hierarchical coded computation scheme achieves a $1.5\times$ improvement in expected finishing time as compared to the coded computation scheme proposed in which ignores the computations of the straggling nodes.
In the study of , by computing the block products sequentially, the partial computation results from the straggling nodes are used to aggregate the final result.
In order to preserve the sparsity of the matrices processed by the workers, instead of coding the entire matrix, the fraction of coded blocks can be specified to control the level to which coding is utilized in the solutions. There are two different approaches considered, depending on the placement of the coded blocks. When the coded blocks are placed at the bottom of the workers' processing order, it is easier for the master to decode. When the coded blocks are placed at the top, the computation load of the workers is minimized. As such, the placement can be calibrated based on the task requirements.
While the coded computation schemes achieve a low communication load and reduce the average job execution time for each iteration, the uncoded computation schemes have their own benefits of having no decoding complexity and allowing partial gradient updates. In order for a system to benefit from the advantages of both schemes, the coded partial gradient computation (CPGC) scheme is proposed in . In the CPGC scheme, uncoded submatrices are allocated for the first computation since there is a high probability for each worker to complete its first computation task. Subsequently, coded submatrices are allocated to handle the straggling nodes. The master node is able to update the gradient parameters by using the computation results from a subset of workers and by exploiting the partial computations of the straggling nodes. As a result, the average job execution time for each iteration is reduced.
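A minimal sketch of the uncoded-then-coded idea behind CPGC follows; a single parity task stands in for the coded submatrices, which is an illustrative simplification of the actual scheme.

```python
# Illustrative sketch of the uncoded-then-coded idea (a single parity task
# stands in for CPGC's coded submatrices). Workers first return uncoded
# gradient blocks; one additional coded (parity) task lets the master recover
# the full gradient sum even when a worker's uncoded block never arrives.

g = [1.0, 2.0, 3.0]                        # per-block partial gradients
round1 = {0: g[0], 1: g[1], 2: g[2]}       # round 1: worker i computes g[i] (uncoded)
parity = sum(g)                            # round 2: one worker computes the parity sum

# Suppose worker 2 straggles and never reports its round-1 result.
received = {w: v for w, v in round1.items() if w != 2}
missing = parity - sum(received.values())  # recover g[2] from the parity
full_gradient = sum(received.values()) + missing
assert missing == g[2] and full_gradient == sum(g)
```

The uncoded round keeps decoding trivial in the common case where all workers respond, while the coded round provides the straggler tolerance, mirroring how CPGC combines the advantages of both schemes.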
\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\linewidth]{mitigate.pdf}\n\\caption{\\small Approaches to mitigate the straggler effects include (a) computation load allocation by predicting the speed of the processing nodes , , (b) approximate coding and (c) exploitation of stragglers by allowing multi-message communications , .}\n\\label{fig:mitigate}\n\\end{figure}", "id": "775a5f6a-bf8f-475d-ab6b-0bf91f59fd88", "level": "subsection", "origin_cites_number": 2, "parent_id": "79250ecd-4bf6-4689-8a12-f801fbeb39c1", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Mitigation of Stragglers" ], [ "subsection", "Exploitation of Stragglers" ] ], "subsections": [], "title": "Exploitation of Stragglers" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, we have discussed three approaches (Fig.~\\ref{fig:mitigate}) to mitigate the straggler effects. The lessons learned are as follows:\n\\begin{itemize}\n\\item The straggler effect is a key issue to be resolved in order to reduce computation latency, hence minimizing the overall job execution time. Due to various factors such as insufficient power, contention of shared resources, imbalance work allocation and network congestions , some processing nodes may run slower than the average or even be disconnected from the network. Since the computation tasks are only completed when all processing nodes complete their computations, the time needed to complete the tasks is determined by the slowest processing node. Coding techniques have shown their effectiveness in reducing computation latency by introducing redundancy . In this section, we have explored the use of coding techniques for different distributed computation tasks, e.g., matrix-vector multiplications, matrix-matrix multiplications, linear inverse problems, iterative algorithms, convolutions and non-linear problems. 
While most of the research focuses on the design of encoding techniques, the decoding complexity of the codes also affects the computation latency significantly. Apart from Reed-Solomon codes and LDPC codes , more effective codes with low decoding complexity can be investigated in future studies. 
\item Considering the heterogeneities in the capabilities of the processing nodes, effective computation load allocation strategies are implemented to allocate workload to the processing nodes. We have discussed the proposed computation load allocation algorithms under different constraints, e.g., strict deadlines and time-varying computing resources. 
In addition, different prediction methods such as the LSTM algorithm , an ARIMA model and a Markov model that predict the speeds of the processing nodes are explored. However, the stragglers may be non-persistent in nature and thus they may be useful when they are able to perform computations faster than the average rate. Hence, load allocation based on the responsiveness of the processing nodes may be more useful in such situations. 
\item Instead of exact solutions, it is acceptable to derive approximate solutions for some applications, e.g., location-based recommender systems. Various coding techniques to derive approximate solutions are investigated. For example, in the studies of and , sketching techniques are used with structured codes to minimize computation latency. However, there exists a tradeoff between the recovery threshold and the approximation error. For future work, an improvement to this tradeoff can be investigated.
\item Although the straggling nodes run slower than the average, the computations that are completed may still be useful. It is wasteful not to utilize the partial computed results of the straggling nodes. Besides, these partial computed results can help to improve the accuracy of the estimates.
For example, in , the stragglers are treated as soft errors instead of erasures to minimize the mean-squared error of the iterative linear inverse solvers under a deadline constraint by using approximate weights. Outputs from all computing nodes, including the straggling nodes, are used to recover estimates that are as close as possible to the convergence values when the computation deadline is brought forward or when the number of computing nodes increases. Unfortunately, as the processing nodes are required to send their partial results once they complete them, more communication rounds are performed. The high communication costs may become the bottleneck of the distributed computation tasks. Given the advantages of using partial results from the straggling nodes, optimization approaches to minimize the communication costs between the master node and the workers should be explored.
\item The current studies in this section have proposed effective coding schemes for implementation. However, they do not consider security in the design of the coding schemes. For example, the FRC scheme achieves high accuracy in the presence of stragglers, but it is susceptible to attacks from adversarial stragglers, which turn more processing nodes into straggling nodes. Besides, other than the straggling workers, there may exist malicious or curious workers that may compromise the privacy and security of the system. Therefore, approaches to ensure secure coding are discussed in depth in the next section.
\n\\end{itemize}", "id": "07a5860c-2192-4141-a8ce-fe22c5666e51", "level": "subsection", "origin_cites_number": 5, "parent_id": "79250ecd-4bf6-4689-8a12-f801fbeb39c1", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Mitigation of Stragglers" ], [ "subsection", "Summary and Lessons Learned" ] ], "subsections": [], "title": "Summary and Lessons Learned" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:security}\nIn distributed computing, the data owner, master node, and workers may not belong to the same entity. For example, the data owner may wish to perform a task on a massive dataset on which intensive computations have to be performed. The computations may be divided and distributed to multiple workers on third-party computing services. However, sensitive data, e.g., in healthcare services , may be involved. In this case, \\textit{curious} workers may collude to obtain information about the raw data, whereas \\textit{malicious} workers may intentionally contribute erroneous inputs to introduce biases to the model. Besides, in some cases, the dataset does not belong to either the master node or the workers, and as such, the raw dataset has to be guarded against both parties.\nTo ensure that privacy and security are preserved during the computing tasks, conventional methods such as homomorphic encryption have been proposed in which the data is first encrypted before being shared to workers. However, the encryption techniques are found to be costly in terms of computation and storage costs . Besides, the secure multi-party computation (MPC) approaches mainly focus on the correctness and privacy of the data , whereas neglecting to reduce the complexity of computation at each workers, or the number of workers required to perform the task. Recently, coding techniques that have originally been introduced to mitigate straggling workers are increasingly utilized to provide information-theoretic privacy guarantees. 
Specifically, information-theoretic privacy considers a setup consisting of honest-but-curious workers, in which collusions formed between $T$ of $N$ workers do not leak information about the training dataset. Coding schemes can be used to mitigate not only the stragglers but also the colluding curious workers and malicious workers, as illustrated in Fig.~\ref{fig:security}. For ease of exposition, we classify the related studies into two main categories in this section:
\begin{itemize}
\item \textit{Secure Distributed Computing:} In this section, the studies aim to reduce the number of workers needed for information-theoretic privacy, i.e., where the colluding workers are unable to infer sensitive information from the dataset. For some studies, this objective is met while simultaneously preserving the efficiency of distributed computing, e.g., through providing resiliency against straggling workers . 
\item \textit{Secure Distributed Matrix Multiplication (SDMM):} The studies in the aforementioned category mainly focus on generic operations, e.g., addition, subtraction, multiplication, or the computation of polynomial functions, whereas the studies in this category focus specifically on matrix multiplication. One key difference between the two categories is that SDMM considers the specific scenario in which \textit{both} input matrices in the multiplication operation are private information, i.e., two-sided privacy , whereas the prior category mainly considers one-sided privacy, i.e., only one input matrix is private.
In addition, a performance metric of interest in the SDMM literature is the download rate, i.e., the ratio of the size of the desired result to the total amount of information downloaded by the user from the workers.
\end{itemize}
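The threshold-privacy notion used throughout this section rests on Shamir secret sharing, sketched below over a prime field: any $T$ shares are statistically independent of the secret, while any $T+1$ shares reconstruct it by Lagrange interpolation at $x = 0$. The field modulus and parameters are illustrative choices.

```python
import random

# Sketch of Shamir secret sharing over a prime field, the primitive behind
# the information-theoretic privacy guarantees above: any T shares reveal
# nothing about the secret, while any T + 1 shares reconstruct it.

P = 2_147_483_647                          # a Mersenne prime as the field modulus

def share(secret, n, t, rng):
    """Split secret into n evaluations of a random degree-t polynomial."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over GF(P)."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

rng = random.Random(7)
shares = share(secret=42, n=5, t=2, rng=rng)
assert reconstruct(shares[:3]) == 42       # any t + 1 = 3 shares suffice
```

The BGW-style constructions discussed next build on exactly this primitive, distributing shares of the data to the workers so that computations are performed on shares rather than on the raw dataset.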
The authors show that the polynomial sharing approach may be applied to perform several procedures, e.g., addition, multiplication, and the computation of polynomial functions, while reducing the number of workers required to complete the task as compared to conventional MPC approaches, even when workers have capacity-limited communication links. 
Typically, in conventional polynomial coding schemes, the dataset on which computations are performed is divided into multiple sub-tasks, with one sub-task encoded and assigned to each worker. In this case, faster workers that complete their task will be idle while waiting for straggling workers. To further mitigate the straggler effects, the authors in leverage computation redundancy to propose the private asynchronous polynomial coding scheme, in which a computation task is divided into several relatively smaller sub-tasks for distribution to each worker. This results in two key advantages, in addition to retaining the privacy preservation properties of polynomial coding. Firstly, the smaller sub-tasks can be successfully completed by straggling workers with limited computing capacity. Secondly, workers of the fastest groups are assigned more tasks to continue working throughout the whole duration rather than wait for the stragglers, thus reducing the computation time. 
However, the studies mainly utilize polynomial coding for privacy preservation, which is restrictive in certain aspects, e.g., it only allows column-wise partitioning of the matrices . As such, the entangled polynomial codes are applied by as an extension to polynomial sharing, so as to further reduce the restrictions during the data sharing phase, and hence, the number of workers required to perform the same computations while meeting privacy constraints.
While the studies of consider the scenario in which honest-but-curious workers are involved, workers may randomly be malicious in nature.
As an illustration, a group of workers may be employed to compute gradients for training a machine learning model. However, the gradients may be intentionally misreported by the workers to introduce biases or inaccuracies to the model . An existing approach is to perform median-based, rather than mean-based, aggregation of the gradients to eliminate misreports that are usually outliers . However, the median-based aggregation is computationally costly and faces convergence issues. As such, the study of proposes DRACO, which is based on the coding of gradients and algorithmic redundancy, i.e., each worker evaluates redundant gradients, such that the accurate gradients may be derived even in the presence of adversarial nodes. The simulation results show that DRACO is more than 3 times faster in achieving 90\% test accuracy for gradient computations on the MNIST dataset as compared to the geometric median method.
\begin{table}
\caption{\small Comparison of Shamir, LCC and Harmonic coding schemes for gradient-type computations.}
\label{table:lccharmonic}
\centering
\begin{tabular}{|>{\centering\arraybackslash}m{2.0cm}|c|c|c|}
\hline
\rowcolor{mygray}
 & \textbf{Shamir} & \textbf{LCC} & \textbf{Harmonic} \\ \hline
 Min. number of workers & $K(\deg g+1)$ & $K\deg g+1$ & $K(\deg g-1)+2$ \\ \hline
 \end{tabular}
\end{table}
An improvement to the studies of is done in , which proposes LCC to achieve an optimum tradeoff between resiliency against straggling workers, security against malicious workers, and information-theoretic privacy amid colluding workers. In LCC, the dataset of the master is encoded using the Lagrange polynomial to create computational redundancy. Then, the coded data is shared with the workers for computation on the encoded data, as if the coding did not take place. In comparison with the BGW MPC scheme , LCC requires more workers.
However, the Lagrange polynomial-based encoding leads to a reduction in the amount of randomness required to encode the data, which translates to lower storage and computation costs incurred by each worker. LCC also outperforms the BGW-based polynomial sharing in terms of communication costs, given that the polynomial sharing scheme requires a communication round for each bilinear operation. In addition, LCC is less computationally costly than DRACO , which does not utilize the algebraic structure of the encoded gradients. However, the Lagrange coding only works for computations involving arbitrary multivariate polynomial functions of the input dataset. As an extension, the study of proposes CodedPrivateML, which adopts polynomial approximations to handle the non-linearities of the gradient computation when the sigmoid function is involved, such that logistic regression can be conducted on LCC-encoded data while providing information-theoretic privacy for the dataset. Given that the advantages of LCC are preserved, the experiments conducted on Amazon EC2 clusters validate that the proposed scheme is close to 34 times faster than the BGW-based MPC approaches.
In light of the growing popularity of machine learning, the study of proposes Harmonic coding for tasks specific to gradient-type computations, e.g., for loss function minimization in distributed model training. Harmonic coding leverages the structure of the intermediate partial gradients computed to enable the cancellation of redundant results, such that the encoding and decoding process is more efficient. As such, for the same level of privacy constraint, Harmonic coding improves on Shamir's secret sharing scheme and LCC in terms of requiring fewer workers to compute gradient-type functions. This result is further summarized in Table \ref{table:lccharmonic}, where we present a comparison of the minimum number of workers required for the discussed schemes.
Note that $K$ refers to the number of partitions of the input dataset, $g$ refers to the fixed multivariate polynomial , and deg refers to the degree of $g$. Like LCC, Harmonic coding can also be applied universally to any gradient-type function computation. As such, the data encoding can be performed before the computing task is specified, thus further reducing the delay in computation.", "id": "8d724f64-c2a5-4e3f-a151-1bdc01b1a822", "level": "subsection", "origin_cites_number": 5, "parent_id": "ee3d99f5-815c-4ba8-b336-968d0def830a", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Secure Coding for Distributed Computing" ], [ "subsection", "Secure Distributed Computing" ] ], "subsections": [], "title": "Secure Distributed Computing" }, { "cite_extract_rate": 0, "cites": [], "content": "Matrix multiplication is a key operation in many popular machine learning algorithms , e.g., principal component analysis , support vector machines , and gradient-based computations. While the reviewed studies in Section~\\ref{subsec:securedist} discuss coded computing for privacy preservation in general operations, the studies to be discussed consider tailored strategies for SDMM.\nIn and , the authors propose the use of staircase codes in place of linear secret sharing codes, e.g., Shamir's codes . As an illustration, we consider that a master encodes its data $A$ with random matrix $R$ into three secret shares before transmitting a share to each of the workers to perform matrix multiplication. When linear secret sharing codes are used, the data and random matrix are not segmented but instead encoded and transmitted as a whole (Table~\\ref{table:staircase}). As such, the master has to wait for two \\textit{full} responses from any two of three workers before being able to decode and derive the desired results. 
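A minimal numerical sketch of this linear secret sharing baseline (our illustration; shares are formed over the reals here, whereas an actual scheme works over a finite field so that a single share reveals nothing about $A$):

```python
import numpy as np

rng = np.random.default_rng(0)

# One-sided private matrix multiplication with linear secret sharing: the
# master hides A behind a random mask R and hands each of the three workers
# one full share, as in the linear secret sharing row of the comparison above.
A = rng.integers(0, 10, size=(2, 2)).astype(float)   # private matrix
B = rng.integers(0, 10, size=(2, 2)).astype(float)   # public matrix
R = rng.integers(0, 10, size=(2, 2)).astype(float)   # random mask

shares = [R, R + A, R + 2 * A]

# Each worker multiplies its whole share by B and returns the full result.
responses = [S @ B for S in shares]

# Any two full responses suffice, e.g., (R + 2A)B - (R + A)B = AB,
# or ((R + 2A)B - RB) / 2 = AB.
AB_from_23 = responses[2] - responses[1]
AB_from_13 = (responses[2] - responses[0]) / 2
assert np.allclose(AB_from_23, A @ B) and np.allclose(AB_from_13, A @ B)
```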
In contrast, when the staircase code is used, the data and random matrices are segmented into sub-shares before transmission to the workers. When sufficient sub-tasks have been completed by the workers, the master can then instruct the workers to cease computation. Clearly, the staircase coding approach reduces the computation cost of the workers and the communication costs incurred by the master node. Accordingly, the staircase coding approach can outperform the classical secret sharing code in terms of mean waiting time. With four workers, experiments conducted on Amazon EC2 clusters show a 59\\% decrease in mean waiting time using staircase codes. \nHowever, and still consider the case of one-sided privacy, i.e., the approach is designed to keep only one of the two input matrices involved in SDMM operations private. As such, several studies have shifted the focus towards applying coding techniques to the specific case of two-sided privacy, i.e., in which both input matrices are kept private. Among the first such studies is that of , which applies aligned secret sharing for two-sided privacy. Specifically, the input matrices are split into submatrices and encoded with random matrices. Then, the undesired terms are aligned such that the server only recovers the desired results and saves on communication costs. This leads to an improved download rate, i.e., the ratio of the size of the desired result to the amount of information downloaded by a user, over conventional secret sharing schemes. \nFollowing , the study of proposes an inductive approach to find a close-to-optimal partition of the input matrices in consideration of two metrics, namely, the download rate and the minimum number of required workers. The proposed scheme improves on the download rate, the number of tolerable colluding servers, and the computational complexity as compared to the study of . 
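The essence of two-sided privacy can be sketched with a simple polynomial-sharing toy example (our own illustration over the reals, not the aligned secret sharing construction of the works above; masks and evaluation points are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-sided SDMM sketch: both inputs are masked as degree-1 polynomials,
#   A(x) = A + Ra*x,   B(x) = B + Rb*x,
# so each worker only sees one masked pair. The product
#   A(x) @ B(x) = AB + (A@Rb + Ra@B) x + (Ra@Rb) x^2
# is a degree-2 matrix polynomial whose constant term is the desired AB.
n = 2
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
Ra, Rb = rng.standard_normal((n, n)), rng.standard_normal((n, n))

xs = [1.0, 2.0, 3.0]   # one evaluation point per worker
products = [(A + Ra * x) @ (B + Rb * x) for x in xs]

# Lagrange interpolation at x = 0 recovers the constant term AB; for the
# points {1, 2, 3} the interpolation weights at 0 are (3, -3, 1).
AB = 3 * products[0] - 3 * products[1] + products[2]
assert np.allclose(AB, A @ B)
```

Reducing the number of evaluation points needed to isolate the desired term, and hence the number of workers, is precisely the lever that the schemes discussed next optimize.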
Inspired by , the polynomial coding scheme is also extended to SDMM operations and specifically convolution tasks in , while preserving two-sided privacy and a download rate similar to that of , and further mitigating the straggler effects. For convolution tasks, the authors leverage on the inherent property that the sums of convolutions of sub-vectors may be used to derive the convolution result. Then, the upper and lower bounds of the recovery threshold are derived to show that an order-optimal recovery threshold is achieved, i.e., it does not scale with the number of workers. \nHowever, the key weakness of and as indicated in is that the proposed theoretical results do not clarify the effect of matrix dimensions on the download rate, i.e., the download rates are derived in the case whereby the matrices are simply assumed to have large dimensions but without any further specifications. Indeed, the study of found that the results for may be violated in some cases for differing relative dimensions of the input matrices. In this context, the model proposed in allows the matrix dimensions to be specified, and a new converse bound for two-sided secure SDMM is derived.\nIn general, the encoded results of matrix multiplication are sent to the master, where interpolation is performed to obtain the multiplication results, i.e., the coefficients of a polynomial. In , the encoding of the private matrices is such that the coefficients are mainly non-zero. In contrast, the study of proposes the Gap Additive Secure Polynomial (GASP) codes such that there are as many zero coefficients as possible. This allows the product to be interpolated and the desired results to be derived with fewer evaluations, which implies that fewer workers are required to perform the matrix multiplication. To assign the exponents for decodability while having as many zero coefficients as possible, the authors propose the degree table to solve the combinatorial problem. 
In , the authors further generalize the GASP codes to be applicable to different partitions of input matrices and security parameters. The GASP codes are shown to outperform the approaches in in terms of download rate.\n\\begin{table}[!]\n\\caption{\\small Comparison of linear secret sharing codes and staircase codes in the distribution of computation tasks in a system with three workers .}\n\\label{table:staircase}\n\\centering\n\\scalebox{0.9}{\n\\begin{tabular}{ll ll l}\n\\hline\n\\rowcolor{mygray}\n\\multicolumn{1}{|l|}{} & \\multicolumn{1}{l|}{$S_1$} & \\multicolumn{1}{l|}{$S_2$} & \\multicolumn{1}{l|}{$S_3$} \\\\ \\hline\n\\multicolumn{1}{|l|}{\\begin{tabular}[c]{@{}l@{}}Linear secret\\\\ sharing code\\end{tabular}} & \\multicolumn{1}{l|}{$R$} & \\multicolumn{1}{l|}{$R+A$} & \\multicolumn{1}{l|}{$R+2A$} \\\\ \\hline\n\\multicolumn{1}{|l|}{Staircase code} & \\multicolumn{1}{l|}{\\begin{tabular}[c]{@{}l@{}}$A_1+2A_2+4R_1$\\\\ $R_1+R_2$\\end{tabular}} & \\multicolumn{1}{l|}{\\begin{tabular}[c]{@{}l@{}}$A_1+2A_2+4R_1$\\\\ $R_1+2R_2$\\end{tabular}} & \\multicolumn{1}{l|}{\\begin{tabular}[c]{@{}l@{}}$A_1+2A_2+4R_1$\\\\ $R_1+3R_2$\\end{tabular}} \\\\ \\hline\n\\end{tabular}\n}\n\\end{table}", "id": "d523704a-b56b-49c2-97c7-6500404f0ac2", "level": "subsection", "origin_cites_number": 7, "parent_id": "ee3d99f5-815c-4ba8-b336-968d0def830a", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Secure Coding for Distributed Computing" ], [ "subsection", "Secure Distributed Matrix Multiplication (SDMM)" ] ], "subsections": [], "title": "Secure Distributed Matrix Multiplication (SDMM)" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, we have discussed studies that have adopted a coded computing approach towards ensuring secure distributed computing. Then, we discuss the specific applications of SDMM, which require two-sided privacy. 
The summary and lessons learned are as follows:\n\begin{itemize}\n\item Originally proposed to mitigate stragglers in distributed computing systems, coding approaches have also been extended to ensure information-theoretic privacy and, in some studies, security against malicious workers . However, given that the coding techniques, e.g., polynomial codes, are designed with the intention to mitigate stragglers, the studies discussed in this section have focused on modifications to the approaches to simultaneously achieve both straggler mitigation and privacy. As was mentioned in this section, the coding approaches for privacy preservation have outperformed conventional secret sharing schemes that mainly focused on data correctness and privacy. \n\item With the recent popularity of machine learning, several studies have also focused on tailoring the coding approaches to various machine learning tasks, e.g., entangled polynomial coding for logistic regression and Harmonic coding for gradient-based computations . In particular, given the importance of two-sided privacy in SDMM, we have also specifically discussed papers that focus on two-sided privacy separately from papers that only consider one-sided privacy. In general, papers that take advantage of the algebraic structure of the specific operations , e.g., convolution tasks or gradient computations , for efficient encoding and decoding usually perform better.\n\item In most of the studies that we have discussed, the focus tends to be on information-theoretic privacy. However, weak security is also an important notion that may be explored in future works. Specifically, a system is weakly secure if attackers are unable to learn the sensitive intermediate values without having received a certain number of coded packets . This may be relevant when there exist eavesdroppers with access to the communication links between the master and the workers . 
For example, in the study of , a data shuffling design and a redundancy reduction algorithm for assigning computing tasks have been proposed to ensure weak security in the system. However, the focus has been on workers, rather than on eavesdroppers, which may not be involved in the computations.\n\item Given that the straggler effect is a fundamental concern in distributed computing, most papers have considered workers with heterogeneous computing capabilities. However, as discussed in , other aspects of network heterogeneity have been under-explored in the aforementioned studies. For example, the workers may have different levels of reputation . This enables the adoption of context-sensitive solutions, e.g., data security may be guarded against new workers or workers with low reputation, whereas it may not be required for trusted workers. With this, the computation complexity and duration may be reduced for trusted nodes.\n\end{itemize}", "id": "362bc888-78c5-42b5-b72f-1d88622fe4e4", "level": "subsection", "origin_cites_number": 4, "parent_id": "ee3d99f5-815c-4ba8-b336-968d0def830a", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Secure Coding for Distributed Computing" ], [ "subsection", "Summary and Lessons Learned" ] ], "subsections": [], "title": "Summary and Lessons Learned" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:applications}\n\begin{figure}\n\centering\n\includegraphics[width=\linewidth]{nfv.pdf}\n\caption{\small Application of CDC to the Network Function Virtualization (NFV) model for uplink channel decoding.}\n\label{fig:nfv}\n\end{figure}", "id": "825a0578-3fe6-4ffc-8428-dfabbdc2d42a", "level": "section", "origin_cites_number": 0, "parent_id": "43fb2934-e7bf-4933-946e-becc16dd4285", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "CDC Applications" ] ], "subsections": [ "6f2cf0e3-4854-4438-b7c6-37b57226ff9d", 
"f8c9cb00-a005-4862-b9c8-df9e98e5996f", "eb23bbfe-351d-4db3-8737-3055dd885cdb" ], "title": "CDC Applications" }, { "cite_extract_rate": 0, "cites": [], "content": "Network Function Virtualization (NFV) serves as an enabling technology for optimizing 5G as well as emerging 6G networks , presenting a promising paradigm shift in the telecommunication service provisioning industry. By leveraging on virtualization technologies, NFV simplifies the management and operation of networking services. In particular, NFV decouples the Network Functions (NFs), such as routing and baseband processing, from the physical network equipment on which they operate. The NFs are mapped to Virtual Network Functions (VNFs) that are supported by Commercial Off-The-Shelf (COTS) physical resources, which provide storage, networking and computational capabilities. As the software component in the network is decoupled from the hardware component by a virtualization layer, the VNFs can be easily deployed at distributed network locations that have the required hardware resources. Due to the flexibility in the deployment of the VNFs, NFV brings about a significant reduction in operating and capital expenses. Moreover, the development of new networking services is faster and cheaper as the COTS resources can be instantiated easily to provide the required network connection services. Extensive research has been carried out on various aspects of NFV such as architectural designs , resource allocation , energy efficiency , performance improvement and security . More details can be found in .\nHowever, one of the limiting factors of the performance of NFV lies in the reliability of the COTS hardware resources . Hardware failures due to several factors, such as component malfunctioning, temporary unavailability and malicious attacks, affect the implementation of NFV, hindering the provision of services. 
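To see why coded redundancy across VMs can help, consider a back-of-envelope reliability calculation (our own illustration with an assumed i.i.d. per-VM failure probability, not figures from the cited works): a job split into $k$ uncoded sub-tasks fails if any VM fails, whereas an $(n,k)$ MDS-coded version survives as long as any $k$ of the $n$ VMs do.

```python
from math import comb

def mds_success_prob(n, k, p):
    """P(job completes) when sub-tasks are (n, k) MDS-coded across n VMs,
    each failing independently with probability p: any k survivors suffice."""
    return sum(comb(n, i) * (1 - p) ** i * p ** (n - i) for i in range(k, n + 1))

p = 0.1                               # assumed per-VM failure probability
uncoded = (1 - p) ** 4                # all 4 uncoded sub-tasks must survive
coded = mds_success_prob(6, 4, p)     # 2 redundant coded sub-tasks added
print(f"uncoded: {uncoded:.4f}, (6,4)-coded: {coded:.4f}")
# roughly 0.66 uncoded versus 0.98 coded
```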
Apart from the fault-tolerant virtualization strategies that are based on the diversity approach, which maps VNFs onto various virtual machines (VMs) such that the probability of a disruptive failure is minimized , a coding approach can also be used to minimize computation latency in NFV. \nIn the study of , the authors consider the highly complex uplink channel decoding of the Cloud Radio Access Network (C-RAN) architecture , which is a key application of NFV. In this system, the users communicate with the cloud via a Remote Radio Head (RRH). In order to ensure the reliability of channel decoding, the data frames received by the RRH are encoded by leveraging on their algebraic structures before being distributed to the different VMs as shown in Fig.~\ref{fig:nfv}. The simulation results show that the coding approach achieves a lower probability of decoding error at the cloud than the diversity-based approach. However, several assumptions are made in this proposed scheme. First, a binary symmetric communication channel is assumed between the users and the RRH. By considering other communication channels, such as additive Gaussian noise channels, different coding techniques may be applied. Second, this simple framework works well for a network with three processing nodes, but its performance for larger networks is not guaranteed. \nConsidering the same issue of uplink channel decoding in the C-RAN architecture, the authors in propose a more generalized coded computation framework that works for any number of servers, random computing runtimes and random packet arrivals by adopting the coding approach proposed in . 
Given the randomness in the arrivals of data frames transmitted by the users, two queue management policies are considered: (i) per-frame decoding, where one frame is decoded at any point in time, and (ii) continuous decoding, where the servers start to decode the next packet of a data frame upon completion of the first packet. There is an underlying tradeoff between the average decoding latency and the frame unavailability probability, which is an indication of the reliability of the decoding process at the servers. The simulation results show that properly designed NFV codes are useful in achieving the desired tradeoffs by optimizing the minimum distance of the codes.\nThe idea of adopting a coding approach in NFV is relatively new. Different coding techniques can be explored in the future. Besides, instead of addressing only the uplink channel decoding, which is the most computationally-intensive baseband function , other network functions such as routing and security functions can be considered.", "id": "6f2cf0e3-4854-4438-b7c6-37b57226ff9d", "level": "subsection", "origin_cites_number": 6, "parent_id": "825a0578-3fe6-4ffc-8428-dfabbdc2d42a", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "CDC Applications" ], [ "subsection", "Network Function Virtualization (NFV)" ] ], "subsections": [], "title": "Network Function Virtualization (NFV)" }, { "cite_extract_rate": 0, "cites": [], "content": "With the enhanced sensing capabilities of end devices, an overwhelming amount of data is produced at the edge of the network today. Traditional schemes of computation offloading to the cloud are thus unsustainable. Moreover, certain edge applications may involve end devices in remote areas that have limited connectivity. This necessitates a paradigm shift towards edge computing, in which computation is performed closer to the edge of the network where data is produced. 
However, resource-constrained devices may not be able to carry out complex computations individually, especially given the increasing size and complexity of state-of-the-art AI models . As such, one of the enabling technologies of edge computing is cooperative computation, in which the available resources of end devices and edge nodes, e.g., road side units in vehicular networking, can be pooled together to execute computation-intensive tasks collaboratively . For ease of exposition, we refer to these participating end devices and edge nodes as workers in this section.\nAs the number of devices connected to the network increases, more information needs to be exchanged among the workers, resulting in a high communication load. However, the communication bandwidth is fixed, and thus the network is unable to handle the high communication load, creating a bottleneck. Moreover, the heterogeneous nature of workers in the edge computing paradigm, e.g., in terms of computational and communication capabilities, can lead to straggler effects. In the face of these challenges, coding techniques can be used.\nIn the study of , the \emph{coded wireless distributed computing} (CWDC) framework is proposed. The system model consists of multiple devices, i.e., workers, involved in cooperative computation. As an illustration, a worker may have an input, e.g., an image, that has to be processed, e.g., for object recognition. Individually, a worker may not have the storage or computational capabilities to execute the task. Therefore, the inference model may be split and stored on each worker, whereas the cooperative computation of results can be implemented following the MapReduce framework as discussed in Section~\ref{subsec:mapreduce}. An access point, e.g., a base station or a Wi-Fi router, can then be utilized to facilitate the exchange of intermediate results among workers. 
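The coded shuffle that such MapReduce-style cooperation relies on can be illustrated with the classic three-node, six-file toy example (our own stand-in map function and byte values; each file is mapped on two nodes):

```python
# Three nodes, six files, replication factor r = 2: node k reduces query q_k
# and needs one intermediate value v(q_k, f) per file f, four of which it
# computes locally from its own storage.
files = {f"f{j}": bytes([j]) * 4 for j in range(1, 7)}
storage = {1: ["f1", "f2", "f3", "f4"],
           2: ["f3", "f4", "f5", "f6"],
           3: ["f1", "f2", "f5", "f6"]}

def v(q, fname):
    """Stand-in map function: intermediate value of query q on file fname."""
    return bytes((q * 31 + b) % 256 for b in files[fname])

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Coded shuffle: each node multicasts ONE packet that serves both others.
x1 = xor(v(2, "f1"), v(3, "f3"))   # node 1 stores f1 and f3
x2 = xor(v(3, "f4"), v(1, "f5"))   # node 2 stores f4 and f5
x3 = xor(v(1, "f6"), v(2, "f2"))   # node 3 stores f6 and f2

# Node 1 is missing v(1, f5) and v(1, f6); it cancels the side information
# it can compute locally (it stores f4 and f2):
assert xor(x2, v(3, "f4")) == v(1, "f5")
assert xor(x3, v(2, "f2")) == v(1, "f6")
# Nodes 2 and 3 decode symmetrically: 3 coded multicasts replace 6 uncoded
# unicasts, halving the shuffle load.
```

Each multicast is simultaneously useful to two nodes precisely because the repeated file placement guarantees every recipient already holds one of the two XORed values.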
The proposed framework achieves communication loads that are independent of the size of the network and the storage size of the workers. Moreover, the CWDC framework can be generalized and applied to different types of applications.\nIn practical distributed computing systems, the workers may have heterogeneous computational, communication, and storage capabilities. Based on a similar system model proposed in where the workers communicate with each other via an access point, the study in considers devices with heterogeneous storage capacities over wireless networks. For uplink transmission, the allocation of files is based on the scheme proposed in as previously discussed in Section~\ref{subsec:files}, whereas for downlink transmission, data is encoded at the access point for the reduction of the downlink communication load. However, the achievable scheme has only been validated in a small network that consists of just three processing nodes. \n\begin{figure}\n\centering\n\includegraphics[width=\linewidth]{flmodel.pdf}\n\caption{\small Illustration of the Coded Federated Learning (CFL) scheme in an FL system that consists of multiple data owners and an FL model owner.}\n\label{fig:flmodel}\n\end{figure}\nIn light of the growing popularity of machine learning model training at the edge, the study of considers the distribution of gradient descent computations across workers in the network to train a linear regression model. The proposed heterogeneous coded gradient descent (HCGD) assigns each worker an optimal load partition by modelling the computation delay of devices with a shifted exponential distribution. In consideration of data privacy, the authors in propose the Coded Federated Learning (CFL) approach for privacy-preserving linear regression. 
Federated Learning (FL) is a privacy-preserving distributed machine learning paradigm proposed in , in which the sensitive data of data owners are kept locally, whereas only the model parameters are transmitted to the central server for aggregation to update the model. However, FL still suffers from the issues of straggling devices and communication inefficiency . As such, in the proposed CFL scheme, each data owner first generates parity data from its local data for transmission to the central server. At the central server, gradients are also computed from the parity data simultaneously, such that only a subset of gradients from the data owners have to be received for completion of the model update. Figure~\ref{fig:flmodel} illustrates the implementation of the CFL scheme. The CFL approach is also shown to converge almost four times faster than an uncoded approach. However, the communication and computation costs involved in generating and transmitting the parity data have not been well elaborated in the study. Clearly, a major weakness of the aforementioned studies is that they can only be applied to linear regression model training. The study of extends the aforementioned works with CodedFedL, which is designed to mitigate stragglers in non-linear regression and classification tasks.\nIn some cases, the resource level of the devices may not be known by the network operator. To enable a dynamic and adaptive coded sub-task allocation for cooperative computation, an Automatic Repeat reQuest (ARQ) mechanism is proposed in the study of , in which devices are allocated specific numbers of packets for computation based on their responsiveness. Specifically, devices that are more responsive are assumed to have more available resources. 
These devices will thus be assigned more sub-tasks for computation.\nBeyond reducing the communication load, it is also important to consider communication efficiency, i.e., the achieved data rates, especially in wireless networks that have limited spectral resources or networks with mutual interference among users. In order to improve spectral efficiency, the co-channel communication model is proposed in the study of which consists of two stages, i.e., the uplink multiple access stage and the downlink broadcasting stage. The communication model turns out to be equivalent to a multiple-input-multiple-output (MIMO) interference channel. Interference alignment has been an effective approach in handling mutual interference among users. The signals are precoded into the same subspaces at the unintended receivers, and the desired signals are recovered at the intended receivers by using the decoding matrix. A linear coding scheme is adopted to establish the conditions for interference alignment. A low-rank optimization problem is formulated to minimize the number of channel uses subject to the established interference alignment conditions. By solving this optimization problem, the achievable symmetric degree-of-freedom (DoF), which indicates the extent to which interference is eliminated, can be maximized. In , an efficient difference-of-convex-functions (DC) algorithm based on a DC representation of the rank function is proposed to solve the low-rank optimization problem. The performance of the DC algorithm is evaluated in two different scenarios by varying: (i) the number of files stored in the devices, and (ii) the number of antennas equipped by the devices. In the first scenario, as the number of files stored in the devices increases, the achievable DoF increases. Similarly, in the second scenario, as the number of antennas equipped by the devices increases, the achievable DoF increases. 
Furthermore, the simulation results show that the DC approach achieves a higher DoF than the existing benchmark algorithms, e.g., the iterative reweighted least squares (IRLS) algorithm and the nuclear norm relaxation approach. However, the proposed scheme is based on a homogeneous network, i.e., the number of files stored is the same across all devices and all devices are equipped with the same number of antennas. As an extension, heterogeneous networks can be considered. \nInstead of communicating with each other via an access point, the devices can communicate directly with each other over wireless interference channels . In particular, the transmission of data in the Shuffle phase operates over wireless interference channels. While the CDC scheme in allows communications based on a time-division multiple access (TDMA) scheme, which allows each processing node to transmit one coded information packet at any time, the one-shot linear scheme adopted in allows more than one processing node to transmit information simultaneously in any given time slot. The transmitted symbols are linear combinations of coded intermediate results from the processing nodes. The transmitted symbols are broadcast to the processing nodes, following which the nodes can decode to recover the desired information. The study in characterizes an improved computation-communication tradeoff as compared to the study in . However, the proposed scheme operates under the assumption of perfect channel state information (CSI), where the CSI is available to all processing nodes. As such, the authors in propose a superposition coding scheme that performs better than the CDC scheme and the one-shot linear scheme under imperfect CSI conditions. 
The study is extended to account for the presence of stragglers in the networks .\nIn general, the aforementioned studies have considered conventional implementations of the CDC scheme in which each device has to complete the computation task first, before transmitting the intermediate results, e.g., to other devices via the access point. However, given the limited computational capabilities of devices, the latency involved can still be significant. As such, the studies in and consider the batch-processing based coded computing (BPCC) framework, which allows each device to return partially completed results to a master node in batches, even before the task is fully completed. These partial results are then utilized to generate approximations to the full solution through the singular value decomposition (SVD) approach . This is particularly useful in applications that require fast, but not necessarily optimized, results. The effectiveness of the BPCC scheme is validated on Amazon EC2 computing clusters, where a reduction in latency is demonstrated. Moreover, in consideration of the energy limitations of Unmanned Aerial Vehicles (UAVs), the BPCC framework was proposed for UAV-based mobile edge computing to provide energy-efficient computation offloading support on-demand .", "id": "f8c9cb00-a005-4862-b9c8-df9e98e5996f", "level": "subsection", "origin_cites_number": 5, "parent_id": "825a0578-3fe6-4ffc-8428-dfabbdc2d42a", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "CDC Applications" ], [ "subsection", "Edge Computing" ] ], "subsections": [], "title": "Edge Computing" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, we have discussed applications of CDC in NFV and edge computing. 
The lessons learned are as follows:\n\begin{itemize}\n\item The convergence of edge computing and machine learning has given rise to edge intelligence, in which the computational capabilities of edge servers and devices have been increasingly leveraged to conduct machine learning model training closer to where the data is produced. While this leads to several benefits, e.g., lower training latency and enhanced privacy preservation, the problem of straggling devices is still a bottleneck. As such, CDC approaches have been increasingly applied in this context.\n\item One major difference between distributed computing at the edge and at computing clusters is that the end devices and edge servers, i.e., workers, are not specifically dedicated to computations. For example, the workers may only share a fraction of their processing power at time slots in which they are idle. As such, the heterogeneity in the computational capabilities of workers may also be greater in this regard. In the face of these challenges, optimal load partition and allocation strategies have been explored. Moreover, in future works, the dynamics of the system may be captured using Deep Reinforcement Learning-based resource allocation .\n\item In edge intelligence, there may be several data owners involved. Moreover, the data owners can be devices with computational constraints. As such, conventional techniques of data encoding before transmission to workers for computation may not be feasible. To meet this challenge, FL has been proposed in the work of , whereas CFL and CodedFedL have been proposed to mitigate the straggler effects in FL. However, the proposed methods are highly restrictive. For example, CFL can only be applied to linear regression problems, whereas both methods require costly computation and transmission of parity data. Moreover, they have not been implemented on typical end devices to validate their feasibility in practice. 
For future works, studies on using CDC schemes in edge computing applications may adopt the approach of , in which the schemes are implemented in the context of practical hardware constraints.\n\end{itemize}", "id": "eb23bbfe-351d-4db3-8737-3055dd885cdb", "level": "subsection", "origin_cites_number": 1, "parent_id": "825a0578-3fe6-4ffc-8428-dfabbdc2d42a", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "CDC Applications" ], [ "subsection", "Summary and Lessons Learned" ] ], "subsections": [], "title": "Summary and Lessons Learned" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:challenges}\nThe utilization of CDC schemes to solve the implementation challenges of distributed computing systems is a relatively recent approach. There are still challenges and open issues that have yet to be addressed, and this provides opportunities for new research directions in the future. We present major challenges that need to be looked into for the effective implementation of CDC schemes.\n\begin{itemize}\n\item \emph{Heterogeneous nodes: }As compared to traditional distributed computing clusters, heterogeneities among computing nodes are much more significant when the computing nodes are connected in edge computing networks, e.g., smartphones and wearable devices. Many studies, e.g., , , , have considered heterogeneous systems where the processing nodes have different computational, communication and storage capabilities. For example, in , a joint file and function allocation strategy is proposed to assign jobs to the processing nodes such that the communication load is minimized. In , the computation load is allocated based on the capabilities of the processing nodes. However, other aspects of heterogeneity, such as the reputation of the processing nodes and their willingness to participate, are not taken into account. 
New allocation strategies need to consider different aspects of heterogeneity of the processing devices so that coding techniques can be implemented effectively and securely. For example, computation tasks can be allocated to workers with higher reputation, which implies a higher probability of the workers completing their allocated tasks.\n\item \emph{Encoding and decoding complexities: }The studies that we have discussed in Section~\ref{sec:comms} and Section~\ref{sec:stragglers} mainly minimize the communication load in the Shuffle phase and the computation latency in the Map phase, respectively. However, the complexities of encoding and decoding are often not evaluated. It is important to ensure low encoding and decoding complexities in order to minimize the overall job execution time. Otherwise, the speedup gains achieved in specific phases, e.g., the communication and computation phases, may be offset by high encoding and decoding complexities. For example, the UberShuffle algorithm incurs high computational overhead in fast broadcast environments, i.e., networks with large bandwidth, such that it is not feasible for implementation even though it achieves significant shuffling gain. Hence, to better assess the performance of CDC schemes, the complexities of the encoding and decoding methods have to be evaluated.\n\item \emph{Non-static computing nodes: }For the commonly used distributed computing models such as cluster computing, grid computing and cloud computing that are discussed in Section~\ref{sec:fundamentals}, the computing nodes are static, i.e., the nodes are located at fixed locations. For example, the servers are located at specific data centers. Data required for computations is transmitted over wireless communication channels to the servers. 
However, as the edge devices, e.g., IoT devices, wearable devices and vehicles, have greater communication and computational capabilities, new distributed computing models such as mobile edge computing , and fog computing have been developed recently. In , the basic CDC scheme is implemented in the context of fog computing. However, the edge devices are usually mobile. The data that is processed by the edge devices depends on the locations that they visit , and hence the master node has no control of the data distribution to the workers. New coding approaches for edge and fog computing which involve moving workers can be proposed in future works. One proposed solution is to allocate Reduce functions based on the data stored at the processing nodes .\n\\item \\emph{Security concerns: }Coding techniques are able to mitigate the straggler effects while preserving privacy as shown in the studies of , and . The proposed secure coding techniques are an extension to the coding techniques that are originally proposed to mitigate the straggler effects. As mentioned in , the tradeoff between resiliency against straggling workers, security against malicious workers and information-theoretic privacy amid colluding workers needs to be carefully managed. In addition, there may exist eavesdroppers which tap on the less secure communication links between the master node and the workers. As such, more research effort can be directed towards developing weakly secure systems which prevents the eavesdroppers from retrieving sensitive information. \n\\item \\emph{Network architectures: }It is important to consider how the computing nodes are connected and communicate with each other for effective implementation of the CDC schemes. For example, in and , the authors introduce a hierarchical structure where the master communicates with multiple submaster and each submaster leads a group of computing nodes. 
From the studies that we have reviewed, the network architecture is only considered for the implementation of CDC schemes to reduce communication load. However, it is also an important consideration when designing secure coding schemes. In practice, it may be safe for computing nodes within a group, e.g., from the same location, to share information freely with each other but not with computing nodes from another group, e.g., from a different location. In addition, the communication channels are not perfect, i.e., they may not have perfect CSI or the transmitted information may have missing entries. Some networks may have limited spectral resources or suffer from mutual interference among users . As such, future research can work towards designing coding schemes that can be implemented in practical distributed computing systems. Besides the need to design effective coding schemes, there is also a need to consider the design of low-cost, easily implementable and scalable network architectures to which the coding schemes can be applied.\n\\item \\emph{Different computation frameworks: }Currently, most of the studies are based on the MapReduce computing model. Specifically, Coded MapReduce is proposed in by implementing coding techniques in the MapReduce framework. However, there are limitations to the MapReduce model that hinder its wide adoption for all types of distributed computation tasks, as explained in Section~\\ref{subsec:mapreduce}. In fact, there are other computing models such as Spark, Dryad and CIEL which support iterative algorithms, for which the feasibility of implementing coding techniques has not been explored. As such, the importance of these computing models motivates future directions such as the design of coding schemes that are specific to these computing models in order to solve any distributed computation task, e.g., convolution, Fourier transform and non-linear computations. 
\n\\item \\emph{Coding for both communication reduction and stragglers mitigation: }As we have discussed previously in Section~\\ref{sec:comms} and Section~\\ref{sec:stragglers}, coding techniques are used to either reduce communication load or mitigate the straggler effects. However, coding techniques cannot be applied to solve these implementation challenges simultaneously. As characterized in , there is a tradeoff between communication load and computation latency. However, in an ideal situation, both communication load and computation latency need to be minimized. Thus, it is important to carefully manage the tradeoff to achieve optimal performance of the distributed computing systems. For future works on CDC schemes, there is a need to improve the latency-communication tradeoff curve so that the time taken to execute the allocated tasks can be significantly reduced. In addition, instead of the two-dimensional tradeoff, the three-dimensional tradeoff between computation, communication and storage cost , which is much more challenging to manage, should be considered.\n\\item \\emph{CDC applications: }Given the advantages, CDC schemes are implemented in distributed computing applications such as the NFV and edge computing. Apart from the UAVs, CDC schemes can be extended to edge computing applications in other areas, e.g., vehicular networks, healthcare systems and industrial operations. For the studies that we have reviewed in CDC applications, the main focus lies in the implementation of coding techniques in various applications, without considering privacy and security. Given the importance of secure coding as discussed in Section~\\ref{sec:security}, secure coding techniques need to be considered in the implementation of CDC applications. Besides, application-specific issues need to be addressed. 
For example, in vehicular networks where the vehicles are constantly moving, the CDC schemes need to be robust to vehicles which do not have consistent access to the wireless communication channels.\n\\end{itemize}\nThe idea of using coding techniques to overcome the challenges in distributed computing systems is relatively new. For effective implementation in practical distributed computing systems, various aspects such as the heterogeneities of the computing nodes and the network architectures warrant in-depth study. The promising research directions presented in this survey serve as useful guidelines and valuable references for future research in CDC.", "id": "8c1cb679-0c49-4078-b259-687a2df3303d", "level": "section", "origin_cites_number": 6, "parent_id": "43fb2934-e7bf-4933-946e-becc16dd4285", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Challenges, Open Issues and Future Works" ] ], "subsections": [], "title": "Challenges, Open Issues and Future Works" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:conclusion}\nIn this paper, we provided a tutorial on CDC schemes and a comprehensive survey of the two main lines of CDC works. We first motivated the need for CDC schemes, showing that the performance of current distributed computing systems can be improved using coding schemes. Then, we described the fundamentals and principles of CDC schemes. We also reviewed CDC works which aim to minimize communication costs, mitigate straggler effects, and enhance privacy and security. In addition, we discussed the implementation of CDC schemes in practical distributed computing applications. 
Finally, we highlighted the challenges and discussed promising research directions.\n\\bibliographystyle{ieeetr}\n\\bibliography{cdc}\n\\end{document}", "id": "5a390405-bf12-44a0-aa67-38cecb4ab9be", "level": "section", "origin_cites_number": 0, "parent_id": "43fb2934-e7bf-4933-946e-becc16dd4285", "prefix_titles": [ [ "title", "A Survey of Coded Distributed Computing" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
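The straggler-mitigation idea running through the CDC discussion above can be made concrete with a small sketch. The following example is ours, not code from the survey: it shows an (n, k) MDS-style coded matrix-vector multiplication in which the master encodes k data blocks into n coded blocks via a Vandermonde encoding matrix, and recovers the full product from any k worker results, tolerating up to n - k stragglers. All variable names and parameter values are illustrative.

```python
import numpy as np

# Illustrative (n, k) coded matrix-vector multiplication sketch.
rng = np.random.default_rng(0)
k, n = 3, 5                      # k data blocks, n workers; tolerates n - k stragglers
A = rng.standard_normal((6, 4))  # data matrix, split row-wise into k blocks
x = rng.standard_normal(4)

blocks = np.split(A, k)          # sub-matrices A_1, ..., A_k
# Vandermonde encoding matrix: any k of its n rows are linearly independent.
G = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)

# Encoding: worker i stores the coded block sum_j G[i, j] * A_j.
coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]

# Map phase: each worker returns its coded partial product.
partials = [c @ x for c in coded]

# Suppose workers 1 and 4 straggle; decode from any k = 3 survivors.
survivors = [0, 2, 3]
Y = np.stack([partials[i] for i in survivors])
coeff = np.linalg.solve(G[survivors, :], Y)   # recovers each A_j @ x
result = coeff.reshape(-1)                    # concatenation equals A @ x
assert np.allclose(result, A @ x)
```

The decoding step is just a k-by-k linear solve, which hints at why, as noted in the challenges above, encoding and decoding complexities must be accounted for when assessing the end-to-end speedup.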
83
[]
null
[ "Ziqiang Li", "Muhammad Usman", "Rentuo Tao", "Pengfei Xia", "Chaoyue Wang", "Huanhuan Chen", "Bin Li" ]
A Systematic Survey of Regularization and Normalization in GANs
2020
2020-08-19T12:52:10Z
cs.LG
Generative Adversarial Networks (GANs) have been widely applied in different scenarios thanks to the development of deep neural networks. The original GAN was proposed based on the non-parametric assumption of the infinite capacity of networks. However, it is still unknown whether GANs can fit the target distribution without any prior information. Due to this overly strong assumption, many issues remain unaddressed in GANs' training, such as non-convergence, mode collapses, and gradient vanishing. Regularization and normalization are common methods of introducing prior information to stabilize training and improve discrimination. Although a number of regularization and normalization methods have been proposed for GANs, to the best of our knowledge, there exists no comprehensive survey that primarily focuses on the objectives and development of these methods, apart from a few limited-scope studies. In this work, we conduct a comprehensive survey on the regularization and normalization techniques from different perspectives of GANs training. First, we systematically describe the different perspectives of GANs training and thus obtain the different objectives of regularization and normalization. Based on these objectives, we propose a new taxonomy. Furthermore, we compare the performance of the mainstream methods on different datasets and investigate the applications of regularization and normalization techniques that have been frequently employed in state-of-the-art GANs. Finally, we highlight potential future directions of research in this domain. Code and studies related to the regularization and normalization of GANs in this work are summarized at https://github.com/iceli1007/GANs-Regularization-Review.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "4410407c-7d7c-48f1-8174-a9a95319844f", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ] ], "subsections": [ "29469c09-cd5f-4556-9382-b0a3b4de184e", "5fdbf0be-ea34-468a-9943-d62d05b593ca", "8469161e-625b-422b-90e0-a6cca9638a04", "fdfb947c-602c-4735-9040-96d30810700e", "44c32216-8be6-4e5c-aa2f-ca241ea4139f", "bc57c604-a3ec-457f-ab6f-07cf5243c526", "fb4135aa-ff8b-439d-9a28-a790ffddd379", "b359d174-4880-4149-b0f4-5fd0e7bd75f4", "f6a7c46a-3c0c-4605-a602-4104833c8854" ], "title": "root" }, { "cite_extract_rate": 0.7213114754098361, "cites": [ 63, 89, 73, 86, 68, 85, 78, 76, 8317, 53, 60, 6998, 66, 57, 70, 88, 80, 6996, 72, 54, 58, 62, 59, 7190, 55, 69, 77, 6997, 82, 74, 56, 81, 87, 6999, 8316, 84, 71, 61, 75, 67, 65, 83, 64, 79 ], "content": "\\setlength{\\abovecaptionskip}{0.cm}\n\tGenerative adversarial networks (GANs) have been widely used in computer vision, such as image inpainting , style transfer , text-to-image translations , and attribute editing . GANs training is a two-player zero-sum game between a generator and a discriminator, which can be understood from different perspectives: (i) \"Real \\& Fake\" , (ii) \"Fitting distribution\" , and (iii) \"Training dynamics\" . GANs training suffers from several issues, for instance: non-convergence , mode collapses , gradient vanishing , overfitting , discriminator forgetting and deficiency , and hyperparameters sensitivity . Many solutions to mitigate these issues have been proposed, focusing on designing new architectures , loss functions , optimization methods , regularization , and normalization . Among them, regularization and normalization techniques are compatible with loss functions, model structures, and tasks, which has attracted the attention of scholars. 
\n\tRegularization and normalization are widely applied in neural network training to introduce prior knowledge. For supervised tasks, regularization has been proposed in the literature to provide advantages such as overfitting prevention , semi-supervised assumptions , manifold assumptions , feature selection , and low rank representation . On the other hand, normalization is advantageous for Stochastic Gradient Descent (SGD) , accelerating convergence and improving accuracy. Whereas regularization and normalization merely enhance supervised tasks, they are indispensable in weakly supervised and unsupervised tasks. GANs' training is a two-player zero-sum game whose solution is a Nash equilibrium. The proposal of the standard GAN is based upon the non-parametric assumption of the infinite capacity of networks, in an unsupervised learning task. Moreover, a number of research studies targeting GANs training from different perspectives argue that unconstrained training causes unstable training (of the generator and discriminator ) and significant bias between real images and fake images (in the attribute domain and frequency domain ). Therefore, a substantial amount of prior knowledge should be introduced into GANs training through regularization and normalization.\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[scale=.52]{figs/summary_new.pdf}\n\t\t\\caption{The overview of different perspectives of GANs training and the summary of the regularization and normalization for GANs.}\n\t\t\\label{fig:overview}\n\t\\end{figure}\n\tRegularization and normalization have been used effectively to stabilize training and improve the performance of GANs in the existing literature . Due to the diverse nature of the topic, there is a need for a systematic literature survey. Some related studies exist ; however, they either lack comprehensive coverage of the topic, miss detailed background information and theoretical analysis, or do not correlate different methods. 
In this paper, based on the different perspectives of GANs training, we propose a new taxonomy, denoted as \textbf{"Real \& Fake"}, \textbf{"Fitting distribution"}, \textbf{"Training dynamics"}, and \textbf{"Other methods"}, for a better understanding of regularization and normalization during GANs training, as depicted in Figure \ref{fig:overview}. {\textbf{"Real \& Fake"} is the low-level (intuitive) perspective of GANs, in which GANs are considered a "counterfeiters-police" competition. At this level, D estimates the probability that both real and fake samples are real, which is similar to a binary classification task. Therefore, prior information and additional supervision tasks from classification are also needed during the training of the discriminator. Based on these, some regularization methods, such as {\textit{Data Augmentation and Preprocessing}}}\footnote{Data augmentation and preprocessing introduce additional data and priors, which is similar to regularization. More importantly, both consistency regularization and self-supervision need different data transformation operations. Hence, this paper also discusses some works on this.}, {\textit{Consistency Regularization}, and {\textit{Self-Supervision}} are proposed to mitigate overfitting, improve the representation of the discriminator, and avoid discriminator forgetting by introducing additional supervised information and data; \textbf{"Fitting distribution"} is the middle-level perspective of GANs. At this level, the generator is considered a distribution mapping function, and the discriminator a distribution divergence. Among various distances, the Wasserstein distance is a popular form, and Lipschitz continuity is a necessary condition for realizing the Wasserstein distance. 
Based on these, \textit{Gradient Penalty}, \textit{Weight Normalization}, \textit{Weight Regularization}, and \textit{Gradient} \textit{Normalization} are used to enforce Lipschitz continuity and ensure the training stability of the discriminator; \textbf{"Training Dynamics"} is the high-level (essential) perspective of GANs. At this level, GANs training is a two-player zero-sum game whose solution is a Nash equilibrium. To achieve theoretical local convergence, \textit{Jacobian Regularization} needs to be used. Finally, \textbf{"Other methods"}, containing \textit{Layer} \textit{ Normalization} and \textit{Inverse Gradient Penalty}, are used for conditional generation and easing mode collapse, respectively}\n\tIn summary, we make the following contributions in this survey:\n\t\begin{itemize}\n\t\t\item \textit{Comprehensive analysis of GANs training.} In this study, we analyze GANs training from three perspectives: "Real \& Fake", "Fitting distribution", and "Training dynamics". To the best of our knowledge, this survey is the first in this domain with such a comprehensive analysis.\n\t\t\item \textit{New taxonomy.} Based on the analysis of GANs training from different perspectives, we propose a novel taxonomy and contextualize the regularization and normalization of GANs comprehensively.\n\t\t\item \textit{ Comparison and analysis.} Following the taxonomy, we also provide quantitative and qualitative analysis and comparison for each type of regularization and normalization technique, which helps researchers and practitioners navigate this space.\n\t\end{itemize}\n\t{\textbf{The Scope of This Survey.} This survey aims to systematically analyze the prevalent problems in GANs training, such as non-convergence, mode collapses, gradient vanishing, and discriminator overfitting. Accordingly, different regularization and normalization technologies have been summarized. 
Of course, not all regularization and normalization technologies for GANs are covered in this survey. Since some regularization and normalization technologies are highly dependent on the task and data at hand, we recommend looking for domain-specific regularization and normalization techniques in the following reviews: data-efficient generation , medical image generation, image super-resolution , biomedical informatics generation , spatio-temporal data generation , text generation . Our survey is concerned with general technologies in GANs, which are not dependent on model structures, data, and tasks. We hope our study can provide general and universal insights into GANs for the community.}\n\tThe rest of this paper is organized as follows: Section 2 introduces the background and different training perspectives of GANs. Sections 3, 4, 5, and 6 describe regularization and normalization methods in different groups, respectively. Furthermore, we investigate the applications of regularization and normalization techniques that have been frequently employed in SOTA GANs in Section 7 and discuss the current problems and prospects for future work in Section 8.", "id": "29469c09-cd5f-4556-9382-b0a3b4de184e", "level": "section", "origin_cites_number": 61, "parent_id": "4410407c-7d7c-48f1-8174-a9a95319844f", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "5fdbf0be-ea34-468a-9943-d62d05b593ca", "level": "section", "origin_cites_number": 0, "parent_id": "4410407c-7d7c-48f1-8174-a9a95319844f", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Background and Three Perspectives of GANs Training" ] ], "subsections": [ "5a1f5e65-9650-482a-9b5a-2625a5dd0b96", "c2fed4a0-e8f0-4ddc-ab66-70cc7bfe76f8", 
"fd688225-4a59-4600-886f-76309f6e8ba9", "4e9cd8aa-b10a-44ef-a7b7-7d2880558ce2", "a915370f-85fe-4345-a91b-c8d67718061c", "a397f8d2-2658-471a-8c3b-cbde88271e09", "0edf29a6-bba8-43a2-a318-3f272fcb91d9" ], "title": "Background and Three Perspectives of GANs Training" }, { "cite_extract_rate": 0.27272727272727204, "cites": [ 91, 66, 57, 90, 71, 67 ], "content": "{Regularization and normalization are common and important techniques to introduce prior knowledge in neural networks.} Regularization is a technique to control the complexity of learning models. Weight decay is a typical method that minimizes the square of the weights together with the training loss in the training of neural networks , which can be used to improve generalization. \n\tIn Bayesian learning methods, such as the relevance vector machine , probabilistic classification vector machines , and others , regularization is interpreted as a prior distribution. Specifically, L2 regularization is equivalent to introducing a Gaussian prior on the parameters, and L1 regularization is equivalent to introducing a Laplace prior on the parameters. The theoretical connection between regularization and prior information has been investigated in neural network ensembles research . Regularization not only controls overfitting but also provides other properties such as semi-supervised assumptions , manifold assumptions , feature selection , low rank representation , and consistency assumptions . 
{Normalization is the mapping of data to a specified range, which is advantageous for the Stochastic Gradient Descent (SGD) , accelerating convergence and improving accuracy.}", "id": "5a1f5e65-9650-482a-9b5a-2625a5dd0b96", "level": "subsection", "origin_cites_number": 22, "parent_id": "5fdbf0be-ea34-468a-9943-d62d05b593ca", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Background and Three Perspectives of GANs Training" ], [ "subsection", "Regularization and Normalization" ] ], "subsections": [], "title": "Regularization and Normalization" }, { "cite_extract_rate": 0.75, "cites": [ 64, 92, 54 ], "content": "GANs are two-player zero-sum games, where generator (G) and discriminator ($D$) try to optimize opposing loss functions to find the global Nash equilibrium. In general, GANs can be formulated as follows:\n\t\\begin{equation}\n\t\t\\mathop{\\min}\\limits_{\\phi}\\mathop{\\max}\\limits_{\\theta} f(\\phi,\\theta)\n\t\t=\\mathop{\\min}\\limits_{\\phi}\\mathop{\\max}\\limits_{\\theta}\\mathbb{E}_{x\\sim p_{r}}[g_1(D_\\theta(x))]\n\t\t+\\mathbb{E}_{z\\sim p_z}[g_2(D_\\theta(G_\\phi(z)))],\n\t\t\\label{EQ:eqn1}\n\t\\end{equation}\n\twhere $\\phi$ and $\\theta$ are parameters of the generator $G$ and the discriminator $D$, respectively. $p_r$ and $p_z$ represent the real distribution and the latent distribution, respectively. {$g_1$and $g_2$ are different functions corresponding to different GANs. 
Specifically, vanilla GAN can be described as $g_1(t)=g_2(-t)=-\\log(1+e^{-t})$; $f$-GAN can be written as $g_1(t)=-e^{-t}$, $g_2(t)=1-t$; moreover, Geometric GAN and WGAN are described as $g_1(t)=g_2(-t)=-\\mathop{\\max}(0,1-t)$ and $g_1(t)$\\\\$=g_2(-t)=t$, respectively.} \n\t{Different from supervised learning, GANs training is an unsupervised learning task, which makes regularization and normalization especially necessary during the training of GANs.} In the following parts, we elaborate on the training of GANs from three perspectives: low level, the perspective of \"Real \\& Fake\"; middle level, the perspective of \"Fitting distribution\"; and high level, the perspective of \"Training dynamics\". According to these different perspectives of GANs, various regularization and normalization methods have been proposed for GANs training.\n\t\\begin{comment}", "id": "c2fed4a0-e8f0-4ddc-ab66-70cc7bfe76f8", "level": "subsection", "origin_cites_number": 4, "parent_id": "5fdbf0be-ea34-468a-9943-d62d05b593ca", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Background and Three Perspectives of GANs Training" ], [ "subsection", "GANs" ] ], "subsections": [], "title": "GANs" }, { "cite_extract_rate": 0.27272727272727204, "cites": [ 91, 66, 57, 90, 71, 67 ], "content": "Regularization is a technique to control the complexity of learning models. Weight decay is a typical method to minimize the square of weights together with the training loss in the training of neural networks , which can be used to improve generalization. \n\tIn Bayesian learning methods, such as relevance vector machine , probabilistic classification vector machines , and others , regularization is termed as prior distribution. Specifically, L2 regularization is equivalent to introducing Gaussian prior to the parameters, and L1 regularization is equivalent to introducing Laplace prior to the parameters. 
The theoretical connection between regularization and prior information has been investigated in neural network ensembles research . Regularization does not only control overfitting but also provide other characteristics like semi-supervised assumptions , manifold assumptions , feature selection , low rank representation , and consistency assumptions . {Normalization is the mapping of data to a specified range, which is advantageous for the Stochastic Gradient Descent (SGD) , accelerating convergence and improving accuracy.\n\t}\n\t\\end{comment}", "id": "fd688225-4a59-4600-886f-76309f6e8ba9", "level": "subsection", "origin_cites_number": 22, "parent_id": "5fdbf0be-ea34-468a-9943-d62d05b593ca", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Background and Three Perspectives of GANs Training" ], [ "subsection", "Regularization and Normalization" ] ], "subsections": [], "title": "Regularization and Normalization" }, { "cite_extract_rate": 0.5, "cites": [ 63, 92 ], "content": "At the low level, GANs are viewed as a \"counterfeiters-police\" competition: the generator (G) can be thought of as counterfeiters, trying to produce fake currency and use it undetected, while the discriminator (D) is analogous to the police, trying to detect the counterfeit currency. This competition drives both sides to improve their methods until the fake currency is indistinguishable from the real one. Generally, D estimates the probability that a sample, real or fake, is real, which is very similar to a binary classification task, while G generates fake samples similar to real ones. 
Hence, the loss function in Eq (\\ref{EQ:eqn1}) is formulated as:\n\t\\begin{equation}\n\t\t\\mathop{\\min}\\limits_{\\phi}\\mathop{\\max}\\limits_{\\theta} f(\\phi,\\theta)\n\t\t=\\mathop{\\min}\\limits_{\\phi}\\mathop{\\max}\\limits_{\\theta}\\mathbb{E}_{x\\sim p_{r}}[\\log(D_\\theta(x))]\n\t\t+\\mathbb{E}_{z\\sim p_z}[\\log(1-D_\\theta(G_\\phi(z)))],\n\t\t\\label{EQ:eqnganlevel1}\n\t\\end{equation}\n\twhere $f(\\phi,\\theta)$ is a binary cross-entropy function, commonly used in binary classification problems. Eq (\\ref{EQ:eqnganlevel1}) was proposed in the original GAN and can be optimized by alternating training. The training objective of the discriminator is:\n\t\\begin{equation}\n\t\t\\mathop{\\max}\\limits_{\\theta} \\mathbb{E}_{x\\sim p_{r}}[\\log(D_\\theta(x))]\n\t\t+\\mathbb{E}_{z\\sim p_z}[\\log(1-D_\\theta(G_\\phi(z)))],\n\t\t\\label{EQ:eqndiscriminator}\n\t\\end{equation}\n\twhich is the same as a binary classification task between real and generated images. However, the naive binary cross-entropy function suffers from several problems, such as gradient vanishing. {Gradient vanishing occurs when the discriminator separates real and generated images too confidently, leaving the generator without a useful optimization direction.} Accordingly, many techniques from classification, such as loss functions and regularization methods, have been used to improve the training of the discriminator. For instance, to overcome the gradient vanishing problem, Mao et al. propose LSGANs, which adopt the least squares loss function for the discriminator. The least squares loss function moves the fake samples toward the decision boundary even when they are correctly classified. Based on this property, LSGANs are able to generate samples that are closer to real ones. 
The loss functions of LSGANs can be defined as follows:\n\t\\begin{equation}\n\t\t\\begin{aligned}\n\t\t\t&\\mathop{\\min}\\limits_{\\theta} \\mathbb{E}_{x \\sim p_{r}}[(D_{\\theta}({x})-b)^{2}] \n\t\t\t+ \\mathbb{E}_{{z} \\sim p_{{z}}}[(D_{\\theta}(G_{\\phi}({z}))-a)^{2}], \\\\\n\t\t\t&\\mathop{\\min}\\limits_{\\phi} \\mathbb{E}_{z \\sim p_{z}}[(D_{\\theta}(G_{\\phi}({z}))-c)^{2}],\n\t\t\\end{aligned}\n\t\\end{equation}\n\twhere $b$ and $a$ are the targets that $D$ uses for real and fake samples respectively, and $c$ denotes the value that $G$ wants $D$ to assign to fake samples. The gradient vanishing problem of LSGANs only appears when $D_{\\theta}(G_{\\phi}({z}))=c$, which rarely holds. Furthermore, Lin et al. use the margin-maximizing SVM separating hyperplane to propose geometric GAN. The authors use the hinge loss to train the models, which can be formulated as:\n\t\\begin{equation}\n\t\t\\begin{aligned}\n\t\t\t&\\mathop{\\min}\\limits_{\\theta} \\mathbb{E}_{x \\sim p_{r}}[1-D_{\\theta}({x})]_{+}\n\t\t\t+ \\mathbb{E}_{{z} \\sim p_{{z}}}[1+D_{\\theta}(G_{\\phi}({z}))]_{+}, \\\\\n\t\t\t&\\mathop{\\min}\\limits_{\\phi} -\\mathbb{E}_{z \\sim p_{z}}D_{\\theta}(G_{\\phi}({z})),\n\t\t\\end{aligned}\n\t\\end{equation}\n\twhere $[x]_{+}=\\mathop{\\max}\\{0,x\\}$.\n\tThe motivation of GANs is to train the generator based on the output of the discriminator. Unlike the direct training objective of the classification task (minimizing cross-entropy loss), the objective of the generator is indirect (mediated by the discriminator output). Hence, the discriminator should provide a richer representation of how real or fake a sample is than a plain classifier does. More prior information and additional supervision tasks are therefore needed during the training of the discriminator. 
Based on these, some regularization methods, such as {\\textit{Data Augmentation and Preprocessing}}, {\\textit{Consistency Regularization}}, and {\\textit{Self-Supervision}} are proposed to improve the stability and generalizability of discriminator.", "id": "4e9cd8aa-b10a-44ef-a7b7-7d2880558ce2", "level": "subsection", "origin_cites_number": 4, "parent_id": "5fdbf0be-ea34-468a-9943-d62d05b593ca", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Background and Three Perspectives of GANs Training" ], [ "subsection", "Low Level: The Perspective of \"Real \\& Fake\"" ] ], "subsections": [], "title": "Low Level: The Perspective of \"Real \\& Fake\"" }, { "cite_extract_rate": 0.642857142857142, "cites": [ 88, 54, 64, 93, 94, 95, 78, 8318, 8317 ], "content": "At middle level, generator $G(z)$ is considered as a distribution mapping function that maps latent distribution $p_z(z)$ to generated distribution $P_g(x)$, and the discriminator $D(x)$ is a distribution distance that evaluates the distance between the target distribution $P_r(x)$ and the generated distribution $P_g(x)$ as illustrated in Figure \\ref{FIG}. For the optimal discriminator, the generator $G(z)$ tries to minimize the distance between $P_r(x)$ and $P_g(x)$. For instance, generator of the vanilla GAN\\footnote{Vanilla GAN, also known as standard GAN, is the first GAN model.} and $f$-GAN\\footnote{\\label{ft:5} $f$-GAN is a collective term for a type of GAN models whose discriminator minimizes $f$ divergence. $f$ divergence is the general form of KL divergence. It can be demonstrated as: $D_f(P||Q)=\\int q(x)f\\big(\\frac{p(x)}{q(x)}\\big)\\rm{d}x$, where $f$ is a mapping function from non-negative real numbers to real numbers ($\\mathbb{R}^*\\rightarrow\\mathbb{R}$) that satisfies: (1) $f(1)=0$. (2) $f$ is a convex function. 
To be more specific, KL divergence corresponds to $f(u)=u\\log u$ and JS divergence corresponds to $f(u)=-\\frac{u+1}{2}\\log\\frac{1+u}{2}+\\frac{u}{2}\\log u$. More details can be viewed in } are considered to minimize Jensen–Shannon (JS) divergence and $f$ divergence\\textsuperscript{\\ref{ft:5}}, respectively. When the conditions of LSGANs loss are set to $b-c=1$ and $b-a=2$, generator of the LSGAN considers the minimization of Pearson ${\\chi}^2$ divergence. Furthermore, generators of the WGAN-div\\footnote{Different from WGAN-div, WGAN minimize Wasserstein distance, not Wasserstein divergence.} and GAN-QP consider the minimization of Wasserstein divergence and Quadratic divergence, respectively.\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[scale=0.55]{figs/GAN.pdf}\n\t\t\\caption{The framework of GANs. $P_z$ is a latent space distribution, $P_r$ and $P_g$ represent the real distribution and the generated distribution, respectively.}\n\t\t\\label{FIG}\n\t\\end{figure}\n\tGenerator is a transportation map from $z$ to $p_g(x)$. In this section, we introduce the optimal transport and the optimal transport with regular term, which leads to the form of Wasserstein GANs with gradient penalty (WGAN-GP) and Wasserstein GANs with Lipschitz penalty (WGAN-LP), respectively. Wasserstein distance is a popular and important distance in GANs and it corresponds to the optimal transport of the generator. To solve the dual problem of Wasserstein distance, Lipschitz continuity is introduced, which is the reason why gradient penalty and weight normalization techniques are proposed in the GANs training.\n\t\\begin{comment}\n\tOptimal transport was proposed in the 18th century to minimize the transportation cost while preserving the measure quantities. 
Given the space with probability measures $(X,\\mu)$ and $(Y,\\upsilon)$, if there is a map $T:X\\rightarrow Y$ which is measure-preserving, then for any $B\\subset Y$, having:\n\t\\begin{equation}\\label{eqn1}\n\t\t\\int_{T^{-1}(B)}\\mathrm{d}\\mu(x)=\\int_B \\mathrm{d}\\upsilon(y).\n\t\\end{equation}\n\tWriting the measure-preserving map as $T_*(\\mu)=\\upsilon$. For any $x\\in X$ and $y\\in Y$, the transportation distance is defined as $c(x,y)$, the total transportation cost is given by:\n\t\\begin{equation}\\label{eqn1}\n\t\tC(T):=\\int_X c(x,T(x)) \\mathrm{d}\\mu(x).\n\t\\end{equation}\n\tIn the 18th century, Monge et al. proposed the Optimal Mass Transportation Map that corresponds to the smallest total transportation cost: $C(T)$. The transportation cost corresponding to the optimal transportation map is called the Wasserstein distance between probability measures $\\mu$ and $\\upsilon$:\n\t\\begin{equation}\\label{eqn1}\n\t\tW_c(\\mu,\\upsilon)=\\mathop{\\min}\\limits_{T}\\left\\{\\int_X c(x,T(x)) \\mathrm{d}\\mu(x)\\ |\\ T_*(\\mu)=\\upsilon\\right\\}.\n\t\\end{equation}\n\tIn 1940s, Kantorovich proved the existence and uniqueness of the solution for Monge problem, and according to the duality of linear programming, the Kantorovich-Rubinstein (KR) duality of Wasserstein distance is given by:\n\t\\begin{equation}\\label{eqn1}\n\t\tW_c(\\mu,\\upsilon)\n\t\t=\\mathop{\\max}\\limits_{\\varphi ,\\psi}\\left\\{\\int_X \\varphi \\mathrm{d}\\mu\\ +\\int_Y \\psi \\mathrm{d}\\upsilon\\ |\\ \\varphi(x)+\\psi(y)\\leq c(x,y)\\right\\}.\n\t\\end{equation}\n\tThis dual problem is constrained, defining the c-transform: $\\psi(y)=\\varphi^c(y):=inf_x\\{c(x,y)-\\varphi(x)\\}$, and the Wasserstein distance becomes:\n\t\\begin{equation}\\label{eqn1}\n\t\tW_c(\\mu,\\upsilon)=\\mathop{\\max}\\limits_{\\varphi}\\left\\{\\int_X \\varphi \\mathrm{d}\\mu\\ +\\int_Y \\varphi^c \\mathrm{d}\\upsilon\\right\\},\n\t\\end{equation}\n\twhere $\\varphi$ is called the Kantorovich potential. 
It can be shown that if $c(x,y)=|x-y|$ and Kantorovich potential satisfies the 1-Lipschitz continuity, then $\\varphi^c=-\\varphi$. Kantorovich potential can be fitted by a deep neural network, which is recorded as $\\varphi_\\xi$. Wasserstein distance is:\n\t\\begin{equation}\n\t\tW_c(\\mu,\\upsilon)=\\mathop{\\max}\\limits_{||\\varphi_\\xi||_L\\leq 1}\\left\\{\\int_X \\varphi_\\xi \\mathrm{d}\\mu\\ -\\int_Y \\varphi_\\xi \\mathrm{d}\\upsilon\\right\\}.\n\t\t\\label{eqn7}\n\t\\end{equation}\n\tIf $X$ is the generated image space, $Y$ is the real sample space, $Z$ is latent space and $g_\\theta$ is the geneartor, the Wasserstein GANs (WGAN) is formulated as a min-max problem:\n\t\\begin{equation}\\label{eqn1}\n\t\t\\mathop{\\min}\\limits_{\\theta}\\mathop{\\max}\\limits_{||\\varphi_\\xi||_L\\leq 1}\\left\\{\\int_Z \\varphi_\\xi(g_\\theta(z)) \\mathrm{d}z \\ -\\int_Y \\varphi_\\xi(y) \\mathrm{d}y\\right\\}.\n\t\\end{equation}\n\tIn the optimization process, the generator and the Kantorovich potential function (discriminator) are independent of each other, optimized in a step-by-step iteration.\n\tIf $c(x,y)=\\frac{|x-y|^2}{2}$, there is a convex function $u$ that is called Brenier potential . The optimal transportation map is given by the gradient map of Brenier potential: $T(x)=\\nabla u(x)$. There exists a relationship between Kantorovich potential and Brenier potential : \n\t\\begin{equation}\\label{eqn1}\n\t\tu(x)=\\frac{|x|^2}{2}-\\varphi(x).\n\t\\end{equation}\n\tFrom the previous discussion, it is evident that the optimal transportation map (Brenier potential) corresponds to the generator, and Kantorovich potential corresponds to the discriminator. 
After the discriminator is optimized, the generator is directly drivable without the optimization process .\n\tThe transportation cost of Eq (3) is defined as the form of two distribution distances:\n\t\\begin{equation}\\label{eqn1}\n\t\tOT(P||Q)=\\mathop{inf}\\limits_{\\pi}\\int\\pi(x,y)c(x,y)\\mathrm{d}x\\mathrm{d}y,\n\t\\end{equation}\n\twhere $\\pi(x,y)$ is the joint distribution, satisfying $\\int_y\\pi(x,y)dy=P(x)$ and $\\int_x\\pi(x,y)dx=Q(y)$. The dual form of Eq (10) is derived as follows::\n\t\\begin{equation}\\label{eqn1}\n\t\tOT(P||Q)=\\mathop{\\max}\\limits_{\\varphi ,\\psi}\\{\\int_x \\varphi(x)P(x) \\mathrm{d}x\\ \\\\ +\\int_y\\psi(y)Q(y) \\mathrm{d}y\\ |\\ \\varphi(x)+\\psi(y)\\leq c(x,y)\\}.\n\t\\end{equation} \n\tConsidering the optimal transportation with regular terms, Peyr{\\'e} et al. added the entropic regularization for optimal transportation that transforms the dual problem into a smooth unconstrained convex problem. The regularized optimal transport is defined as:\n\t\\begin{equation}\\label{eqn1}\n\t\tOT_c(P||Q)=\\mathop{\\min}\\limits_{\\pi}\\int\\pi(x,y)c(x,y)\\mathrm{d}x\\mathrm{d}y+\\epsilon E(\\pi).\n\t\\end{equation}\n\tIf $E(\\pi)=\\int_x\\int_y\\pi(x,y)\\log(\\frac{\\pi(x,y)}{P(x)Q(y)})\\mathrm{d}x\\mathrm{d}y$, Eq (12) can be written as:\n\t\\begin{equation}\\label{eqn1}\n\t\t\\begin{split}\n\t\t\tOT_c(P||Q)=&\\mathop{\\min}\\limits_{\\pi}\\int\\pi(x,y)c(x,y)\\mathrm{d}x\\mathrm{d}y+\\epsilon \\int_x\\int_y\\pi(x,y)\\log\\left(\\frac{\\pi(x,y)}{P(x)Q(y)}\\right)\\mathrm{d}x\\mathrm{d}y\\\\\n\t\t\ts.t. 
\\int_y\\pi(x,y)&\\mathrm{d}y=P(x),\\int_x\\pi(x,y)\\mathrm{d}x=Q(y).\n\t\t\\end{split}\n\t\\end{equation}\n\tThe dual form of Eq (13) becomes:\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\tOT_c(P||Q)\n\t\t\t&=\\mathop{\\max}\\limits_{\\varphi ,\\psi}\\int_x \\varphi(x)P(x) \\mathrm{d}x\\ +\\int_y\\psi(y)Q(y)\\mathrm{d}y\\\\\n\t\t\t&+\\frac{\\epsilon}{e}\\int_x\\int_y\\exp\\left(\\frac{-\\left(c(x,y)+\\varphi(x)+\\psi(y)\\right)}{\\epsilon}\\right)\\mathrm{d}x\\mathrm{d}y.\n\t\t\\end{split}\n\t\t\\label{EQ:eqn14}\n\t\\end{equation}\n\t\\end{comment}\n\tThe details of optimal transportation and optimal transportation with regular terms for WGAN and Lipschitz continuity can be found on the Section \\ref{sect:A-1} of the Supplementary Online-only Material. It is pertinent to note that \\textit{Gradient Penalty} and \\textit{Gradient} \\textit{Normalization} are two simple and effective ways to implement the Lipschitz continuity. Furthermore, demonstrates that the spectral norm and the Lipschitz constant have the same meaning. Therefore, the spectral norm can be used to represent the Lipschitz constant. The Lipschitz continuity is achieved by normalizing the spectral norm of the weight, approximately. 
Hence, \\textit{Weight Normalization} and \\textit{Weight Regularization} can also be used to enable the Lipschitz continuity of the discriminator.\n\t\\begin{comment}", "id": "a915370f-85fe-4345-a91b-c8d67718061c", "level": "subsection", "origin_cites_number": 14, "parent_id": "5fdbf0be-ea34-468a-9943-d62d05b593ca", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Background and Three Perspectives of GANs Training" ], [ "subsection", "Middle Level: The Perspective of \"Fitting distribution\"" ] ], "subsections": [], "title": "Middle Level: The Perspective of \"Fitting distribution\"" }, { "cite_extract_rate": 1, "cites": [ 97, 96 ], "content": "WGAN is a popular and important generative adversarial network. From the optimal transport introduced in the last subsection, to obtain Eq (\\ref{eqn7}), the discriminator must satisfy the 1-Lipschitz continuity. This section introduces the form of the Lipschitz constant and shows that the spectral norm and the Lipschitz constant have the same meaning.\n\t1-Lipschitz continuity is represented as:\n\t\\begin{equation}\n\t||D(x_1)-D(x_2)||\\leq ||x_1-x_2||.\\footnote{Lipschitz continuity can be defined by any form of norm.}\n\t\\end{equation}\n\tGenerally, considering the K-Lipschitz for a neural network $f(x)$:\n\t\\begin{equation}\n\tf(x)=g_N\\circ\\cdots g_2\\circ g_1(x),\\footnote{$\\circ$ is the symbol for function cascade. Specifically, $h\\circ g(x)=h(g(x))$. This definition of neural network is not general, such as DenseNet and ResNet , which can not be defined like this. Therefore, we do not strictly derive the relationship between the matrix norm and Lipschitz continuity.}\n\t\\end{equation}\n\twhere $g_i(x)=\\sigma (W_i x+b_i)$. And K-Lipschitz continuity for $f(x)$ is:\n\t\\begin{equation}\n\t||f(x_1)-f(x_2)||\\leq \\mathrm{K}||x_1-x_2||,\n\t\\label{EQ:eqn17}\n\t\\end{equation}\n\twhere K is Lipschitz constant of the function $f$. 
Due to the consistency of Lipschitz $||h\\circ g||_{Lip}\\leq ||h||_{Lip}\\cdot||g||_{Lip}$, $g_i$ needs to satisfy the C-Lipschitz continuity ($\\mathrm{C}=\\sqrt[N]{\\mathrm{K}}$) so that $f$ satisfies the K-Lipschitz continuity:\n\t\\begin{equation}\n\t||g_i(x_1)-g_i(x_2)||\\leq \\mathrm{C}||x_1-x_2||,\n\t\\end{equation}\n\t\\begin{equation}\n\t||\\sigma(Wx_1+b)-\\sigma(Wx_2+b)||\\leq \\mathrm{C}||x_1-x_2||.\n\t\\label{eq:23}\n\t\\end{equation}\n\tWhen $x_1\\rightarrow x_2$, the Taylor expansion of Eq (\\ref{eq:23}):\n\t\\begin{equation}\n\t||\\frac{\\partial\\sigma}{\\partial x} W(x_1-x_2)||\\leq \\mathrm{C}||x_1-x_2||.\n\t\\end{equation}\n\tNormally, $\\sigma$ is a function with limited derivatives such as Sigmoid, so the $\\mathrm{C'}$-Lipschitz continuity is be written as:\n\t\\begin{equation}\n\t|| W(x_1-x_2)||\\leq \\mathrm{C'}||x_1-x_2||,\n\t\\end{equation}\n\twhere $\\mathrm{C'}$ is a limited constant, which is determined by $\\frac{\\partial\\sigma}{\\partial x}$ and $\\mathrm{C}$.\n\tSimilarly, the spectral norm of matrix is defined by:\n\t\\begin{equation}\n\t||W||_2=\\mathop{\\max}\\limits_{x\\not=0}\\frac{||Wx||}{||x||}.\n\t\\end{equation}\n\tIn this context, the spectral norm $||W||_2$ can be used to represent the Lipschitz constant $\\mathrm{C'}$. The Lipschitz continuity is achieved by normalizing the spectral norm of the weight, approximately. 
Hence, \\textit{Weight Normalization} and \\textit{Weight Regularization} can also be used to enable the Lipschitz continuity of the discriminator.\n\t\\end{comment}", "id": "a397f8d2-2658-471a-8c3b-cbde88271e09", "level": "subsection", "origin_cites_number": 2, "parent_id": "5fdbf0be-ea34-468a-9943-d62d05b593ca", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Background and Three Perspectives of GANs Training" ], [ "subsection", "Lipschitz Continuity and Matrix Norm" ] ], "subsections": [], "title": "Lipschitz Continuity and Matrix Norm" }, { "cite_extract_rate": 0.9, "cites": [ 74, 98, 100, 54, 102, 99, 101, 64, 7190 ], "content": "GANs training is a two-player zero-sum game with a solution to Nash Equilibrium. At high level, we analyze the convergence of GANs by understanding the optimization process. Based on these, some regularization techniques are proposed to guide the GANs model to reach the theoretical equilibrium point leading to improvement in the effectiveness of GANs.\n\tReconsidering the Eq (\\ref{EQ:eqn1}) in Section 2, the training of GANs is achieved by solving a two-player zero-sum game via Simultaneous Gradient Descent (SimGD) . The updates of the SimGD are given as:\n\t\\begin{equation} \n\t\t\\label{eq:eqn27}\n\t\t\\phi^{(k+1)}=\\phi^{(k)}-h\\nabla_\\phi f(\\phi^{(k)},\\theta ^{(k)}),\\quad\n\t\t\\theta^{(k+1)}=\\theta^{(k)}+h\\nabla_\\theta f(\\phi^{(k)},\\theta ^{(k)}).\n\t\\end{equation}\n\tAssuming that the objectives of GANs are convex, many research studies discuss their global convergence characteristics . However, due to the high non-convexity of deep networks, even a simple GAN does not satisfy the convexity assumption . A recent study shows that it is unrealistic to obtain approximate global convergence under the assumption of the optimal discriminator, so the community considers local convergence. 
One hopes that the trajectory of the dynamical system converges to a local equilibrium point as the iterations continue, that is, a local Nash equilibrium:
	\begin{equation} 
		\bar{\phi}=\mathop{\arg\max}_{\phi}-f(\phi,\bar{\theta}),\quad
		\bar{\theta}=\mathop{\arg\max}_{\theta}f(\bar{\phi},\theta).
		\label{EQ:eq24}
	\end{equation}
	The point $(\bar{\phi},\bar{\theta})$ is called a local Nash equilibrium if Eq (\ref{EQ:eq24}) holds in a local neighborhood of $(\bar{\phi},\bar{\theta})$.
	For this differentiable two-player zero-sum game, a vector field is defined as below:
	\begin{equation}
		v(\phi,\theta)=
		\left(
		\begin{aligned}
			-\nabla_\phi f(\phi,\theta)\\
			\nabla_\theta f(\phi,\theta)
		\end{aligned} 
		\right).
	\end{equation}
	The Jacobian matrix is:
	\begin{equation}
		v^{'}(\phi,\theta)=
		\left(
		\begin{aligned}
			-&\nabla^2_{\phi,\phi} f(\phi,\theta)\quad-\nabla^2_{\phi,\theta} f(\phi,\theta)\\
			&\nabla^2_{\theta,\phi} f(\phi,\theta)\quad\nabla^2_{\theta,\theta} f(\phi,\theta)
		\end{aligned} 
		\right).
	\end{equation}
	According to the propositions in Section \ref{sect:A-2} of the Supplementary Online-only Material, under the premise of asymptotic convergence, the local convergence of GANs is equivalent to the absolute values of all eigenvalues of the Jacobian of the update operator at the fixed point $(v(\bar{\phi},\bar{\theta})=0)$ being less than 1.
To satisfy this condition, \textit{Jacobian Regularization} can be used.
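The role of this eigenvalue condition can be seen on a toy zero-sum game. The sketch below (my own illustrative example on a standard bilinear game, not taken from any specific cited method) runs simultaneous gradient descent on $f(\phi,\theta)=\phi\theta$, with and without a simple damping term standing in for a Jacobian-type regularizer:

```python
import numpy as np

h, steps = 0.1, 500

def simgd_radius(gamma):
    """Distance from the equilibrium (0, 0) after SimGD on f(phi, theta) = phi * theta."""
    phi, theta = 1.0, 1.0
    for _ in range(steps):
        g_phi = theta                   # df/dphi
        g_theta = phi - gamma * theta   # df/dtheta, minus a damping/penalty term
        phi, theta = phi - h * g_phi, theta + h * g_theta
    return float(np.hypot(phi, theta))

# Plain SimGD spirals away from (0, 0); the damped dynamics contract toward it.
print(simgd_radius(0.0), simgd_radius(1.0))

# The update map is linear here, so the Jacobian of the update decides convergence:
J_plain = np.array([[1.0, -h], [h, 1.0]])        # eigenvalues 1 +/- ih, |lam| > 1
J_damped = np.array([[1.0, -h], [h, 1.0 - h]])   # gamma = 1, all |lam| < 1
print(np.max(np.abs(np.linalg.eigvals(J_plain))),
      np.max(np.abs(np.linalg.eigvals(J_damped))))
```

The undamped update map has eigenvalue moduli above 1, so the iterates diverge even though the equilibrium exists; the regularized map pulls all moduli strictly below 1, matching the condition stated above.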
{In addition to discriminator overfitting and the limited discriminative power of discriminators, discriminator forgetting is another challenge for GANs.} To alleviate these situations, many regularization methods and additional supervision tasks have been proposed in the literature, which can be divided into three categories: \textit{Data Augmentation and Preprocessing}, \textit{Consistency Regularization}, and \textit{Self-supervision}. {All of them are based on data augmentation and are orthogonal to each other. As shown in Table \ref{table:SOTA }, state-of-the-art GANs often adopt two or even all of the above regularization methods.}
One type of data transformation is spatial transformation of data, such as $zoomout$, $zoomin$, $translation$, $translationx$, $translationy$, $cutout$, $cutmix$; the other is visual transformation, such as $brightness$, $redness$, $greenness$, $blueness$, $mixup$. {Furthermore, recent studies also attempt to use frequency-domain transformations for data augmentation.}
	Similarly, the performance of GANs heavily deteriorates given a limited amount of training data. {For instance, it has been shown that the Fréchet Inception Distance (FID) starts to rise at some point during training and the outputs of the discriminator keep drifting apart during training when training data is limited. More analysis can be found in the survey on data-efficient GANs training.} However, recent studies observe that augmenting only real images (only applying $T$ to (i) in Figure \ref{fig:data_augmentation_framework}), only generated images (only applying $T$ to (ii) in Figure \ref{fig:data_augmentation_framework}), or only the discriminator (applying $T$ to both (i) and (ii) in Figure \ref{fig:data_augmentation_framework}) does not help with GANs training. Naturally, one question needs to be considered: does overfitting exist in GANs training? Some studies demonstrate that, even with large datasets and state-of-the-art models, the training of GANs suffers from severe overfitting. Furthermore, in the case of small training datasets, overfitting occurs at an early stage of training. Recently, some studies on data augmentation for GANs training have been proposed. It is argued that the classical data augmentation approach could mislead the generator to learn the distribution of the augmented data, which could be different from that of the original data. To deal with this problem, these studies augment both real and fake samples and let gradients propagate through the augmented samples to G (applying $T$ to (i), (ii), and (iii) in Figure \ref{fig:data_augmentation_framework}).
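A few of the transformations just listed can be sketched directly; the toy implementations below (my own minimal versions, using wrap-around shifting for simplicity rather than the padded shifts used in practice) act on a single-channel image array:

```python
import numpy as np

def translate(x, dx, dy):
    """Spatial transform: cyclic shift by (dx, dy) pixels (wrap-around variant)."""
    return np.roll(np.roll(x, dy, axis=0), dx, axis=1)

def cutout(x, y0, x0, size):
    """Spatial transform: zero out a square patch."""
    out = x.copy()
    out[y0:y0 + size, x0:x0 + size] = 0.0
    return out

def brightness(x, delta):
    """Visual transform: shift intensities, clipped to [0, 1]."""
    return np.clip(x + delta, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((8, 8))          # toy grayscale image in [0, 1]
augmented = brightness(cutout(translate(img, 2, 1), 0, 0, 3), 0.1)
print(augmented.shape)
```

Each transform preserves the array shape, so the same pipeline can be applied uniformly at positions (i), (ii), and (iii) of Figure \ref{fig:data_augmentation_framework}.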
By adding data augmentation to all processes of GANs training, the performance of GANs has been significantly improved. {However, this "\textit{Augment All}" strategy may lead to the “leaking” of augmentations into the generated samples, which is highly undesirable. Experiments demonstrate that as long as the probability of executing the augmentation remains below 0.8, leaks are unlikely to happen in practice.}
	Data augmentation in GANs has achieved remarkable results. However, which augmentation is most beneficial for GANs training is still an open problem. Reported FID comparisons of BigGAN on the CIFAR-10 dataset show that, for data augmentation (represented by ‘vanilla\_rf’), the operations in spatial augmentation, such as $translation$, $zoomout$, and $zoomin$, are much more effective than the operations in visual augmentation, such as $brightness$, $colorness$ ($redness$ and $greenness$), and $mixup$. The results indicate that augmentations inducing spatial changes improve GANs performance more than those inducing visual changes. Intuitively, generated images significantly lack detail compared to real images, and spatial augmentation improves the ability of the generator to fit detailed textures through spatial changes. {$InstanceNoise$, resulting in images outside the natural data manifold, cannot help with improving GANs performance. Apart from applying only a limited range of augmentations, some studies explore strong data augmentations in GANs training. For instance, Jeong et al. adopt contrastive learning to extract more useful information under stronger data augmentation beyond the existing yet limited practices. Combining adaptive strategies with 18 transformations (both spatial and visual), even $InstanceNoise$ alone can bring performance improvements over strong GAN baselines.
{Furthermore, denoising diffusion GANs were the first method proposed to tackle the generative learning trilemma.} Apart from these, Jiang et al. also devise an adaptive strategy to control the strength of selecting generated images to augment real data, which can further boost the performance of GANs. Data augmentation is popular and significant in GANs training, and its achievements are attributed to improving discrimination, avoiding overfitting, and increasing the overlap between real and fake distributions.}
	{Different from data augmentation, which increases the amount of training data, data preprocessing only adopts prior knowledge and applies a uniform data transformation before network training. Data preprocessing is orthogonal to data augmentation, and combining the two can further enhance performance.} Li et al. indicate that the high-frequency components of real and fake images differ, which is not conducive to GANs training. They propose two preprocessing methods that eliminate high-frequency differences in GANs training: High-Frequency Confusion (HFC) and High-Frequency Filter (HFF). These methods are applied at positions (i), (ii), and (iii) in Figure \ref{fig:data_augmentation_framework} and improve the performance of GANs at a fraction of the cost.
	In summary, both data augmentation and data preprocessing improve the performance of GANs with little cost. Data augmentation uses different transformations to improve discrimination and avoid discriminator overfitting. Furthermore, spatial augmentations achieve better performance than visual augmentations. More specifically, Zhao et al. demonstrate that hybrid augmentation with $Color + Translation + Cutout$ is especially effective and is widely used in other studies. Adaptive data augmentation (ADA) is the most popular method in GANs.
Besides data augmentation, data preprocessing is also a remarkable method.
	\begin{figure}
		\centering
		\includegraphics[scale=.3]{figs/consistency.pdf}
		\caption{Overview of consistency regularization, where image consistency regularization updates the G (left) and network consistency regularization updates the D (right). $H$ is the feature mapping function and {\color{red}$T$} denotes a data transformation technique.}
		\label{fig:consistency}
	\end{figure}
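As an illustration of the high-frequency preprocessing discussed above, here is a minimal high-frequency filter (my own ideal low-pass sketch, not the exact HFF implementation of Li et al.): zeroing Fourier coefficients beyond a radius removes the high-frequency content on which real and generated images tend to disagree:

```python
import numpy as np

def high_frequency_filter(img, keep_radius):
    """Ideal low-pass: zero Fourier coefficients farther than keep_radius
    from the spectrum center, then transform back."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h // 2, xx - w // 2)
    f[dist > keep_radius] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

rng = np.random.default_rng(0)
img = rng.random((16, 16))
low = high_frequency_filter(img, keep_radius=4)
print(low.shape)
```

Applied identically to real and generated images, such a filter prevents the discriminator from winning on high-frequency artifacts alone.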
According to their different goals, we divide these techniques into two parts: \textit{Image Consistency} and \textit{Network Consistency}, as illustrated in Figure \ref{fig:consistency}.
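Both parts instantiate the same template: penalize a mismatch between a quantity computed on $x$ and on $T(x)$. A toy sketch (hypothetical linear discriminator, horizontal flip as the semantics-preserving transform $T$; both are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((8, 8))             # toy image
W = rng.standard_normal((8, 8))

def T(img):                        # semantics-preserving transform: horizontal flip
    return img[:, ::-1]

def D(img, w):                     # hypothetical linear discriminator score
    return float((w * img).sum())

def consistency(img, w):
    """CR-style penalty || D(x) - D(T(x)) ||^2."""
    return (D(img, w) - D(T(img), w)) ** 2

w_sensitive = W                        # generic weights: flip changes the score
w_invariant = (W + W[:, ::-1]) / 2.0   # flip-symmetric weights: score is invariant

print(consistency(x, w_sensitive), consistency(x, w_invariant))
```

Minimizing the penalty drives the discriminator toward the flip-invariant weights, i.e. toward reading semantics rather than the particular pixel arrangement.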
Different image consistency regularizations use different $H$ and $C$. For instance, Salimans et al. recommend training the generator using a feature matching procedure. The objective is:
	\begin{equation}
		\mathcal{L}_C=||\mathbb{E}_{x\sim p_r} f(x)-\mathbb{E}_{z\sim p_z}f(G(z))||^2_2,
	\end{equation}
	where $f(x)$ denotes an intermediate layer of the discriminator. Similarly, an intermediate layer of another pre-trained classification model is an alternative option. The empirical results indicate that feature matching is indeed effective in situations where the normal GAN becomes unstable. Unlike the above study, which only uses mean feature matching to train the generator, Mroueh et al. propose McGAN, which trains both the generator and the discriminator using mean and covariance feature matching. The objective is:
	\begin{equation}
			\mathcal{L}_C=\mathcal{L}_\mu+\mathcal{L}_\sigma
			=||\mu(p_r)-\mu(G(p_z))||_q+||\sum\left(p_r\right)-\sum\left(G(p_z)\right)||_k,
	\end{equation}
	where $\mu(p_r)=\mathbb{E}_{x\sim p_r} f(x)$ and $\sum(p_r)=\mathbb{E}_{x\sim p_r} f(x)\cdot f(x)^\mathrm{T}$ represent the mean and the covariance of the feature layer $f(x)$, respectively. Apart from statistical differences, some studies focus on the difference in the frequency domain between generated and real images. For instance, Durall et al. find that deep generative models based on up-convolution fail to reproduce spectral distributions, leading to considerable differences between the spectral distributions of real and generated images. Thus, spectral regularization has been proposed as follows:
	\begin{equation}
		\mathcal{L}_C=\frac{1}{M/2-1}\sum_{i=0}^{M/2-1}AI^{real}_i\cdot\log(AI^{fake}_i)
		+(1-AI^{real}_i)\cdot\log(1-AI^{fake}_i),
	\end{equation}
	where $M$ is the image size and $AI$ is the spectral representation obtained from the Fourier transform of the images.
Corresponding to Eq (\\ref{EQ:image_consistency}), $H$ and $C$ are implemented with $AI$ and cross-entropy, respectively.\n\tContrary to this, the research study uses hard example mining to improve the discriminatory of the discriminator based on the difference between real and generated samples under different metrics. Although this paradigm is different from the paradigm of image consistency regularization, both cases are motivated by obtaining generated samples similar to real images under different distance measures, so we integrate them. Chen et al. consider both downsampling strategies: downsampling with anti-aliasing and downsampling without anti-aliasing, leads to high frequencies missing in the discriminator. High frequencies missing leads to high frequency deviation between real and generated images. To mitigate this issue, authors propose SSD-GAN, which introduces an additional spectral classifier to detect frequency spectrum discrepancy between real and generated images and integrate it into the discriminator of GANs. The overall realness of sample x is represented as:\n\t\\begin{equation}\n\t\tD^{s s}(x)=\\lambda D(x)+(1-\\lambda) C(\\phi(x)),\n\t\\end{equation}\n\twhere the enhanced discriminator $D^{s s}$ consists of two modules, a vanilla discriminator $D$ that measures the spatial realness, and a spectral classifier $C$. $\\lambda$ is a hyperparameter that controls the relative importance of the spatial realness and the spectral realness. The adversarial loss of the framework can be written as:\n\t\\begin{equation}\n\t\t\\mathcal{L}_{D} =\\mathbb{E}_{x \\sim p_{\\text {data }}(x)}\\left[\\log D^{s s}(x)\\right] \n\t\t+\\mathbb{E}_{x \\sim p_{g}(x)}\\left[\\log \\left(1-D^{s s}(x)\\right)\\right].\n\t\\end{equation}\n\t{karnewar et al. 
introduce an adversarial loss at the intermediate layers of the generator, which provides multiple and richer metrics for training the generator.}
	\begin{table}
		\caption{The summary of the consistency regularization.}
		\label{table:consistency}
		\tiny
		\centering
		\begin{tabular}{c| c }	
			\toprule
			\midrule
			Method& Consistency regularization term $L_C$	 \\
			\hline
			Mean regularization &$||\mathbb{E}_{x\sim p_r} f(x)-\mathbb{E}_{z\sim p_z}f(G(z))||_q$\\
			\hline
			Mean and Covariance regularization &$||\mathbb{E}_{x\sim p_r} f(x)-\mathbb{E}_{z\sim p_z}f(G(z))||_q+||\mathbb{E}_{x\sim p_r} f(x)\cdot f(x)^\mathrm{T}-\mathbb{E}_{z\sim p_z}f(G(z))\cdot f(G(z))^\mathrm{T}||_k$\\
			\hline
			Spectral regularization &$\mathcal{L}_C=\frac{1}{M/2-1}\sum_{i=0}^{M/2-1}AI^{real}_i\cdot\log(AI^{fake}_i)+(1-AI^{real}_i)\cdot\log(1-AI^{fake}_i)$\\
			\hline
			CR-GAN &$\mathbb{E}_{x\sim p_r}|| D(x)-D(T(x))||_2$\\
			\hline
			bCR-GAN &$\lambda_{real}\mathbb{E}_{x\sim p_r}|| D(x)-D(T(x))||_2+\lambda_{fake}\mathbb{E}_{z\sim p_z}||D(G(z))-D(T(G(z)))||_2$\\
			\hline
			zCR-GAN&$\lambda_{dis}\mathbb{E}_{z\sim p_z}||D(G(z))-D(G(T(z)))||_2-\lambda_{gen}\mathbb{E}_{z\sim p_z}||G(z)-G(T(z))||_2$\\
			\bottomrule
		\end{tabular}
	\end{table}
	The summary of the image consistency regularization is given in Table \ref{table:consistency}. In summary, image consistency requires that real and generated images be similar not only in the output of the discriminator, but also in statistical information and in the frequency domain.
The analysis of biases between real and generated images using different metrics will be an interesting future research direction.", "id": "a0c72661-2f99-4ec9-8181-70dfdeab36dc", "level": "subsubsection", "origin_cites_number": 12, "parent_id": "a5e04be5-4733-477b-9081-c52ce2a3be68", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Regularization and Normalization of \"Real \\& Fake\"" ], [ "subsection", "Consistency Regularization" ], [ "subsubsection", "Image Consistency" ] ], "subsections": [], "title": "Image Consistency" }, { "cite_extract_rate": 1, "cites": [ 91, 8320, 69, 111 ], "content": "Network consistency regularization can be regarded as Lipschitz continuity on semantics-preserving transformation. Specifically, we hope discriminator is insensitive to semantics-preserving transformation, which drives the discriminator to pay more attention to the authenticity of the images. For example, in the image domain, the reality of images should not change if we flip the image horizontally or translate the image by a few pixels. To resolve this, Zhang et al. propose the Consistency Regularization GAN (CR-GAN) that uses the consistency regularization on the discriminator during GANs training:\n\t\\begin{equation}\n\t\t\\mathcal{L}_C=\\mathbb{E}_{x\\sim p_r}|| D(x)-D(T(x))||_2,\n\t\\end{equation}\n\twhere $T(x)$ represents a transformation (shift, flip, cutout, etc.) of images. One key problem with the CR-GAN is that the discriminator might occur the 'mistakenly believe'. 'mistakenly believe' considers that the transformations are actual features of the target dataset, due to only applying these transformations on real images. This phenomenon is not easy to notice for certain types of transformations (e.g. image shifting and flipping). 
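The statistical matching terms above can be made concrete. The following toy sketch (feature vectors drawn directly from Gaussians as stand-ins for discriminator features; Frobenius/Euclidean norms as an illustrative choice of $||\cdot||_q$ and $||\cdot||_k$) computes a mean-plus-covariance matching loss in the spirit of McGAN:

```python
import numpy as np

def feature_stats(F):
    """Batch mean and (uncentered) second moment E[f f^T] of features F: (n, d)."""
    return F.mean(axis=0), F.T @ F / len(F)

def matching_loss(F_real, F_fake):
    mu_r, S_r = feature_stats(F_real)
    mu_f, S_f = feature_stats(F_fake)
    return float(np.linalg.norm(mu_r - mu_f) + np.linalg.norm(S_r - S_f))

rng = np.random.default_rng(0)
F_real = rng.normal(0.0, 1.0, size=(512, 8))       # stand-in real features
F_fake_far = rng.normal(0.5, 2.0, size=(512, 8))   # mismatched mean and scale
F_fake_near = rng.normal(0.0, 1.0, size=(512, 8))  # matched statistics

print(matching_loss(F_real, F_fake_far), matching_loss(F_real, F_fake_near))
```

The loss vanishes when the batch statistics coincide and grows with mean or covariance mismatch, which is exactly the signal the generator receives under feature matching.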
However, some types of transformations, such as cutout, introduce visual artifacts that do not belong to real images, which greatly limits the choice of advanced transformations we can use. To address this issue, Zhao et al. propose Balanced Consistency Regularization (bCR-GAN), which applies the regularization to both real and fake images and balances the training of the discriminator between real and fake images via $\lambda_{real}$ and $\lambda_{fake}$:
\begin{equation}
\mathcal{L}_C=\lambda_{real}\mathbb{E}_{x\sim p_r}|| D(x)-D(T(x))||_2
+\lambda_{fake}\mathbb{E}_{z\sim p_z}||D(G(z))-D(T(G(z)))||_2.
\end{equation}
The overview of bCR is demonstrated in the right part of Figure \ref{fig:consistency}.
Contrary to the methods that focus on consistency regularization with respect to transformations in image space, Zhao et al. also propose Latent Consistency Regularization (zCR), which considers consistency regularization with respect to transformations in latent space. The authors expect that the output of the discriminator should not change much under a sufficiently small perturbation $\Delta z$ and modify the discriminator loss by enforcing:
\begin{equation}
\mathcal{L}^D_C=\lambda_{dis}\mathbb{E}_{z\sim p_z}||D(G(z))-D(G(T(z)))||_2,
\end{equation}
where $T(z)$ represents the added small perturbation noise. However, if only this loss is added to the GAN loss, mode collapse can easily appear in the training of the generator. To avoid this, an inverse gradient penalty (we will describe it in Section 6.2) is added to the loss function of the generator. Hence, we modify the generator loss by enforcing:
\begin{equation}
\mathcal{L}^G_C=-\lambda_{gen}\mathbb{E}_{z\sim p_z}||G(z)-G(T(z))||_2.
\end{equation}
Naturally, putting bCR and zCR together, Improved Consistency Regularization (ICR) is also proposed by Zhao et al. .
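A minimal sketch of the bCR term, assuming a horizontal flip as the semantics-preserving transformation $T$ and a toy critic `D` that maps a batch of images to scalars (all names are hypothetical stand-ins for the actual networks):

```python
import numpy as np

def flip(x):
    # Semantics-preserving transformation T: horizontal flip of (N, H, W) images.
    return x[:, :, ::-1]

def bcr_loss(D, x_real, x_fake, lam_real=10.0, lam_fake=10.0):
    # Penalize changes in the critic output under T for real and fake batches.
    real_term = np.mean((D(x_real) - D(flip(x_real))) ** 2)
    fake_term = np.mean((D(x_fake) - D(flip(x_fake))) ** 2)
    return lam_real * real_term + lam_fake * fake_term
```

A critic that is already invariant to the transformation (e.g. one that only looks at channel statistics) incurs zero penalty, while a flip-sensitive critic is pushed towards invariance.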
In addition, there are some applications where cyclic consistency regularization is used for unpaired image-to-image translation . The summary of network consistency regularization is given in Table \ref{table:consistency}.
In summary, network consistency requires the networks, especially the discriminator, to be insensitive to semantics-preserving transformations ($T$). The results in demonstrate that random shift and flip is the best way to perform image transformation on the CIFAR-10 dataset. Furthermore, the FID results with CR, bCR, zCR, and ICR (where the transformation is horizontal flipping and shifting by multiple pixels) as presented in are shown in Table \ref{table:consistency_results}. The results demonstrate that network consistency regularization can significantly improve the performance of GANs. However, which transformation is best for consistency regularization remains an open question. Zhao et al. compare the effect of different data transformation techniques (mentioned in Section 5.1) on bCR. Figure 2 in shows the FID results of BigGAN with bCR (represented by 'bcr') on the CIFAR-10 dataset. From these results, the best BigGAN FID of 8.65 is achieved with the $translation$ transformation of strength $\lambda=0.4$, outperforming the corresponding FID of 10.54 reported by Zhao et al. . Moreover, spatial transforms, which retain the major content while introducing spatial variances, can substantially improve GANs performance together with bCR, while visual transforms, which do not introduce spatial variances, cannot further improve the performance of GANs compared with data augmentation only. Furthermore, bCR with a stronger transformation (a larger value of $\lambda_{aug}$) does not improve the performance of GANs, and the optimal value of $\lambda_{aug}$ is uncertain for different data transformations.
\n\t\\begin{table}\n\t\t\\caption{FID scores for class conditional image generation of the network consistency regularization (Data come from ).}\n\t\t\\label{table:consistency_results}\n\t\t\\centering\n\t\t\\begin{tabular}{ccc}\n\t\t\t\\toprule\n\t\t\t\\midrule\n\t\t\tModels & CIFAR-10 & ImageNet \\\\\n\t\t\t\\hline SNGAN & 17.50 & 27.62 \\\\\n\t\t\tBigGAN & 14.73 & 8.73 \\\\\n\t\t\tCR-BigGAN & 11.48 & 6.66 \\\\\n\t\t\tbCR-BigGAN & 10.54 & 6.24 \\\\\n\t\t\tzCR-BigGAN & 10.19 & 5.87 \\\\\n\t\t\tICR-BigGAN & $\\mathbf{9 . 2 1}$ & $\\mathbf{5 . 3 8}$ \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{table}", "id": "86d17fb0-5862-4222-a247-42d917acf482", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "a5e04be5-4733-477b-9081-c52ce2a3be68", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Regularization and Normalization of \"Real \\& Fake\"" ], [ "subsection", "Consistency Regularization" ], [ "subsubsection", "Network Consistency" ] ], "subsections": [], "title": "Network Consistency" }, { "cite_extract_rate": 1, "cites": [ 123, 122, 7000 ], "content": "Self-supervised learning aims to learn representations from the data itself without explicit manual supervision. Recently, some self-supervised studies provide competitive results on ImageNet classification and the representations learned from which transfer well to downstream tasks. Self-supervised learning outperforms its supervised pre-training counterpart in many tasks, such as detection and segmentation, sometimes surpassing it by large margins. This suggests that self-supervised learning obtains more representational features and significantly improve the representation of networks. 
Based on this, self-supervised learning is introduced into the training of GANs, and we divide these methods into two categories according to their self-supervision tasks: \textit{Predictive Self-supervised Learning} and \textit{Contrastive Self-supervised Learning}.

Predictive self-supervised learning is a popular method to improve the representation of neural networks by introducing additional supervised tasks, such as context prediction and rotation prediction . Predictive self-supervised learning is introduced into GANs by Chen et al. to avoid discriminator forgetting. Discriminator forgetting means that the discriminator fails to retain all tasks (e.g., learning varying levels of detail, structure, and texture) at the same time during training, which prevents it from obtaining a comprehensive representation of the current images. "If the outcome is your focus, then it’s easy to look for shortcuts. And ultimately shortcuts keep you from seeing the truth, it drains your spark for life. What matters most is your will to seek the truth despite the outcome."\footnote{From "JoJo's Bizarre Adventure: Golden Wind" by Araki Hirohiko.}.
The same is true for GANs: driven only by the discriminator loss, the discriminator can easily distinguish real images from generated images through shortcuts, instead of the texture and structural features we need. Predictive self-supervised learning solves this problem by introducing new generalization tasks, which also helps to prevent overfitting. The overview of the predictive self-supervised learning of GANs is shown in Figure \ref{fig:predict_self_supervised}. Depending on the data transformation function $T$, we can design different self-supervised tasks.
\begin{figure}
\centering
\includegraphics[scale=.4]{figs/predict_self_supervised.pdf}
\caption{Overview of the predictive self-supervised learning of GANs, where $C$ performs the prediction task and shares its weights with the discriminator except for the last layer, and {\color{red}$T$} denotes different data transformation techniques. Furthermore, the self-supervised task on generated images is used to update the generator (left) and the self-supervised task on real images is used to update the classifier (right).}
\label{fig:predict_self_supervised}
\end{figure}
Chen et al. introduce predictive self-supervision into GANs training, adopting rotation prediction as the auxiliary task to prevent the discriminator from forgetting. Besides, plenty of other prediction tasks have also been proposed to improve the discriminative ability. Huang et al. exploit feature exchange to make the discriminator learn the proper feature structure of natural images. Baykal et al. introduce a reshuffling task that randomly arranges the structural blocks of the images, thus helping the discriminator increase its expressive capacity for spatial structure and realistic appearance. In contrast to the methods that design tasks at the image or feature level, Patel et al.
propose a self-supervised task with latent transformation detection, which identifies whether the latent transformation applied in a given pair is the same as that of another pair. All of the above methods design different self-supervised tasks, and their loss functions can be formulated as:
\begin{equation}
\begin{split}
\mathcal{L}_{D,C}=-\lambda_{r}\mathbb{E}_{x\sim p^T_r}\mathbb{E}_{T_k\sim {T}}\log\big( C_k(x)\big) \quad \text { for } k=1, \ldots, K,\\
\mathcal{L}_G=-\lambda_{g}\mathbb{E}_{x\sim p^T_g}\mathbb{E}_{T_k\sim {T}}\log\big( C_k(x)\big) \quad \text { for } k=1, \ldots, K,
\end{split}
\end{equation}
where ${T}$ represents the type of image transformation, such as rotation and reshuffling. Furthermore, $T_k$ represents the different forms of the transformation ${T}$, such as $0^\circ,90^\circ,180^\circ, 270^\circ$ for the rotation task, $K$ is the number of transformation forms, $C_k$ is the k-th output of the classifier $C$, which shares parameters with the discriminator except for two different heads, and $P^T_r$ and $P^T_g$ are the transformed distributions of real and generated images, respectively. For the rotation task , $K=4$ and the classifier $C$ predicts the rotation angle; for the feature exchange task , $K=2$ and the classifier $C$ predicts whether the swap has occurred; for the block reshuffling task , the image is divided into $9$ blocks and the number of permutations is $9!$, which is unnecessarily huge. Thirty different permutations are selected in terms of the Hamming distances between the permutations in . As a result, $K$ is set to 30, and the classifier $C$ predicts the Hamming distances of the different permutations; for the latent transformation task, $K=2$ and the classifier $C$ predicts whether the two transformations are parameterized by the same $\epsilon$ or not. {Besides, some studies introduce an autoencoder task, making the discriminator reconstruct its input.
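The rotation task above ($K=4$) can be sketched as follows; `C` stands in for the classifier head that shares weights with the discriminator, and the batch layout and names are our own assumptions for illustration:

```python
import numpy as np

def rotation_ss_loss(C, x):
    # Average -log C_k over the four rotated copies (k = 0, 1, 2, 3),
    # i.e. the classifier should recover the rotation applied to each copy.
    loss = 0.0
    for k in range(4):
        xk = np.rot90(x, k, axes=(1, 2))  # rotate the (N, H, W) batch by 90k degrees
        probs = C(xk)                     # (N, 4) predicted rotation probabilities
        loss += -np.log(probs[:, k] + 1e-12).mean()
    return loss / 4.0
```

A classifier that guesses uniformly incurs a loss of $\log 4$; the auxiliary task rewards heads whose features actually encode image orientation.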
}
\begin{table*}
\renewcommand\arraystretch{1}
\caption{The summary of the Self-supervision}
\label{table:self-supervised}
\setlength{\tabcolsep}{0.5mm}
\centering
\begin{tabular}{c| c |c }
\toprule
\midrule
\tiny Method&\tiny Description&\tiny Types\\
\hline
\tiny Rotation Prediction &\tiny Predicting the angle of rotation ($0^\circ,90^\circ,180^\circ, 270^\circ$)&\tiny PSS\\
\hline
\tiny Feature Exchange Detection &\tiny Predicting whether exchanges have occurred at the feature level (yes or no)&\tiny PSS\\
\hline
\tiny Block Reshuffling Prediction &\tiny Predicting the Hamming distances of different reshufflings at the image level (30 categories in total)&\tiny PSS\\
\hline
\tiny Latent Transformation Detection &\tiny Predicting whether exchanges have occurred at the latent space level (yes or no)&\tiny PSS\\
\hline
\tiny Autoencoder &\tiny Reconstructing the input of the discriminator&\tiny PSS\\
\hline
\tiny InfoMax-GAN &\tiny \makecell[c]{Positive pairs: Global and local features of an image (both real and fake images)\\ Negative pairs: Global and local features of different images (both real and fake images)}&\tiny CSS\\
\hline
\tiny Cntr-GAN &\tiny \makecell[c]{Positive pairs: Two different data transformations of the same image (both real and fake images).\\ Negative pairs: Otherwise}&\tiny CSS\\
\hline
\tiny ContraD &\tiny \makecell[c]{Positive pairs: Two different data transformations of the same image (real images only) + Two fake images\\ Negative pairs: Otherwise}&\tiny CSS\\
\hline
\tiny InsGen &\tiny \makecell[c]{Positive pairs: Two different data transformations (additional latent transformations for fake images)\\ of the same image (both real and fake images).
Negative pairs: Otherwise.}&\tiny CSS\\
\hline
\tiny FakeCLR &\tiny \makecell[c]{Positive pairs: Two different data transformations and additional latent transformations for fake images.\\ Negative pairs: Otherwise.}&\tiny CSS\\
\bottomrule
\end{tabular}
\end{table*}
The above methods design different kinds of self-supervised prediction tasks that participate in the training of the discriminator or the generator independently, leaving a “loophole” that $G$ could exploit during generator learning to minimize $\mathcal{L}_G$ without truly learning the data distribution. To address this issue, Ngoc-Trung Tran et al. introduce a real-or-fake judgment along with the self-supervised prediction. The number of classes is $K+1$, and the loss function can be expressed as:
\begin{equation}
\begin{aligned}
\mathcal{L}_{D,C}=&-\lambda_{r}\Bigg(\mathbb{E}_{x\sim p^T_r}\mathbb{E}_{T_k\sim {T}}\log\big( C_k(x)\big)
+\mathbb{E}_{x\sim p^T_g}\mathbb{E}_{T_k\sim {T}}\log\big( C_{K+1}(x)\big)\Bigg)\quad \text { for } k=1, \ldots, K,\\
\mathcal{L}_G=&-\lambda_{g}\Bigg(\mathbb{E}_{x\sim p^T_g}\mathbb{E}_{T_k\sim {T}}\log\big( C_k(x)\big)
-\mathbb{E}_{x\sim p^T_g}\mathbb{E}_{T_k\sim {T}}\log\big( C_{K+1}(x)\big)\Bigg)\quad \text { for } k=1, \ldots, K,
\end{aligned}
\end{equation}
where $C_k$ is a classifier that predicts the rotation angles and $C_{K+1}$ is a classifier that predicts the authenticity of the images. The new self-supervised rotation-based GANs use a multi-class minimax game to avoid mode collapse, performing better than the original predictive self-supervised paradigm.
In summary, predictive self-supervised learning improves the discriminative ability by designing different self-supervised prediction tasks; among them, rotation prediction is widely used () for its simplicity and practicality.
The summary of the different methods is illustrated in Table \ref{table:self-supervised}. {These methods are independent of one another; however, few studies have used multiple self-supervised tasks simultaneously, and the use of multiple self-supervised tasks to improve GANs training is still an open problem.}

Contrastive self-supervised learning , as the name implies, learns representations by contrasting positive and negative examples. These techniques have resulted in empirical success in computer vision tasks with unsupervised contrastive pre-training. A number of studies demonstrate that self-supervised learning outperforms its supervised pre-training counterpart in many tasks, which indicates that contrastive self-supervised learning leads to more expressive features. Considering two views ($v^{(1)}$ and $v^{(2)}$), contrastive self-supervised learning aims to identify whether the two views are dependent or not. More specifically, this amounts to maximizing the mutual information of positive pairs. To this end, Oord et al. propose to minimize the InfoNCE loss, which turns out to maximize a lower bound of the mutual information.
The InfoNCE loss is defined by:
\begin{equation}
L_{\mathrm{NCE}}\left({v}_{i}^{(1)} ; {v}^{(2)}, s\right):=-\log \frac{\exp \left(s\left({v}_{i}^{(1)}, {v}_{i}^{(2)}\right)\right)}{\sum_{j=1}^{K} \exp \left(s\left({v}_{i}^{(1)}, {v}_{j}^{(2)}\right)\right)},
\end{equation}
where $s(\cdot,\cdot)$ is the score function that measures the similarity, positive pairs ($v_i^{(1)}$ and $v_i^{(2)}$) are different views of the same sample, and negative pairs ($v_i^{(1)}$ and $v_j^{(2)}$, $i\neq j$) are different views of different samples. The InfoNCE loss is the cornerstone of contrastive self-supervised learning, as depicted in Figure \ref{fig:contrastive_self_supervised}.
\begin{figure}
\centering
\includegraphics[scale=.3]{figs/constrative_self_supervised.pdf}
\caption{Overview of contrastive self-supervised learning, where $x$ is a real or fake image, $v^{(1)}$ and $v^{(2)}$ are different views of the image $x$, and $s$ is the score function (usually the discriminator in GANs) that measures the similarity. The squares in the middle part are the labels of InfoNCE (blue, white, and black squares are labels of 1, 0, and undefined, respectively), and the squares in the right part are the labels of SimCLR. Obviously, SimCLR defines more negative pairs to improve the sample utilization.}
\label{fig:contrastive_self_supervised}
\end{figure}
Many advanced self-supervised methods are implemented by modifying the views of images ($v^{(1)}$, $v^{(2)}$) and the score function $s(\cdot,\cdot)$. Specifically, Deep InfoMAX maximizes the mutual information between local and global features; that is, an image $x$ passes through the encoder $E_{\psi}=f_{\psi}\circ C_{\psi}$, producing a local feature map $C_{\psi}(x)$ and a global feature vector $E_{\psi}(x)$.
To maximize the lower bound of the InfoMax objective $\mathcal{I}\big(C_{\psi}(x),E_{\psi}(x)\big)$, the InfoMAX loss is defined as:
\begin{equation}
\begin{aligned}
L_{InfoMAX}(X)=&-\mathbb{E}_{x\in X}\mathbb{E}_{i\in\mathcal{A}}\big[
\log \frac{\exp \left(g_{\theta, \omega}\left(C_{\psi}^{(i)}(x), E_{\psi}(x)\right)\right)}{\sum_{\left(x^{\prime}, i\right) \in X \times \mathcal{A}} \exp \left(g_{\theta, \omega}\left(C_{\psi}^{(i)}\left(x^{\prime}\right), E_{\psi}(x)\right)\right)}\big],\\
g_{\theta, \omega}&\left(C_{\psi}^{(i)}(x), E_{\psi}(x)\right)=\phi_{\theta}\left(C_{\psi}^{(i)}(x)\right)^{T} \phi_{\omega}\left(E_{\psi}(x)\right),
\end{aligned}
\end{equation}
where $X=\{x_1,\cdots,x_N\}$ is a set of random images and $\mathcal{A}=\{0,1,\cdots,M^2-1\}$ represents the indices of an $M\times M$ spatial sized local feature map. Based on this, positive sample pairs are $C^{(i)}_{\psi}(x)$ and $E_{\psi}(x)$, and negative sample pairs are $C^{(i)}_{\psi}(x^{'})$ and $E_{\psi}(x)$, where $x^{'}$ is a different image from $x$. SimCLR is another popular contrastive learning framework, which applies two independent transformations, namely $t_1$ and $t_2$, to obtain the different views $v^{(1)},v^{(2)}=t_1(x),t_2(x)$.
The loss function of SimCLR is defined as:
\begin{equation}
\begin{aligned}
L_{\mathrm{SimCLR}}\left({v}^{(1)}, {v}^{(2)}\right)=\frac{1}{N} \sum_{i=1}^{N}\left(L_{\mathrm{NCE}}\left({v}_{i}^{(1)} ;\left[{v}^{(2)} ; {v}_{-i}^{(1)}\right], s_{\mathrm{SimCLR}}\right)\right),
\end{aligned}
\end{equation}
where ${v}_{-i}:={v} \backslash\left\{{v}_{i}\right\}$ and $s_{\mathrm{SimCLR}}$ is defined as:
\begin{equation}
s_{\text {SimCLR }}\left({v}^{(1)}, {v}^{(2)} ; f, h\right)=\frac{h\left(f\left({v}^{(1)}\right)\right) \cdot h\left(f\left({v}^{(2)}\right)\right)}{\tau \cdot\left\|h\left(f\left({v}^{(1)}\right)\right)\right\|_{2}\left\|h\left(f\left({v}^{(2)}\right)\right)\right\|_{2}}.
\end{equation}
As shown in Figure \ref{fig:contrastive_self_supervised}, SimCLR defines more negative pairs than InfoNCE to improve the sample utilization. {However, SimCLR needs a large batch size to obtain sufficiently rich negative samples (in , the batch size is set to 4096). To alleviate SimCLR's dependence on large batch sizes, MoCo introduces a negative queue to store and update negative samples.}
\begin{comment}
Sun et al. used the triplet matching objective as a pretext task for cGANs: pairs of images with the same category will have similar features and vice versa. Specifically, positive pairs contain the same category, such as two images with black hair; negative pairs contain different category, such as a image with black hair and a image with white hair.
The loss function can be expressed as:
\begin{equation}
\begin{aligned}
\mathcal{L}_{D}=-\lambda_{d}&\mathop{\mathbb{E}}\limits_{x_a,x_p\sim P_{X|y},x_n\sim P_{X|y'}}\bigg[\log\big(D_{mch}(x_a,x_p)\big)\\
&+\log\big(1-D_{mch}(x_a,x_n)\big)\bigg],\\
\mathcal{L}_{G}=-\lambda_{g}&\mathop{\mathbb{E}}\limits_{x_1,x_2\sim P_{X|y},x_3\sim P_{X|y'}}\bigg[\log\big(D_{mch}(G(x_1,y),G(x_2,y))\big)\\
&+\log\big(1-D_{mch}(G(x_1,y),G(x_3,y'))\big)\bigg],
\end{aligned}
\end{equation}
where $y\neq y'$, $x_a$ and $x_p$ are positive pairs of real images, while $x_a$ and $x_n$ are negative pairs of real images.
\end{comment}
The self-supervised methods mentioned above are also widely applied to the training of GANs.
Inspired by Deep InfoMax , Lee et al. propose InfoMax-GAN, which maximizes the mutual information between local and global features of real and fake images. The regularization of the discriminator is expressed as:
\begin{equation}
\mathcal{L}_\mathrm{InfoMax-GAN}=\lambda_d \{L_{\mathrm{InfoMAX}}(x_{r})+L_{\mathrm{InfoMAX}}(x_{f}) \},
\end{equation}
where $x_r$ and $x_f$ represent sets of real and fake images, respectively.
Inspired by SimCLR , some studies introduce different data transformation techniques to create positive and negative pairs during GANs training. Zhao et al. propose Cntr-GAN, where the SimCLR loss is used to regularize the discriminator on two randomly augmented copies of both real and fake images. The regularization of the discriminator for a transformation $T$ is:
\begin{equation}
\mathcal{L}_{\mathrm{Cntr-GAN}}=\lambda_d \{L_{\mathrm{SimCLR}}(x_{r}, T(x_{r}))+L_{\mathrm{SimCLR}}(x_{f}, T(x_{f})) \}.
\label{Eq:Cntr-GAN}
\end{equation}
They also compare the effect of different data transformation techniques (mentioned in Section 5.1) on Cntr-GAN. Figure 6 in shows the FID results of BigGAN with the added SimCLR loss on the CIFAR-10 dataset.
The results illustrate that spatial transformations still work better than visual transformations, and the best FID of 11.87 is achieved by applying adjusted SimCLR transformations with a cropping/resizing strength of 0.3. Although the auxiliary SimCLR regularization improves GAN training, it does not outperform existing methods based on simple data augmentations, $e.g.$, bCR (demonstrated in Figure 2 of ).
To improve the efficiency of contrastive learning, Jeong et al. propose the Contrastive Discriminator (ContraD), a way of training the discriminator of GANs using an improved SimCLR. Different from Cntr-GAN, which applies the SimCLR loss to both real and generated images, ContraD uses the SimCLR loss on the real images and a supervised contrastive loss on the generated images. The supervised contrastive loss exploits the contrast between real and generated images, which is exactly what a GAN discriminator requires. More concretely, for two views $v^{(1)},v^{(2)}=t_1(x),t_2(x)$ with ${t}_{1}, {t}_{2} \sim {T}$, the loss for real images is:
\begin{equation}
L_{\text {con }}^{+}\left(D, h_{{r}}\right)=L_{\mathrm{SimCLR}}\left({v}_{{r}}^{(1)}, {v}_{{r}}^{(2)} ; D, h_{{r}}\right),
\end{equation}
where $h_r$ is a projection head for this loss. The loss for generated images, however, is an extended version of the contrastive loss that supports supervised learning by allowing more than one view to be positive. More concretely, all the views from fake samples are assumed to have the same label against those from real samples. Formally, for each $v_i^{(1)}$, the positive views are represented by $V_{i+}^{(2)}$, a subset of $v^{(2)}$.
The supervised contrastive loss is defined by:
\begin{equation}
L_{\text {SupCon }}\left({v}_{i}^{(1)}, {v}^{(2)}, V_{i+}^{(2)}\right)=
-\frac{1}{\left|V_{i+}^{(2)}\right|} \sum_{{v}_{i+}^{(2)} \in V_{i+}^{(2)}} \log \frac{\exp \left(s_{{SimCLR}}\left({v}_{i}^{(1)}, {v}_{i+}^{(2)}\right)\right)}{\sum_{j} \exp \left(s_{{SimCLR}}\left({v}_{i}^{(1)}, {v}_{j}^{(2)}\right)\right)}.
\end{equation}
Using this notation, the ContraD loss for fake samples is:
\begin{equation}
L_{\text {con }}^{-}\left(D, h_{f}\right)=
\frac{1}{N} \sum_{i=1}^{N} L_{\text {SupCon }}\left({v}_{f, i},\left[{v}_{f,-i} ; {v}_{{r}}^{(1)} ; {v}_{{r}}^{(2)}\right],\left[{v}_{f,-i}\right] ; D, h_{{f}}\right),
\end{equation}
where $v_f= t_3(G(z))$ is a random view of the fake samples ($t_3\sim T$), and $v_{-i}=v\backslash \{v_i\}$ is the subset of $v$ that does not contain $v_i$. It is pertinent to note that the authors also use an independent projection head $h_f$ in this loss instead of the $h_r$ in $L_{\text {con }}^{+}\left(D, h_{{r}}\right)$. {Applying supervised contrastive learning to the fake images introduces the real/fake information into contrastive learning, which improves the efficiency of contrastive learning and thus the discrimination ability of the discriminator.}
To sum up, ContraD learns its contrastive representation by minimizing the following regularization loss:
\begin{equation}
L_{\text {con }}\left(D, h_{{r}}, h_{{f}}\right)=L_{\text {con }}^{+}\left(D, h_{{r}}\right)+\lambda_{\text {con }} L_{{con}}^{-}\left(D, h_{{f}}\right).
\end{equation}
The experimental results show that ContraD consistently improves the performance of GANs compared to other methods, such as Cntr-GAN, DiffAug, bCR, and CR. However, ContraD with different data transformations is not discussed further.
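The InfoNCE objective underlying all of the contrastive methods above can be sketched directly on embedding matrices. This NumPy version uses the cosine-similarity score $s_{\mathrm{SimCLR}}$ with temperature $\tau$; the names and shapes are illustrative assumptions, not taken from any specific implementation:

```python
import numpy as np

def info_nce(v1, v2, tau=0.1):
    # v1, v2: (N, d) embeddings of two views; row i of v1 is positive
    # with row i of v2 and negative with every other row of v2.
    a = v1 / np.linalg.norm(v1, axis=1, keepdims=True)
    b = v2 / np.linalg.norm(v2, axis=1, keepdims=True)
    logits = a @ b.T / tau                        # cosine similarities / temperature
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_p).mean()                 # -log softmax on the diagonal
```

When the two views of each sample agree, the diagonal dominates the score matrix and the loss is small; mismatched positives drive the loss up, which is the gradient signal the discriminator representations are trained on.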
\n\t{The achievement of above SimCLR-based contrastive learning methods depends on the sufficiently large batch size. However, large-scale GANs training often has only a small batch size for limited computational resources. Therefore, MoCo-based contrastive learning method, namely InsGen , has been introduced into GANs training. InsGen follows the MoCo-v2 to store the various negative samples with an extra queue. Furthermore, it also introduces a latent space augmentation for fake images. Combining with ADA and MoCo-based contrastive learning, InsGen has achieved state-of-the-art performance on a variety of datasets and training settings. Recently, Li et al. identify that only latent space augmentation for fake images brings the major performance improvement and contrastive learning in real images causes performance drop on limited data generation (DE-GANs). Based on this, they propose FakeCLR, which only applies contrastive learning on perturbed fake samples and devises three related training techniques. The experimental results manifest the new state of the arts in both few-shot generation and limited-data generation.}\n\tIn summary, contrastive self-supervised learning designs different positive and negative pairs and maximizes the mutual information of positive pairs according to the InfoNCE loss. Different from classification and segmentation tasks, two types of samples (real and fake images) exist for generating adversarial networks, which add more possibilities to the definition of positive and negative pairs. In the future, score-based contrastive lea rning may be proposed during the training of GANs. 
The summary of the contrastive self-supervised regularization techniques of GANs is given in Table \ref{table:self-supervised}.

{According to the perspective of \textbf{"Real $\&$ Fake"}, many regularization and normalization techniques inspired by supervised learning have been applied to GANs training. Their key point is to improve the representation and generalizability of the discriminator. \textit{Data Augmentation and Preprocessing} is a basic operation comprising many types, such as spatial augmentation, visual augmentation, frequency augmentation, and noise augmentation. Among them, combining adaptive strategies with all augmentation types has achieved the most remarkable results and has been adopted as a default operation in most GANs training. \textit{Consistency Regularization} and \textit{Self-supervision} design additional tasks on top of data augmentation, which further improve the efficiency of data augmentation and extract more useful information under stronger data augmentation beyond the existing yet limited practices.
Currently, combining contrastive self-supervised learning with adaptive data augmentation has achieved the state of the art in GANs training.}

From the perspective of "Fitting distribution", the generator is considered a distribution mapping function and the optimal discriminator corresponds to a distribution divergence. The Wasserstein distance is a popular and important metric in GANs, and it corresponds to the optimal transport problem of the generator. To solve the dual problem of the Wasserstein distance, Lipschitz continuity is introduced into GANs training. The Wasserstein-based GANs (WGAN and WGAN-GP) have achieved remarkable results during training. However, some studies suggest that the success of WGAN-GP is not due to the Wasserstein distance itself, and that a Lipschitz constraint on the discriminator may improve the performance and stability of GANs training regardless of the statistical distance used as the loss function. Therefore, the Lipschitz continuity of the discriminator is an essential condition during GANs training.
{Weight clipping is the first and simplest solution to enforce a Lipschitz constraint: it clamps the weights of the discriminator to a fixed box after each gradient update.} Furthermore, {gradient penalty}, {weight normalization}, and {weight regularization} are widely applied in GANs training to fulfill Lipschitz continuity, as summarized in the subsequent subsections.

Gradient penalty is a simple and direct way to fulfill Lipschitz continuity. Specifically, K-Lipschitz continuity of a function $f$ can be approximately enforced by $\mathop{\min}\mathbb{E}_{\hat{x}\sim\pi}(||\nabla f(\hat{x})||_2-{\rm K})^2$. According to the optimal transport theory mentioned in Section \ref{sect:A-1} of the Supplementary Online-only Material, the gradient penalty can be used to approximate $W_c(\mu,\upsilon)$ in WGANs, named WGAN-GP . Specifically, WGAN-GP fulfills the 1-Lipschitz continuity of the discriminator by $\mathop{\min}\mathbb{E}_{\hat{x}\sim\pi}(||\nabla D_{\theta}(\hat{x})||_2-{\rm 1})^2$, which pushes the gradient norm of the discriminator towards 1. Although WGAN-GP alleviates the instability of GANs training to some extent, the underlying optimal transport formulation is a constrained linear programming problem, and the overly strict restriction reduces the exploratory capacity of the discriminator.
In contrast, the optimal transport with a regularization term is an unconstrained optimization problem. Just as plain optimal transport corresponds to 1-Lipschitz continuity, the regularized optimal transport corresponds to k-Lipschitz continuity (${\\rm k}\\leq1$) of the discriminator; the resulting method, named WGAN-LP , is implemented by $\\mathop{\\min}\\mathbb{E}_{\\hat{x}\\sim\\pi}\\left[\\left(\\mathop{\\max}\\{0,||\\nabla D_{\\theta}(\\hat{x})||_2-1\\}\\right)^2\\right]$. WGAN-LP achieves better performance by using a weaker regularization term that enforces the Lipschitz constraint of the discriminator. \n\tWGAN-GP and WGAN-LP introduce the Wasserstein distance into the GAN framework. Due to the gap between limited input samples and the strict Lipschitz constraint on the whole input sample domain, the approximation of the Wasserstein distance is a challenging task. To this end, WGAN-div introduces a Wasserstein divergence into GANs training. The objective of WGAN-div can be smoothly derived as:\n\t\\begin{equation}\n\t\t\\mathbb{E}_{y\\sim q(y)}[\\varphi(y)]-\\mathbb{E}_{x\\sim p(x)}[\\varphi(x)]+k \\mathbb{E}_{\\hat{x}\\sim\\pi}\\left[ ||\\nabla\\varphi(\\hat{x})||^p\\right].\n\t\\end{equation}\n\tThe objective of WGAN-div is similar to those of WGAN-GP and WGAN-LP. It can be considered as achieving 0-Lipschitz continuity of the discriminator by adopting $\\mathop{\\min}\\mathbb{E}_{\\hat{x}\\sim\\pi}\\left[||\\nabla D_{\\theta}(\\hat{x})||^p\\right]$. \n\tGenerally, the Wasserstein distance and Wasserstein divergence are reliable ways of measuring the difference between the fake and real data distributions, which leads to the stable training of WGAN-based algorithms. However, a recent study shows that the c-transform method achieves a better estimate of the Wasserstein distance but leads to worse performance compared to the gradient penalty method.
The results demonstrate that the success of WGAN-based methodologies cannot truly be attributed to approximating the Wasserstein distance; rather, it is the gradient penalty itself that improves the performance. Furthermore, some studies also demonstrate that gradient penalty methods on the discriminator, such as 1-GP, k-GP (${\\rm k}\\leq 1$), and 0-GP, stabilize the training and improve the performance of GANs remarkably, regardless of the loss function. Based on these observations, stabilizing GANs training using gradient penalty is widely applied in the research community for various GAN losses. In the rest of this section, we discuss gradient penalty methods regardless of the loss function by dividing them into three parts: \\textit{1-GP}: $\\mathop{\\min}\\mathbb{E}_{\\hat{x}\\sim\\pi}(||\\nabla D_{\\theta}(\\hat{x})||-{\\rm 1})^p$, \\textit{k-GP (${\\rm k}\\leq1$)}: $\\mathop{\\min}\\mathbb{E}_{\\hat{x}\\sim\\pi}\\left[\\left(\\mathop{\\max}\\{0,||\\nabla D_{\\theta}(\\hat{x})||-1\\}\\right)^p\\right]$, and \\textit{0-GP}: $\\mathop{\\min}\\mathbb{E}_{\\hat{x}\\sim\\pi}\\left[||\\nabla D_{\\theta}(\\hat{x})||^p\\right]$, where $\\pi$ is a distribution over the image space (the entire space or a part of it) and $||\\cdot||$ denotes the norm of the gradient.
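Given a batch of sampled gradient norms $||\nabla D_\theta(\hat x)||$, the three penalty families differ only in how each norm is mapped to a cost. A pure-Python sketch with $p=2$ and toy norm values (not real training data):

```python
def gp_1(grad_norms):
    """1-GP: E[(||grad D|| - 1)^2] -- pulls gradient norms toward 1."""
    return sum((g - 1.0) ** 2 for g in grad_norms) / len(grad_norms)

def gp_k(grad_norms):
    """One-sided k-GP (k <= 1): E[max(0, ||grad D|| - 1)^2] -- only
    penalizes norms that exceed 1."""
    return sum(max(0.0, g - 1.0) ** 2 for g in grad_norms) / len(grad_norms)

def gp_0(grad_norms):
    """0-GP (zero-centered): E[||grad D||^2] -- pulls norms toward 0."""
    return sum(g ** 2 for g in grad_norms) / len(grad_norms)

norms = [0.5, 1.0, 2.0]
# A norm below 1 is charged by gp_1 and gp_0 but not by gp_k.
```

The comparison on `norms` makes the qualitative difference concrete: the one-sided variant leaves small gradients free, while the zero-centered variant penalizes every nonzero gradient.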
Generally, the loss function of the discriminator with GP can be formulated as:\n\t\\begin{equation}\\label{GP}\n\t\t\\mathcal{L}_{D}=f(\\phi,\\theta)+\\lambda\\mathcal{L}_{GP},\n\t\\end{equation}\n\twhere $f(\\phi,\\theta)$ is the uniform loss function defined in Eq (\\ref{EQ:eqn1}) and $\\mathcal{L}_{GP}$ is the gradient penalty regularization.", "id": "1d7d66d8-7065-4eb8-b5e7-40c9b5f20e5c", "level": "subsection", "origin_cites_number": 8, "parent_id": "fdfb947c-602c-4735-9040-96d30810700e", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Regularization and Normalization of \"Fitting distribution\" " ], [ "subsection", "Gradient Penalty" ] ], "subsections": [ "258a5900-7326-4a5a-9ab5-4d4abbbbc5a5", "4a0b3b8a-89ed-4329-a9ef-855c13a16be6", "349a6a29-8d68-4bed-a022-d5ef2efaccc2" ], "title": "Gradient Penalty" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 141, 140, 139, 8317, 6997 ], "content": "Gulrajani et al. used 1-GP in WGAN-GP to train GANs. WGAN-GP uses the 2-norm gradient penalty across the entire image domain, which can be formulated as: \n\t\\begin{equation}\\label{wgan-gp}\n\t\t\\mathcal{L}_{GP}=\\mathbb{E}_{\\hat{x}\\sim\\pi}(||\\nabla D_{\\theta}(\\hat{x})||_2-1)^2,\n\t\\end{equation}\n\twhere $\\pi$ is the distribution of the entire image space, approximated by the interpolation of the real distribution ($p_r$) and the generated distribution ($p_g$): $\\pi=t\\cdot p_r+(1-t)\\cdot p_g$ for $t\\sim U[0,1]$. Although WGAN-GP stabilizes the training of GANs to a great extent, the overly strict gradient penalty limits the exploratory capacity of the discriminator. To loosen the penalty, many variations of $\\pi$, $||\\cdot||$, and the gradient direction have been proposed. \n\tTo relax the image distribution, Kodali et al. track the training process of GANs and find that a decrease of the Inception Score (IS) is accompanied by a sudden change of the discriminator’s gradient around the real images.
The authors propose DRAGAN, which restricts the Lipschitz constant around the real images, $\\pi=p_r+\\epsilon$, where $\\epsilon\\sim N_d(0,cI)$.\n\tIn order to relax the gradient direction, Zhou et al. argue that restricting the global Lipschitz constant is unnecessary; therefore, only the maximum gradient needs to be penalized:\n\t\\begin{equation}\n\t\t\\mathcal{L}_{GP}=\\left(\\mathop{\\max}\\limits_{\\hat{x}\\sim\\pi}||\\nabla D_{\\theta}(\\hat{x})||_2-1\\right)^2,\n\t\\end{equation}\n\twhere $\\pi=t\\cdot p_r+(1-t)\\cdot p_g$. Furthermore, inspired by Virtual Adversarial Training (VAT) , D{\\'a}vid et al. propose a method, called Adversarial Lipschitz Regularization (ALR), which restricts the 1-Lipschitz continuity at $\\pi=p_r\\cup p_g$ in the direction of adversarial perturbation. {The adversarial perturbation direction is the most unstable one; restricting 1-Lipschitz continuity along it amounts to restricting only the largest Lipschitz constant, which is simpler and more efficient than the previous methods.} The proposed ALP achieves SOTA performance in terms of Inception Score and Fréchet Inception Distance among non-progressive growing methods trained on the CIFAR-10 dataset.\n\tContrary to the methods that penalize the gradient in Euclidean space, Adler et al. extend the gradient penalty from the $L_p(p=2)$ space to Banach spaces, which contain both the $L_p$ and Sobolev spaces. For a Banach space $B$, the dual norm $||\\cdot||_{B^*}$ is defined as: \n\t\\begin{equation}\n\t\t||x^*||_{B^*}=\\sup\\limits_{x\\in B}\\frac{x^*(x)}{||x||_B}.\n\t\\end{equation}\n\tThus, the gradient penalty of Banach Wasserstein GAN can be expressed as:\n\t\\begin{equation}\n\t\t\\mathcal{L}_{GP}=\\mathbb{E}_{\\hat{x}\\sim\\pi}(||\\nabla D_{\\theta}(\\hat{x})||_{B^*}-1)^2,\n\t\\end{equation}\n\twhere $\\pi=t\\cdot p_r+(1-t)\\cdot p_g$.
{Banach Wasserstein GAN extends the Lipschitz constraint to Banach spaces containing both the $L_p$ space and the Sobolev space, which makes it more general than the standard Wasserstein GAN.}", "id": "258a5900-7326-4a5a-9ab5-4d4abbbbc5a5", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "1d7d66d8-7065-4eb8-b5e7-40c9b5f20e5c", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Regularization and Normalization of \"Fitting distribution\" " ], [ "subsection", "Gradient Penalty" ], [ "subsubsection", "1-GP" ] ], "subsections": [], "title": "1-GP" }, { "cite_extract_rate": 1, "cites": [ 88, 142, 8317 ], "content": "\\leq1$)}\n\t{k-GP (${\\rm k}\\leq1$) was first tested by Gulrajani et al. and named the one-sided gradient penalty. It uses the 2-norm gradient penalty across the entire image domain, which is formulated as: }\n\t\\begin{equation}\\label{wgan-lp}\n\t\t\\mathcal{L}_{GP}=\\mathbb{E}_{\\hat{x}\\sim\\pi}\\left[\\left(\\mathop{\\max}\\{0,||\\nabla D_{\\theta}(\\hat{x})||_2-1\\}\\right)^2\\right],\n\t\\end{equation}\n\twhere $\\pi$ is the distribution of the entire image space, approximated by the interpolation of the real distribution ($p_r$) and the generated distribution ($p_g$): $\\pi=t\\cdot p_r+(1-t)\\cdot p_g$ for $t\\sim U[0,1]$. Inspired by the optimal transport with a regularization term, Petzka et al. also used k-GP (${\\rm k}\\leq1$) to train GANs, naming the method WGAN-LP. Furthermore, Xu et al. show a more general dual form of the Wasserstein distance compared to the KR duality (mentioned in Section 2.4), named the Sobolev duality, which relaxes the Lipschitz constraint but still maintains the favorable gradient property of the Wasserstein distance. The authors also show that the KR duality is a special case of the proposed Sobolev duality.
Based on the Sobolev duality, the relaxed gradient penalty of the proposed SWGAN is formulated as:\n\t\\begin{equation}\\label{wgan-sp}\n\t\t\\mathcal{L}_{GP}=\\mathbb{E}_{\\hat{x}\\sim\\pi}\\left[\\left(\\mathop{\\max}\\{0,||\\nabla D_{\\theta}(\\hat{x})||^2-1\\}\\right)^2\\right],\n\t\\end{equation}\n\twhere $\\pi=t\\cdot p_r+(1-t)\\cdot p_g$ for $t\\sim U[0,1]$. Interestingly, although the above three methods are derived through different relaxation routes, they yield essentially the same form of gradient penalty.
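The interpolated distribution $\pi=t\cdot p_r+(1-t)\cdot p_g$ shared by these penalties is trivial to sample. A plain-Python sketch using toy 2-D batches with hypothetical values:

```python
import random

def sample_interpolates(real_batch, fake_batch, seed=0):
    """Draw x_hat = t*x_r + (1-t)*x_g with t ~ U[0,1], one t per
    real/fake pair -- the distribution pi of WGAN-GP-style penalties."""
    rng = random.Random(seed)
    interpolates = []
    for xr, xg in zip(real_batch, fake_batch):
        t = rng.uniform(0.0, 1.0)
        interpolates.append([t * r + (1.0 - t) * g for r, g in zip(xr, xg)])
    return interpolates

real = [[1.0, 1.0], [2.0, 0.0]]
fake = [[0.0, 0.0], [0.0, 2.0]]
x_hat = sample_interpolates(real, fake)
# each coordinate of x_hat lies on the segment between the paired samples
```

In practice the penalty is then evaluated at these `x_hat` points; the sketch only shows where the penalty is applied, not the differentiation through the discriminator.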
$\\mu$ and $v$ are real and generated distribution, respectively.}\n\t\t\\footnotesize\n\t\t\\label{table:gradient penalty}\n\t\t\\centering\n\t\t\\begin{tabular}{c | c | c | c }\t\n\t\t\t\\toprule\n\t\t\t\\midrule\n\t\t\t{ Method}& { $\\mathcal{L}_{GP}$}& $\\pi$&Lipschitz continuity\t \\\\\n\t\t\t\\hline\n\t\t\tWGAN-GP &$\\mathbb{E}_{\\hat{x}\\sim\\pi}(||\\nabla D_{\\theta}(\\hat{x})||_2-1)^2$&$t\\cdot p_r+(1-t)\\cdot p_g$&$||D_{\\theta}||_{Lip}\\to1$\\\\\n\t\t\t\\hline\n\t\t\tDRAGAN &$\\mathbb{E}_{\\hat{x}\\sim\\pi}(||\\nabla D_{\\theta}(\\hat{x})||_2-1)^2$&$p_r+\\epsilon$&$||D_{\\theta}||_{Lip}\\to1$\\\\\n\t\t\t\\hline \n\t\t\tMax-GP &$\\left(\\mathop{\\max}\\limits_{\\hat{x}\\sim\\pi}||\\nabla D_{\\theta}(\\hat{x})||_2-1\\right)^2$&$t\\cdot p_r+(1-t)\\cdot p_g$&$||D_{\\theta}||_{Lip}\\to1$\\\\\n\t\t\t\\hline\n\t\t\tALP &$\\mathbb{E}_{\\hat{x}\\sim\\pi}(||\\nabla D_{\\theta}(\\hat{x})||_2-1)^2$&$p_r\\cup p_g$&$||D_{\\theta}||_{ALP-Lip}\\to1$\\\\\n\t\t\t\\hline\n\t\t\tBanach-GP &$\\mathbb{E}_{\\hat{x}\\sim\\pi}(||\\nabla D_{\\theta}(\\hat{x})||_{B^*}-1)^2$&$t\\cdot p_r+(1-t)\\cdot p_g$&$||D_{\\theta}||_{Lip}\\to1$\\\\\n\t\t\t\\hline\n\t\t\tWGAN-LP &$\\mathbb{E}_{\\hat{x}\\sim\\pi}\\left[\\left(\\mathop{\\max}\\{0,||\\nabla D_{\\theta}(\\hat{x})||_2-1\\}\\right)^2\\right]$&$t\\cdot p_r+(1-t)\\cdot p_g$&$||D_{\\theta}||_{Lip}\\leq1$\\\\\n\t\t\t\\hline\n\t\t\tSWGAN &$\\mathbb{E}_{\\hat{x}\\sim\\pi}\\left[\\left(\\mathop{\\max}\\{0,||\\nabla D_{\\theta}(\\hat{x})||^2-1\\}\\right)^2\\right]$&$t\\cdot p_r+(1-t)\\cdot p_g$&$||D_{\\theta}||_{Lip}\\leq1$\\\\\n\t\t\t\\hline\n\t\t\tzc-GP &$\\mathbb{E}_{\\hat{x}\\sim\\pi}||\\nabla D_{\\theta}(\\hat{x})||^2_2$&$p_r\\cup p_g$&$||D_{\\theta}||_{Lip}\\to0$\\\\\n\t\t\t\\hline\n\t\t\tGAN-QP &$\\mathcal{L}_{GP}=\\mathbb{E}_{x_r,x_g\\sim\\pi}\\frac{\\left(D_{\\theta}(x_r)-D_{\\theta}(x_f)\\right)^2}{||x_r-x_f||}$&$\\pi=p_r\\cdot p_g$&$\\frac{\\left(D_{\\theta}(x_r)-D_{\\theta}(x_f)\\right)^2}{||x_r-x_f||}\\to 
0$\\\\\n\t\t\t\\hline\n\t\t\tZP-Max &$\\mathop{\\max}\\limits_{\\hat{x}\\sim\\pi}||\\nabla D_{\\theta}(\\hat{x})||^2_2$&$t\\cdot p_r+(1-t)\\cdot p_g$&$||D_{\\theta}||_{Lip}\\to0$\\\\\n\t\t\t\\hline\n\t\t\tZP &$\\mathbb{E}_{\\hat{x}\\sim\\pi}||\\nabla D_{\\theta}(\\hat{x})||^2_2$&$t\\cdot p_r+(1-t)\\cdot p_g$&$||D_{\\theta}||_{Lip}\\to0$\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{table} \n\tWu et al. use 0-GP and propose the Wasserstein divergence, which is optimized by minimizing:\n\t\\begin{equation}\n\t\t\\label{0-gp}\n\t\t\\mathcal{L}_{GP}=\\mathbb{E}_{\\hat{x}\\sim\\pi}||\\nabla D_{\\theta}(\\hat{x})||^2,\n\t\\end{equation}\n\twhere $\\pi$ covers both the real distribution ($p_r$) and the generated distribution ($p_g$): $\\pi=p_r\\cup p_g$. Furthermore, Mescheder et al. also demonstrate that the optimization of unregularized GANs is not always locally convergent and that some simplified zero-centered gradient penalty (zc-GP) techniques, implemented by minimizing Eq (\\ref{0-gp}), can be used to achieve local convergence of GANs. Li et al. introduce adversarial training into discriminator training, which turns out to be an adaptive 0-GP. \n\tBesides, some other 0-GP methods arise from different theoretical derivations. For instance, Su et al. propose a Quadratic Potential (QP) for GANs training with the following formulation:\n\t\\begin{equation}\n\t\t\\mathcal{L}_{GP}=\\mathbb{E}_{x_r,x_f\\sim\\pi}\\frac{\\left(D_{\\theta}(x_r)-D_{\\theta}(x_f)\\right)^2}{||x_r-x_f||},\n\t\\end{equation}\n\twhere $\\pi$ is the joint distribution of the real and generated distributions: $\\pi=p_r\\cdot p_g$; Zhang et al. incorporate a Total Variation (TV) regularization term, $|D_{\\theta}(x_r)-D_{\\theta}(x_f)-\\delta|$, into the training of GANs. According to , the TV term can be approximated by Eq (\\ref{0-gp}), which is encouraging; Zhou et al.
propose Lipschitz GANs, which penalize the maximum of the gradient norms to guarantee gradient informativeness:\n\t\\begin{equation}\n\t\t\\mathcal{L}_{GP}=\\mathop{\\max}\\limits_{\\hat{x}\\sim\\pi}||\\nabla f(\\hat{x})||^2_2,\n\t\\end{equation}\n\twhere $\\pi=t\\cdot p_r+(1-t)\\cdot p_g$; Thanh-Tung et al. also propose a 0-GP with the gradient penalized at $\\pi=t\\cdot p_r+(1-t)\\cdot p_g$:\n\t\\begin{equation}\n\t\t\\mathcal{L}_{GP}=\\mathbb{E}_{\\hat{x}\\sim\\pi}||\\nabla f(\\hat{x})||^2_2.\n\t\\end{equation}\n\tIn summary, gradient penalty techniques are widely used in GANs training to achieve Lipschitz continuity of the discriminator. As shown in Table \\ref{table:gradient penalty}, many techniques have been proposed based on different theories and phenomena, but to the best of our knowledge, there is no fair and comprehensive work comparing the performance of these gradient penalty methods.\n\t\\begin{comment}\n\tThe summary of the gradient penalty of the discriminator is shown in TABLE \\ref{table:gradient penalty}. Based on the summary and the 1-Lipschitz continuity in WGAN-GP, the gradient penalty has been improved in two directions.\n\t\\begin{enumerate}\n\t\\item[*] Limit the Lipschitz constant so that let the $||D||_{Lip}\\leq 1$ or $||D||_{Lip}\\to 0$.\n\t\\item[*] Explore the suitable scope for Lipschitz continuity.\n\t\\end{enumerate}\n\t\\end{comment}\n\tTo compare the performance of various methods intuitively, a comparative experiment on the CIFAR-10 and CIFAR-100 datasets is conducted\\footnote{The base framework comes from wgan-gp in \\url{https://github.com/kwotsin/mimicry}}. The FID results for various gradient penalty methods with different loss functions are shown in Table \\ref{table: experiments results}. The results validate the conclusion in studies , that the Lipschitz constraint of the discriminator may improve the performance and stability of GANs training regardless of the statistical distance used as a loss function.
All gradient penalty methods improve the performance of GANs upon all three loss functions. Among them, zc-GP obtains the best performance and is widely used in SOTA methods as illustrated in Table \\ref{table:SOTA }.\n\t\\begin{comment}\n\t\\begin{table}\n\t\\caption{The experimental results of different Gradient penalty}\n\t\\label{table: experiments results}\n\t\\setlength{\\tabcolsep}{1mm}\n\t\\centering\n\t\\begin{tabular}{c c c }\t\n\t\\toprule\n\t\\midrule\n\t{ Method}&Inception Score& FID \\\\\n\t\\hline\n\tWGAN-GP &7.869&16.62\\\\\n\tWGAN-LP &7.98&15.85\\\\\n\tWGAN-ALP &8.247&14.19\\\\\n\tWGAN-GP-Max &7.956&18.43\\\\\n\tWGAN-ZP-Max &7.908&17.97\\\\\n\tWGAN-ZP-Sample &8.013&15.87\\\\\n\tWGAN-ZP &7.957&16.08\\\\\n\t\\bottomrule\n\t\\end{tabular}\n\t\\end{table} \n\t\\end{comment}\n\t\\begin{table*}\n\t\t\\tiny\n\t\t\\caption{ FID results on the CIFAR-10 and CIFAR-100 datasets for various gradient penalty methods with different GAN losses.}\n\t\t\\centering\n\t\t\\label{table: experiments results}\n\t\t\\begin{tabular}{cccccccccc}\n\t\t\t\\toprule\n\t\t\t\\multirow{2}{*}{Dataset}&\\multirow{2}{*}{Loss} & \\multicolumn{8}{c}{\\makecell*[c]{Gradient Penalty Methods}} \\\\ \n\t\t\t\\cline{3-10} \n\t\t\t&& \\makecell*[c]{None}&GP&DRAGAN &MAX-GP &LP&zc-GP &ZP-MAX &ZP \\\\ \n\t\t\t\\midrule\n\t\t\t\\multirow{3}{*}{CIFAR-10}&\n\t\t\tGAN& 42.41& 23.45&20.98& 26.65&22.9&\\textbf{19.39}&24.38&23.96 \\\\\n\t\t\t&WGAN&290&30.38& 29.53&37.21&28.31&\\textbf{26.99}&31.28&30.19\\\\\n\t\t\t&Hinge&58.34&21.19&21.77&25.4&20.79&\\textbf{18.75}&23.1&22.58\\\\\n\t\t\t\\cline{1-10} \n\t\t\t\\specialrule{0em}{2pt}{2pt}\n\t\t\t\\multirow{3}{*}{CIFAR-100}&\n\t\t\tGAN&44.5&25.76& 25.37&24.29&23.82&\\textbf{21.81}&26.27&25.38 \\\\\n\t\t\t&WGAN&244&32.28&31.93&38.71&32.19&\\textbf{29.12}&39.75&37.8 \\\\\n\t\t\t&Hinge&59.43&25.13&25.42&28.34&23.67&\\textbf{21.55}&26.19&26.06\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{table*}", "id": "349a6a29-8d68-4bed-a022-d5ef2efaccc2", 
"level": "subsubsection", "origin_cites_number": 21, "parent_id": "1d7d66d8-7065-4eb8-b5e7-40c9b5f20e5c", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Regularization and Normalization of \"Fitting distribution\" " ], [ "subsection", "Gradient Penalty" ], [ "subsubsection", "0-GP" ] ], "subsections": [], "title": "0-GP" }, { "cite_extract_rate": 0, "cites": [], "content": "WGAN is a popular and important generative adversarial network. From the optimal transport introduced on the Section \\ref{sect:A-1} of the Supplementary Online-only Material, to obtain $W_c(\\mu,\\upsilon)=\\mathop{\\max}\\limits_{||\\varphi_\\xi||_L\\leq 1}\\left\\{\\int_X \\varphi_\\xi \\mathrm{d}\\mu\\ -\\int_Y \\varphi_\\xi \\mathrm{d}\\upsilon\\right\\}$, the discriminator must satisfy the 1-Lipschitz continuity. According to the Section \\ref{sect:A-3} in the Supplementary Online-only Material, the spectral norm $||W||_2$ can be used to represent the Lipschitz constant $\\mathrm{C'}$. The Lipschitz continuity is achieved by normalizing the spectral norm of the weight, approximately. Hence, \\textit{Weight Normalization} and \\textit{Weight Regularization} can also be used to enable the Lipschitz continuity of the discriminator.", "id": "16b503b7-d01a-45ea-9b9c-0986b73e7409", "level": "subsection", "origin_cites_number": 0, "parent_id": "fdfb947c-602c-4735-9040-96d30810700e", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Regularization and Normalization of \"Fitting distribution\" " ], [ "subsection", "Weight Normalization and Weight Regularization" ] ], "subsections": [ "930186fd-4bd2-4fae-a134-426d00072853", "9c9a2b69-11c0-4e4b-90b7-101fced8cb32" ], "title": "Weight Normalization and Weight Regularization" }, { "cite_extract_rate": 0.5, "cites": [ 78, 144 ], "content": "Spectral norm of the weight and the Lipschitz constant express the same concept. 
Therefore, weight normalization is another method to achieve Lipschitz continuity. {More importantly, weight normalization methods are non-sampling-based and, unlike gradient penalties, do not suffer from the lack-of-support problem.} Spectral normalization of the weight limits the Lipschitz constant to 1. Certainly, an upper bound of the spectral norm can be used to normalize the weights, achieving $k (k\\leq1)$ Lipschitz continuity. The following lemmas put forward some upper bounds of the spectral norm.\n\t\\newenvironment{lemma3}{{\\indent\\it \\textbf{Lemma 3.1:}}}{\\hfill \\par}\n\t\\begin{lemma3}\n\t\t\\textit{\n\t\t\tIf $\\lambda_1\\leq\\lambda_2\\leq\\cdots\\leq\\lambda_M$\n\t\t\tare the eigenvalues of $W^\\top W$, then the spectral norm $||W||_2=\\sqrt{\\lambda_M}$ and the Frobenius norm $||W||_F=\\sqrt{\\sum_{i=1}^{M}\\lambda_i}$ \n\t\t}\n\t\\end{lemma3}\n\t\\newenvironment{lemma4}{{\\indent\\it \\textbf{Lemma 3.2:}}}{\\hfill \\par}\n\t\\newenvironment{lemma5}{{\\indent\\it \\textbf{Lemma 3.3:}}}{\\hfill \\par}\n\t\\newenvironment{proof3}{{\\indent\\it Proof 3.1:}}{\\hfill $\\square$\\par}\n\t\\newenvironment{proof4}{{\\indent\\it Proof 3.2:}}{\\hfill $\\square$\\par}\n\t\\newenvironment{proof5}{{\\indent\\it Proof 3.3:}}{\\hfill $\\square$\\par}\n\t\\begin{proof3}\n\t\tSee and \n\t\\end{proof3}\n\t\\begin{lemma4}\n\t\t\\textit{\n\t\t\tFor an $n\\times m$ matrix, $||W||_1=\\mathop{\\max}\\limits_{j}\\sum_{i=1}^{n}|a_{i,j}|$, $ ||W||_\\infty=\\mathop{\\max}\\limits_{i}\\sum_{j=1}^{m}|a_{i,j}|$, then $||W||_2\\leq\\sqrt{||W||_1||W||_\\infty}$\n\t\t}\n\t\\end{lemma4}\n\t\\begin{proof4}\n\t\tSee \n\t\\end{proof4}\n\t\\begin{lemma5}\n\t\t\\textit{\n\t\t\tFor an $n\\times m$ matrix, \n\t\t\t$||W||_F=\\sqrt{\\left(\\sum_{j=1}^{m}\\sum_{i=1}^{n}|a_{i,j}|^2\\right)}$, then $||W||_2\\leq||W||_F$\n\t\t}\n\t\\end{lemma5}\n\t\\begin{proof5}\n\t\tSee \n\t\\end{proof5}\n\t1-Lipschitz continuity can be enforced by spectral normalization. Miyato et al.
control the Lipschitz constant through spectral normalization $W_\\sigma=\\frac{W}{||W||_2}$ of each layer of D, leading to a better result than WGAN-GP. {Practically, the power iteration method is used as a fast approximation for the spectral norm ($||W||_2$).} Similarly, according to the optimal transport with a regularization term, the Lipschitz constant of the discriminator should be less than or equal to 1. Correspondingly, an upper bound of the spectral norm can be utilized to normalize the weight ($||W_\\sigma||_2\\leq1$), achieving $k (k\\leq1)$ Lipschitz continuity. In terms of Lemma 3.2 and Lemma 3.3, $\\sqrt{||W||_1||W||_\\infty}$ and the Frobenius norm ($||W||_F$) are simple upper bounds of the spectral norm ($||W||_2$) and can be used to normalize the weight. For example, Zhang et al. use $\\sqrt{||W||_1||W||_\\infty}$, seeking an approximation of the spectral norm that is easy to calculate. Miyato et al. explain that the Frobenius norm is a restriction on all eigenvalues, unlike the spectral norm, which only constrains the maximum eigenvalue. The authors conjecture that Frobenius normalization limits the expressive power of the network, but no experiments are reported to compare it with spectral normalization. Liu et al. find that mode collapse is often accompanied by a collapse of the eigenvalues of the discriminator: since spectral normalization only limits the maximum eigenvalue, an eigenvalue collapse means that the remaining eigenvalues suddenly decrease. Therefore, the authors adopt the following method to prevent the collapse of the eigenvalues:\n\t\\begin{equation}\n\t\tW_{\\sigma}=\\frac{W+\\nabla W}{||W||_2}=\\frac{W}{||W||_2}+\\frac{\\nabla W}{||W||_2}.\n\t\\end{equation}\n\tThe results demonstrate that this method effectively prevents mode collapse. Although experiments are reported in this study, theoretical proofs are missing.
Therefore, the relationship between the matrix eigenvalues and GAN performance remains unclear.\n\tFew studies focus on weight normalization, as summarized in Table \\ref{table:norm normalization}. Among these studies, spectral normalization is widely applied in some SOTA methods, as demonstrated in Section 7.\n\t\\begin{table}\n\t\\caption{The summary of the weight normalization and weight regularization.}\n\t\\label{table:norm normalization}\n\t\\centering\n\t\\begin{tabular}{c | c | c }\t\n\t\t\\toprule\n\t\t\\midrule\n\t\t{ Method}&Implementation &Motivation\t \\\\\n\t\t\\hline\n\t\tSpectral normalization (SN) &$W_\\sigma=W/||W||_2$&$||D||_{Lip}\\to1$\\\\\n\t\t\\hline\n\t\tF normalization &$W_\\sigma=W/||W||_F$&$||D||_{Lip}\\leq1$\\\\\n\t\t\\hline\n\t\tMixed normalization &$W_\\sigma=W/\\sqrt{||W||_1||W||_\\infty}$&$||D||_{Lip}\\leq1$\\\\\n\t\t\\hline\n\t\tSpectral increment normalization &$W_\\sigma=W/||W||_2+\\nabla W/||W||_2$&$||D||_{Lip}\\to1$\\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\\end{table}", "id": "930186fd-4bd2-4fae-a134-426d00072853", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "16b503b7-d01a-45ea-9b9c-0986b73e7409", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Regularization and Normalization of \"Fitting distribution\" " ], [ "subsection", "Weight Normalization and Weight Regularization" ], [ "subsubsection", "Weight Normalization" ] ], "subsections": [], "title": "Weight Normalization" }, { "cite_extract_rate": 0.5, "cites": [ 65 ], "content": "Whereas spectral normalization is analogous to 1-GP, spectral regularization is analogous to 0-GP. Kurach et al. use $\\mathcal{L}_R=||W||_2$ to regularize the loss function. Zhou et al. also use the $L_P$-norm ($P=1,F,\\infty$) to regularize the discriminator.
However, these methods perform worse than weight normalization and have not attracted much attention among researchers.", "id": "9c9a2b69-11c0-4e4b-90b7-101fced8cb32", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "16b503b7-d01a-45ea-9b9c-0986b73e7409", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Regularization and Normalization of \"Fitting distribution\" " ], [ "subsection", "Weight Normalization and Weight Regularization" ], [ "subsubsection", "Weight Regularization" ] ], "subsections": [], "title": "Weight Regularization" }, { "cite_extract_rate": 1, "cites": [ 145, 146 ], "content": "}\n\tGradient normalization is also a popular method to impose the Lipschitz constraint on the discriminator. The 1-Lipschitz constraint can be implemented by $\\mathop{\\min}\\mathbb{E}_{{x}\\sim\\pi}(||\\nabla_{x} D_{\\theta}({x})||_2-1)^2$, which drives the gradient of the discriminator ($||\\nabla_{{x}} D_{\\theta}({x})||_2$) toward 1. Alternatively, the Lipschitz constant can be controlled directly through gradient normalization, $\\hat{D}_{\\theta}(x)=\\frac{D_{\\theta}(x)}{||\\nabla_{{x}}D_{\\theta}({x})||_2}$ for $D_{\\theta}$. Accordingly, the gradient of $\\hat{D}_{\\theta}$ can be represented as $||\\nabla_{{x}} \\hat {D}_{\\theta}({x})||_2=||\\nabla_{{x}}\\left(\\frac{D_{\\theta}(x)}{||\\nabla_{{x}}D_{\\theta}({x})||_2}\\right)||_2$, which equals 1. To ensure boundedness, different studies adopt different implementations: for instance, one adopts $\\hat{D}_{\\theta}(x)=\\frac{D_{\\theta}(x)}{||\\nabla_{{x}}D_{\\theta}({x})||_2+D_{\\theta}(x)}$ and another adopts $\\hat{D}_{\\theta}(x)=\\frac{D_{\\theta}(x)}{||\\nabla_{{x}}D_{\\theta}({x})||_2+\\epsilon}$.
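For a scalar toy model, the second variant above can be sketched with a finite-difference gradient. This is an illustrative sketch only, not the implementation of the cited papers (which differentiate through the network layers):

```python
def grad_norm_1d(f, x, h=1e-6):
    """Central finite-difference estimate of |f'(x)| for a scalar toy model."""
    return abs((f(x + h) - f(x - h)) / (2.0 * h))

def grad_normalized(f, x, eps=1e-8):
    """Gradient normalization: D_hat(x) = D(x) / (||grad D(x)|| + eps)."""
    return f(x) / (grad_norm_1d(f, x) + eps)

# Toy discriminator D(x) = 5x has |grad D| = 5 everywhere, so
# D_hat(x) = 5x / 5 = x, i.e. D_hat is (approximately) 1-Lipschitz.
d_hat = grad_normalized(lambda x: 5.0 * x, 2.0)
```

The epsilon term plays exactly the boundedness role described above: it keeps the normalized output finite where the gradient vanishes.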
Extensive experiments demonstrate that both implementations of gradient normalization attain significant performance gains compared to gradient penalty, weight normalization, and weight regularization.", "id": "3e0461ca-996a-4c2c-8c3e-dbd291acb92b", "level": "subsection", "origin_cites_number": 2, "parent_id": "fdfb947c-602c-4735-9040-96d30810700e", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Regularization and Normalization of \"Fitting distribution\" " ], [ "subsection", "{Gradient Normalization" ] ], "subsections": [], "title": "{Gradient Normalization" }, { "cite_extract_rate": 1, "cites": [ 145 ], "content": "}\n\t{As mentioned above, weight clipping, gradient penalty, weight regularization, weight normalization, and gradient normalization can all enable Lipschitz continuity of the discriminator. However, what are the advantages and disadvantages of the investigated techniques? Imposing the Lipschitz constraint on the discriminator can be characterized by three properties . 1) \\textit{model-} or \\textit{module-wise} \\textit{constraint}. A model-wise constraint is one whose objective depends on the full model, while a module-wise constraint is one whose objective depends on individual layers. Generally, a model-wise constraint is better, since a module-wise constraint is strict, which limits the layer capacities and reduces the power of the discriminator. 2) \\textit{sampling-based} or \\textit{non-sampling} \\textit{-based} \\textit{constraint}. A sampling-based constraint requires sampled data during usage, while a non-sampling-based constraint depends on the model, not on data sampling. Generally, a non-sampling-based constraint performs better, since the Lipschitz constraint should be fulfilled on the entire data manifold, not only on sampled data. 3) \\textit{Hard} or \\textit{soft constraint}.
{An exact enforcement of Lipschitz continuity is defined as a hard constraint and the converse as a soft constraint. A hard constraint achieves exact Lipschitz continuity by limiting the spectral norm and is expected to perform better, while a soft constraint only obtains Lipschitz continuity approximately through optimization.} Table \\ref{table:lipschitz continuity} summarizes the properties of the different technologies, from which gradient normalization is a model-wise, non-sampling-based, and hard constraint method.}\n\t\\begin{table}\n\t\t\\caption{Summary of different regularization and normalization technologies for imposing Lipschitz continuity.}\n\t\t\\label{table:lipschitz continuity}\n\t\t\\centering\n\t\t\\begin{tabular}{c | c | c | c }\t\n\t\t\t\\toprule\n\t\t\t\\midrule\n\t\t\tMethod& Model-wise&Non-sampling-based&Hard\\\\\n\t\t\t\\hline\n\t\t\tWeight Clipping&&\\checkmark&\\\\\n\t\t\t\\hline\n\t\t\tGradient Penalty&\\checkmark&&\\\\\n\t\t\t\\hline\n\t\t\tWeight Regularization&&\\checkmark&\\\\\n\t\t\t\\hline\n\t\t\tWeight Normalization&&\\checkmark&\\checkmark\\\\\n\t\t\t\\hline\n\t\t\tGradient Normalization&\\checkmark&\\checkmark&\\checkmark\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{table}", "id": "ee52e377-012a-4b04-8fa5-1df1ed551949", "level": "subsection", "origin_cites_number": 1, "parent_id": "fdfb947c-602c-4735-9040-96d30810700e", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Regularization and Normalization of \"Fitting distribution\" " ], [ "subsection", "{Summary" ] ], "subsections": [], "title": "{Summary" }, { "cite_extract_rate": 1, "cites": [ 98, 54, 99, 7190, 8323 ], "content": "Assuming the objectives of GANs are convex-concave, some studies have established the global convergence of GANs . However, these theoretical convergence analyses are only applicable to GANs with an optimal discriminator.
Therefore, some studies focus on analyzing the local convergence of GANs. According to Nagarajan et al. and Mescheder et al. , under some assumptions, GAN dynamics are locally convergent. However, if these assumptions are not satisfied, especially if the data distributions are not continuous, GAN dynamics do not always converge locally unless some regularization techniques are used.\n\tWe review {Jacobian regularization} techniques in this section, which regularize the Jacobian of the gradient vector field to achieve local convergence. With the same motivation, Mescheder et al. propose a simplified gradient penalty method, named the zero-centered gradient penalty (zc-GP), that guarantees local convergence under suitable assumptions. Since it is similar to 0-GP, we cover it in Section 4.", "id": "44c32216-8be6-4e5c-aa2f-ca241ea4139f", "level": "section", "origin_cites_number": 5, "parent_id": "4410407c-7d7c-48f1-8174-a9a95319844f", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Regularization and Normalization of \"Training dynamics\"" ] ], "subsections": [ "f5ab6a67-88dd-45b6-aeae-1b85b061e943", "8377346c-065d-4bc5-adf5-275fad74664d" ], "title": "Regularization and Normalization of \"Training dynamics\"" }, { "cite_extract_rate": 1, "cites": [ 74, 149, 7003, 98, 148, 99, 147, 7190 ], "content": "According to \\textit{Proposition 2.2} of Section 2, the absolute values of all eigenvalues of the Jacobian matrix ($v^{'}(\\phi,\\theta)$) are expected to be less than 1 at the fixed point, which is equivalent to the real parts of the eigenvalues being negative. Additionally, the learning rate must be relatively low . To meet these requirements, Mescheder et al. use Consensus Optimization (ConOpt) to make the real parts of the eigenvalues negative.
Its regularized updates are:\n\t\\begin{equation} \\label{eqn1}\n\t\t\\begin{split}\n\t\t\t\\phi^{(k+1)}=\\phi^{(k)}+h\\nabla_\\phi\\left(- f(\\phi^{(k)},\\theta ^{(k)})-\\gamma L(\\phi^k,\\theta^k)\\right),\\\\\n\t\t\t\\theta^{(k+1)}=\\theta^{(k)}+h\\nabla_\\theta\\left(f(\\phi^{(k)},\\theta ^{(k)})-\\gamma L(\\phi^k,\\theta^k)\\right),\n\t\t\\end{split}\n\t\\end{equation}\n\twhere $L(\\phi^k,\\theta^k)=\\frac{1}{2}||v(\\phi^k,\\theta^k)||^2=\\frac{1}{2}\\left(||\\nabla_\\phi f(\\phi^k,\\theta^k)||^2+||\\nabla_\\theta f(\\phi^k,\\theta^k)||^2\\right)$ is the regularization of the Jacobian matrix.\n\tApart from , Nagarajan et al. also analyze the relationship between local convergence of GANs and all eigenvalues of the Jacobian\n\tof the gradient vector field. The authors prove local convergence for absolutely continuous generator and data distributions under certain regularity assumptions. This requires the loss function of the GANs to be strictly concave, which is not the case for some GANs. Based on this, a simple regularization technique that regularizes the generator using the gradient of the discriminator is proposed by Nagarajan et al. . The regularized updates for the generator can be expressed as:\n\t\\begin{equation} \\label{eqn1}\n\t\t\\phi^{(k+1)}=\\phi^{(k)}-h\\nabla_\\phi f(\\phi^{(k)},\\theta ^{(k)})-\\frac{1}{2}h\\gamma\\nabla_\\phi||\\nabla_\\theta f(\\phi^{k},\\theta^{k})||^2.\n\t\\end{equation}\n\tHerein, the update of the discriminator is similar to SimGD. Furthermore, Nie et al. propose a method that only regularizes the discriminator. The regularized update of the discriminator in this case is given by:\n\t\\begin{equation} \\label{eqn1}\n\t\t\\theta^{(k+1)}=\\theta^{(k)}+h\\nabla_\\theta f(\\phi^{(k)},\\theta ^{(k)})-\\frac{1}{2}h\\gamma\\nabla_\\theta||\\nabla_\\phi f(\\phi^{k},\\theta^{k})||^2.\n\t\\end{equation}\n\tThe update of the generator is the same as SimGD. Nie et al.
propose JAcobian REgularization (JARE) that regularizes both the generator and the discriminator. The regularized updates for the generator and the discriminator are:\n\t\\begin{equation} \\label{eqn1}\n\t\t\\begin{split}\n\t\t\t\\phi^{(k+1)}=\\phi^{(k)}-h\\nabla_\\phi f(\\phi^{(k)},\\theta ^{(k)})-\\frac{1}{2}h\\gamma\\nabla_\\phi||\\nabla_\\theta f(\\phi^{k},\\theta^{k})||^2,\\\\\n\t\t\t\\theta^{(k+1)}=\\theta^{(k)}+h\\nabla_\\theta f(\\phi^{(k)},\\theta ^{(k)})-\\frac{1}{2}h\\gamma\\nabla_\\theta||\\nabla_\\phi f(\\phi^{k},\\theta^{k})||^2.\n\t\t\\end{split}\n\t\\end{equation}\n\tThe key difference between JARE and ConOpt is that JARE does not contain the Hessians $\\nabla^2_{\\phi,\\phi}f(\\phi^{k},\\theta^k)$ and $\\nabla^2_{\\theta,\\theta}f(\\phi^{k},\\theta^k)$ in the regularization term. \n\t{Several Jacobian regularization methods have thus been proposed to deal with the training instabilities of GANs. How do they differ? Nie et al. consider a simple toy example to analyse the convergence of GANs. Two factors of the Jacobian may simultaneously destroy GANs training: (i) the Phase Factor, i.e., the Jacobian has complex eigenvalues with a large imaginary-to-real ratio; (ii) the Conditioning Factor, i.e., the Jacobian is ill-conditioned. According to the toy example, Only Regularizing Generator , Only Regularizing Discriminator , and ConOpt alleviate only the impact of the Phase Factor, not that of the Conditioning Factor, whereas JARE can address both factors by construction}.\\footnote{Intuitively, a reason for not introducing Hessians in JARE is to avoid the risk of reversing the gradient flows, which may cause the GAN training dynamics to diverge (see Appendix C in for a detailed explanation).}\n\tThe above discussions of local convergence during GANs training involve a premise: absolutely continuous data and generator distributions.
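As a numerical check of the regularized updates above, here is a hypothetical sketch (ours, not from the cited papers) of the ConOpt update on the toy bilinear game $f(\phi,\theta)=\phi\theta$; for this game the pure Hessian terms vanish, so JARE yields the same update.

```python
# Hypothetical sketch (ours): the ConOpt update on the bilinear game
# f(phi, theta) = phi * theta, where grad_phi f = theta, grad_theta f = phi,
# and L = 0.5 * (phi**2 + theta**2), so grad_phi L = phi, grad_theta L = theta.
# The pure Hessians are zero here, so the JARE update coincides with ConOpt.
def conopt(phi, theta, h=0.1, gamma=1.0, steps=200):
    for _ in range(steps):
        phi, theta = (phi - h * theta - h * gamma * phi,
                      theta + h * phi - h * gamma * theta)
    return phi, theta

phi_end, theta_end = conopt(1.0, 1.0)
# Unlike plain simultaneous gradient descent-ascent, the regularized
# iterates contract toward the equilibrium (0, 0).
```

With $h=0.1$ and $\gamma=1$, each step scales $\phi^2+\theta^2$ by $(1-h\gamma)^2+h^2=0.82$, so the iterates contract to the equilibrium, whereas the unregularized updates on the same game spiral outward.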
Indeed, the assumption of absolute continuity does not hold in common cases of GANs, where both distributions, especially the data distribution, may lie on lower-dimensional manifolds . More generally, Mescheder et al. extend the convergence proof by to the case where the generator and data distributions do not locally have the same support. Based on this, a simplified zero-centered gradient penalty (zc-GP) method is proposed, which guarantees local convergence under suitable assumptions. Zc-GP is derived from the training dynamics and is similar to the 0-GP methods mentioned in Section 4.\n\t{Furthermore, other studies analyze the training of GANs with other tools. For instance, introduces a novel algorithm, competitive gradient descent (CGD), that is a natural extension of gradient descent to the competitive setting. Different from gradient descent ascent (GDA) in Eq (\\ref{eq:eqn27}), CGD does not need to reduce the stepsize to match the increase of the interactions to avoid divergence. Specifically, CGD introduces an equilibrium term that lets each player prefer strategies that are less vulnerable to the actions of the other player. also elucidates that the cause of the undesirable convergence of GDA is that the leader's (discriminator's) gradient step takes the system away from the ridge, which has undesirable convergence properties and requires very small learning rates to converge. To mitigate this, a Follow-the-Ridge (FR) term ($\\mathbf{H}_{\\mathbf{\\theta} \\mathbf{\\theta}}^{-1} \\mathbf{H}_{\\mathbf{\\theta} \\mathbf{\\phi}} \\nabla_{\\mathbf{\\phi}} f\\left(\\mathbf{\\phi}^{(k)}, \\mathbf{\\theta}^{(k)}\\right)$) is added to the update of the discriminator. studies the continuous-time dynamics induced by GANs training. From this perspective, instabilities in training GANs arise from the integration error in discretizing the continuous dynamics.
It treats GANs training as solving ODEs and shows that higher-order solvers lead to better convergence.}", "id": "f5ab6a67-88dd-45b6-aeae-1b85b061e943", "level": "subsection", "origin_cites_number": 8, "parent_id": "44c32216-8be6-4e5c-aa2f-ca241ea4139f", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Regularization and Normalization of \"Training dynamics\"" ], [ "subsection", "Jacobian Regularization" ] ], "subsections": [], "title": "Jacobian Regularization" }, { "cite_extract_rate": 0.8, "cites": [ 74, 98, 99, 7190 ], "content": "In summary, Jacobian regularization techniques are obtained from the training dynamics of GANs, which are used for achieving local convergence and stabilizing training. The summary of the Jacobian regularization methods is demonstrated in Table \\ref{table:Jacobian regularization}. Jacobian regularization is similar to the Gradient penalty in terms of update form. In general, zc-GP is used in many SOTA methods, as demonstrated in Section 7.\n\t\\begin{table}\n\t\t\\caption{The summary of the Jacobian regularization.}\n\t\t\\tiny\n\t\t\\label{table:Jacobian regularization}\n\t\t\\centering\n\t\t\\begin{tabular}{c|c|c}\t\n\t\t\t\\toprule\n\t\t\t\\midrule\n\t\t\tMethod& regularized updates of generator ($\\phi^{(k+1)}$)&regularized updates of discriminator ($\\theta^{(k+1)}$)\t \\\\\n\t\t\t\\hline\n\t\t\tSimGD &$\\phi^{(k)}-h\\nabla_\\phi f(\\phi^{(k)},\\theta ^{(k)})$&$\\theta^{(k)}+h\\nabla_\\theta f(\\phi^{(k)},\\theta ^{(k)})$\\\\\n\t\t\t\\hline\n\t\t\tConOpt &$\\phi^{(k)}-h\\nabla_\\phi f(\\phi^{(k)},\\theta ^{(k)})-\\frac{1}{2}h\\gamma\\nabla_\\phi||v(\\phi^{(k)},\\theta^{(k)})||^2$&$\\theta^{(k)}+h\\nabla_\\theta f(\\phi^{(k)},\\theta ^{(k)})-\\frac{1}{2}h\\gamma\\nabla_\\theta||v(\\phi^{(k)},\\theta^{(k)})||^2$\\\\\n\t\t\t\\hline\n\t\t\tGenerator &$\\phi^{(k)}-h\\nabla_\\phi f(\\phi^{(k)},\\theta ^{(k)})-\\frac{1}{2}h\\gamma\\nabla_\\phi||\\nabla_\\theta 
f(\\phi^{(k)},\\theta^{(k)})||^2$&$\\theta^{(k)}+h\\nabla_\\theta f(\\phi^{(k)},\\theta ^{(k)})$\\\\\n\t\t\t\\hline\n\t\t\tDiscriminator &$\\phi^{(k)}-h\\nabla_\\phi f(\\phi^{(k)},\\theta ^{(k)})$&$\\theta^{(k)}+h\\nabla_\\theta f(\\phi^{(k)},\\theta ^{(k)})-\\frac{1}{2}h\\gamma\\nabla_\\theta||\\nabla_\\phi f(\\phi^{(k)},\\theta^{(k)})||^2 $\\\\\n\t\t\t\\hline\n\t\t\tJARE &$\\phi^{(k)}-h\\nabla_\\phi f(\\phi^{(k)},\\theta ^{(k)})-\\frac{1}{2}h\\gamma\\nabla_\\phi||\\nabla_\\theta f(\\phi^{(k)},\\theta^{(k)})||^2$&$\\theta^{(k)}+h\\nabla_\\theta f(\\phi^{(k)},\\theta ^{(k)})-\\frac{1}{2}h\\gamma\\nabla_\\theta||\\nabla_\\phi f(\\phi^{(k)},\\theta^{(k)})||^2 $\\\\\n\t\t\t\\hline\n\t\t\tzc-GP &$\\phi^{(k)}-h\\nabla_\\phi f(\\phi^{(k)},\\theta ^{(k)})$&$\\theta^{(k)}+h\\nabla_\\theta f(\\phi^{(k)},\\theta ^{(k)})-\\frac{1}{2}h\\gamma\\nabla_\\theta||\\nabla D_{\\theta}(x)||^2 $\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{table}", "id": "8377346c-065d-4bc5-adf5-275fad74664d", "level": "subsection", "origin_cites_number": 5, "parent_id": "44c32216-8be6-4e5c-aa2f-ca241ea4139f", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Regularization and Normalization of \"Training dynamics\"" ], [ "subsection", "Summary" ] ], "subsections": [], "title": "Summary" }, { "cite_extract_rate": 0, "cites": [], "content": "In addition to the three groups mentioned above, this section discusses and summarizes the remaining regularization and normalization techniques, namely, \\textit{Layer Normalization} and \\textit{Inverse Gradient Penalty}. 
\\textit{Layer Normalization} comprises unconditional-based and conditional-based layer normalization. The former, inspired by supervised learning, is used to accelerate training, but its impact on GANs is small and it sometimes degrades performance; the latter is used in conditional generation and significantly improves its performance. On the other hand, \\textit{Inverse Gradient Penalty} mitigates mode collapse by maximizing the Lipschitz constant of the generator.", "id": "bc57c604-a3ec-457f-ab6f-07cf5243c526", "level": "section", "origin_cites_number": 0, "parent_id": "4410407c-7d7c-48f1-8174-a9a95319844f", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Regularization and Normalization of \"Other methods\"" ] ], "subsections": [ "253f0a0c-e301-40a7-a3f4-7aa82d059919", "ed3fad77-b260-419c-962b-ecc924abe2e8" ], "title": "Regularization and Normalization of \"Other methods\"" }, { "cite_extract_rate": 1, "cites": [ 71 ], "content": "Data in machine learning is expected to be independent and identically distributed ($i.i.d$). However, in deep learning, because of the Internal Covariate Shift (ICS) , the inputs of each neuron do not satisfy the $i.i.d$ assumption, making the training of deep neural networks hard and unstable. Layer normalization\\footnote{\\label{ft:10} layer normalization is different from the Layer Normalization (LN), where layer normalization is a general term for a class of methods such as BN, LN. } has been proposed to avoid such problems.
The general form of layer normalization is (the difference between the normalization methods lies in the choice of $h$ and the calculation of $\\mathbb{E}[h]$ and $var[h]$):\n\t\\begin{equation}\n\t\th_N=\\frac{h-\\mathbb{E}[h]}{\\sqrt{var[h]+\\epsilon}}\\cdot\\gamma+\\beta.\n\t\\end{equation}\n\tFor GANs, layer normalization is divided into two parts: {\\textit{unconditional-based layer normalization}} and {\\textit{conditional-based layer normalization}}. Unconditional-based layer normalizations are used for unconditional generation, as in other deep neural networks. On the other hand, conditional-based layer normalizations are used for the generator in conditional generation, where the shift and scale parameters ($\\gamma, \\beta$) depend on the condition information, as given below:\n\t\\begin{equation}\n\t\th_N=\\frac{h-\\mathbb{E}[h]}{\\sqrt{var[h]+\\epsilon}}\\cdot\\gamma(c)+\\beta(c).\n\t\\end{equation}", "id": "253f0a0c-e301-40a7-a3f4-7aa82d059919", "level": "subsection", "origin_cites_number": 1, "parent_id": "bc57c604-a3ec-457f-ab6f-07cf5243c526", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Regularization and Normalization of \"Other methods\"" ], [ "subsection", "Layer Normalization" ] ], "subsections": [ "e30468f0-4365-4dc0-8ffe-61832a49d364", "f33ea61f-e2e1-4dfc-9f9a-3c4da4ad1e64" ], "title": "Layer Normalization" }, { "cite_extract_rate": 1, "cites": [ 154, 155, 57, 8324, 153, 156, 150, 152, 151, 7194, 157, 78, 71 ], "content": "Unconditional-based layer normalization is used for both the generator and the discriminator with the same motivation as in other deep neural networks. Ioffe et al. proposed the first normalization for neural networks, namely, Batch Normalization (BN). Batch normalization adopts the data of the mini-batch to compute the mean and variance, making the data distribution of each mini-batch approximately the same. Miyato et al.
used BN in GANs. Since image generation is a pixel-level task, BN's mini-batch-level normalization destroys the differences between pixels during generation. {Therefore, batch norm can be less applicable to style transfer and cannot be used with gradient penalty methods, since the gradient would depend on multiple inputs.} Contrary to BN, which normalizes the same channel across different images, Layer Normalization\\textsuperscript{\\ref {ft:10}} (LN) normalizes different channels of a single image, which likewise destroys the diversity between channels for a pixel-by-pixel generative model . Instance Normalization (IN), which operates on a single channel of a single image, has also been proposed for style transformation. Moreover, Group Normalization (GN) sits between LN and IN: it first divides the channels into groups and then normalizes each group of a single image. In contrast to BN, LN, IN, and GN, which normalize the inputs of the neural network, Weight Normalization (WN) normalizes the weight matrix of the neural network. Miyato et al. also used this normalization in GANs. \n\tIn summary, unconditional-based layer normalization in GANs is similar to that in other neural networks. The related summaries are shown in Table \\ref{table:layer normalization}. To the best of our knowledge, no study compares the performance of these methods; therefore, we demonstrate the FID results\\footnote{The base framework comes from the SNGAN in \\url{https://github.com/kwotsin/mimicry}} for different normalization methods on the CIFAR-10 and CIFAR-100 datasets in Table \\ref{table:normalization}.
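The axis choices that distinguish these methods can be made concrete with a short NumPy sketch (illustrative, ours; assumes an $(N, C, H, W)$ feature layout and omits the learned $\gamma$, $\beta$):

```python
import numpy as np

def normalize(h, axes, eps=1e-5):
    # General form: standardize over the chosen axes; the axes are the
    # only difference between BN, LN, IN, and GN (gamma, beta omitted).
    mu = h.mean(axis=axes, keepdims=True)
    var = h.var(axis=axes, keepdims=True)
    return (h - mu) / np.sqrt(var + eps)

x = np.random.randn(8, 16, 4, 4)       # (N, C, H, W) feature map
bn = normalize(x, axes=(0, 2, 3))      # BN: per channel, across the batch
ln = normalize(x, axes=(1, 2, 3))      # LN: per sample, across channels
inorm = normalize(x, axes=(2, 3))      # IN: per sample and per channel
groups = x.reshape(8, 4, 4, 4, 4)      # GN: 4 groups of 4 channels each
gn = normalize(groups, axes=(2, 3, 4)).reshape(x.shape)
```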
Among them, LN and GN achieved better performance than the most popular normalization method, spectral normalization (mentioned in Section 4.2), while the other methods significantly harm the stability of GANs training.\n\t\\begin{table*}\n\t\t\\caption{The summary of the layer normalization}\n\t\t\\label{table:layer normalization}\n\t\t\\scriptsize\n\t\t\\centering\n\t\t\\begin{tabular}{c|c | c | c }\t\n\t\t\t\\toprule\n\t\t\t\\midrule\n\t\t\tMethod&{ Reference}& Classification&Inputs of $\\gamma(c)$ and $\\beta(c)$\t \\\\\n\t\t\t\\hline\n\t\t\tBatch Normalization (BN)&2018 & unconditional-based&-\\\\\n\t\t\t\\hline\n\t\t\tLayer Normalization (LN)&2018 & unconditional-based&-\\\\\n\t\t\t\\hline\n\t\t\tInstance Normalization (IN)&2018 & unconditional-based&-\\\\\n\t\t\t\\hline\n\t\t\tGroup Normalization (GN)&2018 & unconditional-based&-\\\\\n\t\t\t\\hline\n\t\t\tWeight Normalization (WN) &2018 & unconditional-based&-\\\\\n\t\t\t\\hline\n\t\t\tConditional Batch Normalization (CBN)&2018 &conditional-based&class label\\\\\n\t\t\t\\hline\n\t\t\tAdaptive Instance Normalization (AdaIN)&2017 ,2019 &conditional-based&target images\\\\\n\t\t\t\\hline\n\t\t\tSpatially-adaptive (de) Normalization (SPADE)&2019 &conditional-based&semantic segmentation map\\\\\n\t\t\t\\hline\n\t\t\tAttentive Normalization (AN)&2020 &conditional-based&self\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{table*}", "id": "e30468f0-4365-4dc0-8ffe-61832a49d364", "level": "subsubsection", "origin_cites_number": 13, "parent_id": "253f0a0c-e301-40a7-a3f4-7aa82d059919", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Regularization and Normalization of \"Other methods\"" ], [ "subsection", "Layer Normalization" ], [ "subsubsection", "Unconditional-based layer Normalization" ] ], "subsections": [], "title": "Unconditional-based layer Normalization" }, { "cite_extract_rate": 1, "cites": [ 154, 8324, 150, 151, 157, 7194 ], "content":
"Conditional-based layer normalization is only used for the generator in conditional generation. It aims to introduce conditional information into each layer of the generator, which helps to improve the quality of the generated images. $\\gamma(c)$ and $\\beta(c)$ in Eq (44) are calculated with different features or class labels as input to the neural network in different methods. Miyato et al. and Zhang et al. used Conditional Batch Normalization (CBN) to encode class labels, thereby improving the quality of conditional generation. Huang et al. and Karras et al. used Adaptive Instance Normalization (AdaIN) with target images to improve the accuracy of style transfer. Park et al. used Spatially-Adaptive (de) Normalization (SPADE) with semantic segmentation images to incorporate semantic information into all layers. Wang et al. used Attentive Normalization (AN) to model long-range dependencies with attention, similar to self-attention GAN .\n\tIn summary, the main difference between these conditional-based normalizations is the content of the conditional input ($c$ in Eq (45)). As the conditional input becomes richer, the performance of conditional generation gradually improves.
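A minimal sketch (ours; per-class lookup tables stand in for the learned $\gamma(c)$ and $\beta(c)$ networks) of conditional batch normalization:

```python
import numpy as np

rng = np.random.default_rng(0)

class ConditionalBatchNorm:
    # Hypothetical sketch: one (gamma, beta) row per class plays the role
    # of the learned condition-dependent scale and shift.
    def __init__(self, num_classes, num_channels):
        self.gamma = rng.normal(1.0, 0.02, (num_classes, num_channels))
        self.beta = np.zeros((num_classes, num_channels))

    def __call__(self, h, c, eps=1e-5):
        # h: (N, C, H, W) features, c: (N,) integer class labels
        mu = h.mean(axis=(0, 2, 3), keepdims=True)
        var = h.var(axis=(0, 2, 3), keepdims=True)
        h_hat = (h - mu) / np.sqrt(var + eps)
        gamma = self.gamma[c][:, :, None, None]  # per-sample scale from c
        beta = self.beta[c][:, :, None, None]    # per-sample shift from c
        return gamma * h_hat + beta

cbn = ConditionalBatchNorm(num_classes=10, num_channels=16)
out = cbn(rng.normal(size=(8, 16, 4, 4)), rng.integers(0, 10, size=8))
```

AdaIN and SPADE follow the same pattern: only the normalization axes and the source of the conditional scale and shift change.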
The related summaries are shown in Table \\ref{table:layer normalization}.", "id": "f33ea61f-e2e1-4dfc-9f9a-3c4da4ad1e64", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "253f0a0c-e301-40a7-a3f4-7aa82d059919", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Regularization and Normalization of \"Other methods\"" ], [ "subsection", "Layer Normalization" ], [ "subsubsection", "Conditional-based layer Normalization" ] ], "subsections": [], "title": "Conditional-based layer Normalization" }, { "cite_extract_rate": 1, "cites": [ 56, 159, 78, 160, 158 ], "content": "Mode collapse is a common phenomenon in GANs training: changes in the latent space do not cause changes in the generated images. Geometrically, this means that the tangent vectors of the manifold are no longer independent of each other - some tangent vectors either vanish or become linearly correlated with each other. Intuitively, this problem can be addressed by maximizing the Lipschitz constant of the generator, which is the opposite of the gradient penalty on the discriminator described in the previous section. Based on this, an inverse gradient penalty on the generator has been proposed: under small perturbations of the latent code, the generator must produce different images. Yang et al. use it in conditional generation, especially for tasks that are rich in conditional information, such as inpainting and super-resolution. \n\t\\begin{equation}\n\t\t\\mathop{\\max}\\limits_{G}\\mathcal{L}_z(G)=\\mathop{\\max}\\mathbb{E}_{z_1,z_2}\\left[\\mathop{\\min}\\left(\\frac{||G(y,z_1)-G(y,z_2)||}{||z_1-z_2||},\\tau\\right)\\right],\n\t\\end{equation}\n\twhere $y$ is the class label and $\\tau$ is a bound that ensures numerical stability. Unlike the intuition-based study described above, Odena et al.
demonstrate that the shrinking of the singular values of the generator's Jacobian matrix is the main reason for mode collapse during GANs training. Furthermore, the singular values can be approximated by gradients, so Jacobian clamping is used to limit the singular values to $[\\lambda_{\\min},\\lambda_{\\max}]$. The loss is expressed as:\n\t\\begin{equation}\n\t\t\\mathop{\\min}\\limits_{G}\\mathcal{L}_z(G)=\\big(\\mathop{\\max}(Q,\\lambda_{\\max})-\\lambda_{\\max}\\big)^2\n\t\t+\\big(\\mathop{\\min}(Q,\\lambda_{\\min})-\\lambda_{\\min}\\big)^2,\n\t\\end{equation}\n\twhere $Q=||G(z)-G(z')||/||z-z'||$. \n\t{In summary, the above two methods are similar and mitigate mode collapse of the generator to some extent. The key point is to improve the sensitivity of the generator to the latent space. In addition to the above methods that implement an inverse gradient penalty for the generator, some studies adopt orthogonal regularization to enforce amenability to truncation by conditioning G to be smooth, so that the full space of $z$ will map to good output samples. Introducing an orthogonality condition directly is one approach:}\n\t\\begin{equation}\n\t\tR(W)=\\beta\\left\\|W^{\\top} W -I\\right\\|_{\\mathrm{F}}^{2},\n\t\\end{equation}\n\t{where $W$ is a weight matrix and $\\beta$ is a hyperparameter. However, this regularization is too limiting . Therefore, a relaxed constraint has been designed by . Brock et al. apply Off-Diagonal Orthogonal Regularization (Off-Diagonal OR) to the generator, which relaxes the constraint by penalizing only the off-diagonal entries:}\n\t\\begin{equation}\n\t\tR_o(W)=\\beta\\left\\|W^{\\top} W \\odot(\\mathbf{1}-I)\\right\\|_{\\mathrm{F}}^{2},\n\t\\end{equation}\n\t{where $\\mathbf{1}$ denotes a matrix with all elements set to 1. The Off-Diagonal OR makes G smooth so that the entire space of $z$ maps to good output samples. Orthogonality regularization is different from spectral normalization .
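The two orthogonal penalties above can be sketched directly (illustrative NumPy; function names are ours, not from the cited papers):

```python
import numpy as np

def ortho_penalty(W, beta=1e-4):
    # Full orthogonality condition: beta * || W^T W - I ||_F^2
    WtW = W.T @ W
    return beta * float(np.sum((WtW - np.eye(W.shape[1])) ** 2))

def off_diag_ortho_penalty(W, beta=1e-4):
    # Relaxed Off-Diagonal OR: penalize only the off-diagonal entries of
    # W^T W, leaving the singular values themselves unconstrained.
    WtW = W.T @ W
    mask = 1.0 - np.eye(W.shape[1])
    return beta * float(np.sum((WtW * mask) ** 2))

Q = np.linalg.qr(np.random.randn(8, 4))[0]  # matrix with orthonormal columns
full = ortho_penalty(Q)                     # ~0 for an orthonormal matrix
relaxed = off_diag_ortho_penalty(2.0 * Q)   # ~0: rescaling is not penalized
```

Note that `2.0 * Q` incurs no off-diagonal penalty but a large full-orthogonality penalty, which is exactly the relaxation the off-diagonal variant provides.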
Orthogonality regularization destroys the information about the spectrum by setting all the singular values to one, while spectral normalization only sets the maximum singular value to one.\n\t}\n\t\\begin{table}\n\t\t\\caption{FID results for different normalization methods on CIFAR-10 and CIFAR-100 datasets. (The structure is the same as SNGAN except that the discriminator uses different normalization methods).}\n\t\t\\label{table:normalization}\n\t\t\\centering\n\t\t\\begin{tabular}{ccc}\n\t\t\t\\toprule\n\t\t\t\\midrule\n\t\t\tmethods & CIFAR-10 & CIFAR-100 \\\\\n\t\t\t\\hline\n\t\t\tNone & 40.91 & 45.44 \\\\\n\t\t\tBN & 37.63 & 44.45\\\\\n\t\t\tLN & \\textbf{19.21} & 21.15 \\\\\n\t\t\tIN & 34.14 & 43.64\\\\\n\t\t\tGN & 19.31 & $\\textbf{20.80}$ \\\\\n\t\t\tWN & 24.28 & 29.96 \\\\\n\t\t\tSN&19.75&22.89\\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{table}", "id": "ed3fad77-b260-419c-962b-ecc924abe2e8", "level": "subsection", "origin_cites_number": 5, "parent_id": "bc57c604-a3ec-457f-ab6f-07cf5243c526", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Regularization and Normalization of \"Other methods\"" ], [ "subsection", "Inverse Gradient Penalty" ] ], "subsections": [], "title": "Inverse Gradient Penalty" }, { "cite_extract_rate": 1, "cites": [ 56, 164, 8324, 108, 99, 130, 161, 162, 163, 62, 157, 78, 8317 ], "content": "{In this section, to provide a practical perspective on the selection of regularization and normalization techniques, we investigate the applications of regularization and normalization techniques frequently employed in state-of-the-art and popular GANs. We select six methods (one per year from 2017 to 2022) categorized into two classes according to the task: Unconditional Generation and Conditional Generation. The selected methods and analysis are shown in Table \\ref{table:SOTA }.
PGGAN is a popular GAN model that grows the size of both the generator and the discriminator progressively, enabling high-resolution image generation. Since PGGAN was proposed in 2017, only some simple regularization techniques were applied: WGAN-GP , BN , and LN ; BigGAN is a popular conditional generative adversarial network that uses many regularization and normalization techniques, such as zc-GP, SN , Off-Diagonal OR , and CBN ; AutoGAN is the first study introducing Neural Architecture Search (NAS) to GANs. It defines the search space for the generator architecture and adopts the Inception score as the reward to discover the best architecture. The main focus of AutoGAN is the architecture, so AutoGAN only employs SN ; StyleGAN2 is the most popular GAN architecture, which produces photorealistic images with large variety and is widely used in image generation tasks such as image completion and image-to-image translation . StyleGAN2-ADA proposes a novel adaptive data augmentation method. Combining StyleGAN2 and adaptive data augmentation, StyleGAN2-ADA obtains impressive performance in image generation, particularly in data-efficient generation. Furthermore, InsGen combines StyleGAN2-ADA with contrastive learning, achieving state-of-the-art results on many generation tasks and datasets. Recently, StyleGAN-XL scales StyleGAN to large diverse datasets and sets a new state of the art in large-scale image synthesis. In summary, many regularization and normalization techniques have been used in state-of-the-art GANs, with zc-GP and SN being the most attractive to researchers. Data augmentation is a striking method and orthogonal to other ongoing research on training, architecture, and regularization. Therefore, popular augmentation strategies, such as ADA, have been employed as default operations in GANs training.
Furthermore, self-supervision has been used to further improve the performance of GANs, which is also orthogonal to other methods.}\n\t\\begin{table}\n\t\t\\caption{The applications of the Regularization and Normalization techniques used in SOTA GANs.}\n\t\t\\label{table:SOTA }\n\t\t\\centering\n\t\t\\footnotesize\n\t\t\\begin{tabular}{c | c | c |c | c | c|c }\t\n\t\t\t\\hline\n\t\t\t\\hline\n\t\t\t{ Method}& Task&\\makecell[c]{Gradient \\\\Penalty}&\\makecell[c]{Data augmentation\\\\ and preprocessing}&\\makecell{Self\\\\supervision}&\\makecell[c]{Weight\\\\ normalization }&\\makecell[c]{Layer\\\\ normalization}\\\\\n\t\t\t\\hline\n\t\t\t\\makecell{PGGAN\\\\(2017)}&\\makecell{Unconditional\\\\ Generation}&WGAN-GP&\\makecell{None}&None&\\makecell{None}&BN: G,LN: D\\\\\n\t\t\t\\hline\n\t\t\t\\makecell{BigGAN\\\\ (2018 )}&\\makecell{Conditional \\\\Generation}&zc-GP&None&None&\\makecell{SN: G, D}&CBN\\\\\n\t\t\t\\hline\n\t\t\t\\makecell{AutoGAN\\\\ (2019 )}&\\makecell{Unconditional\\\\ Generation}&None&None&None&\\makecell{SN:D}&None\\\\\n\t\t\t\\hline\n\t\t\t\\makecell{StyleGAN2-ADA\\\\(2020)}&\\makecell{Unconditional \\\\Generation}&zc-GP&\\makecell{Adaptive}&None&None&IN\\\\\n\t\t\t\\hline\n\t\t\t\\makecell{InsGen\\\\(2021)}&\\makecell{Unconditional \\\\Generation}&zc-GP&\\makecell{Adaptive}&contrastive&None&IN\\\\\n\t\t\t\\hline\n\t\t\t\\makecell{StyleGAN-XL\\\\(2022)}&\\makecell{Conditional \\\\Generation}&None&\\makecell{Translation\\\\Cutout}&None&SN:D&IN\\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{table}", "id": "fb4135aa-ff8b-439d-9a28-a790ffddd379", "level": "section", "origin_cites_number": 13, "parent_id": "4410407c-7d7c-48f1-8174-a9a95319844f", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Applications of Regularization and Normalization in SOTA GANs" ] ], "subsections": [], "title": "Applications of Regularization and Normalization in SOTA GANs" }, { "cite_extract_rate": 0, "cites": [],
"content": "", "id": "b359d174-4880-4149-b0f4-5fd0e7bd75f4", "level": "section", "origin_cites_number": 0, "parent_id": "4410407c-7d7c-48f1-8174-a9a95319844f", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Summary and Outlook" ] ], "subsections": [ "9cf8af1d-75ca-445d-aa5a-fb3169fa5585", "c7f682e9-c24a-41ee-8bfd-ac2766fa59c0" ], "title": "Summary and Outlook" }, { "cite_extract_rate": 0.8888888888888881, "cites": [ 144, 88, 7001, 99, 8322, 78, 8317, 6997 ], "content": "Recently, significant achievements of GANs have been made in generation tasks and the network has been widely used in many computer vision tasks, such as image inpainting, style transfer, text-to-image translations, and attribute editing. However, due to the overconfident assumptions, the training faces many challenges, such as non-convergence, mode collapse, gradient vanishing, and overfitting. To mitigate these problems, many solutions focus on designing new architectures, new loss functions, new optimization methods, and regularization and normalization techniques. \n\tIn this paper, we study GANs training from three perspectives and propose a new taxonomy, denoted as \\textbf{\"Training dynamics\"}, \\textbf{\"Fitting distribution\"}, \\textbf{\"Real \\& Fake\"}, and \\textbf{\"Other methods\"}, to survey the different regularization and normalization techniques during GANs training. Our study provides a systematic and comprehensive analysis of the reviewed methods to serve researchers of the community. 
In addition, we also demonstrate the motivation and objectives of different methods and compare the performance of some popular methods in a fair manner quantitatively, which has implications for future research in selecting their research topics or developing their approaches.\n\t\\begin{comment}\n\tThis paper summarizes the regularization and normalization of generative adversarial networks and explains the problem from three aspects: \n\t\\textbf{Optimal transport and Lipschitz continuity:} First, optimal transport and optimal transport with the regular term are introduced. According to the duality form, WGAN-GP and WGAN-LP are proposed to make the discriminator satisfy 1-Lipschitz continuity and k-Lipschitz continuity ($k\\leq1$), respectively. Next, many gradient penalty methods have been analyzed. These methods achieved certain improvements as compared with WGAN-GP in two prespecitves: \\textbf{(1) To limit the Lipschitz constant to a small value}, such as 0 ; \\textbf{(2) To find the right restricted space}. The restricted space of WGAN-GP and WGAN-LP is the interpolation between the real images and fake images. Some works restricted it at the real images or fake images . The relationship between the Lipschitz constant and the spectral norm of the matrix is also derived in this paper. It indicates that the normalization of the spectral norm can also be used to achieve 1-Lipschitz continuity. Furthermore, the Frobenius norm is proved to be the upper bound of the spectral norm. The results of WGAN-LP can be obtained by normalizing the Frobenius norm. Moreover, the work analyzed the relationship between the mode collapse of the GAN and the eigenvalue distribution of the discriminator weight matrix, ending up in avoiding the mode collapse by limiting the eigenvalue collapse. However, there is a lack of the specific reason or sound proof.\n\t\\textbf{ Training dynamics:} GANs is a two-player zero-sum game. 
Due to the nonlinearity of the neural network, it is difficult to find the global convergence solution. In terms of the local convergence, the Jacobian regularization needs to be added. Few research studies are found in this area.\n\t\\textbf{ Representation:} As mentioned earlier, training of GANs is a semi-supervised task. More prior information is needed to improve the presentative ability of the network. Conditional layer normalization is used to better encode condition information and reduce the difficulty of conditional generation. Consistent regularization, data augmentation and self-supervision use unsupervised methods to improve the supervision information in GANs training, and use additional labels and tasks to improve GANs performance. We suggest that this area is promising and more studies can be conducted in this specific direction.\n\t\\end{comment}", "id": "9cf8af1d-75ca-445d-aa5a-fb3169fa5585", "level": "subsection", "origin_cites_number": 9, "parent_id": "b359d174-4880-4149-b0f4-5fd0e7bd75f4", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Summary and Outlook" ], [ "subsection", "Summary" ] ], "subsections": [], "title": "Summary" }, { "cite_extract_rate": 1, "cites": [ 165 ], "content": "By reviewing the regularization and normalization of GANs, the following questions and thoughts are proposed based on different perspectives of GANs training:\n\t\\begin{itemize}\n\t\t\\item [1)] \n\t\t\\textit{What is a good distance metric, and which divergence should be used in GANs training?} The priority in the training process of GANs is to find a suitable divergence to measure the distance between the generated distribution and the true distribution. Wasserstein divergence is important for the training of GANs. 
However, it is uncertain whether the next proposed divergence performs better.\n\t\t\item [2)]\n\t\t\textit{What is the main difference between real images and generated images?} During the training of unconstrained and unprioritized GANs, if we can quantitatively represent the difference between real images and generated images from different perspectives, efficient regularization methods can be designed accordingly.\n\t\t\item [3)]\n\t\t\textit{How to avoid real images forgetting\footnote{Real images forgetting is caused by not introducing real images while training the generator, which is different from discriminator forgetting.}?} As acknowledged, real images do not directly participate in the training of the generator, thus the discriminator needs to remember the characteristics of the real images to optimize the generator indirectly. We call this real images forgetting. We conjecture that real images forgetting may exist, which may increase the difficulty of GANs training. Some works might serve as a basis to prove this hypothesis and propose effective solutions.\n\t\t\item [4)]\n\t\tRecent studies show that the discriminator suffers from overfitting and discriminator forgetting. This is a common problem of neural networks, caused by the shortcuts of loss-driven training. New methods, such as contrastive learning and representation learning, can be proposed to improve the generalization of the discriminator.\n\t\t\item [5)]\n\t\tRecently, diffusion models have achieved impressive performance in image generation. One possible reason for this success is the phased training strategy in diffusion models . Inspired by this, strategies to reduce the difficulty of GANs training may be proposed.\n\t\end{itemize}\n\t\section*{Acknowledgment}\n\tThe work is partially supported by the National Natural Science Foundation of China under Grant No. U19B2044, No. 61836011 and No. 91746209. 
We are very grateful to the help of Jianlin Su, whose blog is \\url{https://spaces.ac.cn/tag/GAN/}.\n\t\\bibliographystyle{ACM-Reference-Format}\n\t\\bibliography{bare_jrnl_transmag}\n\t\\appendix\n\t\\clearpage", "id": "c7f682e9-c24a-41ee-8bfd-ac2766fa59c0", "level": "subsection", "origin_cites_number": 1, "parent_id": "b359d174-4880-4149-b0f4-5fd0e7bd75f4", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Summary and Outlook" ], [ "subsection", "Outlook" ] ], "subsections": [], "title": "Outlook" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "f6a7c46a-3c0c-4605-a602-4104833c8854", "level": "section", "origin_cites_number": 0, "parent_id": "4410407c-7d7c-48f1-8174-a9a95319844f", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Supplementary Online-only Material" ] ], "subsections": [ "9df0baaf-f220-4a8d-b36a-27ebd583cf6c", "375c251d-5d04-454e-83a6-2140b14015f5", "d07e4dc9-6fdd-4534-85ee-925acd678910" ], "title": "Supplementary Online-only Material" }, { "cite_extract_rate": 0.42857142857142805, "cites": [ 88, 95, 8318 ], "content": "\\label{sect:A-1}\n\tOptimal transport was proposed in the 18th century to minimize the transportation cost while preserving the measure quantities. Given the space with probability measures $(X,\\mu)$ and $(Y,\\upsilon)$, if there is a map $T:X\\rightarrow Y$ which is measure-preserving, then for any $B\\subset Y$, having:\n\t\\begin{equation}\\label{eqn1}\n\t\t\\int_{T^{-1}(B)}\\mathrm{d}\\mu(x)=\\int_B \\mathrm{d}\\upsilon(y).\n\t\\end{equation}\n\tWriting the measure-preserving map as $T_*(\\mu)=\\upsilon$. For any $x\\in X$ and $y\\in Y$, the transportation distance is defined as $c(x,y)$, the total transportation cost is given by:\n\t\\begin{equation}\\label{eqn1}\n\t\tC(T):=\\int_X c(x,T(x)) \\mathrm{d}\\mu(x).\n\t\\end{equation}\n\tIn the 18th century, Monge et al. 
proposed the Optimal Mass Transportation Map that corresponds to the smallest total transportation cost: $C(T)$. The transportation cost corresponding to the optimal transportation map is called the Wasserstein distance between probability measures $\mu$ and $\upsilon$:\n\t\begin{equation}\label{eqn3}\n\t\tW_c(\mu,\upsilon)=\mathop{\min}\limits_{T}\left\{\int_X c(x,T(x)) \mathrm{d}\mu(x)\ |\ T_*(\mu)=\upsilon\right\}.\n\t\end{equation}\n\tIn the 1940s, Kantorovich proved the existence and uniqueness of the solution for the Monge problem, and according to the duality of linear programming, the Kantorovich-Rubinstein (KR) duality of the Wasserstein distance is given by:\n\t\begin{equation}\label{eqn1}\n\t\tW_c(\mu,\upsilon)\n\t\t=\mathop{\max}\limits_{\varphi ,\psi}\left\{\int_X \varphi \mathrm{d}\mu\ +\int_Y \psi \mathrm{d}\upsilon\ |\ \varphi(x)+\psi(y)\leq c(x,y)\right\}.\n\t\end{equation}\n\tThis dual problem is constrained; by defining the c-transform $\psi(y)=\varphi^c(y):=\inf_x\{c(x,y)-\varphi(x)\}$, the Wasserstein distance becomes:\n\t\begin{equation}\label{eqn1}\n\t\tW_c(\mu,\upsilon)=\mathop{\max}\limits_{\varphi}\left\{\int_X \varphi \mathrm{d}\mu\ +\int_Y \varphi^c \mathrm{d}\upsilon\right\},\n\t\end{equation}\n\twhere $\varphi$ is called the Kantorovich potential. It can be shown that if $c(x,y)=|x-y|$ and the Kantorovich potential satisfies 1-Lipschitz continuity, then $\varphi^c=-\varphi$. The Kantorovich potential can be fitted by a deep neural network, which is denoted as $\varphi_\xi$. 
The Wasserstein distance is:\n\t\begin{equation}\n\t\tW_c(\mu,\upsilon)=\mathop{\max}\limits_{||\varphi_\xi||_L\leq 1}\left\{\int_X \varphi_\xi \mathrm{d}\mu\ -\int_Y \varphi_\xi \mathrm{d}\upsilon\right\}.\n\t\t\label{eqn7}\n\t\end{equation}\n\tIf $X$ is the generated image space, $Y$ is the real sample space, $Z$ is the latent space and $g_\theta$ is the generator, the Wasserstein GAN (WGAN) is formulated as a min-max problem:\n\t\begin{equation}\label{eqn1}\n\t\t\mathop{\min}\limits_{\theta}\mathop{\max}\limits_{||\varphi_\xi||_L\leq 1}\left\{\int_Z \varphi_\xi(g_\theta(z)) \mathrm{d}z \ -\int_Y \varphi_\xi(y) \mathrm{d}y\right\}.\n\t\end{equation}\n\tIn the optimization process, the generator and the Kantorovich potential function (discriminator) are independent of each other and are optimized in an alternating, step-by-step iteration.\n\tIf $c(x,y)=\frac{|x-y|^2}{2}$, there is a convex function $u$ called the Brenier potential . The optimal transportation map is given by the gradient map of the Brenier potential: $T(x)=\nabla u(x)$. There exists a relationship between the Kantorovich potential and the Brenier potential : \n\t\begin{equation}\label{eqn1}\n\t\tu(x)=\frac{|x|^2}{2}-\varphi(x).\n\t\end{equation}\n\tFrom the previous discussion, it is evident that the optimal transportation map (Brenier potential) corresponds to the generator, and the Kantorovich potential corresponds to the discriminator. After the discriminator is optimized, the generator can be obtained directly without a separate optimization process .\n\tThe transportation cost of Eq (\ref{eqn3}) can be rewritten as a distance between two distributions:\n\t\begin{equation}\label{eqn10}\n\t\tOT(P||Q)=\mathop{\inf}\limits_{\pi}\int\pi(x,y)c(x,y)\mathrm{d}x\mathrm{d}y,\n\t\end{equation}\n\twhere $\pi(x,y)$ is the joint distribution, satisfying $\int_y\pi(x,y)dy=P(x)$ and $\int_x\pi(x,y)dx=Q(y)$. 
The dual form of Eq (\ref{eqn10}) is derived as follows:\n\t\begin{equation}\label{eqn1}\n\t\tOT(P||Q)=\mathop{\max}\limits_{\varphi ,\psi}\{\int_x \varphi(x)P(x) \mathrm{d}x\ \\ +\int_y\psi(y)Q(y) \mathrm{d}y\ |\ \varphi(x)+\psi(y)\leq c(x,y)\}.\n\t\end{equation} \n\tConsidering the optimal transportation with regular terms, Peyr{\'e} et al. added the entropic regularization for optimal transportation that transforms the dual problem into a smooth unconstrained convex problem. The regularized optimal transport is defined as:\n\t\begin{equation}\label{eqn12}\n\t\tOT_c(P||Q)=\mathop{\min}\limits_{\pi}\int\pi(x,y)c(x,y)\mathrm{d}x\mathrm{d}y+\epsilon E(\pi).\n\t\end{equation}\n\tIf $E(\pi)=\int_x\int_y\pi(x,y)\log(\frac{\pi(x,y)}{P(x)Q(y)})\mathrm{d}x\mathrm{d}y$, Eq (\ref{eqn12}) can be written as:\n\t\begin{equation}\label{eqn13}\n\t\t\begin{split}\n\t\t\tOT_c(P||Q)=&\mathop{\min}\limits_{\pi}\int\pi(x,y)c(x,y)\mathrm{d}x\mathrm{d}y+\epsilon \int_x\int_y\pi(x,y)\log\left(\frac{\pi(x,y)}{P(x)Q(y)}\right)\mathrm{d}x\mathrm{d}y\\\n\t\t\ts.t. \int_y\pi(x,y)&\mathrm{d}y=P(x),\int_x\pi(x,y)\mathrm{d}x=Q(y).\n\t\t\end{split}\n\t\end{equation}\n\tThe dual form of Eq (\ref{eqn13}) becomes:\n\t\begin{equation}\n\t\t\begin{split}\n\t\t\tOT_c(P||Q)\n\t\t\t&=\mathop{\max}\limits_{\varphi ,\psi}\int_x \varphi(x)P(x) \mathrm{d}x\ +\int_y\psi(y)Q(y)\mathrm{d}y\\\n\t\t\t&+\frac{\epsilon}{e}\int_x\int_y\exp\left(\frac{-\left(c(x,y)+\varphi(x)+\psi(y)\right)}{\epsilon}\right)\mathrm{d}x\mathrm{d}y.\n\t\t\end{split}\n\t\t\label{EQ:eqn14}\n\t\end{equation}\n\tPetzka et al. 
set $c(x,y)=||x-y||_2$ in Eq (\\ref{EQ:eqn14}), and the dual form of optimal transport with the regular term can be expressed as:\n\t\\begin{equation}\n\t\t\\mathop{\\sup}\\limits_{\\varphi,\\psi}\\{\\mathbb{E}_{x\\sim{p(x)}}[\\varphi(x)]-\\mathbb{E}_{y\\sim q(y)}[\\psi(y)]\n\t\t-\\frac{4}{\\epsilon}\\int\\int\\mathop{\\max}\\{0,(\\varphi(x)-\\psi(y)-||x-y||_2)\\}^2\\mathrm{d}p(x)\\mathrm{d}q(y) \\}.\n\t\t\\label{EQ:eqn32}\n\t\\end{equation}\n\tSimilar to dealing with a single function, one can replace $\\varphi = \\psi$ in Eq (\\ref{EQ:eqn32}), which leads to the objective of minimum:\n\t\\begin{equation}\\label{eqn1}\n\t\t\\mathbb{E}_{y\\sim q(y)}[\\varphi(y)]-\\mathbb{E}_{x\\sim p(x)}[\\varphi(x)]\n\t\t+\\frac{4}{\\epsilon}\\int\\int\\mathop{\\max}\\{0,(\\varphi(x)-\\varphi(y)-||x-y||_2)\\}^2\\mathrm{d}p(x)\\mathrm{d}q(y).\n\t\\end{equation}", "id": "9df0baaf-f220-4a8d-b36a-27ebd583cf6c", "level": "subsection", "origin_cites_number": 7, "parent_id": "f6a7c46a-3c0c-4605-a602-4104833c8854", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Supplementary Online-only Material" ], [ "subsection", "Optimal Transport and Lipschitz Continuity" ] ], "subsections": [], "title": "Optimal Transport and Lipschitz Continuity" }, { "cite_extract_rate": 0.5, "cites": [ 7190 ], "content": "\\label{sect:A-2}\n\t\\newenvironment{lemma1}{{\\indent\\it \\textbf{Proposition 2.1:}}}{\\hfill \\par}\n\t\\begin{lemma1}\n\t\t\\textit{For zero-sum games, $v^{'}$is negative semi-definite for any local Nash-equilibrium. 
Conversely, if $v(\bar{x})=0$ and $v^{'}$ is negative definite, then $\bar{x}$ is a local Nash-equilibrium.\n\t\t}\n\t\end{lemma1}\n\t\newenvironment{proof1}{{\indent\it Proof 2.1:}}{\hfill $\square$\par}\n\t\begin{proof1}\n\t\tRefer to \n\t\end{proof1}\n\t\textit{Proposition 2.1} gives the conditions for the local convergence of GANs, which reduces to the negative semi-definiteness of the Jacobian matrix. Negative semi-definiteness of the Jacobian matrix corresponds to its eigenvalues being less than or equal to 0. If the eigenvalues of the Jacobian matrix at a certain point are negative real numbers, the training process can converge; but if the eigenvalues are complex with small real parts and relatively large imaginary parts, the training process is difficult to converge unless the learning rate is very small. \n\t\newenvironment{lemma2}{{\indent\it \textbf{Proposition 2.2:}}}{\hfill \par}\n\t\begin{lemma2}\n\t\t\textit{\n\t\t\tLet $F:\Omega\rightarrow\Omega$ be a continuously differentiable function on an open subset $\Omega$ of $R^n$ and let $\bar{x}\in\Omega$ be so that: 1. $F(\bar{x})=\bar{x}$ and 2. the absolute values of the eigenvalues of the Jacobian $F^{'}(x)$ are all smaller than 1.}\n\t\t\textit{There is an open neighborhood $U$ of $\bar{x}$ so that for all $x_0\in U$, the iterates $F^{(k)}(x_0)$ converge to $\bar{x}$. The rate of convergence is at least linear. 
More precisely, the error $||F^{(k)}(x_0)-\\bar{x}||$ is in $\\mathcal{O}(|\\lambda_{max}|^k)$ for $k\\rightarrow\\infty$ where $\\lambda_{max}$ is the eigenvalue of $F^{'}(\\bar{x})$ with the largest absolute value.}\n\t\\end{lemma2}\n\t\\newenvironment{proof2}{{\\indent\\it Proof 2.2:}}{\\hfill $\\square$\\par}\n\t\\begin{proof2}\n\t\tRefer to Section 3 in and Proposition 4.4.1 in .\n\t\\end{proof2}", "id": "375c251d-5d04-454e-83a6-2140b14015f5", "level": "subsection", "origin_cites_number": 2, "parent_id": "f6a7c46a-3c0c-4605-a602-4104833c8854", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Supplementary Online-only Material" ], [ "subsection", "Some Propositions of Training Dynamic in GANs" ] ], "subsections": [], "title": "Some Propositions of Training Dynamic in GANs" }, { "cite_extract_rate": 1, "cites": [ 97, 96 ], "content": "\\label{sect:A-3}\n\t1-Lipschitz continuity is represented as:\n\t\\begin{equation}\n\t\t||D(x_1)-D(x_2)||\\leq ||x_1-x_2||.\\footnote{Lipschitz continuity can be defined by any form of norm.}\n\t\\end{equation}\n\tGenerally, considering the K-Lipschitz for a neural network $f(x)$:\n\t\\begin{equation}\n\t\tf(x)=g_N\\circ\\cdots g_2\\circ g_1(x),\\footnote{$\\circ$ is the symbol for function cascade. Specifically, $h\\circ g(x)=h(g(x))$. This definition of neural network is not general, such as DenseNet and ResNet , which can not be defined like this. Therefore, we do not strictly derive the relationship between the matrix norm and Lipschitz continuity.}\n\t\\end{equation}\n\twhere $g_i(x)=\\sigma (W_i x+b_i)$. And K-Lipschitz continuity for $f(x)$ is:\n\t\\begin{equation}\n\t\t||f(x_1)-f(x_2)||\\leq \\mathrm{K}||x_1-x_2||,\n\t\t\\label{EQ:eqn17}\n\t\\end{equation}\n\twhere K is Lipschitz constant of the function $f$. 
Due to the composition property of Lipschitz functions, $||h\circ g||_{Lip}\leq ||h||_{Lip}\cdot||g||_{Lip}$, $g_i$ needs to satisfy C-Lipschitz continuity ($\mathrm{C}=\sqrt[N]{\mathrm{K}}$) so that $f$ satisfies K-Lipschitz continuity:\n\t\begin{equation}\n\t\t||g_i(x_1)-g_i(x_2)||\leq \mathrm{C}||x_1-x_2||,\n\t\end{equation}\n\t\begin{equation}\n\t\t||\sigma(Wx_1+b)-\sigma(Wx_2+b)||\leq \mathrm{C}||x_1-x_2||.\n\t\t\label{eq:23}\n\t\end{equation}\n\tWhen $x_1\rightarrow x_2$, the Taylor expansion of Eq (\ref{eq:23}) gives:\n\t\begin{equation}\n\t\t||\frac{\partial\sigma}{\partial x} W(x_1-x_2)||\leq \mathrm{C}||x_1-x_2||.\n\t\end{equation}\n\tNormally, $\sigma$ is a function with a bounded derivative, such as the sigmoid, so the $\mathrm{C'}$-Lipschitz continuity can be written as:\n\t\begin{equation}\n\t\t|| W(x_1-x_2)||\leq \mathrm{C'}||x_1-x_2||,\n\t\end{equation}\n\twhere $\mathrm{C'}$ is a bounded constant, which is determined by $\frac{\partial\sigma}{\partial x}$ and $\mathrm{C}$.\n\tSimilarly, the spectral norm of a matrix is defined by:\n\t\begin{equation}\n\t\t||W||_2=\mathop{\max}\limits_{x\not=0}\frac{||Wx||}{||x||}.\n\t\end{equation}\n\tIn this context, the spectral norm $||W||_2$ can be used to represent the Lipschitz constant $\mathrm{C'}$. Lipschitz continuity can thus be achieved, approximately, by normalizing the spectral norm of the weights.\t\n\end{document}\n\endinput", "id": "d07e4dc9-6fdd-4534-85ee-925acd678910", "level": "subsection", "origin_cites_number": 2, "parent_id": "f6a7c46a-3c0c-4605-a602-4104833c8854", "prefix_titles": [ [ "title", "A Systematic Survey of Regularization and Normalization in GANs" ], [ "section", "Supplementary Online-only Material" ], [ "subsection", "Spectral Norm and the Lipschitz Constant" ] ], "subsections": [], "title": "Spectral Norm and the Lipschitz Constant" } ]
84
[ 63, 89, 73, 86, 68, 85, 78, 76, 8317, 53, 60, 6998, 66, 57, 70, 88, 80, 6996, 72, 54, 58, 62, 59, 7190, 55, 69, 77, 6997, 82, 74, 56, 81, 87, 6999, 8316, 84, 71, 61, 75, 67, 65, 83, 64, 79, 91, 90, 92, 93, 94, 95, 8318, 97, 96, 98, 100, 102, 99, 101, 7191, 111, 104, 112, 107, 106, 110, 109, 108, 103, 113, 8319, 105, 115, 114, 7000, 7192, 116, 117, 121, 8320, 119, 118, 120, 123, 122, 126, 132, 8321, 128, 124, 129, 130, 131, 7193, 127, 125, 134, 133, 135, 136, 137, 138, 141, 140, 139, 142, 143, 7002, 7001, 8322, 144, 145, 146, 8323, 149, 7003, 148, 147, 154, 155, 8324, 153, 156, 150, 152, 151, 7194, 157, 159, 160, 158, 164, 161, 162, 163, 165 ]
0.856502
[ "Thanh Thi Nguyen", "Quoc Viet Hung Nguyen", "Dung Tien Nguyen", "Duc Thanh Nguyen", "Thien Huynh-The", "Saeid Nahavandi", "Thanh Tam Nguyen", "Quoc-Viet Pham", "Cuong M. Nguyen" ]
Deep Learning for Deepfakes Creation and Detection: A Survey
2019
2019-09-25T16:03:45Z
cs.CV
Deep learning has been successfully applied to solve various complex problems ranging from big data analytics to computer vision and human-level control. Deep learning advances however have also been employed to create software that can cause threats to privacy, democracy and national security. One of those deep learning-powered applications recently emerged is deepfake. Deepfake algorithms can create fake images and videos that humans cannot distinguish them from authentic ones. The proposal of technologies that can automatically detect and assess the integrity of digital visual media is therefore indispensable. This paper presents a survey of algorithms used to create deepfakes and, more importantly, methods proposed to detect deepfakes in the literature to date. We present extensive discussions on challenges, research trends and directions related to deepfake technologies. By reviewing the background of deepfakes and state-of-the-art deepfake detection methods, this study provides a comprehensive overview of deepfake techniques and facilitates the development of new and more robust methods to deal with the increasingly challenging deepfakes.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "44777335-fb2f-4b5d-83e4-47fe65116410", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Deep Learning for Deepfakes Creation and Detection: A Survey" ] ], "subsections": [ "c7a70291-a1e9-476f-8254-e897ee781284", "b11ed0ba-b192-4a4a-b063-d32fbaab94d4", "7513ea5e-21ef-4144-b033-7ba34feb94da", "af5e6522-a34d-4deb-8b41-532983a76f29", "6f55771a-c73a-46f3-b17e-1720217d26ca" ], "title": "root" }, { "cite_extract_rate": 0.5, "cites": [ 5133, 5128, 5132, 5130, 5131, 5680, 5129, 1602, 986, 1256, 4939 ], "content": "\\label{sec1}\nIn a narrow definition, deepfakes (stemming from ``deep learning\" and ``fake\") are created by techniques that can superimpose face images of a target person onto a video of a source person to make a video of the target person doing or saying things the source person does. This constitutes a category of deepfakes, namely \\emph{face-swap}. In a broader definition, deepfakes are artificial intelligence-synthesized content that can also fall into two other categories, i.e., \\emph{lip-sync} and \\emph{puppet-master}. Lip-sync deepfakes refer to videos that are modified to make the mouth movements consistent with an audio recording. Puppet-master deepfakes include videos of a target person (puppet) who is animated following the facial expressions, eye and head movements of another person (master) sitting in front of a camera .\nWhile some deepfakes can be created by traditional visual effects or computer-graphics approaches, the recent common underlying mechanism for deepfake creation is deep learning models such as autoencoders and generative adversarial networks (GANs), which have been applied widely in the computer vision domain . These models are used to examine facial expressions and movements of a person and synthesize facial images of another person making analogous expressions and movements . 
Deepfake methods normally require a large amount of image and video data to train models to create photo-realistic images and videos. As public figures such as celebrities and politicians may have a large number of videos and images available online, they are initial targets of deepfakes. Deepfakes were used to swap faces of celebrities or politicians onto bodies in porn images and videos. The first deepfake video emerged in 2017, in which the face of a celebrity was swapped onto that of a porn actor. It is a threat to world security when deepfake methods can be employed to create videos of world leaders with fake speeches for falsification purposes . Deepfakes therefore can be abused to cause political or religious tensions between countries, to fool the public and affect results in election campaigns, or to create chaos in financial markets by creating fake news . They can even be used to generate fake satellite images of the Earth containing objects that do not really exist, in order to confuse military analysts, e.g., creating a fake bridge across a river although there is no such bridge in reality. This can mislead troops who have been guided to cross the bridge in a battle .\nAs the democratization of creating realistic digital humans has positive implications, there are also positive uses of deepfakes, such as their applications in visual effects, digital avatars, Snapchat filters, creating voices of those who have lost theirs, or updating episodes of movies without reshooting them . Deepfakes can have creative or productive impacts in photography, video games, virtual reality, movie productions, and entertainment, e.g., realistic video dubbing of foreign films, education through the reanimation of historical figures, virtually trying on clothes while shopping, and so on . However, the number of malicious uses of deepfakes largely dominates that of the positive ones. 
The development of advanced deep neural networks and the availability of large amounts of data have made the forged images and videos almost indistinguishable to humans and even to sophisticated computer algorithms. The process of creating those manipulated images and videos is also much simpler today, as it needs as little as an identity photo or a short video of a target individual. Less and less effort is required to produce stunningly convincing tampered footage. Recent advances can even create a deepfake with just a still image . Deepfakes therefore can be a threat affecting not only public figures but also ordinary people. For example, a voice deepfake was used to scam a CEO out of \$243,000 . The recent release of a piece of software called DeepNude shows more disturbing threats, as it can transform a person into non-consensual porn . Likewise, the Chinese app Zao has gone viral lately as less-skilled users can swap their faces onto bodies of movie stars and insert themselves into well-known movies and TV clips . These forms of falsification create a huge threat of privacy and identity violation, and affect many aspects of human lives.\n\begin{figure}[ht]\n\centering\n\includegraphics[width=0.85\columnwidth]{Fig1.pdf}\n\caption{Number of papers related to deepfakes in years from 2016 to 2021, obtained from https://app.dimensions.ai at the end of 2021 with the search keyword ``deepfake" applied to full text of scholarly papers.}\n\label{fig0} \n\end{figure}\nFinding the truth in the digital domain has therefore become increasingly critical. It is even more challenging when dealing with deepfakes, as they are mostly used to serve malicious purposes and almost anyone can create deepfakes these days using existing deepfake tools. Thus far, there have been numerous methods proposed to detect deepfakes . Most of them are based on deep learning, and thus a battle between malicious and positive uses of deep learning methods has been arising. 
To address the threat of face-swapping technology or deepfakes, the United States Defense Advanced Research Projects Agency (DARPA) initiated a research scheme in media forensics (named Media Forensics or MediFor) to accelerate the development of fake digital visual media detection methods . Recently, Facebook Inc., teaming up with Microsoft Corp and the Partnership on AI coalition, launched the Deepfake Detection Challenge to catalyse more research and development in detecting and preventing deepfakes from being used to mislead viewers . Data obtained from https://app.dimensions.ai at the end of 2021 show that the number of deepfake papers has increased significantly in recent years (Fig. \ref{fig0}). Although the obtained numbers of deepfake papers may be lower than the actual numbers, the research trend of this topic is obviously increasing.\nThere have been existing survey papers about creating and detecting deepfakes, presented in . For example, focused on reenactment approaches (i.e., to change a target’s expression, mouth, pose, gaze or body), and replacement approaches (i.e., to replace a target’s face by swap or transfer methods). separated detection approaches into conventional methods (e.g., blind methods without using any external data for training, one-class sensor-based and model-based methods, and supervised methods with handcrafted features) and deep learning-based approaches (e.g., CNN models). categorized both creation and detection methods based on the way deepfakes are created, including entire face synthesis, identity swap, attribute manipulation, and expression swap. On the other hand, we carry out this survey with a different perspective and taxonomy. We categorize the deepfake detection methods based on the data type, i.e., images or videos, as presented in Fig. \ref{fig2}. With fake image detection methods, we focus on the features that are used, i.e., whether they are handcrafted features or deep features. 
With fake video detection methods, two main subcategories are identified based on whether the method uses temporal features across frames or visual artifacts within a video frame. We also discuss extensively the challenges, research trends and directions on deepfake detection and multimedia forensics problems.\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=1.0\\columnwidth]{Fig2.pdf}\n\\caption{Categories of reviewed papers relevant to deepfake detection methods where we divide papers into two major groups, i.e., fake image detection and face video detection.}\n\\label{fig2} \n\\end{figure}", "id": "c7a70291-a1e9-476f-8254-e897ee781284", "level": "section", "origin_cites_number": 22, "parent_id": "44777335-fb2f-4b5d-83e4-47fe65116410", "prefix_titles": [ [ "title", "Deep Learning for Deepfakes Creation and Detection: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 7893, 5134, 62, 7896, 5138, 5136, 7895, 96, 5137, 150, 157, 7894, 5135, 7892 ], "content": "\\label{sec2}\nDeepfakes have become popular due to the quality of tampered videos and also the easy-to-use ability of their applications to a wide range of users with various computer skills from professional to novice. These applications are mostly developed based on deep learning techniques. Deep learning is well known for its capability of representing complex and high-dimensional data. One variant of the deep networks with that capability is deep autoencoders, which have been widely applied for dimensionality reduction and image compression . The first attempt of deepfake creation was FakeApp, developed by a Reddit user using autoencoder-decoder pairing structure . In that method, the autoencoder extracts latent features of face images and the decoder is used to reconstruct the face images. 
To swap faces between source images and target images, there is a need of two encoder-decoder pairs where each pair is used to train on an image set, and the encoder's parameters are shared between two network pairs. In other words, two pairs have the same encoder network. This strategy enables the common encoder to find and learn the similarity between two sets of face images, which are relatively unchallenging because faces normally have similar features such as eyes, nose, mouth positions. Fig. \\ref{fig1} shows a deepfake creation process where the feature set of face A is connected with the decoder B to reconstruct face B from the original face A. This approach is applied in several works such as DeepFaceLab , DFaker , DeepFake\\_tf (tensorflow-based deepfakes) .\n\\begin{table*}\n\\centering\n\\begin{scriptsize}\n\\caption{Summary of notable deepfake tools}\n\\label{table1}\n\\begin{tabular}{p{0.10\\textwidth} p{0.29\\textwidth} p{0.54\\textwidth}}\n\\hline\n\\textbf{Tools} & \\textbf{Links} & \\textbf{Key Features}\\\\\n\\hline\nFaceswap\n&https://github.com/deepfakes/faceswap\n&- Using two encoder-decoder pairs.\\newline\n- Parameters of the encoder are shared.\\\\\n\\hline\nFaceswap-GAN\n&https://github.com/shaoanlu/faceswap-GAN\n&Adversarial loss and perceptual loss (VGGface) are added to an auto-encoder architecture.\\\\\n\\hline\nFew-Shot Face Translation\n& https://github.com/shaoanlu/fewshot-face-translation-GAN\n& - Use a pre-trained face recognition model to extract latent embeddings for GAN processing.\\newline\n- Incorporate semantic priors obtained by modules from FUNIT and SPADE .\\\\\n\\hline\nDeepFaceLab\n&https://github.com/iperov/DeepFaceLab\n&- Expand from the Faceswap method with new models, e.g. H64, H128, LIAEF128, SAE .\\newline\n- Support multiple face extraction modes, e.g. 
S3FD, MTCNN, dlib, or manual .\\\\\n\\hline\nDFaker\n&https://github.com/dfaker/df\n&- DSSIM loss function is used to reconstruct face.\\newline\n- Implemented based on Keras library.\\\\\n\\hline\nDeepFake\\_tf\n&https://github.com/StromWine/DeepFake\\_tf\n&Similar to DFaker but implemented based on tensorflow.\\\\\n\\hline\nAvatarMe\n& https://github.com/lattas/AvatarMe\n& -\tReconstruct 3D faces from arbitrary ``in-the-wild\" images.\\newline\n- Can reconstruct authentic 4K by 6K-resolution 3D faces from a single low-resolution image .\\\\\n\\hline\nMarioNETte\n& https://hyperconnect.github.io/MarioNETte\n& -\tA few-shot face reenactment framework that preserves the target identity.\\newline\n- No additional fine-tuning phase is needed for identity adaptation .\\\\\n\\hline\nDiscoFaceGAN\n& https://github.com/microsoft/DiscoFaceGAN\n& -\tGenerate face images of virtual people with independent latent variables of identity, expression, pose, and illumination.\\newline\n- Embed 3D priors into adversarial learning .\\\\\n\\hline\nStyleRig\n& https://gvv.mpi-inf.mpg.de/projects/StyleRig\n& -\tCreate portrait images of faces with a rig-like control over a pretrained and fixed StyleGAN via 3D morphable face models.\\newline\n- Self-supervised without manual annotations .\\\\\n\\hline\nFaceShifter\n& https://lingzhili.com/FaceShifterPage\n& -\tFace swapping in high-fidelity by exploiting and integrating the target attributes.\\newline\n- Can be applied to any new face pairs without requiring subject specific training .\\\\\n\\hline\nFSGAN\n& https://github.com/YuvalNirkin/fsgan\n& -\tA face swapping and reenactment model that can be applied to pairs of faces without requiring training on those faces.\\newline\n- Adjust to both pose and expression variations .\\\\\n\\hline\nStyleGAN\n& https://github.com/NVlabs/stylegan\n& -\tA new generator architecture for GANs is proposed based on style transfer literature.\\newline\n- The new architecture leads to automatic, 
unsupervised separation of high-level attributes and enables intuitive, scale-specific control of the synthesis of images .\\\\\n\\hline\nFace2Face & https://justusthies.github.io/posts/face2face/ & - Real-time facial reenactment of monocular target video sequence, e.g. Youtube video.\\newline\n- Animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion .\\\\\n\\hline\nNeural Textures & https://github.com/SSRSGJYD/NeuralTexture & \n- Feature maps that are learned as part of the scene capture process and stored as maps on top of {3D} mesh proxies.\\newline\n- Can coherently re-render or manipulate existing video content in both static and dynamic environments at real-time rates .\\\\\n\\hline\nTransformable Bottleneck Networks\n& https://github.com/kyleolsz/TB-Networks\n& -\tA method for fine-grained 3D manipulation of image content.\\newline\n- Apply spatial transformations in CNN models using a transformable bottleneck framework .\\\\\n\\hline\n``Do as I Do\" Motion \\newline Transfer\n& github.com/carolineec/EverybodyDanceNow\n& - Automatically transfer the motion from a source to a target person by learning a video-to-video translation.\\newline\n- Can create a motion-synchronized dancing video with multiple subjects .\\\\\n\\hline\nNeural Voice Puppetry\n& https://justusthies.github.io/posts/neural-voice-puppetry\n& - A method for audio-driven facial video synthesis.\\newline\n- Synthesize videos of a talking head from an audio sequence of another person using 3D face representation. .\\\\\n\\hline\n\\end{tabular}\n\\end{scriptsize}\n\\end{table*}\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.92\\columnwidth]{Fig3.pdf}\n\\caption{A deepfake creation model using two encoder-decoder pairs. Two networks use the same encoder but different decoders for training process (top). 
An image of face A is encoded with the common encoder and decoded with decoder B to create a deepfake (bottom). The reconstructed image (in the bottom) is the face B with the mouth shape of face A. Face B originally has the mouth of an upside-down heart while the reconstructed face B has the mouth of a conventional heart.}\n\\label{fig1} \n\\end{figure} \nBy adding adversarial loss and perceptual loss implemented in VGGFace to the encoder-decoder architecture, an improved version of deepfakes based on the generative adversarial network , i.e., faceswap-GAN, was proposed in . The VGGFace perceptual loss is added to make eye movements more realistic and consistent with input faces, and to help smooth out artifacts in the segmentation mask, leading to higher quality output videos. This model facilitates the creation of outputs with 64x64, 128x128, and 256x256 resolutions. In addition, the multi-task convolutional neural network (CNN) from the FaceNet implementation is used to make face detection more stable and face alignment more reliable. The CycleGAN is utilized for generative network implementation in this model.\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.55\\columnwidth]{Fig4.pdf}\n\\caption{The GAN architecture consists of a generator and a discriminator, each of which can be implemented by a neural network. The entire system can be trained with backpropagation, which allows both networks to improve their capabilities.}\n\\label{figGAN} \n\\end{figure} \nA conventional GAN model comprises two neural networks: a generator and a discriminator, as depicted in Fig. \\ref{figGAN}. Given a dataset of real images $x$ having a distribution of $p_{data}$, the aim of the generator $G$ is to produce images $G(z)$ similar to real images $x$ with $z$ being noise signals having a distribution of $p_z$. The aim of the discriminator $D$ is to correctly classify images generated by $G$ and real images $x$.
The discriminator $D$ is trained to improve its classification capability, i.e., to maximize $D(x)$, which represents the probability that $x$ is a real image rather than a fake image generated by $G$. On the other hand, $G$ is trained to minimize the probability that its outputs are classified by $D$ as synthetic images, i.e., to minimize $1-D(G(z))$. This is a minimax game between two players $D$ and $G$ that can be described by the following value function :\n\\begin{multline}\n \\min_G \\max_D V(D,G)=\\EX_{x\\sim p_{data}(x)}[\\log D(x)] \\\\ + \\EX_{z\\sim p_z(z)}[\\log (1-D(G(z)))]\n\\end{multline}\nAfter sufficient training, both networks improve their capabilities, i.e., the generator $G$ is able to produce images that are highly similar to real images while the discriminator $D$ is highly capable of distinguishing fake images from real ones.\nTable \\ref{table1} presents a summary of popular deepfake tools and their typical features. Among them, a prominent method for face synthesis based on a GAN model, namely StyleGAN, was introduced in . StyleGAN is motivated by style transfer with a special generator network architecture that is able to create realistic face images. In a traditional GAN model, e.g., the progressive growing of GAN (PGGAN) , the noise signal (latent code) is fed to the input layer of a feedforward network that represents the generator. In StyleGAN, two networks are constructed and linked together: a mapping network $f$ and a synthesis network $g$. The latent code $z \\in Z$ is first converted to $w \\in W$ (where $W$ is an intermediate latent space) through a non-linear function $f:Z\\rightarrow W$, which is characterized by a neural network (i.e., the mapping network) consisting of several fully connected layers.
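A minimal numpy sketch of such a mapping network, converting a latent code $z$ into an intermediate latent $w$ through a stack of fully connected layers (the depth, layer widths, random weights, and ReLU activation below are toy assumptions for illustration; StyleGAN's actual mapping network is an 8-layer, 512-wide MLP):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for StyleGAN's mapping network f: Z -> W.
LAYER_SIZES = [16, 32, 32, 16]  # z dim -> hidden -> hidden -> w dim
weights = [rng.normal(scale=0.1, size=(a, b))
           for a, b in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]
biases = [np.zeros(b) for b in LAYER_SIZES[1:]]

def mapping_network(z):
    """Non-linear map f: Z -> W via stacked fully connected layers."""
    h = z
    for W_l, b_l in zip(weights, biases):
        h = np.maximum(h @ W_l + b_l, 0.0)  # affine layer + ReLU
    return h

z = rng.normal(size=LAYER_SIZES[0])   # latent code z in Z
w = mapping_network(z)                # intermediate latent w in W
print(w.shape)
```

The resulting $w$ is then specialized to per-layer styles, as described next.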
Using an affine transformation, the intermediate representation $w$ is specialized to styles $y=(y_s,y_b)$ that will be fed to the adaptive instance normalization (AdaIN) operations, specified as: \n\\begin{equation}\n \\mathrm{AdaIN}(x_i,y)=y_{s,i}\\frac{x_i-\\mu(x_i)}{\\sigma(x_i)}+y_{b,i}\n\\end{equation}\nwhere each feature map $x_i$ is normalized separately. The StyleGAN generator architecture allows controlling the image synthesis by modifying the styles at different scales. In addition, instead of using one random latent code during training, this method uses two latent codes to generate a given proportion of images. More specifically, two latent codes $z_1$ and $z_2$ are fed to the mapping network to create $w_1$ and $w_2$, respectively, which control the styles by applying $w_1$ before and $w_2$ after the crossover point. Fig. \\ref{figMixing} demonstrates examples of images created by mixing two latent codes at three different scales where each subset of styles controls separate meaningful high-level attributes of the image. In other words, the generator architecture of StyleGAN is able to learn separation of high-level attributes (e.g., pose and identity when trained on human faces) and enables intuitive, scale-specific control of the face synthesis.\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.9\\columnwidth]{Fig5.pdf}\n\\caption{Examples of mixing styles using StyleGAN: the output images are generated by copying a specified subset of styles from source B and taking the rest from source A.
a) Copying coarse styles from source B will generate images that have high-level aspects from source B and all colors and finer facial features from source A; b) if copying the styles of middle resolutions from B, the output images will have smaller scale facial features from B and preserve the pose, general face shape, and eyeglasses from A; c) if copying the fine styles from source B, the generated images will have the color scheme and microstructure of source B .}\n\\label{figMixing} \n\\end{figure}", "id": "b11ed0ba-b192-4a4a-b063-d32fbaab94d4", "level": "section", "origin_cites_number": 21, "parent_id": "44777335-fb2f-4b5d-83e4-47fe65116410", "prefix_titles": [ [ "title", "Deep Learning for Deepfakes Creation and Detection: A Survey" ], [ "section", "Deepfake Creation" ] ], "subsections": [], "title": "Deepfake Creation" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 1244, 5139, 5134 ], "content": "\\label{sec:3}\nDeepfake detection is normally deemed a binary classification problem where classifiers are used to classify between authentic videos and tampered ones. This kind of methods requires a large database of real and fake videos to train classification models. The number of fake videos is increasingly available, but it is still limited in terms of setting a benchmark for validating various detection methods. To address this issue, Korshunov and Marcel produced a notable deepfake dataset consisting of 620 videos based on the GAN model using the open source code Faceswap-GAN . Videos from the publicly available VidTIMIT database were used to generate low and high quality deepfake videos, which can effectively mimic the facial expressions, mouth movements, and eye blinking. These videos were then used to test various deepfake detection methods. Test results show that the popular face recognition systems based on VGG and Facenet are unable to detect deepfakes effectively. 
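The AdaIN operation and the two-code style mixing described above can be sketched in numpy as follows. The feature-map shapes, the style parameters (which in StyleGAN come from an affine map of $w_1$ and $w_2$, omitted here), the number of layers, and the crossover point are toy assumptions; the convolutions between synthesis layers are also omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

def adain(x, y_s, y_b, eps=1e-8):
    """AdaIN(x_i, y) = y_{s,i} * (x_i - mu(x_i)) / sigma(x_i) + y_{b,i},
    applied to each feature map x_i separately."""
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        out[i] = y_s[i] * (x[i] - x[i].mean()) / (x[i].std() + eps) + y_b[i]
    return out

# Toy feature maps (channels x height x width) inside the synthesis network.
x = rng.normal(size=(4, 8, 8))

# Two styles, standing in for affine transforms of the codes w1 and w2.
style_1 = (rng.normal(size=4), rng.normal(size=4))
style_2 = (rng.normal(size=4), rng.normal(size=4))

# Style mixing: apply style_1 before the crossover layer and style_2 after.
crossover, n_layers = 2, 4
for layer in range(n_layers):
    y_s, y_b = style_1 if layer < crossover else style_2
    x = adain(x, y_s, y_b)

print(x.shape)
```

After each AdaIN call, feature map $i$ has mean $y_{b,i}$ and standard deviation $|y_{s,i}|$, which is how the styles take control of the statistics, and hence the appearance, at each scale.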
Other methods such as lip-syncing approaches and image quality metrics with support vector machine (SVM) produce very high error rate when applied to detect deepfake videos from this newly produced dataset. This raises concerns about the critical need of future development of more robust methods that can detect deepfakes from genuine.\nThis section presents a survey of deepfake detection methods where we group them into two major categories: fake image detection methods and fake video detection ones (Fig. \\ref{fig2}). The latter is distinguished into two smaller groups: \\emph{visual artifacts} within single video frame-based methods and \\emph{temporal features} across frames-based ones. Whilst most of the methods based on temporal features use deep learning \\emph{recurrent} classification models, the methods use visual artifacts within video frame can be implemented by either deep or shallow classifiers.", "id": "7513ea5e-21ef-4144-b033-7ba34feb94da", "level": "section", "origin_cites_number": 9, "parent_id": "44777335-fb2f-4b5d-83e4-47fe65116410", "prefix_titles": [ [ "title", "Deep Learning for Deepfakes Creation and Detection: A Survey" ], [ "section", "Deepfake Detection" ] ], "subsections": [ "f1d90192-c843-4231-8ac1-201458f556a0", "406b6759-531d-4af3-af75-8057a4f625fc" ], "title": "Deepfake Detection" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 5140 ], "content": "Deepfakes are increasingly detrimental to privacy, society security and democracy . Methods for detecting deepfakes have been proposed as soon as this threat was introduced. Early attempts were based on handcrafted features obtained from artifacts and inconsistencies of the fake image synthesis process. 
Recent methods, e.g., , have commonly applied deep learning to automatically extract salient and discriminative features to detect deepfakes.", "id": "f1d90192-c843-4231-8ac1-201458f556a0", "level": "subsection", "origin_cites_number": 3, "parent_id": "7513ea5e-21ef-4144-b033-7ba34feb94da", "prefix_titles": [ [ "title", "Deep Learning for Deepfakes Creation and Detection: A Survey" ], [ "section", "Deepfake Detection" ], [ "subsection", "Fake Image Detection" ] ], "subsections": [ "5d2250d1-1cae-45db-a279-c36ed6840045", "87acc6e6-8b7c-49ef-9ce2-768ae70fea85" ], "title": "Fake Image Detection" }, { "cite_extract_rate": 0.2, "cites": [ 7897, 5141 ], "content": "Most works on detection of GAN generated images do not consider the generalization capability of the detection models although the development of GAN is ongoing, and many new extensions of GAN are frequently introduced. Xuan et al. used an image preprocessing step, e.g., Gaussian blur and Gaussian noise, to remove low level high frequency clues of GAN images. This increases the pixel level statistical similarity between real images and fake images and allows the forensic classifier to learn more intrinsic and meaningful features, which has better generalization capability than previous image forensics methods or image steganalysis networks .\nZhang et al. used the bag of words method to extract a set of compact features and fed it into various classifiers such as SVM , random forest (RF) and multi-layer perceptrons (MLP) for discriminating swapped face images from the genuine. Among deep learning-generated images, those synthesised by GAN models are probably most difficult to detect as they are realistic and high-quality based on GAN's capability to learn distribution of the complex input data and generate new outputs with similar input distribution. 
\nOn the other hand, Agarwal and Varshney cast the GAN-based deepfake detection as a hypothesis testing problem where a statistical framework was introduced using the information-theoretic study of authentication . The minimum distance between distributions of legitimate images and images generated by a particular GAN is defined, namely the oracle error. The analytic results show that this distance increases when the GAN is less accurate, and in this case, it is easier to detect deepfakes. In case of high-resolution image inputs, an extremely accurate GAN is required to generate fake images that are hard to detect by this method.", "id": "5d2250d1-1cae-45db-a279-c36ed6840045", "level": "subsubsection", "origin_cites_number": 10, "parent_id": "f1d90192-c843-4231-8ac1-201458f556a0", "prefix_titles": [ [ "title", "Deep Learning for Deepfakes Creation and Detection: A Survey" ], [ "section", "Deepfake Detection" ], [ "subsection", "Fake Image Detection" ], [ "subsubsection", "Handcrafted Features-based Methods" ] ], "subsections": [], "title": "Handcrafted Features-based Methods" }, { "cite_extract_rate": 0.551724137931034, "cites": [ 1603, 895, 56, 62, 5142, 63, 5143, 7194, 8317, 3212, 1003, 485, 96, 1600, 78, 7898 ], "content": "Face swapping has a number of compelling applications in video compositing, transfiguration in portraits, and especially in identity protection as it can replace faces in photographs by ones from a collection of stock images. However, it is also one of the techniques that cyber attackers employ to penetrate identification or authentication systems to gain illegitimate access. The use of deep learning such as CNN and GAN has made swapped face images more challenging for forensics models as it can preserve pose, facial expression and lighting of the photographs .\nHsu et al. introduced a two-phase deep learning method for detection of deepfake images. 
The first phase is a feature extractor based on the common fake feature network (CFFN) where the Siamese network architecture presented in is used. The CFFN encompasses several dense units with each unit including different numbers of dense blocks to improve the representative capability for the input images. \nDiscriminative features between the fake and real images are extracted through the CFFN learning process based on the use of pairwise information, which is the label of each pair of two input images. If the two images are of the same type, i.e., fake-fake or real-real, the pairwise label is $1$. In contrast, if they are of different types, i.e., fake-real, the pairwise label is $0$. The CFFN-based discriminative features are then fed to a neural network classifier to distinguish deceptive images from genuine. The proposed method is validated for both fake face and fake general image detection. On the one hand, the face dataset is obtained from CelebA , containing 10,177 identities and 202,599 aligned face images of various poses and background clutter. Five GAN variants are used to generate fake images with size of 64x64, including deep convolutional GAN (DCGAN) , Wasserstein GAN (WGAN) , WGAN with gradient penalty (WGAN-GP) , least squares GAN , and PGGAN . A total of 385,198 training images and 10,000 test images of both real and fake ones are obtained for validating the proposed method. On the other hand, the general dataset is extracted from the ILSVRC12 . The large scale GAN training model for high fidelity natural image synthesis (BIGGAN) , self-attention GAN and spectral normalization GAN are used to generate fake images with size of 128x128. The training set consists of 600,000 fake and real images whilst the test set includes 10,000 images of both types. 
Experimental results show the superior performance of the proposed method against its competing methods such as those introduced in .\nLikewise, proposed a CNN model, namely SCnet, to detect deepfake images, which are generated by the Glow-based facial forgery tool . The fake images synthesized by the Glow model have the facial expression maliciously tampered. These images are hyper-realistic with perfect visual qualities, but they still have subtle or noticeable manipulation traces, which are exploited by the SCnet. The SCnet is able to automatically learn high-level forensics features of image data thanks to a hierarchical feature extraction block, which is formed by stacking four convolutional layers. Each layer learns a new set of feature maps from the previous layer, with each convolutional operation is defined by:\n\\begin{equation}\n f_j^{(n)} = \\sum_{i=1}^i f_i^{(n-1)}*\\omega_{ij}^{(n)} + b_j^{(n)}\n\\end{equation}\nwhere $f_j^{(n)}$ is the $j^{th}$ feature map of the $n^{th}$ layer, $\\omega_{ij}^{(n)}$ is the weight of the $i^{th}$ channel of the $j^{th}$ convolutional kernel in the $n^{th}$ layer, and $b_j^{(n)}$ is the bias term of the $j^{th}$ convolutional kernel in the $n^{th}$ layer.\nThe proposed approach is evaluated using a dataset consisting of 321,378 face images, which are created by applying the Glow model to the CelebA face image dataset . 
Evaluation results show that the SCnet model obtains higher accuracy and better generalization than the Meso-4 model proposed in .\n\\begin{figure*}[!ht]\n\\centering\n\\includegraphics[width=1.75\\columnwidth]{Fig6.pdf}\n\\caption{A two-step process for face manipulation detection where the preprocessing step aims to detect, crop and align faces on a sequence of frames and the second step distinguishes manipulated and authentic face images by combining convolutional neural network (CNN) and recurrent neural network (RNN) .}\n\\label{fig3} \n\\end{figure*}\nRecently, proposed a method for deepfake detection using self-consistency of local source features, which are content-independent, spatially-local information of images. These features could come from either imaging pipelines, encoding methods or image synthesis approaches. The hypothesis is that a modified image would have different source features at different locations, while an original image will have the same source features across locations. These source features, represented in the form of down-sampled feature maps, are extracted by a CNN model using a special representation learning method called pairwise self-consistency learning. This learning method aims to penalize pairs of feature vectors that refer to locations from the same image for having a low cosine similarity score. At the same time, it also penalizes the pairs from different images for having a high similarity score. The learned feature maps are then fed to a classification method for deepfake detection. This proposed approach is evaluated on seven popular datasets, including FaceForensics++ , DeepfakeDetection , Celeb-DF-v1 \\& Celeb-DF-v2 , Deepfake Detection Challenge (DFDC) , DFDC Preview , and DeeperForensics-1.0 . Experimental results demonstrate that the proposed approach is superior to state-of-the-art methods. 
It however may have a limitation when dealing with fake images that are generated by methods that directly output the whole images whose source features are consistent across all positions within each image.", "id": "87acc6e6-8b7c-49ef-9ce2-768ae70fea85", "level": "subsubsection", "origin_cites_number": 29, "parent_id": "f1d90192-c843-4231-8ac1-201458f556a0", "prefix_titles": [ [ "title", "Deep Learning for Deepfakes Creation and Detection: A Survey" ], [ "section", "Deepfake Detection" ], [ "subsection", "Fake Image Detection" ], [ "subsubsection", "Deep Features-based Methods" ] ], "subsections": [], "title": "Deep Features-based Methods" }, { "cite_extract_rate": 1, "cites": [ 1603 ], "content": "Most image detection methods cannot be used for videos because of the strong degradation of the frame data after video compression . Furthermore, videos have temporal characteristics that are varied among sets of frames and they are thus challenging for methods designed to detect only still fake images. This subsection focuses on deepfake video detection methods and categorizes them into two smaller groups: methods that employ temporal features and those that explore visual artifacts within frames.", "id": "406b6759-531d-4af3-af75-8057a4f625fc", "level": "subsection", "origin_cites_number": 1, "parent_id": "7513ea5e-21ef-4144-b033-7ba34feb94da", "prefix_titles": [ [ "title", "Deep Learning for Deepfakes Creation and Detection: A Survey" ], [ "section", "Deepfake Detection" ], [ "subsection", "Fake Video Detection" ] ], "subsections": [ "f76aa108-55fe-4a3b-9495-cbb64d02a169", "89bc57b3-3d0e-4686-9c89-9b9c741caaac" ], "title": "Fake Video Detection" }, { "cite_extract_rate": 0.5454545454545451, "cites": [ 5142, 1600, 97, 243, 3758, 96 ], "content": "Based on the observation that temporal coherence is not enforced effectively in the synthesis process of deepfakes, Sabir et al. leveraged the use of spatio-temporal features of video streams to detect deepfakes. 
Video manipulation is carried out on a frame-by-frame basis so that low level artifacts produced by face manipulations are believed to further manifest themselves as temporal artifacts with inconsistencies across frames. A recurrent convolutional model (RCN) was proposed based on the integration of the convolutional network DenseNet and the gated recurrent unit cells to exploit temporal discrepancies across frames (see Fig. \\ref{fig3}). The proposed method is tested on the FaceForensics++ dataset, which includes 1,000 videos , and shows promising results. \nLikewise, highlighted that deepfake videos contain intra-frame inconsistencies and temporal inconsistencies between frames. They then proposed the temporal-aware pipeline method that uses CNN and long short term memory (LSTM) to detect deepfake videos. CNN is employed to extract frame-level features, which are then fed into the LSTM to create a temporal sequence descriptor. A fully-connected network is finally used for classifying doctored videos from real ones based on the sequence descriptor as illustrated in Fig. \\ref{fig4}. An accuracy of greater than 97\\% was obtained using a dataset of 600 videos, including 300 deepfake videos collected from multiple video-hosting websites and 300 pristine videos randomly selected from the Hollywood human actions dataset in .\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=1\\columnwidth]{Fig7.pdf}\n\\caption{A deepfake detection method using convolutional neural network (CNN) and long short term memory (LSTM) to extract temporal features of a given video sequence, which are represented via the sequence descriptor. 
The detection network consisting of fully-connected layers is employed to take the sequence descriptor as input and calculate probabilities of the frame sequence belonging to either authentic or deepfake class .}\n\\label{fig4} \n\\end{figure}\nOn the other hand, the use of a physiological signal, eye blinking, to detect deepfakes was proposed in Li et al. based on the observation that a person in deepfakes has a lot less frequent blinking than that in untampered videos. A healthy adult human would normally blink somewhere between 2 to 10 seconds, and each blink would take 0.1 and 0.4 seconds. Deepfake algorithms, however, often use face images available online for training, which normally show people with open eyes, i.e., very few images published on the internet show people with closed eyes. Thus, without having access to images of people blinking, deepfake algorithms do not have the capability to generate fake faces that can blink normally. In other words, blinking rates in deepfakes are much lower than those in normal videos. To discriminate real and fake videos, Li et al. crop eye areas in the videos and distribute them into long-term recurrent convolutional networks (LRCN) for dynamic state prediction. The LRCN consists of a feature extractor based on CNN, a sequence learning based on long short term memory (LSTM), and a state prediction based on a fully connected layer to predict probability of eye open and close state. The eye blinking shows strong temporal dependencies and thus the implementation of LSTM helps to capture these temporal patterns effectively.\nRecently, proposed the use of optical flow to gauge the information along the temporal axis of a frame sequence for video deepfake detection. The optical flow is a vector field calculated on two temporal-distinct frames of a video that can describe the movement of objects in a scene. The optical flow fields are expected to be different between synthetically created frames and naturally generated ones . 
Unnatural movements of lips, eyes, or of the entire faces inserted into deepfake videos would introduce distinctive motion patterns when compared with pristine ones. Based on this assumption, features consisting of optical flow fields are fed into a CNN model for discriminating between deepfakes and original videos. More specifically, the ResNet50 architecture is implemented as a CNN model for experiments. The results obtained using the FaceForensics++ dataset show that this approach is comparable with state-of-the-art methods in terms of classification accuracy. A combination of this kind of feature with frame-based features is also experimented, which results in an improved deepfake detection performance. This demonstrates the usefulness of optical flow fields in capturing the inconsistencies on the temporal axis of video frames for deepfake detection.", "id": "f76aa108-55fe-4a3b-9495-cbb64d02a169", "level": "subsubsection", "origin_cites_number": 11, "parent_id": "406b6759-531d-4af3-af75-8057a4f625fc", "prefix_titles": [ [ "title", "Deep Learning for Deepfakes Creation and Detection: A Survey" ], [ "section", "Deepfake Detection" ], [ "subsection", "Fake Video Detection" ], [ "subsubsection", "Temporal Features across Video Frames" ] ], "subsections": [], "title": "Temporal Features across Video Frames" }, { "cite_extract_rate": 0, "cites": [], "content": "As can be noticed in the previous subsection, the methods using temporal patterns across video frames are mostly based on deep recurrent network models to detect deepfake videos. This subsection investigates the other approach that normally decomposes videos into frames and explores visual artifacts within single frames to obtain discriminant features. These features are then distributed into either a deep or shallow classifier to differentiate between fake and authentic videos. We thus group methods in this subsection based on the types of classifiers, i.e. 
either deep or shallow.", "id": "89bc57b3-3d0e-4686-9c89-9b9c741caaac", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "406b6759-531d-4af3-af75-8057a4f625fc", "prefix_titles": [ [ "title", "Deep Learning for Deepfakes Creation and Detection: A Survey" ], [ "section", "Deepfake Detection" ], [ "subsection", "Fake Video Detection" ], [ "subsubsection", "Visual Artifacts within Video Frame" ] ], "subsections": [ "c8afcbd9-52a4-4801-8c8d-0c5d0d477915", "b02a12ee-cc0c-427d-b853-a4c928ab1f56" ], "title": "Visual Artifacts within Video Frame" }, { "cite_extract_rate": 0.642857142857142, "cites": [ 5144, 1603, 5145, 7899, 97, 514, 5146, 5147, 306 ], "content": "Deepfake videos are normally created with limited resolutions, which require an affine face warping approach (i.e., scaling, rotation and shearing) to match the configuration of the original ones. Because of the resolution inconsistency between the warped face area and the surrounding context, this process leaves artifacts that can be detected by CNN models such as VGG16 , ResNet50, ResNet101 and ResNet152 . A deep learning method to detect deepfakes based on the artifacts observed during the face warping step of the deepfake generation algorithms was proposed in . The proposed method is evaluated on two deepfake datasets, namely the UADFV and DeepfakeTIMIT. The UADFV dataset contains 49 real videos and 49 fake videos with 32,752 frames in total. The DeepfakeTIMIT dataset includes a set of low quality videos of 64 x 64 size and another set of high quality videos of 128 x 128 with totally 10,537 pristine images and 34,023 fabricated images extracted from 320 videos for each quality set. Performance of the proposed method is compared with other prevalent methods such as two deepfake detection MesoNet methods, i.e. Meso-4 and MesoInception-4 , HeadPose , and the face tampering detection method two-stream NN . 
Advantage of the proposed method is that it needs not to generate deepfake videos as negative examples before training the detection models. Instead, the negative examples are generated dynamically by extracting the face region of the original image and aligning it into multiple scales before applying Gaussian blur to a scaled image of random pick and warping back to the original image. This reduces a large amount of time and computational resources compared to other methods, which require deepfakes are generated in advance.\nNguyen et al. proposed the use of capsule networks for detecting manipulated images and videos. The capsule network was initially introduced to address limitations of CNNs when applied to inverse graphics tasks, which aim to find physical processes used to produce images of the world . The recent development of capsule network based on dynamic routing algorithm demonstrates its ability to describe the hierarchical pose relationships between object parts. This development is employed as a component in a pipeline for detecting fabricated images and videos as demonstrated in Fig. \\ref{fig5}. A dynamic routing algorithm is deployed to route the outputs of the three capsules to the output capsules through a number of iterations to separate between fake and real images. The method is evaluated through four datasets covering a wide range of forged image and video attacks. They include the well-known Idiap Research Institute replay-attack dataset , the deepfake face swapping dataset created by Afchar et al. , the facial reenactment FaceForensics dataset , produced by the Face2Face method , and the fully computer-generated image dataset generated by Rahmouni et al. . The proposed method yields the best performance compared to its competing methods in all of these datasets. 
This shows the potential of the capsule network in building a general detection system that can work effectively for various forged image and video attacks.\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=1\\columnwidth]{Fig8.pdf}\n\\caption{Capsule network takes features obtained from the VGG-19 network to distinguish fake images or videos from the real ones (top). The pre-processing step detects face region and scales it to the size of 128x128 before VGG-19 is used to extract latent features for the capsule network, which comprises three primary capsules and two output capsules, one for real and one for fake images (bottom). The statistical pooling constitutes an important part of capsule network that deals with forgery detection .}\n\\label{fig5} \n\\end{figure}", "id": "c8afcbd9-52a4-4801-8c8d-0c5d0d477915", "level": "paragraph", "origin_cites_number": 14, "parent_id": "89bc57b3-3d0e-4686-9c89-9b9c741caaac", "prefix_titles": [ [ "title", "Deep Learning for Deepfakes Creation and Detection: A Survey" ], [ "section", "Deepfake Detection" ], [ "subsection", "Fake Video Detection" ], [ "subsubsection", "Visual Artifacts within Video Frame" ], [ "paragraph", "Deep classifiers" ] ], "subsections": [], "title": "Deep classifiers" }, { "cite_extract_rate": 0.512820512820512, "cites": [ 1255, 56, 7899, 5148, 5149, 5151, 485, 5145, 895, 97, 514, 7900, 5134, 5153, 306, 7022, 96, 5152, 5144, 5156, 1600, 5155, 1216, 62, 5142, 1003, 150, 1603, 7897, 5150, 5147, 63, 7194, 8317, 3212, 59, 5154, 157, 78, 243 ], "content": "Deepfake detection methods mostly rely on the artifacts or inconsistency of intrinsic features between fake and real images or videos. Yang et al. proposed a detection method by observing the differences between 3D head poses comprising head orientation and position, which are estimated based on 68 facial landmarks of the central face region. The 3D head poses are examined because there is a shortcoming in the deepfake face generation pipeline. 
The extracted features are fed into an SVM classifier to obtain the detection results. Experiments on two datasets show the great performance of the proposed approach against its competing methods. The first dataset, namely UADFV, consists of 49 deep fake videos and their respective real videos . The second dataset comprises 241 real images and 252 deep fake images, which is a subset of data used in the DARPA MediFor GAN Image/Video Challenge . Likewise, a method to exploit artifacts of deepfakes and face manipulations based on visual features of eyes, teeth and facial contours was studied in . The visual artifacts arise from lacking global consistency, wrong or imprecise estimation of the incident illumination, or imprecise estimation of the underlying geometry. For deepfakes detection, missing reflections and missing details in the eye and teeth areas are exploited as well as texture features extracted from the facial region based on facial landmarks. Accordingly, the eye feature vector, teeth feature vector and features extracted from the full-face crop are used. After extracting the features, two classifiers including logistic regression and small neural network are employed to classify the deepfakes from real videos. Experiments carried out on a video dataset downloaded from YouTube show the best result of 0.851 in terms of the area under the receiver operating characteristics curve. The proposed method however has a disadvantage that requires images meeting certain prerequisite such as open eyes or visual teeth.\nThe use of photo response non uniformity (PRNU) analysis was proposed in to detect deepfakes from authentic ones. PRNU is a component of sensor pattern noise, which is attributed to the manufacturing imperfection of silicon wafers and the inconsistent sensitivity of pixels to light because of the variation of the physical characteristics of the silicon wafers. 
The PRNU analysis is widely used in image forensics and was advocated in because the swapped face is supposed to alter the local PRNU pattern in the facial area of video frames. The videos are converted into frames, which are cropped to the questioned facial region. The cropped frames are then separated sequentially into eight groups, and an average PRNU pattern is computed for each group. Normalised cross correlation scores are calculated to compare the PRNU patterns among these groups. A test dataset was created, consisting of 10 authentic videos and 16 manipulated videos, where the fake videos were produced from the genuine ones by the DeepFaceLab tool . The analysis shows a significant statistical difference in mean normalised cross correlation scores between deepfake and genuine videos. This analysis therefore suggests that PRNU has potential in deepfake detection, although a larger dataset would need to be tested. \nWhen viewing a suspicious video or image, users normally want to search for its origin; however, no feasible tool for this currently exists. Hasan and Salah proposed the use of blockchain and smart contracts to help users detect deepfake videos, based on the assumption that videos are only real when their sources are traceable. Each video is associated with a smart contract that links to its parent video, and each parent video has a link to its child in a hierarchical structure. Through this chain, users can credibly trace back to the original smart contract associated with the pristine video even if the video has been copied multiple times. An important attribute of the smart contract is the unique hash of the InterPlanetary File System, which is used to store the video and its metadata in a decentralized and content-addressable manner . 
The smart contract's key features and functionalities are tested against several common security challenges, such as distributed denial-of-service, replay and man-in-the-middle attacks, to ensure the solution meets security requirements. This approach is generic, and it can be extended to other types of digital content, e.g., images, audios and manuscripts.\n\begin{table*}[h!]\n\centering\n\caption{Summary of prominent deepfake detection methods}\n\label{table2}\n\begin{scriptsize}\n\begin{tabularx}{\linewidth}{p{0.11\textwidth} p{0.09\textwidth} p{0.33\textwidth} p{0.04\textwidth} p{0.32\textwidth}}\n\toprule\n\textbf{Methods} & \textbf{Classifiers/\newline Techniques} & \textbf{Key Features} & \textbf{Dealing with} & \textbf{Datasets Used}\\\n\midrule\nEye blinking \n&LRCN\n&- Use LRCN to learn the temporal patterns of eye blinking.\newline\n- Based on the observation that the blinking frequency of deepfakes is much lower than normal.\n&Videos\n&Consists of 49 interview and presentation videos, and their corresponding generated deepfakes.\\\n\hline\nIntra-frame and temporal inconsistencies \n&CNN and LSTM\n&CNN is employed to extract frame-level features, which are passed to LSTM to construct a sequence descriptor useful for classification.\n&Videos\n&A collection of 600 videos obtained from multiple websites.\\\n\hline\nUsing face warping artifacts \n&VGG16 ,\newline ResNet models \n&Artifacts are discovered using CNN models based on resolution inconsistency between the warped face area and the surrounding context.\n&Videos\n&- UADFV , containing 49 real videos and 49 fake videos with 32752 frames in total.\newline\n- DeepfakeTIMIT \\\n\hline\nMesoNet \n&CNN\n&- Two deep networks, i.e. 
Meso-4 and MesoInception-4 are introduced to examine deepfake videos at the mesoscopic analysis level.\newline\n- Accuracies obtained on the deepfake and FaceForensics datasets are 98\% and 95\%, respectively.\n&Videos\n&Two datasets: a deepfake one constituted from online videos and the FaceForensics one created by the Face2Face approach .\\\n\hline\nEye, teeth and facial texture \n&Logistic regression and neural network (NN)\n&- Exploit facial texture differences, and missing reflections and details in eye and teeth areas of deepfakes.\newline\n- Logistic regression and NN are used for classifying.\n&Videos\n&A video dataset downloaded from YouTube.\\\n\hline\nSpatio-temporal features with RCN \n&RCN\n&Temporal discrepancies across frames are explored using an RCN that integrates the convolutional network DenseNet and gated recurrent unit cells \n&Videos\n&FaceForensics++ dataset, including 1,000 videos .\\\n\hline\nSpatio-temporal features with LSTM \n& Convolutional bidirectional recurrent LSTM network\n& - An XceptionNet CNN is used for facial feature extraction while audio embeddings are obtained by stacking multiple convolution modules.\newline\n- Two loss functions, i.e. cross-entropy and Kullback-Leibler divergence, are used.\n& Videos\n& FaceForensics++ and Celeb-DF (5,639 deepfake videos) datasets and the ASVSpoof 2019 Logical Access audio dataset .\\\n\hline\nAnalysis of PRNU \n&PRNU\n&- Analysis of noise patterns of light-sensitive sensors of digital cameras due to their factory defects.\newline\n- Explore the differences of PRNU patterns between authentic and deepfake videos because face swapping is believed to alter the local PRNU patterns.\n&Videos\n&Created by the authors, including 10 authentic and 16 deepfake videos using DeepFaceLab .\\\n\hline\nPhoneme-viseme mismatches \n& CNN\n& - Exploit the mismatches between the dynamics of the mouth shape, i.e. 
visemes, with a spoken phoneme.\newline\n- Focus on sounds associated with the M, B and P phonemes as they require complete mouth closure while deepfakes often incorrectly synthesize it.\n& Videos\n& Four in-the-wild lip-sync deepfakes from Instagram and YouTube (www.instagram.com/bill\_posters\_uk and youtu.be/VWMEDacz3L4) and others created using synthesis techniques, i.e. Audio-to-Video (A2V) and Text-to-Video (T2V) .\\\n\hline\nUsing attribution-based confidence (ABC) metric \n& ResNet50 model , pre-trained on VGGFace2 \n& - The ABC metric is used to detect deepfake videos without access to training data.\newline\n- ABC values obtained for original videos are greater than 0.94 while deepfakes have low ABC values.\n& Videos\n& VidTIMIT and two other original datasets obtained from COHFACE (https://www.idiap.ch/dataset/cohface) and from YouTube. The datasets from COHFACE and YouTube are used to generate two deepfake datasets via the commercial website https://deepfakesweb.com, and another deepfake dataset is DeepfakeTIMIT .\\\n\hline\nUsing appearance and behaviour \n& Rules based on facial and behavioural features.\n& Temporal, behavioural biometrics based on facial expressions and head movements are learned using ResNet-101 while static facial biometrics are obtained using VGG .\n& Videos\n& The world leaders dataset , FaceForensics++ , Google/Jigsaw deepfake detection dataset , DFDC and Celeb-DF .\\\n\hline\nFakeCatcher \n& CNN\n& Extract biological signals in portrait videos and use them as an implicit descriptor of authenticity because they are not spatially and temporally well-preserved in deepfakes.\n& Videos\n& UADFV , FaceForensics , FaceForensics++ , Celeb-DF , and a new dataset of 142 videos, independent of the generative model, resolution, compression, content, and context.\\\n\hline\nEmotion audio-visual affective cues \n& Siamese network \n& Modality and emotion embedding vectors for the face and speech are extracted for 
deepfake detection.\n& Videos\n& DeepfakeTIMIT and DFDC .\\\n\hline\nHead poses \n&SVM\n&- Features are extracted using 68 landmarks of the face region.\newline\n- An SVM is used to classify based on the extracted features.\n&Videos/\newline Images\n&- UADFV consists of 49 deepfake videos and their respective real videos.\newline\n- 241 real images and 252 deepfake images from the DARPA MediFor GAN Image/Video Challenge.\\\n\hline\nCapsule-forensics \n&Capsule networks\n&- Latent features extracted by the VGG-19 network are fed into the capsule network for classification.\newline\n- A dynamic routing algorithm is used to route the outputs of three convolutional capsules to two output capsules, one for fake and another for real images, through a number of iterations.\n&Videos/\newline Images\n&Four datasets: the Idiap Research Institute replay-attack , deepfake face swapping by , facial reenactment FaceForensics , and a fully computer-generated image set using .\\\n\hline\n\end{tabularx}\n\end{scriptsize}\n\end{table*}\n\begin{table*}[h!]\n\centering\n\begin{scriptsize}\n\begin{tabularx}{\linewidth}{p{0.1\textwidth} p{0.1\textwidth} p{0.33\textwidth} p{0.04\textwidth} p{0.32\textwidth}}\n\toprule\n\textbf{Methods} & \textbf{Classifiers/\newline Techniques} & \textbf{Key Features} & \textbf{Dealing with} & \textbf{Datasets Used}\\\n\midrule\nPreprocessing combined with deep network \n&DCGAN,\nWGAN-GP and PGGAN.\n&- Enhance generalization ability of deep learning models to detect GAN-generated images.\newline\n- Remove low-level features of fake images.\newline\n- Force deep networks to focus more on pixel-level similarity between fake and real images to improve generalization ability.\n&Images\n&- Real dataset: CelebA-HQ , including high-quality face images of 1024x1024 resolution.\newline\n- Fake datasets: generated by DCGAN , WGAN-GP and PGGAN .\\\n\hline\nAnalyzing convolutional traces \n& KNN, SVM, and linear discriminant analysis 
(LDA)\n& Using expectation-maximization algorithm to extract local features pertaining to convolutional generative process of GAN-based image deepfake generators.\n& Images\n& Authentic images from CelebA and corresponding deepfakes are created by five different GANs (group-wise deep whitening-and-coloring transformation GDWCT , StarGAN ,\nAttGAN , StyleGAN , StyleGAN2 ).\\\\\n\\hline\nBag of words and shallow classifiers \t\n&SVM, RF, MLP\n&Extract discriminant features using bag of words method and feed these features into SVM, RF and MLP for binary classification: innocent vs fabricated.\n&Images\n&The well-known LFW face database , containing 13,223 images with resolution of 250x250.\\\\\n\\hline\nPairwise learning \n&CNN concatenated to CFFN\n&Two-phase procedure: feature extraction using CFFN based on the Siamese network architecture and classification using CNN.\n&Images\n&- Face images: real ones from CelebA , and fake ones generated by DCGAN , WGAN , WGAN-GP , least squares GAN , and PGGAN .\\newline\n- General images: real ones from ILSVRC12 , and fake ones generated by BIGGAN , self-attention GAN and spectral normalization GAN .\\\\\n\\hline\nDefenses against adversarial perturbations in deepfakes \n& VGG and ResNet \n& - Introduce adversarial perturbations to enhance deepfakes and fool deepfake detectors.\\newline\n- Improve accuracy of deepfake detectors using Lipschitz regularization and deep image prior techniques.\n& Images\n& 5,000 real images from CelebA and 5,000 fake images created by the “Few-Shot Face Translation GAN” method .\\\\\n\\hline\nFace X-ray \n& CNN\n& - Try to locate the blending boundary between the target and original faces instead of capturing the synthesized artifacts of specific manipulations.\\newline\n- Can be trained without fake images.\n& Images\n& FaceForensics++ , DeepfakeDetection (DFD) , DFDC and Celeb-DF .\\\\\n\\hline\nUsing common artifacts of CNN-generated images \n& ResNet-50 \npre-trained with ImageNet \n& Train 
the classifier using a large number of fake images generated by a high-performing unconditional GAN model, i.e., PGGAN and evaluate how well the classifier generalizes to other CNN-synthesized images.\n& Images\n& A new dataset of CNN-generated images, namely ForenSynths, consisting of\nsynthesized images from 11 models such as StyleGAN , super-resolution methods and FaceForensics++ .\\\\\n\\hline\nUsing convolutional traces on GAN-based images \n& KNN, SVM, and LDA\n& Training the expectation-maximization algorithm to detect\nand extract discriminative features via a fingerprint that represents the convolutional traces left by GANs during image generation.\n& Images\n& A dataset of images generated by ten GAN models, including CycleGAN , StarGAN , AttGAN , GDWCT , StyleGAN , StyleGAN2 , PGGAN , FaceForensics++ , IMLE , and\nSPADE .\\\\\n\\hline\nUsing deep features extracted by CNN \n& A new CNN model, namely SCnet\n& The CNN-based SCnet is able to automatically learn high-level forensics features of image data thanks to a hierarchical feature extraction block, which is formed by stacking four convolutional layers.\n& Images\n& A dataset of 321,378 face images, created by applying the Glow model to the CelebA face image dataset .\n\\\\\n\\bottomrule\n\\end{tabularx}\n\\end{scriptsize}\n\\end{table*}", "id": "b02a12ee-cc0c-427d-b853-a4c928ab1f56", "level": "paragraph", "origin_cites_number": 78, "parent_id": "89bc57b3-3d0e-4686-9c89-9b9c741caaac", "prefix_titles": [ [ "title", "Deep Learning for Deepfakes Creation and Detection: A Survey" ], [ "section", "Deepfake Detection" ], [ "subsection", "Fake Video Detection" ], [ "subsubsection", "Visual Artifacts within Video Frame" ], [ "paragraph", "Shallow classifiers" ] ], "subsections": [], "title": "Shallow classifiers" }, { "cite_extract_rate": 0.4, "cites": [ 7901, 5160, 5157, 5158, 5156, 5161, 5162, 5159 ], "content": "\\label{sec:4}\nWith the support of deep learning, deepfakes can be created easier than ever 
before. The spread of such fake content is also quicker thanks to the development of social media platforms . Sometimes deepfakes do not need to be spread to a massive audience to cause detrimental effects. People who create deepfakes with malicious intent only need to deliver them to target audiences as part of their sabotage strategy, without using social media. For example, this approach can be utilized by intelligence services trying to influence decisions made by important people such as politicians, leading to national and international security threats . In response to the alarming deepfake problem, the research community has focused on developing deepfake detection algorithms, and numerous results have been reported. This paper has reviewed the state-of-the-art methods, and a summary of typical approaches is provided in Table \ref{table2}. A battle is clearly growing between those who use advanced machine learning to create deepfakes and those who make the effort to detect them.\nDeepfakes' quality has been increasing, and the performance of detection methods needs to be improved accordingly. The inspiration is that what AI has broken can be fixed by AI as well . Detection methods are still in their early stages, and various methods have been proposed and evaluated, but only on fragmented datasets. One approach to improving detection methods is to create a growing, regularly updated benchmark dataset of deepfakes to validate their ongoing development. This will facilitate the training process of detection models, especially those based on deep learning, which require large training sets. \nImproving the performance of deepfake detection methods is important, especially in cross-forgery and cross-dataset scenarios. Most detection models are designed and evaluated in same-forgery and in-dataset experiments, which do not ensure their generalization capability. 
Some previous studies have addressed this issue, e.g., in , but more work needs to be done in this direction. A model trained on a specific forgery needs to be able to work against another, unknown one because potential deepfake types are not normally known in real-world scenarios. Likewise, current detection methods mostly focus on drawbacks of the deepfake generation pipelines, i.e., finding weaknesses of the competitors to attack them. This kind of information and knowledge is not always available in adversarial environments, where attackers commonly attempt not to reveal their deepfake creation technologies. Recent works on adversarial perturbation attacks to fool DNN-based detectors make the deepfake detection task even more difficult . These are real challenges for detection method development, and future studies need to focus on introducing more robust, scalable and generalizable methods.\nAnother research direction is to integrate detection methods into distribution platforms such as social media to increase their effectiveness in dealing with the widespread impact of deepfakes. A screening or filtering mechanism using effective detection methods can be implemented on these platforms to ease deepfake detection . Legal requirements can be imposed on the tech companies who own these platforms to remove deepfakes quickly and reduce their impact. In addition, watermarking tools can be integrated into the devices that people use to create digital content, producing immutable metadata that stores originality details, such as the time and location of multimedia content, as well as an attestation that the content has not been tampered with . This integration is difficult to implement, but a solution could be the use of the disruptive blockchain technology. The blockchain has been used effectively in many areas, and there are very few studies so far addressing deepfake detection problems based on this technology. 
As it can create a chain of unique, unchangeable blocks of metadata, it is a great tool for digital provenance solutions. The integration of blockchain technologies into this problem has demonstrated certain results, but this research direction is far from mature.\nUsing detection methods to spot deepfakes is crucial, but understanding the real intent of the people publishing deepfakes is even more important. This requires the judgement of users based on the social context in which a deepfake is discovered, e.g. who distributed it and what they said about it . This is critical as deepfakes are getting more and more photorealistic, and it is highly anticipated that detection software will lag behind deepfake creation technology. A study on the social context of deepfakes to assist users in such judgement is thus worth performing. \nVideos and photographs have been widely used as evidence in police investigations and court cases. They may be introduced as evidence in a court of law by digital media forensics experts, who have a background in computer science or law enforcement and experience in collecting, examining and analysing digital information. Machine learning and AI technologies might have been used to modify such digital content, and thus the experts' opinions may not be enough to authenticate this evidence, because even experts are unable to discern manipulated content. This aspect needs to be taken into account in courtrooms nowadays when images and videos are used as evidence to convict perpetrators, because of the existence of a wide range of digital manipulation methods . Digital media forensics results therefore must be proved to be valid and reliable before they can be used in courts. This requires careful documentation of each step of the forensics process and of how the results are reached. 
Machine learning and AI algorithms can be used to support the determination of the authenticity of digital media and have obtained accurate and reliable results, e.g., , but most of these algorithms are unexplainable. This creates a huge hurdle for the application of AI to forensics problems because not only do forensics experts often lack expertise in computer algorithms, but computer professionals also cannot explain the results properly, as most of these algorithms are black-box models . This is even more critical as the most recent models with the most accurate results are based on deep learning methods consisting of many neural network parameters. Researchers have recently attempted to create white-box and explainable detection methods. An example is the approach proposed by in which they use discrete cosine transform statistics to detect so-called specific GAN frequencies to differentiate between real images and deepfakes. Through the analysis of particular frequency statistics, that method can mathematically explain whether a piece of multimedia content is a deepfake and why. More research must be conducted in this area, and explainable AI in computer vision is therefore a research direction that is needed to promote and utilize the advances and advantages of AI and machine learning in digital media forensics.
They could cause distress and harm to those targeted, heighten disinformation and hate speech, and even stimulate political tension, public unrest, violence or war. This is especially critical nowadays as the technologies for creating deepfakes are increasingly approachable and social media platforms can spread such fake content quickly. This survey provides a timely overview of deepfake creation and detection methods and presents a broad discussion on the challenges, potential trends, and future directions in this area. This study will therefore be valuable for the artificial intelligence research community in developing effective methods for tackling deepfakes.\n\section*{Declaration of Competing Interest}\nThe authors declare no conflict of interest.\n\bibliography{cas-refs}\n\end{document}\n\endinput
6
[ 5133, 5128, 5132, 5130, 5131, 5680, 5129, 1602, 986, 1256, 4939, 7893, 5134, 62, 7896, 5138, 5136, 7895, 96, 5137, 150, 157, 7894, 5135, 7892, 1244, 5139, 5140, 7897, 5141, 1603, 895, 56, 5142, 63, 5143, 7194, 8317, 3212, 1003, 485, 1600, 78, 7898, 97, 243, 3758, 5144, 5145, 7899, 514, 5146, 5147, 306, 1255, 5148, 5149, 5151, 7900, 5153, 7022, 5152, 5156, 5155, 1216, 5150, 59, 5154, 7901, 5160, 5157, 5158, 5161, 5162, 5159 ]
1.057986
[ "Stefan Escaida Navarro", "Stephan Mühlbacher-Karrer", "Hosam Alagi", "Hubert Zangl", "Keisuke Koyama", "Björn Hein", "Christian Duriez", "Joshua R. Smith" ]
Proximity Perception in Human-Centered Robotics: A Survey on Sensing Systems and Applications
2021
2021-08-16T16:28:26Z
cs.RO
Proximity perception is a technology that has the potential to play an essential role in the future of robotics. It can fulfill the promise of safe, robust, and autonomous systems in industry and everyday life, alongside humans, as well as in remote locations in space and underwater. In this survey paper, we cover the developments of this field from the early days up to the present, with a focus on human-centered robotics. Here, proximity sensors are typically deployed in two scenarios: first, on the exterior of manipulator arms to support safety and interaction functionality, and second, on the inside of grippers or hands to support grasping and exploration. Starting from this observation, we propose a categorization for the approaches found in the literature. To provide a basis for understanding these approaches, we devote effort to present the technologies and different measuring principles that were developed over the years, also providing a summary in form of a table. Then, we show the diversity of applications that have been presented in the literature. Finally, we give an overview of the most important trends that will shape the future of this domain.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "4528ef20-9412-4b01-95cc-07b7f164956d", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Proximity Perception in Human-Centered Robotics: A Survey on Sensing Systems and Applications " ] ], "subsections": [ "01d49bee-621f-47ab-9ac0-165e2dba2afa", "7c679291-40be-4834-b2fe-70234a403984", "702ec4ea-e939-40cf-abd8-8e5caea8edc6", "55e0e327-ca41-4cdf-b82e-23bda770d2ce", "d1273b1a-f5ad-41b8-be89-3d4b86e92fbe", "46c2dad5-012b-443e-b703-dee40d0ff9dc" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:Intro}\n\\IEEEPARstart{I}{n} the current robotics research landscape, a lot of effort is still dedicated to overcoming the challenges posed by unstructured environments. Areas, such as medicine, health care, agriculture, Industry\\ 4.0, and exploration endeavors in space as well as underwater are awaiting to profit from the robotics technologies currently in development. Furthermore, in terms of unstructured environments or situations, one of the main challenges is to develop robotics technologies that enable a safe and reliable interaction with the human. At the same time, the functionality and intelligence provided by the robot system must justify the investment, meaning its autonomous behavior, oftentimes in environments made for humans, must contribute real value. A hallmark of robust and efficient robot behavior is that task execution does not need to be interrupted in the presence of emergent events. A technology that is capable of addressing these challenges is proximity perception. It has been developed over the years, with first, impactful applications being shown in the late 1980s and early 1990s, which were sparked by seminal developments in robot control. Proximity perception is complementary to the main robotics perceptive modalities of vision and touch. 
Its use is often motivated by closing the perception gap left by occlusions and blind spots as well as by dealing with the pose uncertainty of the robot with respect to objects and its environment. Therefore, one of the big challenges in this domain is to find sensor designs that can coexist with the main existing modalities of vision and touch. \n\begin{figure}[t]\n \centering\n \includegraphics[width=0.45\textwidth]{./Figures/Intro_AT-I.png}\n \caption{A robot skin with proximity sensing capability can be deployed in scenarios with intense human-robot interaction and collaboration. The skin helps close the gap between vision-based perception and tactile/force perception.}\n\label{fig:OcclusionHRI}\n\end{figure}\nIn human-centered robotics, the typical applications of proximity perception can be broadly divided into two categories: the first pertains to a sensitive skin covering the links of a robot manipulator for safety and interaction functionality, which we call \emph{\gls{ATI}}. The second is where a robot gripper or hand is equipped with sensors to support grasping and exploration tasks, which we call \emph{\gls{ATII}}. In Fig.~\ref{fig:OcclusionHRI}, a typical scenario of human-robot interaction and collaboration is illustrated (\gls{ATI}). As the human approaches the robot, the view of the camera monitoring the robot and its workspace becomes increasingly occluded. A tactile skin covering the robot is in general not adequate to handle this perception gap. This is because detecting the human or the environment only when contact is established implies operating the robot at very low velocities, thus undermining the purpose of installing such a system in the first place. To address scenarios like these, a sensitive skin with proximity perception capabilities has been proposed by several authors. In Fig.~\ref{fig:Intro_Preshaping}, a typical scenario for grasping supported by proximity perception is shown (\gls{ATII}). 
Since the robotic hand can detect an object's surface before touch is established, a \emph{pre-touch} closed-loop control can be implemented to adjust the hand posture during this phase. This is called \emph{reactive preshaping} and can also have a diversity of use-cases. In Fig.~\ref{fig:Intro_Preshaping}, the three typical phases of this procedure are shown. Proximity perception also has the potential to play an important role in robotic solutions that are compliant with norms and standards, such as ISO/TS 15066 for the operation of collaborative robots.\n\begin{figure}\n \centering\n \includegraphics[width=0.45\textwidth]{Figures/Intro_AT-II.jpg}\n \caption{Reactive preshaping to an object based on proximity perception has three characteristic phases: (1) object detection by vision and approaching of the hand to the object, (2) detection of the object by the hand's proximity sensors, start of closed-loop control and occlusion of the object in the camera view, and (3) finalized preshaping control, where the fingers and palm of the hand are aligned with the object.}\n\label{fig:Intro_Preshaping}\n\end{figure}\nIn this paper, we want to provide an up-to-date perspective on the field of proximity perception in human-centered robotics as well as an introduction to the principles and technologies developed. Proximity perception in areas such as autonomous vehicles or \gls{UAVs} usually aims at autonomous driving or flying and thus addresses larger distances and speeds and the avoidance of contact and interaction with objects and humans. Recent surveys that include discussions on proximity perception in these domains are~ (millimeter-wave radar), (sensor fusion), both for autonomous driving, and~ for indoor localization of \gls{UAVs}. In human-centered robotics, proximity perception concerns short distances between humans and robots and aims at improving human-robot interaction as well as safety. 
Nonetheless, many ideas presented in this paper can be valid in the automotive domain and for \\gls{UAVs} as well, especially those about the sensing principles and their applicability. We hope to give the readers a starting point to understand the principles and applications of proximity perception that have been developed in human-centered robotics. Furthermore, we want to provide a perspective on what, in our opinion, are the important trends that will shape the developments within the next years.\nThe main contributions of this paper are as follows:\n\\begin{itemize}\n \\item We provide an introduction to the concept of proximity perception and an overview of the possible use-cases. We provide a categorization according to the application types and the complexity of the implemented behavior (Sec.~\\ref{sec:ProximitySensor}). We use this categorization throughout the paper to organize the different works found in the literature.\n \\item We give an introduction to the working principles of the most important proximity sensor designs (capacitive, optical, radar, etc.) and give a review of the related work in the context of robotics (Sec.~\\ref{sec:PhysicalWorkingPrinciples}).\n \\item We cover what use-cases for proximity perception have been studied in robotics research and industrial robotics. Here, we give a detailed account of the two most important basic applications, \\gls{ATI} and \\gls{ATII} (Sec.~\\ref{sec:Applications}, see also Figs.~\\ref{fig:OcclusionHRI},~\\ref{fig:Intro_Preshaping}, and~\\ref{fig:ApplicationsAndBehaviors}). 
We start by giving a historical account and cover everything from the basic forms of behavior to the more advanced, cognitive approaches.\n \item We provide a systematic comparison of the technologies from the field, summarized in Table~\ref{tab:ComparisonYear}.\n \item Finally, we project the current developments into the future and finish the paper with concluding remarks (Sec.~\ref{sec:FuturePerspectives} and \ref{sec:SummaryAndConclusions}).\n\end{itemize}
At the same time, not all of the attributes need to be present at once in a particular case. Thus, proximity sensors provide non-contact detection of objects \emph{and} more often than not\n\begin{itemize}\n \item use active measurement principles, i.\,e. they probe the nearby environment to detect an object's presence, \n \item provide a limited sensing range, where even small detection ranges can be considered useful, \n \item are \emph{skinlike}, i.\,e. they can be deployed on surfaces such as robot arm segments as well as fingers, where they can form a network of sensing elements,\n \item are suitable for being highly integrated into the sensory-motor functionality of the robot, enabling reflex-like behaviors due to low-latency measurements,\n \item are used to handle occlusions in vision systems, i.\,e.\ are complementary to vision,\n \item are used to supervise approaching objects which are bound to enter into contact with the robot, i.\,e.\ are complementary to tactile sensing.\n\end{itemize}\nIn Fig.~\ref{fig:SensingRange}, we propose a definition for the sensing range of a proximity sensor. Any detection distance below $\SI{50}{\centi\meter}$ can be considered to be within the proximity range. This limit is not strict, but in \gls{HRI} and \gls{HRC}, this is an approximate distance at which visual occlusions begin to become problematic. As discussed later in Sec.~\ref{subsec:SafetyConsiderations}, this is a similar range in which monitoring of the separation distance is relevant for compliance with safety standards. At larger distances, i.\,e.\ mid-range and long-range perception, other technologies (LIDAR, long-range stereo vision, etc.) can provide better performance in workspace surveillance or for providing \gls{HRI} functionality. This is especially true because the requirements on reactivity can be relaxed at larger distances.
Furthermore, it is interesting to consider the contributions by anthropologist Edward T.\ Hall, who describes the \emph{intimate space} of humans as part of his studies on \emph{proxemics}~. The intimate space typically starts at a distance of $\SI{45}{\centi\meter}$, which is close to the range proposed above. Therefore, proximity sensing is easy to understand from the perspective of humans, as they can intuitively relate this perception to the ``intimate space'' of the robot by analogy. \nFinally, a distinction can also be made for a range below $\SI{10}{\centi\meter}$ that we call the \emph{pre-touch} range. This is the type of sensing that precedes contact interactions, for instance during grasping. Here, it is especially important to have uninterrupted sensing until contact. Some sensor designs might not feature a long detection range, but the sensing capabilities provided are still useful for closed-loop control of finger and hand posture, which is executed until touch is established. A more in-depth discussion of the available proximity sensing technologies is provided in Sec.~\ref{sec:PhysicalWorkingPrinciples}. In Fig.~\ref{fig:LargeScaleModularSkin}, an example of a modern humanoid robot covered in a multi-modal skin is shown, displaying many of the characteristics discussed in this section.
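The range definition above can be summarized in a short sketch (Python; \texttt{classify\_range} is an illustrative helper of ours, not part of any library, using the $\SI{10}{\centi\meter}$ and $\SI{50}{\centi\meter}$ thresholds proposed here):

```python
def classify_range(distance_m):
    """Illustrative classification of a detection distance (in meters)
    according to the sensing-range definition proposed in this section:
    pre-touch below 0.10 m, proximity below 0.50 m, and mid-/long-range
    beyond that (where LIDAR, stereo vision, etc. perform better)."""
    if distance_m < 0:
        raise ValueError("distance must be non-negative")
    if distance_m < 0.10:
        return "pre-touch"
    if distance_m < 0.50:
        return "proximity"
    return "mid-/long-range"

print(classify_range(0.30))  # proximity
```

Note that the $\SI{50}{\centi\meter}$ boundary is not strict; it merely marks where mid- and long-range technologies typically take over.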
\n\\begin{figure}\n \\centering\n \\footnotesize\n \\fontsize{8}{10}\\selectfont\n \\def\\svgwidth{.4\\textwidth}\n \\input{Figures/Generated/Characterization_SensingRange_svg-tex.pdf_tex}\n \\caption{Definition of the proximity sensing range.}\n \\label{fig:SensingRange}\n\\end{figure}", "id": "a3bf09b4-0e28-4364-af23-7c860d5e9658", "level": "subsection", "origin_cites_number": 1, "parent_id": "7c679291-40be-4834-b2fe-70234a403984", "prefix_titles": [ [ "title", "Proximity Perception in Human-Centered Robotics: A Survey on Sensing Systems and Applications " ], [ "section", "Proximity Sensor: Characterization, Applications and Safety Considerations" ], [ "subsection", "Characterization" ] ], "subsections": [], "title": "Characterization" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec:ApplicationsAndBehaviorTypes}\nTo talk about proximity sensors in human-centered robotics as a whole, it is useful to consider first a categorization of the possible applications and desired behaviors. In the introduction, we already mentioned that a broad classification of applications into two categories is possible: the ones relating to safety and \\gls{HRI}/\\gls{HRC} and the ones relating to preshaping and grasping (Figs.~\\ref{fig:OcclusionHRI} and~\\ref{fig:Intro_Preshaping}). Beyond this, automated behaviors based on proximity sensors can be organized according to their conceptual complexity and how instantaneous their effect is on the movement of the robot. One example for a low-complexity behavior is a \\emph{safety stop}, i.\\,e.\\ enabling the brakes of the robot based on a sensor signal surpassing a threshold value. This behavior is closely tied to the update-rate of the sensors and the low-level robot controller. In that sense, it can be called \\emph{reactive} or \\emph{reflex-like}. Modern collaborative robots, e.\\,g.\\ the Franka Emika Panda~ or the KUKA LBR iiwa~ have control loop cycles of $t_{cl}=1~ms$. 
Thus, the response time of the proximity sensor should ideally satisfy $t_r < t_{cl}$, or come as close to this as possible. An example of a high-complexity behavior is object exploration. It involves managing an object model as well as a planner to complete this model with purposeful exploration steps, resulting in a robot behavior that is executed in several phases and over a longer time compared to the basic control loop cycle times. This behavior is also characterized by being executed at different layers, reaching, as mentioned, up to the planning and cognitive components in the robot's architecture. Fig.~\ref{fig:ApplicationsAndBehaviors} illustrates the categorization of applications and behaviors we propose and provides some examples (not an exhaustive list). As a result, we have a broad classification of applications into two types, \gls{ATI} (left) and \gls{ATII} (right), and behaviors into two types, \gls{BTI} (bottom) and \gls{BTII} (top). In general, \gls{BTI} will appear as subsystems of \gls{BTII}.\n\begin{figure}\n \centering\n \footnotesize\n \includegraphics[width=0.97\linewidth]{Figures/ApplicationsAndBehaviorsV5.pdf}\n \caption{Categorization of some use-cases for proximity sensors in robotics according to the possible application types (\gls{ATI} and \gls{ATII}) and behavior types (\gls{BTI} and \gls{BTII}).}\n \label{fig:ApplicationsAndBehaviors}\n\end{figure}\n\begin{figure}\n \centering\n \fontsize{8}{10}\selectfont\n \def\svgwidth{0.48\textwidth}\n \input{Figures/Generated/Humanoid_Cheng+Modules_svg-tex.pdf_tex}\n \caption{Left: Researchers at the Technical University of Munich, Chair for Cognitive Systems, have developed a multi-modal, modular skin, which they have deployed on their robot H-1. Right: A single cell of the multi-modal sensor. (Left picture \copyright~A.~Eckert / TU M\"unchen; right picture \copyright~2019 IEEE.
Both reprinted with kind permission of the authors.)}\n \label{fig:LargeScaleModularSkin}\n\end{figure}\n\label{subsec:SafetyConsiderations}\nFrom the safety perspective, a proximity sensor deployed on a collaborative robot in an industrial environment has to fulfill the requirements of ISO/TS 15066, complying with ISO 10218 for robots and robotic devices, as well as the performance level defined in the machinery safety standard ISO 13849. Proximity sensing hardware is very well suited to operate a collaborative robot in the \emph{speed and separation monitoring mode} as defined in ISO/TS 15066, monitoring the protective separation distance $S_p$ between a human and the robot's surface.
According to the standard, the following equation has to be fulfilled during this operation mode:\n\begin{equation}\n \label{eq:separationDistance}\n \small\n S_{p}(t=t_0) = v_{h} (T_{r}+T_{s}) + v_{r} T_{r} + S_{s} + C_{i} + Z_{d} + Z_{r},\n\end{equation}\nwhere $v_h$ is the human speed (if not monitored, $v_h=\SI{1.6}{\meter\per\second}$), $T_r$ is the reaction time of the robot, $T_s$ is the robot stopping time, $v_r$ is the robot speed, $S_s$ is the robot stopping distance, $C_i$ is the intrusion distance, $Z_d$ is the position uncertainty of the human and $Z_r$ is the position uncertainty of the robot.\nTo provide an illustrative example for a state-of-the-art collaborative robot, we look at the UR10e series. It has a control loop rate of $\SI{500}{\hertz}$ and the safety parameters can be configured as $v_r = \SI{5}{\meter\per\second}$ (max.\ end-effector speed), $T_r = \SI{4}{\milli\second}$ (two control loop cycles), $T_s = \SI{100}{\milli\second}$, $S_s = \SI{50}{\milli\meter}$, $Z_r = \SI{0.05}{\milli\meter}$.
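As a quick plausibility check, Eq.~(\ref{eq:separationDistance}) can be evaluated directly for these parameters; the following sketch (Python, illustrative only; the function name is ours, and $C_i$ and $Z_d$ are set to zero since they depend on the monitoring sensor) computes the resulting separation distance:

```python
def protective_separation_distance(v_h, v_r, T_r, T_s, S_s,
                                   C_i=0.0, Z_d=0.0, Z_r=0.0):
    """Protective separation distance S_p for the speed and separation
    monitoring mode, cf. Eq. (eq:separationDistance).
    All inputs in SI units (m, s, m/s)."""
    return v_h * (T_r + T_s) + v_r * T_r + S_s + C_i + Z_d + Z_r

# UR10e example parameters from the text:
# v_h = 1.6 m/s (unmonitored human), v_r = 5 m/s, T_r = 4 ms,
# T_s = 100 ms, S_s = 50 mm, Z_r = 0.05 mm; C_i and Z_d omitted.
S_p = protective_separation_distance(v_h=1.6, v_r=5.0, T_r=0.004,
                                     T_s=0.100, S_s=0.050,
                                     Z_r=0.00005)
print(round(S_p, 3))  # 0.236
```

The dominant term for these parameters is $v_h (T_r + T_s) \approx \SI{0.166}{\meter}$, i.\,e.\ the distance the human covers while the robot reacts and stops.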
This configuration results in a separation distance of $S_p = \SI{0.236}{\meter}$, excluding the position uncertainty of the human and the intrusion distance, as these depend on the parameters of the sensor monitoring the area.\n\label{sec:PhysicalWorkingPrinciples}\nIn this section, we will give an introduction to the main physical principles available to implement proximity sensing. The idea is to ease the process of reviewing the articles by starting with an explanation of the basics. This will also help us in the systematization in Table~\ref{tab:ComparisonYear}. There, we use the abbreviations introduced in this section. Here, we already review some representative works in the field, focusing on how the proximity sensing technology is implemented. This section is closely linked to Sec.~\ref{sec:Applications}, where the details of the implemented applications are discussed. We try to cross-reference the most relevant relationships.
However, favoring readability, the cross-referencing is not exhaustive.\n\input{Content/03.01-CapacitiveSensing.tex}\n\input{Content/03.02-OpticalSensing.tex}\n\input{Content/03.03-Radar.tex}\n\input{Content/03.04-OtherSensingPrinciples.tex}\n\input{Content/03.05-Multi-Modal}\n\label{sec:Applications}\nIn this section, we will review the contributions from the field, focusing on the applications and methods presented. We will follow the organization presented in Sec.~\ref{subsec:ApplicationsAndBehaviorTypes} and Fig.~\ref{fig:ApplicationsAndBehaviors}, i.\,e.\ focusing separately on \gls{ATI} and \gls{ATII} and going from low-complexity behaviors (\gls{BTI}) to high-complexity behaviors (\gls{BTII}).\n\input{Content/04.01-CollisionAvoidance.tex}\n\input{Content/04.02-Preshaping.tex}\n\input{Content/04.03-HigherLevelApplications.tex}\n\input{Content/04.04-IndustrialTechnologiesAndSolutions.tex}\n\label{sec:FuturePerspectives}\nIn this section, we provide an outlook for the domain of
human-centered proximity perception in robotics. We have highlighted what we think are \emph{grand challenges} at the end of each of the three sub-sections.\n\label{subsec:HRIInIndustryAndServiceRobotics}\nProximity perception technology is mature enough to be deployed under strict safety requirements, as discussed in Sec.~\ref{subsec:IndustrialTechnologies}. However, today, collaborative automation still struggles to be an interesting value proposition, i.\,e.\ to provide an increase in efficiency large enough to justify the cost of investing in this technology. A value proposition that is more likely to be attractive in the near future is that of \emph{fenceless automation}. Here, humans and robots coexist in the same space but do not necessarily share a task. Value is added, for instance, by the fact that real estate on the shop floor is saved or because robotic automation is now possible in spaces that were previously considered to be too small. Nonetheless, safety certification is still a major challenge that is preventing widespread use of proximity perception. Today, technologies such as the ones discussed in Sec.~\ref{subsec:IndustrialTechnologies} need to be certified on a solution level. Certification at the modular sensor level is not yet possible.
Therefore, technologies such as radar (see Sec.~\ref{subsec:Radar}) can be attractive alternatives for \gls{HRI} in industrial automation. Radar is more likely to achieve a safety rating on a modular level soon, also profiting from all the prior experience coming from the developments in autonomous driving.\nAreas such as medical robotics face similar issues for commercialization. The process of certifying the solutions is long and costly. However, we think that the market for automated solutions in health care or service robotics based on proximity perception is there, especially in scenarios where the human is very close to the robot, preventing the sole use of cameras. In this survey, we discussed examples such as grasping of moving objects and handover~, dressing and washing of patients, or assistive robotics based on teleoperation. Overall, we think that medical and service robotics is a promising field for proximity perception, but more research is needed to establish use-cases and the corresponding methods before commercial exploitation appears on the horizon. Furthermore, the study of highly redundant robotic systems that use proximity sensing is scarce. Meanwhile, the exploitation of kinematic redundancy based on proximity sensor feeds is natural, as discussed in Secs.~\ref{subsubsec:EarlyJacobian} and \ref{subsubsec:RecentJacobian}. This includes the use of proximity perception in unusual areas, such as on the legs or feet of robots. More research, like that done in the group of Prof.\ Cheng~, is needed. In summary, we can say the following about the challenges in these domains:\n\begin{itemize}\n \item Lowering the difficulties in achieving safety ratings for proximity sensing technologies will significantly expand the market for fenceless or collaborative automation.
An effective procedure for safety certification, elaborated by industrial stakeholders and certification organizations, is needed.\n \item Modularized technologies, such as radar chips, can have significant advantages in a certification procedure because solutions can be based on safety-rated components.\n \item In the service, medical and active assisted living domains, proximity perception allows for new ways of interaction between humans and robots, yet more research is needed before commercially viable applications are established.\n \item A better understanding of how highly redundant robotic systems can profit from proximity perception is needed.\n \item The potential of proximity perception for increasing the robustness of grasping of moving objects, for instance during handover tasks or in teleoperation, needs to be established further, for example with conclusive user studies.\n\end{itemize}\n\label{subsec:CognitiveRobotics}\n\begin{figure}\n \centering\n \includegraphics[width=0.95\linewidth]{Figures/Illustration_pointcloudV3.pdf}\n \caption{Proximity perception can play an important role in multi-modal exploration: a) shows a typical scene with objects as captured by an RGBD camera.
b) A perspective shift reveals considerable gaps in the visual perception, with occluded elements that can be explored by contactless and contact-based exploration steps, shown in c) and d), respectively. A multi-modal cognitive model controls the exploration and aggregates the information into the object model, which includes geometric, material-property and fill-level information as well as potential grasping regions obtained from the exploration steps in c) and d). This aggregated information is illustrated in e). The yellow and red colors on the mug (in c)-d)) illustrate regions that are extracted from the proximity and touch-based perception, respectively. }\n\label{fig:futureperspectives}\n\end{figure}\nOne of the hallmarks of cognitive robotics is active perception. It is the paradigm in which perception is enabled by purposeful actions, i.\,e.\ exploration. The active perception principle has been studied in many robotics research papers, as discussed in a survey by Bohg et al.~. Machine learning is essential in active perception for tasks such as object pose estimation or scene labeling, which can be based on proximity perception, e.\,g.~. Currently, a trend in active perception is the combined perception of vision and touch. We think that extending this trend to include proximity sensing will be a relevant research question in the near future. Therefore, we will see haptic exploration strategies that will be able to explore occluded regions of the workspace, combining touch-based and touch-less exploratory movements. We have illustrated this possible multi-modal exploration approach in Fig.~\ref{fig:futureperspectives}. Some aspects of it have already been addressed in the literature discussed in this paper, e.\,g.~. Another important domain in this area is the multi-modal modeling of the human for safe interaction and collaboration.
In summary, regarding the major challenges, we can say the following:\n\begin{itemize}\n \item As we mentioned throughout the paper, there is a need for proximity sensing technologies that can easily be combined with vision and/or touch so they can be deployed together. This will make it attractive to include proximity sensing in established and novel active perception/cognitive approaches.\n \item It can be expected that the trend regarding sim-to-real learning is going to prevail over the next years. Therefore, it is a challenge to implement realistic simulation models for the different measurement principles discussed in Sec.~\ref{sec:PhysicalWorkingPrinciples}. In most cases, current approaches to model them are not viable in terms of their temporal performance. Overcoming this is an important challenge.\n\end{itemize}\n\label{subsec:SoftRobotics}\nWe think that a good portion of the topics relevant for proximity perception will eventually transfer to Soft Robotics. In this way, soft manipulators equipped with proximity sensors can perform collision avoidance in a way analogous to that described in Sec.~\ref{subsec:CollisionAvoidance}, i.\,e.\ respecting task hierarchies. Similarly, soft robots will be able to execute reactive preshaping, grasping, and exploration tasks using proximity sensors. Arguably more so than in other areas, in Soft Robotics it is of interest to study how to purposefully engage in contacts to achieve the desired task.
Proximity perception can help in finding and reaching desired contact states. Overall, the support of simulation frameworks for model-based control and sensing, such as SOFA~, will also be relevant. Therefore, regarding the major challenges in this domain, we can say that:\n\\begin{itemize}\n \\item New control methods must be found for soft robots to integrate the information provided by the proximity sensors and adapt the actuation strategy adequately.\n \\item Integration of proximity sensors in deformable structures represents an important challenge. A significant cross-talk between the global deformation and the detection of tactile and proximity events has to be expected. As stated in the previous section, appropriate models for proximity sensors are needed for sim-to-real learning or for interpreting their signals correctly under deformation using interactive simulations~. \n \\item Except for capacitive sensing, the realization of proximity sensors having deformable or stretchable sensing elements is challenging. Shrinking the size of rigid components, such as ICs, within a stretchable substrate may still allow the realization of deformable circuits, as described e.\\,g. in .\n\\end{itemize}", "id": "c6615d1d-30c1-4597-a3db-191781a41f9f", "level": "subsection", "origin_cites_number": 4, "parent_id": "d1273b1a-f5ad-41b8-be89-3d4b86e92fbe", "prefix_titles": [ [ "title", "Proximity Perception in Human-Centered Robotics: A Survey on Sensing Systems and Applications " ], [ "section", "Future Perspectives: Grand Challenges" ], [ "subsection", "Soft Robotics" ] ], "subsections": [], "title": "Soft Robotics" }, { "cite_extract_rate": 0.037735849056603, "cites": [ 2680, 2679, 2681, 8577 ], "content": "\\label{sec:SummaryAndConclusions}\nIn this paper, we have given an overview of the main aspects of proximity sensing in today's robotic landscape. 
Considering that the field has not seen significant formalization over the years, we provide a basic scheme for categorization of the robotic applications and technologies based on proximity sensors, and we propose a set of traits that characterize proximity sensors in human-centered robotics. We give an account of the existing technologies and the main measurement principles reported by authors since the early 1970s and have organized the technologies in Table~\ref{tab:ComparisonYear}, including characteristics such as sensing range, update rate, sensing element size, etc. We then proceed to detail how the technologies have been used for implementing applications such as collision avoidance and human-robot interaction (\gls{ATI}) as well as preshaping and grasping (\gls{ATII}). We start with the seminal developments of the early years in these domains and cover the progress up to today (2021). The tight integration into the sensory-motor functionality has received constant attention over the years in order to realize highly reactive behavior of robotic systems (\gls{BTI}). Meanwhile, as the area of robotics progresses as a whole, we report that more and more approaches begin to have cognitive aspects in them (\gls{BTII}). \gls{BTII} includes areas such as teleoperation, where the human is in the loop, autonomous object exploration and even bio-inspired approaches that mimic weakly electric fish. \nFinally, Sec.~\ref{sec:FuturePerspectives} is dedicated to summarizing our projections for the field regarding the grand challenges we have identified. We think that as the technology of proximity sensors is reaching the maturity to coexist with tactile and visual perception in terms of integration, costs, and norm conformity, it will be adopted for a variety of solutions, especially those involving interaction with humans.
\n\\section*{Acknowledgment}\nThis work was supported by the Region Hauts-de-France, the project COMOROS (ANR-17-ERC2-0029), the European Regional Development Fund (ERDF) and the project Inventor (I-SITE ULNE, le programme d’Investissements d’Avenir, M\\'etropole Europ\\'eenne de Lille). \nThis work has received funding from the \"K\\\"arntner Wirtschaftsf\\\"orderung Fonds\" (KWF) and the \"European Regional Development Fund\" (EFRE) within the CapSize project 26616/30969/44253. \nThis work also was supported by the Federal Ministry of Education and Research (Bundesministerium f\\\"ur Bildung und Forschung, BMBF) within the project (Verbundprojekt-Nr.: 16SV7823K:). \n The authors would like to thank cartoonist Adriana Filippini for the illustrations. \n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\\bibliographystyle{IEEEtran}\n\\bibliography{ReviewProximityPerception}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./Figures/Profiles/escaidanavarro}}]{Stefan Escaida Navarro}\nreceived his diploma in computer science (Dipl.-Informatiker) from Karlsruhe Institute of Technology (KIT), Germany in 2010. He obtained is PhD degree (Dr.-Ing.) from the same institution in 2016. During his doctoral phase he researched on haptic object recognition and grasping as well as the technology and the applications for capacitive tactile proximity sensors, such as preshaping, collision avoidance and teleoperation. He is currently a postdoctoral researcher at Inria Lille - Nord Europe, France, where he is working on model-based force and shape sensing for Soft Robotics. \n\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./Figures/Profiles/muehlbacherkarrer}}]{Stephan M\\\"{u}hlbacher-Karrer}\nreceived the Dipl.Ing. degree in telematics from the Graz University of Technology (TUG), Graz, Austria, in 2012, where he focused on Autonomous Robots and Telecommunications. 
In 2017, he received his Dr. tech. degree in information and communications engineering from the Alpen-Adria-Universit\"{a}t Klagenfurt, Klagenfurt, Austria, focusing on capacitive sensing for robotic applications.\nFrom 2012 to 2013, he was with Gigatronik Ingolstadt GmbH, Gaimersheim, Germany, working\nfor Audi Electronics Venture GmbH, Gaimersheim, where he was involved in the pre-development for autonomous driving. Since 2017, he has been a senior researcher with JOANNEUM RESEARCH ROBOTICS, Institute for Robotics and Mechatronics, in Klagenfurt, Austria. His current research interests include near-field sensor technology, signal processing, and autonomous robots.\n\end{IEEEbiography}\n\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./Figures/Profiles/alagi}}]{Hosam Alagi}\nreceived his M.Sc. degree in electrical engineering and information technology at the Karlsruhe Institute of Technology in Germany with a specialization in System-on-Chip. Since August 2015, he has been working as a research assistant at the Institute for Anthropomatics and Robotics (IAR) at the Intelligent Process Automation and Robotics Lab (IPR). Focusing on human-robot interaction, he develops capacitive multi-modal sensors and evaluates methods that allow robots to perceive their near environment. Within the scope of his PhD studies, both the sensors and sensor applications are being implemented, addressing the industrial and domestic domains.\n\end{IEEEbiography}\n\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./Figures/Profiles/hein}}]{Bj\"{o}rn Hein}\nstudied electrical engineering with a focus on control theory and received his PhD in 2003 concerning automatic collision-free path planning at the Karlsruhe Institute of Technology (KIT). In May 2010, he finished his post-doctoral lecture qualification in computer science with a focus on human-robot interaction.
From 2012 to 2018, he was professor for Interaction Technologies for Robot Systems at the Institute for Anthropomatics and Robotics (IAR) - Intelligent Process Automation and Robotics Lab (IPR). Currently, he holds a professorship at the University of Applied Sciences in Karlsruhe. His research focus comprises algorithms for collision-free motion planning and path optimization, methods for intuitive and automatic programming of robots, human-robot interaction, and novel sensor technologies for enhancing the capabilities of robot systems.\n\end{IEEEbiography}\n\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./Figures/Profiles/zangl}}]{Hubert Zangl} received the Dipl.Ing. degree in telematics, the Dr. Tech. degree in electrical engineering, and the Venia Docendi degree in sensors and instrumentation from the Graz University of Technology (TUG), Graz, Austria, in 2001, 2005, and 2009, respectively.\nFrom 2010 to 2013, he was an Associate Professor of Sensors and Instrumentation with the Institute of Electrical Measurement and Measurement Signal Processing, TUG. Since 2013, he has been a Professor chairing Sensors and Actuators with the Institute of Smart System Technologies, Alpen-Adria-Universität Klagenfurt, Klagenfurt, Austria. \nHis current research interests include the design and optimization of smart sensors and actuators, robustness and reliability of sensors and actuators, sensor signal processing, autarkic wireless sensors, and energy harvesting, with target applications in the field of IoT and robotics.\n\end{IEEEbiography}\n\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./Figures/Profiles/koyama}}]{Keisuke Koyama} received the B.E., M.E. and Ph.D. in Eng. degrees in mechanical engineering and intelligent systems from the University of Electro-Communications (UEC), in 2013, 2015 and 2017, respectively.
He was a research fellow of the Japan Society for the Promotion of Science (JSPS) from 2015 to 2017. \n He researched high-speed proximity sensors for pre-grasping and the integrated control of multi-degree-of-freedom robot hands and arms. \n From 2017 to 2019, he was a project Assistant Professor, Department of Information Physics and Computing, Graduate School of Information Science and Technology, The University of Tokyo. \n Since 2019, he has been an Assistant Professor, Department of Systems Innovation, Graduate School of Engineering Science, Osaka University. \n He has also been a Visiting Researcher, Department of Information Physics and Computing, Graduate School of Information Science and Technology, The University of Tokyo. \n His current research focuses on high-speed, high-precision proximity sensing for high-speed robotic manipulation and assembly. \n\end{IEEEbiography}\n\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./Figures/Profiles/christian_duriez}}]{Christian Duriez} is Research Director at Inria Lille - Nord Europe.\nHe received the engineering degree from the Institut Catholique d’Arts et Métiers of Lille, France, and a PhD degree in robotics from the University of Evry and CEA, France, in 2004. He had a postdoctoral position at the CIMIT SimGroup in Boston. He arrived at INRIA in 2006 and worked on interactive simulation of deformable objects and haptic rendering with a focus on surgical simulation. He is now the head of the DEFROST team, created in January 2015. In 2018, he was an invited researcher at Stanford University. His research topics include Soft Robot models and control, fast Finite Element Methods, simulation of contact response, and new algorithms for haptics. He participated in the creation of the open-source SOFA framework.
He was also one of the founders of the start-up company InSimo.\n\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./Figures/Profiles/smith}}]{Joshua R. Smith} is the Milton and Delia Zeutschel Professor, jointly appointed in the Allen School of Computer Science and Engineering, and the Department of Electrical Engineering, at the University of Washington, Seattle. He received BA degrees in Computer Science and Philosophy from Williams College in 1991, an MA in Physics from the University of Cambridge in 1997, and SM and PhD degrees in 1995 and 1999 from the MIT Media Lab's Physics and Media Group, where he began working on Electric Field Sensing for input device applications. He began applying these techniques to robotics while he was a researcher at Intel Labs Seattle, between 2004 and 2010. In recent years his group at UW has developed a number of new approaches to proximity perception for robotics.\n\\end{IEEEbiography}\n\\newpage\n\\markboth{APPENDIX to Proximity Perception in Human-centered Robotics - A Survey on Sensing Systems and Applications}\n{APPENDIX to Proximity Perception in Human-centered Robotics - A Survey on Sensing Systems and Applications}\n\\onecolumn\n\\setcounter{page}{1}\n\\begin{center}\n\\Huge{Appendix} \\\\ \\vspace{.5cm}\n\\end{center}\n\\printglossaries\n\\clearpage\n\\newpage\n\\begin{landscape}\n\\begin{center}\n\\large{Overview - Sorted by year - Table I}\n\\end{center}\n\\tiny\n\\begin{doclongtable}{p{0.3cm}p{1.3cm}p{0.8cm}p{0.8cm}p{1.1cm}p{1.3cm}p{1.1cm}p{1.3cm}p{2.1cm}p{1.3cm}p{2.7cm}p{3.5cm}p{1.3cm}} \n\\toprule\nNo. & Measurement Principle & Reference & Year & Reported Min. Working Distance [mm] & Reported Max. 
Range [mm] & Field of View [degree] & Measurement Rate [Hz] & Sensing Element Dimension [$mm$ / $mm^2$ / $mm^3$] & Multiple-Obstacles & Commercially Available Core Components & Categorization & Basis reference \\\\\n\\midrule\n\\endhead\n\\midrule\n\\multicolumn{13}{r}{{Continued on next page}} \\\\\n\\midrule\n\\endfoot\n\\bottomrule\n\\endlastfoot\n1 & O-Tri & & 1973 & 2 & 20 & n/s & n/s & 3x5x5 & not considered & no & AT-II, Manipulation and Grasping & - \\\\\n2 & O-Tri & & 1973 & 2 & 20 & n/s & n/s & 3x5x5 & not considered & no & AT-II, Grasping, Teleoperation & \\\\\n4 & O-Tri & & 1984 & n/s & v1: focal point 4.5 +-0.56 cm, v2: focal poin... & 13 & 1000 & 110 & no & no & AT-II, measure distance, orientation and curva... & - \\\\\n7 & O-RLI & & 1988 & n/s & 250 & 60 & 16 whole skin & min. one sensor pair & yes & no & AT-I, Collision Avoidance & \\\\\n6 & C-M & & 1988 & 40 & 100 & n/s & n/s & 80x210 & not tested & no & AT-I, Obstacle Detection & - \\\\\n8 & O-RLI & & 1989 & n/s & 250 & 60 & 16 whole skin & min. one sensor pair & yes & no & AT-I, Collision Avoidance & \\\\\n9 & A-US & & 1990 & n/s & 200-300 & 180 & n/s & ~20x20x20 & n/s & n/s & AT-II & - \\\\\n10 & C-M & & 1991 & 0 & 304.8 human and alu, 127 graphite lead & n/s & n/s & 355,6x190,5 & no & n/s & AT-I, Collision Avoidance & - \\\\\n13 & O-RLI & & 1992 & n/s & 250 & 60 & 16 whole skin & min. one sensor pair & yes & OD8810 and SFH205 & AT-I, Collision Avoidance & - \\\\\n14 & O-RLI & & 1992 & n/s & 200 & n/s & 30 whole skin & n/s & n/s & no & AT-I, Collision Avoidance & - \\\\\n12 & C-M & & 1992 & n/s & 400 & n/s & 100 & n/s & not tested & n/s & AT-I, Collision Avoidance & - \\\\\n16 & O-RLI & & 1993 & n/s & 200 & n/s & 30 whole skin & n/s & n/s & no & AT-I, Collision Avoidance & \\\\\n17 & O-RLI & & 1993 & n/s & 250 & 60 & 16 whole skin & min. 
one sensor pair & yes & no & AT-I, Collision Avoidance & \\\\\n15 & C-M & & 1993 & n/s & 400 & n/s & 100 & n/s & not tested & n/s & AT-I, Collision Avoidance & \\\\\n19 & C-M & & 1994 & n/s & 400 & n/s & 100 & n/s & not tested & n/s & AT-I, Collision Avoidance & \\\\\n18 & A-US & & 1994 & n/s & n/s & n/s & n/s & n/s & n/s & n/s & AT-II, Contour Following & - \\\\\n20 & O-RLI & & 1996 & 5 & 75 & 100 (+-50) & 300 & ~5x5x13 & no & STM & AT-II, Manipulation and Grasping & - \\\\\n22 & O-RLI & & 1997 & n/s & 40 & 60 (+-30) & 10000 & ~32 & no & no & AT-II, Manipulation and Grasping & - \\\\\n24 & O-RLI & & 1997 & n/s & 200 & 130 (+-65) & n/s & 2020-08-07 00:00:00 & no & no & AT-II, Object Tracking for Grasping & - \\\\\n23 & A-US,O-RLI & & 1997 & 25 (A-US), 50 (O-RLI) & 3000 (A-US), 450 (O-RLI) & n/s & 40 (A-US), 100 (O-RLI) & 63.5x44.5x20 & yes & no & AT-I, Sensor Skin & - \\\\\n25 & O-BB & & 2000 & n/s & n/s & n/s & n/s & n/s & no & no & AT-II, BT-I, Sensing for Preshaping & - \\\\\n27 & C-SE & & 2006 & 0 & n/s & n/s & 200 & 15x5 & no & MC33794 & AT-I, Sensor Skin & - \\\\\n29 & O-RLI & & 2007 & 0 & 50 & 360 & 48 & 1 & yes & no & AT-I, BT-I, Contour Following & - \\\\\n28 & C-M & & 2007 & n/s & 50 & 180 & n/s & ~50 & no & no & AT-II, Sensing for Preshaping & - \\\\\n31 & C-M & & 2008 & n/s & n/s & n/s & 500 & n/s & n/s & no & AT-II, Material recognition & - \\\\\n32 & C-M & & 2008 & n/s & 190 & n/s & 30 & ~50 & n/s & no & AT-II, Proximity Servoing & - \\\\\n33 & O-Tri & & 2009 & 10 & 60 & n/s & 224 & 90 x 90 & yes & TCRT1000 & AT-I, AT-II, Sensor Array & - \\\\\n36 & O-RLI & & 2009 & 2 & 40 & 4 analog sensors per finger = 90 & n/s & 6x3.7x3.7 & n/s & Vishay TCND5000 & AT-II, Preshaping and Grasping & - \\\\\n35 & C-M & & 2009 & n/s & 170 & n/s & n/s & 22x22 & no & no & AT-I, AT-II, Sensor & - \\\\\n38 & C-SE, C-M & & 2010 & n/s & 100 & n/s & 30-100 & 40x40 & yes & no & Preshaping, Grasping, Exploration, Teleoperati... 
& - \\\\\n37 & C-M & & 2010 & 0 & 150 & n/s & 20 (update cycle for control) & Fingertip of Barrett Hand & not tested & no & AT-II, Preshaping, Grasping and Co-manipulation & - \\\\\n40 & C-M & & 2010 & n/s & n/s & n/s & n/s & n/s & yes & BOSCH (complete solution) & AT-I, Sensor & - \\\\\n42 & C-M & & 2010 & 0 & 160 & 180 & 25000 & 220x150 & no & no & Automotive Seat Occupancy & - \\\\\n43 & O-RLI & & 2011 & n/s & 3 & n/s (180) & 1000 & 3x4 & n/s & GP2S60 & AT-I, AT-II, Modular Multi-Modal Skin & - \\\\\n44 & C-M & & 2011 & 0 & ~25 & 360 & ~1000 & whole body & n/s & no & AT-I, AT-II, Bio-inspired & - \\\\\n46 & C-M & & 2011 & 0 & 2000 & 360 & n/s & 30 x 570 & yes & AD7143 & Automotive parking & - \\\\\n48 & O-RLI & & 2012 & 1 & 10 & narrow & n/s & 15x15 & no & ADNS-9500 & AT-II, Slip Detection, Object Reconstruction,... & - \\\\\n49 & A-US & & 2012 & 20 & 400 & 15 & 40 & 45 x 20 x 15 & no & HC-SR04 & AT-II, Legged Climbing Robot & - \\\\\n47 & A-S & & 2012 & n/s & 2020-04-03 00:00:00 & n/s & 20 & 6 & no & yes & AT-II, Reactive Grasping, Object Exploration & - \\\\\n57 & O-Tri & & 2013 & 200 & 1500 & n/s & 250 (20 sensors) & ~20 x 40 & not tested & Sharp GP2Y0A02YK & AT-I, Collision Avoidance & - \\\\\n52 & O-RLI & & 2013 & 10 & 400 & 10 & ~50 & simulated & yes & TCND5000,GP2D120 & AT-I, AT-II, Teleoperation, Shared Autonomy, S... & - \\\\\n59 & O-RLI & & 2013 & 0 & 50 & 90 & 1000 & 22x24x40 & n/s & EE-SY1200 & AT-II, Preshaping & - \\\\\n54 & C-SE, C-M & & 2013 & n/s & 100 & n/s & 30-100 & 40 x 40 & yes & no & AT-II, Multi-Modal Sensor System & - \\\\\n55 & C-SE & & 2013 & n/s & 100 & n/s & 30-100 & 40 x 40 & yes & no & AT-II, Hand and Object Tracking, Collision Pre... 
& \\\\\n51 & C-M & & 2013 & 0 & $>$100 & 180 & 6250 & 1500x150x0.35 & no & no & AT-I, BT-I, Reactive Collision Avoidance & - \\\\\n60 & C-M & & 2013 & 0 & ~25 & 360 & ~1000 & whole body & n/s & no & AT-I, AT-II, Bio-inspired & \\\\\n61 & A-S & & 2013 & n/s & 2020-04-03 00:00:00 & n/s & 20 & 6 & no & yes & AT-II, Reactive Grasping, Object Exploration & \\\\\n63 & O-Tri & & 2014 & 200 & 1500 & n/s & 250 (20 sensors) & ~20x40 & not tested & Sharp GP2Y0A02YK & AT-I, Collision Avoidance & \\\\\n62 & C-SE & & 2014 & n/s & 100 & n/s & 30-100 & 40x40 & yes & no & AT-II, Preshaping, Grasping and Exploration & \\\\\n65 & C-M & & 2014 & 0 & 90 & n/s & n/s & 5x5 & yes & no & Sensor System & - \\\\\n66 & O-RLI & & 2015 & 0 & 20 & n/s & ~1000 & ~3 x 1 & no & KEYENCE fiber optical converter & AT-II, Multi-Modal Sensor System & - \\\\\n67 & O-RLI & & 2015 & 0 & 50 & 90 & 1000 & 22x24x40 & n/s & EE-SY1200 & AT-II & \\\\\n72 & O-RLI & & 2015 & n/s & 300 & n/s & 1000 & 100x100 & not considered & no & AT-I, AT-II & - \\\\\n73 & O-BB & & 2015 & 0 & $>$84 & n/s & n/s & 32x19x6.5 & no & - & AT-II, Preshaping and Grasping & - \\\\\n68 & C-M & & 2015 & 0 & ~250 & 360 & n/s & whole body & no & no & AT-I, AT-II, Bio-Inspired & - \\\\\n69 & C-M & & 2015 & n/s & 100 & n/s & 30-100 & 40x40 & yes & no & AT-II, Teleoperation, Exploration & \\\\\n70 & C-M & & 2015 & 5 & 50 & Cross Section & n/s & 200x200 & yes & no & AT-I, BT-II, Object Detection, Tomography & - \\\\\n71 & C-M & & 2015 & 0 & n/s & 180 & quasi-simultaneously & Meka H2 Finger & no & no & AT-II, BT-II, Active Object Categorization, Gra... 
& - \\\\\n83 & R & & 2016 & n/s & 500 (for given resolution) & 58 (E) 57 (H) & n/s & 13,2 (antenna diameter) & yes & no & BT-II, Object exploration & - \\\\\n76 & O-RLI & & 2016 & 0 & 20 & n/s & ~1000 & ~3 x 1 & no & KEYENCE fiber optical converter & AT-II, Multi-Modal Sensor System & \\\\\n77 & O-RLI & & 2016 & 0 & 50 & 90 & 1000 & 22x24x40 & n/s & EE-SY1200 & AT-II, Preshaping and Grasping & \\\\\n81 & O-RLI & & 2016 & 10 & 400 & 10 & ~50 & simulated & yes & TCND5000,GP2D120 & AT-I, AT-II, Teleoperation, Shared Autonomy, S... & \\\\\n78 & C-SE, I & & 2016 & n/s & 150 & n/s & 5 & 30x30 & n/s & no & AT-I, AT-II, Multi-Modal Sensor System & - \\\\\n80 & C-SE, C-M & & 2016 & 0 & 100 & n/s & 22-380 & 20x20 to 40x40 & yes, but only x/y & no & AT-I, AT-II, Multi-Modal Sensor System & - \\\\\n75 & C-SE & & 2016 & n/s & 100 & n/s & 30-100 & 40x40 & yes & no & AT-I, Contour Following & \\\\\n79 & C-SE & & 2016 & 0 & 350 & n/s & 40 & n/s & yes & MRK-Systeme (complete solution) & AT-II, HRI & - \\\\\n74 & C-M & & 2016 & 0 & 50/100 & 180 & n/s & 100x150 & yes & no & AT-II, Gesture Control, Grasping \\& Object Man... 
& - \\\\\n82 & C-M & & 2016 & 0 & 100 & n/s & 22-380 & 20x20 to 40x40 & yes, but only x/y & no & AT-II, Preshaping and Grasping & \\\\\n86 & O-ToF, C-SE & & 2017 & 10 & 100 & 180 for whole fingertip (6 modules) & 30 & 4.8 x 2.8 x 1 & n/s & VL6180x & AT-II, Object Exploration & - \\\\\n85 & O-ToF & & 2017 & 0 & 70 & 42 & 10 & 4.8 x 2.8 x 1 & not tested & VL6180x ToF sensor & AT-II, Preshaping and Grasping & \\\\\n84 & O-RLI & & 2017 & n/s & 3 & n/s (180) & 1000 & 3x4 & n/s & GP2S60 & AT-II, BT-II & \\\\\n88 & C-SE, I & & 2017 & n/s & 150 & n/s & 5 & 30x30 & n/s & no & AT-I, AT-II, Multi-Modal Sensor System & \\\\\n87 & A-US & & 2017 & n/s & 6000 & 180 & \\~30 & 5.7 x 4.6 & yes & no & AT-II, Navigation & - \\\\\n96 & R & & 2018 & n/s & n/s & n/s & n/s & n/s & yes & yes & AT-II, Grasping & - \\\\\n97 & O-Tri & & 2018 & 2.85 & 20 & 90 & 1000 & 18 x 28.5 x 38.5 & not tested & VSMY1850, TEMD7500X01 & AT-II, Dynamic Grasping & - \\\\\n98 & O-ToF & & 2018 & 5 & 200 & - & 30 & 4.8 x 2.8 x 1 & n/s & VL6180x & AT-II, Object Exploration & - \\\\\n100 & O-ToF & & 2018 & 0 & 70 & 42 & 10 & 4.8 x 2.8 x 1 & not tested & VL6180x & AT-II, Teleoperation & \\\\\n91 & O-RLI, O-ToF & & 2018 & 0 & 150 & n/s & 1000, 100 & 3.2 x 1.9 x 1.1, 4.8 x 2.8 x 1 & no & EE-SY 1200, VL6180x & AT-II, Preshaping and Grasping & - \\\\\n90 & O-RLI & & 2018 & n/s & 3 & n/s (180) & 1000 & 3x4 & n/s & GP2S60 & AT-II, Object Exploration & \\\\\n93 & O-RLI & & 2018 & 5 & 200 possible, 60-70 in their work due to PDMS & 60 & 20 (whole skin) & 4x4 & not tested & Vishay VCNL4010 & AT-I, AT-II, Preshaping, Grasping and Gesture ... & - \\\\\n94 & O-RLI & & 2018 & 5 & 200 possible, 60-70 in their work due to PDMS & 60 & 20 (whole skin) & 4x4 & not tested & Vishay VCNL4010 & AT-I, AT-II, Preshaping, Grasping and Gesture ... 
& - \\\\\n95 & C-SE & & 2018 & 0 & 100 & n/s & 200 & 115 x 85 x 1 & n/s & MPR121 & AT-II, Contour Following, Health Care & - \\\\\n89 & C-M & & 2018 & n/s & 500 & 180 & Capacitive: 1000, ToF: limited by the I2C 400k & 40x40x4 & yes & no & AT-II, Material Recognition & \\\\\n92 & C-M & & 2018 & 0 & 100 & n/s & 22-380 & 20x20 to 40x40 & yes, but only x/y & no & AT-II, Material Recognition & \\\\\n99 & A-US & & 2018 & n/s & n/s & n/s & n/s & n/s & n/s & n/s & AT-II, Teleoperation & - \\\\\n104 & R & & 2019 & 300 & n/s & n/s & n/s & 300 (waveguide length) & yes & no & AT-I & - \\\\\n110 & O-Tri & & 2019 & 2.85 & 20 & 90 & 1000 & 18 x 28.5 x 38.5 & not tested & VSMY1850, TEMD7500X01 & AT-II, Preshaping and Grasping & \\\\\n102 & O-ToF & & 2019 & 0 & 70 & 42 & 10 & 4.8 x 2.8 x 1 & not tested & VL6180x ToF sensor & AT-II, Multi-Modal Sensor System & \\\\\n108 & O-RLI & & 2019 & n/s & 3 & n/s (180) & 1000 & 3x4 & n/s & GP2S60 & AT-I, AT-II, Whole-Body Control & \\\\\n109 & O-RLI & & 2019 & 0 & 50 & 90 & 1000 & 22x24x40 & n/s & EE-SY1200 & AT-II, Preshaping and Grasping & \\\\\n101 & C-SE, O-ToF & & 2019 & 10 & 100 & 180 for whole & 30 & 4.8 x 2.8 x 1 & n/s & VL6180x & AT-I, Multi-Modal Skin & - \\\\\n103 & C-SE & & 2019 & 0 & n/s & n/s & 100 & 30 x 30 & no & Teensy 3.2 & AT-II, Contour Following, Health Care & - \\\\\n106 & C-SE & & 2019 & 0 & 300 & n/s & 125 & 50 x 50 to 100 x 100 & yes & FOGALE Robotics (complete solution) & AT-I, Collision Avoidance & - \\\\\n111 & C-SE & & 2019 & 0 & 50 & 180 & n/s & 5 x 5 & no & no & AT-II, Grasping in Harsh Environments, Industr... 
& - \\\\\n107 & C-M, O-ToF & & 2019 & n/s & 500 & 180 & Capacitive: 1000, ToF: limited by the I2C 400k & 40x40x4 & yes & no & AT-I, Collision Avoidance & \\\\\n105 & A-US & & 2019 & 0 & ~8 & n/s & n/s & ~30x14x14 & no & no & AT-II, Material Recognition & - \\\\\n113 & R & & 2020 & n/s & $>$1200 & 160 & 40 & 9x13 & yes & yes & AT-I, BT-II & - \\\\\n118 & O-ToF, O-RLI & & 2020 & 5 & 200 & - & 30 & 4.8 x 2.8 x 1 & n/s & VL6180x, MAX30105 & AT-II, Object Exploration & - \\\\\n116 & O-ToF & & 2020 & 5 & 200 & - & 30 & 4.8 x 2.8 x 1 & n/s & VL6180x & AT-II, Object Exploration & \\\\\n119 & O-RLI & & 2020 & n/s & n/s & n/s & n/s & 7.1 x 2.75 x 2.7 & no & Broadcom Limited, HSDL-9100-021 & AT-II, Teleoperation, VR, Transcutaneous Elect... & - \\\\\n115 & C-SE, O-ToF & & 2020 & 0 & 350 & 180 & 50 & 27x27 & no & no & AT-II, HRI & - \\\\\n117 & C-SE, I & & 2020 & 0 & 300 & n/s & 100 & 100 x 100 x 2.75 & n/s & no & AT-I, AT-II, Multi-Modal Sensor System & \\\\\n114 & C-SE & & 2020 & 0 & 100 & n/s & 22-380 & 20x20 to 40x40 & yes, but only x/y & no & AT-II, Teleoperation, Tactile Feedback & \\\\\n121 & C-SE & & 2020 & 0 & 350 & 180 & \\~108 & \\~175 x 80 & no & AD7147 & AT-I, Collision Avoidance & \\\\\n112 & C-M, O-ToF & & 2020 & n/s & 500 & 180 & Capacitive: 1000, ToF: limited by the I2C 400k & 40x40x4 & yes & no & AT-I, Collision Avoidance & \\\\\n120 & A-US & & 2020 & n/s & 6000 & 180 & \\~30 & 5.7 x 4.6 & yes & no & AT-II, Navigation & - \\\\\n\\caption{Overview - Sorted by year \\label{tab:ComparisonYear}}\n\\end{doclongtable}\n\\end{landscape}\n\\clearpage\n\\begin{landscape}\n\\begin{center}\n\\large{Overview - Sorted by measurement principle - Table II}\n\\end{center}\n\\tiny\n\\begin{doclongtable}{p{1.3cm}p{0.8cm}p{0.8cm}p{1.1cm}p{1.3cm}p{1.1cm}p{1.3cm}p{2.1cm}p{1.3cm}p{2.7cm}p{3.5cm}p{1.3cm}} \n\\toprule\n & & Year & Reported Min. Working Distance [mm] & Reported Max. 
Range [mm] & Field of View [degree] & Measurement Rate [Hz] & Sensing Element Dimension [$mm$ / $mm^2$ / $mm^3$] & Multiple-Obstacles & Commercially Available Core Components & Categorization & Basis reference \\\\\nMeasurement Principle & Reference & & & & & & & & & & \\\\\n\\midrule\n\\endhead\n\\midrule\n\\multicolumn{12}{r}{{Continued on next page}} \\\\\n\\midrule\n\\endfoot\n\\bottomrule\n\\endlastfoot\n\\multirow{4}{*}{R} & & 2016 & n/s & 500 (for given resolution) & 58 (E) 57 (H) & n/s & 13,2 (antenna diameter) & yes & no & BT-II, Object exploration & - \\\\\n & & 2018 & n/s & n/s & n/s & n/s & n/s & yes & yes & AT-II, Grasping & - \\\\\n & & 2019 & 300 & n/s & n/s & n/s & 300 (waveguide length) & yes & no & AT-I & - \\\\\n & & 2020 & n/s & $>$1200 & 160 & 40 & 9x13 & yes & yes & AT-I, BT-II & - \\\\\n\\cline{1-12}\n\\multirow{8}{*}{O-Tri} & & 1973 & 2 & 20 & n/s & n/s & 3x5x5 & not considered & no & AT-II, Manipulation and Grasping & - \\\\\n & & 1973 & 2 & 20 & n/s & n/s & 3x5x5 & not considered & no & AT-II, Grasping, Teleoperation & \\\\\n & & 1984 & n/s & v1: focal point 4.5 +-0.56 cm, v2: focal poin... & 13 & 1000 & 110 & no & no & AT-II, measure distance, orientation and curva... 
& - \\\\\n & & 2009 & 10 & 60 & n/s & 224 & 90 x 90 & yes & TCRT1000 & AT-I, AT-II, Sensor Array & - \\\\\n & & 2013 & 200 & 1500 & n/s & 250 (20 sensors) & ~20 x 40 & not tested & Sharp GP2Y0A02YK & AT-I, Collision Avoidance & - \\\\\n & & 2014 & 200 & 1500 & n/s & 250 (20 sensors) & ~20x40 & not tested & Sharp GP2Y0A02YK & AT-I, Collision Avoidance & \\\\\n & & 2018 & 2.85 & 20 & 90 & 1000 & 18 x 28.5 x 38.5 & not tested & VSMY1850, TEMD7500X01 & AT-II, Dynamic Grasping & - \\\\\n & & 2019 & 2.85 & 20 & 90 & 1000 & 18 x 28.5 x 38.5 & not tested & VSMY1850, TEMD7500X01 & AT-II, Preshaping and Grasping & \\\\\n\\cline{1-12}\nO-ToF, O-RLI & & 2020 & 5 & 200 & - & 30 & 4.8 x 2.8 x 1 & n/s & VL6180x, MAX30105 & AT-II, Object Exploration & - \\\\\nO-ToF, C-SE & & 2017 & 10 & 100 & 180 for whole fingertip (6 modules) & 30 & 4.8 x 2.8 x 1 & n/s & VL6180x & AT-II, Object Exploration & - \\\\\n\\multirow{5}{*}{O-ToF} & & 2017 & 0 & 70 & 42 & 10 & 4.8 x 2.8 x 1 & not tested & VL6180x ToF sensor & AT-II, Preshaping and Grasping & \\\\\n & & 2018 & 5 & 200 & - & 30 & 4.8 x 2.8 x 1 & n/s & VL6180x & AT-II, Object Exploration & - \\\\\n & & 2018 & 0 & 70 & 42 & 10 & 4.8 x 2.8 x 1 & not tested & VL6180x & AT-II, Teleoperation & \\\\\n & & 2019 & 0 & 70 & 42 & 10 & 4.8 x 2.8 x 1 & not tested & VL6180x ToF sensor & AT-II, Multi-Modal Sensor System & \\\\\n & & 2020 & 5 & 200 & - & 30 & 4.8 x 2.8 x 1 & n/s & VL6180x & AT-II, Object Exploration & \\\\\n\\cline{1-12}\nO-RLI, O-ToF & & 2018 & 0 & 150 & n/s & 1000, 100 & 3.2 x 1.9 x 1.1, 4.8 x 2.8 x 1 & no & EE-SY 1200, VL6180x & AT-II, Preshaping and Grasping & - \\\\\n\\multirow{28}{*}{O-RLI} & & 1988 & n/s & 250 & 60 & 16 whole skin & min. one sensor pair & yes & no & AT-I, Collision Avoidance & \\\\\n & & 1989 & n/s & 250 & 60 & 16 whole skin & min. one sensor pair & yes & no & AT-I, Collision Avoidance & \\\\\n & & 1992 & n/s & 250 & 60 & 16 whole skin & min. 
one sensor pair & yes & OD8810 and SFH205 & AT-I, Collision Avoidance & - \\\\\n & & 1992 & n/s & 200 & n/s & 30 whole skin & n/s & n/s & no & AT-I, Collision Avoidance & - \\\\\n & & 1993 & n/s & 200 & n/s & 30 whole skin & n/s & n/s & no & AT-I, Collision Avoidance & \\\\\n & & 1993 & n/s & 250 & 60 & 16 whole skin & min. one sensor pair & yes & no & AT-I, Collision Avoidance & \\\\\n & & 1996 & 5 & 75 & 100 (+-50) & 300 & ~5x5x13 & no & STM & AT-II, Manipulation and Grasping & - \\\\\n & & 1997 & n/s & 40 & 60 (+-30) & 10000 & ~32 & no & no & AT-II, Manipulation and Grasping & - \\\\\n & & 1997 & n/s & 200 & 130 (+-65) & n/s & 2020-08-07 00:00:00 & no & no & AT-II, Object Tracking for Grasping & - \\\\\n & & 2007 & 0 & 50 & 360 & 48 & 1 & yes & no & AT-I, BT-I, Contour Following & - \\\\\n & & 2009 & 2 & 40 & 4 analog sensors per finger = 90 & n/s & 6x3.7x3.7 & n/s & Vishay TCND5000 & AT-II, Preshaping and Grasping & - \\\\\n & & 2011 & n/s & 3 & n/s (180) & 1000 & 3x4 & n/s & GP2S60 & AT-I, AT-II, Modular Multi-Modal Skin & - \\\\\n & & 2012 & 1 & 10 & narrow & n/s & 15x15 & no & ADNS-9500 & AT-II, Slip Detection, Object Reconstruction,... & - \\\\\n & & 2013 & 10 & 400 & 10 & ~50 & simulated & yes & TCND5000,GP2D120 & AT-I, AT-II, Teleoperation, Shared Autonomy, S... 
& - \\\\\n & & 2013 & 0 & 50 & 90 & 1000 & 22x24x40 & n/s & EE-SY1200 & AT-II, Preshaping & - \\\\\n & & 2015 & 0 & 20 & n/s & ~1000 & ~3 x 1 & no & KEYENCE fiber optical converter & AT-II, Multi-Modal Sensor System & - \\\\\n & & 2015 & 0 & 50 & 90 & 1000 & 22x24x40 & n/s & EE-SY1200 & AT-II & \\\\\n & & 2015 & n/s & 300 & n/s & 1000 & 100x100 & not considered & no & AT-I, AT-II & - \\\\\n & & 2016 & 0 & 20 & n/s & ~1000 & ~3 x 1 & no & KEYENCE fiber optical converter & AT-II, Multi-Modal Sensor System & \\\\\n & & 2016 & 0 & 50 & 90 & 1000 & 22x24x40 & n/s & EE-SY1200 & AT-II, Preshaping and Grasping & \\\\\n & & 2016 & 10 & 400 & 10 & ~50 & simulated & yes & TCND5000,GP2D120 & AT-I, AT-II, Teleoperation, Shared Autonomy, S... & \\\\\n & & 2017 & n/s & 3 & n/s (180) & 1000 & 3x4 & n/s & GP2S60 & AT-II, BT-II & \\\\\n & & 2018 & n/s & 3 & n/s (180) & 1000 & 3x4 & n/s & GP2S60 & AT-II, Object Exploration & \\\\\n & & 2018 & 5 & 200 possible, 60-70 in their work due to PDMS & 60 & 20 (whole skin) & 4x4 & not tested & Vishay VCNL4010 & AT-I, AT-II, Preshaping, Grasping and Gesture ... & - \\\\\n & & 2018 & 5 & 200 possible, 60-70 in their work due to PDMS & 60 & 20 (whole skin) & 4x4 & not tested & Vishay VCNL4010 & AT-I, AT-II, Preshaping, Grasping and Gesture ... & - \\\\\n & & 2019 & n/s & 3 & n/s (180) & 1000 & 3x4 & n/s & GP2S60 & AT-I, AT-II, Whole-Body Control & \\\\\n & & 2019 & 0 & 50 & 90 & 1000 & 22x24x40 & n/s & EE-SY1200 & AT-II, Preshaping and Grasping & \\\\\n & & 2020 & n/s & n/s & n/s & n/s & 7.1 x 2.75 x 2.7 & no & Broadcom Limited, HSDL-9100-021 & AT-II, Teleoperation, VR, Transcutaneous Elect... 
& - \\\\\n\\cline{1-12}\n\\multirow{2}{*}{O-BB} & & 2000 & n/s & n/s & n/s & n/s & n/s & no & no & AT-II, BT-I, Sensing for Preshaping & - \\\\\n & & 2015 & 0 & $>$84 & n/s & n/s & 32x19x6.5 & no & - & AT-II, Preshaping and Grasping & - \\\\\n\\cline{1-12}\n\\multirow{2}{*}{C-SE, O-ToF} & & 2019 & 10 & 100 & 180 for whole & 30 & 4.8 x 2.8 x 1 & n/s & VL6180x & AT-I, Multi-Modal Skin & - \\\\\n & & 2020 & 0 & 350 & 180 & 50 & 27x27 & no & no & AT-II, HRI & - \\\\\n\\cline{1-12}\n\\multirow{3}{*}{C-SE, I} & & 2016 & n/s & 150 & n/s & 5 & 30x30 & n/s & no & AT-I, AT-II, Multi-Modal Sensor System & - \\\\\n & & 2017 & n/s & 150 & n/s & 5 & 30x30 & n/s & no & AT-I, AT-II, Multi-Modal Sensor System & \\\\\n & & 2020 & 0 & 300 & n/s & 100 & 100 x 100 x 2.75 & n/s & no & AT-I, AT-II, Multi-Modal Sensor System & \\\\\n\\cline{1-12}\n\\multirow{3}{*}{C-SE, C-M} & & 2010 & n/s & 100 & n/s & 30-100 & 40x40 & yes & no & Preshaping, Grasping, Exploration, Teleoperati... & - \\\\\n & & 2013 & n/s & 100 & n/s & 30-100 & 40 x 40 & yes & no & AT-II, Multi-Modal Sensor System & - \\\\\n & & 2016 & 0 & 100 & n/s & 22-380 & 20x20 to 40x40 & yes, but only x/y & no & AT-I, AT-II, Multi-Modal Sensor System & - \\\\\n\\cline{1-12}\n\\multirow{11}{*}{C-SE} & & 2006 & 0 & n/s & n/s & 200 & 15x5 & no & MC33794 & AT-I, Sensor Skin & - \\\\\n & & 2013 & n/s & 100 & n/s & 30-100 & 40 x 40 & yes & no & AT-II, Hand and Object Tracking, Collision Pre... 
& \\\\\n & & 2014 & n/s & 100 & n/s & 30-100 & 40x40 & yes & no & AT-II, Preshaping, Grasping and Exploration & \\\\\n & & 2016 & n/s & 100 & n/s & 30-100 & 40x40 & yes & no & AT-I, Contour Following & \\\\\n & & 2016 & 0 & 350 & n/s & 40 & n/s & yes & MRK-Systeme (complete solution) & AT-II, HRI & - \\\\\n & & 2018 & 0 & 100 & n/s & 200 & 115 x 85 x 1 & n/s & MPR121 & AT-II, Contour Following, Health Care & - \\\\\n & & 2019 & 0 & n/s & n/s & 100 & 30 x 30 & no & Teensy 3.2 & AT-II, Contour Following, Health Care & - \\\\\n & & 2019 & 0 & 300 & n/s & 125 & 50 x 50 to 100 x 100 & yes & FOGALE Robotics (complete solution) & AT-I, Collision Avoidance & - \\\\\n & & 2019 & 0 & 50 & 180 & n/s & 5 x 5 & no & no & AT-II, Grasping in Harsh Environments, Industr... & - \\\\\n & & 2020 & 0 & 100 & n/s & 22-380 & 20x20 to 40x40 & yes, but only x/y & no & AT-II, Teleoperation, Tactile Feedback & \\\\\n & & 2020 & 0 & 350 & 180 & \\~108 & \\~175 x 80 & no & AD7147 & AT-I, Collision Avoidance & \\\\\n\\cline{1-12}\nC-M, O-ToF & & 2020 & n/s & 500 & 180 & Capacitive: 1000, ToF: limited by the I2C 400k & 40x40x4 & yes & no & AT-I, Collision Avoidance & \\\\\nC-M, O-ToF & & 2019 & n/s & 500 & 180 & Capacitive: 1000, ToF: limited by the I2C 400k & 40x40x4 & yes & no & AT-I, Collision Avoidance & \\\\\n\\multirow{25}{*}{C-M} & & 1988 & 40 & 100 & n/s & n/s & 80x210 & not tested & no & AT-I, Obstacle Detection & - \\\\\n & & 1991 & 0 & 304.8 human and alu, 127 graphite lead & n/s & n/s & 355,6x190,5 & no & n/s & AT-I, Collision Avoidance & - \\\\\n & & 1992 & n/s & 400 & n/s & 100 & n/s & not tested & n/s & AT-I, Collision Avoidance & - \\\\\n & & 1993 & n/s & 400 & n/s & 100 & n/s & not tested & n/s & AT-I, Collision Avoidance & \\\\\n & & 1994 & n/s & 400 & n/s & 100 & n/s & not tested & n/s & AT-I, Collision Avoidance & \\\\\n & & 2007 & n/s & 50 & 180 & n/s & ~50 & no & no & AT-II, Sensing for Preshaping & - \\\\\n & & 2008 & n/s & n/s & n/s & 500 & n/s & n/s & no & AT-II, Material 
recognition & - \\\\\n & & 2008 & n/s & 190 & n/s & 30 & ~50 & n/s & no & AT-II, Proximity Servoing & - \\\\\n & & 2009 & n/s & 170 & n/s & n/s & 22x22 & no & no & AT-I, AT-II, Sensor & - \\\\\n & & 2010 & 0 & 150 & n/s & 20 (update cycle for control) & Fingertip of Barrett Hand & not tested & no & AT-II, Preshaping, Grasping and Co-manipulation & - \\\\\n & & 2010 & n/s & n/s & n/s & n/s & n/s & yes & BOSCH (complete solution) & AT-I, Sensor & - \\\\\n & & 2010 & 0 & 160 & 180 & 25000 & 220x150 & no & no & Automotive Seat Occupancy & - \\\\\n & & 2011 & 0 & ~25 & 360 & ~1000 & whole body & n/s & no & AT-I, AT-II, Bio-inspired & - \\\\\n & & 2011 & 0 & 2000 & 360 & n/s & 30 x 570 & yes & AD7143 & Automotive parking & - \\\\\n & & 2013 & 0 & $>$100 & 180 & 6250 & 1500x150x0.35 & no & no & AT-I, BT-I, Reactive Collision Avoidance & - \\\\\n & & 2013 & 0 & ~25 & 360 & ~1000 & whole body & n/s & no & AT-I, AT-II, Bio-inspired & \\\\\n & & 2014 & 0 & 90 & n/s & n/s & 5x5 & yes & no & Sensor System & - \\\\\n & & 2015 & 0 & ~250 & 360 & n/s & whole body & no & no & AT-I, AT-II, Bio-Inspired & - \\\\\n & & 2015 & n/s & 100 & n/s & 30-100 & 40x40 & yes & no & AT-II, Teleoperation, Exploration & \\\\\n & & 2015 & 5 & 50 & Cross Section & n/s & 200x200 & yes & no & AT-I, BT-II, Object Detection, Tomography & - \\\\\n & & 2015 & 0 & n/s & 180 & quasi-simultaneously & Meka H2 Finger & no & no & AT-II, BT-II, Active Object Categorization, Gra... & - \\\\\n & & 2016 & 0 & 50/100 & 180 & n/s & 100x150 & yes & no & AT-II, Gesture Control, Grasping \\& Object Man... 
& - \\\\\n & & 2016 & 0 & 100 & n/s & 22-380 & 20x20 to 40x40 & yes, but only x/y & no & AT-II, Preshaping and Grasping & \\\\\n & & 2018 & n/s & 500 & 180 & Capacitive: 1000, ToF: limited by the I2C 400k & 40x40x4 & yes & no & AT-II, Material Recognition & \\\\\n & & 2018 & 0 & 100 & n/s & 22-380 & 20x20 to 40x40 & yes, but only x/y & no & AT-II, Material Recognition & \\\\\n\\cline{1-12}\nA-US,O-RLI & & 1997 & 25 (A-US), 50 (O-RLI) & 3000 (A-US), 450 (O-RLI) & n/s & 40 (A-US), 100 (O-RLI) & 63.5x44.5x20 & yes & no & AT-I, Sensor Skin & - \\\\\n\\multirow{7}{*}{A-US} & & 1990 & n/s & 200-300 & 180 & n/s & ~20x20x20 & n/s & n/s & AT-II & - \\\\\n & & 1994 & n/s & n/s & n/s & n/s & n/s & n/s & n/s & AT-II, Contour Following & - \\\\\n & & 2012 & 20 & 400 & 15 & 40 & 45 x 20 x 15 & no & HC-SR04 & AT-II, Legged Climbing Robot & - \\\\\n & & 2017 & n/s & 6000 & 180 & \\~30 & 5.7 x 4.6 & yes & no & AT-II, Navigation & - \\\\\n & & 2018 & n/s & n/s & n/s & n/s & n/s & n/s & n/s & AT-II, Teleoperation & - \\\\\n & & 2019 & 0 & ~8 & n/s & n/s & ~30x14x14 & no & no & AT-II, Material Recognition & - \\\\\n & & 2020 & n/s & 6000 & 180 & \\~30 & 5.7 x 4.6 & yes & no & AT-II, Navigation & - \\\\\n\\cline{1-12}\n\\multirow{2}{*}{A-S} & & 2012 & n/s & 2020-04-03 00:00:00 & n/s & 20 & 6 & no & yes & AT-II, Reactive Grasping, Object Exploration & - \\\\\n & & 2013 & n/s & 2020-04-03 00:00:00 & n/s & 20 & 6 & no & yes & AT-II, Reactive Grasping, Object Exploration & \\\\\n\\cline{1-12}\n
& - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & 
- & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - 
& - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & 
- & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - 
\\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - 
& - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & 
- & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - 
& - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & 
- & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - 
\\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - 
& - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & 
- & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - 
& - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & 
- & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n & - & - & - & - & - & - & - & - & - & - & - 
\\\\\n & - & - & - & - & - & - & - & - & - & - & - \\\\\n\\end{longtable}\n\\end{landscape}\n\\tableofcontents\n\\end{document}", "id": "46c2dad5-012b-443e-b703-dee40d0ff9dc", "level": "section", "origin_cites_number": 106,
"parent_id": "4528ef20-9412-4b01-95cc-07b7f164956d", "prefix_titles": [ [ "title", "Proximity Perception in Human-Centered Robotics: A Survey on Sensing Systems and Applications " ], [ "section", "Summary and Conclusions" ] ], "subsections": [], "title": "Summary and Conclusions" } ]
85
[ 2680, 2679, 2681, 2683, 2682, 8577 ]
1.29486
[ "Tianyang Lin", "Yuxin Wang", "Xiangyang Liu", "Xipeng Qiu" ]
A Survey of Transformers
2021
2021-06-08T17:43:08Z
cs.LG
Transformers have achieved great success in many artificial intelligence fields, such as natural language processing, computer vision, and audio processing. Therefore, it is natural to attract lots of interest from academic and industry researchers. Up to the present, a great variety of Transformer variants (a.k.a. X-formers) have been proposed, however, a systematic and comprehensive literature review on these Transformer variants is still missing. In this survey, we provide a comprehensive review of various X-formers. We first briefly introduce the vanilla Transformer and then propose a new taxonomy of X-formers. Next, we introduce the various X-formers from three perspectives: architectural modification, pre-training, and applications. Finally, we outline some potential directions for future research.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "456ca844-322a-4fcf-90ea-531b14db9d9e", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey of Transformers" ] ], "subsections": [ "48308b43-2cfe-4236-84fb-c7cd67559d21", "07bae4f8-4d29-4fbd-aec3-ca99cfb7bc62", "aa12aeb0-6a41-4c9b-bfb0-68b4522245cd", "57f59954-45c7-48ac-baf2-dc92c147a85e", "7a323139-7d7b-422d-a6c6-3617457ad2f6", "04795308-eff0-4fbb-962f-a8077d36f894", "b7522af4-45ec-452c-b804-11b6882e6225", "b3d753a5-00c0-4eb4-b313-01e490dd9d3d", "ec81ede4-3b9c-44b8-96e2-94e0f2e4f760" ], "title": "root" }, { "cite_extract_rate": 0.7272727272727271, "cites": [ 7360, 1447, 1445, 7052, 2401, 732, 1446, 38 ], "content": "\\label{sec:intro}\nTransformer~ is a prominent deep learning model that has been widely adopted in various fields, such as natural language processing (NLP), computer vision (CV) and speech processing. Transformer was originally proposed as a sequence-to-sequence model~ for machine translation. Later works show that Transformer-based pre-trained models (PTMs)~ can achieve \\textit{state-of-the-art} performances on various tasks. As a consequence, Transformer has become the go-to architecture in NLP, especially for PTMs. In addition to language related applications, Transformer has also been adopted in CV~, audio processing~ and even other disciplines, such as chemistry~ and life sciences~.\nDue to the success, a variety of Transformer variants (a.k.a. X-formers) have been proposed over the past few years.\nThese X-formers improve the vanilla Transformer from different perspectives.\n\\begin{enumerate}\n \\item \\textit{Model Efficiency}. A key challenge of applying Transformer is its inefficiency at processing long sequences mainly due to the computation and memory complexity of the self-attention module.\nThe improvement methods include lightweight attention (e.g. 
sparse attention variants) and divide-and-conquer methods (e.g., recurrent and hierarchical mechanisms).\n \\item \\textit{Model Generalization}. Since the Transformer is a flexible architecture and makes few assumptions on the structural bias of input data, it is hard to train on small-scale data. The improvement methods include introducing structural bias or regularization, pre-training on large-scale unlabeled data, etc.\n \\item \\textit{Model Adaptation}. This line of work aims to adapt the Transformer to specific downstream tasks and applications.\n\\end{enumerate}\nIn this survey, we aim to provide a comprehensive review of the Transformer and its variants. Although we can organize X-formers on the basis of the perspectives mentioned above, many existing X-formers may address one or several issues. For example, sparse attention variants not only reduce the computational complexity but also introduce a structural prior on input data to alleviate the overfitting problem on small datasets. Therefore, it is more methodical to categorize the various existing X-formers and propose a new taxonomy mainly according to their ways to improve the vanilla Transformer: architecture modification, pre-training, and applications.\nConsidering the audience of this survey may be from different domains, we mainly focus on the general architecture variants and just briefly discuss the specific variants on pre-training and applications.\nThe rest of the survey is organized as follows. Sec.~\\ref{sec:background} introduces the architecture and the key components of Transformer. Sec.~\\ref{sec:taxonomy} clarifies the categorization of Transformer variants. Sec.~\\ref{sec:attention}$\\sim$\\ref{sec:other_module} review the module-level modifications, including attention module, position encoding, layer normalization and feed-forward layer. Sec.~\\ref{sec:beyond} reviews the architecture-level variants. Sec.~\\ref{sec:ptm} introduces some of the representative Transformer-based PTMs. 
Sec.~\\ref{sec:app} introduces the application of Transformer to various different fields. Sec.~\\ref{sec:discussion} discusses some aspects of Transformer that researchers might find intriguing and summarizes the paper.", "id": "48308b43-2cfe-4236-84fb-c7cd67559d21", "level": "section", "origin_cites_number": 11, "parent_id": "456ca844-322a-4fcf-90ea-531b14db9d9e", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:background}", "id": "07bae4f8-4d29-4fbd-aec3-ca99cfb7bc62", "level": "section", "origin_cites_number": 0, "parent_id": "456ca844-322a-4fcf-90ea-531b14db9d9e", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Background" ] ], "subsections": [ "cbbc7151-fd9c-4d11-a9dd-a0790a09698d", "89fa15e6-1537-4f12-a5ac-bdd5d071bfac", "3ddd6e3c-d4a1-4bce-b510-16dfc75582ed", "f0fa958a-db02-433c-858f-f7024645f38d" ], "title": "Background" }, { "cite_extract_rate": 1, "cites": [ 97, 57, 38 ], "content": "\\label{sec:vanilla_xformer}\nThe vanilla Transformer~ is a sequence-to-sequence model and consists of an encoder and a decoder, each of which is a stack of $L$ identical blocks. Each \\textit{encoder block} is mainly composed of a multi-head self-attention module and a position-wise feed-forward network (FFN).\nFor building a deeper model, a residual connection~ is employed around each module, followed by Layer Normalization~ module.\nCompared to the encoder blocks, decoder blocks additionally insert cross-attention modules between the multi-head self-attention modules and the position-wise FFNs. Furthermore, the self-attention modules in the decoder are adapted to prevent each position from attending to subsequent positions.\nThe overall architecture of the vanilla Transformer is shown in Fig. 
\\ref{fig:xformer_arch}.\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=0.7\\linewidth]{assets/xformer.pdf}\n \\caption{Overview of vanilla Transformer architecture}\n \\label{fig:xformer_arch}\n\\end{figure}\nIn the following subsection, we shall introduce the key modules of the vanilla Transformer.", "id": "cbbc7151-fd9c-4d11-a9dd-a0790a09698d", "level": "subsection", "origin_cites_number": 3, "parent_id": "07bae4f8-4d29-4fbd-aec3-ca99cfb7bc62", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Background" ], [ "subsection", "Vanilla Transformer" ] ], "subsections": [ "625a964c-dc0c-4228-a0bd-e5fba16df1f2", "fdfb7e39-e1e9-4079-9037-b8bc17e8bb70", "0c6acb39-f7c8-4d89-9969-16b6e42547a1", "64d291c3-febe-419e-831a-4ec33ca28ce7" ], "title": "Vanilla Transformer" }, { "cite_extract_rate": 0, "cites": [], "content": "Transformer adopts attention mechanism with Query-Key-Value (QKV) model. Given the packed matrix representations of queries $\\bQ\\in\\mathbb{R}^{N\\times D_k}$, keys $\\bK\\in\\mathbb{R}^{M\\times D_k}$, and values $\\bV\\in\\mathbb{R}^{M\\times D_v}$, the scaled dot-product attention used by Transformer is given by\\footnote{if not stated otherwise, we use row-major notations throughout this survey (e.g., the $i$-th row in $\\bQ$ is the query $\\bq_i$) and all the vectors are row vectors by default.}\n\\begin{equation}\\label{attention}\n \\mathrm{Attention}(\\bQ, \\bK, \\bV)=\\mathrm{softmax}\\left(\\frac{\\bQ\\bK^\\top}{\\sqrt{D_k}}\\right)\\bV=\\bA\\bV,\n\\end{equation}\nwhere $N$ and $M$ denote the lengths of queries and keys (or values); $D_k$ and $D_v$ denote the dimensions of keys (or queries) and values; $\\bA=\\mathrm{softmax}\\left(\\frac{\\bQ\\bK^\\top}{\\sqrt{D_k}}\\right)$ is often called \\textit{attention matrix}; softmax is applied in a row-wise manner. 
The dot-products of queries and keys are divided by $\\sqrt{D_k}$ to alleviate the gradient vanishing problem of the softmax function.\nInstead of simply applying a single attention function, Transformer uses multi-head attention, where the $D_m$-dimensional original queries, keys and values are projected into $D_k$, $D_k$ and $D_v$ dimensions, respectively, with $H$ different sets of learned projections. For each of the projected queries, keys and values, an output is computed with attention according to Eq. \\eqref{attention}. The model then concatenates all the outputs and projects them back to a $D_m$-dimensional representation.\n\\begin{align}\n \\mathrm{MultiHeadAttn}(\\bQ,\\bK,\\bV)&=\\mathrm{Concat}(\\mathrm{head}_1,\\cdots,\\mathrm{head}_H)\\bW^O\\label{eq:multihead},\\\\\n \\mathrm{where}\\ \\mathrm{head}_i&=\\mathrm{Attention}(\\bQ \\bW_i^Q,\\bK \\bW_i^K, \\bV \\bW_i^V)\\label{eq:headi}.\n\\end{align}\nIn Transformer, there are three types of attention in terms of the source of queries and key-value pairs:\n\\begin{itemize}\n \\item \\textit{Self-attention}. In the Transformer encoder, we set $\\bQ=\\bK=\\bV=\\bX$ in Eq. \\eqref{eq:multihead}, where $\\bX$ is the output of the previous layer.\n \\item \\textit{Masked Self-attention}. In the Transformer decoder, the self-attention is restricted such that queries at each position can only attend to all key-value pairs up to and including that position. To enable parallel training, this is typically done by applying a mask function to the unnormalized attention matrix $\\hat{\\bA}=\\exp(\\frac{\\bQ\\bK^\\top}{\\sqrt{D_k}})$, where the illegal positions are masked out by setting $\\hat{A}_{ij}=-\\infty \\textrm{ if } i<j$. This kind of self-attention is often referred to as autoregressive or causal attention\\footnote{This term seems to be borrowed from the \\textit{causal system}, where the output depends on past and current inputs but not future inputs.}.\n \\item \\textit{Cross-attention}. 
The queries are projected from the outputs of the previous (decoder) layer, whereas the keys and values are projected using the outputs of the encoder.\n\\end{itemize}", "id": "625a964c-dc0c-4228-a0bd-e5fba16df1f2", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "cbbc7151-fd9c-4d11-a9dd-a0790a09698d", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Background" ], [ "subsection", "Vanilla Transformer" ], [ "subsubsection", "Attention Modules" ] ], "subsections": [], "title": "Attention Modules" }, { "cite_extract_rate": 0, "cites": [], "content": "The position-wise FFN\\footnote{The parameters are shared across different positions, thus the position-wise FFN can also be understood as two convolution layers with kernel size of 1.} is a fully connected feed-forward module that operates separately and identically on each position\n\\begin{equation}\n \\mathrm{FFN}(\\bH') = \\mathrm{ReLU}(\\bH'\\bW^1 + \\bb^1)\\bW^2+\\bb^2,\n\\end{equation}\nwhere $\\bH'$ is the outputs of previous layer, and $\\bW^1\\in\\mathbb{R}^{D_m\\times D_f},\\bW^2\\in\\mathbb{R}^{D_f\\times D_m},\\bb^1\\in\\mathbb{R}^{D_f},\\bb^2\\in\\mathbb{R}^{D_m}$ are trainable parameters. Typically the intermediate dimension $D_{f}$ of the FFN is set to be larger than $D_m$.", "id": "fdfb7e39-e1e9-4079-9037-b8bc17e8bb70", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "cbbc7151-fd9c-4d11-a9dd-a0790a09698d", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Background" ], [ "subsection", "Vanilla Transformer" ], [ "subsubsection", "Position-wise FFN" ] ], "subsections": [], "title": "Position-wise FFN" }, { "cite_extract_rate": 1, "cites": [ 97, 57 ], "content": "In order to build a deep model, Transformer employs a residual connection~ around each module, followed by Layer Normalization~. 
For instance, each Transformer encoder block may be written as\n\\begin{align}\n \\bH'&=\\mathrm{LayerNorm}(\\mathrm{SelfAttention}(\\bX)+\\bX)\\\\\n \\bH&=\\mathrm{LayerNorm}(\\mathrm{FFN}(\\bH')+\\bH'),\n\\end{align}\nwhere $\\mathrm{SelfAttention}(\\cdot)$ denotes self attention module and $\\mathrm{LayerNorm}(\\cdot)$ denotes the layer normalization operation.", "id": "0c6acb39-f7c8-4d89-9969-16b6e42547a1", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "cbbc7151-fd9c-4d11-a9dd-a0790a09698d", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Background" ], [ "subsection", "Vanilla Transformer" ], [ "subsubsection", "Residual Connection and Normalization" ] ], "subsections": [], "title": "Residual Connection and Normalization" }, { "cite_extract_rate": 0, "cites": [], "content": "Since Transformer doesn't introduce recurrence or convolution, it is ignorant of positional information (especially for the encoder). Thus additional positional representation (Detailed discussion in Sec.~\\ref{sec:pos_rep}) is needed to model the ordering of tokens.", "id": "64d291c3-febe-419e-831a-4ec33ca28ce7", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "cbbc7151-fd9c-4d11-a9dd-a0790a09698d", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Background" ], [ "subsection", "Vanilla Transformer" ], [ "subsubsection", "Position Encodings" ] ], "subsections": [], "title": "Position Encodings" }, { "cite_extract_rate": 0, "cites": [], "content": "Generally, the Transformer architecture can be used in three different ways:\n\\begin{itemize}\n \\item \\textit{Encoder-Decoder}. The full Transformer architecture as introduced in Sec.~\\ref{sec:vanilla_xformer} is used. This is typically used in sequence-to-sequence modeling (e.g., neural machine translation).\n \\item \\textit{Encoder only}. Only the encoder is used and the outputs of the encoder are utilized as a representation for the input sequence. 
This is usually used for classification or sequence labeling problems.\n \\item \\textit{Decoder only}. Only the decoder is used, where the encoder-decoder cross-attention module is also removed. This is typically used for sequence generation, such as language modeling.\n\\end{itemize}", "id": "89fa15e6-1537-4f12-a5ac-bdd5d071bfac", "level": "subsection", "origin_cites_number": 0, "parent_id": "07bae4f8-4d29-4fbd-aec3-ca99cfb7bc62", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Background" ], [ "subsection", "Model Usage" ] ], "subsections": [], "title": "Model Usage" }, { "cite_extract_rate": 1, "cites": [ 38 ], "content": "\\label{sec:model-analysis}\nTo illustrate the computation time and parameter requirements of the Transformer, we analyze the two core components of the Transformer (i.e., the self-attention module and the position-wise FFN) in Table \\ref{tab:complexity_attn_ffn}. We assume that the hidden dimension $D_m$ of the model is $D$, and that the input sequence length is $T$. The intermediate dimension of FFN is set to $4D$ and the dimension of keys and values are set to $D/H$ as in .\n\\begin{table}[htbp]\n \\caption{Complexity and parameter counts of self-attention and position-wise FFN}\n \\label{tab:complexity_attn_ffn}\n \\centering\n \\begin{tabular}{c|c|c}\n \\hline\n Module & Complexity & \\#Parameters\\\\\n \\hline\n self-attention & $\\mathcal O(T^2\\cdot D)$ & $4D^2$\\\\\n \\hline\n position-wise FFN & $\\mathcal O(T\\cdot D^2)$ & $8D^2$\\\\\n \\hline\n \\end{tabular}\n\\end{table}\nWhen the input sequences are short, the hidden dimension $D$ dominates the complexity of self-attention and position-wise FFN. The bottleneck of Transformer thus lies in FFN. However, as the input sequences grow longer, the sequence length $T$ gradually dominates the complexity of these modules, in which case self-attention becomes the bottleneck of Transformer. 
Furthermore, the computation of self-attention requires that a $T\\times T$ attention distribution matrix is stored, which makes the computation of Transformer infeasible for long-sequence scenarios (e.g., long text documents and pixel-level modeling of high-resolution images). One shall see that the goal of increasing the efficiency of Transformer generally leads to the long-sequence compatibility of self-attention, as well as the computation and parameter efficiency of position-wise FFN for ordinary settings.", "id": "3ddd6e3c-d4a1-4bce-b510-16dfc75582ed", "level": "subsection", "origin_cites_number": 1, "parent_id": "07bae4f8-4d29-4fbd-aec3-ca99cfb7bc62", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Background" ], [ "subsection", "Model Analysis" ] ], "subsections": [], "title": "Model Analysis" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "f0fa958a-db02-433c-858f-f7024645f38d", "level": "subsection", "origin_cites_number": 0, "parent_id": "07bae4f8-4d29-4fbd-aec3-ca99cfb7bc62", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Background" ], [ "subsection", "Comparing Transformer to Other Network Types" ] ], "subsections": [ "037a9c67-64a3-4bbe-b197-a41922561a14", "cf87fcdd-fc72-4f50-9754-c59538a65d3f" ], "title": "Comparing Transformer to Other Network Types" }, { "cite_extract_rate": 1, "cites": [ 38 ], "content": "As a central piece of Transformer, self-attention comes with a flexible mechanism to deal with variable-length inputs. It can be understood as a fully connected layer where the weights are dynamically generated from pairwise relations from inputs. Table \\ref{tab:op_complexities} compares the complexity, sequential operations, and maximum path length\\footnote{The maximum length of the paths forward and backward signals have to traverse to get from any input position to arbitrary output position. 
Shorter length implies a better potential for learning long-range dependencies.} of self-attention with three commonly used layer types. We summarize the advantages of self-attention as follows:\n\\begin{enumerate}\n \\item It has the same maximum path length as fully connected layers, making it suitable for long-range dependencies modeling. Compared to fully connected layers, it is more parameter-efficient and more flexible in handling variable-length inputs.\n \\item Due to the limited receptive field of convolutional layers, one typically needs to stack a deep network to have a global receptive field. On the other hand, the constant maximum path length enables self-attention to model long-range dependencies with a constant number of layers.\n \\item The constant sequential operations and maximum path length make self-attention more parallelizable and better at long-range modeling than recurrent layers.\n\\end{enumerate}\n\\begin{table}[ht]\n \\caption{Per-layer complexity, minimum number of sequential operations and maximum path lengths for different layer types. 
$T$ is the sequence length, $D$ is the representation dimension and $K$ is the kernel size of convolutions~.}\n\\label{tab:op_complexities}\n\\begin{center}\n\\vspace{-1mm}\n\\begin{tabular}{lccc}\n\\toprule\nLayer Type & Complexity & Sequential & Maximum Path Length \\\\\n &per Layer & Operations & \\\\\n\\hline\n\\rule{0pt}{2.0ex}Self-Attention & $\\mathcal O(T^2 \\cdot D)$ & $\\mathcal O(1)$ & $\\mathcal O(1)$ \\\\\nFully Connected & $\\mathcal O(T^2 \\cdot D^2)$ & $\\mathcal O(1)$ & $\\mathcal O(1)$ \\\\\nConvolutional & $\\mathcal O(K \\cdot T \\cdot D^2)$ & $\\mathcal O(1)$ & $\\mathcal O(\\log_K(T))$ \\\\\nRecurrent & $\\mathcal O(T \\cdot D^2)$ & $\\mathcal O(T)$ & $\\mathcal O(T)$ \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\end{table}", "id": "037a9c67-64a3-4bbe-b197-a41922561a14", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "f0fa958a-db02-433c-858f-f7024645f38d", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Background" ], [ "subsection", "Comparing Transformer to Other Network Types" ], [ "subsubsection", "Analysis of Self-Attention" ] ], "subsections": [], "title": "Analysis of Self-Attention" }, { "cite_extract_rate": 1, "cites": [ 208, 553 ], "content": "Transformer is often compared against convolutional and recurrent networks. Convolutional networks are known to impose the inductive biases of translation invariance and locality with shared local kernel functions. Similarly, recurrent networks carry the inductive biases of temporal invariance and locality via their Markovian structure~. On the other hand, the Transformer architecture makes few assumptions about structural information of data. This makes Transformer a universal and flexible architecture. As a side effect, the lack of structural bias makes Transformer prone to overfitting for small-scale data.\nAnother closely related network type is Graph Neural Networks (GNNs) with message passing~. 
Transformer can be viewed as a GNN defined over a complete directed graph (with self-loop) where each input is a node in the graph. The key difference between Transformer and GNNs is that Transformer introduces no prior knowledge over how input data are structured — the message passing process in Transformer solely depends on similarity measures over the content.", "id": "cf87fcdd-fc72-4f50-9754-c59538a65d3f", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "f0fa958a-db02-433c-858f-f7024645f38d", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Background" ], [ "subsection", "Comparing Transformer to Other Network Types" ], [ "subsubsection", "In Terms of Inductive Bias" ] ], "subsections": [], "title": "In Terms of Inductive Bias" }, { "cite_extract_rate": 0.808333333333333, "cites": [ 1474, 1493, 1457, 1482, 1460, 1491, 7360, 7040, 1484, 1494, 1464, 1488, 1470, 1452, 1466, 1490, 793, 1445, 1502, 7054, 1467, 1463, 8384, 1456, 1480, 794, 1475, 1465, 7361, 1500, 1459, 7053, 1458, 1479, 7365, 1496, 1133, 8454, 768, 7367, 7333, 1489, 1451, 1469, 9, 1476, 1453, 1495, 1487, 1462, 1478, 679, 1483, 1492, 7369, 7, 1473, 1446, 7370, 8456, 826, 7362, 7364, 7363, 1468, 38, 1486, 7298, 1498, 1497, 1501, 7371, 8457, 7368, 1455, 707, 8455, 1471, 1481, 1182, 732, 1272, 7339, 1477, 1449, 7273, 1461, 1454, 1448, 1485, 7366, 798, 1499, 1450, 1472 ], "content": "\\label{sec:taxonomy}\nA wide variety of models have been proposed so far based on the vanilla Transformer from three perspectives: types of architecture modification, pre-training methods, and applications.\nFig. \\ref{fig:xformer_taxonomy} gives an illustrations of our categorization of Transformer variants.\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{assets/taxonomy.pdf}\n \\caption{Categorization of Transformer variants.}\n \\label{fig:xformer_taxonomy}\n\\end{figure}\nFig. 
\\ref{taxonomy_of_xformer} illustrates our taxonomy and some representative models.\n\\tikzstyle{leaf}=[mybox,minimum height=1em,\nfill=hidden-orange!40, text width=20em, text=black,align=left,font=\\tiny,\ninner xsep=2pt,\ninner ysep=1pt,\n]\n\\begin{figure*}[tp]\n \\centering\n\\begin{forest}\n forked edges,\n for tree={\n grow=east,\n reversed=true,\n anchor=base west,\n parent anchor=east,\n child anchor=west,\n base=left,\n font=\\small,\n rectangle,\n draw=hiddendraw,\n rounded corners,align=left,\n minimum width=2.5em,\ns sep=3pt,\ninner xsep=2pt,\ninner ysep=1pt,\nver/.style={rotate=90, child anchor=north, parent anchor=south, anchor=center},\n },\n where level=1{text width=2.7em,font=\\scriptsize,}{},\n where level=2{text width=3em,font=\\tiny}{},\n where level=3{text width=3em,font=\\tiny}{},\n [X-formers, ver\n [Module\\\\Level\n [Attention\n [Sparse\n [Star-Transformer{,} Longformer{,} ETC{,} BigBird{,} Sparse Transformer\\\\\n BP-Transformer{,} Image Transformer{,} Axial Transformer\n ,leaf,text width=21.5em]\n [Routing Transformer{,} Reformer{,} SAC{,} Sparse Sinkhorn Attention\n ,leaf,text width=21.5em]\n ]\n [Linearized\n [Linear Transformer{,} Performer{,} RFA{,} Delta Net\n ,leaf,text width=18.5em]\n ]\n [Prototype\n [Clustered Attention{,} Informer\n ,leaf,text width=12em]\n ]\n [Memory\\\\Compress\n [MCA{,} Set Transformer{,} Linformer\n ,leaf,text width=12em]\n ]\n [Low-rank\n [Low-rank Attention{,} CSALR{,} Nystr{\\\"{o}}mformer~\n ,leaf,text width=15em]\n ]\n [Prior\\\\Attention\n [Local Transformer{,} Gaussian Transformer\n ,leaf,text width=15em]\n [Predictive Attention Transformer{,} Realformer{,} Lazyformer\n ,leaf,text width=18.5em]\n [CAMTL\n ,leaf,text width=4em]\n [Average Attention{,} Hard-Coded Gaussian Attention{,} Synthesizer\n ,leaf]\n ]\n [Multi-head\n [{,} {,} Talking-head Attention\\\\\n Collaborative MHA\n ,leaf,text width=20em]\n [Adaptive Attention Span{,} Multi-Scale Transformer\n ,leaf,text width=20em]\n [Dynamic Routing\n 
,leaf,text width=6.5em]\n ]\n ]\n [Position\\\\ Encoding\n [Absolute\n [BERT{,} {,} FLOATER\n ,leaf,text width=11em]\n ]\n [Relative\n [{,} Music Transformer{,} T5{,} Transformer-XL\\\\ DeBERTa\n ,leaf,text width=20em]\n ]\n [Other Rep.\n [TUPE{,} Roformer\n ,leaf,text width=7em]\n ]\n [Implicit Rep.\n [Complex Embedding{,} R-Transformer {,} CPE\n ,leaf,text width=16em]\n ]\n ]\n [LayerNorm\n [Placement\n [post-LN{,} pre-LN\n ,leaf,text width=16em]\n ]\n [Substitutes\n [AdaNorm{,} scaled $\\ell_2$ normalization{,} PowerNorm\n ,leaf,text width=16em]\n ]\n [Norm-free\n [ReZero-Transformer\n ,leaf,text width=9.5em]\n ]\n ]\n [FFN\n [Activ. Func.\n [Swish{,} GELU{,} GLU\n ,leaf,text width=9.5em]\n ]\n [Enlarge\\\\Capacity\n [Product-key Memory{,} Gshard{,} Switch Transformer{,}\\\\ Expert Prototyping{,} Hash Layer\n ,leaf,text width=16em]\n ]\n [Dropping\n [All-Attention layer{,} \n ,leaf,text width=11em]\n ]\n ]\n ]\n [Arch.\\\\Level\n [Lightweight\n [Lite Transformer{,} Funnel Transformer{,} DeLighT\n ,leaf,text width=16em]\n ]\n [Connectivity\n [Realformer{,} Predictive Attention Transformer{,} Transparent Attention\\\\\n Feedback Transformer~\n ,leaf,text width=24.5em]\n ]\n [ACT\n [UT{,} Conditional Computation Transformer{,} DeeBERT{,} PABEE{,} {,} \\\\\n ,leaf,text width=24.5em]\n ]\n [Divide \\& \\\\\n Conquer\n [Recurrence\n [Transformer-XL{,} Compressive Transformer{,} Memformer\\\\ {,} ERNIE-Doc\n ,leaf,text width=20em]\n ]\n [Hierarchy\n [{,} HIBERT{,} {,} Hi-Transformer\\\\ TENER{,} TNT\n ,leaf,text width=20em]\n ]\n ]\n [Alt.
Arch.\n [ET{,} Macaron Transformer{,} Sandwich Transformer{,} MAN{,} DARTSformer\n ,leaf,text width=24.5em]\n ]\n ]\n [Pre-Train\n [Encoder\n [BERT{,} RoBERTa{,} BigBird,leaf,text width=11em]\n ]\n [Decoder\n [GPT{,} GPT-2{,} GPT-3,leaf,text width=11em]\n ]\n [Enc.Dec.\n [ BART{,} T5{,} Switch Transformer\n ,leaf,text width=11em]\n ]\n ]\n [App.\n [NLP\n [BERT{,}ET{,} Transformer-XL{,}Compressive Transformer{,} TENER\n ,leaf,text width=22em]\n ]\n [CV\n [Image Transformer{,} DETR{,} ViT{,} Swin Transformer{,} ViViT\n ,leaf,text width=22em]\n ]\n [Audio\n [Speech Transformer{,} Streaming Transformer{,} Reformer-TTS{,} Music Transformer\n ,leaf,text width=25em]\n ]\n [Multimodal\n [VisualBERT{,} VLBERT{,} VideoBERT{,} M6{,} Chimera{,} DALL-E{,} CogView\n ,leaf,text width=25em]\n ]\n ]\n ]\n\\end{forest}\n\\caption{Taxonomy of Transformers}\n\\label{taxonomy_of_xformer}\n\\end{figure*}\nIn this survey, we focus on reviewing the works on architecture modifications.\nSince the attention module is the key component of Transformer, we solely describe the attention-related variants in Sec.~\\ref{sec:attention} and introduce the other module-level variants in Sec.~\\ref{sec:other_module}. Then Sec.~\\ref{sec:beyond} describes the other architecture-level variants. 
Finally, we briefly review the works on pre-training in Sec.~\\ref{sec:ptm} and applications in Sec.~\\ref{sec:app}.\nThere are some comprehensive surveys on the latter two categories of work, such as pre-trained models (PTMs)~ and visual Transformers.", "id": "aa12aeb0-6a41-4c9b-bfb0-68b4522245cd", "level": "section", "origin_cites_number": 120, "parent_id": "456ca844-322a-4fcf-90ea-531b14db9d9e", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Taxonomy of Transformers" ] ], "subsections": [], "title": "Taxonomy of Transformers" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:attention}\nSelf-attention plays an important role in Transformer, but there are two challenges in practical applications.\n\\begin{enumerate}\n \\item \\textit{Complexity}. As discussion in Sec.~\\ref{sec:model-analysis}, the complexity of self-attention is $\\mathcal O(T^2\\cdot D)$. Therefore, the attention module becomes a bottleneck when dealing with long sequences.\n \\item \\textit{Structural prior}. Self-attention does no assume any structural bias over inputs. Even the order information is also needed to be learned from training data. Therefore, Transformer (w/o pre-training) is usually easy to overfit on small or moderate-size data.\n\\end{enumerate}\nThe improvements on attention mechanism can be divided into several directions:\n\\begin{enumerate}\n \\item \\textit{Sparse Attention}. This line of work introduces sparsity bias into the attention mechanism, leading to reduced complexity.\n \\item \\textit{Linearized Attention}. This line of work disentangles the attention matrix with kernel feature maps. The attention is then computed in reversed order to achieve linear complexity.\n \\item \\textit{Prototype and Memory Compression}. This class of methods reduces the number of queries or key-value memory pairs to reduce the size of the attention matrix.\n \\item \\textit{Low-rank Self-Attention}. 
This line of work capture the low-rank property of self-attention.\n \\item \\textit{Attention with Prior}. The line of research explores supplementing or substituting standard attention with prior attention distributions.\n \\item \\textit{Improved Multi-Head Mechanism}. The line of studies explores different alternative multi-head mechanisms.\n\\end{enumerate}\nWe will describe these attention variants at length in the rest of this section.", "id": "57f59954-45c7-48ac-baf2-dc92c147a85e", "level": "section", "origin_cites_number": 0, "parent_id": "456ca844-322a-4fcf-90ea-531b14db9d9e", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ] ], "subsections": [ "c3955c2b-4114-49e4-bbb8-30061d5f27e8", "a4f89d55-c38a-442a-8744-5625c30e01ff", "99d263f7-48ac-4be9-9f96-b1cd4380de1b", "9a2c3eb1-bf66-4ddc-822d-bcc1121a8120", "afe04fc5-1717-4f24-b061-7af855828533", "4c615d59-27b6-48ea-a7bd-4c6d27bd91b5" ], "title": "Attention" }, { "cite_extract_rate": 1, "cites": [ 793 ], "content": "\\label{sec:sparseattn}\nIn the standard self-attention mechanism, every token needs to attend to all other tokens.\nHowever, it is observed that for the trained Transformers the learned attention matrix $\\bA$ is often very sparse across most data points~.\nTherefore, it is possible to reduce computation complexity by incorporating structural bias to limit the number of query-key pairs that each query attends to. Under this limitation, we just compute the similarity score of the query-key pairs according to pre-defined patterns\n\\begin{equation}\\label{Sparse attention:position-based}\n \\mathrm{\\hat{\\bA}}_{ij} = \\begin{cases} \\bq_i\\bk_{j}^\\top & \\text{if token }i\\text{ attends to token }j,\\\\\n -\\infty & \\text{if token }i\\text{ does not attend to token }j,\n \\end{cases}\n\\end{equation}\\label{eq:sparseattn}\nwhere $\\mathrm{\\hat{\\bA}}$ is un-normalized attention matrix. 
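As a toy illustration of the masking scheme above, the following pure-Python sketch (ours; the sliding-window-plus-global pattern and the inputs are arbitrary) computes scores only for the query-key pairs allowed by a pre-defined pattern:

```python
import math

def window_pattern(T, w):
    """Toy pre-defined pattern: token i attends to a window of width w
    around itself, plus token 0 acting as a global token."""
    return [sorted(set(range(max(0, i - w), min(T, i + w + 1))) | {0})
            for i in range(T)]

def sparse_attention(Q, K, V, pattern):
    """Scores only the allowed pairs; the disallowed entries, which would be
    -inf in the full attention matrix, are never materialized."""
    D = len(Q[0])
    out = []
    for i, q in enumerate(Q):
        js = pattern[i]
        scores = [sum(a * b for a, b in zip(q, K[j])) / math.sqrt(D) for j in js]
        m = max(scores)
        ws = [math.exp(s - m) for s in scores]
        z = sum(ws)
        out.append([sum(ws[t] * V[j][d] for t, j in enumerate(js)) / z
                    for d in range(len(V[0]))])
    return out

T = 8
Q = K = V = [[float(i), 1.0] for i in range(T)]
pattern = window_pattern(T, w=1)
assert pattern[4] == [0, 3, 4, 5]   # window {3, 4, 5} plus global token 0
out = sparse_attention(Q, K, V, pattern)
assert len(out) == T and len(out[0]) == 2
```

With a window of width $w$ plus one global token, each row costs $\mathcal O(w)$ to compute instead of $\mathcal O(T)$.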
In implementation the $-\\infty$ item is usually not stored in memory so as to decrease memory footprint.\nFrom another perspective, the standard attention can be regarded as a complete bipartite graph where each query receives information from all memory nodes and updates its representation.\nThe sparse attention can be considered as a sparse graph where some of the connections between nodes are removed.\nBased on the metrics of determining the sparse connection, we categorize these approaches into two classes: \\textit{position-based} and \\textit{content-based} sparse attention.", "id": "c3955c2b-4114-49e4-bbb8-30061d5f27e8", "level": "subsection", "origin_cites_number": 1, "parent_id": "57f59954-45c7-48ac-baf2-dc92c147a85e", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Sparse Attention" ] ], "subsections": [ "766f8fe1-2a61-4664-a9be-83f0177cc298", "6429c3b2-0cb9-415c-9f1d-1f7bd028a380" ], "title": "Sparse Attention" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:pos_based}\nIn position-based sparse attention, the attention matrix is limited according to some pre-defined patterns.\nAlthough these sparse patterns vary in different forms, we find that some of them can be decomposed into some atomic sparse patterns.\nWe first identify some atomic sparse patterns and then describe how these patterns are composed in some existing work. 
Finally, we introduce some extended sparse patterns for specific data types.", "id": "766f8fe1-2a61-4664-a9be-83f0177cc298", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "c3955c2b-4114-49e4-bbb8-30061d5f27e8", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Sparse Attention" ], [ "subsubsection", "Position-based Sparse Attention" ] ], "subsections": [ "3abb14c2-1133-4417-a3ba-50491a89789a", "cf0fe507-a0b3-499f-9145-a52ed5faa99e", "8a0b3727-3354-4028-870f-983c49224baa" ], "title": "Position-based Sparse Attention" }, { "cite_extract_rate": 1, "cites": [ 4722 ], "content": "There are mainly five types of atomic sparse attention patterns, as shown in Fig. \\ref{fig:atomic_sparse_attentions}.\n\\begin{figure}[htbp]\n\\begin{center}\n\\subfigure[global\\label{fig:global}]{\n\\includegraphics[width=0.18\\linewidth]{assets/sparse_variants/atomic/global.pdf}\n}\n\\subfigure[band\\label{fig:band}]{\n\\includegraphics[width=0.18\\linewidth]{assets/sparse_variants/atomic/band.pdf}\n}\n\\subfigure[dilated\\label{fig:dilated}]{\n\\includegraphics[width=0.18\\linewidth]{assets/sparse_variants/atomic/dilated.pdf}\n}\n\\subfigure[random\\label{fig:random}]{\n\\includegraphics[width=0.18\\linewidth]{assets/sparse_variants/atomic/random.pdf}\n}\n\\subfigure[block local\\label{fig:block_local}]{\n\\includegraphics[width=0.18\\linewidth]{assets/sparse_variants/atomic/block_local.pdf}\n}\n\\end{center}\n\\caption{Some representative atomic sparse attention patterns. The colored squares means corresponding attention scores are calculated and a blank square means the attention score is discarded.}\\label{fig:atomic_sparse_attentions}\n\\end{figure}\n\\begin{enumerate}\n \\item \\textit{Global Attention}. 
To alleviate the degradation of long-range dependency modeling caused by sparsification, one can add some global nodes\\footnote{In practice, these global nodes can be selected from the sequence (internal global nodes) or virtual nodes with trainable parameters (external global nodes).} as the hub for information propagation between nodes. These global nodes can attend to all nodes in the sequence, and the whole sequence attends to these global nodes, as illustrated in Fig. \\ref{fig:global}.\n \\item \\textit{Band Attention} (a.k.a. \\textit{sliding window attention} or \\textit{local attention}). Since most data come with a strong property of locality, it is natural to restrict each query to attend to its neighbor nodes. A widely adopted class of such sparse patterns is band attention, in which the attention matrix is a band matrix as illustrated in Fig. \\ref{fig:band}.\n \\item \\textit{Dilated Attention}. Analogous to dilated CNNs~, one can potentially increase the receptive field of the band attention without increasing computation complexity by using a dilated window with gaps of dilation $w_d\\ge 1$, as depicted in Fig. \\ref{fig:dilated}. This can be easily extended to \\textit{strided attention}, where the window size is not limited but the dilation $w_d$ is set to a large value.\n \\item \\textit{Random Attention}. To increase the capacity for non-local interactions, a few edges are randomly sampled for each query, as illustrated in Fig. \\ref{fig:random}. This is based on the observation that random graphs (e.g., Erd\\H os–R\\'enyi random graphs) can have spectral properties similar to those of complete graphs, which leads to a fast mixing time for random walks on graphs.\n \\item \\textit{Block Local Attention}. This class of attention segments the input sequence into several non-overlapping query blocks, each of which is associated with a local memory block. All the queries in a query block attend to only the keys in the corresponding memory block. Fig.
\\ref{fig:block_local} depicts a commonly used case where the memory blocks are identical to their corresponding query blocks.\n\\end{enumerate}", "id": "3abb14c2-1133-4417-a3ba-50491a89789a", "level": "paragraph", "origin_cites_number": 1, "parent_id": "766f8fe1-2a61-4664-a9be-83f0177cc298", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Sparse Attention" ], [ "subsubsection", "Position-based Sparse Attention" ], [ "paragraph", "4.1.1.1 Atomic Sparse Attention" ] ], "subsections": [], "title": "4.1.1.1 Atomic Sparse Attention" }, { "cite_extract_rate": 1, "cites": [ 7298, 793, 7371, 1499, 7363, 134 ], "content": "Existing sparse attentions are often composed of more than one of the above atomic patterns. Fig.~\\ref{fig:sparse_attn} illustrates some representative compound sparse attention patterns.\n\\begin{figure}[htbp]\n\\begin{center}\n\\subfigure[Star-Transformer\\label{fig:starxformer}]{\n\\includegraphics[width=0.18\\linewidth]{assets/sparse_variants/compound/star.pdf}\n}\\quad\n\\quad\n\\subfigure[Longformer\\label{fig:longformer}]{\n\\includegraphics[width=0.18\\linewidth]{assets/sparse_variants/compound/longformer.pdf}\n}\\quad\n\\subfigure[ETC\\label{fig:etc}]{\n\\includegraphics[width=0.18\\linewidth]{assets/sparse_variants/compound/etc.pdf}\n}\\quad\n\\subfigure[BigBird\\label{fig:bigbird}]{\n\\includegraphics[width=0.18\\linewidth]{assets/sparse_variants/compound/bigbird.pdf}\n}\\quad\n\\caption{Some representative compound sparse attention patterns. The red boxes indicate sequence boundaries.}\\label{fig:sparse_attn}\n\\end{center}\n\\end{figure}\nStar-Transformer~ uses a combination of band attention and global attention. Specifically, Star-Transformer just includes only a global node and a band attention with the width of 3, in which any pair of non-adjacent nodes are connected through a shared global node and adjacent nodes are connected directly with each other. 
This kind of sparse pattern forms a star-shaped graph among nodes.\nLongformer~ uses a combination of band attention and internal global-node attention. The global nodes are chosen to be the \\texttt{[CLS]} token for classification and all question tokens for Question Answering tasks. They also replace some of the band attention heads in upper layers with dilated window attention to increase the receptive field without increasing computation.\nAs a concurrent work to Longformer~, Extended Transformer Construction (ETC)~ utilizes a combination of band attention and external global-node attention. ETC also includes a masking mechanism to handle structured inputs and adapts Contrastive Predictive Coding (CPC)~ for pre-training.\nIn addition to band and global attention, BigBird~ uses additional random attention to approximate full attention. Their theoretical analysis also reveals that the use of a sparse encoder and sparse decoder can simulate any Turing Machine, which explains the success of those sparse attention models.\nSparse Transformer~ uses a factorized attention where different sparse patterns are designed for different types of data. For data with a periodic structure (e.g., images), it uses a composition of band attention and strided attention.
Whereas for data without a periodic structure (e.g., text), it uses a composition of block local attention combined with global attention, where global nodes are from fixed positions in the input sequence.", "id": "cf0fe507-a0b3-499f-9145-a52ed5faa99e", "level": "paragraph", "origin_cites_number": 6, "parent_id": "766f8fe1-2a61-4664-a9be-83f0177cc298", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Sparse Attention" ], [ "subsubsection", "Position-based Sparse Attention" ], [ "paragraph", "4.1.1.2 Compound Sparse Attention" ] ], "subsections": [], "title": "4.1.1.2 Compound Sparse Attention" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1484, 1459 ], "content": "Apart from the above patterns, some existing studies have explored extended sparse patterns for specific data types.\nFor text data, BP-Transformer~ constructs a binary tree where all tokens are leaf nodes and the internal nodes are span nodes containing many tokens. The edges in this graph are constructed so that each leaf node is connected to its neighbor leaf nodes and higher-level span nodes containing tokens from a longer distance. This approach can be seen as an extension of global attention, where global nodes are hierarchically organized and any pair of tokens are connected with paths in the binary tree. An abstract view of this method is illustrated in Fig. \\ref{fig:bpt}.\nThere are also some extensions for vision data. Image Transformer~ explores two types of attention: (1) flattening image pixels in raster-scan order and then applying block local sparse attention. (2) 2D block local attention, where query blocks and memory blocks are arranged directly in 2D plate, as depicted in Fig. \\ref{fig:block_local_2d}. As another example of sparse pattern on vision data, Axial Transformer~ applies independent attention modules over each axis of the image. 
Each attention module mixes information along one axis while keeping information along the other axis independent, as illustrated in Fig. \\ref{fig:axial}. This can be understood as horizontally and vertically flattening image pixels in raster-scan order and then applying strided attention with gaps of image width and height, respectively.\n\\begin{figure}[htbp]\n\\begin{center}\n\\subfigure[BPT\\label{fig:bpt}]{\n\\includegraphics[width=0.40\\linewidth]{assets/sparse_variants/other/bpt.pdf}\n}\\quad\n\\subfigure[block local (2D)\\label{fig:block_local_2d}]{\n\\includegraphics[width=0.18\\linewidth]{assets/sparse_variants/other/block_local.pdf}\n}\\quad\n\\subfigure[axial (2D)\\label{fig:axial}]{\n\\includegraphics[width=0.18\\linewidth]{assets/sparse_variants/other/axial.pdf}\n}\n\\caption{Other types of sparse attentions. The red box indicates the query position, and the orange nodes/squares means corresponding tokens are attended to by the query.}\\label{fig:other_sparse_attn}\n\\end{center}\n\\end{figure}", "id": "8a0b3727-3354-4028-870f-983c49224baa", "level": "paragraph", "origin_cites_number": 3, "parent_id": "766f8fe1-2a61-4664-a9be-83f0177cc298", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Sparse Attention" ], [ "subsubsection", "Position-based Sparse Attention" ], [ "paragraph", "4.1.1.3 Extended Sparse Attention" ] ], "subsections": [], "title": "4.1.1.3 Extended Sparse Attention" }, { "cite_extract_rate": 1, "cites": [ 1453, 1473, 7362, 794 ], "content": "Another line of work creates a sparse graph based on input content, i.e., the sparse connections are conditioned on inputs.\nA straightforward way of constructing a content-based sparse graph is to select those keys that are likely to have large similarity scores with the given query. 
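This straightforward selection can be sketched as a naive top-$k$ pattern builder (ours; note that building the pattern this way still scores all $T^2$ pairs, which is exactly the cost the methods below avoid):

```python
def topk_sparse_pattern(Q, K, k):
    """Naive content-based sparsification: each query attends to the k keys
    with the largest dot products. Building the pattern this way is O(T^2)."""
    pattern = []
    for q in Q:
        scored = [(sum(a * b for a, b in zip(q, key)), j) for j, key in enumerate(K)]
        scored.sort(reverse=True)
        pattern.append(sorted(j for _, j in scored[:k]))
    return pattern

Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[2.0, 0.0], [0.0, 2.0], [1.0, 1.0], [-1.0, -1.0]]
pattern = topk_sparse_pattern(Q, K, k=2)
assert pattern[0] == [0, 2]   # query 0 scores highest against keys 0 and 2
assert pattern[1] == [1, 2]
```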
To efficiently construct the sparse graph, we can resort to the Maximum Inner Product Search (MIPS) problem, where one tries to find the keys with maximum dot product with a query without computing all dot product terms. Routing Transformer~ uses k-means clustering to cluster both queries $\\{\\bq_i\\}_{i=1}^T$ and keys $\\{\\bk_i\\}_{i=1}^T$ on the same set of centroid vectors $\\{\\mathbf\\mu_i\\}_{i=1}^k$. Each query only attends to the keys that belong to the same cluster.\nDuring training, the cluster centroid vectors are updated using the exponential moving average of the vectors assigned to them, divided by the exponential moving average of the cluster counts:\n\\begin{align}\n\\tilde\\mu&\\leftarrow \\lambda\\tilde\\mu+(1-\\lambda)\\left(\\sum_{i:\\mu(\\bq_i)=\\mu}\\bq_i+\\sum_{j:\\mu(\\bk_j)=\\mu}\\bk_j\\right),\\\\\nc_\\mu&\\leftarrow \\lambda c_\\mu+(1-\\lambda)|\\mu|,\\\\\n\\mu&\\leftarrow \\frac{\\tilde\\mu}{c_\\mu},\n\\end{align}\nwhere $|\\mu|$ denotes the number of vectors currently in cluster $\\mu$ and $\\lambda\\in(0,1)$ is a hyperparameter.\nLet $\\mathcal{P}_i$ denote the set of indices of keys that the $i$-th query attends to. $\\mathcal{P}_i$ in Routing Transformer is defined as\n\\begin{equation}\n \\mathcal{P}_i=\\{j: \\mu(\\bq_i)=\\mu(\\bk_j) \\}.\n\\end{equation}\nReformer~ uses locality-sensitive hashing (LSH) to select key-value pairs for each query. The proposed LSH attention allows each token to attend only to the tokens within the same hashing bucket. The basic idea is to use an LSH function to hash queries and keys into several buckets, with similar items falling in the same bucket with high probability. Specifically, they use the random matrix method for the LSH function.
Let $b$ be the number of buckets, given a random matrix $R$ of size $[D_k,b/2]$, the LSH function is computed by :\n\\begin{equation}\n h(x)=\\argmax([xR;-xR]).\n\\end{equation}\nThe LSH attention allows the $i$-th query to attend only to key-value pairs with indices\n\\begin{equation}\n \\mathcal{P}_i=\\{j: h(\\bq_i)=h(\\bk_j) \\}.\n\\end{equation}\nSparse Adaptive Connection (SAC)~ views the input sequence as a graph and learns to construct attention edges to improve task-specific performances using an adaptive sparse connection. SAC uses an LSTM edge predictor to construct edges between tokens. With no ground truth for edges, the edge predictor is trained with reinforcement learning.\nSparse Sinkhorn Attention~ first splits queries and keys into several blocks and assigns a key block to each query block. Each query is only allowed to attend to the keys in the key block that is assigned to its corresponding query block. The assignment of key blocks is controlled by a sorting network, which uses Sinkhorn normalization to produce a doubly stochastic matrix as the permutation matrix representing the assignment. They use this content-based block sparse attention along with block local attention introduced in Sec.~\\ref{sec:pos_based} to enhance the ability of the model to model locality.", "id": "6429c3b2-0cb9-415c-9f1d-1f7bd028a380", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "c3955c2b-4114-49e4-bbb8-30061d5f27e8", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Sparse Attention" ], [ "subsubsection", "Content-based Sparse Attention" ] ], "subsections": [], "title": "Content-based Sparse Attention" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:kernel}\nAssuming $\\bQ,\\bK,\\bV\\in \\mathbb{R}^{T\\times D}$, the complexity of computing $\\softmax(\\bQ\\bK^\\top)\\bV$ is quadratic w.r.t. sequence length $T$, as illustrated in Fig. \\ref{fig:standard_complexity}. 
If $\\softmax(\\bQ\\bK^\\top)$ can be disentangled into $\\bQ'\\bK'^\\top$, we can compute $\\bQ'\\bK'^\\top\\bV$ in reversed order (i.e., $\\bQ'\\left(\\textcolor[rgb]{0,0,1}{\\bK'^\\top\\bV}\\right)$), leading to a complexity of $\\mathcal{O}(T)$.\nLet $\\hat{\\bA} = \\exp(\\bQ\\bK^\\top)$ denote the un-normalized attention matrix, where $\\exp(\\cdot)$ is applied element-wise. The regular attention can then be rewritten as $\\bZ={\\bD}^{-1}\\hat{\\bA}\\bV$, where $\\bD = \\mathrm{diag}(\\hat{\\bA}\\mathbf{1}_T^\\top)$; $\\mathbf{1}_T^\\top$ is the all-ones column vector of length $T$; $\\mathrm{diag}(\\cdot)$ is a diagonal matrix with the input vector as the diagonal.\nLinearized attention is a class of methods that approximate or replace the un-normalized attention matrix $\\exp(\\bQ\\bK^\\top)$ with $\\phi(\\bQ)\\phi(\\bK)^\\top$, where $\\phi$ is a feature map that is applied in a row-wise manner. Hence the computation of the un-normalized attention matrix can be linearized by computing $\\phi(\\bQ)\\left(\\textcolor[rgb]{0,0,1}{\\phi(\\bK)^\\top\\bV}\\right)$\\footnote{Similarly, the partition term $\\bD$ can be computed with $\\phi(\\bQ)\\left(\\textcolor[rgb]{0,0,1}{\\phi(\\bK)^\\top\\mathbf{1}_T^\\top}\\right)$ in linear time.}, as illustrated in Fig. \\ref{fig:linearized_complexity}.\n\\begin{figure}[htbp]\n\\begin{center}\n\\subfigure[standard self-attention]{\\label{fig:standard_complexity}\n\\includegraphics[width=0.45\\textwidth]{assets/kernel/vanilla.pdf}\n}\\hspace{.2in}\n\\subfigure[linearized self-attention]{\\label{fig:linearized_complexity}\n\\includegraphics[width=0.45\\textwidth]{assets/kernel/kernel.pdf}\n}\n\\caption{Illustration of complexity difference between standard self-attention and linearized self-attention.}\n\\end{center}\n\\end{figure}\nTo gain further insights into linearized attention, we derive the formulation in vector form.
We consider a general form of attention\n\\begin{equation}\n \\bz_i=\\sum_j \\frac{\\mathrm{sim}(\\bq_i, \\bk_j)}{\\sum_{j'}\\mathrm{sim}(\\bq_i, \\bk_{j'})}\\bv_j,\n\\end{equation}\nwhere $\\mathrm{sim}(\\cdot,\\cdot)$ is a scoring function measuring similarity between input vectors. In the vanilla Transformer, the scoring function is the exponential of the inner product $\\exp(\\langle\\cdot,\\cdot\\rangle)$. A natural choice of $\\mathrm{sim}(\\cdot,\\cdot)$ is a kernel function $\\mathcal{K}(\\bx,\\by)=\\phi(\\bx)\\phi(\\by)^\\top$, which leads to\n\\begin{align}\n \\bz_i&=\\sum_j \\frac{\\phi(\\bq_i) \\phi(\\bk_j)^\\top}{\\sum_{j'} \\phi(\\bq_i) \\phi(\\bk_{j'})^\\top}\\bv_j\\\\\n &=\\frac{\\phi(\\bq_i) \\textcolor[rgb]{0,0,1}{\\sum_j\\phi(\\bk_j)\\otimes\\bv_j}}{\\phi(\\bq_i)\\textcolor[rgb]{0,0,1}{\\sum_{j'} \\phi(\\bk_{j'})^\\top}},\\label{eq:linearized}\n\\end{align}\nwhere $\\otimes$ denotes the outer product of vectors. Based on this formulation, attention can be linearized by first computing the highlighted terms $\\sum_j\\phi(\\bk_j)\\otimes\\bv_j$ and $\\sum_{j'} \\phi(\\bk_{j'})^\\top$. This could be especially beneficial for autoregressive attention, as the cumulative sums $\\bS_i=\\sum_{j=1}^{i} \\phi(\\bk_j)\\otimes\\bv_j$ and $\\bu_i=\\sum_{j=1}^{i} \\phi(\\bk_j)$ can be computed from $\\bS_{i-1}$ and $\\bu_{i-1}$ in constant time. This effectively enables Transformer decoders to run like RNNs.\nAn interpretation of Eq. \\eqref{eq:linearized} is that the model maintains a \\textit{memory matrix} by aggregating \\textit{associations} represented by outer products of (feature-mapped) keys and values, and then retrieves a value by multiplying the memory matrix with the feature-mapped query, with proper normalization.
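The equivalence of the two computation orders can be checked numerically. The sketch below (ours) uses the $\mathrm{elu}(x)+1$ feature map popularized by Linear Transformer and verifies that the linear-time accumulation of the highlighted terms matches the quadratic-time computation exactly:

```python
import math

def phi(x):
    """elu(x) + 1 feature map (positive-valued), as used by Linear Transformer."""
    return [xi + 1.0 if xi > 0 else math.exp(xi) for xi in x]

def linear_attention(Q, K, V):
    """O(T): accumulate S = sum_j phi(k_j) (outer product) v_j and
    u = sum_j phi(k_j) once, then answer every query against S and u."""
    Dk, Dv = len(K[0]), len(V[0])
    S = [[0.0] * Dv for _ in range(Dk)]
    u = [0.0] * Dk
    for k, v in zip(K, V):
        fk = phi(k)
        for a in range(Dk):
            u[a] += fk[a]
            for b in range(Dv):
                S[a][b] += fk[a] * v[b]
    out = []
    for q in Q:
        fq = phi(q)
        z = sum(f * ua for f, ua in zip(fq, u))
        out.append([sum(fq[a] * S[a][b] for a in range(Dk)) / z for b in range(Dv)])
    return out

def quadratic_attention(Q, K, V):
    """O(T^2): the same model computed via the full similarity matrix."""
    out = []
    for q in Q:
        fq = phi(q)
        sims = [sum(a * b for a, b in zip(fq, phi(k))) for k in K]
        z = sum(sims)
        out.append([sum(s * v[b] for s, v in zip(sims, V)) / z for b in range(len(V[0]))])
    return out

Q = [[0.3, -0.2], [1.0, 0.5]]
K = [[0.1, 0.4], [-0.5, 0.2], [0.7, -0.1]]
V = [[1.0, 0.0, 2.0], [0.0, 1.0, 1.0], [2.0, 1.0, 0.0]]
out_fast, out_slow = linear_attention(Q, K, V), quadratic_attention(Q, K, V)
assert all(abs(x - y) < 1e-9
           for ra, rb in zip(out_fast, out_slow) for x, y in zip(ra, rb))
```

In the autoregressive case one would simply snapshot $S$ and $u$ after each key-value pair, which is the RNN-like decoding mode mentioned above.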
There are two key components in this approach: (1) feature map $\\phi(\\cdot)$, and (2) aggregation rule.", "id": "a4f89d55-c38a-442a-8744-5625c30e01ff", "level": "subsection", "origin_cites_number": 0, "parent_id": "57f59954-45c7-48ac-baf2-dc92c147a85e", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Linearized Attention" ] ], "subsections": [ "c219036a-ca77-49bd-b9c6-4ac269a7d3d6", "22632f15-ca9f-4bd6-a9a4-e425db6125fa" ], "title": "Linearized Attention" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 8384, 1494, 1182, 798 ], "content": "Linear Transformer~ propose to use a simple feature map $\\phi_i(\\bx)=\\mathrm{elu}(x_i)+1$. This feature map does not aim to approximate dot product attention, but is empirically proved to perform on par with the standard Transformer.\nPerformer~ uses random feature maps that approximate the scoring function of Transformer. The random feature maps take functions $f_1,\\cdots,f_l:\\mathbb R\\rightarrow \\mathbb R$ and $h:\\mathbb{R}^D\\rightarrow \\mathbb R$.\n\\begin{equation}\n \\phi(\\bx) = \\frac{h(\\bx)}{\\sqrt{m}}[f_1(\\omega_1^\\top \\bx),\\cdots,f_m(\\omega_m^\\top \\bx),\\cdots,f_l(\\omega_1^\\top \\bx),\\cdots,f_l(\\omega_m^\\top \\bx)],\n\\end{equation}\nwhere $\\omega_1,\\cdots,\\omega_m\\stackrel{\\text{iid}}{\\sim} \\mathcal D$ are drawn from some distribution $\\mathcal D\\in\\mathcal P(\\mathbb{R}^D)$.\nThe first version of Performer~ is inspired from the random Fourier feature map~ that was originally used to approximate Gaussian kernel. It uses trigonometric functions with $h(\\bx)=\\exp(\\frac{\\|\\bx\\|^2}{2}), l=2, f_1=\\sin, f_2=\\cos$. 
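As a quick numerical check (our sketch; the dimensionality, the number of random features $m$, and the test vectors are arbitrary), the trigonometric feature map indeed yields an unbiased Monte Carlo estimate of $\exp(\langle\bq, \bk\rangle)$:

```python
import math
import random

def trig_feature_map(x, omegas):
    """Trigonometric random feature map: h(x) = exp(||x||^2 / 2), f1 = sin, f2 = cos."""
    h = math.exp(sum(xi * xi for xi in x) / 2.0)
    m = len(omegas)
    dots = [sum(wi * xi for wi, xi in zip(w, x)) for w in omegas]
    return [h / math.sqrt(m) * f for f in
            [math.sin(t) for t in dots] + [math.cos(t) for t in dots]]

random.seed(0)
D, m = 3, 20000
omegas = [[random.gauss(0.0, 1.0) for _ in range(D)] for _ in range(m)]
q = [0.2, 0.1, -0.3]
k = [0.1, -0.2, 0.2]
exact = math.exp(sum(a * b for a, b in zip(q, k)))        # exp(<q, k>)
approx = sum(a * b for a, b in zip(trig_feature_map(q, omegas),
                                   trig_feature_map(k, omegas)))
# Monte Carlo estimate; the tolerance is loose since the estimator is stochastic.
assert abs(approx - exact) / exact < 0.05
```

The identity behind the check is $\sin(\omega^\top\bq)\sin(\omega^\top\bk)+\cos(\omega^\top\bq)\cos(\omega^\top\bk)=\cos(\omega^\top(\bq-\bk))$, whose expectation under $\omega\sim\mathcal N(0,\mathbf I)$ is $\exp(-\|\bq-\bk\|^2/2)$; the $h(\cdot)$ factors turn this into $\exp(\langle\bq,\bk\rangle)$.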
This approach has also been used in Random Feature Attention (RFA)~, with the difference that $h(\\bx)$ is set to $1$ as the queries and keys are $\\ell_2$-normalized before applying the feature map.\nAlthough the trigonometric random feature map leads to an unbiased approximation, it does not guarantee non-negative attention scores and thus could lead to unstable behaviors and abnormal behaviors. To mitigate this issue, the second version of Performer~ proposes positive random feature maps, which uses $h(\\bx)=\\exp(-\\frac{\\|\\bx\\|^2}{2}),l=1,f_1=\\exp$ and thus guarantees unbiased and non-negative approximation of dot-product attention. This approach is more stable than and reports better approximation results.\nIn addition to using random feature maps to approximate standard dot product attention, and also explore approximating order-1 arc-cosine kernel with $h(\\bx)=1,l=1,f_1=\\mathrm{ReLU}$. This feature map has been show to be effective in various tasks including machine translation and protein sequence modeling.\n design a feature map that aims at facilitating orthogonality in feature space. Specifically, given an input $\\bx\\in\\mathbb{R}^D$, the feature map $\\phi:\\mathbb{R}^D\\rightarrow \\mathbb{R}^{2\\nu D}$ is defined by the partial function\n\\begin{equation}\n \\phi_{i+2(j-1)D}(\\bx)=\\mathrm{ReLU}([\\bx,-\\bx])_i\\mathrm{ReLU}([\\bx,-\\bx])_{i+j}\\quad\\text{for }i=1,\\cdots,2D,j=1,\\cdots,\\nu.\n\\end{equation}", "id": "c219036a-ca77-49bd-b9c6-4ac269a7d3d6", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "a4f89d55-c38a-442a-8744-5625c30e01ff", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Linearized Attention" ], [ "subsubsection", "Feature Maps" ] ], "subsections": [], "title": "Feature Maps" }, { "cite_extract_rate": 0.8, "cites": [ 8384, 1494, 1182, 798 ], "content": "In Eq. 
\\eqref{eq:linearized} the associations $\\{\\phi(\\bk)_j\\otimes\\bv_j\\}$ are aggregated into the memory matrix by simple summation. This is adopted by several studies~. However, it could be more beneficial for the network to selectively drop associations as new associations are added to the memory matrix.
RFA~ introduces a gating mechanism to the summation to model local dependency in sequence data. Specifically, when adding a new association to the memory matrix $\\bS$, at a particular time step, they weigh $\\bS$ by a learnable, input-dependent scalar $g$, and the new association by $(1-g)$ (and a similar mechanism to $\\bu$). With this modification, history associations are exponentially decayed and recent context is favored in each timestep.
 argue that simple summation limits the capacity of the memory matrix and thus propose to enlarge the capacity in a write-and-remove fashion. Specifically, given a new input key-value pair $(\\bk_i, \\bv_i)$, the model first retrieves the value $\\bar{\\bv}_i$ currently associated with $\\bk_i$ using matrix multiplication. It then writes to the memory matrix a convex combination of $\\bar{\\bv}_i$ and $\\bv_i$, using an input-dependent gating scalar $g$, and removes the association $\\bar{\\bv}_i$. They also propose \\textit{sum normalization} (normalizing $\\phi(\\bq_i),\\phi(\\bk_i)$ by the sum of their components before updating the memory matrix) instead of normalizing with the denominator in Eq. 
\\eqref{eq:linearized} for this aggregation rule.", "id": "22632f15-ca9f-4bd6-a9a4-e425db6125fa", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "a4f89d55-c38a-442a-8744-5625c30e01ff", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Linearized Attention" ], [ "subsubsection", "Aggregation Rule" ] ], "subsections": [], "title": "Aggregation Rule" }, { "cite_extract_rate": 0, "cites": [], "content": "Apart from using sparse attention or kernel-based linearized attention, one could also reduce the complexity of attention by reducing the number of queries or key-value pairs, which leads to \\textit{query prototyping} and \\textit{memory compression}\\footnote{The key-value pairs are often referred to as a key-value memory (hence the name memory compression).} methods, respectively.", "id": "99d263f7-48ac-4be9-9f96-b1cd4380de1b", "level": "subsection", "origin_cites_number": 0, "parent_id": "57f59954-45c7-48ac-baf2-dc92c147a85e", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Query Prototyping and Memory Compression" ] ], "subsections": [ "ad89ae14-36aa-4aeb-ab1d-65a0637088d3", "5fdbbc99-3354-4d28-a5a0-c76985e76cdb" ], "title": "Query Prototyping and Memory Compression" }, { "cite_extract_rate": 1, "cites": [ 1479, 1472 ], "content": "In query prototyping, several prototypes of queries serve as the main source to compute attention distributions. The model either copies the distributions to the positions of represented queries or fills those positions with discrete uniform distributions. Fig. 
\\ref{fig:query_prototype} illustrates the computing flow of query prototyping.\n\\begin{figure}[htbp]\n\\begin{center}\n\\subfigure[Query prototyping]{\\label{fig:query_prototype}\n\\includegraphics[height=1.5in]{assets/prototype/query_prototype.pdf}\n}\\hspace{.2in}\n\\subfigure[Memory compression]{\\label{fig:compressed_mem}\n\\includegraphics[height=1.5in]{assets/prototype/mem_compress.pdf}\n}\n\\caption{Query prototyping and memory compression.}\\label{fig:prototype_mem_compress}\n\\end{center}\n\\end{figure}\nClustered Attention~ groups queries into several clusters and then computes attention distributions for cluster centroids. All queries in a cluster share the attention distribution calculated with the corresponding centroid.\nInformer~ selects prototypes from queries using explicit query sparsity measurement, which is derived from an approximation of the Kullback-Leibler divergence between the query's attention distribution and the discrete uniform distribution. Attention distributions are then only calculated for the top-$u$ queries under query sparsity measurement. The rest of the queries are assigned with discrete uniform distributions.", "id": "ad89ae14-36aa-4aeb-ab1d-65a0637088d3", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "99d263f7-48ac-4be9-9f96-b1cd4380de1b", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Query Prototyping and Memory Compression" ], [ "subsubsection", "Attention with Prototype Queries" ] ], "subsections": [], "title": "Attention with Prototype Queries" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 7333, 1503, 1133, 1504, 1482 ], "content": "\\label{sec:compressed_mem}\nApart from decreasing the number of queries with query prototyping, one can also reduce the complexity by reducing the number of the key-value pairs before applying the attention mechanism, as depicted in Fig. 
\\ref{fig:compressed_mem}.
 propose Memory Compressed Attention (MCA) that reduces the number of keys and values using a strided convolution. This modification is used as a complement to local attention proposed in the same work (as discussed in Sec.~\\ref{sec:sparseattn}), in that it can capture global context. The mechanism reduces the number of keys and values by a factor of kernel size $k$ and thus allows processing significantly longer sequences than vanilla Transformer given the same computation resources.
Set Transformer~ and Luna~ use a number of external trainable global nodes to summarize information from inputs and then the summarized representations serve as a compressed memory that the inputs attend to. This reduces the quadratic complexity of self-attention to linear complexity w.r.t. sequence length.
Linformer~ utilizes linear projections to project keys and values from length $n$ to a smaller length $n_k$. This also reduces the complexity of self-attention to linear. The drawback of this approach is that an input sequence length has to be assumed and hence it cannot be used in autoregressive attention.
Poolingformer~ adopts two-level attention that combines a sliding window attention and a compressed memory attention. The compressed memory module is used after the sliding window attention to increase the receptive field. 
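As an illustrative, framework-free sketch (strided average pooling stands in for MCA's strided convolution; all names are ours), compressing the key-value memory from $T$ to roughly $T/k$ entries before ordinary softmax attention looks like:

```python
import math

def avg_pool(X, k):
    # Compress a T x D matrix to ceil(T/k) x D by averaging non-overlapping windows.
    return [[sum(col) / len(win) for col in zip(*win)]
            for win in (X[i:i + k] for i in range(0, len(X), k))]

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    # Plain softmax attention over lists of row vectors.
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, krow)) for krow in K]
        w = softmax(scores)
        out.append([sum(wj * vrow[d] for wj, vrow in zip(w, V))
                    for d in range(len(V[0]))])
    return out

def compressed_attention(Q, K, V, k):
    # Attention against a memory shrunk by a factor of k.
    return attention(Q, avg_pool(K, k), avg_pool(V, k))
```

The per-query cost now scales with the compressed memory size rather than the full sequence length.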
They explore a few different pooling operations as the compression operation to compress the number of keys and values, including max pooling and pooling with Dynamic Convolution~.", "id": "5fdbbc99-3354-4d28-a5a0-c76985e76cdb", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "99d263f7-48ac-4be9-9f96-b1cd4380de1b", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Query Prototyping and Memory Compression" ], [ "subsubsection", "Attention with Compressed Key-Value Memory" ] ], "subsections": [], "title": "Attention with Compressed Key-Value Memory" }, { "cite_extract_rate": 0.5, "cites": [ 7333 ], "content": "Some empirical and theoretical analyses~ report the self-attention matrix $\\bA \\in \\mathbb{R}^{T\\times T}$ is often low-rank\\footnote{The rank of $\\bA$ is far lower than input length $T$.}. The implications of this property are twofold: (1) The low-rank property could be explicitly modeled with parameterization; (2) The self-attention matrix could be replaced by a low-rank approximation.", "id": "9a2c3eb1-bf66-4ddc-822d-bcc1121a8120", "level": "subsection", "origin_cites_number": 2, "parent_id": "57f59954-45c7-48ac-baf2-dc92c147a85e", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Low-rank Self-Attention" ] ], "subsections": [ "8570c9f2-0cf3-47a3-befe-ca2093b97f7a", "fc87c410-c4a6-46de-806f-b6f8720b2eb4" ], "title": "Low-rank Self-Attention" }, { "cite_extract_rate": 0, "cites": [], "content": "The fact that the rank of the attention matrix is less than sequence length implies that, for scenarios where the inputs are typically short, setting $D_k>T$ would be more than an over-parameterization and lead to overfitting. It is thus reasonable to limit the dimension of $D_k$ to explicitly model the low-rank property as an inductive bias. 
 decompose the self-attention matrix into a low-rank attention module with small $D_k$ that captures long-range non-local interactions, and a band attention module that captures local dependencies.", "id": "8570c9f2-0cf3-47a3-befe-ca2093b97f7a", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "9a2c3eb1-bf66-4ddc-822d-bcc1121a8120", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Low-rank Self-Attention" ], [ "subsubsection", "Low-rank Parameterization" ] ], "subsections": [], "title": "Low-rank Parameterization" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 1494 ], "content": "Another implication of the low-rank property of the attention matrix is that one can use a low-rank matrix approximation to reduce the complexity of self-attention. A closely related methodology is the low-rank approximation of kernel matrices. We believe some existing works are inspired by kernel approximation.
Some of the aforementioned linearized attention methods in Sec.~\\ref{sec:kernel} are inspired from kernel approximation with random feature maps. For example, Performer~ follows the Random Fourier feature map originally proposed to approximate Gaussian kernels. The method first decomposes the attention distribution matrix $\\bA$ into $\\bC_Q\\bG\\bC_K$ where $\\bG$ is a Gaussian kernel matrix and the random feature map is used to approximate $\\bG$.
Another line of work follows the idea of the Nystr\\"om method. These Nystr\\"om-based methods~ first select $m$ landmark nodes from the $T$ inputs with down-sampling methods (e.g., strided average pooling). 
Let $\\tilde\\bQ,\\tilde\\bK$ be the selected landmark queries and keys, then the following approximation is used in the attention computation
\\begin{equation}\\label{eq:nystrom}
    \\tilde{\\bA}=\\mathrm{softmax}\\left(\\bQ\\tilde{\\bK}^\\top\\right)\\left(\\mathrm{softmax}\\left(\\tilde\\bQ\\tilde{\\bK}^\\top\\right)\\right)^{-1}\\mathrm{softmax}\\left(\\tilde\\bQ\\bK^\\top\\right).
\\end{equation}
Note that $\\bM^{-1}=\\left(\\mathrm{softmax}\\left(\\tilde\\bQ\\tilde{\\bK}^\\top\\right)\\right)^{-1}$ in Eq. \\eqref{eq:nystrom} does not always exist. To mitigate this issue, CSALR~ adds an identity matrix to $\\bM$ to make sure that the inverse always exists. Nystr{\\"{o}}mformer~ uses the Moore-Penrose pseudoinverse of $\\bM$ instead of the inverse so that the approximation can be made for cases where $\\bM$ is singular.", "id": "fc87c410-c4a6-46de-806f-b6f8720b2eb4", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "9a2c3eb1-bf66-4ddc-822d-bcc1121a8120", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Low-rank Self-Attention" ], [ "subsubsection", "Low-rank Approximation" ] ], "subsections": [], "title": "Low-rank Approximation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{figure}[htbp]
    \\centering
    \\includegraphics[width=0.5\\linewidth]{assets/attention_w_prior.pdf}
    \\caption{Attention with prior. This type of model fuses generated attention scores with prior attention scores, producing the final attention scores for attention computation.}
    \\label{fig:attention_w_prior}
\\end{figure}
The attention mechanism generally outputs an expected attended value as a weighted sum of vectors, where the weights are an attention distribution over the values. Traditionally, the distribution is generated from inputs (e.g., $\\mathrm{softmax}(\\bQ\\bK^\\top)$ in vanilla Transformer). 
As a generalized case, attention distribution can also come from other sources, which we refer to as \\textit{prior}. Prior attention distribution can be a supplement or substitute for distribution generated from inputs. We abstract this formulation of attention as \\textit{attention with prior}, as depicted in Fig. \\ref{fig:attention_w_prior}. In most cases, the fusion of two attention distributions can be done by computing a weighted sum of the scores corresponding to the prior and generated attention before applying softmax.", "id": "afe04fc5-1717-4f24-b061-7af855828533", "level": "subsection", "origin_cites_number": 0, "parent_id": "57f59954-45c7-48ac-baf2-dc92c147a85e", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Attention with Prior" ] ], "subsections": [ "65f37434-480a-4748-b65e-d8966645bec9", "af364707-c13f-44cb-9e4e-2366287a19b6", "a795db35-abb3-4e07-ad32-5c47de43f06f", "55c07bfe-1106-456b-a01c-8edd7f25017b" ], "title": "Attention with Prior" }, { "cite_extract_rate": 0.5, "cites": [ 1489 ], "content": "Some types of data (e.g., text) can exhibit a strong preference for locality. This property can be explicitly encoded as a prior attention. A simple method would be to use a Gaussian distribution over positions. Specifically, one could multiply the generated attention distribution with some Gaussian density and then renormalize, which is equivalent to adding to the generated attention scores $\\bA$ a bias term $\\bG$, where higher $G_{ij}$ indicates a higher prior probability that the $i$-th input attends to the $j$-th input.
 proposes to first predict a central position $p_i$ for each $\\bq_i$ using a simple feed-forward network. 
The Gaussian bias is then defined to be
\\begin{equation}
    G_{ij}=-\\frac{(j-p_i)^2}{2\\sigma^2},
\\end{equation}
where $\\sigma$ denotes standard deviation for the Gaussian and can be determined as a hyperparameter or predicted from inputs.
Gaussian Transformer~ assumes the central position to be $i$ for each $\\bq_i$ and defines the bias to be
\\begin{equation}
    G_{ij}=-|w(i-j)^2+b|,
\\end{equation}
where $w\\ge 0, b\\le 0$ are scalar parameters that control the deviation and reduce the weight for the central position, respectively.", "id": "65f37434-480a-4748-b65e-d8966645bec9", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "afe04fc5-1717-4f24-b061-7af855828533", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Attention with Prior" ], [ "subsubsection", "Prior that Models locality" ] ], "subsections": [], "title": "Prior that Models locality" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1497, 1492 ], "content": "\\label{sec:prior_prev_module}
In Transformer architecture, it is often observed that the attention distributions are similar in adjacent layers. It is thus natural to provide the attention distribution from the previous layer as a prior for attention computation. The final attention scores can be defined as
\\begin{equation}\\label{crosslayer}
    \\hat{\\bA}^{(l)} = w_1\\cdot\\bA^{(l)}+w_2\\cdot g(\\bA^{(l-1)}),
\\end{equation}
where $\\bA^{(l)}$ denotes the attention scores of the $l$-th layer, $w_1,w_2\\in\\mathbb R$ are weights applied to the scores from adjacent layers, and $g:\\mathbb{R}^{n\\times n}\\rightarrow \\mathbb{R}^{n\\times n}$ is a function that translates previous scores to the prior to be applied.
Predictive Attention Transformer~ proposes to apply a 2D-convolutional layer to previous attention scores and compute the final attention scores as a convex combination of the generated attention scores and the convolved scores. 
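As a minimal, framework-free sketch of the fusion in Eq. \eqref{crosslayer} (toy list-based scores; $g(\cdot)$ defaults to the identity map, so $w_1=w_2=1$ gives the Realformer-style residual setting; the function name is ours):

```python
def fuse_with_prior(scores, prior_scores, w1=1.0, w2=1.0, g=lambda A: A):
    # \hat{A}^{(l)} = w1 * A^{(l)} + w2 * g(A^{(l-1)}), fused before the softmax.
    gp = g(prior_scores)
    return [[w1 * a + w2 * p for a, p in zip(arow, prow)]
            for arow, prow in zip(scores, gp)]
```

Setting `w1=0.0, w2=1.0` simply reuses the previous layer's map, which is the Lazyformer-style sharing configuration.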
This is equivalent to setting $w_1=\\alpha,w_2=1-\\alpha$ and $g(\\cdot)$ to be a convolutional layer in Eq. \\eqref{crosslayer}. They experiment with training such a model from scratch and with fine-tuning after adapting the pre-trained BERT model, and both sets of experiments show improvements over baseline models.
Realformer~ adds the previous attention scores directly to the generated attention scores, thus resembling a residual skip connection on attention maps. It's equivalent to setting $w_1=w_2=1$ and $g(\\cdot)$ to be the identity map in Eq. \\eqref{crosslayer}. They conduct pre-training experiments on this model. The results show that this model outperforms the baseline BERT model in multiple datasets and surpasses the baseline model even when pre-training budgets are significantly lower.
As an extreme case, Lazyformer~ proposes to share attention maps between a number of adjacent layers. This is equivalent to setting $g(\\cdot)$ to identity and switching between the settings of $w_1=0,w_2=1$ and $w_1=1,w_2=0$ alternately. The benefit of this approach is that the attention maps are computed only once and reused several times in the succeeding layers, thus reducing the computation cost. Their pre-training experiments show that the resulting model remains effective while being much more efficient to compute.", "id": "af364707-c13f-44cb-9e4e-2366287a19b6", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "afe04fc5-1717-4f24-b061-7af855828533", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Attention with Prior" ], [ "subsubsection", "Prior from Lower Modules" ] ], "subsections": [], "title": "Prior from Lower Modules" }, { "cite_extract_rate": 1, "cites": [ 725, 1505 ], "content": "Adapters are task-dependent, trainable modules that are attached in specific locations of a pre-trained network for cross-task efficient parameter sharing~. 
 propose a Conditionally Adaptive Multi-Task Learning (CAMTL) framework that uses a trainable attention prior $M(\\bz_i)$ that depends on task encoding $\\bz_i\\in\\mathbb{R}^{D_z}$
\\begin{equation}
    M(\\bz_i)=\\bigoplus_{j=1}^m A'_j(\\bz_i),\\quad A'_j(\\bz_i)=A_j\\gamma_j(\\bz_i)+\\beta_j(\\bz_i),
\\end{equation}
where $\\bigoplus$ denotes direct sum, $A_j\\in \\mathbb{R}^{(n/m)\\times(n/m)}$ are trainable parameters, and $\\gamma_j,\\beta_j:\\mathbb{R}^{D_z}\\rightarrow \\mathbb{R}^{(n/m)\\times(n/m)}$ are Feature Wise Linear Modulation functions~. A maximum sequence length $n_{max}$ is specified in the implementation. The prior is formulated as a block diagonal matrix and added to the attention scores of upper layers in pre-trained Transformers to serve as an adapter for parameter-efficient multi-task inductive knowledge transfer.", "id": "a795db35-abb3-4e07-ad32-5c47de43f06f", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "afe04fc5-1717-4f24-b061-7af855828533", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Attention with Prior" ], [ "subsubsection", "Prior as Multi-task Adapters" ] ], "subsections": [], "title": "Prior as Multi-task Adapters" }, { "cite_extract_rate": 0.8, "cites": [ 1489, 7364, 7366, 1476 ], "content": "Some works have explored using an attention distribution that is independent of pair-wise interaction between inputs. In other words, their models exploit only a prior attention distribution.
 design an efficient Transformer decoder variant called average attention network that uses a discrete uniform distribution as the sole source of attention distribution. The values are thus aggregated as a cumulative-average of all values. To improve the expressiveness of the network, they further add a feed-forward gating layer on top of the average attention module. 
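The cumulative-average aggregation just described can be sketched as follows (gating layer omitted; a simplification for illustration, with a name of our choosing):

```python
def cumulative_average(V):
    # Position t attends uniformly to positions 1..t: y_t = (1/t) * sum_{j<=t} v_j,
    # i.e., a discrete uniform prior is the sole source of attention weights.
    out, acc = [], [0.0] * len(V[0])
    for t, v in enumerate(V, start=1):
        acc = [a + x for a, x in zip(acc, v)]
        out.append([a / t for a in acc])
    return out
```

Because each step only extends a running sum, decoding one position costs $\mathcal O(1)$ rather than $\mathcal O(T)$.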
The advantage of this approach is that the adapted Transformer decoder can train in a parallel manner as usual Transformers do and decode like an RNN, thus avoiding the $\\mathcal O(T^2)$ complexity in decoding.
 utilize a Gaussian distribution as the hardcoded attention distribution for attention calculation. The intuition is very similar to and in that attention distribution should be focused on a certain local window. Distinctively, they drop the generated attention completely and use only the Gaussian distribution for attention computation. In this approach, the mean (central position) and variance are designed to be hyperparameters. The experiments show that the hardcoded attention, when applied only to self-attention, can achieve comparable performance to the baseline model in machine translation tasks.
Synthesizer~ proposes to replace generated attention scores with: (1) learnable, randomly initialized attention scores, and (2) attention scores output by a feed-forward network that is only conditioned on the querying input itself. The experiments on machine translation and language modeling show that these variants can achieve competitive performance with vanilla Transformer. It is not explained why these variants work but the empirical results are intriguing.", "id": "55c07bfe-1106-456b-a01c-8edd7f25017b", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "afe04fc5-1717-4f24-b061-7af855828533", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Attention with Prior" ], [ "subsubsection", "Attention with Only Prior" ] ], "subsections": [], "title": "Attention with Only Prior" }, { "cite_extract_rate": 0, "cites": [], "content": "Multi-head attention is appealing for the ability to jointly attend to information from different representation subspaces at different positions. 
However, there is no mechanism to guarantee that different attention heads indeed capture distinct features.", "id": "4c615d59-27b6-48ea-a7bd-4c6d27bd91b5", "level": "subsection", "origin_cites_number": 0, "parent_id": "57f59954-45c7-48ac-baf2-dc92c147a85e", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Improved Multi-Head Mechanism" ] ], "subsections": [ "dcd668ac-e997-46ab-8e9e-c34fd7ac742a", "6b049103-9bb4-4d9c-bd95-dceaba44ed6d", "c68f2053-1c0a-4bd7-8d8a-92474761c343", "be224101-c6fc-4828-9ad0-f1f79c3097ee" ], "title": "Improved Multi-Head Mechanism" }, { "cite_extract_rate": 1, "cites": [ 1474, 7372, 1460, 1498, 7369, 38 ], "content": "A basic motivation for using multi-head attention is to allow the model to jointly attend to information from different representation subspaces at different positions~. However, in vanilla Transformer there is no explicit mechanism to guarantee different behavior across attention heads, nor is there any mechanism for heads to interact with each other. A line of work is dedicated to improving the multi-head mechanism by incorporating more sophisticated mechanisms that guide the behavior of different attention heads or allow interaction across attention heads.
 introduce auxiliary disagreement regularization terms into the loss function to encourage diversity among different attention heads. Two of the regularization terms respectively maximize the cosine distances of the input subspaces and of the output representations, while the last one disperses the positions attended by multiple heads, using element-wise multiplication of the corresponding attention matrices.
Several probing works have revealed that pre-trained Transformer models exhibit certain patterns of self-attention that are of little linguistic backing. As a representative work, identify several simple attention patterns in BERT. 
For instance, many of the attention heads simply pay attention to special BERT tokens \\texttt{[CLS]} and \\texttt{[SEP]}. As a result, some constraints can be introduced to boost the training of Transformer models. To this end, propose to use an auxiliary loss, which is defined to be the Frobenius norm between attention distribution maps and predefined attention patterns.
Talking-head Attention~ uses a talking head mechanism that linearly projects the generated attention scores from $h_k$ to $h$ heads, applies softmax in that space, and then projects to $h_v$ heads for value aggregation. The motivation is to encourage the model to move information between attention heads in a learnable fashion.
Collaborative Multi-head Attention~ uses shared query and key projections $\\bW^Q$ and $\\bW^K$ and a mixing vector $\\bm_i$ for the $i$-th head to filter from the projection parameters such that Eq. \\eqref{eq:headi} is adapted to
\\begin{equation}
    \\mathrm{head}_i=\\mathrm{Attention}(\\bQ\\bW^Q\\mathrm{diag}(\\bm_i),\\bK\\bW^K, \\bV\\bW_i^V),
\\end{equation}
where $\\bW^Q$ and $\\bW^K$ are shared by all the attention heads.", "id": "dcd668ac-e997-46ab-8e9e-c34fd7ac742a", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "4c615d59-27b6-48ea-a7bd-4c6d27bd91b5", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Improved Multi-Head Mechanism" ], [ "subsubsection", "Head Behavior Modeling" ] ], "subsections": [], "title": "Head Behavior Modeling" }, { "cite_extract_rate": 0.5, "cites": [ 1465 ], "content": "Vanilla attention assumes full attention spans, where a query can attend to all of the key-value pairs. However, it is often observed that some heads focus their attention distribution mainly in a local context while some other heads attend to broader contexts. It could thus be beneficial to restrict the attention spans:
\\begin{itemize}
    \\item \\textit{Locality}. 
Restricting attention spans induces explicit local constraints. This is advantageous in cases where locality is an important prior.
    \\item \\textit{Efficiency}. If implemented appropriately, such a model can scale to very long sequences without introducing additional memory footprint and computational time.
\\end{itemize}
Restricting attention spans can be expressed as multiplying each attention distribution value with a mask value and then re-normalizing, where the mask can be expressed as a non-increasing function that maps a distance to a value in $[0,1]$. A vanilla attention assigns a mask value of $1$ for all distances, as depicted in Fig. \\ref{span_mask}(a).
\\begin{figure}[htbp]
\\begin{center}
\\subfigure[mask function for vanilla attention]{
\\includegraphics[width=0.3\\linewidth]{assets/span_mask/full_mask.pdf}
}
\\subfigure[mask function for adaptive span]{
\\includegraphics[width=0.3\\linewidth]{assets/span_mask/adaptive_mask.pdf}
}
\\subfigure[mask function for fixed span]{
\\includegraphics[width=0.3\\linewidth]{assets/span_mask/scale.pdf}
}
\\caption{Three types of span masking function $m(x)$. The horizontal axis represents distance $x$ and vertical axis the mask value.}\\label{span_mask}
\\end{center}
\\end{figure}
 propose to use a learnable attention span, as depicted in Fig. \\ref{span_mask}(b). The mask is parameterized by a learnable scalar $z$ and a hyperparameter $R$. The experiments on character-level language modeling show that the adaptive-span models outperform baseline models while having significantly fewer FLOPS. It is also observed that lower layers generally have smaller learned spans and higher layers otherwise. This indicates that the model can learn a hierarchical composition of features.
Multi-Scale Transformer~ proposes to use a fixed attention span, with different heads in different layers using a different max span. The fixed attention span is depicted in Fig. \\ref{span_mask}(c). 
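The mask-and-renormalize scheme can be sketched as follows (the soft ramp below is our assumed reading of an adaptive-span-style mask parameterized by the learnable scalar $z$ and hyperparameter $R$; function names are ours):

```python
def span_mask(x, z, R):
    # Non-increasing in distance x: 1 within span z, linear ramp of width R, then 0.
    return min(max((R + z - x) / R, 0.0), 1.0)

def mask_and_renormalize(weights, z, R):
    # weights[i][j]: attention of query i over key j; distance is |i - j|.
    out = []
    for i, row in enumerate(weights):
        masked = [w * span_mask(abs(i - j), z, R) for j, w in enumerate(row)]
        s = sum(masked)
        out.append([m / s for m in masked] if s > 0 else masked)
    return out
```

Keys beyond the span receive exactly zero weight, so their key-value pairs never need to be materialized, which is where the memory savings come from.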
The attention is restricted within a fixed window which is controlled by a \\textit{scale} value $w$. They design the scales from an intuitive linguistic perspective and empirical observation from BERT such that higher layers tend to have larger scales (e.g., large span size), and lower layers should be confined with a smaller scale. Their experiments on several tasks show that the model can outperform baseline models while accelerating inference on long sequences.", "id": "6b049103-9bb4-4d9c-bd95-dceaba44ed6d", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "4c615d59-27b6-48ea-a7bd-4c6d27bd91b5", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Improved Multi-Head Mechanism" ], [ "subsubsection", "Multi-head with Restricted Spans" ] ], "subsections": [], "title": "Multi-head with Restricted Spans" }, { "cite_extract_rate": 0.8, "cites": [ 306, 1452, 8455, 38 ], "content": "After each attention head computes its output representation, the vanilla multi-head attention~ concatenates these representations and then applies a linear transformation to the concatenated representation to obtain the final output representation, as formulated in Eq. \\eqref{eq:multihead}. Combining Eqs. \\eqref{attention}, \\eqref{eq:multihead} and \\eqref{eq:headi}, one can see that this \\textit{concatenate-and-project} formulation is equivalent to summation over $H$ re-parameterized attention outputs. To this end, we first divide $\\bW^O\\in\\mathbb{R}^{D_m\\times D_m}$ into $H$ blocks
\\begin{equation}
    \\bW^O=[\\bW_1^O;\\bW_2^O;\\cdots;\\bW_H^O],
\\end{equation}
where each $\\bW_i^O$ is of dimension $D_v\\times D_m$. 
It's thus easy to see that multi-head attention can be reformulated as
\\begin{equation}
    \\mathrm{MultiHeadAttn}(Q,K,V)= \\sum_{i=1}^H \\mathrm{Attention}(Q \\bW_i^Q,K \\bW_i^K, V\\textcolor[rgb]{0,0,1} {\\bW_i^V\\bW_i^O}).
\\end{equation}
One might argue that this simple \\textit{aggregate-by-summation} paradigm does not fully exploit the expressiveness of multi-head attention and that it is more desirable to use a more complex aggregation.
 propose to use routing methods, originally proposed for capsule networks~, to further aggregate information produced by different attention heads. The outputs of attention heads are first transformed into input capsules, then output capsules are obtained after the iterative routing process. The output capsules are then concatenated as the final output of multi-head attention. These two works both utilize two routing mechanisms, namely \\textit{dynamic routing} and \\textit{EM routing}. One would notice that iterative routing introduces additional parameters and computational overhead. empirically show that applying the routing mechanism only to the lower layers can best balance the translation performance and computational efficiency.", "id": "c68f2053-1c0a-4bd7-8d8a-92474761c343", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "4c615d59-27b6-48ea-a7bd-4c6d27bd91b5", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Improved Multi-Head Mechanism" ], [ "subsubsection", "Multi-head with Refined Aggregation" ] ], "subsections": [], "title": "Multi-head with Refined Aggregation" }, { "cite_extract_rate": 1, "cites": [ 1507, 1506 ], "content": "Several other modifications to the multi-head mechanism have been proposed to improve multi-head attention.
 propose multi-query attention, where key-value pairs are shared among attention heads (i.e., to use only one key projection and one value projection for all attention heads). 
The advantage of this method is that it reduces the memory bandwidth requirements for decoding and results in a model that is faster to decode, while incurring only minor quality degradation from the baseline.
 establish that small attention key size can affect its ability to represent arbitrary distributions. They thus propose to disentangle head size from the number of heads $h$, as opposed to the common practice that sets the head size to be $D_m/h$. It is observed empirically that setting attention head size to be input sequence length is beneficial.", "id": "be224101-c6fc-4828-9ad0-f1f79c3097ee", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "4c615d59-27b6-48ea-a7bd-4c6d27bd91b5", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Attention" ], [ "subsection", "Improved Multi-Head Mechanism" ], [ "subsubsection", "Other Modifications" ] ], "subsections": [], "title": "Other Modifications" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:other_module}", "id": "7a323139-7d7b-422d-a6c6-3617457ad2f6", "level": "section", "origin_cites_number": 0, "parent_id": "456ca844-322a-4fcf-90ea-531b14db9d9e", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Other Module-level Modifications" ] ], "subsections": [ "3f67f9f0-d57d-471a-bcb5-cbf64e5bcabc", "ba9ac74a-b1ed-471e-a9a5-9097079a09db", "e915275c-0207-4658-9d41-438a8b5b3e51" ], "title": "Other Module-level Modifications" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:pos_rep}
\\begin{definition}[permutation equivariant function]
Let $\\Pi_T$ be the set of all permutations of indices $\\{1,2,\\cdots, T\\}$. 
A function $f:\\mathcal{X}^T\\rightarrow \\mathcal{Y}^T$ is said to be \\textit{permutation equivariant} if and only if for any $\\pi\\in\\Pi_T$\n\\begin{equation}\n f(\\pi x)=\\pi f(x).\n\\end{equation}\n\\end{definition}\nIt is easy to verify that Convolution and Recurrence networks are not permutation equivariant. However, both self-attention modules and position-wise feed-forward layers in Transformer are permutation equivariant, which could be a problem when it comes to modeling problems other than \\textit{set-input} problems where the structure of inputs is needed. For example, when modeling sequences of text, the ordering of words matters and it's thus crucial to properly encode the positions of words in Transformer architecture. Therefore, additional mechanisms are required to inject positional information into Transformers. A common design is to first represent positional information using vectors and then infuse the vectors to the model as an additional input.", "id": "3f67f9f0-d57d-471a-bcb5-cbf64e5bcabc", "level": "subsection", "origin_cites_number": 0, "parent_id": "7a323139-7d7b-422d-a6c6-3617457ad2f6", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Other Module-level Modifications" ], [ "subsection", "Position Representations" ] ], "subsections": [ "ccc63091-167f-4bc8-97e9-c63a8adbf741", "9cdcf5bd-ef00-4ae9-a8b3-6d8cd2240795", "268772ad-4cd8-43e9-9608-b7ac7de2e35c", "866b7a57-7c1b-41c7-8a8f-c03a0681cbeb", "90d9b719-8d67-48fa-a44c-ce04b5d77df6" ], "title": "Position Representations" }, { "cite_extract_rate": 0.555555555555555, "cites": [ 1493, 790, 7, 1450, 38 ], "content": "In vanilla Transformer~, positional information is encoded as absolute sinusoidal position encodings.For each position index $t$, the encoding is a vector $\\bp_t = \\mathrm{PE}(t)\\in\\mathbb{R}^{D_m}$, of which every element is a sinusoidal ($\\sin$/$\\cos$) function of the index with pre-defined frequency.\n\\begin{equation}\\label{eq:PE}\n 
\\mathrm{PE}(t)_i = \\begin{cases} \\sin(\\omega_i t) & \\text{if }i\\text{ is even},\\\\\n \\cos(\\omega_i t) & \\text{if }i\\text{ is odd},\n \\end{cases}\n\\end{equation}\nwhere $\\omega_i$ is the hand-crafted frequency for each dimension. The position encoding of each position in the sequence is then added to the token embeddings and fed to Transformer.\nAnother way of representing absolute positions is to learn a set of positional embeddings for each position~. Compared to hand-crafted position representation, learned embeddings are more flexible in that position representation can adapt to tasks through back-propagation. But the number of embeddings is limited up to a maximum sequence length determined before training, which makes this approach no longer \\textit{inductive}, i.e., not able to handle sequences longer than sequences seen in the training time.\n propose to use sinusoidal position representation, but with each frequency $\\omega_i$ (in Eq. \\eqref{eq:PE}) learned from data. This approach retains inductiveness but is more flexible than hand-crafted sinusoidal encoding. FLOATER~ frames positional representation as a continuous dynamical system and adopts Neural ODE to enable end-to-end training with backpropagation. This method is inductive and flexible while being parameter efficient compared to a fully learnable approach.\nThe Vanilla approach to incorporating absolute position representations is to add position encodings/embeddings to token embeddings. However, as the input signals propagate through\nthe layers, the positional information might get lost in the upper layers. 
Later works find it beneficial to add position representations to inputs to each Transformer layer~.", "id": "ccc63091-167f-4bc8-97e9-c63a8adbf741", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "3f67f9f0-d57d-471a-bcb5-cbf64e5bcabc", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Other Module-level Modifications" ], [ "subsection", "Position Representations" ], [ "subsubsection", "Absolute Position Representations" ] ], "subsections": [], "title": "Absolute Position Representations" }, { "cite_extract_rate": 0.8571428571428571, "cites": [ 1456, 1508, 1481, 7370, 7054, 9 ], "content": "Another line of works focuses on representing positional relationships between tokens instead of positions of individual tokens. The intuition is that in self-attention, pairwise positional relationships between input elements (direction and distance) could be more beneficial than positions of elements. Methods following this principle are called relative positional representations. propose to add a learnable relative position embedding to the keys of the attention mechanism\n\begin{align}\n \bk_j' &= \bk_j + \br_{ij},\ \text{for }i=1,\cdots, n,\\\n \br_{ij} &= \bR_{\mathrm{clip}(i-j)}\label{rij},\\\n \mathrm{clip}(x) &= \max(-K, \min(x, K))\label{eq:shaw_clip},\n\end{align}\nwhere $\br_{ij}\in\mathbb{R}^{D_k}$ is the relative position embedding for the relation between positions $i$ and $j$, and $K$ is the largest offset that determines the number of embeddings. Typically $K$ is set to a length that can accommodate most input sequences. As a special case, InDIGO~ sets $K$ to $3$ for their specially designed framework for non-autoregressive generation. As an incremental effort, Music Transformer~ further introduce a mechanism to reduce the intermediate memory requirements for this approach. 
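The clipped lookup of Eqs.~\eqref{rij} and \eqref{eq:shaw_clip} can be sketched as follows (a hypothetical minimal version; the table shape and names are assumptions):

```python
import numpy as np

K_MAX, d_k = 4, 8
# 2 * K_MAX + 1 embeddings cover the clipped offsets -K_MAX .. K_MAX
R = np.random.default_rng(0).standard_normal((2 * K_MAX + 1, d_k))

def clip(x, k=K_MAX):
    """Eq. (eq:shaw_clip): clip(x) = max(-k, min(x, k))."""
    return max(-k, min(x, k))

def rel_embedding(i, j):
    """Eq. (rij): r_ij = R_clip(i - j), shifted by K_MAX for 0-based rows."""
    return R[clip(i - j) + K_MAX]

# All offsets beyond the clipping range share a single embedding.
```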
Similar to this approach, T5~ adopt a simplified form of relative position embeddings where each embedding is only a learnable scalar that is added to the corresponding score used for computing the attention weights.\nTransformer-XL~ uses a sinusoidal encoding to represent positional relationships but fuses content and position information by redesigning the computation of attention scores\footnote{The scaling factor is omitted without loss of generality.}\n\begin{equation}\n \bA_{ij}=\bq_i \bk_j^\top+\bq_i\left(\bR_{i-j}\bW^{K,R}\right)^\top +\bu^1\bk_j^\top+\bu^2\left(\bR_{i-j}\bW^{K,R}\right)^\top,\n\end{equation}\nwhere $\bW^{K,R}\in\mathbb{R}^{D_m\times D_k},\bu^1,\bu^2\in\mathbb{R}^{D_k}$ are learnable parameters and $\bR$ is a sinusoidal encoding matrix similar to the position encoding in vanilla Transformer. The softmax function is then applied to the scores $\bA$ to provide attention weights. Note that the learnable sinusoidal encoding is also a drop-in replacement for the hand-crafted $\bR$.\nDeBERTa~ utilizes position embeddings like and applies the embeddings to the model in a disentangled style similar to Transformer-XL~\n\begin{equation}\n \bA_{ij}=\bq_i \bk_j^\top+\bq_i\left(\br_{ij}\bW^{K,R}\right)^\top +\bk_j\left(\br_{ij}\bW^{Q,R}\right)^\top,\n\end{equation}\nwhere $\bW^{K,R},\bW^{Q,R}\in\mathbb{R}^{D_m\times D_k}$ are learnable parameters and $\br_{ij}$ is the learnable relative positional embedding as in Eq. \eqref{rij}. 
The first term is interpreted as a content-to-content attention, and the latter two terms are interpreted as (relative) content-to-position and position-to-content attention, respectively.", "id": "9cdcf5bd-ef00-4ae9-a8b3-6d8cd2240795", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "3f67f9f0-d57d-471a-bcb5-cbf64e5bcabc", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Other Module-level Modifications" ], [ "subsection", "Position Representations" ], [ "subsubsection", "Relative Position Representations" ] ], "subsections": [], "title": "Relative Position Representations" }, { "cite_extract_rate": 1, "cites": [ 1454, 8457 ], "content": "Some research studies have explored using hybrid positional representations that contain both absolute and relative positional information. Transformer with Untied Position Encoding (TUPE)~ re-designs the computation of attention scores as a combination of a content-to-content term, an absolute position-to-position term and a bias term representing relative positional relationships\n\begin{equation}\n \bA_{ij}=\bq_i \bk_j^\top+\left(\bp_i\bW^{Q,P}\right)\left(\bp_{j}\bW^{K,P}\right)^\top +b_{j-i},\n\end{equation}\nwhere $\bW^{K,P},\bW^{Q,P}\in\mathbb{R}^{D_m\times D_k}$ are learnable parameters, $\bp_i, \bp_j$ are the position embeddings for positions $i,j$, and $b_{j-i}$ is a learnable scalar relative position embedding.\nOne can also design a single set of positional representations that express both absolute and relative information. 
Roformer~ uses Rotary Position Embedding (RoPE) to represent the position of a token by multiplying the affine-transformed embedding of the $t$-th input $\bx_t$ by a rotation matrix $\bR_{\Theta,t}$\n\begin{align}\n \bq_t=\bx_t\bW^Q\bR_{\Theta,t}\quad & \bk_t=\bx_t\bW^K\bR_{\Theta,t},\\\n \bR_{\Theta,t}&=\bigoplus_{j=1}^{D_k/2} \bM(t,\theta_j),\n\end{align}\nwhere $\bigoplus$ denotes the \textit{direct sum} of matrices. Each $\bM(t,\theta_j)$ is a 2-D clockwise rotation matrix of angle $t\cdot \theta_j$\n\begin{equation}\n \bM(t,\theta_j)=\left[\begin{matrix} \cos(t\cdot \theta_j)&\sin(t\cdot \theta_j)\\\n -\sin(t\cdot \theta_j)&\cos(t\cdot \theta_j)\end{matrix}\n \right].\n\end{equation}\nThe key advantage of this formulation is that the induced representation is translation invariant, i.e., the attention score of $(\bq_i,\bk_j)$ is only related to their relative position offset\n\begin{equation}\n \bq_i\bk_j^\top=\left(\bx_i\bW^Q\right)\bR_{\Theta,\textcolor[rgb]{0,0,1}{j-i}}\left(\bx_j\bW^K\right)^\top.\n\end{equation}\nIn practice, the embedding matrix multiplication can be implemented by two element-wise multiplications for a lower memory footprint. RoPE takes the form of an absolute embedding but can capture relative positional relations. 
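A small numerical sketch of this translation-invariance property (illustrative only; the frequencies $\theta_j$ are assumed and the projections $\bW^Q,\bW^K$ are omitted for brevity):

```python
import numpy as np

def rope_matrix(t, thetas):
    """R_{Theta,t}: direct sum of 2-D clockwise rotations by angles t * theta_j."""
    d = 2 * len(thetas)
    R = np.zeros((d, d))
    for j, th in enumerate(thetas):
        c, s = np.cos(t * th), np.sin(t * th)
        R[2 * j:2 * j + 2, 2 * j:2 * j + 2] = [[c, s], [-s, c]]
    return R

rng = np.random.default_rng(0)
thetas = [1.0, 0.1]                    # assumed frequencies theta_j
x_i = rng.standard_normal(4)           # W^Q, W^K omitted for brevity
x_j = rng.standard_normal(4)

def score(i, j):
    q = x_i @ rope_matrix(i, thetas)
    k = x_j @ rope_matrix(j, thetas)
    return q @ k

s_a, s_b = score(3, 5), score(10, 12)  # same offset j - i = 2
```

Because rotations within each 2-D block compose additively, the score depends only on the offset between the two positions.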
This approach is compatible with linearized attention in Sec.~\\ref{sec:kernel}.", "id": "268772ad-4cd8-43e9-9608-b7ac7de2e35c", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "3f67f9f0-d57d-471a-bcb5-cbf64e5bcabc", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Other Module-level Modifications" ], [ "subsection", "Position Representations" ], [ "subsubsection", "Other Representations" ] ], "subsections": [], "title": "Other Representations" }, { "cite_extract_rate": 1, "cites": [ 1480, 1493, 1509, 1471 ], "content": "Instead of explicitly introducing additional positional encodings, propose to encode positional information in word embeddings, by generalizing embedding to continuous (complex-valued) functions over positions.\nR-Transformer~ model locality of sequential data with a local RNN. Specifically, inputs to each block of R-Transformer are first fed to a local RNN and then to multi-Head self-attention module. The RNN structure introduces ordering information and captures local dependencies as a complement to self-attention.\nConditional positional encoding (CPE)~ generate conditional position encodings at each layer for ViT with a 2-D convolution with zero-paddings. 
The intuition behind this approach is that convolutional networks can implicitly encode absolute positional information with zero-paddings~.", "id": "866b7a57-7c1b-41c7-8a8f-c03a0681cbeb", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "3f67f9f0-d57d-471a-bcb5-cbf64e5bcabc", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Other Module-level Modifications" ], [ "subsection", "Position Representations" ], [ "subsubsection", "Position Representations without Explicit Encoding" ] ], "subsections": [], "title": "Position Representations without Explicit Encoding" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 1510 ], "content": "It is worth noting that masked self-attention is not permutation equivariant~. Thus a model that exploits only the decoder of Transformer has the potential to sense positional information without incorporating explicit positional representation. This is confirmed by some empirical results on language modeling tasks~, where the authors find that removing position encodings even improves performance.", "id": "90d9b719-8d67-48fa-a44c-ce04b5d77df6", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "3f67f9f0-d57d-471a-bcb5-cbf64e5bcabc", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Other Module-level Modifications" ], [ "subsection", "Position Representations" ], [ "subsubsection", "Position Representation on Transformer Decoders" ] ], "subsections": [], "title": "Position Representation on Transformer Decoders" }, { "cite_extract_rate": 0, "cites": [], "content": "Layer Normalization (LN), along with residual connections, is considered a mechanism for stabilizing the training of deep networks (e.g., alleviating ill-posed gradients and model degeneration).\nThere are some studies dedicated to analyzing and improving the LN module.", "id": "ba9ac74a-b1ed-471e-a9a5-9097079a09db", "level": "subsection", "origin_cites_number": 0, "parent_id": 
"7a323139-7d7b-422d-a6c6-3617457ad2f6", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Other Module-level Modifications" ], [ "subsection", "Layer Normalization" ] ], "subsections": [ "9c98bd5d-1c97-4d09-acae-d8ba70e8c480", "0329ae93-9d4a-453b-86ad-768aed04ebd3", "29a8b774-a21f-46a2-8095-eac026b86ef8" ], "title": "Layer Normalization" }, { "cite_extract_rate": 1, "cites": [ 1496, 1511, 1448, 793, 1464, 38 ], "content": "\\ifx \\smv \\undefined\n\\begin{figure}[htbp]\n\\begin{center}\n\\subfigure[post-LN]{\n\\includegraphics[height=2.2in]{assets/layernorm/post_ln.pdf}\n}\\hspace{.6in}\n\\subfigure[pre-LN]{\n\\includegraphics[height=2.2in]{assets/layernorm/pre_ln.pdf}\n}\n\\caption{Comparison of Transformer Encoder with pre-LN and post-LN.}\\label{fig:pre_post_ln}\n\\end{center}\n\\end{figure}\n\\fi\nIn vanilla Transformer, the LN layer lies between the residual blocks, called post-LN~. Later Transformer implementations~ place the LN layer inside the residual connection before the attention or FFN, with an additional LN after the final layer to control the magnitude of final outputs, which is referred to as pre-LN\\footnote{To the best of our knowledge, this approach is adopted since \\texttt{v1.1.7} in the \\texttt{Tensor2Tensor} implementation~.}. The pre-LN has been adopted by numerous following research studies and implementations, e.g., .\nThe difference between pre-LN and post-LN is shown in Fig. 
\ref{fig:pre_post_ln}.\n theoretically investigate the gradients of Transformers and find that the gradients near the output layer are large at initialization in post-LN Transformers, which could be the reason why post-LN Transformers without learning rate warm-up~\footnote{Learning rate warm-up refers to starting optimization with an extremely small learning rate and then gradually increasing it to a pre-defined maximum value in a certain number of iterations.} lead to unstable training, whereas pre-LN Transformers do not suffer from the same problem. They thus deduce and empirically verify that the warm-up stage can be safely removed for pre-LN Transformers.\nAlthough post-LN often results in unstable training and divergence, it usually outperforms pre-LN variants after convergence~. Similar to , conduct theoretical and empirical analysis and find that post-LN encoders do not suffer from gradient imbalance. They thus conjecture that the gradient issue is not the direct cause of unstable post-LN Transformer training and further identify the \textit{amplification effect} in post-LN Transformers: at initialization, the heavier dependency on the residual branch leads to a larger output shift in post-LN Transformers, thus resulting in unstable training. In light of this finding, they introduce additional parameters to post-LN Transformers to control the residual dependencies of post-LN. These parameters are initialized according to activation variations of sample data so that the output shift of post-LN Transformers is not amplified. 
This approach ensures and boosts convergence of post-LN Transformers and reaches better performance than pre-LN Transformers.", "id": "9c98bd5d-1c97-4d09-acae-d8ba70e8c480", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "ba9ac74a-b1ed-471e-a9a5-9097079a09db", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Other Module-level Modifications" ], [ "subsection", "Layer Normalization" ], [ "subsubsection", "Placement of Layer Normalization" ] ], "subsections": [], "title": "Placement of Layer Normalization" }, { "cite_extract_rate": 1, "cites": [ 1466, 1467, 1491, 71 ], "content": " empirically observe that the learnable parameters in the LN module do not work in most experiments, and even increase the risk of overfitting. They further conclude from controlled experiments that the forward normalization is not the reason why LN works for Transformer. From analysis and experiments, it is concluded that the derivatives of the mean and variance re-center and re-scale the gradients and play a significant role in LN. They thus propose \textit{AdaNorm}, a normalization technique without learnable parameters\n\begin{align}\n \bz &= C(1-k\by)\odot\by,\\\n \by &=\frac{\bx-\mu}{\sigma},\n\end{align}\nwhere $C,k$ are hyperparameters and $\odot$ denotes element-wise multiplication. $\mu$ and $\sigma$ are the mean and standard deviation of input $\bx$, respectively.\n propose to replace the LN module with \textit{scaled $\ell_2$ normalization}. Given any $d$-dimensional input $\bx$, their approach projects it onto a $(d-1)$-sphere of learned radius $g$\n\begin{equation}\n \bz=g\frac{\bx}{\|\bx\|},\n\end{equation}\nwhere $g$ is a learnable scalar. 
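A minimal sketch of scaled $\ell_2$ normalization (variable names are assumptions):

```python
import numpy as np

def scale_norm(x, g):
    """Scaled l2 normalization: project x onto a (d-1)-sphere of learned
    radius g, replacing LN's mean/variance statistics with a single scalar."""
    return g * x / np.linalg.norm(x)

x = np.array([3.0, 4.0])
z = scale_norm(x, g=2.0)   # lies on the circle of radius 2
```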
It is more parameter efficient compared to normal LN and is shown to be effective in machine translation datasets, especially in low-resource settings.\n discuss why Batch Normalization (BN)~ performs poorly in Transformer for text data and conclude that BN's significant performance degradation stems from the instabilities associated with its batch statistics. They thus propose PowerNorm (PN), which has three modifications over BN: (1) it relaxes the zero-mean normalization; (2) it uses the quadratic mean\nof the signal, instead of the variance; (3) it uses running statistics for the\nquadratic mean, instead of using per-batch statistics. Specifically, for the $t$-th iteration, the PN computes the outputs as\n\begin{align}\n \bz^{(t)} &= \gamma\odot \by^{(t)}+\beta,\\\n \by^{(t)} &=\frac{\bx^{(t)}}{\psi^{(t-1)}},\\\n (\psi^{(t)})^2&=\alpha(\psi^{(t-1)})^2+(1-\alpha)\left(\frac{1}{|B|}\sum_{i=1}^{|B|}(\bx_i^{(t)})^2\right),\n\end{align}\nwhere $0<\alpha<1$ is the moving average coefficient and $\gamma, \beta$ are the learnable parameters as in the BN formulation.", "id": "0329ae93-9d4a-453b-86ad-768aed04ebd3", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "ba9ac74a-b1ed-471e-a9a5-9097079a09db", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Other Module-level Modifications" ], [ "subsection", "Layer Normalization" ], [ "subsubsection", "Substitutes of Layer Normalization" ] ], "subsections": [], "title": "Substitutes of Layer Normalization" }, { "cite_extract_rate": 1, "cites": [ 1495 ], "content": "Besides LN, there is another mechanism to construct deeper neural networks. ReZero~ replaces the LN module with a learnable residual connection. 
For each module $F(\\cdot)$, ReZero re-scales $F(\\cdot)$ in the residual formulation:\n\\begin{equation}\n \\bH'=\\bH+\\alpha\\cdot F(\\bH),\n\\end{equation}\nwhere $\\alpha$ is a learnable parameter with zero-initialization.\nReplacing LN in Transformer with ReZero mechanism is verified to induce better dynamic isometry for input signals and leads to faster convergence.", "id": "29a8b774-a21f-46a2-8095-eac026b86ef8", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "ba9ac74a-b1ed-471e-a9a5-9097079a09db", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Other Module-level Modifications" ], [ "subsection", "Layer Normalization" ], [ "subsubsection", "Normalization-free Transformer" ] ], "subsections": [], "title": "Normalization-free Transformer" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:ffn}\nDespite its simplicity, the position-wise feed-forward network (FFN) layers are important for a Transformer to achieve good performance. observe that simply stacking self-attention modules causes a \\textit{rank collapse} problem, leading to token-uniformity inductive bias, and that the feed-forward layer is one of the important building blocks that mitigate this issue.\nVarious works have explored modifications on the FFN module.", "id": "e915275c-0207-4658-9d41-438a8b5b3e51", "level": "subsection", "origin_cites_number": 1, "parent_id": "7a323139-7d7b-422d-a6c6-3617457ad2f6", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Other Module-level Modifications" ], [ "subsection", "Position-wise FFN" ] ], "subsections": [ "d96aa288-77f3-4bf6-931e-abd30aeafccb", "e9f737ba-3b14-4121-b536-765cb8c260d1", "d1496356-82a7-4f73-8a03-62f59057e605" ], "title": "Position-wise FFN" }, { "cite_extract_rate": 0.875, "cites": [ 1481, 1512, 7055, 1455, 7, 7365, 38 ], "content": "The vanilla Transformer~ adopts the Rectified Linear Units (ReLU) activation for non-linearity in between the two FFN layers. 
Over time, several studies have explored activations other than ReLU.\n try to replace ReLU in Transformer with the Swish function $f(x)=x\mathrm{sigmoid}(\beta x)$ and observe that it consistently improves performance on the WMT 2014 English$\rightarrow$German dataset.\nGPT~ replace ReLU with Gaussian Error Linear Unit (GELU)~ on language pre-training. It has become the default practice for many pre-trained language models ~.\n explore using Gated Linear Units (GLU)~ and its variants as a drop-in replacement for ReLU in FFN. Their pre-training experiments show that the GLU variants consistently improve vanilla Transformer with ReLU activation. Note that GLU introduces extra parameters and the experiments are conducted with the intermediate dimension of FFN reduced to match the parameter count with the baseline.", "id": "d96aa288-77f3-4bf6-931e-abd30aeafccb", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "e915275c-0207-4658-9d41-438a8b5b3e51", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Other Module-level Modifications" ], [ "subsection", "Position-wise FFN" ], [ "subsubsection", "Activation Function in FFN" ] ], "subsections": [], "title": "Activation Function in FFN" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 680, 1490, 1451, 8454, 707 ], "content": "Several works have focused on expanding FFNs to obtain a larger model capacity. The basic idea is to replace FFNs with similar structures that have many more parameters.\n replace some of the FFNs with the product-key memory layers. A product-key memory is composed of three components: a query network, a key selection module containing two\nsets of sub-keys, and a value lookup table. 
The model first projects an input to a latent space using the query network, and then compares the generated query to keys that are the Cartesian product of the two sets of sub-keys from the key selection module to get the $k$ nearest neighbors, and finally finds the corresponding values in a value lookup table using the $k$ nearest keys and aggregates them to produce the final output. This process resembles the attention mechanism, in that the generated query attends to a large number of global key-value pairs. They thus propose a multi-head mechanism for the key-product memory to further enlarge the capacity of this module. The experiments on large-scale language modeling suggest that this mechanism significantly improves performance with negligible computational overhead.\nSeveral studies exploit the idea of Mixture-of-Experts (MoE) to increase the capacity of FFNs. GShard uses sparsely-gated MoE layers to replace FFNs in Transformer. Each MoE layer consists of several FFNs (each called an expert) that have the same structure as position-wise FFNs in vanilla Transformer. The output of the layer is a weighted sum of the outputs of the FFNs, using gate values computed by a routing function $g(\cdot)$. They design a learnable routing function that assigns tokens to experts, with an auxiliary loss to satisfy balanced loads between experts and efficiency at the scale of length such that the experts can be distributed across multiple devices. For each forward pass of the MoE layer, only the experts with the top-$k$ gate values are activated.\nInstead of using $k$ experts for each forward pass, Switch Transformer~ proposes to route using only a single expert with the largest gate value, leading to a much smaller computational footprint. The authors also design an auxiliary loss to encourage load balance between experts. 
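Switch-style single-expert routing can be sketched as follows (a toy version only; each expert is reduced to a single ReLU projection and the auxiliary load-balancing loss is omitted):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def switch_layer(x, experts, Wg):
    """Top-1 routing: send the token to the single expert with the
    largest gate value and scale its output by that gate.
    (The auxiliary load-balancing loss is omitted in this sketch.)"""
    gates = softmax(x @ Wg)              # one gate value per expert
    e = int(np.argmax(gates))
    return gates[e] * experts[e](x), e

rng = np.random.default_rng(0)
d, n_experts = 4, 3
# Each expert stands in for a full position-wise FFN (simplified here).
experts = [(lambda W: (lambda h: np.maximum(h @ W, 0.0)))(
    rng.standard_normal((d, d))) for _ in range(n_experts)]
Wg = rng.standard_normal((d, n_experts))
y, chosen = switch_layer(rng.standard_normal(d), experts, Wg)
```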
It is reported to speed up pre-training by a large margin compared to the non-MoE counterpart while having a similar number of FLOPS.\n propose to replace top-$k$ routing with an expert prototyping strategy. Specifically, the proposed strategy splits experts into $k$ different groups and applies top-1 routing within each group. The outputs of prototype groups are combined linearly to form the final output of the MoE layer. This strategy is shown to improve the model quality while maintaining constant computational costs.\nAs opposed to using a learnable routing function for expert assignment, design \textit{hash layers} where tokens are hashed into a fixed number of buckets, each bucket corresponding to an expert. This approach requires no routing parameters or any auxiliary loss function, while showing competitive results with existing methods such as Switch Transformer~.", "id": "e9f737ba-3b14-4121-b536-765cb8c260d1", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "e915275c-0207-4658-9d41-438a8b5b3e51", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Other Module-level Modifications" ], [ "subsection", "Position-wise FFN" ], [ "subsubsection", "Adapting FFN for Larger Capacity" ] ], "subsections": [], "title": "Adapting FFN for Larger Capacity" }, { "cite_extract_rate": 1, "cites": [ 1463, 7053 ], "content": "Notably, one might argue that under some circumstances, FFN layers can be dropped completely, resulting in a simplified network.\n demonstrate that replacing the ReLU activation with Softmax and dropping the bias term in FFN effectively turns FFN into an attention module where position-wise inputs attend to a global key-value memory of $D_{\mathrm{ffn}}$ slots. They thus propose to drop the FFN module and add to the attention module a set of global key-value pairs, which are learnable parameters concatenated with the keys and values generated from the inputs. 
This approach simplifies the structure of the network with no loss of performance.\n empirically show that FFNs in the decoder of Transformer, despite their large number of parameters, are not efficient and can be removed safely with only slight or no loss of performance. This approach significantly boosts the training and inference speed.", "id": "d1496356-82a7-4f73-8a03-62f59057e605", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "e915275c-0207-4658-9d41-438a8b5b3e51", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Other Module-level Modifications" ], [ "subsection", "Position-wise FFN" ], [ "subsubsection", "Dropping FFN Layers" ] ], "subsections": [], "title": "Dropping FFN Layers" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:beyond}\nIn this section, we introduce the X-formers that modify the vanilla Transformer beyond modules.", "id": "04795308-eff0-4fbb-962f-a8077d36f894", "level": "section", "origin_cites_number": 0, "parent_id": "456ca844-322a-4fcf-90ea-531b14db9d9e", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Architecture-level Variants" ] ], "subsections": [ "f397ed71-a760-4154-8eaa-b1b20d5004b2", "736cefee-90d8-4e53-a49e-f031442dc1ba", "690a01cd-27c7-4e1a-baef-83bbdeff8220", "9b2cdbdc-bac6-48ec-96d0-b5576360be13", "be720c58-062b-41bd-b6e9-732058f0f671" ], "title": "Architecture-level Variants" }, { "cite_extract_rate": 0.5, "cites": [ 1468, 1488 ], "content": "Apart from the efforts made at the module level to alleviate computation overheads, there are several attempts to adapt Transformer to be lightweight by modifications at a higher level.\nSimilar to low-rank self-attention~ that decomposes attention into a locality-constrained attention and a low-rank global attention, Lite Transformer~ proposes to replace each attention module in Transformer with a two-branch structure, where one branch uses attention to capture long-range contexts while the other branch 
uses depth-wise convolution and linear layers to capture local dependencies. The architecture is lightweight both in terms of model size and computation, and is thus more suitable for mobile devices.\nFunnel Transformer~ utilizes a funnel-like encoder architecture where the length of the hidden sequence is gradually reduced using pooling along the sequence dimension, and then recovered using up-sampling. The architecture effectively reduces the FLOPs and memory compared to the vanilla Transformer encoder. Naturally, one can use this architecture to build a deeper or wider model using the same computation resources.\nDeLighT~ replaces the standard Transformer block with \\texttt{DeLighT} block, which consists of three sub-modules: (1) a ``expand-and-reduce'' \\texttt{DeLighT} transformation module to learn wider representations with low computation requirements; (2) a single-head self-attention to learn pair-wise interaction; (3) a lightweight ``reduce-and-expand'' FFN (as opposed to vanilla Transformer that first expands the dimension of hidden representations and then reduces them back to $D_m$). They also propose a block-wise scaling strategy that allows for shallower and narrower blocks near the input and wider and deeper blocks near the output. The induced network is much deeper than the vanilla Transformer but with fewer parameters and operations.", "id": "f397ed71-a760-4154-8eaa-b1b20d5004b2", "level": "subsection", "origin_cites_number": 4, "parent_id": "04795308-eff0-4fbb-962f-a8077d36f894", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Architecture-level Variants" ], [ "subsection", "Adapting Transformer to Be Lightweight" ] ], "subsections": [], "title": "Adapting Transformer to Be Lightweight" }, { "cite_extract_rate": 0.75, "cites": [ 1457, 1486, 1492 ], "content": "In vanilla Transformer, each block takes outputs from the previous block as inputs and outputs a sequence of hidden representations. 
One might be interested in creating more paths along which input signals can run through the networks. In Sec.~\ref{sec:prior_prev_module}, we introduced Realformer~ and Predictive Attention Transformer~, which reuse attention distributions from the previous block to guide the attention of the current block. This can be seen as creating a forward path between adjacent Transformer blocks.\nIn a deep Transformer encoder-decoder model, the cross-attention modules in the decoder only utilize the final outputs of the encoder, and therefore the error signal has to traverse along the depth of the encoder. This makes Transformer more susceptible to optimization issues (e.g., vanishing gradients). Transparent Attention~ uses a weighted sum of encoder representations at all encoder layers (including the embedding layer) in each cross-attention module. For the $j$-th decoder block, the cross-attention module is modified to attend to\n\begin{equation}\label{eq:transparent}\n \tilde{\bH}^{(j)}=\sum_{i=0}^{N}\frac{\exp(w_{ij})}{\sum_{k=0}^N \exp(w_{kj})}\bH^{(i)},\n\end{equation}\nwhere each $w_{ij}$ is a trainable parameter. This effectively shortens the path from each layer in the encoder to the error signal and thus eases the optimization of deeper Transformer models.\nAnother issue associated with vanilla Transformer is that each position can only attend to history representations from lower layers. 
Feedback Transformer~ proposes to add a feedback mechanism to Transformer decoder, where each position attends to a weighted sum of history representations from all layers\n\\begin{equation}\n \\tilde{\\bh}_i=\\sum_{l=0}^{N}\\frac{\\exp(w_{l})}{\\sum_{k=0}^N \\exp(w_{k})}\\bh_i^{(l)}.\n\\end{equation}", "id": "736cefee-90d8-4e53-a49e-f031442dc1ba", "level": "subsection", "origin_cites_number": 4, "parent_id": "04795308-eff0-4fbb-962f-a8077d36f894", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Architecture-level Variants" ], [ "subsection", "Strengthening Cross-Block Connectivity" ] ], "subsections": [], "title": "Strengthening Cross-Block Connectivity" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 683, 7273, 1485, 1449, 1478 ], "content": "Vanilla Transformer, like most neural models, utilizes a fixed (learned) computation procedure to process each input. An intriguing and promising modification is to make computation time conditioned on the inputs, i.e., to introduce Adaptive Computation Time (ACT)~ into Transformer models. Such modifications potentially give rise to the following advantages:\n\\begin{itemize}\n \\item Feature refinement for hard examples. For data that are hard to process, a shallow representation might not be adequate to fulfill the task at hand. It would be more ideal to apply more computations to acquire a deeper and more refined representation.\n \\item Efficiency for easy examples. When processing easy examples, a shallow representation might be enough for the task. 
In this case, it would be beneficial if the network could learn to extract features using reduced computation time.\n\\end{itemize}\n\\begin{figure}[htbp]\n\\begin{center}\n\\subfigure[dynamic halting]{\\label{fig:ut}\\hspace{2em}\n\\includegraphics[height=1.2in]{assets/ACT/ut.pdf}\\hspace{2em}\n}\n\\subfigure[conditional skipping]{\\label{fig:skip}\\hspace{2em}\n\\includegraphics[height=1.2in]{assets/ACT/skip.pdf}\\hspace{2em}\n}\n\\subfigure[early exit]{\\label{fig:exit}\\hspace{2em}\n\\includegraphics[height=1.2in]{assets/ACT/exit.pdf}\n}\n\\caption{Three typical ACT paradigms.}\\label{fig:act}\n\\end{center}\n\\end{figure}\nUniversal Transformer (UT)~ incorporates a recurrence-over-depth mechanism that iteratively refines representations for all symbols using a module that is shared over depth, as illustrated in Fig. \\ref{fig:ut}. It also adds a per-position dynamic halting mechanism that calculates a halting probability for each symbol at every time step. If a symbol's halting probability is greater than a predefined threshold, then the symbol's representation will remain unchanged for subsequent timesteps. The recurrence is stopped when all symbols halt or when a predefined maximum step is reached.\nConditional Computation Transformer (CCT)~ adds a gating module at each self-attention and feed-forward layer to decide whether to skip the current layer, as illustrated in Fig. \\ref{fig:skip}. The authors also introduce an auxiliary loss that encourages the model to adjust the gating modules to match the practical computation cost to the available computation budget.\nSimilar to the dynamic halting mechanism used in UT, there is a line of work dedicated to adapting the number of layers to each input in order to achieve a good speed-accuracy trade-off, which is called the \\textit{early exit} mechanism, as illustrated in Fig. \\ref{fig:exit}. A commonly used technique is to add an internal classifier at each layer and jointly train all classifiers. 
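The internal-classifier pattern can be sketched as a minimal skeleton (all callables are hypothetical placeholders; the entropy-based criterion shown is just one common choice of exit rule):

```python
import math

def early_exit_forward(x, layers, classifiers, should_exit):
    """Run the stack layer by layer; after each layer, ask that layer's
    internal classifier for a prediction and stop as soon as the
    exit criterion accepts it."""
    h, probs, depth = x, None, 0
    for depth, (layer, clf) in enumerate(zip(layers, classifiers), start=1):
        h = layer(h)
        probs = clf(h)
        if should_exit(probs):
            break  # confident enough: skip the remaining layers
    return probs, depth

def entropy_below(threshold):
    """Exit once the predictive distribution's entropy is low enough."""
    def criterion(probs):
        return -sum(p * math.log(p) for p in probs if p > 0) < threshold
    return criterion
```

At inference time the saved computation is the layers after the exit point; at training time all internal classifiers are optimized jointly so that early predictions are usable.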
The core of these methods is the criterion used to decide whether to exit at each layer. DeeBERT~ uses the entropy of the output probability distribution of the current layer to determine whether to exit. PABEE~ counts the number of times that the predictions remain unchanged to decide whether to exit. One approach designs a window-based uncertainty criterion to achieve token-level partial exiting for sequence labeling tasks. Another introduces a voting-based exiting strategy that considers at each layer the predictions of all the past internal classifiers to infer the correct label and to decide whether to exit.", "id": "690a01cd-27c7-4e1a-baef-83bbdeff8220", "level": "subsection", "origin_cites_number": 6, "parent_id": "04795308-eff0-4fbb-962f-a8077d36f894", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Architecture-level Variants" ], [ "subsection", "Adaptive Computation Time" ] ], "subsections": [], "title": "Adaptive Computation Time" }, { "cite_extract_rate": 0, "cites": [], "content": "The quadratic complexity of self-attention with respect to sequence length can significantly limit the performance\nof some downstream tasks. For example, language modeling usually needs long-range context. Apart from the techniques introduced in Sec.~\\ref{sec:attention}, another effective way of dealing with long sequences is to use a \\textit{divide-and-conquer} strategy, i.e., to decompose an input sequence into finer segments that can be efficiently processed by Transformer or Transformer modules. We identify two representative classes of methods, \\textit{recurrent} and \\textit{hierarchical} Transformers, as illustrated in Fig. \\ref{fig:divide_and_conquer}. 
These techniques can be understood as a wrapper for the Transformer model in which Transformer acts as an elementary component that is reused to process different input segments.\n\\begin{figure}[htbp]\n\\begin{center}\n\\subfigure[Recurrent Transformer\\label{fig:recurrence}]{\n\\includegraphics[width=0.4\\linewidth]{assets/divide_and_conquer/recurrence.pdf}\n}\\quad\n\\subfigure[Hierarchical Transformer\\label{fig:hierarchy}]{\n\\includegraphics[width=0.4\\linewidth]{assets/divide_and_conquer/hierarchy.pdf}\n}\n\\caption{Illustrations of recurrent and hierarchical Transformers.}\\label{fig:divide_and_conquer}\n\\end{center}\n\\end{figure}", "id": "9b2cdbdc-bac6-48ec-96d0-b5576360be13", "level": "subsection", "origin_cites_number": 0, "parent_id": "04795308-eff0-4fbb-962f-a8077d36f894", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Architecture-level Variants" ], [ "subsection", "Transformers with Divide-and-Conquer Strategies" ] ], "subsections": [ "88924fec-4dc9-4d83-b667-d6862e359751", "6244ba7c-c68b-4615-be13-f855c4dbca70" ], "title": "Transformers with Divide-and-Conquer Strategies" }, { "cite_extract_rate": 0.5, "cites": [ 1500, 7370, 1487 ], "content": "In recurrent Transformers, a cache memory is maintained to incorporate the history information. While processing a segment of text, the network reads from the cache as an additional input. After the processing is done, the network writes to the memory by simply copying hidden states or using more complex mechanisms. The abstract process is illustrated in Fig. \\ref{fig:recurrence}.\nTransformer-XL~ addresses the limitation of a fixed-length context by caching representations from the previous segment and reusing them as an extended context when the model processes the current segment. 
For the $l$-th layer and the $(\\tau+1)$-th segment, the input representation $\\bH_{\\tau+1}^{(l-1)}$ is concatenated with the representation $\\bH_\\tau^{(l-1)}$ from previous segment to produce the keys and values\n\\begin{align}\n \\tilde{\\bH}_{\\tau+1}^{(l)}&=[\\mathrm{SG}(\\bH_\\tau^{(l-1)})\\circ \\bH_{\\tau+1}^{(l-1)}]\\label{eq:xformerxl},\\\\\n \\bK_{\\tau+1}^{(l)}, \\bV_{\\tau+1}^{(l)}&=\\tilde{\\bH}_{\\tau+1}^{(l)}\\bW^K, \\tilde{\\bH}_{\\tau+1}^{(l)}\\bW^V,\n\\end{align}\nwhere $\\bH_\\tau^{(0)}$ is defined as the word embedding sequence, $\\mathrm{SG}(\\cdot)$ denotes stop-gradient operation and $[\\bX\\circ\\bY]$ denotes concatenating the two vector sequences along the time dimension. This approach extends the maximum context length by $L\\times N_{\\text{mem}}$ where $L$ is the number of layers and $N_{\\text{mem}}$ is the length of cached memory sequence.\nCompressive Transformer~ extends this idea further by extending the cache with two levels of memory. In Transformer-XL, the activations from the previous segment are cached as a memory that is used to augment the current segment, and activations from older segments are discarded. Compressive Transformer, on the other hand, applies a compression operation (e.g., Convolution, Pooling, etc.) on older activations and stores them in the compressed memory. In order to avoid the expensive backpropagating-through-time (BPTT) from training compression sub-network with gradients from the loss, they propose to use local loss functions where original memories are constructed from the compressed memories. This approach further extends the theoretical maximum history context length from $L\\times N_{\\text{mem}}$ of Transformer-XL to $L\\times (N_{\\text{mem}}+c\\times N_{\\text{cm}})$, where $c$ is the compression rate and $N_{\\text{cm}}$ is the length of compressed memory.\n Memformer~ extends the recurrence mechanism from decoder-only architecture to an encoder-decoder architecture. 
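The segment-level recurrence above (prepending cached, gradient-stopped states from the previous segment before forming keys and values) can be sketched in a few lines; the callables and list-of-vector shapes below are illustrative stand-ins for real projections and tensors:

```python
def xl_keys_values(cached, current, to_key, to_value):
    """Transformer-XL-style recurrence at one layer: cached states from
    the previous segment (treated as constants, i.e. no gradient flows
    into them) are prepended to the current segment along the time axis
    before projecting keys and values; queries would come from the
    current segment only."""
    extended = list(cached) + list(current)  # concat along time
    keys = [to_key(h) for h in extended]
    values = [to_value(h) for h in extended]
    return keys, values
```

Because each layer reads a cache built from the layer below, stacking $L$ such layers yields the $L\times N_{\text{mem}}$ effective context noted above.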
They introduce a memory cross attention in the encoder, similar to the cross attention in vanilla Transformer, that allows the Transformer encoder to attend to the memory. They also introduce a memory slot attention on top of the encoder output to explicitly write the memory for the next segment. To avoid BPTT over a long range of timesteps, they propose the Memory Replay Back-Propagation (MRBP) algorithm, which replays the memory at each timestep to accomplish gradient back-propagation over long unrolls.\nOther researchers propose a simple fine-tuning mechanism to add recurrence to a pre-trained language model (e.g., GPT-2~).\n\\ifx \\smv \\undefined\nThey first compress the representations produced by the $\\tau$-th segment into a single vector representation, using a weighted average of pooled representations from each layer $l\\in\\{1,\\cdots,L\\}$\n\\begin{equation}\n \\bz_\\tau=\\sum_{l=1}^Lw_l\\sum_{j=1}^{T_\\tau}\\bh_j^{(l)},\n\\end{equation}\nwhere $T_\\tau$ denotes the sequence length of the $\\tau$-th segment, $w_l=\\mathrm{softmax}(\\mathbf\\alpha)_l$ is the weight softmax-normalized from learnable parameters $\\mathbf\\alpha=[\\alpha_1,\\cdots,\\alpha_L]$. This compressed representation is then fed to a feed-forward network to produce the memory state $\\bh_{\\text{prev},\\tau}$ for the $\\tau$-th segment, which is then prepended to the key-value inputs of a specific attention layer. This approach effectively extends the context length of a pre-trained language model, without significant changes to the original model's architecture.\n\\fi\nERNIE-Doc~ proposes an enhanced recurrence mechanism based on the recurrence mechanism used in Transformer-XL, by replacing the memory with the history representations from the $l$-th layer.\n\\ifx \\smv \\undefined\n\\begin{align}\n \\tilde{\\bH}_{\\tau+1}^{(l)}&=[\\mathrm{SG}(\\textcolor[rgb]{0,0,1}{\\bH_\\tau^{(l)}})\\circ \\bH_{\\tau+1}^{(l-1)}],\n\\end{align}\nas opposed to using representations from the $(l-1)$-th layer in Eq. 
\\eqref{eq:xformerxl}. This modification essentially leads to a larger effective context length.\n\\fi", "id": "88924fec-4dc9-4d83-b667-d6862e359751", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "9b2cdbdc-bac6-48ec-96d0-b5576360be13", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Architecture-level Variants" ], [ "subsection", "Transformers with Divide-and-Conquer Strategies" ], [ "subsubsection", "Recurrent Transformers" ] ], "subsections": [], "title": "Recurrent Transformers" }, { "cite_extract_rate": 0, "cites": [], "content": "Hierarchical Transformer decomposes inputs hierarchically into elements of finer granularity. Low-level features are first fed to a Transformer encoder, producing output representations that are then aggregated (using pooling or other operations) to form a high-level feature, which is then processed by a high-level Transformer. This class of methods can be understood as a process of hierarchical abstraction. The overview of this approach is depicted in Fig. \\ref{fig:hierarchy}. 
The advantages of this approach are twofold: (1) Hierarchical modeling allows the model to handle long inputs with limited resources; (2) It has the potential to generate richer representations that are beneficial to tasks.", "id": "6244ba7c-c68b-4615-be13-f855c4dbca70", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "9b2cdbdc-bac6-48ec-96d0-b5576360be13", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Architecture-level Variants" ], [ "subsection", "Transformers with Divide-and-Conquer Strategies" ], [ "subsubsection", "Hierarchical Transformers" ] ], "subsections": [ "10bc35f7-356c-4cac-a874-ee859573c18e", "153b332f-732b-4172-8553-f1c9626b599f" ], "title": "Hierarchical Transformers" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1482, 1477 ], "content": "For tasks with inherently long input length, one can use hierarchical Transformers for effective modeling of long-range dependencies. For document-level machine translation tasks, one approach introduces dependencies on the previous sentences from both the source and target sides when translating a sentence. It uses an attention mechanism as the aggregation operation to summarize low-level information. For document summarization, HIBERT~ encodes a document of text by first learning sentence representations for all sentences and then using these sentence representations to encode document-level representations that are then used to generate the summary. The model uses the last hidden representation (corresponding to the \\texttt{EOS} token) as the representation for each sentence. Later work proposes a similar hierarchical Transformer for multi-document summarization, where the extracted low-level representations are aggregated using an attention layer with a global trainable query node and low-level representations as the source of key-value pairs. 
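The encode-pool-encode pattern shared by these hierarchical models can be sketched generically (all callables are illustrative placeholders, not any particular model's API):

```python
def hierarchical_encode(segments, low_encoder, pool, high_encoder):
    """Divide-and-conquer encoding: run a low-level encoder over each
    fine-grained segment, pool each encoded segment into a single
    vector, then let a high-level encoder model interactions across
    the pooled segment representations."""
    pooled = [pool(low_encoder(segment)) for segment in segments]
    return high_encoder(pooled)
```

Depending on the model, `pool` may take the mean, pick the final (\texttt{EOS}) state as in HIBERT, or compute an attention-weighted sum with a trainable query.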
Hi-Transformer~ first utilizes a sentence Transformer and a document Transformer to hierarchically learn document context-aware sentence representations. The document context-aware sentence representations are then fed to another sentence Transformer to further improve the sentence context modeling.", "id": "10bc35f7-356c-4cac-a874-ee859573c18e", "level": "paragraph", "origin_cites_number": 3, "parent_id": "6244ba7c-c68b-4615-be13-f855c4dbca70", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Architecture-level Variants" ], [ "subsection", "Transformers with Divide-and-Conquer Strategies" ], [ "subsubsection", "Hierarchical Transformers" ], [ "paragraph", "6.5.2.1 Hierarchical for long sequence inputs" ] ], "subsections": [], "title": "6.5.2.1 Hierarchical for long sequence inputs" }, { "cite_extract_rate": 1, "cites": [ 732, 7368, 7367 ], "content": "One might also be interested in using hierarchical models to acquire richer representations that are beneficial to the tasks at hand. For example, TENER~ uses a low-level Transformer encoder to encode character features, which is then concatenated with word embeddings as the inputs to the high-level Transformer encoder. This incorporates more features and alleviates the problems of data sparsity and out-of-vocabulary (OOV). Recently emerging Vision Transformer~ divides an input image into several patches that serve as the basic input elements of Transformer, which potentially loses intrinsic pixel-level information within patches. 
To address this issue, Transformer in Transformer (TNT)~ uses at each layer an inner Transformer block that transforms pixel representations and an outer Transformer block that takes fused vectors of patch representations and pixel representations as input.", "id": "153b332f-732b-4172-8553-f1c9626b599f", "level": "paragraph", "origin_cites_number": 3, "parent_id": "6244ba7c-c68b-4615-be13-f855c4dbca70", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Architecture-level Variants" ], [ "subsection", "Transformers with Divide-and-Conquer Strategies" ], [ "subsubsection", "Hierarchical Transformers" ], [ "paragraph", "6.5.2.2 Hierarchical for richer representations" ] ], "subsections": [], "title": "6.5.2.2 Hierarchical for richer representations" }, { "cite_extract_rate": 1, "cites": [ 8456, 1469, 544, 1475, 1483, 1461 ], "content": "Despite the success of the Transformer architecture, one might question whether the current Transformer architecture is optimal. Interestingly, several studies have explored alternative architectures for Transformer.\nOne study interprets Transformer as a numerical Ordinary Differential Equation (ODE) solver for a convection-diffusion equation in a multi-particle dynamic system and designs Macaron Transformer, which replaces each Transformer block with an \\textit{FFN-attention-FFN} variant.\nSandwich Transformer~ explores reorganizing attention modules and FFN modules such that attention modules are mainly located in lower layers and FFN modules in upper layers. The induced model improves perplexity on multiple language modeling benchmarks, without increasing parameters, memory or training time.\nMask Attention Network (MAN)~ prepends a dynamic mask attention module to the self-attention module in each Transformer block. The mask is conditioned on token representations, the relative distance between tokens and head indices. 
The proposed dynamic mask attention is shown to effectively model locality in text data and the induced model consistently outperforms the baseline model in machine translation and abstractive summarization.\nNotably, there is a line of work that uses Neural Architecture Search (NAS) to search for alternative Transformer architectures. The Evolved Transformer (ET)~ employs evolution-based architecture search with the standard Transformer architecture seeding the initial population. The searched model demonstrates consistent improvement over Transformer on several language tasks. As another representative work, DARTSformer applies differentiable architecture search (DARTS)~, combined with a multi-split reversible network and a backpropagation-with-reconstruction algorithm for memory efficiency. The resulting model consistently outperforms standard Transformer and compares favorably to larger ET models, with a significantly reduced search cost.", "id": "be720c58-062b-41bd-b6e9-732058f0f671", "level": "subsection", "origin_cites_number": 6, "parent_id": "04795308-eff0-4fbb-962f-a8077d36f894", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Architecture-level Variants" ], [ "subsection", "Exploring Alternative Architecture" ] ], "subsections": [], "title": "Exploring Alternative Architecture" }, { "cite_extract_rate": 0.7000000000000001, "cites": [ 826, 1445, 679, 707, 9, 1499, 7 ], "content": "\\label{sec:ptm}\nAs a key difference from convolutional networks and recurrent networks, which inherently incorporate the inductive bias of locality, Transformer does not make any assumption about how the data is structured. On the one hand, this effectively makes Transformer a very universal architecture that has the potential of capturing dependencies of different ranges. On the other hand, this makes Transformer prone to overfitting when the data is limited. 
One way to alleviate this issue is to introduce inductive bias into the model.\nRecent studies suggest that Transformer models that are pre-trained on large corpora can learn universal language representations that are beneficial for downstream tasks~. The models are pre-trained using various self-supervised objectives, e.g., predicting a masked word given its context. After pre-training a model, one can simply fine-tune it on downstream datasets, instead of training a model from scratch. To illustrate typical ways of using Transformers in pre-training, we identify some of the pre-trained Transformers and categorize them as follows.\n\\begin{itemize}\n \\item \\textit{Encoder only}. A line of work uses the Transformer encoder as its backbone architecture. BERT~ is a representative PTM that is typically used for natural language understanding tasks. It utilizes Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) as the self-supervised training objectives. RoBERTa~ further adapts the training of BERT and removes the NSP objective as it is found to hurt performance on downstream tasks.\n \\item \\textit{Decoder only}. Several studies focus on pre-training Transformer decoders on language modeling. For example, the Generative Pre-trained Transformer (GPT) series (i.e., GPT~, GPT-2~, and GPT-3~) is dedicated to scaling pre-trained Transformer decoders and has recently illustrated that a large-scale PTM can achieve impressive few-shot performance with the task and examples fed to the model as constructed prompts~.\n \\item \\textit{Encoder-Decoder}. There are also PTMs that adopt the Transformer encoder-decoder as the overall architecture. BART~ extends the denoising objective of BERT to the encoder-decoder architecture. The benefit of using an encoder-decoder architecture is that the induced model is equipped with the ability to perform both natural language understanding and generation. 
T5~ adopts a similar architecture and was one of the earliest studies to use task-specific text prefixes in downstream tasks.\n\\end{itemize}\nSome of the Transformer architecture variants can also be applied to Transformer-based PTMs. For instance, BigBird~, introduced in Sec.~\\ref{sec:sparseattn}, is an encoder-based PTM that uses compound position-based sparse attention to enable long sequence inputs. GPT-3~ uses alternating dense and locally banded sparse attention (which was also introduced in Sec.~\\ref{sec:sparseattn}) in self-attention modules. Switch Transformer~ is an encoder-based PTM that replaces FFN layers with mixture-of-experts layers and can increase the parameter count while keeping the FLOPs per example constant.", "id": "b7522af4-45ec-452c-b804-11b6882e6225", "level": "section", "origin_cites_number": 10, "parent_id": "456ca844-322a-4fcf-90ea-531b14db9d9e", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Pre-trained Transformers" ] ], "subsections": [], "title": "Pre-trained Transformers" }, { "cite_extract_rate": 0.7674418604651161, "cites": [ 7361, 1500, 7368, 7373, 1458, 1446, 7056, 7360, 7040, 7370, 1470, 768, 732, 38, 1272, 1514, 7339, 1447, 8459, 8458, 1502, 1469, 9, 1515, 1501, 1461, 1453, 1456, 1462, 1503, 1516, 1513 ], "content": "\\label{sec:app}\nTransformer was originally designed for machine translation but has been widely adopted in various fields besides NLP, including CV and audio processing, due to its flexible architecture.\n(1) \\textit{Natural Language Processing}. Transformer and its variants have been extensively explored and applied in NLP tasks, e.g., machine translation~, language modeling~ and named entity recognition~. Massive effort has been dedicated to pre-training Transformer models on large-scale text corpora, which we believe is one of the major reasons for Transformer's wide application in NLP.\n(2) \\textit{Computer Vision}. 
Transformers have also been adapted for various vision tasks, e.g., image classification~, object detection~, image generation~ and video processing~. Several surveys provide reviews of existing work on visual Transformers. We encourage readers to refer to these surveys to further understand the current research progress on Transformers in CV.\n(3) \\textit{Audio Applications}. Transformer can also be extended for audio-related applications, e.g., speech recognition~, speech synthesis~, speech enhancement~ and music generation~.\n(4) \\textit{Multimodal Applications}. Owing to its flexible architecture, Transformer has also been applied in various multimodal scenarios, e.g., visual question answering~, visual commonsense reasoning~, caption generation~, speech-to-text translation~ and text-to-image generation~.", "id": "b3d753a5-00c0-4eb4-b313-01e490dd9d3d", "level": "section", "origin_cites_number": 43, "parent_id": "456ca844-322a-4fcf-90ea-531b14db9d9e", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Applications of Transformer" ] ], "subsections": [], "title": "Applications of Transformer" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:discussion}\nIn this survey, we conduct a comprehensive overview of X-formers and propose a new taxonomy. Most of the existing works improve Transformer from different perspectives, such as efficiency, generalization, and applications. The improvements include incorporating structural priors, designing lightweight architectures, pre-training, and so on.\nAlthough X-formers have proven their power for various tasks, challenges still exist. Besides the current concerns (e.g. efficiency and generalization), further improvements of Transformer may lie in the following directions:\n(1) \\textit{Theoretical Analysis}. The architecture of Transformer has been demonstrated to be capable of supporting large-scale training datasets with enough parameters. 
Many works show that Transformer has a larger capacity than CNNs and RNNs and hence has the ability to handle a huge amount of training data.\nWhen Transformer is trained on sufficient data, it usually performs better than CNNs or RNNs.\nAn intuitive explanation is that Transformer has few prior assumptions on the data structure and therefore is more flexible than CNNs and RNNs. However, the theoretical reason is unclear, and a theoretical analysis of Transformer's capabilities is still needed.\n(2) \\textit{Better Global Interaction Mechanism beyond Attention}. A main advantage of Transformer is the use of the attention mechanism to model the global dependencies among nodes within input data. However, many studies have shown that full attention is unnecessary for most nodes. It is, to some degree, inefficient to indiscriminately calculate attention for all nodes. Therefore, there is still plenty of room for improvements in efficiently modeling global interactions. On the one hand, the self-attention module can be regarded as a fully-connected neural network with dynamic connection weights, which aggregates non-local information with dynamic routing. Therefore, other dynamic routing mechanisms are alternative approaches worth exploring. On the other hand, the global interaction can also be modeled by other types of neural networks, such as memory-enhanced models.\n(3) \\textit{Unified Framework for Multimodal Data}. In many application scenarios, integrating multimodal data is useful and necessary to boost the task performance. Moreover, general AI also needs the ability to capture the semantic relations across different modalities.\nSince Transformer achieves great success on text, image, video, and audio, we have a chance to build a unified framework and better capture the inherent connections among multimodal data. 
However, the design of the intra-modal and cross-modal attention remains to be improved.\nFinally, we hope this survey serves as a hands-on reference for better understanding the current research progress on Transformers and helps readers further improve Transformers for various applications.\n\\bibliographystyle{ACM-Reference-Format}\n\\bibliography{references}\n\\end{document}", "id": "ec81ede4-3b9c-44b8-96e2-94e0f2e4f760", "level": "section", "origin_cites_number": 0, "parent_id": "456ca844-322a-4fcf-90ea-531b14db9d9e", "prefix_titles": [ [ "title", "A Survey of Transformers" ], [ "section", "Conclusion and Future Directions" ] ], "subsections": [], "title": "Conclusion and Future Directions" } ]
86
[ 7360, 1447, 1445, 7052, 2401, 732, 1446, 38, 97, 57, 208, 553, 1474, 1493, 1457, 1482, 1460, 1491, 7040, 1484, 1494, 1464, 1488, 1470, 1452, 1466, 1490, 793, 1502, 7054, 1467, 1463, 8384, 1456, 1480, 794, 1475, 1465, 7361, 1500, 1459, 7053, 1458, 1479, 7365, 1496, 1133, 8454, 768, 7367, 7333, 1489, 1451, 1469, 9, 1476, 1453, 1495, 1487, 1462, 1478, 679, 1483, 1492, 7369, 7, 1473, 7370, 8456, 826, 7362, 7364, 7363, 1468, 1486, 7298, 1498, 1497, 1501, 7371, 8457, 7368, 1455, 707, 8455, 1471, 1481, 1182, 1272, 7339, 1477, 1449, 7273, 1461, 1454, 1448, 1485, 7366, 798, 1499, 1450, 1472, 4722, 134, 1503, 1504, 725, 1505, 7372, 306, 1507, 1506, 790, 1508, 1509, 1510, 1511, 71, 1512, 7055, 680, 683, 544, 7373, 7056, 1514, 8459, 8458, 1515, 1516, 1513 ]
1.148347
[ "Ana Paula Chaves", "Marco Aurelio Gerosa" ]
How should my chatbot interact? A survey on social characteristics in human-chatbot interaction design
2019
2019-04-04T18:43:31Z
cs.HC
Chatbots' growing popularity has brought new challenges to HCI, having changed the patterns of human interactions with computers. The increasing need to approximate conversational interaction styles raises expectations for chatbots to present social behaviors that are habitual in human-human communication. In this survey, we argue that chatbots should be enriched with social characteristics that cohere with users' expectations, ultimately avoiding frustration and dissatisfaction. We bring together the literature on disembodied, text-based chatbots to derive a conceptual model of social characteristics for chatbots. We analyzed 56 papers from various domains to understand how social characteristics can benefit human-chatbot interactions and identify the challenges and strategies to designing them. Additionally, we discussed how characteristics may influence one another. Our results provide relevant opportunities to both researchers and designers to advance human-chatbot interactions.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "48e1448c-68e1-4eb0-9088-8c4cfe65bb83", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "How should my chatbot interact? A survey on social characteristics in human-chatbot interaction design" ] ], "subsections": [ "3e165ffa-29cf-4605-ba88-0e6c84a411e5", "ad5096ab-28b5-4a2f-b84b-74bcbea02a02", "5950bb1f-b30c-410b-94c0-7eba5ce91d72", "616598c0-a6ef-4e0d-bae2-49c648e87606", "cefcc373-1755-478a-bdc0-b708ace17a7a", "e47b0d6a-5496-42d1-b93d-9f8d302b471a", "1430d8e4-a5c1-4f7d-ae08-3a9fcec59bb4" ], "title": "root" }, { "cite_extract_rate": 0.026315789473684, "cites": [ 6262 ], "content": "\\label{sec:introduction}\nChatbots are computer programs that interact with users in natural language . The origin of the chatbot concept dates back to 1950 . ELIZA and A.L.I.C.E. are examples of early chatbot technologies, where the main goal was to mimic human conversations. Over the years, the chatbot concept has evolved. Today, chatbots can have distinct and diverse characteristics, which has resulted in several synonyms, such as multimodal agents, chatterbots, and conversational interfaces. In this survey, we use the term ``\\textit{chatbot}'' to refer to \\textit{a disembodied conversational agent that holds a natural language conversation via text-based environment to engage the user in either a general-purpose or task-oriented conversation.}\nChatbots are changing the patterns of interactions between humans and computers . Many instant messenger tools, such as Skype, Facebook Messenger, and Telegram provide platforms to develop and deploy chatbots, which either engage with users in general conversations or help them solve domain specific tasks . As messaging tools increasingly become platforms, traditional websites and apps are providing space for this new form of human-computer interaction (HCI) . 
For example, in the 2018 F8 Conference, Facebook announced that it had 300K active chatbots on Facebook Messenger . The BotList \\footnote{https://botlist.co/} website indexes thousands of chatbots for education, entertainment, games, health, productivity, travel, fun, and several other categories. The growth of chatbot technology is changing how companies engage with their customers , students engage with their learning groups , and patients self-monitor the progress of their treatment , among many other applications.\nHowever, chatbots still fail to meet users' expectations . While many studies on chatbot design focus on improving chatbots' functional performance and accuracy (see, e.g., ), the literature has consistently suggested that chatbots' interactional goals should also include social capabilities . According to the Media Equation theory , people naturally respond to social situations when interacting with computers . As chatbots are designed to interact with users in a way that mimics person-to-person conversations, new challenges in HCI arise . Neururer and colleagues (\\citeyear{neururer2018perceptions}) state that making a conversational agent acceptable to users is primarily a social, not only technical, problem. Studies on chatbots have shown that people prefer agents who: conform to gender stereotypes associated with tasks ; self-disclose and show reciprocity when recommending ; and demonstrate a positive attitude and mood . When chatbots do not meet these expectations, the user may experience frustration and dissatisfaction . In contrast, designing overly humanized agents results in uncanny feelings and increased expectations , which also negatively impacts the interaction. 
The challenge remains as to what social characteristics are relevant for improving chatbots' communication and social skills and in which domains they have been shown to be beneficial.\nAlthough chatbots' social characteristics have been explored in the literature, this knowledge is spread across several domains in which chatbots have been studied, such as customer services, education, finances, and travel. In the HCI domain, some studies focus on investigating the social aspects of human-chatbot interactions (see, e.g., ). However, most studies focus on a single characteristic or a small set of characteristics (e.g., ); in other studies, the social characteristics emerged as secondary, exploratory results (e.g., ). It has become difficult to find evidence regarding what characteristics are important for designing a particular chatbot, and what research opportunities exist in the field.\nWhilst the literature has extensively reviewed the technical aspects of chatbot design (e.g., ), few studies bring together the social characteristics that influence the way users perceive and behave toward chatbots. To fill this gap, this survey compiles research initiatives for understanding the impact of chatbots' social characteristics on the interaction. We bring together literature that is spread across several research areas. From our analysis of 56 scientific studies, we derive a conceptual model of social characteristics, aiming to help researchers and designers identify the subset of characteristics that are relevant to their context and how adopting--or neglecting--a particular characteristic may influence the way humans perceive the chatbots. The research question that guided our investigation was: \\textbf{What chatbot social characteristics benefit human interaction and what are the challenges and strategies associated with them?}\nTo answer this question, we discuss why designing a chatbot with a particular characteristic can enrich the human-chatbot interaction. 
Our results provide insight into whether the characteristic is desirable for a particular chatbot, so that designers can make informed decisions by selecting the appropriate subset of characteristics, as well as to inspire further investigations by researchers. In addition, we discuss the influence of humanness and the conversational context on users' perceptions as well as the interrelationship among the identified characteristics. We state 22 propositions about how social characteristics may influence one another. In the next section, we present an overview of the studies included in this survey.", "id": "3e165ffa-29cf-4605-ba88-0e6c84a411e5", "level": "section", "origin_cites_number": 38, "parent_id": "48e1448c-68e1-4eb0-9088-8c4cfe65bb83", "prefix_titles": [ [ "title", "How should my chatbot interact? A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 6263, 6262 ], "content": "\\label{sec:overview}\nThe literature presents no coherent definition of chatbots; thus, to find relevant studies we used a search string that includes the synonyms \\textit{``chatbots, chatterbots, conversational agents, conversational interfaces, conversational systems, conversation systems, dialogue systems, digital assistants, intelligent assistants, conversational user interfaces, and conversational UI''}. We explicitly left out studies that relate to embodiment (\\textit{``ECA, multimodal, robots, eye-gaze, gesture''}), and speech input mode (\\textit{``speech-based, speech-recognition, voice-based''}). The search string did not include the term ``social bots'', because it refers to chatbots that produce content for social networks such as Twitter . 
We did not include ``personal assistants'' either, since this term consistently refers to commercially available, voice-based assistants such as Google Assistant, Amazon's Alexa, Apple's Siri, and Microsoft's Cortana.\nWe decided not to include terms that relate to social characteristics/traits, because most studies do not explicitly label their results as such. Additionally, to find studies from a variety of domains (not only computing), we used Google Scholar to search for papers. We started with a set of about one thousand papers. In the first round, we reduced this amount by about half based on the titles. We also removed papers for which full text was not available or written in a language other than English. We started the second round with 464 studies. In this round, we excluded short/position papers, theses, book chapters, and documents published in non-scientific venues. We also read the abstracts and removed all the papers that focused on technical aspects, rather than social ones. The third and last round started with 104 papers. After reading the full texts, we removed the papers that either did not highlight social characteristics or represented earlier, ongoing stages of another complete study by the same authors. The datasets with all analyzed studies (original list of results and exclusions per round) are available in .\nAfter filtering the search results, we had 56 remaining studies. Most of the selected studies are recent publications (less than 10 years old). The publication venues include the domains of human-computer interaction (25 papers), learning and education (8 papers), information and interactive systems (8 papers), virtual agents (5 papers), artificial intelligence (3 papers), and natural language processing (2 papers). We also found papers from health, literature \\& culture, computer systems, communication, and humanities (1 paper each). Most papers (59\\%) focus on task-oriented chatbots. 
General purpose chatbots account for 33\% of the surveyed studies. Most general purpose chatbots (16 out of 19) are designed to handle topic-unrestricted conversations. The most representative specific domain is education, with 9 papers, followed by customer services, with 5 papers. See the supplementary materials for the complete list of topics. \nMost surveyed studies adopted real chatbots (35 out of 56); among them, 18 studies analyze logs of conversations or users' perceptions of third-party chatbots such as Cleverbot (e.g., ), Talkbot (e.g., ), and Woebot . Nine studies introduce a self-developed architecture and/or dialogue management (e.g. ). In another nine studies, the chatbots were designed for research purposes using third-party platforms for chatbot development and deployment, such as IBM's Watson service (e.g., ) and Microsoft's Bot Framework as well as pattern-matching packages such as Artificial Intelligence Markup Language (AIML) . When a chatbot was simulated (11 studies), Wizard of Oz (WoZ) was the most used technique (9 studies). In a WoZ study, participants believe they are interacting with a chatbot when, in fact, a person (or ``wizard'') pretends to be the automated system . Eight studies do not address a particular chatbot. See the supplementary materials for details.\nWe analyzed the papers by searching for chatbot behavior or attributed characteristics that influence the way users perceive and behave toward the chatbot. Noticeably, the characteristics and categories are seldom explicitly highlighted in the literature, so the conceptual model was derived using a qualitative coding process inspired by methods such as Grounded Theory (open coding stage). For each study (\\textit{document}), we selected relevant statements from the paper (\\textit{quotes}) and labeled them as a characteristic (\\textit{code}). These steps were performed by one researcher (the first author). 
After coding all the studies, a second researcher (the second author) reviewed the produced set of characteristics and both researchers participated in discussion sessions to identify characteristics that could be merged, renamed, or removed. At the end, the characteristics were grouped into the categories, depending on whether the characteristic relates to the chatbot's virtual representation, conversational behavior, or social protocols. Finally, the quotes for each characteristic were labeled as references to benefits, challenges, or strategies.\nWe derived a total of 11 social characteristics, and grouped them into three categories, as depicted in Table \\ref{tab:conceptualmodel}: \\textbf{conversational intelligence}, \\textbf{social intelligence}, and \\textbf{personification}. The next section describes the derived conceptual model.\n\\begin{landscape}\n\\begin{table}[ht]\n\\scriptsize\n\\centering\n\\caption{Conceptual model of chatbots social characteristics. Eleven characteristics found in the literature were grouped into three main categories, depending on whether the social characteristic relates to the chatbot's conversational behavior, social protocols, or virtual representation. 
For each characteristic, we point out the benefits, challenges, and strategies of implementing the characteristic in chatbots as reported in the literature.}\n\\vspace{1mm}\n\\begin{tabular}{c|p{2.5cm}|p{6cm}|p{6cm}|p{6cm}|}\n\\cline{2-5}\n & \\textbf{\\shortstack{\\\\Social\\\\Characteristics}}& \\multicolumn{1}{c|}{\\textbf{Benefits}} & \\multicolumn{1}{c|}{\\textbf{Challenges}} & \\multicolumn{1}{c|}{\\textbf{Strategies}} \\\\ \\hline\n\\multicolumn{1}{|c|}{\\multirow{14}{*}{\\rotatebox{90}{\\textbf{\\shortstack{Conversational\\\\Intelligence}}}}} & \\multirow{6}{*}{\\textit{Proactivity}} & \\textbf{[B1]} to provide additional information & \\textbf{[C1]} timing and relevance & \\textbf{[S1]} to leverage conversational context \\\\\n\\multicolumn{1}{|l|}{} & & \\textbf{[B2]} to inspire users and to keep the conversation alive & \\textbf{[C2]} privacy & \\textbf{[S2]} to select a topic randomly \\\\\n\\multicolumn{1}{|l|}{} & & \\textbf{[B3]} to recover from a failure & \\textbf{[C3]} users' perception of being controlled & \\\\\n\\multicolumn{1}{|l|}{} & & \\textbf{[B4]} to improve conversation productivity & & \\\\\n\\multicolumn{1}{|l|}{} & & \\textbf{[B5]} to guide and engage users & & \\\\ \\cline{2-5}\n\\multicolumn{1}{|l|}{} & \\multirow{4}{*}{\\textit{Conscientiousness}} & \\textbf{[B1]} to keep the conversation on track & \\textbf{[C1]} to handle task complexity & \\textbf{[S1]} conversational flow \\\\\n\\multicolumn{1}{|l|}{} & & \\textbf{[B2]} to demonstrate understanding & \\textbf{[C2]} to harden the conversation & \\textbf{[S2]} visual elements \\\\\n\\multicolumn{1}{|l|}{} & & \\textbf{[B3]} to hold a continuous conversation & \\textbf{[C3]} to keep the user aware of the chatbot's context & \\textbf{[S3]} confirmation messages \\\\ \\cline{2-5}\n\\multicolumn{1}{|l|}{} & \\multirow{4}{*}{\\textit{Communicability}} & \\textbf{[B1]} to unveil functionalities & \\textbf{[C1]} to provide business integration & \\textbf{[S1]} to clarify the purpose of 
the chatbot \\\\ \n\\multicolumn{1}{|l|}{} & & \\textbf{[B2]} to manage the users' expectations & \\textbf{[C2]} to keep visual elements consistent with textual inputs & \\textbf{[S2]} to advertise the functionality and suggest the next step \\\\\n\\multicolumn{1}{|l|}{} & & & & \\textbf{[S3]} to provide a help functionality \\\\\\hline\n\\multicolumn{1}{|l|}{\\multirow{19}{*}{\\rotatebox{90}{\\textbf{\\shortstack{Social\\\\Intelligence}}}} }& \\multirow{6}{*}{\\textit{Damage control}} & \\textbf{[B1]} to appropriately respond to harassment & \\textbf{[C1]} to deal with unfriendly users & \\textbf{[S1]} emotional reactions \\\\\n\\multicolumn{1}{|l|}{} & & \\textbf{[B2]} to deal with testing & \\textbf{[C2]} to identify abusive utterances & \\textbf{[S2]} authoritative reactions \\\\\n\\multicolumn{1}{|l|}{} & & \\textbf{[B3]} to deal with lack of knowledge & \\textbf{[C3]} to fit the response to the context & \\textbf{[S3]} to ignore the user's utterance and change the topic \\\\\n\\multicolumn{1}{|l|}{} & & & & \\textbf{[S4]} \\textit{conscientiousness} and \\textit{communicability} \\\\\n\\multicolumn{1}{|l|}{} & & & & \\textbf{[S5]} to predict users' satisfaction \\\\\\cline{2-5}\n\\multicolumn{1}{|l|}{} & \\multirow{2}{*}{\\textit{Thoroughness}} & \\textbf{[B1]} to increase human-likeness & \\textbf{[C1]} to decide how much to say & Not identified \\\\\n\\multicolumn{1}{|l|}{} & & \\textbf{[B2]} to increase believability & \\textbf{[C2]} to be consistent & \\\\ \\cline{2-5}\n\\multicolumn{1}{|l|}{} & \\multirow{2}{*}{\\textit{Manners}} & \\textbf{[B1]} to increase human-likeness & \\textbf{[C1]} to deal with face-threatening acts & \\textbf{[S1]} to engage in small talk \\\\\n\\multicolumn{1}{|l|}{} & & & \\textbf{[C2]} to end a conversation gracefully & \\textbf{[S2]} to adhere turn-taking protocols \\\\ \\cline{2-5}\n\\multicolumn{1}{|l|}{} & \\multirow{3}{*}{\\textit{Moral agency}} & \\textbf{[B1]} to avoid stereotyping & \\textbf{[C1]} to avoid alienation 
& Not identified \\\\\n\\multicolumn{1}{|l|}{} & & \\textbf{[B2]} to enrich interpersonal relationships & \\textbf{[C2]} to build unbiased training data and algorithms & \\\\ \\cline{2-5}\n\\multicolumn{1}{|l|}{} & \\multirow{3}{*}{\\textit{\\shortstack{Emotional\\\\intelligence}}} & \\textbf{[B1]} to enrich interpersonal relationships & \\textbf{[C1]} to regulate affective reactions & \\textbf{[S1]} to use social-emotional utterances \\\\\n\\multicolumn{1}{|l|}{} & & \\textbf{[B2]} to increase engagement & & \\textbf{[S2]} to manifest \\textit{conscientiousness} \\\\\n\\multicolumn{1}{|l|}{} & & \\textbf{[B3]} to increase believability & & \\textbf{[S3]} reciprocity and self-disclosure \\\\ \\cline{2-5}\n\\multicolumn{1}{|l|}{} & \\multirow{3}{*}{\\textit{Personalization}} & \\textbf{[B1]} to enrich interpersonal relationships & \\textbf{[C1]} privacy & \\textbf{[S1]} to learn from and about the user \\\\\n\\multicolumn{1}{|l|}{} & & \\textbf{[B2]} to provide unique services & & \\textbf{[S2]} to provide customizable agents \\\\\n\\multicolumn{1}{|l|}{} & & \\textbf{[B3]} to reduce interactional breakdowns & & \\textbf{[S3]} visual elements \\\\ \\hline\n\\multicolumn{1}{|l|}{\\multirow{5}{*}{\\rotatebox{90}{\\textbf{\\shortstack{Personifi-\\\\cation}}}} }& \\multirow{3}{*}{\\textit{Identity}} & \\textbf{[B1]} to increase engagement & \\textbf{[C1]} to avoid negative stereotypes & \\textbf{[S1]} to design and elaborate on a persona \\\\\n\\multicolumn{1}{|l|}{} & & \\textbf{[B2]} to increase human-likeness & \\textbf{[C2]} to balance the \\textit{identity} and the technical capabilities & \\\\\\cline{2-5}\n\\multicolumn{1}{|l|}{} & \\multirow{2}{*}{\\textit{Personality}} & \\textbf{[B1]} to increase believability & \\textbf{[C1]} to adapt humor to the users' culture & \\textbf{[S1]} to use appropriate language \\\\\n\\multicolumn{1}{|l|}{} & & \\textbf{[B2]} to enrich interpersonal relationships & \\textbf{[C2]} to balance the \\textit{personality} traits & 
\\textbf{[S2]} to have a sense of humor \\\\\n\\hline\n\\end{tabular}\n\\label{tab:conceptualmodel}\n\\end{table}\n\\end{landscape}", "id": "ad5096ab-28b5-4a2f-b84b-74bcbea02a02", "level": "section", "origin_cites_number": 12, "parent_id": "48e1448c-68e1-4eb0-9088-8c4cfe65bb83", "prefix_titles": [ [ "title", "How should my chatbot interact? A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Overview of the surveyed literature" ] ], "subsections": [], "title": "Overview of the surveyed literature" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:socialcharacteristics}\nThis section describes the identified social characteristics grouped into categories. As Table \\ref{tab:conceptualmodel} depicts, the category \\textbf{conversational intelligence} includes characteristics that help the chatbot manage interactions. \\textbf{Social intelligence} focuses on habitual social protocols, while \\textbf{personification} refers to the chatbot's perceived identity and personality representations. We also grouped the social characteristics based on the domain in which they were investigated (Table \\ref{tab:domain-analysis}). In the following subsections, we describe the identified social characteristics as well as the domains of study. Then, we summarize the relationship among the characteristics in Section \\ref{sec:interrelationship}. For each category, a table with an overview of the surveyed studies is provided in the supplementary materials. The supplementary materials also include tables for each social characteristic, listing the studies associated with the domains of study and reported benefits, challenges, and strategies. 
Finally, the supplementary materials also highlight five constructs that can be used to assess whether social characteristics reach the intended design goals.", "id": "5950bb1f-b30c-410b-94c0-7eba5ce91d72", "level": "section", "origin_cites_number": 0, "parent_id": "48e1448c-68e1-4eb0-9088-8c4cfe65bb83", "prefix_titles": [ [ "title", "How should my chatbot interact? A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Chatbots Social Characteristics" ] ], "subsections": [ "9ad3c98e-3389-49e5-a57e-7e261302fc58", "6e94027f-a398-4b65-8b3b-d03e95d73fa6", "fb12c3f5-18c5-4fb9-8465-0d7950752ebc" ], "title": "Chatbots Social Characteristics" }, { "cite_extract_rate": 0.035087719298245, "cites": [ 6262, 7476 ], "content": "\\label{sec:conversationalintelligence}\n\\textbf{Conversational intelligence} enables the chatbot to actively participate in the conversation and to demonstrate awareness of the topic discussed, the evolving conversational context, and the flow of the dialogue. Therefore, \\textbf{conversational intelligence} refers to the ability of a chatbot to effectively converse beyond the technical capability of achieving a conversational goal . In this section, we discuss social characteristics related to \\textbf{conversational intelligence}, namely: \\textit{proactivity} (18 studies), \\textit{conscientiousness} (11 studies), and \\textit{communicability} (6 studies). Most of the studies rely on data from conversation logs, interviews, and questionnaires. The questionnaires are mostly Likert scales, and some of them include subjective feedback. Most studies analyzed the interaction with real chatbots, although Wizard of Oz (WoZ) settings are also common. Only two papers did not evaluate a particular type of interaction because they were based on a literature review and surveys with chatbot users in general . Five studies applied only quantitative methods, while seven focused on qualitative methods. 
The majority of the studies (15) applied mixed methods (both quantitative and qualitative). See the supplementary materials for details.\n\\begin{table}[!btp]\n\\scriptsize\n\\centering\n\\caption{Studied social characteristics per domain. Research in open domain chatbots has reported most of the social characteristics, except for \\textit{Communicability}. In the task-oriented domains, some characteristics are largely influenced by the topic (e.g., \\textit{Moral agency} and \\textit{Personality}), while others are more generally applied (e.g., \\textit{Manners} and \\textit{Damage control}).}\n\\begin{tabular}{p{1.6cm}p{5cm}p{6.5cm}}\n\\hline\n\\textbf{Domain} & \\textbf{Social Characteristics} & \\textbf{Studies} \\\\\n\\hline\n\\multirow{2}{*}{\\shortstack[l]{Open\\\\domain}} & \\textit{Proactivity, Conscientiousness, Damage control, Thoroughness, Manners, Moral agency, Emotional intelligence, Personalization, Identity, Personality} & \\\\ \n\\hline\nEthnography & \\textit{Proactivity, Conscientiousness, Thoroughness, Personalization} & \\\\ \n\\hline\n\\multirow{2}{*}{\\shortstack[l]{Task\\\\management}} & \\textit{Proactivity, Damage control, Manners, Personalization, Identity} & \\\\ \n\\hline\nTourism & \\textit{Proactivity, Thoroughness, Manners} & \\\\ \n\\hline\nBusiness & \\textit{Proactivity, Personalization} & \\\\ \n\\hline\nInformation search & \\textit{Proactivity, Damage control, Manners, Emotional intelligence} & \\\\ \n\\hline\nDecision-making & \\textit{Proactivity, Damage control, Manners} & \\\\\n\\hline\nHealth-care & \\textit{Proactivity, Emotional intelligence} & \\\\ \n\\hline\n\\shortstack{Credibility\\\\assessment} & \\textit{Proactivity, Conscientiousness} & \\\\ \n\\hline\nEducation & \\textit{Proactivity, Conscientiousness, Damage control, Thoroughness, Manners, Emotional intelligence, Identity, Personality} & \\\\ \n\\hline\n\\multirow{2}{*}{\\shortstack[l]{Financial\\\\services}} & \\textit{Conscientiousness, Communicability, Damage 
control, Thoroughness, Personalization, Identity} & \\\\ \n\\hline\n\\multirow{2}{*}{\\shortstack[l]{Customer\\\\services}} & \\textit{Conscientiousness, Communicability, Damage control, Thoroughness, Manners, Emotional intelligence, Personalization, Identity} & \\\\ \n\\hline\nE-commerce & \\textit{Conscientiousness, Manners} & \\\\ \n\\hline\nNews & \\textit{Communicability} & \\\\ \n\\hline\n\\multirow{2}{*}{\\shortstack[l]{Human\\\\resources}} & \\textit{Communicability, Damage control, Manners, Identity} & \\\\ \n\\hline\n\\multirow{2}{*}{\\shortstack[l]{Virtual\\\\assistant}} & \\textit{Thoroughness, Emotional intelligence, Personalization, Identity} & \\\\ \n\\hline\nGaming & \\textit{Thoroughness, Emotional intelligence, Personality} & \\\\ \n\\hline\nRace-talk & \\textit{Moral agency, Identity} & \\\\ \n\\hline\nHumorous talk & \\textit{Personality} & \\\\ \n\\hline\nNot defined & \\textit{Proactivity, Conscientiousness, Communicability, Damage control, Personalization, Identity, Personality} & \\\\\n\\hline\n\\end{tabular}\n\\label{tab:domain-analysis}\n\\end{table}", "id": "9ad3c98e-3389-49e5-a57e-7e261302fc58", "level": "subsection", "origin_cites_number": 57, "parent_id": "5950bb1f-b30c-410b-94c0-7eba5ce91d72", "prefix_titles": [ [ "title", "How should my chatbot interact? A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Chatbots Social Characteristics" ], [ "subsection", "Conversational Intelligence" ] ], "subsections": [ "30b70a69-50cb-4695-9975-c797ccdef4e2", "8d7c33ab-b84e-4247-bfcd-ac939fd2d82f", "d1a8a8a3-347c-4cde-8b5a-76ed8060b75f" ], "title": "Conversational Intelligence" }, { "cite_extract_rate": 0.09090909090909001, "cites": [ 6262, 7476 ], "content": "\\label{sec:proactivity}\n\\textit{Proactivity} is the capability of a system to \nautonomously act on the user's behalf and thereby reduce the amount of human effort to complete a task . 
In human-chatbot conversations, a proactive behavior enables a chatbot to share initiative with the user, contributing to the conversation in a more natural way . Chatbots may manifest \\textit{proactivity} when they initiate exchanges, suggest new topics, provide additional information, or formulate follow-up questions. In this survey, we found 18 papers that report either chatbots with proactive behavior or implications of manifesting a proactive behavior. \\textit{Proactivity} (also addressed as ``\\textit{intervention mode}'') was explicitly addressed in seven studies . In most of the studies, however, \\textit{proactivity} emerged either as an exploratory result, mostly from post-intervention interviews and users' feedback , or as a strategy to attend to domain-specific requirements (e.g., monitoring and guidance) . \\textit{Proactivity} was mostly investigated in open domain and education chatbots (four studies each). In open domain chatbots, \\textit{proactivity} helps improve engagement by introducing new topics to keep the conversation alive . Educational chatbots rely on \\textit{proactivity} to prompt students to think, share, and collaborate (e.g., ). \\textit{Proactivity} was also observed in eight other task-oriented domains, including task management and information searches, as can be observed in the supplementary materials.\nThe surveyed literature evidences several benefits of \\textit{proactivity} in chatbots: \n\\textbf{[B1] to provide additional, useful information:} literature reveals that \\textit{proactivity} in chatbots adds value to interactions . Investigating evaluation criteria for chatbots, asked users of a general purpose chatbot to rate the chatbots' naturalness and report in what areas they excel. Both statistical and qualitative results confirm that taking the lead and suggesting specialized information about the conversation theme correlate with chatbots' naturalness. 
corroborates this result; in post-intervention interviews, ten out of 14 users mentioned they preferred a chatbot that takes the lead and volunteers additional information, such as useful links and song playlists. In a WoZ study, investigated whether proactive interventions of a chatbot contribute to a collaborative search in a group chat. The chatbot either elicits or infers needed information from the collaborative chat and proactively intervenes in the conversation by sharing useful search results. The intervention modes were not significantly different from each other, but both intervention modes resulted in a statistically significant increase of enjoyment and decrease of effort when compared to the same task with no chatbot interventions. Moreover, in a post-intervention, open-ended question, 16 out of 98 participants self-reported positive perceptions about the provided additional information.\n\\textbf{[B2] to inspire users, and keep the conversation alive: }proactively suggesting and encouraging new topics have been shown useful to both inspire users and keep the conversation alive . Participants in the study conducted by self-reported that the chatbot's suggestions helped them to get started (7 mentions) and gave them ideas about search topics (4 mentions). After iteratively evaluating prototypes for a chatbot in an educational scenario, concluded that proactively initiating topics makes the dialogue more fun and reveals topics the chatbot can talk about. The refined prototype also proactively maintains the engagement by posing a follow-up when the student had not provided an answer to the question. hypothesized that including follow-up questions based on the content of previous messages would result in higher perceived partner engagement. The hypothesis was supported, with participants in the dynamic condition rating the chatbot as more engaging. 
In an ethnographic data collection , users included photos in their responses to add information about their experience; 85\\% of these photos were proactively prompted by the chatbot. This result shows that prompting the user for more information stimulates them to expand their entries. also observed that chatbots' proactive messages provided insights about the chatbots' knowledge, which potentially helped the conversation to continue. In this paper, we refer to the strategies to convey the chatbot's knowledge and capabilities as \\textit{communicability}, and we discuss it in Section \\ref{sec:communicability}.\n\\textbf{[B3] to recover the chatbot from a failure: }in and , \\textit{proactivity} is employed to naturally recover from a failure. In both studies, the approach was to introduce a new topic when the chatbot failed to understand the user or could not find an answer, preventing the chatbot from getting stuck and keeping the conversation alive. Additionally, in , the chatbot inserted new topics when users were either abusive or nonsensical. We refer to the strategies to handle failure and abusive behavior as \\textit{damage control}, and we discuss this characteristic in Section \\ref{sec:damagecontrol}.\n\\textbf{[B4] to improve conversation productivity: }in task-oriented interactions, such as searching or shopping, \\textit{proactivity} can improve the conversation's productivity . In interviews with first-time users of chatbots, found that chatbots should ask follow-up questions to resolve and maintain the context of the conversation and reduce the time searching before achieving the goal. found similar results for collaborative searches; 28 out of 98 participants self-reported that the chatbot's proactive interventions saved collaborators time.\n\\textbf{[B5] to guide and engage users: }in particular domains, \\textit{proactivity} helps chatbots to either guide users or establish and monitor users' goals. 
In , the chatbot assigns a goal to the user and proactively prompts motivational messages and reminders to keep the user engaged in the treatment. suggest that a decision-making coach chatbot needs to lead the interaction toward guiding the user to a decision. In ethnographic data collection , the chatbot prompts proactive messages that guide the users on what information they need to report. evaluates a chatbot that manages tasks in a workplace. Proactive messages are used to check whether the team member has concluded the tasks, and then report the outcome to the other stakeholders. In the educational context, \\textit{proactivity} is used to develop tutors that engage the students and facilitate learning. In , the tutor chatbot was designed to provide examples of how other students explained a topic. The network analysis of the learners' textual inputs shows that students used more key terms and provided more important messages when receiving feedback about other group members. In , , and the chatbots prompt utterances to encourage the students to reason about a topic. In all three studies, the chatbot condition provided better learning outcomes and increased students' engagement in the discussions.\nThe surveyed papers also highlight challenges in providing proactive interactions, such as timing and relevance, privacy, and the user's perception of being controlled.\n\\textbf{[C1] timing and relevance:} untimely and irrelevant proactive messages may compromise the success of the interaction. states that untimely turn-taking behavior was perceived as annoying, negatively affecting emotional engagement. and reported that \\textit{proactivity} can be disruptive. investigated \\textit{proactivity} in a workspace environment, hypothesizing that the perceived interruption of agent \\textit{proactivity} negatively affects users' opinions. 
The hypothesis was supported, and the authors found that what influenced the sense of interruption was the general aversion to unsolicited messages, regardless of whether they came from a chatbot or a colleague. showed that proactively introducing new topics resulted in a high number of ignored messages. The analysis of the conversation log revealed that either the new topics were not relevant, or it was not the proper time to start a new topic. also reported annoyance when a chatbot introduces repetitive topics.\n\\textbf{[C2] privacy:} in a work-related, group chat, observed privacy concerns regarding the chatbot ``reading'' the employees' conversations to proactively act. During a semi-structured interview, researchers presented a mockup of the chatbot to employees from two different enterprises and collected perceptions of usefulness, intrusiveness, and privacy. Employees reported feeling that the chatbot represented their supervisors' interests, which conveyed a sense of workplace surveillance. Privacy concerns may result in under-motivated users, discomfort about disclosing information, and lack of engagement .\n\\textbf{[C3] user's perception of being controlled:} \\textit{proactivity} can be annoying when the chatbot conveys the impression of trying to control the user. report that seven out of 13 participants experienced irritation with the chatbot; one of the most frequent reasons was the chatbot directing them to specific places. For the task management chatbot, reported to have adapted the follow-up questions approach to pose questions at a time negotiated with the user. In a previous implementation, the chatbot checked the status of the task twice a day, which participants considered too frequent and annoying.\nThe surveyed literature also reveals two strategies to provide \\textit{proactivity}: leveraging the conversational context and randomly selecting a topic. 
\\textbf{[S1] Leveraging the conversational context} is the most frequent strategy , in which proactive messages relate to contextual information provided in the conversation to increase the usefulness of interventions . argue that general purpose, emotionally aware chatbots should recognize users' interests and intents from the conversational context to proactively offer comfort and relevant services. In , the chatbot leverages conversational context to suggest new topics and propose to share documents or links to assist employees in a work-related group chat. The chatbots studied by introduce new topics based on keywords from previous utterances posted in the chat. According to , leveraging the context can also help smoothly guide the user to a target topic. Researchers in the chatbots domain can refer to the emerging literature on Ambient Intelligence (see, e.g., ~) to understand how contextual knowledge can be leveraged to convey proactivity. One surveyed paper proposes a chatbot that \\textbf{[S2] selects a topic randomly} but also observes that the lack of context is a major problem of this approach. Contextualized proactive interventions also suggest that the chatbot should be attentive to the conversation, which conveys \\textit{conscientiousness}, as discussed in the next section.", "id": "30b70a69-50cb-4695-9975-c797ccdef4e2", "level": "subsubsection", "origin_cites_number": 22, "parent_id": "9ad3c98e-3389-49e5-a57e-7e261302fc58", "prefix_titles": [ [ "title", "How should my chatbot interact? A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Chatbots Social Characteristics" ], [ "subsection", "Conversational Intelligence" ], [ "subsubsection", "Proactivity" ] ], "subsections": [], "title": "Proactivity" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:conscientiousness}\n\\textit{Conscientiousness} is a chatbot's capacity to demonstrate attentiveness to the conversation at hand . 
It enables a chatbot to follow the conversational flow, show understanding of the context, and interpret each utterance as a meaningful part of the whole conversation . In this survey, we found 11 papers that reported findings related to \\textit{conscientiousness} for chatbot design. Four studies explicitly investigated influences of \\textit{conscientiousness} for chatbots . In the remaining studies, \\textit{conscientiousness} emerged in exploratory findings. In , \\textit{conscientiousness} emerged from the analysis of conversational logs, while elicited \\textit{conscientiousness} aspects as a requirement for chatbot design when surveying the literature on customer service chatbots. In the remaining studies , \\textit{conscientiousness} issues were self-reported by the users in post-intervention interviews and subjective feedback to open-ended questions. \\textit{Conscientiousness} was investigated in three studies from the education domain, where the tutor chatbot is expected to be attentive and relevant as well as to control the conversation flow to keep the students focused on the topic (see e.g., ). \\textit{Conscientiousness} was also investigated in seven other domains, including open domain interactions and customer and financial services. See the supplementary materials for a complete list of domains of studies that report \\textit{conscientiousness} as a social characteristic for chatbots.\nThe surveyed literature evidenced benefits of designing conscientious chatbots:\n\\textbf{[B1] to provide meaningful answers:} some chatbots use simplistic approaches, like pattern matching rules based on keywords or template phrases applied in the last user utterance, to find the most appropriate response . However, as the chatbot does not interpret the meaning and the intent of users' utterances, the best-selected response may still sound irrelevant to the conversation . 
As shown by , when a chatbot does not interpret the meaning of users' utterances, users show frustration and the chatbot's credibility is compromised. This argument is supported by when studying chatbots to facilitate collaborative learning. The authors proposed a chatbot that promotes Academically Productive Talk moves. Exploratory results show that the chatbot performed inappropriate interventions, which was perceived as a lack of attention to the conversation. In , participants complained that some chatbots seemed ``\\textit{completely scripted,}'' ignoring user's inputs that did not fall into the script. In this case, users needed to adapt their inputs to match the chatbot script to be understood, which resulted in dissatisfaction. Besides avoiding frustration, showed that \\textit{conscientiousness} also influences chatbots' perceived humanness and social presence. The authors invited participants to interact with a chatbot that shows an image and asks the users to describe it. To perform the task, the chatbot could either ask the same generic follow-up questions each time (nonrelevant condition) or respond with a follow-up question related to the last participant's input (relevant condition), demonstrating attention to the information provided by the user. Statistical analysis of survey rates supported that the relevant condition increased chatbots' perceived humanness and social presence. In , to motivate the students to communicate in a second language, the proposed chatbot interprets users' input to detect breakdowns and react using an appropriate communication strategy. In interviews, participants reported ``\\textit{appreciating the help they got from the chatbot to understand and express what they have got to say}'' . 
This study was later extended to the context of ECAs (see for details).
\textbf{[B2] to hold a continuous conversation: }a conversation with a chatbot should maintain a ``\textit{sense of continuity over time}'' to demonstrate that the chatbot is exerting effort to track the conversation. To do so, it is essential to maintain the topic. When evaluating the naturalness of a chatbot, found that maintaining a theme is convincing, while failure to do so is unconvincing. Furthermore, based on the literature on customer support chatbots, argue that comfortably conversing on any topic related to the service offering is a requirement for task-oriented chatbots. reviewed popular chatbots for practicing second languages. The author showed that most chatbots in this field cannot hold continuous conversations, since they are developed to answer the user's last input. Therefore, they did not have a sense of topic, which resulted in inappropriate responses. While the chatbots could change the topic, they could not sustain it afterward. Showing \textit{conscientiousness} also requires the chatbot to understand and track the context, which is particularly important in task-oriented scenarios. In , first-time users stressed positive experiences with chatbots that retained information from previous turns. Two participants also expected the chatbots to retain this context across sessions, thus reducing the need for the user's additional input per interaction. Keeping the context across sessions was highlighted as a strategy to convey \textit{personalization} and empathy (see Sections \ref{sec:personalization} and \ref{sec:emotionalintelligence}).
show that productivity is the key motivating factor for using chatbots (68\% of the participants mentioned it as the main reason). First-time users in self-reported that interacting with chatbots should be more productive than using websites, phone apps, and search engines. In this sense, compared the user experience when interacting with a chatbot for solving either simple or complex tasks in a financial context. The authors found that, for complex tasks, to keep the conversation on track, the user must be aware of the next steps or why something is happening. In the educational context, proposed a dialogue management approach based on communication strategies to enrich a chatbot with the capability to express its meaning when faced with challenges. Statistical results show that the communication strategies, combined with affective backchannel (which is detailed in Section \ref{sec:emotionalintelligence}), are effective in motivating students to communicate and maintain the task flow. Thirty-two participants out of 40 reported that they preferred to interact with a chatbot with these characteristics. Noticeably, the chatbot's attentiveness to the interactional goal may not be evident to the user if the chatbot passively waits for the user to control the interaction. Thus, \textit{conscientiousness} relates to proactive ability, as discussed in Section \ref{sec:proactivity}. 
Nevertheless, challenges in designing conscientious chatbots are also evident in the literature:
\textbf{[C1] to handle task complexity: }as the complexity of tasks increases, more turns are required to achieve a goal; hence, more mistakes may be made. This argument was supported by both and , where the complexity of the task compromised the experience and satisfaction in using the chatbot. also highlights that complex tasks require more effort to correct potential mistakes.
Therefore, it is an open challenge to design flexible workflows, where the chatbot recovers from failures and keeps the interaction moving productively toward the goal, despite potential misunderstandings . \nRecovering from failure \nis discussed in Section \\ref{sec:damagecontrol}.\n\\textbf{[C2] to harden the conversation:} aiming to ensure the conversational structure--and to hide natural language limitations--chatbots are designed to restrict free-text inputs from the user . However, limiting the choices of interaction may convey a lack of attention to the users' inputs. In , one participant mentioned the feeling of ``\\textit{going through a form or a fixed menu.}'' According to , participants consider the chatbot's understanding of free-text input as a criterion to determine whether it can be considered a chatbot, since chatbots are supposed to chat. In the ethnographic data collection study, reported that eight out of ten participants described the interaction using pre-set responses as too restrictive, although they fulfilled the purpose of nudging participants to report their activities. Thus, the challenge lies in how to leverage the benefits of suggesting predefined inputs without limiting conversational capabilities.\n\\textbf{[C3] to keep the user aware of the chatbot's context:} a chatbot should provide a way to inform the user of the current context, especially for complex tasks. According to , context can be inferred from explicit user input, or assumed based on data from previous interactions. In both cases, the user and chatbot should be on the same page about the chatbot's contextual state , providing the users the opportunity to clarify possible misunderstandings . 
highlighted that participants reported negative experiences when finding ``mismatch[es] between [a] chatbot's real context and their assumptions of the chatbot context.'' We identified four strategies used to provide understanding to a chatbot:
\textbf{[S1] conversation workflow: }designing a conversational blueprint helps to conduct the conversation strictly and productively to the goal . However, argue that the workflow should be flexible enough to handle both multi-turn and one-turn question-answer interactions; it also should be unambiguous, such that users can efficiently achieve their goals. In addition, discuss that the workflow should make it easy to fix mistakes; otherwise, the users need to restart the workflow, which leads to frustration. In , the conversation workflow included communicative strategies to detect a learner's breakdowns and pitfalls. In that study, when the student does not respond, the chatbot uses a comprehension-check question to detect whether the student understood what was said. Then, it reacts to the user's input by adopting one of the proposed communication strategies (e.g., asking for repetition or simplifying the previous sentence). The conversation workflow could also allow the chatbot to be proactive. For example, participants in suggested that proactive follow-up questions would anticipate the resolution of the context, reducing the effort required from the user to achieve the goal.
\textbf{[S2] visual elements: }user-interface resources--such as quick replies, cards, and carousels--are used to structure the conversation and reduce issues regarding understanding . Using these resources, the chatbot shows the next possible utterances and conveys the conversational workflow in a step-by-step manner . Visual elements are also used to show the user what the chatbot can (or cannot) do.
This is another conversational characteristic, which will be discussed in Section \ref{sec:communicability}.
\textbf{[S3] context window:} to keep the user aware of the chatbot's current context, developed a chatbot for shopping that shows a context window on the side of the conversation. In this window, the user can click on specific attributes and change them to fix inconsistencies. A survey showed that the chatbot outperformed a default chatbot (without the context window) for mental demand and effort constructs. However, when the chatbots are built into third-party apps (e.g., Facebook Messenger), an extra window may not be possible.
\textbf{[S4] confirmation messages: }a conversation workflow may include confirmation messages to convey the chatbots' context to the user . In , when trying to block a stolen credit card, a confirmation message is used to verify the given personal data. In , confirmation messages are used as a communicative strategy to check whether the system's understanding about a particular utterance matches what the user actually meant. Balancing the number of confirmation messages (see ) and the right moment to introduce them into the conversation flow is still under-investigated.
The surveyed literature supports that failing to demonstrate understanding of the users' individual utterances, the conversational context, and the interactional goals results in frustration and loss of credibility. However, most of the results are exploratory; few studies investigate the extent to which the provided strategies influence users' behaviors and perceptions. In the field of human-human communication, the cooperative principle has been extensively applied to understand conversational partners' expectations during a conversation. Additionally, recent studies have shown that this principle influences chatbots' perceived humanness .
Particularly, the maxims of relation (the ability to be relevant) and manner (the ability to be clear and orderly) can contribute to the design of \textit{conscientious} chatbots. In addition, \textit{conscientiousness} is by itself a \textit{personality} trait; the more \textit{conscientiousness} a chatbot manifests, the more it can be perceived as attentive, organized, and efficient. The relationship between \textit{conscientiousness} and \textit{personality} is highlighted in Section \ref{sec:interrelationship}.
\label{sec:communicability}
Interactive software is communicative by its nature, since users achieve their goals by exchanging messages with the system . In this context, \textit{communicability} is defined as the capacity of a software to convey to users its underlying design intent and interactive principles . Providing \textit{communicability} helps users to interpret the codes used by designers to convey the interactional possibilities embedded in the software , which improves system learnability . In the chatbot context, \textit{communicability} is, therefore, the capability of a chatbot to convey its features to users . The problem around chatbots' \textit{communicability} lies in the nature of the interface: instead of buttons, menus, and links, chatbots unveil their capabilities through conversational turns, one sentence at a time , bringing new challenges to the system learnability field.
The lack of \\textit{communicability} may lead users to give up on using the chatbot when they cannot understand the available functionalities and how to use them .\nIn this survey, we found six papers that describe \\textit{communicability}, although investigating \\textit{communicability} is the main purpose of only one . Conversational logs revealed \\textit{communicability} needs in three studies ; in the other three studies \\textit{communicability} issues were self-reported by the users in post-intervention interviews and subjective feedback. \\textit{Communicability} was investigated in the customer services domain (two studies), where chatbots guide customers through available functionalities . The remaining task-oriented domains in which \\textit{communicability} was investigated are financial services, news, and human resources. We did not identify studies on open domain chatbots that report on \\textit{communicability}, since in open domain interactions users are free to talk about varying topics and guidance is less a concern.\nThe surveyed literature reports two main benefits of \\textit{communicability} for chatbots:\n\\textbf{[B1] to unveil functionalities: }while interacting with chatbots, users may not know that a desired functionality is available or how to use it . Most participants in 's study mentioned that they did not understand the functionalities of at least one of the chatbots and none of them mentioned searching for the functionalities in other sources (e.g., Google search or the chatbot website) rather than exploring options during the interaction. In a study about playful interactions in a work environment, observed that 22\\% of the participants explicitly asked the chatbot about its capabilities (e.g., \\textit{``what can you do?''}), and 1.8\\% of all the users' messages were ability-check questions. 
In a study about hotel chatbots, verified that 63\\% of the conversations were initiated by clicking an option displayed in the chatbot welcome message. A semiotic inspection of news-related chatbots evidenced that \\textit{communicability} strategies are effective at providing clues about the chatbot's features and ideas about what to do and how.\n\\textbf{[B2] to manage users' expectations:} observed that when first-time users do not understand chatbots' capabilities and limitations, they have high expectations and, consequently, end up more frustrated when the chatbots fail. Some participants \nblamed themselves for not knowing how to communicate and gave up. \nIn , quantitative results \nevidenced that ability-check questions can be considered signals of users struggling with functional affordances. Users posed ability-check questions after encountering errors as a means of establishing a common ground between the chatbot's capabilities and their own expectations. According to the authors, ability-check questions helped users to understand the system and reduce uncertainty . In , users also demonstrated the importance of understanding chatbots' capabilities in advance. Since the tasks related to financial support, users expected the chatbot to validate the personal data provided and to provide feedback after completing the task (e.g., explaining how long it would take for the credit card to be blocked). Therefore, \\textit{communicability} helps users gain a sense of which type of messages or functionalities a chatbot can handle.\nThe surveyed literature also highlights two challenges of providing \\textit{communicability}:\n\\textbf{[C1] to provide business integration: }communicating chatbots' functionalities should be performed as much as possible within the chat interface . Chatbots often act as an intermediary between users and services. 
In this case, to overcome technical challenges, chatbots answer users' inputs with links to external sources, where the request will be addressed. First-time users expressed dissatisfaction with this strategy in . Six participants complained that the chatbot redirected them to external websites. According to , business integration is a requirement for designing chatbots, so that the chatbot can solve the users' requests without transferring the interaction to another user interface.
\textbf{[C2] to keep visual elements consistent with textual inputs:} in the semiotic engineering evaluation, observed that some chatbots responded differently depending on whether the user accessed a visual element in the user-interface or typed the desired functionality in the text-input box, even if both input modes result in the same utterance. This inconsistency leaves users feeling that they have misinterpreted the affordances, which has a negative impact on the system's learnability.
As an outcome of the semiotic inspection process, present a set of strategies to provide \textit{communicability}. Some of them are also emphasized in other studies, as follows:
\textbf{[S1] to clarify the purpose of the chatbot:} First-time users in highlighted that a clarification about the chatbots' purpose should be placed in the introductory message. drew a similar inference from the literature on customer services chatbots. The authors argue that providing an opening message with insights into the chatbots' capabilities is a requirement for chatbot design. In addition, a chatbot could give a short tour through the main functionalities at the beginning of the first sessions .
\textbf{[S2] to advertise the functionality and suggest the next step: }when the chatbot is not able to answer the user, or when it notices that the user is silent, it may suggest available features to stimulate the user to engage .
In , six participants mentioned that they appreciated when the chatbot suggested responses, for example by saying \textit{``try a few of these commands: ...''} . shows that chatbots use visual elements, such as cards, carousels, and menus (persistent or not) to show contextualized clues about the next answer, which both fulfills the \textit{communicability} purpose and spares users from having to type.
\textbf{[S3] to provide a help functionality: }chatbots should recognize a \textit{``help''} input from the user, so that they can provide instructions on how to proceed . reported that users highlighted this functionality as useful for the reviewed chatbots. Also, results from show that chatbots should be able to answer ability-check questions (e.g., ``\textit{what can you do?}'' or ``\textit{can you do [functionality]?}'').
The literature states the importance of communicating chatbots' functionality to the success of the interaction. Failing to provide \textit{communicability} leads users to frustration, and they often give up when they do not know how to proceed. The literature on interactive systems has highlighted system learnability as the most fundamental component for usability , and an easy-to-learn system should lead the user to perform well, even during their initial interactions. Thus, researchers in the chatbots domain can leverage the vast literature on system learnability to identify metrics and evaluation methodologies, as well as propose new forms of \textit{communicability} strategies that reduce the learnability issues in chatbot interactions. \textit{Communicability} may also be used as a strategy to avoid mistakes (\textit{damage control}), which will be discussed in Section \ref{sec:damagecontrol}.
\vspace{-1.5mm}
\begin{framed} \small
\vspace{-2mm}
In summary, the \textbf{conversational intelligence }category includes characteristics that help a chatbot to perform a proactive, attentive, and informative role in the interaction.
Acknowledging that functional aspects play a crucial role in the ability of a chatbot to converse intelligently, the survey focuses on the characteristics that go beyond the functional aspects, looking through the lens of social perceptions. The highlighted benefits relate to how a chatbot manages the conversation to make it productive, interesting, and neat. To achieve that, designers and researchers should attend to the timing and relevance of provided information, privacy, interactional flexibility, and consistency.
\vspace{-2mm}
\end{framed}
\vspace{-2mm}
\label{sec:socialintelligence}
\textbf{Social Intelligence} refers to the ability of an individual to produce adequate social behavior for the purpose of achieving desired goals . In the HCI domain, the Media Equation theory posits that people react to computers as social actors. Hence, when developing chatbots, it is necessary to account for the socially acceptable protocols for conversational interactions . Chatbots should be able to respond to social cues during the conversation, accept differences, and manage conflicts as well as be empathic and demonstrate caring , which ultimately increases chatbots' authenticity .
In this section, we discuss the social characteristics related to \textbf{social intelligence}, namely: \textit{damage control} (12 papers), \textit{thoroughness} (13 papers), \textit{manners} (10 papers), \textit{moral agency} (6 papers), \textit{emotional intelligence} (14 papers), and \textit{personalization} (11 papers). Although the focus of the investigations is diverse, we found more studies where the focus of the investigation relates to a specific social characteristic, particularly \textit{moral agency} and \textit{emotional intelligence}, when compared to the \textbf{conversational intelligence} category. Regarding the adopted research methods, 12 studies applied qualitative methods and 10 focus on quantitative methods. Sixteen studies reported both qualitative and quantitative results.
\label{sec:damagecontrol}
\textit{Damage control} is the ability of a chatbot to deal with either conflict or failure situations. Although the Media Equation theory argues that humans socially respond to computers as they respond to other people , the literature has shown that interactions with conversational agents are not quite equal to human-human interactions .
When talking to a chatbot, humans are more likely to harass , test the agent's capabilities and knowledge , and feel disappointed with mistakes . When a chatbot does not respond appropriately, it may encourage the abusive behavior or disappoint the user , which ultimately leads the conversation to fail . Thus, it is necessary to enrich chatbots with the ability to recover from failures and handle inappropriate talk in a socially acceptable manner .
In this survey, we found 12 studies that discuss \textit{damage control} as a relevant characteristic for chatbots, two of which focus on conflict situations, such as testing and flaming . In the remaining studies , needs for \textit{damage control} emerged from the analysis of conversational logs and users' feedback. \textit{Damage control} was mostly investigated for open domain (two studies) and customer service chatbots (three studies). In open domain interactions, users are free to wander among topics and testing or flaming tends to be more frequent . In the customer services context, the chatbot needs to avoid disappointing the user, as frustration may negatively reflect on the business that the agent represents . \textit{Damage control} was also identified in six other domains (see the supplementary materials), such as task management and financial services.
The surveyed literature highlights the following benefits of providing \textit{damage control} in chatbots:
\textbf{[B1] to appropriately respond to harassment:} chatbots are more likely to be targets of profanity than humans are . When analyzing conversation logs from hotel chatbots, observed that 4\% of the conversations contained vulgar, indecent, and insulting vocabulary, and 2.8\% of all statements were abusive. Qualitative evaluation reveals that the longer the conversations last, the more users are encouraged to go beyond the chatbot's main functions. In addition, sexual expressions represented 1.8\% of all statements.
A similar number was found in a study with general purpose chatbots . When analyzing a corpus from the Amazon Alexa Prize 2017, the researchers estimated that about 4\% of the conversations included sexually explicit utterances. used utterances from this corpus to harass a set of state-of-the-art chatbots and analyze the responses. The results show that chatbots respond to harassment in a variety of ways, including nonsensical, negative, and positive responses. However, the authors highlight that the responses should align with the chatbot's goal to avoid encouraging the behavior or reinforcing stereotypes.
\textbf{[B2] to deal with testing:} abusive behavior is often used to test chatbots' social reactions . During the evaluation of a virtual guide to the university campus , a participant answered the chatbot's introductory greeting with ``\textit{moron,}'' likely hoping to see how the chatbot would answer. argue that handling this type of testing helps the chatbots to establish limits and resolve social positioning. Other forms of testing were highlighted in , including sending random letters, repeated greetings, laughs and acknowledgments, and posing comments and questions about the chatbot's intellectual capabilities. When analyzing conversations with a task management chatbot, observed that casually testing the chatbots' ``intelligence'' is a manifestation of satisfaction seeking. In , first-time users appreciated when the chatbot successfully performed tasks when the user expected the chatbot to fail, which shows that satisfaction is influenced by the ability to provide a clever response when the user tests the chatbot.
\textbf{[B3] to deal with lack of knowledge: }chatbots often fail in a conversation due to lack of either linguistic or world knowledge . \textit{Damage control} enables the chatbot to admit the lack of knowledge or cover up cleverly .
When analyzing the log of a task management chatbot, found out that the chatbot failed to answer 10\\% of the exchanged messages. The authors suggest that the chatbot should be designed to handle novel scenarios when the current knowledge is not enough to answer the requests. In some task-oriented chatbots, though, the failure may not be caused by a novel scenario, but by an off-topic utterance. In the educational context, observed that students posted off-topic utterances when they did not know what topics they could talk about, which led the chatbot to fail rather than help the users to understand its knowledge. In task-oriented scenarios, the lack of linguistic knowledge may lead the chatbots to get lost in the conversational workflow , compromising the success of the interaction. demonstrated that dialogue-reference errors (e.g., user's attempt to correct a previous answer or jumping back to an earlier question) are one of the major reasons for failed dialogues and they mostly result from chatbots' misunderstandings.\nThe literature also reveals some challenges to provide \\textit{damage control}:\n\\textbf{[C1] to deal with unfriendly users:} argue that users who want to test and find the system's borders are likely to never have a meaningful conversation with the chatbot no matter how sophisticated it is. Thus, there is a limit to which \\textit{damage control} strategies will be effective to avoid testing and abuse. In , the authors observed human tendencies to dominate, be rude, and infer stupidity, which they call ``\\textit{unfriendly partners.}'' After an intervention where users interacted with a chatbot for decision-making coaching, evaluated participants' self-perceived work and cooperation with the system. The qualitative results show that cooperative users are significantly more likely to give a higher rating for overall evaluation and decision efficiency. 
The qualitative analysis of the conversation log reveals that a few interactions failed because the users' motivations were curiosity and mischief rather than trying to solve the decision problem.\n\textbf{[C2] to identify abusive utterances: }several chatbots are trained on ``clean'' data. Because they do not understand profanity or abuse, they may not recognize a statement as harassment, which makes it difficult to adopt answering strategies . shows that data-driven chatbots often provide non-coherent responses to harassment. Sometimes, these responses conveyed the impression of flirtatiousness or counter-aggression. Providing means to identify an abusive utterance is important for adopting \textit{damage control} strategies.\n\textbf{[C3] to fit the response to the context:} argue that humans negotiate conflict and social positioning well before reaching abuse. In human-chatbot interactions, however, predicting users' behavior toward the chatbots in a specific context to develop the appropriate behavior is a challenge. \textit{Damage control} strategies need to be adapted to both the social situation and the intensity of the conflict. For example, showed that being evasive about sexual statements may convey the impression of flirtatiousness, which would not be an acceptable behavior for a customer assistant or a tutor chatbot. In contrast, adult chatbots are supposed to flirt, so encouraging behaviors are expected in some situations. argue that when the chatbot is not accepted as part of the social group it represents, it is discredited by the user, leading the interaction to fail. In addition, designing chatbots with overly strong reactions may lead to ethical concerns . For , choosing between peaceful or aggressive reactions in conflict situations is optional for socially intelligent individuals. 
Enriching chatbots with the ability to choose between the options is a challenge.\n\textit{Damage control} strategies depend on the type of failure and the target benefit, as follows:\n\textbf{[S1] emotional reactions:} suggest that when faced with abuse, a chatbot could be seen to take offense and respond in kind or act hurt. The authors argue that humans might feel inhibited about hurting the pretended feelings of a machine if the machine is willing to hurt a human's feelings too . If escalating the aggressive behavior is not appropriate, the chatbot could withdraw from the conversation to demonstrate that the user's behavior is not acceptable. In , the authors discuss that users appeared to be uncomfortable and annoyed whenever the chatbot pointed out any defect in the user or reacted to aggression, as this behavior conflicted with the user's perceived power relations. This strategy is also applied in , where abusive behavior may lead the chatbot to stop responding until the student changes the topic. categorized responses from state-of-the-art conversational systems into a pool of emotional reactions, both positive and negative. The reactions include humorous responses, chastising and retaliation, and evasive responses, as well as flirtation and play-along utterances. To provide an emotional reaction, \textit{emotional intelligence} is also required. This category is presented in Section \ref{sec:emotionalintelligence}.\n\textbf{[S2] authoritative reactions: }when facing testing or abuse, chatbots can communicate consequences \nor call on the authority of others . In , although the wizard acting as a chatbot was conscientiously working as a campus guide, she answered a bogus caller with \textit{``This is the University of Melbourne. 
Sorry, how can I help you?''} The authors suggest that the wizard was calling on the authority of the university to handle the conflict, where being part of a recognized institution places the chatbot in a stronger social group. In , when students recurrently harass the chatbot, the chatbot informs the student that further abuse will be reported to the (human) teacher (although the paper does not clarify whether the problem is, in fact, escalated to a human). and also suggest that chatbots could redirect users' problematic requests to a human attendant in order to avoid conflict situations.\n\textbf{[S3] to ignore the user's utterance and change the topic:} argue that ignoring abuse and testing is not a good strategy because it could encourage more extreme behaviors. It also positions the chatbot as an inferior individual, which is particularly harmful in scenarios where the chatbot should demonstrate a more prominent or authoritative social role (e.g., a tutor). However, this strategy has been found in some studies to handle lack of knowledge. When iteratively developing a chatbot for an educational context, proposed initiating a new topic in one out of four users' utterances that the chatbot did not understand.\n\textbf{[S4] \textit{conscientiousness} and \textit{communicability}:} successfully implementing \textit{conscientiousness} and \textit{communicability} may prevent errors; hence, strategies to provide these characteristics can also be used for \textit{damage control}. In , when users utter out-of-scope statements, the chatbot could make it clear what topics are appropriate to the situation. For task-oriented scenarios, where the conversation should evolve toward a goal, argue that the chatbot can clarify the purpose of the offered service when facing abusive behavior, bringing the user back to the task. showed that describing the chatbot's capabilities after failures in the dialogue was appreciated by first-time users. 
In situations where the conversational workflow is susceptible to failure, discuss that posting confirmation messages avoids trapping the users in the wrong conversation path. Participants in also suggested back buttons as a strategy to fix mistakes in the workflow. In addition, the exploratory results about the user interface showed that having visual elements such as quick replies prevents errors, since they keep the users aware of what to ask and the chatbot is more likely to know how to respond .\n\textbf{[S5] to predict users' satisfaction:} chatbots should perceive both explicit and implicit feedback about users' (dis)satisfaction .\nTo address this challenge, invited participants to send a ``\textit{\#fail}'' statement to express dissatisfaction. The results show that 42.4\% of the users did it at least once, and the number of complaints and flaming for the proposed chatbot was significantly lower than the baseline. However, the amount of implicit feedback was also significant, which advocates for predicting users' satisfaction from the conversation. The most powerful conversational acts that predicted user satisfaction in that study were agent ability-check questions (see the discussion in the \textit{Communicability} section) and the explicit feedback \textit{\#fail}, although closings and off-topic requests were also significant in predicting frustration. Although these results are promising, more investigation is needed to identify other potential predictors of users' satisfaction in real time, in order to provide appropriate reactions.\n\textit{Damage control} strategies have different levels of severity. Deciding what strategy is adequate to the intensity of the conflict is crucial . The strategies can escalate in severity if the conflict is not solved. For example, uses a sequence of clarification, suggesting a new topic, and asking a question about the new topic. 
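An escalating sequence of this kind, clarification first, then suggesting a new topic, with referral to an authority reserved for repeated abuse (as in strategy [S2]), can be summarized as a small turn-level policy. The Python sketch below is purely illustrative: the toy abuse lexicon, the thresholds, and the canned responses are our assumptions, not taken from any of the surveyed systems.

```python
import re

# Toy abuse lexicon; a real system would use a trained abuse classifier.
ABUSE_PATTERN = re.compile(r"\b(stupid|idiot|moron)\b", re.IGNORECASE)

class DamageControl:
    """Escalation policy: clarification -> suggest a new topic ->
    question about the new topic; repeated abuse is referred to a
    (human) authority after two de-escalation attempts."""

    def __init__(self):
        self.failed_turns = 0  # consecutive misunderstood turns
        self.abuse_turns = 0   # consecutive abusive turns

    def respond(self, utterance: str, understood: bool) -> str:
        if ABUSE_PATTERN.search(utterance):
            self.abuse_turns += 1
            if self.abuse_turns > 2:  # after two attempts at changing the topic
                return "I am referring this conversation to a human supervisor."
            return "Let's keep this friendly. How about we look at your tasks?"
        self.abuse_turns = 0
        if understood:
            self.failed_turns = 0
            return "Got it."
        self.failed_turns += 1
        if self.failed_turns == 1:
            return "Sorry, I didn't understand. Could you rephrase that?"
        if self.failed_turns == 2:
            return "Maybe we could talk about your schedule instead."
        return "What does your schedule look like this week?"
```

A dissatisfaction signal such as the explicit \textit{\#fail} marker discussed under [S5] could feed the same counters, so that implicit and explicit feedback trigger the same graduated reactions.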
In case of abuse, the chatbot refers to an authority after two attempts at changing the topic.\nAccording to , humans also fail in conversations; they misunderstand what their partner says and do not know things that are assumed common knowledge by others. Hence, it is unlikely that chatbots interactions will evolve to be conflict-free. That said, \\textit{damage control} intends to avoid escalating the conflicts and manifesting an unexpected behavior. In this sense, politeness can be used as a strategy to minimize the effect of lack of knowledge (see Section \\ref{sec:manners}), managing the conversation despite the possible mistakes. Regarding interpersonal conflicts, the strategies are in line with the theory on human-human communication, which includes non-negotiation, emotional appeal, personal rejection, and emphatic understanding . Further research on \\textit{damage control} can evaluate the adoption of human-human strategies in human-chatbot communication.", "id": "028808f1-c630-47f4-8ddb-e59065ee5178", "level": "subsubsection", "origin_cites_number": 20, "parent_id": "6e94027f-a398-4b65-8b3b-d03e95d73fa6", "prefix_titles": [ [ "title", "How should my chatbot interact? A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Chatbots Social Characteristics" ], [ "subsection", "Social Intelligence" ], [ "subsubsection", "Damage control" ] ], "subsections": [], "title": "Damage control" }, { "cite_extract_rate": 0.1, "cites": [ 6264, 6265 ], "content": "\\label{sec:thoroughness}\n\\textit{Thoroughness} is the ability of a chatbot to be precise regarding how it uses language . In traditional user interfaces, user communication takes place using visual affordances, such as buttons, menus, or links. In a conversational interface, language is the main tool to achieve the communicative goal. Thus, chatbots should coherently use language that portrays the expected style . 
When the chatbot uses inconsistent language, or unexpected patterns of language (e.g., excessive formality), the conversation may sound strange to the user, leading to frustration. \nWe found 13 papers that report the importance of \textit{thoroughness} in chatbot design, three of which investigate how patterns of language influence users' perceptions and behavior toward the chatbots . and suggest design principles that include concerns about language choices. Logs of conversations revealed issues regarding \textit{thoroughness} in two studies . In the remaining papers, \textit{thoroughness} emerged from interviews and users' subjective feedback . \textit{Thoroughness} is mainly reported for open domain (five studies) and customer service chatbots (two studies), where the interactions are expected to be natural and credible to succeed . \textit{Thoroughness} was also reported in six other domains of studies, such as financial services and education (see the supplementary materials).\nWe found two benefits of providing \textit{thoroughness}:\n\textbf{[B1] to increase human-likeness:} chatbot utterances are often pre-recorded by the chatbot designer . On the one hand, this approach produces high-quality utterances; on the other hand, it reduces flexibility, since the chatbot is not able to adapt the tone of the conversation based on individual users and conversational context. When analyzing interactions with a customer representative chatbot, observed that the chatbot proposed synonyms to keywords, and the repetition of this vocabulary led the users to imitate it. observed a similar tendency toward matching language style. The authors compared human-human conversations with human-chatbot conversations regarding language use. They found that people use, indeed, fewer words per message and a more limited vocabulary with chatbots. 
However, a deeper investigation revealed that the human interlocutors were actually matching the patterns of language use with the chatbot, who sent fewer words per message. When interacting with a chatbot that uses many emojis and letter reduplication , participants reported a draining experience, since the chatbot's energy was too high to match. These outcomes show that adapting the language to the interlocutor is a common behavior for humans, and so chatbots would benefit from manifesting it. In addition to the interlocutor, chatbots should adapt their language use to the context in which they are implemented and adopt appropriate linguistic register . In the customer services domain, state that chatbots are expected to fulfill the role of a human; hence, they should produce language that corresponds to the represented service provider. In the financial scenario , some participants complained about the use of emojis in a situation of urgency (blocking a stolen credit card).\n\\textbf{[B2] to increase believability: }because people associate social qualities with machines , chatbots are deemed sub-standard when users see them ``\\textit{acting as a machine}'' . When analyzing chatbots' naturalness, found that the formal grammatical and syntactical abilities of a chatbot are the biggest discriminators between good and poor chatbots (the other factors being \\textit{conscientiousness}, \\textit{manners}, and \\textit{proactivity}). The authors highlight that chatbots should use consistent grammar and spelling. discusses how, even with English as Second Language (ESL) learners, basic grammar errors, such as pronoun confusion, diminish the value of the chatbot. In addition, states that believable chatbots also need to display unique characters through linguistic choices. In this sense, demonstrated that \\textit{personality} can be expressed by language patterns. 
The authors proposed a computational framework to produce utterances to manifest a target \\textit{personality}. The utterances were rated by experts in \\textit{personality} evaluation and statistically compared against utterances produced by humans who manifest the target \\textit{personality}. The outcomes show that a single utterance can manifest a believable \\textit{personality} when using the appropriate linguistic form. Participants in described some interactions as ``\\textit{robotic}'' if the chatbot repeated the keywords in the answers, reducing the interaction's naturalness. Similarly, in , participants complained about the ``\\textit{inflexibility}'' of the pre-defined, handcrafted chatbot's responses and expressed the desire for it to talk ``\\textit{more as a person.}''\nRegarding the challenges, the surveyed literature shows the following:\n\\textbf{[C1] to decide how much to say: } in , some participants described the chatbot's utterances as not having enough detail, or being too generic; however, most of them appreciated finding answers in a sentence rather than in a paragraph. Similarly, argue that simple questions should not be too detailed, while important transactions require more information. In three studies , participants complained about information overload and inefficiency caused by big blocks of texts. Balancing the granularity of information with the sentence length is a challenge to overcome.\n\\textbf{[C2] to be consistent:} chatbots should not combine different language styles. For example, in , most users found it strange that emojis were combined with a certain level of formal contact. \nWhen analyzing the critical incidents about an open-domain interaction, found that participants criticized chatbots when they used more formal language or unusual vocabulary, since general-purpose chatbots focus on casual interactions.\nDespite the highlighted benefits, \nwe did not find strategies to provide \\textit{thoroughness}. 
proposed a rule-based architecture where the language choices consider the agent's \\textit{personality}, emotional state, and beliefs about the social relationship among the interlocutors. However, they did not provide evidence of whether the proposed models produced the expected outcome. Although the literature in computational linguistics has proposed algorithms and statistical models to manipulate language style and matching (see e.g., ), to the best of our knowledge, these strategies have not been evaluated in the context of chatbots' social interactions.\nThis section shows that linguistic choices influence users' perceptions of chatbots. The computer-mediated communication (CMC) field has a vast literature that shows language variation according to the media and its effect on social perceptions (see e.g. ). Additionally, the cooperative principle , particularly the maxim of quantity (the ability to give the appropriate amount of information), provides theoretical basis for the challenge of deciding how much to talk. Regarding adaptation and believability, researchers in sociolinguistic fields have shown that language choices are influenced by personal style, dialect, genre, and register. For chatbots, the results presented in are promising, demonstrating that automatically generated language can manifest recognizable traits. Thus, further research on chatbot's \\textit{thoroughness} could leverage CMC and linguistics theories to provide strategies that lead language to accomplish its purpose for a particular interactional context.", "id": "f4e8fa92-1e99-453b-a644-c67f9ee2f76c", "level": "subsubsection", "origin_cites_number": 20, "parent_id": "6e94027f-a398-4b65-8b3b-d03e95d73fa6", "prefix_titles": [ [ "title", "How should my chatbot interact? 
A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Chatbots Social Characteristics" ], [ "subsection", "Social Intelligence" ], [ "subsubsection", "Thoroughness" ] ], "subsections": [], "title": "Thoroughness" }, { "cite_extract_rate": 0.105263157894736, "cites": [ 6266, 6262 ], "content": "\\label{sec:manners}\n\\textit{Manners} refer to the ability of a chatbot to manifest polite behavior and conversational habits . Although individuals with different personalities and from different cultures may have different notions of what is considered polite (see e.g., ), politeness can be more generally applied as rapport management , where interlocutors strive to control the harmony between people in discourse. A chatbot can manifest \\textit{manners} by adopting speech acts such as greetings, apologies, and closings ; minimizing impositions , and making interactions more personal . \\textit{Manners} potentially reduces the feeling of annoyance and frustration that may lead the interaction to fail .\nWe identified ten studies that report \\textit{manners}, one of which directly investigates this characteristic . In some studies , \\textit{manners} were observed in the analysis of conversational logs, where participants talked to the chatbot in polite, human-like ways. Users' feedback and interviews revealed users' expectations regarding chatbots politeness and personal behavior . We identified studies reporting \\textit{manners} in nine different domains, with only open domain appearing twice. The list includes education, information search, and task management, among others. See the supplementary materials for the complete list of domains of studies that report \\textit{manners} as a social characteristic for chatbots.\nThe main benefit of providing \\textit{manners} is \\textbf{[B1] to increase human-likeness}. 
\textit{Manners} is highlighted in the literature as a way to generate a more natural, convincing interaction in chatbot conversations . In an in-the-wild data collection, observed that 93\% of the participants used polite words (e.g., ``\textit{thanks}'' or ``\textit{please}'') with a task management chatbot at least once, and 20\% always spoke politely to the chatbot. Unfortunately, the chatbot evaluated in that study was not prepared to handle these protocols and ultimately failed to understand. When identifying incidents from their own conversational logs with a chatbot , several participants identified greetings as a human-seeming characteristic. The users also found it convincing when the chatbot appropriately reacted to social-cue statements, such as \textit{``how are you?''}-types of utterances. Using this result, later suggested that greetings, apologies, social niceties, and introductions are significant constructs to measure chatbots' naturalness. In , the chatbot used exclamation marks at some point and frequently offered sentences available on the website, in a vaguely human-like manner. In the feedback, participants described the chatbot as rude, impolite, and cheeky. \nThe surveyed literature highlights two challenges to convey \textit{manners}:\n\textbf{[C1] to deal with face-threatening acts:} Face-Threatening Acts (FTA) are speech acts that threaten, either positively or negatively, the ``face'' of an interlocutor . Politeness strategies in human-human interactions are adopted to counteract the threat when an FTA needs to be performed . In , the authors discuss that the wizard performing the role of the chatbot used several politeness strategies to counteract face threats. For instance, when she did not recognize a destination, instead of providing a list of possible destinations, she stimulated the user to keep talking until they volunteered the information. 
In chatbot design, by contrast, providing a list of options to choose is a common strategy. For example, in , the chatbot was designed to present the user with a list of pending tasks when it did not know what task the user was reporting as completed, although the authors acknowledged that it resulted in an unnatural interaction. Although adopting politeness strategies is natural for humans and people usually do not consciously think about them, implementing them for chatbots is challenging due to the complexity of identifying face-threatening acts. For example, in the decision-making coach scenario, observed that users tend to utter straightforward and direct agreements while most of the disagreements contained modifiers that weakened their disagreement. The adoption of politeness strategies to deal with face-threatening acts is still under-investigated in the chatbot literature.\n\\textbf{[C2] to end a conversation gracefully:} discuss that first-time users expected human-like conversational etiquette from the chatbots, specifically introductory phrases and concluding phrases. Although several chatbots perform well in the introduction, the concluding phrases are less explored. Most of the participants reported being annoyed with chatbots that do not end a conversation . also highlight that chatbots need to know when the conversation ends. In that scenario, the chatbot could recognize a closing statement (the user explicitly says \\textit{``thank you''} or \\textit{``bye''}); however, it would not end the conversation otherwise. Users that stated a decision, but kept receiving more information from the chatbot, reported feeling confused and undecided afterward. Thus, recognizing the right moment to end the conversation is a challenge to overcome.\nThe strategies highlighted in the surveyed literature for providing \\textit{manners} are the following:\n\\textbf{[S1] to engage in small talk:} and point out that even task-oriented chatbots engage in small talk. 
When categorizing the utterances from the conversational log, the authors found a significant number of messages about the agent status (e.g., \textit{``what are you doing?''}), opening and closing sentences as well as acknowledgment statements (\textit{``ok,''} \textit{``got it''}). also observed that first-time users included small talk in the introductory phrases. According to , these are common behaviors in human-human chat interfaces, and chatbots would likely benefit from anticipating these habitual behaviors and reproducing them. However, particularly for task-oriented chatbots, it is important to control the small talk to avoid off-topic conversations and harassment, as discussed in Sections \ref{sec:damagecontrol} and \ref{sec:conscientiousness}.\n\textbf{[S2] to adhere to turn-taking protocols:} suggest that chatbots should adopt turn-taking protocols to know when to talk. Participants who received frequent follow-up questions from the task management chatbot about their pending tasks perceived the chatbot as invasive. Literature in chatbot development proposes techniques to improve chatbots' turn-taking capabilities (see e.g., ), which can be explored as a means of improving chatbots' perceived \textit{manners}.\nAlthough the literature emphasizes that \textit{manners} are important to approximate chatbot interactions to human conversational protocols, this social characteristic is under-investigated in the literature. Conversational acts such as greetings and apologies are often adopted (e.g., ), but there is a lack of studies on the rationale behind these strategies with respect to the politeness models used in human-human social interactions . In addition, the literature points to needs for personal conversations (e.g., addressing the users by name), but we did not find studies that focus on this type of strategy. 
CMC is by itself more impersonal than face-to-face conversation ; even so, current online communication media has been successfully used to initiate, develop, and maintain interpersonal relationships . Researchers can learn from human behaviors in CMC and adopt similar strategies to produce more personal conversations.", "id": "b7e4439b-1760-43b1-b0dc-0c9d26d8356e", "level": "subsubsection", "origin_cites_number": 19, "parent_id": "6e94027f-a398-4b65-8b3b-d03e95d73fa6", "prefix_titles": [ [ "title", "How should my chatbot interact? A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Chatbots Social Characteristics" ], [ "subsection", "Social Intelligence" ], [ "subsubsection", "Manners" ] ], "subsections": [], "title": "Manners" }, { "cite_extract_rate": 0.066666666666666, "cites": [ 7476 ], "content": "\\label{sec:moralagency}\nMachine \\textit{moral agency} refers to the ability of a technology to act based on social notions of right and wrong . The lack of this ability may lead to cases such as Tay, Microsoft's Twitter chatbot that became racist, sexist, and harassing in a few hours . The case raised concerns on what makes an artificial agent (im)moral. Whether machines can be considered (moral) agents is widely discussed in the literature (see e.g., ). In this survey, the goal is not to argue about criteria to define a chatbot as moral, but to discuss the benefits of manifesting a perceived agency and the implications of disregarding chatbots' moral behavior. Hence, for the purpose of this survey, \\textit{moral agency} is a manifested behavior that may be inferred by a human as morality and agency .\nWe found six papers that address \\textit{moral agency}. developed and validated a metric for perceived \\textit{moral agency} in conversational interfaces, including chatbots. 
In four studies, the authors investigated the ability of chatbots to handle conversations where the persistence of gender and racial stereotypes may occur. In , \\textit{moral agency} is discussed as a secondary result, where the authors discuss the impact of biased responses on emotional connection. \\textit{Moral agency} was observed in only two domains of studies: open domain (four studies) and race-talk (two studies), which shows that this characteristic is primarily relevant when the conversational topic may raise moral concerns, which ultimately requires ethical behavior from the conversational partners.\nThe two main reported benefits of manifesting perceived \\textit{moral agency} are the following:\n\\textbf{[B1] to avoid stereotyping:} chatbots are often designed with anthropomorphized characteristics (see Section \\ref{sec:personification}), including gender, age, and ethnicity identities. Although the chatbot's personification is more evident in embodied conversational agents, text-based chatbots may also be assessed by their social representation, which risks building or reinforcing stereotypes . and argue that chatbots are often developed using language registers and cultural references of the dominant culture. In addition, a static image (or avatar) representing the agent may convey social grouping . When the chatbot is positioned in a minority \\textit{identity} group, it exposes the image of that group to judgment and flaming, which is frequent in chatbot interactions . For example, discusses the controversies caused by a chatbot designed to answer questions about Caribbean Aboriginal culture: its representation as a Caribbean Amerindian individual created an unintended context for stereotyping, where users projected the chatbot's behavior as a standard for people from the represented population. Another example is the differences in sexual discourse between male- and female-presenting chatbots. 
found that female-presenting chatbots are the object of implicit and explicit sexual attention and swear words more often than male-presenting chatbots. show that sex talks with the male chatbot were rarely coercive or violent; his sexual preference was often questioned, though, and he was frequently propositioned by reported male users. In contrast, the female character received violent sexual statements, and was threatened with rape five times in the analyzed corpora. In , when the avatars were presented as black adults, references to race often deteriorated into racist attacks. Manifesting moral agency may, thus, prevent obnoxious user interactions. In addition, moral agency may prevent the chatbot itself from being biased or disrespectful to humans. argue that the lack of context about the world does not redeem the chatbot from the necessity of being respectful with all the social groups.\n\\textbf{[B2] to enrich interpersonal relationships:} in a study on how interlocutors perceive conversational agents' \\textit{moral agency}, hypothesized that perceived morality may influence a range of motivations, dynamics, and effects of human-machine interactions. Based on this claim, the authors evaluated whether goodwill, trustworthiness, willingness to engage, and relational certainty in future interactions are constructs to measure perceived \\textit{moral agency}. Statistical results showed that all the constructs correlate with morality, which suggests that manifesting \\textit{moral agency} can enrich interpersonal relationships with chatbots. 
Similarly, suggest that to produce interpersonal responses, chatbots should be aware of inappropriate information and avoid generating biased responses.\nHowever, the surveyed literature also reveals challenges of manifesting \\textit{moral agency}:\n\\textbf{[C1] to avoid alienation:} in order to prevent a chatbot from reproducing hate speech or abusive talk, most chatbots are built over ``clean'' data, where specific words are removed from their dictionary . These chatbots have no knowledge of those words and their meaning. Although this strategy is useful to prevent unwanted behavior, it does not manifest agency, but rather alienates the chatbot of the topic. show that the lack of understanding about sex-talk does not prevent the studied chatbot from harsh verbal abuse, or even from being perceived as encouraging such abuse. From , one can notice that the absence of racist specific words did not prevent the chatbot Zo from uttering discriminatory exchanges. As a consequence, manifesting \\textit{moral agency} requires a broader understanding of the world, rather than alienation, which is an open challenge.\n\\textbf{[C2] to build unbiased algorithms and training data:} as extensively discussed in , machine learning algorithms and corpus-based language generation are biased toward the available training datasets. Hence, \\textit{moral agency} relies on data that is biased in its nature, producing unsatisfactory results from an ethical perspective. In , the authors propose a framework for developing social chatbots. The authors highlight that the core chat module should follow ethical design to generate unbiased, non-discriminative responses, but they do not discuss specific strategies for that. 
Building unbiased training datasets and learning algorithms that connect the outputs with individual, real-world experiences, therefore, are challenges to overcome.\nDespite the relevance of \\textit{moral agency} to the development of socially intelligent chatbots, we did not find strategies to address the issues. advocate for developing diversity-conscious databases and learning algorithms that account for ethical concerns; however, the paper focuses on outlining the main research branches and calls on the community of designers to adopt new strategies. As discussed in this section, research on perceived \\textit{moral agency} is still necessary in order to develop chatbots whose social behavior is inclusive and respectful. In the field of embodied agents, mind perception theory has been investigated as a means to improve interactions through agency and emotion . Future investigations in chatbots interactions could leverage this theory to understand when and the extent to which perceived moral agency improves communication with chatbots.", "id": "f7519054-562c-44e0-9cb2-d8defc3722e3", "level": "subsubsection", "origin_cites_number": 15, "parent_id": "6e94027f-a398-4b65-8b3b-d03e95d73fa6", "prefix_titles": [ [ "title", "How should my chatbot interact? A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Chatbots Social Characteristics" ], [ "subsection", "Social Intelligence" ], [ "subsubsection", "Moral agency" ] ], "subsections": [], "title": "Moral agency" }, { "cite_extract_rate": 0.11764705882352901, "cites": [ 7476, 2047 ], "content": "\\label{sec:emotionalintelligence}\n\\textit{Emotional intelligence} is a subset of social intelligence that allows an individual to appraise and express feelings, regulate affective reactions, and harness emotions to solve a problem . Although chatbots do not have genuine emotions , there are considerable discussions about the role of manifesting emotional cues in chatbots . 
An emotionally intelligent chatbot can recognize and influence users' feelings and demonstrate respect, empathy, and understanding, improving the human-chatbot relationship .\nWe identified 14 studies that report \textit{emotional intelligence}. Unlike the previously discussed categories, most studies on \textit{emotional intelligence} focused on understanding the effects of chatbots' empathy and emotional self-disclosure . Only three papers highlighted \textit{emotional intelligence} as an exploratory outcome , where needs for \textit{emotional intelligence} emerged from participants' subjective feedback and post-intervention surveys. \textit{Emotional intelligence} is mainly investigated in domains where topics may involve the disclosure of feelings (e.g., in open domain interactions ) and expressions of empathy and understanding are appropriate (e.g., health care, gaming, education). See the supplementary materials for the domains of study investigating \textit{emotional intelligence}.\nThe main reported benefits of developing emotionally intelligent chatbots are the following:\n\textbf{[B1] to enrich interpersonal relationships:} the perception that the chatbot understands one's feelings may create a sense of belonging and acceptance . propose that chatbots for second language studies should use congratulatory, encouraging, sympathetic, and reassuring utterances to create a friendly atmosphere for the learner. The authors statistically demonstrated that affective backchannel, combined with communicative strategies (see Section \ref{sec:conscientiousness}), significantly increased learners' confidence and desire to communicate, while reducing anxiety. In another educational study, evaluated the impact of chatbot's affective moves on friendliness and achieving social belonging. Qualitative results show that affective moves significantly improved the perception of amicability and marginally increased social belonging. 
According to , when a chatbot's emotional reaction triggers a social response from the user, the chatbot has achieved group membership and users' sympathy. proposed a chatbot that uses empathic and self-oriented emotional expressions to keep users engaged in quiz-style dialogue. The survey results revealed that empathic expressions significantly improved user satisfaction. In addition, the empathic expressions also improved the user ratings of the peer agent regarding intimacy, compassion, amiability, and encouragement. Although did not find an effect of chatbot's self-disclosure on emotional connection, found that self-disclosure and reciprocity significantly improved trust and interactional enjoyment. In , seven participants reported that the best thing about their experience with the therapist chatbot was perceived empathy. Five participants highlighted that the chatbot demonstrated attention to their feelings. In addition, the users referred to the chatbot as \textit{``he,'' ``a friend,''} \textit{``a fun little dude,''} which demonstrates that empathy emerged from the personification of the chatbot. In another mental health care study, found that humans are twice as likely to mirror negative sentiment from a chatbot as from a human, which is a relevant implication for therapeutic interactions. In , participants reported that some content is embarrassing to ask another human; thus, talking to a chatbot would be easier due to the lack of judgement. measured users' experience in conversations with a chatbot compared to a human partner as well as the amount of intimacy disclosure and cognitive reappraisal. Participants in the chatbot condition experienced as many emotional, relational, and psychological benefits as participants who disclosed to a human partner. \n\textbf{[B2] to increase engagement:} argue that longer conversations (10+ turns) are needed to fulfill the needs of affection and belonging. 
Therefore, the authors defined conversation-turns per session as a success metric for chatbots, where usefulness and emotional understanding are combined. In , empathic utterances for the quiz-style interaction significantly increased the number of users' messages per hint for both answers and non-answer utterances (such as feedback about the success/failure). This result shows that empathic utterances encouraged the users to engage and utter non-answer statements. compared the possibility of emotional connection between a classical chatbot and a pretend chatbot, simulated in a WoZ experiment. Quantitative results showed that the WoZ condition was more engaging, since it resulted in conversations that lasted longer, with a higher number of turns. The analysis of the conversational logs revealed the positive effect of the chatbot manifesting social cues and empathic signs as well as touching on personal topics.\n\textbf{[B3] to increase believability:} argue that adapting chatbots' language to their current emotional state, along with their \textit{personality} and social role awareness, results in more believable interactions. The authors propose that conversation acts should reflect the pretended emotional status of the agent; the extent to which the acts impact emotion, however, depends on the agent's \textit{personality} (e.g., its temperament or tolerance). \textit{Personality} is an anthropomorphic characteristic and is discussed in Section \ref{sec:personality}.\nAlthough \textit{emotional intelligence} is the goal of several studies, \textbf{[C1] regulating affective reactions} is still a challenge. The chatbot presented in was designed to mimic the patterns of affective moves in human-human interactions. Nevertheless, the chatbot showed only a marginally significant increase in social belonging, when compared to the same interaction with a human partner. 
Conversational logs revealed that the human tutor performed a significantly higher number of affective moves in that context. In , the chatbot was designed to present emotive-like cues, such as exclamation marks and interjections. The participants negatively rated the degree of emotion in the chatbot's responses. In , the energetic chatbot was reported as having an enthusiasm too high to match. In contrast, the chatbot described as an \textit{``emotional buddy''} was reported as being \textit{``overly caring.''} state that chatbot's empathic utterances may be seen as pre-programmed and inauthentic. Although their results revealed that the partners' identity (chatbot vs. person) had no effect on the perceived relational and emotional experience, the chatbot condition was a WoZ setup. The wizards were blind to whether users thought they were talking to a chatbot or a person, which reveals that identity does not matter if the challenge of regulating emotions is overcome.\nThe chatbots literature also reports some strategies to manifest \textit{emotional intelligence}:\n\textbf{[S1] using social-emotional utterances:} affective utterances toward the user are a common strategy to demonstrate \textit{emotional intelligence}. , , and suggest that affective utterances improve the interpersonal relationship with a tutor chatbot. In , the authors propose affective backchannel utterances (congratulatory, encouraging, sympathetic, and reassuring) to motivate the user to communicate in a second language. The tutor chatbot proposed in uses solidarity, tension release, and agreement utterances to promote its social belonging and acceptance in group chats. propose empathic utterances to express opinions about the difficulty or ease of a quiz, and feedback on success and failure.\n\textbf{[S2] to manifest \textit{conscientiousness}:} demonstrating \textit{conscientiousness} may affect the emotional connection between humans and chatbots. 
In , participants reported the rise of affection when the chatbot remembered something they had said before, even if it was just the user's name. Keeping track of the conversation was reported as an empathic behavior and resulted in mutual affection. argue that a chatbot needs to combine usefulness with emotion by asking questions that help to clarify the users' intentions. They provide an example where a user asks the time, and the chatbot answers \textit{``Cannot sleep?''} as an attempt to guide the conversation in a more engaging direction. Adopting this strategy requires the chatbot to handle message understanding, emotion and sentiment tracking, session context modeling, and user profiling .\n\textbf{[S3] reciprocity and self-disclosure:} hypothesized that a high level of self-disclosure and reciprocity in communication with chatbots would increase trust, intimacy, and enjoyment, ultimately improving user satisfaction and intention to use. They performed a WoZ study, where the assumed chatbot was designed to recommend movies. Results demonstrated that reciprocity and self-disclosure are strong predictors of rapport and user satisfaction. In contrast, did not find any effect of self-oriented emotional expressions on the users' satisfaction or engagement. More research is needed to understand the extent to which this strategy produces a positive impact on the interaction.\nThe literature shows that \textit{emotional intelligence} is widely investigated, with particular interest from education and mental health care domains. Using emotional utterances in a personalized, context-relevant way is still a challenge. Researchers in chatbots' \textit{emotional intelligence} can learn from \textit{emotional intelligence} theory to adapt the chatbots' utterances to match the emotions expressed in the dynamic context. 
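The pipeline requirements named above (message understanding, sentiment tracking, affective response selection) can be made concrete with a deliberately minimal sketch. All word lists and backchannel utterances below are invented for illustration and are not taken from the surveyed systems:

```python
import re

# Toy illustration of sentiment-driven affective backchannels.
# The lexicons and utterances are hypothetical; a real system would use
# a trained sentiment model plus session context and user profiling.
NEGATIVE = {"sad", "anxious", "failed", "tired", "worried"}
POSITIVE = {"happy", "passed", "great", "excited", "proud"}

BACKCHANNEL = {
    "negative": "That sounds hard. I'm here with you.",   # sympathetic
    "positive": "That's wonderful, congratulations!",     # congratulatory
    "neutral": "I see. Tell me more about that.",         # encouraging
}

def detect_sentiment(message: str) -> str:
    """Crude keyword-based sentiment tracking over the user's message."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "neutral"

def affective_reply(message: str) -> str:
    """Pick an affective backchannel utterance for the detected sentiment."""
    return BACKCHANNEL[detect_sentiment(message)]
```

A production system would replace the keyword lexicons with a trained sentiment model and condition the chosen utterance on the session context and the user profile.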
Adaption to the dynamic context also improves the sense of personalized interactions, which is discussed in the next section.", "id": "e08fd9d3-0a36-4724-8257-1d5f35b4d5dc", "level": "subsubsection", "origin_cites_number": 17, "parent_id": "6e94027f-a398-4b65-8b3b-d03e95d73fa6", "prefix_titles": [ [ "title", "How should my chatbot interact? A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Chatbots Social Characteristics" ], [ "subsection", "Social Intelligence" ], [ "subsubsection", "Emotional Intelligence" ] ], "subsections": [], "title": "Emotional Intelligence" }, { "cite_extract_rate": 0.1875, "cites": [ 6262, 650, 7476 ], "content": "\\label{sec:personalization}\n\\textit{Personalization} refers to the ability of a technology to adapt its functionality, interface, information access, and content to increase its personal relevance to an individual or a category of individuals . In the chatbot domain, \\textit{personalization} may increase the agents' social intelligence, since it allows a chatbot to be aware of situational context and dynamically adapt its features to better suit individual needs . Grounded in robots and artificial agents' literature, argue that \\textit{personalization} can improve rapport and cooperation, ultimately increasing engagement with chatbots. Although some studies (see e.g., ) also relate \\textit{personalization} to the attribution of personal qualities such as \\textit{personality}, we discuss personal qualities in the \\textbf{Personification} category. In this section, we focus on the ability to adapt the interface, content, and behavior to the users' preferences, needs, and situational context.\nWe found 11 studies that report \\textit{personalization}. Three studies pose \\textit{personalization} as a research goal . In most of the studies, though, \\textit{personalization} was observed in exploratory findings. 
In six studies, \\textit{personalization} emerged from the analysis of interviews and participants' self-reported feedback . In two studies , needs for \\textit{personalization} emerged from the conversational logs. \\textit{Personalization} was investigated in seven different domains, including open domain and task management (two studies each). In open domain interactions, personalization is derived from remembering information from previous interactions, such as personal preferences and users' details . In task-oriented contexts, such as task management, personalization aims to increase the relevance of services to particular users . See the supplementary materials for details.\nThe surveyed literature highlighted three benefits of providing personalized interactions:\n\\textbf{[B1] to enrich interpersonal relationships:} state that personalizing the amount of personal information a chatbot can access and store is required to establish a relation of trust and reciprocity in workplace environments. In , interviews with 12 participants generated a total of 59 statements about how learning from experience promotes chatbots' authenticity. argue that chatbots whose focus is engagement need to personalize the generation of responses for different users' backgrounds, personal interests, and needs in order to serve their needs for communication, affection, and social belonging. In , participants expressed the desire for the chatbot to provide different answers to different users. Although has found no significant effect of \\textit{personalization} on the user experience with the financial assistant chatbot, the study applies \\textit{personalization} as the ability to provide empathic responses according to the users' issues, where \\textit{emotional intelligence} plays a role. 
Interpersonal relationships can also be enriched by adapting the chatbots' language to match the user's context, energy, and formality; the ability to appropriately use language is discussed in Section \ref{sec:thoroughness}.\n\textbf{[B2] to provide unique services:} providing \textit{personalization} increases the value of provided information . In the ethnography data collection study , eight participants reported dissatisfaction with the chatbot's generic guidance to specific places. Participants self-reported that the chatbot should use their current location to direct them to places more conveniently located, and ask for participants' interests and preferences to direct them to areas that meet their needs. When exploring how teammates used a task-assignment chatbot, found that the use of the chatbot varied depending on the participants' levels of hierarchy. Similarly, qualitative analysis of perceived interruption in a workplace chat suggests that interruption is likely associated with users' general aversion to unsolicited messages at work. Hence, the authors argue that chatbot's messages should be personalized to the user's general preference. also found that users with low social-agent orientation emphasize the utilitarian value of the system, while users with high social-agent orientation see the system as a humanized assistant. This outcome supports the need to personalize the interaction to individual users' mental models. In , participants reported preference for a chatbot that remembers their details, likes and dislikes, and preferences, and voluntarily uses the information to make recommendations. In , two participants also expected chatbots to retain context from previous interactions to improve recommendations.\n\textbf{[B3] to reduce interactional breakdowns:} in HCI, \textit{personalization} is used to customize the interface toward user familiarity . 
When evaluating visual elements (such as quick replies) compared to typing the responses, observed that users who start the interaction by clicking an option are more likely to continue the conversation if the next exchange also has visual elements as optional affordances. In contrast, users who typed are more likely to abandon the conversation when they face options to click. Thus, chatbots should adapt their interface to users' preferred input methods. In , one participant suggested that the choice of text color and font size should be customizable. also observed that participants faced difficulties with small letters, and concluded that adapting the interface to provide accessibility also needs to be considered.\nAccording to the surveyed literature, the main challenge regarding \textit{personalization} is \textbf{[C1] privacy}. To enrich the efficiency and productivity of the interaction, a chatbot needs to have memory of previous interactions as well as learn users' preferences and disclosed personal information . However, as and suggest, collecting personal data may lead to privacy concerns. Thus, chatbots should showcase a transparent purpose and ethical standards . also suggest that there should be a way to inform a chatbot that something in the conversation is private. Similarly, participants in reported that personal data and social media content may be inappropriate topics for chatbots because they can be sensitive. These concerns may be reduced if a chatbot demonstrates care about privacy .\nThe reported strategies to provide \textit{personalization} in chatbots interactions are the following:\n\textbf{[S1] to learn from and about the user:} state that chatbots should present strategies to learn from cultural, behavioral, personal, conversational, and contextual interaction data. For example, the authors suggest using Facebook profile information to build knowledge about users' personal information. 
also suggest that the chatbot should remember users' preferences disclosed in previous conversations. In , the authors propose an architecture where responses are generated based on a \textit{personalization} rank that applies users' feedback about their general interests and preferences. When evaluating the user's experience with a virtual assistant chatbot, found 16 mentions of personalized interactions, where participants demonstrated the need for a chatbot to be aware of their personal quirks and anticipate their needs.\n\textbf{[S2] to provide customizable agents:} suggest that users should be able to choose the level of the chatbot's attributes, for example, the agent's look and persona. By doing so, users with low social-agent orientation could use a non-humanized interface, which would better represent their initial perspective. This differentiation could be the first signal to personalize further conversation, such as focusing on more productive or playful interactions. Regarding chatbots' learning capabilities, in , interviews with potential users revealed that users should be able to manage what information the chatbot knows about them and decide whether the chatbot can learn from previous interactions or not. If the user prefers a more generic chatbot, then it would not store personal data, potentially increasing the engagement with more resistant users. raise the possibility of offering an ``incognito'' mode for chatbots or asking the chatbot to forget what was said in previous utterances.\n\textbf{[S3] visual elements:} adopted quick replies as a means for the chatbot to tailor its subsequent questions to the specific experience the participant had reported. 
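As a rough illustration of this tailoring strategy (the topic keywords and reply options below are hypothetical, not drawn from the cited study), a chatbot could choose its next quick replies from keywords in the user's previous answer:

```python
# Toy illustration: tailor the next question's quick-reply options to the
# experience the user just reported. Topics and options are invented.
FOLLOW_UPS = {
    "commute": ["By bus", "By bike", "On foot"],
    "meal": ["At home", "Restaurant", "Takeaway"],
}
DEFAULT_REPLIES = ["Yes", "No", "Tell me more"]

def quick_replies(last_message: str) -> list:
    """Return quick-reply options matching a topic keyword, if any."""
    text = last_message.lower()
    for topic, options in FOLLOW_UPS.items():
        if topic in text:
            return options
    return DEFAULT_REPLIES
```

The same lookup could be extended with stored user preferences so that frequently chosen options are ranked first, approximating the personalization-rank idea discussed earlier.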
As discussed in Section \\ref{sec:conscientiousness}, quick replies may be seen as restrictive from an interactional perspective; however, conversation logs showed that the tailored questions prompted the users to report more detail about their experience, which is important in ethnography research. \nBoth the benefits and strategies identified in the literature are in line with the types of \\textit{personalization} proposed by . Therefore, further investigations in \\textit{personalization} can leverage knowledge from interactive systems (e.g., ) to adapt \\textit{personalization} strategies and manage the privacy concern.\n\\begin{framed} \\small \n\\vspace{-2mm}\nIn summary, the \\textbf{social intelligence} category includes characteristics that help a chatbot to manifest an adequate social behavior, by managing conflicts, using appropriate language, displaying \\textit{manners} and \\textit{moral agency}, sharing emotions, and handling personalized interactions. The benefits relate to resolving social positioning and recovering from failures, as well as increasing believability, human-likeness, engagement, and rapport. To achieve that, designers and researchers should care about privacy, emotional regulation issues, language consistency, and identification of failures and inappropriate content.\n\\vspace{-2mm}\n\\end{framed}", "id": "69b481db-bee0-4219-a694-6a1c1caa8255", "level": "subsubsection", "origin_cites_number": 16, "parent_id": "6e94027f-a398-4b65-8b3b-d03e95d73fa6", "prefix_titles": [ [ "title", "How should my chatbot interact? 
A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Chatbots Social Characteristics" ], [ "subsection", "Social Intelligence" ], [ "subsubsection", "Personalization" ] ], "subsections": [], "title": "Personalization" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:personification}\nIn this section, we discuss the influence of \\textit{identity} projection on human-chatbot interaction. \\textbf{Personification} refers to assigning personal traits to non-human agents, including physical appearance, and emotional states . In the HCI field, researchers argue that using a personified character in the user interface is a natural way to support the interaction . Indeed, the literature shows that (i) users can be induced to behave as if computers were humans, even when they consciously know better ; and (ii) the more human-like a computer representation is, the more social people's responses will be .\nChatbots are, by definition, designed to have at least one human-like trait: (human) natural language. Although research on \\textbf{personification} is more common in the Embodied Conversational Agents field, claim that chatbot embodiment can be created through narrative without any visual help. According to , talking to a machine affords it a new \\textit{identity}. In this section, we divided the social characteristics that reflect \\textbf{personification} into \\textit{identity} (16 papers) and \\textit{personality} (12 papers). In this category, we found several studies where part of the main investigation relates to the social characteristics. Six studies applied quantitative methods, whereas the majority reported qualitative (10) or mixed (11) methods. See the supplementary materials for details.", "id": "fb12c3f5-18c5-4fb9-8465-0d7950752ebc", "level": "subsection", "origin_cites_number": 6, "parent_id": "5950bb1f-b30c-410b-94c0-7eba5ce91d72", "prefix_titles": [ [ "title", "How should my chatbot interact? 
A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Chatbots Social Characteristics" ], [ "subsection", "Personification" ] ], "subsections": [ "3da905ba-ca45-4a2d-9608-fe81f0daa556", "eec7d496-87cc-419a-b26e-00faf7d9c1fd" ], "title": "Personification" }, { "cite_extract_rate": 0.08333333333333301, "cites": [ 6262, 7476 ], "content": "\\label{sec:identity}\n\\textit{Identity} refers to the ability of an individual to demonstrate belonging to a particular social group . Although chatbots lack the agency to choose what social group to which they want to belong, designers attribute \\textit{identity} to them, intentionally or not, when they define the way a chatbot talks or behaves . The \\textit{identity} of a partner (even if only perceived) gives rise to new processes, expectations, and effects that influence the outcomes of the interaction . Aspects that convey the chatbots' \\textit{identity} include gender, age, language style, and name. Additionally, chatbots may have anthropomorphic, zoomorphic, or robotic representations. Some authors include \\textit{identity} aspects in the definition of \\textit{personality} (see, e.g., ). We distinguish these two characteristics, where \\textit{identity} refers to appearance and cultural traits while \\textit{personality} focuses on behavioral traits.\nWe found 16 studies that discuss \\textit{identity} issues, ten of which include \\textit{identity} as part of their main investigation . In two studies, the authors argue for the impact of \\textit{identity} on the interaction based on the literature . In three studies , qualitative analysis of conversational logs revealed that participants put effort into understanding aspects of the chatbots' \\textit{identity}. \\textit{Identity} concerns were primarily investigated for open domain and customer service chatbots (four studies each). In open domain interactions, \\textit{identity} is explored as a means of building common ground . 
In the case of customer services, \\textit{identity} helps manifest credibility and trust . Other domains include race-talk, education, and gaming. The complete list appears in the supplementary materials.\nThe identified benefits of attributing \\textit{identity} to a chatbot are the following:\n\\textbf{[B1] to increase engagement: }when evaluating signals of playful interactions, found that agent-oriented conversations (asking about an agent's traits and status) are consistent with the tendency to anthropomorphize the agent and engage in chit-chat. In the educational scenario, also observed questions about agents' appearance, intellectual capacities, and sexual orientation, although the researchers considered these questions inappropriate for the context of tutoring chatbots. When comparing human-like vs. machine-like language style, greetings, and framing, noticed that using informal language, having a human name, and using greetings associated with human communication resulted in significantly higher scores for adjectives like likeable, friendly, and personal. In addition, framing the agent as ``intelligent'' also had a slight influence on users' scores.\n\\textbf{[B2] to increase human-likeness:} some attributes may convey a perceived human-likeness. showed that using human-like language style, name, and greetings resulted in significantly higher scores for naturalness. The chatbot's framing influenced the outcomes when combined with other anthropomorphic clues. 
When evaluating different typefaces for a financial adviser chatbot, found that users perceive machine-like typefaces as more chatbot-like, although they did not find strong evidence of handwriting-like typefaces conveying humanness.\nThe surveyed literature also highlights challenges regarding \\textit{identity}:\n\\textbf{[C1] to avoid negative stereotypes: }when engaging in a conversation, interlocutors base their behavior on common ground (joint knowledge, background facts, assumptions, and beliefs that participants have of each other) (see ). Common ground reflects stereotypical attributions that chatbots should be able to manage as the conversation evolves . In , the authors discuss that chatbots for company representations are often personified as attractive human-like women acting as spokespeople for their companies, while men chatbots tend to have a more important position, such as a virtual CEO. state that the agent self-disclosure of gender \\textit{identity} opens possibilities to sex talk. The authors observed that the conversations mirror stereotyped male/female encounters, and the ambiguity of the chatbot's gender may influence the exploration of homosexuality. However, fewer instances were observed of sex-talk with the chatbot personified as a robot, which demonstrates that the gender \\textit{identity} may lead to the stereotypical attributions. When evaluating the effect of gender identity on disinhibition, showed that people spoke more often about physical appearance and used more swear and sexual words with the female-presenting chatbot, and racist attacks were observed in interactions with black-representing chatbots. The conversation logs from also show instances of users attacking the chatbot persona (a static avatar of a woman pointing to the conversation box). 
and also highlight that race identity conveys not only the physical appearance, but all the socio-cultural expectations about the represented group (see discussion in Section \ref{sec:moralagency}). Hence, designers should care about the impact of attributing an identity to chatbots in order to avoid reinforcing negative stereotypes.\n\textbf{[C2] to balance the \textit{identity} and the technical capabilities:} the literature comparing embodied vs. disembodied conversational agents yields contradictory results regarding the relevance of a human representation. For example, in the context of general-purpose interactions, show that people exert more effort toward establishing common ground with the agent when it is represented as fully human; in contrast, when evaluating a website assistant chatbot, show that simpler text-based chatbots with no visual, human \textit{identity} resulted in less of an uncanny effect and reduced negative affect. Overly humanized agents create a higher expectation for users, which eventually leads to more frustration when the chatbot fails . When arguing about why chatbots fail, advocate for balancing human versus robotic aspects, where \textit{``too human''} representations may lead to off-topic conversations, and overly robotic interactions may lack personal touch and flexibility. When arguing about the social presence conveyed by deceptive chatbots, state that extreme anthropomorphic features may generate cognitive dissonance. The challenge thus lies in designing a chatbot that provides appropriate \textit{identity} cues, which correspond to its capabilities and communicative purpose, in order to convey the right expectation and minimize discomfort from over-personification.\nRegarding the strategies, the surveyed literature suggests \textbf{[S1] to design and elaborate on a persona}. Chatbots should have a comprehensive persona and answer agent-oriented conversations with a consistent description of themselves . 
For example, discuss that Eliza, the psychotherapist chatbot, and Parry, a paranoid chatbot, have behaviors consistent with the stereotypes associated with the professional and personal identities, respectively. suggest that designers should explicitly build signals of the chatbot personification (either machine- or human-like), so the users can have the right expectation about the interaction. When \\textit{identity} aspects are not explicit, users try to establish common ground. In and , much of the small talk with the chatbot related to the chatbot's traits and status. In , the authors observed many instances of Alice's self-references to ``\\textit{her}'' artificial nature. These references triggered the users to reflect on their human-condition (self-categorization process), resulting in exchanges about their species (either informational or confrontational). observed similar results, as participants engaged in conversations about the artificial nature of the agent. Providing the chatbot with the ability to describe its personal \\textit{identity} helps to establish the common ground, and hence, enrich the interpersonal relationship .\nChatbots may be designed to deceive users about their actual \\textit{identity}, pretending to be a human . In this case, the more human the chatbot sounds, the more successful it will be. In many cases, however, there is no need to engage in deception and the chatbots can be designed to represent an elaborated persona. \nResearchers can explore social identity theory related to ingroup bias, power relations, homogeneity, and stereotyping, in order to design chatbots with \\textit{identity} traits that reflect their expected social position .", "id": "3da905ba-ca45-4a2d-9608-fe81f0daa556", "level": "subsubsection", "origin_cites_number": 24, "parent_id": "fb12c3f5-18c5-4fb9-8465-0d7950752ebc", "prefix_titles": [ [ "title", "How should my chatbot interact? 
A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Chatbots Social Characteristics" ], [ "subsection", "Personification" ], [ "subsubsection", "Identity" ] ], "subsections": [], "title": "Identity" }, { "cite_extract_rate": 0.043478260869565, "cites": [ 7476 ], "content": "\\label{sec:personality}\n\\textit{Personality} refers to personal traits that help to predict someone's thoughts, feelings, and behaviors . The most accepted set of traits is called the Five-Factor model (or Big Five model) ), which describes \\textit{personality} across five dimensions (extraversion, agreeableness, conscientiousness, neuroticism, and openness). However, \\textit{personality} can also refer to other dynamic, behavioral characteristics, such as temperament and sense of humor . In the chatbots domain, \\textit{personality} refers to the set of traits that determines the agent's interaction style, describes its character, and allows the end-user to understand its general behavior . Chatbots with consistent \\textit{personality} are more predictable and trustworthy . According to , unpredictable swings in chatbots' attitudes can disorient users and create a strong sense of discomfort. Thus, \\textit{personality} ensures that a chatbot displays behaviors that stand in agreement with the users' expectations in a particular context .\nWe found 12 studies that report \\textit{personality} issues for chatbots. In some studies, \\textit{personality} was investigated in reference to the Big Five model , while two studies focused on sense of humor . Three studies investigated the impact of the \\textit{personality} of tutor chatbots on students' engagement . compared users' preferences regarding pre-defined personalities. In the remaining studies, \\textit{personality} concerns emerged from the qualitative analysis of the interviews, users' subjective feedback, and literature reviews . 
\\textit{Personality} was mostly investigated in open domain (five studies) and education (three studies) chatbots. The other two domains were gaming and humorous chatbots, which are both playful agents. Hence, \\textit{personality} is relevant when believability and attitude play a role in the interaction (e.g., in open domain) and when the chatbots' attitude may increase users' mental comfort when performing a task , such as in educational contexts.\nThe surveyed literature revealed two benefits of attributing \\textit{personality} to chatbots:\n\\textbf{[B1] to increase believability:} states that chatbots should have a \\textit{personality}, defined by the Five Factor model, plus characteristics such as temperament and tolerance, in order to build utterances using linguistic choices that cohere with these attributions. When evaluating a humorous chatbot, compared the naturalness of the chatbot's inputs and the chatbot's human-likeness compared to a non-humorous chatbot. The humorous chatbot scored significantly higher in both constructs. also showed that sense of humor humanizes the interactions, since humor was one of the factors that influenced naturalness for the WoZ condition. demonstrated that manipulating language to manifest a target \\textit{personality} produced moderately natural utterances, with a mean rating of 4.59 out of 7 for the \\textit{personality} model utterances.\n\\textbf{[B2] to enrich interpersonal relationships:} chatbots \\textit{personality} can make the interaction more enjoyable . In , the second most frequent motivation for using chatbots, noted by 20\\% of the participants, was entertainment. The authors argue that a chatbot's capacity to be fun is important even when the main purpose is productivity; according to participants, the chatbot's ``\\textit{fun tips}'' enrich the user experience. 
This result is consistent with the experience of first-time users , where participants relate better with chatbots who have consistent \\textit{personality}. show how witty banter and casual, enthusiastic conversations help make the interaction effortless. In addition, a few participants enjoyed the chatbot with a caring \\textit{personality}, who was described as a good listener. In and , the authors argue that a consistent \\textit{personality} helps the chatbot to gain the users' confidence and trust. state that tutor chatbots should display appropriate posture, conduct, and representation, which include being encouraging, expressive, and polite. Accordingly, other studies highlight students' desire for chatbots with positive agreeableness and extraversion . Outcomes consistently suggest that students prefer a chatbot that is not overly polite, but has some attitude. Agreeableness seems to play a critical role, helping the students to feel encouraged and deal with difficulties. Notably, agreeableness requires \\textit{emotional intelligence} to be warm and sympathetic in appropriate circumstances (see Section \\ref{sec:emotionalintelligence}).\nThe reviewed literature pointed out two challenges regarding \\textit{personality}:\n\\textbf{[C1] to adapt humor to the users' culture:} sense of humor is highly shaped by cultural environment . discusses a Japanese chatbot who uses puns to create funny conversations. The authors state that puns are one of the main humor genres in that culture. However, puns are restricted to a specific culture and language, with low portability. Thus, the design challenge lies in personalizing chatbots' sense of humor to the target users' culture and interests or, alternatively, designing cross-cultural forms of humor. 
The ability to adapt to the context and users' preference is discussed in Section \\ref{sec:personalization}.\n\\textbf{[C2] to balance \\textit{personality} traits:} observed that users prefer a proactive, productive, witty chatbot. Yet, they also would like them to be caring, encouraging, and exciting. In , the researchers intentionally generated utterances to reflect extreme personalities; as a result, they observed that some utterances sounded unnatural because a human's \\textit{personality} is a continuous phenomenon, rather than a discrete one. also points out that, although \\textit{personality} is consistent, moods and states of mind constantly vary. Thus, balancing the predictability of the \\textit{personality} and the expected variation is a challenge to overcome.\nWe also identified strategies to design chatbots that manifest \\textit{personality}:\n\\textbf{[S1] to use appropriate language: } and suggest that the chatbot language should be consistently influenced by its \\textit{personality}. Both studies propose that chatbots' architecture should include a persona-based model that encodes the \\textit{personality} and influences the response generation. proposed a framework to show that it is possible to automatically manipulate language features to manifest a particular \\textit{personality} based on the Big Five model. The Big Five model is a relevant tool because it can be assessed using validated psychological instruments . Using this model to represent the \\textit{personality} of chatbots was also suggested by and . discussed that a chatbot's \\textit{personality} should match its domain. Participants expected the language used by the news chatbot to be professional, while they expected the shopping chatbot to be casual and humorous. The ability to use consistent language is discussed in Section \\ref{sec:thoroughness}.\n\\textbf{[S2] to have a sense of humor:} literature highlights humor as a positive \\textit{personality} trait . 
In , ten participants mentioned enjoyment when the chatbots provided humorous and highly diverse responses. The authors found occurrences of the participants asking for jokes and expressing delight when the request was supported. present similar results when arguing that humor is important even for task-oriented chatbots when the user is usually seeking productivity. For casual conversations, highlight that timely, relevant, and clever wit is a desired \\textit{personality} trait. In , the joker chatbot was perceived as more human-like, knowledgeable, and funny, and participants felt more engaged. \n\\textit{Personality} for artificial agents has been studied for some time in the Artificial Intelligence field . Thus, further investigations on chatbots' \\textit{personality} can leverage models for \\textit{personality} and evaluate how they contribute to believability and rapport building.\n\\begin{framed}\\small\n\\vspace{-2mm}\nIn summary, the \\textbf{personification} category includes characteristics that help a chatbot to manifest personal and behavioral traits. The \nbenefits relate to increasing believability, human-likeness, engagement, and interpersonal relationship, which is in line with the benefits of \\textbf{social intelligence}. However, unlike the \\textbf{social intelligence} category, designers and researchers should focus on attributing recognizable \\textit{identity} and \\textit{personality} traits that are consistent with users' expectations and the chatbot's capabilities. In addition, it is important to care about adaptation to users' cultural context and reducing the effects of negative stereotypes.\n\\vspace{-2mm}\n\\end{framed}", "id": "eec7d496-87cc-419a-b26e-00faf7d9c1fd", "level": "subsubsection", "origin_cites_number": 23, "parent_id": "fb12c3f5-18c5-4fb9-8465-0d7950752ebc", "prefix_titles": [ [ "title", "How should my chatbot interact? 
A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Chatbots Social Characteristics" ], [ "subsection", "Personification" ], [ "subsubsection", "Personality" ] ], "subsections": [], "title": "Personality" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, we discuss cross-cutting aspects of the results, namely the influence of the chatbots' perceived humanness, the conversational domains, and the relationships among the characteristics.", "id": "616598c0-a6ef-4e0d-bae2-49c648e87606", "level": "section", "origin_cites_number": 0, "parent_id": "48e1448c-68e1-4eb0-9088-8c4cfe65bb83", "prefix_titles": [ [ "title", "How should my chatbot interact? A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Discussion" ] ], "subsections": [ "212bc0e6-0de8-4c65-af3e-ab937c5ebe0e", "3bce4038-12ed-4398-a390-053fbed73d7f", "267f99ec-a3fd-4ef2-ae14-9b0b6a98c88c" ], "title": "Discussion" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 6267, 7476 ], "content": "The social characteristics identified in the survey align well with characteristics that are present in human-human interactions. On the one hand, this survey shows the benefits of considering these characteristics when developing chatbots, which is supported by the Media Equation theory ; we identified the domains in which each characteristic was studied. On the other hand, previous studies have also shown that developing overly humanized agents results in high expectations and uncanny feelings , which was also pointed out in the surveyed literature as a challenge to conveying particular characteristics, such as \\textit{identity} and \\textit{personality}.\nAn explanation for these contrasts is that interactivity is ``dependent on the identity of the source with whom/which we are carrying out the message exchange''~, i.e., people direct their messages to an artificial agent. 
In interactions with chatbots, the artificial agent represents the social role usually associated with a human, for example, a tutor~, a healthcare provider~, a salesperson~, a hotel concierge~, or even a friend . The idea of assigning a social role to a chatbot does not imply deceiving people into thinking the software is human. Still, as the chatbots enrich their communication and social skills, their perceived social role may approach human profiles.", "id": "212bc0e6-0de8-4c65-af3e-ab937c5ebe0e", "level": "subsection", "origin_cites_number": 12, "parent_id": "616598c0-a6ef-4e0d-bae2-49c648e87606", "prefix_titles": [ [ "title", "How should my chatbot interact? A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Discussion" ], [ "subsection", "A note on chatbots' humanness" ] ], "subsections": [], "title": "A note on chatbots' humanness" }, { "cite_extract_rate": 0, "cites": [], "content": "As Table \\ref{tab:domain-analysis} depicts\\footnote{see more details in the supplementary materials}, almost all social characteristics (10 out of 11) were found in studies related to conversations not restricted to a specific domain (open domain): manners, moral agency, and emotional intelligence are particularly relevant, since many conversations involve personal content (e.g., preferences, personal life, and habits); damage control is necessary since interactions are more susceptible to testing or flaming; identity, personality, and thoroughness are adopted to increase believability and consistency; and conscientiousness, proactivity, and personalization foster user engagement and a coherent conversational flow. The only characteristic that was not investigated in an open domain was communicability. As the conversations in this context mimic human social interactions, conveying the chatbots' capabilities is less of a concern.\nThe reported characteristics vary across task-oriented domains. 
Education and customer services literature report the greatest number of characteristics, but this result might be influenced by the maturity of the research in chatbots for these domains. As we pointed out in Section \\ref{sec:overview}, education and customer services are the task-oriented domains most reported in the literature. We found conscientiousness, damage control, thoroughness, manners, emotional intelligence, and identity in studies for both domains. However, manners and emotional intelligence have a different goal in these domains. In the education context, these characteristics are designed to encourage students, especially in a situation of failure, in which the chatbot should be comforting and sensitive. This function aligns with other domains, such as healthcare. The education domain also reports needs for personality, so the chatbot can be recognized as either an instructor or a student, and proactivity, so the chatbot can motivate students to participate in the interactions. Personality is also reported in other domains in which the chatbots' character influences the interactions, such as gaming and humorous talk. Proactivity is also consistently reported in domains in which the chatbot provides guidance, such as coaching, health, ethnography, and assessment interviews.\nIn customer services, emotional intelligence and manners are used as a means to manage customers' frustrations with either a product or service. This function is also reported in other domains, such as virtual assistants and information search. Additionally, communicability is important to convey the services provided by the chatbot, and personalization helps to provide services that align with a particular customer, which is observed in domains such as ethnography, task management, virtual assistants, and financial services. 
Noticeably, personalization mostly co-occurs with proactivity, since users expect proactive messages to be relevant to their particular interactional context.\nMoral agency is the only characteristic that does not show up across several domains. It was reported only in race-talk and open domain. This highlights that moral agency is a concern when the topic involves content that might lead to inappropriate responses. However, as this survey focuses on characteristics reported in the literature, we believe that it might be influenced by the approach adopted by the researchers: it makes sense to study moral agency in the context of gender or race conversations. We argue, though, that this characteristic might be relevant in domains that were not reported here, but that involve interactions that may lead to biased conversations.", "id": "3bce4038-12ed-4398-a390-053fbed73d7f", "level": "subsection", "origin_cites_number": 0, "parent_id": "616598c0-a6ef-4e0d-bae2-49c648e87606", "prefix_titles": [ [ "title", "How should my chatbot interact? A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Discussion" ], [ "subsection", "The importance of conversational context" ] ], "subsections": [], "title": "The importance of conversational context" }, { "cite_extract_rate": 0.038461538461538006, "cites": [ 7476 ], "content": "\\label{sec:interrelationship}\nIn Section \\ref{sec:socialcharacteristics}, we organized the social characteristics into discrete groups and discussed several instances of characteristics influencing each other, or being used as a strategy to manifest one another. In this section, we describe these relations in a theoretical framework, depicted in Figure \\ref{fig:relationship}. Rather than providing a comprehensive map, the goal here is to make explicit the relationships found in the surveyed chatbots literature, which underlines the complexity of developing chatbots with appropriate social behaviors. 
In Figure \\ref{fig:relationship}, boxes represent the social characteristics and the colors group them into their respective categories. The axes represent the 22 propositions we derived from the literature.\n\\begin{figure}[!hb]\n \\centering\n \\includegraphics[width=8cm]{interrelationship-survey}\n \\caption{Interrelationship among social characteristics}\n \\label{fig:relationship}\n\\end{figure}\nAccording to the surveyed literature, \\textit{proactivity} influences perceived \\textit{personality} \\textbf{(P1)} , since recommending and initiating topics may manifest higher levels of extraversion. When the proactive messages are based on the context, \\textit{proactivity} increases perceived \\textit{conscientiousness} \\textbf{(P2)} , since the proactive messages may demonstrate attention to the topic. \\textit{Proactivity} supports \\textit{communicability} \\textbf{(P3)} , since a chatbot can proactively communicate its knowledge and provide guidance; in addition, \\textit{proactivity} supports \\textit{damage control} \\textbf{(P4)} , since a chatbot can introduce new topics when the user either is misunderstood, tries to break the system, or sends an inappropriate message.\n\\textit{Conscientiousness} is itself a dimension of the Big Five model; hence, \\textit{conscientiousness} influences the perceived \\textit{personality} \\textbf{(P5)} . Higher levels of context management, goal-orientation, and understanding increase the chatbots' perceived efficiency, organization, and commitment . \\textit{Conscientiousness} manifests\\textit{ emotional intelligence} \\textbf{(P6)} since retaining information from previous turns and being able to recall them demonstrates empathy . 
In addition, \\textit{conscientiousness} manifests \\textit{personalization} \\textbf{(P7)} because a chatbot can remember individual preferences within and across sessions.\n\\textit{Emotional intelligence} influences perceived \\textit{personality} \\textbf{(P8)}, since chatbots' \\textit{personality} traits affect the intensity of emotional reactions . Agreeableness is demonstrated through consistent warm reactions such as encouraging and motivating \n. Some \\textit{personality} traits \nrequire \\textit{personalization} \\textbf{(P9)} to adapt to the interlocutors' culture and interests . Besides, \\textit{personalization} benefits \\textit{identity} \\textbf{(P10)}, since the users' social-agent orientation may require a chatbot to adapt the level of engagement in small talk and the agent's visual representation . \\textit{Personalization} also improves \\textit{emotional intelligence} \\textbf{(P11)}, since a chatbot should dynamically regulate the affective reactions to the interlocutor . \\textit{Emotional intelligence} improves perceived \\textit{manners} \\textbf{(P12)}, since the lack of \\textit{emotional intelligence} may lead to the perception of impoliteness .\n\\textit{Conscientiousness} facilitates \\textit{damage control} \\textbf{(P13)}, since the attention to the \nworkflow and context may increase the ability to recover from a failure without restarting the workflow . \\textit{Communicability} facilitates \\textit{damage control} \\textbf{(P14)}, since it teaches the user how to communicate, reducing the numbers of mistakes . In addition, suggesting how to interact can reduce frustration after failure scenarios .\n\\textit{Personalization} manifests \\textit{thoroughness} \\textbf{(P15)} , since chatbots can adapt their language use to the conversational context and the interlocutor's expectations. When the context requires dynamic variation , \\textit{thoroughness} may reveal traits of the chatbot's \\textit{identity} \\textbf{(P16)} . 
As demonstrated by , \\textit{thoroughness} also reveals \\textit{personality} \\textbf{(P17)}.\n\\textit{Manners} influence \\textit{conscientiousness} \\textbf{(P18)} , since they can be applied as a strategy to politely refuse off-topic requests and to keep the conversation on track. \\textit{Manners} also influence \\textit{damage control} \\textbf{(P19)} , because they can help a chatbot prevent verbal abuse and reduce the negative effect of lack of knowledge. Both \\textit{moral agency} \\textbf{(P20)} and \\textit{emotional intelligence} \\textbf{(P21)} improve \\textit{damage control} because they provide\nthe ability to appropriately respond to abuse and testing . \\textit{Identity} influences \\textit{moral agency} \\textbf{(P22)}, since \\textit{identity} representations require the ability to prevent a chatbot from building or reinforcing negative stereotypes .", "id": "267f99ec-a3fd-4ef2-ae14-9b0b6a98c88c", "level": "subsection", "origin_cites_number": 26, "parent_id": "616598c0-a6ef-4e0d-bae2-49c648e87606", "prefix_titles": [ [ "title", "How should my chatbot interact? A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Discussion" ], [ "subsection", "Interrelationships among the characteristics" ] ], "subsections": [], "title": "Interrelationships among the characteristics" }, { "cite_extract_rate": 0.11111111111111101, "cites": [ 8990, 6268 ], "content": "\\label{sec:relatedsurveys}\nPrevious studies have reviewed the literature on chatbots. Several surveys discuss chatbots' urgency and their potential application for particular domains, which include education , business , health , information retrieval and e-commerce . Other surveys focus on technical design techniques , such as language generation models, knowledge management, and architectural challenges. 
Although discuss the social capabilities of chatbots, the survey focuses on the potential of available open source technologies to support these skills, highlighting technical hurdles rather than social ones.\nWe found three surveys that include insights about social characteristics of chatbots, although none of them focus on this theme. investigated chatbots that ``\\textit{mimic conversation rather than understand it},'' and review the main technologies and ideas that support their design, while focuses on identifying best practices for developing script-based chatbots. review the literature on quality issues and attributes for chatbots. The supplementary materials include a table that shows the social characteristics covered by each survey. These related surveys also point out technical characteristics and attributes that are outside the scope of this survey.", "id": "cefcc373-1755-478a-bdc0-b708ace17a7a", "level": "section", "origin_cites_number": 18, "parent_id": "48e1448c-68e1-4eb0-9088-8c4cfe65bb83", "prefix_titles": [ [ "title", "How should my chatbot interact? A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Related Surveys" ] ], "subsections": [], "title": "Related Surveys" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 6269 ], "content": "\\label{sec:limitations}\nThis research has some limitations. Firstly, since this survey focused on disembodied, text-based chatbots, the literature on embodied and speech-based conversational agents was excluded. We acknowledge that studies that include these attributes can have relevant social characteristics for chatbots, especially for characteristics that could be highly influenced by physical representations, tone, accent, and so forth (e.g. identity, politeness, and thoroughness). 
However, embodiment and speech could also bring new challenges (e.g., speech-recognition or eye-gazing), which are beyond the scope of this study and could potentially impact users' experiences with chatbots. Additionally, this decision may have caused the under-representation of certain research domains. For example, there is an established research branch on conversational agents in the Ambient Intelligence discipline~ that primarily investigates multimodal interactions, and hence is not represented in this survey. Another example is the Software Engineering domain (see, e.g.,~), where chatbots have been widely investigated, but focused on the functional, rather than social aspects of the interactions.\nSecondly, since the definition of chatbot is not consolidated in the literature and chatbots have been studied across several different domains, some studies that include social aspects of chatbots may not have been found. To account for that, we adopted several synonyms in our research string and used Google Scholar as search engine, which provides a fairly comprehensive indexing of the literature in more domains. Finally, the conceptual model of social characteristics was derived through a coding process inspired by qualitative methods, such as Grounded Theory. Like any qualitative coding methods, it relies on the researchers' subjective assessment. To mitigate this threat, the researchers discussed the social characteristics and categories during in-person meetings until reaching consensus, and both the conceptual framework and the relationships amongst the characteristics were derived based on outcomes the surveyed studies explicitly reported.", "id": "e47b0d6a-5496-42d1-b93d-9f8d302b471a", "level": "section", "origin_cites_number": 6, "parent_id": "48e1448c-68e1-4eb0-9088-8c4cfe65bb83", "prefix_titles": [ [ "title", "How should my chatbot interact? 
A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Limitations" ] ], "subsections": [], "title": "Limitations" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:conclusion}\nIn this survey, we investigated the literature on disembodied, text-based chatbots to answer the question \\textit{``What chatbot social characteristics benefit human interactions and what are the challenges and strategies associated with them?''} Our main contribution is the conceptual model of social characteristics, from which we can derive conclusions about several research opportunities. Firstly, we point out several challenges to overcome in order to design chatbots that manifest each characteristic. Secondly, further research may focus on assessing the extent to which the identified benefits are perceived by users and influence their satisfaction. Finally, further investigations may propose new strategies to manifest particular characteristics. In this sense, we highlight that we could not identify strategies to manifest moral agency and thoroughness, although strategies for several other characteristics are also under-investigated.\nWe reported domains where social characteristics were primarily investigated. We showed that some characteristics are largely influenced by the domain (e.g., \\textit{moral agency} and \\textit{communicability}), while others are more generally applied (e.g., \\textit{manners} and \\textit{damage control}). We also discussed the relationship among the characteristics through 22 propositions derived from the surveyed literature, which underline the complexity of developing chatbots with appropriate social behaviors. Although the propositions and social characteristics do not intend to be comprehensive, they cover important aspects of human-human communication. Social sciences such as sociolinguistics and communication studies have a good deal to contribute to the design of chatbots. 
In the description of the characteristics, we pointed out theories that could guide these investigations, such as the cooperative principle , social identity , personalization , politeness , and mind perception theories . For example, \\textit{conscientiousness}, \\textit{thoroughness}, and \\textit{proactivity} might be covered by the cooperative principle , and social identity theory might explain the relevance of \\textit{identity}, \\textit{personality}, and \\textit{moral agency} for conversational technologies. Our results provide important references for helping designers and researchers find opportunities to advance the human-chatbot interactions field.\n\\section*{Disclosure statement}\nNo potential conflict of interest was reported by the authors.\n\\bibliographystyle{apacite}\n\\bibliography{refs}\n\\section*{About the Authors}\n\\textbf{Ana Paula Chaves} is a Ph.D. Candidate at the Northern Arizona University and a Faculty at the Federal University of Technology - Paraná, Campus Campo Mourão, Brazil. She researches social aspects of human-chatbot interactions as well as technologies for tourism. More information at http://anachaves.pro.br\n\\textbf{Marco Aurelio Gerosa} is an Associate Professor at the Northern Arizona University. He researches Software Engineering and CSCW, having served on the program committee of important conferences, such as FSE, CSCW, SANER, and MSR. He is an ACM and IEEE senior member. For more information, visit http://www.marcoagerosa.com\n\\end{document}", "id": "1430d8e4-a5c1-4f7d-ae08-3a9fcec59bb4", "level": "section", "origin_cites_number": 6, "parent_id": "48e1448c-68e1-4eb0-9088-8c4cfe65bb83", "prefix_titles": [ [ "title", "How should my chatbot interact? A survey on social characteristics in human-chatbot interaction design" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
87
[ 6262, 6263, 7476, 6264, 6265, 6266, 2047, 650, 6267, 8990, 6268, 6269 ]
1.487598
[ "Petr Chunaev" ]
Community detection in node-attributed social networks: a~survey
2019
2019-12-20T13:35:32Z
cs.SI
Community detection is a fundamental problem in social network analysis consisting, roughly speaking, in dividing social actors (modelled as nodes in a social graph) with certain social connections (modelled as edges in the social graph), in an unsupervised manner, into densely knitted and highly related groups with each group well separated from the others. Classical approaches for community detection usually deal only with the structure of the network and ignore features of the nodes (traditionally called node attributes), although the majority of real-world social networks provide additional actors' information such as age, gender, interests, etc. It is believed that the attributes may clarify and enrich the knowledge about the actors and give sense to the detected communities. This belief has motivated the progress in developing community detection methods that use both the structure and the attributes of the network (modelled already via a node-attributed graph) to yield more informative and higher-quality community detection results. During the last decade many such methods based on different ideas and techniques have appeared. Although there exist partial overviews of them, a recent survey is a necessity, as the growing number of the methods may cause repetitions in methodology and uncertainty in practice. In this paper we aim at describing and clarifying the overall situation in the field of community detection in node-attributed social networks. Namely, we perform an exhaustive search of known methods and propose a classification of them based on when and how the structure and the attributes are fused. We not only give a description of each class but also provide general technical ideas behind each method in the class. Furthermore, we pay attention to the available information on which methods outperform others and which datasets and quality measures are used for their performance evaluation. 
Based on the information collected, we draw conclusions on the current state of the field and disclose several problems that seem important to resolve in the future.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "4b2a7071-f315-4436-8aea-91d8d6db7afb", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ] ], "subsections": [ "fb35c7bc-cc42-4de0-92ed-61876f9f332b", "f91b6565-cd82-4d66-b50d-4e9374bb4beb", "f7c5bcf3-a055-4452-990c-b315c5512443", "124afd9d-8240-4dd5-9e90-68d0d18e9a07", "c0718d2b-4a41-48ee-939a-4ab8664cabe6", "1c6b434c-2289-4e6e-9627-ef1149463588", "2eeee52e-f7f4-4129-a5ef-03fe424ccf1f", "a4712ae6-1690-4727-8bbb-b44187674ee8", "ec3f6cdd-43b4-49d2-ab2f-b07a25b101ed", "7362ce42-549a-4724-9b4c-b963b4574449" ], "title": "root" }, { "cite_extract_rate": 0.5, "cites": [ 8481, 1710 ], "content": "Community detection is a fundamental problem in social network analysis consisting, roughly speaking, in the unsupervised division of social actors into densely knit and highly related groups with each group well separated from the others. One class of classical community detection methods deals only with the {\\it structure} of social networks (i.e. with connections between social actors) and ignores actors' features. There exists a variety of such structure-aware methods that have shown their efficiency in multiple applications (see ). However, the majority of real-world social networks provide more information about social actors than just connections between them. Indeed, it is rather common that certain actors' {\\it attributes} such as age, gender, interests, etc., are available. When this is the case, the social network is called {\\it node-attributed} (recall that the actors are represented via nodes). According to , attributes form the second dimension, besides the structural one, in social network representation. 
There is another class of classical community detection methods (being opposite to the structure-aware ones, in a sense) that use only node attributes to detect communities and completely ignore connections between social actors. A representative of the attributes-aware methods is the well-known $k$-means clustering algorithm, which takes attribute vectors as input. Clearly, methods that deal only with structure or only with attributes do not use all the information available in a node-attributed social network. Naturally, this issue can be overcome if a method somehow jointly uses structure and attributes while detecting communities. The development of such methods became a novel field in social network analysis . The field is, moreover, promising, as the joint usage is believed to clarify and enrich the knowledge about social actors and to describe the forces that form their communities . \nDuring the last decade numerous methods based on different ideas and techniques have appeared in the field. Although there exist some partial overviews of them, especially in Related Works sections of published papers and in the survey published in 2015, a recent summary of the subject is a necessity, as the growing number of the methods may cause repetitions in methodology and uncertainty in practice. \nIn this survey, we aim at describing and clarifying the overall situation in the field. Namely, we perform an exhaustive search of existing community detection methods for node-attributed social networks. What is more, we propose a classification of them based on when and how they use and fuse network structure and attributes. We not only give a description of each class but also provide general technical ideas behind each method in the class. Furthermore, we pay attention to available information on which methods outperform others and which datasets and quality measures are used for their performance evaluation. 
Based on the information collected, we draw conclusions on the current state of the field and disclose several problems that seem important to resolve in the future. \nTo be more precise, let us describe the content of the survey. In Section~\\ref{section2}, we first provide the reader with the notation used in the survey and state the problem of community detection in node-attributed social networks. We further briefly discuss the traditional argumentation in support of such community detection and the effect of fusing network structure and attributes. In\nSection~\\ref{section3}, we give information about the related survey works and explain how the search of relevant literature was organized in our case. We also indicate which methods are included in the survey and which are not. Additionally, we explain why references throughout the survey are made in a certain way. Section~\\ref{section4} introduces the classification that we propose for community detection methods under consideration. In Section~\\ref{section5}, we discuss the most popular datasets and quality measures for evaluation of community detection results. This section is also helpful for simplifying exposition in the forthcoming sections. Sections~\\ref{section6}--\\ref{section8} contain descriptions of the classes of methods and their representatives. In Section~\\ref{section9}, we analyze the overall situation in the field based on the information from Sections~\\ref{section6}--\\ref{section8}. Among other things, we disclose several methodological problems that are important to resolve in future studies, in our opinion. 
Our conclusions on the topic are summarized in Section~\\ref{section10}.", "id": "fb35c7bc-cc42-4de0-92ed-61876f9f332b", "level": "section", "origin_cites_number": 4, "parent_id": "4b2a7071-f315-4436-8aea-91d8d6db7afb", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section2}", "id": "f91b6565-cd82-4d66-b50d-4e9374bb4beb", "level": "section", "origin_cites_number": 0, "parent_id": "4b2a7071-f315-4436-8aea-91d8d6db7afb", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Community detection problem for node-attributed social networks and the~effect of fusing network structure and attributes" ] ], "subsections": [ "796f6d71-50d7-49c9-9b0b-67ff77b34e9c", "c2f50602-97d7-4993-94e6-9400d76e89e5" ], "title": "Community detection problem for node-attributed social networks and the~effect of fusing network structure and attributes" }, { "cite_extract_rate": 0, "cites": [], "content": "We represent a node-attributed social network as triple ({\\it node-attributed graph}) $G=(\\mathcal{V},\\mathcal{E},\\mathcal{A})$, where $\\mathcal{V}=\\{v_i\\}$ is the set of nodes (vertices) representing social actors, $\\mathcal{E}= \\{e_{ij}\\}$ the set of edges representing connections between the actors ($e_{ij}$ stands for the edge between nodes $v_i$ and $v_j$), and $\\mathcal{A}$ the set of attribute vectors $A(v_i)=\\{a_{k}(v_i)\\}$ associated with nodes in $\\mathcal{V}$ and containing information about actors' features. Furthermore, $|\\mathcal{V}|=n$, $|\\mathcal{E}|=m$ and the dimension of the attribute vectors is $d$. The domain of $a_k$, i.e. the set of possible values of $k$th element of attribute vectors $a_k(v_i)$, is denoted by $dom(a_k)$. In these terms, $k$th attribute of node $v_i$ is referred to as $a_k(v_i)$. 
The notation introduced above is summarized in Figure~\\ref{fig:notation}. Note that pairs $(\\mathcal{V},\\mathcal{E})$ and $(\\mathcal{V},\\mathcal{A})$ are correspondingly called the {\\it structure} (or {\\it topology}) and the {\\it attributes} (or {\\it semantics}) of node-attributed graph $G$.\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.9\\linewidth]{notation.pdf}\n\t\\caption{Notation related to the triple (node-attributed graph) $G=(\\mathcal{V},\\mathcal{E},\\mathcal{A})$, where $\\mathcal{V}=\\{v_i\\}$, $\\mathcal{E}=\\{e_{ij}\\}$ and $\\mathcal{A}=\\{A(v_i)\\}$.}\n\t\\label{fig:notation}\n\\end{figure}\nBy {\\it community detection\\footnote{It is also called ``community discovery'' or ``clusterization''.} in node-attributed graph} $G=(\\mathcal{V},\\mathcal{E},\\mathcal{A})$ we mean {\\it unsupervised} partitioning the set of nodes $\\mathcal{V}$ into $N$ subsets ({\\it communities} or {\\it clusters}) $C_k\\subset \\mathcal{V}$, with $C=\\{C_k\\}_{k=1}^N$ called a {\\it partition}, such that\n$\\mathcal{V}=\\bigcup_{k=1}^N C_k$ and a certain balance between the following two properties is achieved:\n\\begin{itemize}\n\t\\item[(a)] {\\it structural closeness}, i.e. nodes within a community are structurally close to each other, while nodes in different communities are not;\n\t\\item[(b)] {\\it attribute homogeneity}, i.e. nodes within a community have homogeneous attributes, while nodes in different communities do not.\n\\end{itemize}\nSince one can meet variations of the above-mentioned definitions in the relevant literature, it is worth giving several comments on them. First, the number of communities $N$ can be either known in advance or determined during the community detection process automatically. Second, the communities $C_k$ may be defined to be disjoint or overlapping. 
Third, the property $\\mathcal{V}=\\bigcup_{k=1}^N C_k$ is sometimes omitted if the resulting partition is not required to include all the nodes from $\\mathcal{V}$. Fourth, \nthe notion of structural closeness and attribute homogeneity may seem vague at the moment but hopefully become more evident after Subsection~\\ref{subsection-str-closeness-attr-homo} and Section~\\ref{section4} where reasons and particular measures for them are discussed. Fifth, the definitions given are for the case of nodes and edges each of one type. It is of course possible that social actors and connections between them can be of different types in a social network and thus one should take this heterogeneity into account. This situation is however closer to the notion of multi-layer networks that are out of scope of the present survey (see also Section~\\ref{section3}).", "id": "796f6d71-50d7-49c9-9b0b-67ff77b34e9c", "level": "subsection", "origin_cites_number": 0, "parent_id": "f91b6565-cd82-4d66-b50d-4e9374bb4beb", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Community detection problem for node-attributed social networks and the~effect of fusing network structure and attributes" ], [ "subsection", "Necessary notation and the community detection problem statement" ] ], "subsections": [], "title": "Necessary notation and the community detection problem statement" }, { "cite_extract_rate": 0.36363636363636304, "cites": [ 8483, 1712, 1710, 8482, 1711, 8484, 217, 9116 ], "content": "\\label{subsection-str-closeness-attr-homo}\nThe structural closeness requirement is based on the recent concepts of a (structural) community in a social network. For example, communities are thought in as subsets of nodes with dense connections within the subsets and sparse in between. In its turn, adopts the intuition that nodes within the same community should be better connected than they would be by chance. 
A possible measure for that is Newman's famous Modularity, which has become an influential tool for structure-based community detection in social networks . Multiple Modularity modifications and other measures have also been proposed to assess structural closeness . In fact, the precise meaning of structural closeness in each community detection method is determined by the measure chosen.\nThe attribute homogeneity requirement is based on the social science finding (see e.g. ) that actors' features can reflect and affect community structure in social networks. The well-known principle of homophily in social networks states that like-minded social actors have a higher likelihood to be connected . Thus a community detection process that takes attribute homogeneity into account may provide results of better quality . In contrast to the situation with structural closeness measures, attribute homogeneity is usually measured by Entropy, which quantifies the degree of disorder of attribute vectors in $(\\mathcal{V},\\mathcal{A})$ within the communities detected.\nLet us now discuss different points of view on the effect of fusing structure and attributes.\nOn the one hand, multiple experiments, e.g. in and many other papers cited in this survey, suggest that the structure and the attributes of a node-attributed social network often provide complementary information that improves community detection quality. For example, attributes may compensate for the structural sparseness of a real-world social network , while structural information may be helpful in resolving the problem of missing or noisy attributes . What is more, it is observed in that structure-only or attributes-only community detection is often not as effective as when both sources of information are used. On the other hand, some experiments (see e.g. ) suggest that this is not always true, and network structure and attributes may be orthogonal and contradictory, thus leading to ambiguous community detection results. 
Moreover, relations between these sources of information may be highly non-linear and challenging to analyze . \nBesides the above-mentioned points, our general impression is that there is no widely accepted opinion on the effect of fusing structure and attributes and how this fusion can influence community detection quality. Let us illustrate this with an example.\nSuppose that the structure of a certain node-attributed social network is ideally correlated with one half of the attributes and wholly orthogonal to the other half. For simplicity, let the dimension of attribute vectors be small so that there is no need to fight against the curse of dimensionality. Now we follow the popular suggestion that the mismatch between structure and attributes negatively affects community detection quality and that the existence of structure-attributes correlation offers ``a unique opportunity to improve the learning performance of various graph mining tasks'' . The choice is clear then: we need to use the structure and only the ideally correlated attributes for community detection. It turns out, however, that we are going to use two sources of information that mostly duplicate each other. Why should we expect that this improves the quality of detected communities? From our side, we would presume that the structure and the chosen half of attributes (considered separately or jointly) would yield very similar or even the same communities, with all the ensuing consequences for assessing their quality. Shouldn't we use just one of the sources then? Furthermore, it is not clear to us why the other half of attributes should be omitted. 
Generally speaking, they may contain valuable information for community detection, and thus omitting them because of the lack of correlation with the structure is rather questionable.\nIn any case, a focused theoretical study of when the fusion of structure and attributes is worthwhile for community detection and when it is not (ideally, in terms of subclasses of node-attributed social networks) seems extremely important, as it would allow one to remove the above-mentioned contradictions.", "id": "c2f50602-97d7-4993-94e6-9400d76e89e5", "level": "subsection", "origin_cites_number": 22, "parent_id": "f91b6565-cd82-4d66-b50d-4e9374bb4beb", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Community detection problem for node-attributed social networks and the~effect of fusing network structure and attributes" ], [ "subsection", "Structural closeness and attribute homogeneity. The effect of fusing structure and attributes" ] ], "subsections": [], "title": "Structural closeness and attribute homogeneity. 
The effect of fusing structure and attributes" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section3}", "id": "f7c5bcf3-a055-4452-990c-b315c5512443", "level": "section", "origin_cites_number": 0, "parent_id": "4b2a7071-f315-4436-8aea-91d8d6db7afb", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Related works and processing the relevant literature" ] ], "subsections": [ "e751245c-7c36-43ca-8918-65a42b01d47c", "b12abaa9-ebec-4b58-8134-5afb1ff9a9f1", "f0793b92-de74-4aae-8b54-7257a1189cf5", "e38efd05-8a77-49a3-ba2a-eff593b2c129", "29a40dbb-fbd7-48bc-96e3-b7914c7e2c83", "32b01039-79ed-4c11-816c-f79bd32ba71d" ], "title": "Related works and processing the relevant literature" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 8481, 1713, 1714, 8485, 1710 ], "content": "There is a variety of surveys and comparative studies considering community detection in social networks without attributes, in particular, . At the same time, the survey seems to be the only one on community detection in node-attributed social networks. Obviously, since it was published in 2015, many new methods adapting different fusion techniques have appeared in the field. Furthermore, a large number of the methods that had been proposed before 2015 are not covered by , in particular, some based on objective function modification, non-negative matrix factorization, probabilistic models, clustering ensembles, etc. In a sense, the technique-based classification of methods in is also sometimes confusing. For example, CODICIL , a method based on assigning attribute-aware weights on graph edges, is not included in \\cite[Section 3.2. Weight modification according to node attributes]{Bothorel2015}, but in \\cite[Section 3.7. Other methods]{Bothorel2015}. 
Although is a well-written and highly cited paper, a recent survey of community detection methods for node-attributed social networks is clearly required.\nBesides , almost every paper on the topic contains a Related Works section. It typically has a short survey of preceding methods and an attempt to classify them. We observed that many authors are only partly aware of the relevant literature, and this sometimes leads to repetitions in approaches. Furthermore, multiple classifications (usually technique-based) are often incomplete and even contradictory.", "id": "e751245c-7c36-43ca-8918-65a42b01d47c", "level": "subsection", "origin_cites_number": 6, "parent_id": "f7c5bcf3-a055-4452-990c-b315c5512443", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Related works and processing the relevant literature" ], [ "subsection", "Related works" ] ], "subsections": [], "title": "Related works" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsection 3.2}\nAt the beginning, we started the search of relevant literature using regular and scientific search engines, making queries like ``community detection'' or ``clustering'' or\n``community discovery'' in ``node-attributed social networks'' or ``node-attributed graphs''. Within the search process it became evident that other queries also lead to the relevant literature. In particular, ``clustering an attribute-aware graph'', ``community detection in networks with node attributes'', ``description-oriented community detection'', ``semantic clustering of social networks'', ``structure and attributes community detection'', ``joint cluster analysis of attribute and relationship data'', ``community discovery integrating links and tags'', ``attributed graph partitioning'', ``node attribute-enhanced community detection'', ``community detection based on structure and content'', etc. 
It can also be observed that node-attributed networks and graphs are sometimes called ``augmented networks'', ``graphs with feature vectors'', ``feature-rich networks'' and ``multi-attributed graphs''.\nThis variety of terms suggests that there is still no established terminology in the field and emphasizes the significance of our survey, where we try to use consistent terminology.\nAfter the above-mentioned exhaustive search, we studied the references in the papers found. Among other things, this brought us to conceptually close papers devoted, for example, to ``attributed information networks'', ``annotated document networks'', ``multi-layer networks'' and ``subspace-based clustering''. We stopped further search when we could not find any new relevant references. Since this happened in the middle of 2019, the survey covers the papers found that had been published in journals or conference proceedings before this date.", "id": "b12abaa9-ebec-4b58-8134-5afb1ff9a9f1", "level": "subsection", "origin_cites_number": 0, "parent_id": "f7c5bcf3-a055-4452-990c-b315c5512443", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Related works and processing the relevant literature" ], [ "subsection", "Relevant literature search process" ] ], "subsections": [], "title": "Relevant literature search process" }, { "cite_extract_rate": 0, "cites": [], "content": "It turns out that several methods for community detection in node-attributed social networks can be proposed in one paper. Therefore, a regular reference of the form [ReferenceNumber] may not be informative enough. On the other hand, authors usually provide their methods with short names like SA-Cluster, CODICIL or CESNA\\footnote{If this is not the case, we allowed ourselves to invent our own names suggesting the class the method belongs to in our classification.}. Some of the names are rather familiar to researchers in the field. 
Thus it seems reasonable to make a reference of the form MethodName [ReferenceNumber] and so we do in what follows. However, not all the methods that are mentioned in the survey are included in our classification and thus discussed in a more detailed manner (as some are just out of scope). To distinguish the cases, we write names of the methods included in our classification in {\\bf bold}, e.g. {\\bf SCMAG} , {\\bf UNCut} and {\\bf DCM} . Such a format means that the reader can find short descriptions of the methods {\\bf SCMAG} , {\\bf UNCut} and {\\bf DCM} in our survey. References like DB-CSC , FocusCO and ACM mean that the corresponding methods are not included in our classification. In this case the reader is recommended to go directly to the papers , and to get additional information. \nA similar scheme is applied to names of the node-attributed network datasets discussed in Section~\\ref{section5} and used in further classification. The reader is assumed to have in mind that if a dataset name is written in {\\bf bold}, then its description can be found in Tables~\\ref{tab_dataset1}, \\ref{tab_dataset2} or \\ref{tab_dataset3}. Note also that various versions of the datasets from Tables~\\ref{tab_dataset1}, \\ref{tab_dataset2} or \\ref{tab_dataset3} are in fact used in different papers. To show that a dataset is somehow different from the description in Tables~\\ref{tab_dataset1}, \\ref{tab_dataset2} or \\ref{tab_dataset3}, we mark it by *. 
For example, a DBLP dataset with the number of nodes and edges different from {\\bf DBLP10K} and {\\bf DBLP84K} in Table~\\ref{tab_dataset2} is denoted by {\\bf DBLP*} in Sections~\\ref{section6}--\\ref{section8}.", "id": "f0793b92-de74-4aae-8b54-7257a1189cf5", "level": "subsection", "origin_cites_number": 6, "parent_id": "f7c5bcf3-a055-4452-990c-b315c5512443", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Related works and processing the relevant literature" ], [ "subsection", "The format of references to methods and datasets" ] ], "subsections": [], "title": "The format of references to methods and datasets" }, { "cite_extract_rate": 0, "cites": [], "content": "In the survey, we do not consider community detection methods for multi-layer networks, where different types of vertices and edges may be present at different layers . Nevertheless, we mention some of these methods from time to time in corresponding remarks. It is, though, important to note that node-attributed networks of different nature may clearly be considered as a particular case of multi-layer ones. In the majority of papers covered by the present survey, this connection is, however, rarely commented on. \nLet us also emphasize that multi-layer networks (graphs) require special analysis taking into account the heterogeneity of vertices and edges on different layers. A~separate survey and an extensive comparative study of such methods are independent and useful tasks (see partial overviews e.g. 
in ).", "id": "e38efd05-8a77-49a3-ba2a-eff593b2c129", "level": "subsection", "origin_cites_number": 3, "parent_id": "f7c5bcf3-a055-4452-990c-b315c5512443", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Related works and processing the relevant literature" ], [ "subsection", "Note on multi-layer network clustering" ] ], "subsections": [], "title": "Note on multi-layer network clustering" }, { "cite_extract_rate": 0, "cites": [], "content": "Following the above-mentioned definition of community detection in node-attributed social networks, we mainly consider in the survey the methods that can use the full attribute space and find communities covering the whole network. However, there is a big class of special methods that explore subspaces of attributes and/or deal with subgraphs of an initial graph, e.g. GAMer , DB-CSC , SSCG , FocusCO and ACM .\nThe main idea behind the subspace-based (also known as projection-based) clustering methods is that not all available semantic information (attributes) is relevant to obtain good-quality communities . For this reason, one has to somehow choose the appropriate attribute subspace to avoid the so-called {\\it curse of dimensionality} (see \\cite[Section 3.2]{Bothorel2015}) and reveal significant communities that would not be detected if all available attributes were considered.\nTo be precise, some of the methods that we discuss below partly use this idea, e.g. {\\bf WCru} (cf. the definition of a {\\it point of view} in the papers), {\\bf DVil} , {\\bf SCMAG} , {\\bf UNCut} , {\\bf DCM} , etc., but still can work with the full attribute space. 
In any case, a separate survey on subspace-based methods for community detection in node-attributed social networks would be a very valuable complement to the current one.", "id": "29a40dbb-fbd7-48bc-96e3-b7914c7e2c83", "level": "subsection", "origin_cites_number": 14, "parent_id": "f7c5bcf3-a055-4452-990c-b315c5512443", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Related works and processing the relevant literature" ], [ "subsection", "Note on subspace-based clustering" ] ], "subsections": [], "title": "Note on subspace-based clustering" }, { "cite_extract_rate": 0.30000000000000004, "cites": [ 1715, 1714, 1712 ], "content": "Clearly, community detection tools for node-attributed social networks are suitable for networks of different nature. That is why, besides obvious applications in marketing (recommender systems, targeted advertisements and user profiling) , the tools are used for search engine\noptimization and spam detection , in counter-terrorist activities and disclosing fraudulent schemes . They are also applied to the analysis of protein-protein interactions, genes and epidemics .\nAnother possible application is document network clustering. Note that such clustering historically precedes community detection in node-attributed social networks and is methodologically rich on its own . One should take into account, however, that although social communities have a formal description similar to that of document clusters, the inner forces that form them and drive their behaviour are more complicated. What is more, it has been shown that methods for community detection in node-attributed social networks outperform preceding methods for document network clustering in many cases, see . 
For this reason, we do not consider document network clustering methods in this survey.", "id": "32b01039-79ed-4c11-816c-f79bd32ba71d", "level": "subsection", "origin_cites_number": 10, "parent_id": "f7c5bcf3-a055-4452-990c-b315c5512443", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Related works and processing the relevant literature" ], [ "subsection", "Note on community detection in node-attributed networks of different type and its applications" ] ], "subsections": [], "title": "Note on community detection in node-attributed networks of different type and its applications" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section4}\nIn previous works, the classification of methods for community detection in node-attributed social networks was done mostly with respect to the techniques used (e.g. distance-based or random walk-based). We partly follow this principle but at a lower level. At the upper level we group the methods by when structure and attributes are fused with respect to the community detection process, see Figure~\\ref{fig:pics_clafficication}. 
Namely, we distinguish the classes of \n\t\\begin{itemize}\n\t\t\\item {\\it early fusion methods} that fuse structure and attributes before the community detection process,\n\t\t\\item {\\it simultaneous fusion methods} that fuse structure and attributes simultaneously with the community detection process,\n\t\t\\item {\\it late fusion methods} that first partition structure and attributes separately and further fuse the partitions obtained.\n\t\\end{itemize}\n\t\\begin{figure}[b]\n\\begin{minipage}[t]{0.325\\textwidth}\n \\includegraphics[width=\\textwidth]{early.pdf}\n \\centering \n {\\bf Early fusion methods}\\\\ (Section~\\ref{section6})\n \\vspace{0.3cm}\n \\hrule\n \\vspace{0.3cm}\n Weight-based (Section~\\ref{class-ef-weight}) \n \\vspace{0.3cm}\n Distance-based (Section~\\ref{class-ef-distance})\n \\vspace{0.3cm}\n Node-augmented graph-based (Section~\\ref{class-ef-node-augmented-graph}) \n \\vspace{0.3cm}\n Embedding-based (Section~\\ref{class-ef-embedding})\n \\vspace{0.3cm}\n Pattern mining-based (Section~\\ref{class-ef-patterns})\n\\end{minipage}\n\\begin{minipage}[t]{0.325\\textwidth}\n \\includegraphics[width=\\textwidth]{sim.pdf}\n \\centering \n {\\bf Simultaneous fusion methods} (Section~\\ref{section7})\n \\vspace{0.3cm}\n \\hrule\n \\vspace{0.3cm}\n Methods modifying objective functions of classical clustering algorithms (Section~\\ref{class-sf-modifying}) \n \\vspace{0.3cm}\n Metaheuristic-based (Section~\\ref{class-sf-mataheuristic})\n \\vspace{0.3cm}\n Methods based on non-negative matrix factorization and matrix compression (Section~\\ref{class-sf-nnmf}) \n \\vspace{0.3cm}\n Pattern mining-based (Section~\\ref{class-sf-patterns}) \n \\vspace{0.3cm}\n Probabilistic model-based (Section~\\ref{class-sf-probabilistic-models}) \n \\vspace{0.3cm}\n Dynamical system-based and agent-based (Section~\\ref{class-sf-dynamics})\n\\end{minipage}\n\\begin{minipage}[t]{0.325\\textwidth}\n \\includegraphics[width=\\textwidth]{scheme-late.pdf}\n 
\\centering\n {\\bf Late fusion methods}\n (Section~\\ref{section8})\n \\vspace{0.3cm}\n \\hrule\n \\vspace{0.3cm}\n Consensus-based (Section~\\ref{class-lf-consensus})\n \\vspace{0.3cm}\n Switch-based (Section~\\ref{class-lf-switch})\n\\end{minipage}\n \\caption{The proposed classification and the survey structure guide.}\n \\label{fig:pics_clafficication}\n\\end{figure}\nSuch a classification allows an interested researcher/data scientist to estimate the labour costs of software implementation of the method chosen for use in practice.\nIndeed, early fusion methods require just a preprocessing (fusion) procedure converting information about structure and attributes into a format that is often suitable for classical community detection algorithms. For example, weight-based early fusion methods convert attribute vectors into the graph form and further merge the structure- and attributes-aware graphs into a weighted graph that can be processed by graph clustering algorithms with existing implementations. Implementation of late fusion methods is also rather simple. Namely, at the first step they detect communities by classical graph and vector clustering algorithms applied to structure and attributes separately. At the second step, the communities obtained are somehow merged by an existing (or at least easily implemented) algorithm. In contrast to the early and late fusion methods, simultaneous fusion ones require more programming work, as they usually assume either a completely new implementation or essential modifications to existing ones. The situation is aggravated by the fact that their source codes are rarely available.\nAs seen from Figure~\\ref{fig:pics_clafficication}, we also divide the methods within each class into subclasses according to the techniques used. Let us emphasize that the priority in this division is the fusion technique. For example, by ``weight-based methods'' we mean those which form a weighted graph while fusing structure and attributes. 
The majority of such methods further use graph clustering algorithms for community detection (and this is reasonable). However, some may still transform the graph into a distance matrix and then use distance-based clustering algorithms. Such methods are still called ``weight-based''. One more example: ``distance-based methods'' are called this way because they directly produce a distance matrix while fusing structure and attributes, independently of how this matrix is further processed.
What is more, we provide short descriptions of the methods in each class and subclass in tables in Sections~\ref{section6}--\ref{section8}. In particular, we briefly describe the community detection algorithm used in each method and its input, as well as the type of communities obtained (overlapping or not). Furthermore, we mention which datasets and quality measures are used by the methods' authors for the evaluation of community detection results. In addition, for each method we provide a list of other methods for {\it community detection in node-attributed networks} (using both structure and attributes) that the method under consideration is compared with. Note that this list may sometimes be empty. 
This is so, for example, if the method under consideration is compared only with classical community detection methods that deal either with structure or attributes and do not fuse them to detect communities.", "id": "124afd9d-8240-4dd5-9e90-68d0d18e9a07", "level": "section", "origin_cites_number": 0, "parent_id": "4b2a7071-f315-4436-8aea-91d8d6db7afb", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Our classification of community detection methods for~node-attributed~ social~networks" ] ], "subsections": [], "title": "Our classification of community detection methods for~node-attributed~ social~networks" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section5}", "id": "c0718d2b-4a41-48ee-939a-4ab8664cabe6", "level": "section", "origin_cites_number": 0, "parent_id": "4b2a7071-f315-4436-8aea-91d8d6db7afb", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Most used node-attributed network datasets and quality measures for community detection evaluation" ] ], "subsections": [ "36da5b81-971c-4135-9405-fdc43b1406db", "01bdf590-8667-4b9e-b2aa-932d3480f52b" ], "title": "Most used node-attributed network datasets and quality measures for community detection evaluation" }, { "cite_extract_rate": 0.25, "cites": [ 1714, 8486, 1716 ], "content": "The title of the survey suggests that we are focused on community detection in node-attributed {\\it social} networks. However, the methods that are included in our classification, generally speaking, may be applied to node-attributed networks of different nature. 
As we have observed, the authors of the methods implicitly share this point of view and freely use various node-attributed network datasets to evaluate community detection quality.
In this subsection (Tables~\ref{tab_dataset1}--\ref{tab_dataset3}) we collect and briefly describe the datasets that are popular in the field\footnote{An interested reader can find more node-attributed network datasets at \href{http://www-personal.umich.edu/~mejn/netdata/}{Mark Newman page}, \href{https://hpi.de/naumann/projects/repeatability/datasets/}{HPI Information Systems Group}, \href{https://linqs.soe.ucsc.edu/data}{LINQS Statistical Relational Learning Group}, \href{http://snap.stanford.edu/data/}{Stanford Large Network Dataset Collection}, \href{http://dp.univr.it/~laudanna/LCTST/downloads/index.html}{University of Verona Laboratory of Cell Trafficking and Signal Transduction}, \href{https://perso.liris.cnrs.fr/marc.plantevit/doku/doku.php?id=data_sets}{Marc Plantevit page}, \href{https://toreopsahl.com/datasets/}{Tore Opsahl page}, \href{https://sites.google.com/site/ucinetsoftware/datasets}{UCINET networks}, \href{http://networkrepository.com}{Interactive Scientific Network Data Repository}, \href{https://aminer.org/citation}{Citation Network Dataset}.
}. Recall that the dataset names written in {\bf bold} in Sections~\ref{section6}--\ref{section8} refer to the tables in this subsection. 
It can be observed that Tables~\ref{tab_dataset1}--\ref{tab_dataset3} contain datasets based on social network data (e.g. Facebook, LastFM and Twitter) and on document or citation network data (e.g. DBLP, Wiki and Patents). 
For convenience, we distinguish the datasets by size. 
Namely, by {\\it small}, {\\it medium} and {\\it large} we mean network datasets with $< 10^3$, $10^3\\ldots 10^5$ and $>10^5$ nodes, correspondingly.\n\\begin{table}\n\t\\caption{Most popular small size datasets.}\n\t\\label{tab_dataset1} \n\t{\\tiny\n\t\t\\begin{tabular}{|p{1.2cm}|p{12.9cm}|p{0.8cm}|} \n\t\t\t\\hline \n\t\t\tDataset & Description & Source\\\\ \n\t\t\t\\hline \n\t\t\t{\\bf Political Books} & All books in this dataset were\n\t\t\tabout U.S. politics published during the 2004 presidential election\n\t\t\tand sold by Amazon.com. Edges between books means\n\t\t\ttwo books are always bought together by customers. Each\n\t\t\tbook has only one attribute termed as political persuasion, with\n\t\t\tthree values: 1) conservative; 2) liberal; and 3) neutrality & \\href{http://www-personal.umich.edu/~mejn/netdata/}{Link} \\\\ \n\t\t\t\\hline \n\t\t\t{\\bf WebKB} & A classified network of 877 webpages (nodes) and 1608 hyperlinks (edges) gathered from four different universities Web sites (Cornell, Texas, Washington, and Wisconsin). Each web page is associated with a binary vector, whose elements take the value $1$ if the corresponding word from the vocabulary is present in that webpage, and $0$ otherwise. The vocabulary consists of 1703 unique words. Nodes are classified into five classes: course, faculty, student, project, or staff. & \\href{https://linqs.soe.ucsc.edu/data}{Link} \\newline \\\\ \n\t\t\t\\hline \n\t\t\t{\\bf Twitter} & A collection of several tweet networks: 1) Politics-UK dataset is collected from Twitter accounts of 419 Members of Parliament in the United Kingdom in 2012. Each user has 3614-dimensional attributes, including a list of words repeated more than 500 times in their tweets. The accounts are assigned to five\n\t\t\tdisjoint communities according to their political affiliation.\n\t\t\t2) Politics-IE dataset is collected from 348 Irish\n\t\t\tpoliticians and political organizations, each user has 1047-\n\t\t\tdimensional attributes. 
The users are distributed into seven communities. 3) The Football dataset contains 248 English Premier League football players active on Twitter, who are assigned to 20 disjoint communities, each corresponding to a Premier League club. 4) The Olympics dataset contains the users of 464 athletes and organizations involved in the London 2012 Summer Olympics. The users are grouped into 28 disjoint communities, corresponding to different Olympic sports. & \href{http://mlg.ucd.ie/aggregation/}{Link 1} \newline \href{http://mlg.ucd.ie/networks/}{Link 2}\newline \\ 
			\hline 
		{\bf Lazega} & A corporate law partnership network in a Northeastern US corporate law firm; possible attributes: status (1: partner; 2: associate), office (1: Boston; 2: Hartford; 3: Providence); 71 nodes and 575 edges & \\ 
			\hline 
			{\bf Research} & A research team of employees in a manufacturing company; possible attributes: location (1: Paris; 2: Frankfurt; 3: Warsaw; 4: Geneva), tenure (1: 1--12 months; 2: 13--36 months; 3: 37--60 months; 4: 61+ months); 77 nodes and 2228 edges & \\ 
			\hline 
			{\bf Consult} & A network showing relationships between employees in a consulting company; possible attributes: organisational level (1: Research Assistant; 2: Junior Consultant; 3: Senior Consultant; 4: Managing Consultant; 5: Partner), gender (1: male; 2: female); 46 nodes and 879 edges & \\ 
			\hline 
		\end{tabular}
	}
\end{table} 
\begin{table}
	\caption{Most popular medium size datasets.}
	\label{tab_dataset2} 
	{\tiny
		\begin{tabular}{|p{1.2cm}|p{12.9cm}|p{0.8cm}|}
			\hline 
			Dataset & Description & Source \\ 
			\hline 
			{\bf Political Blogs} & A non-classified network of 1,490 weblogs (nodes) on US politics with 19,090 hyperlinks (edges) between the weblogs. 
Each node has an attribute describing its political leaning as either liberal or conservative (represented by $0$ and $1$). & \href{http://www-personal.umich.edu/~mejn/netdata/}{Link}\newline \\ 
			\hline 
			{\bf DBLP10K} & A non-classified co-author network extracted from the DBLP Bibliography (four research areas: database, data mining, information retrieval and artificial intelligence) with 10,000 authors (nodes) and their co-author relationships (edges). Each author is associated with two relevant categorical attributes: prolific and primary topic. For the attribute ``prolific'', authors with $\ge 20$ papers are labelled as highly prolific; authors with $>10$ and $<20$ papers are labelled as prolific, and authors with $\le 10$ papers are labelled as low prolific. Node-attribute values for ``primary topic'' (100 research topics) are obtained via topic modelling. Each extracted topic consists of a probability distribution of the keywords which are most representative of the topic. & \href{https://github.com/Issamfalih/ANCL/tree/master/data}{Link}\newline \\ 
			\hline 
			{\bf DBLP84K} & A larger non-classified co-author network extracted from the DBLP Bibliography (15 research areas: database, data mining, information retrieval, artificial intelligence, machine learning, computer vision, networking, multimedia, computer systems, simulation, theory, architecture, natural language processing, human-computer interaction, and programming language) with 84,170 authors (nodes) and their co-author relationships (edges). Each author is associated with two relevant categorical attributes: prolific and primary topic, defined in a similar way as in DBLP10K. & \href{https://github.com/Issamfalih/ANCL/tree/master/data}{Link}\newline \\ 
			\hline 
			{\bf Cora} & A classified network of machine learning papers with 2,708 papers (nodes) and 5,429 citations (edges). 
Each node is attributed with a $1433$-dimensional binary vector indicating the absence/presence of words from the dictionary of words collected from the corpus of papers. The papers are classified into 7 subcategories: case-based reasoning, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, rule learning and theory. & \href{https://linqs.soe.ucsc.edu/data}{Link 1}\newline \href{https://github.com/albertyang33/TADW}{Link 2}\newline \\ 
			\hline 
			{\bf CiteSeer} & A classified citation network in the field of machine learning with 3,312 papers (nodes) and 4,732 citations (edges). Each node is attributed with a binary vector indicating the absence/presence of the corresponding words from the dictionary of the 3,703 words collected from the corpus of papers. Papers are classified into 6 classes. & \href{https://linqs.soe.ucsc.edu/data}{Link 1}\newline \href{https://github.com/albertyang33/TADW}{Link 2}\newline \\ 
			\hline 
			{\bf Sinanet} & A classified microblog user relationship network extracted from the Sina microblog website (http://www.weibo.com) with 3,490 users (nodes) and 30,282 relationships (edges). Each node is attributed with 10-dimensional numerical attributes describing the interests of the user. & \href{https://github.com/smileyan448/Sinanet}{Link}\newline \\ 
			\hline 
			{\bf PubMed Diabetes} & A classified citation network extracted from the PubMed database pertaining to diabetes. It contains 19,717 publications (nodes) and 44,338 citations (edges). Each node is attributed with a TF-IDF weighted word vector from a dictionary that consists of 500 unique words. & \href{https://linqs.soe.ucsc.edu/data}{Link} \\ 
			\hline
			{\bf Facebook100} & A non-classified Facebook user network with 6,386 users (nodes) and 435,324 friendships (edges). The network is gathered from Facebook users of 100 colleges and universities (e.g. 
Caltech, Princeton, Georgetown and UNC Chapel Hill) in September 2005. Each user has the following attributes: ID, a student/faculty status flag, gender, major, second major/minor (if applicable), dormitory (house), year and high school. & \href{https://archive.org/download/oxford-2005-facebook-matrix}{Link}\newline \\ 
			\hline
			{\bf ego-Facebook} & The dataset consists of 'circles' ('friends lists') from Facebook, with 4,039 nodes and 88,234 edges. The Facebook data was collected from survey participants using a Facebook app. The dataset includes node features (profiles), circles, and ego networks. & \href{https://snap.stanford.edu/data/ego-Facebook.html}{Link}\newline \\ 
			\hline 
			{\bf LastFM} & A network gathered from the online music system Last.fm with 1,892 users (nodes) and 12,717 friendships on Last.fm (edges). Each node has 11,946-dimensional attributes, including a list of most listened music artists and tag assignments.
			& \href{http://ir.ii.uam.es/hetrec2011/datasets.html}{Link} \\ 
			\hline 
			{\bf Delicious} & A network of 1,861 nodes, 7,664 edges and 1,350 attributes. This is a publicly available dataset from the HetRec 2011 workshop that has been obtained from the Delicious social bookmarking system. Its users are connected in a social network generated from Delicious mutual fan relations. Each user has bookmarks and tag assignments, that is, [user, tag, bookmark] tuples, as well as contact relations within the social network. The tag assignments were transformed into attribute data by taking all tags that a user ever assigned to any bookmark and assigning those to the user. & \href{http://ir.ii.uam.es/hetrec2011/}{Link} \\ 
			\hline 
			{\bf Wiki} & A network whose nodes are web pages; a link between two nodes is a hyperlink between the corresponding pages. 
The network has 2,405 nodes, 12,761 edges, 4,973 attributes and 17 labels. & \href{https://github.com/albertyang33/TADW}{Link} \\ 
		\hline 
		{\bf ego-Twitter} & This dataset consists of 'circles' (or 'lists') from Twitter. The Twitter data was crawled from public sources. The dataset includes node features (profiles), circles, and ego networks. It has 81,306 nodes and 1,768,149 edges. & \href{https://snap.stanford.edu/data/ego-Twitter.html}{Link}\newline \\
		\hline 
		\end{tabular}
	}
\end{table} 
\begin{table}
	\caption{Most popular large size datasets.}
	\label{tab_dataset3} 
		{\tiny
\begin{tabular}{|p{1.2cm}|p{12.9cm}|p{0.8cm}|}
	\hline 
Dataset & Description & Source \\ 
	\hline 
{\bf Flickr} & A network with 100,267 nodes, 3,781,947 edges and 16,215 attributes collected from the internal database of the popular Flickr photo sharing platform. The social network is defined by the contact relation of Flickr. Two vertices are connected with an undirected edge if at least one directed contact relation exists between them. Each user has an associated list of tags that he/she used at least five times. Tags are limited to those used by at least 50 users. Users are limited to those having a vocabulary of more than 100 and less than 5,000 tags. & \href{https://snap.stanford.edu/data/web-flickr.html}{Link}\newline \\ 
\hline 
{\bf Patents} & A patent citation network with vertices representing patents and edges depicting the citations between them. The dataset is a subgraph containing all the patents from the year 1988 to 1999. Each patent has six attributes: grant year, number of claims, technological category, technological subcategory, assignee type, and main patent class. There are 1,174,908 vertices and 4,967,216 edges in the network. & \href{http://www.nber.org/patents/}{Link 1}\newline \href{https://snap.stanford.edu/data/cit-Patents.html}{Link 2} \\ 
\hline
{\bf ego-G+} & This dataset consists of 'circles' from Google+. 
Google+ data was collected from users who had manually shared their circles using the 'share circle' feature. The dataset includes node features (profiles), circles, and ego networks. It has 107,614 nodes and 13,673,453 edges. Each node has four features: job title, current place, university, and workplace. A user pair (edge) is compared using knowledge graphs based on Category:Occupations, Category:Companies by country and industry, Category:Countries, and Category:Universities and colleges by country. & \href{https://snap.stanford.edu/data/ego-Gplus.html}{Link}\newline \\
\hline 
\end{tabular}
}
\end{table}
Given a set of detected communities (overlapping or not), one needs to evaluate their quality. There are two options for this, depending on the network dataset under consideration. If the network dataset has no ground truth, one can use various measures of structural closeness and attribute homogeneity. According to our observations, the most popular quality measures in this case are Modularity and Density for the former and Entropy for the latter. Many others, such as Conductance, Permanence and Intra- and Inter-Cluster Densities, are also possible. If there is ground truth, it is traditional to compare the detected communities with the ground truth ones. 
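For illustration, two of the ground-truth comparison measures used throughout this survey, the pair-counting Rand Index and NMI, can be computed from scratch as in the following stdlib-only Python sketch (the $\sqrt{H(A)H(B)}$ normalization for NMI is one of several conventions in use, and the toy labelings are our own example, not taken from any dataset):

```python
import math
from collections import Counter
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of node pairs on which two partitions agree
    (a pair is either together in both partitions or apart in both)."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum((labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
                for i, j in pairs)
    return agree / len(pairs)

def nmi(labels_a, labels_b):
    """Normalized mutual information with sqrt(H(A) * H(B)) normalization."""
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    cab = Counter(zip(labels_a, labels_b))
    mi = sum((nij / n) * math.log(nij * n / (ca[a] * cb[b]))
             for (a, b), nij in cab.items())
    ha = -sum(c / n * math.log(c / n) for c in ca.values())
    hb = -sum(c / n * math.log(c / n) for c in cb.values())
    return mi / math.sqrt(ha * hb) if ha and hb else 1.0

ground_truth = [0, 0, 0, 1, 1, 1]
detected     = [0, 0, 1, 1, 1, 1]  # one node misplaced

print(round(rand_index(ground_truth, detected), 3))  # 0.667
print(round(nmi(ground_truth, detected), 3))
```

Both functions take flat label lists, so the sketch covers only non-overlapping communities; overlapping covers require set-based variants of these measures.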
This can be done, for instance, with the following popular measures: Accuracy, Normalized Mutual Information (denoted below by NMI), Adjusted Rand Index or Rand Index (denoted below by ARI and RI, respectively) and $F$-measure. We will discuss both approaches further in Section~\ref{section9}. 
Due to space limitations, we refer the reader to the comprehensive survey and to \cite[Sections 2.2 and 4]{Bothorel2015}, where all the above-mentioned quality measures and many others are precisely defined and discussed in detail.
\label{section6}
As we have already mentioned, early fusion methods fuse structure and attributes before the community detection process so that the data obtained are suitable for classical community detection algorithms (thus one can use their existing software implementations).
Before we proceed, we introduce additional notation applied only to the weight-based and distance-based early fusion methods. 
The point is that the existing network structure may be preserved or modified depending on the heuristics used in a method, therefore we distinguish
	\begin{itemize}
		\item {\it fixed topology methods} that use the initial network structure without modifying it with respect to attributes,
		\item {\it non-fixed topology methods} that modify the initial network structure with respect to attributes, in particular, add/erase edges and/or vertices.
	\end{itemize}
As far as we know, there is no study on which approach is preferable. How each one influences community detection results is yet to be established.
\begin{figure}[b]
	\centering
	\includegraphics[width=0.9\linewidth]{weight.pdf}
	\caption{A typical scheme of a weight-based method (the attributive weights here are the values of the normalized matching coefficient for the attribute vectors).}
	\label{fig:weight}
\end{figure}
\label{class-ef-weight}
These methods (see Tables \ref{tab_weights_alpha0} and \ref{tab_weights}) convert attributes $(\mathcal{V},\mathcal{A})$ into a weighted {\it attributive} graph and then merge it with the structural graph $(\mathcal{V},\mathcal{E})$.
The result is a weighted graph $G_W$ that serves as a substitution for the node-attributed graph $G$, see Figure~\ref{fig:weight}. 
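The scheme of Figure~\ref{fig:weight} is cheap to sketch in code. The following stdlib-only Python illustration (our own toy example, not the implementation of any particular method) keeps the original edge set, i.e. it is a fixed topology variant, and weights each existing edge by a convex combination of a structural weight and the normalized matching coefficient of the endpoint attribute vectors; the balance value $0.5$ is an illustrative assumption:

```python
def matching_coefficient(a, b):
    """Normalized matching coefficient: the share of attributes
    on which two attribute vectors agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def fuse(edges, attrs, alpha=0.5):
    """Fixed-topology weight-based fusion: keep the original edge set and
    weight each existing edge as a convex combination
        alpha * (structural weight) + (1 - alpha) * (attributive similarity),
    with structural weight 1 on existing edges."""
    return {(u, v): alpha * 1.0
                    + (1 - alpha) * matching_coefficient(attrs[u], attrs[v])
            for u, v in edges}

# Toy network: two triangles joined by a bridge; categorical attributes
# agree with the structural split.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
attrs = {n: ("red", "phd") if n < 3 else ("blue", "msc") for n in range(6)}

weights = fuse(edges, attrs, alpha=0.5)
print(weights[(0, 1)])  # intra-community edge: 0.5*1 + 0.5*1.0 = 1.0
print(weights[(2, 3)])  # bridge, fully dissimilar attributes: 0.5*1 + 0.5*0.0 = 0.5
```

The resulting weighted edge list can be handed directly to any existing weighted graph clustering implementation (e.g. Weighted Louvain), which is exactly what makes this kind of early fusion cheap to implement.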
Edge weights of $G_W$ are usually assigned as follows:
\begin{equation}
\label{main_weight}
W_\alpha(v_i,v_j)=\alpha w_S(v_i,v_j)+(1-\alpha) w_A (v_i,v_j),\qquad \alpha\in[0,1],\qquad v_i,v_j\in \mathcal{V},
\end{equation}
where $w_S$ and $w_A$ are chosen {\it structural} and {\it attributive} similarity functions, respectively. The hyperparameter $\alpha$ controls the balance between structure and attributes. Clearly, if $\alpha=1$ in (\ref{main_weight}), one obtains weights based only on structure; if $\alpha=0$, then they are based only on attributes. As for $w_S$ and $w_A$, $w_S(v_i,v_j)$ usually reflects existing connections in $(\mathcal{V},\mathcal{E})$ (e.g. $w_S(v_i,v_j)=1$, if $e_{ij}\in \mathcal{E}$, and $w_S(v_i,v_j)=0$, otherwise), while $w_A(v_i,v_j)$ may be Cosine Similarity or Matching Coefficient values for vectors $A(v_i)$ and $A(v_j)$.
Once the weighted graph $G_W$ is constructed, one can use classical graph clustering algorithms on it such as Weighted Louvain. Sometimes $G_W$ is instead converted to a certain distance matrix that is further used for detecting communities via distance-based clustering algorithms such as $k$-means or $k$-medoids.
It is worth mentioning that the fixed topology methods in this subclass assume that the weights (\ref{main_weight}) are assigned on the same set of edges $\mathcal{E}$ as in the initial node-attributed graph $G$, while the non-fixed topology ones assign weights on all (or most of the) possible edges between nodes in $\mathcal{V}$.
Now let us say a few words about how the hyperparameter $\alpha$ can be chosen. A very popular approach in fixed topology methods is assuming $\alpha=0$ in (\ref{main_weight}), see Table~\ref{tab_weights_alpha0}. This actually means that weights in $G_W$ are based only on attributive similarity. 
Clearly, this may lead to dominance of attributes in the resulting fusion and to the disappearance of structural connections between nodes with dissimilar attributes. Varying $\alpha$ over the segment $[0,1]$ in (\ref{main_weight}), used by the methods in Table~\ref{tab_weights}, seems more adequate for controlling the impact of both components. However, tuning $\alpha$ in this case is usually performed {\it manually}. In fact, we are unaware of any general non-manual approaches for tuning $\alpha$ in (\ref{main_weight}), although the need for such approaches has been repeatedly emphasized.
Note that the choice of similarity functions $w_S$ and $w_A$ is usually determined by the preferences of the authors of a particular method. A systematic study of how such a choice influences the community detection results is yet to be done.
\begin{table}
	\caption{Weight-based methods with $\alpha=0$ in (\ref{main_weight}).}
	\label{tab_weights_alpha0} 
	\begin{center}
		{\tiny
			\begin{tabular}{|p{1.5cm}|p{4.0cm}|p{1.5cm}|p{1.0cm}|p{1.1cm}|p{1cm}|p{1.3cm}|p{1.5cm}|}
				\hline 
				Method & Community detection method used and its {\bf input} & Require the number of clusters/ Clusters overlap & Size of datasets used for evaluation & Quality measures & Topology & Datasets used & Compared with \\
				\hline 
				{\bf WNev} & {\bf Weighted graph}\newline MinCut \newline MajorClust \newline Spectral & No/No & Small & Accuracy & Fixed & Synthetic & --- \\
				\hline 
				{\bf WSte1} & {\bf Weighted graph}\newline Threshold & No/No & Large & Modularity & Fixed & Phone Network & --- \\
				\hline 
				{\bf WSte2} & {\bf Similarity matrix (via Weighted graph and random walks)}\newline Hierarchical clustering & No/No & Large & Modularity & Fixed & Phone Network & --- \\
				\hline 
				{\bf WCom1} & {\bf Weighted graph}\newline Weighted Louvain & Yes/No & Small & Accuracy & Fixed 
& {\\bf DBLP*} & {\\bf WCom2} \\newline {\\bf DCom} \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf WCom2} & {\\bf Distance matrix (via weighted graph)}\\newline Hierarchical agglomerative clustering & Yes/No & Small & Accuracy & Fixed & {\\bf DBLP*} & {\\bf WCom1} \\newline {\\bf DCom} \\\\\n\t\t\t\t\\hline\n\t\t\t\t{\\bf AA-Cluster} & {\\bf Node embeddings (via weighted graph)}\\newline $k$-medoids & Yes/No & Small\\newline Medium\\newline Large & Density\\newline Entropy & Fixed & {\\bf Political Blogs}\\newline {\\bf DBLP*}\\newline {\\bf Patents*} \\newline Synthetic & {\\bf SA-Cluster}~\\newline {\\bf BAGC} \\newline {\\bf CPIP} \n\t\t\t\t\\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf PWMA-MILP} & {\\bf Weighted graph}\\newline Linear programming MILP~ & No/No & Small & RI \\newline NMI & Fixed & {\\bf WebKB} & --- \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf KDComm} & {\\bf Weighted graph} \\newline Iterative Weighted Louvain & No/No & Small\\newline Medium \\newline Large & $F$-measure\\newline Jaccard measure \\newline Rank Entropy measure & Fixed & \t{\\bf ego-G+} \\newline {\\bf Twitter*} \\newline {\\bf DBLP*} \\newline Reddit \\href{https://archive.org/details/2015_reddit_comments_corpus}{link} & {\\bf CPIP} \\newline {\\bf JCDC} \\newline {\\bf UNCut} \\newline {\\bf SI} \\\\\n\t\t\t\t\\hline \n\t\t\t\\end{tabular}\n\t\t}\n\t\\end{center}\n\\end{table}\n\\begin{table}\n\t\\caption{Weight-based methods with $\\alpha\\in[0,1]$ in (\\ref{main_weight}).}\n\t\\label{tab_weights} \n\t\\begin{center}\n\t\t{\\tiny\n\t\t\t\\begin{tabular}{|p{1.25cm}|p{1.5cm}|p{2.0cm}|p{1.5cm}|p{0.8cm}|p{1.2cm}|p{1.2cm}|p{1.7cm}|p{1.7cm}|}\n\t\t\t\t\\hline \n\t\t\t\tMethod & $\\alpha$ in (\\ref{main_weight}) & Community detection method used and its {\\bf input} & Require the number of clusters/ Clusters overlap & Size of datasets used for evaluation & Quality measures & Topology & Datasets used & Compared with \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf WWan} & $[0,1]$ in theory\\newline $\\tfrac{1}{2}$ in 
experiments & {\\bf Edge similarity matrix (via weighted graph)}\\newline EdgeCluster \\newline($k$-means variant) & Yes & Small & NMI \\newline Micro-F1 \\newline Macro-F1 & Non-fixed: removing edges & Synthetic \\newline BlogCatalog\\newline {\\bf Delicious$^*$} & Non-overlapping co-clustering \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf SAC2} & $[0,1]$ & {\\bf $k$NN (unweighted) graph (via weighted graph)}\\newline (Unweighted) Louvain & No/ No & Small\\newline Medium & Density\\newline Entropy & Non-fixed: removing edges & {\\bf Political Blogs} \\newline {\\bf Facebook100} \\newline {\\bf DBLP10K} & {\\bf SAC1} \\newline {\\bf WSte2} \\newline Fast greedy for weighted graph \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf WCru}~\n\t\t\t\t& $[0,1]$ in theory \\newline Not specified in experiments & {\\bf Weighted graph}\\newline Weighted Louvain~ & No & Medium & Modularity\\newline Intracluster distance & Fixed & {\\bf Twitter$^*$} & ---\\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf CODICIL} & $[0,1]$ in theory\\newline\n\t\t\t\t$1/2$ in some experiments & {\\bf Weighted graph}\\newline Metis \\newline Markov Clustering~ & No & Small\\newline Medium\\newline Large & $F$-measure & Non-fixed: adding and removing edges & {\\bf CiteSeer*}\\newline {\\bf Flickr*}\\newline Wikipedia* & {\\bf Inc-Cluster} \\newline {\\bf PCL-DC} \\newline Link-PLSA-LDA \n\t\t\t\t\\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf WMen} & Not specified & {\\bf Weighted graph/Distance matrix for the weighted graph}\\newline SLPA \\newline Weighted Louvain \\newline K-medoids & Yes-No/\\newline Yes-No & Small\\newline Medium & NMI \\newline $F$-measure \\newline Accuracy & Fixed & {\\bf Lazega} \\newline {\\bf Research} \\newline {\\bf Consult} \\newline\n\t\t\t\tLFR benchmark & {\\bf CODICIL}~\\newline {\\bf SA-Cluster}~ \\\\ \n\t\t\t\t\\hline \n\t\t\t\t{\\bf PLCA-MILP} & $[0,1]$ & {\\bf Weighted graph}\\newline Linear programming MILP~ & No/No & Small & RI \\newline NMI & Non-fixed: adding and removing edges & {\\bf WebKB} & 
{\\bf SCD} \\newline {\\bf ASCD} \\newline {\\bf SCI} \\newline PCL-DC \\newline Block-LDA \\\\\n\t\t\t\t\\hline\n\t\t\t\t{\\bf kNN-enhance} & May be thought as $\\alpha=1/2$, $k$NN by attributes & {\\bf Distance matrix (of an edge-augmented graph)} \\newline $k$NN\\newline $k$-means & No/No & Medium & Accuracy \\newline NMI \\newline F-Measure \\newline Modularity\\newline Entropy & Non-fixed: adding edges & {\\bf Cora} \\newline {\\bf Citeseer}\\newline {\\bf Sinanet}\\newline {\\bf PubMed Diabetes}\\newline {\\bf DBLP*}& {\\bf PCL-DC} \\newline PPL-DC \\newline {\\bf PPSB-DC} \\newline {\\bf CESNA} \\newline {\\bf cohsMix} \\newline {\\bf BAGC} \\newline {\\bf GBAGC} ) \\newline {\\bf SA-Custer} \\newline {\\bf Inc-Cluster} \\newline {\\bf CODICIL} \\newline GLFM \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf IGC-CSM} (\\href{https://github.com/WNawaz/CSM}{source}) & $[0,1]$ in theory\\newline $1/2$ in comparison experiments & {\\bf Distance matrix for the weighted graph} \\newline $k$-Medoids & Yes/ No & Medium & Density\\newline Entropy & Fixed & {\\bf Political Blogs}\\newline {\\bf DBLP10K} & {\\bf SA-Cluster} \\newline {\\bf SA-Cluster-Opt} \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf AGPFC} & $[0,1]$ in theory, manually tuned in experiments & Fuzzy equivalent matrix \\newline $\\lambda$-cut set method & No/Yes & Small\\newline Medium & Density\\newline Entropy & Fixed & {\\bf Political Blogs}\\newline {\\bf CiteSeer} \\newline {\\bf Cora} \\newline {\\bf WebKB} & {\\bf SA-Cluster} \\newline {\\bf BAGC} \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf NMLPA} & $1/2$ & {\\bf Weighted graph}\\newline A multi-label propagation algorithm & Yes/ Yes & Medium & $F1$-score \\newline Jaccard Similarity & Fixed & {\\bf ego-Facebook}\\newline {\\bf Flickr*} \\newline {\\bf ego-Twitter} & {\\bf CESNA} \\newline {\\bf SCI} \\newline {\\bf CDE} \\\\\n\t\t\t\t\\hline \n\t\t\t\\end{tabular}\n\t\t}\n\t\\end{center}\n\\end{table}\n\\begin{remark} The weight-based strategy has been applied to more 
general networks than those considered in the present survey. For example, similar schemes have been used to detect communities in {\it multi-layer} networks. Another example is SANS, which works with {\it directed} node-attributed graphs. 
Edge weighting similar to (\ref{main_weight}) is also applied in FocusCO. Although it is not a purely unsupervised community detection method (it requires the user's preferences on some of the attributes), it can simultaneously extract local clusters and detect outliers in a node-attributed social network.
\end{remark}
\begin{remark}
Let us mention that there exist community detection methods that are ideologically similar (attributes $\to$ edge weights $\to$ weighted graph node embeddings\footnote{Low-dimensional continuous vector representations of graph nodes. See also Subsection~\ref{class-ef-embedding}.} $\to$ $k$-means) but precede the recently proposed {\bf AA-Cluster}. Namely, GraphEncoder and GraRep also first convert a node-attributed graph into a weighted one according to (\ref{main_weight}) with $\alpha=0$ and then find the corresponding node embeddings. The embeddings are further fed to the $k$-means algorithm to detect communities. However, in contrast to {\bf AA-Cluster}, these works mostly focus on node embedding techniques for a weighted graph rather than on the community detection task. 
\n\\end{remark}", "id": "4ebad2e3-f246-4134-89c0-5d1b73a7710d", "level": "subsection", "origin_cites_number": 59, "parent_id": "1c6b434c-2289-4e6e-9627-ef1149463588", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Early fusion methods" ], [ "subsection", "Weight-based methods" ] ], "subsections": [], "title": "Weight-based methods" }, { "cite_extract_rate": 0.1, "cites": [ 1717, 1714 ], "content": "\\label{class-ef-distance}\n\\begin{table}[b]\n\\caption {Distance-based methods.}\n\\label{tab:distance} \n\\begin{center}\n\t{\\tiny\n\t\t\\begin{tabular}{|p{1.0cm}|p{1.3cm}|p{1.8cm}|p{1.5cm}|p{1.2cm}|p{1.5cm}|p{1.1cm}|p{1.5cm}|p{1.8cm}|}\n\t\t\t\\hline \n\t\t\tMethod & $\\alpha$ in (\\ref{distance_main}) & Community detection method used and its {\\bf input} & Require the number of clusters/ Clusters overlap & Size of datasets used for evaluation & Quality measures & Topology & Datasets used & Compared with \\\\\n\t\t\t\\hline \n\t\t\t{\\bf DCom} & $[0,1]$ & {\\bf Distance matrix}\\newline Hierarchical agglomerative clustering & Yes/No & Small & Accuracy & Non-fixed: added edges & {\\bf DBLP*} & {\\bf WCom1} \\newline {\\bf WCom2} \\\\\n\t\t\t\\hline\n\t\t\t{\\bf DVil} & $[0,1]$ & {\\bf Distance matrix} \\newline Self-organizing maps & No/No & Small\\newline Medium & NMI & Non-fixed: added edges & Synthetic\\newline \\href{http://graphcomp.univ-tlse2.fr/}{Medieval Notarial Deeds} & --- \\\\\n\t\t\t\\hline \n\t\t\t{\\bf SToC} & Maybe thought depending on the values of $d_S$ and $d_A$ & {\\bf Distance matrix}\\newline $\\tau$-close clustering & No/No & Medium\\newline Large & Modularity\\newline Within-Cluster Sum of Squares & Non-fixed: added edges & {\\bf DBLP10K}\\newline DIRECTORS*\\newline DIRECTORS-gcc* & {\\bf Inc-Cluster} \\newline {\\bf GBAGC} \\\\\n\t\t\t\\hline \n\t\t\t{\\bf @NetGA} & $\\alpha\\in[0,1]$ in general\\newline $\\alpha=1/2$ in experiments & {\\bf Distance matrix}\\newline Genetic 
algorithm & No/No & Medium & NMI & Non-fixed: added edges & Synthetic & {\\bf SA-Cluster} \\newline CSPA \\newline {\\bf Selection} \\\\\n\t\t\t\\hline \n\t\t\t{\\bf ANCA} & May be thought of as $\\alpha=1/2$ & {\\bf Distance matrix}\\newline $k$-means & Yes/No & Medium & Adjusted Rand Index\\newline NMI \\newline Density \\newline Modularity \\newline Conductance \\newline Entropy & Fixed & Synthetic \\newline {\\bf DBLP10K} \\newline \n\t\t\tEnron email corpus \\newline & {\\bf SA-Cluster} \\newline {\\bf SAC1-SAC2} \\newline {\\bf IGC-CSM} \\newline {\\bf WSte1} \\newline {\\bf ILouvain} \\\\\n\t\t\t\\hline \n\t\t\\end{tabular}\n\t}\n\\end{center}\n\\end{table}\nMethods discussed in the previous subsection aim at representing structure and attributes in a unified graph form suitable for further graph clustering. In contrast, distance-based methods intentionally abandon the graph representation in favor of a distance matrix that contains information about both structure and attributes. The distance matrix is usually obtained by fusing the components with a structure- and attributes-aware distance function, see Figure~\\ref{fig:distance}. The matrix can then be fed to distance-based clustering algorithms such as $k$-means and $k$-medoids. Note that the resulting clusters may contain disconnected portions of the initial graph $G$, as the graph structure is removed at the fusion step \\cite[Section 3.3]{Akbas2017}.\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.9\\linewidth]{distance.pdf}\n\t\\caption{A typical scheme of a distance-based method.}\n\t\\label{fig:distance}\n\\end{figure}\nThe distance function fusing structure and attributes is often of the form\n\\begin{equation}\n\\label{distance_main}\nD_{\\alpha}(v_i,v_j)=\\alpha d_S(v_i,v_j)+(1-\\alpha)d_A(v_i,v_j),\\qquad \\alpha\\in[0,1],\\qquad v_i,v_j\\in \\mathcal{V},\n\\end{equation}\nwhere $d_S$ and $d_A$ are {\\it structural} and {\\it attributive} distance functions, respectively. 
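As a toy illustration (not tied to any particular method in Table~\\ref{tab:distance}), the fusion (\\ref{distance_main}) can be sketched in code with the shortest-path length as $d_S$ and the Minkowski ($p=2$) distance between attribute vectors as $d_A$; normalising both components to $[0,1]$ before mixing, so that $\\alpha$ stays interpretable, is our own illustrative choice, as is the function name.

```python
import numpy as np
from collections import deque

def fused_distance_matrix(adj, attrs, alpha=0.5):
    """Sketch of the fusion D_alpha = alpha*d_S + (1-alpha)*d_A.
    adj: n x n 0/1 adjacency (list of lists), attrs: n x d array."""
    n = len(adj)
    # d_S: shortest-path length via BFS (np.inf for unreachable pairs)
    d_s = np.full((n, n), np.inf)
    for s in range(n):
        d_s[s, s] = 0.0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if adj[u][v] and d_s[s, v] == np.inf:
                    d_s[s, v] = d_s[s, u] + 1
                    queue.append(v)
    # d_A: Minkowski (p=2, i.e. Euclidean) distance between attribute vectors
    X = np.asarray(attrs, dtype=float)
    d_a = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # normalise both components to [0, 1] before mixing
    finite = d_s[np.isfinite(d_s)]
    d_s = np.where(np.isfinite(d_s), d_s / max(finite.max(), 1.0), 1.0)
    if d_a.max() > 0:
        d_a = d_a / d_a.max()
    return alpha * d_s + (1 - alpha) * d_a
```

The resulting matrix can then be fed, e.g., to $k$-medoids, which, unlike vanilla $k$-means, accepts a precomputed distance matrix.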
As in the case of (\\ref{main_weight}), $\\alpha$ in (\\ref{distance_main}) controls the balance between the components; how to choose it ``properly'' seems to be an open problem, too. As for the distance functions, it is common to define $d_S(v_i,v_j)$ as the shortest path length between $v_i$ and $v_j$. Possible options for $d_A(v_i,v_j)$ are the Jaccard or Minkowski distances between vectors $A(v_i)$ and $A(v_j)$.\nShort descriptions of known distance-based methods are given in Table~\\ref{tab:distance}. Note that {\\bf ANCA} and {\\bf SToC} employ a slightly different fusion than that in (\\ref{distance_main}) but nevertheless still deal with certain structure- and attribute-aware distances.\n\\begin{remark}\nThere exist distance-based methods for {\\it multi-layer} networks. For example, CLAMP is a method for clustering networks with heterogeneous attributes and multiple types of edges that uses a distance function similar, in a sense, to (\\ref{distance_main}).\n\\end{remark}\n\\begin{table}\n\\caption {Node-augmented graph distance-based methods.} \\label{tab:sa} \n\\begin{center}\n\t{\\tiny\n\t\t\\begin{tabular}{|p{1.8cm}|p{1.7cm}|p{2.8cm}|p{1.2cm}|p{1cm}|p{1.0cm}|p{1.4cm}|p{2.2cm}|}\n\t\t\t\\hline \n\t\t\tMethod & Graph augmentation & Community detection method used and its {\\bf input} & Require the number of clusters/ Clusters overlap & Size of datasets used for evaluation & Quality measures & Datasets used & Compared with\\\\\n\t\t\t\\hline \n\t\t\t{\\bf SA-Cluster} \\newline {\\bf Inc-Cluster} \\newline {\\bf SA-Cluster-Opt} & New nodes and edges & {\\bf Distance matrix (via neighbourhood random walks) }\\newline Modified $k$-medoids & Yes/No & Small\\newline Medium & Density\\newline Entropy & {\\bf Political Blogs}\\newline {\\bf DBLP10K}\\newline {\\bf DBLP84K} & W-Cluster (based on (\\ref{distance_main}))\\newline {\\bf SA-Cluster} \\newline {\\bf Inc-Cluster} \\newline {\\bf SA-Cluster-Opt} \\\\\n\t\t\t\\hline \n\t\t\t{\\bf SCMAG} & New nodes and edges & 
{\\bf Distance matrix (via neighbourhood random walks) }\\newline Subspace clustering algorithm based on ENCLUS & No/Yes & Medium & Density\\newline Entropy & IMDB\\newline Arnetminer bibliography$^*$ & {\\bf SA-Custer} \\newline GAMer \\\\\t\t\t\n\t\t\t\\hline \n\t\t\\end{tabular}\n\t}\n\\end{center}\n\\end{table}", "id": "0bd4586d-c3e0-4e3a-817b-be22385f142c", "level": "subsection", "origin_cites_number": 20, "parent_id": "1c6b434c-2289-4e6e-9627-ef1149463588", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Early fusion methods" ], [ "subsection", "Distance-based methods" ] ], "subsections": [], "title": "Distance-based methods" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{class-ef-node-augmented-graph}\n\\begin{figure}[b]\n\t\\centering\n\t\\includegraphics[width=0.9\\linewidth]{auggraph.pdf}\n\t\\caption{A typical scheme of a node-augmented graph-based method.}\n\t\\label{fig:node-augmented}\n\\end{figure}\nMethods from this subsection (see Table~\\ref{tab:sa}) transform initial node-attributed graph $G$ into another node-augmented graph $\\tilde{G}$, with nodes from $\\mathcal{V}$ and new {\\it attributive} nodes representing attributes, see Figure~\\ref{fig:node-augmented}.\nTo be more precise, suppose that $dom(a_k)=\\{s_l\\}_{l=1}^{L_k}$, i.e. the domain of $k$th element (attribute) of attribute vector $A$ contains $L_k$ possible values. Then one should create $L_k$ new attribute nodes $\\tilde{v}_{k,l}$ corresponding to $l$th value of $k$th attribute. Such a procedure is performed for $k=1,\\ldots,d$, where $d=\\dim A$. The set $\\mathcal{V}_A:=\\{\\tilde{v}_{k,l}\\}$, where $k=1,\\ldots,d$ and $l=1,\\ldots,L_k$, is then the set of attributive nodes. An edge between structural node $v_i\\in \\mathcal{V}$ and attributive node $\\tilde{v}_{k,l}$ in $\\tilde{G}$ exists if $a_k(v_i)=s_l$. Community detection is further performed in $\\tilde{G}$. 
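The augmentation just described can be sketched in a few lines (a hypothetical helper in our own notation, not code from any surveyed method; only categorical attributes are handled):

```python
def augment_graph(edges, attrs):
    """Build the node-augmented graph: one new attribute node per
    (attribute index k, value s_l) pair, linked to every structural
    node v with a_k(v) = s_l.  Structural edges are kept as they are."""
    aug_edges = list(edges)
    attr_nodes = {}                              # (k, value) -> new node id
    next_id = 1 + max(max(e) for e in edges)     # ids after structural nodes
    for v, vector in enumerate(attrs):           # attrs[v] = attribute vector A(v)
        for k, value in enumerate(vector):
            key = (k, value)
            if key not in attr_nodes:            # create the attribute node once
                attr_nodes[key] = next_id
                next_id += 1
            aug_edges.append((v, attr_nodes[key]))
    return aug_edges, attr_nodes
```

Any structure-only community detection method can then be run on the augmented edge list; note, however, the blow-up in the number of nodes and edges discussed below.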
The methods in Table~\\ref{tab:sa} propose to apply random walks to obtain a certain distance matrix for $\\tilde{G}$ and further use it in a distance-based clustering algorithm.\nNote that the above-mentioned augmentation is not applicable to continuous attributes. What is more, $\\tilde{G}$ contains many more nodes and edges than $G$ (especially if $d$ and $L_k$, $k=1,\\ldots,d$, are large), which makes the methods from this subclass rather computationally expensive.", "id": "9080cec6-6c41-47f4-8a10-ec818aed0224", "level": "subsection", "origin_cites_number": 1, "parent_id": "1c6b434c-2289-4e6e-9627-ef1149463588", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Early fusion methods" ], [ "subsection", "Node-augmented graph-based methods" ] ], "subsections": [], "title": "Node-augmented graph-based methods" }, { "cite_extract_rate": 0.35714285714285704, "cites": [ 1712, 242, 217, 8489, 215, 282, 1719, 1718, 229, 1010 ], "content": "\\label{class-ef-embedding}\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.9\\linewidth]{emb.pdf}\n\t\\caption{A possible scheme of an embedding-based method (the attribute vectors are here concatenated with the node embeddings and then fed to $k$-means).}\n\t\\label{fig:embeddings}\n\\end{figure}\nAs is well known, the graph as a traditional representation of a network brings several difficulties to network analysis: graph algorithms suffer from high computational complexity, low parallelisability and inapplicability of machine learning methods. Novel embedding techniques aim at tackling this by learning {\\it node embeddings}, i.e. low-dimensional continuous vector representations for network nodes, so that the main network information is efficiently encoded. 
Roughly speaking, node embedding techniques allow one to convert a graph with $n$ nodes into a set of $n$ vectors.\nIn the context of node-attributed social networks, the objective of node embedding techniques is the efficient encoding of both structure and attributes. We are not going to provide general details on these techniques, as this has already been done in existing surveys. Let us just mention that having node embeddings (i.e. vectors) at hand allows one to use classical distance-based clustering algorithms such as $k$-means to detect communities, see Figure~\\ref{fig:embeddings}.\nIt turns out that there exists a rich bibliography on node embedding techniques for attributed networks of different types. However, not all of them have been applied to the community detection task (the classification task is typically considered). Taking this into account, we confine ourselves in this survey only to the embedding-based (early fusion) methods that have been used for community detection in node-attributed social or document networks and compared with other community detection methods. 
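For illustration, once node embeddings are available (from any technique), the clustering step reduces to a few lines. The sketch below concatenates the embeddings with the attribute vectors, which is one possible fusion (cf. Figure~\\ref{fig:embeddings}), and runs a plain Lloyd iteration as a stand-in for any $k$-means implementation; all names are our own.

```python
import numpy as np

def kmeans_communities(embeddings, attrs, n_clusters, n_iter=100, seed=0):
    """Cluster nodes by k-means over concatenated [embedding | attributes]
    vectors; a plain Lloyd iteration stands in for any k-means library."""
    X = np.hstack([np.asarray(embeddings, float), np.asarray(attrs, float)])
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        # assign every node to its nearest centre
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # recompute centres (keep the old centre if a cluster went empty)
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centers[j] for j in range(n_clusters)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels
```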
The results are presented in Table~\\ref{tab:emb}.\n\\begin{table}[b]\n\\caption {Embedding-based methods.}\n\\label{tab:emb} \n\\begin{center}\n\t{\\tiny\n\t\t\\begin{tabular}{|p{1cm}|p{1.5cm}|p{2.5cm}|p{1.4cm}|p{1cm}|p{1.9cm}|p{1.5cm}|p{2.3cm}|}\n\t\t\t\\hline \n\t\t\tMethod & Embedding technique & Community detection method used and its {\\bf input} & Require the number of clusters/ Clusters overlap & Size of datasets used for evaluation & Quality measures & Datasets used & Compared with\\\\\n\t\t\t\\hline \n\t\t\t{\\bf PLANE} & A generative model and EM~ & {\\bf Node embeddings}\\newline $k$-means & Yes/No & Small\\newline Medium & Accuracy & {\\bf Cora*} & Relational Topic Model~+Topic Distributions Embedding~\\\\\n\t\t\t\\hline \n\t\t\t{\\bf DANE} & Autoencoder & {\\bf Node Embeddings}\\newline $k$-means & Yes/No & Medium & Accuracy & {\\bf Cora}\\newline {\\bf Citeseer}\\newline {\\bf PubMed Diabetes}\\newline {\\bf Wiki} & Embeddings obtained via TADW \\newline\n\t\t\tLANE \\newline GAE \\newline VGAE \\newline GraphSAGE \\\\\n\t\t\t\\hline \n\t\t\t{\\bf CDE} & Structure embedding matrix & {\\bf Structure embedding matrix and attribute matrix}\\newline Non-negative matrix factorization & Yes/(Yes/No) & Small\\newline Medium & Accuracy \\newline NMI \\newline Jaccard similarity \\newline F1-score & {\\bf Cora}\\newline {\\bf Citeseer}\\newline {\\bf WebKB}\\newline {\\bf Flickr*} \\newline Philosophers \\newline {\\bf ego-Facebook} & {\\bf PCL-DC} \\newline {\\bf Circles} \\newline {\\bf CESNA} \\newline \\newline {\\bf SCI} \\\\\n\t\t\t\\hline \n\t\t\t{\\bf MGAE} & Autoencoder & {\\bf Node embeddings} \\newline Spectral clustering & Yes/No & Medium & Accuracy\\newline NMI \\newline $F$-score \\newline Precision \\newline Recall \\newline Average Entropy \\newline Adjusted Rand Index & {\\bf Cora}\\newline {\\bf CiteSeer} \\newline {\\bf Wiki} & {\\bf Circles} \\newline RTM \\newline RMSC \\newline Embeddings obtained via TADW \\newline VGAE 
\\\\\n\t\t\t\\hline \n\t\t\\end{tabular}\n\t}\n\\end{center}\n\\end{table}\n\\begin{remark}\nVarious embedding techniques applicable to community detection in {\\it multi-layer} networks have also been considered in the literature.\n\\end{remark}\n\\begin{table}[b]\n\\caption{Pattern mining-based (early fusion) methods.}\n \\label{tab:pattern} \n\\begin{center}\n\t{\\tiny\n\\begin{tabular}{|p{1.1cm}|p{0.9cm}|p{2.9cm}|p{2.5cm}|p{2cm}|p{1.5cm}|p{1.2cm}|p{1cm}|}\n\t\t\t\\hline \n\t\t\tMethod & Pattern used & Community detection method used and its {\\bf input} & Require the number of clusters/ Clusters overlap & Size of datasets used for evaluation & Quality measures & Datasets used & Compared with\\\\\n\t\t\t\\hline \n\t\t\t{\\bf AHMotif} & Motif & {\\bf Structure- and attributes-aware adjacency matrix} \\newline Permanence \\newline Affinity Propagation & Yes/No & Medium & NMI \\newline Accuracy & {\\bf Cora}\\newline {\\bf WebKB} & ---\\\\\n\t\t\t\\hline \n\t\t\\end{tabular}\n\t}\n\\end{center}\n\\end{table}", "id": "bcc6d0ce-88ee-4a08-8574-4e5664967352", "level": "subsection", "origin_cites_number": 28, "parent_id": "1c6b434c-2289-4e6e-9627-ef1149463588", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Early fusion methods" ], [ "subsection", "Embedding-based methods" ] ], "subsections": [], "title": "Embedding-based methods" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{class-ef-patterns}\nRecall that a motif is a pattern of interconnections occurring in real-world networks at numbers significantly higher than those in random\nnetworks. Motifs are considered building blocks for complex networks. We found just one community detection method for node-attributed social networks using such patterns, namely {\\bf AHMotif}, see Table~\\ref{tab:pattern}. 
This method equips structural motifs identified in the network with the so-called homogeneity value based on node attributes involved in the motif. This information is stored in a special adjacency matrix that can be an input to classical community detection algorithms.", "id": "99f1e588-a145-4e2a-aa14-a5084080874e", "level": "subsection", "origin_cites_number": 2, "parent_id": "1c6b434c-2289-4e6e-9627-ef1149463588", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Early fusion methods" ], [ "subsection", "Pattern mining-based (early fusion) methods" ] ], "subsections": [], "title": "Pattern mining-based (early fusion) methods" }, { "cite_extract_rate": 0.228571428571428, "cites": [ 8483, 1714, 8487, 8488, 8482, 1720 ], "content": "\\label{section7}\nRecall that simultaneous fusion methods fuse structure and attributes in a joint process with community detection. For this reason, these methods often require special software implementation, in contrast to early and late fusion methods that partially allow one to use existing implementations of classical community detection algorithms.\n\\begin{table}\n\t\\caption {Methods modifying objective functions of classical clustering algorithms}\n\t\\label{tab_Louvain_NCut} \n\t\\begin{center}\n\t\t{\\tiny\n\t\t\t\\begin{tabular}{|p{1.2cm}|p{2.8cm}|p{1.8cm}|p{1.5cm}|p{1.6cm}|p{2.0cm}|p{2.4cm}|}\n\t\t\t\t\\hline \n\t\t\t\tMethod & Modified algorithm & Require the number of clusters/ Clusters overlap & Size of datasets used for evaluation & Quality measures & Datasets used & Compared with \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf OCru} & Louvain \\newline Added attribute Entropy minimization & No/No & Medium & Modularity\\newline Entropy & {\\bf Facebook100} & --- \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf SAC1} & Louvain \\newline Added attribute similarity maximization & No/ No & Small\\newline Medium & Density\\newline Entropy & {\\bf Political Blogs}\\newline {\\bf 
Facebook100} \\newline {\\bf DBLP10K} & {\\bf SAC2} \\newline {\\bf WSte2} \\newline Fast greedy for weighted graph \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf ILouvain} (\\href{http://bit.ly/ILouvain}{source}) & Louvain \\newline Added maximization of attribute-aware Inertia & No/ No & Small\\newline Medium & NMI \\newline Accuracy & DBLP+Microsoft Academic Search$^*$ \\newline Synthetic & ToTeM \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf LAA/LOA} & Louvain \\newline Modularity gain depends on attributes & No/No & Small & Density\\newline Modularity & \\href{https://sites.google.com/site/ucinetsoftware/datasets/covert-networks/londongang}{London gang} \\newline\n\t\t\t\t\\href{https://sites.google.com/site/ucinetsoftware/datasets/covert-networks/italiangangs}{Italy gang} \\newline\n\t\t\t\t\\href{http://networkrepository.com/polbooks.php}{Polbooks}\n\t\t\t\t\\newline\n\t\t\t\t\\href{http://networkrepository.com/adjnoun.php}{Adjnoun} \n\t\t\t\t\\newline\n\t\t\t\tFootball & --- \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf MAM} (\\href{http://ipd.kit.edu/~muellere/mam/}{source}) & Louvain-type algorithm with attribute-aware Modularity+Outlier detection & No/No & Small\\newline Medium \\newline Large & F1-score \\newline Attribute-aware Modularity & Synthetic \\newline Disney \\newline DFB \\newline ARXIV \\newline\n\t\t\t\tIMDB \\newline \n\t\t\t\t{\\bf DBLP*} \\newline\n\t\t\t\t{\\bf Patents*} \\newline\n\t\t\t\tAmazon & CODA \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf UNCut} & Normalized Cut\\newline Added attributes-aware Unimodality Compactness & Yes/No & Small\\newline Medium & NMI \\newline ARI \\newline & Disney \\newline\n\t\t\t\tDFB \\newline\n\t\t\t\tARXIV \\newline\n\t\t\t\t{\\bf Political Blogs} \\newline\n\t\t\t\t4area \\newline\n\t\t\t\t{\\bf Patents} & {\\bf SA-cluster} \\newline SSCG \\newline {\\bf NNM} \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf NetScan} & An approximation algorithm for the connected $k$-Center optimization problem (structure and attributes involved) & Yes/Yes & 
Small\\newline Medium & Accuracy & Professors* \\newline Synthetic \\newline {\\bf DBLP*}\\newline BioGRID+Spellman\n\t\t\t\t& --- \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf JointClust} & An approximation algorithm for the Connected X Clusters problem (structure and attributes involved) & No/No & Medium & Accuracy & {\\bf DBLP*} \\newline {\\bf CiteSeer*} \\newline Corel stock photo collection & --- \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf SS-Cluster} & $k$-Medoid with structure- and attributes-aware objective functions & Yes/No & Medium & Density \\newline Entropy & {\\bf Political Blogs}\\newline {\\bf DBLP10K} & {\\bf SA-cluster} \\newline W-cluster \\newline $k$SNAP \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf Adapt-SA} & Weighted $k$-means for $d$-dimensional representations of structure and attributes & Yes/No & Medium & Accuracy \\newline \n\t\t\t\tNMI \\newline F-measure \\newline Modularity\\newline\n\t\t\t\tEntropy & Synthetic \\newline {\\bf WebKB}\\newline {\\bf Cora}\\newline {\\bf Political Blogs} \\newline {\\bf CiteSeer} \\newline {\\bf DBLP10K} & {\\bf CODICIL} \\newline {\\bf SA-Cluster} \\newline {\\bf Inc-Cluster} \\newline {\\bf PPSB-DC} \\newline {\\bf PCL-DC} \\newline {\\bf BAGC} \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf kNAS} & $kNN$ with added Semantic Similarity Score & Yes/Yes & Medium & Density\\newline Tanimoto Coefficient & {\\bf DBLP*} \\newline {\\bf Facebook*} \\newline {\\bf Twitter*} & {\\bf SA-Cluster-Opt} \\newline {\\bf CODICIL} \\newline NISE \\\\\n\t\t\t\t\\hline \n\t\t\t\\end{tabular}\n\t\t}\n\t\\end{center}\n\\end{table}", "id": "2eeee52e-f7f4-4129-a5ef-03fe424ccf1f", "level": "section", "origin_cites_number": 35, "parent_id": "4b2a7071-f315-4436-8aea-91d8d6db7afb", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Simultaneous fusion methods" ] ], "subsections": [ "1af2c98c-c4a1-4d9f-917a-330efea7a2a3", "7b216870-6a3a-40ca-874f-da14c2289f12", 
"c986582c-70df-43b2-b7f2-9e2e83ab57f5", "f8f23311-f720-42e4-94c4-ed1c64c919d4", "45b099b6-e934-4199-b2c6-b9c78b20f692", "6a22c128-a708-4842-b8d3-074bec4c46a4" ], "title": "Simultaneous fusion methods" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 8487 ], "content": "\\label{class-sf-modifying}\nTable~\\ref{tab_Louvain_NCut} contains\\footnote{The authors of {\\bf ILouvain} claim that they compare their method with ToTeM, ``another community detection method designed for attributed graphs''. However, there seems to be an inaccuracy here, as we could not find any method called ToTeM in the reference they cite.} short descriptions of simultaneous fusion methods that modify the objective functions of well-known clustering algorithms such as Louvain, Normalized Cut, $k$-means, $k$-medoids and $kNN$. Their main idea is to adapt a classical method (one that originally works, for example, only with the network structure) to use both structure and attributes in the optimization process. For example, if one wants to modify Louvain, whose original objective function is the structure-aware Modularity, one can include an attributes-aware objective function, say, Entropy, in the optimization process. 
Then Modularity is maximized and Entropy is minimized simultaneously in an iterative process similar to that of Louvain.", "id": "1af2c98c-c4a1-4d9f-917a-330efea7a2a3", "level": "subsection", "origin_cites_number": 3, "parent_id": "2eeee52e-f7f4-4129-a5ef-03fe424ccf1f", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Simultaneous fusion methods" ], [ "subsection", "Methods modifying objective functions of classical clustering algorithms" ] ], "subsections": [], "title": "Methods modifying objective functions of classical clustering algorithms" }, { "cite_extract_rate": 0.25, "cites": [ 7062, 1714, 1712 ], "content": "\\label{class-sf-mataheuristic}\nMethods in this subclass are ideologically rather similar to those in Subsection~\\ref{class-sf-modifying}. However, instead of modifying the objective functions and iterative processes of well-known community detection algorithms, they directly apply metaheuristic algorithms (in particular, evolutionary algorithms and tabu search) to find a node-attributed network partition that provides optimal values for certain measures of structural closeness and attribute homogeneity. More precisely, they use metaheuristics to optimize a combination of structure- and attributes-aware objective functions, for example, Modularity and Attribute Similarity. Short descriptions of the methods from this subclass are given in Table~\\ref{tab_heur}. 
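To make the idea concrete, the sketch below greedily reassigns nodes to maximise $\\alpha\\cdot\\text{Modularity}+(1-\\alpha)\\cdot\\text{attribute similarity}$. The simple local search is a toy stand-in for the evolutionary and tabu-search metaheuristics of the actual methods, and all function names are hypothetical.

```python
import numpy as np
from itertools import combinations

def modularity(adj, labels):
    """Newman-Girvan modularity of a hard partition of an undirected graph."""
    A = np.asarray(adj, float)
    m = A.sum() / 2.0                                # number of edges
    k = A.sum(axis=1)                                # node degrees
    same = labels[:, None] == labels[None, :]        # same-community mask
    return ((A - np.outer(k, k) / (2 * m)) * same).sum() / (2 * m)

def attr_similarity(attrs, labels):
    """Mean attribute agreement over intra-community node pairs."""
    pairs = [(i, j) for i, j in combinations(range(len(attrs)), 2)
             if labels[i] == labels[j]]
    if not pairs:
        return 0.0
    return float(np.mean([attrs[i] == attrs[j] for i, j in pairs]))

def local_search(adj, attrs, n_clusters, alpha=0.5, n_sweeps=20):
    """Greedy node moves maximising
    alpha * Modularity + (1 - alpha) * attribute similarity."""
    labels = np.arange(len(adj)) % n_clusters        # arbitrary start
    score = lambda lab: (alpha * modularity(adj, lab)
                         + (1 - alpha) * attr_similarity(attrs, lab))
    best = score(labels)
    for _ in range(n_sweeps):
        improved = False
        for v in range(len(adj)):
            for c in range(n_clusters):
                if c == labels[v]:
                    continue
                trial = labels.copy()
                trial[v] = c
                s = score(trial)
                if s > best:                          # accept improving moves only
                    labels, best, improved = trial, s, True
        if not improved:
            break
    return labels, best
```

Real metaheuristics replace the greedy sweep with population-based or tabu moves, but the fused objective is the same in spirit.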
\n\\begin{table}\n\t\\caption {Metaheuristic-based methods.}\n\t\\label{tab_heur} \n\t\\begin{center}\n\t\t{\\tiny\n\t\t\t\\begin{tabular}{|p{1.2cm}|p{3.7cm}|p{1.5cm}|p{1.0cm}|p{1.4cm}|p{2.5cm}|p{2.2cm}|}\n\t\t\t\t\\hline \n\t\t\t\tMethod & Metaheuristic optimization algorithm & Require the number of clusters/ Clusters overlap & Size of datasets used for evaluation & Quality measures & Datasets used & Compared with \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf MOEA-SA} & Multiobjective evolutionary algorithm (Modularity and Attribute Similarity are maximized) & No/No & Small \\newline Medium & Density\\newline Entropy & {\\bf Political Books}\\newline {\\bf Political Blogs} \\newline {\\bf Facebook100}\\newline {\\bf ego-Facebook} & {\\bf SAC1-SAC2} \\newline {\\bf SA-Cluster} \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf MOGA-@Net} & Multiobjective genetic algorithm (optimizing Modularity, Community score, Conductance, attribute similarity) & No/No & Small\\newline Medium & NMI \\newline Cumulative NMI \\newline Density \\newline Entropy & Synthetic \\newline {\\bf Cora}\\newline {\\bf Citeseer} \\newline {\\bf Political books} \\newline {\\bf Political Blogs} \\newline {\\bf ego-Facebook} & {\\bf SA-cluster} \\newline {\\bf BAGC} \\newline {\\bf OCru} \\newline {\\bf Selection} \\newline HGPA-CSPA \\\\\n\t\t\t\t\\hline\n\t\t\t\t{\\bf JCDC} & Tabu search and gradient ascent for a certain structure- and attributes-aware objective function & Yes/No & Small\\newline Medium & NMI & Synthetic \\newline World trade \\newline {\\bf Lazega} & CASC \\newline {\\bf CESNA} \\newline {\\bf BAGS} \\\\\n\t\t\t\t\\hline \n\t\t\t\\end{tabular}\n\t\t}\n\t\\end{center}\n\\end{table}", "id": "7b216870-6a3a-40ca-874f-da14c2289f12", "level": "subsection", "origin_cites_number": 12, "parent_id": "2eeee52e-f7f4-4129-a5ef-03fe424ccf1f", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Simultaneous fusion methods" ], [ "subsection", 
"Metaheuristic-based methods" ] ], "subsections": [], "title": "Metaheuristic-based methods" }, { "cite_extract_rate": 0.1, "cites": [ 1712, 1721, 8489 ], "content": "\\label{class-sf-nnmf}\nNon-negative matrix factorization (NNMF) is a matrix technique that approximates a non-negative matrix of high rank by a product of non-negative matrices of lower rank so that the approximation error, measured by the Frobenius norm\\footnote{A matrix norm defined as the square root of the sum of the absolute squares of matrix elements.} $\\|\\cdot\\|_F$, is minimal. As is well known, NNMF is able to find clusters in the input data. \nTo be applied to community detection in node-attributed social networks, NNMF requires a proper adaptation to fuse both structure and attributes. Different versions of such an adaptation have been proposed, see Table~\\ref{tab:NMF}. \nTo be more formal in describing the corresponding NNMF-based methods, let us introduce additional notation. Let ${\\bf S}_{n\\times n}$ denote the adjacency matrix for the network structure (as before, $n$ is the number of nodes), ${\\bf A}_{n\\times d}$ the node attribute matrix for the network attributes ($d$ is the dimension of attribute vector $A$), $N$ the number of required clusters (it is an input in NNMF-based methods), ${\\bf U}_{n\\times N}$ the cluster membership matrix whose elements indicate the association of nodes with communities, and finally ${\\bf V}_{d\\times N}$ the cluster membership matrix whose elements indicate the association of the attributes with the communities. In these terms, the aim of NNMF-based methods is to use the known matrices $\\bf S$, $\\bf A$ and the number of clusters $N$ in order to determine the unknown matrices $\\bf U$ and $\\bf V$ in an iterative optimization procedure. 
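A minimal sketch of such an iterative procedure is given below. It applies Lee-Seung-style multiplicative updates to the simplified objective $\\min_{{\\bf U},{\\bf V}\\ge 0}\\,\\alpha\\|{\\bf S}-{\\bf U}{\\bf U}^T\\|^2_F+\\|{\\bf U}-{\\bf A}{\\bf V}\\|^2_F$; this is our own illustrative scheme, not the exact update rule of any method in Table~\\ref{tab:NMF}.

```python
import numpy as np

def nnmf_communities(S, A, N, alpha=1.0, n_iter=200, seed=0, eps=1e-9):
    """Multiplicative updates for
    min_{U,V >= 0}  alpha*||S - U U^T||_F^2 + ||U - A V||_F^2.
    Node i is assigned to community argmax_j U[i, j]."""
    S, A = np.asarray(S, float), np.asarray(A, float)
    rng = np.random.default_rng(seed)
    n, d = A.shape
    U = rng.random((n, N))
    V = rng.random((d, N))
    for _ in range(n_iter):
        # ratios of non-negative terms keep both factors non-negative
        U *= (2 * alpha * S @ U + A @ V) / (2 * alpha * U @ (U.T @ U) + U + eps)
        V *= (A.T @ U) / (A.T @ A @ V + eps)
    return U.argmax(axis=1)
```

In practice, convergence criteria, normalisation of ${\\bf U}$ and sparsity terms such as the one in {\\bf SCI} would be added on top of this skeleton.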
For example, {\\bf SCI} models structural closeness as $\\min_{{\\bf U}\\ge 0}\\|{\\bf S}-{\\bf U}{\\bf U}^T\\|^2_F$ and attribute homogeneity as \n$\\min_{{\\bf V}\\ge 0}\\|{\\bf U}-{\\bf A}{\\bf V}\\|^2_F$. It is also proposed to select the most relevant attributes for each community by adding an $l_1$ norm sparsity term to each column of matrix\n$\\bf V$. As a result, one obtains the following optimization problem: \n$$\n\\min_{{\\bf U}\\ge 0,\\,{\\bf V}\\ge 0}\\left(\\alpha_1\\|{\\bf S}-{\\bf U}{\\bf U}^T\\|^2_F+\\|{\\bf U}-{\\bf A}{\\bf V}\\|^2_F+\\alpha_2 \\sum\\nolimits_j \\|{\\bf V}(\\cdot,j)\\|^2_1\\right),\n$$\nwhere $\\alpha_1>0$ controls the impact of structure and $\\alpha_2\\ge 0$ controls the sparsity penalty. This problem is further approximately solved in an iterative process according to the Majorization-Minimization framework.\n\\begin{table}\n\\caption {Non-negative matrix factorization-based and matrix compression-based methods.} \\label{tab:NMF} \n\\begin{center}\n\t{\\tiny\n\t\t\\begin{tabular}{|p{1.6cm}|p{1.8cm}|p{1.8cm}|p{1cm}|p{2cm}|p{2.5cm}|p{2.8cm}|}\n\t\t\t\\hline \n\t\t\tAlgorithm & Factorization/ compression type & Require the number of clusters/ Clusters overlap & Size of datasets used for evaluation & Quality measures & Datasets used & Compared with \\\\\n\t\t\t\\hline \n\t\t\t{\\bf NPei} & 3-factor NNMF & Yes/Yes & Small\\newline Medium & Purity & {\\bf Twitter} \\newline DBLP* & Relational Topic Model \\\\\n\t\t\t\\hline \n\t\t\t{\\bf 3NCD} & $2$-factor NNMF & Yes/Yes & Medium\\newline Large & F1-score \\newline Jaccard\n\t\t\tsimilarity & {\\bf ego-Facebook}\\newline {\\bf ego-Twitter}\\newline {\\bf ego-G+} & {\\bf CESNA} \\\\\n\t\t\t\\hline \n\t\t\t{\\bf SCI} & 2-factor NNMF & Yes/Yes & Medium & ACC \\newline NMI \\newline GNMI \\newline $F$-measure \\newline Jaccard similarity & {\\bf Citeseer}\\newline {\\bf Cora}\\newline {\\bf WebKB}\\newline {\\bf LastFM} & {\\bf PCL-DC} \\newline {\\bf CESNA} \\newline {\\bf DCM} \\\\\n\t\t\t\\hline 
\n\t\t\t{\\bf JWNMF} & 2-factor NNMF & Yes/Yes & Small\\newline Medium & Modularity\\newline Entropy\\newline NMI & Amazon Fail \\href{https://www.ipd.kit.edu/mitarbeiter/muellere/consub/}{dataset} \n\t\t\t\\newline Disney \\href{http://www.perozzi.net/projects/focused-clustering/}{dataset} \\newline Enron \\href{https://www.ipd.kit.edu/mitarbeiter/muellere/consub/}{dataset} \\newline DBLP-4AREA \\href{http://www.perozzi.net/projects/focused-clustering/}{dataset}\\newline {\\bf WebKB} \\newline {\\bf Citeseer} \\newline {\\bf Cora} & {\\bf BAGC} \\newline {\\bf PICS} \\newline SANS \\\\\n\t\t\t\\hline \n\t\t\t{\\bf SCD} & 2- and 3-factor NNMF & Yes/Yes-No & Small\\newline Medium & Accuracy \\newline NMI & {\\bf Twitter}\\newline {\\bf WebKB} & {\\bf SCI} \\\\\n\t\t\t\\hline\n\t\t\t{\\bf ASCD} & 2-factor NNMF &Yes/Yes-No & Small\\newline Medium & ACC \\newline NMI \\newline $F$-measure \\newline Jaccard similarity & {\\bf LastFM} \\newline {\\bf WebKB}\\newline {\\bf Cora} \\newline {\\bf Citeseer} \\newline {\\bf ego-Twitter*} \\newline {\\bf ego-Facebook*} & Block-LDA \\newline {\\bf PCL-DC} \\newline {\\bf SCI} \\newline {\\bf CESNA} \\newline {\\bf Circles} \\\\\n\t\t\t\\hline\n\t\t\t{\\bf CFOND} & $2$- and $3$-factor NMF & Yes/(Yes/No) & Medium & Accuracy \\newline NMI & {\\bf Cora}\\newline {\\bf CiteSeer}\\newline {\\bf PubMed}\\newline {\\bf Attack}\\newline Synthetic\n\t\t\t& GNMF \\newline DRCC \\newline \n\t\t\tLP-NMTF \\newline iTopicModel \\\\\n\t\t\t\\hline \n\t\t\t{\\bf MVCNMF} & $2$-factor NMF & Yes/Yes & Small\\newline Medium & Density\\newline Entropy & {\\bf Political\n\t\t\t\tBlogs}\\newline {\\bf CiteSeer}\\newline {\\bf Cora}\\newline {\\bf WebKB} \\newline ICDM (DBLP*) & FCAN \\newline SACTL \\newline {\\bf kNAS} \\\\\n\t\t\t\\hline\n\t\t\t{\\bf PICS} (\\href{http://www.andrew.cmu.edu/user/lakoglu/pubs.html#pics}{source}) & Matrix compression (finding rectangular blocks) & No/No & Small\\newline Medium & Anecdotal and visual study & Youtube 
\\newline {\\bf Twitter}* \\newline Phonecall \\newline Device \\newline {\\bf Political Books} (\\href{http://www-personal.umich.edu/~mejn/netdata/}{link}) \\newline {\\bf Political Blogs} & ---\\\\\n\t\t\t\\hline \n\t\t\\end{tabular}\n\t}\n\\end{center}\n\\end{table}\n\\begin{remark}\nNNMF-based community detection methods have also been proposed for {\\it multi-layer} networks.\n\\end{remark}\nNow we briefly describe {\\bf PICS}, a method adopting matrix compression\\footnote{The aim of compression methods is to find a\nshorter form of describing the same information content.} for community detection in node-attributed social networks. {\\bf PICS} uses lossless compression principles to simultaneously compress the network adjacency matrix $\\bf S$ and the attribute matrix $\\bf A$. As a result of the compression, certain homogeneous rectangular blocks in the matrices can be determined. Groups of nodes corresponding to the blocks are considered as communities. One should be aware, however, that nodes within communities found by {\\bf PICS} may not be densely connected, due to the definition of a community adopted there.", "id": "c986582c-70df-43b2-b7f2-9e2e83ab57f5", "level": "subsection", "origin_cites_number": 30, "parent_id": "2eeee52e-f7f4-4129-a5ef-03fe424ccf1f", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Simultaneous fusion methods" ], [ "subsection", "Non-negative matrix factorization-based and matrix compression-based methods" ] ], "subsections": [], "title": "Non-negative matrix factorization-based and matrix compression-based methods" }, { "cite_extract_rate": 0.22222222222222202, "cites": [ 1714, 1710 ], "content": "\\label{class-sf-patterns}\nPattern mining in node-attributed social networks focuses on the extraction of patterns, e.g. subsets of specific attributes or connections\\footnote{An example of a pattern is a maximal clique. 
Recall that a clique is a subset of nodes in an undirected graph such that every two nodes are adjacent. A clique is called maximal if there is no other clique that contains it.}, in network structure and attributes . Among other things, this helps to make sense of a network and to understand how it was formed. In the context of community detection, the extracted patterns are used as building blocks for communities. \nThere are many papers devoted to pattern mining in social networks but the majority of them do not deal with the task of community detection. The ones relevant to the topic of this survey are presented in Table~\\ref{tab:clique}. It is worth mentioning here that it is common for pattern mining-based methods to detect communities not in the whole network but in its part only (e.g. ). \n\\begin{table}\n\\caption {Pattern mining-based (simultaneous fusion) methods.}\n\\label{tab:clique} \n\\begin{center}\n\t{\\tiny\n\t\t\\begin{tabular}{|p{1.4cm}|p{1.2cm}|p{2.5cm}|p{2cm}|p{2.5cm}|p{2.0cm}|p{1.8cm}|}\n\t\t\t\\hline \n\t\t\tMethod & Patterns used & Require the number of clusters/ Clusters overlap & Size of datasets used for evaluation & Quality measures & Datasets used & Compared with\\\\\n\t\t\t\\hline \n\t\t\t{\\bf DCM} (\\href{http://www.patternsthatmatter.org/software.php}{source}) & Semantic patterns ({\\it queries}) & Yes/Yes & Small\\newline Medium & Community score\\newline Conductance \\newline Intra-cluster density \\newline Modularity & {\\bf Delicious}\\newline {\\bf LastFM}\\newline {\\bf Flickr} & --- \\\\\n\t\t\t\\hline \n\t\t\t{\\bf COMODO} & Semantic patterns & Yes/Yes & Small\\newline Medium & Description complexity\\newline Community size & BibSonomy \\newline {\\bf Delicious} \\newline {\\bf LastFM} & {\\bf DCM} \\\\\n\t\t\t\\hline \n\t\t\t{\\bf ACDC} & Maximal cliques & Yes/Yes & Medium & Density & {\\bf Political Blogs} & {\\bf SA-Cluster} \\newline {\\bf SAC1-SAC2} 
\\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t}\n\\end{center}\n\\end{table}\n\\begin{remark}\nABACUS detects communities by extracting patterns in {\\it multi-layer} networks.\n\\end{remark}", "id": "f8f23311-f720-42e4-94c4-ed1c64c919d4", "level": "subsection", "origin_cites_number": 9, "parent_id": "2eeee52e-f7f4-4129-a5ef-03fe424ccf1f", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Simultaneous fusion methods" ], [ "subsection", "Pattern mining-based (simultaneous fusion) methods" ] ], "subsections": [], "title": "Pattern mining-based (simultaneous fusion) methods" }, { "cite_extract_rate": 0.137931034482758, "cites": [ 1712, 1721, 1714, 7062 ], "content": "\\label{class-sf-probabilistic-models}\n\\begin{table}[b]\n\t\\caption{Probabilistic model-based methods.} \\label{tab:statistical} \n\t\\begin{center}\n\t\t{\\tiny\t\\begin{tabular}{|p{1.2cm}|p{2.5cm}|p{1.5cm}|p{1.3cm}|p{1.5cm}|p{3cm}|p{2.5cm}|}\n\t\t\t\t\\hline \n\t\t\t\tMethod & Model features & Require the number of clusters/ Clusters overlap & Size of datasets used for evaluation & Quality measures & Datasets used & Compared with \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf PCL-DC} & Conditional Link Model\\newline Discriminative Content model & Yes/No & Medium & NMI \\newline Pairwise $F$-measure \\newline Modularity\\newline Normalized cut & {\\bf Cora}\\newline {\\bf Siteseer} & PHITS-PLSA \\newline LDA-Link-Word \\newline Link-Content-Factorization \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf CohsMix} & MixNet model & Yes/No & Small & Rand Index & Synthetic \\newline Exalead.com search engine dataset & Multiple view learning \\newline Hidden Markov Random Field \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf BAGC} \\newline {\\bf GBAGC} & a Bayesian treatment on distribution parameters & Yes/No & Medium & Modularity\\newline Entropy & {\\bf Political Blogs}\\newline {\\bf DBLP10K}\\newline {\\bf DBLP84K} & {\\bf Inc-Cluster} \\newline {\\bf PICS} 
\\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf VEM-BAGC} & Based on {\\bf BAGC} & Yes/No & Medium & Modularity\\newline Entropy & {\\bf Political Blogs}\\newline Synthetic networks & {\\bf BAGC} \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf PPSB-DC} & Popularity-productivity stochastic block model and discriminative content model & Yes/No & Medium & normalized mutual information (NMI)\\newline Pairwise\n\t\t\t\tF measure (PWF)\\newline Accuracy & {\\bf Cora} \\newline {\\bf CiteSeer} \\newline {\\bf WebKB} & {\\bf PCL-DC} \\newline PPL-DC \\\\\n\t\t\t\t\\hline \n\t\t\t{\\bf CESNA} & A probabilistic generative model assuming communities generate network structure and attributes & No/Yes & Medium\\newline Large & Evaluation & {\\bf ego-Facebook}\\newline {\\bf ego-G+} \\newline {\\bf ego-Twitter}\\newline \n\t\t\t{\\bf Wikipedia* (philosophers)}\\newline {\\bf Flickr} & {\\bf CODICIL} \\newline {\\bf Circles} \\newline Block-LDA \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf Circles} & A generative model for friendships in social circles & Yes/Yes & Medium\\newline Large & Balanced Error Rate & {\\bf ego-Facebook}\\newline {\\bf ego-G+} \\newline {\\bf ego-Twitter} & Block-LDA \\newline Adapted Low-Rank Embedding \\\\\n\t\t\t\t\\hline \n\t\t\t\t{\\bf SI} & A modified version of a stochastic block model & Yes/No & Small\\newline Medium & Normalized\n\t\t\t\tmutual information (NMI) & Synthetic \\newline High school friendship network \\newline Food web of marine species in the Weddell Sea \\newline Harvard Facebook friendship network \\newline Malaria HVR 5 and 6 gene\n\t\t\t\trecombination network & --- \\\\\n\t\t\t\t\\hline\n\t\t\t\t{\\bf NEMBP} & A generative model with learning method using a nested EM algorithm with belief propagation & Yes/(Yes/No) & Small \\newline Medium & Accuracy \\newline NMI \\newline GNMI \\newline F-score \\newline Jaccard & {\\bf WebKB} \\newline {\\bf ego-Twitter*}\\newline\n\t\t\t\t{\\bf ego-Facebook*}\\newline {\\bf CiteSeer} \\newline {\\bf Cora}\\newline 
{\\bf Wikipedia*}\newline {\\bf Pubmed} & Block-LDA \\newline {\\bf PCL-DC} \\newline {\\bf CESNA} \\newline {\\bf DCM} \\newline {\\bf SCI} \\\\\n\t\t\t\t\\hline\n\t\t\t\t{\\bf NBAGC-FABAGC} & A nonparametric and asymptotic Bayesian model selection method based on {\\bf BAGC} & No/No & Medium & NMI \\newline Modularity \\newline Entropy & Synthetic \\newline {\\bf Political Blogs} \\newline {\\bf DBLP10K} \\newline {\\bf DBLP84K} & {\\bf PICS} \\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t}\n\t\\end{center}\n\\end{table}\nMethods from this subclass probabilistically infer the distribution of community memberships for nodes in a node-attributed social network under the assumption that network structure and attributes are generated according to chosen parametric distributions. Generative and discriminative models are mainly used for the inference. It is worth mentioning that it is, though, a non-trivial task to ``properly'' choose a priori distributions for structure and attributes .\nShort descriptions of the methods from this subclass are given in Table~\\ref{tab:statistical}. Note that this table does not contain any method preceding to and this requires the following additional comments. According to , several probabilistic model-based clustering methods for node-attributed networks had been proposed before , for example, in . However, they focus on node-attributed {\\it document} networks which are out of the scope of the present survey. That is why they are not included in Table~\\ref{tab:statistical}.\n\\begin{remark}\nTUCM proposes a generative Bayesian model for detecting communities in {\\it multi-layer} networks where different types of interactions between social actors are possible. 
\n\\end{remark}", "id": "45b099b6-e934-4199-b2c6-b9c78b20f692", "level": "subsection", "origin_cites_number": 29, "parent_id": "2eeee52e-f7f4-4129-a5ef-03fe424ccf1f", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Simultaneous fusion methods" ], [ "subsection", "Probabilistic model-based methods" ] ], "subsections": [], "title": "Probabilistic model-based methods" }, { "cite_extract_rate": 0.22222222222222202, "cites": [ 1714, 1712 ], "content": "\\label{class-sf-dynamics}\nMethods from this subclass (see Table~\\ref{tab:agent}) treat a node-attributed social network as a dynamic system and assume that its community structure is a consequence of certain interactions among nodes (of course, the attributes are thought to affect the interactions). Some of the methods assume that the interactions occur in an information propagation process, i.e. while information is sent to or received from every node. Others comprehend each node as an autonomous agent and develop a multi-agent system to detect communities. Note that these approaches are rather recent and consider community detection in node-attributed social networks from a new perspective. 
Furthermore, they seem to be efficient for large networks as they can be easily parallelized.\n\\begin{table}\n\\caption{Dynamical system-based and agent-based methods.}\n\\label{tab:agent} \n\\begin{center}\n\t{\\tiny\n\t\t\\begin{tabular}{|p{1.2cm}|p{3.2cm}|p{1.7cm}|p{1.5cm}|p{2cm}|p{2cm}|p{1.8cm}|}\n\t\t\t\\hline \n\t\t\tMethod & Description & Require the number of clusters/ Clusters overlap & Size of datasets used for evaluation & Quality measures & Datasets used & Compared with \\\\\n\t\t\t\\hline \n\t\t\t{\\bf CPIP}-{\\bf CPRW} & Content (information) propagation models: a linear approximate model\n\t\t\tof influence propagation ({\\bf CPIP}) and content propagation\n\t\t\twith the random walk principle ({\\bf CPRW}) & Yes/Yes & Medium & F-score\\newline Jaccard Similarity\\newline NMI & {\\bf CiteSeer}\\newline {\\bf Cora}\\newline {\\bf ego-Facebook}\\newline {\\bf PubMed Diabetes} & Adamic Adar \\newline {\\bf PCL-DC} \\newline {\\bf Circles} \\newline {\\bf CODICIL} \\newline {\\bf CESNA} \\\\\n\t\t\t\\hline \n{\\bf CAMAS} & Each node with attributes as an autonomous agent with influence in a cluster-aware multiagent system & No/Yes & Medium\\newline Large & Coverage Rate\\newline Normalized Tightness \\newline Normalized Homogeneity\\newline F1-Score \\newline Jaccard\\newline Adjusted Rand Index & Synthetic \\newline {\\bf ego-Facebook}\\newline {\\bf ego-Twitter*} \\newline {\\bf ego-G+} & {\\bf CESNA} \\newline EDCAR \\\\\n\\hline \n{\\bf SLA} & A dynamic cluster formation game played by all nodes and clusters in a discrete-time dynamical system & Yes/No & Medium \\newline Large & Density \\newline Entropy \\newline F1-score \\newline & {\\bf Delicious}\\newline {\\bf LastFM} \\newline {\\bf ego-Facebook}\\newline {\\bf ego-Twitter*} \\newline {\\bf ego-G+} & {\\bf CESNA} \\newline EDCAR \\\\\n\\hline \n\t\t\\end{tabular}\n\t}\n\\end{center}\n\\end{table}\n\\begin{figure}[b]\n\t\\centering\n\t\\includegraphics[width=0.9\\linewidth]{late.pdf}\n\t\\caption{A 
typical scheme of a late fusion method.}\n\t\\label{fig:late-fusion}\n\\end{figure}", "id": "6a22c128-a708-4842-b8d3-074bec4c46a4", "level": "subsection", "origin_cites_number": 9, "parent_id": "2eeee52e-f7f4-4129-a5ef-03fe424ccf1f", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Simultaneous fusion methods" ], [ "subsection", "Dynamical system-based and agent-based methods" ] ], "subsections": [], "title": "Dynamical system-based and agent-based methods" }, { "cite_extract_rate": 0.5, "cites": [ 8487 ], "content": "\\label{section8}\nRecall that late fusion methods intend to fuse structural and attributive information after the community detection process. More precisely, community detection is first separately performed for structure (e.g. by Louvain ) and attributes (e.g. by $k$-means ). After that, the partitions obtained are fused somehow in order to get the resulting structure- and attributes-aware partition, see Figure~\\ref{fig:late-fusion}.\nNote that late fusion methods usually allow a researcher/data scientist to use existing implementations of classical community detection and consensus clustering algorithms to get the required partition.", "id": "a4712ae6-1690-4727-8bbb-b44187674ee8", "level": "section", "origin_cites_number": 2, "parent_id": "4b2a7071-f315-4436-8aea-91d8d6db7afb", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Late fusion methods" ] ], "subsections": [ "a2976c7a-9d26-4a9e-9cb2-40daafc174e6", "edd7b709-6935-46b9-9d57-cbc247202cf8" ], "title": "Late fusion methods" }, { "cite_extract_rate": 0.35714285714285704, "cites": [ 8484, 1712, 1722, 1715 ], "content": "\\label{class-lf-consensus}\nGiven a set (also known as an ensemble) of partitions, the general goal of consensus clustering algorithms is to find a single consolidated partition that aggregates information in the ensemble . 
A recent survey on such methods can be found e.g. in . The idea behind consensus clustering is clearly appropriate for community detection in node-attributed social networks if one has an ensemble of partitions obtained separately (or maybe even jointly) for structure and attributes. Table~\\ref{tab:late} contains short descriptions of the methods applying the idea.\n\\begin{table}[t]\n\\caption {Consensus-based methods.} \\label{tab:late} \n\\begin{center}\n\t{\\tiny\n\t\t\\begin{tabular}{|p{1.2cm}|p{3.0cm}|p{1.9cm}|p{1.5cm}|p{1.5cm}|p{2.3cm}|p{2cm}|}\n\t\t\t\\hline \n\t\t\tMethod & Fusing the partitions & Require the number of clusters/ Clusters overlap & Size of datasets used for evaluation & Quality measures & Datasets used & Compared with \\\\\n\t\t\t\\hline \n\t\t\t{\\bf LCru} & Row-manipulation in the contingency matrix for the partitions & No/No & Small\\newline Medium & ARI \\newline Density\\newline Entropy & Facebook$^*$\\newline {\\bf DBLP10K} & ---\\\\\n\t\t\t\\hline \n\t\t\t{\\bf Multiplex} & Multiplex representation scheme (attributes and structure are clustered separately as layers and then combined via consensus ) & No/Yes & Medium\\newline Large & F1-score & Synthetic \\newline {\\bf ego-Twitter} \\newline {\\bf ego-Facebook} \\newline {\\bf ego-G+} & {\\bf CESNA} \\newline {\\bf 3NCD} \\\\\n\t\t\t\\hline\n\t\t\t{\\bf WCMFA} & Association matrix with weighting based on structure- and attributes-aware similarity & Depends on the partitions & Small & Rand index \\newline ARI \\newline NMI & {\\bf Consult}~\\newline \t\t\n\t\t\tLondon Gang \\newline\n\t\t\tMontreal Gang & {\\bf WMen} \\\\\n\t\t\t\\hline \t\t\n\t\t\\end{tabular}\n\t}\n\\end{center}\n\\end{table}\n\\begin{remark}\nGeneral-purpose consensus clustering algorithms for {\\it multi-layer} networks\nare considered in .\n\\end{remark}", "id": "a2976c7a-9d26-4a9e-9cb2-40daafc174e6", "level": "subsection", "origin_cites_number": 14, "parent_id": "a4712ae6-1690-4727-8bbb-b44187674ee8", 
"prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Late fusion methods" ], [ "subsection", "Consensus-based methods" ] ], "subsections": [], "title": "Consensus-based methods" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 8484 ], "content": "\\label{class-lf-switch}\nThe only method included in this subclass (see Table~\\ref{tab:late2}) also deals with partitions obtained separately for structure and attributes but chooses a more ``preferable'' one instead of finding consensus. Namely, {\\bf Selection} switches from a structure-based to an attributes-based partition when the former one is {\\it ambiguous}. This refers to the case when the so-called {\\it estimated mixing parameter} $\\mu$ for the structure-based partition is less than a certain experimental value $\\mu_{\\lim}$ associated with a significant drop in clustering quality on synthetic networks . An interested reader can find the precise definitions of $\\mu$ and $\\mu_{\\lim}$ in .\n\\begin{table}[t]\n\\caption {Switch-based methods.} \\label{tab:late2} \n\\begin{center}\n\t{\\tiny\n\t\t\\begin{tabular}{|p{1.2cm}|p{3.0cm}|p{1.9cm}|p{1.5cm}|p{1.5cm}|p{2.3cm}|p{2cm}|}\n\t\t\t\\hline \n\t\t\tMethod & Fusing the partitions & Require the number of clusters/ Clusters overlap & Size of datasets used for evaluation & Quality measures & Datasets used & Compared with \\\\\n\t\t\t\\hline \n\t\t\t{\\bf Selection} & Switching between the partitions & Depends on the partitions & Medium & NMI \\newline Modularity & Synthetic LFR benchmark \\newline {\\bf DBLP84K} & {\\bf BAGC} \\newline {\\bf OCru} \\newline {\\bf SA-Cluster} \\newline HGPA-CSPA \\\\\n\t\t\t\\hline \n\t\t\\end{tabular}\n\t}\n\\end{center}\n\\end{table}", "id": "edd7b709-6935-46b9-9d57-cbc247202cf8", "level": "subsection", "origin_cites_number": 6, "parent_id": "a4712ae6-1690-4727-8bbb-b44187674ee8", "prefix_titles": [ [ "title", "Community detection in node-attributed social 
networks: a~survey" ], [ "section", "Late fusion methods" ], [ "subsection", "Switch-based methods" ] ], "subsections": [], "title": "Switch-based methods" }, { "cite_extract_rate": 0.347826086956521, "cites": [ 7062, 1712, 1714, 1723, 1724, 1721 ], "content": "\\label{section9}\nThe information collected in Sections~\\ref{section6}--\\ref{section8} allows us to analyze the overall situation in the field. Particularly, we would like to determine which methods are {\\it state-of-the-art}\\footnote{According to Cambridge Dictionary, state-of-the-art means ``the best and most modern of its type''.}. This is probably what a researcher/data scientist facing the community detection problem in node-attributed social networks expects from our survey.\nWe start by observing the directed graph that is based on the information in Sections~\\ref{section6}--\\ref{section8} and shows the existing method-method comparisons, see Figure~\\ref{fig:graph1}. It requires several comments though. First, there are methods in Sections~\\ref{section6}--\\ref{section8} that are not compared with others for community detection in node-attributed networks or compared with only a few. For this reason, we include in the graph only nodes (representing methods) whose degree is at least two. This means that there are at least two comparison experiments with each method presented in Figure~\\ref{fig:graph1}. Note that 46 of the 75 methods classified in the survey are shown in the graph. \nSecond, the directed edges in the graph show the existing method-method comparisons. For example, the directed edge from node {\\bf CESNA} to node {\\bf CODICIL} indicates that the authors of {\\bf CESNA} compared their method with {\\bf CODICIL} and showed that {\\bf CESNA} outperforms {\\bf CODICIL} in some sense (community detection quality, computational efficiency, etc.). 
This applies to all edges in the graph.\n\\begin{figure}[b]\n\t\\label{fig:graph1}\n \\includegraphics[scale=0.95]{Graph1new.pdf}\n\t\\caption{The directed graph of existing method-method comparisons. Nodes (only those with degree $\\ge 2$ are shown) represent the methods classified in the present survey. The most influential methods (nodes with the highest PageRank) are filled green so that darker green means higher PageRank.}\n\\end{figure}\nWhat is more, we used PageRank to detect the most important or, better to say, {\\it most influential methods} in the field. Nodes with the highest PageRank values are filled green in Figure~\\ref{fig:graph1} so that darker green means higher PageRank. It turns out that the most influential ones are (in the order they are discussed in Sections~\\ref{section6}--\\ref{section8}):\n\\begin{itemize}\n\\item weight-based {\\bf SAC2}~ and {\\bf CODICIL}~ (Subsection~\\ref{class-ef-weight}),\n\\item node-augmented graph-based {\\bf SA-Cluster} , {\\bf Inc-Cluster} and {\\bf SA-Cluster-Opt}~ (Subsection~\\ref{class-ef-node-augmented-graph}),\n\\item {\\bf SAC1}~ modifying the Louvain objective function (Subsection~\\ref{class-sf-modifying}),\n\\item NNMF-based {\\bf SCI} and matrix compression-based {\\bf PICS} (Subsection~\\ref{class-sf-nnmf}),\n\\item pattern mining-based (simultaneous fusion) {\\bf DCM} (Subsection~\\ref{class-sf-patterns}),\n\\item probabilistic model-based {\\bf PCL-DC} , {\\bf BAGC} , {\\bf GBAGC} , {\\bf CESNA} and {\\bf Circles} (Subsection~\\ref{class-sf-probabilistic-models}).\n\\end{itemize}\nIn our opinion, these methods may be thought to be those chosen by the researchers' community as determining further developments in the field. 
Thus we encourage a newcomer in the field to get familiar with them first to see the main ideas and techniques existing for community detection in node-attributed social networks.\nAt the same time, we would not consider the most influential methods as state-of-the-art, as other methods are shown to outperform them in some sense.\nWhat else can the graph in Figure~\\ref{fig:graph1} tell us? We emphasize that it does not contain 29 of the methods discussed in Sections~\\ref{section6}--\\ref{section8} and, furthermore, it is rather sparse and even disconnected. These points lead to the conclusion that the comparison study of the methods in the field is far from being complete. We would even strengthen the conclusion made by saying that a researcher/data scientist cannot be sure that the method chosen for practical use is preferable to other existing ones, even if there is an edge in Figure~\\ref{fig:graph1}. The problem is that the comparison experiments (represented via the edges) are made by different means, e.g. different quality measures, datasets, hyperparameter tuning strategies, etc. Let us discuss this below in more detail.\nSuppose for a second that two methods show the same community detection quality for a number of datasets and measures, and their hyperparameters are tuned to provide the best possible results. Then we should consider how much time/space each method uses for it. Particularly, we may think of a method's computational complexity in terms of the number of vertices $n$, the number of edges $m$ and the attribute dimension $d$ in a node-attributed graph. Such estimates exist for certain methods, particularly for some of the most influential ones. Examples are {\\bf CODICIL}~ with $O(n^2\\log n)$, {\\bf SA-Cluster} with $O(n^3)$ and {\\bf CESNA} with $O(m)$. However, we could not find such estimates for the majority of methods discussed in Sections~\\ref{section6}--\\ref{section8} as authors often omit such estimation. 
This makes the overall comparison of methods in terms of computational complexity impossible.\nNow let us discuss the hyperparameter tuning problem. It turns out that some authors tune hyperparameters in their methods manually, while some just do not consider the problem at all. Another issue is the lack of a general understanding of how to determine the ``equal impact'' of structure and attributes on the community detection results. For example, in the weight-based early fusion methods (Subsection~\\ref{class-ef-weight}) some authors choose $\\alpha=1/2$ in experiments hoping that this provides the equal impact. However, if one takes into account the different nature of structural and attributive information and the imbalance between associated statistical features, this choice seems questionable in a general situation.\nFurthermore, authors use different datasets (of various size and nature, see Section~\\ref{section5}) and quality measures to test their methods, so that a unified comparison of experimental results in different papers cannot be carried out\\footnote{A more general observation is that several comparison experiments clearly do not provide generality of conclusions. This however seems to be a hot topic among supporters of scientific research on one side and of empirical research on the other.}. What is more, datasets and software implementations used in comparison experiments are rarely provided by the authors (especially of out-of-date papers), thus making the reproduction of their results time-consuming or even impossible. Note how few links to source codes are indicated in Sections~\\ref{section6}--\\ref{section8}.\nLet us now discuss the methodology of using quality measures in comparison experiments. There exist two main strategies (see Section~\\ref{section5}). Namely, community detection quality can be evaluated (a) by heuristic measures of structural closeness and attributes homogeneity (e.g. 
Modularity and Entropy) when the dataset under consideration has no ``ground truth'' and (b) by measures estimating the agreement between the detected communities and the ground truth ones (e.g. NMI or ARI).\nLet us first discuss Option (a) in terms of a typical weight-based early fusion method from Subsection~\\ref{class-ef-weight}. For a given node-attributed graph $G$, suppose that we first convert its attributes $(\\mathcal{V},\\mathcal{A})$ into an attributive graph $G_A$ and further fuse it with the structure $(\\mathcal{V},\\mathcal{E})$. This results in a weighted graph $G_W$ that is thought to contain information about both the structure and the attributes in a unified form. After that, we find communities in $G_W$ that provide, say, the maximum of Modularity of $G_W$ (e.g. by Weighted Louvain). At the evaluation step we further calculate Modularity of $(\\mathcal{V},\\mathcal{E})$ and Entropy of $(\\mathcal{V},\\mathcal{A})$ based on the partition found. This seems reasonable as long as one does not look at such a methodology critically. Indeed, note that we deal with one quality measure within the optimization process (Modularity of $G_W$) but evaluate community detection performance by other measures (Modularity of $(\\mathcal{V},\\mathcal{E})$ and Entropy of $(\\mathcal{V},\\mathcal{A})$). Generally speaking, how can we be sure that optimization of one objective function provides optimal values of other objective functions, if there is no mathematically established connection between them? Of course, this may be simply called a heuristic, but it looks more like a logical gap, from our point of view. Anyway, we are unaware of any explicit explanation of such a methodology. 
What is more, one should carefully study how the change of data representation within a method (the change of vectors $\\mathcal{A}$ for edge weights in $G_A$ in the above-mentioned example) affects community detection results.\nTake into account that a similar discussion applies to the majority of methods in Sections~\\ref{section6}--\\ref{section8} that use quality measures for estimating structural closeness and attributes homogeneity of the detected communities. The exceptions are the methods that directly optimize the quality measures {\\it within the optimization process}. Examples are the simultaneous fusion methods {\\bf OCru} , \n{\\bf SAC1} ,\n{\\bf ILouvain} , \n{\\bf UNCut} , \n{\\bf MOEA-SA} ,\n{\\bf MOGA-@Net} and \n{\\bf JCDC} .\nJust by construction, they aim at providing optimal values of the quality measures. One should however take into account the precision of the optimization method applied. \nNow we turn our attention to the methodology of evaluating community detection quality by measures that estimate the agreement between the detected communities and the ground truth ones (Option (b)). In our opinion, such a methodology makes sense for synthetic node-attributed social networks as the way the communities are formed is known in advance. As for real-world networks, it seems somewhat questionable as ground truth communities may reflect only one point of view on network communities (among many possible). Therefore, expecting that a community detection method would share this point of view seems somewhat unreasonable. There are several works on this issue, e.g. , discussing connections between ground truth, attributes and quality measures in detail, and therefore we refer the interested reader to them.\nRecall that we started this discussion trying to determine the state-of-the-art methods for community detection in node-attributed social networks. Unfortunately, the above-mentioned facts do not give us a chance to do this. 
Of course, we could simply list the most recent methods in the field (and then it is enough to check just the time of publication), but this certainly does not meet the requirements imposed on state-of-the-art methods.", "id": "ec3f6cdd-43b4-49d2-ab2f-b07a25b101ed", "level": "section", "origin_cites_number": 23, "parent_id": "4b2a7071-f315-4436-8aea-91d8d6db7afb", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Analysis of the overall situation in the field" ] ], "subsections": [], "title": "Analysis of the overall situation in the field" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{section10}\nIt is shown in the survey that there exists a large number of methods for community detection in node-attributed social networks based on different ideas. In particular, we gave short descriptions of the 75 most relevant methods and mentioned many more that are partly related to the topic (Sections~\\ref{section6}--\\ref{section8}).\nWe also proposed to divide the methods into the three classes --- early fusion, simultaneous fusion and late fusion ones (Section~\\ref{section4}). This classification is based on the moment when network structure and attributes are fused within a method and allows a researcher/data scientist to estimate the ease of a method's software implementation. Namely, we concluded that early and late fusion methods can be implemented more easily than simultaneous fusion ones, as the former two usually can be composed of several classical community detection algorithms with existing implementations. At a lower level, we also divided the methods into subclasses according to the fusion techniques used (Section~\\ref{section4}). 
This allows one to estimate the methodological variety in the field.\nWithin the classification, we also focused on the experimental part so that one can see which datasets and measures are used for quality evaluation of each method from Sections~\\ref{section5}--\\ref{section8}.\nThe analysis of all the information collected brought us to the unfortunate conclusion that it is impossible now to determine state-of-the-art methods for community detection in node-attributed social networks (Section~\\ref{section9}). This is a result of the presence of the following general problems in the field that we disclosed in the survey:\n \\begin{itemize}\n \\item the terminology in the field is rather unstable (Subsection~\\ref{subsection 3.2});\n \\item there is no generally accepted opinion on the effect of fusing structure and attributes, in particular, on when the fusion is helpful and when not in terms of subclasses of node-attributed social networks (Subsection~\\ref{subsection-str-closeness-attr-homo}); \n\\item there is no unified methodology for experimental comparison of methods that would include estimation of computational complexity, use of a unified set of datasets and quality measures for evaluation, and justified hyperparameter tuning procedures (Section~\\ref{section9});\n\\item there is no general understanding of what the ``equal impact'' of structure and attributes on community detection results is (Section~\\ref{section9});\n \\item as a rule, there is no mathematically established connection between the computational processes within a community detection method and the quality measures used for its evaluation (Section~\\ref{section9}).\n \\end{itemize}\nSummarizing, we concluded that the comparison study allowing one to determine the most preferable (in any sense) methods in the field is far from being complete.\nNevertheless, community detection methods dealing both with network structure and attributes remain a powerful tool in social network analysis and can 
yield useful insights about a node-attributed social network to a researcher/data scientist. Furthermore, they have wide applications even beyond social networks (Section~\\ref{section3}). In view of this, we believe that the formulation of the existing problems in the field done in this survey is the first step in finding solutions to them.\n\\section*{Competing Interests Statement}\nThere are no competing interests in the publication of this paper.\n\\section*{Acknowledgements}\nThe author is very grateful to Klavdiya Bochenina and Timofey Gradov for numerous conversations on the topic and especially to the Anonymous Referee for useful comments that helped to essentially improve the quality of exposition in the survey.\nThis research is financially supported by the Russian Science Foundation, Agreement 17-71-30029, with co-financing of Bank Saint Petersburg, Russia.\n{\\small\n\\bibliographystyle{acm}\n\\bibliography{sample-base}\n}\n\\end{document}", "id": "7362ce42-549a-4724-9b4c-b963b4574449", "level": "section", "origin_cites_number": 0, "parent_id": "4b2a7071-f315-4436-8aea-91d8d6db7afb", "prefix_titles": [ [ "title", "Community detection in node-attributed social networks: a~survey" ], [ "section", "Conclusions" ] ], "subsections": [], "title": "Conclusions" } ]
88
[ 8481, 1710, 8483, 1712, 8482, 1711, 8484, 217, 9116, 1713, 1714, 8485, 1715, 8486, 1716, 7062, 8487, 8488, 1717, 242, 8489, 215, 282, 1719, 1718, 229, 1010, 1720, 1721, 1722, 1723, 1724 ]
1.138955
[ "Xiaocong Chen", "Lina Yao", "Julian McAuley", "Guanglin Zhou", "Xianzhi Wang" ]
A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions
2021
2021-09-08T10:44:55Z
cs.IR
In light of the emergence of deep reinforcement learning (DRL) in recommender systems research and several fruitful results in recent years, this survey aims to provide a timely and comprehensive overview of the recent trends of deep reinforcement learning in recommender systems. We start with the motivation of applying DRL in recommender systems. Then, we provide a taxonomy of current DRL-based recommender systems and a summary of existing methods. We discuss emerging topics and open issues, and provide our perspective on advancing the domain. This survey serves as introductory material for readers from academia and industry into the topic and identifies notable opportunities for further research.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "4bb82714-5571-4179-9dac-693fb550f058", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ] ], "subsections": [ "f16a1a1e-25a7-4e14-aaa7-69bb7d4fc8b0", "a6f31a50-0b05-46c3-ac49-a2a06aea3f6f", "696bfdf7-6191-445e-82a8-ae4df2378853", "3dd345fb-ce99-4727-b174-a3f36f9f711a", "62c3e9c3-0318-4d9a-947c-bf1d76096072", "db61769c-00fa-49cf-aca1-794fe6209774", "0a382833-431c-4392-982a-66ab0eba2961" ], "title": "root" }, { "cite_extract_rate": 0.4, "cites": [ 8983, 5194 ], "content": "Recent years have seen significant progress in recommendation techniques, from traditional approaches, e.g., collaborative filtering, content-based recommendation and matrix factorization~, to deep learning based techniques.\nIn particular, deep learning shows strong advantages in solving complex tasks and dealing with complex data, due to its capability to capture non-linear user-item relationships and deal with various types of data sources such as images and text.\nIt has thus been increasingly used in recommender systems.\nDeep learning-based recommender systems have limitations in capturing interest dynamics~ due to distribution shift, i.e., the training phase is based on an existing dataset which may not reflect real user preferences that undergo rapid change.\nIn contrast, deep reinforcement learning (DRL) aims to train an agent that can learn from interaction trajectories provided by the environment by combining the power of deep learning and reinforcement learning. 
Since an agent in DRL can actively learn from users' real-time feedback to infer dynamic user preferences,\nDRL is especially suitable for learning from interactions, such as human-robot collaboration; it has also driven significant advances in a range of interactive applications, from video games and Alpha Go to autonomous driving~.\nIn light of the significance of and recent progress in DRL for recommender systems, we aim to provide a timely summary of and commentary on DRL-based recommender systems in this survey.\nA recent survey on reinforcement learning based recommender systems provides a general review about reinforcement learning in recommender systems without a sophisticated investigation of the growing area of deep reinforcement learning. Our survey distinguishes itself in providing a systematic and comprehensive overview of existing methods in DRL-based recommender systems, along with a discussion of emerging topics, open issues, and future directions. This survey introduces researchers, practitioners and educators to this topic\nand fosters an understanding of the key techniques in the area.\nThe main contributions of this survey include the following:\n\\begin{itemize}\n \\item We provide an up-to-date comprehensive review of deep reinforcement learning in recommender systems, with state-of-the-art techniques and pointers to core references. To the best of our knowledge, this is the first comprehensive survey of deep reinforcement learning based recommender systems.\n \\item We present a taxonomy of the literature of deep reinforcement learning in recommender systems. Along with the outlined taxonomy and literature overview, we discuss the benefits and drawbacks of existing methods and give suggestions for future research directions. \n \\item We shed light on emerging topics and open issues for DRL-based recommender systems. 
We also point out future directions that could be crucial for advancing DRL-based recommender systems.\n\\end{itemize}\nThe remainder of this survey is organized as follows: Section 2 provides an overview of recommender systems, DRL and their integration. Section 3 provides a literature review with a taxonomy and classification mechanism. Section 4 reviews emerging topics, and Section 5 points out open questions. Finally, Section 6 provides a few promising future directions for further advances in this domain.\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{route.pdf}\n \\caption{Taxonomy of Deep Reinforcement Learning based Recommender Systems in this survey}\n \\label{fig:overview}\n\\end{figure}", "id": "f16a1a1e-25a7-4e14-aaa7-69bb7d4fc8b0", "level": "section", "origin_cites_number": 5, "parent_id": "4bb82714-5571-4179-9dac-693fb550f058", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, we introduce key concepts related to dynamic recommender systems (RS) and deep reinforcement learning (DRL), and motivate the introduction of DRL to dynamic recommender systems.", "id": "a6f31a50-0b05-46c3-ac49-a2a06aea3f6f", "level": "section", "origin_cites_number": 0, "parent_id": "4bb82714-5571-4179-9dac-693fb550f058", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Background" ] ], "subsections": [ "a7e80129-286c-44ec-84f0-48ae3a07002b", "e05ae3e8-b24d-431b-b181-250a9bf1fb50", "467d3d60-cfac-40c2-8a3f-6dbe98149287" ], "title": "Background" }, { "cite_extract_rate": 0, "cites": [], "content": "Recommender systems require coping with \\emph{dynamic} environments by estimating rapidly changing users' preferences and 
proactively recommending items to users.\nLet $\\mathcal{U}$ be a set of users of cardinality $|\\mathcal{U}|$ and $\\mathcal{I}$ be a set of items of cardinality $|\\mathcal{I}|$.\nFor each user $u\\in\\mathcal{U}$, we observe a sequence of user actions $\\mathbb{X}^u = [x_1^u, x_2^u, \\cdots , x_{T_u}^u]$ with item $x_t^u \\in \\mathcal{I}$, i.e., each event in a user sequence comes from the item set. We refer to a user making a decision as an interaction with an item. Suppose the feedback (e.g., ratings or clicking behavior) provided by users is $\\mathcal{F}$; then a dynamic recommender system maintains the corresponding recommendation policy $\\pi^u_t$, which will be updated systematically based on the feedback $f^u_i \\in \\mathcal{F}$ received during the interaction for item $i \\in \\mathcal{I}$ at the timestamp $t$. \nThe marriage of deep learning and reinforcement learning has fueled breakthroughs in recommender systems. \nDRL-based RS consists of a pipeline with three building blocks: environment construction, state representation and recommendation policy learning. Environment construction aims to build an environment based on a set of users' historical behaviors. State representation is provided by the environment and contains user information such as historical behavior and demographic data. Recommendation policy learning is the key component to understand and predict users' future behavior. \nDL-based RS receives user feedback (e.g., ratings or clicks) to reflect users' interests and update the recommender, while DRL-based RS receives the reward provided by the environment to update the policy. The reward provided by the environment is a pre-defined function containing several factors. 
A detailed comparison of DL-based RS and DRL-based RS can be found in~\\Cref{fig:catnew}.", "id": "a7e80129-286c-44ec-84f0-48ae3a07002b", "level": "subsection", "origin_cites_number": 0, "parent_id": "a6f31a50-0b05-46c3-ac49-a2a06aea3f6f", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Background" ], [ "subsection", "Why Deep Reinforcement Learning for Recommendation?" ] ], "subsections": [], "title": "Why Deep Reinforcement Learning for Recommendation?" }, { "cite_extract_rate": 0.14285714285714202, "cites": [ 8105 ], "content": "The typical defining feature of DRL is to use deep learning to approximate reinforcement learning's value function and solve high-dimensional Markov Decision Processes (MDPs)~. Formally, an MDP can be represented as a tuple ($\\mathcal{S},\\mathcal{A},\\mathcal{P},\\mathcal{R},\\gamma$). The agent chooses an action $a_t\\in\\mathcal{A}$ according to the policy $\\pi_t(s_t)$ at state $s_t\\in\\mathcal{S}$. The environment receives the action, produces a reward $r_{t+1}\\in\\mathcal{R}$, and transitions into the next state $s_{t+1}$ according to the transition probability $P(s_{t+1}|s_t,a_t)\\in\\mathcal{P}$. The transition probability $\\mathcal{P}$ is unknown beforehand in DRL.\nSuch a process continues until the agent reaches the terminal state or exceeds a pre-defined maximum time step. 
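As a minimal runnable sketch of this interaction loop (the tabular transition/reward structures, function names, and parameters here are illustrative assumptions, not from any system discussed in this survey):

```python
import random

def rollout(P, R, policy, s0, max_steps=100, seed=0):
    """Simulate one episode of a finite MDP.

    P[s][a] is a dict {next_state: probability}, R[s][a] is the reward for
    taking action a in state s, and policy(s) returns an action.
    Returns the list of rewards collected along the trajectory."""
    rng = random.Random(seed)
    s, rewards = s0, []
    for _ in range(max_steps):
        a = policy(s)
        rewards.append(R[s][a])
        nxt = P[s][a]  # distribution over next states for (s, a)
        s = rng.choices(list(nxt), weights=list(nxt.values()))[0]
    return rewards

def discounted_return(rewards, gamma=0.99):
    """Accumulate sum_k gamma^k * r_{t+k} backwards over one trajectory."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```

For instance, a one-state MDP with constant reward 1 and `gamma=0.5` gives `discounted_return([1.0, 1.0], gamma=0.5) == 1.5`.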
The overall objective is to maximize the expected discounted cumulative reward,\n\\begin{align}\n \\mathbb{E}_\\pi [r_t] = \\mathbb{E}_\\pi \\big[\\sum_{k=0}^\\infty \\gamma^k r_{t+k}\\big] \n\\end{align}\nwhere $\\gamma \\in [0,1]$ is the discount factor that balances the future reward and the immediate reward.\n\\begin{wrapfigure}{r}{0.45\\textwidth}\n \\begin{center}\n \\includegraphics[width=0.45\\textwidth]{tax.pdf}\n \\end{center}\n \\caption{Taxonomy of Deep Reinforcement Learning in Recommender Systems}\n \\label{fig:tax}\n\\end{wrapfigure}\nDeep reinforcement learning can be divided into two categories: \\emph{model-based} and \\emph{model-free} methods (a detailed taxonomy can be found in \\Cref{fig:tax}). The major \\emph{difference} between the two is whether the agent can learn a model of the environment. Model-based methods aim to estimate the transition function and reward function, while model-free methods aim to estimate the value function or policy from experience. Model-based methods gain sample efficiency because the agent can use the learned model to plan ahead, while model-free methods are more extensively developed and tested in the recent literature~. \nDeep reinforcement learning approaches are divided into three streams: \\emph{value-based}, \\emph{policy-based} and \\emph{hybrid} methods. In value-based methods, the agent updates the value function to learn a policy; policy-based methods learn the policy directly; and hybrid methods, called \\emph{actor-critic} methods, combine value-based and policy-based approaches. 
Actor-critic contains two different networks where an actor network uses a policy-based method and the critic uses a value-based method to evaluate the policy learned by the agent.\n\\begin{table}[ht]\n \\centering\n \\caption{Notations}\n \\begin{tabular}{cc|ccc}\n \\hline\n Notations & Name & Notations & Name & Notes \\\\\\hline\n $Q(\\cdot)$ & Q-Value Function & $s$ & State & users' preference \\\\\n $V(\\cdot)$ & Value Function & $a$ & Action & Recommended item(s) \\\\\n $\\gamma$ & Discount Factor & $\\pi$, $\\mu(\\cdot)$ & Policy & Recommendation policy \\\\\n $\\mathbb{E}$ & Expected Value & $r(\\cdot,\\cdot)$ & Reward & users' click behavior \\\\\n $\\theta$ & Model Parameter & $\\alpha$ & constant $\\in [0,1]$ & - \\\\\n $p(\\cdot)$ & Transition Probability & $\\tau$ & Sampled Trajectory & A tuple $(s_t,a_t,s'_t,r_t)$ \\\\\\hline\n \\end{tabular}\n \\label{tab:my_label}\n\\end{table}\nDeep reinforcement learning can also be divided into \\emph{on-policy} and \\emph{off-policy} methods.\nIn off-policy methods,\nthe behavior policy $\\pi_b$ is used for exploration while the target policy $\\pi$ is used for decision-making.\nFor on-policy methods, the behavior policy is the same as the target policy. \n\\textbf{Q-learning}~ is an off-policy value-based learning scheme for finding a greedy target policy:\n\\begin{align}\n \\pi(s) = \\argmax_a Q_\\pi (s,a)\n\\end{align}\nwhere $Q_\\pi (s,a)$ denotes the $Q$-value and is used in a small discrete action space. For a deterministic policy, the $Q$ value can be calculated as follows:\n\\begin{align}\n Q (s_t,a_t) = \\mathbb{E}_{\\tau \\sim \\pi}[r(s_t,a_t) + \\gamma Q(s'_t, a'_t)].\n\\end{align}\nDeep Q learning (DQN)~ uses deep learning to approximate a non-linear Q function parameterized by $\\theta_q$: $Q_{\\theta_q} (s,a)$. 
DQN designs a network $Q_{\\theta_q}$ that is asynchronously updated by minimizing the MSE:\n\\begin{align}\n \\mathcal{L}(\\theta_q) = \\mathbb{E}_{\\tau \\sim \\pi}\\Big[Q_{\\theta_q}(s_t,a_t)-(r(s_t,a_t) + \\gamma Q_{\\theta_q}(s'_{t},a'_{t}))\\Big]^2 \\label{dqnloss}\n\\end{align}\nwhere $\\tau$ is the sampled trajectory containing $(s,a,s',r(s,a))$. In particular, $s'_t$ and $a'_t$ come from the behavior policy $\\pi_b$ while $s,a$ comes from the target policy $\\pi$.\nIt is worth mentioning that the value function $V_\\pi(s)$ represents the expected return. $V_\\pi(s)$ is used to evaluate the goodness of the state while $Q_\\pi(s_t,a_t)$ is used to evaluate the action. $V_\\pi(s)$ can be defined as\n\\begin{align}\n V_\\pi(s) = \\mathbb{E}_{\\tau\\sim \\pi}\\bigg[\\sum_{t=0}^T\\gamma^t r(s_t,a_t)|s_0=s\\bigg].\n\\end{align}\n$V_\\pi(\\cdot)$ and $Q_\\pi(\\cdot)$ have the following relationship:\n\\begin{align}\n V_\\pi(s) = \\mathbb{E}_{a\\sim\\pi}[Q_\\pi(s,a)].\n\\end{align}\nThe value function is updated using the following rule with the Temporal Difference (TD) method,\n\\begin{align}\n V_\\pi(s_t) \\leftarrow V_\\pi(s_t) + \\alpha[\\underbrace{r(s_t,a_t) + \\gamma V_\\pi(s'_t) - V_\\pi(s_t)}_{\\text{TD-error}}]\n\\end{align}\nwhere $\\alpha$ is a step-size constant.\n\\textbf{Policy gradient}~ is an on-policy policy-based method that can handle high-dimensional or continuous actions, which Q-learning cannot easily handle. Policy gradient\naims to find the parameter $\\theta$ of $\\pi_{\\theta}$ to maximize the accumulated reward. To this end, it maximizes the expected return from the start state:\n\\begin{align}\n J(\\pi_\\theta) = \\mathbb{E}_{\\tau \\sim \\pi_{\\theta}}[r(\\tau)] = \\int\\pi_{\\theta}(\\tau) r(\\tau)d\\tau\n\\end{align}\nwhere $\\pi_{\\theta}(\\tau)$ is the probability of the occurrence of $\\tau$. 
Policy gradient learns the parameter $\\theta$ by the gradient $\\nabla_\\theta J(\\pi_\\theta)$ as defined below:\n\\begin{align}\n \\nabla_\\theta J(\\pi_\\theta) = \\int\\nabla_{\\theta}\\pi_{\\theta}(\\tau) r(\\tau)d\\tau &= \\int\\pi_{\\theta}(\\tau) \\nabla_{\\theta}\\log \\pi_{\\theta}(\\tau)r(\\tau)d\\tau \\notag \\\\\n & =\\mathbb{E}_{\\tau \\sim d_{\\pi_\\theta}}[\\sum_{t=1}^Tr(s_t,a_t)\\sum_{t=1}^T\\nabla_{\\theta} \\log \\pi_{\\theta}(s_t,a_t)].\n\\end{align}\nThe above derivations contain the following substitution,\n\\begin{align}\n \\pi_{\\theta}(\\tau) = p(s_1)\\prod_{t=1}^T \\pi_{\\theta}(s_t,a_t)p(s_{t+1}|s_t,a_t)\n\\end{align}\nwhere the terms $p(\\cdot)$ are independent of the policy parameter $\\theta$ and are therefore omitted during the derivation.\nMonte-Carlo sampling has been used by previous policy gradient algorithms (e.g., REINFORCE) for $\\tau\\sim d_{\\pi_{\\theta}}$.\n\\textbf{Actor-critic networks} combine the advantages from Q-learning and policy gradient. They can be either on-policy~ or off-policy~. An actor-critic network consists of two components: i) \\textit{an actor}, which optimizes the policy $\\pi_\\theta$ under the guidance of $\\nabla_\\theta J(\\pi_{\\theta})$; and ii) \\textit{a critic}, which evaluates the learned policy $\\pi_\\theta$ by using $Q_{\\theta_q} (s,a)$. 
The overall gradient is represented as follows:\n\\begin{align}\n \\mathbb{E}_{s \\sim d_{\\pi_\\theta}}[Q_{\\theta_q}(s,a)\\nabla_{\\theta} \\log \\pi_{\\theta}(s,a)].\n\\end{align}\nWhen dealing with off-policy learning with a deterministic policy, the gradient can be computed by the deterministic policy gradient (DPG) as shown below:\n\\begin{align}\n \\mathbb{E}_{s \\sim d_{\\pi_\\theta}}[\\nabla_a Q_{\\theta_q}(s,a)|_{a=\\pi_\\theta(s)}\\nabla_{\\theta} \\pi_{\\theta}(s,a)].\n\\end{align}\nWhile traditional policy gradient calculates the integral over both the state space $\\mathcal{S}$ and the action space $\\mathcal{A}$, DPG only requires computing the integral over the state space $\\mathcal{S}$.\nGiven a state $s\\in\\mathcal{S}$, there will be only one corresponding action $a\\in\\mathcal{A}:\\mu_\\theta(s) = a$ using DPG. Specifically, Deep Deterministic Policy Gradient (DDPG) is an algorithm that combines techniques from DQN and DPG. DDPG contains four different neural networks: Q Network $Q$, policy network, target Q network $Q^{\\mathit{tar}}$, and target policy network. It uses the target network for both the Q Network $Q$ and policy network $\\mu$ to ensure stability during training. 
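The target-network stabilization described here can be sketched in a few lines of NumPy (a minimal illustration; the parameter shapes, variable names, and update constant are assumptions, not the original DDPG implementation):

```python
import numpy as np

def soft_update(online, target, alpha=0.01):
    """Soft (Polyak) update of target-network parameters:
    theta_target <- alpha * theta_online + (1 - alpha) * theta_target."""
    return [alpha * o + (1 - alpha) * t for o, t in zip(online, target)]

# usage: after each gradient step on the online actor/critic,
# blend a small fraction of the online weights into the targets
critic_online = [np.ones((4, 2)), np.ones(2)]
critic_target = [np.zeros((4, 2)), np.zeros(2)]
critic_target = soft_update(critic_online, critic_target, alpha=0.1)
```

With a small `alpha`, the target parameters trail the online ones slowly, which is what keeps the bootstrapped TD targets stable during training.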
Assume $\\theta_q, \\theta_{\\pi}, \\theta_{q'}$ and $\\theta_{\\pi'}$ are parameters of the above networks; then DDPG soft-updates the parameters for the target network~:\n\\begin{align}\n \\text{Actor: }\\theta_{\\pi'} \\leftarrow \\alpha \\theta_\\pi + (1-\\alpha)\\theta_{\\pi'} \\text{ Critic: }\\theta_{q'} \\leftarrow \\alpha \\theta_q + (1-\\alpha)\\theta_{q'} \n\\end{align}", "id": "e05ae3e8-b24d-431b-b181-250a9bf1fb50", "level": "subsection", "origin_cites_number": 7, "parent_id": "a6f31a50-0b05-46c3-ac49-a2a06aea3f6f", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Background" ], [ "subsection", "Preliminaries of Deep Reinforcement Learning" ] ], "subsections": [], "title": "Preliminaries of Deep Reinforcement Learning" }, { "cite_extract_rate": 0, "cites": [], "content": "DRL is normally formulated as a Markov Decision Process (MDP).\nGiven a set of users $\\mathcal{U} = \\{u, u_1, u_2, u_3, ...\\}$, a set of items $\\mathcal{I} = \\{i, i_1, i_2, i_3, ...\\}$, the system first recommends item $i$ to user $u$ and then gets feedback $f_i^u$.\nThe system aims to incorporate the feedback to improve future recommendations and needs to determine an optimal policy $\\pi^*$ regarding which item to recommend to the user to achieve positive feedback.\nThe MDP modelling of the problem treats the user as the environment and the system as the agent. The key components of the MDP in DRL-based RS include the following:\n\\begin{itemize}\n \\item State $\\mathcal{S}$: A state $S_t\\in\\mathcal{S}$ is determined by both users' information and the recent $l$ items in which the user was interested before time $t$.\n \\item Action $\\mathcal{A}$: An action $a_t \\in\\mathcal{A}$ represents users' dynamic preference at time $t$ as predicted by the agent. 
$\\mathcal{A}$ represents the whole set of (potentially millions of) candidate items.\n \\item Transition Probability $\\mathcal{P}$: The transition probability $p(s_{t+1}|s_t,a_t)$ is defined as the probability of state transition from $s_t$ to $s_{t+1}$ when action $a_t$ is executed by the recommendation agent. In a recommender system, the transition probability refers to users' behavior probability. $\\mathcal{P}$ is only used in model-based methods.\n \\item Reward $\\mathcal{R}$: Once the agent chooses a suitable action $a_t$ based on the current state $S_t$ at time $t$, the user will receive the item recommended by the agent. Users' feedback on the recommended item accounts for the reward $r(S_t,a_t)$. The feedback is used to improve the policy $\\pi$ learned by the recommendation agent.\n \\item Discount Factor $\\gamma$: The discount factor $\\gamma \\in [0,1]$ is used to balance between future and immediate rewards---the agent focuses only on the immediate reward when $\\gamma=0$ and takes into account all the (immediate and future) rewards otherwise.\n\\end{itemize}\nThe DRL-based recommendation problem can be defined by using MDP as follows.\n\\textit{Given the historical MDP, i.e., $(\\mathcal{S},\\mathcal{A},\\mathcal{P}, \\mathcal{R},\\gamma)$, the goal is to find a set of recommendation polices ($\\{\\pi\\}: \\mathcal{S}\\to \\mathcal{A}$) that maximizes the cumulative reward during interaction with users.}\n\\begin{dfn*}\nGiven an environment that contains all items $\\mathcal{I}$, when user $u\\in\\mathcal{U}$ interacts with the system, an initial state $s$ is sampled from the environment which contains a list of candidate items and users' historical data. The DRL agent needs to work out a recommendation policy $\\pi$ based on the state $s$ and produces the corresponding recommended item list $a$.\nThe user will provide feedback on the list which is normally represented as click or not click. 
The DRL agent will then utilize the feedback to improve the recommendation policy and move to the next interaction episode.\n\\end{dfn*}\n\\begin{figure}[ht]\n \\centering\n \\begin{subfigure}[b]{0.68\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{dlrs.pdf}\n \\caption{Deep learning based recommender systems}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.65\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{drlrs.pdf}\n \\caption{Deep reinforcement learning based recommender systems}\n \\end{subfigure}\n \\caption{Difference between deep learning based RS and DRL-based RS. Deep learning based RSs may only update the recommendation policy during the training stage. They often require re-training, which is computationally inefficient, when users' interests change significantly. DRL-based RS will update the recommendation policy over time as new rewards are received.}\n \\label{fig:catnew}\n\\end{figure}", "id": "467d3d60-cfac-40c2-8a3f-6dbe98149287", "level": "subsection", "origin_cites_number": 0, "parent_id": "a6f31a50-0b05-46c3-ac49-a2a06aea3f6f", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Background" ], [ "subsection", "DRL meets RS: Problem Formulation" ] ], "subsections": [], "title": "DRL meets RS: Problem Formulation" }, { "cite_extract_rate": 0, "cites": [], "content": "DRL-based RS has some unique challenges such as state construction, reward estimation and environment simulation. 
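As a toy illustration of what such a simulated recommendation environment can look like (the hidden-preference click model, the class name, and all parameters are hypothetical, not any published simulator):

```python
import random

class ToyRecEnv:
    """Toy recommendation MDP: the state is the last l clicked items, and the
    reward is 1.0 when the recommended item matches the user's hidden
    preferred item, standing in for a click."""
    def __init__(self, n_items=10, l=3, seed=0):
        self.rng = random.Random(seed)
        self.n_items, self.l = n_items, l
        self.preferred = self.rng.randrange(n_items)  # hidden user interest
        self.state = [-1] * l                         # -1 marks "no history yet"

    def step(self, action):
        reward = 1.0 if action == self.preferred else 0.0
        if reward > 0:                                # clicked items enter the state
            self.state = self.state[1:] + [action]
        return list(self.state), reward

# a random recommendation policy interacting with the environment
env = ToyRecEnv(seed=42)
policy_rng = random.Random(7)
feedback = [env.step(policy_rng.randrange(env.n_items))[1] for _ in range(20)]
```

A learning agent would replace the random policy and use `feedback` as the reward signal to update its recommendation policy between episodes.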
We categorize the existing work of DRL-based recommendation into model-based and model-free methods (the taxonomy is shown in~\\Cref{fig:tax}).", "id": "696bfdf7-6191-445e-82a8-ae4df2378853", "level": "section", "origin_cites_number": 0, "parent_id": "4bb82714-5571-4179-9dac-693fb550f058", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Deep Reinforcement Learning in Recommender Systems" ] ], "subsections": [ "4ebfecfc-b15f-4ce1-abac-7b121665be4a", "74b47d11-d3b5-407c-b802-84dc050162e4", "c30b91c8-48bc-4d92-ba66-c418e39b9327" ], "title": "Deep Reinforcement Learning in Recommender Systems" }, { "cite_extract_rate": 0.42857142857142805, "cites": [ 5189, 5180, 5195 ], "content": "Model-based methods assume an expected reward or action available for the next step to help the agent update the policy.\n\\begin{table}[ht]\n\\caption{List of publications in model-based DRL-based RS}\n\\begin{tabular}{ccc}\n\\hline\nMethod & Work \\\\\\hline\nValue-based & \\\\\nPolicy-based & \\\\\nHybrid & \n\\\\\\hline\n\\end{tabular}\n\\end{table}", "id": "4ebfecfc-b15f-4ce1-abac-7b121665be4a", "level": "subsection", "origin_cites_number": 7, "parent_id": "696bfdf7-6191-445e-82a8-ae4df2378853", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Deep Reinforcement Learning in Recommender Systems" ], [ "subsection", "Model-based Deep Reinforcement Learning based Methods" ] ], "subsections": [ "8ceb9826-a858-407a-9b78-7a83d0ea8871" ], "title": "Model-based Deep Reinforcement Learning based Methods" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 7217 ], "content": "IRecGAN~ is a model-based method that adopts generative adversarial training to improve the robustness of policy learning. 
It can reduce the cost of interaction for RS by using offline data instead of the simulated environment.\nIRecGAN employs a generative adversarial network~ to generate user data based on the offline dataset.\nIt trains a recommendation agent using a policy gradient-based DRL method called REINFORCE. The agent aims to learn a policy based on the following gradient,\n\\begin{align}\n \\mathbb{E}_{\\tau\\sim\\{g,data\\}}\\big[\\sum_{t=0}^T \\sum_{t'=t}^T\\gamma^{t'-t} q_D(\\tau_{0:t}^n)r_t \\nabla_{\\theta_a}\\log \\pi_{\\theta_a}(c_t|s_t)\\big],q_D(\\tau_{0:t}^n)= \\frac{1}{N}\\sum_{n=1}^N D(\\tau_{0:T}^n),\\tau_{0:T}^n\\in MC^{\\mathcal{U}}(N)\n\\end{align}\nwhere the $\\mathit{MC}^{\\mathcal{U}}(N)$ represents the sampled $N$ sequences from the interaction between $\\mathcal{U}$ and the agent using the Monte-Carlo tree search algorithm, $D$ is the discriminator, $T$ is the length of $\\tau$, $g$ represents the offline data, and \\textit{data} represents the generated data. \n propose NRSS for personalized music recommendation. NRSS uses wireless sensing data to learn users' current preferences. NRSS considers three different types of feedback: score, option, and wireless sensing data. Because multiple factors are considered as the reward, NRSS designs a reward model which consists of users' preference reward $r_p$ and a novel transition reward $r_{\\textit{trans}}$ which are parameterized by $\\theta_{r_p}$ and $\\theta_{r_{\\textit{trans}}}$. 
The goal for NRSS is to find the optimal parameters $\\theta_{r_p}$ and $\\theta_{r_{\\textit{trans}}}$ by using the Monte-Carlo tree search, thus improving recommendation performance.\nHowever, wireless sensing feedback lacks generalization ability as it is only available for certain tasks or scenarios, making it hard to determine dynamic user interest.", "id": "8ceb9826-a858-407a-9b78-7a83d0ea8871", "level": "paragraph", "origin_cites_number": 3, "parent_id": "4ebfecfc-b15f-4ce1-abac-7b121665be4a", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Deep Reinforcement Learning in Recommender Systems" ], [ "subsection", "Model-based Deep Reinforcement Learning based Methods" ], [ "paragraph", "Policy-based methods" ] ], "subsections": [ "bb6b42ee-6de2-432f-ab2a-fc4b096f3199", "68abbde2-1fcc-48e8-a5f8-eccafeb14c6f", "326848cf-b30c-48e3-8c0b-494e313de16b" ], "title": "Policy-based methods" }, { "cite_extract_rate": 0.5, "cites": [ 5195, 5180 ], "content": "Prior to Q-learning, value iteration is a more traditional value-based reinforcement learning algorithm that focuses on the iteration of the value function. Gradient Value Iteration (GVI)~ is proposed to improve the traditional value iteration algorithm by utilizing the transition probability and a multi-agent setting to predict chronological author collaborations. It introduces a new parameter named `status' to reflect the amount of knowledge that the agent needs to learn from this state. The policy is updated only when the distance between the new status and the old status is lower than a pre-defined threshold. However, value iteration requires the transition probability, which is hard to obtain in most cases. Hence, Q-learning and its variants are widely used in DRL-based RS. Cascading DQN (CDQN) with a generative user model~ is proposed to deal with the environment with unknown reward and environment dynamics. 
The generative user model adopts GANs to generate a user model based on an offline dataset. Different from previous work, it will generate the reward function for each user to explain the users' behavior. The user model can be written as,\n\\begin{align}\n \\argmax_{\\phi\\in\\triangle^{k-1}} \\mathbb{E}_{\\phi}[r(s_t,a_t)]-R(\\phi)/\\eta\n\\end{align}\nwhere $\\triangle^{k-1}$ is the probability simplex, $R(\\phi)$ is the regularization term for exploration and $\\eta$ is a constant.\nPseudo Dyna-Q (PDQ)~ points out that Monte-Carlo tree search may lead to an extremely large action space and an unbounded importance weight of training samples. Hence, a world model is proposed to reduce the instability of convergence and high computation cost for interacting with users by imitating the offline dataset. With the world model, the agent will interact with the learned world model instead of the environment to improve the sample efficiency and convergence stability. The world model learning process introduced in PDQ can be described as finding the parameter $\\theta_M$,\n\\begin{align}\n \\argmin_{\\theta_M}\\mathbb{E}_{\\xi\\in P_{\\xi}^\\pi}[\\sum_{t=0}^{T-1}\\gamma^t\\prod_{j=0}^t\\frac{\\pi(s_j,a_j)}{\\pi_b(s_j,a_j)}\\Delta_t(\\theta_M)]\n\\end{align}\nwhere $\\xi$ is generated by the logged policy $\\pi_b$, $\\prod_{j=0}^t\\frac{\\pi(s_j,a_j)}{\\pi_b(s_j,a_j)}$ is the ratio used for importance sampling and $\\Delta$ is the difference between the reward in the world model and real reward. \nFurthermore, GoalRec~ designs a disentangled universal value function to be integrated with the world model to help the agent deal with different recommendation tasks. The universal value function is defined as\n\\begin{align}\n V_{\\pi} (s)= \\mathbb{E}[\\sum_{t=0}^\\infty r(s_t,a_t)\\prod_{k=0}^t\\gamma(s_k)|s_0=s].\n\\end{align}\nMoreover, GoalRec introduces a new variable goal $g\\in G$ used to represent users' future trajectory and measurement $m\\in M$. 
$m$ is an indicator that reflects users' response to the given future trajectory based on historical behaviors. Based on that, the optimal action is selected as\n\\begin{align}\n a^* = \\argmax_a U(M(s,a),g)\n\\end{align}\nwith a customized linear function $U(\\cdot)$.\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{GoalRec_wb.pdf}\n \\caption{Left is the general structure of model-free methods. Right is the structure for GoalRec which is a model-based method. A sample trajectory is used to demonstrate the difference between them~. }\n \\label{fig:worldmodel}\n\\end{figure}", "id": "bb6b42ee-6de2-432f-ab2a-fc4b096f3199", "level": "paragraph", "origin_cites_number": 4, "parent_id": "8ceb9826-a858-407a-9b78-7a83d0ea8871", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Deep Reinforcement Learning in Recommender Systems" ], [ "subsection", "Model-based Deep Reinforcement Learning based Methods" ], [ "paragraph", "Policy-based methods" ], [ "paragraph", "Value-based methods" ] ], "subsections": [], "title": "Value-based methods" }, { "cite_extract_rate": 1, "cites": [ 5189 ], "content": "Hybrid methods can be recognized as a midpoint between value-based and policy gradient-based methods. DeepChain~ uses the multi-agent setting to relieve the sub-optimality problem. The sub-optimality problem is caused by the \\textit{one for all} setting that optimizes one policy for all users. Hence, DeepChain designs a multi-agent setting that adopts several agents to learn consecutive scenarios and jointly optimizes multiple recommendation policies. The main training algorithm used is DDPG. 
To this end, users' actions can be formulated in a model-based form as follows:
\begin{align}
    \sum_{m,d}[p_m^s (s_t,a_t)\gamma Q_{\theta}(s'_{t},\pi_m(s'_{t})) + p_m^c(s_t,a_t)(r_t+\gamma Q_{\theta}(s'_{t},\pi_d(s'_{t}))) + p_m^l (s_t,a_t)r_t]1_m
\end{align}
where $m$ indexes the actor networks, $c,l,s$ denote the three different scenarios, the indicator $1_m$ controls which of the two actors is activated and $(m,d)\in\{(1,2),(2,1)\}$.
Model-based methods aim to learn a model or representation of the whole environment so that the agent can plan ahead and achieve better sample efficiency. The drawback of such methods is that the ground-truth representation of the environment is unavailable in recommendation scenarios as the environment dynamically changes, leading to a biased representation. Moreover, model-based methods use the transition probability function $\mathcal{P}$ to estimate the optimal policy. As mentioned, the transition probability function is normally equivalent to users' behavior probability, which is hard to determine in a recommender system. Hence, existing works~ approximate $\mathcal{P}$ using a neural network or embed it into a world model. One work designs a probability network to estimate $\mathcal{P}$, while another uses a GAN to generate user behavior, with $\mathcal{P}$ embedded in the latent space.
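As a toy illustration of the first strategy, approximating $\mathcal{P}$ with a model trained on logged transitions, consider the following sketch (all names and shapes here are hypothetical, not from any cited system):

```python
import numpy as np

# Hypothetical sketch: a softmax model over a finite set of next states,
# trained by cross-entropy on logged (state, action, next_state) tuples.

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def transition_probs(W, s, a):
    """Estimated P(s' | s, a) over a finite set of next states."""
    return softmax(W @ np.concatenate([s, a]))

def sgd_step(W, s, a, next_idx, lr=0.1):
    """One cross-entropy gradient step toward the observed next state."""
    x = np.concatenate([s, a])
    p = transition_probs(W, s, a)
    grad = np.outer(p, x)          # d(loss)/dW for all rows
    grad[next_idx] -= x            # subtract x on the observed class row
    return W - lr * grad
```

Repeating `sgd_step` over the logged data drives the predicted distribution toward the empirically observed next states, which is the essence of fitting $\mathcal{P}$ from an offline dataset.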
Different from them,~ relies on the world model to predict users' next behavior and feeds the prediction into the policy learning process.
The challenges that keep model-based DRL from being widely used in RS can be summarized as follows:
\begin{itemize}
\item $\mathcal{P}$ is hard to determine in real-world recommender systems. 
\item If approximation is used to estimate $\mathcal{P}$, the overall model complexity increases substantially, since a large amount of user behavior data is needed to approximate two different functions: $\mathcal{P}$ and the recommendation policy $\pi$. 
\item World model-based methods require periodic re-training to ensure the model reflects user interests in time, which increases the computation cost. 
\end{itemize}
\begin{figure}
    \centering
    \begin{subfigure}{0.48\linewidth}
    \includegraphics[width=\linewidth]{DQN_wb.pdf}
    \caption{DQN}
    \end{subfigure}
    \begin{subfigure}{0.48\linewidth}
    \includegraphics[width=\linewidth]{DDPG_wb.pdf}
    \caption{DDPG}
    \end{subfigure}
    \caption{The typical structure of DQN and DDPG}
    \label{fig:ddpg_fig}
\end{figure}
Compared with model-based methods, model-free methods are relatively well-studied.
Different from model-based methods, $\\mathcal{P}$ is unknown and not required in model-free methods. Model-based methods enable the agent to learn from previous experiences. In this subsection, we categorize model-free based DRL in RS into three parts: value-based, policy-based and hybrid methods. \n\\begin{table}[ht]\n\\caption{List of reviewed publications in Model-free DRL-based RS}\n\\begin{tabular}{ccc}\n\\hline\nTasks & Note & Work \\\\\\hline\n\\multirow{4}{*}{Value-based} & Vanilla DQN and its extensions & \\\\ \\\n& DQN with state/action space optimization & \\\\\n& DQN with graph/image input & \\\\\n& DQN for joint learning & \\\\\\hline\n\\multirow{3}{*}{Policy-based} & Vanilla REINFORCE & \\\\\n& REINFORCE uses graph structure/input & \\\\\n& Non-REINFORCE based & \\\\ \\hline\n\\multirow{2}{*}{Hybrid} & Vanilla DDPG & \\\\\n& with Knowledge Graph & \\\\\n\\hline\n\\end{tabular}\n\\end{table}", "id": "74b47d11-d3b5-407c-b802-84dc050162e4", "level": "subsection", "origin_cites_number": 44, "parent_id": "696bfdf7-6191-445e-82a8-ae4df2378853", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Deep Reinforcement Learning in Recommender Systems" ], [ "subsection", "Model-free deep reinforcement learning based methods" ] ], "subsections": [ "6c84bc7f-b9cc-44a6-b8d6-5c59f102ecfe" ], "title": "Model-free deep reinforcement learning based methods" }, { "cite_extract_rate": 0.41176470588235203, "cites": [ 5181, 5191, 5170, 6155, 5185, 5199, 1408 ], "content": "As mentioned, Deep Q-learning and its variants are typical value-based DRL methods widely used in DRL-based RS. DRN~ is the first work utilizing Deep Q-Networks (DQN) in RS. It adopts Double DQN (DDQN)~ to build a user profile and designs an activeness score to reflect how frequently a user returns after one recommendation plus users' action (click or not) as the reward. 
DRN provides a new approach to integrating DRL into RS when dealing with a dynamic environment. The key objective function is
\begin{align}
    \mathbb{E}[r_{t+1} + \gamma Q_{\theta_t'}(s_{t+1}, \argmax_{a'} Q_{\theta_t}(s_t,a'))]
\end{align}
where $a'$ is the action that gives the maximum future reward according to $\theta_t$, and $\theta_t$ and $\theta_t'$ are the parameters of the two different DQNs. 
One study points out that negative feedback also affects recommendation performance, which DRN does not consider. Moreover, positive feedback is sparse due to the large number of candidate items in RS, and using positive feedback alone leads to convergence problems. Hence, DEERS is proposed to consider both positive and negative feedback simultaneously using DQN. Gated Recurrent Units (GRUs) are employed to capture users' preferences for both a positive state $s^+$ and a negative state $s^-$, and the final objective function is:
\begin{align}
    \mathbb{E}[r_{t+1} + \gamma \max_{a_{t+1}}Q_{\theta_q}(s^+_{t+1},s^-_{t+1},a_{t+1})|s^+_t,s^-_t,a_t].
\end{align}
Another work introduces attention mechanisms into the DQN to leverage social influence among users. To be specific, a social impact representation $U_v$ is introduced into the state representation. Matrix factorization is adopted to determine similarity among users and hence represent the social influence. Social attention is introduced to distill the final state representation.
In addition, a few studies focus on user profiling to improve recommendation performance~. One claims that previous feedback contains useful information even when the user does not like the recommended items; existing studies that focus only on the final feedback ignore the information carried by earlier steps. Hence, the user-specific DQN (UQDN) is proposed to consider multi-step feedback from users. It employs matrix factorization to generate user-specific latent state spaces.
The newly defined objective function with the user-specific latent state space can be represented as
\begin{align}
    \mathbb{E}[r_{t+1} + \gamma \max_{a_{t+1}}\overline{Q}_{\theta_q}(s_{t+1},a_{t+1}) + \overline{\textbf{b}}_u - Q_{\theta_q} (s_{t},a_{t})]
\end{align}
where $\overline{\textbf{b}}_u$ is the learned user latent representation.
Another study points out that most works do not consider users' long-term engagement in the state representation as they focus on the immediate reward. FeedRec is proposed, which combines both instant and delayed feedback in the model to represent the long-term reward and optimizes long-term engagement using DQN. To be specific, a time-LSTM is employed to track users' hierarchical behavior over time and represent the delayed feedback, covering three different operations: $h_{\mathit{skip}},h_{\mathit{choose}},h_{\mathit{order}}$. The state is the concatenation of those operations and the user's latent representation. Differently, deep user profile perturbation (DUPP) focuses on the user privacy issue in recommender systems: it uses DQN to add perturbation to the user profile during the recommendation process. Specifically, DUPP adds a perturbation vector to users' clicked items as well as to the state space, which contains users' previous behavior.
Distinct from previous studies, which focus on optimizing user profiles or state spaces, some studies aim to optimize the action space formed by interactions with items. In basket recommendation, the user is suggested multiple items as a bundle, which is called a recommendation slate.
This leads to combinatorially large action spaces, making the problem intractable for DQN-based recommendation models.
SlateQ~ is proposed to decompose the slate Q-value into long-term values of individual items, represented as
\begin{align}
    Q_{\theta_q}(s_t,a_t) = \sum_{i \in a_t}p(i|s_t,a_t)\overline{Q}_{\theta_q}(s_t,i)
\end{align}
where $\overline{Q}_{\theta_q}(s,i)$ is the decomposed Q-value for item $i$. The decomposed Q-value is updated by the following rule, which is similar to traditional DQN,
\begin{align}
    \overline{Q}_{\theta_q}(s_t,i) \leftarrow \alpha \bigg(r_t +\gamma \sum_{j\in a_{t+1}}p(j|s_{t+1},a_{t+1}) \overline{Q}_{\theta_q}(s_{t+1},j)\bigg) + (1-\alpha)\overline{Q}_{\theta_q}(s_t,i).
\end{align}
Different from other model-free methods, SlateQ assumes that the transition probability $p(i|s_t,a_t)$ is known.
Vanilla DQN methods may not have sufficient capacity to handle complex data such as images and graphs. One work firstly models users' click behavior as an embedding matrix in the latent space to include the skip behaviors of sequence patterns for sequential recommendation. Based on that, DRCGR is proposed, which adopts CNNs and GANs within DQN to help the agent better understand high-dimensional data, e.g., a matrix. Two different convolution kernels are used to capture users' positive feedback. In the meantime, DRCGR uses GANs to learn a negative feedback representation to improve robustness. 
Another typical data format is the graph, which is widely used in RS, including knowledge graphs.
GCQN~ adopts Graph Convolutional Networks (GCNs)~ within DQN and constructs the state and action spaces as graph-aware representations. Differently, GCQN introduces an attention aggregator, $\sum_{u\in \mathcal{N}(i)}\alpha_{iu}e_u$, which demonstrates better performance than the mean-aggregator and pooling-aggregator.
For item $i$, the graph-aware representation can be represented as
\begin{align}
    \sigma\bigg(W_{fc}\Big[e_i\oplus\sum_{u\in \mathcal{N}(i)}\alpha_{iu}e_u\Big] + b_{fc}\bigg)
\end{align}
where $W_{fc},b_{fc}$ are the parameters of the fully-connected layer, $e_u$ is the embedding of user $u$ and $\mathcal{N}(i)$ is the set of one-hop neighbours of item $i$ in graph $G(i)$. KGQR~ uses a similar strategy, transforming the information into a knowledge graph which is fed into a GCN to generate the state representation. Notably, KGQR presents a different state representation generation method. For a given node $i$, the neighbourhood representation with a $k$-hop neighborhood aggregator can be represented as
\begin{align}
    e_i^k = \sigma\bigg(W_{k}\frac{1}{|\mathcal{N}(i)|}\sum_{t\in\mathcal{N}(i)} e_{t}^{k-1} + B_k e_i^{k-1}\bigg)
\end{align}
where $\mathcal{N}(i)$ is the set of neighboring nodes and $W_k, B_k$ are the parameters of the aggregator. These neighbourhood representations are fed into a GRU to generate the state representation. Another application domain for graph data is job recommendation, which requires jointly considering multiple factors such as salary, job description and job location. SRDQN~ constructs a probability graph to represent a candidate's skill set and employs a multi-task DQN structure to process these different factors concurrently. 
There are some studies targeting recommendation and advertising simultaneously in e-commerce environments~. One mentions that when deploying RS on real-world platforms such as e-commerce, the expectation is to improve the profit of the system.
A new metric, Gross Merchandise Volume (GMV), is proposed to measure the profitability of the RS and provide a new view on evaluating RS in advertising. Different from GMV, another work separates recommendation and advertising into two different tasks and proposes the Rec/Ads Mixed display (RAM) framework.
RAM designs two agents, a recommendation agent and an advertising agent, each employing a CDQN to conduct the corresponding task. Another study finds that advertising and recommendation may harm each other and formulates a rec/ads trade-off. The proposed solution, DEARS, employs two RNNs to capture user preferences toward recommendations and ads separately. Based on that, a DQN takes the two outputs as input to construct the state and output the advertisement.
Policy-based DRL can be divided into two parts: Constrained Policy Optimization (CPO)~ and policy gradient.
One work uses CPO to identify the contradiction between text feedback and historical preferences, providing a solution for applying DRL when users' feedback is entirely different from previous feedback in RS. 
Policy gradient-based methods are the other stream of policy-based DRL methods for RS. These methods aim to optimize the policy $\pi$ directly instead of estimating the Q-value as DQN does.
A well-known and widely used policy gradient method in RS is REINFORCE, which uses the following update rule for policy $\pi_{\theta_\pi}$,
\begin{align}
    \theta_\pi \leftarrow \theta_\pi + \alpha \mathbb{E}_{\tau\sim d_{\pi_{\theta_\pi}}}\bigg[\sum_{t=1}^Tr(s_t^i,a_t^i)\sum_{t=1}^T\nabla_{\theta_\pi} \log \pi_{\theta_\pi}(s_t^i,a_t^i)\bigg]
\end{align}
where $i$ indexes trajectories sampled from $\pi_{\theta_\pi}(a_t|s_t)$. Policy Gradient for Contextual Recommendation (PGCR)~ adopts REINFORCE and considers contextual information. PGCR assumes that the policy follows a multinoulli distribution, in which case the transition probability can be estimated easily by sampling from previously seen contexts.
Another work incorporates CNNs and attention mechanisms into REINFORCE for explainable recommendation. Specifically, it designs a coupled agent structure where one agent generates the explanation and the other makes recommendations based on the generated explanation.
A further line of work increases the scalability of REINFORCE so that it can deal with the extremely large action spaces of recommendation scenarios. To be specific, it introduces a policy correction gradient estimator into REINFORCE to reduce the variance of each gradient via importance sampling. The new update rule becomes
\begin{align}
    \theta_\pi \leftarrow \theta_\pi + \alpha \sum_{\tau\sim\beta}\bigg[\sum_{t=1}^T\frac{\pi_{\theta_\pi}(s_t,a_t)}{\pi_\beta(s_t,a_t)}r(s_t,a_t)\sum_{t=1}^T\nabla_{\theta_\pi} \log \pi_{\theta_\pi}(s_t,a_t)\bigg]
\end{align}
where $\pi_\beta$ is the behavior policy trained on state-action pairs without the long-term reward and $\pi_{\theta_\pi}$ is trained on the long-term reward only.
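A minimal per-step simplification of this importance-weighted update can be sketched as follows, assuming purely for illustration a state-independent softmax policy over item logits (all names and shapes are hypothetical):

```python
import numpy as np

# Per-step simplification of importance-weighted REINFORCE: each logged
# step is reweighted by pi_theta(a) / pi_beta(a) before accumulating the
# log-probability gradient.

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def corrected_gradient(theta, trajectory, behavior_probs):
    """Sum_t (pi_theta(a_t)/pi_beta(a_t)) * r_t * grad log pi_theta(a_t)."""
    grad = np.zeros_like(theta)
    for (a, r), pb in zip(trajectory, behavior_probs):
        pi = softmax(theta)
        w = pi[a] / pb               # importance weight pi_theta / pi_beta
        glog = -pi                   # gradient of log-softmax ...
        glog[a] += 1.0               # ... equals e_a - pi
        grad += w * r * glog
    return grad
```

A gradient ascent step `theta += alpha * corrected_gradient(...)` then corresponds to one application of the corrected update rule.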
It is worth mentioning that the vanilla REINFORCE algorithm is on-policy; importance sampling makes REINFORCE behave like an off-policy method with the following gradient,
\begin{align}
    \mathbb{E}_{\tau \sim d_{\pi_\theta}}\bigg[\prod_{t'=1}^t\frac{\pi_{\theta}(s_t,a_t)}{\pi_{\theta_s}(s_{t'},a_{t'})}\sum_{t'=t}^Tr(s_t,a_t)\sum_{t=1}^T\nabla_{\theta} \log \pi_{\theta}(s_t,a_t)\bigg]
\end{align}
where $\theta_s$ is the parameter of the sampling policy.
Another study also finds that REINFORCE suffers from high-variance gradients, and Pairwise Policy Gradient (PPG) is proposed. Different from policy correction, PPG uses Monte Carlo sampling to draw two different actions $a,b$ and compares their gradients to update $\theta$,
\begin{align}
    \mathbb{E}_{\tau\sim d_{\pi_{\theta_\pi}}}\bigg(\sum_a\sum_b(r(s,a)-r(s,b))\sum_{t=1}^T(\nabla_{\theta_\pi}\log \pi_{\theta_\pi}(s_t,a_t)-\nabla_{\theta_\pi}\log \pi_{\theta_\pi}(s_t,b_t))\bigg).
\end{align}
A follow-up extends the policy correction gradient estimator to a two-stage setting with stages $p(s_t,a^p)$ and $q(s_t,a|a^p)$, so the policy can be written as
\begin{align}
    \sum_{a^p}p(s_t,a^p)q(s_t,a|a^p).
\end{align}
In addition, weight capping and self-normalized importance sampling are used to further reduce the variance.
Moreover, large state and action spaces cause sample inefficiency, as REINFORCE relies on the currently sampled trajectories $\tau$. An auxiliary loss has been found to help improve sample efficiency~. Specifically, a linear projection is applied to the state $s_t$; the output is combined with the action $a_t$ to calculate the auxiliary loss, which is appended to the final overall objective function for optimization.
Another prototype of vanilla policy gradient in DRL-based RS is the policy network.
One work designs a policy network to extract features and represent the relevant feedback that can help the agent make a decision. Similar to DQN, it uses a neural network to approximate the Q-value and the policy directly, without theoretical analysis.
Others extend the policy network by introducing spatio-temporal feature fusion to help the agent understand complex features: both the current and the future number of vacant taxis on a route are considered to recommend routes for taxis. Multi-modal data has also been introduced as new features for vision-language recommendation, using historical data to train REINFORCE; ResNet and attention are used to encode vision and text information, respectively, and two rewards with a customized ratio $\lambda$ balance the vision and text information. 
Knowledge Graphs (KGs) are widely used in RS to enrich side information, provide explainability and improve recommendation performance. Similar to DQN, vanilla REINFORCE cannot properly handle graph-like data. Knowledge-guided Reinforcement Learning (KERL)~ integrates knowledge graphs into the REINFORCE algorithm. To be specific, KERL adopts TransE~ to transfer the knowledge graph into a graph embedding and utilizes a multilayer perceptron (MLP) to predict future knowledge of user preferences. The state representation can be written as
\begin{align}
    h_t \oplus \mathit{TransE}(\mathcal{G}) \oplus \mathit{MLP}(\mathit{TransE}(\mathcal{G}))
\end{align}
where $h_t$ is the hidden representation from the GRU for sequential behavior and $\mathcal{G}$ is the knowledge graph.
Different from KERL, Policy-Guided Path Reasoning (PGPR)~ formulates the whole environment as a knowledge graph. The agent is trained via REINFORCE to find a policy that reaches good items in the KG conditioned on the starting user.
PGPR uses the tuple $(u,e_t,h_t)$ to represent the state instead of a graph embedding, where $e_t$ is the entity the agent has reached at step $t$ for user $u$ and $h_t$ is the action taken before $t$.
The action in PGPR is defined as the prediction over all outgoing edges of $e_t$ based on $h_t$. 
A knowledge graph policy network (KGPolicy)~ puts the KG into the policy network and adopts REINFORCE to optimize it. In addition, KGPolicy uses negative sampling instead of stochastic sampling to overcome the false negative issue---sampled items behave differently during training and inference. Similar to GCQN, attention is employed to establish the representation of a node's neighbors. 
Due to the on-policy nature of REINFORCE, it is difficult to apply to large-scale RS, as convergence speed becomes a key issue. To relieve this, TPGR~ designs a tree-structured policy gradient method to handle the large discrete action space hierarchically. TPGR uses balanced hierarchical clustering to construct a clustering tree. Specifically, it splits the large-scale data into several levels and maintains multiple policy networks for each level to conduct the recommendation; the results are integrated at the final stage.
As mentioned, policy gradient can be further extended to deterministic policy gradient (DPG)~. Deterministic Policy Gradient with Full Backup Estimation (DPG-FBE)~ is proposed to complete a sub-task of recommendation.
DPG-FBE considers a search session MDP (SSMDP) that contains a limited number of samples, where stochastic policy gradient methods like REINFORCE cannot work well.
The most common model-free hybrid method is the actor-critic algorithm, where the critic network uses a DQN and the actor uses the policy gradient. The common algorithm used to train actor-critic models is DDPG, with the following objective function,
\begin{align}
    \mathbb{E}[r_t+\gamma Q_{\theta_q'}(s_{t+1},\mu_{\theta_\pi'}(s_{t+1})) - Q_{\theta_q}(s_t,a_t)]
\end{align}
where $\theta_q,\theta_q'$ are the Q-network parameters at times $t$ and $t+1$, while $\theta_\pi'$ is the parameter of the deterministic policy at time $t+1$.
LIRD~ uses the vanilla actor-critic framework to conduct list-wise recommendations. To demonstrate its effectiveness, a pre-trained user simulator is used for evaluation, where the transition probability is approximated using the cosine similarity of a given state-action pair $(s_t,a_t)$.
LIRD is further extended to page-wise recommendation by DeepPage~. Similar to other previous work, a GRU is employed to process the sequential pattern.
Moreover, similar to DRCGR, DeepPage formulates the state as a page; CNNs are employed to capture features, which are fed to the critic network. The final state representation is the concatenation of the sequential pattern and the page features. Additionally, a few studies focus on different scenarios such as top-aware recommendation~, treatment recommendation~ and allocating impressions~. One introduces a supervised learning module (SLC) as an indicator of the difference between the current policy and historical preferences. SLC conducts the ranking process to ensure the recommendation policy is not affected by positional bias---items appearing on top receive more clicks. Similarly, another work integrates the supervised learning paradigm into DRL in a different way: an expert action $\hat{a}_t$ is provided when the critic evaluates the policy, and the update rule differs slightly from normal DQN,
\begin{align}
    \theta_q \leftarrow \theta_q + \alpha \sum_{t}[ Q_{\theta_q}(s_t,\hat{a}_t)-r_t-\gamma Q_{\theta_{q'}}(s_t,\mu_{\theta_{\pi'}}(s_t))]\nabla_{\theta_q}Q_{\theta_q}(s_t,a_t).
\end{align}
However, such a method is not universal, as the acquisition of expert actions is difficult and depends on the application domain.
Similar to policy gradient and DQN, knowledge graphs (KGs) are also used in actor-critic-based methods. KGRL~ incorporates the substantial information of knowledge graphs to help the critic better evaluate the generated policy: a knowledge graph is embedded into the critic network. Different from previous studies which use the KG as the environment or state representation, KGRL uses the KG as a component of the critic, which can guide the actor to find a better recommendation policy by measuring the proximity to the optimal path.
Specifically, a graph convolutional network is used to weight the graph, and Dijkstra's algorithm is employed to find the optimal path for finally identifying the corresponding Q-value. ADAC~ claims that human demonstrations can improve path searching; it also searches for the optimal path in the KG but further adopts adversarial imitation learning and uses expert paths to facilitate the search process. MA-RDPG~ extends the standard actor-critic algorithm to deal with multiple scenarios by utilizing a multi-actor reinforcement learning setting. Specifically, two actor networks are initialized while only one critic network makes the final decision; the two actors can communicate with each other to share information and approximate the global state. Another application finds that multiple factors affect the selection of an electric charging station and uses a similar idea to recommend electric vehicle charging stations by considering current supply, future supply, and future demand. A later study points out that the communication mechanism in MA-RDPG can harm the actors, since they deal with independent modules with no intersection; it therefore extends MA-RDPG into a multi-agent setting containing multiple pairs of actors and critics and removes the communication mechanism to ensure independence. 
Differently, the `soft' actor-critic (SAC)~ has been used, which introduces a maximum entropy term $\mathcal{H}(\pi(s_t,\phi_t))$ into actor-critic to improve exploration and stability with the stochastic policy $\pi(s_t,\phi_t)$. 
Similar to the multi-agent idea, a hierarchical setting can help the agent learn multiple goals by setting up multiple actors and critics. In comparison, hierarchical RL uses multiple actor-critic networks for the same task: it splits a recommendation task into two sub-tasks, discovering long-term behavior and capturing short-term behavior.
The final recommendation policy is the combination of the optimal policies for the two sub-tasks. Similarly, the hierarchical setting has been used for integrated recommendation over data from different sources; the objective is to work out the sub-policies for each source hierarchically and then form the final recommendation policy.
In RS, model-free methods are generally more flexible than model-based methods as they do not require knowing the transition probability. We summarize the advantages and disadvantages of the three kinds of methods described under the model-free category. DQN is the first DRL method used in RS and is suitable for small discrete action spaces. The problems with DQN in RS are: 
\begin{itemize}
    \item RS normally contains large and high-dimensional action spaces.
    \item The reward function is hard to determine, which leads to inaccurate value function approximation.
\end{itemize} 
Specifically, the high-dimensional action space in the context of recommender systems is recognized as a major drawback of DQN~; the reason lies in the large number of candidate items. Hence, DQN, although one of the most popular schemes, is not the best choice for RS in many situations. Moreover, some unique factors need to be considered when designing the reward function for RS, such as social influence.
Such factors introduce extra parameters into the Q-network and hinder convergence.
Policy gradient does not require estimating a value function from the reward; instead, it optimizes the policy directly. However, policy gradient is designed for continuous action spaces and, more importantly, introduces high variance into the gradient. Actor-critic algorithms combine the advantages of DQN and policy gradient. Nonetheless, actor-critic maps the large discrete action space into a small continuous action space to keep it differentiable, which may cause information loss. Actor-critic uses DDPG and thus inherits disadvantages from DQN and DPG, including difficulty in determining the reward function and poor exploration ability.
\label{sec:component}
There are a few studies that use DRL in RS for goals other than improving recommendation performance or proposing new application domains. We split the literature based on the following components: environment, state representation, and reward function.
Existing studies usually focus on optimizing one single component of the DRL setting (as illustrated in~\Cref{fig:overview}).
\begin{table}[h]
    \centering
    \caption{List of publications reviewed in this section}
    \begin{tabular}{c|c}
    \hline
    Component & Work \\\hline
    Environment & \\
    State & \\
    Reward & \\
    \hline
    \end{tabular}
    \label{tab:component}
\end{table}
\\
~\\
Many environments are available for evaluating deep reinforcement learning. Two popular ones are OpenAI Gym-based environments and MuJoCo\footnote{http://mujoco.org/}.
Unfortunately, there is no standardized simulation platform or benchmark specific to reinforcement learning based recommender systems. Existing work on DRL in RS is usually evaluated on offline datasets or via deployment in real applications. The drawbacks of evaluating on offline datasets include:
\begin{itemize}
    \item Different studies use different environment construction methods, which leads to unfair comparisons.
For instance, some studies use the KG as the environment while others assume the environment is gym-like or design a simulator for specific tasks.\n \\item With offline datasets, users' dynamic interests and environment dynamics are hard to maintain. Deploying the method into a real application is difficult for academic research as it takes time and costs money. Hence, a standardized simulation environment is a desirable solution.\n\\end{itemize}\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{cofounder.pdf}\n \\caption{Left is the traditional MDP transition; Right is the POMDP which considers the environmental confounders such as social influence~.}\n \\label{fig:cofounder}\n\\end{figure}\nSeveral studies provide standardized gym\\footnote{https://gym.openai.com/}-based simulation platforms for DRL-based RS research in different scenarios. RecSim~ is a configurable platform that supports sequential interaction between the system and users. RecSim contains three different tasks: interest evolution, interest exploration and long-term satisfaction. RecoGym~ provides an environment for recommendation and advertising. In addition, RecoGym also provides simulation of online experiments such as A/B tests. However, RecSim and RecoGym are designed for bandit behavior, which means users' interests do not change over time. \nVirtualTB~ is proposed to relieve such problems. VirtualTB employs imitation learning to learn a user model to interact with the agent. GANs are employed to generate users' interests. Similar to VirtualTB, Recsimu~ uses a GAN to tackle the complex item distribution.\nIn addition, PyRecGym~ accommodates standard benchmark datasets into a gym-based environment. MARS-Gym~ provides a benchmark framework for marketplace recommendation.~ suggests that existing simulation environments are biased because of biased logged data. Two common biases are discussed: popularity bias and positivity bias. 
To mitigate these biases, SOFA introduces an Intermediate Bias Mitigation Step.\nOne work discusses environment reconstruction by considering confounders.~ claims that users' interests may be affected by social networks, which may introduce extra bias to the state and affect the decision-making process. A multi-agent setting is introduced to treat the environment as an agent, which can partially relieve the hidden confounder effect. Specifically, a deconfounded environment reconstruction method DEMER is proposed. Different from previously mentioned methods, DEMER assumes the environment is partially observed and models the whole recommendation task as a Partially Observed MDP (POMDP). Different from an MDP, a POMDP contains one additional component, the observation $o\\in\\mathcal{O}$, and the action $a_t$ is derived based on the observation $o_t$ instead of the state $s_t$ by $a_t = \\pi_a(o_t)$. DEMER assumes there is a confounder policy $\\pi_h$ for observation $o_h$, which is composed of $a_t$ and $o_t$: $a_h = \\pi_h(a_t,o_t)$. Moreover, another observation $o_b$ is introduced to observe the transition as well. $\\pi_b$ is the corresponding policy and $a_b = \\pi_b(o_b) = \\pi_b(o_t,a_t,a_h)$. 
DEMER uses generative adversarial imitation learning (GAIL) to imitate the policies $A$ and $B$.\nGiven trajectories $\\{o_t,o_h,o_b\\}$ for the different policies $A$ and $B$, the objective function is defined as\n\\begin{align}\n & (\\pi_a,\\pi_b,\\pi_h) = \\argmin_{(\\pi_a,\\pi_b,\\pi_h)}\\mathbb{E}_{s\\sim \\tau }(L(s,a_t,a_b)) \\notag \\\\\n & \\text{where }L(s,a_t,a_b) = \\mathbb{E}_{(\\pi_a,\\pi_b,\\pi_h)}[\\log D(s,a_t,a_b)]-\\lambda \\sum_{\\pi\\in\\{\\pi_a,\\pi_b,\\pi_h\\}}H(\\pi)\n\\end{align}\nwhere $L(\\cdot)$ is the loss function, $D(\\cdot)$ is a discriminator and $H(\\pi)$ is the policy entropy term introduced in GAIL.\n\\\\", "id": "3de2362c-6535-43b9-860b-ebc2cbce5821", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "c30b91c8-48bc-4d92-ba66-c418e39b9327", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Deep Reinforcement Learning in Recommender Systems" ], [ "subsection", "Component Optimization in Deep Reinforcement Learning based RS" ], [ "subsubsection", "Environment Simulation and Reconstruction" ] ], "subsections": [], "title": "Environment Simulation and Reconstruction" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 5192 ], "content": "~\\\\\nState representation is another component in DRL-based RS which exists in both model-based and model-free methods. find that the state representation in model-free methods affects recommendation performance.\nExisting studies usually directly use the embedding as the state representation. 
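As a concrete illustration, such a direct embedding-based state can be as simple as mean-pooling the embeddings of the user's most recent interactions. The following is a minimal sketch; the function name, window size, and list-based embedding format are our own illustrative assumptions rather than any surveyed method:

```python
def state_from_history(item_embeddings, history_ids, window=5):
    """Mean-pool the embedding vectors of the user's most recent
    interactions into a fixed-size state vector (a common baseline)."""
    recent = history_ids[-window:]
    dim = len(item_embeddings[0])
    if not recent:  # cold start: fall back to a zero state
        return [0.0] * dim
    vecs = [item_embeddings[i] for i in recent]
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]
```

More elaborate encoders (attention, RNNs) replace the pooling step while keeping the same interface.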
 propose a supervised learning method to generate a better state representation by utilizing an attention mechanism and a pooling operation as shown in~\\Cref{fig:staterep}.\nSuch a representation method requires training a representation network alongside the main policy network, which increases the model complexity.\n\\begin{figure}[!h]\n \\centering\n \\includegraphics[width=0.5\\linewidth]{staterep_wb.pdf}\n \\caption{State representation used in works~. $h_t$ is the output of an attention layer that takes the representation of users' history at time $t$ as input and $g(\\cdot)$ is the pooling operation. }\n \\label{fig:staterep}\n\\end{figure}\n\\\\", "id": "45229725-ce80-46d4-8644-8011bc19bde5", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "c30b91c8-48bc-4d92-ba66-c418e39b9327", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Deep Reinforcement Learning in Recommender Systems" ], [ "subsection", "Component Optimization in Deep Reinforcement Learning based RS" ], [ "subsubsection", "State Representation" ] ], "subsections": [], "title": "State Representation" }, { "cite_extract_rate": 0, "cites": [], "content": "~\\\\\nThe reward function is crucial for methods involving DQN.\nA robust reward function can significantly improve training efficiency and performance. find that the DQN may not receive the correct reward value when entering the absorbing state. That is, when the absorbing state is reached, the agent receives zero reward, which harms policy learning. The reason behind this is that, when designing the environment, zero reward is implicitly assigned to the absorbing state as it is hard to determine the reward value in such a state. propose a robust DQN method, which can stabilize the reward value when facing the absorbing state. 
The new reward target, which improves robustness, is defined as follows:\n\\begin{align}\n r = \\begin{cases} \n r_t & \\text{if } s_{t+1} \\text{ is an absorbing state} \\\\\n r_t + \\gamma Q_{\\theta'}(s_{t+1},a_{t+1})& \\text{otherwise}.\n \\end{cases}\n\\end{align}\nThe major difference is that $r_t$ is assigned to the absorbing state to ensure the agent can continue learning.\nOne remaining problem in current DRL-based RS is reward sparsity, i.e., the large state and action spaces make the reward sparsity problem more serious. One possible solution would be a better-designed reward obtained via reward shaping~.", "id": "0c8d5ce0-24e2-4a9b-aa04-14e44c18baf4", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "c30b91c8-48bc-4d92-ba66-c418e39b9327", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Deep Reinforcement Learning in Recommender Systems" ], [ "subsection", "Component Optimization in Deep Reinforcement Learning based RS" ], [ "subsubsection", "Robustness of Reward Functions" ] ], "subsections": [], "title": "Robustness of Reward Functions" }, { "cite_extract_rate": 0, "cites": [], "content": "While existing studies have established a solid foundation for DRL-based RS research, this section\noutlines several promising emerging research directions.", "id": "3dd345fb-ce99-4727-b174-a3f36f9f711a", "level": "section", "origin_cites_number": 0, "parent_id": "4bb82714-5571-4179-9dac-693fb550f058", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Emerging Topics" ] ], "subsections": [ "fd29fd11-361a-4846-b9fe-151e3edf8271", "ca4b40c6-413e-4583-9e93-7c0668b3a545", "04addbb4-b221-4c4b-aaeb-b0d4e6b0d4fb", "19a7db65-832f-4d59-9f47-c445aa549fa1" ], "title": "Emerging Topics" }, { "cite_extract_rate": 0.30769230769230704, 
"cites": [ 8884, 5185, 3588, 3616 ], "content": "Recommender systems are monolithic systems comprising \ntasks such as searching, ranking, recommendation, advertising, and personalization, as well as diverse stakeholders such as users and items. Most existing methods are based on a single agent. \nMulti-Agent Reinforcement Learning (MARL) is a subfield of reinforcement learning and it is capable of learning multiple policies and strategies. \nWhile a single-agent reinforcement learning framework can only handle a single task, many studies consider the multi-task situation in RS and employ multi-agent DRL (MADRL) or hierarchical DRL (HDRL).\nHDRL~ is proposed to handle complex tasks by splitting such tasks into several small components and asking the agent to learn sub-policies. HDRL is a single-agent reinforcement learning framework in which the agent contains a meta-controller and several controllers. The meta-controller splits the task, and the controllers learn the value and reward functions for designated tasks to get a series of sub-policies. There are a few studies already utilizing HDRL in RS.\n target integrated recommendation to capture user preferences on both heterogeneous items and recommendation channels. Specifically, the meta-controller is used for item recommendation, and controllers aim to find the personalized channel according to user channel-level preferences. uses HDRL for course recommendation in MOOCs, which contains two different tasks: profile revision and recommendation. The meta-controller aims to make course recommendations by using the revised profile pruned by the controllers.\nDifferent from HDRL, MADRL~ introduces multiple agents to handle the sub-tasks. uses the MADRL for twitter mention recommendation where three agents are initialized. The three agents need to generate different representations for the following tasks: query text, historical text from authors and historical text from candidate users. 
Once the representations are finalized, the model conducts the recommendation based on the concatenation of representations. and provide two different views of the communication mechanism in MADRL and demonstrate that agents could work collaboratively or individually. designs a MADRL framework for two tasks where two agents are designed to conduct advertising and recommendation respectively. uses MADRL for collaborative recommendation where each agent is responsible for a single user. MADRL is adopted to help the recommender consider both collaboration and potential competition between users. designs a charging recommender system for intelligent electric vehicles by using decentralized agents to handle sub-tasks and a centralized critic to make the final decision.\nHierarchical multi-agent RL (HMARL)~ shows that MARL and HRL can be combined.\nRecently, introduces HMADRL into the continuous action space, which suggests a direction for RS. uses HMARL for multi-goal recommendation where the meta-controller considers users' long-term preferences and controllers focus on short-term click behavior. 
While the meta-controller and controllers in HDRL deal with sub-tasks that belong to a single task, HMARL focuses on multi-task or multi-goal learning where the meta-controller and controllers belong to different agents and deal with different tasks or goals.\nHMADRL would be a suitable solution for future research work in DRL-based RS, where HDRL can be used to split a complex task into several sub-tasks such as users' long-term interests and short-term click behavior, and MADRL can jointly consider multiple factors such as advertising .", "id": "fd29fd11-361a-4846-b9fe-151e3edf8271", "level": "subsection", "origin_cites_number": 13, "parent_id": "3dd345fb-ce99-4727-b174-a3f36f9f711a", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Emerging Topics" ], [ "subsection", "Multi-Agent and Hierarchical Deep Reinforcement Learning-based RS" ] ], "subsections": [], "title": "Multi-Agent and Hierarchical Deep Reinforcement Learning-based RS" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 6160, 3592, 6159, 6161, 5199 ], "content": "As mentioned, the reward function plays a critical role in DRL-based recommender systems. In many existing works, reward functions are manually designed. The common method uses users' click behavior to represent the reward and to reflect users' interests. However, such a setting cannot represent users' long-term goals~ as clicking or not only depicts part of the feedback information from users. It requires significant effort to design a reward function due to the large number of factors that can affect users' decisions, such as social engagement or bad product reviews; missing such factors may adversely affect recommendation performance. It is difficult to include all potential factors in the reward function because not every factor can be represented properly. 
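To make the difficulty concrete, a typical manually designed reward is a weighted sum of a few observable signals. Everything below (the signals, weights, and function name) is a hypothetical sketch of our own, and unobservable factors such as social engagement or reviews read elsewhere have no obvious place in it:

```python
def handcrafted_reward(clicked, dwell_seconds, purchased,
                       w_click=1.0, w_dwell=0.01, w_buy=5.0):
    """A hypothetical hand-crafted reward: a weighted sum of a few
    observable feedback signals. Unobservable factors (social
    influence, external reviews) cannot be encoded this way."""
    return (w_click * float(clicked)
            + w_dwell * dwell_seconds
            + w_buy * float(purchased))
```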
A few works~ show that manually designed reward functions can be omitted by employing inverse reinforcement learning (IRL)~ or generative adversarial imitation learning (GAIL)~. Such inverse DRL-based methods require using expert demonstrations as the ground truth. However, expert demonstrations are often hard to obtain for recommendation scenarios.\nThose two studies conduct experiments in an offline dataset-based simulation environment that can access expert demonstrations. In contrast, use IRL as the main algorithm to train the agent while use both demonstration and reward to train the agent. also employ GAIL to improve recommendation performance. In this work, GAIL is used to learn the reasoning path inside the KG to provide side information to help the agent learn the policy. Although IRL achieves some progress in RS, the lack of demonstrations is a key shortcoming that impedes adoption in RS. One possibility is to use the IRL method in causal reasoning to help improve interpretability~, thus boosting recommendation performance. Alternatively, IRL may be suitable for learning users' long-term and static behavior to support the reward function.", "id": "ca4b40c6-413e-4583-9e93-7c0668b3a545", "level": "subsection", "origin_cites_number": 7, "parent_id": "3dd345fb-ce99-4727-b174-a3f36f9f711a", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Emerging Topics" ], [ "subsection", "Inverse Deep Reinforcement Learning for RS" ] ], "subsections": [], "title": "Inverse Deep Reinforcement Learning for RS" }, { "cite_extract_rate": 0.555555555555555, "cites": [ 1444, 6160, 180, 6162, 5194 ], "content": "Graph data and KG are widely used in RS.\nGraph modeling enables an RS to leverage interactions between users and the recommender for reasoning or improving interpretability. 
According to existing studies about deep learning-based RS~, embedding is a technique used to get the representation for the input data. Graph embedding is a common solution to handle graph-like data. GCN is a type of graph embedding method which is broadly used in RS to process graph data. propose a variant of GCN to learn the embedding for KG. Specifically, they propose knowledge graph convolutional networks (KGCN) to capture the high-order structural proximity among entities in a knowledge graph. \nIn DRL-based RS, graph data are handled similarly---they are transformed into an embedding and fed to the agent. uses a traditional graph embedding method TransE~ to generate the state representation for DRL-based RS. There are several studies that use GCN in DRL for recommendations under different settings. propose a graph convolutional RL (DGN) method which integrates the GCN into the Q-learning framework for general RL problems by replacing the state encoding layer with the GCN layer. extend this method into the deep learning field and apply it to recommender systems. To be specific, multiple GCN layers are employed to process the sub-graphs for a given item $i$. employs KG inside the actor-critic algorithm to help the agent learn the policy. Specifically, the critic network contains a GCN layer to give weight to the graph and conduct searches in the graph to find an optimal path and hence guide the optimization of policy learning. However, such a method is relatively computationally expensive as it requires jointly training the GCN and the actor-critic network. adopts a Graph Attention Network (GAT)~ into the actor-critic network to conduct recommendation. In addition, the GAT is used as an encoder to obtain a state representation.\nA common way of using GCN or its variants in DRL-based RS is as the state encoder. 
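A minimal sketch of such a GCN-based state encoder is shown below, in plain Python for clarity. The single-layer design, symmetric normalization, and mean-pooling are illustrative assumptions of ours rather than the architecture of any specific surveyed model:

```python
import math

def gcn_state_encoder(adj, features, weight):
    """One graph-convolution layer used as a state encoder: add
    self-loops, symmetrically normalize the adjacency, aggregate
    neighbor features, apply a ReLU, then mean-pool the node
    representations into a single state vector for the agent."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
             for i in range(n)]                     # A + I
    d_inv_sqrt = [1.0 / math.sqrt(sum(row)) for row in a_hat]
    norm = [[d_inv_sqrt[i] * a_hat[i][j] * d_inv_sqrt[j]
             for j in range(n)] for i in range(n)]  # D^{-1/2}(A+I)D^{-1/2}

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
                 for j in range(len(b[0]))] for i in range(len(a))]

    h = matmul(matmul(norm, features), weight)      # aggregate + project
    h = [[max(x, 0.0) for x in row] for row in h]   # ReLU
    return [sum(col) / n for col in zip(*h)]        # mean-pool nodes
```

In practice the weight matrix is learned jointly with the policy network, which is the source of the extra training cost discussed above.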
\nA related challenge is that the environment must be able to provide graph-structured input to the GCN.", "id": "04addbb4-b221-4c4b-aaeb-b0d4e6b0d4fb", "level": "subsection", "origin_cites_number": 9, "parent_id": "3dd345fb-ce99-4727-b174-a3f36f9f711a", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Emerging Topics" ], [ "subsection", "Graph Neural Networks for Boosting DRL-based RS" ] ], "subsections": [], "title": "Graph Neural Networks for Boosting DRL-based RS" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 4711, 6163, 6164, 5188 ], "content": "Self-supervised learning (SSL) is a technique in which the model is trained by itself without external label information. SSL-DRL is receiving growing interest in robotics \n~. shows that SSL can be used to learn the policy when doing navigation by providing real-world experience. demonstrates that SSL-DRL can be used to help the agent learn synergies between two similar policies, thus empowering the agent to conduct two different tasks. \nRecent advances in SSL RL show that SSL can also provide interpretability for RL, which is promising for interpretable RS research~. shows that SSL based RL can highlight the task-relevant information to guide the agent's behavior.\nMoreover, shows that SSL can be used to provide negative feedback for DRL-based RS to improve recommendation performance. To be specific, a self-supervised loss function is appended to the normal DRL loss function,\n\\begin{align}\n -\\sum_{i=1}^nY_i\\log\\bigg(\\frac{e^{y_i}}{\\sum_{i'=1}^ne^{y_{i'}}}\\bigg) + L_{\\mathit{DRL}}\n\\end{align}\nwhere $Y_i$ is an indicator function showing whether the user interacted with item $i$ or not. 
$L_{\\mathit{DRL}}$ can vary; if DQN is adopted, the loss in \\Cref{dqnloss} is used.\nSSL has demonstrated promising performance in visual representation learning in recent years, so it could be a viable way to generate the state representation, as a few DRL-based RS studies adopt CNNs to process image-like data and transform it into a state~. Furthermore, as an unsupervised learning approach, SSL would provide a new direction for defining the reward function by learning common patterns between different states, as well as for multi-task learning.", "id": "19a7db65-832f-4d59-9f47-c445aa549fa1", "level": "subsection", "origin_cites_number": 6, "parent_id": "3dd345fb-ce99-4727-b174-a3f36f9f711a", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Emerging Topics" ], [ "subsection", "Self-Supervised DRL-based RS" ] ], "subsections": [], "title": "Self-Supervised DRL-based RS" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, we outline several open questions and challenges that exist in DRL-based RS research. 
We believe these issues could be critical for the future development of DRL-based RS.", "id": "62c3e9c3-0318-4d9a-947c-bf1d76096072", "level": "section", "origin_cites_number": 0, "parent_id": "4bb82714-5571-4179-9dac-693fb550f058", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Open Questions" ] ], "subsections": [ "e24a9f91-4259-41ce-8cfe-67de33facae4", "f3bc9826-577b-48f2-9fd1-1cbd7502868e", "8fb9ed60-6e10-4fb7-a67b-6f49f9ef3798", "bb9fa85c-840e-4ea9-928c-30b0b38ddbc5", "317e1a16-83a6-4e39-b6b1-02817b7555bf", "03cb7efb-1098-4511-ad2a-26dcf319c6b5" ], "title": "Open Questions" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 3586, 8985 ], "content": "Sample inefficiency is a well-known challenge in model-free DRL methods. Model-free DRL requires a significant number of samples as there is no guarantee that the received state is useful. Normally, only after a substantial number of episodes does the agent start learning, when it finally receives a useful state and reward signal. A common solution is the experience replay technique, which only works in off-policy methods. \nExperience replay still suffers from the sample inefficiency problem~ as not every past experience is worth replaying. propose selected experience replay (SER) that only stores valuable experience into the replay buffer and thus improves sample efficiency. While traditional DRL environments only contain several\\footnote{For example, the number of actions in MuJoCo is less than one hundred.} candidate actions, in DRL-based RS the agent must deal with a significantly larger action space, as an RS may contain a vast number of candidate items. Existing DRL-based RS studies based on traditional experience replay methods often demonstrate slow convergence. design a user model to improve the sample efficiency through auxiliary learning. 
Specifically, they apply the auxiliary loss to the state representation; the model distinguishes low-activity users and lets the agent update the recommendation policy more frequently based on high-activity users.\nOn the other hand, model-based methods are more sample efficient. However, they introduce extra complexity as the agent is required to learn the environment model as well as the policy. Due to the extremely large action space and possibly large state space (depending on users' contextual information) in RS, approximating the environment model and policy simultaneously\nbecomes challenging.", "id": "e24a9f91-4259-41ce-8cfe-67de33facae4", "level": "subsection", "origin_cites_number": 3, "parent_id": "62c3e9c3-0318-4d9a-947c-bf1d76096072", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Open Questions" ], [ "subsection", "Sample Efficiency" ] ], "subsections": [], "title": "Sample Efficiency" }, { "cite_extract_rate": 0.388888888888888, "cites": [ 5190, 6154, 5180, 5195, 5177, 5165, 5199 ], "content": "The exploration and exploitation dilemma is a fundamental and challenging problem in reinforcement learning research and has received substantial attention in DRL.\nThis dilemma describes a trade-off between obtaining new knowledge and the need to use that knowledge to improve performance. Many DQN-based methods focus on exploration before the replay buffer is full and exploitation afterward. 
Consequently, an extremely large replay buffer is required so that all recommendation possibilities can be stored.\nDRN employs Dueling Bandit Gradient Descent (DBGD)~ to encourage exploration while ~ introduces a regularization or entropy term into the objective function to do so.~ uses the sheer size of the action space to ensure sufficient exploration.~ uses a separate KG or elaborated graph exploration operation to conduct exploration.~ employs Boltzmann exploration to get the benefit of exploratory data without negatively impacting user experience. \nIn addition, $\\epsilon$-greedy is the most common technique used to encourage exploration ~. Remaining studies rely on a simulator to conduct exploration. However, this may suffer from noise and over-fitting~ because of the gap between simulation and real online applications. For most DRL-based methods such as vanilla DQN, policy gradient, or actor-critic-based methods, $\\epsilon$-greedy would be a good choice for exploration. In addition, injecting noise into the action space would also be helpful for those actor-critic-based methods~. 
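For reference, epsilon-greedy action selection takes only a few lines; this generic sketch (the function name and interface are our own) picks a random item with probability epsilon and the top-scoring item otherwise:

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon recommend a random item (exploration);
    otherwise recommend the item with the highest Q-value (exploitation)."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

In practice epsilon is usually annealed from a large value toward a small one over the course of training.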
For methods involving KGs, $\\epsilon$-greedy may help, but elaborated graph exploration methods may achieve better performance.", "id": "f3bc9826-577b-48f2-9fd1-1cbd7502868e", "level": "subsection", "origin_cites_number": 18, "parent_id": "62c3e9c3-0318-4d9a-947c-bf1d76096072", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Open Questions" ], [ "subsection", "Exploration and Exploitation" ] ], "subsections": [], "title": "Exploration and Exploitation" }, { "cite_extract_rate": 1, "cites": [ 7022, 3199, 8106 ], "content": "Existing work generally trains DRL algorithms in simulation environments or on offline datasets.\nDeploying DRL algorithms into real applications is challenging due to the gap between simulation and real-world applications.\nSimulation environments do not capture domain knowledge or social effects.\nThey cannot cover the domain knowledge and task-specific engineering present in real-world recommendation.\nHow to bridge the gap between simulation and real applications is a challenging topic. Sim2real~ is a transfer learning approach that transfers DRL policies from simulation environments to reality. Sim2real uses domain adaptation techniques to help agents transfer the learned policy. Specifically, it adopts GANs to conduct adaptation by generating different samples.\nRL-CycleGAN~ is a sim2real method for vision-based tasks. It uses CycleGAN~ to conduct pixel-level domain adaptation. Specifically, it maintains cycle consistency during GAN training and encourages the adapted image to retain certain attributes of the input image.\nIn DRL-based RS, sim2real would be a possible solution for generalizing the learned policy from simulation environments to reality. However, sim2real is a new technique still under exploration. It shows adequate capability on simple tasks and requires more effort to handle complex tasks such as recommendation. 
We believe it is a workable solution for generalizing from simulation to reality.", "id": "8fb9ed60-6e10-4fb7-a67b-6f49f9ef3798", "level": "subsection", "origin_cites_number": 3, "parent_id": "62c3e9c3-0318-4d9a-947c-bf1d76096072", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Open Questions" ], [ "subsection", "Generalizing from Simulation to Real-World Recommendation" ] ], "subsections": [], "title": "Generalizing from Simulation to Real-World Recommendation" }, { "cite_extract_rate": 0.5, "cites": [ 6165, 5165, 3891 ], "content": " observe that user behavior data are not experimental but observational, which leads to problems of bias and unfairness.\\\\\nThere are two reasons why bias is so common. First, the inherent characteristic of user behavior data is observational rather than experimental. In other words, data that are fed into recommender systems are subject to selection bias . For instance, users in a video recommendation system tend to watch, rate, and comment on those movies that they are interested in. Second, a distribution discrepancy exists, which means the distributions of users and items in the recommender system are not even. Recommender systems may suffer from ``popularity bias'', where popular items are recommended far more frequently than others. However, the ignored products in the ``long tail'' can be equally critical for businesses as they are the ones less likely to be discovered.\n define unfairness as the system systematically and unfairly discriminating against certain individuals or groups of individuals in favor of others. \nA large number of studies explore dynamic recommendation systems by utilizing the agent mechanism in reinforcement learning (RL), treating information seeking and decision-making as sequential interactions. How to evaluate a policy efficiently is a big challenge for RL-based recommenders. 
Online A/B tests are not only expensive and time-consuming but also sometimes hurt the user experience. Off-policy evaluation is an alternative strategy in which historical user behavior data are used to evaluate the policy. However, user behavior data are biased, as mentioned before, which causes a gap between the policy of the RL-based RS and the optimal policy.\nTo eliminate the effects of bias and unfairness, use the inverse of the probability of the historical policy to weight the policy gradients. introduce a debiasing step that corrects the biases present in the logged data before it is used to simulate user behavior. propose to build a customer simulator that is designed to simulate the environment and handle the selection bias of logged data.", "id": "bb9fa85c-840e-4ea9-928c-30b0b38ddbc5", "level": "subsection", "origin_cites_number": 6, "parent_id": "62c3e9c3-0318-4d9a-947c-bf1d76096072", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Open Questions" ], [ "subsection", "Bias (Unfairness)" ] ], "subsections": [], "title": "Bias (Unfairness)" }, { "cite_extract_rate": 0.4, "cites": [ 5190, 8881 ], "content": "Although deep learning-based models can generally improve the performance of recommender systems, they are not easily interpretable. As a result, it becomes an important task to make recommendation results explainable, along with providing high-quality recommendations.\nHigh explainability in recommender systems not only helps end-users understand the items recommended but also enables system designers to check the internal mechanisms of recommender systems. review different information sources and various types of models that can facilitate explainable recommendation. 
Attention mechanisms and knowledge graph techniques currently play an important role in realizing explainability in RS.\nAttention models have great advantages in both enhancing predictive performance and providing greater explainability~. introduce a reinforcement learning framework incorporated with an attention model for explainable recommendation. Firstly, it achieves model-agnosticism by separating the recommendation model from the explanation generator. Secondly, the agents that are instantiated by attention-based neural networks can generate sentence-level explanations. \nKnowledge graphs contain rich information about users and items, which can help to generate intuitive and more tailored explanations for the recommendation system . Recent work has achieved greater explainability by using reinforcement learning and knowledge graph reasoning. The algorithm from learns to find a path that navigates from users to items of interest by interacting with the knowledge graph environment. extract imperfect path demonstrations with minimum labeling effort and propose an adversarial actor-critic model for demonstration-guided path-finding. Moreover, it achieves better recommendation accuracy and explainability by reinforcement learning and knowledge graph reasoning.", "id": "317e1a16-83a6-4e39-b6b1-02817b7555bf", "level": "subsection", "origin_cites_number": 5, "parent_id": "62c3e9c3-0318-4d9a-947c-bf1d76096072", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Open Questions" ], [ "subsection", "Explainability" ] ], "subsections": [], "title": "Explainability" }, { "cite_extract_rate": 0.5, "cites": [ 6167, 1192, 6166 ], "content": "Adversarial samples demonstrate that deep learning-based methods are vulnerable. Hence,\nrobustness becomes an open question for both RS and DRL. 
Specifically, adversarial attack and defense in RS have received a lot of attention in recent years~ as security is crucial in RS. Moreover, DRL policies are vulnerable to adversarial perturbations to an agent's observations~. provide an adversarial attack method for perturbing the observations, thus affecting the learned policy. Hence, improving robustness is a common interest for DRL and RS, and it would be a critical problem for DRL-based RS. provide an adversarial attack detection method for DRL-based RS which uses a GRU to encode the action space into a low-dimensional space and designs decoders to detect a potential attack. However, it only considers Fast Gradient Sign Method (FGSM)-based attacks and strategically-timed attacks~. Thus, it lacks the capability to detect other types of attack. Moreover, it only provides the detection method, while the defence remains an open question.\nWe believe zero-shot learning techniques would be a good direction for training a universal adversarial attack detector. Defence is still an open question for DRL-based RS, though recent advances in adversarial defence in DRL may provide some insights~.", "id": "03cb7efb-1098-4511-ad2a-26dcf319c6b5", "level": "subsection", "origin_cites_number": 6, "parent_id": "62c3e9c3-0318-4d9a-947c-bf1d76096072", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Open Questions" ], [ "subsection", "Robustness on Adversarial Samples and Attacks" ] ], "subsections": [], "title": "Robustness on Adversarial Samples and Attacks" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, we provide a few potential future directions of DRL-based RS. 
Benefiting from recent advances in DRL research, we believe these topics can boost the progress of DRL-based RS research.", "id": "db61769c-00fa-49cf-aca1-794fe6209774", "level": "section", "origin_cites_number": 0, "parent_id": "4bb82714-5571-4179-9dac-693fb550f058", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Future Directions" ] ], "subsections": [ "e714f9af-5ef5-4cc0-b0fc-93a677fe49fd", "69b9e3f2-89df-425e-ae0b-359e87390941", "18638c1b-d943-46f7-8255-6db369c76c4b" ], "title": "Future Directions" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 6168, 4573, 6169, 4428, 3610, 4416 ], "content": "Causality is a generic relationship between a cause and an effect. Moreover, inferring causal effects is a fundamental problem in many applications like computational advertising, search engines, and recommender systems~. \nRecently, some researchers have connected reinforcement learning with causal learning to improve performance on sequential decision-making problems.\nBesides, learning agents in reinforcement learning frameworks face a more complicated environment in which a large amount of heterogeneous data is integrated.\nFrom our point of view, causal relationships would be capable of improving recommendation results by introducing the directionality between cause and effect. The users' previous choices have an impact on their subsequent actions. This can be cast as an interventional data-generating process within the dynamics of recommender systems. By viewing a policy in RL as an intervention, we can detect unobserved confounders in RL and choose a policy on the expected reward to better estimate the causal effect . Some studies improve RL models with causal knowledge as side information. Another line of work uses causal inference methods to achieve unbiased reward prediction~. 
\n propose a Causal Inference Q-network which introduces observational inference into DRL by applying extra noise and uncertain interventions to improve resilience. Specifically, in this work, noise and uncertainty are added into the state space during the training stage, and the agent is required to learn a causal inference model by considering the perturbation.\n give the first demonstration that model-free reinforcement learning can be used for causal reasoning. They explore meta-reinforcement learning to solve the problem of causal reasoning. The agents, trained with a recurrent network, are able to make causal inferences from observational data and output counterfactual predictions. \n bridge RL and causality by data-fusion for reinforcement learners. Specifically, online agents combine observational, experimental and counterfactual data to learn about the environment, even if unobserved confounders exist. Similarly, make the model-based RL agents work in a causal way to explore the environment under the Partially-Observable Markov Decision Process (POMDP) setting. They consider interventional data and observational data jointly and interpret model-based reinforcement learning as a causal inference problem. In this way, they bridge the gap between RL and causality by relating common concepts in the two fields. \nRegarding explainability in RL, propose to explain the behavior of agents in reinforcement learning with the help of causal science. The authors encode causal relationships and learn a structural causal model in RL, which is used to generate explanations based on counterfactual analysis. With counterfactual exploration, this work is able to generate two contrastive explanations for `why' and `why not' questions.\nSearching for a Directed Acyclic Graph (DAG) is a central task in causal discovery. Considering that traditional methods rely on local heuristics and predefined score functions, propose to use reinforcement learning to search for a DAG in causal discovery. 
They use observational data as input, RL agents as a search strategy, and output the causal graph generated from an encoder-decoder NN model.", "id": "e714f9af-5ef5-4cc0-b0fc-93a677fe49fd", "level": "subsection", "origin_cites_number": 9, "parent_id": "db61769c-00fa-49cf-aca1-794fe6209774", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Future Directions" ], [ "subsection", "Causal and Counterfactual Inference" ] ], "subsections": [], "title": "Causal and Counterfactual Inference" }, { "cite_extract_rate": 1, "cites": [ 7753, 6170 ], "content": "Recommender systems often need to deal with multiple scenarios, such as joint recommendation and advertising; offline DRL and meta DRL provide a promising direction for handling multiple scenarios at the same time.\nOffline DRL is a new paradigm of DRL that can be combined with existing methods such as self-supervised learning and transfer learning to move toward real-world settings. \nOffline DRL~ (also known as batch DRL) is designed for tasks which contain huge amounts of data.\nGiven a large dataset that contains past interactions, offline DRL uses the dataset for training across many epochs but does not interact with the environment.\nOffline DRL provides a solution that can generalize to new scenarios as it is trained on a large dataset.\nSuch generalization ability is critical to RSs, which may need to deal with multiple scenarios or multiple customers.\nWhile offline DRL could provide a new direction for DRL-based RS, it still faces a few problems regarding handling the distributional shifts between existing datasets and real-world interactions. \nMeta DRL~ is defined as \nmeta learning in the field of DRL. Meta DRL is another approach to help agents to generalize to new tasks or environments. 
Different from offline DRL, meta DRL contains a memory unit which is formed by the recurrent neural network to memorize the common knowledge for different tasks. Different from offline DRL, meta DRL does not require a large amount of data to train.", "id": "69b9e3f2-89df-425e-ae0b-359e87390941", "level": "subsection", "origin_cites_number": 2, "parent_id": "db61769c-00fa-49cf-aca1-794fe6209774", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Future Directions" ], [ "subsection", "Offline DRL and Meta DRL" ] ], "subsections": [], "title": "Offline DRL and Meta DRL" }, { "cite_extract_rate": 0.8, "cites": [ 6171, 2219, 1391, 1389 ], "content": "An actor-critic method uses the traditional policy gradient method, which suffers from the high variance problem due to the gap between behavior policy (i.e., the policy that is being used by an agent for action select) and target policy (i.e., the policy that an agent is trying to learn).\nA method commonly used to \nrelieve\nthe high variance problem is Advantage Actor-critic (A2C). \nDifferent from traditional actor-critic methods, A2C uses an advantage function to replace the Q-function inside the critic network. The advantage function $A(s_t)$ is defined as the expected value of the TD-error. \nThe new objective function for policy gradient can be written as,\n\\begin{align}\n \\mathbb{E}_{\\tau \\sim d_{\\pi_\\theta}}[\\sum_{t=1}^T\\underbrace{(Q(s_t,a_t) - V(s_t))}_{A(s_t)}\\sum_{t=1}^T\\nabla_{\\theta} \\log \\pi_{\\theta}(s_t,a_t)].\n\\end{align}\nHowever, A2C still uses DDPG as the main training algorithm, which may suffer function approximation errors when estimating the Q value.\nTwin-Delayed DDPG (TD3)~ is designed to improve the function approximation problem in DDPG which uses clipped double Q-learning to update the critic. 
The gradient update can be expressed as\n\begin{align}\n \mathbb{E}_{\tau \sim d_{\pi_\theta}}[\sum_{t=1}^T r(s_t,a_t) + \gamma \min(Q_1(s_t,a_t+\epsilon), Q_2(s_t,a_t+\epsilon))\sum_{t=1}^T\nabla_{\theta} \log \pi_{\theta}(s_t,a_t)],\n\end{align}\nwhere $\epsilon\sim\text{clip}(\mathcal{N}(0,\sigma),-c,c)$, $\sigma$ is the standard deviation, and $c$ is a clipping constant. \nTwo further ways to improve actor-critic methods are Trust Region Policy Optimization (TRPO)~ and Proximal Policy Optimization (PPO)~, both of which modify the advantage-weighted objective. TRPO limits the step size of each gradient update to ensure that the policy does not change too much. The core idea is to optimize the surrogate objective\n\begin{align}\n \frac{\pi(a|s)}{\pi_{old}(a|s)}A(s),\n\end{align}\nsubject to the constraint that the KL divergence between the current policy and the old policy is small enough. PPO shares the same goal as TRPO, namely to take the biggest possible improvement step on a policy using the current data. PPO is a simplified version of TRPO that introduces the clip operation,\n\begin{align}\n \min\bigg(\frac{\pi(a|s)}{\pi_{old}(a|s)}A(s),\text{clip}\n \bigg(\frac{\pi(a|s)}{\pi_{old}(a|s)}, 1-\epsilon, 1+\epsilon\bigg)A(s)\bigg).\n\end{align}\nSoft Actor-Critic (SAC)~ is another promising variant of the actor-critic algorithm and is widely used in DRL research. SAC uses an entropy term to encourage the agent to explore, which could be a possible direction for addressing the exploration-exploitation dilemma. 
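As a concrete companion to the clipped surrogate objective discussed above, the standard PPO objective can be sketched in NumPy. This is a minimal illustration, not code from any surveyed system; the function and variable names are assumptions.

```python
import numpy as np

def ppo_clip_objective(new_probs, old_probs, advantages, eps=0.2):
    """PPO clipped surrogate objective (to be maximized).

    The probability ratio r = pi(a|s) / pi_old(a|s) is clipped to
    [1 - eps, 1 + eps], so a single update cannot move the policy
    too far from the old one.
    """
    ratio = new_probs / old_probs
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # The elementwise minimum keeps the pessimistic (lower) bound.
    return float(np.mean(np.minimum(unclipped, clipped)))
```

With a positive advantage the clip caps how much the update can exploit an action; with a negative advantage the minimum keeps the larger penalty, so the objective is a lower bound on the unclipped surrogate.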
Moreover, SAC assigns an equal probability to actions that are equally attractive to the agent to capture those near-optimal policies.\nAn example of related work uses SAC to improve the stability of the training process in RS.", "id": "18638c1b-d943-46f7-8255-6db369c76c4b", "level": "subsection", "origin_cites_number": 5, "parent_id": "db61769c-00fa-49cf-aca1-794fe6209774", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Future Directions" ], [ "subsection", "Further Developments in Actor-Critic Methods" ] ], "subsections": [], "title": "Further Developments in Actor-Critic Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "In this survey, we provide a comprehensive overview the use of deep reinforcement learning in recommender systems. We introduce a classification scheme for existing studies and discuss them by category. We also provide an overview of such existing emerging topics and point out a few promising directions. \nWe hope this survey can provide a systematic understanding of the key concepts in DRL-based RS and valuable insights for future research. \n\\bibliographystyle{ACM-Reference-Format}\n\\bibliography{sample-base}\n\\end{document}\n\\endinput", "id": "0a382833-431c-4392-982a-66ab0eba2961", "level": "section", "origin_cites_number": 0, "parent_id": "4bb82714-5571-4179-9dac-693fb550f058", "prefix_titles": [ [ "title", "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
89
[ 8983, 5194, 8105, 5189, 5180, 5195, 7217, 5190, 3604, 8884, 5165, 6153, 3616, 1389, 5184, 5191, 5177, 5169, 5199, 6154, 5181, 6152, 5172, 5170, 5185, 6155, 1408, 1911, 6156, 6157, 6158, 5192, 5201, 8984, 5204, 3610, 1346, 3588, 6160, 3592, 6159, 6161, 1444, 180, 6162, 4711, 6163, 6164, 5188, 3586, 8985, 7022, 3199, 8106, 6165, 3891, 8881, 6167, 1192, 6166, 6168, 4573, 6169, 4428, 4416, 7753, 6170, 6171, 2219, 1391 ]
0.881914
[ "Hwanjun Song", "Minseok Kim", "Dongmin Park", "Yooju Shin", "Jae-Gil Lee" ]
Learning from Noisy Labels with Deep Neural Networks: A Survey
2020
2020-07-16T09:23:13Z
cs.LG
Deep learning has achieved remarkable success in numerous domains with help from large amounts of big data. However, the quality of data labels is a concern because of the lack of high-quality labels in many real-world scenarios. As noisy labels severely degrade the generalization performance of deep neural networks, learning from noisy labels\,(robust training) is becoming an important task in modern deep learning applications. In this survey, we first describe the problem of learning with label noise from a supervised learning perspective. Next, we provide a comprehensive review of 62 state-of-the-art robust training methods, all of which are categorized into five groups according to their methodological difference, followed by a systematic comparison of six properties used to evaluate their superiority. Subsequently, we perform an in-depth analysis of noise rate estimation and summarize the typically used evaluation methodology, including public noisy datasets and evaluation metrics. Finally, we present several promising research directions that can serve as a guideline for future studies. All the contents will be available at \href{https://github.com/songhwanjun/Awesome-Noisy-Labels}{\color{blue!50!brown} https://github.com/songhwanjun/Awesome-Noisy-Labels}. \looseness=-1
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "4c23e240-6b38-42a9-a3ad-77c4f677661a", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ] ], "subsections": [ "8a473653-5cf6-4f3e-941b-06c5873beaf5", "345d7d1c-66a9-4d03-80b2-cee975c6db53", "60bd8c81-37fb-4934-ab18-bb9a24e80724", "3b9c1099-a2f2-417d-a753-e663d7ce62fa", "8d7ed0bf-5e07-45ed-a147-190ba1a209fb", "0788ae52-30c4-4156-89f9-6be2e443db83", "01d4e82b-d227-4b85-bfa5-6db6af2b743d", "0fd993d8-9dac-43e0-a25d-3f85b6def415" ], "title": "root" }, { "cite_extract_rate": 0.392857142857142, "cites": [ 7, 206, 7210, 4238, 7770, 71, 3630, 8734, 4116, 4115, 7769 ], "content": "\\label{sec:introduction}\n\\IEEEPARstart{W}ith the recent emergence of large-scale datasets, deep neural networks (DNNs) have exhibited impressive performance in numerous machine learning tasks, such as computer vision , information retrieval , and language processing . Their success is dependent on the availability of massive but carefully labeled data, which are expensive and time-consuming to obtain. Some non-expert sources, such as Amazon's Mechanical Turk and the surrounding text of collected data, have been widely used to mitigate the high labeling cost; however, the use of these source often results in unreliable labels . \nIn addition, data labels can be extremely complex even for experienced domain experts ; \nthey can also be adversarially manipulated by a label-flipping attack . Such unreliable labels are called \\emph{noisy labels} because they may be \\emph{corrupted} from ground-truth labels. The ratio of corrupted labels in real-world datasets is reported to range from $8.0\\%$ to $38.5\\%$ . 
\\looseness=-1\nIn the presence of noisy labels, training DNNs is known to be susceptible because the significant number of model parameters, together with the capability of learning any complex function, renders DNNs able to overfit even corrupted labels . Zhang et al. demonstrated that DNNs can easily fit an entire training dataset with any ratio of corrupted labels, which eventually results in poor generalizability on a test dataset. Unfortunately, popular regularization techniques, such as data augmentation , weight decay , dropout , and batch normalization {have been applied extensively, but they do \\emph{not} completely overcome the overfitting issue by themselves.} As shown in Figure \\ref{fig:convergence_analysis}, the gap in test accuracy between models trained on clean and noisy data remains significant even though all of the aforementioned regularization techniques are activated. Additionally, the accuracy drop caused by label noise is considered more harmful than that caused by other noises, such as input noise . Hence, achieving a good generalization capability in the presence of noisy labels is a key challenge.\n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=8.8cm]{figures/introduction/convergence.pdf}\n\\end{center}\n\\vspace*{-0.55cm}\n\\caption{Convergence curves of training and test accuracy when training WideResNet-16-8 using a standard training method on the CIFAR-100 dataset with the symmetric noise of $40\\%$: \\enquote{Noisy w/o. Reg.} and \\enquote{Noisy w. Reg.} are the models trained on noisy data without and with regularization, respectively, and \\enquote{Clean w. Reg.} is the model trained on clean data with regularization.}\n\\label{fig:convergence_analysis}\n\\vspace*{-0.4cm}\n\\end{figure}\nSeveral studies have been conducted to investigate supervised learning under noisy labels. 
Beyond conventional machine learning techniques , deep learning techniques have recently gained significant attention in the machine learning community. In this survey, we present the advances in recent deep learning techniques for overcoming noisy labels. We surveyed {recent studies by recursively tracking relevant bibliographies in papers published at premier research conferences, such as CVPR, ICCV, NeurIPS, ICML, and ICLR. Although we attempted to comprehensively include all recent studies at the time of submission, some of them may not be included because of the quadratic increase in deep learning papers. The studies included were grouped into \\emph{five} categories, as shown in Figure \\ref{fig:categorization} (see Section \\ref{sec:methodology} for details).\n\\begin{figure*}[t!]\n\\begin{center}\n\\includegraphics[width=16.5cm]{figures/introduction/catorization_revise.pdf}\n\\end{center}\n\\vspace*{-0.55cm}\n\\caption{Categorization of recent deep learning methods for overcomming noisy labels.}\n\\label{fig:categorization}\n\\vspace*{-0.4cm}\n\\end{figure*}", "id": "8a473653-5cf6-4f3e-941b-06c5873beaf5", "level": "section", "origin_cites_number": 28, "parent_id": "4c23e240-6b38-42a9-a3ad-77c4f677661a", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Introduction" ] ], "subsections": [ "74284b44-a1b2-476d-b3b5-489a51838419", "e5555e5b-acf2-4c85-9ee1-4bf47dcf2ae1" ], "title": "Introduction" }, { "cite_extract_rate": 0.25, "cites": [ 7771 ], "content": "\\label{sec:related_surveys}\nFr{\\'e}nay and Verleysen discussed the potential negative consequence of learning from noisy labels and provided a comprehensive survey on noise-robust classification methods, focusing on conventional supervised approaches such as na\\\"ive Bayes and support vector machines. Furthermore, their survey included the definitions and sources of label noise as well as the taxonomy of label noise. Zhang et al. 
discussed another aspect of label noise in crowdsourced data annotated by non-experts and provided a thorough review of expectation-maximization (EM) algorithms that were proposed to improve the quality of crowdsourced labels. Meanwhile, Nigam et al. provided a brief introduction to deep learning algorithms that were proposed to manage noisy labels; however, the scope of these algorithms was limited to only two categories, i.e., the loss function and sample selection in Figure \\ref{fig:categorization}. Recently, Han et al. summarized the essential components of robust learning with noisy labels, but their categorization is totally different from ours in philosophy; {we mainly focus on systematic methodological difference, whereas they rather focused on more general views, such as input data, objective functions, and optimization policies. Furthermore, this survey is the first to present a comprehensive methodological comparison of existing robust training approaches (see Tables \\ref{table:all_comparision} and \\ref{table:direction_comparison}}).", "id": "74284b44-a1b2-476d-b3b5-489a51838419", "level": "subsection", "origin_cites_number": 4, "parent_id": "8a473653-5cf6-4f3e-941b-06c5873beaf5", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Introduction" ], [ "subsection", "Related Surveys" ] ], "subsections": [], "title": "Related Surveys" }, { "cite_extract_rate": 0.5714285714285711, "cites": [ 8696, 4118, 313, 4117 ], "content": "\\label{sec:survey_scope}\n{\nRobust training with DNNs becomes critical to guarantee the reliability of machine learning algorithms. In addition to label noise, two types of flawed training data have been actively studied by different communities . \\emph{Adversarial learning} is designed for small, worst-case perturbations of the inputs, so-called adversarial examples, which are maliciously constructed to deceive an already trained model into making errors . 
Meanwhile, \\emph{data imputation} primarily deals with missing inputs in training data, where missing values are estimated from the observed ones . Adversarial learning and data imputation are closely related to robust learning, but handling \\emph{feature} noise is beyond the scope of this survey---i.e., learning from noisy \\emph{labels}. \n}", "id": "e5555e5b-acf2-4c85-9ee1-4bf47dcf2ae1", "level": "subsection", "origin_cites_number": 7, "parent_id": "8a473653-5cf6-4f3e-941b-06c5873beaf5", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Introduction" ], [ "subsection", "Survey Scope" ] ], "subsections": [], "title": "Survey Scope" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:preliminaries}\nIn this section, the problem statement for supervised learning with noisy labels is provided along with the taxonomy of label noise. Managing noisy labels is a long-standing issue; therefore, we review the basic conventional approaches and {theoretical foundations underlying robust deep learning}. Table \\ref{table:notation} summarizes the notation frequently used in this study. 
\\looseness=-1\n{\n\\newcolumntype{L}[1]{>{\\centering\\let\\newline\\\\\\arraybackslash\\hspace{0pt}}m{#1}}\n\\newcolumntype{X}[1]{>{\\let\\newline\\\\\\arraybackslash\\hspace{0pt}}p{#1}}\n\\begin{table}[t!]\n\\caption{Summary of the notation.}\n\\vspace*{-0.3cm}\n\\begin{center}\n\\begin{tabular}{|L{1.7cm} |X{6.2cm}|}\\hline\n\\!\\!\\textbf{Notation} & \\hspace*{2.25cm} \\textbf{Description} \\\\\\hline \\hline \n$\\mathcal{X}$ & the data feature space \\\\\\hline\n$\\mathcal{Y}$, $\\tilde{\\mathcal{Y}}$ & the true and noisy label space \\\\\\hline\n$\\mathcal{D}$, $\\tilde{\\mathcal{D}}$ & the clean and noisy training data \\\\\\hline\n$P_{\\mathcal{D}}$, $P_{\\tilde{\\mathcal{D}}}$ & the joint distributions of clean and noisy data \\\\\\hline\n$\\mathcal{B}_{t}$ & a set of mini-batch examples at time $t$ \\\\\\hline\n$\\Theta_{t}$ & the parameter of a deep neural network at time $t$ \\\\\\hline\n$f(\\,\\cdot\\,;\\Theta_{t})$ & a deep neural network parameterized by $\\Theta_{t}$ \\\\\\hline\n$\\ell$ & a specific loss function \\\\\\hline\n$\\mathcal{R}$ & an empirical risk \\\\\\hline\n$\\mathbb{E}_{\\mathcal{D}}$ & an expectation over $\\mathcal{D}$ \\\\\\hline\n$x$, $x_i$ & a data example of $\\mathcal{X}$ \\\\\\hline\n$y$, $y_i$ & a true label of $\\mathcal{Y}$ \\\\\\hline\n$\\tilde{y}$, $\\tilde{y}_i$ & a noisy label of $\\tilde{\\mathcal{Y}}$ \\\\\\hline\n$\\eta$ & a specific learning rate \\\\\\hline\n$\\tau$ & a true noise rate \\\\\\hline\n$b$ & the number of mini-batch examples in $\\mathcal{B}_{t}$ \\\\\\hline\n$c$ & the number of classes \\\\\\hline\n{T}, $\\hat{\\text{T}}$ & the true and estimated noise transition matrix \\\\\\hline\n\\end{tabular}\n\\end{center}\n\\label{table:notation}\n\\vspace*{-0.55cm}\n\\end{table}\n}\n\\begin{figure*}[t!]\n\\begin{center}\n\\includegraphics[width=15.5cm]{figures/introduction/tree_revise.pdf}\n\\end{center}\n\\vspace*{-0.45cm}\n\\caption{A high level research overview of robust deep learning for noisy labels. 
The research directions that are actively contributed by the machine learning community are categorized into five groups in blue italic.}\n\\label{fig:tree_categorization}\n\\vspace*{-0.4cm}\n\\end{figure*}", "id": "345d7d1c-66a9-4d03-80b2-cee975c6db53", "level": "section", "origin_cites_number": 0, "parent_id": "4c23e240-6b38-42a9-a3ad-77c4f677661a", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Preliminaries" ] ], "subsections": [ "160721f0-f1c5-43ce-955b-8257be13eaa0", "53f5fc74-56fd-44a0-ad89-6075838d51dd", "587ccbe7-447b-402d-9bec-671027007ca9", "f4b49791-795c-49a1-9ee2-043cc9fa43e4", "7dbbb1a5-6e79-42e1-a3b6-1ed894715ef6" ], "title": "Preliminaries" }, { "cite_extract_rate": 0, "cites": [], "content": "\\emph{Classification} is a representative supervised learning task for learning a function that maps an input feature to a label . In this paper, we consider a $c$-class classification problem using a DNN with a softmax output layer. Let $\\mathcal{X} \\subset \\mathbb{R}^{d}$ be the feature space and $\\mathcal{Y} = \\{0,1\\}^{c}$ be the ground-truth label space in a \\emph{one-hot} manner. In a typical classification problem, we are provided with a training dataset $\\mathcal{D}=\\{(x_i, y_i)\\}_{i=1}^{N}$ obtained from an unknown joint distribution $P_{\\mathcal{D}}$ over $\\mathcal{X} \\times \\mathcal{Y}$, where each $(x_i, y_i)$ is \\emph{independent and identically distributed}. 
The goal of the task is to learn the mapping function $f(\\,\\cdot\\,;\\Theta): \\mathcal{X} \\rightarrow [0, 1]^{c}$ of the DNN parameterized by $\\Theta$ such that the parameter $\\Theta$ minimizes the empirical risk $\\mathcal{R}_{\\mathcal{D}}(f)$,\n\\begin{equation}\n\\label{eq:empirical_risk}\n\\!\\!\\mathcal{R}_{\\mathcal{D}}(f) = \\mathbb{E}_{\\mathcal{D}}[\\ell\\big(f(x;\\Theta), y\\big)] = \\frac{1}{|\\mathcal{D}|}\\!\\sum_{(x,y)\\in\\mathcal{D}}\\!\\!\\!\\!\\ell\\big(f(x;\\Theta), y\\big),\n\\end{equation}\nwhere $\\ell$ is a certain loss function.\nAs data labels are corrupted in various real-world scenarios, we aim to train the DNN from noisy labels. Specifically, we are provided with a noisy training dataset $\\tilde{\\mathcal{D}}=\\{(x_i, \\tilde{y}_i)\\}_{i=1}^{N}$ obtained from a noisy joint distribution $P_{\\tilde{\\mathcal{D}}}$ over $\\mathcal{X}\\times\\tilde{\\mathcal{Y}}$, where $\\tilde{y}$ is a \\emph{noisy} label which may not be true. Hence, following the standard training procedure, a mini-batch $\\mathcal{B}_{t}=\\{(x_i, \\tilde{y}_i)\\}_{i=1}^{b}$ comprising $b$ examples is obtained randomly from the noisy training dataset $\\tilde{\\mathcal{D}}$ at time $t$. Subsequently, the DNN parameter $\\Theta_t$ at time $t$ is updated along the descent direction of the empirical risk on mini-batch $\\mathcal{B}_t$,\n\\begin{equation}\n\\label{eq:corrupted_update}\n\\Theta_{t+1} = \\Theta_{t} - \\eta\\nabla\\Big(\\frac{1}{|\\mathcal{B}_{t}|} \\!\\sum_{(x,\\tilde{y}) \\in \\mathcal{B}_{t}} \\!\\!\\!\\!\\ell\\big(f(x;\\Theta_{t}), \\tilde{y}\\big)\\Big),\n\\end{equation}\nwhere $\\eta$ is a specified learning rate. \\looseness=-1\nHere, the risk minimization process is no longer \\emph{noise-tolerant} because the loss is computed with the noisy labels. DNNs can easily memorize corrupted labels and correspondingly degrade their generalization on unseen data . 
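As a minimal illustration of the mini-batch update above, the sketch below performs one SGD step for a linear softmax classifier trained with cross-entropy against possibly corrupted labels. The linear model stands in for a DNN and all names are hypothetical; note that the gradient is taken with respect to the noisy labels, which is exactly why the update is not noise-tolerant.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def noisy_sgd_step(W, X, y_noisy, lr=0.1):
    """One SGD step for a linear softmax classifier, standing in for f(.;Theta).

    X       -- (b, d) mini-batch features
    y_noisy -- (b,) integer labels, possibly corrupted
    The cross-entropy gradient is computed against the noisy labels,
    so corrupted labels pull the parameters in the wrong direction.
    """
    b = X.shape[0]
    probs = softmax(X @ W)                 # (b, c) predicted probabilities
    probs[np.arange(b), y_noisy] -= 1.0    # d(loss)/d(logits)
    grad = X.T @ probs / b                 # (d, c) averaged over the batch
    return W - lr * grad
```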
Hence, mitigating the adverse effects of noisy labels is essential to enable noise-tolerant training for deep learning. \n\\vspace*{-0.1cm}", "id": "160721f0-f1c5-43ce-955b-8257be13eaa0", "level": "subsection", "origin_cites_number": 4, "parent_id": "345d7d1c-66a9-4d03-80b2-cee975c6db53", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Preliminaries" ], [ "subsection", "Supervised Learning with Noisy Labels" ] ], "subsections": [], "title": "Supervised Learning with Noisy Labels" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:taxonomy}\n{This section presents the types of label noise that have been adopted to design robust training algorithms.} Even if data labels are corrupted from ground-truth labels without \\emph{any} prior assumption, in essence, the corruption probability is affected by the dependency between \\emph{data features} or \\emph{class labels}. A detailed analysis of the taxonomy of label noise was provided by Fr{\\'e}nay and Verleysen . {Most existing algorithms dealt with instance-independent noise, but instance-dependent noise has not yet been extensively investigated owing to its complex modeling.}\n\\smallskip", "id": "53f5fc74-56fd-44a0-ad89-6075838d51dd", "level": "subsection", "origin_cites_number": 1, "parent_id": "345d7d1c-66a9-4d03-80b2-cee975c6db53", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Preliminaries" ], [ "subsection", "Taxonomy of Label Noise" ] ], "subsections": [ "5a2e6abe-21d0-4935-ab73-f9bf0caf9ca6", "9e5fdfd7-ba0e-466c-96b0-5d8ad47de6e9" ], "title": "Taxonomy of Label Noise" }, { "cite_extract_rate": 0.5, "cites": [ 3630 ], "content": "}\nA typical approach for modeling label noise assumes that the corruption process is conditionally \\emph{independent} of data features when the true label is given . 
That is, the true label is corrupted by a \\emph{noise transition matrix} $\\text{T} \\in [0, 1]^{c\\times c}$, where $ \\text{T}_{ij} \\coloneqq p(\\tilde{y}=j|y=i)$ is the probability of the true label $i$ being flipped into a corrupted label $j$. \nIn this approach, the noise is called a \\emph{symmetric}\\,(or \\emph{uniform}) noise with a noise rate $\\tau \\in [0,1]$ if $\\forall_{i=j} \\text{T}_{ij} \\!=\\! 1-\\tau \\wedge \\forall_{i \\neq j} \\text{T}_{ij} = \\frac{\\tau}{c-1}$, where a true label is flipped into other labels with equal probability. In contrast to symmetric noise, the noise is called an \\emph{asymmetric}\\,(or \\emph{label-dependent}) noise if $\\forall_{i=j} \\text{T}_{ij} \\!=\\! 1-\\tau \\wedge \\exists_{i \\neq j, i\\neq k, j\\neq k} \\text{T}_{ij} > \\text{T}_{ik} $, where a true label is more likely to be mislabeled into a particular label. For example, a \\enquote{dog} is more likely to be confused with a \\enquote{cat} than with a \\enquote{fish.} In a stricter case when $\\forall_{i=j} \\text{T}_{ij} \\!=\\! 1-\\tau \\wedge \\exists_{i \\neq j} \\text{T}_{ij} = \\tau$, the noise is called a \\emph{pair noise}, where a true label is flipped into only a certain label. \n\\smallskip", "id": "5a2e6abe-21d0-4935-ab73-f9bf0caf9ca6", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "53f5fc74-56fd-44a0-ad89-6075838d51dd", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Preliminaries" ], [ "subsection", "Taxonomy of Label Noise" ], [ "subsubsection", "Instance-independent Label Noise" ] ], "subsections": [], "title": "Instance-independent Label Noise" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nFor more realistic noise modeling, the corruption probability is assumed to be \\emph{dependent} on both the data features and class labels . Accordingly, the corruption probability is defined as $\\rho_{ij}(x)\\!= \\!p(\\tilde{y}\\!=\\!j|y\\!=\\!i, x)$. 
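The instance-independent noise models defined above translate directly into code; below is a minimal sketch of the symmetric and pair transition matrices T, together with sampling noisy labels from them. The helper names are illustrative, not from the survey.

```python
import numpy as np

def symmetric_T(c, tau):
    """Symmetric noise: a true label is kept w.p. 1 - tau and flipped
    to each of the other c - 1 classes w.p. tau / (c - 1)."""
    T = np.full((c, c), tau / (c - 1))
    np.fill_diagonal(T, 1.0 - tau)
    return T

def pair_T(c, tau):
    """Pair noise: class i is flipped only to class (i + 1) mod c w.p. tau."""
    T = np.eye(c) * (1.0 - tau)
    for i in range(c):
        T[i, (i + 1) % c] = tau
    return T

def corrupt_labels(y_true, T, seed=0):
    """Sample noisy labels: row T[y] is p(noisy label | true label y)."""
    rng = np.random.default_rng(seed)
    c = len(T)
    return np.array([rng.choice(c, p=T[y]) for y in y_true])
```

Every row of T sums to one, and setting tau = 0 recovers the identity matrix, i.e., clean labels.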
\nUnlike the aforementioned noises, the data feature of an example $x$ also affects the chance of $x$ being mislabeled. \n\\vspace*{-0.1cm}", "id": "9e5fdfd7-ba0e-466c-96b0-5d8ad47de6e9", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "53f5fc74-56fd-44a0-ad89-6075838d51dd", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Preliminaries" ], [ "subsection", "Taxonomy of Label Noise" ], [ "subsubsection", "Instance-dependent Label Noise" ] ], "subsections": [], "title": "Instance-dependent Label Noise" }, { "cite_extract_rate": 0.344827586206896, "cites": [ 3454, 7771, 4121, 8735, 4120, 4122, 3630, 4119, 4115, 8736 ], "content": "\\label{sec:non_deep_learning}\nFor decades, numerous methods have been proposed to manage noisy labels using conventional machine learning techniques. These methods can be categorized into \\emph{four} groups , as follows:\n\\begin{itemize}[leftmargin=9pt]\n\\item \\textbf{Data Cleaning:} Training data are cleaned by excluding examples whose labels are likely to be corrupted. Bagging and boosting are used to filter out false-labeled examples to remove examples with higher weights because false-labeled examples tend to exhibit much higher weights than true-labeled examples . In addition, various methods, such as $k$-nearest neighbor, outlier detection, and anomaly detection, have been widely exploited to exclude false-labeled examples from noisy training data . Nevertheless, this family of methods suffers from over-cleaning issue that overly removes even the true-labeled examples. \n\\vspace*{0.12cm}\n\\item \\textbf{Surrogate Loss:} Motivated by the noise-tolerance of the 0-1 loss function , many researchers have attempted to resolve its inherent limitations, such as computational hardness and non-convexity that render gradient methods unusable. 
Hence, several convex surrogate loss functions, which approximate the 0-1 loss function, have been proposed to train a specified classifier under the binary classification setting . However, these loss functions cannot support the multi-class classification task.\n\vspace*{0.12cm}\n\item \textbf{Probabilistic Method:} Under the assumption that the distribution of features is helpful in solving the problem of learning from noisy labels , the confidence of each label is estimated by clustering and then used for a weighted training scheme . This confidence is also used to convert hard labels into soft labels to reflect the uncertainty of labels . In addition to these clustering approaches, several Bayesian methods have been proposed for graphical models such that they can benefit from using any type of prior information in the learning process . However, this family of methods may exacerbate the overfitting issue owing to the increased number of model parameters. \n\vspace*{0.12cm}\n\item \textbf{Model-based Method:} As conventional models, such as the SVM and decision tree, are not robust to noisy labels, significant effort has been expended to improve their robustness. To develop a robust SVM model, examples misclassified during learning are penalized in the objective . In addition, several decision tree models have been extended using new split criteria to solve the overfitting issue when the training data are not fully reliable . However, it is infeasible to apply their design principles to deep learning.\n\end{itemize}\n\smallskip\n{\nMeanwhile, deep learning is more susceptible to label noise than traditional machine learning owing to its high expressive power, as proven by many researchers . \nThere has been significant effort to understand why noisy labels negatively affect the performance of DNNs . This theoretical understanding has led to algorithmic designs that achieve higher robustness than non-deep learning methods. 
A detailed analysis of theoretical understanding for robust deep learning was provided by Han et al. . \n}\n\\begin{comment}\n\\vspace*{-0.2cm}", "id": "587ccbe7-447b-402d-9bec-671027007ca9", "level": "subsection", "origin_cites_number": 29, "parent_id": "345d7d1c-66a9-4d03-80b2-cee975c6db53", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Preliminaries" ], [ "subsection", "Non-deep Learning Approaches" ] ], "subsections": [], "title": "Non-deep Learning Approaches" }, { "cite_extract_rate": 0.647058823529411, "cites": [ 7771, 3340, 4126, 3630, 4253, 8735, 4120, 4124, 4125, 4115, 4123 ], "content": "\\label{sec:theoretical}\n{\nDeep learning is more susceptible to label noises than traditional machine learning owing to its high expressive power, as proven by many researchers . \nThere has been significant effort to understand why noisy labels negatively affect the performance of DNNs . This theoretical understanding has led to the architectural or algorithmic design which is more robust than the non-deep learning methods. A detailed analysis of theoretical understanding for robust deep learning was provided by Han et al. ; \\emph{three} high-level perspectives have been widely leveraged to design robust approaches, as follows:\n\\vspace*{0.05cm}\n\\newcommand{\\norm}[1]{\\left\\lVert#1\\right\\rVert}\n\\begin{itemize}[leftmargin=9pt]\n\\item \\textbf{Memorization Effect}: The \\emph{memorization nature} of DNNs was explored theoretically in recent literature . Assuming clusterable data where the clusters are located on the unit Euclidean ball, Li et al. 
proved the distance from the initial weight ${W}_{0}$ to the weight ${W}_t$ after $t$ iterations,\n\\begin{equation}\n\\label{eq:memorization_effect_foundation}\n\\norm{{W}_t - {W}_0}_{F} \\lesssim \\big( \\sqrt{K} + (K^{2}\\epsilon_{0}/\\norm{{C}}^{2})t \\big),\n\\end{equation}\nwhere $\\norm{\\cdot}_{F}$ is the Frobenius norm, $K$ is the number of clusters, and ${C}$ is the set of cluster centers reaching all input examples within their $\\epsilon_0$ neighborhood.\nEq.\\ \\eqref{eq:memorization_effect_foundation} demonstrates that the weights of DNNs start to stray far from the initial weights when overfitting to corrupted labels, while they are still in the vicinity of the initial weights at the beginning of training . In the empirical studies , this result is also known as the \\emph{memorization effect} that DNNs tend to first learn simple and generalized patterns and then gradually overfit to all the noisy patterns. Thus, to achieve better generalization, early stopping and favoring small-loss training examples are commonly employed to design robust training methods.\n\\end{itemize}\n}\n\\end{comment}\n\\vspace*{-0.2cm}", "id": "f4b49791-795c-49a1-9ee2-043cc9fa43e4", "level": "subsection", "origin_cites_number": 17, "parent_id": "345d7d1c-66a9-4d03-80b2-cee975c6db53", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Preliminaries" ], [ "subsection", "Theoretical Foundations" ] ], "subsections": [], "title": "Theoretical Foundations" }, { "cite_extract_rate": 0.25, "cites": [ 4127 ], "content": "\\label{sec:regression}\n{\nIn addition to classification, regression is another main topic of supervised machine learning, which aims to model the relationship between a number of features and a continuous target variable. 
Unlike the classification task with a \emph{discrete} label space, the regression task considers a continuous variable as its target label , and thus it learns the mapping function $f(~\cdot~; \Theta): \mathcal{X} \rightarrow \mathcal{Y}$, where $\mathcal{Y} \subseteq \mathbb{R}$ is a \emph{continuous} label space. \nGiven the input feature $x$ and its ground-truth label $y$, two types of label noise are considered in the regression task. An \emph{additive noise} is formulated by $\tilde{y} := y + \epsilon$, where $\epsilon$ is drawn from a random distribution independent of the input feature; an \emph{instance-dependent noise} is formulated by $\tilde{y} := \rho(x)$, where $\rho: \mathcal{X} \rightarrow \mathcal{Y}$ is a noise function dependent on the input feature. \nAlthough regression predicts continuous values, regression and classification share the same concept of learning the mapping function from the input feature $x$ to the output label $y$. Thus, many robust approaches for classification are easily extended to the regression problem with simple modifications . 
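The two regression noise models can be made concrete with a toy target; the specific choices of $\epsilon$ and $\rho$ below are hypothetical, purely for illustration:

```python
import math
import random

rng = random.Random(42)

def f_true(x):
    """Ground-truth regression function y = f(x)."""
    return 2.0 * x

def additive_noisy_label(x, sigma=0.1):
    """Additive noise: y~ = y + eps, where eps is drawn
    independently of the input feature (here Gaussian)."""
    return f_true(x) + rng.gauss(0.0, sigma)

def instance_dependent_noisy_label(x):
    """Instance-dependent noise: y~ = rho(x), a corruption whose
    magnitude varies with the input feature itself."""
    return f_true(x) + 0.5 * math.sin(3.0 * x)
```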
Thus, in this survey, we focus on the classification setting for which most robust methods are defined.\n}\n\\vspace*{-0.1cm}", "id": "7dbbb1a5-6e79-42e1-a3b6-1ed894715ef6", "level": "subsection", "origin_cites_number": 4, "parent_id": "345d7d1c-66a9-4d03-80b2-cee975c6db53", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Preliminaries" ], [ "subsection", "Regression with Noisy Labels" ] ], "subsections": [], "title": "Regression with Noisy Labels" }, { "cite_extract_rate": 0.777777777777777, "cites": [ 3340, 4129, 4143, 7133, 4137, 7773, 3345, 4134, 4132, 4140, 4141, 7775, 4126, 3342, 4142, 7772, 8737, 4130, 4138, 7191, 7162, 4135, 4144, 4145, 7774, 4139, 4125, 892, 4128, 4253, 4131, 8630, 4133, 2277, 4136 ], "content": "\\label{sec:methodology}\n\\vspace*{-0.0cm}\nAccording to our comprehensive survey, the robustness of deep learning can be enhanced in numerous approaches . Figure \\ref{fig:tree_categorization} shows an overview of recent research directions conducted by the machine learning community. 
{All of them\\,(i.e., \\textsection \\ref{sec:robust_architecture}~--~\\textsection \\ref{sec:sample_selection}) focused on making a supervised learning process more robust to label noise: \\looseness=-1\n\\begin{itemize}[leftmargin=9pt]\n\\item\n(\\textsection \\ref{sec:robust_architecture}) Robust architecture: adding a noise adaptation layer at the top of an underlying DNN to learn label transition process or developing a dedicated architecture to reliably support more diverse types of label noise;\n\\vspace*{0.12cm}\n\\item (\\textsection \\ref{sec:robust_regularization}) Robust regularization: enforcing a DNN to overfit less to false-labeled examples explicitly or implicitly;\n\\vspace*{0.12cm}\n\\item (\\textsection \\ref{sec:robust_loss_function}) Robust loss function: improving the loss function;\n\\vspace*{0.12cm}\n\\item (\\textsection \\ref{sec:loss_adjustment}) Loss adjustment: adjusting the loss value according to the confidence of a given loss (or label) by loss correction, loss reweighting, or label refurbishment; \n\\vspace*{0.12cm}\n\\item (\\textsection \\ref{sec:sample_selection}) Sample selection: identifying true-labeled examples from noisy training data via multi-network or multi-round learning. \n\\end{itemize}\nOverall, we categorize all recent deep learning methods into \\emph{five} groups corresponding to popular research directions, as shown in Figure \\ref{fig:tree_categorization}. In \\textsection \\ref{sec:loss_adjustment}, meta learning is also discussed because it finds the optimal hyperparameters for loss reweighting. In \\textsection \\ref{sec:sample_selection}, we discuss the recent efforts for combining sample selection with other orthogonal directions or semi-supervised learning toward the state-of-the-art performance.}\nFigure \\ref{fig:categorization} illustrates the categorization of robust training methods using these five groups. 
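Among these directions, sample selection methods commonly rely on a small-loss criterion: examples whose current training loss is small are treated as likely true-labeled. A minimal sketch of this criterion (the function name is ours; it assumes the noise rate is known or estimated):

```python
def small_loss_selection(examples, losses, noise_rate):
    """Keep the (1 - noise_rate) fraction of examples with the
    smallest loss, treating them as likely true-labeled."""
    k = int(len(examples) * (1.0 - noise_rate))
    order = sorted(range(len(examples)), key=lambda i: losses[i])
    return [examples[i] for i in order[:k]]
```

In multi-network schemes, each network typically selects small-loss examples to update its peer, so that selection errors do not reinforce themselves.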
\n\\begin{comment}\n{\n\\newcolumntype{L}[1]{>{\\centering\\let\\newline\\\\\\arraybackslash\\hspace{0pt}}m{#1}}\n\\newcolumntype{X}[1]{>{\\centering\\let\\newline\\\\\\arraybackslash\\hspace{0pt}}p{#1}}\n\\begin{table}[t!]\n\\caption{Summary of existing deep learning methods according to the seven categories in Figure \\ref{fig:categorization}.}\n\\vspace*{-0.3cm}\n\\begin{center}\n\\begin{tabular}{L{1.75cm} |X{6.23cm}}\\toprule\n\\!\\!\\textbf{Category} & \\textbf{Deep Learning Method} \\\\\\midrule \nRobust Loss Function & \\makecell[l]{\\!\\!\\emph{Robust MAE}\\,\\!, \\emph{Generalized Cross Entropy}\\,\\!,\\!\\!\\!\\\\ \\!\\!\\emph{Symmetric Cross Entropy}\\,\\!, \\emph{Curriculum Loss}\\,\\!}\\!\\!\\! \\\\\\hline\nRobust Architecture & \\makecell[l]{ \\!\\!\\emph{Webly Learning}\\,\\!, \\emph{Noise Model}\\,\\!, \\emph{Dropout Noise} \\!\\!\\!\\!\\!\\\\ \\!\\!\\emph{Model}\\,\\!, \\emph{S--model}\\,\\!, \\emph{C--model}\\,\\!, \\emph{NLNN}\\,\\!, \\!\\!\\!\\!\\!\\\\ \\!\\!\\emph{Probabilistic Noise Model}\\,\\!, \\emph{Masking}\\,\\!,\\!\\!\\!\\\\ \\!\\!\\emph{Contrastive-additive Noise Network}\\,\\!}\\!\\!\\! \\\\\\hline\nRobust Regularization & \\makecell[l]{ \\!\\!\\!\\! \\emph{Adversarial Training}\\,\\!, \\emph{Label Smoothing}\\,\\!,\\!\\!\\!\\\\ \\!\\!\\emph{Mixup}\\,\\!, \\emph{Bilevel Learning}\\,\\!, \\emph{Annotator}\\!\\!\\!\\\\ \\!\\!\\!\\! \\emph{Confusion}\\,\\!, \\emph{Pre-training}\\,\\!\\!\\!} \\!\\!\\!\\\\\\hline\n\\!\\!Loss Adjustment\\!\\!\\! & \\makecell[l]{ \\!\\!\\emph{Backward Correction}\\,\\!}, \\emph{Forward Correction}\\,\\!}, \\!\\!\\!\\\\ \\!\\!\\emph{Gold}\\! \\emph{Loss}\\! \\emph{Correction}\\,\\!\\!, \\emph{Importance}\\! \\emph{Reweighting}\\,\\!\\!, \\!\\!\\!\\!\\!\\!\\\\ \\!\\!\\emph{Active Bias}\\! , \\emph{Bootstrapping}\\! , \\emph{Dynamic} \\!\\!\\\\ \\!\\!\\!\\! \\emph{Bootstrapping}\\,\\!, \\emph{D2L}\\,\\!, \\emph{SELFIE}\\,\\! }\\!\\!\\!\\\\\\hline\n\\!\\!\\!Sample Selection\\!\\!\\! 
& \\makecell[l]{\\!\\!\\emph{Decouple}\\,\\!, \\emph{MentorNet}\\,\\!, \\emph{Co-teaching}\\,\\!,\\!\\!\\!\\\\ \\!\\!\\emph{Co-teaching+}\\,\\!, \\emph{Iterative Detection}\\,\\!,\\!\\!\\!\\\\ \\!\\!\\emph{ITLM}\\,\\!, \\emph{INCV}\\,\\!, \\emph{SELFIE}\\,\\!, \\emph{SELF}\\,\\!,\\!\\!\\!\\\\ \\!\\!\\emph{Curriculum Loss}\\,\\!}\\!\\!\\!\\\\\\hline\n\\!\\!\\!Meta Learning\\!\\!\\! & \\makecell[l]{ \\!\\!\\emph{Meta-Regressor}\\,\\!, \\emph{Knowledge Distillation}\\,\\!,\\!\\!\\!\\\\ \\!\\!\\emph{L2LWS}\\,\\!, \\emph{CWS}\\,\\!, \\emph{Automatic Reweighting}\\,\\!,\\!\\!\\!\\\\ \\!\\!\\emph{MLNT}\\,\\!, \\emph{Meta-weight-net}\\,\\!} \\!\\!\\!\\\\\\hline\n\\!\\!Semi-supervised\\!\\! \\!Learning\\! & \\makecell[l]{ \\!\\!\\emph{Label Aggregation}\\,\\!, \\emph{Two-Stage Framework}\\,\\!,\\!\\!\\!\\\\ \\!\\!\\emph{SELF} , \\emph{DivideMix}\\,\\!}\\!\\!\\! \\\\\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{table:summary_methods}\n\\vspace*{-0.5cm}\n\\end{table}\n}\n\\end{comment}", "id": "60bd8c81-37fb-4934-ab18-bb9a24e80724", "level": "section", "origin_cites_number": 45, "parent_id": "4c23e240-6b38-42a9-a3ad-77c4f677661a", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Deep Learning Approaches" ] ], "subsections": [ "f16bcc10-a9fa-4749-b206-8aa388990240", "25e15182-1991-4f1a-83d9-017fe1a40d14", "59ab6098-f130-49dc-b87f-123eaa6be482", "3d40345d-b6d6-4b98-943c-ec552f10d7fb", "54d73157-e47b-4943-9b30-b518e1806946" ], "title": "Deep Learning Approaches" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 4138, 4133, 4146, 7773, 2277, 4136 ], "content": "\\label{sec:robust_architecture}\nIn numerous studies, architectural changes have been made to model the noise transition matrix of a noisy dataset . These changes include adding a noise adaptation layer at the top of the softmax layer and designing a new dedicated architecture. 
The resulting architectures yield improved generalization through the modification of the DNN output based on the estimated label transition probability.\n\\vspace*{0.15cm}", "id": "f16bcc10-a9fa-4749-b206-8aa388990240", "level": "subsection", "origin_cites_number": 9, "parent_id": "60bd8c81-37fb-4934-ab18-bb9a24e80724", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Deep Learning Approaches" ], [ "subsection", "Robust Architecture" ] ], "subsections": [ "439a17f4-972c-4739-b63c-f2fd25ca8426", "81044d2e-5255-4dbd-8274-7c1d26bb1d92" ], "title": "Robust Architecture" }, { "cite_extract_rate": 0.5, "cites": [ 7776, 2277, 4136 ], "content": "}\n\\label{sec:adaptation_layer}\n{\nFrom the view of training data, the noise process is modeled by discovering the underlying label transition pattern (i.e., the {noise transition matrix} T). Given an example $x$, the noisy class posterior probability for an example $x$ is expressed by\n\\begin{equation}\n\\label{eq:label_transition_matrix}\n\\begin{gathered}\n\\!\\!\\!\\!\\!p(\\tilde{y}=j|x) \\!=\\!\\sum_{i=1}^{c}p(\\tilde{y}=j, y=i|x)\\! = \\!\\sum_{i=1}^{c}\\text{T}_{ij}p(y=i|x),\\\\\n\\text{where} ~~ \\text{T}_{ij} = p(\\tilde{y}=j|y=i,x).\n\\end{gathered}\n\\end{equation} \nIn light of this, the noise adaptation layer is intended to mimic the label transition behavior in learning a DNN. 
Let $p(y|x;\\Theta)$ be the output of the base DNN with a softmax output layer.\nThen, following Eq.\\,\\eqref{eq:label_transition_matrix}, the probability of an example $x$ being predicted as its noisy label $\\tilde{y}$ is parameterized by }\n\\begin{equation}\n\\label{eq:noise_adaption_process}\n\\begin{split}\n\\!\\!\\!p(\\tilde{y}=j|x;\\Theta, \\mathcal{W})&= \\sum_{i=1}^{c}p(\\tilde{y}=j, y\\!=\\!i|x; \\Theta, \\mathcal{W})\\\\\n&= \\sum_{i=1}^{c}\\underbrace{p(\\tilde{y}=j|y\\!=\\!i;\\mathcal{W})}_{\\text{Noise Adaptation Layer}}\\underbrace{p(y\\!=\\!i|x;\\Theta)}_{\\text{Base Model}}.\n\\end{split}\n\\end{equation}\nHere, the noisy label $\\tilde{y}$ is assumed to be \\emph{conditionally independent} of the input $x$ in general. \nAccordingly, as shown in Figure \\ref{fig:noise_adaption_process}, the noisy adaptation layer is added at the top of the base DNN to model the noise transition matrix parameterized by $\\mathcal{W}$. This layer should be removed when test data is to be predicted. \\looseness=-1\n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=8.5cm]{figures/methodology/noise_adaption_layer.pdf}\n\\end{center}\n\\vspace*{-0.4cm}\n\\caption{Noise modeling process using the noise adaptation layer.}\n\\label{fig:noise_adaption_process}\n\\vspace*{-0.3cm}\n\\end{figure}\n\\smallskip\n\\noindent \\underline{Technical Detail}: \\emph{Webly learning} first trains the base DNN only for easy examples retrieved by search engines; subsequently, the confusion matrix for all training examples is used as the initial weight $\\mathcal{W}$ of the noise adaptation layer. It fine-tunes the entire model in an end-to-end manner for hard training examples. In contrast, the \\emph{noise model} initializes $\\mathcal{W}$ to an identity matrix and adds a regularizer to force $\\mathcal{W}$ to diffuse during DNN training. 
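Regardless of how $\mathcal{W}$ is learned, the forward pass of Eq.\,\eqref{eq:noise_adaption_process} reduces to a matrix--vector product between the transition matrix and the base model's softmax output; a minimal numerical sketch, assuming the matrix is already estimated:

```python
def noisy_posterior(p_clean, T):
    """p(y~ = j | x) = sum_i T[i][j] * p(y = i | x): push the base
    model's clean posterior through the row-stochastic matrix T."""
    c = len(p_clean)
    return [sum(T[i][j] * p_clean[i] for i in range(c)) for j in range(c)]
```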
The \\emph{dropout noise model} applies dropout regularization to the adaptation layer, whose output is normalized by the softmax function to implicitly diffuse $\\mathcal{W}$. The \\emph{s-model} is similar to the \\emph{dropout noise model} but dropout is not applied. The \\emph{c-model} is an extension of the s-model that models the instance-dependent noise, which is more realistic than the symmetric and asymmetric noises. Meanwhile, \\emph{NLNN} adopts the EM algorithm to iterate the E-step to estimate the noise transition matrix and the M-step to back-propagate the DNN. \\looseness=-1\n\\smallskip\n\\noindent {\\underline{Remark}: A common drawback of this family is their inability to identify false-labeled examples, treating all the examples equally. Thus, the estimation error for the transition matrix is generally large when only noisy training data is used or when the noise rate is high .}\nMeanwhile, for the EM-based method, becoming stuck in local optima is inevitable, and high computational costs are incurred .\n\\vspace*{0.15cm}", "id": "439a17f4-972c-4739-b63c-f2fd25ca8426", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "f16bcc10-a9fa-4749-b206-8aa388990240", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Deep Learning Approaches" ], [ "subsection", "Robust Architecture" ], [ "subsubsection", "Noise Adaptation Layer" ] ], "subsections": [], "title": "Noise Adaptation Layer" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 4138, 7773, 8738 ], "content": "} \n{Beyond the label-dependent label noise, several studies have been conducted to support more complex noise, leading to the design of dedicated architectures .} They typically aimed at increasing the reliability of estimating the label transition probability to handle more complex and realistic label noise. 
\n\\smallskip\n\\noindent \\underline{Technical Detail}: \\emph{Probabilistic noise modeling} manages two independent networks, each of which is specialized to predict the noise type and label transition probability.\nBecause an EM-based approach with random initialization is impractical for training the entire network, both networks are trained with massive noisy labeled data after the pre-training step with a small amount of clean data.\nMeanwhile, \\emph{masking} is a human-assisted approach to convey the human cognition of invalid label transitions.\nConsidering that noisy labels are mainly from the interaction between humans and tasks, the invalid transition investigated by humans was leveraged to constrain the noise modeling process. Owing to the difficulty in specifying the explicit constraint, a variant of generative adversarial networks\\,(GANs) was employed in this study. Recently, the \\emph{contrastive-additive noise network} was proposed to adjust incorrectly estimated label transition probabilities by introducing a new concept of quality embedding, which models the trustworthiness of noisy labels. 
{\\emph{RoG} builds a simple yet robust generative classifier on top of any discriminative DNN pre-trained on noisy data.}\n\\smallskip\n\\noindent \\underline{Remark}: Compared with the noise adaptation layer, this family of methods significantly improves the robustness to more diverse types of label noise, but it cannot be easily extended to other architectures in general.", "id": "81044d2e-5255-4dbd-8274-7c1d26bb1d92", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "f16bcc10-a9fa-4749-b206-8aa388990240", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Deep Learning Approaches" ], [ "subsection", "Robust Architecture" ], [ "subsubsection", "Dedicated Architecture" ] ], "subsections": [], "title": "Dedicated Architecture" }, { "cite_extract_rate": 0.4, "cites": [ 4142, 71 ], "content": "\\label{sec:robust_regularization}\nRegularization methods have been widely studied to improve the generalizability of a learned model in the machine learning community . \nBy avoiding overfitting in model training, the robustness to label noise improves with widely-used regularization techniques such as \\emph{data augmentation} , \\emph{weight decay} , \\emph{dropout} , and \\emph{batch normalization} . {These canonical regularization methods operate well on moderately noisy data, but they alone do \\emph{not sufficiently} improve the test accuracy}; poor generalization could be obtained when the noise is heavy . Thus, more advanced regularization techniques have been recently proposed, which further improved robustness to label noise when used along with the canonical methods. The main advantage of this family is its \\emph{flexibility} in collaborating with other directions because it only requires simple modifications. 
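Among the canonical techniques mentioned above, weight decay is the simplest to write down: it adds an $L_2$ pull toward zero at every update. A one-line sketch (illustrative, not tied to any particular framework):

```python
def sgd_step_with_weight_decay(w, grad, lr=0.1, wd=1e-4):
    """One SGD update with weight decay: w <- w - lr * (grad + wd * w),
    i.e., the L2 penalty shrinks every weight toward zero."""
    return [wi - lr * (gi + wd * wi) for wi, gi in zip(w, grad)]
```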
\n\\smallskip", "id": "25e15182-1991-4f1a-83d9-017fe1a40d14", "level": "subsection", "origin_cites_number": 5, "parent_id": "60bd8c81-37fb-4934-ab18-bb9a24e80724", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Deep Learning Approaches" ], [ "subsection", "Robust Regularization" ] ], "subsections": [ "84774a8d-19b4-4ef8-ba90-419f51709a46", "50da4bbd-f1a0-49f9-b327-8d43650a94c6" ], "title": "Robust Regularization" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 4147, 4131, 7133, 4142 ], "content": "}\n{\nThe regularization can be an {explicit} form that modifies the expected training loss, e.g., weight decay and dropout. \n}\n\\smallskip\n\\noindent \\underline{Technical Detail}:\n\\emph{Bilevel learning} uses a clean validation dataset to regularize the overfitting of a model by introducing a bilevel optimization approach, which differs from the conventional one in that its regularization constraint is also an optimization problem. Overfitting is controlled by adjusting the weights on each mini-batch and selecting their values such that they minimize the error on the validation dataset. Meanwhile, \\emph{annotator confusion} assumes the existence of multiple annotators and introduces a regularized EM-based approach to model the label transition probability; its regularizer enables the estimated transition probability to converge to the true confusion matrix of the annotators. \nIn contrast, \\emph{pre-training} empirically proves that fine-tuning on a pre-trained model provides a significant improvement in robustness compared with models trained from scratch; the universal representations of pre-training prevent the model parameters from being updated in the wrong direction by noisy labels. { \\emph{PHuber} proposes a composite loss-based gradient clipping, which is a variation of standard gradient clipping for label noise robustness. 
\\emph{Robust early-learning} classifies critical parameters and non-critical parameters for fitting clean and noise labels, respectively. Then, it penalizes only the non-critical ones with a different update rule.} \n{\\emph{ODLN} leverages open-set auxiliary data and prevents the overfitting to noisy labels by assigning random labels to the open-set examples, which are uniformly sampled from the label set.}\n\\smallskip\n\\noindent {\\underline{Remark}: The explicit regularization often introduces sensitive model-dependent hyperparameters or requires deeper architectures to compensate for the reduced capacity, yet it can lead to significant performance gain if they are optimally tuned. \\looseness=-1}\n\\smallskip", "id": "84774a8d-19b4-4ef8-ba90-419f51709a46", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "25e15182-1991-4f1a-83d9-017fe1a40d14", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Deep Learning Approaches" ], [ "subsection", "Robust Regularization" ], [ "subsubsection", "Explicit Regularization" ] ], "subsections": [], "title": "Explicit Regularization" }, { "cite_extract_rate": 1, "cites": [ 7191, 4141, 4148, 892 ], "content": "}\nThe regularization can also be an {implicit} form that gives the effect of stochasticity, e.g., data augmentation and mini-batch stochastic gradient descent.\n\\smallskip\n\\noindent \\underline{Technical Detail}: \\emph{Adversarial training} enhances the noise tolerance by encouraging the DNN to correctly classify both original inputs and hostilely perturbed ones. {\\emph{Label smoothing} estimates the marginalized effect of label noise during training, thereby reducing overfitting by preventing the DNN from assigning a full probability to noisy training examples. 
Instead of the one-hot label, the noisy label is mixed with a uniform mixture over all possible labels,\n\\begin{equation}\n\\begin{gathered}\n\\bar{y} = \\big\\langle \\bar{y}(1), \\bar{y}(2), \\dots, \\bar{y}(c) \\big\\rangle,\\\\ \n\\text{where} ~ \\bar{y}(i) = (1-\\alpha) \\cdot [\\tilde{y} = i] + \\alpha / c ~\\text{and}~ \\alpha \\in [0, 1].\n\\end{gathered}\n\\end{equation}\nHere, $[\\cdot]$ is the Iverson bracket and $\\alpha$ is the smoothing degree.} In contrast, \\emph{mixup} regularizes the DNN to favor simple linear behaviors in between training examples. \nFirst, the mini-batch is constructed using virtual training examples, each of which is formed by the linear interpolation of two noisy training examples $(x_i, \\tilde{y}_i)$ and $(x_j, \\tilde{y}_j)$ obtained at random from noisy training data $\\tilde{\\mathcal{D}}$,\n\\begin{equation}\n\\label{eq:mixup_construction}\n{x}_{mix} = \\lambda x_i + (1-\\lambda) x_j~~ \\text{and} ~~{y}_{mix} = \\lambda \\tilde{y}_i + (1-\\lambda) \\tilde{y}_j,\n\\end{equation}\nwhere $\\lambda \\in [0,1]$ is the balance parameter between two examples. Thus, \\emph{mixup} extends the training distribution by updating the DNN for the constructed mini-batch. \n\\smallskip\n\\noindent {\\underline{Remark}: The implicit regularization improves the generalization capability of the DNN without reducing the representational capacity. It also does not introduce sensitive model-dependent hyperparameters because it is applied to the training data. 
However, the extended feature or label space slows down the convergence of training.}", "id": "50da4bbd-f1a0-49f9-b327-8d43650a94c6", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "25e15182-1991-4f1a-83d9-017fe1a40d14", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Deep Learning Approaches" ], [ "subsection", "Robust Regularization" ], [ "subsubsection", "Implicit Regularization" ] ], "subsections": [], "title": "Implicit Regularization" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 4130, 3454, 4132, 4151, 4150, 4128, 4149, 8736, 4134 ], "content": "\\label{sec:robust_loss_function}\n{\nIt was proven that a learned DNN with a \\emph{suitably modified} loss function $\\ell^{\\prime}$ for noisy data $\\tilde{\\mathcal{D}}$ can approach the Bayes optimal classifier $f^{*}$, which achieves the optimal Bayes risk $\\mathcal{R}^{*} = \\mathcal{R}_{\\mathcal{D}}(f^{*})$ for clean data $\\mathcal{D}$. Let $\\hat{f} = \\text{argmin}_{f \\in \\mathcal{F}} \\hat{\\mathcal{R}}_{\\ell^{\\prime}, \\tilde{\\mathcal{D}}}(f)$ be the learned classifier with the modified loss $\\ell^{\\prime}$ for the noisy data, where $\\hat{\\mathcal{R}}_{\\ell^{\\prime}, \\tilde{\\mathcal{D}}}(f) = \\mathbb{E}_{\\tilde{\\mathcal{D}}}[\\ell(f(x;\\Theta), \\tilde{y})]$. 
If $\\ell$ is $L$-Lipschitz and classification-calibrated , with probability at least $1\\!-\\!\\delta$, there exists a non-decreasing function $\\zeta_{\\ell}$ with $\\zeta_{\\ell}(0)=0$ such that \\looseness=-1\n\\vspace*{-0.6cm}\n\\begin{equation}\n\\begin{split}\n\\mathcal{R}_{\\mathcal{D}}(\\hat{f}) - \\mathcal{R}^{*} \\leq &~\\overbrace{\\zeta_{\\ell} \\Big( \\text{min}_{f\\in \\mathcal{F}}\\mathcal{R}_{\\ell,\\mathcal{D}}(f) - \\text{min}_{f}\\mathcal{R}_{\\ell, \\mathcal{D}}(f)}^{\\text{Approximation and Estimation Errors}}\\\\\n&~~+ 4L_{p}\\text{RC}(\\mathcal{F}) + 2\\sqrt{{\\text{log}(1/\\delta)}\\big/{2|\\mathcal{D}|}} \\Big),\\!\\!\\!\\!\\!\n\\end{split}\n\\end{equation}\n$L_{p}$ is the Lipschitz constant of $\\ell^{\\prime}$ and RC is the Rademacher complexity of the hypothesis class $\\mathcal{F}$. Then, by the universal approximation theorem , the Bayes optimal classifier $f^*$ is guaranteed to be in the hypothesis class $\\mathcal{F}$ with DNNs. \nBased on this theoretical foundation, researchers have attempted to design robust loss functions such that they achieve a small risk for unseen clean data even when noisy labels exist in the training data . \n}\n\\smallskip\n\\noindent \\underline{Technical Detail}: \nInitially, Manwani and Sastry theoretically proved a sufficient condition for the loss function such that risk minimization with that function becomes noise-tolerant for binary classification. Subsequently, the sufficient condition was extended for multi-class classification using deep learning . Specifically, a loss function is defined to be \\emph{noise-tolerant} for a $c$-class classification under \\emph{symmetric} noise if the function satisfies the noise rate $\\tau<\\frac{c-1}{c}$ and\n\\begin{equation}\n\\label{eq:symmetric_function}\n\\sum_{j=1}^{c}\\ell\\big(f(x;\\Theta),y=j\\big)=C, ~\\forall x\\in\\mathcal{X}, ~\\forall f,\n\\end{equation}\nwhere $C$ is a constant. 
This condition guarantees that the classifier trained on noisy data has the same misclassification probability as that trained on noise-free data under the specified assumption. {An extension for \\emph{multi-label} classification was provided by Kumar et al. .} Moreover, if $\\mathcal{R}_{\\mathcal{D}}(f^{*})=0$, then the function is also noise-tolerant under an \\emph{asymmetric} noise, where $f^{*}$ is a global risk minimizer of $\\mathcal{R}_{\\mathcal{D}}$. \\looseness=-1\nFor the classification task, the categorical cross entropy\\,(CCE) loss is the most widely used loss function owing to its fast convergence and high generalization capability. However, in the presence of noisy labels, the \\emph{robust MAE} showed that the mean absolute error\\,(MAE) loss achieves better generalization than the CCE loss because only the MAE loss satisfies the aforementioned condition. A limitation of the MAE loss is that its generalization performance degrades significantly when complicated data are involved. Hence, the \\emph{generalized cross entropy}\\,(GCE) was proposed to achieve the advantages of both MAE and CCE losses; the GCE loss is a more general class of noise-robust loss that encompasses both of them. \n{\nAmid et al. extended the GCE loss by introducing two temperatures based on the Tsallis divergence. \\emph{Bi-tempered loss} introduces a proper unbiased generalization of the CE loss based on the Bregman divergence.}\nIn addition, inspired by the symmetricity of the Kullback-Leibler divergence, the symmetric cross entropy\\,(SCE) was proposed by combining a noise tolerance term, namely reverse cross entropy loss, with the standard CCE loss. \nMeanwhile, the \\emph{curriculum loss}\\,(CL) is a surrogate loss of the 0-1 loss function; it provides a tight upper bound and can easily be extended to multi-class classification. 
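Returning to the sufficient condition in Eq.\,\eqref{eq:symmetric_function}, it is easy to verify numerically that the MAE loss satisfies it while the CCE loss does not: for any softmax output, the per-class MAE losses sum to the constant $2(c-1)$. A small sanity check of ours, not code from the cited papers:

```python
import math

def cce(p, j):
    """Categorical cross entropy against class j."""
    return -math.log(p[j])

def mae(p, j):
    """Absolute error against the one-hot vector of class j,
    as used in the noise-tolerance analysis of the MAE loss."""
    return sum(abs(p[i] - (1.0 if i == j else 0.0)) for i in range(len(p)))

p = [0.6, 0.3, 0.1]                          # any softmax output, c = 3
mae_sum = sum(mae(p, j) for j in range(3))   # constant 2(c - 1) = 4
cce_sum = sum(cce(p, j) for j in range(3))   # varies with p
```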
{The \\emph{active passive loss}\\,(APL) is a combination of two types of robust loss functions, an active loss that maximizes the probability of belonging to the given class and a passive loss that minimizes the probability of belonging to other classes.}\n\\smallskip\n\\noindent \\underline{Remark}: The robustness of these methods is theoretically supported well. However, they perform well only in simple cases, when learning is easy or the number of classes is small . Moreover, the modification of the loss function increases the training time for convergence .\n\\vspace*{-0.2cm}", "id": "59ab6098-f130-49dc-b87f-123eaa6be482", "level": "subsection", "origin_cites_number": 15, "parent_id": "60bd8c81-37fb-4934-ab18-bb9a24e80724", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Deep Learning Approaches" ], [ "subsection", "Robust Loss Function" ] ], "subsections": [], "title": "Robust Loss Function" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 7162, 3340, 7775, 4135, 4145, 4139 ], "content": "\\label{sec:loss_adjustment}\n\\vspace*{-0.0cm}\nLoss adjustment is effective for reducing the negative impact of noisy labels by adjusting the loss of all training examples before updating the DNN . The methods associated with it can be categorized into three groups depending on their adjustment philosophy: \\emph{1)} {\\emph{loss correction}} that estimates the noise transition matrix to correct the forward or backward loss, \\emph{2)} {\\emph{loss reweighting}} that imposes different importance to each example for a weighted training scheme, \\emph{3)} {\\emph{label refurbishment}} that adjusts the loss using the refurbished label obtained from a convex combination of noisy and predicted labels, and \\emph{4)} {\\emph{meta learning}} that automatically infers the optimal rule for loss adjustment. 
{Unlike the robust loss function newly designed for robustness, this family of methods aims to make the traditional optimization process robust to label noise. Hence, in the middle of training, the {update rule} is adjusted such that the negative impact of label noise is minimized.}\nIn general, loss adjustment allows for a \\emph{full exploration} of the training data by adjusting the loss of every example. However, the error incurred by \\emph{false} correction is accumulated, especially when the number of classes or the number of mislabeled examples is large . \n\\vspace*{0.15cm}", "id": "3d40345d-b6d6-4b98-943c-ec552f10d7fb", "level": "subsection", "origin_cites_number": 9, "parent_id": "60bd8c81-37fb-4934-ab18-bb9a24e80724", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Deep Learning Approaches" ], [ "subsection", "Loss Adjustment" ] ], "subsections": [ "26fbf349-d8eb-41b0-9964-18115fa4704c", "67aed536-256b-4b94-954e-fe79fe9aa42b", "6322f675-ca49-4162-8e1c-216bde9693b8", "5cc6d843-bb17-416b-bbf4-7e089941b605" ], "title": "Loss Adjustment" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 7777, 4145, 4153, 4152 ], "content": "}\n\\label{sec:loss_correction}\nSimilar to the noise adaptation layer presented in Section \\ref{sec:robust_architecture}, this approach modifies the loss of each example by multiplying the estimated label transition probability by the output of a specified DNN. The main difference is that the learning of the transition probability is decoupled from that of the model. \n\\smallskip\n\\noindent \\underline{Technical Detail}:\n\\emph{Backward correction} initially approximates the noise transition matrix using the softmax output of the DNN trained without loss correction. Subsequently, it retrains the DNN while correcting the original loss based on the estimated matrix. 
The corrected loss of an example $(x,\tilde{y})$ is computed as a linear combination of its loss values over all observable labels $y\in\{1,\dots, c\}$, whose coefficients are taken from the row of the inverse transition matrix $\hat{\text{T}}^{-1}$ corresponding to its noisy label $\tilde{y}$.
Therefore, the backward correction $\cev{\ell}$ is performed by multiplying the inverse transition matrix with the vector of loss values for all the observable labels, 
{
\begin{equation}
\small
\label{eq:backward_correction}
\begin{split}
\cev{\ell}\big(f(x;&\Theta),\tilde{y}\big) = \hat{\text{T}}^{-1}\Big\langle\ell\big(f(x;\Theta), 1\big), \dots, \ell\big(f(x;\Theta), c\big)\Big\rangle^{\!\top}\!\!,
\end{split}
\end{equation}
}
\noindent where $\hat{\text{T}}$ is the estimated noise transition matrix. 
Conversely, \emph{forward correction} uses a linear combination of a DNN's softmax outputs before applying the loss function. Hence, the forward correction $\vec{\ell}$ is performed by multiplying the estimated transition probability with the softmax outputs during the forward propagation step,
{
\begin{equation}
\small
\label{eq:forward_correction}
\begin{split}
\vec{\ell}\big(f(x;\Theta),\tilde{y}\big)& = \ell\Big(\Big\langle\hat{p}(\tilde{y}|1),\dots,\hat{p}(\tilde{y}|c)\Big\rangle f(x;\Theta)^{\top},\tilde{y}\Big)\\
&=\ell\big(\hat{\text{T}}^{\top}f(x;\Theta)^{\top},\tilde{y}\big).
\end{split}
\end{equation}
}
\vspace*{-0.2cm}
Furthermore, \emph{gold loss correction} assumes the availability of clean validation data or anchor points for loss correction. Thus, a more accurate transition matrix is obtained by using them as additional information, which further improves the robustness of the loss correction. 
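A minimal sketch of Eqs.~\eqref{eq:backward_correction} and \eqref{eq:forward_correction} with the CCE loss; the transition matrix $\hat{\text{T}}$ and the softmax output below are made-up numbers:

```python
import numpy as np

c = 3
T_hat = np.full((c, c), 0.1)   # assumed estimate of the noise transition matrix
np.fill_diagonal(T_hat, 0.8)   # symmetric noise with rate 0.2

p = np.array([0.7, 0.2, 0.1])  # softmax output f(x; Theta)
y_tilde = 1                    # observed noisy label

# forward correction: multiply T^T into the prediction, then apply the loss
forward_loss = -np.log((T_hat.T @ p)[y_tilde])

# backward correction: combine the per-class losses with the inverse matrix
per_class_loss = -np.log(p)    # CCE loss for every observable label
backward_loss = (np.linalg.inv(T_hat) @ per_class_loss)[y_tilde]

print(forward_loss, backward_loss)
```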
{Recently, \\emph{T-Revision} provides a solution that can infer the transition matrix without anchor points, and \\emph{Dual T} factorizes the matrix into the product of two easy-to-estimate matrices to avoid directly estimating the noisy class posterior.}\n{\nBeyond the instance-independent noise assumption, Zhang et al. introduced the instance-confidence embedding to model instance-dependent noise in estimating the transition matrix. On the other hand, Yang et al. proposed to use the Bayes optimal transition matrix estimated from the distilled examples for the instance-dependent noise transition matrix.}\n\\smallskip\n\\noindent {\\underline{Remark}: The robustness of these approaches is highly dependent on how precisely the transition matrix is estimated. To acquire such a transition matrix, they require prior knowledge in general, such as anchor points or clean validation data.}\n\\vspace*{0.15cm}", "id": "26fbf349-d8eb-41b0-9964-18115fa4704c", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "3d40345d-b6d6-4b98-943c-ec552f10d7fb", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Deep Learning Approaches" ], [ "subsection", "Loss Adjustment" ], [ "subsubsection", "Loss Correction" ] ], "subsections": [], "title": "Loss Correction" }, { "cite_extract_rate": 0.5, "cites": [ 3453, 7775 ], "content": "} Inspired by the concept of importance reweighting , loss reweighting aims to assign smaller weights to the examples with false labels and greater weights to those with true labels. 
Accordingly, the reweighted loss on the mini-batch $\\mathcal{B}_t$ is used to update the DNN, \n{\n\\begin{equation}\n\\label{eq:loss_reweighting}\n\\Theta_{t+1} = \\Theta_{t} - \\eta\\nabla \\Big( \\frac{1}{|\\mathcal{B}_{t}|}\\!\\sum_{(x,\\tilde{y})\\in\\mathcal{B}_t}\\!\\!\\!\\!\\overbrace{w(x,\\tilde{y})\\ell\\big(f(x;\\Theta_t), \\tilde{y}\\big)}^{\\text{Reweighted Loss}}\\Big),\n\\end{equation}}\nwhere $w(x,\\tilde{y})$ is the weight of an example $x$ with its noisy label $\\tilde{y}$. Hence, the examples with smaller weights do not significantly affect the DNN learning.\n\\smallskip\n\\noindent \\underline{Technical Detail}:\nIn \\emph{importance reweighting} , the ratio of two joint data distributions {$w(x,\\tilde{y})=P_{\\mathcal{D}}(x,\\tilde{y})/P_{\\tilde{\\mathcal{D}}}(x,\\tilde{y})$} determines the contribution of the loss of each noisy example. An approximate solution to estimate the ratio was developed because the two distributions are difficult to determine from noisy data. Meanwhile, \\emph{active bias} emphasizes uncertain examples with inconsistent label predictions by assigning their prediction variances as the weights for training. 
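Eq.~\eqref{eq:loss_reweighting} can be illustrated with a toy least-squares model; all numbers, including the weights $w(x,\tilde{y})$, are hypothetical, whereas in practice the weights would come from a scheme such as importance reweighting or active bias:

```python
import numpy as np

# toy linear model; per-example loss = 0.5 * (theta . x - y)^2
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 1.0, -5.0])   # the third label is assumed to be corrupted
w = np.array([1.0, 1.0, 0.0])    # w(x, y~): the suspected noisy example is down-weighted

theta = np.zeros(2)
for _ in range(200):
    residual = X @ theta - y                 # per-example error
    grads = residual[:, None] * X            # per-example gradient of the squared loss
    theta -= 0.5 * (w[:, None] * grads).mean(axis=0)   # reweighted mini-batch update

print(theta)   # close to [1, 1]; the corrupted example no longer drags theta away
```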
\n{\n\\emph{DualGraph} employs graph neural networks and reweights the examples according to the structural relations among labels, eliminating the abnormal noise examples.}\n\\smallskip\n\\noindent {\\underline{Remark}: These approaches need to manually pre-specify the weighting function as well as there additional hyper-parameters, which is fairly hard to be applied in practice due to the significant variation of appropriate weighting schemes that rely on the noise type and training data.}\n\\vspace*{0.15cm}", "id": "67aed536-256b-4b94-954e-fe79fe9aa42b", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "3d40345d-b6d6-4b98-943c-ec552f10d7fb", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Deep Learning Approaches" ], [ "subsection", "Loss Adjustment" ], [ "subsubsection", "Loss Reweighting" ] ], "subsections": [], "title": "Loss Reweighting" }, { "cite_extract_rate": 0.75, "cites": [ 7162, 4135, 4156, 4154, 4155, 4139 ], "content": "} Refurbishing a noisy label $\\tilde{y}$ effectively prevents overfitting to false labels. Let $\\hat{y}$ be the current prediction of the DNN $f(x;\\Theta)$. Therefore, the refurbished label $y^{refurb}$ can be obtained by a convex combination of the noisy label $\\tilde{y}$ and the DNN prediction $\\hat{y}$,\n\\begin{equation}\n\\label{eq:label_correction}\ny^{refurb} = \\alpha \\tilde{y} + (1-\\alpha) \\hat{y},\n\\end{equation}\nwhere $\\alpha \\in [0,1]$ is the label confidence of $\\tilde{y}$. To mitigate the damage of incorrect labeling, this approach backpropagates the loss for the refurbished label instead of the noisy one, thereby yielding substantial robustness to noisy labels. \n\\smallskip\n\\noindent \\underline{Technical Detail}:\n\\emph{Bootstrapping} is the first method that proposes the concept of label refurbishment to update the target label of training examples. 
It develops a more coherent network that improves its ability to evaluate the consistency of noisy labels, with the label confidence $\alpha$ obtained via cross-validation.
\emph{Dynamic bootstrapping} dynamically adjusts the confidence $\alpha$ of individual training examples. The confidence $\alpha$ is obtained by fitting a two-component, one-dimensional beta mixture model to the loss distribution of all training examples. {\emph{Self-adaptive training} applies the exponential moving average to alleviate the instability issue of using the instantaneous prediction of the current DNN, 
\begin{equation}
\!\!y_{t+1}^{refurb} = \alpha y_t^{refurb} + (1-\alpha)\hat{y}, ~\text{where}~ y_0^{refurb} = \tilde{y}.\!\!
\end{equation}
}
\emph{D2L} trains a DNN using a dimensionality-driven learning strategy to avoid overfitting to false labels. A simple measure called \emph{local intrinsic dimensionality} is adopted to evaluate the confidence $\alpha$, considering that overfitting is exacerbated by dimensional expansion. Hence, refurbished labels are generated to prevent the dimensionality of the representation subspace from expanding at a later stage of training. 
Recently, \emph{SELFIE} introduces a novel concept of \emph{refurbishable examples} that can be corrected with high precision. The key idea is to consider examples with consistent label predictions as refurbishable because such consistent predictions correspond to their true labels with high probability owing to the learner's perceptual consistency. Accordingly, the labels of only refurbishable examples are corrected to minimize the number of falsely corrected cases. {Similarly, \emph{AdaCorr} selectively refurbishes the labels of noisy examples, but a theoretical error-bound is provided. 
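A minimal sketch of Eq.~\eqref{eq:label_correction} and of the exponential moving average used by self-adaptive training; the probabilities and the confidence $\alpha$ below are illustrative:

```python
import numpy as np

def refurbish(y_noisy, y_pred, alpha):
    # Eq. (label refurbishment): convex combination of the noisy label and the prediction
    return alpha * y_noisy + (1.0 - alpha) * y_pred

y_noisy = np.array([0.0, 1.0, 0.0])   # one-hot, possibly wrong, label
y_pred = np.array([0.8, 0.1, 0.1])    # current softmax prediction of the DNN

y_ref = refurbish(y_noisy, y_pred, alpha=0.3)
print(y_ref)                          # [0.56, 0.37, 0.07]

# self-adaptive training: y_{t+1} = alpha * y_t + (1 - alpha) * y_hat, with y_0 = y_noisy
y_t = y_noisy.copy()
for _ in range(10):
    y_t = refurbish(y_t, y_pred, alpha=0.9)
print(y_t)   # drifts from the noisy label toward the model prediction
```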
Alternatively, \\emph{SEAL} averages the softmax output of a DNN on each example over the whole training process, then re-trains the DNN using the averaged soft labels.}\n\\smallskip\n\\noindent {\\underline{Remark}: Differently from loss correction and reweighting, all the noisy labels are explicitly replaced with other expected clean labels (or their combination). If there are not many confusing classes in data, these methods work well by refurbishing the noisy labels with high precision. In the opposite case, the DNN could overfit to wrongly refurbished labels.}\n\\vspace*{0.15cm}", "id": "6322f675-ca49-4162-8e1c-216bde9693b8", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "3d40345d-b6d6-4b98-943c-ec552f10d7fb", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Deep Learning Approaches" ], [ "subsection", "Loss Adjustment" ], [ "subsubsection", "Label Refurbishment" ] ], "subsections": [], "title": "Label Refurbishment" }, { "cite_extract_rate": 0.8888888888888881, "cites": [ 4157, 4158, 4128, 3345, 1695, 7774, 7772, 8737 ], "content": "}\n{In recent years, meta learning becomes an important topic in the machine learning community and is applied to improve noise robustness . The key concept is \\emph{learning to learn} that performs learning at a level higher than conventional learning, thus achieving data-agnostic and noise type-agnostic rules for better practical use. It is similar to loss reweighting and label refurbishment, but the adjustment is automated in a meta learning manner.\n\\smallskip\n\\noindent \\underline{Technical Detail}:\nFor the loss reweighting in Eq.~\\eqref{eq:loss_reweighting}, the goal is to learn the weight function $w(x,\\tilde{y})$. Specifically, \\emph{L2LWS} and \\emph{CWS} are unified neural architectures composed of a target DNN and a meta-DNN. 
The meta-DNN is trained on a small clean validation dataset; it then provides guidance to evaluate the weight score for the target DNN. Here, parts of the two DNNs are shared and jointly trained to benefit from each other. \emph{Automatic reweighting} is a meta learning algorithm that learns the weights of training examples based on their gradient directions. It adds a small clean validation dataset to the training dataset and reweights the backward loss of the mini-batch examples such that the updated gradient minimizes the loss of this validation dataset. \emph{Meta-weight-net} parameterizes the weighting function as a multi-layer perceptron network with only one hidden layer. A meta-objective is defined to update its parameters such that they minimize the empirical risk of a small clean dataset. At each iteration, the parameter of the target network is guided by the weight function updated via the meta-objective. 
{Likewise, \emph{data coefficients}\,(i.e., exemplar weights and true labels) are estimated by meta-optimization with a small clean set, which is only $0.2$\% of the entire training set, while refurbishing the examples that are probably mislabeled.}
For the label refurbishment in Eq.~\eqref{eq:label_correction}, \emph{knowledge distillation} adopts the technique of transferring knowledge from one expert model to a target model. The prediction from the expert DNN trained on small clean validation data is used instead of the prediction $\hat{y}$ from the target DNN. 
{\emph{MLC} updates the target model with corrected labels provided by a meta model trained on clean validation data. The two models are trained concurrently via a bi-level optimization.} 
\smallskip
\noindent \underline{Remark}:
By learning the update rule via meta learning, the trained network easily adapts to various types of data and label noise. 
Nevertheless, unbiased clean validation data is essential to minimize the auxiliary objective, although it may not be available in real-world data.}\n\\vspace*{-0.1cm}", "id": "5cc6d843-bb17-416b-bbf4-7e089941b605", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "3d40345d-b6d6-4b98-943c-ec552f10d7fb", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Deep Learning Approaches" ], [ "subsection", "Loss Adjustment" ], [ "subsubsection", "Meta Learning" ] ], "subsections": [], "title": "Meta Learning" }, { "cite_extract_rate": 0.631578947368421, "cites": [ 3340, 4137, 7771, 4140, 4126, 3342, 4125, 4124, 4123, 4253, 8630, 4115 ], "content": "\\label{sec:sample_selection}\nTo avoid any false corrections, many recent studies have adopted sample selection that involves selecting true-labeled examples from a noisy training dataset. In this case, the update equation in Eq.\\ \\eqref{eq:corrupted_update} is modified to render a DNN more robust for noisy labels. Let $\\mathcal{C}_t \\subseteq \\mathcal{B}_t$ be the identified \\emph{clean} examples at time $t$. Then, the DNN is updated only for the selected clean examples $\\mathcal{C}_t$, \n\\begin{equation}\n\\label{eq:clean_update}\n\\Theta_{t+1} = \\Theta_{t} - \\eta\\nabla\\Big(\\frac{1}{|\\mathcal{C}_t|} \\!\\sum_{(x,\\tilde{y}) \\in \\mathcal{C}_t} \\!\\!\\!\\!\\ell\\big(f(x;\\Theta_{t}), \\tilde{y}\\big)\\Big),\n\\end{equation}\nwhere the rest mini-batch examples, which are likely to be false-labeled, are excluded to pursue robust learning. \n\\newcommand{\\norm}[1]{\\left\\lVert#1\\right\\rVert}\n{The {memorization nature} of DNNs has been explored theoretically and empirically to identify clean examples from noisy training data .\nSpecifically, assuming clusterable data where the clusters are located on the unit Euclidean ball, Li et al. 
proved the following bound on the distance from the initial weight ${W}_{0}$ to the weight ${W}_t$ after $t$ iterations,
\begin{equation}
\label{eq:memorization_effect_foundation}
\norm{{W}_t - {W}_0}_{F} \lesssim \big( \sqrt{K} + (K^{2}\epsilon_{0}/\norm{{C}}^{2})t \big),
\end{equation}
where $\norm{\cdot}_{F}$ is the Frobenius norm, $K$ is the number of clusters, and ${C}$ is the set of cluster centers reaching all input examples within their $\epsilon_0$ neighborhood.
Eq.\ \eqref{eq:memorization_effect_foundation} demonstrates that the weights of DNNs start to stray far from the initial weights when overfitting to corrupted labels, while they remain in the vicinity of the initial weights at an early stage of training . In empirical studies , the \emph{memorization effect} is also observed: DNNs tend to first learn simple and generalized patterns and then gradually overfit to all noisy patterns. As such, favoring small-loss training examples as the clean ones is commonly employed to design robust training methods .
Learning with sample selection is well motivated and works well in general, but this approach suffers from the accumulated error caused by incorrect selection, especially when there are many ambiguous classes in the training data.} 
Hence, recent approaches often leverage multiple DNNs that cooperate with one another or run multiple training rounds .
Moreover, to benefit even from false-labeled examples, loss correction and semi-supervised learning have recently been combined with the sample selection strategy .
\begin{figure}
\vspace*{+0.08cm}
\begin{center}
\includegraphics[width=8.7cm]{figures/methodology/loss_distribution.pdf}
\end{center}
\vspace*{-0.2cm}
\hspace*{0.80cm} {\small (a) Symmetric Noise $40\%$.} \hspace*{0.525cm} {\small (b) Asymmetric Noise $40\%$.}
\vspace*{-0.15cm}
\caption{Loss distribution of training examples at the training accuracy of $50\%$ on noisy CIFAR-100. 
(This figure is adapted from Song et al.)}
\label{fig:loss_distribution}
\vspace*{-0.4cm}
\end{figure}
\smallskip\smallskip
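As a minimal illustration, the small-loss trick can be sketched in a few lines; the per-example losses and the assumed-known noise rate are made-up values:

```python
import numpy as np

def select_small_loss(losses, noise_rate):
    # keep the (1 - noise_rate) fraction of the mini-batch with the smallest loss
    n_keep = int(round(len(losses) * (1.0 - noise_rate)))
    return np.argsort(losses)[:n_keep]

# per-example losses of a mini-batch; mislabeled examples tend to have larger losses
losses = np.array([0.2, 2.9, 0.1, 3.4, 0.4, 0.3])
clean_idx = select_small_loss(losses, noise_rate=1.0 / 3.0)
print(np.sort(clean_idx))   # [0 2 4 5]: only these examples enter the update
```

In a co-training setup such as \emph{Co-teaching}, each network would feed its selected indices to its peer network instead of using them itself.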
\\emph{Co-teaching} and \\emph{Co-teaching+} also maintain two DNNs, but each DNN selects a certain number of small-loss examples and feeds them to its peer DNN for further training. \\emph{Co-teaching+} further employs the disagreement strategy of \\emph{Decouple} compared with \\emph{Co-teaching}. {In contrast, \\emph{JoCoR} reduces the diversity of two networks via co-regularization, making predictions of the two networks closer.}\n\\smallskip\n\\noindent {\\underline{Remark}: The co-training methods help reduce the confirmation bias , which is a hazard of favoring the examples selected at the beginning of training, while the increase in the number of learnable parameters makes their learning pipeline inefficient. In addition, the small-loss trick does not work well when the loss distribution of true-labeled and false-labeled examples largely overlap, as in the asymmetric noise in Figure \\ref{fig:loss_distribution}(b).}\n\\smallskip\\smallskip", "id": "a00439ee-902b-4911-93a8-2198ddea3d64", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "54d73157-e47b-4943-9b30-b518e1806946", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Deep Learning Approaches" ], [ "subsection", "Sample Selection" ], [ "subsubsection", "Multi-network Learning" ] ], "subsections": [], "title": "Multi-network Learning" }, { "cite_extract_rate": 0.75, "cites": [ 4126, 3343, 8630, 4125, 7163, 4123 ], "content": "}\nWithout maintaining additional DNNs, multi-round learning iteratively refines the selected set of clean examples by repeating the training round. Thus, the selected set keeps improved as the number of rounds increases. \n\\smallskip\n\\noindent \\underline{Technical Detail}:\n\\emph{ITLM} iteratively minimizes the trimmed loss by alternating between selecting true-labeled examples at the current moment and retraining the DNN using them. 
At each training round, only a fraction of small-loss examples obtained in the current round are used to retrain the DNN in the next round. \emph{INCV} randomly divides noisy training data and then employs cross-validation to classify true-labeled examples while removing large-loss examples at each training round. Here, \emph{Co-teaching} is adopted to train the DNN on the identified examples in the final round of training. {Similarly, \emph{O2U-Net} repeats the whole training process with a cyclical learning rate until enough loss statistics of every example are gathered. Next, the DNN is re-trained from scratch only on the clean data, where false-labeled examples have been detected and removed based on these statistics.} \looseness=-1
A number of variations have been proposed to achieve high performance using iterative refinement only in a single training round.
Beyond the small-loss trick, \emph{iterative detection} detects false-labeled examples by employing the local outlier factor algorithm . With a Siamese network, it gradually pulls away false-labeled examples from true-labeled samples in the deep feature space. \emph{MORPH} introduces the concept of memorized examples, which is used to iteratively expand an initial safe set into a maximal safe set via self-transitional learning. \emph{TopoFilter} utilizes the spatial topological pattern of learned representations to detect true-labeled examples, not relying on the prediction of the noisy classifier.
{
\emph{NGC} iteratively constructs the nearest neighbor graph using latent representations and performs geometry-based sample selection by aggregating information from neighborhoods. Soft pseudo-labels are assigned to the examples not selected.
}
As a side effect, the computational cost of training increases linearly with the number of training rounds.}
\smallskip
As illustrated in Figure \\ref{fig:noise_semi_supervised}, selected examples are treated as labeled clean data, whereas the remaining examples are treated as unlabeled. Subsequently, semi-supervised learning is performed using the transformed data. \\emph{SELF} is combined with a semi-supervised learning approach to progressively filter out false-labeled examples from noisy data. By maintaining the running average model called the {mean-teacher} as the backbone, it obtains the self-ensemble predictions of all training examples and then progressively removes examples whose ensemble predictions do not agree with their annotated labels. This method further leverages unsupervised loss from the examples not included in the selected clean set. \\emph{DivideMix} uses two-component and one-dimensional Gaussian mixture models to transform noisy data into labeled (clean) and unlabeled (noisy) sets. Then, it applies a semi-supervised technique \\emph{MixMatch} . Recently, \\emph{RoCL} employs two-phase learning strategies: supervised training on selected clean examples and then semi-supervised learning on relabeled noisy examples with self-supervision. For selection and relabeling, it computes the exponential moving average of the loss over training iterations. 
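The labeled/unlabeled split described above can be emulated with a tiny one-dimensional, two-component EM routine over per-example losses (a simplified stand-in for the Gaussian mixture model used by \emph{DivideMix}; the loss values are synthetic):

```python
import numpy as np

def p_clean(losses, iters=50):
    # tiny 1-D, two-component EM; returns P(clean | loss) via the lower-mean component
    mu = np.array([losses.min(), losses.max()])
    sigma = np.array([1.0, 1.0])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        z = (losses[:, None] - mu) / sigma
        dens = pi * np.exp(-0.5 * z ** 2) / (sigma * np.sqrt(2.0 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)      # E-step: responsibilities
        nk = r.sum(axis=0)
        mu = (r * losses[:, None]).sum(axis=0) / nk     # M-step: means,
        sigma = np.sqrt((r * (losses[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        pi = nk / len(losses)                           # ...deviations, and priors
    return r[:, np.argmin(mu)]

rng = np.random.default_rng(1)
losses = np.r_[rng.normal(0.2, 0.05, 8), rng.normal(3.0, 0.2, 4)]  # 8 clean, 4 noisy
prob = p_clean(losses)
labeled = prob > 0.5   # treated as labeled (clean); the rest become unlabeled
print(labeled)
```

The examples flagged `labeled` would be used with their noisy labels, while the rest are stripped of labels and handled by the semi-supervised learner.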
\n}\n\\clearpage\n{\n\\newcolumntype{L}[1]{>{\\centering\\let\\newline\\\\\\arraybackslash\\hspace{0pt}}m{#1}}\n\\newcolumntype{X}[1]{>{\\centering\\let\\newline\\\\\\arraybackslash\\hspace{0pt}}p{#1}}\n\\newcolumntype{Y}[1]{>{\\let\\newline\\\\\\arraybackslash\\hspace{1pt}}m{#1}}\n\\begin{table*}[h]\n\\vspace*{-3.9cm}\n\\caption{{Comparison of proposed robust deep learning methods with respect to the following six properties: (P1)\\,Flexibility, (P2)\\,No Pre-training, (P3)\\,Full Exploration, (P4)\\,No Supervision, (P5)\\,Heavy Noise, and (P6)\\,Complex Noise.}}\n\\vspace*{-0.2cm}\n\\begin{tabular}{L{0.7cm}|L{2.5cm}|Y{4.7cm}|X{0.5cm}|X{0.5cm}|X{0.5cm}|X{0.5cm}|X{0.5cm}|X{0.5cm}|Y{2.9cm}}\n\\toprule\n\\multicolumn{2}{c|}{\\textbf{Category}} & \\hspace*{2.0cm}\\textbf{Method} & \\textbf{P1} & \\textbf{P2} & \\textbf{P3} & \\textbf{P4} & \\textbf{P5} & \\textbf{P6} & \\,\\,\\,\\,\\,\\,\\,\\,\\,\\textbf{Implementation}\\!\\! \\\\ \\hline\n\\multirow{9}{*}{\\vspace*{-0.35cm}\\hspace*{0.13cm}\\!\\!\\!\\!\\!\\!\\rotatebox[origin=c]{90}{\\makecell[l]{Robust Architecture\\\\\\hspace*{0.66cm}(\\textsection \\ref{sec:robust_architecture})}}\\!\\!\\!\\!\\!} \n& \\multirow{6}{*}{\\vspace*{-0.2cm}\\makecell[l]{Noisy Adaptation\\\\\\hspace*{0.67cm}Layer}} \n& \\emph{Webly Learning} & \\cellcolor{gray!10}$\\bigtriangleup$ & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & Official\\,(Caffe)\\protect\\footnotemark[1] \\\\\\cline{3-10} \n& & \\emph{Noise Model} & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & Unofficial\\,(Keras)\\protect\\footnotemark[2] \\\\\\cline{3-10}\n& & \\emph{Dropout Noise Model} & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & 
\\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & Official\\,(MATLAB)\\protect\\footnotemark[3] \\\\\\cline{3-10}\n& & \\emph{S-model} & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & Official\\,(Keras)\\protect\\footnotemark[4]\\\\\\cline{3-10}\n& & \\emph{C-model} & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & Official\\,(Keras)\\protect\\footnotemark[4] \\\\\\cline{3-10}\n& & \\emph{NLNN} & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & Unofficial\\,(Chainer)\\protect\\footnotemark[5] \\\\\\cmidrule{2-10}\n& \\multirow{4}{*}{\\makecell[l]{\\hspace*{0.12cm}Dedicated\\\\Architecture}} \n& \\emph{Probablistic Noise Model} & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & $ \\cellcolor{gray!10}\\bigtriangleup$ & \\cellcolor{blue!10}\\cmark & Official\\,(Caffe)\\protect\\footnotemark[6] \\\\\\cline{3-10}\n& & \\emph{Masking} & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & $ \\cellcolor{gray!10}\\bigtriangleup$ & \\cellcolor{blue!10}\\cmark & Official\\,(TensorFlow)\\protect\\footnotemark[7] \\\\\\cline{3-10}\n& & \\emph{Contrastive-Additive Noise Network} & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark &\\cellcolor{blue!10}\\cmark & $ \\cellcolor{gray!10}\\bigtriangleup$ &\\cellcolor{blue!10}\\cmark & N/A \\\\\\cline{3-10}\n& & {\\emph{RoG}} & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark &\\cellcolor{blue!10}\\cmark & 
\\cellcolor{blue!10}\\cmark &$ \\cellcolor{gray!10}\\bigtriangleup$ & Official\\,(PyTorch)\\protect\\footnotemark[8] \\\\\\midrule\n\\multirow{9}{*}{\\vspace*{-0.25cm}\\hspace*{0.13cm}\\!\\!\\!\\!\\!\\!\\rotatebox[origin=c]{90}{\\makecell[l]{Robust Regularization\\\\\\hspace*{0.8cm}(\\textsection \\ref{sec:robust_regularization})}}\\!\\!\\!\\!\\!} \n& \\multirow{6}{*}{\\makecell[l]{\\hspace*{0.35cm}Explicit\\\\Regularization}} \n& \\emph{Bilevel Learning} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(TensorFlow)\\protect\\footnotemark[9] \\\\\\cline{3-10}\n& & \\emph{Annotator Confusion} & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(TensorFlow)\\protect\\footnotemark[10] \\\\\\cline{3-10}\n& & \\emph{Pre-training} & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(PyTorch)\\protect\\footnotemark[11] \\\\\\cline{3-10}\n& & {\\emph{PHuber}} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{gray!10}$ \\bigtriangleup$ & Unofficial\\,(PyTorch)\\protect\\footnotemark[12] \\\\\\cline{3-10}\n& & {\\emph{Robust Early-learning}} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(PyTorch)\\protect\\footnotemark[13] \\\\\\cline{3-10}\n& & {\\emph{ODLN}} & \\cellcolor{blue!10}\\cmark & 
\\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(PyTorch)\\protect\\footnotemark[14] \\\\\\cmidrule{2-10}\n& \\multirow{3}{*}{\\makecell[l]{\\hspace*{0.35cm}Implicit\\\\Regularization}} \n& \\emph{Adversarial Training} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Unofficial\\,(PyTorch)\\protect\\footnotemark[15] \\\\\\cline{3-10}\n& & \\emph{Label Smoothing} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Unofficial\\,(PyTorch)\\protect\\footnotemark[16] \\\\\\cline{3-10}\n& & \\emph{Mixup} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(PyTorch)\\protect\\footnotemark[17] \\\\\\midrule\n\\multicolumn{2}{c|}{\\multirow{7}{*}{{\\makecell[l]{Robust Loss Function\\\\\\hspace*{0.75cm}(\\textsection \\ref{sec:robust_loss_function})}}}} \n& \\emph{Robust MAE} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & N/A \\\\\\cline{3-10}\n\\multicolumn{2}{c|}{} & \\emph{Generalized Cross Entropy} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & Unofficial\\,(PyTorch)\\protect\\footnotemark[18] \\\\\\cline{3-10}\n\\multicolumn{2}{c|}{} & \\emph{Symmetric Cross Entropy} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & 
\\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & Official\\,(Keras)\\protect\\footnotemark[19] \\\\\\cline{3-10}\n\\multicolumn{2}{c|}{} & {\\emph{Bi-tempered Loss}} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$\\bigtriangleup$ & \\cellcolor{gray!10}$\\bigtriangleup$ & Official\\,(TensorFlow)\\protect\\footnotemark[20] \\\\\\cline{3-10}\n\\multicolumn{2}{c|}{} & \\emph{Curriculum Learning} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$\\bigtriangleup$ & N/A \\\\\\cline{3-10}\n\\multicolumn{2}{c|}{} & {\\emph{Active Passive Loss}} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$\\bigtriangleup$ & Official\\,(PyTorch)\\protect\\footnotemark[21] \\\\\\midrule\n\\multirow{21}{*}{\\vspace*{-0.7cm}\\hspace*{0.13cm}\\!\\!\\!\\!\\!\\!\\rotatebox[origin=c]{90}{\\makecell[l]{Loss Adjustment\\\\\\hspace*{0.5cm}(\\textsection \\ref{sec:loss_adjustment})}}\\!\\!\\!\\!\\!} \n& \\multirow{5}{*}{\\makecell[l]{Loss Correction}} \n& \\emph{Backward Correction} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & Official\\,(Keras)\\protect\\footnotemark[22] \\\\\\cline{3-10}\n& & \\emph{Forward Correction} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & Official\\,(Keras)\\protect\\footnotemark[22] \\\\\\cline{3-10}\n& & \\emph{Gold Loss Correction} & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & 
\\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & Official\\,(PyTorch)\\protect\\footnotemark[23] \\\\\\cline{3-10}\n& & {\\emph{T-revision}} & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & Official\\,(PyTorch)\\protect\\footnotemark[24] \\\\\\cline{3-10}\n& & {\\emph{Dual T}} & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{red!10}\\xmark & N/A \\\\\\cmidrule{2-10}\n& \\multirow{3}{*}{\\makecell[l]{Loss Reweighting}} \n& \\emph{Importance Reweighting} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Unofficial\\,(PyTorch)\\protect\\footnotemark[25] \\\\\\cline{3-10}\n& & \\emph{Active Bias} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Unofficial\\,(TensorFlow)\\protect\\footnotemark[26]\\!\\! 
\\\\\\cline{3-10}\n& & {\\emph{DualGraph}} & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & N/A \\\\\\cmidrule{2-10}\n& \\multirow{6}{*}{\\makecell[l]{Label Refurbishment}} \n& \\emph{Bootstrapping} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Unofficial\\,(Keras)\\protect\\footnotemark[27] \\\\\\cline{3-10}\n& & \\emph{Dynamic Bootstrapping} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(PyTorch)\\protect\\footnotemark[28] \\\\\\cline{3-10}\n& & {\\emph{Self-adaptive Training}} & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(PyTorch)\\protect\\footnotemark[29] \\\\\\cline{3-10}\n& & \\emph{D2L} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(Keras)\\protect\\footnotemark[30] \\\\\\cline{3-10}\n& & {\\emph{AdaCorr}} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(PyTorch)\\protect\\footnotemark[31] \\\\\\cline{3-10}\n& & {\\emph{SEAL}} & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & Official\\,(PyTorch)\\protect\\footnotemark[32] 
\\\\\\cmidrule{2-10}\n& \\multirow{7}{*}{\\makecell[l]{Meta Learning}} \n& \\emph{L2LWS} & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{gray!10}$ \\bigtriangleup$ & Unofficial\\,(TensorFlow)\\protect\\footnotemark[33]\\!\\! \\\\\\cline{3-10}\n& & \\emph{CWS} & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{gray!10}$ \\bigtriangleup$ & N/A \\\\\\cline{3-10}\n& & \\emph{Automatic Reweighting} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(TensorFlow)\\protect\\footnotemark[34] \\\\\\cline{3-10}\n& & \\emph{Meta-weight-net} & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(PyTorch)\\protect\\footnotemark[35] \\\\\\cline{3-10}\n& & {\\emph{Data Coefficients}} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(TensorFlow)\\protect\\footnotemark[36] \\\\\\cline{3-10}\n& & \\emph{Knowledge Distillation} & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{gray!10}$ \\bigtriangleup$ & N/A \\\\\\cline{3-10}\n& & {\\emph{MLC}} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{blue!10}\\cmark 
& Official\\,(PyTorch)\\protect\\footnotemark[37] \\\\\\midrule\n\\multirow{13}{*}{\\vspace*{-0.45cm}\\hspace*{0.13cm}\\!\\!\\!\\!\\!\\!\\rotatebox[origin=c]{90}{\\makecell[l]{Sample Selection\\\\\\hspace*{0.5cm}(\\textsection \\ref{sec:sample_selection})}}\\!\\!\\!\\!\\!} \n& \\multirow{4}{*}{\\makecell[l]{Multi-Network\\\\\\hspace*{0.33cm}Learning}} \n& \\emph{Decouple} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(TensorFlow)\\protect\\footnotemark[38] \\\\\\cline{3-10}\n& & \\emph{MentorNet} & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(TensorFlow)\\protect\\footnotemark[39] \\\\\\cline{3-10}\n& & \\emph{Co-teaching} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(PyTorch)\\protect\\footnotemark[40] \\\\\\cline{3-10}\n& & \\emph{Co-teaching+} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(PyTorch)\\protect\\footnotemark[41] \\\\\\cline{3-10}\n& & {\\emph{JoCoR}} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(PyTorch)\\protect\\footnotemark[42] \\\\\\cmidrule{2-10}\n& \\multirow{6}{*}{\\makecell[l]{Multi-Round\\\\\\hspace*{0.23cm}Learning}} \n& \\emph{ITLM} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & 
\\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(GluonCV)\\protect\\footnotemark[43] \\\\\\cline{3-10}\n& & \\emph{INCV} & \\cellcolor{blue!10}\\cmark &\\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(Keras)\\protect\\footnotemark[44] \\\\\\cline{3-10}\n& & {\\emph{O2U-Net}} & \\cellcolor{blue!10}\\cmark &\\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Unofficial\\,(PyTorch)\\protect\\footnotemark[45] \\\\\\cline{3-10}\n& & \\emph{Iterative Detection} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(Keras)\\protect\\footnotemark[46] \\\\\\cline{3-10}\n& & {\\emph{MORPH}} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & N/A \\\\\\cline{3-10}\n& & {\\emph{TopoFilter}} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(PyTorch)\\protect\\footnotemark[47] \\\\\\cline{3-10}\n& & {\\emph{NGC}} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & N/A\\\\\\cmidrule{2-10}\n& \\multirow{3}{*}{\\vspace*{-0.24cm}\\makecell[l]{Hybrid Approach}} \n& \\emph{SELFIE} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ 
\\bigtriangleup$ & Official\\,(TensorFlow)\\protect\\footnotemark[48] \\\\\\cline{3-10}\n& & \\emph{SELF} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & N/A \\\\\\cline{3-10}\n& & \\emph{DivideMix} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & Official\\,(PyTorch)\\protect\\footnotemark[49] \\\\\\cline{3-10}\n& & {\\emph{RoCL}} & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & N/A \\\\\\bottomrule\n\\end{tabular}\n\\label{table:all_comparision}\n\\vspace*{-4cm}\n\\end{table*}\n}\n\\clearpage\n{\n\\footnotetext[1]{\\url{https://github.com/endernewton/webly-supervised}}\n\\footnotetext[2]{\\url{https://github.com/delchiaro/training-cnn-noisy-labels-keras}}\n\\footnotetext[3]{\\url{https://github.com/ijindal/Noisy_Dropout_regularization}}\n\\footnotetext[4]{\\url{https://github.com/udibr/noisy_labels}}\n\\footnotetext[5]{\\url{https://github.com/Ryo-Ito/Noisy-Labels-Neural-Network}}\n\\footnotetext[6]{\\url{https://github.com/Cysu/noisy_label}}\n\\footnotetext[7]{\\url{https://github.com/bhanML/Masking}}\n\\footnotetext[8]{\\url{https://github.com/pokaxpoka/RoGNoisyLabel}}\n\\footnotetext[9]{\\url{https://github.com/sjenni/DeepBilevel}}\n\\footnotetext[10]{\\url{https://rt416.github.io/pdf/trace_codes.pdf}}\n\\footnotetext[11]{\\url{https://github.com/hendrycks/pre-training}}\n\\footnotetext[12]{\\url{https://github.com/dmizr/phuber}}\n\\footnotetext[13]{\\url{https://github.com/xiaoboxia/CDR}}\n\\footnotetext[14]{\\url{https://github.com/hongxin001/ODNL}}\n\\footnotetext[15]{\\url{https://github.com/sarathknv/adversarial-
examples-pytorch}}\n\\footnotetext[16]{\\url{https://github.com/CoinCheung/pytorch-loss}}\n\\footnotetext[17]{\\url{https://github.com/facebookresearch/mixup-cifar10}}\n\\footnotetext[18]{\\url{https://github.com/AlanChou/Truncated-Loss}}\n\\footnotetext[19]{\\href{https://github.com/YisenWang/symmetric\\_cross\\_entropy\\_for\\_noisy_label}{https://github.com/YisenWang/symmetric\\_cross\\_entropy}}\n\\footnotetext[20]{\\url{https://github.com/google/bi-tempered-loss}}\n\\footnotetext[21]{\\url{https://github.com/HanxunH/Active-Passive-Losses}}\n\\footnotetext[22]{\\url{https://github.com/giorgiop/loss-correction}}\n\\footnotetext[23]{\\url{https://github.com/mmazeika/glc}}\n\\footnotetext[24]{\\url{https://github.com/xiaoboxia/T-Revision}}\n\\footnotetext[25]{\\href{https://github.com/xiaoboxia/Classification-with-noisy-labels-by-importance-reweighting}{https://github.com/xiaoboxia/Classification-with-noisy-labels}}\n\\newcolumntype{L}[1]{>{\\centering\\let\\newline\\\\\\arraybackslash\\hspace{0pt}}m{#1}}\n\\newcolumntype{X}[1]{>{\\centering\\let\\newline\\\\\\arraybackslash\\hspace{0pt}}p{#1}}\n\\begin{table*}[!htb]\n\\centering\n\\vspace*{-0.1cm}\n\\caption{Comparison of robust deep learning categories for overcoming noisy labels.}\n\\vspace*{-0.2cm}\n\\begin{tabular}{L{2.2cm} | L{2.85cm}|X{1.5cm}|X{1.55cm}|X{1.8cm}|X{1.65cm}|X{1.5cm}|X{1.6cm}}\n\\toprule\n\\multicolumn{2}{>{}c|}{\\multirow{2}{*}{\\textbf{Category}}} & \\textbf{P1} & \\textbf{P2} & \\textbf{P3} & \\textbf{P4} & \\textbf{P5} & \\textbf{P6} \\\\\n\\multicolumn{2}{>{}c|}{} & \\!\\!\\!{Flexibility}\\!\\!\\! & \\!\\!\\!{No Pre-train}\\!\\!\\! & \\!\\!\\!{Full Exploration}\\!\\!\\! & \\!\\!\\!{No Supervision}\\!\\!\\! & \\!\\!\\!{Heavy Noise}\\!\\!\\! & \\!\\!\\!{Complex Noise}\\!\\!\\! \\\\\\midrule\n\\multirow{2}{*}\n{\\makecell[l]{\\!\\!\\!Robust Architecture\\!\\!\\! 
\\\\\\hspace*{0.5cm}(\\textsection \\ref{sec:robust_architecture})}}\n& \\!\\!Noise Adaptation Layer & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark \\\\\n\\cline{2-8}\n& \\!\\!Dedicated Architecture & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{blue!10}\\cmark\\\\ \\midrule\n\\multirow{2}{*}\n{\\makecell[l]{\\!\\!\\!\\!\\!\\!Robust Regularization\\!\\!\\!\\!\\!\\! \\\\\\hspace*{0.5cm}(\\textsection \\ref{sec:robust_regularization})}}\n& \\!\\!Implicit Regularization & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{gray!10}$ \\bigtriangleup$ \\\\\n\\cline{2-8}\n& \\!\\!Explicit Regularization & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ \\\\ \\midrule\n\\multicolumn{2}{>{}c|}{\\hspace*{0cm}Robust Loss Function (\\textsection \\ref{sec:robust_loss_function}) }& \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark \\\\\\midrule\n\\multirow{2}{*}\n{\\vspace*{-0.58cm}\\!\\makecell[l]{Loss Adjustment \\\\\\hspace*{0.52cm}(\\textsection \\ref{sec:loss_adjustment})}}\n& \\!\\!Loss Correction &\\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark &\\cellcolor{red!10}\\xmark \\\\\n\\cline{2-8}\n& \\!\\!Loss Reweighting & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & 
\\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$\\\\\n\\cline{2-8}\n& \\!\\!Label Refurbishment & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{gray!10}$ \\bigtriangleup$\\\\\\cline{2-8}\n& \\!\\!Meta Learning & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{gray!10}$ \\bigtriangleup$ & \\cellcolor{gray!10}$ \\bigtriangleup$ \\\\ \\midrule\n{\\vspace*{-0.55cm}\\hspace*{0.075cm}\\makecell[l]{Sample Selection\\\\\\hspace*{0.55cm}(\\textsection \\ref{sec:sample_selection})}}\n& \\!\\!Multi-Network Learning &\\cellcolor{blue!10}\\cmark &\\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$ \\\\\n\\cline{2-8}\n& \\!\\!Multi-Round Learning & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{red!10}\\xmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$\\\\\\cline{2-8}\n& \\!\\!Hybrid Approach & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{blue!10}\\cmark & \\cellcolor{gray!10}$ \\bigtriangleup$\\\\\\bottomrule\n\\end{tabular}\n\\label{table:direction_comparison}\n\\vspace*{-0.4cm}\n\\end{table*}\n}\nMeanwhile, \\emph{SELFIE} is a hybrid approach of sample selection and loss correction. The loss of refurbishable examples is corrected (i.e., loss correction) and then used together with that of small-loss examples (i.e., sample selection). 
Consequently, more training examples are considered for updating the DNN.\nThe \\emph{curriculum loss\\,(CL)} is combined with the robust loss function approach and used to extract the true-labeled examples from noisy data.\n\\smallskip\n\\noindent \\underline{Remark}:\nNoise robustness is significantly improved by combining with other techniques. However, the hyperparameters introduced by these techniques render a DNN more susceptible to changes in data and noise types, and an increase in computational cost is inevitable", "id": "3b9e7951-da54-4090-b74a-113ced54d832", "level": "subsubsection", "origin_cites_number": 62, "parent_id": "54d73157-e47b-4943-9b30-b518e1806946", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Deep Learning Approaches" ], [ "subsection", "Sample Selection" ], [ "subsubsection", "Hybrid Approach" ] ], "subsections": [], "title": "Hybrid Approach" }, { "cite_extract_rate": 0.5, "cites": [ 3340 ], "content": "\\label{sec:comparison}\nIn this section, we compare the {$62$} deep learning methods for overcoming noisy labels introduced in Section \\ref{sec:methodology} with respect to the following \\emph{six} properties. When selecting the properties, we refer to the properties that are typically used to compare the performance of robust deep learning methods . To the best of our knowledge, this survey is the first to provide a systematic comparison of robust training methods. \nThis comprehensive comparison will provide useful insights that can enlighten new future directions.\n\\vspace*{0.1cm}\n\\begin{itemize}[leftmargin=9pt]\n\\item \\textbf{(P1)\\,Flexibility:} With the rapid evolution of deep learning research, a number of new network architectures are constantly emerging and becoming available. Hence, the ability to support any type of architecture is important. 
\\enquote{Flexibility} ensures that the proposed method can quickly adapt to the state-of-the-art architecture.\n\\vspace*{0.1cm}\n\\item \\textbf{(P2)\\,No Pre-training:} A typical approach to improve noise robustness is to use a pre-trained network; however, this incurs an additional computational cost in the learning process. \\enquote{No Pre-training} ensures that the proposed method can be trained from scratch without any pre-training.\n\\vspace*{0.1cm}\n\\item \\textbf{(P3)\\,Full Exploration:} Excluding unreliable examples from the update is an effective method for robust deep learning; however, it eliminates hard but useful training examples as well. \\enquote{Full Exploration} ensures that the proposed methods can use \\emph{all} training examples without severe overfitting to false-labeled examples by adjusting their training losses or applying semi-supervised learning.\n\\vspace*{0.1cm}\n\\item \\textbf{(P4)\\,No Supervision:} Learning with supervision, such as a clean validation set or a known noise rate, is often impractical because such resources are difficult to obtain. Hence, such supervision should be avoided to increase practicality in real-world scenarios. \\enquote{No Supervision} ensures that the proposed methods can be trained without any supervision.\n\\vspace*{0.1cm}\n\\item \\textbf{(P5)\\,Heavy Noise:} In real-world noisy data, the noise rate can vary from light to heavy. Hence, learning methods should achieve consistent noise robustness with respect to the noise rate. 
\\enquote{Heavy Noise} ensures that the proposed methods can combat even heavy noise.\n\\end{itemize}\n{\n\\footnotetext[26]{\\url{https://github.com/songhwanjun/ActiveBias}}\n\\footnotetext[27]{\\url{https://github.com/dr-darryl-wright/Noisy-Labels-with-Bootstrapping}}\n\\footnotetext[28]{\\url{https://github.com/PaulAlbert31/LabelNoiseCorrection}}\n\\footnotetext[29]{\\url{https://github.com/LayneH/self-adaptive-training}}\n\\footnotetext[30]{\\url{https://github.com/xingjunm/dimensionality-driven-learning}}\n\\footnotetext[31]{\\url{https://github.com/pingqingsheng/LRT}}\n\\footnotetext[32]{\\url{https://github.com/chenpf1025/IDN}}\n\\footnotetext[33]{\\url{https://github.com/krayush07/learn-by-weak-supervision}}\n\\footnotetext[34]{\\url{https://github.com/uber-research/learning-to-reweight-examples}}\n\\footnotetext[35]{\\url{https://github.com/xjtushujun/meta-weight-net}}\n\\footnotetext[36]{\\url{https://github.com/google-research/google-research/tree/master/ieg}}\n\\footnotetext[37]{\\url{https://aka.ms/MLC}}\n\\footnotetext[38]{\\url{https://github.com/emalach/UpdateByDisagreement}}\n\\footnotetext[39]{\\url{https://github.com/google/mentornet}}\n\\footnotetext[40]{\\url{https://github.com/bhanML/Co-teaching}}\n\\footnotetext[41]{\\url{https://github.com/bhanML/coteaching_plus}}\n\\footnotetext[42]{\\url{https://github.com/hongxin001/JoCoR}}\n\\footnotetext[43]{\\url{https://github.com/yanyao-shen/ITLM-simplecode}}\n\\footnotetext[44]{\\url{https://github.com/chenpf1025/noisy_label_understanding_utilizing}}\n\\footnotetext[45]{\\url{https://github.com/hjimce/O2U-Net}}\n\\footnotetext[46]{\\url{https://github.com/YisenWang/Iterative_learning}}\n\\footnotetext[47]{\\url{https://github.com/pxiangwu/TopoFilter}}\n\\footnotetext[48]{\\url{https://github.com/kaist-dmlab/SELFIE}}\n\\footnotetext[49]{\\url{https://github.com/LiJunnan1992/DivideMix}}\n}\n\\begin{itemize}[leftmargin=9pt]\n\\item \\textbf{(P6)\\,Complex Noise:} The type of label noise 
significantly affects the performance of a learning method. To manage real-world noisy data, diverse types of label noise should be considered when designing a robust training method. \\enquote{Complex Noise} ensures that the proposed method can combat even complex label noise.\n\\end{itemize}\n\\vspace*{0.05cm}\nTable \\ref{table:all_comparision} shows a comparison of all robust deep learning methods, which are grouped according to the most appropriate category. In the first row, the aforementioned six properties are labeled as P1--P6, and the availability of an open-source implementation is added in the last column. For each property, we assign \\enquote{\\cmark} if it is completely supported, \\enquote{\\xmark} if it is not supported, and \\enquote{$\\bigtriangleup$} if it is supported but not completely. \nMore specifically, \\enquote{$\\bigtriangleup$} is assigned to P1 if the method can be flexible but requires additional effort, to P5 if the method can combat only moderate label noise, {and to P6 if the method does not make a strict assumption about the noise type but does not explicitly model instance-dependent noise. Thus, for P6, the method marked with \\enquote{\\xmark} only deals with instance-independent noise, while the method marked with \\enquote{\\cmark} deals with both instance-independent and -dependent noise.} The remaining properties\\,(i.e., P2, P3, and P4) are only assigned \\enquote{\\cmark} or \\enquote{\\xmark}. Regarding the implementation, we assign \\enquote{N/A} if the source code is not publicly available.\nNo existing method supports all the properties. Each method achieves noise robustness by supporting a different combination of the properties. The supported properties are similar among the methods of the same (sub-)category because those methods share the same methodological philosophy; however, they differ significantly depending on the (sub-)category. 
Therefore, we investigate the properties generally supported in each (sub-)category and summarize them in Table \ref{table:direction_comparison}. Here, each property of a (sub-)category is marked according to the majority of its methods. If no clear trend is observed among those methods, the property is marked \enquote{$\bigtriangleup$}.
\label{sec:noise_rate}
{
Estimating the noise rate is an essential step in putting robust methods to practical use, especially for the approaches belonging to loss adjustment and sample selection. The estimated noise rate is widely used to reweight examples for a robust classifier or to determine how many examples should be selected as clean ones . However, although many robust approaches rely heavily on the accuracy of noise rate estimation, this estimation step has rarely been analyzed in detail. The noise rate can be estimated by exploiting the inferred noise transition matrix , the Gaussian mixture model , or cross-validation .
}
\vspace*{-0.15cm}
{
The noise transition matrix has been used to build a statistically consistent robust classifier because it represents the class posterior probabilities for noisy and clean data, as in Eq.\,\eqref{eq:label_transition_matrix}.
The first method estimates the noise rate by exploiting this noise transition matrix, which can be inferred or trained accurately by using perfectly clean examples, i.e., \emph{anchor points} ; an example $x$ with label $i$ is defined as an anchor point if $p(y=i|x)=1$ and $p(y=k|x)=0$ for $k \neq i$. Thus, letting $\mathcal{A}_i$ be the set of anchor points with label $i$, the element $T_{ij}$ of the noise transition matrix is estimated by \looseness=-1
\begin{equation}
\begin{split}
\hat{T}_{ij} &= \frac{1}{|\mathcal{A}_{i}|}\sum_{x\in\mathcal{A}_{i}} \sum_{k=1}^{c} p(\tilde{y}=j|y=k)p(y=k|x) \\
&= \frac{1}{|\mathcal{A}_{i}|}\sum_{x\in\mathcal{A}_{i}} p(\tilde{y}=j|x; \Theta), 
\end{split}
\end{equation}
where $p(\tilde{y}=j|x; \Theta)$ is the noisy class posterior probability, for the anchor point $x$, of the classifier trained on the noisy training data (see the detailed proof in ). Next, based on the inferred noise transition matrix, the noise rate of balanced training data is obtained by averaging the label transition probabilities between classes,
\begin{equation}
\hat{\tau} = \frac{1}{c} \sum_{i=1}^{c} \sum_{j\neq i}^{c} p(\tilde{y}=j | {y}=i) = \frac{1}{c} \sum_{i=1}^{c} \sum_{j\neq i}^{c} \hat{T}_{ij}.
\end{equation}
However, since the anchor points are typically unknown in real-world data, they are identified from noisy training data by either theoretical derivations or heuristics . 
In addition, there have been recent efforts to learn the noise transition matrix without anchor points. \emph{T-Revision} initializes a transition matrix by exploiting the examples with high noisy class posterior probabilities and then refines the matrix by adding a slack variable. \emph{Dual-T} introduces an intermediate class that factorizes the transition matrix into two easy-to-estimate matrices for better accuracy.
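Going back to the anchor-point formulation, the estimates $\hat{T}_{ij}$ and $\hat{\tau}$ in the two equations above can be sketched in a few lines (a minimal numpy sketch; the function name and data layout are our assumptions, and the posteriors are taken from a classifier already trained on the noisy data):

```python
import numpy as np

def estimate_transition_and_tau(anchor_posteriors):
    """Anchor-point estimate of the noise transition matrix and noise rate.

    anchor_posteriors: list of length c; entry i is an array of shape
        (|A_i|, c) holding the noisy class posteriors p(y~ = j | x; Theta)
        for the anchor points of class i.
    Returns (T_hat, tau_hat): T_hat[i, j] averages the posteriors over A_i,
    and tau_hat averages the off-diagonal mass of T_hat row by row
    (assuming balanced classes).
    """
    c = len(anchor_posteriors)
    T_hat = np.stack([np.asarray(p, dtype=float).mean(axis=0)
                      for p in anchor_posteriors])
    tau_hat = float(np.mean([T_hat[i].sum() - T_hat[i, i] for i in range(c)]))
    return T_hat, tau_hat
```

With a perfectly calibrated classifier and true anchor points, $\hat{T}$ recovers the underlying transition matrix exactly and $\hat{\tau}$ its average off-diagonal mass.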
\\emph{VolMinNet} realizes an end-to-end framework and relaxes the need for anchor points under the sufficiently scattered assumption.\n}\n{\n\\newcolumntype{L}[1]{>{\\centering\\let\\newline\\\\\\arraybackslash\\hspace{0pt}}m{#1}}\n\\newcolumntype{X}[1]{>{\\centering\\let\\newline\\\\\\arraybackslash\\hspace{0pt}}p{#1}}\n\\newcolumntype{Y}[1]{>{\\let\\newline\\\\\\arraybackslash\\hspace{1pt}}m{#1}}\n\\begin{table*}[t!]\n\\small\n\\caption{Summary of publicly available datasets used for studying label noise.}\n\\vspace*{-0.7cm}\n\\begin{center}\n\\begin{tabular}{L{2.0cm} | Y{3.4cm} |X{1.9cm}| X{1.9cm} |X{1.9cm} |X{1.9cm} |Y{2.1cm}}\\toprule \n\\multicolumn{2}{c|}{\\textbf{Dataset}} & \\textbf{\\# Training} & \\textbf{\\# Validation} & \\textbf{\\# Testing} & \\textbf{\\# Classes} & \\textbf{Noise Rate\\,(\\%)} \\\\\\hline\n\\multirow{7}{*}{\\hspace*{0.13cm}\\!\\!\\!\\!\\!\\!{\\makecell[l]{Clean Data}}\\!\\!\\!\\!\\!} \n& {MNIST} \\footnote[50] & {60K}& N/A & {10K} & $10$ & $\\approx0.0$ \\\\\\cline{2-7}\n& {Fashion-MNIST} \\footnote[51]\\!\\!\\!\\!\\! & {60K}& N/A & {10K} & $10$ & $\\approx0.0$ \\\\\\cline{2-7}\n& {CIFAR-10} \\footnote[52] & {50K}& N/A & {10K} & $10$ & $\\approx0.0$ \\\\\\cline{2-7}\n& {CIFAR-100} \\footnote[52] & {50K}& N/A & {10K} & $100$ & $\\approx0.0$ \\\\\\cline{2-7}\n& {SVHN} \\footnote[53] & {73K}& N/A & {26K} & $10$ & $\\approx0.0$ \\\\\\cline{2-7}\n& {Tiny-ImageNet} \\footnote[55] & {100K}& {10K} & {10K} & $200$ & $\\approx0.0$ \\\\\\cline{2-7}\n& {ImageNet} \\footnote[54] & {1.3M}& {50K} & {50K} & $1000$ & $\\approx0.0$ \\\\\\midrule\n\\multirow{6}{*}{\\hspace*{0.13cm}\\!\\!\\!\\!\\!\\!{\\makecell[l]{Real-world\\\\\\hspace*{-0.02cm}Noisy Data}}\\!\\!\\!\\!\\!} \n& {ANIMAL-10N} \\footnote[56] & {50K}& N/A & {5K} & $10$ & $\\approx8.0$ \\\\\\cline{2-7}\n& {CIFAR-10N }\\footnote[57] & {50K}& N/A & {10K} & $10$ & $\\approx9.0/18.0/40.2$\\!\\!\\!\\!\\!\\!\\! 
\\\\\\cline{2-7}\n& {CIFAR-100N }\\footnote[57] & {50K}& N/A & {10K} & $100$ & $\\approx 25.6/40.2$ \\\\\\cline{2-7}\n& {Food-101N} \\footnote[58] & {310K}& {5K} & {25K} & $101$ & $\\approx18.4$ \\\\\\cline{2-7}\n& {Clothing1M} \\footnote[59] & {1M}& {14K} & {10K} & $14$ & $\\approx38.5$ \\\\\\cline{2-7}\n& {WebVision} \\footnote[60] & {2.4M}& {50K} & {50K} & $1000$ & $\\approx20.0$ \\\\\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{table:summary_dataset}\n\\vspace*{-0.6cm}\n\\end{table*}\n}\n\\vspace*{-0.15cm}", "id": "841668d4-4892-46d7-ae08-7e210e7c2add", "level": "subsection", "origin_cites_number": 18, "parent_id": "8d7ed0bf-5e07-45ed-a147-190ba1a209fb", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Noise Rate Estimation" ], [ "subsection", "Noise Transition Matrix" ] ], "subsections": [], "title": "Noise Transition Matrix" }, { "cite_extract_rate": 1, "cites": [ 4135 ], "content": "\\label{sec:gmm}\n\\begin{figure}[t!]\n\\begin{center}\n\\vspace*{0.1cm}\n\\includegraphics[width=8.8cm]{figures/evaluation/gmm_analysis.pdf}\n\\end{center}\n\\vspace*{-0.15cm}\n\\hspace*{1.3cm} \\small{(a) Symmetric Noise.} \\hspace*{1.2cm} \\small{(b) Asymmetric Noise.} \n\\vspace*{-0.15cm}\n\\caption{Training loss distributions of true-labeled and false-labeled examples using the ground-truth label and the GMM on CIFAR-100 data with two synthetic noises of $40\\%$.}\n\\label{fig:loss_gmm}\n\\vspace*{-0.4cm}\n\\end{figure}\nThe second method is exploiting a one-dimensional and two-component GMM to model the loss distribution of true-labeled and false-labeled examples . As shown in Figure \\ref{fig:loss_gmm}, since the loss distribution tends to be \\emph{bi-modal}, the two Gaussian components are fitted to the training loss by using the EM algorithm; the probability of an example being a false-labeled one is obtained through its posterior probability. 
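This fitting step can be sketched as follows (a minimal EM implementation for a one-dimensional, two-component mixture written with numpy; the function name and the quartile-based initialization are our assumptions):

```python
import numpy as np

def estimate_tau_gmm(losses, n_iter=200):
    """Fit a two-component 1-D Gaussian mixture to per-example training
    losses with EM; the posterior of the larger-mean component is treated
    as the probability of being false-labeled, and its average over the
    training set gives the estimated noise rate."""
    x = np.asarray(losses, dtype=float)
    # Crude but effective initialization: low/high loss quartiles.
    mu = np.array([np.percentile(x, 25.0), np.percentile(x, 75.0)])
    sigma = np.full(2, x.std() + 1e-6)
    w = np.full(2, 0.5)  # mixing weights
    for _ in range(n_iter):
        # E-step: responsibility of each component for each example.
        dens = (w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
                / (sigma * np.sqrt(2.0 * np.pi)))
        resp = dens / (dens.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: update weights, means, and standard deviations.
        nk = resp.sum(axis=0)
        w = nk / nk.sum()
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    g = int(np.argmax(mu))  # component with the larger mean loss
    return float(resp[:, g].mean())
```

Averaging the posterior of the larger-mean component over the training set then yields the estimate $\hat{\tau}$ formalized next.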
Hence, the noise rate is estimated at each epoch $t$ by computing the expectation of the posterior probability over all training examples, \looseness=-1
\begin{equation}
\label{eq:naive_gmm}
\begin{gathered}
\hat{\tau} = \mathbb{E}_{(x,\tilde{y})\in\tilde{\mathcal{D}}}\Big[\,p\big(g\,|\,\ell\big(f(x;\Theta_{t}), \tilde{y}\big)\big)\,\Big],\\ 
\end{gathered}
\end{equation}
where $g$ is the Gaussian component with the larger mean loss. However, Pleiss et al. recently pointed out that the training loss becomes less separable by the GMM as the training progresses, and thus proposed the \emph{area under the loss}\,(AUL) curve, which is the sum of the example's training losses accumulated over all previous training epochs. Even after the loss signal decays in later epochs, the distributions remain separable. Therefore, the noise rate is finally estimated by
\begin{equation}
\label{eq:aul_gmm}
\begin{gathered}
\hat{\tau} = \mathbb{E}_{(x,\tilde{y})\in\tilde{\mathcal{D}}}\Big[\,p\big(g\,|\,{\rm AUL}_{t}(x,\tilde{y})\big)\,\Big],\\ 
\text{where}~ \text{AUL}_{t}(x,\tilde{y})= \sum_{i=1}^{t}\ell\big(f(x;\Theta_i),\tilde{y}\big).
\end{gathered}
\end{equation}
\vspace*{-0.4cm}
\label{sec:cross_val}
The third method estimates the noise rate by applying cross-validation, which typically requires clean validation data . However, such clean validation data is hard to acquire in real-world applications. Thus, Chen et al.
leveraged two randomly divided noisy training datasets for cross-validation. Under the assumption that the two datasets share exactly the same noise transition matrix, the test accuracy of DNNs that are respectively trained and tested on the two divided sets is determined by the noise rate,
\begin{equation}
\label{eq:cross_val}
\!\!\!{\rm Test\,Accuracy} \!= \!\!\!\
\begin{cases}
(1 - \hat{\tau})^{2} + \hat{\tau}^{2} / (c - 1) \!\!&\!\! \text{if symmetric}\!\!\!\!\\
(1 - \hat{\tau})^{2} + \hat{\tau}^{2} \!\!& \!\!\text{if asymmetric}.\!\!\!\!
\end{cases}
\end{equation}
Therefore, the noise rate is estimated by inverting this relation given the test accuracy obtained from cross-validation.
\begin{comment}
\nOverall, this empirical analysis will be helpful for practitioners or researchers who design robust algorithms for noisy labels.\n\\begin{figure}[!t]\n\\vspace*{0.0cm}\n\\begin{minipage}[t]{1.0\\linewidth}\n\\begin{center}\n\\includegraphics[width=8.8cm]{figures/evaluation/noise_rate_symmetric.pdf}\n\\end{center}\n\\vspace*{-0.15cm}\n\\hspace*{2.1cm} \\small{(a) GMM.} \\hspace*{2.0cm} \\small{(b) Cross Validation.}\n\\vspace*{-0.2cm}\n\\caption{Noise rate estimation using the Gaussian mixture model and cross validation when training a WideResNet-16-8 on three datasets with symmetric noise.}\n\\label{fig:noise_rate_symmetric}\n\\end{minipage}\n\\vspace*{-0.35cm}\n\\end{figure}\n\\end{comment}\n\\vspace*{-0.15cm}", "id": "685f1250-3a8b-48b6-b510-042be23d64f6", "level": "subsection", "origin_cites_number": 0, "parent_id": "8d7ed0bf-5e07-45ed-a147-190ba1a209fb", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Noise Rate Estimation" ], [ "subsection", "Comparison of Noise Rate Estimation" ] ], "subsections": [], "title": "Comparison of Noise Rate Estimation" }, { "cite_extract_rate": 0, "cites": [], "content": "This section describes the typically used experimental design for comparing robust training methods in the presence of label noise. 
We introduce publicly available image datasets and then describe widely-used evaluation metrics.\n\\vspace*{-0.3cm}", "id": "0788ae52-30c4-4156-89f9-6be2e443db83", "level": "section", "origin_cites_number": 0, "parent_id": "4c23e240-6b38-42a9-a3ad-77c4f677661a", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Experimental Design" ] ], "subsections": [ "b05d418b-8b51-4e39-b585-58d8b1bcd7ff", "6b4a1fff-c3c1-412a-b6f0-eeaa7d3721a4" ], "title": "Experimental Design" }, { "cite_extract_rate": 0, "cites": [], "content": "To validate the robustness of the proposed algorithms, an image classification task was widely conducted on numerous image benchmark datasets. Table \\ref{table:summary_dataset} summarizes popularly-used public benchmark datasets, which are classified into two categories: \\emph{1)} a \\enquote{clean dataset} that consists of mostly true-labeled examples annotated by human experts and \\emph{2)} a \\enquote{real-world noisy dataset} that comprises real-world noisy examples with varying numbers of false labels.\n\\vspace*{0.15cm}", "id": "b05d418b-8b51-4e39-b585-58d8b1bcd7ff", "level": "subsection", "origin_cites_number": 0, "parent_id": "0788ae52-30c4-4156-89f9-6be2e443db83", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Experimental Design" ], [ "subsection", "Publicly Available Datasets" ] ], "subsections": [ "0c8efb50-e5a7-4707-a3b7-dcff2ec802f5", "2e98f74b-5745-4f37-aea0-711f64e9e6c5" ], "title": "Publicly Available Datasets" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 4253, 8630, 652 ], "content": "According to the literature , \\emph{seven} clean datasets are widely used: MNIST\\footnote[50]{\\url{http://yann.lecun.com/exdb/mnist}}, classification of handwritten digits ; Fashion-MNIST\\footnote[51]{\\url{https://github.com/zalandoresearch/fashion-mnist}}, classification of various clothing ; 
CIFAR-10\\footnote[52]{\\url{https://www.cs.toronto.edu/~kriz/cifar.html}} and CIFAR-100\\footnotemark[52], classification of a subset of $80$ million categorical images ; SVHN\\footnote[53]{\\url{http://ufldl.stanford.edu/housenumbers}}, classification of house numbers in Google Street view images ; ImageNet\\footnote[54]{\\url{http://www.image-net.org}} and Tiny-ImageNet\\footnote[55]{\\url{https://www.kaggle.com/c/tiny-imagenet}}, image database organized according to the WordNet hierarchy and its small subset . Because the labels in these datasets are almost all true-labeled, their labels in the training data should be artificially corrupted for the evaluation of synthetic noises, namely \\emph{symmetric} noise and \\emph{asymmetric} noise. \n\\vspace*{0.15cm}", "id": "0c8efb50-e5a7-4707-a3b7-dcff2ec802f5", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "b05d418b-8b51-4e39-b585-58d8b1bcd7ff", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Experimental Design" ], [ "subsection", "Publicly Available Datasets" ], [ "subsubsection", "Clean Datasets" ] ], "subsections": [], "title": "Clean Datasets" }, { "cite_extract_rate": 0.5, "cites": [ 4238, 4161, 7769 ], "content": "Unlike the clean datasets, real-world noisy datasets inherently contain many mislabeled examples annotated by non-experts. According to the literature , \\emph{six} real-world noisy datasets are widely used: ANIMAL-10N\\footnote[56]{\\url{https://dm.kaist.ac.kr/datasets/animal-10n}}, real-world noisy data of human-labeled online images for 10 confusing animals ;\n{CIFAR-10N\\footnote[57]{\\url{http://noisylabels.com/}} and CIFAR-100N\\footnotemark[57], variations of CIFAR-10 and CIFAR-100 with human-annotated real-world noisy labels collected from Amazon’s Mechanical Turk . 
They provide human labels with different noise rates, as shown in Table \\ref{table:summary_dataset};}\nFood-101N\\footnote[58]{\\url{https://kuanghuei.github.io/Food-101N}}, real-world noisy data of crawled food images annotated by their search keywords in the Food-101 taxonomy ; Clothing1M\\footnote[59]{\\url{https://www.floydhub.com/lukasmyth/datasets/clothing1m}}, real-world noisy data of large-scale crawled clothing images from several online shopping websites ; WebVision\\footnote[60]{\\url{https://data.vision.ee.ethz.ch/cvl/webvision/download.html}}, real-world noisy data of large-scale web images crawled from Flickr and Google Images search . To support sophisticated evaluation, most real-world noisy datasets contain their own clean validation set and provide the estimated noise rate of their training set. \\looseness=-1", "id": "2e98f74b-5745-4f37-aea0-711f64e9e6c5", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "b05d418b-8b51-4e39-b585-58d8b1bcd7ff", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Experimental Design" ], [ "subsection", "Publicly Available Datasets" ], [ "subsubsection", "Real-world Noisy Datasets" ] ], "subsections": [], "title": "Real-world Noisy Datasets" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 3340, 4126, 3630 ], "content": "A typical metric to assess the robustness of a particular method is the prediction accuracy for unbiased and clean examples that are not used in training. The prediction accuracy degrades significantly if the DNN overfits to false-labeled examples . Hence, \\emph{test accuracy} has generally been adopted for evaluation . For a test set $\\mathcal{T}=\\{(x_i,y_i)\\}_{i=1}^{|\\mathcal{T}|}$, let $\\hat{y}_i$ be the predicted label of the $i$-th example in $\\mathcal{T}$. 
Subsequently, the test accuracy is formalized by\n\\begin{equation}\n\\label{eq:test_accuracy}\n\\text{Test Accuracy} = \\frac{|\\{(x_i,y_i)\\in\\mathcal{T}:\\hat{y}_i=y_i\\}|}{|\\mathcal{T}|}.\n\\end{equation}\nIf the test data are not available, \\emph{validation accuracy} can be used by replacing $\\mathcal{T}$ in Eq.\\,\\eqref{eq:test_accuracy} with validation data $\\mathcal{V}=\\{(x_i,y_i)\\}_{i=1}^{|\\mathcal{V}|}$ as an alternative,\n\\begin{equation}\n\\label{eq:validation_accuracy}\n\\text{Validation Accuracy} = \\frac{|\\{(x_i,y_i)\\in\\mathcal{V}:\\hat{y}_i=y_i\\}|}{|\\mathcal{V}|}.\n\\end{equation}\nFurthermore, if the specified method belongs to the \\enquote{sample selection} category, \\emph{label precision} and \\emph{label recall} can be used as the metrics,\n\\noindent\n\\begin{equation}\n\\label{eq:label_precision}\n\\begin{gathered}\n\\text{Label Precision} = \\frac{|\\{(x_i,\\tilde{y}_i)\\in\\mathcal{S}_t: \\tilde{y}_i = y_i\\}|}{|\\mathcal{S}_t|},\\\\\n\\text{Label Recall} = \\frac{|\\{(x_i,\\tilde{y}_i)\\in\\mathcal{S}_t: \\tilde{y}_i = y_i\\}|}{|\\{(x_i,\\tilde{y}_i)\\in\\mathcal{B}_t: \\tilde{y}_i = y_i\\}|},\n\\end{gathered}\n\\end{equation}\nwhere $\\mathcal{S}_t$ is the set of selected clean examples in a mini-batch $\\mathcal{B}_t$. The two metrics are performance indicators for the examples selected from the mini-batch as true-labeled ones . 
\looseness=-1
Meanwhile, if the specified method belongs to the \enquote{label refurbishment} category, \emph{correction error} can be used as an indicator of how many examples are incorrectly refurbished,
\begin{equation}
\label{eq:correction_error}
\!\text{Correction Error} = \frac{|\{x_i\!\in\!\mathcal{R}: \text{argmax}(y_{i}^{refurb}) \neq y_i\}|}{|\mathcal{R}|},
\end{equation}
where $\mathcal{R}$ is the set of examples whose labels are refurbished by Eq.\,(\ref{eq:label_correction}) and $y_i^{refurb}$ is the refurbished label of the $i$-th example in $\mathcal{R}$. 
\vspace*{-0.12cm}
With recent efforts in the machine learning community, the robustness of DNNs has been evolving in several directions. Thus, the existing approaches covered in our survey face a variety of future challenges. This section discusses future research directions that can facilitate the development of deep learning in the label noise area.
\\looseness=-1\n\\vspace*{-0.22cm}", "id": "01d4e82b-d227-4b85-bfa5-6db6af2b743d", "level": "section", "origin_cites_number": 0, "parent_id": "4c23e240-6b38-42a9-a3ad-77c4f677661a", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Future Research Directions" ] ], "subsections": [ "4ad866ea-f321-4ab2-be85-14135292873f", "e53153d2-a39d-4573-9711-598781d00529", "cfbd0ac1-5575-43b8-9a59-26d13838ff8e", "23870233-7f5b-4ef1-9d38-e6f3adb0fa48", "bf2c4a41-a623-4acf-b905-fe345054727a", "a23d6d53-b5ad-4828-8bb9-730c9fff2012" ], "title": "Future Research Directions" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 7777, 4120, 4124, 4155, 4136, 4152 ], "content": "}\n{\nExisting theoretical and empirical studies for \\emph{robust loss function} and \\emph{loss correction} are largely built upon the instance-independent noise assumption that the label noise is independent of input features . However, this assumption may not be a good approximation of the real-world label noise. In particular, Chen et al. conducted a theoretical hypothesis testing\\footnote[61]{In Clothing1M, the result showed that the instance-independent noise happens with probability lower than $10^{-21250}$, which is statistically impossible.} using a popular real-world dataset, Clothing1M, and proved that its label noise is statistically different from the instance-independent noise. This testing confirms that the label noise should depend on the instance. \\looseness=-1\nConversely, most methods for the other direction (especially, \\emph{sample selection}) work well even under the instance-dependent label noise in general since they do not rely on the assumption. Nevertheless, Song et al. pointed out that their performance could considerably worsen in the instance-dependent\\,(or real-world) noise compared to symmetric noise due to the confusion between true-labeled and false-labeled examples. 
The loss distribution of true-labeled examples heavily overlaps that of false-labeled examples under asymmetric noise, which is similar to real-world noise, as shown in Figure \ref{fig:loss_distribution}(b). Thus, identifying clean examples becomes more challenging when dealing with the instance-dependent label noise.
Beyond the instance-independent label noise, there have been a few recent studies on the instance-dependent label noise. Mostly, they focus only on a binary classification task or a restricted small-scale machine learning model such as logistic regression . Therefore, learning with the instance-dependent label noise is an important topic that deserves more research attention.} 
\vspace*{-0.32cm}
}
{
Most of the existing methods are applicable only to a \emph{single-label} multi-class classification problem, where each data example is assumed to have only one true label. However, in the case of \emph{multi-label} learning, each data example can be associated with a set of multiple true class labels. In music categorization, each piece of music can belong to multiple categories . In semantic scene classification, each scene may belong to multiple scene classes . Thus, contrary to the single-label setup, the multi-label classifier aims to predict a set of target objects simultaneously. 
In this setup, a multi-label dataset of millions of examples is reported to contain over $26.6\%$ false-positive labels as well as a significant number of omitted labels .
\nEven worse, the difference in occurrence between classes makes this problem more challenging; some minor class labels occur less in training data than other major class labels. Considering such aspects that can arise in multi-label classification, the simple extension of existing methods may not learn the proper correlations among multiple labels. Therefore, learning from noisy labels with multi-label data is another important topic for future research. We refer the readers to a recent study that discusses the evaluation of multi-label classifiers trained with noisy labels.\n}\n\\vspace*{-0.12cm}", "id": "e53153d2-a39d-4573-9711-598781d00529", "level": "subsection", "origin_cites_number": 4, "parent_id": "01d4e82b-d227-4b85-bfa5-6db6af2b743d", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Future Research Directions" ], [ "subsection", "{Multi-label Data with Label Noise" ] ], "subsections": [], "title": "{Multi-label Data with Label Noise" }, { "cite_extract_rate": 0.5, "cites": [ 7774 ], "content": "}\n{\nThe \\emph{class imbalance} in training data is commonly observed, where a few classes account for most of the data. Especially when working with large data in many real-world applications, this problem becomes more severe and is often associated with the problem of noisy labels .\nNevertheless, to ease the label noise problem, it is commonly assumed that training examples are equally distributed over all class labels in the training data. This assumption is quite strong when collecting large-scale data, and thus we need to consider a more realistic scenario in which the two problems coexist. \\looseness=-1\nMost of the existing robust methods may not work well with the class imbalance, especially when they rely on the learning dynamics of DNNs, e.g., the small-loss trick or memorization effect. 
Under the existence of the class imbalance, the training model converges to major classes faster than minor classes such that most examples in the major class exhibit small losses\\,(i.e., early memorization). That is, there is a risk of discarding most examples in the minor class. Furthermore, in terms of example importance, high-loss examples are commonly favored for the class imbalance problem , while small-loss examples are favored for the label noise problem. This conceptual contradiction hinders the applicability of the existing methods that neglect the class imbalance. Therefore, these two problems should be considered simultaneously to deal with more general situations.}\n\\vspace*{-0.12cm}", "id": "cfbd0ac1-5575-43b8-9a59-26d13838ff8e", "level": "subsection", "origin_cites_number": 2, "parent_id": "01d4e82b-d227-4b85-bfa5-6db6af2b743d", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Future Research Directions" ], [ "subsection", "{Class Imbalance Data with Label Noise" ] ], "subsections": [], "title": "{Class Imbalance Data with Label Noise" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 7771, 4163, 3899, 8740, 8739 ], "content": "}\n{\nMachine learning classifiers can perpetuate and amplify the existing systemic injustices in society . Hence, fairness is becoming another important topic. Traditionally, robust training and fair training have been studied by separate communities; robust training with noisy labels has mostly focused on combating label noise without regarding data bias , whereas fair training has focused on dealing with data bias, not necessarily noise . However, noisy labels and data bias, in fact, coexist in real-world data. Satisfying both robustness and fairness is more realistic but challenging because the bias in data is pertinent to label noise. 
\nIn general, many fairness criteria are group-based, where a target metric is equalized or enforced over subpopulations in the data, also known as \\emph{protected groups} such as race or gender . Accordingly, the goal of fair training is building a model that satisfies such fairness criteria for the \\emph{true} protected groups. However, if the \\emph {noisy} protection group is involved, such fairness criteria cannot be directly applied. Recently, mostly after 2020, a few pioneering studies have emerged to consider both robustness and fairness objectives at the same time under the binary classification setting . Therefore, more research attention is needed for the convergence of robust training and fair training.\n}\n\\vspace*{-0.12cm}", "id": "23870233-7f5b-4ef1-9d38-e6f3adb0fa48", "level": "subsection", "origin_cites_number": 6, "parent_id": "01d4e82b-d227-4b85-bfa5-6db6af2b743d", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Future Research Directions" ], [ "subsection", "{Robust and Fair Training" ] ], "subsections": [], "title": "{Robust and Fair Training" }, { "cite_extract_rate": 0.4, "cites": [ 9127, 4164 ], "content": "}\n{\nThere has been a lot of research on the robustness of deep learning under input perturbation, mainly in the field of adversarial training where the {input feature} is maliciously perturbed to distort the output of the DNN . Although learning with noisy labels and learning with noisy inputs have been regarded as separate research fields, their goals are similar in that they learn noise-robust representations from noisy data. Based on this common point of view, a few recent studies have investigated the interaction of adversarial training with noisy labels .\nInterestingly, it was turned out that adversarial training makes DNNs robust to label noise . Based on this finding, Damodaran et al. 
proposed a new regularization term, called Wasserstein adversarial regularization, to address the problem of learning with noisy labels. Zhu et al. proposed to use the number of projected gradient descent steps as a new criterion for sample selection, such that clean examples are separated from noisy data. These approaches offer a new perspective on label noise compared with traditional work. Therefore, understanding the connection between input perturbation and label noise could be another future topic for better representation learning toward robustness. 
}
\vspace*{-0.12cm}
}
{
{The efficiency of the learning pipeline is another important aspect when designing deep learning approaches.} However, for robust deep learning, most studies have neglected the efficiency of the algorithm because their main goal is to improve the robustness to label noise. For example, maintaining multiple DNNs or training a DNN in multiple rounds is frequently used, but these approaches significantly degrade the efficiency of the learning pipeline. On the other hand, the need for more efficient algorithms is increasing owing to the rapid growth in the amount of available data . 
According to our literature survey, most work did not even report the efficiency\,(or time complexity) of their approaches.
However, it is evident that saving training time is helpful under a restricted computation budget.\nTherefore, enhancing the efficiency will significantly increase the usability of robust deep learning in the big data era.\n}", "id": "a23d6d53-b5ad-4828-8bb9-730c9fff2012", "level": "subsection", "origin_cites_number": 1, "parent_id": "01d4e82b-d227-4b85-bfa5-6db6af2b743d", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Future Research Directions" ], [ "subsection", "{Efficient Learning Pipeline" ] ], "subsections": [], "title": "{Efficient Learning Pipeline" }, { "cite_extract_rate": 0, "cites": [], "content": "DNNs easily overfit to false labels owing to their high capacity, which allows them to totally memorize all noisy training samples. This overfitting issue still remains even with various conventional regularization techniques, such as dropout and batch normalization, thereby significantly decreasing their generalization performance. Even worse, in real-world applications, the difficulty in labeling renders the overfitting issue more severe. Therefore, learning from noisy labels has recently become one of the most active research topics. \nIn this survey, we presented a comprehensive understanding of modern deep learning methods to address the negative consequences of learning from noisy labels. All the methods were grouped into five categories according to their underlying strategies and described along with their methodological weaknesses. Furthermore, a systematic comparison was conducted using six popular properties used for evaluation in the recent literature. According to the comparison results, there is no ideal method that supports all the required properties; the supported properties varied depending on the category to which each method belonged. Several experimental guidelines were also discussed, including {noise rate estimation,} publicly available datasets, and evaluation metrics. 
Finally, we provided insights and directions for future research in this domain. \n\\section*{Acknowledgements}\nThis work was supported by Institute of Information \\& Communications Technology Planning \\& Evaluation\\,(IITP) grant funded by the Korea government\\,(MSIT) (No. 2020-0-00862, DB4DL: High-Usability and Performance In-Memory Distributed DBMS for Deep Learning).\n\\end{document}", "id": "0fd993d8-9dac-43e0-a25d-3f85b6def415", "level": "section", "origin_cites_number": 0, "parent_id": "4c23e240-6b38-42a9-a3ad-77c4f677661a", "prefix_titles": [ [ "title", "Learning from Noisy Labels with Deep Neural Networks: A Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
90
[ 7, 206, 7210, 4238, 7770, 71, 3630, 8734, 4116, 4115, 7769, 7771, 8696, 4118, 313, 4117, 3454, 4121, 8735, 4120, 4122, 4119, 8736, 3340, 4126, 4253, 4124, 4125, 4123, 4127, 4129, 4143, 7133, 4137, 7773, 3345, 4134, 4132, 4140, 4141, 7775, 3342, 4142, 7772, 8737, 4130, 4138, 7191, 7162, 4135, 4144, 4145, 7774, 4139, 892, 4128, 4131, 8630, 4133, 2277, 4136, 4146, 7776, 8738, 4147, 4148, 4151, 4150, 4149, 7777, 4153, 4152, 3453, 4156, 4154, 4155, 4157, 4158, 1695, 4159, 3343, 7163, 7778, 4160, 4161, 652, 4162, 4163, 3899, 8740, 8739, 9127, 4164 ]
0.914392
[ "Tiago D. Perez", "Samuel Pagliarini" ]
A Survey on Split Manufacturing: Attacks, Defenses, and Challenges
2020
2020-06-08T14:24:49Z
cs.CR
In today's integrated circuit (IC) ecosystem, owning a foundry is not economically viable, and therefore most IC design houses are now working under a fabless business model. In order to overcome security concerns associated with the outsourcing of IC fabrication, the Split Manufacturing technique was proposed. In Split Manufacturing, the Front End of Line (FEOL) layers (transistors and lower metal layers) are fabricated at an untrusted high-end foundry, while the Back End of Line (BEOL) layers (higher metal layers) are manufactured at a trusted low-end foundry. This approach hides the BEOL connections from the untrusted foundry, thus preventing overproduction and piracy threats. However, many works demonstrate that BEOL connections can be derived by exploiting layout characteristics that are introduced by heuristics employed in typical floorplanning, placement, and routing algorithms. Since straightforward Split Manufacturing may not afford a desirable security level, many authors propose defense techniques to be used along with Split Manufacturing. In our survey, we present a detailed overview of the technique, the many types of attacks towards Split Manufacturing, as well as possible defense techniques described in the literature. For the attacks, we present a concise discussion on the different threat models and assumptions, while for the defenses we classify the studies into three categories: proximity perturbation, wire lifting, and layout obfuscation. The main outcome of our survey is to highlight the discrepancy between many studies -- some claim netlists can be reconstructed with near perfect precision, while others claim marginal success in retrieving BEOL connections. Finally, we also discuss future trends and challenges inherent to Split Manufacturing, including the fundamental difficulty of evaluating the efficiency of the technique.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "4df5dfa6-5506-4133-aae0-08641ac96086", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey on Split Manufacturing: Attacks, Defenses, and Challenges" ] ], "subsections": [ "bcde8dcf-d020-417f-80e7-74fb7e142a07", "c3cea202-e3a7-4e28-a205-e6374a147781", "63582e79-19ab-44ed-8225-2fdf07c8ddcd", "0381ac67-f861-41a5-94dd-d57b1d10cc30", "5c7f1b9c-d130-400e-ae7c-f7e824d126c3", "347259ba-3c12-43d7-b6a0-2402a507ae62" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:introduction}\nCounterfeiting and intellectual property (IP) infringement are growing problems in several industries, including the electronics sector. In Europe, for instance, seizures of counterfeit electronics products increased by almost 30\\% when comparing the 2014-2016 to the 2011-2013 period . Legitimate electronics companies reported about \\$100 billion in sales losses every year because of counterfeiting .\n \\Figure[t!](topskip=0pt, botskip=0pt, midskip=0pt)[width=1\\linewidth]{counter_taxiderm.pdf}\n{Taxonomy of counterfeit electronics (adapted from ). \\label{fig:cf_tax}}\nAs electronic systems are being increasingly deployed in critical infrastructure, counterfeit and maliciously modified integrated circuits (ICs) have become a major concern. The globalized nature of the IC supply chain contributes to the problem as we lack the means to assess the trustworthiness of the design and fabrication of ICs. It is conceivable -- if not likely -- that a fault in a low-quality counterfeit IC (or even a maliciously modified IC) will effectively disrupt critical infrastructure with grave consequences. Therefore, hardware security has gained more attention in the past decades, emerging as a relevant research topic.\nAs the IC supply chain has become more globalized, ensuring the integrity and trustworthiness of ICs becomes more challenging . 
When a modern IC is conceived, the probability that all involved parties are trusted is, in practice, close to zero. The process of conceiving an IC can be broken down into three major steps: design, manufacturing, and validation. \\textit{Designing} an IC involves arranging blocks and their interconnections. Some blocks are in-house developed, while some are third-party IPs. Finally, a layout is generated by instantiating libraries that might also be in-house developed or provided by third parties. The resulting layout is then sent to a foundry for \\textit{manufacturing}. The process of \\textit{validation} requires testing for physical defects as well as verifying packaged parts for correct functionality. Both test and packaging facilities may be untrusted, as these efforts are often offshored. Thus, in order to produce an IC, sensitive information almost inevitably is exposed to untrusted parties. Today's reality is that ICs are vulnerable to many hardware-based threats, including insertion of hardware trojans, IP piracy, IC overbuilding, reverse engineering, side-channel attacks, and counterfeiting. These threats are discussed in detail in . \nHardware trojans, in particular, are malicious modifications to an IC, where attackers insert circuitry (or modify the existing logic) for their own malicious purposes. This type of attack is (typically) mounted during manufacturing, as the foundry holds the entire layout and can easily identify critical locations for trojan insertion. Third-party IPs can also contain trojans/backdoors that may contain hidden functionalities, and which can be used to access restricted parts of the design and/or expose data that would otherwise be unknown to the adversary.\nIP piracy and IC overbuilding are, essentially, illegal ownership claims of different degrees. As stated before, when designing an IC, third-party and in-house developed IPs are utilized. 
The untrusted foundry (or a rogue employee of it) can copy one of those IPs without the owner's authorization. Similarly, malicious foundries can manufacture a surplus of ICs (overbuilding) without the owner's knowledge, and sell these parts in the grey market.\nReverse engineering of ICs has been extensively demonstrated in the specialized literature . An attacker can identify the technology node and underlying components (memory, analog, and standard cells), from which a gate-level netlist can be extracted and even a high-level abstraction can be inferred . Reverse engineering can be effortlessly executed during manufacturing, as the foundry holds the entire layout and most likely promptly recognizes some of the IP as well. After fabrication -- when ICs are already packaged and deployed -- reverse engineering is more laborious but can still be executed by a knowledgeable adversary.\nAccording to , counterfeit components are classified into seven distinct categories, as illustrated in Figure~\\ref{fig:cf_tax}. Recycled, remarked, out-of-spec/defective, and forged documentation are intrinsic after-market problems, where products are offered by parties other than the original component manufacturer or authorized vendors. These cases are highlighted in red. On the other hand, overproducing, cloning, and tampering are problems faced during the design and/or fabrication of ICs. These cases are highlighted in blue. For this reason, in this paper, we will focus on these threats. It is important to realize that these threats could be avoided if a trusted fabrication scheme was in place. However, the escalating cost and complexity of semiconductor manufacturing on advanced technologies made owning an advanced foundry unfeasible for design companies, which now have the tendency to adopt the fabless business model . 
Consequently, outsourcing the manufacturing exposes their entire layouts to untrusted foundries, leaving their designs vulnerable to malicious attacks.\nWhile many \\textit{ad hoc} techniques have been proposed to individually combat these threats, very few solutions directly address the lack of trust in the fabrication process. Split Manufacturing stands out from other techniques as it promotes a hybrid solution between trusted and untrusted fabrication. The technique was first pitched to DARPA circa 2006 in a white paper authored by Carnegie Mellon and Stanford universities. Later, it was picked up by IARPA, which then launched the Trusted IC program that successfully stewarded much of the research in the area and led to this survey. \nIn Split Manufacturing, the key concept is to \\textit{split} the circuit into two distinct parts before manufacturing, one containing the transistors and some routing wires, and the other containing the remaining routing wires. These parts are then fabricated in different foundries. The anatomy of an IC is illustrated in Figure~\\ref{fig:ic_anatomy}, containing two sets of layers: the bottom layers, where the transistors are built, called the Front End of Line (FEOL), and the top layers, where the metal layers are built for routing purposes, called the Back End of Line (BEOL). The metal layers are referred to as MX, where X is the level of the layer. M1 is the lowest layer at the bottom of the stack. Connections between metals are made by vias, referred to as VX, following the same naming scheme as for metals. In Split Manufacturing, the FEOL is first manufactured in a high-end foundry, and later the BEOL is stacked on top of it by a second (and possibly low-end) foundry. This process requires electrical, mechanical, and/or optical alignment techniques to ensure the connections between them. Additionally, FEOL and BEOL technologies have to be compliant with each other regarding the rules for metal/via dimensions where the split is to be performed. 
The split can be performed at the lowest metal layer (M1) or at higher layers, for which trade-offs are established between attained security and overheads. If the BEOL and FEOL technologies are vastly different from one another, Split Manufacturing may incur heavy overheads.\nIn this work, the focus is on the Split Manufacturing technique. As described above, Split Manufacturing can tackle threats that occur during the fabrication. It avoids overproduction, reverse engineering (to some extent) and unwanted modifications, limiting the capability of attackers. In Section \\ref{sec:bck_gnd}, we provide a background and more in-depth explanation of the technique. We address security threats in Section \\ref{sec:sec_thrts}, demonstrating the potential vulnerabilities found in split circuits and describing the state-of-the-art attacks proposed until the present day. In Section \\ref{sec:defenses}, the security of split circuits is discussed, showing how it can be improved using enhancement techniques. Future trends and lessons learnt are discussed in Section \\ref{sec:trends}. Finally, our conclusions are presented in Section \\ref{sec:conclusion}.\n \\Figure[t!](topskip=0pt, botskip=0pt, midskip=0pt)[width=.9\\linewidth]{cross_section.pdf}\n{Anatomy of an integrated circuit (adapted from ). \\label{fig:ic_anatomy}}", "id": "bcde8dcf-d020-417f-80e7-74fb7e142a07", "level": "section", "origin_cites_number": 10, "parent_id": "4df5dfa6-5506-4133-aae0-08641ac96086", "prefix_titles": [ [ "title", "A Survey on Split Manufacturing: Attacks, Defenses, and Challenges" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:bck_gnd}\nAs mentioned before, in order to have access to advanced technologies, many design companies have to outsource their IC manufacturing to untrusted high-end foundries. Protecting their designs against threats that may occur during manufacturing is a concern. 
Designs can be protected by applying the Split Manufacturing technique, thus combating all threats highlighted in blue in Figure~\\ref{fig:cf_tax}.\nSplit Manufacturing protects a design by hiding sensitive data from the untrusted foundry. This is achieved by splitting the IC into two parts before manufacturing, a horizontal cut that breaks the circuit into one part containing the transistors and some (local) routing wires, and another containing only routing wires. These parts are termed FEOL and BEOL. \nAs the FEOL and the BEOL of an IC are built sequentially, first FEOL and then BEOL, this characteristic enables the Split Manufacturing technique. Since the FEOL contains the transistors and possibly a few of the lowest ultra-thin metal layers -- the most complex parts of a CMOS process --, it is logical to seek to use a high-end foundry for its manufacturing, even if said foundry is not trusted. Completing the IC can then be done in a trusted low-end foundry, where the BEOL is stacked on top of the FEOL. Split Manufacturing was successfully demonstrated in , where designs were manufactured with \\textasciitilde 0\\% faults and are reported to present a performance overhead of about 5\\%. Therefore, the technique is, in principle, feasible and available for design companies to make use of, such that they can utilize advanced foundries without fully exposing their layouts during manufacturing.\nHowever, there are many caveats to Split Manufacturing. The technique can be successfully applied only if the technologies used to build the FEOL and BEOL are ``compatible''. In theory, a layout can be split at any layer if the chosen layer presents a good interface between FEOL and BEOL. However, since advanced technologies utilize the dual-damascene fabrication process, the layout can only be split on metal layers . Thus, the FEOL cannot terminate in a via layer. 
The dual-damascene process is characterized by patterning the vias and trenches in such a way that the metal deposition fills both at the same time, i.e., via-metal pairs (e.g., V1 and M2) must be built by the same foundry. \nTwo technologies are said to be compatible with each other if there is a way for a BEOL via to land on the FEOL uppermost layer while respecting all design rule checks (DRCs) of both technologies. DRCs are used to guarantee the manufacturability and functionality of an IC, and are defined with respect to the characteristics of the materials utilized and to tolerance ranges of the manufacturing processes (e.g., polishing, patterning, and deposition). These rules encompass minimum enclosure, width, spacing, and density checks. Modern technologies have several options for via shapes -- as long as one via shape is valid, the technologies are compatible for Split Manufacturing purposes. However, in practice, to keep the overhead of the technique under control, an array of via shapes must be feasible, thus providing the physical synthesis with a rich selection for both power and signal routing. \nAccording to , compatibility between two technologies can be generalized by enclosure rules as in Eq.~\\ref{eq:drc_1}, where MW.U.x is the minimum width of Mx on the untrusted foundry, VW.T.x is the minimum width of Vx on the trusted foundry, and EN.T.x is the minimum enclosure on the trusted foundry. These rules are illustrated in Figure \\ref{fig:drc_mdim}, where the left side of the image portrays a cross section view and the right side shows the top view. According to Figure \\ref{fig:drc_mdim}, the minimum enclosure width, Mx.EX.Vx, must be compatible between the two foundries. 
In modern technologies, Equation~\\ref{eq:drc_1} is no longer sufficient as it does not capture the intricate rules for vias and line endings (enclosure from 1 side, 2 sides, 3 sides, T-shaped/hammerheads, etc.).\n \\begin{equation} \\label{eq:drc_1}\n MW.U.x \\geq VW.T.x + (2 EN.T.x)\n \\end{equation}\n\\Figure[t!](topskip=0pt, botskip=0pt, midskip=0pt)[width=0.95\\linewidth]{metal_view.pdf}\n{Compatibility rules between FEOL and BEOL (adapted from ). \\label{fig:drc_mdim}}\nSplit Manufacturing also presents challenges on the design flow front, which is illustrated in Figure~\\ref{fig:split_sm_flow}. An in-house team designs the circuit, from RTL to layout. Most likely, the layout contains IPs obtained from third parties. The metal layer at which the layout is to be split may affect existing IP. Logic and memory IP may use higher metal layers -- memories typically require 4 to 5 metal layers, while standard cells typically require 2 metal layers --, limiting where the split can be done. Standard cells and memories have to be re-designed if they use metal layers that will be split, a grave challenge that may render Split Manufacturing significantly harder to execute.\nStill referring to Figure~\\ref{fig:split_sm_flow}, the FEOL and BEOL are generated using a hybrid process design kit (PDK), and then later split to be manufactured. After splitting the layout correctly, the FEOL is first manufactured in a high-end foundry, and later the BEOL is stacked on top of it by a second (and possibly low-end) foundry. \n\\Figure[t!](topskip=0pt, botskip=0pt, midskip=0pt)[width=1\\linewidth]{sm_flow_dia.pdf}\n{Split Manufacturing Design Flow. \\label{fig:split_sm_flow}}\nEven by splitting the layout, it is often argued that the FEOL exposes enough information to be exploited. Attacks towards the FEOL can effectively retrieve the BEOL connections by making educated guesses. 
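For illustration only, the generalized enclosure rule of Eq.~\ref{eq:drc_1} can be expressed as a short programmatic check; the function name and all dimensions below are our own made-up assumptions, not values from any real PDK:

```python
def beol_feol_compatible(mw_u_x, vw_t_x, en_t_x):
    """Check the generalized enclosure rule of Eq. (1): the untrusted
    foundry's minimum metal width on layer x (MW.U.x) must accommodate the
    trusted foundry's minimum via width (VW.T.x) plus the minimum enclosure
    (EN.T.x) on both sides of the via."""
    return mw_u_x >= vw_t_x + 2 * en_t_x

# Hypothetical dimensions in nanometers: a 100 nm metal line accommodates a
# 60 nm via with 20 nm enclosure per side, but not a 70 nm via.
print(beol_feol_compatible(100, 60, 20))  # True
print(beol_feol_compatible(100, 70, 20))  # False
```

As the text notes, modern nodes add via-shape and line-ending rules on top of this, so a realistic compatibility check would iterate over all permitted via shapes rather than a single width.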
The efficiency of the guessing process is inherently linked to the threat model assumed, which determines the information the attacker possesses to begin with. The literature describes two distinct threat models:\n \\begin{itemize}\n \\item \\textbf{Threat model I:} an attacker located at the untrusted foundry holds the FEOL layout and wants to retrieve the BEOL connections.\n \\item \\textbf{Threat model II:} an attacker located at the untrusted foundry holds the entire gate-level netlist that is assumed to be provided by a malicious observer. The attacker here still holds the FEOL layout and wants to retrieve the BEOL connections.\n \\end{itemize}\nIt is important to emphasize that the second threat model completely nullifies the security introduced by Split Manufacturing. Possessing the gate-level netlist makes reverse engineering the layout a trivial task, as if the attacker held the entire layout, not only the FEOL. Assuming the attacker has knowledge about the netlist challenges the integrity of the design company itself. If a rogue element exists inside the design company, other representations of the design could be equally stolen, such as the register-transfer level (RTL) code of the design, or even the entire layout (including the BEOL). It could be argued that this vulnerability is so severe that Split Manufacturing virtually stops making sense. For this reason, threat model I is the focus in this work. However, as our goal is to present a comprehensive survey, related works that use threat model II will be covered as well. \nAssuming threat model I, an attacker, already knowing all the layers that make up the FEOL, is now interested in retrieving the BEOL connections to recreate the full design (or as close as possible). The commonly used assumption is that attackers are powerful and work within the untrusted foundry in some capacity. Thus, the attackers have a deep understanding of the technology. 
Extracting the (still incomplete) gate-level netlist from a layout is, therefore, a trivial task for the attacker.\nMany approaches to retrieve the BEOL connectivity have been proposed, several of which are termed \\textit{proximity attacks} . Since EDA tools focus on optimizing power, performance, and area (PPA), the solution found by a placement algorithm (that uses heuristics internally) tends to place connected cells close to one another as this will, in turn, reduce area, wirelength, and delay. Therefore, finding the correct missing connections between FEOL and BEOL can be done by assessing input and output pins that are in proximity (thus, the name proximity attack). The more input and output pins there are to connect, the higher the probability of making a wrong connectivity guess. Thus, a higher level of security is achieved by splitting the circuit at the lowest metal layer possible.\nAs a promising technique to enhance the security of ICs in this era of fabless chip design, Split Manufacturing still faces some enormous challenges:\n \\begin{itemize}\n \\item[] \\textit{Logistical challenge:} Split Manufacturing is not presently incorporated into the IC supply chain. Finding foundries with compliant technologies that are willing to work with each other is not trivial.\n \\item[] \\textit{Technological challenge:} even within compliant technologies, non-negligible overheads can be introduced if they are vastly different\\footnote{For a thorough discussion and silicon results on BEOL-related overheads, please refer to .}. In the worst-case scenario, it can make routing impossible. Thus, this fact narrows down the available technology choices and the feasibility of certain layers as candidates for splitting.\n \\item[] \\textit{Security challenge:} the attained security of straightforward Split Manufacturing is still under debate. 
Attacks towards the FEOL can be effective, where the hidden connections can be retrieved.\n \\end{itemize}\nFor the purpose of this survey, we categorized related works in the literature as attacks and defenses. In \\textit{attacks}, authors proposed new attack models or modifications of existing attacks in order to improve their effectiveness. In \\textit{defenses}, authors proposed new techniques to use together with Split Manufacturing in order to improve its security level.", "id": "c3cea202-e3a7-4e28-a205-e6374a147781", "level": "section", "origin_cites_number": 10, "parent_id": "4df5dfa6-5506-4133-aae0-08641ac96086", "prefix_titles": [ [ "title", "A Survey on Split Manufacturing: Attacks, Defenses, and Challenges" ], [ "section", "Split Manufacturing: Background" ] ], "subsections": [], "title": "Split Manufacturing: Background" }, { "cite_extract_rate": 0.130434782608695, "cites": [ 7800, 7798, 7799 ], "content": "\\label{sec:sec_thrts}\nThe Split Manufacturing technique was developed to protect ICs against threats related to manufacturing in potentially untrusted foundries. In practical terms, to split the layout means to hide some connections from the untrusted foundry. The security provided by Split Manufacturing is based on the fact that the attacker in the FEOL foundry cannot infer the missing BEOL connections. This assumption, however, was challenged by several works where authors proposed attack approaches that can potentially retrieve the missing connections with varying degrees of success. In the text that follows, we present works that proposed Split Manufacturing attacks. These attacks are discussed in chronological order as compiled in Table~\\ref{tab:atcks_desc}. For each studied attack, we reported the related threat model, attack type, novelty, and benchmark circuits used in experiments. Additionally, we reported the largest and average size of the circuits utilized in each work (measured in number of gates). 
The circuit size is an important metric when analyzing the Split Manufacturing technique because the technique's effectiveness is often proportional to circuit size. In Table~\\ref{tab:atcks_effect}, we compiled results for when the smallest and largest studied circuits are under attack. Also, we reported the circuit size and, if defined, the split layer that the author selected to split the circuit at.\nThe first reported attack is by Jeyavijayan \etal~and is described in . In this work, the authors assumed that naive Split Manufacturing (i.e., splitting a layout without care for the connections) is inherently insecure. They introduced the concept of proximity attacks that exploit ``vulnerabilities'' introduced by EDA design tools. Since EDA tools focus on optimizing for PPA, the solution found by a placement algorithm tends to place logically connected cells close to one another so they become physically connected during routing. Therefore, the distance between output-input pairs can be used as a metric to recover the missing BEOL connections.\n\\Figure[t!](topskip=0pt, botskip=0pt, midskip=0pt)[width=0.90\\linewidth]{partitions.pdf}\n{Example of a partitioned circuit. \\label{fig:cir_part}}\nDesigns are commonly partitioned during physical implementation, i.e., separated into small logical blocks with few connections between them. That way, the designer has total control of the floorplanning regarding the placement of blocks. This approach also allows for blocks to be implemented separately and later integrated, creating a sense of parallelism in the design flow which can reduce the overall time required for executing physical synthesis. Consider as an example the circuit illustrated in Figure~\\ref{fig:cir_part} which contains two partitions, A and B, each with three gates. Partition A has three input pins and one output pin, while partition B has three input pins and two output pins. 
The partitions have only one connection between them, connecting the output pin of gate G2 with one of the inputs of gate G3. Consider a target output pin from partition A $P_{x,A,out}$ and its corresponding candidate input pin from partition B $P_{x,B,in}$. During placement, the EDA tool will attempt to place the pin $P_{x,A,out}$ as close as possible to $P_{x,B,in}$, possibly closer than any other pin in partition B. Using this insight, an attacker may recover the missing connections in the FEOL layout, thus performing a proximity attack. The authors argued that their proposed attack flow is successful due to being able to leverage the following ``hints'' provided by the EDA tools:\n \\begin{itemize}\n \\item[] \\textit{Hint 1 - Input-Output Relationship}: partition input pins are connected either to another partition output pin or to an input port of the IC (i.e., input-to-input connections are excluded from the search space).\n \\item[] \\textit{Hint 2 - Unique Inputs per Partition}: input-output pins between partitions are connected by only one net. If a single partition output pin feeds more than one input pin, the fan-in and fan-out nodes are usually placed within the partitions (i.e., one-to-many connections are ruled out from the search space). \n \\item[] \\textit{Hint 3 - Combinational Loops}: in general, only very specific structures are allowed to utilize combinational loops (e.g., ring oscillators). These structures are very easy to identify. In most cases, random logic does not contain combinational loops (i.e., connections that would lead to combinational loops can be excluded from the search space).\n \\end{itemize}\nAn attacker can correctly connect a target pin to a candidate pin by identifying the closest pin from a list of possible candidates. The list of possible candidates is created by observing the hints mentioned above. A possible candidate pin is either an unassigned output pin of another partition or an unassigned input port of the design. 
Then, a minimum distance metric is used to connect the pins based on the previously discussed heuristic behavior of EDA tools.\nIn Algorithm \\ref{alg:prox_jevs}, we described the proximity attack detailed in . The input to the algorithm is the FEOL layout, from which the information about unassigned input-output ports can be derived. The algorithm does not describe the specifics of how to derive a netlist from a layout. However, this task is rather straightforward. It is assumed that the attacker possesses information about both the PDK and the standard cell library. In many cases, the untrusted foundry is the actual provider of both\\footnote{For the PDK, it is very natural that it is created by the untrusted foundry itself. For standard cell libraries, the cells might be designed by the foundry or by a third party licensed by the foundry. In either case, the effort to revert a layout to a netlist remains trivial.}. From there, a layout in GDSII or OASIS format can be easily reverted to a netlist by any custom design EDA tool. 
\n\\begin{algorithm}[h]\n\\DontPrintSemicolon\n \\KwInput{FEOL layers}\n \\KwOutput{Netlist with BEOL connections}\n Reverse engineer FEOL layers and obtain partitions;\n \\While{Unassigned partition pins or ports exist}\n {\n Select arbitrary unassigned pin/port as a targetPin;\n \tListOfCandPins = BuildCandPinsList(targetPin);\n \tSelect candPin from ListOfCandPins that is closest to targetPin;\n \tConnect targetPin and candPin;\n \tUpdate netlist;\n }\n \\textbf{Return:} netlist\n \\textbf{BuildCandPinsList}(targetPin)\n \\KwInput{targetPin $P_{X,i,in}$}\n \\KwOutput{CandPins for targetPin}\n CandPins = Unassigned output pins of other partitions + unassigned input ports of the design;\n \\For{\\textbf{each} $Pin_J \\in$ CandPins}\n {\n \\If{CombinationalLoop(targetPin, $Pin_J$)}\n {\n CandPins -= $Pin_J$;\n }\n }\n \\textbf{Return:} CandPins\n\\caption{Proximity attack}\n\\label{alg:prox_jevs}\n\\end{algorithm}\nFrom the gate-level netlist, the algorithm chooses an arbitrary \\textit{TargetPin} from the unassigned partition input pins and output ports, creates a list of possible \\textit{CandidatePins}, and then connects the \\textit{TargetPin} to the closest pin in this list. After each connection, the netlist is updated. This procedure is repeated until all unassigned ports are connected. When the procedure is over, the attacker obtains the possible missing BEOL connections. If all guesses were correct, the original design has been recovered and Split Manufacturing has been defeated.\n \\begin{table*}[htb]\n \\rowcolors{2}{gray!25}{white}\n \\centering\n \\caption{Threat Models, Attacks, and Metrics.}\n \\begin{tabular}{ccclp{4.0cm}lp{1.7cm}p{1.2cm}}\n \\hline \\hline \\\\ [-1.5ex]\n \\textbf{Work} & \\textbf{Year} & \\parbox{1cm}{\\textbf{Threat} \\\\\\textbf{model}} & \\textbf{Attack type } & \\textbf{Novelty} & \\textbf{Benchmark suite(s)} & \\parbox{2cm}{\\textbf{Largest circuit} \\\\\\textbf{size (gates)}} & \\parbox{2cm}{\\textbf{Avg. 
circuit} \\\\\\textbf{size (gates)} } \\vspace{1pt} \\\\ \n \\hline\n & 2013 & I & Proximity & Attack Based on Proximity & ISCAS\\textquotesingle85 & 3.51k & 1288\\\\\n & 2016 & II & Proximity & \\parbox{4.5cm}{Placement and routing proximity \\\\ used in conjunction} & ISPD\\textquotesingle11 & 1.29M & 951k \\\\\n & 2018 & I & Proximity & \\parbox{4.5cm}{Network-Flow-Based with Design \\\\ Based Hints} & ISCAS\\textquotesingle85 \\& ITC\\textquotesingle99 & 190.21k & 9856\\\\\n & 2018 & I & Proximity & \\parbox{4.5cm}{Proximity Attack Based on Machine \\\\ Learning} & ISPD\\textquotesingle11 & 1.29M & 951k\\\\\n & 2019 & I & SAT & SAT Attack without Proximity Information & ISCAS\\textquotesingle85 \\& ITC\\textquotesingle99 & 190.21k & 9856\\\\\n & 2019 & I & SAT & \\parbox{4.5cm}{SAT attack dynamically adjusted \\\\based on proximity information} & ISCAS\\textquotesingle85 \\& ITC\\textquotesingle99 & 190.21K & 9856\\\\\n \\hline \\hline\n \\end{tabular}\n \\label{tab:atcks_desc}\n\\end{table*}\nAlgorithm \\ref{alg:prox_jevs} was originally applied to the ISCAS\\textquotesingle85 suite of benchmark circuits. These circuits were originally selected and published to help in comparing automatic test pattern generation (ATPG) tools. Due to the small size of these circuits, they may not be the best option to assess the effectiveness of Split Manufacturing. The difficulty to retrieve the BEOL connections is directly proportional to the size of the circuit. The authors reported an average effectiveness of 96\\% of Correct Connection Rate (CCR) across all the benchmarks considered. For the c17 circuit (smallest circuit in the ISCAS\\textquotesingle85 suite with only 6 gates), all the connections were retrieved correctly, thus demonstrating that the algorithm is capable of retrieving the missing BEOL connections. 
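The greedy loop at the heart of Algorithm \ref{alg:prox_jevs} can be condensed into a short sketch. The pin names and layout coordinates below are purely hypothetical, distance is taken as Euclidean, and the combinational-loop test is reduced to a pluggable predicate:

```python
import math

def proximity_attack(unassigned_inputs, unassigned_outputs, forms_loop=None):
    """Greedy nearest-pin matching in the spirit of the proximity attack.

    unassigned_inputs / unassigned_outputs map pin names to hypothetical
    (x, y) layout coordinates; forms_loop(in_pin, out_pin) may veto
    candidates that would close a combinational loop."""
    forms_loop = forms_loop or (lambda i, o: False)
    connections, outputs = {}, dict(unassigned_outputs)
    for target, (tx, ty) in unassigned_inputs.items():
        # Candidate list: still-unassigned output pins that do not form a loop.
        cands = [(name, xy) for name, xy in outputs.items()
                 if not forms_loop(target, name)]
        if not cands:
            continue
        # Connect the target pin to the geometrically closest candidate.
        best = min(cands, key=lambda c: math.hypot(c[1][0] - tx, c[1][1] - ty))[0]
        connections[target] = best
        del outputs[best]  # each output pin drives at most one guess here
    return connections

# Toy FEOL view: two partition input pins and two dangling output pins.
ins = {"G3.B": (1.0, 1.0), "G6.A": (4.0, 4.0)}
outs = {"G2.out": (1.2, 0.9), "G5.out": (3.8, 4.1)}
print(proximity_attack(ins, outs))
```

On this toy instance each input pin simply snaps to its nearest dangling output; the defenses discussed later aim to make exactly this guess wrong.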
In Table~\\ref{tab:atcks_effect}, we highlight the best and worst results in terms of CCR.\nJeyavijayan \\etal were the first to question the security of straightforward Split Manufacturing. Their proximity attack showed promising results, even if the considered benchmark circuits were rather small in size. This was the starting point for other studies proposing different attacks to Split Manufacturing in an attempt to retrieve the missing BEOL connections. Improvements over the original proximity attack, as well as other attacks, are compiled in Table~\\ref{tab:atcks_desc}. \nThe effectiveness of the proximity attack utilizing distance of unassigned pins alone as metric to find missing BEOL connections was questioned by Maga\\~na \\etal . The authors proposed to utilize both placement and routing information in augmented proximity attacks. For their results, large-sized circuits from the ISPD-2011 routability-driven placement contest were used. These benchmarks are better representatives of modern circuits as they contain 9 metal layers and up to two million nets in a design. 
Thus, in an attempt to increase the success rate of the attack for large-sized circuits, they proposed routing-based proximity in conjunction with placement-centric proximity attacks.\n \\begin{table*}[htb]\n \\rowcolors{2}{gray!25}{white}\n \\centering\n \\caption{Benchmark Size and Comparison of Attack Results.}\n \\begin{tabular}{cllcllc}\n \\hline \\hline \\\\ [-1.5ex]\n \\textbf{Work} & \\textbf{Benchmark} & \\textbf{Attack} & \\textbf{Split Layer} & \\textbf{Size (In Gate Count)} & \\textbf{Metric} & \\textbf{Result} \\vspace{5pt} \\\\\n \\hline\n & c17 & Proximity & Not Defined & 6 & CCR(\\%) & 100 \\\\\n & c7552 & Proximity & Not Defined & 3513 & CCR(\\%) & 94 \\\\\n & Superblue 1 & Placement Proximity & M2 & 847k & \\% Match in List & 12.84 \\\\\n & Superblue 1 & Placement Proximity & M2 & 847k & CCR(\\%) & 5.479 \\\\\n & Superblue 1 & Routing Proximity & M2 & 847k & \\% Match in List & 71.08 \\\\\n & Superblue 1 & Routing Proximity & M2 & 847k & CCR(\\%) & 0.651 \\\\\n & Superblue 1 & Overlap (P\\&R) Proximity & M2 & 847k & \\% Match in List & 13.05 \\\\\n & Superblue 1 & Overlap (P\\&R) Proximity & M2 & 847k & CCR(\\%) & 3.977 \\\\\n & Superblue 1 & Crouting Proximity & M2 & 847k & \\% Match in List & 82.08 \\\\\n & Superblue 1 & Crouting Proximity & M2 & 847k & CCR(\\%) & 0.651 \\\\\n & c7552 & Network-flow Based Proximity & Not Defined & 3513 & CCR(\\%) & 93 \\\\\n & c7552 & Proximity & Not Defined & 3513 & CCR(\\%) & 42 \\\\\n & B18 & Network-flow Based Proximity & Not Defined & 94249 & CCR(\\%) & 17 \\\\\n & B18 & Proximity & Not Defined & 94249 & CCR(\\%) & < 1 \\\\\n & Superblue 1 & Proximity & M6 & 847k & \\% Match in list & 33.40 \\\\\n & Superblue 1 & Proximity & M6 & 847k & CCR(\\%) & 0.76 \\\\\n & Superblue 1 & ML & M6 & 847k & \\% Match in list & 83.12 \\\\\n & Superblue 1 & ML & M6 & 847k & CCR(\\%) & 1.91 \\\\\n & Superblue 1 & ML-imp & M6 & 847k & \\% Match in list & 74.65 \\\\\n & Superblue 1 & ML-imp & M6 & 847k & CCR(\\%) & 2.11 \\\\\n 
& Superblue 1 & ML-imp & M4 & 847k & \% Match in list & 75.45 \\\n & Superblue 1 & ML-imp & M4 & 847k & CCR(\%) & 2.58 \\\n & c7552 & SAT Attack & Not Defined & 3513 & Logical Equivalence(\%) & 100 \\\n & B18 & SAT Attack & Not Defined & 94249 & Logical Equivalence(\%) & 100 \\\n & c7552 & Improved SAT Attack & Not Defined & 3513 & Logical Equivalence(\%) & 100 \\\n & B18 & Improved SAT Attack & Not Defined & 94249 & Logical Equivalence(\%) & 100 \\\n \hline \hline\n \end{tabular}\n \label{tab:atcks_effect}\n\end{table*}\nA key difference present in is that this work utilizes a different threat model (model II), claiming that the untrusted foundry possesses information about the entire place \& routed netlist (as well as the FEOL layout). This assumption is hard to justify if the attacker's intent is to overproduce the IC or pirate the IP. For these goals, clearly, this assumption is unnecessary. If the attacker indeed possesses the netlist, he can perform his own physical synthesis and generate his own layout, and the interest in reverse engineering the BEOL connections of the original design diminishes. Nevertheless, we report on the strategies employed by the authors of since they build on the approach proposed by .\n\Figure[t!](topskip=0pt, botskip=0pt, midskip=0pt)[width=0.90\linewidth]{sa_onlycir.pdf}\n{Representation of the routing for the first 3 metal layers of a simple circuit (adapted from ). \label{fig:circ_abs}}\nRegarding the attacks, the authors of proposed four different techniques to identify a small search neighborhood for each pin. The goal is to create a neighborhood that is small enough to make further pruning feasible while still being likely to include the matching pin. The techniques are called \textit{placement proximity}, \textit{routing proximity}, \textit{crouting proximity} and \textit{overlap of placement and routing proximity}, and are described in the text that follows. 
The circuit illustrated in Figure~\\ref{fig:circ_abs} is the example (before the split) that will guide the discussion on these four techniques.\n\\textit{Placement proximity} exploits the placement information of cells. Each split wire is taken from the pin location of the corresponding standard cell that is connected to it. A search neighborhood is defined as a square region centered around the corresponding pin with an area equal to the average areas of the bounding boxes (BB) in a typical design. The authors argued that it can also be measured based on BBs of the non-split wires in the design under attack, under the assumption that the number of wires that remain in the FEOL is also very large in practice. Let us consider the circuit illustrated in Figure~\\ref{fig:circ_abs} as an example. If the split is done at M2, the search area defined using the placement proximity would contain three gates as illustrated in Figure~\\ref{fig:prox_sa} (a). Thus, using the placement proximity search area, the most likely connections are illustrated by the green squares (candidate pins) and by the red squares (target pins). Note that the layer at which the layout is split does not affect the search area defined by placement proximity.\n\\textit{Routing proximity} exploits the routing information. First, for each split wire, pins are identified as the point where the wire is actually cut at the split layer, i.e., the via location. Next, a square area centered around those pins is defined. The size of the square area is defined based on the average BBs of the pins on that layer in the design. This procedure for identification results in different neighborhood sizes according to the split layer location, i.e., the search radius adapts to the routing resources of each layer. A search area defined using routing proximity is illustrated in Figure~\\ref{fig:prox_sa} (b), highlighted in gray and containing a set of routing wires and its respective pins. 
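To make the search-area idea concrete, the placement-proximity window and its candidate pruning can be sketched as follows. The coordinates, pin names, and average bounding-box area are hypothetical; the routing and crouting variants would only change how the window is sized:

```python
def placement_search_area(pin_xy, avg_bbox_area):
    """Square window centered on a pin, sized from the average net
    bounding-box area (placement proximity)."""
    half = (avg_bbox_area ** 0.5) / 2.0
    x, y = pin_xy
    return (x - half, y - half, x + half, y + half)

def candidates_in_area(area, pins):
    """Prune the candidate list to pins that fall inside the window."""
    x0, y0, x1, y1 = area
    return [name for name, (x, y) in pins.items()
            if x0 <= x <= x1 and y0 <= y <= y1]

# Hypothetical dangling pins; a 4x4 window around the target pin.
pins = {"A.out": (2.0, 2.0), "B.out": (9.0, 9.0), "C.out": (3.0, 1.5)}
window = placement_search_area((2.5, 2.0), avg_bbox_area=16.0)
print(candidates_in_area(window, pins))
```

The attacker's hope is that the matching pin survives this pruning while most spurious candidates (like B.out above) are discarded.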
\n\\textit{Crouting proximity} takes into account routing congestion by exploiting the union of placement and routing proximity. The search area for each pin is defined in such way that the ratio of number of pins to the search area is equal across all the pins in the split layer. Thus, if a pin is located at a high routing congestion area, the search area will be expanded until the pin density in the new search area reaches a target value or the search area grows to four times its starting value. The starting value is set according to the split layer, set as the average of numbers of pins which fall within a BB. A search area defined using crouting proximity is illustrated in Figure~\\ref{fig:prox_sa} (c). \nThe last strategy proposed by also combines placement and routing information. It is referred to as \\textit{Overlap of placement and routing proximities}. The concept here is to include a subset of pins identified by the placement proximity list which have their corresponding pins included in the routing proximity list. According to the author, intuitively, the overlap then identifies a subset of pins which may be more likely to point towards the direction of the matching pin. A search area defined using the overlap of placement and routing proximities is illustrated in Figure~\\ref{fig:prox_sa} (d). \n \\Figure[t!](topskip=0pt, botskip=0pt, midskip=0pt)[width=1\\linewidth]{prox_sa_new.pdf}\n{Multiple strategies for pin/connectivity search areas according to . \\label{fig:prox_sa}}\nMaga\\~na et al. assessed each strategy using the benchmark circuit \\textit{superblue1}. Different split layers were also considered. In Table~\\ref{tab:atcks_effect}, we compiled the results for split layer M2. By comparing the results, it becomes clear that no strategy was able to recover 100\\% of the missing BEOL connections. The best result was only 5.479\\% of CCR. This is in heavy contrast with the findings of . 
However, as we previously noted, the circuit sizes differ by orders of magnitude.\nAccording to the authors of , proximity alone is in no way sufficient to reverse engineer the FEOL. However, proximity attacks have merit as they can be used to narrow down the list of candidates to a significantly smaller size. Using crouting proximity, in 82.08\% of the cases, the search area defined contained the matched pin in the list of candidates. The authors also present results for a circuit split at M8. We opt not to show these results in Table~\ref{tab:atcks_effect}. Using the circuit \textit{superblue1} as an example, the number of unassigned pins when the circuit is split at M8 is only 1.2\% of the pins when split at M2. Therefore, the small number of unassigned pins to be connected overshadows the large circuit used for their experiments. It must also be emphasized that splitting a circuit at such high layers is rather impractical since M8 tends to be a very thick metal reserved for power distribution in typical 10-metal stacks. There is very little value in hiding a power distribution network from an adversary that wants to pirate an IP. Once again, we opt not to show this result in our comparisons.\nA network-flow based attack model for flattened designs was proposed by Wang \etal . The authors argued that the proximity attack originally proposed by utilizes hints that apply only to hierarchical designs, and that modern designs are often flattened\footnote{We highlight that best practices in circuit design have changed over the years. Hierarchical design was heavily utilized for many years, but it lost favor due to the difficulty in performing reasonable timing budgeting between the many blocks of a system. Thus, flattened designs are often used to facilitate timing closure.}. 
Based on the original proximity attack, they proposed a proximity attack utilizing five hints: physical proximity, acyclic combinational logic circuit, load capacitance constraint, directionality of dangling wires, and timing constraint. Note that the first two hints are already described by and . The three novel hints are described below:\n \begin{itemize}\n \item[] \textit{Load Capacitance Constraint}: gates can drive a limited load to honor slew constraints. The maximum load capacitance is typically constrained to a value defined by the PDK and/or the standard cell characterization boundaries. Hence, an attacker will consider only connections that will not violate load capacitance constraints.\n \item[] \textit{Directionality of Dangling Wires}: routing engines tend to route wires from a source to a sink node along the direction of the sink node. Therefore, the directionality of remaining dangling wires at lower metal layers may indicate the direction of their destination cell with a high degree of certainty\footnote{Metals usually have preferred directions that alternate along the stack (i.e., if M1 is vertical, then M2 is horizontal). Therefore, this hint becomes more effective if the attacker can observe more than one routing layer of the FEOL.}. An attacker can disregard connections in the other directions.\n \item[] \textit{Timing Constraint}: connections that create timing paths that violate timing constraints can be excluded. An attacker, through an educated guess of the clock period, can determine a conservative timing constraint and exclude any connections that would lead to slower paths. \n \end{itemize}\nThe network-flow-based attack framework proposed by Wang \etal~considers two hints proposed by plus the aforementioned hints to create a directed graph $G = (V,E)$, where $V$ is a set of vertices and $E$ is a set of edges. 
The set $V$ is composed of the set of vertices corresponding to the output pins ($V_o$), a set corresponding to the input pins ($V_i$), the source vertex ($S$), and the target vertex ($T$). The set $E$ consists of $E_{So}$, edges from $S$ to every output pin vertex, $E_{oi}$, edges from output pin vertices to input pin vertices, and $E_{iT}$, which includes edges from every input vertex to the target vertex. An example of this kind of representation is shown in Figure~\ref{fig:network_flow}, where (a) is the circuit with missing connections and (b) is the network-flow representation. The detailed problem formulation is omitted from this work. To find the connections, a min-cost network-flow problem is solved, where the decision variables are the flow $x_{i,j}$ going through each edge $(i,j) \in E$ . The authors utilized the Edmonds-Karp algorithm to solve this problem. The complexity of the algorithm alone is $O(VE^2)$; however, in the worst case the algorithm must be run $V$ times, so the run-time of the complete network attack is $O(V^2E^2)$.\n\Figure[t!](topskip=0pt, botskip=0pt, midskip=0pt)[width=0.90\linewidth]{network_flow.pdf}\n{(a) Circuit with missing connections. (b) Network-flow model for inferring the missing connections. (Adapted from ). \label{fig:network_flow}}\nThe network-flow approach was applied to ISCAS\textquotesingle85 and ITC\textquotesingle99 benchmark circuits. The ITC\textquotesingle99 circuits were proposed as an evolution of the ISCAS\textquotesingle85 set, since the Test community acknowledged that newer and larger circuits were already in demand. For comparison, the authors applied both the original proximity attack and the network-flow attack to flattened designs. As shown in Table~\ref{tab:atcks_effect}, their network-flow proximity attack outperformed the original attack in terms of CCR. 
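At its core, the formulation reduces to assigning each dangling output pin to exactly one input pin at minimum total cost. The toy stand-in below brute-forces that assignment over permutations instead of running Edmonds-Karp, and encodes hint violations (loops, load capacitance, direction, timing) as infinite edge costs; all pin names and costs are hypothetical:

```python
from itertools import permutations

def min_cost_assignment(out_pins, in_pins, cost):
    """Exhaustive stand-in for the min-cost network-flow step: assign
    each output pin to one input pin minimizing the summed edge cost.
    cost(o, i) returns float('inf') when a hint rules the edge out."""
    best, best_cost = None, float("inf")
    for perm in permutations(in_pins):
        total = sum(cost(o, i) for o, i in zip(out_pins, perm))
        if total < best_cost:
            best, best_cost = dict(zip(out_pins, perm)), total
    return best

# Hypothetical distances; the (o1, i1) edge is vetoed by a hint.
dist = {("o1", "i1"): float("inf"), ("o1", "i2"): 2.0,
        ("o2", "i1"): 1.0, ("o2", "i2"): 5.0}
print(min_cost_assignment(["o1", "o2"], ["i1", "i2"],
                          lambda o, i: dist[(o, i)]))
```

On real instances the hint filters do most of the pruning; the flow formulation then picks the cheapest assignment consistent with all of them.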
However, despite the evident improvement, the attack could only retrieve 17\\% of the missing BEOL connections for a medium sized circuit (b18 from the ITC\\textquotesingle99 suite).\nA Machine Learning (ML) framework was used by Zhang \\etal in an attempt to improve the attack proposed in . The same setup as previously discussed was utilized. However, more layout features were incorporated in their ML formulation, including placement, routing, cell sizes, and cell pin types. \nA high-level overview of their modeling framework is shown in Figure~\\ref{fig:ml_framework} (a). First, they create a challenge instance from the entire layout and only FEOL view. Next, for each virtual pin (point where a net is broken on the split layer), layout information is collected, including placement, routing, cell areas, and cell pin as illustrated in Figure~\\ref{fig:ml_framework} (b). Using this information, samples are generated which are fed into the ML training process. Each sample carries information of a pair of virtual pins which may or may not be matched. Classifiers then are built by the ML framework using training samples. After training and building the regression model, cross validation is used for evaluation which ensures validation of the model is done on data samples which were not used for training. Their framework faces scaling issues when applied to lower split metal layers. An improved ML framework is then proposed as well, denoted by ML-imp, to solve the scaling issues.\nFor their experiments, Zhang \\etal utilize the ISPD\\textquotesingle11 benchmark suite. They compare results from their previous work with their ML and ML-imp frameworks. However, they do not show results for lower split metal layers (e.g., M2). Instead, results are provided for M8, M6, and M4 splits. As pointed out before, utilizing higher layers for the split effectively shrinks the otherwise large circuits used in their experiments. 
A drastic reduction of unassigned pins is expected for such higher layers, as higher metal layers are often used for power routing, not for signal routing. Results for the \textit{superblue1} circuit are shown in Table~\ref{tab:atcks_effect}. Regarding recovering missing BEOL connections, ML and ML-imp could only retrieve around 2\%, thus showing no major improvement over their previous work. However, search list area accuracy showed significantly better results when compared to their prior work. A caveat worth mentioning is that the proposed machine learning framework needs the entire layout during its modeling phase. This characteristic may, in an extreme case, nullify the applicability for an attacker that only holds the FEOL layout and cannot produce training samples from other sources.\n \Figure[t!](topskip=0pt, botskip=0pt, midskip=0pt)[width=1\linewidth]{ml_features.pdf}\n{(a) Machine learning modeling by Zhang \etal . (b) A few examples of layout features. \label{fig:ml_framework}}\nAttacks using proximity information as a metric are not the only solution to recover missing BEOL connections. An effective Boolean-satisfiability-based strategy is proposed by Chen \etal . The authors claim that their attack methodology does not need (or depend on) any proximity information, or even any other insights into the nature of EDA tools utilized in the design process. The key insight in their work is to model the interconnect network as key-controlled multiplexers (MUXes). Initially, all combinations of signal connections between the FEOL partitions are allowed, as illustrated in Figure~\ref{fig:mux_network}. First, a MUX network is created in order to connect all missing paths in the circuit. This MUX network leads to potential cyclic paths; thus, many combinational loops may be generated during the attack process, corresponding to incorrect key guesses. 
Therefore, constraints on the key values are generated in order to avoid activating the cyclic paths. The attack can be summarized in four steps: \textit{identification of all cyclic paths}, \textit{generation of cycle constraints}, \textit{cycle constraints optimizations}, and finally, \textit{SAT attack}. The authors utilize a SAT solver-based attack method derived from CycSat . The SAT attack algorithm has as input the FEOL circuit with the MUX network and a packaged IC that serves as an oracle. The algorithm outputs keys to be used in the MUXes such that correct BEOL connections are made. \nIn reality, presents a different interpretation of threat model I since the attacker is assumed to possess a functional IC. This IC would then have to be available in the open market for the attacker to be able to purchase it. This characteristic severely narrows down the applicability of this SAT attack. For instance, ICs designed for space or military use will not be freely available, thus an oracle may not be known to the attacker.\nExperimental results presented by utilize ISCAS\textquotesingle85 and ITC\textquotesingle99 benchmark circuits. It has been shown that their attack could recover a logically correct netlist for all the studied circuits. However, a small clarification is needed regarding what constitutes a logically correct circuit. In Table~\ref{tab:atcks_effect}, two of those results are shown. For seven of the studied benchmarks (c1908, c2670, c5315, c7552, b14, b15, b17), the connections recovered are identical to the BEOL connections. For the remaining benchmarks, the recovered connections are not identical but logically equivalent to the original circuit. In practice, the logically equivalent circuit may present performance deviations from the original design. Matching the performance of the original design can be done by re-executing place and route using the logically equivalent gate-level netlist. 
Depending on the attack goal, it is possible that the attacker had already planned to re-execute the physical synthesis flow again (say, to resell the IP in a different form or shape). An attack that guarantees 100\\% of logic equivalence of the recovered netlist is powerful enough, allowing attackers to copy and modify split layouts.\nIn order to increase the efficiency and capacity of the SAT attack proposed in , the authors proposed two improvements in . First, the size of the key-controlled interconnect network that models the possible BEOL connections is reduced. Second, after the MUX network is inserted into the FEOL circuit, the number of combinational cycles it induces in the design for incorrect key guesses should also be reduced. Proximity information is then exploited to achieve the proposed improvements. The improved SAT attack method which exploits proximity information showed significant reduction in the attack time and increase in the capacity. Same as in , the circuits tested were 100\\% recovered, as shown in Table~\\ref{tab:atcks_effect}. \n\\Figure[t!](topskip=0pt, botskip=0pt, midskip=0pt)[width=0.95\\linewidth]{mux_network.pdf}\n{MUX network for a bipartitioned FEOL circuit (Adapted from ). 
\\label{fig:mux_network}}\n\\begin{table*}[htb]\n\\rowcolors{2}{gray!25}{white}\n \\centering\n \\caption{Split Manufacturing Defenses.}\n \\begin{tabular}{cp{1cm}ccp{2.2cm}p{2cm}l}\n \\hline \\hline \\\\ [-1.5ex]\n \\textbf{Work} & \\textbf{Year} & \\textbf{Threat Model} & \\textbf{Category} & \\textbf{Defense} & \\textbf{Metrics} & \\textbf{Defense Overheads Presented} \\vspace{5pt} \\\\\n \\hline \n & 2013 & I & Proximity Perturbation &Pin Swapping & Hamming Distance & \\multicolumn{1}{c}{-*} \\\\\n & 2013 & II & Wire Lifting &Wire Lifting & k-Distance & Power, Area, Delay and Wire-Length \\\\\n & 2014 & I & Layout Obfuscation & Layout Obfuscation for SRAMs and Analog IPs & - & Performance, Power and Area \\\\\n & 2014 & I & Layout Obfuscation & Obfuscation Techniques & Neighbor Connectedness and Entropy & Performance and Area \\\\\n & 2015 & I & Layout Obfuscation & Automatic Obfuscation Cell Layout & Neighbor Connectedness and Entropy & Performance, Power and Area \\\\\n & 2015 & I & Layout Obfuscation & Obfuscated Built-in Self-Authentication & Obfuscation Connection & Number of Nets \\\\\n & 2016 & I & Wire Lifting & Artificial Blockage Insertion & Number of Pins & \\multicolumn{1}{c}{-*} \\\\\n & 2016 & I & Wire Lifting & Net Partition, Cell Hidden and Pin Shaken & - & \\multicolumn{1}{c}{-*} \\\\\n & 2017 & I & Proximity Perturbation & Routing Perturbation & Hamming Distance & Performance and Wire-Length \\\\\n & 2017 & I & Wire Lifting & Secure Routing Perturbation for Manufacturability & Hamming Distance & Performance and Wire-Length \\\\\n & 2017 & I & Proximity Perturbation & placement-centric Techniques & CCR & Performance, Power and Area \\\\\n & 2017 & II & Proximity Perturbation & Gate Swapping and Wire Lifting & Effective Mapped Set Ratio and Average Mapped Set Pruning Ratio & Wire-Length \\\\\n & 2018 & I & Wire Lifting & Concerted Wire Lifting & Hamming Distance & Performance, Power and Area \\\\\n & 2018 & I & Proximity Perturbation & Secure 
Driven Placement Perturbation & Hamming Distance & Power and Wire-Length\\\\\n & 2018 & I & Proximity Perturbation & placement and routing perturbation & Hamming Distance & Performance, Power and Area \\\\\n & 2019 & I & Layout Obfuscation & Isomorphic replacement for Cell Obfuscation & Isomorphic Entropy & \\multicolumn{1}{c}{-*} \\\\\n & 2019 & II & Layout Obfuscation & Dummy Cell and Wire Insertion & k-security & Area and Wire-Length \\\\\n \\hline \\hline\n \\multicolumn{7}{l}{\\cellcolor{white}* Authors do not present any discussion regarding overhead.} \n \\end{tabular}\n \\label{tab:sm_defenses}\n\\end{table*}", "id": "63582e79-19ab-44ed-8225-2fdf07c8ddcd", "level": "section", "origin_cites_number": 23, "parent_id": "4df5dfa6-5506-4133-aae0-08641ac96086", "prefix_titles": [ [ "title", "A Survey on Split Manufacturing: Attacks, Defenses, and Challenges" ], [ "section", "Attacks on Split Manufacturing" ] ], "subsections": [], "title": "Attacks on Split Manufacturing" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:defenses}\nAttacks toward Split Manufacturing showed promising results, as described in the previous section. A malicious attacker has the real potential to recover the missing BEOL connections. If the missing connections are successfully recovered, the security introduced by applying the technique is nullified. Therefore, straightforward Split Manufacturing is questioned by several works. Several authors proposed defense techniques that augment the technique, i.e., techniques that, when used together with Split Manufacturing, do increase the achieved security against attacks. In Table~\\ref{tab:sm_defenses}, we compile a comprehensive list of defense techniques found in the literature. Each defense technique utilizes a different metric and Threat model, depending upon the type of attack they are trying to overcome. 
Since many of the studied defense techniques introduce heavy PPA overheads, Table~\ref{tab:sm_defenses} also shows if the studied work assessed overheads and which ones were addressed. \nIn the text that follows, the many defense techniques are divided into categories, namely Proximity Perturbation (i.e., change the location of cells or pins), Wire Lifting (i.e., move routing wires to upper layers), and Layout Obfuscation (i.e., hide the circuit structure). We present the categories in this exact order. It is worth mentioning that overlaps do exist and that some techniques could be categorized differently. Thus, this categorization is our interpretation of the state of the art and may not be definitive. Furthermore, the boundaries between categories are not strict. For example, a technique may perform a layout modification that promotes proximity perturbation and leads to (indirect) wire lifting. In Tables~\ref{tab:def_res} and \ref{tab:def_res2}, we compile the results for the Proximity Perturbation and Wire Lifting categories, respectively. To demonstrate the effectiveness of each defense technique, we compile the results for when the attack is done with and without the defense. The results shown in Tables~\ref{tab:def_res} and \ref{tab:def_res2} are for the smallest and largest circuits addressed in each studied work. 
Additionally, we show the PPA overhead introduced and, if specified, the split layer.", "id": "0381ac67-f861-41a5-94dd-d57b1d10cc30", "level": "section", "origin_cites_number": 0, "parent_id": "4df5dfa6-5506-4133-aae0-08641ac96086", "prefix_titles": [ [ "title", "A Survey on Split Manufacturing: Attacks, Defenses, and Challenges" ], [ "section", "Split Manufacturing Defenses" ] ], "subsections": [ "ee25dbd5-cfaf-41a3-8225-ebde1d228b40", "90e5175d-afc8-4844-8dfa-513e3fcc4fbd", "4032829b-2e93-4b12-822b-7b2a6e653dbc" ], "title": "Split Manufacturing Defenses" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 7798, 7799 ], "content": "Attacks toward split circuits are generally based on leveraging proximity information. The first category of defenses, Proximity Perturbation, addresses this hint left by the EDA tools. The goal of the techniques within this category is to promote changes in the circuit such that the proximity information between the FEOL pins is less evident. Therefore, the success rate of the proximity attacks is decreased.\nIn , the authors proposed pin swapping to overcome proximity attacks. Rearranging the partition pins can alter their distance in such a way that the attacker is deceived. As an example, if the pins $P_{G3,B,in}$ and $P_{G6,A,in}$ (Figure~\\ref{fig:cir_part}) are swapped, the proximity attack will incorrectly guess the connection between $P_{G2,A,out}$ and $P_{G3,B,in}$. Thus, a sufficient number of pins have to be swapped in order to create a netlist that is significantly different from the original netlist (based on some sort of metric for similarity). In , Hamming distance is proposed as a way to quantify the difference between the outputs of the original netlist and the modified netlist. Assuming the outputs of a circuit are arranged as a string of bits, Hamming distance is defined as the number of bits that change when two instances of this string are compared to one another. 
The authors argued that the optimum netlist is created when the Hamming distance is 50\%, thereby inducing the maximal ambiguity for a potential attacker. Since the best rearrangement for N pins of partitions might take $N!$ computations (rather computationally expensive), pair-wise swapping of pins is considered in . Pair-wise swapping of pins results in $O(N^2)$ computations. \nThe modified netlist is created based on a series of rules. Similarly to the proximity attack, a list of candidate pins to be swapped is created before the actual swap is applied. Since not every pin can be swapped, a candidate pin to be swapped should:\n \begin{itemize}\n \item be an output pin of the partition where the target pin resides\n \item not be connected to the partition where the candidate pin resides\n \item not form a combinational loop\n \end{itemize}\nUsing the above constraints, a candidate pin is selected. The target pin also needs to be chosen carefully. In , IC testing principles and hints from the original proximity attack are used to choose the target pin. The swapping procedure is described in Algorithm \ref{alg:pin_swap}, where $TestMetric$ is a metric based on IC testing principles, such as stuck-at fault models which are still utilized in Test today. More details can be obtained from . The proposed defense technique is validated using ISCAS\textquotesingle85 circuits and the original proximity attack. For the smallest circuit, c17, it takes only one swap to achieve a Hamming distance of 50\%. For the largest studied circuit, c7552, it takes 49 swaps. These results are summarized in Table~\ref{tab:def_res}.\nAs demonstrated in , rearranging the partition pins can thwart proximity attacks. However, according to Chen \etal , pin swapping at the partition level has limited efficacy. They demonstrated that an attacker holding the FEOL layout as well as the netlist can insert hardware trojans even when the defense approach of is applied. 
It must be highlighted that assumes threat model II, which we have previously argued has the potential to nullify the vast majority of defenses towards split circuits. Thus, they proposed a defense to counter the threat from hardware trojans. Their defense incorporates the global wire-length information, with the goal of hiding the gates from their candidate locations, and as a result decreasing the effective mapped set ratio (EMSR). The EMSR metric is an attempt to quantify the ratio of real gate locations recovered by a given mapping during a simulated annealing-based attack. This defense consists of two steps: first, a greedy gate swapping defense , and second, a measurement of the security elevation in terms of EMSR. The technique is evaluated using ISCAS\\textquotesingle85 benchmark circuits and the EMSR metric to quantify the defense effectiveness. The results are shown in Table~\\ref{tab:def_res}.\nFollowing the same principle of increasing the Hamming Distance, Wang \\etal proposed a routing-perturbation-based defense. The optimum Hamming distance is sought to be achieved by layer elevation, routing detours, and wire decoys, while test principles are used to drive the perturbation choices. Layer elevation is essentially a wire lifting technique: without changing the choice of split metal layer, wires are forced to route using higher metal layers, thus being lifted from the FEOL to the BEOL. Intentional routing detours are a way to increase the distance between disconnected pins of the FEOL. If done properly, disconnected pins will not be the closest to each other, deceiving the proximity attack. In some cases, routing detours will increase the distance between disconnected pins; however, they still remain the closest to each other. In this scenario, wire decoys can be drawn near disconnected pins, in such a way that decoys are now the closest and will instead be picked as the ideal candidate pin. 
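To make the Hamming-distance metric used by these defenses concrete, the following Python sketch compares a toy netlist against a variant in which one connection has been guessed incorrectly; the two three-input netlists are illustrative stand-ins, not circuits from the surveyed works.

```python
from itertools import product

def output_hamming_distance(net_a, net_b, n_inputs):
    """Fraction of output bits that differ between two netlists,
    averaged over all input patterns (defenses target 50%)."""
    diff = total = 0
    for bits in product((0, 1), repeat=n_inputs):
        out_a, out_b = net_a(*bits), net_b(*bits)
        diff += sum(a != b for a, b in zip(out_a, out_b))
        total += len(out_a)
    return diff / total

# Toy original netlist: two outputs driven by an AND and an OR gate.
def original(a, b, c):
    return (a & b, b | c)

# The same FEOL with one BEOL connection guessed wrong
# (inputs 'a' and 'c' swapped at the partition boundary).
def perturbed(a, b, c):
    return (c & b, b | a)
```

Here the single wrong guess yields a Hamming distance of 0.25 over all eight input patterns; a defense would keep swapping or detouring wires until the distance of the attacker's best reconstruction approaches 0.5.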
\nThe perturbations proposed in can incur heavy overheads, and, for this reason, wires to be perturbed are chosen by utilizing IC test principles. In , fault observability, as defined in SCOAP , is used as a surrogate metric for this task. The technique is evaluated using ISCAS\\textquotesingle85 and ITC\\textquotesingle99 benchmark circuits. For all studied circuits, the Hamming distance increased by an average of 27\\% at a cost of only 2.9\\% wire length overhead (WLO), on average. The results for the largest and smallest studied circuits are shown in Table~\\ref{tab:def_res}.\n \\begin{algorithm}[htb]\n\\DontPrintSemicolon\n \\KwInput{Partitions}\n \\KwOutput{List of target and swapping pins}\n $ListofTargetPins$ = $\\emptyset$;\\\\\n $ListofSwappingPins$ = $\\emptyset$;\\\\\n $ListofUntouchedPins$ = All partition pins and I/O ports;\\\\\n \\While{Untouched output partition pins or input ports exist}\n {\n \\For{$UntouchedPin$}\n {\n $SwappingPins$ = \\\\\n BuildSwappingPinsList($UntouchedPin$);\n \\For{$SwappingPin$ $\\in$ $SwappingPins$}\n {\n Compute \\\\\n $TestMetric$($UntouchedPin,\n SwappingPin$);\n }\n }\n Find the $TargetPin$ and $SwappingPin$ with the Highest $TestMetric$ from its SwappingPins;\\\\\n $ListofTargetPins += TargetPin$;\\\\\n $ListofSwappingPins += SwappingPin$;\\\\\n $ListofUntouchedPins -= TargetPin$;\\\\\n $ListofUntouchedPins -= SwappingPin$; \\\\\n Swap TargetPin and SwappingPin; \\\\\n Update netlist;\n }\n \\textbf{Return:} ListofTargetPins and ListofSwappingPins;\n \\textbf{BuildSwappingPinsList}($TargetPin$); \\\\\n \\KwInput{$TargetPin P_{x,i,out}$}\n \\KwOutput{$SwappingPins$ for $TargetPin$}\n \\For{$Pin_J \\in SwappingPins$}\n {\n \\If{$CombinationalLoop(TargetPin,Pin_J)$}\n {\n $SwappingPins -= Pin_J$;\n }\n }\n \\textbf{Return:} $SwappingPins$;\n\\caption{ Fault analysis-based swapping of pins to thwart the proximity attack (adapted from ).}\n\\label{alg:pin_swap}\n\\end{algorithm}\nSengupta \\etal take a different direction 
from other works. They utilized an information-theoretic metric to increase the resilience of a layout against proximity attacks. As demonstrated in , mutual information (MI) can be used to quantify the amount of information revealed by the connectivity distance between cells. Mutual information is calculated by taking into account the cells' connectivity $D$ (whether they are connected or not) and their Manhattan distance $X$, as described by Eq. \\ref{eq:mutual_info}, where $H[\\cdot]$ is the entropy. The Manhattan distance of two cells is defined as the sum of horizontal and vertical distances between them. Entropy is a measure of disorder of a system. Therefore, in this work, entropy is utilized as a measure of disorder in the FEOL layer. The distribution of the variable $X$ for a given layout is determined pair-wise for all gates, allowing a straightforward computation of $I(X;D)$. Thus, layouts with the lowest mutual information, i.e., those in which the correlation between cell connectivity and distance is low, are more resilient against proximity attacks.\n \\begin{equation}\\label{eq:mutual_info}\n MI = I(X;D) = H[X]-H[X|D]\n \\end{equation}\nIn order to minimize the information ``leaked'' from mutual information, applies cell placement randomization and three other techniques: g-color, g-type1, and g-type2. Randomizing the cell placement can achieve the desired low mutual information; however, the PPA overhead incurred is excessive. Minimizing mutual information without excessive PPA overhead can be achieved by the other techniques. From a graph representation of the circuit, graph coloring can be used to hide connectivity information, where gates of the same color must not be connected. Thus, the resulting colored netlist is then partitioned by clustering all cells of the same color together. During cell placement, the cells with the same color will be confined within their respective clusters. 
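The $I(X;D)$ computation behind this defense can be sketched in a few lines of Python; the pairwise data below are fabricated toy placements used only for illustration (a fully "leaky" layout, where distance determines connectivity, versus one where distance is independent of connectivity).

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of a distribution given as a Counter."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def mutual_information(pairs):
    """pairs: (manhattan_distance, connected) over all cell pairs.
    Returns I(X;D) = H[X] - H[X|D], with X = distance, D = connectivity."""
    h_x = entropy(Counter(x for x, _ in pairs))
    h_x_given_d = 0.0
    n = len(pairs)
    for d in {d for _, d in pairs}:
        group = [x for x, dd in pairs if dd == d]
        h_x_given_d += (len(group) / n) * entropy(Counter(group))
    return h_x - h_x_given_d

# "Leaky" layout: connected pairs always close, unconnected always far.
leaky = [(1, True)] * 4 + [(3, False)] * 4
# Obfuscated layout: distance carries no connectivity information.
independent = [(1, True), (3, True), (1, False), (3, False)]
```

For the leaky placement the metric reaches its maximum (here 1 bit), while for the independent placement it is zero, which is exactly the condition the placement techniques above aim for.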
According to , these constraints naturally mitigate the information leakage to a great extent. The g-color technique utilizes only the graph coloring as described above. The other two, g-type1 and g-type2, consider the type of the gate when creating clusters. The g-type1 approach clusters gates only by their functionality, while g-type2 utilizes functionality and the number of inputs for clustering. The authors assessed their techniques utilizing ISCAS\\textquotesingle85 and MCNC benchmark suites. Results for the smallest and largest circuits are shown in Table~\\ref{tab:def_res}. \n\\begin{table*}[htb]\n\\rowcolors{2}{gray!25}{white}\n \\centering\n \\caption{Results for Defense Techniques based on Proximity Perturbation.}\n \\begin{tabular}{p{.8cm}p{1cm}p{1.2cm}p{2.5cm}lp{2.2cm}p{.5cm}p{1.5cm}p{1.5cm}}\n \\hline \\hline \\\\ [-1.5ex]\n \\textbf{Work} & \\textbf{Attack Type} & \\textbf{Benchmark} & \\textbf{Defense Technique} & \\textbf{Defense Metric} & \\textbf{Defense Overhead} & \\textbf{Split Layer} & \\textbf{Result without Defense} & \\textbf{Result with Defense} \\vspace{5pt} \\\\\n \\hline\n & Proximity & c17 & - & Hamming Distance & 1 Swap for 50\\% HD & -* & 100\\% CCR & 78\\% CCR\\\\\n & Proximity & c7552 & - & Hamming Distance & 49 Swaps for 50\\% HD & -* & 94\\% CCR & 91\\% CCR\\\\\n & Proximity & c432 & Modified Greedy Gate Swapping & EMSR & 75\\% of WLO & -* & 90\\% EMSR & 25\\% EMSR \\\\\n & Proximity & c432 & Modified Greedy Gate Swapping & EMSR & 300\\% of WLO & -* & 78\\% EMSR & 10\\% EMSR \\\\\n & Proximity & c432 & - & Hamming Distance & 3.1\\% WLO for 46.1\\% HD & -* & 92.4\\% CCR & 78.8\\% CCR \\\\\n & Proximity & c432 & - & Hamming Distance & 4.1\\% WLO for 31.7\\% HD & -* & 62.8\\% CCR & 37.9\\% CCR \\\\\n & Proximity & c432 & Random & Mutual Information & < 10\\% PPA & M1 & 17\\% CCR & < 1\\% CCR \\\\\n & Proximity & c432 & g-color & Mutual Information & < 10\\% PPA & M1 & 17\\% CCR & 2\\% CCR \\\\\n & Proximity & c432 & g-type1 & Mutual 
Information & < 10\\% PPA & M1 & 17\\% CCR & 6\\% CCR \\\\\n & Proximity & c432 & g-type2 & Mutual Information & < 10\\% PPA & M1 & 17\\% CCR & 4.5\\% CCR \\\\\n & Proximity & c7552 & Random & Mutual Information & < 10\\% PPA & M1 & 13\\% CCR & < 1\\% CCR \\\\\n & Proximity & c7552 & g-color & Mutual Information & < 10\\% PPA & M1 & 13\\% CCR & 2\\% CCR \\\\\n & Proximity & c7552 & g-type1 & Mutual Information & < 10\\% PPA & M1 & 13\\% CCR & 4\\% CCR \\\\\n & Proximity & c7552 & g-type2 & Mutual Information & < 10\\% PPA & M1 & 13\\% CCR & 3\\% CCR \\\\\n & SAT & c432 & BEOL+Physical & Perturbation & 4.5\\% WLO & -* & 58\\% CCR & 56\\% CCR \\\\\n & SAT & c432 & Logic+Physical & Perturbation & 5.57\\% WLO & -* & 58\\% CCR & 58\\% CCR \\\\\n & SAT & c432 & Logic+Logic & WLD & 1.68\\% WLO & -* & 58\\% CCR & 52\\% CCR \\\\\n & SAT & b18 & BEOL+Physical & Perturbation & 8.06\\% WLO & -* & 15\\% CCR & 14\\% CCR \\\\\n & SAT & b18 & Logic+Physical & Perturbation & 1.70\\% WLO & -* & 15\\% CCR & 17\\% CCR** \\\\\n & SAT & b18 & Logic+Logic & WLD & 0.61\\% WLO & -* & 15\\% CCR & 16\\% CCR** \\\\\n & Proximity & c432 & Netlist Randomization & Hamming Distance & < 10\\% PPA overall & -* & 92.4\\% CCR & 0\\% CCR \\\\\n & Proximity & c7552 & Netlist Randomization & Hamming Distance & < 10\\% PPA overall & -* & 94.4\\% CCR & 0\\% CCR \\\\\n \\hline\n \\hline\n \\multicolumn{9}{l}{\\cellcolor{white}* Split layer not specified by the authors.} \\\\\n \\multicolumn{9}{l}{\\cellcolor{white}** These results are counter-intuitive, the applied defense degrades the metric.} \n \\end{tabular}\n \\label{tab:def_res}\n\\end{table*} \nSimilar to the pin swapping technique proposed in , Wang \\etal proposed a placement-based defense with the same objective of deceiving a proximity attack by perturbing proximity information. Differently from pin swapping, their placement-based defense considers the incurred wire-length overhead as a metric. 
This technique is based on changing gate locations such that the proximity hint is no longer effective. Their algorithm consists of two phases: one to select which gates are to be perturbed, and a second in which the selected gates are (re)placed. Gate selection is done by extracting a set of trees using two techniques, BEOL-driven and logic-aware extraction. The first approach selects all gate trees that contain any metal wires in the FEOL, i.e., connections that are not hidden from the attacker. The second approach considers the wire-length impact and the gate tree impact on the overall security. After extracting the set of trees, the placement perturbation is done in one of two ways: physical-driven or logic-driven. For each extracted tree, the physical-driven perturbation changes the location of gates using a Pareto optimization approach. Also, each solution is evaluated by its wire-length overhead and a perturbation metric that discerns the placement difference from the original layout. According to , geometric-based difference alone may be insufficient to enhance the split circuit security. Thus, a logic-driven perturbation is performed with a weighted logical difference (WLD) metric, which encourages perturbation solutions with large logical difference from their neighbors. The authors assessed their techniques combining the gate selection and perturbation as BEOL+Physical, Logic+Physical and Logic+Logic, using ISCAS\\textquotesingle85 and ITC\\textquotesingle99 circuit benchmarks. Results for the smallest and largest circuits considered are shown in Table~\\ref{tab:def_res}.\nA considerably different approach is proposed by Patnaik \\etal , wherein netlist modifications are promoted (instead of placement/routing modifications during physical synthesis). The goal is to modify the netlist of a design in order to insert (partial) randomization. 
According to , this approach helps to retain the misleading modifications throughout any regular design flow, thereby obtaining more resilient FEOL layouts where the netlist changes are later ``corrected'' in the BEOL. This methodology is implemented as an extension to commercial EDA tools with custom in-house scripts. The process goes as follows: first, the netlist is randomized. Second, the modified netlist is placed and routed. Lastly, the true functionality is restored by re-routing in the BEOL. For the netlist randomization, pairs of drivers and their sinks are randomly selected and swapped. This is done in such a way as to avoid combinational loops that may be introduced by swapping. The modified netlist is then placed and routed, utilizing a `do not touch'\\footnote{This terminology is used in IC design to mean that a specific cell or family of cells should not be optimized, i.e., not to be touched.} setting for the swapped drivers/sinks to avoid logic restructuring/removal of the related nets. Finally, the true connectivity is restored in the BEOL with the help of correction cells that resemble switch boxes. The technique is evaluated using ISCAS\\textquotesingle85 circuits, and the results for the largest and smallest circuit are shown in Table~\\ref{tab:def_res}.", "id": "ee25dbd5-cfaf-41a3-8225-ebde1d228b40", "level": "subsection", "origin_cites_number": 7, "parent_id": "0381ac67-f861-41a5-94dd-d57b1d10cc30", "prefix_titles": [ [ "title", "A Survey on Split Manufacturing: Attacks, Defenses, and Challenges" ], [ "section", "Split Manufacturing Defenses" ], [ "subsection", "Proximity Perturbation" ] ], "subsections": [], "title": "Proximity Perturbation" }, { "cite_extract_rate": 0.07692307692307601, "cites": [ 7800 ], "content": "Hiding routing information from untrusted foundries is the main objective of the Split Manufacturing technique. 
Since attacks mainly rely on hints left by EDA tools to recover the missing BEOL connections, the amount of hidden information is related to the circuit performance -- splitting the circuits at low metal layers increases the security level. Following the same idea, wire lifting proposes `lifting' wires from the FEOL layer to the BEOL. That is, changing the routing to split metal layers has the potential to increase the security level. \n \\begin{table*}[htb]\n \\rowcolors{2}{gray!25}{white}\n \\centering\n \\caption{Results for Defense Techniques based on Wire Lifting.}\n \\begin{tabular}{cp{1cm}p{1.5cm}p{2.5cm}llp{1cm}p{1.5cm}p{1.5cm}}\n \\hline \\hline \\\\ [-1.5ex]\n \\textbf{Work} & \\textbf{Attack Type} & \\textbf{Benchmark} & \\textbf{Defense Technique} & \\textbf{Defense Metric} & \\textbf{Defense Overhead} & \\textbf{Split Layer} & \\textbf{Result without Defense} & \\textbf{Result with Defense} \\vspace{5pt}\\\\\n \\hline\n & SAT & c432 & Wire Lifting & \\textit{k-security} & 477\\% of WLO & -* & k=1 & k=48 \\\\\n & Proximity & Superblue 1 & Routing Blockage Insertion & $E[LS]$ & Not Presented & M4 & 1.51 & 1.77 \\\\\n & Proximity & Superblue 1 & Routing Blockage Insertion & $FOM$ & Not Presented & M4 & 1222.8 & 1433 \\\\\n & Proximity & c432 & Concerted Lifting & Hamming Distance & 7.7\\% of Area & Average** & 23.4 & 45.9 \\\\\n & Proximity & c432 & Concerted Lifting & CCR & 13.2\\% of Power & Average** & 92.4 & 0 \\\\\n & Proximity & c7552 & Concerted Lifting & Hamming Distance & 16.7\\% of Area & Average** & 1.6 & 25.7 \\\\\n & Proximity & c7552 & Concerted Lifting & CCR & 9.3\\% of Power & Average** & 97.8 & 0 \\\\\n & Proximity & c2670 & CMP-Friendly & Hamming Distance & 3.4\\% of WLO & -* & 14.5\\% & 20.4\\% \\\\\n & Proximity & c2670 & CMP-Friendly & CCR(\\%) & 3.4\\% of WLO & -* & 48.1\\% & 33.4\\% \\\\\n & Proximity & b18 & CMP-Friendly & Hamming Distance & 0.4\\% of WLO & -* & 21.6\\% & 27.6\\% \\\\\n & Proximity & b18 & CMP-Friendly & CCR(\\%) & 
0.4\\% of WLO & -* & 12.1\\% & 10.7\\% \\\\\n & Proximity & c2670 & SADP-Compliant & Hamming Distance & 7.49\\% of WLO & -* & 14.5\\% & 24.4\\% \\\\\n & Proximity & c2670 & SADP-Compliant & CCR(\\%) & 7.49\\% of WLO & -* & 48.1\\% & 6.4\\% \\\\\n & Proximity & b18 & SADP-Compliant & Hamming Distance & 4.64\\% of WLO & -* & 21.6\\% & 29.6\\% \\\\\n & Proximity & b18 & SADP-Compliant & CCR(\\%) & 4.64\\% of WLO & -* & 12.1\\% & 2.7\\% \\\\ \\hline\n & Proximity & s526 & Net Partitioning & CCR(\\%) & Not Presented & -* & 40\\%*** & 0\\%*** \\\\\n & Proximity & s526 & Net Partitioning \\& Cell Hiding & CCR(\\%) & Not Presented & -* & 40\\%*** & 0\\%*** \\\\\n & Proximity & s526 & Net Partitioning \\& Cell Hiding \\& Pin Shaking & CCR(\\%) & Not Presented & -* & 40\\%*** & 0\\%*** \\\\\n & Proximity & s9234.1 & Net Partitioning & CCR(\\%) & Not Presented & -* & 30\\%*** & 4\\%*** \\\\\n & Proximity & s9234.1 & Net Partitioning \\& Cell Hiding & CCR(\\%) & Not Presented & -* & 30\\%*** & 1.5\\%*** \\\\\n & Proximity & s9234.1 & Net Partitioning \\& Cell Hiding \\& Pin Shaking & CCR(\\%) & Not Presented & -* & 30\\%*** & 1.5\\%*** \\\\\n \\hline \n \\hline\n \\multicolumn{9}{l}{\\cellcolor{white}* Split layer not specified by the authors.} \\\\\n \\multicolumn{9}{l}{\\cellcolor{white}** Results are given as an average between M3, M4, and M5.} \\\\\n \\multicolumn{9}{l}{\\cellcolor{white}*** These results cannot be directly compared with previous ones as the transistor technology is vastly different.} \n \\end{tabular}\n \\label{tab:def_res2}\n\\end{table*} \nWire lifting was first presented by Imeson \\etal , where Split Manufacturing is considered as a 3D IC implementation . For the sake of argument, we will continue to refer to this technique as Split Manufacturing, even if the notion of untrusted FEOL vs. trusted BEOL is shifted. 
This type of 3D implementation consists of two or more independently manufactured ICs, where each IC represents a tier, and the tiers are vertically integrated on top of one another. Connections between the tiers are done using vertical metal pillars, referred to as through-silicon vias (TSVs). In , a 3D implementation consisting of two tiers is used for their experiments. The bottom tier contains the transistors and some routing wires (akin to the FEOL), while the top tier contains only routing wires (akin to the BEOL). Regarding the manufacturing of these 3D ICs, the bottom tier is built in a high-end untrusted foundry, and the top tier is also built in an untrusted foundry (not necessarily high-end, however).\nIn , threat model II is used, i.e., the adversary is assumed to possess the entire netlist. The problem is formulated as the attacker being the FEOL foundry, which in turn also possesses the so-called `unlifted netlist' extracted from the FEOL layout. By utilizing a graph to represent the circuits as previously described, the attacker seeks a bijective mapping of gates of the unlifted netlist to gates in the complete netlist. According to , if the attacker can distinguish any gate between the two netlists, the split circuit does not provide any security. A security notion is provided by the authors, based on the existence of multiple mappings between gates in the unlifted and complete netlists. Referred to as \\textit{k-security}, this metric guarantees that each gate in the design is indistinguishable from at least $k-1$ other gates. Thus, a defender wants to lift wires so as to guarantee the highest \\textit{k-security} possible. Two procedures are proposed to achieve this goal: one utilizing a greedy heuristic targeted at small circuits (due to scalability issues), and another that utilizes partitioning to overcome those issues. 
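As a rough illustration of the \textit{k-security} notion (and not the authors' algorithm), the sketch below computes a simple upper bound on $k$ by counting, for each gate type present in the unlifted netlist, how many same-type gates exist in the complete netlist; the gate lists are invented, and the full metric is stricter, since every candidate mapping must also be consistent with the wiring left visible in the FEOL.

```python
from collections import Counter

def k_security_upper_bound(feol_gate_types, complete_gate_types):
    """Toy upper bound on k-security: a FEOL gate can be confused with
    at most the number of same-type gates in the complete netlist.
    The real metric is tighter, as mappings must also respect the
    visible FEOL connectivity."""
    counts = Counter(complete_gate_types)
    return min(counts[t] for t in set(feol_gate_types))
```

For example, a design with four NAND and two NOR gates is bounded by k = 2, since the NOR gates form the smallest indistinguishable group; lifting wires can only approach this bound, never exceed it.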
For their experimental study, they utilized the ISCAS\\textquotesingle85 benchmark suite and a DES crypto circuit with approximately 35,000 gates. The results are shown in Table~\\ref{tab:def_res2}, where $k=1$ is the original circuit and $k=48$ is achieved when all the wires are lifted. It is worth mentioning that, beyond defining the security metric, their defense technique was not validated using an actual proximity attack towards the modified netlist.\nAn artificial routing blockage\\footnote{This terminology is used in IC design to mean that a specific area should be avoided by the EDA tool for a specific task. A blockage can be for placement and/or for routing.} insertion that promotes wire lifting is proposed by Maga\\~na \\etal . The goal of this technique is to deceive proximity attacks by wire lifting. As discussed before, the objective of commercial EDA tools is to guarantee the best PPA possible. During the routing stage, lower metals are preferred for signal routing, promoting better PPA. Thus, routing blockages can be inserted at the split layer, forcing signals to be routed above the split layer. The result is an artificial wire lifting done during the routing stage. \nApplying this type of procedure must be done considering the design routability and overhead introduced, as well as top-level floorplan decisions for the power grid, clock distribution, and resources for busses. Larger designs are generally difficult to route -- simply reducing the number of routing layers can make the design unroutable. In , a procedure is proposed to insert routing blockages while ensuring that design routability is preserved. After a first routing stage, the design is divided into small rectangular non-overlapping windows. The routing congestion is then analyzed in each window at the split layer for the blockage insertion. If the area has capacity for more routing, a routing blockage is inserted; otherwise, the original routing is kept. 
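The congestion-driven blockage insertion described above can be sketched as follows; the per-window utilization figures and the 0.6 threshold are hypothetical values chosen purely for illustration.

```python
def insert_blockages(window_utilization, capacity=1.0, threshold=0.6):
    """Insert a routing blockage at the split layer only in windows
    with enough spare routing capacity, so the design stays routable.
    `window_utilization` maps window id -> used routing resources."""
    return [w for w, used in window_utilization.items()
            if used / capacity < threshold]

# Hypothetical congestion map after a first routing pass.
utilization = {"w00": 0.9, "w01": 0.3, "w10": 0.55, "w11": 0.7}
```

With these numbers, only the two lightly used windows receive blockages, forcing their nets above the split layer while the congested windows keep their original routing.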
Utilizing ISPD\\textquotesingle11 circuits, the technique is evaluated using the proximity attack proposed by , and its effectiveness is measured using two metrics, $E[LS]$ and $FOM$. The $E[LS]$ metric reports the expected candidate list size, averaged over different search areas. The $FOM$ metric is a figure of merit obtained from the candidate list size divided by the search area, when averaged over all the search areas at the split layer. According to , a higher value of $FOM$ means it is more challenging for an attack to be mounted because of the density of candidates (over the same search area). The results for the \\textit{Superblue 1} circuit are shown in Table~\\ref{tab:def_res2}.\nDesign for Manufacturability (DFM) has been an extremely important aspect of IC design for many years now. Manufacturing an IC is a sensitive process that involves many critical steps. Hence, a layout is required to be compliant with several rules to ensure its manufacturability. A layout is said to be manufacturable if there are no DRC violations. However, for a design to also achieve high yield, the layout must also pass strict DFM checks. The most common checks are related to wire and via density over predetermined region sizes. Until now, the defense techniques discussed were mainly concerned with security and PPA overheads. Feng \\etal argued that previous works have largely neglected manufacturability concerns. Therefore, they proposed two wire-lifting techniques that address two important DFM-related processes: Chemical Mechanical Planarization (CMP) and Self-Aligned Double Patterning (SADP) . The first technique, the CMP-friendly routing defense, is divided into layer elevation, wire selection, and re-routing. Layer elevation selects wires for lifting according to the following principles :\n \\begin{itemize}\n \\item The wire has a significant logic difference from its neighboring wires. 
As such, an incorrect connection in attacking this wire may lead to more signal differences.\n \\item The wire has large observability such that an erroneous guess by the adversary can easily affect the circuit primary output signals.\n \\item The wire segment is originally at a wire-dense region. The wire density of this region would be reduced by the layer elevation, making the corresponding FEOL layer's wire density more uniform.\n \\item The BEOL region where the wire is elevated to has low wire density so that the density of the corresponding BEOL layer is more uniform.\n \\end{itemize}\nPrinciples 1 and 2 aim to increase security in the same way as described in . After the wire lifting step, a set of wires is selected for re-routing. The selection has two purposes: CMP-friendliness and security improvement. For CMP-friendliness, wires located in dense regions are selected to be re-routed in sparse areas. For security improvement, decoys are inserted if the routing detour passes through a sparse area; otherwise, a suspicious attacker may realize that the detour is a defense measure. After selecting the set of wires to be re-routed, wires are re-routed one at a time. According to , their routing approach considers wire density, while the routing perturbation proposed by can be solely focused on security, and may not be CMP-friendly. Utilizing a graph representation, their re-routing method is based on Dijkstra's shortest-path algorithm, where the density of wires is used as a metric. \nWith a few exceptions, the SADP-compliant routing defense follows the same approach as described above. During wire lifting, the density is not considered. Wire re-routing is actually wire extension of FEOL wires as in . This wire extension of FEOL wires inevitably leads to re-routing of connected BEOL wires. According to , solving SADP violations by wire extension can also increase security, as it increases the distance between vias. 
The wire extension for simultaneous SADP-compliance and security is realized using Integer Linear Programming. In their experiments, ISCAS\\textquotesingle85 and ISPD\\textquotesingle11 are used to evaluate their techniques. Each technique, CMP-friendly and SADP-compliant routing, is evaluated separately. The results for the smallest and largest circuits are shown in Table~\\ref{tab:def_res2}.\nWire lifting approaches, in general, are not cost-free. As shown in the discussed results, wire-lifting-based defenses introduce a considerable PPA overhead. An approach to establish a cost-security trade-off is proposed by Patnaik \\etal , i.e., PPA margins for a given security budget. In , a concerted wire-lifting method is proposed. The authors claim to enable higher degrees of security while being cost-effective. For their method, custom elevating cells are used for executing the wire-lifting. Elevating cells connect gates or short wires directly to the first layer above the split layer. Their wire-lifting method utilizes three strategies: lifting high-fanout nets, controlling the distance for open pin pairs, and obfuscation of short nets. High-fanout nets are chosen to be lifted for two reasons: (a) a wrong connection made by the attacker propagates the error to multiple locations, and (b) lifting them introduces multiple open pin pairs. As the attack to overcome is the proximity one, controlling the distance between open pin pairs is necessary, which is achieved at will simply by controlling the placement of the elevating cells. According to , short nets may be easy for an attacker to identify and localize (from assessing driving strengths). Short wires are obfuscated by inserting an elevating cell with two pins close to each other, one being the true connection and the other a dummy connection. Finally, wires are lifted according to those strategies until a given PPA budget is reached. 
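A minimal sketch of this budget-driven, high-fanout-first selection is given below; the net names, fanouts, and overhead costs are invented for illustration, and a real cost model would account for the PPA impact of the elevating cells themselves.

```python
def concerted_lifting(nets, budget):
    """Greedily lift the highest-fanout nets first, skipping any net
    whose estimated overhead would exceed the remaining budget.
    `nets` maps net name -> (fanout, estimated_overhead)."""
    lifted, spent = [], 0.0
    for name, (fanout, cost) in sorted(nets.items(),
                                       key=lambda kv: -kv[1][0]):
        if spent + cost <= budget:
            lifted.append(name)
            spent += cost
    return lifted, spent

# Hypothetical nets: (fanout, estimated overhead of lifting).
nets = {"n_lowfo": (2, 1.0), "n_clk_en": (8, 3.0), "n_bus": (5, 2.5)}
```

With a budget of 5.0, the highest-fanout net is lifted first, the mid-fanout net no longer fits, and the remaining budget still covers the cheap low-fanout net, mirroring the cost-security trade-off described above.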
For their experimental study, ISCAS\\textquotesingle85 and ISPD\\textquotesingle11 circuits are utilized. However, results for attacks are presented only for ISCAS\\textquotesingle85 circuits. For ISPD\\textquotesingle11, only the PPA impact introduced by their technique is presented. Once again, we present the results for the smallest and largest of the studied circuits in Table~\\ref{tab:def_res2}.\nWhile the majority of studies reported in our survey make use of conventional transistors (bulk CMOS technologies with either planar or FinFET transistors), Yang \\etal proposed a design methodology to secure Split Manufacturing for Vertical Slit Field Effect Transistor (VeSFET)-based integrated circuits. VeSFET is a twin-gate device with a horizontal channel and four metal pillars implementing vertical terminals . While a detailed explanation of VeSFETs is beyond the scope of this work, we do highlight the differences between VeSFETs and conventional transistors. In contrast with conventional transistors, a VeSFET can be accessed by both top and bottom pillars, allowing two-side routing and friendly monolithic 3D integration . While we have so far considered ICs that have two distinct layers, the FEOL and BEOL, a VeSFET-based IC has multiple tiers of the transistor-containing layer. Connections between tiers can be made directly by the pillars (much like TSVs), or by a layer containing connections between tiers. A 2D VeSFET design contains only one tier and both top and bottom connections, whereas a 3D design contains two or more tiers. In summary, the notion of tier is pushed down to the transistor level in this device topology, thus making it an interesting platform for Split Manufacturing.\nThe method proposed by assumes that both foundries are untrusted and have the same capability (i.e., same technology). For 2D designs, the first foundry manufactures the tier with the top connections, comprising most of the connections. 
Then, the rest of the bottom connections, comprising the critical minority of connections, are completed by the second foundry. For 3D IC designs, they proposed special types of standard cells, referred to as cells A and B. Cell A has two tiers that are visible and manufactured by the first foundry, as well as inter-tier connections. Cell B has only the top tier visible and manufactured by the first foundry; the lower tier is completed by the second foundry, without inter-tier connections. Thus, transistors can be hidden from the first foundry as a security feature. Vulnerabilities claimed by for both 2D and 3D methods are described in Table~\\ref{tab:vesfet_vul}. Practices of reverse engineering and IC overbuilding are claimed to be impossible because the first foundry controls the number of wafers available to the second foundry.\nIncreasing the security of both 2D and 3D VeSFET designs is achieved by net partitioning, and exclusively for 3D designs, by transistor hiding and pin shaking. Net partitioning is performed similarly to the wire lifting techniques described above, where nets are chosen to be routed in the bottom connection layer, thus hiding them from the first foundry. Nets are selected starting from the sequential logic. First, all the high-fanout nets are selected to be partitioned. Next, the remaining nets are selected by a search area, where two approaches are used: distance-first search and high-fanout-first (FO-first) search. In the distance-first method, a pin in a predefined search window connecting to an un-partitioned net is selected when it has the minimum distance to the currently processed pin pair. The FO-first search method selects the pin connecting to the net having the highest fanout in the search window. Transistor hiding in 3D designs is done by utilizing cells similar to cell B. Cells connected only by partitioned nets are candidates for hiding. 
After selecting the candidates, the availability of unused transistors that are accessible to the second foundry in the lower tiers of the nearby cells is checked. If the available transistor count is sufficient, then the cell is hidden. The empty space created could provide clues for the first foundry about the security technique. Pin shaking is then applied to obfuscate the empty spaces. Some nearby cells are moved to this area to obfuscate the layout for any distance-based proximity attackers. In , 10 MCNC LGSynth\\textquotesingle91 benchmark circuits are used to evaluate the effectiveness of their methodology. The best and worst results are shown in Table~\\ref{tab:def_res2}. It is worth mentioning that, even though the VeSFET implementation mimics the layered structure of Split Manufacturing, the results cannot be compared side by side in a fair manner.\n \\begin{table}[htb]\n \\rowcolors{2}{gray!25}{white}\n \\centering\n \\caption{Vulnerabilities of Split Manufactured VeSFET-based Designs Described by .}\n \\begin{tabular}{lll}\n \\hline \\hline \\\\ [-1.5ex]\n \\textbf{Threats} & \\textbf{1st Foundry} & \\textbf{2nd Foundry} \\vspace{5pt} \\\\\n \\hline\n \\parbox{1cm}{Design\\\\ Reconstruction} & \\parbox{2.5cm}{2D IC: Very Difficult \\\\ 3D IC: Impossible} & \\parbox{2cm}{Impossible due to very limited\\\\ information} \\\\\n Trojan Insertion & \\parbox{2.5cm}{Possible, but will be\\\\ detected} & \\parbox{2cm}{No control of devices} \\\\\n \\parbox{1cm}{Reverse\\\\ Engineering} & Meaningless & Impossible\\\\\n IC Overbuilding & Meaningless & Impossible \\\\\n \\hline\n \\hline\n \\end{tabular}\n \\label{tab:vesfet_vul}\n \\end{table}", "id": "90e5175d-afc8-4844-8dfa-513e3fcc4fbd", "level": "subsection", "origin_cites_number": 13, "parent_id": "0381ac67-f861-41a5-94dd-d57b1d10cc30", "prefix_titles": [ [ "title", "A Survey on Split Manufacturing: Attacks, Defenses, and Challenges" ], [ "section", "Split Manufacturing Defenses" ], [ "subsection", "Wire 
Lifting" ] ], "subsections": [], "title": "Wire Lifting" }, { "cite_extract_rate": 0, "cites": [], "content": "The main goal of Split Manufacturing -- to hide sensitive information from untrusted foundries -- is compromised once we start to consider more regular structures such as memory. Even without knowing where all the routing goes to, an attacker can easily identify regular structures just by looking at the FEOL layout, possibly leading to easier attacks. Mitigating attacks towards regular structures could be done by obfuscating those structures in such a way that they become indistinguishable. In this section, we discuss works that propose layout obfuscation techniques to be used in a Split Manufacturing context.\nDuring the development of a modern IC, third-party IPs are sought to close a technological gap or to minimize time-to-market. IPs are typically categorized as soft and hard IPs: soft IPs typically come in code form, giving the customer flexibility to modify the IP such that it meets a given specification during synthesis. Therefore, soft IPs do not present a direct challenge for a Split Manufacturing design flow. Perhaps, and on a very specific scenario, a given IP can facilitate a proximity attack because it promotes certain library cells over others (i.e., it leads to a biased composition).\nOn the other hand, hard IPs are completely designed by the vendor and are technology dependent. In some instances, the vendor only provides an abstract of the IP; the customer then has to rely on the foundry to replace the abstract by the actual layout. Thus, splitting a hard IP is not trivial. Additional information is needed to be provided by the vendor, which is not guaranteed to be provided, making the IP completely incompatible with Split Manufacturing. Even when the customer holds the entirety of the IP layout, differences between the FEOL foundry and the BEOL foundry could make the IP no longer compliant and therefore virtually useless. 
Furthermore, defense techniques cannot be applied due to the lack of information or lack of feasibility. Hard IPs, such as embedded memories and specialized analog circuits, have been heavily optimized for maximum compactness, performance and yield. In today's IP market, there is still little concern about security in general, so it is not conceivable that any vendors will start to offer split IPs any time soon.\nThe security of hard IPs in a Split Manufacturing context was first analyzed by Vaidyanathan \\etal . A recognition attack flow was proposed for this purpose. An attacker holding the FEOL layer starts his attack by isolating a target embedded memory or analog hard IP. Since the targeted hard IP has a high probability of being constructed by compilation of leaf-cells, layout pattern recognition software can be used for trivial leaf-cell identification. After recognizing all the leaf-cells, the attacker attempts to infer the missing BEOL connections. Using proximity hints together with the knowledge about the regularized structure, the connections have a high likelihood of being guessed correctly. Demonstrated in , embedded memories, such as SRAM, are susceptible to the proposed recognition attack. Defending against recognition attacks can be achieved by means of obfuscation. According to , SRAM IPs can be obfuscated by the following methods:\n \\begin{itemize}\n \\item Randomization of periphery cells, thus avoiding predictable connections. \n \\item Minimization of regularized topologies used for peripheral circuits such as pre-decoders, word line decoders, sense amplifiers, etc.\n \\item Adding non-standard application-specific functions to improve obfuscation and performance.\n \\end{itemize}\nA synthesis framework is proposed by to obfuscate SRAM IPs. Referred to as application-specific SRAM, the methodology synthesizes SRAMs using augmented bitcell arrays and standard cell logic IP (instead of using leaf-cells). 
Such synthesis, when compared with conventional SRAM compilation, accomplishes all three obfuscation goals described above while still providing similar performance. \nAnalog hard IPs are also vulnerable to recognition attacks. In contrast with embedded memories (which are often compiled), analog hard IPs are mostly hand designed to cater for a challenging specification or interface. Even when such a degree of customization is employed, the majority of the design is done utilizing leaf-cells (e.g., current mirrors, matched arrays, etc.), thus disclosing important information that could be used as leverage for recognition attacks. In , two methods are proposed to defend analog hard IPs against such attacks:\n \\begin{itemize}\n \\item Obfuscation of analog leaf-cells.\n \\item Use of diverse topologies and architectures that enable obfuscation and efficiency. \n \\end{itemize}\nNext, let us discuss the techniques utilized in order to achieve the goals listed above. First, adding camouflaging dummy transistors in empty spaces can render leaf-cells indistinguishable. Second, regularizing transistor widths, which allows transistors with different channel lengths to abut each other, thereby obscuring boundaries across differently sized transistors. Third, utilizing the same idea behind wire-lifting, routing blockages can be inserted between transistors below the split layer. Such a routing scheme would make it difficult to infer the missing BEOL connections, virtually in the same way as it does for a standard-cell based design. \nTo demonstrate the feasibility and efficacy of their proposed approaches, the authors of designed and fabricated test chips in 130nm technology. For comparison, the same designs were Split Manufactured and conventionally manufactured. Split Manufacturing used Global Foundries Singapore as the untrusted foundry and IBM Burlington as the trusted foundry. Conventional manufacturing was entirely done in Global Foundries Singapore. 
The first reported design is a smart SRAM that targets an imaging application. Two implementations of a parallel 2x2 access 1Kb SRAM were demonstrated. For conventional manufacturing, the SRAMs were traditionally implemented, and for Split Manufacturing, the SRAMs were implemented using their smart synthesis approach. For their measurements, 10 chips were used to demonstrate the feasibility regarding PPA. The area reported for the split manufactured samples was 75\\% of the conventional approach, while the power consumption was 88\\%. Performance was the same between the conventional and split manufactured samples, i.e., both could work with the same clock frequency. The PPA advantage of Split Manufacturing reported in is not from the manufacturing itself. This advantage is from their smart memory synthesis approach, which was not applied to the conventional manufacturing samples.\nThe second demonstrated design is a DAC with statistical element selection. The test chip contains a high resolution 15-bit current steering DAC. Only a description of the results is presented, where the authors claim there are tiny measurement differences between the performance of the conventional and the split manufactured, emphasizing that the differences are within measurement noise.\nAn attacker trying to reverse engineer a split IC will try to recover as many connections as possible, while minimizing the Time To Evaluate (TTE), i.e., the amount of time needed to reverse engineer the IC. For Jagasivamani \\etal , the goal of a designer seeking to secure his design is to create an IC with a high TTE while being cost-effective regarding design effort and PPA overheads. If the TTE is high enough, an adversary would be discouraged from reverse engineering the IC. To achieve this goal, proposed obfuscation methods that require neither modifications to standard cells nor the implementation of any specialized cell. 
\nFour techniques are proposed by for layout obfuscation: (1) limited standard-cell library, (2) smart-dummy cell insertion, (3) isomorphic cells and (4) non-optimal cell placement. Along with the techniques, a set of metrics is presented to help assess the obfuscation level of a design. \\textit{Neighbor connectedness} is a measure of how interconnected cells are to their respective neighbors, i.e., how much proximity information is exposed to the attacker. For a specific cell, this metric is computed as the number of connections that cell has within a given radius around it. \\textit{Standard-cell composition bias} is a metric that addresses the effort required for composition analysis of a design. The bias signature could leak information about the function of the cell. Thus, this metric measures how skewed a design is according to a specific bias cell. In , they utilized three types of bias cells for this analysis: XOR-type, flip-flop type, and adder type of cells. \\textit{Cell-level obfuscation} is a metric that measures the percentage of standard-cells that have been obfuscated. \\textit{Entropy} is similar to the concept of mutual information previously discussed. \nTechnique (1) aims to achieve obfuscation by reducing the use of complex cells and instead favoring only simple cells to compose the design (i.e., to prefer single-stage cells over complex multi-stage cells). Removing specialized complex cells could obfuscate functional signatures due to the larger granularity that is employed to construct the cell. However, since the functionality of complex gates will have to be reconstructed through basic cells, a heavy PPA overhead is likely to occur when applying this technique. Technique (2) aims to obfuscate composition analysis by adding dummy cells in such a way that a neutral bias composition is achieved. Dummy cells are inserted as spare cells \\footnote{Spare cells are extra logic usually inserted during physical synthesis. 
These cells are used when an engineering change order (ECO) is required, such that small tweaks to the circuit logic can be performed with minimal changes to placement and routing.}, focusing solely on obfuscating the composition analysis. Technique (3) obfuscates the layout by regularizing the shapes of the cells in a library. All layouts of logic standard cells are made FEOL-identical such that the overall circuit layout appears to be a sea of gates. The functionality of the cells is defined later by the BEOL connections. Thus, the true functionally of the cell is hidden at the BEOL, making cell-level identification harder. Technique (4) employs the same strategy from placement perturbation discussed before. \nFor their experimental study, the authors of made use of a multiplier block with a high number of adder cells and a crypto-like circuit. Experiments were separated into limited library and smart dummy insertion. Results are shown as a percentage relative to the baseline circuit, i.e., without any protection approach applied. Neighbor connectedness (\\%) for a radius $\\leq 25nm$ decreased substantially for both test cases and circuits (for more information see ). 
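The neighbor-connectedness metric described above lends itself to a short sketch. The snippet below is an illustrative Python implementation under an assumed data model (cell placement coordinates and nets given as lists of cell names); it is not code from the surveyed works. For each cell, it counts how many of the cells it shares a net with lie within a given radius.

```python
from math import hypot

def neighbor_connectedness(cells, nets, radius):
    """Count, per cell, its connected neighbors within `radius`.

    cells: dict mapping cell name -> (x, y) placement coordinates.
    nets:  list of nets, each a list of the cell names it connects.
    Returns a dict: cell name -> number of connections within `radius`.
    (Illustrative data model; real layouts carry far more detail.)
    """
    # Two cells are considered connected if they share at least one net.
    neighbors = {c: set() for c in cells}
    for net in nets:
        for a in net:
            for b in net:
                if a != b:
                    neighbors[a].add(b)

    close_counts = {}
    for cell, (x, y) in cells.items():
        close_counts[cell] = sum(
            1
            for other in neighbors[cell]
            if hypot(cells[other][0] - x, cells[other][1] - y) <= radius
        )
    return close_counts

# Toy example: G1 and G2 share a net and sit close together; G3 is far away.
cells = {"G1": (0.0, 0.0), "G2": (3.0, 4.0), "G3": (100.0, 0.0)}
nets = [["G1", "G2"], ["G1", "G3"]]
print(neighbor_connectedness(cells, nets, radius=10.0))
# -> {'G1': 1, 'G2': 1, 'G3': 0}
```

A defense that lowers these counts (e.g., by spacing connected cells apart or inserting decoy cells) exposes less proximity information to an attacker holding only the FEOL layers.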
Overhead results are shown in Table~\\ref{tab:over_javs}, where the figures presented are normalized with respect to the baseline circuit.\n \\begin{table}[htb]\n \\centering\n \\caption{Impact on Performance from the Defense Approaches of .}\n \\begin{tabular}{cccc}\n \\hline \\hline \\\\ [-1.5ex]\n \\textbf{Benchmark} & \\textbf{Metric} & \\textbf{Limited Library} & \\textbf{Smart Dummy} \\vspace{5pt}\\\\\n \\hline\n \\multirow{2}{*}{mult24} &\\cellcolor[gray]{0.9} Area & \\cellcolor[gray]{0.9} 94.9\\% & \\cellcolor[gray]{0.9} 72.6\\% \\\\\n & Timing Slack & -64.8\\% & 3.4\\% \\\\\n \\multirow{2}{*}{a5/1} & \\cellcolor[gray]{0.9} Area & \\cellcolor[gray]{0.9} 69.8\\% & \\cellcolor[gray]{0.9} 69.4\\% \\\\\n & Timing Slack & -27.2\\% & -1.2\\% \\\\ \n \\hline \\hline\n \\end{tabular}\n \\label{tab:over_javs}\n \\end{table}\nUtilizing exactly the same concepts and metrics described in , Otero \\etal proposed a ``trusted split-foundry toolflow'' based on cellTK . The concept of the cellTK-based flow is to have on-demand cell generation from a transistor-level netlist. This is in stark contrast with a traditional ASIC flow that relies on a predefined (and thoroughly characterized) cell library. Leveraging cellTK, proposed an extension referred to as split-cellTK. This extension can generate multiple physical layouts for each unique cell without modifying the circuit topology, which is then used to implement obfuscation strategies. Two strategies are proposed, referred to as Uniform and Random. The Uniform strategy tries to standardize the size and spacing between cells by inserting dummy transistors to equalize the number of nMOS and pMOS devices and, after the cell placement, dummy cells are inserted in empty gaps. A Random strategy is also proposed in order to reduce the overheads introduced by the Uniform strategy. Instead of deliberately standardizing the size and spacing between cells, a specific number of empty spaces is chosen for these tasks. 
A more in-depth explanation about their strategies is beyond the scope of this work because they are closely related to cellTK itself. However, their goals and evaluations are the same as in . For their experimental study, they utilized the island-style asynchronous FPGA developed in . A test chip was Split Manufactured in 65nm and the design was synthesized with a cellTK-based approach, i.e., without any defense strategy. Their defense strategies were evaluated only by simulation. The performance results for the baseline, Uniform, and Random strategies, when applied to obfuscate an adder, are shown in Table~\\ref{tab:over_celltk}. Trustworthiness results are given in terms of neighbor connectedness and, for all implementations discussed, neighbor connectedness results were significantly smaller than the results reported in .\n \\begin{table}[htb]\n \\rowcolors{2}{gray!25}{white}\n \\centering\n \\caption{Performance Results Reported by to Obfuscate an Adder Circuit.}\n \\begin{tabular}{lllll}\n \\hline \\hline \\\\ [-1.5ex]\n \\parbox{1cm}{\\textbf{Technique}} & \\parbox{0.6cm}{\\textbf{Area}\\\\ ($\\mathbf{\\mu m^2}$)} & \\parbox{0.6cm}{\\textbf{Power (mW)}} & \\parbox{0.6cm}{\\textbf{Energy (pJ)}} & \\parbox{0.7cm}{\\textbf{Perf. (MHz)}} \\vspace{5pt} \\\\\n \\hline\n Baseline & 462 & 0.146 & 0.257 & 568 \\\\\n Uniform & 717 & 0.149 & 0.307 & 486 \\\\\n Random & 760 & 0.164 & 0.303 & 542 \\\\\n \\hline\n \\hline\n \\end{tabular}\n \\label{tab:over_celltk}\n \\end{table}\nIn the context of obfuscation, but also generally for Split Manufacturing, a higher level of security is achieved when the chosen layer to perform the split is the lowest possible. Xiao \\etal pointed out that splitting at lower metal layers could increase the cost to manufacture the IC; it is argued that the FEOL-BEOL integration process must be more `precise' for correct alignment. Thus, a closer technology match between the trusted and untrusted foundries is required. 
As we previously argued, if the goal is to make use of the best silicon available from an untrusted foundry, the implication is that the trusted foundry cannot provide a legacy technology, but perhaps a mature yet still relevant technology is sufficient. In , a methodology for obfuscating the layout is proposed for split at M3 or higher, while keeping the cost as low as possible and at the same time providing a high level of security. Their strategy is similar to the insertion of dummy cells; however, functional cells are inserted instead. Referred to as obfuscated built-in self-authentication (OBISA) cells, the inserted functional cells are connected together to form a circuit. As the circuit is connected to the original circuit it is trying to protect, they claim this fact makes it extremely difficult for an attacker to separate the OBISA design from the original design. The idea behind OBISA is to obfuscate the layout by hindering neighbor connectedness analysis and standard-cell composition bias analysis while also perturbing the proximity between gates. As illustrated in Figure~\\ref{fig:obisa}, two additional functional cells, O1 and O2, were placed between gates G2 and G3, and between G3 and G4, respectively. The insertion of these additional functional cells could deceive proximity attacks, assuming that the EDA tool would place the gates between OBISA cells farther apart than in the original circuit.\nThe proposed OBISA circuitry has two operating modes: functional and authentication. During functional mode, the OBISA circuitry stays idle and incoming signals and clock are gated/blocked. Thus, the original circuit is not affected by OBISA and operates as it should. As the name suggests, when in authentication mode, OBISA is used to verify the trustworthiness of the manufactured IC (in the field). The specifics of the authentication are beyond the scope of this work and will not be discussed. 
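The intuition behind perturbing proximity can be made concrete with a toy sketch. The Python snippet below models a naive distance-based proximity guess (each open pin is paired with its nearest peer) and shows how inserting an OBISA-style decoy pin between two gates flips the guess. All pin names and coordinates are illustrative assumptions, not data from the surveyed work.

```python
from math import hypot

def proximity_guess(open_pins):
    """Naive proximity heuristic: pair each open pin with its nearest peer.

    open_pins: dict mapping pin name -> (x, y) coordinates.
    Returns a dict: pin name -> name of the closest other pin.
    """
    guesses = {}
    for name, (x, y) in open_pins.items():
        candidates = [(hypot(px - x, py - y), peer)
                      for peer, (px, py) in open_pins.items() if peer != name]
        guesses[name] = min(candidates)[1]
    return guesses

# Without obfuscation, G2's output pin is nearest to G3's input pin,
# so the heuristic recovers the true connection.
pins = {"G2.out": (10.0, 0.0), "G3.in": (12.0, 0.0), "G4.in": (30.0, 0.0)}
print(proximity_guess(pins)["G2.out"])  # -> G3.in

# Inserting a decoy cell O1 between G2 and G3 pushes G3 farther away and
# offers a closer pin, so the same heuristic now guesses the decoy.
pins_obisa = {"G2.out": (10.0, 0.0), "O1.in": (11.0, 0.0),
              "G3.in": (25.0, 0.0), "G4.in": (30.0, 0.0)}
print(proximity_guess(pins_obisa)["G2.out"])  # -> O1.in
```

Real proximity attacks use much richer hints (routing direction, load capacitance, dangling-wire orientation), but the same placement-perturbation principle applies.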
The insertion of OBISA cells follows a similar strategy of dummy cell insertion as discussed in . The connections of the inserted cells are done in a way to promote the testability of the OBISA circuit and increase the obfuscation strength. Their approaches were evaluated using benchmark circuits from OpenCores. Results for the smallest and largest circuits are shown in Table~\\ref{tab:obisa_res}. \n \\begin{table}[htb]\n \\rowcolors{2}{gray!25}{white}\n \\centering\n \\caption{Implementation Results from .}\n \\begin{tabular}{lllll}\n \\hline \\hline \\\\ [-1.5ex]\n \\textbf{Benchmark} & \\parbox{0.8cm}{\\textbf{Gate count}} & \\parbox{1.5cm}{\\textbf{OBISA cell count}} & \\textbf{Total nets} & \\textbf{Nets} $\\mathbf{\\geq M4}$ \\vspace{5pt}\\\\\n \\hline\n DES3 & 1559 & 158 & 1799 & 127 \\\\\n DES\\_perf & 49517 & 2090 & 49951 & 1343 \\\\\n \\hline \\hline\n \\end{tabular}\n \\label{tab:obisa_res}\n \\end{table}\n \\Figure[t!](topskip=0pt, botskip=0pt, midskip=0pt)[width=0.95\\linewidth]{obisa_cir.pdf}\n{Circuit representation with OBISA cells (square cells) inserted (adapted from ). \\label{fig:obisa}}\nAnother study using look-alike cells is reported by Masoud \\etal where the goal remains to make the attacker unable to distinguish cells and their inputs/outputs, thus mitigating attacks to some degree. In this study, two types of search algorithms are proposed to replace cells for isomorphic cells. In contrast with where all cells are replaced, in , only cells with high impact on the security are replaced. Thus, the overhead introduced by cell replacement can be controlled (i.e., a trade-off is established). The proposed algorithms are based on `gate normalization', whereby truth tables of cells are analyzed in order to balance the occurrence of 0s and 1s (e.g., XOR and XNOR gates are normalized by definition). An analysis is made by replacing existing gates by XORs and comparing the deviation from the original circuit. 
If the deviation is larger than a given threshold, the gates are effectively replaced. \nA novel layout obfuscation framework is proposed by Li \\etal which builds on the wire lifting concept of . According to the authors, wire lifting alone is not enough to secure a design. If an attacker can tell the functionality of a specific gate that had its wires lifted, the security is already compromised. To address this problem, a framework that considers dummy cells and wire insertion simultaneously with wire lifting is proposed. As in , threat model II was used. The proposed framework makes use of a mixed-integer linear programming (MILP) formulation for the FEOL layer generation and a Lagrangian relaxation (LR) algorithm to improve scalability. The generation of the new FEOL layout considers three operations: wire-lifting, dummy cell insertion, and dummy wire insertion. Dummy wire insertion is done only on dummy cells; thus, the original functionality of the circuit is guaranteed to remain and floating pins are avoided. Utilizing a graph representation, they re-formulate the security metric to accommodate dummy cell and dummy wire insertion. Since the original graph isomorphism relationship is lost when new nodes are inserted, a new approach has to be used to formalize the relationship between the original and the new FEOL; this concept is denoted as k-isomorphism and the associated security analysis is denoted as \\textit{k-security}. In their experimental study, TrustHub trojan insertion methods are used to select the nodes for protection. They used ISCAS\\textquotesingle85 benchmark circuits together with functional units (shifter, alu, and div) from the OpenSPARC T1 processor . Comparisons between the MILP and LR algorithms are done for several \\textit{k-security} levels, and the results are given in terms of area overhead (AO) and wire-length overhead (WLO). The results for a few of the security levels are shown in Table~\\ref{tab:li_milplr}. 
\n \\begin{table}[htb]\n \\rowcolors{2}{gray!25}{white}\n \\centering\n \\caption{Comparison Between MILP and LR Algorithms for the c4232 circuit .}\n \\begin{tabular}{clll}\n \\hline \\hline \\\\ [-1.5ex]\n \\textbf{Security Level } & \\textbf{Algorithm} & \\textbf{AO(\\%)} & \\textbf{WLO(\\%)} \\vspace{5pt}\\\\\n \\hline\n 15 & MILP & 18& 180\\\\\n 20 & MILP & 41 & 220\\\\\n 25 & MILP & 58 & 295\\\\\n 15 & LR & 18& 200\\\\\n 20 & LR & 40 & 230\\\\\n 25 & LR & 60 & 305\\\\\n \\hline \\hline\n \\end{tabular}\n \\label{tab:li_milplr}\n \\end{table}", "id": "4032829b-2e93-4b12-822b-7b2a6e653dbc", "level": "subsection", "origin_cites_number": 12, "parent_id": "0381ac67-f861-41a5-94dd-d57b1d10cc30", "prefix_titles": [ [ "title", "A Survey on Split Manufacturing: Attacks, Defenses, and Challenges" ], [ "section", "Split Manufacturing Defenses" ], [ "subsection", "Layout Obfuscation" ] ], "subsections": [], "title": "Layout Obfuscation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:trends}\nDespite our effort to present the results of the many studied papers in the most fair way possible, it is clear that the hardware security community lacks a \\textit{unified benchmark suite} and/or a \\textit{common criteria} for assessing results. Often, researchers make use of benchmark suites that are popular in the Test community but have no real applicability in security. For instance, the ISCAS\\textquotesingle85 suite has no crypto cores in it, which are the bread and butter of the research in the area. Furthermore, we believe the community would largely benefit from using circuits that better represent IC design practices of this decade where IPs often have millions of gates and ICs have billions of transistors.\nWhile the lack of a common criteria is an issue for the academic community, the lack of an industry-supported path for Split Manufacturing is even more troubling. 
Today, more than ever, foundries compete for the title of `best silicon' and rarely engage in cross-foundry cooperation. Efforts of the past, such as the now defunct Common Platform of IBM, Samsung and GF, could have been a catalyst for the adoption of Split Manufacturing. Without such collaboration, it is hard to foresee a future where the technique will gain traction again. Furthermore, the study of DFM-related implications of the technique is severely hampered by the fact that we cannot measure yield from massively produced Split Manufactured chips. \nWe have discussed in detail how many attacks leverage heuristics and hints left behind by the EDA tools. Many of these hints are very logical and can be appreciated, even graphically, as we illustrated in Figure~\\ref{fig:prox_sa}. It is entirely possible that machine learning approaches can detect subtle biases in the tools that are not easy to appreciate graphically. There is no consolidated knowledge of what these biases are and to what extent machine learning is effective in detecting them. This avenue of research is certainly interesting and we believe it will be the target of many papers in the near future.\n \\Figure[t!](topskip=0pt, botskip=0pt, midskip=0pt)[width=0.95\\linewidth]{pie_chart.pdf}\n{Techniques validated in silicon among presented works. \\label{fig:pizza}}\nIt is also worth discussing the attack models that have been proposed so far. We have previously highlighted how strong Threat model II is, but the fact of the matter is that defining the threat model is a complicated exercise in which we seek to establish the capabilities of the attacker. By definition, formalizing the capabilities of an attacker requires understanding his motivations, technical proficiency, and availability of resources. If the attacker is underestimated, useless defense strategies can be devised and assumed to be effective. 
If the attacker is overestimated, convoluted defense strategies might be employed, leading to unnecessary PPA overheads. This is a challenge for Split Manufacturing and many other techniques that promote obfuscation.\nAnother topic that has led to no consensus is whether an attacker can make use of a partially recovered netlist. For instance, let us assume a design that instantiates the same block multiple times. If one of the blocks is correctly recovered, perhaps a cursory inspection of the structure will allow the attacker to recover all other instances of the same block. The same line of thinking can be applied to datapaths and some cryptographic structures that are regular in nature. In a sense, an analysis of the functionality of the recovered netlist could be combined with existing attacks for further improvement of correctly guessed connections.\nWe note that many of the works studied in this survey have not actually demonstrated their approach in silicon. This fact is summarized in Figure~\\ref{fig:pizza}. As a community effort, we should strive to validate our approaches in silicon as often as possible. However, as discussed before, finding two foundries willing to diverge from their established practices could be next to impossible. This is likely the main reason that such small percentage of the works herein reported have validated their techniques in silicon.", "id": "5c7f1b9c-d130-400e-ae7c-f7e824d126c3", "level": "section", "origin_cites_number": 0, "parent_id": "4df5dfa6-5506-4133-aae0-08641ac96086", "prefix_titles": [ [ "title", "A Survey on Split Manufacturing: Attacks, Defenses, and Challenges" ], [ "section", "Future trends and challenges" ] ], "subsections": [], "title": "Future trends and challenges" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:conclusion} \nOur findings showed a big disparity on how the Split Manufacturing technique is approached among the surveyed studies. 
A variety of benchmark suites and metrics were used for evaluation, making direct comparisons between studies very difficult -- and, in some cases, impossible. In spite of that, we were able to classify the studies, clearly demonstrating the many interpretations of the technique, its attacks, and defenses. Our belief is that this survey assesses the most significant studies about Split Manufacturing as we focused on papers that appear in highly regarded venues. Results gathered from the surveyed studies were compiled such that main features, metrics, and performance results are available. Regarding the results themselves, these are presented in such a manner as to illustrate the present state of the technique. Therefore, this work can be very helpful for future researchers to contextualize their own techniques for augmenting Split Manufacturing.\nOverall, the security of Split Manufacturing is still under debate. Some studies conclude that the technique is indeed secure, and others that it is not. However, these conclusions are reached for different scenarios, i.e., using different benchmark circuits and sets of metrics. Creating a unified benchmark suite suitable for Split Manufacturing evaluation, along with a unified set of metrics to quantify/qualify its performance, could facilitate the discussion about its security. In addition, increasing the number of demonstrations in silicon could also help with evaluation and adoption issues related to Split Manufacturing.\n\\bibliographystyle{ieeetr}\n\\bibliography{split_refs}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{perez_t_d.jpg}}]{Tiago D. Perez} received the M.S. degree in electrical engineering from the University of Campinas, S\~ao Paulo, Brazil, in 2019. He is currently pursuing a Ph.D. degree at Tallinn University of Technology (TalTech), Tallinn, Estonia. \nFrom 2014 to 2019, he was a Digital Designer Engineer with Eldorado Research Institute, S\~ao Paulo, Brazil. 
His fields of work include digital signal processing, telecommunication systems and IC implementation. His current research interests include the study of hardware security from the point of view of digital circuit design and IC implementation.\n\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{pagliarini_s.png}}]{Samuel Pagliarini}\n(M'14) received the PhD degree from Telecom ParisTech, Paris, France, in 2013. \nHe has held research positions with the University of Bristol, Bristol, UK, and with Carnegie Mellon University, Pittsburgh, PA, USA. He is currently a Professor of Hardware Security with Tallinn University of Technology (TalTech) in Tallinn, Estonia where he leads the Centre for Hardware Security. His current research interests include many facets of digital circuit design, with a focus on circuit reliability, dependability, and hardware trustworthiness.\n\\end{IEEEbiography}\n\\EOD\n\\end{document}", "id": "347259ba-3c12-43d7-b6a0-2402a507ae62", "level": "section", "origin_cites_number": 0, "parent_id": "4df5dfa6-5506-4133-aae0-08641ac96086", "prefix_titles": [ [ "title", "A Survey on Split Manufacturing: Attacks, Defenses, and Challenges" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
91
[ 7800, 7798, 7799 ]
0.359591
[ "Christopher Schröder", "Andreas Niekler" ]
A Survey of Active Learning for Text Classification using Deep Neural Networks
2020
2020-08-17T12:53:20Z
cs.CL
Natural language processing (NLP) and neural networks (NNs) have both undergone significant changes in recent years. For active learning (AL) purposes, NNs are, however, less commonly used -- despite their current popularity. By using the superior text classification performance of NNs for AL, we can either increase a model's performance using the same amount of data or reduce the data and therefore the required annotation efforts while keeping the same performance. We review AL for text classification using deep neural networks (DNNs) and elaborate on two main causes which used to hinder the adoption: (a) the inability of NNs to provide reliable uncertainty estimates, on which the most commonly used query strategies rely, and (b) the challenge of training DNNs on small data. To investigate the former, we construct a taxonomy of query strategies, which distinguishes between data-based, model-based, and prediction-based instance selection, and investigate the prevalence of these classes in recent research. Moreover, we review recent NN-based advances in NLP like word embeddings or language models in the context of (D)NNs, survey the current state-of-the-art at the intersection of AL, text classification, and DNNs and relate recent advances in NLP to AL. Finally, we analyze recent work in AL for text classification, connect the respective query strategies to the taxonomy, and outline commonalities and shortcomings. As a result, we highlight gaps in current research and present open research questions.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "4e01ebc7-074c-4060-b09d-956a012ff8ea", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ] ], "subsections": [ "f9d1ed5f-ebd8-41c8-b85b-c14ee4a51e33", "10ab6316-68f8-4bea-9503-5f9388a60998", "3e095066-9c33-4519-a7ec-ec4550b8cb2d", "909d9662-92b7-4a13-b530-6c9553233a2e", "847fa582-b0b5-403a-9242-ea4d17738bff", "3ba159ca-0eee-4f69-aedd-91724bce7349", "ca5cf519-fe65-47a3-9e14-707edbf02174" ], "title": "root" }, { "cite_extract_rate": 0.32, "cites": [ 7764, 1159, 7165, 1096, 4022, 8385, 1684, 11 ], "content": "\\label{sec:introduction}\nData is the fuel of machine learning applications and therefore has been steadily increasing in value.\nIn many settings an abundant amount of unlabeled data is produced,\nbut in order to use such data in supervised machine learning, \none has no choice but to provide labels.\nThis usually entails a manual labeling process, which is often non-trivial and can even require a domain expert, \ne.g., in patent classification , or clinical text classification .\nMoreover, this is time-consuming and rapidly increases monetary costs, thereby quickly rendering this approach infeasible.\nEven if an expert is available, it is often impossible to label each datum due to the vast size of modern datasets.\nThis especially impedes the field of Natural Language Processing (NLP), \nin which both the dataset and the amount of text within each document can be huge,\nresulting in unbearable amounts of annotation efforts for human experts.\nActive Learning (AL) aims to reduce the amount of data annotated by the human expert.\n\\revised{It is an iterative cyclic process between an {\\em oracle} (usually the human annotator)\nand an {\\em active learner}.}\nIn contrast to passive learning, in which the data is simply fed to the algorithm, \nthe active learner chooses which 
samples are to be labeled next. \nThe labeling itself, however, is done by a human expert, the so-called human in the loop.\nHaving received new labels, the active learner trains a new model and the process starts from the beginning.\n\\revised{Using the term active learner, we refer to the composition of a {\\em model}, a {\\em query strategy}, and a {\\em stopping criterion}.}\nIn this work the model is w.l.o.g. a text classification model, \nthe query strategy decides which instances should be labeled next, and the stopping criterion defines when to stop the AL loop.\nAccording to \\textcite{settles2010active} there are three main scenarios for AL:\n(1) Pool-based, in which the learner has access to the closed set of unlabeled instances, called the pool;\n(2) stream-based, where the learner receives one instance at a time and has the option to keep or discard it;\n(3) membership query synthesis, \\revised{in which the learner creates new artificial instances to be labeled}.\nIf the pool-based scenario operates not on a single instance, but on a batch of instances, this is called {\\em batch-mode} AL .\nThroughout this work we assume a pool-based batch-mode scenario because in a text classification setting the dataset is usually a closed set, and the batch-wise operation reduces the number of retraining operations, which cause waiting periods for the user.\\\\\n\\pindent \\revised{The underlying idea of AL is that few representative instances can be used as a surrogate for the full dataset.}\nNot only does a smaller subset of the data reduce the computational costs, \nbut it has also been shown that AL can even increase the quality of the resulting model compared to learning on the full dataset .\nAs a consequence, AL has been used in many NLP tasks, e.g. 
text classification , named entity recognition , \nor machine translation and is still an active area of research.\nIn recent years, deep learning (DL) approaches have dominated most NLP tasks' state-of-the-art results.\nThis can be attributed to advances in neural networks (NNs), above all Convolutional Neural Networks (CNN; ) and (Bidirectional-)Long Short-Term Memory (LSTM; ),\nwhich were eventually adopted into the NLP domain,\nand to the advances of using word embeddings and contextualized word embeddings .\nBoth NN architectures and text representations have raised the state-of-the-art results in the field of text classification considerably (e.g., ).\nIf these improvements were transferrable to AL, this would result in a huge increase in efficiency.\nFor the AL practitioner, this either means achieving the same performance using fewer samples, or having an increase in performance using the same amount of data.\nAnother favorable development is that transfer learning, especially the paradigm of fine-tuning pre-trained language models (LMs), has become popular in NLP.\nIn the context of AL this helps especially in the small data scenario, in which a pre-trained model can be leveraged to train a model by fine-tuning using only little data, which would otherwise be infeasible.\nFinally, by operating on sub-word units LMs also handle out-of-vocabulary tokens, which is an advantage over many traditional methods.\nResulting from these advances, existing AL surveys have become both incomplete in some parts and outdated in others:\nThey lack comparison against the current state of the art models, do not provide results for more recent large-scale datasets, \nand most importantly, they are lacking the aforementioned advances in NNs and text representations.\nSurprisingly, despite the current popularity of NNs, there is only little research about NN-based active learning in the context of NLP, and even less thereof in the context of text classification (see Section 
\\ref{subsect:nn_al} and Section \\ref{subsect:active_tc} for a detailed summary).\nWe suspect this is due to the following reasons: \n(1) Many DL models are known to require large amounts of data ,\nwhich is in strong contrast to AL aiming at requiring as little data as possible;\n(2) there is a whole AL scenario based on artificial data generation, which unfortunately is much more challenging for text than for, e.g., images, for which data augmentation is commonly used in classification tasks ;\n(3) NNs are lacking uncertainty information regarding their predictions \\revised{(as explained in Section \\ref{subsect:nn_al})}, which complicates the use of a whole prominent class of query strategies.\n\\hiddennotes{\n \\begin{itemize}\n \\item No clear evaluation protocol / huge differences in the evaluation approaches\n \\end{itemize}\n}\nThis survey aims at summarizing the existing approaches of (D)NN-based AL for text classification.\nOur main contributions are as follows:\n\\begin{enumerate}\n \\item We provide a taxonomy of query strategies and classify strategies relevant for AL for text classification.\n \\item We survey existing work at the intersection of AL, text classification, and (D)NNs.\n \\item Recent advances in text classification are summarized and related to the AL process. It is then investigated if and to what degree they have been adopted for AL.\n \\item The experimental setup of previous research is collectively analyzed regarding datasets, models, and query strategies in order to identify recent trends, commonalities, and shortcomings in the experiments.\n \\item \\revised{We identify research gaps and outline future research directions.}\n\\end{enumerate}\n\\noindent Thereby we provide a comprehensive survey of recent advances in NN-based active text classification. 
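The pool-based batch-mode AL loop assumed throughout this survey can be sketched as follows (an illustrative sketch, not code from any surveyed work; all function names, the random-sampling baseline, and the iteration-budget stopping criterion are assumptions):

```python
import random

def active_learning_loop(pool, label_fn, train_fn, query_strategy,
                         batch_size=10, max_iterations=5):
    """Pool-based batch-mode AL: repeatedly query a batch of unlabeled
    instances, let the oracle (label_fn) label them, and retrain."""
    labeled, unlabeled = [], list(pool)
    model = None
    for _ in range(max_iterations):  # stopping criterion: iteration budget
        batch = query_strategy(model, unlabeled, batch_size)
        for x in batch:
            unlabeled.remove(x)
            labeled.append((x, label_fn(x)))  # oracle assigns a label
        model = train_fn(labeled)  # retrain on the grown labeled pool
        if not unlabeled:
            break
    return model, labeled

def random_query(model, unlabeled, k):
    """Random sampling, the usual baseline query strategy."""
    return random.sample(unlabeled, min(k, len(unlabeled)))
```

Swapping `random_query` for an uncertainty- or representativeness-based strategy changes only the selection step; the loop itself stays fixed.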
\nHaving reviewed these recent advances, we illuminate areas that either need re-evaluation, or have not yet been evaluated in a more recent context.\nAs a final result, we develop research questions outlining the scope of future research.", "id": "f9d1ed5f-ebd8-41c8-b85b-c14ee4a51e33", "level": "section", "origin_cites_number": 25, "parent_id": "4e01ebc7-074c-4060-b09d-956a012ff8ea", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 1159 ], "content": "\\textcite{settles2010active} provides a general active learning survey, \nsummarizing the prevalent AL scenario types and query strategies.\nThey present variations of the basic AL setup like \n variable labeling costs or alternative query types, \nand most notably, they discuss empirical and theoretical research investigating the effectiveness of AL: \nThey mention research suggesting that AL is effective in practice \nand has increasingly gained adoption in real world applications. 
However, it is pointed out that empirical research also reported cases in which AL performed worse than passive learning and that the theoretical analysis of AL is incomplete.\nFinally, relations to related research areas are illustrated, thereby connecting AL among others to reinforcement learning and semi-supervised learning.\n\\hiddennotes{\n settles2009/settles2010active \n \\begin{itemize}\n \\item scenarios: membership query synthesis, stream-based, pool-based\n \\item query strategies\n \\begin{itemize}\n \\item keine Klassifikation vorgenommen\n \\end{itemize}\n \\item most important (at that time): analysis\n \\begin{itemize}\n \\item does it work?\n \\item both empirical and theoretical evidence\n \\end{itemize}\n \\item TODO: problem setting variants, practical considerations (only in 2010 version)\n \\item related areas: semi-supervised, reinforcement, submodular optimization, equivalence query learning, model parroting and compression\n \\end{itemize} \n}\nThe survey of \\textcite{fu2013survey} is focused around a thorough analysis of uncertainty-based query strategies, which are categorized into a taxonomy.\nThis taxonomy differentiates at the topmost level between the uncertainty of i.i.d. instances and instance correlation. The latter is a superset of the former \nand intends to reduce redundancy among instances by considering feature, label, and structure correlation when querying.\nMoreover, they perform an algorithmic analysis for each query strategy \nand order the strategies by their respective time complexity, highlighting the increased complexity for correlation-based strategies.\n\\hiddennotes{\n \\begin{itemize}\n \\item survey on instance selection\n \\end{itemize} \n}\nAnother general survey covering a wide range of topics was conducted by \\textcite{aggarwal2014active}. 
\nThey provide a flat categorization of query strategies, which is quite different from the taxonomy of \\textcite{fu2013survey} and divides them into the following three categories: (1) ``heterogeneity-based'', which sample instances by their prediction uncertainty or dissimilarity compared to existing labeled instances, (2) ``performance-based'', which select instances based on a predicted change of the model loss, and (3) ``representativeness-based'', which select data points to reflect a larger set in terms of their properties, usually achieved by means of distribution density .\nSimilarly to , they present and discuss many non-standard variations of the active learning scenario.\n\\revised{\n An NLP-focused active learning survey was performed by \\textcite{olsson2009literature}.\n This work's main contribution is a survey of disagreement-based query strategies, which use the disagreement among multiple classifiers to select instances.\n Moreover, Olsson reviews practical considerations, e.g., selecting an initial seed set,\n deciding between the stream-based and pool-based scenarios, and deciding when to terminate the learning process.\n Although some NN-based applications are mentioned, none of the above surveys covers NN-based AL in depth.\n Besides, none is recent enough to cover NN-architectures, which have only recently been adapted successfully to text classification problems, e.g., KimCNN .\n The same holds true for recent advances in NLP such as word embeddings, contextualized language models (explained in Section \\ref{subsect:tc_recent_adv}), or resulting advances in text classification (discussed in Section \\ref{subsect:nn_text_classification} and Section \\ref{subsect:active_tc}).\n We intend to fill these gaps in the remainder of this survey.\n}\n\\hiddennotes{\n \\begin{itemize}\n \\item NLP-centric Active Learning Survey\n \\item describes multiple active learning modes: Query by uncertainty, query by committee, redundant views\n \\item 
extensively reviews strategies to measure disagreement\n \\end{itemize}\n}", "id": "10ab6316-68f8-4bea-9503-5f9388a60998", "level": "section", "origin_cites_number": 3, "parent_id": "4e01ebc7-074c-4060-b09d-956a012ff8ea", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Related Work" ] ], "subsections": [], "title": "Related Work" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sect:active_learning}\nThe goal of AL is to create a model using as few labeled instances as possible,\ni.e. minimizing the interactions between the oracle and the active learner.\nThe AL process (illustrated in Figure \\ref{fig:al_process}) is as follows: The oracle requests unlabeled instances from the active learner ({\\em query}, see Figure \\ref{fig:al_process}: step 1), \nwhich are then selected by the active learner (based on the selected query strategy) and passed to the oracle (see Figure \\ref{fig:al_process}: step 2).\nSubsequently, these instances are labeled by the oracle and returned to the active learner ({\\em update}, see Figure \\ref{fig:al_process}: step 3).\nAfter each update step the active learner's model is retrained, which makes this operation at least as expensive as a training of the underlying model.\nThis process is repeated until a stopping criterion is met (e.g., a maximum number of iterations or a minimum threshold of change in classification accuracy).\n\\begin{figure}[!h]\n \\centering \n \\includegraphics{figures/al_process.pdf}\n \\caption{An overview of the AL process: Model, query strategy, and (optionally) a stopping criterion are the key components of an active learner.\n The main loop is as follows:\n First the oracle queries the active learner, which returns a fixed number of unlabeled instances.\n Then, the oracle assigns labels to all selected unlabeled instances.\n This process is repeated until the oracle stops, or a predefined stopping criterion 
is met.}\n \\label{fig:al_process}\n\\end{figure}\n\\noindent The most important component for AL is the query strategy.\nIn the introduction we claimed that a large fraction of query strategies are uncertainty-based.\nTo analyze this we provide a taxonomy of query strategies in the following section and highlight the parts in which uncertainty is involved.\nFor a general and more detailed introduction to AL, refer to the surveys of \\textcite{settles2010active} and \\textcite{aggarwal2014active}.", "id": "3e095066-9c33-4519-a7ec-ec4550b8cb2d", "level": "section", "origin_cites_number": 0, "parent_id": "4e01ebc7-074c-4060-b09d-956a012ff8ea", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Active Learning" ] ], "subsections": [ "3effa5dc-9905-4ec7-8c3d-464cfd4bd1d9", "aff7fdcf-a8ec-4ce1-9895-2dad247f6097" ], "title": "Active Learning" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 1042, 4024, 4023, 11, 8716, 8717 ], "content": "\\label{subsec:query_strategies}\nIn Figure \\ref{fig:query_strategies} we classify the most common AL query strategies based on a strategy's {\\em input information}, which denotes the numeric value(s) a strategy operates on.\nIn our taxonomy the input information can be either random or one of data, model, and prediction.\n\\phrasing{These categories are ordered by increasing complexity and are not mutually exclusive.}\n\\phrasing{Obviously, the model is a function of the data, just as the prediction is a function of the model and data; moreover, in many cases a strategy uses multiple of these criteria.}\nIn such cases we assign the query strategy to the most specific category (i.e. 
prediction-based precedes model-based, which in turn precedes data-based).\n\\begin{figure}[!h]\n {\n \\scriptsize\n \\centering\n \\begin{forest} for tree={\n grow'=0, draw\n },\n forked edges,\n [\\scshape query strategies, no edge, root\n [\\scshape random, onode_dashed\n [,phantom]\n ]\n [\\scshape data-based, onode\n [\\scshape data uncertainty, onode\n [discriminative\\\\ , tier=leaf, tnode]\n ]\n [\\scshape representativeness, onode\n [\\scshape clustering, onode\n [flat\\\\ , tier=leaf, tnode]\n [hierarchical\\\\ , tier=leaf, tnode]\n ]\n [\\scshape set construction, onode\n [core-set\\\\ , tier=leaf, tnode]\n ]\n ]\n ]\n [\\scshape model-based, onode\n [\\scshape model uncertainty, onode\n [UNC-IE\\\\ , tier=leaf, tnode]\n ]\n [\\scshape expected parameter\\\\change, onode\n [expected gradient length\\\\ , tier=leaf, tnode]\n [expected weight change\\\\ , tier=leaf, tnode]\n ]\n [\\scshape adversarial, tnode\n [DFAL\\\\ , tier=leaf, tnode]\n ] \n ]\n [\\scshape prediction-based, name=prediction_based, onode\n [\\scshape prediction\\newline uncertainty, name=prediction_uncertainty, tnode\n [\\scshape probabilistic, onode\n [uncertainty sampling , tier=leaf,tnode]\n ]\n [\\scshape margin-based, onode\n [version space\\\\ , tier=leaf,tnode]\n [closest to hyperplane\\\\ , tier=leaf,tnode]\n ]\n [\\scshape entropy, name=entropy, onode\n [BALD\\\\ , tier=leaf,tnode]\n ]\n ]\n [\\scshape discriminative, onode\n [DAL\\\\ , tier=leaf , tnode]\n ] \n [\\scshape expected prediction change\\\\, onode\n [expected error reduction\\\\ , name=lastnode, tier=leaf, tnode]\n ]\n [\\scshape disagreement, name=disagreement, onode_dashed\n [,phantom, tier=leaf]\n ]\n ] \n ]\n \\draw[zlevel,-] (disagreement.east) -| (entropy.south);\n \\draw[zlevel,-] (entropy.north) |- (prediction_uncertainty.east);\n \\path (current bounding box.south) coordinate (s1);\n \\draw[decorate,decoration={brace,amplitude=1em,mirror}] ([yshift=-3pt]prediction_based.south west|-s1) -- node[below=1em] 
{class} ([yshift=-3pt,xshift=-1pt]prediction_based.south east|-s1);\n \\draw[decorate,decoration={brace,amplitude=1em,mirror}] ([yshift=-3pt,xshift=1pt]prediction_based.south east|-s1) -- node[below=1em] {subclass(es)} ([yshift=-3pt,xshift=-1pt]lastnode.south west|-s1);\n \\draw[decorate,decoration={brace,amplitude=1em,mirror}] ([yshift=-3pt,xshift=1pt]lastnode.south west|-s1) -- node[below=1em] {example} ([yshift=-3pt,xshift=-1pt]lastnode.south east|-s1);\n \\end{forest}\n \\caption{\\revised{A taxonomy of query strategies for AL.\n The key distinction is at the first level, where the query strategies are categorized by their access to different kinds of input information. \n From the second to the penultimate level we form coherent subclasses,\n and the final level shows examples for the respective class.\n This taxonomy is not exhaustive due to the abundance of existing query strategies, and it is biased towards query strategies in NLP.}\n }\n \\label{fig:query_strategies}\n }\n\\end{figure}\n\\hiddennotes{\n \\begin{itemize}\n \\item Table in appendix: For each strategy here: Applicability: B / MC / ML\\\\\n \\end{itemize}\n}", "id": "3effa5dc-9905-4ec7-8c3d-464cfd4bd1d9", "level": "subsection", "origin_cites_number": 18, "parent_id": "3e095066-9c33-4519-a7ec-ec4550b8cb2d", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Active Learning" ], [ "subsection", "Query Strategies" ] ], "subsections": [ "78f9fdb7-e132-4e81-b968-042f74647827" ], "title": "Query Strategies" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 1042, 4024, 8716 ], "content": "Randomness has traditionally been used as a baseline for many tasks.\nIn this case, random sampling selects instances at random and is a strong baseline for AL instance selection .\n\\revised{It often performs competitive to more sophisticated strategies, especially when the labeled pool has grown larger .}", "id": 
"78f9fdb7-e132-4e81-b968-042f74647827", "level": "paragraph", "origin_cites_number": 5, "parent_id": "3effa5dc-9905-4ec7-8c3d-464cfd4bd1d9", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Active Learning" ], [ "subsection", "Query Strategies" ], [ "paragraph", "Random" ] ], "subsections": [ "734bf4e0-5809-4e56-9af4-79cebf71ecdb", "d53e7115-2a38-4cfc-bd3f-72de86cf317d", "c472f10c-b421-401e-aea9-953609e5c153", "ff2fa7ea-f6cb-4eae-aca9-41c04d989749" ], "title": "Random" }, { "cite_extract_rate": 0, "cites": [], "content": "Data-based strategies have the lowest level of knowledge, i.e. they only operate on the raw input data and optionally the labels of the labeled pool. \nWe categorize them further into (1) strategies relying on data-uncertainty, which may use information about the data distribution, label distribution, and label correlation,\nand (2) representativeness, \\revised{which tries to geometrically compress a set of points by using fewer representative instances to represent the properties of the entirety}.", "id": "734bf4e0-5809-4e56-9af4-79cebf71ecdb", "level": "paragraph", "origin_cites_number": 0, "parent_id": "78f9fdb7-e132-4e81-b968-042f74647827", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Active Learning" ], [ "subsection", "Query Strategies" ], [ "paragraph", "Random" ], [ "paragraph", "Data-based" ] ], "subsections": [], "title": "Data-based" }, { "cite_extract_rate": 0, "cites": [], "content": "The class of model-based strategies has knowledge about both the data and the model.\nThese strategies query instances based on a measure provided by the model given an instance. 
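As a concrete reference point for the uncertainty-based leaves of the taxonomy, the three classic prediction-uncertainty scores (least confidence for uncertainty sampling, margin, and entropy) can be sketched as follows (an illustrative sketch assuming a classifier that exposes class probabilities; `predict_proba` and `top_k` are hypothetical helpers, not from the surveyed works):

```python
import math

def least_confidence(probs):
    """Uncertainty sampling: 1 minus the probability of the top class."""
    return 1.0 - max(probs)

def margin(probs):
    """Margin-based: negated gap between the two most likely classes,
    so a smaller margin yields a higher (more uncertain) score."""
    first, second = sorted(probs, reverse=True)[:2]
    return -(first - second)

def entropy(probs):
    """Entropy of the predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def top_k(instances, predict_proba, score, k):
    """Rank unlabeled instances by an uncertainty score, pick the top k."""
    return sorted(instances, key=lambda x: score(predict_proba(x)),
                  reverse=True)[:k]
```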
\nAn example of this would be a measure of confidence for the model's explanation of the given instance ,\nfor example, how reliable the model rates the encountered features.\nThis can also be an expected quantity, for example in terms of the gradient's magnitude . \nWhile predictions from the model can still be obtained, we impose the restriction that the target metric must be an (observed or expected) quantity of the model, excluding the final prediction.\n\\revised{Model-based uncertainty is a noteworthy subclass here, which operates using the uncertainty of a model's weights .}\n\\revised{\\textcite{sharma2017evidence-based} describe a similar class, in which the uncertainty stems from not finding enough evidence in the training data, i.e. failing to separate classes at training time.}\n\\revised{They refer to this kind of uncertainty as {\\em insufficient evidence uncertainty}.}", "id": "d53e7115-2a38-4cfc-bd3f-72de86cf317d", "level": "paragraph", "origin_cites_number": 2, "parent_id": "78f9fdb7-e132-4e81-b968-042f74647827", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Active Learning" ], [ "subsection", "Query Strategies" ], [ "paragraph", "Random" ], [ "paragraph", "Model-based" ] ], "subsections": [], "title": "Model-based" }, { "cite_extract_rate": 1, "cites": [ 8716 ], "content": "Prediction-based strategies select instances by scoring their prediction output.\nThe most prominent members of this class are prediction-uncertainty-based and disagreement-based approaches.\n\\textcite{sharma2017evidence-based} denote prediction-based uncertainty by {\\em conflicting-evidence uncertainty}, which they, contrary to this work, count as another form of model-based uncertainty.\nThere is sometimes only a thin line between the concepts of model-based and prediction-based uncertainty.\n\\revised{Roughly speaking, prediction-based uncertainty corresponds in a classification setting to 
inter-class uncertainty, as opposed to model-based uncertainty, which corresponds to intra-class uncertainty.}\nIn the literature, uncertainty sampling usually refers to prediction-based uncertainty, unless otherwise specified.", "id": "c472f10c-b421-401e-aea9-953609e5c153", "level": "paragraph", "origin_cites_number": 1, "parent_id": "78f9fdb7-e132-4e81-b968-042f74647827", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Active Learning" ], [ "subsection", "Query Strategies" ], [ "paragraph", "Random" ], [ "paragraph", "Prediction-based" ] ], "subsections": [], "title": "Prediction-based" }, { "cite_extract_rate": 0, "cites": [], "content": "\\revised{When a query strategy combines the output of multiple other strategies, this is called an {\\em ensemble}.}\n\\revised{We only classify the concept of ensemble strategies within the taxonomy (see disagreement-based subclass in Figure \\ref{fig:query_strategies}) without going into detail for several reasons:}\n(1) Ensembles are again composed of primitive query strategies, which can be classified using our taxonomy.\n(2) Ensembles can be hybrids, i.e. they can be a mixture of different classes of query strategies.\nMoreover, the output of an ensemble is usually a function of the disagreement among the single classifiers, which is already covered in the previous surveys of \\textcite{olsson2009literature} and \\textcite{fu2013survey}.\\\\\n\\noindent We are not the first to provide a classification of query strategies: \\textcite{aggarwal2014active} provide an alternative classification, which divides the query strategies into heterogeneity-based models, \nperformance-based models, and representativeness-based models.\nHeterogeneity-based models try to sample diverse data points, w.r.t. the current labeled pool. This class includes among others uncertainty sampling and ensembles, i.e. 
no distinction is made between ensembles and single-model strategies. \nPerformance-based models aim to sample data targeting an increase of the model's performance, for example a reduction in the model's error. \nThis intersects with our model-based class; however, it lacks strategies which focus on a change of parameters (e.g., expected gradient length ) as opposed to changes in a metric.\nLastly, representativeness-based strategies sample instances so that the distribution of the subsample is as similar as possible to the training set.\nAlthough similar to our data-based class, they always assume the existence of a model, which is not the case for data-based strategies.\\\\\n\\pindent \\textcite{fu2013survey} separate query strategies into \nuncertainty-based and diversity-based classes. \nUncertainty-based strategies assume the i.i.d. distribution of instances; they compute a separate score for each instance, which is the basis for the instance selection.\nDiversity-based strategies are a superset thereof and additionally consider correlation amongst instances.\nThereby they characterize uncertainty and correlation as critical components for query strategies.\nThis classification successfully distinguishes query strategies by considering exclusively uncertainty and correlation.\nHowever, it is less transparent in terms of the input information, \nwhich our taxonomy highlights.\nNevertheless, correlation is a factor orthogonal to our taxonomy and can be added as an additional criterion.\n\\\\\n\\pindent After creating our taxonomy, we discovered a recent categorization of uncertainty in deep learning , which distinguishes between data-, model-, and predict{\\em{}ive}-{\\em uncertainty}, similar to the taxonomy's first level (data-, model-, prediction-based query strategies).\nAlthough this classification comes naturally from the data's degree of processing, we emphasize that we are not the first to come up with this abstraction.\nBy using the input information as 
decisive criterion, this taxonomy provides an information-oriented view on query strategies.\nIt highlights in which parts and how uncertainty has been involved in existing query strategies. Uncertainty in terms of NNs is, however, known to be challenging as described in Section \\ref{subsect:nn_al}. Moreover, we use the taxonomy to categorize recent work in AL for text classification in Section \\ref{subsec:experiments}.", "id": "ff2fa7ea-f6cb-4eae-aca9-41c04d989749", "level": "paragraph", "origin_cites_number": 2, "parent_id": "78f9fdb7-e132-4e81-b968-042f74647827", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Active Learning" ], [ "subsection", "Query Strategies" ], [ "paragraph", "Random" ], [ "paragraph", "Ensembles" ] ], "subsections": [], "title": "Ensembles" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsect:nn_al}\nIn this section we investigate the question why neural networks are not more prevalent in AL applications. 
\nThis can be attributed to two central topics: Uncertainty estimation in NNs, and the contrast between NNs requiring big data and AL dealing with small data.\nWe examine these issues from an NN perspective, temporarily setting aside the NLP focus.", "id": "aff7fdcf-a8ec-4ce1-9895-2dad247f6097", "level": "subsection", "origin_cites_number": 0, "parent_id": "3e095066-9c33-4519-a7ec-ec4550b8cb2d", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Active Learning" ], [ "subsection", "Neural-Network-Based Active Learning" ] ], "subsections": [ "fcf92379-9f52-4b46-8476-f3cf95e27e6c" ], "title": "Neural-Network-Based Active Learning" }, { "cite_extract_rate": 0.18181818181818102, "cites": [ 1042, 3986 ], "content": "\\hiddennotes{\n \\begin{itemize}\n \\item Transition missing\n \\item Argumentation structure unclear?\n \\begin{itemize}\n \\item Classic -> RNN -> CNN -> LSTM -> GANs (-> Meta?) \n \\end{itemize}\n \\end{itemize}\n}\nEarly research in NN-based AL can be divided into uncertainty-based ,\nand ensemble-based strategies.\nThe former often use prediction entropy as a measure of uncertainty,\nwhile the latter utilize the disagreement among the single classifiers.\n\\textcite{settles2008multiple-instance} proposed the expected gradient length (EGL) query strategy, which selects instances by the expected change in the model's weights. \\textcite{zhang2017active} were the first to use a CNN for AL.\nThey proposed a variant of the expected gradient length strategy , in which they select instances that are expected to result in the largest change in embedding space, thereby training highly discriminative representations.\n\\textcite{sener2017active} observed that uncertainty-based query strategies are not effective for CNN-based batch-mode AL, and proposed core-set selection, which samples a small subset to represent the full dataset. 
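The core-set idea can be made concrete with a greedy k-center sketch over precomputed embeddings (an illustrative simplification, not the authors' implementation; `dist` can be any metric on the embedding space):

```python
def kcenter_greedy(embeddings, labeled_idx, budget, dist):
    """Greedy core-set (k-center) selection: repeatedly pick the unlabeled
    point farthest from its nearest already-selected center."""
    selected = set(labeled_idx)
    # distance from each unlabeled point to its nearest selected center
    min_dist = {i: min(dist(embeddings[i], embeddings[j]) for j in selected)
                for i in range(len(embeddings)) if i not in selected}
    batch = []
    for _ in range(min(budget, len(min_dist))):
        farthest = max(min_dist, key=min_dist.get)  # becomes a new center
        batch.append(farthest)
        del min_dist[farthest]
        for i in min_dist:  # update nearest-center distances
            d = dist(embeddings[i], embeddings[farthest])
            if d < min_dist[i]:
                min_dist[i] = d
    return batch
```

In a (D)NN setting, `embeddings` would typically be activations of the penultimate layer.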
\\textcite{ash2019deep} proposed BADGE, a query strategy for DNNs, which uses k-means++ seeding on the gradients of the final layer, in order to query by uncertainty and diversity.\nFinally, Generative Adversarial Networks (GANs; ) have also been applied successfully for AL tasks: \n\\textcite{zhu2017generative} use GANs for query synthesis of images within an active learner using an SVM model.\nThe instances are synthesized so that they would be classified with high uncertainty. \nThe authors report this approach to outperform random sampling, pool-based uncertainty sampling using an SVM , and in some cases passive learning, while having the weakness to generate too similar instances.\nThe approach itself is neither pure NN-based, nor does it belong to the pool-based scenario, however, it is the first reported use of GANs for AL.\n\\textcite{ducoffe2018adversarial} use adversarial attacks to find instances that cross the decision boundary with the aim to increase the model robustness.\nThey train two CNN architectures and report results superior to the core-set strategy on image classification tasks.\nIt is obvious that GANs inherently belong to the membership query synthesis scenario.\n\\revised{Therefore their performance correlates with the quality of artificial data synthesis, i.e. they are usually not that effective for NLP tasks. 
This has already been recognized and first improvements towards better text generation have been made}.", "id": "fcf92379-9f52-4b46-8476-f3cf95e27e6c", "level": "paragraph", "origin_cites_number": 11, "parent_id": "aff7fdcf-a8ec-4ce1-9895-2dad247f6097", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Active Learning" ], [ "subsection", "Neural-Network-Based Active Learning" ], [ "paragraph", "Previous Work" ] ], "subsections": [ "72864fc8-a21e-4cfd-8e06-6087c4b614bf", "17995907-7b29-4bbf-84e0-f3a0b81b8627" ], "title": "Previous Work" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 4025, 759, 1042, 3288 ], "content": "One of the earliest classes of strategies, adopted in many variations, is uncertainty sampling .\n\\label{sect:nn_uncertainty}\nUnfortunately, this widely-used concept is not straightforward to apply to NNs, as they do not provide an inherent indicator of uncertainty.\nIn the past, this has been tackled among others by ensembling , or by learning error estimates .\nMore recent approaches furthermore use Bayesian extensions , \nobtain uncertainty estimations using dropout ,\nor use probabilistic NNs to estimate predictive uncertainty .\nHowever, ensemble and Bayesian approaches quickly become infeasible on larger datasets,\nand NN architectures are generally known to be overconfident in their predictions .\nConsequently, uncertainty in NNs is only insufficiently solved and therefore still remains a highly relevant research area.\n\\hiddennotes{\n To circumvent this, various approaches have been suggested:\n (1) Bayes by Backprop uses Bayesian variational inference in order to learn approximated distributions over the weights of a NN.\n (2) extend dropout to represent uncertainties \n by interpreting dropout-enhanced NNs as a Bayesian approximation to deep Gaussian Processes.\n However, this has been reported to not scale well for larger datasets .\n (3) 
Traditionally this has been tackled by ensembling , i.e. producing $k$-many different models and outputting a prediction by interpreting their disagreement (see \\textcite{olsson2009literature}).\n The disadvantage of this approach is the obvious additional cost in runtime and model size.\n \\textcite{sener2017active} report uncertainty-based approaches to be not effective for CNNs, and \\textcite{ducoffe2018adversarial} observed worse performance compared to random sampling.\\\\\n}", "id": "72864fc8-a21e-4cfd-8e06-6087c4b614bf", "level": "paragraph", "origin_cites_number": 12, "parent_id": "fcf92379-9f52-4b46-8476-f3cf95e27e6c", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Active Learning" ], [ "subsection", "Neural-Network-Based Active Learning" ], [ "paragraph", "Previous Work" ], [ "paragraph", "Uncertainty in Neural Networks" ] ], "subsections": [], "title": "Uncertainty in Neural Networks" }, { "cite_extract_rate": 0.25, "cites": [ 7764, 1096 ], "content": "DNNs are known to excel particularly at large-scale datasets,\nbut often having large amounts of data available is a strict requirement to perform well at all (e.g., ).\nAL on the other hand tries to minimize the labeled data.\nThe small labeled datasets can be a problem for DNNs, since they are known to overfit on small datasets (e.g., ), \nwhich results in bad generalization performance on the test set.\nMoreover, DNNs often offer little advantage over shallow models when they are trained using small datasets , thereby lacking justification for their higher computational costs.\nOn the other hand we clearly cannot require AL to label more data, since this would defeat its purpose.\nTherefore there has been research on dealing with (D)NNs using small datasets, however, it is only a scarce amount, especially in relation to the large volume of NN literature in general.\nHandling small datasets is mostly circumvented by 
using pre-training or other transfer learning approaches .\nFinally, the search for optimal hyperparameters is often neglected and instead the hyperparameters of related work are used, which are optimized for large datasets, if at all.", "id": "17995907-7b29-4bbf-84e0-f3a0b81b8627", "level": "paragraph", "origin_cites_number": 8, "parent_id": "fcf92379-9f52-4b46-8476-f3cf95e27e6c", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Active Learning" ], [ "subsection", "Neural-Network-Based Active Learning" ], [ "paragraph", "Previous Work" ], [ "paragraph", "Contrasting Paradigms" ] ], "subsections": [], "title": "Contrasting Paradigms" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sect:text_classification}\nIn Sections \\ref{subsect:tc_recent_adv} and \\ref{subsect:active_tc} we first summarize recent methods in text classification and NNs.\nWe elaborate on each method's importance in the context of AL, and analyze its adoption by recent research where applicable.\nFor insufficiently adopted methods, we present how they could advance AL for text classification.\nMost importantly, we present an overview of recent experiments in AL for text classification and analyze commonalities and shortcomings.", "id": "909d9662-92b7-4a13-b530-6c9553233a2e", "level": "section", "origin_cites_number": 0, "parent_id": "4e01ebc7-074c-4060-b09d-956a012ff8ea", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Active Learning for Text Classification" ] ], "subsections": [ "54497d60-bdb0-4315-828f-3d8f53e9c84c", "5247a74b-cf2b-493b-a901-1dbfdda01e22", "c22085b6-c5bc-4141-87dc-c353d7a13732" ], "title": "Active Learning for Text Classification" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsect:tc_recent_adv}", "id": "54497d60-bdb0-4315-828f-3d8f53e9c84c", "level": "subsection", "origin_cites_number": 
0, "parent_id": "909d9662-92b7-4a13-b530-6c9553233a2e", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Active Learning for Text Classification" ], [ "subsection", "Recent Advances in Text Classification" ] ], "subsections": [ "ad081844-efd8-44ae-a153-552a2838fa22" ], "title": "Recent Advances in Text Classification" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 7265, 7165, 856, 1684, 7370, 11, 826, 1557, 8385, 673 ], "content": "Traditional methods use the bag-of-words (BoW) representation, which are sparse and high-dimensional.\nHowever, with the introduction of word embeddings like word2vec , \nGloVe , or fastText ,\nword embeddings have replaced BoW representations in many cases.\nThis is due to several reasons:\n(1) They represent semantic relations in vectors space and avoid the problem of mismatching features as for example due to synonymy;\n(2) incorporating word embeddings resulted in superior performance for many downstream tasks ;\n(3) unlike bag-of-words, word vectors are dense, low-dimensional representations, \nwhich makes them applicable to a wider range of algorithms --\nespecially in the context of NNs which favor fixed-size inputs.\nVarious approaches have been presented in order to obtain similar fixed size representations for word sequences, i.e. sentences, paragraphs or documents .\\\\\n\\pindent Word embeddings are representations, which provide exactly one vector per word and in consequence one meaning as well.\nThis makes them also unaware of the current word's context and therefore makes them unable to detect and handle ambiguities.\nUnlike word embeddings, language models (LMs) compute the word vector using the word and the surrounding context .\nThis results in a contextualized representation, which inherits the advantages of word embeddings, and at the same time allows for context-specific representation (in contrast to static embeddings) . 
\nELMo was the first LM to gain wide adoption and surpassed state of the art models on several NLP tasks .\nShortly thereafter, BERT was introduced and provided bidirectional pre-training-based language modelling.\nThe process to create a BERT-based model consists of a pre-training and a fine-tuning step as opposed to ELMo's direct feature-based approach in which contextualized vectors are obtained from the pre-trained model and used directly as features .\nBy masking, i.e. randomly removing a fraction of tokens during training, the training was adapted to predict the masked words.\nThis made the bidirectional training possible, which would otherwise be obstructed because a word could \"see itself\" when computing its probability of occurrence given a context .\nFollowing this, XLNet introduced a similar approach of pre-training and fine-tuning using an autoregressive language model, however, it overcame BERT's limitation as it does not rely on masking data during pre-training , and moreover, successfully manages to integrate the recent TransformerXL architecture .\nSince then, a variety of LMs have been published, which further optimize the pre-training of previous LM architectures (e.g., RoBERTa and ELECTRA ), or distill the knowledge into a smaller model (e.g., DistilBERT ).\nSimilarly to word embeddings, there are approaches to use LMs in order to obtain sentence representations from LMs .\\\\\n\\pindent All mentioned representations offer a richer expressiveness than traditional BoW representations and therefore are well-suited for active learning purposes.", "id": "ad081844-efd8-44ae-a153-552a2838fa22", "level": "paragraph", "origin_cites_number": 12, "parent_id": "54497d60-bdb0-4315-828f-3d8f53e9c84c", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Active Learning for Text Classification" ], [ "subsection", "Recent Advances in Text Classification" ], [ "paragraph", 
"Representations" ] ], "subsections": [ "e093196b-af7c-4543-b2db-908cbfcfea1d" ], "title": "Representations" }, { "cite_extract_rate": 0.5, "cites": [ 7595, 673, 11, 4026 ], "content": "\\label{subsect:nn_text_classification}\nA well-known CNN architecture presented by \\textcite{kim2014convolutional} (KimCNN) operates on pre-trained word vectors and achieved state of the art results at the time using only a simple but elegant architecture.\nThe investigated CNN setups did not require much hyperparameter tuning and confirmed the effectiveness of dropout as a regularizer for CNN-based text classification.\\\\\n\\pindent The word embeddings of fastText differ from other word embeddings in the sense that the approach is (1) supervised and (2) specifically designed for text classification.\nBeing a shallow neural network, it is still very efficient, while still obtaining performances comparable to deep learning approaches at that time.\\\\\n\\pindent\\textcite{howard2018universal} developed Universal Language Model Fine-tuning (ULMFiT), a LM transfer learning method using the AWD-LSTM architecture ,\nwhich outperformed the state of the art on several text classification datasets when trained on only $100$ labeled examples, and thereby achieved results significantly superior to more sophisticated architectures of previous work.\nContext-specific LMs like BERT and XLNet yield a context-dependent vector for each token, thereby strongly improving NN-based text classification .\nState of the art in NN-based text classification is LM-based fine-tuning with XLNet, which has a slight edge over BERT in terms of test error rate .\nULMFiT follows closely thereafter, and KimCNN is still a strong contender.\nNotably, ULMFiT, BERT and XLNet all perform {\\em transfer learning}, which aims to transfer knowledge from one model to another ,\nthereby massively reducing the required amounts of data.", "id": "e093196b-af7c-4543-b2db-908cbfcfea1d", "level": "paragraph", 
"origin_cites_number": 8, "parent_id": "ad081844-efd8-44ae-a153-552a2838fa22", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Active Learning for Text Classification" ], [ "subsection", "Recent Advances in Text Classification" ], [ "paragraph", "Representations" ], [ "paragraph", "Neural-Network-Based Text Classification" ] ], "subsections": [], "title": "Neural-Network-Based Text Classification" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 7765, 4023, 4022, 8716, 11 ], "content": "\\label{subsect:active_tc} \nTraditional AL for text classification heavily relied on query strategies based on prediction-uncertainty and ensembling . \nCommon model choices included support vector machines (SVMs; ), naive bayes , logistic regression and neural networks .\nTo the best of our knowledge, no previous survey covered traditional AL for text classification, however, ensembling-based AL for NLP has been covered in depth by \\textcite{olsson2009literature}.\\\\\n\\pindent Regarding modern NN-based AL for text classification, the relevant models are primarily CNN- and LSTM-based deep architectures: \\textcite{zhang2017active} claim to be the first to consider AL for text classification using DNNs.\nThey use CNNs and contribute a query strategy, which selects the instances \\phrasing{based on the expected change of the word embeddings and the model's uncertainty given the instance,} thereby learning discriminative embeddings for text classification.\n\\textcite{an2018deep} evaluated SVM, LSTM and gated recurrent unit (GRU; \\textcite{cho2014properties}) models, and reported that the latter two significantly outperformed the SVM baseline on the Chinese news dataset ThucNews.\n\\textcite{lu2020investigating} investigated the performance of different text representations in a pool-based AL scenario.\nThey compared frequency-based text representations, word embeddings and transformer-based 
representations used as input features for a SVM-based AL and different query strategies,\nin which transformer-based representations yielded consistently higher scores.\n\\textcite{prabhu2019sampling} investigate sampling bias and apply active text classification on the large scale text corpora of \\textcite{zhang2016character-level}.\nThey demonstrate FastText.zip with (entropy-based) uncertainty sampling to be a strong baseline, which is competitive compared to recent approaches in active text classification.\nMoreover, they use this strategy to obtain a surrogate dataset (comprising from 5\\% to 40\\% of the total data) on which a LSTM-based LM is trained using ULMFiT , reaching accuracy levels close to a training on the full dataset.\nUnlike past publications, they report this uncertainty-based strategy to be effective, robust, and at the same time computationally cheap.\nThis is the most relevant work in terms of \\revised{the intersection between text classification, NNs and DL}.\n\\hiddennotes{\n \\begin{itemize}\n \\item Previous Research nach Query Strategy Class / Embeddings untersuchen\n \\item Reinforcement Learning using NNs\n \\end{itemize}\n}\n\\begin{table}[h!]\n \\begin{center}\n \\begin{tabular}{l P{3cm} P{2.5cm} P{4.5cm}}\n \\hline\n \\textbf{Publication} & \\textbf{Datasets} & \\textbf{Model(s)} & \\textbf{Query Strategy Class(es)}\\\\\n \\hline\n \\mbox{} & 20N, R21, RV2, SPM & NB, SVM, kNN & 1. Prediction uncertainty (LC)\\newline 2. Prediction uncertainty (CTH)\\newline 3. Prediction uncertainty (disagreement) \\\\\n \\mbox{} & CR, MR, SJ, MRL, MUR, DR & CNN & 1. Model uncertainty (EGL)\\newline 2. Prediction Uncertainty (entropy)\\\\\n \\mbox{} & RMA & SVM & 1. Prediction uncertainty (CTH)\\newline 2. Prediction uncertainty (disagreement)\\\\\n \\mbox{} & TQA, MR & SVM, CNN, BiLSTM & Prediction uncertainty (disagreement)\\\\\n \\mbox{} & MR, SJ, TQA, CR & SVM, CNN, BiLSTM & 1. Prediction uncertainty (entropy)\\newline 2. 
Prediction uncertainty (disagreement)\\\\\n \\mbox{} & SGN, DBP, YAH, YRP, YRF, AGN, ARP, ARF & FTZ, ULMFiT & Prediction uncertainty (entropy)\\\\\n \\mbox{} & MRL, MDS, BAG, G13, ACR, SJ, AGN, DBP & SVM & 1. Prediction uncertainty (CTH) \\newline 2. Prediction uncertainty (disagreement) \\newline 3. Data-based (EGAL) \\newline 4. Data-based (density)\\\\\n \\hline \n \\end{tabular}\n \\captionsetup{aboveskip=10pt}\n \\caption{An overview of recent work on AL for text classification.\n We referred to the datasets using short keys, which can be looked up in Table \\ref{tab:datasets} in the Appendix. Models: Naive Bayes (NB), Support Vector Machine (SVM), k-Nearest Neighbours (kNN), Convolutional Neural Network (CNN), [Bidirectional] Long Short-Term Memory ([Bi]LSTM), FastText.zip (FTZ), Universal Language Model Fine-Tuning (ULMFiT). Query strategies: Least confidence (LC), Closest-to-hyperplane (CTH), expected gradient length (EGL).\n Random selection baselines were omitted.}\n \\label{table:recent_work}\n\\end{center}\n\\end{table}", "id": "5247a74b-cf2b-493b-a901-1dbfdda01e22", "level": "subsection", "origin_cites_number": 15, "parent_id": "909d9662-92b7-4a13-b530-6c9553233a2e", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Active Learning for Text Classification" ], [ "subsection", "Text Classification for Active Learning" ] ], "subsections": [], "title": "Text Classification for Active Learning" }, { "cite_extract_rate": 0.5, "cites": [ 11, 1096 ], "content": "\\label{subsec:experiments}\nTable \\ref{table:recent_work} shows the most recent AL for text classification experiments, all of them more recent than the surveys of \\textcite{settles2010active} and \\textcite{olsson2009literature}. 
\nFor each publication we list the utilized datasets, models, and classes of query strategies (with respect to the taxonomy in Section \\ref{subsec:query_strategies}).\nWe present this table in order to get insights about the recently preferred classification models and query strategy classes.\\\\\n\\pindent We can draw multiple conclusions from Table \\ref{table:recent_work}:\nIt is obvious that a significant majority of these query strategies belong to the class of prediction-based query strategies, more specifically to the prediction-uncertainty and disagreement-based sub-classes. \nIn addition to that, we can identify several shortcomings:\nFirst, in many experiments two or more standard datasets are evaluated, but very often there is little to no intersection between the experiments in terms of their datasets. \nAs a result we lose comparability against previous research.\nFor recent research, this can be seen in Table \\ref{table:recent_work}, where the only larger intersection is between the works of \\textcite{zhang2017active} and \\textcite{lowell2019practical}.\n\\textcite{siddhant2018deep} provide at least some comparability against \\textcite{zhang2017active} and \\textcite{lowell2019practical} through one dataset each.\nAdditionally, RMA is a subset of R21 , which are used by \\textcite{bloodgood2018support} and \\textcite{hu2016active}, so they might be comparable to some degree.\n are the only ones to evaluate on the more recent large-scale text classification datasets , and although these datasets are more realistic in terms of their size, the authors omitted the classic datasets, so it is difficult to relate their contributions to previous work.\nMoreover, as a result of this, we do not know if and to what degree past experiments generalize to DNNs .\n\\noindent Finally, it is not clear if recent (D)NNs benefit from the same query strategies, i.e. 
past findings may not apply to modern NN architectures:\n\\textcite{prabhu2019sampling} identified contradicting statements in recent literature about the effectiveness of using prediction uncertainty in combination with NNs.\nThey achieved competitive results using a FastText.zip (FTZ) model and a prediction uncertainty query strategy, which proved to be very effective while requiring only a small amount of data, despite all reported weaknesses concerning NNs and uncertainty estimates.", "id": "c22085b6-c5bc-4141-87dc-c353d7a13732", "level": "subsection", "origin_cites_number": 4, "parent_id": "909d9662-92b7-4a13-b530-6c9553233a2e", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Active Learning for Text Classification" ], [ "subsection", "Commonalities and Limitations of Previous Experiments" ] ], "subsections": [], "title": "Commonalities and Limitations of Previous Experiments" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "847fa582-b0b5-403a-9242-ea4d17738bff", "level": "section", "origin_cites_number": 0, "parent_id": "4e01ebc7-074c-4060-b09d-956a012ff8ea", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Open Research Questions" ] ], "subsections": [ "b759602b-ca2d-4e2f-9b70-cf51e332ff01" ], "title": "Open Research Questions" }, { "cite_extract_rate": 0, "cites": [], "content": "In Section \\ref{sect:active_learning} it was illustrated that uncertainty-based strategies \nhave been used successfully in combination with non-NN models, and in Section \\ref{subsec:experiments} it was shown that they also account for the largest fraction of query strategies in recent NN-based AL. 
\nUnfortunately, uncertainty in NNs is still challenging due to inaccurate uncertainty estimates, or limited scalability (as described in Section \\ref{sect:nn_uncertainty}).", "id": "b759602b-ca2d-4e2f-9b70-cf51e332ff01", "level": "paragraph", "origin_cites_number": 0, "parent_id": "847fa582-b0b5-403a-9242-ea4d17738bff", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Open Research Questions" ], [ "paragraph", "Uncertainty Estimates in Neural Networks" ] ], "subsections": [ "00aa4769-dd36-4d2f-ba39-18878b0e6be3" ], "title": "Uncertainty Estimates in Neural Networks" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 11, 4023 ], "content": "As outlined in Section \\ref{subsect:tc_recent_adv}, the use of text representations in NLP has shifted from bag-of-words to static and contextualized word embeddings.\nThese representations evidentially provide many advantages like disambiguation capabilities, non-sparse vectors, and an increase in performance for many tasks.\nAlthough there have been some applications , there is no AL-specific systematic evaluation to compare word embeddings and LMs using NNs.\nMoreover, they are currently only scarcely used,\nwhich hints at either a slow adoption, or some non-investigated practical issues.", "id": "00aa4769-dd36-4d2f-ba39-18878b0e6be3", "level": "paragraph", "origin_cites_number": 3, "parent_id": "b759602b-ca2d-4e2f-9b70-cf51e332ff01", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Open Research Questions" ], [ "paragraph", "Uncertainty Estimates in Neural Networks" ], [ "paragraph", "Representations" ] ], "subsections": [ "6e1cfbc6-56a2-4243-bbb5-f779c6fa69b3", "fd5f0c0a-5428-46a3-b6af-76f50107da5a", "1b466353-341c-4aec-900d-656fddb7ff9e" ], "title": "Representations" }, { "cite_extract_rate": 0, "cites": [], "content": "DL approaches are usually applied in the 
context of large datasets.\nAL, however, necessarily intends to keep the (labeled) dataset as small as possible.\nIn Section \\ref{sect:active_learning} we outlined why small datasets can be challenging for DNNs, and as a direct consequence as well for DNN-based AL.\nUsing pre-trained language models, this problem is alleviated to some degree because fine-tuning allows training models using considerably smaller datasets.\nNonetheless, it is to be investigated how little data is still necessary to successfully fine-tune a model.", "id": "6e1cfbc6-56a2-4243-bbb5-f779c6fa69b3", "level": "paragraph", "origin_cites_number": 0, "parent_id": "00aa4769-dd36-4d2f-ba39-18878b0e6be3", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Open Research Questions" ], [ "paragraph", "Uncertainty Estimates in Neural Networks" ], [ "paragraph", "Representations" ], [ "paragraph", "Small Data DNNs" ] ], "subsections": [], "title": "Small Data DNNs" }, { "cite_extract_rate": 1, "cites": [ 11 ], "content": "In Section \\ref{subsec:experiments} we provided an overview of the most common AL strategies for text classification.\nUnfortunately, the combinations of datasets used in the experiments are often completely disjoint, e.g. 
\\textcite{siddhant2018deep}, \\textcite{lowell2019practical}, and \\textcite{prabhu2019sampling}.\nAs a consequence, comparability is decreased or even lost, especially between more recent and past work.\nComparability is, however, crucial to verify if past insights regarding shallow NN-based AL still apply in the context of DNN-based AL .", "id": "fd5f0c0a-5428-46a3-b6af-76f50107da5a", "level": "paragraph", "origin_cites_number": 1, "parent_id": "00aa4769-dd36-4d2f-ba39-18878b0e6be3", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Open Research Questions" ], [ "paragraph", "Uncertainty Estimates in Neural Networks" ], [ "paragraph", "Representations" ], [ "paragraph", "Comparable Evaluations" ] ], "subsections": [], "title": "Comparable Evaluations" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 8718, 4027 ], "content": "There is an abundance of query strategies to choose from, which we have (non-exhaustively) categorized in Section \\ref{subsec:query_strategies}.\nThis introduces the problem of choosing the optimal strategy.\nThe right choice depends on many factors like data, model, or task, and can even vary between different iterations during the AL process.\nAs a result, {\\em learning to learn} (or {\\em meta-learning}) has become popular and can be used to learn the optimal selection , or even learn query strategies as a whole .", "id": "1b466353-341c-4aec-900d-656fddb7ff9e", "level": "paragraph", "origin_cites_number": 3, "parent_id": "00aa4769-dd36-4d2f-ba39-18878b0e6be3", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Open Research Questions" ], [ "paragraph", "Uncertainty Estimates in Neural Networks" ], [ "paragraph", "Representations" ], [ "paragraph", "Learning to Learn" ] ], "subsections": [], "title": "Learning to Learn" }, { "cite_extract_rate": 0.5, "cites": [ 4023 ], "content": 
"In this survey, we investigated (D)NN-based AL for text classification and inspected factors obstructing its adoption.\nWe created a taxonomy, distinguishing query strategies by their reliance on data-based, model-based, and prediction-based input information.\nWe analyzed query strategies used in AL for text classification and categorized them into the respective taxonomy classes.\nWe presented the intersection between AL, text classification and DNNs, which is to the best of our knowledge the first survey of this topic.\nFurthermore, we reviewed (D)NN-based AL, identified current challenges and state of the art, and pointed out that it is both underresearched and often lacks comparability.\nIn addition to that, we presented relevant recent advances in NLP, related them to AL, and showed gaps and limitations for their application.\nOne of our main findings is that uncertainty-based query strategies are still the most widely used class, regardless of whether the analysis is restricted to NNs.\nLM-based representations offer finer-grained context-specific representations while also handling out-of-vocabulary words. 
\nMoreover, we find fine-tuning-based transfer learning alleviates the small data problem to some degree but lacks adoption.\nMost importantly, DNNs are known for their strong performance on many tasks and first adoptions in AL have shown promising results .\nAll these gains would be highly desirable for AL.\nTherefore improving the adoption of DNNs in AL is crucial, especially since the expected increases in performance could be either used to improve the classification results while using the same amount of data or to increase the efficiency of the labeling process by reducing the data and therefore the labeling efforts.\nBased on these findings we identify research directions for future work in order to further advance (D)NN-based AL.\n\\section*{Acknowledgements}\nWe thank Gerhard Heyer for his valuable feedback on the manuscript, Lydia Müller for fruitful discussions about the taxonomy and advice thereon, and Janos Borst for sharing his thoughts on recent advances in language models.\nThis research was partially funded by the Development Bank of Saxony\n(SAB) under project number 100335729.\n\\printbibliography\n\\appendix", "id": "3ba159ca-0eee-4f69-aedd-91724bce7349", "level": "section", "origin_cites_number": 2, "parent_id": "4e01ebc7-074c-4060-b09d-956a012ff8ea", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Conclusions" ] ], "subsections": [], "title": "Conclusions" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "ca5cf519-fe65-47a3-9e14-707edbf02174", "level": "section", "origin_cites_number": 0, "parent_id": "4e01ebc7-074c-4060-b09d-956a012ff8ea", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Appendix" ] ], "subsections": [ "01e0ed63-a97a-4079-8877-7a6544e5b88a" ], "title": "Appendix" }, { "cite_extract_rate": 0.17647058823529402, "cites": [ 1096, 4028, 8420 ], "content": "The 
following table provides additional information about the datasets which were referred to in Section \\ref{subsect:active_tc}. \n{\n\\setlength{\\LTcapwidth}{\\textwidth}\n\\begin{longtable}{l p{3.8cm} l p{4.1cm} r r}\n \\hline\n \\textbf{Id} & \\textbf{Name} & \\textbf{Type} & \\textbf{Publication} & \\textbf{\\#Train} & \\textbf{\\#Test} \\\\\n \\hline\n TQA & TREC QA & MC & & 5,500 & 500 \\\\\n CR & Customer Reviews & MC & & \\textsuperscript{*}{315} & - \\\\\n ACR & Additional Customer\\newline Reviews & MC & & \\textsuperscript{*}{325} & - \\\\\n MDS & Multi-Domain Sentiment & B & & \\textsuperscript{**}{8,000} & - \\\\\n BAG & Blog Author Gender & B & & 3,100 & - \\\\\n G13 & Guardian 2013 & MC & & 6,520 & -\\\\\n MR & Movie Reviews & B & & 10,662 & - \\\\\n MRL & Movie Reviews Long & B & & 2,000 & - \\\\\n MUR & Music Review & B & & 2,000 & - \\\\\n DR & Doctor Reviews & MC & & 58,110 & - \\\\\n SJ & Subjectivity & B & & 10,000 & -\\\\\n 20N & 20newsgroups & MC & & \\textsuperscript{***}18,846 & - \\\\\n R21 & Reuters-21578 & ML & & 21578 & - \\\\\n RMA & Reuters ModApté & ML & & 9,603 & 3,299\\\\\n RV2 & RCV1-V2 & ML & & 23,149 & 781,265 \\\\\n SPM & Spam & B & & 1,000 & - \\\\\n AGN & AG News & MC & \\newline & 120,000 & 7,600 \\\\\n SGN & Sogou News & MC & & 450,000 & 60,000\\\\\n DBP & DBPedia & MC & & 560,000 & 70,000\\\\\n YRP & Yelp Review Polarity & B & & 560,000 & 38,000\\\\\n YRF & Yelp Review Full & MC & & 650,000 & 50,000\\\\\n YAH & Yahoo! Answers & MC & & 1,400,000 & 60,000\\\\\n ARP & Amazon Review Polarity & B & & 3,600,000 & 40,000\\\\\n ARF & Amazon Review Full & MC & & 3,000,000 & 650,000\\\\\n \\hline\n \\caption{A collection of widely-used text classification datasets.\n The column \"Type\" denotes the classification setting (B = binary, MC = multi-class, ML = multi-class multi-label). 
\n The columns \"\\#Train\" and \"\\#Test\" show the size of the train and of the test set.\n In the case that no predefined splits were available \"\\#Train\" represents the full dataset's size. \n Each dataset was assigned a short id (first column), which we use in the paper for reference.\\\\\n \\newline\n (*): documents, (**) labels reduced to positive/negative, (***) 20news-bydate with duplicates removed}\n \\label{tab:datasets}\n\\end{longtable}\n}\n\\end{document}", "id": "01e0ed63-a97a-4079-8877-7a6544e5b88a", "level": "subsection", "origin_cites_number": 17, "parent_id": "ca5cf519-fe65-47a3-9e14-707edbf02174", "prefix_titles": [ [ "title", "A Survey of Active Learning for Text Classification using Deep Neural Networks" ], [ "section", "Appendix" ], [ "subsection", "Datasets" ] ], "subsections": [], "title": "Datasets" } ]
7
[ 7764, 1159, 7165, 1096, 4022, 8385, 1684, 11, 1042, 4024, 4023, 8716, 8717, 3986, 4025, 759, 3288, 7265, 856, 7370, 826, 1557, 673, 7595, 4026, 7765, 8718, 4027, 4028, 8420 ]
1.728934
[ "Shuanghong Shen", "Qi Liu", "Zhenya Huang", "Yonghe Zheng", "Minghao Yin", "Minjuan Wang", "Enhong Chen" ]
A Survey of Knowledge Tracing: Models, Variants, and Applications
2021
2021-05-06T13:05:55Z
cs.CY
Modern online education has the capacity to provide intelligent educational services by automatically analyzing substantial amounts of student behavioral data. Knowledge Tracing (KT) is one of the fundamental tasks for student behavioral data analysis, aiming to monitor students' evolving knowledge state during their problem-solving process. In recent years, a substantial number of studies have concentrated on this rapidly growing field, significantly contributing to its advancements. In this survey, we will conduct a thorough investigation of these progressions. Firstly, we present three types of fundamental KT models with distinct technical routes. Subsequently, we review extensive variants of the fundamental KT models that consider more stringent learning assumptions. Moreover, the development of KT cannot be separated from its applications, thereby we present typical KT applications in various scenarios. To facilitate the work of researchers and practitioners in this field, we have developed two open-source algorithm libraries: EduData that enables the download and preprocessing of KT-related datasets, and EduKTM that provides an extensible and unified implementation of existing mainstream KT models. Finally, we discuss potential directions for future research in this rapidly growing field. We hope that the current survey will assist both researchers and practitioners in fostering the development of KT, thereby benefiting a broader range of students.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "4e5af7af-552b-4c7f-939b-54b6b4dadeb9", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ] ], "subsections": [ "a96dd0ad-2f86-4f16-b5a9-882602bf6a06", "6987e516-0520-40ba-bdfa-9ecf587cfc88", "bd30f2af-cabd-4e04-ab90-5536a2f60dde", "a705e159-25ef-44ad-8166-1cef83a7c265", "5fc15152-8ae0-4a7f-83a2-d64f287123fe", "34caa8b6-1d19-4064-837a-9dcb1549a45d", "18821b92-3ccc-4057-9352-24e5c0409dd6", "8a28f5df-8a26-4509-8bdf-0a8c518eb23c" ], "title": "root" }, { "cite_extract_rate": 0.21739130434782603, "cites": [ 5371, 5372, 5370, 5374, 5373 ], "content": "\\IEEEPARstart{W}{ith} the proliferation of the Internet and mobile communication technology, online education has become increasingly popular and is now developing at an unprecedented scale .\n{This innovative style of learning provides a degree of flexibility that conventional education cannot match, which enables teaching and learning to occur at any time and any place.}\nMeanwhile, online learning systems (e.g., Coursera, ASSISTment) have proven to be even more effective than traditional learning styles, since they can offer more intelligent educational services, such as recommending individualized learning resources to students . \nTo provide these intelligent services, online learning systems continuously record a massive amount of available data about student-system interactions {(e.g., responding to exercises)}, which can be further mined to assess their knowledge levels, learning preferences, and other attributes. 
Specifically, Knowledge Tracing (KT) is one of the most fundamental and critical tasks for analyzing students' learning behavior data, which aims to explore the recorded student-system interactions to monitor their evolving knowledge states .\n\\begin{figure*}[t]\n\t\\vspace{-0.5cm}\n\t\\centerline{\\includegraphics[width=\\textwidth]{Figures/learning_case.pdf}}\n\t\\vspace{-0.2cm}\n\t\\caption{A simple schematic diagram of knowledge tracing. Different knowledge concepts are represented in different colors, while exercises are also depicted in the color relevant to the knowledge concepts. During the learning process, different kinds of side information are also recorded. The evolving process of the knowledge state is assessed by KT models and illustrated by the radar maps.}\n\t\\label{kt}\n\t\\vspace{-0.5cm}\n\\end{figure*}\nFig. \\ref{kt} presents a simple schematic diagram of knowledge tracing. During the learning process, online learning systems continuously record students' learning behavioral data, including exercises and their related knowledge concepts (e.g., \\emph{equality, inequality, plane vector}, \\emph{probability}, represented in various colors), and students' answers (i.e., correct or incorrect responses). A substantial amount of supplementary information is also simultaneously recorded, including response time, opportunity count, and tutor intervention, which provides a more comprehensive reflection of students' learning process. \nBased on the collected learning data, researchers are striving to maintain an estimate of students' evolving knowledge states. {For illustration, we give a case in Fig. \\ref{kt}, where the student's prior knowledge is quantified as 0.2, 0.4, 0.4, and 0.5 across four distinct knowledge concepts. 
The radar map serves as a visual representation of the student's knowledge mastery, which progressively expands as the student continues to acquire new knowledge in learning.} After a period of learning, the student's knowledge states reach 0.9, 0.8, 0.8, and 0.7 respectively, suggesting good knowledge growth. In the aforementioned learning process, KT models aim to monitor changes in students' knowledge states. Once we understand students' knowledge states, the learning system can customize more suitable learning schemes for different students, thereby enabling the teaching of students in accordance with their proficiency. It also allows students to better comprehend their learning process and gradually focus on improving their skills with poorly mastered concepts .\nKnowledge tracing has been studied for decades, with the first studies tracing back to the late 1970s. These initial works primarily focused on confirming the effectiveness of mastery learning . To the best of our knowledge, the concept of knowledge tracing was first introduced by employing Bayesian networks to model the student learning process, which was referred to as Bayesian Knowledge Tracing. Since then, the significance of KT has been recognized by a broader spectrum of researchers, and increasing attention has been directed towards KT-related research. Many logistic models have been applied to KT, including Learning Factor Analysis and Performance Factor Analysis . In recent years, deep learning has greatly enhanced research into the KT task, largely due to its capacity to extract and represent features and discover intricate structure. For instance, Deep Knowledge Tracing introduced recurrent neural networks (RNNs) into the KT task and was found to significantly outperform previous methods . Following this, various methods have been introduced that apply various types of neural networks to the KT task, considering various characteristics of the learning sequence . 
Moreover, due to the requirements of practical applications, many variants of KT models have been continuously developed, and KT has already been broadly applied in numerous educational scenarios. \nWhile novel KT models continue to emerge, there remains a lack of comprehensive surveys exploring this young research field, particularly {regarding its} numerous variants and applications. To this end, {the current survey aims to systematically review the development of KT}. As depicted in Figure \\ref{tax}, we initially categorize existing KT models from a technical perspective, which is consistent with the majority of existing surveys . This categorization splits them into three categories: (1) Bayesian models, (2) logistic models, and (3) deep learning models. In each category, we further organize specific KT methods according to their various techniques. \nSubsequently, we introduce extensive variants of these fundamental KT models, which consider more stringent assumptions about a more complete learning process in different learning phases. In addition, we present several typical applications of KT in real learning scenarios. Due to the complexity of different KT models, we have open-sourced two algorithm libraries to better aid researchers and practitioners in implementing KT models and facilitate community development in this domain. These libraries, EduData\\footnote{https://github.com/bigdata-ustc/EduData} and EduKTM\\footnote{https://github.com/bigdata-ustc/EduKTM}, include most existing KT-related datasets, extensible and unified implementations of existing KT models, and relevant resources. Finally, we discuss potential future research directions. In summary, this paper presents an extensive survey of KT that can serve as a comprehensive guide for both researchers and practitioners. \nThe remainder of this survey is structured as follows. 
Section \\ref{sec:overview} presents an overview of the KT task and discusses the differences between this and previous surveys. Section \\ref{sec:models} provides a review of the three categories of fundamental KT models. \nSection \\ref{sec:variants} describes the variants of fundamental KT models. \nSection \\ref{sec:applications} introduces the extensive applications of KT in different scenarios. Section \\ref{sec:dataset} gives a summary of existing datasets for evaluating KT models and details of the algorithm libraries we have released. Section \\ref{sec:future} discusses some potential future research directions. Finally, Section \\ref{sec:conclution} summarizes the paper.\n\\begin{figure*}[t]\n\t\\vspace{-0.2cm}\n\t\\centerline{\\includegraphics[width=\\textwidth]{Figures/taxonomy.pdf}}\n\t\\vspace{-0.2cm}\n\t\\caption{{An overview of knowledge tracing models.} }\n\t\\label{tax}\n\t\\vspace{-0.2cm}\n\\end{figure*}", "id": "a96dd0ad-2f86-4f16-b5a9-882602bf6a06", "level": "section", "origin_cites_number": 23, "parent_id": "4e5af7af-552b-4c7f-939b-54b6b4dadeb9", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:overview}", "id": "6987e516-0520-40ba-bdfa-9ecf587cfc88", "level": "section", "origin_cites_number": 0, "parent_id": "4e5af7af-552b-4c7f-939b-54b6b4dadeb9", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Overview" ] ], "subsections": [ "0c71baa7-2bd1-4dbb-a83e-a14394274744", "c68cad9f-5ccb-45ac-9535-78433e591dfe", "1f61ba5e-8077-4921-924c-62e9bddd3afc" ], "title": "Overview" }, { "cite_extract_rate": 0.5, "cites": [ 5376, 5375, 38 ], "content": "In an online learning system, suppose there exist a set of students $\\mathbb{S}$ and a set of exercises $\\mathbb{E}$. 
{Each exercise is related to specific Knowledge Concepts (KCs). Generally, the name given to the knowledge related to exercises differs across online learning platforms. For instance, it is named \\emph{skill} in ASSISTments . To promote better understanding, we refer to these uniformly as knowledge concepts throughout this paper, and denote the set of all KCs as $\\mathbb{KC}$. Moreover, $M$ and $K$ are respectively used to represent the total number of exercises and KCs. Students are asked to answer different exercises in order to achieve mastery of the related knowledge.}\nTherefore, the learning sequence of a student can be formulated as $\\bm{X} = \\{([e_1, k_{e_1}], a_1, r_1), ([e_2, k_{e_2}], a_2, r_2), ..., ([e_t, k_{e_t}], a_t, r_t), ..., \\\\([e_N, k_{e_N}], a_N, r_N)\\}$, where the tuple $([e_t, k_{e_t}], a_t, r_t)$ represents the learning interaction at the $t-$th time step, $e_t$ represents the exercise, $k_{e_t}$ represents the exercise's related KCs, $a_t$ represents the correctness label (i.e., with 1 for correct and 0 for incorrect answers), $r_t$ stands for the side information recorded in this learning interaction, and $N$ is the length of the learning sequence. The research problem of knowledge tracing can thus be defined as follows:\n\\textbf{Given sequences of learning interactions in online learning systems, knowledge tracing aims to monitor students' evolving knowledge states during the learning process and predict their performance on future exercises. The measured knowledge states can be further applied to individualize students' learning schemes in order to maximize their learning efficiency.}\nSome recent works directly regarded the KT task as student performance prediction, without considering students' knowledge states . We agree that predicting student performance is of great significance, as it is now {the best way} to evaluate the quality of the knowledge state traced by KT models. 
However, we have to point out that KT focuses more on students' knowledge states, especially their interpretability and rationality, which is related to the students' acceptance of the conclusions given based on the KT model .", "id": "0c71baa7-2bd1-4dbb-a83e-a14394274744", "level": "subsection", "origin_cites_number": 6, "parent_id": "6987e516-0520-40ba-bdfa-9ecf587cfc88", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Overview" ], [ "subsection", "Problem Definition" ] ], "subsections": [], "title": "Problem Definition" }, { "cite_extract_rate": 0, "cites": [], "content": "{ As shown in Fig. \\ref{tax}, we categorize and summarize the existing KT models according to their technical differences. In more detail, the proposed taxonomy splits existing KT methods into three categories: (1) Bayesian models, which are implemented through probabilistic models, (2) logistic models, which are implemented through logistic functions, and (3) deep learning models, which are implemented through neural networks. Specifically, we divide the deep learning models into four sub-categories according to four types of neural networks, i.e., deep knowledge tracing based on recurrent neural networks, memory-aware knowledge tracing based on memory networks, attentive knowledge tracing based on the self-attention mechanism, and graph-based knowledge tracing based on graph neural networks.\nIn addition to these fundamental KT models, we also introduce a large number of their variants, which respectively consider a more complete learning process in distinct learning phases: (1) modeling individualization before learning, (2) incorporating engagement during learning, (3) considering forgetting after learning, and (4) utilizing side information across learning. 
Moreover, we also summarize the extensive applications of KT in different educational scenarios, including learning resources recommendation, adaptive learning and broader applications beyond student learning.}", "id": "c68cad9f-5ccb-45ac-9535-78433e591dfe", "level": "subsection", "origin_cites_number": 0, "parent_id": "6987e516-0520-40ba-bdfa-9ecf587cfc88", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Overview" ], [ "subsection", "Categorization" ] ], "subsections": [], "title": "Categorization" }, { "cite_extract_rate": 0.41176470588235203, "cites": [ 5372, 5377, 5374, 5370, 8911, 5371, 1206 ], "content": "Given the increasing importance of KT, several recent surveys have also examined this area. These include works by , . Here, we will briefly discuss the key distinctions between these studies to highlight the necessity and significance of this survey. \nExisting surveys have either focused on specific categories of KT models or comprehensively reviewed all available KT models. {For example, provided an overview of KT in terms of Bayesian models and logistic models, compared and discussed deep learning based KT models, while paid more attention to hybrid models in KT.} conducted a bibliometric analysis to examine the evolution of KT research from 1992 to 2021. also presented a comprehensive survey for the KT literature, including a broad range of methods starting from the early attempts to the recent state-of-the-art techniques utilizing deep learning. summarized KT methods in the context of student performance modeling problems. \nHowever, current surveys are somewhat limited in their scope: they only provide a detailed introduction to various KT methods and comparisons between them. 
Given the complexity of online learning systems and the significant importance of KT research in practical applications, this survey places a greater emphasis on the variants and applications of KT models, rather than solely introducing and comparing different KT methods. Moreover, considering that datasets are collected from different systems with various settings, subjects, learning stages, and scales, \nwe do not report and compare the performance of KT models on the student performance prediction task across various datasets in this survey. have also empirically verified that no single KT model was always the best; a better model must consider multiple student features and the learning context.\nInstead, we have open-sourced two algorithm libraries which include the majority of existing KT-related datasets and unified implementations of existing KT models. Consequently, researchers and practitioners can freely select appropriate KT models based on their specific requirements in various application scenarios. 
\n\\begin{table*}[t]\n\t\\vspace{-0.6cm}\n\t\\centering\n\t\\renewcommand\\arraystretch{1.0}\n\t\\caption{A summary of different types of fundamental knowledge tracing models.}\n\t\\vspace{-0.3cm}\n\t\\resizebox{\\textwidth}{!}{\n\t\t\\begin{tabular}{|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t\\multicolumn{1}{|c|}{Category} & Typical approach & Technique & KC relationship & Knowledge state \\bigstrut\\\\\n\t\t\t\\hline\n\t\t\t\\multicolumn{1}{|c|}{\\multirow{2}[5]{*}{Bayesian models}} & Bayesian knowledge tracing & Bayesian networks & independent & \\multirow{2}[5]{*}{\\shortstack{unobservable node \\\\ in HMM}} \\bigstrut\\\\\n\t\t\t\\cline{2-4} & dynamic Bayesian knowledge tracing & dynamic Bayesian networks & pre-defined & \\multicolumn{1}{c|}{} \\bigstrut\\\\\n\t\t\t\\hline\n\t\t\t\\multicolumn{1}{|c|}{\\multirow{3}[8]{*}{Logistic models}} & learning factor analysis & \\multirow{2}[4]{*}{logistic regression} & \\multirow{3}[8]{*}{independent} & \\multirow{3}[6]{*}{\\shortstack{the output of \\\\ logistic regression function}} \\bigstrut\\\\\n\t\t\t\\cline{2-2} & performance factor analysis & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} \\bigstrut\\\\\n\t\t\t\\cline{2-3} & knowledge tracing machines & factorization machines & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} \\bigstrut\\\\\n\t\t\t\\hline\n\t\t\t\\multicolumn{1}{|c|}{\\multirow{4}[6]{*}{\\shortstack{Deep learning \\\\ models}}} & deep knowledge tracing & RNN/LSTM & discover automatically & the hidden state \\bigstrut\\\\\n\t\t\t\\cline{2-5} & memory-aware knowledge tracing & memory networks & correlation weights & \\emph{value} matrix \\bigstrut\\\\\n\t\t\t\\cline{2-5} & attentive knowledge tracing & self-attention mechanism & attention weights & attentive historical knowledge state \\\\\n\t\t\t\\cline{2-5} & graph-based knowledge tracing & graph neural networks & edges in graph & aggregate in the graph 
\\bigstrut\\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t}\n\t\\vspace{-0.6cm}\n\t\\label{tab1}\n\\end{table*}", "id": "1f61ba5e-8077-4921-924c-62e9bddd3afc", "level": "subsection", "origin_cites_number": 17, "parent_id": "6987e516-0520-40ba-bdfa-9ecf587cfc88", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Overview" ], [ "subsection", "Differences between this and previous surveys" ] ], "subsections": [], "title": "Differences between this and previous surveys" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:models}\nIn this section, as shown in Table \\ref{tab1}, we will present the fundamental KT models, based on our taxonomic framework. {Specifically, we will introduce these models in accordance with their development timeline. Subsequently, we will provide a summary of these fundamental KT models.}", "id": "bd30f2af-cabd-4e04-ab90-5536a2f60dde", "level": "section", "origin_cites_number": 0, "parent_id": "4e5af7af-552b-4c7f-939b-54b6b4dadeb9", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Fundamental Knowledge Tracing Models" ] ], "subsections": [ "22232da7-baf6-4ee6-a966-d59421dce543", "737cf102-cd65-4a28-91e5-d7967b691341", "0b0df85b-c041-47ba-8171-c36927847515", "2737aa4f-b2f6-4977-83f3-36d33949ab77" ], "title": "Fundamental Knowledge Tracing Models" }, { "cite_extract_rate": 0, "cites": [], "content": "Bayesian models assume that the learning process adheres to a Markov process. This process allows for the estimation of students' latent knowledge states based on their observed performance. 
In the subsequent section, we will present two Bayesian models in our taxonomy framework: the Bayesian Knowledge Tracing (BKT) and the Dynamic Bayesian Knowledge Tracing (DBKT).", "id": "22232da7-baf6-4ee6-a966-d59421dce543", "level": "subsection", "origin_cites_number": 0, "parent_id": "bd30f2af-cabd-4e04-ab90-5536a2f60dde", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Fundamental Knowledge Tracing Models" ], [ "subsection", "Bayesian Models" ] ], "subsections": [ "9274e535-9216-4905-8c97-6e2b15501b96", "2d449595-4bbc-4a8e-8c76-39195e6d0bf8" ], "title": "Bayesian Models" }, { "cite_extract_rate": 0, "cites": [], "content": "{BKT's structure is illustrated in Fig. \\ref{fbkt}; here, the unshaded nodes represent unobservable latent knowledge states, while the shaded nodes represent the observable answers of the student.}\nBKT is a unique instance of the Hidden Markov Model (HMM). There are two types of parameters in HMM: transition probabilities and emission probabilities. In BKT, the transition probabilities are defined by two learning parameters: (1) $P(T)$, the probability of transition from the unlearned state to the learned state; (2) $P(F)$, the probability of forgetting previously mastered knowledge. Moreover, the emission probabilities are determined by two performance parameters: (1) $P(G)$ - the probability that a student will guess correctly, despite non-mastery; (2) $P(S)$ - the probability that a student will make a mistake, despite mastery. Additionally, $P(L_0)$ represents the initial probability of mastery. BKT operates within a two-state student modeling framework: knowledge is either learned or unlearned, and there is no forgetting once a student has mastered the knowledge. 
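Putting these parameters together, one interaction of the model can be sketched in the following minimal Python snippet; the function name and the parameter values in the example run are our own, chosen purely for illustration:

```python
def bkt_update(p_l, correct, p_t, p_s, p_g):
    """One BKT step: Bayesian posterior given the observed answer,
    then the learning transition (standard BKT assumes no forgetting)."""
    if correct:
        # posterior P(L | correct answer)
        posterior = p_l * (1 - p_s) / (p_l * (1 - p_s) + (1 - p_l) * p_g)
    else:
        # posterior P(L | incorrect answer)
        posterior = p_l * p_s / (p_l * p_s + (1 - p_l) * (1 - p_g))
    # transition from the unlearned to the learned state with probability P(T)
    p_l_next = posterior + (1 - posterior) * p_t
    # predicted probability of a correct answer at the next interaction
    p_correct_next = p_l_next * (1 - p_s) + (1 - p_l_next) * p_g
    return p_l_next, p_correct_next

# illustrative run with assumed parameters: P(L_0)=0.2, P(T)=0.1, P(S)=0.1, P(G)=0.2
p_l = 0.2
for answer in [1, 1, 0, 1]:
    p_l, p_next = bkt_update(p_l, answer, p_t=0.1, p_s=0.1, p_g=0.2)
```

Because the two-state model never forgets, the estimated mastery can only decrease through the Bayesian posterior after an incorrect answer, never through the transition step.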
Based on the observations of the student's learning interactions, the following equation is utilized to estimate the knowledge state and the probability of correct answers: \n\\begin{equation}\n\t\\small\n\t\\begin{aligned}\n\t\tP(L_n) &= P(L_{n}|Answer) + (1 - P(L_{n}|Answer))P(T), \\\\\n\t\tP(C_{n+1}) &= P(L_n)(1 - P(S)) + (1 - P(L_n))P(G),\n\t\\end{aligned}\n\\end{equation}\nwhere $P(L_n)$ is the probability that a KC is mastered at the $n$-th learning interaction, $P(C_{n+1})$ is the probability of correct answers at the next learning interaction. $P(L_n)$ is the sum of two probabilities: (1) the probability that the KC is already mastered; (2) the probability that the knowledge state will convert to the mastered state. The posterior probability $P(L_{n}|Answer)$ is estimated as follows:\n\\begin{equation}\n\t\\scriptsize\n\t\\begin{aligned}\n\t\tP(L_{n}|correct)&=\n\t\t\\frac{P(L_{n-1})(1 - P(S))}{P(L_{n-1})(1 - P(S)) + (1 - P(L_{n-1}))P(G)}, \\\\\n\t\tP(L_{n}|incorrect)&=\n\t\t\\frac{P(L_{n-1})P(S)}{P(L_{n-1})P(S) + (1 - P(L_{n-1}))(1 - P(G))}.\n\t\\end{aligned}\n\\end{equation}\n\\begin{figure}[t]\n\t\\vspace{-0.0cm}\n\t\\centerline{\\includegraphics[width=0.9\\columnwidth]{Figures/BKT_model.pdf}}\n\t\\vspace{-0.2cm}\n\t\\caption{The topology of Bayesian Knowledge Tracing . 
$K$ are the unobserved knowledge nodes, $A$ are the observed performance (answer) nodes, $P(L_0)$ represents the initial probability, $P(T)$ is the transition probability, $P(G)$ is the guessing probability and $P(S)$ is the slipping probability.}\n\t\\label{fbkt}\n\t\\vspace{-0.3cm}\n\\end{figure}", "id": "9274e535-9216-4905-8c97-6e2b15501b96", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "22232da7-baf6-4ee6-a966-d59421dce543", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Fundamental Knowledge Tracing Models" ], [ "subsection", "Bayesian Models" ], [ "subsubsection", "Bayesian Knowledge Tracing" ] ], "subsections": [], "title": "Bayesian Knowledge Tracing" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{dynamic}\nBKT independently models the parameters of each KC, employing a specific model for each KC. However, KCs are not completely independent, but rather hierarchical and closely related . Dynamic Bayesian networks are capable of jointly representing multiple skills within a single model, potentially enhancing the representational power of BKT. Consequently, dynamic Bayesian knowledge tracing (DBKT) was proposed to represent the hierarchies and relationships within KCs using dynamic Bayesian networks. This approach considers different KCs jointly within a single model. \nIn DBKT, a student's knowledge mastery is represented by binary latent variables, which are estimated based on their learning interactions. Unlike BKT, DBKT takes into account the dependencies between various KCs. For example, if $KC_1$ and $KC_2$ are prerequisites for mastering $KC_3$, students' mastery of $KC_3$ depends on their mastery of $KC_1$ and $KC_2$.\nLet $H$ denote the unobserved variables, i.e., the unobserved student answers and the binary mastery variables. Assume the student correctly answers an exercise associated with $KC_1$ at time step $t_1$, i.e., $a_{1,1} = 1$. 
The observed variables are then $a_m = a_{1,1}$ and the unobserved variables are $h_m = \\{KC_{1,1}, KC_{2,1}, KC_{3,1}, a_{2,1}, a_{3,1}\\}$. The objective of DBKT is to find the parameters $\\theta$ that maximize the joint probability $p(a_m, h_m|\\theta)$. The log-likelihood can alternatively be formulated using a log-linear model, as follows: \n\\begin{equation}\n\tL(\\bm{w}) = \\sum_{m}ln(\\sum_{h_m}exp(\\bm{w}^T\\varPhi(a_m, h_m) - ln(Z))),\n\t\\vspace{-0.2cm}\n\\end{equation}\nwhere $\\varPhi: A \\times H \\rightarrow \\mathbb{R}^{F}$ denotes a mapping from the observed space $A$ and the latent space $H$ to an $F$-dimensional feature vector. $Z$ is a normalizing constant, $\\bm{w}$ denotes the weights.", "id": "2d449595-4bbc-4a8e-8c76-39195e6d0bf8", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "22232da7-baf6-4ee6-a966-d59421dce543", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Fundamental Knowledge Tracing Models" ], [ "subsection", "Bayesian Models" ], [ "subsubsection", "Dynamic Bayesian Knowledge Tracing" ] ], "subsections": [], "title": "Dynamic Bayesian Knowledge Tracing" }, { "cite_extract_rate": 0, "cites": [], "content": "{Logistic models represent the probability of students correctly answering exercises as a logistic function of the student and KC parameters.} They first use different factors in students' learning interactions to compute an estimation of the student and KC parameters, then utilize a logistic function to transform this estimation into the prediction of the probability of mastery .\nIn the subsequent section, we will introduce three types of logistic models: (1) Learning Factor Analysis (LFA), (2) Performance Factor Analysis (PFA) and (3) Knowledge Tracing Machines (KTM).", "id": "737cf102-cd65-4a28-91e5-d7967b691341", "level": "subsection", "origin_cites_number": 1, "parent_id": "bd30f2af-cabd-4e04-ab90-5536a2f60dde", "prefix_titles": [ [ 
"title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Fundamental Knowledge Tracing Models" ], [ "subsection", "Logistic Models" ] ], "subsections": [ "5d298d54-78eb-4db4-91fe-1e91f13c0cdc", "d181c1c4-a973-41d5-b53b-df201574371b", "609eb20d-a953-453f-b163-f5e296a7c090" ], "title": "Logistic Models" }, { "cite_extract_rate": 0, "cites": [], "content": "The LFA model considers the following learning factors:\n\\begin{itemize}\n\t\\item{Initial knowledge state}: parameter $\\alpha$ estimates the initial knowledge state of each student;\n\t\\item{Easiness of KCs}: parameter $\\beta$ captures the easiness of different KCs;\n\t\\item{Learning rate of KCs}: parameter $\\gamma$ denotes the learning rate of KCs.\n\\end{itemize}\nThe standard LFA model takes the following form:\n\\vspace{-0.2cm}\n\\begin{equation}\n\tp(\\theta) = \\sigma(\\sum_{i \\in N}\\alpha_iS_i\t+ \\sum_{j \\in KCs}(\\beta_j + \\gamma_jT_j)K_j), \n\t\\vspace{-0.2cm}\n\\end{equation}\nwhere $\\sigma$ is the sigmoid function, $S_i$ is the covariate for the student $i$, $T_j$ represents the covariate for the number of interactions on KC $j$, $K_j$ is the covariate for KC $j$, $p(\\theta)$ is the estimation of the probability of a correct answer.", "id": "5d298d54-78eb-4db4-91fe-1e91f13c0cdc", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "737cf102-cd65-4a28-91e5-d7967b691341", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Fundamental Knowledge Tracing Models" ], [ "subsection", "Logistic Models" ], [ "subsubsection", "Learning Factor Analysis" ] ], "subsections": [], "title": "Learning Factor Analysis" }, { "cite_extract_rate": 0.5, "cites": [ 5377 ], "content": "The PFA model can be seen as an extension of the LFA model that is especially sensitive to the student performance. 
In contrast to the LFA model, PFA considers the following different factors:\n\\begin{itemize}\n\t\\item{Previous failures}: parameter $f$ is the prior failures for the KC of the student;\n\t\\item{Previous successes}: parameter $s$ represents the prior successes for the KC of the student;\n\t\\item{Easiness of KCs}: parameter $\\beta$ means the easiness of different KCs, which is the same as in the LFA model.\n\\end{itemize}\nThe standard PFA model takes the following form:\n\\vspace{-0.15cm}\n\\begin{equation}\n\tp(\\theta) = \\sigma(\\sum_{j \\in KCs}(\\beta_j + \\mu_js_{ij} + \\nu_jf_{ij})),\n\t\\label{pfa}\n\t\\vspace{-0.15cm}\n\\end{equation}\nwhere $\\mu$ and $\\nu$ are the coefficients for $s$ and $f$, which denote the learning rates\nfor successes and failures.\n\\begin{figure}[t]\n\t\\vspace{-0.2cm}\n\t\\centerline{\\includegraphics[width= 0.9\\columnwidth]{Figures/KTM.pdf}}\n\t\\vspace{-0.2cm}\n\t\\caption{Example of activation of a knowledge tracing machine . $V$ refers to the matrix of embeddings, $w$ refers to the vector of biases, $x$ is the encoding vector of the learning interaction.}\n\t\\label{fktm}\n\t\\vspace{-0.6cm}\n\\end{figure}", "id": "d181c1c4-a973-41d5-b53b-df201574371b", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "737cf102-cd65-4a28-91e5-d7967b691341", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Fundamental Knowledge Tracing Models" ], [ "subsection", "Logistic Models" ], [ "subsubsection", "Performance Factor Analysis" ] ], "subsections": [], "title": "Performance Factor Analysis" }, { "cite_extract_rate": 0.25, "cites": [ 5377 ], "content": "The KTM model, developed by Vie et al. , employs factorization machines (FMs) to generalize logistic models to higher dimensions. 
FMs were initially introduced as a general predictor capable of working with any real-valued feature vector, enabling the model to represent all interactions between variables using factorized parameters . FMs provide a means of encoding side information about exercises or students into the model. Figure \\ref{fktm} illustrates the example of KTM, which models the knowledge mastery of the student based on a sparse set of weights for all features involved in the event. Let $L$ be the number of features; here, the features can be related to students, exercises, KCs, or any other side information. The learning interaction is encoded by a sparse vector $\\bm{l}$ of length $L$. When feature $i$ is involved in the interaction, $l_i > 0$. The probability $p(\\theta)$ of the correct answer is determined by the following equations: \n\\vspace{-0.3cm}\n\\begin{equation}\n\tp(\\theta) = \\sigma(\\mu + \\sum_{i = 1}^{L}w_il_i + \\sum_{1 \\leq i < j \\leq L}l_il_j\\langle\\bm{v_i}, \\bm{v_j}\\rangle ),\n\t\\vspace{-0.2cm}\n\\end{equation}\nwhere $\\mu$ is the global bias, the feature $i$ is modeled by the bias $w_i \\in \\bm{R}$ and the embedding $\\bm{v}_i \\in \\bm{R}^d$ ($d$ is the dimension). Note that only features with $l_i > 0$ will have impacts on the predictions.", "id": "609eb20d-a953-453f-b163-f5e296a7c090", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "737cf102-cd65-4a28-91e5-d7967b691341", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Fundamental Knowledge Tracing Models" ], [ "subsection", "Logistic Models" ], [ "subsubsection", "Knowledge Tracing Machines" ] ], "subsections": [], "title": "Knowledge Tracing Machines" }, { "cite_extract_rate": 1, "cites": [ 5378, 5371 ], "content": "\\label{deep}\nThe cognitive process can be influenced by various factors at both the macro and micro levels. 
It is difficult for Bayesian models or logistic models to adequately capture a cognitive process of high complexity .
Deep learning, with its potent ability to achieve non-linearity and feature extraction, is well-suited for modeling complex learning processes, particularly when a significant amount of learning interaction data is available . {In recent years, numerous deep learning KT models have been proposed; we introduce them in four sub-categories: (1) deep knowledge tracing, (2) memory-aware knowledge tracing, (3) attentive knowledge tracing, and (4) graph-based knowledge tracing.}
In practice, a variant of RNNs, the Long Short-Term Memory (LSTM) network , is more frequently used to implement DKT, and DKT can be further strengthened by considering forgetting. 
Fig. \ref{fdkt} illustrates the process of deep knowledge tracing. In DKT, exercises are represented by their contained KCs. For datasets with different numbers of KCs, DKT applies two different methods to convert students' learning interactions $\bm{X} = \{(e_1, a_1), (e_2, a_2), ..., (e_t, a_t), ..., (e_N, a_N)\}$ into a sequence of fixed-length input vectors. More specifically, for datasets with a small number $K$ of unique KCs, $\bm{x}_t \in \{0,1\}^{2K}$ is set as a one-hot embedding, where $ \bm{x}_t^k = 1 $ if the answer $a_t$ of the exercise with KC $k$ was correct or $\bm{x}_t^{k + K} = 1$ if the answer was incorrect. For datasets with a large number of unique KCs, one-hot embeddings are considered too sparse. Therefore, DKT assigns each input vector $\bm{x}_t$ to a corresponding random vector, and then uses the embedded learning sequence as the input of RNNs. 
The hidden state is updated recurrently from the input sequence, and a linear mapping followed by a sigmoid activation is applied to it to obtain the knowledge state of students:
\vspace{-0.1cm}
\begin{equation}
	\begin{aligned}\label{dkt}
		&\bm{h}_t = tanh(\bm{W}_{hs}\bm{x}_t + \bm{W}_{hh}\bm{h}_{t-1} + \bm{b}_h), \\
		&\bm{y}_t = \sigma (\bm{W}_{yh}\bm{h}_{t} + \bm{b}_y),
	\end{aligned}
	\vspace{-0.1cm}
\end{equation}
where $tanh$ is the activation function, $\bm{W}_{hs}$ is the input weight matrix, $\bm{W}_{hh}$ is the recurrent weight matrix, $\bm{W}_{yh}$ is the readout weight matrix, and $\bm{b}_h$ and $\bm{b}_y$ are the bias terms.
\begin{figure}[t]
	\vspace{-0.3cm}
	\centerline{\includegraphics[width=0.9\columnwidth]{Figures/DKT_model.pdf}}
	\vspace{-0.3cm}
	\caption{The architecture of DKT .}
	\label{fdkt}
	\vspace{-0.6cm}
\end{figure}
Despite demonstrating superior performance compared to Bayesian and logistic models, DKT has several inherent shortcomings. For instance, the lack of interpretability is a significant drawback. It is challenging to understand how the hidden states represent students' knowledge states, and the model cannot explicitly determine a student's knowledge mastery from the hidden state . Additionally, identified two unreasonable phenomena in DKT that contravene common sense. These are: (1) the inability to reconstruct observed input, and (2) inconsistent predicted knowledge states across time-steps.
However, despite these shortcomings, DKT remains a promising KT model .", "id": "6bda0552-a0c8-4448-bccb-783dc9dbe658", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "0b0df85b-c041-47ba-8171-c36927847515", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Fundamental Knowledge Tracing Models" ], [ "subsection", "Deep Learning Models" ], [ "subsubsection", "Deep Knowledge Tracing" ] ], "subsections": [], "title": "Deep Knowledge Tracing" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 5380, 1206 ], "content": "To enhance the interpretability of DKT, memory-aware knowledge tracing introduces an external memory module, as proposed by . This module is designed to {store the and update the corresponding knowledge mastery of the student.} The most representative example is Dynamic Key-Value Memory Networks (DKVMN) for knowledge tracing, as proposed by . DKVMN highlights students' specific knowledge states on various knowledge categories. It initializes a static matrix, referred to as a $key$ matrix to store latent KCs and a dynamic matrix, called a $value$ matrix to store and update the mastery of corresponding KCs through read and write operations over time.\nAs shown in Fig. \\ref{fdkvmn}, an embedding matrix is first defined to obtain the embedding vector $k_t$ of the exercises. A correlation weight $\\bm{w}_t$ is then obtained by taking the inner product between the exercise embedding $k_t$ and the $key$ vectors $M^k$, followed by the softmax activation:\n\\vspace{-0.2cm}\n\\begin{equation}\n\t\\bm{w}_t = Softmax(k_tM^k),\n\t\\vspace{-0.2cm}\n\\end{equation}\nwhere the correlation weight $\\bm{w}_t$ represents the correlation between the exercises and all latent KCs.\nIn the read operation, DKVMN predicts student performance based on the student's knowledge mastery. 
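The recurrence in Eq. (\ref{dkt}) can be sketched directly in its vanilla-RNN form. All weights below are hypothetical random values rather than a trained model:

```python
import numpy as np

def dkt_step(x_t, h_prev, W_hs, W_hh, b_h, W_yh, b_y):
    """One vanilla-RNN step of DKT: hidden-state update, then per-KC readout."""
    h_t = np.tanh(W_hs @ x_t + W_hh @ h_prev + b_h)
    y_t = 1.0 / (1.0 + np.exp(-(W_yh @ h_t + b_y)))  # correctness prob. per KC
    return h_t, y_t

# Toy sizes: K = 2 KCs -> one-hot input of length 2K = 4, hidden size 3.
# The weights are hypothetical random values.
rng = np.random.default_rng(0)
K, H = 2, 3
W_hs = rng.normal(size=(H, 2 * K))
W_hh = rng.normal(size=(H, H))
W_yh = rng.normal(size=(K, H))
b_h, b_y = np.zeros(H), np.zeros(K)

x_t = np.zeros(2 * K)
x_t[1] = 1.0  # a correct answer on KC 1 (one-hot encoding described above)
h_t, y_t = dkt_step(x_t, np.zeros(H), W_hs, W_hh, b_h, W_yh, b_y)
```

In a real implementation the vanilla-RNN cell would typically be replaced by an LSTM cell, and the output $\bm{y}_t$ gives one predicted correctness probability per KC.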
Specifically, DKVMN reads students’ mastery of the exercise $\\bm{r}_t$ with reference to the weighted sum of all memory vectors in the $value$ matrix using the correlation weight. The read content and the input exercise embeddings are then concatenated together and passed to a fully connected layer to yield a summary vector $\\bm{f}_t$, which contains both the student's knowledge mastery and the prior difficulty of the exercise. Furthermore, the student's performance can be predicted by applying another fully connected layer with a sigmoid activation function to the summary vector:\n\\vspace{-0.2cm}\n\\begin{equation}\n\t\\begin{aligned}\n\t\t&\\bm{r}_t = \\sum_{i=1}^{N}w_t(i)M_t^v(i), \\\\\n\t\t&\\bm{f}_t = tanh(\\bm{W}_f[\\bm{r}_t, k_t] + \\bm{b}_f),\\\\\n\t\t&p_t = \\sigma(\\bm{W}_p\\bm{f}_t + \\bm{b}_p),\n\t\\end{aligned}\n\t\\vspace{-0.1cm}\n\\end{equation}\nwhere $\\bm{W}_f$ and $\\bm{W}_p$ are the weights, $\\bm{b}_f$ and $\\bm{b}_p$ are bias terms.\nIn the write operation, after an exercise has been answered, DKVMN updates students' knowledge mastery (i.e., the $value$ matrix) based on their performance. Specifically, the learning interaction $(e_t, a_t)$ is first embedded with an embedding matrix $\\bm{B}$ to obtain the student's knowledge growth $\\bm{v}_t$. 
Then DKVMN calculates an erase vector $\\bm{erase}_t$ from $\\bm{v}_t$ and decides to erase the previous memory with reference to both the erase vector and the correlation weight $\\bm{w}_t$.\nFollowing erasure, the new memory vectors are updated by the new knowledge state and the add vector $\\bm{add}_t$, which forms an $erase$-followed-by-$add$ mechanism that allows forgetting and strengthening knowledge mastery in the learning process:\n\\vspace{-0.1cm}\n\\begin{equation}\n\t\\begin{aligned}\n\t\t&\\bm{erase}_t = \\sigma(\\bm{W}_e\\bm{v}_t + \\bm{b}_e), \\\\\n\t\t&\\widetilde{M}_t^v(i) = M_{t-1}^v(i)[1 - w_t(i)\\bm{erase}_t],\\\\\n\t\t&\\bm{add}_t = tanh(\\bm{W}_d\\bm{v}_t + \\bm{b}_d),\\\\\n\t\t&M_{t}^v(i) = \\widetilde{M}_t^v(i) + w_t(i)\\bm{add}_t,\\\\\n\t\\end{aligned}\n\t\\vspace{-0.1cm}\n\\end{equation}\nwhere $\\bm{W}_e$ and $\\bm{W}_d$ are the weights, $\\bm{b}_e$ and $\\bm{b}_d$ are bias terms.\n point out that DKVMN failed to capture long-term dependencies in the learning process. Therefore, they propose a Sequential Key-Value Memory Network (SKVMN) to combine the strengths of DKT's recurrent modelling capacity and DKVMN's memory capacity. In SKVMN, a modified LSTM called \\emph{Hop-LSTM} is used to hop across LSTM cells according to the relevance of the latent KCs, which directly captures the long-term dependencies. During the writing process, SKVMN allows for the calculation of the knowledge growth of a new exercise, taking into consideration the current knowledge state, thereby yielding more reasonable results. 
\n\\begin{figure}[t]\n\t\\centerline{\\includegraphics[width=0.9\\columnwidth]{Figures/DKVMN.pdf}}\n\t\\vspace{-0.3cm}\n\t\\caption{The architecture of DKVMN .}\n\t\\label{fdkvmn}\n\t\\vspace{-0.6cm}\n\\end{figure}", "id": "a1cbba76-ad36-4969-8878-16538f12140f", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "0b0df85b-c041-47ba-8171-c36927847515", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Fundamental Knowledge Tracing Models" ], [ "subsection", "Deep Learning Models" ], [ "subsubsection", "Memory-aware Knowledge Tracing" ] ], "subsections": [], "title": "Memory-aware Knowledge Tracing" }, { "cite_extract_rate": 0.5, "cites": [ 679, 5376, 5374, 5375, 5370, 38 ], "content": "\\label{attentive}\nIn the development of deep learning, the Transformer is initially proposed for neural machine translation , which abandons recurrence and solely relies on the self-attention mechanism to capture global dependencies within a sequence. The Transformer has been demonstrated to excel in feature extraction and dependency capture, while maintaining high computational efficiency. Some representative pre-training models based on the Transformer, such as BERT and GPT , have obtained state-of-the-art results on various natural language processing tasks. propose a self-attentive model for knowledge tracing (SAKT), which directly apply the Transformer to capture long-term dependencies between students' learning interactions. Furthermore, introduce an adaptive sparse self-attention network to generate missing features and simultaneously produce fine-grained predictions of student performance. employ a multi-head ProbSparse self-attention mechanism to mitigate the time complexity and effectively capture the long-term dependencies in students' learning interactions. \nHowever, the complexity of the KT task often limits the performance of the aforementioned simple Transformer applications. 
introduce a novel approach named Separated Self-Attentive Neural Knowledge Tracing (SAINT) to better adapt self-attentive computation to the KT task. Specifically, SAINT employs an encoder-decoder structure, with the exercise and answer embeddings being separately encoded and decoded by self-attention layers. The separation of the input allows SAINT to stack self-attention layers multiple times, thus capturing complex relations in student interactions. Subsequently, introduce the SAINT+ model, which integrates two temporal features into SAINT: namely, the time taken to answer each exercise and the interval time between consecutive learning interactions. Both SAINT and SAINT+ have outperformed the SAKT model on the student performance prediction task. 
Additionally, observe that SAKT does not surpass DKT and DKVMN in their experiments. Unlike SAINT and SAINT+, they present a context-aware attentive knowledge tracing (AKT) model. This model integrates the self-attention mechanism with psychometric models, creating a more effective system. AKT is composed of four modules: Rasch model-based embeddings, exercise encoder, knowledge encoder, and knowledge retriever. 
Specifically, the embedding module employs the classic Rasch model in psychometrics to construct embeddings for exercises and KCs:
\vspace{-0.2cm}
\begin{equation}
	\begin{aligned}
		\bm{x}_t = \bm{c}_{c_t} + \mu_{e_t} \cdot \bm{d}_{c_t}, \\
	\end{aligned}
	\vspace{-0.2cm}
\end{equation}
where $\bm{c}_{c_t} \in \mathbb{R}^{\bm{D}}$ is the embedding of the KC of this exercise, $\bm{d}_{c_t} \in \mathbb{R}^{\bm{D}}$ is a vector that summarizes the variation in exercises with the related KC, and $\mu_{e_t} \in \mathbb{R}$ is a scalar difficulty parameter that controls the extent to which this exercise deviates from the related KC.
The exercise-answer tuple $(e_t, a_t)$ is similarly extended using the scalar difficulty parameter for each pair:
\vspace{-0.2cm}
\begin{equation}
	\begin{aligned}
		\bm{y}_t = \bm{q}_{(c_t, a_t)} + \mu_{e_t} \cdot \bm{f}_{(c_t, a_t)}, \\
	\end{aligned}
	\vspace{-0.2cm}
\end{equation}
where $\bm{q}_{(c_t, a_t)} \in \mathbb{R}^{\bm{D}}$ is the KC-answer embedding, $\bm{f}_{(c_t, a_t)} \in \mathbb{R}^{\bm{D}}$ is the variation vector. Through the above embedding, exercises labeled as the same KCs are determined to be closely related while retaining important individual characteristics.
Then, in the exercise encoder, the input is the exercise embeddings $ \{\bm{e}_1, . . . , \bm{e}_t \} $ and the output is a sequence of context-aware exercise embeddings $\{\widetilde{\bm{e}}_1, . . ., \widetilde{\bm{e}}_t\}$. AKT designs a monotonic attention mechanism to accomplish the above process, where the context-aware embedding of each exercise depends on both itself and the previous exercises, i.e., $\widetilde{\bm{e}}_t = f_{enc_1}(\bm{e}_1, . . . , \bm{e}_t)$. Similarly, the knowledge encoder takes exercise-answer embeddings $ \{\bm{y}_1, . . . 
, \\bm{y}_t \\} $ as input and\noutputs a sequence of context-aware embeddings of the knowledge acquisitions $\\{\\widetilde{\\bm{y}}_1, . . ., \\widetilde{\\bm{y}}_t\\}$ using the same monotonic attention mechanism; these are also determined by students' answers to both the current exercise and prior exercises, i.e., $\\widetilde{\\bm{y}}_t = f_{enc_1}(\\bm{y}_1, . . . , \\bm{y}_t)$.\nFinally, the knowledge retriever takes the context-aware exercise embedding $\\widetilde{\\bm{e}}_{1:t}$ and exercise-answer pair embeddings $\\widetilde{\\bm{y}}_{1:t}$ as input and outputs a retrieved knowledge state $\\bm{h}_t$ for the current exercise. Since the student’s current knowledge state depends on answering the related exercise, it is also context-aware in AKT.\nThe novel monotonic attention mechanism proposed in AKT is based on the assumption that the learning process is temporal and students' knowledge will decay over time. Therefore, the scaled inner-product attention mechanism utilized in the original Transformer is not suitable for the KT task. AKT uses exponential decay and a context-aware relative distance measure to compute the attention weights. Finally, AKT achieves outstanding performance in predicting students' future answers, as well as demonstrating interpretability due to the combination of the psychometric model.\nIt is important to note that have recently proposed that attentive knowledge tracing models significantly benefit from students' continuous, repeated interactions on the same exercises throughout the learning process. In their experiments, the removal of these repeated interactions in the dataset led to a decline in AKT's performance, bringing it close to that of DKVMN. \n{Moreover, according to the findings of ,} existing attentive attentive KT models primarily trace patterns of a learner's learning activities, rather than their evolving knowledge states. 
Consequently, they develop the DTransformer model to facilitate stable knowledge state estimation and tracing, rather than solely focusing on next performance prediction. \n\\begin{figure}[t]\n\t\\centerline{\\includegraphics[width=0.9\\columnwidth]{Figures/gkt.pdf}}\n\t\\vspace{-0.3cm}\n\t\\caption{The architecture of graph-based knowledge tracing .}\n\t\\label{fgkt}\n\t\\vspace{-0.3cm}\n\\end{figure}", "id": "b60bbb0c-acf3-4bdd-ace6-fb43306d7dda", "level": "subsubsection", "origin_cites_number": 12, "parent_id": "0b0df85b-c041-47ba-8171-c36927847515", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Fundamental Knowledge Tracing Models" ], [ "subsection", "Deep Learning Models" ], [ "subsubsection", "Attentive Knowledge Tracing" ] ], "subsections": [], "title": "Attentive Knowledge Tracing" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 243, 553 ], "content": "\\label{graph}\nGraph neural networks (GNNs), which are designed to handle complex graph-related data, have developed rapidly in recent years . The graph represents a kind of data structure that models a set of objects (nodes) and their relationships (edges). From a data structure perspective, there is a naturally existing graph structure within the KCs. 
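AKT's Rasch-model-based embedding module reduces to two table lookups plus a scalar-weighted sum. A minimal sketch with hypothetical toy lookup tables (the encoders and retriever are omitted):

```python
import numpy as np

def rasch_embeddings(c_emb, d_emb, q_emb, f_emb, mu, kc, ans):
    """Rasch-model-based embeddings: x_t = c + mu*d, y_t = q + mu*f."""
    x_t = c_emb[kc] + mu * d_emb[kc]            # exercise embedding
    y_t = q_emb[kc, ans] + mu * f_emb[kc, ans]  # exercise-answer embedding
    return x_t, y_t

# Hypothetical toy tables: K = 3 KCs, answers in {0, 1}, embedding dim D = 4.
rng = np.random.default_rng(2)
K, D = 3, 4
c, d = rng.normal(size=(K, D)), rng.normal(size=(K, D))
q, f = rng.normal(size=(K, 2, D)), rng.normal(size=(K, 2, D))

# A correct answer (ans=1) on an exercise of KC 1 with scalar difficulty 0.7.
x_t, y_t = rasch_embeddings(c, d, q, f, mu=0.7, kc=1, ans=1)
```

Because the per-exercise deviation is controlled by a single scalar $\mu_{e_t}$, exercises sharing a KC stay close in embedding space while still being distinguishable by difficulty.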
Therefore, incorporating the graph structure of the KCs as additional information should be beneficial to the KT task.
 presented graph-based knowledge tracing (GKT), which conceptualizes the potential graph structure of the KCs as a graph $G = (V, E)$, where nodes $V = \{v_1, v_2, ..., v_N\}$ represent the set of KCs and the edges $E \subseteq V \times V $ represent relationships of these KCs; moreover, $ \bm{h}^t = \{\bm{h}^t_{i \in V}\} $ represents the student's temporal knowledge state after answering the exercise at time $t$.
The architecture for graph-based knowledge tracing is presented in Figure \ref{fgkt}, which is composed of three parts: (1) $aggregate$, (2) $update$ and (3) $predict$.
In the $aggregate$ module, GKT aggregates the temporal knowledge state and the embedding for the answered KC $i$ and its neighboring KC $j$:
\vspace{-0.1cm}
\begin{equation}
	\bm{h}_k^{'t}=\left\{
	\begin{aligned}
		&[\bm{h}_k^{t}, a^t\bm{E}_s] &(k = i), \\
		&[\bm{h}_k^{t}, \bm{E}_e(k)] &(k \neq i),
	\end{aligned}
	\right.
	\vspace{-0.1cm}
\end{equation}
where $a^t$ indicates whether the exercise at time step $t$ was answered correctly or incorrectly, $\bm{E}_s$ is the embedding matrix for the learning interactions, $\bm{E}_e$ is the embedding matrix for the KC, and $k$ represents the $k$-th row of $\bm{E}_e$.
In the $update$ module, GKT updates the temporal knowledge state based on the aggregated features and the knowledge graph structure, as follows:
\vspace{-0.1cm}
\begin{equation}
	\begin{aligned}
		&\bm{m}_k^{t+1}=
		\begin{cases}
			&f_{self}(\bm{h}_k^{'t}) (k = i), \\
			&f_{neighbor}(\bm{h}_i^{'t}, \bm{h}_k^{'t}) (k \neq i),\\
		\end{cases}\\
		&\widetilde{\bm{m}}_k^{t+1} = G_{ea}(\bm{m}_k^{t+1}), \\
		&\bm{h}_{k}^{t+1} = G_{gru}(\widetilde{\bm{m}}_k^{t+1}, \bm{h}_{k}^{t}), \\
	\end{aligned}
	\vspace{-0.1cm}
\end{equation}
where $f_{self}$ is the multilayer
perceptron, $G_{ea}$ is the same $erase$-followed-by-$add$ mechanism used in DKVMN, and $G_{gru}$ is the gated recurrent unit (GRU) gate . Moreover, $f_{neighbor}$ defines the information propagation to neighboring nodes based on the knowledge graph structure.
In the $predict$ module, GKT predicts the student's performance at the next time step according to the updated temporal knowledge state:
\vspace{-0.1cm}
\begin{equation}
	y_k^t = \sigma(\bm{W}_k\bm{h}_k^{t+1} + \bm{b}_k),
	\vspace{-0.1cm}
\end{equation}
where $\bm{W}_k$ is the weight parameter and $\bm{b}_k$ is the bias term.
In addition to modeling the graph structure in KCs by graph neural networks, propose to model the educational relation and topology in the concept map, which are intended to act as mathematical constraints for the construction of the KT model.
Recently, in the attempt to further explore knowledge structure, propose structure-based knowledge tracing (SKT), which aims to capture the multiple relations in knowledge structure to model the influence propagation among concepts. SKT is mainly motivated by an education theory, \emph{transfer of knowledge} , which claims that students' knowledge states on some relevant KCs will also be changed when they are practicing on a specific KC due to the potential knowledge structure among KCs. Therefore, a student's knowledge state is determined by not only the temporal effect from the exercise sequence, but also the spatial effect from the knowledge structure. To concurrently model the latent spatial effects, SKT presents the synchronization and partial propagation methods to characterize the undirected and directed relations between KCs, respectively. In this way, SKT measures influence propagation in the knowledge structure with both temporal and spatial relations. 
To remove the dependence on a predefined knowledge structure, propose the Automatical Graph-based Knowledge Tracing (AGKT) model, which utilizes an automatically constructed graph to measure students' knowledge states without manual annotations.
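GKT's $aggregate$ and $predict$ steps can be sketched compactly; the $update$ module (with its MLP, $erase$-followed-by-$add$ mechanism, and GRU gate) is omitted for brevity, and all parameters below are hypothetical random values:

```python
import numpy as np

def gkt_aggregate(h, i, a_t, E_s, E_e):
    """Aggregate step: concatenate each KC's hidden state with the
    interaction embedding (for the answered KC i) or its own KC
    embedding (for every other KC k != i)."""
    out = []
    for k in range(h.shape[0]):
        feat = a_t * E_s if k == i else E_e[k]
        out.append(np.concatenate([h[k], feat]))
    return np.stack(out)

def gkt_predict(h_states, W, b):
    """Predict step: per-KC correctness probability from knowledge states."""
    return 1.0 / (1.0 + np.exp(-(h_states @ W + b)))

# Toy setup with hypothetical random parameters: N = 3 KCs, state dim 2.
rng = np.random.default_rng(3)
N, D = 3, 2
h = rng.normal(size=(N, D))  # stand-in for the (updated) knowledge states
agg = gkt_aggregate(h, i=1, a_t=1.0,
                    E_s=rng.normal(size=D), E_e=rng.normal(size=(N, D)))
y = gkt_predict(h, rng.normal(size=D), 0.0)
```

The key difference from DKT is visible here: the state is a matrix with one row per KC, so an interaction on KC $i$ can influence its graph neighbors rather than a single monolithic hidden vector.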
Specifically, they employed the layer-wise relevance propagation (LRP) technique to interpret DKT by measuring the relevance between DKT's output and input. Preliminary experimental results suggest that this post-hoc approach could be a promising method for explaining DKT. Besides, explainable AI (xAI) is proposed to make the black-box deep learning models more transparent, thereby promoting their applications . proposed to use the xAI technique to interpret the complex KT models based on deep learning. The interpretation results of DKT have been demonstrated to help enhance the trust of students and teachers. Their findings suggested that it is promising to utilize xAI techniques to interpret the deep learning KT models, thereby assisting users in accepting and applying the suggestions provided by these models. 
}
Accordingly, we classify and review current variants of fundamental KT models into four categories: (1) Modeling individualization before learning, (2) Incorporating engagement during learning, (3) Considering forgetting after learning, and (4) Utilizing side information across learning.
However, this approach only offers a marginal improvement compared to the original BKT. 
Subsequently, propose two simple variants of BKT that respectively individualize students' initial probability of mastery and the probability of transition from the unlearned state to the learned state. {Specifically, a student node is added to individualize the initial probability of mastery for each student. The student node assigns each student a personalized initial probability of mastery. A conditional probability table is designed to determine the value of the student node. Similarly, by changing the connection of the student node to the subsequent knowledge nodes, the transition probability parameter can also be individualized. In this case, the student node gives individualized transition parameters to each student.}
Moreover, rather than individualizing only one kind of parameter in BKT, some other variants of BKT opt to individualize all four BKT parameters simultaneously . suggest that when applied in an intelligent tutoring system, the individualized BKT model can yield good improvements to student learning efficacy, reducing by about half the number of questions required for 20\% of students to achieve mastery.
Another means of modeling individualization is clustering, which considers a wider range of students in different groups . 
By clustering the students into various groups, we can train different KT models and make predictions on the test data. The number of clusters is then varied according to the student groups and the prediction process is repeated iteratively. Finally, we can obtain a set of different predictions. 
Furthermore, there are two common methods used to combine these predictions: (1) uniform averaging, which simply averages the predictions; (2) weighted averaging, which combines the models by means of a weighted average.
To realize clustering, K-means is a basic clustering algorithm that randomly initializes a set of cluster centroids, which are identified using Euclidean distance. Another popular clustering algorithm is spectral clustering, which represents the data as an undirected graph and analyzes the spectrum of the graph Laplacian obtained from the pairwise similarities of data points. Recently, some novel clustering algorithms have been proposed, including discrete nonnegative spectral clustering and clustering uncertain data .
 propose a model named deep knowledge tracing with dynamic student classification (DKT-DSC), which introduces individualization to DKT by exploiting the idea of clustering. 
According to students' previous performance, DKT-DSC assigns students with similar learning ability to the same group . The knowledge states of students in different groups are then traced by different DKT models. Moreover, considering the dynamic property of the learning ability, each student's learning sequence is segmented into multiple time intervals. At the start of each time interval, DKT-DSC will reassess students' learning ability and reassign their groups. 
In DKT-DSC, the K-means clustering algorithm is utilized to group students with similar ability levels together at each time interval. After learning the centroids of all K clusters, each student is assigned to the nearest cluster. Through dynamic student clustering, DKT-DSC offers an effective approach to realizing individualization in DKT.
 claim that it is significant to consider both individual exercise representation and individual prior knowledge. They propose a fine-grained knowledge tracing model, named FGKT. 
FGKT obtains the individual exercise representation through the acquisition of KCs and exercise distinctions. Subsequently, it assesses the individual prior knowledge by evaluating the relevance between current and historical learning interactions. Finally, the above individual representations will be utilized as the input of LSTM in FGKT to evaluate students' evolving knowledge states. also notice that the individualization of exercises is significant for measuring students' knowledge states. They propose to consider multiple exercise factors, including the difficulty
and the discrimination, to enhance the performance of DKT.
 propose a convolutional knowledge tracing model (CKT) to implicitly measure student individualization. Specifically, CKT considers two factors that influence students' individualization: individualized learning rates and individualized prior knowledge.
Individualized learning rates represent students' differing capacities to absorb knowledge. The sequence of student learning interactions can reflect different learning rates in the sense that students with high learning rates can rapidly master knowledge, while others need to spend more time trying and failing. Therefore, it is reasonable to assess the differences in learning rate by simultaneously processing several continuous learning interactions within a sliding window of convolutional neural networks . 
Besides, individualized prior knowledge refers to students' prior knowledge, which can be assessed via their historical learning interactions.", "id": "242f5268-6395-4e4d-b14a-e118bc61d29b", "level": "subsection", "origin_cites_number": 14, "parent_id": "a705e159-25ef-44ad-8166-1cef83a7c265", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Variants of Knowledge Tracing Models" ], [ "subsection", "Modeling Individualization before Learning" ] ], "subsections": [], "title": "Modeling Individualization before Learning" }, { "cite_extract_rate": 0, "cites": [], "content": "Student engagement is defined as \\emph{\"the quality of effort students themselves devote to educationally purposeful activities that contribute directly to desired outcomes\"} . This definition highlights a strong connection {between student engagement and the learning process.} Generally, higher engagement leads to enhanced knowledge gains. Consequently, considering student engagement in the learning process could potentially improve knowledge tracing results . In this section, {we will present some variants that integrate student engagement into the KT models. }\nStudent engagement is difficult to be directly measured. In practice, some online learning systems have made use of sensor data to measure student engagement. For example, inexpensive portable electroencephalography (EEG) devices can help to detect a variety of student mental states in learning, which can be seen as reflections of student engagement . propose two methods that combine EEG-measured mental states to improve the performance of BKT. Concretely, the first one inserts a one-dimensional binary EEG measure into BKT, forming the EEG-BKT structure that extends BKT by adding a binary variable node $E$ between the knowledge node and the answer node. 
The second one, i.e., EEG-LRKT, utilizes logistic regression to combine an $m$-dimensional continuous variable $E$ extracted from the raw EEG signal in BKT.\nHowever, in most cases, it is difficult to collect sensor data on every student. Therefore, propose the knowledge and affect tracing (KAT) to model both knowledge and engagement in parallel. KAT is a sensorless model that does not rely on any sensor data. In this model, both knowledge and engagement are assumed to have direct influences on student performance. KAT considers three kinds of disengagement behaviors: quick guess (the student makes an attempt very quickly), bottom-out hint (all available hints are used) and many attempts (making more than three attempts at an exercise). These three behaviors are grouped as “gaming” behaviors in order to predict students' knowledge and engagement at each learning interaction. \nRather than assuming equal influence of knowledge and engagement on students' knowledge state, one variation on the KAT model defines the connection between knowledge and engagement, and accordingly considers that students' knowledge states will influence their engagement. For example, students are more likely to disengage from knowledge they are not familiar with.\nMoreover, rather than explicitly modeling student engagement, further propose the knowledge tracing with behavior (KTB) model, which has only one latent knowledge node that acts as a combination of both knowledge and engagement. KTB assumes that both engagement and performance are expressions of knowledge. The Bayesian estimation of the knowledge state needs to be inferred by both student engagement and performance.\n propose to add five features in the process of watching videos on MOOCs to the input of DKT. These features reflect student engagement from various aspects, including playback speed, whether or not the video was paused, fast-forwarded or rewound, and whether or not the video was completed. 
For example, if a student watches a video at a much faster playback speed, it is likely that he/she is impatient and absent-minded. This model incorporates two further features: whether or not the exercise was submitted with an answer selected and whether or not the exercise was a part of an end-of-unit quiz, both of which are considered together. Experimental results indicate that DKT can achieve better performance through incorporating the above binarized engagement covariates.", "id": "b2eec265-034c-4c04-b806-03ebaf934ee5", "level": "subsection", "origin_cites_number": 7, "parent_id": "a705e159-25ef-44ad-8166-1cef83a7c265", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Variants of Knowledge Tracing Models" ], [ "subsection", "Incorporating Engagement during Learning" ] ], "subsections": [], "title": "Incorporating Engagement during Learning" }, { "cite_extract_rate": 0.08333333333333301, "cites": [ 5378 ], "content": "In real-world scenarios, while learning, forgetting is inevitable . The \\emph{Ebbinghaus forgetting curve theory} indicates that students' knowledge proficiency will decline due to forgetting . Recently, proposed the concept of 'Knowledge Proficiency Tracing' (KPT), a model that can dynamically capture the changes in students' proficiency levels on knowledge concepts over time. This model effectively tracks these changes in an interpretable manner. Therefore, the assumption that students' knowledge states will remain constant over time is untenable. However, fundamental KT models, such as the BKT, often overlook forgetting. In the following, we will introduce some variants of fundamental KT models that have attempted to consider forgetting after learning for more precise knowledge states. 
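The Ebbinghaus forgetting curve mentioned above can be illustrated numerically. The retention form R(t) = exp(-t/s) and the stability constant below are a common textbook parameterization, not the parameters of any specific KT model:

```python
import math

def retention(t_hours, stability=24.0):
    """Ebbinghaus-style retention: fraction of knowledge kept after t hours.
    `stability` (an assumed constant) controls how fast memory decays."""
    return math.exp(-t_hours / stability)

for t in [0, 24, 72, 168]:  # now, 1 day, 3 days, 1 week
    print(f"after {t:4d} h: retention = {retention(t):.2f}")
```

Under this assumption, proficiency estimated at the last interaction should be discounted by the elapsed time before predicting the next response, which is exactly what the variants below attempt in various ways.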
\n discover that BKT consistently overestimates the accuracy of students' answers when a day or more has elapsed since {their previous responses.} The underlying reason is that BKT assumes that student performance will remain the same regardless of how much time has passed. To consider how student performance declines with time, they propose a BKT-Forget model, which hypothesizes that students may forget information they have learned as days go by. In the BKT-Forget model, a time node is added to specify which parameters should be affected by a new day, and the new day node is fixed with a prior probability of 0.2. {It also introduces separate parameters representing the forgetting rate on a new day and the forgetting rate within the same day.} However, although BKT-Forget does consider the decline in student performance, it can only model forgetting that occurs over the time scale of days. \nTo model the continuous decay of knowledge as time progresses, incorporate forgetting into BKT based on the assumption that learned knowledge decays exponentially over time . An exponential decay function is thus utilized to update the knowledge mastery level. They further assume that the chance of forgetting will increase if a student does not practice the knowledge concepts within 30 days.\nMoreover, introduce an approach that counts the number of intervening trials and treats each as an independent opportunity for forgetting to occur.\nRecall the PFA model in Eq.(\ref{pfa}), in which the probability of students' mastery is estimated using a logistic function: $ p(\theta) = \sigma(\beta + \mu s + \nu f)$. The original PFA model ignores the order of answers, in addition to the time between learning interactions. It is therefore difficult to directly incorporate time information into the original PFA model.\n propose PFAE (PFA Elo/Extended), a variant of the PFA model that combines PFA with some aspects of the Elo rating system . 
The Elo rating system was originally devised for chess rating (estimating players' skills based on match results). In PFAE, $\theta$ is updated after each learning interaction:\n\vspace{-0.1cm}\n\begin{equation}\n\t\theta :=\left\{\n\t\begin{aligned}\n\t\t& \theta + \mu \cdot (1 - p(\theta))& \mbox{if the answer was correct}, \\\n\t\t& \theta + \nu \cdot p(\theta)& \mbox{if the answer was wrong}.\n\t\end{aligned}\n\t\right.\n\t\vspace{-0.1cm}\n\end{equation}\nAs the forgetting behavior of students is closely related to time, in order to consider forgetting, add a time effect function $f$ to $\theta$, i.e., using $p(\theta+f(t))$ instead of $p(\theta)$, where $t$ is the time (in seconds) since the last learning interaction, and $f$ is the time effect function.\nTo represent complex forgetting behavior, the DKT-forget model introduces forgetting into DKT by considering three types of side information related to forgetting: (1) the repeated time gap, which represents the interval between the present interaction and the previous interaction with the same KC; (2) the sequence time gap, which represents the interval between the present interaction and the previous interaction; and (3) past trial counts, which represent the number of times a student has attempted exercises with the same KC. All three features are discretized at $\log_2$ scale.\nThis side information is concatenated and represented as a multi-hot vector $\bm{c}_t$, which is integrated with the embedding vector $\bm{v}_t$ of the learning interaction, as follows:\n\vspace{-0.1cm}\n\begin{equation}\n\t\bm{v}_t^c = \theta^{in}(\bm{v}_t, \bm{c}_t),\n\t\vspace{-0.1cm}\n\end{equation}\nwhere $\theta^{in}$ is the input integration function. The integrated input $\bm{v}_t^c$ and the previous knowledge state $\bm{h}_{t-1}$ are passed through the RNNs to update $\bm{h}_t$ in the same way as in Eq.(\ref{dkt}). 
The additional information at the next time step $\\bm{c}_{t+1}$ is also integrated with the updated $\\bm{h}_t$:\n\\vspace{-0.1cm}\n\\begin{equation}\n\t\\bm{h}_t^c = \\theta^{out}(\\bm{h}_t, \\bm{c}_{t+1}),\n\t\\vspace{-0.1cm}\n\\end{equation}\nwhere $\\theta^{out}$ is the output integration function.\n propose a novel HawkesKT model, which introduces the Hawkes process to adaptively model temporal cross-effects. The Hawkes process performs well at modeling sequential events localized in time, as it controls corresponding temporal trends by the intensity function. The intensity function in HawkesKT is designed to characterize the accumulative effects of previous learning interactions, along with their evolutions over time. In HawkesKT, the temporal cross-effects and the ways in which they evolve between historical learning interactions combine to form a dynamic learning process.", "id": "2cb054ef-697a-4d85-a2b7-9c0e6282362f", "level": "subsection", "origin_cites_number": 12, "parent_id": "a705e159-25ef-44ad-8166-1cef83a7c265", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Variants of Knowledge Tracing Models" ], [ "subsection", "Considering Forgetting after Learning" ] ], "subsections": [], "title": "Considering Forgetting after Learning" }, { "cite_extract_rate": 0.157894736842105, "cites": [ 1684, 5383, 5384 ], "content": "\\label{sec:side information}\nMost KT models primarily {rely on exercises and student responses} to evaluate students' knowledge states. These models have yielded impressive results and have been effectively implemented in online learning systems. Despite this, there are various other types of side information collected across the learning process that could be utilized to enhance {these models.} In this section, we will introduce several variants that aim to leverage this diverse side information across learning. 
\nIn terms of a student's first response time, a short initial response time could indicate either high proficiency or 'gaming' behavior, while a long initial response time could indicate either careful thinking or lack of concentration. Since the connection between initial response time and knowledge state could be influenced by {complex factors}, propose to discretize the continuous first response time into four categories (i.e., extremely short, short, long, extremely long) {to eliminate unnecessary information and simplify the latent complex possibilities. They then build a one-by-four parameter table, in which each column represents the category of the initial response time of the previous exercise, while the relevant values represent the probability of correct answers}. \nRegarding tutor intervention, propose the Bayesian evaluation and assessment model, which simultaneously assesses students' knowledge states and evaluates the lasting impact of tutor intervention. More specifically, it adds one observable binary intervention node to BKT: \\emph{True} means that the tutor intervention occurs in corresponding interactions while \\emph{False} indicates the opposite. The connection between the intervention node and knowledge node indicates the potential impact of the tutor intervention on students' knowledge states. The intervention node is linked to all four BKT parameters. As a result, there are a total of eight parameters to learn in order to incorporate tutor intervention. One possible way to reduce the number of parameters is choosing to link only the intervention node to the learning rate parameter .\nSimilarly, develop the intervention-Bayesian knowledge tracing (Intervention-BKT) model, {which incorporates two types of interventions into BKT and distinguishes their different effects: \\emph{elicit and tell}}. 
The relations between the intervention and performance nodes represent the impact of teaching interventions on student performance, while the relations between the intervention and knowledge nodes represent the impact of teaching interventions on students' knowledge states. Therefore, at each learning interaction, while the present knowledge state is conditional on both the previous knowledge state and the current intervention, the student's performance depends on both the present knowledge state and the current intervention.\nRather than considering only one kind of side information, propose a feature-aware student knowledge tracing (FAST) model, which allows for the utilization of all kinds of side information. Traditional BKT uses conditional probability tables for the guessing, slipping, transition and learning probabilities, meaning that the number of features involved in inference grows exponentially. Therefore, as the number of features increases, the time and space complexity of the model also grow exponentially. To deal with this large number of features, FAST uses logistic regression parameters rather than conditional probability tables. The number of features and complexity increase linearly rather than exponentially. \nFor parameter learning, FAST uses the Expectation Maximization with Features algorithm and focuses on only emission features. The E step uses the current parameter estimates $\\lambda$ to infer the probability of the student having mastered the KC at each learning interaction.\nThe parameters $\\lambda$ are now a function of the weight $\\beta$ and the feature vector $\\bm{f}(t)$. $\\bm{f}$ is the feature extraction function, and $\\bm{f}(t)$ is the feature vector constructed from the observations at the relevant time step. 
The emission probability is represented with a logistic function:\n\vspace{-0.1cm}\n\begin{equation}\n\t\begin{aligned}\n\t\t\lambda(\beta)^{y^{'},k^{'}} & = \frac{1}{1 + \exp(-\beta^T \cdot \bm{f}(t))},\n\t\end{aligned}\n\t\vspace{-0.1cm}\n\end{equation}\nwhere $\beta$ is learned by training a weighted regularized logistic regression using a gradient-based search algorithm.\n propose an extension to DKT that explores the inclusion of additional features. Specifically, it incorporates an auto-encoder network layer to convert the higher-dimensional input data into smaller representative feature vectors, thereby reducing both the resource and time requirements for training. Students' response time, opportunity count, and first action are selected as incorporated side information, and all input features are converted into a fixed-length input vector. First, all input features are converted into categorical data and represented as a sparse vector by means of one-hot encodings. These encoded features are concatenated together to construct the higher-dimensional input vector:\n\vspace{-0.2cm}\n\begin{equation}\n\t\begin{aligned}\n\t\tC(e_t,a_t) &= e_t + (\max(e) +1)a_t, \\\n\t\tv_t &= O(C(e_t,a_t)) \oplus O(C(t_t,a_t)) \oplus O(t_t), \\\n\t\tv_t^{'} &= \tanh(W_vv_t + b_v),\n\t\end{aligned}\n\t\vspace{-0.2cm}\n\end{equation}\nwhere $C$ is the cross feature, $O$ is the one-hot encoder format, $v_t$ represents the resulting input vector of each learning interaction, $e_t$ is the exercise, $a_t$ refers to the answer, $t_t$ is the response time, $W_v$ is the weight parameter and $b_v$ is the bias term. Subsequently, an auto-encoder is introduced to reduce the dimensionality without losing too much important information. 
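The cross-feature construction $C(e_t,a_t)=e_t+(\max(e)+1)a_t$ above can be sketched as follows; the exercise-vocabulary size and the helper names are illustrative, not from the original work:

```python
import numpy as np

num_exercises = 5  # assumed exercise-vocabulary size, so max(e) = 4

def cross_feature(e, a, max_e=num_exercises - 1):
    """C(e, a) = e + (max(e) + 1) * a maps an (exercise, answer) pair to a
    single categorical id: wrong answers occupy ids 0..max_e, while correct
    answers occupy ids max_e+1..2*max_e+1."""
    return e + (max_e + 1) * a

def one_hot(idx, size):
    v = np.zeros(size)
    v[idx] = 1.0
    return v

# Exercise 3 answered correctly (a = 1) -> id 3 + 5 * 1 = 8.
c = cross_feature(3, 1)
v = one_hot(c, 2 * num_exercises)  # concatenable with other encoded features
print(c, int(v.sum()))
```

Other features such as the discretized response time would be encoded the same way and concatenated with $O(C(e_t,a_t))$ before being compressed by the auto-encoder.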
Finally, the feature vectors extracted by auto-encoder will be the new input of DKT.\n presented another extension to DKT, namely the Exercise-aware Knowledge Tracing (EKT), which utilized the potential value of exercises' text contents. Generally, the text content is of great significance for students to understand the exercises. For example, used text materials to automatically predict their difficulties, utilized the text content to find similar exercises. further proposed a pre-training model called QuesNet for learning the unified representations of heterogeneous exercises. \nTherefore, instead of using one-hot encoding of exercises, EKT automatically learns the semantic representation of each exercise from its text contents. EKT first uses $Word2vec$ to pre-train the embedding vector for each word in the exercise. It then constructs a bidirectional LSTM, which captures the word sequence from both forward and backward directions to learn the semantic word representation. The element-wise max-pooling operation is utilized to merge words’ contextual representations into a global embedding. Finally, EKT can update the student's knowledge state with the aid of the semantic representation of each exercise. \nTo achieve more feasible integration of side information, present a deep knowledge tracing method with decision trees (DKT-DT), which takes advantage of Classification And Regression Trees (CART) to preprocess the heterogeneous input features . Specifically, CART is utilized to automatically partition the feature space and outputs whether or not a student can answer an exercise correctly. {The predicted response and the true response are encoded into a four-bit binary code; for example, the code is $1010$ if the predicted response and the true response are both correct. 
This binary code is then concatenated with the original one-hot encoding of the exercise as the new input of DKT to train the corresponding model.}\n suggests that the student's language proficiency can serve as supplementary information to improve existing KT models. The student's language proficiency is extracted from Elo rating scores and time-window features. The language proficiency information is then demonstrated to be effective in improving several KT models, including DKT, DKVMN, and SAKT. Additionally, the cold-start problem in the KT task is alleviated with the assistance of language proficiency information. \n explore adding side information to the original KT model through auxiliary learning tasks. They specifically introduce two tasks: (1) predicting the KCs of the question, and (2) predicting the individualized prior knowledge. By training with these tasks, KT can enhance its understanding of the intrinsic relationships between questions and KCs, while explicitly capturing student-level variability. \n{When solving programming problems, we can record students' full code submissions, which can be employed to analyze their programming ability. collected student programming data and analyzed their personalized programming preferences. The study commenced with a statistical analysis of student errors, followed by an examination of students' programming structures derived from their code submissions, and finally used BKT to measure students' programming abilities.\n transformed students' code submissions into embedded vectors, and applied them in DKT to model students' fine-grained programming knowledge states.\n noticed that a single programming problem generally involves multiple KCs, and therefore proposed to learn useful information about the programming problem's multiple requirements from students' code submissions. 
}", "id": "60b4234d-a28f-4026-ab3b-0291b62ceca6", "level": "subsection", "origin_cites_number": 19, "parent_id": "a705e159-25ef-44ad-8166-1cef83a7c265", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Variants of Knowledge Tracing Models" ], [ "subsection", "Utilizing Side Information across Learning" ] ], "subsections": [], "title": "Utilizing Side Information across Learning" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:applications}\nAlthough knowledge tracing is an emerging research area, it has already been applied in a wide variety of scenarios. {In the following, we will first survey the applications of KT models in two typical educational scenarios: learning resources recommendation and adaptive learning. Then, we will discuss broader applications of KT beyond student learning.}", "id": "5fc15152-8ae0-4a7f-83a2-d64f287123fe", "level": "section", "origin_cites_number": 0, "parent_id": "4e5af7af-552b-4c7f-939b-54b6b4dadeb9", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Applications" ] ], "subsections": [ "260a9040-27f0-4099-a17f-ea39955bc0eb", "f6f4178c-41e5-4941-b347-f40654197f09", "c5ab0210-1d4b-4488-91f4-cb2e837345b7" ], "title": "Applications" }, { "cite_extract_rate": 0.2, "cites": [ 1206 ], "content": "Traditionally, learning resources for each student are selected in one of two ways. The first one requires teachers to manually select suitable resources that match students' knowledge levels. However, this approach requires substantial time and effort, and different teachers may have different preferences. The second one allows students themselves to freely choose resources to learn. However, this may result in students choosing too easy or too difficult materials that will not benefit their learning , leading to low learning efficiency. 
In recent years, the prevalence of intelligent tutoring systems and the development of KT methods have made it possible to automatically recommend appropriate exercises to each student based on artificially designed algorithms.\nExercises are the most common type of learning resource. Given the inferred knowledge states, one common strategy is selecting the next exercise that will best advance students' knowledge acquisition. propose two extensions of the original BKT model, which respectively consider exercises' difficulties and students' multiple-attempt behaviors. These two extensions are integrated into a BKT-sequence algorithm to recommend exercises to students based on their knowledge states. Specifically, BKT-sequence first determines the predicted range of scores for each exercise. It then computes an expected score for each exercise that the student should get to achieve mastery, which depends on their current knowledge state (for instance, a lower knowledge state will result in higher expected scores). Finally, the algorithm returns the exercise whose predicted score is closest to the expected score. Therefore, as the knowledge state of a particular KC grows, more difficult exercises will be recommended, as harder exercises are associated with a lower predicted score. Experimental results have shown that students using the BKT-sequence algorithm were able to solve more difficult exercises, obtained higher performance and spent more time in the system than students who used the traditional approach. Moreover, students also expressed that the BKT-sequence algorithm was more efficient. expanded the DKVMN model to include the exercise's type and difficulty. This model is then used to assess students' knowledge state and subsequently recommends personalized exercises for each student in Small Private Online Courses (SPOCs). 
They conducted a randomized controlled trial to show the proposed personalized exercise recommendation could enhance students' learning efficiency.\nIn addition to exercises, there are also some other types of multi-modal learning resources, such as videos and figures. utilizes an adaptation of BKT to improve student performance prediction by incorporating video observation. Experimental verification demonstrates the impact of both using and eschewing video data, as well as the learning rate associated with a particular video. In this way, they further developed a method to help people evaluate the quality of video resources. Concretely, they proposed the Template 1 Video model to incorporate video observations into BKT, which adds video activity as additional independent observation nodes to the BKT model. This model accordingly considers the probability that a given video resource will impart knowledge to a student. Moreover, the transition probability in BKT is conditional only on the presence of either a video or an exercise. Thus, the quality of the video can be determined by its promotion of learning, and this model can be leveraged as a tool to aid in evaluating and recommending video resources.\nWhen recommending learning resources, the primary aim of existing solutions is to choose a simple strategy for assigning non-mastered exercises to students. \nWhile reasonable, it is also too broad to advance learning effectively. accordingly propose three more beneficial and specific objectives: \\emph{review and explore}, \\emph{smoothness of difficulty level} and \\emph{student engagement}. 
In more detail, \\emph{review and explore} considers both enhancing students' non-mastered concepts with timely reviews and reserving certain opportunities to explore new knowledge; \\emph{smoothness of difficulty level} indicates that the difficulty levels of several continuous exercises should vary within a small range as students gradually learn new knowledge; finally, \\emph{student engagement} considers that to promote students' enthusiasm during learning, the recommended exercises should be in line with their preferences. In order to support online intelligent education with the above three domain-specific objectives,\nthey developed a more reasonable multi-objective deep reinforcement learning (DRE) framework. DRE presented three corresponding novel reward functions to capture and quantify the effects of the above three objectives. This DRE framework is a unified platform designed to optimize multiple learning objectives, where more reasonable objectives also can be incorporated if necessary.\nExperimental results show that DRE can effectively learn from the students' learning records to optimize multiple objectives and adaptively recommend suitable exercises.", "id": "260a9040-27f0-4099-a17f-ea39955bc0eb", "level": "subsection", "origin_cites_number": 5, "parent_id": "5fc15152-8ae0-4a7f-83a2-d64f287123fe", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Applications" ], [ "subsection", "Learning Resources Recommendation" ] ], "subsections": [], "title": "Learning Resources Recommendation" }, { "cite_extract_rate": 0.25, "cites": [ 5373 ], "content": "{Adaptive learning, unlike learning resource recommendations, goes beyond the mere provision of resources. It not only concentrates on the selection of appropriate learning materials but also designs effective learning strategies and dynamic learning pathways. 
These are structured based on both the learning rules and students' evolving knowledge states.\nSpecifically, adaptive learning broadly refers to \emph{"a learning process in which the content taught, or the way such content is presented, changes or 'adapts' based on individual student responses, and which dynamically adjusts the level or types of instruction based on individual student abilities or preferences"} . }\nOne of the first attempts to apply KT to adaptive learning was the ACT Programming Tutor (APT) , where students were asked to write short programs and BKT was utilized to estimate their evolving knowledge state. This tutor can present an individualized sequence of exercises to each student based on their estimated knowledge states until the student has "mastered" each rule. \nIn recent years, Massive Open Online Courses (MOOCs) have become an emerging modality of learning, particularly in higher education. adapt BKT on the edX platform. The object of study was a 14-week online course that included weekly video lectures and corresponding lecture problems. BKT was applied to enhance students' learning on this course. In order to better adapt BKT to the learning platform, the original BKT was modified in several respects. First, due to the lack of labeled KCs, the problems would be directly seen as the KCs, while the questions would be seen as the exercises belonging to the KC.\nSecond, in order to capture the varying degrees of students' knowledge acquisition at each attempt, the modified model assigned different guess and slip parameters to different attempt counts. 
Third, to deal with the problem of multiple pathways in the system, which reflected that the impacts on learning may come from various resources, they framed the influence of resources on learning as a credit/blame inference problem.\nGenerally, students' cognitive structures include both students' knowledge level and the knowledge structure of learning items (e.g., \\emph{one-digit addition} is the prerequisite knowledge of \\emph{two-digit addition}). Therefore, adaptive learning should maintain consistency with both students' knowledge level and the latent knowledge structure. Nevertheless, existing methods for adaptive learning often focus separately on either the knowledge levels of students (i.e., with the help of specific KT models) or the knowledge structure of learning items. To fully exploit the cognitive structure for adaptive learning, propose a Cognitive Structure Enhanced framework for adaptive Learning (CSEAL). CSEAL conceptualized adaptive learning as a Markov Decision Process. It first utilized DKT to trace the evolving knowledge states of students at each learning step. Subsequently, the authors designed a navigation algorithm based on the knowledge structure to ensure that the learning paths in adaptive learning were logical and reasonable, which also reduced the search space in the decision process. Finally, CSEAL utilized the actor-critic algorithm to dynamically determine what should be learned next. 
In this way, CSEAL can sequentially identify the most suitable learning resources for different students.", "id": "f6f4178c-41e5-4941-b347-f40654197f09", "level": "subsection", "origin_cites_number": 4, "parent_id": "5fc15152-8ae0-4a7f-83a2-d64f287123fe", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Applications" ], [ "subsection", "Adaptive Learning" ] ], "subsections": [], "title": "Adaptive Learning" }, { "cite_extract_rate": 0.22222222222222202, "cites": [ 5385, 8912 ], "content": "{The above two types of applications are most commonly used for KT in student learning. In addition, the KT methods can be expanded to be utilized in any systems that necessitate continuous evaluation of user capabilities or states. We will introduce some broader applications of KT in this section.}\nIn gaming systems, the paradigm of tracing students' knowledge state can also work for player modeling. Here, player modeling, which is the study of computational models of players in games, aims to capture human players' characteristics and cognitive features . For instance, reveal that children engage in cycles of increasingly sophisticated mathematical thinking over the course of playing an online game. present an approach to trace player knowledge in a parallel programming educational game, which is capable of measuring the current players' real-time state across the different skills required to play an educational game based only on in-game player activities. conduct a classroom experiment comparing a commercial game for equation solving, i.e., \\emph{DragonBox}, with a research-based intelligent tutoring system, i.e., \\emph{Lynnette}. The results indicated that students who used \\emph{DragonBox} enjoyed the experience more, while students who used \\emph{Lynnette} performed significantly better on the test. 
Therefore, it is possible to enable students to learn effectively and happily by designing suitable educational games on the online learning platform.\n{In crowdsourcing, unlabeled data or specific tasks are assigned to various crowd annotators. Understanding the dynamic capabilities of these annotators is crucial in ensuring the reliability of their annotations and promoting annotation efficiency. developed a framework called KT4Crowd, which utilized KT methods to predict the performance of annotators and surpassed traditional rating systems. \nBesides, observed that students' participation in crowdsourcing tasks can enhance their learning. KT methods can also better comprehend students' knowledge states, aided by the students' annotated items. \n}\n develop an online citizen science project that employs machine learning techniques to improve the training of new volunteers using authentic tasks featuring uncertain outcomes, such as image classification. Specifically, they employ the BKT model to monitor the knowledge states of volunteers, enabling them to more efficiently complete assigned tasks and contribute meaningfully to the project. \n develop an automated exercise collection approach for teachers, employing the KT model and reinforcement learning. Specifically, the exercise collection needs to be well-designed to align with students' abilities. This study first leverages the KT model to forecast students' performance on unseen exercise candidates. Subsequently, the exercise selector is designed based on the KT model's predictions, ensuring that the exercise collection is both appropriate and optimized. Similarly, design a reinforcement learning guided\nmethod for exam paper generation, where the DKT model is utilized to measure examinees' knowledge states. \n\iffalse\n\begin{table*}[t]\n\t\vspace{-0.2cm}\n\t\renewcommand\arraystretch{1.0}\n\t\caption{A Summary of Public Datasets for Knowledge Tracing. 
}\n\t\\vspace{-0.3cm}\n\t\\centering\n\t\\begin{tabular}{l|l}\n\t\t\\hline\n\t\tDatasets & Link\\\\\n\t\t\\hline\n\t\tASSISTments2009 & https://sites.google.com/site/assistmentsdata/home/assistment-2009-2010-data\\\\\n\t\tASSISTments2012 & https://sites.google.com/site/assistmentsdata/home/2012-13-school-data-with-affect\\\\\n\t\tASSISTments2015 & https://sites.google.com/site/assistmentsdata/home/2015-assistments-skill-builder-data \\\\\n\t\tASSISTments2017 & https://sites.google.com/view/assistmentsdatamining/dataset \\\\\n\t\tKDD-Cup 2010 & https://pslcdatashop.web.cmu.edu/KDDCup/rules\\_data\\_format.jsp \\\\\n\t\tStatics 2011 & https://pslcdatashop.web.cmu.edu/DatasetInfo?datasetId=507 \\\\\n\t\tslepemapy.cz & https://www.fi.muni.cz/adaptivelearning/?a=data\\\\\n\t\tsynthetic & https://github.com/chrispiech/DeepKnowledgeTracing/tree/master/data/synthetic\\\\\n\t\tJunyi & https://pslcdatashop.web.cmu.edu/DatasetInfo?datasetId=1198 \\\\\n\t\tEdNet & http://ednet-leaderboard.s3-website-ap-northeast-1.amazonaws.com/\\\\\n\t\tpoj & http://poj.org/ \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\label{tab3}\n\t\\vspace{-0.6cm}\n\\end{table*}\n\\fi", "id": "c5ab0210-1d4b-4488-91f4-cb2e837345b7", "level": "subsection", "origin_cites_number": 9, "parent_id": "5fc15152-8ae0-4a7f-83a2-d64f287123fe", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Applications" ], [ "subsection", "Broader Applications" ] ], "subsections": [], "title": "Broader Applications" }, { "cite_extract_rate": 0.14285714285714202, "cites": [ 5386 ], "content": "\\label{sec:dataset}\nAfter introducing above KT models and variants, to better help researchers and practitioners who want to further conduct related work and promote the application of KT, we have open sourced two algorithm libraries, i.e., EduData that for downloading and preprocessing most existing KT-related datasets, and EduKTM that includes extensible and unified 
implementations of existing popular KT models. In the following, we will give detailed introduction of these two algorithm libraries.\n\\begin{table*}[t]\n\t\\vspace{-0.6cm}\n\t\\renewcommand\\arraystretch{1.2}\n\t\\caption{{Basic information and statistics of existing datasets available for evaluating KT models. }}\n\t\\vspace{-0.2cm}\n\t\\centering\n\t\\resizebox{\\textwidth}{!}\n\t{\n\t\t\\begin{tabular}{l|c|c|c|c|c|r|r|r|r}\n\t\t\t\\hline\n\t\t\t\\multirow{3}*{Datasets}&\\multirow{3}{*}{Subjects} &\\multirow{3}{*}{Learning Stages} & \\multirow{3}*{\\makecell[c]{Side \\\\ Information}}& \\multicolumn{2}{c|}{Sources}&\\multicolumn{4}{c}{Statistics} \\\\\n\t\t\t\\cline{5-10} \n\t\t\t& & & & \\makecell[c]{Online \\\\ platforms} & \\makecell[c]{Educational \\\\ challenges} & \\makecell[c]{\\# of \\\\ Students} & \\makecell[c]{\\# of \\\\ Exercises} & \\makecell[c]{\\# of \\\\ KCs} & \\makecell[c]{\\# of \\\\ Learning records}\\\\\n\t\t\t\\hline\n\t\t\tASSISTments2009 & mathematics & middle school & Yes & \\Checkmark & & 4,163 & 17,751& 123 & 346,860 \\\\\n\t\t\t\\hline\n\t\t\tASSISTments2012 & mathematics & middle school & Yes & \\Checkmark & & 46,674 & 179,999& 265 &6,123,270 \\\\\n\t\t\t\\hline\n\t\t\tASSISTments2015 & mathematics & middle school & No & \\Checkmark & & 19,917 & /& 100 & 708,631 \\\\\n\t\t\t\\hline\n\t\t\tASSISTments2017& mathematics &\\makecell[c]{from middle school \\\\ to college} & Yes & & \\Checkmark & 1,709 &3,162& 102 & 942,816 \\\\\n\t\t\t\\hline\n\t\t\tJunyi & mathematics & \\makecell[c]{from primary \\\\ to high school} & Yes & \\Checkmark & & 247,606 & 722 & 41 & 25,925,922 \\\\\n\t\t\t\\hline\n\t\t\tEedi2020 & mathematics & \\makecell[c]{from primary \\\\ to high school} & Yes & & \\Checkmark & 118,971 & 27,613 & 388 & 15,867,850 \\\\\n\t\t\t\\hline\n\t\t\tStatics2011 &engineering &university &Yes & \\Checkmark & & 335 & 1,224 & 80 & 361,092 \\\\\n\t\t\t\\hline\n\t\t\tEdNet-KT1 & english & / & Yes & \\Checkmark & & 784,309 & 13,169 & 
188 & 95,293,926 \\\\\n\t\t\t\\hline\n\t\t\tEdNet-KT2 & english &/ & Yes & \\Checkmark & & 297,444 & 13,169 & 188 & 56,360,602 \\\\\n\t\t\t\\hline\n\t\t\tEdNet-KT3 & english &/ & Yes & \\Checkmark & & 297,915 & 13,169 & 293 & 89,270,654 \\\\\n\t\t\t\\hline\n\t\t\tEdNet-KT4 & english &/& Yes& \\Checkmark & & 297,915 & 13,169 & 293 & 131,441,538 \\\\\n\t\t\t\\hline\n\t\t\tCodeWorkout & programming & university & Yes & & \\Checkmark & 819 & 50 & 50 & \\textgreater 130,000 \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t}\n\t\\label{data}\n\t\\vspace{-0.6cm}\n\\end{table*}", "id": "34caa8b6-1d19-4064-837a-9dcb1549a45d", "level": "section", "origin_cites_number": 7, "parent_id": "4e5af7af-552b-4c7f-939b-54b6b4dadeb9", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Datasets and Baselines" ] ], "subsections": [ "8d0ee175-d87a-4c6c-bbdc-2113cd06121b", "66f9b876-576e-4b0f-af2b-91235cb5ec11" ], "title": "Datasets and Baselines" }, { "cite_extract_rate": 0, "cites": [], "content": "As we have mentioned, KT emerges from the development of online education, where a large number of students' learning data is collected for analyzing their learning behaviors and knowledge states. In this section, we mainly introduce existing public datasets available for evaluating KT models. Table \\ref{data} lists all datasets, as well as their basic information and statistics. 
In our released EduData, we provide the service of downloading and preprocessing all these datasets, which conveniently helps beginners analyze and utilize them quickly.\nIn summary, these datasets were collected in different learning scenarios and thus exhibit distinct differences in data scale, subject, and so on, reflecting the complexity of practical application scenarios for KT models.", "id": "8d0ee175-d87a-4c6c-bbdc-2113cd06121b", "level": "subsection", "origin_cites_number": 0, "parent_id": "34caa8b6-1d19-4064-837a-9dcb1549a45d", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Datasets and Baselines" ], [ "subsection", "Datasets" ] ], "subsections": [ "3d73fd7b-d881-422a-a516-5cd421232c1c", "efcd8f6a-3777-4dd9-8b4a-f9125974da7d", "db5459a4-5149-42f0-85da-a9bc2ed105e4", "266d224a-ec55-40cb-a11c-e27be3a867bf", "fd615d1a-1d1e-4b41-9cdf-9cd4873ba07b", "6cb5a8b7-c682-457d-85c6-8ed300213a0a" ], "title": "Datasets" }, { "cite_extract_rate": 0, "cites": [], "content": "ASSISTments , created in 2004, is an online tutoring system in the United States, which provides students with both assessment information and tutoring assistance. While working on ASSISTments, students are provided with instructional assistance that helps them solve a problem in several substeps when they give wrong answers. After obtaining correct answers, they are given a new exercise. Meanwhile, the system learns about the students' knowledge states and predicts how they will do in future tests. 
Up to now, the organizers have released four publicly available datasets from ASSISTments, which are respectively ASSISTments2009, ASSISTments2012\footnote{https://sites.google.com/site/assistmentsdata/datasets/2012-13-school-data-with-affect}, ASSISTments2015\footnote{https://sites.google.com/site/assistmentsdata/datasets/2015-assistments-skill-builder-data}, and ASSISTments2017\footnote{https://sites.google.com/view/assistmentsdatamining/dataset}. The ASSISTments datasets have had a profound impact on the KT research community, which can be seen in many related papers.\nMost of these datasets were collected from mathematics in middle school; their details are listed as follows.\n\begin{itemize}[leftmargin=*]\n\t\item{\textbf{ASSISTments2009.}}\n\tThe full name of ASSISTments2009 is the ASSISTments 2009-2010 skill builder data set , which was collected from ASSISTments’ skill builder problem sets during the school year from 2009 to 2010. Students were asked to work on exercises with similar knowledge concepts until they answered correctly three or more times in a row. This dataset contains much valuable side information, such as \textit{attempt count}, which represents the number of attempts of the student, \textit{ms first response}, which represents the time in milliseconds for the student’s first response, and \textit{opportunity}, which represents the number of opportunities the student has to practice. It is worth noting that the original version of ASSISTments2009 has three serious problems that lead to unreliable experimental results : (1) a large number of duplicated records, (2) treating scaffolding problems as the same as main problems, and (3) repeated response sequences with different KCs. The latest version of ASSISTments2009 has fixed these problems.\n\t\item{\textbf{ASSISTments2012.}}\n\tASSISTments2012 was collected from the ASSISTments system in the school year from 2012 to 2013, with affect predictions. 
Compared with ASSISTments2009, ASSISTments2012 has many more students, exercises, and learning records. However, many learning records in this dataset are missing the related KCs. After filtering these records, there are only 29,018 students, 53,091 exercises, and 2,711,813 learning records.\n\tBesides, this dataset contains more side information, such as \textit{start time}, \textit{end time}, \textit{problem type}, and \textit{average confidence}. \n\tIt is worth noting that you can access the text of the exercises in ASSISTments2012 for more fine-grained research. Specifically, you can email nth@wpi.edu and cc td@wpi.edu, explaining your purpose and promising not to share it with anyone else.\n\t\item{\textbf{ASSISTments2015.}}\n\tASSISTments2015 covers students' response records on ASSISTments in the 2015 school year. This dataset only contains students' learning records on 100 KCs; there is no information about the exercises or any other side information. \n\t\item{\textbf{ASSISTments2017.}}\n\tASSISTments2017 is the dataset used in the 2017 ASSISTments Longitudinal Data Mining Competition. This dataset collected data over a decade, tracking students' interactions on the ASSISTments learning platform from middle school study in 2004-2007 through their high school course-taking, until they graduated from college. Therefore, the average learning sequence of students in ASSISTments2017 is much longer than in other datasets, exceeding a length of 1,000. 
ASSISTments2017 also has rich side information, including but not limited to \textit{AveKnow}, which indicates students' average knowledge level based on BKT, \textit{timeTaken}, which represents the time spent on the current exercise, and \textit{frIsHelpRequest}, which represents whether the first response is a help request.\n\end{itemize}", "id": "3d73fd7b-d881-422a-a516-5cd421232c1c", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "8d0ee175-d87a-4c6c-bbdc-2113cd06121b", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Datasets and Baselines" ], [ "subsection", "Datasets" ], [ "subsubsection", "ASSISTments Datasets" ] ], "subsections": [], "title": "ASSISTments Datasets" }, { "cite_extract_rate": 0, "cites": [], "content": "The Junyi dataset contains the problem log and exercise-related information on the Junyi Academy\footnote{http://www.junyiacademy.org/}, a Chinese e-learning website established in 2012 on the basis of the open-source code released by Khan Academy. In contrast to the ASSISTments datasets, Junyi has fewer exercises and KCs, but includes an exercise hierarchy labeled by experts; annotations of exercise relationships are also available. Therefore, many research works that focused on the knowledge structure in KT utilized this dataset . 
Junyi provides the prerequisite exercise of a specific exercise in the knowledge map, the topic and area of each exercise, as well as the coordinate position in the knowledge map.", "id": "efcd8f6a-3777-4dd9-8b4a-f9125974da7d", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "8d0ee175-d87a-4c6c-bbdc-2113cd06121b", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Datasets and Baselines" ], [ "subsection", "Datasets" ], [ "subsubsection", "Junyi Dataset" ] ], "subsections": [], "title": "Junyi Dataset" }, { "cite_extract_rate": 0.5, "cites": [ 8913 ], "content": "The Eedi2020 dataset was also released in an academic challenge, i.e., the NeurIPS 2020 Education Challenge\footnote{https://eedi.com/projects/neurips-education-challenge}. This dataset contains students’ answers to mathematics questions from Eedi, an online educational platform that millions of students around the globe interact with daily, covering the 2018 to 2020 school years. All exercises are multiple-choice problems with 4 possible answer choices, exactly one of which is correct. In Table \ref{data}, we give the statistics based on the training data in this competition; the total number of learning records in the full dataset exceeds 17 million. It is worth noting that Eedi2020 gives students' exact answer choices so that we can also predict students' options . Moreover, for the students, Eedi2020 records much valuable context information, including \textit{Gender}, \textit{DateOfBirth}, and \textit{PremiumPupil}. 
For the learning records, Eedi2020 also presents their \textit{Confidence}, \textit{GroupId}, \textit{QuizId}, and \textit{SchemeOfWorkId}.", "id": "db5459a4-5149-42f0-85da-a9bc2ed105e4", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "8d0ee175-d87a-4c6c-bbdc-2113cd06121b", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Datasets and Baselines" ], [ "subsection", "Datasets" ], [ "subsubsection", "Eedi2020 Dataset" ] ], "subsections": [], "title": "Eedi2020 Dataset" }, { "cite_extract_rate": 0, "cites": [], "content": "Different from the above datasets that focus on mathematics exercises, the Statics2011 dataset was obtained from a college-level engineering statics course via an online educational system developed by Carnegie Mellon University\footnote{https://pslcdatashop.web.cmu.edu/DatasetInfo?datasetId=507}. {The problems in college engineering courses are quite complex, often comprising numerous independent sub-steps. Consequently, we treat each sub-problem as an exercise and calculate the number of exercises and KCs within this dataset.}", "id": "266d224a-ec55-40cb-a11c-e27be3a867bf", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "8d0ee175-d87a-4c6c-bbdc-2113cd06121b", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Datasets and Baselines" ], [ "subsection", "Datasets" ], [ "subsubsection", "Statics2011 Dataset" ] ], "subsections": [], "title": "Statics2011 Dataset" }, { "cite_extract_rate": 1, "cites": [ 5386 ], "content": "The EdNet dataset is related to the English subject and consists of students' learning records in the multi-platform AI tutoring system Santa in South Korea\footnote{https://github.com/riiid/ednet}. 
EdNet collected students' learning data over two years during their preparation for the TOEIC (Test of English for International Communication) Listening and Reading Test. EdNet is now the largest public dataset in the KT field, with a total of 131,441,538 learning records from 784,309 students. Besides, it contains various features of students' learning actions, such as the specific learning material they have interacted with and how much time they have spent answering a given exercise. There are four different versions of EdNet, respectively named EdNet-KT1, EdNet-KT2, EdNet-KT3, and EdNet-KT4, with different levels of detail. {We note that the students in this dataset may be in different learning stages. Therefore we do not give the learning stage information in Table \ref{data}.}\n\begin{itemize}[leftmargin=*]\n\t\item{\textbf{EdNet-KT1.}}\n\tEdNet-KT1 contains students' basic exercise-answering logs. This dataset has 784,309 students, 13,169 exercises, 188 KCs, and a total of 95,293,926 learning records.\n\tExercises in EdNet-KT1 are organized by bundles, i.e., collections of exercises sharing a common passage, picture or listening material. Therefore, exercises come up in bundles and students have to answer all contained exercises when a bundle is given.\n\t\item{\textbf{EdNet-KT2.}}\n\tEdNet-KT2 recorded students' action sequences, which indicate their full learning behaviors. For example, a student who is not confident about the answer may alternately select among several answer choices before submitting. Such learning behaviors can reflect students' knowledge states at a finer granularity. EdNet-KT2 contains three kinds of actions: \textit{enter} when the student first receives and views a bundle, \textit{respond} when the student selects an answer choice to the exercise, and \textit{submit} when the student submits their final answers to the given bundle. 
It is worth noting that EdNet-KT2 is a subset of EdNet-KT1.\n\t\item{\textbf{EdNet-KT3.}}\n\tOn the basis of EdNet-KT2, EdNet-KT3 collected more of students' learning activities, such as reading explanations or watching lectures. These learning activities have potential impacts on students' knowledge states, so they are valuable to analyze.\n\t\item{\textbf{EdNet-KT4.}}\n\tIn EdNet-KT4, the very fine details of actions were provided. In particular, the following types of actions are added to EdNet-KT3: erase choice, undo erase choice, play audio, pause audio, play video, pause video, pay, refund, and enroll coupon.\n\end{itemize}", "id": "fd615d1a-1d1e-4b41-9cdf-9cd4873ba07b", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "8d0ee175-d87a-4c6c-bbdc-2113cd06121b", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Datasets and Baselines" ], [ "subsection", "Datasets" ], [ "subsubsection", "EdNet Dataset" ] ], "subsections": [], "title": "EdNet Dataset" }, { "cite_extract_rate": 0, "cites": [], "content": "The CodeWorkout dataset was utilized in the 2nd Computer Science Educational Data Mining Challenge (CSEDM)\footnote{https://sites.google.com/ncsu.edu/csedm-dc-2021/home}. This dataset was collected from a CS1 course in the Spring and Fall 2019 semesters at a public university in the United States. It contains the code submissions from students for 50 coding problems, each requiring approximately 10-26 lines of code. {In this dataset, each exercise has a unique KC, so the number of KCs is equal to the number of exercises.} In total, there are 329 and 490 students in the Spring and Fall semesters who completed the course. Each dataset contains more than 65,000 code submissions; the scores of the submissions (\% of unit tests passed) are also available, as well as the compiler message if the compilation is not successful. 
The final grades of students are also provided for this dataset.", "id": "6cb5a8b7-c682-457d-85c6-8ed300213a0a", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "8d0ee175-d87a-4c6c-bbdc-2113cd06121b", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Datasets and Baselines" ], [ "subsection", "Datasets" ], [ "subsubsection", "CodeWorkout Dataset" ] ], "subsections": [], "title": "CodeWorkout Dataset" }, { "cite_extract_rate": 0, "cites": [], "content": "The implementations of existing KT methods are not standardized, as they may use different programming languages (e.g., Python, Lua) and different deep learning frameworks (e.g., TensorFlow, Torch). Furthermore, some works did not organize their code systematically (e.g., missing running environments and dependencies), which brings difficulties in reproducing the models. To this end, we put forward an algorithm library for KT baselines, named EduKTM, which currently contains the popular existing works. EduKTM will always be under development to include the latest KT models; more algorithms and features are going to be added. Besides, we provide detailed guidelines for everyone who is interested in contributing to EduKTM.\nIt is worth noting that we do not provide a performance evaluation of these baselines on the aforementioned benchmark datasets. The reasons are twofold: (1) For each baseline method, the experimental settings are quite different, as they were designed to handle various learning scenarios. No single KT model can always be the best one under various learning contexts. It is not fair to compare all baselines under a fixed setting.\n(2) Existing evaluation standards primarily concentrate on the student performance prediction task, which cannot directly reflect the effectiveness of various KT models in practical applications.\nTherefore, we decided to open source EduData and EduKTM. 
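Once baselines from EduKTM have been trained on a common held-out split, comparing them typically reduces to computing AUC over their next-response predictions. The sketch below is purely illustrative: the predictions and model names are made up, and this is not part of EduKTM's actual API.

```python
# Illustrative comparison of two hypothetical KT baselines on the same
# held-out next-response labels, using rank-based AUC (the metric most
# KT papers report). Not EduKTM code.

def auc(labels, scores):
    # AUC = probability that a random positive is scored above a random
    # negative (ties count as 0.5).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("AUC needs both correct and incorrect responses")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up test labels (1 = correct response) and model predictions.
y_true  = [1, 0, 1, 1, 0]
model_a = [0.9, 0.2, 0.8, 0.6, 0.4]
model_b = [0.7, 0.6, 0.5, 0.9, 0.3]

scores = {name: auc(y_true, preds)
          for name, preds in [("model_a", model_a), ("model_b", model_b)]}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))  # model_a 1.0
```

Of course, such a single-split comparison only makes sense when both models were trained under the same setting; as noted above, no fixed setting is fair to every baseline.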
Researchers and practitioners can freely compare and select appropriate KT models based on their specific requirements in various application scenarios.", "id": "66f9b876-576e-4b0f-af2b-91235cb5ec11", "level": "subsection", "origin_cites_number": 0, "parent_id": "34caa8b6-1d19-4064-837a-9dcb1549a45d", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Datasets and Baselines" ], [ "subsection", "Baselines" ] ], "subsections": [], "title": "Baselines" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:future}\nThis survey has comprehensively reviewed the abundant current developments in the field of knowledge tracing, including fundamental KT models, their variants, and typical applications. Despite this, as knowledge tracing is a relatively new but promising research area, there are still a substantial number of problems that require urgent resolution. In this section, we discuss several potential future research directions.", "id": "18821b92-3ccc-4057-9352-24e5c0409dd6", "level": "section", "origin_cites_number": 0, "parent_id": "4e5af7af-552b-4c7f-939b-54b6b4dadeb9", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Future Research Directions" ] ], "subsections": [ "ee9a478b-7e5c-4c3d-a3b7-01ab255d95f7", "203ad4ee-8eb1-4e57-ac27-fd2974c1a08f", "1df76c0e-efc5-4366-b813-6204fb5c7b9f", "ed0bdca3-3fb6-4028-a4d7-cc6fb028657b", "01240c30-0ca2-4a6c-bf1e-ccabebf5cfe7", "c7707957-a296-4aea-b19d-29d5e30c34c6" ], "title": "Future Research Directions" }, { "cite_extract_rate": 0.4, "cites": [ 5374, 8914 ], "content": "The performance of KT models is now evaluated indirectly through the student performance prediction task. The more precisely a model predicts students' responses on future exercises, the better its performance is considered to be. 
Nonetheless, interpretability plays a significant role in education, as students often express more concern about the 'why' than the 'what' of a learning decision . Enhancing the interpretability of KT models is therefore crucial. Some educational theories, such as the Rasch model used in AKT and the transfer of knowledge used in SKT , could be considered for this purpose. noticed the significance of interpretable KT, particularly for deep learning models, as they have numerous parameters that are challenging to explain meaningfully. Thus, they introduced a straightforward and interpretable KT approach, based on the causal relationships within the latent features extracted from students' behavioral data. \n attempted to introduce causal inference for explanatory analysis on KT, and they achieved more stable and explainable knowledge tracing based on the analysis results. \nIt is imperative to further refine existing KT models or to explore additional methods for interpretable KT research. This will lead to the production of more accurate and interpretable evaluations of students' knowledge states.", "id": "ee9a478b-7e5c-4c3d-a3b7-01ab255d95f7", "level": "subsection", "origin_cites_number": 5, "parent_id": "18821b92-3ccc-4057-9352-24e5c0409dd6", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Future Research Directions" ], [ "subsection", "Knowledge Tracing with Interpretability" ] ], "subsections": [], "title": "Knowledge Tracing with Interpretability" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 5388, 5387, 8915 ], "content": "The acquisition of high-quality KT models necessarily requires a substantial amount of data to ensure training stability. However, online learning systems often collect fewer learning interactions in practical educational scenarios, thereby leading to the data sparsity problem. 
To address this issue, utilized a graph convolutional network to include exercise-KC correlations. \n turned to improve the attention mechanism.\nMore recently, researchers have employed contrastive learning to alleviate the data sparsity problem in KT . For example, presented a contrastive learning framework to enhance KT, which measures the semantic similarity between various learning interactions to effectively learn their representations. They further designed data augmentation methods to enhance the semantics of students' learning interactions. However, while the aforementioned methods alleviate the data sparsity problem in various aspects, there remains a need for further improvement in addressing this issue comprehensively.", "id": "203ad4ee-8eb1-4e57-ac27-fd2974c1a08f", "level": "subsection", "origin_cites_number": 5, "parent_id": "18821b92-3ccc-4057-9352-24e5c0409dd6", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Future Research Directions" ], [ "subsection", "Knowledge Tracing with Sparse Learning Interactions" ] ], "subsections": [], "title": "Knowledge Tracing with Sparse Learning Interactions" }, { "cite_extract_rate": 0, "cites": [], "content": "Most existing KT studies are capable of dealing with objective exercises, assuming student responses are binary. However, they overlook students' open-ended answers, often presented in natural language, on subjective exercises. As per the work by , an open-ended KT approach for computer science education was explored. This method aimed to predict students' precise solutions to programming questions, utilizing language models for assistance. 
\nRecent advancements in large language models offer promising potential to enhance the ability of KT models to comprehend the critical information regarding students' knowledge states embedded within their open-ended answers on subjective exercises .", "id": "1df76c0e-efc5-4366-b813-6204fb5c7b9f", "level": "subsection", "origin_cites_number": 2, "parent_id": "18821b92-3ccc-4057-9352-24e5c0409dd6", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Future Research Directions" ], [ "subsection", "Knowledge Tracing with Subjective Exercises" ] ], "subsections": [], "title": "Knowledge Tracing with Subjective Exercises" }, { "cite_extract_rate": 0, "cites": [], "content": "Learning records are passive representations of students' knowledge states. In contrast, students' feedback provides us with their proactive understanding of their knowledge states, thereby offering direct and authentic indicators of their learning situation. However, there are few KT models that take advantage of training data related to students' feedback, even though it can play an important role in fixing the KT results . have noted that feedback plays a positive role in learning, which may promote transfer and retention in learning from worked-out examples. 
Therefore, incorporating students' feedback presents a meaningful avenue that could lead to significant improvements.", "id": "ed0bdca3-3fb6-4028-a4d7-cc6fb028657b", "level": "subsection", "origin_cites_number": 2, "parent_id": "18821b92-3ccc-4057-9352-24e5c0409dd6", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Future Research Directions" ], [ "subsection", "Knowledge Tracing with Students' Feedback" ] ], "subsections": [], "title": "Knowledge Tracing with Students' Feedback" }, { "cite_extract_rate": 0, "cites": [], "content": "Generally, user modeling refers to tools for characterizing users' behaviors (e.g., frequent locations), personal information (e.g., age, gender, and occupation) and latent features (e.g., interests and abilities), which facilitate the provision of targeted services for different users . As a type of latent feature modeling, knowledge tracing diagnoses the proficiency of users (not only individuals, but also groups of individuals, like user teams and companies) on specific skills/concepts. Thus, in addition to education, knowledge tracing can be developed and applied across a wide range of domains for user modeling, including games, sports, and recruitment.", "id": "01240c30-0ca2-4a6c-bf1e-ccabebf5cfe7", "level": "subsection", "origin_cites_number": 1, "parent_id": "18821b92-3ccc-4057-9352-24e5c0409dd6", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Future Research Directions" ], [ "subsection", "Knowledge Tracing for General User Modeling" ] ], "subsections": [], "title": "Knowledge Tracing for General User Modeling" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 8461 ], "content": "{In recent years, the advancement of large language models (LLM), notably ChatGPT, has led to significant impacts and garnered considerable research interest worldwide . 
The use of LLM in education promises to revolutionize current learning patterns . Current studies mainly focus on generating learning materials, improving student-system interaction, and explaining educational content. However, it remains unclear how LLM can assist in understanding students' knowledge states. Given the powerful capabilities of LLM, it has the potential to enhance the generalization and interpretability of existing KT methods. Specifically, LLM can analyze the content of students' responses and evaluate their quality, which directly reflects students' knowledge states. Besides, LLM can play the role of the instructor, answering questions for students and identifying their strengths and weaknesses. Furthermore, LLM itself can serve as a KT model, which can output reasonable results about students' knowledge states given their previous learning interactions.\nSimultaneously, it is imperative to safeguard the privacy and security of student data when utilizing LLM . \n}", "id": "c7707957-a296-4aea-b19d-29d5e30c34c6", "level": "subsection", "origin_cites_number": 3, "parent_id": "18821b92-3ccc-4057-9352-24e5c0409dd6", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Future Research Directions" ], [ "subsection", "Knowledge Tracing with Large Language Models" ] ], "subsections": [], "title": "Knowledge Tracing with Large Language Models" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:conclution}\nIn this survey, we conducted a thorough review of knowledge tracing. Specifically, we first comprehensively reviewed existing fundamental KT models. Considering the complexity of online learning systems and the significant importance of KT research in practical scenarios, we investigated a broad range of variant KT research. Subsequently, we summarized some typical applications of KT in common educational scenarios. 
Furthermore, we released two algorithm libraries for KT-related datasets and baselines, thereby facilitating researchers and practitioners in selecting appropriate KT models based on their specific requirements. Finally, we outlined some potential future directions. \nWe hope that this comprehensive survey of knowledge tracing will assist readers in understanding the problem of modeling students' dynamic knowledge states. It serves as a fundamental framework for both researchers and practitioners in future studies, fostering the development of KT. The development of KT will directly benefit millions of students, with its impact potentially extending to a broader audience as online learning continues to evolve. In this context, KT will play an increasingly important role in enabling individuals to adapt to the ever-changing society . Furthermore, the development of more refined KT methods tailored to students in various subjects and age groups will enhance their ability to comprehend students' individual knowledge states. \nHowever, we recognize some limitations of the current survey. \nFirstly, as new KT methods continue to emerge, although we have conducted a thorough investigation, there may be some representative works that we have overlooked. If necessary, we will incorporate these into our proposed framework in the future. \nSecondly, the complexity of KT methods warrants attention, as it continues to grow, particularly in those based on deep learning. The complexity of the model may enhance its accuracy, but it could also compromise its applicability, as users will question the reliability of complex models. Despite the fact that some studies have attempted to reveal the interpretability of complex KT models, for instance, by utilizing the xAI technique, there remains a significant distance to traverse before achieving a completely transparent deep learning KT model. 
\nWe posit that the application of KT methods in online education presents a promising avenue for research. It significantly enhances the learning experience for students while simultaneously alleviating the burden on teachers. Despite the numerous challenges and obstacles, researchers are encouraged to overcome them and ensure reliable and equitable access to students' evolving knowledge states in learning. \n\\section*{Acknowledgment}\nThis research was supported by grants from the National Key Research and Development Program of China (Grant No. 2021YFF0901003), and the National Natural Science Foundation of China (Grant No. 62337001, U20A20229).\n\\bibliographystyle{IEEEtranN}\n\\begin{footnotesize}\n\t\\bibliography{cite}\n\\end{footnotesize}\n\\vspace{-0.6cm}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Photos/shen.pdf}}]\n\t{Shuanghong Shen} received the B.E. degree from Wuhan University, Wuhan, China, in 2018. He is currently working toward the Ph.D. degree in the School of Computer Science and Technology, University of Science and Technology of China. His main research interests include data mining, knowledge discovery, natural language processing and intelligent tutoring systems. He won the first prize in task 2 of the NeurIPS 2020 Education Challenge. He has published papers in refereed conference proceedings, such as KDD2023, AAAI2022, SIGIR2022.\n\\end{IEEEbiography}\n\\vspace{-0.6cm}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Photos/liu.pdf}}]\n\t{Qi Liu} (Member, IEEE) received the Ph.D. degree from University of Science and Technology of China (USTC), Hefei, China, in 2013. He is currently a Professor in the School of Computer Science and Technology at USTC. His general area of research is data mining and intelligent education. He has published prolifically in refereed journals and conference proceedings (e.g., TKDE, TOIS, KDD). 
He is an Associate Editor of IEEE TBD and Neurocomputing. He was the recipient of KDD’ 18 Best Student Paper Award and ICDM’ 11 Best Research Paper Award. He is a member of the Alibaba DAMO Academy Young Fellow. He was also the recipient of China Outstanding Youth Science Foundation in 2019.\n\\end{IEEEbiography}\n\\vspace{-0.6cm}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Photos/huang.pdf}}]\n\t{Zhenya Huang} (Member, IEEE) received the B.E. degree from Shandong University, in 2014 and the Ph.D. degree from USTC, in 2020. He is currently an associate researcher of the School of Computer Science and Technology, University of Science and Technology of China (USTC). His main research interests include data mining, knowledge discovery representation learning and intelligent tutoring systems. He has published more than 30 papers in refereed journals and conference proceedings including TKDE, TOIS, KDD, AAAI. He has served regularly in the program committees of a number of conferences, and is reviewers for the leading academic journals.\n\\end{IEEEbiography}\n\\vspace{-0.6cm}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Photos/zheng.png}}]\n\t{Yonghe Zheng} is the funding dean of Research Institute of Science Education in Beijing Normal University, Professor in Education Faculty of BNU, PhD Supervisor. Prof. 
Zheng is the Vice president of the 8th Council of China Association of Children's Science Instructors and is the member of Teaching Guidance Committee of the Basic Education, the member of comprehensive group of the compulsory education curriculum revision committee, the science curriculum standard revision group; the General Secretary of the Awarding Scheme Council for outstanding computer teachers in Universities; the Vice director of the academic committee of Chinese society of Natural Science Museum; and former Director General of the Science Policy Bureau of National Natural Science Foundation of China. Prof. Zheng has engaged in project management of basic research and S\\&T policy research for a long time. More than 60 articles have been published. And Zheng is the chief editor of Disciplinary and Interdisciplinary Science Education Research Journal. Prof. Zheng joined in Educational Faculty of Normal University in April, 2018, with specific research interests on science education and educational technology. Prof. Zheng established the Research Institute of Science Education of Beijing Normal University in November of 2019.\n\\end{IEEEbiography}\n\\vspace{-0.6cm}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Photos/yin.png}}]\n\t{Minghao Yin} (Member, IEEE) received the B.S. and M.S. degrees from Northeast Normal University, Changchun, China, in 2001 and 2004, respectively, and the Ph.D. degree from Jilin University, Changchun, China, in 2008, all in computer science. He is currently a professor, the dean of the School of Information Science and Information Technology at Northeast Normal University, and the director of the Key Laboratory of Intelligent Information Processing in Jilin Province Colleges and Universities. He also contributes as the Associate Editor for the IEEE Transactions on Learning Technologies (TLT). 
He has published multiple papers in prestigious journals and conferences, including Artificial Intelligence, IEEE Transactions, ACM Transactions, JAIR, AAAI, IJCAI, and WWW. His research interests include heuristic search, data mining, and combinatorial optimization.\n\\end{IEEEbiography}\n\\vspace{-0.6cm}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Photos/wang.png}}]\n\t{Minjuan Wang} (Member, IEEE) obtained her B.A. in Chinese literature from the Beijing/Peking University in 1995, M.A. in comparative literature from Penn State University in 1997, and Ph.D. in information science and learning technologies from University of Missouri-Columbia in 2001. She is\ta newly appointed Chair Professor of Emerging Technologies and Future Education at The Education University of Hong Kong. She has also been a professor and program chair of Learning design and technology at San Diego State University, California, USA since 2000. In addition, she serves as the Editor-in-Chief for the IEEE Transactions on Learning Technologies (TLT). Her research specialties are multidisciplinary, focusing on STEM education, new and emerging technologies in various educational settings, Metaverse and immersive learning, and\tthe design and implementation of artificial intelligence including AIGC for education and training. She is an internationally recognized scholar and has keynoted about 45 international conferences. She is recognized internationally for her research, publishing and dedicated service to IEEE and other scholarly communities. Dr. Wang is a member of IEEE and the IEEE Education Society. She also co-chairs the Education Society’s technical committee for immersive learning (TC-ILE). \n\\end{IEEEbiography}\n\\vspace{-0.6cm}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Photos/chen.pdf}}]\n\t{Enhong Chen} (Fellow, IEEE) received the B.S. degree from Anhui University, Hefei, China, the M.S. 
degree from the Hefei University of Technology, Hefei, China, and the Ph.D. degree in computer science from the University of Science and Technology of China (USTC), Hefei, China, in 1989, 1992 and 1996 respectively. He is currently a Professor and the Executive Dean of School of Data Science, the Director of Anhui Province Key Laboratory of Big Data Analysis and Application. He has published a number of papers on refereed journals and conferences, such as TKDE, TIST, TMC, KDD, ICDM, NIPS and CIKM. His current research interests include data mining and machine learning, social network analysis, and recommender system. Dr. Chen was a recipient of the National Science Fund for Distinguished Young Scholars of China, the Best Application Paper Award on KDD-2008, the Best Student Paper Award on KDD-2018 (Research), and the Best Research Paper Award on ICDM-2011. He is an IEEE Fellow and an ACM Distinguished Scientist.\n\\end{IEEEbiography}\n\\vspace{-0.6cm}\n\\end{document}", "id": "8a28f5df-8a26-4509-8bdf-0a8c518eb23c", "level": "section", "origin_cites_number": 1, "parent_id": "4e5af7af-552b-4c7f-939b-54b6b4dadeb9", "prefix_titles": [ [ "title", "A Survey of Knowledge Tracing: Models, Variants, and Applications" ], [ "section", "Discussions and Conclusions" ] ], "subsections": [], "title": "Discussions and Conclusions" } ]
92
[ 5371, 5372, 5370, 5374, 5373, 5376, 5375, 38, 5377, 8911, 1206, 5378, 5379, 5380, 679, 243, 553, 5381, 5382, 166, 1684, 5383, 5384, 5385, 8912, 5386, 8913, 8914, 5388, 5387, 8915, 8461 ]
1.291703
[ "Yizeng Han", "Gao Huang", "Shiji Song", "Le Yang", "Honghui Wang", "Yulin Wang" ]
Dynamic Neural Networks: A Survey
2021
2021-02-09T16:02:00Z
cs.CV
Dynamic neural network is an emerging research topic in deep learning. Compared to static models which have fixed computational graphs and parameters at the inference stage, dynamic networks can adapt their structures or parameters to different inputs, leading to notable advantages in terms of accuracy, computational efficiency, adaptiveness, etc. In this survey, we comprehensively review this rapidly developing area by dividing dynamic networks into three main categories: 1) {\emph{sample-wise}} dynamic models that process each sample with data-dependent architectures or parameters; 2) \emph{spatial-wise} dynamic networks that conduct adaptive computation with respect to different spatial locations of image data; and 3) \emph{temporal-wise} dynamic models that perform adaptive inference along the temporal dimension for sequential data such as videos and texts. The important research problems of dynamic networks, e.g., architecture design, decision making scheme, optimization technique and applications, are reviewed systematically. Finally, we discuss the open problems in this field together with interesting future research directions.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "4fd906cb-4aa6-49a2-a5b7-ec10dfbd9280", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ] ], "subsections": [ "ed4f3055-e5dc-420d-8009-404bb4d82113", "58436ad0-55e1-4da8-b7e8-6bf9edce3fb1", "d1dfc651-19ab-46ab-9db8-02eef3371ca0", "8a8770e6-47f0-428a-8e00-4bb401ff006f", "4ffefefb-4651-456b-a07d-75e465a728b0", "3cd25c7f-359c-4b70-8c02-f2738746b35c", "e9013abd-3d22-4eab-a882-4ba5e86719f2" ], "title": "root" }, { "cite_extract_rate": 0.642857142857142, "cites": [ 71, 96, 679, 681, 687, 544, 305, 682, 683, 675, 38, 504, 97, 686, 684, 680, 677, 676, 7266, 8372, 7268, 7, 678, 7267, 674, 685, 306 ], "content": "\\label{sec:introduction}}\n\\IEEEPARstart{D}{eep} neural networks (DNNs) are playing an important role in various areas including computer vision (CV) and natural language processing (NLP) . \nRecent years have witnessed many successful deep models such as AlexNet , VGG , GoogleNet , ResNet , DenseNet and Transformer . These {architectural innovations} have enabled the training of deeper, more accurate and more efficient models. The recent research on neural architecture search (NAS) further speeds up the process of designing more powerful structures. However, most of the prevalent deep learning models perform inference in a static manner, i.e., both the computational graph and the network parameters are fixed once trained, which may limit their representation power, efficiency and interpretability . \nDynamic networks, as opposed to static ones, can adapt their structures or parameters to the input during inference, and therefore enjoy favorable properties that are absent in static models. 
In general, dynamic computation in the context of deep learning has the following advantages: \n\\begin{figure*}\n \\centering\n \\vspace{-2ex}\n \\includegraphics[width=0.875\\linewidth]{1_overall.pdf}\n \\vskip -0.175in\n \\centering\\caption{Overview of the survey. {We first review the dynamic networks that perform adaptive computation at three different granularities (i.e. sample-wise, spatial-wise and temporal-wise). Then we summarize the decision making strategy, training technique and applications of dynamic models. Existing open problems in this field together with some future research directions are finally discussed. Best viewed in color.}}\n \\label{framework}\n \\vspace{-3ex}\n\\end{figure*}\n\\textbf{1) Efficiency.} One of the most notable advantages of dynamic networks is that they are able to allocate computations on demand at test time, by selectively activating model components (e.g. layers , channels or sub-networks ) \\emph{conditioned} on the input. Consequently, less computation is spent on canonical samples that are relatively easy to recognize, or on less informative spatial/temporal locations of an input. {In addition to \\emph{computational efficiency}, dynamic models have also shown promising results for improving \\emph{data efficiency} in the scenario of few-shot learning .}\n\\textbf{2) Representation power.} Due to the data-dependent network architecture/parameters, dynamic networks have significantly enlarged parameter space and improved representation power. For example, with a minor increase of computation, model capacity can be boosted by applying feature-conditioned attention weights on an ensemble of convolutional kernels . It is worth noting that the popular soft attention mechanism could also be unified in the framework of dynamic networks, as different channels , spatial areas or temporal locations of features are dynamically re-weighted at test time. 
\n\\textbf{3) Adaptiveness.} Dynamic models are able to achieve a desired trade-off between accuracy and efficiency for dealing with varying computational budgets on the fly. Therefore, they are more adaptable to different hardware platforms and changing environments, compared to static models with a fixed computational cost.\n\\textbf{4) Compatibility.} Dynamic networks are compatible with most advanced techniques in deep learning, including architecture design , optimization algorithms and data preprocessing , which ensures that they can benefit from the most recent advances in the field to achieve state-of-the-art performance. For example, dynamic networks can inherit architectural innovations in lightweight models , or be designed via NAS approaches . Their efficiency could also be further improved by acceleration methods developed for static models, such as network pruning , weight quantization , knowledge distillation and low-rank approximation .\n\\textbf{5) Generality.} As a substitute for static deep learning techniques, many dynamic models are general approaches that can be applied seamlessly to a wide range of applications, such as image classification , object detection and semantic segmentation . Moreover, the techniques developed in CV tasks are proven to transfer well to language models in NLP tasks , and vice versa.\n\\textbf{6) Interpretability.} We finally note that the research on dynamic networks {may bridge} the gap between the underlying mechanism of deep models and brains, as it is believed that the brains process information in a dynamic way~. It is possible to analyze which components of a dynamic model are activated when processing an input sample, and to observe which parts of the input are accountable for certain predictions . 
These properties may shed light on interpreting the decision process of DNNs.\n\\begin{table}\n \\vspace{-1ex}\n \\caption{Notations used in this paper.}\n \\label{tab:notations}\n \\vspace{-4ex}\n \\begin{center}\n \\begin{tabular}{c|c}\n \\hline\n \\textbf{Notations} & \\textbf{Descriptions} \\\\\n \\hline\n $\\mathbb{R}^m$ & $m$-dimensional real number domain \\\\\n \\hline\n $a, \\mathbf{a}$ & Scalar, vector/matrix/tensor \\\\\n \\hline\n $\\mathbf{x,y}$ & Input, output feature \\\\\n \\hline\n $\\mathbf{x}^{\\ell}$ & Feature at layer $\\ell$ \\\\\n \\hline\n $\\mathbf{h}_t$ & Hidden state at time step $t$ \\\\\n \\hline\n $\\mathbf{x(p)}$ & Feature at spatial location $\\mathbf{p}$ on $\\mathbf{x}$ \\\\\n \\hline\n $\\bm{\\Theta}$ & Learnable parameter \\\\\n \\hline\n $\\bm{\\hat{\\Theta}}|\\mathbf{x}$ & Dynamic parameter conditioned on $\\mathbf{x}$ \\\\\n \\hline\n $\\mathbf{x}\\star\\mathbf{W}$ & Convolution of feature $\\mathbf{x}$ and weight $\\mathbf{W}$ \\\\\n \\hline\n $\\otimes$ & Channel-wise or element-wise multiplication \\\\\n \\hline\n $\\mathcal{F}(\\cdot,\\bm{\\Theta})$ & Functional Operation parameterized by $\\bm{\\Theta}$ \\\\\n \\hline\n $\\mathcal{F}\\circ\\mathcal{G}$ & Composition of function $\\mathcal{F}$ and $\\mathcal{G}$ \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n \\vskip -0.35in\n\\end{table}\nIn fact, adaptive inference, the key idea underlying dynamic networks, has been studied before the popularity of modern DNNs. The most classical approach is building a model ensemble through a cascaded or parallel structure, and selectively activating the models conditioned on the input. Spiking neural networks (SNNs) also perform data-dependent inference by propagating pulse signals. However, the training strategy for SNNs is quite different from that of popular convolutional neural networks (CNNs), and they are less used in vision tasks. Therefore, we leave out the work related to SNNs in this survey. 
\n\\begin{figure}\n \\centering\n \\vspace{-1ex}\n \\includegraphics[width=\\linewidth]{2_dynamic_cascade.pdf}\n \\vskip -0.15in\n \\caption{Two early-exiting schemes. The dashed lines and shaded modules are not executed, conditioned on the decisions made by the routers.}\n \\label{cascading_models}\n \\vskip -0.2in\n\\end{figure}\n\\begin{figure*}\n \\centering\n \\vspace{-2ex}\n \\includegraphics[width=\\linewidth]{3_multi_scale_SuperNet.pdf}\n \\vskip -0.175in\n \\caption{{Multi-scale architectures with dynamic inference graphs. The first three models (a, b, c) perform adaptive early exiting with specific architecture designs and exiting policies. Dynamic routing is achieved inside a SuperNet (d) to activate data-dependent inference paths.}}\n \\label{multi_scale}\n \\vspace{-3ex}\n\\end{figure*}\nIn the context of deep learning, dynamic inference with modern deep architectures, has raised many new research questions and has attracted great research interests in the past three years.\nDespite the extensive work on designing various types of dynamic networks, a systematic and comprehensive review on this topic is still lacking. This motivates us to write this survey, to review the recent advances in this rapidly developing area, with the purposes of 1) providing an overview as well as new perspectives for researchers who are interested in this topic; 2) pointing out the close relations of different subareas and reducing the risk of reinventing the wheel; and 3) summarizing the key challenges and possible future research directions.\n{This survey is organized as follows (see \\figurename~\\ref{framework} for an overview). In Sec. \\ref{sec_sample_wise}, we introduce the most common \\emph{sample-wise} dynamic networks which adapt their architectures or parameters conditioned on each input sample. Dynamic models working on a finer granularity, i.e., \\emph{spatially} adaptive and \\emph{temporally} adaptive models, are reviewed in Sec. 
\\ref{sec_spatially_adaptive} and Sec.\\ref{sec_temporal_adaptive}, respectively\\footnote{{These two categories can also be viewed as sample-wise dynamic networks as they perform adaptive computation within each sample at a finer granularity, and we adopt such a split for narrative convenience.}}. Then we investigate the decision making strategies and the training techniques of dynamic networks in Sec. \\ref{inference_and_train}. The applications of dynamic models are further summarized in Sec. \\ref{sec_tasks}. Finally, we conclude this survey with a discussion on a number of open problems and future research directions in Sec. \\ref{sec:discussion}. For better readability, we list the notations that will be used in this survey in Table \\ref{tab:notations}.}\n\\vspace{-2ex}", "id": "ed4f3055-e5dc-420d-8009-404bb4d82113", "level": "section", "origin_cites_number": 42, "parent_id": "4fd906cb-4aa6-49a2-a5b7-ec10dfbd9280", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "}\n\\label{sec_sample_wise}\nAiming at processing different inputs in data-dependent manners, sample-wise dynamic networks are typically designed from two perspectives: 1) adjusting model architectures to allocate appropriate computation based on each sample, and therefore reducing redundant computation for increased efficiency (Sec. \\ref{dynamic_arch}); \n2) adapting network \\emph{parameters} to every input sample with fixed computational graphs, with the goal of boosting the representation power with minimal increase of computational cost (Sec. 
\\ref{adaptive_params}).\n\\vspace{-2ex}", "id": "58436ad0-55e1-4da8-b7e8-6bf9edce3fb1", "level": "section", "origin_cites_number": 0, "parent_id": "4fd906cb-4aa6-49a2-a5b7-ec10dfbd9280", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "{Sample-wise Dynamic Networks" ] ], "subsections": [ "3befdcee-686f-4399-9d86-c1c11ef02509", "fdd3c2e8-dc39-4dd8-8a63-1d07741f61b4" ], "title": "{Sample-wise Dynamic Networks" }, { "cite_extract_rate": 1, "cites": [ 688, 685, 687 ], "content": "\\label{dynamic_arch}\nConsidering different inputs may have diverse computational demands, it is natural to perform inference with dynamic architectures conditioned on each sample. Specifically, one can adjust the network depth (Sec. \\ref{dynamic_depth}), width (Sec. \\ref{dynamic_width}), or perform dynamic routing within a super network (SuperNet) that includes multiple possible paths (Sec. \\ref{SuperNets}). Networks with dynamic architectures not only save redundant computation for canonical (\"easy\") samples, but also preserve their representation power when recognizing non-canonical (\"hard\") samples. 
Such a property leads to remarkable advantages in efficiency compared to the acceleration techniques for static models , which handle \"easy\" and \"hard\" inputs with identical computation, and fail to reduce intrinsic computational redundancy.\n\\vspace{-1.5ex}", "id": "3befdcee-686f-4399-9d86-c1c11ef02509", "level": "subsection", "origin_cites_number": 3, "parent_id": "58436ad0-55e1-4da8-b7e8-6bf9edce3fb1", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "{Sample-wise Dynamic Networks" ], [ "subsection", "Dynamic Architectures" ] ], "subsections": [ "ca8e1af0-84b8-4669-8aed-b0ba1c15924a", "9747d6b0-66bd-4364-a0a9-4ece0a43fe6e", "ad51ce7b-9d07-4edc-88e3-0822d3ae0397" ], "title": "Dynamic Architectures" }, { "cite_extract_rate": 0.621621621621621, "cites": [ 698, 695, 7273, 305, 693, 683, 690, 691, 7269, 8373, 38, 7270, 697, 97, 692, 694, 696, 676, 7272, 7271, 7266, 689, 7 ], "content": "\\label{dynamic_depth}\nAs modern DNNs are getting increasingly deep for recognizing more \"hard\" samples, a straightforward solution to reducing redundant computation is performing inference with dynamic depth, which can be realized by 1) \\emph{early exiting}, i.e. allowing \"easy\" samples to be output at shallow exits without executing deeper layers ; or 2) \\emph{layer skipping}, i.e. selectively skipping intermediate layers conditioned on each sample .\n\\begin{figure*}\n \\vspace{-2ex}\n \\centering\n \\includegraphics[width=\\linewidth]{4_skip_layers.pdf}\n \\vskip -0.175in\n \\caption{Dynamic layer skipping. Feature $\\mathbf{x}_4$ in (a) is not calculated conditioned on the halting score, and the gating module in (b) decides whether to execute the block based on the intermediate feature. 
The policy network in (c) generates the skipping decisions for all layers in the main network.}\n \\label{skipping_layers}\n \\vspace{-3ex}\n\\end{figure*}\n\\noindent\\textbf{1) Early exiting.} The complexity (or \"difficulty\") of input samples varies in most real-world scenarios, and shallow networks are capable of correctly identifying many canonical inputs. Ideally, these samples should be output at certain early exits without executing deeper layers.\nFor an input sample $\\mathbf{x}$, the forward propagation of an $L$-layer deep network $\\mathcal{F}$ could be represented by \n\\begin{equation}\n \\setlength{\\abovedisplayskip}{3pt}\n \\mathbf{y} = \\mathcal{F}^{L}\\circ\\mathcal{F}^{L-1}\\circ\\cdots\\circ\\mathcal{F}^1(\\mathbf{x}),\n \\setlength{\\belowdisplayskip}{3pt}\n\\end{equation}\n\\noindent where $\\mathcal{F}^{\\ell}$ denotes the operational function at layer $\\ell,1\\!\\le \\ell\\!\\le L$. In contrast, early exiting allows the inference procedure to terminate at an intermediate layer. For the $i$-th input sample $\\mathbf{x}_i$, the forward propagation can be written as\n\\begin{equation}\n \\setlength{\\abovedisplayskip}{3pt}\n \\mathbf{y}_i = \\mathcal{F}^{\\ell_i}\\circ\\mathcal{F}^{\\ell_i-1}\\circ\\cdots\\circ\\mathcal{F}^1(\\mathbf{x}_i), 1\\!\\le \\ell_i\\!\\le L.\n \\setlength{\\belowdisplayskip}{3pt}\n\\end{equation}\nNote that $\\ell_i$ is adaptively determined based on $\\mathbf{x}_i$. Extensive architectures have been studied to endow DNNs with such early exiting behaviors, as discussed in the following.\na) \\emph{Cascading of DNNs.} The most intuitive approach to enabling early exiting is cascading multiple models (see \\figurename~\\ref{cascading_models} (a)), and adaptively retrieving the prediction of an early network without activating the later ones. For example, Big/little-Net cascades two CNNs with different depths. 
After obtaining the \\emph{SoftMax} output of the first model, early exiting is conducted when the score margin between the two largest elements exceeds a threshold. {Moreover, a number of classic CNNs are cascaded in several works }. After each model, a decision function is trained to determine whether the obtained feature should be fed to a linear classifier for immediate prediction, or be sent to subsequent models.\nb) \\emph{Intermediate classifiers.}\nThe models in the aforementioned cascading structures are mutually independent. Consequently, \nonce a \"difficult\" sample is sent to a later network, a whole inference procedure needs to be executed from scratch without reusing the already learned features. A more compact design involves intermediate classifiers within one backbone network (see \\figurename~\\ref{cascading_models} (b)), so that early features can be propagated to deep layers if needed. Based on such a multi-exit architecture, adaptive early exiting is typically achieved according to confidence-based criteria or learned functions .\nc) \\emph{Multi-scale architecture with early exits.} Researchers have observed that in chain-structured networks, the multiple classifiers may interfere with each other, which degrades the overall performance. A reasonable interpretation could be that in regular CNNs, the high-resolution features lack the coarse-level information that is essential for classification, leading to unsatisfactory results for early exits. Moreover, early classifiers would force the shallow layers to generate \\emph{task-specialized} features, while a part of \\emph{general} information is lost, resulting in degraded performance for deep exits. 
To tackle this issue, multi-scale dense network (MSDNet) adopts 1) a \\emph{multi-scale} architecture, {which consists of multiple sub-networks for processing feature maps with different resolutions (scales)}, to quickly generate coarse-level features that are suitable for classification; 2) \\emph{dense connections}, to reuse early features and improve the performance of deep classifiers (see \\figurename~\\ref{multi_scale} (a)). Such a specially-designed architecture effectively enhances the overall accuracy of all the classifiers in the network. \n{Based on the multi-scale architecture design, researchers have also studied the exiting policies (see \\figurename~\\ref{multi_scale} (b)) and training schemes of early-exiting dynamic models. More discussion about the inference and training schemes for dynamic models will be presented in Sec. \\ref{inference_and_train}.}\nPrevious methods typically achieve the adaptation of network depths. From the perspective of exploiting spatial redundancy in features, resolution adaptive network (RANet, see \\figurename~\\ref{multi_scale} (c)) first processes each sample with low-resolution features, while high-resolution representations are conditionally utilized based on early predictions.\nAdaptive early exiting is also extended to language models (e.g. BERT ) for improving their efficiency on NLP tasks . In addition, it can be implemented in recurrent neural networks (RNNs) for \\emph{temporally} dynamic inference when processing sequential data such as videos and texts (see Sec. \\ref{sec_temporal_adaptive}).\n\\noindent\\textbf{2) Layer skipping.} The general idea of the aforementioned early-exiting paradigm is skipping the execution of all the deep layers after a certain classifier. More flexibly, the network depth can also be adapted on the fly by strategically skipping the calculation of \\emph{intermediate layers} without placing extra classifiers. 
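The confidence-based early-exiting policies described above (e.g. thresholding the top-1 \emph{SoftMax} probability at each exit) can be illustrated with a minimal sketch. The classifier stubs, their logits, and the threshold value below are hypothetical placeholders, not taken from any specific model in the survey; each callable stands in for one increasingly deep exit of a multi-exit network.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def early_exit_predict(x, classifiers, threshold=0.9):
    """Run exits in order; stop at the first sufficiently confident one.

    `classifiers` is a list of callables mapping the input to logits.
    Returns (predicted class, index of the exit that fired).
    """
    probs = None
    for i, clf in enumerate(classifiers):
        probs = softmax(clf(x))
        if max(probs) >= threshold:  # confident enough: exit early
            return probs.index(max(probs)), i
    # The deepest exit always produces the final answer.
    return probs.index(max(probs)), len(classifiers) - 1

# Toy exits: the shallow one is unsure, the deep one is confident.
shallow = lambda x: [0.2, 0.1, 0.0]  # near-uniform logits -> low confidence
deep    = lambda x: [6.0, 0.0, 0.0]  # peaked logits -> high confidence
```

With a strict threshold the "hard" decision is deferred to the deep exit; loosening the threshold lets the shallow exit answer immediately, which is exactly the accuracy/efficiency knob that early-exiting networks expose at test time.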
Given the $i$-th input sample $\\mathbf{x}_i$, dynamic layer skipping could be generally written as\n\\begin{equation}\n \\setlength{\\abovedisplayskip}{3pt}\n \\mathbf{y}_i = (\\mathds{1}^{L}\\circ\\mathcal{F}^{L})\\circ (\\mathds{1}^{L-1}\\circ\\mathcal{F}^{L-1})\\circ\\cdots\\circ (\\mathds{1}^1\\circ\\mathcal{F}^1)(\\mathbf{x}_i),\n \\setlength{\\belowdisplayskip}{3pt}\n\\end{equation}\nwhere $\\mathds{1}^{\\ell}$ denotes the indicator function determining the execution of layer $\\mathcal{F}^{\\ell}, 1\\!\\le\\!\\ell\\!\\le\\!L$.\nThis scheme is typically implemented on structures with skip connections (e.g. ResNet ) to guarantee the continuity of forward propagation, and here we summarize three common approaches.\na) \\emph{Halting score} is first proposed in , where an accumulated scalar named halting score adaptively decides whether the hidden state of an RNN will be directly fed to the next time step. The halting scheme is extended to vision tasks by viewing residual blocks within a ResNet stage \\footnote{Here we refer to a stage as a stack of multiple residual blocks with the same feature resolution.} as linear layers within a step of RNN (see \\figurename~\\ref{skipping_layers} (a)).\nRather than skipping the execution of layers with independent parameters, multiple blocks in each ResNet stage could be replaced by one weight-sharing block, leading to a significant reduction of parameters . 
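To make the indicator functions $\mathds{1}^{\ell}$ in the layer-skipping formulation above concrete, the following sketch applies binary gates to toy residual blocks: a skipped block contributes nothing beyond the identity shortcut. The gate rule and the constant residual branch are hypothetical hand-crafted stand-ins for the learned gating/halting modules discussed here.

```python
def gated_residual_forward(x, blocks, gates):
    """Gated residual forward pass: x <- x + F(x) only when the gate opens.

    `blocks` are residual branch functions; `gates` map a feature to {0, 1}.
    Returns the final feature and the indices of the executed blocks.
    """
    executed = []
    for i, (f, g) in enumerate(zip(blocks, gates)):
        if g(x):  # gate open: run the residual branch and add the shortcut
            x = [xi + fi for xi, fi in zip(x, f(x))]
            executed.append(i)
        # gate closed: x passes through the identity connection unchanged
    return x, executed

# Toy setting: gate opens only while the mean activation is small.
block = lambda x: [0.5 for _ in x]            # constant residual branch
gate  = lambda x: int(sum(x) / len(x) < 1.0)  # 1 -> execute, 0 -> skip
```

A "hard" (small-activation) input executes both blocks, while a "large" input skips them entirely, mirroring how data-dependent gates adapt the effective network depth per sample.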
In every stage, the block is executed for an adaptive number of steps according to the halting score.\nIn addition to RNNs and CNNs, the halting scheme is further implemented on Transformers by and to achieve dynamic network depth on NLP tasks.\nb) {\\emph{Gating function} is also a prevalent option for dynamic layer skipping due to its plug-and-play property.} Taking ResNet as an example (see \\figurename~\\ref{skipping_layers} (b)), let $\\mathbf{x}^{\\ell}$ denote the input feature of the $\\ell$-th residual block; the gating function $\\mathcal{G}^{\\ell}$ generates a binary value to decide the execution of residual block $\\mathcal{F}^{\\ell}$. This procedure could be represented by\\footnote{For simplicity and without loss of generality, the subscript for sample index will be omitted in the following.}\n\\begin{equation}\n \\setlength{\\abovedisplayskip}{3pt}\n \\mathbf{x}^{\\ell+1} = \\mathcal{G}^{\\ell}(\\mathbf{x}^{\\ell})\\mathcal{F}^{\\ell}(\\mathbf{x}^{\\ell}) + \\mathbf{x}^{\\ell}, \\mathcal{G}^{\\ell}(\\mathbf{x}^{\\ell})\\in\\{0,1\\}.\n \\setlength{\\belowdisplayskip}{3pt}\n\\end{equation}\nSkipNet and convolutional network with adaptive inference graph (Conv-AIG) are two typical approaches to enabling dynamic layer skipping. Both methods introduce a lightweight computational overhead to efficiently produce the binary decisions on whether to skip the calculation of a residual block. Specifically, Conv-AIG utilizes two FC layers in each residual block, while the gating function in SkipNet is implemented as an RNN for parameter sharing.\n\\begin{figure*}\n \\centering\n \\vspace{-2ex}\n \\includegraphics[width=0.85\\linewidth]{5_multi_branch.pdf}\n \\vskip -0.175in\n \\caption{MoE with soft weighting (a) and hard gating (b) schemes both adopt an auxiliary module to generate the weights or gates. In the tree structure (c), features (nodes) and transformations (paths) are represented as circles and lines with arrows, respectively. 
Only the full lines are activated.}\n \\label{multi_branch}\n \\vspace{-3ex}\n\\end{figure*}\nRather than skipping layers in classic ResNets, dynamic recursive network iteratively executes one block with shared parameters in each stage. Although the weight-sharing scheme seems similar to the aforementioned IamNN , the skipping policy of differs significantly: gating modules are exploited to decide the recursion depth.\nInstead of either skipping a layer, or executing it thoroughly with a full numerical precision, a line of work studies adaptive \\emph{bit-width} for different layers conditioned on the \\emph{resource budget}. Furthermore, fractional skipping adaptively selects a bit-width for each residual block by a gating function based on \\emph{input features}.\nc) {\\emph{Policy network} can be built to take in an input sample, and directly produces the skipping decisions for all the layers in a backbone network (see \\figurename~\\ref{skipping_layers} (c)).}\n\\vspace{-1.5ex}", "id": "ca8e1af0-84b8-4669-8aed-b0ba1c15924a", "level": "subsubsection", "origin_cites_number": 37, "parent_id": "3befdcee-686f-4399-9d86-c1c11ef02509", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "{Sample-wise Dynamic Networks" ], [ "subsection", "Dynamic Architectures" ], [ "subsubsection", "Dynamic Depth" ] ], "subsections": [], "title": "Dynamic Depth" }, { "cite_extract_rate": 0.764705882352941, "cites": [ 706, 688, 705, 707, 8375, 544, 691, 8374, 710, 8373, 712, 713, 709, 699, 38, 711, 703, 680, 696, 702, 7266, 700, 701, 708, 704, 687 ], "content": "\\label{dynamic_width}\nIn addition to dynamic network \\emph{depth} (Sec. \\ref{dynamic_depth}), a finer-grained form of conditional computation is performing inference with dynamic \\emph{width}: although every layer is executed, its multiple units (e.g. 
neurons, channels or branches) are selectively activated conditioned on the input.\n\\noindent\\textbf{1) {Skipping neurons in fully-connected (FC) layers.}} The computational cost of an FC layer is determined by its input and output dimensions. It is commonly believed that different neuron units are responsible for representing different features, and therefore not all of them need to be activated for every sample. Early studies learn to adaptively control the neuron activations by auxiliary branches or other techniques such as low-rank approximation .\n\\noindent\\textbf{2) {Skipping branches in mixture-of-experts (MoE).}} In Sec. \\ref{dynamic_depth}, adaptive model ensemble is achieved in a \\emph{cascading} manner, and later networks are conditionally executed based on early predictions. An alternative approach to improving the model capacity is the MoE structure, in which multiple network branches are built as experts in \\emph{parallel}. \nThese experts could be selectively executed, and their outputs are fused with data-dependent weights.\nConventional \\emph{soft} MoEs adopt real-valued weights to dynamically rescale the representations obtained from different experts (\\figurename~\\ref{multi_branch} (a)). In this way, all the branches still need to be executed, and thus the computation cannot be reduced at test time. 
\\emph{Hard} gates with only a fraction of non-zero elements are developed to increase the inference efficiency of the MoE structure (see \\figurename~\\ref{multi_branch} (b)): let $\\mathcal{G}$ denote a gating module whose output is an $N$-dimensional vector $\\bm{\\alpha}$ controlling the execution of $N$ experts $\\mathcal{F}_1,\\mathcal{F}_2, \\cdots, \\mathcal{F}_N$; the final output can be written as\n\\begin{equation} \\label{eq_moe}\n \\setlength{\\abovedisplayskip}{3pt}\n \\mathbf{y} = \\sum\\nolimits_{n=1}^N [\\mathcal{G}(\\mathbf{x})]_n \\mathcal{F}_n (\\mathbf{x}) = \\sum\\nolimits_{n=1}^N \\alpha_n \\mathcal{F}_n (\\mathbf{x}),\n \\setlength{\\belowdisplayskip}{3pt}\n\\end{equation}\nand the $n$-th expert will not be executed if $\\alpha_n\\! =\\!0$. \nHard MoE has been implemented in diverse network structures. For example, HydraNet replaces the convolutional blocks in a CNN by multiple branches, and selectively executes them conditioned on the input. For another example, dynamic routing network (DRNet) performs a branch selection in each cell structure, which is commonly used in NAS . On NLP tasks, sparsely gated MoE and switch Transformer embed hard MoE in a long short-term memory (LSTM) network and a Transformer , respectively. Instead of making choices with \\emph{binary} gates as in , only the branches corresponding to the \\emph{top-K} elements of the real-valued gates are activated in .\n\\noindent\\textbf{{3) Skipping channels in CNNs.}}\nModern CNNs usually have considerable channel redundancy. Based on the common belief that the same feature channel can be of disparate importance for different samples, adaptive width of CNNs could be realized by dynamically activating convolutional channels. 
Compared to the \\emph{static} pruning methods which remove \"unimportant\" filters permanently, such a \\emph{data-dependent} pruning approach improves the inference efficiency without degrading the model capacity.\na) \\emph{Multi-stage architectures along the channel dimension.} Recall that the early-exiting networks discussed in Sec. \\ref{dynamic_depth} can be viewed as multi-stage models along the \\emph{depth} dimension, where late stages are conditionally executed based on early predictions. One can also build multi-stage architectures along the \\emph{width} (channel) dimension, and progressively execute these stages on demand.\nAlong this direction, an optimal architecture is searched among multiple structures with different widths, and any sample can be output at an early stage when a confident prediction is obtained . Channel gating network (CGNet) first executes a subset of convolutional filters in every layer, and the remaining filters are only activated on strategically selected areas.\nb) \\emph{Dynamic pruning with gating functions.} In the aforementioned progressive activation paradigm, the execution of a later stage is decided based on previous output. As a result, a complete forward propagation is required for every stage, which might be suboptimal for reducing the practical inference latency. Another prevalent solution is to decide the execution of channels at every layer by gating functions. For example, runtime neural pruning (RNP) models the layer-wise pruning as a Markov decision process, and an RNN is used to select specific channel groups at every layer. Moreover, pooling operations followed by FC layers are utilized to generate {\\emph{channel-wise hard attention} (i.e. making discrete decisions on whether to activate each channel)} for each sample . The recent work uses a gate module to decide the width for a whole stage of a ResNet. 
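A minimal sketch of such gating-based dynamic channel pruning (toy NumPy code with hypothetical shapes; the pooling-plus-FC gate mirrors the hard-attention scheme described above, with a plain threshold standing in for a learned discrete decision):

```python
import numpy as np

def channel_gate(x, w_fc, threshold=0.0):
    """Hypothetical gate: global average pool -> FC -> hard 0/1 per channel."""
    pooled = x.mean(axis=(1, 2))            # squeeze (C_in, H, W) -> (C_in,)
    logits = w_fc @ pooled                  # one logit per output channel
    return (logits > threshold).astype(x.dtype)

def gated_layer(x, filters, w_fc):
    """Only the filters whose gate fires are evaluated; the rest output zeros,
    so their computation can be skipped entirely at inference time."""
    mask = channel_gate(x, w_fc)            # (C_out,) binary mask
    c_out = filters.shape[0]
    y = np.zeros((c_out, x.shape[1], x.shape[2]), dtype=x.dtype)
    for c in np.flatnonzero(mask):          # inactive channels never computed
        # 1x1 convolution for channel c: weighted sum over input channels
        y[c] = np.tensordot(filters[c], x, axes=([0], [0]))
    return y, mask
```

Since the mask is computed before the layer runs, the filters it switches off can be skipped entirely for that sample, unlike static pruning, where they would be removed for all samples.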
Different reparameterization and optimizing techniques are required for training these gating functions, which will be reviewed in Sec. \\ref{training}.\nRather than placing plug-in gating modules \\emph{inside} a CNN, GaterNet builds an \\emph{extra} network, which takes in the input sample and directly generates all the channel selection decisions for the backbone CNN. This implementation is similar to BlockDrop that exploits an additional policy network for dynamic layer skipping (Sec. \\ref{dynamic_depth}).\nc) \\emph{Dynamic pruning based on feature activations directly} has also been realized without auxiliary branches and computational overheads , where a regularization item is induced in training to encourage the sparsity of features.\n{On basis of the existing literature on dynamically skipping either network \\emph{layers} or convolutional \\emph{filters} , recent work has realized dynamic inference with respect to network \\emph{depth} and \\emph{width} simultaneously: only if a layer is determined to be executed, its channels will be selectively activated, leading to a more flexible adaptation of computational graphs.}\n\\vspace{-1ex}", "id": "9747d6b0-66bd-4364-a0a9-4ece0a43fe6e", "level": "subsubsection", "origin_cites_number": 34, "parent_id": "3befdcee-686f-4399-9d86-c1c11ef02509", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "{Sample-wise Dynamic Networks" ], [ "subsection", "Dynamic Architectures" ], [ "subsubsection", "Dynamic Width" ] ], "subsections": [], "title": "Dynamic Width" }, { "cite_extract_rate": 0.684210526315789, "cites": [ 715, 714, 8374, 8373, 719, 720, 718, 696, 7266, 7274, 716, 717, 306 ], "content": "\\label{SuperNets}\nThe aforementioned methods mostly adjust the depth (Sec. \\ref{dynamic_depth}) or width (Sec. \\ref{dynamic_width}) of \\emph{classic architectures} by activating their computational units (e.g. layers or channels ) conditioned on the input. 
{Another line of work develops different forms of SuperNets with various possible inference paths, and performs dynamic routing inside the SuperNets to adapt the computational graph to each sample.}\nTo achieve dynamic routing, there are typically routing nodes that are responsible for allocating features to different paths. For node $s$ at layer $\\ell$, let $\\alpha_{s\\rightarrow j}^{\\ell}$ denote the probability of assigning the reached feature $\\mathbf{x}^{\\ell}_s$ to node $j$ at layer $\\ell+1$; the path $\\mathcal{F}_{s\\rightarrow j}^{\\ell}$ will be activated only when $\\alpha_{s\\rightarrow j}^{\\ell}\\!>\\!0$. The resulting feature reaching node $j$ is represented by\n\\begin{equation}\\label{eq_supernet}\n \\setlength{\\abovedisplayskip}{3pt}\n \\mathbf{x}^{\\ell+1}_j = \\sum\\nolimits_{s\\in\\left\\{s: \\alpha_{s\\rightarrow j}^{\\ell}>0\\right\\}} \\alpha_{s\\rightarrow j}^{\\ell}\\mathcal{F}_{s\\rightarrow j}^{\\ell}(\\mathbf{x}^{\\ell}_s).\n \\setlength{\\belowdisplayskip}{3pt}\n\\end{equation}\nThe probability $\\alpha_{s\\rightarrow j}^{\\ell}$ can be obtained in different manners. \nNote that the dynamic early-exiting networks are a special form of SuperNets, where the routing decisions are only made at intermediate classifiers. The CapsuleNets also perform dynamic routing between capsules, i.e. groups of neurons, to characterize the relations between (parts of) objects. Here we mainly focus on specific architecture designs of the SuperNets and their routing policies.\n\\noindent\\textbf{1) Path selection in multi-branch structures.} The simplest dynamic routing can be implemented by selectively executing \\emph{one} of multiple candidate modules at each layer , which is equivalent to producing a one-hot probability distribution $\\alpha_{s\\rightarrow \\cdot}^{\\ell}$ in Eq. \\ref{eq_supernet}. 
The main difference of this approach to hard MoE (\\figurename~\\ref{multi_branch} (b)) is that only one branch is activated without any fusion operations.\n\\noindent\\textbf{2) Neural trees and tree-structured networks.} \nAs decision trees always perform inference along one forward path that is dependent on input properties, combining tree structure with neural networks can naturally enjoy the adaptive inference paradigm and the representation power of DNNs simultaneously. Note that in a tree structure, the outputs of different nodes are routed to \\emph{independent} paths rather than being \\emph{fused} as in MoE structures (compare \\figurename~\\ref{multi_branch} (b), (c)).\na) \\emph{Soft decision tree} (SDT) adopts neural units as routing nodes (blue nodes in \\figurename~\\ref{multi_branch} (c)), which decides the portion that the inputs are assigned to their left/right sub-tree. Each leaf node generates a probability distribution over the output space, and the final prediction is the expectation of the results from all leaf nodes. Although the probability for a sample reaching each leaf node in an SDT is data-dependent, all the paths are still executed, which limits the inference efficiency.\nb) \\emph{Neural trees with deterministic routing policies} are designed to make hard routing decisions during inference, avoiding computation on those unselected paths.\nc) \\emph{Tree-structured DNNs.} Instead of developing decision trees containing neural units, a line of work builds special network architectures to endow them with the routing behavior of decision trees. For instance, a small CNN is first executed to classify each sample into coarse categories, and specific sub-networks are conditionally activated based on the coarse predictions . 
A subsequent work not only partitions samples into different sub-networks, but also divides and routes the feature channels.\nDifferent from those networks using neural units only in routing nodes , or routing each sample to pre-designed sub-networks , adaptive neural tree (ANT) adopts CNN modules as feature transformers in a hard neural tree (see lines with arrows in \\figurename~\\ref{multi_branch} (c)), and learns the tree structure together with the network parameters in the training stage.\n\\noindent\\textbf{3) Others.} \n{Performing dynamic routing within more general SuperNet architectures is also a recent research trend. Representatively, an architecture distribution with partly shared parameters is \\emph{searched} from a SuperNet containing $\\sim \\!\\! 10^{25}$ sub-networks . During inference, every sample is allocated by a controller network to one sub-network with appropriate computational cost. Instead of training a standalone controller network, gating modules are plugged inside the \\emph{hand-designed} SuperNet (see \\figurename~\\ref{multi_scale} (d)) to decide the routing path based on intermediate features . 
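To make the routing computation concrete, the toy NumPy sketch below implements a depth-2 soft decision tree in the spirit of SDT; all weights are hypothetical. Every leaf contributes to the expectation, whereas a hard neural tree would follow only the highest-probability path:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sdt_predict(x, node_w, leaf_dists):
    """Depth-2 soft decision tree: 3 routing nodes (root, left, right) and
    4 leaves, each holding a probability distribution over the classes."""
    p_root = sigmoid(x @ node_w[0])   # prob. of routing right at the root
    p_left = sigmoid(x @ node_w[1])   # right-branch prob. in the left subtree
    p_right = sigmoid(x @ node_w[2])  # right-branch prob. in the right subtree
    reach = np.array([
        (1 - p_root) * (1 - p_left),  # prob. of reaching leaf 0
        (1 - p_root) * p_left,        # leaf 1
        p_root * (1 - p_right),       # leaf 2
        p_root * p_right,             # leaf 3
    ])
    return reach @ leaf_dists         # expectation over all leaves
```

Because the reaching probabilities sum to one, the prediction is a valid distribution whenever the leaf distributions are; the cost of executing every path is exactly the inefficiency that hard routing policies avoid.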
\n\\begin{figure*}\n \\centering\n \\vspace{-2ex}\n \\includegraphics[width=0.9\\linewidth]{6_dynamic_parameter.pdf}\n \\vskip -0.2in\n \\caption{{Three implementations of dynamic parameters: adjusting (a) or generating (b) the backbone parameters based on the input, and (c) dynamically rescaling the features with the attention mechanism.}}\n \\label{dynamic_params}\n \\vskip -0.2in\n\\end{figure*}\n\\vspace{-1.5ex}", "id": "ad51ce7b-9d07-4edc-88e3-0822d3ae0397", "level": "subsubsection", "origin_cites_number": 19, "parent_id": "3befdcee-686f-4399-9d86-c1c11ef02509", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "{Sample-wise Dynamic Networks" ], [ "subsection", "Dynamic Architectures" ], [ "subsubsection", "Dynamic Routing" ] ], "subsections": [], "title": "Dynamic Routing" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{adaptive_params}\n\\vspace{-0.25ex}\nAlthough the networks with dynamic \\emph{architectures} in Sec. \\ref{dynamic_arch} can adapt their inference graphs to each sample and achieve an efficient allocation of computation, they usually have special architecture designs, requiring specific training strategies or careful hyper-parameters tuning (Sec. \\ref{sec:discussion}).\nAnother line of work adapts network \\emph{parameters} to different inputs while keeping the architectures fixed, which has been shown effective in improving the representation power of networks with a minor increase of computational cost. Given an input sample $\\mathbf{x}$, \nthe output of a conventional network (module) with static parameters can be written as $\\mathbf{y}\\! =\\!\\mathcal{F}(\\mathbf{x},\\bm{\\Theta})$. 
In contrast, the output of a model with dynamic parameters could be represented by\n\\begin{equation}\n \\setlength{\\abovedisplayskip}{3pt}\n \\mathbf{y} = \\mathcal{F}(\\mathbf{x},\\bm{\\hat{\\Theta}}|\\mathbf{x}) = \\mathcal{F}(\\mathbf{x},\\mathcal{W}(\\mathbf{x}, \\bm{\\Theta})),\n \\label{eq:dynamic_parameter}\n \\setlength{\\belowdisplayskip}{3pt}\n\\end{equation}\nwhere $\\mathcal{W}(\\cdot, \\bm{\\Theta})$ is the operation producing input-dependent parameters, and its design has been extensively explored.\nIn general, the parameter adaptation can be achieved from three aspects (see \\figurename~\\ref{dynamic_params}): 1) adjusting the trained parameters based on the input (Sec. \\ref{dynamic_param_adjust}); 2) directly generating the network parameters from the input (Sec. \\ref{weight_predict}); and 3) rescaling the features with soft attention (Sec. \\ref{sec:attention}).\n\\vspace{-1ex}", "id": "fdd3c2e8-dc39-4dd8-8a63-1d07741f61b4", "level": "subsection", "origin_cites_number": 0, "parent_id": "58436ad0-55e1-4da8-b7e8-6bf9edce3fb1", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "{Sample-wise Dynamic Networks" ], [ "subsection", "Dynamic Parameters" ] ], "subsections": [ "8a3a1403-da1a-48be-b1e9-80d1325e0909", "81d9301c-7d1b-4044-b144-7027a7b2165b", "0651cec9-e0a5-4c53-aca3-aa4ed8ac7d94" ], "title": "Dynamic Parameters" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 675, 8376, 721, 7276, 7275, 722 ], "content": "\\label{dynamic_param_adjust}\n\\vspace{-0.5ex}\nA typical approach to parameter adaptation is adjusting the weights based on their input during inference as presented in \\figurename~\\ref{dynamic_params} (a). 
This implementation usually incurs little computation to obtain the adjustments, e.g., attention weights or sampling offsets .\n\\noindent\\textbf{1) Attention on weights.} {To improve the representation power without noticeably increasing the computation, \nsoft attention can be performed on multiple convolutional kernels, producing an adaptive ensemble of parameters .}\nAssuming that there are $N$ kernels $\\mathbf{W}_n, n\\! =1,2,\\!\\cdots\\!,N$, such a dynamic convolution \ncan be formulated as \n\\begin{equation}\n \\setlength{\\abovedisplayskip}{3pt}\n\\mathbf{y} = \\mathbf{x} \\star \\mathbf{\\tilde{W}} = \\mathbf{x} \\star (\\sum\\nolimits_{n=1}^N \\alpha_n\\mathbf{W}_n). \n\\setlength{\\belowdisplayskip}{3pt}\n\\end{equation}\nThis procedure increases the model capacity while remaining highly efficient, as the result obtained through fusing the outputs of $N$ convolutional branches (as in MoE structures, see \\figurename~\\ref{multi_branch} (a)) is equivalent to that produced by performing a single convolution with $\\mathbf{\\tilde{W}}$. However, only $\\sim\\!1/N$ of the computation is consumed in the latter approach.\nWeight adjustment could also be achieved by performing soft attention over the \\emph{spatial locations} of convolutional weights . 
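The kernel-aggregation scheme above admits a compact sketch (a toy 1-D NumPy version; the attention branch is a hypothetical stand-in for the pooling-based modules used in practice). The $N$ kernels are fused first, so only a single convolution is executed:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_conv1d(x, kernels, att_w):
    """Soft attention over N candidate kernels, then one fused convolution."""
    logits = np.array([float(np.mean(x)) * a for a in att_w])  # toy attention
    alpha = softmax(logits)                    # (N,), input-dependent weights
    fused = sum(a * k for a, k in zip(alpha, kernels))  # \tilde{W}
    return np.convolve(x, fused, mode="same")  # a single convolution
```

Running the $N$ branches separately and fusing their outputs would give the same result at roughly $N$ times the convolution cost; the spatial-attention variants of weight adjustment, discussed next, instead vary the weighting over spatial locations.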
For example, segmentation-aware convolutional network applies locally masked convolution to aggregate information with larger weights from similar pixels, which are more likely to belong to the same object.\nUnlike that requires a sub-network for feature embedding, pixel-adaptive convolution (PAC) adapts the convolutional weights based on the attention mask generated from the input feature at each layer.\n{Instead of adjusting weights conditioned on every sample itself, meta-neighborhoods adapt the network parameters to each input sample based on its similarity to the neighbors stored in a dictionary.}\n\\noindent\\textbf{2) Kernel shape adaptation.} \nApart from adaptively scaling the weight \\emph{values}, parameter adjustment can also be realized to reshape the convolutional kernels and achieve \\emph{dynamic receptive fields}. Towards this direction, \ndeformable convolutions sample feature pixels from adaptive locations when performing convolution on each pixel. Deformable kernels samples weights in the kernel space to adapt the \\emph{effective} receptive field (ERF) while leaving the receptive field unchanged. Table \\ref{tab:deform_kernels} summarizes the formulations of the above three methods. \n{Due to their irregular memory access and computation pattern, these kernel shape adaptation approaches typically require customized CUDA kernels for the implementation on GPUs. 
However, recent literature has shown that the practical efficiency of deformable convolution could be effectively improved by co-designing algorithm and hardware based on embedded devices such as FPGAs .}\n\\begin{table*}\n \\vspace{-2ex}\n \\caption{{Kernel shape adaptation by dynamically sampling feature pixels or convolutional weights .}}\n \\vspace{-4ex}\n \\label{tab:deform_kernels}\n \\begin{center}\n \\begin{tabular}{c|c|c|c}\n \\hline\n \\textbf{Method} & \\textbf{Formulation} & \\textbf{Sampled Target} & \\textbf{Dynamic Mask} \\\\\n \\hline\n Regular Convolution & $\\mathbf{y(p)} = \\sum\\nolimits_{k=1}^K \\mathbf{W}(\\mathbf{p}_k) \\mathbf{x}(\\mathbf{p+p}_k)$ & - & - \\\\\n \\hline\n Deformable ConvNet-v1 & $\\mathbf{y(p)} = \\sum\\nolimits_{k=1}^K \\mathbf{W}(\\mathbf{p}_k) \\mathbf{x}(\\mathbf{p+p}_k+\\Delta \\mathbf{p}_k)$ & Feature map & No \\\\\n Deformable ConvNet-v2 & $\\mathbf{y(p)} = \\sum\\nolimits_{k=1}^K \\mathbf{W}(\\mathbf{p}_k) \\mathbf{x}(\\mathbf{p+p}_k+\\Delta \\mathbf{p}_k)\\Delta \\mathbf{m}_k$ & Feature map & Yes \\\\\n Deformable Kernels & $\\mathbf{y(p)} = \\sum\\nolimits_{k=1}^K \\mathbf{W}(\\mathbf{p}_k+\\Delta \\mathbf{p}_k) \\mathbf{x}(\\mathbf{p+p}_k)$ & Conv kernel & No \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n \\vspace{-5ex}\n\\end{table*}\n\\vspace{-1ex}", "id": "8a3a1403-da1a-48be-b1e9-80d1325e0909", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "fdd3c2e8-dc39-4dd8-8a63-1d07741f61b4", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "{Sample-wise Dynamic Networks" ], [ "subsection", "Dynamic Parameters" ], [ "subsubsection", "Parameter Adjustment" ] ], "subsections": [], "title": "Parameter Adjustment" }, { "cite_extract_rate": 0.785714285714285, "cites": [ 675, 726, 7277, 724, 7215, 678, 727, 725, 8377, 677, 723 ], "content": "\\label{weight_predict}\n\\vspace{-0.25ex}\nCompared to making modifications on model parameters on the fly (Sec. 
\\ref{dynamic_param_adjust}), weight prediction is more straightforward: it directly generates (a subset of) input-adaptive parameters with an independent model at test time (see \\figurename~\\ref{dynamic_params} (b)). This idea was first suggested in , where both the weight prediction model and the backbone model were feedforward networks. Recent work has further extended the paradigm to modern network architectures and tasks.\n\\noindent\\textbf{1) General architectures.} Dynamic filter networks (DFN) and HyperNetworks are two classic approaches realizing runtime weight prediction for CNNs and RNNs, respectively. Specifically, a filter generation network is built in DFN to produce the filters for a convolutional layer. As for processing sequential data (e.g. a sentence), the weight matrices of the main RNN are predicted by a smaller one at each time step conditioned on the input (e.g. a word) . WeightNet unifies the dynamic schemes of and by predicting the convolutional weights via simple grouped FC layers, achieving competitive results in terms of the accuracy-FLOPs\\footnote{{Floating point operations, which is widely used as a measure of inference efficiency of deep networks.}} and accuracy-parameters trade-offs.\n{Rather than generating standard \\emph{convolutional} weights, LambdaNetworks learns to predict the weights of \\emph{linear} projections based on the contexts of each pixel together with the relative position embeddings, showing advantages in terms of computational cost and memory footprint.}\n\\noindent\\textbf{2) Task-specific information} \nhas also been exploited to predict model parameters on the fly, enabling dynamic networks to generate task-aware feature embeddings. For example, edge attributes are utilized in to generate filters for graph convolution, and camera perspective is incorporated in to generate weights for image convolution. 
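The pattern shared by these methods, parameters produced at run time rather than stored, can be sketched in a few lines (toy 1-D NumPy code; the generator is a hypothetical stand-in, not the architecture of DFN or any other cited method):

```python
import numpy as np

def generate_filter(x, w_gen):
    """Hypothetical filter-generation branch: maps the input to a 3-tap kernel."""
    k = np.tanh(w_gen @ x)                 # sample-specific kernel, shape (3,)
    return k / (np.abs(k).sum() + 1e-8)    # normalized for stability

def predicted_conv(x, w_gen):
    k = generate_filter(x, w_gen)          # weights generated at run time
    return np.convolve(x, k, mode="same")  # then applied to the same input
```

Here the kernel applied to the input is itself a function of that input, so the layer stores only the generator's weights `w_gen` instead of a static convolutional filter.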
{Such task-aware weight prediction has been shown effective in improving the data efficiency on many tasks, including visual question answering and few-shot learning .}\n\\vspace{-1ex}", "id": "81d9301c-7d1b-4044-b144-7027a7b2165b", "level": "subsubsection", "origin_cites_number": 14, "parent_id": "fdd3c2e8-dc39-4dd8-8a63-1d07741f61b4", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "{Sample-wise Dynamic Networks" ], [ "subsection", "Dynamic Parameters" ], [ "subsubsection", "Weight Prediction" ] ], "subsections": [], "title": "Weight Prediction" }, { "cite_extract_rate": 0.7619047619047611, "cites": [ 736, 730, 729, 7278, 8378, 38, 734, 8379, 733, 731, 7040, 728, 735, 7268, 7, 732 ], "content": "\\label{sec:attention}\n\\vspace{-0.25ex}\nThe main goal of either \\emph{adjusting} (Sec. \\ref{dynamic_param_adjust}) or \\emph{predicting} (Sec. \\ref{weight_predict}) model parameters is producing more dynamic and informative features, and therefore enhancing the representation power of deep networks. A more straightforward solution is rescaling the features with input-dependent soft attention (see \\figurename~\\ref{dynamic_params} (c)), which requires minor modifications on computational graphs. Note that for a linear transformation $\\mathcal{F}$, applying attention $\\bm{\\alpha}$ on the output is equivalent to performing computation with re-weighted parameters, i.e.\n\\begin{equation}\\label{dynamic_feature_key}\n \\setlength{\\abovedisplayskip}{3pt}\n \\mathcal{F}(\\mathbf{x},\\bm{\\Theta})\\otimes \\bm{\\alpha}= \\mathcal{F}(\\mathbf{x},\\bm{\\Theta}\\otimes\\bm{\\alpha}).\n \\setlength{\\belowdisplayskip}{3pt}\n\\end{equation}\n\\noindent\\textbf{1) Channel-wise attention} is one of the most common soft attention mechanisms. Existing work typically follows \nthe form in squeeze-and-excitation network (SENet) :\n\\begin{equation}\\label{se_attention}\n \\setlength{\\abovedisplayskip}{3pt}\n\\mathbf{\\tilde{y}}\\! 
= \\!\\mathbf{y} \\otimes \\bm{\\alpha}\\! =\\!\\mathbf{y} \\otimes \\mathcal{A}(\\mathbf{y}), \\bm{\\alpha}\\in\\left[0,1\\right]^C.\n\\setlength{\\belowdisplayskip}{3pt}\n\\end{equation} \nIn Eq. \\ref{se_attention}, $\\mathbf{y}\\! =\\!\\mathbf{x} \\star \\mathbf{W}$ is the output feature of a convolutional layer with $C$ channels, and $\\mathcal{A}(\\cdot)$ is a lightweight function composed of pooling and linear layers for producing $\\bm{\\alpha}$.\nTaking the convolution into account, the procedure can also be written as $\\mathbf{\\tilde{y}}\\! = \\!(\\mathbf{x} \\star \\mathbf{W}) \\otimes \\bm{\\alpha}\\! =\\! \\mathbf{x} \\star (\\mathbf{W} \\otimes \\bm{\\alpha})$, from which we can observe that applying attention on features is equivalent to performing convolution with dynamic weights.\nOther implementations for attention modules have also been developed, including using standard deviation to provide more statistics , or replacing FC layers with efficient 1D convolutions .\nThe empirical performance of three computational graphs for soft attention is studied in : 1) $\\mathbf{\\tilde{y}}\\! =\\!\\mathbf{y}\\otimes \\mathcal{A}(\\mathbf{y})$, 2) $\\mathbf{\\tilde{y}}\\! =\\!\\mathbf{y}\\otimes \\mathcal{A}(\\mathbf{x})$ and 3) $\\mathbf{\\tilde{y}}\\! =\\!\\mathbf{y}\\otimes \\mathcal{A}(\\mathrm{Conv}(\\mathbf{x}))$. It is found that the three forms yield different performance in different backbone networks.\n\\noindent\\textbf{2) Spatial-wise attention}. Spatial locations in features could also be dynamically rescaled with attention to improve the representation power of deep models . Instead of using pooling operations to efficiently gather global information as in channel-wise attention, convolutions are often adopted in spatial-wise attention to encode local information. 
Moreover, these two types of attention modules can be integrated into one framework (see \\figurename~\\ref{dynamic_params} (c)).\n\\noindent\\textbf{3) Dynamic activation functions.} The aforementioned approaches to generating dynamic features usually apply soft attention before static activation functions. A recent line of work has sought to increase the representation power of models with dynamic activation functions . For instance, \nDY-ReLU replaces ReLU $(\\mathbf{y}_c\\! =\\!\\max (\\mathbf{x}_c, 0))$ with the max value among $N$ linear transformations $\\mathbf{y}_c\\! =\\!\\max_{n} \\left\\{a_c^n \\mathbf{x}_c + b_c^n\\right\\}$, where $c$ is the channel index, and $a_c^n, b_c^n$ are linear coefficients calculated from $\\mathbf{x}$. On many vision tasks, these dynamic activation functions can effectively improve the performance of different network architectures with negligible computational overhead.\nTo summarize, soft attention has been exploited in many fields due to its simplicity and effectiveness. Moreover, it can be incorporated with other methods conveniently. E.g., by replacing the weighting scalar $\\alpha_n$ in Eq. \\ref{eq_moe} with channel-wise or spatial-wise attention, the outputs of multiple branches with independent kernel sizes or feature resolutions are adaptively fused.\nNote that we leave out the detailed discussion on the self-attention mechanism, which is widely studied in both NLP and CV fields to re-weight features based on the similarity between queries and keys at different locations (temporal or spatial). \nReaders who are interested in this topic may refer to review studies . In this survey, we mainly focus on the feature re-weighting scheme in the framework of dynamic inference. 
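The channel attention of Eq. \ref{se_attention} itself can be sketched in a few lines (a toy NumPy version with hypothetical shapes; the actual SE module is trained end-to-end and uses a reduction ratio in its first FC layer):

```python
import numpy as np

def se_rescale(y, w1, w2):
    """SE-style channel attention: squeeze (GAP) -> FC+ReLU -> FC+sigmoid,
    then channel-wise rescaling of the (C, H, W) feature map."""
    z = y.mean(axis=(1, 2))                   # squeeze: (C,)
    h = np.maximum(w1 @ z, 0.0)               # excite, reduced dim (C // r,)
    alpha = 1.0 / (1.0 + np.exp(-(w2 @ h)))   # (C,), each entry in (0, 1)
    return y * alpha[:, None, None]           # tilde{y} = y rescaled by alpha
```

Because each $\alpha_c\in(0,1)$, the module can only attenuate channels; as noted above, the rescaling is equivalent to performing the convolution with the re-weighted kernel $\mathbf{W}\otimes\bm{\alpha}$.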
\n\\vspace{-2ex}", "id": "0651cec9-e0a5-4c53-aca3-aa4ed8ac7d94", "level": "subsubsection", "origin_cites_number": 21, "parent_id": "fdd3c2e8-dc39-4dd8-8a63-1d07741f61b4", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "{Sample-wise Dynamic Networks" ], [ "subsection", "Dynamic Parameters" ], [ "subsubsection", "Dynamic Features" ] ], "subsections": [], "title": "Dynamic Features" }, { "cite_extract_rate": 1, "cites": [ 737, 504 ], "content": "\\vspace{-0.25ex}\n\\label{sec_spatially_adaptive}\nIn visual learning, it has been found that not all locations contribute equally to the final prediction of CNNs , which suggests that \\emph{spatially} dynamic computation has great potential for reducing computational redundancy.\nIn other words, making a correct prediction may only require processing a fraction of pixels or regions with an adaptive amount of computation. Moreover, based on the observations that low-resolution representations are sufficient to yield decent performance for most inputs , the static CNNs that take in all the input with the same resolution may also induce considerable redundancy.\nTo this end, spatial-wise dynamic networks are built to perform adaptive inference with respect to different spatial locations of images.\nAccording to the granularity of dynamic computation, we further categorize the relevant approaches into three levels: {\\emph{pixel level} (Sec. \\ref{pixel_level}), \\emph{region level} (Sec. \\ref{region_level}) and \\emph{resolution level} (Sec. 
\ref{dynamic_resolution}).}
\vspace{-2ex}
\vspace{-0.25ex}
\label{pixel_level}
Most spatial-wise dynamic networks perform adaptive computation at the pixel level. Similar to the categorization in Sec. \ref{sec_sample_wise}, pixel-level dynamic networks are grouped into two types: models with pixel-specific \emph{dynamic architectures} (Sec. \ref{pixel_dynamic_arch}) and \emph{dynamic parameters} (Sec. \ref{pixel_dynamic_param}).
\vspace{-1ex}
\label{pixel_dynamic_arch}
\vspace{-0.25ex}
Based on the common belief that foreground pixels are more informative and computationally demanding than those in the background, some dynamic networks learn to adjust their architectures for each pixel. 
Existing literature generally achieves this by 1) \emph{dynamic sparse convolution}, which only performs convolutions on a subset of sampled pixels; 2) \emph{additional refinement}, which strategically allocates extra computation (e.g. layers or channels) at certain spatial positions.
\begin{figure}
 \centering
 \vspace{-1ex}
 \includegraphics[width=\linewidth]{8_spatial_pixel_sparse.pdf}
 \vskip -0.15in
 \caption{{Dynamic convolution on selected spatial locations. The 1 elements (black) in the spatial mask determine the pixels (green) that require computation in the output feature map.}}
 \label{fig_spatial_location_specific}
 \vspace{-3ex}
\end{figure}
\noindent\textbf{1) Dynamic sparse convolution.} To reduce the unnecessary computation at less informative locations, convolution can be performed only on strategically sampled pixels. Existing sampling strategies include 1) making use of the intrinsic sparsity of the input ; 2) predicting the positions of zero elements on the output ; and 3) estimating the saliency of pixels . A typical approach is using an extra branch to generate a spatial mask, which determines the execution of convolution on each pixel (see \figurename~\ref{fig_spatial_location_specific}). {Pixel-wise dynamic depth could also be achieved based on a halting scheme } (see Sec. \ref{dynamic_depth}). These dynamic convolutions usually neglect the unselected positions, which might degrade the network performance. Interpolation is utilized in to efficiently fill those locations, thereby alleviating the aforementioned disadvantage.
\noindent\textbf{2) Dynamic additional refinement.} Instead of only sampling certain pixels to perform convolutions, another line of work first conducts relatively cheap computation on the whole feature map, and adaptively activates extra modules on selected pixels for further \emph{refinement}. 
Representatively, dynamic capacity network generates coarse features with a shallow model, and salient pixels are sampled based on the gradient information.
For these salient pixels, extra layers are applied to extract finer features. Similarly, specific positions are additionally processed by a fraction of convolutional filters in . These methods adapt their network architectures in terms of \emph{depth} or \emph{width} at the pixel level, achieving a spatially adaptive allocation of computation.
{The aforementioned dynamic additional refinement approaches are mainly developed for image classification.} On the semantic segmentation task, pixel-wise \emph{early exiting} (see also Sec. \ref{dynamic_depth}) is proposed in , where the pixels with high prediction confidence are output without being processed by deeper layers. PointRend shares a similar idea, and applies additional FC layers on selected pixels with low prediction confidence, which are more likely to lie on the borders of objects.
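The masked-convolution idea underlying these pixel-level methods can be sketched as follows. For brevity the sketch uses a $1\times1$ convolution and leaves skipped positions at zero; real implementations rely on specialized gather/scatter kernels, and may fill the skipped positions by interpolation as noted above:

```python
import numpy as np

def masked_pointwise_conv(x, w, mask):
    """Illustrative dynamic sparse (1x1) convolution with a spatial mask.

    x    : input features, shape (C_in, H, W)
    w    : 1x1 convolution weights, shape (C_out, C_in)
    mask : binary spatial mask, shape (H, W); 1 = compute, 0 = skip
    Positions with mask == 0 are left at zero instead of being convolved.
    """
    C_out = w.shape[0]
    H, W = mask.shape
    y = np.zeros((C_out, H, W))
    rows, cols = np.nonzero(mask)          # coordinates of selected pixels
    # Dense matmul only over the selected pixels: (C_out, C_in) @ (C_in, k)
    y[:, rows, cols] = w @ x[:, rows, cols]
    return y

x = np.ones((2, 2, 2))                     # 2 input channels, 2x2 spatial map
w = np.array([[1.0, 1.0]])                 # one output channel, sums channels
mask = np.array([[1, 0], [0, 1]])          # compute only the diagonal pixels
y = masked_pointwise_conv(x, w, mask)
```

The FLOP count of the matmul scales with the number of selected pixels rather than with $H\times W$, which is the source of the savings these methods target.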
All these studies demonstrate that by exploiting the spatial redundancy in image data, dynamic computation at the pixel level, beyond the sample level, significantly increases the model efficiency.
\vspace{-1ex}
\label{pixel_dynamic_param}
\vspace{-0.25ex}
In contrast to entirely skipping the convolution operation on a subset of pixels, dynamic networks can also apply data-dependent parameters on different pixels for improved representation power or adaptive receptive fields.
\noindent\textbf{1) Dynamic weights.} {Similar to the sample-wise dynamic parameter methods (Sec. \ref{adaptive_params}), pixel-level dynamic weights are achieved by test-time \emph{adjustment} , \emph{prediction} or \emph{dynamic features} . Take weight prediction as an example,} typical approaches generate an $H\!\times\!W\!\times\! k^2$ kernel map to produce spatially dynamic weights ($H,W$ are the spatial size of the output feature and $k$ is the kernel size). Considering that the pixels belonging to the same object may share identical weights, dynamic region-aware convolution (DRConv) generates a segmentation mask for an input image, dividing it into $m$ regions, for each of which a weight generation network is responsible for producing a data-dependent kernel.
\noindent\textbf{2) Dynamic receptive fields.} Traditional convolution operations usually have a fixed shape and size of kernels (e.g. the commonly used $3\!\times\!3$ 2D convolution). The resulting uniform receptive field across all the layers may have limitations for recognizing objects with varying shapes and sizes. {To tackle this, a line of work learns to adapt the receptive field for different feature pixels , as discussed in Sec. \ref{dynamic_param_adjust}.} Instead of adapting the sampling locations of features or kernels, adaptive connected network realizes a dynamic trade-off among self transformation (e.g. $1\!\times\!1$ convolution), local inference (e.g. $3\!\times\!3$ convolution) and global inference (e.g. FC layer). The three branches of outputs are fused with a data-dependent weighted summation.
Besides images, the local and global information in non-Euclidean data, such as graphs, could also be adaptively aggregated.
\vspace{-1.5ex}
\label{region_level}
\vspace{-0.5ex}
{Pixel-level dynamic networks mentioned in Sec. \ref{pixel_level} often require specific implementations for sparse computation, and consequently may face challenges in terms of achieving real acceleration on hardware .} An alternative approach is performing adaptive inference on \emph{regions/patches} of input images. There are mainly two lines of work along this direction (see \figurename~\ref{fig_spatial_select_regions}): one performs parameterized \emph{transformations} on a region of feature maps for more accurate prediction (Sec. 
\ref{dynamic_transofrm}), and the other learns patch-level \emph{hard attention}, with the goal of improving the effectiveness and/or efficiency of models (Sec. \ref{hard_attention_pathces}).
\vspace{-1ex}
\label{dynamic_transofrm}
\vspace{-0.25ex}
Dynamic transformations (e.g. affine/projective/thin plate spline transformations) can be performed on images to undo certain variations for better generalization ability, or to exaggerate the salient regions for discriminative feature representation. For example, spatial transformer adopts a localization network to generate the transformation parameters, and then applies the parameterized transformation to recover the input from the corresponding variations.
Moreover, transformations are learned to adaptively zoom in on the salient regions in tasks where the model performance is sensitive to a small portion of regions.
\begin{figure}
 \centering
 \vspace{-1ex}
 \includegraphics[width=\linewidth]{9_spatial_select_regions.pdf}
 \vskip -0.15in
 \caption{{Region-level dynamic inference. 
The region selection module generates the transformation/localization parameters, and the subsequent network performs inference on the transformed/cropped region.}}\n \\label{fig_spatial_select_regions}\n \\vspace{-4ex}\n\\end{figure}\n\\vspace{-1ex}", "id": "b67eaf7b-932e-433e-a7e5-732f3b779d92", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "0d52450e-77a8-4861-a2eb-00ddb1ece863", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "Spatial-wise Dynamic Networks" ], [ "subsection", "Region-level Dynamic Networks" ], [ "subsubsection", "Dynamic Transformations" ] ], "subsections": [], "title": "Dynamic Transformations" }, { "cite_extract_rate": 0.8571428571428571, "cites": [ 747, 745, 737, 744, 674, 746 ], "content": "\\label{hard_attention_pathces}\n\\vspace{-0.25ex}\nInspired by the fact that informative features may only be contained in certain regions of an image, dynamic networks with hard spatial attention are explored to strategically select patches from the input for improved efficiency.\n\\noindent\\textbf{1) Hard attention with RNNs.} The most typical approach is formulating a classification task as a sequential decision process, {and adopting RNNs to make iterative predictions based on selected patches .} For example, images are classified within a fixed number of steps, and at each step, the classifier RNN only sees a cropped patch, deciding the next attentional location until the last step is reached . An adaptive step number is further achieved by including early stopping in the action space . Glance-and-focus network (GFNet) builds a general framework of region-level adaptive inference by sequentially focusing on a series of selected patches, and is compatible with most existing CNN architectures. 
The recurrent attention mechanism together with the early exiting paradigm enables both \emph{spatially} and \emph{temporally} adaptive inference .
\noindent\textbf{2) Hard attention with other implementations.} Rather than using an RNN to predict the region position that the model should pay attention to, class activation mapping (CAM) is leveraged in to iteratively focus on salient patches. At each iteration, the selection is performed on the previously cropped input, leading to a progressive refinement procedure. A multi-scale CNN is built in , where the sub-network in each scale takes in the cropped patch from the previous scale, and is responsible for simultaneously producing 1) the feature representations for classification and 2) the attention map for the next scale. {Without resorting to an iterative procedure, the recent differentiable patch selection adopts a differentiable top-K module to select a fixed number of patches in one step.}
\vspace{-2ex}
\label{dynamic_resolution}
\vspace{-0.5ex}
The approaches discussed above typically divide feature maps into different areas (pixel-level or region-level) for adaptive inference.
On a coarser granularity, some dynamic networks treat each image as a whole by processing feature representations with adaptive resolutions. 
Although it has been observed that a low resolution might be sufficient for recognizing most ``easy'' samples , conventional CNNs mostly process all the inputs at the same resolution, inducing considerable redundancy. Therefore, resolution-level dynamic networks exploit spatial redundancy from the perspective of feature resolution rather than the saliency of different locations. Existing approaches mainly include 1) scaling the inputs with adaptive ratios (Sec. \ref{adaptive_scaling_ratio}); 2) selectively activating the sub-networks with different resolutions in a multi-scale architecture (Sec. \ref{dynamic_res_multiscale}).
\vspace{-1ex}
\label{adaptive_scaling_ratio}
\vspace{-0.25ex}
Dynamic resolution can be achieved by scaling features with adaptive ratios. For example, a small sub-network is first executed to predict a scale distribution of faces on the face detection task; then the input images are adaptively zoomed, so that all the faces fall in a suitable range for recognition . 
A plug-in module is used by to predict the stride for the first convolution block in each ResNet stage, producing features with dynamic resolution.\n\\vspace{-1ex}", "id": "6a7330af-c2c7-4497-af0d-0ab86ce108a0", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "ab0c50b9-9d18-4c18-976a-34aae739e7ac", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "Spatial-wise Dynamic Networks" ], [ "subsection", "Resolution-level Dynamic Networks" ], [ "subsubsection", "Adaptive Scaling Ratios" ] ], "subsections": [], "title": "Adaptive Scaling Ratios" }, { "cite_extract_rate": 0.5, "cites": [ 7266 ], "content": "\\label{dynamic_res_multiscale} \n\\vspace{-0.25ex}\nAn alternative approach to achieving dynamic resolution is building multiple sub-networks in a parallel or cascading way. These sub-networks with different feature resolutions are selectively activated conditioned on the input during inference. For instance, Elastic realizes a \\emph{soft} selection from multiple branches at every layer, where each branch performs a downsample-convolution-upsample procedure with an independent scaling ratio. To practically avoid redundant computation, a \\emph{hard} selection is realized by , which allows each sample to conditionally activate sub-networks that process feature representations with resolution from low to high (see \\figurename~\\ref{multi_scale} (c) in Sec. \\ref{dynamic_depth}).\n\\vspace{-2ex}\n\\begin{figure*}\n \\centering\n \\vspace{-2ex}\n \\includegraphics[width=0.9\\linewidth]{10_temporal_skim.pdf}\n \\vskip -0.15in\n \\caption{{Temporally adaptive inference. The first three approaches dynamically allocate computation in each step by (a) skipping the update, (b) partially updating the state, or (c) conditional computation in a hierarchical structure. 
The agent in (d) decides where to read in the next step.}}\n \\label{fig_temporal_skim}\n \\vspace{-3ex}\n\\end{figure*}", "id": "16545c48-ec80-4bbd-871e-b151b745999d", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "ab0c50b9-9d18-4c18-976a-34aae739e7ac", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "Spatial-wise Dynamic Networks" ], [ "subsection", "Resolution-level Dynamic Networks" ], [ "subsubsection", "Dynamic Resolution in Multi-scale Architectures" ] ], "subsections": [], "title": "Dynamic Resolution in Multi-scale Architectures" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec_temporal_adaptive}\n\\vspace{-0.25ex}\nApart from the spatial dimension (Sec. \\ref{sec_spatially_adaptive}), adaptive computation could also be performed along the temporal dimension of sequential data, such as texts (Sec. \\ref{temporal_text}) and videos (Sec. \\ref{sec:tempoal_video}). {In general, network efficiency can be improved by dynamically allocating less/no computation to the inputs at unimportant temporal locations.}\n\\vspace{-1.5ex}", "id": "8a8770e6-47f0-428a-8e00-4bb401ff006f", "level": "section", "origin_cites_number": 0, "parent_id": "4fd906cb-4aa6-49a2-a5b7-ec10dfbd9280", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "Temporal-wise Dynamic Networks" ] ], "subsections": [ "d0476612-fbf1-40f0-a9de-cc704c8eb7d7", "844590b4-9a40-42f9-8052-47ad9e4a7783" ], "title": "Temporal-wise Dynamic Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "\\vspace{-0.25ex}\n\\label{temporal_text}\nTraditional RNNs mostly follow a static inference paradigm, i.e. 
input tokens are read sequentially to update a hidden state at each time step, which could be written as
\begin{equation}\label{eq_rnn}
 \setlength{\abovedisplayskip}{3pt}
\mathbf{h}_t = \mathcal{F}(\mathbf{x}_t, \mathbf{h}_{t-1}), t=1,2,\cdots, T.
\setlength{\belowdisplayskip}{3pt}
\end{equation}
Such a static inference paradigm induces significant redundant computation, as different tokens usually make different contributions to the downstream tasks.
A family of dynamic RNNs has been developed to allocate an appropriate computational cost at each step. Some learn to \emph{``skim''} unimportant tokens by dynamically updating hidden states (Sec. \ref{skimming_text}), and others conduct \emph{adaptive reading} to avoid processing task-irrelevant tokens. Specifically, such adaptive reading can be achieved by \emph{early exiting} (Sec. \ref{temporal_early_exit}) or \emph{dynamic jumping} (Sec. \ref{text_jumping}).
\vspace{-1ex}
\vspace{-0.25ex}
\label{skimming_text}
Since not all the tokens are essential for capturing the task-relevant information in a sequence, dynamic RNNs can be built to adaptively update their hidden states at each time step. Less informative tokens will be coarsely \emph{skimmed}, i.e. 
the states are updated with cheaper computation.
\noindent\textbf{1) Skipping the update.} For unimportant inputs at certain temporal locations, dynamic models can learn to entirely skip the update of hidden states (see \figurename~\ref{fig_temporal_skim} (a)), i.e.
\begin{equation}
 \setlength{\abovedisplayskip}{3pt}
 \mathbf{h}_t = \alpha_t\mathcal{F}(\mathbf{x}_t, \mathbf{h}_{t-1}) + (1-\alpha_t)\mathbf{h}_{t-1}, \alpha_t\in\left\{0,1\right\}.
 \setlength{\belowdisplayskip}{3pt}
\end{equation}
For instance, Skip-RNN updates a controlling signal in every step to determine whether to update or \emph{copy} the hidden state from the previous step. An extra agent is adopted by Structural-Jump-LSTM to make the skipping decision conditioned on the previous state and the current input. Without training the RNNs and the controllers jointly as in and , a predictor is trained in to estimate whether each input will make a ``significant change'' to the hidden state. The update is deemed worth executing only when the predicted change is greater than a threshold.
\noindent\textbf{2) Coarse update.} As directly skipping the update may be too aggressive, dynamic models could also update the hidden states with adaptively allocated operations. Specifically, a network can adapt its architecture in every step, i.e.
\begin{equation}
 \setlength{\abovedisplayskip}{3pt}
 \mathbf{h}_t = \mathcal{F}_t(\mathbf{x}_t, \mathbf{h}_{t-1}), t=1,2,\cdots, T,
 \setlength{\belowdisplayskip}{3pt}
\end{equation}
where $\mathcal{F}_t$ is determined based on the input $\mathbf{x}_t$. One implementation is selecting a subset of dimensions of the hidden state to recompute, and copying the rest from the previous step , as shown in \figurename~\ref{fig_temporal_skim} (b). 
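A toy sketch of the two update schemes above (full skipping, and partial update over a selected subset of state dimensions) is given below; the gate and selection functions are hypothetical stand-ins for the learned controllers:

```python
import numpy as np

def skip_rnn_step(x, h, cell, gate):
    """One step of a Skip-RNN-style update: h_t = a*F(x,h) + (1-a)*h.

    cell : the RNN transition F(x, h)
    gate : hypothetical controller returning alpha in {0, 1};
           alpha = 0 copies the previous state (the update is skipped).
    """
    alpha = gate(x, h)
    return alpha * cell(x, h) + (1 - alpha) * h

def partial_update_step(x, h, cell, select):
    """Coarse update: recompute only a selected subset of state dimensions."""
    mask = select(x, h)                   # binary vector over state dimensions
    h_new = cell(x, h)
    return mask * h_new + (1 - mask) * h  # unselected dims are copied over

# Toy usage with a linear-tanh "cell" and fixed gate decisions:
cell = lambda x, h: np.tanh(x + h)
h = np.zeros(4)
h = skip_rnn_step(np.ones(4), h, cell, gate=lambda x, h: 1)   # full update
h = skip_rnn_step(np.ones(4), h, cell, gate=lambda x, h: 0)   # skip (copy)
```

In the actual methods the binary decisions are produced by trained modules, which requires the gradient-estimation techniques discussed later in this survey.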
To achieve the partial update, a subset of rows in the weight matrices of the RNN is dynamically activated in , while Skim-RNN makes a choice between two independent RNNs.
When the hidden states are generated by a multi-layer network, the update could be interrupted at an intermediate layer based on an accumulated halting score .
To summarize, a coarse update can be realized by data-dependent network \emph{depth} or \emph{width} .
{\noindent\textbf{3) Selective updates in hierarchical RNNs.} Considering the intrinsic hierarchical structure of texts (e.g. sentence-word-character), researchers have developed hierarchical RNNs to encode the temporal dependencies at different timescales using a dynamic update mechanism . During inference, the RNNs at higher levels selectively update their states conditioned on the output of low-level ones (see \figurename~\ref{fig_temporal_skim} (c)).
For example, when a character-level model in detects that the input satisfies certain conditions, it will \emph{``flush''} (reset) its states and feed them to a word-level network. Similar operations have also been realized by a gating module on question answering tasks .}
\vspace{-1ex}
\vspace{-0.25ex}
\label{temporal_early_exit}
Although the dynamic RNNs in Sec. 
\ref{skimming_text} are able to update their states with data-dependent computational costs at each step, all the tokens must still be read, leading to inefficiency in scenarios where the task-relevant results can be obtained before reading the entire sequence.
Ideally, an efficient model should adaptively stop reading before the last step $T$ in Eq. \ref{eq_rnn} is reached, once the captured information is sufficient to solve the task. For instance, reasoning network (ReasoNet) terminates its reading procedure when sufficient evidence has been found for question answering.
Similarly, early stopping is implemented for sentence-level and paragraph-level text classification, respectively. Note that the approaches discussed here focus on making early predictions with respect to the \emph{temporal} dimension of the sequential input, rather than along the \emph{depth} dimension of networks as in Sec. \ref{dynamic_depth}.
\vspace{-1ex}
\vspace{-0.25ex}
\label{text_jumping}
Although early exiting in Sec. \ref{temporal_early_exit} can largely reduce redundant computation, all the tokens must still be fed to the model one by one. More aggressively, dynamic RNNs could further learn to decide \emph{``where to read''} by strategically skipping some tokens without reading them, and directly jumping to an arbitrary temporal location (see \figurename~\ref{fig_temporal_skim} (d)).
Such dynamic jumping, together with early exiting, is realized in and . 
Specifically, LSTM-Jump implements an auxiliary unit to predict the jumping stride within a defined range, and the reading process ends when the unit outputs zero. The model in first decides whether to stop at each step. If not, it will further choose to re-read the current input, or to skip a flexible number of words. Moreover, structural information is exploited by Structural-Jump-LSTM , which utilizes an agent to decide whether to jump to the next punctuation. Apart from looking ahead, LSTM-Shuttle also allows backward jumping to supplement the missed history information.\n\\vspace{-1.5ex}", "id": "64d0b8b6-25b1-481c-a175-60d486a5210d", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "d0476612-fbf1-40f0-a9de-cc704c8eb7d7", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "Temporal-wise Dynamic Networks" ], [ "subsection", "RNN-based Dynamic Text Processing" ], [ "subsubsection", "Jumping in Texts" ] ], "subsections": [], "title": "Jumping in Texts" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:tempoal_video}\n\\vspace{-0.25ex}\nFor video recognition, where a video could be seen as a sequential input of frames, temporal-wise dynamic networks are designed to allocate adaptive computational resources for different frames. This can generally be achieved by two approaches: 1) dynamically updating the hidden states in each time step of \\emph{recurrent} models (Sec. \\ref{recurrent_video}), and 2) performing adaptive \\emph{pre-sampling} for key frames (Sec. 
\\ref{frame_sampling}).\n\\vspace{-1ex}", "id": "844590b4-9a40-42f9-8052-47ad9e4a7783", "level": "subsection", "origin_cites_number": 0, "parent_id": "8a8770e6-47f0-428a-8e00-4bb401ff006f", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "Temporal-wise Dynamic Networks" ], [ "subsection", "Temporal-wise Dynamic Video Recognition" ] ], "subsections": [ "b2f1acb3-4b7a-41ce-96e3-e8abe77bc9b2", "ce2a3a82-4498-49fd-b6ea-af13eecfa56b" ], "title": "Temporal-wise Dynamic Video Recognition" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 7286, 755, 7270, 752, 751, 754, 7283, 8380, 753 ], "content": "\\label{recurrent_video}\n\\vspace{-0.25ex}\nVideo recognition is often conducted via a recurrent procedure, where the video frames are first encoded by a 2D CNN, and the obtained frame features are fed to an RNN sequentially for updating its hidden state. Similar to the approaches introduced in Sec. \\ref{temporal_text}, RNN-based adaptive video recognition is typically realized by 1) treating unimportant frames with relatively cheap computation (\\emph{\"glimpse\"}) ; 2) \\emph{early exiting} ; and 3) performing dynamic \\emph{jumping} to decide {\"where to see\"} .\n\\noindent\\textbf{1) Dynamic update of hidden states.} To reduce redundant computation at each time step, LiteEval makes a choice between two LSTMs with different computational costs. ActionSpotter decides whether to update the hidden state according to each input frame. {AdaFuse selectively reuses certain feature channels from the previous step to efficiently make use of historical information.} Recent work has also proposed to adaptively decide the numerical precision or modalities when processing the sequential input frames. Such a \\emph{glimpse} procedure (i.e. 
allocating cheap operations to unimportant frames) is similar to the aforementioned text \emph{skimming} .
\noindent\textbf{2) Temporally early exiting.} Humans are able to comprehend the content easily before watching an entire video. Such early stopping is also implemented in dynamic networks to make predictions based only on a portion of video frames . Together with the \emph{temporal} dimension, the model in further achieves early exiting from the aspect of network \emph{depth} as discussed in Sec. \ref{dynamic_depth}.
\noindent\textbf{3) Jumping in videos.} Considering that encoding those unimportant frames with a CNN still requires considerable computation, a more efficient solution could be dynamically skipping some frames without watching them. Existing methods typically learn to predict the location that the network should jump to at each time step.
Furthermore, both early stopping and dynamic jumping are allowed in , where the jumping stride is limited to a discrete range.
Adaptive frame (AdaFrame) generates a continuous scalar within the range of $[0,1]$ as the relative location.
\vspace{-1ex}
\label{frame_sampling}
\vspace{-0.25ex}
Rather than processing video frames recurrently as in Sec. 
\ref{recurrent_video}, another line of work first performs an adaptive \emph{pre-sampling} procedure, and then makes predictions by processing the selected subset of key frames or clips.
\noindent{\textbf{1) Temporal attention} is a common technique for networks to focus on salient frames.} For face recognition, neural aggregation network uses \emph{soft} attention to adaptively aggregate frame features. To improve the inference efficiency, \emph{hard} attention is realized to remove unimportant frames iteratively with RL for efficient video face verification .
\noindent{\textbf{2) Sampling module} is also a prevalent option for dynamically selecting the key frames/clips in a video.} For example, the frames are first sampled uniformly in , and discrete decisions are made for each selected frame to go forward or backward step by step. As for clip-level sampling, SCSample is designed based on a trained classifier to find the most informative clips for prediction. Moreover, dynamic sampling network (DSN) segments each video into multiple sections, and a sampling module with shared weights across the sections is exploited to sample one clip from each section.
{Adjusting multiple factors of deep models simultaneously has attracted research attention in both static and dynamic networks . For example, together with \emph{temporal-wise} frame sampling, \emph{spatially} adaptive computation can be achieved by spatial /temporal resolution adaptation and patch selection . 
It would be promising to exploit the redundancy in both \\emph{input data} and \\emph{network structure} for further improving the efficiency of deep networks.}\n\\vspace{-2ex}", "id": "ce2a3a82-4498-49fd-b6ea-af13eecfa56b", "level": "subsubsection", "origin_cites_number": 14, "parent_id": "844590b4-9a40-42f9-8052-47ad9e4a7783", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "Temporal-wise Dynamic Networks" ], [ "subsection", "Temporal-wise Dynamic Video Recognition" ], [ "subsubsection", "Dynamic Key Frame Sampling" ] ], "subsections": [], "title": "Dynamic Key Frame Sampling" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 675, 7277 ], "content": "\\label{inference_and_train}\n\\vspace{-0.25ex}\nIn previous sections, we have reviewed three different types of dynamic networks (sample-wise (Sec. \\ref{sec_sample_wise}), spatial-wise (Sec. \\ref{sec_spatially_adaptive}) and temporal-wise (Sec. \\ref{sec_temporal_adaptive})). It can be observed that making data-dependent \\emph{decisions} at the inference stage is essential to achieve high efficiency and effectiveness. Moreover, \\emph{training} dynamic models is usually more challenging than optimizing static networks.\nNote that since parameter adaptation (Sec. \\ref{adaptive_params}) could be conveniently achieved by differentiable operations, models with dynamic parameters can be directly trained by stochastic gradient descent (SGD) without specific techniques. Therefore, in this section we mainly focus on discrete decision making (Sec. \\ref{inference}) and its training strategies (Sec. 
\ref{training}), which are absent in most static models.
\vspace{-1.5ex}
\label{inference}
\vspace{-0.5ex}
As described above, dynamic networks are capable of making data-dependent decisions during inference to transform their architectures, parameters, or to select salient spatial/temporal locations in the input. Here we summarize three commonly seen decision-making schemes as follows.
\vspace{-1ex}
\vspace{-0.25ex}
Many dynamic networks are able to output ``easy'' samples at early exits if a certain confidence-based criterion is satisfied. These methods generally require estimating the confidence of intermediate predictions, which is compared to a predefined threshold for decision making.
In classification tasks, the confidence is usually represented by the maximum element of the \emph{SoftMax} output. Alternative criteria include the entropy and the score margin. 
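These classification-oriented criteria are simple to implement. The following pure-Python sketch illustrates the three criteria just mentioned (maximum \emph{SoftMax} probability, entropy, and score margin); the function names and the default threshold are illustrative assumptions, not taken from any particular method.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def should_exit(logits, criterion="max_prob", threshold=0.9):
    """Early-exit decision at an intermediate classifier.

    Illustrative sketch: the confidence score is compared against a
    threshold that would normally be tuned on a validation set.
    """
    p = softmax(logits)
    if criterion == "max_prob":
        score = max(p)
    elif criterion == "entropy":
        # Low entropy means high confidence; rescale to [0, 1].
        h = -sum(pi * math.log(pi + 1e-12) for pi in p)
        score = 1.0 - h / math.log(len(p))
    elif criterion == "margin":
        top2 = sorted(p, reverse=True)[:2]
        score = top2[0] - top2[1]
    else:
        raise ValueError("unknown criterion: %s" % criterion)
    return score >= threshold
```

Raising the threshold trades computation for accuracy, which matches the threshold-tuning behavior described in this subsection.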
On NLP tasks, a \emph{model patience} is proposed in : when the predictions for one sample stay unchanged across a number of classifiers, the inference procedure stops.
In addition, the halting score in could also be viewed as a confidence for whether the current feature could be output to the next time step or calculation stage.
Empirically, the confidence-based criteria are easy to implement and generally require no specific training techniques. The trade-off between accuracy and efficiency is controlled by manipulating the thresholds, which are usually tuned on a validation dataset. Note that the \emph{overconfidence} issue in deep models might affect the effectiveness of such a decision paradigm, as incorrectly classified samples may still obtain high confidence at early exits.
\vspace{-1ex}
\label{inference_policy}
\vspace{-0.25ex}
It is a common option to build an additional policy network that learns to adapt the network topology based on the input sample.
Specifically, each input sample is first processed by the policy network, whose output directly determines which parts of the main network should be activated. For example, BlockDrop and GaterNet use a policy network to adaptively decide the \emph{depth} and \emph{width} of a backbone network. 
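This paradigm can be summarized in a short sketch. Note that the one-layer linear policy and scalar-valued residual blocks below are hypothetical stand-ins chosen for brevity; they are not the actual BlockDrop or GaterNet designs.

```python
def policy_decisions(features, weights, biases):
    """A stand-in policy network: one linear logit per candidate block.
    A block is executed iff its logit is positive (hard decision)."""
    logits = [sum(w * x for w, x in zip(row, features)) + b
              for row, b in zip(weights, biases)]
    return [1 if z > 0 else 0 for z in logits]

def run_backbone(x, blocks, gates):
    """Run a chain of residual blocks, skipping the gated-off ones.
    The residual connection makes a skipped block act as identity."""
    for block, g in zip(blocks, gates):
        if g:
            x = x + block(x)
    return x
```

The residual formulation is what makes skipping safe: a deactivated block simply contributes nothing to the output.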
More generally, dynamic routing in a \emph{SuperNet} can also be controlled by a policy network.
One possible limitation of this scheme is that the architectures and the training process of some policy networks are developed for a specific backbone, and may not be easily adapted to different architectures.
\vspace{-1ex}
\label{inference_gating}
\vspace{-0.25ex}
Gating functions are a general and flexible approach to decision making in dynamic networks. They can be conveniently adopted as plug-in modules at arbitrary locations in any backbone network.
During inference, each module is responsible for controlling the local inference graph of a layer or block. The gating functions take in intermediate features and efficiently produce binary-valued gate vectors to decide: 1) which channels need to be activated (\emph{width}), 2) which layers need to be skipped, 3) which paths should be selected in a SuperNet, or 4) which locations of the input should be allocated computation.
Compared to the aforementioned decision policies, the gating functions demonstrate notable generality and applicability. However, due to their lack of differentiability, these gating functions usually need specific training techniques, which will be introduced in the following Sec. 
\\ref{training}.\n\\vspace{-2ex}", "id": "6e023be0-dfce-4cb5-b69e-f84dd0c01501", "level": "subsubsection", "origin_cites_number": 14, "parent_id": "8d1ee291-ab1d-4e7f-8c16-431c07d68081", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "Inference and Training" ], [ "subsection", "Decision Making of Dynamic Networks" ], [ "subsubsection", "Gating Functions" ] ], "subsections": [], "title": "Gating Functions" }, { "cite_extract_rate": 0.581196581196581, "cites": [ 779, 725, 722, 7273, 683, 770, 771, 776, 7257, 778, 741, 7294, 7283, 7267, 674, 8381, 7290, 772, 746, 7285, 150, 8382, 760, 7281, 7289, 7272, 761, 747, 752, 757, 7288, 8383, 764, 750, 727, 743, 765, 748, 762, 8380, 777, 775, 769, 686, 680, 767, 7295, 7291, 763, 773, 744, 774, 7286, 7284, 7282, 7278, 749, 7215, 7292, 726, 766, 768, 676, 7293, 738, 7274, 780, 7287 ], "content": "\\vspace{-0.5ex}\n\\label{training}\nBesides architecture design, training is also essential for dynamic networks. Here we summarize the existing training strategies for dynamic models from the perspectives of objectives and optimization.\n\\begin{table*}\n \\scriptsize\n \\vspace{-2ex}\n \\caption{Applications of Dynamic Networks. 
For the type column, Sa, Sp and Te stand for sample-wise, spatial-wise and temporal-wise respectively.}\n \\label{tab_tasks}\n \\begin{center}\n \\vspace{-6ex}\n \\resizebox{\\linewidth}{!}{\n \\begin{tabular}{cccc}\n \\toprule\n \\textbf{Fields} & \\textbf{Data} & \\textbf{Type} & \\textbf{Subfields \\& references} \\\\\n \\midrule\n & \\multirow{6}*{\\textbf{Image}} & \\multirow{2}*{Sa} & Object detection (face , facial point , pedestrian , general ) \\\\\n & & & Image segmentation , Super resolution , Style transfer , Coarse-to-fine classification \\\\ \n \\cmidrule{3-4}\n & & \\multirow{3}*{Sa \\& Sp} & Image segmentation , Image-to-image \\\\\n & & & translation , Object detection , Semantic image synthesis , \\\\\n \\multirow{1}*{\\textbf{Computer}} & & & Image denoising , Fine-grained classification Eye tracking , Super resolution \\\\\n \\cmidrule{3-4}\n \\multirow{1}*{\\textbf{Vision}}& & Sa \\& Sp \\& Te & General classification , Multi-object classification , Fine-grained classification \\\\\n \\cmidrule{2-4}\n & \\multirow{4}*{\\textbf{Video}} & Sa & Multi-task learning (human action recognition and frame prediction) \\\\\n \\cmidrule{3-4}\n & & \\multirow{2}*{Sa \\& Te} & Classification (action recognition) , Semantic segmentation \\\\\n & & & Video face recognition , Action detection , Action spotting \\\\\n \\cmidrule{3-4}\n & & Sa \\& Sp \\& Te & Classification , Frame interpolation , Super resolution , Video deblurring , Action prediction \\\\\n \\cmidrule{2-4}\n & \\textbf{Point Cloud} & Sa \\& Sp & 3D Shape classification and segmentation, 3D scene segmentation , 3D semantic scene completion \\\\\n \\midrule\n \\multirow{2}*{\\textbf{Natural}} & \\multirow{3}*{\\textbf{Text}} & Sa & Neural language inference, Text classification, Paraphrase similarity matching, and Sentiment analysis \\\\\n \\cmidrule{3-4}\n \\textbf{Language} & & \\multirow{2}*{Sa \\& Te} & Language modeling , Machine translation , Classification , \\\\\n \\textbf{Processing} & 
 & & Sentiment analysis , Question answering \\
 \midrule
 \textbf{Cross-Field} & \multicolumn{3}{c}{Image captioning , {Video captioning , Visual question answering , Multi-modal sentiment analysis }} \\
 \midrule
 \multirow{2}{*}{\textbf{Others}} & \multicolumn{3}{c}{{Time series forecasting} , Link prediction , {Recommendation system} }\\
 & \multicolumn{3}{c}{Graph classification , {Document classification} , Stereo confidence estimation }\\
 \bottomrule
 \end{tabular}}
 \end{center}
 \vskip -0.3in
\end{table*}
\vspace{-1ex}
\label{sec_train_objectives}
\vspace{-0.25ex}
\noindent\textbf{1) Training multi-exit networks.} We first notice that early-exiting dynamic networks are generally trained by minimizing a weighted cumulative loss over the intermediate classifiers. One challenge for training such models is the joint optimization of multiple classifiers, which may interfere with each other. MSDNet alleviates this problem through its special architecture design. Several improved training techniques have been proposed for multi-exit networks, including a gradient equilibrium algorithm to stabilize the training process, and a bi-directional knowledge transfer approach to boost the collaboration of classifiers. 
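The weighted cumulative objective for multi-exit networks can be sketched as follows; the uniform default weighting is an illustrative assumption (in practice, the per-exit weights are a design choice).

```python
def multi_exit_loss(exit_losses, weights=None):
    """Weighted cumulative loss over the intermediate classifiers.

    exit_losses[k] is the task loss at the k-th exit; by default all
    exits are weighted equally (an illustrative choice).
    """
    if weights is None:
        weights = [1.0] * len(exit_losses)
    assert len(weights) == len(exit_losses)
    return sum(w * l for w, l in zip(weights, exit_losses))
```
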
{For temporal-wise early exiting, the training of the policy network in FrameExit is supervised by pseudo labels.}
\noindent\textbf{2) Encouraging sparsity.} Many dynamic networks adapt their inference procedure by conditionally activating their computational units or strategically sampling locations from the input.
Training these models without additional constraints would result in superfluous computational redundancy, as a network would tend to activate all the candidate units to minimize the task-specific loss.
The overall objective function for restraining such redundancy is typically written as $\mathfrak{L}\! =\!\mathfrak{L}_\mathrm{task}\!+\!\gamma \mathfrak{L}_\mathrm{sparse}$, where $\gamma$ is a hyper-parameter balancing the two terms for the trade-off between accuracy and efficiency. In real-world applications, the second term can be designed based on the gate/mask values of candidate units (e.g. channels, layers or spatial locations). Specifically, one may set a target activation rate or minimize the $\mathcal{L}_1$ norm of the gates/masks. It is also practical to directly optimize a resource-aware loss (e.g. FLOPs), which can be estimated from the input and output feature dimensions of every candidate unit.
\noindent{\textbf{3) Others.} Note that extra loss items are mostly designed for, but not limited to, improving efficiency. Take as an example, the model progressively focuses on a selected region, and is trained with an additional \emph{inter-scale pairwise ranking loss} for proposing more discriminative regions. 
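The sparsity objective above, $\mathfrak{L}=\mathfrak{L}_\mathrm{task}+\gamma \mathfrak{L}_\mathrm{sparse}$, can be sketched as follows; the squared deviation from a target activation rate and the plain mean of the gate values are two illustrative choices for $\mathfrak{L}_\mathrm{sparse}$.

```python
def sparsity_loss(gates, target_rate=None):
    """Regularizer on gate values in [0, 1].

    With a target rate: squared deviation of the mean activation from
    the target. Otherwise: the mean of the gates (gates are
    non-negative, so this equals the L1 norm divided by n).
    """
    mean_act = sum(gates) / len(gates)
    if target_rate is not None:
        return (mean_act - target_rate) ** 2
    return mean_act

def total_loss(task_loss, gates, gamma=0.1, target_rate=0.5):
    """L = L_task + gamma * L_sparse, matching the objective above."""
    return task_loss + gamma * sparsity_loss(gates, target_rate)
```
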
Moreover, knowledge distillation is utilized to boost the co-training of multiple sub-networks in and .}
\vspace{-1ex}
\vspace{-0.25ex}
A variety of dynamic networks contain non-differentiable functions that make discrete decisions to modify their architectures or to sample spatial/temporal locations from the input. These functions cannot be trained directly with back-propagation. Therefore, specific techniques have been studied to enable end-to-end training, as follows.
\noindent\textbf{1) Gradient estimation} is proposed to approximate the gradients of those non-differentiable functions and enable back-propagation. In , the straight-through estimator (STE) is exploited to heuristically copy the gradient with respect to the stochastic output directly as an estimator of the gradient with respect to the \emph{Sigmoid} argument.
\noindent\textbf{2) Reparameterization} is also a popular technique for optimizing discrete decision functions. For instance, the gating functions controlling the network width or depth can both be trained with \emph{Gumbel SoftMax}, which is also used for pixel-level dynamic convolution. 
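As a concrete illustration of the reparameterization idea, a Gumbel-SoftMax sample can be drawn as below. This is a framework-free sketch: in a real implementation the soft sample stays differentiable, and the hard (one-hot) variant propagates gradients through the soft sample via the straight-through trick, which plain Python cannot express and is therefore only noted in comments.

```python
import math
import random

def gumbel_softmax(logits, tau=1.0, hard=False, rng=random):
    """Sample from a categorical distribution via Gumbel-SoftMax.

    tau is the temperature; smaller values push the soft sample
    closer to one-hot. With hard=True, the forward pass returns a
    one-hot vector (an autodiff framework would route the gradient
    through the soft sample -- the straight-through trick).
    """
    # Gumbel(0, 1) noise: g = -log(-log(u)), u ~ Uniform(0, 1).
    gumbels = [-math.log(-math.log(rng.random() + 1e-20) + 1e-20)
               for _ in logits]
    y = [(l + g) / tau for l, g in zip(logits, gumbels)]
    # Numerically stable softmax over the perturbed logits.
    m = max(y)
    exps = [math.exp(v - m) for v in y]
    s = sum(exps)
    soft = [e / s for e in exps]
    if hard:
        k = soft.index(max(soft))
        return [1.0 if i == k else 0.0 for i in range(len(soft))]
    return soft
```

At low temperatures the soft sample approaches a one-hot vector, which is why the temperature is among the extra hyper-parameters discussed below.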
An alternative technique is \emph{Improved SemHash}, adopted in and to train their hard gating modules.
{Note that although these reparameterization techniques enable jointly optimizing dynamic models together with their gating modules in an end-to-end fashion, they usually lead to a longer training process before the decision functions converge to a stable state. Moreover, the model performance might be sensitive to some extra hyper-parameters (e.g. the temperature in \emph{Gumbel SoftMax}), which might also increase the training cost of these dynamic networks.}
\noindent\textbf{3) Reinforcement learning (RL)} is widely exploited for training non-differentiable decision functions. Specifically, the backbones are trained by standard SGD, while the agents (either policy networks in Sec. \ref{inference_policy} or gating functions in Sec. \ref{inference_gating}) are trained with RL to take discrete actions for dynamic inference graphs or spatial/temporal sampling strategies.
{One challenge for RL-based training is the design of reward functions, which is important to the accuracy-efficiency trade-off of dynamic models. Commonly seen reward signals are usually constructed to minimize a penalty term on the computational cost. 
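A reward of this commonly seen form can be sketched as follows; the binary task-success term and the linear cost penalty are illustrative choices, not the reward of any particular method.

```python
def efficiency_reward(correct, cost_used, cost_total, penalty=0.5):
    """Reward for an RL-trained decision agent: task success minus a
    penalty proportional to the fraction of computation spent."""
    success = 1.0 if correct else 0.0
    return success - penalty * (cost_used / cost_total)
```

The penalty coefficient directly controls the accuracy-efficiency trade-off: a larger value drives the agent toward cheaper inference paths.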
Moreover, the training could be costly due to a multi-stage procedure: a pre-training process may be required for the backbone networks before the optimization of decision or sampling modules, and a final joint fine-tuning stage may be indispensable.}
\vspace{-2ex}
\label{sec_tasks}
\vspace{-0.25ex}
In this section, we summarize the applications of dynamic networks. Representative methods are listed in Table \ref{tab_tasks} based on the input data modality.
For image recognition, most dynamic CNNs are designed to conduct \emph{sample-wise} or \emph{spatial-wise} adaptive inference on classification tasks, and many inference paradigms can be generalized to other applications. Note that as mentioned in Sec. \ref{region_level}, object recognition can be formulated as a sequential decision problem. By allowing early exiting in these approaches, a \emph{temporally} adaptive inference procedure can also be enabled.
For text data, reducing the intrinsic temporal redundancy has attracted great research interest, and the inference paradigm of \emph{temporal-wise} dynamic RNNs (see Sec. \ref{temporal_text}) is also general enough to process audio. 
Based on large language models such as Transformer and BERT, adaptive depths have been extensively studied to reduce redundant computation in network architectures.
For video-related tasks, the three types of dynamic inference can be implemented simultaneously. However, most networks that do not process videos recurrently, e.g. 3D CNNs, still follow a static inference scheme. Little research has been devoted to building dynamic 3D CNNs, which might be an interesting future research direction.
{Moreover, dynamic networks (especially the attention mechanism) have also been applied to dynamically fuse the features from different modalities in multi-modal learning tasks, e.g. RGB-D image segmentation and image/video captioning.}
Finally, dynamic networks have also been exploited to tackle some fundamental problems in deep learning. For example, multi-exit models can be used to: 1) alleviate the \emph{over-thinking} issue while reducing the overall computation; 2) perform \emph{long-tailed classification} by inducing early exiting in the training stage; and 3) improve model \emph{robustness}. 
For another example, the idea of dynamic routing has been implemented for: 1) {reducing the \emph{training cost} under a multi-task setting} and 2) finding the optimal fine-tuning strategy for each sample in \emph{transfer learning}.
\vspace{-2ex}
}
\label{sec:discussion}
\vspace{-0.5ex}
{Though recent years have witnessed significant progress in the research of dynamic neural networks, there still exist many open problems that are worth exploring.} In this section, we summarize a few challenges together with possible future directions in this field.
\vspace{-1ex}
\vspace{-0.5ex}
Despite the success of dynamic neural networks, relatively little research has been devoted to analyzing them from a theoretical perspective. In fact, theories for a deep understanding of current dynamic learning models, and for further improving them in principled ways, are highly valuable. 
{Notably, it has been proven that a dynamic network with an adaptive width can preserve the representation power of an unsparsified model.} However, there are more theoretical problems that are fundamental to dynamic networks. Here we list several of them as follows.
\noindent\textbf{1) Optimal decision in dynamic networks.}
An essential operation in most dynamic networks (especially those designed for improving computational efficiency) is making data-dependent decisions, e.g., determining whether a module should be evaluated or skipped. Existing solutions either use confidence-based criteria, or introduce policy networks and gating functions. Although effective in practice (as mentioned in Sec. \ref{inference_and_train}), they may not be optimal and lack theoretical justifications. Taking early exiting as an example, current heuristic methods might face issues of overconfidence, high sensitivity to threshold settings, and poor transferability. As for policy networks or gating modules, runtime decisions can be made based on a learned function.
However, they often introduce extra computation, and usually require a long and unstable training procedure. Therefore, principled approaches with theoretical guarantees for designing decision functions in dynamic networks are a valuable research topic.
\noindent\textbf{2) Generalization issues.}
In a dynamic model, a sub-network might be activated for a set of test samples that are not uniformly sampled from the data distribution, e.g., smaller sub-networks tend to handle ``easy'' samples, while larger sub-networks are used for ``hard'' inputs. This brings a divergence between the training data distribution and that of the inference stage, and thus violates the common \emph{i.i.d.} assumption in classical machine learning. Therefore, it would be interesting to develop new theories to analyze the generalization properties of dynamic networks under such distribution mismatch. 
Note that transfer learning also aims to address the issue of distributional shift at test time, but the samples of the target domain are assumed to be accessible in advance. In contrast, for dynamic models, the test distribution is not available until the training process is finished, when the network architecture and parameters are finalized. This poses greater challenges than analyzing the generalization issues in transfer learning.
Architecture design has been proven to be essential for deep networks. Existing research on architectural innovations is mainly devoted to static models, while relatively little is dedicated to developing architectures specially for dynamic networks. It is expected that architectures developed specifically for dynamic networks may further improve their effectiveness and efficiency. For example, the interference among multiple classifiers in an early-exiting network could be mitigated by a carefully designed multi-scale architecture with dense connections.
Possible research directions include designing dynamic network structures either by hand (as in ), or by leveraging NAS techniques (as in ). 
Moreover, considering the popularity of Transformers , {recent work has proposed dynamic vision Transformers with adaptive early exiting or token sparsification .} Developing a dynamic version of this family of models could also be an interesting direction.\n{Note that the research on dynamic networks differs from a seemingly close topic, i.e. model compression . One common goal of them is improving the network efficiency with minimal accuracy drop. However, model compression may focus on reducing the \\emph{size} of deep networks, while dynamic networks pay more attention to the \\emph{computation}, even at the price of slightly \\emph{increasing} model size . Moreover, model compression typically adopts pruning or quantization techniques to produce compact \\emph{static} models, which treat all the inputs in the same way. In contrast, dynamic networks perform data-dependent computation on different inputs, which can effectively reduce the intrinsic redundancy in static models.}\n\\vspace{-1ex}", "id": "9458c790-c005-432b-a25a-958ab55678a8", "level": "subsection", "origin_cites_number": 17, "parent_id": "e9013abd-3d22-4eab-a882-4ba5e86719f2", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "{Challenges and Future Directions" ], [ "subsection", "Architecture Design for Dynamic Networks" ] ], "subsections": [], "title": "Architecture Design for Dynamic Networks" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 7281, 675, 7268, 674, 676 ], "content": "\\vspace{-0.5ex}\nMany existing dynamic networks (e.g., most of the sample-wise adaptive networks) are designed specially for classification tasks, and cannot be applied to other vision tasks such as object detection and semantic segmentation. The difficulty arises from the fact that for these tasks there is no simple criterion to assert whether an input image is easy or hard, as it usually contains multiple objects and pixels that have different levels of difficulty. 
Although many efforts, e.g., spatially adaptive models and soft attention based models , have been made to address this issue, it remains a challenging problem to develop a unified and elegant dynamic network that can serve as an off-the-shelf backbone for a variety of tasks.\n\\vspace{-1.5ex}", "id": "baaa0a8b-88d7-44dc-8c2b-1c3a26c2c457", "level": "subsection", "origin_cites_number": 6, "parent_id": "e9013abd-3d22-4eab-a882-4ba5e86719f2", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "{Challenges and Future Directions" ], [ "subsection", "Applicability for More Diverse Tasks" ] ], "subsections": [], "title": "Applicability for More Diverse Tasks" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 721, 788, 7281 ], "content": "\\label{discuss_implement}\n\\vspace{-0.5ex}\nThe current deep learning hardware and libraries are mostly optimized for static models, and they may not be friendly to dynamic networks. Therefore, we usually observe that the practical runtime of dynamic models lags behind the theoretical efficiency. For example, some spatially adaptive networks involve sparse computation, {which is known to be inefficient on modern computing devices due to the memory access bottleneck . A recent line of work focuses on the codesign of algorithm and hardware for accelerating deep models on platforms with more flexibility such as FPGA . Many input-dependent operations, including pixel-level dynamic computation , adaptive channel pruning and early exiting , have also been tailored together with hardware for further improving their practical efficiency. 
It is an interesting research direction to simultaneously optimize the algorithm, hardware and deep learning libraries to harvest the theoretical efficiency gains of dynamic networks.}
{In addition, a data-dependent inference procedure, especially for dynamic \emph{architectures}, usually requires a model to handle input samples sequentially, which also poses a challenge to parallel computation. Although inference with batches has been enabled for early-exiting networks, the conflict between adaptive computational graphs and parallel computation still exists for other types of dynamic architectures. This issue is mitigated in the scenario of mobile/edge computing, where the input signal is by itself sequential and the computing hardware is less powerful than high-end platforms. However, designing dynamic networks that are more compatible with existing hardware and software remains a valuable and challenging topic.}
\vspace{-1.5ex}
\vspace{-0.5ex}
Dynamic models may provide new perspectives for the research on adversarial robustness of deep neural networks. {For example, recent work has leveraged the multi-exit structure to improve robustness against adversarial attacks. Moreover, traditional attacks are usually aimed at causing \emph{misclassification}. For dynamic networks, it is possible to launch attacks on \emph{efficiency}. 
Specifically, by adjusting the objective function of the adversarial attack, input-adaptive models could be fooled into activating all their intermediate layers, or into yielding confusing predictions at early exits even for ``easy'' samples. It has also been observed that the commonly used adversarial training is not effective in defending against such attacks.} The robustness of dynamic networks is an interesting yet understudied topic.
\vspace{-1.5ex}
\vspace{-0.5ex}
Dynamic networks inherit the black-box nature of deep learning models, and thus also invite research on interpreting their working mechanisms. What is special here is that the adaptive inference paradigm, e.g., spatial/temporal adaptiveness, conforms well with that of the human visual system, and may provide new possibilities for making models more transparent to humans. In a dynamic network, it is usually convenient to analyze which part of the model is activated for a given input, or to locate which part of the input the model mostly relies on when making its prediction. 
{It is expected that the research on dynamic networks will inspire new work on the interpretability of deep learning.}\n\\vspace{-2ex}\n\\appendices\n\\ifCLASSOPTIONcompsoc\n \\section*{Acknowledgments}\n\\else\n \\section*{Acknowledgment}\n\\fi\nThis work is supported in part by the National Science and Technology Major Project of the Ministry of Science and Technology of China under Grants 2018AAA0100701, the National Natural Science Foundation of China under Grants 61906106 and 62022048, the Institute for Guo Qiang of Tsinghua University and Beijing Academy of Artificial Intelligence.\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\\small{\n \\bibliography{reference}\n}\n\\end{document}", "id": "80ebe8d1-1699-4384-9941-1c27455ceb08", "level": "subsection", "origin_cites_number": 0, "parent_id": "e9013abd-3d22-4eab-a882-4ba5e86719f2", "prefix_titles": [ [ "title", "Dynamic Neural Networks: A Survey" ], [ "section", "{Challenges and Future Directions" ], [ "subsection", "Interpretability" ] ], "subsections": [], "title": "Interpretability" } ]
93
[ 71, 96, 679, 681, 687, 544, 305, 682, 683, 675, 38, 504, 97, 686, 684, 680, 677, 676, 7266, 8372, 7268, 7, 678, 7267, 674, 685, 306, 688, 698, 695, 7273, 693, 690, 691, 7269, 8373, 7270, 697, 692, 694, 696, 7272, 7271, 689, 706, 705, 707, 8375, 8374, 710, 712, 713, 709, 699, 711, 703, 702, 700, 701, 708, 704, 715, 714, 719, 720, 718, 7274, 716, 717, 8376, 721, 7276, 7275, 722, 726, 7277, 724, 7215, 727, 725, 8377, 723, 736, 730, 729, 7278, 8378, 734, 8379, 733, 731, 7040, 728, 735, 732, 737, 738, 7279, 7280, 739, 7281, 7257, 740, 741, 7282, 742, 743, 531, 747, 745, 744, 746, 748, 7284, 750, 7283, 749, 7285, 7286, 755, 752, 751, 754, 8380, 753, 8381, 756, 6975, 8382, 757, 7288, 7287, 758, 759, 779, 770, 771, 776, 778, 7294, 7290, 772, 150, 760, 7289, 761, 8383, 764, 765, 762, 777, 775, 769, 767, 7295, 7291, 763, 773, 774, 7292, 766, 768, 7293, 780, 781, 782, 785, 786, 783, 7296, 784, 7297, 787, 788, 789 ]
1.199999
[ "Deqiang Li", "Qianmu Li", "Yanfang Ye", "Shouhuai Xu" ]
Arms Race in Adversarial Malware Detection: A Survey
2020
2020-05-24T07:20:42Z
cs.CR
Malicious software (malware) is a major cyber threat that has to be tackled with Machine Learning (ML) techniques because millions of new malware examples are injected into cyberspace on a daily basis. However, ML is vulnerable to attacks known as adversarial examples. In this paper, we survey and systematize the field of Adversarial Malware Detection (AMD) through the lens of a unified conceptual framework of assumptions, attacks, defenses, and security properties. This not only leads us to map attacks and defenses to partial order structures, but also allows us to clearly describe the attack-defense arms race in the AMD context. We draw a number of insights, including: knowing the defender's feature set is critical to the success of transfer attacks; the effectiveness of practical evasion attacks largely depends on the attacker's freedom in conducting manipulations in the problem space; knowing the attacker's manipulation set is critical to the defender's success; the effectiveness of adversarial training depends on the defender's capability in identifying the most powerful attack. We also discuss a number of future research directions.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "4fe25043-06f8-4577-bd62-3fe42c7ab899", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ] ], "subsections": [ "74b358c3-1042-4f31-a16f-380254516292", "a1283bee-37c1-4371-bed3-ceeb8af64d75", "e26ef155-9b9c-4a92-9ffa-e334fbe6ea87", "965da080-653e-47f5-949b-de42288cf6a5", "82494750-c0aa-44ab-add1-1d1d59481f4a" ], "title": "root" }, { "cite_extract_rate": 0.32692307692307604, "cites": [ 314, 7755, 7337, 893, 3856, 3857, 937, 3853, 3854, 7754, 7153, 3855, 3861, 7154, 3860, 3859, 3858 ], "content": "\\label{sec:intro}\nMalware (malicious software) is a big cyber threat and has received a due amount of attention. For instance, Kaspersky reports that 21,643,946 unique malicious files were detected in the year 2018, 24,610,126 in 2019, and 33,412,568 in 2020 .\nA popular defense against malware is to use {\\em signature-based} detectors , where a signature is often extracted by malware analysts from known malware examples. This approach has two drawbacks: signatures are tedious to extract and can be evaded by a range of techniques (e.g., encryption, repacking, polymorphism ). This incompetence has motivated the use of Machine Learning (ML) based malware detectors, which can be automated to some degree and can possibly detect \\emph{new} malware examples (via model generalization or knowledge adaptation ). More recently, Deep Learning (DL) has been used for malware detection (see, e.g., ). \nWhile promising, ML-based malware detectors are vulnerable to attacks known as {\\em adversarial examples} . \nThere are two kinds of attacks. One is {\\em evasion attack}, where the attacker perturbs test examples to {\\em adversarial examples} to evade malware detectors . The other is {\\em poisoning attack}, where the attacker manipulates the training dataset for learning malware detectors . 
These attacks usher in the new field of Adversarial Malware Detection (AMD) . \nThe state-of-the-art in AMD is that there are some specific results scattered in the literature but there is no systematic understanding.\nThis is true despite the fact that there have been attempts at systematizing the related field of Adversarial Machine Learning (AML) , which, however, cannot be automatically translated to AMD. This is so because malware detection \nhas three {\\em unique} characteristics which are not exhibited by the other application domains (e.g., image or audio processing). (i) There are no common, standard feature definitions because both attackers and defenders can define their own features to represent computer files. As a consequence, attackers can leverage this ``freedom'' in feature definition to craft adversarial examples. (ii) Malware features are often discrete rather than continuous and program files are often highly structured with multiple modalities. This means that arbitrarily perturbing malware files or their feature representations might make the perturbed files no longer executable. \nThis also means that the discrete domain makes perturbation a {\\em non-differentiable} and {\\em non-convex} task. \n(iii) Any meaningful perturbation to a malware example or its feature representation must preserve its malicious functionality. For example, the Android Package Kit (APK) requires that the used permissions are publicized in the AndroidManifest.xml, meaning that removing permissions in this manifest file would incur a runtime error. The preceding (ii) and (iii) make both the attacker's and defender's tasks more challenging than their \ncounterparts where small perturbations are not noticeable (e.g., images).\n\\noindent{\\bf Our Contributions}. We propose a conceptual framework for systematizing the AMD field through the lens of {\\em assumptions}, {\\em attacks}, {\\em defenses}, and {\\em security properties}. 
In specifying these, we seek rigorous definitions whenever possible, while noting that these definitions have been scattered in the literature. Rigorous definitions are important because they can serve as a common reference for future studies. The framework allows us to map the known attacks and defenses into some partial order structures and systematize the AMD attack-defense arms race. \nWe make a number of observations, including: (i) the indiscriminate attack that treats malicious examples as equally important has been extensively investigated, but targeted and availability attacks are much less investigated; (ii) the evasion attack is much more extensively studied than the poisoning attack; (iii) there is no silver-bullet defense against evasion and poisoning attacks; (iv) sanitizing examples is effective against black-box and grey-box attacks, but not white-box attacks; (v) AMD security properties have been evaluated empirically rather than rigorously; (vi) there is no theoretical evidence to support that the effectiveness of defense techniques on the training set can generalize to other adversarial examples.\nWe draw a number of insights, including: (i) knowing defender's feature set is critical to the success of transfer attacks, highlighting the importance of keeping defender's feature set secret (e.g., by randomizing defender's feature set);\n(ii) the effectiveness of practical evasion attacks largely depends on the attacker's degree of freedom in conducting manipulations in the problem space (i.e., a small degree of freedom means harder to succeed); (iii) effective defenses often require the defender to know the attacker's manipulation set, explaining from one perspective why it is hard to design effective defenses;\n(iv) effectiveness of adversarial training depends on the defender's capability in identifying the most powerful attack.\nFinally, we discuss a number of future research directions, which hopefully will inspire and encourage many researchers 
to explore them.\n\\smallskip\n\\noindent{\\bf Related Work}.\nThe closely related prior work is Maiorca et al. , which surveys previous studies in adversarial malicious PDF document detection. In contrast, we consider the broader context of AMD and propose novel partial orders to accommodate AMD assumptions, attacks, defenses, and properties.\nThere are also loosely related prior studies that survey AML (but do not focus on AMD), including .\nFor example, Yuan et al. survey attack methods for generating adversarial examples, while briefly discussing evasion attacks in the AMD context; \nBarreno et al. propose a taxonomy of AML attacks\n(causative vs. exploratory attacks, integrity vs. availability attacks, and targeted vs. indiscriminate attacks); Biggio et al. propose a defense framework for protecting \nSupport Vector Machines (SVMs) from evasion attacks, poisoning attacks and privacy violations; Papernot et al. systematize AML security and privacy with emphasis on demonstrating the trade-off between detection accuracy and robustness.\n\\smallskip\n\\noindent{\\bf Paper Outline}.\nSection \\ref{sec:fw} describes our survey and systematization methodology and framework.\nSection \\ref{sec:systematizstion-AMD-studies} applies our framework to systematize the AMD studies in the literature.\nSection \\ref{sec:cd} discusses future research directions. 
\nSection \\ref{sec:conc} concludes the paper.", "id": "74b358c3-1042-4f31-a16f-380254516292", "level": "section", "origin_cites_number": 52, "parent_id": "4fe25043-06f8-4577-bd62-3fe42c7ab899", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:fw}\n\\begin{table}[htbp!]\n\t\\centering\\caption{Main notations used in the paper \\label{table:notations}}\n\t\\begin{adjustbox}{width=0.99\\textwidth}\n\t\\begin{tabular}{l|p{.8\\textwidth}}\n\t\t\\hline\n\t\tNotation & Meaning\\\\\\hline\n\t\t$\\mathbb R$ ($\\mathbb R_{+}$) & the set of (positive) real numbers \\\\\n\t\t$\\mathcal{A},\\mathcal{I}$ & attacker and defender (treated as algorithms)\\\\\n\t\t$\\mathbb{P}$ & the probability function \\\\\n\t\t$z, z'\\in\\mathcal{Z}$ & $\\mathcal{Z}$ is example space; $z'$ is obtained by perturbing $z$\\\\\n\t\t$S$ & defender $\\mathcal{I}$'s feature set for representing files \\\\\n\t\t${\\mathbf x}, {\\mathbf x}'\\in \\mathcal X$ & $\\mathcal X=\\mathbb{R}^d$ is $d$-dimensional feature space; ${\\mathbf x},{\\mathbf x}' \\in \\mathcal{X}$ are respectively feature representations of $z,z'\\in \\mathcal{Z}$ \\\\\n\t\t$\\mathcal{Y}$, $y$ & $\\mathcal{Y}$ is the label space of binary classification, $\\mathcal{Y}=\\{+/1,-/0\\}$; $y\\in \\mathcal{Y}$ \\\\\n\t\t$\\mathcal{D}=\\mathcal Z \\times \\mathcal Y$ & the file-label (i.e., example-label) space \\\\\n\t\t$D_{train}\\subset \\mathcal D,n$ & the training set in file-label space; $n=|D_{train}|$ \\\\\n\t\t$D_{test}$ & the test set in file-label space \\\\\n\t\t$D_{poison},D'_{poison}$& $D'_{poison}$ is set of adversarial file-label pairs obtained by perturbing non-adversarial files in $D_{poison}\\subset\\mathcal{D}$ \\\\\n\t\t$\\mathcal{O}(z,z')$ & $\\mathcal{O}(z,z'):\\mathcal{Z}\\times\\mathcal{Z}\\to\\{{\\tt true}, {\\tt false}\\}$ is an oracle 
telling if two files have the same functionality or not \\\\\n\t\t$\\delta$ & a manipulation for perturbing files {\\em with} preserving their functionalities \\\\\n\t\t$\\mathcal{M}$, $\\mathcal{Z}_\\mathcal{M}\\subseteq \\mathcal{Z}$ & $\\mathcal{M}$ is manipulation set in the problem space;\n\t\t$\\mathcal{Z}_\\mathcal{M}$ is set of adversarial files generated using $\\mathcal{M}$\\\\\n\t\t$\\mathbf{M}$, $\\mathcal{X}_\\mathbf{M}\\subseteq \\mathcal{X}$ & $\\mathbf{M}$ is feature manipulation set;\n\t\t$\\mathcal{X}_\\mathbf{M} $ is set of adversarial feature vectors generated using $\\mathbf{M}$ \\\\\n\t\t$\\Gamma(z,z')$ & $\\Gamma(z,z'):\\mathcal Z \\times \\mathcal{Z} \\to \\mathbb R_+$ measures the degree of manipulation for perturbing $z \\in \\mathcal{Z}$ into $z'\\in \\mathcal{Z}$ \\\\\n\t\t$C(\\mathbf{x},\\mathbf{x}')$& $C(\\mathbf{x},\\mathbf{x}'):\\mathcal X \\times \\mathcal X \\to \\mathbb R_+$ is the function measuring the cost incurred by changing feature vector $\\mathbf{x}$ to $\\mathbf{x}'$ \\\\\n\t\t$\\delta_z \\in \\mathcal{M}$ & $\\delta_z$ is a set of manipulations of $z$ w.r.t. $z'$ \\\\\n\t\t$\\delta_{\\mathbf{x}} \\in \\mathbf{M}$ & $\\delta_{\\mathbf{x}}={\\mathbf x}' - {\\mathbf x}$ is a perturbation vector of $\\mathbf{x}$ w.r.t. 
$\\mathbf{x}'$\\\\\n\t\t$\\phi:\\mathcal Z \\to \\mathcal{X} $ & feature extraction function; $\\mathbf{x}\\leftarrow\\phi(z)$, $\\mathbf{x}'\\leftarrow\\phi(z')$ \\\\\n\t\t$\\varphi,f$ & $\\varphi:\\mathcal{X} \\to \\mathbb{R}$ is classification function; $f:\\mathcal Z \\to \\mathbb{R}$ is classifier $f=\\varphi(\\phi(\\cdot))$; by abusing notation a little bit, we also use ``$+\\leftarrow f(z)$'' to mean that $f$ predicts $z$ as malicious when\n\t\t$f(z)\\geq \\tau$ for a threshold $\\tau$\\\\\n\t\t$F_{\\theta}:\\mathcal X \\to \\mathbb R$& machine learning algorithm \n\t\twith parameters $\\theta$ \\\\\n\t\t$L:\\mathbb{R}\n\t\t\\times\\mathcal{Y}\\to\\mathbb{R}$ & loss function measuring prediction error of $F_\\theta$ \\\\\n\t\t${\\sf EL},{\\sf WR},{\\sf AT}$ & defense techniques: Ensemble Learning, Weight Regularization, Adversarial Training \\\\\n\t\t${\\sf VL},{\\sf RF},{\\sf IT},{\\sf CD},{\\sf SE}$ & defense techniques: Verifiable Learning, Robust Feature, Input Transformation, Classifier ranDomization, Sanitizing Examples \\\\\n\t\t${\\sf BE},{\\sf OE},{\\sf BP},{\\sf OP}$ & attack tactics: basic and optimal evasion; basic and optimal poisoning\\\\\n\t\t{\\sf GO},{\\sf SF},{\\sf MI} & attack techniques: Gradient-based Optimization, Sensitive Features, MImicry\\\\\n\t\t{\\sf TR},{\\sf HS},{\\sf GM},{\\sf MS} & attack techniques: TRansferability, Heuristic Search, Generative Model, Mixture Strategy \\\\\n\t\t$A_1,\\ldots,A_5$ & the 5 attributes under $\\mathcal{I}$'s control; they are known to $\\mathcal{A}$ at respective degrees\n\t\t$a_1,\\ldots,a_5$ \\\\\n\t\t$A_6,\\ldots,A_9$ & the 4 attributes under $\\mathcal{A}$'s control; they are known to $\\mathcal{I}$ at respective degree $a_6,\\ldots,a_9$ \\\\\n\t\t${\\sf RR},{\\sf CR},{\\sf DR},{\\sf TR}$ & security properties: Representation Robustness, Classification Robustness, Detection Robustness, Training Robustness\\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\end{adjustbox}\n\\end{table}\n\\noindent{\\bf 
Terminology, Scope and Notations}. \nIn the AMD context, a defender $\\mathcal{I}$ aims to use ML to detect or classify computer files as benign or malicious; i.e., we focus on {\\em binary classification}. An attacker $\\mathcal{A}$ attempts to make malicious files evade $\\mathcal{I}$'s detection by leveraging {\\em adversarial files} (interchangeably, {\\em adversarial examples}). Adversarial malware examples are often generated by perturbing or manipulating malware examples, explaining why we will use the two terms, perturbation and manipulation, interchangeably. Adversarial attacks can be waged in the training phase of an ML model (a.k.a., poisoning attack) or in the test phase (a.k.a., evasion attack). It is worth mentioning that the {\\em privacy violation} attack is waged in addition to the preceding two attacks because $\\mathcal{A}$ can always probe $\\mathcal{I}$'s detectors.\nA file, benign and malicious alike, is adversarial if it is intentionally crafted to (help malicious files) evade $\\mathcal{I}$'s detection, and non-adversarial otherwise.\nWe focus on $\\mathcal{I}$ using supervised learning to detect malicious files, which may be adversarial or non-adversarial because they co-exist in the real world with no self-identification. This means that we do not consider the large body of malware detection literature that does not cope with AMD, which has been addressed elsewhere (e.g., ). 
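The difference between the two attack phases can be illustrated with a deliberately tiny numerical sketch; the nearest-centroid "detector" and all numbers below are illustrative assumptions, not a method from the AMD literature:

```python
# Toy 1-D "detector": nearest-centroid over a single feature value.
def train(examples):
    """examples: list of (feature_value, label) pairs, label '+' or '-'."""
    mal = [v for v, y in examples if y == "+"]
    ben = [v for v, y in examples if y == "-"]
    return sum(mal) / len(mal), sum(ben) / len(ben)  # (malicious, benign) centroids

def predict(centroids, v):
    c_mal, c_ben = centroids
    return "+" if abs(v - c_mal) <= abs(v - c_ben) else "-"

D_train = [(9.0, "+"), (8.0, "+"), (1.0, "-"), (2.0, "-")]
f = train(D_train)                        # centroids: (8.5, 1.5)

# Evasion attack (test phase): perturb a malicious TEST example across the boundary.
z = 7.0                                   # malicious file, detected as '+'
z_adv = 4.0                               # perturbed version, closer to the benign centroid

# Poisoning attack (training phase): inject mislabeled TRAINING examples
# so that the malicious centroid drifts and the original file now evades.
D_poison = D_train + [(30.0, "+"), (30.0, "+")]
f_poisoned = train(D_poison)              # malicious centroid dragged to 19.25
```

Here evasion leaves the detector untouched and moves the example, while poisoning leaves the example untouched and moves the detector.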
\nTable \\ref{table:notations} summarizes the main notations used in the paper.", "id": "a1283bee-37c1-4371-bed3-ceeb8af64d75", "level": "section", "origin_cites_number": 2, "parent_id": "4fe25043-06f8-4577-bd62-3fe42c7ab899", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Survey and Systematization Methodology" ] ], "subsections": [ "b7dbff6a-c1f8-49f8-8c03-efdb7ba23041", "8e32af6c-587d-43e1-aabc-dc5d8345628d", "b05fe1bc-da2e-4a87-8f17-d43f1acc81fc" ], "title": "Survey and Systematization Methodology" }, { "cite_extract_rate": 0.277777777777777, "cites": [ 1159, 8565, 318, 166, 3861 ], "content": "\\label{sec:ml}\nLet $\\mathcal Z$ be the {\\em example space} of benign/malicious adversarial/non-adversarial files. Let $\\mathcal{Y}=\\{+,-\\}$ or $\\mathcal{Y}=\\{1,0\\}$ be the {\\em label space} of binary classification, where $+$/$1$ ($-$/$0$) means a file is malicious (benign). Let $\\mathcal D =\\mathcal Z \\times \\mathcal Y$ be the file-label (example-label) space. For training and evaluating a classifier in the absence of adversarial files, $\\mathcal I$ is given a set $D\\subset \\mathcal{D}$ of non-adversarial benign/malicious files as well as their {\\em ground-truth} labels. $\\mathcal I$ splits $D$ into three disjoint sets: a training set $D_{train}=\\{(z_i,y_i)\\}_{i=1}^n$, a validation set for model selection, and a test set for evaluation. \nEach file $z_i\\in \\mathcal{Z}$ is characterized by a set $S$ of features and represented by a numerical vector $\\mathbf{x}_i=(x_{i,1},\\ldots,x_{i,d})$ in the $d$-dimensional {\\em feature space} $\\mathcal X=\\mathbb R^d$, which accommodates both continuous and discrete feature representations .\nThe process for obtaining feature representation $\\mathbf{x}_i$ of $z_i\\in \\mathcal{Z}$ is called {\\it feature extraction}, denoted by a function $\\phi:\\mathcal{Z}\\to \\mathcal{X}$ with ${\\bf x}_i\\leftarrow \\phi(S,z_i)$. 
Because $\\phi$ can be hand-crafted (denoted by $\\phi_c$), automatically learned (denoted by $\\phi_a$), or a hybrid of both , we unify them into $\\phi$ such that $\\phi(S,z)=\\phi_a(\\phi_c(S,z))$; when only manual (automatic) feature extraction is involved, we can set $\\phi_a$ ($\\phi_c$) as the identity map.\nThere are two kinds of features: {\\em static features} are extracted via static analysis (e.g., strings, API calls\n); {\\em dynamic features} are extracted via dynamic analysis (e.g., instructions, registry activities\n).\n\\begin{figure}[!htbp]\n\t\\centering\n\t\\scalebox{0.32}{\n\t\\includegraphics{figures/ml.pdf}\n\t}\n\\caption{Illustration of ML-based malware detector.\n}\n\t\\label{fig:ml}\n\\end{figure}\nAs highlighted in Figure \\ref{fig:ml},\n$\\mathcal{I}$ uses $\\{(z_i,y_i)\\}_{i=1}^n$ to learn a malware detector or classifier $f:\\mathcal{Z}\\to [0,1]$, where $f(z)=\\varphi(\\phi(S,z))$ is composed of feature extraction function $\\phi: \\mathcal{Z}\\to \\mathcal{X}$ and classification function $\\varphi:\\mathcal{X} \\to [0,1]$.\nNote that $f(z)\\in [0,1]$, namely $\\varphi({\\bf x})\\in [0,1]$ with ${\\bf x}\\leftarrow \\phi(S,z)$, can be interpreted as the probability that $z$ is malicious (while noting that calibration may be needed ). For a given threshold $\\tau\\in[0,1]$, we further say (by slightly abusing notations) $z$ is labeled by $f$ as $+$, or $+\\leftarrow f(z)$, if $f(z)\\geq \\tau$, and labeled as $-$ or $-\\leftarrow f(z)$ otherwise. 
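As a concrete (and heavily simplified) sketch of this composition $f=\varphi(\phi(\cdot))$ and the thresholding rule: the feature set, weights, bias, and threshold below are made-up illustrations, not values from any real detector:

```python
import math

# Toy ordered feature set S (illustrative names in the spirit of Drebin features).
S = ["permission::SEND_SMS", "permission::READ_CONTACTS", "api_call::getDeviceID"]

def phi(z):
    """Feature extraction phi: Z -> {0,1}^d; here z is the set of attributes observed in a file."""
    return [1 if s in z else 0 for s in S]

def varphi(x, weights=(2.0, 1.0, 1.5), bias=-2.0):
    """Classification function varphi: X -> [0,1] (logistic over a linear score)."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-score))

def f(z, tau=0.5):
    """Detector f = varphi(phi(.)); label '+' (malicious) iff f(z) >= tau."""
    p = varphi(phi(z))
    return p, ("+" if p >= tau else "-")

p, label = f({"permission::SEND_SMS", "api_call::getDeviceID"})  # high score, labeled '+'
```

Raising or lowering $\tau$ trades false positives against false negatives, exactly as in the thresholding rule above.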
In practice, $f$ is often specified by a learning algorithm $F$ with learnable parameter $\\theta$ (e.g., weights) and a hand-crafted feature extraction $\\phi_c$; then, $\\theta$ is tuned to minimize the empirical risk associated with a loss function $L:[0,1] \\times \\mathcal Y \\to \\mathbb{R}$ measuring the prediction error of $F_\\theta$ (e.g., cross-entropy ), namely\n\\begin{equation}\n\\min \\limits_{\\theta}~\\mathcal{L}(\\theta, D_{train}) = \\min \\limits_{\\theta}~\\frac{1}{n}\\sum_{(z_i, y_i)\\in D_{train}}\\left(L(F_\\theta(\\phi_c(S,z_i)), y_i)\\right). \\label{eq:rsk_emp}\n\\end{equation}\n\\noindent{\\bf Example 1: The Drebin malware detector}. Drebin is an Android malware detector trained from static features . Table \\ref{table:drebin} summarizes Drebin's feature set, which includes 4 subsets \n\\begin{table}[htbp!]\n\t\\centering\\caption{Drebin features \\label{table:drebin}}\n\t\\begin{tabular}{c|p{.32\\textwidth}}\n\t\t\\hline\n\t\t\\multicolumn{2}{c}{Feature set} \\\\\\hline\n\t\t\\multirow{4}{*}{Manifest} \n\t\t& $S_1$~~Hardware components \\\\\n\t\t& $S_2$~~Requested permissions\\\\\n\t\t& $S_3$~~App components \\\\\n\t\t& $S_4$~~Filtered intents \\\\\\hline\n\t\t\\multirow{4}{*}{Dexcode}\n\t\t& $S_5$~~Restricted API calls\\\\\n\t\t& $S_6$~~Used permissions\\\\\n\t\t& $S_7$~~Suspicious API calls\\\\\n\t\t& $S_8$~~Network addresses\\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{table}\nof features $S_1, S_2, S_3, S_4$ extracted from the AndroidManifest.xml and another 4 subsets of features $S_5, S_6, S_7, S_8$ extracted from the disassembled DEX code files (recalling that DEX code is compiled from a program written in some language and can be understood by the Android Runtime). 
Specifically, $S_1$ contains features related to the access of an Android package (APK) to smartphone hardware (e.g., camera, touchscreen, or GPS module); $S_2$ contains features related to the APK's requested permissions listed in the manifest prior to installation; $S_3$ contains features related to application components (e.g., {\\em activities}, {\\em services}, {\\em receivers}, and {\\em providers}); $S_4$ contains features related to the APK's communications with other APKs; $S_5$ contains features related to critical system API calls, which cannot run without appropriate permissions or the {\\em root} privilege; $S_6$ contains features corresponding to the used permissions; $S_7$ contains features related to API calls that can access sensitive data or resources in a smartphone; and $S_8$ contains features related to IP addresses, hostnames and URLs found in the disassembled code.\nThe feature representation is binary, meaning $\\phi=\\phi_c:\\mathcal{Z} \\mapsto \\{0,1\\}^d$ with $|S|=d$ and ${\\bf x}=(x_1,\\ldots,x_d)$, where $x_i=1$ if the corresponding feature is present in the APK $z$ and $x_i=0$ otherwise. 
A file $z$ in the feature space looks like the following: \n\\[\n\\mathbf x = \\phi(z) \\to\n\\begin{pmatrix}\n\\cdots \\\\ 0\\\\ 1\\\\\n\\cdots \\\\ 1\\\\ 0\\\\\n\\cdots\\\\\n\\end{pmatrix}\n\\begin{array}{ll}\n\\cdots & \\multirow{4}{*}{\\hspace{-1mm}\\bigg \\} $S_2$ }\\\\\n\\texttt{\\small permission::SEND\\_SMS} \\\\\n\\texttt{\\small permission::READ\\_CONTACTS}\\\\\n\\cdots & \\multirow{4}{*}{\\hspace{-1mm}\\bigg \\} $S_5$ }\\\\\n\\texttt{\\small api\\_call::getDeviceID}\\\\\n\\texttt{\\small api\\_call::setWifiEnabled}\\\\\n\\cdots & \\\\\n\\end{array}\n\\]\nDrebin uses a linear Support Vector Machine (SVM) to learn classifiers.\n\\smallskip\n\\noindent{\\bf Example 2: The MalConv malware detector}.\nMalConv is a Convolutional Neural Network (CNN)-based Windows Portable Executable (PE) malware detector learned from raw binary programs (i.e., end-to-end detection)\n. \nFigure \\ref{fig:malconv} depicts its architecture.\nThe sequence of binary code is transformed into byte values (between 0 and 255) with the maximum length bounded by $N_{max}$ (e.g., $N_{max}=2^{21}$ bytes or 2MB). Each byte is further mapped into a real-valued vector using the embedding . The CNN layer and pooling layer learn abstract representations. 
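The following NumPy-only sketch mimics this byte-level pipeline (embedding, 1-D convolution, global max-pooling, and a logistic output) at a toy scale; the tiny dimensions and random weights are illustrative assumptions and do not reproduce the actual MalConv, which uses gated convolutions and far larger dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny illustrative dimensions (MalConv itself uses far larger ones).
EMB_DIM, KERNEL, N_FILTERS = 4, 3, 8
embedding = rng.normal(size=(256, EMB_DIM))             # one vector per byte value 0..255
conv_w = rng.normal(size=(N_FILTERS, KERNEL, EMB_DIM))  # 1-D convolution filters
fc_w = rng.normal(size=N_FILTERS)                       # fully-connected layer weights

def malconv_like(raw_bytes):
    """Map a raw byte string to a maliciousness score in [0, 1]."""
    idx = np.frombuffer(raw_bytes, dtype=np.uint8)      # byte values between 0 and 255
    e = embedding[idx]                                  # (L, EMB_DIM): embedded sequence
    windows = range(len(idx) - KERNEL + 1)
    conv = np.array([[float(np.sum(e[i:i + KERNEL] * w)) for i in windows]
                     for w in conv_w])                  # (N_FILTERS, L-KERNEL+1)
    pooled = conv.max(axis=1)                           # global max-pooling over positions
    score = float(pooled @ fc_w)                        # final fully-connected layer
    return 1.0 / (1.0 + np.exp(-score))                 # probability the input is malicious

score = malconv_like(b"MZ\x90\x00 a toy PE-like byte string")
```

Because max-pooling collapses the position axis, the score is invariant to where in the file a matched byte pattern occurs, which is one reason this architecture scales to long inputs.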
The embedding, CNN and pooling layers belong to feature extraction $\\phi_a$, and the fully-connected and softmax layers belong to the classification operation $\\varphi$.\n\\begin{figure}[!htbp]\n\t\\centering\n\t\\includegraphics[width=.7\\textwidth]{figures/malconv.pdf}\n\t\\caption{MalConv architecture .}\n\t\\label{fig:malconv}\n\\end{figure}", "id": "b7dbff6a-c1f8-49f8-8c03-efdb7ba23041", "level": "subsection", "origin_cites_number": 18, "parent_id": "a1283bee-37c1-4371-bed3-ceeb8af64d75", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Survey and Systematization Methodology" ], [ "subsection", "Brief Review on ML-based Malware Detection" ] ], "subsections": [], "title": "Brief Review on ML-based Malware Detection" }, { "cite_extract_rate": 0, "cites": [], "content": "We systematize AMD studies through the lens of four aspects: (i) the assumptions that are made; (ii) the attack or threat model in terms of attacker ${\\mathcal A}$'s {\\em objective} and ${\\mathcal A}$'s {\\em input}, with the latter including ${\\mathcal A}$'s information about the defender $\\mathcal{I}$ and ${\\mathcal A}$'s own; (iii) the defense in terms of $\\mathcal{I}$'s {\\em objective} and $\\mathcal{I}$'s {\\em input}, with the latter including $\\mathcal{I}$'s information about $\\mathcal{A}$ and $\\mathcal{I}$'s own; (iv) the security properties that are at stake. 
These four aspects are respectively elaborated below.", "id": "8e32af6c-587d-43e1-aabc-dc5d8345628d", "level": "subsection", "origin_cites_number": 0, "parent_id": "a1283bee-37c1-4371-bed3-ceeb8af64d75", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Survey and Systematization Methodology" ], [ "subsection", "Framework" ] ], "subsections": [ "18cf934c-a897-4b88-8a38-4cd32c20a2a0", "5f85312b-af7a-44ed-87ac-7e94a51dd8cc", "1e0de86d-a9b2-453b-b977-3d5abab5bd99" ], "title": "Framework" }, { "cite_extract_rate": 0.4375, "cites": [ 318, 3857, 3862, 937, 7312, 9099, 3854 ], "content": "\\label{sec:assumption}\nFive assumptions have been made in the AMD literature. Assumption \\ref{assumption:iid} below says that the data samples in $D$ are Independent and Identically Distributed (IID), which is a strong assumption and researchers have started to weaken it . \n\\begin{assumption}[{\\sf IID} assumption; see, e.g., ]\n\\label{assumption:iid}\nComputer files in training data and testing data are independently drawn from the same distribution.\n\\end{assumption}\nAssumption \\ref{assumption:oracle} below is adapted from AML context, where humans can serve as an oracle $\\mathcal{O}$ for determining whether two images are the same . In the AMD context, $\\mathcal{O}$ can be instantiated as (or approximated by) malware analysts or automated tools (e.g., Sandbox ), with the latter often using heuristic rules produced by malware analysts (e.g., YARA ).\n\\begin{assumption}[{\\sf Oracle} assumption; adapted from ]\n\\label{assumption:oracle}\nThere is an oracle $\\mathcal{O}:\\mathcal{Z}\\times\\mathcal{Z}\\to\\{{\\tt true},{\\tt false}\\}$ that tells if two files $z,z'\\in\\mathcal{Z}$ have the same functionality or not; ${\\tt true}\\leftarrow {\\mathcal{O}}(z,z')$ if and only if $z$ and $z'$ have the same functionality. 
\n\\end{assumption}\nAssumption \\ref{assumption:measurability} below says that there is a way to measure the degree of manipulations by which one file \nis transformed to another.\n\\begin{assumption}[{\\sf Measurability} assumption ]\n\\label{assumption:measurability}\nThere is a function $\\Gamma(z,z'):\\mathcal Z \\times \\mathcal Z \\to \\mathbb R_+$ that measures the degree of manipulations according to which a file $z'\\in \\mathcal{Z}$ can be derived from the file $z \\in \\mathcal{Z}$.\n\\end{assumption}\nSince Assumption \\ref{assumption:measurability} is often difficult to validate, $\\Gamma(z,z')$ may be replaced by a function that quantifies the degree of manipulation that can turn feature representation $\\mathbf{x}$ into $\\mathbf{x}'$, where\n$\\mathbf{x}=\\phi(S,z)$ and $\\mathbf{x}'=\\phi(S,z')$. This leads to:\n\\begin{assumption}[{\\sf Smoothness} assumption ]\n\\label{assumption:smoothness}\n\\ignore{\nThis assumption says that for any $z,z'\\in \\mathcal{Z}$ and $\\delta \\in \\Delta$, it holds that $C(\\phi(z),\\phi(z'))\\approx 0$ when $(z'\\leftarrow \\mathcal{A}(z,\\delta)) \\wedge (\\Gamma(z,z')\\approx 0) \\wedge ({\\tt true}\\leftarrow {\\mathcal{O}(z,z')})$.\n}\nThere is a function $C(\\mathbf{x},\\mathbf{x}'):\\mathcal X \\times \\mathcal X \\to \\mathbb R_+$ such that $C(\\phi(S,z),\\phi(S,z'))\\approx 0$ when $ (\\Gamma(z,z')\\approx 0) \\wedge ({\\tt true}\\leftarrow {\\mathcal{O}(z,z')})$.\n\\end{assumption}\nAssumption \\ref{assumption:inverse} below says that the inverse of feature extraction, $\\phi^{-1}$, is solvable so that a perturbed representation $\\mathbf{x}'$ can be mapped back to a legitimate file. \n\\begin{assumption}[{\\sf Invertibility} assumption ] \\label{assumption:inverse}\nFeature extraction $\\phi$ is invertible, meaning that given $\\mathbf{x}'$, the function $\\phi^{-1}:\\mathcal{X}\\to\\mathcal{Z}$ produces $z'=\\phi^{-1}(\\mathbf{x}')$. 
\n\\end{assumption}\nRecall that the feature extraction function $\\phi$ may be composed of a hand-crafted $\\phi_c$ and an automated $\\phi_a$, where $\\phi_c$ may be neither differentiable nor invertible . This means $\\mathbf{x}'$ may not be mapped to a legitimate file. Researchers tend to relax the assumption by overlooking the interdependent features , while suffering from the side-effect $\\mathbf{x}'\\neq \\phi(\\phi^{-1}(\\mathbf{x}'))$ .", "id": "18cf934c-a897-4b88-8a38-4cd32c20a2a0", "level": "subsubsection", "origin_cites_number": 16, "parent_id": "8e32af6c-587d-43e1-aabc-dc5d8345628d", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Survey and Systematization Methodology" ], [ "subsection", "Framework" ], [ "subsubsection", "Systematizing Assumptions" ] ], "subsections": [], "title": "Systematizing Assumptions" }, { "cite_extract_rate": 0.5111111111111111, "cites": [ 8695, 890, 3866, 7156, 888, 3868, 1193, 3867, 3865, 166, 6172, 3864, 7155, 3862, 937, 3853, 9099, 9100, 7754, 3863, 7153, 3869 ], "content": "\\label{sec:sys-attack}\nWe systematize attacks from two perspectives: {\\em attacker's objective} (i.e., what the attacker attempts to accomplish) and {\\em attacker's input} (i.e., what leverages the attacker can use). Whenever possible, we seek rigorous definitions to specify the attacker's input, while noting that these definitions have been scattered in the literature. We believe this specification is important because it can serve as a common reference for future studies. 
To demonstrate this, we discuss how to apply it to formulate a partial-order structure for comparing attacks.\n\\smallskip\n\\noindent{\\bf Attacker's Objective}.\nThere are three kinds of objectives: (i)\n{\\sf Indiscriminate}, meaning $\\mathcal{A}$ attempts to cause as many false-negatives as possible ; \n(ii) {\\sf Targeted}, meaning $\\mathcal{A}$ attempts to cause specific false-negatives (i.e., making certain malicious files evade the detection );\n(iii) {\\sf Availability}, meaning $\\mathcal{A}$ attempts to frustrate defender $\\mathcal{I}$ by rendering $\\mathcal{I}$'s classifier $f$ unusable (e.g., causing substantially high false-positives ).\n\\begin{table}[!htbp]\n\\caption{Attributes for specifying $\\mathcal{A}$'s and $\\mathcal{I}$'s input.}\n\t\\begin{minipage}{\\columnwidth}\n\t\t\\centering\n\t\t\\begin{tabular}{l|c|c}\n\t\t\\hline\nAttributes & Attacker $\\mathcal{A}$'s input & Defender $\\mathcal{I}$'s input \\\\\\hline\\hline\n\\multicolumn{3}{l}{Attributes under $\\mathcal{I}$'s control but may be known to $\\mathcal{A}$ to some extent} \\\\\\hline\n$A_1$: Training set $D_{train}$ & $a_1\\in [0,1]$ & 1 \\\\\\hline\n$A_2$: Defense technique & $a_2\\in \\{0,1\\}$ & 1 \\\\\\hline\n$A_3$: Feature set $S$ & $a_3\\in [0,1]$ & 1\\\\\\hline\n$A_4$: Learning algorithm $F_\\theta$ & $a_4\\in [0,1]$ & 1 \\\\\\hline\n$A_5$: Response & $a_5\\in \\{0,1\\}$ & 1 \\\\\\hline\\hline\n\\multicolumn{3}{l}{Attributes under $\\mathcal{A}$'s control but may be known to $\\mathcal{I}$ to some extent} \\\\\\hline\n$A_6$: Manipulation set $\\mathcal{M}$ & 1 & $a_6\\in [0,1]$ \\\\\\hline\n$A_7$: Attack tactic & 1 & $a_7\\in \\{0,1\\}$ \\\\\\hline\n$A_8$: Attack technique & 1& $a_8\\in \\{0,1\\}$ \\\\\\hline\n$A_9$: Adversarial examples & 1 & $a_9\\in [0,1]$ \\\\\\hline\n\t\t\\end{tabular}\n\t\\end{minipage}\n\t\\label{tab:attr}\n\\end{table}\n\\smallskip\n\\noindent{\\bf Attacker's Input}.\nTable \\ref{tab:attr} highlights the attributes we define to describe 
$\\mathcal{A}$'s input, including: five attributes $A_1,\\ldots,A_5$ that are under $\\mathcal{I}$'s control (indicated by 1) but may be known to $\\mathcal{A}$ to some extent $a_1,\\ldots,a_5$, respectively; and four attributes $A_6,\\ldots,A_9$ that are under $\\mathcal{A}$'s control (indicated by 1). These attributes are elaborated below.\n({\\bf i}) $A_1$: it describes $\\mathcal{I}$'s training set $D_{train}$ for learning classifier $f$. \nWe use $a_1\\in [0,1]$ to represent the extent at which $D_{train}$ is known to $\\mathcal{A}$.\nLet $\\hat{D}_{train}$ be the training files that are known to $\\mathcal{A}$. Then, $a_1=|\\hat{D}_{train}\\cap{D}_{train}|/|{D}_{train}|$.\n({\\bf ii}) $A_2$: it describes $\\mathcal{I}$'s defense techniques, which can be Ensemble Learning (${\\sf EL}$), Weight Regularization (${\\sf WR}$), Adversarial Training (${\\sf AT}$), Verifiable Learning (${\\sf VL}$), Robust Feature ({\\sf RF}), Input Transformation (${\\sf IT}$), Classifier ranDomization (${\\sf CD}$), and Sanitizing Examples ({\\sf SE}). Let $A_2\\in\\{${\\sf EL},{\\sf WR},{\\sf AT},{\\sf VL},{\\sf RF},{\\sf IT},{\\sf CD},{\\sf SE}$\\}$ and $a_2\\in \\{0,1\\}$ such that $a_2=0$ means $\\mathcal{A}$ does not know $\\mathcal{I}$'s technique and $a_2=1$ means $\\mathcal{A}$ knows $\\mathcal{I}$'s technique. The techniques are defined as follows.\nDefinition \\ref{definition:el} says that $\\mathcal{I}$ constructs multiple classifiers and uses them collectively in malware detection.\n\\begin{definition} [ensemble learning or {\\sf EL} ] \\label{definition:el}\nLet $\\mathcal{H}$ be $\\mathcal{I}$'s classifier space.\nGiven $K$ classifiers $\\{f_i\\}_{i=1}^{K}$ where $f_i\\in\\mathcal{H}$ and $f_i:\\mathcal{Z}\\to[0,1]$, let $f_i$ be assigned weight $\\omega_i$ with $\\sum_{i=1}^K{\\omega_i}=1$ and $\\omega_i\\geq 0$.
Then, $f=\\sum_{i=1}^{K}\\omega_i f_i$.\n\\end{definition}\nDefinition \\ref{definition:wr} says that $\\mathcal{I}$ uses regularization (e.g., $\\ell_2$ regularization or {\\em dropout} ) to decrease the model's sensitivity to adversarial examples.\n\\begin{definition} [weight regularization or {\\sf WR} ] \\label{definition:wr}\nGiven a regularization term $\\Omega$ (e.g., constraints imposed on the learnable parameters), the empirical risk is\n$\\min \\limits_{\\theta}~\\left[\\mathcal{L}(\\theta, D_{train})+\\Omega(\\theta)\\right]$, where $\\mathcal{L}$ is defined in Eq. \\eqref{eq:rsk_emp}.\n\\end{definition}\nDefinition \\ref{definition:at} says that $\\mathcal{I}$ proactively makes its classifier $f$ perceive some information about adversarial files. That is, $\\mathcal{I}$ augments the training set by incorporating adversarial examples that may be produced by $\\mathcal{I}$, $\\mathcal{A}$, or both.\n\\begin{definition} [adversarial training or {\\sf AT} ] \\label{definition:at}\nLet $D'$ denote a set of adversarial file-label pairs.\nThen, $\\mathcal{I}$ tunes model parameters by minimizing the empirical risk: $\\min \\limits_{\\theta}~\\left[\\mathcal{L}(\\theta, D_{train}) + \\beta \\mathcal{L}(\\theta, D')\\right]$, where $\\beta\\geq 0$ denotes a balance factor.\n\\end{definition}\nDefinition \\ref{definiton:vl} says that $\\mathcal{I}$ intentionally over-estimates the error incurred by $\\mathcal{A}$'s manipulations and then minimizes it.\n\\begin{definition}[verifiable learning or {\\sf VL} ] \\label{definiton:vl}\nGiven $(z,y)\\in D_{train}$ and a manipulation set $\\hat{\\mathcal{M}}$ known by $\\mathcal{I}$, let $z(\\hat{\\mathcal{M}})$ denote the upper and lower boundaries on $\\hat{\\mathcal{M}}$.
Then, this defense technique minimizes the following loss function derived from Eq.\\eqref{eq:rsk_emp}:\n$L(F_\\theta(\\phi_c(S,z)), y)+\\beta L(F_\\theta(\\phi_c(S,z(\\hat{\\mathcal{M}}))), y).$\n\\end{definition}\nDefinition \\ref{eq:robust-feature-defense-technique} says that $\\mathcal{I}$ uses a set of features $S^*\\subseteq S$ that can lead to higher detection capability against adversarial example attacks.\n\\begin{definition} [robust feature or {\\sf RF}; adapted from ]\n\\label{eq:robust-feature-defense-technique}\nGiven a training set $D_{train} \\cup D'$ that contains (adversarial) file-label pairs, the robust feature set $S^*$ is $$S^*=\\argmin_{\\tilde{S}\\subset S}\\sum_{(z,y)\\in {D_{train}\\cup D'}}L(\\widetilde{F}_\\theta(\\phi_c(\\tilde{S},z)),y),$$ \nwhere $\\widetilde{F}_\\theta$ is $F_\\theta$ or a simplified learning algorithm that is computationally faster than $F_\\theta$ .\n\\end{definition}\nDefinition \\ref{def:it} says that $\\mathcal{I}$ aims to use non-learning methods (e.g., de-obfuscation as shown in Proguard ) to offset $\\mathcal{A}$'s manipulations.\n\\begin{definition} [input transformation or {\\sf IT}, adapted from ] \\label{def:it}\nLet ${\\tt IT}:\\mathcal{Z}\\to\\mathcal{Z}$ denote an input transformation in the file space. Given file $z$ and transformation ${\\tt IT}$, the classifier is $f=\\varphi(\\phi({\\tt IT}(z)))$.\n\\end{definition}\nDefinition \\ref{definition:cd} says that $\\mathcal{I}$ randomly chooses $m$ classifiers and uses their results for prediction. That is, $\\mathcal{I}$ aims to randomize the feature representation used by $f$, the learning algorithm, and/or response to $\\mathcal{A}$'s queries (to prevent $\\mathcal{A}$ from inferring information about $f$).
\n\\begin{definition} [classifier randomization or {\\sf CD}; adapted from ] \\label{definition:cd}\nGiven $\\mathcal{I}$'s classifier space $\\mathcal{H}$ and an input file $z$, $\\mathcal{I}$ randomly selects $m$ classifiers from $\\mathcal{H}$ with replacement, say $\\{f_i\\}_{i=1}^m$. Then,\n\t$f=\\frac{1}{m}\\sum_{i=1}^m~f_i(z)$. \n\\end{definition}\nInstead of enhancing malware detectors, Definition \\ref{definition:se} provides an alternative that detects the adversarial examples for further analysis.\n\\begin{definition}[sanitizing examples or ${\\sf SE}$; adapted from ] \\label{definition:se}\n$\\mathcal{I}$ aims to detect adversarial files by using function \n${\\sf flag}:\\mathcal{Z}\\to \\{{\\tt yes},{\\tt no}\\}$ to flag a file as adversarial ({\\tt yes}) or not ({\\tt no}).\n\\end{definition}\n({\\bf iii}) $A_3$: it describes $\\mathcal{I}$'s feature set $S$. We use $a_3\\in [0,1]$ to represent the extent at which $\\mathcal{A}$ knows about $S$. Let $\\hat{S}$\ndenote the features that are known to $\\mathcal{A}$. Then,\n$a_3=|\\hat{S}\\cap S|/|S|$.\n({\\bf iv}) $A_4$: it describes $\\mathcal{I}$'s learning algorithm $F_\\theta$, the set of trainable parameters $\\theta$, and hyperparameters (which are set manually, e.g., $\\beta$ in Definition \\ref{definition:at})\n. 
We use $a_4\\in [0,1]$ to represent the degree to which $\\mathcal{A}$ knows $A_4$, where $a_4=0$ means $\\mathcal{A}$ knows nothing and $a_4=1$ means $\\mathcal{A}$ knows everything.\n({\\bf v}) $A_5$: it describes $\\mathcal{I}$'s response to $\\mathcal{A}$'s query to $f$ (if applicable), which is relevant because $\\mathcal{A}$ can learn useful information about $f$ by observing $f$'s responses.\nWe define $a_5\\in \\{0,1\\}$ such that $a_5=0$ means there is a limit on the queries that $\\mathcal{A}$ can make to $f$\n(referred to as ${\\sf LQ}$) and $a_5=1$ means there is no limit (referred to as ${\\sf FQ}$).\n({\\bf vi}) $A_6$: it describes $\\mathcal{A}$'s manipulation set in the problem space, which describes perturbations for generating adversarial files (adapted from {\\em perturbation set} in the AML literature):\n\\begin{eqnarray*}\n\\mathcal{M}=\\{\\delta: (z'\\leftarrow \\mathcal{A}(z,\\delta)) \\wedge ({\\tt true}\\leftarrow \\mathcal{O}(z,z')) \\wedge (z \\in \\mathcal{Z}) \\wedge (z'\\neq z)\\}.\n\\end{eqnarray*}\n$\\mathcal{M}$ is application-specific. For instance, an Android Package Kit (APK) permits adding code or renaming class names, a Windows Portable Executable (PE) permits adding code or changing PE {\\em section} names, and a Portable Document Format (PDF) file permits appending dead code at its end or adding new instructions. This means that a perturbation $\\delta\\in\\mathcal{M}$ can be a tuple specifying an operator (e.g., addition or removal), an object (e.g., a feature used by $\\mathcal{I}$), and other kinds of information (e.g., perturbation location in a file). \nSince it is often infeasible to enumerate the entire manipulation set, $\\mathcal{A}$ may leverage an empirical one $\\widetilde{\\mathcal{M}}$, which can be defined in the problem or feature space.
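As a toy illustration of these notions (the file model, feature set $S$, and API names below are all hypothetical, not drawn from any real detector), a problem-space perturbation drawn from $\\mathcal{M}$ and the feature-space change it induces can be sketched as:

```python
# Toy sketch: a "file" is modeled as a set of API names, the feature map
# phi is a binary bag-of-APIs over a fixed feature set S, and a manipulation
# is an (operator, object) tuple. All names here are hypothetical.
S = ["read_sms", "send_sms", "http_post", "get_location"]  # defender's feature set

def phi(z):
    """Hand-crafted feature extraction: binary indicator vector over S."""
    return [1 if api in z else 0 for api in S]

def apply_manipulation(z, delta):
    """Apply a problem-space perturbation (operator, object) to file z."""
    op, api = delta
    if op == "add":       # adding dead code preserves functionality (toy assumption)
        return z | {api}
    if op == "remove":
        return z - {api}
    raise ValueError(op)

z = {"read_sms", "http_post"}       # a (toy) malware sample
delta = ("add", "get_location")     # one element of the manipulation set M
z_prime = apply_manipulation(z, delta)

x, x_prime = phi(z), phi(z_prime)
delta_x = [b - a for a, b in zip(x, x_prime)]  # feature-space perturbation
print(x, x_prime, delta_x)
```

In this toy model every addition preserves functionality, so the oracle check is implicit; for real file formats that check is exactly what makes manipulation sets hard to enumerate.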
Manipulations in the problem space must not violate the relevant constraints (e.g., adding APIs in an APK should not cause the request of unauthorized permissions). Manipulations in the feature space facilitate efficient computing via gradient-based methods as long as the inverse feature mapping $\\phi^{-1}$ is available.\nFurthermore, we can use the manipulation set $\\mathcal{M}$ to define a {\\em feature manipulation set} $\\mathbf{M}$:\n\\begin{eqnarray}\n\\mathbf{M}=\\{\\delta_{\\mathbf{x}}=\\mathbf{x}'-\\mathbf{x}: (\\mathbf{x}=\\phi(z)) \\land (\\mathbf{x}'=\\phi(z')) \\land (z'\\leftarrow \\mathcal{A}(z,\\delta)) \\land (\\delta \\in \\mathcal{M}) \\land (z \\in \\mathcal{Z})\n\\}. \\label{eq:manipulations}\n\\end{eqnarray}\nIn order to compute $\\mathbf{M}$ efficiently, one strategy is to estimate a feature-space analog of $\\widetilde{\\mathcal{M}}$, denoted by $\\widetilde{\\mathbf{M}}$ . This however demands resolving the invertibility Assumption \\ref{assumption:inverse}.\n({\\bf vii}) $A_7$: it describes $\\mathcal{A}$'s attack tactics. We consider two tactics: classifier {\\em evasion} and classifier {\\em poisoning}. For the evasion attack, we consider three variants: basic evasion (${\\sf BE}$), optimal evasion 1 (${\\sf OE1}$) and optimal evasion 2 (${\\sf OE2}$).\nFor the poisoning attack, we consider two variants: basic poisoning (${\\sf BP}$) and optimal poisoning (${\\sf OP}$). 
Correspondingly, we have $A_7\\in\\{{\\sf BE},{\\sf OE1},{\\sf OE2},{\\sf BP},{\\sf OP}\\}$.\nThese tactics are elaborated below, while noting that they do not explicitly call oracle $\\mathcal{O}$ because the definitions of manipulation sets $\\mathcal{M}$ already ensure that manipulations preserve the functionalities of non-adversarial files.\nAs shown in Definition \\ref{defintion:basic_evs}, the basic evasion attack is that $\\mathcal{A}$ uses a set of perturbations $\\delta_z\\subseteq\\mathcal{M}$ to manipulate a malicious file $z$, which is classified by $\\mathcal{I}$'s classifier $f$ as $+\\leftarrow f(z)$, to an adversarial file $z'$ such that $-\\leftarrow f(z')$.\n\\begin{definition}[basic evasion or ${\\sf BE}$ ] \\label{defintion:basic_evs}\n$\\mathcal{A}$ looks for $\\delta_z\\subseteq\\mathcal{M}$ to achieve the following for $z\\in \\mathcal{Z}$ with $+\\leftarrow f(z)$:\n\\begin{eqnarray*}\n-\\leftarrow f(z') ~\\text{where}~\n(z'\\leftarrow \\mathcal{A}(z,\\delta_z)) \\wedge\n(\\delta_z\\subseteq\\mathcal{M}).\n\\end{eqnarray*}\n\\end{definition}\nAs shown in Definition \\ref{def:opt_evasion1}, the attacker attempts to minimize the degree of perturbations. In other words, this attack tactic is the same as {\\sf BE}, except that $\\mathcal{A}$ attempts to minimize the manipulation when perturbing a non-adversarial file $z\\in \\mathcal{Z}$ into an adversarial file $z'\\in \\mathcal{Z}$.\n\\begin{definition}[optimal evasion 1 or ${\\sf OE1}$; adapted from ] \\label{def:opt_evasion1}\n$\\mathcal{A}$ attempts to achieve the following for $z\\in \\mathcal{Z}$ with $+\\leftarrow f(z)$:\n\\begin{eqnarray*}\n\\min \\limits_{z'}\n\\Gamma(z', z) \n~\\text{s.t.}~(z'\\leftarrow \\mathcal{A}(z,\\delta_z))\\land (\\delta_z\\subseteq\\mathcal{M}) \\land (-\\leftarrow f(z')).
\n\\end{eqnarray*}\n\\end{definition}\nAs shown in Definition \\ref{def:opt_evasion2}, the attacker attempts to maximize $\\mathcal{I}$'s loss for waging high-confidence evasion attacks, while noting that small perturbations may be incorporated.\n\\begin{definition}[optimal evasion 2 or ${\\sf OE2}$; adapted from ] \\label{def:opt_evasion2}\n$\\mathcal{A}$ attempts to achieve the following for $z\\in \\mathcal{Z}$ with $+\\leftarrow f(z)$:\n\\begin{eqnarray*}\n\\max\\limits_{z'}\nL(F_\\theta(\\phi_c(S,z')),+)\n~\\text{s.t.}~(z'\\leftarrow \\mathcal{A}(z,\\delta_z))\\land (\\delta_z\\subseteq\\mathcal{M}) \\land (-\\leftarrow f(z')).\n\\end{eqnarray*}\n\\end{definition}\nLet $D'_{poison}\\subset \\mathcal{D}$ be a set of adversarial file-label pairs obtained by manipulating non-adversarial files in $D_{poison}$. Let $D'_{train}\\leftarrow D_{train} \\cup D'_{poison}$ be the contaminated training data for learning a classifier $f'$ with parameters $\\theta'$. As shown in Definition \\ref{def:basic poisoning attack}, the basic poisoning attack is that the attacker aims to make $f'$ mis-classify the files in a dataset $D_{target}$, while accommodating attacks in which $\\mathcal{A}$ manipulates the labels of the files in $D_{poison}$.\n\\begin{definition}[basic poisoning or ${\\sf BP}$ ]\n\\label{def:basic poisoning attack}\nGiven a set $D_{target}$ of files where $+\\leftarrow f(\\dot{z})$ for $\\dot{z}\\in D_{target}$ and a set $D_{poison}$ of non-adversarial files, $\\mathcal{A}$ attempts to perturb files in $D_{poison}$ to adversarial ones\n$D'_{poison}=\n\\{(\\mathcal{A}(z,\\delta_z), \\mathcal{A}(y)): ((z,y)\\in D_{poison})\\land (\\delta_z\\subseteq \\mathcal{M})\\land $ $ (\\mathcal{A}(y)\\in\\{+,-\\}) \\}$ such that classifier\n$f'$ learned from $D'_{train}\\leftarrow D_{train} \\cup D'_{poison}$\nmis-classifies the files in $D_{target}$.\nFormally, the attacker intends to achieve the following for $\\forall~\\dot{z} \\in D_{target}$: \n$-\\leftarrow f'(\\dot{z})$
where $f'$ is learned from $D'_{train}\\leftarrow D_{train}\\cup D'_{poison}$.\n\\end{definition}\nAs shown in Definition \\ref{def:optimal poisoning attack}, the optimal poisoning attack is the same as {\\sf BP}, except that $\\mathcal{A}$ attempts to\nmaximize the loss when using classifier $f'$ with parameter $\\theta'$ to classify\nfiles in $D_{target}$. \nDefinition \\ref{def:optimal poisoning attack} can have multiple variants by considering bounds on $|D'_{poison}|$ or bounds on the degree of perturbations $|\\delta_z|$ . \n\\begin{definition}[optimal poisoning or ${\\sf OP}$ ]\n\\label{def:optimal poisoning attack}\nGiven $D_{poison}$, $\\mathcal{A}$ perturbs $D_{poison}$ into $D'_{poison}$ for achieving:\n\\begin{align*}\n\\max \\limits_{D'_{poison}}~\\mathcal{L}(\\theta', D_{target})\n~\\text{where}~{\\theta'} \\leftarrow \\operatorname*{\\arg\\min}_{\\theta} \\mathcal{L}(\\theta, D_{train} \\cup D'_{poison}).\n\\end{align*}\n\\end{definition}\n({\\bf viii}) $A_8$: it describes $\\mathcal{A}$'s attack techniques, such as Gradient-based Optimization ({\\sf GO}), Sensitive Features ({\\sf SF}), MImicry ({\\sf MI}), TRansferability ({\\sf TR}), Heuristic Search ({\\sf HS}), Generative Model ({\\sf GM}), and Mixture Strategy ({\\sf MS}). We denote this by $A_8\\in\\{${\\sf GO}, {\\sf SF}, {\\sf MI}, {\\sf TR}, {\\sf HS}, {\\sf GM}, {\\sf MS}$\\}$. Let $\\mathcal{A}$ have a classifier $\\hat{f}$, which consists of a hand-crafted feature extraction $\\hat{\\phi}_c$ and a parameterized model $\\hat{F}_{\\hat\\theta}$. Let $\\mathcal{A}$ also have an objective function $L_\\mathcal{A}:[0,1]\\times\\mathcal{Y}\\to\\mathcal{R}$, which measures $\\hat{f}$'s error or $\\mathcal{A}$'s failure in evasion. Note that $\\hat{f}$ and $L_\\mathcal{A}$ can be the same as, or can mimic (by leveraging $\\mathcal{A}$'s knowledge about $\\mathcal{I}$'s attributes $A_1,\\ldots,A_5$), $\\mathcal{I}$'s classifier $f$ and loss function $L$, respectively.
\nThe attack technique specified by Definition \\ref{definition:go} is that $\\mathcal{A}$ solves the feature-space optimization problems described in Definitions \\ref{def:opt_evasion1}, \\ref{def:opt_evasion2} and \\ref{def:optimal poisoning attack} by using some gradient-based optimization method and then leverages the invertibility Assumption \\ref{assumption:inverse} to generate adversarial malware examples.\n\\begin{definition}[Gradient-based Optimization or {\\sf GO}, adapted from ] \\label{definition:go}\nLet $\\mathbf{x}\\leftarrow\\hat{\\phi}_c(\\hat{S},z)$ and $\\mathbf{x}'\\leftarrow\\mathbf{x}+\\delta_\\mathbf{x}$. The feature-space optimization problem in Definition \\ref{def:opt_evasion1} can be written as\n\t\\begin{equation}\n\t\t\\min_{\\delta_\\mathbf{x}} C(\\mathbf{x}, \\mathbf{x}+\\delta_\\mathbf{x})~~~\\text{s.t.}~~~ (\\delta_\\mathbf{x}\\in[\\underline{\\mathbf{u}},\\overline{\\mathbf{u}}]) \\land (\\hat{F}_{\\hat{\\theta}}(\\mathbf{x}')<\\tau), \\label{eq:a8:evs1}\n\t\\end{equation}\nwhere $\\underline{\\mathbf{u}}$ and $\\overline{\\mathbf{u}}$ are respectively the lower and upper bounds on $\\mathbf{M}$ (e.g., $\\delta_\\mathbf{x}\\in[-\\mathbf{x}, 1-\\mathbf{x}]$ for binary representation $\\mathbf{x}$). \nThe feature-space optimization problem in Definition \\ref{def:opt_evasion2} can be written as \n \\begin{equation}\n \t\\max_{\\delta_\\mathbf{x}} L_\\mathcal{A}\\left(\\hat{F}_{\\hat{\\theta}}(\\mathbf{x}+\\delta_\\mathbf{x}), +\\right)~~\\text{s.t.}~~(\\delta_\\mathbf{x}\\in[\\underline{\\mathbf{u}},\\overline{\\mathbf{u}}]). 
\\label{eq:a8:evs2}\n \\end{equation}\nThe feature-space optimization problem specified in Definition \\ref{def:optimal poisoning attack} can be written as \n \\begin{align}\n \t&\\max_{\\delta_\\mathbf{x}\\in[\\underline{\\mathbf{u}},\\overline{\\mathbf{u}}]}\\mathbb{E}_{(\\dot{z},\\dot{y})\\in D_{target}}L_\\mathcal{A}(\\hat{F}_{\\hat{\\theta}'}(\\hat{\\phi}_c(\\hat{S},\\dot{z}),\\dot{y})),~~ \\forall (z,y)\\in D_{poison} \\label{eq:a8:poi}\\\\\n \t~~\\text{where}~~&{\\hat\\theta}'\\leftarrow\\argmin_{\\hat\\theta}\\mathbb{E}_{(z_t,y_t)\\in{\\hat{D}_{train}\\cup\\{(\\hat{\\phi}_c^{-1}(\\hat{\\phi}_c(z)+\\delta_\\mathbf{x}),y')\\}}}L_\\mathcal{A}\\left(\\hat{F}_{\\hat\\theta}(\\hat{\\phi}_c(\\hat{S},z_t),y_t)\\right). \\nonumber\n \\end{align}\n\\end{definition}\nIn order to calculate the gradients of loss function $L_\\mathcal{A}$ with respect to $\\delta_\\mathbf{x}$ in Eqs.\\eqref{eq:a8:evs1} and \\eqref{eq:a8:evs2}, inequality constraints can be handled by appending penalty items to the loss function in question and box-constraints can be coped with by using gradient projection . Since $\\mathbf{x}+\\delta_\\mathbf{x}$ is continuous, the {\\sf GO} attack technique needs to map $\\delta_\\mathbf{x}$ to a discrete perturbation vector in $\\mathbf{M}$, for instance by using the nearest neighbor search .\nThe gradients of loss function $L_\\mathcal{A}$ with respect to $\\delta_\\mathbf{x}$ in Eq. \\eqref{eq:a8:poi} are delicate to deal with. One issue is the indirect relation between $L_\\mathcal{A}$ and $\\delta_\\mathbf{x}$, which can be handled by the chain rule . \nAnother issue is the difficulty that is encountered when computing the partial derivatives $\\partial{\\hat\\theta'}/\\partial\\delta_{\\mathbf{x}}$ . \nFor dealing with this, researchers often relax the underlying constraints (e.g., by supposing that $\\hat{F}_{\\hat\\theta}$ is a linear model). 
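As a minimal numeric sketch of the {\\sf GO} technique for evasion (the surrogate weights, step size, and iteration count below are hypothetical; a trained surrogate would replace them), projected gradient descent on the surrogate's malware score under the box constraint $\\delta_\\mathbf{x}\\in[-\\mathbf{x}, 1-\\mathbf{x}]$, followed by rounding back to a binary vector:

```python
import numpy as np

# Toy sketch of {\sf GO}: drive the surrogate's malware score below the
# threshold by gradient descent on delta_x, projecting each step into the
# box [-x, 1-x] that keeps x + delta_x a valid binary-feature relaxation.
w = np.array([2.0, -1.5, 3.0, -0.5])   # hypothetical surrogate linear weights
b = -0.5

def score(x):
    """Surrogate F_hat: P(malware) = sigmoid(w.x + b)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = np.array([1.0, 0.0, 1.0, 0.0])      # malware feature vector, score(x) > 0.5
lo, hi = -x, 1.0 - x                    # box constraints on delta_x
delta = np.zeros_like(x)

for _ in range(300):
    p = score(x + delta)
    grad = p * (1.0 - p) * w            # gradient of the sigmoid score w.r.t. delta
    delta = np.clip(delta - 0.5 * grad, lo, hi)  # descend, then project into the box

x_adv = np.round(x + delta)             # map the relaxation back to a binary vector
print(score(x), score(x_adv))
```

The final rounding step is where the invertibility assumption enters: the perturbed binary vector still has to be mapped back to a functional file in the problem space.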
\nThe attack technique specified by Definition \\ref{definition:sf} is that $\\mathcal{A}$ perturbs malware examples by injecting or removing a small number of features to decrease the classification error measured by the loss function ${L}_\\mathcal{A}$ as much as possible.\n\\begin{definition}[Sensitive Features or {\\sf SF}, adapted from ] \\label{definition:sf}\nFor evasion attacks, $\\mathcal{A}$ aims to maximize the following with respect to a given malware example-label pair $(z,+)$: $$\\max_{z'} L_\\mathcal{A}\\left(\\hat{F}_{\\hat\\theta}(\\hat{\\phi}(\\hat{S},z')),+\\right)~~\\text{s.t.}~~(z'\\leftarrow \\mathcal{A}(z,\\delta_z)) \\land (\\delta_z\\subseteq \\mathcal{M}) \\land (|\\delta_z|\\leq m),$$ \nwhere $m$ is the maximum degree of manipulations.\nFor poisoning attacks, $\\mathcal{A}$ aims to maximize the following with respect to the given $D_{poison}$ and $D_{target}$, \n\\begin{align}\n \\max_{D'_{poison}}\\mathbb{E}_{(\\dot{z},\\dot{y})\\in D_{target}}L_\\mathcal{A}(\\hat{F}_{\\hat{\\theta}'}(\\hat{\\phi}_c(\\hat{S},\\dot{z}),\\dot{y})),\n\\end{align}\nwhere $\\hat{\\theta}'$ is learned from $\\hat{D}_{train}\\cup D'_{poison}$ such that every $z'\\in D'_{poison}$ is obtained via a perturbation $\\delta_z$ subject to $(z'\\leftarrow \\mathcal{A}(z,\\delta_z)) \\land (\\delta_z\\subseteq \\mathcal{M}) \\land (z\\in D_{poison}) \\land (|\\delta_z|\\leq m)$.\n\\end{definition} \nThe attack technique specified by Definition \\ref{definition:mi} is that $\\mathcal{A}$ perturbs malware example $z$ by mimicking a benign example, while noting that this attack technique can be algorithm-agnostic.\n\\begin{definition}[MImicry or {\\sf MI}, adapted from ] \\label{definition:mi}\nGiven a set of benign examples $D_{ben}$ and a malware example $z$, $\\mathcal{A}$ aims to achieve the following minimization:\n\t\\begin{eqnarray}\n\t\t\\min_{\\delta_z\\in\\mathcal{M}} \\Gamma(\\mathcal{A}(z,\\delta_z),z_{ben})~~\\text{s.t.}~~ (\\exists~z_{ben}\\in
D_{ben}). \\label{eq:mimicry}\n\t\\end{eqnarray}\n\\end{definition}\nThe attack technique specified by Definition \\ref{definition:mi} can be extended to accommodate the similarity between representations in the feature space.\nThe attack technique specified by Definition \\ref{definition:tr} is that $\\mathcal{A}$ generates adversarial examples against a surrogate model $\\hat{f}$.\n\\begin{definition}[TRansferability or {\\sf TR}, adapted from ] \\label{definition:tr}\n$\\mathcal{A}$ learns a surrogate model $\\hat{f}$ of $f$ from $\\hat{D}_{train}$, $\\hat{S}$, $\\hat{\\phi_c}$ and $\\hat{F}$. For evasion attacks,\n$\\mathcal{A}$ achieves $-\\leftarrow\\hat{f}(z')$ by perturbing malware example $z$ to $z'$ and then attacks $f$ with $z'$. For poisoning attacks, $\\mathcal{A}$ contaminates $\\hat{f}$ to $\\hat{f}'$ such that $-\\leftarrow\\hat{f}'(\\dot{z})~\\text{for}~\\forall(\\dot{z},\\dot{y}) \\in D_{target}$, by poisoning the training set $\\hat{D}_{train}$ with $D'_{poison}$ and then attacks $f$ with $D'_{poison}$.\n\\end{definition}\nThe attack technique specified by\nDefinition \\ref{definition:hs} is that $\\mathcal{A}$ searches for perturbations in $\\mathcal{M}$ via some heuristics, while leveraging oracle $\\mathcal{O}$'s responses to $\\mathcal{A}$'s queries and $f$'s responses to $\\mathcal{A}$'s queries. Since $\\mathcal{M}$ is defined with respect to the problem space, this attack technique does not need the invertibility Assumption \\ref{assumption:inverse}.\n\\begin{definition}[Heuristic Search or {\\sf HS}] \\label{definition:hs}\nLet $h$ be a function taking $\\mathcal{O}$'s response and $f$'s response as input.
Given a malware example $z$, $\\mathcal{A}$ looks for an $m$-length manipulation path \n\\begin{align}\n\\langle z_{(0)}, z_{(1)},\\ldots,z_{(m)}\\rangle \n~~\\text{s.t.}~~z_{(i+1)}=\\mathcal{A}(z_{(i)},\\delta_{z,(i)}) \\land (\\delta_{z,(i)}& \\in \\mathcal{M}) \\land (h(\\mathcal{O}, f,z,z_{(i)})\\leq h(\\mathcal{O}, f,z,z_{(i+1)})) \\nonumber\n\\end{align}\nwhere $z_{(0)}=z$.\n\\end{definition}\nThe attack technique specified by Definition \\ref{def:generative-model} is that $\\mathcal{A}$ uses a generative model $G$ with parameters $\\theta_g$ to perturb malware representation vectors and then leverages the invertibility Assumption \\ref{assumption:inverse} to turn the perturbed vector into an adversarial malware example.\n\\begin{definition}[Generative Model or {\\sf GM}]\\label{def:generative-model}\nGiven a malware representation vector $\\mathbf{x}=\\hat{\\phi}_c(\\hat{S},z)$, $\\mathcal{A}$ achieves\n$$\\max_{\\theta_g} L_\\mathcal{A}\\left(\\hat{F}_{\\hat{\\theta}}(G_{\\theta_g}(\\mathbf{x})),+\\right)~~\\text{s.t.}~~ G_{\\theta_g}(\\mathbf{x})\\in [\\underline{\\mathbf{u}}-\\mathbf{x}, \\overline{\\mathbf{u}}-\\mathbf{x}]$$\nand leverages the invertibility Assumption \\ref{assumption:inverse} to obtain an adversarial example $z'=\\hat\\phi_c^{-1}(G_{\\theta_g}(\\mathbf{x}))$.\n\\end{definition}\nThe attack technique specified by\nDefinition \\ref{definition:ms} is that $\\mathcal{A}$ combines multiple perturbation methods to perturb an example. \n\\begin{definition}[Mixture Strategy or {\\sf MS} ] \\label{definition:ms}\nLet $\\mathcal{H}_A$ denote the space of generative methods and $\\mathcal{W}_a=\\{\\mathbf{w}_a:\\mathbf{w}_a=(w_{a,1},\\ldots,w_{a,K}),w_{a,i}\\geq 0\\}$ with $i=1,\\ldots,K$ denote the weights space. 
Given a malware example $z$, $\\mathcal{A}$ aims to achieve $$\\max_{\\mathbf{w}_a} L_\\mathcal{A}(\\hat{F}_{\\hat{\\theta}}(\\hat\\phi(z')),+) ~~\\text{s.t.}~~(z'=\\sum_{i=1}^K{w_{a,i}}g_i(z))\\land (\\mathcal{O}(z,z')={\\sf true})) \\land (g_i\\in \\mathcal{H}_A) \\land (\\mathbf{w}_a\\in \\mathcal{W}_a).$$\n\\end{definition}\n({\\bf ix}) $A_9$: it corresponds to $\\mathcal{A}$'s adversarial files. Given file manipulation set $\\mathcal{M}$, the corresponding set of adversarial files is defined as $\\mathcal{Z}_{\\mathcal M}=\\{\\mathcal{A}(z, \\delta_z):(z \\in \\mathcal{Z}) \\land (\\delta_z\\subseteq\\mathcal{M})\\}$. Given feature manipulation set $\\mathbf{M}$, the set of adversarial feature vectors is: $\\mathcal{X}_{\\mathbf{M}}=\\{\\mathbf{x}':(\\mathbf{x}'=\\mathbf{x}+\\delta_{\\mathbf{x}}) \\land (\\delta_\\mathbf{x}\\in\\mathbf{M})\\}.$\n\\begin{figure}[htbp!]\n\t\\centering\n\t\\begin{tikzpicture}[thick,scale=0.95, every node/.style={transform shape}]\n\t\\node(top) [label=right:{White-box} attack, label=left:{}] at (0,6) {$(1,1,1,1,1)$};\n\t\\node(l5n0)[label=below:{},align=center] at (0,5) {$(a_1,a_2,1,1,a_5)$};\n\t\\node(l4n0)[label=below:{},align=center] at (2.2,4) {$(a_1,a_2,a_3,1,a_5)$};\n\t\\node(l4n1)[label=below:{},align=center] at (0,4) {$(0,0,1,1,0)$};\n\t\\node(l4n2)[label=below:{},align=center] at (-2.2,4) {$(a_1,a_2,1,a_4,a_5)$};\n\t\\node(l3n0)[label=below:{},align=center] at (4.4,3) {$(a_1,a_2,0,1,a_5)$};\n\t\\node(l3n1)[label=below:{},align=center] at (2.2,3) {$(0,0,a_3,1,0)$};\n\t\\node(l3n2)[label=below:{},align=center] at (0,3) {$(a_1,a_2,a_3,a_4,a_5)$};\n\t\\node(l3n3)[label=below:{},align=center] at (-2.2,3) {$(0,0,1,a_4,0)$};\n\t\\node(l3n4)[label=below:{},align=center] at (-4.4,3) {$(a_1,a_2,1,0,a_5)$};\n\t\\node(l2n0)[label=below:{},align=center] at (4.4,2) {$(0,0,0,1,0 )$};\n\t\\node(l2n1)[label=below:{},align=center] at (2.2,2) {$(a_1,a_2,0,a_4,a_5)$};\n\t\\node(l2n2)[label=below:{},align=center] at (0,2) 
{$(0,0,a_3,a_4,0)$};\n\t\\node(l2n3)[label=below:{},align=center] at (-2.2,2) {$(a_1,a_2,a_3,0,a_5)$};\n\t\\node(l2n4)[label=below:{},align=center] at (-4.4,2) {$(0,0,1,0,0)$};\n\t\\node(l1n0)[label=below:{},align=center] at (2.2,1) {$(0,0,0,a_4,0 )$};\n\t\\node(l1n1)[label=below:{},align=center] at (0,1) {$(a_1,a_2,0,0,a_5)$};\n\t\\node(l1n2)[label=below:{},align=center] at (-2.2,1) {$(0,0,a_3,0,0 )$};\n\t\\node(bot)[label=right:{Black-box attack}, label=left:{}] at (0,0) {$ (0,0,0,0,0)$};\n\t\\draw(l5n0) -- (top);\n\t\\draw(l4n0) -- (l5n0);\n\t\\draw(l4n1) -- (l5n0);\n\t\\draw(l4n2) -- (l5n0);\n\t\\draw(l3n0) -- (l4n0);\n\t\\draw(l3n1) -- (l4n0);\n\t\\draw(l3n1) -- (l4n1);\n\t\\draw(l3n2) -- (l4n0);\n\t\\draw(l3n2) -- (l4n2);\n\t\\draw(l3n3) -- (l4n1);\n\t\\draw(l3n3) -- (l4n2);\n\t\\draw(l3n4) -- (l4n2);\n\t\\draw(l2n0) -- (l3n0);\n\t\\draw(l2n0) -- (l3n1);\n\t\\draw(l2n1) -- (l3n0);\n\t\\draw(l2n1) -- (l3n2);\n\t\\draw(l2n2) -- (l3n1);\n\t\\draw(l2n2) -- (l3n2);\n\t\\draw(l2n2) -- (l3n3);\n\t\\draw(l2n3) -- (l3n2);\n\t\\draw(l2n3) -- (l3n4);\n\t\\draw(l2n4) -- (l3n3);\n\t\\draw(l2n4) -- (l3n4);\n\t\\draw(l1n0) -- (l2n0);\n\t\\draw(l1n0) -- (l2n1);\n\t\\draw(l1n0) -- (l2n2);\n\t\\draw(l1n1) -- (l2n1);\n\t\\draw(l1n1) -- (l2n3);\n\t\\draw(l1n2) -- (l2n2);\n\t\\draw(l1n2) -- (l2n3);\n\t\\draw(l1n2) -- (l2n4);\n\t\\draw(bot) -- (l1n2);\n\t\\draw(bot) -- (l1n1);\n\t\\draw(bot) -- (l1n0);\n\t\\end{tikzpicture}\n\t\\caption{A portion of the partial order defined over $(a_1,\\ldots,a_5)$.\n\t} \n\t\\label{fig:attack_input}\n\\end{figure}\n\\smallskip\n\\noindent{\\bf On the Usefulness of the Preceding Specification}. The preceding specification can be applied to formulate a partial order in the attribute space, which allows to compare attacks unambiguously. 
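Concretely, comparing two knowledge vectors $(a_1,\\ldots,a_5)$ reduces to a componentwise check (a minimal sketch; the grey-box tuples below are illustrative examples, not taken from the figure):

```python
# Minimal sketch: compare two attacker-knowledge vectors (a1,...,a5) under
# the componentwise partial order; incomparable pairs return None.
def dominates(u, v):
    """True iff u >= v componentwise, i.e., u assumes at least as much knowledge."""
    return all(ui >= vi for ui, vi in zip(u, v))

def compare(u, v):
    if dominates(u, v) and dominates(v, u):
        return "equal"
    if dominates(u, v):
        return "stronger assumptions"
    if dominates(v, u):
        return "weaker assumptions"
    return None  # incomparable: neither vector dominates the other

white_box = (1, 1, 1, 1, 1)
black_box = (0, 0, 0, 0, 0)
grey_1 = (0.5, 1, 0, 0, 0)   # knows half the training set and the defense technique
grey_2 = (0, 0, 1, 0, 0)     # knows the feature set only

print(compare(white_box, black_box))  # stronger assumptions
print(compare(grey_1, grey_2))        # None: incomparable grey-box attacks
```

Incomparable pairs are exactly the grey-box attacks that make an unqualified "stronger/weaker" claim meaningless, which is why the partial order matters.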
Figure \\ref{fig:attack_input} depicts how vector $(a_1, \\cdots, a_5)$ formulates a partial order between the widely-used informal notions of {\\em black-box} attack, namely $(a_1,a_2,a_3,a_4,a_5)=(0,0,0,0,0)$, \nand {\\em white-box} attack, namely $(a_1,a_2,a_3,a_4,a_5)=(1,1,1,1,1)$;\nthere are many kinds of grey-box attacks in between.", "id": "5f85312b-af7a-44ed-87ac-7e94a51dd8cc", "level": "subsubsection", "origin_cites_number": 45, "parent_id": "8e32af6c-587d-43e1-aabc-dc5d8345628d", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Survey and Systematization Methodology" ], [ "subsection", "Framework" ], [ "subsubsection", "Systematizing Attacks" ] ], "subsections": [], "title": "Systematizing Attacks" }, { "cite_extract_rate": 0, "cites": [], "content": "Similarly, we systematize defenses from two perspectives: {\\em defender's objective} (i.e., what the defender aims to achieve) and {\\em defender's input} (i.e., what leverages the defender can use). \nWe also discuss how to apply the specification to formulate a partial-order structure for comparing defenses.\n\\smallskip\n\\noindent{\\bf Defender's Objectives}.\n$\\mathcal{I}$ aims to detect ideally all of the malicious files, adversarial and non-adversarial alike, while suffering from small side-effects (e.g., increasing false-positives).\n\\smallskip\n\\noindent{\\bf Defender's Input}.\nAs highlighted in Table \\ref{tab:attr}, $\\mathcal{I}$'s input includes attributes $A_1,\\ldots,A_5$, which are under $\\mathcal{I}$'s control, and the extent $a_6,\\ldots,a_9$ at which $\\mathcal{I}$ respectively knows about attributes $A_6,\\ldots,A_9$, which are under $\\mathcal{A}$'s control. Note that $A_1,\\ldots,A_9$ have been defined above.\n\\begin{itemize}\n\\item We define $a_6 \\in [0,1]$ to represent the extent at which $\\mathcal{I}$ knows $\\mathcal{A}$'s manipulation set $\\mathcal{M}$. 
Let $\\hat{\\mathcal{M}}\\subseteq \\mathcal{M}$ denote the subset of $\\mathcal{A}$'s manipulation set known to $\\mathcal{I}$. \nThen, we set $a_6=|\\hat{\\mathcal{M}}|/|\\mathcal{M}|$.\n\\item We define $a_7\\in\\{0,1\\}$ such that $a_7=0$ means $\\mathcal{I}$ does not know $\\mathcal{A}$'s attack tactic $A_7\\in \\{{\\sf BE},{\\sf OE1},{\\sf OE2},{\\sf BP},$ ${\\sf OP}\\}$ and $a_7=1$ means $\\mathcal{I}$ knows $\\mathcal{A}$'s tactic. \n\\item We define $a_8\\in\\{0,1\\}$ such that $a_8=1~(a_8=0)$ means the defender does (not) know $\\mathcal{A}$'s attack technique $A_8\\in\\{{\\sf GO}, {\\sf SF}, {\\sf TR}, {\\sf MI}, {\\sf HS}, {\\sf GM}, {\\sf MS}\\}$.\n\\item We use $a_9=|\\hat{\\mathcal{Z}}_{\\mathcal{M}}| / |\\mathcal{Z}_{\\mathcal{M}}|$ to represent the extent at which $\\mathcal{I}$ knows about $\\mathcal{A}$'s adversarial files, where $a_9\\in [0,1]$, $\\mathcal{Z}_{\\mathcal{M}}$ is the set of $\\mathcal{A}$'s adversarial files, and $\\hat{\\mathcal{Z}}_{\\mathcal{M}}\\subseteq\\mathcal{Z}_{\\mathcal{M}}$ is the subset known to $\\mathcal{I}$.\n\\end{itemize}\n\\smallskip\n\\noindent{\\bf On the Usefulness of the Preceding Specification}. \nSimilarly, the defense specification can be used to formulate a partial order in the attribute space, paving the way for comparing defenses unambiguously.
Figure \\ref{fig:defense_input} depicts how vector $(a_6,\\ldots,a_9)$ formulates a partial order between the widely-used informal notions of black-box defense $(a_6,a_7,a_8,a_9)=(0,0,0,0)$ and white-box defense $(a_6,a_7,a_8,a_9)=(1,1,1,1)$; there are many kinds of grey-box defenses in between.\n\\begin{figure}[!htbp]\n\t\\centering\n\t\\begin{tikzpicture}[thick,scale=0.95, every node/.style={transform shape}]\n\t\t\\node(top)[label=right:{White-box defense}, label=left:{}] at (0,6) {$( 1,1,1,1)$};\n\t\t\\node(l3n0)[label=below:{},align=center] at (1.8,5) {$( a_6,a_7,a_8,1)$};\n\t\t\\node(l3n1)[label=below:{},align=center] at (0,5) {$( 1,0,0,1)$};\n\t\t\\node(l3n2)[label=below:{},align=center] at (-1.8,5) {$( 1,a_7,a_8,a_9)$};\n\t\t\\node(l2n0)[label=below:{},align=center] at (3.6,4) {$( 0,a_7,a_8, 1)$};\n\t\t\\node(l2n1)[label=below:{},align=center] at (1.85,4) {$( a_6,0,0,1)$};\n\t\t\\node(l2n2)[label=below:{},align=center] at (0,4) {$( a_6,a_7,a_8,a_9)$};\n\t\t\\node(l2n3)[label=below:{},align=center] at (-1.85,4) {$( 1,0,0,a_9)$};\n\t\t\\node(l2n4)[label=below:{},align=center] at (-3.6,4) {$( 1,a_7,a_8,0)$};\n\t\t\\node(l1n0)[label=below:{},align=center] at (3.6,3) {$( 0,0,0,1)$};\n\t\t\\node(l1n1)[label=below:{},align=center] at (1.85,3) {$( 0,a_7,a_8,a_9)$};\n\t\t\\node(l1n2)[label=below:{},align=center] at (0,3) {$( a_6,0,0,a_9)$};\n\t\t\\node(l1n3)[label=below:{},align=center] at (-1.85,3) {$( a_6,a_7,a_8,0)$};\n\t\t\\node(l1n4)[label=below:{},align=center] at (-3.6,3) {$( 1,0,0,0)$};\n\t\t\\node(l0n0)[label=below:{},align=center] at (1.85,2) {$( 0,0,0,a_9)$};\n\t\t\\node(l0n1)[label=below:{},align=center] at (0,2) {$( 0,a_7,a_8,0)$};\n\t\t\\node(l0n2)[label=below:{},align=center] at (-1.85,2) {$( a_6,0,0,0)$};\n\t\t\\node(bot)[label=right:{Black-box defense}, label=left:{}] at (0,1) {$( 0,0,0,0)$};\n\t\t\\draw(bot) -- (l0n0);\n\t\t\\draw(bot) -- (l0n1);\n\t\t\\draw(bot) -- (l0n2);\n\t\t\\draw(l0n0) -- (l1n0);\n\t\t\\draw(l0n0) --
(l1n1);\n\t\t\\draw(l0n0) -- (l1n2);\n\t\t\\draw(l0n1) -- (l1n1);\n\t\t\\draw(l0n1) -- (l1n3);\n\t\t\\draw(l0n2) -- (l1n2);\n\t\t\\draw(l0n2) -- (l1n3);\n\t\t\\draw(l0n2) -- (l1n4);\n\t\t\\draw(l1n0) -- (l2n0);\n\t\t\\draw(l1n0) -- (l2n1);\n\t\t\\draw(l1n1) -- (l2n0);\n\t\t\\draw(l1n1) -- (l2n2);\n\t\t\\draw(l1n2) -- (l2n1);\n\t\t\\draw(l1n2) -- (l2n2);\n\t\t\\draw(l1n2) -- (l2n3);\n\t\t\\draw(l1n3) -- (l2n2);\n\t\t\\draw(l1n3) -- (l2n4);\n\t\t\\draw(l1n4) -- (l2n3);\n\t\t\\draw(l1n4) -- (l2n4);\n\t\t\\draw(l2n0) -- (l3n0);\n\t\t\\draw(l2n1) -- (l3n0);\n\t\t\\draw(l2n1) -- (l3n1);\n\t\t\\draw(l2n2) -- (l3n0);\n\t\t\\draw(l2n2) -- (l3n2);\n\t\t\\draw(l2n3) -- (l3n1);\n\t\t\\draw(l2n3) -- (l3n2);\n\t\t\\draw(l2n4) -- (l3n2);\n\t\t\\draw(l3n0) -- (top);\n\t\t\\draw(l3n1) -- (top);\n\t\t\\draw(l3n2) -- (top);\n\t\\end{tikzpicture}\n\t\\caption{A portion of the partial order defined over $(a_6,\\ldots,a_9)$.\n\t}\n\t\\label{fig:defense_input}\n\\end{figure}", "id": "1e0de86d-a9b2-453b-b977-3d5abab5bd99", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "8e32af6c-587d-43e1-aabc-dc5d8345628d", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Survey and Systematization Methodology" ], [ "subsection", "Framework" ], [ "subsubsection", "Systematizing Defenses" ] ], "subsections": [], "title": "Systematizing Defenses" }, { "cite_extract_rate": 0.8, "cites": [ 917, 3866, 3870, 3854 ], "content": "\\label{sec:ml_model_property}\nSince $f=\\varphi(\\phi(\\cdot))$, we decompose $f$'s security properties into $\\varphi$'s and $\\phi$'s. 
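The decomposition $f=\varphi(\phi(\cdot))$ can be made concrete with a toy sketch; the keyword list, weights, bias, and threshold below are invented for illustration and are not taken from any real detector. $\phi$ maps a file to a binary feature vector and the linear $\varphi$ thresholds a score:

```python
import numpy as np

# Toy sketch of the decomposition f = varphi(phi(.)).  The keyword list,
# weights w, bias b, and threshold tau are invented for illustration.

KEYWORDS = [b"CreateRemoteThread", b"VirtualAlloc", b"JavaScript"]

def phi(file_bytes: bytes) -> np.ndarray:
    """Feature extraction phi: presence/absence of each keyword."""
    return np.array([1.0 if k in file_bytes else 0.0 for k in KEYWORDS])

w = np.array([1.5, 0.8, 0.6])   # weights of the linear varphi
b, tau = -1.0, 0.0              # bias and decision threshold

def f(file_bytes: bytes) -> str:
    score = w @ phi(file_bytes) + b       # varphi(phi(z))
    return "malicious" if score > tau else "benign"

print(f(b"... CreateRemoteThread ... VirtualAlloc ..."))  # -> malicious
```

Splitting the pipeline this way is what allows the robustness of $\phi$ and $\varphi$ to be analyzed separately below.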
We consider: Representation Robustness ({\sf RR}), meaning that two similar files have similar feature representations; Classification Robustness ({\sf CR}), meaning that two similar feature representations lead to the same label; Detection Robustness ({\sf DR}), meaning that feature extraction function $\phi$ returns similar representations for two files with the same functionality; and Training Robustness ({\sf TR}), meaning that small changes to the training set do not cause any significant change to the learned classifier. 
With respect to small perturbations, Definitions \ref{definition:representation-robustness} and
\ref{definition:classification-robustness} below collectively say that when two files $z$ and $z'$ are similar, they are assigned the same label with high probability. Since the classification function $\varphi$ is linear, we can obtain an $\epsilon$-robust $\varphi$ analytically, where $\epsilon$ is a small scalar that bounds the perturbations applied to feature vectors.
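The analytic claim for a linear $\varphi(\mathbf{x})=\mathbf{w}^\top\mathbf{x}+b$ can be sketched under our simplifying assumption (not made in the definitions themselves) that the cost $C$ is the Euclidean distance: by Cauchy-Schwarz, $|\varphi(\mathbf{x})-\varphi(\mathbf{x}')|\leq\|\mathbf{w}\|_2\,\|\mathbf{x}-\mathbf{x}'\|_2$, so a score margin above $\epsilon\|\mathbf{w}\|_2$ certifies that the label cannot change within the $\epsilon$-ball around $\mathbf{x}$:

```python
import numpy as np

# Sketch of why a linear varphi(x) = w.x + b can be certified eps-robust
# analytically, assuming (our assumption) that the cost C is Euclidean
# distance: |varphi(x) - varphi(x')| <= ||w|| * ||x - x'||, so a margin
# above eps * ||w|| rules out a label flip inside the eps-ball.

def certified_radius(w, b, tau, x):
    """Largest eps for which varphi keeps its label on the eps-ball at x."""
    return abs(w @ x + b - tau) / np.linalg.norm(w)

w, b, tau = np.array([2.0, -1.0]), 0.5, 0.0
x = np.array([1.0, 0.25])            # varphi(x) = 2.0 - 0.25 + 0.5 = 2.25
eps = 0.5
assert certified_radius(w, b, tau, x) >= eps   # robust at radius 0.5
```

No such closed-form certificate is available for the nonlinear feature extraction $\phi$, which is why robust feature extraction is the hard part.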
This means that the main challenge is to achieve robust feature extraction.
\begin{definition}[{\sf RR} or $(\epsilon,\eta)$-robust feature extraction; adapted from ] 
\label{definition:representation-robustness}
Given constants $\epsilon,\eta\in [0,1]$, and files $z,z' \in \mathcal{Z}$ such that $(\Gamma(z,z')\approx 0) \land ({\tt true}\leftarrow \mathcal{O}(z,z'))$,
we say feature extraction function $\phi$ is $(\epsilon,\eta)$-robust if
$$\mathbb{P}(C(\mathbf{x},\mathbf{x}')\leq \epsilon)=\mathbb{P}(C(\phi(z),\phi(z'))\leq \epsilon)>1 - \eta.$$
\end{definition}
\begin{definition}[{\sf CR} or {$\epsilon$-robust classification} ]
\label{definition:classification-robustness} 
Given constant $\epsilon\in [0,1]$ as in Definition 
\ref{definition:representation-robustness} and
any feature vectors $\mathbf{x},{\bf x}'\in \mathcal{X}$, we say classification function $\varphi$ is $\epsilon$-robust if $$(C(\mathbf{x},\mathbf{x}') \leq \epsilon) \rightarrow ((\varphi({\bf x})>\tau)\land (\varphi({\bf x}')>\tau)).$$
\end{definition}
Definition \ref{definition:representation-robustness2} 
specifies detection robustness, which says that feature extraction function $\phi$ returns similar representations for two different files as long as they have the same functionality.
Note that Definitions \ref{definition:classification-robustness} and
\ref{definition:representation-robustness2} collectively produce a malware detector with detection robustness.
\begin{definition}[{\sf DR} or $(\mathcal{O},\eta)$-robust feature extraction; adapted from ] 
\label{definition:representation-robustness2}
Given constants $\epsilon,\eta\in [0,1]$ with $\epsilon$ as in Definition \ref{definition:representation-robustness}, and two files $z,z' \in \mathcal{Z}$ such that $(\Gamma(z,z')\gg 0)\land({\tt true}\leftarrow \mathcal{O}(z,z'))$, we say feature extraction function $\phi$ is $(\mathcal{O}, \eta)$-robust if $\mathbb{P}(C(\phi(z),\phi(z'))\leq \epsilon)>1 - 
\\eta.$\n\\end{definition}\nSuppose we impose a restriction on the adversarial files set $D'_{poison}$ such that $|D'_{poison}|\\leq \\gamma|D_{train}|$ for some constant $\\gamma\\in[0,1]$. Let classifier $f'$ be learned from $D_{train} \\cup D'_{poison}$. \nDefinition \\ref{def:robust-training} says that a classifier $f'$ learned from poisoned training set can predict as accurately as $f$ learned from $D_{train}$ with a high probability. \n\\begin{definition}[{\\sf TR} or $(\\gamma,\\zeta)$-robust training; adapted from ]\\label{def:robust-training}\nGiven classifiers $f$ learned from $D_{train}$ and $f'$ learned from $D_{train} \\cup D'_{poison}$ where\n$|D_{poison}'| \\leq \\gamma |D_{train}|$, and small constants $\\zeta\\in [0,1]$, we say\n$f'$ is $(\\gamma,\\zeta)$-robust if $\\forall z\\in\\mathcal{Z}$\n$$\\left((f(z)>\\tau) \\land (|D_{poison}'| \\leq \\gamma |D_{train}|)\\right) \\rightarrow \\left(\\mathbb{P}(f'(z) > \\tau)>1-\\zeta\\right).$$\n\\end{definition}", "id": "b05fe1bc-da2e-4a87-8f17-d43f1acc81fc", "level": "subsection", "origin_cites_number": 5, "parent_id": "a1283bee-37c1-4371-bed3-ceeb8af64d75", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Survey and Systematization Methodology" ], [ "subsection", "Systematizing Security Properties" ] ], "subsections": [], "title": "Systematizing Security Properties" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:systematizstion-AMD-studies}\nWe systematize attacks according to $\\mathcal{A}$'s objective, input, assumptions, the security properties that are broken, and the types of victim malware detectors (e.g., Windows vs. Android). Similarly, we systematize defenses according to $\\mathcal{I}$'s objective, input, assumptions, the security properties that are achieved, and the types of enhanced malware detectors (e.g., Windows vs. 
Android).\nWe group attacks (defenses) according to the attacker's (defender's) techniques and then summarize them in a table according to the publication date in chronological order. \nFor convenience, we will use wildcard $*$ to indicate any value in a domain (e.g., $[0,1]$); we will use $\\lor$ to describe $\\mathcal{A}$'s and $\\mathcal{I}$'s ``broader'' input (if applicable). For example, $(0,1,0,1,0|A_6,\\ldots,A_9)\\lor (1,0,1,1,1|A_6,\\ldots,A_9)$ means that $\\mathcal{A}$ has either $(a_1,a_2,a_3,a_4,a_5)=(0,1,0,1,0)$ or\n$(a_1,a_2,a_3,a_4,a_5)=(1,0,1,1,1)$. Finally, we will present the attack-defense escalation.", "id": "e26ef155-9b9c-4a92-9ffa-e334fbe6ea87", "level": "section", "origin_cites_number": 0, "parent_id": "4fe25043-06f8-4577-bd62-3fe42c7ab899", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ] ], "subsections": [ "108d281b-7e1a-46ce-b750-a8b5a5ef8ec0", "ecf055b9-207b-4f7f-9c11-8bb57d05480e", "76e4c68c-dec2-4af2-b3fa-ce426944cf52" ], "title": "Systematizing AMD Arms Race" }, { "cite_extract_rate": 0.423076923076923, "cites": [ 3867, 3857, 6172, 7756, 3864, 3862, 937, 3871, 9099, 3854, 7754 ], "content": "\\label{sec:attack}\n\\begin{table}[htbp]\n\\caption{Summary of AMD attacks (\\checkmark means applicable, \\protect\\fullcirc means 0, \\protect\\emptycirc means 1, \\protect\\dotcirc means a value in $[0,1]$).}\n\t\\centering\n\t\\resizebox{0.9\\columnwidth}{!}{\n\t\\setlength{\\tabcolsep}{0.4em}\n\t\\begin{tabular}{l|ccc|ccccc|cccc|ccccc|cccc|ccc}\n\t\\hline\n\t\\multicolumn{1}{c|}{\\specialcell{Attack \\\\ (in chronological order)}}& \\multicolumn{3}{c|}{\\specialcell{Attack \\\\ Objective}} & \\multicolumn{9}{c|}{\\specialcell{Attack Input}} & \n\t\\multicolumn{5}{c|}{Assumptions} &\n\t\\multicolumn{4}{c|}{\\specialcell{Broken \\\\ Properties}} & \\multicolumn{3}{c}{\\specialcell{Malware \\\\ detector}} \\\\\n\t& \\vthead{\\sf Indiscriminate} \n\t& 
\\vthead{\\sf Targeted}\n\t& \\vthead{\\sf Availability}\n\t& \\vthead{$A_1$: Training set $D_{train}$}\n\t& \\vthead{$A_2$: Defense technique}\n\t& \\vthead{$A_3$: Feature set}\n\t& \\vthead{$A_4$: Learning algorithm}\n\t& \\vthead{$A_5$: Response}\n\t& \\vthead{$A_6$: Manipulation set}\n\t& \\vthead{$A_7$: Attack tactic}\n\t& \\vthead{$A_8$: Attack technique}\n\t& \\vthead{$A_9$: Adversarial example set}\n\t& \\vthead{{\\sf IID} assumption}\n\t& \\vthead{{\\sf Oracle} assumption}\n\t& \\vthead{{\\sf Measurability} assumption}\n\t& \\vthead{{\\sf Smoothness} assumption}\n\t& \\vthead{{\\sf Invertibility} assumption}\n\t& \\vthead{{\\sf RR}: Representation Robustness}\n\t& \\vthead{{\\sf CR}: Classification Robustness}\n\t& \\vthead{{\\sf DR}: Detection Robustness}\n\t& \\vthead{{\\sf TR}: Training Robustness}\n\t& \\vthead{Windows Program}\n\t& \\vthead{Android Package}\n\t& \\vthead{PDF}\n\t\\\\\\hline\\hline\n \\rowcolor{lightgray!30}\n Smutz and Stavrou & \n \\checkmark &&&\n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{0}} &\n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{0}} &\n $\\mathbf{M}$ &\n {\\sf OE2} &\n {\\sf MI} &\n $\\mathcal{X}_\\mathbf{M}$ &\n & & \\checkmark & & &\n & \\checkmark & & &\n & & \\checkmark\n \\\\\n Biggio et al. &\n \\checkmark &&& \n \\specialcell{\\pie{0} \\\\ \\pie{4}} & \n \\specialcell{\\pie{0} \\\\ \\pie{4}} & \n \\specialcell{\\pie{0} \\\\ \\pie{0}} & \n \\specialcell{\\pie{0} \\\\ \\pieg} & \n \\specialcell{\\pie{0} \\\\ \\pie{4}} &\n $\\mathbf{M}$ &\n ${\\sf OE2}$ &\n ${\\sf GO}$ &\n $\\mathcal{X}_{\\mathbf{M}}$ &\n & \\checkmark & \\checkmark && \\checkmark &\n & \\checkmark & &&\n & & \\checkmark \n \\\\\n \\rowcolor{lightgray!30}\n Maiorca et al. 
&\n \t\\checkmark &&& \n \t\\specialcell{\\pie{4}} & \n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{4}} &\n \\specialcell{\\pie{4}} &\n \t$\\mathcal{M}$ &\n \t${\\sf BE}$ &\n \t{\\sf MI} &\n \t$\\mathcal{Z}_{\\mathcal{M}}$ &\n \t& \\checkmark & & & &\n \t& & \\checkmark & &\n \t& & \\checkmark \n \t\\\\\n {\\v{S}}rndi\\'{c} and Laskov &\n \t\\checkmark &&& \n \t\\specialcell{\\pie{4} \\\\ \\pie{4} \\\\ \\pie{0} \\\\ \\pie{0}} & \n \\specialcell{\\pie{4} \\\\ \\pie{4} \\\\ \\pie{4} \\\\ \\pie{4}} & \n \\specialcell{\\pieg \\\\ \\pieg \\\\ \\pieg \\\\ \\pieg} & \n \\specialcell{\\pie{4} \\\\ \\pieg \\\\ \\pie{4} \\\\ \\pieg} &\n \\specialcell{\\pie{4} \\\\ \\pie{4} \\\\ \\pie{4} \\\\ \\pie{4}} &\n \t\\specialcell{$\\mathbf{M}$ \\\\ $\\mathcal{M}$ \\\\ $\\mathbf{M}$ \\\\ $\\mathcal{M}$} &\n \t\\specialcell{${\\sf BE}$ \\\\ ${\\sf BE}$ \\\\ ${\\sf OE2}$ \\\\ ${\\sf OE2}$ } &\n \t\\specialcell{${\\sf TR}$ \\\\ ${\\sf TR}$ \\\\ ${\\sf TR}$ \\\\ ${\\sf TR}$} &\n \t\\specialcell{$\\mathcal{X}_{\\mathbf{M}}$ \\\\ $\\mathcal{Z}_{\\mathcal{M}}$ \\\\ $\\mathcal{X}_{\\mathbf{M}}$ \\\\ $\\mathcal{Z}_{\\mathcal{M}}$} &\n \t& \\checkmark &&&\\checkmark& \n \t&&\\checkmark&&\n \t& & \\checkmark \n \t\\\\\n \\rowcolor{lightgray!30}\n Xu et al.~ &\n \t\\checkmark &&& \n \t\\specialcell{\\pie{4}} & \n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{4}} &\n \\specialcell{\\pie{0}} &\n \t$\\mathcal{M}$ &\n \t${\\sf BE}$ &\n \t{\\sf HS} &\n \t$\\mathcal{Z}_{\\mathcal{M}}$ &\n \t& & & & &\n \t& & \\checkmark &&\n \t& & \\checkmark \n \t\\\\\n \tCarmony et al. 
&\n \t\\checkmark &&& \n \t\\specialcell{\\pie{4}} & \n \t\\specialcell{\\pie{4}} & \n \t\\specialcell{\\pie{4}} & \n \t\\specialcell{\\pie{4}} &\n \t\\specialcell{\\pie{4}} &\n \t$\\mathcal{M}$ &\n \t${\\sf BE}$ &\n \t{\\sf MI} &\n \t$\\mathcal{Z}_{\\mathcal{M}}$ &\n \t& \\checkmark & & & &\n \t& & \\checkmark & &\n \t& & \\checkmark \n \t\\\\\n \\rowcolor{lightgray!30}\n Hu and Tan & \n \\checkmark &&& \n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{4}} &\n \\specialcell{\\pie{0}} &\n $\\mathbf{M}$ &\n ${\\sf BE}$ &\n {\\sf GM} &\n $\\mathcal{X}_{\\mathbf{M}}$ &\n \\checkmark& \\checkmark & & & \\checkmark& \n & & \\checkmark &&\n \\checkmark & & \n \\\\\n Hu and Tan & \n \\checkmark &&& \n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{4}} &\n \\specialcell{\\pie{0}} &\n $\\mathbf{M}$ &\n ${\\sf BE}$ &\n {\\sf GM} &\n $\\mathcal{X}_{\\mathbf{M}}$ &\n \\checkmark& \\checkmark & & & \\checkmark &\n & & \\checkmark &&\n \\checkmark & & \n \\\\\n \\rowcolor{lightgray!30}\n Demontis et al. &\n \\checkmark &&& \n \\specialcell{\\pie{0} \\\\ \\pie{4}} & \n \\specialcell{\\pie{0} \\\\ \\pie{4}} &\n \\specialcell{\\pie{0} \\\\ \\pie{0}} & \n \\specialcell{\\pie{0} \\\\ \\pieg} &\n \\specialcell{\\pie{0} \\\\ \\pie{4}} &\n $\\mathbf{M}$ &\n ${\\sf OE2}$ &\n {\\sf SF} &\n $\\mathcal{X}_{\\mathbf{M}}$ &\n & \\checkmark & \\checkmark & & \\checkmark &\n & \\checkmark & &&\n & \\checkmark & \n \\\\\n Grosse et al. &\n \t\\checkmark &&& \n \t\\specialcell{\\pie{4}} & \n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{0}} &\n \t$\\mathbf{M}$ &\n \t${\\sf OE2}$ &\n \t{\\sf SF} &\n \t$\\mathcal{X}_{\\mathbf{M}}$ &\n \t& \\checkmark &\\checkmark & & \\checkmark &\n \t\\checkmark & \\checkmark & &&\n \t& \\checkmark & \n \t\\\\\n \\rowcolor{lightgray!30}\n Chen et al. 
&\n \\checkmark &&& \n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{0}} &\n $\\mathbf{M}$ &\n ${\\sf OE2}$ &\n {\\sf SF} &\n $\\mathcal{X}_{\\mathbf{M}}$ &\n & \\checkmark & \\checkmark & & \\checkmark & \n & \\checkmark&&&\n \\checkmark & & \n \\\\\n Khasawneh et al. &\n \\checkmark &&& \n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{4}} & \n \\specialcell{\\pieg} &\n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{0}} &\n \\specialcell{ $\\mathbf{M}$ \\\\ $\\mathcal{M}$} &\n \\specialcell{${\\sf BE}$ \\\\ ${\\sf BE}$} &\n \\specialcell{{\\sf TR} \\\\ {\\sf TR}} &\n \\specialcell{$\\mathcal{X}_{\\mathbf{M}}$ \\\\ $\\mathcal{Z}_{\\mathcal{M}}$} &\n & \\checkmark & & & \\checkmark & \n & \\checkmark & &&\n \\checkmark & & \n \\\\\n \\rowcolor{lightgray!30}\n Dang et al. &\n \\checkmark &&& \n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{4}} &\n \\specialcell{\\pie{4}} &\n $\\mathcal{M}$ &\n ${\\sf BE}$ &\n {\\sf HS} &\n $\\mathcal{Z}_{\\mathcal{M}}$ &\n & & & & &\n & & \\checkmark &&\n & & \\checkmark \n \\\\\n Mu{\\~n}oz-Gonz{\\'a}lez et al. & \n &&\\checkmark& \n \\specialcell{\\pie{0} \\\\ \\pie{0}} & \n \\specialcell{\\pie{0} \\\\ \\pie{4}} & \n \\specialcell{\\pie{0} \\\\ \\pie{0}} & \n \\specialcell{\\pie{0} \\\\ \\pie{4}} & \n \\specialcell{\\pie{0} \\\\ \\pie{4}} &\n $\\mathbf{M}$ &\n ${\\sf OP}$ &\n {\\sf GO} &\n $\\mathcal{X}_{\\mathbf{M}}$ &\n & \\checkmark & & &\\checkmark &\n & & & \\checkmark &\n \\checkmark & & \n \\\\\n \\rowcolor{lightgray!30}\n Yang et al. 
&\n \\checkmark &&& \n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{4}} &\n \\specialcell{\\pie{0}} &\n $\\mathcal{M}$ &\n ${\\sf BE}$ &\n ${\\sf HS}$ &\n $\\mathcal{Z}_{\\mathcal{M}}$ &\n & \\checkmark & \\checkmark & & &\n & & \\checkmark &&\n & \\checkmark & \n \\\\\n Rosenberg et al. &\n \t\\checkmark &&& \n \t\\specialcell{\\pie{4}} & \n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{0}} &\n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{0}} &\n \t\\specialcell{$\\mathbf{M}$ \\\\ $\\mathcal{M}$} &\n \t\\specialcell{${\\sf BE}$ \\\\ ${\\sf BE}$} &\n \t\\specialcell{{\\sf TR} \\\\ {\\sf TR}} &\n \t\\specialcell{$\\mathcal{X}_{\\mathbf{M}}$ \\\\ $\\mathcal{Z}_{\\mathcal{M}}$} &\n \t& \\checkmark & & & \\checkmark &\n \t\\checkmark & \\checkmark & &&\n \t\\checkmark & & \n \t\\\\\n \\rowcolor{lightgray!30}\n Anderson et al. &\n \t\\checkmark &&& \n \t\\specialcell{\\pie{4}} & \n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{4}} & \n \\specialcell{\\pie{4}} &\n \\specialcell{\\pie{0}} &\n \t$\\mathcal{M}$ &\n \t${\\sf OE2}$ &\n \t{\\sf GM} &\n \t$\\mathcal{Z}_{\\mathcal{M}}$ \n \t&\\checkmark& \\checkmark& \\checkmark & & \n \t& \\checkmark & \\checkmark &&\n \t& \\checkmark & & \n \t\\\\\n \tKreuk et al. &\n \t\\checkmark &&& \n \t\\specialcell{\\pie{4}} & \n \t\\specialcell{\\pie{0}} & \n \t\\specialcell{\\pie{0}} & \n \t\\specialcell{\\pie{0}} & \n \t\\specialcell{\\pie{0}} &\n \t$\\mathbf{M}$ &\n \t${\\sf OE2}$ &\n \t{\\sf GO} &\n \t$\\mathcal{X}_{\\mathbf{M}}$ \n \t& &\\checkmark& & & \\checkmark\n \t& \\checkmark & \\checkmark &&\n \t& \\checkmark & & \n \t\\\\\n \t\\rowcolor{lightgray!30}\n \tChen et al. 
& \n \t\\checkmark &&& \n \t\\specialcell{\\pie{0}\\\\\\pie{4}\\\\ \\pie{0}} & \n \t\\specialcell{\\pie{0} \\\\ \\pie{4} \\\\ \\pie{4}} & \n \t\\specialcell{\\pie{0} \\\\ \\pie{4} \\\\ \\pieg} & \n \t\\specialcell{\\pie{0} \\\\ \\pie{0} \\\\ \\pie{0}} & \n \t\\specialcell{\\pie{0}\\\\ \\pie{0} \\\\ \\pie{0}} &\n \t$\\mathbf{M}$ &\n \t{\\sf BP} &\n \t{\\sf SF} &\n \t$\\mathcal{X}_\\mathbf{M}$ &\n \t& \\checkmark & \\checkmark & & \\checkmark & \n \t& & & \\checkmark &\n \t& \\checkmark & \n \t\\\\\n Al-Dujaili et al. & \n \t\\checkmark &&& \n \t\\specialcell{\\pie{4}} & \n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{0}} &\n \t$\\mathbf{M}$ &\n \t${\\sf OE2}$ &\n \t{\\sf GO} &\n \t$\\mathcal{X}_{\\mathbf{M}}$ &\n \t& \\checkmark & & & \\checkmark &\n \t&& \\checkmark &&\n \t\\checkmark & & \n \t\\\\\n \t\\rowcolor{lightgray!30}\n \tSuciu et al. & \n \t&\\checkmark&& \n \t\\specialcell{\\pieg \\\\ \\pie{0} \\\\ \\pie{0} \\\\ \\pie{0}} & \n \t\\specialcell{\\pie{4} \\\\ \\pie{4} \\\\ \\pie{4} \\\\ \\pie{0}} & \n \t\\specialcell{\\pie{0} \\\\ \\pieg \\\\ \\pie{0} \\\\ \\pie{0}} & \n \t\\specialcell{\\pieg \\\\ \\pieg \\\\ \\pie{4} \\\\ \\pie{0}} & \n \t\\specialcell{\\pie{4} \\\\ \\pie{4} \\\\ \\pie{4} \\\\ \\pie{0}} &\n \t\\specialcell{$\\mathbf{M}$ \\\\ $\\mathcal{M}$} &\n \t\\specialcell{${\\sf BP}$ \\\\ ${\\sf BP}$} &\n \t\\specialcell{{\\sf SF} \\\\ {\\sf SF}} &\n \t\\specialcell{$\\mathcal{X}_{\\mathbf{M}}$ \\\\ $\\mathcal{Z}_{\\mathcal{M}}$} &\n \t& \\checkmark & \\checkmark & & \\checkmark & \n \t& & &\\checkmark&\n \t& \\checkmark & \n \t\\\\\n Kolosnjaji et al. 
&\n \t\\checkmark &&& \n \t\\specialcell{\\pie{4}} & \n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{0}} &\n \t$\\mathbf{M}$ &\n \t${\\sf OE2}$ &\n \t{\\sf GO} &\n \t$\\mathcal{X}_{\\mathbf{M}}$ \n \t&& \\checkmark&& & \\checkmark\n \t& \\checkmark & \\checkmark &&\n \t& \\checkmark & & \n \t\\\\\n \\rowcolor{lightgray!30}\n Suciu et al. &\n \t\\checkmark &&& \n \t\\specialcell{\\pie{4}} & \n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{0}} & \n \\specialcell{\\pie{0}} &\n \t$\\mathbf{M}$ &\n \t${\\sf OE2}$ &\n \t{\\sf GO} &\n \t$\\mathcal{X}_{\\mathbf{M}}$ &\n \t& \\checkmark & & & \\checkmark &\n \t\\checkmark & \\checkmark &&&\n \t\\checkmark & & \n \t\\\\\n Chen et al. &\n \t\\checkmark &&& \n \t\\specialcell{\\pie{4} \\\\ \\pie{4} \\\\ \\pie{0} \\\\ \\pie{0}} &\n \\specialcell{\\pie{4} \\\\ \\pie{4} \\\\ \\pie{4} \\\\ \\pie{4}} &\n \\specialcell{\\pie{0} \\\\ \\pie{0} \\\\ \\pie{0} \\\\ \\pie{0}} &\n \\specialcell{\\pie{4} \\\\ \\pie{4} \\\\ \\pie{4} \\\\ \\pie{4}} &\n \\specialcell{\\pie{4} \\\\ \\pie{0} \\\\ \\pie{4} \\\\ \\pie{0}} &\n \t\\specialcell{$\\mathbf{M}$ \\\\ $\\mathbf{M}$} &\n \t\\specialcell{${\\sf OE1}$ \\\\ ${\\sf OE2}$} &\n \t\\specialcell{{\\sf GO} \\\\ {\\sf SF}} &\n \t\\specialcell{$\\mathcal{X}_{\\mathbf{M}}$\\\\ $\\mathcal{X}_{\\mathbf{M}}$} &\n \t& \\specialcell{\\checkmark } & \\specialcell{\\checkmark } & & \\checkmark &\n \t& \\checkmark & & &\n \t& \\checkmark & \n \t\\\\\n\t\\rowcolor{lightgray!30}\n\tPierazzi et al. 
&\n\t\\checkmark &&& \n\t\\specialcell{\\pie{0}} & \n\t\\specialcell{\\pie{0}} &\n\t\\specialcell{\\pie{0}} & \n\t\\specialcell{\\pie{0}} &\n\t\\specialcell{\\pie{0}} &\n\t$\\mathcal{M}$ &\n\t${\\sf OE2}$ &\n\t{\\sf MI} &\n\t$\\mathcal{Z}_{\\mathbf{M}}$ &\n\t& \\checkmark & & & &\n\t& & \\checkmark & & \n\t& \\checkmark & \n\t\\\\ \t \n\tLi and Li & \n\t\\checkmark &&& \n\t\\specialcell{\\pie{4}} & \n\t\\specialcell{\\pie{0}} & \n\t\\specialcell{\\pie{0}} & \n\t\\specialcell{\\pie{0}} & \n\t\\specialcell{\\pie{0}} &\n\t\\specialcell{$\\mathbf{M}$ \\\\ $\\mathcal{M}$} &\n\t\\specialcell{${\\sf OE2}$ \\\\ ${\\sf OE2}$} &\n\t\\specialcell{{\\sf MS} \\\\ {\\sf MS}} &\n\t\\specialcell{$\\mathcal{X}_{\\mathbf{M}}$ \\\\ $\\mathcal{Z}_{\\mathcal{M}}$} &\n\t& \\checkmark & & & \\checkmark &\n\t&& \\checkmark &&\n\t& \\checkmark & \n\t\\\\\\hline\n\t\\end{tabular}\n}\n\t\\label{tbl:attacks}\n\\end{table}", "id": "108d281b-7e1a-46ce-b750-a8b5a5ef8ec0", "level": "subsection", "origin_cites_number": 26, "parent_id": "e26ef155-9b9c-4a92-9ffa-e334fbe6ea87", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing Attack Literature" ] ], "subsections": [ "f3db0c13-9b45-4545-9019-43245444f75d", "6dae1197-fcde-489c-835b-7117fc4737e8", "6bd1592d-a0b4-4061-b526-553c6ebc409a", "4e7e5878-d20a-4c28-acf4-a41004766a2c", "b8ab1fcb-f749-4c70-9893-28b772af455c", "33314056-9caf-48b3-90ff-0ca708a5a80c", "c51df6e6-79f5-400d-8643-cef009ff014f", "d4bb8301-3585-4f94-b9fa-12ca7a7b0164" ], "title": "Systematizing Attack Literature" }, { "cite_extract_rate": 0.8235294117647051, "cites": [ 892, 1193, 923, 3867, 3873, 3862, 3872, 937, 3871, 3863, 9099, 3855, 3854, 3861 ], "content": ")}\nBiggio et al.~ propose solving the problem of optimal evasion attacks by leveraging gradient-based optimization techniques. They focus on high-confidence evasion attacks with small perturbations (cf. 
Definition \\ref{def:opt_evasion2}). Given a malware representation-label pair $(\\mathbf{x},y=+)$, the optimization problem specified in Eq.\\eqref{eq:a8:evs2} with respect to {\\sf GO} is instantiated as:\n\\begin{align*}\n\t\\max_{\\delta_{\\mathbf{x}}} L_\\mathcal{A}(\\hat{F}_{\\hat\\theta}(\\mathbf{x}+\\delta_{\\mathbf{x}}),y=+) & = \\min_{\\delta_{\\mathbf{x}}} \\left(L(F_\\theta(\\mathbf{x}+\\delta_{\\mathbf{x}}, y=-)) - \\beta_a\\mathcal{K}(\\mathbf{x} + \\delta_{\\mathbf{x}})\\right) \\\\ ~~\\text{s.t.}~~(\\delta_{\\mathbf{x}}\\in[\\mathbf{0},\\overline{\\mathbf{u}}]) &\\land (C(\\mathbf{x},\\mathbf{x}+\\delta_{\\mathbf{x}})\\leq m),\n\\end{align*}\nwhere $\\beta_a\\geq 0$ is a balance factor and $\\mathcal{K}$ is a density estimation function for lifting $\\mathbf{x}+\\delta_{\\mathbf{x}}$ to the populated region of benign examples. Since $\\delta_{\\mathbf{x}}\\geq \\mathbf{0}$, the manipulation only permits object injections to meet the requirement of preserving malicious functionalities. The attack is validated by using the PDF malware detector and the feature representation is the number of appearances of hand-selected keywords (e.g., JavaScript). Because the perturbation is continuous, the authors suggest searching a discrete point close to the continuous one and aligning the point with $\\nabla L_\\mathcal{A}(\\mathbf{x}+\\delta_{\\mathbf{x}},y=+)$. \nThis attack makes the invertibility Assumption \\ref{assumption:inverse} because it operates in the feature space. Experimental results show that when $\\mathcal{I}$ employs no countermeasures, knowing $\\mathcal{I}$'s feature set $S$ and learning algorithm $F$ are sufficient for $\\mathcal{A}$ to evade $\\mathcal{I}$'s detector. This attack and its variants have been shown to evade PDF malware detectors , PE malware detectors , Android malware detectors , and Flash malware detectors . 
The kernel density estimation item makes the perturbed representation $\mathbf{x}+\delta_\mathbf{x}$ similar to the representations of benign examples, explaining the successful evasion. In summary, the attack works under the {\sf Oracle}, {\sf Measurability}, and {\sf Invertibility} assumptions. $\mathcal{A}$'s input can be characterized as $(a_1,\ldots,a_5|A_6,\cdots,A_9) = (1,1,1,1,1|\mathbf{M},{\sf OE2},{\sf GO},\mathcal{X}_{\mathbf{M}}) \lor (0,0,1,*,0|\mathbf{M},{\sf OE2},{\sf GO},\mathcal{X}_{\mathbf{M}})$ and $\mathcal{A}$ breaks the {\sf CR} property.
Al-Dujaili et al.~ propose evasion attacks against DNN-based malware detectors in the feature space. In this attack,
$\mathcal{A}$ generates adversarial examples with possibly large perturbations. More precisely, given a representation-label pair $(\mathbf{x},y=+)$, the optimization problem of Eq.\eqref{eq:a8:evs2} with respect to {\sf GO} is instantiated as:
	$\max_{\delta_{\mathbf{x}}} L(F_{\theta}(\mathbf{x}+\delta_{\mathbf{x}}),y=+) ~~\text{s.t.}~~ \delta_{\mathbf{x}} \in [\mathbf{0}, \mathbf{1}-\mathbf{x}].$
The attack has four variants, each perturbing the representation in a different direction (e.g., the normalized gradient of the loss function using the $\ell_\infty$ norm). A ``random'' rounding operation is used to map continuous perturbations into a discrete domain. Compared with basic rounding (which returns 0 if the input is smaller than 0.5, and returns 1 otherwise), ``random'' rounding samples the rounding threshold uniformly from the interval $[0,1]$. For binary feature representations, the manipulation set $\mathbf{M}_\mathbf{x}=[\mathbf{0}, \mathbf{1}-\mathbf{x}]$ ensures that features can only be flipped from 0 to 1. The effectiveness of the attack is validated using a Windows malware detector in the feature space. 
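A minimal sketch of one such box-constrained ascent step with ``random'' rounding follows; the logistic surrogate, step size, and feature values are placeholders of ours, not the DNNs attacked in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of a box-constrained gradient evasion step in the feature space,
# in the spirit of the attack described above.  The logistic surrogate,
# step size, and feature values are placeholders of ours.

w, b = np.array([-2.0, 1.5, -0.5, 2.5]), 0.1   # surrogate detector
x = np.array([0.0, 1.0, 0.0, 1.0])             # binary malware features

def grad_loss(v):
    """Gradient (w.r.t. the input) of the cross-entropy loss at y=+."""
    p = 1.0 / (1.0 + np.exp(-(w @ v + b)))     # P(malicious | v)
    return (p - 1.0) * w

# Ascend the attacker's loss, then project onto the manipulation set
# M_x = [0, 1 - x]: only 0 -> 1 flips are permitted.
delta = np.clip(0.8 * grad_loss(x), 0.0, 1.0 - x)

# "Random" rounding: per-feature thresholds drawn uniformly from [0, 1]
# instead of the fixed 0.5 used by basic rounding.
delta_bin = (delta > rng.uniform(size=delta.shape)).astype(float)
x_adv = np.clip(x + delta_bin, 0.0, 1.0)
```

The projection onto $[\mathbf{0},\mathbf{1}-\mathbf{x}]$ is what guarantees that only 0-to-1 flips occur, so features already present in the malware are never removed.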
In summary, the attack works under the {\sf Oracle} and {\sf Invertibility} assumptions with $\mathcal{A}$ input $(a_1,\ldots,a_5|A_6,\cdots,A_9) =(0,1,1,1,1|\mathbf{M}, {\sf OE2},{\sf GO},\mathcal{X}_\mathbf{M})$ and breaks the {\sf DR} property.
Kreuk et al. propose an evasion attack in the feature space against MalConv, which is an end-to-end Windows PE malware detector (as reviewed in Section \ref{sec:ml}). Given a malware embedding code $\mathbf{x}$, the optimization problem of Eq.\eqref{eq:a8:evs2} with respect to {\sf GO} is instantiated as:
	$\min_{\delta_{\mathbf{x}}} L(F_{\theta}([\mathbf{x}|\delta_{\mathbf{x}}]),y=-) ~~\text{s.t.}~~ \Vert\delta_{\mathbf{x}}\Vert_p \leq \epsilon,$
where $|$ denotes concatenation and $\Vert\cdot\Vert_p$ is the $\ell_p$ norm with $p\geq 1$. Because MalConv is learned from sequential data, perturbation means appending some content to the end of a PE file. Perturbations are generated in a single step by following the direction of the $\ell_\infty$ or $\ell_2$ normalized gradients of the loss function. For instance, the attack based on the $\ell_{\infty}$ norm is
$\tilde{\mathbf{x}}'=\mathbf{x} - \epsilon\cdot\sign(\nabla_{\mathbf{x}} L(F_\theta(\mathbf{x}), -))$, where $\sign(x)=+1~(-1)$ if $x\geq0~(x<0)$.
Since the embedding operation uses a look-up table to map discrete values (0, 1, $\ldots$, 255) to the learned real-valued vectors, the attack uses a nearest neighbor search to look for the learned embedding code close to $\tilde{\mathbf{x}}'$. 
In summary, the attack works under the {\sf Oracle} and {\sf Invertibility} assumptions with input $(a_1,\cdots,a_5|A_6,\cdots,A_9)=(0,1,1,1,1|\mathbf{M},{\sf OE2},{\sf GO},\mathcal{X}_\mathbf{M})$ and breaks the {\sf RR} and {\sf CR} properties.
Kolosnjaji et al. and Suciu et al. independently propose gradient-based attacks in the feature space to evade MalConv.
Both studies also use the loss function exploited by Kreuk et al. Kolosnjaji et al. use the manipulation set $\mathcal{M}$ corresponding to appending instructions at the end of a file. This attack proceeds iteratively and starts with randomly initialized perturbations. In each iteration, continuous perturbations are updated in the direction of the $\ell_2$ normalized gradient of the loss function with respect to the input, and then a nearest neighbor search is applied to obtain discrete perturbations. Suciu et al. perturb embedding codes in the direction of the $\ell_\infty$ normalized gradient of the loss function, but insert instructions in the middle of a PE file (e.g., between PE {\em sections}), noting that content appended at the end could be truncated by MalConv. Both attacks work under the {\sf Oracle} and {\sf Invertibility} assumptions with input $(a_1,\cdots,a_5|A_6,\cdots,A_9)=(0,1,1,1,1|\mathbf{M},{\sf OE2},{\sf GO},\mathcal{X}_\mathbf{M})$ and break the {\sf RR} and {\sf CR} properties.
Mu{\~n}oz-Gonz{\'a}lez et al. propose the optimal poisoning {\sf OP} attack in the feature space (Definition \ref{def:optimal poisoning attack}), which is NP-hard. In this case, the optimization problem of Eq.\eqref{eq:a8:poi} is relaxed by supposing that the classifier is linear, rendering the optimization problem tractable. The attack is waged against Windows PE malware detectors. The feature set includes API calls as well as actions and modifications in the file system; each file is represented by a binary vector. The attack has two variants: one uses white-box input, where $\mathcal{A}$ derives $D_{poison}'$ from $\mathcal{I}$'s detector $f$; the other uses grey-box input, where $\mathcal{A}$ knows $\mathcal{I}$'s training set as well as feature set and trains a surrogate detector.
The attack works under the {\\sf Oracle} and {\\sf Invertibility} assumptions with input $(a_1,\\ldots,a_5|A_6,\\cdots,A_9) =(1,1,1,1,1|\\mathbf{M},{\\sf OP}, {\\sf GO}, \\mathcal{X}_\\mathbf{M})\\lor(1,0,1,0,0|\\mathbf{M},{\\sf OP}, {\\sf GO},\\mathcal{X}_\\mathbf{M})$ and breaks {\\sf TR}.", "id": "f3db0c13-9b45-4545-9019-43245444f75d", "level": "subsubsection", "origin_cites_number": 17, "parent_id": "108d281b-7e1a-46ce-b750-a8b5a5ef8ec0", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing Attack Literature" ], [ "subsubsection", "Attacks using Gradient-based Optimization ({\\sf GO" ] ], "subsections": [], "title": "Attacks using Gradient-based Optimization ({\\sf GO" }, { "cite_extract_rate": 0.375, "cites": [ 894, 890, 3864, 7754, 3863, 7154 ], "content": ")}\nDemontis et al. propose the optimal evasion {\\sf OE2} in the feature space to perturb important features in terms of their weights in the linear function $\\varphi(\\mathbf{x})={\\mathbf{w}}^\\top\\mathbf{x} + b$, where $\\mathbf{w}=(w_1,w_2,\\cdots,w_d)$ is a weight vector, $b$ is the bias, and $d$ is the dimension of feature space. The attack is waged against Drebin malware detector (which is reviewed in Section \\ref{sec:ml}). $\\mathcal{A}$ manipulates the $x_i$'s with largest $|w_i|$'s as follows:\nflip $x_i=1$ to $x_i=0$ if $w_i > 0$, flip $x_i=0$ to $x_i=1$ if $w_i < 0$, and do nothing otherwise, while obeying the manipulation set $\\mathbf{M}$ corresponding to the injection or removal of features. The attack works under the {\\sf Oracle}, {\\sf Measurability}, and {\\sf Invertiblity} assumptions with\ninput $(a_1,\\cdots,a_5|A_6,\\cdots,A_9)=(1,1,1,1,1|\\mathbf{M},{\\sf OE2},{\\sf SF},\\mathcal{X}_\\mathbf{M}) \\lor (0,0,1,*,0|\\mathbf{M},{\\sf OE2},{\\sf SF},\\mathcal{X}_\\mathbf{M})$ and breaks the {\\sf CR} property.\nGrosse et al. 
propose a variant of the Jacobian-based Saliency Map Attack (JSMA) in the feature space against the Drebin malware detector (which is reviewed in Section \ref{sec:ml}). Instead of SVM, a Deep Neural Network (DNN) is used to build the detector. Important features are identified by leveraging the gradients of the softmax output of a malware example with respect to the input: a large gradient value indicates a highly important feature. $\mathcal{A}$ only injects manifest features to manipulate Android Packages and generates adversarial files from $\mathcal{I}$'s detector $f$. The attack works under the {\sf Oracle}, {\sf Measurability}, and {\sf Invertibility} assumptions
with input $(a_1,\cdots,a_5|A_6,\cdots,A_9)=(0,1,1,1,1|\mathbf{M},{\sf OE2},{\sf SF},\mathcal{X}_\mathbf{M})$ and breaks the {\sf RR} and {\sf CR} properties.
Chen et al. propose an evasion attack in the feature space by perturbing the important features derived from a wrapper-based feature selection algorithm. The attacker's loss function $L_\mathcal{A}$ has two parts: (i) the classification error in the mean squared loss and (ii) the manipulation cost $C(\mathbf{x}, \mathbf{x}')=\sum_{i=1}^{d}c_i|x_i - x_i'|$, where $\mathbf{x}=(x_1,\ldots,x_d)$, $\mathbf{x}'=(x_1', \ldots,x_d')$, and $c_i$ is the hardness of perturbing the $i$th feature while preserving malware's functionality. The attack is waged against a Windows PE malware detector that uses hand-crafted Windows API calls as features and the binary feature representation. However, there are no details about the composition of the manipulation set. This attack works under the {\sf Oracle}, {\sf Measurability}, and {\sf Invertibility} assumptions with input $(a_1,\ldots,a_5|A_6,\cdots,A_9)=(1,1,1,1,1|\mathbf{M},{\sf OE2},{\sf SF}, \mathcal{X}_\mathbf{M})$ and breaks {\sf CR}.
Chen et al. propose evasion attacks in the feature space against two Android malware detectors, MaMaDroid and Drebin.
The manipulation set $\\mathbf{M}$ corresponds to the injection of manifest features (e.g., {\\em activities}) and \nAPI calls. $\\mathcal{A}$ evades MaMaDroid by using the optimal evasion ${\\sf OE1}$ (Definition \\ref{def:opt_evasion1}) and ${\\sf OE2}$ (Definition \\ref{def:opt_evasion2}), and evades Drebin by using ${\\sf OE2}$. The optimization problem of ${\\sf OE1}$ Eq.\\eqref{eq:a8:evs1} is solved using an advanced gradient-based method known as C\\&W . ${\\sf OE2}$ is solved using JSMA . Because JSMA perturbs sensitive features, we categorize this attack into the {\\sf SF} group. The ${\\sf OE2}$ attack works under the {\\sf Oracle}, {\\sf Measurability} and {\\sf Invertibility} assumptions,\nwith four kinds of input \n$(a_1,\\cdots,a_5|A_6,\\cdots,A_9)=(0,0,1,0,0|\\mathbf{M},{\\sf OE2}, {\\sf SF},\\mathcal{X}_\\mathbf{M}) \\lor (0,0,1,0,1|\\mathbf{M},{\\sf OE2},{\\sf SF},\\mathcal{X}_\\mathbf{M}) \\lor (1,0,1,0,0|\\mathbf{M},{\\sf OE2},{\\sf SF}, \\mathcal{X}_\\mathbf{M}) \\lor (1,0,1,0,1|\\mathbf{M},{\\sf OE2},{\\sf SF},\\mathcal{X}_\\mathbf{M})$,\nand breaks the {\\sf CR} property. The ${\\sf OE1}$ attack works under the same assumptions with the same input except using attack technique {\\sf GO}, and breaks the {\\sf CR} property.\nChen et al. propose a basic poisoning {\\sf BP} attack in the feature space against Android malware detectors. The feature set contains syntax features (e.g., permission, hardware, {API}) and semantic features (e.g., sequence of pre-determined program behaviors such as {\\tt getDevicedID}$\\rightarrow$ URL$\\rightarrow${\\tt openConnection}). The ML algorithm used is SVM, random forest, or $K$-Nearest Neighbor (KNN) . The malware representations are perturbed using a JSMA variant against the SVM-based classifier (while noting JSMA is applicable neither to random forests nor to KNN because they are gradient-free). Feature manipulation set ${\\bf M}$ corresponds to the injection of syntax features. 
$\\mathcal{A}$ poisons $\\mathcal{I}$'s training set by injecting perturbed perturbations with label $-$. The attack works under the {\\sf Oracle}, {\\sf Measurability}, and {\\sf Invertibility} assumptions with input\n$(a_1,\\ldots,a_5|A_6,\\cdots,A_9)=(1,1,1,1,1|\\mathbf{M},{\\sf BP},{\\sf SF},\\mathcal{X}_\\mathbf{M}) \\lor (0,0,0,1,1|\\mathbf{M},{\\sf BP},{\\sf SF},\\mathcal{X}_\\mathbf{M}) \\lor (1,0,*,1,1|\\mathbf{M},{\\sf BP},{\\sf SF},\\mathcal{X}_\\mathbf{M})$ and breaks {\\sf TR}.\nSuciu et al. propose a basic poisoning attack in both feature and problem spaces. The authors obtain $D'_{poison}$ by applying small manipulations to non-adversarial benign files and then obtain their labels as given by VirusTotal service . \n$\\mathcal{A}$'s objective is to make $\\mathcal{I}$'s classifier $f$ mis-classify a targeted malware file $z_{mal}$ as benign. $\\mathcal{A}$ proceeds as follow: (i) obtain an initial benign file $z_{ben}$, where $z_{ben} \\approx z_{mal}$ in the feature space with respect to the $\\ell_1$ norm; (ii) use the JSMA method to manipulate $z_{ben}$ to $z'_{ben}$ by making a small perturbation so that they have similar feature representations; (iii) add $z'_{ben}$ and its label obtained from VirusTotal to $D'_{poison}$ and use $D_{train} \\cup D'_{poison}$ to train classifier $f'$ (Definition \\ref{definition:sf}); (iv) undo the addition if $z'_{ben}$ lowers the classification accuracy significantly, and accept it otherwise. The attack is waged against the Drebin malware detector and the manipulation set corresponds to the feature injection of permission, API, and strings. 
This attack works under the {\\sf Oracle}, {\\sf Measurability}, and {\\sf Inversiability} assumptions with input \n$(a_1,\\ldots,a_5|A_6,\\ldots,A_9) =(*,0,1,*,0|\\mathbf{M},{\\sf BP},{\\sf SF},\\mathcal{X}_\\mathbf{M})\\lor(1,1,1,1,1|\\mathbf{M},{\\sf BP},{\\sf SF},\\mathcal{X}_\\mathbf{M})\\lor(1,0,*,*,0|\\mathbf{M},{\\sf BP},{\\sf SF},\\mathcal{X}_\\mathbf{M}) \\lor (1,0,1,0,0|\\mathbf{M},{\\sf BP},{\\sf SF},\\mathcal{X}_\\mathbf{M})$\nand breaks the {\\sf TR} property. The study generates adversarial malware examples, but does not test their malicious functionalities.", "id": "6dae1197-fcde-489c-835b-7117fc4737e8", "level": "subsubsection", "origin_cites_number": 16, "parent_id": "108d281b-7e1a-46ce-b750-a8b5a5ef8ec0", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing Attack Literature" ], [ "subsubsection", "Attacks using Sensitive Features ({\\sf SF" ] ], "subsections": [], "title": "Attacks using Sensitive Features ({\\sf SF" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 7754 ], "content": ")} Smutz and Stavrou propose a mimicry attack in the feature space to modify features of a malicious file to mimic benign ones, where $\\mathcal{A}$ knows $\\mathcal{I}$'s classifier $f$. Discriminative features are identified by observing their impact on classification accuracy. The attack perturbs features of malware examples by replacing their value with the mean of the benign examples. The attack is leveraged to estimate the robustness of PDF malware detectors without considering the preservation of malware functionality. The attack works under the {\\sf Measurability} assumption with input $(a_1,\\ldots,a_5|A_6,\\cdots,A_9) = (1,1,1,1,1|\\mathbf{M},{\\sf OE2},{\\sf MI},\\mathcal{X})$ and breaks the {\\sf CR} property.\nMaiorca et al. propose a reverse mimicry attack against PDF malware detectors in the problem space. 
Instead of modifying malicious files to mimic benign ones, $\mathcal{A}$ embeds a malicious payload (e.g., JavaScript code) into a benign file. The attack can be enhanced by using parser confusion strategies, which cause the injected objects to be neglected by feature extractors when rendered by PDF readers . The attack works under the {\sf Oracle} assumption with input $(a_1,\ldots,a_5|A_6,\cdots,A_9) = (0,0,0,0,0|$ $\mathcal{M},{\sf BE},{\sf MI},\mathcal{Z}_\mathcal{M})$ and breaks the {\sf DR} property. \nPierazzi et al. propose a white-box evasion attack in the problem space against the Drebin malware detector and an enhanced version of it . They intend to bridge the gap between the attacks in the problem space and the attacks in the feature space.\nIn addition, four realistic constraints are imposed on the manipulation set $\mathcal{M}$, namely available transformation, preserved semantics, robustness to preprocessing, and plausibility. In order to cope with the side-effect features when incorporating gradient information of $\mathcal{I}$'s classifier, the attacker first harvests a set of manipulations from benign files; \nmanipulations in the problem space are then used to query $\mathcal{I}$'s feature extraction to obtain perturbations in the feature space;\nan adversarial malware example is obtained by using the manipulations corresponding to the perturbations that have a high impact on the classification accuracy. 
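A simplified feature-space rendering of this harvest-then-select loop follows. This is our own sketch, not the authors' implementation: a linear score stands in for the defender's classifier, and each harvested manipulation is reduced to its additive side effect on the feature vector.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10
w = rng.normal(size=d)                    # stand-in for I's linear classifier
score = lambda v: float(w @ v)            # > 0 means classified as malicious

x = np.where(w > 0, 1.0, 0.0)             # malware representation (score >= 0)

# "Harvest": each entry models the feature-space side effect of transplanting
# one benign code slice; here it can only inject features (non-negative
# entries), and only ones the toy classifier weights negatively.
harvest = [np.abs(rng.normal(size=d)) * (w < 0) for _ in range(20)]

x_adv = x.copy()
for _ in range(10):                       # greedily apply the best transplant
    if score(x_adv) <= 0:
        break
    best = min(harvest, key=lambda m: score(x_adv + m))
    x_adv = x_adv + best
```

Because the sketched manipulations are purely additive, `x_adv` never removes a feature of `x`, mirroring the injection-only nature of many problem-space transformations.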
This attack works under the {\\sf Oracle} assumption with input $(a_1,\\cdots,a_5|A_6,\\cdots,A_9)=(1,1,1,1,1|\\mathcal{M},{\\sf OE2},{\\sf MI},\\mathcal{Z}_\\mathcal{M})$ and breaks the {\\sf DR} property.", "id": "6bd1592d-a0b4-4061-b526-553c6ebc409a", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "108d281b-7e1a-46ce-b750-a8b5a5ef8ec0", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing Attack Literature" ], [ "subsubsection", "Attacks using MImicry ({\\sf MI" ] ], "subsections": [], "title": "Attacks using MImicry ({\\sf MI" }, { "cite_extract_rate": 0, "cites": [], "content": ")} \n{\\v{S}}rndi\\'{c} and Laskov investigate the mimicry attack and the aforementioned gradient descent and kernel density estimation attack against the PDFrate service, where $\\mathcal{A}$ knows some features used by $\\mathcal{I}$. $\\mathcal{A}$ makes the representation of an adversarial malware example similar to a benign one. Manipulation set $\\mathcal{M}$ corresponds to adding objects into PDF files. Both attacks perturb feature vectors against a surrogate model, and then map the perturbed feature representations to the problem space by injecting manipulations between the body and the trailer of a PDF file. For the mimicry attack, $\\mathcal{A}$ uses $N_{ben}>0$ benign examples to guide manipulations, resulting in $N_{ben}$ perturbed examples. The example incurring the highest classification error is used as an adversarial example. 
The attack works under the {\\sf Oracle} and {\\sf Invertibility} assumptions with input \n$( a_1,\\ldots,a_5|A_6,\\ldots,A_9) =(0,0,*,0,0|\\mathbf{M},{\\sf BE},{\\sf TR},\\mathcal{X}_\\mathbf{M})\\lor(0,0,*,*,0|\\mathbf{M},{\\sf BE},{\\sf TR},\\mathcal{X}_\\mathbf{M})\\lor(1,0,*,0,0|\\mathbf{M},{\\sf BE},{\\sf TR},\\mathcal{X}_\\mathbf{M})\\lor(1,0,*,*,0|\\mathbf{M},{\\sf BE},{\\sf TR},\\mathcal{X}_\\mathbf{M})\\lor(0,0,*,0,0|\\mathcal{M},{\\sf BE},{\\sf TR},\\mathcal{Z}_\\mathcal{M})\\lor(0,0,*,*,0|\\mathcal{M},$ ${\\sf BE},{\\sf TR},\\mathcal{Z}_\\mathcal{M})\\lor(1,0,*,0,0|\\mathcal{M},{\\sf BE},{\\sf TR},\\mathcal{Z}_\\mathcal{M})\\lor(1,0,*,*,0|\\mathcal{M},{\\sf BE},{\\sf TR},\\mathcal{Z}_\\mathcal{M})$\nand breaks {\\sf DR}. The gradient-based attack neglects the constraint of small perturbations and works under the same assumptions with the same input except using the attack technique {\\sf OE2}.\nKhasawneh et al. propose an evasion attack in both the feature and problem spaces against malware detectors learned from dynamic hardware features (e.g., instruction frequency), where $\\mathcal{A}$ knows some features used by $\\mathcal{I}$. The attack proceeds as follows. $\\mathcal{A}$ first queries $\\mathcal{I}$'s classifier to obtain a surrogate model and then generates adversarial files against the surrogate model. Manipulation set $\\mathcal{M}$ corresponds to the injection of some features because the others (e.g., memory access) are uncontrollable. Perturbations are conducted to the important features that are identified by large weights in the model. The attack works under the {\\sf Oracle} and {\\sf Invertibility} assumptions with input $( a_1,\\ldots,a_5|A_6,\\cdots,A_9) =(0,0,*,0,1|\\mathbf{M},{\\sf BE},{\\sf TR},\\mathcal{X}_\\mathbf{M})\\lor(0,0,*,0,1|\\mathcal{M},{\\sf BE},{\\sf TR},\\mathcal{Z}_\\mathcal{M})$ and breaks the {\\sf CR} property.\nRosenberg et al. 
propose an evasion attack in both the feature and problem spaces against a Recurrent Neural Network (RNN) based surrogate model, which is learned from API call sequences. In this attack, $\\mathcal{A}$'s training data is different from $\\mathcal{I}$'s, but the labels are obtained by querying $\\mathcal{I}$'s detector. In order to reduce the number of queries to $\\mathcal{I}$'s detector, $\\mathcal{A}$ augments its training data using the Jacobian-based augmentation technique and modifies the API sequence of an example in the direction of the $\\ell_\\infty$ normalized gradient of the loss function. Manipulation set ${\\bf M}$ corresponds to inserting no-op API calls. Experimental results show that adversarial examples generated from a surrogate RNN model can evade SVM, DNN, and RNN detectors. The attack works under the {\\sf Oracle} and {\\sf Invertibility} assumptions with input $( a_1,\\ldots,a_5|A_6,\\cdots,A_9) =(0,0,1,0,1|\\mathbf{M},{\\sf BE},{\\sf TR},\\mathcal{X}_\\mathbf{M}) \\lor (0,0,1,0,1|\\mathcal{M},{\\sf BE},{\\sf TR},\\mathcal{Z}_\\mathcal{M})$ and breaks the {\\sf RR} and {\\sf CR} properties.", "id": "4e7e5878-d20a-4c28-acf4-a41004766a2c", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "108d281b-7e1a-46ce-b750-a8b5a5ef8ec0", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing Attack Literature" ], [ "subsubsection", "Attacks using TRansferability ({\\sf TR" ] ], "subsections": [], "title": "Attacks using TRansferability ({\\sf TR" }, { "cite_extract_rate": 0.2, "cites": [ 3857 ], "content": ")}\nXu et al.~ propose black-box evasion attacks in the problem space against two PDF malware detectors known as PDFrate and Hidost , respectively. 
Given a malicious file $z$, $\\mathcal{A}$ uses a genetic algorithm to iteratively generate $z'$ from $z$ as follows: (i) $\\mathcal{A}$ manipulates a set of candidates (or $z$ in the initial iteration) via object deletion, insertion, or replacement. (ii) $\\mathcal{A}$ queries these variants to $\\mathcal{O}$ and $f$. (iii) $\\mathcal{A}$ succeeds when obtaining a successful adversarial example $z'$, namely $({\\tt True} \\leftarrow \\mathcal{O}(z,z')) \\land (-\\leftarrow f(z'))$; otherwise, $\\mathcal{A}$ uses a score function to select candidates for the next iteration or aborts after reaching a threshold number of iterations. The score function $h$ varies with classifiers; for PDFrate, $h(\\mathcal{O},f,z,z')=0.5-f(z')$ if $\\mathcal{O}(z,z')={\\sf true}$, and returns -0.5 if $\\mathcal{O}(z,z')={\\sf false}$. This attack models an {\\sf Oracle} and works with the input $(a_1,\\cdots, a_5|A_6,\\cdots,A_9) = (0,0,0,0,1|\\mathcal{M},{\\sf BE},{\\sf HS},\\mathcal{Z}_\\mathcal{M})$ and breaks the {\\sf DR} property.\nYang et al. propose evasion attacks against Android malware detectors in the problem space. In this attack, $\\mathcal{A}$ also uses a genetic algorithm to perturb a malware example $z$ iteratively. In each iteration, $\\mathcal{A}$ extracts some features and calculates similarity scores between the malicious APKs in the feature space; the features that have high impact on the similarity scores are selected; the manipulations are to perturb the selected features.\nThe attack works under the {\\sf Oracle} and {\\sf Measurability} assumptions with input $(a_1,\\cdots, a_5|A_6,\\cdots,A_9) = (0,0,0,0,1|\\mathcal{M},\\mathcal{Z},{\\sf BE}, {\\sf HS},\\mathcal{Z}_\\mathcal{M})$ and breaks the {\\sf DR} property.\nDang et al. propose a black-box evasion attack against malware detectors (e.g., PDFrate) in the problem space. Given a malicious file $z$, $\\mathcal{A}$ uses the hill-climbing algorithm to iteratively generate adversarial file $z'$ from $z$. 
In each iteration, $\\mathcal{A}$ generates a path of variants sequentially, each of which is perturbed from its predecessor using manipulations corresponding to object deletion, insertion, or replacement. A score function $h$ is leveraged to select candidates, such as $h(\\mathcal{O},f,z,z')={\\sf mal}_{z'}-{\\sf clf}_{z'}$ or $h(\\mathcal{O},f,z,z')={\\sf mal}_{z'}/{\\sf clf}_{z'}$, where ${\\sf mal}_{z'}$ denotes the length of the first example turned from malicious to benign (obtaining by using an oracle) on the manipulation path (cf. Definition \\ref{definition:hs}) and ${\\sf clf}_{z'}$ denotes the length of the first malware example that has successfully misled the classifier $f$. Both examples of interest are obtained by a binary search, effectively reducing the number of queries to oracle $\\mathcal{O}$ and $f$. The attack models an {\\sf Oracle} and works with input $(a_1,\\cdots, a_5|A_6,\\cdots,A_9) = (0,0,0,0,0|\\mathcal{M},{\\sf BE},{\\sf HS},\\mathcal{Z}_{\\mathcal{M}})$ and breaks the {\\sf DR} property.", "id": "b8ab1fcb-f749-4c70-9893-28b772af455c", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "108d281b-7e1a-46ce-b750-a8b5a5ef8ec0", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing Attack Literature" ], [ "subsubsection", "Attacks using Heuristic Search ({\\sf HS" ] ], "subsections": [], "title": "Attacks using Heuristic Search ({\\sf HS" }, { "cite_extract_rate": 0.5, "cites": [ 243, 6172 ], "content": ")}\nHu and Tan \\cite {Hu2017} propose an evasion attack against Windows malware detectors in the feature space, by using Generative Adversarial Networks (GAN) . In this attack, $\\mathcal{A}$ modifies the binary representation of Windows API calls made by malicious files, namely flipping some feature values from 0 to 1. 
$\mathcal{A}$ learns a generator $G_{\theta_{g}}$ and a discriminator from $\mathcal{A}$'s training dataset. The discriminator is a surrogate detector learned from feature vectors corresponding to $\mathcal{A}$'s benign files and those produced by $G_{\theta_{g}}$, along with labels obtained by querying $\mathcal{I}$'s detector $f$. An adversarial example feature vector is generated by using\n$\mathbf{x}' = \max(\mathbf{x}, \round(G_{\theta_{g}}(\mathbf{x}, \mathbf{a})))$,\nwhere $\mathbf{a}$ is a noise vector, $\round$ is the round function, and $\max$ means element-wise maximum.\nHu and Tan~ also propose another evasion attack using the Seq2Seq model . Both attacks work under the {\sf IID}, {\sf Oracle} and {\sf Invertibility} assumptions with input $(a_1,\cdots,a_5|A_6,\cdots,A_9)=(0,0,1,0,1|\mathbf{M},{\sf BE},{\sf GM},\mathcal{X}_\mathbf{M})$ and break the {\sf DR} property.\nAnderson et al. propose a Reinforcement Learning (RL)-based evasion attack against Windows PE malware detectors in the problem space. Manipulation set $\mathcal{M}$ is the RL action space, which includes bytecode injections (e.g., API insertion) and bytecode deletions. Attacker $\mathcal{A}$ learns an RL agent on $\mathcal{A}$'s data, with labels obtained by querying defender $\mathcal{I}$'s detector $f$. The learned agent predicts manipulations sequentially for a given malware example. Moreover, $\mathcal{A}$ is restricted to applying only a small number of manipulations to a malicious PE file. Experimental results show that the attack is not as effective as others (e.g., gradient-based methods). 
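The agent-learns-manipulations idea can be illustrated with a deliberately tiny sketch of ours, not the authors' system: a bandit-style value update replaces the full RL machinery, the action names are invented, and the black-box detector is a synthetic stand-in.

```python
import random

random.seed(0)
ACTIONS = ["insert_api", "add_section", "pad_overlay", "rename_section"]

def detector(applied):                    # black-box stand-in for I's detector
    return "benign" if "pad_overlay" in applied else "malicious"

Q = {a: 0.0 for a in ACTIONS}             # estimated value of each action
N = {a: 0 for a in ACTIONS}               # per-action update counts

for episode in range(200):
    applied, reward = [], 0.0
    for _ in range(4):                    # small manipulation budget
        if random.random() < 0.2:         # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(Q, key=Q.get)
        applied.append(a)
        if detector(applied) == "benign":
            reward = 1.0                  # evasion succeeded
            break
    for a in set(applied):                # bandit-style running-average update
        N[a] += 1
        Q[a] += (reward - Q[a]) / N[a]
```

After training, the learned values `Q` bias the agent toward the manipulations that historically led to evasion, which is the core of the approach; the real attack additionally conditions the policy on features of the current PE file.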
The attack works under the {\\sf Oracle} and {\\sf Measurability} assumptions with input $(a_1,\\cdots, a_5|A_6,\\cdots,A_9) = (0,0,0,0,1|\\mathcal{M},{\\sf OE2},{\\sf GM},\\mathcal{Z}_\\mathcal{M})$ and breaks the {\\sf RR} and {\\sf CR} properties.", "id": "33314056-9caf-48b3-90ff-0ca708a5a80c", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "108d281b-7e1a-46ce-b750-a8b5a5ef8ec0", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing Attack Literature" ], [ "subsubsection", "Attacks using Generative Model ({\\sf GM" ] ], "subsections": [], "title": "Attacks using Generative Model ({\\sf GM" }, { "cite_extract_rate": 1, "cites": [ 9099 ], "content": ")} \nLi and Li propose evasion attacks against DNN-based Android malware detectors in both feature and problem spaces. Given four gradient-based attack methods, the attack looks for the best one to perturb malware representations. $\\mathcal{A}$ can iteratively perform this strategy to modify the example obtained in the previous iteration. Experimental results show that the mixture of attacks can evade malware detectors effectively. 
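The iterate-and-take-the-best strategy can be sketched as follows. This is a toy of ours: three hand-written perturbation rules against a linear score stand in for the gradient-based methods and the DNN, and the sketch ignores the manipulation-set constraints of the real attack.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 12
w = rng.normal(size=d)
f = lambda v: float(w @ v)                # victim score, > 0 = malicious

step_sign = lambda v: v - 0.1 * np.sign(w)   # FGSM-like signed step
step_grad = lambda v: v - 0.1 * w            # raw-gradient step

def step_topk(v):
    # perturb only the single most sensitive feature
    u = v.copy()
    i = int(np.argmax(np.abs(w)))
    u[i] -= 0.5 * np.sign(w[i])
    return u

attacks = [step_sign, step_grad, step_topk]

x = np.ones(d)
x_adv = x.copy()
for _ in range(5):                        # iterate the mixture ("max") strategy
    if f(x_adv) <= 0:
        break
    # apply every attack, keep the most evasive result for the next round
    x_adv = min((a(x_adv) for a in attacks), key=f)
```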
The attack works under the {\\sf IID}, {\\sf Oracle} and {\\sf Invertibility} assumptions with input $(a_1,\\cdots, a_5|A_6,\\cdots,A_9) = (1,1,1,1,1|\\mathbf{M},{\\sf BE},{\\sf MS},\\mathcal{X}_\\mathbf{M})\\lor(1,1,1,1,1|\\mathcal{M},{\\sf BE},{\\sf MS},\\mathcal{Z}_\\mathcal{M})$ and breaks the {\\sf DR} property.", "id": "c51df6e6-79f5-400d-8643-cef009ff014f", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "108d281b-7e1a-46ce-b750-a8b5a5ef8ec0", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing Attack Literature" ], [ "subsubsection", "Attacks using Mixture Strategy ({\\sf MS" ] ], "subsections": [], "title": "Attacks using Mixture Strategy ({\\sf MS" }, { "cite_extract_rate": 0, "cites": [], "content": "We summarize the preceding reviews with the following observations.\n(i) Indiscriminate attacks have been much more extensively investigated than targeted attacks and availability attacks. (ii) Evasion attacks have been much more extensively studied than poisoning attacks. (iii) The {\\sf Oracle} assumption has been widely made. \nIn addition, we draw the following insights.\n\\begin{insight} \n(i) Knowing the defender's feature set is critical to the success of transfer attacks, highlighting the importance of keeping the defender's feature set secret (e.g., randomizing the defender's feature set).\n(ii) The effectiveness of evasion attacks largely depends on the attacker's degree of freedom in conducting manipulations in the problem space (i.e., a smaller degree of freedom means it is harder for the attack to succeed).\n\\end{insight}\n\\ignore{\nIn order to draw further insights, we propose applying ML to the systematized structured data in Table \\ref{tbl:attacks}. We propose formulating {\\em insights learning} as a multiclass classification problem. 
We use attributes of attack objective, attack input and assumption as features and treat the broken properties as the labels to learn. We preprocess the data as follows:\nfor features corresponding to attack objective and assumption, we use 1 to represent ``applicable'' and\n0 otherwise; for features corresponding to $A_1,\ldots,A_5$, we set $a_i =0.5$ for $1\leq i \leq 5$; for features corresponding to categorical $A_6,\dots,A_9$, we use integers to represent the elements (e.g., 0,1 respectively representing $\mathcal{M},\mathbf{M}$ for $A_6$). This leads to a dataset of 63 attacks, each of which has 17 dimensions. We use the dataset to train a {\em random forest} model with default hyper-parameters in scikit-learn . {\color{blue} Random forest is an ML algorithm capable of learning from un-normalized features and of ranking feature importance in terms of classification error .}\nWe focus on learning important attributes.\n\begin{figure}[!htbp]\n\t\centering\n\t\scalebox{0.25}{\n\t\includegraphics{figures/attack-important-feat.eps}\n\t}\n\caption{Insights learning from systematized AMD attacks via attribute importance.}\n\t\label{fig:attack-attribute}\n\end{figure}\nFigure \ref{fig:attack-attribute} highlights the most important features, from which we make the following observations. (i) Attack tactic is the most important feature in determining the security property that is broken. This can be explained by the fact that attacks using sensitive features ({\sf SF}) usually break {\sf CR} or {\sf TR}, and attacks using heuristic search ({\sf HS}) usually break the {\sf DR} property. (ii) Attack scenario is the second most important feature. This is because ${\sf BE}$ breaks ${\sf DR}$, ${\sf OE2}$ usually breaks {\sf CR} and {\sf RR}, and poisoning attacks ({\sf BP} and {\sf OP}) break {\sf TR}. (iii) Feature set is the third most important feature, resonating with the manually-drawn insight. (iv) The {\sf Measurability} assumption is the fourth most important feature. 
This can be explained because {\\sf RR} and {\\sf CR} are defined with respect to small-manipulation attacks, which need this assumption.\n\\begin{insight}[Insights learned via ML]\nTechnique-wise, attack tactic and attack scenario (core part of the threat model) largely determines what security property is broken.\nMethodology-wise, ML is an effective method for insights learning in survey and systematization studies.\n\\end{insight}\n}", "id": "d4bb8301-3585-4f94-b9fa-12ca7a7b0164", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "108d281b-7e1a-46ce-b750-a8b5a5ef8ec0", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing Attack Literature" ], [ "subsubsection", "Drawing Observations and Insights" ] ], "subsections": [], "title": "Drawing Observations and Insights" }, { "cite_extract_rate": 0.38095238095238004, "cites": [ 3857, 3853, 9099, 3854, 9100, 7754, 3874, 3863 ], "content": "\\label{sec:defense}\n\\begin{table}[htbp]\n\t\\caption{Summary of AMD defenses (\\checkmark means applicable, \\protect\\fullcirc means 0, \\protect\\emptycirc means 1, \\protect\\dotcirc means a value in $[0,1]$)}\n\t\\resizebox{1.\\columnwidth}{!}{\n\t\\begin{threeparttable}\n\t\\centering\n\t\\setlength{\\tabcolsep}{0.4em}\n\t\\begin{tabular}{l|c|ccccc|cccc|ccccc|cccc|ccc}\n\t\\hline\n\t\\multicolumn{1}{c|}{\\specialcell{Defense \\\\ (in chronological order)}}& \n\t\\multicolumn{1}{c|}{\\specialcell{Defense \\\\ Objective}} &\n\t\\multicolumn{9}{c|}{\\specialcell{Defense Input}} & \n\t\\multicolumn{5}{c|}{\\specialcell{Assumptions}} & \n\t\\multicolumn{4}{c|}{\\specialcell{Achieved \\\\ Properties}} & \n\t\\multicolumn{3}{c}{\\specialcell{Malware\\\\Detector}}\n\t\\\\\n\t& \\vthead{malware detection} \n\t& \\vthead{$A_1$: Training set $D_{train}$}\n\t& \\vthead{$A_2$: Defense technique}\n\t& \\vthead{$A_3$: Feature set}\n\t& \\vthead{$A_4$: Learning 
algorithm}\n\t& \\vthead{$A_5$: Response}\n\t& \\vthead{$A_6$: Manipulation set}\n\t& \\vthead{$A_7$: Attack tactic}\n\t& \\vthead{$A_8$: Attack technique}\n\t& \\vthead{$A_9$: Adversarial example set}\n\t& \\vthead{${\\sf IID}$ assumption}\n\t& \\vthead{${\\sf Oracle}$ assumption}\n\t& \\vthead{${\\sf Measurability}$ assumption}\n\t& \\vthead{${\\sf Smoothness}$ assumption}\n\t& \\vthead{${\\sf Invertibility}$ assumption}\n\t& \\vthead{{\\sf RR}: Representation Robustness}\n\t& \\vthead{{\\sf CR}: Classification Robustness}\n\t& \\vthead{{\\sf DR}: Detection Robustness}\n\t& \\vthead{{\\sf TR}: Training Robustness}\n\t& \\vthead{Windows Program}\n\t& \\vthead{Android Package}\n\t& \\vthead{PDF}\n\t\\\\\n\t\\hline\\hline\n \\rowcolor{lightgray!30}\n Biggio et al. &\n \\checkmark&\n \t$D_{train}$ &\n \t${\\sf EL}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{4} & \n \t\\pie{0} & \n \t\\pie{4} &\n \t\\pie{4} &\n \t\\checkmark& & & & &\n \t&\\checkmark&&&\n \t& &\\checkmark\n \t\\\\\n Smutz and Stavrou &\n \\checkmark&\n \t$D_{train}$ &\n \t${\\sf SE}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{4} & \n \t\\pie{0} & \n \t\\pie{4} &\n \t\\pie{4} &\n \t\\checkmark&&&&&\n \t&&\\checkmark&&\n \t& & \\checkmark \n \t\\\\\n \\rowcolor{lightgray!30}\n Zhang et al. &\n \\checkmark&\n \t$D_{train}$ &\n \t${\\sf RF}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{0} & \n \t\\pie{0} & \n \t\\pie{4} &\n \t\\pie{4} &\n \t\\checkmark&&\\checkmark&\\checkmark&&\n \t\\checkmark&\\checkmark&&&\n \t& & \\checkmark\n \t\\\\\n Demontis et al. &\n \\checkmark&\n \t$D_{train}$ &\n \t${\\sf WR}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{0} & \n \t\\pie{0} & \n \t\\pie{4} &\n \t\\pie{4} &\n \t\\checkmark& &\\checkmark& & &\n \t&\\checkmark&&&\n \t& \\checkmark &\n \t\\\\\n \t\\rowcolor{lightgray!30}\n \tWang et al. 
&\n \t\\checkmark&\n \t$D_{train}$ &\n \t${\\sf IT}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{4} & \n \t\\pie{0} & \n \t\\pie{4} &\n \t\\pie{4} &\n \t\\checkmark&&&&&\n \t&\\checkmark&&&\n \t\\checkmark & &\n \t\\\\\n Grosse et al. &\n \\checkmark&\n \t$D_{train}$ &\n \t${\\sf WR}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{4} & \n \t\\pie{0} & \n \t\\pie{4} &\n \t\\pie{4} &\n \t\\checkmark&&&&&\n \t\\checkmark&\\checkmark&&&\n \t& \\checkmark &\n \t\\\\\n \\rowcolor{lightgray!30}\n Grosse et al. &\n \\checkmark&\n \t$D_{train}$ &\n \t${\\sf AT}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{0} & \n \t\\pie{0} & \n \t\\pie{0} &\n \t\\pie{4} &\n \t\\checkmark& &\\checkmark&&&\n \t\\checkmark&\\checkmark&&&\n \t& \\checkmark &\n \t\\\\\n Chen et al. &\n \\checkmark&\n \t$D_{train}$ &\n \t${\\sf AT}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{0} & \n \t\\pie{0} & \n \t\\pie{0} &\n \t\\pie{4} &\n \t\\checkmark& &\\checkmark&&&\n \t&\\checkmark&&&\n \t\\checkmark & &\n \t\\\\\n \\rowcolor{lightgray!30}\n Khasawneh et al. &\n \\checkmark&\n \t$D_{train}$ &\n \t${\\sf CD}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{4} & \n \t\\pie{0} & \n \t\\pie{0} &\n \t\\pie{4} &\n \t\\checkmark&&&&&\n \t&\\checkmark&&&\n \t\\checkmark & &\n \t\\\\ \n \tDang et al. &\n \t\\checkmark&\n \t$D_{train}$ &\n \t${\\sf SE}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf LQ} &\n \t\\pie{4} & \n \t\\pie{0} & \n \t\\pie{4} &\n \t\\pie{4} &\n \t\\checkmark&&&&&\n \t&&\\checkmark&&\n \t& & \\checkmark \n \t\\\\ \n \\rowcolor{lightgray!30}\n Yang et al. &\n \\checkmark&\n \t\\specialcell{$D^\\ast_{train}$} &\n \t${\\sf AT}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{4} & \n \t\\pie{0} & \n \t\\pie{4} &\n \t\\pieg &\n \t\\checkmark&&&&&\n \t&&\\checkmark&&\n \t& \\checkmark &\n \t\\\\\n Yang et al. 
&\n \\checkmark&\n \t\\specialcell{$D_{train}$} &\n \t${\\sf SE}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{0} & \n \t\\pie{0} & \n \t\\pie{0} &\n \t\\pie{4} &\n \t\\checkmark&&&&&\n \t&&\\checkmark&&\n \t& \\checkmark &\n \t\\\\\n \\rowcolor{lightgray!30}\n Chen et al. &\n \\checkmark&\n \t$D_{train}$ &\n \t${\\sf RF}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{0} & \n \t\\pie{0} & \n \t\\pie{4} &\n \t\\pie{4} &\n \t\\checkmark& &\\checkmark&\\checkmark& &\n \t\\checkmark&\\checkmark&&&\n \t& \\checkmark & \n \t\\\\\n Incer et al. &\n \\checkmark&\n \t$D_{train}$ &\n \t${\\sf VL}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{0} & \n \t\\pie{0} & \n \t\\pie{4} &\n \t\\pie{4} &\n \t\\checkmark& & & & &\n \t\\checkmark&\\checkmark&\\checkmark& & \n \t\\checkmark& & \n \t\\\\ \n \t\\rowcolor{lightgray!30}\n \tChen et al. ~ &\n \t\\checkmark&\n \t$D_{train}$ &\n \t${\\sf SE}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{4} & \n \t\\pie{0} & \n \t\\pie{4} &\n \t\\pie{4} &\n \t&&\\checkmark&&&\n \t&&&\\checkmark&\n \t&\\checkmark &\n \t\\\\\n Al-Dujaili et al. &\n \\checkmark&\n \t$D_{train}$ &\n \t${\\sf AT}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{0} & \n \t\\pie{0} & \n \t\\pie{4} &\n \t\\pie{4} &\n \t\\checkmark&&&& &\n \t&&\\checkmark&&\n \t\\checkmark & &\n \t\\\\\n \\rowcolor{lightgray!30}\n Chen et al. &\n \\checkmark&\n \t$D_{train}$ &\n \t${\\sf IT}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{4} & \n \t\\pie{0} & \n \t\\pie{4} &\n \t\\pie{4} &\n \t\\checkmark&&&&&\n \t&\\checkmark&&&\n \t& \\checkmark &\n \t\\\\\n Jordan et al. &\n \\checkmark&\n \t$D_{train}$ &\n \t${\\sf RF}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{0} & \n \t\\pie{0} & \n \t\\pie{4} &\n \t\\pie{4} &\n \t&&&&&\n \t\\checkmark&\\checkmark&\\checkmark&&\n \t& & \\checkmark\n \t\\\\ \n \\rowcolor{lightgray!30}\n Li et al. 
&\n \\checkmark&\n \t$D_{train}$ &\n \t${\\sf AT}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{4} & \n \t\\pie{0} & \n \t\\pie{4} &\n \t\\pie{4} &\n \t\\checkmark&&\\checkmark&\\checkmark& &\n \t\\checkmark&\\checkmark&&&\n \t\\checkmark & &\n \t\\\\\n \tTong et al. &\n \t\\checkmark&\n \t$D_{train}$ &\n \t${\\sf RF}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{0} & \n \t\\pie{0} & \n \t\\pie{4} &\n \t\\pie{4} &\n \t\\checkmark&&&&&\n \t\\checkmark&\\checkmark&\\checkmark&&\n \t& & \\checkmark\n \t\\\\ \n \t\\rowcolor{lightgray!30}\n \tLi and Li &\n \t\\checkmark&\n \t$D_{train}$ &\n \t${\\sf AT}$ &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{0} & \n \t\\pie{0} & \n \t\\pie{4} &\n \t\\pie{4} &\n \t\\checkmark&&&& &\n \t&&\\checkmark&&\n \t& \\checkmark &\n \t\\\\\n Chen et al. &\n \\checkmark&\n $D_{train}$ &\n ${\\sf VL}$ &\n $S$ &\n $F_\\theta$ &\n {\\sf FQ} &\n \\pie{0} & \n \\pie{0} & \n \\pie{4} &\n \\pie{4} &\n \\checkmark& &\\checkmark & \\checkmark& &\n \\checkmark&\\checkmark& & & \n & & \\checkmark\n \\\\\n \\rowcolor{lightgray!30}\n Li et al. 
& \n \t\\checkmark &\n \t$D_{train}$ &\n \t\\small{{\\sf EL}+{\\sf AT}+{\\sf RF}} &\n \t$S$ &\n \t$F_\\theta$ &\n \t{\\sf FQ} &\n \t\\pie{0} & \n \t\\pie{0} & \n \t\\pie{4} &\n \t\\pie{4} &\n \t\\checkmark&&\\checkmark&& &\n \t&&\\checkmark&&\n \t& \\checkmark &\n \t\\\\\n \\hline\n\t\\end{tabular}\n\t\\begin{tablenotes}\n\t\t\\small\n\t\t\\item $D^\\ast_{train}$ contains $D_{train}$ and a portion of $\\mathcal{A}$'s adversarial examples.\n\t\\end{tablenotes}\n \\end{threeparttable}\n\t}\n\t\\label{tbl:defenses}\n\\end{table}", "id": "ecf055b9-207b-4f7f-9c11-8bb57d05480e", "level": "subsection", "origin_cites_number": 21, "parent_id": "e26ef155-9b9c-4a92-9ffa-e334fbe6ea87", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing Defense Literature" ] ], "subsections": [ "6f9fa41a-b356-42e4-b332-196f10febba0", "1e7bc569-0fe6-4f78-b486-391ed3ea051e", "272e13c4-c17a-4f6c-ab41-e2e83ef3c588", "89604438-0b71-470a-93c0-468dd4ea65af", "48b02090-502b-4798-92fb-c646a315077a", "a5b08aa1-d4f9-4077-b1e1-6b8c15efc187", "45225fbe-29bd-4000-8b60-553581208d2f", "362ab0d6-2e4b-4607-b141-4a89fdd71ebf", "7446c022-f8d7-4a15-911d-d2f735cce556" ], "title": "Systematizing Defense Literature" }, { "cite_extract_rate": 0.5, "cites": [ 937, 7754 ], "content": ")} \\label{sec:defense-el}\nBiggio et al. propose a one-and-a-half-class SVM classifier against evasion attacks, by leveraging an interesting observation (i.e., decision boundaries of one-class SVM classifiers are tighter than that of two-class SVM classifiers) to facilitate outlier detection. Specifically, the authors propose an ensemble of a two-class classifier and two one-class classifiers, and then combine them using another one-class classifier. 
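A rough sketch of this one-and-a-half-class combination logic follows. This is our own simplification: centroid-distance models on synthetic 2-D data replace all the SVMs, so only the combination idea is shown, not the actual classifiers.

```python
import numpy as np

rng = np.random.default_rng(4)
X_ben = rng.normal(0.0, 1.0, (60, 2))     # benign training data
X_mal = rng.normal(3.0, 1.0, (60, 2))     # malicious training data

c_ben, c_mal = X_ben.mean(0), X_mal.mean(0)

def two_class(v):                         # two-class vote (+1 = malicious)
    return 1 if np.linalg.norm(v - c_mal) < np.linalg.norm(v - c_ben) else -1

def one_class_ben(v):                     # "resembles known benign data"
    return np.linalg.norm(v - c_ben) < 2.5

def one_class_mal(v):                     # "resembles known malicious data"
    return np.linalg.norm(v - c_mal) < 2.5

def detect(v):
    # flag as malicious if the two-class vote says so, OR if the example
    # resembles neither class (the tighter one-class boundaries close the
    # "blind spots" an evader would otherwise exploit)
    return two_class(v) == 1 or not (one_class_ben(v) or one_class_mal(v))

blind_spot = np.array([12.0, -9.0])       # far from both training classes
```

An evasive example pushed far outside the training distribution (like `blind_spot`) may fool the two-class vote, but is still flagged by the one-class components.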
\nThe defense can enhance PDF malware detectors against gradient-based attacks , which can be characterized as $(a_1,\\cdots,a_5|A_6,\\cdots,A_9)=(1,1,1,1,1|\\mathbf{M},{\\sf OP2}, {\\sf GO},\\mathcal{X}_\\mathbf{M})$. However, the defense cannot thwart attacks incurring large perturbations. Independent of this study, other researchers propose using the random subspace and bagging techniques to enhance SVM-based malware detectors, dubbed Multiple Classifier System SVM (MCS-SVM), which leads to evenly distributed weights .\nThese defenses work under the {\\sf IID} assumption with input $(A_1,\\cdots,A_5|a_6,\\cdots,a_9)=(D_{train},{\\sf EL},S,F_\\theta,{\\sf FQ}|0,1,0,0)$ and achieves the {\\sf CR} property.", "id": "6f9fa41a-b356-42e4-b332-196f10febba0", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "ecf055b9-207b-4f7f-9c11-8bb57d05480e", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing Defense Literature" ], [ "subsubsection", "Defenses using Ensemble Learning ({\\sf EL" ] ], "subsections": [], "title": "Defenses using Ensemble Learning ({\\sf EL" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 314, 7754 ], "content": ")} \\label{sec:attack-wr}\nDemontis et al. propose enhancing the Drebin malware detector $\\varphi(\\mathbf{x})=\\mathbf{w}^\\top\\mathbf{x} + b$ by using box-constraint weights. The inspiration is that the classifier's sensitivity to perturbations based on the $\\ell_1$ norm is bounded by the $\\ell_\\infty$ norm of the weights. 
This defense hardens the Drebin detector against a mimicry attack with input $(a_1,\cdots,a_5|A_6,\cdots,A_9)=(0,0,1,0,0|\mathbf{M},{\sf BE},{\sf MI},\mathcal{X}_\mathbf{M})$, an obfuscation attack with input $(a_1,\cdots,a_5|A_6,\cdots,A_9)=(0,0,0,0,$ $0|\mathcal{M},{\sf BE},-, \mathcal{Z}_\mathcal{M})$, and the attack that modifies important features with input $(a_1,\cdots,a_5|A_6,\cdots,$ $A_9)=(1,1,1,1,1|\mathbf{M},{\sf OE2},{\sf SF},\mathcal{X}_\mathbf{M})$, where `$-$' means inapplicable. Experimental results show this defense outperforms MCS-SVM . The defense works under the {\sf IID}, {\sf Oracle} and {\sf Measurability} assumptions with input $(A_1,\cdots,A_5|a_6,\cdots,a_9)=(D_{train},{\sf WR},S,F_\theta,{\sf FQ}|1,1,0,0)$ and achieves {\sf CR}.
Grosse et al. investigate how to apply two defense techniques, distillation and retraining, to enhance the DNN-based Drebin malware detector. The distillation technique can decrease a model's generalization error by leveraging a teacher to relabel the training data with real-valued vectors (rather than one-hot encodings); the retraining technique tunes a learned model on a training set augmented with adversarial examples. Both defenses are evaluated against a variant of JSMA and can be characterized by their input as $(A_1,\cdots,A_5|a_6,\cdots,a_9)=(0,1,1,1,1|\mathbf{M},{\sf OE2},{\sf SF}, \mathcal{X}_\mathbf{M})$. Experimental results show the two defenses achieve limited success. The defense based on the distillation technique works under the {\sf IID} assumption with input $(A_1,\cdots,A_5|a_6,\cdots,a_9)=(D_{train},{\sf WR},S,F_\theta,{\sf FQ}|0,1,0,0)$ and achieves the {\sf RR} and {\sf CR} properties.
The defense based on the retraining technique works under the {\\sf IID} and {\\sf Measurability} assumptions with input $(A_1,\\cdots,A_5|a_6,\\cdots,a_9)=(D_{train},{\\sf AT},S,F_\\theta,{\\sf FQ}|1,$ $1,1,0)$ and achieves the {\\sf RR} and {\\sf CR} properties.", "id": "1e7bc569-0fe6-4f78-b486-391ed3ea051e", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "ecf055b9-207b-4f7f-9c11-8bb57d05480e", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing Defense Literature" ], [ "subsubsection", "Defenses using Weight Regularization ({\\sf WR" ] ], "subsections": [], "title": "Defenses using Weight Regularization ({\\sf WR" }, { "cite_extract_rate": 0.5, "cites": [ 3875, 9100, 7754, 9099, 3854 ], "content": ")}\nChen et al. adapt a generic retraining framework proposed in the AML context to enhance linear malware detectors. The defense uses a label smoothness regularization technique to mitigate the side-effect of adversarial training . The defense is evaluated using Windows malware detectors against ``feature selection''-based evasion attacks, which can be characterized as $(A_1,\\cdots,A_5|a_6,\\cdots,a_9)=(1,1,1,1,1|\\mathbf{M},{\\sf OE2},{\\sf SF},\\mathcal{X}_\\mathbf{M})$. The defense works under the {\\sf IID} and {\\sf Measurability} assumptions and can be characterized as $(A_1,\\cdots,A_5|a_6,\\cdots,a_9)=(D_{train},{\\sf AT},S,F_\\theta,{\\sf FQ}|1,1,1,0)$, while assuring the {\\sf CR} property.\nYang et al. propose a defense against genetic algorithm-based evasion attacks that can be characterized as $(A_1,\\cdots,A_5|a_6,$ $\\cdots,a_9)=(0,0,0,0,1|\\mathcal{M},{\\sf BE},{\\sf HS},\\mathcal{Z}_\\mathcal{M})$. The defense uses three techniques: adversarial training, sanitizing examples, and weight regularization . The adversarial training uses one half of $\\mathcal{A}$'s adversarial examples. 
The defense works under the {\sf IID} assumption and can be characterized as
$(A_1,\cdots,A_5|a_6,\cdots,a_9)=(D'_{train},{\sf AT},S,F_\theta,{\sf FQ}|0,1,0,*)$, where $D'_{train}$ is the union of $D_{train}$ and a portion (e.g., one half) of $\mathcal{A}$'s adversarial examples. The defense of sanitizing examples learns from the manipulations used by the attacker and works under the {\sf IID} assumption with input $(A_1,\cdots,A_5|a_6,\cdots,a_9)=(D_{train},{\sf SE},S,F_\theta,{\sf FQ}|1,1,1,0)$. Both defenses achieve the {\sf DR} property. The defense of weight regularization is reviewed in Section \ref{sec:attack-wr}.
Al-Dujaili et al.~ adapt the idea of minmax adversarial training (proposed in the AML context) to enhance DNN-based malware detectors. In this defense, the inner-layer optimization generates adversarial files by maximizing the classifier's loss function; the outer-layer optimization searches for the parameters $\theta$ (of DNN $F_\theta$) that minimize the classifier's loss with respect to the adversarial files.
The defense enhances Windows malware detectors against attacks with input $(A_1,\cdots,A_5|a_6,\cdots,a_9)=(0,1,1,1,1|\mathbf{M},{\sf OE2},{\sf GO},\mathcal{X}_\mathbf{M})$. 
Experimental results show that malware detectors hardened to resist one attack may not be able to defend against other attacks. Observing this phenomenon, researchers propose using a mixture of attacks to harden DNN-based malware detectors .
The defense works under the {\sf IID} assumption and can be characterized as $(A_1,\cdots,A_5|a_6,\cdots,a_9)=(D_{train},{\sf AT},S,F_\theta,{\sf FQ}|1,1,0,0)$. The defense assures the {\sf DR} property. 
Li et al. propose a DNN-based attack-agnostic framework to enhance adversarial malware detectors. The key idea, dubbed adversarial regularization, is to enhance malware detectors via (approximately) optimal small perturbations.
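The inner-max/outer-min loop underlying these adversarial-training defenses can be sketched for a toy logistic detector; the one-step, addition-only inner attack, the synthetic binary features, and all hyperparameters below are illustrative assumptions, not the original defenses.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 20
X = (rng.random((n, d)) < 0.3).astype(float)  # toy binary feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # toy labels (1 = malware)
w, b, lr = np.zeros(d), 0.0, 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # Inner maximization (one illustrative step): for malware rows, flip 0 -> 1
    # on features whose loss gradient is positive, i.e., feature ADDITIONS that
    # increase the classifier's loss while preserving malicious functionality.
    grad_x = (sigmoid(X @ w + b) - y)[:, None] * w[None, :]
    adds = (y[:, None] == 1) & (X == 0) & (grad_x > 0)
    X_adv = np.where(adds, 1.0, X)
    # Outer minimization: a gradient step on the adversarially perturbed batch.
    err = sigmoid(X_adv @ w + b) - y
    w -= lr * (X_adv.T @ err) / n
    b -= lr * err.mean()

acc = float((((X @ w + b) > 0) == (y == 1)).mean())
print(f"clean accuracy after adversarial training: {acc:.2f}")
```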
The framework wins the AICS'2019 adversarial malware classification challenge organized by MIT Lincoln Lab researcher , without knowing anything about the attack. \nThe defense works under the {\\sf IID}, {\\sf Measurability}, and {\\sf Smoothness} assumptions with input $(A_1,\\cdots,A_5|a_6,\\cdots,a_9)=(D_{train},{\\sf AT},S,F_\\theta,{\\sf FQ}|0,1,0,0)$ and assures the {\\sf RR} and {\\sf CR} properties. In the extended study , the authors further enhance the framework with 6 defense principles, including ensemble learning, adversarial training, and robust representation learning. The enhanced defense is validated with 20 attacks (including 11 grey-box attacks and 9 white-box attacks) against Android malware detectors. The enhanced defense works under the {\\sf IID} and {\\sf Measurability} assumptions with input $(A_1,\\cdots,A_5|a_6,\\cdots,a_9)=(D_{train},{\\sf AT}+{\\sf EL}+{\\sf RF},S,F_\\theta,{\\sf FQ}|1,1,0,0)$ and assures the {\\sf DR} property.", "id": "272e13c4-c17a-4f6c-ab41-e2e83ef3c588", "level": "subsubsection", "origin_cites_number": 10, "parent_id": "ecf055b9-207b-4f7f-9c11-8bb57d05480e", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing Defense Literature" ], [ "subsubsection", "Defenses using Adversarial Training ({\\sf AT" ] ], "subsections": [], "title": "Defenses using Adversarial Training ({\\sf AT" }, { "cite_extract_rate": 0.5, "cites": [ 3877, 3876 ], "content": ")}\nIncer et al. propose using monotonic malware classifiers to defend against evasion attacks, where {\\em monotonic} means\n$\\varphi(\\mathbf{x}) \\leq \\varphi(\\mathbf{x}')$ when ${\\mathbf{x}} \\leq {\\mathbf{x}}'$ . Technically, this can be achieved by using (i) robust features that can only be removed or added but not both and (ii) monotonic classification function (e.g., linear models with non-negative weights). 
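Point (ii) can be illustrated directly: with non-negative weights, a linear score is monotone in its input, so an attacker restricted to feature additions can never drive a sample's score down. The weights, data, and attack budget below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 30
w = np.abs(rng.normal(size=d))        # non-negative weights => monotone scorer
b = -0.25 * w.sum()                   # illustrative decision offset

def score(x):
    # Monotone by construction: x <= x' componentwise implies
    # score(x) <= score(x'), since every weight is non-negative.
    return float(x @ w + b)

x = (rng.random(d) < 0.5).astype(float)   # a sample under analysis
for _ in range(1000):
    adds = (rng.random(d) < 0.3) & (x == 0)
    x_adv = np.where(adds, 1.0, x)        # an addition-only manipulation
    assert score(x_adv) >= score(x)       # the score can only go up
print("feature-addition attacks never lowered the score")
```

In particular, once such a scorer flags a sample as malicious, no addition-only manipulation can make it benign.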
The resulting classifier can thwart any attack that perturbs feature values monotonically. The defense works under the {\\sf IID} assumption with input $(A_1,\\cdots,A_5|a_6,\\cdots,a_9)=(D_{train},{\\sf VL},S,F_\\theta,{\\sf FQ}|1,1,0,0)$ and assures the {\\sf RR}, {\\sf CR} and {\\sf DR} properties.\nChen et al. propose a defense to enhance PDF malware detectors against evasion attacks, by leveraging the observation that manipulations on PDF files are subtree additions and/or removals. They also propose new metrics for quantifying such structural perturbations. This allows to adapt the \n{\\em symbolic interval analysis} technique proposed in the AML context to enhance the PDF malware detectors. The defense can cope with attacks leveraging small perturbations in the training phase. This defense works under the {\\sf IID}, {\\sf Measurability}, and {\\sf Smoothness} assumptions with input $(A_1,\\cdots,A_5|a_6,\\cdots,a_9)=(D_{train},{\\sf VL},S,F_\\theta,{\\sf FQ}|1,1,0,0)$ and achieves the {\\sf RR} and {\\sf CR} properties.", "id": "89604438-0b71-470a-93c0-468dd4ea65af", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "ecf055b9-207b-4f7f-9c11-8bb57d05480e", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing Defense Literature" ], [ "subsubsection", "Defenses using Verifiable Learning ({\\sf VL" ] ], "subsections": [], "title": "Defenses using Verifiable Learning ({\\sf VL" }, { "cite_extract_rate": 0.4, "cites": [ 3874, 3863 ], "content": ")}\nZhang et al. propose leveraging optimal adversarial attacks for feature selection. The defense enhances PDF malware detectors against gradient-based attacks, which can be characterized as $(A_1,\\cdots,A_5|a_6,\\cdots,a_9)=(1,1,1,1,1|\\mathbf{M},{\\sf OE2},{\\sf GO},\\mathcal{X}_{\\mathbf{M}})$. 
The defense works under the {\sf IID}, {\sf Measurability}, and {\sf Smoothness} assumptions and can be characterized as $(A_1,\cdots,A_5|a_6,\cdots,a_9)=(D_{train},{\sf RF},S,F_\theta,{\sf FQ}|1,1,0,0)$. The defense assures the {\sf RR} and {\sf CR} properties.
Tong et al. propose refining features into invariant ones to defend against genetic algorithm-based evasion attacks with input $(A_1,\cdots,A_5|a_6,\cdots,a_9)=(0,0,0,0,1|\mathcal{M},{\sf BE},{\sf HS},\mathcal{Z}_\mathcal{M})$. Experimental results show that adversarial training can be further leveraged to enhance the robustness of the defense. The defense works under the {\sf IID} assumption with input $(A_1,\cdots,A_5|a_6,\cdots,a_9)$ $=(D_{train},{\sf RF},S,F_\theta,{\sf FQ}|1,1,0,0)$, and achieves the {\sf RR}, {\sf CR} and {\sf DR} properties.
Chen et al. propose mitigating evasion attacks by filtering features according to their importance $|w_i|/c_i$ with respect to the linear function $\varphi(\mathbf{x})=\mathbf{w}^\top\mathbf{x} + b$, where $x_i$, $w_i$ and $c_i$ denote the $i$th components of $\mathbf{x}$, $\mathbf{w}$ and the manipulation-cost constraint vector $\mathbf{c}$, respectively. The defense enhances Android malware detectors against three attacks: a random attack with input $(A_1,\cdots,A_5|a_6,\cdots,a_9)=(0,0,1,0,0|\mathbf{M},{\sf BE},-,\mathcal{X}_\mathbf{M})$, a variant of the mimicry attack with input $(0,0,1,0,0|\mathbf{M},{\sf BE},{\sf MI},$ $\mathcal{X}_\mathbf{M})$, and the attack that modifies important features with input $(A_1,\cdots,A_5|a_6,\cdots,a_9)=(1,1,1,1,1|\mathbf{M},{\sf BE},{\sf SF},$ $\mathcal{X}_\mathbf{M})$, where `$-$' means inapplicable. The defense works under the {\sf IID}, {\sf Measurability}, and {\sf Smoothness} assumptions with input $(A_1,\cdots,A_5|a_6,$ $\cdots,a_9)=(D_{train},{\sf RF},S,F_\theta,{\sf FQ}|1,1,0,0)$ and achieves the {\sf RR} and {\sf CR} properties.
Jordan et al. 
propose a robust PDF malware detector against evasion attacks by interpreting JavaScript behaviors using static analysis. A PDF file is classified as malicious when it calls a vulnerable API method or when it exhibits potentially malicious or unknown behaviors. The defense is validated against the {\\em reverse mimicry} attack with input $(A_1,\\cdots,A_5|a_6,\\cdots,a_9)=(0,0,0,0,0|\\mathcal{M},{\\sf BE},{\\sf MI},\\mathcal{Z}_\\mathcal{M})$. The defense has input $(A_1,\\cdots,A_5|a_6,\\cdots,a_9)=(D_{train},{\\sf RF},S,F_\\theta,{\\sf FQ}|1,$ $1,0,0)$ and achieves {\\sf RR}, {\\sf CR} and {\\sf DR}.", "id": "48b02090-502b-4798-92fb-c646a315077a", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "ecf055b9-207b-4f7f-9c11-8bb57d05480e", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing Defense Literature" ], [ "subsubsection", "Defenses using Robust Features ({\\sf RF" ] ], "subsections": [], "title": "Defenses using Robust Features ({\\sf RF" }, { "cite_extract_rate": 0.5, "cites": [ 892, 3853 ], "content": ")}\nWang et al. propose the {\\em random feature nullification} to enhance DNN-based malware detectors against the attack of Fast Gradient Sign Method (FGSM) by nullifying (or dropping) features randomly in both training and testing phases. This offers a probabilistic assurance in preventing a white-box attacker from deriving adversarial files by using gradients of the loss function with respect to the input. \nThe defense enhances Windows malware detectors against the FGSM attack with input $(A_1,\\cdots,A_5|a_6,\\cdots,a_9)=(0,1,1,1,1|\\mathbf{M},{\\sf OE2},{\\sf GO},\\mathcal{X}_\\mathbf{M})$. 
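The nullification idea can be sketched as an input mask drawn independently at both training and test time, so the input gradient a white-box attacker computes is only correct in expectation; the nullification rate and toy inputs below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def nullify(X, p=0.2):
    # Drop (zero out) each feature independently with probability p; applying
    # this at BOTH training and test time means the gradient of the loss with
    # respect to the input is a random variable, blunting gradient attacks.
    mask = (rng.random(X.shape) >= p).astype(float)
    return X * mask

X = np.ones((1000, 50))        # toy feature vectors
kept = nullify(X).mean()
print(f"fraction of features kept: {kept:.3f}")   # close to 1 - p = 0.8
```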
The defense works under {\\sf IID} assumption with input $(A_1,\\cdots,A_5|a_6,$ $\\cdots,a_9)=(D_{train},{\\sf IT},S,F_\\theta,{\\sf FQ}|0,1,0,0)$ and achieves {\\sf CR}.\nDroidEye defends Android malware detectors against evasion attacks by quantizing binary representations, namely transforming binary representations into real values and then using compression to reduce the effect of adversarial manipulations. The defense enhances linear malware detectors against a ``feature selection''-based attack with input\n$(A_1,\\cdots,A_5|a_6,\\cdots,a_9)=(1,1,1,1,1|\\mathbf{M},{\\sf OE2},{\\sf SF},\\mathcal{X}_\\mathbf{M})$ and the FGSM attack with input $(A_1,\\cdots,A_5|a_6,\\cdots,a_9)=(0,1,1,1,$ $1|\\mathbf{M},{\\sf OE2},{\\sf GO},\\mathcal{X}_\\mathbf{M})$ . The defense works under {\\sf IID} assumption with input $(A_1,\\cdots,A_5|a_6,\\cdots,a_9)$ $=(D_{train},{\\sf IT},S,F_\\theta,{\\sf FQ}|0,1,0,0)$ and achieves {\\sf CR}.", "id": "a5b08aa1-d4f9-4077-b1e1-6b8c15efc187", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "ecf055b9-207b-4f7f-9c11-8bb57d05480e", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing Defense Literature" ], [ "subsubsection", "Defenses using Input Transformation ({\\sf IT" ] ], "subsections": [], "title": "Defenses using Input Transformation ({\\sf IT" }, { "cite_extract_rate": 0, "cites": [], "content": ")}\nKhasawneh et al. propose randomizing classifiers (i.e., using one randomly chosen from a pool of classifiers that use heterogeneous features) to defend against transfer attacks. 
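The randomization idea can be sketched as picking, per query, one detector from a pool trained on heterogeneous feature views, so an attacker optimizing against a fixed surrogate cannot know which model it must evade; the pool construction, weights, and threshold below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
d = 40

# A pool of toy linear detectors, each scoring a disjoint "view" of the
# feature space (heterogeneous features); weights are illustrative.
views = np.array_split(np.arange(d), 4)
pool = [(v, np.abs(rng.normal(size=v.size))) for v in views]

def predict(x, threshold=2.0):
    # Classifier randomization: choose one detector uniformly at random per
    # query, so a transfer attack tuned on one surrogate model has no
    # guarantee of evading the detector actually used.
    view, w = pool[rng.integers(len(pool))]
    return int(x[view] @ w > threshold)

x = np.ones(d)                      # a toy suspicious sample
labels = [predict(x) for _ in range(20)]
print(labels)                       # query the randomized ensemble repeatedly
```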
The defense is validated against an attack which perturbs important features with input\n$(a_1,\\dots,a_5|a_6,\\cdots,a_9)=(0,0,*,0,1|\\mathcal{M},{\\sf BE},{\\sf SF},\\mathcal{Z}_\\mathcal{M})$.\nThe defense works under the {\\sf IID} assumption with input $(a_1,\\dots,a_5|a_6,\\cdots,a_9)=(D_{train},{\\sf CD},S,F_\\theta,$ ${\\sf FQ}|0,1,1,0)$ and achieves the {\\sf CR} property.", "id": "45225fbe-29bd-4000-8b60-553581208d2f", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "ecf055b9-207b-4f7f-9c11-8bb57d05480e", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing Defense Literature" ], [ "subsubsection", "Defenses using Classifier Randomization ({\\sf CD" ] ], "subsections": [], "title": "Defenses using Classifier Randomization ({\\sf CD" }, { "cite_extract_rate": 0.2, "cites": [ 3857 ], "content": ")}\nSmutz and Stavrou propose an ensemble classifier to defend against grey-box evasion attacks by returning classification results as benign, uncertain and malicious according to the voting result (e.g., $[0\\%,25\\%]$ classifiers saying malicious can be treated as benign, $[25\\%,75\\%]$ saying malicious can be treated as uncertain, and $[75\\%,100\\%]$ saying malicious can be treated as malicious). The defense enhances a PDF malware detector against three types of evasion attacks: gradient-based attack with input $(a_1,\\ldots,a_5|A_6,\\cdots,A_9) =(1,0,*,*,0|\\mathcal{M},{\\sf OE2},{\\sf TR},\\mathcal{Z}_\\mathbf{M})$, mimicry attack with input $(1,0,*,*,0|\\mathcal{M},{\\sf BE},{\\sf TR},\\mathcal{Z}_\\mathbf{M})$, and reverse mimicry attack with input $(0,0,0,0,0|\\mathcal{M},{\\sf BE},{\\sf MI},\\mathcal{Z}_\\mathbf{M})$ .\nThe defense works under the {\\sf IID} assumption with input $(A_1,\\cdots,A_5|a_6,\\cdots,a_9)$ $=(D_{train},{\\sf SE},S,F_\\theta,{\\sf FQ}|0,1,0,0)$ and achieves {\\sf DR}.\nDang et al. 
propose enhancing PDF malware detectors by lowering the classification threshold $\\tau$ and restricting the maximum query times, rendering genetic algorithm-based evasion attacks harder to succeed. This defense works under the {\\sf IID} assumption with input $(A_1,\\cdots,A_5|a_6,\\cdots,a_9)=(D_{train},{\\sf SE},S,F_\\theta,{\\sf LQ}|0,1,0,0)$ and achieves {\\sf DR}.\nChen et al. investigate defending Android malware detectors against poisoning attacks with input $(a_1,\\ldots,a_5|A_6,\\cdots,A_9) =(1,1,1,1,1|\\mathbf{M},{\\sf BP},{\\sf SF},\\mathcal{X}_\\mathbf{M})$\nThe idea is to filter adversarial files that are distant from non-adversarial ones, where distance is measured by the Jaccard index, Jaccard-weight similarity and cosine similarity.\nThe defense works under the {\\sf Measurability} assumption with input \n$(A_1,\\cdots,A_5|a_6,\\cdots,a_9)=(D_{train},{\\sf SE},S,F_\\theta,$ ${\\sf FQ}|0,1,0,0)$\nand achieves {\\sf TR}.", "id": "362ab0d6-2e4b-4607-b141-4a89fdd71ebf", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "ecf055b9-207b-4f7f-9c11-8bb57d05480e", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing Defense Literature" ], [ "subsubsection", "Defenses using Sanitizing Examples ({\\sf SE" ] ], "subsections": [], "title": "Defenses using Sanitizing Examples ({\\sf SE" }, { "cite_extract_rate": 0.5, "cites": [ 966 ], "content": "Summarizing the preceding discussions, we draw the following observations.\n(i) Most studies focus on black-box defenses (i.e., the defender knows little about the attacker), which is against the principle of ``knowing yourself and knowing your enemy\". (ii) Most studies focus on defenses against evasion attacks rather than poisoning attacks. (iii) There is no silver bullet defense against evasion attacks or poisoning attacks, at least for now. 
(iv) Sanitizing adversarial files as outliers is effective against black-box and grey-box attacks, but not white-box attacks. (v) The security properties achieved by defenses have been evaluated empirically rather than rigorously proven (although provable security against small-degree perturbations is emerging; see, for example, ). (vi) There is no theoretical evidence to support that the effectiveness of defense tactics on the training set (e.g., adversarial training and verifiable learning) can generalize to other adversarial examples.
In addition, we draw the following insights:
\begin{insight}
(i) Effective defenses often require the defender to know the attacker's manipulation set. In the real world, it is hard to achieve this, explaining from one perspective why it is hard to design effective defenses.
(ii) The effectiveness of adversarial training depends on the defender's capability of identifying the most powerful attack.
\end{insight}
\ignore{
In order to draw further insights, we train another {\em random forest} model from the systematized structure data presented in Table \ref{tbl:defenses}. We use attributes of defense objective, defense input, and assumptions as features and treat the achieved properties as the labels to learn. The data preprocessing is similar to insight learning from AMD attacks. This leads to a dataset of 22 defenses, each of which has 15 dimensions. 
\begin{figure}[!htbp]
	\centering
	\scalebox{0.25}{
	\includegraphics{figures/defense-important-feat.eps}
	}
\caption{Insights learning from systematized AMD defenses via attribute importance.
}
	\label{fig:defense-attribute}
\end{figure}
Figure \ref{fig:defense-attribute} shows the most important features on AMD defenses, from which we make the following observations. (i) Defense technique is the most important feature in determining the empirically achieved security property. 
This can be explained because ${\\sf EL},{\\sf IT},{\\sf CD}, {\\sf VL}, {\\sf WR}$ enhance the classifier against small manipulations to achieve the {\\sf CR} property; {\\sf SE} usually achieves the {\\sf DR} property against large manipulations, which could be detected as outliers; ${\\sf AT},{\\sf RF}$ enhance classifiers against certain attacks specified by the defender. (ii) The {\\sf Measurability} assumption is the second important feature.\n(iii) Manipulation set is the third important feature because knowing this attribute enables the defender to specify {\\sf RF} and {\\sf AT} techniques, which can enhance classifiers to achieve empirical {\\sf RR}, {\\sf CR} and {\\sf DR} properties.\n\\begin{insight}[Insights automatically learned]\nTechnique-wise, defense technique largely determines what security properties can be (empirically) achieved and knowing attacker's manipulation set is critical to defender's success.\n\\end{insight}\n}", "id": "7446c022-f8d7-4a15-911d-d2f735cce556", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "ecf055b9-207b-4f7f-9c11-8bb57d05480e", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing Defense Literature" ], [ "subsubsection", "Drawing Observations and Insights" ] ], "subsections": [], "title": "Drawing Observations and Insights" }, { "cite_extract_rate": 0.4, "cites": [ 3867, 3857, 6172, 3864, 7756, 3862, 937, 3871, 3853, 9099, 3854, 7754, 3874, 3863, 3861, 7154 ], "content": "\\afterpage{\n\\begin{rotatepage}\n\\begin{figure*}[ht!]\n\\centering\n\\rotatebox{90}{\n\\scalebox{0.8}{\n\\begin{tikzpicture}\n\\node(rbot1)[label=below:{}, label=left:{},align=center] at (20,3.6) {$ (*,0,1,*,0)$\\\\$ (1,0,1,0,0)$\\\\$ (1,0,*,*,0)$};\n\\node(rbot1_attr)[label=below:{},align=center] at (20,2.6) {$(\\mathbf{M},{\\sf BP},{\\sf SF},\\mathcal{X}_\\mathbf{M})$ \\\\ $(\\mathcal{M},{\\sf BP},{\\sf 
SF},\\mathcal{Z}_\\mathcal{M})$};\n\\node(rbot1_tag)[label=below:{},align=center] at (20,2.1) {};\n\\node(rbot)[label=below:{}, label=left:{}] at (20,1) {$ (1,1,1,1,1)$};\n\\node(rbot_attr)[label=below:{},align=center] at (20,0.6) {$(\\mathbf{M},{\\sf OE2},{\\sf MI},\\mathcal{X})$};\n\\node(rbot_tag)[label=below:{},align=center] at (20,0.2) {};\n\\node(rbot0)[label=below:{}, label=left:{}] at (20,-2) {$ (1,1,1,1,1)$};\n\\node(rbot0_attr)[label=below:{},align=center] at (20,-2.4) {$(\\mathbf{M},{\\sf OE2},{\\sf SF},\\mathcal{X}_\\mathbf{M})$};\n\\node(rbot0_refer)[label=below:{},align=center] at (20,-2.8) {};\n\\node(lbot3)[label=below:{}, label=left:{}] at (0,3) {$ (0,0,0,0,1)$};\n\\node(lbot3_attr)[label=below:{},align=center] at (0,2.6) {$(\\mathcal{M},{\\sf BE},{\\sf HS},\\mathcal{Z}_\\mathcal{M})$};\n\\node(lbot3_refer)[label=below:{},align=center] at (0,2.2) {};\n\\node(lbot2)[label=below:{}, label=left:{}] at (0,1) {$ (0,0,0,0,0)$};\n\\node(lbot2_attr)[label=below:{},align=center] at (0,0.6) {$(\\mathcal{M},{\\sf BE},{\\sf HS},\\mathcal{Z}_\\mathcal{M})$};\n\\node(lbot2_refer)[label=below:{},align=center] at (0,0.2) {};\n\\node(lbot1)[label=below:{}, label=left:{}] at (0,-1) {$ (0,0,0,0,0)$};\n\\node(lbot1_attr)[label=below:{},align=center] at (0,-1.4) {$(\\mathcal{M},{\\sf BE},{\\sf MI},\\mathcal{Z}_\\mathcal{M})$};\n\\node(lbot1_refer)[label=below:{},align=center] at (0,-1.8) {};\n\\node(lbot0)[label=below:{}, label=left:{}] at (0,-3) {$ (0,0,0,0,0)$};\n\\node(lbot0_attr)[label=below:{},align=center] at (0,-3.4) {$(\\mathcal{M},{\\sf BE},{\\sf MI},\\mathcal{Z}_\\mathcal{M})$};\n\\node(lbot0_refer)[label=below:{},align=center] at (0,-3.8) {};\n\\node(above7_attr)[label=below:{},align=center] at (20,6) {$(\\mathbf{M},{\\sf OE2},{\\sf GO},\\mathcal{X}_\\mathbf{M})$};\n\\node(above7)[label=below:{}, label=left:{}] at (20,6.4) {$ (0,1,1,1,1)$};\n\\node(above7_refer)[label=below:{},align=center] at (20,6.8) {};\n\\node(above6_attr)[label=below:{},align=center] at 
(16.8,6) {$(\\mathbf{M},{\\sf OE2},{\\sf MS},\\mathcal{X}_\\mathbf{M})$};\n\\node(above6)[label=below:{}, label=left:{}] at (16.8,6.4) {$ (0,1,1,1,1)$};\n\\node(above6_refer)[label=below:{},align=center] at (16.8,6.8) {};\n\\node(above5_attr)[label=below:{},align=center] at (14,6) {$(\\mathbf{M},{\\sf OE2},{\\sf GO},\\mathcal{X}_\\mathbf{M})$};\n\\node(above5)[label=below:{}, label=left:{}] at (14,6.4) {$ (0,1,1,1,1)$};\n\\node(above5_refer)[label=below:{},align=center] at (14,6.8) {};\n\\node(above4_attr)[label=below:{},align=center] at (11.2,6) {$(\\mathbf{M},{\\sf OE2},{\\sf SF},\\mathcal{X}_\\mathbf{M})$}; \n\\node(above4)[label=below:{}, label=left:{}] at (11.2,6.4) {$ (0,1,1,1,1)$};\n\\node(above4_refer)[label=below:{},align=center] at (11.2,6.8) {};\n\\node(above3_attr)[label=below:{},align=center] at (8.4,6) {$(\\mathcal{M},{\\sf OE2},{\\sf MI},\\mathcal{Z}_\\mathcal{M})$};\n\\node(above3)[label=below:{}, label=left:{}] at (8.4,6.4) {$ (1,1,1,1,1)$};\n\\node(above3_refer)[label=below:{},align=center] at (8.4,6.8) {};\n\\node(above2_attr)[label=below:{},align=center] at (5.6,6) {$(\\mathcal{M},{\\sf BE},{\\sf HS},\\mathcal{Z}_\\mathcal{M})$};\n\\node(above2)[label=below:{}, label=left:{}] at (5.6,6.4) {$ (0,0,0,0,1)$};\n\\node(above2_refer)[label=below:{},align=center] at (5.6,6.8) {};\n\\node(above1_attr)[label=below:{},align=center] at (2.8,6) {$(\\mathbf{M},{\\sf OE2},{\\sf SF},\\mathcal{X}_\\mathbf{M})$};\n\\node(above1)[label=below:{}, label=left:{}] at (2.8,6.4) {$ (0,0,1,*,0)$};\n\\node(above1_refer)[label=below:{},align=center] at (2.8,6.8) {};\n\\node(above0_attr)[label=below:{},align=center] at (0,5.8) {$(\\mathbf{M},{\\sf OE1},{\\sf GO},\\mathcal{X}_\\mathbf{M})$ \\\\ $(\\mathbf{M},{\\sf OE2},{\\sf SF},\\mathcal{X}_\\mathbf{M})$ };\n\\node(above0)[label=below:{}, label=left:{}] at (0,6.4) {$ (0,0,1,0,0)$};\n\\node(above0_refer)[label=below:{},align=center] at (0,6.8) {};\n\\node(below7)[label=below:{}, label=left:{}] at (20,-6.4) {$ 
(1,0,1,0,0)$};\n\\node(below7_attr)[label=below:{},align=center] at (20,-6) {$(\\mathbf{M},{\\sf OP},{\\sf GO},\\mathcal{X}_\\mathbf{M})$};\n\\node(below7_refer)[label=below:{},align=center] at (20,-6.8) {};\n\\node(below6)[label=below:{}, label=left:{}] at (16.8,-6) {$ (0,0,0,1,1)$};\n\\node(below6_attr)[label=below:{},align=center] at (16.8,-6.4) {$(\\mathbf{M},{\\sf BP},{\\sf SF},\\mathcal{X}_\\mathbf{M})$};\n\\node(below6_tag)[label=below:{},align=center] at (16.8,-6.8) {};\n\\node(below5)[label=below:{}, label=left:{}] at (14,-6) {$ (0,0,1,0,1)$};\n\\node(below5_attr)[label=below:{},align=center] at (14,-6.4) {$(\\mathcal{M},{\\sf BE},{\\sf TR},\\mathcal{Z}_\\mathcal{M})$};\n\\node(below5_refer)[label=below:{},align=center] at (14,-6.8) {};\n\\node(below4)[label=below:{}, label=left:{}] at (11.2,-6) {$ (0,0,1,0,1)$};\n\\node(below4_attr)[label=below:{},align=center] at (11.2,-6.4) {$(\\mathbf{M},{\\sf BE},{\\sf GM},\\mathcal{X}_\\mathbf{M})$};\n\\node(below4_refer)[label=below:{},align=center] at (11.2,-6.8) {};\n\\node(below3)[label=below:{}, label=left:{}] at (8.4,-6) {$ (0,0,1,*,0)$};\n\\node(below3_attr)[label=below:{},align=center] at (8.4,-6.4) {$(\\mathbf{M},{\\sf OE2},{\\sf GO},\\mathcal{X}_\\mathbf{M})$};\n\\node(below3_refer)[label=below:{},align=center] at (8.4,-6.8) {};\n\\node(below2)[label=below:{}, label=left:{}] at (5.6,-6) {$ (0,0,*,0,1)$};\n\\node(below2_attr)[label=below:{},align=center] at (5.6,-6.4) {$(\\mathcal{M},{\\sf BE},{\\sf TR},\\mathcal{Z}_\\mathbf{M})$};\n\\node(below2_refer)[label=below:{},align=center] at (5.6,-6.8) {};\n\\node(below1)[label=below:{}, label=left:{}] at (2.8,-6) {$ (0,0,0,0,1)$};\n\\node(below1_attr)[label=below:{},align=center] at (2.8,-6.4) {$(\\mathcal{M},{\\sf OE2},{\\sf GO},\\mathcal{Z}_\\mathcal{M})$};\n\\node(below1_refer)[label=below:{},align=center] at (2.8,-6.8) {};\n\\node(below0)[label=below:{}, label=left:{}] at (0,-6) {$ (0,0,*,0,0)$};\n\\node(below0_attr)[label=below:{},align=center] at (0,-6.4) 
{$(\\mathcal{M},{\\sf BE},{\\sf TR},\\mathcal{Z}_\\mathbf{M})$};\n\\node(below0_refer)[label=below:{},align=center] at (0,-6.8) {};\n\\fill [lightgray!30] (1.5,5) rectangle (18.6,-5);\n\\node(mid4)[label=below:{},align=left] at (15.5,0) {:$(D_{train},{\\sf SE},S,F_\\theta,{\\sf FQ})$ \\\\\n:$(D_{train},{\\sf AT},S,F_\\theta,{\\sf FQ})$\\\\\n:$(D_{train},{\\sf AT},S,F_\\theta,{\\sf FQ})$\n};\n\\node(mid4-attr)[align=center] at (15.5,-0.85) {$(1,1,1,0)$};\n\\node(mid3)[label=below:{$(1,1,0,0)$},align=left] at (10,1.88) {\n:$(D_{train},{\\sf WR},S,F_\\theta,{\\sf FQ})$ \\\\\n:$(D_{train},{\\sf RF},S,F_\\theta,{\\sf FQ})$ \\\\\n:$(D_{train},{\\sf RF},S,F_\\theta,{\\sf FQ})$ \\\\\n:$(D_{train},{\\sf RF},S,F_\\theta,{\\sf FQ})$ \\\\\n:$(D_{train},{\\sf AT},S,F_\\theta,{\\sf FQ})$ \\\\\n:$(D_{train},{\\sf AT},S,F_\\theta,{\\sf FQ})$ \\\\\n:$(D_{train},{\\sf AT},S,F_\\theta,{\\sf FQ})$ \\\\\n:$(D_{train},{\\sf RF},S,F_\\theta,{\\sf FQ})$ \\\\\n:$(D_{train},{\\sf RF},S,F_\\theta,{\\sf FQ})$\n}; \n\\node(mid2)[label=below:{$(0,1,1,0)$},align=left] at (10,-1.5) {:$(D_{train},{\\sf CD},S,F_\\theta,{\\sf FQ})$};\n\\node(mid1)[label=below:{$(0,1,0,*)$},align=center] at (10,-3) {:$(D^\\ast_{train},{\\sf AT},S,F_\\theta,{\\sf FQ})$};\n\\node(mid0)[label=below:{$(0,1,0,0)$},align=left] at (4.5,0) {\n:$(D_{train},{\\sf SE},S,F_\\theta,{\\sf LQ})$\\\\\n:$(D_{train},{\\sf AT},S,F_\\theta,{\\sf FQ})$ \\\\\n:$(D_{train},{\\sf IT},S,F_\\theta,{\\sf FQ})$ \\\\\n:$(D_{train},{\\sf WR},S,F_\\theta,{\\sf FQ})$ \\\\\n:$(D_{train},{\\sf SE},S,F_\\theta,{\\sf FQ})$ \\\\\n:$(D_{train},{\\sf SE},S,F_\\theta,{\\sf FQ})$ \\\\\n:$(D_{train},{\\sf IT},S,F_\\theta,{\\sf FQ})$ \\\\ \n:$(D_{train},{\\sf EL},S,F_\\theta,{\\sf FQ})$\n};\n\\node(det3)[label=below:{},align=center, fill=blue!20,fill opacity=0.8] at (18.2,3) {};\n\\node(det2)[label=below:{},align=center, fill=blue!20,fill opacity=0.8] at (18.1,4.65) {};\n\\node(dnn1)[label=below:{},align=center, fill=blue!20,fill opacity=0.8] at (15.2, 3.7) 
{DNN-based malware \\\\ detector };\n\\node(dnn1_label)[align=center,color=blue!50] at (15.2,2.8) {$(D_{train},\\emptyset,S,F_\\theta,{\\sf FQ})$\\\\$(0,0,0,0)$};\n\\node(apk1)[label=right:{},align=center, fill=blue!20,fill opacity=0.8] at (6, 3.9) {Drebin };\n\\node(apk1_label) [align=center,color=blue!50] at (6.2,3.2) {$(D_{train},\\emptyset,S,F_\\theta,{\\sf FQ})$\\\\$(0,0,0,0)$};\n\\node(det1)[label=below:{},align=center, fill=blue!20,fill opacity=0.8] at (3.05,3.9) {};\n\\node(pdf2)[label=below:{},align=center, fill=blue!20,fill opacity=0.8] at (3.6, 3.2) {PDFrate };\n\\node(pdf2_label)[align=center,color=blue!50] at (3.6,2.4) {$(D_{train},\\emptyset,S,F_\\theta,{\\sf FQ})$\\\\$(0,0,0,0)$};\n\\node(pdf1)[label=below:{},label=below:{},align=left, fill=blue!20,fill opacity=0.8] at (3.6, -3.8) {PDFrate };\n\\node(pdf1_label)[align=center,color=blue!50] at (3.6,-4.6) {$(D_{train},\\emptyset,S,F_\\theta,{\\sf FQ})$\\\\$(0,0,0,0)$};\n{\n\\draw [-latex,line width=1] (3.3,-5.7) -- (3.3, -5) -- (2.3, -5) -- (2.3, -5.7);\n\\draw [-latex,line width=1] (11.7,-5.7) -- (11.7, -5) -- (10.7, -5) -- (10.7, -5.7);\n\\draw [-latex,line width=1] (14.5,-5.7) -- (14.5, -5) -- (13.5, -5) -- (13.5, -5.7);\n\\draw [-latex,line width=1] (20.5,-5.7) -- (20.5,-5) -- (19.5,-5) -- (19.5,-5.7);\n\\draw [-latex,line width=1] (19.5,1.2) -- (19.5,1.4) -- (18.6,1.4) -- (18.6,0) -- (19.5,0) -- (19.5,0.4); \n\\draw [-latex,line width=1] (above0_attr) -- (0, 3.9) -- (det1);\n\\draw [-latex,line width=1] (20,5.7) -- (20,4.65) -- (det2);\n\\draw [-latex,line width=1] (19,3.) 
-- (det3);\n\\draw [-latex,line width=1] (6.1,-5.7) -- (6.1, -5) -- (5.1, -5) -- (5.1, -5.7);\n\\draw [-latex,line width=1] (17.3,-5.7) -- (17.3, -5) -- (16.3, -5) -- (16.3, -5.7);\n\\ignore{\n\\draw [-latex,line width=1] (6.4,-0.2)node[red,right]{}--(5.8,-0.2);\n\\draw [-latex,line width=1] (6.4,0.2)node[red,right]{}--(5.9,0.2);\n\\draw [-latex,line width=1] (9.2,0.7)node[red,above]{}--(9.2,0.2);\n\\draw [-latex,line width=1] (6.4,-3.2)node[red,left]{}--(6.9,-3.2);\n\\draw [-latex,line width=1] (13.6,-3.2)node[red,left]{}--(14.1,-3.2);\n}\n\\draw [-latex,line width=1] (below3) -- (8.4,-5.0) -- (7.25,-5.0) -- (7.25,-1.5) -- (6.25,-1.5);\n\\draw [-latex,line width=1] (7.25,-1.5) -- (7.25,0.2) -- (8.1,0.2);\n\\draw [line width=1] (18.7,-2.4) -- (18.6,-2.4) -- (18.6,-1.05) -- (7.35,-1.05); \n\\draw [-latex,line width=1] (7.15,-1.05) --(6.25,-1.05);\n\\draw [-latex,line width=1] (12.55, -1.05) -- (12.55, 0.65) -- (11.75, 0.65);\n\\tkzDefPoint(7.25,-1.05){c1}\n\\tkzDefPoint(7.15,-1.05){c2}\n\\tkzDrawArc[rotate,color=black,line width=1](c1,c2)(180);\n\\draw [-latex,line width=1] (11.1,5.7) -- (11.1,4.3) -- (13.15,4.3) -- (13.15,3.7) -- (13.6, 3.7); \n\\draw [blue,densely dashdotted,-latex,line width=1] (16.8,3.7) -- (17.5,3.7) -- (17.5,0.1) -- (17.2,0.1); \n\\draw [red,dashed,-latex,line width=1] (11.3,5.7) -- (11.3,5.0) -- (13.9,5.0) -- (13.9,5.7);\n\\draw [-latex,line width=1] (14.1,5.7) -- (14.1,4.3) -- (17.7,4.3) -- (17.7,-0.1) -- (17.2,-0.1); \n\\draw [blue,densely dashdotted,-latex,line width=1] (13.55,0) -- (12.75,0) -- (12.75,1.8) -- (11.6,1.8);\n\\draw [red,dashed,-latex,line width=1] (14.3,5.7) -- (14.3,5.0) -- (16.7,5.0) -- (16.7,5.7);\n\\draw [line width=1] (17.3,5.7) -- (17.3,4.4);\n\\draw [-latex,line width=1] (17.3,4.2) -- (17.3,2) -- (11.6,2);\n\\draw [blue,densely dashdotted,-latex,line width=1] (8.05, 1.9) -- (7.25,1.9) -- (7.25,1.45) -- (8.1,1.45) ;\n\\tkzDefPoint(17.3,4.3){c3}\n\\tkzDefPoint(17.3,4.2){c4}\n\\tkzDrawArc[rotate,color=black,line 
width=1](c3,c4)(180);\n\\draw [-latex,line width=1] (2.7,5.7) -- (2.7,4.3) -- (4.7,4.3) -- (4.7,3.9) -- (5.2,3.9); \n\\draw [blue,densely dashdotted,-latex,line width=1] (6.9,3.9) -- (7.05,3.9) -- (7.05,3.58) -- (8.1,3.58); \n\\draw [red,dashed,-latex,line width=1] (2.9,5.7) -- (2.9,5.0) -- (8.5,5.0) --(8.5,5.7); \n\\draw [red,dashed,-latex,line width=1] (5.7,5.0) -- (5.7,5.7);\n\\draw [-latex,line width=1] (5.5,5.7) -- (5.5,4.3) -- (8.3,4.3) -- (8.3,3.7); \n\\draw [line width=1] (8.3,5.7) -- (8.3,4.3); \n\\draw [blue,densely dashdotted,-latex,line width=1] (11.9,3.7) -- (12.95,3.7) -- (12.95,0.43) -- (13.55,0.43);\n\\draw [blue,densely dashdotted,-latex,line width=1] (11.9,3.5) -- (12.75,3.5) -- (12.75,2.7) -- (11.8,2.7);\n\\draw [-latex,line width=1] (0.1, 3.1) -- (0.1,3.3) -- (2.5,3.3); \n\\draw [blue,densely dashdotted,-latex,line width=1] (2.4,3.1) -- (1.7,3.1) -- (1.7, 1.55) -- (2.5,1.55); \n\\draw [red,dashed,-latex,line width=1] (-0.1,2.1) -- (-0.1,1.1);\n\\draw [-latex,line width=1] (0.1,1.1) -- (0.1, 1.35) -- (2.5,1.35); \n\\draw [blue,densely dashdotted,-latex,line width=1] (4.7,3.2) -- (8.1,3.2);\n\\draw [-latex,line width=1] (0.1,-5.7) -- (0.1,-5) -- (1.5,-5) -- (1.5,-5) -- (1.5,-3.9) -- (2.5,-3.9); \n\\draw [blue,densely dashdotted,-latex,line width=1] (2.4,-3.7) -- (1.7,-3.7) -- (1.7, -0.75) -- (2.5, -0.75); \n\\draw [red,dashed,-latex,line width=1] (-0.1,-5.7) -- (-0.1,-3.9); \n\\draw [-latex,line width=1](0.1,-2.9) -- (0.1,-2) -- (1.5,-2) -- (1.5,-0.55) -- (2.5,-0.55); \n\\draw [blue,densely dashdotted,-latex,line width=1] (6.4,-0.6) -- (7.05,-0.6) -- (7.05,2.3) -- (8.1,2.3); \n\\draw [red,dashed,-latex,line width=1] (-0.1,-2.9) -- (-0.1,-2); \n\\draw [-latex,line width=1](0.1,-0.9) -- (0.1,-0.55) -- (2.5,-0.55);\n}\n\\draw [-latex,line width=1] (12.3,7.45) -- (12.9,7.45) node[right,align=center]{Waging \\\\ Attack};\n\\draw [blue,densely dashdotted,-latex,line width=1] (14.5,7.45) -- (15.1,7.45);\n\\node [right,align=center] at (15.1,7.45) 
{Defense \\\\ Escalation};\n\\draw [red,dashed,-latex,line width=1] (17,7.45) -- (17.6,7.45);\n\\node [right,align=center] at (17.6, 7.45) {Attack \\\\ Escalation};\n\\draw (11,7) -- (20,7) -- (20,7.9) -- (11,7.9) -- node [right]{Legend:} (11,7);\n\\end{tikzpicture}\n}\n}\n\\caption{Arms race in AMD attack and defense escalations.}\n\\label{fig:attacks-defenses}\n\\end{figure*}\n\\end{rotatepage}\n}\nFigure \\ref{fig:attacks-defenses} displays the AMD attack-defense arms race surrounding three malware detectors: PDFrate, Drebin, and a DNN-based detector. For a better visual effect, we group papers that proposed defense methods in terms of a common input $(a_6,\\ldots,a_9)$. For example, we group\n, and together because the defenders in both papers have input $(a_6,\\ldots,a_9)=(1,1,1,0)$, while noting that their input on $(A_1,\\ldots,A_5)$ may or may not be different.\nWe also simplify attack and defense inputs while preserving the {\\em critical information} when an attack (defense) works for multiple inputs. \nFor example, $(a_1,\\ldots,a_5|A_6,\\cdots,A_9)=(0,0,1,0,0|\\mathbf{M},{\\sf OE2}, {\\sf SF}, \\mathcal{X}_\\mathbf{M})$ is the {\\em critical information} for attack input \n$(a_1,\\cdots,a_5|A_6,\\cdots,A_9)=(0,0,1,0,0|\\mathbf{M},{\\sf OE2}, {\\sf SF}, \\mathcal{X}_\\mathbf{M}) \\lor (0,0,1,0,1|\\mathbf{M},{\\sf OE2}, {\\sf SF}, \\mathcal{X}_\\mathbf{M}) \\lor (1,0,1,0,0|\\mathbf{M},{\\sf OE2}, {\\sf SF}, \\mathcal{X}_\\mathbf{M}) \\lor (1,0,1,0,1|\\mathbf{M},{\\sf OE2}, {\\sf SF},\\mathcal{X}_\\mathbf{M})$\nbecause it is the weakest attack input in the partial order formulated by these $(a_1,\\ldots,a_5)$'s. This suggests that we focus on attack input $(0,0,1,0,0|\\mathbf{M},{\\sf OE2}, {\\sf SF},\\mathcal{X}_\\mathbf{M})$ because it is already able to break some defense, which automatically implies that a stronger input can achieve the same (while noting some special cases, see discussion in Section \\ref{sec:defense-el}). 
Multiple defense inputs are simplified in the same manner.\n{\em Arms race in PDF malware detection}: We summarize two sequences of escalations caused by PDFrate . In one sequence, PDFrate is defeated by transfer attacks, which are realized by gradient-based and mimicry methods against surrogate models . These attacks trigger the defense escalation to an ensemble detector built on top of some diversified classifiers . This defense triggers attack escalation to {\em reverse mimicry attacks} , \nwhich trigger the defense escalation of using robust hand-crafted features . This defense represents the state-of-the-art PDF malware detector, but still incurs a high false-positive rate. In the other sequence of arms race, PDFrate is defeated by genetic algorithm-based attacks . These attacks trigger the defense escalation to and . The former defense restricts the responses to attacker queries, but can be defeated by the escalated attack that leverages the hill-climbing algorithm (also shown in ). The latter defense uses invariant features to thwart the attacks and represents another state-of-the-art PDF malware detector.\n{\em Arms race in Android malware detection}: Drebin is defeated by the attack that modifies a limited number of important features , which also proposes a new defense to defeat the escalated attack. This defense triggers the attack escalation to, and is defeated by, the genetic algorithm-based attack and the mimicry-alike attack . The former attack triggers the escalated defense (also presented in ) that leverages attack mutations to detect adversarial examples . The latter attack injects objects in APKs and (in principle) can be defeated by the monotonic classifier . 
These escalated defenses represent the state-of-the-art Android malware detectors, but still incur a high false-positive rate.\n{\em Arms race in DNN-based malware detection}: The DNN-based detector triggers four gradient-based evasion attacks presented in , which also hardens the DNN malware detector by using a {\em minmax adversarial training} instantiation to incorporate the $\ell_\infty$ normalized gradient-based attack. This escalated defense triggers the mixture of attacks presented in . The defense of {\em minmax adversarial training} incorporating a mixture of attacks can defeat a broad range of attacks, but still suffers from the mimicry attack and other mixtures of attacks . \nAs a consequence, there are no effective defenses that can thwart all kinds of attacks.\n{\em Independent arms race}:\nThere are studies that have yet to trigger cascading arms races, including: (i) Studies propose independent attacks and then show how to defeat these attacks.\n(ii) Studies propose attacks to defeat naive malware detectors. 
(iii) Studies propose defenses to counter some attacks .", "id": "76e4c68c-dec2-4af2-b3fa-ce426944cf52", "level": "subsection", "origin_cites_number": 40, "parent_id": "e26ef155-9b9c-4a92-9ffa-e334fbe6ea87", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Systematizing AMD Arms Race" ], [ "subsection", "Systematizing AMD Arms Race" ] ], "subsections": [], "title": "Systematizing AMD Arms Race" }, { "cite_extract_rate": 0.818181818181818, "cites": [ 8399, 8696, 314, 892, 3878, 966, 937, 7754, 7312 ], "content": "\\label{sec:cd}\n\\noindent{\\bf FRD 1}: {\\em Pinning down the root cause(s) of adversarial malware examples}.\nSpeculations on root cause(s) include: (i) invalidity of the {\\sf IID} assumption because of distribution drifting, namely that testing files and training files are drawn from different distributions ;\n(ii) incompetent feature extraction ;\n(iii) high dimensionality of malware representations ;\n(iv) insufficient scale of training data ;\n(v) low-probability ``pockets'' in data manifolds ;\n(vi) linearity of DNNs ; and\n(vii) large curvature of decision boundaries . Although these speculations may be true, more studies are needed in order to (in)validate them. \n\\smallskip\n\\noindent{\\bf FRD 2}: {\\em Characterizing the relationship between transferability and vulnerability}.\nIn the AMD context, an attacker may use a surrogate model to generate adversarial examples and a defender may use a surrogate model for adversarial training. Transferability is related to the extent to which knowledge gained by a surrogate model may be the same as, or similar to, what is accommodated by a target model. The wide use of surrogate models in the AMD context suggests that there may be a fundamental connection between knowledge transferability and model vulnerability.\n\\smallskip\n\\noindent{\\bf FRD 3}: {\\em Investigating adversarial malware examples in the wild}. 
In the AMD context, it is challenging to generate practical adversarial malware examples that correspond to perturbations conducted in the feature space, owing to realistic constraints. On the other hand, an attacker can directly search for manipulations in the problem space. This may cause large perturbations, putting the value of studies on small perturbations in question. This represents a fundamental open problem that distinguishes the field of AMD from its counterparts in other application settings. \nTo date, this issue has largely been sidestepped by assuming that there is an oracle for telling whether manipulated or perturbed features indeed correspond to a malware sample or not.\n\smallskip\n\noindent{\bf FRD 4}: {\em Quantifying the robustness and resilience of malware detectors}. Robustness and resilience of malware detectors against adversarial examples need to be quantified, ideally with a provable guarantee. For this purpose, one may adapt the reduction-based paradigm underlying the provable security of cryptographic primitives and protocols.\n\smallskip\n\noindent{\bf FRD 5}: {\em Designing malware detectors with provable robustness and resilience guarantees}.\nHaving understood the root cause(s) of adversarial examples, characterized the effect of transferability, investigated the effectiveness of practical attacks, and designed metrics for quantifying the robustness and resilience of malware detectors, it is imperative to investigate robust malware detectors with provable robustness, ideally as rigorous as what has been achieved in the field of cryptography. In this regard, robust feature extraction, adversarial learning, and verifiable learning are promising candidates for making breakthroughs.\n\smallskip\n\noindent{\bf FRD 6}: {\em Forecasting the arms race in malware detection}. The arms race is a fundamental phenomenon inherent to the cybersecurity domain. 
In order to effectively defend against adversarial malware, one approach is to deploy proactive defense, which requires the capability to forecast the arms race between malware writers and defenders. For instance, it is important to predict how attacks will evolve and what kinds of information would be necessary in order to defeat such attacks.", "id": "965da080-653e-47f5-949b-de42288cf6a5", "level": "section", "origin_cites_number": 11, "parent_id": "4fe25043-06f8-4577-bd62-3fe42c7ab899", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Future Research Directions (FDRs)" ] ], "subsections": [], "title": "Future Research Directions (FDRs)" }, { "cite_extract_rate": 1, "cites": [ 7155 ], "content": "\\label{sec:conc}\nWe have presented a framework for systematizing the field of AMD through the lens of assumptions, attacks, defenses and security properties. This paves the way for precisely relating attacks and defenses. We have also shown how to apply the framework to systematize the AMD literature, including the arms race between AMD attacks and defenses. We have reported a number of insights.\nThe study leads to a set of future research directions. In addition to the ones described in Section \\ref{sec:cd}, we mention the following two, which are discussed here because there are rarely studies on these aspects. (i) To what extent explainability (or interpretability) of ML models can be leveraged to cope with adversarial malware examples? It is intuitive that explainability could be leveraged to recognize adversarial examples because they may not be explainable . \n(ii) To what extent uncertainty quantification can be leveraged to cope with adversarial malware examples? 
If the uncertainty associated with detectors' predictions on adversarial malware examples is inherently and substantially higher than the uncertainty associated with non-adversarial malware examples, this fact can be leveraged to recognize adversarial malware examples.\nFinally, we reiterate that the research community should seek to establish a solid foundation for AMD. Although this foundation can leverage ideas and techniques from AML, the unique characteristics of AMD warrant the need for a unique foundation.\n\bibliographystyle{ACM-Reference-Format}\n\bibliography{mal_attacks,mal_defenses,aml,cyber_security,data_mining,mal_detection}\n\end{document}", "id": "82494750-c0aa-44ab-add1-1d1d59481f4a", "level": "section", "origin_cites_number": 1, "parent_id": "4fe25043-06f8-4577-bd62-3fe42c7ab899", "prefix_titles": [ [ "title", "Arms Race in Adversarial Malware Detection: A Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
94
[ 314, 7755, 7337, 893, 3856, 3857, 937, 3853, 3854, 7754, 7153, 3855, 3861, 7154, 3860, 3859, 3858, 1159, 8565, 318, 166, 3862, 7312, 9099, 8695, 890, 3866, 7156, 888, 3868, 1193, 3867, 3865, 6172, 3864, 7155, 9100, 3863, 3869, 917, 3870, 7756, 3871, 892, 923, 3873, 3872, 894, 243, 3874, 3875, 3877, 3876, 966, 8399, 8696, 3878 ]
0.959291
[ "Xiaoxiao Ma", "Jia Wu", "Shan Xue", "Jian Yang", "Chuan Zhou", "Quan Z. Sheng", "Hui Xiong", "Leman Akoglu" ]
A Comprehensive Survey on\\ Graph Anomaly Detection with Deep Learning
2021
2021-06-14T06:04:57Z
cs.LG
Anomalies are rare observations (\eg data records or events) that deviate significantly from the others in the sample. Over the past few decades, research on anomaly mining has received increasing interests due to the implications of these occurrences in a wide range of disciplines - for instance, security, finance, and medicine. For this reason, anomaly detection, which aims to identify these rare observations, has become one of the most vital tasks in the world and has shown its power in preventing detrimental events, such as financial fraud, network intrusions, and social spam. The detection task is typically solved by identifying outlying data points in the feature space, which, inherently, overlooks the relational information in real-world data. At the same time, graphs have been prevalently used to represent the structural/relational information, which raises the \textit{graph anomaly detection problem} - identifying anomalous graph objects (\ie nodes, edges and sub-graphs) in a single graph, or anomalous graphs in a set/database of graphs. Conventional anomaly detection techniques cannot tackle this problem well because of the complexity of graph data (\eg irregular structures, relational dependencies, node/edge types/attributes/directions/multiplicities/weights, large scale, etc.). However, thanks to the advent of deep learning in breaking these limitations, graph anomaly detection with deep learning has received a growing attention recently. In this survey, we aim to provide a systematic and comprehensive review of the contemporary deep learning techniques for graph anomaly detection. Specifically, we provide a taxonomy that follows a task-driven strategy and categorizes existing work according to the anomalous graph objects that they can detect. We especially focus on the challenges in this research area and discuss the key intuitions, technical details as well as relative strengths and weaknesses of various techniques in each category. 
From the survey results, we highlight 12 future research directions spanning unsolved and emerging problems introduced by graph data, anomaly detection, deep learning and real-world applications. Additionally, to provide a wealth of useful resources for future studies, we have compiled a set of open-source implementations, public datasets, and commonly-used evaluation metrics. With this survey, our goal is to create a ``one-stop-shop'' that provides a unified understanding of the problem categories and existing approaches, publicly available hands-on resources, and high-impact open challenges for graph anomaly detection using deep learning.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "5020eb09-4f12-4d1a-8dae-076499c8173a", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ] ], "subsections": [ "ecbafaf3-1565-4746-bf9c-a9182b235789", "8ea06e54-3d89-4768-967e-6b68c2d22f53", "59eed45d-0f6f-4983-a1d0-48c2fb48a606", "12f2fc41-29d6-4189-b62f-ff980f20a825", "678bd7af-5bf2-44f6-ae32-df413829eb80", "78c71045-f884-4e7f-b2a8-d7769a5bc16b", "5da2d684-654a-4048-95ec-2ae406b319b2", "9ba15010-a3db-4588-8e7f-56a7def289f5", "ae3bbf88-f1cb-408b-8ded-ef8445178dfa", "8147e459-f864-454f-86f6-52d0b0cd6b52", "e97bdf9c-475c-4ebc-8114-7af7a8b0aa87", "32ddd0ab-f741-4a6c-a1ff-380ee935d4a2", "9a4d2cc0-0208-4633-a227-f43cf17e07e4", "12e1194f-a7ed-4e46-889d-a06a013eaeaf", "dfc7ea98-0750-495a-b3c3-8183220b6042", "a37030ed-a13e-4c9b-bdf9-32f760d9d73a", "ef880176-a742-4e90-bb20-77c1c0e4b593", "6dcc483a-2e1d-4f81-abe8-97ae152ee771" ], "title": "root" }, { "cite_extract_rate": 0.24193548387096703, "cites": [ 6272, 3207, 1671, 553, 3344, 6271, 6277, 6275, 217, 1662, 6274, 6276, 6270, 6273, 6278 ], "content": "\\label{sec:introduction}}\n\\IEEEPARstart{A}{nomalies} were first defined by Grubbs in 1969~ as \\textit{``one that appears to deviate markedly from other members of the sample in which it occurs''} and the studies on anomaly detection were initiated by the statistics community in the 19th century.\nTo us, anomalies might appear as social spammers or misinformation in social media; fraudsters, bot users or sexual predators in social networks; network intruders or malware in computer networks and broken devices or malfunctioning blocks in industry systems, and they often introduce huge damage to the real-world systems they appear in.\nAccording to FBI's 2014 Internet Crime Report\\footnote{https://www.fbi.gov/file-repository/2014\\_ic3report.pdf/view}, the financial loss due to crime on social media 
reached more than \\$60 million in the second half of the year alone and a more up-to-date report\\footnote{https://www.zdnet.com/article/online-fake-news-costing-us-78-billion-globally-each-year/} indicates that the global economic cost of online fake news reached around \\$78 billion a year in 2020.\nIn computer science, the research on anomaly detection dates back to the 1980s, and detecting anomalies on graph data has been an important data mining paradigm since the beginning.\nHowever, the extensive presence of connections between real-world objects and advances in graph data mining in the last decade have revolutionized our understanding of the graph anomaly detection problems such that this research field has received a dramatic increase in interest over the past five years.\nOne of the most significant changes is that graph anomaly detection has evolved from relying heavily on human experts' domain knowledge into machine learning techniques that eliminate human intervention, and more recently, to various deep learning technologies.\nThese deep learning techniques are not only capable of identifying potential anomalies in graphs far more accurately than ever before, but they can also do so in real-time.\n\\begin{figure}[!t]\n\\setlength{\\belowcaptionskip}{-0.5cm}\n\\centering\n\\subfigure[Conventional Anomaly Detection]{\n\\label{Fig1a}\n\\includegraphics[scale=0.45]{pics/Introduction/Conventional_AD.pdf}}\n\\subfigure[Graph Anomaly Detection]{\n\\label{Fig1b}\n\\includegraphics[scale=0.45]{pics/Introduction/Toy_Example.pdf}} \n\\caption{Toy Examples of Conventional Anomaly Detection and Graph Anomaly Detection. 
Apart from anomalies shown in (b), graph anomaly detection also identifies graph-level anomalies, detailed in Sections~\\ref{sec:anosgd:db} and~\\ref{sec:anosgd:dynamic}.} \n\\label{Toy}\n\\end{figure}\nFor our purposes today, anomalies, which are also known as outliers, exceptions, peculiarities, rarities, novelties, etc., in different application fields, refer to abnormal objects that are significantly different from the standard, normal, or expected. \nAlthough these objects rarely occur in real-world, they contain critical information to support downstream applications. For example, the behaviors of fraudsters provide evidences for anti-fraud detection and abnormal network traffics reveal signals for network intrusion protection. Anomalies, in many cases, may also have real and adverse impacts, for instance, fake news in social media can create panic and chaos with misleading beliefs~, untrustworthy reviews in online review systems can affect customers' shopping choices~, network intrusions might leak private personal information to hackers~, and financial frauds can cause huge damage to economic systems~.\nAnomaly detection is the data mining process that aims to identify the unusual patterns that deviate from the majorities in a dataset~. In order to detect anomalies, conventional techniques typically represent real-world objects as feature vectors (\\eg news in social media are represented as bag-of-words~, and images in web pages are represented as color histograms~), and then detect outlying data points in the vector space~, as shown in Fig.~\\ref{Fig1a}. 
\nAlthough these techniques have shown power in locating deviating data points in tabular data formats, they inherently discard the complex relationships between objects~.\nYet, in reality, many objects have rich relationships with each other, which can provide valuable complementary information for anomaly detection.\nTake online social networks as an example: fake users can be created using valid information from normal users or they can camouflage themselves by mimicking benign users' attributes~. \nIn such situations, fake users and benign users would have near-identical features, and conventional anomaly detection techniques might not be able to identify them using feature information only. \nMeanwhile, fake users always build relationships with a large number of benign users to increase their reputation and influence so they can get unexpected benefits, whereas benign users rarely exhibit such activities~.\nHence, these dense and unexpected connections formed by fake users denote their deviations from benign users, and more comprehensive detection techniques should take this structural information into account to pinpoint the deviating patterns of anomalies.\nTo represent the structural information, \textit{Graphs}, in which nodes/vertices denote real objects, and the edges denote their relationships, have been prevalently used in a range of application fields~, including social activities, e-commerce, biology, academia and communication. 
With the structural information contained in graphs, detecting anomalies in graphs raises a more complex anomaly detection problem in non-Euclidean space - graph anomaly detection (GAD) that aims to identify anomalous graph objects (\\ie nodes, edges or sub-graphs) in a single graph as well as anomalous graphs among a set/database of graphs~.\nAs a toy example shown in Fig.~\\ref{Fig1b}, given an online social network, graph anomaly detection aims to identify anomalous nodes (\\ie malicious users), anomalous edges (\\ie abnormal relations) and anomalous sub-graphs (\\ie malicious user groups).\nBut, because the copious types of graph anomalies cannot be directly represented in Euclidean feature space, it is not feasible to directly apply traditional anomaly detection techniques to graph anomaly detection, and researchers have intensified their efforts to GAD recently.\nAmongst earlier works in this area, the detection methods relied heavily on handcrafted feature engineering or statistical models built by domain experts~.\nThis inherently limits these techniques' capability to detect unknown anomalies, and the exercise tended to be very labor-intensive. 
\nMany machine learning techniques, such as matrix factorization~ and SVM~, have also been applied to detect graph anomalies.\nHowever, real-world networks often contain millions of nodes and edges that result in extremely high dimensional and large-scale data, and these techniques do not easily scale up to such data efficiently.\nPractically, they exhibit high computational overhead in both the storage and execution time~.\nThese general challenges associated with graph data are significant for the detection techniques, and we categorize them as data-specific challenges (Data-CHs) in this survey.\nA summary of them is provided in Appendix~\\ref{appendix:data-challenges}.\n\\begin{table*}[!h]\n\\centering \n\\footnotesize\n\\renewcommand\\arraystretch{1}\n\\setlength{\\tabcolsep}{2.8mm}{\n\\caption{A Comparison Between Existing Surveys on Anomaly Detection. We mark edge, sub-graph and graph detection as \\includegraphics[scale=0.2]{pics/Introduction/4.pdf} in our survey because we review more deep learning based works than any previous surveys.} \n\\resizebox{0.96\\textwidth}{!}{\n\\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c}\n\\toprule[1 pt]\n\\multirow{2}{*}{\\textbf{Surveys}} & \\multirow{2}{*}{\\textbf{AD}} & \\multirow{2}{*}{\\textbf{DAD}} & \\multirow{2}{*}{\\textbf{GAD}} \n& \\multicolumn{4}{c|}{\\textbf{GADL}} & \\multirow{2}{*}{\\textbf{Source Code}} & \\multicolumn{2}{c}{\\textbf{Dataset}} \\\\ \\cline{5-8} \\cline{10-11}\n& & & &\\textbf{Node} &\\textbf{Edge} & \\textbf{Sub-graph} & \\textbf{Graph} & &\\textbf{Real-world} &\\textbf{Synthetic} \\\\ \n\\midrule[1 pt]\nOur Survey & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf}& \\includegraphics[scale=0.15]{pics/Introduction/4.pdf}& \\includegraphics[scale=0.15]{pics/Introduction/4.pdf}& \\includegraphics[scale=0.15]{pics/Introduction/4.pdf}& \\includegraphics[scale=0.15]{pics/Introduction/4.pdf}& 
\\includegraphics[scale=0.15]{pics/Introduction/4.pdf}& \\includegraphics[scale=0.15]{pics/Introduction/4.pdf}& \\includegraphics[scale=0.15]{pics/Introduction/4.pdf}& \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} \\\\\n\\midrule[1 pt]\nChandola \\etal~ & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & - & - & - & - & - & - & - & - & - \\\\\nBoukerche \\etal~ & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/2.pdf} & - & - & - & - & - & - & - & - \\\\\nBulusu \\etal~ & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & - & - & - & - & - & - & - & - \\\\\nThudumu \\etal~ & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/1.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/1.pdf} & - & - & - & - & - & - & - \\\\\n\\midrule[1 pt]\nPang \\etal~ & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/1.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/1.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/1.pdf} & - & - & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & - \\\\\nChalapathy and Chawla~ & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & - & - & - & - & - & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & - \\\\\n\\midrule[1 pt]\nAkoglu \\etal~ & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & - & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & - & - & - & - & - & - & - \\\\\nRanshous \\etal~ & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & - & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & - & - & -& - & 
\\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & - & - \\\\\nJennifer and Kumar~ & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & - & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & - & - & - & - & - & - & - \\\\\n\\midrule[1 pt]\nEltanbouly \\etal~ & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/2.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/1.pdf} & - & - & - & - & - & - & - \\\\\nFernandes \\etal~ & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/2.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & - & - & - & - & - & - & - \\\\\nKwon \\etal~ & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/2.pdf} & - & - & - & - & - & - & \\includegraphics[scale=0.15]{pics/Introduction/1.pdf} & - \\\\\nGogoi \\etal~ & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/1.pdf} & - & - & - & - & - & - & - & - \\\\\n\\midrule[1 pt]\nSavage \\etal~ & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & - & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & - & - & - & - & - & - & - \\\\\nYu \\etal~ & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & - & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & - & - & - & - & - & - & - \\\\\nHunkelmann \\etal~ & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & - & \\includegraphics[scale=0.15]{pics/Introduction/1.pdf} & - & - & - & - & - & - & - \\\\\nPourhabibi \\etal~ & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/2.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/4.pdf} & \\includegraphics[scale=0.15]{pics/Introduction/2.pdf} & - & - & - & - & - & - \\\\\n\\bottomrule[1 pt]\n\\multicolumn{11}{l}{* AD: Anomaly Detection, DAD: Anomaly Detection with Deep Learning, GAD: 
Graph Anomaly Detection.} \\
\multicolumn{11}{l}{* GADL: Graph Anomaly Detection with Deep Learning.}\\
\multicolumn{11}{l}{* -: not included, \includegraphics[scale=0.2]{pics/Introduction/1.pdf} (1-2 references included), \includegraphics[scale=0.2]{pics/Introduction/2.pdf} (3-10 references included), \includegraphics[scale=0.2]{pics/Introduction/4.pdf} (10+ references included).}
\end{tabular}
\label{table:comparison}}}
\end{table*}
Non-deep learning based techniques also lack the capability to capture the non-linear properties of real objects~.
Hence, the representations of objects learned by them are not expressive enough to fully support graph anomaly detection.
To tackle these problems, more recent studies explore the potential of adopting deep learning techniques to identify anomalous graph objects.
As a powerful tool for data mining, deep learning has achieved great success in data representation and pattern recognition~.
Its deep architecture, with layers of parameters and transformations, appears to suit the aforementioned problems well. More recent advances, such as deep graph representation learning and graph neural networks (GNNs), further enrich the capability of deep learning for graph data mining~.
\nBy extracting expressive representations such that graph anomalies and normal objects can be easily separated, or the deviating patterns of anomalies can be learned directly through deep learning techniques, graph anomaly detection with deep learning (GADL) is starting to take the lead in the forefront of anomaly detection.\nAs a frontier technology, graph anomaly detection with deep learning, hence, is expected to generate more fruitful results on detecting anomalies and secure a more convenient life for the society.", "id": "ecbafaf3-1565-4746-bf9c-a9182b235789", "level": "section", "origin_cites_number": 62, "parent_id": "5020eb09-4f12-4d1a-8dae-076499c8173a", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Introduction" ] ], "subsections": [ "2ea37eb2-3734-4b08-ad7a-6c1226af2e9b", "b87f5810-e203-4606-8bb5-00623ec5335b", "7291e51c-caa8-4bd7-9c08-2d393eaf83dc" ], "title": "Introduction" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 6279, 4539, 6280 ], "content": "\\label{sec:introdution:challenges}\nDue to the complexity of anomaly detection and graph data mining~, in addition to the prior mentioned data-specific challenges, adopting deep learning techniques for graph anomaly detection also faces a number of challenges from the technical side. \nThese challenges associated with deep learning are categorized as technique-specific challenges (Tech-CHs), and they are summarized as follows.\n\\textbf{Tech-CH1. Anomaly-aware training objectives.} Deep learning models rely heavily on the training objectives to fine-tune all the trainable parameters. 
\nFor graph anomaly detection, this necessitates appropriate training objectives or loss functions such that the GADL models can effectively capture the differences between benign and anomalous objects.\nDesigning anomaly-aware objectives is very challenging because there is no prior knowledge about the ground-truth anomalies as well as their deviating patterns versus the majority.\nHow to effectively separate anomalies from normal objects through training remains critical for deep learning-based models.\n\\textbf{Tech-CH2. Anomaly interpretability.} In real-world scenarios, the interpretability of detected anomalies is also vital because we need to provide convincing evidence to support the subsequent anomaly handling process. For example, the risk management department of a financial organization must provide lawful evidence before blocking the accounts of identified anomalous users. \nAs deep learning has been limited for its interpretability~, how to justify the detected graph anomalies remains a big challenge for deep learning techniques.\n\\textbf{Tech-CH3. High training cost.} Although D(G)NNs are capable of digesting rich information (\\eg structural information and attributes) in graph data for anomaly detection, these GADL models are more complex than conventional deep neural networks or machine learning methods due to the anomaly-aware training objectives. Such complexity inherently leads to high training costs in both time and computing resources.\n\\textbf{Tech-CH4. Hyperparameter tuning.} D(G)NNs naturally exhibit a large set of hyperparameters, such as the number of neurons in each neural network layer, the learning rate, the weight decay and the number of training epochs. Their learning performance is significantly affected by the values of these hyperparameters. 
However, it remains a serious challenge to effectively select optimal or sub-optimal settings for the detection models due to the lack of labeled data in real scenarios.
Because deep learning models are sensitive to their associated hyperparameters, setting well-performing values for the hyperparameters is vital to the success of a task.
Tuning hyperparameters is relatively trivial in supervised learning when labeled data are available. For instance, users can find an optimal/sub-optimal set of hyperparameters (\eg through random search or grid search) by comparing the model's outputs with the ground-truth. However, unsupervised anomaly detection has no accessible labeled data to judge the model's performance under different hyperparameter settings~. Selecting the ideal hyperparameter values for unsupervised detection models persists as a critical obstacle to applying them in a wide range of real scenarios.
\begin{figure*}[!t]
\setlength{\belowcaptionskip}{-0.25cm}
 \centerline{\includegraphics[width=0.98\textwidth]{pics/Introduction/Timeline.pdf}}
 \caption{A Timeline of Graph Anomaly Detection and Reviewed Techniques.}
 \label{pic:timeline}
\end{figure*} 
Recognizing the significance of anomaly detection, many review works have been conducted in the last ten years covering a range of anomaly detection topics: anomaly detection with deep learning, graph anomaly detection, graph anomaly detection with deep learning, and particular
applications of graph anomaly detection such as social media, social networks, fraud detection and network security.
There are some representative surveys on generalized anomaly detection techniques~,~ and~.
However, only the most up-to-date work by Thudumu \etal~ covers the topic of graph anomaly detection.
Recognizing the power of deep learning, three contemporary surveys, Ruff \etal~, Pang \etal~ and Chalapathy and Chawla~, specifically review deep learning based anomaly detection techniques.
As for graph anomaly detection, Akoglu \etal~, Ranshous \etal~, and Jennifer and Kumar~ concentrate on graph anomaly detection, reviewing many conventional approaches in this area, including statistical models and machine learning techniques.
Other surveys are dedicated to particular applications of graph anomaly detection, such as computer network intrusion detection and anomaly detection in online social networks, \eg~, and~.
These works provide solid reviews of the application of anomaly detection/graph anomaly detection techniques in these high-demand and vital domains.
\nHowever, none of the mentioned surveys are dedicated to techniques on graph anomaly detection with deep learning, as shown in Table~\\ref{table:comparison}, and hence do not provide a systematic and comprehensive review of these techniques.", "id": "b87f5810-e203-4606-8bb5-00623ec5335b", "level": "subsection", "origin_cites_number": 17, "parent_id": "ecbafaf3-1565-4746-bf9c-a9182b235789", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Introduction" ], [ "subsection", "Existing Anomaly Detection Surveys" ] ], "subsections": [], "title": "Existing Anomaly Detection Surveys" }, { "cite_extract_rate": 0, "cites": [], "content": "Our contributions are summarized as follows:\n\\begin{itemize}\n \\item \\textbf{The first survey in graph anomaly detection with deep learning.} To the best of our knowledge, our survey is the first to review the state-of-the-art deep learning techniques for graph anomaly detection. Most of the relevant surveys focus either on conventional graph anomaly detection methods using non-deep learning techniques or on generalized anomaly detection techniques (for tabular/point data, time series, etc.). Until now, there has been no dedicated and comprehensive survey on graph anomaly detection with deep learning. Our work bridges this gap, and we expect that an organized and systematic survey will help push forward research in this area. \n \\item \\textbf{A systematic and comprehensive review.} In this survey, we review the most up-to-date deep learning techniques for graph anomaly detection published in influential international conferences and journals in the area of deep learning, data mining, web services, and artificial intelligence, including: TKDE, TKDD, TPAMI, NeurIPS, SIGKDD, ICDM, WSDM, SDM, SIGMOD, IJCAI, AAAI, ICDE, CIKM, ICML, WWW, CVPR, and others. 
We first summarize seven data-specific and four technique-specific challenges in graph anomaly detection with deep learning. We then comprehensively review existing works from the perspectives of: 1) the motivations behind the deep methods; 2) the main ideas for identifying graph anomalies; 3) a brief introduction to conventional non-deep learning techniques; and 4) the technical details of deep learning algorithms. A brief timeline of graph anomaly detection and reviewed works is given in Fig.~\ref{pic:timeline}.
 \item \textbf{Future directions.} From the survey results, we highlight 12 future research directions covering emerging problems introduced by graph data, anomaly detection, deep learning models, and real-world applications. These future opportunities indicate challenges that have not been adequately tackled and call for further research effort.
 \item \textbf{Abundant resources.} Our survey also provides an extensive collection of open-sourced anomaly detection algorithms, public datasets, synthetic dataset generation techniques, as well as commonly used evaluation metrics to push forward the state-of-the-art in graph anomaly detection. These published resources offer benchmark datasets and baselines for future research. 
 \item \textbf{A new taxonomy.} We have organized this survey with regard to the different types of anomalies (\ie nodes, edges, sub-graphs, and graphs) existing in graphs or graph databases. We also pinpoint the differences and similarities between different types of graph anomalies.
\end{itemize}
The rest of this survey is organized as follows. In Section~\ref{sec:preliminaries}, we provide preliminaries on the different types of graphs considered in this survey.
\nFrom Section~\\ref{sec:AND} to Section~\\ref{sec:anosgd:dynamic}, we review existing techniques for detecting anomalous nodes, edges, sub-graphs and graphs, respectively.\nIn Section~\\ref{sec:resources}, we first provide a collection of published graph anomaly detection algorithms and datasets and then summarize commonly used evaluation metrics and synthetic data generation strategies.\nWe highlight 12 future directions concerning deep learning in graph anomaly detection in Section~\\ref{sec:futures} and summarize our survey in Section~\\ref{sec:conclusion}.\nA concrete taxonomy of our survey is given in Appendix~\\ref{appendix:taxonomy}.\n\\vspace{-0.1cm}", "id": "7291e51c-caa8-4bd7-9c08-2d393eaf83dc", "level": "subsection", "origin_cites_number": 0, "parent_id": "ecbafaf3-1565-4746-bf9c-a9182b235789", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Introduction" ], [ "subsection", "Contributions" ] ], "subsections": [], "title": "Contributions" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 3344 ], "content": "\\label{sec:preliminaries}\nIn this section, we provide definitions of different types of graphs mostly used in node/edge/sub-graph-level anomaly detection (Section~\\ref{sec:AND} to Section~\\ref{sec:subgraph}).\nFor consistency, we have followed the conventional categorization of graphs as in existing works~ and categorize them as static graphs, dynamic graphs, and graph databases. Unless otherwise specified, all graphs mentioned in the following sections are static. Meanwhile, as graph-level anomaly detection is discussed far away on page 13, to enhance readability, the definition for the graph database is given closer to the material in Section~\\ref{sec:anosgd:db}.\n\\textbf{\\textit{Definition 1 (Plain Graph)}}. 
A static plain graph $G = \{V, E\}$ comprises a node set $V = \{ v_{i} \}_{1}^{n}$ and an edge set $E = \{e_{i,j}\}$, where $n$ is the number of nodes and $e_{i,j} = (v_i,v_j)$ denotes an edge between nodes $v_{i}$ and $v_{j}$. The adjacency matrix $A = [a_{i,j}]_{n\times n}$ stores the graph structure, where $a_{i,j} = 1$ if nodes $v_{i}$ and $v_{j}$ are connected, and $a_{i,j} = 0$ otherwise.
\textbf{\textit{Definition 2 (Attributed Graph)}}. A static attributed graph $G = \{V, E, X\}$ comprises a node set $V$, an edge set $E$ and an attribute set $X$. In an attributed graph, the graph structure follows Definition 1. The attribute matrix $X = [\mathbf{x}_{i}]_{n\times k}$ consists of the nodes' attribute vectors, where $\mathbf{x}_{i}$ is the attribute vector associated with node $v_{i}$ and $k$ is the vector's dimension. Hereafter, the terms attribute and feature are used interchangeably.
\textbf{\textit{Definition 3 (Dynamic Graph)}}. A dynamic graph $G(t) = \{V(t), E(t), X_{v}(t), X_{e}(t) \}$ comprises nodes and edges changing over time. $V(t)$ is the node set in the graph at a specific time step $t$, $E(t)$ is the corresponding edge set, and $X_{v}(t)$ and $X_{e}(t)$ are the node attribute matrix and edge attribute matrix at time step $t$, if they exist.
In reality, the nodes or edges might also be associated with numerical or categorical labels to indicate their classes (\eg normal or abnormal).
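For concreteness, the definitions above can be sketched as minimal data containers (an illustrative sketch only, not tied to any graph library); the labels at the end correspond to the optional class labels just mentioned:

```python
def adjacency_matrix(n, edges):
    # Definition 1: a_ij = 1 iff nodes v_i and v_j are connected.
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = A[j][i] = 1
    return A

# Definition 2: an attributed graph adds an n x k attribute matrix X.
n, edges = 4, [(0, 1), (1, 2), (2, 3)]
A = adjacency_matrix(n, edges)
X = [[0.5, 1.0], [0.4, 0.9], [0.6, 1.1], [0.5, 1.0]]  # k = 2 attributes per node

# Definition 3: a dynamic graph maps each time step t to its snapshot.
G_dynamic = {0: (list(range(n)), edges, X)}

# Optional labels indicating node classes (e.g. normal vs. abnormal).
labels = {3: "abnormal"}
```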
When label information is fully or partially available, supervised or semi-supervised detection models can be effectively trained.
\label{sec:AND}
Anomalous nodes are commonly recognized as individual nodes that are significantly different from others.
In real-world applications, these nodes often represent abnormal objects that appear individually, such as a single network intruder in computer networks, an independent fraudulent user in online social networks or a piece of fake news on social media.
In this section, we specifically focus on anomalous node detection in static graphs.
The reviews on dynamic graphs can be found in Section~\ref{sec:node:dg}. Table~\ref{table:ANOSND} at the end of Section~\ref{sec:node:dg} provides a summary of techniques reviewed for ANOS ND.
When detecting anomalous nodes in static graphs, the differences between anomalies and regular nodes are mainly drawn from the graph structural information and nodes'/edges' attributes~.
Given prior knowledge (\ie community structure, attributes) about a static graph, anomalous nodes can be further categorized into the following three types:
\begin{itemize}
 \item \textbf{Global anomalies} only consider the node attributes. They are nodes that have attributes significantly different from all other nodes in the graph. 
 \item \textbf{Structural anomalies} only consider the graph structural information.
They are abnormal nodes that have different connection patterns (\\eg connecting different communities, forming dense links with others).\n \\item \\textbf{Community anomalies} consider both node attributes and graph structural information. They are defined as nodes that have different attribute values compared to other nodes in the same community.\n\\end{itemize}\nIn Fig.~\\ref{pic:node_classes}, node 14 is a global anomaly because its 4th feature value is 1 while all other nodes in the graph have the value of 0 for the corresponding feature. Nodes 5, 6, and 11 are identified as structural anomalies because they have links with other communities while other nodes in their community do not form cross-community links. Nodes 2 and 7 are community anomalies because their feature values are different from others in the communities they belong to. \n\\begin{figure}[!t]\n \\setlength{\\belowcaptionskip}{-0.5cm} \n \\centerline{\\includegraphics[scale=0.47]{pics/Node/node_classes_new.pdf}}\n \\caption{Three Types of Anomalous Nodes: Structural Anomalies, Community Anomalies and Global Anomalies.}\n \\label{pic:node_classes}\n\\end{figure}", "id": "59eed45d-0f6f-4983-a1d0-48c2fb48a606", "level": "section", "origin_cites_number": 4, "parent_id": "5020eb09-4f12-4d1a-8dae-076499c8173a", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Anomalous node detection (ANOS ND)" ] ], "subsections": [ "5e100893-099a-4201-b53e-ba07115d4307", "9b59f8e2-fd33-4bf3-bf31-9d82848223fa" ], "title": "Anomalous node detection (ANOS ND)" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:node:spg}\nPlain graphs are dedicated to representing the structural information in real-world networks.\nTo detect anomalous nodes in plain graphs, the graph structure has been extensively exploited from various angles. 
\nHere, we first summarize the representative traditional non-deep learning approaches, followed by a more recent, advanced detection technique based on representation learning.\n\\begin{figure*}[!t]\n \\setlength{\\belowcaptionskip}{-0.25cm} \n \\centerline{\\includegraphics[width=0.98\\textwidth]{pics/Node/Traditional.pdf}}\n \\caption{ANOS ND on attributed graphs -- Deep NN based approaches. As an example, the autoencoder is used to capture the graph structure and node attributes. With a specially-designed anomaly aware loss function, anomaly scores will be assigned to every node, and the top-k nodes are anomalies (\\eg nodes 9, 1, and 2 at top-3).}\n \\label{pic:conventional_autoencoder_framework}\n\\end{figure*}", "id": "5e100893-099a-4201-b53e-ba07115d4307", "level": "subsection", "origin_cites_number": 0, "parent_id": "59eed45d-0f6f-4983-a1d0-48c2fb48a606", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Anomalous node detection (ANOS ND)" ], [ "subsection", "ANOS ND on Plain Graphs" ] ], "subsections": [ "658ca9f1-8bc4-490f-ac7d-c460db855b3b", "699af85c-49af-4ca0-a394-878a65f78e4e", "bc2ebce7-d1c1-4f0b-881b-6a0ace25800e" ], "title": "ANOS ND on Plain Graphs" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:anosnd:tndp}\nPrior to the recent advances in deep learning and other state-of-the-art data mining technologies, traditional non-deep learning techniques have been widely used in many real-world networks to identify anomalous entities.\nA key idea behind these techniques was to transform the graph anomaly detection into a traditional anomaly detection problem, because the graph data with rich structure information can not be handled by the traditional detection techniques (for tabular data only) directly. \nTo bridge the gap, many approaches~ used the statistical features associated with each node, such as in/out degree, to detect anomalous nodes. 
\nFor instance, OddBall~ employs the statistical features (\\eg the number of 1-hop neighbors and edges, the total weight of edges) extracted from each node and its 1-hop neighbors to detect particular structural anomalies that: 1) form local structures in shape of near-cliques or stars; 2) have heavy links with neighbors such that the total weight is extremely large; or 3) have a single dominant heavy link with one of the neighbors. \nWith properly selected statistical features, anomalous nodes can be identified with respect to their deviating feature patterns.\nBut, in real scenarios, it is very hard to choose the most suitable features from a large number of candidates, and domain experts can always design new statistics, \\eg the maximum/minimum weight of edges.\nAs a result, these techniques often carry prohibitive cost for assessing the most significant features and do not effectively capture the structural information.", "id": "658ca9f1-8bc4-490f-ac7d-c460db855b3b", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "5e100893-099a-4201-b53e-ba07115d4307", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Anomalous node detection (ANOS ND)" ], [ "subsection", "ANOS ND on Plain Graphs" ], [ "subsubsection", "Traditional Non-Deep Learning Techniques" ] ], "subsections": [], "title": "Traditional Non-Deep Learning Techniques" }, { "cite_extract_rate": 0.27272727272727204, "cites": [ 282, 6281, 218 ], "content": "\\label{sec:spg:nr}\nTo capture more valuable information from the graph structure for anomaly detection, network representation techniques have been widely exploited. \nTypically, these techniques encode the graph structure into an embedded vector space and identify anomalous nodes through further analysis. Hu \\etal~, for example, proposed an effective embedding method to detect structural anomalies that are connecting with many communities. 
\nIt first adopts a graph partitioning algorithm (\\eg METIS~) to group nodes into $d$ communities ($d$ is a user-specified number). \nThen, the method employs a specially designed embedding procedure to learn node embeddings that could capture the link information between each node and $d$ communities.\nDenoting the embedding for node $i$ as $Z_i = \\{z_{i}^{1},\\cdots,z_{i}^{d}\\}$, the procedure initializes each $z_{i}^{c} \\in Z_i$ with regard to the membership of node $i$ to community $c$ (if node $i$ belongs to the community, then $z_{i}^{c} = \\frac{1}{\\sqrt{2}}$; otherwise, 0.) and optimizes node embeddings such that directly linked nodes have similar embeddings and unconnected nodes are dissimilar.\nAfter generating the node embeddings, the link information between node $i$ and $d$ communities is quantified for further anomaly detection analysis.\nFor a given node $i$, such information is represented as:\n\\begin{equation} \\label{eq:hu:nbi}\n \\overline{NB(i)} = (y_{i}^{1},...,y_{i}^{d}) = \\mathop{\\sum}\\limits_{j\\in NB(i)}(1- \\|Z_{i} - Z_{j}\\|)\\cdot Z_{j},\n\\end{equation}\nwhere $NB(i)$ comprises node $i$'s neighbors.\nIf $i$ has many links with community $c$, then the value in the corresponding dimension $y_{i}^{c}$ will be large.\nIn the last step, Hu \\etal~ formulate a scoring function to assign anomalousness scores, calculated as:\n\\begin{equation} \\label{eq:hu:as}\n AScore(i) = \\mathop{\\sum}\\limits_{k=1}^{d}\\frac{y_{i}^{k}}{y_{i}^{*}}, y_{i}^{*}=\\max\\{y_{i}^{1},...,y_{i}^{d}\\}.\n\\end{equation}\nAs expected, structural anomalies receive higher scores as they connect to different communities.\nIndeed, given a predefined threshold, nodes with above-threshold scores are identified as anomalies.\nTo date, many plain network representation methods such as Deepwalk~, Node2Vec~ and LINE~ have shown their effectiveness in generating node representations and been used for anomaly detection performance validation~.\nBy pairing the 
conventional anomaly detection techniques such as density-based techniques~ and distance-based techniques~ with node embedding techniques, anomalous nodes can be identified with regard to their distinguishable locations (\ie low-density areas or locations far away from the majority) in the embedding space.
The success of reinforcement learning (RL) in tackling real-world decision making problems has attracted substantial interest from the anomaly detection community. Detecting anomalous nodes can be naturally regarded as a problem of deciding which class a node belongs to: anomalous or benign. As a special scenario of the general selective harvesting task, the anomalous node detection problem can be approached by a recent work in~ that intuitively combines reinforcement learning and network embedding techniques for selective harvesting. The proposed model, NAC, is trained with labeled data without any human intervention. Specifically, it first selects a seed network consisting of partially observed nodes and edges. Then, starting from the seed network, NAC adopts reinforcement learning to learn a node selection plan such that anomalous nodes in the undiscovered area can be identified. This is achieved by rewarding selection plans that choose labeled anomalies with higher gains.
Through offline training, NAC learns an optimal/suboptimal anomalous node selection strategy and discovers potential anomalies in the unexplored part of the graph step by step.
\label{lb:node:sag}
In addition to the structural information, real-world networks also contain rich attribute information affiliated with nodes~. These attributes provide complementary information about real objects and, together with the graph structure, enable the detection of more non-trivial hidden anomalies. 
\begin{figure*}[!t]
 \setlength{\belowcaptionskip}{-0.25cm} 
 \centerline{\includegraphics[width=0.98\textwidth]{pics/Node/GCN.pdf}}
 \caption{ANOS ND on attributed graphs -- GCN based approaches. Node representations are generated through GCN layers. Anomalies are then detected according to their reconstruction loss (\ding{172}) or embedding distribution in the embedding space (\ding{173}).}
 \label{pic:GCNbased_framework}
\end{figure*}
For clarity, we distinguish between deep neural networks and graph neural networks in this survey. 
We review deep neural network (Deep NN) based techniques, GCN based techniques, and reinforcement learning based techniques for ANOS ND as follows.
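Before turning to these techniques, the intuition that attributes and structure complement each other can be illustrated with a toy score (hypothetical and for intuition only, not one of the reviewed methods): a node whose attributes deviate from those of its graph neighbors is a candidate hidden anomaly, even if its attributes look unremarkable globally:

```python
def neighbor_deviation_scores(A, X):
    # For each node, measure the Euclidean distance between its attribute
    # vector and the mean attribute vector of its neighbors; isolated
    # nodes receive a score of 0 here for simplicity.
    n, k = len(X), len(X[0])
    scores = []
    for i in range(n):
        nbrs = [j for j in range(n) if A[i][j]]
        if not nbrs:
            scores.append(0.0)
            continue
        mean_attr = [sum(X[j][d] for j in nbrs) / len(nbrs) for d in range(k)]
        scores.append(sum((X[i][d] - mean_attr[d]) ** 2 for d in range(k)) ** 0.5)
    return scores

# Path graph 0-1-2; node 2 hides a deviating attribute.
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
X = [[0.0], [0.0], [5.0]]
scores = neighbor_deviation_scores(A, X)
```

Node 2 receives the largest score because its attribute differs most from its neighborhood, a signal that neither the structure nor the attributes alone would reveal.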
\nDue to page limitations, other existing works including traditional non-deep learning techniques, GAT~ based techniques, GAN based techniques, and network representation based techniques are surveyed in Appendix~\\ref{appendix:node:static}.", "id": "9b59f8e2-fd33-4bf3-bf31-9d82848223fa", "level": "subsection", "origin_cites_number": 3, "parent_id": "59eed45d-0f6f-4983-a1d0-48c2fb48a606", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Anomalous node detection (ANOS ND)" ], [ "subsection", "ANOS ND on Attributed Graphs" ] ], "subsections": [ "9201dd97-4956-4c75-840c-212d3ec5e5ae", "73bf642c-6661-4550-b563-f83087e0c906", "be74c2e6-73e6-42d6-9154-063b0fce612a" ], "title": "ANOS ND on Attributed Graphs" }, { "cite_extract_rate": 0, "cites": [], "content": "The deep learning models such as autoencoder and deep neural networks provide solid basis for learning data representations. \nAdopting these models for more effective anomalous node detection have drawn substantial interest recently.\nFor example, Bandyopadhyay \\etal~ developed an unsupervised deep model, DONE, to detect global anomalies, structural anomalies and community anomalies in attributed graphs.\nSpecifically, this work measures three anomaly scores for each node that indicate the likelihood of the situations where 1) it has similar attributes with nodes in different communities ($o_i^a$); or 2) it connects with other communities ($o_i^s$); or 3) it belongs to one community structurally but the attributes follow the pattern of another community ($o_i^{com}$).\nIf a particular node exhibits any of these characteristics, then it is assigned a higher score and is anomalous.\nTo acquire these scores, DONE adopts two separate autoencoders (AE), \\ie a structure AE and an attribute AE, as shown in Fig.~\\ref{pic:conventional_autoencoder_framework}. 
\nBoth are trained by minimizing the reconstruction errors and preserving the homophily that assumes connected nodes have similar representations in the graph. \nWhen training the AEs, nodes exhibiting the predefined characteristics are hard to reconstruct and therefore introduce more reconstruction errors because their structure or attribute patterns do not conform to the standard behavior.\nHence, the adverse impact of anomalies should be alleviated to achieve the minimized error.\nAccordingly, DONE specially designs an anomaly-aware loss function with five terms: $\\mathcal{L}_{str}^{Recs}$, $\\mathcal{L}_{attr}^{Recs}$, $\\mathcal{L}_{str}^{Hom}$, $\\mathcal{L}_{attr}^{Hom}$, and $\\mathcal{L}^{Com}$.\n$\\mathcal{L}_{str}^{Resc}$ and $\\mathcal{L}_{attr}^{Resc}$ are the structure reconstruction error and attribute reconstruction error that can be written as: \n\\begin{equation} \\label{DONE:ls1}\n \\begin{split}\n \\mathcal{L}_{str}^{Recs} = \\frac{1}{N}\\sum_{i=1}^{N}\\log(\\frac{1}{o_{i}^{s}})\\|\\mathbf{t}_{i} - \\mathbf{\\hat{t}}_{i}\\|_{2}^{2},\n \\end{split}\n\\end{equation}\nand \n\\begin{equation} \\label{DONE:ls2}\n \\begin{split}\n \\mathcal{L}_{attr}^{Recs} \\frac{1}{N}\\sum_{i=1}^{N}\\log(\\frac{1}{o_{i}^{a}})\\|\\mathbf{x}_{i} - \\mathbf{\\hat{x}}_{i}\\|_{2}^{2},\n \\end{split}\n\\end{equation}\nwhere $N$ is the number of nodes, $\\mathbf{t}_{i}$ and $\\mathbf{x}_{i}$ store the structure information and attributes of node $i$, $\\mathbf{\\hat{t}}_{i}$ and $\\mathbf{\\hat{x}}_{i}$ are the reconstructed vectors. 
\n$\\mathcal{L}_{str}^{Hom}$ and $\\mathcal{L}_{attr}^{Hom}$ are proposed to maintain the homophily and they are formulated as:\n\\begin{equation} \\label{DONE:ls3}\n \\begin{split}\n \\mathcal{L}_{str}^{Hom} = \\frac{1}{N}\\sum_{i=1}^{N}\\log(\\frac{1}{o_{i}^{s}})\\frac{1}{|N(i)|}\\sum_{j\\in N(i)}||\\mathbf{h}_{i}^{s} - \\mathbf{h}_{j}^{s}||_{2}^{2},\n \\end{split}\n\\end{equation}\nand \n\\begin{equation} \\label{DONE:ls4}\n \\begin{split}\n \\mathcal{L}_{attr}^{Hom} = \\frac{1}{N}\\sum_{i=1}^{N}\\log(\\frac{1}{o_{i}^{a}})\\frac{1}{|N(i)|}\\sum_{j\\in N(i)}||\\mathbf{h}_{i}^{a} - \\mathbf{h}_{j}^{a}||_{2}^{2},\n \\end{split}\n\\end{equation}\nwhere $\\mathbf{h}_{i}^{s}$ and $\\mathbf{h}_{i}^{a}$ are the learned latent representations from the structure AE and attribute AE, respectively.\n$\\mathcal{L}^{Com}$ poses further restrictions on the generated representations for each node by the two AEs such that the graph structure and node attributes complement each other. It is formulated as:\n\\begin{equation} \\label{DONE:ls5}\n \\begin{split}\n \\mathcal{L}^{Com} = \\frac{1}{N}\\sum_{i=1}^{N}\\log(\\frac{1}{o_{i}^{com}})||\\mathbf{h}_{i}^{s} - \\mathbf{h}_{i}^{a}||_{2}^{2},\n \\end{split}\n\\end{equation}\nBy minimizing the sum of these loss functions, the anomaly scores of each node are quantified, and the top-k nodes with higher scores are identified as anomalies.", "id": "9201dd97-4956-4c75-840c-212d3ec5e5ae", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "9b59f8e2-fd33-4bf3-bf31-9d82848223fa", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Anomalous node detection (ANOS ND)" ], [ "subsection", "ANOS ND on Attributed Graphs" ], [ "subsubsection", "Deep NN Based Techniques" ] ], "subsections": [], "title": "Deep NN Based Techniques" }, { "cite_extract_rate": 0.22222222222222202, "cites": [ 6283, 6284 ], "content": "\\label{sec:AND:GCN}\nGraph convolutional neural 
networks (GCNs)~ have achieved remarkable success in many graph data mining tasks (\eg link prediction, node classification, and recommendation) owing to their capability of capturing comprehensive information in the graph structure and node attributes.
Therefore, many anomalous node detection techniques have started to investigate GCNs. 
Fig.~\ref{pic:GCNbased_framework} illustrates a general framework of existing works in this line.
In~, Ding \etal measured an anomaly score for each node using the network reconstruction errors of both the structure and attributes.
The proposed method, DOMINANT, comprises three parts, namely, the graph convolutional encoder, the structure reconstruction decoder, and the attribute reconstruction decoder. The graph convolutional encoder generates node embeddings through multiple graph convolutional layers. The structure reconstruction decoder aims to reconstruct the network structure from the learned node embeddings, while the attribute reconstruction decoder reconstructs the node attribute matrix.
The whole neural network is trained to minimize the following loss function:
\begin{equation} \label{eq:ad}
    \begin{split}
    \mathcal{L}_{DOMINANT} &= (1-\alpha)\mathcal{R}_{S} + \alpha \mathcal{R}_{A}\\
    &= (1-\alpha)||A - \hat{A}||_{F}^{2} + \alpha||X - \hat{X}||_{F}^{2},
    \end{split}
\end{equation}
where $\alpha$ is a balancing coefficient, $A$ denotes the adjacency matrix of the graph, and $\mathcal{R}_{S}$ and $\mathcal{R}_{A}$ quantify the reconstruction errors with regard to the graph structure and node attributes, respectively.
When the training is finished, an anomaly score is assigned to each node according to its contribution to the total reconstruction error, which is calculated by:
\begin{equation} \label{eq:nodescore}
    \textit{score}(i) 
    = (1-\alpha)||\mathbf{a}_{i} - \mathbf{\hat{a}}_{i}||_{2} + \alpha||\mathbf{x}_{i} - \mathbf{\hat{x}}_{i}||_{2},
\end{equation}
where $\mathbf{a}_{i}$ and
$\mathbf{x}_{i}$ are the structure vector and attribute vector of node $i$, $\mathbf{\hat{a}}_{i}$ and $\mathbf{\hat{x}}_{i}$ are their corresponding reconstructed vectors. 
The nodes are then ranked according to their anomaly scores in descending order, and the top-k nodes are recognized as anomalies.
To enhance the performance of anomalous node detection, later work by Peng \etal~ further explores node attributes from multiple attributed views to detect anomalies.
The multiple attributed views are employed to describe different perspectives of the objects~.
For example, in online social networks, users' demographic information and posted contents are two different attributed views, and they characterize the personal information and social activities, respectively.
The underlying intuition of investigating different views is that anomalies might appear to be normal in one view but abnormal in another view.
For the purpose of capturing these signals, the proposed method, ALARM, applies multiple GCNs to encode information in different views and adopts a weighted aggregation of them to generate node representations.
This model's training strategy is similar to DOMINANT~ in that it aims to minimize the network reconstruction loss and attribute reconstruction loss and can be formulated as:
\begin{equation} \label{eq:alarmloss}
\begin{split}
\mathcal{L}_{ALARM} = & \sum_{i=1}^n \sum_{j=1}^n - [\gamma A_{ij}\log\hat{A}_{ij} + (1-A_{ij})\log(1-\hat{A}_{ij})] \\
 & + ||X - \tilde{X}||_{F}^{2},
\end{split}
\end{equation}
where $\gamma$ is a coefficient to balance the errors, $A_{ij}$ is the element at coordinate $(i,j)$ in the adjacency matrix $A$, $\hat{A}_{ij}$ is the corresponding element in the reconstructed adjacency matrix $\hat{A}$, $X$ is the original node feature matrix, and $\tilde{X}$ is the reconstructed node feature matrix.
Lastly, ALARM adopts the same scoring function as~, and nodes with top-k highest scores are
anomalous.
Instead of spotting unexpected nodes using their reconstruction errors, Li \etal~ proposed SpecAE to detect global anomalies and community anomalies via a density estimation approach, the Gaussian Mixture Model (GMM).
Global anomalies can be identified by only considering the node attributes. 
For community anomalies, the structure and attributes need to be jointly considered, because such anomalies have attributes that deviate from those of their neighbors.
Accordingly, SpecAE employs a graph convolutional encoder to learn node representations and reconstructs the nodal attributes through a deconvolution decoder.
The parameters of the GMM are then estimated using the node representations. 
Due to their deviating attribute patterns, global and community anomalies are expected to exhibit greater energies, \ie lower probabilities, in the GMM, and the k nodes with the lowest probabilities are deemed to be anomalies.
In~, Wang \etal developed a novel detection model that identifies fraudsters using their relations and features. Their proposed method, Fdgars, first models online users' reviews and visited items as their features, and then identifies a small portion of significant fraudsters based on these features. In the last step, a GCN is trained in a semi-supervised manner by using the user-user network, user features, and labeled users. After training, the model can directly label unseen users.
A more recent work, GraphRfi~, also explores the potential of combining anomaly detection with other downstream graph analysis tasks.
It leverages anomaly detection to identify malicious users and provides more accurate recommendations to benign users by alleviating the impact of these untrustworthy users. 
Specifically, a GCN framework is deployed to encode users and items into a shared embedding space for recommendation, and users are classified as fraudsters or normal users through an additional neural random forest using their embeddings.
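The encode--decode--score recipe shared by the reconstruction-based detectors above (Eq.~\ref{eq:ad} and Eq.~\ref{eq:nodescore}) can be sketched with numpy. The single graph convolutional layer, the random (untrained) weights, and the toy graph below are illustrative assumptions rather than any paper's implementation, so the resulting scores are arbitrary until the model is actually trained:

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def anomaly_scores(A, X, W_enc, W_dec, alpha=0.5):
    A_norm = normalize_adj(A)
    Z = np.tanh(A_norm @ X @ W_enc)          # graph convolutional encoder
    A_rec = 1.0 / (1.0 + np.exp(-Z @ Z.T))   # structure decoder: sigmoid(Z Z^T)
    X_rec = np.tanh(A_norm @ Z @ W_dec)      # attribute decoder
    # score(i) = (1-alpha)*||a_i - a_rec_i||_2 + alpha*||x_i - x_rec_i||_2
    return ((1 - alpha) * np.linalg.norm(A - A_rec, axis=1)
            + alpha * np.linalg.norm(X - X_rec, axis=1))

rng = np.random.default_rng(1)
n, d, h = 6, 4, 3
A = np.zeros((n, n))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 2)]:
    A[u, v] = A[v, u] = 1.0
X = rng.normal(size=(n, d))
scores = anomaly_scores(A, X, rng.normal(size=(d, h)), rng.normal(size=(h, d)))
top_k = np.argsort(-scores)[:2]  # the two highest-scoring nodes are flagged
```

In practice the encoder and decoders would first be trained to minimize Eq.~\ref{eq:ad}; only then does the per-node score become a meaningful anomaly signal.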
\nFor rating prediction between users and items, the framework reduces the corresponding impact of suspicious users by assigning less weights to their training loss. \nAt the same time, the rating behavior of users also provides auxiliary information for fraudster detection. \nThe mutually beneficial relationship between these two applications (anomaly detection and recommendation) indicates the potential of information sharing among multiple graph learning tasks.", "id": "73bf642c-6661-4550-b563-f83087e0c906", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "9b59f8e2-fd33-4bf3-bf31-9d82848223fa", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Anomalous node detection (ANOS ND)" ], [ "subsection", "ANOS ND on Attributed Graphs" ], [ "subsubsection", "GCN Based Techniques" ] ], "subsections": [], "title": "GCN Based Techniques" }, { "cite_extract_rate": 0, "cites": [], "content": "In contrast to NAC, Ding \\etal~ investigated to the use of reinforcement learning for anomalous node detection in attributed graphs.\nTheir proposed algorithm, GraphUCB, models both attribute information and structural information, and inherits the merits of the contextual multi-armed bandit technology~ to output potential anomalies.\nBy grouping nodes into $k$ clusters based on their features, GraphUCB forms a $k$-armed bandit model and measures the payoff of selecting a specific node as a potential anomaly for expert evaluation.\nWith experts' feedback on the predicted anomalies, the decision-making strategy is continuously optimized.\nEventually, the most potential anomalies can be selected. \n\\begin{figure*}[!t]\n\\setlength{\\belowcaptionskip}{-0.25cm}\n\\centerline{\\includegraphics[width=0.98\\textwidth]{pics/Edge/UGED.pdf}}\n\\caption{ANOS ED on static graphs -- Deep NN based approaches. 
For example, the detection technique employs the autoencoder and fully connected network to learn the presence probability of each edge. Anomaly scores are assigned with regard to the probabilities, and the top-k edges are anomalous (\eg $E_{8,9}$, $E_{1,3}$, and $E_{2,4}$ at top-3).}
\label{pic:UGED}
\end{figure*}
\label{sec:node:dg}
Real-world networks can be modeled as dynamic graphs to represent evolving objects and the relationships among them.
In addition to structural information and node attributes, dynamic graphs also contain rich temporal signals~, \eg the evolving patterns of the graph structure and node attributes. 
On the one hand, this information inherently makes anomalous node detection on dynamic graphs more challenging, because dynamic graphs usually introduce a large volume of data whose temporal signals must also be captured for anomaly detection.
On the other hand, it could provide more details about anomalies~. 
In fact, some anomalies might appear to be normal in the graph snapshot at each time stamp, and, only when the changes in a graph's structure are considered, do they become noticeable.
In this section, we review the network representation based techniques and GAN based techniques as follows. 
Relevant techniques from traditional non-deep learning approaches are reviewed in Appendix~\\ref{appendix:node:dynamic}.\n\\begin{table*}[!t]\n\\centering\n\\footnotesize\n\\renewcommand\\arraystretch{1}\n\\setlength{\\tabcolsep}{2.8mm}\n\\caption{Summary of Anomalous Node Detection Techniques.}\n\\resizebox{0.96\\textwidth}{!}{\n\\begin{tabular}{m{2.1cm}<{\\centering}|m{2.3cm}<{\\centering}|m{1.1cm}<{\\centering}|m{4cm}<{\\centering}|m{1.8cm}<{\\centering}|m{2.9cm}<{\\centering}}\n\\toprule[1 pt]\n\\textbf{Graph Type} & \\textbf{Approach} & \\textbf{Category} & \\textbf{Objective Function} & \\textbf{Measurement} & \\textbf{Outputs} \\\\ \n\\midrule[1 pt]\n\\multirow{3}{*}{\\shortstack{Static Graph - \\\\ Plain}} & ~ & NR & $\\mathop{\\sum}\\limits_{(i,j)\\in E} \\|\\mathbf{Z_{i}} - \\mathbf{Z_{j}}\\|^{2} + \\alpha \\mathop{\\sum}\\limits_{(i,j)\\notin E}(\\|\\mathbf{Z_{i}} - \\mathbf{Z_{j}}\\| -1)^{2}$ & Anomaly Score & $ \\mathop{\\sum}\\limits_{k=1}^{d}\\frac{y_{i}^{k}}{y_{i}^{*}}$ \\\\ \\cline{2-6}\n& DCI~ & NR &$\\frac{1}{K}\\mathop{\\sum}\\limits_{k=1}^K\\mathcal{L}_{DCI}^{k}$ & Anomaly Prediction & Predicted Label \\\\\\cline{2-6}\n&NAC~ & RL & Cumulative reward & - & Anomalies \\\\ \n\\midrule[1 pt]\n\\multirow{24}{*}{\\shortstack{Static Graph - \\\\ Attributed}} &ALAD~ & Non-DP & $\\min\\limits_{W,H} \\|A-WW^{T}\\|_{F}^{2} + \\alpha \\|X-WH\\|_{F}^{2} + \\gamma (\\|W\\|_{F}^{2} + \\|H\\|_{F}^{2})$ & Anomaly Score & $\\frac{\\mathbf{W}_{n,c}}{\\sum_{c}\\mathbf{W}_{n,c}} cos(\\mathbf{A}_{n*}, \\mathbf{H}_{c*})$ \\\\ \\cline{2-6}\n&Radar~ & Non-DP & $\\min\\limits_{W,R}\\|X - W^{T} X - R\\|_{F}^{2}+ \\alpha \\|W\\|_{2,1} +\\beta \\|R\\|_{2,1} +\\gamma tr(R^{T}LR)$ & Residual Analysis & Residual Value \\\\ \\cline{2-6}\n&ANOMALOUS~ & Non-DP & $\\min\\limits_{W,\\tilde{R}}||X - XWX - \\tilde{R}||_{F}^{2} + \\alpha ||W||_{2,1} + \\beta ||W^{T}||_{2,1} + \\gamma ||\\tilde{R}^{T}||_{2,1} + \\varphi tr(\\tilde{R}L\\tilde{R}^{T})$ & Residual Analysis & Residual 
Value \\\\ \\cline{2-6}\n&SGASD~ & Non-DP & $\\min\\limits_{\\mathbf{w,c}}\\frac{1}{2}\\sum_{i=1}^{m}\\mathbf{c_{i}}(V_{i,*}\\mathbf{w} - y_i)^2 + \\frac{\\lambda_{1}}{2}||\\mathbf{w}||_{2}^{2} + \\lambda_{2} \\sum_{i=0}^{d}\\sum_{j=1}^{n_{i}}||\\mathbf{c}_{G_{j}^{i}}||_{2}$ & Anomaly Prediction & Predicted Label \\\\ \\cline{2-6}\n&DONE~ & DNN & $\\alpha_{1}\\mathcal{L}_{str}^{Recs} + \\alpha_{2}\\mathcal{L}_{attr}^{Recs} + \\alpha_{3}\\mathcal{L}_{str}^{Hom} + \\alpha_{4}\\mathcal{L}_{attr}^{Hom} + \\alpha_{5}\\mathcal{L}^{Com}$ & Anomaly Scores & $o_{i}^{s},o_{i}^{a},o_{i}^{com}$ \\\\ \\cline{2-6}\n&DOMINANT~ & GCN & $(1-\\alpha)\\mathcal{R}_{S} + \\alpha \\mathcal{R}_{A}$ & Anomaly Score & $(1-\\alpha)||\\mathbf{a}_{i} - \\mathbf{\\hat{a}}_{i}||_{2} + \\alpha||\\mathbf{x}_{i} - \\mathbf{\\hat{x}}_{i}||_{2}$ \\\\ \\cline{2-6}\n&ALARM~ & GCN & $\\sum_{i=1}^n \\sum_{j=1}^n - [\\gamma A_{ij}\\log\\hat{A}_{ij} + (1-A_{ij})\\log(1-\\hat{A}_{ij})] + \\mathcal{L}_a$ & Anomaly Score & $(1-\\alpha)||\\mathbf{a}_{i} - \\mathbf{\\hat{a}}_{i}||^2_{2} + \\alpha||\\mathbf{x}_{i} - \\mathbf{\\hat{x}}_{i}||^2_{2}$ \\\\ \\cline{2-6}\n&SpecAE~ & GCN & $\\mathbb{E}[dis(X,\\hat{X})] + \\mathbb{E}[dis(X,\\tilde{X})] + \\lambda_{1}\\mathbb{E}(E(Z)) + \\lambda_{2}KL$ & Density Estimation & Anomalousness Rank \\\\ \\cline{2-6}\n&Fdgars~ & GCN & $\\mathcal{L}_{GCN}$ & Anomaly Prediction & Predicted Label \\\\ \\cline{2-6}\n&GraphRfi~ & GCN & $\\mathcal{L}_{rating} + \\lambda \\mathcal{L}_{fraudster}$ & Anomaly Prediction & Predicted Label \\\\ \\cline{2-6}\n&ResGCN~ & GCN & $(1-\\alpha)||A - \\hat{A}||^2_{F} + \\alpha||X - \\hat{X} - \\lambda R||^2_{F}$ & Anomaly Score & $||R_{i,:}||_{2}$ \\\\ \\cline{2-6}\n&GraphUCB~ & RL & Expert Judgment & - & Anomalies \\\\ \\cline{2-6}\n&AnomalyDAE~ & GAT & $\\alpha ||(A - \\hat{A})\\odot \\bm{\\theta}||_{F}^{2} + (1-\\alpha)||(X - \\hat{X})\\odot \\bm{\\eta}||_{F}^{2}$ & Reconstruction Loss & Anomalousness Rank \\\\ \\cline{2-6}\n&SemiGNN~ & GAT & 
$\\alpha \\mathcal{L}_{sup} + (1- \\alpha) \\mathcal{L}_{unsup} + \\lambda \\mathcal{L}_{reg}$ & Anomaly Prediction & Predicted Label \\\\ \\cline{2-6}\n&AEGIS~ & GAN & $\\mathcal{L}_{AE} + \\mathcal{L}_{GAN}$ & Anomaly Score & $1 - D(\\mathbf{z}_i)$ \\\\ \\cline{2-6}\n&REMAD~ & NR & $\\mathcal{L}_{res} + \\beta\\| R^{T}\\|_{2,1}$ & Residual Analysis & Residual Value \\\\ \\cline{2-6}\n&CARE-GNN~ & NR & $\\mathcal{L}_{GNN} + \\lambda_{1}\\mathcal{L}_{Simi}^{(1)} + \\lambda_{2}\\mathcal{L}_{reg}$ & Anomaly Prediction & Predicted Label \\\\ \\cline{2-6}\n&SEANO~ & NR & $-\\sum_{i \\in V_{L}} \\log p(y_{i}|\\mathbf{x}_{i},\\Bar{\\mathbf{x}}_{N_{i}}) - \\sum_{i \\in V}\\sum_{v\\prime \\in C_i} \\log p(v\\prime|\\mathbf{x}_{i},\\Bar{\\mathbf{x}}_{N_{i}})$ & Anomaly Score & Discriminator's Output \\\\ \\cline{2-6}\n&OCGNN~ & NR & $\\frac{1}{\\beta K}\\sum\\limits_{v_{i}\\in\\mathbf{V}_{tr}}[||g(X,A;\\mathcal{W})_{v_i}-c||^2 -r^2]^{+}+ r^2 +\\frac{\\lambda}{2}\\sum\\limits_{l=1}^{L}||W^{(l)}||^2$ & Location in Embedding Space & Distance to Hypersphere Center \\\\ \\cline{2-6}\n&GAL~ & NR & $\\max\\{0,\\max\\limits_{y_{v^{\\prime}} \\neq y_{u}}g(u,v^\\prime) - \\min\\limits_{y_{v} = y_{u}}g(u,v) + \\Delta_{y_u}\\}$ & Anomaly Prediction & Predicted Label \\\\ \\cline{2-6}\n&CoLA~& NR & $-\\sum\\limits_{i=1}^{N}y_{i}\\log(CLM(v_i,\\mathcal{G}_{i})) + (1-y_i)\\log(1-CLM(v_i,\\mathcal{G}_{i}))$ & Anomaly Score & $\\frac{\\sum\\limits_{r=1}^{R}(s_{i,r}^{(-)}-s_{i,r}^{(+)})}{R}$ \\\\ \\cline{2-6}\n&COMMANDER~ & NR & $-\\mathcal{L}_{D} + \\mathcal{L}_{C} + \\mathcal{L}_{R}$ & Anomaly Score & $\\bar{y_i}||\\mathbf{\\tilde{x}_i} - \\mathbf{x}_i||_2^2$ \\\\ \\cline{2-6}\n&FRAUDRE~& NR & $\\sum\\limits_{i=1}^{n}f^{*}(y_i,\\mathbf{h}_{i}^{(final)}\\mathbf{W}_2)$ & Anomaly Prediction & Predicted Label \\\\ \\cline{2-6}\n&Meta-GDN~ & NR & $(1-y_i)\\cdot|dev(v_i)| + y_i\\cdot\\max(0,dev(v_i))$ & Anomaly Score & $\\mathbf{u}_{s}^{T}\\mathbf{o}_{i} + b_s$ \\\\ \n\\midrule[1 pt]\nDynamic 
Graph - Plain & NetWalk~ & DNN & $\\gamma\\mathcal{L}_{AE} + \\mathcal{L}_{Clique} + \\lambda\\|W\\|_{F}^{2} + \\beta KL$ & Anomaly Score & Nearest Distance to Cluster Centers \\\\ [2ex]\n\\midrule[1 pt]\n\\multirow{2}{*}{\\shortstack{Dynamic Graph - \\\\ Attributed}} &MTHL~ & Non-DP & $\\mathop{min}_{\\mathcal{P}}f(\\mathcal{P})$ & Anomaly Score & Distance to Hypersphere Centroid \\\\ \\cline{2-6}\n&OCAN~ & GAN & $\\mathcal{L}_{LSTM-AE} + \\mathcal{L}_{GAN}$ & Anomaly Score & Discriminator's Output \\\\\n\\bottomrule[1 pt]\n\\multicolumn{6}{l}{* Non-DP: Non-Deep Learning Techniques, DNN: Deep NN Based Techniques, GCN: GCN Based Techniques, RL: Reinforcement Learning Based Techniques.} \\\\\n\\multicolumn{6}{l}{* GAT: GAT Based Techniques, NR: Network Representation Based Techniques, GAN: Generative Adversarial Network Based Techniques.}\n\\end{tabular}\n}\n\\label{table:ANOSND}\n\\end{table*}\n\\begin{figure*}[!t]\n\\setlength{\\belowcaptionskip}{-0.25cm} \n\\centerline{\\includegraphics[width=0.98\\textwidth]{pics/Edge/AddGraph.pdf}}\n\\caption{ANOS ED on dynamic graphs -- GCN based approaches. GCN is employed to learn node embeddings from the temporal graph at each timestamp. The attention-based GRU generates the current hidden state using the node embeddings and previous hidden states. 
The edge scoring function, such as an FCN, is learned to assign anomaly scores, and the top-k edges are reported as anomalies.}
\label{pic:Dynamic_Edge_Comparison}
\end{figure*}
\label{sec:ND:NR}
Following the research line of first encoding a graph into an embedding space and then performing anomaly detection, dynamic network representation techniques have been investigated in more recent works.
Specifically, in~, Yu \etal presented a flexible deep representation technique, called NetWalk, for detecting anomalous nodes in dynamic (plain) graphs using only the structure information.
It adopts an autoencoder to learn node representations on the initial graph and incrementally updates them when new edges are added or existing edges are deleted.
To detect anomalies, NetWalk first executes the streaming $k$-means clustering algorithm~ to group existing nodes in the current time stamp into different clusters. 
Then, each node's anomaly score is measured with regard to its closest distance to the $k$ clusters. 
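This cluster-distance scoring can be sketched as follows; plain Lloyd iterations and randomly placed toy embeddings stand in for NetWalk's streaming $k$-means and learned representations:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy node embeddings: two dense "normal" clusters plus one isolated node.
Z = np.vstack([rng.normal(0, 0.3, size=(20, 2)),
               rng.normal(5, 0.3, size=(20, 2)),
               [[2.5, 2.5]]])

# Plain Lloyd iterations stand in for the streaming k-means (k = 2);
# initial centers are picked deterministically, one from each cluster.
centers = Z[[0, 20]].astype(float)
for _ in range(10):
    dists = np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centers = np.array([Z[labels == c].mean(axis=0) for c in range(2)])

# Anomaly score of each node: distance to its closest cluster center.
scores = np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=2).min(axis=1)
anomaly = int(scores.argmax())  # the isolated node receives the largest score
```

The same scoring applies unchanged as the graph evolves: only the embeddings and centers need to be refreshed.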
\nWhen the node representations are updated, the cluster centers and anomaly scores are recalculated accordingly.", "id": "0af4432d-d88f-455d-b4ca-b342865e37f7", "level": "subsection", "origin_cites_number": 2, "parent_id": "12f2fc41-29d6-4189-b62f-ff980f20a825", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "ANOS ND on Dynamic Graphs" ], [ "subsection", "Network Representation Based Techniques" ] ], "subsections": [], "title": "Network Representation Based Techniques" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 2626, 6285, 6292 ], "content": "\\label{sec:ND:GAN}\nIn practice, anomaly detection is facing great challenges from the shortage of ground-truth anomalies.\nConsequently, many research efforts have been invested in modeling the features of anomalies or regular objects such that anomalies can be identified effectively.\nAmong these techniques, generative adversarial networks (GAN)~ have received extensive attention because of its impressive performance in capturing real data distribution and generating simulated data.\nMotivated by the recent advances in ``bad\" GAN~, Zheng \\etal~ circumvented the fraudster detection problem using only the observed benign users' attributes.\nThe basic idea is to seize the normal activity patterns and detect anomalies that behave significantly differently.\nThe proposed method, OCAN, starts by extracting the benign users' content features using their historical social behaviors (\\eg historical posts, posts' URL), for which this method is classified into the dynamic category.\nA long short-term memory (LSTM) based autoencoder~ is employed to achieve this and as assumed, benign users and malicious users are in separate regions in the feature space.\nNext, a novel one-class adversarial net comprising a generator and a discriminator is trained.\nSpecifically, the generator produces complementary data points that locate in the relatively low 
density areas of benign users.
The discriminator, accordingly, aims to distinguish the generated samples from the benign users. 
After training, the benign users' regions are learned by the discriminator, and anomalies can hence be identified with regard to their locations.
Both NetWalk~ and OCAN~ approach the anomalous node detection problem promisingly; however, each considers only the structure or only the attributes.
Given the success of static graph anomaly detection techniques that analyze both aspects, an enhanced detection performance can be foreseen when the structure and attribute information in dynamic graphs are jointly considered.
We therefore highlight this unexplored area for future work in Section~\ref{sec:futures}.
\label{sec:edge:static}
In contrast to anomalous node detection, which targets individual nodes, ANOS ED aims to identify abnormal links.
These links often signal unexpected or unusual relationships between real objects~, such as the abnormal interactions between fraudsters and benign users shown in Fig.~\ref{Toy}, or suspicious interactions between attacker nodes and benign user machines in computer networks. Following the previous taxonomy, in this section, we review the state-of-the-art ANOS ED methods for static graphs, and Section~\ref{sec:edge:dynamic} summarizes the techniques for dynamic graphs. A summary is provided in Table~\ref{tb:edgeandsubgraph}. This section includes methods based on deep NNs, GCNs and network representations. 
The non-deep learning techniques are reviewed in Appendix~\ref{appendix:edge}.
Similar to deep NN based ANOS ND techniques, autoencoders and fully connected networks (FCNs) have also been used for anomalous edge detection.
As an example, Ouyang \etal~ approached the problem by modeling the distribution of edges through deep models to identify the existing edges that are least likely to appear as anomalies (as shown in Fig.~\ref{pic:UGED}). The probability of each edge $e_{u,v}$ is determined by $P(v|u,N(u))$ and $P(u|v,N(v))$, which measure the edge probability using node $u$ with its neighbors $N(u)$ and node $v$ with its neighbors $N(v)$, respectively.
To calculate $P(v|u,N(u))$, the proposed method, UGED, first encodes each node into a lower-dimensional vector through an FCN layer and generates node $u$'s representation by a mean aggregation of itself and its neighbors' vectors.
Next, the node representations are fed into another FCN to estimate $P(v|u,N(u))$.
The prediction is expressed as $\hat{P}(v|u,N(u)) = \text{Softmax}(W\cdot H(u))|_v$, where $W$ represents the trainable parameters, and $H(u)$ is $u$'s representation.
UGED's training scheme aims to maximize the predicted probability of existing edges via a cross-entropy-based loss function, $\text{CE}(\hat{P}(v|u,N(u)), v)$.
After training, an anomaly score is assigned to each edge using the average of $1-P(v|u,N(u))$ and $1-P(u|v,N(v))$. 
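A minimal numpy sketch of this scoring step follows; the mean aggregation, the single softmax output layer, and the random (untrained) weights are simplifying assumptions, so the scores below are arbitrary rather than trained estimates:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def edge_prob(u, v, adj, H, W):
    """P(v | u, N(u)): mean-aggregate u and its neighbors, softmax over all nodes."""
    nbrs = np.flatnonzero(adj[u])
    h_u = np.mean(H[np.append(nbrs, u)], axis=0)
    return softmax(W @ h_u)[v]

rng = np.random.default_rng(3)
n, d = 6, 4
adj = np.zeros((n, n), dtype=int)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 2)]:
    adj[u, v] = adj[v, u] = 1
H = rng.normal(size=(n, d))  # node representations (untrained stand-ins)
W = rng.normal(size=(n, d))  # output layer mapping to a distribution over nodes

# Anomaly score of edge (u, v): average of 1 - P(v|u,N(u)) and 1 - P(u|v,N(v)).
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if adj[u, v]]
scores = {e: 1 - 0.5 * (edge_prob(e[0], e[1], adj, H, W)
                        + edge_prob(e[1], e[0], adj, H, W)) for e in edges}
```

After the cross-entropy training described above, edges assigned low probability under the learned model would receive scores near 1 and surface at the top of the ranking.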
\nAs such, existing edges that have a lower probability will get higher scores and the top-k edges are reported as anomalous.", "id": "509789f2-b5ef-4288-92b5-0d64443ed4af", "level": "subsection", "origin_cites_number": 1, "parent_id": "678bd7af-5bf2-44f6-ae32-df413829eb80", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Anomalous edge detection (ANOS ED) " ], [ "subsection", "Deep NN Based Techniques" ] ], "subsections": [], "title": "Deep NN Based Techniques" }, { "cite_extract_rate": 0, "cites": [], "content": "Following the line of modeling edge distributions, some studies leverage GCNs to better capture the graph structure information.\nDuan \\etal~ demonstrated that the existence of anomalous edges in the training data prevents traditional GCN based models from capturing real edge distributions, which leads to sub-optimal detection performance.\nThis inherently raises a problem: to achieve better detection performance, the node embedding process should alleviate the negative impact of anomalous edges, but these edges are detected using the learned embeddings.\nTo tackle this, the proposed method, AANE, jointly considers these two issues by iteratively updating the embeddings and detection results during training.\nIn each training iteration, AANE generates node embeddings $Z$ through GCN layers and learns an indicator matrix $I$ to spot potential anomalous edges.\nGiven an input graph $G$ with adjacency matrix $A$, each term $I_{uv}$ in $I$ is 1 if $\\hat{A}_{uv} < \\text{mean}_{v^{\\prime}\\in N_{u}} \\hat{A}_{uv^{\\prime}} - \\mu\\cdot\\text{std}_{v^{\\prime}\\in N_{u}} \\hat{A}_{uv^{\\prime}}$, and 0 otherwise.\nHere, $\\hat{A}_{uv}$ is the predicted link probability between nodes $u$ and $v$, which is calculated as the hyperbolic tangent of $u$ and $v$'s embeddings, and $\\mu$ is a predefined threshold.\nBy this, an edge $uv$ is identified as anomalous when its predicted 
probability falls below the average predicted probability of all links associated with node $u$ by more than $\mu$ times their standard deviation.
The total loss function of AANE contains two parts: an anomaly-aware loss ($\mathcal{L}_{aal}$) and an adjusted fitting loss ($\mathcal{L}_{afl}$). 
$\mathcal{L}_{aal}$ is proposed to penalize the link prediction results and the indicator matrix $I$ such that anomalous edges will have lower prediction probabilities when they are marked as 1 in $I$. This is formulated as:
\begin{equation}
    \mathcal{L}_{aal} = \sqrt{\sum_{u\in V} \sum_{v\in N(u)}((1-\hat{A}^2_{uv})(1-I_{uv}) + \hat{A}^2_{uv}I_{uv})},
\end{equation}
where $V$ is the node set, $N(u)$ is the set of $u$'s neighbors.
$\mathcal{L}_{afl}$ quantifies the reconstruction loss with regard to the removal of potential anomalous edges, denoted as:
\begin{equation}
    \mathcal{L}_{afl} = \| B - \hat{A}\|^{2}_{2},
\end{equation}
where $B$ is an adjusted adjacency matrix that removes all predicted anomalies from the input adjacency matrix $A$.
By minimizing these two losses, AANE identifies the top-k edges with the lowest probabilities as anomalies.
Instead of using node embeddings for ANOS ED, edge representations learned directly from the graph are also feasible for distinguishing anomalies.
If the edge representations well-preserve the graph structure and interaction content (\eg messages in online social networks, co-authored papers in citation networks) between pairs of nodes, an enhanced detection performance can then be 
expected.\nTo date, several studies, such as Xu \\etal~, have shown promising results in generating edge representations.\nAlthough they are not specifically designed for graph anomaly detection, they pinpoint a potential approach to ANOS ED. This is highlighted as a potential future direction in Section~\\ref{sec:future:ED}.", "id": "231d1246-99ce-4de5-9270-25f30b2ceb5d", "level": "subsection", "origin_cites_number": 1, "parent_id": "678bd7af-5bf2-44f6-ae32-df413829eb80", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Anomalous edge detection (ANOS ED) " ], [ "subsection", "Network Representation Based Techniques" ] ], "subsections": [], "title": "Network Representation Based Techniques" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:edge:dynamic}\n\\begin{figure*}[!t]\n\\setlength{\\belowcaptionskip}{-0.25cm}\n\\centerline{\\includegraphics[width=0.98\\textwidth]{pics/subgraph/SGD_Static.pdf}}\n\\caption{ANOS SGD. Real-world networks are usually represented as bipartite graphs for reflecting the interactions between two different types of objects. To detect ANOS SGD, source nodes and sink nodes are embedded using two autoencoders (linked by a shared loss function), respectively. 
Anomalous sub-graphs are identified by applying dense region detection algorithms in the embedding space.}\n\\label{pic:Staic_SG}\n\\end{figure*}\nDynamic graphs are powerful in reflecting the appearance/disappearance of edges over time~.\nAnomalous edges can be distinguished by modeling the changes in graph structure and capturing the edge distributions at each time step.\nRecent approaches to ANOS ED on dynamic graphs are reviewed in this section.", "id": "78c71045-f884-4e7f-b2a8-d7769a5bc16b", "level": "section", "origin_cites_number": 1, "parent_id": "5020eb09-4f12-4d1a-8dae-076499c8173a", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "ANOS ED on Dynamic Graphs" ] ], "subsections": [ "af8f0ff9-09a3-46b3-96d8-edda541a9029", "946969f4-52c1-445f-99a0-7e162c350a60" ], "title": "ANOS ED on Dynamic Graphs" }, { "cite_extract_rate": 0, "cites": [], "content": "The intuition of network representation based techniques is to encode the dynamic graph structure information into edge representations and apply the aforementioned traditional anomaly detection techniques to spot irregular edges.\nThis is quite straightforward, but there remain vital challenges in generating/updating informative edge representations when the graph structure evolves.\nTo mitigate this challenge, the ANOS ND model NetWalk~ is also capable of detecting anomalous edges in dynamic graphs. 
\nFollowing the line of distance-based anomaly detection, NetWalk encodes edges into a shared latent space using node embeddings, and anomalies are identified based on their distances to the nearest edge-cluster centers in the latent space.\nPractically, Netwalk generates edge representations as the Hadamard product of the source and destination nodes' representations, denoted as: $\\mathbf{z}_{u,v} = \\mathbf{z}_u \\odot \\mathbf{z}_v$.\nWhen new edges arrive or existing edges disappear, the node and edge representations are updated from random walks in the temporary graphs at each time stamp, after which the edge-cluster centers and edge anomaly scores are recalculated. \nFinally, the top-k farthest edges to the edge-clusters are reported as anomalies.", "id": "af8f0ff9-09a3-46b3-96d8-edda541a9029", "level": "subsection", "origin_cites_number": 1, "parent_id": "78c71045-f884-4e7f-b2a8-d7769a5bc16b", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "ANOS ED on Dynamic Graphs" ], [ "subsection", "Network Representation Based Techniques" ] ], "subsections": [], "title": "Network Representation Based Techniques" }, { "cite_extract_rate": 0, "cites": [], "content": "Although NetWalk is capable of detecting anomalies in dynamic graphs, it simply updates edge representations without considering the evolving patterns of long/short-term nodes and the graph's structure.\nFor more effective ANOS ED, Zheng \\etal~ intuitively combined temporal, structural and attribute information to measure the anomalousness of edges in dynamic graphs. 
\nThey propose a semi-supervised model, AddGraph, which comprises a GCN and Gated Recurrent Units (GRU) with attention~ to capture more representative structural information from the temporal graph at each time stamp and the dependencies between time stamps, respectively.\nAt each time stamp $t$, the GCN takes the output hidden state ($H^{t-1}$) at time $t-1$ to generate node embeddings, after which the GRU learns the current hidden state $H^{t}$ from the node embeddings and attention over previous hidden states (as shown in Fig.~\\ref{pic:Dynamic_Edge_Comparison}).\nAfter obtaining the hidden state $H^{t}$ of all nodes, AddGraph assigns an anomaly score to each edge in the temporal graph based on the nodes associated with it. The proposed anomaly scoring function is formulated as:\n\\begin{equation}\\label{eq:AddGraph:scoring}\n    f(u,v,w) = w\\cdot\\sigma(\\beta\\cdot (||\\mathbf{a}\\odot \\mathbf{h}_{u} + \\mathbf{b}\\odot \\mathbf{h}_{v}||^{2}_{2} - \\mu)),\n\\end{equation}\nwhere $u$ and $v$ are the corresponding nodes, $w$ is the weight of the edge, $\\mathbf{a}$ and $\\mathbf{b}$ are trainable parameters, $\\beta$ and $\\mu$ are hyper-parameters, and $\\sigma(\\cdot)$ is the non-linear activation function.\nTo learn $\\mathbf{a}$ and $\\mathbf{b}$, Zheng \\etal further assumed that all existing edges in the dynamic graph are normal in the training stage, and sampled non-existing edges as anomalies. 
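The anomaly scoring function above can be written out directly. The numpy sketch below uses placeholder hidden states and illustrative values for $\beta$ and $\mu$; in AddGraph, $\mathbf{a}$ and $\mathbf{b}$ are learned rather than sampled at random.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def edge_score(h_u, h_v, w, a, b, beta=1.0, mu=0.5):
    # f(u, v, w) = w * sigma(beta * (||a (.) h_u + b (.) h_v||_2^2 - mu))
    dist = float(np.sum((a * h_u + b * h_v) ** 2))
    return w * sigmoid(beta * (dist - mu))

rng = np.random.default_rng(1)
d = 8
a, b = rng.normal(size=d), rng.normal(size=d)      # trainable in AddGraph
h_u, h_v = rng.normal(size=d), rng.normal(size=d)  # GRU hidden states (stand-ins)
score = edge_score(h_u, h_v, w=1.0, a=a, b=b)
```

Note that the sigmoid keeps the score of a unit-weight edge in (0, 1), and the score scales linearly with the edge weight $w$.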
Specifically, they form the loss function as:\n\\begin{equation}\\label{eq:AddGraph:loss}\n \\begin{split}\n \\mathcal{L}_{AddGraph} = &\\min\\sum_{{(u,v,w)\\in \\varepsilon^{t}}}\\sum_{{(u^{\\prime},v^{\\prime},w)\\notin \\varepsilon^{t}}} \\\\\n & \\max\\{0, \\gamma + f(u,v,w) - f(u^{\\prime},v^{\\prime},w)\\} + \\lambda\\mathcal{L}_{reg},\n \\end{split}\n\\end{equation}\nwhere $\\varepsilon^{t}$ is the edge set, $(u^{\\prime},v^{\\prime})$ are sampled non-existing edges at time stamp $t$, $\\lambda$ is a hyper-parameter, and $\\mathcal{L}_{reg}$ regularizes all trainable parameters in the model. \nAfter training, the scoring function identifies anomalous edges in the test data by assigning higher anomaly scores to them based on Eq.~\\ref{eq:AddGraph:scoring}.", "id": "946969f4-52c1-445f-99a0-7e162c350a60", "level": "subsection", "origin_cites_number": 2, "parent_id": "78c71045-f884-4e7f-b2a8-d7769a5bc16b", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "ANOS ED on Dynamic Graphs" ], [ "subsection", "GCN Based Techniques" ] ], "subsections": [], "title": "GCN Based Techniques" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:subgraph}\nIn real life, anomalies might also collude and behave collectively with others to garner benefits.\nFor instance, fraudulent user groups in an online review network, as shown in Fig.~\\ref{Toy}, may post misleading reviews to promote or besmirch certain merchandise. When these data are represented as graphs, anomalies and their interactions usually form suspicious sub-graphs, and ANOS SGD is proposed to distinguish them from the benign.\n\\begin{figure*}[!t]\n\\setlength{\\belowcaptionskip}{-0.25cm}\n\\centerline{\\includegraphics[width=0.98\\textwidth]{pics/graph/graph.pdf}}\n\\caption{ANOS GD on Graph Database. Generally, the graph-level anomaly detection techniques take a graph database as input. 
By generating embeddings/hidden states for each single graph in the database through D(G)NNs, anomalous graphs can be identified by one-class classifiers or an MLP.}\n\\label{pic:graph_level}\n\\end{figure*}\nUnlike individual and independent graph anomalies, \\ie single nodes or edges, each node and edge in a suspicious sub-graph might be normal.\nHowever, when considered as a collection, they turn out to be anomalous.\nMoreover, these sub-graphs also vary in size and inner structure, making anomalous sub-graph detection more challenging than ANOS ND/ED~.\nAlthough extensive effort has been devoted to this problem, deep learning techniques have only begun to address it in the last five years.\nFor reference, traditional non-deep learning based techniques are briefly introduced in Appendix~\\ref{appendix:subgraph}, and a summary of techniques reviewed for ANOS SGD is provided in Table~\\ref{tb:edgeandsubgraph} at the end of Section~\\ref{sec:anosgd:dynamic}.\nDue to the flexibility of heterogeneous graphs in representing the complex relationships between different kinds of real objects, several recent works have taken advantage of deep network representation techniques to detect real-world anomalies through ANOS SGD.\nFor instance, Wang \\etal~ represented online shopping networks as bipartite graphs (a specific type of heterogeneous graph that has two types of nodes and one type of edge), in which users are source nodes and items are sink nodes.\nFraudulent groups are then detected based on suspicious dense blocks that form in these graphs.\nWang \\etal~ aimed to learn anomaly-aware representations of users such that suspicious users in the same group will be located close together in the vector space, while benign users will be far away (as shown in the embedding space in Fig.~\\ref{pic:Staic_SG}). 
\nAccording to the observation that user nodes belonging to one fraudulent group are more likely to connect with the same item nodes, the developed model, DeepFD, measures the similarity in the behavior of two users, $sim_{ij}$, as the percentage of items shared among all the items they have reviewed.\nUser representations are then generated through a traditional autoencoder, which follows the standard encoding-decoding process and is trained with three losses.\nThe first loss is the reconstruction loss $\\mathcal{L}_{res}$ that ensures the bipartite graph structure can be reconstructed properly using the learned user representations and item representations.\nThe second term $\\mathcal{L}_{sim}$ preserves the user similarity information in the learned user representations. That is, if two users have similar behaviors, their representations should also be similar. This loss is formulated as:\n\\begin{equation}\\label{eq:DeepFD:simloss}\n    \\mathcal{L}_{sim} = \\sum_{i,j=1}^{m} sim_{ij} \\cdot \\| \\widehat{sim}_{ij} - sim_{ij} \\|_{2}^{2},\n\\end{equation}\nwhere $m$ is the number of user nodes, and $\\widehat{sim}_{ij}$ measures the similarity of user $i$ and $j$'s representations using an RBF kernel or a similar alternative.\nThe third loss $\\mathcal{L}_{reg}$ regularizes all trainable parameters.\nFinally, the suspicious dense blocks, which are expected to form dense regions in the vector space, are detected using DBSCAN~.\nAnother work, FraudNE~, also models online review networks as bipartite graphs and further detects both malicious users and associated manipulated items following the dense block detection principle.\nUnlike DeepFD, FraudNE aspires to encode both types of nodes into a shared latent space where suspicious users and items belonging to the same dense block are very close to each other while the others are distributed uniformly (as shown in Fig.~\\ref{pic:Staic_SG}).\nFraudNE adopts two traditional autoencoders, namely, a source node autoencoder and a sink node 
autoencoder, to learn user representations and item representations, respectively.\nBoth autoencoders are trained to jointly minimize their corresponding reconstruction losses and a shared loss function, and the total loss can be formulated as:\n\\begin{equation}\n    \\mathcal{L}_{FraudNE} = \\mathcal{L}_{res}^{source} + \\mathcal{L}_{res}^{sink} + \\alpha \\mathcal{L}_{share} + \\eta \\mathcal{L}_{reg},\n\\end{equation}\nwhere $\\alpha$ and $\\eta$ are hyperparameters, and $\\mathcal{L}_{reg}$ regularizes all trainable parameters.\nSpecifically, the reconstruction losses (\\ie $\\mathcal{L}_{res}^{source}$ and $\\mathcal{L}_{res}^{sink}$) measure the gap between the input user/item features (extracted from the graph structure) and their decoded features. \nThe shared loss function restricts the representation learning process such that each linked pair of users and items gets similar representations.\nAs the DBSCAN~ algorithm is convenient to apply for dense region detection, FraudNE also uses it to distinguish the dense sub-graphs formed by suspicious users and items.\nTo date, only a few works have applied deep learning techniques to ANOS SGD. However, with intensifying research interest in sub-graph representation learning, we encourage more studies on ANOS SGD and highlight this as a potential future direction in Section~\\ref{sec:future:ED}.\n\\begin{figure*}[!t]\n\\setlength{\\belowcaptionskip}{-0.25cm}\n    \\centerline{\\includegraphics[width=0.98\\textwidth]{pics/graph/dynamic.pdf}}\n    \\caption{ANOS GD on dynamic graphs. For each graph snapshot in the dynamic graph, the LSTM autoencoder generates its hidden state using its adjacency matrix and previous hidden states. 
Through hypersphere learning, a hypersphere with centroid $a$ and radius $r$ is learned such that anomalous snapshots lie outside.}\n \\label{pic:GDdynamic}\n\\end{figure*}", "id": "5da2d684-654a-4048-95ec-2ae406b319b2", "level": "section", "origin_cites_number": 4, "parent_id": "5020eb09-4f12-4d1a-8dae-076499c8173a", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Anomalous sub-graph detection (ANOS SGD)" ] ], "subsections": [], "title": "Anomalous sub-graph detection (ANOS SGD)" }, { "cite_extract_rate": 0.2, "cites": [ 8992 ], "content": "\\label{sec:anosgd:db}\nBeyond anomalous nodes, edges, and sub-graphs, graph anomalies might also appear as abnormal graphs in a set/database of graphs.\nTypically, a graph database is defined as:\n\\textbf{\\textit{Definition 4 (Graph Database)}}. A graph database $\\mathcal{G} = \\{ G_i = (V_i,E_i,X_v(i), X_e(i)) \\}^N_{i=1}$ contains $N$ individual graphs. Here, each graph $G_i$ is comprised of a node set $V_i$ and an edge set $E_i$. 
$X_v(i)$ and $X_e(i)$ are the node attribute matrix and edge attribute matrix of $G_i$ if it is an attributed graph.\nThis graph-level ANOS GD aims to detect individual graphs that deviate significantly from the others.\nA concrete example of ANOS GD is unusual molecule detection.\nWhen chemical compounds are represented as molecular/chemical graphs where the atoms and bonds are represented as nodes and edges~, unusual molecules can be identified because their corresponding graphs have structures and/or features that deviate from the others.\nBrain disorder detection is another example.\nA brain disorder can be diagnosed by analyzing the dynamics of brain graphs at different stages of aging in sequence and finding an inconsistent snapshot at a specific time stamp.\nThe previously reviewed techniques (\\ie ANOS ND/ED/SGD) are not compatible with ANOS GD because they are dedicated to detecting anomalies in a single graph, whereas ANOS GD is directed at detecting graph-level anomalies.\nThis problem is commonly approached by: 1) measuring the pairwise proximities of graphs using graph kernels~; 2) detecting the appearance of anomalous graph signals created by abnormal groups of nodes~; or 3) encoding graphs using frequent motifs~.\nHowever, none of these methods are deep learning-based.\nAt the time of writing, very few studies on ANOS GD with deep learning have been undertaken. 
As such, this is highlighted as a potential future direction in Section~\\ref{sec:future:ED}.", "id": "9ba15010-a3db-4588-8e7f-56a7def289f5", "level": "section", "origin_cites_number": 5, "parent_id": "5020eb09-4f12-4d1a-8dae-076499c8173a", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Anomalous graph detection (ANOS GD)" ] ], "subsections": [ "49385046-24ac-4a52-8807-655e25c7942e", "c5bcea84-0572-429e-a5e7-37efbed81759" ], "title": "Anomalous graph detection (ANOS GD)" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 8149 ], "content": "Motivated by the success of GNNs in various graph classification tasks, the most recent works in ANOS GD employ GNNs to classify single graphs as normal/abnormal in the given graph database. Specifically, Dou \\etal~ transformed fake news detection into an ANOS GD problem by modeling news as tree-structured propagation graphs where the root nodes denote pieces of news, and child nodes denote users who interact with the root news. Their end-to-end framework, UPFD, extracts two embeddings for the news piece and users, respectively, via a text embedding model (e.g. word2vec, BERT) and a user engagement embedding process. For each news graph, its latent representation is a flattened concatenation of these two embeddings, which is input to train a neural classifier with the label of the news. 
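The classification step just described — flattening and concatenating the news embedding with the user-engagement embedding and training a classifier on the news labels — can be sketched as follows. The dimensions, random inputs, and the single logistic layer are illustrative stand-ins, not UPFD's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(2)
news_emb = rng.normal(size=(64, 32))    # text embedding per news piece
user_emb = rng.normal(size=(64, 32))    # user-engagement embedding per graph
labels = rng.integers(0, 2, size=64)    # 1 = fake, 0 = real

# Graph-level representation: flattened concatenation of the two embeddings
x = np.concatenate([news_emb, user_emb], axis=1)

# A single logistic layer stands in for the neural classifier
w, bias = np.zeros(x.shape[1]), 0.0
for _ in range(200):                    # plain gradient descent on cross-entropy
    logits = np.clip(x @ w + bias, -30, 30)
    p = 1.0 / (1.0 + np.exp(-logits))
    w -= 0.1 * (x.T @ (p - labels) / len(labels))
    bias -= 0.1 * float((p - labels).mean())

pred = (1.0 / (1.0 + np.exp(-np.clip(x @ w + bias, -30, 30))) > 0.5).astype(int)
# Propagation graphs predicted as fake (label 1) are flagged as anomalous
anomalous = np.where(pred == 1)[0]
```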
\nPropagation graphs that the trained model labels as fake are regarded as anomalous.\nAnother representative work by Zhao and Akoglu~ employed a GIN model and a one-class classification (\\ie DeepSVDD~) loss to train a graph-level anomaly detection framework in an end-to-end manner.\nFor each individual graph in the graph database, its graph-level embedding is generated by applying mean-pooling over its nodes' node-level embeddings.\nA graph is eventually flagged as anomalous if it lies outside the learned hypersphere, as shown in Fig.~\\ref{pic:graph_level}.", "id": "49385046-24ac-4a52-8807-655e25c7942e", "level": "subsection", "origin_cites_number": 3, "parent_id": "9ba15010-a3db-4588-8e7f-56a7def289f5", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Anomalous graph detection (ANOS GD)" ], [ "subsection", "GNN Based Techniques" ] ], "subsections": [], "title": "GNN Based Techniques" }, { "cite_extract_rate": 0.5, "cites": [ 3953 ], "content": "It is also possible to apply general graph-level network representation techniques to ANOS GD.\nWith these methods, the detection problem is transformed into a conventional outlier detection problem in the embedding space.\nIn contrast to D(G)NN based techniques that can detect graph anomalies in an end-to-end manner, adopting these representation techniques for anomaly detection is two-staged.\nFirst, graphs in the database are encoded into a shared latent space using graph-level representation techniques, such as Graph2Vec~ or FGSD~.\nThen, the anomalousness of each single graph is measured by an off-the-shelf outlier detector.\nEssentially, this kind of approach pairs existing methods in both stages; however, the two stages are disconnected from each other, so the detection performance can be subpar because the embedding similarities are not necessarily designed with anomaly detection in mind.", "id": 
"c5bcea84-0572-429e-a5e7-37efbed81759", "level": "subsection", "origin_cites_number": 2, "parent_id": "9ba15010-a3db-4588-8e7f-56a7def289f5", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Anomalous graph detection (ANOS GD)" ], [ "subsection", "Network Representation Based Techniques" ] ], "subsections": [], "title": "Network Representation Based Techniques" }, { "cite_extract_rate": 0.412698412698412, "cites": [ 8993, 6285, 6281, 6302, 6297, 3333, 6283, 6303, 6296, 6289, 3700, 6294, 6304, 6299, 6271, 6301, 1420, 6277, 6295, 6300, 6293, 5645, 8149, 6291, 6286, 6298 ], "content": "\\label{sec:anosgd:dynamic}\nFor dynamic graph environments, graph-level anomaly detection endeavors to identify abnormal graph snapshots/temporal graphs. Similar to ANOS ND and ED on dynamic graphs, given a sequence of graphs, anomalous graphs can be distinguished regarding their unusual evolving patterns, abnormal graph-level features, or other characteristics.\n\\begin{table*}[!h]\n\\centering \n\\footnotesize\n\\renewcommand\\arraystretch{1}\n\\setlength{\\tabcolsep}{2.8mm}\n\\caption{Summary of Anomalous Edge, Sub-graph and Graph Detection Techniques.} \n\\resizebox{0.96\\textwidth}{!}{\n\\begin{tabular}{m{2.1cm}<{\\centering}|m{2.3cm}<{\\centering}|m{1.1cm}<{\\centering}|m{4cm}<{\\centering}|m{1.8cm}<{\\centering}|m{2.9cm}<{\\centering}}\n\\toprule[1 pt]\n\\textbf{Graph Type} & \\textbf{Approach} & \\textbf{Category} & \\textbf{Objective Function} & \\textbf{Measurement} & \\textbf{Outputs} \\\\ \n\\midrule[1 pt]\n\\multicolumn{6}{c}{Anomalous Edge Detection Techniques} \\\\\n\\midrule[1 pt]\n\\multirow{2}{*}{\\shortstack{Static Graph - \\\\ Plain}} &UGED~ & DNN & $\\text{cross-entropy}(f(u,N(u)), v)$ & Anomaly Score & $\\text{mean}(1-P(v|u,N(u)), 1-P(u|v,N(v)))$ \\\\ \\cline{2-6}\n&AANE~ & GCN & $\\mathcal{L} = \\mathcal{L}_{afl} + \\gamma\\mathcal{L}_{aal}$ & Anomaly Ranking & Edge Existing Probability \\\\ 
\n\\midrule[1 pt]\nStatic Graph - Attributed &eFraudCom~ & NR & $\\mathcal{L}_{MAIN} - \\lambda\\mathcal{L}_{MI}$ & Anomaly Prediction & Predicted Label \\\\ \n\\midrule[1 pt]\nDynamic Graph - Plain & NetWalk~ & NR & $\\gamma\\mathcal{L}_{AE} + \\mathcal{L}_{Clique} + \\lambda\\|W\\|_{F}^{2} + \\beta KL$ & Anomaly Score & Nearest Distance to Cluster Centers \\\\\n\\midrule[1 pt]\nDynamic Graph - Attributed & AddGraph~ & GCN & $\\text{min} \\sum_{{e\\in \\varepsilon^{t}}}\\sum_{{e^{\\prime}\\notin \\varepsilon^{t}}} \\text{max}\\{0, \\gamma + f(i,j,w) - f(i^{\\prime},j^{\\prime},w)\\} + \\lambda\\mathcal{L}_{reg}$ & Anomaly Score & $f(i,j,w) = w\\cdot\\sigma(\\beta\\cdot (||\\mathbf{a} \\odot \\mathbf{h}_{i} + \\mathbf{b} \\odot \\mathbf{h}_{j}||^{2}_{2} - \\mu))$ \\\\ \n\\midrule[1 pt]\n\\midrule[1 pt]\n\\multicolumn{6}{c}{Anomalous Sub-graph Detection Techniques} \\\\\n\\midrule[1 pt]\n\\multirow{2}{*}{\\shortstack{Static Graph - \\\\ Plain}} &DeepFD~ & NR & $\\mathcal{L}_{recon} +\\alpha \\mathcal{L}_{sim} + \\gamma \\mathcal{L}_{reg}$& Density-based Method (DBSCAN) & Dense sub-graphs \\\\ \\cline{2-6}\n&FraudNE~ & NR & $\\mathcal{L}_{res}^{source} + \\mathcal{L}_{res}^{sink} + \\alpha \\mathcal{L}_{share} + \\eta \\mathcal{L}_{reg}$ & Density-based Method (DBSCAN) & Dense sub-graphs \\\\\n\\midrule[1 pt]\n\\midrule[1 pt]\n\\multicolumn{6}{c}{Anomalous Graph Detection Techniques} \\\\\n\\midrule[1 pt]\n\\multirow{2}{*}{\\shortstack{Graph Database - \\\\ Attributed}}&UPFD~ & NR & $ -(y\\log(p) + (1-y)\\log(1-p))$ & Anomaly Prediction & Predicted Label \\\\ \\cline{2-6}\n& OCGIN~ & GNN & $ \\min\\limits_{W}\\frac{1}{N}\\sum\\limits_{i=1}^{N}||GIN(G_i,W)-c||^2 + \\frac{\\lambda}{2}\\sum\\limits_{l=1}^{L}||W^l||^2_{F}$ & Location in Embedding Space & Distance to Hypersphere Center \\\\\n\\midrule[1 pt]\n\\multirow{2}{*}{\\shortstack{Dynamic Graph - \\\\ Plain}} & DeepSphere~ & DNN & $\\mathcal{L} = \\mathcal{L}_{h} + \\lambda\\mathcal{L}_{res}$ & Location in 
Embedding Space & Anomalous Label \\\\ \\cline{2-6}\n& GLAD-PAW~ & GNN & $\\text{cross-entropy}(\\mathbf{y},\\mathbf{\\hat{y}})$ & Anomaly Prediction & Predicted Label \\\\ \n\\bottomrule[1 pt]\n\\multicolumn{6}{l}{* DNN: Deep NN Based techniques, GCN: GCN Based Techniques, NR: Network Representation Based Techniques.}\\\\\n\\multicolumn{6}{l}{* GNN: Graph Neural Network Based Techniques.}\n\\end{tabular}\n\\label{tb:edgeandsubgraph}\n}\n\\end{table*}\nTo derive the characteristics of each graph snapshot/temporal graph, commonly used architectures such as GNNs, LSTMs and autoencoders can be applied. For instance, Teng \\etal~ applied an LSTM autoencoder to detect abnormal graph snapshots, as shown in Fig.~\\ref{pic:GDdynamic}.\nIn their proposed model, DeepSphere, a dynamic graph is described as a collection of third-order tensors, $\\{\\mathcal{X}_k, k=1,2...\\}$ where each $\\mathcal{X} \\in \\mathcal{R}^{N \\times N \\times T}$, and the slices along the time dimension are the adjacency matrices of graph snapshots.\nTo identify abnormal tensors, DeepSphere first embeds each graph snapshot into a latent space using an LSTM autoencoder, and then leverages a one-class classification objective~ that learns a hypersphere such that normal snapshots are covered, and anomalous snapshots lie outside.\nThe LSTM autoencoder takes the adjacency matrices as input sequentially and attempts to reconstruct these input matrices through training.\nThe hypersphere is learned through a single neural network layer and its objective function is formulated as:\n\\begin{equation} \\label{eq:ds:hsp}\n    \\mathcal{L}_{h} = r^{2} + \\gamma\\sum_{k=1}^{m} \\epsilon_{k} + \\frac{1}{m}\\sum_{k=1}^{m} \\|\\textbf{z}_{k} - \\textbf{a}\\|^{2},\n\\end{equation}\nwhere $\\textbf{z}_{k}$ is the latent representation generated by the LSTM autoencoder, $\\textbf{a}$ is the centroid of the hypersphere, $r$ is the radius, $\\epsilon_{k}$ is the outlier penalty ($\\epsilon_{k} = \\|\\textbf{z}_{k} - 
\\textbf{a}\\|^{2} - r^2$), $m$ is the number of training graph snapshots, and $\\gamma$ is a hyperparameter. \nThe overall objective function of DeepSphere is represented as:\n\\begin{equation} \\label{eq:ds:obj}\n \\mathcal{L} = \\mathcal{L}_{h} + \\lambda\\mathcal{L}_{res},\n\\end{equation}\nwhere $\\mathcal{L}_{res}$ is the reconstruction loss of the LSTM autoencoder.\nWhen the training is finished, DeepSphere spots a given unseen data $\\mathcal{X}$ as anomalous if its embedding lies outside the learned hypersphere with a radius of $r$.\nIn addition to all ANOS ND, ED, SGD, and GD techniques reviewed above, it is worth mentioning that perturbed graphs, which adversarial models generate to attack graph classification algorithms or GNNs~, can also be regarded as (intensional) anomalies. In a perturbed graph, the nodes and edges are modified deliberately to deviate from the others. We have not reviewed these in this survey because their main purpose is to attack a GNN model. The key idea behind these methods is the attacking/perturbation strategy, and studies in this sphere seldom focus on a detection or reasoning module to identify the perturbed graph or its sub-structures, \\ie anomalous nodes, edges, sub-graphs, or graphs.\n\\begin{table*}[!t]\n\\centering \n\\footnotesize\n\\renewcommand\\arraystretch{1}\n\\setlength{\\tabcolsep}{2.8mm}\n\\caption{Published Algorithms and Models} \n\\resizebox{0.96\\textwidth}{!}{\n\\begin{tabular}{c|c|c|c|l}\n\\toprule[1 pt]\n\\textbf{Model} & \\textbf{Language} & \\textbf{Platform} & \\textbf{Graph} & \\textbf{Code Repository} \\\\ \n\\midrule[1 pt]\nAnomalyDAE~ & Python & Tensorflow & Static Attributed Graph & https://github.com/haoyfan/AnomalyDAE \\\\ \\hline\nMADAN~ & Python & - & Static Attributed Graph & https://github.com/leoguti85/MADAN \\\\ \\hline\nPAICAN~ & Python & Tensorflow & Static Attributed Graph & http://www.kdd.in.tum.de/PAICAN/ \\\\ \\hline\nONE~ & Python & - & Static Attributed Graph & 
https://github.com/sambaranban/ONE \\\\ \\hline\nDONE\\&AdONE~ & Python & Tensorflow & Static Attributed Graph & https://bit.ly/35A2xHs \\\\ \\hline\nSLICENDICE~ & Python & - & Static Attributed Graph & http://github.com/hamedn/SliceNDice/ \\\\ \\hline\nFRAUDRE~ & Python & Pytorch & Static Attributed Graph & https://github.com/FraudDetection/FRAUDRE \\\\ \\hline\nSemiGNN~ & Python & Tensorflow & Static Attributed Graph & https://github.com/safe-graph/DGFraud \\\\ \\hline\nCARE-GNN~ & Python & Pytorch & Static Attributed Graph & https://github.com/YingtongDou/CARE-GNN \\\\ \\hline\nGraphConsis~ & Python & Tensorflow & Static Attributed Graph & https://github.com/safe-graph/DGFraud \\\\ \\hline\nGLOD~ & Python & Pytorch & Static Attributed Graph & https://github.com/LingxiaoShawn/GLOD-Issues \\\\ \\hline\nOCAN~ & Python & Tensorflow & Static Graph &https://github.com/PanpanZheng/OCAN \\\\ \\hline\nDeFrauder~ & Python & - & Static Graph & https://github.com/LCS2-IIITD/DeFrauder \\\\ \\hline\nGCAN~ & Python & Keras & Heterogeneous Graph & https://github.com/l852888/GCAN \\\\ \\hline\nHGATRD~ & Python & Pytorch & Heterogeneous Graph & https://github.com/201518018629031/HGATRD \\\\\\hline\nGLAN~ & Python & Pytorch & Heterogeneous Graph & https://github.com/chunyuanY/RumorDetection \\\\ \\hline\nGEM~ & Python & - & Heterogeneous Graph & https://github.com/safe-graph/DGFraud/tree/master/algorithms/GEM \\\\ \\hline\neFraudCom~ & Python & Pytorch & Heterogeneous Graph & https://github.com/GeZhangMQ/eFraudCom \\\\ \\hline\nDeepFD~ & Python & Pytorch & Bipartite Graph & https://github.com/JiaWu-Repository/DeepFD-pyTorch \\\\ \\hline\nANOMRANK~ & C++ & - & Dynamic Graph & https://github.com/minjiyoon/anomrank \\\\ \\hline\nMIDAS~ & C++ & - & Dynamic Graph & https://github.com/Stream-AD/MIDAS \\\\ \\hline\nSedanspot~ & C++ & - & Dynamic Graph & https://www.github.com/dhivyaeswaran/sedanspot \\\\ \\hline\nF-FADE~ & Python & Pytorch & Dynamic Graph & 
http://snap.stanford.edu/f-fade/ \\\\ \\hline\nDeepSphere~ & Python & Tensorflow & Dynamic Graph & https://github.com/picsolab/DeepSphere \\\\ \\hline\nChangedar~ & Matlab & - & Dynamic Graph & https://bhooi.github.io/changedar/ \\\\\\hline\nUPFD~ & Python & Pytorch & Graph Database & https://github.com/safe-graph/GNN-FakeNews \\\\ \\hline\nOCGIN~ & Python & Pytorch & Graph Database & https://github.com/LingxiaoShawn/GLOD-Issues \\\\ \\hline\nDAGMM~ & Python & Pytorch & Non Graph & https://github.com/danieltan07/dagmm \\\\ \\hline\nDevNet~ & Python & Tensorflow & Non Graph &https://github.com/GuansongPang/deviation-network \\\\ \\hline\nRDA~ & Python & Tensorflow & Non Graph &https://github.com/zc8340311/RobustAutoencoder \\\\ \\hline\nGAD~ & Python & Tensorflow & Non Graph &https://github.com/raghavchalapathy/gad \\\\ \\hline\nDeep SAD~ &Python & Pytorch & Non Graph & https://github.com/lukasruff/Deep-SAD-PyTorch \\\\ \\hline\nDATE~ &Python & Pytorch &Non Graph & https://github.com/Roytsai27/Dual-Attentive-Tree-aware-Embedding \\\\ \\hline\nSTS-NN~ & Python & Pytorch & Non Graph & https://github.com/JiaWu-Repository/STS-NN \\\\\n\\bottomrule[1 pt]\n\\multicolumn{5}{l}{* -: No Dedicated Platforms.}\n\\end{tabular}\n\\label{tb:publishedalgs}\n}\n\\end{table*}\n\\begin{table*}[!t]\n\\centering \n\\footnotesize\n\\renewcommand\\arraystretch{1}\n\\setlength{\\tabcolsep}{2.8mm}\n\\caption{Published Datasets.}\n\\resizebox{0.96\\textwidth}{!}{\n\\begin{tabular}{m{1.6cm}<{\\centering}|m{1.4cm}<{\\centering}|m{0.5cm}<{\\centering}|m{0.5cm}<{\\centering}|m{0.5cm}<{\\centering}|m{0.5cm}<{\\centering}|m{0.5cm}<{\\centering}|m{2cm}<{\\centering}|l}\n\\toprule[1 pt]\n\\textbf{Category} & \\textbf{Dataset} & \\textbf{\\#G} & \\textbf{\\#N} & \\textbf{\\#E} & \\textbf{\\#FT} & \\textbf{\\#AN} & \\textbf{REF} & \\textbf{URL} \\\\ \n\\midrule[1 pt]\n\\multirow{5}{*}{\\shortstack{Citation \\\\ Networks}} \n& ACM & 1 & 16K & 71K & 8.3K & - & 
&\\text{http://www.arnetminer.org/open-academic-graph} \\\\ \\cline{2-9}\n& Cora & 1 & 2.7K & 5.2K & 1.4K & - & &http://linqs.cs.umd.edu/projects/projects/lbc \\\\ \\cline{2-9}\n& Citeseer & 1& 3.3K & 4.7K & 3.7K & - & & http://linqs.cs.umd.edu/projects/projects/lbc \\\\ \\cline{2-9}\n& Pubmed & 1& 19K & 44K & 500 & - & &http://linqs.cs.umd.edu/projects/projects/lbc \\\\ \\cline{2-9}\n& DBLP & 1& - & - & - & - & &http://www.informatik.uni-trier.de/˜ley/db/ \\\\\n\\midrule[1 pt]\n\\multirow{10}{*}{\\shortstack{Social \\\\ Networks}}\n&Enron & - & 80K & - & - & - & & http://odds.cs.stonybrook.edu/\\#table2 \\\\ \\cline{2-9}\n&UCI Message & 1& 5K & - & - & - & &http://archive.ics.uci.edu/ml \\\\ \\cline{2-9}\n&Google+ & 4 & 75M & 11G & - & - & - & https://wangbinghui.net/dataset.html \\\\ \\cline{2-9}\n&Twitter Sybil & 3 & 41M & - & - & 100K & - & https://wangbinghui.net/dataset.html \\\\ \\cline{2-9}\n&Twitter WorldCup2014 & - & 54K & - & - & - & & http://shebuti.com/SelectiveAnomalyEnsemble/ \\\\ \\cline{2-9}\n&Twitter Security2014 & - & 130K & - & - & - & & http://shebuti.com/SelectiveAnomalyEnsemble/ \\\\ \\cline{2-9}\n&Reality Mining & - & 9.1K & - & - & - & & http://shebuti.com/SelectiveAnomalyEnsemble/ \\\\ \\cline{2-9}\n&NYTNews & - & 320K & - & - & - & & http://shebuti.com/SelectiveAnomalyEnsemble/ \\\\ \\cline{2-9}\n&Politifact & 314 & 41K & 40K & - & 157 & &https://github.com/safe-graph/GNN-FakeNews \\\\ \\cline{2-9}\n&Gossipcop & 5.4K & 314K & 308K & - & 2.7K & &https://github.com/safe-graph/GNN-FakeNews \\\\ \n\\midrule[1 pt]\n\\multirow{5}{*}{\\shortstack{Co-purchasing \\\\ Networks}}\n& Disney & 1 & 124 & 334 & 30 & 6 & &https://www.ipd.kit.edu/mitarbeiter/muellere/consub/ \\\\ \\cline{2-9}\n&Amazon-v1 & 1 & 314K & 882K & 28 & 6.2K & &https://www.ipd.kit.edu/mitarbeiter/muellere/consub/ \\\\ \\cline{2-9}\n&Amazon-v2 & 1 & 11K & - & 25 & 821 & -&https://github.com/dmlc/dgl/blob/master/python/dgl/data/fraud.py \\\\ \\cline{2-9}\n&Elliptic & 1 & 203K & 
234K & 166 & 4.5K & - &https://www.kaggle.com/ellipticco/elliptic-data-set \\\\\\cline{2-9}\n&Yelp & 1 & 45K & - & 32 & 6.6K & -&https://github.com/dmlc/dgl/blob/master/python/dgl/data/fraud.py \\\\ \n\\midrule[1 pt]\nTransportation Networks & New York City Taxi& - & - & - & - & - & &http://www.nyc.gov/html/tlc/html/about/triprecorddata.shtml \\\\\n\\bottomrule[1 pt]\n\\multicolumn{9}{l}{* -: Not Given, \\#G: Number of Graphs, \\#N: Number of Nodes, \\#E: Number of Edges, \\#FT: Number of Features, \\#AN: Number of Anomalies, REF: References.}\n\\end{tabular}\n\\label{tb:publisheddataset}\n}\n\\end{table*}", "id": "ae3bbf88-f1cb-408b-8ded-ef8445178dfa", "level": "section", "origin_cites_number": 63, "parent_id": "5020eb09-4f12-4d1a-8dae-076499c8173a", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "ANOS GD on Dynamic Graphs" ] ], "subsections": [], "title": "ANOS GD on Dynamic Graphs" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:resources}\nAcquiring open-source implementations and real-world datasets with genuine anomalies is far from trivial in academic research on graph anomaly detection.\nHere, we first list the published algorithms with publicly available implementations, then we provide a collection of public benchmark datasets and summarize the commonly used evaluation metrics. 
\nLastly, due to the shortage of labeled anomalies in real-world datasets, we review three synthetic dataset generation strategies used in existing works.\nAll the resources are available at: https://github.com/XiaoxiaoMa-MQ/Awesome-Deep-Graph-Anomaly-Detection/.", "id": "8147e459-f864-454f-86f6-52d0b0cd6b52", "level": "section", "origin_cites_number": 0, "parent_id": "5020eb09-4f12-4d1a-8dae-076499c8173a", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Published Algorithms and Datasets" ] ], "subsections": [ "9907d812-2cb7-4bfc-ba97-f1d8f46efce2", "152cd0db-3934-4cfa-8fbb-64d458d75863", "4da63f48-4501-4170-981d-8fe6cdbe1f52", "6b67eb52-a545-45eb-aa8e-2135a2a408e8" ], "title": "Published Algorithms and Datasets" }, { "cite_extract_rate": 0, "cites": [], "content": "The published implementations of algorithms and models facilitate baseline experiments. Table~\\ref{tb:publishedalgs} provides a summary of published implementations, outlining the implementation language and platform, the graphs they can admit, and URLs to the code repositories.", "id": "9907d812-2cb7-4bfc-ba97-f1d8f46efce2", "level": "subsection", "origin_cites_number": 0, "parent_id": "8147e459-f864-454f-86f6-52d0b0cd6b52", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Published Algorithms and Datasets" ], [ "subsection", "Published Algorithms" ] ], "subsections": [], "title": "Published Algorithms" }, { "cite_extract_rate": 0, "cites": [], "content": "Table~\\ref{tb:publisheddataset} summarizes the most commonly used datasets, categorizing them into different groups with regard to their application fields. Notably, anomaly ground truth is scarce: labeled anomalies are only provided in the Enron, Twitter Sybil, Disney, Amazon, Elliptic and Yelp datasets. 
Details of the DBLP, UCI Message, Digg, Wikipedia, and New York City Taxi datasets are not given because these public datasets only contain the raw data; in most existing works, they are further processed to build different graphs (\\eg homogeneous graphs, bipartite graphs).\nThe well-known citation networks are often used to generate synthetic datasets by injecting anomalies into them; the number of injected anomalies varies from study to study.\nIn addition to these anomaly detection datasets, Table~\\ref{tb:publisheddataset} also lists eight graph classification datasets.\nAs mentioned in Section~\\ref{sec:synthetic}, through downsampling, these datasets can be used as benchmarks for evaluating anomaly detection performance.", "id": "152cd0db-3934-4cfa-8fbb-64d458d75863", "level": "subsection", "origin_cites_number": 0, "parent_id": "8147e459-f864-454f-86f6-52d0b0cd6b52", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Published Algorithms and Datasets" ], [ "subsection", "Published Datasets" ] ], "subsections": [], "title": "Published Datasets" }, { "cite_extract_rate": 0.14285714285714202, "cites": [ 6281 ], "content": "\\label{sec:synthetic}\nGiven the rarity of ground-truth anomalies, many researchers have employed synthetic datasets to investigate the effectiveness of their proposed methods~. \nTypically, these datasets can be categorized as follows: \n\\begin{itemize}\n    \\item \\textit{Synthetic graphs with injected anomalies.} Pursuing this strategy, graphs are created to simulate real-world networks. All nodes and edges are added manually following well-known benchmark models (\\eg Lancichinetti-Fortunato-Radicchi (LFR)~, small-world~, scale-free graphs~). Once built, ground-truth anomalies are planted into the network. 
Because networks of any desired scale can be generated easily, this strategy has mostly been used in previous works to validate the underlying intuitions behind their anomaly detection methods.\n \\item \\textit{Real-world datasets with injected anomalies.} These datasets are built from real-world networks. In particular, anomalies are created either by modifying the topological structure or the attributes of existing nodes/edges/sub-graphs, or by inserting non-existent graph objects. \n \\item \\textit{Downsampled graph classification datasets.} The widely-used graph classification datasets (\\eg NCI1, IMDB, ENZYMES in~) can be easily converted into sets suitable for anomaly detection through two steps. Firstly, one particular class and its data records are chosen to represent normal objects. Then, the other data records are downsampled as anomalies at a specified downsampling rate. As a result, the generated graph anomaly detection dataset is, in fact, a subset of the original dataset. The most significant strength of this strategy is that no single data record has been modified.\n\\end{itemize}", "id": "4da63f48-4501-4170-981d-8fe6cdbe1f52", "level": "subsection", "origin_cites_number": 7, "parent_id": "8147e459-f864-454f-86f6-52d0b0cd6b52", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Published Algorithms and Datasets" ], [ "subsection", "Synthetic Dataset Generation" ] ], "subsections": [], "title": "Synthetic Dataset Generation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:em}\nTo date, the most widely used metrics for evaluating anomaly detection performance include accuracy, precision, recall, F1-score, AUC-ROC, and AUC-AP (Average Precision). Their formulas/descriptions are given in Table~\\ref{tb:evluationmetrics}. 
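As a brief worked example (the confusion-matrix counts here are hypothetical and purely illustrative), suppose a detector yields $tp = 8$, $fp = 2$, $fn = 4$, and $tn = 86$:\n\\[\nPrecision = \\frac{8}{8+2} = 0.80, \\quad Recall = \\frac{8}{8+4} \\approx 0.67, \\quad F1 = 2 \\cdot \\frac{0.80 \\cdot 0.67}{0.80 + 0.67} \\approx 0.73,\n\\]\nwhile the accuracy is $\\frac{8+86}{100} = 0.94$. This gap illustrates why accuracy alone can be misleading under the class imbalance that is typical of anomaly detection. 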
However, a more dedicated analysis with new evaluation metrics is needed for further performance examination \nbecause anomaly detection has different requirements in different applications~, \\eg with respect to false negatives and false positives.\nFor instance, network intrusion prevention systems are more sensitive to false negative errors, while false positive errors are considered relatively harmless. This is because any risky connection should be shut down to prevent information leaks.\nBy contrast, other applications concentrate more on the false positives, \\eg in the auditing domain, companies often set a budget for an auditor to look at flagged anomalies, and they want a high precision (\\ie a small false positive rate) so that the auditor's time is best used.\nHence, when evaluating detection performance, we suggest reviewing the specific requirements of the application domain for fair and suitable comparisons.\n\\begin{table}\n\\centering \n\\footnotesize\n\\renewcommand\\arraystretch{1}\n\\setlength{\\tabcolsep}{2.8mm}\n\\caption{Evaluation Metrics. 
$tp$: true positives; $tn$: true negatives; $fp$: false positives; $fn$: false negatives.}\n\\resizebox{0.46\\textwidth}{!}{\n\\begin{tabular}{l|c}\n\\toprule[1 pt]\n\\textbf{Evaluation Metric} & \\textbf{Formula/Description} \\\\ \n\\midrule[1 pt]\nAccuracy & $\\frac{tp + tn}{tp + tn + fp + fn}$ \\\\\\hline\nPrecision & $\\frac{tp}{tp + fp}$ \\\\\\hline\nRecall & $\\frac{tp}{tp + fn}$ \\\\\\hline\nF1 Score & $2 \\cdot \\frac{Precision \\cdot Recall}{Precision + Recall}$ \\\\\\hline\nAUC-ROC & The Area Under the ROC Curve\\\\ \\hline\nAUC-AP & The Area Under the Precision-Recall Curve \\\\\n\\bottomrule[1 pt]\n\\end{tabular}\n\\label{tb:evluationmetrics}\n}\n\\end{table}", "id": "6b67eb52-a545-45eb-aa8e-2135a2a408e8", "level": "subsection", "origin_cites_number": 3, "parent_id": "8147e459-f864-454f-86f6-52d0b0cd6b52", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Published Algorithms and Datasets" ], [ "subsection", "Evaluation Metrics" ] ], "subsections": [], "title": "Evaluation Metrics" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:futures}\nSo far, we have reviewed the contemporary deep learning techniques that are devoted to graph anomaly detection.\nAn apparent observation from our survey is that there remain many compounded challenges imposed by the complexity of anomaly detection, graph data, and the immaturity of deep learning techniques for graph data mining. Another observation is that deep learning techniques in graph anomaly detection are still confined to a relatively small number of studies, and most of these focus on anomalous node detection. As a rough gauge, simply compare the lengths of Tables~\\ref{table:ANOSND} and~\\ref{tb:edgeandsubgraph}. 
Edge, sub-graph, and graph-level anomaly detection have clearly received much less attention.\nTo bridge the gaps and push forward future work, we have identified 12 directions of future research for graph anomaly detection with deep learning.", "id": "e97bdf9c-475c-4ebc-8114-7af7a8b0aa87", "level": "section", "origin_cites_number": 0, "parent_id": "5020eb09-4f12-4d1a-8dae-076499c8173a", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Future Directions" ] ], "subsections": [ "32fa0363-7074-47b3-9e03-5211f2f264be", "239988e3-c594-47d4-adb9-69b272543369", "aff7afb2-0d0e-4d85-baf6-f135a679f1c2", "094e633e-5d3b-45d7-bca3-171ad2d8492b", "e773dc9f-5c7d-4700-bcf9-1e7e0772172c", "cc214041-b949-4f3c-95ae-5deb83ec1573", "85e8f422-2b40-4284-aca9-eaeef926d8d1", "1224de5d-ffc7-41d5-bee2-9722227a3ca0", "d42a5aab-3ff4-4f52-8576-f381cddddc12", "4eb29545-f452-443d-b7b6-8accd75e515c", "9d5eeb0b-b689-4f33-bf46-0aa58f25c4d7", "b1a4c699-5c1b-4fe6-a282-ae8f4a92410d" ], "title": "Future Directions" }, { "cite_extract_rate": 0.5, "cites": [ 6305 ], "content": "\\label{sec:future:ED}\nIn real-world graphs, anomalies also appear as unusual relationships between objects, sub-structures formed by abnormal groups, or abnormal graphs, which are known as anomalous edges, sub-graphs, and graphs respectively.\nAs indicated in our review, there is a huge gap between the existing anomalous edge/sub-graph/graph detection techniques and the emerging demands for more advanced solutions in various application domains (\\eg social networks, computer networks, financial networks).\nWhen detecting anomalous edges/sub-graphs/graphs, the proposed methods should be capable of leveraging the rich information contained in graphs to find clues and characteristics that can distinguish normal objects and anomalies in specific applications.\nTypically, this involves extracting edge/sub-graph/graph-level features, modeling the patterns of 
these features, and measuring the abnormalities accordingly.\nHowever, current deep learning based graph anomaly detection techniques devote very little effort to this task.\n\\textit{Opportunities}: We believe more research effort should be devoted to anomalous edge, sub-graph, and graph detection, given their significance in real-world applications. A possible solution to this gap might be to first consider the application domain and explore domain knowledge to find complementary clues as a basis for these problems. Then, motivated by recent advances in deep learning for edge, sub-graph, and graph-level representation learning~, extensive work can be done to learn an anomaly-aware embedding space such that it is feasible to extract abnormal patterns of anomalies. Although this direction seems quite straightforward, the true challenge lies in the specific application domains. Hence, domain knowledge, anomalous pattern recognition and anomaly-aware deep learning techniques should be employed simultaneously.", "id": "32fa0363-7074-47b3-9e03-5211f2f264be", "level": "subsection", "origin_cites_number": 2, "parent_id": "e97bdf9c-475c-4ebc-8114-7af7a8b0aa87", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Future Directions" ], [ "subsection", "Anomalous Edge, Sub-graph, and Graph Detection" ] ], "subsections": [], "title": "Anomalous Edge, Sub-graph, and Graph Detection" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 217, 218 ], "content": "Dynamic graphs provide powerful machinery with which to capture the evolving relationships between real objects and their attributes.\nTheir ever-changing structure and attribute information inherently make anomaly detection very challenging in these scenarios, leading to two primary concerns for the task.\nThe first is to consider the spatial and temporal information contained in each graph snapshot at different time stamps, and the second 
is to explore the evolving patterns of nodes, edges, sub-graphs and graphs, as well as their interaction with the node/edge attributes over time.\nWhen these challenges have been tackled with mature solutions, detection techniques will achieve better results.\n\\textit{Opportunities}: From our observations, most deep learning based dynamic graph anomaly detection techniques are built on DeepWalk~, GCN~ or other deep models originally designed for static graphs. This means that other information, such as evolving patterns in attributes~, is not adequately used in the detection task.\nWe can therefore identify the following directions for future studies to target.\n\\begin{itemize}\n \\item \\textit{Using dynamic graph mining tools.} As a popular research topic, deep learning for dynamic graph data mining~ has shown its effectiveness in supporting dynamic graph analysis, such as node clustering and graph classification~. More future works can be foreseen that adopt these techniques for anomaly detection.\n \\item \\textit{Deriving solid evidence for anomaly detection.} The rich structural, attribute and temporal information in dynamic graphs is a valuable resource for identifying anomalies. Apart from the indicators widely used in current works, such as bursts of connections between node pairs or suddenly vanishing connections, we suggest exploring structural and attribute changes in depth. From such studies, we may derive additional information to enhance the detection performance, such as the appearance of abnormal attributes.\n \\item \\textit{Handling complex dynamics.} Real-world networks always exhibit changes in both the network structure and node attributes, but only very few studies address this circumstance. Most state-of-the-art methods only consider changes in one of these aspects. 
Although this `double' scenario is extremely complex and detecting anomalies in this kind of dynamic graph is very challenging, it is worth studying because these graphs are highly reflective of real network data.\n\\end{itemize}", "id": "239988e3-c594-47d4-adb9-69b272543369", "level": "subsection", "origin_cites_number": 7, "parent_id": "e97bdf9c-475c-4ebc-8114-7af7a8b0aa87", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Future Directions" ], [ "subsection", "Anomaly Detection in Dynamic Graphs" ] ], "subsections": [], "title": "Anomaly Detection in Dynamic Graphs" }, { "cite_extract_rate": 1, "cites": [ 6296, 6299, 6300 ], "content": "Heterogeneous graphs are a specific type of graph that contain diverse types of nodes and edges.\nFor instance, Twitter can be intuitively modeled as a heterogeneous graph comprised of tweets, users, words, etc.\n\\textit{Opportunities}: To use the complex relationships between different types of nodes in heterogeneous graphs for anomaly detection, representative works, such as HGATRD~, GCAN~ and GLAN~, typically decompose a heterogeneous graph into individual graphs according to meta-paths, \\eg one with tweets and users, and another with tweets and words. They then use D(G)NNs to learn the embeddings for graph anomaly detection. Such a decomposition inherently overlooks the direct inter-relations among diverse types of nodes/edges and downgrades the effectiveness of the embeddings. 
A possible solution is to reveal the complex relations between different types of nodes and edges, and encode them into a unified representation to boost detection performance.", "id": "aff7afb2-0d0e-4d85-baf6-f135a679f1c2", "level": "subsection", "origin_cites_number": 3, "parent_id": "e97bdf9c-475c-4ebc-8114-7af7a8b0aa87", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Future Directions" ], [ "subsection", "Anomaly Detection in Heterogeneous Graphs" ] ], "subsections": [], "title": "Anomaly Detection in Heterogeneous Graphs" }, { "cite_extract_rate": 1, "cites": [ 242 ], "content": "The scalability of methods to high-dimensional and large-scale data is an ongoing and significant challenge to anomaly detection techniques.\nIn the face of large-scale networks, such as Facebook and Twitter, which contain billions of users and friendship links, the size of the data in terms of both graph size and number of node attributes is extremely large.\nHowever, most of the existing works lack the ability to detect anomalies in such large-scale data because they are transductive models and need to take the whole graph as input for further analysis.\nComputation time and memory cost increase dramatically as the network scales up, and this stops existing techniques from being used on large-scale networks.\n\\textit{Opportunities}: Accordingly, there is a need for scalable graph anomaly detection techniques.\nOne possible approach would be an inductive learning scheme that first trains a detection model on part of the whole graph and then applies the model to detect anomalies in the unseen data.\nAs some inductive learning models, such as GraphSAGE~, have shown their effectiveness on link prediction and node classification in large-scale graphs, this approach is expected to provide a basis for graph anomaly detection in large-scale graphs, and similar techniques can be investigated in the future.", "id":
"094e633e-5d3b-45d7-bca3-171ad2d8492b", "level": "subsection", "origin_cites_number": 1, "parent_id": "e97bdf9c-475c-4ebc-8114-7af7a8b0aa87", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Future Directions" ], [ "subsection", "Anomaly Detection in Large-scale Graphs" ] ], "subsections": [], "title": "Anomaly Detection in Large-scale Graphs" }, { "cite_extract_rate": 0.14285714285714202, "cites": [ 8994 ], "content": "In real-world networks, objects might form different kinds of relationships with others (\\eg a user's followership and friendship on Twitter), and their attribute information might be collected from different sources, such as a user's profile and historical posts.\nThis results in two types of multi-view graphs: 1) multi-graphs that contain more than one type of edge between two nodes~; and 2) multi-attributed-view graphs that store node attributes in different attributed views~.\n\\textit{Opportunities}: These multiple views basically allow us to analyze real objects' characteristics from different perspectives.\nEach view also provides complementary information to other views, and they might have different significance for anomaly detection. \nFor instance, anomalies might be indistinguishable in one view but obviously divergent from the majority in another view.\nThere is a variety of work in data mining on multi-view learning~. However, work that can accommodate multi-view graphs along with multi-view attributes on nodes for anomaly detection purposes is nascent.\nMoreover, the rich information contained in multiple views and the inconsistency among them are overlooked in these works. \nTo this end, we believe more research effort in this direction is needed. 
Digesting the relationships between views will be vital to their success, as two views might provide contrary or supplementary information for anomaly detection.", "id": "e773dc9f-5c7d-4700-bcf9-1e7e0772172c", "level": "subsection", "origin_cites_number": 7, "parent_id": "e97bdf9c-475c-4ebc-8114-7af7a8b0aa87", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Future Directions" ], [ "subsection", "Multi-view Graph Anomaly Detection" ] ], "subsections": [], "title": "Multi-view Graph Anomaly Detection" }, { "cite_extract_rate": 0.5, "cites": [ 6307, 6306, 6271, 8995 ], "content": "The easy accessibility of online platforms has made them convenient targets for fraudsters, attackers and other malevolent agents to carry out malicious activities.\nAlthough various anomaly detection systems have been deployed to protect benign objects, anomalies can still conceal themselves to evade detection~.\nKnown as camouflaged anomalies, these entities typically disguise themselves as regular objects.\nIf the detection techniques are not robust against such cases, \\ie if they cannot quickly and effectively adapt to the evolving behavior of evasion-seeking attackers, the anomalies are simply left to cause their damage.\n\\textit{Opportunities}: In the face of camouflage, the boundary between anomalies and regular objects is blurred, making anomalies much harder to identify.\nWe believe extensive effort should be placed on detecting these anomalies because, as yet, very few studies have looked at handling camouflaged anomalies in graphs~.\nTo fill this gap, one major direction might be to jointly analyze the attributes, co-relations, such as the triadic, tetradic, or higher-order relationships between objects in hypergraphs~, and other information contained in graphs.\nIn this way, anomalies that only camouflage their local structures or attributes can be identified effectively.\nEnhancing existing techniques 
might be another direction. \nThis involves incorporating into existing detection techniques additional mechanisms or function blocks particularly designed to distinguish camouflaged anomalies.\nConsequently, these techniques will bridge most existing works and camouflaged anomaly detection.", "id": "cc214041-b949-4f3c-95ae-5deb83ec1573", "level": "subsection", "origin_cites_number": 8, "parent_id": "e97bdf9c-475c-4ebc-8114-7af7a8b0aa87", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Future Directions" ], [ "subsection", "Camouflaged/Adversarial Anomaly Detection" ] ], "subsections": [], "title": "Camouflaged/Adversarial Anomaly Detection" }, { "cite_extract_rate": 0, "cites": [], "content": "Anomalies are rare, which means anomaly detection always coexists with class imbalance in the training data. As deep learning models rely heavily on the training data, such imbalance poses great challenges to graph anomaly detection, and this remains a significant obstacle to deep learning techniques. \nTypically, the imbalanced class distributions will downgrade the detection techniques' ability to capture the differences between anomalies and non-anomalies. It might even cause over-fitting on the anomalous class because there are too few anomalies in the data.\nIf the detection model overlooks this critical fact and is trained improperly, detection performance will be sub-optimal.\n\\textit{Opportunities}: In fact, class imbalance has been widely explored in various research areas~.\nAdvances such as under-sampling the majority class or modifying the algorithms shed important light on solving training problems with imbalances.\nYet, contemporary graph anomaly detection methods rarely incorporate these techniques. 
For more effective detection techniques, biased models that pay more attention to anomalies, such as by imposing additional training penalties on misclassified anomalies, would be a possible direction to circumvent the problem.\nMoreover, when adopting graph neural networks that aggregate neighboring information to the target node, like GCN or GraphSAGE, the over-smoothing between the features of connected nodes should be prevented such that the distinguishable features of the anomalies can be preserved to support anomaly detection.", "id": "85e8f422-2b40-4284-aca9-eaeef926d8d1", "level": "subsection", "origin_cites_number": 2, "parent_id": "e97bdf9c-475c-4ebc-8114-7af7a8b0aa87", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Future Directions" ], [ "subsection", "Imbalanced Graph Anomaly Detection" ] ], "subsections": [], "title": "Imbalanced Graph Anomaly Detection" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 6308, 1671, 6309, 6284 ], "content": "Graph anomaly detection has close relations with other graph mining tasks, including community detection~, node classification~, and link prediction~. \nFor a concrete example, when detecting community anomalies, community detection techniques are usually used to extract the community structures prior to anomaly detection. Meanwhile, the anomaly detection results can be used to optimize the community structure.\nSuch mutually beneficial collaborations between anomaly detection and other tasks inherently suggest an opportunity for multi-task learning that can handle diverse tasks simultaneously and share information among tasks.\n\\textit{Opportunities}: Multi-task learning provides effective machinery with which to incorporate associated tasks~. Its foremost advantage is that the training signal from another task could yield complementary information to distinguish anomalies from non-anomalies. 
The result would be enhanced detection performance. However, very few attempts focus on this at present.\nBeyond current works, such as~ that jointly perform anomalous node detection and personalized recommendation, explorations into combining other learning tasks with graph anomaly detection are likely to emerge as a fruitful future direction.", "id": "1224de5d-ffc7-41d5-bee2-9722227a3ca0", "level": "subsection", "origin_cites_number": 6, "parent_id": "e97bdf9c-475c-4ebc-8114-7af7a8b0aa87", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Future Directions" ], [ "subsection", "Multi-task Anomaly Detection" ] ], "subsections": [], "title": "Multi-task Anomaly Detection" }, { "cite_extract_rate": 0.5, "cites": [ 8996, 6310 ], "content": "The interpretability of anomaly detection techniques is vital to the subsequent anomaly handling process.\nWhen applying these techniques to real applications, such as financial and insurance systems, it is essential to provide explainable and lawful evidence to support the detection results.\nHowever, most of the existing works lack the ability to provide such evidence.\nTo identify the anomalies, the most commonly used metrics are top-k rankings and simple anomaly scoring functions.\nThese metrics are flexible enough to label objects as being either an anomaly or not an anomaly, but they cannot derive solid explanations.\nMoreover, as deep learning techniques have also been criticized for their low interpretability, future works on graph anomaly detection with deep learning should pay much more attention to this~.\n\\textit{Opportunities}: To bridge this gap, integrating specially designed interpretation algorithms or mechanisms~ into the detection framework would be a possible solution, noting that this would inherently induce a higher computational cost.\nFuture works should therefore balance the cost of anomaly detection performance and 
interpretability.\nVisualization-based approaches, \\eg dashboards and charts, might also be feasible for showing the distinction between anomalies and non-anomalies in a human-friendly manner.\nFurther research in this direction will be successful if interpretable visualization results can be given~.", "id": "d42a5aab-3ff4-4f52-8576-f381cddddc12", "level": "subsection", "origin_cites_number": 4, "parent_id": "e97bdf9c-475c-4ebc-8114-7af7a8b0aa87", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Future Directions" ], [ "subsection", "Graph Anomaly Interpretability" ] ], "subsections": [], "title": "Graph Anomaly Interpretability" }, { "cite_extract_rate": 0.36363636363636304, "cites": [ 6289, 6287, 6283, 8997 ], "content": "Amongst existing unsupervised graph anomaly detection techniques, anomalies are mainly identified based on residual analysis~, reconstruction loss~, distance-based statistics~, density-based statistics~, graph scan statistics~, and one-class classification~.\nThe underlying intuition of these identification strategies is that anomalies have data patterns inconsistent with those of regular objects, and they will, therefore: 1) introduce more residual errors or be harder to reconstruct; or 2) be located in low-density areas or far away from the majority class in an anomaly-aware feature space.\nEffort toward designing novel loss functions for GNNs for anomaly detection is currently quite limited~.\n\\textit{Opportunities}: Although these strategies could capture the deviating data patterns of anomalies, they also have different limitations. Specifically, the residual analysis, one-class classification and reconstruction loss strategies are sensitive to noisy training data. Noisy nodes, edges or sub-graphs also exhibit large residuals, a large distance to the origin/hypersphere center and high reconstruction losses. 
Meanwhile, the distance-based and density-based strategies can only be applied when anomalies and non-anomalies are well separated in the lower-dimensional space. Detection performance also downgrades dramatically if the gap between anomalies and non-anomalies is not that evident.\nThis calls for extensive future efforts to overcome these limitations and explore new anomaly identification strategies.", "id": "4eb29545-f452-443d-b7b6-8accd75e515c", "level": "subsection", "origin_cites_number": 11, "parent_id": "e97bdf9c-475c-4ebc-8114-7af7a8b0aa87", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Future Directions" ], [ "subsection", "Graph Anomaly Identification Strategies" ] ], "subsections": [], "title": "Graph Anomaly Identification Strategies" }, { "cite_extract_rate": 0, "cites": [], "content": "Systematic benchmarking is key to evaluating the performance of graph anomaly detection techniques.\nAs indicated in our analysis in Section~\\ref{sec:em}, recent studies are constantly drawing attention to more comprehensive and effective benchmarking~. \nTypically, the benchmarking framework consists of benchmark datasets, baseline methods, evaluation metrics, and further analysis tools.\nWhen evaluating a technique's performance against other baselines, the evaluation dataset and metrics become very important because the performance of each model may vary depending on the setting.\nThe shortage of public datasets and (publicly available) baseline methods also imposes great challenges for effective evaluation.\nAlthough one of the aims of our survey is to provide extensive materials for this purpose, like open-sourced implementations, datasets, and evaluation metrics, this work can only serve as a basis for future investigations into systematic benchmarking. 
We invite more efforts from the anomaly detection community toward this important cause.\nCertainly, rigorous attention to designing better frameworks for benchmarking would help to disclose the advances and shortcomings of various detection techniques and essentially keep an unprejudiced and accurate progress record in this field.", "id": "9d5eeb0b-b689-4f33-bf46-0aa58f25c4d7", "level": "subsection", "origin_cites_number": 4, "parent_id": "e97bdf9c-475c-4ebc-8114-7af7a8b0aa87", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Future Directions" ], [ "subsection", "Systematic Benchmarking" ] ], "subsections": [], "title": "Systematic Benchmarking" }, { "cite_extract_rate": 0, "cites": [], "content": "A graph anomaly can be categorized as an anomalous node, edge, or sub-graph in a single graph, or an anomalous graph in a graph database.\nThese anomalies usually coexist in real-world datasets.\nFor instance, individual fraudsters, abnormal relationships, and fraud groups exist concurrently in online social networks, as shown in Fig.~\\ref{Toy}.\nMoreover, there may be different ways to define anomalies of a certain type, such as community outliers versus anomalous communities or attribute-based versus structural anomalies.\nWhen deploying detection techniques in real applications, it is expected that all types of anomalies can be identified while consuming the least resources and time.\nA straightforward approach would be to integrate independent anomalous node, edge, and sub-graph detection techniques.\nAlthough this is convenient to apply to relatively small networks, its high computational cost will surely prevent the approach from scaling to large networks, such as Facebook and Twitter, because the same graph data has to be loaded and processed more than once by different techniques.\n\\textit{Opportunities}: Unified frameworks that can detect diverse types of anomalies together~ may provide 
feasible solutions to bridge the gap.\nTo build such frameworks, one possible direction is to capture all the information needed by different detection techniques simultaneously so that these techniques can be applied.\nThe idea seems straightforward, but designing neural network layers and learning strategies that can fulfill this need will require extensive effort.", "id": "b1a4c699-5c1b-4fe6-a282-ae8f4a92410d", "level": "subsection", "origin_cites_number": 2, "parent_id": "e97bdf9c-475c-4ebc-8114-7af7a8b0aa87", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Future Directions" ], [ "subsection", "Unified Anomaly Detection Framework" ] ], "subsections": [], "title": "Unified Anomaly Detection Framework" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:conclusion}\nDue to the complex relationships between real-world objects and recent advances in deep learning, especially graph neural networks, graph anomaly detection with deep learning is currently at the forefront of anomaly detection.\nTo the best of our knowledge, this is the first survey to present a comprehensive review dedicated to graph anomaly detection with modern deep learning techniques.\nSpecifically, we have reviewed and categorized contemporary deep learning techniques according to the types of graph anomalies they can detect, including: (1) anomalous node detection; (2) anomalous edge detection; (3) anomalous sub-graph detection; and, finally, (4) anomalous graph detection.\nClear summaries and comparisons between different works are given to offer a complete and thorough picture of the current work and progress of graph anomaly detection as a field.\nMoreover, to push forward future research in this area, we have provided a basis for systematic benchmarking by compiling a wide range of commonly used datasets, open-sourced implementations and synthetic dataset 
generation techniques.\nWe further highlight 12 potential directions for future work based on the survey results.\nIt is our firm belief that graph anomaly detection with deep learning is indeed more than a burst of temporary interest, and numerous applications from diverse domains are sure to benefit from it in the years to come.\n\\bibliographystyle{IEEEtran}\n\\bibliography{main.bib}\n\\clearpage\n\\setcounter{page}{1}\n\\appendices", "id": "32ddd0ab-f741-4a6c-a1ff-380ee935d4a2", "level": "section", "origin_cites_number": 0, "parent_id": "5020eb09-4f12-4d1a-8dae-076499c8173a", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" }, { "cite_extract_rate": 0.31707317073170704, "cites": [ 6285, 6281, 6283, 6289, 6307, 6279, 6271, 6306, 6293, 5645, 8149, 6291, 6286 ], "content": "\\label{appendix:data-challenges}\nDue to the complexity of anomaly detection and graph data mining, adopting deep learning technologies for graph anomaly detection faces a number of challenges:\n\\textbf{Data-CH1. Ground-truth is scarce.} In most cases, there is little or no prior knowledge about the features or patterns of anomalies in real applications. Ground-truth anomalies are often identified by domain experts, and this is generally cost-prohibitive. As a result, labeled ground-truth anomalies are often unavailable for analysis in a wide range of disciplines.\n\\textbf{Data-CH2. Various types of graphs.} Different graphs model different real-world data. For instance, plain graphs contain only structural information, attributed graphs contain both structural and attribute information, and heterogeneous graphs represent the complex relations between different types of objects.\nThese graphs reflect the real-world data in different forms, and graph anomalies will show different deviating patterns in different types of graphs.\n\\textbf{Data-CH3. 
Various types of graph anomalies.} Given a specific type of graph, graph anomalies could appear as a specific node, edge, sub-graph, or an entire graph, and each type of these anomalies is significantly different from others. This means detection methods must involve concise definitions of anomalies and be able to identify concrete clues about the deviating patterns of anomalies.\n\textbf{Data-CH4. High dimensionality and large scale.} Representing the structure information of real-world networks usually results in high-dimensional and large-scale data~ because real-world networks often contain millions or billions of nodes. Graph anomaly detection techniques, hence, should be capable of handling such high-dimensional and large-scale data; this includes the ability to extract anomalous patterns under the constraints of execution time and feasible computing resources. \n\textbf{Data-CH5. Interdependencies and dynamics.} The relationships between real objects reveal their interdependencies and they can no longer be treated individually for anomaly detection. That is to say, the detection techniques need to consider the deviating patterns of anomalies by assessing the pairwise, triadic, and higher-order relationships among objects stored in conventional graphs or hypergraphs~.\nIn addition, the dynamic nature of real-world networks makes detection problems much more challenging.\n\textbf{Data-CH6. Class imbalance.} As anomalies are rare occurrences, only a very small proportion of the real-world data might be anomalous. This naturally introduces a critical class imbalance problem to anomaly detection because the number of normal objects is far greater than that of anomalies in the training data. If no further actions are taken to tackle this challenge, learning-based anomaly detection techniques might overlook the patterns of anomalies, leading to sub-optimal results.\n\textbf{Data-CH7. 
Unknown and camouflage of anomalies.} In reality, knowledge about anomalies mainly stems from human expertise. There are still many unknown anomalies across different application domains, and new types of anomalies might appear in the future. Nevertheless, real-world anomalies can hide or be camouflaged as benign objects to bypass existing detection systems. In graphs, anomalies might hide themselves by connecting with many normal nodes or by mimicking their attributes. Detection methods, therefore, need to be adaptive to unknown and novel anomalies and robust to camouflaged anomalies.\nThese data-specific challenges and technical-specific challenges (discussed in Section~\\ref{sec:introdution:challenges}) are summarized in Table.~\\ref{tb:challenges}, along with the corresponding articles that aim to address them.\n\\begin{table}\n\\centering \n\\footnotesize\n\\renewcommand\\arraystretch{1}\n\\setlength{\\tabcolsep}{2.8mm}\n\\caption{Challenges and Methods.}\n\\resizebox{0.48\\textwidth}{!}{\n\\begin{tabular}{m{1.1cm}<{\\centering}|m{2cm}<{\\centering}|m{2.9cm}<{\\centering}}\n\\toprule[1 pt]\n\\textbf{Challenges} & \\textbf{Details} & \\textbf{Methods} \\\\ \n\\midrule[1 pt]\n\\multicolumn{3}{c}{\\textbf{Data-specific Challenges}} \\\\\n\\midrule[1 pt]\nData-CH1 & Ground-truth is scarce & \\\\ \n\\cline{1-3}\nData-CH2 & Various types of graphs & \\\\ \n\\cline{1-3}\nData-CH3 & Various types of graph anomalies & \\\\ \n\\cline{1-3}\nData-CH4 & High dimensionality and large scale & \\\\ \n\\cline{1-3}\nData-CH5 & Interdependencies and dynamics & \\\\ \n\\cline{1-3}\nData-CH6 & Class imbalance & \\\\ \n\\cline{1-3}\nData-CH7 & Unknown and camouflage of anomalies & \\\\ \n\\midrule[1 pt]\n\\midrule[1 pt]\n\\multicolumn{3}{c}{\\textbf{Techniques-specific Challenges}} \\\\\n\\midrule[1 pt]\nTech-CH1 & Anomaly-aware training objectives & \\\\\\cline{1-3}\nTech-CH2 & Anomaly interpretability & \\\\\\cline{1-3}\nTech-CH3 & High training cost & \\\\\\cline{1-3}\nTech-CH4 
& Hyperparameter tuning & \\\\\n\\bottomrule[1 pt]\n\\end{tabular}\n\\label{tb:challenges}\n}\n\\end{table}", "id": "9a4d2cc0-0208-4633-a227-f43cf17e07e4", "level": "section", "origin_cites_number": 41, "parent_id": "5020eb09-4f12-4d1a-8dae-076499c8173a", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Challenges in Graph Anomaly Detection" ] ], "subsections": [], "title": "Challenges in Graph Anomaly Detection" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{appendix:taxonomy}\nThe taxonomy of this survey is shown in Fig.~\\ref{pic:taxonomy}.", "id": "12e1194f-a7ed-4e46-889d-a06a013eaeaf", "level": "section", "origin_cites_number": 0, "parent_id": "5020eb09-4f12-4d1a-8dae-076499c8173a", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "Taxonomy" ] ], "subsections": [], "title": "Taxonomy" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{appendix:node:static}\nFollowing the taxonomy of Section~\\ref{lb:node:sag}, this section reviews traditional non-deep learning techniques designed for ANOS ND on static attributed graphs, followed by techniques based on GATs, GANs, and network representation.\n\\onecolumn\n\\begin{sidewaysfigure*}[h]\n \\centering\n \\includegraphics[width=\\textwidth,keepaspectratio]{pics/taxonomy.pdf}\n \\captionof{figure}{The Taxonomy of Our Survey.}\n \\label{pic:taxonomy}\n\\end{sidewaysfigure*}\n\\twocolumn", "id": "dfc7ea98-0750-495a-b3c3-8183220b6042", "level": "section", "origin_cites_number": 0, "parent_id": "5020eb09-4f12-4d1a-8dae-076499c8173a", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "ANOS ND on Static Graphs" ] ], "subsections": [ "92e0465c-2bc8-4d8d-9625-6c4f4b200390", "ac5b6d2a-da5e-45c0-bd9e-1a63dbed603a", "ec58be64-9072-46de-98e9-29a0e0542d63", 
"99113f23-1403-4b3d-bae2-8db2bf051152" ], "title": "ANOS ND on Static Graphs" }, { "cite_extract_rate": 0, "cites": [], "content": "Traditional techniques, such as statistical models, matrix factorization, and KNN, have been widely applied to extract the structural/attribute patterns of anomalous nodes for a subsequent detection process.\nAmong these, matrix factorization (MF) based techniques have shown strong capability in capturing both topological structures and node attributes, achieving promising detection performance.\nAn early attempt at this type of method was made by Liu \etal~.\nThey aimed to detect community anomalies (defined in Section~\ref{sec:AND}) through the developed model, ALAD.\nAs shown in Fig.~\ref{pic:ALAD}, through non-negative matrix factorization, ALAD incorporates both the graph structure $A$ and node attributes $X$ to derive community structures $W$ and their attribute distribution vectors $H$.\nOnce the matrices are decomposed, ALAD measures the normality of each node according to the attribute similarity between it and the community it belongs to.\nBy ranking the nodes' normality scores in ascending order, the top-k nodes are identified as community anomalies. 
To incorporate the structural information for obtaining these errors, Radar puts explicit restrictions on the learned residuals such that directly linked nodes will have similar residual patterns in $R$ (known as the homophily effect).\nFinally, the top-k nodes with larger norms in $R$ are identified as anomalies.\nAlthough the homophily hypothesis provides strong support for exploiting structural information, it might not always hold.\nIn fact, real objects may have distinct attributes from their connected neighbors and it is non-trivial to require that all connected objects share similar values in each dimension of the feature space. Motivated by this, Peng \etal~ indicated that there are structurally irrelevant node attributes that do not satisfy the homophily hypothesis.\nIndeed, these structurally irrelevant node attributes would have adverse effects on anomaly detection techniques that are developed based on this hypothesis. \nTo tackle this problem, their developed model, ANOMALOUS, uses CUR~ decomposition to select attributes that are closely related to the network structure and then spots anomalies through residual analysis following Radar (as shown in Fig.~\ref{pic:ANOMALOUS}). \n\begin{figure}[!t]\n\setlength{\belowcaptionskip}{-0.25cm}\n \centering\n \subfigure[The Framework of ALAD~]{\n \label{pic:ALAD} \n \includegraphics[width=0.48\textwidth]{pics/Node/ALAD.pdf}}\n \subfigure[The Framework of Radar~]{\n \label{pic:Radar} \n \includegraphics[width=0.48\textwidth]{pics/Node/Radar.pdf}}\n \subfigure[The Framework of ANOMALOUS~]{\n \label{pic:ANOMALOUS}\n \includegraphics[width=0.48\textwidth]{pics/Node/Anomalous.pdf}}\n \caption{ANOS ND on attributed graphs -- Matrix Factorization based approaches. The three representative MF techniques adopt different decomposition strategies to fuse the graph structural information and node attributes. 
Anomalous nodes are then spotted through scoring functions or residual analysis.}\n \label{pic:MFtechniques}\n\end{figure}\nBeyond matrix factorization, linear regression models have also been designed to train anomaly classifiers given labeled training data.\nA representative work is that of Wu \etal~.\nTheir supervised model, SGASD, has yielded encouraging results in identifying social spammers using the social network structure, the content information in social media and user labels.\nThese non-deep learning techniques are able to capture valuable information from graph topology and node attributes, but their application and generalizability to real-world networks (which are usually large-scale) are strictly limited due to the high computational cost of matrix decomposition operations and regression models.", "id": "92e0465c-2bc8-4d8d-9625-6c4f4b200390", "level": "subsection", "origin_cites_number": 5, "parent_id": "dfc7ea98-0750-495a-b3c3-8183220b6042", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "ANOS ND on Static Graphs" ], [ "subsection", "Traditional Non-Deep Learning Techniques" ] ], "subsections": [], "title": "Traditional Non-Deep Learning Techniques" }, { "cite_extract_rate": 1, "cites": [ 6291, 180, 6289 ], "content": "\begin{figure*}[!t]\n\setlength{\belowcaptionskip}{-0.25cm}\n \centerline{\includegraphics[width=0.98\textwidth]{pics/Node/GAT.pdf}}\n \caption{ANOS ND on attributed graphs -- GAT based approaches. Given the input graph, these techniques employ graph attention neural networks to learn node embeddings. 
The unsupervised technique, AnomalyDAE~, scores each node based on the reconstruction loss and marks the top-k nodes as anomalies, while the semi-supervised technique, SemiGNN~, trains a classifier to predict node labels.}\n \label{pic:GAT}\n\end{figure*}\nAlthough GCN provides an effective solution to incorporating graph structure with node attributes for ANOS ND (reviewed in Section~\ref{sec:AND:GCN}), its ability to capture the most relevant information from neighbors is subpar.\nThis is due to the simple convolution operation that aggregates neighbor information equally to the target node.\nRecently, the graph attention mechanism (GAT)~ has been employed to replace the traditional graph convolution.\nFor instance, Fan \etal~ applied a graph attention neural network to encode the network structure information (structure encoding).\nThe method, AnomalyDAE, also adopts a separate attribute autoencoder to embed the node attributes (attribute encoding).\nThrough an unsupervised encoding-decoding process, each node is ranked according to its corresponding reconstruction loss, and the top-k nodes introducing the greatest losses are identified as anomalies.\nSpecifically, the attribute decoding process takes both node embeddings learned through the structure and attribute encoding processes to reconstruct node attributes, as shown in Fig.~\ref{pic:GAT}, while the graph topology is reconstructed only using the embeddings output by the GAT.\nTo acquire better reconstruction results, AnomalyDAE is trained to minimize the overall loss function, denoted as:\n\begin{equation} \label{eq:anomalydae}\n\begin{split}\n \mathcal{L}_{AnomalyDAE} = & \alpha ||(A - \hat{A})\odot \bm{\theta}||_{2}^{2} + \\\n &(1-\alpha)||(X - \hat{X})\odot \bm{\eta}||_{2}^{2},\n\end{split}\n\end{equation}\nwhere $\alpha$ is a balancing coefficient, $A$ and $X$ are the input adjacency matrix and attribute matrix, $\hat{A}$ and $\hat{X}$ are the reconstructed matrices. 
Each $\\theta_{i,j} \\in \\bm{\\theta}$ and $\\eta_{i,j} \\in \\bm{\\eta}$ is 1, if the corresponding element $A_{ij}$ and $X_{ij}$ equals 0, otherwise, their values are defined by hyperparameters greater than 1.\nAnother decent work is SemiGNN~, in which Wang \\etal proposed a semi-supervised attention-based graph neural network for detecting fraudulent users in online payment platforms.\nThis work further explores user information collected from various sources (\\eg transaction information and user profiles), and represents real-networks as multi-view graphs.\nEach view in the graph is modeled to reflect the relationship between users or the correlation between user attributes.\nFor anomaly detection, SemiGNN first generates node embedding $h_u^v$ from each view $v$ by aggregating neighbor information through a node-level attention mechanism.\nIt then employs view-level attention to aggregate node embeddings from each view and generates a unified representation $a_u$ for each node.\nLastly, the class of each node is predicted through a softmax classifier.\nIndeed, Wang \\etal designed a supervised classification loss and an unsupervised graph reconstruction loss to jointly optimize the model by fully utilizing labeled and unlabeled data.\nThe classification loss can be denoted as:\n\\begin{equation} \n \\mathcal{L}_{sup} = - \\frac{1}{|U_L|}\\sum_{u \\in U_L} \\sum_{i=1}^{k}I(y_u=i)\\log\\frac{\\exp(a_u\\cdot\\theta_i)}{\\sum_{j=1}^{k}\\exp(a_u\\cdot\\theta_j)},\n\\end{equation}\nwhere $U_L$ is the labeled user set and its size is $|U_L|$, $I(\\cdot)$ is an indicator function, $k$ is the number of labels to be predicted (in most cases, the label is either anomalies or non-anomalies, and $k=2$), and $\\theta$ represents the trainable variables.\nMeanwhile, the unsupervised loss encourages unlabeled nodes (users) that can be reached by labeled nodes through random walks to obtain similar representations and vice versa.\nThis is achieved by negative sampling 
(unlabeled nodes that cannot be reached by random walks are negative samples) and the loss can be formulated as:\n\\begin{equation}\n \\begin{split}\n \\mathcal{L}_{unsup} & = \\sum_{u\\in U}\\sum_{v\\in N_u \\cup Neg_{u}} -\\log(\\sigma(a_{u}^{T}a_{v})) \\\\\n &- 3\\cdot E_{q\\sim P_{neg}(u)}\\log(\\sigma(a_{u}^{T}a_{q})),\n \\end{split}\n\\end{equation}\nwhere $U$ denotes the user set, $N_u$ denotes the neighbor set of $u$, $Neg_{u}$ represents negative samples, $P$ is the sampling distribution, and $\\sigma(\\cdot)$ is the sigmoid function. The total loss takes the sum of them and is formulated as:\n\\begin{equation} \n \\mathcal{L}_{SemiGNN} = \\alpha \\mathcal{L}_{sup} + (1- \\alpha) \\mathcal{L}_{unsup} + \\lambda \\mathcal{L}_{reg},\n\\end{equation}\nwhere $\\alpha$ is a balancing parameter and $\\mathcal{L}_{reg}$ regularizes all trainable variables.", "id": "ac5b6d2a-da5e-45c0-bd9e-1a63dbed603a", "level": "subsection", "origin_cites_number": 3, "parent_id": "dfc7ea98-0750-495a-b3c3-8183220b6042", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "ANOS ND on Static Graphs" ], [ "subsection", "GAT Based Techniques" ] ], "subsections": [], "title": "GAT Based Techniques" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{figure*}[!t]\n\\setlength{\\belowcaptionskip}{-0.25cm}\n \\centerline{\\includegraphics[width=0.98\\textwidth]{pics/Node/GAN.pdf}}\n \\caption{ANOS ND on static graphs -- GAN based approaches. The generator, G, generates potential anomalies by sampling noisy data from a prior distribution to fool the discriminator. The discriminator, D, distinguishes the generated anomalies and normal nodes. 
The scoring function then quantifies the anomaly score of each node according to D's output.}\n \\label{pic:GAN}\n\\end{figure*}\nBecause GAN is effective at capturing anomalous/regular data distributions (as reviewed in Section~\\ref{sec:ND:GAN}), Ding \\etal~ used GAN in their developed model, AEGIS, for improved anomaly discriminability on unseen data.\nAs shown in Fig.~\\ref{pic:GAN}, this model first generates node embeddings through a GNN from the input attributed graph, and then a generator and a discriminator are trained to identify anomalies. In the first phase, anomalous nodes and regular nodes are mapped to distinctive areas in the embedding space such that the GAN is able to learn a boundary between them. Accordingly, Ding \\etal built an autoencoder network with graph differentiative layers to capture the attribute difference between each node and its $k$-th neighbors. Such difference information enables anomalies to be distinguished easily. The embeddings are encoded as follows:\n\\begin{equation} \n h_{i}^{l} = \\sum_{k=1}^{K} \\beta_{i}^{k} \\sigma \\left( W_1h_{i}^{l-1} + \\sum_{j\\in N_{k}(i)}\\alpha_{ij}W_{2}\\Delta_{ij}^{l-1}\\right),\n\\end{equation}\nwhere $h_{i}^{l}$ is embedded features of node $i$ through the $l$-th layer, $\\beta_{i}^{k}$ is the attention for each hop, $N_{k}(i)$ is the set of $k$-th order neighbors, $\\alpha_{ij}$ is the attention for each neighbor, and $\\Delta_{ij}^{l-1}$ is the difference between $i$ and $j$, $W_1$ and $W_2$ are the trainable variables.\nThe autoencoder is fine-tuned until the node attributes can be best reconstructed using the learned embeddings, after which the GAN is trained.\nIn the second phase, the generator follows a prior distribution $p(\\tilde{z})$ to generate anomalies by sampling noisy data, while the discriminator struggles to distinguish between the embeddings of the normal nodes and those of the generated anomalies. 
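As a toy illustration of this adversarial scheme, the sketch below trains a plain logistic-regression discriminator on fixed node embeddings against Gaussian noise standing in for the generator's samples, and scores nodes with $Ascore(u) = 1 - D(z_u)$ as in AEGIS. The embeddings, noise distribution, and training loop are all hypothetical simplifications for illustration, not the actual AEGIS architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_discriminator(z_normal, z_fake, epochs=500, lr=0.1):
    """Logistic discriminator D(z) -> P(z is a normal embedding)."""
    Z = np.vstack([z_normal, z_fake])
    y = np.concatenate([np.ones(len(z_normal)), np.zeros(len(z_fake))])
    w, b = np.zeros(Z.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(Z @ w + b)
        grad = p - y                       # dL/dlogit for cross-entropy
        w -= lr * Z.T @ grad / len(y)
        b -= lr * grad.mean()
    return lambda z: sigmoid(z @ w + b)

# Hypothetical setup: normal embeddings cluster around +2,
# "generated" anomalies are standard Gaussian noise.
z_normal = rng.normal(2.0, 0.3, size=(200, 4))
z_fake = rng.normal(0.0, 1.0, size=(200, 4))
D = train_discriminator(z_normal, z_fake)

def anomaly_score(z):
    return 1.0 - D(z)                      # Ascore(u) = 1 - D(z_u)

# An embedding near the normal cluster scores lower than one near the noise.
print(anomaly_score(np.full(4, 2.0)) < anomaly_score(np.zeros(4)))
```

In AEGIS the discriminator is trained jointly with a generator over learned graph embeddings; here the fixed-embedding logistic stand-in only conveys how the discriminator's output is turned into a ranking score.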
The training process is formulated as a mini-max game between the generator and discriminator as follows:\n\\begin{equation}\n \\min_{G}\\max_{D}\\mathbb{E}_{z\\sim Z}[\\log(D(z))] + \\mathbb{E}_{\\tilde{z}\\sim \\tilde{Z}}[\\log(1-D(G(\\tilde{z})))],\n\\end{equation}\nwhere $z$ is the node embeddings, and $\\tilde{z}$ are the generated anomalies.\nAfter training, AEGIS directly learns embedding $z_u$ for a test node $u$, and quantifies its anomaly score with regard to the discriminator's outputs, \\ie the possibility that node is normal. The scoring function is formulated as:\n\\begin{equation} \n Ascore(u) = 1 - D(z_u),\n\\end{equation}\nand the top-k nodes are deemed to be anomalous.", "id": "ec58be64-9072-46de-98e9-29a0e0542d63", "level": "subsection", "origin_cites_number": 1, "parent_id": "dfc7ea98-0750-495a-b3c3-8183220b6042", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "ANOS ND on Static Graphs" ], [ "subsection", "GAN Based Techniques" ] ], "subsections": [], "title": "GAN Based Techniques" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 6286, 6298, 6271 ], "content": "With network representation, graphs are first encoded into a vector space before the anomalies detection procedure takes place.\nAs outlined in Section~\\ref{sec:spg:nr}, numerous studies on ANOS ND in attributed graphs have exploited deep network representation techniques.\nFor instance, Zhang \\etal~ detected abnormal nodes that have attributes significantly deviating from their neighbors through a 3-layer neural network, REMAD, and residual analysis.\nThey explicitly divide the original node attribute matrix into a residual attribute matrix $R$ that captures the abnormal characters of anomalies and a structurally relevant attribute matrix $\\hat{X}$ for network representation learning. 
Both matrices are jointly updated throughout the representation learning process so nearby nodes are encouraged to have similar representations.\nSpecifically, these node embeddings are generated by aggregating neighbor information with each node's own attributes, formulated as:\n\\begin{equation} \n h_{i}^{l} = \\sigma \\left( W^l \\cdot \\text{CONCAT}\\{h_{i}^{l-1},h_{N_i}^{l}\\} + b^2\\right),\n\\end{equation}\nwhere $h_{i}^{l}$ is node $i$'s representation generated by the $l$-th layer ($h_{i}^{0} = \\hat{X}$), $N_i$ contains $i$'s neighbors, $\\sigma()$ is the activation function, $W^{l}$ and $b$ are the trainable variables. Finally, the residual matrix $R$ will contain the abnormal information of each node and the top-k nodes with the largest norms are considered anomalies.\nGiven partial node labels, Liang \\etal~ developed a semi-supervised representation model, SEANO, that incorporates graph structure, node attributes and label information.\nSimilar to REMAD, SEANO also aggregates neighbor information to center nodes, and the node representations are obtained through an embedding layer, formulated as:\n\\begin{equation} \\label{SEANO:embedding}\n z_{i} = \\lambda_{i}h^{l_{1}}(x_{i}) + (1-\\lambda_{i})h^{l_{1}}(\\Bar{x}_{N_{i}}),\n\\end{equation} \nwhere $z_{i}$ is $i$'s representation, $\\lambda_{i}$ is a trainable variable that identifies the weight of $i$'s own attributes ($x_{i}$), $\\Bar{x}_{N_{i}}$ is the average of node $i$'s neighbors' representations, and the function $h^{k}(x_{i}) = \\phi(W^kh^{k-1}(x_i)+b^k)$ maps original node attributes into lower dimensional vectors.\nThen, a supervised component, which takes the representations as input, predicts node labels through a softmax classifier, and an unsupervised component is trained to reconstruct node contexts (node sequences).\nThe context of each node is not only generated through random walks on the graph but also from the labeled nodes that belong to the same class. 
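The gated combination in Eq.~(\ref{SEANO:embedding}) can be sketched minimally as follows; for illustration only, the mapping $h$ is taken as the identity and the gate $\lambda$ is a fixed scalar shared by all nodes rather than a trained per-node variable (both hypothetical simplifications).

```python
import numpy as np

def gated_embedding(X, neighbors, lam=0.5):
    """z_i = lam * x_i + (1 - lam) * mean of node i's neighbors' features."""
    Z = np.zeros_like(X, dtype=float)
    for i, nbrs in enumerate(neighbors):
        nbr_mean = X[list(nbrs)].mean(axis=0) if nbrs else np.zeros(X.shape[1])
        Z[i] = lam * X[i] + (1.0 - lam) * nbr_mean
    return Z

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # toy node attributes
neighbors = [[1, 2], [0], [0, 1]]                    # adjacency lists
Z = gated_embedding(X, neighbors, lam=0.5)
print(Z[0])  # 0.5*[1,0] + 0.5*mean([0,1],[1,1]) = [0.75, 0.5]
```

In SEANO itself, a small learned $\lambda_i$ means node $i$'s own attributes are down-weighted in favor of its context, which is what makes the gate usable as an outlier indicator.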
After training, SEANO interprets $\lambda_{i}$ as the normality score of node $i$, and the top-k nodes with the lowest scores are classed as anomalies.\nLearning node representations via aggregating neighbor information has proven effective for capturing comprehensive information from graph structure and node attributes. \nHowever, Liu \etal~ demonstrated that such an approach can help anomalies aggregate features from regular nodes, making them look normal and leading to sub-optimal detection performance. \nThey identified three concrete issues that should be considered when applying aggregation operations for anomaly detection: \n1) Anomalies are rare objects in a network. Hence, directly aggregating neighborhood information will smooth the difference between the anomalies and normal instances, blurring boundaries between them. \n2) Directly connected nodes may have distinctive features, and the assumption that connected nodes share similar features, which serves as the basis for feature aggregation, no longer holds in this scenario.\n3) Real objects also form multiple types of relations with others, which means aggregation results for different types of relations will be distinctive.\nWith regard to these concerns, their proposed method, GraphConsis, follows a sampling strategy to avoid potential anomalous neighbors when aggregating node features.\nThis method also adopts an attention mechanism to aggregate neighbor information following different links.\nThe learned node representations, therefore, are more robust to anomalies. 
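A minimal sketch of consistency-aware neighbor sampling in this spirit: here a Gaussian kernel on feature distance stands in for GraphConsis's learned consistency score (a hypothetical simplification), and low-consistency neighbors are dropped before a simple mean aggregation.

```python
import numpy as np

def consistent_aggregate(X, neighbors, tau=0.5):
    """Aggregate each node with only its high-consistency neighbors."""
    Z = X.astype(float).copy()
    for i, nbrs in enumerate(neighbors):
        # Gaussian kernel on squared feature distance as a consistency score.
        kept = [j for j in nbrs
                if np.exp(-np.sum((X[i] - X[j]) ** 2)) >= tau]
        if kept:
            Z[i] = (X[i] + X[kept].mean(axis=0)) / 2.0
    return Z

# Node 2 is a far-away (suspicious) neighbor of node 0.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
neighbors = [[1, 2], [0], []]
Z = consistent_aggregate(X, neighbors)
print(Z[0])  # node 2 is filtered out: (x_0 + x_1) / 2 = [0.05, 0.0]
```

The point of the filter is visible in the toy example: without it, node 0's representation would be dragged toward the suspicious neighbor's features.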
As such, GraphConsis takes them as input to train a classifier for predicting labels.\nDou \\etal~ further considered camouflage behaviors of fraudsters in their proposed model CARE-GNN to enhance detection performance.\nAs specified, the camouflages can be categorized as either feature camouflage or relation camouflage.\nRespectively, anomalies either adjust their feature information or form connections with many benign objects to gloss over suspicious information.\nHence, directly employing aggregation will overlook the camouflages and smooth the abnormal patterns of anomalies, eliminating the distinctions between anomalies and normal objects. \nTo alleviate over-smoothness, CARE-GNN also adopts a neighbor sampling strategy, as is the case with GraphConsis, to filter camouflaged anomalies and explores different types of relations formed between users.\nSpecifically, under each relation, Dou \\etal employed a MLP to predict node labels using their features and measure the similarity ($l1$ distance) between each node and its neighbors according to the MLP's output.\nThen, the top-k most similar neighbors are selected for feature aggregation, and CARE-GNN generates each node's representation through a combination of latent representations that are learned under different relations.\nA classifier is eventually trained using the representations to predict the node labels.\nAs can be seen, the performance of these network representation based techniques is decided by their training objectives/loss functions.\nEnhanced detection performance is probable if the loss function is able to separate normal nodes from abnormal nodes reasonably well. 
\nMotivated by this, a more recent work in~ emphasizes the importance of anomaly-aware loss functions.\nIn order to adjust margins for the anomalies, the authors proposed a novel loss function to guide the representation learning process.\nSpecifically, this loss function is designed to find the relative scales between the margins of outlier nodes and normal nodes.\nAn MLP-based classifier is finally trained using the node representations generated by the anomaly-aware loss-guided GNNs and node labels. \nFor unseen nodes, the classifier will label them based on their representations.", "id": "99113f23-1403-4b3d-bae2-8db2bf051152", "level": "subsection", "origin_cites_number": 5, "parent_id": "dfc7ea98-0750-495a-b3c3-8183220b6042", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "ANOS ND on Static Graphs" ], [ "subsection", "Network Representation Based Techniques" ] ], "subsections": [], "title": "Network Representation Based Techniques" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{appendix:node:dynamic}\nTo detect anomalous nodes in dynamic plain graphs, traditional non-deep learning techniques rely heavily on modeling the structural evolving patterns of nodes.\nRepresentative works like~ and~ assume that the evolutionary behaviors of regular nodes (\ie generating or removing connections with others) usually follow stable patterns, and their changes introduce less impact on the graph structure than those of anomalies.\nSpecifically, in~, Wang \etal proposed a novel link prediction method to fit the evolutionary patterns of most nodes such that anomalies can be identified because their observations significantly conflict with the prediction results.\nThey further quantified the impact of anomalous behaviors by assessing the perturbation imposed on the graph adjacency matrix.\nOther traditional works also exploit node/edge attributes and their changes. 
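The prediction-versus-observation idea behind these link-prediction approaches can be sketched minimally. As hypothetical simplifications, the "predicted" snapshot below is just the mean of past adjacency matrices, and a node's anomaly score is its total edge disagreement with that prediction; the actual models fit far richer evolutionary patterns.

```python
import numpy as np

def node_anomaly_scores(past_snapshots, observed):
    """Score nodes by how much their observed edges deviate from prediction."""
    predicted = np.mean(past_snapshots, axis=0)   # crude edge-probability estimate
    disagreement = np.abs(observed - predicted)   # |A_obs - A_pred|
    return disagreement.sum(axis=1)               # per-node score

A = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
snapshots = [A, A, A]                              # a perfectly stable history
A_new = A.copy()
A_new[2, 0] = A_new[0, 2] = 1.0                    # node 2 suddenly links to node 0
scores = node_anomaly_scores(snapshots, A_new)
print(scores)  # the endpoints of the unexpected edge stand out: [1. 0. 1.]
```

Even this crude estimator captures the core assumption stated above: a node whose connections conflict with its own stable history accumulates a high disagreement score.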
For example, Teng \etal~ took node and edge attributes as two different views to describe each node. By encoding both types of information into a shared latent space, their proposed model learns a hypersphere from historical records. When new observations of existing nodes are given, the model distinguishes between benign nodes and anomalies according to their distances to the hypersphere centroid. Points lying outside the hypersphere are spotted as anomalies.\nUnlike embedding techniques, Nguyen \etal~ proposed a non-parametric method to detect anomalous users, tweets, hashtags, and links on social platforms. Specifically, they modeled social platforms as heterogeneous social graphs such that the rich relationships between users, tweets, hashtags and links were effectively captured. Through extensive analysis of the features, such as user registration information, keywords in tweets, the linguistic style of links and the popularity scores of hashtags, anomalous objects are spotted based on their deviating features. This work also uses relationships between individual objects as well as the detected anomalies, and detects groups of abnormal objects.\n\begin{figure}[!t]\n \centerline{\includegraphics[width=0.48\textwidth]{pics/Node/NetWalk.pdf}}\n \caption{The Framework of NetWalk~. The input dynamic graph is modeled as an initial graph with an incoming edge stream. Given the initial graph, a trunk of node sequences is generated using random walks on the graph and the deep autoencoder encodes each node into an embedding space following process (1). The node sequences and node embeddings are updated based on the incoming edge stream following process (2). 
Finally, NetWalk assigns an anomaly score to each node according to its distance to the cluster centroid in the embedding space.}\n \\label{pic:netwalk}\n\\end{figure}", "id": "a37030ed-a13e-4c9b-bdf9-32f760d9d73a", "level": "section", "origin_cites_number": 5, "parent_id": "5020eb09-4f12-4d1a-8dae-076499c8173a", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "ANOS ND on Dynamic Graphs With Traditional Non-Deep Learning Techniques" ] ], "subsections": [], "title": "ANOS ND on Dynamic Graphs With Traditional Non-Deep Learning Techniques" }, { "cite_extract_rate": 0.25, "cites": [ 6293 ], "content": "\\label{appendix:edge}\nTraditional non-deep learning based approaches mainly focus on using temporal signals (\\eg changes in graph structure), and applying specially designed statistical metrics to detect anomalous edges on dynamic graphs~. \nAs a concrete example, Eswaran and Faloutsos~ modeled a dynamic graph as a stream of edges and exploited the graph structure as well as the structure evolving patterns. They identified two signs of anomalous edges: 1) connecting regions of the graph that were disconnected; and 2) connections that appear in bursts. For incoming edges, their model assigns anomaly scores to each edge, and the top-k edges with highest scores are anomalies. Another most recent work by Chang \\etal~ proposed a novel frequency factorization algorithm, aiming to spot anomalous incoming edges based on their likelihood of observed frequency. 
This method merges the advantages of both probabilistic models and matrix factorization for capturing both temporal and structural changes of nodes, and as reported, it only requires constant memory to handle edge streams.", "id": "ef880176-a742-4e90-bb20-77c1c0e4b593", "level": "section", "origin_cites_number": 4, "parent_id": "5020eb09-4f12-4d1a-8dae-076499c8173a", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "ANOS ED With Traditional Non-Deep Learning Techniques" ] ], "subsections": [], "title": "ANOS ED With Traditional Non-Deep Learning Techniques" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{appendix:subgraph}", "id": "6dcc483a-2e1d-4f81-abe8-97ae152ee771", "level": "section", "origin_cites_number": 0, "parent_id": "5020eb09-4f12-4d1a-8dae-076499c8173a", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "ANOS SGD With Traditional Non-Deep Learning Techniques" ] ], "subsections": [ "a68d74d8-bc4c-4876-9c25-1421d3078bb4", "ab11e3f8-fc96-4814-ac49-21cd01dfc491" ], "title": "ANOS SGD With Traditional Non-Deep Learning Techniques" }, { "cite_extract_rate": 0.5, "cites": [ 6277, 6297 ], "content": "\\label{appendix:subgraph:static}\nOne motivation of ANOS SGD in static graphs is that anomalous sub-graphs often exhibit significantly different attribute distributions.\nTherefore, traditional non-deep learning techniques, such as gAnomaly~, AMEN~, and SLICENDICE~, focus on modeling the attribute distributions and measuring the normality of sub-graphs.\nAnother line of investigation is graph residual analysis. The rich attribute information contained in real-world networks provides insight into the relationships formed between objects. 
Thus, the motivation behind several studies to spot anomalous sub-graphs has been to measure the residual between the expected structures and observed structures~.", "id": "a68d74d8-bc4c-4876-9c25-1421d3078bb4", "level": "subsection", "origin_cites_number": 4, "parent_id": "6dcc483a-2e1d-4f81-abe8-97ae152ee771", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "ANOS SGD With Traditional Non-Deep Learning Techniques" ], [ "subsection", "ANOS SGD on Dynamic Graphs" ] ], "subsections": [], "title": "ANOS SGD on Static Graphs" }, { "cite_extract_rate": 0.18181818181818102, "cites": [ 6272, 8997 ], "content": "\\label{appendix:subgraph:dynamic}\nDevising metrics for ANOS SGD has been the subject of many traditional works. \nFor instance, Chen \\etal~ introduced six metrics to identify community-based anomalies, namely: grown community, shrunken community, merged community, split community, born community and vanished community.\nAlthough these hand-crafted features or statistical patterns fit some particular types of existing anomalies well, their abilities to detect unseen and camouflaged anomalies are limited, and applying them directly might introduce a high false negative rate, which is not optimal for applications like financial security. \nOther works, such as SPOTLIGHT by Eswaran \\etal~ and another by Liu \\etal~, explore sudden changes in dynamic graphs and identify anomalous sub-graphs that are related to such changes.\nMotivated by the phenomenon that social spam and fraud groups often form dense temporal sub-graphs in online social networks, plenty of works, including,~, use manually-extracted features and spot anomalous dense sub-graphs that have evolved significantly differently from the rest of the graph. 
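The burst-density intuition behind these works can be illustrated with a minimal sketch. The function below is a toy illustration, not any specific published detector; the fixed-width windowing and z-score threshold are assumptions made purely for exposition. It flags time windows in which a candidate node group forms internal edges at an unusually high rate:

```python
import numpy as np

def burst_windows(edge_times, group, window=10.0, z_thresh=3.0):
    """Flag time windows in which a node group forms internal edges at an
    unusually high rate. Toy illustration of the dense temporal sub-graph
    intuition, not a specific published detector."""
    members = set(group)
    # keep only edges whose endpoints both lie in the candidate group
    times = sorted(t for (u, v, t) in edge_times if u in members and v in members)
    if not times:
        return []
    # histogram the internal edge count per fixed-width time window
    n_bins = int((times[-1] - times[0]) // window) + 1
    counts = np.zeros(n_bins)
    for t in times:
        counts[int((t - times[0]) // window)] += 1
    # z-score each window against the group's own history
    mu, sigma = counts.mean(), counts.std() + 1e-9
    return [i for i, c in enumerate(counts) if (c - mu) / sigma > z_thresh]
```

A group whose within-group edge count in some window deviates strongly from its own history is reported, mirroring the dense temporal sub-graph signal described above.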
\nIn addition to these studies, a large number of works discuss uses of various graph scan statistics for anomalous sub-graph detection, such as the Kulldorff statistic~, Poisson statistic~, elevated mean scan statistic~ and Berk-Jones statistic~. Specifically, Shao \\etal~ proposed a non-parametric method to detect anomalous sub-graphs in dynamic graphs where the network structure is constant, but the node attributes change over time. This approach measures the anomaly score of each sub-graph with regard to the p-values of the nodes it comprises. Sub-graphs with higher scores are more anomalous. Another work, GBGP~, instead, adopts the elevated mean scan statistic to identify nodes that might form anomalous sub-graphs and detects anomalous groups that follow predefined irregular structures.\n\\end{document}", "id": "ab11e3f8-fc96-4814-ac49-21cd01dfc491", "level": "subsection", "origin_cites_number": 11, "parent_id": "6dcc483a-2e1d-4f81-abe8-97ae152ee771", "prefix_titles": [ [ "title", "A Comprehensive Survey on\\\\ Graph Anomaly Detection with Deep Learning" ], [ "section", "ANOS SGD With Traditional Non-Deep Learning Techniques" ], [ "subsection", "ANOS SGD on Dynamic Graphs" ] ], "subsections": [], "title": "ANOS SGD on Dynamic Graphs" } ]
95
[ 6272, 3207, 1671, 553, 3344, 6271, 6277, 6275, 217, 1662, 6274, 6276, 6270, 6273, 6278, 6279, 4539, 6280, 3206, 282, 6281, 218, 6282, 7007, 180, 242, 6283, 6284, 6285, 6290, 6288, 6287, 6289, 6291, 8991, 6286, 2626, 6292, 6293, 8992, 8149, 3953, 8993, 6302, 6297, 3333, 6303, 6296, 3700, 6294, 6304, 6299, 6301, 1420, 6295, 6300, 5645, 6298, 6305, 8994, 6307, 6306, 8995, 6308, 6309, 8996, 6310, 8997 ]
1.136778
[ "Wei Chen", "Yu Liu", "Weiping Wang", "Erwin Bakker", "Theodoros Georgiou", "Paul Fieguth", "Li Liu", "Michael S. Lew" ]
Deep Learning for Instance Retrieval: A Survey
2021
2021-01-27T09:32:58Z
cs.CV
In recent years a vast amount of visual content has been generated and shared from many fields, such as social media platforms, medical imaging, and robotics. This abundance of content creation and sharing has introduced new challenges, particularly that of searching databases for similar content --- Content Based Image Retrieval (CBIR) --- a long-established research area in which improved efficiency and accuracy are needed for real-time retrieval. Artificial intelligence has made progress in CBIR and has significantly facilitated the process of instance search. In this survey we review recent instance retrieval works that are developed based on deep learning algorithms and techniques, with the survey organized by deep feature extraction, feature embedding and aggregation methods, and network fine-tuning strategies. Our survey considers a wide variety of recent methods, whereby we identify milestone work, reveal connections among various methods and present the commonly used benchmarks, evaluation results, common challenges, and propose promising future directions.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "515509a6-319a-4385-b1b5-4418a83d2bc2", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ] ], "subsections": [ "87ccd777-d058-4a76-9575-0c136cb10697", "8a512096-a741-4540-8715-b12b364b3d9e", "1b3ca748-df26-4b7f-85f3-3eb438a055f5", "acffc1fe-bf03-450b-857b-db143905236e", "a6c69e00-d8a1-492b-8514-fab570f109d2", "6d16c13c-e552-486e-8c1d-7e8590b69688" ], "title": "root" }, { "cite_extract_rate": 0.35714285714285704, "cites": [ 2093, 2092, 2091, 2094, 2090, 97, 8490, 2089, 2095, 209 ], "content": "\\label{sec:intro}}\n\\IEEEPARstart{C}{ontent} Based Image Retrieval (CBIR) is the problem of searching for relevant images in an image gallery by analyzing the visual content (colors, textures, shapes, objects \\emph{etc.}), given a query image ,. CBIR has been a longstanding research topic in the fields of computer vision and multimedia ,. With the exponential growth of image data, the development of appropriate information systems that efficiently manage such large image collections is of utmost importance, with image searching being one of the most indispensable techniques. Thus there is a nearly endless potential for applications of CBIR, such as person/vehicle reidentification ,, landmark retrieval , \nremote sensing , online product searching .\nGenerally, CBIR methods can be grouped into two different tasks ,: Category level Image Retrieval (CIR) and Instance level Image Retrieval (IIR). The goal of CIR is to find an arbitrary image representative of the same category as the query (\\eg dogs, cars) ,. 
By contrast, in the IIR task, a query image of a particular instance (\\eg the Eiffel Tower, my neighbor's dog) is given and the goal is to find images containing the same instance that may be captured under different conditions like different imaging distances, viewing angles, backgrounds, illuminations, and weather conditions (reidentifying exemplars of the same instance) ,. The focus of this survey is the IIR task\\footnote{If not further specified, ``image retrieval'', ''IIR'', and ``instance retrieval'' are considered equivalent and will be used interchangeably.}.\nIn many real world applications, IIR is usually to find a desired image requiring a search among thousands, millions, or even billions of images. Hence, searching efficiently is as critical as searching accurately, to which continued efforts have been devoted ,,. To enable accurate and efficient retrieval over a large-scale image collection, \\emph{developing compact yet discriminative feature representations} is at the core of IIR.\n\\iffalse\n\\begin {figure}[!t]\n\\centering\n\\begin{tabular}{c}\n\\includegraphics[width=0.9\\columnwidth]{./Figures/TheProblem.pdf} \\\\\n(\\textit{a}) The CBIR problem \\\\\n\\includegraphics[width=0.9\\columnwidth]{./Figures/InstancevsCategory.pdf} \\\\\n(\\textit{b}) instance-level retrieval versus category-level retrieval\n \\end{tabular}\n\\caption{ Illustration of (a) the CBIR problem, and (b)\nCBIR categorization. 
The images in green frame are retrieved correctly, while the ones in red frame are matched incorrectly.\n}\n\\label{category_vs_instance}\n\\vspace{-1em}\n\\end {figure}\n\\fi\n\\begin{table*}[!ht]\n\\caption{A summary and comparison of the primary surveys in the field of image retrieval.}\\label{SurveyCompare}\n\\vspace{-1em}\n\\centering\n\\setlength{\\tabcolsep}{3.0mm}\\footnotesize\n\\begin{tabular}{!{\\vrule width0.3bp}p{5cm}|p{0.5cm}|p{1.6cm}|p{9.2cm}!{\\vrule width0.3bp}}\n\\Xhline{1.0pt}\n\\multicolumn{1}{|c|}{ \\multirow{2}*{ Title} } & \\multirow{2}*{ $\\!\\!\\!\\!$ Year} & \\multirow{2}*{ Published in} & \\multicolumn{1}{c|}{ \\multirow{2}*{ Main Content } } \\\\ \n\\hline\n\\multirowcell{2}{Content-Based Image Retrieval at the \\\\ End of the Early Years } & \\multirow{2}*{2000} & \\multirowcell{2}{ TPAMI } & \\multirowcell{2}{This paper discusses the steps for image retrieval systems, including image \\\\ processing, feature extraction, user interaction, and similarity evaluation. }\\\\\n\\hline\n\\multirowcell{2}{Image Search from Thousands to Billions \\\\in 20 Years } & \\multirow{2}*{2013} & \\multirowcell{2}{ TOMM} & \\multirowcell{2}{This paper gives a good presentation of image search achievements from \\\\ 1970 to 2013, but the methods are not deep learning-based.} \\\\\n\\hline\n\\multirowcell{2}{Deep Learning for Content-Based Image \\\\ Retrieval: A Comprehensive Study } & \\multirow{2}*{2014} & \\multirowcell{2}{ ACM MM} & \\multirowcell{2}{This paper introduces supervised metric learning methods for fine-tuning \\\\ AlexNet. 
Details of instance-based image retrieval are limited.}\\\\\n\\hline\n\\multirowcell{2}{Semantic Content-based Image Retrieval:\\\\ A Comprehensive Study } & \\multirow{2}*{2015} & \\multirowcell{2}{ JVCI } & \\multirowcell{2}{This paper presents a comprehensive study about CBIR using traditional \\\\ methods; deep learning is introduced as a section with limited details.}\\\\\n\\hline\n\\multirowcell{3}{Socializing the Semantic Gap: A Compa-\\\\rative Survey on Image Tag Assignment, \\\\Refinement, and Retrieval } & \\multirowcell{1}{2016} & \\multirowcell{3}{CSUR} & \\multirowcell{3}{A taxonomy is introduced to structure the growing literature of image \\\\ retrieval. Deep learning methods for feature learning is introduced as \\\\future work.} \\\\\n\\hline\n\\multirowcell{2}{Recent Advance in Content based Image \\\\ Retrieval: A Literature Survey } & \\multirow{2}*{2017} & \\multirowcell{2}{ arXiv } & \\multirowcell{2}{This survey presents image retrieval from 2003 to 2016. Neural networks \\\\ are introduced in a section and mainly discussed as a future direction.}\\\\\n\\hline\n\\multirowcell{3}{Information Fusion in Content-based \\\\Image Retrieval: A Comprehensive \\\\ Overview } & \\multirowcell{1}{2017} & \\multirowcell{3}{Information\\\\Fusion} & \\multirowcell{3}{This paper presents information fusion strategies in CBIR. \\\\ Deep convolutional networks for feature learning are introduced \\\\ briefly but not covered thoroughly.} \\\\\n\\hline\n\\multirowcell{2} { $\\!\\!\\!$ A Survey on Learning to Hash } & \\multirow{2}*{2018} & \\multirowcell{2}{ TPAMI} & \\multirowcell{2}{This paper focuses on hash learning algorithms and introduces the \\\\ similarity-preserving methods and discusses their relationships. 
} \\\\\n\\hline\n\\multirowcell{2}{$\\!\\!\\!$ SIFT Meets CNN: A Decade Survey of \\\\ Instance Retrieval } & \\multirow{2}*{2018} & \\multirowcell{2}{ TPAMI} & \\multirowcell{2}{This paper presents a comprehensive review of instance retrieval based on \\\\SIFT and CNN methods.}\\\\\n\\hline\n\\multirowcell{3}{ \\bf Deep Learning for Instance \\\\ \\bf Retrieval: A Survey } & \\multirowcell{1}{\\bf 2021} & \\multirowcell{3}{ \\bf Ours} & \\multirowcell{3}{\\bf Our survey focuses on deep learning methods. We expand the review \\\\ \\bf with indepth details on IIR, including methods of feature extraction, \\\\ \\bf feature embedding and aggregation, and network fine-tuning. }\n \\\\\n\\Xhline{1.0pt}\n\\end{tabular}\n\\end{table*}\nDuring the past two decades, startling progress has been witnessed in image representation which mainly consists of two important periods, \\ie feature engineering and deep learning. In the feature engineering era, the field was dominated by various milestone handcrafted image representations like SIFT and Bag of visual Words (BoW) . The deep learning era was reignited in 2012 when a Deep Convolutional Neural Network (DCNN) referred as ``AlexNet'' won the first place in the ImageNet classification contest with a breakthrough reduction in classification error rate. Since then, the dominating role of SIFT like local descriptors has been replaced by data driven Deep Neural Networks (DNNs) which can learn powerful feature representations with multiple levels of abstraction directly from data. During the past decade, DNNs have set the state of the art in various classical computer vision tasks, including image classification~,, object detection~, semantic segmentation , \nand image retrieval~.\nGiven this period of rapid evolution, the goal of this paper is to provide a comprehensive survey of the recent achievements in IIR. 
In comparison with existing excellent surveys on traditional image retrieval ,,,, as contrasted in Table~\\ref{SurveyCompare}, our focus in this paper is reviewing deep learning based methods for IIR, particularly on issues of retrieval accuracy and efficiency.", "id": "87ccd777-d058-4a76-9575-0c136cb10697", "level": "section", "origin_cites_number": 28, "parent_id": "515509a6-319a-4385-b1b5-4418a83d2bc2", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "Introduction" ] ], "subsections": [ "d77f2bd6-4077-4eb5-ba4e-7c6288149714", "846c213c-7851-4f7e-b5f1-17d26ed0cdb1" ], "title": "Introduction" }, { "cite_extract_rate": 0.5789473684210521, "cites": [ 7543, 2101, 514, 2096, 2098, 2102, 2105, 2108, 2094, 2104, 2103, 8524, 7544, 2109, 2099, 2097, 2106, 97, 2107, 629, 2100, 7545 ], "content": "\\label{Summary_of_Progress}\nAfter the highly successful image classification implementation of AlexNet~, significant exploration of DCNNs for instance retrieval tasks was undertaken and representative methods are shown in Figure \\ref{fig1}. Building on these methods, more recent progress for IIR can be achieved from the perspectives of off-the-shelf models and fine-tuned models, which form the basis for this survey.\nOff-the-shelf models, based on DCNNs with fixed parameters ,,, extract features at image scales or patch scales, which correspond to single-pass and multiple-pass schemes, respectively. These methods focus on effective feature use, for which researchers have proposed embedding and aggregation methods, such as R-MAC , CroW , and SPoC to promote the discriminativity of the extracted features. Fine-tuned models, based on DCNNs in which parameters are updated , are more popular since deep networks themselves have been investigated extensively. To learn better retrieval features, researchers have proposed to improve the network architectures and/or update the pre-stored parameters . 
\nThis survey will explore recent progress in detail in the context of the three following themes:\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=\\textwidth]{./Figures/RepresentativeMethods.pdf} \n\\vspace{-1.5em}\n\\caption{Representative methods in IIR. Off-the-shelf models have model parameters which are not further updated or tuned when extracting retrieval features. For single-pass schemes, the key step is the feature embedding and aggregation to promote the discriminativity of the extracted image-level activations ,,,, whereas for multiple-pass schemes the goal is to extract instance features at region scales and eliminate image clutter as much as possible ,,,. In contrast, for fine-tuned models, the model parameters are tuned towards the retrieval task and address the issue of domain shifts. For supervised fine-tuning, the key step lies in the design of objective functions and sampling strategies ,,,,, while the success of unsupervised fine-tuning is to mine the relevance among training samples ,,,,. See Sections \\ref{Retrieval_with_Off_the_Shelf_DCNN_Models} and \\ref{Retrieval_via_Learning_DCNN_Representations} for details.}\n\\label{fig1}\n\\end{figure*}\n\\medskip\n\\noindent (1) \\emph{Deep Feature Extraction} \\hfill (Section \\ref{Deep_Feature_Extraction})\n\\smallskip\n\\noindent \nOne key step in IIR is to make the descriptors as semantic-aware , as possible. For this, some recent works focus on the input data of DCNNs, so that instance features can be extracted from the whole image, \\eg CroW , VLAD-CNN , or from image patches, \\eg MOP-CNN , FAemb . For instance, evaluated on the Holidays dataset , the patch-level input scheme can improve mAP by 8.29\\% compared to the results (70.53\\%) achieved using image-level input . Others focus on exploring different feature extractors, \\eg one layer of a given DCNN, to get the output activations. 
Initially, fully-connected layers were usually chosen to extract features ,, and then convolutional layers were popularly used ,. Afterwards, some works leverage the complementarity of different extractors to explore layer-level fusion, such as MoF , and model-level fusion, such as DeepIndex , to promote the retrieval performance.\n\\medskip\n\\noindent \n(2) \\emph{Feature Embedding and Aggregation} \\hfill (Section \\ref{Deep_Feature_Enhancement})\n\\smallskip\n\\noindent \nRecent works revisit the classical embedding and aggregation methods and apply them to deep features. Most works prefer mapping individual vectors from convolutional layers ,, and then aggregating them into a global feature. The mapping process can be realized by using a pre-trained codebook (\\ie built separately), such as VLAD-CNN , DeepIndex , or learned as parameters during training (built simultaneously), such as NetVLAD , GeM-DSM . Some works aggregate local features into a global one by direct pooling or sophisticated pooling-based methods without the aggregation operation, such as R-MAC .\n\\medskip\n\\noindent \n(3) \\emph{Network Fine-tuning for Learning Representations} \\hfill (Section \\ref{Retrieval_via_Learning_DCNN_Representations})\n\\smallskip\n\\noindent \nDCNNs pretrained on source datasets for image classification are influenced by domain shifts when performing retrieval tasks on new datasets. Thus, it is necessary to fine-tune deep networks to the specific domain , by using supervised or unsupervised fine-tuning methods. As depicted in Figure \\ref{fig1}, recent supervised fine-tuning methods focus on designing objective functions (\\eg Listwise loss ) and sampling strategies, such as NetVLAD , Triplet Network . Unsupervised methods focus on mining the relevance among training samples by using clustering, such as SfM-GeM , or manifold learning, such as AILIR . 
Recently, convolution-free models that only rely on transformer layers have shown competitive performance and been used as a powerful alternative to DCNNs, such as IRT , reranking Transformers .", "id": "d77f2bd6-4077-4eb5-ba4e-7c6288149714", "level": "subsection", "origin_cites_number": 38, "parent_id": "87ccd777-d058-4a76-9575-0c136cb10697", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "Introduction" ], [ "subsection", "Summary of Progress since 2012" ] ], "subsections": [], "title": "Summary of Progress since 2012" }, { "cite_extract_rate": 0.75, "cites": [ 2095, 2110, 2112, 2111, 2109, 2105 ], "content": "\\label{Keychallenges}\nThe goal of each of the preceding three themes is to address the competing objectives of \\emph{accuracy} and \\emph{efficiency}, with both objectives continuing to present challenges:\n\\textbf{A) Accuracy related challenges} depend on the input data, the feature extractor, and the way in which the extracted features are processed:\n\\begin{itemize}\n\\item Invariance challenge: The instance in an image may be rotated, translated, or scaled differently, so the final features are affected by these transformations and retrieval accuracy may be degraded . It is necessary to incorporate invariance into the IIR pipeline ,.\n\\item Distraction challenge: IIR systems may need to focus on only a certain object, or even only a small portion of it. DCNNs may be affected by image clutter or background, so multiple-pass schemes have been examined where region proposals are studied before feature extraction.\n\\item Discriminativity challenge: Deep features for IIR need to be as discriminative as possible to distinguish instances with small differences, leading to many explorations in feature processing. 
These include feature embedding and aggregation methods, to promote feature discriminativity; and attention mechanisms, to highlight the most relevant regions within the extracted features or to enable deep networks to focus on regions of interest. \n\\item Fine-tune challenge: DCNNs can be fine-tuned as powerful extractors to capture fine semantic distinctions among instances. These explorations have offered improved accuracy; however, the area remains a major challenge. \n\\end{itemize}\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=0.95\\textwidth]{./Figures/PipelineofImageRetrieval.pdf}\n \\vspace{-0.5em}\n\\caption{General framework of IIR, which includes feature extraction on images or image patches, followed by feature embedding and aggregation methods to improve feature discriminativity. Feature matching can be performed by using global features (initial filtering) or by using local features to rerank the top-ranked images matched by global features. } \n\\label{PipelineofImageRetrieval}\n\\end{figure*}\n\\textbf{B) Efficiency related challenges} are important, especially for large-scale datasets . Retrieval systems should respond quickly when given a query image. Deep features are high-dimensional and contain semantic-aware information to support higher accuracy, yet often at the expense of efficiency. \nOn the one hand, efficiency is related to the \nformat of features, \\ie real valued or binary. Hash codes have advantages in storage and searching ,, but for hashing methods one needs to carefully consider the loss function design ,, to obtain optimal codes for high retrieval accuracy. \nOn the other hand, efficiency is also related to the mechanism of feature matching. For example, instead of time-consuming cross-matching between local features, one can choose to use global features to perform an initial ranking and then a post-step re-ranking via the features of top-ranked images. 
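As an aside on the storage side of this trade-off, LSH-style sign hashing gives a minimal sketch of how real-valued descriptors can be compressed into binary codes compared via Hamming distance. This is a generic illustration under the stated assumptions (random Gaussian projections), not one of the learned hashing methods surveyed here:

```python
import numpy as np

def hash_codes(feats, n_bits=64, seed=0):
    """LSH-style sign hashing: project real-valued descriptors through a
    random Gaussian matrix and keep only the sign pattern. A generic
    sketch of binary codes, not a learned hashing method."""
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((feats.shape[1], n_bits))
    return (feats @ proj > 0).astype(np.uint8)

def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return int(np.count_nonzero(a != b))
```

Each 64-bit code occupies a handful of bytes regardless of the original descriptor dimension, and Hamming distances reduce to bitwise operations, which is where the storage and search advantages of hash codes come from.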
\n\\vspace{-1em}", "id": "846c213c-7851-4f7e-b5f1-17d26ed0cdb1", "level": "subsection", "origin_cites_number": 8, "parent_id": "87ccd777-d058-4a76-9575-0c136cb10697", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "Introduction" ], [ "subsection", "Key Challenges" ] ], "subsections": [], "title": "Key Challenges" }, { "cite_extract_rate": 0.36842105263157804, "cites": [ 2093, 2113, 2094, 2115, 2114, 2110, 7545 ], "content": "\\label{General_Framework_of_IIR}\nFigure \\ref{PipelineofImageRetrieval} offers an overview of the general framework for deep-learning-based IIR, involving three main stages. \n\\medskip\n\\noindent \n\\textbf{1) Deep feature extraction}: \\hfill (Section \\ref{Deep_Feature_Extraction})\n\\smallskip\n\\noindent\nFeature extraction is the first step of IIR and can be realized in a single-pass or multiple-pass way. Single-pass methods take as input the whole image, whereas multiple-pass methods depend on region extraction, as depicted in Figure \\ref{MultiplePass}.\nThe activations from fully-connected layers of a given DCNN can be used as retrieval features whether based on a whole image or on patches. The tensors from convolutional layers can be used when further processed by sophisticated pooling, as shown in Figure \\ref{PipelineofImageRetrieval}. Different layers of the same deep network can be combined as a more powerful extractor \n,. Furthermore, it is possible to fuse the activations from layers of different models \n,. Feature extraction produces vanilla network activations (\\ie 3D tensors or a single vector); in most cases, these activations need to be further processed. \n\\medskip\n\\noindent \n\\textbf{2) Embedding and aggregation}: \\hfill (Section \\ref{Deep_Feature_Enhancement})\n\\smallskip\n\\noindent\nFeature embedding and aggregation are two essential steps to produce global or local features. 
Feature embedding maps individual local features into a higher-dimensional space, whereas feature aggregation summarizes the multiple mapped vectors or all individual features into a global vector. Global features may come from pooling convolutional feature maps directly , or using some sophisticated weighting methods , (\\ie both without feature embedding). Feature embedding with a pre-generated codebook can also be applied to encode individual convolutional vectors, which are then aggregated ,,. For local features, the well-embedded representations for all regions of interest are stored individually and used for cross-matching in the reranking stage without aggregation.\n\\medskip\n\\noindent \n\\textbf{3) Feature matching}:\n\\smallskip\n\\noindent\nFeature matching is a process to measure the feature similarity between images and then return a ranked list. Global matches can be computed efficiently via measures such as Euclidean distance. For local features ,, the image similarity is usually evaluated by summarizing the similarities across local features, using classical RANSAC or more recent variations ,. Storing local features separately and then estimating their similarity individually leads to additional memory and search costs ,, and therefore in most cases local features are used to re-rank the initial list of images matched by global features ,,,.\nThe three preceding stages of IIR rely on DCNNs as backbone architectures. In almost all cases, pre-stored parameters in these backbones can be fine-tuned (Section \\ref{Retrieval_via_Learning_DCNN_Representations}) to be better suited for instance retrieval and to contribute to better performance. 
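The initial-filtering step with global features can be sketched in a few lines. This is a generic illustration of cosine-similarity ranking; the descriptor choice and any local-feature re-ranking step are left out:

```python
import numpy as np

def rank_gallery(query, gallery):
    """Initial filtering with global features: L2-normalise, score every
    gallery descriptor by inner product (cosine similarity), and return
    gallery indices from best to worst match. Local-feature re-ranking
    would then touch only the top of this list."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = g @ q                # one dot product per gallery image
    return np.argsort(-scores)    # descending similarity
```

Because the comparison is a single matrix-vector product over unit-norm descriptors, this step scales to large galleries and is why global features are used for the first ranking pass.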
\nThe detailed categorization of the material of the following sections is shown in Figure~\\ref{FourAspectsofprogress}.", "id": "8a512096-a741-4540-8715-b12b364b3d9e", "level": "section", "origin_cites_number": 19, "parent_id": "515509a6-319a-4385-b1b5-4418a83d2bc2", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "General Framework of IIR" ] ], "subsections": [], "title": "General Framework of IIR" }, { "cite_extract_rate": 0.5769230769230761, "cites": [ 2118, 514, 2096, 2113, 2105, 2094, 2103, 7544, 2119, 2097, 2109, 2116, 629, 7545, 2117 ], "content": "\\label{Retrieval_with_Off_the_Shelf_DCNN_Models}\n\\textcolor{black}{Because of their size, DCNNs need to be trained, initially for classification tasks, on exceptionally large-scale datasets and be able to recognize images from different classes.} One possible scheme, then, is that DCNNs effectively trained for classification directly serve as off-the-shelf feature detectors for the image retrieval task, the topic of this survey. That is, one can propose to undertake image retrieval on the basis of DCNNs, trained for classification and with their pre-trained parameters frozen. 
\n\\begin{figure}[!htbp]\n\\centering\n\\footnotesize\n\\begin{tikzpicture}[xscale=0.8, yscale=0.4]\n\\draw [thick, -] (0, 16.5) -- (0, -4); \\node [right] at (-0.5, 17) {\\bf \\em Deep Learning for Instance-level Image Retrieval (overall survey)};\n\\draw [thick, -] (0, 16) -- (0.5, 16);\\node [right] at (0.5, 16) {Retrieval with Off-the-Shelf DCNN Models (Section \\ref{Retrieval_with_Off_the_Shelf_DCNN_Models})};\n\\draw [thick, -] (1, 15.5) -- (1, 5);\n\\draw [thick, -] (1, 15) -- (1.5, 15);\\node [right] at (1.5, 15) {\\bf Deep Feature Extraction (Section \\ref{Deep_Feature_Extraction})};\n\\draw [thick, -] (2, 14.5) -- (2, 8); \n\\draw [thick, -] (2, 14) -- (2.5, 14);\\node [right] at (2.5, 14) {Network Feedforward Scheme (Section \\ref{Network_Feedforward_Scheme})};\n\\draw [thick, -] (3, 13.5) -- (3, 12); \n\\draw [thick, -] (3, 13) -- (3.5, 13);\\node [right] at (3.5, 13) {Single Feedforward Pass: MAC , R-MAC };\n\\draw [thick, -] (3, 12) -- (3.5, 12);\\node [right] at (3.5, 12) {Multiple Feedforward Pass: SPM , RPNs };\n\\draw [thick, -] (2, 11) -- (2.5, 11);\\node [right] at (2.5, 11) {Deep Feature Selection (Section \\ref{Deep_Feature_Selection}) };\n\\draw [thick, -] (3, 10.5) -- (3, 9); \n\\draw [thick, -] (3.0, 10) -- (3.5, 10);\\node [right] at (3.5, 10) { Fully-connected Layer: Neural codes };\n\\draw [thick, -] (3.0, 9) -- (3.5, 9);\\node [right] at (3.5, 9) { Convolutional Layer: SPoC , CroW };\n\\draw [thick, -] (2.0, 8) -- (2.5, 8);\\node [right] at (2.5, 8) {Feature Fusion Strategy (Section \\ref{Feature_Fusion_Strategy}) };\n\\draw [thick, -] (3.0, 7.5) -- (3.0, 6);\n\\draw [thick, -] (3.0, 7) -- (3.5, 7);\\node [right] at (3.5, 7) { Layer-level Fusion: MoF , MOP };\n\\draw [thick, -] (3.0, 6) -- (3.5, 6);\\node [right] at (3.5, 6) { Model-level Fusion: ConvNet fusion };\n\\draw [thick, -] (1, 5) -- (1.5, 5);\\node [right] at (1.5, 5) {\\bf Feature Embedding and Aggregation (Section \\ref{Deep_Feature_Enhancement}) };\n\\draw [thick, -] (2, 
4.5) -- (2, -1); \n\\draw [thick, -] (2.0, 4) -- (2.5, 4);\\node [right] at (2.5, 4) { Matching with Global Features (Section \\ref{Matching_with_Global_Feature})};\n\\draw [thick, -] (2.0, 3) -- (2.5, 3);\\node [right] at (2.5, 3) {Matching with Local Features (Section \\ref{Matching_with_Local_Feature})};\n\\draw [thick, -] (2.0, 2) -- (2.5, 2);\\node [right] at (2.5, 2) { Attention Mechanism (Section \\ref{Attention_Mechanism}) };\n\\draw [thick, -] (3.0, 1.5) -- (3.0, 0);\n\\draw [thick, -] (3.0, 1) -- (3.5, 1);\\node [right] at (3.5, 1) { Non-parameteric: CroW , SPoC , SWVF };\n\\draw [thick, -] (3.0, 0) -- (3.5, 0);\\node [right] at (3.5, 0) { Parameteric: CRN , DeepFixNet+SAM , };\n\\draw [thick, -] (2.0, -1) -- (2.5, -1);\\node [right] at (2.5, -1) {Deep Hash Embedding (Section \\ref{Deep_Hash_Embedding}) };\n\\draw [thick, -] (3.0, -1.5) -- (3.0, -3);\n\\draw [thick, -] (3.0, -2) -- (3.5, -2);\\node [right] at (3.5, -2) { Supervised Hashing: SSDH };\n\\draw [thick, -] (3.0, -3) -- (3.5, -3);\\node [right] at (3.5, -3) { Unsupervised Hashing: DeepBit , DSTH };\n\\draw [thick, -] (0, -4) -- (0.5, -4);\\node [right] at (0.5, -4) {\\bf Fine-Tuning for Learning DCNN Representations (Section \\ref{Retrieval_via_Learning_DCNN_Representations})};\n\\draw [thick, -] (1, -4.5) -- (1, -11);\n\\draw [thick, -] (1, -5) -- (1.5, -5);\\node [right] at (1.5, -5) {Supervised Fine-tuning (Section \\ref{Supervised_Fine-tuning}) };\n\\draw [thick, -] (2, -5.5) -- (2, -7); \n\\draw [thick, -] (2.0, -6) -- (2.5, -6);\\node [right] at (2.5, -6) {Fine-tuning via classification loss (Section \\ref{Classification-based_Fine-tuning}) }; \n\\draw [thick, -] (2.0, -7) -- (2.5, -7);\\node [right] at (2.5, -7) {Fine-tuning via pairwise ranking loss (Section \\ref{Verification_based_Learning}) };\n\\draw [thick, -] (3.0, -7.5) -- (3.0, -10);\n\\draw [thick, -] (3.0, -8) -- (3.5, -8);\\node [right] at (3.5, -8) { Transformation Matrix: Non-metric };\n\\draw [thick, -] (3.0, -9) -- (3.5, 
-9);\\node [right] at (3.5, -9) { Siamese Networks };\n\\draw [thick, -] (3.0, -10) -- (3.5, -10);\\node [right] at (3.5, -10) { Triplet Networks };\n\\draw [thick, -] (1.0, -11) -- (1.5, -11);\\node [right] at (1.5, -11) { Unsupervised Fine-tuning (Section \\ref{Unsupervised_Fine-tuning})};\n\\draw [thick, -] (2, -11.5) -- (2, -13); \n\\draw [thick, -] (2.0, -12) -- (2.5, -12);\\node [right] at (2.5, -12) { Manifold Learning Samples Mining: Diffusion Net };\n\\draw [thick, -] (2.0, -13) -- (2.5, -13);\\node [right] at (2.5, -13) {Mining Samples by Clustering: SfM-GeM , };\n\\end{tikzpicture}\n\\vspace{-2em}\n\\caption{This survey is organized around three key themes in instance-level image retrieval, shown in bold.}\n\\label{FourAspectsofprogress}\n\\end{figure}\nThere are limitations to this approach, most fundamentally that there is a model-transfer or domain-shift challenge between tasks~,,, meaning that models trained for classification do not necessarily extract features well suited to image retrieval. In particular, a classification decision can be successful as long as features remain within classification boundaries, however features from such models may show insufficient capacity for retrieval where feature matching itself is more important than classification. 
This section will survey the strategies which have been developed to improve the quality of feature representations, particularly based on feature extraction / fusion (Section \\ref{Deep_Feature_Extraction}) and feature embedding / aggregation (Section \\ref{Deep_Feature_Enhancement}).", "id": "1b3ca748-df26-4b7f-85f3-3eb438a055f5", "level": "section", "origin_cites_number": 26, "parent_id": "515509a6-319a-4385-b1b5-4418a83d2bc2", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "Retrieval with Off-the-Shelf DCNN Models" ] ], "subsections": [ "f1ce1975-10d7-4d60-8e5d-320750dbf247", "f6eab6de-c628-4a89-bfb0-43b4f87b1151" ], "title": "Retrieval with Off-the-Shelf DCNN Models" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Deep_Feature_Extraction}\nFeature extraction is about the mechanism by which retrieval features can be extracted from off-the-shelf DCNNs. \nFor an input image $x$ and a network $f(\\cdot; \\bm{\\theta})$, we denote its features from a convolutional layer as $ \\boldsymbol{A} := f_{conv}(x) \\in \\mathbb{R}^{H \\times W \\times C}$ with height $H$, width $W$, and channels $C$, while that from a fully-connected layer as $ \\boldsymbol{B} := f_{fc}(x) \\in \\mathbb{R}^{D \\times 1}$ with dimension $D$.", "id": "f1ce1975-10d7-4d60-8e5d-320750dbf247", "level": "subsection", "origin_cites_number": 0, "parent_id": "1b3ca748-df26-4b7f-85f3-3eb438a055f5", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "Retrieval with Off-the-Shelf DCNN Models" ], [ "subsection", " Deep Feature Extraction" ] ], "subsections": [ "d9000dca-9ea2-4e50-b3cd-ac34b0c46c16", "4be11049-ece2-473a-babd-9ac8ce82a05f", "f5333f99-42b6-4668-a652-9777bc115d47" ], "title": " Deep Feature Extraction" }, { "cite_extract_rate": 0.518518518518518, "cites": [ 7543, 2105, 8525, 7546, 2094, 2103, 2109, 2120, 2121, 2114, 2107, 2112, 209, 7545 ], "content": 
"\\label{Network_Feedforward_Scheme}\nNetwork feedforward schemes focus on how images are fed into a DCNN, which include single-pass and multiple-pass schemes.\n\\begin{flushleft}\n\\emph{a. Single Feedforward Pass Methods}. \n\\end{flushleft}\nSingle feedforward pass methods take the whole image and feed it into an off-the-shelf model to extract features. The approach is relatively efficient since the input image is fed only once. For these methods, both the fully-connected layer and last convolutional layer can be used as feature extractors .\nEarly network-based IIR work focused on leveraging DCNNs as a fixed extractor to obtain global features, especially based on the fully-connected layers ,, requiring close to zero engineering effort. However, extracting features in this way affects retrieval accuracy since the extracted features may include background information or activations for irrelevant objects.\nThe key to single-pass schemes is to embed and aggregate features to improve their discriminativity, such that features of two related images (\\ie including the same object) are more similar than those of two unrelated images . For this purpose, it is possible to first map the features $ \\boldsymbol{B} $ into a high-dimensional space and then to aggregate them into a final global feature . Another direction is to treat regions in convolutional features $ \\boldsymbol{A} $ as different sub-vectors, such that a combination of sub-vectors of all feature maps is used to represent the input image ,.\n\\begin{flushleft}\n\\emph{b. Multiple Feedforward Pass Methods}. \n\\end{flushleft}\nCompared to single-pass schemes, multiple-pass methods are more time-consuming because several patches are generated and then fed into the network. \n\\textcolor{black}{\nNevertheless, multiple-pass schemes are more helpful for addressing the ``\\textit{invariance challenges}'' and ``\\textit{distraction challenges}'' in Section \\ref{Keychallenges}. 
Local patches at multiple scales become more robust to image translation, scaling and rotation ,. Also, these patches help to filter out irrelevant background information. \n}\nThe representations are usually produced in two stages: patch detection and patch description. Multi-scale image patches are obtained using sliding windows , or spatial pyramid model (SPM) ,,, as shown in Figure \\ref{MultiplePass}. For example, Zheng \\etal partition an image by using SPM and extract features at increasing scales, thus enabling the integration of global, regional, and local contextual information.\nPatch detection methods lack retrieval efficiency since irrelevant patches are also detected . For example, Cao \\etal propose to merge image patches into larger regions with different hyper-parameters, where the hyper-parameter selection is viewed as an optimization problem to maximize the similarity between query and candidate features.\n\\begin {figure}[!t]\n\\centering\n { \n \\includegraphics[width=\\columnwidth]{./Figures/MultiplePass.pdf} \n }\n \\vspace{-2em}\n\\caption{Image patch generation schemes: (a) Sliding windows ,; (b) Spatial pyramid modeling ; (c) Dense sampling ,; (d) Region proposals from region proposal networks ,.}\n\\vspace{-1em}\n\\label{MultiplePass}\n\\end {figure}\nInstead of generating multi-scale image patches randomly or densely, region proposal methods introduce a degree of purpose. Region proposals can be generated using object detectors, such as selective search , edge boxes ,, and BING . For example, Yu \\etal propose fuzzy object matching (FOM) for instance search in which the fuzzy objects are generated from 300 object proposals and then clustered to filter out overlapping proposals. Region proposals can also be learned using, for example, region proposal networks (RPNs) , and convolutional kernel networks (CKNs) ; these networks can then be applied in end-to-end fine-tuning for similarity learning . 
\\textcolor{black}{This usually requires that the datasets provide well-localized bounding boxes as supervision, \\eg the datasets INSTRE , Oxford-5k , Paris-6k , GLD-v2 variant . Also, in the off-the-shelf scenarios, cropping the query images with the bounding boxes and feeding the crops into the DCNNs has been shown to provide better retrieval performance since only the information relevant to the instance is extracted ,.\n}", "id": "d9000dca-9ea2-4e50-b3cd-ac34b0c46c16", "level": "subsubsection", "origin_cites_number": 27, "parent_id": "f1ce1975-10d7-4d60-8e5d-320750dbf247", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "Retrieval with Off-the-Shelf DCNN Models" ], [ "subsection", " Deep Feature Extraction" ], [ "subsubsection", " Network Feedforward Scheme" ] ], "subsections": [], "title": " Network Feedforward Scheme" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 7547, 7543, 2122, 2102, 2113, 2105, 2108, 2094, 2104, 2109, 2110, 2112, 7548, 7545 ], "content": "\\label{Deep_Feature_Selection}\nFeature selection decides the receptive field of the extracted features, \\ie global-level from fully-connected layers and regional-level from convolutional layers.\n\\begin{flushleft}\n\\emph{a. Extracted from Fully-connected Layers}\n\\end{flushleft}\nIt is straightforward to select a fully-connected layer as a global feature extractor ,,. With PCA dimensionality reduction and normalization~, image similarity can be measured. Extracting features $ \\boldsymbol{B} $ from a fully-connected layer leads to two obvious limitations for IIR: including irrelevant information, and a lack of local geometric invariance .\nWith regard to the first limitation, image-level global descriptors may include irrelevant patterns or background clutter, especially when a target instance is only a small portion of an image. 
It may then be more reasonable to extract region-level features at finer scales, \\ie using multiple passes~,,.\nFor the second limitation, \nan alternative is to extract multi-scale features on a convolutional layer ,. Further, the lack of geometric invariance makes global features incompatible with techniques such as spatial verification and re-ranking. Several methods then choose to leverage intermediate convolutional layers ,,,.\n\\begin{flushleft}\n\\emph{b. Extracted from Convolutional Layers}\n\\end{flushleft}\nThe neurons in a convolutional layer are connected only to a local region of the input image, and this smaller receptive field ensures that the produced features $ \\boldsymbol{A} $, usually from the last layer, preserve more local structural information , and are more robust to image transformations \\textcolor{black}{ thereby addressing the ``\\textit{invariance challenge}''. For instance, Razavian \\etal extract multi-scale features on the last convolutional layer and Mor{\\`e}re \\etal incorporate a series of nested pooling layers into a CNN. Both of them provide higher feature invariance. Thus, many image retrieval methods use convolutional layers as feature extractors~ ,,,.}\nSum/average and max pooling are two simple aggregation methods to produce global features . For a pooled layer, the last convolutional layer usually yields superior accuracy over shallower convolutional layers or later fully-connected layers . There is no other operation on the feature maps before pooling, so we illustrate these methods as ``direct pooling'' in Figure~\\ref{PipelineofImageRetrieval}.\n\\begin {figure}[!t]\n\\centering\n { \n \\includegraphics[width=\\columnwidth]{./Figures/SinglePass.pdf} \n }\n \\vspace{-1.5em}\n\\caption{Representative methods in single-pass methods, focusing on convolutional feature tensor $ \\boldsymbol{A} $. 
We denote the entry in $ \\boldsymbol{A} $ corresponding to channel $ c $, at spatial location ($i, j$) as $ A_{ijc} $:\nMAC , R-MAC , SPoC with the per-channel Gaussian weighting $\\alpha^{\\prime}_{ij} A_{ijc}$ where $\\alpha^{\\prime}_{ij} = \\exp\\left\\{-{\\textstyle\\frac{\\left(i-\\frac H2\\right)^2+\\left(j-\\frac W2\\right)^2}{2\\sigma^2}}\\right\\} $ , CroW with $ \\alpha^{\\prime\\prime} $ computed by summing all $C$ feature maps at location ($i, j$) and $ \\beta $ computed by summing the $H\\times W$-array at each feature map $c$ , GeM with a channel-wise power operation , and CAM+CroW by performing $ M_{ij}^{(l)}=\\sum_{c=1}^C\\omega_{lc}A_{ijc} $ where $\\omega_{lc}$ are weights activated by the $l$-th class . \n}\n\\vspace{-1em}\n\\label{SinglePass}\n\\end {figure}\nInstead of direct pooling, many sophisticated aggregation methods have been explored, such as channel-wise or spatial-wise feature weighting on the convolutional feature maps ,,. These aggregation methods aim to highlight feature importance or reduce the undesirable influence of bursty descriptors of some regions ,. For clarity, we illustrate the representative strategies in Figure \\ref{SinglePass}. \nNote that these feature aggregation methods are usually performed before channel-wise sum/max pooling and do not embed features into a higher dimensional space.\nOne rationale behind using convolutional features is that each such vector can act as a ``dense SIFT'' feature since each vector corresponds to a region in the input image. Inspired by this perception, many works leverage embedding methods (\\eg BoW) used for SIFT features on the regional feature vectors and then aggregate them (\\eg by sum pooling) into a global descriptor. 
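The direct-pooling and weighted-pooling aggregations summarized above can be sketched in NumPy. The tensor layout follows the H x W x C notation of this section; the GeM power p=3, the SPoC Gaussian bandwidth, and the function names are illustrative assumptions, not the exact published settings.

```python
import numpy as np

def mac(A):
    # MAC: channel-wise max over spatial locations -> C-dim global feature
    return A.max(axis=(0, 1))

def spoc(A, sigma=None):
    # SPoC: sum pooling with a Gaussian center prior alpha'_{ij}
    H, W, C = A.shape
    if sigma is None:
        sigma = min(H, W) / 3.0  # assumed bandwidth; sigma is a hyper-parameter
    i = np.arange(H)[:, None]
    j = np.arange(W)[None, :]
    alpha = np.exp(-((i - H / 2) ** 2 + (j - W / 2) ** 2) / (2 * sigma ** 2))
    return (alpha[..., None] * A).sum(axis=(0, 1))

def gem(A, p=3.0, eps=1e-6):
    # GeM: generalized-mean pooling; p=1 -> average pooling, p -> inf -> max pooling
    return (np.clip(A, eps, None) ** p).mean(axis=(0, 1)) ** (1.0 / p)

# toy convolutional tensor, H = W = 7, C = 512
A = np.random.rand(7, 7, 512)
for g in (mac(A), spoc(A), gem(A)):
    g = g / np.linalg.norm(g)  # L2-normalise before similarity computation
    assert g.shape == (512,)
```

Each function maps the H x W x C tensor to a single C-dimensional global descriptor, which is then L2-normalised so that cosine similarity reduces to a dot product.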
\n\\textcolor{black}{Feature embedding methods address the discriminativity challenge via mapping individual features into a high-dimensional space and making them distinguishable .} Feature embedding is followed by PCA to reduce feature dimensionality and whitening to down-weight co-occurrence between features.", "id": "4be11049-ece2-473a-babd-9ac8ce82a05f", "level": "subsubsection", "origin_cites_number": 21, "parent_id": "f1ce1975-10d7-4d60-8e5d-320750dbf247", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "Retrieval with Off-the-Shelf DCNN Models" ], [ "subsection", " Deep Feature Extraction" ], [ "subsubsection", "Deep Feature Selection" ] ], "subsections": [], "title": "Deep Feature Selection" }, { "cite_extract_rate": 0.388888888888888, "cites": [ 2124, 2115, 2123, 7548, 2125, 514, 2105 ], "content": "\\label{Feature_Fusion_Strategy}\nFusion studies the complementarity of different features, which includes layer-level and model-level fusion.\n\\begin{flushleft}\n\\emph{a. Layer-level Fusion}\n\\end{flushleft}\nWith layer-level fusion it is possible to fuse multiple fully-connected layers in a deep network ,. For instance, Liu \\etal introduce DeepIndex to incorporate multiple global features from different fully connected layers. The activation from the first fully-connected layer is taken as column indexing, and that from the second layer serves as row indexing. Similarly, it is also possible to fuse the activations from multiple convolutional layers. For instance, Li \\etal apply the R-MAC encoding scheme on five convolutional layers of VGG-16 and then concatenate them into a multi-scale feature vector.\nFeatures from fully-connected layers retain global high-level semantics, whereas features from convolutional layers can present local low- and intermediate-level cues. 
Global and local features therefore complement each other when measuring semantic similarity and can, to some extent, guarantee retrieval performance ,. Such features can be concatenated directly ,, with convolutional features normally filtered by sliding windows or region proposal nets. Direct concatenation can also be replaced by other advanced methods, such as orthogonal operations or pooling-based methods like the Multi-layer Orderless Fusion (MOF) of Li \\etal , which is inspired by Multi-layer Orderless Pooling (MOP) . However, local features cannot play a decisive role in distinguishing subtle feature differences if global and local features are treated identically. Yu \\etal use a mapping function to assert the role of local features when refining the returned ranking lists, via an exponential mapping function for tapping the complementary strengths of convolutional and fully-connected layers. Similarly, \nLiu \\etal design two sub-networks on top of convolutional layers to obtain global and local features and then learn to fuse these features, thereby adaptively adjusting the fusion weights. Instead of directly fusing the layer activations, Zhang \\etal fuse the index matrices which are generated based on the two feature types extracted from the same CNN, a feature fusion which has low computational complexity.\nIt is worth considering which layer combinations are better for fusion given their differences and complementarity. Yu \\etal compare the performance of different combinations between fully-connected and convolutional layers on the Oxford 5k, Holiday, and UKBench datasets. The results show that the combinations including the first fully-connected layer always perform better. Li \\etal demonstrate that fusing convolutional and fully-connected layers outperforms the fusion of only convolutional layers. Fusing two convolutional layers with one fully-connected layer achieves the best performance on the Holiday and UKBench datasets.\n\\begin{flushleft}\n\\emph{b. 
Model-level Fusion}\n\\end{flushleft}\nIt is possible to combine features from different models; such a fusion focuses more on model complementarity, with methods categorized into \\emph{intra-model} and \\emph{inter-model}.\nIntra-model fusion suggests multiple deep models having similar or highly compatible structures, while inter-model fusion involves models with differing structures. For example, \nSimonyan \\etal introduce a ConvNet intra-model fusion strategy to improve the feature learning capacity of VGG where VGG-16 and VGG-19 are fused. \nTo attend to different parts of an image object, Wang \\etal realize the multi-feature fusion by selecting all convolutional layers of VGG-16 to extract image representations, which is demonstrated to be more robust than using only single-layer features.\nInter-model fusion is a way to bridge different features given the fact that different deep networks have different receptive fields ,,,. For instance, a two-stream attention network is introduced to implement image retrieval where the main network for semantic prediction is VGG-16 while an auxiliary network is used for predicting attention maps. Similarly, considering the importance and necessity of inter-model fusion to bridge the gap between mid-level and high-level features, Liu \\etal and Zheng \\etal combine VGG-19 and AlexNet to learn combined features, while Ozaki \\etal concatenate descriptors from six different models. \nInter-model and intra-model fusion are relevant to model selection. There are some strategies to determine how to combine the features from two models. It is straightforward to fuse all features from the candidate models and then learn a metric based on the concatenated features ,, which is a kind of ``\\emph{early fusion}'' strategy. 
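As a minimal NumPy sketch of this ``early fusion'' strategy: features from two models are normalised, concatenated per image, and a single metric (cosine similarity here) is applied to the fused descriptor. The per-model L2 normalisation and the function names are illustrative assumptions, not a specific published method.

```python
import numpy as np

def l2n(X, eps=1e-12):
    # Row-wise L2 normalisation so each model contributes comparably
    return X / (np.linalg.norm(X, axis=1, keepdims=True) + eps)

def early_fusion(feats_a, feats_b):
    # Concatenate per-image features from two models into one descriptor;
    # any single metric (here: cosine via dot product) is then applied on it.
    return l2n(np.concatenate([l2n(feats_a), l2n(feats_b)], axis=1))

# toy example: 4 database images, two models with different feature sizes
fa = np.random.rand(4, 128)   # e.g. features from a hypothetical model A
fb = np.random.rand(4, 256)   # e.g. features from a hypothetical model B
fused = early_fusion(fa, fb)
sims = fused @ fused.T        # cosine similarities on the fused descriptors
```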
Alternatively, it is also possible to learn optimal metrics separately for the features from each model, and then to combine these metrics for final retrieval ranking ,, which is a kind of ``\\emph{late fusion}'' strategy.\n\\textbf{Discussion.} Layer-level fusion and model-level fusion are conditioned on the fact that the associated layers or networks have different feature description capacities. For these fusion strategies, the key question is \\textit{what features are the best to be combined?} Some explorations have been made on the basis of off-the-shelf models, such as Xuan \\etal , who illustrates the effect of combining different numbers of features and different sizes within the ensemble. Chen \\etal analyze the performance of embedded features from off-the-shelf image classification and object detection models with respect to image retrieval.", "id": "f5333f99-42b6-4668-a652-9777bc115d47", "level": "subsubsection", "origin_cites_number": 18, "parent_id": "f1ce1975-10d7-4d60-8e5d-320750dbf247", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "Retrieval with Off-the-Shelf DCNN Models" ], [ "subsection", " Deep Feature Extraction" ], [ "subsubsection", "Feature Fusion Strategies" ] ], "subsections": [], "title": "Feature Fusion Strategies" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Deep_Feature_Enhancement}\n\\textcolor{black}{The primary aim of feature embedding and aggregation is to further promote feature discriminativity, targeting for the ``\\textit{discriminativity challenge}'', and obtain final global and/or local features for retrieving specific instances.}", "id": "f6eab6de-c628-4a89-bfb0-43b4f87b1151", "level": "subsection", "origin_cites_number": 0, "parent_id": "1b3ca748-df26-4b7f-85f3-3eb438a055f5", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "Retrieval with Off-the-Shelf DCNN Models" ], [ "subsection", "Feature Embedding and 
Aggregation" ] ], "subsections": [ "96cef0e9-d2bb-419e-8aa3-3757a002290b", "22557b43-bfa7-48d0-9dad-fad3f1f25551", "3362dbb1-5a27-4b56-85a4-413bc5c384f6", "b5eed66c-a402-4570-be6b-8b9987293fc6" ], "title": "Feature Embedding and Aggregation" }, { "cite_extract_rate": 0.448275862068965, "cites": [ 7543, 2122, 2102, 2113, 2105, 8525, 2108, 2109, 2097, 2123, 2126, 7548, 7545 ], "content": "\\label{Matching_with_Global_Feature}\nGlobal features can be extracted from fully-connected layers, followed by dimensionality reduction and normalization ,. They are easy to implement and there is no further aggregation process. Gong \\etal extract fully-connected activations for local image patches at three scale levels and embed patch-level activations individually using VLAD. Thus, the final concatenated features significantly mitigate the invariance challenge caused by image rotations.\nConvolutional features can also be aggregated into a compact global feature. Simple aggregation methods are sum/average or max pooling ,. \nSum/average pooling is less discriminative, because it takes into account all activated convolutional outputs, thereby weakening the effect of highly activated features . As a result, max pooling is particularly well suited for sparse features having a low probability of being active; however, max pooling may be inferior to sum/average pooling when image features are whitened .\nFigure \\ref{SinglePass} illustrates sophisticated feature aggregation methods using channel-wise or spatial-wise weighting ,. For example, Babenko \\etal propose sum-pooling convolutional features (SPoC) to obtain compact descriptors weighted by $ \\alpha^{\\prime} $ with a Gaussian center prior. Similarly, \nit is possible to treat regions in feature maps as different sub-vectors ,,, thus combinations of $R$ sub-vectors are used to represent the input image, such as R-MAC . 
Since convolutional features may include repetitive patterns and each vector may correspond to identical regions, the resulting descriptors may be bursty, which makes the final aggregated global feature less distinguishable. As a solution, Pang \\etal leverage heat diffusion to weigh convolutional features at the aggregation stage, and reduce the undesirable influence of burstiness. \nConvolutional features have an interpretation as descriptors of local regions, thus many works leverage embedding methods, including BoW, VLAD, and FV, to encode regional feature vectors and then aggregate them into a global descriptor. Note that BoW and VLAD can be extended by using other metrics, such as a Hamming distance . Here we briefly describe the principle of Euclidean embeddings.\nBoW is a widely used feature embedding which leads to a sparse vector of occurrence. Let $ \\boldsymbol{a} = \\left\\{a_{1}, a_{2},...,a_{R} \\right\\} $ be a set of $R$ local features, each of dimensionality $d$. BoW requires a pre-defined codebook $ \\boldsymbol{c} = \\left\\{c_{1}, c_{2},...,c_{K} \\right\\} $ with $ K $ centroids, usually learned offline, to cluster these local descriptors, and maps each descriptor $a_{t}$ to the nearest centroid $c_{k}$. 
For each centroid, one can count and normalize the number of occurrences as\n\\begin{equation}\ng(c_{k}) = \\frac{1}{R} \\sum_{r=1}^{R}\\phi(a_{r}, c_{k})\n\\end{equation}\n\\begin{equation}\n\\phi(a_{r}, c_{k}) = \\left\\{ \\begin{array}{ll}\n1 & \\textrm{if $ c_{k} $ is the closest codeword for $a_{r}$ }\\\\\n0 & \\textrm{otherwise}\\\\\n\\end{array} \\right.\n\\label{eqn-phi}\n\\end{equation}\nThus BoW considers the number of descriptors belonging to each $c_{k}$ (\\emph{i.e.} 0-order feature statistics), so the BoW representation is the concatenation of all mapped vectors:\n\\begin{equation}\n\\label{BoW}\n G_{_{BoW}}(\\boldsymbol{a}) =\n \\left[\n \\begin{array}{ccc}\n g(c_{1}), \\cdots ,g(c_{k}) ,\\cdots, g(c_{K})\n \\end{array}\n \\right] \\rm ^{\\top}\n\\end{equation}\nBoW is simple to implement the encoding of local descriptors, such as convolutional feature maps , or fully-connected activations ,, or to encode regional descriptors ,. Mukherjee \\etal extract image patches based on information entropy and feed into a pre-trained VGG-16, then use BoW to embed and aggregate the patch-level descriptors from a fully-connected layer. Embedded BoW vectors are typically high-dimensional and sparse, so not well suited to large-scale datasets in terms of the mentioned efficiency challenge.\nVLAD stores the sum of residuals for each visual word. Similar to BoW, it generates $K$ visual word centroids, then each feature $a_{r}$ is assigned to its nearest visual centroid $ c_{k} $:\n\\begin{equation}\ng(c_{k}) = \\frac{1}{R} \\sum_{r=1}^{R}\\phi(a_{r} , c_{k})(a_{r} - c_{k})\n\\end{equation}\nThe VLAD representation is stacked by the residuals for all centroids, with dimension ($ d \\times K $), \\ie \n\\begin{equation}\nG_{_{VLAD}}(\\boldsymbol{a}) \\! = \\! \\left[\n \\begin{array}{ccc}\n \\cdots, g(c_{k}) \\rm ^{\\top},\\cdots \n \\end{array}\n \\right] \\rm ^{\\top} .\n\\end{equation}\nVLAD captures first-order feature statistics, \\ie ($a_{r} - c_{k}$). 
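The BoW and VLAD constructions above can be sketched directly in NumPy with hard assignment to a pre-learned codebook. The codebook here is a hypothetical one (in practice it is learned offline, e.g. by k-means), and the function names are illustrative.

```python
import numpy as np

def bow(a, c):
    # a: (R, d) local features, c: (K, d) codebook.
    # Returns the normalised K-dim occurrence histogram (0-order statistics).
    assign = np.argmin(np.linalg.norm(a[:, None, :] - c[None, :, :], axis=2), axis=1)
    g = np.bincount(assign, minlength=len(c)).astype(float)
    return g / len(a)

def vlad(a, c):
    # Sum of residuals (a_r - c_k) per centroid (first-order statistics),
    # stacked into a (K * d)-dim vector.
    assign = np.argmin(np.linalg.norm(a[:, None, :] - c[None, :, :], axis=2), axis=1)
    G = np.zeros_like(c)
    for r, k in enumerate(assign):
        G[k] += a[r] - c[k]
    return (G / len(a)).ravel()

a = np.random.rand(100, 64)   # toy local descriptors, R = 100, d = 64
c = np.random.rand(8, 64)     # toy codebook, K = 8
assert bow(a, c).shape == (8,) and np.isclose(bow(a, c).sum(), 1.0)
assert vlad(a, c).shape == (8 * 64,)
```

The resulting VLAD vector is K times larger than each local descriptor, which is why the number of centroids directly affects indexing cost.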
Similar to BoW, the performance of VLAD is affected by the number of clusters: more centroids produce larger vectors that are harder to index. For instance-level image retrieval, \nGong \\etal concatenate the activations of a fully-connected layer with VLAD applied to image-level and patch-level inputs . Ng \\etal replace BoW~ with VLAD~, and are the first to encode local features into VLAD representations. This idea inspired another milestone work where, for the first time, VLAD is plugged into the last convolutional layer, which allows end-to-end training via back-propagation.\nFV extends BoW by encoding the first and second order statistics. FV clusters the set of local descriptors by a Gaussian Mixture Model (GMM) with $K$ components to generate a dictionary $ \\boldsymbol{c} = \\left\\{ \\mu_{k}; \\Sigma_{k}; w_{k}\\right\\}_{k=1}^{K} $ made up of mean / covariance / weight triples , where the covariance may be simplified by keeping only its diagonal elements. For each local feature $ a_{r} $, a GMM is given by \n\\begin{equation}\n\\begin{aligned}\n\\gamma_{k}(a_{r}) = w_{k}\\times p_{k}(a_{r})/\\Big(\\sum_{k=1}^{K}w_{k}p_{k}(a_{r})\\Big) \\quad s.t. \\sum_{k=1}^K w_{k} = 1\n\\end{aligned}\n\\end{equation}\nwhere $ p_{k}(a_{r})=\\mathcal{N}(a_{r}, \\mu_{k}, \\sigma_{k}^2 )$. 
All local features are assigned to each component $k$ in the dictionary, which is computed as\n\\begin{equation}\n\\begin{aligned}\n &g_{w_{k}} = \\frac{1}{R \\sqrt{w_{k}}} \\sum_{r=1}^{R} \\Big(\\gamma_{k}(a_{r}) - w_{k}\\Big) \\\\\n& g_{u_{k}} = \\frac{1}{R \\sqrt{w_{k}}} \\sum_{r=1}^{R} \\gamma_{k}(a_{r}) \\left( \\frac{a_{r} - \\mu_{k}}{\\sigma_{k}} \\right),\\\\ \n& g_{\\sigma_{k}^2} = \\frac{1}{R \\sqrt{2w_{k}}} \\sum_{r=1}^{R} \\gamma_{k}(a_{r}) \\left[ {\\left( \\frac{a_{r} - \\mu_{k}}{\\sigma_{k}} \\right)}^{2} - 1 \\right]\n\\end{aligned}\n\\end{equation}\nThe FV representation is produced by concatenating vectors from the $K$ components: \n\\begin{equation}\n\\begin{aligned}\n\\!\\!\\! G_{_{FV}}(\\boldsymbol{a}) \\! = \\! \\left[\n \\begin{array}{ccc}\n g_{w_{1}}, \\cdots , g_{w_{K}}, g_{u_{1}}, \\cdots, g_{u_{K}}, g_{\\sigma_{1}^2}, \\cdots, g_{\\sigma_{K}^2} \n \\end{array}\n \\right] \\rm ^{\\top}\n\\end{aligned}\n\\end{equation}\nThe FV representation defines a kernel from a generative process and captures more statistics than BoW and VLAD. FV vectors do not increase the computational cost significantly but require more memory. Applying FV without memory controls may lead to suboptimal performance . \n\\textbf{Discussion.} Traditionally, pooling-based aggregation methods (\\eg in Figure \\ref{SinglePass}) are directly plugged into deep networks and then the whole model is used end-to-end. The three embedding methods (BoW, VLAD, FV) are initially trained with large pre-defined vocabularies ,. One needs to pay attention to their properties before choosing an embedding: BoW and VLAD are computed in the rigid Euclidean space where performance is closely related to the number of centroids, whereas FV can capture higher-order statistics and improves the effectiveness of feature embedding at the expense of a higher memory cost. 
Further, although vocabularies are usually built separately and pre-trained before encoding deep features, it is necessary to integrate the training of networks and the learning of vocabulary parameters into a unified framework so as to guarantee training and testing efficiency. For example, VLAD is integrated into deep networks where each spatial column feature is used to construct clusters via k-means . This idea led to NetVLAD , where deep networks are fine-tuned with the VLAD vectors. The FV method is also combined with deep networks for retrieval tasks ,.", "id": "96cef0e9-d2bb-419e-8aa3-3757a002290b", "level": "subsubsection", "origin_cites_number": 29, "parent_id": "f6eab6de-c628-4a89-bfb0-43b4f87b1151", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "Retrieval with Off-the-Shelf DCNN Models" ], [ "subsection", "Feature Embedding and Aggregation" ], [ "subsubsection", "Matching with Global Features" ] ], "subsections": [], "title": "Matching with Global Features" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 7543, 2093, 2101, 7546, 8525, 2127, 2103, 8524, 2121, 2106, 2114, 2126, 2110, 209, 7545 ], "content": "\\label{Matching_with_Local_Feature}\nAlthough matching with global features has high efficiency for both feature extraction and similarity computation, global features are not compatible with spatial verification and correspondence estimation, which are important procedures for instance-level retrieval tasks, motivating work on matching with local features. In terms of the matching process, global features are matched only once while local feature matching is evaluated by summarizing the similarity across all individual local features (\\ie many-to-many matching).\nOne important aspect of local features is to detect the keypoints for an instance within an image, and then to describe the detected keypoints as a set of local descriptors. 
Inspired by , the common strategies of this whole procedure for IIR can be categorized as \\textit{detect-then-describe} and \\textit{describe-then-detect}.\nIn terms of \\textit{detect-then-describe}, we regard the descriptors around keypoints as local features, similar to ,. Coarse regions can be detected, for example, by using the methods depicted in Figure~\\ref{MultiplePass}, and regions of interest in an image can be detected by using region proposal networks (RPNs) ,. The extracted coarse regions around the keypoints are fed into a DCNN, followed by feature description. Traditional detectors can also be used to detect fine regions around a keypoint. For instance, Zheng \\etal employ the popular Hessian-Affine detector to get an affine-invariant local region. Paulin \\etal and Mishchuk \\etal detect regions using the Hessian-Affine detector and feed them into patch-convolutional kernel networks (Patch-CKNs) . \\textcolor{black}{Note that this becomes more convenient when bounding box annotations are provided by datasets (see Section \\ref{Datasets_and_Evaluation_Criteria}), and then the image regions can be cropped directly for further reranking . }\nRather than performing keypoint detection early on, it is possible to postpone the detection stage on the convolutional feature maps, \\ie \\textit{describe-then-detect}. One can select regions on the convolutional feature maps to obtain a set of local features ,,; the local maxima of the feature maps are then detected as keypoints . A similar strategy is also used in network fine-tuning ,,,,, where the keypoints on the convolutional feature maps can be selected based on attention scores predicted by an attention network ,, or based on single-head and multi-head attention modules in transformers ,. This approach to keypoint selection is better for achieving computational efficiency. 
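A minimal describe-then-detect sketch in NumPy: the channel-summed activation map serves as a non-parametric score, and its 3x3 local maxima are kept as keypoints whose feature-map columns become local descriptors. The neighbourhood size, top-n budget, and function name are illustrative assumptions, not the procedure of any specific paper.

```python
import numpy as np

def detect_local_features(A, n=5):
    # A: (H, W, C) convolutional feature maps.
    # Score map: channel-wise sum, a simple non-parametric spatial score.
    S = A.sum(axis=2)
    H, W = S.shape
    keep = []
    for i in range(H):
        for j in range(W):
            # keep (i, j) if it is a local maximum in its 3x3 neighbourhood
            patch = S[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            if S[i, j] >= patch.max():
                keep.append((S[i, j], i, j))
    keep.sort(reverse=True)  # rank keypoints by score
    # each kept location (i, j) yields a C-dim local descriptor A[i, j, :]
    return [A[i, j, :] for _, i, j in keep[:n]]

A = np.random.rand(14, 14, 256)   # toy feature maps
descs = detect_local_features(A)
assert all(d.shape == (256,) for d in descs)
```

In a full system these descriptors would feed the re-ranking stage, e.g. via spatial verification or match kernels.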
\nAfter keypoint detection and description, a large number of local features are used in the matching stage to perform instance-level retrieval, and the image similarity is evaluated by matching across all local features. Local matching techniques include spatial verification and selective match kernels (SMK) . Spatial verification assumes object instances are rigid so that local matches between images can be estimated as an affine transformation using RANdom SAmple Consensus (RANSAC) . One limitation of RANSAC is its high computational complexity of estimating the transformation model when all local descriptors are considered; instead, it is possible to apply RANSAC to a small number of top-ranked local descriptors, such as those selected by approximate nearest neighbor . \nSMK weighs the contributions of individual matches with a non-linear selective function, but is still memory intensive. Its extension, the Aggregated Selective Match Kernel (ASMK), focuses more on aggregating similarities between local features without explicitly modeling the geometric alignment, which can produce a more compact representation ,. Recently, Teichmann \\etal introduced Regional Aggregated Selective Match Kernel (R-ASMK) to combine information from detected regions, boosting image retrieval accuracy compared to the ASMK.\n\\textbf{Discussion.} Using local descriptors to perform instance retrieval tasks has two limitations. First, the local descriptors for an image are stored individually and independently, which is memory-intensive, and not well-suited for large-scale scenarios.\nSecond, estimating the similarity between the query and database images depends on cross-matching all local descriptor pairs, which incurs additional searching cost and then a low retrieval efficiency. Therefore, most instance retrieval systems using local features follow a two-stage paradigm: initial filtering and re-ranking ,,,,, as in Figure \\ref{PipelineofImageRetrieval}. 
The initial filtering stage employs a global descriptor to select a set of candidate matching images, thereby reducing the solution space; the re-ranking stage uses local descriptors to re-rank the top-ranked images returned by the global descriptor.
\label{Attention_Mechanism}
The attention mechanism can be regarded as a kind of feature aggregation whose aim is to \textcolor{black}{highlight the most relevant feature parts. It can effectively address the ``\textit{distraction challenge}'' and also promote feature discriminativity , realized by computing an attention map.} Approaches to obtaining attention maps can be categorized into non-parametric and parametric groups, as shown in Figure~\ref{fig3}, where the main difference is whether the importance weights in the attention map are learnable.
Non-parametric weighting is a straightforward method to highlight feature importance, and the corresponding attention maps can be obtained by channel-wise or spatial-wise pooling, as in Figure \ref{fig3} (a,b). For spatial-wise pooling, Kalantidis \etal propose an effective CroW method to weight and pool feature maps, which concentrates on weighting activations at different spatial locations without considering the relations between these activations. In contrast, Ng \etal explore the correlations among activations at different spatial locations on the convolutional feature maps.
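As a minimal illustration of non-parametric spatial weighting, the following sketch is in the spirit of CroW but deliberately simplified (it assumes non-negative post-ReLU activations, and the function name and normalization are ours, not the exact CroW formulation): an attention map is derived from the feature map itself and used to weight sum-pooling into a global descriptor.

```python
import numpy as np

def spatial_weighted_pool(fmap, eps=1e-8):
    """Non-parametric spatial weighting sketch: `fmap` is an (H, W, C)
    non-negative feature map; no learned parameters are involved."""
    S = fmap.sum(axis=2)                            # (H, W) aggregate activation
    alpha = np.sqrt(S / (S.sum() + eps))            # normalised spatial weights
    g = (fmap * alpha[..., None]).sum(axis=(0, 1))  # weighted sum-pool -> (C,)
    return g / (np.linalg.norm(g) + eps)            # L2-normalised descriptor
```

The resulting $C$-dimensional vector can be compared with a simple inner product, so salient spatial locations contribute more to the global similarity.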
\nChannel-wise weighting methods are also popular non-parametric attention mechanisms ,. Xu \\etal rank the weighted feature maps to build ``probabilistic proposals'' to select regional features. Jimenez \\etal combine CroW and R-MAC to propose Classes Activation Maps (CAM) to weigh the feature map per class. Xiang \\etal employ a Gram matrix to analyze the correlations between different channels and then obtain channel sensitivity information to tune the importance of each feature map. Channel-wise and spatial-wise weighting methods are usually integrated into a deep model to highlight feature importance ,.\nParametric attention maps, shown in Figure \\ref{fig3} (c,d), can be learned via deep networks, where the input can be either image patches or feature maps ,,, approaches which are commonly used in supervised metric learning . Kim \\etal make the first attempt to propose a shallow network (CRN) to take as input the feature maps of convolutional layers and outputs a weighted mask indicating the importance of spatial regions in the feature maps. The resulting mask modulates feature aggregation to create a global representation of the input image. \nNoh \\etal design a 2-layer CNN with a softplus output layer to compute scores which indicate the importance of different image regions. Inspired by R-MAC, Kim \\etal employ a pre-trained ResNet101 to train a context-aware attention network using multi-scale feature maps.\nApart from using feature maps as inputs, a whole image can be used to learn feature importance, for which specific networks are needed ,,. Mohedano explores different saliency models, including DeepFixNet and Saliency Attentive Model. 
Yang \\etal and Wei \\etal introduce a two-stream network for image retrieval in which the auxiliary stream, DeepFixNet, is used specifically for predicting attention maps, which are then fused with the feature maps produced by the main network.\nFor image retrieval, attention mechanisms can be combined with supervised metric learning .", "id": "3362dbb1-5a27-4b56-85a4-413bc5c384f6", "level": "subsubsection", "origin_cites_number": 14, "parent_id": "f6eab6de-c628-4a89-bfb0-43b4f87b1151", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "Retrieval with Off-the-Shelf DCNN Models" ], [ "subsection", "Feature Embedding and Aggregation" ], [ "subsubsection", "Attention Mechanism" ] ], "subsections": [], "title": "Attention Mechanism" }, { "cite_extract_rate": 0.45, "cites": [ 7547, 2118, 2094, 2104, 2099, 2116, 2128, 2095, 2110 ], "content": "\\label{Deep_Hash_Embedding}\nReal-valued features extracted by deep networks are typically high-dimensional, and therefore are not well-suited to retrieval efficiency. As a result, there is significant motivation to transform deep features into more compact codes. \\textcolor{black}{Since their computational and storage efficiency are beneficial for the ``\\textit{efficiency challenge}'', hashing algorithms have been widely used for global , and local descriptors ,,.}\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{./Figures/AttentionMachanism.pdf}\n\\vspace{-1.5em}\n\\caption{Illustration of attention mechanisms. (a)-(b) Non-parametric schemes: The attention is based on convolutional feature $ \\boldsymbol{A} $. Channel-wise attention in (a) produces a $C$-dimensional importance vector \\textbf{$\\beta_{1}$} ,; Spatial-wise attention in (b) computes a 2-dimensional attention map \\textbf{$\\alpha$} ,,. (c)-(d) Parametric schemes: The attention weights are learned by a trainable network. 
In (c), \\textbf{$\\beta_2$} are provided by a sub-network with parameters $\\theta_{\\gamma}$ ,,,. In (d), the attention maps, as a tensor, are predicted by some auxiliary saliency extraction models from the input image directly ,,.} \\label{fig3}\n\\end{figure}\nHash functions can be plugged as a layer into deep networks, so that hash codes and deep networks can be simultaneously trained and optimized, either supervised or unsupervised . During hash function training, the hash codes of originally similar images are embedded as closely as possible, and the hash codes of dissimilar images are as separated as possible. $d$-dim hash codes from a hash function $h(\\cdot)$ for an image $x$ can be formulated as $ b_{x} = h(x) = h\\big(f(x;\\bm{\\theta})\\big) \\in \\{+1, -1\\}^d $. Because hash codes are non-differentiable their optimization is difficult, so $h(\\cdot)$ can be relaxed to be differentiable by using \\textit{tanh} or \\textit{sigmoid} functions .\nWhen binarizing real-valued features, it is crucial to preserve image similarity and to improve hash code quality . These two aspects are at the heart of hashing algorithms to maximize retrieval accuracy. \n\\begin{flushleft}\n\\emph{a. Hash Functions to Preserve Image Similarity}\n\\end{flushleft}\nPreserving similarity seeks to minimize the inconsistencies between real-valued features and corresponding hash codes, for which a variety of strategies have been adopted. \nLoss functions can significantly influence similarity preservation, which includes both supervised and unsupervised methods. With class labels available, many loss functions are designed to learn hash codes in a Hamming space. As a straightforward method, one can optimize either the difference between matrices computed from the binary codes and their supervision labels , or the difference between the hash codes and real-valued deep features ,. 
Song \\etal propose to learn hash codes for regional features in which each local feature is converted to a set of binary codes by multiplying a hash function and the raw RoI features, then the differences between RoI features and hash codes are characterized by an L$_2$ loss. Do \\etal regularize hash codes with a reconstruction loss, which ensure that codes can be reconstructed to their inputs so that similar/dissimilar inputs are mapped to similar/dissimilar hash codes. Lin \\etal learn hash codes and address the ``\\textit{invariance challenge}'' by introducing an objective function which characterize the difference between the binary codes which are computed from the original image and the geometric transformed one.\n\\begin{flushleft}\n\\emph{b. Improving Hash Function Quality}\n\\end{flushleft}\nA good hash function seeks to have binary codes uniformly distributed; that is, maximally filling and using the hash code space, normally on the basis of bit uncorrelation and bit balance ,. Bit uncorrelation implies that different bits are as independent as possible, so that a given set of bits can aggregate more information within a given code length . Bit balance means that each bit should have a 50\\% chance of being +1 or -1, thereby maximizing code variance and information . Mor{\\`e}re \\etal use the uniform distribution $U$(0,1) to build a regularization term to make hash codes distribute evenly where the codes are learned by a Restricted Boltzmann Machine layer. 
Likewise, Lin \\etal optimize the mean of learned hash codes to be close to 0.5 to prevent any bit bias towards zero or one.", "id": "b5eed66c-a402-4570-be6b-8b9987293fc6", "level": "subsubsection", "origin_cites_number": 20, "parent_id": "f6eab6de-c628-4a89-bfb0-43b4f87b1151", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "Retrieval with Off-the-Shelf DCNN Models" ], [ "subsection", "Feature Embedding and Aggregation" ], [ "subsubsection", " Hashing Embedding " ] ], "subsections": [], "title": " Hashing Embedding " }, { "cite_extract_rate": 0.6923076923076921, "cites": [ 2117, 7546, 2110, 2103, 2129, 2096, 629, 2109, 2097 ], "content": "\\label{Retrieval_via_Learning_DCNN_Representations}\nThe off-the-shelf DCNNs pre-trained on source datasets for classification are quite robust to inter-class variability. However, in most cases, deep features extracted based on off-the-shelf models may not be sufficient for accurate retrieval, even with the strategies discussed in Section \\ref{Retrieval_with_Off_the_Shelf_DCNN_Models}. In order for models to be more effective for retrieval, a common practice is network fine-tuning, \\ie updating the pre-stored parameters . Fine-tuning methods have been studied extensively to learn better features, \\textcolor{black}{whose primary aim is to address the ``\\textit{fine-tune challenge}''.} A standard dataset with clear and well-defined ground-truth labels is indispensable for the supervised fine-tuning and subsequently pair-wise supervisory information is incorporated into ranking loss to update networks by regularizing on retrieval representations, otherwise it is necessary to develop unsupervised fine-tuned methods. After network fine-tuning, features can be organized as global or local to perform retrieval. 
\n\\textcolor{black}{For the most feature strategies we presented in Section \\ref{Retrieval_with_Off_the_Shelf_DCNN_Models}, including feature extraction, feature embedding and feature aggregation. Note that fine-tuning does not contradict or render irrelevant these feature processing methods; indeed, these strategies are complementary and can be equivalently incorporated as part of network fine-tuning. To this end, this section will survey the strategies which have been developed, based on the patch-level, image-level, or class-level supervision, to fine-tune deep networks for better instance retrieval.}\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=0.9\\textwidth]{./Figures/All_Metric_Learning.pdf}\n\\vspace{-0.5em}\n\\caption{Schemes of supervised fine-tuning. Anchor, positive, and negative images are indicated by $x_{a}$, $x_{p}$, $x_{n}$, respectively. (a) classification loss ; (b) similarity learning by using a transformation matrix ; (c) Siamese loss ,,,; (d) triplet loss ; (e) an attention block into DCNNs to highlight regions ; (f) combining classification loss and pairwise ranking loss ,; (g) region proposal networks (RPNs) to locate the RoI and highlight specific regions or instances ; (h) inserting the RPNs of (g) into DCNNs, such that the RPNs extract regions or instances at the convolutional layer ,. 
}\n\\vspace{-1em}\n\\label{All_Metric_Learning}\n\\end{figure*}", "id": "acffc1fe-bf03-450b-857b-db143905236e", "level": "section", "origin_cites_number": 13, "parent_id": "515509a6-319a-4385-b1b5-4418a83d2bc2", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "$\\!\\!\\!\\!\\!$Retrieval via Learning DCNN Representations " ] ], "subsections": [ "45dba903-566e-4f9f-a48c-8f8adf0593ca", "871a2087-f4a4-4968-91d1-0752906a7612" ], "title": "$\\!\\!\\!\\!\\!$Retrieval via Learning DCNN Representations " }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Supervised_Fine-tuning}\nThe way to realize supervised fine-tuning can be determined by the given class labels or pairwise supervisory information.", "id": "45dba903-566e-4f9f-a48c-8f8adf0593ca", "level": "subsection", "origin_cites_number": 0, "parent_id": "acffc1fe-bf03-450b-857b-db143905236e", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "$\\!\\!\\!\\!\\!$Retrieval via Learning DCNN Representations " ], [ "subsection", "Supervised Fine-tuning " ] ], "subsections": [ "c05b41ca-005d-493d-96fa-98315460aeed", "db1c70e4-39b7-4baf-8fa7-4ccb702563d0", "14afec8b-a454-4ba0-a24a-2c2e37482b34" ], "title": "Supervised Fine-tuning " }, { "cite_extract_rate": 0.75, "cites": [ 2114, 309, 2130, 2093, 2111, 2109 ], "content": "\\label{Classification-based_Fine-tuning}\nWhen class labels of a new dataset are available (\\eg INSTRE , GLDv2 ,), it is preferable to begin with a previously-trained DCNN, trained on a separate dataset, with the backbone DCNN typically chosen from one of AlexNet, VGG, GoogLeNet, or ResNet.\nThe DCNN can then be fine-tuned, as shown in Figure \\ref{All_Metric_Learning} (a), by optimizing its parameters on the basis of a cross entropy loss \n\\begin{equation}\n\\mathcal{L}_{CE}(\\hat{p_{i}},y_{i})= - \\! \\sum^{c}_{i} \\! \\big(y_{i} \\! 
\\times \\!log(\\hat{p}_{i}) \\big)\n\\label{cross_entropy_loss}\n\\end{equation}\nHere $y_{i}$ and $\\hat{p}_{i}$ are the ground-truth labels and the predicted logits, respectively, and $c$ is the total number of categories. The milestone work in such fine-tuning is , in which AlexNet is re-trained on the Landmarks dataset. \\textcolor{black}{\nAccording to the class labels, the image-level features are required to compute the logits. Thus, the descriptors extracted from local regions on convolutional feature maps , or image patch inputs are further needed to be aggregated. }\n\\textcolor{black}{A classification-based fine-tuning method enables to enforce higher similarity for intra-class samples and diversity for inter-class samples. Cao \\etal employ the ArcFace loss , which uses the margin-adjusted cosine similarity in the form of softmax loss, to induce smaller intra-class variance and show excellent results for instance retrieval.} Recently, Boudiaf \\etal claim that cross entropy loss can minimize intra-class distances while maximizing inter-class distances. Cross entropy loss is, in essence, maximizing a common mutual information between the retrieval features and the ground-truth labels. 
Therefore, it can be regarded as an upper bound on a new pairwise loss, which has a structure similar to various pairwise ranking losses, of which representatives are introduced below.
\label{Verification_based_Learning}
With affinity information (\eg samples from the same group) indicating similar and dissimilar pairs, fine-tuning methods based on pairwise ranking loss learn an optimal metric which minimizes the distance between similar pairs and maximizes the distance between dissimilar pairs.
Network fine-tuning via ranking loss involves two types of information:
\begin{enumerate}
\item A pair-wise constraint, corresponding to a Siamese network as in Figure \ref{All_Metric_Learning} (c), in which input images are paired with either a positive or negative sample;
\item A triplet constraint, associated with triplet networks as in Figure \ref{All_Metric_Learning} (d), in which anchor images are paired with both similar and dissimilar samples .
\end{enumerate}
These pairwise ranking loss based methods are categorized into globally supervised approaches (Figure \ref{All_Metric_Learning} (c,d)) and locally supervised approaches (Figure \ref{All_Metric_Learning} (g,h)), where the former ones learn a metric on global features by satisfying all constraints, whereas the latter ones focus on local areas by only satisfying the given local constraints (\emph{e.g.} region proposals).
To be specific, consider a triplet set $ X \!\! = \!\! \{(x_{a}, x_{p}, x_{n})\} $ in a mini-batch, where $ (x_{a}, x_{p} )$ indicates a similar pair and $ (x_{a}, x_{n} )$ a dissimilar pair. Features $ f(x;\bm{\theta}) $ of one image are extracted by a network $f(\cdot)$ with parameters $\bm{\theta}$, for which we can represent the affinity information for each similar or dissimilar pair as
\begin{equation}
\begin{aligned}
\mathcal{D}_{ij} = \mathcal{D}(x_{i}, x_{j}) = || f(x_{i}; \bm{\theta}) - f(x_{j}; \bm{\theta})||_{2}^{2}
\end{aligned}\label{Distance_pre_defined}
\end{equation}
\begin{flushleft}
\emph{a. Refining with Transformation Matrix}. 
\end{flushleft}
Learning the similarity among input samples can be implemented by optimizing the weights of a linear transformation matrix . It transforms the concatenated feature pairs into a common latent space using a transformation matrix $ \bm{W} \!\! \in \!\! \mathbb{R}^{2d \times 1} $, where $d$ is the final feature dimension.
The similarity score of these pairs is predicted via a sub-network $ \mathcal{S}_{W}(x_{i}, x_{j}) = f_{W}(f(x_{i}; \bm{\theta})\cup f(x_{j};\bm{\theta}); \bm{W}) $ . In other words, the sub-network $f_{W}$ predicts how similar the feature pairs are. Given the affinity information of feature pairs $ \mathcal{S}_{ij}=\mathcal{S}(x_{i}, x_{j}) \! \in \! \{0,1\} $, the binary labels 1 and 0 indicate the similar (positive) and dissimilar (negative) pairs, respectively. The training of function $f_{W}$ can be achieved by using a regression loss: 
\begin{equation}
\begin{aligned}
\!\!\!\mathcal{L}_{W}(x_{i}, x_{j}) = & | \mathcal{S}_{W}(x_{i}, x_{j}) - \mathcal{S}_{ij}\big(\text{sim}(x_{i}, x_{j}) + m\big) - \\
&(1-\mathcal{S}_{ij})\big(\text{sim}(x_{i}, x_{j})-m\big) | \label{regressionLoss} 
\end{aligned}
\end{equation}
where $\text{sim}(x_{i}, x_{j}) $ can be the cosine function for guiding the training of $\bm{W}$ and $m$ is a margin. By optimizing the regression loss and updating $ \bm{W} $, deep networks maximize the similarity of similar pairs and minimize that of dissimilar pairs. It is worth noting that the pre-stored parameters in the deep models are frozen when optimizing $ \bm{W} $. The pipeline of this approach is depicted in Figure \ref{All_Metric_Learning} (b). 
\begin{flushleft}
\emph{b. Fine-tuning with Siamese Networks}. 
\end{flushleft}
Siamese networks represent important options in implementing metric learning for fine-tuning, as in Figure \ref{All_Metric_Learning} (c) and Figure \ref{mAP_loss_illustration} (a). A Siamese network is a structure composed of two branches that share the same weights across layers. Siamese networks are trained on paired data, consisting of an image pair $ (x_{i}, x_{j}) $ such that $ \mathcal{S}(x_{i}, x_{j}) \! \in \! \{0,1\} $.
A Siamese loss is formulated as
\begin{equation}
\begin{aligned}
\mathcal{L}_{Siam}(x_{i}, x_{j})& = \frac{1}{2}\mathcal{S}(x_{i}, x_{j})\mathcal{D}(x_{i}, x_{j}) \; + \\
&\frac{1}{2}\big(1 - \mathcal{S}(x_{i}, x_{j})\big)\max\big(0,\; m - \mathcal{D}(x_{i}, x_{j})\big)
\end{aligned}\label{contrastive}
\end{equation}
Siamese loss has recently been reaffirmed as a very effective metric in category-level image retrieval, outperforming many more sophisticated losses if implemented carefully . Enabled by the standard Siamese network, this objective function is used to learn the similarity between semantically relevant samples under different scenarios ,. For example, Radenovi{\'c} \etal employ a Siamese network on matching and non-matching global feature pairs which are aggregated by GeM-based pooling. The deep network fine-tuned by the Siamese loss generalizes better and converges at higher retrieval performance. Ong \etal leverage the Siamese network to learn image features which are then fed into the Fisher Vector model for further encoding. Siamese networks can also be applied to hashing learning, in which the Euclidean distance $\mathcal{D}(\cdot)$ in Eq. \ref{contrastive} is computed for binary codes .
An implicit drawback of the Siamese loss is that it penalizes similar image pairs even when the distance between them is already small or zero , making the constraint too strong and unbalanced. At the same time, it is hard to map the features of similar pairs to the same point when images contain complex contents or scenes. To tackle this limitation, Cao \etal adopt a double-margin Siamese loss to relax the penalty for similar pairs by setting a margin $ m_1 $ instead of zero, in which case the original single-margin Siamese loss is re-formulated as 
\begin{equation}
\begin{aligned}
\!\!\!\!
\\mathcal{L}_{\\mathcal{D}\\_Siam}(x_{i}, x_{j}) & = \\frac{1}{2}\\mathcal{S}(x_{i}, x_{j})\\max \\big(0, \\mathcal{D}(x_{i}, x_{j}) - m_{1} \\big) + \\\\\n& \\!\\! \\frac{1}{2}\\big(1 - \\mathcal{S}(x_{i}, x_{j})\\big)\\max\\big(0, m_{2} - \\mathcal{D}(x_{i}, x_{j})\\big)\n\\end{aligned}\\label{doublemargincontrastive}\n\\end{equation}\nwhere $ m_{1} \\!\\! > \\!\\! 0 $ and $ m_{2} \\!\\! > \\!\\! 0 $ are the margins affecting the similar and dissimilar pairs, respectively, as in Figure \\ref{mAP_loss_illustration} (b), meaning that the double margin Siamese loss only applies a contrastive force when the distance of a similar pair is larger than $ m_{1} $. The mAP metric of retrieval is improved when using the double margin Siamese loss . \nMore recently, transformers have been trained under the regularization of cross entropy and Siamese loss for instance-level retrieval and achieved competitive performance, positioning it as an alternative to convolutional architectures. As observed by , the transformer-based architecture is less impacted than convolutional networks by feature collapse since each input feature is projected to different sub-spaces before the multi-headed attention. Moreover, the transformer backbone operates as a learned aggregation operator, thereby avoiding the design of sophisticated feature aggregation methods.\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=\\linewidth]{./Figures/mAP_loss_illustration.pdf}\n\\vspace{-2em}\n\\caption{Illustrations of different losses for network fine-tuning. The same shape with different colors denotes images that include the same instance. (a)-(c) have been introduced in the text ,,. (d) Listwise AP loss considers a mini-batch of $N$ features simultaneously and directly optimizes the Average-Precision computed from these features ,.\n} \\label{mAP_loss_illustration}\n\\end{figure*}\n\\begin{flushleft}\n\\emph{c. Fine-tuning with Triplet Networks}. 
\n\\end{flushleft}\nTriplet networks optimize similar and dissimilar pairs simultaneously. As shown in Figure \\ref{All_Metric_Learning} (d) and Figure \\ref{mAP_loss_illustration} (c), the plain triplet networks adopt a ranking loss for training:\n\\begin{equation}\n\\begin{aligned}\n\\mathcal{L}_{Triplet}(x_{a}, x_{p}, x_{n}) = \\max \\big( 0, m + \\mathcal{D}(x_{a}, x_{p}) - \n \\mathcal{D}(x_{a}, x_{n})\\big)\\label{triplet}\n\\end{aligned}\n\\end{equation}\nwhich indicates that the distance of an anchor-negative pair $ \\mathcal{D}(x_{a}, x_{n}) $ should be larger than that of an anchor-positive pair $ \\mathcal{D}(x_{a}, x_{p}) $ by a certain margin $ m $. \n\\textcolor{black}{Given the datasets that provide bounding box annotations, such as INSTRE, Oxford-5k, Paris-6k, and their variants, the bounding box annotations are used as patch-level supervision to train a region detector which enables the final DCNNs to locate specific regions or objects. As an example, region proposal networks (RPNs) is fine-tuned and subsequently plugged into DCNNs and trained end-to-end , as shown in Figure \\ref{All_Metric_Learning} (g). RPNs yield the regressed bounding box coordinates of objects and are trained by the multi-class classification loss. Once fine-tuned, RPNs can produce regional features for each detected region by RoI pooling and perform better instance search. }\n\\textcolor{black}{Further, local supervised metric learning has been explored based on the fact that RPNs enable deep models to learn regional features for particular instance objects ,,,. RPNs used in the triplet formulation are shown in Figure \\ref{All_Metric_Learning} (h). Firstly, regression loss (RPNs loss) is used to minimize the regressed bounding box relative to ground-truth. 
Then, the regional features for all detected RoIs are aggregated into a global one and L2-normalized for the triplet loss.} Note that, in some cases, jointly training an RPN loss and triplet loss leads to unstable results, a problem addressed in by first training a CNN to produce R-MAC using a rigid grid, after which the parameters in convolutional layers are fixed and RPNs are trained to replace the rigid grid.\nAttention mechanisms can also be combined with metric learning for fine-tuning , as in Figure \\ref{All_Metric_Learning} (e), where the attention module is typically end-to-end trainable and takes as input the convolutional feature maps. Song \\etal introduce a convolutional attention layer to explore spatial-semantic information, highlighting regions in images to significantly improve the discrimination for image retrieval.\nRecent studies , have jointly optimized the triplet loss and classification loss to further improve network capacity, as shown in Figure \\ref{All_Metric_Learning} (f). The overall joint function is \n\\begin{equation}\n\\begin{aligned}\n\\!\\!\\!\\mathcal{L}_{Joint} = &\\lambda_{1} \\! \\cdot \\! \\mathcal{L}_{Triplet}(x_{i,a}, x_{i,p}, x_{i,n}) \\! + \\! \\lambda_{2} \\!\\cdot \\! \\mathcal{L}_{CE}(\\hat{p_{i}},y_{i})\n\\end{aligned}\n\\end{equation}\nwhere the cross entropy loss (CE loss) $ \\mathcal{L}_{CE} $ is defined in Eq. (\\ref{cross_entropy_loss}) and the triplet loss $ \\mathcal{L}_{Triplet} $ in Eq. (\\ref{triplet}). 
$ \lambda_{1} $ and $ \lambda_{2} $ are hyper-parameters tuning the tradeoff between the two loss functions.
\label{Discussion_ranking_loss}
In some cases, pairwise ranking loss cannot effectively learn the variations between samples and suffers from weaker generalization capability if the training pairs are not constructed properly. Therefore, pairwise ranking loss requires careful sample mining and weighting strategies to obtain the most informative pairs, especially when considering mini-batches. The hard negative mining strategy is commonly used ,,; however, more sophisticated mining strategies have recently been developed. Mishchuk \etal calculate a pairwise distance matrix over all mini-batch samples and, for each anchor-positive pair, select the closest negative sample to form a triplet. 
Instead of traversing all possible two-tuple or three-tuple combinations, it is possible to consider all positive samples in one cluster and negative samples together. Liu \etal introduce a group-group loss to decrease the intra-group distance and increase the inter-group distance.
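The in-batch hardest-negative mining idea described above can be sketched as follows (a simplified NumPy illustration, not the exact formulation of any cited work; row $i$ of the two arrays is assumed to be a matching anchor-positive pair, and every other positive serves as a negative candidate):

```python
import numpy as np

def triplet_loss_hard_mining(anchors, positives, margin=0.2):
    """Triplet loss with in-batch hard negative mining: for each anchor,
    the closest non-matching sample in the batch is used as the negative."""
    # pairwise squared Euclidean distances between anchors and positives
    d = ((anchors[:, None, :] - positives[None, :, :]) ** 2).sum(-1)
    n = len(anchors)
    pos_d = np.diag(d).copy()                  # matching-pair distances
    d[np.arange(n), np.arange(n)] = np.inf     # exclude the true positive
    neg_d = d.min(axis=1)                      # hardest (closest) negative
    return np.maximum(0.0, margin + pos_d - neg_d).mean()
```

When the classes are well separated the mined negatives are far away and the hinge is inactive; when all embeddings collapse, the loss reduces to the margin, which is what drives the features apart during training.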
Considering all samples may be beneficial for stabilizing optimization and promoting generalization due to a larger data diversity; however, the extra computational cost remains an issue to be addressed.
Substantial research has been devoted to pair-wise ranking loss, while cross entropy loss, mainly used for classification, has been largely overlooked. Recently, Boudiaf \etal claim that cross entropy loss can match and even surpass the pair-wise ranking loss when carefully tuned on fine-grained category-level retrieval tasks. In fact, the greatest improvements have come from enhanced training schemes (\eg data augmentation, learning rate policies, batch normalization freeze) rather than intrinsic properties of pairwise ranking loss. Further, although several sophisticated ranking losses have been explored and validated for category-level retrieval, Musgrave \etal revisited these losses and found that most of them perform on par with vanilla Siamese loss and triplet loss, so there is merit to consider these losses also for instance-level image retrieval tasks.
Both cross entropy loss and pair-wise ranking loss regularize on the embedded features and the corresponding labels so as to maximize their mutual information . Their effectiveness is not guaranteed to give retrieval results that also optimize mAP . To tackle this limitation, one can directly optimize the average precision (AP) metric using the listwise AP loss,
\begin{equation}
\begin{aligned}
\!\!\!\mathcal{L}_{mAP} = 1 - \frac{1}{N} \sum^{N}_{i=1} {\rm AP}(x_{i}^{\top}X_{N}, Y_{i})
\label{AP_loss}
\end{aligned}
\end{equation}
which optimizes the global ranking of thousands of images simultaneously, instead of only a few images at a time. Here $Y_{i}$ is the binary label evaluating the relevance between batch images. $X_{N} = \{x_{1}, x_{2}, \ldots, x_{N}\}$ denotes the features of all images, where each $x_{i}$ is used as a potential query to rank the remaining batch images.
Each similarity score $x_{i}^{\top}x_{j}$ can be measured by a cosine function. 
It is demonstrated that training with AP-based loss improves retrieval performance ,. However, average precision, as a metric, is normally non-differentiable. To directly optimize the AP loss during back-propagation, the key is that the indicator function used for computing AP needs to be relaxed using methods such as triangular kernel-based soft assignment or a sigmoid function , as shown in Figure \ref{mAP_loss_illustration} (d).
\label{Unsupervised_Fine-tuning}
Supervised network fine-tuning becomes infeasible when there is insufficient supervisory information, normally because of cost or unavailability. Therefore, unsupervised fine-tuning methods for image retrieval are quite necessary, though less studied .
\nFor unsupervised fine-tuning, two directions are to mine relevance among features via manifold learning, and via clustering techniques, each discussed below.", "id": "871a2087-f4a4-4968-91d1-0752906a7612", "level": "subsection", "origin_cites_number": 1, "parent_id": "acffc1fe-bf03-450b-857b-db143905236e", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "$\\!\\!\\!\\!\\!$Retrieval via Learning DCNN Representations " ], [ "subsection", "Unsupervised Fine-tuning " ] ], "subsections": [ "f6c0b11c-f5b0-47c6-9e48-358712ee7aee", "7ab93d94-79dd-47b1-b0d5-6e73b13d8f40" ], "title": "Unsupervised Fine-tuning " }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 2131, 2103, 2132, 2100 ], "content": "\\label{Mining_Samples_with_Manifold_Learning}\nManifold learning focuses on capturing intrinsic correlations on a manifold structure to mine or deduce relevance, as illustrated in Figure \\ref{manifold_learning_pipeline}. Initial similarities between the extracted global features or local features , are used to construct an affinity matrix, which is then re-evaluated and updated using manifold learning . According to the manifold similarity in the updated affinity matrix, positive and hard negative samples are selected for metric learning using pairwise ranking loss based functions such as pair loss , or triplet loss ,. Note that this is different from the aforementioned methods for pairwise ranking loss based fine-tuning methods, where the hard positive and negative samples are explicitly selected from an ordered dataset according to the given affinity information.\nIt is important to capture the geometry of the manifold of deep features, generally involving two steps , known as diffusion. First, the affinity matrix (Figure \\ref{manifold_learning_pipeline}) is interpreted as a weighted kNN graph, where each vector is represented by a node, and edges are defined by the pairwise affinities of two connected nodes. 
Then, the pairwise affinities are re-evaluated in the context of all other elements by diffusing the similarity values through the graph ,,,, with recent strategies proposed such as regularized diffusion (RDP) and regional diffusion . For more details on diffusion methods refer to the survey .\nMost algorithms follow the two steps of ; the differences among methods lie primarily in three aspects: \n\\begin{enumerate}\n\\item {\\bf Similarity initialization},\nwhich affects the subsequent kNN graph construction in an affinity matrix. Usually, an inner product or Euclidean distance is directly computed for the affinities. Alternatively, a Gaussian kernel function can be used ,, or regional similarity from image patches can be considered .\n\\item {\\bf Transition matrix definition}: a row-stochastic matrix , determines the probabilities of transiting from one node to another in the graph. These probabilities are proportional to the affinities between nodes, which can be measured by geodesic distance (\\emph{e.g.} the summation of the weights of the relevant edges).\n\\item {\\bf Iteration scheme},\nwhich re-evaluates and updates the values in the affinity matrix by the manifold similarity until some convergence criterion is met. Most algorithms are iteration-based ,, as illustrated in Figure \\ref{manifold_learning_pipeline}. \n\\end{enumerate}\nDiffusion process algorithms are indispensable for unsupervised fine-tuning. Better image similarity can be obtained by improving the initialization (\\emph{e.g.} with regional similarity or higher-order information ). Diffusion is normally iterative and computationally demanding , a limitation that conflicts with the efficiency requirements of image retrieval. To reduce the computational complexity, Bai \\etal propose a regularized diffusion process, facilitated by an efficient iteration-based solver. Zhao \\etal regard the diffusion process as a non-linear kernel mapping function, which is then modelled by a deep neural network. 
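A minimal sketch of these three components (NumPy only, all names illustrative), assuming a symmetrically normalized transition matrix and the widely used update $f \leftarrow \alpha S f + (1-\alpha)y$:

```python
import numpy as np

def diffuse(A, y, k=3, alpha=0.9, iters=50):
    # A: [N, N] pairwise affinity matrix (e.g. cosine similarities).
    # y: [N] query indicator vector (1 at the query node(s), else 0).
    N = A.shape[0]
    # (1) similarity initialization -> sparse kNN graph: keep each
    # node's k strongest edges (the self-edge is removed below).
    W = np.zeros_like(A, dtype=float)
    for i in range(N):
        nn = np.argsort(-A[i])[:k + 1]
        W[i, nn] = A[i, nn]
    W = np.maximum(W, W.T)                    # symmetrize the graph
    np.fill_diagonal(W, 0.0)
    # (2) transition matrix: symmetrically normalized adjacency.
    d = np.maximum(W.sum(axis=1), 1e-12)
    S = W / np.sqrt(d)[:, None] / np.sqrt(d)[None, :]
    # (3) iteration scheme: f <- alpha * S f + (1 - alpha) * y.
    f = y.astype(float).copy()
    for _ in range(iters):
        f = alpha * (S @ f) + (1.0 - alpha) * y
    return f                                  # diffused ranking scores
```

Ranking the database by the diffused scores $f$ rather than by the raw affinities accounts for the manifold structure of the feature space.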
Other studies replace the diffusion process on a kNN graph with a diffusion network , which is derived from graph convolution networks , an end-to-end trainable framework that allows efficient computation during training and testing.\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{./Figures/manifoldlearningpipeline.pdf}\n\\caption{Paradigm of manifold learning for unsupervised metric learning, based on triplet loss ,.} \n\\label{manifold_learning_pipeline}\n\\end{figure}\nOnce the manifold space is learned, samples are mined by computing geodesic distances based on the Floyd-Warshall algorithm or by comparing the set difference . The selected samples are fed into deep networks to perform fine-tuning.", "id": "f6c0b11c-f5b0-47c6-9e48-358712ee7aee", "level": "subsubsection", "origin_cites_number": 12, "parent_id": "871a2087-f4a4-4968-91d1-0752906a7612", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "$\\!\\!\\!\\!\\!$Retrieval via Learning DCNN Representations " ], [ "subsection", "Unsupervised Fine-tuning " ], [ "subsubsection", "Mining Samples with Manifold Learning" ] ], "subsections": [], "title": "Mining Samples with Manifold Learning" }, { "cite_extract_rate": 0.4, "cites": [ 9094, 2135, 2133, 2128, 7544, 2134 ], "content": "\\label{Mining_Samples_by_Clustering}\nClustering has been used to explore proximity information in instance-level retrieval ,,,,. The rationale behind these methods is that samples in a cluster are likely to satisfy a degree of similarity. \nOne class of methods clusters deep features via k-means. 
Given $k$ cluster centroids, during each training epoch a deep network alternates between two steps: first, a soft assignment is computed between the feature representations and the cluster centroids; second, the cluster centroids are refined and, at the same time, the deep network is updated by learning from the current high-confidence assignments using a certain regularization. These two steps are repeated until a convergence criterion is met, at which point the cluster assignments are used as pseudo-labels ,. Alternatively, the pseudo-labels can be calculated from the samples in a cluster, \\eg their mean values. For example, Tzelepi \\etal compute the $ k $ nearest feature representations with respect to a query feature and then compute their mean vector, which is used as the target for the query feature. In this case, fine-tuning is performed by minimizing the squared distance between each query feature and the mean of its $ k $ nearest features. Liu \\etal propose a self-taught hashing algorithm using a kNN graph construction to generate pseudo-labels that are used to analyze and guide network training. Shen \\etal and Radenovi{\\'c} \\etal , use Structure-from-Motion (SfM) on each image cluster to explore sample reconstructions and select images for triplet loss. Clustering methods depend on the Euclidean distance, making it difficult to reveal the intrinsic relationships between objects.\nThere are further techniques for instance retrieval, such as using AutoEncoders ,, generative adversarial networks (GANs) , convolutional kernel networks ,, and graph convolutional networks . These methods focus on devising novel unsupervised frameworks, instead of iterative similarity diffusion or cluster refinement on the feature space. 
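The k-means alternation described above can be sketched as follows (a minimal NumPy sketch with illustrative names; a deterministic farthest-point initialization stands in for the usual random seeding):

```python
import numpy as np

def kmeans_pseudo_labels(feats, k, iters=20):
    # feats: [N, D] features extracted by the current network.
    # Farthest-point initialization keeps this sketch deterministic.
    centroids = [feats[0]]
    for _ in range(1, k):
        d = np.min([((feats - c) ** 2).sum(axis=1) for c in centroids], axis=0)
        centroids.append(feats[d.argmax()])
    centroids = np.array(centroids, dtype=float)
    labels = np.zeros(len(feats), dtype=int)
    for _ in range(iters):
        # assignment step: each feature goes to its nearest centroid
        d = ((feats[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # refinement step: move each centroid to the mean of its members
        for c in range(k):
            if (labels == c).any():
                centroids[c] = feats[labels == c].mean(axis=0)
    return labels, centroids
```

In the full pipeline, the network would then be updated with, \eg a cross entropy loss on these pseudo-labels, and the clustering and training phases alternate.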
For example, instead of performing iterative traversal on a set of nearest neighbors defined by kNN graph, Liu \\etal employ graph convolutional networks to directly encode the neighbor information into image descriptors and then train the deep models to learn a new feature space. This method is demonstrated to significantly improve retrieval accuracy while maintaining efficiency. GANs are also explored, for the first time, for instance-level retrieval in an unsupervised fashion . The generator retrieves images that contain similar instances as a given image, while the discriminator judges whether the retrieved images have the specified instance which appeared in the query image. During training, the discriminator and the generator play a min-max game via an adversarial reward which is computed based on the cosine distance between the query image and the images retrieved by the generator.", "id": "7ab93d94-79dd-47b1-b0d5-6e73b13d8f40", "level": "subsubsection", "origin_cites_number": 15, "parent_id": "871a2087-f4a4-4968-91d1-0752906a7612", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "$\\!\\!\\!\\!\\!$Retrieval via Learning DCNN Representations " ], [ "subsection", "Unsupervised Fine-tuning " ], [ "subsubsection", "Mining Samples by Clustering" ] ], "subsections": [], "title": "Mining Samples by Clustering" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{State_of_the_Art_Performance}", "id": "a6c69e00-d8a1-492b-8514-fab570f109d2", "level": "section", "origin_cites_number": 0, "parent_id": "515509a6-319a-4385-b1b5-4418a83d2bc2", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "State of the Art Performance" ] ], "subsections": [ "347ae713-adb4-418f-9e62-4191f141d96c", "78dcf4cf-a971-4517-838a-4552eb9d65a5", "4289168f-a812-4899-b8a0-52669cd0ba36" ], "title": "State of the Art Performance" }, { "cite_extract_rate": 0.384615384615384, "cites": [ 2114, 2136, 
2137, 2093, 2111 ], "content": "\\label{Datasets_and_Evaluation_Criteria}\nTo demonstrate the effectiveness of methods, we choose the following commonly-used datasets for performance comparison:\n\\textbf{UKBench (UKB)} consists of 10,200 images of objects. This dataset has 2,550 groups of images, each group having four images of the same object from different viewpoints or illumination conditions, which can be regarded as a kind of class-level supervision information. All images can be used as a query.\n\\textbf{Holidays} consists of 1,491 images collected from personal holiday albums. Most images are scene-related. The dataset comprises 500 groups of similar images with a single query image for each group. The dataset also provides position information of the interest regions for each image.\n\\textbf{Oxford-5k} consists of 5,062 images for 11 Oxford buildings. Each building is associated with five hand-drawn bounding box queries. \\textcolor{black}{According to the relevance level, each image of the same building is assigned a label \\textit{Good} (\\ie \\textit{positive}), \\textit{OK} (\\ie \\textit{positive}), \\textit{Junk}, or \\textit{Bad} (\\ie \\textit{negative}). \\textit{Junk} images can be \ndiscarded or regarded as \\textit{negative} examples ,.\nTo build a tuple for each given query, one can select a positive example whose label corresponds to \\textit{Good} or \\textit{OK} in the same category, and select one negative example from each of the remaining building categories. Furthermore, an additional disjoint set of 100,000 distractor images is added to obtain Oxford-105k.}\n\\textbf{Paris-6k} includes 6,412 images and is categorized into 12 groups by architecture. \nThe supervision information can be used like that of Oxford-5k. 
Likewise, an additional disjoint set of 100,000 distractor images is added to obtain Paris-106k.\n\\textbf{INSTRE} consists of 28,543 images from 250 different object classes, including three disjoint subsets\\footnote{https://github.com/imatge-upc/salbow}: INSTRE-S1, INSTRE-S2, INSTRE-M. \n\\textcolor{black}{The INSTRE dataset has bounding box annotations, providing single-labelled and double-labelled class information for single- and multiple-object retrieval, respectively. One can use the class information to build a tuple, with two positive examples from the same class and one negative from one of the remaining classes. The performance evaluation on INSTRE in our experiments follows the protocol in . }\n\\textbf{Google Landmarks Dataset (GLD)} , consists of GLD-v1 and GLD-v2. \\textcolor{black}{GLD-v2 is the recommended version; it is stable and all of its images have permissive licenses . GLD-v2 is divided into three subsets: (i) 118k query images with ground-truth annotations, (ii) 4.1M training images of 203k landmarks with labels, and (iii) 762k index images of 101k landmarks. Due to its large scale, GLD-v2 provides class-level ground-truth which can be used to build training tuples. Due to its image diversity, each landmark may contain clutter images, so pre-processing methods are needed to select the more relevant images . Removing this clutter yields the cleaned training subset ``GLD-v2-clean'', containing 1.6M images of 81k landmarks. Since the Google Landmarks dataset still lacks bounding boxes for objects of interest, Teichmann \\etal provide a new dataset of landmark bounding boxes, based on GLD. This patch-level supervision information can help locate the most relevant regions. 
}\n\\textcolor{black}{Note that, additional queries and distractor images have been added into Oxford-5k and Paris-6k, producing the Revisited Oxford ($\\mathcal{R}$Oxford) and Revisited Paris ($\\mathcal{R}$Paris) datasets where each image of the same building is assigned a label \\textit{Easy}, \\textit{Hard}, \\textit{Unclear}, or \\textit{Negative} . Different label combinations are used as \\textit{positive} according to the difficulty level of different setups. During testing, if there are no positive images for a query, then that query is excluded from the evaluation. For details, we refer the reader to . We undertake partial comparisons under the \\textit{hard} evaluation protocol on these revisited datasets. }", "id": "347ae713-adb4-418f-9e62-4191f141d96c", "level": "subsection", "origin_cites_number": 13, "parent_id": "a6c69e00-d8a1-492b-8514-fab570f109d2", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "State of the Art Performance" ], [ "subsection", "Datasets" ] ], "subsections": [], "title": "Datasets" }, { "cite_extract_rate": 0, "cites": [], "content": "\\textbf{Average precision} (AP) refers to the coverage area under the precision-recall (PR) curve. A larger AP implies a higher PR curve and better retrieval accuracy. AP can be calculated as $ AP=\\frac{1}{R} \\sum_{k=1}^N P(k)\\cdot rel(k) $,\nwhere $ R $ denotes the number of relevant results for the query image from the total number $ N $ of images. $ P(k) $ is the precision of the top $ k $ retrieved images, and $ rel(k) $ is an indicator function equal to 1 if the item within rank $ k $ is a relevant image and 0 otherwise. 
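This definition can be sketched directly (an illustrative snippet; the ranked relevance list would come from sorting the gallery by similarity to the query):

```python
def average_precision(ranked_rel, R=None):
    # ranked_rel: rel(k) for k = 1..N, i.e. 1 if the k-th retrieved
    # image is relevant to the query, else 0 (best match first).
    # R: total number of relevant images (defaults to those present).
    if R is None:
        R = sum(ranked_rel)
    hits, ap = 0, 0.0
    for k, rel in enumerate(ranked_rel, start=1):
        if rel:
            hits += 1
            ap += hits / k        # P(k) accumulated at relevant ranks
    return ap / R if R else 0.0
```

mAP then simply averages this value over all query images.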
Mean average precision (mAP) is adopted for the evaluation over all query images, \\ie $mAP=\\frac1Q{\\sum_{q=1}^Q}AP(q) $,\nwhere $ Q $ is the number of query images.\nThe \\textbf{N-S score} is a metric used for UKBench ; the N-S score is the average for the top-4 precision over the dataset.", "id": "78dcf4cf-a971-4517-838a-4552eb9d65a5", "level": "subsection", "origin_cites_number": 1, "parent_id": "a6c69e00-d8a1-492b-8514-fab570f109d2", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "State of the Art Performance" ], [ "subsection", "Evaluation Metrics" ] ], "subsections": [], "title": "Evaluation Metrics" }, { "cite_extract_rate": 0.50943396226415, "cites": [ 2136, 2140, 7543, 2093, 514, 2096, 2105, 2113, 8525, 8526, 7546, 2094, 2137, 2131, 2103, 2139, 2138, 7544, 2119, 2109, 2097, 2132, 2100, 2117, 2112, 7545, 2111 ], "content": "\\begin{figure}[t]\n\\centering\n\\begin{subfigure}{0.248\\textwidth}\n\\resizebox{\\columnwidth}{!}{\n\\begin{tikzpicture} \n\\scriptsize\n\\begin{axis}[legend style={at={(0.95,0.3)}, scale=0.5, anchor=north east, font= \\scriptsize}, symbolic x coords={2014, 2015, 2016, 2017, 2018, 2019, 2020 }, grid=both, ylabel={mAP(\\%)}, ymin=77.5, ymax=99, xmin=2014, xmax=2020, xtick=data, x tick label style={rotate=45,anchor=east} ];\n\\addlegendentry{Holidays}\n\\addplot[mark=square*,thick, line width=0.45mm, color={rgb, 255:red,152;green,179;blue,134}, on layer=main] coordinates { (2014, 80.18) (2015,89.7) (2016,94.2) (2017,95.13) (2018,95.67) (2019, 95.5) (2020, 94.0) };\n\\node [above right=0cm, font=\\tiny] at (axis cs:2014, 80.5) {80.18 }; \\node [ font=\\tiny] at (axis cs:2015, 90.4) {89.7 }; \\node [above right=-0.3cm, font=\\tiny] at (axis cs:2016, 93.4) {94.2 }; \\node [above right=-0.3cm, font=\\tiny] at (axis cs:2017, 94.8) {95.13 }; \\node [above right=-0.4cm, font=\\tiny] at (axis cs:2018, 95.6) {95.7 }; \\node [above right=-0.5cm, font=\\tiny] at (axis cs:2019, 95.9) {95.5 }; 
\\node [above right=-0.8cm, font=\\tiny] at (axis cs:2020, 96) {94.0 };\n\\addlegendentry{UKBench}\n\\addplot[mark=diamond*,thick,line width=0.45mm, color={rgb,255:red,141;green,170;blue,196}, on layer=main,] coordinates {(2014,91.1) (2015,91.3) (2016,96.3) (2018,98.08) (2020,98.84) }; \\node [above right=0cm, font=\\tiny] at (axis cs:2014, 91) {91.1 }; \\node [above right=-0.4cm, font=\\tiny] at (axis cs:2015, 92.8) {91.3 }; \\node [above right=-0.3cm, font=\\tiny] at (axis cs:2016, 97.4) {96.3 }; \\node [above right=-0.4cm, font=\\tiny] at (axis cs:2018, 99.5) {98.1 }; \\node [above right=-0.8cm, font=\\tiny] at (axis cs:2020, 100.8) {98.8 };\n\\addlegendentry{Oxford-5k}\n\\addplot[mark=*,thick, on layer=background, line width=0.45mm, color={rgb, 255:red,237;green,160;blue,155}] coordinates { (2014,78.34) (2015,84.4) (2016,88.95) \n(2018,95.8) (2019,96.2) (2020,96.2) }; \\node [above right=0cm, font=\\tiny] at (axis cs:2014, 77.5) {78.34 }; \\node [ font=\\tiny] at (axis cs:2015, 85.15) {84.4 }; \\node [above right=-0.3cm, font=\\tiny] at (axis cs:2016, 88.6) {88.95 }; \n\\node [above right=-0.4cm, font=\\tiny] at (axis cs:2018, 97.2) {95.8 }; \\node [above right=-0.5cm, font=\\tiny] at (axis cs:2019, 98) {96.2 }; \\node [above right=-0.8cm, font=\\tiny] at (axis cs:2020, 99.1) {96.2 };\n\\addlegendentry{Paris-6k}\n\\addplot[mark=triangle*,thick,line width=0.45mm, color={rgb, 255:red,217;green,188;blue,102}, on layer=main,] coordinates { (2014,86.83) (2015,86.5) (2016,95.88) (2017,96.0) (2018, 97.0) (2019, 97.8) (2020, 97.4) };\n\\node [above right=0cm, font=\\tiny] at (axis cs:2014, 86.8) {86.83 }; \\node [above right=-0.3cm, font=\\tiny] at (axis cs:2015, 87.8) {86.5 }; \\node [above right=-0.3cm, font=\\tiny] at (axis cs:2016, 95.6) {95.8 };\\node [above right=-0.3cm, font=\\tiny] at (axis cs:2017, 97.0) {96.0 }; \\node [above right=-0.4cm, font=\\tiny] at (axis cs:2018, 98.4) {97.0 }; \\node [above right=-0.5cm, font=\\tiny] at (axis cs:2019, 99.3) {97.8 }; 
\\node [above right=-0.8cm, font=\\tiny] at (axis cs:2020, 100) {97.4 };\n\\end{axis}\n\\end{tikzpicture}\n} \n\\vspace{-2.1em}\n\\end{subfigure}\n\\begin{subfigure}{0.235\\textwidth}\n\\resizebox{\\columnwidth}{!}{\n\\begin{tikzpicture} \n\\scriptsize\n\\begin{axis}[legend style={at={(0.425,0.3)}, scale=0.5, anchor=north east, font=\\scriptsize}, symbolic x coords={2017, 2018, 2019, 2020}, grid=both, ylabel={}, ymin=15, ymax=95, xmin=2017, xmax=2020, xtick=data, x tick label style={rotate=45, anchor=east} ];\n\\addlegendentry{$\\mathcal{R}$Oxford-5k}\n\\addplot[mark=*,thick,line width=0.45mm, color={rgb, 255:red,181;green,144;blue,202}, on layer=main,] coordinates { (2018, 54.8) (2019, 65.8) (2020, 64.0) };\n\\node [above right=-0.3cm, font=\\tiny] at (axis cs:2018, 53.8) {54.8 }; \\node [above right=-0.3cm, font=\\tiny] at (axis cs:2019, 72.5) {65.8 }; \\node [above right=-0.8cm, font=\\tiny] at (axis cs:2020, 74.0) {64.0 };\n\\addlegendentry{$\\mathcal{R}$Paris-6k}\n\\addplot[mark=triangle*,thick,line width=0.45mm, color={rgb, 255:red,117;green,207;blue,184}, on layer=main,] coordinates { (2018, 84.0) (2019, 85.3) (2020, 80.4) };\n\\node [above right=-0.3cm, font=\\tiny] at (axis cs:2018, 92.5) {84.0 }; \\node [above right=-0.3cm, font=\\tiny] at (axis cs:2019, 84.0) {85.3 }; \\node [above right=-0.9cm, font=\\tiny] at (axis cs:2020, 93.5) {80.4 };\n\\addlegendentry{INSTRE}\n\\addplot[mark=diamond*,thick,line width=0.45mm, color={rgb, 255:red,83;green,82;blue,4}, on layer=main,] coordinates { (2017, 80) (2018, 80.5) (2019, 92.4)};\n\\node [above right=-0.05cm, font=\\tiny] at (axis cs:2017, 71.5) {80.0 }; \\node [above right=-0.3cm, font=\\tiny] at (axis cs:2018, 79.5) {80.5 }; \\node [above right=0cm, font=\\tiny] at (axis cs:2019, 86) {92.4 } ;\n\\addlegendentry{GLD-v2}\n\\addplot[mark=square*,thick,line width=0.45mm, color={rgb, 255:red,130;green,54;blue,203}, on layer=main,] coordinates {(2020, 26.8) };\n\\node [above right=-0.8cm, font=\\tiny] at (axis 
cs:2020, 38) {26.8 };\n\\end{axis};\n\\end{tikzpicture}\n}\n\\vspace{-2.1em}\n\\end{subfigure}\n\\caption{Performance improvements from 2014 to 2020.} \n\\label{datasets_progress_2014_2_2020}\n\\end{figure}\n\\textbf{Overview.} Figure \\ref{datasets_progress_2014_2_2020} summarizes the performance over 6 datasets from 2014 to 2020. Early on, the powerful feature extraction of DCNNs led to rapid improvements. Subsequent key ideas have been to extract instance features at the region level to reduce image clutter , and to improve feature discriminativity by using methods including feature fusion ,,, feature aggregation ,, and feature embedding . Fine-tuning is an important strategy to improve performance by tuning deep networks specifically for learning instance features ,. For instance, the accuracy increases steadily from 78.34\\% to 96.2\\% on the Oxford-5k dataset when manifold learning is used to fine-tune deep networks. The mAP on $\\mathcal{R}$Paris-6k and $\\mathcal{R}$Oxford-5k is lower than that on Paris-6k and Oxford-5k, leaving room for improvement.\nWe report results using off-the-shelf models (Table \\ref{Table_retrieval_off_the_shelf}) and fine-tuned networks (Table \\ref{Metric_learning_table_B}). In Table \\ref{Table_retrieval_off_the_shelf}, single-pass and multiple-pass schemes are analyzed, while supervised and unsupervised fine-tuning are compared in Table \\ref{Metric_learning_table_B}. Since many aspects vary across the different methods, making them not directly comparable, we mainly draw general conclusions and trends based on the collected results.\n\\begin{table*}[t]\n\\centering \n\\caption{\\footnotesize Performance evaluation of off-the-shelf DCNN models. ``$ \\bullet $'' indicates that the models or layers are combined to learn features; ``PCA$_{w}$'' indicates PCA with whitening on the extracted features to improve robustness; ``MP'' means Max Pooling; ``SP'' means Sum Pooling. 
The CNN-M network with ``$\\ast$'' has an architecture similar to that of AlexNet. ``-'' means that the results were not reported. }\n\\vspace{-1em}\n\\label{Table_retrieval_off_the_shelf}\n\\renewcommand{\\arraystretch}{1.0}\n\\resizebox{19.5cm}{!}{\n\\begin{tabular}{!{\\vrule width1.2bp}c|c|c|c|c|c|c|c|c|c|p{9cm}!{\\vrule width1.2bp}}\n\\Xhline{1pt}\n\\footnotesize Type & \\footnotesize \\shortstack [c] { Method } & \\footnotesize \\shortstack [c] {Backbone \\\\ DCNN}\t& \\footnotesize \\shortstack [c] {Output \\\\ Layer} & \\footnotesize \\shortstack [c] {Embed. \\\\ Aggre.}\t& \\footnotesize \\shortstack [c] { Feat. \\\\ Dim } & \\footnotesize \\shortstack [c] {Holidays} & \\footnotesize \\shortstack [c] {UKB} & \\footnotesize \\shortstack [c] {Oxford5k \\\\ (+100k)} & \\footnotesize \\shortstack [c] {Paris6k \\\\ (+100k)} & \\footnotesize Brief Conclusions and Highlights \\\\\n\\Xhline{1.0pt}\n\\multirow{5}{*}{ \\raisebox{-15ex}[0pt]{\\begin{tabular}[c]{@{}c@{}} \\rotatebox{90}{Single-pass} \\end{tabular}}} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { Neural \\\\ codes } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{ AlexNet} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ \\shortstack [c] { FC6}}\t& \\footnotesize \\raisebox{-2ex}[0pt]{\\shortstack [c] { PCA}} & \\footnotesize \\raisebox{-2ex}[0pt]{ $128$} & \\footnotesize \\raisebox{-2ex}[0pt]{ $74.7$} & \\footnotesize \\raisebox{-3ex}[0pt]{\\shortstack [c] { $ 3.42 $ \\\\ \\textcolor[rgb]{0,0,0}{(N-S)}}}& \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $43.3$\\\\ (38.6)}} & \\footnotesize\t\\raisebox{-2.0ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize Compressed neural codes of different layers are explored. 
AlexNet is also fine-tuned for retrieval.\\\\ \\cline{2-11} \n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { SPoC } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{ VGG16} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ \\shortstack [c] { Conv5}}\t& \\footnotesize \\raisebox{-2.5ex}[0pt]{\\shortstack [c] { SPoC + \\\\ PCA$_{w} $}} & \\footnotesize \\raisebox{-2ex}[0pt]{ 256} & \\footnotesize \\raisebox{-2ex}[0pt]{ 80.2} & \\footnotesize \\raisebox{-3ex}[0pt]{\\shortstack [c] { $ 3.65 $ \\\\ \\textcolor[rgb]{0,0,0}{(N-S)}}} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $58.9$\\\\ (57.8)}} & \\footnotesize\t\\raisebox{-2.0ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize Exploring a Gaussian weighting scheme, \\ie the centering prior, to improve the discrimination of\nfeatures. \\\\ \\cline{2-11} \n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { CroW } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{VGG16} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ \\shortstack [c] { Conv5}}\t& \\footnotesize \\raisebox{-2.5ex}[0pt]{\\shortstack [c] { CroW + \\\\ PCA$_{w} $}} & \\footnotesize \\raisebox{-2ex}[0pt]{256} & \\footnotesize \\raisebox{-2ex}[0pt]{ $85.1$} & \\footnotesize \\raisebox{-2ex}[0pt]{\\shortstack [c] { $-$}} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] {$68.4$\\\\ (63.7)}} & \\footnotesize\t\\raisebox{-3.0ex}[0pt]{\\shortstack [c] { $ 76.5 $\\\\ (69.1) }} & \\footnotesize The spatialwise and channelwise weighting mechanisms are utilized to highlight crucial convolutional features. 
\\\\ \\cline{2-11} \n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { R-MAC } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{ VGG16} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ \\shortstack [c] { Conv5}}\t& \\footnotesize \\raisebox{-2.5ex}[0pt]{\\shortstack [c] { R-MAC + \\\\ PCA$_{w} $}} & \\footnotesize \\raisebox{-2ex}[0pt]{512} & \\footnotesize\t\\raisebox{-2.0ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize \\raisebox{-2ex}[0pt]{\\shortstack [c] {$-$ }} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $66.9$\\\\ (61.6)}} & \\footnotesize\t\\raisebox{-3.0ex}[0pt]{\\shortstack [c] { $-$ \\\\ (75.7) }} & \\footnotesize Sliding windows with different scales on convolutional feature maps to encode multiple image regions.\\\\ \\cline{2-11} \n& \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { Multi-layer \\\\ CNN } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{VGG16} & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { FC6 $\\bullet$ \\\\ Conv4$\\sim$5 }}\t& \\footnotesize \\raisebox{-2ex}[0pt]{\\shortstack [c] { SP}} & \\footnotesize \\raisebox{-2ex}[0pt]{4096} & \\footnotesize \\raisebox{-2ex}[0pt]{ 91.4} & \\footnotesize \\raisebox{-3ex}[0pt]{\\shortstack [c] { $ 3.68 $ \\\\ \\textcolor[rgb]{0,0,0}{(N-S)}}} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $61.5$\\\\($-$)}} & \\footnotesize\t\\raisebox{-2.0ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize Layer-level feature fusion and the complementary properties of different layers are explored.\\\\ \\cline{2-11} \n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { BLCF } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{ VGG16} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ \\shortstack [c] { Conv5}}\t& \\footnotesize \\raisebox{-2.5ex}[0pt]{\\shortstack [c] { BoW + \\\\ PCA$_{w} $}} & \\footnotesize \\raisebox{-2ex}[0pt]{ 25k} & \\footnotesize \\raisebox{-2ex}[0pt]{$-$} & \\footnotesize \\raisebox{-2ex}[0pt]{\\shortstack [c] { $-$}}& \\footnotesize \\raisebox{-3ex}[0pt]{ 
\\shortstack [c] { $73.9$\\\\ (59.3)}} & \\footnotesize\t\\raisebox{-3.0ex}[0pt]{\\shortstack [c] { $ 82.0 $\\\\ (64.8) }} & \\footnotesize Both global features and local features are explored, demonstrating that local features have higher accuracy. \\\\ \\cline{2-11} \n\\Xhline{1.0pt}\n\\multirow{7}{*}{ \\raisebox{-20ex}[0pt]{\\begin{tabular}[c]{@{}c@{}} \\rotatebox{90}{Multiple-pass} \\end{tabular}}}\n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { OLDFP } } & \\footnotesize \\raisebox{-2ex}[0pt]{\\shortstack [c] {AlexNet }} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ \\shortstack [c] { FC6}}\t& \\footnotesize \\raisebox{-2.6ex}[0pt]{\\shortstack [c] { MP \\\\ + PCA$_{w} $ }} & \\footnotesize \\raisebox{-2ex}[0pt]{ 512} & \\footnotesize \\raisebox{-2ex}[0pt]{ 88.5} & \\footnotesize \\raisebox{-3ex}[0pt]{\\shortstack [c] {$ 3.81 $ \\\\ \\textcolor[rgb]{0,0,0}{(N-S)}}} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $60.7$\\\\ ($-$)}} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $66.2$\\\\ ($-$)}} & \\footnotesize Exploring the impact of proposal number. Patches are extracted by RPNs (see Figure \\ref{MultiplePass} (d)) and the features are encoded in an orderless way.\\\\ \\cline{2-11} \n& \\footnotesize \\raisebox{-1.6ex}[0pt]{ \\shortstack [c] { MOP-CNN } } & \\footnotesize \\raisebox{-2ex}[0pt]{\\shortstack [c] { AlexNet }} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ \\shortstack [c] { FC7}} & \\footnotesize \\raisebox{-2.5ex}[0pt]{\\shortstack [c] { VLAD \\\\ + PCA$_{w} $ }} & \\footnotesize \\raisebox{-2ex}[0pt]{ 2048} & \\footnotesize \\raisebox{-2ex}[0pt]{ 80.2} & \\footnotesize \\raisebox{-2ex}[0pt]{\\shortstack [c] { $-$}} & \\footnotesize\t\\raisebox{-2.0ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize\t\\raisebox{-2.0ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize Image patches are extracted densely, as shown in Figure \\ref{MultiplePass} (c). Multi-scale patch features are further embedded into VLAD descriptors. 
\\\\ \\cline{2-11} \n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { CNNaug-ss } } & \\footnotesize \\raisebox{-3ex}[0pt]{\\shortstack [c] { Overfeat\\\\ }} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ \\shortstack [c] { FC}}\t\t& \\footnotesize \\raisebox{-2ex}[0pt]{\\shortstack [c] { PCA$_{w} $ }} & \\footnotesize \\raisebox{-2ex}[0pt]{ 15k} & \\footnotesize \\raisebox{-2ex}[0pt]{ 84.3} & \\footnotesize \\raisebox{-3ex}[0pt]{\\shortstack [c] { $ 91.1 $ \\\\ \\textcolor[rgb]{0,0,0}{(mAP)}}} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $68.0$\\\\($-$)}} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $79.5$\\\\ ($-$)}} & \\footnotesize Image patches are extracted densely, as shown in Figure \\ref{MultiplePass} (c). Image regions at different locations with different sizes are included.\\\\ \\cline{2-11} \n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { MOF } } & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { CNN-M$^{\\ast}$ \\\\ }} & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { FC7 $\\bullet$ \\\\ Conv }}\t& \\footnotesize \\raisebox{-3ex}[0pt]{\\shortstack [c] { SP or MP\\\\ + BoW }} & \\footnotesize \\raisebox{-2ex}[0pt]{ 20k} & \\footnotesize \\raisebox{-2ex}[0pt]{ 76.8} & \\footnotesize \\raisebox{-3ex}[0pt]{\\shortstack [c] { $ 3.00 $ \\\\ \\textcolor[rgb]{0,0,0}{(N-S)}}} & \\footnotesize\t\\raisebox{-2.0ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize\t\\raisebox{-2.0ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize Exploring layer-level fusion scheme. 
Image patches are extracted using spatial pyramid modeling, as shown in Figure \\ref{MultiplePass} (b).\\\\ \\cline{2-11} \n& \\footnotesize \\raisebox{-2.7ex}[0pt]{ \\shortstack [c] { Multi-scale \\\\ CNN } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{ VGG16} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ \\shortstack [c] { Conv5}} & \\footnotesize \\raisebox{-2.7ex}[0pt]{\\shortstack [c] { SP or MP\\\\ + PCA$_{w} $ }} & \\footnotesize \\raisebox{-2ex}[0pt]{ 32k} & \\footnotesize \\raisebox{-2ex}[0pt]{ 89.6} & \\footnotesize \\raisebox{-3ex}[0pt]{\\shortstack [c] { $ 95.1 $ \\\\ \\textcolor[rgb]{0,0,0}{(mAP)}}} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $84.3$\\\\ ($-$)}} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $87.9$\\\\ ($-$)}} & \\footnotesize Image patches are extracted in a dense manner, as shown in Figure \\ref{MultiplePass} (c). Geometric invariance is considered when aggregating patch features.\\\\ \\cline{2-11} \n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { LDD } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{ VGG19} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ \\shortstack [c] { Conv5}}\t& \\footnotesize \\raisebox{-2.6ex}[0pt]{\\shortstack [c] {BoW \\\\ + PCA$_{w} $ }} & \\footnotesize \\raisebox{-2ex}[0pt]{500k} & \\footnotesize \\raisebox{-2ex}[0pt]{ 84.6 } & \\footnotesize \\raisebox{-2ex}[0pt]{\\shortstack [c] {$-$ }} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $83.3$ \\\\ ($-$)}} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $87.2$ \\\\ ($-$)}} & \\footnotesize Image patches are obtained using a uniform square mesh, as shown in Figure \\ref{MultiplePass} (a). 
Patch features are encoded into BoW descriptors.\\\\ \\cline{2-11} \n& \\footnotesize \\raisebox{-1.7ex}[0pt]{ \\shortstack [c] { DeepIndex } } & \\footnotesize\t\\raisebox{-2.7ex}[0pt]{ \\shortstack [c] { AlexNet \\\\ $\\bullet$ VGG19 }} & \\footnotesize\t\\raisebox{-2.7ex}[0pt]{ \\shortstack [c] { FC6-7 $\\bullet$\\\\ FC17-18 }}\t& \\footnotesize \\raisebox{-2.7ex}[0pt]{\\shortstack [c] {BoW + \\\\ PCA }} & \\footnotesize \\raisebox{-2ex}[0pt]{ 512} & \\footnotesize \\raisebox{-2ex}[0pt]{ 81.7} & \\footnotesize \\raisebox{-2.7ex}[0pt]{\\shortstack [c] { $ 3.32 $ \\\\ \\textcolor[rgb]{0,0,0}{(N-S)}}} & \\footnotesize\t\\raisebox{-2.0ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize \\raisebox{-2.7ex}[0pt]{ \\shortstack [c] { $75.4$\\\\ ($-$)}} & \\footnotesize Exploring layer-level and model-level fusion methods. Image patches are extracted using spatial pyramid modeling, as shown in Figure \\ref{MultiplePass} (b).\\\\ \\cline{2-11} \n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { CCS } } & \\footnotesize \\raisebox{-2ex}[0pt]{\\shortstack [c] { GoogLeNet }} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ \\shortstack [c] { Conv}}\t& \\footnotesize \\raisebox{-2.7ex}[0pt]{\\shortstack [c] { VLAD \\\\ + PCA$_{w} $ }} & \\footnotesize \\raisebox{-2ex}[0pt]{ 128} & \\footnotesize \\raisebox{-2ex}[0pt]{ 84.1} & \\footnotesize \\raisebox{-3ex}[0pt]{\\shortstack [c] { $ 3.81 $ \\\\ \\textcolor[rgb]{0,0,0}{(N-S)}}} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $64.8$\\\\ ($-$)}} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $76.8$\\\\ ($-$)}} & \\footnotesize Object proposals are extracted by RPNs, as shown in Figure \\ref{MultiplePass} (d). 
Object level and point level feature concatenation schemes are explored.\\\\ \\cline{2-11} \n\\Xhline{1.0pt}\n\\end{tabular}\n}\n\\vspace{-0.5em}\n\\end{table*}\n\\begin{table*}[!t]\n\\centering \n \\caption{ \\footnotesize Performance evaluation of methods in which DCNN models are fine-tuned, in a supervised or an unsupervised manner. ``CE Loss'' means the models are fine-tuned using the classification-based loss function in the form of Eq. \\ref{cross_entropy_loss}. ``Siamese Loss'' is in the form of Eq. \\ref{contrastive}. ``Regression Loss'' is in the form of Eq. \\ref{regressionLoss}. ``Triplet Loss'' is in the form of Eq. \\ref{triplet}. }\n \\vspace{-1em}\n \\label{Metric_learning_table_B} \n \\renewcommand{\\arraystretch}{1.0}\n\\resizebox{19.5cm}{!}{\n\\begin{tabular}{!{\\vrule width1.2bp}c|p{2.0cm}|c|p{1.35cm}|c|p{1.1cm}|p{1.25cm}|c|c|p{0.7cm}|c|c|p{7.8cm}!{\\vrule width1.2bp}}\n\\Xhline{1pt}\n\\footnotesize Type & \\footnotesize \\shortstack [c] { Method } & \\footnotesize \\shortstack [c] {Backbone \\\\ DCNN} & \\footnotesize \\shortstack [c] {Training \\\\ set}\t& \\footnotesize \\shortstack [c] {Output \\\\ Layer} & \\footnotesize \\shortstack [c] {Embed. \\\\ Aggre.}\t& \\footnotesize \\shortstack [c] { Loss \\\\ Function } & \\footnotesize \\shortstack [c] { Feat. \\\\ Dim } & \\footnotesize \\shortstack [c] {Holidays} & \\footnotesize \\shortstack [c] {UKB} & \\footnotesize \\shortstack [c] {Oxford5k \\\\ (+100k)} & \\footnotesize \\shortstack [c] {Paris6k \\\\ (+100k)} & \\footnotesize Brief Conclusions and Highlights \\\\ \\Xhline{1.0pt}\n\\multirow{5}{*}{ \\raisebox{-24ex}[0pt]{\\begin{tabular}[c]{@{}c@{}} \\rotatebox{90}{Supervised Fine-tuning}\\end{tabular}}} \n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { $\\!\\!\\!\\!\\!\\! 
$ Neural codes } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{ AlexNet} & \\footnotesize \\raisebox{-3.5ex}[0pt]{\\shortstack [c] { Landmarks \\\\ }} & \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [l] { FC6}}\t& \\footnotesize \\raisebox{-2ex}[0pt]{\\shortstack [c] { PCA }} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ \\shortstack [c] { CE Loss}} & \\footnotesize \\raisebox{-2ex}[0pt]{ 128} & \\footnotesize \\raisebox{-2ex}[0pt]{ 78.9 } & \\footnotesize \\raisebox{-3.5ex}[0pt]{\\shortstack [c] { $ 3.29 $ \\\\ \\textcolor[rgb]{0,0,0}{(N-S)}}} & \\footnotesize \\raisebox{-3.5ex}[0pt]{ \\shortstack [c] { $55.7$\\\\ (52.3)}} & \\footnotesize \\raisebox{-2.0ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize The first work which fine-tunes deep networks for IIR. Compressed neural codes and different layers are explored. \\\\ \\cline{2-13} \n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { Nonmetric } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{VGG16} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ $\\!\\!\\! $ Landmarks } & \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [l] { Conv5}}\t& \\footnotesize \\raisebox{-2ex}[0pt]{\\shortstack [c] { PCA$_w$ }} & \\footnotesize\t\\raisebox{-3.2ex}[0pt]{ \\shortstack [c] { $\\!\\!\\!\\! $ Regression \\\\ $\\!\\!\\!\\! $ Loss}} & \\footnotesize \\raisebox{-2ex}[0pt]{512} & \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] {$-$} } & \\footnotesize\t\\raisebox{-2ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize \\raisebox{-3.5ex}[0pt]{ \\shortstack [c] { $88.2$\\\\ (82.1)}} & \\footnotesize \\raisebox{-3.5ex}[0pt]{ \\shortstack [c] { $88.2$\\\\ (82.9)}} & \\footnotesize Similarity learning of similar and dissimilar pairs is performed by a neural network, optimized using regression loss. \\\\ \\cline{2-13} \n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { SIAM-FV } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{ VGG16} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ $\\!\\!\\!\\! 
$ Landmarks } & \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [l] { Conv5}}\t& \\footnotesize\t\\raisebox{-2.7ex}[0pt]{ \\shortstack [c] { FV + \\\\ PCA$_w$ }} & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { Siamese \\\\ Loss}} & \\footnotesize \\raisebox{-2ex}[0pt]{ 512} & \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] {$-$} } & \\footnotesize\t\\raisebox{-2ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $81.5$\\\\ (76.6)}} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $82.4$\\\\ ($-$)}} & \\footnotesize Fisher Vector is integrated on top of VGG and is trained with VGG simultaneously. \\\\ \\cline{2-13} \n& \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { Faster \\\\ R-CNN } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{ VGG16} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { Oxford5k \\\\ Paris6k } } & \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [l] { Conv5}}\t& \\footnotesize \\raisebox{-2ex}[0pt]{\\shortstack [c] { MP / SP }} & \\footnotesize\t\\raisebox{-3.2ex}[0pt]{ \\shortstack [c] { $\\!\\!\\!\\! $ Regression \\\\ $\\!\\!\\!\\! $ Loss}} & \\footnotesize \\raisebox{-2ex}[0pt]{ 512} & \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] {$-$} } & \\footnotesize\t\\raisebox{-2ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize \\raisebox{-3.5ex}[0pt]{ \\shortstack [c] { $75.1$\\\\ $(-)$}} & \\footnotesize \\raisebox{-3.5ex}[0pt]{ \\shortstack [c] { $80.7$\\\\ ($-$)}} & \\footnotesize RPN is fine-tuned, based on bounding box coordinates and class scores for specific region query which is region-targeted. 
\\\\ \\cline{2-13} \n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { SIFT-CNN } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{VGG16} & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { Holidays \\\\ UKB }} & \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [l] { Conv5}}\t& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { SP }} & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { Siamese \\\\ Loss}} & \\footnotesize \\raisebox{-2ex}[0pt]{ 512} & \\footnotesize \\raisebox{-2ex}[0pt]{ 88.4 } & \\footnotesize \\raisebox{-3.5ex}[0pt]{\\shortstack [c] { $ 3.91 $ \\\\ \\textcolor[rgb]{0,0,0}{(N-S)}}} & \\footnotesize\t \\raisebox{-2ex}[0pt]{ \\shortstack [c] {$-$}} & \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] {$-$}} & \\footnotesize SIFT features are used as supervisory information for mining positive and negative samples. \\\\ \\cline{2-13} \n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { $\\!\\!\\!\\!\\! $ Quartet-Net } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{ VGG16} & \\footnotesize\t\\raisebox{-3.5ex}[0pt]{ \\shortstack [c] { GeoPair \\\\ }} & \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [l] { FC6}}\t& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { PCA }} & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { Siamese \\\\ Loss}} & \\footnotesize \\raisebox{-2ex}[0pt]{ 128} & \\footnotesize \\raisebox{-2ex}[0pt]{71.2 } & \\footnotesize \\raisebox{-3.5ex}[0pt]{\\shortstack [c] { $ 87.5 $ \\\\ \\textcolor[rgb]{0,0,0}{(mAP)}}} & \\footnotesize \\raisebox{-3.5ex}[0pt]{ \\shortstack [c] { $48.5$\\\\ ($-$)}} & \\footnotesize \\raisebox{-3.5ex}[0pt]{ \\shortstack [c] {$48.8$\\\\($-$)}} & \\footnotesize Quartet-net learning is explored to improve feature discrimination where double-margin contrastive loss is used. 
\\\\ \\cline{2-13} \n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { NetVLAD } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{ VGG16} & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { $\\!\\!\\!\\!\\! $ Tokyo Time \\\\ Machine }} & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { VLAD \\\\ Layer }}\t& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { PCA$_{w}$ }} & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { Triplet \\\\ Loss}} & \\footnotesize \\raisebox{-2ex}[0pt]{ 256} & \\footnotesize \\raisebox{-2ex}[0pt]{ 79.9 } & \\footnotesize\t\\raisebox{-2ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $62.5$\\\\ ($-$)}} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $72.0$\\\\ ($-$)}} & \\footnotesize VLAD is integrated at the last convolutional layer of VGG16 network as a plugged layer. \\\\ \\cline{2-13} \\textbf{}\n& \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { Deep \\\\ Retrieval } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{ ResNet-101} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ $\\!\\!\\!\\!\\! $ Landmarks } & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { Conv5 \\\\ Block }}\t& \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { MP + \\\\ PCA$_w$ }} & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { Triplet \\\\Loss}} & \\footnotesize \\raisebox{-2ex}[0pt]{ 2048} & \\footnotesize \\raisebox{-2ex}[0pt]{ 90.3 } & \\footnotesize\t\\raisebox{-2ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize \\raisebox{-3.5ex}[0pt]{ \\shortstack [c] { $86.1$\\\\ (82.8)}} & \\footnotesize \\raisebox{-3.5ex}[0pt]{ \\shortstack [c] { $94.5$\\\\ (90.6)}} & \\footnotesize Dataset is cleaned automatically. Features are encoded by R-MAC. RPN is used to extract the most relevant regions. 
\\\\ \\cline{2-13} \n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { DELF } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{ ResNet-101} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ GLDv1 } & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { Conv4 \\\\ Block }}\t& \\footnotesize\t\\raisebox{-2.6ex}[0pt]{ \\shortstack [c] { $\\!\\!\\!\\! $ Attention \\\\ + PCA$_w$ }} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ \\shortstack [c] { CE Loss}} & \\footnotesize \\raisebox{-2ex}[0pt]{ 2048} & \\footnotesize \\raisebox{-2ex}[0pt]{$-$} & \\footnotesize \\raisebox{-2ex}[0pt]{$-$} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $83.8$\\\\ (82.6)}} & \\footnotesize \\raisebox{-3ex}[0pt]{ \\shortstack [c] { $85.0$\\\\ (81.7)}} & \\footnotesize Exploring the FCN to extract region-level features and construct feature pyramids of different sizes. \\\\ \\cline{2-13} \n \\Xhline{1.0pt}\n\\multirow{7}{*}{ \\raisebox{-13ex}[0pt]{\\begin{tabular}[c]{@{}c@{}} \\rotatebox{90}{Unsupervised Fine-tuning} \\end{tabular}}}\n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { MoM } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{ \\shortstack [c] { VGG16 }} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ Flickr 7M } & \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [l] { Conv5}}\t& \\footnotesize \\raisebox{-3ex}[0pt]{\\shortstack [c] { MP + \\\\ PCA$_{w} $ }} & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { Siamese \\\\ Loss}} & \\footnotesize \\raisebox{-2ex}[0pt]{ 64} & \\footnotesize \\raisebox{-2ex}[0pt]{ 87.5} & \\footnotesize\t\\raisebox{-2ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize \\raisebox{-3.5ex}[0pt]{ \\shortstack [c] { $78.2$\\\\ (72.6)}} & \\footnotesize \\raisebox{-3.5ex}[0pt]{ \\shortstack [c] { $85.1$\\\\ (78.0)}} & \\footnotesize Exploring manifold learning for mining dis/similar samples. 
Features are tested globally and regionally.\\\\ \\cline{2-13} \n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { GeM } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{ \\shortstack [c] { VGG16 }} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ Flickr 7M } & \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [l] { Conv5}}\t& \\footnotesize \\raisebox{-3ex}[0pt]{\\shortstack [c] { GeM \\\\ Pooling }} & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { Siamese \\\\ Loss}} & \\footnotesize \\raisebox{-2ex}[0pt]{ 512} & \\footnotesize \\raisebox{-2ex}[0pt]{ 83.1} & \\footnotesize\t\\raisebox{-2ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize \\raisebox{-3.5ex}[0pt]{ \\shortstack [c] { $82.0$\\\\ (76.9)}} & \\footnotesize \\raisebox{-3.5ex}[0pt]{ \\shortstack [c] { $79.7$\\\\ (72.6)}} & \\footnotesize Fine-tuning CNNs on an unordered dataset. Samples are selected from an automated 3D reconstruction system. \\\\ \\cline{2-13} \n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { SfM-CNN } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{ \\shortstack [c] { VGG16 }} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ Flickr 7M } & \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [l] {Conv5}}\t& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { PCA$_{w}$ }} & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { Siamese \\\\ Loss}} & \\footnotesize \\raisebox{-2ex}[0pt]{512} & \\footnotesize \\raisebox{-2ex}[0pt]{ 82.5} & \\footnotesize\t\\raisebox{-2ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize \\raisebox{-3.5ex}[0pt]{ \\shortstack [c] { $77.0$\\\\ (69.2)}} & \\footnotesize \\raisebox{-3.5ex}[0pt]{ \\shortstack [c] { $83.8$\\\\ (76.4)}} & \\footnotesize Employing Structure-from-Motion to select positive and negative samples from unordered images. \\\\ \\cline{2-13} \n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { MDP-CNN } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{ ResNet-101} & \\footnotesize\t\\raisebox{-2ex}[0pt]{ $\\!\\!\\!\\!\\! 
$ Landmarks } & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { Conv5 \\\\ Block }}\t& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { SP }} & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { Triplet \\\\ Loss}} & \\footnotesize \\raisebox{-2ex}[0pt]{ 2048} & \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] {$-$} } & \\footnotesize\t\\raisebox{-2ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize \\raisebox{-3.5ex}[0pt]{ \\shortstack [c] { $85.4$\\\\ (85.1)}} & \\footnotesize \\raisebox{-3.5ex}[0pt]{ \\shortstack [c] { $96.3$\\\\ (94.7)}} & \\footnotesize Exploring global feature structure by modeling the manifold learning to select positive and negative pairs. \\\\ \n\\cline{2-13} \n& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { IME-CNN } } & \\footnotesize\t\\raisebox{-2ex}[0pt]{ ResNet-101} & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { $\\!\\!\\!\\! $ Oxford105k \\\\ Paris106k }} & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { IME \\\\ Layer }}\t& \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] { MP }} & \\footnotesize\t\\raisebox{-3ex}[0pt]{ \\shortstack [c] { $\\!\\!\\!\\! $ Regression \\\\ $\\!\\!\\!\\!\\! $ Loss}} & \\footnotesize \\raisebox{-2ex}[0pt]{ 2048} & \\footnotesize \\raisebox{-2ex}[0pt]{ \\shortstack [c] {$-$} } & \\footnotesize\t\\raisebox{-2ex}[0pt]{\\shortstack [c] {$-$}} & \\footnotesize \\raisebox{-3.5ex}[0pt]{ \\shortstack [c] { $92.0$\\\\ (87.2)}} & \\footnotesize \\raisebox{-3.5ex}[0pt]{ \\shortstack [c] { $96.6$\\\\ (93.3)}} & \\footnotesize Graph-based manifold learning is explored to mine the matching and non-matching pairs in unordered datasets. 
\\\\ \\cline{2-13} \n\\Xhline{1.0pt}\n\\end{tabular}\n}\n\\end{table*}\n\\textbf{Evaluation for single feedforward pass.} \\textcolor{black}{In general, we observe that fully-connected layers used as feature extractors may give a lower accuracy (\\eg 74.7\\% on Holidays in ), compared to using convolutional layers in Table \\ref{Table_retrieval_off_the_shelf}. For the case where the same VGG net is used, the way to embed or aggregate features is critical. The methods shown in Figure \\ref{SinglePass} improve the discrimination of convolutional feature maps and perform differently in Table \\ref{Table_retrieval_off_the_shelf}, 66.9\\% of R-MAC and 58.9\\% of SPoC on Oxford-5k, differences which we see as critical factors for further analysis. If embedded by a BoW model, the results are competitive on Oxford-5k and Paris-6k (73.9\\% and 82.0\\%, respectively), while its codebook size is 25k, which may affect retrieval efficiency. Moreover, layer-level feature fusion improves retrieval accuracy. Yu \\etal combine three layers (mAP of 91.4\\% on Holidays), outperforming the performance of non-fusion method (mAP of 80.2\\%).}\n\\textbf{Evaluation for multiple feedforward pass.}\nResults for the methods of Figure \\ref{MultiplePass} are reported in Table \\ref{Table_retrieval_off_the_shelf}. Among them, extracting image patches densely using VGG has the highest performance on the 4 datasets , and rigid grid with BoW encoding is competitive (mAP of 87.2\\% on Paris-6k). These two methods consider more patches, even background information, when used for feature extraction. Instead of generating patches densely, region proposals and spatial pyramid modeling introduce a degree of purpose and efficiency in processing image objects. Spatial information is better maintained using multiple-pass schemes than with single-pass. For example, a shallower network (AlexNet) and region proposal networks in have a UKBench N-Score of 3.81, higher than using deeper networks ,,. 
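For reference, the two accuracy measures quoted throughout these comparisons, mAP and the UKBench N-Score, can be computed as in the following minimal sketch. It is a simplification: precision is averaged only over relevant items that appear in the ranking, which matches the usual definition when the full database is ranked per query.

```python
import numpy as np

def average_precision(ranked_relevant):
    # ranked_relevant: booleans over the ranked list, True = relevant item
    hits, precisions = 0, []
    for rank, rel in enumerate(ranked_relevant, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)  # precision at each relevant hit
    return float(np.mean(precisions)) if precisions else 0.0

def mean_average_precision(rankings):
    # mAP: mean of per-query average precisions
    return float(np.mean([average_precision(r) for r in rankings]))

def ukbench_n_score(rankings):
    # N-Score: average number of relevant images among the top-4 results;
    # each UKBench object has exactly 4 images, so the maximum is 4.0
    return float(np.mean([sum(r[:4]) for r in rankings]))
```

For example, a query whose four true matches are ranked 1, 2, 4 and 7 receives AP = (1/1 + 2/2 + 3/4 + 4/7)/4, roughly 0.83.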
Besides feeding image patches into the same network, model-level fusion also exploits complementary spatial information to improve accuracy. For instance, as reported in , which combines AlexNet and VGG, the results on Holidays (81.7\\% of mAP) and UKBench (3.32 of N-Score) are better than those in (76.75\\% and 3.00, respectively). \n\\textbf{Evaluation for supervised fine-tuning.} Compared to off-the-shelf models, fine-tuning deep networks usually improves accuracy (see Table \\ref{Metric_learning_table_B}). For instance, the result on Oxford-5k using a pre-trained VGG is improved from 66.9\\% to 81.5\\% in when a single-margin Siamese loss is used. Similar trends can also be observed on the Paris-6k dataset. For classification-based fine-tuning, performance may be improved by using powerful DCNNs and feature enhancement methods such as the attention mechanism in , with mAP increased from 55.7\\% in to 83.8\\% in on Oxford-5k. As for pairwise ranking loss fine-tuning, in some cases the loss used for fine-tuning is essential for performance improvement. For example, RPN is re-trained using regression loss on Oxford-5k and Paris-6k (75.1\\% and 80.7\\%, respectively) . Its results are lower than the results from (88.2\\% and 88.2\\%, respectively), where a transformation matrix is used to learn visual similarity. However, when RPN is trained using triplet loss such as , retrieval effectiveness is improved significantly, with results of 86.1\\% (on Oxford-5k) and 94.5\\% (on Paris-6k). Feature embedding methods are important for retrieval accuracy; Ong \\etal embedded \\textit{Conv5} feature maps by Fisher Vector and achieved an mAP of 81.5\\% on Oxford-5k, while embedding feature maps using VLAD achieves an mAP of 62.5\\% on this dataset ,.\n\\textbf{Evaluation for unsupervised fine-tuning.} \nCompared to supervised fine-tuning, unsupervised fine-tuning methods are relatively less explored. 
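The single-margin Siamese (contrastive) and triplet losses referred to in these fine-tuning comparisons have the standard forms sketched below. This is illustrative only; the margin values are assumptions, not those of any particular method compared here, and the exact formulations vary per paper.

```python
def contrastive_loss(dist, match, margin=0.7):
    # Siamese (single-margin contrastive) loss for one embedding pair.
    # dist: Euclidean distance between the embeddings;
    # match: 1 for a matching pair, 0 for a non-matching pair.
    return match * 0.5 * dist ** 2 + (1 - match) * 0.5 * max(0.0, margin - dist) ** 2

def triplet_loss(d_ap, d_an, margin=0.1):
    # Triplet loss: push the anchor-negative distance beyond the
    # anchor-positive distance by at least the margin.
    return max(0.0, d_ap - d_an + margin)
```

A matching pair at distance zero incurs no loss, and a non-matching pair is penalized only while it sits inside the margin; the triplet loss instead constrains relative distances within each (anchor, positive, negative) triple.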
The difficulty for unsupervised fine-tuning is to mine sample relevance without ground-truth labels. In general, unsupervised fine-tuning methods can be expected to have lower performance than supervised ones. For instance, supervised fine-tuning using Siamese loss achieves an mAP of 88.4\\% on Holidays, while unsupervised fine-tuning using the same loss function in ,, achieves 82.5\\%, 83.1\\%, and 87.5\\%, respectively. However, unsupervised fine-tuning methods can achieve similar accuracy, or even outperform supervised fine-tuning, if a suitable feature embedding method is used. For instance, Zhao \\etal explore global feature structure via manifold learning, producing an mAP of 85.4\\% (on Oxford-5k) and 96.3\\% (on Paris-6k), which is similar to the supervised results of 86.1\\% (on Oxford-5k) and 94.5\\% (on Paris-6k). As another example, ResNet-101 fine-tuned with cross-entropy loss achieves 83.8\\% on Oxford-5k , while the precision is further improved to 92.0\\% when an IME layer is used to embed features and fine-tuned in an unsupervised way . Note that fine-tuning strategies are related to the type of the target retrieval datasets. As demonstrated in Table \\ref{GLD_evaluation_on_training_dataset} and , fine-tuning on different datasets may yield different final retrieval performance. \n\\textbf{Network depth.} We compare the efficacy of DCNNs by depth, following the fine-tuning protocols\\footnote{https://github.com/filipradenovic/cnnimageretrieval-pytorch} in . For fair comparisons, all convolutional features from these backbone DCNNs are aggregated by MAC , and fine-tuned using the same loss function with the same learning rate; thus the adopted methods are identical except for the DCNN depth. We use the default feature dimensions (\\emph{i.e.} AlexNet (256), VGG (512), GoogLeNet (1024), ResNet-50/101 (2048)). The results are reported in Figure \\ref{CNNandaggregation_depth_dim_dataset} (a). 
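The MAC aggregation used here, together with the SPoC and GeM pooling alternatives that appear in the comparisons, can be sketched in a few lines of NumPy. This is an illustrative re-implementation, not the code used in the compared papers:

```python
import numpy as np

def mac(fmap):
    # MAC: maximum activation per channel over all spatial locations
    return fmap.reshape(fmap.shape[0], -1).max(axis=1)

def spoc(fmap):
    # SPoC: average pooling per channel over all spatial locations
    return fmap.reshape(fmap.shape[0], -1).mean(axis=1)

def gem(fmap, p=3.0, eps=1e-6):
    # GeM: generalized mean pooling; p = 1 recovers SPoC,
    # and p -> infinity approaches MAC
    x = np.clip(fmap.reshape(fmap.shape[0], -1), eps, None)
    return ((x ** p).mean(axis=1)) ** (1.0 / p)

fmap = np.random.rand(512, 7, 7)       # a Conv5-like feature map (C, H, W)
desc = gem(fmap)                       # 512-dim global descriptor
desc = desc / np.linalg.norm(desc)     # L2-normalize before matching
```

Each operator maps a (C, H, W) feature map to a C-dimensional global descriptor, which is then typically whitened and L2-normalized before similarity search.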
We observe that deeper networks consistently lead to better accuracy, as they extract more discriminative features.\n\\textbf{Feature aggregation methods.} The methods of embedding convolutional feature maps were illustrated in Figure \\ref{SinglePass}. We use the off-the-shelf VGG (without updating parameters) on the Oxford and Paris datasets. The results are reported in Figure \\ref{CNNandaggregation_depth_dim_dataset} (b). We observe that different ways to aggregate the same off-the-shelf DCNN lead to differences in retrieval performance. These reported results provide a reference for feature aggregation when one uses convolutional layers for retrieval tasks.\n\\textbf{Global feature dimension.} We add fully-connected layers on top of the pooled convolutional features of ResNet-50 to obtain global descriptors with dimensions varying from 32 to 8192. The results on 5 datasets are shown in Figure \\ref{CNNandaggregation_depth_dim_dataset} (c). As expected, higher-dimensional features usually capture more semantics and are helpful for retrieval. The performance tends to be stable when the dimension is very large.\n\\textcolor{black}{\\textbf{Number of image regions.} We compare the retrieval performance when different numbers of regions are fed and other components are kept the same, as depicted in Figure \\ref{CNNandaggregation_depth_dim_dataset} (d). Convolutional features of each region are pooled into 2048-dim regional features by MAC and then aggregated into a global one. Note that the final memory requirement is identical to the case where a holistic image is used as input (\\ie the case where only one region is used). Regional inputs on an image are extracted with a 40\\% overlap between neighboring regions, with the number varying from 1 to 41. For Oxford-5k, the best result is given by the case where 9 image regions are used. For the remaining datasets, 3 image regions give the best results. 
Finally, extracting more regions from one image lowers the retrieval mAP. A likely reason is that features of background or irrelevant regions are also aggregated, which negatively affects performance.}\n\\begin{table}[t]\n\\caption{Evaluations of training sets and retrieval reranking. Numerical results are cited from .\n}\n\\vspace{-1em}\n\\label{GLD_evaluation_on_training_dataset}\n\\footnotesize\n\\setlength{\\tabcolsep}{0.6mm}\n\\begin{tabular}{|cc|c|c|cc|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{\\multirow{3}{*}{Conditions}} \n & \\multirow{3}{*}{Global} & \\multirow{3}{*}{\\begin{tabular}[c]{@{}c@{}}Local\\\\ reranking\\end{tabular}} & \\multicolumn{2}{c|}{Training set} & \\multirow{3}{*}{$\\mathcal{R}$Oxf} & \\multirow{3}{*}{$\\mathcal{R}$Par} & \\multirow{3}{*}{\\begin{tabular}[c]{@{}c@{}}GLD-v2\\\\ testing\\end{tabular}} \\\\ \\cline{5-6}\n & & & & \\multicolumn{1}{c|}{GLD-v1} & \\begin{tabular}[c]{@{}c@{}}GLD-v2\\\\ -clean\\end{tabular} & & & \\\\ \\hline\n\\multicolumn{1}{|c|}{\\multirow{4}{*}{\\begin{tabular}[c]{@{}c@{}}ResNet\\\\ -50\\end{tabular}}} & Case 1 & \\tickYes & \\tickNo & \\multicolumn{1}{c|}{\\tickYes} & \\tickNo & 45.1 & 63.4 & 20.4 \\\\ \\cline{2-9} \n\\multicolumn{1}{|c|}{} & Case 2 & \\tickYes & \\tickNo & \\multicolumn{1}{c|}{\\tickNo} & \\tickYes & 51.0 & 71.5 & 24.1 \\\\ \\cline{2-9} \n\\multicolumn{1}{|c|}{} & Case 3 & \\tickYes & \\tickYes & \\multicolumn{1}{c|}{\\tickYes} & \\tickNo & 54.2 & 64.9 & 22.3 \\\\ \\cline{2-9} \n\\multicolumn{1}{|c|}{}& Case 4 & \\tickYes & \\tickYes & \\multicolumn{1}{c|}{\\tickNo} & \\tickYes & 57.9 & 71.0 & 24.3 \\\\ \\hline\n\\multicolumn{1}{|c|}{\\multirow{4}{*}{\\begin{tabular}[c]{@{}c@{}}ResNet\\\\ -101 \\end{tabular}}} & Case 5 & \\tickYes & \\tickNo & \\multicolumn{1}{c|}{\\tickYes} & \\tickNo & 51.2 & 64.7 & 21.7 \\\\ \\cline{2-9} \n\\multicolumn{1}{|c|}{} & Case 6 & \\tickYes & \\tickNo & \\multicolumn{1}{c|}{\\tickNo} & \\tickYes & 55.6 & 72.4 & 26.0 \\\\ \\cline{2-9} \n\\multicolumn{1}{|c|}{} 
& Case 7 & \\tickYes & \\tickYes & \\multicolumn{1}{c|}{\\tickYes} & \\tickNo & 59.3 & 65.5 & 24.3 \\\\ \\cline{2-9} \n\\multicolumn{1}{|c|}{} & Case 8 & \\tickYes & \\tickYes & \\multicolumn{1}{c|}{\\tickNo} & \\tickYes & 64.0 & 72.8 & 26.8 \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\\textbf{Fine-tuning datasets and retrieval reranking.} We compare performance on $\\mathcal{R}$Oxford-5k, $\\mathcal{R}$Paris-6k, and GLD-v2, in order to compare the role of different fine-tuning training sets and the effectiveness of retrieval reranking. Table \\ref{GLD_evaluation_on_training_dataset} lists 8 experimental scenarios using two network backbones, as in . \n\\textcolor{black}{Since GLD-v2 provides class-level ground-truth, the images it includes show large context diversity and may pose challenges to network fine-tuning. Thus, the pre-processing steps proposed in ,, are necessary to select more coherent images, yielding the GLD-v2-clean subset. As a result, when using the global features only, this cleaned version of the training set improves the performance, as observed in Cases 1/5 and Cases 2/6 for ResNet-50/ResNet-101, respectively.} As an important postprocessing strategy, reranking further boosts accuracy after the initial filtering step that uses global features.\n \\begin{figure*}[!t]\n\\centering \n {\n \\includegraphics[width=\\textwidth]{./Figures/CNNandaggregation_depth_dim_dataset.pdf} \n } \n \\vspace{-2em}\n \\caption{(a) The effectiveness of different DCNNs; (b) Comparison of the feature aggregation methods in Figure \\ref{SinglePass}; (c) The impact of global feature dimension by using ResNet-50; (d) Performance comparison when aggregating different numbers of image regions.} \n \\label{CNNandaggregation_depth_dim_dataset} \n \\end{figure*}", "id": "4289168f-a812-4899-b8a0-52669cd0ba36", "level": "subsection", "origin_cites_number": 53, "parent_id": "a6c69e00-d8a1-492b-8514-fab570f109d2", "prefix_titles": [ [ "title", "Deep Learning for 
Instance Retrieval: A Survey" ], [ "section", "State of the Art Performance" ], [ "subsection", "Performance Comparison and Analysis" ] ], "subsections": [], "title": "Performance Comparison and Analysis" }, { "cite_extract_rate": 0.7272727272727271, "cites": [ 2141, 2136, 2144, 2142, 2110, 2093, 2111, 2143 ], "content": "\\label{Conclusion_and_Future_Directions}\n\\textcolor{black}{As a comprehensive yet timely survey on instance retrieval using deep learning, this paper has discussed the main challenges, presented a taxonomy of recent developments according to their roles in implementing instance retrieval, highlighted the recent representative methods and analyzed their merits and demerits, and discussed the datasets, evaluation protocols, and SOTA performance. Nowadays the exponentially increasing amount of image and video data due to surveillance, e-commerce, medical images, handheld devices, robotics, \\etc, offers endless potential for applications of instance retrieval. \nAlthough significant progress has been made, as discussed in Section \\ref{Keychallenges}, the main challenges in instance retrieval have not been fully addressed. Below we identify a number of promising directions for future research. }\n\\textcolor{black}{\\textbf{(1) Accurate and robust feature representations.} \nOne of the main challenges in instance retrieval is the large intra-class variation due to changes in viewpoint, scale, illumination, weather conditions and background clutter, \\emph{etc.}, as we discussed in Section~\\ref{Keychallenges}. However, DCNN representations have very little invariance, even when trained with large amounts of augmented data . Fortunately, pre-deep-learning instance retrieval produced many important ideas for handling such intra-class variations, such as local interest point detectors and local invariant descriptors. 
Therefore, it is worth enabling DCNNs to learn more accurate and robust representations by leveraging such traditional wisdom to design better DCNNs. In addition, unlike most objects in existing benchmarks, which are rigid, planar and textured, textureless objects, 3D objects, transparent objects, reflective surfaces, \\etc are still very hard to retrieve.}\n\\textcolor{black}{Moreover, pursuing accuracy alone is not sufficient, as instance retrieval systems should be able to resist potential adversarial attacks. \nRecently, deep networks have been shown to be fooled rather easily by adversarial examples , \n\\ie images with intentionally designed yet nearly imperceptible perturbations added, which raises serious safety and robustness concerns. However, adversarial robustness in instance retrieval , has received very little attention and merits further effort.}\n\\textcolor{black}{\\textbf{(2) Compact and efficient deep representations.} In instance retrieval, searching efficiently is as critical as searching accurately, especially for pervasive mobile or wearable devices with very limited computing resources. However, existing methods adopt large-scale, energy-hungry DCNNs that are very difficult to deploy on mobile devices. Hence, there is a pressing need to develop compact, efficient, yet reusable deep representations tailored to resource-limited devices, for example by using binary neural networks ,,.}\n\\textcolor{black}{\\textbf{(3) Learning with fewer labels.} Deep learning requires large amounts of high-quality labeled data to achieve high accuracy. The presence of label errors or a limited amount of labeled data can greatly degrade a DCNN's accuracy. However, collecting massive amounts of accurately labeled data is costly. In practical scenarios, datasets like GLDv2 , have long-tailed distributions and noisy labels. 
Thus, to address such limitations, few-shot learning , self-supervised learning , imbalanced learning , noisy-label-aware learning \\etc deserve more attention in instance retrieval in the future. }\n\\textcolor{black}{\\textbf{(4) Continual learning for instance retrieval.}\nSpecifically, current IIR methods make restrictive assumptions, such as the training data being sufficient and stationary, and retraining from scratch being possible when new data becomes available; these assumptions are problematic in realistic conditions. Our living world is continuously varying, and in general data distributions are often non-stationary, new data may be added, and previously unseen classes may be encountered. Thus, continual learning plays a vital role in continuously updating IIR systems. The key issues are how to retain and utilize the previously learned knowledge, how to update the retrieval system as new images become available, and how to learn and improve over time.\n}\n\\textbf{(5) Privacy-aware instance retrieval.} Most IIR systems concentrate on improving accuracy or efficiency, and higher performance might come at the cost of users' privacy. Therefore, in some cases, such as personalized search systems, privacy protection is also an important issue to be considered. Deep models should be privacy-aware and protect users' personalized search experience, so that users need not worry about using such IIR systems.\n\\textbf{(6) Video instance retrieval.} Searching for a specific instance in still images cannot always meet the requirements of scenarios such as searching for suspects in video surveillance systems. Currently, with the rapid growth of video data, retrieving a certain object, place, or action in videos is becoming increasingly important. 
For video instance retrieval, 3D-CNN models need to be built to learn videos' spatio-temporal representations to compute the semantic similarity of instances.\n\\footnotesize\n\\bibliographystyle{IEEEtran}\n\\bibliography{IEEEabrv,myreference}\n\\vspace{-2.0 cm}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./Figures/WeiChen.pdf}}]{Wei Chen} received the Ph.D. degree from Leiden University in 2021. Before starting his PhD study at Leiden University, he received his Master's degree from the National University of Defense Technology, China, in 2016. His research interest focuses on cross-modal retrieval with deep learning methods, also in the context of incremental learning. He has published papers in international conferences and journals including CVPR, ACM MM, PR, Neurocomputing, and IEEE TMM \\etc\n\\end{IEEEbiography}\n\\vspace{-1.7 cm} \n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./Figures/Yu_Liu.pdf}}]{Yu Liu} is currently an Associate Professor in the International School\nof Information Science and Engineering at Dalian University of Technology, China.\nHe was a post-doctoral researcher in the PSI group of KU Leuven, Belgium. \nIn 2018, he received the PhD degree from Leiden University, Netherlands.\nHis research interests include object recognition and retrieval in the context of continual learning and zero-shot learning. \nHe has co-organized several workshops at ICCV, CVPR and ECCV. \nHe has published papers in CVPR, ICCV, ECCV, ACM MM, IEEE TIP, etc., and \nreceived the best paper award at MMM2017.\n\\end{IEEEbiography}\n\\vspace{-1.7 cm} \n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./Figures/Weiping_Wang.pdf}}]{Weiping Wang} received the Ph.D. degree in systems engineering from the National University of Defense Technology, Changsha, China. He is a Professor at the Academy of Advanced Technology Research of Hunan. 
He has more than 200 papers published in journals and conferences including IEEE Transactions on Multimedia, IEEE Transactions on Vehicular Technology, \\etc His current research interests focus on intelligent decision experimentation, multi-modal knowledge extraction and knowledge-integrated computation.\n\\end{IEEEbiography}\n\\vspace{-2.0 cm} \n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./Figures/Erwin.pdf}}]{Erwin M. Bakker} is co-director of the LIACS Media Lab at Leiden University. He has published widely in the fields of image retrieval, audio analysis and retrieval, and bioinformatics. He was closely involved with the start of the International Conference on Image and Video Retrieval (CIVR). Moreover, he regularly serves as a program committee member or organizing committee member for scientific multimedia and human-computer interaction conferences and workshops.\n\\end{IEEEbiography}\n\\vspace{-2.0 cm} \n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./Figures/Theodoros_Georgiou.pdf}}]{Theodoros Georgiou} received the Ph.D. degree from Leiden University in 2021. His research interest focuses on deep learning methods applied to higher-than-two-dimensional data. Before starting his PhD study at Leiden University, he received his Master's degree from Leiden University in 2016. He has published papers in international conferences and journals including ICPR, CBMI, WCCI and IJMIR.\n\\end{IEEEbiography}\n\\vspace{-2.0 cm} \n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./Figures/Paul_Fieguth.pdf}}]{Paul Fieguth} is co-director of the Vision \\& Image Processing Group in Systems Design Engineering at the University of Waterloo, where he is Professor and Associate Dean. 
He received the Ph.D.\\ degree from the Massachusetts Institute of Technology, Cambridge, in 1995, and has held visiting appointments at the University of Heidelberg in Germany, at INRIA/Sophia in France, at the Cambridge Research Laboratory in Boston, at Oxford University and the Rutherford Appleton Laboratory in England. His research interests include statistical signal and image processing, hierarchical algorithms, data fusion, machine learning, and the interdisciplinary applications of such methods. He has published textbooks on Statistical Image Processing and on Complex Systems theory.\n\\end{IEEEbiography}\n\\vspace{-1.7 cm} \n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./Figures/LiSmileGoodFace.pdf}}]{Li Liu} received the Ph.D. degree in information and communication engineering from the National University of Defense Technology, China, in 2012. She is currently a Professor with the College of System Engineering. She has held visiting appointments at the University of Waterloo, Canada, at the Chinese University of Hong Kong, and at the University of Oulu, Finland. Her current research interests include computer vision, pattern recognition and machine learning. Her papers have currently over 7800 citations in Google Scholar.\n\\end{IEEEbiography}\n\\vspace{-1.7 cm}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./Figures/Michael_Lew.pdf}}]{Michael S. Lew} is the head of the Deep Learning and Computer Vision Research Group and a full Professor at LIACS, Leiden University. He has published over a dozen books and 190 peer-reviewed scientific articles in the areas of image retrieval, computer vision, and deep learning. Notably, he had the most cited paper in the ACM Transactions on Multimedia and one of the top 10 most cited articles in the history (out of more than 16,000 articles) of the ACM SIGMM. 
He was also a founding member of the advisory committee for the TRECVID video retrieval evaluation project, chair of the steering committee for the ACM International Conference on Multimedia Retrieval and a member of the ACM SIGMM Executive Committee. \n\\end{IEEEbiography}\n\\vspace{-1.7 cm} \n \\newpage\n\\end{document}", "id": "6d16c13c-e552-486e-8c1d-7e8590b69688", "level": "section", "origin_cites_number": 11, "parent_id": "515509a6-319a-4385-b1b5-4418a83d2bc2", "prefix_titles": [ [ "title", "Deep Learning for Instance Retrieval: A Survey" ], [ "section", "Conclusions and Outlooks" ] ], "subsections": [], "title": "Conclusions and Outlooks" } ]
96
[ 2093, 2092, 2091, 2094, 2090, 97, 8490, 2089, 2095, 209, 7543, 2101, 514, 2096, 2098, 2102, 2105, 2108, 2104, 2103, 8524, 7544, 2109, 2099, 2097, 2106, 2107, 629, 2100, 7545, 2110, 2112, 2111, 2113, 2115, 2114, 2118, 2119, 2116, 2117, 8525, 7546, 2120, 2121, 7547, 2122, 7548, 2124, 2123, 2125, 2126, 2127, 2128, 2129, 309, 2130, 7549, 2131, 2132, 9094, 2135, 2133, 2134, 2136, 2137, 2140, 8526, 2139, 2138, 2141, 2144, 2142, 2143 ]
0.761756
[ "Amir Rasouli" ]
Deep Learning for Vision-based Prediction: A Survey
2020
2020-06-30T20:26:46Z
cs.CV
Vision-based prediction algorithms have a wide range of applications including autonomous driving, surveillance, human-robot interaction, and weather prediction. The objective of this paper is to provide an overview of the field in the past five years with a particular focus on deep learning approaches. For this purpose, we categorize these algorithms into video prediction, action prediction, trajectory prediction, body motion prediction, and other prediction applications. For each category, we highlight the common architectures, training methods and types of data used. In addition, we discuss the common evaluation metrics and datasets used for vision-based prediction tasks. A database of all the information presented in this survey, cross-referenced according to papers, datasets and metrics, can be found online at \url{https://github.com/aras62/vision-based-prediction}.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "54da09bf-16a7-43ca-8a25-4aafacb8f53e", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ] ], "subsections": [ "2fec0d8e-0911-43a3-a218-2d210969c7e7", "cbf20d18-4b97-4aff-b65c-6034395a9fa5", "80143ee1-3a53-4ec6-96bf-ec81cc495f85", "040ae750-d27c-4e06-99d2-48f8312f00ca", "991d18f2-ffd8-45b8-b2e8-b4c21fb20720", "ea670b52-fc46-4dcb-ac08-efd4b23b03c3", "77155bc6-d46c-48ec-a4ba-c2d738e13396", "374ffbdc-d4ff-461d-a063-67f03516e35e", "5bb03330-9009-4e51-abdd-12e3b8d3505d", "4c69ee31-a62c-468f-a731-a0d63c6e9fe2", "6ab7b3e0-1f9f-4a59-bcf5-66c554040f7e", "2dbff660-6f4e-44d8-90a2-fa6081f28574", "23d750d7-3a7a-460d-8e6b-4f54e379529f", "63a0cc61-98a6-4be7-b982-ebf42e368ec1", "14c8fcc2-4517-4a60-9512-dfff6198b563", "e82c2ffc-f5dc-4d01-9184-399825652bbe", "a33fb5e9-81a5-4a37-8f37-6dd6031d77ae" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:introduction}}\n\\IEEEPARstart{T}{he} ability to predict the changes in the environment and the behavior of objects is fundamental in many applications such as surveillance, autonomous driving, scene understanding, etc. Prediction is a widely studied field in various artificial intelligence communities. A subset of these algorithms relies primarily on visual appearances of the objects and the scene to reason about the future. Other approaches use different forms of sensors such as wearable or environmental sensors to learn about the past states of the environment or objects. \nThe focus of this report is on vision-based prediction algorithms, which primarily use visual information to observe the changes in the environment and predict the future. In this context, prediction can be in the form of generating future scenes or reasoning about specific aspects of the objects, e.g. 
their trajectories, poses, etc.\nFor this review, we divide the prediction algorithms into five groups, namely video prediction, action prediction, trajectory prediction, motion (pose) prediction, and others which involve various applications of prediction such as trend prediction, visual weather prediction, map prediction, semantic prediction, etc. In addition, we briefly discuss algorithms that use a form of prediction as an intermediate step to perform tasks such as object detection, action detection, and recognition, etc. Moreover, for each group of prediction algorithms, we will talk about the common datasets and metrics and discuss of their characteristics. It should be noted that due to the broad scope of this review and the large body of work on the vision-based prediction, this review will only focus on works that had been published since five years ago in major computer vision, robotics and machine learning venues. In addition, as the title of the paper suggests, the main focus of the discussion will be on deep learning methods given their popularity in recent years.", "id": "2fec0d8e-0911-43a3-a218-2d210969c7e7", "level": "section", "origin_cites_number": 0, "parent_id": "54da09bf-16a7-43ca-8a25-4aafacb8f53e", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "Before reviewing the works on vision-based prediction algorithms, there are a number of points that should be considered.", "id": "cbf20d18-4b97-4aff-b65c-6034395a9fa5", "level": "section", "origin_cites_number": 0, "parent_id": "54da09bf-16a7-43ca-8a25-4aafacb8f53e", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Vision-based Prediction" ] ], "subsections": [ "ffddf8b2-9c16-4742-b890-9fae3c51a950", "d51e120c-5adf-414f-9f19-52ce0c4182c7" ], "title": "Vision-based Prediction" }, { 
"cite_extract_rate": 0, "cites": [], "content": "Based on our review, we have identified four major vision-based applications, namely video prediction, action prediction, trajectory prediction, and motion prediction. We discuss the studies in each category in a dedicated section. Some of the prediction works, such as visual weather prediction, semantic prediction, and contest outcome prediction, that do not fit any of the four major categories are presented in the other applications section. \nSome works address multiple prediction tasks, e.g. predicting trajectories and actions simultaneously, and therefore might fall into more than one category. It should be noted that we only include an algorithm in each category if the corresponding task is directly evaluated. For instance, if an algorithm performs video prediction for future action classification, and only evaluates the accuracy of predicted actions, it will only appear in the action prediction category. Furthermore, some works that are reviewed in this paper propose multiple architectures, e.g. recurrent and feedforward, for solving the same problem. 
In architecture-based categorizations, these algorithms may appear more than once.", "id": "ffddf8b2-9c16-4742-b890-9fae3c51a950", "level": "subsection", "origin_cites_number": 0, "parent_id": "cbf20d18-4b97-4aff-b65c-6034395a9fa5", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Vision-based Prediction" ], [ "subsection", "Applications" ] ], "subsections": [], "title": "Applications" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "d51e120c-5adf-414f-9f19-52ce0c4182c7", "level": "subsection", "origin_cites_number": 0, "parent_id": "cbf20d18-4b97-4aff-b65c-6034395a9fa5", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Vision-based Prediction" ], [ "subsection", "Methods" ] ], "subsections": [ "f46c6d90-8842-4e11-a89d-e5ce565f4be9", "56ce5010-908f-43e2-ac25-ca9c33b72d32", "fa1f4460-f37c-4c55-8b41-c1ea861edc31" ], "title": "Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "This work focuses on vision-based algorithms, which use some form of visual input such as RGB camera images or active sensors such as LIDAR. It should be noted that many algorithms, especially trajectory prediction ones, only use ground truth data such as object trajectories without actual visual processing, e.g. for detection of objects. However, as long as these algorithms are evaluated on vision datasets, they are included in this paper. 
Note that a complete list of papers with published code can be found in Appendix \\ref{papers_with_code}.", "id": "f46c6d90-8842-4e11-a89d-e5ce565f4be9", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "d51e120c-5adf-414f-9f19-52ce0c4182c7", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Vision-based Prediction" ], [ "subsection", "Methods" ], [ "subsubsection", "Algorithms" ] ], "subsections": [], "title": "Algorithms" }, { "cite_extract_rate": 0, "cites": [], "content": "As mentioned earlier, we focus on algorithms that have a deep learning component, either in the stage of visual representation generation (e.g. using convolutional features) or reasoning (e.g. using a Multilayer Perceptron (MLP) for classification). We will, however, acknowledge the classical methods by mentioning some of the main techniques and including them in the datasets and metrics sections of this paper.\nWe classify the algorithms in terms of training techniques and architectures. In practice, this is very challenging as the majority of algorithms use a combination of different approaches. For example, recurrent networks often rely on a form of Convolutional Neural Networks (CNNs) to generate feature representations for scenes, poses of agents, etc. To better distinguish between different classes of algorithms, we only focus on the core component of each algorithm, i.e. the parts that are used for reasoning about the future. Hence, for example, if an algorithm uses a CNN model for pre-processing input data and a recurrent network for temporal reasoning, we consider this algorithm as recurrent. On the other hand, if the features are used with a fully connected network, we categorize this algorithm as a feedforward or one-shot method. A few algorithms propose the use of both architectures for reasoning. We refer to those methods as hybrid. 
In addition, it should be noted that many works propose alternative approaches using each architecture. Therefore, we categorize them in more than one group.", "id": "56ce5010-908f-43e2-ac25-ca9c33b72d32", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "d51e120c-5adf-414f-9f19-52ce0c4182c7", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Vision-based Prediction" ], [ "subsection", "Methods" ], [ "subsubsection", "Architectures" ] ], "subsections": [], "title": "Architectures" }, { "cite_extract_rate": 0, "cites": [], "content": "As one would expect, vision-based algorithms primarily rely on visual information. However, many algorithms use pre-trained off-the-shelf algorithms to transform the input to some explicit feature spaces, e.g. poses, trajectories, action labels and perform reasoning in those feature spaces. If pre-processing is not part of the main algorithm, we consider those secondary features as different types of data inputs to the algorithms. If some basic processing, e.g. generating convolutional features for a scene is used, we consider the data type of the original input, e.g. RGB images.", "id": "fa1f4460-f37c-4c55-8b41-c1ea861edc31", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "d51e120c-5adf-414f-9f19-52ce0c4182c7", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Vision-based Prediction" ], [ "subsection", "Methods" ], [ "subsubsection", "Data type" ] ], "subsections": [], "title": "Data type" }, { "cite_extract_rate": 0.5897435897435891, "cites": [ 2804, 2814, 2817, 2807, 2808, 2816, 5680, 2819, 2815, 2806, 2809, 2818, 2811, 2802, 2820, 2810, 2805, 2812, 8590, 7617, 2803, 2813, 2073 ], "content": "Video or future scene prediction can be regarded as the most generic form of prediction. 
The objective of video prediction algorithms is to generate future scenes, often in the form of RGB images and/or optical flow maps . The generated images in turn can be used for various tasks such as action prediction , event prediction , flow estimation , semantic segmentation , etc.\nVideo prediction applications rely on generative models whose task is to predict future scene(s) based on a short observation of input sequences (or in some cases only a single image ). Although many approaches use feedforward architectures , the majority of algorithms take advantage of Recurrent Neural Networks (RNNs) such as Gated Recurrent Units (GRUs) , Long Short-Term Memory (LSTM) networks , their variation Convolutional LSTMs (ConvLSTMs) , or a combination of these .\nGenerative Adversarial Networks (GANs) are particularly popular in the video prediction community. In these adversarial training frameworks, there are two components: a generative network that produces future representations and a discriminator whose objective is to distinguish between the predicted representations (e.g. optical flow , frames , motion ) or their temporal consistency and the actual ground truth data, by producing a binary classification score that indicates whether the prediction is real or fake. While many algorithms use discriminators to judge how realistic the final generated images or intermediate features (e.g. poses ) are, others use multiple discriminators at different stages of processing. For example, the authors of use two discriminators, one responsible for judging the temporal consistency of the generated frames (i.e. whether the order of generated frames is real) and the other assessing whether the generated frames are real or not. Lee et al. use three discriminators to assess the quality of generated frames and the intermediate motion and content features. 
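The adversarial objective described above can be sketched with the standard non-saturating GAN losses. This is a minimal illustration using scalar discriminator scores and hypothetical function names, not the formulation of any particular paper surveyed here:

```python
import math

def discriminator_loss(d_real, d_fake, eps=1e-12):
    """Binary cross-entropy for a discriminator that should score real
    frames near 1 and generated (fake) frames near 0."""
    return -(math.log(d_real + eps) + math.log(1.0 - d_fake + eps))

def generator_loss(d_fake, eps=1e-12):
    """Non-saturating generator loss: the generator is rewarded when the
    discriminator scores its predicted frames as real (near 1)."""
    return -math.log(d_fake + eps)
```

In a multi-discriminator setup such as the ones described above, the generator's total loss would simply sum one such term per discriminator (e.g. one for frame quality plus one for temporal consistency).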
Using a two-stream approach, the method in produces both the next frame and optical flow and each stream is trained with a separate discriminator. The prediction network of uses a discriminator for intermediate features generated from input scenes and another discriminator for the final results.\nVariational Autoencoders (VAEs) or Conditional VAEs (CVAEs) are also used in some approaches . VAEs model uncertainty in generated future frames by defining a posterior distribution over some latent variable space . In CVAEs, the posterior is conditioned on an additional parameter such as the observed action in the scenes or initial observation . Using VAEs, at inference time, a random sample is drawn from the posterior to generate the future frame.\nMany video prediction algorithms operate solely on input images and propose various architectural innovations for encoding the content and generating future images . For example, the method in performs a two-way prediction, forward and backward. Each prediction relies on two discriminators for assessing the quality of the generated images and temporal consistency. The model presented in trains a context network by inputting an image sequence into a ConvLSTM whose output is used to initialize convolutional networks responsible for generating the next frames. Xu et al. , in addition to raw pixel values, encode the output of a high pass filter applied to the image as a means of maintaining the structural integrity of the objects in the scene. In , the authors use a two-step approach in which they first perform a coarse frame prediction followed by a fine frame prediction. In , the algorithm learns in two stages. A discriminator is applied after features are generated from the scenes and another one after the final generated frames. \nOptical flow prediction has been widely used as an intermediate step in video prediction algorithms . For example, to deal with occlusion in dynamic scenes, Gao et al. 
disentangle flow and pixel-level predictions into two steps: the algorithm first predicts the flow of the scene, and then uses it, in conjunction with the input frames, to predict the future. Similar multi-step approaches have also been used in . In , the authors use two separate branches: one branch receives two consecutive frames $(t, t+1)$ and produces context information. The second branch produces motion information by receiving two frames that are $k$ steps apart (i.e. $t+1$, $t+k$). The outputs of these two branches are fused and fed into the final scene generator. The method in simultaneously produces the next future frame and the corresponding optical flow map. In this architecture, two additional networks are used: a flow estimator, which uses the output of the frame generator and the last observation to estimate a flow map, and a warping layer, which performs a differentiable 2D spatial transformation to warp the last observed image into the future predicted frame according to the predicted flow map. \nSome algorithms rely on various intermediate steps for video prediction. For instance, the method in reasons about the locations and features of individual entities (e.g. cubes) for final scene predictions. Kim et al. first identify keypoints, which may correspond to important structures such as joints, and then predict their motion. For videos involving humans, in the authors identify and reason about the changes in poses, and use this information to generate future frames. In , in addition to raw input images, the differences between consecutive frames are used in the learning process.\nPrediction networks can also be provided with additional information to guide future frame generation. In an optical flow network and in a pose estimation network are used in addition to RGB images. Using a CVAE architecture, in the authors use the action labels as conditional input for frame generation. In the context of active tasks, e.g. 
object manipulation with robotic arms, in which the consequences of actions influence the future scene configuration, it is common to condition the future scene generation on the current or future intended actions .", "id": "80143ee1-3a53-4ec6-96bf-ec81cc495f85", "level": "section", "origin_cites_number": 39, "parent_id": "54da09bf-16a7-43ca-8a25-4aafacb8f53e", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Video prediction" ] ], "subsections": [ "f163c039-4fc8-43c5-8ff4-fc6e32ae5edf" ], "title": "Video prediction" }, { "cite_extract_rate": 0, "cites": [], "content": "Video prediction algorithms are based on generative models that produce future images given a short observation, or in extreme cases a single view of the scene. Both recurrent and feedforward models are widely used in the field, with recurrent ones being slightly preferred. Architectural designs and training strategies such as VAEs or GANs are very common. However, it is hard to establish which one of these approaches is superior, given that the majority of the video prediction algorithms are application-agnostic, meaning that they are evaluated on a wide range of video datasets with very different characteristics such as traffic scenes, activities, games, object manipulations, etc. (more on this in Section \\ref{datasets}).\nDespite the great progress in the field, video prediction algorithms are still facing some major challenges. One of them is the ability to hallucinate, which is to generate visual representations for parts of the scenes that were not visible during the observation phase, e.g. due to occlusions. This is particularly an issue for more complex images such as traffic scenes, movies, etc. The complexity of the scenes also determines how fast the generated images degrade. Although these algorithms show promising results on simple synthetic videos or action sequences, they still struggle in real practical applications. 
In addition, many of these algorithms cannot reason about the expected presence or absence of objects in the future. For example, if a moving object is present in the observations and is about to exit the field of view in near future, the algorithms account for it in the future scenes as long as parts of it are visible in the observation stage. This can be an issue for safety-critical applications such as autonomous driving in which the presence or absence of traffic elements and the interactions between them are essential for action planning.", "id": "f163c039-4fc8-43c5-8ff4-fc6e32ae5edf", "level": "subsection", "origin_cites_number": 0, "parent_id": "80143ee1-3a53-4ec6-96bf-ec81cc495f85", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Video prediction" ], [ "subsection", "Summary" ] ], "subsections": [], "title": "Summary" }, { "cite_extract_rate": 0.17391304347826, "cites": [ 2821, 2823, 2822, 2824 ], "content": "Action prediction algorithms can be categorized into two groups: Next action or event prediction (or action anticipation) and early action prediction. In the former category, the algorithms use the observation of current activities or scene configurations and predict what will happen next. Early action prediction algorithms, on the other hand, observe parts of the current action in progress and predict what this action is. The classical learning approaches such as Conditional Random Fields (CRFs) , Support Vector Machines (SVMs) with hand-crafted features , Markov models , Bayesian networks and other statistical methods have been widely used in recent years. 
However, as mentioned earlier, we will only focus on deep learning approaches.", "id": "040ae750-d27c-4e06-99d2-48f8312f00ca", "level": "section", "origin_cites_number": 23, "parent_id": "54da09bf-16a7-43ca-8a25-4aafacb8f53e", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Action prediction" ] ], "subsections": [ "b0a73071-8988-459c-960a-fbc201a48a45", "2ebd83ed-2292-4ab8-9334-3163c17703ba" ], "title": "Action prediction" }, { "cite_extract_rate": 0.486486486486486, "cites": [ 2829, 2837, 7117, 2838, 2832, 2828, 2825, 2806, 2834, 2835, 2827, 2826, 2831, 2836, 2833, 7618, 2839, 2830 ], "content": "Action prediction algorithms are used in a wide range of applications including cooking activities , traffic understanding , accident prediction , sports and other forms of activities . Although the majority of these algorithms use sequences in which the objects and agents are fully observable, a number of methods rely on egocentric scenes, which are recorded from the point of view of the acting agents so that only parts of their bodies (e.g. hands) are observable.\nAction prediction methods predominantly use a variation of RNN-based architectures including LSTMs , GRUs , ConvLSTMs , and Quasi-RNNs (QRNNs) . For instance, in the authors use a graph-based RNN architecture in which the nodes represent actions and the edges of the graph represent the transitions between the actions. The method in employs a two-step approach: using a recognition algorithm, the observed actions and their durations are recognized. These form a one-hot encoding vector which is fed into GRUs for the prediction of the future activities and their corresponding start time and length. In the context of vehicle behavior prediction, Ding et al. use a two-stream GRU-based architecture to encode the trajectories of two vehicles and a shared activation unit to encode the vehicles' mutual interactions. Scheel et al. 
encode the relationship between the ego-vehicle and surrounding vehicles in terms of their mutual distances. The vectorized encoding is then fed into a bi-directional LSTM. At each time step, the output of the LSTM is classified, using a softmax activation, into a binary value indicating whether it is safe for the ego-vehicle to change lanes. In the authors use a QRNN network to capture the relationships between road users in order to predict the likelihood of a traffic accident. To train the model, the authors propose an adaptive loss function that assigns penalty weights depending on how early the model can predict accidents. \nAs an alternative to recurrent architectures, some algorithms use feedforward architectures based on both 3D and 2D convolutional networks. For example, in the context of pedestrian crossing prediction, in the authors use a generative 3D CNN model that produces future scenes and is followed by a classifier. The method of detects and tracks pedestrians in the scenes, and then feeds the visual representations of the tracks, in the form of an image sequence, into a 3D CNN architecture, which directly classifies how likely the pedestrian is to cross the road. To predict the time of traffic accidents, the method in processes each input image using a 2D CNN model and then combines the representations, followed by a fully-connected (\\textit{fc}) layer for prediction. Farha et al. create a 2D matrix by stacking one-hot encodings of actions for each segment of observation and use a 2D convolutional net to generate future action encodings. Casas et al. use a two-stream 2D CNN, with the two streams processing the stacked voxelized LIDAR scans and the scene map, respectively. The feature maps obtained from each stream are fused and fed into a backbone network followed by three heads responsible for the detection of the vehicles and predicting their intentions and trajectories. For sports forecasting, Felsen et al. 
concatenate 5 image observations channel-wise and feed the resulting output into a 2D CNN network comprised of 4 convolutional layers and an \\textit{fc} layer.\nAlthough some algorithms rely on a single source of information, e.g. a set of pre-processed features from RGB images or trajectories , many algorithms use a multimodal approach by using various sources of information such as optical flow maps , poses , scene attributes (e.g. road structure, semantics) , text , action labels , length of actions , speed (e.g. ego-vehicle or surrounding agents) , gaze , current activities and the time of the actions. For example, the method in uses a multi-stream LSTM in which two LSTMs encode visual features and cooking recipes and an LSTM decodes them for final predictions. To capture the relationships within and between sequences, Gammulle et al. propose a two-stream LSTM network with external neural memory units. Each stream is responsible for encoding visual features and action labels. In , a multi-layer GRU structure is used in which features with different modalities enter the network at different levels and are fused with the previous level encodings. The fusion process takes place according to the complexity of the data modality, e.g. more complex features such as encodings of pedestrian appearances enter the network at the bottom layer, whereas location and speed features enter at the second-last and last layers, respectively. Farha et al. use a two-layer stacked GRU architecture which receives as input a feature tuple of the length of the activity and its corresponding one-hot vector encoding. In , the method uses a two-stage architecture: first, information regarding the appearance of the scene, optical flow (pre-processed using a CNN) and vehicle dynamics are fed into individual LSTM units. Then, the output of these units is combined and passed through an \\textit{fc} layer to create a representation of the context. 
This representation is used by another LSTM network to predict future traffic actions. In the context of human-robot interaction, the authors of combine information regarding the gaze and pose of the humans using an encoder-decoder LSTM architecture to predict their next actions. Jain et al. use a fusion network to combine head pose information of the driver, outside scene features, GPS information, and vehicle dynamics to predict the driver's next action.
Before concluding this section, it is important to discuss the use of attention modules, which have gained popularity in recent years . As the name implies, the objective of attention modules is to determine what should be given more importance at a given time. These modules come in different forms and can be applied to different dimensions of the data and at various processing stages. Some of these modules are temporal attention for identifying keyframes, modality attention to prioritize between different modalities of data input, spatial attention for highlighting the important parts of the scenes, and graph attention for weighting the nodes of a graph. In some cases, a combination of different attention mechanisms is used .
In the field of action anticipation, RNN architectures are strongly preferred. Compared to feedforward algorithms, recurrent methods have the flexibility of dealing with variable observation lengths and multi-modal data, particularly when these are significantly different, e.g. trajectories and RGB images. 
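As a minimal illustration of this recurrent pattern (encode the observed frame features step by step, then classify the upcoming action from the final hidden state), the following sketch uses a toy Elman-style cell in place of an LSTM; all dimensions and weights are invented and untrained:

```python
import math

def softmax(z):
    # Numerically stable softmax over a list of scores.
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def anticipate(frames, Wxh, Whh, Who):
    """Encode an observed feature sequence with a toy Elman RNN cell,
    then classify the *next* action from the final hidden state."""
    h = [0.0] * len(Whh)
    for x in frames:
        pre = [a + b for a, b in zip(matvec(Wxh, x), matvec(Whh, h))]
        h = [math.tanh(v) for v in pre]
    return softmax(matvec(Who, h))  # distribution over future actions

# Toy dimensions: 2-d frame features, 3 hidden units, 4 action classes.
Wxh = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.2]]
Whh = [[0.1, 0.0, 0.2], [0.0, 0.1, -0.1], [0.2, -0.2, 0.1]]
Who = [[0.3, -0.1, 0.2], [0.1, 0.4, -0.3], [-0.2, 0.1, 0.1], [0.0, -0.3, 0.4]]
probs = anticipate([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]], Wxh, Whh, Who)
```

Real systems replace the toy cell with learned LSTM/GRU layers operating on pre-processed visual features, but the control flow is the same.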
However, basic recurrent architectures such as LSTMs and GRUs rely on some form of pre-processing, especially when dealing with high-dimensional data such as RGB images, which requires the use of various convolutional networks, a process that can be computationally costly. Feedforward models, on the other hand, can perform prediction in one shot, meaning that they can simultaneously perform temporal reasoning and spatial feature generation in a single framework, and as a result, potentially have a shorter processing time. 
Many of the approaches mentioned earlier are generative in nature. They generate representations in some feature space and then use these representations to predict what will happen next. Some algorithms go one step further and generate the actual future images and use them for prediction. Although such an approach seems effective for single-actor events, e.g. cooking scenes or human-robot interaction, it is not feasible for multi-agent predictions such as reasoning about the behaviors of pedestrians or cars in traffic scenes.
The majority of the methods reviewed in this section use multi-modal data input. This seems to be a very effective approach, especially in high-dimensional problems such as predicting road user behavior, where the state (e.g. location and speed) of the road user, observer, and other agents, as well as scene structure, lighting conditions, and many other factors, play a role in predicting future behavior.
Multi-tasking, e.g. predicting actions and trajectories, is an effective way to predict future actions. For instance, trajectories can imply the possibility of certain actions, e.g. moving towards the road implies that the person might cross the street. As a result, the simultaneous learning of different tasks can be beneficial. 
Last but not least is the use of attention modules. 
These modules are deemed to be very effective , in particular for tasks with high complexity in terms of the modality of input data, the scene structure and temporal relations.
Similar to action anticipation methods, early action prediction algorithms widely use recurrent network architectures . Although many of these algorithms use similar approaches to action anticipation algorithms, some propose new ones. For example, in , the authors use a teacher-student learning scheme where the teacher learns to recognize full sequences using a bi-directional LSTM and the student relies on partial videos using an LSTM network. They perform knowledge distillation by linking the feature representations of both networks. Using GAN frameworks, the methods in predict future feature representations of videos in order to predict actions. Zhao et al. implement a Kalman filter using an LSTM architecture. In this method, actions are predicted after observing each frame and corrections are made if the next observation provides additional information. The method of uses a two-step LSTM architecture which first generates an encoding of the context using context-aware convolutional features and then combines these encodings with action-aware convolutional features to predict the action. 
The authors of this method propose a new loss function that penalizes false negatives with the same strength at any point in time and false positives with a strength that increases linearly over time, eventually reaching the same weight as that on false negatives. In , the authors perform coarse-to-fine-grained predictions depending on the amount of evidence available for the type of action. 
Many early action prediction methods adopt feedforward architectures . The authors of predict actions from a single image by transforming it into a new representation called Ranked Saliency Map and Predicted Optical Flow. This representation is passed through a 2D convnet and \textit{fc} layers for final prediction. In , the authors use temporal CNN (TCNN) architectures, which are a series of 1D dilated convolutional layers designed to capture the temporal dependencies of feature representations, e.g. in the form of a vector representation of poses or word-embeddings describing the video frames. Chen et al. use features of body parts generated by a CNN model and train an attention module whose objective is to activate only the features that contribute to the prediction of the action. In , the authors use an action detection framework, which incrementally predicts the locations of the actors and the action classes based on the current detections.
The majority of early action prediction algorithms pre-process the entire observed scenes using, e.g., different forms of convolutional neural networks or other forms of features . Some algorithms complement these features using optical flow maps . Another group of action prediction methods focuses on specific parts of the observations. For example, in , the authors use the changes in the poses of actors to predict their actions. The method in uses body parts extracted by cropping a local patch around the identified joints of the actors. 
The authors of only use the visual appearances of actors extracted from detected bounding boxes, and the relationships between them.
Early action prediction methods have many commonalities with action anticipation algorithms in terms of architectural design. However, some techniques are more prevalent in this domain, namely teacher-student training schemes for knowledge distillation, the identification of discriminative features, and recursive prediction/correction mechanisms. In addition, early action prediction algorithms, with a few exceptions, rely on single-modal data for prediction and rarely use refinement frameworks such as attention modules. Adopting these techniques and operating on multi-modal feature spaces can further improve the performance of early action prediction algorithms. Unlike the action anticipation methods, there is no strong preference for recurrent or feedforward approaches. 
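As a rough sketch of the dilated 1D convolutions at the core of the TCNN architectures mentioned earlier (single input channel, causal left padding; the kernel values are arbitrary):

```python
def dilated_causal_conv1d(seq, kernel, dilation):
    """Causal 1D convolution with dilation: output[t] depends only on
    inputs at t, t-d, t-2d, ... (implicit zero padding on the left)."""
    k = len(kernel)
    out = []
    for t in range(len(seq)):
        acc = 0.0
        for i, w in enumerate(kernel):
            j = t - (k - 1 - i) * dilation  # tap positions, newest last
            if j >= 0:
                acc += w * seq[j]
        out.append(acc)
    return out

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
h1 = dilated_causal_conv1d(x, [0.5, 0.5], dilation=1)
h2 = dilated_causal_conv1d(h1, [0.5, 0.5], dilation=2)
```

Stacking such layers with dilations 1, 2, 4, ... grows the temporal receptive field exponentially with depth while keeping each layer cheap, which is why TCNNs can capture long-range dependencies without recurrence.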
Some approaches take advantage of architectures such as temporal CNNs, which are popular in the language processing domain, and demonstrate their effectiveness for early action prediction tasks.
As the name implies, trajectory prediction algorithms predict the future trajectories of objects, i.e. the future positions of the objects over time. These approaches are particularly popular for applications such as intelligent driving and surveillance. Predicted trajectories can be used directly, e.g. in route planning for autonomous vehicles, or used for predicting future events, anomalies, or actions. 
In this section, we follow the same routine as before and focus on algorithms that have a deep learning component, while acknowledging many recent works that have used classical approaches including Gaussian mixture models and processes , Markov decision processes (MDPs) , Markov chains and other techniques .
Trajectory prediction applications, like many other sequence prediction tasks, rely heavily on recurrent architectures such as LSTMs , and GRUs . These methods often use an encoder-decoder architecture in which a network, e.g. an LSTM, encodes single- or multi-modal observations of the scenes for some time, and another network generates future trajectories given the encoding of the observations. 
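A minimal sketch of this encoder-decoder pattern follows, with a toy RNN cell shared between the encoder and the autoregressive decoder; the weights are invented and untrained, so the outputs are only meaningful in shape:

```python
import math

def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def rnn_step(h, x, Wx, Wh):
    return [math.tanh(a + b) for a, b in zip(matvec(Wx, x), matvec(Wh, h))]

def predict_trajectory(observed, horizon, Wx, Wh, Wo):
    """Encoder-decoder sketch: an RNN encodes the observed (x, y)
    positions; the decoder re-uses the same cell autoregressively,
    feeding each predicted position back in to produce the next one."""
    h = [0.0, 0.0, 0.0]
    for p in observed:           # encode the observation window
        h = rnn_step(h, p, Wx, Wh)
    preds, p = [], observed[-1]
    for _ in range(horizon):     # decode future positions
        h = rnn_step(h, p, Wx, Wh)
        p = [a + b for a, b in zip(p, matvec(Wo, h))]  # residual step
        preds.append(p)
    return preds

Wx = [[0.2, 0.1], [0.0, 0.3], [0.1, -0.1]]
Wh = [[0.1, 0.0, 0.1], [0.0, 0.2, 0.0], [0.1, 0.0, 0.1]]
Wo = [[0.5, 0.1, 0.0], [0.0, 0.1, 0.5]]
future = predict_trajectory([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]],
                            horizon=3, Wx=Wx, Wh=Wh, Wo=Wo)
```

In practice the encoder input is often a richer multi-modal embedding rather than raw coordinates, and the cell is a trained LSTM or GRU.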
Depending on the complexity of the input data, these algorithms may rely on some form of pre-processing for generating features or on embedding mechanisms to reduce the dimensionality of the data.
The feedforward algorithms often use whole views of the scenes (i.e. the environment and moving objects) and encode them using convolutional layers followed by a regression layer to predict trajectories. A few algorithms use hybrid approaches in which both convolutional and recurrent reasoning are used . 
Depending on the prediction task, algorithms may rely on single- or multi-modal observations. For example, in the context of visual surveillance, where a fixed camera provides a top-down or bird's-eye viewing angle, many algorithms only use the past trajectories of the agents, either in actual 2D frame coordinates or as velocities calculated from the changes between consecutive time steps . In addition to observations of the individual trajectories of agents, these algorithms focus on modeling the interactions between the agents and how they impact each other. For example, Zhang et al. use a state refinement module that aligns all pedestrians in the scene with a message passing mechanism that receives as input the current locations of the subjects and their encodings from an LSTM unit. In , a graph-based approach is used where pedestrians are considered as nodes and the interactions between them as edges of the graph. By aggregating information from neighboring nodes, the network learns to assign a different level of importance to each node for a given subject. The authors of perform a pooling operation on the generated representations by sharing the states of individual LSTMs that have spatial proximity. 
As shown in some works, other sources of information are also used in surveillance settings . For example, in addition to encoding the interactions with the environment, Liang et al. use the semantic information of the scene as well as changes in the poses of the pedestrians. 
In , the visual representations of the layout of the environment and the appearances of the subjects are included. The authors of use an occupancy map which highlights potentially traversable locations for the subjects. The method in takes into account pedestrians' head orientations to estimate their fields of view in order to predict which subjects would potentially interact with one another. To predict interactions between humans, in , the authors use both the poses and trajectories of the agents. Ma et al. go one step further and take into account the pedestrians' characteristics (e.g. age, gender) within a game-theoretic perspective to determine how the trajectories of individual pedestrians impact one another.
In the context of traffic understanding, predicting trajectories can be more challenging due to the fact that camera ego-motion is involved (e.g. the prediction is from the perspective of a moving vehicle), there are interactions between different types of objects (e.g. vehicles and pedestrians), and there are certain constraints involved such as traffic rules, signals, etc. To achieve robustness, many methods in this domain take advantage of multi-modal data for trajectory prediction . In addition to using past trajectories, all these algorithms account for the road structure (whether from the perspective of the ego-vehicle or a top-down view), often in the form of raw visual inputs or, in some cases, as an occupancy map . The scene layout can implicitly capture the structure of the road, the appearances of the objects (e.g. shape) and the dynamics (e.g. the velocities or locations of subjects). Such implicit information can be further augmented by explicit data such as the shapes of the objects (in the case of vehicles) , the speed and steering angle of the ego-vehicle, the distance between the objects , traffic rules and signals , and kinematic constraints . 
For example, the method in uses a two-stream LSTM encoder-decoder scheme: the first stream encodes the current ego-vehicle odometry (steering angle and speed) and the last observation of the scene, and predicts the future odometry of the vehicle. The second stream is a trajectory stream that jointly encodes the location information of pedestrians and the ego-vehicle's odometry and then combines the encoding with the prediction of the odometry stream to predict the future trajectories of the pedestrians. The method in further extends this approach and adds an intention prediction stream which outputs how likely the observed pedestrian intends to cross the street. The intention likelihood is produced using an LSTM network that encodes the dynamics of the pedestrian, their appearance and their surroundings. Chandra et al. create embeddings of contextual information by taking into account the shapes and velocities of the road users and their spatial coordinates within a neighboring region. These embeddings are then fed into LSTM networks followed by a number of convolutional layers to capture the dynamics of the scenes. In , the authors use separate LSTMs for encoding the trajectories of pedestrians and vehicles (as oriented bounding boxes) and then combine them into a unified framework by generating an occupancy map of the scene centered at each agent, followed by a pooling operation to capture the interactions between different subjects. Lee et al. predict the future trajectories of vehicles in two steps: first, an encoder-decoder GRU architecture predicts future trajectories by observing the past ones. Then a refinement network adjusts the predicted trajectories by taking into account the contextual information in the form of social interactions, the dynamics of the agents involved, and the road structure.
Similar to action prediction algorithms, attention modules have been widely used in trajectory prediction methods . 
For example, in , the attention module jointly measures spatial and temporal interactions. The authors of propose the use of social attention modules which estimate the relative importance of interactions between the subject of interest and its neighboring subjects. The method in uses two attention mechanisms: a temporal attention module that measures the importance of each timestep of the observed trajectories, and a series of self-attention modules which are applied to encodings of observations prior to predictions. Xue et al. propose an attention mechanism to measure the relative importance of different data modalities, namely the locations and velocities of subjects.
One of the major challenges in trajectory prediction is to model the uncertainty of predictions, especially in scenarios where many possibilities exist (e.g. pedestrians at intersections). Some algorithms such as train the models by directly measuring the error between the ground truth trajectories and predicted ones, e.g. by using an \textit{L2} objective function. At inference time, these algorithms generate deterministic predictions. To measure the uncertainty of predictions, these models are trained multiple times with randomly initialized parameters. Alternatively, uncertainty can be estimated via probabilistic objective functions, e.g. the Gaussian log-likelihood, as in . Instead of a single point in space, these algorithms predict a distribution that captures the uncertainty of the predictions. Variational Autoencoders (VAEs) are another technique that can be used to estimate uncertainty . Using these methods, at training time a posterior distribution over some latent space $z$ is learned by conditioning, for example, on future trajectories. 
At inference time, a random sample is drawn from the latent space for the predictions.
Since trajectory prediction algorithms are generative by nature, many approaches rely on adversarial training techniques in which, at training time, a discriminator is used to predict whether the generated trajectories are real or fake. Kosaraju et al. extend this approach by using two discriminators: a local discriminator that operates on the predictions produced from past trajectories alone, and a global discriminator that operates on the output of the entire network, i.e. the predictions based on both trajectories and scene information.
Trajectory prediction is a widely studied field in the computer vision community. Although these works predominantly use recurrent network architectures, many approaches, such as those used in the field of traffic scene understanding, use feedforward networks. Trajectory prediction algorithms rely on one or more sources of information such as the past trajectories of subjects, the surrounding visual context, object attributes, vehicle sensor readings, etc. One factor that is common to many trajectory prediction algorithms is modeling the interactions between dynamic objects, or between dynamic and static objects. Relationships are captured explicitly, or implicitly by encoding the scenes as a whole. Like many other prediction approaches, trajectory prediction algorithms benefit from various forms of attention mechanisms to learn the importance of spatial, temporal or social interactions between objects. 
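The Gaussian log-likelihood objective discussed earlier for modeling prediction uncertainty can be sketched as follows (a 2D prediction with diagonal covariance; the numbers are illustrative):

```python
import math

def gaussian_nll(pred_mu, pred_sigma, target):
    """Negative log-likelihood of the ground-truth position under a
    predicted Gaussian with diagonal covariance. Minimizing this trains
    a model to output both a position and its uncertainty."""
    nll = 0.0
    for mu, sigma, t in zip(pred_mu, pred_sigma, target):
        nll += 0.5 * math.log(2 * math.pi * sigma ** 2) \
               + (t - mu) ** 2 / (2 * sigma ** 2)
    return nll

# A confident (small sigma) but wrong prediction is penalized more
# heavily than an uncertain one with the same error.
confident = gaussian_nll([0.0, 0.0], [0.1, 0.1], [1.0, 1.0])
uncertain = gaussian_nll([0.0, 0.0], [1.0, 1.0], [1.0, 1.0])
```

This asymmetry is what lets the model express "I don't know" at multi-modal situations such as intersections instead of committing to a single wrong path.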
To model uncertainty, techniques such as probabilistic objectives and variational encodings are used. 
Trajectory prediction algorithms predominantly rely on past trajectory information to predict the future. Although past motion observations are very informative, in some contexts, e.g. traffic scenes, they are simply not enough. There is a need for a more explicit encoding of contextual information such as road conditions, the subjects' attributes, rules and constraints, scene structure, etc. A number of approaches have successfully included a subset of these factors, but a more comprehensive approach should be considered.
Although the term ``motion prediction'' is in many cases used to refer to future trajectory prediction, here we only consider the algorithms that are designed to predict changes in human pose. Motion prediction plays a fundamental role in many prediction approaches as an intermediate step, e.g. to reflect what future visual representations would look like or which types of actions to anticipate. Like many other prediction applications, this field is dominated by deep learning models, even though some methods still rely on classical techniques .
Similar to other prediction algorithms, motion prediction methods widely use recurrent architectures such as LSTMs and GRUs or a combination of both . 
For example, in , the authors use a two-layer GRU model in which the top layer operates backward to learn noise processes and the bottom layer is used to predict the poses given the past pose observations and the output of the top layer. Chiu et al. propose a hierarchical LSTM architecture in which each layer of the network encodes the observed poses at a different time-scale. In the context of 3D pose prediction, some algorithms rely on a two-stage process where the visual inputs, either as a single image or a sequence of images , are fed into a recurrent network to predict the 2D poses of the agent. This is followed by a refinement procedure that transforms the 2D poses into 3D. 
Some approaches adopt feedforward architectures . For example, the method in uses two feedforward networks in a two-stage process. First, the input poses are fed into an autoencoder comprised of fully connected layers (implemented by 1D convolutions with a kernel size of 1) and self-attention blocks. The encodings are then used by multi-level 2D convolutional blocks for final predictions. Zhang et al. predict 3D poses from RGB videos. In their method, the images are converted to a feature space using a convolutional network, and then the features are used to learn a latent representation of 3D human dynamics. The representation is used by a network to predict future 3D poses. To capture movement patterns, a method proposed in converts poses into a trajectory space using the discrete cosine transform. The newly formed representations are then used in a Graph-CNN framework to learn the dependencies between different joint trajectories. 
To train motion prediction models, some authors use adversarial training methods in which a discriminator is used to classify whether the predicted poses are real or fake . The discrimination procedure can also be applied to evaluating the continuity, i.e. the correct ordering, of predictions, as demonstrated in . 
In , the discrimination score is used to generate a policy for future action predictions in the context of imitation learning.
Motion prediction algorithms primarily focus on the prediction of changes in the dynamics (i.e. poses) of observed agents. Such predictions can be fundamental to many other applications, such as video or trajectory prediction tasks, some of which were discussed previously. 
In recent works, recurrent network architectures are strongly preferred. The architecture of choice often depends on the representation of the input data, e.g. whether joint coordinates are used directly or are encoded into a high-dimensional representation. 
Despite the development of many successful motion prediction algorithms, the majority of these methods rely on a single source of information, for example, poses or scenes. Encoding higher-level contextual information, such as scene semantics, interactions, etc., can potentially result in more robust predictions, as shown in other prediction applications. Moreover, with one exception, attention modules have not been adopted in motion prediction algorithms. 
Given the success of attention in other prediction applications, motion prediction algorithms may benefit from such mechanisms.
In the context of autonomous robotics, some algorithms are designed to predict occupancy grid maps (OGMs), grayscale representations of the robot's surroundings that show which parts of the environment are traversable. These approaches are often object-agnostic and are concerned with generating future OGMs, which are used by an autonomous agent to perform path planning. In recent years, both classical and deep learning methods have been used. The deep learning approaches are, in essence, similar to video prediction methods, in which the model receives as input a sequence of OGMs and predicts the future ones over some period of time. In this context, both recurrent and feedforward methods are common. Another group of generative approaches comprises semantic map prediction algorithms . 
These algorithms receive as inputs RGB images of the scenes and predict future segmentation maps.
Some of the other vision-based prediction applications include weather and solar irradiance forecasting , steering angle prediction , predicting the popularity of tweets based on the tweeted images and the users' histories , forecasting fashion trends , storyline prediction , pain anticipation , predicting the effect of force when manipulating objects , forecasting the winner of Miss Universe given the appearances of contestants' gowns , and predicting election results given the facial attributes of candidates . These algorithms rely on a combination of the techniques discussed earlier in this paper.
Before concluding our discussion on vision-based prediction methods, it is worth mentioning that prediction techniques are also widely used in other visual processing tasks such as video summarization , anomaly detection , tracking , active object recognition , and action detection and recognition . For example, tracking algorithms are very closely related to trajectory prediction ones and often rely on short-term predictions to deal with gaps in tracking, e.g. due to occlusions. For instance, in , the method uses a recurrent framework to generate future frames in order to localize pedestrians in the next frames. In the context of action detection, some methods rely on future frame or trajectory predictions of objects to detect actions . In , a method is used for detecting an object in 3D by predicting the next best viewing angle of the object. Liu et al. 
use a video prediction framework to predict future motion flow maps and images. Future predictions that do not conform to expectations are identified as abnormal.
Since the datasets and metrics used in these applications are highly diverse, we will focus our discussion on some of the main ones for each prediction category, while providing visual aids to summarize what past works used for evaluation purposes.
\begin{figure*}
\centering
 \includegraphics[width=0.21\textwidth]{video_metrics.png}
 \hspace{0.1cm}
 \includegraphics[width=0.21\textwidth]{action_metrics.png}
 \hspace{0.1cm}
 \includegraphics[width=0.22\textwidth]{trajectory_metrics.png}
 \hspace{0.1cm}
 \includegraphics[width=0.18\textwidth]{motion_metrics.png}
 \caption{Metrics used in vision-based prediction applications. From left to right: Video, action, trajectory and motion prediction. }
\label{metrics_fig}
\end{figure*}
In this section, we follow the same routine as in the discussion of past works and divide the metrics into different categories. A summary of the metrics can be found in Figure \ref{metrics_fig}. The interested reader can also refer to Appendix \ref{metrics_papers} for further information regarding the metrics and the papers that used them. 
Note that while we discuss the metrics in each category, we only provide mathematical expressions for the most popular ones in Appendix \ref{appendix_metrics}.
Video prediction is about generating realistic images, hence the best performance is achieved when the disparities between the generated images and the ground truth images are minimal. The most straightforward way of computing the disparity is to measure the pixel-wise error using the Mean Square Error (MSE) , which computes the average squared intensity difference between pixels. Another, more popular metric related to MSE is the Peak Signal-to-Noise Ratio (PSNR) . PSNR is the ratio between the maximum possible pixel value (i.e. the signal power), e.g. 255 in 8-bit images, and the MSE (i.e. the power of the distorting noise) of the two images, i.e. $\mathrm{PSNR} = 10\log_{10}(\mathrm{MAX}^2/\mathrm{MSE})$. The lower the error between two images, the higher the value of PSNR, and consequently, the higher the quality of the generated images. Because of the wide dynamic range of signals, the PSNR value is expressed on the logarithmic decibel scale. 
Although the MSE and PSNR metrics are easy to calculate, they cannot measure the perceived visual quality of a generated image. An alternative metric that addresses this issue is the Structural SIMilarity (SSIM) index (), which is designed to model image distortion. 
To capture the structural differences between the two images, SSIM separates illumination information as it is independent of objects' structures. As a result, the similarity is measured by a combination of three comparisons, namely luminance, contrast, and structure. \nHigher-level contextual similarities may not be captured by distance measures on pixel values. A more recently proposed metric, Learned Perceptual Image Patch Similarity (LPIPS) () , measures the similarity between two images by comparing internal activations of convolutional networks trained for high-level classification tasks. The value is calculated as an average L2 distance over normalized deep features.\nSome other metrics that have been used in the literature are qualitative human judgment , Fr\\'echet Video Distance (FVD) () , Maximum Mean Discrepancy (MMD) , Inception Score (IS) () , Binary Cross Entropy (BCE) , L1 , and Root MSE (RMSE) .
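To make the relationship between MSE and PSNR concrete, the following minimal sketch computes both for a pair of 8-bit images. The toy 4x4 images and the use of NumPy are illustrative assumptions on our part, not drawn from any of the surveyed implementations.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of equal shape."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, max_val=255.0):
    """Peak Signal-to-Noise Ratio in decibels; higher means closer images.

    max_val is the maximum possible pixel value (255 for 8-bit images)."""
    err = mse(a, b)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / err)

# Toy "ground truth" and "prediction": uniform images differing by 4 intensity levels.
gt = np.full((4, 4), 124, dtype=np.uint8)
pred = np.full((4, 4), 120, dtype=np.uint8)
print(mse(gt, pred), psnr(gt, pred))  # MSE = 16.0, PSNR ≈ 36.09 dB
```

Note the logarithm in the last line, which gives PSNR its decibel scale: halving the MSE raises PSNR by about 3 dB regardless of the absolute error level.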
This then would result in a high accuracy measure because the metric only considers the ratio of correct predictions. To address these shortcomings, some works use complementary metrics that, in addition to correct predictions, account for different types of false predictions. These metrics are precision , recall , and Area Under the Curve (AUC) of the precision-recall graph. Precision and recall also form the basis for the calculation of some higher-level metrics such as F1-score , Average Precision (AP) and its variations mean AP (mAP) and calibrated AP (cAP) . Some of the less common performance metrics are Matthews Correlation Coefficient (MCC) , False Positive (FP), True Positive Rate (TPR) and False Positive Rate (FPR) , Prediction Power (PP) , and Mean Reciprocal Rank (MRR) .\nDepending on the application, some algorithms evaluate the timing factor in terms of the Run Time (RT) of the model or the time of the event, e.g. the beginning of the next activity , Time To Accident (or collision) (TTA) , and, in the context of driving, Time To Maneuver (TTM) .
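The class-imbalance pitfall described above can be illustrated with a small sketch: a degenerate predictor that always outputs the majority class attains high accuracy while never detecting the minority class, which precision and recall immediately expose. The toy label counts below are our own assumptions for illustration.

```python
import numpy as np

# Imbalanced toy ground truth: 90 "no event" (0) labels vs 10 "event" (1) labels.
y_true = np.array([0] * 90 + [1] * 10)
# A degenerate predictor that always outputs the majority class.
y_pred = np.zeros_like(y_true)

accuracy = np.mean(y_true == y_pred)  # high, despite missing every event
tp = np.sum((y_pred == 1) & (y_true == 1))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))
precision = tp / (tp + fp) if (tp + fp) else 0.0
recall = tp / (tp + fn) if (tp + fn) else 0.0
print(accuracy, precision, recall)  # 0.9 0.0 0.0
```

Accuracy alone reports 90% correctness, while recall reveals that none of the events were anticipated.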
As the name suggests, FDE only measures the error between the ground truth and the generated trajectory for the final time step.\nMany other works use the same metric as ADE or ADE/FDE without using the same terminology. It is also common practice, instead of using average or final time step measures, to calculate the error at different time steps over a period of time .\nTo measure displacement error in probabilistic trajectory prediction algorithms, some works generate a set number of samples and report the best measure (i.e. minimum error) or the average over all samples . Depending on the error metric used, some refer to these measures as Minimum ADE/FDE (MinADE/FDE) (using Euclidean distance) or Mean/Minimum Mean Square Displacement (Mean/MinMSD) (using MSE). Some of the other probabilistic measures are Log-Likelihood (LL) , Negative Log-Likelihood (NLL) , Kullback–Leibler Divergence (KLD) , Negative Log-Probability (NLP), Cross Entropy (CE) , Average Prediction Probability (APP) .\nPerformance can also be evaluated using common classification metrics. For example, in Hit Rate (HR) and in Miss Rate (MR) metrics are used. In these cases, if the predicted trajectory is below (or above) a certain distance threshold from the groundtruth, it is considered as a hit or a miss. Following a similar approach, some authors calculate the accuracy or precision of predictions.\nSome of the other metrics used in the literature are Run Time (RT) , Average Non-linear Displacement Error (ANDE) , Maximum Distance (MaxD) , State Collision Rate (SCR) , Percentage Deviated (PD) , Distance to Goal (DtG) , Fraction of Near Misses (FNM) , Expected Calibration Error (ECE) , and qualitative (Q) .
A few works predict the orientations of pedestrians or vehicles , and therefore also report performance using Mean Angular Error (MAnE).
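As a minimal sketch of the two most common displacement measures, the following computes ADE and FDE with Euclidean point-wise distance over a toy 2D trajectory; the trajectory values and the (T, 2) array layout are illustrative assumptions.

```python
import numpy as np

def ade_fde(pred, gt):
    """Average and Final Displacement Error with Euclidean (L2) distance.

    pred, gt: arrays of shape (T, 2) holding (x, y) positions per time step."""
    dists = np.linalg.norm(pred - gt, axis=-1)  # per-step displacement error
    return dists.mean(), dists[-1]

# Toy ground truth moving along the x-axis; prediction drifts upward over time.
gt = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
pred = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
ade, fde = ade_fde(pred, gt)
print(ade, fde)  # 1.0 2.0
```

FDE exceeds ADE here because the error grows with the prediction horizon, a typical pattern for trajectory models.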
Despite the fact that the reported results might imply the choice of metrics and units, the lack of specification can cause erroneous comparisons, especially because many authors use the results of previous works directly as reported in the papers.\nUnfortunately, metric and unit discrepancy exists within the same applications and the same error measuring techniques. For instance, in the case of the ADE measure, this metric is originally proposed in in terms of ED, and was referred to as ADE by the authors of despite the fact that they used MSE instead. This is also apparent in many subsequent works that employed the ADE measure. For example, the majority of methods use the original metric and report the results in terms of ED whereas some works use MSE and RMSE or do not specify the metric . Although the formulations of these metrics look similar, they produce different results. ADE using ED, for example, averages the square roots of squared differences over all samples and time steps. Unlike ED, in RMSE, the averaging takes place inside the square-root operation. MSE, on the other hand, is very different from the other two metrics, and does not calculate the root of the error. As we can also see in some of the past works, this discrepancy may cause confusion about the intended and actual metric that is used. For example, in the authors propose to use the MAE metric while presenting the mathematical formulation of Euclidean distance. The authors of make a similar mistake and define the ED formulation but refer to it as MSE.\nIn addition, some algorithms within the same applications and using the same datasets use different measuring units. For instance, in the context of surveillance, ETH is one of the most commonly used datasets. Many works use this dataset for benchmarking the performance of their proposed algorithms, however, they either use different units, e.g.
meter , pixel , normalized pixel , or do not specify the unit used .\nLast but not least, another potential source of error in performance evaluation is in the design of the experiments. Taking surveillance applications as an example, it is a common practice to evaluate algorithms by observing 8 frames of the past and predicting 12 steps into the future . However, in some cases the performance of the state-of-the-art is reported under the standard 8/12 condition, but the proposed algorithms are tested under different conditions. For instance, in the authors incorrectly compared the performance of their proposed algorithm using 5 observations and 5 predictions with the results of previous works evaluated under the standard 8/12 condition.
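The metric discrepancy discussed above (root inside the average for ED-based ADE, root outside the average for RMSE, no root for MSE) can be demonstrated numerically. The toy trajectories below are our own illustrative assumptions; the point is that the three formulas yield three different values on the same data, so an unlabeled "displacement error" is not comparable across papers.

```python
import numpy as np

# Toy ground truth and prediction, shape (T, 2): the prediction drifts off-axis.
gt = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
pred = np.array([[0.0, 0.5], [1.0, 1.0], [2.0, 1.5]])
sq = np.sum((pred - gt) ** 2, axis=-1)  # squared error per time step

ade_ed = np.mean(np.sqrt(sq))  # root inside the average (Euclidean distance)
rmse = np.sqrt(np.mean(sq))    # root outside the average
mse = np.mean(sq)              # no root at all
print(ade_ed, rmse, mse)  # three different numbers for the same trajectories
```

Here ADE (ED) gives 1.0, RMSE about 1.08, and MSE about 1.17, so quoting any of them without naming the formula (and the unit) makes cross-paper comparison unreliable.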
\nAs an alternative to distance error metrics, Percentage of Correct Keypoints (PCK) measures how many of the keypoints (e.g. joints) are predicted correctly. The correct predictions are those that are below a certain error threshold (e.g. 0.05). Some works also use the accuracy metric to report on how well the algorithm can localize the position of a particular joint within an error tolerance region .\nOther metrics used in the literature include Normalized Power Spectrum Similarity (NPSS) , Reconstruction Error (RE) , Limb Orientation (LO) , PoSe Entropy (PSEnt), PoSe KL (PSKL) , qualitative human judgment and method Run Time (RT) .", "id": "d0bc398b-84fd-40e6-a4bf-c6c2267c1d9c", "level": "subsection", "origin_cites_number": 23, "parent_id": "4c69ee31-a62c-468f-a731-a0d63c6e9fe2", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Metrics" ], [ "subsection", "Motion prediction" ] ], "subsections": [ "aa256766-5674-49c7-8798-111d626f386c" ], "title": "Motion prediction" }, { "cite_extract_rate": 0.5454545454545451, "cites": [ 2814, 2871, 2821, 8592, 2874, 2841, 2875, 2873, 2878, 2876, 2877, 2872 ], "content": "Similar to trajectory methods, motion prediction algorithms are evaluated using distance-based methods that calculate the error between pose vectors. In the case of MAnE measure, some methods use ED metric while others use MSE . Sometimes no metric is specified . The same holds for MJE measure where metrics used include MSE , RMSE , ED , MAE , or no metric is specified .\nThe added challenge in coordinate-based error measures, e.g. MJE, MPJPE, is the error unit. While many approaches do not specify the unit explicitly , others clearly state whether the unit is in pixel , centimeter , meter or millimeter . As was the case before, here many algorithms that benchmark on the same datasets, may use different performance metrics, e.g. 
using the popular human dataset Human 3.6M , MAnE (ED), MAnE (MSE) and MJE (ED) are used.
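As a sketch of the PCK measure mentioned above, the snippet below counts the fraction of predicted joints whose Euclidean error falls under a tolerance. The joint coordinates and the 0.05 threshold are illustrative assumptions; in practice the threshold is often normalized by a reference length such as head or torso size.

```python
import numpy as np

def pck(pred, gt, threshold):
    """Percentage of Correct Keypoints: fraction of predicted joints whose
    Euclidean error is below the given threshold.

    pred, gt: (J, 2) arrays of joint coordinates."""
    errors = np.linalg.norm(pred - gt, axis=-1)
    return np.mean(errors < threshold)

# Toy pose with 4 joints; one predicted joint is clearly off.
gt = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0], [3.0, 1.0]])
pred = gt + np.array([[0.01, 0.0], [0.0, 0.2], [0.02, 0.0], [0.0, 0.0]])
print(pck(pred, gt, threshold=0.05))  # 0.75: one joint exceeds the tolerance
```

Unlike MJE, PCK is insensitive to how badly the failing joints miss, which is why the two measures are usually reported together.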
Alternatively, OGMs can be evaluated using a binary classification metric. Here, the grid cells are classified as occupied or free by applying a threshold and then can be evaluated as a whole by using metrics such as True Positive (TP), True Negative (TN) , Receiver Operating Characteristic (ROC) curve over TP and TN , F1-score , precision, recall, and their corresponding AUC . Given that OGM prediction algorithms are mainly used in safety-critical applications such as autonomous driving, some algorithms are also evaluated in terms of their Run Time (RT) .\nImage similarity metrics such as PSNR and SSIM can also be used in the segmentation prediction domain . The most common metric, however, is Intersection over Union (IoU) which measures the average overlap of segmented instances with the ground truth segments. In addition, by applying a threshold to IoU scores, the true matches can be identified and used to calculate the Average Precision (AP) scores as in . Other metrics used for segmentation prediction tasks include EndPoint Error (EPE) , Probabilistic Rand Index (RI), Global Consistency Error (GCE), and Variation of Information (VoI) .
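A minimal sketch of the IoU measure used for segmentation maps and thresholded occupancy grids; the 4x4 binary masks below are illustrative assumptions, and the both-empty convention is our own choice.

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union of two binary masks (occupancy or segmentation)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0  # both empty: treated as a perfect match

# Toy grids: ground truth occupies a 2x2 block, prediction over-extends to 2x3.
gt = np.zeros((4, 4), dtype=int)
gt[1:3, 1:3] = 1      # 4 occupied cells
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:4] = 1    # 6 predicted cells
print(iou(pred, gt))  # 4 cells overlap out of 6 in the union, ≈ 0.667
```

The same function applies per class or per instance; averaging IoU over classes gives the mean IoU commonly reported for segmentation.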
\\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{Dataset}}} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{Type}}} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{Annotations}}} & \\multicolumn{5}{c|}{\\textbf{Application}} \\\\ \\cline{5-9} \n & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\textit{\\textbf{V}} & \\textit{\\textbf{A}} & \\textit{\\textbf{T}} & \\textit{\\textbf{M}} & \\textit{\\textbf{O}} \\\\ \\hline\n\\multirow{12}{*}{\\textit{\\textbf{2019}}} & ARGOVerse & Traffic & RGB, LIDAR, 3D BB & & & x & & \\\\ \\cline{2-9} \n & CARLA & Traffic (sim) & RGB & & x & & & \\\\ \\cline{2-9} \n & EgoPose & Pose (ego) & RGB, 3D Pose & & & & x & \\\\ \\cline{2-9} \n & Future Motion (FM) & Mix & RGB, BB, Attrib. & & & x & & \\\\ \\cline{2-9} \n & InstaVariety & Activities & RGB, BB, Pose & & & & x & \\\\ \\cline{2-9} \n & INTEARCTION & Traffic & Map, Traj. & & & x & & \\\\ \\cline{2-9} \n & Luggage & Robot & Stereo RGB, BB & & x & & & \\\\ \\cline{2-9} \n & MGIF & Activities & RGB & x & & & & \\\\ \\cline{2-9} \n & Pedestrian Intention Estimation (PIE) & Traffic & RGB, BB, Class, Attrib., Temporal seg., Vehicle sensors & & x & x & & \\\\ \\cline{2-9} \n & nuScenes & Traffic & RGB, LIDAR, 3D BB, Vehicle sensors & & & x & & \\\\ \\cline{2-9} \n & Vehicle-Pedestrian-Mixed (VPM) & Traffic & RGB, BB & & & x & & \\\\ \\cline{2-9} \n & TRAF & Traffic & RGB, BN, Class, Time-of-day & & & x & & \\\\ \\hline\n\\multirow{10}{*}{\\textit{\\textbf{2018}}} & 3D POSES IN THE WILD (3DPW) & Outdoor & RGB, 2D/3D Pose, Models & & & & x & \\\\ \\cline{2-9} \n & ActEV/VIRAT & Surveillance & RGB, BB, Activity, Temporal seg. & & x & x & & \\\\ \\cline{2-9} \n & ACTICIPATE & Interaction & RGB, Gaze, Pose & & x & & & \\\\ \\cline{2-9} \n & Atomic Visual Actions (AVA) & Activities & RGB, Activity, Temporal seg. & & x & & & \\\\ \\cline{2-9} \n & Epic-Kitchen & Cooking (ego) & RGB, Audio, BB, Class, Text, Temporal seg. 
& & x & & & \\\\ \\cline{2-9} \n & EGTEA Gaze+ & Cooking (ego) & RGB, Gaze, Mask, Activity, Temporal seg. & & x & & & \\\\ \\cline{2-9} \n & ShanghaiTech Campus (STC) & Surveillance & RGB, Anomaly & x & & & & \\\\ \\cline{2-9} \n & ShapeStack & Objects (sim) & RGBD, Mask, Stability & x & & & & \\\\ \\cline{2-9} \n & VIENA & Traffic (sim) & RGB, Activity, Vehicle sensors & & x & & & \\\\ \\cline{2-9} \n & YouCook2 & Cooking & RGB, Audio, Text, Activity, Temporal seg. & & x & & & \\\\ \\hline\n\\multirow{10}{*}{\\textit{\\textbf{2017}}} & BU Action (BUA) & Activities & RGB (image), Activity & & x & & & \\\\ \\cline{2-9} \n & CityPerson & Traffic & Stereo RGB, BB, Semantic seg. & & & x & & \\\\ \\cline{2-9} \n & Epic-Fail & Risk assessment & RGB, BB, Traj., Temporal seg. & & x & & & \\\\ \\cline{2-9} \n & Joint Attention in Autonomous Driving (JAAD) & Traffic & RGB, BB, Attrib., Temporal seg. & x & x & x & & x \\\\ \\cline{2-9} \n & L-CAS & Traffic & LIDAR, 3D BB, Attrib. & & & x & & \\\\ \\cline{2-9} \n & Mouse Fish & Animals & Depth, 3D Pose & & & & x & \\\\ \\cline{2-9} \n & Oxford Robot Car (ORC) & Traffic & Stereo RGB, LIDAR, Vehicle sensors & & & x & & \\\\ \\cline{2-9} \n & PKU-MMD & Activities, interactions & RGBD, IR, 3D Pose, Multiview, Temporal seg. & & x & & & \\\\ \\cline{2-9} \n & Recipe1M & Cooking & RGB(image), Text & & x & & & \\\\ \\cline{2-9} \n & STRANDS & Traffic & RGBD, 3DBB & & & x & & \\\\ \\hline\n\\multirow{13}{*}{\\textit{\\textbf{2016}}} & BAIR Push & Object manipulation & RGB & x & & & & \\\\ \\cline{2-9} \n & Bouncing Ball (BB) & Simulation & RGB & x & & & & \\\\ \\cline{2-9} \n & Miss Universe (MU) & Miss universe & RGB, BB, Scores & & & & & x \\\\ \\cline{2-9} \n & Cityscapes & Traffic & Stereo RGB, BB, Semantic seg., Vehicle Sensors & x & & x & & x \\\\ \\cline{2-9} \n & CMU Mocap & Activities & 3D Pose, Activity & & x & & x & \\\\ \\cline{2-9} \n & Dashcam Accident Dataset (DAD) & Traffic, accidents & RGB, BB, Class, Temporal seg. 
& & x & & \\\\ \\cline{2-9} \n & NTU RGB-D & Activities & RGBD, IR, 3D Pose, Activity & & x & & & \\\\ \\cline{2-9} \n & Ongoing Activity (OA) & Activities & RGB, Activity & & x & & & \\\\ \\cline{2-9} \n & OAD & Activities & RGBD, 3D Pose, Activity, Temporal seg. & & x & & & \\\\ \\cline{2-9} \n & Stanford Drone (SD) & Surveillance & RGB, BB, Class & & & x & & \\\\ \\cline{2-9} \n & TV Series & Activities & RGB, Activity, Temporal seg. & & x & & & \\\\ \\cline{2-9} \n & Visual StoryTelling (VIST) & Visual story & RGB, Text & & & & & x \\\\ \\cline{2-9} \n & Youtube-8M & Activities & RGB, Activity, Temporal seg. & x & & & & \\\\ \\hline\n\\multirow{14}{*}{\\textit{\\textbf{2015}}} & Amazon & Fashion & Features, Attrib., Text & & & & & x \\\\ \\cline{2-9} \n & Atari & Games & RGB & x & & & & \\\\ \\cline{2-9} \n & Brain4Cars & Traffic, Driver & RGB, BB, Attrib., Temporal seg., Vehicle sensors & & x & & & \\\\ \\cline{2-9} \n & CMU Panoptic & Interaction & RGBD, Multiview, 3D Pose, 3D facial landmark & x & x & & x & \\\\ \\cline{2-9} \n & First Person Personalized Activities (FPPA) & Activities (ego) & RGB, Activity, Temporal seg. & & x & & & \\\\ \\cline{2-9} \n & GTEA Gaze+ & Cooking (ego) & RGB, Gaze, Mask, Activity, Temporal seg. & & x & & & \\\\ \\cline{2-9} \n & MicroBlog-Images (MBI-1M) & Tweets & RGB (image), Attrib., Text & & & & & x \\\\ \\cline{2-9} \n & MOT & Surveillance & RGB, BB & & & x & & \\\\ \\cline{2-9} \n & Moving MNIST (MMNIST) & Digits & Grayscale & x & & & & \\\\ \\cline{2-9} \n & SUN RGB-D & Places & RGBD, 3D BB, Class & & & & & x \\\\ \\cline{2-9} \n & SYSU 3DHOI & Object interaction & RGBD, 3D Pose, Activity & & x & & & \\\\ \\cline{2-9} \n & THUMOS & Activities & RGB, Activity, Temporal seg. & x & x & & & \\\\ \\cline{2-9} \n & Watch-n-Push (WnP) & Activities & RGBD, 3D Pose, Activity, Temporal seg.
& & x & & & \\\\ \\cline{2-9} \n & Wider & Activities & RGB (image), Activity & & x & & & \\\\ \\hline\n\\multirow{5}{*}{\\textit{\\textbf{2014}}} & Breakfast & Cooking & RGB, Activity, Temporal seg. & & x & & & \\\\ \\cline{2-9} \n & Human3.6M & Activities & RGB, 3D Pose, Activity & x & x & & x & \\\\ \\cline{2-9} \n & MPII Human Pose & Activities & RGB, Pose, Activity & & & & x & \\\\ \\cline{2-9} \n & Online RGBD Action Dataset (ORGBD) & Activities & RGBD, BB, 3D Pose, Activity & & x & & & \\\\ \\cline{2-9} \n & Sports-1M & Sports & RGB, Activity & x & x & & & \\\\ \\hline\n\\end{tabular}\n}\n\\caption{A summary of common datasets from years 2014-2019 used in vision-based prediction applications, namely video (V), action (A), trajectory (T), motion (M) and others (O). The annotation column specifies the type of data (e.g. RGB, Infrared(IR)) and annotation types. All datasets contain image sequences unless specified by \"image\". As for annotations, BB stands for bounding box. Attributes include any object characteristics (e.g. for pedestrians demographics, behavior). Vehicle sensors may include speed, steering angle, GPS, etc. Temporal seg. identifies the datasets that specify the start and end of the events.}\n\\label{dataset_table1}\n\\end{table*}\n\\begin{table*}[]\n\\centering\n\\resizebox{1\\textwidth}{!}{\n\\begin{tabular}{|l|l|l|l|c|c|c|c|c|}\n\\hline\n\\multirow{2}{*}{\\textbf{Year}} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{Dataset}}} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{Type}}} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{Annotations}}} & \\multicolumn{5}{c|}{\\textbf{Application}} \\\\ \\cline{5-9} \n & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\textit{\\textbf{V}} & \\textit{\\textbf{A}} & \\textit{\\textbf{T}} & \\textit{\\textbf{M}} & \\textit{\\textbf{O}} \\\\ \\hline\n \\multirow{7}{*}{\\textit{\\textbf{2013}}} & 50Salads & Cooking (ego) & RGBD, Activity, Temporal seg. 
& & x & & \\\\ \\cline{2-9} \n & ATC & Surveillance & RGB, Traj., Attrib. & & & x & & \\\\ \\cline{2-9} \n & CAD-120 & Activities & RGBD, 3D Pose, Activity & & x & & & \\\\ \\cline{2-9} \n & CUHK Avenue & Surveillance & RGB, BB, Anomaly, Temporal seg. & x & & x & & \\\\ \\cline{2-9} \n & Daimler Path & Traffic & Stereo Grayscale, BB, Temporal seg., Vehicle sensors & & x & & & \\\\ \\cline{2-9} \n & Joint-annotated HMDB (JHMDB) & Activities & RGB, Mask, Activity, Pose, Optical flow & x & x & & & \\\\ \\cline{2-9} \n & Penn Action & Activities & RGB, BB, Pose, Activity & x & & & x & \\\\ \\hline\n\\multirow{11}{*}{\\textit{\\textbf{2012}}} & BIT & Interaction & RGB, Activity & & x & & & \\\\ \\cline{2-9} \n & GTEA Gaze & Cooking (ego) & RGB, Gaze, Mask, Activity, Temporal seg. & & x & & & \\\\ \\cline{2-9} \n & KITTI & Traffic & Stereo RGB, LIDAR, BB, Optical flow, Vehicle sensors & x & & x & & x \\\\ \\cline{2-9} \n & MANIAC & Object manipulation & RGBD, Semantic seg., Activity & & x & & & \\\\ \\cline{2-9} \n & MPII-Cooking & Cooking & RGB, 3D Pose, Activity, Temporal seg. & & x & & & \\\\ \\cline{2-9} \n & MSR Daily Activity (MSRDA) & Activities & Depth, Activity & & x & & & \\\\ \\cline{2-9} \n & New York Grand Central (GC) & Surveillance & RGB, Traj. & & & x & & \\\\ \\cline{2-9} \n & SBU Kinect Interaction (SBUKI) & Interaction & RGBD, 3D Pose, Activity & & x & x & x & \\\\ \\cline{2-9} \n & UCF-101 & Activities & RGB, Activity & x & x & & x & \\\\ \\cline{2-9} \n & UTKinect-Action (UTKA) & Activities & RGBD, 3D Pose, Activity, Temporal seg.
& & x & & \\\\ \\cline{2-9} \n & UvA-NEMO & Smiles & RGB & x & & & & \\\\ \\hline\n\\multirow{5}{*}{\\textit{\\textbf{2011}}} & Ford campus vision LiDAR (FCVL) & Traffic & RGB, LIDAR, Vehicle sensors & & & & & x \\\\ \\cline{2-9} \n & Human Motion Database (HMDB) & Activities & RGB, BB, Mask, Activity & & x & & & \\\\ \\cline{2-9} \n & Stanford 40 & Activities & RGB (image), BB, Activity & & x & & & \\\\ \\cline{2-9} \n & Town Center & Surveillance & RGB, BB & & & x & & \\\\ \\cline{2-9} \n & VIRAT & Surveillance, Activities & RGB, BB, Activity, Temporal seg. & & x & x & & \\\\ \\hline\n\\multirow{8}{*}{\\textit{\\textbf{2010}}} & DISPLECS & Traffic & RGB, Vehicle sensors & & & & & x \\\\ \\cline{2-9} \n & MSR & Activities & Depth, Activity & x & & & & \\\\ \\cline{2-9} \n & MUG & Facial expressions & RGB, Keypoints, Label & \\multicolumn{1}{l|}{x} & \\multicolumn{1}{l|}{} & \\multicolumn{1}{l|}{} & \\multicolumn{1}{l|}{} & \\multicolumn{1}{l|}{} \\\\ \\cline{2-9} \n & PROST & Objects & RGB, BB & x & & & & \\\\ \\cline{2-9} \n & TV Human Interaction (THI) & Interaction & RGB, BB, Head pose, Activity & & x & & & \\\\ \\cline{2-9} \n & UT Interaction (UTI) & Interaction & RGB, BB, Activity, Temporal seg. & & x & & & \\\\ \\cline{2-9} \n & VISOR & Surveillance & RGB, BB, Pose, Attrib. & x & & & & \\\\ \\cline{2-9} \n & Willow Action & Activities & RGB (image), Activity & & x & & & \\\\ \\hline\n\\multirow{9}{*}{\\textit{\\textbf{2009}}} & Caltech Pedestrian & Traffic & RGB, BB & x & x & & & \\\\ \\cline{2-9} \n & Collective Activity (CA) & Interaction & RGB, BB, Attrib., Activity, Temporal seg. & & x & x & x & \\\\ \\cline{2-9} \n & Edinburgh IFP & Surveillance & RGB, BB & & & x & & \\\\ \\cline{2-9} \n & ETH & Surveillance & RGB, Traj. & & & x & & \\\\ \\cline{2-9} \n & OSU & Sports & RGB, BB, Attrib. & & & x & & \\\\ \\cline{2-9} \n & PETS2009 & Surveillance & RGB, BB & & & x & & \\\\ \\cline{2-9} \n & QMUL & Traffic, anomaly & RGB, Traj.
& & & x & & \\\\ \\cline{2-9} \n & TUM Kitchen & Activities & RGB, RFID, 3D Pose, Activity, Temporal seg. & & & x & & \\\\ \\cline{2-9} \n & YUV Videos & Mix videos & RGB & x & & & & \\\\ \\hline\n\\multirow{2}{*}{\\textit{\\textbf{2008}}} & Daimler & Traffic & Grayscale, BB & & x & & & \\\\ \\cline{2-9} \n & MIT Trajectory (MITT) & Surveillance & RGB, Traj. & & & x & & \\\\ \\hline\n\\multirow{5}{*}{\\textit{\\textbf{2007}}} & AMOS & Weather & RGB, Temperature, Time & & & & & x \\\\ \\cline{2-9} \n & ETH Pedestrian & Traffic & RGB, BB & & x & & & \\\\ \\cline{2-9} \n & Lankershim Boulevard & Traffic & RGB, Traj. & & & x & & \\\\ \\cline{2-9} \n & Next Generation Simulation (NGSIM) & Traffic & Map, Traj. & & x & x & & \\\\ \\cline{2-9} \n & UCY & Surveillance & RGB, Traj., Gaze & & & x & & \\\\ \\hline\n\\textit{\\textbf{2006}} & Tuscan Arizona & Weather & RGB & & & & & x \\\\ \\hline\n\\textit{\\textbf{2004}} & KTH & Activities & Grayscale, Activity & x & & & & \\\\ \\hline\n\\textit{\\textbf{1981}} & Golden Colorado & Weather & RGB & & & & & x \\\\ \\hline\n\\end{tabular}\n}\n\\caption{A summary of common datasets from years 2013 and earlier used in vision-based prediction applications, namely video (V), action (A), trajectory (T), motion (M) and others (O). The annotation column specifies the type of data (e.g. RGB, Infrared(IR)) and annotation types. All datasets contain sequences unless specified by \"image\". As for annotations, BB stands for bounding box. Attributes include any object characteristics (e.g. for pedestrians demographics, behavior). Vehicle sensors may include speed, steering angle, GPS, etc. Temporal seg. 
identifies the datasets that specify the start and end of the events.}\n\\label{dataset_table2}\n\\end{table*}\n\\begin{figure*}\n\\centering\n\\includegraphics[width=1\\textwidth]{dataset_paper.png}\n\\caption{An illustration of datasets and papers that use them.}\n\\label{datasets_fig}\n\\end{figure*}\nWe have identified more than 100 datasets that are used in the vision-based prediction literature. Discussing all datasets in detail is beyond the scope of this paper. We provide a summary of the datasets and their characteristics in Tables \\ref{dataset_table1} and \\ref{dataset_table2} and briefly discuss the more popular datasets in each field. Figure \\ref{datasets_fig} illustrates the list of papers and the corresponding datasets used for evaluation. Note that papers that do not use publicly available datasets are not listed in this figure. For further information, the readers can also refer to Appendices \\ref{links_datasets} and \\ref{papers_datasets}. \n\\textbf{Video prediction.} Almost any form of sequential RGB images can be used for the evaluation of video prediction algorithms. Among the most common are traffic datasets such as KITTI , and Caltech Pedestrian . KITTI is a dataset recorded from inside a vehicle and contains images of urban roads annotated with bounding box information. It also contains depth maps, LIDAR point clouds and semantic segmentation maps. Caltech Pedestrian is a similar dataset with the difference that it only contains RGB images and bounding boxes for pedestrians. It also contains occlusion bounding boxes highlighting the visible portions of the pedestrians. Activity datasets such as UCF-101 and Human3.6M are also widely used. UCF-101 contains videos of various types of activities, such as sports, applying makeup, and playing musical instruments, annotated with activity labels per video. Human3.6M consists of 3.6 million 3D human poses and corresponding images recorded from 11 professional actors.
This dataset contains 17 generic scenarios such as discussion, smoking, and taking photos.\n\\textbf{Action prediction. } The algorithms in this domain are evaluated on a wide range of datasets. For anticipation tasks, traffic datasets such as Next Generation Simulation (NGSIM) and Joint Attention in Autonomous Driving (JAAD) are used. NGSIM contains trajectories of vehicles driving on highways in the United States. The trajectories are accompanied by top-down views of the corresponding road structures. The JAAD dataset contains videos of pedestrians crossing the road recorded using an on-board camera. This dataset contains frame-wise pedestrian bounding boxes and action labels as well as pedestrians' and roads' attributes. A similar dataset to JAAD is Pedestrian Intention Estimation (PIE) which, in addition, provides the ego-vehicle sensor data and spatial annotations for traffic elements.\nAnother popular category of datasets in this domain comprises those containing videos of cooking activities. These datasets are Epic-Kitchen , 50salads , Breakfast and MPII-Cooking . These datasets contain videos showing sequences of different cooking actions performed while preparing meals. All videos in the datasets have temporal segments with corresponding activity labels. Some datasets also provide additional annotations such as object bounding boxes, voice and text in Epic-Kitchen, and the poses of the actors in MPII-Cooking. \nEarly action prediction works widely use the popular UCF-101 dataset and interaction datasets such as UT Interaction (UTI) and BIT . UTI and BIT contain videos of people engaged in interactions with corresponding labels for the types of interactions. In addition, UTI has additional temporal segment annotations detailing different stages of the interactions.\n\\textbf{Trajectory prediction. } The most common datasets in this domain are ETH and UCY which contain surveillance videos of pedestrians walking on sidewalks annotated with their position coordinates.
UCY also provides the gaze directions to capture the viewing angles of pedestrians. Another popular dataset is Stanford Aerial Pedestrian (SAP), also known as Stanford Drone (SD). This dataset contains footage of road users recorded from a top-down view by a drone. The annotations include bounding boxes and object class labels. \n\\textbf{Motion prediction. } The algorithms in this domain are mainly evaluated on the widely popular Human3.6M dataset described earlier. This dataset is particularly suitable for these applications because it contains accurate 3D poses of the actors recorded by a high-speed motion capture system. Using this dataset, the background can be accurately removed, allowing the algorithms to focus purely on changes in the poses. Another popular dataset in this field is Penn Action, which contains RGB videos of various activities with corresponding activity labels and poses of the actors involved. \n\\textbf{Other applications. } The most notable datasets are KITTI, which is used by the OGM prediction algorithms, and Cityscapes, which is used by the segmentation prediction algorithms. Cityscapes contains video footage of urban environments recorded by an on-board camera. 
The data is annotated with semantic masks of the traffic objects and corresponding bounding boxes.", "id": "6ab7b3e0-1f9f-4a59-bcf5-66c554040f7e", "level": "section", "origin_cites_number": 113, "parent_id": "54da09bf-16a7-43ca-8a25-4aafacb8f53e", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Datasets" ] ], "subsections": [], "title": "Datasets" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "2dbff660-4ae2-44a0-bc93-8a1ee6bdbd74", "level": "section", "origin_cites_number": 0, "parent_id": "54da09bf-16a7-43ca-8a25-4aafacb8f53e", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Summary and Discussion" ] ], "subsections": [ "52732619-477d-4fcc-bf24-a81fa432764f", "24dca31c-1523-4bb9-9bbe-85a5ca0d065e", "dd7e38b9-46fb-48ee-b838-9bbba4067a29", "6521f90c-6d80-4566-ab87-2b5811291280" ], "title": "Summary and Discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "There are many factors that define the choice of architecture for vision-based prediction tasks. These factors include the types of input data and expected output, computational efficiency, application-specific constraints, etc. For instance, in terms of network choice, whether feedforward or recurrent, no clear preference is observed in video applications. However, in the case of action, trajectory, and motion prediction, recurrent architectures are strongly preferred. This can be due to the fact that these applications often rely on multi-modal data, which can be combined more easily in a recurrent framework. In the case of trajectory prediction, recurrent architectures give the flexibility of varying observation and prediction lengths without the need for architectural modifications. \nGenerative Adversarial Networks (GANs) are widely used in video prediction applications and to some extent in trajectory prediction methods. 
One of the major challenges in using generative models is dealing with the inherent uncertainty of future representations; in particular, this is an issue in the context of trajectory prediction due to the high unpredictability of human movement. To remedy this issue and capture the uncertainty of movement, techniques such as variational autoencoders, in which uncertainty is modeled as a latent distribution, and probabilistic objective functions are commonly used. \nA more recent trend in the field of vision-based prediction (and perhaps in other computer vision applications) is the use of attention modules. These modules can be applied at the spatial or temporal level or even to adjust the impact of different modalities of data.", "id": "52732619-477d-4fcc-bf24-a81fa432764f", "level": "subsection", "origin_cites_number": 0, "parent_id": "2dbff660-4ae2-44a0-bc93-8a1ee6bdbd74", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Summary and Discussion" ], [ "subsection", "Architecture" ] ], "subsections": [], "title": "Architecture" }, { "cite_extract_rate": 0, "cites": [], "content": "The types of data and methods of processing vary across different applications. For instance, video applications mainly rely on images but also take advantage of alternative representations, such as optical flow, poses, and object-based keypoints, and report improved results. Similarly, many action prediction algorithms use different sources of information such as optical flow, poses, scene attributes, text, complementary sensor readings (e.g. speed in vehicles), gaze, and the time of the actions.\nTrajectory prediction algorithms predominantly rely on trajectory information, with some exceptions that use scene layouts, complementary sensor readings, or other constraints. One of the main modeling tasks in this domain, in particular in surveillance, is capturing the social interactions between the dynamic agents. 
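As a toy illustration of what such interaction modeling involves, the sketch below summarizes nearby agents into a fixed-size context vector by averaging their relative offsets. Learned social-pooling layers in recurrent trajectory predictors serve a similar purpose, but the function name, the hand-picked radius, and the mean aggregation here are purely illustrative, not any published model's design:

```python
import math

def social_context(agent_pos, neighbor_positions, radius=2.0):
    """Average relative offset of neighbors within `radius` of the agent.
    A hand-crafted stand-in for learned interaction (social pooling) modules."""
    ax, ay = agent_pos
    offsets = [(nx - ax, ny - ay) for nx, ny in neighbor_positions
               if math.hypot(nx - ax, ny - ay) <= radius]
    if not offsets:
        return (0.0, 0.0)  # no nearby agents: neutral context
    n = len(offsets)
    return (sum(dx for dx, _ in offsets) / n, sum(dy for _, dy in offsets) / n)
```

In a learned model, this kind of context vector would be concatenated with the agent's own motion features at each recurrent step, letting the predictor react to, for example, an oncoming pedestrian.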
Unlike other vision-based applications, motion prediction algorithms are mainly single-modal and use only the poses and images of agents as inputs.", "id": "24dca31c-1523-4bb9-9bbe-85a5ca0d065e", "level": "subsection", "origin_cites_number": 0, "parent_id": "2dbff660-4ae2-44a0-bc93-8a1ee6bdbd74", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Summary and Discussion" ], [ "subsection", "Data representation and processing" ] ], "subsections": [], "title": "Data representation and processing" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "dd7e38b9-46fb-48ee-b838-9bbba4067a29", "level": "subsection", "origin_cites_number": 0, "parent_id": "2dbff660-4ae2-44a0-bc93-8a1ee6bdbd74", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Summary and Discussion" ], [ "subsection", "Evaluation" ] ], "subsections": [ "eb39ad05-ca7d-4eac-ac9e-5926623f5423", "f4e6a673-f09d-443b-b219-ea24268d39ec" ], "title": "Evaluation" }, { "cite_extract_rate": 0, "cites": [], "content": "Metrics may vary across different applications of vision-based prediction. Video prediction algorithms, for instance, are mainly evaluated using MSE, SSIM, and PSNR, whereas in the case of action prediction algorithms the main metrics are accuracy, precision, and recall. Trajectory prediction works often measure the average distance (ADE) or final distance (FDE) between the actual and predicted locations of the agents. The models with probabilistic outputs are also evaluated using NLL and KLD metrics. Distance-based metrics are used in motion prediction methods, where the error in joint prediction is either calculated on average (MJE) or per joint (MPJPE). In addition, joint accuracy can be reported in terms of the percentage of correct predictions using the PCK metric. 
In this case, a tolerance threshold is defined to determine whether a predicted joint is correct.\nWhile the calculation of performance error for video and action prediction algorithms is fairly standardized, there are major discrepancies across different works in the way error is computed for trajectory and motion prediction algorithms. For example, in trajectory prediction, distance error is calculated using metrics such as MSE, RMSE, and ED, and in units such as pixels or meters. Such discrepancies, and the fact that many works omit mentioning the choice of error metrics and units, increase the chance of incorrect comparisons between models.", "id": "eb39ad05-ca7d-4eac-ac9e-5926623f5423", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "dd7e38b9-46fb-48ee-b838-9bbba4067a29", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Summary and Discussion" ], [ "subsection", "Evaluation" ], [ "subsubsection", "Metrics" ] ], "subsections": [], "title": "Metrics" }, { "cite_extract_rate": 0, "cites": [], "content": "The choice of datasets depends on the objective of the application. For example, action prediction algorithms for cooking activities are evaluated on datasets such as Epic-Kitchen, 50Salads, Breakfast, and MPII-Cooking, and the ones for traffic events are evaluated on JAAD, NGSIM, and PIE. Similarly, trajectory prediction works for surveillance widely use UCY, ETH, and SD, and for traffic NGSIM. Motion prediction algorithms focus more on individual movements in diverse contexts and therefore predominantly use the Human3.6M and Penn Action datasets. \nCompared to other applications, video prediction is an exception. The algorithms in this group are evaluated on almost any dataset with video content. The algorithms in this domain are often task-agnostic, meaning that the same approaches are evaluated on datasets with traffic scenes (e.g. KITTI, Caltech Pedestrian), general activities (e.g. 
UCF-101, Penn Action), basic actions (e.g. Human3.6M, KTH) and synthetic data (e.g. MMNIST, Atari games). Although such generalizability across different domains is a desirable feature in video prediction algorithms, it is often the case that the reason behind the choice of datasets is not discussed, raising the question of whether the decision to select particular datasets is motivated by the limitations of the algorithms.", "id": "f4e6a673-f09d-443b-b219-ea24268d39ec", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "dd7e38b9-46fb-48ee-b838-9bbba4067a29", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Summary and Discussion" ], [ "subsection", "Evaluation" ], [ "subsubsection", "Datasets" ] ], "subsections": [], "title": "Datasets" }, { "cite_extract_rate": 0, "cites": [], "content": "In recent years we have witnessed an exponential growth in the number of works published in the field of vision-based prediction. There are still, however, many open research problems in the field that need to be addressed. \nThe ability to hallucinate or generate parts of the image that were not previously observed is still a major challenge in video prediction applications. In addition, the algorithms in this domain cannot deal with cases where some objects go out of view in future time frames. The performance of action prediction algorithms is still sub-optimal, especially in safety-critical and complex tasks such as event prediction in traffic scenes. To make predictions in such cases, many modalities of data and the relationships between them should be considered, which is often not the case in the proposed approaches.\nTrajectory prediction algorithms mainly rely on changes in the location of the agents to predict their future states. Although this might be an effective approach for tasks such as surveillance, in many other cases it might not be sufficient. 
For example, in order to predict trajectories of pedestrians in traffic scenes, many other sources of information, such as their poses and orientation, road structure, interactions, road conditions, traffic flow, etc., are potentially relevant. Such contextual analysis can also be beneficial for motion prediction algorithms, which mainly rely on the changes in poses to predict the future.\nIn terms of the choice of learning architectures and training schemes, a systematic comparison of different approaches, e.g. using feedforward vs. recurrent networks, the benefits of using adversarial training schemes, various uncertainty modeling approaches, etc., is missing. Such information can be partially extracted from the existing literature; however, in many cases this is not possible due to the lack of standard evaluation procedures and metrics, and the unavailability of the corresponding implementation code and the datasets used for comparison. \n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\\bibliographystyle{IEEEtran}\n\\bibliography{vision_based_prediction}\n\\appendices", "id": "6521f90c-6d80-4566-ab87-2b5811291280", "level": "subsection", "origin_cites_number": 0, "parent_id": "2dbff660-4ae2-44a0-bc93-8a1ee6bdbd74", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Summary and Discussion" ], [ "subsection", "What's next" ] ], "subsections": [], "title": "What's next" }, { "cite_extract_rate": 0.7272727272727271, "cites": [ 2871, 7117, 2817, 2838, 8592, 2808, 2874, 2869, 2819, 2856, 2866, 2809, 979, 2834, 2802, 2845, 2881, 2873, 2870, 2805, 2831, 2878, 2851, 2876, 8590, 7618, 2863, 2849, 2861, 2872, 2813, 2890 ], "content": "\\label{papers_with_code}\n\\begin{table*}[]\n\\centering\n\\resizebox{1\\textwidth}{!}{\n\\begin{tabular}{|c|l|l|}\n\\hline\n\\textbf{App} & \\multicolumn{1}{c|}{\\textbf{Paper}} & \\multicolumn{1}{c|}{\\textbf{Link}} \\\\ \\hline\n\\multirow{14}{*}{\\textbf{Video}}\n & & 
\\url{https://github.com/andrewjywang/SEENet} \\\\ \\cline{2-3} \n & & \\url{https://github.com/Yijunmaverick/FlowGrounded-VideoPrediction} \\\\ \\cline{2-3} \n & & \\url{https://github.com/liuem607/DYAN} \\\\ \\cline{2-3} \n & & \\url{https://github.com/garyzhao/FRGAN} \\\\ \\cline{2-3} \n & & \\url{https://github.com/jthsieh/DDPAE-video-prediction} \\\\ \\cline{2-3} \n & & \\url{https://github.com/xjwxjw/VPSS} \\\\ \\cline{2-3} \n & & \\url{https://github.com/jinbeibei/VarNet} \\\\ \\cline{2-3} \n & & \\url{https://bit.ly/2HqiHqx} \\\\ \\cline{2-3} \n & & \\url{https://github.com/ujjax/pred-rnn} \\\\ \\cline{2-3} \n & & \\url{https://github.com/rubenvillegas/icml2017hierchvid} \\\\ \\cline{2-3} \n & & \\url{https://github.com/tensorflow/models/tree/master/research/video_prediction} \\\\ \\cline{2-3} \n & & \\url{https://github.com/junhyukoh/nips2015-action-conditional-video-prediction} \\\\ \\hline\n\\multirow{8}{*}{\\textbf{Action}} & & \\url{https://github.com/google/next-prediction} \\\\ \\cline{2-3} \n & & \\url{https://github.com/fpv-iplab/rulstm} \\\\ \\cline{2-3} \n & & \\url{https://github.com/aras62/SF-GRU} \\\\ \\cline{2-3} \n & & \\url{https://github.com/aashi7/NearCollision} \\\\ \\cline{2-3} \n & & \\url{https://github.com/yabufarha/anticipating-activities} \\\\ \\cline{2-3} \n & & \\url{https://github.com/gurkirt/realtime-action-detection} \\\\ \\cline{2-3} \n & & \\url{https://github.com/asheshjain399/RNNexp} \\\\ \\cline{2-3} \n & & \\url{https://github.com/aditya7874/Activity-Prediction-in-EgoCentric-Videos} \\\\ \\cline{2-3} \n & & \\url{https://github.com/JoeHEZHAO/Spatiotemporal-Residual-Propagation} \\\\ \\hline\n\\multirow{15}{*}{\\textbf{Trajectory}} & & \\url{https://go.umd.edu/TraPHic} \\\\ \\cline{2-3} \n & & \\url{https://github.com/google/next-prediction} \\\\ \\cline{2-3} \n & & \\url{https://github.com/zhangpur/SR-LSTM} \\\\ \\cline{2-3} \n & & \\url{https://github.com/aras62/PIEPredict} \\\\ \\cline{2-3} \n & & 
\\url{https://sites.google.com/view/precog} \\\\ \\cline{2-3} \n & & \\url{https://rebrand.ly/INFER-results} \\\\ \\cline{2-3} \n & & \\url{https://github.com/wzhi/KernelTrajectoryMaps} \\\\ \\cline{2-3} \n & & \\url{https://github.com/apratimbhattacharyya18/onboard_long_term_prediction} \\\\ \\cline{2-3} \n & & \\url{https://github.com/agrimgupta92/sgan} \\\\ \\cline{2-3} \n & & \\url{https://github.com/svip-lab/CIDNN} \\\\ \\cline{2-3} \n & & \\url{https://github.com/yfzhang/vehicle-motion-forecasting} \\\\ \\cline{2-3} \n & & \\url{https://github.com/yadrimz/DESIRE} \\\\ \\cline{2-3} \n & & \\url{https://github.com/quancore/social-lstm} \\\\ \\hline\n\\multirow{11}{*}{\\textbf{Motion}} & & \\url{https://github.com/cr7anand/neural_temporal_models} \\\\ \\cline{2-3} \n & & \\url{https://github.com/BII-wushuang/Lie-Group-Motion-Prediction} \\\\ \\cline{2-3} \n & & \\url{https://github.com/wei-mao-2019/LearnTrajDep} \\\\ \\cline{2-3} \n & & \\url{https://github.com/magnux/MotionGAN} \\\\ \\cline{2-3} \n & & \\url{https://github.com/Khrylx/EgoPose} \\\\ \\cline{2-3} \n & & \\url{https://jasonyzhang.com/phd/} \\\\ \\cline{2-3} \n & & \\url{https://github.com/eddyhkchiu/pose_forecast_wacv/.} \\\\ \\cline{2-3} \n & & \\url{https://github.com/ywchao/image-play} \\\\ \\cline{2-3} \n & & \\url{https://github.com/una-dinosauria/human-motion-prediction} \\\\ \\cline{2-3} \\hline\n\\multicolumn{1}{|l|}{\\multirow{2}{*}{\\textbf{Others}}} & & \\url{https://bitbucket.org/vguizilini/cvpp/src} \\\\ \\cline{2-3} \n\\multicolumn{1}{|l|}{} & & \\url{https://github.com/facebookresearch/SegmPred} \\\\ \\hline\n\\end{tabular}\n}\n\\caption{A summary of vision-based prediction papers with published code.}\n\\label{table_published_code}\n\\end{table*}\nA list of papers with official published code can be found in Table \\ref{table_published_code}.", "id": "23d750d7-3a7a-460d-8e6b-4f54e379529f", "level": "section", "origin_cites_number": 44, "parent_id": 
"54da09bf-16a7-43ca-8a25-4aafacb8f53e", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Papers with code" ] ], "subsections": [], "title": "Papers with code" }, { "cite_extract_rate": 0.461187214611872, "cites": [ 2829, 2871, 2844, 2807, 2808, 2838, 2869, 2841, 2862, 2822, 979, 2840, 2824, 2811, 2848, 2845, 2850, 2835, 2873, 2870, 2827, 2842, 2836, 2887, 9096, 1261, 8590, 2849, 2877, 2813, 2814, 2855, 2860, 7117, 2853, 2879, 2816, 2823, 2856, 2866, 2852, 2888, 2809, 2858, 2802, 2865, 2805, 2826, 2833, 2876, 7618, 2839, 2830, 2843, 2847, 2854, 8592, 2882, 2868, 2832, 2874, 2857, 2828, 2806, 8595, 2834, 2859, 2875, 2881, 2812, 2886, 2878, 2851, 2863, 7617, 7379, 2803, 2890, 8594, 2804, 2821, 2837, 2867, 2817, 2880, 2885, 2819, 2846, 2815, 2825, 2883, 988, 2818, 2820, 2810, 2831, 2884, 8591, 2861, 2872, 2864 ], "content": "\\label{metrics_papers}\n\\begin{table*}[ht]\n\\centering\n\\begin{tabular}{|l|p{0.8\\textwidth}|}\n\\hline\n\\textbf{Metric} & \\textbf{Papers} \\\\ \\hline\n\\textit{\\textbf{BCE}} & \\\\ \\hline\n\\textit{\\textbf{FVD}} & , \\\\ \\hline\n\\textit{\\textbf{Human}} & ,,, \\\\ \\hline\n\\textit{\\textbf{IS}} & \\\\ \\hline\n\\textit{\\textbf{L1}} & , \\\\ \\hline\n\\textit{\\textbf{LPIPS}} & ,,, \\\\ \\hline\n\\textit{\\textbf{MMD}} & \\\\ \\hline\n\\textit{\\textbf{MSE}} & ,,,,, ,,,,, ,, \\\\ \\hline\n\\textit{\\textbf{PSNR}} & ,,,,, ,,,,, ,,,,, ,,,,, ,,,, \\\\ \\hline\n\\textit{\\textbf{RMSE}} & \\\\ \\hline\n\\textit{\\textbf{SSIM}} & ,,,,, ,,,,, ,,,,, ,,,,, ,, \\\\ \\hline\n\\end{tabular}\n\\caption{Metrics used in \\textbf{video prediction} applications.}\n\\label{video_metrics_table}\n\\end{table*}\n\\begin{table*}[ht]\n\\centering\n\\begin{tabular}{|l|p{0.8\\textwidth}|}\n\\hline\n\\textbf{Metrics} & \\textbf{Papers} \\\\ \\hline\n\\textit{\\textbf{AP}} & ,,,,, \\\\ \\hline\n\\textit{\\textbf{ATTA}} & \\\\ \\hline\n\\textit{\\textbf{ATTC}} & \\\\ \\hline\n\\textit{\\textbf{AUC}} & ,, \\\\ 
\\hline\n\\textit{\\textbf{Accuracy}} & ,,,,, ,,,,, ,,,,, ,,,,, ,,,,, ,,,,, ,,,,, ,,,,, ,,,,, ,,,,,,,,,, , \\\\ \\hline\n\\textit{\\textbf{F1}} & ,,,,,,,, \\\\ \\hline\n\\textit{\\textbf{FP}} & \\\\ \\hline\n\\textit{\\textbf{MAE}} & \\\\ \\hline\n\\textit{\\textbf{MCC}} & \\\\ \\hline\n\\textit{\\textbf{MRR}} & \\\\ \\hline\n\\textit{\\textbf{PP}} & \\\\ \\hline\n\\textit{\\textbf{Precision}} & ,,,,, ,,,,, ,,,,, \\\\ \\hline\n\\textit{\\textbf{RMSE}} & ,, \\\\ \\hline\n\\textit{\\textbf{Recall}} & ,,,,, ,,,,, ,,,,, ,, \\\\ \\hline\n\\textit{\\textbf{Run time}} & , \\\\ \\hline\n\\textit{\\textbf{TNR}} & \\\\ \\hline\n\\textit{\\textbf{TPR}} & \\\\ \\hline\n\\textit{\\textbf{TTA}} & \\\\ \\hline\n\\textit{\\textbf{TTM}} & ,,,, \\\\ \\hline\n\\textit{\\textbf{cAP}} & \\\\ \\hline\n\\textit{\\textbf{mAP}} & ,,,,,,, \\\\ \\hline\n\\textit{\\textbf{recall}} & , \\\\ \\hline\n\\end{tabular}\n\\caption{Metrics used in \\textbf{action prediction} applications.}\n\\label{action_metrics_table}\n\\end{table*}\n\\begin{table*}[ht]\n\\centering\n\\begin{tabular}{|l|p{0.8\\textwidth}|}\n\\hline\n\\textbf{Metrics} & \\textbf{Papers} \\\\ \\hline\n\\textit{\\textbf{ADE}} & ,,,,, ,,,,,,,,,, ,,,,, ,,,,,,,,,, ,,,,, ,,,,,,,,,, ,\n\\\\ \\hline\n\\textit{\\textbf{AEDE}} & \\\\ \\hline\n\\textit{\\textbf{ANDE}} & , \\\\ \\hline\n\\textit{\\textbf{APP}} & \\\\ \\hline\n\\textit{\\textbf{Accuracy}} & , \\\\ \\hline\n\\textit{\\textbf{CE}} & \\\\ \\hline\n\\textit{\\textbf{DtG}} & \\\\ \\hline\n\\textit{\\textbf{ECE}} & \\\\ \\hline\n\\textit{\\textbf{ED}} & ,,,,,,,,,\n\\\\ \\hline\n\\textit{\\textbf{FDE}} & ,,,,, ,,,,, ,,,,, ,,,,, ,,,,, ,\n\\\\ \\hline\n\\textit{\\textbf{FNM}} & \\\\ \\hline\n\\textit{\\textbf{Hit rate}} & , \\\\ \\hline\n\\textit{\\textbf{KLD}} & ,, \\\\ \\hline\n\\textit{\\textbf{L1}} & \\\\ \\hline\n\\textit{\\textbf{LL}} & ,,, \\\\ \\hline\n\\textit{\\textbf{MAnE}} & \\\\ \\hline\n\\textit{\\textbf{MHD}} & ,, \\\\ \\hline\n\\textit{\\textbf{MSE}} & , \\\\ 
\\hline\n\\textit{\\textbf{Miss rate}} & , \\\\ \\hline\n\\textit{\\textbf{NLL}} & ,,,,, \\\\ \\hline\n\\textit{\\textbf{NLP}} & , \\\\ \\hline\n\\textit{\\textbf{None}} & \\\\ \\hline\n\\textit{\\textbf{PD}} & \\\\ \\hline\n\\textit{\\textbf{Precision}} & \\\\ \\hline\n\\textit{\\textbf{RMSE}} & , \\\\ \\hline\n\\textit{\\textbf{Run time}} & ,,,, \\\\ \\hline\n\\textit{\\textbf{SCR}} & \\\\ \\hline\n\\textit{\\textbf{WRMSE}} & \\\\ \\hline\n\\textit{\\textbf{maxD}} & , \\\\ \\hline\n\\textit{\\textbf{meanMSD}} & , \\\\ \\hline\n\\textit{\\textbf{minADE}} & ,,, \\\\ \\hline\n\\textit{\\textbf{minED}} & , \\\\ \\hline\n\\textit{\\textbf{minFDE}} & , \\\\ \\hline\n\\textit{\\textbf{minMSD}} & , \\\\ \\hline\n\\end{tabular}\n\\caption{Metrics used in \\textbf{trajectory prediction} applications.}\n\\label{trajectory_metrics_table}\n\\end{table*}\n\\begin{table*}[h]\n\\centering\n\\begin{tabular}{|l|l|}\n\\hline\n\\textbf{Metrics} & \\textbf{Papers} \\\\ \\hline\n\\textit{\\textbf{Accuracy}} & , \\\\ \\hline\n\\textit{\\textbf{Human}} & , \\\\ \\hline\n\\textit{\\textbf{LO}} & \\\\ \\hline\n\\textit{\\textbf{MAnE}} & ,,,,,,,,, \\\\ \\hline\n\\textit{\\textbf{MJE}} & ,,,,,,,,,, \\\\ \\hline\n\\textit{\\textbf{MPJPE}} & , \\\\ \\hline\n\\textit{\\textbf{NPSS}} & \\\\ \\hline\n\\textit{\\textbf{PCK}} & ,,, \\\\ \\hline\n\\textit{\\textbf{PSEnt}} & \\\\ \\hline\n\\textit{\\textbf{PSKL}} & \\\\ \\hline\n\\textit{\\textbf{RE}} & \\\\ \\hline\n\\textit{\\textbf{Run time}} & , \\\\ \\hline\n\\end{tabular}\n\\caption{Metrics used in motion prediction applications.}\n\\label{motion_metrics_table}\n\\end{table*}\nLists of metrics and corresponding papers can be found in Tables \\ref{video_metrics_table}, \\ref{action_metrics_table}, \\ref{trajectory_metrics_table}, \\ref{motion_metrics_table}, and \\ref{other_metrics_table}. 
\n\\begin{table*}[]\n\\centering\n\\begin{tabular}{|l|l|}\n\\hline\n\\textbf{Metrics} & \\textbf{Papers} \\\\ \\hline\n\\textit{\\textbf{AUC}} & \\\\ \\hline\n\\textit{\\textbf{Accuracy}} & ,, \\\\ \\hline\n\\textit{\\textbf{ED}} & , \\\\ \\hline\n\\textit{\\textbf{EPE}} & \\\\ \\hline\n\\textit{\\textbf{F1}} & , \\\\ \\hline\n\\textit{\\textbf{GCE}} & \\\\ \\hline\n\\textit{\\textbf{ISM}} & \\\\ \\hline\n\\textit{\\textbf{IoU}} & ,, \\\\ \\hline\n\\textit{\\textbf{MAE}} & ,, \\\\ \\hline\n\\textit{\\textbf{MAPE}} & , \\\\ \\hline\n\\textit{\\textbf{MCC}} & \\\\ \\hline\n\\textit{\\textbf{MIoU}} & \\\\ \\hline\n\\textit{\\textbf{MSE}} & \\\\ \\hline\n\\textit{\\textbf{PCP}} & \\\\ \\hline\n\\textit{\\textbf{PSNR}} & , \\\\ \\hline\n\\textit{\\textbf{Precision}} & ,, \\\\ \\hline\n\\textit{\\textbf{Psi}} & \\\\ \\hline\n\\textit{\\textbf{RI}} & \\\\ \\hline\n\\textit{\\textbf{RMSE}} & \\\\ \\hline\n\\textit{\\textbf{ROC}} & , \\\\ \\hline\n\\textit{\\textbf{Recall}} & ,, \\\\ \\hline\n\\textit{\\textbf{Run time}} & ,, \\\\ \\hline\n\\textit{\\textbf{SRC}} & \\\\ \\hline\n\\textit{\\textbf{SSIM}} & ,, \\\\ \\hline\n\\textit{\\textbf{TN}} & \\\\ \\hline\n\\textit{\\textbf{TP}} & \\\\ \\hline\n\\textit{\\textbf{VoI}} & \\\\ \\hline\n\\textit{\\textbf{nMAPE}} & \\\\ \\hline\n\\end{tabular}\n\\caption{Metrics used in \\textbf{other prediction} applications.}\n\\label{other_metrics_table}\n\\end{table*}\n\\clearpage\n\\clearpage", "id": "63a0cc61-98a6-4be7-b982-ebf42e368ec1", "level": "section", "origin_cites_number": 219, "parent_id": "54da09bf-16a7-43ca-8a25-4aafacb8f53e", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Metrics and corresponding papers" ] ], "subsections": [], "title": "Metrics and corresponding papers" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{appendix_metrics}", "id": "14c8fcc2-4517-4a60-9512-dfff6198b563", "level": "section", "origin_cites_number": 0, "parent_id": 
"54da09bf-16a7-43ca-8a25-4aafacb8f53e", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Metric formulas" ] ], "subsections": [ "e9d8a36b-2d5e-4533-a6ab-b63d369319dc", "ddfb59d7-61bc-4e3f-8381-c962428d999f", "ed1fa5f0-0752-4fc1-89e1-31159f822bee", "327259be-0b0d-405f-9880-1b39cd8f50fa" ], "title": "Metric formulas" }, { "cite_extract_rate": 0, "cites": [], "content": "$$ MSE= \\frac{1}{MN} \\sum_{i=1}^M\\sum_{j=1}^N(I(i,j) -\\tilde{I}(i,j))^2$$\n $$PSNR = 20\\log_{10}\\left(\\frac{MAX_I}{\\sqrt{MSE}}\\right)$$\n\\textbf{Structural Similarity (SSIM)} \n$$luminance (l)(x,y) = \\frac{2\\mu_x\\mu_y + C_1}{\\mu_x^2 + \\mu_y^2 + C_1}$$\n$$\\mu_x=\\frac{1}{N}\\sum_{i=1}^Nx_i$$\n$$C_1=(K_1L)^2$$\n\\noindent where $L$ is the dynamic range of pixel values (e.g. 255) and $K_1 \\ll 1$ is a small constant.\n$$contrast (c)(x,y) = \\frac{2\\sigma_x\\sigma_y+C_2}{\\sigma_x^2+\\sigma_y^2+C_2}$$\n$$\\sigma_x=\\left(\\frac{1}{N-1}\\sum_{i=1}^N(x_i-\\mu_x)^2\\right)^{1/2}$$\n$$C_2=(K_2L)^2$$ $$K_2 \\ll 1$$\n$$structure(s)(x,y) = \\frac{\\sigma_{xy}+C_3}{\\sigma_x\\sigma_y+C_3}$$\n$$C_3=(K_3L)^2$$ $$K_3 \\ll 1$$\n$$SSIM(x,y)=[l(x,y)]^\\alpha \\cdot [c(x,y)]^\\beta \\cdot [s(x,y)]^\\gamma$$\n\\noindent where $\\alpha ,\\beta, \\gamma >0$ are parameters chosen to adjust the importance of each term.\n\\textbf{Learned Perceptual Image Patch Similarity (LPIPS)}\nAssume features are extracted from $L$ layers and unit-normalized in the channel dimension; for layer $l$,\n$$\\hat{y}^l,\\hat{y}^l_0 \\in R^{H_l\\times W_l \\times C_l}.$$\nThe distance between the reference $x$ and the distorted patch $x_0$ is given by\n$$d(x,x_0)=\\sum_l \\frac{1}{H_lW_l}\\sum_{h,w}\\parallel w_l \\odot (\\hat{y}^l_{hw}-\\hat{y}^l_{0hw})\\parallel ^2_2$$", "id": "e9d8a36b-2d5e-4533-a6ab-b63d369319dc", "level": "subsection", "origin_cites_number": 0, "parent_id": "14c8fcc2-4517-4a60-9512-dfff6198b563", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", 
"Metric formulas" ], [ "subsection", "Video prediction" ] ], "subsections": [], "title": "Video prediction" }, { "cite_extract_rate": 0, "cites": [], "content": "There are 4 possibilities for classification: True Positive (TP) and True Negative (TN) when the algorithm correctly classifies positive and negative samples, and False Positive (FP) and False Negative (FN) when the algorithm incorrectly classifies negative samples as positive and vice versa.\n$$Accuracy = \\frac{TN+TP}{TP+TN+FP+FN}$$\n$$Precision = \\frac{TP}{TP+FP}$$\n$$Recall = \\frac{TP}{TP+FN}$$\n$$F_1 = 2 \\times \\frac{Precision \\times Recall}{Precision + Recall}$$\n$$RMSE = \\sqrt{\\frac{1}{n} \\sum_{i=1}^n(y_i -\\tilde{y_i})^2}$$\nLet $p(r)$ be the precision-recall curve. Then,\n$$AP = \\int_0^1 p(r)dr$$", "id": "ddfb59d7-61bc-4e3f-8381-c962428d999f", "level": "subsection", "origin_cites_number": 0, "parent_id": "14c8fcc2-4517-4a60-9512-dfff6198b563", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Metric formulas" ], [ "subsection", "Action prediction" ] ], "subsections": [], "title": "Action prediction" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "ed1fa5f0-0752-4fc1-89e1-31159f822bee", "level": "subsection", "origin_cites_number": 0, "parent_id": "14c8fcc2-4517-4a60-9512-dfff6198b563", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Metric formulas" ], [ "subsection", "Trajectory prediction" ] ], "subsections": [ "4a5b290c-66e4-4bd3-b5e3-68e6a0a15129" ], "title": "Trajectory prediction" }, { "cite_extract_rate": 0, "cites": [], "content": "$$\\text{Euclidean Distance}(ED) = \\parallel y-\\tilde{y}\\parallel = \\parallel y-\\tilde{y}\\parallel_2$$\n$$ = \\sqrt{\\sum_{i=1}^n(y_i -\\tilde{y_i})^2} $$\n$$ \\text{Mean Absolute Error}(MAE) = \\frac{1}{n} \\sum_{i=1}^n|y_i -\\tilde{y_i}| $$\n$$\\text{Mean Square Error} (MSE) = \\parallel y-\\tilde{y}\\parallel^2$$\n$$ 
=\\frac{1}{n} \\sum_{i=1}^n(y_i -\\tilde{y_i})^2$$\n$$\\text{Root MSE} (RMSE) = \\sqrt{\\frac{1}{n} \\sum_{i=1}^n(y_i -\\tilde{y_i})^2}$$\n$$\\text{Hausdorff Distance}(HD) = max_{y\\in Y}min_{\\tilde{y}\\in\\tilde{Y}} \\parallel y-\\tilde{y}\\parallel$$\n$$\\text{Modified HD}(MHD) = max(d(Y,\\tilde{Y}),d(\\tilde{Y}, Y))$$\n$$ d(Y,\\tilde{Y}) = \\frac{1}{N_y}\\sum_{y\\in Y} min_{\\tilde{y}\\in \\tilde{Y}} \\parallel y - \\tilde{y}\\parallel$$\n$$ ADE = \\frac{\\sum^N_{i=1}\\sum^{T_{pred}}_{t=1}\\parallel \\tilde{y}^i_t - y^i_t \\parallel}{N \\times T_{pred}}$$\nwhere $N$ is the number of samples and $T_{pred}$ is the prediction steps.\n$$FDE = \\frac{ \\sum^N_{i=1} \\parallel \\tilde{y}^i_{T_{pred}} - y^i_{T_{pred}} \\parallel}{N}$$\n$$ minMSD = \\mathbb{E}_{\\tilde{Y}_k\\sim q_\\theta}min_{\\tilde{y}\\in\\tilde{Y}_k}\\parallel y-\\tilde{y}\\parallel^2$$\nwhere $q_\\theta$ is the sampling space and $K$ number of samples. \n$$ meanMSD = \\frac{1}{K}\\sum_{k=1}^K\\parallel y-\\tilde{y}\\parallel^2$$\n$$NLL = \\mathbb{E}_{p(Y|X)}\\left[-\\log \\prod_{t=1}^{T_{pred}} p(y_t|X)\\right]$$", "id": "4a5b290c-66e4-4bd3-b5e3-68e6a0a15129", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "ed1fa5f0-0752-4fc1-89e1-31159f822bee", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Metric formulas" ], [ "subsection", "Trajectory prediction" ], [ "subsubsection", "Distance metrics" ] ], "subsections": [], "title": "Distance metrics" }, { "cite_extract_rate": 0, "cites": [], "content": "$$MPJPE = \\frac{1}{N \\times T_{pred}}\\sum_{t=1}^{T_{pred}}\\sum_{i=1}^N \\parallel (J_i^t-J_{root}^t)-(\\tilde{J}_i^t-\\tilde{J}_{root}^t)\\parallel$$\n\\clearpage", "id": "327259be-0b0d-405f-9880-1b39cd8f50fa", "level": "subsection", "origin_cites_number": 0, "parent_id": "14c8fcc2-4517-4a60-9512-dfff6198b563", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Metric formulas" ], [ 
"subsection", "Motion prediction" ] ], "subsections": [], "title": "Motion prediction" }, { "cite_extract_rate": 0.33628318584070804, "cites": [ 2899, 2838, 2808, 2869, 2824, 2892, 8596, 2835, 2897, 1517, 2903, 8590, 2894, 6980, 2891, 2900, 2626, 2908, 2906, 2896, 2905, 2898, 2902, 8597, 1733, 2904, 2907, 2893, 2878, 2851, 7379, 2901, 2880, 2895 ], "content": "\\label{links_datasets}\n\\begin{table*}[]\n\\centering\n\\resizebox{1\\textwidth}{!}{\n\\begin{tabular}{|l|l|l|}\n\\hline\nYear & Dataset & Links \\\\ \\hline\n\\multirow{12}{*}{2019} & ARGOVerse & \\url{https://www.argoverse.org/data.html} \\\\ \\cline{2-3} \n & CARLA & \\url{https://sites.google.com/view/precog} \\\\ \\cline{2-3} \n & EgoPose & \\url{https://github.com/Khrylx/EgoPose} \\\\ \\cline{2-3} \n & FM & \\url{https://mcl.korea.ac.kr/$\\sim$krkim/iccv2019/index.html} \\\\ \\cline{2-3} \n & InstaVariety & \\url{https://github.com/akanazawa/human_dynamics} \\\\ \\cline{2-3} \n & INTEARCTION & \\url{https://interaction-dataset.com} \\\\ \\cline{2-3} \n & Luggage & \\url{https://aashi7.github.io/NearCollision.html} \\\\ \\cline{2-3} \n & MGIF & \\url{https://github.com/AliaksandrSiarohin/monkey-net} \\\\ \\cline{2-3} \n & PIE & \\url{http://data.nvision2.eecs.yorku.ca/PIE_dataset/} \\\\ \\cline{2-3} \n & nuScenes & \\url{https://www.nuscenes.org/} \\\\ \\cline{2-3} \n & Vehicle-Pedestrian-Mixed (VPM) & \\url{http://vr.ict.ac.cn/vp-lstm.} \\\\ \\cline{2-3} \n & TRAF & \\url{https://drive.google.com/drive/folders/1LqzJuRkx5yhOcjWFORO5WZ97v6jg8RHN} \\\\ \\hline\n\\multirow{10}{*}{2018} & 3DPW & \\url{https://virtualhumans.mpi-inf.mpg.de/3DPW/} \\\\ \\cline{2-3} \n & ActEV/VIRAT & \\url{https://actev.nist.gov/trecvid19} \\\\ \\cline{2-3} \n & ACTICIPATE & \\url{http://vislab.isr.tecnico.ulisboa.pt/datasets/} \\\\ \\cline{2-3} \n & AVA & \\url{https://research.google.com/ava/} \\\\ \\cline{2-3} \n & Epic-Kitchen & \\url{https://epic-kitchens.github.io/2019} \\\\ \\cline{2-3} \n & EGTEA Gaze+ & 
\\url{http://www.cbi.gatech.edu/fpv/} \\\\ \\cline{2-3} \n & STC & \\url{https://svip-lab.github.io/dataset/campus_dataset.html} \\\\ \\cline{2-3} \n & ShapeStack & \\url{https://shapestacks.robots.ox.ac.uk/} \\\\ \\cline{2-3} \n & VIENA & \\url{https://sites.google.com/view/viena2-project/home} \\\\ \\cline{2-3} \n & YouCook2 & \\url{http://youcook2.eecs.umich.edu/} \\\\ \\hline\n\\multirow{10}{*}{2017} & BUA & \\url{http://cs-people.bu.edu/sbargal/BU-action/} \\\\ \\cline{2-3} \n & CityPerson & \\url{https://bitbucket.org/shanshanzhang/citypersons/src/default/} \\\\ \\cline{2-3} \n & Epic-fail & \\url{http://aliensunmin.github.io/project/video-Forecasting/} \\\\ \\cline{2-3} \n & JAAD & \\url{http://data.nvision2.eecs.yorku.ca/JAAD_dataset/} \\\\ \\cline{2-3} \n & L-CAS & \\url{https://lcas.lincoln.ac.uk/wp/research/data-sets-software/l-cas-3d-point-cloud-people-dataset/} \\\\ \\cline{2-3} \n & Mouse Fish & \\url{https://web.bii.a-star.edu.sg/archive/machine_learning/Projects/behaviorAnalysis/Lie-X/Lie-X.html} \\\\ \\cline{2-3} \n & ORC & \\url{https://robotcar-dataset.robots.ox.ac.uk/} \\\\ \\cline{2-3} \n & PKU-MMD & \\url{http://www.icst.pku.edu.cn/struct/Projects/PKUMMD.html} \\\\ \\cline{2-3} \n & Recipe1M & \\url{http://pic2recipe.csail.mit.edu/} \\\\ \\cline{2-3} \n & STRANDS & \\url{https://strands.readthedocs.io/en/latest/datasets/} \\\\ \\hline\n\\multirow{13}{*}{2016} & BAIR Push & \\url{https://sites.google.com/site/brainrobotdata/home/push-dataset} \\\\ \\cline{2-3} \n & BB & \\url{https://github.com/mbchang/dynamics} \\\\ \\cline{2-3} \n & MU & \\url{http://staff.itee.uq.edu.au/lovell/MissUniverse/} \\\\ \\cline{2-3} \n & Cityscapes & \\url{https://www.cityscapes-dataset.com/} \\\\ \\cline{2-3} \n & CMU mocap & \\url{http://mocap.cs.cmu.edu/} \\\\ \\cline{2-3} \n & DAD & \\url{https://aliensunmin.github.io/project/dashcam/} \\\\ \\cline{2-3} \n & NTU RGB-D & \\url{http://rose1.ntu.edu.sg/Datasets/actionRecognition.asp} \\\\ \\cline{2-3} \n & OA & 
\\url{http://www.mpii.de/ongoing-activity} \\\\ \\cline{2-3} \n & OAD & \\url{http://www.icst.pku.edu.cn/struct/Projects/OAD.html} \\\\ \\cline{2-3} \n & SD & \\url{http://cvgl.stanford.edu/projects/uav_data/} \\\\ \\cline{2-3} \n & TV Series & \\url{https://github.com/zhenyangli/online_action} \\\\ \\cline{2-3} \n & VIST & \\url{http://visionandlanguage.net/VIST/} \\\\ \\cline{2-3} \n & Youtube-8M & \\url{https://research.google.com/youtube8m/} \\\\ \\hline\n\\multirow{14}{*}{2015} & Amazon & \\url{http://jmcauley.ucsd.edu/data/amazon/index_2014.html} \\\\ \\cline{2-3} \n & Atari & \\url{https://github.com/junhyukoh/nips2015-action-conditional-video-prediction} \\\\ \\cline{2-3} \n & Brain4Cars & \\url{https://github.com/asheshjain399/ICCV2015_Brain4Cars} \\\\ \\cline{2-3} \n & CMU Panoptic & \\url{http://domedb.perception.cs.cmu.edu/dataset.html} \\\\ \\cline{2-3} \n & FPPA & \\url{http://bvision11.cs.unc.edu/bigpen/yipin/ICCV2015/prediction_webpage/Prediction.html} \\\\ \\cline{2-3} \n & GTEA Gaze+ & \\url{http://www.cbi.gatech.edu/fpv/} \\\\ \\cline{2-3} \n & MBI-1M & \\url{http://academic.mywebsiteontheinternet.com/data/} \\\\ \\cline{2-3} \n & MOT & \\url{https://motchallenge.net/} \\\\ \\cline{2-3} \n & MMNIST & \\url{http://www.cs.toronto.edu/$\\sim$nitish/unsupervised_video/} \\\\ \\cline{2-3} \n & SUN RGB-D & \\url{http://rgbd.cs.princeton.edu/} \\\\ \\cline{2-3} \n & SYSU 3DHOI & \\url{http://www.isee-ai.cn/$\\sim$hujianfang/ProjectJOULE.html} \\\\ \\cline{2-3} \n & THUMOS & \\url{http://www.thumos.info/home.html} \\\\ \\cline{2-3} \n & WnP & \\url{http://watchnpatch.cs.cornell.edu/} \\\\ \\cline{2-3} \n & Wider & \\url{http://yjxiong.me/event_recog/WIDER/} \\\\ \\hline\n\\multicolumn{1}{|c|}{\\multirow{5}{*}{2014}} & Breakfast & \\url{http://serre-lab.clps.brown.edu/resource/breakfast-actions-dataset/} \\\\ \\cline{2-3} \n\\multicolumn{1}{|c|}{} & Human3.6M & \\url{http://vision.imar.ro/human3.6m/description.php} \\\\ \\cline{2-3} 
\n\\multicolumn{1}{|c|}{} & MPII Human Pose & \\url{http://human-pose.mpi-inf.mpg.de/} \\\\ \\cline{2-3} \n\\multicolumn{1}{|c|}{} & ORGBD & \\url{https://sites.google.com/site/skicyyu/orgbd} \\\\ \\cline{2-3} \n\\multicolumn{1}{|c|}{} & Sports-1M & \\url{https://cs.stanford.edu/people/karpathy/deepvideo/} \\\\ \\hline\n\\end{tabular}\n}\n\\caption{A summary of datasets (from year 2014-2019) used in vision-based prediction papers and corresponding links.}\n\\label{data_set_links1}\n\\end{table*}\n\\begin{table*}[]\n\\centering\n\\resizebox{1\\textwidth}{!}{\n\\begin{tabular}{|l|l|l|}\n\\hline\nYear & Dataset & Links \\\\ \\hline\n\\multirow{7}{*}{2013} & 50 salads & \\url{https://cvip.computing.dundee.ac.uk/datasets/foodpreparation/50salads/} \\\\ \\cline{2-3} \n & ATC & \\url{https://irc.atr.jp/crest2010_HRI/ATC_dataset/} \\\\ \\cline{2-3} \n & CAD-120 & \\url{http://pr.cs.cornell.edu/humanactivities/data.php} \\\\ \\cline{2-3} \n & CHUK Avenue & \\url{http://www.cse.cuhk.edu.hk/leojia/projects/detectabnormal/dataset.html} \\\\ \\cline{2-3} \n & Daimler path & \\url{http://www.gavrila.net/Datasets/Daimler_Pedestrian_Benchmark_D/Pedestrian_Path_Predict_GCPR_1/pedestrian_path_predict_gcpr_1.html} \\\\ \\cline{2-3} \n & JHMDB & \\url{http://jhmdb.is.tue.mpg.de/} \\\\ \\cline{2-3} \n & Penn Action & \\url{http://dreamdragon.github.io/PennAction/} \\\\ \\hline\n\\multirow{11}{*}{2012} & BIT & \\url{https://sites.google.com/site/alexkongy/software} \\\\ \\cline{2-3} \n & GTEA Gaze & \\url{http://www.cbi.gatech.edu/fpv/} \\\\ \\cline{2-3} \n & KITTI & \\url{http://www.cvlibs.net/datasets/kitti/} \\\\ \\cline{2-3} \n & MANIAC & \\url{https://alexandria.physik3.uni-goettingen.de/cns-group/datasets/maniac/} \\\\ \\cline{2-3} \n & MPII-cooking & \\url{https://www.mpi-inf.mpg.de/departments/computer-vision-and-machine-learning/research/human-activity-recognition/mpii-cooking-activities-dataset/} \\\\ \\cline{2-3} \n & MSRDA & 
\\url{https://documents.uow.edu.au/$\\sim$wanqing/\\#MSRAction3DDatasets} \\\\ \\cline{2-3} \n & GC & \\url{http://www.ee.cuhk.edu.hk/$\\sim$xgwang/grandcentral.html} \\\\ \\cline{2-3} \n & SBUKI & \\url{https://www3.cs.stonybrook.edu/$\\sim$kyun/research/kinect_interaction/index.html} \\\\ \\cline{2-3} \n & UCF-101 & \\url{https://www.crcv.ucf.edu/data/UCF101.php} \\\\ \\cline{2-3} \n & UTKA & \\url{http://cvrc.ece.utexas.edu/KinectDatasets/HOJ3D.html} \\\\ \\cline{2-3} \n & UvA-NEMO & \\url{https://www.uva-nemo.org/} \\\\ \\hline\n\\multirow{5}{*}{2011} & FCVL & \\url{http://robots.engin.umich.edu/SoftwareData/Ford} \\\\ \\cline{2-3} \n & HMDB & \\url{http://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/} \\\\ \\cline{2-3} \n & Stanford 40 & \\url{http://vision.stanford.edu/Datasets/40actions.html} \\\\ \\cline{2-3} \n & Town Center & \\url{http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html\\#datasets} \\\\ \\cline{2-3} \n & VIRAT & \\url{http://viratdata.org/} \\\\ \\hline\n\\multirow{8}{*}{2010} & DISPLECS & \\url{https://cvssp.org/data/diplecs/} \\\\ \\cline{2-3} \n & MSR & \\url{https://www.microsoft.com/en-us/download/details.aspx?id=52315} \\\\ \\cline{2-3} \n & MUG Facial Expression & \\url{https://mug.ee.auth.gr/fed/} \\\\ \\cline{2-3} \n & PROST & \\url{www.gpu4vision.com } \\\\ \\cline{2-3} \n & THI & \\url{http://www.robots.ox.ac.uk/$\\sim$alonso/tv_human_interactions.html} \\\\ \\cline{2-3} \n & UTI & \\url{http://cvrc.ece.utexas.edu/SDHA2010/Human_Interaction.html} \\\\ \\cline{2-3} \n & VISOR & \\url{imagelab.ing.unimore.it/visor} \\\\ \\cline{2-3} \n & Willow Action & \\url{https://www.di.ens.fr/willow/research/stillactions/} \\\\ \\hline\n\\multirow{9}{*}{2009} & Caltech Pedestrian & \\url{http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/} \\\\ \\cline{2-3} \n & Collective Activity (CA) & \\url{http://www-personal.umich.edu/$\\sim$wgchoi/eccv12/wongun_eccv12.html} \\\\ 
\\cline{2-3} \n & EIFP & \\url{http://homepages.inf.ed.ac.uk/rbf/FORUMTRACKING/} \\\\ \\cline{2-3} \n & ETH & \\url{http://www.vision.ee.ethz.ch/en/datasets/} \\\\ \\cline{2-3} \n & OSU & \\url{http://eecs.oregonstate.edu/football/tracking/dataset} \\\\ \\cline{2-3} \n & PETS2009 & \\url{http://www.cvg.reading.ac.uk/PETS2009/a.html} \\\\ \\cline{2-3} \n & QMUL & \\url{http://personal.ie.cuhk.edu.hk/$\\sim$ccloy/downloads_qmul_junction.html} \\\\ \\cline{2-3} \n & TUM Kitchen & \\url{https://ias.in.tum.de/dokuwiki/software/kitchen-activity-data} \\\\ \\cline{2-3} \n & YUV Videos & \\url{http://trace.kom.aau.dk/yuv/index.html} \\\\ \\hline\n\\multirow{2}{*}{2008} & Daimler & \\url{http://www.gavrila.net/Datasets/Daimler_Pedestrian_Benchmark_D/Daimler_Mono_Ped__Detection_Be/daimler_mono_ped__detection_be.html} \\\\ \\cline{2-3} \n & MITT & \\url{http://www.ee.cuhk.edu.hk/$\\sim$xgwang/MITtrajsingle.html} \\\\ \\hline\n\\multirow{5}{*}{2007} & AMOS & \\url{http://amos.cse.wustl.edu/} \\\\ \\cline{2-3} \n & ETH pedestrian & \\url{https://data.vision.ee.ethz.ch/cvl/aess/} \\\\ \\cline{2-3} \n & Lankershim Boulevard & \\url{https://www.fhwa.dot.gov/publications/research/operations/07029/index.cfm} \\\\ \\cline{2-3} \n & NGSIM & \\url{https://ops.fhwa.dot.gov/trafficanalysistools/ngsim.htm} \\\\ \\cline{2-3} \n & UCY & \\url{https://graphics.cs.ucy.ac.cy/research/downloads/crowd-data} \\\\ \\hline\n2006 & Tuscan Arizona & \\url{http://www.mmto.org/} \\\\ \\hline\n2004 & KTH & \\url{http://www.nada.kth.se/cvap/actions/} \\\\ \\hline\n1981 & Golden Colorado & \\url{https://www.osti.gov/dataexplorer/biblio/dataset/1052221} \\\\ \\hline\n\\end{tabular}\n}\n\\caption{A summary of datasets (from year 2013 and earlier) used in vision-based prediction papers and corresponding links.}\n\\label{data_set_links2}\n\\end{table*}\nLists of datasets with associated repository links can be found in Tables \\ref{data_set_links1} and \\ref{data_set_links2}.\n\\clearpage", "id": 
"e82c2ffc-f5dc-4d01-9184-399825652bbe", "level": "section", "origin_cites_number": 113, "parent_id": "54da09bf-16a7-43ca-8a25-4aafacb8f53e", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Links to the datasets" ] ], "subsections": [], "title": "Links to the datasets" }, { "cite_extract_rate": 0.505813953488372, "cites": [ 2829, 2871, 2844, 2838, 2808, 2807, 2869, 2841, 2822, 979, 2840, 2824, 2811, 2845, 2850, 2835, 2873, 2870, 2827, 2842, 2836, 2887, 9096, 8590, 2849, 2877, 2813, 2814, 7117, 2853, 2879, 2816, 2856, 2866, 2888, 2809, 2858, 2802, 2805, 2826, 2833, 2876, 7618, 2839, 2843, 2847, 2854, 8592, 2868, 2832, 2874, 2828, 2806, 8595, 2834, 2859, 2875, 2881, 2812, 2886, 2878, 2851, 2863, 7617, 7379, 2803, 2890, 8594, 2804, 2821, 2837, 2867, 2817, 2880, 2885, 2819, 2846, 2815, 988, 2818, 2820, 2810, 2831, 2884, 2861, 2872, 2864 ], "content": "\\label{papers_datasets}\nLists of datasets and corresponding papers can be found in Tables \\ref{video_datasets_table}, \\ref{action_datasets_table}, \\ref{trajectory_datasets_table}, \\ref{motion_datasets_table}, and \\ref{other_datasets_table}. 
\n\\begin{table*}[]\n\\centering\n\\begin{tabular}{|l|l|}\n\\hline\n\\textbf{Datasets} & \\textbf{Papers} \\\\ \\hline\n\\textit{\\textbf{Atari}} & \\\\ \\hline\n\\textit{\\textbf{BAIR Push}} & ,, \n \\\\ \\hline\n\\textit{\\textbf{Bouncing Ball}} & \\\\ \\hline\n\\textit{\\textbf{CHUK Avenue}} & \\\\ \\hline\n\\textit{\\textbf{Caltech Pedestrian}} & , , ,,,,, \\\\ \\hline\n\\textit{\\textbf{Cityscapes}} & , \\\\ \\hline\n\\textit{\\textbf{Human 3.6M}} & ,,,,,,,, \\\\ \\hline\n\\textit{\\textbf{JAAD}} & \\\\ \\hline\n\\textit{\\textbf{JHMDB}} & \\\\ \\hline\n\\textit{\\textbf{KITTI}} & ,,,,,,,,, \\\\ \\hline\n\\textit{\\textbf{KTH}} & ,,,,,, \\\\ \\hline\n\\textit{\\textbf{MGIF}} & \\\\ \\hline\n\\textit{\\textbf{MMNIST}} & ,,,,,,,, \\\\ \\hline\n\\textit{\\textbf{MSR}} & \\\\ \\hline\n\\textit{\\textbf{MUG}} & \\\\ \\hline\n\\textit{\\textbf{Own}} & , \\\\ \\hline\n\\textit{\\textbf{PROST}} & \\\\ \\hline\n\\textit{\\textbf{Penn Action}} & ,, \\\\ \\hline\n\\textit{\\textbf{Penn action}} & ,, \\\\ \\hline\n\\textit{\\textbf{ShanghaiTech Campus}} & \\\\ \\hline\n\\textit{\\textbf{ShapeStack}} & \\\\ \\hline\n\\textit{\\textbf{Sports-1M}} & , \\\\ \\hline\n\\textit{\\textbf{THUMOS}} & \\\\ \\hline\n\\textit{\\textbf{UCF-101}} & ,,,,,,,,,,,,, \\\\ \\hline\n\\textit{\\textbf{UvA-NEMO}} & \\\\ \\hline\n\\textit{\\textbf{ViSOR}} & \\\\ \\hline\n\\textit{\\textbf{YUV}} & , \\\\ \\hline\n\\textit{\\textbf{Youtube-8M}} & \\\\ \\hline\n\\textit{\\textbf{pedestrian}} & \\\\ \\hline\n\\end{tabular}\n\\caption{Datasets used in \\textbf{video prediction} applications.}\n\\label{video_datasets_table}\n\\end{table*}\n\\begin{table*}[]\n\\centering\n\\begin{tabular}{|l|l|}\n\\hline\n\\textbf{Datasets} & \\textbf{Papers} \\\\ \\hline\n\\textit{\\textbf{50Salad}} & ,, \\\\ \\hline\n\\textit{\\textbf{ANTICIPATE}} & \\\\ \\hline\n\\textit{\\textbf{AVA}} & , \\\\ \\hline\n\\textit{\\textbf{ActEV/VIRAT}} & \\\\ \\hline\n\\textit{\\textbf{BIT}} & ,,, \\\\ \\hline\n\\textit{\\textbf{BU 
Action}} & \\\\ \\hline\n\\textit{\\textbf{Brain4Cars}} & ,, \\\\ \\hline\n\\textit{\\textbf{Breakfast}} & ,, \\\\ \\hline\n\\textit{\\textbf{CA}} & \\\\ \\hline\n\\textit{\\textbf{CAD-120}} & ,,,, \\\\ \\hline\n\\textit{\\textbf{CMU Panoptic}} & \\\\ \\hline\n\\textit{\\textbf{CMU Mocap}} & \\\\ \\hline\n\\textit{\\textbf{Caltech Pedestrian}} & \\\\ \\hline\n\\textit{\\textbf{DAD}} & , \\\\ \\hline\n\\textit{\\textbf{Daimler}} & \\\\ \\hline\n\\textit{\\textbf{Daimler Path}} & \\\\ \\hline\n\\textit{\\textbf{EGTEA Gaze+}} & \\\\ \\hline\n\\textit{\\textbf{ETH Pedestrian}} & \\\\ \\hline\n\\textit{\\textbf{Epic-fail}} & \\\\ \\hline\n\\textit{\\textbf{Epic-Kitchen}} & ,, \\\\ \\hline\n\\textit{\\textbf{FPPA}} & \\\\ \\hline\n\\textit{\\textbf{GTEA Gaze}} & \\\\ \\hline\n\\textit{\\textbf{GTEA Gaze+}} & \\\\ \\hline\n\\textit{\\textbf{HMDB}} & \\\\ \\hline\n\\textit{\\textbf{Human 3.6M}} & \\\\ \\hline\n\\textit{\\textbf{JAAD}} & ,,, \\\\ \\hline\n\\textit{\\textbf{JHMDB}} & ,,,,, \\\\ \\hline\n\\textit{\\textbf{Luggage}} & \\\\ \\hline\n\\textit{\\textbf{MANIAC}} & \\\\ \\hline\n\\textit{\\textbf{MPII Cooking}} & ,, \\\\ \\hline\n\\textit{\\textbf{MSRDA}} & \\\\ \\hline\n\\textit{\\textbf{NGSIM}} & ,, \\\\ \\hline\n\\textit{\\textbf{NTU RGB-D}} & \\\\ \\hline\n\\textit{\\textbf{OA}} & \\\\ \\hline\n\\textit{\\textbf{OAD}} & \\\\ \\hline\n\\textit{\\textbf{ORGBD}} & \\\\ \\hline\n\\textit{\\textbf{PIE}} & \\\\ \\hline\n\\textit{\\textbf{PKU-MMD}} & \\\\ \\hline\n\\textit{\\textbf{Recipe1M}} & \\\\ \\hline\n\\textit{\\textbf{SBUIK}} & \\\\ \\hline\n\\textit{\\textbf{SYSU 3DHOI}} & , \\\\ \\hline\n\\textit{\\textbf{Stanford-40}} & \\\\ \\hline\n\\textit{\\textbf{THUMOS}} & ,, \\\\ \\hline\n\\textit{\\textbf{TV Human Interaction}} & , ,,, \\\\ \\hline\n\\textit{\\textbf{TV Series}} & \\\\ \\hline\n\\textit{\\textbf{UCF-101}} & ,,,,,,,,,, \\\\ \\hline\n\\textit{\\textbf{UTI}} & ,,,,,, \\\\ \\hline\n\\textit{\\textbf{UTKA}} & \\\\ \\hline\n\\textit{\\textbf{VIENA}} & 
\\\\ \\hline\n\\textit{\\textbf{VIRAT}} & \\\\ \\hline\n\\textit{\\textbf{WIDER}} & \\\\ \\hline\n\\textit{\\textbf{Willow Action}} & \\\\ \\hline\n\\textit{\\textbf{WnP}} & \\\\ \\hline\n\\textit{\\textbf{YouCook2}} & \\\\ \\hline\n\\textit{\\textbf{Sports-1M}} & \\\\ \\hline\n\\end{tabular}\n\\caption{Datasets used in \\textbf{action prediction} applications.}\n\\label{action_datasets_table}\n\\end{table*}\n\\begin{table*}[]\n\\centering\n\\begin{tabular}{|l|p{0.8\\textwidth}|}\n\\hline\n\\textbf{Datasets} & \\textbf{Papers} \\\\ \\hline\n\\textit{\\textbf{ARGOVerse}} & \\\\ \\hline\n\\textit{\\textbf{ATC}} & \\\\ \\hline\n\\textit{\\textbf{ActEV/VIRAT}} & \\\\ \\hline\n\\textit{\\textbf{CA}} & \\\\ \\hline\n\\textit{\\textbf{CARLA}} & , \\\\ \\hline\n\\textit{\\textbf{CHUK}} & \\\\ \\hline\n\\textit{\\textbf{CityPerson}} & \\\\ \\hline\n\\textit{\\textbf{Cityscapes}} & \\\\ \\hline\n\\textit{\\textbf{Daimler Path}} & \\\\ \\hline\n\\textit{\\textbf{ETH}} & ,,,,,,,,,,,,,,,,,, \\\\ \\hline\n\\textit{\\textbf{Edinburgh (IFP)}} & , \\\\ \\hline\n\\textit{\\textbf{FM}} & \\\\ \\hline\n\\textit{\\textbf{GC}} & ,,, \\\\ \\hline\n\\textit{\\textbf{INTEARCTION}} & \\\\ \\hline\n\\textit{\\textbf{JAAD}} & \\\\ \\hline\n\\textit{\\textbf{KITTI}} & ,, \\\\ \\hline\n\\textit{\\textbf{L-CAS}} & \\\\ \\hline\n\\textit{\\textbf{Lankershim Boulevard}} & \\\\ \\hline\n\\textit{\\textbf{MITT}} & \\\\ \\hline\n\\textit{\\textbf{MOT}} & \\\\ \\hline\n\\textit{\\textbf{NGSIM}} & ,,,,,, \\\\ \\hline\n\\textit{\\textbf{OSU}} & \\\\ \\hline\n\\textit{\\textbf{Oxford}} & \\\\ \\hline\n\\textit{\\textbf{PETS2009}} & \\\\ \\hline\n\\textit{\\textbf{PIE}} & \\\\ \\hline\n\\textit{\\textbf{QMUL}} & \\\\ \\hline\n\\textit{\\textbf{SBUIK}} & \\\\ \\hline\n\\textit{\\textbf{SD}} & ,,,,,, \\\\ \\hline\n\\textit{\\textbf{STRANDS}} & \\\\ \\hline\n\\textit{\\textbf{TRAF}} & \\\\ \\hline\n\\textit{\\textbf{TUM Kitchen}} & \\\\ \\hline\n\\textit{\\textbf{Town Center}} & ,, \\\\ 
\\hline\n\\textit{\\textbf{UCY}} & ,,,,,,,,,,,,,,,,,,,,,,, \\\\ \\hline\n\\textit{\\textbf{VIRAT}} & \\\\ \\hline\n\\textit{\\textbf{VPM}} & \\\\ \\hline\n\\textit{\\textbf{nuScenes}} & \\\\ \\hline\n\\end{tabular}\n\\caption{Datasets used in \\textbf{trajectory prediction} applications.}\n\\label{trajectory_datasets_table}\n\\end{table*}\n\\begin{table*}[]\n\\centering\n\\begin{tabular}{|l|l|}\n\\hline\n\\textbf{Datasets} & \\textbf{Papers} \\\\ \\hline\n\\textit{\\textbf{3DPW}} & \\\\ \\hline\n\\textit{\\textbf{CA}} & \\\\ \\hline\n\\textit{\\textbf{CMU Mocap}} & \\\\ \\hline\n\\textit{\\textbf{CMU Panoptic}} & \\\\ \\hline\n\\textit{\\textbf{Egopose}} & \\\\ \\hline\n\\textit{\\textbf{Human 3.6M}} & ,,,,,,,,,,,,,, \\\\ \\hline\n\\textit{\\textbf{InstaVariety}} & \\\\ \\hline\n\\textit{\\textbf{MPII Human Pose}} & \\\\ \\hline\n\\textit{\\textbf{Mouse Fish}} & \\\\ \\hline\n\\textit{\\textbf{Own}} & ,,,, \\\\ \\hline\n\\textit{\\textbf{Penn Action}} & ,,, \\\\ \\hline\n\\textit{\\textbf{SBUIK}} & \\\\ \\hline\n\\textit{\\textbf{UCF-101}} & \\\\ \\hline\n\\end{tabular}\n\\caption{Datasets used in \\textbf{motion prediction} applications.}\n\\label{motion_datasets_table}\n\\end{table*}\n\\begin{table*}[]\n\\centering\n\\begin{tabular}{|l|l|}\n\\hline\n\\textbf{Datasets} & \\textbf{Papers} \\\\ \\hline\n\\textit{\\textbf{AMOS}} & \\\\ \\hline\n\\textit{\\textbf{Amazon}} & \\\\ \\hline\n\\textit{\\textbf{Cityscapes}} & ,,, \\\\ \\hline\n\\textit{\\textbf{DIPLECS}} & \\\\ \\hline\n\\textit{\\textbf{FCVL}} & \\\\ \\hline\n\\textit{\\textbf{Golden Colorado}} & \\\\ \\hline\n\\textit{\\textbf{JAAD}} & \\\\ \\hline\n\\textit{\\textbf{KITTI}} & , \\\\ \\hline\n\\textit{\\textbf{MBI-1M}} & \\\\ \\hline\n\\textit{\\textbf{MU}} & \\\\ \\hline\n\\textit{\\textbf{SUN RGB-D}} & \\\\ \\hline\n\\textit{\\textbf{Tuscan Arizona}} & \\\\ \\hline\n\\textit{\\textbf{VIST}} & \\\\ \\hline\n\\end{tabular}\n\\caption{Datasets used in \\textbf{other prediction} 
applications.}\n\\label{other_datasets_table}\n\\end{table*}\n\\end{document}", "id": "a33fb5e9-81a5-4a37-8f37-6dd6031d77ae", "level": "section", "origin_cites_number": 172, "parent_id": "54da09bf-16a7-43ca-8a25-4aafacb8f53e", "prefix_titles": [ [ "title", "Deep Learning for Vision-based Prediction: A Survey" ], [ "section", "Datasets and corresponding papers" ] ], "subsections": [], "title": "Datasets and corresponding papers" } ]
97
[ 2804, 2814, 2817, 2807, 2808, 2816, 5680, 2819, 2815, 2806, 2809, 2818, 2811, 2802, 2820, 2810, 2805, 2812, 8590, 7617, 2803, 2813, 2073, 2821, 2823, 2822, 2824, 2829, 2837, 7117, 2838, 2832, 2828, 2825, 2834, 2835, 2827, 2826, 2831, 2836, 2833, 7618, 2839, 2830, 2841, 2845, 2844, 2840, 2843, 2842, 2867, 2855, 2847, 2860, 2854, 2853, 2868, 2857, 2869, 2856, 2846, 2862, 2852, 2866, 979, 988, 2858, 2859, 2848, 2850, 2865, 2851, 8591, 1261, 2861, 2863, 2849, 7379, 2864, 2871, 8592, 2874, 2875, 2873, 2870, 2878, 2876, 2877, 2872, 2882, 2879, 2880, 2885, 2888, 2883, 2881, 2887, 2886, 9096, 2884, 8593, 2889, 7619, 6980, 117, 7620, 491, 2890, 8594, 8595, 2899, 2892, 8596, 2897, 1517, 2903, 2894, 2891, 2900, 2626, 2908, 2906, 2896, 2905, 2898, 2902, 8597, 1733, 2904, 2907, 2893, 2901, 2895 ]
0.864968
[ "Nan Wu", "Yuan Xie" ]
A Survey of Machine Learning for Computer Architecture and Systems
2021
2021-02-16T04:09:57Z
cs.LG
It has been a long time that computer architecture and systems are optimized for efficient execution of machine learning (ML) models. Now, it is time to reconsider the relationship between ML and systems, and let ML transform the way that computer architecture and systems are designed. This embraces a twofold meaning: improvement of designers' productivity, and completion of the virtuous cycle. In this paper, we present a comprehensive review of the work that applies ML for computer architecture and system design. First, we perform a high-level taxonomy by considering the typical role that ML techniques take in architecture/system design, i.e., either for fast predictive modeling or as the design methodology. Then, we summarize the common problems in computer architecture/system design that can be solved by ML techniques, and the typical ML techniques employed to resolve each of them. In addition to emphasis on computer architecture in a narrow sense, we adopt the concept that data centers can be recognized as warehouse-scale computers; sketchy discussions are provided in adjacent computer systems, such as code generation and compiler; we also give attention to how ML techniques can aid and transform design automation. We further provide a future vision of opportunities and potential directions, and envision that applying ML for computer architecture and systems would thrive in the community.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "5631fef3-6389-407d-9101-745f45240141", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ] ], "subsections": [ "5359b569-700e-4932-aef8-e103109cdfb1", "06a8a978-bb13-405e-8614-2ae9a01d5c56", "15e5a658-0b25-4890-8335-c1ec062ad3a2", "87e1e057-9e86-4a5c-afa3-431c75b16a64", "a80b2264-c6ef-4ec9-ae9a-838d036e281c", "79dc37a5-e0c0-498b-b911-e6bc545700a1" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "Machine learning (ML) has been doing wonders in many fields.\nAs people are seeking better artificial intelligence (AI), there is a trend towards larger, more expressive, and more complex models.\nAccording to the data reported by OpenAI , from 1959 to 2012, the amount of compute used in the largest AI training runs doubles every two years; since 2012, deep learning starts taking off, and the required amount of compute has been increasing exponentially with a 3.4-month doubling period. 
By comparison, Moore’s law , the principle that has powered the integrated-circuit revolution since 1960s, doubles the transistor density every 18 months.\nWhile Moore's law is approaching its end , more pressure is put on innovations of computer architecture and systems, so as to keep up with the compute demand of AI applications.\nConventionally, computer architecture/system designs are made by human experts based on intuitions and heuristics, which requires expertise in both ML and architecture/system.\nMeanwhile, these heuristic-based designs can not guarantee scalability and optimality, especially in the case of increasingly complicated systems.\nAs such, it seems natural to move towards more automated and powerful methodologies for computer architecture and system design, and the relationship between ML and system design is being reconsidered.\nOver the past decade, architecture and systems are optimized to accelerate the execution and improve the performance of ML models.\nRecently, there have been signs of emergence of applying ML for computer architecture and systems, which embraces a twofold meaning: \\textcircled{\\small{1}} the reduction of burdens on human experts designing systems manually, so as to improve designers' productivity, and \\textcircled{\\small{2}} the close of the positive feedback loop, i.e., architecture/systems for ML and simultaneously ML for architecture/systems, formulating a virtuous cycle to encourage improvements on both sides.\n\\begin{figure}[tbp]\n \\centering\n \\vspace{-12pt}\n\t\\includegraphics[width=0.95\\linewidth]{fig0.png}\n\t\\vspace{-8pt}\n \\caption{A comprehensive overview of applying ML for computer architecture and systems. 
Existing work roughly falls into two categories: ML for fast system modeling, and ML as design methodology.}\n \\label{fig:frame}\n \\vspace{-20pt}\n\\end{figure}\nExisting work related to applying ML for computer architecture and system design falls into two categories.\n\\textcircled{\\small{1}} ML techniques are employed for \\textbf{fast and accurate system modeling}, which involves performance metrics or some criteria of interest (e.g. power consumption, latency, throughput, etc.).\nDuring the process of designing systems, it is necessary to make fast and accurate predictions of system behaviors.\nTraditionally, system modeling is achieved through the forms of cycle-accurate or functional virtual platforms, and instruction set simulators (e.g. gem5 ).\nEven though these methods provide accurate estimations, they bring expensive computation costs associated with performance modeling, which limits the scalability to large-scale and complex systems; meanwhile, the long simulation time often dominates design iteration, making it impossible to fully explore the design space.\nBy contrast, ML-based modeling and performance prediction are capable to balance simulation cost and prediction accuracy.\n\\textcircled{\\small{2}} ML techniques are employed as \\textbf{a design methodology to directly enhance architecture/system design}.\nML techniques are skilled at extracting features that might be implicit to human experts, making decisions without explicit programming, and improving themselves automatically with accumulated experience.\nTherefore, applying ML techniques as design tools can explore design space proactively and intelligently, and manage resource through better understanding of the complicated and non-linear interactions between workloads and systems, making it possible to deliver truly optimal solutions.\nIn this paper, we present a comprehensive overview of applying ML for computer architecture and systems. 
\nAs depicted in Figure \\ref{fig:frame},\nwe first perform a high-level taxonomy by considering the typical role that ML techniques take in architecture/system design, i.e., either for fast predictive modeling or as the design methodology;\nthen, we summarize the common problems in architecture/system design that can be solved by ML techniques, and the typical ML techniques employed to resolve each of them.\nIn addition to emphasis on computer architecture in a narrow sense, we adopt the concept that data centers can be recognized as warehouse-scale computers , and review studies associated with data center management; we provide sketchy discussions on adjacent computer systems, such as code generation and compiler; we also give attention to how ML techniques can aid and transform design automation that involves both analog and digital circuits.\nAt the end of the paper, we discuss challenges and future prospects of applying ML for architecture/system design, aiming to convey insights of design considerations.", "id": "5359b569-700e-4932-aef8-e103109cdfb1", "level": "section", "origin_cites_number": 5, "parent_id": "5631fef3-6389-407d-9101-745f45240141", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.210526315789473, "cites": [ 1390, 553, 2219, 166 ], "content": "\\label{sec:ml techniques}\nThere are three general frameworks in ML: supervised learning, unsupervised learning and reinforcement learning.\nThese frameworks mainly differentiate on what data are sampled and how these sample data are used to build learning models.\nTable \\ref{table:ml_techniques} summarizes the commonly used ML techniques for computer architecture and system designs.\nSometimes, multiple learning models may work well for one given problem, and the appropriate selection can be made based on available hardware resource and data, 
implementation overheads, performance targets, etc.\n\\begin{table}[tbp]\n\\vspace{-12pt}\n\\caption{Machine learning techniques.}\n\\vspace{-12pt}\n\\label{table:ml_techniques}\n\\centering\n \\tiny\n \\renewcommand{\\arraystretch}{1}\n \\setlength{\\tabcolsep}{6pt}\n\\begin{tabular}{c|c|c|c}\n\\toprule\n\\textbf{Realm of ML} & \\textbf{Category} & \\makecell[c]{\\textbf{Classical ML}} & \\textbf{Deep Learning Counterpart } \\\\ \\midrule\n\\multirow{10}{*}{\\makecell*[c]{\\textbf{Supervised} \\\\ \\textbf{Learning}}} & \\multirow{1}{*}{Classification} &\n Logistic regression & \n \\multirow{8}{*}{\\makecell{CNN, RNN,\\\\ GNN , etc.}} \\\\ \\cline{2-3}\n & \\multirow{6}{*}{Working for both} & Support vector machines/regression & \\\\ \\cline{3-3}\n && K-nearest neighbors & \\\\ \\cline{3-3}\n && Decision tree, e.g., CART , MARS & \\\\ \\cline{3-3}\n && ANN & \\\\ \\cline{3-3}\n && Bayesian analysis & \\\\ \\cline{3-3}\n && \\makecell{Ensemble learning , e.g., gradient boosting, random forest} & \\\\ \\cline{2-3}\n & \\multirow{2}{*}{Regression} & \\makecell{Linear regression with variants , e.g., lasso (L1 regularization),\\\\ ridge (L2 regularization), elastic-net (hybrid L1/L2 regularization)} & \\\\ \\cline{3-3}\n && Non-linear regression & \\\\\n \\hline\n\\multirow{2}{*}{\\makecell[c]{\\textbf{Unsupervised} \\\\ \\textbf{Learning}}} \n& Clustering & K-means clustering & \\multirow{2}{*}{\\makecell[c]{Autoencoder, \\\\ GAN, etc.}} \\\\ \\cline{2-3}\n & Dimension reduction & Principal component analysis (PCA) &\\\\ \\hline\n\\multirow{3}{*}{\\makecell*[c]{\\textbf{Reinforcement} \\\\ \\textbf{Learning}}} &\n Value-based & Q-learning & DQN \\\\ \n \\cline{2-4} \n & \\multirow{2}{*}{Policy-based} & Actor-critic \n & A3C , DDPG \\\\ \\cline{3-4}\n & & Policy gradient, e.g., REINFORCE \n & PPO \\\\\n\\bottomrule\n\\end{tabular}\n\\vspace{-15pt}\n\\end{table}\n\\vspace{-5pt}", "id": "06a8a978-bb13-405e-8614-2ae9a01d5c56", "level": "section", "origin_cites_number": 
19, "parent_id": "5631fef3-6389-407d-9101-745f45240141", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "Different ML Techniques" ] ], "subsections": [ "f49480df-db26-480b-8149-671064eda13f", "c3e4e81a-5169-40c5-91a3-4a2389c943b1", "218b240c-27bf-4571-953a-c14101e51384" ], "title": "Different ML Techniques" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 166 ], "content": "Supervised learning is the process of learning a set of rules able to map an input to an output based on labeled datasets. These learned rules can be generalized to make predictions for unseen inputs.\nWe briefly introduce several prevalent techniques in supervised learning, as shown in Figure \\ref{fig:sl}.\n\\begin{itemize}[leftmargin=*]\n \\item Regression is a process for estimating the relationships between a dependent variable and one or more independent variables. The most common form is linear regression , and some other forms include different types of non-linear regression . Regression techniques are primarily used for two purposes, prediction/forecasting, and inference of causal relationships.\n \\item Support vector machines (SVMs) try to find the best hyperplanes to separate data classes by maximizing margins. One variant is support vector regression (SVR), which is able to conduct regression tasks. Predictions or classifications of new inputs can be decided by their relative positions to these hyperplanes.\n \\item Decision tree is one representative of logical learning methods, which uses tree structures to build regression or classification models.\n The final result is a tree with decision nodes and leaf nodes. Each decision node represents a feature and branches of this node represent possible values of the corresponding feature. 
Starting from the root node, input instances are classified by sequentially passing through nodes and branches, until they reach leaf nodes that represent either classification results or numerical values.\n \\item Artificial neural networks (ANNs) are capable of approximating a broad family of functions: a single-layer perceptron is usually used for linear regression; complex DNNs consisting of multiple layers are able to approximate non-linear functions, such as the multi-layer perceptron (MLP); variants of DNNs that achieve excellent performance in specific fields benefit from the exploitation of certain computation operations, e.g., convolutional neural networks (CNNs) with convolution operations leveraging spatial features, and recurrent neural networks (RNNs) with recurrent connections enabling learning from sequences and histories.\n \\item Ensemble learning employs multiple models that are strategically designed to solve a particular problem, and the primary goal is to achieve better predictive performance than could be obtained from any of the constituent models alone. 
Several common types of ensembles include random forest and gradient boosting.\n\\end{itemize}\n\\begin{figure}[tbp]\n \\centering\n \\vspace{-15pt}\n\t\\includegraphics[width=0.97\\linewidth]{fig1.png}\n\t\\vspace{-10pt}\n \\caption{Examples of supervised learning: (a) regression, (b) SVM, (c) decision tree, (d) MLP, and (e) ensemble learning.}\n \\vspace{-20pt}\n \\label{fig:sl}\n\\end{figure}\nDifferent learning models have different preferences for input features: \nSVMs and ANNs generally perform much better with multi-dimensional and continuous features, while logic-based systems tend to perform better when dealing with discrete/categorical features.\nIn system design, supervised learning is commonly used for performance modeling, configuration predictions, or predicting higher-level features/behaviors from lower-level features.\nOne thing worth noting is that supervised learning techniques need well-labeled training data prior to the training phase, which usually requires tremendous human expertise and engineering.\n\\vspace{-5pt}", "id": "f49480df-db26-480b-8149-671064eda13f", "level": "subsection", "origin_cites_number": 6, "parent_id": "06a8a978-bb13-405e-8614-2ae9a01d5c56", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "Different ML Techniques" ], [ "subsection", "Supervised Learning" ] ], "subsections": [], "title": "Supervised Learning" }, { "cite_extract_rate": 0, "cites": [], "content": "Unsupervised learning is the process of finding previously unknown patterns based on unlabeled datasets.\nTwo prevailing methods are clustering analysis and principal component analysis (PCA), as depicted in Figure \\ref{fig:usl}.\n\\begin{itemize}[leftmargin=*]\n \\item Clustering is a process of grouping data objects into disjoint clusters based on a measure of similarity, such that data objects in the same cluster are similar while data objects in different clusters share low similarities. 
The goal of clustering is to classify raw data reasonably and to find possibly existing hidden structures or patterns in datasets. One of the most popular and simple clustering algorithms is k-means clustering.\n \\item PCA is essentially a coordinate transformation leveraging information from data statistics. It aims to reduce the dimensionality of the high-dimensional variable space by representing it with a few orthogonal (linearly uncorrelated) variables that capture most of its variability.\n\\end{itemize}\nSince there is no label in unsupervised learning, it is difficult to simultaneously measure the performance of learning models and decide when to stop the learning process.\nOne potential workaround is \\textit{semi-supervised learning} , which uses a small amount of labeled data together with a large amount of unlabeled data. This approach stands between unsupervised and supervised learning, requiring less human effort and producing higher accuracy.\nThe unlabeled data are used to either finetune or re-prioritize hypotheses obtained from labeled data alone.\n\\begin{figure}[t]\n\\vspace{-15pt}\n\\flushleft\n\\begin{minipage}{.63\\textwidth}\n \\centering\n \\includegraphics[width=0.82\\linewidth]{fig2.png}\n \\vspace{-10pt}\n \\caption{Examples of unsupervised learning: (a) clustering, and (b) PCA.}\n \\label{fig:usl}\n\\end{minipage}\n\\hspace{.03\\textwidth}\n\\begin{minipage}{.33\\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{fig3.png}\n \\caption{A typical framing of RL.}\n \\label{fig:rl}\n\\end{minipage}\n\\vspace{-10pt}\n\\end{figure}", "id": "c3e4e81a-5169-40c5-91a3-4a2389c943b1", "level": "subsection", "origin_cites_number": 3, "parent_id": "06a8a978-bb13-405e-8614-2ae9a01d5c56", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "Different ML Techniques" ], [ "subsection", "Unsupervised Learning" ] ], "subsections": [], "title": "Unsupervised Learning" }, { 
"cite_extract_rate": 0, "cites": [], "content": "In standard reinforcement learning (RL), an agent interacts with an environment $\\mathcal{E}$ over a number of discrete time steps, as shown in Figure \\ref{fig:rl}.\nAt each time step $t$, the agent receives a state $s_t$ from the \\textit{state space} $\\mathcal{S}$, and selects an action $a_t$ from the \\textit{action space} $\\mathcal{A}$ according to its policy $\\pi$, where $\\pi$ is a mapping from states $s_t$ to actions $a_t$. \nIn return, the agent receives the next state $s_{t+1}$ and a scalar reward $r_t: \\mathcal{S}\\times\\mathcal{A}\\to\\mathbb{R}$. \nThis process continues until the agent reaches a terminal state, after which the process restarts.\nThe return $R_t={\\sum\\limits_{k=0}^\\infty \\gamma^kr_{t+k}}$ is the total accumulated reward from time step $t$ with a \\textit{discount factor} $\\gamma\\in(0,1]$. \nThe goal of the agent is to maximize the expected return for each state $s$.\nThe state-action value $Q_{\\pi}(s,a)=\\mathbb{E_\\pi}\\lbrack R_t|s_t=s, a_t=a\\rbrack$ is the expected return of selecting action $a$ at state $s$ with policy $\\pi$.\nSimilarly, the state value $V_{\\pi}(s)=\\mathbb{E_\\pi}\\lbrack R_t|s_t=s\\rbrack$ is the expected return starting from state $s$ by following policy $\\pi$. There are two general types of methods in RL: value-based and policy-based.\n\\begin{itemize}[leftmargin=*]\n \\item In value-based RL, the state-action value function $Q_{\\pi}(s,a)$ is approximated by either tabular approaches or function approximations. At each state $s_t$, the agent always selects the optimal action $a^*_t$ that could bring the maximal state-action value $Q_{\\pi}(s_t,a^*_t)$. One well-known example of value-based methods is Q-learning.\n \\item In policy-based RL, the policy $\\pi(a|s;\\theta)$ is directly parameterized, and the parameters $\\theta$ are updated by performing gradient ascent on $\\mathbb{E}[R_t]$. One example is the REINFORCE algorithm. 
\n\\end{itemize}\nRL is modeled as a Markov decision process, and is thus well suited to control problems and sequential decision-making.\nWith these characteristics, RL is able to explore design spaces proactively and intelligently, and learn how to achieve resource management or task scheduling in system designs through interactions with environments. The optimal behaviors can be found by embedding optimization goals into reward functions.", "id": "218b240c-27bf-4571-953a-c14101e51384", "level": "subsection", "origin_cites_number": 1, "parent_id": "06a8a978-bb13-405e-8614-2ae9a01d5c56", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "Different ML Techniques" ], [ "subsection", "Reinforcement Learning" ] ], "subsections": [], "title": "Reinforcement Learning" }, { "cite_extract_rate": 0.04918032786885201, "cites": [ 4378, 5390, 5389 ], "content": "\\label{sec:system modeling}\nThis section reviews studies that employ ML techniques for fast and accurate system modeling, which involves predictions of performance metrics or other criteria of interest.\nAlthough cycle-accurate simulators, which are commonly used for system performance prediction, can provide accurate estimations, they usually run multiple orders of magnitude slower than native executions.\nBy contrast, ML-based techniques can balance simulation costs and prediction accuracy, showing great potential in exploring huge configuration spaces and learning non-linear impacts of configurations.\nMost existing work applies supervised learning for either pure system modeling or efficient design space exploration (DSE) enabled by fast predictions.\nTable \\ref{table:model_sys} and Table \\ref{table:model_eda} summarize the studies for predictive modeling in computer architecture/systems and design automation, respectively, in terms of task domains, prediction targets, adopted ML techniques, and corresponding 
inputs.\n\\begin{table}[tbp]\n\\vspace{-10pt}\n\\caption{Summary of applying ML techniques for fast modeling in computer architecture and systems.}\n\\label{table:model_sys}\n\\centering\n \\tiny\n \\renewcommand{\\arraystretch}{0.9}\n \\setlength{\\tabcolsep}{1pt}\n\\vspace{-10pt}\n\\begin{tabular}{c|c|c|c}\n\\toprule\n\\textbf{Domain} & \\textbf{Prediction of} & \\textbf{Technique} & \\textbf{Input} \\\\ \\midrule\n\\multirow{4}{*}{\\makecell[c]{\\textbf{Memory} \\\\ \\textbf{System} \\\\ $( \\S$ \\textbf{\\ref{sec:model_memory}}$)$}} \n& Throughput/cache miss & ANN & Cache configurations \\\\ \\cline{2-4} \n& Throughput/lifetime/energy & \\makecell[c]{Gradient boosting/quadratic regression with lasso } & NVM configurations \\\\ \\cline{2-4} \n& Throughput & CNN & Memory controller placements \\\\ \\cline{2-4} \n& Disk block correlation & DNN & Data blocks in the context window \\\\ \\hline\n\\multirow{4}{*}{\\makecell[c]{\\textbf{NoC} \\\\ $( \\S$ \\textbf{\\ref{sec:model_noc}}$)$}} \n& Latency & SVR & Queue information \\\\ \\cline{2-4} \n& Hotspots & ANN & Buffer utilization \\\\ \\cline{2-4} \n& Buffer/link utilization & \\makecell[c]{Regression with ridge regularization , decision tree } & NoC configurations \\\\ \\cline{2-4} \n& Probability of errors & Decision tree & \\makecell[c]{Link utilization and transistor wearing-out} \\\\ \\hline\n\\multirow{6}{*}{\\makecell[c]{\\textbf{GPU} \\\\ $( \\S$ \\textbf{\\ref{sec:model_gpu}}$)$}} & \\makecell[c]{Speedup or execution time \\\\ by cross-platform inputs} & \\makecell[c]{Nearest neighbor/SVM , \\\\ ensemble of regression-based learners , \\\\ random forest } & \\makecell[c]{Static analysis and/or \\\\ dynamic profiling of \\\\ source CPU code} \\\\ \\cline{2-4} \n & Execution time & \\makecell[c]{Stepwise regression , \\\\ ensemble of linear regression and random forest } & \\multirow{3}{*}{\\makecell[c]{GPU configurations \\\\ and performance counters}} \\\\ \\cline{2-3}\n & Throughput/power & Ensemble of 
NNs & \\\\ \\cline{2-3}\n & Scaling behavior of GPGPU & ANN and K-means clustering & \\\\ \\cline{2-4} \n & Kernel affinity and execution time & Logistic regression and linear regression & Kernel characteristics \\\\ \\cline{2-4}\n & Traffic patterns in GPGPU & CNN & Grayscale heat maps \\\\ \\hline\n\\multirow{2}{*}{\\makecell[c]{\\textbf{Single-Core} \\\\ \\textbf{CPU} $( \\S$ \\textbf{\\ref{sec:model_single}}$)$}} \n& Throughput & \\makecell[c]{Linear regression , non-linear regression } & \\multirow{2}{*}{\\makecell[c]{Micro-architectural parameters \\\\ and performance counters}} \\\\ \\cline{2-3}\n & Program execution time & Linear regression & \\\\ \\hline\n\\multirow{8}{*}{\\makecell[c]{\\textbf{General}\\\\ \\textbf{Modeling} \\\\ $( \\S$ \\textbf{\\ref{sec:model_general}}$)$}} & \\multirow{6}{*}{Throughput/latency/power} & ANN & \\multirow{7}{*}{\\makecell[c]{Micro-architectural parameters \\\\ and performance counters}} \\\\ \\cline{3-3}\n & & \\makecell[c]{Non-linear regression } & \\\\ \\cline{3-3}\n & & Linear regression & \\\\ \\cline{3-3}\n & & Hierarchical Bayesian model & \\\\ \\cline{3-3}\n & & LSTM & \\\\ \\cline{3-3}\n & & Generative model & \\\\ \\cline{2-3}\n & \\makecell[c]{Slowdown caused by \\\\ application interference} & \\makecell[c]{Linear regression with elastic-net \\\\ regularization } & \\\\ \\cline{2-4} \n & \\makecell[c]{Speedup of multi-thread applications} & Gaussian process regression & Profiling of single-thread execution \\\\ \\hline\n\\multirow{9}{*}{\\makecell[c]{\\textbf{Data} \\\\ \\textbf{Center} \\\\ $( \\S$ \\textbf{\\ref{sec:model_dc}}$)$}} & Job completion time & SVR & \\makecell[c]{Application characteristics \\\\ and cluster configurations} \\\\ \\cline{2-4} \n & Resource demand & \\makecell[c]{Statistical learning , linear regression/MLP } & \\multirow{3}{*}{Workload characterization} \\\\ \\cline{2-3}\n & Incoming workload & ARMA , ARIMA & \\\\ \\cline{2-3}\n & Workload pattern & Hidden Markov model & \\\\ 
\\cline{2-4} \n & Power usage effectiveness & MLP & Data center configurations \\\\ \\cline{2-4} \n & Disk Failure & \\makecell[c]{Bayesian methods , clustering , \\\\ SVM/MLP , random forest } & \\multirow{3}{*}{\\makecell[c]{SMART (Self-Monitoring, Analysis \\\\ and Reporting Technology) attributes \\\\ of data centers}} \\\\ \\cline{2-3}\n & Health assessment of drives & \\makecell[c]{CART , gradient boosted regressions tree , RNN } & \\\\ \\cline{2-3}\n & \\multirow{2}{*}{Partial drive failure} & \\makecell[c]{CART/random forest/SVM/ANN/logistic regression } & \\\\ \\cline{3-4} \n & & Gradient boosted regression trees & \\makecell[c]{SMART attributes and system-level signals} \\\\ \\bottomrule\n\\end{tabular}\n\\vspace{-15pt}\n\\end{table}", "id": "15e5a658-0b25-4890-8335-c1ec062ad3a2", "level": "section", "origin_cites_number": 61, "parent_id": "5631fef3-6389-407d-9101-745f45240141", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML for Fast System Modeling" ] ], "subsections": [ "cb797a5f-f6ce-4412-9791-ccdeb4f88886", "9ac2d4c9-6bb8-44b3-aa23-38c568e05513", "419e1fc4-5eb3-40c4-b251-cc170597bc2e" ], "title": "ML for Fast System Modeling" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:model_sub_sys}", "id": "cb797a5f-f6ce-4412-9791-ccdeb4f88886", "level": "subsection", "origin_cites_number": 0, "parent_id": "15e5a658-0b25-4890-8335-c1ec062ad3a2", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML for Fast System Modeling" ], [ "subsection", "Sub-system Modeling and Performance Prediction" ] ], "subsections": [ "1e898be7-db20-4db0-b960-280b242ecf62", "5ac93425-5d57-405e-be1e-d3cab4e56f18" ], "title": "Sub-system Modeling and Performance Prediction" }, { "cite_extract_rate": 0.2, "cites": [ 5391 ], "content": "\\label{sec:model_memory}\nIn memory systems, ML-based performance models are exploited to help 
explore trade-offs among different objectives.\nTo explore non-volatile memory (NVM) based cache hierarchies, Dong \\textit{et al.} develop an ANN model to predict higher-level features (e.g., cache read/write misses and instructions-per-cycle (IPC)) from lower-level features (e.g., cache associativity, capacity, and latency). \nTo adaptively select architectural techniques in NVMs for different applications, Memory Cocktail Therapy estimates lifetime, IPC, and energy consumption through lightweight online predictors by gradient boosting and quadratic regression with lasso. \nTo optimize memory controller placements in throughput processors, Lin \\textit{et al.} build a CNN model that takes memory controller placements as inputs to predict throughput, which accelerates the optimization process by two orders of magnitude.\nSome studies concentrate on learning efficient representations of memory access patterns.\nBlock2Vec tries to mine data block correlations by training a DNN to learn the best vector representation of each block and capturing block similarities via vector distances, which enables further optimization for caching and prefetching.\nShi \\textit{et al.} use a graph neural network (GNN) to learn fused representations of static code and its dynamic execution. 
This unified representation is capable of modeling both data flows (e.g., prefetching) and control flows (e.g., branch prediction).", "id": "1e898be7-db20-4db0-b960-280b242ecf62", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "cb797a5f-f6ce-4412-9791-ccdeb4f88886", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML for Fast System Modeling" ], [ "subsection", "Sub-system Modeling and Performance Prediction" ], [ "subsubsection", "Memory System" ] ], "subsections": [], "title": "Memory System" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:model_noc}\nIn NoCs, several performance metrics of interest are latency, energy consumption, and reliability.\n\\textcircled{\\small{1}}\nRegarding latency predictions, Qian \\textit{et al.} use an SVR model to predict the traffic flow latency and the average channel waiting time in mesh-based NoCs, which relaxes some assumptions in classical queuing theory. \nRather than explicitly predicting latency, a lightweight hardware-based ANN predicts the existence of traffic hotspots, i.e., intense network congestion that significantly degrades the effective throughput and implicitly indicates the average communication latency in NoCs.\nThe input features are buffer utilization rates from neighboring NoC routers, and the trained predictor is combined with a proactive hotspot-preventive routing algorithm to avert hotspot formation, attaining significant improvements on synthetic workloads but only modest gains on real-world benchmarks.\n\\textcircled{\\small{2}}\nRegarding energy consumption estimation, the learned predictors are often leveraged for saving dynamic and/or static energy in NoCs.\nDiTomaso \\textit{et al.} use per-router decision trees to predict link utilization and traffic direction, which are combined with sleepy link storage units to power-gate links/routers and to change link directions. 
\nClark \\textit{et al.} use ridge regression models to predict buffer utilization, changes in buffer utilization, or a combined metric of energy and throughput,\nbased on which a router can select a proper voltage/frequency.\nIn photonic NoCs, the ridge regression model can also be applied to predict the number of packets to be injected into each router in the following time window, based on which the number of wavelengths is scaled properly to reduce static energy consumed by photonic links.\n\\textcircled{\\small{3}}\nRegarding the reliability of NoCs, a per-link decision tree trained offline can predict the probability of timing faults on links during runtime, based on which a proactive fault-tolerant technique is developed to mitigate errors by using the strengthened cyclic redundancy check with error-correction code and relaxed transmission.", "id": "5ac93425-5d57-405e-be1e-d3cab4e56f18", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "cb797a5f-f6ce-4412-9791-ccdeb4f88886", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML for Fast System Modeling" ], [ "subsection", "Sub-system Modeling and Performance Prediction" ], [ "subsubsection", "Network-on-Chip (NoC)" ] ], "subsections": [], "title": "Network-on-Chip (NoC)" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:model_sys}\nAccurate and fast performance estimation is a necessity for system optimization and design space exploration.\nWith the increasing complexity of systems and variety of workloads, ML-based techniques can provide highly accurate performance estimations with reasonable simulation costs, surpassing the capability of commonly-used cycle-accurate simulators that incur high computational costs and long simulation times.", "id": "9ac2d4c9-6bb8-44b3-aa23-38c568e05513", "level": "subsection", "origin_cites_number": 0, "parent_id": "15e5a658-0b25-4890-8335-c1ec062ad3a2", "prefix_titles": [ [ 
"title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML for Fast System Modeling" ], [ "subsection", "System Modeling and Performance Prediction" ] ], "subsections": [ "a2990a76-751b-4214-b5a9-77c66ab0dd6a", "a74c5980-8f81-4447-be80-f6c683740060", "3890381b-cc70-4717-9bbf-fa3c92dca1c8", "37bd7027-5f70-4a17-9138-7b53f9c12df8" ], "title": "System Modeling and Performance Prediction" }, { "cite_extract_rate": 0.11111111111111101, "cites": [ 5390 ], "content": "\\label{sec:model_gpu}\nThere are two types of predictions for GPU modeling: cross-platform predictions and GPU-specific predictions.\nCross-platform predictions are used to decide in advance whether to offload an application from a CPU to a GPU, since not every application benefits from GPU execution and the porting process requires considerable additional effort;\nGPU-specific predictions are used to estimate metrics of interest and to assist GPU design space exploration, which helps handle design space irregularities and complicated interactions among configurations.\nCross-platform predictions can be formulated as a binary classification problem that identifies whether the potential GPU speedup of an application would be greater than a given threshold.\nThis task can be solved by the nearest neighbor and SVMs using dynamic instruction profiles, or a random forest comprising one thousand decision trees using static analysis of source CPU code (i.e., memory coalescing, branch divergence, kernel size, available parallelism, and instruction intensities).\nWith both dynamic and static program properties from single-thread CPU code, an ensemble of one hundred regression-based learners can predict the GPU execution time.\nIn terms of GPU-specific predictions that take GPU configurations and performance counters as input features, the execution time can be predicted by stepwise linear regression, which recognizes the most important input features among many GPU 
parameters and thus achieves high accuracy even with sparse samples; the power/throughput can be modeled by an ensemble of NN predictors.\nProvided with profiling results from earlier-generation GPUs, an ensemble of linear and non-linear regression models is capable of predicting cross-generation GPU execution time for later/future-generation GPUs, achieving a speedup of more than 10,000 times compared to cycle-accurate GPU simulators.\nFocusing on processing-in-memory (PIM) assisted GPU architectures, Pattnaik \\textit{et al.} classify GPU cores into two types: powerful GPU cores located far from memory, and simple auxiliary GPU cores located close to memory. They develop a logistic regression model that takes kernel characteristics as input features to predict the architecture affinity of kernels, aiming to accurately identify which kernels would benefit from PIM and offload them accordingly to auxiliary GPU cores. They also build a linear regression model to predict the execution time of each kernel, so that a concurrent kernel management mechanism can be developed based on these two models and kernel dependency information. \nFocusing on general-purpose GPUs (GPGPUs), Wu \\textit{et al.} model kernel scaling behaviors with respect to the number of compute units, engine frequency, and memory frequency. 
During training, kernels with similar performance scaling behaviors are grouped by K-means clustering, and a new kernel is mapped by an ANN-based classifier to the cluster that best describes its scaling behavior.\nLi \\textit{et al.} reassess prevailing assumptions of GPGPU traffic patterns, and combine a CNN with a t-distributed stochastic neighbor embedding to classify different traffic patterns.", "id": "a2990a76-751b-4214-b5a9-77c66ab0dd6a", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "9ac2d4c9-6bb8-44b3-aa23-38c568e05513", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML for Fast System Modeling" ], [ "subsection", "System Modeling and Performance Prediction" ], [ "subsubsection", "Graphics Processing Unit (GPU)" ] ], "subsections": [], "title": "Graphics Processing Unit (GPU)" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:model_single}\nIn predictive performance modeling of single-core processors, early-stage work mostly targets superscalar processors. \nTo predict the application-specific cycles-per-instruction (CPI) of superscalar processors, Joseph \\textit{et al.} introduce an iterative procedure to build linear regression models using 26 key micro-architectural parameters. Later they construct predictive models using non-linear regression techniques (i.e., radial basis function networks generated from regression trees) with 9 key micro-architectural parameters.\nIn parallel with Joseph's work, Lee and Brooks use regression modeling with cubic splines to predict application-specific performance (billions of instructions per second) and power.\nLater work focuses on performance modeling for existing hardware (e.g., Intel, AMD, and ARM processors) by using micro-architectural parameters and performance counters.\nEyerman \\textit{et al.} construct a mechanistic-empirical model for CPI predictions of three Intel processors. 
The initially parameterized performance model is inspired by mechanistic modeling, where the unknown parameters inside the model are derived through regression, benefiting from both mechanistic modeling (i.e., interpretability) and empirical modeling (i.e., ease of implementation).\nZheng \\textit{et al.} explore two approaches to cross-platform predictions of program execution time, where profiling results on Intel Core i7 and AMD Phenom processors are used to estimate the execution time on a target ARM processor. The first approach relaxes the assumption of global linearity to local linearity in the feature space and applies constrained locally sparse linear regression; the other approach applies lasso linear regression with phase-level performance features.", "id": "a74c5980-8f81-4447-be80-f6c683740060", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "9ac2d4c9-6bb8-44b3-aa23-38c568e05513", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML for Fast System Modeling" ], [ "subsection", "System Modeling and Performance Prediction" ], [ "subsubsection", "Single-Core Processor" ] ], "subsections": [], "title": "Single-Core Processor" }, { "cite_extract_rate": 0.13636363636363602, "cites": [ 4378, 5392, 5389 ], "content": "\\label{sec:model_general}\nRegression techniques are the mainstream approach for predicting performance metrics from micro-architectural parameters or other features, owing to their ability to make high-accuracy estimations at reasonable training cost.\nFor conventional regression-based models, ANNs and non-linear regression with different designs are common practice for predicting throughput/latency and power/energy.\nConsequently, there are comparisons among different techniques.\nLee \\textit{et al.} compare piecewise polynomial regression with ANNs, emphasizing that piecewise polynomial regression offers better explainability while ANNs show better 
generalization ability.\nOzisikyilmaz \\textit{et al.} contrast several linear regression models and different ANNs, indicating that the pruned ANNs achieve the best accuracy while requiring longer training time.\nAgarwal \\textit{et al.} estimate the parallel execution speedup of multi-threaded applications on a target hardware platform, and report that Gaussian process regression performs the best among several explored methods in this case.\nMore recent work tends to take advantage of data-driven approaches.\nIthemal leverages a hierarchical multi-scale RNN with long short-term memory (LSTM) to predict the throughput of basic blocks (i.e., sequences of instructions with no branches or jumps), and evaluations demonstrate that Ithemal is more accurate than, and as fast as, analytical throughput estimators.\nBy employing a variant of Ithemal as a differentiable surrogate to approximate CPU simulators, DiffTune is able to apply gradient-based optimization techniques to learn the parameters of x86 basic block CPU simulators such that the simulator's error is minimized. 
The learned parameters are finally plugged back into the original simulator.\nDing \\textit{et al.} provide some insights into learning-based modeling methods: improvements in prediction accuracy may yield diminishing returns; the consideration of domain knowledge will be helpful for system optimizations, even if the overall accuracy may not be improved.\nThus, they propose a generative model to handle data scarcity by generating more training data, and apply multi-phase sampling to improve the prediction accuracy of optimal configuration points.\nML-based predictive performance modeling enables efficient resource management and rapid design space exploration to improve throughput.\nEquipped with ANNs for IPC predictions, strategies for resource allocation and task scheduling can always select decisions that would bring the best predicted IPC.\nESP constructs a regression model with elastic-net regularization to predict application interference (i.e., slowdown), which is integrated with schedulers to increase throughput.\nMetaTune is a meta-learning based cost model for convolution operations, and when combined with search algorithms, it enables efficient auto-tuning of parameters during compilation.\nIn consideration of rapid design space exploration of the uncore (i.e., memory hierarchies and NoCs), Sangaiah \\textit{et al.} use a regression-based model with restricted cubic splines to estimate CPI, reducing the exploration time by up to four orders of magnitude.\nML-based predictive performance modeling benefits adaptation between performance and power budgets.\nLeveraging off-line multivariate linear regression to predict IPC and/or power of different architecture configurations, Curtis-Maury \\textit{et al.} maximize the performance of OpenMP applications by dynamic concurrency throttling and dynamic voltage and frequency scaling (DVFS); Bailey \\textit{et al.} apply hardware frequency-limiting techniques to select optimal hardware configurations under given 
power constraints.\nTo effectively apply DVFS towards various optimization goals, the designed strategy can adopt predictions of power consumption by a constrained-posynomial model or of job execution time by a linear regression model.\nTo conduct smart power management in a more general manner, LEO employs hierarchical Bayesian models to predict performance and power, and when integrated for runtime energy optimization, it is capable of identifying the performance-power Pareto frontier and selecting the configuration that satisfies performance constraints with minimized energy.\nCALOREE further breaks up the power management task into two abstractions: a learner for performance modeling and an adaptive controller leveraging predictions from the learner. These abstractions enable both the learner to use multiple ML techniques and the controller to maintain control-theoretic formal guarantees. Since no user-specified parameter except the goal is required, CALOREE is applicable even for non-experts.", "id": "3890381b-cc70-4717-9bbf-fa3c92dca1c8", "level": "subsubsection", "origin_cites_number": 22, "parent_id": "9ac2d4c9-6bb8-44b3-aa23-38c568e05513", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML for Fast System Modeling" ], [ "subsection", "System Modeling and Performance Prediction" ], [ "subsubsection", "General Modeling and Performance Prediction" ] ], "subsections": [], "title": "General Modeling and Performance Prediction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:model_dc}\nData centers have been in widespread use for both traditional enterprise applications and cloud services.\nMany studies employ ML techniques to predict workload/resource-related metrics, so as to enable elastic resource provisioning.\nCommon examples include, but are not limited to, using SVR to predict job completion time, leveraging the autoregressive moving average (ARMA) model or the autoregressive 
integrated moving average (ARIMA) model to forecast incoming workloads, exploiting the hidden Markov modeling to characterize variations in workload patterns , and estimating dynamic resource demand of workloads by light-weight statistical learning algorithms or MLP .\nJim Gao builds an MLP model to predict power usage effectiveness of data centers, which is extensively tested and validated at Google data centers.\nCortez \\textit{et al.} predict virtual machine (VM) behaviors (including VM lifetimes, maximum deployment sizes, and workload classes) for a broader set of purposes (e.g., health/resource management and power capping), where the evaluated ML models are random forests and extreme gradient boosting trees.\nIn addition to workload/resource-related metrics, the availability in data centers or cloud services is also a topic of concern, where one of the key tasks is to predict disk failure in advance.\nLeveraging SMART (Self-Monitoring, Analysis and Reporting Technology) attributes, the disk failure prediction model can be built via various ML techniques, such as different Bayesian methods , unsupervised clustering , SVM and MLP .\nThe adoption of classification and regression trees (CART) , RNNs , or gradient boosted regression trees makes it possible to assess health status of drives.\nWhile all the aforementioned methods rely on offline training, online random forests can evolve with forthcoming data on-the-fly by generating new trees and forget old information by discarding outdated trees, consequently avoiding the model aging problem in disk failure predictions.\nTo predict partial drive failures (i.e., disk error or sector error), Mahdisoltani \\textit{et al.} explore five ML techniques (CART, random forests, SVM, NN and logistic regression), among which random forests consistently outperform others.\nXu \\textit{et al.} incorporate SMART attributes and system-level signals to train a gradient boosted regression tree, which is an online prediction model 
that ranks disks according to the degree of error-proneness in the near future.", "id": "37bd7027-5f70-4a17-9138-7b53f9c12df8", "level": "subsubsection", "origin_cites_number": 17, "parent_id": "9ac2d4c9-6bb8-44b3-aa23-38c568e05513", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML for Fast System Modeling" ], [ "subsection", "System Modeling and Performance Prediction" ], [ "subsubsection", "Data Center Performance Modeling and Prediction" ] ], "subsections": [], "title": "Data Center Performance Modeling and Prediction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:model_eda}", "id": "419e1fc4-5eb3-40c4-b251-cc170597bc2e", "level": "subsection", "origin_cites_number": 0, "parent_id": "15e5a658-0b25-4890-8335-c1ec062ad3a2", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML for Fast System Modeling" ], [ "subsection", "Performance Modeling in Chip Design and Design Automation" ] ], "subsections": [ "df0787ef-e9e6-48be-837b-abd7e6a37852", "3e9b9f12-6464-4e9c-b39f-515e81853fdb", "8ba558a9-ad3c-4f4c-a999-80fc17f11ebf" ], "title": "Performance Modeling in Chip Design and Design Automation" }, { "cite_extract_rate": 0.15625, "cites": [ 5393, 5395, 5396, 8916, 5394 ], "content": "\\label{sec:model_analog}\nAnalog circuit design is usually a manual process that requires many trial-and-error iterations between pre-layout and post-layout phases.\nIn recent years, the discrepancy between schematic (i.e., pre-layout) performance estimations and post-layout simulation results is further enlarged.\nOn the one hand, the analytical performance estimations from schematics are no longer accurate with device scaling; on the other hand, even though post-layout simulations can provide high-accuracy estimations, they are extremely time-consuming and have become the major bottleneck of design iteration time.\nTo shrink the gap in 
performance modeling of integrated circuits (ICs), ML techniques are widely applied for fast circuit evaluation.\nWe discuss the studies based on whether their input features are extracted from pre-layout or post-layout information.\n\\textcircled{\\small{1}}\nGiven design schematics, parasitics in layouts can be predicted from the pre-layout stage, which helps bridge the performance gap between pre-layout and post-layout simulations.\nParaGraph builds a GNN model to predict layout-dependent parasitics and physical device parameters.\nMLParest shows that non-graph-based methods (e.g., random forest) also work well for estimating interconnect parasitics, whereas the lack of placement information may cause large variations in predictions.\n\\textcircled{\\small{2}}\nGiven circuit schematics as well as device information as inputs, it is possible to directly model post-layout performance from pre-layout designs.\nAlawieh \\textit{et al.} propose a hierarchical method that combines the Bayesian co-learning framework and semi-supervised learning to predict power consumption.\nThe entire circuit schematic is partitioned into multiple blocks to build block-level performance models, upon which circuit-level performance models are built. By combining these two low-dimensional models with a large amount of unlabeled data, pseudo samples can be labeled with almost no cost. 
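The pseudo-labeling step just described can be sketched in a few lines (a toy one-dimensional example in plain Python with invented numbers; it illustrates only the generic pseudo-labeling idea, not the actual Bayesian co-learning algorithm):

```python
# Toy pseudo-labeling: a model fit on a few expensive labeled samples labels
# many cheap unlabeled design points, enlarging the final training set.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx  # y ~= slope * x + intercept

# A handful of labeled samples (e.g., from costly post-layout simulations).
labeled_x = [1.0, 2.0, 3.0]
labeled_y = [2.1, 3.9, 6.1]  # roughly y = 2x

# Many unlabeled design points, labeled at almost no cost.
unlabeled_x = [1.5, 2.5, 3.5, 4.0]
a, b = fit_line(labeled_x, labeled_y)
pseudo_y = [a * x + b for x in unlabeled_x]

# The final model is trained on labeled plus pseudo-labeled samples.
a2, b2 = fit_line(labeled_x + unlabeled_x, labeled_y + pseudo_y)
print(round(a2, 2))
```

In the real setting, the labeled samples come from expensive simulations, while the pseudo-labels substitute for simulations on the unlabeled design points.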
Finally, a high-dimensional performance model mapping low-level features to circuit-level metrics is trained with pseudo samples and a small number of labeled samples, which demonstrates the feasibility of performance modeling with limited labeled samples.\nSeveral variants of Bayesian-based methods also perform well for estimating post-layout performance, e.g., combining Bayesian regression with SVM to predict circuit performance and using Bayesian DNNs to compare circuit designs .\n\\textcircled{\\small{3}}\nSince post-layout simulations with SPICE-like simulators are time-consuming, ML techniques are applied to quickly assess layout design performance .\nTo make better use of structural information inside layouts, intermediate layout placement results are represented as 3D images to feed a 3D CNN model , or encoded as graphs to train a customized GNN model , with the goal of predicting whether a design specification is satisfied.\n\\begin{table}[tbp]\n\\vspace{-10pt}\n\\caption{Summary of applying ML techniques for performance modeling and prediction in design automation.}\n\\vspace{-10pt}\n\\label{table:model_eda}\n\\centering\n \\tiny\n \\renewcommand{\\arraystretch}{1}\n \\setlength{\\tabcolsep}{0.5pt}\n\\begin{tabular}{c|c|c|c}\n\\toprule\n\\textbf{Domain} & \\textbf{Prediction of} & \\textbf{Technique} & \\textbf{Input} \\\\ \\midrule\n\\multirow{5}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Analog \\\\ Circuit \\\\ $( \\S$ \\ref{sec:model_analog}$)$ \\end{tabular}}} & Parasitics & GNN , random forest & Circuit schematics \\\\ \\cline{2-4} \n & Power/area/bandwidth & \\begin{tabular}[c]{@{}c@{}}Bayesian co-learning and semi-supervised learning , \\\\ Bayesian regression and SVM \\end{tabular} & \\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Circuit schematics \\\\ and device information\\end{tabular}} \\\\ \\cline{2-3}\n & \\begin{tabular}[c]{@{}c@{}}Probability of superiority between designs\\end{tabular} & Bayesian DNN & \\\\ \\cline{2-4} \n & 
\\begin{tabular}[c]{@{}c@{}}Gain/unity gain frequency/bandwidth/phase margin\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}SVM/ANN/random forest , 3D CNN , GNN \\end{tabular} & \\multirow{2}{*}{Circuit placement} \\\\ \\cline{2-3}\n & Electromagnetic properties & GNN & \\\\ \\hline\n\\multirow{9}{*}{\\makecell[c]{\\textbf{HLS} \\\\ $( \\S$ \\textbf{\\ref{sec:model_hls}}$)$}} & \\begin{tabular}[c]{@{}c@{}}Area/latency/throughput/logic utilization\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Random forest , transfer learning \\end{tabular} & Directives in HLS scripts \\\\ \\cline{2-4} \n & Resource utilization & ANN & \\multirow{3}{*}{IR graphs from HLS front-ends} \\\\ \\cline{2-3}\n & Resource mapping and clustering & GraphSAGE & \\\\ \\cline{2-3}\n & Routing congestion & \\begin{tabular}[c]{@{}c@{}}Linear regression/ANN/gradient boosted regression tree \\end{tabular} & \\\\ \\cline{2-4} \n & Power & \\begin{tabular}[c]{@{}c@{}}Linear regression/SVM/tree-based models/DNNs \\end{tabular} & IR graphs and HLS reports \\\\ \\cline{2-4} \n & \\begin{tabular}[c]{@{}c@{}}Throughput and throughput-to-area ratio\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Ensemble learning by stacked regression \\end{tabular} & \\multirow{2}{*}{HLS reports} \\\\ \\cline{2-3}\n & Resource utilization and timing & \\begin{tabular}[c]{@{}c@{}}Linear regression/ANN/gradient tree boosting \\end{tabular} & \\\\ \\cline{2-4} \n & Cross-platform latency and power & Random forest & CPU program counters \\\\ \\cline{2-4} \n & Speedup over an ARM processor & ANN & \\begin{tabular}[c]{@{}c@{}}Application characteristics, \\\\ HLS reports, FPGA configurations\\end{tabular} \\\\ \\hline\n\\multirow{8}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Logic and \\\\ Physical \\\\ Synthesis \\\\ $( \\S$ \\ref{sec:model_logic}$)$\\end{tabular}}} & Area/delay & CNN , LSTM & Synthesis flows \\\\ \\cline{2-4} \n & \\multirow{2}{*}{DRVs} & \\begin{tabular}[c]{@{}c@{}}Linear regression/ANN/decision tree , MARS \\end{tabular} 
& \\begin{tabular}[c]{@{}c@{}}Placement and GR information\\end{tabular} \\\\ \\cline{3-4} \n & & MARS/SVM , MLP & Placement \\\\ \\cline{2-4} \n & \\multirow{2}{*}{DRC hotspots} & FCN & \\begin{tabular}[c]{@{}c@{}}Placement and GR information\\end{tabular} \\\\ \\cline{3-4} \n & & Variant of FCN & \\multirow{3}{*}{Placement} \\\\ \\cline{2-3} \n & GR congestion map & FCN & \\\\ \\cline{2-3} \n & \\multirow{2}{*}{Routing congestion in FPGAs} & Linear regression & \\\\ \\cline{3-4} \n & & Conditional GAN & Post-placement images \\\\ \\bottomrule\n\\end{tabular}\n\\vspace{-15pt}\n\\end{table}", "id": "df0787ef-e9e6-48be-837b-abd7e6a37852", "level": "subsubsection", "origin_cites_number": 32, "parent_id": "419e1fc4-5eb3-40c4-b251-cc170597bc2e", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML for Fast System Modeling" ], [ "subsection", "Performance Modeling in Chip Design and Design Automation" ], [ "subsubsection", "Analog Circuit Analysis" ] ], "subsections": [], "title": "Analog Circuit Analysis" }, { "cite_extract_rate": 0.18181818181818102, "cites": [ 5393, 5394 ], "content": "\\label{sec:model_hls}\nHLS is an automated transformation from behavioral languages (e.g., C/C++/SystemC) to register-transfer level (RTL) designs, which significantly expedites the development of hardware designs involving with field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs).\nSince HLS tools usually take considerable time to synthesize each design, it prevents designers from exploring design space sufficiently, which motivates the application of ML models for fast and accurate performance estimation.\nIn performance estimation of HLS designs, the input features to ML models are extracted from three major sources: HLS directives, IRs from HLS front-ends, and HLS reports.\n\\textcircled{\\small{1}} Taking the directives in an HLS script as input features, random forest is 
capable of forecasting different design metrics, such as area and effective latency , and throughput and logic utilization .\nIn order to reuse knowledge from previous experiences, a transfer learning approach can transfer knowledge across different applications or synthesis options.\n\\textcircled{\\small{2}} Taking advantage of IR graphs generated by HLS front-ends, Koeplinger \\textit{et al.} count resource requirements of each node in graphs by using pre-characterized area models, which are then used as inputs to ANNs to predict the LUT routing usage, register duplication, and unavailable LUTs.\nThe exploitation of GNNs makes it possible to automatically predict the mapping from arithmetic operations in IR graphs to different resources on FPGAs . \nTo forecast post-implementation routing congestion, Zhao \\textit{et al.} build a dataset that connects the routing congestion metrics after RTL implementation with operations in IRs, with the goal of training ML models to locate highly congested regions in source code.\n\\textcircled{\\small{3}} Taking the information that can be directly extracted from HLS reports, Dai \\textit{et al.} try several ML models (linear regression, ANN, and gradient tree boosting) to predict post-implementation resource utilization and timing. \nPyramid applies ensemble learning by stacked regression to accurately estimate the throughput or the throughput-to-area ratio.\nHL-Pow employs features from both IR graphs and HLS reports to predict power with a variety of ML models. \nThe surge of heterogeneous platforms with FPGA/ASIC and CPU opens more possibilities for hardware/software co-design, motivating cross-platform performance predictions.\nHLSPredict uses random forest to predict FPGA cycle counts and power consumption based on program counter measurements obtained from CPU execution. 
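As a minimal illustration of the report-based estimators above, the following sketch fits a closed-form simple linear regression from an HLS-reported resource estimate to (synthetic) post-implementation utilization; all numbers are invented, and the cited works use richer feature sets and models such as random forests:

```python
# Minimal report-based predictor: map the HLS-reported LUT estimate to a
# post-implementation LUT count via closed-form simple linear regression.
# All numbers below are synthetic, for illustration only.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx  # y ~= slope * x + intercept

# (HLS-reported LUTs, LUTs after place-and-route) for past designs.
reports = [1000, 2000, 4000, 8000]
actuals = [1350, 2600, 5100, 10200]
slope, intercept = fit_line(reports, actuals)

def predict_luts(reported):
    return slope * reported + intercept

print(round(predict_luts(3000)))  # estimate without running implementation
```

Such a predictor returns an estimate in milliseconds, whereas actually running place-and-route to measure the true value can take hours.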
\nWhile HLSPredict targets the same FPGA platform in training and testing, XPPE considers different FPGA platforms, and uses ANNs to predict the speedup of an application on a target FPGA over an ARM processor.", "id": "3e9b9f12-6464-4e9c-b39f-515e81853fdb", "level": "subsubsection", "origin_cites_number": 11, "parent_id": "419e1fc4-5eb3-40c4-b251-cc170597bc2e", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML for Fast System Modeling" ], [ "subsection", "Performance Modeling in Chip Design and Design Automation" ], [ "subsubsection", "High-Level Synthesis (HLS)" ] ], "subsections": [], "title": "High-Level Synthesis (HLS)" }, { "cite_extract_rate": 0.15384615384615302, "cites": [ 5395, 8916 ], "content": "\\label{sec:model_logic}\nIn digital design, logic synthesis converts RTL designs into optimized gate-level representations;\nphysical synthesis then transforms these design netlists into physical layouts.\nSince these two stages may take hours or days to generate final bitstreams/layouts, many problems benefit from the power of ML models for fast performance estimation.\nIn logic synthesis, CNN models or LSTM-based models can be leveraged to forecast the delay and area after applying different synthesis flows on specific designs, where the inputs are synthesis flows represented in either matrices or time series.\nIn physical synthesis, routing is a sophisticated problem subject to stringent constraints, and EDA tools typically utilize a two-step method: global routing (GR) and detailed routing (DR). 
The GR tool allocates routing resources coarsely and provides routing plans to guide DR tools to complete the entire routing.\nIn general, routing congestion can be determined during or after GR; the routability of a design is confirmed after DR and design rule checking (DRC).\nEndeavors have been made to predict routability from early layout stages, so as to avoid excessive iterations back and forth between placement and routing.\nIn ASICs, some investigations predict routability by estimating the number of design rule violations (DRVs).\nTaking GR results as inputs, Li \\textit{et al.} explore several ML models (linear regression, ANN, and decision tree) to predict the number of DRVs, final hold slack, power, and area. \nQi \\textit{et al.} rely on placement data and congestion maps from GR as input features, and use a nonparametric regression technique, multivariate adaptive regression splines (MARS) , to predict the utilization of routing resources and the number of DRVs.\nBy merely leveraging placement information, it is possible to predict routability by MARS and SVM , or to detect DR short violations by an MLP .\nWhen representing placement information as images, fully convolutional networks (FCNs) are capable of predicting locations of DRC hotspots by considering GR information as inputs , or of forecasting GR congestion maps by formulating the prediction task as a pixel-wise binary classification using placement data .\nJ-Net is a customized FCN model, and takes both high-resolution pin patterns and low-resolution layout information from the placement stage as features to output a 2D array that indicates if the tile corresponding to each entry is a DRC hotspot.\nIn FPGAs, routing congestion maps can be directly estimated by linear regression using feature vectors coming from pin counts and wirelength per area of SLICEs.\nBy casting routing congestion prediction as an image translation problem, a conditional GAN is able to take post-placement images as 
inputs to predict congestion heat maps.", "id": "8ba558a9-ad3c-4f4c-a999-80fc17f11ebf", "level": "subsubsection", "origin_cites_number": 13, "parent_id": "419e1fc4-5eb3-40c4-b251-cc170597bc2e", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML for Fast System Modeling" ], [ "subsection", "Performance Modeling in Chip Design and Design Automation" ], [ "subsubsection", "Logic and Physical Synthesis" ] ], "subsections": [], "title": "Logic and Physical Synthesis" }, { "cite_extract_rate": 0.15384615384615302, "cites": [ 5397, 5409, 3579, 5400, 5399, 5406, 5398, 5407, 5403, 3603, 5408, 5401, 5405, 4096, 5402, 5404 ], "content": "\\label{sec:methodology}\nThis section introduces studies that directly employ ML techniques as the design methodology for computer architecture/systems.\nComputer architecture and systems have been becoming increasingly complicated, making it expensive and inefficient for human efforts to design or optimize them.\nIn response, visionaries have argued that computer architecture and systems should be imbued with the capability to design and configure themselves, adjust their behaviors according to workloads' needs or user-specified constraints, diagnose failures, repair themselves from the detected failures, etc.\nWith strong learning and generalization capabilities, ML-based techniques are naturally suitable to resolve these considerations, which can adjust their policies during system designs according to long-term planning and dynamic workload behaviors.\nAs many problems in architecture/system design can be formulated as combinatorial optimization or sequential decision-making problems, RL is broadly explored and exploited.\nTable \\ref{table:dse_sys} and Table \\ref{table:dse_eda} recapitulate the studies that apply ML techniques as the design methodology for computer architecture/system and design automation respectively, in terms of target tasks and adopted ML 
techniques.\n\\begin{table}[tbp]\n\\vspace{-10pt}\n\\caption{Summary of applying ML techniques as the design methodology for computer architecture/ systems.}\n\\vspace{-10pt}\n\\label{table:dse_sys}\n\\centering\n \\tiny\n \\renewcommand{\\arraystretch}{0.9}\n \\setlength{\\tabcolsep}{3pt}\n\\begin{tabular}{c|c|c}\n\\toprule\n\\textbf{Domain} & \\textbf{Task} & \\textbf{Technique} \\\\ \\midrule\n\\multirow{4}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Memory \\\\ System\\\\ Design \\\\ $( \\S$ \\ref{sec:dse_mem}$)$ \\end{tabular}}} & Cache replacement policy & \\begin{tabular}[c]{@{}c@{}}Perceptron learning , Markov decision process , LSTM and SVM \\end{tabular} \\\\ \\cline{2-3} \n & Cache prefetching policy & \\begin{tabular}[c]{@{}c@{}}Perceptron learning , contextual bandit , LSTM \\end{tabular} \\\\ \\cline{2-3} \n & Memory controller policy & Q-learning \\\\ \\cline{2-3} \n & Garbage collection & Q-learning \\\\ \\hline\n\\textbf{\\begin{tabular}[c]{@{}c@{}}Branch\\\\ Prediction \\\\ $( \\S$ \\ref{sec:dse_bp}$)$\\end{tabular}} & Branch direction & \\begin{tabular}[c]{@{}c@{}}MLP , piecewise linear regression , \\\\ perceptron , CNN \\end{tabular} \\\\ \\hline\n\\multirow{8}{*}{\\makecell[c]{\\textbf{NoC} \\\\ $( \\S$ \\textbf{\\ref{NoC}}$)$}} & Link management & ANN \\\\ \\cline{2-3} \n & DVFS for routers & Q-learning \\\\ \\cline{2-3} \n & Routing & Q-learning \\\\ \\cline{2-3} \n & Arbitration policy & DQN \\\\ \\cline{2-3} \n & Adjusting injection rates & Q-learning , ANN \\\\ \\cline{2-3} \n & Selection of fault-tolerant modes & Q-learning \\\\ \\cline{2-3} \n & Link placement in 3D NoCs & STAGE algorithm \\\\ \\cline{2-3} \n & Loop placement in routerless NoCs & Advantage actor-critic with MCTS \\\\ \\hline\n\\multirow{6}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Power \\\\ Management \\\\ $( \\S$ \\ref{power}$)$ \\end{tabular}}} & DVFS and thread packing & Multinomial logistic regression \\\\ \\cline{2-3} \n & DVFS and power gating & MLP \\\\ \\cline{2-3} 
\n & \\begin{tabular}[c]{@{}c@{}}DVFS, socket allocation, and use of HyperThreads\\end{tabular} & Extra trees/gradient boosting/KNN/MLP/SVM \\\\ \\cline{2-3} \n & \\begin{tabular}[c]{@{}c@{}}DVFS for CPU cores/uncore/through-silicon interposers\\end{tabular} & Propositional rule , ANN , Q-learning \\\\ \\cline{2-3} \n & \\begin{tabular}[c]{@{}c@{}}DVFS for CPU-GPU heterogeneous platforms\\end{tabular} & Weighted majority algorithm \\\\ \\cline{2-3} \n & DVFS for multi-/many-core systems & \\begin{tabular}[c]{@{}c@{}}Q-learning , semi-supervised RL , hierarchical Q-learning \\end{tabular} \\\\ \\hline\n\\multirow{6}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Resource \\\\ Management \\\\ \\& Task \\\\ Allocation \\\\ $( \\S$ \\ref{resource management}$)$\\end{tabular}}} & Tuning architecture configurations & \\begin{tabular}[c]{@{}c@{}}Maximum likelihood , statistical machine learning \\end{tabular} \\\\ \\cline{2-3} \n & Dynamic cache partitioning & Enforced subpopulations , Q-learning \\\\ \\cline{2-3} \n & Task allocation in many-core systems & Q-learning , DDPG \\\\ \\cline{2-3} \n & Workflow management & SVM and random forest \\\\ \\cline{2-3} \n & Hardware resource assignment & REINFORCE , Bayesian optimization \\\\ \\cline{2-3} \n & Device placement & \\begin{tabular}[c]{@{}c@{}}REINFORCE , policy gradient , PPO \\end{tabular} \\\\ \\hline\n\\multirow{2}{*}{\\makecell[c]{\\textbf{Scheduling}\\\\ $( \\S$ \\textbf{\\ref{scheduling}}$)$}} & Scheduling jobs in single-core processors & Q-learning \\\\ \\cline{2-3} \n & Scheduling jobs in multi-processor systems & Value-based RL \\\\ \\hline\n\\multirow{9}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Data Center\\\\ Management \\\\ $( \\S$ \\ref{data center}$)$\\end{tabular}}} & Assignment of servers to applications & Value-based RL \\\\ \\cline{2-3} \n & Content allocation in CDNs & Fuzzy RL \\\\ \\cline{2-3} \n & \\begin{tabular}[c]{@{}c@{}}Placement of virtual machines onto physical machines\\end{tabular} & PPO \\\\ 
\\cline{2-3} \n & Traffic optimization & Policy gradient and DDPG \\\\ \\cline{2-3} \n & Scheduling jobs with complex dependency & REINFORCE \\\\ \\cline{2-3} \n & Straggler diagnosis & Statistical ML \\\\ \\cline{2-3} \n & Data-center-level caching policy & \\begin{tabular}[c]{@{}c@{}}Decision tree , LSTM , gradient boosting , DDPG \\end{tabular} \\\\ \\cline{2-3} \n & Bitrate selection for video chunks & A3C \\\\ \\cline{2-3} \n & \\begin{tabular}[c]{@{}c@{}}Scheduling video workloads in hybrid CPU-GPU clusters\\end{tabular} & DQN \\\\ \\hline\n \\multirow{3}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Code\\\\ Generation \\\\ $( \\S$ \\ref{code_generation}$)$\\end{tabular}}} & Code completion & N-gram model and RNN \\\\ \\cline{2-3} \n & Code generation & LSTM \\\\ \\cline{2-3} \n & Program translation & Tree-to-tree encoder-decoder , seq2seq , transformer \\\\ \\hline\n\\multirow{6}{*}{\\makecell[c]{\\textbf{Compiler}\\\\ $( \\S$ \\textbf{\\ref{sec:compiler}}$)$}} & Instruction scheduling & Temporal difference , projective reparameterization \\\\ \\cline{2-3} \n & Improving compiler heuristics & NEAT , LSTM \\\\ \\cline{2-3} \n & Ordering of optimizations & NEAT \\\\ \\cline{2-3} \n & Automatic vectorization & Imitation learning \\\\ \\cline{2-3} \n & \\begin{tabular}[c]{@{}c@{}}Program transformation for approximate computing\\end{tabular} & MLP \\\\ \\cline{2-3} \n & Compilation for DNN workloads & PPO , policy gradient \\\\ \\bottomrule\n\\end{tabular}\n\\end{table}", "id": "87e1e057-9e86-4a5c-afa3-431c75b16a64", "level": "section", "origin_cites_number": 104, "parent_id": "5631fef3-6389-407d-9101-745f45240141", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML as Design Methodology" ] ], "subsections": [ "4ce0d49a-419f-4a57-9bf8-bac10689ff82", "93695066-a3b6-43e6-a561-f289514d99b8", "02b1496f-b229-4d07-b91a-238e79b26753", "8b48b63d-94af-4ec0-a218-72746bce7952", 
"de7c5699-73a2-48d8-9e75-ff1b60c4f5f5", "68c1d9ca-e725-4975-ad78-356d7848d1d1", "e25c32d8-64dc-4364-8501-8063014c67c6" ], "title": "ML as Design Methodology" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:dse_mem}\nThe \"memory wall\" has been a performance bottleneck in von Neumann architectures, where computation is orders of magnitude faster than memory access.\nTo alleviate this problem, hierarchical memory systems are widely used and there arise optimizations for different levels of memory systems.\nAs both the variety and the size of modern workloads are drastically growing, conventional designs that are based on heuristics or intuitions may not catch up with the demand of the ever-growing workloads, leading to sharply degradation in performance.\nAs such, many studies resort to ML-based techniques to design smart and intelligent memory systems.", "id": "4ce0d49a-419f-4a57-9bf8-bac10689ff82", "level": "subsection", "origin_cites_number": 0, "parent_id": "87e1e057-9e86-4a5c-afa3-431c75b16a64", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML as Design Methodology" ], [ "subsection", "Memory System Design" ] ], "subsections": [ "c842f575-5671-48ac-aa2a-715cd98df8a6", "dbd16b4a-1edb-4482-a515-db26bea31ee6", "dc411e9f-ce19-4cdd-9e1c-1930d2c68aa6" ], "title": "Memory System Design" }, { "cite_extract_rate": 0.18181818181818102, "cites": [ 5401, 5398 ], "content": "\\label{cache}\nThe conspicuous disparity in latency and bandwidth between CPUs and memory systems motivates investigations for efficient cache management.\nThere are two major types of studies on cache optimization: improving cache replacement policies, and designing intelligent prefetching policies.\n\\textcircled{\\small{1}} \nTo develop cache replacement policies, perceptron learning is employed to predict whether to bypass or reuse a referenced block in the last-level cache (LLC) .\nInstead of using perceptrons, 
Beckmann \\textit{et al.} model the cache replacement problem as a Markov decision process and replace lines according to the difference between their expected hits and the average hits.\nShi \\textit{et al.}~ train an attention-based LSTM model offline to extract insights from history program counters, which are then used to build an online SVM-based hardware predictor to serve as the cache replacement policy.\n\\textcircled{\\small{2}} \nTo devise intelligent prefetchers, Wang \\textit{et al.} propose a prefetching mechanism that uses conventional table-based prefetchers to provide prefetching suggestions and a perceptron trained by spatio-temporal locality to reject unnecessary prefetching decisions, ameliorating the cache pollution problem. \nSimilarly, Bhatia \\textit{et al.} integrate a perceptron-based prefetching filter with conventional prefetchers, increasing the coverage of prefetches without hurting accuracy.\nInstead of the commonly used spatio-temporal locality, a context-based memory prefetcher leverages the semantic locality that characterizes access correlations inherent to program semantics and data structures, which is approximated by a contextual bandit model in RL.\nInterpreting semantics in memory access patterns is analogous to sequence analysis in natural language processing (NLP), and thus several studies use LSTM-based models and treat prefetching as either a regression problem or a classification problem .\nDespite their better performance, especially on long access sequences and noisy traces, LSTM-based prefetchers suffer from long warm-up and prediction latency, and considerable storage overheads.\nThe discussion of how hyperparameters impact LSTM-based prefetchers' performance highlights that the lookback size (i.e., 
memory access history window) and the LSTM model size strongly affect prefetchers' learning ability under different noise levels or workload patterns.\nTo accommodate the large memory space, Shi \\textit{et al.} introduce a hierarchical sequence model to decouple predictions of pages and offsets by using two separate attention-based LSTM layers, whereas the corresponding hardware implementation is impractical for actual processors.", "id": "c842f575-5671-48ac-aa2a-715cd98df8a6", "level": "subsubsection", "origin_cites_number": 11, "parent_id": "4ce0d49a-419f-4a57-9bf8-bac10689ff82", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML as Design Methodology" ], [ "subsection", "Memory System Design" ], [ "subsubsection", "Cache" ] ], "subsections": [], "title": "Cache" }, { "cite_extract_rate": 0, "cites": [], "content": "Smart memory controllers can significantly improve memory bandwidth utilization.\nTo build a self-optimizing memory controller adaptive to dynamically changing workloads, the controller can be modeled as an RL agent that always selects legal DRAM commands with the highest expected long-term performance benefits (i.e., Q-values) .\nTo allow optimizations toward various objectives, this memory controller is then improved in two major aspects . First, the rewards of different actions (i.e., legal DRAM commands) are automatically calibrated by genetic algorithms to serve different objective functions (e.g., energy and throughput). 
Second, a multi-factor method that considers the first-order attribute interactions is employed to select proper attributes used for state representations.\nSince both of them use table-based Q-learning and select limited attributes to represent states, the scalability may be a concern and their performance could be improved with more informative representations.", "id": "dbd16b4a-1edb-4482-a515-db26bea31ee6", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "4ce0d49a-419f-4a57-9bf8-bac10689ff82", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML as Design Methodology" ], [ "subsection", "Memory System Design" ], [ "subsubsection", "Memory Controller" ] ], "subsections": [], "title": "Memory Controller" }, { "cite_extract_rate": 0.2, "cites": [ 3600 ], "content": "A variety of work targets different parts of the memory system.\nMargaritov \\textit{et al.} accelerate virtual address translation through learned index structures . The results are encouraging in terms of the accuracy, which reaches almost 100\\% for all tested virtual addresses; yet this method has unacceptably long inference latency, leaving practical hardware implementation as the future work.\nWang \\textit{et al.} reduce data movement energy in interconnects by exploiting asymmetric transmission costs of different bits, where data blocks to be transmitted are dynamically grouped by K-majority clustering to derive energy-efficient expressions for transmission.\nIn terms of garbage collection in NAND flash, Kang \\textit{et al.} propose an RL-based method to reduce the long-tail latency. The key idea is to exploit the inter-request interval (idle time) to dynamically decide the number of pages to be copied or whether to perform an erase operation, where decisions are made by table-based Q-learning. 
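The table-based Q-learning used in these studies can be sketched as follows (a toy garbage-collection environment with invented states, actions, and rewards, not the reward design of the cited work):

```python
import random

# Toy MDP for flash garbage collection: observe the current idle interval,
# choose how many pages to copy during it. States/actions/rewards are invented.
STATES = ["short_idle", "long_idle"]
ACTIONS = [0, 1, 2]  # pages to copy in this interval

REWARD = {("short_idle", 0): 0, ("short_idle", 1): -2, ("short_idle", 2): -4,
          ("long_idle", 0): -4, ("long_idle", 1): 0, ("long_idle", 2): 4}

rng = random.Random(0)
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.5, 0.2

state = "short_idle"
for _ in range(8000):
    # epsilon-greedy exploration
    if rng.random() < eps:
        action = rng.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    r = REWARD[(state, action)]
    nxt = rng.choice(STATES)  # next idle interval, independent of the action
    target = r + gamma * max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (target - Q[(state, action)])
    state = nxt

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)  # copies pages aggressively only during long idle intervals
```

The learned greedy policy copies pages only when the idle interval is long enough to hide the latency, which is the behavior the cited controllers aim for.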
Their following work considers more fine-grained states, and introduces a Q-table cache to manage key states among an enormous number of states.", "id": "dc411e9f-ce19-4cdd-9e1c-1930d2c68aa6", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "4ce0d49a-419f-4a57-9bf8-bac10689ff82", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML as Design Methodology" ], [ "subsection", "Memory System Design" ], [ "subsubsection", "Others" ] ], "subsections": [], "title": "Others" }, { "cite_extract_rate": 0.11111111111111101, "cites": [ 5405 ], "content": "\\label{sec:dse_bp}\nThe branch predictor is one of the mainstays of modern processors, significantly improving instruction-level parallelism. As pipelines gradually deepen, the penalty of mis-prediction increases. Traditional branch predictors often consider a limited history length, which may hurt the prediction accuracy.\nIn contrast, perceptron/MLP-based predictors can handle long histories with reasonable hardware budgets, outperforming prior state-of-the-art non-ML-based predictors.\nStarting with a static branch predictor trained with static features from a program corpus and control flow graphs, an MLP is used to predict the direction of a branch at compile time .\nA later dynamic branch predictor uses a perceptron-based method . It hashes the branch address to select the proper perceptron and computes the dot product accordingly to decide whether to take this branch, which shows great performance on linearly separable branches. 
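The hashed-perceptron scheme just described can be modeled in a few lines of code (a software sketch with made-up table sizes, following the general scheme rather than any specific hardware implementation):

```python
# Toy hashed-perceptron branch predictor: the PC selects a weight vector,
# prediction is the sign of its dot product with the global history, and
# training fires on mispredictions or low-confidence outputs.
HIST_LEN, TABLE_SIZE, THETA = 8, 64, 16

weights = [[0] * (HIST_LEN + 1) for _ in range(TABLE_SIZE)]
history = [1] * HIST_LEN  # global branch history encoded as +1/-1

def predict(pc):
    w = weights[pc % TABLE_SIZE]
    y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
    return y >= 0, y

def train(pc, taken, y):
    w = weights[pc % TABLE_SIZE]
    if (y >= 0) != taken or abs(y) <= THETA:
        t = 1 if taken else -1
        w[0] += t
        for i in range(HIST_LEN):
            w[i + 1] += t * history[i]
    history.insert(0, 1 if taken else -1)  # shift in the new outcome
    history.pop()

# A branch that alternates taken/not-taken is linearly separable in the
# history bits, so the perceptron learns it quickly.
pc, correct = 0x400123, 0
for step in range(600):
    taken = (step % 2 == 0)
    pred, y = predict(pc)
    if step >= 500:
        correct += (pred == taken)
    train(pc, taken, y)
print(correct, "/ 100 correct after warm-up")
```

The threshold THETA keeps training until the output magnitude exceeds it, which stabilizes the weights once predictions are confidently correct.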
\nIts latency and accuracy can be further improved by applying ahead pipelining and selecting perceptrons based on path history.\nTo attain high accuracy on non-linearly separable branches, perceptron-based prediction is generalized as piecewise linear branch prediction.\nIn addition to the path history, multiple types of features from different organizations of branch histories can be leveraged to enhance the overall performance.\nWhen considering practical hardware implementation of branch predictors, SNAP leverages current-steering digital-to-analog converters to transfer digital weights into analog currents and replaces the costly digital dot-product computation with current summation.\nIts optimized version incorporates several new techniques, such as the use of global and per-branch history, trainable scaling coefficients, dynamic training thresholds, etc.\nRather than making binary decisions of whether to take a certain branch, it is possible to directly predict the target address of an indirect branch at the bit level via perceptron-based predictors.\nWhile high accuracy is achieved by current perceptron/MLP-based predictors, Tarsa \textit{et al.} notice that a small number of static branch instructions are systematically mispredicted, referred to as hard-to-predict branches (H2Ps).
Consequently, they propose a CNN helper predictor for pattern matching of branch histories, ultimately improving accuracy for H2Ps in conditional branches.", "id": "93695066-a3b6-43e6-a561-f289514d99b8", "level": "subsection", "origin_cites_number": 9, "parent_id": "87e1e057-9e86-4a5c-afa3-431c75b16a64", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML as Design Methodology" ], [ "subsection", "Branch Prediction" ] ], "subsections": [], "title": "Branch Prediction" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{NoC}\nThe aggressive transistor scaling has paved the way for integrating more cores in a single chip or processor.\nWith the increasing number of cores per chip, the NoC plays an increasingly crucial role, since it is responsible for inter-core communication and data movement between cores and memory hierarchies.\nSeveral problems attracting attention are as follows. \nFirst, communication energy scales slower than computation energy, implying the necessity to improve the power efficiency of NoCs.
\nSecond, the complexity of routing or traffic control grows with the number of cores per chip, and this problem is further exacerbated by the rising variety and irregularity of workloads.\nThird, with the continuous scaling down of transistors, NoCs are more vulnerable to different types of errors and thus reliability becomes a key concern.\nFourth, some non-conventional NoC architectures might bring promising potential in the future, whereas they usually come with large design spaces and complex design constraints, which are nearly impossible to optimize manually.\nAcross all the aforementioned fields, ML-based design techniques demonstrate their strengths.", "id": "02b1496f-b229-4d07-b91a-238e79b26753", "level": "subsection", "origin_cites_number": 1, "parent_id": "87e1e057-9e86-4a5c-afa3-431c75b16a64", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML as Design Methodology" ], [ "subsection", "NoC Design" ] ], "subsections": [ "f70d9a91-45f6-4778-9893-1e1691350184", "9cbfc7ad-3786-493c-a695-e8c786d70c4c", "0f23b3c6-50c2-4cf8-a777-94590c806c36", "5afafbcf-6c39-4a33-8c76-314585dc4284" ], "title": "NoC Design" }, { "cite_extract_rate": 0, "cites": [], "content": "Power consumption is one crucial concern in NoCs, in which links usually consume a considerable portion of network power.\nWhile turning links on/off according to a static threshold of link utilization is a trivial way to reduce power consumption, it cannot adapt to dynamically changing workloads.\nSavva \textit{et al.} use multiple ANNs for dynamic link management. Each ANN is responsible for one region of the NoC, and dynamically computes a threshold for every time interval to turn links on/off given the link utilization of each region.
Despite significant power savings with low hardware overheads, this approach causes long routing latency.\nIn order to meet certain power and thermal budgets, hierarchical ANNs are used to predict optimal NoC configurations (i.e., link bandwidth, node voltage and task assignment to nodes), where the global ANN predicts globally optimal NoC configurations exploiting the locally optimal energy consumption predicted by local ANNs.\nTo save dynamic power, several investigations employ per-router-based Q-learning agents, which are offline-trained ANNs that select optimal voltage/frequency levels for each router.", "id": "f70d9a91-45f6-4778-9893-1e1691350184", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "02b1496f-b229-4d07-b91a-238e79b26753", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML as Design Methodology" ], [ "subsection", "NoC Design" ], [ "subsubsection", "Link Management and DVFS" ] ], "subsections": [], "title": "Link Management and DVFS" }, { "cite_extract_rate": 0, "cites": [], "content": "With the increasing variety and irregularity of workloads and their traffic patterns, learning-based routing algorithms and traffic control approaches show superior performance due to their excellent adaptability.\n\textcircled{\small{1}} \nAs routing problems can be formulated as sequential decision-making processes, several studies apply Q-learning-based approaches, namely the Q-routing algorithm, which uses local estimates of delivery time to minimize total packet delivery time, capable of handling irregular network topologies and sustaining a higher network load than conventional shortest-path routing.\nQ-routing is then extended to several other scenarios, such as combining with dual RL to improve learning speed and routing performance, resolving packet routing in dynamic NoCs whose network structures/topologies are dynamically changing during runtime, handling irregular faults
in bufferless NoCs by the reconfigurable fault-tolerant Q-routing, and enhancing the capability to reroute messages around congested regions by the congestion-aware non-minimal Q-routing.\nIn addition to routing problems, deep Q-networks are also promising for NoC arbitration policies, where the agent/arbiter grants a certain output port to the input buffer with the largest Q-value.\nEven though it displays some improvements in latency and throughput, direct hardware implementation is impractical due to the complexity of deep Q-networks, and thus insights are distilled to derive a relatively simple circuitry implementation.\n\textcircled{\small{2}} \nWith the goal of controlling congestion in NoCs, the SCEPTER NoC architecture, a bufferless NoC with single-cycle multi-hop traversals and a self-learning throttling mechanism, controls the injection of new flits into the network by Q-learning. Each node in the network independently selects whether to increase, decrease, or retain its throttle rate according to its Q-values, which conspicuously improves bandwidth allocation fairness and network throughput.\nWang \textit{et al.} design an ANN-based admission controller to determine the appropriate injection rate and the control policy of each node in a standard NoC.
to guarantee performance.\nWang~\\textit{et al.}~ employ per-router-based Q-learning agents to independently select one of four fault-tolerant modes, which can minimize the end-to-end packet latency and power consumption. These agents are pre-trained and then fine-tuned during runtime.\nIn their following work , these error-correction modes are extended and combined with various multi-function adaptive channel configurations, retransmission settings, and power management strategies, significantly improving latency, energy efficiency, and mean-time-to-failure.", "id": "0f23b3c6-50c2-4cf8-a777-94590c806c36", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "02b1496f-b229-4d07-b91a-238e79b26753", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML as Design Methodology" ], [ "subsection", "NoC Design" ], [ "subsubsection", "Reliability and Fault Tolerance" ] ], "subsections": [], "title": "Reliability and Fault Tolerance" }, { "cite_extract_rate": 0.4, "cites": [ 5400, 5406 ], "content": "With the growing number of cores per chip/system, the increasing heterogeneity of cores, and various performance targets, it is complicated to simultaneously optimize copious design knobs in NoCs.\nOne attempt to automated NoC design is the MLNoC , which utilizes supervised learning to quickly find near-optimal NoC designs under multiple optimization goals. MLNoC is trained by data from thousands of real-world and synthetic SoC (system-on-chip) designs, and evaluated with real-world SoC designs. 
Although only limited details are disclosed and no comprehensive comparison with other design methods is provided, it shows superior performance to manually optimized NoC designs, delivering encouraging results.\nApart from conventional 2D mesh NoCs, a series of investigations focuses on 3D NoC designs, where the STAGE algorithm is applied to optimize vertical and planar placement of communication links in small-world network based 3D NoCs.\nThe STAGE algorithm repeatedly alternates between two stages: the base search, which tries to find local optima based on the learned evaluation function, and the meta-search, which uses SVR to learn evaluation functions.\nLater, the STAGE algorithm is extended for multi-objective optimization in heterogeneous 3D NoC systems, which jointly considers GPU throughput, average latency between CPUs and LLCs, temperature, and energy.\nIn terms of routerless NoCs, in which any two nodes are connected via at least one ring/loop, a deep RL framework that exploits Monte-Carlo tree search for efficient design space exploration is developed to optimize loop placements, and the design constraints can be strictly enforced by carefully devising the reward function.", "id": "5afafbcf-6c39-4a33-8c76-314585dc4284", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "02b1496f-b229-4d07-b91a-238e79b26753", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML as Design Methodology" ], [ "subsection", "NoC Design" ], [ "subsubsection", "General Design" ] ], "subsections": [], "title": "General Design" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{resource}\nResource allocation or management is the coordination between computer architecture/systems and workloads. Consequently, its optimization difficulty grows with the booming complexity from both sides and their intricate interactions.
\nML-based approaches have blazed a trail toward adjusting policies wisely and promptly in response to dynamic workloads or specified constraints.", "id": "8b48b63d-94af-4ec0-a218-72746bce7952", "level": "subsection", "origin_cites_number": 0, "parent_id": "87e1e057-9e86-4a5c-afa3-431c75b16a64", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML as Design Methodology" ], [ "subsection", "Resource Allocation or Management" ] ], "subsections": [ "2eb80446-13aa-40ce-a764-e98e86d2b650", "35decbd9-a2aa-43b2-b9e5-0585c9140768", "5a43dcc0-acd3-4daf-bbb5-f9e380b060a3" ], "title": "Resource Allocation or Management" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{power}\nML-based techniques have been applied broadly to improve power management, due to two main reasons.\nFirst, power/energy consumption can be recognized as one metric of runtime costs.\nSecond, under certain circumstances there could be a hard or soft constraint/budget on power/energy, making power efficiency a necessity.\nConsidering power management for different parts of systems, PACSL uses the propositional rule to adjust dynamic voltage scaling (DVS) for CPU cores and on-chip L2 cache, which improves the energy-delay product by 22\% on average (up to 46\%) over independently applying DVS to each part.\nWon \textit{et al.} coordinate an ANN controller with a proportional-integral controller for uncore DVFS.
The ANN controller can be either pre-trained offline by a prepared dataset or trained online by bootstrapped learning.\nManoj~\\textit{et al.} deploy Q-learning to adaptively adjust the level of output-voltage swing at transmitters of 2.5D through-silicon interposer I/Os, under constraints of communication power and bit error rate.\nFrom the system level, DVFS is one of the most prevalent techniques.\nPack \\& Cap builds a multinomial logistic regression classifier that is trained offline and queried during runtime, to accurately identify the optimal operating point for both thread packing and DVFS under an arbitrary power cap.\nGreenGPU focuses on heterogeneous systems with CPUs and GPUs, and applies the weighted majority algorithm to scale frequency levels for both GPU cores and memory in a coordinated manner.\nCHARSTAR targets joint optimization of power gating and DVFS within a single core, where frequencies and configurations are dynamically selected by a lightweight offline trained MLP predictor.\nTo minimize energy consumption, Imes~\\textit{et al.}~ use ML-based classifiers (e.g., extra trees, gradient boosting, KNN, MLP and SVM) to predict the most energy-efficient resource settings (specifically, tuning socket allocation, the use of HyperThreads, and processor DVFS) by using low-level hardware performance counters.\nBai~\\textit{et al.}~ consider the loss caused by on-chip regulator efficiency during DVFS, and try to minimize energy consumption under a parameterized performance constraint. 
The online control policy is implemented by table-based Q-learning, which is portable across platforms without accurate modeling of a specific system.\nA series of studies leverages RL for dynamic power management in multi-/many-core systems.\nAs systems scale up, these RL-based methods often suffer from state space explosion, and two types of methods are introduced to resolve the scalability issue.\n\textcircled{\small{1}} \nBy combining RL with supervised learning, a semi-supervised RL-based approach achieves linear complexity in the number of cores, which is able to maximize throughput while ensuring power constraints and cooperatively controlling cores and uncores in synergy.\n\textcircled{\small{2}} \nThe exploitation of hierarchical Q-learning reduces the time complexity to $O(n \lg n)$, where $n$ denotes the number of cores.\nPan \textit{et al.} introduce multi-level Q-learning to select target power modes, where Q-values are approximated by a generalized radial basis function.\nTable-based distributed Q-learning also performs well for DVFS, and there is one variant aware of the priorities of different applications.\nSome energy management policies target specific applications or platforms.\nJouleGuard is a runtime control system coordinating approximate computing applications with system resources under energy budgets.\nIt uses a multi-armed bandit approach to identify the most energy-efficient system configuration, upon which application configurations are determined to maximize compute accuracy within energy budgets.\nTargeting Intel SkyLake processors, a post-silicon CPU customization applies various ML models for dynamically clock-gating unused resources.", "id": "2eb80446-13aa-40ce-a764-e98e86d2b650", "level": "subsubsection", "origin_cites_number": 14, "parent_id": "8b48b63d-94af-4ec0-a218-72746bce7952", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML as Design Methodology" ], [
"subsection", "Resource Allocation or Management" ], [ "subsubsection", "Power Management" ] ], "subsections": [], "title": "Power Management" }, { "cite_extract_rate": 0.23529411764705802, "cites": [ 5404, 5403, 5407, 5409 ], "content": "\label{resource management}\nModern architectures and systems have become so sophisticated and diverse that it is non-trivial to either optimize performance or fully utilize system resources.\nThis rapidly evolving landscape is further complicated by various workloads with specific requirements or targets.\nIn order to keep pace, one cure is to develop more efficient and automated methods for resource management and task allocation, where ML-based techniques excel at exploring large design spaces and simultaneously optimizing multiple objectives, and, when carefully designed, preserve better scalability and portability.\nFor a single-core processor, a regularized maximum likelihood approach predicts the best hardware micro-architectural configuration for each phase of a program, based on runtime hardware counters. \nFor multi-core processors, a statistical machine learning (SML) based method can quickly find configurations that simultaneously optimize running time and energy efficiency.
Since this method is agnostic to application and micro-architecture domain knowledge, it is a portable alternative to human expert optimization.\nSML can also be applied as a holistic method to design self-evolving systems that optimize performance hierarchically across circuit, platform, and application levels .\nIn addition to tuning architectural configurations, dynamic on-chip resource management is crucial for multi-core processors, where one example is dynamic cache partitioning.\nIn response to changing workload demands, an RNN evolved by the enforced subpopulations algorithm is introduced to partition L2 cache dynamically.\nWhen integrating dynamic partitioning of LLC with DVFS on cores and uncore, a co-optimization method using table-based Q-learning achieves much lower energy-delay products than any of the techniques applied individually .\nTo guarantee efficient and reliable execution in many-core systems, task allocation should consider several aspects, such as heat and communication issues.\nTargeting the heat interaction of processor cores and NoC routers, Lu \\textit{et al.} apply Q-learning to assign tasks to cores based on current temperatures of cores and routers, such that the maximum temperature in the future is minimized.\nTargeting the non-uniform and hierarchical on/off-chip communication capability in multi-chip many-core systems, core placement optimization leverages deep deterministic policy gradient (DDPG) to map computation onto physical cores, able to work in a manner agnostic to domain-specific information.\nSome studies pay attention to workflow management and general hardware resource assignment.\nSmartFlux focuses on the workflow of data-intensive and continuous processing. 
It intelligently guides asynchronous triggering of processing steps with the help of predictions made by multiple ML models (e.g., SVM, random forest), which indicate whether to execute certain steps and decide the corresponding configurations upon each wave of data.\nGiven target DNN models, deployment scenarios, platform constraints, and optimization objectives (latency/energy), ConfuciuX applies a hybrid two-step scheme for optimal hardware resource assignments (i.e., assigning the number of processing elements and the buffer sizes to each DNN layer), where REINFORCE performs a global coarse-grained search followed by a genetic algorithm for fine-grained tuning.\nApollo is a general architecture exploration framework for sample-efficient accelerator designs, which leverages ML-based black-box optimization techniques (e.g., Bayesian optimization) to optimize accelerator configurations to satisfy user-specified design constraints.\nIn heterogeneous systems with CPUs and GPUs, device placement refers to the process of mapping nodes in computational graphs of neural networks onto proper hardware devices.\nInitially, computational operations are grouped manually, and assigned to devices by REINFORCE, which employs a sequence-to-sequence RNN model as the parameterized policy.\nLater, a hierarchical end-to-end model makes this manual grouping process automatic.\nThe training speed is further improved by the introduction of proximal policy optimization (PPO).\nDespite the great advances brought by the above approaches, they are not transferable, and a new policy must be trained from scratch for each new computational graph.\nBy encoding the structure of computational graphs with static graph embeddings or learnable graph embeddings, the trained placement policy exhibits great generalizability to unseen neural networks.", "id": "35decbd9-a2aa-43b2-b9e5-0585c9140768", "level": "subsubsection", "origin_cites_number": 17, "parent_id": "8b48b63d-94af-4ec0-a218-72746bce7952",
"prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML as Design Methodology" ], [ "subsection", "Resource Allocation or Management" ], [ "subsubsection", "Resource Management and Task Allocation" ] ], "subsections": [], "title": "Resource Management and Task Allocation" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{scheduling}\nIn classical real-time scheduling problems, the key task is to decide the order in which the currently unscheduled jobs should be executed by a single processor, such that the overall performance is optimized.\nAs multi-core processors have become mainstream, scheduling has grown increasingly complex.\nOne major reason is that multiple objectives besides performance should be carefully considered, such as balanced assignments among various cores and response-time fairness.\nEquipped with the capability to understand the feedback provided by the environment and to dynamically adjust policies, RL is a common tool for real-time scheduling.\nTo optimize the execution order of jobs after they are routed to a single CPU core, Whiteson and Stone propose an adaptive scheduling policy that exploits Q-routing, where the scheduler utilizes the router’s Q-table to assess a job’s priority and decides the jobs' ordering accordingly so as to maximize the overall utility.\nIn multi-core systems, Fedorova~\textit{et al.} present a blueprint for a self-tuning scheduling algorithm based on the value-based temporal-difference method in RL, aiming to maximize a cost function that is an arbitrary weighted sum of metrics of interest.\nThis algorithm is then improved into a general method for online scheduling of parallel jobs, where the value functions are approximated by a parameterized fuzzy rulebase.
This scheduling policy always selects to execute jobs with the maximum value functions in the job queue, which possibly preempts currently running jobs and squeezes some jobs into fewer CPUs than they ideally require, with the goal of achieving optimized long-term utility.", "id": "5a43dcc0-acd3-4daf-bbb5-f9e380b060a3", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "8b48b63d-94af-4ec0-a218-72746bce7952", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML as Design Methodology" ], [ "subsection", "Resource Allocation or Management" ], [ "subsubsection", "Scheduling" ] ], "subsections": [], "title": "Scheduling" }, { "cite_extract_rate": 0.2, "cites": [ 3579, 3603, 1390 ], "content": "\\label{data center}\nWith the rapid scale expansion of data centers, issues that may be trivial in a single machine become increasingly challenging, let alone the inherently complicated problems.\nEarly work aims at a relatively simple scenario of resource allocation, i.e., to dynamically assign different numbers of servers to multiple applications. \nThis problem can be modeled as an RL problem with service-level utility functions as rewards:\nthe arbiter will select a joint action that would bring the maximum total return after consulting local value functions estimated via either table-based methods or function approximation . 
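A minimal sketch of the arbiter just described, assuming hypothetical per-application value estimates V[app][n] for granting n servers (the value tables and numbers below are made up for illustration):

```python
from itertools import product

def allocate(V, total_servers):
    # enumerate joint actions (one server count per application) within the
    # budget and pick the one maximizing the sum of local value estimates
    apps = list(V)
    best, best_ret = None, float("-inf")
    for alloc in product(range(total_servers + 1), repeat=len(apps)):
        if sum(alloc) != total_servers:
            continue
        ret = sum(V[a][n] for a, n in zip(apps, alloc))
        if ret > best_ret:
            best, best_ret = dict(zip(apps, alloc)), ret
    return best

# hypothetical value tables, as estimated by table-based methods or
# function approximation in the work above
V = {"web": {0: 0, 1: 5, 2: 8, 3: 9}, "batch": {0: 0, 1: 2, 2: 6, 3: 7}}
print(allocate(V, 3))  # -> {'web': 1, 'batch': 2}
```

In practice the local value functions are learned online, and smarter search replaces this brute-force enumeration as the number of applications grows.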
\nIn order to better model interactions among multiple agents, a multi-agent coordination algorithm with fuzzy RL can be used to solve the dynamic content allocation in content delivery networks (CDNs), in which each requested content is modeled as an agent, trying to move toward the area with a high demand while coordinating with other agents/contents.\nA recent innovation pays attention to the placement of virtual machines onto physical machines, so as to minimize the peak-to-average ratio of resource usage across physical machines, where PPO and hindsight imitation learning are evaluated.\nTo improve data center performance and quality of experience (QoE) for users, ML-based techniques have been explored in a few directions.\n\\textcircled{\\small{1}} It is important to efficiently schedule jobs and effectively diagnose stragglers within jobs.\nAiming at traffic optimization (e.g., flow scheduling, load balancing) in data centers, Chen~\\textit{et al.} develop a two-level RL system: peripheral systems, which are trained by DDPG, reside on end-hosts and locally make instant traffic optimization decisions for short flows; the central system, which is trained by policy gradient, aggregates global traffic information, guides behaviors of peripheral systems, and makes traffic optimization decisions for long flows.\nDecima exploits GNNs to represent cluster information and dependency among job stages, so that the RL-based scheduler can automatically learn workload-specific scheduling policies to schedule data processing jobs with complex dependency. 
\nHound combines statistical ML with meta-learning to diagnose causes of stragglers at data-center-scale jobs.\n\\textcircled{\\small{2}} It is essential to deploy an intelligent data-center-level cache.\nDeepCache employs an LSTM encoder-decoder model to predict future content popularity, which can be combined with existing cache policies to make smarter decisions.\nSong \\textit{et al.} apply gradient boosting machines to mimic a relaxed Belady algorithm that evicts an object whose next request is beyond a reuse distance threshold but not necessarily the farthest in the future.\nPhoebe is an online cache replacement framework leveraging DDPG to predict priorities of objects and to conduct eviction accordingly.\nConsidering non-history based features, Wang \\textit{et al.}~ build a decision tree to predict whether the requested file will be accessed only once in the future. These one-time-access files will be directly sent to users without getting into cache, to avoid cache pollution. \n\\textcircled{\\small{3}} From the workload perspective, video workloads on CDNs or clusters are prevalent but their optimization is quite challenging: first, network conditions fluctuate overtime and a variety of QoE goals should be balanced simultaneously; second, only coarse decisions are available and current decisions will have long-term effects on following decisions.\nThis scenario naturally matches the foundation of RL-based techniques.\nTo optimize users' QoE of streaming videos, adaptive bitrate algorithms have been recognized as the primary tool used by content providers, which are executed on client-side video players and dynamically choose a bitrate for each video chunk based on underlying network conditions.\nPensieve applies asynchronous advantage actor-critic to select proper bitrate for future video chunks based on resulting performance from past decisions.\nWhen considering large-scale video workloads in hybrid CPU-GPU clusters, performance degradation often comes 
from uncertainty and variability of workloads, and unbalanced use of heterogeneous resources. \nTo accommodate this, Zhang \\textit{et al.} use two deep Q-networks to build a two-level task scheduler, where the cluster-level scheduler selects proper execution nodes for mutually independent video tasks and the node-level scheduler assigns interrelated video subtasks to appropriate computing units. This scheme enables the scheduling model to adjust policies according to runtime status of cluster environments, characteristics of video tasks, and dependency among video tasks.", "id": "de7c5699-73a2-48d8-9e75-ff1b60c4f5f5", "level": "subsection", "origin_cites_number": 15, "parent_id": "87e1e057-9e86-4a5c-afa3-431c75b16a64", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML as Design Methodology" ], [ "subsection", "Data Center Management" ] ], "subsections": [], "title": "Data Center Management" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:dse_code_compiler}", "id": "68c1d9ca-e725-4975-ad78-356d7848d1d1", "level": "subsection", "origin_cites_number": 0, "parent_id": "87e1e057-9e86-4a5c-afa3-431c75b16a64", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML as Design Methodology" ], [ "subsection", "Code Generation and Compiler" ] ], "subsections": [ "21901bea-3ca4-4650-b39a-4ecbde92fab7", "9d772652-3e90-4f57-98a3-ad3bb92cfe28" ], "title": "Code Generation and Compiler" }, { "cite_extract_rate": 0.5714285714285711, "cites": [ 5397, 5408, 4096, 5410 ], "content": "\\label{code_generation}\nDue to the similarities in syntax and semantics between programming languages and natural languages, the problem of code generation or translation is often modeled as an NLP problem or a neural machine translation (NMT) problem. \nHere, we would like to bring up a brief discussion. 
\nFor further reference, a comprehensive survey contrasts programming languages with natural languages in detail, and discusses how these similarities and differences drive the design and application of different ML models in code.\nTargeting code completion, several statistical language models (N-gram model, RNN, and a combination of the two) are explored to select sentences that have the highest probability and satisfy constraints to fill up partial programs with holes.\nAs for code generation, CLgen trains LSTM models on a corpus of hand-written code to learn the semantics and structures of OpenCL programs, and generates human-like programs by iteratively sampling from the learned model.\nTargeting program translation, NMT-based techniques are widely applied to migrate code from one language to another.\nFor example, a tree-to-tree model with the encoder-decoder structure effectively translates programs from Java to C\#;\nthe sequence-to-sequence (seq2seq) model can translate from CUDA to OpenCL.\nRather than translating between high-level programming languages, Coda translates binary executables to the corresponding high-level code; it employs a tree-to-tree encoder-decoder structure for code sketch generation and an ensembled RNN-based error predictor for iterative error correction on the generated code.\nNotably, these supervised NMT-based techniques may confront several issues: difficulty generalizing to programs longer than the training ones, limited size of vocabulary sets, and scarcity of aligned input-output data.\nRelying fully on unsupervised machine translation, TransCoder adopts a transformer architecture and uses monolingual source code to translate among C++, Java, and Python.", "id": "21901bea-3ca4-4650-b39a-4ecbde92fab7", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "68c1d9ca-e725-4975-ad78-356d7848d1d1", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML
as Design Methodology" ], [ "subsection", "Code Generation and Compiler" ], [ "subsubsection", "Code Generation" ] ], "subsections": [], "title": "Code Generation" }, { "cite_extract_rate": 0.25, "cites": [ 5402, 5399, 5411 ], "content": "\\label{sec:compiler}\nThe complexity of compilers grows with the complexity of computer architectures and workloads. \nML-based techniques can optimize compilers from many perspectives, such as instruction scheduling, compiler heuristics, the order in which to apply optimizations, hot path identification, auto-vectorization, and compilation for specific applications.\n\\textcircled{\\small{1}} For instruction scheduling, the preference function of one schedule over another can be computed by the temporal difference algorithm in RL .\nRegarding scheduling under highly-constrained code optimization, the projective reparameterization enables automatic instruction scheduling under constraints of data-dependent partial orders over instructions.\n\\textcircled{\\small{2}} For improving compiler heuristics, Neuro-Evolution of Augmenting Topologies (NEAT) improves instruction placement heuristics by tuning placement cost functions.\nTo avoid manual feature engineering, an LSTM-based model automatically learns compiler heuristics from raw code, constructing proper embeddings of programs while simultaneously learning the optimization process.\n\\textcircled{\\small{3}} For choosing the appropriate order in which to apply different optimizations, NEAT can automatically generate beneficial optimization orderings for each method in a program.\n\\textcircled{\\small{4}} For path profiling, CrystalBall uses an LSTM model to statically identify hot paths, the sequences of instructions that are frequently executed. As CrystalBall only relies on IRs, it avoids manual feature crafting and is independent of language or platform. 
\n\\textcircled{\\small{5}} For automatic vectorization, Mendis~\\textit{et al.} leverage imitation learning to mimic optimal solutions provided by superword-level-parallelism based vectorization .\n\\textcircled{\\small{6}} For compilation of specific applications, there are studies improving compilation for approximate computing or DNN applications.\nConsidering compilation for approximate computing, Esmaeilzadeh \\textit{et al.} propose a program transformation method, which trains MLPs to mimic regions of approximable code and eventually replaces the original code with trained MLPs. The following work extends this algorithmic transformation to GPUs.\nConsidering compilation for DNNs, RELEASE utilizes PPO to search optimal compilation configurations for DNNs.\nEGRL optimizes memory placement of DNN tensors during compilation, which combines GNNs, RL, and evolutionary search to figure out optimal mapping onto different on-board memory components (i.e., SRAM, LLC, and DRAM).", "id": "9d772652-3e90-4f57-98a3-ad3bb92cfe28", "level": "subsubsection", "origin_cites_number": 12, "parent_id": "68c1d9ca-e725-4975-ad78-356d7848d1d1", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML as Design Methodology" ], [ "subsection", "Code Generation and Compiler" ], [ "subsubsection", "Compiler" ] ], "subsections": [], "title": "Compiler" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{chip_design}\nAs technology scales down, the increased design complexity comes with growing process variations and reduced design margins, making chip design an overwhelmingly complex problem for human designers.\nRecent advancements in ML create a chance to transform chip design workflows.", "id": "e25c32d8-64dc-4364-8501-8063014c67c6", "level": "subsection", "origin_cites_number": 0, "parent_id": "87e1e057-9e86-4a5c-afa3-431c75b16a64", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer 
Architecture and Systems" ], [ "section", "ML as Design Methodology" ], [ "subsection", "Chip Design and Design Automation" ] ], "subsections": [ "4b384d57-c0f5-44ec-ae75-2d05825729b5", "75c9f4ed-ccff-4fa2-af01-38a78105e680" ], "title": "Chip Design and Design Automation" }, { "cite_extract_rate": 0.30000000000000004, "cites": [ 5414, 727, 5412, 5413, 5415, 166 ], "content": "\\label{sec:dse_analog}\nCompared with its highly automated digital design counterpart, analog design usually demands substantial manual effort and domain expertise.\nFirst, analog circuits have large design spaces in which to search for proper topologies and device sizes. Second, there is an absence of a general framework to optimize or evaluate analog designs, and design specifications often vary case by case.\nRecently, ML techniques have been introduced to expedite analog design automation.\nWe discuss these studies following the top-down flow of analog design: at the circuit level, a proper circuit topology is selected to satisfy system specifications; then at the device level, device sizes are optimized subject to various objectives. 
These two steps constitute the pre-layout design.\nAfter circuit schematics are carefully designed, analog layouts at the physical level are generated.\n\\textcircled{\\small{1}} \nAt the circuit level, there is an attempt towards automatic circuit generation, currently targeting two-port linear analog circuits .\nThe design specifications are encoded by a hypernetwork to generate weights for an RNN model, which is trained to select circuit components and their configurations.\n\\textcircled{\\small{2}} \nAt the device level, the combination of RL and GNNs enables automatic transistor sizing , which is able to generalize across different circuit topologies or different technology nodes.\nAutoCkt introduces transfer learning techniques into deep RL for automatic sizing, achieving 40$\\times$ speedup over a traditional genetic algorithm.\nRosa \\textit{et al.} provide comprehensive discussions of how to address automatic sizing and layout of analog ICs via deep learning and ANNs.\n\\textcircled{\\small{3}}\nAt the physical level, GeniusRoute automates analog routing through guidance learned by a generative neural network.\nThe analog placements and routing are represented as images and passed through a variational autoencoder (VAE) to learn routing likelihoods for each region. 
GeniusRoute achieves performance competitive with manual layouts and is capable of generalizing to circuits of different functionality.\nLiu \\textit{et al.} apply multi-objective Bayesian optimization to optimize combinations of net weighting parameters, which could significantly change floor plans and placement solutions, so as to improve analog layouts of building block circuits.\n\\begin{table}[tbp]\n\\vspace{-12pt}\n\\caption{Summary of applying ML techniques as the design methodology for design automation.}\n\\vspace{-10pt}\n\\label{table:dse_eda}\n\\centering\n \\tiny\n \\renewcommand{\\arraystretch}{0.9}\n \\setlength{\\tabcolsep}{12pt}\n\\begin{tabular}{c|c|c|c}\n\\toprule\n\\multicolumn{2}{c|}{\\textbf{Domain}} & \\textbf{Task} & \\textbf{Technique} \\\\ \\midrule\n\\multirow{4}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Analog \\\\ Design \\\\ $( \\S$ \\ref{sec:dse_analog}$)$ \\end{tabular}}} & \\textbf{Circuit Level} & Generating circuit topology & RNN and hypernetwork \\\\ \\cline{2-4} \n & \\textbf{Device Level} & Device sizing & Actor critic , ANN \\\\ \\cline{2-4} \n & \\multirow{2}{*}{\\textbf{Physical Level}} & Routing & VAE \\\\ \\cline{3-4} \n & & Optimizing layout configurations & Multi-objective Bayesian optimization \\\\ \\cline{2-4} \n \\hline\n\\multirow{12}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Digital \\\\ Design \\\\ $( \\S$ \\ref{sec:dse_digital}$)$\\end{tabular}}} & \\multirow{3}{*}{\\textbf{HLS}} & Optimizing loop unrolling pragma & Random forest \\\\ \\cline{3-4} \n & & Optimizing placement of multiple pragmas & Bayesian optimization \\\\ \\cline{3-4} \n & & Optimizing resource pragma & Actor-critic and GNN \\\\ \\cline{2-4} \n & \\multirow{3}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Logic \\\\ Synthesis\\end{tabular}}} & Selecting proper optimizers & MLP \\\\ \\cline{3-4} \n & & Logic optimization & Policy gradient , actor-critic \\\\ \\cline{3-4} \n & & Determining the maximum error of each node & Q-learning \\\\ \\cline{2-4} \n & 
\\multirow{5}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Physical \\\\ Synthesis\\end{tabular}}} & Optimizing flip-flop placement in clock networks & K-means clustering \\\\ \\cline{3-4} \n & & Optimizing clock tree synthesis & Conditional GAN \\\\ \\cline{3-4} \n & & Optimizing memory cell placement & PPO \\\\ \\cline{3-4} \n & & Optimizing standard cell placement & Cast as an NN training problem \\\\ \\cline{3-4} \n & & Fix design rule violations & PPO \\\\ \\bottomrule\n\\end{tabular}\n\\vspace{-12pt}\n\\end{table}", "id": "4b384d57-c0f5-44ec-ae75-2d05825729b5", "level": "subsubsection", "origin_cites_number": 20, "parent_id": "e25c32d8-64dc-4364-8501-8063014c67c6", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML as Design Methodology" ], [ "subsection", "Chip Design and Design Automation" ], [ "subsubsection", "Analog Design" ] ], "subsections": [], "title": "Analog Design" }, { "cite_extract_rate": 0.21428571428571402, "cites": [ 5414, 5415, 5413 ], "content": "\\label{sec:dse_digital}\nFor the studies applying ML techniques to directly optimize digital designs, we organize them following a top-down flow, i.e., HLS, logic synthesis, and physical synthesis.\nThe design space exploration in HLS designs usually relates to properly assigning directives (pragmas) in high-level source code, since directives significantly impact the quality of HLS designs by controlling parallelism, scheduling, and resource usage.\nThe optimization goal is often to find Pareto solutions between different objectives or to satisfy pre-defined constraints. 
\nWith IR analysis, a random forest can select suitable loop unrolling factors to optimize a weighted sum of execution latency and resource usage .\nProspector uses Bayesian optimization to optimize the placement of directives (loop unrolling/pipelining, array partitioning, function inlining, and allocation), aiming to find Pareto solutions between execution latency and resource utilization in FPGAs. \nIronMan targets Pareto solutions between different resources while keeping the latency unchanged.\nIt combines GNNs with RL to conduct a finer-grained design exploration at the operation level, pursuing optimal resource allocation strategies by optimizing assignments of the resource pragma.\nIn logic synthesis, RTL designs or logic networks are represented by directed acyclic graphs (DAGs). The goal is to optimize logic networks subject to certain constraints.\nLSOracle employs an MLP to automatically decide which of the two optimizers should be applied to different parts of circuits.\nLogic optimization can be formulated as an RL problem solved by the policy gradient or the advantage actor-critic : the state is the current status of a design; the action is a transformation between two DAGs with equivalent I/O behaviors; the optimization objective is to minimize the area or delay of designs.\nQ-ALS aims at approximate logic synthesis and embeds a Q-learning agent to determine the maximum tolerable error of each node in a DAG, such that the total error rates at primary outputs are bounded by pre-specified constraints.\nIn physical synthesis, placement optimization is a popular topic.\n\\textcircled{\\small{1}} \nTo optimize flip-flop placement in clock networks, Wu \\textit{et al.} apply a modified K-means clustering to group post-placement flip-flops, and relocate these clusters by reducing the distance between flip-flops and their drivers while minimizing disruption to the original placement results. 
\nTo optimize clock tree synthesis (CTS), Lu \\textit{et al.} train a regression model that takes pre-CTS placement images and CTS configurations as inputs to predict post-CTS metrics (clock power, clock wirelength, and maximum skew), which is used as the supervisor to guide the training of a conditional GAN, such that the well-trained generator can recommend CTS configurations leading to optimized clock trees. \n\\textcircled{\\small{2}} \nAiming at cell placement, a deep RL approach is introduced to place macros (memory cells), after which standard cells are placed by a force-directed method. \nThis method is able to generalize to unseen netlists, and outperforms RePlAce, albeit several times slower.\nDREAMPlace casts the analytical standard cell placement optimization into a neural network training problem, achieving over 30$\\times$ speedup without quality degradation compared to RePlAce.\nNVCell is an automated layout generator for standard cells, which employs RL to fix design rule violations (DRVs) after placement and routing.\n\\textcircled{\\small{3}} \nML-based techniques demonstrate their versatility in many design automation tasks, such as post-silicon variation extraction by sparse Bayesian learning, and post-silicon timing tuning to mitigate the effects caused by process variation .", "id": "75c9f4ed-ccff-4fa2-af01-38a78105e680", "level": "subsubsection", "origin_cites_number": 14, "parent_id": "e25c32d8-64dc-4364-8501-8063014c67c6", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "ML as Design Methodology" ], [ "subsection", "Chip Design and Design Automation" ], [ "subsubsection", "Digital Design" ] ], "subsections": [], "title": "Digital Design" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:dis}\nIn this section, we discuss the limitations and potential of ML techniques for computer architecture and systems, which span the entire development and deployment stack that involves data, algorithms, 
implementation, and targets.\nWe also envision that the application of ML techniques could be the propulsive force for \\textit{hardware agile development}.\n\\vspace{-5pt}", "id": "a80b2264-c6ef-4ec9-ae9a-838d036e281c", "level": "section", "origin_cites_number": 0, "parent_id": "5631fef3-6389-407d-9101-745f45240141", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "Discussion and Potential Directions" ] ], "subsections": [ "4da21a0b-f62c-4e97-84e3-786911c6b5c8", "bf55ec88-5f58-456f-b066-c5e92f556071", "ac2615bc-0a08-4fa7-932a-166c6e497c28", "56d8fa9d-3bba-4c5d-9a9d-e6d1744157e3", "5afb9503-6a23-454b-90e9-dd19be81420a" ], "title": "Discussion and Potential Directions" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 5416 ], "content": "Data are the backbone of ML; however, perfect datasets are sometimes unavailable or prohibitively expensive to obtain in the computer architecture and system domain.\nHere, we would like to scrutinize two points: the gap between small data and big data, and non-perfect data.\n\\textcircled{\\small{1}} \nIn some EDA problems, such as placement and routing in physical synthesis, the simulation or evaluation is extremely expensive , leading to data scarcity. As ML models usually require enough data to learn underlying statistics and make decisions, this gap between small data and big data often limits the capability of ML-based techniques. There have been different attempts to bridge this gap. From the algorithm side, algorithms that can work with small data have yet to be developed; one current technique is Bayesian optimization, which is effective in small parameter spaces ; active learning , which significantly improves sample efficiency, may also be a cure for this problem. 
From the data side, generative methods can be used to generate synthetic data , mitigating data scarcity.\n\\textcircled{\\small{2}} \nRegarding non-perfect data, even if some EDA tools produce a lot of data, they are not always properly labeled nor presented in the form suitable to ML models. In the absence of perfectly labeled training data, possible alternatives are to use unsupervised learning, self-supervised learning , or to combine supervised with unsupervised techniques . Meanwhile, RL could be a workaround where training data can be generated on-the-fly.\n\\vspace{-5pt}", "id": "4da21a0b-f62c-4e97-84e3-786911c6b5c8", "level": "subsection", "origin_cites_number": 6, "parent_id": "a80b2264-c6ef-4ec9-ae9a-838d036e281c", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "Discussion and Potential Directions" ], [ "subsection", "Bridging Data Gaps" ] ], "subsections": [], "title": "Bridging Data Gaps" }, { "cite_extract_rate": 0.45454545454545403, "cites": [ 5202, 5413, 5416, 5407, 3588 ], "content": "Despite the current achieved accomplishments, we are still expecting novel ML algorithms or schemes to further improve system modeling and optimization, with respect scalability, domain knowledge interpretability, etc.\n\\textbf{New ML Schemes.}\nClassical analytic-based methods usually adopt a bottom-up or top-down procedure, encouraging ML-based techniques to distill hierarchical structures of systems/architecture.\nOne example is hierarchical RL that has flexible goal specifications and learns goal-directed behaviors in complex environments with sparse feedback. Such kind of models enables more flexible and effective multi-level design and control.\nAdditionally, many system optimizations involve participation of multiple agents, such as NoC routing, which are naturally suitable to the realm of multi-agent RL . 
These agents can be fully cooperative, fully competitive, or a mix of the two, enabling versatility of system optimization.\nAnother promising approach is self-supervised learning , beneficial in both improving model robustness and mitigating data scarcity.\nWhile applying a single ML method alone has led to powerful results, hybrid methods, i.e., combining different ML techniques or combining ML techniques with heuristics, unleash more opportunities. For example, RL can be combined with genetic algorithms for hardware resource assignment .\n\\textbf{Scalability.}\nScaling up systems poses scalability challenges.\nFrom the algorithm side, multi-level techniques can help reduce the computation complexity, e.g., multi-level Q-learning for DVFS .\nOne implicit workaround is to leverage transfer learning:\nthe pre-training is a one-time cost, which can be amortized in each future use; the fine-tuning provides flexibility between a quick solution from the pre-trained model and a longer yet better one for a particular task. \nSeveral examples are discussed in Section \\ref{chip_design}.\n\\textbf{Domain Knowledge and Interpretability.}\nMaking better use of domain knowledge unveils possibilities to choose more appropriate models for different system problems and to provide more intuition or explanation of why and how these models work.\nBy drawing an analogy between the semantics of memory access patterns/programming languages and those of natural languages, the prefetching or code generation problems can be modeled as NLP problems, as discussed in Section \\ref{cache} and Section \\ref{code_generation}. \nBy leveraging the graphical representations in many EDA problems, where data are intrinsically presented as graphs (e.g., circuits, logic netlists or IRs), GNNs are expected to be powerful in these fields . 
\nSeveral examples are provided in Section \\ref{sec:model_eda} and Section \\ref{chip_design}.", "id": "bf55ec88-5f58-456f-b066-c5e92f556071", "level": "subsection", "origin_cites_number": 11, "parent_id": "a80b2264-c6ef-4ec9-ae9a-838d036e281c", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "Discussion and Potential Directions" ], [ "subsection", "Developing Algorithms" ] ], "subsections": [], "title": "Developing Algorithms" }, { "cite_extract_rate": 0.5714285714285711, "cites": [ 5418, 7633, 9139, 5417 ], "content": "To fully benefit from ML-based methods, we need to consider practical implementations, appropriate selection of deployment scenarios, and post-deployment model maintenance.\n\\textbf{Better Implementations.}\nTo enable practical implementations of ML-based techniques, improvement can be made from either the model side or software/hardware co-design .\nFrom the model level, network pruning and model compression reduce the number of operations and model size ; weight quantization improves computation efficiency by reducing the precision of operations/operands .\nFrom the co-design level, strategies that have been used for DNN acceleration could also be used in applying ML for systems.\n\\textbf{Appropriate Scenarios: online vs. offline.}\nWhen deploying ML-based techniques for system designs, it is crucial to deliberate on design constraints under different scenarios.\nGenerally, existing work falls into two categories. \n\\textcircled{\\small{1}} \nML-based techniques are deployed online or during runtime, regardless of whether the training phase is performed online or offline. Obviously, the model complexity and runtime overhead are often strictly limited by specific constraints, e.g., power/energy, timing/latency, area, etc. \nGoing one step further, if online training/learning is desired, the design constraints become even more stringent. 
One promising approach is to employ semi-online learning models, which have been applied to solve some classical combinatorial optimization problems, such as bipartite matching and caching . These models enable smooth interpolation between the best possible online and offline training algorithms.\n\\textcircled{\\small{2}} ML-based techniques are applied offline, which often refers to architectural design space exploration. Such problems leverage ML-based techniques to guide system implementation, and once the design phase is completed, ML models will not be invoked again. Thus, the offline applications can tolerate relatively higher overheads.\n\\textbf{Model Maintenance.}\nIn the case of offline training and online deployment, ML models employed in the computer architecture domain, as in other scenarios, require regular maintenance and updating to meet performance expectations,\nsince workload variations over time and hardware aging often cause data drift or concept drift .\nTo proactively circumvent performance degradation of ML models, some measures could be taken during post-deployment periods.\n\\textcircled{\\small{1}}\nML models can be retrained either at a regular interval or when key performance indicators fall below certain thresholds. Retraining models regularly, regardless of their performance, is a more direct way, but it requires a clear understanding of how frequently a model should be updated in its particular scenario; model performance will decline in the interim if retraining intervals are spaced too far apart. Monitoring key performance indicators relies on a comprehensive panel of measurements that explicitly demonstrate model drift, but this may introduce additional hardware/software overhead, and an incorrect selection of measurements often defeats the intention of this method.\n\\textcircled{\\small{2}} During the retraining of ML models, there is often a trade-off between newly collected data and previous data. 
Properly weighting the importance of input data would improve retraining efficacy .\n\\vspace{-5pt}", "id": "ac2615bc-0a08-4fa7-932a-166c6e497c28", "level": "subsection", "origin_cites_number": 7, "parent_id": "a80b2264-c6ef-4ec9-ae9a-838d036e281c", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "Discussion and Potential Directions" ], [ "subsection", "Improving Implementations and Deployments" ] ], "subsections": [], "title": "Improving Implementations and Deployments" }, { "cite_extract_rate": 1, "cites": [ 1695, 7739, 7612 ], "content": "ML-based techniques are supposed to be applicable to both current architectures and emerging systems, leading to long-term advancement in computer architecture and systems.\n\\textbf{Non-homogeneous Components.}\nDesign and development for computer architectures are often based upon earlier-generation architectures of similar purpose, but commonly rely on next-generation hardware components that were not present in earlier generations. 
Examples include the employment of new device nodes with technology scaling, and the replacement of conventional constituents in memory systems with NVM- or PIM-based components.\nIn addition to the heterogeneity of components from different generations, one architecture or system usually consists of both standard parts from libraries and specialized/customized hardware components.\nThis motivates ML-assisted architectures/systems to have the flexibility to transfer among different-generation components, and to support standard and specialized parts simultaneously.\n\\textbf{Non-homogeneous Applications.}\nIn computer architecture and system design, some issues are universal, while others may arise with the advent of new architectures/systems and new workloads.\n\\textcircled{\\small{1}} For evergreen design areas, several examples include caching in hardware/software/data centers (Section \\ref{cache} and Section \\ref{data center}), resource management and task allocation in single/multi-/many-core CPUs and heterogeneous systems (Section \\ref{resource}), NoC design under various scenarios (Section \\ref{NoC}), etc.\n\\textcircled{\\small{2}} For problems arising from new systems/workloads, transfer learning and meta-learning could be helpful in either exploring new heuristics or directly deriving design methodology. 
\nFor example, combining meta-learning with RL allows training a \"meta\" agent that is designed to adapt to a specific workload with only a few observations.\n\\vspace{-5pt}", "id": "56d8fa9d-3bba-4c5d-9a9d-e6d1744157e3", "level": "subsection", "origin_cites_number": 3, "parent_id": "a80b2264-c6ef-4ec9-ae9a-838d036e281c", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "Discussion and Potential Directions" ], [ "subsection", "Supporting Non-homogeneous Tasks" ] ], "subsections": [], "title": "Supporting Non-homogeneous Tasks" }, { "cite_extract_rate": 1, "cites": [ 4334 ], "content": "Even though ML-based modeling significantly reduces the evaluation cost during design iteration, making great strides towards realizing agile hardware development, there is still a long way to go from the ML-based design methodology perspective.\nOne ultimate goal might be fully automated design, which entails two core capabilities: holistic system-wise optimization, and easy migration across different systems, to enable rapid and agile hardware design.\n\\textbf{Holistic Optimization.}\nFueled by recent advancements, ML techniques have been increasingly explored and exploited in computer system design and optimization . 
\nThe target problems that await further endeavors could be multi-objective optimizations under highly constrained situations, or optimizing several components in a system simultaneously.\nWe envisage a holistic, system-wise ML-based framework with a panoramic vision: it should be able to leverage information from different levels of systems in synergy, so that it could thoroughly characterize system behaviors as well as their intrinsically hierarchical abstractions; it should also be able to make decisions at different granularities, so that it could control and improve systems precisely and comprehensively.\n\\textbf{Portable, Rapid, and Agile.}\nStriving for portable, rapid, and agile hardware design, there are two potential directions.\n\\textcircled{\\small{1}} Well-designed interfaces between systems/architectures and ML-based techniques would facilitate portability across different platforms, since ML models can perform well without explicit descriptions of the target domain.\n\\textcircled{\\small{2}} The proliferation of ML-based techniques has more or less transformed the workflow of design automation, directly driving rapid and agile hardware design.\nWe expect GNNs to make better use of the naturally graphical data in the EDA field; we expect deep RL to be a powerful and general-purpose tool for many EDA optimization problems, especially when the exact heuristic or objective is obscure; we expect these ML-based design automation tools to enhance designers' productivity and to thrive in the community.\n\\vspace{-5pt}", "id": "5afb9503-6a23-454b-90e9-dd19be81420a", "level": "subsection", "origin_cites_number": 1, "parent_id": "a80b2264-c6ef-4ec9-ae9a-838d036e281c", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "Discussion and Potential Directions" ], [ "subsection", "Facilitating General Tool Design and Hardware Agile Development" ] ], "subsections": [], "title": "Facilitating General Tool Design 
and Hardware Agile Development" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:con}\nThe flourishing of ML would have stalled without the great systems and powerful architectures that support running these algorithms at scale. \nNow it is time to return the favor and let ML transform the way that computer architecture and systems are designed.\nExisting work that applies ML for computer architecture/systems roughly falls into two categories: ML-based fast modeling that involves performance metrics or some other criteria of interest, and ML-based design methodology that directly leverages ML as the design tool.\nWe hope to see a virtuous cycle, in which ML-based techniques run efficiently on the most powerful computers in pursuit of designing the next generation of computers. We hope ML-based techniques could be the impetus for the revolution of computer architecture and systems.\n\\vspace{-5pt}\n\\bibliographystyle{ACM-Reference-Format}\n\\bibliography{ref}\n\\end{document}\n\\endinput", "id": "79dc37a5-e0c0-498b-b911-e6bc545700a1", "level": "section", "origin_cites_number": 0, "parent_id": "5631fef3-6389-407d-9101-745f45240141", "prefix_titles": [ [ "title", "A Survey of Machine Learning for Computer Architecture and Systems" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
98
[ 1390, 553, 2219, 166, 4378, 5390, 5389, 5391, 5392, 5393, 5395, 5396, 8916, 5394, 5397, 5409, 3579, 5400, 5399, 5406, 5398, 5407, 5403, 3603, 5408, 5401, 5405, 4096, 5402, 5404, 3600, 5410, 5411, 5414, 727, 5412, 5413, 5415, 5416, 5202, 3588, 5418, 7633, 9139, 5417, 1695, 7739, 7612, 4334 ]
1.252764
[ "Sebastian Houben", "Stephanie Abrecht", "Maram Akila", "Andreas Bär", "Felix Brockherde", "Patrick Feifel", "Tim Fingscheidt", "Sujan Sai Gannamaneni", "Seyed Eghbal Ghobadi", "Ahmed Hammam", "Anselm Haselhoff", "Felix Hauser", "Christian Heinzemann", "Marco Hoffmann", "Nikhil Kapoor", "Falk Kappel", "Marvin Klingner", "Jan Kronenberger", "Fabian Küppers", "Jonas Löhdefink", "Michael Mlynarski", "Michael Mock", "Firas Mualla", "Svetlana Pavlitskaya", "Maximilian Poretschkin", "Alexander Pohl", "Varun Ravi-Kumar", "Julia Rosenzweig", "Matthias Rottmann", "Stefan Rüping", "Timo Sämann", "Jan David Schneider", "Elena Schulz", "Gesina Schwalbe", "Joachim Sicking", "Toshika Srivastava", "Serin Varghese", "Michael Weber", "Sebastian Wirkert", "Tim Wirtz", "Matthias Woehrle" ]
Inspect, Understand, Overcome:\\A Survey of Practical Methods\\for AI Safety
2021
2021-04-29T09:54:54Z
cs.LG
The use of deep neural networks (DNNs) in safety-critical applications like mobile health and autonomous driving is challenging due to numerous model-inherent shortcomings. These shortcomings are diverse and range from a lack of generalization over insufficient interpretability to problems with malicious inputs. Cyber-physical systems employing DNNs are therefore likely to suffer from \emph{safety concerns}. In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged. This work provides a structured and broad overview of them. We first identify categories of insufficiencies to then describe research activities aiming at their detection, quantification, or mitigation. Our paper addresses both machine learning experts and safety engineers: The former ones might profit from the broad range of machine learning (ML) topics covered and discussions on limitations of recent methods. The latter ones might gain insights into the specifics of modern ML methods. We moreover hope that our contribution fuels discussions on desiderata for ML systems and strategies on how to propel existing approaches accordingly.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "569f1197-c628-4e9d-9db6-9757cb02d82a", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ] ], "subsections": [ "86d6cdb9-2750-4e63-865d-831683dd115b", "1fe2887d-3162-404c-bb44-c07ac2772367", "380e4e9a-2d10-49f6-add4-04990cddb902", "4396c1c3-a3ad-4efa-9f10-99f90060c0b4", "e47014ad-1aec-4b32-9241-6aab90a7f0dd", "9f13c349-4fb4-458e-8f34-84cd3a93fca0", "71d4e278-f67c-443c-84a9-ff0254362119", "94174462-51f2-497a-b88f-1ceaadd52aaf", "f0f7d7d0-2bd8-48ae-bdb1-126b96887d72", "e23cf554-7193-4f8f-a54e-3da02ce3d7c2", "3497f966-f76d-439f-b7b6-ad5a513d9e31" ], "title": "root" }, { "cite_extract_rate": 0.5, "cites": [ 8089, 6010, 8157 ], "content": "\\label{sec:intro} \n\\sectionauthor{Sebastian Houben\\textsuperscript{1}, Michael Mock\\textsuperscript{1}, Timo Sämann\\textsuperscript{4}, Gesina Schwalbe\\textsuperscript{3}, Joachim Sicking\\textsuperscript{1}}\nIn barely a decade, deep neural networks (DNNs) have revolutionized the field of machine learning by reaching unprecedented, sometimes superhuman, performances on a growing variety of tasks.\nMany of these neural models have found their way into consumer applications like smart speakers, machine translation engines or content feeds. However, in safety-critical systems, where human life might be at risk, the use of recent DNNs is challenging as various model-immanent insufficiencies are yet difficult to address. \n\\par\nThis paper summarizes the promising lines of research in how to identify, address, and at least partly mitigate these DNN insufficiencies.\nWhile some of the reviewed works are theoretically grounded and foster the overall understanding of training and predictive power of DNNs, others provide practical tools to adapt their development, training or predictions. 
\nWe refer to any such method as a \\emph{safety mechanism} if it addresses one or several safety concerns in a feasible manner.\nTheir effectiveness in mitigating safety concerns is assessed by \\emph{safety metrics}. \nAs most safety mechanisms target only a particular insufficiency, we conclude that a \\emph{holistic safety argumentation} for complex DNN-based systems will in many cases rely on a variety of safety mechanisms. \n\\par\nWe structure our review of these mechanisms as follows: Chapter \\ref{sec:dataset_optimization} focuses on \\emph{dataset optimization} for network training and evaluation. It is motivated by the well-known fact that, in comparison to humans, DNNs perform poorly on data that is structurally different from training data. Apart from insufficient generalization capabilities of these models, the data acquisition process and distributional data shifts over time play vital roles. We survey potential counter-measures, \\eg augmentation strategies and outlier detection techniques.\n\\par\nMechanisms that improve on \\emph{robustness} are described in Chapters \\ref{sec:robust_training} and \\ref{sec:adversarial_attacks}, respectively. They deserve attention as DNNs are generally not resilient to common perturbations and adversarial attacks.\n\\par\nChapter \\ref{sec:interpretability} addresses incomprehensible network behavior and reviews mechanisms that aim at \\emph{explainability}, \\eg a more transparent functioning of DNNs.\n\\par\nMoreover, DNNs tend to overestimate their prediction confidence, especially on unseen data. 
\nStraightforward ways to estimate prediction confidence yield mostly unsatisfying results.\nAmong others, this observation fuelled research on more sophisticated \\emph{uncertainty estimations} (see Chapter \\ref{sec:uncertainty}), \\emph{redundancy mechanisms} (see Chapter \\ref{sec:aggregation}) and attempts to reach \\emph{formal verification} as addressed in Chapter \\ref{sec:verification}.\n\\par\nAt last, many safety-critical applications require not only accurate but also near real-time decisions. This is covered by mechanisms on the DNN \\emph{architectural level} (see Chapter \\ref{sec:architecture}) and furthermore by \\emph{compression} and \\emph{quantization} methods (see Chapter \\ref{sec:compression}).\n\\par\nWe conclude this review of mechanism categories with an outlook on the steps to transfer a carefully arranged combination of safety mechanisms into an actual holistic safety argumentation.", "id": "86d6cdb9-2750-4e63-865d-831683dd115b", "level": "section", "origin_cites_number": 6, "parent_id": "569f1197-c628-4e9d-9db6-9757cb02d82a", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:dataset_optimization}\n\\sectionauthor{Matthias Rottmann\\textsuperscript{5}}\nThe performance of a trained model inherently relies on the nature of the underlying dataset.\nFor instance, a dataset with poor variability will hardly result in a model ready for real-world applications.\nIn order to approach the latter, data selection processes such as corner case selection and active learning are of utmost importance.\nThese approaches can help to design datasets that contain the most important information, while preventing the so much desired information from getting lost in an ocean of data.\nFor a given dataset and active learning setups, data augmentation 
techniques are commonly applied, aiming to extract as much model performance from the dataset as possible.\nOn the other hand, safety arguments also require the analysis of how a model behaves on out-of-distribution data, data that contains concepts the model has not encountered during training.\nThis is quite likely to happen as our world is under constant change, in other words exposed to a constantly growing domain shift.\nTherefore, these fields have lately been gaining interest, also with respect to perception in automated driving.", "id": "1fe2887d-3162-404c-bb44-c07ac2772367", "level": "section", "origin_cites_number": 0, "parent_id": "569f1197-c628-4e9d-9db6-9757cb02d82a", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Dataset Optimization" ] ], "subsections": [ "36c23394-2c6f-46c4-b022-50c53af7e345", "f767152d-e919-4dba-8417-ed1faf4da217", "7b5fda9c-0631-4e3b-bf4f-fb2fb6b0d60d", "de2e598e-f35b-4b65-945b-25aee539ea0d", "e4ebc416-15f6-4517-a761-acd4315ad09b" ], "title": "Dataset Optimization" }, { "cite_extract_rate": 0.6923076923076921, "cites": [ 1624, 8160, 3251, 3237, 3219, 8159, 8158, 3248, 8161 ], "content": "\\label{subsec:dataset_optimization:anomaly_detection}\n\\sectionauthor{Sujan Sai Gannamaneni\\textsuperscript{1}, Matthias Rottmann\\textsuperscript{5}}\nThe terms anomaly, outlier and \\emph{out-of-distribution} (OOD) data detection are often used interchangeably in the literature and refer to the task of identifying data samples that are not representative of the training data distribution. Uncertainty evaluation (\\cf Chapter \\ref{sec:uncertainty}) is closely tied to this field as self-evaluation of models is one of the active areas of research for OOD detection. In particular, for image classification problems it has been reported that neural networks often produce high confidence predictions on OOD data. 
The detection of such OOD inputs can either be tackled by post-processing techniques that adjust the estimated confidence or by enforcing low confidence on OOD samples during training . Even guarantees that neural networks produce low confidence predictions for OOD samples can be provided under specific assumptions (\\cf ). More precisely, this work utilizes Gaussian mixture models that, however, may suffer from high-dimensional data and require strong assumptions on the distribution parameters.\nSome approaches use generative models like \\emph{GANs} and \\emph{autoencoders} for outlier detection. The models are trained to learn in-distribution data manifolds and will produce higher reconstruction loss for outliers.\n\\par\nFor OOD detection in semantic segmentation, only a few works have been presented so far.\nAngus \\etal present a comparative study of common OOD detection methods, which mostly deal with image-level classification. In addition, they provide a novel setup of relevant OOD datasets for this task. Another work trains a fully convolutional binary classifier that distinguishes image patches from a known set of classes from image patches stemming from an unknown class . The classifier output applied at every pixel will give the per-pixel confidence value for an OOD object. Both of these works perform at pixel level and without any sophisticated feature generation methods specifically tailored for the detection of entire OOD instances.\nUp to now, outlier detection has not been studied extensively for object detection tasks based on benchmark object detection datasets. In , two CNNs are used to perform object detection and binary classification (benign or anomaly) in a sequential fashion, where the second CNN takes the localized object within the image as input.\n\\par\nFrom a safety standpoint, detecting outliers or OOD samples is extremely important and beneficial as training data cannot realistically be large enough to capture all situations. 
Research in this area is heavily entwined with progress in uncertainty estimation (\\cf Chapter \\ref{sec:uncertainty}) and domain adaptation (\\cf \\refsec{subsec:dataset_optimization:domains}). Extending research works to segmentation and object detection tasks would be particularly significant for leveraging autonomous driving research. In addition to safety, OOD detection can be beneficial in other aspects like when using local expert models. For example, when using an expert model for segmentation of urban driving scenes and another expert model for segmentation of highway driving scenes, an OOD detector could act as trigger on which models can be switched. \n\\par\nWith respect to the approaches presented above, uncertainty-based and generative model-based OOD detection methods are currently promising directions of research. However, it remains an open question whether they can unfold their potential well on segmentation and object detection tasks.", "id": "36c23394-2c6f-46c4-b022-50c53af7e345", "level": "subsection", "origin_cites_number": 13, "parent_id": "1fe2887d-3162-404c-bb44-c07ac2772367", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Dataset Optimization" ], [ "subsection", "Outlier/Anomaly Detection" ] ], "subsections": [], "title": "Outlier/Anomaly Detection" }, { "cite_extract_rate": 0.636363636363636, "cites": [ 8162, 1044, 5527, 3237, 4620, 1043, 1049 ], "content": "\\label{subsec:dataset_optimization:active_learning}\n\\sectionauthor{Matthias Rottmann\\textsuperscript{5}}\nIt is widely known that, as a rule of thumb, for the training of any kind of artificial neural network, an increase of training data leads to increased performance.\nObtaining labeled training data, however, is often very costly and time consuming.\n\\emph{Active learning} provides one possible remedy to this problem: Instead of labeling every data point, active learning utilizes a \\emph{query 
strategy} to request those labels from a teacher/an oracle that leverage the model performance most.\nThe survey paper by Settles provides a broad overview regarding query strategies for active learning methods. However, except for \\emph{uncertainty sampling} and \\emph{query by committee}, most of them seem to be infeasible in deep learning applications up to now. Hence, most of the research activities in active deep learning focus on these two query strategies, as we outline in the following.\n\\par\nIt has been shown for image classification that labels corresponding to uncertain samples can leverage the networks' performance significantly and that a combination with semi-supervised learning is promising. In both works, the uncertainty of unlabeled samples is estimated via Monte Carlo (MC) dropout inference. MC dropout inference and a chosen number of training epochs are executed alternatingly; after performing MC dropout inference, the unlabeled samples' uncertainties are assessed by means of sample-wise dispersion measures. Samples for which the DNN model is very uncertain about its prediction are presented to an oracle and labeled.\n\\par\nWith respect to object detection, a moderate number of active learning methods has been introduced .\nThese approaches include uncertainty sampling and query-by-committee methods . In further works, additional algorithmic features specifically tailored for object detection networks are presented, \\ie separate treatment of the localization and classification loss , as well as weak and strong supervision schemes . For semantic segmentation, an uncertainty-sampling-based approach has been presented , which queries polygon masks for image sections of a fixed size ($128 \\times 128$). 
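The uncertainty-sampling loop described above (average several stochastic forward passes, rank unlabeled samples by a dispersion measure such as predictive entropy, query the most uncertain ones) can be sketched as follows; the simulated class probabilities and all names are illustrative assumptions, not the setup of any cited work:

```python
import numpy as np

def predictive_entropy(mc_probs):
    """mc_probs: (T, N, C) class probabilities from T stochastic
    (e.g. MC dropout) forward passes over N unlabeled samples."""
    mean_probs = mc_probs.mean(axis=0)                      # (N, C)
    return -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)

def select_queries(mc_probs, k):
    """Uncertainty sampling: indices of the k most uncertain samples."""
    return np.argsort(predictive_entropy(mc_probs))[-k:]

rng = np.random.default_rng(0)
T, N, C = 20, 6, 3
logits = rng.normal(size=(T, N, C))
logits[:, 0] += np.array([8.0, 0.0, 0.0])   # sample 0: confident -> low entropy
probs = np.exp(logits) / np.exp(logits).sum(axis=2, keepdims=True)

queries = select_queries(probs, k=3)
print("query these samples next:", queries)
```

The confidently predicted sample 0 is not queried, while samples with disagreeing passes (high predictive entropy) are handed to the oracle for labeling.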
Queries are performed by means of accumulated entropy in combination with a cost estimation for each candidate image section.\n\\par\nRecently, new methods for estimating the quality of a prediction as well as new uncertainty quantification approaches, \\eg gradient-based ones , have been proposed. It remains an open question whether they are suitable for active learning. Since most of the conducted studies are rather academic in nature, their applicability to real-life data acquisition has not yet been demonstrated sufficiently. In particular, it is not clear whether the proposed active learning schemes, including the label acquisition, for instance in semantic segmentation, are suitable to be performed by human labelers. Therefore, label acquisition with a common understanding of the labelers' convenience and suitability for active learning is a promising direction for research and development.", "id": "f767152d-e919-4dba-8417-ed1faf4da217", "level": "subsection", "origin_cites_number": 11, "parent_id": "1fe2887d-3162-404c-bb44-c07ac2772367", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Dataset Optimization" ], [ "subsection", "Active Learning" ] ], "subsections": [], "title": "Active Learning" }, { "cite_extract_rate": 0.4, "cites": [ 330, 8163, 3267, 5113, 328, 8495 ], "content": "\\label{subsec:dataset_optimization:domains}\n\\sectionauthor{Julia Rosenzweig\\textsuperscript{1}}\nThe classical assumption in machine learning is that the training and testing data sets are drawn from the same distribution, implying that the model is deployed under the same conditions as it was trained under. However, as mentioned in the literature, in real-world applications this assumption is often violated in the sense that the training and the testing set stem from different domains having different distributions. 
This poses difficulties for statistical models and the performance will mostly degrade when they are deployed on a domain $D^{\\mathrm{test}}$, having a different distribution than the training dataset (\\ie generalizing from the training to the testing domain is not possible). This makes the study of domains not only relevant from the machine learning perspective, but also from a safety point of view.\n\\par\nMore formally, there are differing notions of a 'domain' in the literature. In one notion, a domain $\\mathcal{D} = \\{\\mathcal{X}, P(X) \\}$ consists of a feature space $\\mathcal{X} \\subset \\mathbb{R}^d$ together with a marginal probability distribution $P(X)$ with $X\\in \\mathcal{X}$. In another, a domain is a pair consisting of a distribution over the inputs together with a labeling function.\nHowever, instead of a sharp labeling function, it is also widely accepted to define a (training) domain $\\mathcal{D} = \\{(x_i,y_i)\\}_{i=1}^n$ to consist of $n$ (labeled) samples that are sampled from a joint distribution $P(x,y)$ (\\cf ). \n\\par\nThe reasons for \\emph{distributional shift} are diverse---as are the names to indicate a shift. For example, if the rate of (class) images of interest is different between training and testing set, this can lead to a domain gap and, \\eg result in differing overall error rates. Moreover, as has been noted, changing weather conditions and camera setups in cars lead to a domain mismatch in applications of autonomous driving.\nIn biomedical image analysis, different imaging protocols and diverse anatomical structures can hinder generalization of trained models (\\cf ).\nCommon terms to indicate distributional shift are \\emph{domain shift, dataset shift, covariate shift, concept drift, domain divergence, data fracture, changing environments} or \\emph{dataset bias}. References provide an overview. 
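As a deliberately crude numerical illustration of such a distributional shift, one can compare per-feature statistics between a source and a target sample; the statistic, the threshold and the synthetic data below are assumptions for demonstration only and do not constitute a formal shift test:

```python
import numpy as np

def mean_shift_score(source, target):
    """Crude covariate-shift indicator: per-feature mean difference,
    scaled by the pooled standard deviation (larger -> stronger shift)."""
    pooled_std = np.sqrt(0.5 * (source.var(axis=0) + target.var(axis=0))) + 1e-12
    return np.abs(source.mean(axis=0) - target.mean(axis=0)) / pooled_std

rng = np.random.default_rng(0)
d_train = rng.normal(loc=0.0, size=(1000, 4))   # source domain samples
d_test = rng.normal(loc=0.0, size=(1000, 4))
d_test[:, 2] += 2.0                             # inject a shift in feature 2

scores = mean_shift_score(d_train, d_test)
shifted = np.where(scores > 0.5)[0]             # ad-hoc threshold
print("suspected shifted features:", shifted)
```

In practice one would use proper two-sample tests or a trained domain classifier on learned features, but the sketch conveys the core idea: quantify how far target statistics deviate from source statistics.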
\n\\par\nMethods and measures to overcome the problem of domain mismatch between one or more source domains and target domain(s) and the resulting poor model performance are studied in the field of transfer learning and in particular its subtopic domain adaptation (\\cf ).\nFor instance, adapting a model that is trained on synthetically generated data to work on real data is one of the core challenges, as can be seen in the literature.\nFurthermore, detecting when samples are out-of-domain or out-of-distribution is an active field of research (\\cf the outlier/anomaly detection in \\refsec{subsec:dataset_optimization:anomaly_detection} as well as the topic of observers in the black-box methods in \\refsec{subsec:verification:black_box_methods} for further reference). This is particularly relevant for machine learning models that operate in the real world: If, \\eg an autonomous vehicle encounters some situation that deviates strongly from what was seen during training (\\eg due to some special event like a biking competition, carnival, etc.), this can lead to wrong predictions and thereby potential safety issues if not detected in time.", "id": "7b5fda9c-0631-4e3b-bf4f-fb2fb6b0d60d", "level": "subsection", "origin_cites_number": 15, "parent_id": "1fe2887d-3162-404c-bb44-c07ac2772367", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Dataset Optimization" ], [ "subsection", "Domains" ] ], "subsections": [], "title": "Domains" }, { "cite_extract_rate": 0.7333333333333331, "cites": [ 8164, 858, 8165, 5980, 5552, 90, 2083, 5548, 7022, 5553, 62 ], "content": "\\label{subsec:dataset_optimization:augmentation}\n\\sectionauthor{Falk Kappel\\textsuperscript{13}}\nGiven the need for large amounts of data to train neural networks, one often runs into a situation where data is lacking. This can lead to insufficient generalization and an overfitting to the training data. 
An overview of different techniques to tackle this challenge can be found in . One approach to overcome this issue is the augmentation of data. It aims at optimizing available data and increasing its amount, curating a dataset that represents a wide variety of possible inputs during deployment. Augmentation can also help when having to work with a heavily unbalanced dataset by creating more samples of underrepresented classes. A broad survey on data augmentation is provided by . They distinguish between two general approaches to data augmentation, the first one being data warping augmentations, which take existing data and transform it in a way that does not affect the labels. The other option is oversampling augmentations, which create synthetic data that can be used to increase the size of the dataset.\\newline\nExamples of some of the most basic augmentations are flipping, cropping, rotating, translating, shearing and zooming. These affect the geometric properties of the image and are easily implemented . The machine learning toolkit \\texttt{Keras}, for example, provides an easy way of applying them to data using its \\texttt{ImageDataGenerator} class . Other simple methods include adaptations in color space that affect properties such as lighting, contrast and tints, which are common variations within image data. Filters can be used to control increased blur or sharpness . Random erasing has been introduced as a method with a similar effect as cropping, aiming at gaining robustness against occlusions. An example of mixing images together as an augmentation technique can be found in . \\newline\nThe abovementioned methods have in common that they work on the input data, but there are different approaches that make use of deep learning for augmentation. An example of making augmentations in feature space using autoencoders can be found in . 
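The basic geometric and photometric augmentations listed above can be sketched in a few lines of numpy; the function names and parameters are illustrative assumptions rather than the API of any particular toolkit:

```python
import numpy as np

def random_flip(img, rng):
    """Horizontal flip with probability 0.5 (label-preserving)."""
    return img[:, ::-1] if rng.random() < 0.5 else img

def random_crop(img, size, rng):
    """Crop a (size x size) window at a random position."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size]

def random_brightness(img, rng, delta=0.2):
    """Additive brightness jitter in [-delta, delta], clipped to [0, 1]."""
    return np.clip(img + rng.uniform(-delta, delta), 0.0, 1.0)

rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))                 # dummy image with values in [0, 1]
augmented = random_brightness(random_crop(random_flip(image, rng), 28, rng), rng)
print(augmented.shape)
```

Chaining such label-preserving transformations with fresh randomness per epoch effectively multiplies the variability of the training set without collecting new data.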
They use the representation generated by the encoder and generate new samples by interpolation and extrapolation between existing samples of a class. The lack of interpretability of augmentations in feature space, in combination with the tendency to perform worse than augmentations in image space, presents open challenges for those types of augmentations . Adversarial training is another method that can be used for augmentation. The goal of adversarial training is to discover cases that would lead to wrong predictions. That means the augmented images will not necessarily represent samples that could occur during deployment, but they can help in achieving more robust decision boundaries . An example of such an approach can be found in . Generative modelling can be used to generate synthetic samples that enlarge the dataset in a useful way; GANs, variational autoencoders and combinations of both are important tools in this area . Examples of data augmentation in a medical context using a CycleGAN can be found in and using a progressively growing GAN in . Besides neural style transfer, which can be used to change the style of an image to a target style, AutoAugment and population-based augmentation are two further noteworthy approaches. 
In the latter two, the idea is to search a predefined search space of augmentations to find the best selection.", "id": "de2e598e-f35b-4b65-945b-25aee539ea0d", "level": "subsection", "origin_cites_number": 15, "parent_id": "1fe2887d-3162-404c-bb44-c07ac2772367", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Dataset Optimization" ], [ "subsection", "Augmentation" ] ], "subsections": [], "title": "Augmentation" }, { "cite_extract_rate": 0.555555555555555, "cites": [ 8133, 8166, 7698, 1636, 3500 ], "content": "\\label{subsec:dataset_optimization:corner_case_detection}\n\\sectionauthor{Alexander Pohl\\textsuperscript{16}, Marco Hoffmann\\textsuperscript{16}, Michael Mlynarski\\textsuperscript{16}, Timo Sämann\\textsuperscript{4}}\nEnsuring that AI-based applications behave correctly and predictably even in unexpected or rare situations is a major concern that gains importance especially in safety-critical applications such as autonomous driving. In the pursuit of more robust AI, corner cases play an important role. \n\\par\nThe meaning of the term corner case varies in the literature. Some consider mere erroneous or incorrect behavior as corner cases . For example, corner cases have been referred to as situations in which an object detector fails to detect relevant objects at relevant locations. Others characterize corner cases mainly as rare combinations of input parameter values . This project adopts the first definition: Inputs that result in unexpected or incorrect behaviour of the AI function are defined as corner cases. \n\\par\nContingent on the hardware, the AI architecture and the training data, the search space of corner cases quickly becomes incomprehensibly large. 
While manual creation of corner cases (\\eg constructing or re-enacting scenarios) might be more controllable, approaches that scale better and allow for a broader and more systematic search for corner cases require extensive automation.\n\\par\nOne approach to automatic corner case detection is based on transforming the input data. The \\emph{DeepTest} framework uses three types of image transformations: linear, affine and convolutional transformations. In addition to these transformations, metamorphic relations help detect undesirable behaviors of deep learning systems. They allow changing the input while asserting some characteristics of the result . For example, changing the contrast of input frames should not affect the steering angle of a car . Input-output pairs that violate those metamorphic relations can be considered as corner cases.\n\\par\nAmong other things, the white-box testing framework \\emph{DeepXplore} applies a method called \\emph{gradient ascent} to find corner cases (\\cf \\refsec{subsec:verification:formal_testing}). In the experimental evaluation of the framework, three variants of deep learning architectures were used to classify the same input image. The input image was then changed according to the gradient ascent of an objective function that reflected the difference in the resulting class probabilities of the three model variants. When the changed (now artificial) input resulted in different class label predictions by the model variants, the input was considered as a corner case. \n\\par\nIn a further approach, corner cases are detected on video sequences by comparing predicted with actual frames. The detector has three components: The first component, semantic segmentation, is used to detect and locate objects in the input frame. As the second component, an image predictor trained on frame sequences predicts the actual frame based on the sequence preceding that frame. 
An error is determined by comparing the actual with the predicted (\\ie expected) frame, following the idea that only situations that are unexpected for AI-based perception functions may be potentially dangerous and therefore a corner case. Both the segmentation and the prediction error are then fed into the third component of the detector, which determines a corner case score that reflects the extent to which unexpected relevant objects are at relevant locations.\n\\par\nIn another work, a corner case detector based on simulations in a \\emph{Carla} environment is presented. In the simulated world, AI agents control the vehicles. During simulations, state information of both the environment and the AI agents is fed into the corner case detector. While the environment provides the real vehicle states, the AI agents provide estimated and perceived state information. Both sources are then compared to detect conflicts (\\eg collisions). These conflicts are recorded for analysis. \n\\par\nSeveral ways of automatically generating and detecting corner cases exist. However, corner case detection is a task with challenges of its own: Depending on the operational domain including its boundaries, the space of possible inputs can be very large. Also, some types of corner cases are specific to the AI architecture, \\eg the network type or the network layout used. Thus, corner case detection has to assume a holistic point of view on both model and input, adding further complexity and reducing transferability of previous insights. \n\\par\nAlthough it can be argued that rarity does not necessarily characterize corner cases, rare input data might have the potential of challenging the AI functionality (\\cf \\refsec{subsec:dataset_optimization:anomaly_detection}).\nAnother research direction could investigate whether structuring the input space in a way suitable for the AI functionality supports the detection of corner cases. 
Provided that the operational domain is conceptualized as an ontology, ontology-based testing may support automatic detection. A properly adapted generator may specifically select promising combinations of extreme parameter values and, thus, provide valuable input for synthetic test data generation.", "id": "e4ebc416-15f6-4517-a761-acd4315ad09b", "level": "subsection", "origin_cites_number": 9, "parent_id": "1fe2887d-3162-404c-bb44-c07ac2772367", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Dataset Optimization" ], [ "subsection", "Corner Case Detection" ] ], "subsections": [], "title": "Corner Case Detection" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 314, 509, 1606, 8429, 1621, 1759, 8167, 8168, 8169, 206, 520, 4169 ], "content": "\\label{sec:robust_training}\n\\sectionauthor{Nikhil Kapoor\\textsuperscript{7}}\nRecent works~ have shown that state-of-the-art deep neural networks (DNNs) performing a wide variety of computer vision tasks such as image classification~, object detection~ and semantic segmentation~ are not robust to small changes in the input.\n\\par\nRobustness of neural networks is an active and open research field that can be considered highly relevant for achieving safety in autonomous driving. Currently, most of the research is directed towards either improving adversarial robustness~ (robustness against carefully designed perturbations that aim at causing misclassifications with high confidence), or improving corruption robustness~ (robustness against commonly occurring augmentations such as weather changes, addition of Gaussian noise, photometric changes, etc.). 
While adversarial robustness might be more of a security issue than a safety issue, corruption robustness, on the other hand, can be considered highly safety-relevant.\n\\par\nEquipped with these definitions, we broadly term \\emph{robust training} here as methods or mechanisms that aim at improving either adversarial or corruption robustness of a DNN, by incorporating modifications into the architecture or into the training mechanism itself.", "id": "380e4e9a-2d10-49f6-add4-04990cddb902", "level": "section", "origin_cites_number": 18, "parent_id": "569f1197-c628-4e9d-9db6-9757cb02d82a", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Robust Training" ] ], "subsections": [ "62315f40-21bf-40c7-afd5-00e8d80aa4e1", "cf7b5f9f-caf9-44e0-b472-2124535b4d7a", "e4ab90c2-e9e5-49ce-a3ae-b1dfdb4472d4" ], "title": "Robust Training" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 868, 8171, 8173, 8172, 8170 ], "content": "\\label{subsec:robust_training:hyperparameter_optimization}\n\\sectionauthor{Seyed Eghbal Ghobadi\\textsuperscript{8}, Patrick Feifel\\textsuperscript{8}}\nThe final performance of a neural network depends highly on the learning process. The process includes the actual optimization and may additionally introduce training methods such as dropout, regularization, or parametrization of a multi-task loss. \n\\par\nThese methods adapt their behavior for predefined parameters. Hence, their optimal configuration is a priori unknown. We refer to them as \\emph{hyperparameters}. Important hyperparameters comprise, for instance, the initial learning rate, steps for learning rate reduction, learning rate decay, momentum, batch size, dropout rate and number of iterations. Their configuration has to be determined according to the architecture and task of the CNN . 
The search for an optimal hyperparameter configuration is called hyperparameter optimization (HO).\n\\par\nHO is usually described as an optimization problem . Thereby, the combined configuration space is defined as \n$\\boldsymbol{\\Lambda} = \\lambda_1 \\times \\lambda_2 \\times \\cdots \\times \\lambda_N$,\naccording to each domain $\\lambda_n$. Their individual spaces can be continuous, discrete, categorical or binary. \n\\par\nHence, we aim to find an optimal hyperparameter configuration $\\lambda^{\\star}$ by minimizing an objective function $\\mathcal{O}(\\cdot)$, which evaluates a trained model $\\mathcal{M}$ on the validation dataset $\\mathcal{D}^{\\textrm{val}}$ with the loss $\\mathcal{L}$:\n\\begin{equation}\n\\lambda^{\\star} = \n\\text{arg min}_{\\lambda \\in \\boldsymbol{\\Lambda}} \\; \n\\mathcal{O} \\left( \\mathcal{L}, \\mathcal{M}_{\\lambda}, \\mathcal{D}^{\\textrm{train}}, \\mathcal{D}^{\\textrm{val}} \\right)\n\\end{equation}\nThis problem statement is widely studied in traditional machine learning and is primarily addressed by Bayesian optimization (BO) in combination with Gaussian processes. However, a straightforward application to deep neural networks encounters problems due to a \\emph{lack of scalability, flexibility and robustness}. \n\\par\nTo exploit the benefits of BO, many authors proposed different combinations with other approaches. \nHyperband in combination with BO (BOHB) frames the optimization as ``... a pure exploration non-stochastic infinite-armed bandit problem ...''. \nThe method of BO for iterative learning (BOIL) internalizes iteratively collected information about the learning curve and the learning algorithm itself. 
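The abstract problem statement above can be made concrete with the simplest baseline, random search over $\boldsymbol{\Lambda}$; the mixed search space and the surrogate validation objective below are illustrative assumptions standing in for a real training-and-validation run:

```python
import numpy as np

def random_search(objective, space, n_trials, rng):
    """Random search over a mixed configuration space Lambda.
    space: dict mapping hyperparameter name -> callable drawing one sample."""
    best_cfg, best_val = None, np.inf
    for _ in range(n_trials):
        cfg = {name: draw(rng) for name, draw in space.items()}
        val = objective(cfg)     # stands in for O(L, M_lambda, D_train, D_val)
        if val < best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val

# hypothetical search space: log-uniform learning rate, discrete batch size
space = {
    "lr": lambda rng: 10 ** rng.uniform(-5, -1),
    "batch_size": lambda rng: int(rng.choice([16, 32, 64, 128])),
}

# surrogate validation loss with a known optimum near lr=1e-3, batch=32
def objective(cfg):
    return (np.log10(cfg["lr"]) + 3) ** 2 + 0.01 * abs(cfg["batch_size"] - 32)

best_cfg, best_val = random_search(objective, space, n_trials=200,
                                   rng=np.random.default_rng(0))
print(best_cfg, round(best_val, 3))
```

BO-based methods replace the blind sampling with a surrogate model of the objective, but they optimize exactly this interface: propose a configuration, evaluate it on validation data, keep the best.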
\nThe authors of introduce the trace-aware knowledge gradient (taKG) as an acquisition function for BO (BO-taKG) which ``leverages both trace information and multiple fidelity controls''.\nThereby BOIL and BO-taKG achieve state-of-research performance regarding CNNs outperforming Hyperband.\n\\par\nOther approaches such as the orthogonal array tuning method (OATM) or HO by reinforcement learning (Hyp-RL) turn away from the Bayesian approaches and offer new research directions.\n\\par\nFinally, the insight that many authors include kernel sizes and number of kernels and layers in their hyperparameter configuration should be emphasized. More work should be spent on the distinct integration of HO in the performance estimation strategy of neural architecture search (\\cf \\refsec{subsec:architecture:neural_architecture_search}).", "id": "62315f40-21bf-40c7-afd5-00e8d80aa4e1", "level": "subsection", "origin_cites_number": 7, "parent_id": "380e4e9a-2d10-49f6-add4-04990cddb902", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Robust Training" ], [ "subsection", "Hyperparameter Optimization" ] ], "subsections": [], "title": "Hyperparameter Optimization" }, { "cite_extract_rate": 0.42857142857142805, "cites": [ 8179, 8177, 958, 8176, 8174, 8167, 8175, 8180, 8178 ], "content": "\\label{subsec:robust_training:modification_of_loss}\n\\sectionauthor{Nikhil Kapoor\\textsuperscript{7}}\nThere exist many approaches that aim at directly modifying the loss function with an objective of improving either adversarial or corruption robustness~. One of the earliest approaches for improving corruption robustness was introduced by Zheng \\etal called \\emph{stability training}, where they introduce a regularization term that penalizes the network prediction to a clean and an augmented image. However, their approach does not scale to many augmentations at the same time. 
Janocha \\etal then introduced a detailed analysis on the influence of multiple loss functions to model performance as well as robustness and suggested that expectation-based losses tend to work better with noisy data and squared-hinge losses tend to work better for clean data. Other well-known approaches are mainly based on variations of data augmentation~, which can be computationally quite expensive. \n\\par\nIn contrast to corruption robustness, there exist many more approaches based on adversarial examples. We highlight some of the most interesting and relevant ones here. Mustafa \\etal proposes to add a loss term that maximally separates class-wise feature map representations, hence increasing the distance from data points to the corresponding decision boundaries. Similarly, Pang \\etal proposed the Max-Mahalanobis center (MMC) loss to learn more structured representations and induce high-density regions in the feature space. Chen \\etal proposed a variation of the well-known cross entropy (CE) loss that not only maximizes the model probabilities of the correct class, but in addition, also minimizes model probabilities of incorrect classes. Cisse \\etal constraints the Lipschitz constant of different layers to be less than one which restricts the error propagation introduced by adversarial perturbations to a DNN. Dezfooli \\etal proposed to minimize the curvature of the loss surface locally around data points. They emphasize that there exists a strong correlation between locally small curvature and correspondingly high adversarial robustness. \n\\par\nAll of these methods highlighted above are evaluated mostly for image classification tasks on smaller datasets, namely CIFAR-10~, CIFAR-100~, SVHN~, and only sometimes on ImageNet~. Very few approaches have been tested rigorously on complex safety-relevant tasks such as \\emph{object detection} and \\emph{semantic segmentation}, etc. 
Moreover, methods that improve adversarial robustness are only tested on a small subset of attack types under differing attack specifications, which makes comparing multiple methods difficult.
\par
In addition, methods that improve corruption robustness are evaluated over a standard dataset of various corruption types, which may or may not be relevant to the application domain at hand. In order to assess multiple methods for their effect on safety-related aspects, a thorough robustness evaluation methodology is needed, which is largely missing in the current literature. This evaluation would need to take into account relevant disturbances/corruption types present in the real world (application domain) and would have to assess robustness towards such changes in a rigorous manner. Without such an evaluation, we run the risk of being overconfident in our network, thereby harming safety.
\sectionauthor{Firas Mualla\textsuperscript{13}}
Domain generalization (DG) can be seen as an extreme case of \emph{domain adaptation} (DA). The latter is a type of transfer learning, where the source and target tasks are the same (\eg shared class labels) but the source and target domains are different (\eg another image acquisition protocol or a different background) .
\nThe DA can be either supervised (SDA), where there is little available labeled data in the target domain, or unsupervised (UDA), where data in the target domain is not labeled. The DG goes one step further by assuming that the target domain is entirely unknown. Thus, it seeks to solve the train-test domain shift in general. While DA is already an established line of research in the machine learning community, DG is relatively new , though with an extensive list of papers in the last few years. \n\\par\nProbably, the first intuitive solution that one may think of to implement DG is neutralizing the domain-specific features. It was shown in that the gray-level co-occurrence matrices (GLCM) tend to perform poorly in semantic classification (\\eg digit recognition) but yield good accuracy in textural classification compared to other feature sets such as SURF and LBP. DG was thus implemented by decorrelating the model's decision from the GLCM features of the input image even without the need of domain labels.\nBesides the aforementioned intensity-based statistics of an input image, it is known that characterizing image style can be done based on the correlations between the filter responses of a DNN layer (neural style transfer). In , the training images are enriched with stylized versions, where a style is defined either by an external style (\\eg cartoon or art) or by an image from another domain. Here, DG is addressed as a \\emph{data augmentation} problem.\nSome approaches try to learn generalizable latent representations by a kind of adversarial training. This is done by a generator or an encoder, which is trained to generate a hidden feature space that maximizes the error of a domain discriminator but at the same time minimizes the classification error of the task of concern. 
Another flavor of adversarial training can be seen in , where an adversarial autoencoder is trained to generate features which a discriminator cannot distinguish from random samples drawn from a prior Laplace distribution. This regularization prevents the hidden space from overfitting to the source domains, in a similar spirit to how variational autoencoders do not leave gaps in the latent space. In , it is argued that the domain labels needed in such approaches are not always well-defined or easily available. Therefore, they assume unknown latent domains, which are learned by clustering in a space similar to the style-transfer features mentioned above. The pseudo labels resulting from clustering are then used in the adversarial training.
Autoencoders have been employed for DG not only in an adversarial setup, but also in the sense of \emph{multi-task learning} nets , where the classification task in such nets is replaced by a reconstruction one. In , an autoencoder is trained to reconstruct not only the input image but also the corresponding images in the other domains.
At the core of both DA and DG, we are confronted with a distribution matching problem. However, estimating the probability density in high-dimensional spaces is intractable. Density-based metrics such as the Kullback-Leibler divergence are thus not directly applicable. In statistics, the so-called \emph{two-samples tests} are usually employed to measure the distance between two distributions in a point-wise manner, \ie without density estimation. For deep learning applications, these metrics need to be not only point-wise but also differentiable. The two-samples tests were approached in the machine learning literature using (differentiable) K-NNs , classifier two-samples tests (C2ST) , or methods based on the theory of kernels . More specifically, the \emph{maximum mean discrepancy} (MMD) , which belongs to the kernel methods, is widely used for DA but also for DG .
Using the MMD, the distance between two samples is estimated based on pairwise kernel evaluations, \eg with the radial basis function (RBF) kernel.
While the DG approaches generalize to domains from which zero shots are available, the so-called \emph{zero shot learning} (ZSL) approaches generalize to tasks (\eg new classes in the same source domains) for which zero shots are available. Typically, the input in ZSL is mapped to a semantic vector per class instead of a simple class label. This can be, for instance, a vector of visual attributes or a word embedding of the class name . A task (with zero shots at training time) can then be described by a vector in this space. In , there is an attempt to combine ZSL and DG in the same framework in order to generalize to new domains as well as new tasks, which is also referred to as \emph{heterogeneous domain generalization}.
Note that most of the discussed approaches for DG require non-standard handling, \ie modifications to models, data, and/or the optimization procedure. This issue poses a serious challenge as it limits the practical applicability of these approaches. There is a line of research which tries to address this point by linking DG to other machine learning paradigms, especially the model-agnostic meta-learning (MAML) algorithm, in an attempt to apply DG in a model-agnostic way. Loosely speaking, a model can be exposed to a simulated train-test domain shift by training on a small \emph{support set} to minimize the classification error on a small \emph{validation set}. This can be seen as an instance of a \emph{few shot learning} (FSL) problem . Moreover, the procedure can be repeated on other (but related) FSL tasks (\eg different classes) in what is known as \emph{episodic training}. The model transfers its knowledge from one task to another and learns how to learn quickly for new tasks. This can thus be seen as a \emph{meta-learning} objective (in an FSL setup).
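The kernel-based MMD estimate described earlier can be sketched as follows (a minimal NumPy sketch using the biased estimator and an RBF kernel; the function names and the bandwidth parameter \texttt{gamma} are illustrative choices):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Pairwise RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of the squared maximum mean discrepancy
    between two samples X and Y, based on pairwise kernel evaluations."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())
```

Two samples drawn from the same distribution yield an estimate close to zero, while a domain shift (\eg a mean offset) increases it, which is what DA and DG methods exploit when matching feature distributions.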
Since the goal of DG is to adapt to new domains rather than new tasks, several model-agnostic approaches try to recast this procedure in a DG setup.
\label{sec:adversarial_attacks}
\sectionauthor{Andreas Bär\textsuperscript{15}}
Over the last few years, deep neural networks (DNNs) have consistently shown state-of-the-art performance across several vision-related tasks.
While their superior performance on clean data is indisputable, they show a lack of robustness to certain input patterns, denoted as \emph{adversarial examples} .
In general, an algorithm for creating adversarial examples is referred to as an \emph{adversarial attack} and aims at fooling an underlying DNN, such that the output changes in a desired and malicious way.
This can be carried out without any knowledge about the DNN to be attacked (black-box attack) , or with full knowledge about the parameters, architecture, or even training data of the respective DNN (white-box attack) .
While adversarial attacks were initially applied to simple classification tasks, some approaches aim at finding more realistic attacks , which particularly pose a threat to safety-critical applications, such as DNN-based environment perception systems in autonomous vehicles.
Altogether, this motivated research into ways of defending against such adversarial attacks .
In this section, we introduce the current state of research regarding adversarial attacks in general, more realistic adversarial attacks closely related to
the task of environment perception for autonomous driving, and strategies for detecting or defending against adversarial attacks.
We conclude each section by clarifying current challenges and research directions.
\label{subsec:adverarial_attacks:adversarial_attacks_defenses}
\sectionauthor{Andreas Bär\textsuperscript{15}, Seyed Eghbal Ghobadi\textsuperscript{8}, Ahmed Hammam\textsuperscript{8}}
The term \emph{adversarial example} was first introduced by Szegedy \etal .
From there on, many researchers tried to find new ways of crafting adversarial examples more effectively.
Here, the fast gradient sign method (FGSM) , DeepFool , the least-likely class method (LLCM) , C\&W , the momentum iterative fast gradient sign method (MI-FGSM) , and projected gradient descent (PGD) are a few of the most famous attacks so far.
In general, these attacks can be executed in an iterative fashion, where the underlying adversarial perturbation is usually bounded by some norm and follows additional optimization criteria, \eg minimizing the number of changed pixels.
\par
The mentioned attacks can be further categorized as image-specific attacks, where a new perturbation needs to be computed for each image.
On the other hand, image-agnostic attacks aim at finding a perturbation which is able to fool an underlying DNN on a set of images.
Such a
perturbation is also referred to as a \emph{universal adversarial perturbation} (UAP).
Here, the eponymous UAP algorithm , fast feature fool (FFF) , and prior driven uncertainty approximation (PD-UA) are a few honorable mentions.
Although the creation of a universal adversarial perturbation typically relies on a white-box setting, such perturbations show a high \emph{transferability} across models .
This allows black-box attacks, where one model is used to create a universal adversarial perturbation, and another model is being attacked with the beforehand-created perturbation.
Another way of designing black-box attacks is to create a surrogate DNN, which mimics the respective DNN to be attacked and thus can be used in the process of adversarial example creation .
In contrast, some research has been done to create completely incoherent images (based on evolutionary algorithms or gradient ascent) that fool an underlying DNN .
Different from that, another line of work proposes to alter only some pixels of an image to attack a respective model.
Here and have used optimization approaches to perturb some pixels in images to produce targeted attacks, aiming at a specific class output, or non-targeted attacks, aiming at outputting a class different from the network output or the ground truth.
This can be taken to the extreme of perturbing only a single pixel to generate an adversarial image .
The authors of proposed to train generative models to generate adversarial examples.
Given an input image and the target label, a generative model is trained to produce adversarial examples for DNNs.
However, while the produced adversarial examples look rather unrealistic to a human, they are able to completely deceive a DNN.
\par
The existence of adversarial examples not only motivated research in finding new attacks, but also in finding \emph{defense strategies} to effectively defend against these attacks.
Especially for safety-critical applications,
such as DNN-based environment perception for autonomous driving, the existence of adversarial examples needs to be handled accordingly.
Similar to adversarial attacks, one can categorize defense strategies into two types: \emph{model-specific} defense strategies and \emph{model-agnostic} defense strategies.
The former refers to defense strategies, where the model of interest is modified in certain ways.
The modification can be done on the architecture, training procedure, training data, or model weights.
On the other hand, model-agnostic defense strategies consider the model to be a black box.
Here, only the input or the output is accessible.
Some well-known model-specific defense strategies include adversarial training , the inclusion of robustness-oriented loss functions during training , removing adversarial patterns in features by denoising layers , and redundant teacher-student frameworks .
The majority of model-agnostic defense strategies primarily focuses on various kinds of (gradient masking) pre-processing strategies .
The idea is to remove the adversary from the respective image, such that the image is transformed from the adversarial space back into the clean space.
\par
Nonetheless, Athalye \etal showed that gradient masking alone is not a sufficient criterion for a reliable defense strategy.
In addition, detection and out-of-distribution techniques have been proposed as model-agnostic defense strategies against adversarial attacks.
Here, the Mahalanobis distance is used to detect adversarial examples, with the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPR) serving as evaluation metrics.
The authors of , on the other hand, proposed to train networks to detect whether the input image is out-of-distribution or not.
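The single-step FGSM attack mentioned earlier in this subsection can be sketched on a toy model (plain NumPy logistic ``network''; the model, the binary cross-entropy loss and the step size $\epsilon$ are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(x, y, w):
    """Binary cross-entropy loss of the logistic model p = sigmoid(w @ x)."""
    p = sigmoid(w @ x)
    return -(y * np.log(p + 1e-12) + (1.0 - y) * np.log(1.0 - p + 1e-12))

def fgsm(x, y, w, eps=0.1):
    """Fast gradient sign method: one signed gradient step on the input.

    For this model the input gradient of the loss is dL/dx = (p - y) * w,
    so no autodiff framework is needed for the sketch."""
    p = sigmoid(w @ x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)
```

By construction, the perturbation is bounded in the $L_\infty$ norm by $\epsilon$, and for this toy model the loss on the adversarial input is never lower than on the clean input.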
\n\\par\nMoreover, Feinman \\etal proved that adversarial attacks usually produce high uncertainty on the output of the DNN.\nAs a consequence, they proposed to use the dropout technique to estimate uncertainty on the output to identify a possible adversarial attack.\n\\par\nRegarding adversarial attacks, the majority of the listed attacks are designed for image classification.\nOnly a few adversarial attacks consider tasks that are closely related to autonomous driving, such as bounding box detection, semantic segmentation, instance segmentation, or even panoptic segmentation.\nAlso, the majority of the adversarial attacks rely on a white-box setting, which is usually not present for a potential attacker.\nEspecially universal adversarial perturbations have to be considered as a real threat due to their high model transferability.\nGenerally speaking, the existence of adversarial examples has not been thoroughly studied yet.\nAn analytical interpretation is still missing, but could help in designing more mature defense strategies.\n\\par\nRegarding defense strategies, adversarial training is still considered as one of the most effective ways of increasing the robustness of a DNN.\nNonetheless, while adversarial training is indeed effective, it is rather inefficient in terms of training time.\nIn addition, model-agnostic defenses should be favored as once being designed, they can be easily transferred to different models.\nMoreover, as most model-agnostic defense strategies rely on gradient-masking and it has been shown that gradient-masking is not a sufficient property for a defense strategy, new ways of designing model-agnostic defenses should be taken into account.\nFurthermore, out-of-distribution and adversarial attacks detection or even correction methods have been a new trend for identifying attacks.\nHowever, as the environment perception system of an autonomous driving vehicle could rely on various information sources, including LiDAR, optical flow, or 
depth from a stereo camera, techniques of information fusion should be further investigated to mitigate or even eliminate the effect of adversarial examples.
\label{subsec:adverarial_attacks:more_realistic_attacks}
\sectionauthor{Svetlana Pavlitskaya\textsuperscript{14}}
We consider the following two categories of realistic adversarial attacks: (1) image-level attacks, which not only fool a neural network but also pose a provable threat to autonomous vehicles, and (2) attacks which have been applied in the real world or in a simulation environment, such as car learning to act (CARLA) .
\par
Some notable examples in the first category include attacks on semantic segmentation or person detection .
\par
In the second group of approaches, the attacks are specifically designed to survive real-world distortions, including different distances, weather and lighting conditions, as well as camera angles. For this, adversarial perturbations are usually concentrated in a specific image area, called an \emph{adversarial patch}. Crafting an adversarial patch involves specifying a patch region in each training image, applying transformations to the patch, and iteratively changing the pixel values within this region to maximize the network prediction error.
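The patch-application part of this crafting loop can be sketched as follows (a minimal NumPy sketch; a random translation stands in for the full set of transformations, and the function name and the $[0,1]$ pixel range are illustrative assumptions):

```python
import numpy as np

def apply_patch(image, patch, rng=None):
    """Paste a rectangular patch at a random location of an (H, W, C) image.

    In patch training, such a random transformation is applied before every
    optimization step so that the resulting patch survives placement changes."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = patch.shape[:2]
    H, W = image.shape[:2]
    top = rng.integers(0, H - h + 1)   # random vertical offset
    left = rng.integers(0, W - w + 1)  # random horizontal offset
    out = image.copy()
    out[top:top + h, left:left + w] = patch  # overwrite the patch region
    return np.clip(out, 0.0, 1.0)
```

The full attack would then backpropagate the network loss through the patched image and update only the patch pixels.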
The latter step typically relies on an algorithm proposed for standard adversarial attacks, which aim at crafting invisible perturbations while misleading neural networks, \eg C\&W , the Jacobian-based saliency map attack (JSMA) , and PGD .
\par
The first printable adversarial patch for image classification was described by Brown \etal . Expectation over transformations (EOT) is one of the influential updates to the original algorithm---it makes patch-based attacks robust to distortions and affine transformations. Localized and visible adversarial noise (LaVAN) is a further method to generate much smaller patches (up to 2\% of the pixels in the image). In general, fooling image classification with a patch is a comparatively simple task, because adversarial noise can mimic an instance of another class and thus lower the prediction probability for the true class.
\par
Recently, patch-based attacks for the harder task of object detection have been described . Also, Lee and Kolter generate a patch using PGD , followed by EOT applied to the patch. With this approach, all detections in an image can be successfully suppressed, even without any overlap of the patch with the bounding boxes. Furthermore, several approaches for generating an adversarial T-shirt have been proposed, including .
\par
DeepBillboard is the first attempt to attack end-to-end driving models with adversarial patches. The authors propose to generate a single patch for a sequence of input images to mislead four steering models, including DAVE-2 , in a drive-by scenario.
\par
Apart from physical feasibility, inconspicuousness is crucial for a realistic attack. Whereas adversarial patches usually look like regions of noise, several works have explored attacks with an inconspicuous patch. In particular, Eykholt \etal demonstrate the vulnerability of road sign classification to adversarial perturbations in the form of simple black and white stickers.
In , an end-to-end driving model is attacked in CARLA by painting black lines on the road. Also, Kong and Liu use a generative adversarial network to generate a realistic billboard that attacks an end-to-end driving model in a drive-by scenario. In , a method to hide visible adversarial perturbations with customized styles is proposed, which leads to adversarial traffic signs that look inconspicuous to a human.
\par
Current research mostly focuses on attacking the image-based perception of an autonomous vehicle. The adversarial vulnerability of further components of an autonomous vehicle, \eg LiDAR-based perception, optical flow and depth estimation, has only recently gained attention. Furthermore, most attacks consider only a single component of an autonomous driving pipeline; the question of whether existing attacks are able to propagate to later pipeline stages has not been studied yet. The first work in this direction describes an attack on object detection and tracking. The evaluation is, however, limited to a few video clips, and no experiments in the real world have been performed. Overall, the research on realistic adversarial attacks, especially combined with physical tests, is still in its early stages.
\label{sec:interpretability}
\sectionauthor{Felix Brockherde\textsuperscript{10}}
Neural networks are, by their nature, black boxes and therefore intrinsically hard to interpret .
Due to their unrivaled performance, they still remain the first choice for advanced systems even in many safety-critical areas, such as level 4 automated driving. This is why the research community has invested considerable effort in breaking open the black box and making deep neural networks more transparent.
We can observe three strategies in the state of the art that provide different viewpoints towards this goal. First is the most direct approach of opening up the black box and looking at intermediate representations (\refsec{subsec:interpretability:intermediate_representations}). Being able to interpret individual layers of the system facilitates interpretation of the whole. The second approach tries to provide interpretability by explaining the network's decisions with pixel attributions (\refsec{subsec:interpretability:pixel_attribution}). Aggregated explanations of decisions can then lead to interpretability of the system itself. Third is the idea of approximating the network with interpretable proxies to benefit from the deep neural network's performance while allowing interpretation via surrogate models (\refsec{subsec:interpretability:interpretable_proxies}). Underlying all aspects here is the area of visual analytics (\refsec{subsec:interpretability:visual_analytics}).
Earlier research in the medical domain aimed at helping human experts understand machine learning decisions and convincing them of their validity . Legal requirements in the finance industry gave rise to interpretable systems that can justify their decisions.
An additional driver for interpretability research was the concern about Clever Hans predictors .
\label{subsec:interpretability:visual_analytics}
\sectionauthor{Elena Schulz\textsuperscript{1}}
Traditional data science has developed a huge tool set of automated analysis processes conducted by computers, which are applied to problems that are well-defined in the sense that the dimensionality of input and output as well as the size of the dataset they rely on is manageable.
For problems that are more complex in comparison, the automation of the analysis process is limited and/or might not lead to the desired outcome.
This is especially the case with unstructured data like image or video data, in which the underlying information cannot directly be expressed by numbers.
Rather, it needs to be transformed into some structured form to enable computers to perform some task of analysis.
Additionally, with an ever-increasing amount of various types of data being collected, this ``information overload'' cannot solely be analyzed by automatic methods .
\par
\emph{Visual analytics} addresses this challenge as ``the science of analytical reasoning facilitated by interactive visual interfaces'' .
Visual analytics therefore does not focus solely on either computationally processing data or visualizing results but couples both tightly with interactive techniques.
\nThus, it enables an integration of the human expert into the iterative visual analytics process: Through visual understanding and human reasoning, the knowledge of the human expert can be incorporated to effectively refine the analysis. This is of particular importance, where a stringent safety argumentation for complex models is required. With the help of visual analytics, the line of argumentation can be built upon arguments that are understandable for humans.\nTo include the human analyst efficiently into this process, a possible guideline is the \\emph{visual analytics mantra} by Keim: ``Analyze first, show the important, zoom, filter and analyze further, details on demand'' \\footnote{Extending the original \\emph{visualization mantra} by Shneiderman ``Overview first, filter and zoom, details on demand''.}.\n\\par\nThe core concepts of visual analytics therefore rely on well-designed interactive visualizations, which support the analyst in the tasks of, \\eg reviewing, understanding, comparing and inferring not only the initial phenomenon or data but also the computational model and its results itself with the goal of enhancing the analytical process. \n\\par\nDriven by various fields of application, visual analytics is a multidisciplinary field with a wide variety of task-oriented development and research. 
\nAs follows, recent work has been done in several areas: \ndepending on the task, there exist different pipeline approaches to create whole \\emph{visual analytics systems} ; \nthe injection of human expert knowledge into the process of determining trends and patterns from data is the focus of \\emph{predictive visual analytics} ; \nenabling the human to explore \\emph{high-dimensional data} interactively and visually (\\eg via dimensionality reduction ) is a major technique to enhance the understandability of complex models (\\eg neural networks); \nthe iterative improvement and the understanding of machine learning models is addressed by using interactive visualizations in the field of \\emph{general machine learning} or the other way round: using machine learning to improve visualizations and guidance based on user interactions .\nEven more focused on the loop of simultaneously developing and refining machine learning models is the area of \\emph{interactive machine learning}, where the topics of \\emph{interface design} and the \\emph{importance of users} are discussed. One of the current research directions is using visual analytics in the area of \\emph{deep learning} . However, due to the interdisciplinarity of visual analytics, there are still open directions and ongoing research opportunities. \n\\par\nEspecially in the domain of neural networks and deep learning, visual analytics is a relatively new approach in tackling the challenge of \\emph{explainability} and \\emph{interpretability} of those often called \\emph{black boxes}. \nTo enable the human to better interact with the models, research is done in enhancing the \\emph{understandability} of complex deep learning models and their outputs with the use of proper visualizations. \nOther research directions attempt to achieve improving the \\emph{trustability} of the models, giving the opportunity to inspect, diagnose and refine the model. 
\nFurther, possible areas for research are \\emph{online training processes} and the development of \\emph{interactive systems} covering the whole process of training, enhancing and monitoring machine learning models. \nHere, the approach of \\emph{mixed guidance}, where system-initiated guidance is combined with user-initiated guidance, is discussed among the visual analytics community as well. \nAnother challenge and open question is creating ways of \\emph{comparing models} to examine which model yields a better performance, given specific situations and selecting or combining the best models with the goal of increasing performance and overall safety.", "id": "cf04c28f-300a-4cf5-bc98-3f4d9734078e", "level": "subsection", "origin_cites_number": 17, "parent_id": "e47014ad-1aec-4b32-9241-6aab90a7f0dd", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Interpretability" ], [ "subsection", "Visual Analytics" ] ], "subsections": [], "title": "Visual Analytics" }, { "cite_extract_rate": 0.514285714285714, "cites": [ 1002, 8198, 6410, 8201, 8197, 8200, 8194, 8199, 5680, 3314, 4423, 8202, 2709, 8196, 306, 318, 8195, 1256 ], "content": "\\label{subsec:interpretability:intermediate_representations}\n\\sectionauthor{Felix Hauser\\textsuperscript{11}, Jan Kronenberger\\textsuperscript{9}, Seyed Eghbal Ghobadi\\textsuperscript{8}}\nIn general, representation learning aims to extract lower dimensional features in latent space from higher dimensional inputs.\nThese features are then used as an effective representation for regression, classification, object detection and other machine learning tasks.\nPreferably, latent features should be disentangled, meaning that they represent separate factors found in the data that are statistically independent.\nDue to their importance in machine learning, finding meaningful intermediate representations has long been a primary research goal.\nDisentangled 
representations can be interpreted more easily by humans and \ncan for example be used to explain the reasoning of neural networks .\nAmong the longer known methods for extracting disentangled representations are principal component analysis (PCA) , independent component analysis , and nonnegative matrix factorization .\nPCA is highly sensitive to outliers and noise in the data.\nTherefore, more robust algorithms were proposed.\nIn already a small neural network was used as an encoder and the algorithm proposed in can deal with high-dimensional data.\nSome robust PCA algorithms are provided with analytical performance guarantees .\nA popular method for representation learning with deep networks is the variational autoencoder (VAE) .\nAn important generalization of the method is the $ \\beta $-VAE variant , which improved the disentanglement capability .\nLater analysis added to the theoretical understanding of $ \\beta $-VAE\n.\nCompared to standard autoencoders, VAEs map inputs to a distribution, instead of mapping them to a fixed vector.\nThis allows for additional regularization of the training to avoid overfitting and ensure good representations.\nIn $ \\beta $-VAEs the trade-off between reconstruction quality and disentanglement can be fine-tuned by the hyperparameter $ \\beta $.\nDifferent regularization schemes have been suggested to improve the VAE method.\nAmong them are Wasserstein autoencoders , attribute regularization and relational regularization .\nRecently, a connection between VAEs and nonlinear independent component analysis was established and then expanded .\nBesides VAEs, deep generative adversarial networks can be used to construct latent features .\nOther works suggest centroid encoders or conditional learning of Gaussian distributions as alternatives to VAEs.\nIn concept activation vectors are defined as being orthogonal to the decision boundary of a classifier.\nApart from deep learning, entirely new architectures, such as capsule networks 
, might be used to disassemble inputs. \n\\par\nWhile many different approaches for disentangling exist, the feasibility of the task is not clear yet and a better theoretical understanding is needed.\nThe disentangling performance is hard to quantify, which is only feasible with information about the latent ground truth .\nModels that overly rely on single directions, single neurons in fully connected networks or single feature maps in CNNs, have the tendency to overfit .\nAccording to , unsupervised learning does not produce good disentangling and even small latent spaces do not reduce the sample complexity for simple tasks.\nThis is in direct contrast to newer findings that show a decreased sample complexity for more complex visual downstream tasks .\nSo far, it is unclear if disentangling improves the performance of machine learning tasks.\nIn order to be interpretable, latent disentangled representations need to be aligned with human understandable concepts.\nIn training with adversarial examples was used and the learned representations were shown to be more aligned with human perception.\nFor explainable AI, disentangling alone might not be enough to generate interpretable output and additional regularization could be needed.", "id": "a3e2cd79-8634-4ce7-9519-acd205ba8d1d", "level": "subsection", "origin_cites_number": 35, "parent_id": "e47014ad-1aec-4b32-9241-6aab90a7f0dd", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Interpretability" ], [ "subsection", "Intermediate Representations" ] ], "subsections": [], "title": "Intermediate Representations" }, { "cite_extract_rate": 0.6875, "cites": [ 4875, 499, 8600, 1812, 1822, 1824, 1814, 7517, 1617, 6080, 8203 ], "content": "\\label{subsec:interpretability:pixel_attribution}\n\\sectionauthor{Stephanie Abrecht\\textsuperscript{2}, Felix Brockherde\\textsuperscript{10}, Toshika Srivastava\\textsuperscript{12}}\nThe non-linearity and 
complexity of DNNs allow them to solve perception problems, like detecting a pedestrian, that cannot be specified in detail. At the same time, the automatic extraction of features given in an input image and the mapping to the respective prediction is counterintuitive and incomprehensible for humans, which makes it hard to argue safety for a neural network-based perception task. Feature importance techniques are currently predominantly used to diagnose the causes of incorrect model behaviors . So-called \emph{attribution maps} are a visual technique to express the relationship between relevant pixels in the input image and the network's prediction. Regions in an image that contain relevant features are highlighted accordingly. \nAttribution approaches mostly map to one of three categories.\n\par\nGradient-based and activation-based approaches (such as amongst others) rely on the gradient of the prediction with respect to the input. Regions that were most relevant for the prediction are highlighted. Activation-based approaches relate the feature maps of the last convolutional layer to output classes. \n\par\nPerturbation-based approaches suggest manipulating the input. If the prediction changes significantly, the manipulated part of the input may at least point to a possible explanation.\n\par\nWhile gradient-based approaches are oftentimes faster in computation, perturbation-based approaches are much easier to interpret.\n\par\nAs many studies have shown , there is still a lot of research to be done before attribution methods are able to robustly provide explanations for model predictions, in particular for erroneous behavior. One key difficulty is the lack of an agreed-upon definition of a good attribution map, including its important properties. Even between humans, it is hard to agree on what a good explanation is due to its subjective nature. This lack of ground truth makes it hard or even impossible to quantitatively evaluate an explanation method. 
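One pragmatic, implicit evaluation is a perturbation ("deletion") test. The following is a minimal, self-contained sketch of that idea; the linear toy model, weights and variable names are purely hypothetical illustrations, not a method from the cited literature: features are occluded in the order given by an attribution ranking, and a faithful ranking should make the model score collapse quickly.

```python
def score(x, w):
    """Toy stand-in for a model: a linear score over input features."""
    return sum(wi * xi for wi, xi in zip(w, x))

def deletion_curve(x, w, ranking, baseline=0.0):
    """Occlude features in the order given by `ranking` (most important
    first) and record the model score after each deletion; a faithful
    attribution ranking makes the score drop quickly."""
    x = list(x)
    curve = [score(x, w)]
    for idx in ranking:
        x[idx] = baseline          # 'delete' the feature
        curve.append(score(x, w))
    return curve

w = [3.0, 0.1, 2.0, 0.05]                     # hypothetical model weights
x = [1.0, 1.0, 1.0, 1.0]
faithful = sorted(range(4), key=lambda i: -abs(w[i]))  # true importance order
unfaithful = list(reversed(faithful))
curve_good = deletion_curve(x, w, faithful)   # score drops early
curve_bad = deletion_curve(x, w, unfaithful)  # score drops late
print(curve_good, curve_bad)
```

Comparing the two curves mirrors the accuracy-drop comparison of attribution methods described above, only on a toy scale.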
Instead, this evaluation is done only implicitly. \nOne typical way to do this is the axiomatic approach. Here a set of desiderata of an attribution method is defined, on which different attribution methods are then evaluated. Alternatively, different attribution methods may be compared by perturbing the input features starting with the ones deemed most important and measuring the drop in accuracy of the perturbed models. The best method will result in the greatest overall loss in accuracy as inputs are successively omitted .\nMoreover, for gradient-based methods it is hard to assess if an unexpected attribution is caused by a poorly performing network or a poorly performing attribution method . How to cope with negative evidence, \ie the object was predicted because a contrary clue in the input image was missing, is an open research question. Additionally, most methods were shown on classification tasks until now. It remains to be seen how they can be transferred to object detection and semantic segmentation tasks. In the case of perturbation-based methods, the high computation time and single-image analysis inhibit widespread application.
\label{subsec:interpretability:interpretable_proxies}\n\sectionauthor{Gesina Schwalbe\textsuperscript{3}}\nNeural networks are capable of capturing complicated logical\n(cor)relations. 
However, this knowledge is encoded on a\n\\emph{sub-symbolic} level in the form of learned weights and biases,\nmeaning that the reasoning behind the processing chain cannot be\ndirectly read out or interpreted by humans .\nTo explain the sub-symbolic processing, one can either use attribution\nmethods (\\cf \\refsec{subsec:interpretability:pixel_attribution}), or lift\nthis sub-symbolic representation to a \\emph{symbolic} one\n, meaning a more interpretable one.\nInterpretable proxies or surrogate models try to achieve the latter:\nThe DNN behavior is approximated by a model that\nuses symbolic knowledge representations.\nSymbolic representations can be\nlinear models like LIME (proportionality),\ndecision trees (if-then-chains) ,\nor loose sets of logical rules.\nLogical connectors can simply be AND and OR but also more general\nones like at-least-M-of-N .\nThe \\emph{expressiveness} of an approach refers to the logic that is\nused: Boolean-only versus first-order logic, and binary versus fuzzy\nlogic truth values .\nOther than attribution methods (\\cf \\refsec{subsec:interpretability:pixel_attribution}), these\nrepresentations can capture combinations of features and (spatial)\nrelations of objects and attributes. 
As an example consider\n\enquote{eyes are closed} as explanation for \enquote{person asleep}:\nAttribution methods could only mark the location of the eyes, dismissing\nthe relations of the attributes .\nAll mentioned surrogate model types (linear, set of rules) require\ninterpretable input features in order to be interpretable themselves.\nThese features must either be directly obtained from the DNN input or\n(intermediate) output, or automatically be extracted from the DNN\nrepresentation.\nExamples for extraction are the super-pixeling used in LIME for input\nfeature detection, or concept activation vectors\n for DNN representation decoding.\n\par\nQuality criteria and goals for interpretable proxies are :\n\emph{accuracy} of the standalone surrogate model on unseen examples,\n\emph{fidelity} of the approximation by the proxy,\n\emph{consistency} with respect to different training sessions, and\n\emph{comprehensibility} measured by the complexity of the rule set\n(number of rules, number of hierarchical dependencies).\nThe criteria are usually in conflict and need to be balanced:\nBetter accuracy may require a more complex, thus less comprehensible, set of rules.\n\par\nApproaches for interpretable proxies differ in the validity range of\nthe representations:\nSome aim for surrogates that are only valid \emph{locally} around\nspecific samples, like in LIME~ or in\n via inductive logic programming.\nOther approaches try to more \emph{globally} approximate aspects of\nthe model behavior.\nAnother categorization is defined by whether full access\n(\emph{white-box}), some access (\emph{gray-box}), or no access\n(\emph{black-box}) to the DNN internals is needed.\nOne can further differentiate between \emph{post-hoc} approaches that\nare applied to a trained model, and approaches that try to integrate\nor \emph{enforce symbolic representations} during training.\nPost-hoc methods cover the wide field of rule extraction techniques\nfor DNNs. 
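The post-hoc idea can be illustrated with a deliberately tiny sketch; the black box, the sample grid and the restriction to a single if-then rule are all hypothetical simplifications, not an actual extraction technique from the literature. A one-rule surrogate is searched exhaustively, with fidelity to the black-box predictions as the selection criterion:

```python
def extract_stump_rule(f, samples):
    """Toy global-surrogate sketch: search for the single if-then rule
    'if x[feature] > threshold then 1 else 0' that best mimics the
    black-box classifier f on a sample set (maximal fidelity)."""
    labels = [f(x) for x in samples]
    best = (0.0, 0, 0.0)   # (fidelity, feature, threshold)
    n_features = len(samples[0])
    for feat in range(n_features):
        for thr in sorted({x[feat] for x in samples}):
            pred = [1 if x[feat] > thr else 0 for x in samples]
            fid = sum(p == y for p, y in zip(pred, labels)) / len(samples)
            if fid > best[0]:
                best = (fid, feat, thr)
    return best

# hypothetical black box: decides on feature 1 with a hidden threshold
black_box = lambda x: 1 if 2.0 * x[1] - 1.0 > 0 else 0
samples = [[a / 10, b / 10] for a in range(10) for b in range(10)]
fid, feat, thr = extract_stump_rule(black_box, samples)
print(fid, feat, thr)  # perfect fidelity using feature 1
```

Real rule extraction methods such as DeepRED operate on the network internals rather than on input-output queries, but the fidelity-driven selection shown here is the shared core idea.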
The reader may refer to\n.\nMost white- and gray-box methods try to turn the DNN connections into\nif-then rules that are then simplified, as done in\nDeepRED~.\nA black-box example is validity interval\nanalysis~, which refines or\ngeneralizes rules on input intervals, either starting from one sample\nor a general set of rules.\nEnforcement of symbolic representations can be achieved by enforcing\nan output structure that provides insights into the decision logic, such as\ntextual explanations, or a rich output structure allowing investigation of correlations\n.\nAn older discipline for enforcing symbolic representations\nis the field of neural-symbolic learning\n. The idea is based on a hybrid learning\ncycle \nin which a symbolic learner and a DNN iteratively update each other via\nrule insertion and extraction.\n\par\nThe comprehensibility of global surrogate models suffers from the\ncomplexity and size of current DNNs. Thus, stronger rule\nsimplification methods are required .\nThe alternative direction of local approximations mostly\nconcentrates on linear models instead of more expressive rules .\nFurthermore, balancing of the quality objectives is hard since\navailable indicators for interpretability may not be ideal.\nAnd lastly, applicability is heavily constrained by the requirement of\ninterpretable input features. These are usually not readily available\nfrom input (often pixel-level) or DNN output. 
Supervised extraction\napproaches vary in their fidelity, while unsupervised ones, such\nas the super-pixel clusters of LIME, are not guaranteed to yield meaningful or interpretable results.
\label{sec:uncertainty} \n\sectionauthor{Michael Mock\textsuperscript{1}}\nUncertainty refers to the view that a neural network is not conceived as a deterministic function but as a probabilistic function or estimator, delivering a random distribution for each input point. Ideally, the mean value of the distribution should be as close as possible to the ground truth value of the function being approximated by the neural network, and the uncertainty of the neural network refers to its variance when considered as a random variable, thus allowing one to derive a confidence with respect to the mean value. Regarding safety, the variance may lead to estimations about the confidence associated with a specific network output and opens the option of discarding network outputs with insufficient confidence. \n\par\nThere are roughly two broad approaches for training neural networks as probabilistic functions: Parametric approaches and Bayesian neural networks on the one hand, such as , where the transitions along the network edges are modeled as probability distributions, and ensemble-based approaches on the other hand , where multiple networks are trained and considered as samples of a common output distribution. 
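A minimal sketch of the ensemble view (the "trained" members below are hypothetical toy functions, without any actual training): member predictions are treated as samples of a common output distribution, their mean is the prediction, and their spread serves as the uncertainty estimate.

```python
def ensemble_uncertainty(models, x):
    """Ensemble-based sketch: treat the member predictions as samples
    of a common output distribution; their mean is the prediction and
    their variance serves as an uncertainty estimate."""
    preds = [m(x) for m in models]
    mean = sum(preds) / len(preds)
    var = sum((p - mean) ** 2 for p in preds) / len(preds)
    return mean, var

# hypothetical 'trained' members: same trend, slightly different fits
models = [lambda x, a=a: a * x for a in (0.9, 1.0, 1.1)]
m_in, v_in = ensemble_uncertainty(models, 1.0)     # members agree closely
m_out, v_out = ensemble_uncertainty(models, 10.0)  # disagreement grows
print(v_in, v_out)  # uncertainty grows with member disagreement
```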
Apart from training as a probabilistic function, uncertainty measures have been derived from single, standard neural networks by post-processing the trained network logits, leading for example to calibration measures (\cf \eg ).
\label{subsec:uncertainty:generative_models}\n\sectionauthor{Sebastian Wirkert\textsuperscript{6}, Tim Wirtz\textsuperscript{1}}\n\emph{Generative models} belong to the class of unsupervised machine learning models. From a theoretical perspective, these are particularly interesting, because they offer a way to analyze and model the density of data. Given a finite data set $\mathcal{D}$ independently distributed according to some distribution $p(x)$, generative models aim to estimate or enable sampling from the underlying density $p(x)$ in a model $q_\theta(x)$. 
The resulting model can be used for data indexing~, data retrieval~, for visual recognition~, speech recognition and generation~, language processing~ and robotics~.\nFollowing , we can group generative models into two main classes:\n\\begin{itemize}\n \\item Cost function-based models such as autoencoder~, deep belief networks~ and generative adversarial networks~.\n \\item Energy-based models~, where the joint probability density is modeled by an energy function.\n\\end{itemize}\nBeside these \\emph{deep learning} approaches, generative models have been studied in machine learning in general for quite some time (\\cf ). A very prominent example of generative networks are Gaussian processes~ and their deep learning extensions~ as generative models. \n\\par\nAn example of a generative model being employed for image segmentation uncertainty estimation is the probabilistic U-Net . Here a variational autoencoder (VAE) conditioned on the image is trained to model uncertainties. Samples from the VAE are fed into a segmentation U-Net which can thus give different results for the same image. This was tested in context of medical images, where inter-rater disagreements lead to uncertain segmentation results and Cityscapes segmentation. For the Cityscapes segmentation the investigated use case was label ambiguity (\\eg is a BMW X7 a car or a van) using artificially created, controlled ambiguities. 
Results showed that the probabilistic U-Net could reproduce the segmentation ambiguity modes more reliably than competing methods such as a dropout U-Net which is based on techniques elaborated in the next section.", "id": "e3a9977a-3391-4691-bc3b-9d131807d8dd", "level": "subsection", "origin_cites_number": 30, "parent_id": "9f13c349-4fb4-458e-8f34-84cd3a93fca0", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Uncertainty" ], [ "subsection", "Generative Models" ] ], "subsections": [], "title": "Generative Models" }, { "cite_extract_rate": 0.538461538461538, "cites": [ 4616, 3288, 4640, 8205, 4025, 8209, 8814 ], "content": "\\label{subsec:uncertainty:mc_dropout} \n\\sectionauthor{Joachim Sicking\\textsuperscript{1}}\nA widely used technique to estimate model uncertainty is Monte-Carlo (MC) dropout , that offers a Bayesian motivation, conceptual simplicity and scalability to application-size networks. This combination distinguishes MC dropout from competing Bayesian neural network (BNN) approximations (like ,, see \\refsec{subsec:uncertainty:bayesian_neural_networks}). However, these approaches and MC dropout share the same goal: to equip neural networks with a \\emph{self-assessment} mechanism that detects unknown input concepts and thus potential model insufficiencies.\nOn a technical level, MC dropout assumes prior distributions on network activations, usually independent and identically distributed (i.i.d.) Bernoulli distributions. Model training with iteratively drawn Bernoulli samples, the so-called \\emph{dropout masks}, then yields a data-conditioned posterior distribution within the chosen parametric family. It is interesting to note that this training scheme was used earlier---independent from an uncertainty context---for better model generalization . At inference, sampling provides estimates of the input-dependent output distributions. 
The spread of these distributions is then interpreted as the prediction uncertainty that originates from limited knowledge of model parameters. Borrowing ‘frequentist’ terms, MC dropout can be considered as an implicit network ensemble, \ie as a set of networks that share (most of) their parameters.\nIn practice, MC dropout requires only a minor modification of the optimization objective during training and multiple, trivially parallelizable forward passes during inference. The loss modification is largely agnostic to network architecture and does not cause substantial overhead. This is in contrast to the sampling-based inference that increases the computational effort massively---by estimated factors of 20-100 compared to networks without MC dropout. A common practice is therefore the use of \emph{last-layer dropout} that reduces computational overhead to estimated factors of 2-10. Alternatively, analytical moment propagation allows sampling-free MC-dropout inference at the price of additional approximations (\eg). Further extensions of MC dropout target the integration of data-inherent (aleatoric) uncertainty and tuned performance by learning layer-specific dropout rates using concrete relaxations .\nThe quality of MC-dropout uncertainties is typically evaluated using negative log-likelihood (NLL), expected calibration error (ECE) and its variants (\cf \refsec{subsec:uncertainty:confidence_calibration}) and by considering correlations between uncertainty estimates and model errors (\eg AUSE ). Moreover, it is common to study how useful uncertainty estimates are for solving auxiliary tasks like out-of-distribution classification or robustness \wrt adversarial attacks. \nMC dropout is a workhorse of safe ML, being used with various networks and for a multitude of applications (\eg). 
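The sampling-based inference can be sketched for a toy one-hidden-layer regression network; the weights, input and dropout rate below are hypothetical, and the only essential point is that dropout stays active at test time:

```python
import random, math

def mc_dropout_predict(x, w1, w2, p_drop=0.5, n_samples=100, seed=0):
    """MC-dropout sketch for a tiny 1-hidden-layer regression net:
    keep dropout active at inference, run several stochastic forward
    passes, and read off the predictive mean and spread."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        hidden = []
        for row in w1:
            pre = sum(wi * xi for wi, xi in zip(row, x))
            act = max(0.0, pre)                             # ReLU
            keep = 0.0 if rng.random() < p_drop else 1.0 / (1.0 - p_drop)
            hidden.append(act * keep)                       # inverted dropout
        outputs.append(sum(wi * hi for wi, hi in zip(w2, hidden)))
    mean = sum(outputs) / n_samples
    var = sum((o - mean) ** 2 for o in outputs) / n_samples
    return mean, math.sqrt(var)   # prediction and model uncertainty

# hypothetical trained weights
w1 = [[1.0, -1.0], [0.5, 0.5]]
w2 = [1.0, 2.0]
mean, std = mc_dropout_predict([2.0, 1.0], w1, w2)
print(mean, std)  # std > 0: spread across the sampled dropout masks
```

The per-sample loop corresponds to the multiple forward passes mentioned above; restricting the masks to the last layer would give the cheaper last-layer variant.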
However, several authors pointed out shortcomings and limitations of the method: MC dropout bears the risk of over-confident false predictions (), offers less diverse uncertainty estimates compared to (the equally simple and scalable) deep ensembles (, see \\refsec{subsec:aggregation:ensemble_methods}) and provides only rudimentary approximations of true posteriors. \nRelaxing these modelling assumptions and strengthening the Bayesian motivation of MC dropout is therefore an important research avenue. Further directions for future work are the development of \\emph{semantic uncertainty mechanisms} (\\eg ), improved local uncertainty calibrations and a better understanding of the outlined sampling-free schemes to uncertainty estimation.", "id": "d4df1917-2df5-4888-9dac-86e261e7d3d8", "level": "subsection", "origin_cites_number": 13, "parent_id": "9f13c349-4fb4-458e-8f34-84cd3a93fca0", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Uncertainty" ], [ "subsection", "Monte-Carlo Dropout" ] ], "subsections": [], "title": "Monte-Carlo Dropout" }, { "cite_extract_rate": 0.25, "cites": [ 8633, 4025 ], "content": "\\label{subsec:uncertainty:bayesian_neural_networks}\n\\sectionauthor{Maram Akila\\textsuperscript{1}}\nAs the name suggests, Bayesian neural networks (BNNs) are inspired by a Bayesian interpretation of probability (for an introduction \\cf ).\nIn essence, it rests on Bayes' theorem,\n\\begin{equation}\n\tp(x|y) p(y) = p(x,y) = p(y | x) p(x)\n\t\\quad \\Rightarrow \\quad\n\tp(x| y) = \\frac{ p(y | x) p(x)}{ p(y) }\n\t\\,,\n\t\\label{eq:bnn:bayes}\n\\end{equation}\nstating that the conditional probability density function (PDF) $p(x|y)$ for $x$ given $y$ may be expressed in terms of the inverted conditional PDF $p(y | x)$.\nFor machine learning, where one intends to make predictions $y$ for unknown $x$ given some training data $\\mathcal{D}$, this can be reformulated 
into\n\\begin{equation}\n\ty = \\operatorname{NN}(x|W)\n\t\\quad\\text{with}\\quad\n\t p(W | \\mathcal{D}) = \\frac{p(\\mathcal{D} | W) p(W)}{p(\\mathcal{D})}\n\t\\,.\n\t\\label{eq:bnn:NNpdf}\n\\end{equation}\nTherein NN denotes a conventional (deep) neural network (DNN) with model parameters $W$, \\eg the set of weights and biases.\nIn contrast to a regular DNN, the weights are given in terms of a probability distribution $ p(W | \\mathcal{D}) $ turning also the output $y$ of a BNN into a distribution.\nThis allows to study the mean $\\mu=\\langle y^1 \\rangle$ of the DNN for a given $x$ as well as higher moments of the distribution, typically the resulting variance $\\sigma^2 = \\langle (y-\\mu)^2 \\rangle$ is of interest, where\n\\begin{equation}\n\t\\left\\langle y^k \\right\\rangle = \\int\\!\n\t\\operatorname{NN}(x|W)^k p(W|\\mathcal{D})\\,\\mathrm{d}W\\,.\n\t\\label{eq:bnn:moments}\n\\end{equation}\nWhile $\\mu$ yields the output of the network, the variance $\\sigma^2$ is a measure for the uncertainty of the model for the prediction at the given point.\nCentral to this approach is the probability of the data given the model, here denoted by $p(\\mathcal{D} | W) $, as it is the key component connecting model and training data.\nTypically, the prior distribution $p(\\mathcal{D})$ is ``ignored'' as it only appears as a normalization constant within the averages, see \\eqref{eq:bnn:moments}.\nIn the cases where the data $\\mathcal{D}$ is itself a distribution due to inherent uncertainty, \\ie presence of aleatoric risk , such a concept seems natural.\nHowever, Bayesian approaches are also applicable for all other cases.\nIn those, loosely speaking, the likelihood of $W$ is determined via the chosen loss function (for the connection between the two concepts \\cf ).\n\\par\nOn this general level, Bayesian approaches are broadly accepted and also find use for many other model classes besides neural networks.\nHowever, the loss surfaces of DNNs are known for their 
high dimensionality and strong non-convexity.\nTypically, there are abundant parameter combinations $W$ that lead to (almost) equally good approximations to the training data $\mathcal{D}$ with respect to a chosen loss.\nThis makes an evaluation of $p(W|\mathcal{D})$ for DNNs close to impossible in full generality.\nAt least no (exact) solutions for this case exist at the moment.\n\par\nFinding suitable approximations to the posterior distribution $p(W|\mathcal{D})$ is an ongoing challenge for the construction of BNNs.\nAt this point we only summarize two major research directions in the field.\nOne approach is to assume that the distribution factorizes.\nWhile the full solution would be a joint distribution implying correlations between different weights (etc.), possibly even across layers, this approximation takes each element $w_i$ of $W$ to be independent from the others.\nAlthough this is a strong assumption, it is often made; in this case, parameters for the respective distributions of each element can be learned via training (\cf ).\nThe second class of approaches focuses on the region of the loss surface around the minimum chosen for the DNN.\nAs discussed, the loss relates to the likelihood, and quantities such as the curvature at the minimum are therefore directly connected to the distribution of $W$.\nUnfortunately, even using these quantities requires further approximations .\nAlternatively, the convergence of the training process may be altered to sample networks close to the minimum .\nWhile this approach contains information about correlations among the $w_i$, it is usually restricted to a specific minimum.\nFor a non-Bayesian ansatz taking into account several minima, see deep ensembles in \refsec{subsec:aggregation:ensemble_methods}.\nBNNs also touch other concepts such as MC dropout (\cf \refsec{subsec:uncertainty:mc_dropout} or ), or prior networks, which are based on a Bayesian interpretation but use conventional DNNs with an 
additional (learned) $\\sigma$ output .", "id": "329b292f-42f5-496b-9775-5b5a3bd0f517", "level": "subsection", "origin_cites_number": 8, "parent_id": "9f13c349-4fb4-458e-8f34-84cd3a93fca0", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Uncertainty" ], [ "subsection", "Bayesian Neural Networks" ] ], "subsections": [], "title": "Bayesian Neural Networks" }, { "cite_extract_rate": 0.5, "cites": [ 1624, 4620, 8211, 8210, 1733, 206 ], "content": "\\label{subsec:uncertainty:gradient_based_uncertainty}\n\\sectionauthor{Matthias Rottmann\\textsuperscript{5}}\nClassical uncertainty quantification methods in frequentist inference are mostly based on the outputs of statistical models. Their uncertainty is quantified and assessed for instance via dispersion measures in classification (such as entropy, probability margin or variation ratio), or confidence intervals in regression. However, the nature of DNN architectures and the cutting edge applications tackled by those (\\eg semantic segmentation, \\cf ) open the way towards more elaborate uncertainty quantification methods. Besides the mentioned classical approaches, intermediate feature representations within a DNN (\\cf ) or gradients according to self-affirmation that represent re-learning stress (see ) reveal additional information. In addition, in case of semantic segmentation, the geometry of a prediction may give access to further information, cf.\\ . By the computation of statistics of those quantities as well as low-dimensional representations thereof, we obtain more elaborate uncertainty quantification methods specifically designed for DNNs that can help us to detect misclassifications and out-of-distribution objects (\\cf ). \n\\par\nFeatures gripped during a fordward pass of a data point $x$ through a DNN $f$ can be considered layer-wise, \\ie $f^{(\\ell)}(x)$ after the $\\ell$-th layer. 
These can be translated into a handful of quantities per layer or further processed by another DNN that aims at detecting errors . While in particular presents a proof of concept on small scale classification problems, their applicability to large scale datasets and problems such as semantic segmentation and object detection remains open.\n\par\nThe development of gradient-based uncertainty quantification methods is guided by one central question:\nIf the present prediction was true, how much re-learning would this require?\nThe corresponding hypothesis is that wrong predictions would be more in conflict with the knowledge encoded in the deep neural network than correct ones, therefore causing increased re-learning stress.\nGiven a predicted class\n\begin{equation}\n\hat{y} = \mathop{\mathrm{arg\,max}}_{y} f_y(x) \n\end{equation}\nwe compute the gradient of layer $\ell$ corresponding to the predicted label. That is, given a loss function $\mathcal{L}$, we compute\n\begin{equation}\n\nabla_{w_\ell} \mathcal{L}( \hat{y}, x, w ) \n\end{equation}\nvia backpropagation. The obtained quantities can be treated similarly to the case of forward pass features. While this concept seems to be prohibitively expensive for semantic segmentation (at least when calculating gradients for each pixel of $\hat{y}$), its applicability to object detection might be feasible, in particular with respect to offline applications. Gradients are also of special interest in active learning with query by \emph{expected model change} (\cf ).\n\par\nIn the context of semantic segmentation, geometrical information on segment shapes as well as neighborhood relations of predicted segments can be taken into account alongside dispersion measures. It has been demonstrated that the detection of errors in an in-distribution setting strongly benefits from geometrical information. Recently, this has also been considered in scenarios under moderate domain shift . 
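The re-learning-stress idea admits a compact sketch for a toy linear softmax classifier, where the gradient of the cross-entropy loss with respect to the weights is available in closed form; the weights, inputs and variable names below are hypothetical illustrations. The predicted class is used as pseudo-label and the gradient norm serves as the uncertainty score:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def gradient_uncertainty(x, W):
    """Sketch of gradient-based self-affirmation: take the predicted
    class as pseudo-label, compute the cross-entropy gradient w.r.t.
    the weights of a linear softmax classifier, and use its norm as
    an uncertainty score ('re-learning stress')."""
    logits = [sum(wi * xi for wi, xi in zip(row, x)) for row in W]
    p = softmax(logits)
    y_hat = max(range(len(p)), key=lambda c: p[c])
    # closed form: dL/dW[c][i] = (p[c] - 1{c = y_hat}) * x[i]
    sq = 0.0
    for c in range(len(W)):
        err = p[c] - (1.0 if c == y_hat else 0.0)
        for xi in x:
            sq += (err * xi) ** 2
    return math.sqrt(sq)

W = [[2.0, 0.0], [0.0, 2.0]]
confident = gradient_uncertainty([3.0, 0.0], W)   # clear class-0 input
ambiguous = gradient_uncertainty([1.0, 1.0], W)   # on the decision boundary
print(confident, ambiguous)  # the ambiguous input yields the larger norm
```

A confident prediction is barely in conflict with the encoded knowledge (small gradient), whereas the boundary input would require substantial re-learning.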
However, its applicability to out-of-distribution examples and to sensors other than the camera remains subject to further research.", "id": "dcb846f5-368c-45a9-a56f-6ddb586a0a6b", "level": "subsection", "origin_cites_number": 12, "parent_id": "9f13c349-4fb4-458e-8f34-84cd3a93fca0", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Uncertainty" ], [ "subsection", "Uncertainty Metrics for DNNs in Frequentist Inference" ] ], "subsections": [], "title": "Uncertainty Metrics for DNNs in Frequentist Inference" }, { "cite_extract_rate": 0.23529411764705802, "cites": [ 8213, 8212, 1737, 1745 ], "content": "\\label{subsec:uncertainty:markov_random_fields}\n\\sectionauthor{Seyed Eghbal Ghobadi\\textsuperscript{8}, Ahmed Hammam\\textsuperscript{8}}\nAlthough deep neural networks are currently the state of the art for almost all computer vision tasks, Markov random fields (MRF) remain one of the fundamental techniques used for many computer vision tasks, specifically image segmentation ,. MRFs hold their power in their ability to model dependencies between pixels in an image. With the use of energy functions, MRFs integrate pixels into models relating unary and pairwise terms . \nGiven the model, MRFs are used to infer the optimal configuration yielding the lowest energy using mainly maximum a posteriori (MAP) techniques. Several MAP inference approaches are used to yield the optimal configuration such as graph cuts and belief propagation algorithms . However, as with neural networks, MAP inference techniques result in deterministic point estimates of the optimal configuration without any sense of uncertainty in the output. To obtain uncertainties on results from MRFs, most of the work is directed towards modelling MRFs with Gaussian distributions. 
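The unary-plus-pairwise energy formulation mentioned above can be made concrete for a tiny 1D binary MRF. The following sketch uses hypothetical costs and exhaustive MAP inference instead of graph cuts or belief propagation (feasible only at this toy scale), and additionally derives a crude Gibbs-posterior uncertainty for the MAP labeling:

```python
import numpy as np
from itertools import product

def mrf_energy(labels, unary, beta=1.0):
    """Energy of a 1D chain MRF labeling: unary costs plus a Potts pairwise
    term that penalizes differing labels of neighboring pixels."""
    e_unary = sum(unary[i, l] for i, l in enumerate(labels))
    e_pair = beta * sum(labels[i] != labels[i + 1] for i in range(len(labels) - 1))
    return float(e_unary + e_pair)

# unary[i, l]: cost of assigning label l to pixel i (e.g. negative log-likelihoods)
unary = np.array([[0.1, 2.0],
                  [0.2, 1.5],
                  [1.8, 0.3]])

# MAP inference by exhaustive search (fine for 3 binary pixels, 8 configurations)
configs = list(product([0, 1], repeat=3))
energies = {c: mrf_energy(c, unary) for c in configs}
map_config = min(energies, key=energies.get)

# a crude uncertainty measure: Gibbs posterior mass of the MAP configuration
z = sum(np.exp(-e) for e in energies.values())
map_prob = float(np.exp(-energies[map_config]) / z)
```

Here the pairwise penalty is paid once (the MAP labeling switches classes between the second and third pixel), and `map_prob` indicates how much posterior mass the point estimate actually carries.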
Getting uncertainties from MRFs with Gaussian distributions is possible by two typical methods: either an approximate model of the posterior is inferred, from which sampling is easy or from which the variances can be estimated analytically, or approximate sampling from the posterior is used.\nApproximate models include those inferred using variational Bayesian (VB) methods, like mean-field approximations, and using Gaussian process (GP) models enforcing a simplified prior model , . Examples of approximate sampling methods include traditional Markov chain Monte Carlo (MCMC) methods like Gibbs sampling . Some recent theoretical advances propose the perturb-and-MAP framework and a Gumbel perturbation model (GPM) , to exactly sample from MRF distributions. Another line of work has also been proposed, where MAP inference techniques are used to estimate the probability of the network output. With the use of graph cuts, Kohli and Torr estimate uncertainty using the min-marginals associated with the label assignments of a random field. Their work was later extended to show how this approach can be transferred to techniques other than graph cuts or how to compute uncertainties on multi-label marginal distributions .\n\\par\nA current research direction is the incorporation of MRFs with deep neural networks, along with providing uncertainties on the output . 
This can also be extended to other forms of neural networks such as recurrent neural networks to provide uncertainties on the segmentation of video streams by extending pixel dependencies to previous frames , .", "id": "81f04d24-1455-4244-a481-8a943c0061e4", "level": "subsection", "origin_cites_number": 17, "parent_id": "9f13c349-4fb4-458e-8f34-84cd3a93fca0", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Uncertainty" ], [ "subsection", "Markov Random Fields" ] ], "subsections": [], "title": "Markov Random Fields" }, { "cite_extract_rate": 0.41176470588235203, "cites": [ 8214, 8216, 759, 8215, 4141, 8217, 4690 ], "content": "\\label{subsec:uncertainty:confidence_calibration} \n\\sectionauthor{Fabian Küppers\\textsuperscript{9}, Anselm Haselhoff\\textsuperscript{9}}\nNeural network classifiers output a label $\\hat{Y} \\in \\mathcal{Y}$ on a given input $X \\in \\mathcal{X}$ with an associated confidence $\\hat{P}$. This confidence can be interpreted as the probability that the predicted label matches the ground truth label $Y \\in \\mathcal{Y}$. Therefore, these probabilities should reflect the ”self-confidence” of the system. If the empirical accuracy for any confidence level matches the predicted confidence, a model is called \\emph{well calibrated}.\nTherefore, a \\emph{classification} model is perfectly calibrated if \n\\begin{align}\n\\label{eq:classification_calibration}\n& \\underbrace{\\mathbb{P}(\\hat{Y} = Y | \\hat{P} = p)}_{\\text{accuracy given } p} = \\underbrace{p}_{\\text{confidence}} \\quad \\forall p \\in [0,1]\n\\end{align}\nis fulfilled . For example, assume 100 predictions with confidence values of 0.9. We call the model well calibrated if 90 out of these 100 predictions are actually correct. However, recent work has shown that modern neural networks tend to be overconfident in their predictions . 
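The calibration condition above, and the 100-predictions example, can be checked empirically by binning predictions by their confidence, which is also the idea behind binning-based miscalibration measures. A minimal NumPy sketch with hypothetical data:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Confidence-binned average of |empirical accuracy - mean confidence|,
    weighted by the fraction of samples falling into each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return float(ece)

# 100 predictions, all with confidence 0.9, as in the example above:
conf = np.full(100, 0.9)
well_calibrated = np.array([1] * 90 + [0] * 10)   # 90 of 100 actually correct
overconfident = np.array([1] * 60 + [0] * 40)     # only 60 of 100 correct
```

The first setting yields a (numerically near-)zero error, while the overconfident setting yields an error of 0.3, the gap between predicted confidence and empirical accuracy.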
The deviation of a model from perfect calibration can be measured by the expected calibration error (ECE) . \nIt is possible to recalibrate models as a post-processing step after classification. One way to get a calibration mapping is to group all predictions into several bins by their confidence. Using such a binning scheme, it is possible to compute the empirical accuracy for certain confidence levels, as has been known for a long time from reconstructing confidence outputs for Viterbi decoding . Common methods are histogram binning , isotonic regression or more advanced methods like Bayesian binning into quantiles (BBQ) and ensembles of near-isotonic regression (ENIR) . Another way to get a calibration mapping is to use scaling methods based on logistic regression like Platt scaling , temperature scaling and beta calibration .\n\\par\nIn the setting of probabilistic regression, a model is calibrated if, \\eg 95\\% of the true target values are below or equal to a credible level of 95\\% (so-called \\emph{quantile-calibrated regression}) . A regression model is usually calibrated by fine-tuning its predicted CDF in a post-processing step to match the empirical frequency. Common approaches utilize isotonic regression , logistic and beta calibration , as well as Gaussian process models to build a calibration mapping. In contrast to quantile-calibrated regression, have recently introduced the concept of distribution calibration, where calibration is applied on a distribution level and naturally leads to calibrated quantiles.\n\\par\nRecent work has shown that miscalibration in the scope of \\emph{object detection} also depends on the position and scale of a detected object . The additional box regression output is denoted by $\\hat{R}$ with $J$ as the size of the used box encoding.\nFurthermore, if we have no knowledge about all anchors of a model (which is a common case in many applications), it is not possible to determine the accuracy. 
Therefore, K\\\"{u}ppers \\etal~ use the precision as a surrogate for accuracy and propose that an \\emph{object detection model} is perfectly calibrated if\n\\begin{align}\n\\label{eq:detection_calibration}\n& \\underbrace{\\mathbb{P}(M=1 | \\hat{P} = p, \\hat{Y} = y, \\hat{R} = r)}_{\\text{precision given } p, y, r} = \\underbrace{p}_{\\text{confidence}} \\quad \\forall p \\in [0,1], y \\in \\mathcal{Y}, r \\in \\mathbb{R}^J \n\\end{align}\nis fulfilled, where $M=1$ denotes a correct prediction that matches a ground-truth object with a chosen IoU threshold and $M=0$ denotes a mismatch, respectively. The authors propose the detection-expected calibration error (D-ECE) as the extension of the ECE to object detection tasks in order to measure miscalibration also by means of the position and scale of detected objects. Other approaches try to fine-tune the regression output in order to obtain more reliable object proposals or to add a regularization term to the training objective such that training yields models that are both well-performing and well-calibrated .", "id": "09524fc1-6625-4adb-b57e-b6186b5d4ef0", "level": "subsection", "origin_cites_number": 17, "parent_id": "9f13c349-4fb4-458e-8f34-84cd3a93fca0", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Uncertainty" ], [ "subsection", "Confidence Calibration" ] ], "subsections": [], "title": "Confidence Calibration" }, { "cite_extract_rate": 0.5, "cites": [ 6435 ], "content": "\\label{sec:aggregation}\n\\sectionauthor{Maram Akila\\textsuperscript{1}}\nFrom a high-level perspective, a \\emph{neural network} is based on processing inputs and coming to some output conclusion, \\eg mapping incoming image data onto class labels.\nAggregation or collection of non-independent information on either the input or output side of this network function can be used as a tool to leverage its performance and reliability.\nStarting with the input, 
any additional ``dimension'' along which data can be added can be of use.\nFor example, in the context of autonomous vehicles this might be input from any further sensor measuring the same scene as the original one, \\eg stereo cameras or LiDAR.\nCombining those sensor sets for prediction is commonly referred to as \\emph{sensor fusion} .\nStaying with the example, the scene is monitored consecutively, providing a whole (temporally ordered) stream of input information.\nThis may be used either by adjusting the network for this kind of input \n \nor in terms of a post-processing step, in which the predictions are aggregated by some measure of temporal consistency. \n\\par\nAnother more implicit form of aggregation is training the neural network on several ``independent'' tasks, \\eg segmentation and depth regression. Although the individual tasks are executed on the same input, the overall performance can still benefit from the correlation among all given tasks. We refer to the discussion on \\emph{multi-task networks} in \\refsec{subsec:architecture:multi_task_networks}.\nBy extension, solving the same task in multiple different ways can be beneficial for performance and provide a measure of redundancy.\nIn this survey, we focus on single-task systems and discuss \\emph{ensemble methods} in the next section and the use of \\emph{temporal consistency} in the one thereafter.", "id": "71d4e278-f67c-443c-84a9-ff0254362119", "level": "section", "origin_cites_number": 2, "parent_id": "569f1197-c628-4e9d-9db6-9757cb02d82a", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Aggregation" ] ], "subsections": [ "d1f9f7cf-fc5d-426f-8fba-a38a5d51acf5", "61379b41-7fce-4042-aff6-0ec085cd8afc" ], "title": "Aggregation" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 4616, 4678, 582, 3288, 680, 681, 8966, 8218 ], "content": "\\label{subsec:aggregation:ensemble_methods}\n\\sectionauthor{Joachim 
Sicking\\textsuperscript{1}}\nTraining a neural network is optimizing its parameters to fit a given training data set. The commonly used gradient-based optimization schemes cause convergence in a ‘nearby’ local minimum. As the loss landscapes of neural networks are notoriously non-convex , various locally optimal model parameter sets exist. These local optima differ in the degree of optimality (“deepness”), qualitative characteristics (“optimal for different parts of the training data”) and their generalizability to unseen data (commonly referred to by the geometrical terms of “sharpness” and “flatness” of minima ). \nA single trained network corresponds to one local minimum of such a loss landscape and thus captures only a small part of a potentially diverse set of solutions. \\emph{Network ensembles} are collections of models and therefore better suited to reflect this multi-modality. Various modelling choices shape a loss landscape: the selected model class and its meta-parameters (like architecture and layer width), the training data and the optimization objective. Accordingly, approaches to diversify ensemble components range from combinations of different model classes over varying training data (bagging) to methods that train and weight ensemble components to make up for the flaws of other ensemble members (boosting) .\nGiven the millions of parameters of application-size networks, ensembles of NNs are resource-demanding \\wrt computational load, storage and runtime during training and inference. This complexity increases linearly with ensemble size for naïve ensembling. Several approaches were put forward to reduce some dimensions of this complexity: \\emph{snapshot ensembles} require only one model optimization with a cyclical learning-rate schedule—leading to an optimized training runtime. The resulting training trajectory passes through several local minima. The corresponding models compose the ensemble. 
In contrast, \\emph{model distillation} tackles runtime at inference: it ‘squeezes’ an NN ensemble into a single model that is optimized to capture the gist of the model set. However, such a compression goes along with reduced performance compared to the original ensemble.\nSeveral hybrids of single model and model ensemble exist: Multi-head networks share a backbone network that provides inputs to multiple prediction networks. Another variant is mixture-of-experts models that utilize a gating network to assign inputs to specialized expert networks . Multi-task networks (\\cf \\refsec{subsec:architecture:multi_task_networks}) and Bayesian approximations of NNs (\\cf \\refsec{subsec:uncertainty:bayesian_neural_networks} and \\refsec{subsec:uncertainty:mc_dropout}) can be seen as implicit ensembles. \nNN ensembles (or deep ensembles) are not only used to boost model quality. They constitute the frequentist approach to estimating NN uncertainties and are state-of-the-art in this regard . The emergent field of federated learning is concerned with the integration of decentrally trained ensemble components, and safety-relevant applications of ensembling range from autonomous driving to medical diagnostics . \nTaking this safe-ML perspective, promising research directions comprise a more principled and efficient composition of model ensembles, \\eg by application-driven diversification, as well as improved techniques to miniaturize ensembles, \\eg by gaining a better understanding of methods like distillation. 
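As a toy illustration of this frequentist, ensemble-based uncertainty estimate, the following sketch averages the softmax outputs of several ensemble members (here plain random linear models, purely hypothetical stand-ins for trained networks) and scores their disagreement via the mutual information:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def ensemble_predict(members, x):
    """Deep-ensemble style aggregation: average the members' softmax outputs
    and score disagreement via the mutual information, i.e. the entropy of
    the mean prediction minus the mean entropy of the members."""
    probs = np.stack([softmax(W @ x) for W in members])
    mean = probs.mean(axis=0)
    return mean, float(entropy(mean) - entropy(probs).mean())

rng = np.random.default_rng(0)
members = [rng.normal(size=(3, 4)) for _ in range(5)]   # five "trained" members
x = rng.normal(size=4)
mean_probs, disagreement = ensemble_predict(members, x)
```

Identical members yield zero disagreement; diverse members, corresponding to different local minima of the loss landscape, yield a strictly positive score that can flag inputs on which the ensemble is uncertain.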
In the long run, better designed, more powerful learning systems might partially reduce the need for combining weaker models in a network ensemble.", "id": "d1f9f7cf-fc5d-426f-8fba-a38a5d51acf5", "level": "subsection", "origin_cites_number": 12, "parent_id": "71d4e278-f67c-443c-84a9-ff0254362119", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Aggregation" ], [ "subsection", "Ensemble Methods" ] ], "subsections": [], "title": "Ensemble Methods" }, { "cite_extract_rate": 0.4, "cites": [ 8219, 1745, 8220, 784 ], "content": "\\label{subsec:aggregation:temporal_consistency}\n\\sectionauthor{Timo Sämann\\textsuperscript{4}}\nThe focus of previous DNN development for semantic segmentation has been on single image prediction. This means that the final and intermediate results of the DNN are discarded after each image. However, the application of a computer vision model often involves the processing of images in a sequence, \\ie there is a temporal consistency in the image content between consecutive frames (for a metric, \\cf, \\eg ). This consistency has been exploited in previous work to increase quality and reduce computing effort. Furthermore, this approach offers the potential to improve the robustness of DNN prediction by incorporating this consistency as a-priori knowledge into DNN development. The relevant work in the field of video prediction can be divided in two major approaches: \n\\begin{enumerate}\n\\item DNNs are specially designed for video prediction. This usually requires training from scratch and the availability of training data in a sequence.\n\\item A transformation from single prediction DNNs to video prediction DNNs takes place. Usually no training is required, \\ie the existing weights of the model can be used unaltered.\n\\end{enumerate}\nThe first set of approaches often involves \\emph{conditional random fields} (CRF) and its variants. 
\nCRFs are known for their use as a postprocessing step in the prediction of semantic segmentation, in which their parameters are learned separately or jointly with the DNN~. \nAnother way to use spatiotemporal features is to include 3D convolutions, which add an additional dimension to the conventional 2D convolutional layer. Tran \\etal use 3D convolution layers for video recognition tasks such as action and object recognition. \nOne further approach to use spatial and temporal characteristics of the input data is to integrate \\emph{long short-term memory} (LSTM)~, a variant of the \\emph{recurrent neural network} (RNN). Fayyaz \\etal integrate LSTM layers between the encoder and decoder of their convolutional neural network for semantic segmentation. The significantly higher GPU memory requirements and computational effort are a disadvantage of this method. More recently, Nilsson and Sminchisescu deployed \\emph{gated recurrent units}, which generally require significantly less memory.\nAn approach to improve the temporal consistency of automatic speech recognition outputs is known as a posterior-in-posterior-out (PIPO) LSTM ``sequence enhancer'', a postfilter which could be applicable to video processing as well .\nA disadvantage of the described methods is that sequential data for training must be available, which may be limited or show a lack of diversity. \n\\par\nThe second class of approaches has the advantage that it is model-independent most of the time. Shelhamer \\etal found that the deep feature maps within the network change only slightly with temporal changes in video content.\nAccordingly, calculate the optical flow of the input images from time steps $t_0$ and $t_{-1}$ and convert it into the so-called \\emph{transform flow} which is used to transform the feature maps of the time step $t_{-1}$ so that a representation aligned with the feature map of time step $t_0$ is achieved. 
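A far simpler, model-independent stand-in for such temporal aggregation is to exponentially smooth the per-frame class probabilities in a post-processing step. The following hypothetical sketch (not the flow-based alignment described above) suppresses a single-frame flicker:

```python
import numpy as np

def temporally_smoothed(frame_probs, alpha=0.6):
    """Exponentially aggregate per-frame class probabilities so that a
    single-frame flicker does not flip the prediction."""
    state, out = None, []
    for p in frame_probs:
        state = p if state is None else alpha * state + (1.0 - alpha) * p
        out.append(int(np.argmax(state)))
    return out

# class 0 is stable except for a one-frame flicker towards class 1:
seq = [np.array([0.9, 0.1]),
       np.array([0.8, 0.2]),
       np.array([0.4, 0.6]),   # flicker
       np.array([0.9, 0.1])]
raw = [int(np.argmax(p)) for p in seq]   # per-frame decisions flip briefly
smooth = temporally_smoothed(seq)        # smoothed decisions stay at class 0
```

Exploiting the temporal consistency of consecutive frames in this way trades a small prediction latency for robustness against isolated single-frame errors.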
S{\\\"a}mann \\etal use a confidence-based combination of feature maps from previous time steps based on the calculated optical flow.", "id": "61379b41-7fce-4042-aff6-0ec085cd8afc", "level": "subsection", "origin_cites_number": 10, "parent_id": "71d4e278-f67c-443c-84a9-ff0254362119", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Aggregation" ], [ "subsection", "Temporal Consistency" ] ], "subsections": [], "title": "Temporal Consistency" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:verification}\n\\sectionauthor{Gesina Schwalbe\\textsuperscript{3}}\nVerification and validation is an integral part of the safety\nassurance for any safety critical systems. As of the functional safety\nstandard for automotive systems ,\n\\emph{verification} means to determine whether given requirements are\nmet \\cite[3.180]{iso/tc_22/sc_32_iso_2018}, such as performance\ngoals.\n\\emph{Validation} on the other side tries to assess whether the given\nrequirements are sufficient and adequate to guarantee safety\n\\cite[3.148]{iso/tc_22/sc_32_iso_2018}, \\eg whether certain types of\nfailures or interactions simply were overlooked. The latter is usually\nachieved via extensive testing in real operation conditions of the\nintegrated product. This differs from the notion of validation used in\nthe machine learning community in which it usually refers to simple\nperformance tests on a selected dataset.\nIn this section, we want to concentrate on general verification\naspects for deep neural networks.\nVerification as in the safety domain encompasses (manual) inspection\nand analysis activities, and testing. However, the contribution\nof single processing steps within a neural network to the final\nbehavior can hardly be assessed manually (compare to the problem of\ninterpretability in \\refsec{sec:interpretability}). 
Therefore, we here will concentrate on different\napproaches to verification testing.\nSection~\\ref{subsec:verification:formal_testing} covers approaches\nto systematic test data selection. While the suggested methods assume\nfull access to the model internals for coverage measurement, this is\nnot in all cases available. Therefore,\n\\refsec{subsec:verification:black_box_methods} highlights\nassessment approaches that consider the DNN as a black-box component.\nAlso, the topic of verification activity during operation with the\nhelp of observers is discussed.", "id": "94174462-51f2-497a-b88f-1ceaadd52aaf", "level": "section", "origin_cites_number": 1, "parent_id": "569f1197-c628-4e9d-9db6-9757cb02d82a", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Verification" ] ], "subsections": [ "5ad8ed4b-a3fa-43df-8241-7dd338b6350c", "5a9669c2-3369-414d-a5ae-bc8b21dd4bcd" ], "title": "Verification" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 8089, 8971, 1636, 6024 ], "content": "\\label{subsec:verification:formal_testing}\n\\sectionauthor{Christian Heinzemann\\textsuperscript{2}, Gesina Schwalbe\\textsuperscript{3}, Matthias Woehrle\\textsuperscript{2}}\nFormal testing refers to testing methods that include formalized and\nformally verifiable steps, \\eg for test data acquisition, or for\nverification in the local vicinity of test samples.\nFor image data, local testing around valid samples is usually more\npractical than fully formal verification: (Safety) properties are not\nexpected to hold on the complete input space but only on the much\nsmaller unknown lower-dimensional manifold of real images .\nSources of such samples can be real ones or generated ones using an\ninput space formalization or a trained generative model.\n\\par\nCoverage criteria for the data samples are commonly used for two purposes:\n(a) deciding when to stop testing or\n(b) identifying missing 
tests.\nFor CNNs, there are at least three different approaches towards\ncoverage:\n(1) approaches that establish coverage based on a model with semantic\nfeatures of the input space~,\n(2) approaches trying to semantically cover the latent feature space\nof the neural network or a proxy network (\\eg an\nautoencoder)~, and\n(3) approaches trying to cover neurons and their interactions,\ninspired by classical software white-box\nanalysis~.\n\\par\nTypical types of properties to verify are \nsimple test performance,\nlocal stability (robustness),\na specific structure of the latent spaces like\nembedding of semantic concepts ,\nand more complex logical constraints on inputs and outputs,\nwhich can be used for testing when fuzzified .\nMost of these properties require in-depth semantic information about\nthe DNN inner workings, which is often only available via interpreting\nintermediate representations ,\nor interpretable proxies / surrogates (\\cf \\refsec{subsec:interpretability:interpretable_proxies}), which do not guarantee fidelity.\n\\par\nThere exist different testing and formal verification methods from\nclassical software engineering that have already been applied to CNNs.\n\\emph{Differential testing} as used by\nDeepXPlore~ trains $n$ different CNNs for\nthe same task using independent data sets and compares the individual\nprediction results on a test set. This allows identifying\ninconsistencies between the CNNs but no common weak spots.\n\\emph{Data augmentation} techniques start from a given data set and\ngenerate additional transformed data. \\emph{Generic data augmentation}\nfor images, like rotations and translations, is state-of-the-art for\ntraining but may also be used for testing.\n\\emph{Concolic testing} approaches incrementally grow test suites with\nrespect to a coverage model to finally achieve completeness. 
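The neuron-coverage idea behind approach (3) above can be sketched on a toy ReLU network; the weights, test inputs and activation threshold below are all hypothetical:

```python
import numpy as np

def neuron_coverage(weights, inputs, threshold=0.0):
    """Fraction of neurons activated above `threshold` by at least one test
    input: the basic neuron-coverage criterion on a toy ReLU network."""
    activated = None
    for x in inputs:
        h, covered = x, []
        for W in weights:
            h = np.maximum(W @ h, 0.0)        # ReLU layer
            covered.append(h > threshold)
        flat = np.concatenate(covered)
        activated = flat if activated is None else activated | flat
    return float(activated.mean())

weights = [np.array([[1.0, -1.0], [-1.0, 1.0]]),   # two hidden neurons
           np.array([[1.0, 1.0]])]                 # one output neuron
tests_a = [np.array([1.0, 0.0])]                   # leaves one neuron dormant
tests_b = tests_a + [np.array([0.0, 1.0])]         # adds the opposite pattern
```

Here `tests_a` covers only two of the three neurons, and adding the opposite activation pattern raises coverage to 1.0, illustrating how coverage gaps point to missing tests.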
Sun \\etal use an adversarial input model based on\nsome norm (\\cf \\refsec{subsec:adverarial_attacks:adversarial_attacks_defenses}),\n\\eg an $L_p$-norm, for generating additional images around a given image\nusing concolic testing. \\emph{Fuzzing} generates new test data constrained by\nan input model and tries to identify interesting test cases, \\eg by\noptimizing the white-box coverage mentioned\nabove~. Fuzzing techniques may also be\ncombined with the differential testing approach discussed\nabove~.\nIn all these cases it needs to be ensured that the image as well as\nits meta-data remain valid for testing after transformation.\nFinally, \\emph{proving methods} surveyed by Liu \\etal try to formally prove properties on a\ntrained neural network, \\eg based on satisfiability modulo theories (SMT). \nThese approaches require a formal characterization of an input space\nand the property to be checked, which is hard for non-trivial\nproperties like the contents of an image. \n\\par\nExisting formal testing approaches can be quite costly to integrate\ninto testing workflows (\\cf ):\nDifferential testing and data augmentation require several inferences\nper initial test sample;\nconcolic and fuzz testing apply an optimization to each given test\nsample, while convergence towards the coverage goals is not guaranteed;\nalso, the iterative approaches need tight integration into the testing\nworkflow;\nand lastly, proving methods usually have to balance computational\nefficiency against the precision or completeness of the result\n.\nAnother challenge of formal testing is that machine learning\napplications usually solve problems for which no (formal) specification\nis possible. This makes it hard to find useful requirements for\ntesting and properties that can be formally\nverified. 
\nEven partial requirements such as specification of useful input\nperturbations, specified corner cases, and valuable coverage goals are\ntypically difficult to identify .", "id": "5ad8ed4b-a3fa-43df-8241-7dd338b6350c", "level": "subsection", "origin_cites_number": 14, "parent_id": "94174462-51f2-497a-b88f-1ceaadd52aaf", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Verification" ], [ "subsection", "Formal Testing" ] ], "subsections": [], "title": "Formal Testing" }, { "cite_extract_rate": 0.533333333333333, "cites": [ 7507, 7610, 681, 1818, 8221, 603, 1822, 2676 ], "content": "\\label{subsec:verification:black_box_methods}\n\\sectionauthor{Jonas Löhdefink\\textsuperscript{15}, Julia Rosenzweig\\textsuperscript{1}}\nIn machine learning literature, neural networks are often referred to as black boxes due to the fact that their internal operations and their decision making are not completely understood , hinting at a lack of interpretability and transparency. However, in this survey we consider a black box to be a machine learning model to which we only have oracle (query) access . That means we can query the model to get input-output pairs, but we do not have access to the specific architecture (or weights, in case of neural networks). As describes, black boxes are increasingly wide-spread, \\eg healthcare, autonomous driving or ML as a service in general, due to proprietary, privacy or security reasons. \n\\par\nAs deploying black boxes gains popularity, so do methods that aim to extract internal information such as architecture and parameters or to find out, whether a sample belongs to the training dataset. These include \n\\emph{model extraction attacks} , \\emph{membership inference attacks} , general attempts to reverse-engineer the model or to attack it adversarially . 
Protection and counter-measures are also actively researched: proposes a warning system that estimates how much information an attacker could have gained from queries. The authors of use watermarks for models to prevent illegal re-distribution and to identify intellectual property.\n\\par\nMany papers in these fields make use of so-called \\emph{surrogate}, \\emph{avatar} or \\emph{proxy models} that are trained on input-output pairs of the black box.\nIn case the black-box output is available in soft form (\\eg logits), distillation as first proposed by can be applied to train the surrogate (student) model. Then, any white-box analysis can be performed on the surrogates (\\cf \\eg ) to craft adversarial attacks targeted at the black box. More generally, (local) surrogates as for example in can be used to (locally) explain its decision-making. \nMoreover, these techniques are also of interest if one wants to compare or test black-box models (\\cf \\refsec{subsec:verification:formal_testing}, formal verification). \nThis is the case, among others, in ML marketplaces, where you wish to buy a pre-trained model , or if you want to verify or audit that a third-party black-box model obeys regulatory rules (\\cf ). 
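The surrogate-model idea can be illustrated with oracle access to a deliberately simple black box (a hypothetical linear model with hidden parameters): query it, fit a surrogate to the resulting input-output pairs, and any white-box analysis becomes possible on the surrogate:

```python
import numpy as np

rng = np.random.default_rng(1)
W_secret = rng.normal(size=(2, 3))   # hidden parameters of the "black box"

def black_box(x):
    """Oracle access only: outputs (here in soft/logit form) can be queried,
    while W_secret is assumed unknown to the querying party."""
    return W_secret @ x

# model extraction: fit a surrogate to query/response pairs via least squares
queries = rng.normal(size=(50, 3))
responses = np.array([black_box(q) for q in queries])
B, *_ = np.linalg.lstsq(queries, responses, rcond=None)
W_surrogate = B.T

# the surrogate now mimics the black box on unseen inputs
x_new = rng.normal(size=3)
```

For this linear toy model the extraction is exact; for real networks the surrogate is only an approximation, which is precisely why soft (logit) outputs and many queries make distillation-style extraction so effective.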
\nAnother topic of active research is so-called \\emph{observers}.\nThe concept of observers is to evaluate the interface of a black-box module to determine if it behaves as expected within a given set of parameters.\nThe approaches can be divided into \\emph{model-explaining} and \\emph{anomaly-detecting} observers.\nFirst, model explanation methods answer the question of which input characteristic is responsible for changes at the output.\nThe observer is able to alter the inputs for this purpose.\nIf the input of the model under test evolves only slightly but the output changes drastically, this can be a signal that the neural network is misled, which is also strongly related to adversarial examples (\\cf Chapter \\ref{sec:adversarial_attacks}).\nHence, the reason for changes in the classification result via the input can be very important.\nIn order to figure out in which region of an input image the main reason for the classification is located, ``delete'' information from the image by replacing regions with generated patches until the output changes.\nThis replaced region is likely responsible for the decision of the neural network.\nBuilding upon this, adapt the approach to medical images and generate ``deleted'' regions by a variational autoencoder (VAE).\nSecond, anomaly-detecting observers register input and output anomalies, either examining input and output independently or as an input-output pair, and predict the black-box performance in the current situation.\nIn contrast to model-explaining approaches, this set of approaches has high potential to be used in an online scenario since it does not need to modify the model input.\nThe maximum mean discrepancy (MMD)~ measures the domain gap between two data distributions independently of the application and can be used to raise a warning if input or output distributions during inference deviate too strongly from their respective training distributions.\nBy use of a GAN-based autoencoder perform a 
domain shift estimation using neural networks in conjunction with the Wasserstein distance as domain mismatch metric.\nThis metric can also be evaluated by use of a causal time-variant aggregation of distributions during inference time.", "id": "5a9669c2-3369-414d-a5ae-bc8b21dd4bcd", "level": "subsection", "origin_cites_number": 15, "parent_id": "94174462-51f2-497a-b88f-1ceaadd52aaf", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Verification" ], [ "subsection", "Black Box Methods" ] ], "subsections": [], "title": "Black Box Methods" }, { "cite_extract_rate": 0.4, "cites": [ 97, 7588, 1455, 3769 ], "content": "\\label{sec:architecture}\n\\sectionauthor{Michael Weber\\textsuperscript{14}}\nIn order to solve a specific task, the architecture of a CNN and its building blocks play a significant role.\nSince the early days of using CNNs in image processing, when they were applied to handwriting recognition~, and the later breakthrough in general image classification~, the architecture of the networks has changed radically.\nWhile the term \\emph{deep learning} for these first convolutional neural networks implied a depth of approximately four layers, their depth has increased significantly over the last years, and new techniques had to be developed to successfully train and utilize these networks~.\nIn this context, new activation functions~ as well as new loss functions~ have been designed and new optimization algorithms~ were investigated.\nWith regard to the layer architecture, the initially alternating repetition of convolution and pooling layers as well as their characteristics have changed significantly.\nThe convolution layers made the transition from a few layers with often large filters to many layers with small filters.\nA further trend was then the definition of entire modules, which were used repeatedly within the overall architecture as so-called \\emph{network in network}~.\nIn 
areas such as autonomous driving, there is also a strong interest in the simultaneous execution of different tasks within one single convolutional neural network architecture.\nThis kind of architecture is called \\emph{multi-task learning}~(MTL)~ and can be utilized in order to save computational resources and, at the same time, to increase the performance of each task .\nWithin such multi-task networks, usually one shared feature extraction part is followed by one separate so-called head per task~.\nIn each of these architectures, manual design plays a major role, with expert knowledge being the crucial factor.\nIn recent years, however, there have also been great efforts to automate the process of finding architectures for networks or, in the best case, to learn them.\nThis is known under the name \\emph{neural architecture search}~(NAS).", "id": "f0f7d7d0-2bd8-48ae-bdb1-126b96887d72", "level": "section", "origin_cites_number": 10, "parent_id": "569f1197-c628-4e9d-9db6-9757cb02d82a", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Architecture" ] ], "subsections": [ "0c7b2cb4-747c-4997-8a70-14bcffc01831", "9fd0032b-a9e0-4b5c-ae60-155c9130c8d4", "19fdc458-c9c5-4f0b-a62d-6dbcbe8fef85" ], "title": "Architecture" }, { "cite_extract_rate": 0.764705882352941, "cites": [ 502, 97, 810, 7588, 8224, 8223, 514, 305, 1210, 1455, 8222, 3769, 40 ], "content": "\\label{subsec:architecture:building_blocks}\n\\sectionauthor{Michael Weber\\textsuperscript{14}}\nDesigning a convolutional neural network typically includes a number of design choices.\nThe general architecture usually contains a number of convolutional and pooling layers which are arranged in a certain pattern. \nConvolutional layers are commonly followed by a non-linear activation function. 
\nThe learning process is based on a loss function which determines the current error and an optimization function that propagates the error back to the individual convolution layers and their learnable parameters.\nWhen CNNs became state of the art in computer vision~, they were usually built from a few alternating convolutional and pooling layers, with a few fully connected layers at the end.\nIt turned out that better results are achieved with deeper networks and so the number of layers increased~ over the years.\nTo deal with these deeper networks, new architectures had to be developed.\nIn a first step, to reduce the number of parameters, the convolutional layers with sometimes large filter kernels were replaced by several layers with small $3 \\times 3$ kernels.\nToday, most architectures are based on the \\emph{network in network} principle~, where more complex modules are used repeatedly. \nExamples of such modules are the \\emph{inception module} from GoogleNet~ or the \\emph{residual block} from ResNet~. 
\nWhile the inception module consists of multiple parallel branches of layers, the residual blocks are based on the \\emph{highway network}~, which means that the original information can bypass the inner layers, such that these layers only learn residuals.\nWith ResNeXt~ and Inception-ResNet~ there already exist two networks that combine both approaches.\nFor most tasks, it turned out that replacing the fully connected layers by convolutional layers is much more convenient, making the networks fully convolutional~.\nThese so-called \\emph{fully convolutional networks}~(FCN) are no longer bound to fixed input dimensions.\nNote that with the availability of convolutional long short-term memory (ConvLSTM) structures, fully convolutional recurrent neural networks (FCRNs) also became available for fully scalable sequence-based tasks .\nInside the CNNs, the \\emph{rectified linear unit}~(ReLU) has been the most frequently used activation function for a long time.\nHowever, since this function suffers from problems related to mapping all negative values to zero, such as the vanishing gradient problem, new functions have been introduced in recent years.\nExamples are the \\emph{exponential linear unit}~(ELU), \\emph{swish}~ and the \\emph{non-parametric linearly scaled hyperbolic tangent}~(LiSHT)~.\nIn order to be able to train a network consisting of these different building blocks, \nthe loss function is the most crucial part: it determines what and how the network ultimately learns, and how exactly the training data is applied during training to make the network train faster or perform better. For instance, the different classes in a classification network can be weighted with fixed values or by so-called $\\alpha$-balancing according to their probability of occurrence. Another interesting approach is weighting training examples according to their easiness for the current network~,~. 
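The class-weighting idea above can be made concrete with a small sketch. This is a minimal, pure-Python illustration of one common instantiation of $\alpha$-balancing (class weights set inversely to class frequency); the function name, class frequencies, and predictions are hypothetical and not taken from the survey:

```python
import math

def alpha_balanced_ce(probs, labels, class_freq):
    """Alpha-balanced cross-entropy: classes are weighted inversely to
    their (here hypothetical) probability of occurrence, so that rare
    classes contribute more to the loss."""
    inv = [1.0 / f for f in class_freq]
    total = sum(inv)
    alpha = [w / total for w in inv]            # normalized class weights
    loss = 0.0
    for p, y in zip(probs, labels):
        loss += alpha[y] * -math.log(p[y])      # weighted per-example CE
    return loss / len(labels)

# Two examples, two classes; class 1 is assumed to be rare (10% of the data)
probs = [[0.9, 0.1], [0.2, 0.8]]
labels = [0, 1]
loss = alpha_balanced_ce(probs, labels, class_freq=[0.9, 0.1])
```

Compared to uniform class weights, the example belonging to the rare class dominates the loss, which is exactly the intended effect of the balancing.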
For multi-task learning, tasks can also be weighted based on their uncertainty~ or gradients~, as further explained in Sec.~\\ref{subsec:architecture:multi_task_networks}.\nA closer look at how a modification of the loss function might affect safety-related aspects is given in Sec.~\\ref{subsec:robust_training:modification_of_loss}.", "id": "0c7b2cb4-747c-4997-8a70-14bcffc01831", "level": "subsection", "origin_cites_number": 17, "parent_id": "f0f7d7d0-2bd8-48ae-bdb1-126b96887d72", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Architecture" ], [ "subsection", "Building Blocks" ] ], "subsections": [], "title": "Building Blocks" }, { "cite_extract_rate": 0.25, "cites": [ 324, 8226, 8225, 8227 ], "content": "\\label{subsec:architecture:multi_task_networks}\n\\sectionauthor{Marvin Klingner\\textsuperscript{15}, Varun Ravi-Kumar\\textsuperscript{4}, Timo Sämann\\textsuperscript{4}, Gesina Schwalbe\\textsuperscript{3}}\n\\emph{Multi-task learning} (MTL) in the context of neural networks describes the process of optimizing several tasks simultaneously by learning a unified feature representation and coupling the task-specific loss contributions, thereby enforcing cross-task consistency .\\par\nUnified feature representation is usually implemented by sharing the parameters of the initial layers inside the encoder (also called feature extractor). This not only improves the individual tasks through more generalized learned features but also reduces the demand for computational resources at inference. Instead of adding an entirely new network for each task, only a task-specific decoder head has to be added. This is essential considering the growing number of visual perception tasks in autonomous driving, e.g., depth estimation, semantic segmentation, motion segmentation, and object detection. 
While the parameter sharing can be soft, as in \\emph{cross stitch} and \\emph{sluice networks} , or hard , meaning ultimately sharing the parameters, the latter is usually preferred due to its straightforward implementation and lower computational complexity during training and inference.\\par\nCompared to implicitly coupling tasks via a shared feature representation, there are often more direct ways to jointly optimize the tasks via cross-task losses. This is only possible because, during MTL, there are network predictions for several tasks, which can be enforced to be consistent. As an example, sharp depth edges should only be at class boundaries of semantic segmentation predictions. Often both approaches to MTL are applied simultaneously~ to improve a neural network's performance as well as to reduce its computational complexity at inference.\\par\nWhile the theoretical expectations for MTL are quite clear, it is often challenging to find a good weighting strategy for all the different loss contributions, as there is no theoretical basis on which one could choose such a weighting; early approaches either involved heuristics or extensive hyperparameter tuning. The easiest way to balance the tasks is to use uniform weights across all tasks. However, the losses from different tasks usually have different scales, and uniformly averaging them suppresses the gradient from tasks with smaller losses. Addressing these problems, Kendall et al. propose to weigh the loss functions by the \\emph{homoscedastic uncertainty} of each task. One does not need to tune the weighting parameters of the loss functions by hand, but they are adapted automatically during the training process. Concurrently, Chen et al.~ propose \\emph{GradNorm}, which does not explicitly weigh the loss functions of different tasks but automatically adapts the gradient magnitudes coming from the task-specific network parts on the backward pass. Liu et al. 
proposed dynamic weight average (DWA), which weights the task losses using an average of each task's loss over time.", "id": "9fd0032b-a9e0-4b5c-ae60-155c9130c8d4", "level": "subsection", "origin_cites_number": 16, "parent_id": "f0f7d7d0-2bd8-48ae-bdb1-126b96887d72", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Architecture" ], [ "subsection", "Multi-Task Networks" ] ], "subsections": [], "title": "Multi-Task Networks" }, { "cite_extract_rate": 0.9090909090909091, "cites": [ 97, 875, 8229, 871, 305, 870, 872, 543, 544, 8228 ], "content": "\\label{subsec:architecture:neural_architecture_search}\n\\sectionauthor{Patrick Feifel\\textsuperscript{8}, Seyed Eghbal Ghobadi\\textsuperscript{8}}\nIn the previous sections, we saw manually engineered modifications of existing CNN architectures, as proposed by ResNet or Inception . They are the results of human design and have shown their ability to improve performance. ResNet introduces a \\emph{skip connection} in building blocks and Inception makes use of its specific \\emph{inception module}. Here, the intervention of an expert is crucial. The approach of \\emph{neural architecture search} (NAS) aims to automate this time-consuming and manual design of neural network architectures. \n\\par\nNAS is closely related to hyperparameter optimization (HO), which is described in \\refsec{subsec:robust_training:hyperparameter_optimization}. Originally, both tasks were solved simultaneously. Consequently, the kernel size or number of filters were seen as additional hyperparameters. Nowadays, the distinction between HO and NAS should be stressed. The concatenation of complex building blocks or modules cannot be accurately described with single parameters. This simplification is no longer suitable. \n\\par\nTo describe the NAS process, the authors of define three steps: (1) definition of search space, (2) search strategy, and (3) performance estimation strategy. 
\n\\par\nThe majority of search strategies take advantage of the \\emph{NASNet search space} which arranges various operations, \\eg convolution and pooling, within a single cell. However, other spaces based on a chain or multi-branch structure are possible . The search strategy comprises advanced methods from \nsequential model-based optimization (SMBO) ,\nBayesian optimization , \nevolutionary algorithms , \nreinforcement learning and \ngradient descent . \nFinally, the performance estimation describes approximation techniques due to the impracticability of multiple evaluation runs. For a comprehensive survey regarding the NAS process we refer to . \n\\par\nRecent research has shown that reinforcement learning approaches such as NASNet-A and ENAS are partly outperformed by evolutionary algorithms, \\eg AmoebaNet and gradient-based approaches, \\eg DARTS . \n\\par\nEach of these approaches focuses on different optimization aspects. Gradient-based methods are applied to a continuous search space and offer faster optimization. In contrast, the evolutionary approach LEMONADE enables multi-objective optimization by considering the conjunction of resource consumption and performance as the two main objectives. Furthermore, single-path NAS extends the multi-path approach of former gradient-based methods and proposes the integration of 'over-parameterized superkernels', which significantly reduces memory consumption.\n\\par\nThe focus of NAS is on the optimized combination of manually predefined CNN elements with respect to objectives such as resource consumption and performance. 
NAS offers automation; however, the realization of the objectives is strongly limited by the potential of the CNN elements.", "id": "19fdc458-c9c5-4f0b-a62d-6dbcbe8fef85", "level": "subsection", "origin_cites_number": 11, "parent_id": "f0f7d7d0-2bd8-48ae-bdb1-126b96887d72", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Architecture" ], [ "subsection", "Neural Architecture Search" ] ], "subsections": [], "title": "Neural Architecture Search" }, { "cite_extract_rate": 0.8, "cites": [ 509, 8429, 1621, 1759, 8167, 206, 520, 4169 ], "content": "\\label{sec:compression}\n\\sectionauthor{Serin Varghese\\textsuperscript{7}}\nRecent developments in CNNs have resulted in neural networks being the state-of-the-art in computer vision tasks like image classification~, object detection~ and semantic segmentation~. This is largely due to the increasing availability of hardware computational power and an increasing amount of training data. We also observe a general upward trend in the complexity of neural networks along with their improvements in state-of-the-art performance. These CNNs are largely trained on back-end servers with significantly higher computing capabilities. The use of these CNNs in real-time applications is inhibited by restrictions on hardware, model size, inference time, and energy consumption. \nThis led to the emergence of a new field in machine learning, commonly termed model compression. Model compression refers to reducing the memory requirements, inference times and model size of DNNs to eventually enable the use of neural networks on edge devices. 
This is tackled by different approaches such as \\emph{network pruning} (identifying weights or filters that are not critical for network performance), \\emph{weight quantizations} (reducing the precision of the weights used in the network), \\emph{knowledge distillation} (a smaller network is trained with the knowledge gained by a bigger network), and \\emph{low-rank factorization} (decomposing a tensor into multiple smaller tensors).\nIn this section, we introduce some of these methods for model compression and discuss in brief the current open challenges and possible research directions with respect to its use in automated driving applications.", "id": "e23cf554-7193-4f8f-a54e-3da02ce3d7c2", "level": "section", "origin_cites_number": 10, "parent_id": "569f1197-c628-4e9d-9db6-9757cb02d82a", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Model Compression" ] ], "subsections": [ "5ed86e29-ba4e-4bd5-8fc0-724c41e2999e", "28ea17f4-7eb4-44e0-85c5-640bd64d35c5" ], "title": "Model Compression" }, { "cite_extract_rate": 0.7333333333333331, "cites": [ 8234, 8236, 945, 8232, 688, 8233, 8230, 6318, 8231, 6319, 8235 ], "content": "\\label{subsec:compression:pruning}\n\\sectionauthor{Falk Kappel\\textsuperscript{13}, Serin Varghese\\textsuperscript{7}}\nPruning has been used as a systematic tool to reduce the complexity of deep neural networks. The redundancy in DNNs may exist on various levels, such as the individual weights, filters, and even layers. All the different methods for pruning try to take advantage of these available redundancies on various levels. Two of the initial approaches for neural networks proposed weight pruning in the 1990s as a way of systematically damaging neural networks~. As these weight pruning approaches do not aim at changing the structure of the neural network, these approaches are called \\emph{unstructured pruning}. 
Although there is a reduction in the size of the network when it is saved in sparse format, the acceleration depends on the availability of hardware that facilitates sparse multiplications. As pruning filters and complete layers aims at exploiting the available redundancy in the architecture or structure of neural networks, these pruning approaches are called \\emph{structured pruning}. \nPruning approaches can also be broadly classified into data-dependent and data-independent methods. Data-dependent methods~ make use of the training data to identify filters to prune. Theis \\etal and Molchanov \\etal propose a greedy pruning strategy that identifies the importance of feature maps one at a time from the network and measures the effect of removal of the filters on the training loss. This means that filters corresponding to those feature maps that have the least effect on the training loss are removed from the network. Within data-independent methods~, the selection of CNN filters to be pruned is based on the statistics of the filter values. Li \\etal proposed a straightforward method to calculate the rank of filters in a CNN. The selection of filters is based on the $\\ell_1$-norm, where the filter with the lowest norm is pruned away. He \\etal employ a LASSO regression-based selection of filters to minimize the least squares reconstruction.\\par\nAlthough the above-mentioned approaches demonstrated that a neural network can be compressed without affecting the accuracy, the effect on robustness is largely unstudied. Dhillon \\etal proposed pruning a subset of activations and scaling up the survivors to show improved adversarial robustness of a network. Lin \\etal quantize the precision of the weights after controlling the Lipschitz constant of layers. This restricts the error propagation property of adversarial perturbations within the neural network. 
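The $\ell_1$-norm criterion described above lends itself to a compact sketch. The following is a toy, pure-Python illustration (filter values and the prune ratio are made up); it only ranks filters and returns pruning candidates, leaving the actual removal and fine-tuning aside:

```python
def l1_filter_ranking(filters, prune_ratio=0.25):
    """Data-independent ranking in the spirit of the l1-norm criterion:
    each conv filter is scored by the sum of its absolute weights, and
    the filters with the smallest norms are returned as pruning
    candidates. `filters` is a list of flat weight lists, one per filter."""
    norms = [sum(abs(w) for w in f) for f in filters]
    order = sorted(range(len(filters)), key=lambda i: norms[i])
    num_prune = int(prune_ratio * len(filters))
    return order[:num_prune]

# Four toy 3x3 filters (flattened); filter 2 is all zeros and should
# therefore be the first pruning candidate
filters = [[2.0] * 9, [1.0] * 9, [0.0] * 9, [1.0] * 9]
to_prune = l1_filter_ranking(filters, prune_ratio=0.25)
```

In a real pipeline the selected filters (and the corresponding channels of the following layer) would be removed, typically followed by fine-tuning to recover accuracy.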
Ye \\etal evaluated the relationship between adversarial robustness and model compression in detail and showed that naive compression has a negative effect on robustness. Gui \\etal co-optimize robustness and compression constraints during the training phase and demonstrate an improvement in robustness along with a reduction in model size. However, these approaches have mostly been tested on image classification tasks and on smaller datasets only. Their effectiveness on safety-relevant automated driving tasks such as object detection and semantic segmentation has not been studied and remains an open research challenge.", "id": "5ed86e29-ba4e-4bd5-8fc0-724c41e2999e", "level": "subsection", "origin_cites_number": 15, "parent_id": "e23cf554-7193-4f8f-a54e-3da02ce3d7c2", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Model Compression" ], [ "subsection", "Pruning" ] ], "subsections": [], "title": "Pruning" }, { "cite_extract_rate": 0.529411764705882, "cites": [ 8241, 8375, 8239, 8240, 8238, 4496, 8237, 6488, 9139 ], "content": "\\label{subsec:compression:quantization}\n\\sectionauthor{Firas Mualla\\textsuperscript{13}}\nQuantization of a random variable $x$ having a probability density function $f(x)$ is the process of dividing the range of $x$ into intervals, each represented by a single value (also called reconstruction value), such that the following reconstruction error is minimized:\n\\begin{equation}\n\\label{eq:quantization}\n\\sum_{i=1}^{L} \\int_{b_i}^{b_{i+1}} (q_i - x)^2 f(x) dx,\n\\end{equation}\nwhere $b_i$ is the left-side border of the $i$-th interval, $q_i$ is its reconstruction value, and $L$ is the number of intervals, \\eg $L = 8$ for a 3-bit quantization. This definition can be extended to multiple dimensions as well. 
\nQuantization of neural networks has been around since the 1990s , albeit with a focus in the early days on improving the hardware implementations of these networks. In the deep learning literature, a remarkable application of quantization combined with unstructured pruning can be found in the approach of \\textit{deep compression} , where 1-dimensional k-means is utilized to cluster the weights per layer and thus find the $L$ cluster centers ($q_i$ values in (\\ref{eq:quantization})) iteratively. This procedure conforms to an implicit assumption that $f(x)$ has the same spread inside all clusters. Deep compression can reduce the network size needed for image classification by a factor of 35 for AlexNet and a factor of 49 for VGG-16 without any loss in accuracy. However, as pointed out in , these networks from the early deep learning days are over-parameterized and a less impressive compression factor is thus expected when the same technique is applied to lightweight architectures such as MobileNet and SqueezeNet. For instance, considering SqueezeNet (50 times smaller than AlexNet), the compression factor of deep compression without accuracy loss drops to about 10.\nCompared to scalar quantization used in deep compression, there were attempts to exploit the structural information by applying variants of vector quantization of the weights . Remarkably, in the latter (\\ie ), the reconstruction error of the activations (instead of the weights) is minimized in order to find an optimal codebook for the weights, as the ultimate goal of quantization is to approximate the network's output, not the network itself. This is performed in a layer-by-layer fashion (so as to prevent error accumulation) using activations generated from unlabeled data. \nOther techniques apply variants of so-called ``linear'' \\textit{quantization}, \\ie the quantization staircase has a fixed interval size. 
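The layer-wise codebook construction underlying deep compression can be illustrated with a toy 1-dimensional k-means sketch. Initialization strategy, cluster count, and weight values below are assumptions for the sketch, not the original procedure:

```python
def kmeans_codebook(weights, n_clusters=2, iters=20):
    """Toy 1-D k-means in the spirit of deep compression: the weights of
    one layer are clustered, and each weight is replaced by its cluster
    mean, i.e. the shared reconstruction value of the codebook."""
    lo, hi = min(weights), max(weights)
    # deterministic init (an assumption here): centers spread evenly
    # over the weight range
    centers = [lo + (hi - lo) * k / (n_clusters - 1) for k in range(n_clusters)]
    for _ in range(iters):
        clusters = [[] for _ in range(n_clusters)]
        for w in weights:
            k = min(range(n_clusters), key=lambda j: abs(w - centers[j]))
            clusters[k].append(w)
        # update each reconstruction value to its cluster mean
        centers = [sum(c) / len(c) if c else centers[k]
                   for k, c in enumerate(clusters)]
    quantized = [min(centers, key=lambda c: abs(w - c)) for w in weights]
    return quantized, centers

# Weights with two obvious clusters; a 1-bit codebook (2 centers) suffices
quantized, codebook = kmeans_codebook([0.0, 0.01, 1.0, 1.01], n_clusters=2)
```

After clustering, only the small codebook plus per-weight cluster indices need to be stored, which is the source of the compression factor.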
This paradigm conforms to an implicit assumption that $f(x)$ in (\\ref{eq:quantization}) is uniform and is thus also called \\textit{uniform quantization}. The uniform quantization is widely applied both in specialized software packages such as the \\texttt{Texas Instruments Deep Learning Library} (automotive boards) and in general-purpose libraries such as the \\texttt{Tensorflow Lite}. The linearity assumption enables practical implementations, as the quantization and dequantization can be implemented using a scaling factor and an intercept, whereas no codebook needs to be stored. In many situations, the intercept can be omitted by employing a symmetric quantization mapping. Moreover, for power of 2 ranges, the scaling ends up being a bitwise shift operator, where quantization and dequantization differ only in the shift direction. It is also straightforward to apply this scheme \\textit{dynamically}, \\ie for each tensor separately using a tensor-specific multiplicative factor. This can be easily applied not only to filters (weight tensors) but also to activation tensors (see for instance ).\nUnless the scale factor in the linear quantization is assumed constant by construction, it is computed based on the statistics of the relevant tensor and can be thus sensitive to outliers. This is known to result in a low precision quantization. In order to mitigate this issue, the original range can be \\textit{clipped} and thus reduced to the most relevant part of the signal. Several approaches are proposed in the literature for finding an optimal clipping threshold: simple percentile analysis of the original range (\\eg clipping 2\\% of the largest magnitude values), minimizing the mean square error between the quantized and original range in the spirit of (\\ref{eq:quantization}) , or minimizing the Kullback-Leibler divergence between the original and the quantized distributions . 
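The symmetric per-tensor ``linear'' scheme discussed above (one scale factor derived from the tensor statistics, no intercept, no codebook) can be sketched as follows; function names and the example tensor are illustrative only:

```python
def linear_quantize(values, num_bits=8):
    """Symmetric 'linear' (uniform) quantization: a single multiplicative
    scale factor derived from the tensor's statistics maps floats to
    signed integers; no codebook needs to be stored."""
    qmax = 2 ** (num_bits - 1) - 1                 # 127 for signed 8 bit
    scale = max(abs(v) for v in values) / qmax     # tensor statistic -> scale
    q = [max(-qmax, min(qmax, round(v / scale))) for v in values]
    return q, scale

def linear_dequantize(q, scale):
    # dequantization only reverses the scaling; with a symmetric
    # mapping, no intercept is needed
    return [qi * scale for qi in q]

x = [-1.0, 0.0, 0.5, 1.0]
q, scale = linear_quantize(x)
x_hat = linear_dequantize(q, scale)
```

Because the scale is computed from the maximum absolute value, a single outlier would stretch the range and coarsen the resolution for all inliers, which is precisely the motivation for the clipping strategies mentioned above.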
While the clipping methods trade off large quantization errors of outliers against small errors of inliers , other methods tackle the outliers problem using a different trade-off, see for instance the outlier channel splitting approach in .\nAn essential point to consider when deciding for a quantization approach for a given problem is the allowed or intended interaction with the training procedure. The so-called \\textit{post-training quantization}, \\ie quantization of a pre-trained network, seems to be attractive from a practical point of view: No access to training data is required and the quantization and training toolsets can be independent from each other. On the other hand, the training-aware quantization methods often yield higher inference accuracy and shorter training times. The latter is a serious issue for large complicated models which may need weeks to train on modern GPU clusters.\nThe training-aware quantization can be implemented by inserting fake quantization operators in the computational graph of the forward-pass during training (simulated quantization), whereas the backward pass is done as usual in floating-point resolution . Other approaches go a step further by quantizing the gradients as well. This leads to much lower training time, as the time of the often computationally expensive backward pass is reduced. The gradient's quantization, however, is not directly applicable as it requires the derivative of the quantization function (staircase-like), which is zero almost everywhere. Luckily, this issue can be handled by employing a \\textit{straight-through estimator} (approximating the quantization function by an identity mapping). 
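The straight-through estimator can be sketched as a pair of toy forward/backward helpers (hypothetical names; a real implementation would live inside an autograd framework as a custom gradient):

```python
def fake_quantize_forward(w, scale):
    # forward pass: the weight is pushed through the quantization
    # staircase, simulating reduced precision inside the graph
    return round(w / scale) * scale

def fake_quantize_backward(grad_out):
    # straight-through estimator: the staircase's true derivative is
    # zero almost everywhere, so the backward pass treats the
    # quantizer as the identity and passes the gradient through
    return grad_out

w_q = fake_quantize_forward(0.12, scale=0.1)   # snapped to the 0.1 grid
g = fake_quantize_backward(0.3)                # gradient passes unchanged
```

This identity approximation is what makes gradient-based training through the otherwise non-differentiable quantization step possible.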
There are also other techniques proposed recently to mitigate this problem .", "id": "28ea17f4-7eb4-44e0-85c5-640bd64d35c5", "level": "subsection", "origin_cites_number": 17, "parent_id": "e23cf554-7193-4f8f-a54e-3da02ce3d7c2", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Model Compression" ], [ "subsection", "Quantization" ] ], "subsections": [], "title": "Quantization" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:discussion}\nWe have presented an extensive overview of approaches to effectively handle safety concerns accompanying deep learning: lack of generalization, robustness, explainability, plausibility, and efficiency. \nIt has been described which lines of research we deem prevalent, important, and promising for each of the individual topics and categories into which the presented methods fall. \n\\par\nThe reviewed methods alone will not provide safe ML systems as such -- and neither will their future extensions. This is due to the limitations of quantifying complex real-world contexts. A complete and plausible \\emph{safety argumentation} will, thus, require more than advances in methodology and theoretical understanding of neural network properties and training processes. \nApart from methodological progress, it will be necessary to gain practical experience in using the presented methods to gather evidence for overall secure behavior, using this evidence to construct a tight safety argument, and testing its validity in various situations.\n\\par\nIn particular, each autonomously acting robotic system with state-of-the-art deep-learning-based perception and non-negligible actuation may serve as an object of study and is, in fact, in need of this kind of systematic reasoning before being transferred to widespread use or even market entry. 
We strongly believe that novel scientific insights, the potential market volume, and public interest will drive the arrival of reliable and trustworthy AI technology.\n\\section*{Acknowledgment}\nThe research leading to these results is funded by the German Federal Ministry for Economic Affairs and Energy within the project “KI Absicherung – Safe AI for Automated Driving”. The authors would like to thank the consortium for the successful cooperation. Furthermore, this research has been funded by the Federal Ministry of Education and Research of Germany as part of the competence center for machine learning ML2R (01IS18038B).\n\\bibliography{main} \n\\bibliographystyle{alpha}\n\\end{document}", "id": "3497f966-f76d-439f-b7b6-ad5a513d9e31", "level": "section", "origin_cites_number": 0, "parent_id": "569f1197-c628-4e9d-9db6-9757cb02d82a", "prefix_titles": [ [ "title", "Inspect, Understand, Overcome:\\\\A Survey of Practical Methods\\\\for AI Safety" ], [ "section", "Discussion" ] ], "subsections": [], "title": "Discussion" }
99
[ 8089, 6010, 8157, 1624, 8160, 3251, 3237, 3219, 8159, 8158, 3248, 8161, 8162, 1044, 5527, 4620, 1043, 1049, 330, 8163, 3267, 5113, 328, 8495, 8164, 858, 8165, 5980, 5552, 90, 2083, 5548, 7022, 5553, 62, 8133, 8166, 7698, 1636, 3500, 314, 509, 1606, 8429, 1621, 1759, 8167, 8168, 8169, 206, 520, 4169, 868, 8171, 8173, 8172, 8170, 8179, 8177, 958, 8176, 8174, 8175, 8180, 8178, 2800, 5073, 7109, 3624, 2690, 8589, 2761, 2686, 8181, 8182, 3655, 8588, 2705, 1695, 1256, 8183, 2793, 900, 888, 6146, 8396, 892, 917, 906, 923, 912, 8184, 908, 8185, 890, 914, 7313, 975, 953, 954, 920, 8186, 6140, 919, 899, 902, 3234, 6148, 8187, 8189, 904, 901, 8188, 8190, 7308, 6014, 7514, 8192, 8191, 8193, 1002, 8198, 6410, 8201, 8197, 8200, 8194, 8199, 5680, 3314, 4423, 8202, 2709, 8196, 306, 318, 8195, 4875, 499, 8600, 1812, 1822, 1824, 1814, 7517, 1617, 6080, 8203, 8502, 7507, 1821, 8204, 4616, 4025, 3288, 1003, 8430, 8207, 8206, 8205, 8208, 4640, 8209, 8814, 8633, 8211, 8210, 1733, 8213, 8212, 1737, 1745, 8214, 8216, 759, 8215, 4141, 8217, 4690, 6435, 4678, 582, 680, 681, 8966, 8218, 8219, 8220, 784, 8971, 6024, 7610, 1818, 8221, 603, 2676, 97, 7588, 1455, 3769, 502, 810, 8224, 8223, 514, 305, 1210, 8222, 40, 324, 8226, 8225, 8227, 875, 8229, 871, 870, 872, 543, 544, 8228, 8234, 8236, 945, 8232, 688, 8233, 8230, 6318, 8231, 6319, 8235, 8241, 8375, 8239, 8240, 8238, 4496, 8237, 6488, 9139 ]
0.950918
[ "Jan Deriu", "Alvaro Rodrigo", "Arantxa Otegi", "Guillermo Echegoyen", "Sophie Rosset", "Eneko Agirre", "Mark Cieliebak" ]
Survey on Evaluation Methods for Dialogue Systems
2019
2019-05-10T11:14:12Z
cs.CL
In this paper, we survey the methods and concepts developed for the evaluation of dialogue systems. Evaluation, in and of itself, is a crucial part of the development process. Often, dialogue systems are evaluated by means of human evaluations and questionnaires. However, this tends to be very cost- and time-intensive. Thus, much work has been put into finding methods which allow a reduction in the involvement of human labour. In this survey, we present the main concepts and methods. For this, we differentiate between the various classes of dialogue systems (task-oriented, conversational, and question-answering dialogue systems). We cover each class by introducing the main technologies developed for the dialogue systems and then present the evaluation methods regarding that class. \keywords{dialogue systems \and evaluation metrics \and discourse model \and conversational AI \and chatbots}
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "58c1de30-7774-42e5-a989-9c0459f4e585", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ] ], "subsections": [ "030fa328-8718-455f-8c1d-2c718f620e70", "e5e866a1-6351-4674-8666-e4056a478c2d", "7d128ace-9c90-49f7-bd5a-bbdc526dc9fd", "8c99fa37-1def-4d5a-ad34-735a28bef963", "615da0ed-3016-429b-b6ad-a2bada64cae8", "b7d9cf74-5108-48cc-abe2-a5f162020e3c", "8a5853da-4970-44e3-bafd-bb0ee409d69c" ], "title": "root" }, { "cite_extract_rate": 1, "cites": [ 7451 ], "content": "As the amount of digital data continuously grows, users demand technologies that offer quick access to such data. In fact, users rely on systems that support information search interactions such as Siri\\footnote{https://www.apple.com/es/siri/}, Google Assistant\\footnote{https://assistant.google.com/}, Amazon Alexa\\footnote{https://www.amazon.com} or Microsoft XiaoIce~, etc. These technologies, called Dialogue Systems (DS), allow the user to converse with a computer system using natural language. Dialogue Systems are applied to a variety of tasks, e.g.:\n\\begin{itemize}\n\\item Virtual Assistants aid users in everyday tasks, such as scheduling appointments. They usually operate on predefined actions which can be triggered by voice command. \n\\item Information-seeking systems provide users with information about a question (e.g. the most suitable hotel in town). These questions also include factual questions as well as more complex questions. \n\\item E-learning dialogue systems train students for various situations. For instance, they train the interaction with medical patients or train military personnel in questioning a witness.\n\\end{itemize}\nOne crucial step in the development of DS is evaluation. That is, to measure how well the DS is performing. 
However, evaluating a dialogue system can prove to be problematic because there are two important factors to be considered. Firstly, the definition of what constitutes a high-quality dialogue is not always clear and often depends on the application. Even if a definition is assumed, it is not always clear how to measure it. For instance, if we assume that a high-quality dialogue system is defined by its ability to respond with an appropriate utterance, it is not clear how to measure appropriateness or what appropriateness means for a particular system. Moreover, one might ask the users if the responses were appropriate, but as we will discuss below, user feedback might not always be reliable for a variety of reasons.\nThe second factor is that the evaluation of dialogue systems is very cost- and time-intensive. This is especially true when the evaluation is carried out by means of a user study, which requires careful preparation as well as inviting and compensating users for their participation. \nOver the past decades, many different evaluation methods have been proposed. The evaluation methods are closely tied to the characteristics of the dialogue system which they are aimed at evaluating. Thus, quality is defined in the context of the function which the dialogue system is meant to fulfil. For instance, a system designed to answer questions will be evaluated on the basis of correctness, which is not necessarily a suitable metric for evaluating a conversational agent.\nMost methods are aimed at automating the evaluation, or at least automating certain aspects of the evaluation. The goal of an evaluation method is to obtain automated and repeatable evaluation procedures that allow efficient comparisons of the quality of different dialogue strategies. \nThis survey is structured as follows: in the next section, we give a general overview of the different classes of dialogue systems and their characteristics. 
We then introduce the evaluation task in greater detail, with an emphasis on the goals of an evaluation and the requirements on an evaluation metric. In Sections \\ref{sec:eval_task}, \\ref{sec:eval_non_task}, and \\ref{sec:eva_qa_dialogue}, we introduce each dialogue system class (i.e. task-oriented systems, conversational agents and question answering dialogue systems). Thereafter, we give an overview of the characteristics, dialogue behaviour, and concepts behind the implementation methods of the various dialogue systems. Finally, we present the evaluation methods and the ideas behind them. Here, we place an emphasis on the relationship between these methods and the dialogue system classes, including which aspects of the evaluation are automated. In Section \\ref{sec:datasets}, we give a short overview of the relevant datasets and evaluation campaigns in the domain of dialogue systems. In Section \\ref{sec:discussion}, we discuss the issues and challenges in devising automated evaluation methods and discuss the level of automation achieved.", "id": "030fa328-8718-455f-8c1d-2c718f620e70", "level": "section", "origin_cites_number": 1, "parent_id": "58c1de30-7774-42e5-a989-9c0459f4e585", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "e5e866a1-6351-4674-8666-e4056a478c2d", "level": "section", "origin_cites_number": 0, "parent_id": "58c1de30-7774-42e5-a989-9c0459f4e585", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "A General Overview" ] ], "subsections": [ "3c36dea1-7979-445d-b697-0c188d032fdd", "21b7ce87-9068-4481-b345-7e879858248c", "a2125333-f586-4eb0-8c2c-213429625d95" ], "title": "A General Overview" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 1149 ], "content": "Dialogue Systems (DS) usually structure dialogues in 
\\emph{turns}; each turn is defined by one or more \\emph{utterances} from one speaker. Two consecutive turns between two different speakers are called an \\emph{exchange}. Multiple exchanges constitute a \\emph{dialogue}. A different, but related, view is to interpret each turn or each utterance as an action (more on this later). \nThe main component of a dialogue system is the dialogue manager, which defines the content of the next utterance and thus the behaviour of the dialogue system. There are many different approaches to designing a dialogue manager, which are partly dictated by the application of the dialogue system. However, there are three broad classes of dialogue systems that we encounter in the literature: task-oriented systems, conversational agents and interactive question answering systems\\footnote{In recent literature, the distinction is made only between the first two classes of dialogue systems . However, interactive question answering systems cannot be completely placed in either of the two categories. }. \nWe identified the following characteristic features that help differentiate between the three classes: whether the system is developed to solve a task, whether the dialogue follows a structure, whether the domain is restricted or open, whether the dialogue spans multiple turns, whether the dialogues are long or rather efficient, who takes the initiative, and what interface is used (text, speech, multi-modal). Table \\ref{tbl:ds_char} depicts the characteristics for each of the dialogue system classes. In this table, we can see the following main features for each class:\n \\begin{itemize}\n \\item Task-oriented systems are developed to help the user solve a specific task as efficiently as possible. The dialogues are characterized by following a clearly defined structure that is derived from the domain. The dialogues follow mixed initiative; both the user and the system can take the lead. 
Usually, the systems found in the literature are built for speech input and output. However, task-oriented systems in the domain of assisting users are built on multi-modal input and output. \n \\item Conversational agents display a more unstructured conversation, as their purpose is to have open-domain dialogues with no specific task to solve. Most of these systems are built to emulate social interactions, and thus longer dialogues are desired.\n \\item Question Answering (QA) systems are built for the specific task of answering questions. The dialogues are not defined by a structure as with task-oriented systems; however, they mostly follow a question-and-answer pattern. QA systems may be built for a specific domain, but may also be tilted towards more open domain questions. Usually, the domain is dictated by the underlying data, e.g. knowledge bases or text snippets from forums. Traditional QA systems work on a single-turn interaction; however, there are systems that allow multiple turns to cover follow-up questions. The initiative mostly lies with the user, who asks the questions. \n \\end{itemize}\n\\begin{table}[h!]\n\\centering\n\\begin{tabular}{l||lll}\n & Task-oriented DS & Conversational Agents & Interactive QA \\\\ \\hline \\hline\nTask & Yes - clearly defined & No & Yes - answer questions \\\\\nDial. Structure & Highly structured & Not structured & No \\\\\nDomain & Restricted & Mostly open domain & Mixed \\\\\nTurns & Multi & Multi & Single/Multi \\\\\nLength & Short & Long & - \\\\\nInitiative & Mixed/system init & Mixed/user init & User init \\\\\nInterface & Multi-modal & Multi-modal & Mostly text \n\\end{tabular}\n\\caption{Characterizations of the different dialogue system types. 
}\n\\label{tbl:ds_char}\n\\end{table}", "id": "3c36dea1-7979-445d-b697-0c188d032fdd", "level": "subsection", "origin_cites_number": 3, "parent_id": "e5e866a1-6351-4674-8666-e4056a478c2d", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "A General Overview" ], [ "subsection", "Dialogue Systems" ] ], "subsections": [], "title": "Dialogue Systems" }, { "cite_extract_rate": 0, "cites": [], "content": "Evaluating dialogue systems is a challenging task and subject of much research. We define the goal of an evaluation method as having an automated, repeatable evaluation procedure with high correlation to human judgments, which is able to differentiate between various dialogue strategies and is able to explain which features of the dialogue systems are important. Thus, the following requirements can be stated:\n\\begin{itemize}\n\\item Automatic: in order to reduce the dependency on human labour, which is time- and cost-intensive as well as not necessarily repeatable, the evaluation method should be automated, or at least partially automated. \n\\item Repeatable: the evaluation method should yield the same result if applied multiple times to the same dialogue system under the same circumstances. \n\\item Correlated to human judgments: the procedure should yield ratings that correlate to human judgments. \n\\item Differentiate between different dialogue systems: the evaluation procedure should be able to differentiate between different strategies. For instance, if one wants to test the effect of a \\emph{barge-in} feature (i.e. allowing the user to interrupt the dialogue system), the evaluation procedure should be able to highlight the effects. \n\\item Explainable: the method should give insights into which features of the dialogue system impact the quality of the dialogue and in which manner they do so. 
For instance, the methods should reveal that the automatic speech recognition system's \\emph{word-error rate} has a high influence on the quality of the natural language understanding component, which in turn impacts the intent classification. \n\\end{itemize}\nIn this survey, we focus on the efforts of automating the evaluation process. This is a very difficult, but crucial task, as human evaluations are cost- and time-intensive. Although much progress has been made in automating the evaluations of dialogue systems, the reliance on human evaluation is still present. Here, we give a condensed overview of the human-based evaluations used in the literature.", "id": "21b7ce87-9068-4481-b345-7e879858248c", "level": "subsection", "origin_cites_number": 0, "parent_id": "e5e866a1-6351-4674-8666-e4056a478c2d", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "A General Overview" ], [ "subsection", "Evaluation" ] ], "subsections": [ "fb671f8b-02d8-43f8-a113-0a4dd30051bf" ], "title": "Evaluation" }, { "cite_extract_rate": 0.125, "cites": [ 1547 ], "content": "There are various approaches to human evaluation. The test subjects can take on two main roles: interacting with the system, rating a dialogue or utterance, or both. In the following, we differentiate among different types of user populations. Within each population, the subjects can take on either of the two roles. \n\\begin{itemize}\n\\item Lab experiments: Before crowdsourcing was popular, dialogue systems were evaluated in a lab environment. Users were invited to participate in the lab, where they interacted with a dialogue system and subsequently filled in a questionnaire. For instance, recruited 36 subjects, who were given instructions and presented with various scenarios. The subjects were asked to solve a task using a spoken dialogue system. Furthermore, a supervisor was present to guide the users. 
The lab environment is very controlled, which is not necessarily comparable to the real world .\n\\item In-field experiments: Here, the evaluation is performed by collecting feedback from real users of the dialogue systems~. For instance, for the Spoken Dialogue Challenge , the systems were developed to provide bus schedule information in Pittsburgh. The evaluation was performed by redirecting the evening calls to the dialogue systems and getting the user feedback at the end of the conversation. The Alexa Prize \\footnote{\\url{https://developer.amazon.com/alexaprize}} also followed the same strategy, i.e. it let real users interact with operational systems and gathered user feedback over a span of several months.\n\\item Crowdsourcing: Recently, human evaluation has shifted from a lab environment to using crowdsourcing platforms such as Amazon Mechanical Turk (AMT). These platforms provide large numbers of recruited users. evaluate the validity of using crowdsourcing for evaluating dialogue systems, and their experiments suggest that with enough crowdsourced users, the quality of the evaluation is comparable to lab conditions. Current research relies on crowdsourcing for human evaluation . \nConversational dialogue systems in particular are evaluated via crowdsourcing, where there are two main evaluation procedures: crowdworkers either talk to the system and rate the interaction, or they are presented with a context from the test set and a response by the system, which they need to rate. In both settings, the crowdworkers are asked to rate the system based on quality, fluency or appropriateness. Recently, introduced Sensibleness and Specificity Average (SSA), where humans rate the sensibleness and specificity of a response. These capture two aspects of human behaviour: making sense and being specific. A dialogue system can be sensible by responding with vague answers (e.g. 
``I don't know\"), whereas it is only specific if it takes the context into account.\n\\end{itemize}\nHuman-based evaluation is difficult to set up and to carry out. Much care has to be taken in setting up the experiments; the users need to be properly instructed and the tasks need to be prepared so that the experiment reflects real-world conditions as closely as possible. Furthermore, one needs to take into account the high variability of user behaviour, which is present especially in crowdsourced environments.", "id": "fb671f8b-02d8-43f8-a113-0a4dd30051bf", "level": "paragraph", "origin_cites_number": 8, "parent_id": "21b7ce87-9068-4481-b345-7e879858248c", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "A General Overview" ], [ "subsection", "Evaluation" ], [ "paragraph", "Human Evaluation." ] ], "subsections": [ "41d66d9a-31db-424e-b139-de8780ad1050" ], "title": "Human Evaluation." }, { "cite_extract_rate": 0, "cites": [], "content": "A procedure which satisfies the aforementioned requirements has not yet been developed. Most evaluation procedures either require a degree of human involvement in order to be somewhat correlated to human judgement, or they require significant engineering effort. The evaluation methods, which we cover in this survey, can be categorized as follows: model the human judges, model the user behaviour, or use fine-grained methods, which evaluate a specific aspect of the dialogue system (e.g. its ability to adhere to a topic). Methods that model human judges rely on human judgements collected beforehand in order to fit a model which predicts the human rating. User behaviour models involve a significant engineering step in order to build a model which emulates the human behaviour. The finer-grained methods also need a certain degree of engineering, which depends on the feature being evaluated. 
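The first of these categories (modelling the human judges) can be sketched in a few lines: store dialogues for which human ratings were collected beforehand, then predict the rating of an unseen dialogue from its most similar rated dialogue. The feature set and all numbers below are illustrative assumptions, not taken from any system covered in this survey.

```python
import math

# Minimal sketch of the "model the human judges" idea: a nearest-
# neighbour model that predicts the human rating of an unseen dialogue
# from previously rated ones. The features (assumed here: task success,
# number of turns, word-error rate) are hypothetical.

def fit_judge_model(features, ratings):
    """'Fitting' the nearest-neighbour judge is storing rated dialogues."""
    return list(zip(features, ratings))

def predict_rating(model, feature_vector):
    """Return the rating of the most similar previously rated dialogue."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, rating = min(model, key=lambda pair: dist(pair[0], feature_vector))
    return rating

# Human ratings (1-5 scale) collected beforehand for four dialogues.
features = [[1, 5, 0.10], [0, 12, 0.35], [1, 7, 0.15], [0, 9, 0.40]]
ratings = [4.5, 1.5, 4.0, 2.0]

model = fit_judge_model(features, ratings)
good = predict_rating(model, [1, 6, 0.12])  # unseen successful dialogue
bad = predict_rating(model, [0, 11, 0.38])  # unseen failed dialogue
```

In practice, the model would be a trained regressor over far richer features, but the workflow is the same: collect human judgements once, then score new dialogues automatically.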
The common trait of the evaluation methods covered in this survey is that they are coupled to the characteristics of the dialogue system being considered. That is, a task-oriented dialogue system is evaluated differently from a conversational dialogue system.", "id": "41d66d9a-31db-424e-b139-de8780ad1050", "level": "paragraph", "origin_cites_number": 0, "parent_id": "fb671f8b-02d8-43f8-a113-0a4dd30051bf", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "A General Overview" ], [ "subsection", "Evaluation" ], [ "paragraph", "Human Evaluation." ], [ "paragraph", "Automated Evaluation Procedures." ] ], "subsections": [], "title": "Automated Evaluation Procedures." }, { "cite_extract_rate": 0, "cites": [], "content": "Different evaluation procedures have been proposed based on the characteristics of the dialogue system class. For instance, the evaluation of task-oriented systems exploits the highly structured dialogues. The goal can be precisely defined and measured to compute the task-success rate. On the other hand, conversational agents generate dialogues that are more unstructured, which can be evaluated on the basis of the appropriateness of the responses; this has been shown to be difficult to automate. \nWe introduce each type of dialogue system to highlight the respective characteristics and methods used to implement the dialogue system. With this knowledge, we introduce the most important concepts and methods developed to evaluate the respective class of dialogue system. In the following survey, we discuss each of the three classes of dialogue systems separately. 
Thus, Section \\ref{sec:eval_task}: \\emph{Task Oriented Dialogue Systems}, Section \\ref{sec:eval_non_task}: \\emph{Conversational Agents}, and Section \\ref{sec:eva_qa_dialogue}: \\emph{Interactive Question Answering} can be read independently from each other.", "id": "a2125333-f586-4eb0-8c2c-213429625d95", "level": "subsection", "origin_cites_number": 0, "parent_id": "e5e866a1-6351-4674-8666-e4056a478c2d", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "A General Overview" ], [ "subsection", "Modular Structure of this Article" ] ], "subsections": [], "title": "Modular Structure of this Article" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:eval_task}\n\\input{021_task_oriented_systems.tex}", "id": "7d128ace-9c90-49f7-bd5a-bbdc526dc9fd", "level": "section", "origin_cites_number": 0, "parent_id": "58c1de30-7774-42e5-a989-9c0459f4e585", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "Task Oriented Dialogue System" ] ], "subsections": [], "title": "Task Oriented Dialogue System" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:eval_non_task}\n\\input{022_conversational_agents.tex}", "id": "8c99fa37-1def-4d5a-ad34-735a28bef963", "level": "section", "origin_cites_number": 0, "parent_id": "58c1de30-7774-42e5-a989-9c0459f4e585", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "Conversational Dialogue Systems" ] ], "subsections": [], "title": "Conversational Dialogue Systems" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:eva_qa_dialogue}\n\\input{023_qa_ds.tex}", "id": "615da0ed-3016-429b-b6ad-a2bada64cae8", "level": "section", "origin_cites_number": 0, "parent_id": "58c1de30-7774-42e5-a989-9c0459f4e585", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "Question Answering Dialogue Systems" ] ], "subsections": [], 
"title": "Question Answering Dialogue Systems" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:datasets}\nDatasets play an important role for the evaluation of dialogue systems, together with challenges open to public participation. A large number of datasets have been used and made publicly available for the evaluation of dialogue systems in the last decades, but the coverage across dialogue components and evaluation methods (e.g. Sections \\ref{sec:eval_task} and \\ref{sec:eval_non_task}) is uneven. Also note that datasets are not restricted to specific evaluation methods, as they can be used to feed more than one evaluation method or metric interchangeably. In this section, we cover the most relevant datasets and challenges, starting with select datasets. For further references, refer to a broad survey of publicly available datasets that have already been used to build and evaluate dialogue systems carried out by .\nThe dialogue datasets selected for this section are listed in Tables \\ref{tab:task}, \\ref{tab:conv} and \\ref{tab:qa}, where properties such as the topics covered and number of dialogues are indicated.", "id": "b7d9cf74-5108-48cc-abe2-a5f162020e3c", "level": "section", "origin_cites_number": 1, "parent_id": "58c1de30-7774-42e5-a989-9c0459f4e585", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "Evaluation Datasets and Challenges" ] ], "subsections": [ "f46ef7d5-79de-4e9d-962a-dd97da034d5d", "dafadc03-6ef1-4b8c-86c5-e93b9441f4fb", "523a0303-f590-4bb9-8ddb-25f2a555296a", "86ea82bf-50cc-46cd-9c37-9914aa488ce1" ], "title": "Evaluation Datasets and Challenges" }, { "cite_extract_rate": 0.266666666666666, "cites": [ 7453, 7331, 7452, 1548 ], "content": "\\begin{table}[t]\n\\centering\n\\begin{tabular}{llll}\n\\hline\nName & Topics & \\# dialogues & Reference\\\\ \\hline\nDSTC1 & Bus schedules & 15,000 & \\\\\nDSTC2 & Restaurants & 3,000 & \\\\\nDSTC3 & Tourist information & 2,265 & 
\\\\\nDSTC4 \\& DSTC5 & Tourist information & 35 & \\\\\nDSTC6 & Restaurant reservation & - & \\\\\nDSTC7 (Flex Data) & Student guiding & 500 & \\\\\nDSTC8 (MetaLWOz) & 47 domains & 37,884 & \\\\\nDSTC8 (Schema-Guided) & 20 domains & 22,825 & \\\\\nMultiWOZ & Tourist information & 10,438 & \\\\\nTaskmaster-1 & 6 domains & 13,215 & \\\\\nMultiDoGo & 6 domains & 86,698 & \\\\\n\\hline\n\\end{tabular}\n\\caption{Datasets for task-oriented dialogue systems.}\n\\label{tab:task}\n\\end{table}\nDatasets are usually designed to evaluate specific dialogue components, and very few public datasets are able to evaluate an entire task-oriented dialogue system (e.g. Section \\ref{sec:eval_task}). The evaluation of these kinds of systems is highly system-specific, and it is therefore difficult to reuse the dataset with other systems. Their evaluation also requires considerable human effort, as the involvement of individual users or external evaluators is usually needed. For example, in , which is a Partially Observable Markov Decision Process (POMDP)-based dialogue system mentioned in Section \\ref{subsec:slot} for the restaurants domain, the evaluation of policies is done by crowdworkers via the Amazon Mechanical Turk service. Mechanical Turk users were asked first to find some specific restaurants, and after each dialogue was finished, they had to fill in a feedback form to indicate whether the dialogue had been successful or not. Similarly, for the end-to-end dialogue system by (cf. Section \\ref{subsec:ent-to-end}), also for the restaurants domain, human evaluation was conducted by users recruited via Amazon Mechanical Turk. Each evaluator had to follow a given task and to rate the system's performance. More specifically, they had to grade the subjective success rate, the perceived comprehension ability and the naturalness of the responses. \nMost of the task-oriented datasets are designed to evaluate components of dialogue systems. 
For example, several datasets have been released through different editions of the Dialog State Tracking Challenge\\footnote{\\url{https://www.microsoft.com/en-us/research/event/dialog-state-tracking-challenge/}}, focused on the development and evaluation of the dialogue state tracker component. However, even if these datasets were designed to test state tracking, used them to build and evaluate a whole dialogue system, re-adjusting the dataset by ignoring the state annotation and reusing only the transcripts of dialogues. The Schema Guided Dialogue (SGD) dataset released for the 8th edition of DSTC was designed to test not only state tracking, but also intent prediction, slot filling and language generation for large-scale virtual assistants. SGD consists of almost 23K annotated multi-domain (bank, media, calendar, travel, weather, ...), task-oriented dialogues between a human and a virtual assistant.\nThe MultiWOZ (Multi-Domain Wizard-of-Oz) dataset represented a significant breakthrough with respect to the scarcity of dialogue data, as it contains around 10K dialogues, which is at least one order of magnitude larger than any structured corpus available before . It is annotated with dialogue belief states and dialogue actions, so it can be used for the development of individual components of a dialogue system. At the same time, its considerable size makes it very appropriate for the training of end-to-end based dialogue systems. The main topic of the dialogues is tourism, covering seven domains: attractions, hospitals, police, hotels, restaurants, taxis and trains. Each dialogue can contain more than one of these domains.\nSimilar in size and content to MultiWOZ is the Taskmaster-1 task-based dialogue dataset . It includes around 13K dialogues in six domains: ordering pizza, setting auto repair appointments, arranging taxi services, ordering movie tickets, ordering coffee drinks and making restaurant reservations. 
What makes it different from the previous one is that more than half of the dialogues are created following a self-dialogue methodology, in which a crowd-worker writes the full dialogue themselves. The authors claim that these self-dialogues have richer and more diverse language than, for example, MultiWOZ, as they are not restricted to a small knowledge base.\nThe largest human-generated and multi-domain dialogue dataset that is available to the public is MultiDoGo , which comprises over 81K dialogues. These dialogues were created following the Wizard-of-Oz approach between a crowd-worker and a trained annotator. These participants were guided to introduce specific biases like intent or slot change, multi-intent, multiple slot values, slot overfilling and slot deletion in conversations. Additionally, over 54K of the dialogues are annotated at the turn level for intent classes and slot labels. Dialogues are from six different domains: airline, fast food, finance, insurance, media and software support.\nWe will conclude this section by discussing two related tools, rather than dialogue datasets. The first tool, called PyDial\\footnote{\\url{http://www.camdial.org/pydial/}}, partially addresses the shortage of evaluation datasets for task-oriented systems, because it offers a reinforcement-learning-based dialogue management environment for benchmarking purposes . Thus, it makes it possible to evaluate and compare different task-oriented dialogue systems under the same conditions. This toolkit not only provides domain-independent implementations of different modules in a dialogue system, but also simulates users (see Section \\ref{subsec:simulation}). It uses two metrics for the evaluation: (1) the average success rate and (2) the average reward for each evaluated policy model of the reinforcement-learning algorithms. The success rate is defined as the percentage of dialogues that are completed successfully. 
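The two metrics just mentioned can be sketched as follows. The reward scheme (a fixed bonus on success minus a per-turn penalty) is a common convention in the reinforcement-learning dialogue literature; the concrete values below are an assumption for illustration, not PyDial's verbatim implementation.

```python
# Sketch of the two evaluation metrics: average success rate (percentage
# of dialogues completed successfully) and average reward. The
# +20-on-success / -1-per-turn scheme is an assumed, commonly used
# convention, not taken from PyDial's code.

def average_success_rate(dialogues):
    return 100.0 * sum(d["success"] for d in dialogues) / len(dialogues)

def average_reward(dialogues, success_bonus=20, turn_penalty=1):
    rewards = [success_bonus * d["success"] - turn_penalty * d["turns"]
               for d in dialogues]
    return sum(rewards) / len(rewards)

dialogues = [
    {"success": True, "turns": 6},
    {"success": True, "turns": 9},
    {"success": False, "turns": 12},
    {"success": True, "turns": 5},
]
# average_success_rate(dialogues) -> 75.0, average_reward(dialogues) -> 7.0
```

The reward metric penalizes long dialogues, so two policies with the same success rate can still be distinguished by their efficiency.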
Thus, it is closely related to the task-completion metric used by the PARADISE framework (see Section \\ref{subsec:paradise}).\nThe second tool, called LIDA , is a dialogue annotation tool. The authors argue that the quality of a dataset has a significant effect on the quality of a dialogue system; hence, a good dialogue annotation tool is essential to create well-annotated dialogue datasets. LIDA is the first annotation tool that handles the entire dialogue annotation pipeline, from turn and dialogue segmentation through to labelling structured conversation data. Moreover, it also includes an interface for resolving inter-annotator disagreements.", "id": "f46ef7d5-79de-4e9d-962a-dd97da034d5d", "level": "subsection", "origin_cites_number": 15, "parent_id": "b7d9cf74-5108-48cc-86c5-e93b9441f4fb", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "Evaluation Datasets and Challenges" ], [ "subsection", "Datasets for Task-Oriented Dialogue Systems" ] ], "subsections": [], "title": "Datasets for Task-Oriented Dialogue Systems" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 7454, 7455, 7456, 8419 ], "content": "\\label{subsec:conv}\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{llll}\n\\hline\nName & Topics & \\# dialogues & Reference\\\\ \\hline\nSwitchboard & Casual topics & 2,400 & \\\\\nBritish National Corpus & Casual topics & 854 & \\\\\nSubTle Corpus & Movie subtitles & 3.35M & \\\\\nReddit Domestic Abuse Corpus & Abuse help & 21,133 & \\\\\nTwitter Corpus & Unrestricted & 1.3M & \\\\\nTwitter Triple Corpus & Unrestricted & 4,322 & \\\\\nUbuntu Dialogue Corpus & Ubuntu problems & 930K & \\\\\nbAbI & Restaurant reservation & 3,000 & \\\\\nOpenSubtitles & Movie subtitles & 36M & \\\\\nCornellMovie & Movie dialogues & 220K & \\\\\n\\hline\n\\end{tabular}\n\\caption{Datasets for conversational dialogue systems.}\n\\label{tab:conv}\n\\end{table}\nRegarding the evaluation of conversational dialogue systems presented 
in Section \\ref{sec:eval_non_task}, datasets derived from conversations on micro-blogging or social media websites (e.g. Twitter or Reddit) are good candidates, as they contain general-purpose or non-task-oriented conversations that are orders of magnitude larger than other dialogue datasets used before. For instance, Switchboard (telephone conversations on pre-specified topics), the British National Corpus (British dialogues in many contexts, from formal business or government meetings to radio shows and phone-ins) and the SubTle Corpus (aligned interaction-response pairs from movie subtitles) are three datasets released earlier that have 2,400, 854 and 3.35M dialogues and 3M, 10M and 20M words, respectively. These sizes are relatively small compared to the huge Reddit Corpus\\footnote{\\url{https://www.reddit.com/r/datasets/comments/3bxlg7/i_have_every_publicly_available_reddit_comment/}}, which contains over 1.7 billion comments\\footnote{As far as we know, this dataset has not been used in any research work. Researchers have used smaller and more curated versions of the Reddit dataset like the Reddit Domestic Abuse Corpus , which contains 21,133 dialogues.}, or the Twitter Corpus described below.\nBecause of the limit on the number of characters permitted in each message on Twitter, the utterances are quite short, very colloquial and chat-like. Moreover, as the conversations happen almost in real time, the conversations on this micro-blogging website are very similar to spoken dialogues between humans. There are two publicly available large corpora extracted from Twitter. The former is the Twitter Corpus presented in , which contains roughly 1.3 million conversations and 125M words drawn from Twitter. The latter is a collection of 4,232 three-step (context-message-response) conversational snippets extracted from Twitter logs\\footnote{\\url{https://www.microsoft.com/en-us/download/details.aspx?id=52375}}. 
This is labeled by crowdsourced annotators, who measure the quality of a response in a given context .\nAlternatively, hypothesized that chat-room style messaging is more closely correlated to human-to-human dialogues than micro-blogging websites like Twitter, or forum-based sites such as Reddit. Thus, they presented the above-mentioned Ubuntu Dialogue Corpus. This large-scale corpus targets a specific domain; it could accordingly be used as a task-oriented dataset for the research and evaluation of dialogue state trackers. However, it also has the unstructured nature of interactions from microblog services, which makes it appropriate for the evaluation of non-task-oriented dialogue systems. \nThese two large datasets are adequate for the three subtypes of evaluation metrics for non-task-oriented dialogue systems: unsupervised, trained and utterance selection metrics. Notice that, additionally, some human judgments could be needed in some cases, such as in for the ADEM system (see Section \\ref{sec:general_metrics}). Here, they use human judgments collected via Amazon Mechanical Turk in addition to the evaluation using the Twitter dataset. \nApart from the aforementioned two datasets, the five datasets generated recently for the bAbI tasks are appropriate for evaluation using the next utterance classification method (see Section \\ref{subsec:utterance-selection}). These tasks were designed for testing end-to-end dialogue systems in the restaurant domain, but they check whether the systems can predict the appropriate utterances among a fixed set of candidates, and are not useful for systems that generate the utterance directly. 
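The next utterance classification evaluation just mentioned can be sketched as follows: the system ranks a fixed set of candidate responses for each context, and Recall@k measures how often the true response lands in the top k. The word-overlap scorer and the toy examples below are illustrative stand-ins for a real response-selection model, not a method from the literature.

```python
# Toy sketch of next-utterance classification with Recall@k. The scorer
# (bag-of-words overlap) is a hypothetical stand-in for a learned
# context-response scoring model.

def score(context, candidate):
    c = set(context.lower().split())
    r = set(candidate.lower().split())
    return len(c & r) / max(len(r), 1)

def recall_at_k(examples, k):
    """Fraction of contexts whose true response is ranked in the top k."""
    hits = 0
    for context, candidates, true_index in examples:
        ranked = sorted(range(len(candidates)),
                        key=lambda i: score(context, candidates[i]),
                        reverse=True)
        hits += true_index in ranked[:k]
    return hits / len(examples)

examples = [
    ("how do I restart the network service",
     ["sudo service network-manager restart",
      "have you tried rebooting",
      "I like turtles"], 0),
    ("install the package manager",
     ["the package manager",
      "apt is the package manager tool",
      "no"], 1),
]
# recall_at_k(examples, 1) -> 0.5; recall_at_k(examples, 2) -> 1.0
```

Because the candidate set is fixed, the metric is fully automatic and repeatable, which is exactly why such datasets suit this style of evaluation.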
The ibAbI dataset mentioned in the next section has been created based on bAbI to cover several representative multi-turn QA tasks.\nAnother interesting resource is the ParlAI framework\\footnote{\\url{http://parl.ai/}} for dialogue research, as it contains many popular datasets available all in one place with the goal of sharing, training and evaluating dialogue models across many tasks . Some of the dialogue datasets that are included have been already mentioned (bAbI Dialog tasks and the Ubuntu Dialog Corpus) but it also contains conversations mined from OpenSubtitles\\footnote{\\url{http://opus.lingfil.uu.se/OpenSubtitles.php}} and Cornell Movie\\footnote{\\url{https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html}}.", "id": "dafadc03-6ef1-4b8c-86c5-e93b9441f4fb", "level": "subsection", "origin_cites_number": 12, "parent_id": "b7d9cf74-5108-48cc-abe2-a5f162020e3c", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "Evaluation Datasets and Challenges" ], [ "subsection", "Datasets for Conversational Dialogue Systems" ] ], "subsections": [], "title": "Datasets for Conversational Dialogue Systems" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1098, 1145, 8419, 7457 ], "content": "\\begin{table}[t]\n\\centering\n\\begin{tabular}{llll}\n\\hline\nName & Topics & \\# dialogues & Reference\\\\ \\hline\nUbuntu Dialogue Corpus & Ubuntu problems & 930K & \\\\\nMSDialog & Microsoft products & 35K & \\\\\nibAbI & Restaurant reservation & - & \\\\\nCoQA &7 domains & 8,399 & \\\\\nQuAC & People & 13,594 & \\\\\nDoQA & Cooking & 1,637 & \\\\\n\\hline\n\\end{tabular}\n\\caption{Datasets for question answering dialogue systems.}\n\\label{tab:qa}\n\\end{table}\nWith respect to QA dialogue systems, two datasets have been created based on human interactions from technical chats or forums. 
The first one is the Ubuntu Dialogue Corpus, containing almost one million multi-turn dialogues extracted from the Ubuntu chat logs, which were used to receive technical support for various Ubuntu-related problems . Similarly, MSDialog contains dialogues from a forum dedicated to Microsoft products. MSDialog also contains the user intent of each interaction .
ibAbI represents another approach for creating multi-turn QA datasets . ibAbI adds interactivity to the bAbI dataset that was previously presented (see Section \ref{subsec:conv}) by adding sentences and ambiguous questions with the corresponding disambiguation question, which should be asked by an automatic system. The authors evaluate their system in terms of successfully completed tasks. However, it is unclear how to evaluate a system if it produces a modified version of the disambiguation question.
Recently, several datasets that are very relevant for the context of QA dialogue systems have been released. The CoQA (Conversational Question Answering) dataset contains 8K dialogues and 127K conversation turns . The answers from CoQA are free-form text with their corresponding evidence highlighted in the passage. It is a multi-domain dataset, as the passages are selected from several sources, covering seven different domains: children's stories, literature, middle and high school English exams, news, articles from Wikipedia, science and discussions from Reddit. QuAC (Question Answering in Context) consists of 14K information-seeking QA dialogues (100K total QA pairs) over sections from Wikipedia articles about people . What makes it different from other datasets so far is that some of the questions are unanswerable and that context is needed in order to answer some of the questions. Another similar dataset with unanswerable, context-dependent questions is DoQA, a dataset for accessing domain-specific Frequently Asked Question sites via conversational QA . 
It contains 1,637 information-seeking dialogues on the cooking domain (7,329 questions in total). An analysis carried out by the authors showed that in this dataset there are less factoid questions than in the others, as DoQA focuses on open-ended questions about specific topics. Amazon Mechanical Turk was used to collect the dialogues for the three datasets.", "id": "523a0303-f590-4bb9-8ddb-25f2a555296a", "level": "subsection", "origin_cites_number": 6, "parent_id": "b7d9cf74-5108-48cc-abe2-a5f162020e3c", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "Evaluation Datasets and Challenges" ], [ "subsection", "Datasets for Question Answering Dialogue Systems" ] ], "subsections": [], "title": "Datasets for Question Answering Dialogue Systems" }, { "cite_extract_rate": 0, "cites": [], "content": "We conclude this section by summarizing some of the recent evaluation challenges that are popular for benchmarking state-of-the-art dialogue systems. They have an important role in the evaluation of dialogue systems, not only because they offer a good benchmark scenario to test and compare the systems on a common platform, but also because they often release the dialogue datasets for later evaluation.\nPerhaps one of the most popular challenges is the Dialog State Tracking Challenge (DSTC)\\footnote{\\url{https://www.microsoft.com/en-us/research/event/dialog-state-tracking-challenge/}}, which was previously mentioned in this section. DSTC was started in 2013 in order to provide a common testbed for the task of dialogue state tracking. It continued on a yearly basis with remarkable success. For its sixth edition, it was renamed as Dialog System Technology Challenges due to the interest of the research community in a wider variety of dialogue-related problems. 
Various well-known datasets have been produced and released for every edition: DSTC1 has human-computer dialogues in the bus timetable domain; DSTC2 and DSTC3 used human-computer dialogues in the restaurant information domain; DSTC4 dialogues were human-human and in the tourist information domain; DSTC5 is also from the tourist information domain, but training dialogues are provided in one language and test dialogues are in a different language. Finally, as the DSTC6 edition consisted of 3 parallel tracks, different datasets were released for each track, such as a transaction dialogue dataset for the restaurant domain, two datasets that are part of the OpenSubtitles and Twitter datasets, and different chat-oriented dialogue datasets with dialogue breakdown annotations in Japanese and English.
A more recent challenge is the Conversational Intelligence Challenge (ConvAI)\footnote{\url{http://convai.io/}}, which started in 2017 and continued into 2018 with its second edition. This challenge, conducted under the scope of NIPS, aims to unify the community around the task of building systems capable of intelligent conversations. In its first edition, teams were expected to submit dialogue systems able to carry out intelligent and natural conversations about specific news articles with humans. The task of the second edition has been to model a natural conversation in which two interlocutors meet for the first time and get to know each other. The dataset of this task consists of 10,981 dialogues with 164,356 utterances, and it is available in the ParlAI framework mentioned above.
Finally, the Alexa Prize\footnote{\url{https://developer.amazon.com/alexaprize}} has attracted mass media and research attention alike. This annual competition for university teams is dedicated to accelerating the field of conversational AI in the framework of the Alexa technology. 
The participants have to create socialbots that can converse coherently and engagingly with humans on news events and popular topics such as entertainment, sports, politics, technology and fashion. Unfortunately, no datasets have been released.", "id": "86ea82bf-50cc-46cd-9c37-9914aa488ce1", "level": "subsection", "origin_cites_number": 0, "parent_id": "b7d9cf74-5108-48cc-abe2-a5f162020e3c", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "Evaluation Datasets and Challenges" ], [ "subsection", "Evaluation Challenges" ] ], "subsections": [], "title": "Evaluation Challenges" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:discussion}\nIn the introduction, we stated that the goal of the dialogue evaluation is to find methods that are automated, repeatable, are correlated to human judgements, capable of differentiating among various dialogue strategies and explain which features of the dialogue system contribute to its quality. The main motivation behind this is the need to reduce the human evaluation effort as much as possible, since human involvement creates high costs and is highly time-consuming. In this survey, we presented the main concepts regarding evaluation of dialogue systems and showcased the most important methods. However, evaluation of dialogue systems is still an area of open research. 
In this section, we summarize the current challenges and future trends that we deem most important.", "id": "8a5853da-4970-44e3-bafd-bb0ee409d69c", "level": "section", "origin_cites_number": 0, "parent_id": "58c1de30-7774-42e5-a989-9c0459f4e585", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "Challenges and Future Trends" ] ], "subsections": [ "f3a9645e-5633-43f6-9761-385d1a81dbc3", "cf4eb099-ff45-4799-af74-36b7ee6e5caf" ], "title": "Challenges and Future Trends" }, { "cite_extract_rate": 1, "cites": [ 7458, 7454 ], "content": "The evaluation methods covered in this survey all achieve a certain degree of automation. However, the automation is achieved with significant engineering effort, or by loss of correlation to human judgements.\nWord-overlap metrics (see Section \\ref{sec:general_metrics}), which are borrowed from the machine translation and summarization community, are fully automated. However, they do not correlate with human judgements on the turn level. On the other hand, BLEU becomes more competitive when applied on the corpus-level or system-level . More recent metrics such as $\\Delta$BLEU and ADEM (see Section \\ref{sec:general_metrics}) have significantly higher correlations to human judgements while requiring a significant amount of human annotated data as well as thorough engineering. \nTask-oriented dialogue systems can be evaluated semi-automatically or even fully automatically. These systems benefit from having a well-defined task, where success can be measured. Thus, user satisfaction modelling (see Section \\ref{sce:user_satisfaction_modelling}) as well as user simulations (see Section \\ref{subsec:simulation}) exploit this to automate their evaluation. However, both approaches need a significant amount of engineering and human annotation: user satisfaction modelling usually requires prior annotation effort, which is followed by fitting a model that predicts the judgements. 
In addition to this effort, the process has to be potentially repeated for each new domain or new functionality that the dialogue system incorporates. Although in some cases the model fitted on the data for one dialogue system can be reused to predict another dialogue system, this is not always possible. \nOn the other hand, user simulations require two steps: gathering data to develop a first version of the simulation, and then building the actual user simulation. The first step is only required for user simulations that are based on training corpora (e.g. the neural user simulation). A significant drawback is that the user simulation is only capable of simulating the behaviour which is represented in the corpus or the rules. This means that it cannot cover unseen behaviour well. Furthermore, the user simulation can hardly be used to train or evaluate dialogue systems for other tasks or domains. \nAutomation is thus achieved to a certain degree, but with significant drawbacks. Hence, finding ways to facilitate the automation of evaluation methods is clearly an open challenge.", "id": "f3a9645e-5633-43f6-9761-385d1a81dbc3", "level": "paragraph", "origin_cites_number": 2, "parent_id": "8a5853da-4970-44e3-bafd-bb0ee409d69c", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "Challenges and Future Trends" ], [ "paragraph", "Automation." ] ], "subsections": [ "290f0a6e-8287-45eb-944a-d3979702fa9f" ], "title": "Automation." }, { "cite_extract_rate": 1, "cites": [ 7451 ], "content": "One major objective for a dialogue system is to deliver high quality interactions with its users. However, it is often not clear how ``high quality\" is defined in this context or how to measure it. For task oriented dialogue systems, the mostly used definition of quality is often measured by means of task success and number of dialogue turns (e.g. a reward of 20 for task-success minus the number of turns needed to achieve the goal). 
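The task-success-minus-turns quality measure mentioned above can be written down directly. A minimal sketch (the constant 20 follows the example in the text; the function name is illustrative):

```python
def dialogue_reward(task_success, num_turns, success_reward=20):
    """Common reward for evaluating task-oriented dialogue systems:
    a fixed bonus for completing the task minus the number of turns
    needed, so shorter successful dialogues score highest."""
    return (success_reward if task_success else 0) - num_turns
```

Under this scheme a successful 5-turn dialogue scores 15, while a failed 8-turn dialogue scores -8, which makes the preference for short, successful interactions explicit.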
However, this definition is not applicable to conversational dialogue systems and it might ignore other aspects of the interaction (e.g. frustration of the user). Thus, the current trend is to let humans judge the \emph{appropriateness} of the system utterances. However, the notion of appropriateness is highly subjective and entails several finer-grained concepts (e.g. the ability to maintain the topic, the coherence of the utterance, the grammatical correctness of the utterance itself, etc.). Currently, appropriateness is modelled by means of latent representations (e.g. ADEM), which are again derived from annotated data. 
Other aspects of quality concern the purpose of the dialogue system in conjunction with the functionality of the system. For instance, define the purpose of their conversational dialogue system as building an emotional bond between the dialogue system and the user. This goal differs significantly from the task of training a medical student in the interaction with patients. Both systems need to be evaluated with respect to their particular goal. The ability to build an emotional bond can be evaluated by means of the interaction length (longer interactions are an indicator of higher user engagement), whereas training (or e-learning) systems are usually evaluated regarding their ability to select an appropriate utterance for the given context.
The target audience plays an important role as well. Since quality is mainly a subjective measure, different user groups prefer different types of interactions. For instance, depending on the level of domain knowledge, novice users prefer instructions that use less specialized wording, whereas domain experts might prefer a more specialized vocabulary. 
The notion of quality thus depends on a large number of factors. 
The evaluation needs to be adapted to take aspects such as the dialogue system's purpose, the target audience, and the dialogue system implementation itself into account.", "id": "290f0a6e-8287-45eb-944a-d3979702fa9f", "level": "paragraph", "origin_cites_number": 1, "parent_id": "f3a9645e-5633-43f6-9761-385d1a81dbc3", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "Challenges and Future Trends" ], [ "paragraph", "Automation." ], [ "paragraph", "High Quality Dialogues." ] ], "subsections": [ "3ff28339-d0a2-4633-a8f9-3b3b5b9fe537" ], "title": "High Quality Dialogues." }, { "cite_extract_rate": 0.5, "cites": [ 7459 ], "content": "The notion of lifelong learning for machine learning systems has gained traction recently. The main concept of lifelong learning is that a deployed machine learning system continues to improve by interaction with its environment . Lifelong learning for dialogue systems is motivated by the fact that it is not possible to encounter all possible situations during training, thus, a component that allows the dialogue system to retrain itself and adapt its strategy during deployment seems the most logical solution. \nThe evaluation step is critical in order to achieve lifelong learning. Since the dialogue system relies on the ability to automatically find critical dialogue states where it needs assistance, a module is needed which is able to evaluate the ongoing dialogue. One step in this direction is done by , who present a solution that relies on a satisfaction module that is able of to classify the current dialogue state as either satisfactory or not. If this module finds an unsatisfactory dialogue state, a feedback module asks the user for feedback. The feedback data is then used to improve the dialogue system. \nThe aspect of lifelong learning brings a large variety of novel challenges. 
Firstly, the lifelong learning system requires a module that self-monitors its behaviour and notices when a dialogue is going wrong. For this, the module needs to rely on evaluation methods that work automatically, or at least semi-automatically. The second challenge lies in the evaluation of the lifelong learning system itself. The self-monitoring module as well as the adaptive behaviour need to be evaluated. This brings a new dimension of complexity into the evaluation procedure.", "id": "3ff28339-d0a2-4633-a8f9-3b3b5b9fe537", "level": "paragraph", "origin_cites_number": 2, "parent_id": "290f0a6e-8287-45eb-944a-d3979702fa9f", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "Challenges and Future Trends" ], [ "paragraph", "Automation." ], [ "paragraph", "High Quality Dialogues." ], [ "paragraph", "Lifelong Learning." ] ], "subsections": [], "title": "Lifelong Learning." }, { "cite_extract_rate": 0, "cites": [], "content": "Evaluation is a critical task when developing and researching dialogue systems. Over the past decades, many methods and concepts have been proposed. These methods and concepts are related to the different requirements and functionalities of the dialogue systems. These are subsequently dependent on the current development stage of the dialogue system technology. Currently, the trend is moving towards building end-to-end trainable dialogue systems based on large amounts of data. These systems have different requirements for evaluation than a finite state, machine-based system. Thus, the problem of evaluation is evolving in tandem to the progress of the dialogue system technology itself. This survey presents the current state-of-the-art research in evaluation.\n\\section*{Acknowledgements}\nWe would like to thank Lina Scarborough for proofreading the manuscript. We would also like to thank our anonymous reviewers for their valuable reviews that helped improve the quality of this survey. 
\n\\section*{Funding}\nThis work was supported by the LIHLITH project in the framework of EU ERA-Net CHIST-ERA; the Swiss National Science Foundation [20CH21\\_174237]; the Spanish Research Agency [PCIN-2017-11, PCIN-2017-085/AEI]; Eneko Agirre and Arantxa Otegi received the support of the UPV/EHU [grant GIU16/16]; Agence Nationale pour la Recherche [ANR-17-CHR2-0001-03]\n\\section*{Conflicts of Interest}\nThere are no conflicts of interest to disclose.\n\\bibliographystyle{spbasic} \n\\bibliography{survey_dialogue_evaluation} \n\\end{document}", "id": "cf4eb099-ff45-4799-af74-36b7ee6e5caf", "level": "subsection", "origin_cites_number": 0, "parent_id": "8a5853da-4970-44e3-bafd-bb0ee409d69c", "prefix_titles": [ [ "title", "Survey on Evaluation Methods for Dialogue Systems" ], [ "section", "Challenges and Future Trends" ], [ "subsection", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
8
[ 7451, 1149, 1547, 7453, 7331, 7452, 1548, 7454, 7455, 7456, 8419, 1098, 1145, 7457, 7458, 7459 ]
0.845917
[ "Michael A. Hedderich", "Lukas Lange", "Heike Adel", "Jannik Strötgen", "Dietrich Klakow" ]
A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios
2020
2020-10-23T11:22:01Z
cs.CL
Deep neural networks and huge language models are becoming omnipresent in natural language applications. As they are known for requiring large amounts of training data, there is a growing body of work to improve the performance in low-resource settings. Motivated by the recent fundamental changes towards neural models and the popular pre-train and fine-tune paradigm, we survey promising approaches for low-resource natural language processing. After a discussion about the different dimensions of data availability, we give a structured overview of methods that enable learning when training data is sparse. This includes mechanisms to create additional labeled data like data augmentation and distant supervision as well as transfer learning settings that reduce the need for target supervision. A goal of our survey is to explain how these methods differ in their requirements as understanding them is essential for choosing a technique suited for a specific low-resource setting. Further key aspects of this work are to highlight open issues and to outline promising directions for future research.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "593ff0cc-d185-411c-9f7c-02ab17399c08", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ] ], "subsections": [ "050d9770-ac59-4b1f-8fe3-24003253d68d", "7f89d35b-fdb1-45b2-84f8-7e199e15b6ee", "01449395-0026-492e-b26a-4abacec7957c", "50d4dcc0-4105-402a-8978-8fa345f584f7", "08729569-e9fe-4099-b79e-d05e287b81c7", "ca44ac27-a8bd-46d0-97aa-2a6d9c9defc4", "27505e6f-b9d1-470b-89da-8ce753a42523", "91981cb9-1721-4834-a392-458a664c449d", "ca88a9a2-e919-43be-b4cb-40565c35c380" ], "title": "root" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 7165 ], "content": "\\blfootnote{* equal contribution}\nMost of today's research in natural language processing (NLP) is concerned with the processing of 10 to 20 high-resource languages with a special focus on English, and thus, ignores thousands of languages with billions of speakers~. \nThe rise of data-hungry deep learning systems increased the performance of NLP for high resource-languages, but the shortage of large-scale data in less-resourced languages makes their processing a challenging problem.\nTherefore, \\newcite{intro/ruder2019problems} named NLP for low-resource scenarios one of the four biggest open problems in NLP nowadays.\nThe umbrella term low-resource covers a spectrum of scenarios with varying resource conditions. It includes work on threatened languages, such as Yongning Na, a Sino-Tibetan language with 40k speakers and only 3k written, unlabeled sentences . Other languages are widely spoken but seldom addressed by NLP research. More than 310 languages exist with at least one million L1-speakers each . 
Similarly, Wikipedia exists for 300 languages.\footnote{\url{https://en.wikipedia.org/wiki/List_of_Wikipedias}} 
Supporting technological developments for low-resource languages can help to increase the participation of the speakers' communities in a digital world. 
Note, however, that tackling low-resource settings is crucial even when dealing with popular NLP languages, as low-resource settings concern not only languages but also non-standard domains and tasks, for which -- even in English -- only little training data is available. Thus, the term ``language'' in this paper also includes domain-specific language. 
This importance of low-resource scenarios and the significant changes in NLP in recent years have led to active research on resource-lean settings, and a wide variety of techniques have been proposed. They all share the motivation of overcoming the lack of labeled data by leveraging further sources. However, these works differ greatly in the sources they rely on, e.g., unlabeled data, manual heuristics or cross-lingual alignments. Understanding the requirements of these methods is essential for choosing a technique suited for a specific low-resource setting. 
Thus, one key goal of this survey is to highlight the underlying assumptions these techniques take regarding the low-resource setup.\n\\def\\sectionautorefname{§}\n\\def\\subsectionautorefname{§}\n\\begin{table*}\n \\footnotesize\n \\centering\n \\begin{tabular}{p{4.2cm}|p{3.3cm}|p{3.75cm}|c|c}\n \\toprule\n \\multirow{2}{*}{\\textbf{Method}} & \\multirow{2}{*}{\\textbf{Requirements}} & \\multirow{2}{*}{\\textbf{Outcome}} & \\multicolumn{2}{c}{\\textbf{For low-resource}} \\\\\n & & & \\textbf{languages} & \\textbf{domains}\\\\\n \\midrule\n Data Augmentation (\\autoref{sub:data-aug}) & labeled data, heuristics* & additional labeled data & \\ding{51} & \\ding{51} \\\\ \\midrule\n Distant Supervision (\\autoref{sub:distant}) & unlabeled data, heuristics* & additional labeled data & \\ding{51} & \\ding{51} \\\\ \\midrule\n Cross-lingual projections (\\autoref{sub:projections}) & unlabeled data, high-resource labeled data, cross-lingual alignment & additional labeled data & \\ding{51} & \\ding{55} \\\\ \\midrule\n Embeddings \\& Pre-trained LMs (\\autoref{sub:lm}) & unlabeled data & better language representation & \\ding{51} & \\ding{51} \\\\ \\midrule\n LM domain adaptation (\\autoref{sub:domain-lm}) & existing LM, \\newline unlabeled domain data & domain-specific language representation & \\ding{55} & \\ding{51} \\\\ \\midrule\n Multilingual LMs (\\autoref{sub:multi-lm}) & multilingual unlabeled data & multilingual feature representation & \\ding{51} & \\ding{55} \\\\ \\midrule\n Adversarial Discriminator (\\autoref{sec:ml}) & additional datasets & independent representations & \\ding{51} & \\ding{51} \\\\ \\midrule\n Meta-Learning (\\autoref{sec:ml}) & multiple auxiliary tasks & better target task performance & \\ding{51} & \\ding{51}\\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Overview of low-resource methods surveyed in this paper. 
* Heuristics are typically gathered manually.}\n \\label{tab:overview}\n\\end{table*}\nIn this work, we (1) give a broad and structured overview of current efforts on low-resource NLP, (2) analyse the different aspects of low-resource settings, (3) highlight the necessary resources and data assumptions as guidance for practitioners and (4) discuss open issues and promising future directions. Table \\ref{tab:overview} gives an overview of the surveyed techniques along with their requirements a practitioner needs to take into consideration.", "id": "050d9770-ac59-4b1f-8fe3-24003253d68d", "level": "section", "origin_cites_number": 3, "parent_id": "593ff0cc-d185-411c-9f7c-02ab17399c08", "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.5, "cites": [ 4236 ], "content": "\\label{sec:surveys-etc}\nRecent surveys cover low-resource machine translation and unsupervised domain adaptation . \nThus, we do not investigate these topics further in this paper, but focus instead on general methods for low-resource, supervised natural language processing including data augmentation, distant supervision and transfer learning. This is also in contrast to the task-specific survey by \\newcite{survey/magueresse2020low} who review highly influential work for several extraction tasks, but only provide little overview of recent approaches. 
In Table \\ref{tab:surveys} in the appendix, we list past surveys that discuss a specific method or low-resource language family for those readers who seek a more specialized follow-up.", "id": "7f89d35b-fdb1-45b2-84f8-7e199e15b6ee", "level": "section", "origin_cites_number": 2, "parent_id": "593ff0cc-d185-411c-9f7c-02ab17399c08", "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ], [ "section", "Related Surveys" ] ], "subsections": [], "title": "Related Surveys" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:defining}\nTo visualize the variety of resource-lean scenarios, Figure~\\ref{fig:applications} shows exemplarily which NLP tasks were addressed in six different languages from basic to higher-level tasks. While it is possible to build English NLP systems for many higher-level applications, low-resource languages lack the data foundation for this. Additionally, even if it is possible to create basic systems for tasks, such as tokenization and named entity recognition, for all tested low-resource languages, the training data is typical of lower quality compared to the English datasets, or very limited in size.\nIt also shows that the four American and African languages with between 1.5 and 60 million speakers have been addressed less than the Estonian language, with 1 million speakers. This indicates the unused potential to reach millions of speakers who currently have no access to higher-level NLP applications. study further the availability of resources for languages around the world. \n\\begin{figure}\n \\centering\n \\includegraphics[trim=55 80 190\n60,clip,width=0.49\\textwidth]{figure_applications_stacked.pdf}\n \\caption{Supported NLP tasks in different languages. Note that the figure does not incorporate data quality or system performance. 
More details on the selection of tasks and languages are given in the appendix Section~\\ref{app:tasks}.}\n \\label{fig:applications}\n\\end{figure}", "id": "01449395-0026-492e-b26a-4abacec7957c", "level": "section", "origin_cites_number": 1, "parent_id": "593ff0cc-d185-411c-9f7c-02ab17399c08", "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ], [ "section", "Aspects of ``Low-Resource''" ] ], "subsections": [ "98d99c3c-8a57-4c26-a23f-8b74617c0c79", "032c8058-5716-4e46-ab86-4be5446a81ed" ], "title": "Aspects of ``Low-Resource''" }, { "cite_extract_rate": 0, "cites": [], "content": "Many techniques presented in the literature depend on certain assumptions about the low-resource scenario. These have to be adequately defined to evaluate their applicability for a specific setting and to avoid confusion when comparing different approaches. \nWe propose to categorize low-resource settings along the following three dimensions:\n(i) The \\textbf{availability of task-specific labels} in the target language (or target domain) is the most prominent dimension in the context of supervised learning. Labels are usually created through manual annotation, which can be both time- and cost-intensive. Not having access to adequate experts to perform the annotation can also be an issue for some languages and domains.\n(ii) The \\textbf{availability of unlabeled language- or domain-specific text} is another factor, especially as most modern NLP approaches are based on some form of input embeddings trained on unlabeled texts. \n(iii) Most of the ideas surveyed in the next sections assume the \\textbf{availability of auxiliary data} which can have many forms. Transfer learning might leverage task-specific labels in a different language or domain. Distant supervision utilizes external sources of information, such as knowledge bases or gazetteers. 
Some approaches require other NLP tools in the target language like machine translation to generate training data. It is essential to consider this as results from one low-resource scenario might not be transferable to another one if the assumptions on the auxiliary data are broken.", "id": "98d99c3c-8a57-4c26-a23f-8b74617c0c79", "level": "subsection", "origin_cites_number": 0, "parent_id": "01449395-0026-492e-b26a-4abacec7957c", "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ], [ "section", "Aspects of ``Low-Resource''" ], [ "subsection", "Dimensions of Resource Availability" ] ], "subsections": [], "title": "Dimensions of Resource Availability" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 4237, 7165 ], "content": "On the dimension of task-specific labels, different thresholds are used to define low-resource. \nFor part-of-speech (POS) tagging, \\newcite{distant/garrette2013pos2hours} limit the time of the annotators to 2 hours resulting in up to 1-2k tokens. \\newcite{defining/kann2020POSPoorly} study languages that have less than 10k labeled tokens in the Universal Dependency project and \\newcite{survey/loubser2020viability} report that most available datasets for South African languages have 40-60k labeled tokens. \nThe threshold is also task-dependent and more complex tasks might also increase the resource requirements. For text generation, \\newcite{defining/yang2019responseGeneration} frame their work as low-resource with 350k labeled training instances. Similar to the task, the resource requirements can also depend on the language. \\newcite{defining/plank2016MultilingualPOS} find that task performance varies between language families given the same amount of limited training data.\nGiven the lack of a hard threshold for low-resource settings, we see it as a spectrum of resource availability. 
We, therefore, also argue that more work should evaluate low-resource techniques across different levels of data availability for better comparison between approaches. \nFor instance, and show that \nfor very small datasets non-neural methods outperform more modern approaches while the latter obtain better performance in resource-lean scenarios once a few hundred labeled instances are available.", "id": "032c8058-5716-4e46-ab86-4be5446a81ed", "level": "subsection", "origin_cites_number": 3, "parent_id": "01449395-0026-492e-b26a-4abacec7957c", "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ], [ "section", "Aspects of ``Low-Resource''" ], [ "subsection", "How Low is Low-Resource?" ] ], "subsections": [], "title": "How Low is Low-Resource?" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:creating-additional-labeled-data}\nFaced with the lack of task-specific labels, a variety of approaches have been developed to find alternative forms of labeled data as substitutes for gold-standard supervision. This is usually done through some form of expert insights in combination with automation. We group the ideas into two main categories: data augmentation which uses task-specific instances to create more of them (\\autoref{sub:data-aug}) and distant supervision which labels unlabeled data (\\autoref{sub:distant}) including cross-lingual projections (\\autoref{sub:projections}). 
Additional sections cover learning with noisy labels (\\autoref{sec:noisy}) and involving non-experts (\\autoref{sec:nonexpert}).", "id": "50d4dcc0-4105-402a-8978-8fa345f584f7", "level": "section", "origin_cites_number": 0, "parent_id": "593ff0cc-d185-411c-9f7c-02ab17399c08", "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ], [ "section", "Generating Additional Labeled Data" ] ], "subsections": [ "42c5586a-c1af-4632-8c7f-0d63e89e8355", "8fd03a5f-b039-4b07-af1e-c88569597ca9", "e206a999-bb3c-4e85-8cf0-85d431387307", "90477119-7640-4c8d-8461-34186f884fdc", "898b669f-4c03-4325-bb41-e0b437a71f10" ], "title": "Generating Additional Labeled Data" }, { "cite_extract_rate": 0.583333333333333, "cites": [ 4238, 115, 4240, 4239 ], "content": "\\label{sub:data-aug}\nNew instances can be obtained based on existing ones by modifying the features with transformations that do not change the label. In the computer vision community, this is a popular approach where, e.g., rotating an image is invariant to the classification of an image's content. For text, on the token level, this can be done by replacing words with equivalents, such as synonyms , entities of the same type or words that share the same morphology . Such replacements can also be guided by a language model that takes context into consideration .\nTo go beyond the token level and add more diversity to the augmented sentences, data augmentation can also be performed on sentence parts. Operations that (depending on the task) do not change the label include manipulation of parts of the dependency tree , simplification of sentences by removal of sentence parts and inversion of the subject-object relation . For whole sentences, paraphrasing through back-translation can be used. This is a popular approach in machine translation where target sentences are back-translated into source sentences . 
An important aspect here is that errors on the source side/features do not seem to have a large negative effect on the target text that the model learns to generate. It is therefore also used in other text generation tasks like abstractive summarization and table-to-text generation . Back-translation has also been leveraged for text classification . This setting assumes, however, the availability of a translation system. Instead, a language model can also be used for augmenting text classification datasets . It is trained conditioned on a label, i.e., on the subset of the task-specific data with this label. It then generates additional sentences that fit this label. \newcite{data-aug/ding2020daga} extend this idea for token-level tasks.\nAdversarial methods are often used to find weaknesses in machine learning models . They can, however, also be utilized to augment NLP datasets . Instead of manually crafted transformation rules, these methods learn how to apply small perturbations to the input data that do not change the meaning of the text (according to a specific score). This approach is often applied on the level of vector representations. \nFor instance, \newcite{data-aug/grundkiewicz2019Grammar} reverse the augmentation setting by applying transformations that flip the (binary) label. In their case, they introduce errors in correct sentences to obtain new training data for a grammar correction task. \n\textbf{Open Issues:} While data augmentation is ubiquitous in the computer vision community and while most of the above-presented approaches are task-independent, it has not found such widespread use in natural language processing. A reason might be that several of the approaches require an in-depth understanding of the language. There is not yet a unified framework that allows applying data augmentation across tasks and languages. Recently, hypothesised that data augmentation provides the same benefits as pre-training in transformer models.
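To make concrete how lightweight such token-level transformations can be, the following is a minimal sketch of a label-preserving synonym-replacement augmenter; the synonym lexicon and the pre-tokenized input are hypothetical stand-ins for resources a linguistic or domain expert would provide:

```python
import random

# Hypothetical synonym lexicon; in practice this could come from WordNet,
# a domain glossary, or an embedding nearest-neighbour lookup.
SYNONYMS = {
    "quick": ["fast", "rapid"],
    "film": ["movie"],
}

def augment(tokens, p=0.3, seed=0):
    """Return a label-preserving variant by replacing tokens with synonyms."""
    rng = random.Random(seed)
    out = []
    for token in tokens:
        if token in SYNONYMS and rng.random() < p:
            out.append(rng.choice(SYNONYMS[token]))
        else:
            out.append(token)
    return out

print(augment(["a", "quick", "film"], p=1.0))
```

Each call yields a new training instance that keeps the original label, so a handful of expert-curated substitution lists can multiply a small labeled dataset.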
However, we argue that data augmentation might be better suited to leverage the insights of linguistic or domain experts in low-resource settings when unlabeled data or hardware resources are limited.", "id": "42c5586a-c1af-4632-8c7f-0d63e89e8355", "level": "subsection", "origin_cites_number": 12, "parent_id": "50d4dcc0-4105-402a-8978-8fa345f584f7", "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ], [ "section", "Generating Additional Labeled Data" ], [ "subsection", "Data Augmentation" ] ], "subsections": [], "title": "Data Augmentation" }, { "cite_extract_rate": 0.56, "cites": [ 4241, 4238, 8565, 7769, 4253, 4242, 4243, 1621, 2466 ], "content": "\\label{sub:distant}\nIn contrast to data augmentation, distant or weak supervision uses unlabeled text and keeps it unmodified. The corresponding labels are obtained through a (semi-)automatic process from an external source of information. For named entity recognition (NER), a list of location names might be obtained from a dictionary and matches of tokens in the text with entities in the list are automatically labeled as locations. Distant supervision was introduced by \\newcite{distant/mintz2009distant} for relation extraction (RE) with extensions on multi-instance and multi-label learning . It is still a popular approach for information extraction tasks like NER and RE where the external information can be obtained from knowledge bases, gazetteers, dictionaries and other forms of structured knowledge sources . The automatic annotation ranges from simple string matching to complex pipelines including classifiers and manual steps . This distant supervision using information from external knowledge sources can be seen as a subset of the more general approach of labeling rules. These encompass also other ideas like reg-ex rules or simple programming functions . 
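The dictionary-matching idea described above can be sketched in a few lines; the gazetteer entries here are invented for illustration:

```python
# Minimal sketch of distant supervision for NER: tokens that match an entry
# in a (hypothetical) gazetteer receive its entity type, all others get "O".
GAZETTEER = {"paris": "LOC", "berlin": "LOC", "alice": "PER"}

def distant_labels(tokens):
    """Automatically annotate unlabeled tokens via gazetteer lookup."""
    return [GAZETTEER.get(token.lower(), "O") for token in tokens]

print(distant_labels(["Alice", "visited", "Paris"]))  # → ['PER', 'O', 'LOC']
```

In practice, such surface-form matching produces noisy labels (ambiguous entries, missed entities), which is what the noise-handling methods in \autoref{sec:noisy} address.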
\nWhile distant supervision is popular for information extraction tasks like NER and RE, it is less prevalent in other areas of NLP. Nevertheless, distant supervision has also been successfully employed for other tasks by proposing new ways for automatic annotation. \\newcite{distant/li-etal-2012-wiki} leverage a dictionary of POS tags for classifying unseen text with POS. For aspect classification, create a simple bag-of-words classifier on a list of seed words and train a deep neural network on its weak supervision. \\newcite{distant/wang2019sentiment} use context by transferring a document-level sentiment label to all its sentence-level instances. \\newcite{distant/mekala2020meta} leverage meta-data for text classification and \\newcite{huber2020discourse} build a discourse-structure dataset using guidance from sentiment annotations. For topic classification, heuristics can be used in combination with inputs from other classifiers like NER or from entity lists . For some classification tasks, the labels can be rephrased with simple rules into sentences. A pre-trained language model then judges the label sentence that most likely follows the unlabeled input . An unlabeled review, for instance, might be continued with \"It was great/bad\" for obtaining binary sentiment labels.\n\\textbf{Open Issues:} \nThe popularity of distant supervision for NER and RE might be due to these tasks being particularly suited. There, auxiliary data like entity lists is readily available and distant supervision often achieves reasonable results with simple surface form rules. It is an open question whether a task needs to have specific properties to be suitable for this approach. The existing work on other tasks and the popularity in other fields like image classification suggests, however, that distant supervision could be leveraged for more NLP tasks in the future.\nDistant supervision methods heavily rely on auxiliary data. 
In a low-resource setting, it might be difficult to obtain not only labeled data but also such auxiliary data. find a large gap between the performance on high-resource and low-resource languages for POS tagging pointing to the lack of high-coverage and error-free dictionaries for the weak supervision in low-resource languages. This emphasizes the need for evaluating such methods in a realistic setting and avoiding to just simulate restricted access to labeled data in a high-resource language.\nWhile distant supervision allows obtaining labeled data more quickly than manually annotating every instance of a dataset, it still requires human interaction to create automatic annotation techniques or to provide labeling rules. This time and effort could also be spent on annotating more gold label data, either naively or through an active learning scheme. Unfortunately, distant supervision papers rarely provide information on how long the creation took, making it difficult to compare these approaches. Taking the human expert into the focus connects this research direction with human-computer-interaction and human-in-the-loop setups .", "id": "8fd03a5f-b039-4b07-af1e-c88569597ca9", "level": "subsection", "origin_cites_number": 25, "parent_id": "50d4dcc0-4105-402a-8978-8fa345f584f7", "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ], [ "section", "Generating Additional Labeled Data" ], [ "subsection", "Distant \\& Weak Supervision" ] ], "subsections": [], "title": "Distant \\& Weak Supervision" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 4244 ], "content": "\\label{sub:projections}\nFor cross-lingual projections, a task-specific classifier is trained in a high-resource language. Using parallel corpora, the unlabeled low-resource data is then aligned to its equivalent in the high-resource language where labels can be obtained using the aforementioned classifier. 
These labels (on the high-resource text) can then be projected back to the text in the low-resource language based on the alignment between tokens in the parallel texts . This approach can, therefore, be seen as a form of distant supervision tailored to obtaining labeled data for low-resource languages. Cross-lingual projections have been applied in low-resource settings for tasks such as POS tagging and parsing . \nSources for parallel text can be the OPUS project ,\nBible corpora or the recent JW300 corpus .\nInstead of using parallel corpora, existing high-resource labeled datasets can also be machine-translated into the low-resource language . Cross-lingual projections have even been used with English as a target language for detecting linguistic phenomena like modal sense and telicity that are easier to identify in a different language .\n\textbf{Open issues:} Cross-lingual projections place high demands on the auxiliary data, requiring both labels in a high-resource language and means to project them into a low-resource language. Especially the latter can be an issue as machine translation by itself might be problematic for a specific low-resource language. A further limitation of the available parallel corpora is their restricted range of domains, such as political proceedings or religious texts.
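Mechanically, the projection step itself is simple once word alignments are given; a toy sketch with hypothetical labels and alignment pairs:

```python
# Sketch of annotation projection: labels predicted on the high-resource side
# of a parallel sentence are copied to the aligned low-resource tokens.
def project_labels(src_labels, alignment, tgt_len, default="O"):
    """alignment: list of (src_idx, tgt_idx) word-alignment pairs."""
    tgt_labels = [default] * tgt_len
    for src_idx, tgt_idx in alignment:
        tgt_labels[tgt_idx] = src_labels[src_idx]
    return tgt_labels

# Labels for English "Alice visited Paris", aligned to a hypothetical
# target-language sentence with a different word order.
src = ["PER", "O", "LOC"]
print(project_labels(src, [(0, 0), (1, 2), (2, 1)], tgt_len=3))
# → ['PER', 'LOC', 'O']
```

The hard parts in practice are hidden in the inputs: obtaining reliable alignments and accurate source-side labels, both of which degrade for low-resource language pairs.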
\\newcite{projection/mayhew-etal-2017-cheap}, \\newcite{projection/fang-cohn-2017-model} and propose systems with fewer requirements based on word translations, bilingual dictionaries and task-specific seed words, respectively.", "id": "e206a999-bb3c-4e85-8cf0-85d431387307", "level": "subsection", "origin_cites_number": 14, "parent_id": "50d4dcc0-4105-402a-8978-8fa345f584f7", "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ], [ "section", "Generating Additional Labeled Data" ], [ "subsection", "Cross-Lingual Annotation Projections" ] ], "subsections": [], "title": "Cross-Lingual Annotation Projections" }, { "cite_extract_rate": 0.5, "cites": [ 4241, 1695, 4245, 4242 ], "content": "\\label{sec:noisy}\nThe above-presented methods allow obtaining labeled data quicker and cheaper than manual annotations. These labels tend, however, to contain more errors. Even though more training data is available, training directly on this noisily-labeled data can actually hurt the performance. Therefore, many recent approaches for distant supervision use a noise handling method to diminish the negative effects of distant supervision. We categorize these into two ideas: noise filtering and noise modeling.\nNoise filtering methods remove instances from the training data that have a high probability of being incorrectly labeled. This often includes training a classifier to make the filtering decision. The filtering can remove the instances completely from the training data, e.g., through a probability threshold , a binary classifier , or the use of a reinforcement-based agent . Alternatively, a soft filtering might be applied that re-weights instances according to their probability of being correctly labeled or an attention measure .\nThe noise in the labels can also be modeled. A common model is a confusion matrix estimating the relationship between clean and noisy labels . 
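In matrix form, such a noise model maps the classifier's clean label distribution to the observed noisy one; a small numpy sketch with an assumed two-class noise matrix:

```python
import numpy as np

# noise_matrix[i, j] = p(noisy label j | clean label i); the values are
# assumed for illustration: clean class 0 is flipped to 1 with probability 0.3.
noise_matrix = np.array([[0.7, 0.3],
                         [0.1, 0.9]])

def noisy_distribution(clean_probs):
    """Shift the classifier's clean distribution into the noisy label space."""
    return clean_probs @ noise_matrix

print(noisy_distribution(np.array([0.8, 0.2])))  # → [0.58 0.42]
```

During training, the loss is computed between this shifted distribution and the noisy labels, so the underlying classifier is effectively fit to a cleaned label distribution.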
The classifier is no longer trained directly on the noisily-labeled data. Instead, a noise model is appended which shifts the noisy to the (unseen) clean label distribution. This can be interpreted as the original classifier being trained on a ``cleaned'' version of the noisy labels. In \\newcite{distant/ye2019shift}, the prediction is shifted from the noisy to the clean distribution during testing. In \\newcite{distant/chen-etal-2020-relabel-agents}, a group of reinforcement agents relabels noisy instances. \\newcite{distant/rehbein-ruppenhofer-2017-detecting}, \\newcite{distant/lison-etal-2020-weak-supervision} and \\newcite{ren2020denoising} leverage several sources of distant supervision and learn how to combine them.\nIn NER, the noise in distantly supervised labels tends to be false negatives,\ni.e., mentions of entities that have been missed by the automatic method. Partial annotation learning takes this into account explicitly. Related approaches learn latent variables , use constrained binary learning or construct a loss assuming that only unlabeled positive instances exist .", "id": "90477119-7640-4c8d-8461-34186f884fdc", "level": "subsection", "origin_cites_number": 10, "parent_id": "50d4dcc0-4105-402a-8978-8fa345f584f7", "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ], [ "section", "Generating Additional Labeled Data" ], [ "subsection", "Learning with Noisy Labels" ] ], "subsections": [], "title": "Learning with Noisy Labels" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 4241, 4242, 4246 ], "content": "\\label{sec:nonexpert}\nAs an alternative to an automatic annotation process, annotations might also be provided by non-experts. Similar to distant supervision, this results in a trade-off between label quality and availability. 
For instance, \\newcite{distant/garrette2013pos2hours} obtain labeled data from non-native-speakers and without a quality control on the manual annotations. This can be taken even further by employing annotators who do not speak the low-resource language . \n take the opposite direction, integrating speakers of low-resource languages without formal training into the model development process in an approach of participatory research. This is part of recent work on how to strengthen low-resource language communities and grassroot approaches .", "id": "898b669f-4c03-4325-bb41-e0b437a71f10", "level": "subsection", "origin_cites_number": 6, "parent_id": "50d4dcc0-4105-402a-8978-8fa345f584f7", "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ], [ "section", "Generating Additional Labeled Data" ], [ "subsection", "Non-Expert Support" ] ], "subsections": [], "title": "Non-Expert Support" }, { "cite_extract_rate": 1, "cites": [ 7165 ], "content": "\\label{sec:transferlearning}\nWhile distant supervision and data augmentation generate and extend task-specific training data, transfer learning reduces the need for labeled target data by transferring learned representations and models.\nA strong focus in recent works on transfer learning in NLP lies in the use of pre-trained language representations that are trained on unlabeled data like BERT~. 
Thus, this section starts with an overview of these methods (\\autoref{sub:lm}) and then discusses how they can be utilized in low-resource scenarios, in particular, regarding the usage in domain-specific (\\autoref{sub:domain-lm}) or multilingual low-resource settings (\\autoref{sub:multi-lm}).", "id": "08729569-e9fe-4099-b79e-d05e287b81c7", "level": "section", "origin_cites_number": 1, "parent_id": "593ff0cc-d185-411c-9f7c-02ab17399c08", "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ], [ "section", "Transfer Learning" ] ], "subsections": [ "a8c605db-e7c6-44b9-984c-fc150c7a50bc", "df439b98-ef59-4a62-998c-bf63ccc26cd1", "d90e7d5d-8ed8-41ee-a381-537e8d432b20" ], "title": "Transfer Learning" }, { "cite_extract_rate": 0.8888888888888881, "cites": [ 8756, 38, 826, 8745, 9, 7165 ], "content": "\\label{sub:lm}\nFeature vectors are the core input component of many neural network-based models for NLP tasks. They are numerical representations of words or sentences, as neural architectures do not allow the processing of strings and characters as such. \\newcite{pre-train/collobert2011natural} showed that training these models for the task of language-modeling on a large-scale corpus results in high-quality word representations, which can be reused for other downstream tasks as well.\nSubword-based embeddings such as fastText n-gram embeddings~ and byte-pair-encoding embeddings addressed out-of-vocabulary issues by splitting words into multiple subwords, which in combination represent the original word. \n\\newcite{emb/zhu2019subword} showed that these embeddings leveraging subword information are beneficial for low-resource sequence labeling tasks, such as named entity recognition and typing, and outperform word-level embeddings. 
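The subword idea can be illustrated with a toy greedy longest-match segmenter; the vocabulary below is invented for the example, whereas fastText and byte-pair encoding learn theirs from corpus statistics:

```python
# Toy greedy longest-match segmenter illustrating how subword vocabularies
# let models represent out-of-vocabulary words; the vocabulary is made up.
VOCAB = {"un", "break", "able", "b", "r", "e", "a", "k", "u", "n", "l"}

def segment(word):
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest match first
            if word[i:j] in VOCAB:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])  # fall back to the raw character
            i += 1
    return pieces

print(segment("unbreakable"))  # → ['un', 'break', 'able']
```

Because every word decomposes into known pieces (down to single characters), the model can assign a representation even to words never seen during training.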
\newcite{emb/jungmaier2020dirichlet} added smoothing to word2vec models to correct their bias towards rare words and achieved improvements in particular for low-resource settings.\nIn addition, pre-trained embeddings were published for more than 270 languages for both embedding methods.\nThis enabled the processing of texts in many languages, including multiple low-resource languages found in Wikipedia.\nMore recently, a trend emerged of pre-training large embedding models using a language model objective to create context-aware word representations by predicting the next word or sentence. \nThis includes pre-trained transformer models~, such as BERT~ or RoBERTa~. \nThese methods are particularly helpful for low-resource languages for which large amounts of unlabeled data are available, but task-specific labeled data is scarce~.\n\textbf{Open Issues:} While pre-trained language models achieve significant performance increases compared to standard word embeddings, it is still questionable if these methods are suited for real-world low-resource scenarios.
\nFor example, all of these models come with large hardware requirements, in particular, considering that the transformer model size keeps increasing to boost performance~.\nTherefore, these large-scale methods might not be suited for low-resource scenarios where hardware is also low-resource.\\\n\newcite{pre-train/van2020optimal} showed that low- to medium-depth transformer sizes perform better than larger models for low-resource languages, and \newcite{bert/schick2020s} managed to train models with three orders of magnitude fewer parameters that perform on-par with large-scale models like GPT-3 on few-shot tasks by reformulating the training task and using ensembling.\n\newcite{pre-train/melamud2019} showed that simple bag-of-words approaches are better when there are only a few dozen training instances or less for text classification, while more complex transformer models require more training data.\n found that cross-view training leverages large amounts of unlabeled data better for task-specific applications in contrast to the general representations learned by BERT.\nMoreover, data quality for low-resource languages, even for unlabeled data, might not be comparable to data from high-resource languages.
\\newcite{defining/alabi-etal-2020-massive} found that word embeddings trained on larger amounts of unlabeled data from low-resource languages are not competitive to embeddings trained on smaller, but curated data sources.", "id": "a8c605db-e7c6-44b9-984c-fc150c7a50bc", "level": "subsection", "origin_cites_number": 9, "parent_id": "08729569-e9fe-4099-b79e-d05e287b81c7", "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ], [ "section", "Transfer Learning" ], [ "subsection", "Pre-Trained Language Representations" ] ], "subsections": [], "title": "Pre-Trained Language Representations" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 4244, 826, 4247 ], "content": "\\label{sub:domain-lm}\nThe language of a specialized domain can differ tremendously from what is considered the standard language, thus, many text domains are often less-resourced as well. For example, scientific articles can contain formulas and technical terms, which are not observed in news articles. \nHowever, the majority of recent language models are pre-trained on general-domain data, such as texts from the news or web-domain, which can lead to a so-called ``domain-gap'' when applied to a different domain.\nOne solution to overcome this gap is the adaptation to the target domain by finetuning the language model.\n\\newcite{domain/gugurangan2020dontstop} showed that continuing the training of a model with additional domain-adaptive and task-adaptive pre-training with unlabeled data leads to performance gains for both high- and low-resource settings for numerous English domains and tasks.\nThis is also displayed in the number of domain-adapted language models \\cite[(i.a.)]{domain/alsentzer2019publicly,domain/huang2019clinicalbert,domain/adhikari2019docbert,domain/lee2020patent,domain/jain2020nukebert}, \nmost notably BioBERT~ that was pre-trained on biomedical PubMED articles and SciBERT~ for scientific texts. 
\nFor example, showed that a general-domain BERT model performs well in the materials science domain, but the domain-adapted SciBERT performs best. \n\\newcite{domain/xu2020dombert} used in- and out-of-domain data to pre-train a domain-specific model and adapt it to low-resource domains.\n\\newcite{domain/aharoni2020unsupervised} found domain-specific clusters in pre-trained language models and showed how these could be exploited for data selection in domain-sensitive training. \nPowerful representations can be achieved by combining high-resource embeddings from the general domain with low-resource embeddings from the target domain . \n\\newcite{emb/kiela-etal-2018-dynamic} showed that embeddings from different domains can be combined using attention-based meta-embeddings, which create a weighted sum of all embeddings. \\newcite{emb/lange2020adversarial} further improved on this by aligning embeddings trained on diverse domains using an adversarial discriminator that distinguishes between the embedding spaces to generate domain-invariant representations.", "id": "df439b98-ef59-4a62-998c-bf63ccc26cd1", "level": "subsection", "origin_cites_number": 5, "parent_id": "08729569-e9fe-4099-b79e-d05e287b81c7", "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ], [ "section", "Transfer Learning" ], [ "subsection", "Domain-Specific Pre-Training" ] ], "subsections": [], "title": "Domain-Specific Pre-Training" }, { "cite_extract_rate": 0.7333333333333331, "cites": [ 8565, 8328, 4250, 7165, 4246, 4248, 4249, 8745, 4251 ], "content": "\\label{sub:multi-lm}\nAnalogously to low-resource domains, low-resource languages can also benefit from labeled resources available in other high-resource languages.\nThis usually requires the training of multilingual language representations by combining monolingual representations or training a single model for many languages, such as multilingual BERT~ or XLM-RoBERTa~ .\nThese 
models are trained using unlabeled, monolingual corpora from different languages and can be used in cross- and multilingual settings, since many languages are seen during pre-training. \nIn cross-lingual zero-shot learning, no task-specific labeled data is available in the low-resource target language. Instead, labeled data from a high-resource language is leveraged. \nA multilingual model can be trained on the target task in a high-resource language and afterwards applied to the unseen target languages, such as for named entity recognition , reading comprehension , temporal expression extraction , or POS tagging and dependency parsing .\n showed, however, that there is still a large gap between low- and high-resource settings.\n and proposed adding a minimal amount of target-task and -language data (in the range of 10 to 100 labeled sentences) which resulted in a significant boost in performance for classification in low-resource languages.\nThe transfer between two languages can be improved by creating a common multilingual embedding space of multiple languages. \nThis is useful for standard word embeddings as well as pre-trained language models. For example, by aligning the languages inside a single multilingual model, i.a., in cross-lingual~ or multilingual settings . \nThis alignment is typically done by computing a mapping between two different embedding spaces, such that the words in both embeddings share similar feature vectors after the mapping .
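A common instantiation of such a mapping is the orthogonal Procrustes solution computed from a small seed dictionary of translation pairs; a toy numpy sketch in which the target space is, by construction, an exact rotation of the source space:

```python
# Sketch of cross-lingual embedding alignment: given paired embeddings of
# seed-dictionary translations, learn an orthogonal map W from the source to
# the target space (the Procrustes solution W = U V^T from the SVD).
import numpy as np

rng = np.random.default_rng(0)
dim, n_pairs = 5, 20
X = rng.normal(size=(n_pairs, dim))          # source-language embeddings
R_true, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
Y = X @ R_true                               # target embeddings (toy: exact rotation)

U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt                                   # orthogonal alignment matrix

print(np.allclose(X @ W, Y))  # → True
```

Real embedding spaces are only approximately related by a linear map, so the recovered `W` minimizes, rather than eliminates, the discrepancy between the aligned spaces.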
\nThis allows the use of different embeddings inside the same model and helps when two languages do not share the same space inside a single model .\nFor example, \newcite{cross-ling/zhang2019improving} used bilingual representations by creating cross-lingual word embeddings using a small set of parallel sentences between the high-resource language English and the three low-resource languages Swahili, Tagalog, and Somali, to improve document retrieval performance for these languages.\n\textbf{Open Issues:} While these multilingual models are a tremendous step towards enabling NLP in many languages, possible claims that these are universal language models do not hold. \nFor example, mBERT covers 104 and XLM-R 100 languages,\nwhich is a third of all languages in Wikipedia as outlined earlier. \nFurther, \newcite{multi-ling/wu-dredze-2020-languages} showed that, in particular, low-resource languages are not well-represented in mBERT.\nFigure~\ref{fig:language_families} shows which language families with at least 1 million speakers are covered by mBERT and XLM-RoBERTa\footnote{A language family is covered if at least one associated language is covered. Language families can belong to multiple regions, e.g., Indo-European belongs to Europe and Asia.}. \nIn particular, African and American languages are not well-represented within the transformer models, even though millions of people speak these languages.\nThis can be problematic, as languages from more distant language families are less suited for transfer learning, as \newcite{lauscher2020zero} showed.\n\begin{figure}\n \centering\n \includegraphics[trim=60 330 60\n60,clip,width=0.49\textwidth]{figure_language_families.pdf}\n \caption{Language families with more than 1 million speakers covered by multilingual transformer models.
}\n \\label{fig:language_families}\n\\end{figure}", "id": "d90e7d5d-8ed8-41ee-a381-537e8d432b20", "level": "subsection", "origin_cites_number": 15, "parent_id": "08729569-e9fe-4099-b79e-d05e287b81c7", "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ], [ "section", "Transfer Learning" ], [ "subsection", "Multilingual Language Models" ] ], "subsections": [], "title": "Multilingual Language Models" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1690, 8757, 1695 ], "content": "\\label{sec:ml}\nTraining on a limited amount of data is not unique to natural language processing. Other areas, like general machine learning and computer vision, can be a useful source for insights and new ideas. We already presented data augmentation and pre-training. Another example is Meta-Learning , which is based on multi-task learning. Given a set of auxiliary high-resource tasks and a low-resource target task, meta-learning trains a model to decide how to use the auxiliary tasks in the most beneficial way for the target task. For NLP, this approach has been evaluated on tasks such as sentiment analysis , user intent classification , natural language understanding , text classification and dialogue generation . Instead of having a set of tasks, \\newcite{transfer/rahimi-etal-2019-massively} built an ensemble of language-specific NER models which are then weighted depending on the zero- or few-shot target language.\nDifferences in the features between the pre-training and the target domain can be an issue in transfer learning, especially in neural approaches where it can be difficult to control which information the model takes into account. Adversarial discriminators can prevent the model from learning a feature-representation that is specific to a data source. 
\\newcite{transfer/gui-etal-2017-part-adversarial}, \\newcite{transfer/liu2017adversarial}, \\newcite{transfer/kasai2019adversarial}, \\newcite{domain/griesshaber2020low} and\n \\newcite{transfer/zhou2019dualAdversarial} learned domain-independent representations using adversarial training. \n \\newcite{transfer/kim-etal-2017-cross-adversarial}, \\newcite{domain/chen2018adversarial} and \\newcite{cross-ling/lange2020adversarial} worked with language-independent representations for cross-lingual transfer. These examples show the beneficial exchange of ideas between NLP and the machine learning community.", "id": "ca44ac27-a8bd-46d0-97aa-2a6d9c9defc4", "level": "section", "origin_cites_number": 6, "parent_id": "593ff0cc-d185-411c-9f7c-02ab17399c08", "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ], [ "section", "Ideas From Low-Resource Machine Learning in Non-NLP Communities" ] ], "subsections": [], "title": "Ideas From Low-Resource Machine Learning in Non-NLP Communities" }, { "cite_extract_rate": 1, "cites": [ 115 ], "content": "In this survey, we gave a structured overview of recent work in the field of low-resource natural language processing. Beyond the method-specific open issues presented in the previous sections, we see the comparison between approaches as an important point of future work. Guidelines are necessary to support practitioners in choosing the right tool for their task. In this work, we highlighted that it is essential to analyze resource-lean scenarios across the different dimensions of data-availability. This can reveal which techniques are expected to be applicable in a specific low-resource setting. More theoretic and experimental work is necessary to understand how approaches compare to each other and on which factors their effectiveness depends. , for instance, hypothesized that data augmentation and pre-trained language models yield similar kind of benefits. 
Often, however, new techniques are just compared to similar methods and not across the range of low-resource approaches. While a fair comparison is non-trivial given the different requirements on auxiliary data, we see this endeavour as essential to improve the field of low-resource learning in the future. This could also help to understand where the different approaches complement each other and how they can be combined effectively.\n\\section*{Acknowledgments}\nThe authors would like to thank Annemarie Friedrich for her valuable feedback and the anonymous reviewers for their helpful comments.\nThis work has been partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 232722074 – SFB 1102 and the EU Horizon 2020 project ROXANNE under grant number 833635.\n\\bibliography{survey}\n\\bibliographystyle{acl_natbib}\n\\appendix\n\\begin{table*}\n\\footnotesize\n\\centering\n\\begin{tabular}{lll}\n\\toprule\n& Low-resource surveys & \\newcite{survey/cieri2016selection} , \\newcite{survey/magueresse2020low} \\\\\n\\midrule\n\\multirow{9}{*}{\\rotatebox{90}{\\textit{Method-specific}}}\n& Active learning & \\newcite{active/olsson2009literature}, \\newcite{active/settles2009survey}, \\newcite{active/aggarwal2014survey} \\\\\n& Distant supervision & \\newcite{survey/roth2013survey}, \\newcite{survey/smirnova2018relation}, \\newcite{survey/shi2019brief}. 
\\\\\n& Unsupervised domain adaptation & \\newcite{survey/wilson2018domain}, \\newcite{survey/ramponi2020neural} \\\\\n& Meta-Learning & \\newcite{survey/hospedales2020meta} \\\\\n& Multilingual transfer & \\newcite{survey/steinberger2012survey}, \\newcite{cross-ling/ruder2019survey} \\\\\n& LM pre-training & \\newcite{survey/rogers2020primer}, \\newcite{survey/qiu2020pre} \\\\\n& Machine translation & \\newcite{survey/liu2019survey} \\\\\n& Label noise handling & \\newcite{survey/frenay2013classification}, \\newcite{survey/algan2019image} \\\\\n& Transfer learning & \\newcite{pan2009survey}, \\newcite{weiss2016survey}, \\newcite{tan2018survey} \\\\\n\\midrule\n\\multirow{5}{*}{\\rotatebox{90}{\\textit{Language-}}} \n& African languages & \\newcite{survey/grover2010overview}, \\newcite{survey/de2011introduction} \\\\\n& Arabic languages & \\newcite{survey/al2018deep}, \\newcite{survey/guellil2019arabic}, \\newcite{survey/younes2020language} \\\\\n& American languages & \\newcite{survey/mager2018challenges} \\\\\n& South-Asian languages & \\newcite{survey/daud2017urdu}, \\newcite{survey/banik2019bengali}, \\newcite{survey/harish2020comprehensive} \\\\\n& East-Asian languages & \\newcite{survey/yude2011brief} \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Overview of existing surveys on low-resource topics.}\n\\label{tab:surveys}\n\\end{table*}", "id": "27505e6f-b9d1-470b-89da-8ce753a42523", "level": "section", "origin_cites_number": 1, "parent_id": "593ff0cc-d185-411c-9f7c-02ab17399c08", "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ], [ "section", "Discussion and Conclusion" ] ], "subsections": [], "title": "Discussion and Conclusion" }, { "cite_extract_rate": 0, "cites": [], "content": "There is a growing body of task- and language-specific surveys concerning low-resource topics. 
We list these surveys in Table~\\ref{tab:surveys} as a starting point for a more in-depth reading regarding specific topics.", "id": "91981cb9-1721-4834-a392-458a664c449d", "level": "section", "origin_cites_number": 0, "parent_id": "593ff0cc-d185-411c-9f7c-02ab17399c08", "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ], [ "section", "Existing Surveys on Low-Resource Topics and Languages" ] ], "subsections": [], "title": "Existing Surveys on Low-Resource Topics and Languages" }, { "cite_extract_rate": 0.388888888888888, "cites": [ 4253, 7165, 4252, 4237, 8565, 2222 ], "content": "\\label{app:tasks}\nWhile a large number of labeled resources for English are available for many popular NLP tasks, this is not the case for the majority of low-resource languages. To measure (and visualize as done in Figure \\ref{fig:applications} in the main paper) which applications are accessible to speakers of low-resource languages we examined resources for six different languages, ranging from high- to low-resource languages for a fixed set of tasks of varying complexity, ranging from basic tasks, such as tokenization, to higher-level tasks, such as question answering.\nFor this short study, we have chosen the following languages. The number of speakers are the combined L1 and L2 speakers according to \\newcite{intro/Ethnologue2019}.\n\\begin{itemize}\n\\item[(1)] English: The most high-resource language according to the common view and literature in the NLP community.\n\\item[(2)] Yoruba: An African language, which is spoken by about 40 million speakers and contained in the EXTREME benchmark . \nEven with that many speakers, this language is often considered as a low-resource language and it is still discussed whether this language is also endangered .\n\\item[(3)] Hausa: An African language with over 60 million speakers. 
It is not covered in EXTREME or the universal dependencies project .\n\item[(4)] Quechua: A language family encompassing about 8 million speakers, mostly in Peru.\n\item[(5)] Nahuatl and (6) Estonian: Both have between 1 and 2 million speakers, but are spoken in very different regions (North America \& Europe).\n\end{itemize}\nAll speaker numbers are according to \newcite{intro/Ethnologue2019} and reflect the total number of users (L1 + L2). The tasks were chosen from a list of popular NLP tasks\footnote{\url{https://en.wikipedia.org/wiki/Natural\_language\_processing\#Common\_NLP\_Tasks}}.\nWe selected two tasks for the lower-level groups and three tasks for the higher-level groups, which reflects the application diversity with increasing complexity.\nTable~\ref{tab:tasks-covered} shows which tasks were addressed for each language.\nWord segmentation, lemmatization, part-of-speech tagging, sentence breaking and (semantic) parsing are covered for Yoruba and Estonian by treebanks from the universal dependencies project . \nCusco Quechua is listed as an upcoming language in the UD project, but no treebank is accessible at this moment.\nThe WikiAnn corpus for named entity recognition has resources and tools for NER and sentence breaking for all six languages.\nLemmatization resources for Nahuatl were developed by \newcite{app/martinez2012computer}, and \newcite{app/lozano2013syntactic} developed resources for part-of-speech tagging, tokenization and parsing of Quechuan. 
\nThe CoNLL conference and SIGMORPHON organized two shared tasks for morphological reinflection which provided lemmatization resources for many languages, including Quechuan .\nBasic resources for simple semantic role labeling and entity linking were developed during the LORELEI program for many low-resource languages , including resources for Yoruba and Hausa (even though the latter "fell short" according to the authors).\nEstonian coreference resolution is targeted by \newcite{app/kubler2016multilingual}, but the available resources are very limited. Estonian sentiment analysis is addressed by \newcite{app/pajupuu2016identifying}.\nAll languages are covered by the multilingual fasttext embeddings and byte-pair-encoding embeddings . Yoruba, Hausa and Estonian are covered by mBERT or XLM-RoBERTa as well.\nText summarization is addressed for Estonian by \newcite{app/muurisep2005estsum} and for Hausa by \newcite{app/bashir2017automatic}.\nThe EXTREME benchmark covers question answering and natural language inference tasks for Yoruba and Estonian (besides NER, POS tagging and more).\nPublicly available systems for optical character recognition support all six languages .\nAll these tasks are supported for the English language as well, and most often, the English datasets are many times larger and of much higher quality. Some of the previously mentioned datasets were automatically translated, as in the EXTREME benchmark for several languages. As outlined in the main paper, we do not claim that all tasks marked in the Table yield high-performance models, but we instead indicate if any resources or models can be found for a language.\n\begin{landscape}\n\setlength{\tabcolsep}{3pt}\n\begin{table}\n\footnotesize\n\begin{tabular}{llccccc} \toprule\nGroup & Task\n& Yoruba & Hausa & Quechuan & Nahuatl & Estonian \\ \midrule\n & Num-Speakers\n & 40 mil. & 60 mil. & 8 mil. & 1.7 mil. 
& 1.3 mil.\\\\ \\midrule\n\\multirow{2}{*}{Text processing} \n & Word segmentation\n & \\ding{51} & \\ding{51} & \\ding{51} & \\ding{51} & \\ding{51} \\\\\n & Optical character recognition\n & & & & & \\\\ \\midrule\n\\multirow{2}{*}{Morphological analysis} \n& Lemmatization / Stemming\n& & & & & \\\\\n & Part-of-Speech tagging\n & & & \\newcite{app/lozano2013syntactic} & \\ding{55} & \\\\ \\midrule\n\\multirow{2}{*}{Syntactic analysis} \n & Sentence breaking\n & \\ding{51} & \\ding{51} & \\ding{51} & \\ding{51} & \\ding{51} \\\\\n & Parsing\n & & \\ding{55} & & \\ding{55} & \\\\ \\midrule\n\\multirow{2}{*}{Distributional semantics} \n & Word embeddings\n & FT, BPEmb & FT, BPEmb & FT, BPEmb & FT, BPEmb & FT, BPEmb \\\\\n & Transformer models\n & mBERT & XLM-R & \\ding{55} & \\ding{55} & mBERT, XLM-R \\\\ \\midrule\n\\multirow{2}{*}{Lexical semantics} \n & Named entity recognition\n & & & & & \\\\\n & Sentiment analysis\n & \\ding{55} & \\ding{55} & \\ding{55} & \\ding{55} & \\\\ \\midrule\n\\multirow{3}{*}{Relational semantics} \n & Relationship extraction\n & \\ding{55} & \\ding{55} & \\ding{55} & \\ding{55} & \\ding{55} \\\\\n & Semantic Role Labelling\n & & & \\ding{55} & \\ding{55} & \\ding{55} \\\\\n & Semantic Parsing\n & & \\ding{55} & \\ding{55} & \\ding{55} & \\\\ \\midrule\n\\multirow{3}{*}{Discourse} \n & Coreference resolution\n & \\ding{55} & \\ding{55} & \\ding{55} & \\ding{55} & \\\\\n & Discourse analysis\n & \\ding{55} & \\ding{55} & \\ding{55} & \\ding{55} & \\\\\n & Textual entailment\n & & \\ding{55} & \\ding{55} & \\ding{55} & \\\\ \\midrule\n\\multirow{3}{*}{Higher-level NLP} \n & Text summarization\n & \\ding{55} & & \\ding{55} & \\ding{55} & \\\\\n & Dialogue management \n & \\ding{55} & \\ding{55} & \\ding{55} & \\ding{55} & \\ding{55} \\\\\n & Question answering (QA)\n & & \\ding{55} & \\ding{55} & \\ding{55} & \\\\ \\midrule\n & SUM\n & 13 & 10 & 8 & 6 & 15\\\\ \\bottomrule\n\\end{tabular}\n\\caption{Overview of tasks covered by six 
different languages. Note that this list is non-exhaustive and due to space reasons we only give one reference per language and task. }\n\\label{tab:tasks-covered}\n\\setlength{\\tabcolsep}{6pt}\n\\end{table}\n\\end{landscape}\n\\end{document}", "id": "ca88a9a2-e919-43be-b4cb-40565c35c380", "level": "section", "origin_cites_number": 18, "parent_id": "593ff0cc-d185-411c-9f7c-02ab17399c08", "prefix_titles": [ [ "title", "A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios" ], [ "section", "Complexity of Tasks" ] ], "subsections": [], "title": "Complexity of Tasks" } ]
100
[ 7165, 4236, 4237, 4238, 115, 4240, 4239, 4241, 8565, 7769, 4253, 4242, 4243, 1621, 2466, 4244, 1695, 4245, 4246, 8756, 38, 826, 8745, 9, 4247, 8328, 4250, 4248, 4249, 4251, 1690, 8757, 4252, 2222 ]
1.288654
[ "Renhe Jiang", "Du Yin", "Zhaonan Wang", "Yizhuo Wang", "Jiewen Deng", "Hangchen Liu", "Zekun Cai", "Jinliang Deng", "Xuan Song", "Ryosuke Shibasaki" ]
DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction
2021
2021-08-20T10:08:26Z
cs.LG
Nowadays, with the rapid development of IoT (Internet of Things) and CPS (Cyber-Physical Systems) technologies, big spatiotemporal data are being generated from mobile phones, car navigation systems, and traffic sensors. By leveraging state-of-the-art deep learning technologies on such data, urban traffic prediction has drawn a lot of attention in AI and Intelligent Transportation System community. The problem can be uniformly modeled with a 3D tensor (T, N, C), where T denotes the total time steps, N denotes the size of the spatial domain (i.e., mesh-grids or graph-nodes), and C denotes the channels of information. According to the specific modeling strategy, the state-of-the-art deep learning models can be divided into three categories: grid-based, graph-based, and multivariate time-series models. In this study, we first synthetically review the deep traffic models as well as the widely used datasets, then build a standard benchmark to comprehensively evaluate their performances with the same settings and metrics. Our study named DL-Traff is implemented with two most popular deep learning frameworks, i.e., TensorFlow and PyTorch, which is already publicly available as two GitHub repositories \url{https://github.com/deepkashiwa20/DL-Traff-Grid} and \url{https://github.com/deepkashiwa20/DL-Traff-Graph}. With DL-Traff, we hope to deliver a useful resource to researchers who are interested in spatiotemporal data analysis.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "59d19c15-d2fe-4919-9357-dccad053bdfa", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction" ] ], "subsections": [ "c3d164ea-0fac-49d9-820a-43318a641edc", "db39650d-72d6-4e23-b522-8eaf2152ba44", "86e2ba0d-2f46-4b09-9dfc-24ff62f7ac9c", "23050b21-5fab-45c0-9c6c-dd4a43d7184b", "706a4dd3-3228-4f95-9ed5-19ffe9a3a499", "c0ce4579-96dc-48f8-84c3-543a96d159da", "a3286808-0343-4af3-a22a-dcbfba64d4f5" ], "title": "root" }, { "cite_extract_rate": 0.5, "cites": [ 22, 24, 23, 21, 8312, 25 ], "content": "Nowadays, with the rapid development of IoT (Internet of Things) and CPS (Cyber-Physical Systems) technologies, big spatiotemporal data are being generated from mobile phones, car navigation systems, and traffic sensors. Based on such data, urban traffic prediction has been recognized as a significant research problem and a key technique for building smart cities, especially intelligent transportation systems. From 2014 to 2017, encouraged by the huge success of deep learning technologies in the Computer Vision and Natural Language Processing fields, researchers in the Intelligent Transportation System community started to apply Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN) to the well-established traffic prediction task, and achieved unprecedented success. Following these pioneers, researchers have leveraged the state-of-the-art deep learning technologies to develop various prediction models and published a large number of studies at the major AI and transportation venues as listed in Table \ref{tab:modelsummary}. 
Although the prediction tasks may slightly differ from each other, they can all be categorized as deep traffic models.\n\begin{figure}[h]\n\t\centering\t\n\t\includegraphics[width=0.45\textwidth]{./figure/problem.png}\n\t\caption{Grid-Based Traffic and Graph-Based Traffic.}\n\t\label{fig:intro}\n\end{figure}\nWhether based on a grid or a graph, the traffic data illustrated in Fig.\ref{fig:intro} can be uniformly represented with a 3D tensor $\mathbb{R}^{T\times N \times C}$, where T denotes the size of the temporal domain (i.e., timeslots with a constant sampling rate), N denotes the size of the spatial domain (i.e., mesh-grids or graph-nodes), and C denotes the number of information channels. For instance, assuming 300 traffic sensors are deployed to record traffic speed (channel 1) and volume (channel 2) every 30 minutes for 100 consecutive days, the total data can be represented by the tensor $\mathbb{R}^{4800\times 300 \times 2}$. Besides traffic volume and speed, channels can also be used to store crowd density, taxi demand, traffic accidents, car/ride-hailing orders, and crowd/taxi/bike inflow and outflow. More specifically, the grid-based model meshes the entire spatial domain into $H\times W$ fine-grained mesh-grids and converts the 3D representation into the 4D tensor format $\mathbb{R}^{T\times H\times W\times C}$. The graph-based model introduces a directed or undirected graph $G$ = $(V,E)$ to utilize the topological structure of the urban road network for modeling, where $v\in V$ is a node, $|V|$ = $N$, and $e\in E$ is an edge. The multivariate time-series model naturally takes the N spatial units as N time-series variates and shares the same representation, i.e., $\mathbb{R}^{T\times N \times C}$, with the graph-based model. 
Thus, the deep learning models listed in Table \ref{tab:modelsummary} can be divided into three groups according to the specific modeling strategy along the spatial axis.\nThe citation numbers in Table \ref{tab:modelsummary} indicate how much attention these studies have drawn in the AI and data science community. However, due to the sheer number of related works, it is difficult for researchers to keep up with the specific details of each model. More importantly, the evaluations of this family of models are still confusing and not well organized. For instance, some models demonstrated performance superior to existing ones by using different datasets or metrics as shown in Table \ref{tab:modelsummary}, while some models utilized a self-designed objective function or employed extra data sources such as Point-of-Interest (POI) data or navigation app data to achieve better prediction accuracy. To address the problems above, a concise but precise survey will be a great help for researchers involved in this emerging topic. However, a survey alone is not enough. It is also important to conduct standard performance evaluations to examine the true function of each spatial and temporal component by using the same datasets, metrics, and other experimental settings. This paper fills these needs by providing a concise survey followed by a comprehensive benchmark evaluation of the recent deep traffic models. \nWe first define two benchmark tasks in Section 2: one is single-step prediction of inflow and outflow based on grid-based traffic data; the other is multi-step prediction of traffic speed based on graph-based data. Second, in Section 3, we investigate both grid-based and graph-based datasets and select open and widely used ones as our benchmark data, including TaxiBJ, BikeNYC, TaxiNYC, METR-LA, PeMS-BAY, and PeMSD7M. 
Next, in Section 4, we decompose the models into spatial and temporal units and present a roadmap of how the models evolve along the spatial and temporal axes. Further, we draw the architectures of a set of representative models (e.g., ST-ResNet, DMVST-Net, STDN, DeepSTN+, STGCN, DCRNN, Graph WaveNet) in an intuitive and comparative manner. Then, in Section 5, we conduct a comprehensive evaluation of both the grid-based and graph-based models by using the benchmark tasks and datasets under the same settings and metrics (RMSE, MAE, MAPE). In Section 6, we briefly introduce the implementation details, the availability, and the usability of our benchmark. Finally, we give our conclusion in Section 7. \nThe contributions of our work are summarized as follows:\n\begin{itemize}\n\item We give a concise but concrete survey on the recent deep traffic models. The technical details and the evolution are clearly summarized along the spatial and temporal axes.\n\item We carefully select two traffic flow prediction tasks, four grid-based traffic datasets, and three graph-based traffic datasets, and implement a wide range of grid/graph-based state-of-the-art models to form a complete benchmark called \textbf{DL-Traff}.\n\item On this benchmark, we conduct a comprehensive evaluation of the effectiveness and efficiency of the state-of-the-art models.\n\item Our benchmark is implemented with the two most popular deep learning frameworks, i.e., TensorFlow and PyTorch. 
\\textbf{DL-Traff} is already publicly available as two GitHub repositories \\url{https://github.com/deepkashiwa20/DL-Traff-Grid} and \\url{https://github.com/deepkashiwa20/DL-Traff-Graph}.\n\\end{itemize}\nWith DL-Traff, (1) users can quickly grasp the technical details about the state-of-the-art deep spatiotemporal models; (2) users can smoothly reproduce the prediction results reported in this paper and use them as the baselines; (3) users can easily launch a new deep solution with either TensorFlow or PyTorch for not only traffic flow prediction tasks, but also for other spatiotemporal problems such as anomaly/accident, electricity consumption, air quality, etc.", "id": "c3d164ea-0fac-49d9-820a-43318a641edc", "level": "section", "origin_cites_number": 12, "parent_id": "59d19c15-d2fe-4919-9357-dccad053bdfa", "prefix_titles": [ [ "title", "DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.583333333333333, "cites": [ 24, 31, 27, 30, 33, 29, 28, 21, 26, 8312, 25, 23, 34, 32 ], "content": "In this paper, we employ the following two prediction tasks into our benchmark. \n\\begin{enumerate}\n \\item Grid-based inflow and outflow prediction proposed by. The problem is to predict how many taxis/bikes will flow into or out from each mesh-grid in the next time interval. It takes $\\alpha$ steps of historical observations as input and gives the next step prediction as follows: [$X_{t-(\\alpha-1)}$,...,$X_{t-1}$,$X_{t}$] $\\rightarrow$ $X_{t+1}$, where $X_i$ $\\in$ $\\mathbb{R}^{H\\times W\\times C}$, $H,W$ are the indexes for the mesh, and C is equal to 2, respectively used for inflow and outflow.\n \\item Graph-based traffic speed prediction as defined in . 
In order to make a variation to the first task, we define this task as multi-step-to-multi-step one as follows: [$X_{t-(\\alpha-1)}$,...,$X_{t-1}$,$X_{t}$] $\\rightarrow$ [$X_{t+1}$,$X_{t+2}$,...,$X_{t+\\beta}$], where $X_i$ $\\in$ $\\mathbb{R}^{N\\times C}$, $\\alpha$/$\\beta$ is the number of steps of observations/predictions, $N$ is the number of traffic sensors (i.e., nodes), and $C$ is equal to 1 that only stores the traffic speed.\n\\end{enumerate}\n\\begin{table*}[t]\n \\small\n\t\\centering\n\t\\caption{Summary of The Public Traffic Datasets}\n\t\\label{tab:datasummary}\n\t\\begin{tabular*}{17.7cm}{@{\\extracolsep{\\fill}}l|l|l|l|l|l}\n\t\t\\hline\n\t\t\\textbf{Grid-Based} & \\textbf{Reference} & \\textbf{Data Description / Data Source} & \\textbf{Spatial Domain} & \\textbf{Time Period} & \\textbf{Time Interval}\\\\\n\t\t\\hline\n\t\t\\multirow{2}{*}{TaxiBJ*}& & \\multirow{2}{*}{Taxi In-Out Flow / Taxi GPS Data of Beijing} & \\multirow{2}{*}{32$\\times$32 grids} & 2013/7/1$\\sim$2016/4/10 & \\multirow{2}{*}{30 minutes}\\\\\n\t\t& & & & *Four Time Periods & \\\\\n\t\t\\hline\n\t\tTaxiBJ-I* & & Taxi In-Out Flow / Taxi GPS Data of Beijing (TDrive) & 32$\\times$32 grids & 2015/2/1$\\sim$2015/6/2 & 60 minutes\\\\\n\t\t\\hline\n\t\t\\multirow{2}{*}{BikeNYC*} & \\multirow{2}{*}{} & Bike In-Out Flow / Bike Trip Data of New York City & \\multirow{2}{*}{16$\\times$8 grids} & \\multirow{2}{*}{2014/4/1$\\sim$2014/9/30} & \\multirow{2}{*}{60 minutes}\\\\\n\t\t& & Citi Bike: \\url{https://www.citibikenyc.com/system-data} && \\\\\n\t\t\\hline\n\t\tBikeNYC-I* & & Bike In-Out Flow / Bike Trip Data of New York City & 21$\\times$12 grids & 2014/4/1$\\sim$2014/9/30 & 60 minutes \\\\\n\t\t\\hline\n\t\tBikeNYC-II* & & Bike In-Out Flow / Bike Trip Data of New York City & 10$\\times$20 grids & 2016/7/1$\\sim$2016/8/29 & 30 minutes\\\\\n\t\t\\hline\n\t\t\\multirow{3}{*}{TaxiNYC*} & \\multirow{3}{*}{} & Taxi In-Out Flow / Taxi Trip Data of New York City & 
\\multirow{3}{*}{10$\\times$20 grids} & \\multirow{3}{*}{2015/1/1$\\sim$2015/3/1} & \\multirow{3}{*}{30 minutes}\\\\\n\t\t&& The New York City Taxi\\&Limousine Commission && \\\\\n\t\t&& (TLC) \\url{https://www1.nyc.gov/site/tlc/about/data.page} && \\\\\n\t\t\\toprule\n\t\t\\hline\n\t\t\\textbf{Graph-Based} & \\textbf{Reference} & \\textbf{Data Description / Data Source} & \\textbf{Spatial Domain} & \\textbf{Time Period} & \\textbf{Time Interval}\\\\\n\t\t\\hline\n\t\t\\multirow{4}{*}{METR-LA*}\n\t\t& &\n\t\tTraffic Speed Sensors in Los Angeles County & \\multirow{4}{*}{207 sensors} & \\multirow{4}{*}{2012/3/1$\\sim$2012/6/30} & \\multirow{4}{*}{5 minutes}\\\\\n\t\t&& Los Angeles Metropolitan Transportation Authority& & &\\\\\n\t\t&& *Collaborated with University of Southern California& & &\\\\\n\t\t&& \\url{https://imsc.usc.edu/platforms/transdec/} & & &\\\\\n\t\t\\hline\n\t\t\\multirow{3}{*}{PeMS-BAY*} & & Traffic Speed Sensors in California & \\multirow{3}{*}{325 sensors} & \\multirow{3}{*}{2017/1/1$\\sim$2017/5/31} & \\multirow{3}{*}{5 minutes}\\\\\n\t\t&& Caltrans Performance Measurement System (PeMS) & &\\\\\n\t\t&& PeMS: \\url{http://pems.dot.ca.gov/} & & &\\\\\n\t\t\\hline\n\t\tPeMSD7(M)* & & Traffic Speed Sensors in California (PeMS) & 228 sensors& 2012/5/1$\\sim$2012/6/30 & 5 minutes\\\\\n\t\tPeMS03* & & Traffic Speed Sensors in California (PeMS) & 358 sensors & 2018/9/1$\\sim$2018/11/30& 5 minutes\\\\\n\t\tPeMSD4(PeMS04)* & & Traffic Speed Sensors in California (PeMS) & 307 sensors & 2018/1/1$\\sim$2018/2/28& 5 minutes\\\\\n\t\tPeMS07* & & Traffic Speed Sensors in California (PeMS) & 883 sensors & 2017/5/1$\\sim$2017/8/31& 5 minutes\\\\\n\t\tPeMSD8(PeMS08)* & & Traffic Speed Sensors in California (PeMS) & 170 sensors & 2016/7/1$\\sim$2016/8/31& 5 minutes\\\\\n\t\tPeMSD4-I* & & Traffic Speed Sensors in California (PeMS) & 3848 sensors & 2018/1/1$\\sim$2018/2/28 & 5 minutes\\\\\n\t\tPeMSD8-I* & & Traffic Speed Sensors in California (PeMS) & 1979 sensors & 
2016/7/1$\\sim$2016/8/31& 5 minutes\\\\\n\t\tPeMSD10* & & Traffic Speed Sensors in California (PeMS) & 608 sensors &2018/1/1$\\sim$2018/3/31& 15 minutes\\\\\n\t\tTraffic(PeMS)* & & Traffic Speed Sensors in California (PeMS) & 862 sensors & 2015/1/1$\\sim$2016/12/31& 60 minutes\\\\\n\t\t\\hline\n\t\tLOOP-SEATTLE* & & Traffic Speed Sensors in Greater Seattle Area & 323 sensors & 2015/1/1$\\sim$2015/12/31& 5 minutes\\\\\n\t\t\\hline\n\t\tTaxiSZ* & & Taxi Speed on Roads / Taxi GPS Data of Shenzhen & 156 roads & 2015/1/1$\\sim$2015/1/31 & 15 minutes \\\\\n\t\t\\hline\n\t\t\\multirow{2}{*}{HZJTD*} & \\multirow{2}{*}{} & Traffic Speed Sensors in Hangzhou & \\multirow{2}{*}{202 sensors} & \\multirow{2}{*}{2013/10/16$\\sim$2014/10/3} & \\multirow{2}{*}{15 minutes}\\\\\n\t\t& & Hangzhou Integrated Transportation Research Center & &\\\\\n\t\t\\toprule\n\t\\end{tabular*}\n\\end{table*}", "id": "db39650d-72d6-4e23-b522-8eaf2152ba44", "level": "section", "origin_cites_number": 24, "parent_id": "59d19c15-d2fe-4919-9357-dccad053bdfa", "prefix_titles": [ [ "title", "DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction" ], [ "section", "Problem" ] ], "subsections": [], "title": "Problem" }, { "cite_extract_rate": 0, "cites": [], "content": "The public datasets for urban traffic prediction are summarized in Table \\ref{tab:datasummary}, where the reference, source, and spatial and temporal spec are enumerated. 
We pick up some widely used ones as our benchmark datasets.\n\begin{figure}[h]\n\t\centering\n\t\includegraphics[width=0.49\textwidth]{./figure/dataset_one.png}\n\t\caption{Visualization of METR-LA.}\n\t\label{fig:dataset_one}\n\end{figure}", "id": "86e2ba0d-2f46-4b09-9dfc-24ff62f7ac9c", "level": "section", "origin_cites_number": 0, "parent_id": "59d19c15-d2fe-4919-9357-dccad053bdfa", "prefix_titles": [ [ "title", "DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction" ], [ "section", "Dataset" ] ], "subsections": [ "1e1d840c-dfcc-4de8-b328-03ffae66ed07", "1c054657-e289-43fb-bf15-a3497000fd76" ], "title": "Dataset" }, { "cite_extract_rate": 0.5, "cites": [ 21, 24 ], "content": "\noindent\textbf{TaxiBJ.} This is taxi in-out flow data published by , created from the taxicab GPS data in Beijing from four separate time periods: 2013/7/1-2013/10/30, 2014/3/1-2014/6/30, 2015/3/1-2015/6/30, and 2015/11/1-2016/4/10. Based on the same underlying taxi GPS data (T-Drive), a similar dataset denoted as TaxiBJ-I was created by .\n\noindent\textbf{BikeNYC.} This is bike in-out flow data of New York City from 2014/4/1 to 2014/9/30 used by . The original bike trip data is published by Citi Bike, NYC's official bike-sharing system, which includes: trip duration, starting and ending station IDs, and start and end times. Similar datasets \textbf{BikeNYC-I}, \textbf{BikeNYC-II} were used by and respectively. These two will be used in our experiment due to the larger spatial domain.\n\noindent\textbf{TaxiNYC.} This is taxi in-out flow data of New York City from 2015/1/1$\sim$2015/3/1 used by . 
The original taxi trip data is published by the New York City Taxi and Limousine Commission (TLC), which includes pick-up and drop-off dates/times, pick-up and drop-off locations, trip distances, itemized fares, driver-reported passenger counts, etc.", "id": "1e1d840c-dfcc-4de8-b328-03ffae66ed07", "level": "subsection", "origin_cites_number": 4, "parent_id": "86e2ba0d-2f46-4b09-9dfc-24ff62f7ac9c", "prefix_titles": [ [ "title", "DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction" ], [ "section", "Dataset" ], [ "subsection", "Grid-Based Traffic Dataset" ] ], "subsections": [], "title": "Grid-Based Traffic Dataset" }, { "cite_extract_rate": 0.75, "cites": [ 24, 31, 35, 27, 22, 29, 33, 28, 21, 8312, 25, 23, 34, 36, 32 ], "content": "\noindent\textbf{METR-LA.} This is Los Angeles traffic data published by . The data are collected from 207 highway sensors within 4 months from 2012/3/1 to 2012/6/30. Quite a number of studies have used this dataset as shown in Table \ref{tab:datasummary}. For an intuitive view, a data visualization is given in Fig.\ref{fig:dataset_one}.\n\noindent\textbf{PeMS-BAY.} This is a traffic flow dataset collected from California Transportation Agencies Performance Measurement System (PeMS). It contains 325 traffic sensors in the Bay Area from 2017/1/1 to 2017/5/31. Many studies have also generated a variety of PeMS datasets by using the same source. \n\noindent\textbf{PeMSD7M.} This traffic dataset is created and published by , also collected from PeMS. It covers 228 traffic sensors from 2012/5/1 to 2012/6/30 with a 5-minute sampling rate on weekdays.\n\noindent\textbf{Summary.} The taxi and bike trip data published by Citi Bike and TLC of New York City and the traffic sensor data from PeMS of California are taken as three trustworthy and widely used data sources for traffic prediction. Researchers can easily access the data through the URLs listed in Table \ref{tab:datasummary}. 
\n\\begin{table*}[t]\n \\small\n\t\\centering\n\t\\caption{Base Technologies Employed for Spatial and Temporal Modeling}\n\t\\label{tab:basetechnology}\n\t\\begin{tabular*}{17.8cm}{@{\\extracolsep{\\fill}}l|lll|llll|l|lll|llll}\n\t\t\\hline\n\t\t& \\multicolumn{3}{c|}{Spatial Axis} & \\multicolumn{4}{c|}{Temporal Axis} & & \\multicolumn{3}{c|}{Spatial Axis} & \\multicolumn{4}{c}{Temporal Axis}\\\\\n\t\t\\hline\n\t\tmodels & CNN & GCN & Attn. & LSTM & GRU & TCN & Attn. & models & CNN & GCN & Attn. & LSTM & GRU & TCN & Attn. \\\\\n\t\t\\hline\n\t\tST-ResNet & \\checkmark &&&&&&& STGCN & & \\checkmark & & & & \\checkmark & \\\\ \n\t\tDMVST-Net & \\checkmark &&& \\checkmark &&&& GaAN(GGRU) & & \\checkmark&\\checkmark &&\\checkmark &&\\\\\n\t\tSTDN & \\checkmark &&& \\checkmark &&& \\checkmark & DCRNN(GCGRU) & &\\checkmark &&& \\checkmark && \\\\\n\t\tDeepSTN+ & \\checkmark &&&&&& & Multi-graph &&\\checkmark&&\\checkmark&&&\\\\\n\t\tLSTNet & \\checkmark & & & & \\checkmark & \\checkmark& \\checkmark & ASTGCN && \\checkmark &\\checkmark &&&&\\checkmark \\\\\n\t\tGeoMAN & & &\\checkmark &\\checkmark & & &\\checkmark & TGCN && \\checkmark &&&\\checkmark&&\\\\\n\t\tTPA-LSTM &&& & \\checkmark & & \\checkmark &\\checkmark & Graph WaveNet && \\checkmark &&&& \\checkmark &\\\\\n\t\tTransformer &&& & & & &\\checkmark & MTGNN && \\checkmark &&&& \\checkmark &\\\\\n\t\tST-MetaNet &&&\\checkmark &&\\checkmark&&& STGNN &&\\checkmark &\\checkmark &&\\checkmark&&\\checkmark\\\\\n\t\tGMAN &&&\\checkmark &&&&\\checkmark &AGCRN & &\\checkmark &&& \\checkmark & &\\\\\n\t\t\\hline\n\t\\end{tabular*}\n\\end{table*}", "id": "1c054657-e289-43fb-bf15-a3497000fd76", "level": "subsection", "origin_cites_number": 20, "parent_id": "86e2ba0d-2f46-4b09-9dfc-24ff62f7ac9c", "prefix_titles": [ [ "title", "DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction" ], [ "section", "Dataset" ], [ "subsection", "Graph-Based Traffic Dataset" ] ], "subsections": [], "title": 
"Graph-Based Traffic Dataset" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 24, 8313, 31, 35, 37, 38, 22, 21, 8312, 25, 23, 39, 34, 32 ], "content": "Complex spatial and temporal dependencies are the key challenges in urban traffic prediction tasks.\nTemporally, future prediction depends on the recent observations as well as the past periodic patterns; spatially, the traffic states in a certain mesh-grid or graph-node are affected by the nearby ones as well as distant ones. To capture the temporal dependency, LSTM and its simplified variant GRU are respectively utilized by the models as shown in Table \ref{tab:basetechnology}. In parallel with the RNNs, 1D CNN and its enhanced version TCN are also employed as the core technology for temporal modeling, and demonstrate superior time efficiency and effectiveness comparable to LSTM and GRU. \nOn the other hand, to capture the spatial dependency, grid-based models simply use the normal convolution operation thanks to the natural Euclidean property of grid spacing; graph-based models leverage the graph convolution in non-Euclidean space by involving the adjacency relation $A\in$ $\mathbb{R}^{N*N}$ between each pair of spatial units. Meanwhile, the attention mechanism, popularized by the Transformer, has rapidly taken over the AI community, from natural language (GPT-3) to vision, since 2020. Thus, attention is also introduced as a base technology for modeling both spatial and temporal dependencies. \nWe select the most representative models in Table \ref{tab:modelsummary} and summarize the base technologies employed by each model for spatial and temporal modeling in Table \ref{tab:basetechnology}. 
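To make the spatial-axis entries of Table \ref{tab:basetechnology} concrete, a generic first-order graph convolution over the adjacency relation $A$ can be sketched as below. This is a minimal NumPy sketch with symmetric normalization; the function name and shapes are illustrative and not taken from any specific surveyed model.

```python
import numpy as np

def graph_conv(X, A, W):
    """One generic graph-convolution step: H = norm(A + I) @ X @ W.

    X: (N, C) node features, A: (N, N) adjacency, W: (C, F) weight matrix.
    Self-loops are added and the symmetric normalization
    D^{-1/2} (A + I) D^{-1/2} is applied before propagation.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # normalized adjacency
    return A_norm @ X @ W                     # propagate, then transform
```

Grid-based models replace this propagation step with an ordinary 2D convolution, which is possible because mesh-grids form a regular Euclidean lattice.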
On the other hand, for better understanding, we simplify and plot the network architectures in a unified manner for five grid-based models including\nST-ResNet, DMVST-Net, Periodic-CRN(PCRN), STDN, and DeepSTN+ as Fig.\\ref{fig:architectures}, and five graph-based models, namely STGCN, DCRNN, Graph WaveNet, ASTGCN, and GMAN as Fig.\\ref{fig:architectures1}. Through Fig.\\ref{fig:architectures}$\\sim$\\ref{fig:architectures1}, we can easily understand how the spatial and temporal modules listed in Table \\ref{tab:basetechnology} are assembled to form an integrated model. \nMoreover, we describe how the employed technologies are evolving along the spatial and temporal axis for both grid-based and graph-based models in the next two subsections. Note that the multivariate time-series (MTS) models such as LSTNet, TPA-LSTM, GeoMAN, and Transformer are also gradually evolving along the spatial and temporal axis. From the spatial perspective, they focus on correlation/dependence between variates; from the temporal perspective, they aim to utilize the periodic patterns occurred in time series. But due to space limitations, we don't expand the details of those MTS models in this paper.", "id": "23050b21-5fab-45c0-9c6c-dd4a43d7184b", "level": "section", "origin_cites_number": 21, "parent_id": "59d19c15-d2fe-4919-9357-dccad053bdfa", "prefix_titles": [ [ "title", "DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction" ], [ "section", "Model" ] ], "subsections": [ "f9c25975-edd8-4784-bcd1-0b0a149168bc", "f509b7e2-496e-42c4-8c44-c0803a49aa44" ], "title": "Model" }, { "cite_extract_rate": 0.611111111111111, "cites": [ 22, 24, 40, 23, 32, 29, 33, 25, 21, 8312, 34 ], "content": "ST-ResNet is the earliest and the most representative grid-based deep learning method for traffic in-out flow prediction. 
It converts the 4D tensor ($T$,$H$,$W$,$C$) into a 3D tensor ($H$,$W$,$T$*$C$) by concatenating the channels at each time step so that a CNN can be used to capture spatial dependency as for an image. Then, it creatively proposes a set of temporal features called $Closeness$, $Period$, and $Trend$, which correspond to \emph{the most recent observations}, \emph{daily periodicity}, and \emph{weekly trend} respectively. Intuitively, the three parts of the features can be represented by:\n$X^{Closeness}$ = [$X_{t-l_c}$, $X_{t-(l_c-1)}$, ..., $X_{t-1}$]\n$X^{Period}$ = [$X_{t-l_p \times s_p}$, $X_{t-(l_p-1) \times s_p}$, ..., $X_{t-s_p}$]\n$X^{Trend}$ = [$X_{t-l_q \times s_q}$, $X_{t-(l_q-1) \times s_q}$, ..., $X_{t-s_q}$]\n\noindent where $l_c$, $l_p$, $l_q$ are the sequence lengths of \{$Closeness$, $Period$, $Trend$\}, $s_p$ and $s_q$ are the time spans of $Period$ and $Trend$, and the $Closeness$ span $s_c$ is equal to 1 by default. This feature set is inherited not only by later grid-based models including STDN and DeepSTN+, but also by some graph-based models like ASTGCN, and is still regarded as a state-of-the-art temporal feature. To capture the long-range spatial dependency between mesh-grids, it employs Residual Learning to construct sufficiently deep CNNs. Additionally, it utilizes external information including weather, events, and metadata (i.e., DayOfWeek, WeekdayOrWeekend) as auxiliary inputs to enhance spatiotemporal modeling. 
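A minimal sketch of how the $Closeness$/$Period$/$Trend$ index sets can be built from the definitions above; the function name and defaults are illustrative (with a 30-minute sampling rate, $s_p=48$ corresponds to one day and $s_q=336$ to one week):

```python
def st_indices(t, l_c=3, l_p=1, s_p=48, l_q=1, s_q=336):
    """Time-step indices selected for the Closeness / Period / Trend inputs.

    l_c, l_p, l_q are the sequence lengths; s_p, s_q are the spans of
    Period and Trend; t is the target step being predicted.
    """
    closeness = [t - i for i in range(l_c, 0, -1)]          # t-l_c .. t-1
    period = [t - i * s_p for i in range(l_p, 0, -1)]       # t-l_p*s_p .. t-s_p
    trend = [t - i * s_q for i in range(l_q, 0, -1)]        # t-l_q*s_q .. t-s_q
    return closeness, period, trend
```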
\n\\begin{figure*}[h]\n\t\\centering\n\t\\includegraphics[width=0.97\\textwidth]{./figure/architectures.png}\n\t\\caption{Architectures of Representative Grid-Based Models.}\n\t\\label{fig:architectures}\n\\end{figure*}\n\\begin{figure*}[h]\n\t\\centering\n\t\\includegraphics[width=0.97\\textwidth]{./figure/architectures1.png}\n\t\\caption{Architectures of Representative Graph-Based Models.}\n\t\\label{fig:architectures1}\n\\end{figure*}\n\\noindent\\textbf{Improvement along Spatial Axis.} Different from ST-ResNet that takes the entire mesh-grids as input, DMVST-Net and STDN take one grid and its surrounding grids (i.e. $S$$\\times$$S$ region) as input, thus a local CNN is enough to capture spatial dependency only among nearby grids. For the global spatial dependency, DMVST-Net introduces a weighted graph as an extra input, where nodes are the grids, and each edge represents the similarity of two time-series values (i.e. historical taxi demand) between any two grids. The graph will be manually embedded into a feature vector so that it can be concatenated with the other part. Through this, DMVST-Net gains the ability to capture long-range spatial dependency. Furthermore, STDN and consider the local flow information (i.e. flow from one central grid to its surrounding $S$$\\times$$S$ grids) to facilitate predicting the traffic volume in the central grid, which is implemented with a flow gating mechanism in STDN and multitask learning in . DeepSTN+ uses Point-Of-Interest (POI) data as external information (e.g., office/residential/shopping area) to take the influence of location function on the crowd/traffic flow into consideration.\n\\noindent\\textbf{Improvement along Temporal Axis.}\nOne major drawback of ST-ResNet is it does not explicitly handle the temporal axis, because it forces the video-like tensor ($T$,$H$,$W$,$C$) to be converted into an image-like tensor ($H$,$W$,$T$*$C$). 
To address this, DMVST-Net and STDN employ LSTM to connect with a separate and unshared CNN for each timestamp. STDN further considers the temporal shifting problem about periodicity (i.e. traffic data is not strictly periodic) and designs a \\emph{Periodically Shifted Attention Mechanism} to solve the issue. Specifically, it sets a small time window to collect $Q$ time intervals right before and after the currently-predicting one. And the attention is used to obtain a weighted average representation $h$ from the $Q$ representations \\{$h_1$, $h_2$, $...$, $h_Q$\\} generated by LSTM. To this end, LSTM, and CNN work together to separately and sequentially model the spatial and temporal dependency. Convolutional LSTM extends the fully connected LSTM (FC-LSTM) to have convolutional structures in both the input-to-state and state-to-state transitions and achieves a lot of successes on video modeling tasks. Motivated by this, ConvLSTM and its variant ConvGRU are utilized by to simultaneously capture the spatial and temporal dependency.\n\\begin{table*}[h]\n \\small\n\t\\centering\n\t\\caption{Performance Evaluation for Single-Step Prediction on Grid-Based Traffic Datasets}\n\t\\label{tab:performance_grid}\n\t\\begin{tabular*}{17.0cm}{@{\\extracolsep{\\fill}}l|ccc|ccc|ccc|ccc}\n\t\t\\hline\n\t\t\\multicolumn{1}{l|}{} &\n\t\t\\multicolumn{3}{c|}{TaxiBJ} &\n\t\t\\multicolumn{3}{c|}{BikeNYC-I} &\n\t\t\\multicolumn{3}{c|}{BikeNYC-II} &\n\t\t\\multicolumn{3}{c}{TaxiNYC} \n\t\t\\\\\n\t\t\\hline\n\t\t\\multicolumn{1}{l|}{Model} & \n\t\t\\multicolumn{1}{c}{RMSE} & \n\t\t\\multicolumn{1}{c}{MAE} &\n\t\t\\multicolumn{1}{c|}{MAPE} &\n\t\t\\multicolumn{1}{c}{RMSE} & \n\t\t\\multicolumn{1}{c}{MAE} &\n\t\t\\multicolumn{1}{c|}{MAPE} &\n\t\t\\multicolumn{1}{c}{RMSE} & \n\t\t\\multicolumn{1}{c}{MAE} &\n\t\t\\multicolumn{1}{c|}{MAPE} &\n\t\t\\multicolumn{1}{c}{RMSE} & \n\t\t\\multicolumn{1}{c}{MAE} &\n\t\t\\multicolumn{1}{c}{MAPE}\n\t\t\\\\\n\t\t\\hline\n\t\tHistoricalAverage & 45.004 & 
24.475 & 8.04\\% & 15.676 & 4.882 & 5.45\\% & 4.874 & 1.500 & 3.30\\% & 21.535 & 7.121 & 4.56\\%\\\\\n\t\tCopyLastStep & 23.609 & 13.372 & 6.20\\% & 14.152 & 4.344 & 5.01\\% & 4.999 & 1.606 & 3.50\\% & 18.660 & 6.497 & 4.91\\%\\\\\n\t\tCNN & 23.550 & 13.797 & 8.46\\% & 12.064 & 4.088 & 5.82\\% & 4.511 & 1.574 & 3.98\\% & 16.741 & 6.884 & 8.08\\%\\\\\n\t\tConvLSTM & 19.247 & 10.816 & 5.61\\% & 6.616 & 2.412 & 3.90\\% & 3.174 & 1.133 & 2.90\\% & 12.143 & 4.811 & 5.16\\%\\\\\n\t\tST-ResNet & 18.702 & 10.493 & 5.19\\% & 6.106 & 2.360 & 3.72\\% & 3.191 & 1.169 & 2.86\\% & 11.553 & 4.535 & 4.32\\%\\\\\n\t\tDMVST-Net & 20.389 & 11.832 & 5.99\\% & 7.990 & 2.833 & 3.93\\% & 3.521 & 1.287 & 2.97\\% & 13.605 & 4.928 & 4.49\\%\\\\\n\t\tPCRN & 18.629 & 10.432 & 5.45\\% & 6.680 & \\textbf{2.351} & 3.63\\% & 3.149 & \\textbf{1.107} & 2.78\\% & 12.027 & 4.606 & 4.62\\%\\\\\n\t\tDeepSTN+ & 18.141 & 10.126 & 5.14\\% & 6.205 & 2.489 & 3.48\\% & 3.205 & 1.245 & 2.80\\% & 11.420 & \\textbf{4.441} & 4.45\\%\\\\\n\t\tSTDN & \\textbf{17.826} & \\textbf{9.901} & \\textbf{4.81\\%} & \\textbf{5.783} & 2.410 & \\textbf{3.35\\%} & \\textbf{3.004} & 1.167 & \\textbf{2.67\\%} & \\textbf{11.252} & 4.474 & \\textbf{4.09\\%}\\\\\n\t\t\\hline\n\t\\end{tabular*}\n\\end{table*}\n\\begin{table*}[h]\n \\small\n\t\\centering\n\t\\caption{Performance Evaluation for Multi-Step Prediction on Graph-Based Traffic Datasets}\n\t\\label{tab:trafficvolumn}\n\t\\begin{tabular*}{17.0cm}{@{\\extracolsep{\\fill}}l|l|ccc|ccc|ccc}\n\t\t\\hline\n\t\t\\multicolumn{1}{l|}{} & \\multicolumn{1}{l|}{} & \\multicolumn{3}{c|}{3 Steps / 15 Minutes Ahead} &\n\t\t\\multicolumn{3}{c|}{6 Steps / 30 Minutes Ahead} &\n\t\t\\multicolumn{3}{c}{12 Steps / 60 Minutes Ahead}\n\t\t\\\\\n\t\t\\hline\n\t\t\\multicolumn{1}{l|}{Dataset} & \\multicolumn{1}{l|}{Model} & \n\t\t\\multicolumn{1}{c}{RMSE} & \n\t\t\\multicolumn{1}{c}{MAE} &\n\t\t\\multicolumn{1}{c|}{MAPE} & \n\t\t\\multicolumn{1}{c}{RMSE} & \n\t\t\\multicolumn{1}{c}{MAE} & 
\n\t\t\\multicolumn{1}{c|}{MAPE} & \n\t\t\\multicolumn{1}{c}{RMSE} & \n\t\t\\multicolumn{1}{c}{MAE} & \n\t\t\\multicolumn{1}{c}{MAPE} \n\t\t\\\\\n\t\t\\hline\n\t \\multirow{8}{*}{METR-LA} & \n\t\tHistoricalAverage & 14.737 & 11.013 & 23.34\\% & 14.737 & 11.010 & 23.34\\% & 14.736 & 11.005 & 23.33\\% \\\\\n\t\t& CopyLastSteps & 14.215 & 6.799 & 16.73\\% & 14.214 & 6.799 & 16.73\\% & 14.214 & 6.798 & 16.72\\% \\\\\n\t\t& LSTNet & 8.067 & 3.914 & 9.27\\% & 10.181 & 5.219 & 12.22\\% & 11.890 & 6.335 & 15.38\\% \\\\\n\t\t& STGCN & 7.918 & 3.469 & 8.57\\% & 9.948 & 4.263 & 10.70\\% & 11.813 & 5.079 & 13.09\\% \\\\\n\t\t& DCRNN & \\textbf{7.509} & 3.261 & 8.00\\% & 9.543 & 4.021 & 10.12\\% & 11.854 & 5.080 & 13.08\\% \\\\\n\t\t& Graph WaveNet & 7.512 & \\textbf{3.204} & \\textbf{7.62\\%} &\\textbf{9.445} & \\textbf{3.922} & \\textbf{9.52\\%} & \\textbf{11.485} & \\textbf{4.848} & \\textbf{11.93\\%} \\\\\n\t\t& ASTGCN & 7.977 & 3.624 & 9.13\\% & 10.042 & 4.514 & 11.57\\% & 12.092 & 5.776 & 14.85\\% \\\\\n\t\t& GMAN & 8.869 & 4.139 & 10.88\\% & 9.917 & 4.517 & 11.77\\% & 11.910 & 5.475 & 14.10\\% \\\\\n\t\t& MTGNN & 7.707 & 3.277 & 8.02\\% & 9.625 & 3.999 & 10.00\\% & 11.624 & 4.867 & 12.17\\% \\\\\n\t\t& AGCRN & 7.558 & 3.292 & 8.17\\% & 9.499 & 4.016 & 10.16\\% & 11.502 & 4.901 & 12.43\\% \\\\\n\t\t\\hline\n\t\t\\multirow{8}{*}{PeMS-BAY} &\n\t\tHistoricalAverage & 6.687 & 3.333 & 8.10\\% & 6.686 & 3.333 & 8.10\\% & 6.685 & 3.332 & 8.10\\% \\\\\n\t\t& CopyLastSteps & 7.022 & 3.052 & 6.84\\% & 7.016 & 3.049 & 6.84\\% & 7.05 & 3.044 & 6.83\\% \\\\\n\t\t& LSTNet & 3.224 & 1.643 & 3.47\\% & 4.375 & 2.383 & 5.04\\% & 5.515 & 2.974 & 6.86\\% \\\\\n\t\t& STGCN & 2.827 & 1.327 & 2.79\\% & 3.887 & 1.698 & 3.81\\% & 4.748 & 2.055 & 5.02\\% \\\\\n\t\t& DCRNN & 2.867 & 1.377 & 2.96\\% & 3.905 & 1.726 & 3.97\\% & 4.798 & 2.091 & 4.99\\% \\\\\n\t\t& Graph WaveNet & \\textbf{2.759} & \\textbf{1.322} & \\textbf{2.78\\%} & \\textbf{3.737} & 1.660 & \\textbf{3.75\\%} & 4.562 & 1.991 & 
4.75\\% \\\\\n\t\t& ASTGCN & 3.057 & 1.435 & 3.25\\% & 4.066 & 1.795 & 4.40\\% & 4.770 & 2.103 & 5.30\\% \\\\\n\t\t& GMAN & 4.219 & 1.802 & 4.47\\% & 4.143 & 1.794 & 4.40\\% & 5.034 & 2.186 & 5.29\\% \\\\\n\t\t& MTGNN & 2.849 & 1.334 & 2.84\\% & 3.800 & \\textbf{1.658} & 3.77\\% & \\textbf{4.491} & \\textbf{1.950} & \\textbf{4.59\\%} \\\\\n\t\t& AGCRN & 2.856 & 1.354 & 2.94\\% & 3.818 & 1.670 & 3.84\\% & 4.570 & 1.964 & 4.69\\% \\\\\n\t\t\\hline\n\t\t\\multirow{8}{*}{PEMSD7M} & \n\t\tHistoricalAverage & 7.077 & 3.917 & 9.90\\% & 7.083 & 3.920 & 9.92\\% & 7.095 & 3.925 & 9.95\\% \\\\\n\t\t& CopyLastSteps & 9.591 & 5.021 & 12.33\\% & 9.594 & 5.022 & 12.33\\% & 9.597 & 5.024 & 12.34\\% \\\\\n\t\t& LSTNet & 4.308 & 2.423 & 5.73\\% & 8.951 & 5.132 & 12.22\\% & 10.881 & 6.624 & 16.72\\% \\\\\n\t\t& STGCN & 4.051 & 2.124 & 5.02\\% & 5.532 & 2.783 & 6.96\\% & 6.695 & 3.374 & 8.74\\% \\\\\n\t\t& DCRNN & 4.143 & 2.213 & 5.33\\% & 5.679 & 2.907 & 7.41\\% & 7.138 & 3.670 & 9.81\\% \\\\\n\t\t& Graph WaveNet & \\textbf{3.992} & 2.130 & \\textbf{5.00\\%} & \\textbf{5.332} & 2.715 & 6.75\\% & \\textbf{6.431} & 3.266 & 8.47\\% \\\\\n\t\t& ASTGCN & 4.257 & 2.340 & 5.83\\% & 5.506 & 2.992 & 7.69\\% & 6.587 & 3.572 & 9.48\\% \\\\\n\t\t& GMAN & 5.711 & 2.877 & 7.25\\% & 6.171 & 3.084 & 7.77\\% & 7.897 & 3.988 & 10.02\\% \\\\\n\t\t& MTGNN & 4.032 & \\textbf{2.120} & 5.02\\% & 5.373 & \\textbf{2.687} & \\textbf{6.70\\%} & 6.496 & \\textbf{3.204} & \\textbf{8.24\\%} \\\\\n\t\t& AGCRN & 4.073 & 2.167 & 5.19\\% & 5.479 & 2.769 & 6.89\\% & 6.733 & 3.358 & 8.55\\% \\\\\n\t\t\\hline\n\t\\end{tabular*}\n\\end{table*}", "id": "f9c25975-edd8-4784-bcd1-0b0a149168bc", "level": "subsection", "origin_cites_number": 18, "parent_id": "23050b21-5fab-45c0-9c6c-dd4a43d7184b", "prefix_titles": [ [ "title", "DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction" ], [ "section", "Model" ], [ "subsection", "Roadmap for Grid-Based Model" ] ], "subsections": [], "title": "Roadmap 
for Grid-Based Model" }, { "cite_extract_rate": 0.521739130434782, "cites": [ 8313, 27, 30, 37, 33, 28, 26, 8312, 25, 23, 36, 32 ], "content": "STGCN is one of the earliest models that use graph neural networks to predict traffic flow. Temporally, instead of RNN, it uses TCN with a gated mechanism as shown in Fig.\ref{fig:architectures1} to capture the dependency only from $Closeness$ features. Spatially, it applies two spectral graph convolutions: ChebyNet and the 1st-order approximation of ChebyNet . TCN and GCN are stacked together as an ST-Conv block to sequentially do the spatial and temporal modeling. One major limitation of STGCN is that it uses a symmetrical adjacency matrix (i.e., undirected graph) that considers the Euclidean distance between two road sensors. Thus it is difficult to model the difference between the two directions of traffic flow on one road. DCRNN is another pioneering model that utilizes graph convolution for traffic flow prediction. In contrast to the spectral convolution in STGCN, DCRNN applies a spatial graph convolution called Diffusion Convolution implemented with bidirectional random walks on a directed graph (i.e., non-symmetric adjacency matrix), so that it can capture the spatial influence from both the upstream and the downstream traffic flows. For the temporal axis, similar to ConvLSTM, it replaces the normal matrix multiplication in GRU with the proposed diffusion convolution, then a Diffusion Convolution Gated Recurrent Unit (DCGRU) is assembled that can simultaneously do the spatial and temporal modeling. With this DCGRU, it further implements an encoder-decoder structure to enable multi-step-to-multi-step prediction. Inspired by STGCN and DCRNN, numerous graph-based traffic models have been proposed, as summarized in Table \ref{tab:modelsummary}.\n\noindent\textbf{Improvement along Temporal Axis.} For the temporal feature, ASTGCN inherits $Closeness$, $Period$, and $Trend$ from ST-ResNet, and improves on STGCN, which only takes $Closeness$. 
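The bidirectional diffusion step described above can be sketched as follows. This is a hedged NumPy sketch: scalar per-step weights stand in for DCRNN's learned channel-mixing parameters, and row-normalized transition matrices approximate $D_O^{-1}A$ and $D_I^{-1}A^T$.

```python
import numpy as np

def diffusion_conv(X, A, thetas_fwd, thetas_bwd):
    """Bidirectional diffusion convolution on a directed graph (sketch).

    X: (N, C) node signals, A: (N, N) directed weighted adjacency.
    thetas_fwd/thetas_bwd: per-step scalar weights; in DCRNN these are
    learned channel-mixing parameters, simplified to scalars here.
    """
    eps = 1e-10
    P_fwd = A / (A.sum(axis=1, keepdims=True) + eps)      # ~ D_O^{-1} A
    P_bwd = A.T / (A.T.sum(axis=1, keepdims=True) + eps)  # ~ D_I^{-1} A^T
    out = np.zeros_like(X)
    S_f = np.eye(A.shape[0])                              # k = 0 diffusion
    S_b = np.eye(A.shape[0])
    for t_f, t_b in zip(thetas_fwd, thetas_bwd):
        out += t_f * (S_f @ X) + t_b * (S_b @ X)
        S_f, S_b = S_f @ P_fwd, S_b @ P_bwd               # next diffusion step
    return out
```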
Besides, STSGCN constructs a localized temporal graph by connecting all nodes with themselves at the previous and the next steps, updating the adjacency matrix from $A$$\in$$\mathbb{R}^{N*N}$ to $A'$$\in$$\mathbb{R}^{3N*3N}$, then only uses GCN to simultaneously do the spatial and temporal modeling. On the other hand, to obtain better temporal-modeling ability, T-GCN and TGC-LSTM respectively use GRU and LSTM instead of TCN to improve STGCN; GCGA combines Generative Adversarial Network (GAN) and Autoencoder with GCN; STGNN adopts transformer (attention) for better global/long-term temporal modeling; STG2Seq utilizes GCN for temporal modeling, which is an interesting attempt.\n\noindent\textbf{Improvement along Spatial Axis.} A lot of effort has been put into the spatial axis, that is, the graph. (1) From single-graph to multi-graph. STGCN and DCRNN only use a single graph, directed or non-directed, to describe the spatial relationship. However, multimodal correlations and compound spatial dependencies exist among regions. Therefore, a series of studies elevates single-graph to multi-graph.\nFor instance, and ST-MGCN consider spatial proximity, functional similarity, and road connectivity as a multi-graph, and so does T-MGCN; H-STGCN takes the travel-time correlation matrix and the shortest-path distance matrix as a compound matrix; MRA-BGCN builds the node-wise graph according to the road network distance, and the edge-wise graph according to the connectivity and competition. (2) From static graph to adaptive graph. Graph WaveNet, TGC-LSTM, and AGCRN adopt an adaptive/learnable graph rather than the static one used in STGCN and DCRNN; DGCNN proposes dynamic Laplacian matrix learning through tensor decomposition; SLCNN designs Structure Learning Convolution (SLC) to dynamically learn the global/local graph structure. 
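As an illustration of the static-to-adaptive shift, a self-adaptive adjacency in the spirit of Graph WaveNet can be sketched as follows ($E_1$ and $E_2$ play the role of learnable node embeddings; this is a sketch, not the exact implementation):

```python
import numpy as np

def adaptive_adjacency(E1, E2):
    """Self-adaptive adjacency: A_adp = softmax(relu(E1 @ E2^T)).

    E1, E2: (N, d) node-embedding matrices that would be learned
    end-to-end; the result is a row-stochastic (N, N) adjacency.
    """
    S = np.maximum(E1 @ E2.T, 0.0)                 # relu
    S = np.exp(S - S.max(axis=1, keepdims=True))   # numerically stable softmax
    return S / S.sum(axis=1, keepdims=True)        # rows sum to 1
```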
\nIn addition to the above, attention-augmented GCN also demonstrated better performance in terms of spatial modeling in GaAN, ASTGCN, and GMAN.", "id": "f509b7e2-496e-42c4-8c44-c0803a49aa44", "level": "subsection", "origin_cites_number": 23, "parent_id": "23050b21-5fab-45c0-9c6c-dd4a43d7184b", "prefix_titles": [ [ "title", "DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction" ], [ "section", "Model" ], [ "subsection", "Roadmap for Graph-Based Model" ] ], "subsections": [], "title": "Roadmap for Graph-Based Model" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "706a4dd3-3228-4f95-9ed5-19ffe9a3a499", "level": "section", "origin_cites_number": 0, "parent_id": "59d19c15-d2fe-4919-9357-dccad053bdfa", "prefix_titles": [ [ "title", "DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction" ], [ "section", "Evaluation" ] ], "subsections": [ "1a4eb80b-27d5-4ce0-8db0-55c3923ab147", "258ee8f2-c061-469f-afdf-74ebe9d71cc3", "84c20572-238d-45f5-a869-048eb64189a8" ], "title": "Evaluation" }, { "cite_extract_rate": 0, "cites": [], "content": "Towards the benchmark tasks listed in Section 2, we pick up some representative models and conduct comprehensive evaluations about their actual performances. Besides the deep models, we also implement two naive baselines as follows: (1) HistoricalAverage(HA). We average the corresponding values from historical days as the prediction result; (2) CopyLastStep(s). We directly copy the last one or multiple steps as the prediction result. Our experiments were performed on a GPU server with four GeForce GTX 2080Ti graphics cards.\nAs a benchmark evaluation, the following settings are kept the same for each model. The observation step is set to 6 for grid-based models, while the observation and prediction step are both set to 12 for graph-based models. The data ratio for training, validation, and testing is set as 7:1:2. 
Adam was set as the default optimizer, with the learning rate set to 0.001 and the batch size set to 64 by default. Mean Absolute Error is uniformly used as the loss function. Training would either be early-stopped once the validation error converged (no improvement within 10 epochs) or be stopped after 200 epochs, and the best model on the validation data would be saved. Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE) are used as metrics, where zero values are ignored.", "id": "1a4eb80b-27d5-4ce0-8db0-55c3923ab147", "level": "subsection", "origin_cites_number": 0, "parent_id": "706a4dd3-3228-4f95-9ed5-19ffe9a3a499", "prefix_titles": [ [ "title", "DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction" ], [ "section", "Evaluation" ], [ "subsection", "Setting" ] ], "subsections": [], "title": "Setting" }, { "cite_extract_rate": 0, "cites": [], "content": "The evaluation of grid-based models for single-step prediction is shown in Table \ref{tab:performance_grid}; the evaluation of graph-based models for multi-step prediction is shown in Table \ref{tab:trafficvolumn}.\n\noindent\textbf{Evaluation for Grid-Based Model:} Table \ref{tab:performance_grid} shows that the state-of-the-art models did have advantages over the baselines (HA $\sim$ ConvLSTM). In particular, STDN showed better performance in general, while PCRN and DeepSTN+ achieved the lowest MAE on BikeNYC-I and TaxiNYC respectively. None of these grid-based models could be acknowledged as a dominant one at the current stage. 
Through the experiment, we find that their main limitations are as follows: (1) ST-ResNet converts the video-like data to high-dimensional image-like data and uses a simple fusion mechanism to handle different types of temporal dependency; (2) PCRN took more epochs to converge and tended to cause overfitting; (3) DMVST-Net and STDN use a local CNN that takes each grid (pixel) as the computation unit, resulting in long training time; (4) DeepSTN+ utilizes a fully-connected layer in the $ConvPlus$ block, which results in a large number of parameters on TaxiBJ; (5) the Multitask Learning model needs multiple data sources as the inputs, which hinders its applicability.\n\noindent\textbf{Evaluation for Graph-Based Model:} Table \ref{tab:trafficvolumn} compares the prediction performances of our selected models at 15 minutes, 30 minutes, 60 minutes ahead on METR-LA, PeMS-BAY, PEMSD7M datasets. Through Table \ref{tab:trafficvolumn}, we can find that: (1) Despite the effectiveness of the time-series model LSTNet in short-term prediction, its performance would deteriorate as the horizon gets longer; (2) Almost all of the graph-based models achieved better performance than traditional methods and time-series models on all metrics, which proved that the addition of spatial information would bring substantial performance improvements; (3) Although the models' performances depended more or less on the dataset, the scores of DCRNN, Graph WaveNet, and MTGNN on all datasets ranked in the top 3, which also proved their robustness and versatility in traffic prediction tasks; (4) GMAN was found more prone to overfitting, due to which its performance on all three datasets was not as good as LSTNet; this is probably because GMAN adopted a global attention mechanism to capture the spatial dependency between each pair of nodes; (5) MTGNN and Graph WaveNet obtained most of the highest scores on different datasets and metrics. 
The self-adaptive/learnable graph demonstrated its great effectiveness for traffic prediction.\n\begin{figure}[h]\n\t\centering\n\t\includegraphics[width=0.48\textwidth]{./figure/compare_step3.png}\n\t\caption{Case Study on Graph-Based Datasets.}\n\t\label{fig:Evaluation_graph}\n\end{figure}\n\noindent\textbf{Case Study:} We randomly selected one day (24 hours) and one sensor (node) from the three datasets (i.e., METR-LA, PEMS-BAY, and PEMSD7M) and plotted the time series of the ground truth and the predictions in Fig.\ref{fig:Evaluation_graph}. To make the time-series chart clear, in addition to the ground truth, we only plot the prediction results of LSTNet, DCRNN, and Graph WaveNet instead of all of the models listed in Table \ref{tab:trafficvolumn}. Through Fig.\ref{fig:Evaluation_graph}, we can observe that: (1) All three models could learn the peak and valley trend on all three datasets; (2) The graph-based models DCRNN and Graph WaveNet always outperformed the time-series model LSTNet, which proved the excellent performance of GCN in capturing spatial correlation and dependency; (3) A time lag could be observed in all three prediction results, especially when sharp fluctuations occur in the original time series, such as at 2012/6/20 21:00 in PEMSD7M. This problem is magnified in longer-term forecasts such as a 60-minute lead time. 
Despite this, the graph-based models still show better performance in terms of prediction errors under volatile conditions, which further confirmed the effectiveness and robustness of GCN in traffic prediction.\n\begin{figure}[h]\n\t\centering\n\t\includegraphics[width=0.49\textwidth]{./figure/efficiency.png}\n\t\caption{Efficiency Summary.}\n\t\label{fig:Efficiency}\n\end{figure}", "id": "258ee8f2-c061-469f-afdf-74ebe9d71cc3", "level": "subsection", "origin_cites_number": 2, "parent_id": "706a4dd3-3228-4f95-9ed5-19ffe9a3a499", "prefix_titles": [ [ "title", "DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction" ], [ "section", "Evaluation" ], [ "subsection", "Effectiveness Evaluation" ] ], "subsections": [], "title": "Effectiveness Evaluation" }, { "cite_extract_rate": 0, "cites": [], "content": "Besides comparing the effectiveness of these deep models, we also provide an analysis of their efficiency. The space complexity and time complexity of these approaches matter for their practical application, so we exhibit the number of trainable parameters and the training time of each model through the bar charts in Fig.\ref{fig:Efficiency}. \n\noindent\textbf{Evaluation for Grid-Based Model:} From Fig.\ref{fig:Efficiency}-(a) and Fig.\ref{fig:Efficiency}-(b), we can observe that: (1) DeepSTN+ and STDN have more parameters than the other models, especially DeepSTN+, which captures the citywide spatial correlation by utilizing a fully connected layer; (2) ST-ResNet has the fewest trainable parameters, and demonstrates its superiority to other models in terms of space complexity; (3) The training time of STDN and DMVST-Net is longer than that of the other models because they utilize LSTM to capture the temporal dependency and take each mesh-grid as the computation unit rather than the entire mesh. 
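The trainable-parameter totals compared in Fig.\ref{fig:Efficiency} boil down to summing the sizes of each weight tensor; a minimal, framework-agnostic sketch is below (in PyTorch the same quantity is `sum(p.numel() for p in model.parameters() if p.requires_grad)`):

```python
def count_params(param_shapes):
    """Total number of trainable parameters given a list of tensor shapes,
    e.g., as reported by model.parameters() in PyTorch."""
    total = 0
    for shape in param_shapes:
        n = 1
        for d in shape:
            n *= d                 # product of the dimensions of one tensor
        total += n
    return total
```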
\n\noindent\textbf{Evaluation for Graph-Based Model:} From Fig.\ref{fig:Efficiency}-(c) and Fig.\ref{fig:Efficiency}-(d), we can conclude that: (1) STGCN and GMAN have relatively lower space complexity than others; (2) AGCRN and DCRNN need more parameters than other models because they are based on RNNs; (3) On PEMSBAY, the parameter numbers of ASTGCN and MTGNN dramatically increase; the reason is that those two models have more GNN layers and are more sensitive to the number of nodes; (4) The training time of GMAN on PEMSBAY far exceeds that of the other models because it applies a global attention mechanism to the entire graph (nodes). In summary, TCN-based models like STGCN and Graph WaveNet have higher computation efficiency.", "id": "84c20572-238d-45f5-a869-048eb64189a8", "level": "subsection", "origin_cites_number": 0, "parent_id": "706a4dd3-3228-4f95-9ed5-19ffe9a3a499", "prefix_titles": [ [ "title", "DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction" ], [ "section", "Evaluation" ], [ "subsection", "Efficiency Evaluation" ] ], "subsections": [], "title": "Efficiency Evaluation" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 41 ], "content": "DL-Traff is already available at GitHub as the following two repositories under the MIT License: one is for grid-based datasets/models \url{https://github.com/deepkashiwa20/DL-Traff-Grid}, and another is for graph-based datasets/models \url{https://github.com/deepkashiwa20/DL-Traff-Graph}. It is implemented with Python and the most popular deep learning frameworks: Keras on TensorFlow and PyTorch. Fig.\ref{fig:usecase} shows a use case by taking the DCRNN model on the METR-LA dataset as an example. To run the benchmark, the repository should be cloned locally and a conda environment with the necessary dependencies should be created. The directory is structured in a flat style with only two levels. 
The traffic datasets are stored in DATA directories (e.g., METRLA), and the Python files are put in workDATA directories (e.g., workMETRLA). Entering the work directory for a certain dataset, we can find the MODEL class file (e.g., DCRNN.py) and its corresponding running program named pred\_MODEL.py (e.g., pred\_DCRNN.py). We can run ``python MODEL.py'' to simply check the model architecture without feeding the training data and run ``python pred\_MODEL.py'' to train and test the model. Additionally, the Param.py file contains a variety of hyper-parameters as described in Section 5.1 that allow the experiment to be customized in a unified way. The Metrics.py file contains the metric functions listed in Section 5.1. The Utils.py file integrates a set of supporting functions such as a pickle file reader and a self-defined loss function. More details about the usability and implementation can be found on GitHub.\n\begin{figure}[h]\n\t\centering\n\t\includegraphics[width=0.49\textwidth]{./figure/usecase.png}\n\t\caption{Illustration of The Use Case for DL-Traff.}\n\t\label{fig:usecase}\n\end{figure}", "id": "c0ce4579-96dc-48f8-84c3-543a96d159da", "level": "section", "origin_cites_number": 3, "parent_id": "59d19c15-d2fe-4919-9357-dccad053bdfa", "prefix_titles": [ [ "title", "DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction" ], [ "section", "Availability and Usability" ] ], "subsections": [], "title": "Availability and Usability" }, { "cite_extract_rate": 0, "cites": [], "content": "In this paper, we first survey the deep learning models as well as the widely used datasets for urban traffic prediction. Then we build a standard benchmark to comprehensively evaluate the deep traffic models on the selected open datasets. The survey and the benchmark combine to form our study called DL-Traff, which is already available at \url{https://github.com/deepkashiwa20/DL-Traff-Grid} and \url{https://github.com/deepkashiwa20/DL-Traff-Graph}. 
With DL-Traff, we hope to deliver a useful and timely resource to researchers in the AI and data science communities.\n\bibliographystyle{ACM-Reference-Format}\n\bibliography{cikm2021-resource}\n\end{document}", "id": "a3286808-0343-4af3-a22a-dcbfba64d4f5", "level": "section", "origin_cites_number": 0, "parent_id": "59d19c15-d2fe-4919-9357-dccad053bdfa", "prefix_titles": [ [ "title", "DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
101
[ 22, 24, 23, 21, 8312, 25, 31, 27, 30, 33, 29, 28, 26, 34, 32, 35, 36, 8313, 37, 38, 39, 40, 41 ]
0.793706
[ "Dan Zeng", "Raymond Veldhuis", "Luuk Spreeuwers" ]
A survey of face recognition techniques \\ under occlusion
2020
2020-06-19T20:44:02Z
cs.CV
The limited capacity to recognize faces under occlusions is a long-standing problem that presents a unique challenge for face recognition systems and even for humans. The problem regarding occlusion is less covered by research when compared to other challenges such as pose variation, different expressions, etc. Nevertheless, occluded face recognition is imperative to exploit the full potential of face recognition for real-world applications. In this paper, we restrict the scope to occluded face recognition. First, we explore what the occlusion problem is and what inherent difficulties can arise. As a part of this review, we introduce face detection under occlusion, a preliminary step in face recognition. Second, we present how existing face recognition methods cope with the occlusion problem and classify them into three categories, which are 1) occlusion robust feature extraction approaches, 2) occlusion aware face recognition approaches, and 3) occlusion recovery based face recognition approaches. Furthermore, we analyze the motivations, innovations, pros and cons, and the performance of representative approaches for comparison. Finally, future challenges and method trends of occluded face recognition are thoroughly discussed.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "5be78ea8-4e37-4219-b1a2-958030400717", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ] ], "subsections": [ "739d8dcb-1e21-4d11-ab59-ef21b7d88150", "02c9227b-1254-4e45-872c-168a9224ca8e", "0ab6d3e5-5f21-47d0-96aa-3f9c5795fb03", "56e478f6-fe2d-45d3-b5e4-e656af4eef50", "1e45db83-eb31-4a5f-8e0c-ae42d992d199", "14605930-a574-4766-b31c-9598c1677fbc", "729eb785-bd28-450a-a746-83bf1ec3a0a6", "037a6797-60e9-4040-a1f5-3c5f5f3bfc6c" ], "title": "root" }, { "cite_extract_rate": 0.538461538461538, "cites": [ 1214, 1222, 514, 1218, 305, 97, 1219, 1220, 1221, 1210, 1216, 8426, 1215, 1217 ], "content": "\\IEEEPARstart{F}{ace} recognition is a computer vision task that has been extensively studied for several decades~. Compared with other popular biometrics such as fingerprint, iris, palm, and vein, the face has a significantly better potential to recognize the identity in a non-intrusive manner. Therefore, face recognition is widely used in many application domains such as surveillance, forensics, and border control. With the development of deep learning techniques~ and the publicly available large-scale face datasets~, face recognition performance has improved substantially~. However, face recognition systems still tend to perform far from satisfactory when encountering challenges such as large-pose variation, varying illumination, low resolution, different facial expressions, and occlusion. Generally, face images stored in a gallery are of high quality and free from the above degradations, while probe faces are suffering from what can be seen as \\textit{a missing data problem} due to these challenges. 
Consequently, fewer facial parts are available for recognition, which induces a mismatch between the features available in probe faces and those in gallery faces.\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=3.2in]{figures/occlusion_examples.pdf}\n\\caption{Examples of occluded face images from the MAFA dataset.}\n\\label{fig:occlusion_examples}\n\\end{figure}\n\\begin{table}[t]\n\\renewcommand\\arraystretch{1.1}\n\\caption{A categorization of Occlusion Challenges}\\label{tab:occlusion_challenges}\n\\centering\n\\begin{tabularx}{0.5\\textwidth}{|l|X|}\n\\hline\n\\textbf{Occlusion Scenario} & \\textbf{Examples} \\\\\n\\hline\nFacial accessories & eyeglasses, sunglasses, scarves, mask, hat, hair\\\\\nExternal occlusions & occluded by hands and random objects\\\\\nLimited field of view & partial faces\\\\\nSelf-occlusions& non-frontal pose\\\\\nExtreme illumination& part of face highlighted\\\\\n\\hline\n\\multirow{3}{*}{Artificial Occlusions}&random black rectangles\\\\\n&random white rectangles\\\\\n&random salt \\& pepper noise\\\\\n\\hline\n\\end{tabularx}\n\\end{table}\nFacial occlusion~ is considered one of the most intractable problems because we do not have prior knowledge about the occluded part, which can be anywhere and of any size or shape in a face image. From a practical point of view, it is not feasible to collect a large-scale training dataset covering all possible occlusions in realistic scenarios for use with deep learning techniques. Therefore, the problem of face recognition under occlusions remains a challenge. Facial occlusion occurs when the subject wears accessories such as a scarf, a face mask, glasses, a hat, etc., or when random objects are present in front of the face. Recognition accuracy is compromised because of the higher inter-class similarity and the larger intra-class variations caused by occlusion. Facial appearance changes substantially due to occlusion, as illustrated in Fig.~\\ref{fig:occlusion_examples}. 
We present a categorization of occlusion challenges in different scenarios with their typical occlusion examples (see Table~\\ref{tab:occlusion_challenges}). Pose variation can partially be seen as a self-occlusion problem caused by a large rotation of the head. The self-occlusion problem due to pose variation is usually dealt with in pose correction and therefore not discussed here.\n{In most cases, occluded face recognition~(OFR) involves querying a gallery consisting of occlusion-free faces using a probe image from an alternative test dataset of occluded faces. Occluded probe faces are obtained either by collecting images with real occlusions or by synthesizing occlusions. We first break down OFR research scenarios in the most obvious way, by the pairs of images considered. Fig.~\\ref{fig:OFR_problems} offers an illustration of the five categories of OFR testing scenarios. More specifically, five widely used testing scenarios for OFR, ranging from most real to least real, are:\n\\begin{itemize}\n \\item \\textbf{Real occlusions}: gallery images are mugshots free from occlusion while probe images are faces occluded by realistic objects such as sunglasses or a scarf.\n \\item \\textbf{Partial faces}: gallery images are mugshots free from occlusion while test face images are partial faces; hence the name partial face recognition is given by researchers.\n \\item \\textbf{Synthetic occlusions}: gallery images are faces in the wild which are captured from uncontrolled scenarios while probe faces are blocked with synthetic occlusions to simulate real occlusions. 
\n \\item \\textbf{Occluding rectangle}: gallery images are occlusion-free mugshots while test face images are occluded with a rectangle such as white and black rectangles.\n \\item \\textbf{Occluding unrelated images}: gallery images are mugshots free from occlusion while test face images are occluded with unrelated images such as a baboon, or a non-square image.\n\\end{itemize}\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{figures/OFR_problems.pdf}\n\\caption{Different occluded face recognition testing scenarios involved in OFR.}\n\\label{fig:OFR_problems}\n\\end{figure}\n}\nApproaches to recognizing faces under occlusions can be broadly classified into three categories~(shown in Fig.~\\ref{fig:methods_category}), which are 1) occlusion robust feature extraction~(ORFE), 2) occlusion aware face recognition~(OAFR) and 3) occlusion recovery based face recognition~(ORecFR). An OFR system consists of three components, each corresponding to an important design decision: cross-occlusion strategy, feature extraction, and comparison strategy. Of these components, the second and third have analogues in general face recognition, while cross-occlusion strategy is unique to OFR.\n\\begin{itemize}\n\\item \\textbf{ORFE category} searches for a feature space that is less affected by facial occlusions. Generally, patch-based engineered and learning-based features are used as the cross-occlusion strategy.\n\\item \\textbf{OAFR category} is explicitly aware where the occlusion is. Generally, occlusion-discard is applied as the cross-occlusion strategy. As a result, only visible face parts qualify for face recognition~(i.e., feature extraction, feature comparison). 
Furthermore, we classify partial face recognition approaches as OAFR because they exclude occluded parts from face recognition, assuming that the visible parts are available from the start.\n\\item \\textbf{ORecFR category} intends to recover an occlusion-free face from the occluded face to meet the demands of conventional face recognition systems. In other words, it takes occlusion recovery as the cross-occlusion strategy.\n\\end{itemize}\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{figures/methods_category.pdf}\n\\caption{The three categories of methods for face recognition under occlusion challenges.}\n\\label{fig:methods_category}\n\\end{figure}\nNumerous methods have been proposed to push the frontier of face recognition research. Several comprehensive surveys~ have been published for face recognition under occlusion. Of the existing works, the survey by Lahasan et al.~ on face recognition under occlusion challenges presents a thorough overview of the approaches before 2017, which is the most relevant to this paper. However, there are at least two reasons why a new survey on occluded face recognition is needed. \\textit{First, the explosive growth of face recognition techniques in recent years has stimulated many innovative contributions to handle occluded face recognition problems.} The increased number of publications over the last few years calls for a new survey of occluded face recognition, including up-to-date approaches, especially deep learning techniques. \\textit{Second, several large-scale occluded face datasets have become publicly available in recent years.} Without large-scale training data of occluded face images, deep learning models cannot function well~. Recently, the MAFA dataset~ has become accessible for occluded face detection, and the IJB-C dataset~ has been introduced as a general evaluation benchmark that includes meta-information regarding the occlusion~(i.e., occlusion location, occlusion degree). 
Predictably, these datasets would encourage occluded face recognition to develop faster. The proposed survey provides a systematic categorization of methods for face recognition. Specifically, occluded face detection techniques are briefly reviewed because an OFR system requires the application of occluded face detection as the first step. Moreover, newly published and innovative papers addressing occlusion problems are thoroughly reviewed. Finally, we present comparative performance evaluations in terms of occluded face detection and face recognition on widely used datasets as well as newly-developed large-scale datasets.\nThe remainder of the paper is organized as follows: occluded face detection techniques are introduced in Section 2. Methods of occlusion robust feature extraction are described and analyzed in Section 3. We review occlusion-aware face recognition approaches in Section 4. Then Section 5 briefly studies occlusion-recovery face recognition methods. A performance evaluation of the reviewed approaches is given in Section 6. In Section 7, we discuss future challenges to datasets as well as to research. Finally, we draw an overall conclusion for occluded face recognition.", "id": "739d8dcb-1e21-4d11-ab59-ef21b7d88150", "level": "section", "origin_cites_number": 26, "parent_id": "5be78ea8-4e37-4219-b1a2-958030400717", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, we break down occluded face detection into two parts, containing~\\textit{general face detection} methods, which can be applied to detect occluded faces, and~\\textit{occluded face detection} methods, which are designed specifically to tackle the occlusion issue in face detection. As for general face detection, we briefly study relevant methods for simplicity. 
We then elaborate on occluded face detection methods that are specifically designed to detect occluded faces. One way to classify the methods can be seen in Fig.~\\ref{fig:ofd}.\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{figures/occluded-face-detection.pdf}\n\\caption{Methods used in occluded face detection.}\n\\label{fig:ofd}\n\\end{figure}", "id": "02c9227b-1254-4e45-872c-168a9224ca8e", "level": "section", "origin_cites_number": 0, "parent_id": "5be78ea8-4e37-4219-b1a2-958030400717", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Occluded face detection" ] ], "subsections": [ "849dd43f-ae0a-4996-be33-2fabdb84e136", "58332e8c-aea0-4d5e-96a0-5382fc491812" ], "title": "Occluded face detection" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 1223, 8428, 1226, 8429, 206, 1229, 1227, 802, 1224, 8427, 1231, 209, 1228, 1230, 1225 ], "content": "Face detection generally intends to detect the face that is captured in an unconstrained environment. It is challenging due to large-pose variation, varying illumination, low resolution, occlusion, etc.~. Approaches to general face detection can roughly be classified into three categories, which are \n\\begin{itemize}\n\\item Rigid templates based face detection.\n\\item Deformable part models~(DPM) based face detection.\n\\item Deep convolutional neural networks~(DCNNs) based face detection. \n\\end{itemize}\nThe Viola-Jones face detector and its variations~ are typical of the rigid templates based category; they utilize Haar-like features and AdaBoost to train cascaded classifiers and can achieve good performance with real-time efficiency. However, the performance of these methods can drop dramatically in real-world applications~. In contrast, DPM based face detection can achieve significantly better performance at the cost of high computational complexity~. 
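As an aside, the early-reject cascade that gives Viola-Jones-style detectors their real-time efficiency can be sketched in a few lines. This is a toy illustration only: the stage scoring functions and thresholds below are invented, whereas real detectors learn boosted Haar-feature stages with AdaBoost.

```python
# Toy sketch of an attentional cascade: a window is reported as a face
# only if it passes every stage; most non-face windows are rejected by
# the first cheap stages, which is what makes cascades fast.
def cascade_detect(stage_score_fns, window, thresholds):
    """Return True only if `window` passes every cascade stage."""
    for score, thr in zip(stage_score_fns, thresholds):
        if score(window) < thr:
            return False  # early rejection
    return True
```

In a real Viola-Jones detector, each stage score would be a boosted sum of Haar-like feature responses computed efficiently on an integral image.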
The third and most promising category of research is DCNNs based face detection~. Some methods~ jointly perform face detection and face alignment to exploit their inherent correlation and boost performance. There are two major branches of object detection frameworks: (i)~region proposals based CNNs~(i.e., two-stage detectors), such as R-CNN~, fast R-CNN~, faster R-CNN~; (ii)~region proposals free CNNs~(i.e., one-stage detectors), such as the Single-Shot Multibox Detector~(SSD)~ and YOLO~. In short, two-stage detectors achieve higher performance but are time-consuming, while one-stage detectors have significant computational advantages at the cost of less accurate detection results. Some methods~ treat a face as a natural object and adopt techniques from object detection in face detection. Most recently, finding tiny faces has become popular in face detection and superior performance has been achieved~. \nRecent years have witnessed promising results of exploring DCNNs for face detection with the introduction of Widerface~, which offers wide pose variation, significant scale difference~(tiny faces), expression variation, make-up, severe illumination, and occlusion. Up to now, Feature Agglomeration Networks~(FANet)~, a single-stage face detector, achieves state-of-the-art performance on several face detection benchmarks, including FDDB~, PASCAL Face~, and the Widerface benchmark~. To exploit the inherent multi-scale features of a single convolutional neural network, FANet introduced an Agglomeration Connection module to enhance context-aware features and augment low-level feature maps with a hierarchical structure so that it can cope with scale variance in face detection effectively. Besides, a Hierarchical Loss is proposed to train FANet stably and effectively in an end-to-end way. 
\\textbf{In short, methods that achieve remarkable detection performance, for example on the Widerface dataset, also provide a solid solution for occluded face detection.}", "id": "849dd43f-ae0a-4996-be33-2fabdb84e136", "level": "subsection", "origin_cites_number": 25, "parent_id": "02c9227b-1254-4e45-872c-168a9224ca8e", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Occluded face detection" ], [ "subsection", "General face detection" ] ], "subsections": [], "title": "General face detection" }, { "cite_extract_rate": 0.2, "cites": [ 7338 ], "content": "Detecting partially occluded faces aims to locate the face region in a given image where occlusion is present. Handling occlusion in face detection is challenging due to the unknown location and type of occlusions~. Occluded face detection has only recently begun to attract the attention of researchers; therefore, a few publications are reviewed. At the same time, detecting occluded pedestrians is a long-standing research topic that has been intensively studied during the past few decades. Therefore, many researchers borrow techniques from pedestrian detection~ to push the frontier of occluded face detection, treating occlusion as the dominating challenge during detection. 
\\textbf{Most occluded face detection methods report their performance on the MAFA dataset~ while general face detection methods do not, which means it is not a level playing field for general face detection and occluded face detection.} Approaches to detect partially occluded faces are roughly clustered as 1)~locating visible facial segments to estimate a full face; 2)~fusing the detection results obtained from face sub-regions to mitigate the negative impact of occlusion; 3)~using the occlusion information to help face detection in an adversarial way.", "id": "58332e8c-aea0-4d5e-96a0-5382fc491812", "level": "subsection", "origin_cites_number": 5, "parent_id": "02c9227b-1254-4e45-872c-168a9224ca8e", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Occluded face detection" ], [ "subsection", "Detecting occluded face" ] ], "subsections": [ "953118eb-8381-49cf-9c57-98f6fdbcc4d0", "ffc99e50-7659-4c74-829a-f7eec57fe6aa", "fd593a64-cb47-4681-bad4-618e0fd1044f" ], "title": "Detecting occluded face" }, { "cite_extract_rate": 0.5, "cites": [ 1232, 1233, 1231 ], "content": "If visible parts of a face are known, then difficulties in face detection due to occlusions are largely relieved. Observing that facial attributes are closely related to facial parts, the attribute-aware CNNs method~ intends to exploit the inherent correlation between a facial attribute and visible facial parts. Specifically, it discovers facial part responses and scores these facial parts for face detection by the spatial structure and arrangement. A set of attribute-aware CNNs are trained with specific part-level facial attributes (e.g., mouth attributes such as big lips, open mouth, smiling, wearing lipstick) to generate facial response maps. Next, a scoring mechanism is proposed to compute the degree of face likeliness by analyzing their spatial arrangement. 
Finally, face classification and bounding box regression are jointly trained with the face proposals, resulting in precise face locations. The results on FDDB~, PASCAL Face~ and AFW~ demonstrate that the proposed method is capable of yielding good performance. In particular, it achieves a high recall rate of $90.99\\%$ on FDDB. In short, Ref.~ is the first systematic study to attempt face detection with severe occlusion without using realistic occluded faces for training.\nMore recently, the extension faceness-net~ improves the robustness of feature representations through a more effective CNN design. As a result, it has achieved compelling results on the Widerface dataset~, which is challenging in terms of severe occlusion and unconstrained pose variations. However, it requires labeled facial attribute data to train the attribute-aware CNNs, which limits its practical use.", "id": "953118eb-8381-49cf-9c57-98f6fdbcc4d0", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "58332e8c-aea0-4d5e-96a0-5382fc491812", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Occluded face detection" ], [ "subsection", "Detecting occluded face" ], [ "subsubsection", "Locating visible facial segments to estimate face" ] ], "subsections": [], "title": "Locating visible facial segments to estimate face" }, { "cite_extract_rate": 0.5714285714285711, "cites": [ 1234, 1235, 799, 1231 ], "content": "In paper~, a facial segment based face detection technique is proposed for mobile phone authentication with faces captured from the front-facing camera. The detectors are AdaBoost cascade classifiers trained with a local binary pattern (LBP) representation of face images. They train fourteen segment-based face detectors to help cluster segments in order to estimate a full face or partially visible face. 
As a result, this method could achieve excellent performance on the Active Authentication Dataset (AA-01)~. However, the use of a simple architecture increases speed at the cost of detection accuracy.\nThe introduction of MAFA~ offers plenty of faces wearing various masks, which contributes significantly to occluded face detection, especially of masked faces. Based on this dataset, an LLE-CNNs model~ is proposed to benchmark the performance of masked face detection on the MAFA dataset. They extract candidate face regions with high-dimensional descriptors by pre-trained CNNs and employ locally linear embedding (LLE) to turn them into similarity-based descriptors. Finally, they jointly train the classification and regression tasks with CNNs to identify candidate facial regions and refine their positions.\nTo avoid high false positives due to masks and sunglasses, a face attention network (FAN) detector~ is proposed to highlight the features from the face region. More specifically, the FAN detector integrates an anchor-level attention mechanism into a single-stage object detector like Feature Pyramid Networks~. The attention supervision information is obtained by filling the ground-truth box and is associated with the ground-truth faces which match the anchors at the current layer. The attention maps are first fed into an exponential operation and then combined with feature maps. 
As a result, the method is capable of achieving impressive results on the Widerface~ dataset, with an $88.5\\%$ average precision on the hard subset, as well as an $88.3\\%$ average precision on the MAFA~ dataset.", "id": "ffc99e50-7659-4c74-829a-f7eec57fe6aa", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "58332e8c-aea0-4d5e-96a0-5382fc491812", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Occluded face detection" ], [ "subsection", "Detecting occluded face" ], [ "subsubsection", "Fusing detection results obtained from face sub-regions" ] ], "subsections": [], "title": "Fusing detection results obtained from face sub-regions" }, { "cite_extract_rate": 0.625, "cites": [ 1239, 209, 1236, 1238, 1237 ], "content": "Apart from selecting the visible facial parts and fusing results obtained from face sub-regions, there is a third way to minimize the adverse effects of occlusions on face detection. One promising approach is to use a novel grid loss~, which has been incorporated into the convolutional neural network to handle partial occlusion in face detection. It is based on the observation that partial occlusions would confuse a subset of detectors, whereas the remaining ones can still make correct predictions. To this end, this work regards occluded face detection as a particular single-class object detection problem, inspired by other works on object detection~. Furthermore, the proposed grid loss minimizes the error rate on face sub-blocks independently rather than over the whole face, thereby mitigating the adverse effect of partial occlusions and improving face detection accuracy. \nUsing the occluded area as an auxiliary cue rather than a hindrance is a feasible way to aid face detection in an adversarial manner. Adversarial occlusion-aware face detection (AOFD)~ is proposed to detect occluded faces and segment the occlusion area simultaneously. 
They integrate a masking strategy into AOFD to mimic different occlusion situations. More specifically, a mask generator is designed to mask the distinctive part of a face in a training set, forcing the detector to learn what is possibly a face in an adversarial way. Besides, an occlusion segmentation branch is introduced to help detect incomplete faces. The proposed multitask training method showed superior performance on general as well as masked face detection benchmarks. To cope with different poses, scales, illumination, and occlusions, Wu et al.~ introduce a hierarchical attention mechanism, applying long short-term memory~(LSTM) to predict face-specific attention. In this way, it can further model relations between the local parts and adjust their contribution to face detection. The proposed method achieves promising performance compared with Faster R-CNN~.", "id": "fd593a64-cb47-4681-bad4-618e0fd1044f", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "58332e8c-aea0-4d5e-96a0-5382fc491812", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Occluded face detection" ], [ "subsection", "Detecting occluded face" ], [ "subsubsection", "Occlusion information adversarially used for detection" ] ], "subsections": [], "title": "Occlusion information adversarially used for detection" }, { "cite_extract_rate": 0, "cites": [], "content": "If the extracted features are reasonably robust to the occlusion, then difficulties in face recognition due to occlusion are relieved. The aim is to extract features that are less affected by occlusions~(outliers) while preserving the discriminative capability. We group the approaches into engineered features and learning-based features. The former generally extract handcraft features from explicitly defined facial regions, which do not require optimization or a learning stage. 
The latter extract features by using learning-based methods such as linear subspace methods, sparse representation classification, or nonlinear deep learning techniques. One way to classify the methods can be seen in Fig.~\\ref{fig:orfe}.\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{figures/occlusion-robust-feature-extraction.pdf}\n\\caption{Methods used in occlusion robust feature extraction approaches.}\n\\label{fig:orfe}\n\\end{figure}", "id": "0ab6d3e5-5f21-47d0-96aa-3f9c5795fb03", "level": "section", "origin_cites_number": 0, "parent_id": "5be78ea8-4e37-4219-b1a2-958030400717", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Occlusion robust feature extraction" ] ], "subsections": [ "b6740507-3d6d-4105-8204-4acaceeca84b", "6f64193b-71e9-4beb-8053-5da4cc76979e" ], "title": "Occlusion robust feature extraction" }, { "cite_extract_rate": 0, "cites": [], "content": "Facial descriptors obtained in an engineered way are rather efficient because they can: (i)~be easily extracted from the raw face images; (ii)~discriminate different individuals while tolerating large variability in facial appearances to some extent; (iii)~lie in a low feature space so as to avoid a computationally expensive classifier. Generally, engineered features~(i.e., handcraft features) are extracted from facial patches and then concatenated to represent a face. Therefore, a fusion strategy can be imported to reduce the adverse effects of occluded patches in some way. Alternatively, patch-based matching can be used for feature selection to preserve occlusion-free discriminative information.\nThese methods in general require precise registration such as alignment based on eye coordinates for frontal faces and integrate the decisions from local patches to obtain a final decision for face recognition. 
This is problematic because these methods rely on robust face alignment under occlusion, but the eyes are likely to be occluded. In short, these approaches are not practical for real applications, since the face images usually need to be well aligned to facilitate the extraction of meaningful facial structure.", "id": "b6740507-3d6d-4105-8204-4acaceeca84b", "level": "subsection", "origin_cites_number": 0, "parent_id": "0ab6d3e5-5f21-47d0-96aa-3f9c5795fb03", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Occlusion robust feature extraction" ], [ "subsection", "Patch-based engineered features" ] ], "subsections": [ "7003f015-2cf9-44a3-b1e2-294f487c7c4c", "d3308e7d-a9e7-4953-a749-c8031b07ae82" ], "title": "Patch-based engineered features" }, { "cite_extract_rate": 0, "cites": [], "content": "Local Binary Patterns~(LBP)~ is used to derive a novel and efficient facial image representation and has been widely used in various applications. LBP and its variants~ remain popular and have so far produced good results in biometrics, especially face recognition. The main idea is to divide the face image into multiple regions, from which to extract LBP feature distributions independently. These descriptors are then concatenated to form an enhanced global descriptor of the face. For distance measurement between two faces, a weighted Chi-square distance is applied, accounting for the fact that some facial features are more important in human face recognition than others. The Scale Invariant Feature Transform~(SIFT) descriptor~ is popular in object recognition and baseline matching and can also be applied to face recognition~. SIFT is largely invariant to changes in scale, translation, and rotation, and is also less affected by illumination changes, affine or 3D projection. 
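The patchwise LBP pipeline described above (divide the face into a grid of regions, histogram the LBP codes per region, concatenate the histograms, and compare faces with a weighted Chi-square distance) can be sketched as follows. This is a minimal illustration rather than the implementation of any cited method; the 4x4 grid and the optional elementwise weights are arbitrary choices.

```python
import numpy as np

def lbp_code(img, r, c):
    """Basic 8-neighbour LBP code of pixel (r, c) in a 2-D array."""
    center = img[r, c]
    nbrs = [img[r-1, c-1], img[r-1, c], img[r-1, c+1], img[r, c+1],
            img[r+1, c+1], img[r+1, c], img[r+1, c-1], img[r, c-1]]
    return sum(int(v >= center) << i for i, v in enumerate(nbrs))

def lbp_descriptor(img, grid=(4, 4)):
    """Concatenate per-region LBP histograms into one face descriptor."""
    codes = np.array([[lbp_code(img, r, c)
                       for c in range(1, img.shape[1] - 1)]
                      for r in range(1, img.shape[0] - 1)])
    hists = []
    for rows in np.array_split(np.arange(codes.shape[0]), grid[0]):
        for cols in np.array_split(np.arange(codes.shape[1]), grid[1]):
            region = codes[np.ix_(rows, cols)]
            h, _ = np.histogram(region, bins=256, range=(0, 256))
            hists.append(h / max(region.size, 1))  # normalized histogram
    return np.concatenate(hists)

def weighted_chi2(h1, h2, weights=None, eps=1e-10):
    """Weighted Chi-square distance between two concatenated histograms."""
    d = (h1 - h2) ** 2 / (h1 + h2 + eps)
    return (d * weights if weights is not None else d).sum()
```

Down-weighting the histogram bins of regions likely to be occluded (e.g., the lower face for a scarf) is exactly what the optional weights are for.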
Similarly to SIFT, the Histograms of Oriented Gradient~(HOG) descriptor~ was proposed for human detection and has been extended to cope with object detection as well as visual recognition. The main idea is to characterize local object appearance and shape with the distribution of local intensity gradients. After applying a dense~(in fact, overlapping) grid of HOG descriptors to the detection window, the descriptors are combined and fed to the subsequent classifier. In contrast to these intensity-oriented methods, Gabor filters and other frequency-oriented approaches construct the face feature from filter responses. Generally, the filter responses computed for various frequencies and orientations from a single or multiple spatial locations are combined to form the Gabor feature~. Phase information, rather than magnitude information, from Gabor features contains discriminative information and is thus widely used for recognition~.\nFeatures based on Gabor filters are versatile. By post-processing they can be converted, for example, to binary descriptors of texture similar to LBPs. Ref.~ proposes KLD-LGBP, in which the Kullback-Leibler Divergence (KLD) is introduced to measure the distance between the local Gabor binary patterns~(LGBP) feature~ of a local region of the test image and that of the corresponding unoccluded local region of the reference faces. They define the probability of occlusion of that area as the distance between the two distributions of the local regions and further use it as the weight of the local region in the final feature matching. 
The main drawback of this method is the high dimensionality of LGBP features, which are the combination of Gabor transform, LBP, and a local region histogram on local face regions.", "id": "7003f015-2cf9-44a3-b1e2-294f487c7c4c", "level": "subsubsection", "origin_cites_number": 11, "parent_id": "b6740507-3d6d-4105-8204-4acaceeca84b", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Occlusion robust feature extraction" ], [ "subsection", "Patch-based engineered features" ], [ "subsubsection", "Handcraft features" ] ], "subsections": [], "title": "Handcraft features" }, { "cite_extract_rate": 0.25, "cites": [ 1240 ], "content": "Beside representation, the distance metric also plays an important role. Elastic and partial matching schemes bring in a lot of flexibility when handling challenges in face recognition. Elastic Bunch Graph Matching~(EBGM)~ uses a graph to represent a face, each node of the graph corresponding to Gabor jets extracted from facial landmarks. The matching method is used to calculate the distance between corresponding representations of two faces. To take advantage of elastic and partial matching, Ref.~ proposes a robust matching metric to match Difference of Gaussian~(DoG) filter descriptor of a facial part against its spatial neighborhood in the other faces and select the minimal distance for face recognition. Specifically, they extract $N$ local descriptors from densely overlapped image patches. During the matching, each descriptor in one face is picked up to match its spatial neighborhood descriptors and then the minimal distance is selected, which is effective in reducing the adverse effects of occlusion. A random sampling patch-based method~ has been proposed to use all face patches equally to reduce the effects of occlusion. It trains multiple support vector machine~(SVM) classifiers with selected patches at random. 
Finally, the results from the individual classifiers are combined to enhance the recognition accuracy. Similar to elastic bunch graph matching~, dynamic graph representation~ is proposed to build feature graphs based on node representations. Each node corresponds to a certain region of the face image, and the edges between nodes represent the spatial relationships between regions. Dynamic graph representation enables elastic and partial matching, where each node in one face image is matched against adjacent nodes in the other face image, which brings robustness to occlusion, especially for partially occluded biometrics.
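The minimal-distance patch matching described above can be sketched as follows. The descriptors, grid coordinates, and the neighborhood radius are illustrative assumptions, not the settings of the cited works.

```python
import numpy as np

def patch_match_distance(desc_a, desc_b, coords, radius=1.5):
    """For each patch descriptor of face A, keep only the minimum distance to
    spatially neighboring patches of face B; occluded outlier patches then
    contribute little, mimicking elastic partial matching."""
    total = 0.0
    for i, d in enumerate(desc_a):
        near = [j for j in range(len(coords))
                if np.linalg.norm(coords[j] - coords[i]) <= radius]
        total += min(np.linalg.norm(d - desc_b[j]) for j in near)
    return total

# toy 4x4 grid of 8-dimensional patch descriptors
rng = np.random.default_rng(1)
coords = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)
descs = rng.normal(size=(16, 8))
print(patch_match_distance(descs, descs, coords))  # 0.0 for identical faces
```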
Taking the occlusion challenge as a major concern, it is natural to apply \\textit{statistical methods} on face patches allowing for the fact that not all types of occlusion have the same probability of occurring.~\\textit{Sparse representation classifier} methods, which fully explore the discriminative power of sparse representation and represent a face with a combination coefficient of training samples, have been the mainstream approach to handle various challenges in face recognition for a long time. The last few years have witnessed a great success of~\\textit{deep learning} techniques~, especially deep convolutional neural networks~(DCNN) for uncontrolled face recognition applications.", "id": "6f64193b-71e9-4beb-8053-5da4cc76979e", "level": "subsection", "origin_cites_number": 1, "parent_id": "0ab6d3e5-5f21-47d0-96aa-3f9c5795fb03", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Occlusion robust feature extraction" ], [ "subsection", "Learning-based features" ] ], "subsections": [ "11d7267c-4813-4f72-84f7-ae9faa09157e", "f2381fe3-090e-4284-8e0f-f2eea1aa34fe", "cd378342-bfc7-4211-81d3-5a17939b2fa3", "2012a522-72b3-492e-8a83-c85e6b0ec9d5" ], "title": "Learning-based features" }, { "cite_extract_rate": 0, "cites": [], "content": "Approaches such as the Principal component analysis~(PCA)~ and the Linear Discriminant Analysis~(LDA)~ are the two most representative methods in learning the subspace for feature extraction. Eigenface~ uses PCA to learn the most representative subspace from training face images. Fisherface~ uses LDA to explore a discriminant subspace that differentiates faces of different individuals by incorporating the class labels of faces images. One limitation for these appearance-based methods is that proper alignment specifically based on the eye location is required. 
Modular PCA~ builds multiple eigenspaces (eigenfeatures) in the regions of facial components (e.g., eyes, nose, and mouth) to achieve tolerance to local variations of the face. Ref.~ uses an influence function technique to develop robust subspace learning for a variety of linear models within a continuous optimization framework. Ref.~ exploits properties of PCA to detect ``outliers'' and occlusions, and calculates the coefficients using only the remaining pixels. Inspired by robust PCA~, Fidler et al.~ propose a new approach that combines discriminative power with a reconstructive property, achieving good classification results while retaining the reconstruction ability needed to detect outliers (occlusions). Specifically, object recognition and face recognition applications are used to assess the robustness of the LDA coefficients, and an object-orientation estimation application, a regression task, is used to assess the robustness of the extended CCA coefficients. Ref.~ applies independent component analysis~(ICA) to exploit locally salient information from important facial parts for face recognition. This part-based representation achieves better performance than PCA and LDA in the case of partial occlusions and local distortions. Unlike linear subspace methods, nonlinear subspace methods use nonlinear transforms to convert a face image into a discriminative feature vector, which may attain highly accurate recognition in practical scenarios. Ref.~ proposes kernel correlation feature analysis (KCFA) on facial regions, i.e., the eye, nose, and mouth regions, to cope with partial face recognition. 
The proposed KCFA for feature extraction outperforms the linear approaches PCA~ and LDA~, as well as variants of kernel methods~.
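A minimal Eigenface-style sketch of the PCA step discussed above, on toy data; the SVD of the centered data stands in for the eigendecomposition of the covariance matrix.

```python
import numpy as np

def eigenfaces(X, k):
    """PCA on vectorized faces X (n_samples x n_pixels): returns the mean face
    and the top-k principal directions (the 'eigenfaces')."""
    mu = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def project(x, mu, W):
    """Low-dimensional code of a face in the learned subspace."""
    return (x - mu) @ W.T

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 64))      # 20 toy "faces" of 64 pixels each
mu, W = eigenfaces(X, k=5)
code = project(X[0], mu, W)
print(W.shape, code.shape)         # (5, 64) (5,)
```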
However, it is suboptimal to use local kernels of a single fixed size rather than select an appropriate kernel size for each local feature. In Ref.~, a non-metric partial similarity measure based on local features is introduced to capture the prominent partial similarity among face images while excluding unreliable and unimportant features. 
In paper~, a face recognition method is proposed that takes partial occlusions into account through statistical learning of local features. To this end, the probability density of the SIFT feature descriptors observed in training images is estimated with a simple Gaussian model. In the classification stage, the estimated probability density is used to measure the importance of each local feature of the test images by defining a weighted distance measure between two images. Based on this work, they extend the statistical learning of local features to a general framework~, which combines the learned weights of local features with feature-based similarity to define the distance measure. However, features extracted from local regions alone cannot encode the spatial structure of the face. Moreover, unreliable features from the occluded area are still integrated into the final representation, which reduces performance. McLaughlin et al.~ address random partial occlusion in test images by using the largest matching areas at each point on the face. They assume that the occluded test image region can be modeled by an unseen-data likelihood with a low posterior probability. 
More specifically, local facial areas with low posterior probabilities are de-emphasized in the overall score for each face, and only reliable areas are selected for recognition, which results in improved robustness to partial occlusion.
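The statistical weighting idea, fitting a simple Gaussian to training descriptors and down-weighting improbable (likely occluded) test features, can be sketched as below. The diagonal-covariance model and the normalization are simplifying assumptions of this illustration.

```python
import numpy as np

def gaussian_weights(train_desc, test_desc):
    """Fit a single diagonal-covariance Gaussian to training descriptors and
    weight each test descriptor by its (normalized) likelihood, so that
    improbable, likely-occluded features count less in matching."""
    mu = train_desc.mean(axis=0)
    var = train_desc.var(axis=0) + 1e-6
    log_p = -0.5 * (((test_desc - mu) ** 2) / var
                    + np.log(2 * np.pi * var)).sum(axis=1)
    w = np.exp(log_p - log_p.max())   # shift for numerical stability
    return w / w.sum()

rng = np.random.default_rng(0)
train = rng.normal(size=(200, 2))           # typical local descriptors
test = np.array([[0.0, 0.0], [8.0, 8.0]])   # a typical one and an outlier
w = gaussian_weights(train, test)
print(w)   # the outlier descriptor receives a far smaller weight
```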
The sunglasses test set contains $100$ images with the subject wearing sunglasses~(with a neutral expression), and the scarves test set contains $100$ images with the subject wearing a scarf~(with a neutral expression). The proposed GRRC achieves $93.0\%$ recognition accuracy on sunglasses and $79\%$ accuracy on scarves, which outperforms the original SRC by $5\%$ and $20\%$, respectively. In paper~, artificial occlusions are included to construct training data for a sparse and dense hybrid representation framework. The results show that artificially introduced occlusions are important to obtain discriminative features. Structured occlusion coding~(SOC)~ is proposed to employ an occlusion-appended dictionary to simultaneously separate the occlusion and classify the face. In this case, with the use of corresponding parts of the dictionary, face and occlusion can be represented respectively, making it possible to handle large occlusions, such as a scarf. In paper~, efficient locality-constrained occlusion coding~(ELOC) is proposed to greatly reduce the running time without sacrificing too much accuracy, inspired by the observation that the occlusion can be estimated using identity-unrelated samples.
Recently, another work~ attempts face recognition with a single sample per person and aims at robustness and effectiveness under complex facial variations such as occlusions. It proposes a joint and collaborative representation with local adaptive convolution features~(ACF), containing local high-level features from local regular regions. The joint and collaborative representation framework requires ACFs extracted from different local areas to have similar coefficients with respect to a representation dictionary. 
Ref.~ proposes to learn a robust and discriminative low-rank representation~(RDLRR) for optimal classification, aiming to prepare the learned representations optimally for classification as well as to capture the global structure of the data and the holistic structure of each occlusion-induced error image. The results demonstrate that the method yields better performance under illumination changes, real occlusions, and block occlusions.
Ref.~ combines center-symmetric local binary patterns~(CSLBP) with enhanced center-symmetric local binary patterns~(ECSLBP) to build the SRC dictionary for occluded face recognition. In Ref.~, discriminative multi-scale sparse coding~(DMSC) is proposed to address single-sample face recognition with different kinds of occlusion. More specifically, it learns an auxiliary dictionary to model the possible occlusion variations from external data based on PCA and proposes a multi-scale error measurement strategy to detect and disregard outlier pixels due to occlusion. Ref.~ proposes a hierarchical sparse and low-rank regression model and uses features based on the image gradient direction, leading to a weak low-rankness optimization problem. The model is suited for occluded face recognition and yields better recognition accuracy. In another work, NNAODL~(nuclear norm based adapted occlusion dictionary learning)~ has been proposed to reconstruct corrupted and non-corrupted regions for occluded face recognition. The occlusion parts of the training images are used to construct the occlusions, while normal training images are used to reconstruct the non-occluded regions, leading to improved computing efficiency. To cope with occluded face recognition with limited training samples, Ref.~ proposes a structural element feature extraction method, inspired by the characteristics of the human optic nerve, to capture local and contextual information for face recognition. 
Besides, an adaptive fusion method is proposed to combine multiple features, consisting of a structural element feature and a connected-granule labeling feature. Reinforced Centrosymmetric Local Binary Patterns~(RCSLBP) are applied to improve robustness to occlusion. Finally, few-shot sparse representation learning is applied for few-shot occluded face recognition.
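A toy sketch of the sparse-representation-classifier principle shared by these methods: represent the probe as a sparse combination of training atoms, then assign it to the class with the smallest reconstruction residual. Greedy orthogonal matching pursuit stands in here for the $\ell_1$ minimization used in the cited works, and the dictionary is a hypothetical toy example.

```python
import numpy as np

def src_classify(D, labels, y, n_atoms=3):
    """Sparse-representation classification sketch: greedily pick a few
    training atoms (orthogonal matching pursuit in place of l1 minimization),
    then assign y to the class whose atoms give the smallest residual."""
    D = D / np.linalg.norm(D, axis=0)       # unit-norm columns (atoms)
    r, support = y.astype(float), []
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef
    best_class, best_res = None, np.inf
    for c in set(labels):
        idx = [i for i in support if labels[i] == c]
        res = np.linalg.norm(y) if not idx else np.linalg.norm(
            y - D[:, idx] @ np.linalg.lstsq(D[:, idx], y, rcond=None)[0])
        if res < best_res:
            best_class, best_res = c, res
    return best_class

# toy dictionary: two classes spanning different directions
D = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [0.0, 0.1, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.1]])        # columns are training atoms
labels = [0, 0, 1, 1]
y = np.array([1.0, 0.0, 0.0, 0.0])          # probe close to class 0
print(src_classify(D, labels, y))           # 0
```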
Specifically, $87$ common hairstyle templates with various bangs and $100$ glasses templates are collected to synthesize training face images with different hairstyles and glasses, so that the CNN model becomes robust to both. The method indeed relieves the data deficiency problem and results in improved performance. However, it is limited to handling sunglasses and hair in recognition and does not solve the occlusion problem in general. In paper~, instead of using synthetic occluded faces directly, the importance of face regions is first identified in an occlusion sensitivity experiment, and a CNN is then trained with the identified face regions covered to reduce the model's reliance on them. Specifically, they propose to augment the training set with face images whose occlusions are located in high-effect regions~(the central part of the face) more frequently than in low-effect regions~(the outer parts of the face). This forces the model to learn more discriminative features from the outer parts of the face, resulting in less degradation when the central part is occluded. However, pre-defined occlusions may cause performance degradation when dealing with face images with an occlusion of a different size. Cen et al.~ propose a DDRC~(deep dictionary representation based classification) scheme to alleviate the occlusion effect in face recognition, where a dictionary is used to linearly code the deep features extracted by a convolutional neural network. The dictionary is composed of deep features of the training samples and an auxiliary dictionary associated with the occlusion patterns of the testing face samples. 
However, the proposed DDRC assumes that the occlusion pattern of the test faces is included in the auxiliary dictionary, which limits its use.", "id": "2012a522-72b3-492e-8a83-c85e6b0ec9d5", "level": "subsubsection", "origin_cites_number": 19, "parent_id": "6f64193b-71e9-4beb-8053-5da4cc76979e", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Occlusion robust feature extraction" ], [ "subsection", "Learning-based features" ], [ "subsubsection", "Deep learning" ] ], "subsections": [], "title": "Deep learning" }, { "cite_extract_rate": 0, "cites": [], "content": "If only visible facial parts are used for recognition, then the occlusion problem is mitigated to some extent. These approaches that explicitly exclude the occlusion area are called occlusion aware face recognition (OAFR). There are two groups of methods that constitute OAFR. One is occlusion detection based face recognition, which detects the occlusions first and then obtains a representation from the non-occluded parts only. The other one is partial face recognition, which assumes that a partial face is available and aims to use a partial face for recognition. Occlusion detection is ignored during partial face recognition. 
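The occlusion-augmentation strategy can be sketched as a simple transform applied to training faces. The rectangular occluder and the size limit below are illustrative stand-ins for the hairstyle and glasses templates of the cited works.

```python
import numpy as np

def occlude(face, rng, max_frac=0.4):
    """Paste a random rectangular 'occluder' (a stand-in for hairstyle or
    glasses templates) onto a copy of the training face."""
    h, w = face.shape
    oh = int(rng.integers(1, int(h * max_frac) + 1))    # occluder height
    ow = int(rng.integers(1, int(w * max_frac) + 1))    # occluder width
    top = int(rng.integers(0, h - oh + 1))
    left = int(rng.integers(0, w - ow + 1))
    out = face.copy()
    out[top:top + oh, left:left + ow] = 0.0
    return out

rng = np.random.default_rng(0)
face = np.ones((16, 16))
aug = occlude(face, rng)
print(aug.shape, float(aug.min()))   # (16, 16) 0.0 -- part of the face is masked
```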
A taxonomy of occlusion aware face recognition is shown in Fig.~\ref{fig:oafr}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{figures/occlusion-aware-face-recognition}
\caption{Method classification used in occlusion aware face recognition approaches.}
\label{fig:oafr}
\end{figure}
Some methods use predefined occlusion types as a substitute for arbitrary occlusions in different locations to simplify the occlusion challenge. Usually, scarves and sunglasses are used as representative occlusions because of their high probability of appearance in the real world. Based on this idea, Ref.~ introduces a 1-NN~(nearest neighbor) threshold classifier to detect the occluded face region in a PCA subspace learned from training data of occlusion-free patches. Then the selective local non-negative matrix factorization method is applied to select features corresponding to occlusion-free regions for recognition.
Some early works~ employ a binary classifier to search for the occluded area and incorporate only the unoccluded parts for comparison. Specifically, they first divide the face into multiple non-overlapping regions and then train an SVM classifier to identify whether each facial patch is occluded or not. By excluding occluded regions, improved overall recognition accuracy is observed. However, the performance is far from satisfactory and very sensitive to the training dataset. Ref.~ proposes to utilize compressed sensing for occlusion detection by using occluded and occlusion-free faces of the same identity. For the recognition process, discriminative information is extracted by excluding the detected occluded areas.
Since occlusions can corrupt the features of an entire image in some way, deep learning techniques have been developed to alleviate the problem by producing a better representation. Ref.~ designs a convolutional neural network to detect the occlusion in a multitask setting. More specifically, there are four region-specific tasks for occlusion detection, each aiming to predict the occlusion probability of a specific component: left eye, right eye, nose, and mouth. 
However, predicting only predefined occlusions limits flexibility, and inaccurate occlusion detection can, in turn, harm the recognition performance.
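The patch-wise detection pipeline, dividing the face into non-overlapping regions and classifying each as occluded or not, can be sketched as follows. The toy darkness rule is a hypothetical stand-in for the trained SVM of the cited works.

```python
import numpy as np

def detect_occluded_patches(face, is_occluded, patch=8):
    """Split the face into non-overlapping patches and run a binary classifier
    on each; the returned boolean mask marks patches to exclude from matching."""
    h, w = face.shape
    mask = np.zeros((h // patch, w // patch), dtype=bool)
    for i in range(h // patch):
        for j in range(w // patch):
            p = face[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            mask[i, j] = is_occluded(p.ravel())
    return mask

# hypothetical rule standing in for a trained SVM: near-black patches are occluded
is_dark = lambda v: v.mean() < 0.1
face = np.ones((16, 16))
face[:8, :8] = 0.0                  # simulate an occluded top-left quadrant
mask = detect_occluded_patches(face, is_dark)
print(mask)                         # only the top-left patch is flagged
```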
Recently, Song et al.~ propose a pairwise differential siamese network~(PDSN) to build the correspondence between the occluded facial block and the corrupted feature elements with a mask learning strategy. The results show improved performance on synthesized and realistic occluded face datasets.", "id": "da8dc97f-e84d-4b22-b185-797346d85be3", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "efaf8aff-3935-403d-889d-62b585a74f29", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Occlusion aware face recognition" ], [ "subsection", "Occlusion detection based face recognition" ], [ "subsubsection", "Visible part selection" ] ], "subsections": [], "title": "Visible part selection" }, { "cite_extract_rate": 0, "cites": [], "content": "It is worth mentioning that we classify partial face recognition as occlusion aware methods because partial face recognition skips the occlusion detection phase and focuses on face recognition when arbitrary patches are available, which can be seen as implicit occlusion awareness. Partial faces frequently appear in unconstrained scenarios, with images captured by surveillance cameras or handheld devices (e.g., mobile phones) in particular. To the best of our knowledge, research for partial face detection has so far been ignored in literature reviews. It is essential to search for the semantic correspondence between the partial face~(arbitrary patch) and the entire gallery face since it is meaningless to compare the features of different semantic facial parts. The semantic correspondence can be completed either in the feature extraction phase to extract invariant and discriminative face features, or in the comparison phase to construct a robust face classifier. Feature extraction and comparison methods can be developed to address the partial face recognition problem. 
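The part-based factorization underlying the NMF variants above can be sketched with plain multiplicative updates; the adaptive occlusion estimation of the cited work is omitted, and the data here are a toy low-rank matrix rather than faces.

```python
import numpy as np

def nmf(V, k, iters=300, seed=0):
    """Plain NMF via multiplicative updates: V ~= W @ H with nonnegative
    factors, whose columns act as part-based bases."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W, H = rng.random((n, k)), rng.random((k, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

rng = np.random.default_rng(1)
V = rng.random((10, 2)) @ rng.random((2, 8))   # nonnegative rank-2 "faces"
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(round(err, 4))   # small relative reconstruction error
```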
Therefore, we categorize the methods as feature-aware and comparison-aware methods.", "id": "2a328b6c-b934-4f79-8c9e-8f158a779893", "level": "subsection", "origin_cites_number": 0, "parent_id": "56e478f6-fe2d-45d3-b5e4-e656af4eef50", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Occlusion aware face recognition" ], [ "subsection", "Partial face recognition" ] ], "subsections": [ "901eeab5-fa02-4d06-aacf-394b2de48b74", "cb77c82c-17b1-4281-9b0a-6edd926908e6" ], "title": "Partial face recognition" }, { "cite_extract_rate": 0, "cites": [], "content": "As for feature-aware methods, Multiscale Double Supervision Convolutional Neural Network~(MDSCNN)~ is proposed to have multiple networks trained with facial patches of different scales for feature extraction. Multiscale patches are cropped from the whole face and aligned based on their eye-corners. Each network is training with face patches of a scale. The weights of multiple networks are combined learned to generate final recognition accuracy. Even though this method can yield good feature representations, it is troublesome to train 55 different Double Supervision Convolutional Neural Networks~(DSCNNs) according to different scaled patches. 
It is time-consuming in practice because window sliding is needed to generate multiscale patches for recognition.", "id": "901eeab5-fa02-4d06-aacf-394b2de48b74", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "2a328b6c-b934-4f79-8c9e-8f158a779893", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Occlusion aware face recognition" ], [ "subsection", "Partial face recognition" ], [ "subsubsection", "Feature extraction component" ] ], "subsections": [], "title": "Feature extraction component" }, { "cite_extract_rate": 0, "cites": [], "content": "Comparison-aware methods facilitate the semantic correspondence in the comparison phase of face recognition. Among the comparison based approaches, the multiple classifier systems use a voting mechanism to tolerate the misalignment problem to an extent. In this regard, Gutta et al.~ present an Ensemble of Radial Basis Function~(ERBF) Networks consisting of nine RBF components, and each of which is determined according to the number of clusters and overlap factors. A verification decision is based on the output generated by the RBFs. In paper~, they propose the Lophoscopic PCA method to recognize faces in the absence of part of the relevant information. For this purpose, masks (black rectangles) that cover specific regions of the face, including left eye, right eye, both eyes, nose, mouth, and no mask are introduced to compute different subspaces. They learn the weights for each subspace during training. When classifying the subject, weights regarding each face subspace are considered and combined for recognition. These methods are not entirely satisfactory but need improvement, particularly in terms of their recognition rate.\nAlternatively, a learning-based classifier compensates for the difficulty in the alignment of partial face recognition. 
Ref.~ conducts the first systematic study to recognize an arbitrary patch of a face image without a preliminary face alignment requirement, which is regarded as a milestone in the field of partial face recognition. The proposed method employs multi-keypoint descriptors~(MKD) to represent a holistic or partial face with a descriptor set of variable length. Descriptors from a large gallery construct the dictionary, making it possible to sparsely represent the descriptors of the partial probe image and infer the identity of the probe accordingly. However, SRC requires a sufficient number of faces to cover all possible variations of a person, which hinders the realization of a practical application.
Even though most learning-based classifier papers are SRC based, there is a small group that develops similarity measures to address the partial face recognition problem. Ref.~ utilizes the instance-to-class distance to address partial face recognition with scale-invariant feature transform~(SIFT) descriptors. The similarity between each probe patch and gallery image is obtained by comparing a set of local descriptors of one probe image to the nearest neighbor descriptors of all gallery images under a sparsity constraint. As an improvement, Ref.~ and~ consider partial face recognition as a feature set matching problem, where they explicitly use geometric features and textural features for simultaneous matching. Robust point set matching~(RPSM)~ considers both geometric distribution consistency and textural similarity. Moreover, a constraint on the affine transformation is applied to prevent unrealistic face warping. However, these methods would fail to work if face keypoints are unavailable due to occlusions. Moreover, the computational complexity is high, which makes the recognition process slow.
Recently, there has been a trend to combine deep learning and SRC methods to tackle partial face recognition~. 
Dynamic feature learning~ combines a fully convolutional network (FCN) with sparse representation classification (SRC) for partial face recognition. The sliding window matching proposed in paper~ searches for the most similar gallery part by sliding a window of the same size as the partial probe. The combination of FCN-based sliding window matching and SRC achieves promising performance. As an improved version, Ref.~ uses multi-scale representations in dynamic feature learning, which increases the ability to tolerate misalignment between partial probe patches and the gallery face.
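The window-sliding comparison shared by these methods can be sketched with plain numpy. This is a minimal illustration only: the scoring here is cosine similarity, whereas the actual papers score candidate windows with SRC, and all names and shapes are illustrative.

```python
import numpy as np

def sliding_window_match(probe_feat, gallery_feat):
    """Slide a window of the probe's spatial size over the gallery feature
    map (e.g. an FCN output of shape (H, W, C)) and return the best cosine
    similarity over all window positions."""
    ph, pw, _ = probe_feat.shape
    gh, gw, _ = gallery_feat.shape
    p = probe_feat.ravel()
    p = p / (np.linalg.norm(p) + 1e-12)
    best = -1.0
    for i in range(gh - ph + 1):
        for j in range(gw - pw + 1):
            g = gallery_feat[i:i + ph, j:j + pw].ravel()
            g = g / (np.linalg.norm(g) + 1e-12)
            best = max(best, float(p @ g))
    return best
```

If the partial probe truly corresponds to a region of the gallery face, the best-matching window attains a similarity close to $1$, which is the intuition behind matching a varied-size probe against a fixed-size gallery representation.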
A possible way to classify the methods can be seen in Fig.~\\ref{fig:orfr}.\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{figures/occlusion-recovery-face-recognition.pdf}\n\\caption{Classification of methods used in occlusion recovery based face recognition approaches.}\n\\label{fig:orfr}\n\\end{figure}", "id": "1e45db83-eb31-4a5f-8e0c-ae42d992d199", "level": "section", "origin_cites_number": 0, "parent_id": "5be78ea8-4e37-4219-b1a2-958030400717", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Occlusion recovery based face recognition" ] ], "subsections": [ "4a9e0415-0203-4c3f-b9a5-d55c4cf62f57", "3f92ce98-5715-49bb-a7e0-65c6c135859e" ], "title": "Occlusion recovery based face recognition" }, { "cite_extract_rate": 0, "cites": [], "content": "Image-based two-dimensional reconstructions carefully study the relationship between occluded faces and occlusion-free faces. The reconstruction techniques are classified as linear reconstruction, sparse representation classifier~(dictionary learning), and deep learning techniques.", "id": "4a9e0415-0203-4c3f-b9a5-d55c4cf62f57", "level": "subsection", "origin_cites_number": 0, "parent_id": "1e45db83-eb31-4a5f-8e0c-ae42d992d199", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Occlusion recovery based face recognition" ], [ "subsection", "Reconstruction based face recognition" ] ], "subsections": [ "d2901d7d-8367-4ebb-a5ea-136f27539f12", "f3fece0e-05eb-4a36-9d76-6cf62750af94", "7dca6067-aa37-43fe-9c0d-3c92132a216e" ], "title": "Reconstruction based face recognition" }, { "cite_extract_rate": 0, "cites": [], "content": "As for reconstruction techniques, Ref.~ utilizes PCA reconstruction and recursive error compensation to remove the eye occlusions caused by glasses. 
It combines a Markov Random Field model with a sparse representation of occluded faces to improve the reconstruction of corrupted facial regions. There are many variants~ employing PCA to detect outliers or occlusions and then reconstruct occlusion-free face images. Ref.~ estimates an occluded test image as a linear combination of training samples of all classes, which allows having occlusions in both training and testing sets. Distinct facial areas are weighted differently so that only non-occluded facial parts are used for reconstruction.
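The recursive error compensation idea behind PCA-based recovery can be sketched in a few lines of numpy: project the current estimate onto the eigenface subspace, reconstruct, and write the reconstruction back only into the occluded pixels. This is an illustrative sketch (function names and the simple fixed-iteration loop are ours, not any specific paper's implementation).

```python
import numpy as np

def pca_recover(x_occ, mask, mean, basis, n_iter=10):
    """Iterative PCA-based occlusion recovery.

    x_occ: (d,) occluded face vector; mask: (d,) bool, True at occluded
    pixels; mean: (d,) mean face; basis: (d, k) orthonormal eigenfaces.
    Observed pixels are kept fixed; occluded pixels are repeatedly
    overwritten with the subspace reconstruction."""
    x = x_occ.copy()
    for _ in range(n_iter):
        coeffs = basis.T @ (x - mean)   # project onto the eigenface subspace
        recon = mean + basis @ coeffs   # reconstruct the full face
        x[mask] = recon[mask]           # compensate only the occluded pixels
    return x
```

For a face that truly lies in the subspace, the occluded entries converge geometrically to their true values while the observed entries are never altered.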
The final face completion proceeds in the graph-based domain with the help of the graph Laplacian. 
Similarly, Ref.~ proposes a reconstruction approach consisting of occlusion detection and face reconstruction. First, the downsampled SRC is used to locate all possible occlusions at low computational cost. Second, all discovered face pixels are imported into an overdetermined equation system to reconstruct an intact face.
An innovative solution for the occlusion challenge is presented by structured sparse representation based classification (SSRC)~ to learn an occlusion dictionary. The regularization term of mutual incoherence forces the resulting occlusion dictionary to be independent of the training samples. This method effectively decomposes the occluded face image into a sparse linear combination of the training sample dictionary and the occlusion dictionary. The recognition can then be executed on the recovered occlusion-free face images. Nevertheless, this method requires retraining of the model to handle different occlusions. 
In paper~, a new criterion to compute modular weight-based SRC is proposed to address the problem of occluded face recognition. They partition a face into small modules and learn the weight function according to the Fisher rate. The modular weight is used to lessen the effect of modules with low discriminative power and to detect the occluded modules. More recently, Ref.~ proposes a robust representation to model contiguous block occlusions based on two characteristics. The first characteristic introduces a tailored potential loss function to fit the error distribution, specifically a Laplacian sparse error distribution or more general distributions based on M-estimators. The second characteristic models the error image, which is the difference between the occluded test face and the unoccluded training face of the same identity, as structurally low-rank. 
Wang et al.~ propose a two-stage method: an occlusion detection stage, in which varying occlusions are detected and eliminated, and an iterative recovery stage, in which occluded parts are recovered while unoccluded parts are preserved. With this iterative recovery strategy, the joint occlusion detection and recovery method can produce good global features that benefit classification.
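The core idea shared by the SRC family above can be sketched compactly: represent the probe over the concatenation of the training dictionary and an identity occlusion dictionary, then classify by the smallest class-wise residual after subtracting the estimated occlusion error. This is a toy sketch under stated assumptions: the $\ell_1$ solver is a plain ISTA loop, and the dimensions and $\lambda$ are illustrative (real systems use far larger dictionaries).

```python
import numpy as np

def soft(v, t):
    # soft-thresholding operator for the l1 proximal step
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def src_occlusion(y, A, labels, lam=0.01, n_iter=500):
    """SRC with an identity occlusion dictionary.

    Solves min_x 0.5*||y - [A, I] x||^2 + lam*||x||_1 via ISTA, then
    returns the class whose columns best explain y after removing the
    estimated occlusion error. A: (d, n) column-stacked training faces."""
    d, n = A.shape
    B = np.hstack([A, np.eye(d)])              # training dict + occlusion dict
    step = 1.0 / (np.linalg.norm(B, 2) ** 2)   # 1 / Lipschitz constant
    x = np.zeros(n + d)
    for _ in range(n_iter):
        x = soft(x - step * B.T @ (B @ x - y), step * lam)
    coef, err = x[:n], x[n:]                   # split code / occlusion error
    best, best_res = None, np.inf
    for c in set(labels):
        idx = np.array([l == c for l in labels])
        res = np.linalg.norm(y - err - A[:, idx] @ coef[idx])
        if res < best_res:
            best, best_res = c, res
    return best
```

The identity dictionary absorbs the (sparse) occlusion error, so the class residual is computed on the implicitly de-occluded probe.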
Additionally, adversarial CNNs are introduced to enhance the discriminative information in the recovered faces.", "id": "7dca6067-aa37-43fe-9c0d-3c92132a216e", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "4a9e0415-0203-4c3f-b9a5-d55c4cf62f57", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Occlusion recovery based face recognition" ], [ "subsection", "Reconstruction based face recognition" ], [ "subsubsection", "Deep learning" ] ], "subsections": [], "title": "Deep learning" }, { "cite_extract_rate": 0, "cites": [], "content": "Image inpainting techniques are widely used to carefully obtain occlusion-free images and are not limited to face images. Inpainting techniques focus on repairing the occluded images and leave face recognition out of consideration. They can be divided into 1)~non-blind inpainting and 2)~blind inpainting categories, depending on whether the location information of corrupted pixels is provided or not. Deep learning is an effective approach to blind inpainting.", "id": "3f92ce98-5715-49bb-a7e0-65c6c135859e", "level": "subsection", "origin_cites_number": 0, "parent_id": "1e45db83-eb31-4a5f-8e0c-ae42d992d199", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Occlusion recovery based face recognition" ], [ "subsection", "Inpainting" ] ], "subsections": [ "ae09d112-124d-420b-84ed-e3a4ec829927", "98ab77d1-15f9-4901-a3d8-7d63548773ce" ], "title": "Inpainting" }, { "cite_extract_rate": 0, "cites": [], "content": "Those techniques fill in the occluded part of an image using the pixels around the missing region. Exemplar-based techniques that cheaply and effectively generate new texture by sampling and copying color values from the source are widely used. 
In paper~, a non-blind inpainting method proposes a unified scheme to determine the fill order of the target region, using an exemplar-based texture synthesis technique. The confidence value of each pixel and image isophotes are combined to determine the filling priority. Ref.~ presents an image inpainting technique to remove occluded pixels when the occlusion is small. More specifically, it combines feature extraction and fast weighted-principal component analysis~(FW-PCA) to restore the occluded images. More recently, a hybrid technique~ has been proposed where a PDE method and modified exemplar inpainting are utilized to restore the occluded face region. However, the occlusion type of face images studied in this work is not representative of real scenarios.
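The fill-order computation in exemplar-based inpainting can be sketched as follows. This is a deliberately simplified sketch: priority here is only the confidence term (sum of known-pixel confidence over patch area), whereas the full scheme also multiplies in an isophote-based data term; the function name and patch radius are illustrative.

```python
import numpy as np

def next_patch_to_fill(confidence, mask, half=1):
    """Pick the next target patch on the fill front.

    confidence: (H, W) per-pixel confidence; mask: (H, W) bool, True where
    pixels are missing. Returns the front pixel whose patch has the
    highest confidence-based priority."""
    h, w = mask.shape
    best, best_p = None, -1.0
    for i in range(h):
        for j in range(w):
            if not mask[i, j]:
                continue
            # front pixel: missing, but with at least one known neighbour
            nbhd = mask[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            if nbhd.all():
                continue
            pc = confidence[max(i - half, 0):i + half + 1,
                            max(j - half, 0):j + half + 1]
            pm = mask[max(i - half, 0):i + half + 1,
                      max(j - half, 0):j + half + 1]
            p = pc[~pm].sum() / pm.size   # confidence term of the priority
            if p > best_p:
                best, best_p = (i, j), p
    return best
```

Patches on the corners of a hole are surrounded by more known pixels, so they are filled first; this is the onion-peel-with-priority behaviour the text describes.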
Assuming that training data are generated from an underlying latent representation, the VAE derives a tractable lower bound on the data likelihood that can be optimized. This method is a principled approach to generative models, and useful features can be extracted by inference of $q(z|x)$. However, the generated images tend to be blurry and of lower quality. Instead, the GAN~ learns to generate a new image from the training distribution using a minimax game between the generator network and the discriminator network. The discriminator wants to distinguish between real and fake images, while the generator wants to fool the discriminator by generating real-looking images. Ref.~ uses convolutional architectures in the GAN and can generate realistic samples. However, the GAN is notorious for unstable training. Makhzani et al.~ propose an adversarial autoencoder~(AAE) that uses the GAN framework as a variational inference in a probabilistic autoencoder to ensure that generating from any part of the prior space results in meaningful samples.
There are GAN variants for all kinds of applications. We focus on methods that are relevant to face image editing and image inpainting. One is a blind-inpainting work~ that combines sparse coding~ and deep neural networks to tackle image denoising and inpainting. In particular, a stacked sparse denoising autoencoder is trained to learn the mapping function from generated corrupted, noisy, overlapping image patches to the original noise-free ones. The network is regularized by a sparsity-inducing term to avoid over-fitting. This method does not need prior information about the missing region and provides solutions to complex pattern removal, such as superimposed text on an image. Context Encoders~ combine the encoder-decoder architecture with context information of the missing part by regarding inpainting as a context-based pixel prediction problem. 
Specifically, the encoder architecture is trained on the input images with missing parts to obtain a latent representation, while the decoder architecture learns to recover the lost information from that latent representation. Pixel-wise reconstruction loss and adversarial loss are jointly used to supervise the context encoders to learn semantic inpainting results. Several variants of context encoders~ are proposed: some extend it by defining global and local discriminators~, and some take the result of context encoders as the input and apply joint optimization of image content and texture constraints to avoid visible artifacts around the border of the hole~. Ref.~ relies on the power of the generative network to complete the corrupted image without requiring an external training dataset. A partial convolution based network~ is proposed to only consider valid pixels and apply a mechanism that can automatically generate an updated mask, resulting in robust image inpainting for irregular holes. Information Maximizing Generative Adversarial Networks~(InfoGAN)~ maximize the mutual information between latent variables and the observation in an unsupervised way. It decomposes the input noise vector into the source of incompressible noise $z$ and the latent code $c$, which targets the salient structured semantic features of the data distribution. By manipulating latent codes, several visual concepts such as different hairstyles and the presence or absence of eyeglasses are discovered. Occlusion-aware GAN~ is proposed to identify a corrupted image region, with the associated corrupted region recovered using a GAN pre-trained on occlusion-free faces. Ref.~ employs GAN for eyes-to-face synthesis with only eyes visible. Very recently, AttGAN, a face image editing method, has imposed attribute classification constraints on the generated image so that the desired attributes are incorporated. 
Hairstyles and eyeglasses that may cause occlusion in a face image are treated as attributes, which can be toggled to be present or absent in the generated image. ERGAN~(eyeglasses removal generative adversarial network)~ is proposed for eyeglasses removal in the wild in an unsupervised manner. It is capable of rendering competitive removal quality in terms of realism and diversity. In paper~, ID-GAN~(identity-diversity generative adversarial network) combines a CNN-based recognizer with a GAN to inpaint realistic and identity-preserving faces. The recognizer is treated as the third player competing with the generator.
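The adversarial game underlying the GAN-based inpainting methods above can be written compactly. The first expression is the standard minimax objective; the second is the typical joint loss of inpainting variants such as context encoders, where the reconstruction and adversarial terms are balanced by weights (the $\lambda$ symbols and the masked input $\hat{x}$ are illustrative notation, not taken from a specific paper):

```latex
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
+ \mathbb{E}_{z \sim p_z}\!\left[\log\left(1 - D(G(z))\right)\right]

\mathcal{L}_{\mathrm{inpaint}}
= \lambda_{\mathrm{rec}} \,\bigl\lVert x - G(\hat{x}) \bigr\rVert_2^2
+ \lambda_{\mathrm{adv}} \,\mathcal{L}_{\mathrm{adv}}
```

Here $\hat{x}$ denotes the input with the occluded region masked out; the reconstruction term anchors the completion to the visible context, while the adversarial term pushes the filled region toward the natural face manifold.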
We also categorize them according to which component of the face recognition pipeline they act on to tackle occlusion.
}\\label{tab:ofdTab}\n\\begin{tabularx}{0.49\\textwidth}{p{2cm}Xp{1.5cm}p{1.6cm}}\n\\toprule\n Approach &Publication & Precision `masked' & Precision `w/o Ignored' \\\\ \\midrule\nOcclusion unaware& & NA & $60.8\\%$ \\\\\n\\midrule\n\\multirow{2}{*}{Occlusion aware}& & NA & $76.4\\%$\\\\\n& & $76.5\\%$ & $88.3\\%$ \\\\\n\\midrule\nOcclusion segmentation & &$83.5\\%$ & $91.9\\%$ \\\\ \\bottomrule\n\\end{tabularx}\n\\end{table}", "id": "ea588296-1f71-45f6-b93c-6b1b44b18a26", "level": "subsection", "origin_cites_number": 9, "parent_id": "14605930-a574-4766-b31c-9598c1677fbc", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Performance evaluation" ], [ "subsection", "Evaluation of Occluded Face Detection" ] ], "subsections": [], "title": "Evaluation of Occluded Face Detection" }, { "cite_extract_rate": 0, "cites": [], "content": "There are numerous standard datasets for general face recognition, but they are not appropriate for OFR~(occluded face recognition) because occluded faces are barely present in the datasets. Alternatively, researchers make the most of the general datasets to generate synthetic occluded datasets by incorporating synthetic occlusions, occluding rectangles, etc. Five categories regarding OFR testing scenarios are illustrated in Fig.~\\ref{fig:OFR_problems}. In this way, synthetic occluded datasets can meet the requirements of OFR, where an occlusion-free gallery is queried using occluded faces. \n\\begin{table}[!htbp]\n\\renewcommand\\arraystretch{1.4}\n\\centering\n\\caption{Summary of existing benchmark datasets for occluded face recognition. `\\# Sub' is short for the number of subjects. `\\# Ims' means the number of images. `Real Occ.' says whether real occlusions are included in the original dataset. `Syn. Occ.' 
is short for synthesized occlusions imposed on the original dataset to obtain `D-occ'.}\label{tab:dataset}
\begin{tabular}{l c c l l } 
\toprule
Dataset & \# Sub & \# Ims &Real Occ. &Syn. Occ. \\ \midrule
AR~ & $126$ &$4,000$ &Yes &NA\\
Extended Yale B~ &$38$&$161,289$ &No & Unrelated Ims\\
ORL~&$40$&$400$&No&Rectangle \\
FERET~&$1500$&$13,000$&No&Rectangle\\
LFW~&$5749$&$13,000$&No&Partial Faces \\ 
CAS-PEAL~&$1040$&$9,594$&Yes& NA\\ 
CMU-PIE~&$68$&$41,368$&No&Rectangle\\
PubFig~&$200$&$ 58,797$&No&Partial Faces\\
NIR-Distance~&$276$&$4300$&No&Partial Faces\\
NIR-Mobile~&$374$&$11,286$&No&Partial Faces\\
FRGC~&$276$&$943$&No&Partial Faces\\
FRGC-v2~&$466$&$4,007$&No&Partial Faces\\
\bottomrule
\end{tabular}
\end{table}
We denote by `D' the original dataset and by `D-occ' the occluded dataset derived from `D'. If real occlusions are included in `D', they are used directly. Otherwise, synthesized occlusions are imposed to form `D-occ'. Table~\ref{tab:dataset} summarizes each dataset by the number of subjects, the number of images, whether real occlusions are included, and if not, what kind of synthesized occlusions are used to obtain the `D-occ' dataset. Here AR and Extended Yale B, the two most widely used face datasets, are discussed.
\textbf{AR} face database is one of the very few datasets that contain real occlusions~(see Fig.~\ref{fig:OFR_problems} first column). It consists of over $4000$ faces of $126$ individuals: $70$ men and $56$ women, taken in two sessions with a two-week interval. There are $13$ images per individual in every session, and these images differ in terms of facial expression, illumination, and partial occlusion, including sunglasses and scarves. Indices $8$ and $11$ of each session indicate the person wearing sunglasses or a scarf, respectively. 
Indices $9$--$10$ and $12$--$13$ combine the sunglasses or the scarf with illumination changes, respectively. 
\textbf{Extended Yale B} face database contains $161,289$ face images from $38$ persons, each captured under nine poses and $64$ different illuminations without occlusion. It is widely used to evaluate the efficacy of algorithms under synthesized occlusions: occluding, unrelated images are randomly superimposed on the occlusion-free faces~(see Fig.~\ref{fig:OFR_problems} last column). Typically, gallery images are occlusion-free faces, and test images are randomly occluded with unrelated images such as a baboon or a square image. Sometimes faces are occluded with, for example, white or black rectangles to constitute a `D-occ' dataset~(see Fig.~\ref{fig:OFR_problems} fourth column).
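The AR index convention described above can be turned into evaluation splits with a few lines of code. This helper is a hypothetical sketch for illustration (the tuple-based image identifiers and function name are ours); it uses only the sunglasses/scarf indices stated in the text and assumes one neutral face (index $1$) per subject for enrollment.

```python
# Per-session AR indices of the purely occluded images, as stated in the text.
AR_OCCLUSION_INDEX = {"sunglasses": 8, "scarf": 11}

def ar_split(subject_ids, occlusion):
    """Return (gallery, probes) as lists of (subject, image_index) pairs:
    one neutral face per subject for enrollment, and the occluded face of
    the requested type as the probe."""
    occ = AR_OCCLUSION_INDEX[occlusion]
    gallery = [(s, 1) for s in subject_ids]
    probes = [(s, occ) for s in subject_ids]
    return gallery, probes
```

Such an explicit split builder makes the enrollment/probe protocol reproducible, which matters given the protocol inconsistencies discussed later in this section.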
We also categorize methods based on the face recognition pipeline and present evaluation results from this perspective.
The training set also consists of a single sample per person.
 \item[(4)] There is no overlap between subjects for training and testing, which is represented by `D-TR'. We use the setting `D-TR-DL' to indicate that large-scale general data are available for training with deep learning techniques. 
 \item[(5)] Images of subjects are randomly selected for training, and the remaining ones are used for testing, which we call `S-TR-Rand'.
\end{itemize}
The identification performance of representative algorithms on Extended Yale B as well as other benchmarks is summarized in Table~\ref{tab:otheridenrate}. An evaluation summary of different categories of representative algorithms with respect to verification rates is given in Table~\ref{tab:allverirate}.
As the results on the AR dataset in Table~\ref{tab:ARidenrate} show, the experimental setups differ slightly from paper to paper, which makes it difficult to interpret these results at first sight. This is because the AR dataset does not provide standard protocols, which are essential for comparing methods on fair ground. Despite the absence of standard protocols, it is still possible to make some useful observations. First, most methods treat session~1 of AR as the target and report the results on sunglasses and scarf individually. Second, some popular protocols have gradually formed among these methods, which can be treated as relatively fair grounds for comparison. These are (i)~S-TR-7IPS, (ii)~S-TR-8IPS, and (iii)~S-TR-SSPP. S-TR-SSPP in particular has become a challenge and defines an unambiguous protocol to follow. Last but not least, thanks to deep learning methods, there is a trend to rely on massive general data for training rather than splitting the AR dataset to form the training set. 
There is still room to improve OFR performance, especially under the SSPP~(single sample per person) protocol.
Results on Extended Yale B and other benchmark datasets are shown in Table~\ref{tab:otheridenrate} and Table~\ref{tab:allverirate}. Even though the original occlusion-free datasets such as Extended Yale B and ORL are well addressed by current face recognition algorithms, the `D-occ'~(occluded version) datasets derived from them still remain a challenge. It is not surprising that algorithms suffer severe degradation when $90\%$ of the face is occluded by a rectangle or an unrelated image. Moreover, these testing scenarios are not representative of what we would find in a realistic setting. 
Apart from occluding rectangles and unrelated images, some methods intend to solve the partial face problem using arbitrary patches cropped from a face image. Since partial faces are arbitrary patches, it is hard to ensure that algorithms are compared on a level playing field. For example, partial faces containing eye regions are expected to carry more discriminative features than those without eyes.
However, they sometimes treat occluded face recognition as a semantic correspondence problem and address the occlusion challenge in the comparison component of the face recognition pipeline. Similarly, occlusion aware face recognition~(OAFR) usually solves occluded face recognition in the comparison component. Nevertheless, some occlusion-aware techniques use an occlusion-robust dictionary to represent an occluded face, which lies in the feature extraction component. Therefore, we categorize the methods discussed so far based on a general face recognition pipeline to offer a fresh perspective~(see Table~\ref{tab:components}). Each component is elaborated as follows:
\begin{itemize}
 \item Data augmentation or data recovery: augmentation based techniques cope with the occlusion challenge by augmenting training data to include possible real-life occlusions; with deep learning techniques, occlusion-robust features can then be extracted. Recovery based methods use local or global knowledge to explicitly fill the occluded area or to de-occlude the face implicitly, for example using an encoder-decoder structure. 
 \item Feature extraction exploits the locality of occlusion using patch-based engineered feature extraction, or applies machine learning or deep learning to obtain features. Statistical learning and sparse representation are two commonly used techniques in this approach.
 \item Feature comparison: features obtained for an entire face are of fixed length, while features for occluded face images depend on the occlusion area and are therefore of varied length. The comparison phase is used to find the semantic correspondence between fixed-length and varied-length features, because it is meaningless to match different semantic parts.
 \item Fusion strategy relies on the assumption that occluded regions have less effect on identity recognition than the visible facial areas. 
Occlusion challenges can be met by fusing the decision of multiple classifiers or using the majority voting strategy to minimize the effect of features extracted from the occlusion.\n\\end{itemize}", "id": "430e7b2e-689b-4c0f-8e9f-786ae4eae2b6", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "c8ee41eb-5b2e-4753-917f-b7dd08427589", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Performance evaluation" ], [ "subsection", "Evaluation of Occluded Face Recognition" ], [ "subsubsection", "Evaluation based on the face recognition pipeline for categorization" ] ], "subsections": [], "title": "Evaluation based on the face recognition pipeline for categorization" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, future dataset challenges and research challenges are discussed. In many cases, a new research challenge means specific dataset requirements. On the other hand, datasets also reflect underlying challenges that need to be solved in real life.", "id": "729eb785-bd28-450a-a746-83bf1ec3a0a6", "level": "section", "origin_cites_number": 0, "parent_id": "5be78ea8-4e37-4219-b1a2-958030400717", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Discussion" ] ], "subsections": [ "e7fa8a2b-dca3-441c-b61d-698eaac27e35", "51ae8615-afa5-4cc3-a40e-72c692d5f5a1" ], "title": "Discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "There are three major issues in the datasets referred to: \\textbf{dataset scale}, \\textbf{occlusion diversity}, and \\textbf{standard protocol}. The datasets for occluded face recognition are of a small scale. AR is one of the very few that contain real occlusions and images of only $126$ individuals are included. As for occlusion diversity, sunglasses and scarf occlusions are frequently considered. However, occlusions in real life are a lot more diverse than that. 
Regarding synthesized occlusions, occluding rectangles and unrelated images are commonly applied. In the literature, `Baboon' is a typically used unrelated image for generating synthetic occluded faces. However, faces synthetically occluded with such images, or with a black/white rectangle, are not representative of real-life occluded faces. As for a standard protocol, the results of representative approaches obtained from AR and other benchmark datasets cannot be compared objectively due to the wide diversity of occluded faces. Therefore, future research will have to overcome these three weaknesses of the current datasets. Future benchmark datasets should contain more individuals, diverse and sufficient occlusions, and well-defined evaluation protocols. \nApart from specific requirements for benchmark datasets, collecting a massive occluded face dataset for training is also inevitable as deep learning techniques are adopted. However, the main source of face images is usually the web, where labelled faces with occlusions are scarce. How to label occluded faces efficiently is an open issue that needs to be resolved, because some severely occluded faces are difficult or impossible for humans to recognize. In this respect, occluded face recognition remains a challenge. Combined with pose variations, low resolution, etc., recognizing unconstrained faces is far from solved.\nFor the development of occluded face recognition techniques, new large-scale datasets are needed that respond to the occlusion challenges. Recently, IJB-C, a large-scale dataset, has been made publicly available. It not only contains many natural occlusions but also provides annotated occlusion locations. The public availability of IJB-C may usher in a new development of occluded face recognition. 
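The synthetic-occlusion protocols discussed above (a black/white rectangle or an unrelated image such as `Baboon' pasted at a given occlusion ratio) can be sketched as follows. This is a minimal illustration only: the random placement, the square occluder shape, and the crude nearest-neighbour resize are our own assumptions, not a protocol taken from any surveyed paper.

```python
import numpy as np

def synthesize_occlusion(face, occluder=None, ratio=0.5, rng=None):
    """Paste a square occlusion covering `ratio` of the face area.

    If `occluder` is None, a black rectangle is used; otherwise the given
    unrelated image (e.g. the 'Baboon' image) is resized by nearest-neighbour
    sampling and pasted at a random location inside the face image.
    """
    rng = rng or np.random.default_rng(0)
    h, w = face.shape[:2]
    # Side length of a square whose area is `ratio` of the face area.
    side = min(int(np.sqrt(ratio * h * w)), h, w)
    y = rng.integers(0, h - side + 1)
    x = rng.integers(0, w - side + 1)
    out = face.copy()
    if occluder is None:
        patch = np.zeros((side, side), dtype=face.dtype)  # black rectangle
    else:
        oh, ow = occluder.shape[:2]
        ys = np.arange(side) * oh // side
        xs = np.arange(side) * ow // side
        patch = occluder[np.ix_(ys, xs)]  # crude nearest-neighbour resize
    out[y:y + side, x:x + side] = patch
    return out
```

The occlusion ratio is controlled only approximately, since the side length is rounded down to an integer.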
In the future, we expect to see standardized evaluations of effective occluded face recognition algorithms on the AR dataset as well as on newly developed real-life, large-scale datasets such as IJB-C.", "id": "e7fa8a2b-dca3-441c-b61d-698eaac27e35", "level": "subsection", "origin_cites_number": 0, "parent_id": "729eb785-bd28-450a-a746-83bf1ec3a0a6", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Discussion" ], [ "subsection", "Future Dataset Challenges" ] ], "subsections": [], "title": "Future Dataset Challenges" }, { "cite_extract_rate": 0, "cites": [], "content": "In the future, unconstrained occluded face recognition will become a challenging problem that needs to be addressed. Unlike the occluded faces handled so far, unconstrained occluded faces should follow future benchmark datasets rather than being limited to predefined occlusions. Currently used datasets such as AR and Extended Yale B are both problematic since they are aligned and captured in a constrained environment. Apart from unconstrained occluded face recognition, it remains an open issue to resolve face recognition burdened with unconstrained occlusions together with other challenges such as large pose variations, age gaps, low resolution, etc. It is worth mentioning that unconstrained face recognition usually does not treat occlusion as the major challenge; instead, it mainly takes pose and expression challenges into consideration. Face recognition will only become truly unconstrained once real-life occlusion is treated as equally important among these challenges. \nSince the existing occluded face recognition approaches cope with occlusion challenges from distinct perspectives, we expect individual improvements in occlusion robust feature extraction, occlusion aware face recognition, and occlusion recovery based face recognition. 
The majority of methods in the literature so far follow the line of traditional machine learning techniques. Methods for occluded face recognition in the future will make use of deep learning techniques, especially CNN based deep networks. Based on the method categories in Fig.~\\ref{fig:methods_category}, we suggest a number of ways to develop a face recognition system that is better able to handle occlusions as follows.\n\\textbf{Novel data augmentation techniques} are needed that could help to learn more discriminative feature representations robust to occlusions if a CNN deep network is trained on the augmented dataset. Current solutions for data augmentation mainly rely on large-scale training data from the web, and use occlusion templates to generate synthesized occluded faces. However, manually designed occlusion templates heavily rely on accurate facial landmarking, and an acceptable occluded face can only be achieved if the occlusion template is properly aligned with the facial landmarks. For example, eyeglasses should be scaled and aligned to the eye centers. One potential solution is to take advantage of GANs to generate natural occluded faces by transferring the occlusion type from a real occluded face to an occlusion-free face.\n\\textbf{More effective occlusion recovery models} need to be devised to make the most of state-of-the-art unified face recognition systems. Current occlusion recovery solutions adopt an encoder-decoder architecture for recovery and are designed for visually pleasing results rather than accurate reconstruction. One potential way to resolve this issue is to combine the occlusion recovery task and identification or verification task in a multi-task manner. The adversarial loss can be incorporated to ensure that the recovered face looks natural.\n\\textbf{New occlusion detection techniques} need to be designed to take advantage of massive public datasets. 
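The template-to-landmark alignment described in the data augmentation discussion above (e.g., scaling and aligning an eyeglasses template to the eye centers) amounts to a two-point similarity transform. A minimal sketch follows; the anchor points and landmark coordinates are illustrative assumptions, not values from any surveyed method.

```python
import numpy as np

def similarity_from_two_points(src, dst):
    """Similarity transform (scale, rotation, translation) mapping the two
    anchor points `src` (e.g. the eye sockets of an eyeglasses template)
    onto the two detected landmarks `dst` (e.g. the eye centers)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    vs, vd = src[1] - src[0], dst[1] - dst[0]
    scale = np.linalg.norm(vd) / np.linalg.norm(vs)
    angle = np.arctan2(vd[1], vd[0]) - np.arctan2(vs[1], vs[0])
    c, s = np.cos(angle), np.sin(angle)
    rot = scale * np.array([[c, -s], [s, c]])
    t = dst[0] - rot @ src[0]
    return rot, t  # a template point p maps to rot @ p + t
```

Warping the template image with this transform then places the occlusion consistently with the detected face pose.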
Better occlusion detection not only makes it possible to generate occlusion templates automatically but also provides an informative preprocessing for face recognition. Most methods reviewed address the occlusion detection problem by partitioning the face into patches using a binary classifier, resulting in a rough occlusion area. Combining occlusion detection and occluded face recognition in a unified framework seems a promising way, leading to an automatic face recognition system.", "id": "51ae8615-afa5-4cc3-a40e-72c692d5f5a1", "level": "subsection", "origin_cites_number": 0, "parent_id": "729eb785-bd28-450a-a746-83bf1ec3a0a6", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Discussion" ], [ "subsection", "Future Research Challenges" ] ], "subsections": [], "title": "Future Research Challenges" }, { "cite_extract_rate": 0.19, "cites": [ 1249, 1253, 524, 1241, 1231, 1232, 1234, 1235, 1233, 8431, 1251, 1247, 1255, 1242, 1002, 1254, 1237, 1250, 1248 ], "content": "In this paper, we present a thorough survey of face recognition techniques under occlusion and systematically categorize methods up to now into 1) occlusion robust feature extraction, 2) occlusion aware face recognition, and 3) occlusion recovery based face recognition. Newly published and innovative papers, especially recent deep learning techniques for occluded face recognition, have been discussed. Furthermore, we report comparative performance evaluations in terms of occluded face detection and face recognition. In the end, we discuss future challenges in terms of dataset and research~(including potential solutions) that move the field forward.\n\\begin{table*}[htbp]\n\\renewcommand\\arraystretch{0.9}\n\\caption{Evaluation Summary of Representative Algorithms on AR dataset regarding identification rates. 
Three categories are: 1)~\\textbf{ORFE}=Occlusion Robust Feature Extraction, 2)~\\textbf{OAFR}=Occlusion Aware Face Recognition, and 3)~\\textbf{ORecFR}=Occlusion recovery FR. The detail of `Experiment settings' abbreviations is shown. In column `Occlusions,' the asterisk means specific occlusion under illumination. In column `Identification Rates,' TR means these methods train on session one.}\\label{tab:ARidenrate}\n\\centering\n\\begin{tabularx}{\\textwidth}{lXlXXccc}\n\\toprule\n\\multirow{2}{*}{Category} & \\multirow{2}{*} {Publication} & \\multirow{2}{*}{Occlusions}&\\multirow{2}{*}{\\begin{tabular}[c]{@{}l@{}}Experiment\\\\Settings\\end{tabular}} &\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}\\# of Train\\\\Subjects\\end{tabular}}&\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}\\# of Test\\\\Subjects\\end{tabular}}& \\multicolumn{2}{c}{Identification Rates (\\%)} \\\\\n\\cline{7-8}\n &&&&&&Session1& Session2\\\\\n\\midrule\n\\multirow{4}{*}{\\raggedright \\begin{tabular}[c]{@{}l@{}l@{}}\\textbf{ORFE:}\\\\Patch-engineered\\\\Features\\end{tabular} } \n &\\multirow{2}{*}{} & Sunglasses & \\multirow{2}{*}{S-TR-$8$IPS}&\\multirow{2}{*}{50} & \\multirow{2}{*}{50} &84.0&80.0\\\\\n &&Scarf & & & & 100.0& 96.0\\\\\n&\\multirow{2}{*}{}& Sunglasses & \\multirow{2}{*}{S-TR-SSPP}&\\multirow{2}{*}{100} & \\multirow{2}{*}{100} & 89.0&98.0\\\\\n&&Scarf&&&&73.0&92.0\\\\\n\\midrule\n\\multirow{30}{*}{\\raggedright \\begin{tabular}[c]{@{}l@{}l@{}}\\textbf{ORFE:}\\\\Learning-based\\\\Features\\end{tabular}} \n&\\multirow{2}{*}{} & Sunglasses & \\multirow{2}{*}{S-TR-SSPP}&\\multirow{2}{*}{50} &\\multirow{2}{*}{50} & 52.0&NA\\\\\n&&Scarf&&&&48.0&NA\\\\\n&\\multirow{2}{*}{} &Sunglasses& \\multirow{2}{*}{S-TR-$8$IPS}&\\multirow{2}{*}{40} &\\multirow{2}{*}{40} & 80.0&NA\\\\\n&&Scarf&&&&70.8&NA\\\\\n\\cline{2-7}\n&\\multirow{2}{*}{}&Sunglasses& \\multirow{2}{*}{D-TR}&\\multirow{2}{*}{50}&\\multirow{2}{*}{50}&98.0&NA\\\\\n&&Scarf&&&&90.0&NA\\\\\n&\\multirow{2}{*}{}&Sunglasses& 
\\multirow{2}{*}{S-TR-8IPS}& \\multirow{2}{*}{100}&\\multirow{2}{*}{100}&97.5&NA\\\\\n&&Scarf&&&&93.5&NA\\\\\n&\\multirow{2}{*}{}&Sunglasses\\&Scarf& \\multirow{2}{*}{S-TR-3IPS}&\\multirow{2}{*}{100}&\\multirow{2}{*}{100}&89.0&NA\\\\\n&&Black block&&&& 94.0 &NA\\\\\n&\\multirow{2}{*}{}&Sunglasses& \\multirow{2}{*}{S-TR-7IPS}&\\multirow{2}{*}{100} &\\multirow{2}{*}{100}& 92.3&51.7\\\\\n&&Scarf&&&&95.0&84.3\\\\\n&\\multirow{3}{*}{}&Sunglasses$^\\star$& \\multirow{2}{*}{S-TR-Occ1}&\\multirow{3}{*}{100}&\\multirow{3}{*}{100}&\\multicolumn{2}{c}{93.0}\\\\\n&&Scarf$^\\star$&&&&\\multicolumn{2}{c}{92.7}\\\\\n&&Sunglasses\\&Scarf$^\\star$&&&&\\multicolumn{2}{c}{92.6}\\\\\n&\\multirow{2}{*}{}&Sunglasses$^\\star$& \\multirow{2}{*}{S-TR-3IPS}&\\multirow{2}{*}{100}&\\multirow{2}{*}{100}&95.7& NA\\\\\n&&Scarf$^\\star$&&&&96.3&NA\\\\\n&\\multirow{2}{*}{}&Sunglasses$\\star$& \\multirow{2}{*}{S-TR-Occ3}&\\multirow{2}{*}{100}&\\multirow{2}{*}{100}&TR&98.0\\\\\n&&Scarf$^\\star$&&&&TR&97.0\\\\\n&\\multirow{3}{*}{}&Sunglasses& \\multirow{2}{*}{S-TR-SSPP}&\\multirow{3}{*}{100}&\\multirow{3}{*}{100}&TR& 92.0\\\\\n&&Scarf&&&&TR& 91.0\\\\\n&&Sunglasses\\&Scarf&&&&TR& 82.5\\\\\n&\\multirow{2}{*}{}&Sunglasses$^\\star$& \\multirow{2}{*}{S-TR-SSPP}&\\multirow{2}{*}{80}&\\multirow{2}{*}{80}& 88.0&56.0\\\\\n&&Scarf$^\\star$&&&&69.0&44.0\\\\\n&\\multirow{2}{*}{}&Sunglasses$^\\star$& \\multirow{2}{*}{EX-TR-SSPP}&\\multirow{2}{*}{80} & \\multirow{2}{*}{80}& 95.8& 78.3\\\\\n&&Scarf$^\\star$&&&&90.0&77.9\\\\\n&&Sunglasses\\&Scarf&D-TR-DL&Webface+20 & 80 &100.0&96.3\\\\\n&&Sunglasses\\&Scarf&S-TR-SSPP&100&100&\\multicolumn{2}{c}{75.0}\\\\\n&\\multirow{2}{*}{}&Sunglasses& \\multirow{2}{*}{S-TR-7IPS}&\\multirow{2}{*}{50} & \\multirow{2}{*}{50}& 94.7& 85.3\\\\\n&&Scarf&&&&99.3&98.7\\\\\n&\\multirow{2}{*}{}&Sunglasses&\\multirow{2}{*}{S-TR-3IPS}&\\multirow{2}{*}{80}&\\multirow{2}{*}{80}&99.0&NA\\\\\n&&Scarf&&&&87.6&NA\\\\\n\\midrule\n\\multirow{8}{*}{\\raggedright 
\\begin{tabular}[c]{@{}l@{}l@{}}\\textbf{OAFR:}\\\\Occlusion Detection\\\\Face Recognition\\end{tabular}}\n&\\multirow{2}{*}{}&Sunglasses& \\multirow{2}{*}{EX-TR-SSPP}&\\multirow{2}{*}{100}&\\multirow{2}{*}{100}&NA&49.0\\\\\n&&Scarf&&&&NA&55.0\\\\\n&\\multirow{2}{*}{}&Sunglasses$^\\star$& \\multirow{2}{*}{S-TR-3IPS$^\\star$}&\\multirow{2}{*}{80}&\\multirow{2}{*}{80}& TR&54.2\\\\\n&&Scarf$^\\star$&&&&TR&81.3\\\\\n&\\multirow{2}{*}{}&Sunglasses$^\\star$& \\multirow{2}{*}{S-TR-8IPS-Occ1}&\\multirow{2}{*}{60}&\\multirow{2}{*}{60}&97.5&NA\\\\\n&&Scarf$^\\star$&&&&99.2&NA\\\\\n&\\multirow{2}{*}{}&Sunglasses& \\multirow{2}{*}{S-TR-RAND}&\\multirow{2}{*}{100}&\\multirow{2}{*}{100}&\\multicolumn{2}{c}{95.2}\\\\\n&&Scarf&&&&\\multicolumn{2}{c}{94.2}\\\\\n&\\multirow{2}{*}{}&Sunglasses&\\multirow{2}{*}{D-TR-DL}&\\multirow{2}{*}{Webface}&\\multirow{2}{*}{100}&99.7&NA\\\\\n&&Scarf&&&&100.0&NA\\\\\n\\midrule\n\\multirow{8}{*}{\\raggedright \\begin{tabular}[c]{@{}l@{}}\\textbf{OAFR:}\\\\Partial Face Recognition\\end{tabular}}\n&\\multirow{3}{*}{}&Sunglasses$^\\star$& \\multirow{2}{*}{S-TR-$7$IPS}&\\multirow{3}{*}{100}&\\multirow{3}{*}{100}&94.3&NA\\\\\n&&Scarf$^\\star$&&&&98.0&NA\\\\\n&&Sunglasses\\&Scarf$^\\star$&&&&96.2&NA\\\\\n&\\multirow{3}{*}{}&Sunglasses$^\\star$& \\multirow{2}{*}{S-TR-$14$IPS}&\\multirow{3}{*}{100}&\\multirow{3}{*}{100}&98.0&NA\\\\\n&&Scarf$^\\star$&&&&97.0&NA\\\\\n&&Sunglasses\\&Scarf$^\\star$&&&&97.5&NA\\\\\n&\\multirow{2}{*}{}&Sunglasses$^\\star$& \\multirow{2}{*}{S-TR-$7$IPS}&\\multirow{2}{*}{100}&\\multirow{2}{*}{100}&100.0&92.0\\\\\n&&Scarf$^\\star$&&&&100.0&95.3\\\\\n\\midrule\n\\midrule\n\\multirow{12}{*}{\\raggedright \\begin{tabular}[c]{@{}l@{}l@{}}\\textbf{ORecFR:}\\\\Occlusion Recovery\\\\Face Recognition\\end{tabular}}\n&~&Sunglasses\\&Scarf$^\\star$&S-TR-13IPS&100&100&TR&90.6\\\\\n&\\multirow{2}{*}{}&Sunglasses& 
\\multirow{2}{*}{S-TR-$8$IPS}&\\multirow{2}{*}{100}&\\multirow{2}{*}{100}&100.0&NA\\\\\n&&Scarf&&&&97.0&NA\\\\\n&&Sunglasses$^\\star$&\\multirow{2}{*}{S-TR-$7$IPS}&\\multirow{2}{*}{121}&\\multirow{2}{*}{121}&76.6&NA\\\\\n&&Scarf$^\\star$&&&&60.9&NA\\\\\n&&Sunglasses$^\\star$& \\multirow{2}{*}{S-TR-$8$IPS}&100&100&97.5&NA\\\\\n&\\multirow{2}{*}{}&Sunglasses& \\multirow{2}{*}{S-TR-$8$IPS}&\\multirow{2}{*}{100}&\\multirow{2}{*}{100}&94.5&NA\\\\\n&&Scarf&&&&95.0&NA\\\\\n&\\multirow{2}{*}{}&Sunglasses&\\multirow{2}{*}{S-TR-$3$IPS-Occ3}&\\multirow{2}{*}{120}&\\multirow{2}{*}{120}&NA&68.5\\\\\n&&Scarf&&&&NA&70.7\\\\\n&\\multirow{2}{*}{}&Sunglasses& \\multirow{2}{*}{S-TR-$2$IPS}&\\multirow{2}{*}{100}&\\multirow{2}{*}{100}&\\multicolumn{2}{c}{89.8}\\\\\n&&Scarf&&&&\\multicolumn{2}{c}{78.8}\\\\\n&\\multirow{2}{*}{}&Sunglasses&\\multirow{2}{*}{S-TR-$4$IPS}&\\multirow{2}{*}{120}&\\multirow{2}{*}{120}&99.2&99.7\\\\\n&&Scarf&&&&87.5&$83.6$\\\\\n\\bottomrule\n\\end{tabularx}\n\\end{table*}\n\\begin{table*}[!htbp]\n\\renewcommand\\arraystretch{0.95}\n\\caption{Evaluation Summary of Representative Algorithms on the other Benchmarks regarding identification rates. Three categories: 1)~\\textbf{ORFE}=Occlusion Robust Feature Extraction, 2)~\\textbf{OAFR}=Occlusion Aware Face Recognition, and 3)~\\textbf{ORecFR}=Occlusion recovery FR. In the `Occlusion Arguments' column, details including the size of occlusion, the occlusion ratio to the face image and portion of noise are listed. In `\\# of Train Subjects,' NA represents not available from the paper. The Rank 1 identification rates are shown. Some results are roughly estimated from the figure in the original papers. Since not all methods are of the same experimental settings, these methods cannot be compared properly. 
}\\label{tab:otheridenrate}\n\\centering\n\\begin{tabularx}{\\textwidth}{llXllcccX}\n\\toprule\nCategory & Publication & Occlusions&\\begin{tabular}[c]{@{}l@{}}Occlusion Arguments \\end{tabular}& Benchmark&\\begin{tabular}[c]{@{}c@{}}\\# of Train\\\\Subjects\\end{tabular}& \\begin{tabular}[c]{@{}c@{}}\\# of Test\\\\Subjects\\end{tabular}& \\begin{tabular}[c]{@{}c@{}}Identification \\\\Rates(\\%)\\end{tabular}\\\\\n\\midrule\n\\multirow{20}{*}{\\raggedright\\textbf{ORFE}}&&Black or White Rectangle &occlusion size:$50\\times50$&ORL&40&40&70.0\\\\\n&&Black Rectangle &occlusion size:$50\\times 50$&FERET&240&960&$78.5$\\\\\n&&Unrelated image &occlusion ratio:0.5&Extended Yale B&38&38&$65.3$\\\\\n&&Unrelated image &occlusion ratio:0.5&Extended Yale B&38&38&$87.4$\\\\\n&&Unrelated image &occlusion ratio:0.8&Extended Yale B&38&38&$70.0$\\\\\n&&Unrelated image &occlusion ratio:0.8&Extended Yale B&38&38&$53.0$\\\\\n&\\multirow{2}{*}{}&\\multirow{2}{*}{Wild condition}&NA&\\multirow{2}{*}{LFW}& 108 &50&$86.0$\\\\\n&&&NA&&851&124&$65.3$\\\\\n&\\multirow{3}{*}{}&Glasses\\&Sunglasses&NA&\\multirow{2}{*}{CAS-PEAL}&\\multirow{2}{*}{350}&\\multirow{2}{*}{350}&$96.6$\\\\\n&&Hat&NA&&&&$90.3$\\\\\n&&Unrelated image &occlusion ratio:0.5&Extended Yale B&30&30&78.4\\\\\n&\\multirow{3}{*}{}&Unrelated image &occlusion ratio:0.6&\\multirow{3}{*}{Extended Yale B}&\\multirow{3}{*}{38}&\\multirow{3}{*}{38}&$96.0$\\\\\n&&Random pixels Rectangle &occlusion ratio:0.6&&&&81.0\\\\\n&&Mixture noise&occlusion ratio:0.6&&&&$25.0$\\\\\n&&Unrelated image &occlusion ratio:0.3&Extended Yale B&30&30&$77.6$\\\\\n&\\multirow{4}{*}{}&Pepper noise&portion:40\\%&\\multirow{2}{*}{Extended Yale B}&\\multirow{2}{*}{38}&\\multirow{2}{*}{38}&$81.3$\\\\\n&&White Rectangle&NA&&&&$82.9$\\\\\n&&Salt\\&Pepper noise &portion:40\\%&\\multirow{2}{*}{CMU-PIE}&\\multirow{2}{*}{68}&\\multirow{2}{*}{68}&$98.5$\\\\\n&&White Rectangle&occlusion size:$10\\times10$&&&&$98.8$\\\\\n&\\multirow{3}{*}{}&Black Rectangle &occlusion 
ratio:0.3&\\multirow{3}{*}{Extended Yale B}&\\multirow{3}{*}{38}&\\multirow{3}{*}{38}&$99.2$\\\\\n&&Black Rectangle&occlusion ratio:0.4&&&&$97.6$\\\\\n&&Black Rectangle&occlusion ratio:0.5&&&&$96.1$\\\\\n&\\multirow{3}{*}{}&Unrelated image &occlusion ratio:0.5&\\multirow{2}{*}{Extended Yale B}&\\multirow{2}{*}{38}&\\multirow{2}{*}{38}&$96.9$\\\\\n&& Random pixels Rectangle&portion:70\\%&&&&$98.9$\\\\\n&&Black Rectangle&occlusion ratio:0.5&CMU-PIE&68&68&$93.9$\\\\\n\\midrule\n\\midrule\n\\multirow{10}{*}{\\raggedright \\textbf{OAFR} } & &Arbitrary Patches&NA&Partial-LFW&158&158&$34.8$\\\\\n&\\multirow{2}{*}{} &Arbitrary Patches&NA&Partial-LFW&158&158&$50.7$\\\\\n&&Unrelated image &occlusion ratio:0.5&Extended Yale B&32&32&$30.2$\\\\\n&\\multirow{2}{*}{}&Unrelated image &occlusion ratio:0.5&Extended Yale B&32&32&$56.7$\\\\\n&&Arbitrary Patches&NA&Partial-PubFig&60&140&$42.9$\\\\\n&\\multirow{3}{*}{}&Arbitrary Patches&NA&Partial-LFW&NA&1000&$27.3$\\\\\n&&Arbitrary Patches&NA&NIR-Distance&NA&276&$94.9$\\\\\n&&Arbitrary Patches&NA&Partial-YTF&NA&200&$61.0$\\\\\n&\\multirow{3}{*}{}&Arbitrary Patches&NA&Partial-LFW&NA&1000&$32.4$\\\\\n&&Arbitrary Patches&NA&NIR-Distance&NA&127&$92.8$\\\\\n&&Arbitrary Patches&NA&NIR-Mobile&NA&178&$93.8$\\\\\n\\midrule\n\\midrule\n\\multirow{8}{*}{\\raggedright \\textbf{ORecFR} } &&Unrelated image &occlusion ratio:0.7&Extended Yale B&38&38&$62.3$\\\\\n&\\multirow{2}{*}{}&Unrelated image &occlusion ratio:0.7&\\multirow{2}{*}{Extended Yale B}&\\multirow{2}{*}{38}&\\multirow{2}{*}{38}&$88.5$\\\\\n&&Multiple patches&occlusion ratio:0.8&&&&$96.0$\\\\\n&\\multirow{2}{*}{}&Black or White Rectangle &occlusion ratio:0.57&\\multirow{2}{*}{Extended Yale B}&\\multirow{2}{*}{38}&\\multirow{2}{*}{38}&$90.4$\\\\\n&&Unrelated image &occlusion ratio:0.5&&&&$87.7$\\\\\n&&Black rectangle on eyes&occlusion ratio:0.3&Extended Yale B&38&38&$98.6$\\\\\n&\\multirow{2}{*}{}&Unrelated image &occlusion ratio:0.6&\\multirow{2}{*}{Extended Yale 
B}&\\multirow{2}{*}{38}&\\multirow{2}{*}{38}&$95.8$ \\\\\n&&Unrelated image &occlusion ratio:0.9&&&&$71.9$\\\\\n\\bottomrule\n\\end{tabularx}\n\\end{table*}\n\\begin{table*}[!htbp]\n\\caption{Evaluation Summary of Representative Algorithms regarding verification rates. Occlusion Robust Feature Extraction and Occlusion Aware Face Recognition report the verification rates by cut-off rules. In the `Benchmark' column, `Partial-x' means the new partial faces originate from the database named `x.' Some results are roughly estimated from the figure in the original papers. Since not all methods are of the same experimental settings, these methods cannot be compared properly.}\\label{tab:allverirate}\n\\centering\n\\begin{tabularx}{0.68\\textwidth}{lllcl}\n\\toprule\nPublication & Occlusions&Benchmark& \\# of Subjects in Gallery & Verification Rates\\\\\n\\midrule\n&Eyeglasses& FRGC-v2 &466 & 90\\%@FAR=0.1\\%\\\\\n& Wild condition& LFW& 1680& 60\\%@FAR=0.1\\\\\n&Wild condition&LFW&1680&61\\%@FAR=0.1\\\\\n\\midrule\n\\midrule\n\\multirow{2}{*}{}&Arbitrary Patches&Partial-LFW&1680&50\\%@FAR=0.1\\\\\n&Arbitrary Patches&Partial-PubFig&140&63\\%@FAR=0.1\\\\\n&Arbitrary Patches&Partial-LFW&1000&29.8\\%@FAR=0.1\\%\\\\\n&Arbitrary Patches&Partial-LFW&1000&37.9\\%@FAR=0.1\\%\\\\\n\\bottomrule\n\\end{tabularx}\n\\end{table*}\n\\begin{table*}[!htbp]\n\\caption{Summary of Representative Algorithms based on components they work on during OFR. In the `data augmentation/recovery' category, data augmentation component means generating synthesized occluded faces while the data recovery component intends to eliminate the occluded facial part. 
The fusion strategy component consists of feature-level fusion as well as decision-level fusion.}\\label{tab:components}\n\\renewcommand\\arraystretch{1.3}\n\\centering\n\\begin{tabularx}{\\textwidth}{|l|X|}\n\\hline\nPipeline Category & Publication\\\\\n\\hline\nData Augmentation/ Recovery&\\\\\n\\hline\nFeature Extraction&\\\\\n\\hline\nFeature Comparison&\\\\\n\\hline\nFusion Strategy&\\\\\n\\hline\n\\end{tabularx}\n\\end{table*}\n\\begin{table*}[!htbp]\n\\caption{Summary of Representative Algorithms regarding application-oriented purpose from Fig.~\\ref{fig:methods_category}. There is a category for occluded face detection with the abbreviation `\\textbf{OFD}'. Three classes for occluded face recognition: 1)~\\textbf{ORFE}=Occlusion Robust Feature Extraction, 2)~\\textbf{OAFR}=Occlusion Aware Face Recognition, and 3)~\\textbf{ORFR}=Occlusion recovery FR. To our knowledge, there are no research works on detecting partial face; thus, we mark it as `NA' (not available).}\\label{apporiented}\n\\renewcommand\\arraystretch{1.3}\n\\centering\n\\begin{tabularx}{\\textwidth}{|c|l|X|}\n\\hline\nAbbreviation&Application-oriented Purpose &Publication\\\\\n\\hline\n\\textbf{OFD}&Occluded Face Detection&\\\\\n\\hline\n\\multirow{3}{*}{}\n\\multirow{2}{*}{\\textbf{ORFE}}& Patch based engineered features&\\\\\n\\cline{2-3}\n &Learning based features&\\\\\n\\hline\n\\multirow{3}{*}{\\textbf{OAFR}}\n&Occlusion Detection& \\\\\n\\cline{2-3}\n&Occlusion Discard Face Recognition&\\\\\n\\cline{2-3}\n&Partial Face Detection &NA\\\\\n\\cline{2-3}\n&Partial Face Recognition&\\\\\n\\hline\n\\multirow{2}{*}{\\textbf{ORFR}}\n&Occlusion Recovery&\\\\\n\\cline{2-3}\n&Occlusion Recovery Face Recognition&\\\\\n\\hline\n\\end{tabularx}\n\\end{table*}\n\\appendix\nA glossary of abbreviations and expansions for terminologies is illustrated in Table~\\ref{tab:abbreviation}.\n\\begin{table*}[!ht]\n\\renewcommand\\arraystretch{0.95}\n\\centering\n\\caption{A glossary of abbreviations and expansions 
for terminologies. }\\label{tab:abbreviation}\n\\begin{tabularx}{\\textwidth}{l l | l l } \n\\toprule\nAbbreviation & Expansion & Abbreviation & Expansion\\\\ \\midrule\nAAE & Adversarial AutoEncoder &LBP & Local Binary Patterns\\\\\nAFC&Adaptive Feature Convolution&LDA & Linear Discriminant Analysis\\\\\nAOFD & Adversarial Occlusion-aware Face Detection&LLE & Locally Linear Embedding\\\\\nCSLBP&Center-Symmetric Local Binary Patterns&LSTM & Long Short Term Memory\\\\\nDA& Denosing Autoencoder&MDSCNN&Multiscale Double Convolutional Neural Network\\\\\nDCNNs & Deep Convolutional Neural Networks&MKD&Multi-Keypoint Descriptors\\\\\nDDRC&Deep Dictionary Representation based Classification&NNAODL&Nuclear Norm based Adapted Occlusion Dictionary Learning\\\\\nDMSC&Discriminative Multiscale Sparse Coding&NMF&Non-negative Matrix Factorization\\\\\nDoG & Different of Gaussian filters &OAFR & Occlusion Aware Face Recognition\\\\\nDPM & Deformable Part Models&OFR & Occluded Face Recognition\\\\\nDSCNNs&Double Supervision Convolutional Neural Network &ORFE & Occlusion Robust Feature Extraction\\\\\nEBGM & Elastic Bunch Graph Matching&ORecFR & Occlusion Recovery based Face Recognition\\\\\nECSLBP &Enhanced Center-Symmetric Local Binary Patterns&PCA & Principal Component Analysis\\\\\nELOC&Efficient Locality-constrained Occlusion Coding&PDSN&Pairwise Differential Siamese Network\\\\\nERBF&Ensemble of Radial Basis Function&R-CNN &Regions with Convolutional Neural Networks\\\\\nERGAN&Eyeglasses Removal Generative Adversarial Network& RCSLBP&Reinforced Centrosymmetric Local Binary Pattern \\\\\nFAN & Face Attention Network&RDLRR&Robust Discriminative Low-Rank Representation\\\\\nFANet & Feature Agglomeration Networks&RLA & Robust LSTM-Autoencoders\\\\\nFCN&Fully Convolutional Network&RPSM&Robust Point Set Matching \\\\\nFW-PCA&Fast Weighted-Principal Component Analysis&SIFT & Scale Invariant Feature Transform\\\\\nGAN&Generative Adversarial Network&SOC&Structured Occlusion 
Coding\\\\\nGRRC & Gabor based Robust Representation and Classification&SOM & Self-Organizing Maps\\\\\nHOG & Histogram of Oriented Gradient&SRC & Sparse Representation Classifiers\\\\\nICA & Independent Component Analysis&SSD & Single-Shot multibox Detector\\\\\nID-GAN&Identity-Diversity Generative Adversarial Network &SSRC&Structured Sparse Representation Classification\\\\\nInfoGAN&Information maximizing Generative Adversarial Network&SVM & Support Vector Machine \\\\\nKCFA & Kernel Correlation Feature Analysis &VAE& Variational AutoEncoder\\\\ \nKLD-LGBP &Kullback-Leibler Divergence-Local Gabor Binary Patterns&YOLO & You Only Look Once\\\\\n\\bottomrule\n\\end{tabularx}\n\\end{table*}\n\\clearpage\n\\clearpage\n\\bibliographystyle{plain}\n\\newpage\n\\bibliography{references}\n\\end{document}", "id": "037a6797-60e9-4040-a1f5-3c5f5f3bfc6c", "level": "section", "origin_cites_number": 100, "parent_id": "5be78ea8-4e37-4219-b1a2-958030400717", "prefix_titles": [ [ "title", "A survey of face recognition techniques \\\\ under occlusion" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
102
[ 1214, 1222, 514, 1218, 305, 97, 1219, 1220, 1221, 1210, 1216, 8426, 1215, 1217, 1223, 8428, 1226, 8429, 206, 1229, 1227, 802, 1224, 8427, 1231, 209, 1228, 1230, 1225, 7338, 1232, 1233, 1234, 1235, 799, 1239, 1236, 1238, 1237, 1240, 297, 1242, 1241, 309, 1246, 1245, 1243, 1244, 1247, 1248, 1249, 1003, 1252, 1254, 8430, 1250, 1251, 8431, 1253, 524, 5680, 1255, 1256, 1002 ]
0.906164
[ "Dinh C Nguyen", "Pubudu N Pathirana", "Ming Ding", "Aruna Seneviratne" ]
Blockchain for 5G and Beyond Networks: \\ A State of the Art Survey
2019
2019-12-11T00:28:49Z
cs.NI
The fifth generation (5G) wireless networks are on the way to being deployed around the world. The 5G technologies aim to support diverse vertical applications by connecting heterogeneous devices and machines, with drastic improvements in terms of high quality of service, increased network capacity and enhanced system throughput. Despite all these advantages that 5G will bring about, there are still major challenges to be addressed, including decentralization, transparency, risks of data interoperability, network privacy and security vulnerabilities. Blockchain, an emerging disruptive technology, can offer innovative solutions to effectively solve the challenges in 5G networks. Driven by the dramatically increased capacities of the 5G networks and the recent breakthroughs in blockchain technology, blockchain-based 5G services are expected to witness rapid development and bring substantial benefits to future society. In this paper, we provide a state-of-the-art survey on the integration of blockchain with 5G networks and beyond. In this detailed survey, our primary focus is on the extensive discussions on the potential of blockchain for enabling key 5G technologies, including cloud computing, edge computing, Software Defined Networks, Network Function Virtualization, Network Slicing, and D2D communications. We then explore and analyse the opportunities for blockchain to empower important 5G services, ranging from spectrum management, data sharing, network virtualization and resource management to interference management, federated learning, privacy and security provision. The recent advances in the applications of blockchain in 5G Internet of Things are also surveyed in a wide range of popular use-case domains, such as smart healthcare, smart city, smart transportation, smart grid and UAVs. 
The main findings derived from this comprehensive survey of integrated blockchain-5G networks and services are then summarized, and possible research challenges with open issues are also identified. Lastly, we complete this survey by shedding new light on future directions of research in this newly emerging area.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "5bf49667-4d6b-4397-b786-063258be0fd6", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ] ], "subsections": [ "08422b6d-c828-45fc-86b2-f323e455db6b", "949b37ce-df58-4f01-99f6-8f1594e9cf39", "dbe49dc6-60e3-49d0-9815-b5f8a32b8d02", "3e5c18d2-056a-44ed-ad3f-7183096e33c8", "1d75d04b-9b07-42a5-8804-07c825c81bb3", "98c513b6-3c53-40f8-ac2a-2fc1ee45e75a", "9242e86a-2d2a-4b42-972b-12cec34c594f" ], "title": "root" }, { "cite_extract_rate": 0.04, "cites": [ 7971 ], "content": "\\begin{figure*}\n\t\\centering\n\t\\includegraphics[height=8.2cm, width=18cm]{Overview1}\n\t\\caption{The convergence of blockchain and 5G. }\n\t\\vspace{-0.17in}\n\\end{figure*}\nThe fifth generation (5G) technology, referred to as the beyond-2020 communications system, represents the next important phase of the global telecommunication evolution, with recent successful deployments in several areas across almost all the continents\\footnote{{https://www.speedtest.net/ookla-5g-map}}. The 5G networks are characterized by three major features: the ability to support Enhanced Mobile Broadband, Massive Machine Type Communication, and Ultra-reliable Low Latency Communication services. Driven by the explosion of smart mobile devices and the rapid advances of communication technologies, 5G could be a technical enabler for a plethora of innovative business opportunities and industrial applications, and could facilitate seamless collaboration across domains by interconnecting billions of devices. The 5G mobile networks promise to revolutionize global industries and provide immediate impacts on customers and business stakeholders. 
The main vision of future 5G services is to provide a customized and advanced user-centric value, enabling connection of nearly all aspects of the human life to communication networks to meet the ever-growing demands of user traffic and emerging services . To achieve these objectives, several underlying wireless technologies have been proposed to enable future 5G networks, including cloud computing, edge computing, Software Defined Networking (SDN), Network Function Virtualization (NFV), Network Slicing, and D2D communication . However, the rapid surge and breakneck expansion of 5G wireless services in terms of scale, speed, and capacity also pose new security challenges such as network reliability, data immutability, and privacy that must be considered and solved before wide deployments. 
Many security solutions have been used in the previous generations of communication networks (i.e., 2G, 3G and 4G) [48]. For example, in the physical layer of 2G-4G networks, Hybrid Automatic Repeat reQuest (HARQ) techniques, combining Forward Error Correction (FEC) channel codes and Automatic Repeat reQuest (ARQ), have been used widely, which can detect and correct erroneous data bits in support of data authentication. Moreover, for detecting errors in data communications, data storage, and data compression, error-detection techniques such as cyclic redundancy check (CRC) have been leveraged in the radio link control (RLC) layer for data reliability guarantees. 
However, these security techniques and architectures used in the previous generations (2G-4G), apparently, will not suffice for 5G due to the following reasons.\n\begin{itemize}\n\t\item A critical reason is that the above security techniques used in 2G-4G are powerless to deal with the problem of data tampering, such as deletion, injection, and alteration, in 5G networks.\n\t\item Another reason is the dynamics of new technologies and services in 5G networks, which pose new requirements on security and privacy beyond protecting data integrity.\n\end{itemize}\nIn particular, the emerging 5G technologies such as SDN, NFV, network slicing and D2D communications in 5G will support new service delivery models and thus further exacerbate the security challenges. Unlike the legacy cellular networks, 5G wireless networks are going to be decentralized and ubiquitously service-oriented, which places a special emphasis on security and privacy requirements from the perspective of services. In particular, the security management in 5G is more complex due to the various types and the massive number of connected devices. How to provide an open data architecture for flexible spectrum sharing, data sharing, multiuser access, for example, to achieve ubiquitous 5G service provisions while ensuring high data immutability and transparency is a critical issue. Succinctly, the security architectures of the previous generations lack the sophistication needed to secure 5G networks.\nIn the 5G/6G era, immutability, decentralization and transparency are crucial security factors that ensure the successful roll-out of new services such as IoT data collection, driverless cars, Unmanned Aerial Vehicles (UAVs), and Federated Learning (FL). Among the existing technologies, blockchain is the most promising one to meet these new security requirements and reshape the 5G communication landscape , . Hence, 5G needs blockchain for its wide service deployments. 
\nFrom the technical perspective, blockchain is a distributed ledger technology that was first used to serve as the public digital ledger of the cryptocurrency Bitcoin for economic transactions. The blockchain is basically a decentralized, immutable and transparent database. The concept of blockchain is based on a peer-to-peer network architecture in which transaction information is managed flexibly by all network participants and not controlled by any single centralized authority. In particular, the blockchain technology boasts a few desirable characteristics of decentralization, immutability, accountability, and truly trustless database storage which significantly improve network security and save operational costs . The rapid development and the adoption of blockchain as a disruptive technology are paving the way for the next generation of financial and industrial services. Currently, blockchain technology has been investigated and applied in various applications, such as Internet of Things (IoT) , , edge computing , smart city , vehicular networks , and industries .\nOwing to these inherent properties, blockchain has the potential to be integrated with the 5G ecosystems to empower mobile networks and services as shown in Fig. 1. Due to the advanced technical capabilities to support future network services, blockchain was regarded as one of the key technical drivers for 6G at the 2018 Mobile World Congress (MWC) . It is also predicted that blockchains would be a key technology in reaping real benefits from 5G networks, giving birth to novel applications ranging from autonomous resource sharing and ubiquitous computing to reliable content-based storage and intelligent data management . \nThe combination of blockchain and 5G is also expected to pave the way for emerging mobile services . In fact, 5G is all about connecting heterogeneous devices and complex networks interconnecting more than 500 billion mobile devices by 2030 . 
Besides, the emerging Internet of Things (IoT) and Massive Machine Communications (MMC) are predicted to create over 80 billion connections by 2020 . In such a context, the ultra-dense small cell networks, a fundamental component of 5G infrastructure, will provide connections and energy efficiencies of radio links with high data rates and low latencies. However, such density introduces trust and interoperability concerns among complex sub-networks. Therefore, providing reliable cooperation among heterogeneous devices is vitally important for 5G mobile networks. In this regard, blockchain with its immutable and decentralized transaction ledgers can enable distributed massive communication with high security and trustworthiness . Moreover, network slicing associated with other emerging technologies such as cloud/edge computing, SDN, NFV, and D2D communication are also key enablers for future 5G networks and services. A big challenge for current 5G platforms is the need to guarantee an open, transparent, and secure system among the extraordinary number of resources and mobile users. Blockchain with its innovative concepts of decentralized operation can provide a high level of data privacy, security, transparency, and immutability for storage of 5G heterogeneous data , . Blockchain is thus expected to be an indispensable tool to fulfill the performance expectations for 5G systems with minimal costs and management overheads. \n\textit{\textbf{Related survey works and Contributions:}}\n\begin{figure*}\n\t\centering\n\t\includegraphics [height=15.5cm, width=14cm]{Structure.pdf}\n\t\caption{The structure of the paper. }\n\t\vspace{-0.17in}\n\end{figure*}\nBlockchains have gained momentum in academia, with a number of surveys published in , , , , , , which have discussed many aspects such as architecture, concepts, technologies and application domains. The 5G systems have also attracted attention , , , . 
Despite growing interest in blockchain and 5G, the focus of existing survey works is on each of the specific technologies. There have been no surveys that emphasize the integration of blockchain and 5G. The authors in only provided a brief introduction of the blockchain adoption in secure 5G resource management and reliable network orchestration. The survey in provided a short overview of the potential of blockchain for 5G networks in Industry 4.0. Similarly, the studies in , presented a brief review on the benefits of blockchain for 5G-based industrial IoTs. \nThus, to the best of our knowledge, there is no comprehensive survey on the integrated use of blockchain and 5G technologies and services. In this paper, we provide an extensive survey on the integration of blockchain and 5G technologies for providing services, including cloud computing, edge computing, Software Defined Networks, Network Function Virtualization, Network Slicing, and D2D communication. We also detail the use of blockchain for supporting important 5G services, ranging from spectrum management, data sharing, network virtualization, resource management to mitigating interference, federated learning, privacy and security attacks. The potential of blockchain in 5G IoT networks is also discussed through a number of use-case domains, such as smart healthcare, smart city, smart transportation, smart grid and UAVs. Besides, we highlight the research challenges and open issues, and point out the promising future research directions related to the blockchain-5G integration. The main contributions of this survey article can be summarized as follows:\n\begin{enumerate}\n\t\item We conduct a state-of-the-art survey on the convergence of blockchain and 5G, starting with an analysis of the background and definitions as well as highlighting the motivations for the integration of these two emerging technologies. 
\n\t\item We provide a review on the adoption of blockchain for enabling key 5G technologies, with a particular focus on cloud computing, edge computing, Software Defined Networks, Network Function Virtualization, Network Slicing, and D2D communication.\n\t\item\tWe present an in-depth discussion on opportunities that blockchain brings to 5G services, including spectrum management, data sharing, network virtualization, resource management, interference management, federated learning, privacy and security services.\n\t\item We investigate the potential of leveraging blockchains in 5G IoT networks and review the latest developments of the integrated blockchain-5G IoT applications in a number of domains, ranging from smart healthcare, smart city, smart transportation to smart grid and UAVs. \n\t\item Based on the comprehensive survey, we summarize the main findings, highlight research challenges and open issues, and point out several future research directions. \n\end{enumerate}\n\textit{\textbf{Structure of this survey:}} The structure of this survey is shown in Fig. 2. Section II presents an overview of blockchain and 5G networks, and then highlights the motivations for the integration of blockchains in 5G networks and services. In Section III, we present a state-of-the-art survey on the convergence of blockchain and key 5G technologies, namely cloud computing, edge computing, Software Defined Networks, Network Function Virtualization, Network Slicing, and D2D communication. We also provide a comprehensive discussion on the use of blockchain for supporting fundamental 5G requirements, ranging from spectrum management, data sharing, network virtualization, resource management to interference management, federated learning, privacy and security services in Section IV. 
The benefits of blockchain for 5G IoT applications are analysed in detail in Section V, with a focus on popular applications such as smart healthcare, smart city, smart transportation, smart grid and UAVs. We summarize the main findings in Section VI, and the potential research challenges and future research directions are also outlined. Finally, Section VII concludes the paper. A list of acronyms used throughout the paper is presented in TABLE I.\n\begin{table}\n\t\caption{List of key acronyms.}\n\t\label{table}\n\t\scriptsize\n\t\centering\n\t\captionsetup{font=scriptsize}\n\t\setlength{\tabcolsep}{5pt}\n\t\begin{tabular}{p{2.5cm}|p{5cm}}\n\t\t\hline\n\t\t\textbf{Acronyms}& \n\t\t\textbf{Definitions}\n\t\t\\\n\t\t\hline\n\t\t3GPP& Third Generation Partnership Project\n\t\t\\\n\t\tMWC& Mobile World Congress\n\t\t\\\n\t\tNGMN&Next Generation Mobile Networks\n\t\t\\\n\t\tETSI&European Telecommunications Standards Institute\n\t\t\\\n\t\tMNO&Mobile Network Operator\n\t\t\\\n\t\tMVNO &Mobile Virtual Network Operator\n\t\t\\\n\t\tML & Machine learning\n\t\t\\\n\t\tUAVs & Unmanned Aerial Vehicles\n\t\t\\\n\t\tSDN& Software-Defined Networking\n\t\t\\\n\t\tSDI& Software-Defined Infrastructure\n\t\t\\\n\t\tNFV&Network Functions Virtualisation\n\t\t\\\n\t\tVNFs& Virtual Network Functions \n\t\t\\\n\t\tD2D&Device-to-Device\n\t\t\\\n\t\tVM& Virtual Machine \n\t\t\\\n\t\tCloud-RANs& Cloud Radio Access Networks\n\t\t\\\n\t\tBBU&Baseband Unit\n\t\t\\\n\t\tIoT&Internet of Things\n\t\t\\\n\t\tMEC& Mobile Edge Computing\n\t\t\\\n\t\tESPs &Edge Service Providers\n\t\t\\\n\t\tVANETs & Vehicular ad-hoc Networks\n\t\t\\\n\t\tMANO& Management and Network Orchestration\n\t\t\\\n\t\tSFC& Service Function Chaining\n\t\t\\\n\t\tVMOA& Virtual Machine Orchestration Authentication\n\t\t\\\n\t\tV2V & Vehicle-to-Vehicle\n\t\t\\\n\t\tRSU & Roadside Units\n\t\t\\\n\t\tCCN& Content Centric Networking\n\t\t\\\n\t\tSLA& 
Service-Level Agreement \n\t\t\\\\\n\t\tIPFS & Inter-Planetary File System\n\t\t\\\\\n\t\tDoS & Denial-of-Service\n\t\t\\\\\n\t\tQoS& Quality of Services\n\t\t\\\\\n\t\tQoE&Quality of Experience\n\t\t\\\\\n\t\tCSI & Channel State Information\n\t\t\\\\\n\t\tFUEs & Femtocell Users\n\t\t\\\\\n\t\tPoW&Proof of Work\n\t\t\\\\\n\t\tPBFT & Practical Byzantine Fault Tolerance \n\t\t\\\\\n\t\tEHRs & Electronic Health Records\n\t\t\\\\\n\t\tMaaS & Mobility-as-a-Service\n\t\t\\\\\n\t\tTPAs & Third Party Auditors \n\t\t\\\\\n\t\tITS & Intelligent Transportation System\n\t\t\\\\\n\t\tV2G & Vehicle-to-Grid\n\t\t\\\\\n\t\tEVs & Electric Vehicles\n\t\t\\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\label{tab1}\n\t\\vspace{-0.2in}\n\\end{table}", "id": "08422b6d-c828-45fc-86b2-f323e455db6b", "level": "section", "origin_cites_number": 25, "parent_id": "5bf49667-4d6b-4397-b786-063258be0fd6", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "949b37ce-df58-4f01-99f6-8f1594e9cf39", "level": "section", "origin_cites_number": 0, "parent_id": "5bf49667-4d6b-4397-b786-063258be0fd6", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Blockchain and 5G: background, definition and motivation" ] ], "subsections": [ "6baf8df7-12ad-4ae1-89fc-d8db7b49cf7c", "9972da82-34db-4635-9229-ae8b20177764", "c3b425f9-5d6c-40cc-b247-e9bf822a6f13" ], "title": "Blockchain and 5G: background, definition and motivation" }, { "cite_extract_rate": 0.5, "cites": [ 1295 ], "content": "Blockchain is mostly known as the technology underlying the cryptocurrency Bitcoin . The core idea of a blockchain is decentralization. This means that blockchain does not store any of its database in a central location. 
Instead, the blockchain is copied and spread across a network of participants (i.e. computers). Whenever a new block is added to the blockchain, every computer on the network updates its blockchain to reflect the change. This decentralized architecture ensures robust and secure operations on blockchain with the advantages of tamper resistance and no single-point failure vulnerabilities. In particular, blockchain can be accessible to everyone and is not controlled by any network entity. This is enabled by a mechanism called consensus, which is a set of rules to ensure the agreement among all participants on the status of the blockchain ledger. The general concept of how blockchain operates is shown in Fig. 3. \nIn general, blockchains can be classified as either a public (permission-less) or a private (permissioned) blockchain . A public blockchain is accessible to everyone, and anyone can join and make transactions as well as participate in the consensus process. The best-known public blockchain applications include Bitcoin and Ethereum. Private blockchains, on the other hand, are invitation-only networks managed by a central entity. A participant has to be permissioned using a validation mechanism. In order to realize the potential of blockchain in 5G networks, it is necessary to understand the operation concept and main properties of blockchain, and how blockchain can bring opportunities to 5G applications. In this section, we first present the main components of a blockchain network. 
Next, we discuss the key characteristics of blockchains in terms of immutability, decentralization, transparency, security and privacy, which can benefit 5G networks and services.", "id": "6baf8df7-12ad-4ae1-89fc-d8db7b49cf7c", "level": "subsection", "origin_cites_number": 2, "parent_id": "949b37ce-df58-4f01-99f6-8f1594e9cf39", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\ A State of the Art Survey" ], [ "section", "Blockchain and 5G: background, definition and motivation" ], [ "subsection", "\tBlockchain " ] ], "subsections": [ "e545915a-d9e1-4900-8187-d072fa6e789f", "2a1f0e22-6e85-471b-b018-28cfcafe35c1" ], "title": "\tBlockchain " }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 5651, 1295 ], "content": "Blockchain features several key components, which are summarized as follows.\n- \textit{Data block:} Blockchain is essentially a chain of blocks, a linear structure beginning with a so-called genesis block and continuing with every new block linked to the chain. Each block contains a number of transactions and is linked to its immediately preceding block through a hash label. In this way, all blocks in the chain can be traced back to the previous one, and no modification or alteration of block data is possible. Specifically, a typical data block includes two main components: transaction records and a block header . Here, transaction records are organized in a Merkle tree based structure where a leaf node represents a transaction of a blockchain user. For example, a user can make a request with associated metadata (i.e. transferred money or contract) to establish a transaction that is also signed with the user's private key for trust guarantees. 
Meanwhile, the block header contains the following information: 1) hash of the block for validation, 2) Merkle root to store a group of transactions in each block, 3) nonce value, a number generated by the consensus process to produce a hash value below a target difficulty level, and 4) timestamp, which refers to the time when the block is created. A typical blockchain structure is illustrated in Fig. 4.\n- \textit{Distributed ledger (database):} Distributed ledger is a type of database which is shared and replicated among the entities of a peer-to-peer network. The shared database is available for all network participants within the blockchain ecosystem. Distributed ledger records transactions similar to the process of data exchange among the members of the network. Participants of the network can reach agreement through a consensus mechanism in a distributed environment where no third party is required to perform the transaction. For example, if a person joins the Bitcoin application, then he has to abide by all rules and guidelines which are established in the programming code of the Bitcoin application. He can make transactions to exchange currency or information with other members automatically without a third party such as a financial institution. In the distributed ledger, every record has a unique cryptographic signature associated with a timestamp, which makes the ledger auditable and immutable. \n\begin{figure}\n\t\centering\n\t\includegraphics [height=4.3cm,width=8.8cm]{Blockchain_concept.pdf}\n\t\caption{The concept of blockchain operation.}\n\t\label{fig11}\n\t\vspace{-0.15in}\n\end{figure}\n- \textit{Consensus algorithms:} When nodes start to share or exchange data on a blockchain platform, there are no centralized parties to regulate transaction rules and preserve data against security threats. 
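The hash linking and nonce search described above can be sketched in a few lines of Python. This is a toy illustration under simplified assumptions, not any production blockchain: real systems hash fully serialized headers and use far higher difficulty targets, and the field names here are illustrative.

```python
import hashlib
import json
import time

def block_hash(header: dict) -> str:
    # Hash the serialized header; sorted keys make the digest deterministic.
    payload = json.dumps(header, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(prev_hash: str, merkle_root: str, difficulty: int = 2) -> dict:
    # Search for a nonce whose header hash falls below the target
    # (here: a hex digest starting with `difficulty` zeros).
    header = {
        "prev_hash": prev_hash,       # link to the previous block
        "merkle_root": merkle_root,   # summary of the block's transactions
        "timestamp": int(time.time()),
        "nonce": 0,
    }
    while not block_hash(header).startswith("0" * difficulty):
        header["nonce"] += 1
    return header

# Build a two-block chain starting from a genesis block.
genesis = make_block(prev_hash="0" * 64, merkle_root="root-0")
block1 = make_block(prev_hash=block_hash(genesis), merkle_root="root-1")
assert block1["prev_hash"] == block_hash(genesis)

# Tampering with the genesis block breaks the link stored in block1.
genesis["merkle_root"] = "tampered"
assert block1["prev_hash"] != block_hash(genesis)
```

Because each header embeds the hash of its predecessor, any alteration to an earlier block invalidates every later link, which is the mechanical basis of the immutability property discussed throughout this section.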
In this regard, it is vitally necessary to validate block trustworthiness, keep track of the data flow and guarantee safe information exchange to avoid fraud issues, such as double-spending attacks . These requirements can be met by using validation protocols called consensus algorithms. In the blockchain context, a consensus algorithm is a process used to reach agreement on a single data block among multiple unreliable nodes. An example of consensus applications is in the Bitcoin blockchain. Bitcoin adopts a Proof of Work algorithm (PoW) as an enabling consensus mechanism run by miners to ensure security in an untrusted network. Mining software across the network uses computational resources to solve complex mathematical puzzles. The first miner solving the puzzle to create a new block will receive a reward as an encouragement for future mining contributions. However, a critical drawback of PoW is its high resource consumption which would be unsustainable in the future. As a result, other efficient consensus algorithms have appeared as strong alternatives, such as Proof-of-Stake (PoS) and Byzantine Fault Tolerance (BFT). Details of conceptual features and related technical issues of such consensus algorithms can be found in previous excellent surveys , .\n- \textit{Smart contracts:} A smart contract is a programmable application that runs on a blockchain network. Since the first smart contract platform known as Ethereum was released in 2015, smart contracts have increasingly become one of the most innovative topics in the blockchain area. When we talk about smart contracts, the natural question is: What makes smart contracts so smart? This is due to their self-executing nature, which means the code automatically executes the contractual clauses defined in the contract once the conditions have been met. For example, when a person signs a smart contract to transfer his funds, the funds are transferred automatically over the blockchain network. 
Then the transfer information will be recorded as a transaction which is kept on the blockchain as an immutable ledger. Such a type of self-executing agreement relying on the code makes smart contracts unalterable and resistant to external attacks .\nIn addition to the capability of defining the operational rules and penalties around an agreement similar to the way a traditional contract does, smart contracts are capable of automatically enforcing their obligations to manage transactions. Particularly, smart contracts allow the performance of credible transactions without requiring the involvement of middlemen or third-party intermediaries . This property is particularly useful because it significantly reduces disputes and saves operation time as well as system costs. Therefore, smart contracts can provide cheaper, faster and more efficient options compared to the traditional systems in which contract conditions are always enforced physically by a central authority, enforcement mechanism or guidance system. With their programmable and automatic features, smart contracts offer a wide range of new applications to solve real-world problems, such as financial services and insurance, mortgage transactions, supply chain transparency, digital identity and records management .\n\begin{figure}\n\t\centering\n\t\includegraphics [height=4.3cm,width=6.9cm]{BlockStructure.pdf}\n\t\caption{The data block structure.}\n\t\label{fig11}\n\t\vspace{-0.15in}\n\end{figure}\n\begin{table*}[ht]\n\t\centering\n\t\caption{Main characteristics of blockchain and their potential for 5G. 
}\n\t\label{table}\n\t\setlength{\tabcolsep}{5pt}\n\t\begin{tabular}{|p{2.5cm}|p{3.5cm}|p{10cm}|}\n\t\t\hline\n\t\t\centering \textbf{Key characteristics of blockchain}& \n\t\t\centering \textbf{Description}&\t\n\t\t\textbf{Potential applications to 5G networks and services}\n\t\t\\\n\t\t\hline\n\t\t\textbf{Decentralization} &No central authority or trusted third party is needed to perform transactions. Users have full control over their own data.&Eliminate the need for trusted external authorities in 5G ecosystems, i.e. spectrum licenses, band managers, and database managers in spectrum management; central cloud/edge service manager in mobile computing and D2D networks; UAV control center in 5G UAV networks; and complex cryptographic primitives in 5G IoT systems. Decentralizing 5G networks potentially eliminates single-point failures, ensures data availability and enhances service delivery efficiency. \n\t\t\\\n\t\t\hline\n\t\t\textbf{Immutability}&It is very difficult to modify or change the data recorded in the blockchain.&Enable high immutability for 5G services. Spectrum sharing, data sharing, virtualized network resource provisions, resource trading can be recorded immutably into the append-only blockchain. Besides, D2D communications, ubiquitous IoT networking and large-scale human-centric interconnections can be achieved via peer-to-peer networks of ubiquitous blockchain nodes without being modified or changed. The high immutability is very useful for 5G networks in performing accounting tasks, i.e. logging of session statistics and usage information for billing, resource utilization, and trend analysis.\n\t\t\\\n\t\t\hline\n\t\t\textbf{Transparency} &All information of transactions on blockchain (i.e. public ledgers) is viewable to all network participants. & Provide better localized visibility into 5G service usage. The same copy of records of blockchain spreads across a large network for public verifiability. 
This enables service providers and users to fully access, verify and track transaction activities over the network with equal rights. Also, blockchains potentially offer transparent ledger solutions for truly open 5G architectures (i.e. decentralized network virtualization, distributed edge computing, distributed IoT networks). Blockchain ledgers also support fair service trading applications (i.e. resource trading, payment) under the control of all network entities. \n\t\t\\\n\t\t\hline\n\t\t\textbf{Security and privacy} & Blockchain employs asymmetric cryptography for security with high authentication, integrity, and nonrepudiation. Smart contracts available on blockchain can support data auditability, access control and data provenance for privacy. & Provide high security for 5G networks involved in decentralized ledgers. Blockchain helps secure the 5G networks by providing distributed trust models with high access authentication, in turn enabling 5G systems to protect themselves and ensure data privacy. By storing data (i.e. IoT metadata) across a network of computers, the task of compromising data becomes much more difficult for hackers. Besides, smart contracts, as trustless third parties, potentially support 5G services, such as data authentication, user verification, and preservation of 5G resources against attacks. 
\n\t\t\\\n\t\t\hline\n\t\end{tabular}\n\end{table*}", "id": "e545915a-d9e1-4900-8187-d072fa6e789f", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "6baf8df7-12ad-4ae1-89fc-d8db7b49cf7c", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\ A State of the Art Survey" ], [ "section", "Blockchain and 5G: background, definition and motivation" ], [ "subsection", "\tBlockchain " ], [ "subsubsection", "Main components of blockchain" ] ], "subsections": [], "title": "Main components of blockchain" }, { "cite_extract_rate": 0, "cites": [], "content": "As a general-purpose database technology, in theory blockchain can be applied to any data-related context. However, the benefits of distributed ledgers come with costs. Blockchain technology may not be the best solution for every scenario. The important step in assessing the potential benefits of blockchain in 5G is to ask whether its characteristics such as decentralization, immutability, transparency, security and privacy are useful for 5G networks and services. We will briefly review such key properties as follows. \n\textit{Immutability:} It is the ability for a blockchain ledger to keep transaction data unchangeable over time. Technically, transactions are timestamped after being verified by the blockchain network and then included into a block which is secured cryptographically by a hashing process. It links to and incorporates the hash of the previous block. This mechanism connects multiple blocks together and builds a chronological chain. Particularly, the hashing process of a new block always contains metadata of the hash value of the previous block, which makes the chain data strongly unalterable. This property of blockchain supports secure data storage and sharing in 5G scenarios, i.e. secure spectrum sharing, D2D communication or privacy-preserved network virtualization. 
Further, by deploying immutable transaction ledgers, the network operators can establish secure communications to perform heterogeneous networking and computing, such as large-scale IoT collaborations or mobile edge/cloud computing over the trustless IoT environments. \n\\textit{\tDecentralization:} The decentralized nature of blockchain means that it does not rely on a central point of control to manage transactions. Instead of depending on a central authority or third party to perform transactions between network users, blockchain adopts consensus protocols to validate transactions in a reliable and incorruptible manner. This exceptional property brings promising benefits, including eliminating single point failure risks due to the disruption of central authority, saving operational costs and enhancing trustworthiness.\n\\textit{Transparency:} The transparency of a blockchain stems from the fact that all information of transactions on blockchains (i.e. permission-less ones) is viewable to all network participants. In other words, the same copy of records of blockchain spreads across a large network for public verifiability. As a result, all blockchain users can fully access, verify and track transaction activities over the network with equal rights. Such transparency also helps to maintain the integrity of the blockchain-based systems by reducing risks of unauthorized data alternations. This feature is particularly suitable for 5G ecosystems where the openness and fairness are required. In the cooperative network slicing, for instance, the blockchains can offer transparent ledger solutions to support open and secure data delivery and payment such that the resource providers and slice customers can trace and monitor transactions. Moreover, service trading applications (i.e. 
mobile resource trading in 5G IoT) can be performed automatically on blockchain by triggering smart contracts, which ensures transparent and reliable data exchange among different service providers and IoT users. \n\textit{Security and privacy:} One of the most appealing aspects of blockchain is the degree of security and privacy that it can provide. The key aspect of security in blockchains is the use of private and public keys. Blockchain systems use asymmetric cryptography to secure transactions between members. These keys are generated randomly with strings of numbers so that it is computationally infeasible for an entity to derive the private key of other users from their public key. This preserves blockchain records against potential attacks and reduces data leakage concerns . Additionally, the privacy service provided by blockchain and smart contracts gives data provenance rights to users. In other words, this ability enables data owners to manage the disclosure of their information on blockchain. Specifically, by setting access rules on self-executing smart contracts, blockchain guarantees data privacy and data ownership of individuals. Malicious access attempts are detected and rejected through the smart contract's user identification and authorization.\n\textbf{Remark:} Transparency implies open data, while privacy concerns whether it is possible to infer private and sensitive information from such open data. How to protect people's privacy in open data is a hot topic. A typical example in this area is the face blurring used in the open-access Google Street View service. In the context of blockchains, privacy-preserving data provenance based on smart contracts is a promising technique to realize privacy protection in open data [10]. \nFrom the above high-level analysis, blockchain technology would be a promising candidate for 5G networks and services by providing a number of technical benefits. 
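As a rough illustration of the smart-contract access rules discussed above, the following Python sketch mimics how a contract might gate data access by identity. The class name, method names and logic are illustrative assumptions only; a real contract would run on a blockchain virtual machine and record every decision as an on-chain transaction.

```python
class AccessControlContract:
    """Toy stand-in for a smart contract that enforces data-access rules.

    It mirrors only the authorization logic: the owner grants or revokes
    access, every request is checked against the rule set, and all
    decisions are appended to a log (mimicking the contract's
    transaction history).
    """

    def __init__(self, owner: str):
        self.owner = owner
        self.allowed = {owner}   # identities permitted to read the data
        self.log = []            # append-only record of access decisions

    def grant(self, caller: str, user: str) -> None:
        if caller != self.owner:  # only the data owner may change rules
            raise PermissionError("only the owner can grant access")
        self.allowed.add(user)

    def revoke(self, caller: str, user: str) -> None:
        if caller != self.owner:
            raise PermissionError("only the owner can revoke access")
        self.allowed.discard(user)

    def access(self, caller: str) -> bool:
        granted = caller in self.allowed
        self.log.append((caller, granted))  # every attempt is recorded
        return granted

contract = AccessControlContract(owner="alice")
contract.grant("alice", "bob")
assert contract.access("bob") is True       # authorized user
assert contract.access("mallory") is False  # malicious access is rejected, but logged
```

The key design point echoed from the text is that rejection is not silent: denied attempts remain in the immutable record, which is what makes smart-contract-based access control auditable.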
We summarize the potential applications that blockchain can provide to 5G in TABLE II.", "id": "2a1f0e22-6e85-471b-b018-28cfcafe35c1", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "6baf8df7-12ad-4ae1-89fc-d8db7b49cf7c", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Blockchain and 5G: background, definition and motivation" ], [ "subsection", "\tBlockchain " ], [ "subsubsection", "Main characteristics of blockchain" ] ], "subsections": [], "title": "Main characteristics of blockchain" }, { "cite_extract_rate": 0, "cites": [], "content": "The next generations of mobile network (5G and beyond) have revolutionized industry and society by providing an unimaginable level of innovation with significant network and service performance improvements. In this subsection, we present an overview of the 5G networks. Also, 5G design principles are highlighted to provide insights into integrating blockchain in future networks and services.", "id": "9972da82-34db-4635-9229-ae8b20177764", "level": "subsection", "origin_cites_number": 0, "parent_id": "949b37ce-df58-4f01-99f6-8f1594e9cf39", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Blockchain and 5G: background, definition and motivation" ], [ "subsection", "5G networks " ] ], "subsections": [ "e257e34d-0906-4dba-81b7-4d914575584f", "513fb75e-6f9a-4da9-8809-f884e1d2c17a" ], "title": "5G networks " }, { "cite_extract_rate": 0.125, "cites": [ 7971, 7972 ], "content": "Over the past few decades, the world has seen a steady development of communication networks, initializing from the first generation and moving towards the fourth generation. 
The global communication traffic has shown a drastic increase in recent years and is expected to continue, which triggers the appearance of the forthcoming generation of telecommunication networks, namely 5G, aiming to address the limitations of previous cellular standards and cope with such ever-increasing network capacity. The 5G network can outperform earlier versions of wireless communication technology and provide diverse service abilities as well as encourage full networking among countries globally , . 5G networks also provide solutions for the efficient and cost-effective launch of a multitude of new services, tailored for different vertical markets with a wide range of service requirements. In particular, the advances in 5G communication are envisioned as opening up new applications in various domains with great impacts on nearly every aspect of our lives, such as IoT , smart healthcare , vehicular networks , smart grid , smart city . Particularly, according to the 3GPP and IMT-2020 vision , , the 5G technology is able to provide the following key capabilities:\n\\begin{itemize}\n\t\\item Provide 1-10 Gbps connections to end points in the field, reaching up to 20 Gbps in certain scenarios. \n\t\\item\tProvide ultra-low latency services (1 ms or less).\n\t\\item\tAchieve high mobility in the network (up to 500 km/h). 
\n\t\\item\tEnable massive machine-type communication and support highly dense networks.\n\t\\item\tAchieve 99.999\\% availability and a 90\\% reduction in network energy usage.\n\t\\item\tSupport a 10-100x number of connected devices, with the ability to achieve ten-year battery life for low-power, machine-type devices.\n\t\\item\tEnable 1000x bandwidth per unit area.\n\\end{itemize}\nIn order to achieve such promising performance targets, 5G networks leverage a number of underlying technologies, such as cloud/edge computing, Software-Defined Networking (SDN), Network Functions Virtualisation (NFV), network slicing, Device-to-Device communication, and millimeter wave communication . \n\\begin{itemize}\n\t\\item \\textit{Cloud/edge computing:} Cloud computing has been introduced to meet the increasing demands for resource management, data storage, and mobile sensing in the 5G era. Specifically, cloud computing paradigms with resourceful virtual computation centers can well support 5G services such as mobility/network management, resource offloading, and sensing services in various application domains . Meanwhile, as an extension of cloud computing, edge computing has emerged as a promising technology to empower 5G ecosystems. It provides computing services at the edge of the mobile network, in close proximity to IoT devices, which enables computation and storage services with much lower transmission delays. \n\t\\item \\textit{Software-defined networking (SDN):} SDN makes it possible to run the network using software rather than hardware. It introduces a split between the control and data planes, thereby bringing swiftness and flexibility to 5G networks .\n\t\\item \\textit{Network functions virtualisation (NFV):} With NFV, the different network functions can run purely in software. 
NFV enables decoupling the network functions from proprietary hardware appliances so they can run on standardized hardware . The key purpose of NFV is to transform the way networks are built and services are delivered. With NFV, any 5G service operator can simplify a wide array of network functions, as well as maximize efficiencies and offer new revenue-generating services faster and more easily than ever before .\n\t\\item \\textit{Network slicing:} As 5G will require very different types of networks for different applications, a scheme known as network slicing has been devised. By using SDN and NFV, it will be possible to configure the type of network that an individual user requires for their application. In this way, the same hardware running different software can provide low latency for one user while providing voice communications for another; other users may need other types of network performance, and each can have a slice of the network with the performance needed.\n\t\\item \\textit{Device-to-Device (D2D) communication:} D2D communication allows IoT devices in close proximity to communicate using a direct link rather than long signal transmissions via traditional base stations. By using D2D communication, 5G heterogeneous data can be transferred quickly between mobile devices at short range, which promises ultra-low latency for communication among users. Moreover, D2D connectivity will make 5G operators more flexible in terms of offloading traffic from the core network, improve spectral efficiency and eliminate unnecessary energy loss due to long data transmissions . \n\t\\item \\textit{Millimeter wave (mmWave) communication:} The mmWave communication technology provides 5G mobile communication networks with a tremendous amount of spectrum to meet mobile data demands. 
It comes with a number of advantages including huge bandwidth, narrow beam, high transmission quality, and strong data access ability to overcome shortcomings caused by the explosive growth in mobile traffic volumes, unprecedented connected devices, and diversified use cases .\n\\end{itemize}\nIn the 5G networks, these above technologies will be used to meet the demands of diverse applications from the ongoing traffic explosion of connected devices. For example, the combination of cloud/edge computing and Software Defined Networking and Network Function Virtualization (NFV) is regarded as the potential facilitators for flexible network deployment and operation. Moreover, the network slicing and D2D communication will enable ultra-reliable, affordable broadband access and intelligent use of network data to facilitate the optimal use of network resources with extremely low latency and high-speed device connection , . The proliferation of 5G networks was initially shaped by the Next Generation Mobile Networks (NGMN) alliance with a 5G initiative for enabling emerging services and business demands with the time target of 2020 and beyond.", "id": "e257e34d-0906-4dba-81b7-4d914575584f", "level": "subsubsection", "origin_cites_number": 16, "parent_id": "9972da82-34db-4635-9229-ae8b20177764", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Blockchain and 5G: background, definition and motivation" ], [ "subsection", "5G networks " ], [ "subsubsection", "Overview of 5G networks" ] ], "subsections": [], "title": "Overview of 5G networks" }, { "cite_extract_rate": 0, "cites": [], "content": "The rapid advances of new 5G technologies provide an impetus for new fundamental design principles toward 5G networks. The 5G design principle was outlined by the NGMN alliance \tas shown in Fig. 5. 
Specifically, 5G systems can employ software and virtualisation to achieve the service objectives of flexibility, configurability, and scalability. Particularly, one of the key design concepts behind 5G networks is network slicing, which separates the user and control planes and enables dynamic network function placement, yielding a ubiquitous, flexible and extensible infrastructure for all types of communication services on top of which a dynamic service and business environment can evolve. The vision of 5G lies in providing smart services with very high data rates, extremely low network latency, a manifold increase in base station density and capacity, and significant improvements in quality of service and quality of user experience compared to 4G systems. It provides a convergence of pervasive broadband, sensing, and intelligence to establish a greater scale for the fourth industrial revolution that will stimulate the development of society and industrial markets. \n\\begin{figure*}\n\t\\centering\n\t\\includegraphics [height=6.7cm, width=15cm]{5GDesignPrinciple.pdf}\n\t\\caption{The 5G design principle . }\n\t\\vspace{-0.17in}\n\\end{figure*}\nThe 5G network architecture must support the deployment of security mechanisms and functions (e.g. virtual security firewalls) whenever required, in any network perimeter. As presented in Fig. 5, operation and management need to be simplified. The most prominent technology for simplifying network management is SDN [58]. SDN separates the network control from the data forwarding plane. The control plane is logically centralized to oversee the whole network underneath and control network resources through programmable Application Programming Interfaces (APIs). 
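The control/data-plane split just described can be sketched as a toy model: a logically centralized controller programs match-to-action rules through an API, while switches forward purely by table lookup. This is an illustrative sketch only; the `Switch` and `Controller` classes and their methods are hypothetical, not the API of any real SDN framework such as OpenFlow.

```python
class Switch:
    """Data plane: no local intelligence, only rule lookup."""
    def __init__(self):
        self.flow_table = {}  # match field (destination) -> action (output port)

    def forward(self, dst: str) -> str:
        # Forward by table lookup; unmatched traffic is dropped by default.
        return self.flow_table.get(dst, "drop")

class Controller:
    """Control plane: holds the global view, programs switches via an API."""
    def install_rule(self, switch: Switch, dst: str, port: str) -> None:
        switch.flow_table[dst] = port
```

The point of the split is visible in the sketch: changing network behavior means the controller rewrites flow tables in software, with no change to the forwarding logic itself.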
Network Functions Virtualization (NFV) implements Network Functions (NF) virtually by decoupling hardware appliances (such as firewalls, gateways) from the functions that are running on them to provide virtualized gateways, virtualized firewalls and even virtualized components of the network, leading to the provisions of flexible network functions. Meanwhile, cloud computing/cloud RAN supports unlimited data storage and data processing to cope with the growing IoT data traffic in 5G. The combinations of 5G enabling technologies promise to foster mobile networks with newly emerging services such as intelligent data analytics, big data processing. Specially, different from previous network generations (i.e. 3G/4G), 5G is promising to provide mobile services with extremely low latency, energy savings due to flexibility (i.e. network slicing and proximity of edge computing), all of which will enhance QoS of the network and ensure high QoE for users.", "id": "513fb75e-6f9a-4da9-8809-f884e1d2c17a", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "9972da82-34db-4635-9229-ae8b20177764", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Blockchain and 5G: background, definition and motivation" ], [ "subsection", "5G networks " ], [ "subsubsection", "5G design principles" ] ], "subsections": [], "title": "5G design principles" }, { "cite_extract_rate": 0, "cites": [], "content": "In this subsection, we highlight the motivation of the integration which comes from the security challenges of 5G networks and the promising opportunities brought by the incorporation of such two technology families.", "id": "c3b425f9-5d6c-40cc-b247-e9bf822a6f13", "level": "subsection", "origin_cites_number": 0, "parent_id": "949b37ce-df58-4f01-99f6-8f1594e9cf39", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Blockchain and 5G: background, 
definition and motivation" ], [ "subsection", "\tMotivations of the Blockchain and 5G integration" ] ], "subsections": [ "d644002f-bcdb-4585-8584-f159690c7dc7", "0c30f626-c5af-40f3-b6c6-d214ae780847", "5919c227-3374-4dc8-a951-1f3aca7ac398" ], "title": "\tMotivations of the Blockchain and 5G integration" }, { "cite_extract_rate": 0, "cites": [], "content": "To highlight the motivation, we recall the most important properties of both technologies for the integration. Blockchain brings the capability of storing and managing 5G data through its secure distributed ledger. More importantly, blockchain can provide a series of security features such as immutability, decentralization, transparency and privacy, all of which promise to tackle efficiently security issues of current 5G networks. Thus, the main points of blockchain here are its capabilities to support security and network management for 5G networks and applications. On the other side, 5G considered in this paper refers to the latest generation wireless networks which are envisioned to provide higher capacity, higher data rate, lower latency, massive device connectivity, enhanced end-user quality-of-experience (QoE), reduced operation cost, and consistent service provisioning. Therefore, the key points of 5G here are its advantages of providing fast and high-quality services and the need for security and networking improvement.\nReviewing the rich and state of the art articles in the field, the motivation behind the integration of blockchain and 5G stems mainly from the promising benefits of blockchain for solving challenges in 5G networks in terms of security, privacy, networking and service management. With the help of innovative blockchain designs, 5G is expected to overcome the existing challenges and open up new opportunities to empower blockchain 5G-based services and applications. 
In the following, we discuss the motivation of the integration coming from current 5G challenges and then present opportunities brought from the blockchain-5G integrations.", "id": "d644002f-bcdb-4585-8584-f159690c7dc7", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "c3b425f9-5d6c-40cc-b247-e9bf822a6f13", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Blockchain and 5G: background, definition and motivation" ], [ "subsection", "\tMotivations of the Blockchain and 5G integration" ], [ "subsubsection", "Definition of the integration of Blockchain and 5G" ] ], "subsections": [], "title": "Definition of the integration of Blockchain and 5G" }, { "cite_extract_rate": 0, "cites": [], "content": "The security associated with 5G technologies has been considered as one of the key requirements related to both 5G and beyond systems. The existing 5G technology infrastructure has remained unsolved challenges in terms of security, networking and computing performance degradation due to its centralized architecture . For example, edge/cloud computing models current rely on centralized service providers (i.e. Amazon cloud), which reveals various security bottlenecks. Indeed, this configuration is vulnerable to single-point failures, which bring threats to the availability of cloud/edge services for on-demand user access. A centralized system does not guarantee seamless provisions of IoT services when multiple users request simultaneously data or servers are disrupted due to software bugs or cyberattacks. \nMoreover, network function virtualization (NFV) and service function chaining in 5G networks, however, also incur new security challenges , . Since end-to-end service function chains may deploy NFVs in an environment involving multiple cloud providers, such data transmissions can be compromised by curious cloud entities, leading to data leakage concerns. 
Furthermore, in a virtualized scenario, tenants often share the same cloud infrastructure. In this context, the possibility of attacks inside the cloud can increase, which damages the transparency and accountability of service providers. In NFV, network functions run on virtual machines (VMs) executing distinct operating systems, with operations such as VM migration or resource allocation performed using orchestration protocols. However, securing the communication between the orchestrator and the VM manager on the physical machine is a real challenge.\nThe rapid proliferation of mobile data traffic and the increasing user demands on 5G infrastructure also introduce new challenges in terms of security and performance degradation. For example, the increasing requirement for bandwidth-hungry applications in 5G services, such as mobile video streaming and big data processing, requires a proper 5G spectrum resource management strategy to avoid resource scarcity issues and ensure continuous service functionality. Therefore, spectrum sharing between mobile network operators (MNOs) and mobile users is necessary. However, spectrum sharing in such scenarios also raises security concerns and provides a central point of attack for malicious users . A possible approach is to use certification authorities, which provide certificates for cognitive radios inside each cell. This approach not only requires infrastructure to be implemented for each cell but also requires a protocol for defence against central-point attacks. Further, it requires greater computational complexity and longer packet lengths, which increases overhead for spectrum sharing systems and thus reduces the Quality of Service (QoS) of the involved system. Importantly, the use of such centralized architectures also adds single-point-of-failure bottlenecks when the authority is attacked or out of service, which leads to the disruption of the entire spectrum sharing network. 
\nIn the 5G IoT scenarios such as smart healthcare, smart cities where mobile environments are highly dynamic with the conjunction of ubiquitous IoT devices, heterogeneous networks, largescale data storage, and powerful processing centres such as cloud computing for service provisions, security and privacy issues become much more complex to be solved . In fact, a prohibitively large amount of IoT data will be generated continuously from ubiquitous IoT sensor devices. It is very challenging to immediately identify the objects of interest or detect malicious actions from thousands of data transactions on a large scale. The solution of using a centralized management may be infeasible to such use cases due to long latency, privacy risks due to curious third parties and network congestion. Obviously, how to provide efficient mobile services (i.e. data sharing, data processing, user management) in terms of low latency and increased network throughput while still ensure high degrees of security is a critical challenge. 
Therefore, there are urgent needs of innovative solutions to overcome the above security and network performance limitations for future 5G networks.", "id": "0c30f626-c5af-40f3-b6c6-d214ae780847", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "c3b425f9-5d6c-40cc-b247-e9bf822a6f13", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Blockchain and 5G: background, definition and motivation" ], [ "subsection", "\tMotivations of the Blockchain and 5G integration" ], [ "subsubsection", "\tSecurity challenges in 5G networks" ] ], "subsections": [], "title": "\tSecurity challenges in 5G networks" }, { "cite_extract_rate": 0, "cites": [], "content": "With its promising security properties, blockchain promises to provide a new set of innovative solutions for 5G networks and services for better security, privacy, decentralization and transform the network management architectures for improved QoS as well as better 5G performances. Therefore, 5G should leverage the benefits of blockchain to accommodate flexibility and security in providing mobile network services and ubiquitous coverage. In short, we highlight the significant opportunities that blockchain can bring to 5G networks and services, with a focus on three main aspects, including security enhancements, system performance improvements, and network simplification. \n\\begin{enumerate}\n\t\\item \\textit{Security enhancements:} Blockchain promises to enhance the security and privacy of 5G ecosystems, by offering many promising technical properties such as decentralization, privacy, immutability, traceability, and transparency. Blockchain can eliminate the centralized network management concept by decentralizing the network infrastructure where there are no third party authorities needed. 
As an example, the concept of blockchain-based cloud computing enables decentralization of cloud/edge 5G networks: it removes centralized control at the core network and provides a decentralized, fair agreement via the blockchain consensus platform, which eliminates single-point-of-failure bottlenecks and significantly improves system trust. Besides, the security of D2D communication can be achieved by building a peer-to-peer network via blockchain, which turns each D2D device into a blockchain node holding a ledger copy, able to verify and monitor transactions for better system transparency and reliability. \n\tEspecially, different from conventional database management systems, which often use a centralized server to perform access authentication and security mechanisms, blockchain with smart contracts can implement decentralized user access validation by using the computing power of all legitimate network participants. This makes 5G services (i.e. spectrum sharing, data sharing, resource allocation) strongly resistant to data modifications. Many research works on blockchain [11], [12], [13] demonstrate that blockchain adoption is beneficial to 5G spectrum management in terms of better verification of spectrum access with blockchain contracts and improved accessibility thanks to the transparency of blockchain. Moreover, the use of blockchain fosters scalable spectrum sharing over the peer-to-peer ledger network, where spectrum license holders and band managers are eliminated for high trustworthiness. The ledger services with strong immutability from blockchain also provide a high degree of security and better system protection capability against DoS attacks and threats. Empowered by smart contracts, which provide highly flexible and efficient user access control mechanisms via access rules and programmable logic, blockchain can potentially introduce new authentication solutions for 5G cellular networks. 
Instead of relying on external public key infrastructure, contracts can automatically authenticate user access, detect threats and discard malicious access from the network in an autonomous manner without revealing user information. Besides, by publishing user data to the ledger, where data is hashed and appended immutably to blocks, blockchain platforms ensure strong data protection. Blockchain is capable of giving users full control of their personal data when sharing it over an untrusted network, which distinguishes it from traditional approaches that hinder users from tracking their data [14]. \n\t\\item \\textit{System performance improvements: }The use of blockchain also potentially improves the performance of 5G systems. In comparison to traditional database platforms such as SQL, blockchain can provide better data storage and management services with low-latency data retrieval. In fact, resource requests (i.e. data access) can be verified by decentralized blockchain nodes with the support of intelligent smart contracts, without passing through a centralized authority, which promises to reduce network latency. Moreover, by removing central intermediaries, blockchain is able to establish direct communications between 5G service providers and mobile users so that management costs can be significantly reduced. This would provide a much more flexible and efficient data delivery model for 5G ecosystems while still meeting stringent security requirements [12]. For example, blockchain can help establish secure peer-to-peer communication among users (i.e. in D2D communication) using the computing power of all participants to operate the network instead of relying on a third-party intermediary. This would potentially reduce communication latency and transaction costs, and provide global accessibility for all users, all of which will enhance the overall system performance. 
Specifically, even when an entity is compromised by malicious attacks or threats, the overall operation of the involved network is still maintained via consensus on distributed ledgers, which in turn eliminates single-point-failure vulnerabilities for better security. \n\t\\item \\textit{Network simplification:} It is believed that blockchain can simplify 5G network deployments thanks to its decentralized architecture. Indeed, by leveraging blockchain, mobile operators no longer need to worry about establishing centralized control servers. 5G service delivery can be achieved by the blockchain network, where user access, service responses and service trading (i.e. resource trading and payment) can be implemented on decentralized ledgers among network participants, including service providers and mobile users, without the need for additional management infrastructure [5]. Therefore, blockchain adoption potentially reduces network complexity and thus significantly saves operational costs. Furthermore, the transactions for 5G services (i.e. data sharing, spectrum sharing) are controlled by the blockchain network itself, where all entities hold the same rights to manage and maintain the network. The capability of exploiting internal resources from participants is another great advantage that blockchain can provide to simplify network organization and management for better user experience and facilitation of service transactions, especially in the complex mobile environments of future 5G networks [6]. 
\n\\end{enumerate}", "id": "5919c227-3374-4dc8-a951-1f3aca7ac398", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "c3b425f9-5d6c-40cc-b247-e9bf822a6f13", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Blockchain and 5G: background, definition and motivation" ], [ "subsection", "\tMotivations of the Blockchain and 5G integration" ], [ "subsubsection", "\tOpportunities brought by blockchain to 5G networks and services" ] ], "subsections": [], "title": "\tOpportunities brought by blockchain to 5G networks and services" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 7971 ], "content": "Reviewing state-of-art literature works , , , we found that blockchain has mainly cooperated with the key 5G enabling technologies including cloud computing, edge computing, Software Defined Networks, Network Function Virtualization, Network Slicing, and D2D communication. Motivated by this, in this section, we present a review on the integration of blockchain and such 5G technologies. 
The benefits of blockchain for different 5G use cases and applications empowered from the integration are also analysed in details.", "id": "dbe49dc6-60e3-49d0-9815-b5f8a32b8d02", "level": "section", "origin_cites_number": 3, "parent_id": "5bf49667-4d6b-4397-b786-063258be0fd6", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Blockchain for enabling 5G technologies" ] ], "subsections": [ "ad01a2c4-dc99-48a5-b688-0611c92f9b5b", "2ef93ba7-a7dd-4f9b-b4be-155c35b9a10c", "9e0617f6-086a-4138-9d37-97e50fa7eb44", "8139960e-dc42-4e3f-b49e-ba63285e936c", "dc4f0658-30a9-4107-bb13-1a43d12545f5", "daa268a8-2e05-44ab-adbd-d893c30429f0" ], "title": "Blockchain for enabling 5G technologies" }, { "cite_extract_rate": 0, "cites": [], "content": "Cloud computing has drawn significant attention in the last decades thanks to its unlimited resources of storage and computation power, which can provide on-demand, powerful and efficient services with minimum management efforts. Cloud computing has been investigated and integrated extensively with 5G networks, paving the way for the computing-intensive applications involving multi-dimensional massive data processing assisted by the cloud , . In fact, cloud computing paradigms provide a number of technical solutions for realizing 5G services, such as optimizing the communications, processing and storage processes , 5G data content delivery and catching , resource allocation and data transmission management , and cloud-enabled small cell networking for 5G media services . Specially, in order to meet the ever-increasing demand of user association and resource allocation in cellular 5G networks, the architecture of cloud radio access networks (Cloud-RANs) is envisioned as an attractive model that manages the large number of small cells through the centralized cloud controller as baseband unit (BBU) pool . 
Cloud-RAN is able to offer high-speed interconnection and shared powerful processing to facilitate optimal multicell cooperation, collaborative radio, and real-time cloud computing , , which makes Cloud-RAN a promising candidate for next-generation 5G access networks. \nHowever, the existing cloud computing models still face unsolved challenges in terms of security, networking and computing performance degradation due to their centralized architecture. Indeed, in the 5G era, the massive data traffic outsourced from IoT devices to the cloud has brought about a series of new security challenges, mainly including data availability, data privacy management, and data integrity . \n\\begin{itemize}\n\t\\item \\textit{Data availability:} In current cloud network architectures, cloud services are provided and managed centrally by a central authority. However, this configuration is vulnerable to single-point failures, which threaten the availability of cloud services for on-demand user access. A centralized cloud IoT system does not guarantee seamless provision of IoT services when multiple users simultaneously request data or when cloud servers are disrupted due to software bugs or cyberattacks.\n\t\\item \\textit{Privacy management:} Although centralized cloud 5G networks can provide convenient services, this paradigm raises critical concerns related to user data privacy, considering the large amount of heterogeneous 5G data being collected, transferred, stored and used on dynamic cloud networks. In fact, IoT users often place their trust in cloud providers managing the applications while knowing very little about how data is transmitted and who is currently using their information . In other words, by outsourcing data protection to the cloud, IoT data owners lose control over their data, which also has adverse impacts on the data ownership of individuals. 
Moreover, even in distributed cloud IoT paradigms with multiple clouds, IoT data are not fully distributed but stored in a few cloud data centres at high density . In this context, a massive amount of heterogeneous data may be leaked and user privacy breached if one of the cloud servers is attacked.\n\t\\item \\textit{Data integrity:} The storage and analysis of 5G data on clouds may give rise to integrity concerns. Indeed, due to having to place trust in the centralized cloud providers, outsourced data is put at risk of being modified or deleted by third parties without user consent. Moreover, adversaries can tamper with cloud data resources , all of which can breach data integrity. For these reasons, many solutions have been applied to overcome the problem, using public verification schemes in which a third-party auditor is needed to perform integrity verification periodically. This scheme potentially raises several critical issues, including irresponsible verification that generates biased data integrity results, or invalid verification due to malicious auditors. \n\t\\item \\textit{Lack of immutability:} The dynamic offloading of 5G data to clouds and the data exchange between cloud providers and mobile users are vulnerable to information modifications and attacks caused by adversaries or third parties. Even entities within the network may be curious about the data transmitted during sharing and may obtain personal information without authorization (i.e. customer data in a 5G smart grid or location information of vehicles in vehicular networks). These issues may lead to serious data leakage bottlenecks and consequently damage system immutability.\n\t\\item \\textit{Lack of transparency:} In conventional cloud systems, cloud resource providers have full control over outsourced network data (i.e. IoT data) while users are not aware of this and lack the ability to track data after offloading it to the cloud. 
This poses critical challenges for data users in performing verification and monitoring of data flows or usage, especially in 5G scenarios where transparency among network members is highly required to ensure fairness and openness, i.e. between cloud service providers and slice users in cloud-based network slicing, or between healthcare providers and patients in cloud e-health. \n\end{itemize}\nRecently, blockchains have been investigated and integrated into cloud computing to effectively address the above security challenges in cloud-based 5G networks. For example, the work in takes advantage of blockchain to develop a framework called BlockONet for 5G access scenarios, aiming to improve network credibility and security in the 5G fronthaul. Blockchain is employed to build a verification platform between IoT devices, the BBU unit, and the manufacturer, where user access information is stored immutably on the chain, while smart contracts are also leveraged to perform automatic user authentication. The benefits from the use of blockchain in Cloud-RAN 5G networks are twofold. First, the concept of blockchain-based Cloud-RAN gets rid of centralized control at the core network and offers a decentralized fair agreement through the blockchain consensus platform, which eliminates single-point-of-failure bottlenecks and significantly improves system trust. Second, by applying a decentralized blockchain without third parties, the blockchain-based Cloud-RAN strategy can achieve optimal resource utilization and save a large amount of signalling and connection costs. In the same direction, the study in applies blockchain to build a trusted authentication architecture for the cloud radio access network (Cloud-RAN) in the 5G era.
They also show that the proposed schemes can effectively address network access authentication with a trusted agreement among service providers and IoT users, with reduced operation costs and improved spectrum usage over Cloud-RAN based mobile networks.\nBlockchain is also integrated with cloud computing for 5G IoT networks. The study in proposes a cloud-centric IoT framework enabled by smart contracts and blockchain for secure data provenance. Blockchain is incorporated into cloud computing to build a comprehensive security network where IoT metadata (i.e. cryptographic hashes) is stored on the blockchain while the actual data is kept in cloud storage, which makes it highly scalable for dense IoT deployments. In this system, smart contracts, with their autonomous, transparent and immutable properties, are also adopted to ensure high cloud data validity. Meanwhile, a secure data sharing architecture was introduced in with an attribute-based access control cryptosystem. Its network model consists of four main components: IoT devices, a data owner, a blockchain network and a cloud computing platform. More specifically, a permissioned blockchain model is adopted to manage IoT transactions and perform access control for device requests received by the cloud, while the cloud closely monitors the blockchain network. As a result, such a cloud-blockchain integration brings a comprehensive security framework with enhanced privacy preservation, data ownership and secure data sharing. Similarly, a hierarchical access control structure for cloud blockchain was investigated in with blockchain-based distributed key management. Especially, the blockchain network topology involves distributed side blockchains deployed at fog nodes and a multi-blockchain operated in the cloud, which speeds up access verification and offers flexible storage for scalable IoT networks. In addition, to protect cloud blockchain in security-critical applications, a forensic investigation framework is proposed using decentralized blockchain .
Security issues from dynamic interactions between cloud service providers, clients, and IoT devices were considered and analysed with a tamper-evident scheme. Blockchain is used to audit evidence during the investigation of a criminal incident among cloud blockchain entities in a decentralized manner, thereby avoiding single points of failure in cloud storage and improving evidence availability.\nIn addition, blockchain has also been incorporated into cloud federation architectures to further improve the performance of complex 5G-IoT networks in terms of transparent collaboration and interconnected services. As an example, a blockchain framework was proposed for a joint cloud collaboration environment where multiple clouds are interconnected securely by peer-to-peer ledgers . The proposed scheme contains three tiers with an IoT sensor network, a federation of multiple clouds, and a service platform. Typically, the blockchain platform can offer many advantages over schemes based on a single cloud. For instance, since IoT data in each area is stored in a private local cloud in the multi-cloud network, its data security is significantly improved. Further, each cloud can offer instant services for IoT users through the private blockchain network, which also mitigates risks of malicious attacks on cloud systems . Besides, a cloud blockchain model with micro-clouds was introduced by using blockchain-enabled distributed ledgers. The authors pay special attention to building a joint cloud blockchain to enable secure decentralized collaborative governance services, i.e.
immutable data storage, transparent monitoring and resource management, with suitable performance on lightweight computing nodes such as IoT devices.
\begin{figure}\n\t\centering\n\t\includegraphics[height=7.5cm,width=8cm]{Blockchain_Edge.pdf}\n\t\caption{The convergence of blockchain and edge computing for 5G services. }\n\t\label{fig11}\n\t\vspace{-0.15in}\n\end{figure}\nAs an extension of cloud computing, mobile edge computing (MEC) has emerged as a promising technology to empower 5G services. Edge computing may appear under other names such as fog computing, mobile cloud or cloudlet. Similar to the cloud paradigm, edge computing can offer a series of computing services with capabilities of task processing, data storage, heterogeneity support and QoS improvement. In fact, edge servers are less powerful than remote clouds, but they are located at the edge of the network, in close proximity to IoT devices, which enables highly efficient 5G data computation with much lower transmission delay compared with the remote cloud . As a result, edge computing can provide instant computing applications to IoT users with low latency and fast service response, which is particularly useful for next-generation services (i.e. in 5G and beyond).
The distributed structure of edge computing also potentially brings numerous benefits, from ubiquitous computing services and scalability improvement to reduced complexity of network management, to cope with the explosion of IoT devices and the rapid growth of 5G service demands . However, security remains a significant challenge , . Indeed, the migration of 5G services, i.e. data computation, in dynamic edge computing environments can be vulnerable to malicious attacks (such as jamming attacks, sniffer attacks, denial-of-service attacks, etc.). Further, the setting and configuration information provided by the edge service providers (ESP) must be trustworthy and secure, but this is difficult to guarantee due to the high dynamism and openness of the MEC system. Another challenge is to ensure data privacy and immutability for outsourced 5G heterogeneous data against external modifications or alterations. Importantly, how to avoid system disruption caused by an attack on an edge node in multi-edge computing is of paramount importance for 5G-based edge computing networks. Fortunately, blockchain has emerged as a promising technical enabler to overcome most of the security and networking challenges faced by existing edge computing architectures. The shared decentralized nature of blockchain and MEC, both built on distributed networking, storage, computation and communication, makes their combination natural. Recent research results have demonstrated that blockchain can be applied to edge computing systems to support a number of security and management services in edge computing . Generally, blockchain can support edge computing-based 5G services in three main aspects: networking, storage and computation, as shown in Fig. 6.\nIn fact, with the help of blockchain, the networking capability of edge networks can be optimized.
The blockchain is employed in to build a distributed and trusted authentication system to realize reliable authentication and information sharing among different edge-based IoT platforms. In the system, authentication data and user access information can be stored securely on the blockchain, which is also capable of automatically tracking the activities of mobile terminals (devices) without the need for central authorities. In particular, smart contracts are also utilized to perform trusted content caching in the edge computing network. Meanwhile, the works in , suggest a blockchain-based architecture for vehicular edge computing. Vehicular edge computing is introduced to provide data processing services with low latency, but it also raises privacy concerns since user information can be disclosed during the sharing process. The adoption of blockchain potentially solves such challenges by establishing a secure communication channel empowered by immutable transaction ledgers. This robust and secure concept then enables both the energy flow and the information flow to be protected against external malicious attacks when performing vehicular networking. Furthermore, blockchain also helps ensure security in the transmission process. The authors in , take advantage of blockchain to establish a security mechanism for edge computing-based energy systems where smart contracts are leveraged to build a trusted access control scheme for energy sharing and distribution. Further, the blockchain-based solutions can support efficient conditional anonymity and key management for the privacy-preserving authentication protocol without the need for other complex cryptographic primitives between network users.
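The smart-contract access-control pattern used in these energy-sharing schemes can be illustrated with a minimal, hypothetical Python sketch. The contract state, device names and policy below are invented for illustration; a real deployment would express this logic in a contract language such as Solidity or chaincode rather than plain Python:

```python
# Hypothetical sketch of smart-contract-style access control: the "contract"
# holds an on-chain permission table, and every decision is appended to an
# event log so it can be audited later.
class AccessControlContract:
    def __init__(self, owner):
        self.owner = owner
        self.permissions = {}   # resource -> set of authorized device ids
        self.events = []        # append-only audit trail

    def grant(self, caller, device, resource):
        # Only the contract owner may change the policy.
        if caller != self.owner:
            raise PermissionError("only owner may grant access")
        self.permissions.setdefault(resource, set()).add(device)
        self.events.append(("grant", device, resource))

    def request_access(self, device, resource):
        allowed = device in self.permissions.get(resource, set())
        self.events.append(("access", device, resource, allowed))
        return allowed

contract = AccessControlContract(owner="utility-operator")
contract.grant("utility-operator", "meter-7", "energy-trading")
assert contract.request_access("meter-7", "energy-trading")
assert not contract.request_access("meter-9", "energy-trading")
```

Because every grant and access decision is logged in an append-only trail, the scheme also supports the auditing and conditional-anonymity properties discussed above.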
Moreover, to achieve a trustworthy and efficient edge computing system, the blockchain functionality is applied to resource management , data sharing or resource allocation , all of which improve edge computing performance while guaranteeing the security properties of the network.\nIn addition, blockchain also provides security features for efficient data storage in edge computing systems. Indeed, blockchain can offer decentralized data storage enabled by the combined storage capacity of a network of peers to store and share content. The work in proposes a MEC-based sharing economy system by using the blockchain and off-chain framework to store immutable ledgers. Specifically, in a smart vehicular network, blockchain can keep information about the driver and the car profile with the history of maintenance, accidents, and other car usage information. The raw vehicular data, i.e. vehicle sensor data, can be captured and processed by the MEC node under the control of the blockchain. Blockchain can also connect the stakeholders of a car through a shared chain and provide help in car-sharing economy scenarios. The work in also proposes a blockchain database to secure communication between home devices and sensors in the MEC-based smart city. In the sense of the ledger, blockchain can be regarded as a distributed database which keeps data by interconnecting a network of strongly immutable blocks. It is worth noting that the scalability of blockchain is a critical challenge due to the constrained ledger size, throughput and latency . In this regard, the on-chain and off-chain storage concept can be very useful. For example, in the vehicle context, real-time updates regarding traffic and pollution of nearby roads can be stored locally in a cache unit for autonomous cars, while the data hash values can be kept securely on the blockchain.
Any modifications to the storage unit can be acknowledged by the blockchain via decentralized ledgers, improving the trustworthiness of the MEC-based network. Moreover, to facilitate easy access to data in a distrusted MEC blockchain setting, a decentralized big data repository platform, such as the InterPlanetary File System (IPFS), can be necessary for improving storage capability on blockchain . On top of IPFS, several blockchain-based storage platforms such as Filecoin or Storj have been applied as an incentive layer to form an entirely distributed file storage system. These blockchain database systems contain the off-chain service data while providing the on-chain identifier, so that data integrity can be checked by recomputing the identifier from the data and comparing it with the hash value stored on the blockchain. Such a blockchain platform is integrated with edge computing to solve storage risks caused by dynamic MEC .\nLastly, blockchain can support the computation processes in MEC networks. Specifically, blockchain can provide authentication capability to protect MEC systems. The study in leverages blockchain features such as decentralization, tamper-proofing and consistency to build an authentication layer between edge/fog servers and IoT devices. The main objective is to monitor and verify all computing tasks offloaded to the MEC servers, which protects edge computing from external attacks. In , smart contracts are employed for MEC to improve the efficiency of IoT computing, i.e. video coding, by providing a self-organized video transcoding and delivery service without centralized authentication. Blockchain can protect the accuracy, consistency, and origins of the data files in a transparent way.
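The on-chain identifier, off-chain data pattern described earlier in this subsection can be sketched in a few lines. This is a hypothetical illustration only: plain dictionaries stand in for the immutable ledger and the off-chain store (e.g. an edge cache or IPFS), and the record names are invented:

```python
import hashlib

on_chain = {}    # stand-in for an immutable ledger: record_id -> digest
off_chain = {}   # stand-in for off-chain storage: record_id -> raw bytes

def store(record_id, payload):
    """Keep bulky data off-chain; anchor only its digest on the chain."""
    off_chain[record_id] = payload
    on_chain[record_id] = hashlib.sha256(payload).hexdigest()

def verify(record_id):
    # Integrity check: recompute the digest of the off-chain copy and
    # compare it with the identifier anchored on the chain.
    return hashlib.sha256(off_chain[record_id]).hexdigest() == on_chain[record_id]

store("road-update-17", b"traffic: slow, pollution: low")
assert verify("road-update-17")
off_chain["road-update-17"] = b"traffic: clear"   # off-chain tampering
assert not verify("road-update-17")               # detected via the on-chain hash
```

Keeping only fixed-size digests on-chain sidesteps the ledger-size and throughput limits noted above, while any modification of the off-chain copy is still detectable.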
Further, the transactional data are also encrypted and stored on blocks, which has the potential to achieve privacy and security for MEC .
Software-Defined Networking (SDN) has gained great attention over the past years and has been regarded as a key pillar of future 5G networks. SDN is an intelligent networking architecture that aims to improve the programmability and flexibility of networks. The main concept of SDN is the separation of the control plane from the network switches and the provisioning of external control of data through a logical software controller, enabling mutual access between different parts of heterogeneous networks . This design not only offers a number of new architecture, management and operation options, but also provides the ability to deliver user services efficiently while exploiting network resources more effectively. In the 5G context, SDN is developed to make the connectivity services provided by 5G networks programmable, where traffic flows can be dynamically steered and controlled in order to achieve maximum performance benefits. However, despite the obvious advantages that this novel networking paradigm introduces, there remain some non-trivial challenges that hold back its undisputed dominance over legacy solutions, namely security, flexibility and scalability.
\n\\begin{itemize}\n\t\\item \\textit{Security:} In SDN, security is about the authentication in the control plane and mitigation of data modification and leakage in the data plan. In fact, one of the most important shortcomings of SDN is its increased attack surface compared to traditional networking deployments when the controller is modified or compromised. The most fundamental property of the SDN architecture is the decoupling of the control plane and the data plane, but this decoupling also broadens the attack surface of the network and introduces attack bottlenecks for the application layer . Furthermore, the centralized design of the SDN controller is also vulnerable to attacks on the control layer, which can cause controllers, routers, and switches to be maliciously modified, generate and cause loss of flow table information .\n\t\\item \\textit{Scalability:} How to build scalable SDN networks to enable multiple SDN controllers to communicate each other and achieve secure information exchanges between them is a challenge. By providing a distributed network architecture, SDN service providers not only reduce costs and enhance the flexibility to extend the network but also involve the deployment of new services to meet new market requirements .\n\t\\item \\textit{Full network decentralization:} The centralized design concept of current SDN models is possibly vulnerable to single-of-failure risks when a network entity is attacked or compromised, which leads to the disruption of the entire network. Therefore, developing a decentralized SDN architecture which can solve this problem and improve quality of services is vitally significant. \n\t\\item \\textit{Network management:} In the multi-SDN environments, SDN devices cannot be interoperable and achieve interconnection and cooperation due to the stringent latency requirements from different 5G service providers. 
The utilization of network resources requires a centralized repository maintained by all parties for the service provider, but it is challenging to achieve mutual trust between suppliers and fairness of resource allocation due to the potential conflicts of interest of service providers. How to achieve trusted network management for efficient cooperation in multi-SDN networking and perform reliable resource sharing is a challenge . \n\end{itemize}\n\begin{figure}\n\t\centering\n\t\includegraphics [height=6.8cm,width=6cm]{Blockchain_SDN.pdf}\n\t\caption{A blockchain-based SDN architecture. }\n\t\label{fig12}\n\t\vspace{-0.15in}\n\end{figure}\nIn order to overcome these shortcomings in SDN architectures, many research efforts have been dedicated to blockchain as a decentralized security provisioning solution for SDN. The authors in propose blockchain as an authentication solution for SDN-based 5G networks with the objective of eliminating unnecessary re-authentication in repeated handovers among heterogeneous cells. Multiple SDN controllers in this approach can communicate with each other and interact with the blockchain, which enables secure information exchange between them. Transactions and messages from the blockchain can be shared with the controllers via dedicated transfer keys. Each SDN controller has a dedicated transfer key received from the blockchain, which is used to transfer and receive information. Importantly, scalability can be solved effectively by a blockchain-based hierarchical structure. If any SDN controller in a cell goes down, the system will then manage this cell using another SDN controller in the network, where consensus among SDN controller candidates can be achieved via blockchain ledgers. The integration of blockchain in SDN is thus promising to remove intermediaries for authentication, reduce transaction costs, and achieve global accessibility for all users.
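The handover idea above, with authentication recorded once on a shared ledger so that any controller can admit an already-authenticated device, can be sketched as follows. This is a hypothetical Python illustration: the ledger is a plain dictionary, the device identifiers are invented, and a real system would replicate the records via blockchain consensus rather than shared memory:

```python
import hashlib
import secrets

# Shared ledger of authentication records: device_id -> hash of its token.
# In the schemes above this would be a replicated blockchain ledger.
shared_ledger = {}

def admit(device_id):
    """Controller side: full authentication happens once; record the result."""
    token = secrets.token_hex(16)
    shared_ledger[device_id] = hashlib.sha256(token.encode()).hexdigest()
    return token   # handed to the device

def fast_handover(device_id, token):
    """Any other controller: check the presented token against the ledger,
    avoiding a full re-authentication on every cell change."""
    record = shared_ledger.get(device_id)
    return record is not None and \
        record == hashlib.sha256(token.encode()).hexdigest()

t = admit("ue-001")                  # initial attach at controller A
assert fast_handover("ue-001", t)    # handover to controller B: no re-auth
assert not fast_handover("ue-001", "bogus-token")
```

Only a hash of the token is placed on the ledger, so reading the ledger alone does not allow a compromised controller to impersonate the device.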
Meanwhile, the work in proposes a decentralized blockchain-based security framework for SDN-enabled vehicular ad-hoc networks (VANETs). The SDN controller is in charge of the global policies, including authentication and mobility/traffic management, while the controller-defined policies are implemented at the data plane. With its immutable and decentralized features, blockchain helps record all vehicular messages and builds trust in the SDN-based vehicular system to ensure reliable message transmission and avoid fake messages from malicious vehicles. Further, in SDN, security also includes authentication in the control plane and data preservation in the data plane. Blockchain can be a solution for a decentralized security provisioning system in such scenarios . To improve throughput and ensure trust in vehicular SDN systems, the work in introduces a blockchain-based consensus protocol that interacts with the domain control layer in SDN, aiming to securely collect and synchronize the information received from different distributed SDN controllers. Specifically, in the area control layer, vehicle and link information is collected and sent to the domain control layer, which operates in a distributed blockchain manner. Blockchain is able to share the model parameters of a domain controller with other domain controllers in a transactional manner to reach a consensus among multiple controllers in the distributed software-defined VANET. \nBesides, blockchains also potentially address other security and networking issues caused by the centralized control concept of SDN. In fact, most network functions can be implemented by SDN applications, and malicious software may cause severe damage to the SDN infrastructure. The lack of standards and guidelines for software development may also pose security threats. For example, third-party providers can access the network and modify control rules without the consent of SDN controllers, leading to serious data leakage risks.
The work in uses an immutable and incorruptible blockchain as a significant security mechanism for solving potential attacks in SDN such as unauthenticated access control, Denial-of-Service (DoS) attacks, SDN controller attacks and flooding attacks. Another work in builds a global trust assessment scheme using blockchain for SDN-based home network controllers. Users can assign a desired trust level to isolated network slices using a simplified risk assessment scale. The SDN controllers can update the trust scores of users and evaluate the scores via reports, which are then managed securely by the blockchain in a tamper-resistant distributed manner. \nTo achieve high-efficiency fault-tolerant control in SDN, the study employs blockchain on SDN controllers as depicted in Fig. 7. The data plane provides the underlying data forwarding function, which is defined in software with the OpenFlow protocol. In the control plane, all the controllers are connected via blockchain in a distributed manner within different control domains. At the software level, each controller in the control plane is loaded with the identical distributed ledger maintained by the consensus plane, and smart contracts utilize the consistent data in the distributed ledger to provide customized network functions. The consensus plane performs multi-controller consensus for the pending-process services and inserts the results into a block data structure on a distributed ledger, while the contract plane contains smart contracts to perform automatic network functions.
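The multi-controller consensus performed by the consensus plane can be illustrated with a simplified quorum vote. This is a hypothetical sketch only: real deployments run a full consensus protocol such as PBFT across rounds of messages, not a single-shot majority count, and the controller names and flow rules are invented:

```python
from collections import Counter

def commit_decision(votes, quorum):
    """votes: mapping controller-id -> proposed decision.
    Return the decision backed by at least `quorum` controllers, else None,
    so a minority of faulty controllers cannot impose a result."""
    decision, count = Counter(votes.values()).most_common(1)[0]
    return decision if count >= quorum else None   # no quorum, no commit

votes = {
    "ctrl-1": "install flow 10.0.0.5 -> port 80",
    "ctrl-2": "install flow 10.0.0.5 -> port 80",
    "ctrl-3": "drop flow 10.0.0.5",               # faulty/compromised controller
    "ctrl-4": "install flow 10.0.0.5 -> port 80",
}
assert commit_decision(votes, quorum=3) == "install flow 10.0.0.5 -> port 80"
assert commit_decision({"ctrl-1": "a", "ctrl-2": "b"}, quorum=2) is None
```

Once a quorum result exists, it would be appended to the shared ledger so every controller's smart contracts act on identical, consistent data.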
The blockchain-based solution is able to address a number of security issues, providing fault tolerance enabled by blockchain consensus and data consistency based on a distributed ledger without the need for any third parties.\nMoreover, the authors in propose a Software-Defined Infrastructure (SDI) framework that leverages the blockchain technique along with abundant edge computing resources to manage secure data sharing and computing on sensitive data in healthcare. They focus on a blockchain-secured peer-to-peer network with SDI resources to make sure that every transaction on the SDI is regulation compliant, while still providing high data interoperability. The proposed scheme is capable of performing effective authorized interactions between patients and medical applications, delivering patient data securely to a variety of organizations and devices, as well as improving the overall efficiency of medical applications.
\begin{figure*}\n\t\centering\n\t\includegraphics [height=5.8cm, width=13cm]{Blockchain_NFV.pdf}\n\t\caption{The conceptual blockchain-based NFV architecture. }\n\t\vspace{-0.17in}\n\end{figure*}\nNetwork Functions Virtualization (NFV) is a network architecture concept, standardized by the European Telecommunications Standards Institute (ETSI), that employs standard hardware to host various independent network software components .
Basically, NFV includes three main architectural components, namely the Network Function Virtualization Infrastructure (NFVI), which supports the execution of VNFs; the Virtualized Network Functions (VNFs), which are the functions running on the NFVI; and Management and Network Orchestration (MANO), which covers the lifecycle management and orchestration of physical and software resources . NFV implements Network Functions (NFs) virtually by decoupling hardware appliances (such as firewalls and gateways) from the functions running on them, providing virtualized gateways, virtualized firewalls and even virtualized components of the network, and thus flexible network functions. In this way, network operators can significantly reduce equipment costs and operational expenditures as well as automate network operation tasks without concerns about hardware installation. Particularly, NFV envisions providing a diverse number of benefits for 5G networks, including enhancing the flexibility and scalability of NF deployments and connections thanks to the decoupling of software from hardware, optimizing resource provision of the VNFs for better cost and energy usage, and optimizing VNF operations with respect to failure rates and tolerable unplanned packet loss . \nNetwork function virtualization and service function chaining, however, also incur new security challenges , . Since end-to-end service function chains may deploy VNFs in an environment involving multiple cloud providers, such data transmissions can be compromised by curious cloud entities, leading to data leakage concerns. Furthermore, in a virtualized scenario, tenants often share the same cloud infrastructure. In this context, the possibility of attacks inside the cloud increases, which damages the transparency and accountability of service providers.
In NFV, virtualization servers can host virtual machines (VMs) running distinct operating systems, and perform operations such as VM migration or resource allocation using orchestration protocols. However, securing the communication between the orchestrator and the physical machines remains a challenge. In fact, these architectures are very sensitive to attacks that can come from different directions. For example, a VM can be created by an attacker to run in a server and be leveraged to carry out external denial-of-service attacks. Besides, internal attacks from curious VMs are another concern, which can adversely impact data integrity and confidentiality . \nIn such a context, the blockchain technology has emerged as an efficient tool to help with these challenges. With its authenticity, integrity and non-repudiation properties, blockchain can facilitate NFV networks in three main aspects , . First, blockchain can enable reliable, easy and flexible orchestration of VNF services for better orchestration and network management. Second, blockchain can secure the delivery of network functions and ensure system integrity against both insider attacks and external threats, i.e. malicious VM modifications and DoS attacks. Finally, blockchain can perform data auditing and monitoring of the system state during network communication. We here review the latest advances in the use of blockchain to solve the above challenges for NFV in 5G scenarios. \nThe authors of propose a blockchain-based system called BSec-NFVO for secure management of service function chain orchestration operations in the Open Platform for Network Function Virtualization (OPNFV). A Practical Byzantine Fault Tolerance (PBFT) consensus protocol is employed to prevent collusion attacks without compromising latency and throughput. The architecture of BSec-NFVO is depicted in Fig.
8, consisting of three main modules: the visualization module, which provides an interface between tenants and the NFV and Service Function Chaining (SFC) services; the orchestration module, which executes instructions transmitted by tenants via the visualization module; and lastly the blockchain module, which verifies and confirms transactions before execution by the orchestration module. By immutably logging, via blockchain, all instructions that manipulate service chains, the proposed scheme can ensure the authenticity, integrity and non-repudiation of instructions, and also provides data provenance and traceability in a multi-tenant and multi-domain NFV environment.\nThe work in builds a blockchain-based Virtual Machine Orchestration Authentication (VMOA) framework to secure NFV/cloud orchestration operations for better authentication of orchestration commands in the lifecycle of cloud services. Here, blockchain acts as a decentralized database ledger shared between the virtualization server, the orchestrator and VM agents. The virtualization server is able to authenticate orchestration commands via the blockchain VMOA ledger in an immutable and secure manner. By removing the requirement for third parties in the VMOA and using the security features of blockchain, the proposed solution potentially achieves superior advantages such as record integrity, fault tolerance, and network trustworthiness, compared to its centralized counterparts. \nAdditionally, to detect a faulty or compromised VNF configuration, the study in introduces a blockchain-based architecture to provide auditability for orchestration operations of network slices, securing VNF configuration updates. The prototype implements two smart contracts with specific transaction formats for safeguarding network slice management and VNF configuration operations.
Especially, a Hyperledger Fabric blockchain platform associated with certificate authorities is integrated to manage the digital certificates of every node, improving auditability and ensuring that only certified and authorized nodes participate in the blockchain-based NFV network.\nThe authors of introduce a scheme called BRAIN, a Blockchain-based Reverse Auction solution for Infrastructure supply in NFV scenarios, to deal with the challenges of discovery and selection of infrastructures to host VNFs acquired by end users. Smart contracts are designed to achieve a trustworthy agreement between stakeholders such as users and infrastructure providers regarding the resources contracted and configurations required. Meanwhile, to support efficiency and security in wireless virtualization, blockchain is proposed in to improve trust and transparency among participants and stakeholders and enable a more seamless and dynamic exchange of spectrum and computing resources in 5G wireless networks. \nAnother work presents a blockchain-based architecture for the secure configuration management of virtualized network functions (VNFs). Thanks to the immutability and traceability features provided by blockchain, and the integrity and consistency of transactions ensured by a consensus protocol, the proposed solution can secure VNF configuration state migration, building a trust mechanism between different infrastructure providers (tenants) and VNF vendors. Asymmetric keys are employed to develop a transaction model that provides anonymous authentication of tenants and VNFs and confidentiality of configuration data through encryption. Such transactions are then appended to the blockchain data structure, which also gives traceability and accountability of the VNF configuration updates.
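The tamper-evident transaction-logging pattern shared by these NFV schemes, an immutable log of orchestration commands with per-tenant authenticity, can be illustrated with a small, hypothetical Python sketch. The tenant names, keys and commands are invented, and a real system would use digital signatures and a consensus protocol such as PBFT rather than local HMACs:

```python
import hashlib
import hmac

# Hypothetical setup: each tenant authenticates its orchestration commands,
# and every log entry is chained to the previous one, so both forged origins
# and retroactive edits are detectable on audit.
tenant_keys = {"tenant-a": b"ka-secret", "tenant-b": b"kb-secret"}
log = []   # entries: (tenant, command, mac, link-to-previous-entry)

def _entry_digest(entry):
    return hashlib.sha256(repr(entry).encode()).hexdigest()

def submit(tenant, command):
    mac = hmac.new(tenant_keys[tenant], command.encode(), "sha256").hexdigest()
    prev = _entry_digest(log[-1]) if log else "0" * 64
    log.append((tenant, command, mac, prev))

def audit():
    prev = "0" * 64
    for tenant, command, mac, link in log:
        expected = hmac.new(tenant_keys[tenant], command.encode(),
                            "sha256").hexdigest()
        if link != prev or not hmac.compare_digest(mac, expected):
            return False
        prev = _entry_digest((tenant, command, mac, link))
    return True

submit("tenant-a", "sfc add firewall->nat->ids")
submit("tenant-b", "vnf scale nat replicas=3")
assert audit()
log[0] = ("tenant-b", *log[0][1:])   # repudiation attempt: forge the origin
assert not audit()                   # origin mismatch is caught on audit
```

The hash links give provenance and traceability of the chain of commands, while the per-tenant authenticators supply the non-repudiation property emphasized by BSec-NFVO and the VNF configuration scheme.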
\nMeanwhile, to realize the orchestration/management capabilities and business support systems in the context of architectural NFV, the research in analyses blockchain-based Decentralized Applications (DApps) in support of multi-administrative domain networking. Blockchain can be an effective approach to establish an authentication layer for NFV Management and Orchestration (MANO) services across administrative domains. For example, blockchain can verify user access and grant access permission to resources between providers NFV-MANO components. In such a context, a smart contract can be leveraged to store access permission and assets information for MANO components as well as perform mappings of the structure of quotas, access grants and capacity of NFV users for efficient resource usage.", "id": "8139960e-dc42-4e3f-b49e-ba63285e936c", "level": "subsection", "origin_cites_number": 15, "parent_id": "dbe49dc6-60e3-49d0-9815-b5f8a32b8d02", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Blockchain for enabling 5G technologies" ], [ "subsection", "\tBlockchain for Network Function Virtualization (NFV)" ] ], "subsections": [], "title": "\tBlockchain for Network Function Virtualization (NFV)" }, { "cite_extract_rate": 0, "cites": [], "content": "5G offers a completely new vision of mobile networks to unify the management of IoT networks. In order to support various types of IoT applications, 5G relies on the concept of Network Slicing, which is the separation of multiple virtual networks operating on the same physical hardware . It enables telecom operators to portion their networks for specific services and applications, such as smart home, smart factory or vehicular network. Network slicing is well supported by Network Softwarization as the key technology enabler which consists of Virtual Network Functions (VNFs) running in the cloud inside virtual machines or containers. 
Each network slice contains a set of VNFs associated with physical network functions to enable network services based on the computing and storage capabilities of cloud infrastructure. Besides, network slicing also brings many unprecedented security challenges, which include inter-slice security threats and issues of resource harmonization between inter-domain slice segments. For example, because network slice instances share open cloud-based architectures, attackers may abuse the capacity elasticity of one slice to consume the resources of another target slice, which can take the target slice out of service. Further, since multiple slices often share common control plane functions, attackers can exploit this weakness to compromise the data of the target slice by maliciously accessing the common functions from another slice, leading to serious data leakages and damage to system integrity.
In such contexts, blockchains can bring great opportunities for the security of 5G network slicing management. Blockchain can be exploited to build reliable end-to-end network slices and allow network slice providers to manage their resources. The work of uses blockchain for the dynamic control of source reliability in vehicle-to-vehicle (V2V) and vehicle-to-everything (V2X) communications in vehicular network slices. In the V2X network slice operated with content-centric networking (CCN), vehicles can securely share messages (e.g., the specific messages for the management of the distributed ledger and the creation of new blockchains, including the list of trustable entities) with other nearby vehicles or roadside units via distributed blockchain ledgers. The blockchain acts as a middle security layer between vehicles and network controllers (i.e., roadside equipment), which eliminates the need to install additional hardware on the operator side.
This not only solves trust issues, since no external authorities are required, but also significantly improves vehicular network performance with low latency and enhanced throughput. Further, the blockchain-based approach allows for the dynamic control of resource reliability and improves the integrity and validity of the information exchanged among vehicles in untrusted vehicular environments.
In order to guarantee secure and private transactions between the network slice provider and the resource provider for 5G services, blockchain is employed to build a brokering mechanism in network slicing. When a slice provider receives a request or query to establish an end-to-end slice, it submits this request to the blockchain for tracking and sharing. To support the deployment of the sub-slice components, smart contracts, called slice smart contracts (SSCs), are designed, where each SSC specifies the essential resources needed by the sub-slice. In this way, the resource providers can perform resource trading on contracts with sub-slice components. All related information about the sub-slice deployment is immutably recorded and stored in a permissioned blockchain controlled by the slice provider. The proposed blockchain-based broker not only adds security capabilities, but also supports privacy and accountability in network slicing.
The authors in consider a blockchain slice leasing ledger concept, using the 5G network slice broker in a blockchain to reduce service creation time and enable autonomous and dynamic manufacturing processes. Blockchain plays a significant role in establishing mutual trust relationships between the operators and the management of virtual 5G network slices, enabling new end-to-end business models including the provision of connectivity or managed services for factories as well as IT infrastructure.
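The slice smart contract (SSC) idea above, where each contract records the resources a sub-slice needs and providers commit resources against it, can be sketched as follows. The resource names, amounts and fulfilment rule are our own illustrative assumptions, not those of the cited brokering scheme.

```python
class SliceContract:
    """Toy slice smart contract (SSC): records the resources a
    sub-slice needs and the provider commitments made against them."""

    def __init__(self, slice_id, required):
        self.slice_id = slice_id
        self.required = dict(required)   # e.g. {"cpu": 8, "bandwidth_mbps": 200}
        self.commitments = []            # (provider, resource, amount) tuples

    def commit(self, provider, resource, amount):
        """A resource provider trades against the contract."""
        if resource not in self.required:
            raise ValueError("resource not requested by this slice")
        self.commitments.append((provider, resource, amount))

    def fulfilled(self):
        """True once committed resources cover every requirement."""
        supplied = {}
        for _, res, amt in self.commitments:
            supplied[res] = supplied.get(res, 0) + amt
        return all(supplied.get(r, 0) >= need for r, need in self.required.items())

ssc = SliceContract("slice-01", {"cpu": 8, "bandwidth_mbps": 200})
ssc.commit("provider-A", "cpu", 8)
ssc.commit("provider-B", "bandwidth_mbps", 200)
assert ssc.fulfilled()
```

In a permissioned-blockchain deployment, the commitments would be transactions validated by the slice provider's peers, so the deployment record stays immutable and auditable as described above.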
In the same direction, the works also present how the blockchain technology can support the resource configuration value creation micro-processes and the 5G network slice broker use case in industrial automation and smart grid settings. Manufacturing equipment independently leases the network slice needed for its operations on demand, approves the service-level agreement (SLA) and pays the service fee based on actual usage. In this context, blockchain performs the network slice trading, while a smart contract orders slice orchestration according to the agreed SLA from a 5G network slice broker, as shown in Fig. 9.
\begin{figure}
	\centering
	\includegraphics[height=5.7cm,width=8cm]{Blockchain_Slice.pdf}
	\caption{Blockchain for 5G Network Slice Brokering. }
	\label{fig9}
	\vspace{-0.15in}
\end{figure}
In an effort to virtualize the slicing network, the authors in propose a blockchain-based wireless virtualization architecture where wireless resources such as RF channels are sliced into multiple (time/frequency) slices for mobile virtual network operators (MVNOs). Each transaction in the blockchain for wireless virtualization contains information on the bandwidth allocation, maximum channel power, and data rate used by the MVNOs when serving their users, and each transaction is recorded immutably in a block for sharing. The blockchain-based distributed scheme creates new MVNOs securely without revealing their private information to the public. Similarly, the work in also proposes a blockchain-based wireless network virtualization approach to optimally allocate wireless resources, where the blockchain technology helps MVNOs to sublease the RF slices from trustworthy Wireless Infrastructure Providers (WIPs).
Blockchain is mainly used to build a reputation-based scheme for RF allocation of network slices, with the objective of minimizing the extra delay caused by double-spending attempts in NFVs.
The exponential growth of mobile 5G data traffic has given impetus to the demand for high-rate proximity services. Device-to-device (D2D) communication has been envisioned as an allied technology for such 5G scenarios. Conceptually, D2D communication refers to a type of technology that enables mobile devices (such as smartphones, tablets, etc.) to communicate directly with each other without the involvement of an access point or a core network of a cellular infrastructure. D2D takes advantage of the proximity of device communication for efficient utilization of available resources, making it possible to improve the overall system throughput, mitigate communication delays and reduce energy consumption and traffic load. D2D communication thus can facilitate new peer-to-peer and location-based applications and services, making it well suited for the next mobile 5G communication networks and services.
However, direct communication between mobile devices also introduces new non-trivial challenges for D2D-based 5G networks in terms of security, network management and performance loss. Indeed, data sharing between devices may face risks of data leakage due to malicious threats in untrusted D2D environments.
How to exchange mobile data with low latency while ensuring security is a critical challenge. Furthermore, D2D devices may not be trusted and can obtain illegal access to resources on servers (i.e., edge/cloud servers) if there is no authentication mechanism in the network. Besides, the existing D2D architectures rely on external authorities to grant data permissions and authenticate requests during D2D communication, which can incur unnecessary communication latency and degrade the overall network performance.
Blockchain can be a good solution to help overcome such challenges and facilitate D2D communication in 5G networks. For example, the work in employs blockchain to build a secure content caching and sharing scheme among mobile devices for D2D networks. To mitigate the computation burden on devices, edge servers with high computing power are used to run mining puzzles for the blockchain. In particular, blockchain demonstrates its efficiency in providing an incentive solution, which encourages caching-enabled users to store and share contents with other mobile devices via D2D for better content sharing. The reward policy empowered by blockchain stimulates the mining process in D2D devices, improving the robustness and security of the D2D network.
In order to support the authenticity of channel state information (CSI) of mobile users in D2D underlying cellular networks, blockchain is applied in to develop a secure mechanism using a consensus protocol. The blockchain-consensus-based D2D network is composed of mobile users and two blockchains, an integrity chain (I-chain) and a fraud chain (F-chain). The mobile users can verify and validate the received broadcast CSI messages through the consensus mechanism before signing them and adding them immutably to the decentralized ledgers for sharing and storage.
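The append-only, hash-linked ledger that underlies these D2D schemes (e.g., the I-chain and F-chain above) can be illustrated minimally. The block layout and payload fields are invented for the sketch; a real chain would add signatures and a consensus step before each append.

```python
import hashlib
import json

def block_hash(block):
    # Canonical serialization so every node computes the same digest.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, payload):
    # Each new block commits to its predecessor's hash.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "payload": payload})

def verify_chain(chain):
    # Recompute every link; any modification breaks a downstream link.
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, {"sender": "device-1", "msg": "csi-digest-abc"})
append_block(chain, {"sender": "device-2", "msg": "csi-digest-def"})
assert verify_chain(chain)

chain[0]["payload"]["msg"] = "tampered"  # tampering is detectable by any node
assert not verify_chain(chain)
```

This tamper-evidence is what lets untrusted D2D peers rely on the recorded CSI messages without an external authority.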
The authors also suggest that the blockchain-based approach has the potential to dramatically improve spectral efficiency while providing efficient CSI authenticity services for D2D networks.
Blockchain is also useful in another D2D scenario for supporting computation offloading. In this work, a decentralized computation offloading coordination platform is developed and empowered by the blockchain, which is able to manage computation offloading requests and perform user matching. Each mobile user can participate in the computation offloading process and submit offloading requests to the blockchain platform. The other users in the D2D network and edge servers perform user matching to decide whether to participate in the offloading process to execute the requested computation tasks. The blockchain platform will incentivize the COP that agrees to compute the task, and all request information is recorded and appended to the blockchain for secure offloading management.
The work in presents a delegated authorization architecture using blockchain-based smart contracts that enables users to use D2D communication to access IoT resources while preserving the authorization information and network trust. Blockchains can immutably record hashes of the information exchanged during user authorization and payment events, while smart contracts can support the concatenation of authorization requests. Here, smart contracts are placed on the blockchain and run on all ledger nodes so that resource access from D2D users can be handled automatically and quickly. The authentication mechanism can also protect network resources against DoS attacks that involve a very high resource request rate.
The authors in the works integrate blockchain with D2D communication to support the computation and offloading of mobile data tasks, as shown in Fig. 10.
With the trust and traceability features of the blockchain, a decentralized incentive approach is introduced to foster the collaboration of content creators and D2D users without the intervention of any third party. Mobile data can be transferred securely over the D2D network via blockchain ledgers, and computation offloading and content caching can be performed by edge servers for efficient execution.
\begin{figure}
	\centering
	\includegraphics[height=4.9cm,width=8cm]{Blockchain_D2D.pdf}
	\caption{Blockchain for supporting D2D computation.}
	\label{fig10}
	\vspace{-0.15in}
\end{figure}
In , a consortium blockchain is considered for further security and efficiency in the feature extraction application for encrypted images in D2D systems. Smart contracts are stored in the blockchain, which solves the privacy leakage problem of image features (e.g., tampering or forging by semi-trusted clouds). In a different direction, the study exploits blockchain and smart contracts for the design and implementation of a trading application between the seller and the buyer via D2D communication. The trading can be performed automatically on the blockchain by triggering the contract, which ensures transparent and reliable data exchange among different users. Moreover, to build a distributed secure monitoring system in D2D systems, blockchain is also considered in to provide a high level of security with reduced computational and communication costs. In particular, secure access control using blockchain is also integrated to support identity authentication in a lightweight and scalable manner.
In summary, blockchain brings numerous opportunities to support 5G technologies and provides emerging services for 5G systems. Reviewing the state-of-the-art works, we find that blockchain can provide security and networking solutions to protect 5G services and improve the performance of 5G-based systems.
In the next section, we will present an in-depth analysis and survey on the benefits of blockchain in a number of 5G services.", "id": "daa268a8-2e05-44ab-adbd-d893c30429f0", "level": "subsection", "origin_cites_number": 13, "parent_id": "dbe49dc6-60e3-49d0-9815-b5f8a32b8d02", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Blockchain for enabling 5G technologies" ], [ "subsection", "\tBlockchain for D2D communication" ] ], "subsections": [], "title": "\tBlockchain for D2D communication" }, { "cite_extract_rate": 0, "cites": [], "content": "Blockchains offer tremendous potential for improving existing 5G services and applications by supporting 5G technologies as discussed in the previous section. This vision can be achieved by taking advantage of interesting features that blockchains offer such as decentralization, privacy, immutability, and traceability. Blockchain can be regarded as a natural choice to facilitate the next-generation mobile communication networks for better 5G services. 
In this section, we provide an extensive discussion on the use of blockchain for important 5G services, including spectrum management, data sharing, network virtualization, resource management, interference management, federated learning, and privacy and security services.
With the increasing demand for bandwidth-hungry 5G applications, such as mobile video streaming and big data processing, a foreseen network capacity shortage has become a key threat to mobile network operators (MNOs). Despite the technological achievements of 5G networks, physical constraints such as spectrum limitations are still major limiting factors, which prevent operators from scaling their services properly. Spectrum scarcity in wireless networks hinders the fast improvement of throughput and service quality. Operators are forced to invest a large amount of money in their infrastructure to optimize capacity through network densification and higher frequency reuse factors.
Currently, MNOs have to face the challenges arising from the unavailability of usable frequency resources caused by spectrum fragmentation and the current fixed allocation policy, which prevents them from meeting the requirements of the expanding market of wireless broadband and multimedia users. To satisfy the desire of mobile users to be connected at all times, anywhere, and for any application, more spectrum bandwidth and/or more efficient usage of that bandwidth is urgently needed. Some solutions have been proposed, including fixed spectrum allocation strategies, but these approaches lead to wasteful spectrum usage because the license holders (or primary users) do not continuously utilize their full spectrum allocation. One solution for addressing the spectrum scarcity problem in 5G radio networks is to introduce secondary users that opportunistically monitor the spectrum and then transmit their data whenever the spectrum is idle. However, spectrum sharing in such scenarios also raises security concerns and provides a central point of attack for malicious users. Another approach is to use certification authorities that provide certificates for cognitive radios inside each cell. This approach not only requires infrastructure to be deployed for each cell but also requires a protocol for defence against central-point attacks. Further, it incurs greater computational complexity and longer packet lengths, which increase the overhead of spectrum sharing systems. Importantly, the use of such centralized architectures also adds single-point-of-failure bottlenecks when the authority is attacked or out of service, which leads to the disruption of the entire spectrum sharing network.
In comparison to such conventional spectrum management schemes, blockchain can be a much better solution to overcome the security and performance issues of spectrum management in 5G.
Since blockchain is a form of decentralized database where no single party has control, blockchain can be applied to build spectrum sharing and management models with improved security and better performance, i.e., low latency and enhanced throughput. In particular, blockchain is envisioned to support spectrum management by providing the following benefits.
\begin{itemize}
	\item \textit{Decentralization:} The blockchain adoption eliminates the need for trusted external authorities such as spectrum licensees, band managers, and database managers. The inherent benefits are twofold: reducing unnecessary network overhead due to communicating with the authorities during spectrum sharing, and improving system integrity and privacy since there is no concern about data leakage caused by curious third-party intermediaries.
	\item \textit{Transparency:} Since all transactions between spectrum users and service providers are reflected and recorded on distributed blockchain ledgers, the blockchain-based solution is able to provide better localized visibility into spectrum usage. Besides, blockchain can employ smart contracts, a self-executing platform, to audit spectrum sharing activities according to pre-defined sharing policies.
	\item \textit{Immutability:} Spectrum services, i.e., spectrum sharing, monitoring or user payment, are recorded on the append-only blockchain in an immutable manner. By using consensus mechanisms empowered by blockchain miners, blockchain ledgers are well resistant to modifications caused by attacks or malicious users. This also ensures the reliability of spectrum services and enhances the accuracy of the network implementation.
	\item \textit{Availability:} Any network participant, such as a mobile user, can access spectrum resources managed by service providers to perform spectrum sharing and payment.
Moreover, as blockchain broadcasts all service information to all entities, the spectrum sharing databases are also accessible to everyone in the network. Furthermore, there is no central authority to verify or record the data and transactions, which potentially enables a more transparent system without a loss of security properties.
	\item \textit{Permissionless:} Because there is no single trusted entity acting as the central authority to control the network, new users or applications can be added to the overall system without seeking the approval of other users, providing a flexible sharing environment.
	\item \textit{Security:} Blockchains enable efficient communication between users and service providers with strong security capabilities against threats, DoS risks and insider attacks.
\end{itemize}
In spectrum management, verification and access management are also of significant importance for enabling secure spectrum sharing. In this work, blockchain secures a distributed medium-access protocol for cognitive radios (CRs) to lease and access available wireless channels. Blockchain is responsible for verifying and authenticating each spectrum-leasing transaction between primary and secondary users. Here, primary users are defined as spectrum license holders and can lease their allocated spectrum to increase spectrum efficiency as well as gain profits via a spectrum coin protocol. The blockchain performs currency exchange, mining and updating of transactions, and leasing of available spectrum through an auction. The authors also demonstrated that the blockchain adoption is beneficial to spectrum management in terms of better scalability, power efficiency in spectrum usage, improved accessibility with a high degree of security and better system protection against DoS attacks and threats.
The work presented in also describes a verification solution by taking advantage of blockchain for securing spectrum sharing in cognitive radio networks.
The authors focus on building an auction protocol for spectrum payment services among primary users. Blockchain is regarded as a middle layer to perform spectrum trading, verify sharing transactions and securely lease the spectrum provided by a license holder. Besides, to address privacy risks in spectrum sharing, a blockchain-based trustworthy framework called TrustSAS is presented in for a dynamic spectrum access system (SAS) to enable seamless spectrum sharing between secondary users (SUs) and incumbent users. The TrustSAS scheme relies on permissioned blockchains to monitor and control system and cluster activities as well as handle spectrum sharing events by using a Byzantine fault tolerant (BFT) consensus mechanism. All spectrum sharing transactions are validated by BFT and signed by blockchain miners for immutable recording in blocks. The experimental results show superior advantages in terms of efficient auditability, improved privacy and lower end-to-end latency for spectrum access.
In addition, a spectrum sensing platform empowered by blockchain has been proposed, referred to as Spectrum Sensing as a Service (Spass), which provides spectrum sensing trading and payment services. A smart contract acts as the core component responsible for scheduling spectrum sensing among secondary users and helpers, which are the nodes offering sensing services in the secondary user network. Based on the operation rules defined in the contract, smart contracts also perform access verification by using a malicious helper detection mechanism to identify whether a helper is honest or malicious. The proposed solution not only maximizes the profits of MNOs to encourage spectrum provision for user applications but also guarantees security requirements in an untrusted and decentralized spectrum sharing setting.
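The auction-style leasing used by several of these spectrum schemes can be reduced to a toy first-price sealed-bid auction. The bidder identifiers and prices below are made up; a real protocol would collect the bids as signed blockchain transactions and record the outcome on-ledger for auditability.

```python
def run_spectrum_auction(bids):
    """First-price sealed-bid auction for one idle spectrum band.
    bids: {bidder_id: price}. Returns (winner, price paid)."""
    winner = max(bids, key=bids.get)  # highest bidder wins
    return winner, bids[winner]

# Hypothetical secondary users bidding for a primary user's idle band.
bids = {"SU-1": 12.0, "SU-2": 15.5, "SU-3": 9.0}
winner, price = run_spectrum_auction(bids)
assert (winner, price) == ("SU-2", 15.5)
```

The on-chain variant adds what this sketch omits: miners verify each leasing transaction before it is appended, so the winner and the payment cannot later be disputed or altered.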
One of the biggest problems for unlicensed spectrum utilization is the unfair competition between MNOs for unlicensed spectrum resources, which are free to use and quite often available. To cope with this challenge, the authors of introduce a new blockchain-based unlicensed spectrum sharing scheme among MNOs. For this purpose, the authors use smart contracts in conjunction with a virtual cryptocurrency to develop a coalitional spectrum sharing game for optimizing spectrum allocation. The account balance of each MNO can be maintained fairly through transparent sharing enabled by smart contracts, aiming to mitigate conflicts between MNOs during sharing. To further improve spectrum sharing for sustainability in unlicensed frequency bands, the work in proposes to build a brokering platform to facilitate collaboration between the network stakeholders. In this context, blockchain makes it feasible to establish secure sharing that implements automatic negotiation processes for spectral resources between access point (AP) operators in a reliable manner.
Meanwhile, in the spectrum sharing environment between aerial and terrestrial communication systems, unmanned aerial vehicles (UAVs) have been used to facilitate communication in the sky. Currently, most UAVs on the market operate on the unlicensed spectrum (i.e., the industrial, scientific and medical bands) in an untrusted environment with significant security and privacy threats, because of the untrusted broadcast nature of wireless transmission in UAV networks. To overcome such challenges, a spectrum blockchain architecture is considered in to improve spectrum sharing. To avoid wasteful spectrum usage in UAV networks, a pricing-based incentive mechanism is proposed to encourage MNOs to lease their idle spectrum to a secondary UAV network and obtain some revenue from the UAV operators.
Then, a secure spectrum sharing framework is introduced where blockchain uses immutable distributed ledgers to implement spectrum exchange while protecting the sharing system from threats. The authors focus on developing a Stackelberg game for an optimal spectrum sharing strategy, which can maximize the profits of MNOs while providing security services for UAV-based networks.
One of the prominent characteristics of 5G is its strong data sharing capability to cope with increasing content demands and data usage, especially in 5G IoT scenarios. According to the latest release of Cisco, global mobile data traffic on the Internet will increase sevenfold between 2017 and 2022, reaching 77.5 exabytes per month by 2022. The rapid increase of content delivery over mobile 5G networks has revealed the need for new, innovative data protection solutions to ensure secure and efficient data sharing over untrusted environments. In fact, sharing data in mobile networks is highly vulnerable to serious data leakage risks and security threats due to data attacks. Mobile users tend to use information without caring about where it is located and how reliably it is delivered, and the ability to control information at large scale over the Internet is very weak. Blockchain may be an answer to such data sharing challenges.
Indeed, blockchain can provide a wide range of features to improve the efficiency of data sharing in the 5G era, such as traceability, security, privacy, transparency, immutability and tamper-resistance. To control user access to data resources, blockchain miners can check whether the requester meets the corresponding access control policy. Due to the decentralized architecture, which enables data processing for user requests over distributed nodes, the overall system latency for data delivery is greatly reduced and network congestion can be eliminated, improving the performance of data sharing with blockchain.
The problem of secure storage for data sharing is considered and discussed in . The authors leverage blockchain as an underlying mechanism to build a decentralized storage architecture called Meta-key, wherein data decryption keys are stored in a blockchain as part of the metadata and protected by the user's private key. Proxy re-encryption is integrated with blockchain to realize ciphertext transformation and defend against security issues such as collusion attacks during key sharing in untrusted environments. In this context, the authors in study blockchain to develop a data storage and sharing scheme for decentralized storage systems on the cloud. Shared data can be stored in cloud storage, while metadata such as hash values or user address information can be kept securely in the blockchain for sharing. In fact, cloud computing technology supports data sharing services well, such as off-chain storage to improve the throughput of blockchain-based sharing, or data distribution over a cloud federation.
In IoT networks, data transmission has faced various challenges in terms of low security, the high management cost of data centres and supervision complexity, due to the reliance on external infrastructure. Blockchain can provide much more flexible and efficient data delivery while still meeting stringent security requirements.
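The pattern just described, keeping bulky shared data off-chain while anchoring a hash of it on-chain, can be sketched as follows. The dictionary-based "cloud store" and "ledger" are stand-ins for a real object store and blockchain; the point is only that any tampering with the off-chain copy is detected against the on-chain digest.

```python
import hashlib

cloud_store = {}   # off-chain storage (e.g., a cloud object store)
ledger = []        # on-chain metadata: owner, object id, SHA-256 digest

def share(owner, object_id, data: bytes):
    """Store the data off-chain and anchor its digest on-chain."""
    cloud_store[object_id] = data
    ledger.append({"owner": owner, "oid": object_id,
                   "digest": hashlib.sha256(data).hexdigest()})

def fetch(object_id) -> bytes:
    """Retrieve the off-chain copy and verify it against the ledger."""
    data = cloud_store[object_id]
    meta = next(m for m in ledger if m["oid"] == object_id)
    if hashlib.sha256(data).hexdigest() != meta["digest"]:
        raise ValueError("off-chain data does not match on-chain digest")
    return data

share("tenant-A", "report-001", b"sensor readings")
assert fetch("report-001") == b"sensor readings"

cloud_store["report-001"] = b"tampered"   # tampering is caught on retrieval
try:
    fetch("report-001")
    raise AssertionError("tampering went undetected")
except ValueError:
    pass
```

A real system would additionally encrypt the off-chain payload (as in the Meta-key scheme above) so that the ledger guarantees integrity while encryption guarantees confidentiality.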
A secure sharing scheme for industrial IoT is proposed in , which highlights the impact of blockchain on the security and reliability of IoT data exchange under untrustworthy system settings. In comparison to traditional databases such as SQL, blockchain can provide better sharing services with low-latency data retrieval, higher degrees of security and reliability, and stronger resistance to some malicious attacks (DoS, DDoS) on data sharing. Further, the privacy of data is well maintained by distributed blockchain ledgers, while data owners have full control over their data shared in the network, improving the data ownership capability of sharing models.
The work in also introduces a sharing concept empowered by blockchain and fog computing. The proposed solution constitutes a first step towards realizing blockchain adoption as a Function-as-a-Service system for data sharing. Fog nodes can collect IoT data arising from private IoT applications and securely share it with each other via a blockchain platform, which can verify all data requests and monitor data sharing behaviours for threat detection.
Smart contracts running on blockchain have also demonstrated efficiency in data sharing services. Smart contracts can take the role of building a trusted execution environment so that a set of information exchange frameworks can be established on blockchain. For example, the study in leverages smart contracts to build a trustless data sharing scheme in vehicular networks, as depicted in Fig. 11. The roadside units (RSUs) can set the constraints for data sharing by using smart contracts, which define the sharing time, region scope, and objects to make sure that data coins are distributed fairly to all vehicles that contribute data. In addition, the authors of introduce a smart contract-based architecture for consent-driven and double-blind data sharing on the Hyperledger Fabric blockchain platform.
In the system, confidential customer data can be authorized and validated by smart contracts, and the service providers can execute the data tasks, add attributes and metadata, and submit them to the blockchain for validation and recording in a transparent manner.
\begin{figure}
	\centering
	\includegraphics[height=7.1cm,width=8cm]{Blockchain_IoTVehicular.pdf}
	\caption{A data sharing model for vehicular IoT networks based on blockchain . }
	\label{fig11}
	\vspace{-0.15in}
\end{figure}
Wireless network virtualization is considered an emerging paradigm in 5G for building different virtual wireless networks (VWNs) through mobile virtual network operators (MVNOs) to support the rapidly increasing data demands caused by emerging 5G IoT applications . Network virtualization is able to enhance wireless resource (RF slice) utilization, provide better coverage, and increase network capacity and energy efficiency . The blockchain technology can provide the required characteristics of non-repudiation and immutability to overcome the shortcomings of previous configuration models. More precisely, blockchain is capable of creating secure virtual wireless networks (VWNs) so that wireless resource owners can sublease their wireless resources (e.g., slices of RF spectrum, infrastructure) to mobile virtual network operators (MVNOs) . All participants of each virtual network slice are managed by a slice blockchain, which provides auditability of slice creation and monitors orchestration operations and clients' data access to the data centre.
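A slice ledger of this kind can be pictured with the toy sketch below (hypothetical operation and participant names; a real slice blockchain would replicate these records across nodes rather than hold them in a single Python object). The essential property is that every operation on the slice is appended once and can later be audited per participant:

```python
class SliceLedger:
    """Toy append-only ledger that audits operations on a network slice."""

    def __init__(self):
        self._events = []  # (operation, actor) tuples, never mutated

    def record(self, operation, actor):
        """Append an orchestration or data-access event to the ledger."""
        self._events.append((operation, actor))

    def audit(self, actor):
        """Return every operation a given participant has performed."""
        return [op for op, who in self._events if who == actor]


ledger = SliceLedger()
ledger.record("create-slice", "mvno-A")
ledger.record("resize-slice", "mvno-A")
ledger.record("read-data", "client-7")
print(ledger.audit("mvno-A"))  # the full history of mvno-A's operations
```

Because the event list is only ever appended to, the audit trail over slice creation and client data access mirrors, in miniature, the auditability property described above.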
In such a decentralized virtual network, smart contracts can be very useful for providing automation and transparency in a distributed way, instead of trusting a particular node or authority to process transactions. Blockchain and smart contracts can thus be an ideal solution for creating secure end-to-end network slices that support virtual services with diverse requirements and resiliency .
Meanwhile, the work in proposes blockchain to secure virtual machine orchestration operations for cloud computing and network functions virtualization systems. The main objective is to protect and secure virtual machines and make virtual machine managers resistant to compromise by threats. In fact, the complexity of virtual networks with multiple physical and virtual machines raises security concerns to be solved. For instance, a virtual machine can be created by an external attacker to run on a server and be used to perform external DDoS attacks, while internal attackers can act as legitimate entities to perform unauthorized data access, which can impair the data integrity and confidentiality of the network. Therefore, the proposed work considers the authentication issues in virtualization using a blockchain system shared between the virtualization server, the orchestrator and VMM agents. The orchestration requests (create, destroy, resize, copy, migrate) to a virtualization server are recorded as transactions which are then authenticated by smart contracts to grant permission for the orchestration command, avoiding malicious access to the data centre. 
Moreover, in order to prevent double-spending of the same RF resources (frequency slices), the work in leverages a distributed blockchain-based scheme to sublease frequency slices to MVNOs through wireless network virtualization.
The proposed wireless virtualization architecture contains three main entities: wireless service providers who participate in sharing or subleasing their wireless resources to MVNOs; data sharing services for wireless resources; and block managers, which are trusted devices working to maintain the blockchain. Each transaction in the blockchain for wireless virtualization includes the information of bandwidth allocation, allocated channel power, and data rates utilized by the MVNOs while serving their users through virtual networks. In particular, the work pays special attention to addressing the double-spending issue, i.e., the allocation of the same wireless resources to multiple MVNOs in the hope that not all MVNOs would use their leased spectrum at the same time, so as to obtain maximum revenue. Compared to traditional approaches, which mainly rely on centralized trusted authorities to perform resource sharing, blockchain is much more efficient in verifying each transaction to ensure that the wireless resources are scheduled to a given MVNO, which not only solves double-spending problems but also provides fairness and transparency for network virtualization. 
In an effort to secure the management, configuration and migration of virtual network services, the work in presents a blockchain-based architecture for network function virtualization (NFV) and service function chaining (SFC). The blockchain module designed mainly performs three functions: verifying the format of the transaction, validating the accuracy of the signature of the transaction, and checking for duplicate transactions. The service requests sent from NFV clients are verified by the blockchain via VNF key pairs and blockchain module key pairs for authentication through a consensus mechanism. In the same direction, the authors of also analyse how blockchain can support secure configuration and migration of NFVs.
The consensus of Practical Byzantine Fault Tolerance (PBFT) is implemented on the Open Platform for Network Function Virtualization (OPNFV) to monitor and schedule the orchestration operations in virtualized networks. 
Furthermore, the security of SDN-based network virtualization is analysed in , where blockchain is used to enable the privacy of spectrum resources. Here, blockchain is installed in the SDN controller of MVNOs to perform subleasing (or releasing) of wireless resources to virtual wireless network operators (VWNOs). Blockchain is able to offer auditability such that each spectrum assignment done by SDN controllers of PWROs is validated by other participants, with each allocation recorded as a timestamped transaction in the blockchain.
In 5G networks, mobile resources (i.e. computation, memory, bandwidth, channel and storage) are among the most popular services. The growing variety of 5G services leads to unprecedented levels of complexity in mobile resource management . Edge/cloud computing in 5G needs to allocate its computation capacity to ensure efficient data execution while maintaining resources to serve the increasing demands of mobile users in the long term. In virtualized networks, the VNFs of a single slice may have heterogeneous resource requirements, i.e., CPU, memory, bandwidth and storage, depending on their functions and user requirements. The resource demands of slices of the same function type may also differ, since they serve different numbers of mobile users.
For instance, a provider might run multiple Internet of Things (IoT) slices, each dedicated to a specific application. In such contexts, with heterogeneous resource capacities and heterogeneous resource requirements, implementing optimal resource allocation in the mobile 5G network is a critical challenge. Importantly, the current resource management architectures mainly rely on a central authority to perform resource allocation and verification of user resource access, but such models suffer from single-point-of-failure risks and security concerns regarding the third party. Moreover, the traceability of current resource sharing schemes is very weak, which leaves shared resources vulnerable to being compromised by attacks or used illegally by malicious users. All of these issues need to be solved effectively before deploying 5G services in practical scenarios. 
Blockchain can be a highly efficient approach to solving the above issues and improving resource management. The use of blockchain enables distributed resource allocation schemes as a strong alternative that is preferable for both the service providers (edge/cloud, slice providers) and the mobile users/equipment. Blockchain would simplify the resource management concept while retaining the important features of the core network and ensuring strong security. For example, blockchain has been applied to VNFs in to implement reliable resource allocation corresponding to user requests, considering aspects such as user demand and cost. More interestingly, smart contracts are also integrated to build an auction scheme that allocates resources optimally to the network of users in a transparent manner (owing to the transparency and immutability of smart contracts) in dynamic mobile environments. 
Spurred by the power of blockchain, a resource management model is introduced in which proposes a new concept of blockchain radio access network (B-RAN).
The main goal is to achieve a spectrum resource balance in the network of user equipment (UE), access points (AP), spectrum bands and blockchain. The resource access services between UE and AP can be implemented by a smart contract-enabled protocol which defines access rules in conjunction with certain resource constraints such as service time, service demand, and service fee. The service requestor, i.e. the mobile user, can undertake resource trading with the AP by triggering the smart contract so that spectrum access is authenticated and resources are released via blockchain. 
In 5G networks, edge computing plays a significant role in improving the QoS of mobile services thanks to its low latency and fast computing capabilities. Resource allocation for edge computing is of significant importance in edge-based mobile networks, such as IoT, for better QoS and robustness of the system. A study in employs blockchain to develop a decentralized resource allocation scheme which overcomes the limitations of previous centralized schemes in terms of latency and service provision speed. To provide adaptive computation services for IoT data, resource allocation should be dynamically adjusted without any centralized controller to maintain high QoS. Blockchain is well suited for such scenarios by offering a distributed ledger to update resource information in an automatic and trustworthy manner . In the case of resource scarcity in the network, a cooperative edge computing model can be necessary to support low-capability edge devices . In this regard, blockchain would be useful to provide reliable resource sharing between edge nodes. Resource requests can be verified strictly by smart contracts with access policies without passing through a centralized authority, which also reduces resource sharing latency. 
Another blockchain-based resource allocation architecture for edge computing is presented in .
In this work, a three-stage auction scheme is introduced, in which the blockchain miners act as the buyers, the edge servers providing resources act as the sellers, and a trusted third-party auctioneer undertakes the resource trading. Blockchain is responsible for monitoring resource trading and user payment between miners and edge servers. The experimental results also show that the blockchain-based solution is truthful, individually rational and computationally efficient compared to the conventional approaches. 
In a multi-user network, a critical challenge is to fairly allocate wireless resources among users with respect to their priorities (i.e., emergency levels). For example, a user who needs more resources for his service demand should be allocated more resources by the provider. Failing to authenticate users' priorities can leave insufficient wireless resources for the users who actually have high priorities. To provide a dynamic resource allocation solution for optimal resource usage, the work in presents a blockchain consensus protocol to check the authenticity of priorities. Each mobile user can take the role of a blockchain node to verify the authenticity of a new message or request.
The resource level is decided by an asynchronous Byzantine agreement among nodes, which guarantees trustworthiness and fairness for resource sharing.
The problem of interference management in 5G infrastructural wireless networks is expected to become critical due to the unexpected data content traffic and numbers of 5G IoT devices. Although the telecom operators provide mobile services through the deployment of small-size networks, which can deliver various advantages such as high data rates and low signal delay, these networks are likely to suffer from issues such as inter-cell, intra-cell, and inter-user interference . In data-intensive service scenarios where a huge amount of mobile data must be transmitted in cellular networks, D2D communication can be a good choice to implement low-latency data transmission. However, the coexistence of D2D devices and cellular users in the same spectrum for communication, and the short distance between D2D devices and users in small cells, can result in cross-tier interference (CTI). Collaborating on communication and sharing service benefits between mobile devices can be infeasible in practice due to the conflict of interest between them. Building a fair and trusted economic scheme can be a solution to this problem, and thus mitigate the network interference.
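One simple form such an economic scheme could take, sketched below purely for illustration (all names are invented, and a real scheme would settle these credits on a blockchain), is a credit account in which a transmitter must compensate every receiver it would interfere with before it is allowed to become active:

```python
class InterferenceMarket:
    """Toy credit scheme: a node pays for the interference it causes."""

    def __init__(self, initial_credit):
        self.credit = dict(initial_credit)  # node id -> credit balance

    def request_transmission(self, node, interfered, price_per_victim):
        """Activate `node` only if it can compensate every victim."""
        cost = price_per_victim * len(interfered)
        if self.credit[node] < cost:
            return False  # cannot afford the interference: stay silent
        self.credit[node] -= cost
        for victim in interfered:
            self.credit[victim] += price_per_victim
        return True


market = InterferenceMarket({"tx1": 10, "rx1": 0, "rx2": 0})
granted = market.request_transmission("tx1", ["rx1", "rx2"], 4)
```

With this rule, a node that interferes heavily drains its credit and is eventually forced to stay silent, while the victims accumulate credit they can later spend on their own transmissions, which is the fairness incentive the economic schemes above aim for.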
Currently, electronic money transactions have received extensive attention in small cell deployments, but the transaction consensus is often reached through a central authority . This approach not only incurs additional latency and transmission energy costs, but also raises security concerns regarding third parties. Distributed interference management with blockchain would be a feasible approach to cope with such challenges and facilitate interference management. 
For example, the authors in present a first example of using a distributed blockchain to support a linear interference network. The main objective is to build a monetary mechanism using blockchain for optimal interference management. More precisely, a network model of node pairs, each comprising a transmitter and a receiver, is considered, wherein the transmitter (payer) may cause interference at the receiver (payee). A distributed interference-avoidance transmission strategy is proposed such that a node has to pay in order to be active, and it then maximizes its monetary credit. The blockchain implementation realizes the monetary policies for cooperative interference management using a greedy algorithm. The proposed strategy also revealed that blockchain can help allocate economic benefits among users for interference avoidance . 
In D2D networks, interference may arise from unfair resource allocation by service providers to different user types. For example, users with higher spectral resource demands should be prioritized during resource scheduling. Motivated by this, a blockchain consensus method is proposed in to evaluate the amount of cross-tier interference (CTI) caused by each user. The authors pay special attention to building an access control mechanism using blockchain for the authenticity of channel state information (CSI) with dynamic resource allocation. A user with higher CSI can be allocated a larger amount of wireless resources.
A simulation implementation with an optimal user access algorithm is also presented, showing that the proposed scheme can improve the spectral efficiency for D2D users without interference effects. 
The study in utilizes power control with blockchain to support Quality-of-Service (QoS) provisioning, namely enabling efficient transmission for a macrocell user (MUE) while bounding the time delay of femtocell users (FUEs), in blockchain-based femtocell networks. The macrocell base station (MBS) shares its spectrum resources with the FUEs, and co-channel interference can be caused by the FUEs. In order to avoid excessive interference from the FUEs, the MBS can price the interference to the FUEs, and the FUEs determine their transmission powers and payments under a time-delay constraint according to a modelled Stackelberg game. Blockchain is essential for building a decentralized femtocell network in which payments can be made reliably without the involvement of a middleman. 
In another scenario, the interference between IoT transaction nodes (TNs) in a blockchain-enabled IoT network is also analysed in . In this work, the authors focus on investigating the performance of blockchain transaction throughput and communication throughput by deriving the probability density function (PDF) with respect to the interference of TNs, for a transmission from an IoT node to a blockchain full-function node. The blockchain-based solution is able to ensure a high success rate and high overall communication throughput, and preserve the IoT network against security threats.
Despite great research efforts in the field, the use of blockchain for interference management in 5G mobile networks is still in its infancy, with few investigated works.
The preliminary findings in the literature are expected to open the door to exploring blockchain for overcoming challenges in network interference management in terms of network throughput and security.
In recent years, federated learning has emerged as a promising machine learning technique for large-scale mobile network scenarios , . Federated learning enables distributed model training using local datasets from distributed nodes such as IoT devices and edge servers, but shares only model updates without revealing raw training data. More specifically, it employs on-device processing power and untapped private data by implementing the model training in a decentralized manner and keeping the data where it is generated. This emerging approach can protect the privacy of mobile devices while ensuring high learning performance, and thus promises to play a significant role in supporting privacy-sensitive 5G mobile applications such as edge computing and caching, networking, and spectrum management .
In particular, the cooperation of blockchain and federated learning has been considered in recent works to solve complex issues in 5G mobile wireless networks. The authors in introduce a blockchained federated learning (BlockFL) architecture which enables on-device machine learning without any centralized training data or coordination by employing a consensus mechanism in blockchain.
By relying on the decentralized blockchain ledger, the proposed model overcomes the single point of failure problem and extends the federation to untrustworthy devices in a public network thanks to federated validation of the local training results. Besides, the blockchain also accelerates the training process through a reward mechanism, which in turn promotes the collaboration of ubiquitous devices. 
The study in considers a reputation scheme which selects reliable mobile devices (workers) for federated learning to defend against unreliable model updates in mobile networks. To ensure accurate reputation calculation, a consortium blockchain with the properties of non-repudiation and tamper-resistance is leveraged to create a secure decentralized model update network of edge servers and mobile devices, leading to reliable federated learning on mobile edge computing. Importantly, blockchain combined with contract theory enables an incentive mechanism which stimulates high-reputation workers with high-quality data to join the model training, preventing poisoning attacks in federated learning . 
Meanwhile, the authors in incorporate blockchain with federated learning in the determination of data relevance in mobile device networks. This is done by encouraging mobile users to aggregate relevant information on a specific topic they are seeking while interacting with other users. They also introduce a decentralized way of storing data which reduces the risks of centralized data storage. A consensus mechanism called Proof of Common Interest is considered, which provides data verification services to ensure that data added to the blockchain ledger is relevant. 
To provide a parallel computing architecture for big data analysis, especially for precision medicine, whose data sets are owned by healthcare data users, an integrated blockchain-federated learning model is proposed in .
Federated learning assists in training large medical data sets from various distributed data sources owned and hosted by different hospitals, patients, and health service providers, while blockchain-empowered smart contracts are used to enable a distributed parallel computing environment for distributed deep learning using heterogeneous and distributed data. Moreover, the blockchain adoption enables secure, transparent, and auditable data sharing to promote international collaboration.
The work in considers a blockchain-empowered secure data sharing architecture for distributed devices in the Industrial Internet of Things (IIoT). The key focus is on building privacy-preserving data sharing by incorporating federated learning. By using the power of the federation of IoT devices, data privacy is ensured via the federated learning model, which allows the data model to be shared without revealing the actual data. Further, to enhance the integrity of the trained data model, federated learning is integrated into the consensus process of a permissioned blockchain, which also ensures secure data retrieval and accurate model training.
In addition to the smart emerging services that 5G can provide to mobile users and stakeholders, the complex 5G mobile environments also raise many privacy issues that must be investigated carefully.
According to a survey work in , the privacy challenges in 5G come from various aspects, such as end-to-end data privacy, data sharing privacy, trust issues in information flows, and trust issues in centralized mobile data architectures with third parties. Blockchain, with its decentralization, traceability, availability and trust capabilities, has widely demonstrated its great potential in solving privacy issues in 5G networks and services . As an example, blockchain can protect user data for decentralized personal data management , which enables the provision of personalized services. Laws and regulations for data protection could be programmed into the blockchain so that they are enforced automatically. Interestingly, blockchain is capable of providing full control for monitoring personal data shared on the network, unlike traditional approaches, which hinder users from tracking their data . 
To provide decentralized and trusted data provenance services on cloud computing, the work in uses blockchain to provide tamper-proof records and enable the transparency of data accountability. Blockchain can support this in three steps, namely provenance data collection, provenance data storage, and provenance data validation. The data provenance record is published globally on the blockchain, where blockchain nodes (i.e. mobile users, data owners, and service providers) can participate in consensus for the confirmation of every block. During data sharing between users and service providers, transmitted data can be highly vulnerable to malicious threats, i.e. data attacks, so privacy for shared data should be considered carefully. In this context, the authors in present a blockchain-based solution for secure data exchange. Data can be recorded in blocks and signed by miners so that sharing is securely implemented.
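A minimal sketch of the tamper-evident record keeping behind such provenance services is shown below (simplified and with a hypothetical record format; real systems add signatures and distributed consensus on top). Each record is chained to the digest of its predecessor, so any later modification breaks every subsequent link:

```python
import hashlib


def _digest(record, prev_digest):
    """Digest of a record bound to the digest of the previous record."""
    return hashlib.sha256((prev_digest + record).encode()).hexdigest()


class ProvenanceChain:
    """Toy tamper-evident chain of provenance records."""

    GENESIS = "0" * 64

    def __init__(self):
        self.chain = []  # list of (record, digest)

    def append(self, record):
        prev = self.chain[-1][1] if self.chain else self.GENESIS
        self.chain.append((record, _digest(record, prev)))

    def validate(self):
        """Recompute every digest; any tampering breaks the chain."""
        prev = self.GENESIS
        for record, digest in self.chain:
            if _digest(record, prev) != digest:
                return False
            prev = digest
        return True


chain = ProvenanceChain()
chain.append("owner=alice;op=collect")
chain.append("owner=bob;op=read")
assert chain.validate()
chain.chain[0] = ("owner=mallory;op=collect", chain.chain[0][1])  # tamper
assert not chain.validate()
```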
An automated access-control and audit mechanism is considered wherein blockchain enforces user data privacy policies when data is shared across third parties for privacy preservation . 
In current IoT applications, private information management often relies on centralized databases owned by third-party organizations for data services such as data processing, data storage, and data sharing. However, this architecture has weaknesses in terms of data leakage to curious third parties and high communication latency due to such centralized models. A privacy architecture using blockchain for smart cities is presented in , focusing on solving the above issues. Blockchain has the potential to help mitigate privacy exposure while allowing users to benefit from trusted transactions and better data control. The records of data access are added to a transparent ledger so that blockchain, with its consensus mechanism, can verify and validate the data requests from all users to detect any potential threats in a decentralized manner, without the involvement of any third parties. In another research effort, the work in investigates how blockchain can support secure data storage and data availability in IoT health networks. With the combination of cryptographically secured encryption and the common investment of the network peers via a consensus mechanism, blockchain empowers a decentralized and openly extendable network while protecting the data on it. 
A privacy-preserving scheme empowered by blockchain is also considered and discussed in . In this work, a consortium blockchain-oriented approach is designed to solve the problem of privacy leakage without restricting trading functions in energy networks. Both energy users and suppliers are verified by a trading smart contract so that all trading transactions are authenticated for trustworthiness.
Moreover, to achieve good privacy in industrial IoT, the study introduces a decentralized blockchain architecture in conjunction with a hash algorithm and an asymmetric encryption algorithm. IoT data are still stored in an off-chain database (i.e. cloud storage), while the access record (storage, reading, and control) of each entity is stored in a block for tracking. Therefore, the blockchain storage burden is handled efficiently, and each operation is strictly supervised via the blocks. 
In dealing with privacy issues in vehicular networks, the authors of present a privacy-preserving authentication framework. The main goal of the proposed system is to preserve the identity privacy of vehicles in vehicular ad hoc networks. All certificates and transactions are recorded immutably and securely in the blockchain to make the activities of vehicles (i.e. data sharing, energy trading) transparent and verifiable. In a similar direction, a model called CreditCoin, a novel privacy-preserving incentive announcement solution, is presented in . On the one hand, by offering incentives to users, CreditCoin can promote data sharing for network expansion, and the transactions and account information on the blockchain are immutable and resistant to modification by attacks. On the other hand, with a strongly linked ledger, the blockchain controller can easily trace user activities, including malicious behaviours, for data protection. 
In addition, the work in proposes to use private smart contracts to design a privacy-preserving business protocol in e-commerce. In the contract, the interaction policy is defined via business logic that determines the types of trade, counterparties, underlying assets, and price information of the online shopping. The transactions between the seller and the buyer can be implemented securely and transparently via the contract without the disclosure of private information.
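The off-chain storage pattern discussed above, where bulky IoT data stays in cloud storage while only a digest and the access records are kept on-chain, can be sketched as follows (illustrative names; a real system would replace the Python dictionaries with blockchain state and signed transactions):

```python
import hashlib


class OffChainRegistry:
    """Toy pattern: data lives off-chain; only a digest and an access
    log are kept 'on-chain' for integrity checking and supervision."""

    def __init__(self):
        self.digests = {}     # record_id -> sha256 hex digest
        self.access_log = []  # (record_id, entity, operation)

    def register(self, record_id, data: bytes, owner):
        """Record the digest of newly stored off-chain data."""
        self.digests[record_id] = hashlib.sha256(data).hexdigest()
        self.access_log.append((record_id, owner, "storage"))

    def verify(self, record_id, data: bytes, reader):
        """Check integrity of data fetched off-chain; log the read."""
        self.access_log.append((record_id, reader, "reading"))
        return self.digests.get(record_id) == hashlib.sha256(data).hexdigest()


reg = OffChainRegistry()
reg.register("sensor-1", b"temp=21.5", owner="factory-A")
print(reg.verify("sensor-1", b"temp=21.5", reader="auditor"))
```

The digest lets any reader detect tampering with the off-chain copy, while the access log captures the per-operation supervision described above, without the chain ever holding the raw data.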
Recently, the benefit of blockchain to the privacy of machine learning implementations has been investigated in . A privacy-preserving and secure decentralized Stochastic Gradient Descent (SGD) algorithm is established on blockchain, which enables computation in a decentralized manner across computing nodes. Computation parameters and information are kept in the blocks without revealing the nodes' own data or being compromised by data attacks. Clearly, the blockchain technology is promising for privacy preservation in modern mobile networks and services, especially in 5G IoT systems, where data protection is becoming more important in the context of exponential mobile data growth in the 5G era .
The rapid increase of 5G traffic and the explosive growth of valuable data produced by user equipment have led to strong demands for security mechanisms to protect mobile data against threats and attacks. With its important security properties, blockchain can provide a number of security services for 5G to improve the overall performance of future mobile systems.
Considering the state-of-the-art literature , blockchain mainly offers three security services, namely access control, data integrity and authentication, which are summarized as follows.
Access control refers to the ability to prevent the malicious use of network resources. Access control mechanisms guarantee that only legitimate users, devices or machines are granted permissions (e.g., read, write) on the resources in a network, database, service or application. Blockchain, and especially smart contracts, can offer access control capabilities to protect the involved system against threats. As an example, a trustworthy access control scheme leveraging smart contracts is introduced in to implement access right validation for IoT networks. The access policy is predefined and stored in the contract, which runs on blockchain. The contract can verify a user request against this policy in a dynamic and decentralized manner.
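A contract-stored policy check of this kind can be pictured with the minimal sketch below (hypothetical subjects and operations; an actual contract would express the same logic in a contract language such as Solidity and run it on-chain):

```python
# Policy as a contract might store it: subject -> operations it may perform.
POLICY = {
    "device-17": {"read"},
    "gateway-3": {"read", "write"},
}


def verify_request(policy, subject, operation):
    """Grant the request only if the stored policy permits it."""
    return operation in policy.get(subject, set())


print(verify_request(POLICY, "gateway-3", "write"))   # permitted
print(verify_request(POLICY, "device-17", "write"))   # denied
print(verify_request(POLICY, "intruder", "read"))     # unknown subject
```

Because the policy itself lives in the contract, every node evaluating the request applies the same rule and reaches the same decision, which is what makes the verification decentralized.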
Different from traditional access control architectures, which rely on an external authority for verification, the blockchain-based approach can perform direct access control between the requestor and the data centre, so that access latency is reduced and security is improved.

To achieve access control for user requests to data resources in fog cloud-based IoT networks, a privacy-oriented distributed key management scheme using blockchain is proposed in to achieve hierarchical access control. To receive a permission grant for data access, a subject needs to send a request with access information (i.e., identification, user address) to the security manager, which checks the access and broadcasts this request to other entities for verification via blockchain. The access is granted only when a consensus is achieved among all entities, which enhances the reliability of the access control architecture.

To overcome the challenges caused by complicated access management and the lack of credibility due to the centralization of traditional access control models, the authors in introduce an attribute-based access control scheme. The ultimate goal is to simplify access management via a distributed blockchain ledger while providing efficient access control to safeguard IoT data resources. Moreover, the work in introduces a combination of the Ethereum blockchain and ciphertext-policy attribute-based encryption (CP-ABE) to realize fine-grained access control for cloud storage. An access control policy is programmed in a smart contract which verifies requests based on the access time period and the attributes of data users. All information on control functionality results is stored on the blockchain, so the access control is visible to all users.

Meanwhile, a transaction-based access control scheme based on blockchain is proposed in . The access verification follows a four-step procedure: subject registration, object escrowing and publication, access request, and grant.
Each request of the subject is registered as a transaction that is then submitted to the blockchain and validated by the data owner using a Bitcoin-type cryptographic script. The works in , also investigate the capability of blockchain for realizing access control services with the Ethereum and Hyperledger Fabric platforms. To perform access control in large-scale IoT networks, a platform called BlendCAC is considered in as a promising solution for securing data sharing and resource trading among devices, users and service providers. The proposed approach concentrates on an identity-based capability token management strategy which takes advantage of a smart contract for registration, propagation and revocation of the access authorization.

The integrity property ensures that data is not modified in transit, i.e., data is intact from its source to the destination. In recent years, distributed blockchain ledgers have started to be used to verify data integrity for mobile services and networks, such as data management services or IoT applications, to overcome the limitations of traditional models, which often rely on a third-party auditor for integrity validation . A blockchain-based framework for data integrity service is also presented in , which performs blockchain-based integrity verification for both data owners and data customers. To operate the data integrity service, a smart contract living on the blockchain is employed to audit transactions from all users.
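The tamper-evidence underlying such integrity services comes from chaining cryptographic hashes of the stored records: each block commits to its payload and to the hash of its predecessor, so any modification invalidates all subsequent links. A minimal Python sketch (our own illustration, not any cited system's implementation):

```python
import hashlib
import json

GENESIS = "0" * 64  # hash placeholder for the first block's predecessor

def block_hash(prev_hash: str, payload: dict) -> str:
    """Hash of a block = SHA-256 over the previous hash plus the serialized payload."""
    record = json.dumps({"prev": prev_hash, "data": payload}, sort_keys=True)
    return hashlib.sha256(record.encode()).hexdigest()

def build_chain(records):
    """Append each record as a block linked to its predecessor by hash."""
    chain, prev = [], GENESIS
    for payload in records:
        h = block_hash(prev, payload)
        chain.append({"prev": prev, "data": payload, "hash": h})
        prev = h
    return chain

def verify_chain(chain) -> bool:
    """Integrity check: recompute every hash; any modification breaks a link."""
    prev = GENESIS
    for blk in chain:
        if blk["prev"] != prev or block_hash(prev, blk["data"]) != blk["hash"]:
            return False
        prev = blk["hash"]
    return True
```

Altering any `data` field after the chain is built causes `verify_chain` to return `False`, which is the property that makes a blockchain-backed database "strongly resistant to modifications".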
Upon the deployment of the smart contract, participants can interact with it at any time, and the integrity service cannot be terminated by any entity except the author. The blockchain stores the data history, and a database stored on blockchain is strongly resistant to modifications, which improves data integrity.

To provide data integrity services on resource-limited IoT devices, the authors in introduce a lightweight integrity verification model for Cyber-Physical Systems (CPS) by taking advantage of blockchain features. The key concept of the proposal is a three-level design, including a first level for running the Proof-of-Trust (PoT) mechanism among IoT devices, and two upper levels for data persistence and integrity verification by the cloud. The implementation results reveal the efficiency of the blockchain-empowered model, with good confidentiality, availability, integrity, and authenticity for IoT communication.

In an effort to deal with the challenges caused by centralized traditional data integrity schemes, such as symmetric key approaches and public key infrastructure (PKI), which often suffer from single points of failure and network congestion, a decentralized stochastic blockchain-enabled data integrity framework is analysed and discussed in . The proposed stochastic blockchain design includes the chain structure and the consensus mechanism for the data integrity checking procedures.

At present, with the popularity of cloud storage, guaranteeing data integrity on the cloud has become a challenging problem. The authors of describe a framework for data integrity verification in P2P cloud storage via blockchain, which makes the verification process more open, transparent, and auditable to all data users. Moreover, a new solution for improving integrity on the cloud is introduced in .
In this system, blockchain constructs a semi-finished block on a candidate block arranged from data packages, which is broadcast to all entities, while the consensus mechanism in blockchain, i.e., Proof of Work, is able to generate tamper-resistant metadata associated with a policy-based encryption method, leading to better data integrity. Besides, to tackle the issue of verification delay caused by procrastinating third-party auditors, the study in implements a solution for cloud storage using blockchain which enables the auditors to record each verification result into a blockchain as a transaction with a stringent time requirement. The timestamp, in conjunction with signature and hash values, can provide a time-sensitive data integrity service with a high degree of system security.

In recent years, blockchain has also been investigated to realize the authentication capability to improve the overall security level of 5G networks . Mobile user access needs to be authenticated to detect and prevent any potentially malicious behaviour towards network resources (i.e., databases, computing resources), which preserves the involved system and enhances network robustness. In , a privacy-enhancing protocol is proposed by using the blockchain technology. The approach provides the ability to identify users by evaluating personal information which is extracted from the user request package.
The smart contract is also integrated to perform authentication, aiming to prevent unauthorized access and attacks.

In our recent works , , blockchain-based smart contracts are also leveraged to build an authentication mechanism for cooperative edge IoT networks. By enforcing an access control policy, smart contracts are able to identify and verify user access for authentication. Only users with access grants can gain permission for their functionality, i.e., data offloading to edge servers.

The authors in consider an authentication scheme using blockchain for fog computing. The fog nodes running on the Ethereum blockchain employ smart contracts to authenticate access from IoT users. The proposed scheme facilitates managing and accessing IoT devices in a large-scale fog network while providing security features such as decentralization, privacy and authentication without the need for a trusted third party.

In order to achieve authentication in vehicular networks, a strategy working on the blockchain platform is proposed in which can undertake vehicle authentication and privacy preservation with seamless access control for vehicles. Blockchain can bring more advantages than conventional approaches using third-party auditors in terms of a high trust degree and transparency. Another blockchain application for privacy-aware authentication is shown in , which allows both the server and the user to authenticate each other through a credential or certificate in a decentralized manner.
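A greatly simplified credential check of this kind can be sketched in Python: only a digest of the credential is recorded on the (simulated) ledger at enrollment, and a presented credential is later verified against that digest. This is our own toy model; real schemes in the surveyed works rely on public-key certificates and signatures rather than shared secrets:

```python
import hashlib
import hmac

ledger = {}  # user -> credential digest, recorded immutably on the (simulated) ledger

def register(user: str, secret: bytes):
    """Enrollment: only a digest of the credential is put on the ledger,
    so the ledger itself never reveals the credential."""
    ledger[user] = hashlib.sha256(secret).hexdigest()

def authenticate(user: str, presented_secret: bytes) -> bool:
    """Grant access only if the presented credential matches the on-ledger digest.
    compare_digest gives a constant-time comparison against timing attacks."""
    digest = ledger.get(user)
    if digest is None:
        return False  # unknown users are rejected outright
    presented = hashlib.sha256(presented_secret).hexdigest()
    return hmac.compare_digest(digest, presented)
```

An unregistered user or a wrong credential is rejected without consulting any central authority, since every ledger replica holds the same digests.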
All entities in the network achieve a consensus on an authentication task, and any potential threats can be detected and reflected on the decentralized ledgers for necessary prevention.

Nowadays, the Internet of Things (IoT) constitutes a fundamental part of the future Internet and has drawn increasing attention from academia and industry thanks to its great potential to deliver exciting services across various applications. IoT seamlessly interconnects heterogeneous devices and objects to create a physical environment where sensing, processing and communication processes are implemented automatically without human involvement. The evolution of 5G networks will be a key enabler of the advancement of the IoT. A number of key enabling 5G technologies, such as edge/cloud computing, SDN, NFV and D2D communication, have been developed to facilitate the future IoT, giving birth to a new model known as 5G IoT, which is expected to disrupt the global industry , . Especially, in recent years, blockchain has been investigated and integrated with 5G IoT networks to open up new opportunities to empower IoT services and applications .
Reviewing the literature, we find that blockchain mainly supports several key IoT applications, namely smart healthcare, smart city, smart transportation, smart grid and UAVs, which are highlighted as follows.

Healthcare is an industrial sector where organizations and medical institutions provide healthcare services, medical equipment and health insurance to facilitate healthcare delivery to patients. The emerging 5G technologies have the potential to support smart healthcare applications and fulfil the new requirements of healthcare, such as improved QoS, better density and ultra-high reliability . The integration of blockchain with 5G technologies can advance current healthcare systems and provide further performance benefits in terms of better decentralization, security, privacy , service efficiency and system simplification for lower operational costs . Blockchain can be incorporated with 5G technologies such as softwarization and cloud/edge computing for new smart healthcare services, as depicted in Fig.~\ref{fig11}. The softwarized infrastructure can perform network functions through NFV, which promotes IoT communication, while cloud computing can support fast healthcare delivery services for early detection of patient health conditions.
In such a 5G healthcare scenario, blockchain is employed to build a peer-to-peer database system which can validate and record all transactions (i.e., healthcare requests, patient data) and store them immutably in decentralized ledgers. All transaction blocks are also visible to healthcare network members, including doctors, clinicians, and patients, to accelerate data sharing during medication and treatment processes.

Blockchain has also been integrated with SDN-based healthcare networks for healthcare networking and computing. A software-defined infrastructure is designed to facilitate the specification of home-based healthcare services, and a cloud edge model is considered to provide flexible heterogeneous health computation services. The role of blockchain in this work is to deal with health data interoperability and security issues, such as enabling effective authorized interactions between patients and healthcare providers (doctors, insurance companies), and delivering patient data securely to a variety of organizations and devices. Also, an access control mechanism empowered by smart contracts is integrated to support secure data sharing through user access verification, aiming to prohibit unauthorized users or threats from malicious access to health data resources.

A healthcare architecture based on D2D communications can be a notable solution for efficient information sharing and large-scale data sharing, but it also raises critical privacy issues due to untrusted sharing environments. An example is presented in wherein blockchain is incorporated with the D2D technology for large-scale feature extraction applications on the cloud. In healthcare, for example, image features extracted from health data collections contain important information about patients and thus need to be secured. Blockchain would ensure secure data storage by shifting the information to decentralized ledgers which are maintained by all participants.
All stored data on blockchain is signed digitally and identified by hash values, which also prevents privacy leakage through tampering or forging.

Recently, blockchain has also been considered and investigated in mobile edge computing (MEC)-empowered healthcare applications. The authors in consider an edge blockchain for telemedicine applications, with the main objective of providing secure transmission and computation of health data. The MEC-based cellular health network contains a base station and a set of mobile users. Here, mobile users can access the Internet via the cellular network, and they share the computation resources of a MEC server linked with a base station in a small cell. Blockchain provides a consensus protocol to verify the patient priority, which is defined as the level of wireless resources that a user needs for their computation. As a result, the optimal resource allocation can be achieved to ensure the quality of data transmission of the whole network, and user information is secured by being stored on blockchain ledgers. Another blockchain approach in edge-based mass screening applications for disease detection is presented in . Due to the massive amount of captured multimedia IoT test data, an offline storage solution is considered and integrated with blockchain, which keeps cryptographic hashes of the health data. This approach allows patients to take control of their information when performing clinical tests, visiting doctors or moving to other hospitals, thanks to the transparency and availability of the blockchain protocol.
\begin{figure}
	\centering
	\includegraphics[height=6.63cm,width=8cm]{Blockchain_IoTHealthcare.pdf}
	\caption{Blockchain for 5G healthcare . }
	\label{fig11}
	\vspace{-0.15in}
\end{figure}
Meanwhile, cloud computing, a key enabling technology of 5G networks, has also provided many notable solutions for healthcare services .
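The storage pattern used by the mass screening work above, keeping bulky health records in external storage while anchoring only their cryptographic hashes on-chain, can be sketched as follows. The dictionaries and function names are our own illustrative stand-ins for an external store (e.g., a hospital database or IPFS) and an on-chain index:

```python
import hashlib

off_chain_store = {}  # external storage: record id -> bulky record content
on_chain_index = {}   # the ledger keeps only: record id -> content hash

def store_record(record_id: str, content: bytes):
    """Bulky data stays off-chain; only its SHA-256 digest is anchored on-chain."""
    off_chain_store[record_id] = content
    on_chain_index[record_id] = hashlib.sha256(content).hexdigest()

def fetch_and_verify(record_id: str):
    """Retrieve the record and check it still matches its on-chain anchor."""
    content = off_chain_store[record_id]
    ok = hashlib.sha256(content).hexdigest() == on_chain_index[record_id]
    return content, ok
```

Because the anchor on the ledger is immutable, any later tampering with the off-chain copy is detected at retrieval time, while the chain itself never has to hold the sensitive payload.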
Many research works have been dedicated to using blockchain for cloud-based healthcare networks, such as . In this work, blockchain proves its efficiency in improving the security of electronic health record (EHR) sharing in cloud-assisted healthcare. Cloud computing is employed to store the EHR ciphertext, while the consortium blockchain keeps records of keyword ciphertext for data searching and sharing. In addition, to achieve secure data exchange between IoT health devices and cloud servers, a blockchain-enabled communication protocol is described in . All sensitive patient information and medical test results can be stored and managed by blockchain, where a consensus mechanism is necessary for user verification when a medical test is performed.

Very recently, we have also investigated and designed a blockchain architecture for cloud-based health management systems , . A mobile cloud blockchain platform is proposed to implement dynamic EHR sharing among healthcare providers and patients. Blockchain is integrated with cloud computing to manage user transactions for data access enabled by smart contracts. In particular, a decentralized storage system (IPFS) run by blockchain is combined with cloud computing to make data sharing more efficient in terms of low latency, easy data management and improved data privacy, compared to centralized cloud architectures. IoT users (i.e., doctors or patients) can perform data sharing transactions via their mobile devices, such as smartphones, which offers flexible data sharing services with high security.

The evolution of 5G technologies has enabled enormous business opportunities and digital transformation initiatives for new smart city models, providing a wide range of services to city citizens . Smart cities involve a variety of components, including ubiquitous IoT devices, heterogeneous networks, large-scale data storage, and powerful processing centres such as cloud computing for service provision. Despite the potential vision of smart cities, how to provide smart city services with high efficiency and security remains unsolved. In this scenario, blockchain can be a promising candidate to solve critical security issues and empower smart city services , . To simplify the management of smart city services on a large scale, a city can be divided into small blocks called smart blocks. Each smart block consists of a variety of IoT devices, such as sensors, cameras, etc., of a certain area under the control of a block admin. A private blockchain using a ledger database is important to securely store all information generated by IoT devices during data exchange, data offloading and computation services.

Another study in analyses a sustainable IoT architecture empowered by blockchain for secure sharing economy services in mega smart cities.
The proposed system employs cognitive fog nodes at the edge to gather and process offloaded multimedia payloads and transactions from mobile edge nodes and IoT devices. To extract significant information from the outsourced data, machine learning is used during the data analytics, and the results are then put on blockchain for secure sharing and storage. Furthermore, to solve data security issues in IoT for smart cities, blockchain is considered in to secure communication between smart city and home devices and sensors. IoT data can be executed and computed at the edge layer for latency reduction, while user access information is recorded by blockchain, which works as a universal ledger. The key benefits of the proposed scheme include system transparency as well as the permissionless property, which allows adding new IoT devices without involving any authorities.

In 5G smart cities, a prohibitively large amount of surveillance data will be generated continuously from ubiquitous video sensors. It is very challenging to immediately identify the objects of interest or detect malicious actions from thousands of video frames on a large scale. In such a context, building distributed edge computing networks is highly efficient for achieving scalable data computation , . From the security perspective, blockchain would be a natural choice to establish decentralized security solutions by interconnecting edge nodes, IoT devices and city users, where data sharing, computation and business transactions can be performed on the blockchain ledger platform. It has also been demonstrated that the use of distributed blockchain provides more benefits than centralized architectures with a central cloud server in terms of lower latency, energy consumption, better service delivery, and faster user response with security and privacy guarantees .
\nCurrently, most Mobility-as-a-Service (MaaS) which monitors the connections between transportation providers and passengers in smart cities is controlled by a central MaaS manager, which potentially introduces privacy leakage and system disruptions if this entity is attacked. By integrating with the blockchain, the MaaS model can be operated in a much more secure and decentralized manner . In this work, blockchain can help improve trust and transparency for all stakeholders and eliminate the need of centralized entity to make commercial agreements on MaaS. The mobility services, such as ticket purchase or payments for using transports, can be programmed by smart contracts, which enable automatic and reliable service trading and payment. \nCloud computing is also a promising technology which can be incorporated to support strong computation and storage capabilities for smart city data, i.e big data from ubiquitous IoT devices. A cloud-smart city architecture is introduced in , wherein big data processing can be performed by cloud servers, while data auditing can be achieved by using the blockchain without third party auditors (TPAs). The proposed scheme focuses on building an optimized blockchain instantiation called data auditing blockchain (DAB) that collects auditing proofs and employs a consensus algorithm using a Practical Byzantine Fault Tolerance (PBFT) protocol. The simulation results reveal the potential of the blockchain adoption for big data in smart city with lower communication costs and better security. Furthermore, blockchain can enable interconnection cloud service providers to achieve a larger scale computation service . 
Any cloud server can be regarded as a blockchain node, and cloud computing events are recorded on the ledgers, which effectively improves the system robustness and avoids the risk of single points of failure once a cloud server is compromised or attacked.

With the rapid development of modern 5G communication and computation technologies, recent years have witnessed tremendous growth in intelligent transportation systems (ITS), which create significant impacts on various aspects of our lives with smarter transport facilities and vehicles as well as better transport services , . Smart transportation is regarded as a key IoT application which refers to the integrated architectures of communication technologies and vehicular services in transportation systems. One critical issue in smart transportation is the security risk resulting from dynamic vehicle-to-vehicle (V2V) communications in untrusted vehicular environments and the reliance on centralized network authorities. Blockchain offers several inherent features to implement distributed data storage, peer-to-peer communication, and transparently anonymous user systems, which envision building secure, decentralized ITS systems to facilitate customer transportation . One of the most significant services for realizing intelligent transportation is data transmission among vehicles. How to provide efficient data exchange services in terms of low latency and increased network throughput while still ensuring a high degree of security is a critical challenge.
Blockchain would enhance the QoS of current ITS systems by offering a decentralized management platform, wherein all vehicles and road side units (RSUs) can perform data transmission and sharing in a peer-to-peer model to reduce end-to-end delay without using a vehicular authority .

In order to accommodate the large volume of electric vehicle (EV) charging/discharging demand during transportation, the blockchain concept is introduced in , enabling peer-to-peer transactions and decentralized storage to record all transaction data of EVs. In fact, EVs can be considered as mobile power backup devices to support the smart grid for load flattening, peak shaving and frequency regulation. This new energy trading paradigm is known as vehicle-to-grid (V2G), which is essential to build a safer and more sustainable energy platform for both EVs and the main power grid. Consumer power loads from the smart city are all connected to the public blockchain power exchanging platform, where the electricity supply and user demand information are transmitted, encrypted and recorded. In such a context, an EV can publish and transmit charging or discharging orders (for buying and selling) to the power blockchain platform, which executes the EV request, performs energy trading and payment, and saves the transaction to the distributed ledger, which is also visible to every vehicle in the vehicular network.

In the same line of discussion, the authors in also analyse a V2G energy trading model with a combination of blockchain and edge computing. EVs can buy energy from local energy aggregators (LEAGs) via trading. The vehicular communication is secured by a consortium blockchain, in which all the transactions are created, propagated, and verified by authorized LEAGs. To further reduce the latency and processing burden posed on blockchain, edge computing servers are employed to undertake block creation and mining.
LEAGs can buy computation services from edge computing providers to finalize this process, and store the mined blocks at nearby edge nodes. The blockchain technology envisions a trustless network that eliminates the operation cost of intermediary participation, which will realize a quicker, safer and cheaper way of running ITS systems. Moreover, authentication of vehicle access is of paramount importance for vehicular networks. In this regard, smart contracts would be a notable approach which can authenticate and verify vehicular transactions by triggering the programmed logic functions . This enables direct authentication for registered vehicles without revealing device privacy and effectively prevents potential attacks from malicious vehicles.

Recently, blockchain has been incorporated with SDN to build secure and controlled vehicular ad hoc networks (VANETs) . With the increasing scale of current VANETs, traditional VANET frameworks with centralized SDN control mechanisms cannot match the diversification of VANET traffic requirements. Distributed SDN control can be an efficient solution to localize decision making to an individual controller, which thus minimizes the control plane response time to data plane requests. To achieve secure communications between SDN controllers as well as between SDN controllers and EVs, blockchain is leveraged to achieve agreement among different nodes in terms of traffic information and energy demands without using centralized trust management.

Another aspect in VANETs is the security of power trading between EVs and V2G networks. In fact, it is very important to design a safe, efficient, transparent, and information-symmetric trading model for VANETs to provide ubiquitous vehicular services (i.e., traffic transmission, vehicle cooperation, energy payment).
Blockchain is introduced in for a reliable decentralized power trading platform, where a V2G EV trading smart contract is integrated for trading authentication and a decentralized energy ledger is used for data storage and sharing. This eliminates the need for trusted third parties, thereby addressing the high cost, inefficiency, and insecure data storage of traditional centralized organizations.

The continuously growing power demand in modern society has been a critical challenge that needs significant attention in the present smart grid era. The energy industry has witnessed a paradigm shift in power delivery from a centralized production and distribution energy system to a dynamic mode of decentralized operation, thanks to the support of ubiquitous 5G technologies such as IoT, edge/cloud computing, SDN, network slicing and D2D communication , . In this regard, blockchain, a decentralized database platform, enables completely new technological systems and business models for energy management with added features such as decentralization, security, privacy and transparency . In the 5G energy network slice, electricity can be allocated to each power user in the housing society through a distributed blockchain platform where all users are interlinked with energy providers on secure and distributed ledgers.
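At its core, energy trading on such blockchain platforms, whether between EVs and the grid or among prosumers, reduces to balance-checked trades appended to a shared ledger. The following Python sketch is our own toy model of this contract logic (the class and its fields are illustrative, not from any cited platform):

```python
class EnergyTradingContract:
    """Toy smart-contract logic for energy trading: the contract checks the
    buyer's balance, settles payment, and appends every trade to a ledger."""

    def __init__(self, balances: dict):
        self.balances = dict(balances)  # account -> energy-cash balance
        self.ledger = []                # append-only trade log (immutable in spirit)

    def trade(self, buyer: str, seller: str, kwh: float, price_per_kwh: float) -> bool:
        cost = kwh * price_per_kwh
        if self.balances.get(buyer, 0) < cost:
            return False  # contract rejects underfunded orders
        self.balances[buyer] -= cost
        self.balances[seller] = self.balances.get(seller, 0) + cost
        self.ledger.append({"buyer": buyer, "seller": seller,
                            "kwh": kwh, "cost": cost})
        return True
```

For instance, with `EnergyTradingContract({"ev1": 100, "grid": 0})`, a 10 kWh order at 5 per kWh succeeds and is logged, while a later order whose cost exceeds the remaining balance is rejected without touching the ledger. On a real platform this logic would run as a verified contract so that no single party can alter balances or the trade history.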
\nIn smart grid, in order to monitor the electricity distribution and power usage of customers, a smart meter can be installed at each home to collect the real-time electricity consumption data for better smart home services. However, a critical drawback is that private user information such as home address, personal information may be disposed and adversaries can track users to obtain electricity consumption profile. To overcome this challenge, blockchain has been introduced in for a privacy-preserving and efficient data aggregation network. \nThe power network has been divided into small groups, each group is controlled by a private blockchain. Instead of relying on a third party for data aggregation, a certain user is chosen to aggregate all user data within his network and record them to the blockchain for storage and monitoring. Such an aggregator only collects data and all other users share the equal right to verify and validate transactions to achieve consensus, which eliminates the risks of single points of failure and improves system trust accordingly. \nIn order to achieve traceability of power delivery in smart grid, blockchain can be applied to provide transparency and provenance services . The customer can register their information on blockchain and perform energy trading and payment by uploading a transaction to blockchain. By creating an immutable data structure, data recorded and transferred onto the system cannot be altered. Smart contracts are also very useful to provide a transparent and fair energy trading between consumers and utility companies through an energy policy which defines all trading rules. Once the energy billing payment is completed, for example, both the user and the service provider receive a copy of the transaction, which allows users to keep track of their energy usage. \nAt present, the sophistication of cyberattacks has posed a challenge to the current smart power systems. 
In recent years, cyber-attacks have caused power system blackouts due to data vulnerabilities, malicious events or market data manipulation . Therefore, the introduction of blockchain, a strong security mechanism, can help overcome such challenges. 
The interactions between the electricity market agent and the customer are reflected via transactions which contain electricity demands, electricity prices and user information. All such transactions are signed by the private key of the sender (i.e. the energy user) to perform energy trading with the agent. In such a context, an attacker can threaten the communication link between users and the agent, but it may be impossible to break the transaction without the user's private key, and such malicious access is detected and discarded by consensus mining. Additionally, the authors in also present a research effort in using blockchain to mitigate cyber-attacks on a smart grid. Every prosumer, consumer and substation is connected through a blockchain-based application under the control of a smart contract, which performs transaction verification when energy transmission occurs. The consensus is maintained by the computing power of all distributed energy servers and users, which also makes the energy system well resistant to cyber-attacks . 
In a similar direction, the work in proposes a smart and scalable ledger framework for secure peer-to-peer energy trading in smart grid ecosystems. The energy network considered consists of a set of EVs, which can participate in three operations, namely charging, discharging and staying idle; an EV aggregator, which works as an energy broker and provides access points to EVs for both charging and discharging operations; and energy cash as the currency for energy payment. To avoid the issue of spamming and Sybil attacks, instead of using PoW, which suffers from high block generation latency, the authors suggest a proof-of-time concept.
A client must collect a random token, i.e., random messages from neighbours, which makes it costly for an attacker to match the throughput of honest transactions, as each transaction carries an associated timestamp. For the security of energy transactions, another work in also builds a fully decentralised blockchain-based peer-to-peer trading scheme. The main goal is to present a pay-to-public-key-hash implementation with multiple signatures as a transaction standard to realise more secure transactions and reduce the storage burden of distributed prosumers. 
Recently, mobile edge computing (MEC), a significant 5G enabling technology, has also been integrated with the smart grid. Although MEC can offer promising benefits such as low-latency computation and reduced network congestion for better energy delivery, the characteristics inherent in the MEC architecture, such as heterogeneity, mobility, geo-distribution and location-awareness, can be exploited by attackers to perform nefarious activities. Thus, designing practical security solutions for MEC-based smart grid systems is critical. In the work , a permissioned blockchain edge model is introduced with the main objectives of privacy protection and energy security. At the layer of distributed edge devices and power supply, smart devices and power supply facilities compose the smart grid, generating electricity trading transactions. Meanwhile, the smart contract running on blockchain assigns tasks to edge devices and records transactions on the blockchain, which enables a secure and trustworthy trading environment. By integrating with distributed edge computing, blockchain can offer a larger number of services, such as device configuration and governance, sensor data storage and management, and trading payments.
Blockchain for edge-empowered smart grid has been considered in , in which a blockchain-based mutual authentication and key agreement protocol is proposed without the need for other complex cryptographic primitives.
The smart grid network model used consists of a registration authority (RA), end users (EUs), edge servers (ESs) and the blockchain. ESs are responsible for supplying timely data analysis and service delivery, and each ES is linked with the blockchain to prevent web spoofing attacks and guarantee smooth energy trading and user interactions. The authors in also present a blockchain implementation for smart grid to guarantee the information privacy of energy users and energy trading. MEC servers act as active blockchain nodes with strong computation capabilities to enable fast data analytics services, i.e. processing large transaction graphs of energy trading, within the energy trading system among EVs.
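The append-only, tamper-evident ledger that the smart grid schemes above rely on can be illustrated with a minimal hash-chained block structure. The sketch below is a conceptual Python illustration under our own field names, not a production blockchain: each block commits to its transactions and to the previous block's hash, so altering any recorded trading transaction invalidates the chain from that point onward.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON form (sorted keys for determinism).
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    """Append a block of energy-trading transactions, linked to its predecessor."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "prev_hash": prev, "transactions": transactions}
    block["hash"] = block_hash(block)  # hash covers the body only ("hash" not yet set)
    chain.append(block)
    return block

def verify_chain(chain):
    """Recompute every block hash and check the prev_hash links."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True
```

This is the property that lets distributed verifiers (rather than a trusted aggregator) detect modification: any retroactive change to a stored transaction makes `verify_chain` fail.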
However, the operation of UAVs in the sky is highly vulnerable to several privacy and security risks that target data accountability, data integrity, data authorization, and reliability . \nRecent years have also witnessed a new research trend on the combination of blockchain and UAVs for solving critical challenges in UAV networks and empowering new 5G IoT applications. For instance, the work in takes advantage of consortium blockchain for a spectrum sharing platform between the aerial and terrestrial communication systems for UAV-based cellular networks. The key idea is to establish the distributed shared database to perform secure spectrum trading and sharing between the mobile network operators (MNOs) and the UAV operators. The proposed model possibly addresses two key issues: security risks of UAV-based spectrum trading due to the unauthorized spectrum exploitations of malicious UAVs, and privacy leakages caused by the centralized sharing architecture with third parties. \nTo support the security of UAV communication in ad hoc networks (UAANETs), permissioned blockchain has been adopted in to provide decentralized content storage services and detect internal attackers during efficient content dissemination. The key reason behind the blockchain adoption for UAANETs is the ability of blockchain to securely maintain a consistent and tamper-resistant ledger to record all the transactions of content sharing and storage in a decentralized environment without the need for any central authority, which is applicable to the complex and vulnerable network. Besides, to overcome the limitations of traditional blockchain models with low throughput and high resource consumption, an efficient and scalable Adaptive Delegate Consensus Algorithm (ADCA) is integrated to perform consensus without the mining procedures. Similarly, the work also proposes to use blockchain for secure data dissemination in UAV networks. 
Data collected from UAVs can be recorded and stored in decentralized database ledgers to mitigate the storage burden on UAVs. The use of blockchain allows any of the users in the UAV network to participate in consensus processes and implement verification without any external authorities, such as cloud servers. The proposed model has the potential to solve various security issues, including spoofing, Denial-of-Service (DoS), eavesdropping and data tampering. 
The authors in consider an autonomous economic system with UAVs where blockchain acts as a protocol for autonomous business activities in modern industrial and business processes. IoT devices, robots and UAVs in multi-agent systems can exchange data with each other to perform automated collaborative work (i.e. in a smart factory) and share collected data with users via a peer-to-peer ledger. Blockchain links all agents together to create a distributed network where any agent can join and perform block verification to maintain the correct operation and security of the system. To avoid the issues of data leakage or data loss during transmission among UAVs, blockchain is also considered in . The data transfer process occurs within the blockchain, which allows storing all user information and exchange records for security management. 
More interestingly, blockchain has been considered and incorporated with cloud/edge computing for enabling emerging UAV-based applications. The authors in analyse a blockchain-enabled secure data acquisition scheme for UAV swarm networks in which data are collected from IoT devices employing UAV swarms. Each of the UAVs maintains its own shared key in order to expedite communication with IoT devices when performing the security mechanism (i.e., sign, verify, encrypt, and decrypt). A smart contract is also employed in order to handle the IoT devices and missions in data acquisition. The study in also explores a Hyperledger Fabric blockchain design for UAV swarm networks.
Each communication request among UAVs is recorded as a transaction, which is validated and verified by the mining process enabled by the computing power of all entities in the UAV network for maintaining the blockchain. 
In an effort to enhance the security of edge-based UAV networks, the work in proposes a neural blockchain-based transport model, as shown in Fig. 13, to ensure ultra-reliability for UAV communication and enable intelligent transport during UAV caching through user equipment (UE) via MEC. The blockchain acts as a distributed database ledger which is shared among all the involved entities (UAVs, MEC servers, and users), identified by their public keys (IDs). The smart contract is responsible for monitoring user access and performing verification, while blockchain provides a secure data sharing environment to facilitate content sharing and data delivery between the UEs and the caching servers.
\begin{figure}
	\centering
	\includegraphics[height=8.35cm,width=7.5cm]{Blockchain_IoTDrone.pdf}
	\caption{ Blockchain for secure 5G UAV networks . }
	\label{fig11}
	\vspace{-0.15in}
\end{figure}
In addition, the authors in integrate blockchain in a cloud-assisted UAV network for surveillance services to investigate the safety condition of dam infrastructure in real time. Two blockchains are designed: a public Bitcoin blockchain for payment trading, and a private blockchain for data storage on the network of UAV providers, users, and cloud providers. To join the blockchain, each entity, i.e. an IoT sensor user, should obtain a certificate from a certificate authority. Data gathered from cloud providers is treated as an object, which is then hashed and anchored by the UAV provider into the blockchain network.
The solution using blockchain brings various benefits, including reduced latency due to direct communication without passing through a third party, and high data integrity and tamper resistance thanks to the hash function and consensus process.
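The "hash and anchor" pattern used in the surveillance scheme above can be sketched in a few lines. This is an illustrative Python fragment with made-up reading values; the single combined digest stands in for whatever anchoring format a real deployment would record on-chain, while the raw sensor data stays off-chain.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def anchor_readings(readings):
    """Hash each off-chain sensor reading, then hash the concatenated
    digests into one root digest that is anchored on the blockchain."""
    leaves = [sha256_hex(r.encode()) for r in readings]
    return sha256_hex("".join(leaves).encode())

def verify_readings(readings, anchored_root):
    """Anyone holding the readings can recompute and compare the root."""
    return anchor_readings(readings) == anchored_root
```

Because only the digest is anchored, the scheme gains the tamper-evidence of the ledger without paying its storage cost for bulk sensor data.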
We also identify possible research challenges and open issues in the field along with the future research directions that should be considered and investigated to encourage more innovative solutions and studies in this promising area.", "id": "98c513b6-3c53-40f8-ac2a-2fc1ee45e75a", "level": "section", "origin_cites_number": 0, "parent_id": "5bf49667-4d6b-4397-b786-063258be0fd6", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Main findings, Challenges and Future research directions " ] ], "subsections": [ "99715869-155e-4b0d-b6e7-33b675f1975f", "42b2762c-01aa-40af-98a6-be78834e99ec", "eb6e9dd9-9621-4c2e-a35c-f0a3d8ce256d" ], "title": "Main findings, Challenges and Future research directions " }, { "cite_extract_rate": 0, "cites": [], "content": "The comprehensive literature review on the integration of blockchain in 5G technologies, 5G services and IoT applications reveals many important findings, which would enable to open up numerous opportunities for the newly emerging 5G scenarios. This sub-section will highlight the key findings inherited from the convergence of these promising technologies.", "id": "99715869-155e-4b0d-b6e7-33b675f1975f", "level": "subsection", "origin_cites_number": 0, "parent_id": "98c513b6-3c53-40f8-ac2a-2fc1ee45e75a", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Main findings, Challenges and Future research directions " ], [ "subsection", "\tMain findings" ] ], "subsections": [ "d3a89512-2ca3-4b75-bf84-a72724c1be2c", "ebfff875-0ac1-48c8-b604-d11fa0b2e482", "a2e7ae67-613c-498c-9804-f50498bda878" ], "title": "\tMain findings" }, { "cite_extract_rate": 0, "cites": [], "content": "Blockchain can offer many promising technical properties such as decentralization, privacy, immutability, traceability, and transparency to empower 5G technologies. 
Reviewing the literature, we find that blockchain can support 5G technologies well, mainly from three key aspects: security, system performance, and resource management. The current 5G technology infrastructure is mainly enabled by centralized network settings, such as edge/cloud computing and SDN, which obviously show security vulnerabilities due to the reliance on third parties. Blockchain can help build decentralized network architectures for 5G technology platforms. For example, the concept of blockchain-based cloud computing enables decentralization of cloud/edge 5G networks , which gets rid of centralized control at the core network and offers a decentralized fair agreement via the blockchain consensus platform. Even when an entity is compromised by malicious attacks or threats, the overall operation of the involved network is still maintained via consensus on distributed ledgers. More interestingly, blockchain can help establish secure peer-to-peer communication among users (i.e. in D2D communication), using the computing power of all participants to operate the network instead of passing through a third-party intermediary. This would potentially reduce communication latency and transaction costs, and provide global accessibility for all users, all of which will enhance the overall system performance. 
Furthermore, blockchain is expected to improve resource management for network function virtualization and network slicing. On the one hand, blockchain can boost trust and transparency among participants and stakeholders, enabling a more seamless and dynamic exchange of computing resources among cooperating parties. Secure spectrum resource provision can be achieved via blockchain, which provides a decentralized sharing platform across the network of servers, service providers and customers.
Moreover, network function resources can be shared at a faster speed compared to conventional centralized schemes, which thus facilitates service delivery. Currently, the design of network slice instances is based on open cloud-based architectures, and attackers may abuse the capacity elasticity of one slice to consume the resources of another target slice, rendering the target slice out of service. Blockchain can be exploited to build reliable end-to-end network slices and allow network slice providers to manage their resources, providing dynamic control of resource reliability.
Many research studies on blockchain , , , demonstrate that blockchain adoption is beneficial to spectrum management in terms of better scalability, power efficiency in spectrum usage, improved accessibility with a high degree of security, and better system protection against DoS attacks and threats. 
Besides, blockchain can simplify network virtualization in 5G networks with a high degree of security , . The blockchain technology can provide the required characteristics of non-repudiation and immutability to overcome the shortcomings of previous centralized configuration settings in virtual networks. More precisely, blockchain is capable of creating secure virtual wireless networks (VWNs) so that wireless resource owners can sublease their wireless resources (e.g., a slice of RF spectrum, infrastructure) to mobile virtual network operators (MVNOs). In such a decentralized virtual network, smart contracts can be very useful to provide automation and transparency in a distributed way instead of trusting a particular node or authority to process transactions, which also enhances the trustworthiness of resource management services. Building a fair and trusted economic scheme empowered by blockchain can be a notable solution for network interference control, especially in small cell deployments . 
In addition to the above 5G services, blockchain also provides privacy and security benefits to 5G networks. By publishing user data to the ledger, where data is signed by hash functions and appended immutably to blocks, blockchain platforms ensure strong data protection. Blockchain is capable of providing users full control of their personal data when sharing it on the network, unlike traditional approaches, which hinder users from tracking their data .
Besides, blockchain is expected to offer a wide range of security merits, such as access control enabled by smart contracts, data integrity thanks to the decentralized ledger, and authentication via the consensus process and smart contracts.
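As a concrete illustration of the smart-contract-based access control mentioned above, the toy Python class below models a contract whose state only changes when the caller is authorized. The class and method names are our own, and a real deployment would express this in a contract language such as Solidity; the point is that every grant, revocation and check is a deterministic contract call that all validating nodes can replay.

```python
class AccessControlContract:
    """Toy model of on-chain access control: an owner grants or revokes
    permissions, and every check is a deterministic contract call."""

    def __init__(self, owner):
        self.owner = owner
        self.allowed = set()

    def grant(self, caller, user):
        if caller != self.owner:
            raise PermissionError("only the owner may grant access")
        self.allowed.add(user)

    def revoke(self, caller, user):
        if caller != self.owner:
            raise PermissionError("only the owner may revoke access")
        self.allowed.discard(user)

    def can_access(self, user):
        return user in self.allowed
```

Because the permission set lives in replicated contract state rather than on a central server, no single compromised node can silently rewrite who is authorized.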
Blockchain comes as a notable solution to address such challenges by securing transactions and verifying user access.
Recent years have also witnessed a new research trend on the combination of blockchain and UAVs for solving critical challenges in UAV networks and empowering new 5G IoT applications. The UAV, with its high mobility and flexibility, can be a promising transmission solution for aerial and terrestrial communication systems, but it still faces critical security challenges due to adversaries and short battery life. Blockchain would be a notable means to solve such challenges. Recent studies show the feasibility of blockchain in UAV networks , , . A UAV can collect data from IoT devices and offload it to the blockchain, where the data is hashed and recorded securely on the ledger. This would not only protect IoT data against threats but also reduce the data storage burden on the UAV, which is promising for prolonging the duration of UAV operations for better service delivery.
The arrival of this emerging technology has the potential to change the current shape of 5G infrastructure and transform industrial network architectures with advanced blockchain-5G paradigms. However, the thorough survey on the use of blockchain for 5G networks also reveals several critical research challenges and open issues that should be considered carefully during system design. We analyse them from three main aspects, namely blockchain scalability, blockchain security, and QoS limitations, as detailed in the following.
Obviously, the current blockchain systems have serious scalability bottlenecks regarding the number of replicas in the network, as well as performance concerns such as constrained throughput (number of transactions per second) and latency (time required to add a block of transactions to the blockchain) . Many blockchains have long waiting times for transactions to be appended to the chain because of block size limitations. Therefore, the block generation time increases rapidly, which limits the overall system throughput. In order to sustain the huge volume of real-world transactions in 5G applications, proper solutions should be considered carefully to improve the throughput. 
\item	\textit{Storage:} When using blockchain in 5G networks, a huge quantity of data generated by ubiquitous IoT devices is processed by the blockchain for 5G services such as data sharing, resource management and user transaction monitoring. In conventional blockchain systems, each blockchain node must process and store a copy of the complete transaction data. This can pose a storage and computation burden on resource-constrained IoT devices participating in the blockchain network. Moreover, if all transaction data are stored on chain, the blockchain will grow very large over time and become difficult to maintain on the chain .
\item	\textit{Networking:} Blockchain networking is another issue that affects the scalability of blockchain systems. Blockchain is computationally expensive and requires significant bandwidth resources to perform the computational mining puzzle. However, in 5G scenarios such as ultra-dense networks, where resources are very limited due to the demands from IoT devices and service operators, it may be impossible to meet the resource requirements for blockchain to achieve large-scale transaction processing.
Further, stemming from the property of blockchain consensus mechanisms, which require multiple transaction transmissions among nodes to validate a block, the blockchain operation consumes considerable network resources (i.e. bandwidth, mining power, and transmission power), which also results in high network latency . 
\end{itemize}
Considering complex 5G IoT scenarios, i.e. smart cities, the IoT workload and data are enormous and will thus result in rapid growth of the IoT blockchain size, making it difficult to process high volumes of data. The end-to-end latency in 5G networks is expected to reach less than 1 millisecond for payload and data transmissions. This vision requires careful consideration in designing blockchain platforms before integrating them into 5G systems. Many research efforts have been dedicated to improving the performance and scalability of blockchain from different design perspectives, such as mining hardware design , hybrid consensus protocols , and on-chain and off-chain solutions , . Very recently, a solution using 5G network virtualization has also been considered to solve the scalability of blockchain by decoupling blockchain management from transaction processing to improve the QoS of blockchain operations.
The preliminary results are expected to shed light on blockchain research for solving scalability issues and improving system performance in integrated blockchain-5G networks.
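The throughput limits discussed above follow from simple arithmetic on block size, transaction size and block interval. The figures in the sketch below are illustrative assumptions (roughly Bitcoin-like), not protocol constants; with them the structural upper bound comes out in the single digits of transactions per second, consistent with the low rates quoted earlier, and real observed rates are lower still.

```python
def max_throughput(block_size_bytes: int, avg_tx_bytes: int,
                   block_interval_s: float) -> float:
    """Upper bound on transactions per second for a chain that appends
    one block of at most block_size_bytes every block_interval_s seconds."""
    txs_per_block = block_size_bytes // avg_tx_bytes
    return txs_per_block / block_interval_s

# Illustrative, Bitcoin-like assumptions: ~1 MB blocks, ~250-byte
# transactions, one block roughly every 600 s.
bound = max_throughput(1_000_000, 250, 600)
```

The same formula makes clear why the usual levers (bigger blocks, smaller transactions, shorter intervals) each trade against propagation latency, storage growth, or fork risk.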
Some critical security vulnerabilities include timestamp dependence, mishandled exceptions, and reentrancy attacks on smart contracts in 5G applications.
In addition, in current 5G IoT systems, data can be stored off-chain in cloud computing to reduce the burden on the blockchain. However, this storage architecture can raise new privacy concerns. Specifically, an autonomous entity can act as a network member that honestly performs cloud data processing while obtaining personal information without the consent of users, which leads to serious information leakage. External attackers can also gain malicious access to retrieve cloud data, or even illegally alter and modify outsourced IoT records on the cloud. Besides, privacy leakage of blockchain transactions is another significant problem. Although blockchain uses encryption and digital signatures to protect transactions, recent measurement results show that a certain amount of transaction information is leaked during blockchain operations, and the data protection of blockchain is not very robust in practice. Furthermore, criminals can leverage smart contracts for illegal purposes, facilitating the leakage of confidential information and the theft of cryptographic keys. Importantly, the privacy of IoT users cannot be ensured once they join the network. Indeed, by participating in the blockchain network, all user information, such as the addresses of sender and receiver and transaction amounts, is publicly available on the network due to the transparency of blockchain. Consequently, curious users or attackers can analyse such information and keep track of participants' activities, which can lead to the leakage of secrets such as personal data. 
Security problems of blockchain in 5G networks can be mitigated by recent security improvements.
For example, a mining pool system called SmartPool was proposed to improve transaction verification in blockchain mining to mitigate security bottlenecks, such as 51\\% vulnerability, ensuring that the ledger cannot be hacked by increasingly sophisticated attackers. Particularly, recent works , introduced efficient security analysis tools to investigate and prevent threat potential in order to ensure trustful smart contract execution on blockchain. Such research efforts make contributions to addressing security issues in blockchain 5G environments and improving the overall performance of the system.", "id": "91c05c87-8806-4fc7-b2fe-c3827d7f5adc", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "42b2762c-01aa-40af-98a6-be78834e99ec", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Main findings, Challenges and Future research directions " ], [ "subsection", "\tChallenges and Open issues" ], [ "subsubsection", "\tBlockchain security and privacy" ] ], "subsections": [], "title": "\tBlockchain security and privacy" }, { "cite_extract_rate": 0.18181818181818102, "cites": [ 5662, 8938 ], "content": "With the advances of mobile 5G technologies, blockchain now can be implemented in mobile devices to provide more flexible blockchain-based solutions for 5G IoT applications. The foundation of the efficient and secure operation of blockchain is a computation process known as mining. In order to append a new transaction to the blockchain, a blockchain user, or a miner, needs to run a mining puzzle, i.e. Proof of Work (PoW) or Proof of Stake (PoS) which is generally complicated and requires vast computing and storage resources. Further, blockchain also requires network bandwidth resources to perform its consensus process. 
Without careful design, implementing blockchain to operate the involved IoT applications may lead to Quality of Service (QoS) degradation with long latency, high energy consumption, high bandwidth demands, and high network congestion. Obviously, the integration of blockchain can introduce new QoS challenges that would negatively impact the overall performance of blockchain-5G networks. It is worth noting that one of the most important goals of future 5G is to provide user-centric values with high QoS to satisfy the growing demands of user traffic and emerging services . Therefore, it is vitally important to develop efficient solutions that can enhance service qualities of blockchain ecosystems to empower the future blockchain-5G networks. \nRecently, some strategies have been proposed to solve the above issues from different perspectives. On the one hand, the design of lightweight blockchain platforms can be a notable solution to enhance the QoS, by eliminating computation consensus mechanisms of blockchain , compressing consensus storage , or designing lightweight block validation techniques , , . These solutions potentially simplify the blockchain mining process for lower energy consumption and better latency efficiency, which make great contributions to the QoS improvements in blockchain-5G applications. On the other hand, computation offloading is another feasible approach to solve the low QoS issues of blockchain . With the development of 5G technologies such as edge/cloud computing, SDN, D2D communication, blockchain computation tasks (i.e. consensus puzzle) can be offloaded to resourceful servers such as edge/cloud servers , by combining SDN and D2D communication to bridge the gap between constrained resources of local mobile devices and growing demands of executing the computation tasks. 
By using offloading solutions, the performance of blockchain-5G systems would be improved significantly, such as saving system energy, reducing computation latency and improving the quality of computation experience for mobile devices. As a result, the system QoS will be enhanced while blockchain features are ensured for high level network security. The offloading optimization solutions should be explored further to balance both blockchain and the core 5G networks for future mobile blockchain-5G applications.", "id": "2eea0940-510a-45f1-a0f2-74f605cd3e22", "level": "subsubsection", "origin_cites_number": 11, "parent_id": "42b2762c-01aa-40af-98a6-be78834e99ec", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Main findings, Challenges and Future research directions " ], [ "subsection", "\tChallenges and Open issues" ], [ "subsubsection", "QoS limitations" ] ], "subsections": [], "title": "QoS limitations" }, { "cite_extract_rate": 0, "cites": [], "content": "Motivated by our detailed survey on research studies on the convergence of blockchain and 5G networks, we point out possible research directions which should be considered in the future works.", "id": "eb6e9dd9-9621-4c2e-a35c-f0a3d8ce256d", "level": "subsection", "origin_cites_number": 0, "parent_id": "98c513b6-3c53-40f8-ac2a-2fc1ee45e75a", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Main findings, Challenges and Future research directions " ], [ "subsection", "\tFuture research directions" ] ], "subsections": [ "6b3c22e9-6e16-4452-98e9-1fff7b335885", "19d8f721-959e-4344-9452-d9b207d8f980", "220834fd-3278-44c3-832e-3e0f67976762" ], "title": "\tFuture research directions" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 8938 ], "content": "The rapid developments in blockchain technology are creating new opportunities for artificial intelligence applications. 
The revolution of machine learning (ML) technology transforms current 5G services through its ability to learn from data and provide data-driven insights, decision support, and predictions. These advantages of machine learning would transform the way data analytics are performed to assist intelligent services in the age of 5G. For example, ML has the ability to interact with the wireless environment to facilitate resource management and user communication . ML also exhibits great potential in data feature discovery to predict data usage behaviour for developing control algorithms, such as data traffic estimation for network congestion avoidance or user access tracking for privacy preservation . In recent years, there has been a growing trend of integrating machine learning with blockchain for 5G use case domains. For example, deep reinforcement learning (DRL) has been investigated and combined with blockchain to enable secure and intelligent resource management and orchestration in 5G networks. An advanced DRL algorithm is proposed to accurately analyze the topology, channel assignment, and interference of the current wireless network, and then select the most appropriate wireless access mode (i.e., cellular network, V2V, or D2D) to improve communication rate, reduce energy consumption, or enhance user experience. Meanwhile, blockchain provides a secure decentralized environment where operating reports and network configurations can be replicated and synchronized among edge servers, which can facilitate network diagnosis and enable reliable orchestration. 
Other significant works also propose the integrated blockchain-DRL architectures for flexible and secure computation offloading , reliable network channel selection , and networking optimization .", "id": "6b3c22e9-6e16-4452-98e9-1fff7b335885", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "eb6e9dd9-9621-4c2e-a35c-f0a3d8ce256d", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Main findings, Challenges and Future research directions " ], [ "subsection", "\tFuture research directions" ], [ "subsubsection", "\tIntegrating machine learning with blockchain for 5G" ] ], "subsections": [], "title": "\tIntegrating machine learning with blockchain for 5G" }, { "cite_extract_rate": 0.14285714285714202, "cites": [ 5663 ], "content": "In the age of data explosion, big data becomes a hot research topic in 5G . A large amount of multimedia data generated from ubiquitous 5G IoT devices can be exploited to enable data-related applications, for example, data analytics, data extraction empowered by artificial intelligence solutions . Cloud computing services can offer high storage capabilities to cope with the expansion of quantity and diversity of digital IoT data. However, big data technologies can face various challenges, ranging from data privacy leakage, access control to security vulnerabilities due to highly sophisticated data thefts . Further, big data analytics on cloud/edge computing are also highly vulnerable to cyberattacks in the complex operational and business environments. \nIn such contexts, blockchain appears as the ideal candidate to solve big data-related issues . Indeed, the decentralized management associated with authentication and reliability of blockchain can provide high-security guarantees to big data resources. Specifically, blockchain can offer transparency and trustworthiness for the sharing of big data among service providers and data owners. 
By eliminating the fear of security bottlenecks, blockchain can enable universal data exchange which empowers large-scale 5G big data deployments. Recently, some big data models enabled by blockchain are proposed, such as data sharing with smart contracts , access control for big data security , or privacy preservation for big data analytics . Such preliminary results show that blockchain can bring various advantages in terms of security and performance enhancement to big data applications in the age of 5G.", "id": "19d8f721-959e-4344-9452-d9b207d8f980", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "eb6e9dd9-9621-4c2e-a35c-f0a3d8ce256d", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Main findings, Challenges and Future research directions " ], [ "subsection", "\tFuture research directions" ], [ "subsubsection", "Blockchain for big data in 5G" ] ], "subsections": [], "title": "Blockchain for big data in 5G" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 7975 ], "content": "Beyond the fifth-generation (B5G) networks, or so-called 6G, will emerge to provide superior performance to 5G and meet the increasingly high requirements of future mobile services and applications in the 2030s. The key drivers of 6G will be the convergence of all the past features, such as network densification, high throughput, high reliability, low energy consumption, and massive connectivity . According to , 6G wireless networks are expected to support massive user connectivity and multi-gigabits data transmissions with super-high throughput, extremely low-latency communications (approximately 10 µs), and support underwater and space communications. The 6G networks are also envisioned to create new human-centric values enabled by numerous innovative services with the addition of new technologies. 
The new services may include smart wearables, implants, fully autonomous vehicles, computing reality devices, 3D mapping, smart living, Internet of Nano-Things, deep-sea sightseeing and space travel . \nTo satisfy such applications for the 2030 intelligent information society, 6G will have to meet a number of stringent technical requirements. Following this rationale, high security and privacy are the all-important features of 6G, which shall receive special attention from the wireless research community . With the promising security capability, blockchain is expected to play a pivotal role in the successful development of the future 6G networks. Blockchain potentially provides a wide range of security services, from decentralization and transparency to privacy and traceability without needing any third parties, which will not only enhance the security of 6G networks but also promise to promote the transformation of future mobile services . The Federal Communications Commission (FCC) also suggests that blockchain will be a key technology for 6G services. For example, it is believed that blockchain-based spectrum sharing is a promising technology for 6G to provide secure, smarter, low-cost, and highly efficient decentralized spectrum sharing. Blockchain can also enable security and privacy of quantum communications and computing, molecular communications, and the Internet of Nano-Things via secure decentralized ledgers. \nIn summary, blockchain has provided enormous opportunities to 5G mobile networks thanks to its exceptional security properties. The convergence of blockchain and 5G technologies has reshaped and transformed the current 5G service provision models with minimal management effort and high system performance with high degrees of security. 
This detailed survey is expected to pave the way for innovative new research and solutions that empower the future blockchain-5G networks.", "id": "220834fd-3278-44c3-832e-3e0f67976762", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "eb6e9dd9-9621-4c2e-a35c-f0a3d8ce256d", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Main findings, Challenges and Future research directions " ], [ "subsection", "\tFuture research directions" ], [ "subsubsection", "Blockchain for 6G" ] ], "subsections": [], "title": "Blockchain for 6G" }, { "cite_extract_rate": 0, "cites": [], "content": "Blockchain is an emerging technology that has drawn significant attention recently and is recognized as one of the key enablers for 5G networks thanks to its unique role in security assurance and network performance improvements. In this paper, we have explored the opportunities brought by blockchain to empower the 5G systems and services through a state-of-the-art survey and extensive discussions based on the existing literature in the field. This work is motivated by the lack of a comprehensive review on the integration of blockchain and 5G networks. In this article, we have presented a comprehensive survey focusing on the current state-of-the-art achievements in the integration of blockchain into 5G wireless networks. Particularly, we have first provided a brief overview on the background knowledge of blockchain and 5G networks and highlighted the motivation of the integration. We have then explored and analysed in detail the potential of blockchain for enabling key 5G technologies, such as cloud computing, edge computing, Software Defined Networks, Network Function Virtualization, Network Slicing, and D2D communication. 
A comprehensive discussion on the use of blockchain in a wide range of popular 5G services has been provided, with a prime focus on spectrum management, data sharing, network virtualization, resource management, interference management, federated learning, privacy and security services. Our survey has also covered a holistic investigation on the applications of blockchain in 5G IoT networks and reviews the latest developments of the cooperated blockchain-5G IoT services in various significant use-case domains, ranging from smart healthcare, smart city, smart transportation to smart grid and UAVs. Through the comprehensive survey on the related articles, we have summarized the main findings derived from the integrations of blockchain in 5G networks and services. Finally, we have pointed out several research challenges and outlined potential research directions toward 6G networks.\nResearch on blockchain for 5G wireless networks is still in its infancy. But it is obvious that blockchain will significantly uplift the shape and experience of future mobile services and applications. We believe our timely study will shed valuable light on the research problems associated with the blockchain-5G integration as well as motivate the interested researchers and practitioners to put more research efforts into this promising area. \n\\bibliography{Ref}\n\\bibliographystyle{IEEEtran}\n\\end{document}", "id": "9242e86a-2d2a-4b42-972b-12cec34c594f", "level": "section", "origin_cites_number": 0, "parent_id": "5bf49667-4d6b-4397-b786-063258be0fd6", "prefix_titles": [ [ "title", "Blockchain for 5G and Beyond Networks: \\\\ A State of the Art Survey" ], [ "section", "Conclusions" ] ], "subsections": [], "title": "Conclusions" } ]
103
[ 7971, 1295, 5651, 7972, 2145, 7973, 5652, 5653, 602, 3473, 659, 5654, 8937, 5656, 5655, 7974, 8938, 2153, 5657, 5658, 5659, 5660, 5661, 5662, 5663, 7975 ]
1.028046
[ "Wenhao Yu", "Chenguang Zhu", "Zaitang Li", "Zhiting Hu", "Qingyun Wang", "Heng Ji", "Meng Jiang" ]
A Survey of Knowledge-Enhanced Text Generation
2020
2020-10-09T06:46:46Z
cs.CL
The goal of text-to-text generation is to make machines express like a human in many applications such as conversation, summarization, and translation. It is one of the most important yet challenging tasks in natural language processing (NLP). Various neural encoder-decoder models have been proposed to achieve the goal by learning to map input text to output text. However, the input text alone often provides limited knowledge to generate the desired output, so the performance of text generation is still far from satisfaction in many real-world scenarios. To address this issue, researchers have considered incorporating (i) internal knowledge embedded in the input text and (ii) external knowledge from outside sources such as knowledge base and knowledge graph into the text generation system. This research topic is known as \textit{knowledge-enhanced text generation}. In this survey, we present a comprehensive review of the research on this topic over the past five years. The main content includes two parts: (i) general methods and architectures for integrating knowledge into text generation; (ii) specific techniques and applications according to different forms of knowledge data. This survey can have broad audiences, researchers and practitioners, in academia and industry.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "5f69c7fa-bf2b-4523-bd34-303cffb73a99", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ] ], "subsections": [ "130eff31-5033-4384-9c99-729ff90fe873", "e083ab82-0002-443e-8734-d6e90fa797b2", "b426fa47-f85f-4367-a1dc-a6d9c866a437", "fd996682-bb73-4d48-801b-792919f81dc2", "bb6a84ff-227f-4a65-a3fd-9b82bdf7f80f", "918c7c6b-1917-4b0e-a06c-ecaf632c2d80", "70d1664f-e38f-49e9-9a57-f9700fbe2848", "4d48ae73-8c5b-48f8-b370-bc45fe5245eb" ], "title": "root" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 166, 9147, 2401, 2002, 790, 38 ], "content": "Text generation, which is often formally referred as natural language generation (NLG), is one of the most important yet challenging tasks in natural language processing (NLP)~.\nNLG aims at producing understandable text in human language from linguistic or non-linguistic data in a variety of forms such as textual data, numerical data, image data, structured knowledge bases, and knowledge graphs.\nAmong these, text-to-text generation is one of the most important applications and thus often shortly referred as ``text generation''. 
Researchers have developed numerous technologies for this task in a wide range of applications~.\nText generation takes text (e.g., a sequence, keywords) as input, processes the input text into semantic representations, and generates desired output text.\nFor example, machine translation generates text in a different language based on the source text; summarization generates an abridged version of the source text to include salient information; question answering (QA) generates textual answers to given questions; dialogue system supports chatbots to communicate with humans with generated responses.\nWith the recent resurgence of deep learning technologies~, deep neural NLG models have achieved remarkable performance in enabling machines to understand and generate natural language. A basic definition of the text generation task is to generate an expected \\emph{output sequence} from a given \\emph{input sequence}, called sequence-to-sequence (Seq2Seq). \nThe Seq2Seq task and model were first introduced in 2014~. It maps an input text to an output text under encoder-decoder schemes. The encoder maps the input sequence to a fixed-sized vector, and the decoder maps the vector to the target sequence.\nSince then, developing NLG systems has rapidly become a hot topic. Various text generation models have been proposed under deep neural encoder-decoder architectures. 
Popular architectures include recurrent neural network (RNN) encoder-decoder~, convolutional neural network (CNN) encoder-decoder~, and Transformer encoder-decoder~.\nNevertheless, the input text alone contains limited knowledge to support neural generation models to produce the desired output.\nMeanwhile, the aforementioned methods generally suffer from an inability to well comprehend language, employ memory to retain and recall knowledge, and reason over complex concepts and relational paths; as indicated by their name, they involve encoding an input sequence, providing limited reasoning by transforming their hidden state given the input, and then decoding to an output.\nTherefore, the performance of generation is still far from satisfaction in many real-world scenarios. For example, in dialogue systems, conditioning on only the input text, a text generation system often produces trivial or non-committal responses of frequent words or phrases in the corpus~, such as \\emph{``Me too.''} or \\emph{``Oh my god!''} given the input text \\emph{``My skin is so dry.''} These mundane responses lack meaningful content, in contrast to human responses rich in knowledge. In comparison, humans are constantly acquiring, understanding, and storing knowledge from \\emph{broader sources} so that they can be employed to understand the current situation in communicating, reading, and writing. For example, in conversations, people often first select \\emph{concepts from related topics} (e.g., sports, food), then organize those topics into understandable content to respond; for summarization, people tend to write summaries containing \\emph{keywords} used in the input document and perform necessary modifications to ensure grammatical correctness and fluency; in question answering (QA), people use \\emph{commonsense} or \\emph{professional knowledge} pertained to the question to infer the answer. 
Therefore, it is often the case that knowledge beyond the input sequence is required to produce informative output text.\n\\vspace{-0.05in}", "id": "130eff31-5033-4384-9c99-729ff90fe873", "level": "section", "origin_cites_number": 10, "parent_id": "5f69c7fa-bf2b-4523-bd34-303cffb73a99", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "Introduction" ] ], "subsections": [ "67a93181-da5c-4a6d-95df-1dc33e363289", "feb06be2-5673-4204-8578-9022624f9d3c", "bb957f7c-33ba-4e63-98aa-f033211ce23b" ], "title": "Introduction" }, { "cite_extract_rate": 0.8, "cites": [ 7046, 3119, 2378, 2369 ], "content": "\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{figures/int-ent-knowledge-framework.pdf}\n \\end{center}\n \\vspace{-0.1in}\n \\caption{We divide different knowledge sources into internal knowledge and external knowledge. Internal knowledge creation takes place within the input text(s), while external knowledge acquisition occurs when knowledge is provided from outside sources (e.g., Wikipedia, ConceptNet~).}\n \\label{fig:int-ent-framework}\n\\vspace{-0.05in}\n\\end{figure}\nIn general, knowledge is the familiarity, awareness, or understanding that coalesces around a particular subject. \nIn NLG systems, knowledge is an awareness and understanding of the input text and its surrounding context.\nThese knowledge sources can be categorized into internal knowledge and external knowledge (see Figure~\\ref{fig:int-ent-framework}).\n\\textit{Internal knowledge} creation takes place within the input text(s), including but not limited to keyword, topic, linguistic features, and internal graph structure. \\textit{External knowledge} acquisition occurs when knowledge is provided from outside sources, including but not limited to knowledge base, external knowledge graph, and grounded text. 
\nThese sources provide information (e.g., commonsense triples, topic words, reviews, background documents) that can be used as knowledge through various neural representation learning methods, and then applied to enhance the process of text generation. \nIn addition, knowledge introduces interpretability for models with explicit semantics.\nThis research direction of incorporating knowledge into text generation is named as \\textit{knowledge-enhanced text generation}.\n\\vspace{-0.05in}\n\\begin{problem}[Knowledge-enhanced Text Generation] \nGiven a text generation problem where the system is given an input sequence $X$,\nand aims to generate an output sequence $Y$.\nAssume we also have access to additional knowledge denoted as $K$. Knowledge-enhanced text generation aims to incorporate the knowledge $K$ to enhance the generation of $Y$ given $X$, through leveraging the dependencies among the input text, knowledge, and output text.\n\\label{prob:seq2seq}\n\\end{problem}\nMany existing knowledge-enhanced text generation systems have demonstrated promising performance on generating informative, logical, and coherent texts. In dialogue systems, a topic-aware Seq2Seq model helped understand the semantic meaning of an input sequence and generate a more informative response such as \\emph{``Then hydrate and moisturize your skin.''} to the aforementioned example input \\emph{``My skin is so dry.''} \nIn summarization, knowledge graph produced a structured summary and highlight the proximity of relevant concepts, when complex events related with the same entity may span multiple sentences. 
A knowledge graph enhanced Seq2Seq model generated summaries that were able to correctly answer 10\\% more topically related questions~.\nIn question answering (QA) systems, facts stored in knowledge bases completed missing information in the question and elaborate details to facilitate answer generation .\nIn story generation, using commonsense knowledge acquired from knowledge graph facilitated understanding of the storyline and better\nnarrate following plots step by step, so each step could be reflected as a link on the knowledge graph and the whole story would be a path~.\n\\vspace{-0.05in}", "id": "67a93181-da5c-4a6d-95df-1dc33e363289", "level": "subsection", "origin_cites_number": 5, "parent_id": "130eff31-5033-4384-9c99-729ff90fe873", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "Introduction" ], [ "subsection", "What is Knowledge-enhanced Text Generation?" ] ], "subsections": [], "title": "What is Knowledge-enhanced Text Generation?" }, { "cite_extract_rate": 0.5, "cites": [ 1165, 9147 ], "content": "Recent years have witnessed a surge of interests in\ndeveloping methods for incorporating knowledge in NLG beyond input text. However, there is a lack of comprehensive survey of this research topic.\nRelated surveys have laid the foundation of discussing this topic.\nFor example, Garbacea et al.~ and Gatt et al.~ reviewed model architectures for core NLG tasks but did not discuss knowledge-enhanced methods.\nJi et al.~ presented a review on\nknowledge graph techniques which could be used for enhancing NLG.\nWang et al.~ summarized how to represent structural knowledge such as knowledge base and knowledge graph for reading comprehension and retrieval.\nTo the best of our knowledge, this is the first survey that presents a comprehensive review of knowledge-enhanced text generation. It aims to provide NLG researchers a synthesis and pointer to related research. 
Our survey includes a detailed discussion about how NLG can benefit from recent progress in deep learning and artificial intelligence, including technologies such as graph neural network, reinforcement learning, and neural topic modeling.\n\\vspace{-0.05in}", "id": "feb06be2-5673-4204-8578-9022624f9d3c", "level": "subsection", "origin_cites_number": 4, "parent_id": "130eff31-5033-4384-9c99-729ff90fe873", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "Introduction" ], [ "subsection", "Why a Survey of Knowledge-enhanced Text Generation?" ] ], "subsections": [], "title": "Why a Survey of Knowledge-enhanced Text Generation?" }, { "cite_extract_rate": 0, "cites": [], "content": "To start with, we note that the \\textit{first challenge} in knowledge-enhanced NLG is to \\emph{obtain} useful related knowledge from diverse sources.\nThere has been a rising line of work that discovers knowledge from topic, keyword, knowledge base, knowledge graph and knowledge grounded text.\nThe \\textit{second challenge} is how to effectively \\emph{understand} and \\emph{leverage} the acquired knowledge to facilitate text generation. Multiple methods have been explored to improve the encoder-decoder architecture (e.g., attention mechanism, copy and pointing mechanism).\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{figures/taxonomy.pdf}\n \\end{center}\n \\vspace{-0.2in}\n \\caption{Categorization of information sources and methods for knowledge-enhanced text generation. Knowledge can be learnt from various information sources, and then integrated into the generation process.}\n \\label{fig:overall}\n\\vspace{-0.15in}\n\\end{figure}\nBased on the first challenge, the main content of our survey is divided into two parts: (1) general methods of integrating knowledge into text generation (Section 2); (2) specific methods and applications according to different sources of knowledge enhancement (Sections 3--4). 
\nMore concretely, since knowledge can be obtained from different sources, we first divide existing knowledge enhanced text generation work into two categories: internal knowledge enhanced and external knowledge enhanced text generation. The division of internal and external knowledge is widely adopted by management science~, which can be analogous with knowledge enhanced text generation.\nBased on the second challenge, we categorize recent knowledge-enhanced text generation methods evolved from how knowledge is extracted and incorporated into the process of text generation in each section (named as M1, M2, and etc).\nFurthermore, we review methods for a variety of natural language generation applications in each section to help practitioners choose, learn, and use the methods.\nIn total, we discuss seven mainstream applications presented in more than 80 papers that were published or released in or after the year of 2016.\nAs shown in Figure \\ref{fig:overall}, the remainder of this survey is organized as follows.\nSection 2 presents basic NLG models and general methods of integrating knowledge into text generation. \nSections 3 reviews internal knowledge-enhanced NLG methods and applications. The internal knowledge is obtained from topic, keyword, linguistic features and internal graph structures.\nSections 4 reviews external knowledge-enhanced NLG methods and applications. The external knowledge sources include knowledge bases, knowledge graphs, and grounded text.\nSection 5 presents knowledge-enhanced NLG benchmarks.\nSection 6 discusses future work and concludes the survey.\n\\vspace{-0.05in}", "id": "bb957f7c-33ba-4e63-98aa-f033211ce23b", "level": "subsection", "origin_cites_number": 1, "parent_id": "130eff31-5033-4384-9c99-729ff90fe873", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "Introduction" ], [ "subsection", "What are the Challenges in Knowledge-enhanced Text Generation?" 
] ], "subsections": [], "title": "What are the Challenges in Knowledge-enhanced Text Generation?" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "e083ab82-0002-443e-8734-d6e90fa797b2", "level": "section", "origin_cites_number": 0, "parent_id": "5f69c7fa-bf2b-4523-bd34-303cffb73a99", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ] ], "subsections": [ "5e9e393b-b67b-4b15-80cd-e48ae82912dc", "4e1ad8d3-1399-4bd1-ae45-57b3679fd0ab", "5c5bd456-c1bb-49e2-8001-2c1c0983cea3" ], "title": "General Methods of Integrating Knowledge into NLG" }, { "cite_extract_rate": 1, "cites": [ 2401, 790, 38 ], "content": "Early encoder-decoder frameworks are often based on recurrent neural network (RNN) such as RNN-Seq2Seq~. Convolutional neural network (CNN) based encoder-decoder~ and Transformer encoder-decoder~ have been increasingly widely used. From a probabilistic perspective, the encoder-decoder frameworks learn the conditional distribution over a variable length sequence conditioned on yet another variable length sequence:\n\\begin{equation}\n \\setlength\\abovedisplayskip{2pt} \n \\setlength\\belowdisplayskip{2pt}\n P(Y|X) = P(y_1, \\cdots, y_m|x_1, \\cdots, x_n) = \\prod_{t=1}^{m}p(y_t|X, y_1, \\cdots, y_{t-1}).\n\\end{equation}\n\\vspace{-0.05in}", "id": "5e9e393b-b67b-4b15-80cd-e48ae82912dc", "level": "subsection", "origin_cites_number": 3, "parent_id": "e083ab82-0002-443e-8734-d6e90fa797b2", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "The Basic Text Generation Models" ] ], "subsections": [ "5bfe95e3-8494-41a4-b983-424c260b9239" ], "title": "The Basic Text Generation Models" }, { "cite_extract_rate": 0, "cites": [], "content": "The encoder learns to encode a variable length sequence into a fixed length vector representation. 
RNN encoder reads the input sentence $X$ \\textit{sequentially}. CNN encoder performs convolutional operations on a word and its surrounding word(s) in a sequential window.\nTransformer encoder eschews recurrence and instead relies entirely on the self-attention mechanism to draw global dependencies between different tokens in the input $X$. We denote them uniformly as:\n\\begin{equation}\n (\\textbf{h}_1, \\textbf{h}_2, \\cdots, \\textbf{h}_n)=\\textsc{Encoder}(\\textbf{e}(x_1), \\textbf{e}(x_2), \\cdots, \\textbf{e}(x_n)),\n\\end{equation}\nwhere $\\textbf{e}(x_i)$ is the word embedding of word $x_i$, $\\textbf{h}_i$ is the contextualized hidden representation of $x_i$.\n\\vspace{-0.05in}", "id": "5bfe95e3-8494-41a4-b983-424c260b9239", "level": "paragraph", "origin_cites_number": 0, "parent_id": "5e9e393b-b67b-4b15-80cd-e48ae82912dc", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "The Basic Text Generation Models" ], [ "paragraph", "Encoder." ] ], "subsections": [ "10252dc6-a9ac-4ba2-b32d-6a80c9b8f135", "98a93103-4f5b-4787-aafd-748b1bff92d4" ], "title": "Encoder." }, { "cite_extract_rate": 1, "cites": [ 2401 ], "content": "The decoder decodes a given fixed length vector representation into a variable length sequence~. Specifically, the decoder generates an output sequence one token at each time step. At each step the model is auto-regressive, consuming the previously generated tokens as additional input when generating the next token.
Formally, the decoding function is represented as:\n\\begin{equation}\n \\setlength\\abovedisplayskip{2pt} \n \\setlength\\belowdisplayskip{2pt}\n \\textbf{s}_t = \\textsc{Decoder} (\\textbf{s}_{t-1}, \\textbf{e}(y_{t-1})), \\label{eq:Seq2Seq-decoder}\n\\end{equation}\n\\begin{equation}\n \\setlength\\abovedisplayskip{2pt} \n \\setlength\\belowdisplayskip{2pt}\n p(y_t|y_{t-1}, y_{t-2}, \\cdots , y_1) = \\textsc{Readout}(\\textbf{s}_t),\n\\end{equation}\nwhere $\\textsc{Readout}(\\cdot)$ is a nonlinear multi-layered function that outputs the probability of $y_t$.\n\\vspace{-0.05in}", "id": "10252dc6-a9ac-4ba2-b32d-6a80c9b8f135", "level": "paragraph", "origin_cites_number": 1, "parent_id": "5bfe95e3-8494-41a4-b983-424c260b9239", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "The Basic Text Generation Models" ], [ "paragraph", "Encoder." ], [ "paragraph", "Decoder." ] ], "subsections": [], "title": "Decoder." }, { "cite_extract_rate": 0, "cites": [], "content": "A generation process is regarded as a sequential multi-label classification problem. It can be directly optimized by the negative log likelihood ({NLL}) loss. 
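Under teacher forcing, this NLL objective reduces to summing the negative log-probabilities assigned to the gold tokens. A toy numeric sketch (the per-step probability tables are made-up values standing in for a model's readout):

```python
import math

# Sequence NLL: -sum_t log p(y_t | y_<t, X), where each step's distribution
# is a dict token -> probability (teacher forcing on the gold target).
def sequence_nll(step_probs, target):
    return -sum(math.log(step_probs[t][y]) for t, y in enumerate(target))
```

For instance, gold probabilities 0.5 and 0.25 at two steps give a loss of ln 8 (about 2.08 nats).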
Therefore, the objective of a text generation model via maximum likelihood estimation (MLE) is formulated as:\n\\begin{equation}\n \\setlength\\abovedisplayskip{2pt} \n \\setlength\\belowdisplayskip{1pt}\n \\mathcal{L}_{NLL}(\\theta) = - \\log p_{\\theta}(Y|X) = - \\sum^m_{t=1} \\log \\left( p_{\\theta}(y_t|y_{< t}, X) \\right).\n\\label{eq:Seq2Seq-loss}\n\\end{equation}\n\\vspace{-0.05in}", "id": "98a93103-4f5b-4787-aafd-748b1bff92d4", "level": "paragraph", "origin_cites_number": 0, "parent_id": "5bfe95e3-8494-41a4-b983-424c260b9239", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "The Basic Text Generation Models" ], [ "paragraph", "Encoder." ], [ "paragraph", "Optimization" ] ], "subsections": [], "title": "Optimization" }, { "cite_extract_rate": 0, "cites": [], "content": "The most popular idea of incorporating knowledge is designing \\emph{specialized architectures} of text generation models that can reflect the particular type of knowledge.\nIn the context of neural networks, several general neural architectures are widely used and customized to bake the knowledge about the problems being tackled into the models.\n\\vspace{-0.03in}", "id": "4e1ad8d3-1399-4bd1-ae45-57b3679fd0ab", "level": "subsection", "origin_cites_number": 0, "parent_id": "e083ab82-0002-443e-8734-d6e90fa797b2", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "Knowledge-enhanced Model Architectures" ] ], "subsections": [ "7100a363-05ad-41f2-80ad-599b293e84f9", "439b4acd-a0f8-43cc-9128-c6a02aa6b48c", "a00f2dfb-145c-46fe-8e98-bb54b94cf9e9", "4c9bc10b-8e87-4a1b-9b24-f82cd019c61e", "b40e3380-9e7b-4d8e-af7a-5fcf7d31ad2a" ], "title": "Knowledge-enhanced Model Architectures" }, { "cite_extract_rate": 0.521739130434782, "cites": [ 9147, 3117, 3119, 3116, 3115, 168, 
2002, 3118, 2369, 8554, 1934, 3120 ], "content": "}\n\\label{sec:k-attn}\nIt is useful to capture the weight of each time step in both encoder and decoder~.\nDuring the decoding phase, the context vector $\\textbf{c}_t$ is added, so the hidden state $\\textbf{s}_t$ is:\n\\begin{equation}\n \\setlength\\abovedisplayskip{2pt} \n \\setlength\\belowdisplayskip{2pt}\n \\textbf{s}_t = \\textsc{Decoder} (\\textbf{s}_{t-1}, \\textbf{e}(y_{t-1}), \\textbf{c}_t).\n\\end{equation}\nUnlike Eq.(\\ref{eq:Seq2Seq-decoder}), here the probability is conditioned on the distinct context vector $\\textbf{c}_t$ for target word $y_t$, and $\\textbf{c}_t$ depends on a sequence of hidden states $\\textbf{H} = \\{\\textbf{h}_i\\}_{i=1}^{n}$ that were mapped from input sequence. \nIn RNN-Seq2Seq decoder, the $\\textbf{c}_t$ is computed as a weighted sum of $\\{\\textbf{h}_i\\}_{i=1}^{n}$:\n\\begin{equation}\n \\setlength\\abovedisplayskip{2pt} \n \\setlength\\belowdisplayskip{2pt}\n \\textbf{c}_t = \\sum^{n}_{i=1} \\alpha_{ti} \\textbf{h}_i, ~\\text{where}~ \\alpha_{ti} = \\frac{\\exp (\\eta(\\textbf{s}_{t-1}, \\textbf{h}_i ))} {\\sum^{n}_{k=1} \\exp(\\eta(\\textbf{s}_{t-1}, \\textbf{h}_k)) } , \n \\label{eq:atten-weight}\n\\end{equation}\nwhere $\\eta(\\cdot)$ is parametrized as a multi-layer perception to compute a soft alignment. \n$\\eta(\\cdot)$ enables the gradient of loss function to be backpropagated. There are six alternatives for the $\\eta(\\cdot)$ function (see Table 2 in ). The probability $\\alpha_{ti}$ reflects the importance of the hidden state of input sequence in presence of the previous hidden state $\\textbf{s}_{t-1}$ for deciding the next hidden state.\nIn Transformer decoder, on top of the two sub-layers in the encoder, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack $\\textbf{H}$. 
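A pure-Python sketch of this soft attention, using a dot-product alignment as $\eta(\cdot)$ (one of the standard choices; the vectors are toy values):

```python
import math

# Soft attention: scores eta(s_{t-1}, h_i), softmax-normalized to alpha_ti,
# then the context vector c_t is the alpha-weighted sum of the h_i.
def attention_context(s_prev, H, eta):
    scores = [eta(s_prev, h) for h in H]
    m = max(scores)                              # stabilized softmax
    exps = [math.exp(x - m) for x in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]
    c = [sum(a * h[d] for a, h in zip(alphas, H)) for d in range(len(H[0]))]
    return alphas, c

dot = lambda s, h: sum(a * b for a, b in zip(s, h))  # one choice of eta
```

With `s_prev = [1, 0]` and orthogonal hidden states, the first state receives the larger weight and the context vector leans toward it.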
\nEfficient implementations of the transformer use the cached history matrix $\\textbf{S}_t$ to generate next token.\nTo compare with RNN-Seq2Seq, we summarize the Transformer decoder using recurrent notation:\n\\begin{equation}\n \\textbf{S}_t = \\textsc{Transformer-Decoder}(\\textbf{S}_{t-1}, \\textbf{e}(y_{t-1}), \\textbf{H}), \n\\label{eq:transformer}\n\\end{equation}\nwhere $\\textbf{S}_t$ = $[(\\textbf{K}^{(1)}_t, \\textbf{V}^{(1)}_t), \\cdots, (\\textbf{K}^{(l)}_t, \\textbf{V}^{(l)}_t)]$, where $(\\textbf{K}^{(i)}_t, \\textbf{V}^{(i)}_t)$ corresponds to the key-value pairs from the $i$-th layer generated at all time-steps from $0$ to $t$.\nInstead of noting a specific name, we will use \\textsc{Encoder}($\\cdot$) and \\textsc{Decoder}($\\cdot$) to represent encoder and decoder in the following sections.\n\\begin{table*}[t]\n\\caption{NLG methods that incorporates knowledge attention ($\\S$\\ref{sec:k-attn}) and knowledge mode ($\\S$\\ref{sec:k-mode}).}\n\\vspace{-0.15in}\n\\centering\n\\scalebox{0.82}{\\begin{tabular}{|l|c|c|c|c|c|}\n\\hline\n& Topic & Keyword & Knowledge base & Knowledge graph & Grounded text \\\\\n\\hline \\hline\nKnowledge-related attention & & & & & \\\\\nKnowledge-related mode & & & & & \\\\\nKnowledge-related memory & & - & & & \\\\\n\\hline\n\\end{tabular}}\n\\vspace{-0.15in}\n\\label{tab:mul-source-attn}\n\\end{table*}\n\\vspace{-0.05in}", "id": "7100a363-05ad-41f2-80ad-599b293e84f9", "level": "subsubsection", "origin_cites_number": 23, "parent_id": "4e1ad8d3-1399-4bd1-ae45-57b3679fd0ab", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "Knowledge-enhanced Model Architectures" ], [ "subsubsection", "Attention Mechanism" ] ], "subsections": [ "19576d26-7a1c-47a7-9adb-7f2601cfef36" ], "title": "Attention Mechanism" }, { "cite_extract_rate": 0.35714285714285704, "cites": [ 2002, 3117, 1934, 2369, 3121 ], "content": 
"Attention mechanism has been widely used to incorporate knowledge representation in recent knowledge-enhanced NLG work.\nThe general idea is to learn a knowledge-aware context vector (denoted as $\\widetilde{\\textbf{c}}_t$) by combining both hidden context vector ($\\textbf{c}_t$) and knowledge context vector (denoted as $\\textbf{c}^{K}_t$) into decoder update, such as $\\widetilde{\\textbf{c}}_t = f_{mlp}(\\textbf{c}_t \\oplus \\textbf{c}^{K}_t)$. \nThe knowledge context vector ($\\textbf{c}^{K}_t$) calculates attentions over knowledge representations (e.g., topic vectors, node vectors in knowledge graph).\nTable \\ref{tab:mul-source-attn} summarizes a variety of knowledge attentions, including\nkeyword attention~, topic attention~, knowledge base attention~, knowledge graph attention~, and grounded text attention~. \n\\vspace{-0.03in}", "id": "19576d26-7a1c-47a7-9adb-7f2601cfef36", "level": "paragraph", "origin_cites_number": 14, "parent_id": "7100a363-05ad-41f2-80ad-599b293e84f9", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "Knowledge-enhanced Model Architectures" ], [ "subsubsection", "Attention Mechanism" ], [ "paragraph", "Knowledge-related attention" ] ], "subsections": [], "title": "Knowledge-related attention" }, { "cite_extract_rate": 1, "cites": [ 1114 ], "content": "}\n\\label{sec:k-mode}\nCopyNet and Pointer-generator (PG) are used to choose subsequences in the input sequence and put them at proper places in the output sequence.\nCopyNet and PG have a differentiable network architecture~. They can be easily trained in an end-to-end manner. In CopyNet and PG, the probability of generating a target token is a combination of the probabilities of two modes, generate-mode and copy-mode. First, they represent unique tokens in the global vocabulary $\\mathcal{V}$ and the vocabulary of source sequence $\\mathcal{V}_X$. 
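The shared idea of the two modes can be sketched numerically, PG-style: mix a generate-mode distribution over $\mathcal{V}$ with a copy-mode distribution over source tokens via a switch probability (all numbers below are made up):

```python
# Two-mode mixing over the extended vocabulary: tokens may appear in the
# generate-mode distribution, the copy-mode distribution, or both.
def pointer_generator_mix(p_gen_mode, p_g, p_c):
    vocab = set(p_g) | set(p_c)
    return {y: p_gen_mode * p_g.get(y, 0.0) + (1 - p_gen_mode) * p_c.get(y, 0.0)
            for y in vocab}
```

Note how a source-only token (absent from the global vocabulary) still receives probability mass through the copy mode, which is exactly how these models emit out-of-vocabulary words.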
They build an extended vocabulary $\\mathcal{V}_{\\text{ext}} = \\mathcal{V} \\cup \\mathcal{V}_X \\cup \\{\\text{unk}\\}$. The difference between CopyNet and PG is the way to calculate distribution over the extended vocabulary. CopyNet calculates the distribution by\n\\begin{equation}\n \\setlength\\abovedisplayskip{2pt} \n \\setlength\\belowdisplayskip{2pt}\n p(y_t) = p_{g}(y_t) + p_{c}(y_t),\n\\label{eq:mode1}\n\\end{equation}\nwhere $p_{g}(\\cdot|\\cdot)$ and $p_{c}(\\cdot|\\cdot)$ stand for the probability of generate-mode and copy-mode.\nDifferently, PG explicitly calculates a switch probability $p_{m}$ between generate-mode and copy-mode. It recycles the attention distribution to serve as the copy distribution. The distribution over $\\mathcal{V}_{\\text{ext}}$ is calculated by\n\\begin{equation}\n \\setlength\\abovedisplayskip{2pt} \n \\setlength\\belowdisplayskip{2pt}\n p(y_t) = p_m{(\\mathrm{g})} \\cdot p_g(y_t) + (1 - p_m{(\\mathrm{g})}) \\cdot p_c(y_t),\n\\end{equation}\nwhere $p_m(\\mathrm{g})$ indicates the probability of choosing generate-mode, which is obtained by a nonlinear multi-layered (MLP) function.\nImportantly, CopyNet and pointer-generator network have been used as the \\textit{base module} for a lot of knowledge-enhanced NLG work.\n\\vspace{-0.05in}", "id": "439b4acd-a0f8-43cc-9128-c6a02aa6b48c", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "4e1ad8d3-1399-4bd1-ae45-57b3679fd0ab", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "Knowledge-enhanced Model Architectures" ], [ "subsubsection", "Copy and Pointing Mechanisms" ] ], "subsections": [ "214fd441-539b-4d2c-9d7c-4f5552010290" ], "title": "Copy and Pointing Mechanisms" }, { "cite_extract_rate": 0.42857142857142805, "cites": [ 2002, 3117, 1934 ], "content": "A knowledge-related mode chooses subsequences in the obtained knowledge and puts them at proper 
places in the output sequence. It helps NLG models to generate words that are not included in the global vocabulary ($\\mathcal{V}$) and input sequence ($\\mathcal{V}_X$).\nFor example, by adding a knowledge base mode, the extended vocabulary ($\\mathcal{V}_{ext}$) adds entities and relations from the knowledge base, i.e., $\\mathcal{V}_{ext} = \\mathcal{V} \\cup \\mathcal{V}_X \\cup \\mathcal{V}_{KB}$. The probability of generating a target token is a combination of the probabilities of three modes: generate-mode, copy-mode and knowledge base-mode. Therefore, a knowledge-related mode is capable not only of regular word generation but also of producing appropriate subsequences from knowledge sources. Table \\ref{tab:mul-source-attn} summarizes different kinds of knowledge-related modes such as topic mode~, keyword mode~, knowledge base mode~, knowledge graph mode~, and background mode~.\n\\vspace{-0.03in}", "id": "214fd441-539b-4d2c-9d7c-4f5552010290", "level": "paragraph", "origin_cites_number": 7, "parent_id": "439b4acd-a0f8-43cc-9128-c6a02aa6b48c", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "Knowledge-enhanced Model Architectures" ], [ "subsubsection", "Copy and Pointing Mechanisms" ], [ "paragraph", "Knowledge-related mode" ] ], "subsections": [], "title": "Knowledge-related mode" }, { "cite_extract_rate": 1, "cites": [ 9109 ], "content": "}\nMemory networks (MemNNs) are recurrent attention models over a possibly large external memory~. They write external memories into several embedding matrices, and use query (generally speaking, the input sequence $X$) vectors to read memories repeatedly. This approach encodes long dialog history and memorizes external information.\nConsider an input set $\\{m_1, \\cdots, m_i\\}$ to be stored in memory.
The memories of MemNN are\nrepresented by a set of trainable embedding matrices $\\textbf{C} = \\{\\textbf{C}^1, \\cdots, \\textbf{C}^{K+1} \\}$, where each $\\textbf{C}^k$ maps tokens to vectors, and a query (i.e., input sequence) vector $\\textbf{h}_X^{k}$ is used as a reading head. The model loops over $K$ hops and it computes the attention weights at hop $k$ for each memory $m_i$ using:\n\\begin{equation}\n \\setlength\\abovedisplayskip{2pt} \n \\setlength\\belowdisplayskip{2pt}\n \\textbf{p}^k_i = \\mathrm{softmax}((\\textbf{h}^k_X)^\\top \\textbf{C}^k_i), \n\\end{equation}\nwhere $\\textbf{C}^k_i = \\textbf{C}^k(m_i) $ is the memory content in $i$-th position, i.e., mapping $m_i$ into a memory vector. Here, $\\textbf{p}^k$ is a soft memory selector that decides the memory relevance with respect to the query vector $\\textbf{h}_X^{k}$. Then, the model reads out the memory $\\textbf{o}^k$ by the weighted sum over $\\textbf{C}^{k+1}$,\n\\begin{equation}\n\\textbf{o}^k = \\sum_i \\textbf{p}^k_i \\textbf{C}^{k+1}_i.\n\\end{equation}\nThen, the query vector is updated for the next hop by using $\\textbf{h}_X^{k+1} = \\textbf{h}_X^{k} + \\textbf{o}^k$. The result from the encoding step is the memory vector $\\textbf{o}^K$ and becomes the input for the decoding step.\n\\vspace{-0.03in}", "id": "a00f2dfb-145c-46fe-8e98-bb54b94cf9e9", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "4e1ad8d3-1399-4bd1-ae45-57b3679fd0ab", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "Knowledge-enhanced Model Architectures" ], [ "subsubsection", "Memory Network" ] ], "subsections": [ "1b78bb95-fe98-499b-86a6-7a0ed257c4f3" ], "title": "Memory Network" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 3116, 3122, 8554, 3115 ], "content": "Memory augmented encoder-decoder framework has achieved promising progress for many NLG tasks. 
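The single-hop read described above (soft selector $\textbf{p}^k$, weighted read-out $\textbf{o}^k$, additive query update) can be sketched with toy vectors (embedding matrices are replaced by plain lists of memory vectors):

```python
import math

# One MemNN hop: dot-product attention over input memories, weighted sum
# over output memories, then the additive query update h^{k+1} = h^k + o^k.
def memnn_hop(query, mem_in, mem_out):
    scores = [sum(q * c for q, c in zip(query, ci)) for ci in mem_in]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    p = [e / total for e in exps]                # soft memory selector
    o = [sum(pi * co[d] for pi, co in zip(p, mem_out))
         for d in range(len(query))]             # read-out o^k
    return [q + od for q, od in zip(query, o)]   # updated query
```

Looping this function $K$ times reproduces the $K$-hop reading, with each hop reusing the updated query against (possibly different) memory matrices.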
For example, MemNNs are widely used for encoding dialogue history in task-oriented dialogue systems~. Such frameworks enable a decoder to retrieve information from a memory during generation. Recent work explored to model external knowledge with memory network such as knowledge base~ and topic\n~.\n\\vspace{-0.03in}", "id": "1b78bb95-fe98-499b-86a6-7a0ed257c4f3", "level": "paragraph", "origin_cites_number": 6, "parent_id": "a00f2dfb-145c-46fe-8e98-bb54b94cf9e9", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "Knowledge-enhanced Model Architectures" ], [ "subsubsection", "Memory Network" ], [ "paragraph", "Knowledge-related memory" ] ], "subsections": [], "title": "Knowledge-related memory" }, { "cite_extract_rate": 1, "cites": [ 553, 259, 7011, 180 ], "content": "}\n\\label{sec:graph-net}\nGraph network captures the dependence of graphs via message passing between the nodes of graphs.\nGraph neural networks (GNNs)~ and graph-to-sequence (Graph2Seq)~ potentiate to bridge up the gap between graph representation learning and text generation. \nKnowledge graph, dependency graph, and other graph structures can be integrated into text generation through various GNN algorithms. Here we denote a graph as $\\mathcal{G} = (\\mathcal{U}, \\mathcal{E})$, where $\\mathcal{U}$ is the set of entity nodes and $\\mathcal{E}$ is the set of (typed) edges.\nModern GNNs typically follow a neighborhood aggregation approach, which iteratively updates the representation of a node by aggregating information from its neighboring nodes and edges. After $k$ iterations of aggregation, a node representation captures the structural information within its $k$-hop neighborhood. 
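A minimal sketch of one such aggregation layer, assuming an unlabeled, undirected graph, mean pooling as \textsc{Aggregate} and additive \textsc{Combine} (one simple instantiation for illustration, not a specific GNN from the literature):

```python
# One GNN layer: each node averages its neighbors' features (Aggregate)
# and adds the result to its own feature vector (Combine).
def gnn_layer(features, edges):
    nbrs = {u: [] for u in features}             # undirected adjacency
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    new = {}
    for u, h in features.items():
        if nbrs[u]:
            agg = [sum(features[v][d] for v in nbrs[u]) / len(nbrs[u])
                   for d in range(len(h))]
        else:
            agg = [0.0] * len(h)                 # isolated node: no message
        new[u] = [x + a for x, a in zip(h, agg)]
    return new
```

Stacking $K$ such layers lets each node's representation absorb information from its $K$-hop neighborhood; a labeled-graph variant would additionally condition the aggregation on edge features.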
Formally, the $k$-th layer of a node $u \\in \\mathcal{U}$ is:\n\\begin{equation}\n \\setlength\\abovedisplayskip{2pt} \n \\setlength\\belowdisplayskip{2pt}\n \\textbf{u}^{(k)} = \\textsc{Combine}_k (\\textbf{u}^{(k-1)}, \\textsc{Aggregate}_k(\\big{\\{} (\\textbf{u}^{(k-1)}_{i}, \\textbf{e}_{ij}^{(k-1)}, \\textbf{u}^{(k-1)}_{j}): \\forall (u_i, e_{ij}, u_j) \\in \\mathcal{N}(u)\\big{\\}})),\n\\end{equation}\nwhere $\\mathcal{N}(u)= \\{(u_i, e_{ij}, u_j) \\in \\mathcal{E} | u_i = u ~\\text{or}~ u_j = u\\}$ denotes the set of edges containing node $u$, $\\textbf{u}^{(k)}$ and $\\textbf{e}_{ij}^{(k)}$ are feature vectors of a node $u$ and the edge between $u_i$ and $u_j$ at the $k$-th iteration/layer. The choice of $\\textsc{Aggregate}(\\cdot)$ and $\\textsc{Combine}(\\cdot)$ in GNNs is crucial. A number of architectures for $\\textsc{Aggregate}(\\cdot)$ have been proposed in different GNN works such as GAT~. Meanwhile, the $\\textsc{Aggregate}(\\cdot)$ function used in labeled graphs (e.g., a knowledge graph) is often taken as those GNNs for modeling relational graphs~.\nTo obtain the representation of graph $\\mathcal{G}$ (denoted as $\\textbf{h}_{G}$), the $\\textsc{Readout}(\\cdot)$ function (either a simple permutation invariant function or sophisticated graph-level pooling function) pools node features from the final iteration $K$,\n\\begin{equation}\n \\setlength\\abovedisplayskip{2pt} \n \\setlength\\belowdisplayskip{2pt}\n \\textbf{h}_{G} =\n \\textsc{Readout}(\\big{\\{}\\textbf{u}^{(K)}: u \\in \\mathcal{U}\\big{\\}}).\n\\end{equation}\n\\vspace{-0.05in}", "id": "4c9bc10b-8e87-4a1b-9b24-f82cd019c61e", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "4e1ad8d3-1399-4bd1-ae45-57b3679fd0ab", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "Knowledge-enhanced Model Architectures" ], [ "subsubsection", "Graph Network" ] ], 
"subsections": [ "fc84e294-2c0a-45c6-a327-dd03985e96bf" ], "title": "Graph Network" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1934, 180 ], "content": "Graph network has been commonly used in integrating knowledge in graph structure such as knowledge graph and dependency graph. Graph attention network~ can be combined with sequence attention and jointly optimized~. \n{We will introduce different graph structure knowledge in subsequent sections such as knowledge graph (Section \\ref{sec:know-graph}), dependency graph (Section \\ref{sec:syn-graph}-\\ref{sec:sem-graph}), and open knowledge graph (OpenKG) (Section \\ref{sec:openkg}).}\n\\vspace{-0.03in}", "id": "fc84e294-2c0a-45c6-a327-dd03985e96bf", "level": "paragraph", "origin_cites_number": 3, "parent_id": "4c9bc10b-8e87-4a1b-9b24-f82cd019c61e", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "Knowledge-enhanced Model Architectures" ], [ "subsubsection", "Graph Network" ], [ "paragraph", "Applications" ] ], "subsections": [], "title": "Applications" }, { "cite_extract_rate": 0.9, "cites": [ 2482, 8623, 2487, 9, 3123, 3124, 3125, 7, 3126 ], "content": "}\nPre-trained language models (PLMs) aims to learn universal language representation by conducting self-supervised training on large-scale unlabeled corpora.\nRecently, substantial PLMs such as BERT~ and T5~ have achieved remarkable performance in various NLP downstream tasks.\nHowever, these PLMs suffer from two issues when performing on knowledge-intensive tasks.\nFirst, these models struggle to grasp structured world knowledge, such as concepts and relations, which are very important in language understanding. 
For example, BERT cannot deliver great performance on many commonsense reasoning and QA tasks, in which many of the concepts are directly linked on commonsense knowledge graphs~.\nSecond, due to the domain discrepancy between pre-training and fine-tuning, these models do not perform well on domain-specific tasks. For example, BERT cannot realize its full value on electronic medical record analysis tasks in the medical field~.\nRecently, many efforts have been made to investigate how to integrate knowledge into PLMs~. Specifically, we will introduce some PLMs designed for NLG tasks.\nOverall, these approaches can be grouped into two categories:\nThe first one is to explicitly inject entity representations into PLMs, where the representations are pre-computed from external sources~. For example, KG-BART encoded the graph structure of KGs with knowledge embedding algorithms like TransE~, and then took the informative entity embeddings as auxiliary input~.\nHowever, explicitly injecting entity representations into PLMs has been criticized because the embedding vectors of words in text and of entities in KGs are obtained in separate ways, making their vector spaces inconsistent~.\nThe second one is to implicitly model knowledge in PLMs by performing knowledge-related tasks, such as concept order recovery~ and entity category prediction~.\nFor example, CALM proposed a novel contrastive objective for packing more commonsense knowledge into the parameters, and jointly pre-trained both generative and contrastive objectives for enhancing commonsense NLG tasks~.\n\\vspace{-0.05in}", "id": "b40e3380-9e7b-4d8e-af7a-5fcf7d31ad2a", "level": "subsubsection", "origin_cites_number": 10, "parent_id": "4e1ad8d3-1399-4bd1-ae45-57b3679fd0ab", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "Knowledge-enhanced Model Architectures"
], [ "subsubsection", "Pre-trained Language Models" ] ], "subsections": [], "title": "Pre-trained Language Models" }, { "cite_extract_rate": 1, "cites": [ 457, 3118 ], "content": "Besides specialized model architectures, one common way of injecting knowledge to generation models is through the supervised knowledge learning. For example, one can encode knowledge into the objective function that guides the model training to acquire desired model behaviors~. Such approaches enjoy the flexibility of integrating diverse types of knowledge by expressing them as certain forms of objectives. In general, knowledge-enhanced learning is agnostic to the model architecture, and can be combined with the aforementioned architectures.\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{figures/knowledge-as-target.pdf}\n \\end{center}\n \\vspace{-0.1in}\n \\caption{Incorporating knowledge into text generation by treating knowledge as the target. The first category of methods (left) combine knowledge-related tasks as auxiliary into the text generation task, resulting in a \\textit{multi-task learning} setting. 
The second category of methods (right) create \\textit{weakly-supervised} labels from knowledge, enforcing the relevancy between the knowledge and the target sequence.}\n \\label{fig:knowledge-related-task}\n\\vspace{-0.15in}\n\\end{figure}\n\\vspace{-0.05in}", "id": "5c5bd456-c1bb-49e2-8001-2c1c0983cea3", "level": "subsection", "origin_cites_number": 2, "parent_id": "e083ab82-0002-443e-8734-d6e90fa797b2", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "Knowledge-enhanced Learning and Inference" ] ], "subsections": [ "bad0692a-ea6d-42f6-ab72-0e4604a3f3b4", "7cc37322-8500-4abd-b9a0-cb0de0e042b5", "175121d4-807a-498b-9b55-642d5bd5480e" ], "title": "Knowledge-enhanced Learning and Inference" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nOne could devise learning tasks informed by the knowledge so that the model is trained to acquire the knowledge information. \n\\vspace{-0.05in}", "id": "bad0692a-ea6d-42f6-ab72-0e4604a3f3b4", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "5c5bd456-c1bb-49e2-8001-2c1c0983cea3", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "Knowledge-enhanced Learning and Inference" ], [ "subsubsection", "Learning with knowledge-related tasks" ] ], "subsections": [ "264ece9d-16ec-4456-8245-928826233148", "a6831b8e-400c-426b-9942-61e91c6cce17" ], "title": "Learning with knowledge-related tasks" }, { "cite_extract_rate": 0.583333333333333, "cites": [ 457, 3129, 3128, 3118, 3130, 3127, 3120 ], "content": "}\nThe methods can be mainly divided into two categories as shown in Figure \\ref{fig:knowledge-related-task}.\nThe first category of knowledge-related tasks creates learning targets based on the knowledge, and the model is trained to recover the targets. 
These tasks can be combined as auxiliary tasks with the text generation task, resulting in a \\emph{multi-task learning} setting.\nFor example, knowledge loss is defined as the cross entropy between the predicted and true knowledge sentences, and it is combined with the standard conversation generation loss to enhance grounded conversation~. Similar tasks include keyword extraction loss~, template re-ranking loss~, \nlink prediction loss on knowledge graph~,\npath reasoning loss~,\nmode loss~, bag-of-word (BOW) loss~, etc.\nThe second category of methods directly derive the text generation targets from the knowledge, and use those (typically noisy) targets as supervisions in the standard text generation task. The approach is called \\textit{weakly-supervised} learning. Weakly-supervised learning enforces the relevancy between the knowledge and the target sequence.\nFor example, in the problem of aspect based summarization, the work automatically creates target summaries based on external knowledge bases, which are used to train the summarization model in a supervised manner.\n\\vspace{-0.05in}", "id": "264ece9d-16ec-4456-8245-928826233148", "level": "paragraph", "origin_cites_number": 12, "parent_id": "bad0692a-ea6d-42f6-ab72-0e4604a3f3b4", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "Knowledge-enhanced Learning and Inference" ], [ "subsubsection", "Learning with knowledge-related tasks" ], [ "paragraph", "Knowledge as target" ] ], "subsections": [], "title": "Knowledge as target" }, { "cite_extract_rate": 0.846153846153846, "cites": [ 1114, 3116, 457, 1996, 2005, 168, 650, 7186, 3127, 1887, 3131 ], "content": "}\nThe second way of devising knowledge-related tasks is to augment the text generation task by conditioning the generation on the knowledge. 
That is, the goal is to learn a function $p_{\\theta}(Y|X, K)$, where $X$ is the input sequence, $Y$ is the target text and $K$ is the knowledge. Generally, the knowledge $K$ is first given externally (e.g., style, emotion) or retrieved from external resources (e.g., facts from knowledge base, a document from Wikipedia) or extracted from the given input text (e.g., keywords, topic words). Second, a conditional text generation model is used to incorporate knowledge and generate target output sequence. In practice, knowledge is often incorporated via soft enforcing mechanisms such as attention mechanism~ and copy/pointing mechanism~.\nTreating knowledge as a condition is widely used in knowledge-enhanced text generation. For example, work has been done on generating personalized dialogue responses by taking into account persona~ and emotion~, controlling various aspects\nof the response such as politeness~, grounding the responses in external source of knowledge~ and generating topic-coherent sequences~.\nBesides, using variational autoencoder (VAE) to enforce the generation process conditioned on knowledge is one popular approach to unsupervised NLG.\nBy manipulating latent space for certain attributes, such as topic~ and style~, the output sequence can be generated with desired attributes without supervision from parallel data.\n\\vspace{-0.05in}", "id": "a6831b8e-400c-426b-9942-61e91c6cce17", "level": "paragraph", "origin_cites_number": 13, "parent_id": "bad0692a-ea6d-42f6-ab72-0e4604a3f3b4", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "Knowledge-enhanced Learning and Inference" ], [ "subsubsection", "Learning with knowledge-related tasks" ], [ "paragraph", "Knowledge as condition" ] ], "subsections": [], "title": "Knowledge as condition" }, { "cite_extract_rate": 0.5, "cites": [ 3132, 3134, 3133 ], "content": "}\nInstead of creating training objectives in
standalone tasks that encapsulate knowledge, another paradigm of knowledge-enhanced learning is to treat the knowledge as \\emph{constraints} that regularize the text generation training objective. \nThe posterior regularization (PR) framework was proposed to restrict the space of the model posterior on unlabeled data as a way to guide the model towards desired behavior~. PR has been used as a principled framework to impose knowledge constraints on probabilistic models (including deep networks) in general~.\nPR augments any regular training objective $\\mathcal{L}(\\theta)$ (e.g., negative log-likelihood, as in Eq.\\eqref{eq:Seq2Seq-loss}) with a constraint term to encode relevant knowledge. Formally, \ndenote the constraint function as $f(X,Y) \\in \\mathbb{R}$ such that a higher $f(X,Y)$ value indicates a better generated sequence $Y$ that incorporates the knowledge. \nPR introduces an auxiliary distribution $q(Y|X)$, and imposes the constraint on $q$ by encouraging a large expected $f(X, Y)$ value: $\\mathbb{E}_q [f(X,Y)]$. Meanwhile, the model $p_{\\theta}$ is encouraged to stay close to $q$ through a KL divergence term. The learning problem is thus a constrained optimization:\n\\begin{align}\n \\max_{\\theta, q} &\\mathcal{L}(\\theta) - \\mathrm{KL}(q(Y|X)||p_\\theta(Y|X)) + \\xi \\\\\n &s.t.~~ \\mathbb{E}_q [f(X,Y)] > \\xi,\n\\end{align}\nwhere $\\xi$ is the slack variable. The PR framework is also related to other constraint-driven learning methods~. 
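For the soft (Lagrangian) version of this constrained problem, the optimal $q$ has the well-known closed-form shape $q(Y) \\propto p_\\theta(Y)\\exp(\\alpha f(X,Y))$; a pure-Python toy sketch over a hypothetical four-candidate output space (all numbers are illustrative assumptions, not from the survey):

```python
import math

# Toy sketch of a posterior-regularization step: the soft-constraint
# solution reweights the model distribution toward outputs with a higher
# constraint score f(X, Y).

p_theta = [0.4, 0.2, 0.2, 0.2]   # model distribution p_theta(Y|X) over 4 candidates
f = [0.1, 1.0, 0.0, 0.1]         # constraint scores f(X, Y)

def pr_q(p, scores, alpha=1.0):
    # q(Y) proportional to p_theta(Y) * exp(alpha * f(X, Y))
    w = [pi * math.exp(alpha * fi) for pi, fi in zip(p, scores)]
    z = sum(w)
    return [wi / z for wi in w]

q = pr_q(p_theta, f)
e_p = sum(pi * fi for pi, fi in zip(p_theta, f))  # E_p[f] before regularization
e_q = sum(qi * fi for qi, fi in zip(q, f))        # E_q[f] under the regularized q
print(e_p, e_q)  # q shifts probability mass toward high-f outputs, so e_q > e_p
```

The coefficient `alpha` plays the role of the Lagrange multiplier, trading off closeness to $p_\\theta$ against constraint satisfaction.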
We refer readers to for more discussions.\n\\vspace{-0.05in}", "id": "7cc37322-8500-4abd-b9a0-cb0de0e042b5", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "5c5bd456-c1bb-49e2-8001-2c1c0983cea3", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "Knowledge-enhanced Learning and Inference" ], [ "subsubsection", "Learning with knowledge constraints" ] ], "subsections": [], "title": "Learning with knowledge constraints" }, { "cite_extract_rate": 0.5, "cites": [ 3135 ], "content": "}\nPre-trained language models leverage large amounts of unannotated data with a simple log-likelihood training objective. Controlling language generation by particular knowledge in a pre-trained model is difficult if we do not modify the model architecture to allow for external input knowledge or fine-tuning with specific data~. Plug and play language model (PPLM) opened up a new way to control language generation with particular knowledge during inference. At every generation step during inference, the PPLM shifts the history matrix in the direction of the sum of two gradients: one toward higher log-likelihood of the attribute\n$a$ under the conditional attribute model $p(a|Y)$ and the other toward higher log-likelihood of the unmodified pre-trained generation model $p(Y|X)$ (e.g., GPT). 
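As a toy illustration of this steering idea (attribute-model gradient only; a one-dimensional scalar stands in for the transformer's key-value history, and the logistic attribute model is a hypothetical stand-in for a real classifier):

```python
import math

# Toy sketch of a PPLM-style update: nudge a hidden state in the
# direction of grad log p(a|state), normalized by its norm**gamma.

def attr_log_prob(s, w=2.0, b=-1.0):
    # log p(a=1 | s) under a toy logistic attribute model
    return -math.log(1.0 + math.exp(-(w * s + b)))

def pplm_step(s, delta, step=0.1, gamma=1.0, eps=1e-4):
    # finite-difference gradient of log p(a | s + delta)
    g = (attr_log_prob(s + delta + eps) - attr_log_prob(s + delta - eps)) / (2 * eps)
    return delta + step * g / (abs(g) ** gamma + 1e-10)

s_t, delta = 0.0, 0.0
for _ in range(5):              # the update step is repeated multiple times
    delta = pplm_step(s_t, delta)
print(attr_log_prob(s_t), attr_log_prob(s_t + delta))
```

After the updates, the perturbed state assigns a higher attribute log-likelihood than the unperturbed one; a real PPLM additionally pulls the state toward the unmodified language model to preserve fluency.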
Specifically, the attribute model $p(a|Y)$ makes gradient-based updates to $\\Delta\\textbf{S}_t$ as follows:\n\\begin{equation}\n\\Delta \\textbf{S}_t \\leftarrow \\Delta \\textbf{S}_t + \\frac{\\nabla_{\\Delta \\textbf{S}_t} \\log p(a|\\textbf{S}_t + \\Delta\\textbf{S}_t)}{||\\nabla_{\\Delta \\textbf{S}_t} \\log p(a|\\textbf{S}_t + \\Delta\\textbf{S}_t)||^{\\gamma} },\n\\end{equation}\nwhere $\\gamma$ is the scaling coefficient for the normalization term; $\\Delta \\textbf{S}_t$ is the update of the history matrix $\\textbf{S}_t$ (see Eq.(\\ref{eq:transformer})) and is initialized as zero. The update step is repeated multiple times. Subsequently, a forward pass through the generation model is performed to obtain the updated $\\widetilde{\\textbf{S}}_{t+1}$ as $\\widetilde{\\textbf{S}}_{t+1} = \\textsc{Decoder}((\\textbf{S}_t + \\Delta \\textbf{S}_t), \\textbf{e}(y_t), \\textbf{H})$. The perturbed $\\widetilde{\\textbf{S}}_{t+1}$ is then used to generate a new logit vector. PPLM is efficient and can flexibly combine differentiable attribute models to steer text generation~.\n\\vspace{-0.05in}", "id": "175121d4-807a-498b-9b55-642d5bd5480e", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "5c5bd456-c1bb-49e2-8001-2c1c0983cea3", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "General Methods of Integrating Knowledge into NLG" ], [ "subsection", "Knowledge-enhanced Learning and Inference" ], [ "subsubsection", "Inference with knowledge constraints" ] ], "subsections": [], "title": "Inference with knowledge constraints" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "b426fa47-f85f-4367-a1dc-a6d9c866a437", "level": "section", "origin_cites_number": 0, "parent_id": "5f69c7fa-bf2b-4523-bd34-303cffb73a99", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by Internal Knowledge" ] ], "subsections": [ "01a02363-5419-45c4-a559-242074181a80", 
"9faa22fc-e3f2-4757-83e5-d722d65d47c3", "a4877cd1-8484-4f68-8049-88876f588173", "d9dae76d-34fa-49af-b1bc-6337a2c466ae" ], "title": "NLG enhanced by Internal Knowledge" }, { "cite_extract_rate": 0.5, "cites": [ 2002, 3136 ], "content": "Topic, which can be considered as a representative or compressed form of text, has often been used to maintain semantic coherence and guide the NLG process.\nTopic modeling is a powerful tool for finding the high-level content of a document collection in the form of latent topics~.\nA classical topic model, Latent Dirichlet allocation (LDA), has been widely used for inferring a low-dimensional representation that captures the latent semantics of words and documents~. In LDA, each topic is defined as a distribution over words and each document as a mixture distribution over topics. LDA generates the words in a document from the document's topic distribution and each topic's word distribution.\nRecent advances in neural techniques open a new way of learning low-dimensional representations of words from word prediction and context prediction tasks, making neural topic models a popular choice for finding latent topics in text~.\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=0.95\\textwidth]{figures/topic-enhanced-three-methods.pdf}\n \\end{center}\n \\vspace{-0.15in}\n \\caption{Three typical methodologies for incorporating topics into NLG. Detailed designs are not included.}\n \\label{fig:topic}\n \\vspace{-0.05in}\n\\end{figure}\nNext, we introduce popular NLG applications enhanced by topics:\n\\begin{itemize}\n \\item \\textbf{Dialogue system.} A vanilla Seq2Seq often generates trivial or non-committal sentences of frequent words or phrases in the corpus~. For example,\n a chatbot may say \\textit{``I do not know''} or \\textit{``I see''} too often. Though these off-topic responses are safe replies to many queries, they are boring and carry very little information. 
Such responses may quickly lead the conversation to an end, severely hurting user experience. Thus, on-topic response generation is highly needed.\n \\item \\textbf{Machine translation.} Though the input and output languages are different (e.g., translating English to Chinese), the contents are the same, and globally, under the same topic. Therefore, topic can serve as an auxiliary guidance to preserve the semantics information of input text in one language into the output text in the other language. \n \\item \\textbf{Paraphrase.} Topic information helps understand the potential meaning and determine the semantic range to a certain extent. Naturally, paraphrases concern the same topic, which can serve as an auxiliary guidance to promote the preservation of source semantic.\n\\end{itemize}\nAs shown in Figure \\ref{fig:topic}, we summarize topic-enhanced NLG methods into three methodologies: (M1) leverage topic words from generative topic models; (M2) jointly optimize generation model and CNN topic model; (M3) enhance NLG by neural topic models with variational inference.\n\\vspace{-0.05in}", "id": "01a02363-5419-45c4-a559-242074181a80", "level": "subsection", "origin_cites_number": 4, "parent_id": "b426fa47-f85f-4367-a1dc-a6d9c866a437", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by Internal Knowledge" ], [ "subsection", "NLG Enhanced by Topic" ] ], "subsections": [ "bfcfe80d-d9d0-4072-af71-cf7604b89bc0", "23b7d35e-1fe1-4a67-9fdd-70137fbbcf04", "d6d92af5-2a1d-4ddd-b9a7-c44d12867c48", "94c4ec14-3c4f-4412-a14a-b15327803110" ], "title": "NLG Enhanced by Topic" }, { "cite_extract_rate": 0.22222222222222202, "cites": [ 2002, 3127 ], "content": "}\n\\label{sec:topic-m1}\nTopics help understand the semantic meaning of sentences and determine the semantic spectrum to a certain extent. 
\nTo enhance text generation, an effective solution is to first discover topics using generative topic models (e.g., LDA), and then incorporate the topic representations into neural generation models, as illustrated in Figure \\ref{fig:topic}(a). In existing work, there are two mainstream methods to represent topics obtained from generative topic models. The first way is to use the generated topic distributions for each word (i.e., word distributions over topics) in the input sequence~. The second way is to assign a specific topic to the input sequence, then pick the top-$k$ words with the highest probabilities under the topic, and use word embeddings (e.g., GloVe) to represent topic words~. Explicitly making use of topic words can bring stronger guidance than topic distributions, but the guidance may deviate from the target output sequence when some generated topic words are irrelevant.\nZhang et al. proposed the first work of using a topic-informed Seq2Seq model by concatenating the topic distributions with encoder and decoder hidden states~.\nXing et al. designed a topic-aware Seq2Seq model in order to use topic words as prior knowledge to help dialogue generation~. \n\\begin{table*}[t]\n\\centering\n\\caption{Natural language generation methods that incorporate topic knowledge in text generation. Since most of the methods are tested on different tasks and datasets, we only compare the performance between ``w/o topic'' setting and ``with topic'' setting. For evaluation metrics, PPL is short for perplexity (lower is better); B-4 is short for BLEU-4 (higher is better); R-L is short for ROUGE-L (higher is better).}\n\\label{tab:topicinseq2seq}\n\\vspace{-0.1in}\n{\\scalebox{0.82}{\\begin{tabular}{|l|l|c|c|l|l|l|l|l|}\n\\hline\n\\multirow{2}*{Task} & \\multirow{2}*{Method} & \\multirow{2}*{Ref.} & \\multirow{2}*{Cat.} & \\multicolumn{2}{c|}{Framework components} & \\multicolumn{3}{c|}{Effect of topic modeling} \\\\\n\\cline{5-9}\n& & & & Seq. 
Enc/Dec & Topic model & Dataset & w/o topic & with topic \\\\\n\\hline \\hline\n\\multirow{2}*{\\makecell[l]{Dialogue \\\\ system}} & Tp-S2S & & M1 & RNN Seq2Seq & LDA topics & Baidu Tieba & (PPL) 147.0 & (PPL) 134.6\\\\\n& PEE & & M3 & RNN Seq2Seq & Neural topics & PersonaChat& (B-4) 2.98 & (B-4) 3.56 \\\\ \n\\hline \\hline\n\\multirow{2}*{\\makecell[l]{Machine \\\\ translation}} & Tp-NMT & & M1 & RNN Seq2Seq & LDA topics & NIST & (B-4) 34.76 & (B-4) 35.91 \\\\\n& BLT-NMT & & M2 & RNN Seq2Seq & CNN topics & NIST & (B-4) 38.97 & (B-4) 40.10 \\\\\n\\hline \\hline\n\\multirow{3}*{\\makecell[l]{Summari\\\\ -zation}} & Tp-CS2S & & M1 & CNN Seq2Seq & LDA topics & XSum & (R-L) 25.23 & (R-L) 25.75 \\\\\n& TGVAE & & M3 & RNN with VAE & Neural topics & Gigawords & (R-L) 32.13 & (R-L) 33.02 \\\\\n& VHTM & & M3 & RNN with VAE & Neural topics & CNN/DM & (R-L) 36.73 & (R-L) 37.18 \\\\\n\\hline \\hline\n\\multirow{2}*{Paraphrase} & TGLM & & M2 & RNN Seq2Seq & CNNs topics & Yahoo! Ans & (PPL) 99.13 & (PPL) 88.69 \\\\\n& PTA & & M1 & RNN Seq2Seq & LDA topics & Quora & (B-4) 28.76 & (B-4) 31.75 \\\\\n\\hline\n\\end{tabular}}}\n\\vspace{-0.1in}\n\\end{table*}\n\\vspace{-0.05in}", "id": "bfcfe80d-d9d0-4072-af71-cf7604b89bc0", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "01a02363-5419-45c4-a559-242074181a80", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by Internal Knowledge" ], [ "subsection", "NLG Enhanced by Topic" ], [ "subsubsection", "M1: Leverage Topic Words from Generative Topic Models" ] ], "subsections": [], "title": "M1: Leverage Topic Words from Generative Topic Models" }, { "cite_extract_rate": 0, "cites": [], "content": "}\n\\label{sec:topic-m2}\nThe LDA models were separated from the training process of neural generation model and were not able to adapt to the diversity of dependencies between input and output sequences. 
\nTherefore, the idea of addressing this issue is to use neural topic models. \nConvolutional neural networks (CNN) were used to\nlearn latent topic representations through iterative convolution and pooling operations.\nThere is growing interest in using CNNs to map latent topics implicitly into topic vectors that can be used to enhance text generation tasks~. Empirical analyses showed that convolution-based topic extractors could outperform LDA-based topic models for multiple applications (e.g., dialogue system, text summarization, machine translation). However, a theoretical analysis ensuring the quality of the topics captured by the convolutions is missing, and their interpretability is not as satisfactory as that of LDA-based topic models.\n\\vspace{-0.05in}", "id": "23b7d35e-1fe1-4a67-9fdd-70137fbbcf04", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "01a02363-5419-45c4-a559-242074181a80", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by Internal Knowledge" ], [ "subsection", "NLG Enhanced by Topic" ], [ "subsubsection", "M2: Jointly Optimize Generation Model and CNN Topic Model" ] ], "subsections": [], "title": "M2: Jointly Optimize Generation Model and CNN Topic Model" }, { "cite_extract_rate": 0.5, "cites": [ 3137, 5680 ], "content": "}\n\\label{sec:topic-m3}\nNeural topic models can be trained efficiently by backpropagation~. In neural topic models, Dirichlet distributions can be employed as the prior to generate the parameters of the multinomial distribution $\\theta_d$ for each document~. 
The generative process of LDA is represented as: (1) $\\theta_d \\sim \\mathrm{Dirichlet} (\\alpha)$; (2) $t_i \\sim \\mathrm{Multinomial} (\\theta_d)$; (3) $w_i \\sim \\mathrm{Multinomial} (\\beta_{t_i})$,\nwhere $d$ denotes the bag-of-words representation of a document, $t_i$ represents the topic assignment for word $w_i$, and $\\beta_{t_i}$ represents the topic distribution over words given topic assignment $t_i$. \nHowever, a directed generative model comes up against the problem of establishing low-variance gradient estimators. Miao et al. parameterized the multinomial distributions with neural networks and jointly learned the model parameters via variational inference~. They created neural structures for constructing topic distributions conditioned on a draw from a multivariate Gaussian distribution, represented as \n$\\theta_d\\sim\\mathrm{G} (\\mu_0, \\sigma_0^2)$,\nwhere $\\mathrm{G} (\\mu_0, \\sigma_0^2)$ is composed of a neural network conditioned on an isotropic Gaussian $\\mathrm{N} (\\mu_0, \\sigma_0^2)$. \nTaking a Gaussian prior distribution makes re-parameterization feasible to build an unbiased and low-variance gradient estimator for the variational distribution~. Without a conjugate prior, the updates of the parameters are derived directly and easily from the variational lower bound. Formally, a variational lower bound\nfor the document log-likelihood is:\n\\begin{equation}\n \\setlength\\abovedisplayskip{2pt} \n \\setlength\\belowdisplayskip{2pt}\n \\mathcal{J}_{topic} = \\mathbb{E}_{q(\\theta|d)} [\\log p(d|\\beta, \\theta)] - \\mathrm{KL} (q(\\theta|d)||p(\\theta |\\mu_0, \\sigma^2_0)),\n\\end{equation}\nwhere $q(\\theta|d)$ is the variational distribution approximating the true posterior $p(\\theta|d)$. The lower bound is estimated by sampling $\\theta$ from $q(\\theta|d) = \\mathrm{G}(\\theta|\\mu(d), \\sigma^2(d))$. \nIn order to combine the neural topic model and the neural generation model, the idea is to use the Variational Auto-Encoder (VAE)~. 
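The LDA generative process (1)-(3) above can be sketched with ancestral sampling; the two-topic vocabulary, $\\alpha$, and $\\beta$ values below are hypothetical toy numbers:

```python
import random

# Toy sketch of the LDA generative process: sample a document's topic
# mixture, then a topic and a word for each position.

random.seed(0)
vocab = ['game', 'team', 'market', 'stock', 'music', 'film']

def sample_dirichlet(alpha):
    # Dirichlet draw via normalized Gamma samples
    g = [random.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def sample_categorical(p):
    r, acc = random.random(), 0.0
    for i, pi in enumerate(p):
        acc += pi
        if r < acc:
            return i
    return len(p) - 1

# beta[k] = topic k's distribution over the vocabulary
beta = [[0.4, 0.4, 0.05, 0.05, 0.05, 0.05],   # a 'sports'-like topic
        [0.05, 0.05, 0.4, 0.4, 0.05, 0.05]]   # a 'finance'-like topic

theta_d = sample_dirichlet([0.5, 0.5])          # (1) theta_d ~ Dirichlet(alpha)
doc = []
for _ in range(8):
    t_i = sample_categorical(theta_d)           # (2) t_i ~ Multinomial(theta_d)
    w_i = vocab[sample_categorical(beta[t_i])]  # (3) w_i ~ Multinomial(beta_{t_i})
    doc.append(w_i)
print(doc)
```

Neural topic models replace the Dirichlet draw with a reparameterized Gaussian so that the whole sampler becomes differentiable.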
It adopts autoregressive networks (e.g., LSTM) as both the encoder and decoder. VAE can learn latent codes $z$ of texts by reconstructing texts with its decoder. It assumes that the generation process is controlled by codes in a continuous latent space. This kind of VAE implementation considers the sequential information of texts and can thus model their linguistic structure. Wang et al. proposed the topic-guided variational autoencoder (TGVAE), which draws the latent code $z$ from a topic-dependent Gaussian mixture prior in order to incorporate the topical knowledge into latent variables~. The topic-dependent Gaussian Mixture Model (GMM) is defined as:\n$p(z|\\beta, t) = \\sum^{T}_{i=1} t_i \\mathrm{N} (\\mu(\\beta_i), \\sigma^2(\\beta_i))$,\nwhere $T$ is the number of topics, and $\\mu(\\cdot)$ and $\\sigma^2(\\cdot)$ are functions implemented by MLPs. TGVAE uses bag-of-words as input and embeds an input document into a topic vector. The topic vector is then used to reconstruct the bag-of-words input, and the learned topic distribution over words is used to model a topic-dependent prior to generate an output sequence $Y$ conditioned on an input sequence $X$. \nTherefore, to maximize the log-likelihood $\\log p(Y,d|X)$, a variational objective function is constructed as:\n\\begin{equation}\n \\setlength\\abovedisplayskip{2pt} \n \\setlength\\belowdisplayskip{2pt}\n \\mathcal{J}_{seq2seq} = \\mathbb{E}_{q(z|X)} [\\log p(Y|X, z)] - \\mathbb{E}_{q(\\theta|d)} [\\mathrm{KL} (q(z|X)||p(z|\\beta, \\theta))],\n\\end{equation}\nwhere $q(z|X)$ is the variational distribution for $z$. 
The combined objective function is given by:\n\\begin{equation}\n \\setlength\\abovedisplayskip{2pt} \n \\setlength\\belowdisplayskip{2pt}\n \\mathcal{J} = \\mathcal{J}_{topic} + \\mathcal{J}_{seq2seq}.\n\\end{equation}", "id": "d6d92af5-2a1d-4ddd-b9a7-c44d12867c48", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "01a02363-5419-45c4-a559-242074181a80", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by Internal Knowledge" ], [ "subsection", "NLG Enhanced by Topic" ], [ "subsubsection", "M3: Enhance NLG by Neural Topic Models with Variational Inference" ] ], "subsections": [], "title": "M3: Enhance NLG by Neural Topic Models with Variational Inference" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 3127 ], "content": "} \n\\vspace{-0.05in}\nFor \\textbf{M1}, topic models (e.g., LDA) have a strict probabilistic interpretation since the semantic representations of both words and documents are combined into a unified framework. Besides, topic models can be easily used and integrated into generation frameworks. For example, topic words can be represented as word embeddings; topic embeddings can be integrated into the decoding phase through topic attention. However,\nLDA models are separated from the training process of generation, so they cannot adapt to the diversity of dependencies between input and output sequences.\nFor \\textbf{M2}, it is an end-to-end neural framework that simultaneously learns latent topic representations and generates output sequences. Convolutional neural networks (CNN) are often used to generate the latent topics through iterative convolution and pooling operations. However, a theoretical analysis ensuring the quality of the topics captured by the convolutions is missing, and their interpretability is not as good as that of LDA-based topic models.\nFor \\textbf{M3}, neural topic models combine the advantages of neural networks and probabilistic topic models. 
They enable backpropagation for joint optimization, contributing to more coherent topics, and can be scaled to large datasets. Generally, neural topic models can provide better topic coherence than LDAs~. However, neural variational approaches share the same drawback: the topic distribution is assumed to be an isotropic Gaussian, which makes them incapable of modeling topic correlations. Existing neural topic models assume that documents are i.i.d. in order to adopt VAE, while in practice documents are commonly correlated. Such correlations are critical for topic modeling.\n\\vspace{-0.05in}", "id": "94c4ec14-3c4f-4412-a14a-b15327803110", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "01a02363-5419-45c4-a559-242074181a80", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by Internal Knowledge" ], [ "subsection", "NLG Enhanced by Topic" ], [ "subsubsection", "Discussion and Analysis of Different Methods" ] ], "subsections": [], "title": "Discussion and Analysis of Different Methods" }, { "cite_extract_rate": 0.416666666666666, "cites": [ 3116, 168, 650, 3139, 3138 ], "content": "A keyword (a.k.a. key phrase or key term) is often defined as a sequence of one or more words that provides a compact representation of the content of a document. The mainstream methods of keyword acquisition for documents can be divided into two categories~: keyword assignment and keyword extraction. Keyword assignment means that keywords are chosen from a controlled vocabulary of terms or a predefined taxonomy.\nKeyword extraction selects the most representative words explicitly present in the document, independent of any vocabulary. Keyword extraction techniques (e.g., TF-IDF, TextRank, PMI) have been widely used for decades. 
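As an illustration of the classical extraction route, a minimal TF-IDF keyword extractor over a toy three-document corpus (the corpus is hypothetical; the smoothed idf follows common implementations):

```python
import math
from collections import Counter

# Minimal TF-IDF keyword extraction, one of the classical techniques
# listed above (alongside TextRank and PMI).

corpus = [
    'team won game',
    'stock market fell market dropped',
    'team traded star player',
]

def tfidf_keywords(doc_idx, corpus, k=2):
    docs = [d.split() for d in corpus]
    n = len(docs)
    tf = Counter(docs[doc_idx])
    scores = {}
    for w, c in tf.items():
        df = sum(1 for d in docs if w in d)      # document frequency
        idf = math.log((1 + n) / (1 + df)) + 1   # smoothed idf
        scores[w] = (c / len(docs[doc_idx])) * idf
    return [w for w, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]

print(tfidf_keywords(1, corpus, k=1))  # ['market']: highest tf within document 1
```

A real extractor would add tokenization, stopword filtering, and phrase-level candidates, but the scoring logic is the same.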
Many NLG tasks can benefit from incorporating such a condensed form of essential content in a document to maintain semantic coherence and guide the generation process.\nNext, we introduce popular NLG applications enhanced by keywords:\n\\begin{itemize}\n \\item \\textbf{Dialogue system.} Keywords help drive the generated responses to be informative and avoid universally relevant replies that carry little semantics. Besides, \n recent work introduced personalized information into dialogue generation to help deliver better responses, such as emotion~ and persona~.\n \\item \\textbf{Summarization.} \n Vanilla Seq2Seq models often suffer from a generation process that is hard to control and often misses salient information~. Making use of keywords as explicit guidance can provide significant clues about the main points of the document~. It is closer to the way that humans write summaries: compose sentences that contain the keywords, and then make the modifications necessary to ensure fluency and grammatical correctness.\n \\item \\textbf{Question generation.} It aims to generate questions from a given answer and its relevant context. 
Given an answer and its associated context, it is possible to raise multiple questions with different focuses on the context and various means of expression.\n\\end{itemize}\n\\begin{table}[t]\n\\caption{Natural language generation methods that incorporate keyword in text generation.}\n\\vspace{-0.1in}\n\\label{tab:keyword}\n\\begin{subtable}[t]{0.98\\textwidth}\n\\centering\n\\caption{(M1) Descriptions and quantitative comparisons between three methods for emotional dialogue systems.}\n\\label{tab:emotion}\n\\vspace{-0.05in}\n{\\scalebox{0.82}{\\begin{tabular}{|l|l|c|l|c|c|c|}\n\\hline\n\\multirow{2}*{Task} & \\multirow{2}*{Method} & \\multirow{2}*{Ref.} & \\multirow{2}*{Assignment method} & \\multicolumn{3}{c|}{Experiments on NLPCC dataset} \\\\\n\\cline{5-7}\n& & & & BLEU & D-1/D-2 & Emotion w/s \\\\\n\\hline \\hline\n\\multirow{4}*{\\makecell[l]{Dialogue \\\\ system}} & Seq2Seq & & Seq2Seq attention \\textit{without} using keywords & 1.50 & 0.38/1.20 & 33.5/37.1 \\\\\n& E-SCBA & & MLP classifier to 7 emotions (categories) & 1.69 & 0.54/4.84 & 72.0/51.2 \\\\\n& EmoChat & & E-SCBA + two memory modules for decoding & 1.68 & 0.90/7.35 & 76.5/58.0 \\\\ \n& EmoDS & & MLP classifier after decoding (discriminator) & 1.73 & 1.13/8.67 & 81.0/68.7 \\\\\n\\hline\n\\end{tabular}}}\n\\end{subtable}\n\\begin{subtable}[t]{0.98\\textwidth}\n\\centering\n\\vspace{0.1in}\n\\caption{(M2) As most methods are tested on different tasks and datasets, we only compare the performance between ``w/o keyword'' setting and ``with keyword'' setting. 
Besides, HM is short for human evaluation.}\n\\label{tab:hm}\n\\vspace{-0.05in}\n{\\scalebox{0.82}{\\begin{tabular}{|l|l|c|l|l|l|l|l|}\n\\hline\n\\multirow{2}*{Task} & \\multirow{2}*{Method} & \\multirow{2}*{Ref.} & Extraction & Keyword & \\multicolumn{3}{c|}{Effect of keyword} \\\\\n\\cline{6-8}\n& & & method & labels & Dataset & w/o keyword & with keyword \\\\\n\\hline \\hline\n\\multirow{4}*{\\makecell[l]{Summari- \\\\ zation}} & \\multirow{2}*{\\makecell[l]{KIGN}} & \\multirow{2}*{\\makecell[l]{}} & \\multirow{2}*{\\makecell[l]{TextRank}} & \\multirow{2}*{\\makecell[l]{Unsupervised}} & CNN/DM & (R-2) 15.66 & (R-2) 17.12 \\\\ \n& & & & & Gigaword & (R-2) 23.61 & (R-2) 23.93 \\\\ \n\\cline{2-8} \n& ComGen & & PMI and TFIDF & Unsupervised & Tencent & (HM) 5.77 & (HM) 7.19 \\\\ \n& KGAS & & BiLSTM-Softmax & $\\text{w}(X) \\cap \\text{w}(Y)$ & Gigaword & (R-2) 23.61 & (R-2) 25.06 \\\\\n\\hline \\hline\n\\multirow{2}*{\\makecell[l]{Question \\\\ generation}} & Selector & & BiLSTM-Softmax & $\\text{w}(X) \\cap \\text{w}(Y)$ & SQuAD & (B-4) 14.72 & (B-4) 15.87 \\\\\n& Prior & & BiLSTM-Softmax & $\\text{w}(X) \\cap \\text{w}(Y)$ & SQuAD & (B-4) 14.72 & (B-4) 15.34 \\\\\n\\hline\n\\end{tabular}}}\n\\end{subtable}\n\\vspace{-0.05in}\n\\end{table}\nResearchers have developed a great line of keyword-enhanced NLG methods. 
These methods can be categorized into two methodologies: (M1) Incorporate keyword assignment into text generation; (M2) Incorporate keyword extraction into text generation.\n\\vspace{-0.05in}", "id": "9faa22fc-e3f2-4757-83e5-d722d65d47c3", "level": "subsection", "origin_cites_number": 12, "parent_id": "b426fa47-f85f-4367-a1dc-a6d9c866a437", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by Internal Knowledge" ], [ "subsection", "NLG Enhanced by Keywords" ] ], "subsections": [ "3790b927-52af-44a2-84c6-8f025ac055eb", "d070fae5-6e22-45a4-95da-ee1547936825", "fcd1fe7a-8ff8-4e23-b63c-8248ac289f2a" ], "title": "NLG Enhanced by Keywords" }, { "cite_extract_rate": 0.5714285714285711, "cites": [ 3116, 3140, 3127, 3141 ], "content": "}\n\\label{sec:keywork-m1}\nWhen assigning a keyword to an input document, the set of possible keywords is bounded by a pre-defined vocabulary~. \nThe keyword assignment is typically implemented by a classifier that maps the input document to a word in the pre-defined vocabulary~. \nUnfortunately, some NLG scenarios do not hold an appropriate pre-defined vocabulary, so keyword assignment cannot be widely used to enhance NLG tasks.\nOne applicable scenario is to use a pre-determined domain specific vocabulary to maintain relevance between the input and the output sequence~. Another scenario is to generate dialogue with specific attributes such as persona~, emotion~. 
\n\\vspace{-0.05in}", "id": "3790b927-52af-44a2-84c6-8f025ac055eb", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "9faa22fc-e3f2-4757-83e5-d722d65d47c3", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by Internal Knowledge" ], [ "subsection", "NLG Enhanced by Keywords" ], [ "subsubsection", "M1: Incorporate Keyword Assignment into Text Generation" ] ], "subsections": [ "b64176b7-636a-410d-8fca-fd044a39d56d", "54e5e57c-d49c-4b0e-9f92-b34c3aa4508e" ], "title": "M1: Incorporate Keyword Assignment into Text Generation" }, { "cite_extract_rate": 0.75, "cites": [ 3116, 3127, 3141 ], "content": "}\nA straightforward method of keyword assignment is to assign the words from pre-defined vocabulary and use them as the keywords~.\nSometimes, the input sequence does not have an explicit keyword, but we can find one from the pre-defined vocabulary. For example, a dialogue utterance \\emph{``If you had stopped him that day, things would have been different.''} expresses sadness but it does not have the word ``sad.''\nTo address this issue, Li et al. propose a method to predict an emotion category by fitting the sum of hidden states from encoder into a classifier~.\nThen, the response will be generated with the guidance of the emotion category. \nIn order to dynamically track how much the emotion is expressed in the generated sequence, Zhou et al. propose a memory module to capture the emotion dynamics during decoding~.\nEach category is initialized with an emotion state vector before the decoding phase starts. At each step, the emotion state decays by a certain amount. Once the decoding process is completed, the emotion state decays to zero, indicating that the emotion is completely expressed. 
\n\\vspace{-0.05in}", "id": "b64176b7-636a-410d-8fca-fd044a39d56d", "level": "paragraph", "origin_cites_number": 4, "parent_id": "3790b927-52af-44a2-84c6-8f025ac055eb", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by Internal Knowledge" ], [ "subsection", "NLG Enhanced by Keywords" ], [ "subsubsection", "M1: Incorporate Keyword Assignment into Text Generation" ], [ "paragraph", "M1.1: Adding assigned keyword into the decoder" ] ], "subsections": [], "title": "M1.1: Adding assigned keyword into the decoder" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nAs mentioned in~, explicitly incorporating emotional keywords suffers from expressing a certain emotion overwhelmingly. Instead, Song et al. propose to increase the intensity of the emotional experiences not by using emotional words explicitly, but by implicitly combining neutral words in distinct ways on emotion~. Specifically, they use an emotion classifier to build a sentence-level emotion discriminator, which helps to recognize the responses that express a certain emotion but not explicitly contain too many literal emotional words. 
The discriminator is connected to the end of the decoder.\n\\vspace{-0.05in}", "id": "54e5e57c-d49c-4b0e-9f92-b34c3aa4508e", "level": "paragraph", "origin_cites_number": 1, "parent_id": "3790b927-52af-44a2-84c6-8f025ac055eb", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by Internal Knowledge" ], [ "subsection", "NLG Enhanced by Keywords" ], [ "subsubsection", "M1: Incorporate Keyword Assignment into Text Generation" ], [ "paragraph", "M1.2: Assigning keyword for generated sequence" ] ], "subsections": [], "title": "M1.2: Assigning keyword for generated sequence" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 3139 ], "content": "}\n\\label{sec:keywork-m2}\nKeyword extraction selects salient words from input documents~.\nRecent work has used statistical keyword extraction techniques (e.g., PMI~, TextRank~), and neural-based keyword extraction techniques (e.g., BiLSTM~).\nThe process of incorporating extracted keywords into generation is much like the process discussed in Section \\ref{sec:keywork-m1}. It takes keywords as an additional input into the decoder. Recent work improves the encoding phase by adding another sequence encoder to represent keywords~.\nThen, the contextualized keyword representation is fed into the decoder together with the input sequence representation.\nTo advance the keyword extraction, Li et al. propose to use multi-task learning for training a keyword extractor network and generating summaries~. Because both summarization and keyword extraction aim to select important information from the input document, these two tasks can benefit from sharing parameters to improve the capacity of capturing the gist of the input text. \nIn practice, they take overlapping words between the input document and the ground-truth summary as keywords, and adopt a BiLSTM-Softmax as the keyword extractor. A similar idea has also been used in question generation tasks~. 
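The overlap heuristic just described, taking $\\text{w}(X) \\cap \\text{w}(Y)$ as silver keyword labels for training an extractor, can be sketched as follows (the example document, summary, and stopword list are hypothetical):

```python
# Sketch of the keyword-labeling heuristic: words shared by the source
# document and the ground-truth summary serve as silver keyword labels.

def overlap_keywords(source, target, stopwords=frozenset({'the', 'a', 'of', 'to'})):
    src = set(source.lower().split())
    tgt = set(target.lower().split())
    return sorted((src & tgt) - stopwords)

doc = 'the central bank raised interest rates to curb inflation'
summary = 'bank raises rates against inflation'
print(overlap_keywords(doc, summary))  # ['bank', 'inflation', 'rates']
```

These silver labels are then used as supervision for a token-level extractor (e.g., a BiLSTM-Softmax tagger), avoiding manual keyword annotation.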
They use overlapping words between the input answer context and the ground-truth question as keywords.", "id": "d070fae5-6e22-45a4-95da-ee1547936825", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "9faa22fc-e3f2-4757-83e5-d722d65d47c3", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by Internal Knowledge" ], [ "subsection", "NLG Enhanced by Keywords" ], [ "subsubsection", "M2: Incorporate Keyword Extraction into Text Generation" ] ], "subsections": [], "title": "M2: Incorporate Keyword Extraction into Text Generation" }, { "cite_extract_rate": 0, "cites": [], "content": "}\n\\vspace{-0.05in}", "id": "fcd1fe7a-8ff8-4e23-b63c-8248ac289f2a", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "9faa22fc-e3f2-4757-83e5-d722d65d47c3", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by Internal Knowledge" ], [ "subsection", "NLG Enhanced by Keywords" ], [ "subsubsection", "Discussion and Analysis of Different Methods" ] ], "subsections": [ "138bb13a-0350-4afe-a410-de961ad276b2", "8491e52a-8058-437f-bbe0-b5470ce10c95" ], "title": "Discussion and Analysis of Different Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nFor \\textbf{M1}, the primary advantage of keyword assignment is that the quality of keywords is guaranteed, because irrelevant keywords are not included in the pre-defined vocabulary. Another advantage is that even if two semantically similar documents do not have common words, they can still be assigned with the same keyword. However, there are mainly two drawbacks. On one hand, it is expensive to create and maintain dictionaries in new domains. So, the dictionaries might not be available. On the other hand, potential keywords occurring in the document would be unfortunately ignored if they were not in the vocabulary. 
Therefore, keyword assignment is suitable for tasks that require specific categories of keywords to guide the generated sentences with this key information. For example, dialogue systems generate responses with specific attitudes.\nFor \\textbf{M2}, keyword extraction selects the most representative words explicitly presented in the document, which is independent of any vocabulary.\nIt is easy to use but has two drawbacks.\nFirst, it cannot guarantee consistency because similar documents may still be represented by different keywords if they do not share the same set of words. Second, when an input document does not have a proper representative word and the keyword extractor unfortunately selects an irrelevant word from the document as a keyword, this wrong guidance will mislead the generation.\nTherefore, keyword extraction is suitable for tasks where the output sequence needs to preserve important information from the input sequence, such as document summarization and paraphrasing.\n\\vspace{-0.05in}", "id": "138bb13a-0350-4afe-a410-de961ad276b2", "level": "paragraph", "origin_cites_number": 0, "parent_id": "fcd1fe7a-8ff8-4e23-b63c-8248ac289f2a", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by Internal Knowledge" ], [ "subsection", "NLG Enhanced by Keywords" ], [ "subsubsection", "Discussion and Analysis of Different Methods" ], [ "paragraph", "Pros and cons." ] ], "subsections": [], "title": "Pros and cons." }, { "cite_extract_rate": 0, "cites": [], "content": "} Table \\ref{tab:keyword} summarizes tasks and datasets used in keyword-enhanced NLG work. Comparing keyword-enhanced methods (e.g., E-SCBA~) with the basic Seq2Seq attention model shows that keyword-enhanced methods greatly improve both generation quality (evaluated by BLEU) and emotional expression (evaluated by emotion-w and emotion-s) on the NLPCC dataset. 
Besides, as shown in Table \\ref{tab:keyword}(a), EmoDS~ achieved the best performance among the three M1 methods, which indicates that treating keyword assignment as a discriminative task yields a larger improvement than assigning the keyword before decoding. For M2 methods, since most methods were evaluated on different tasks, we can only compare the performance between ``without using keyword'' and ``using keyword''. As shown in Table \\ref{tab:keyword}(b), leveraging keywords extracted from the input sequence in a Seq2Seq model improves the generation quality on summarization and question generation tasks. Comparing KGAS~ with KIGN~, we observe that extracting keywords with a BiLSTM-Softmax (a supervised manner that uses overlapping words between $X$ and $Y$ as labels) performs better than using TextRank (an unsupervised manner). \n\\vspace{-0.05in}", "id": "8491e52a-8058-437f-bbe0-b5470ce10c95", "level": "paragraph", "origin_cites_number": 4, "parent_id": "fcd1fe7a-8ff8-4e23-b63c-8248ac289f2a", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by Internal Knowledge" ], [ "subsection", "NLG Enhanced by Keywords" ], [ "subsubsection", "Discussion and Analysis of Different Methods" ], [ "paragraph", "Quantitative analysis." ] ], "subsections": [], "title": "Quantitative analysis." }, { "cite_extract_rate": 1, "cites": [ 3143, 3142, 8624 ], "content": "A feature-enriched encoder not only reads the input sequence, but also incorporates auxiliary hand-crafted features~. 
Linguistic features are the most common hand-crafted features, such as part-of-speech (POS) tags, dependency parsing, and semantic parsing.\n\\vspace{-0.1in}", "id": "a4877cd1-8484-4f68-8049-88876f588173", "level": "subsection", "origin_cites_number": 3, "parent_id": "b426fa47-f85f-4367-a1dc-a6d9c866a437", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by Internal Knowledge" ], [ "subsection", "NLG Enhanced by Linguistic Features" ] ], "subsections": [ "d648911b-d037-414b-b733-5dd997fd5de0", "c418f3bb-d4d8-4dd3-87fb-212a2958082e", "4a0973d1-25ad-4c02-8191-660f39f1001c" ], "title": "NLG Enhanced by Linguistic Features" }, { "cite_extract_rate": 0.75, "cites": [ 1109, 3143, 3144 ], "content": "}\nPart-of-speech (POS) tagging assigns token tags to indicate the token's grammatical categories and part of speech such as \\textit{noun (N)}, \\textit{verb (V)}, \\textit{adjective (A)}. Named-entity recognition (NER) classifies named entities mentioned in unstructured text into pre-defined categories such as \\textit{person (P)}, \\textit{location (L)}, \\textit{organization (O)}. \nCoreNLP is the most commonly used tool~. Due to homonymy and word-formation processes, the same surface word form may be shared between several word types. 
Incorporating NER tags and POS tags helps detect named entities and understand the input sequence better, and hence further improves NLG~.\n\\vspace{-0.05in}", "id": "d648911b-d037-414b-b733-5dd997fd5de0", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "a4877cd1-8484-4f68-8049-88876f588173", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by Internal Knowledge" ], [ "subsection", "NLG Enhanced by Linguistic Features" ], [ "subsubsection", "POS tags and NER tags" ] ], "subsections": [], "title": "POS tags and NER tags" }, { "cite_extract_rate": 1, "cites": [ 3146, 8337, 3145 ], "content": "}\n\\label{sec:syn-graph}\nA syntactic dependency graph is a directed acyclic graph representing syntactic relations between words~. For example, in the sentence ``The monkey eats a banana'', ``monkey'' is the subject of the predicate ``eats'', and ``banana'' is the object.\nEnhancing sequence representations with dependency information captures source-side long-distance dependency constraints and parent-child relations between words~. \nIn NLG tasks, dependency information is often modeled in three different ways as follows: (i) linearized representation: linearize the dependency graph and then use a sequence model to obtain a syntax-aware representation~; (ii) path-based representation: calculate attention weights based on the linear distance between a word and the aligned center position, i.e., the greater a word's distance from the center position on the dependency graph, the smaller its contribution to the context vector~; and (iii) graph-based representation: use GNNs to aggregate information from dependency relations~. 
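To make the path-based weighting concrete, the following sketch (a hypothetical Gaussian-decay scheme over graph distances, not the exact formulation of any cited paper) computes distances on a toy dependency graph for ``The monkey eats a banana'' and converts them into attention weights that shrink as a word gets farther from the aligned center position:

```python
from collections import deque
import math

def graph_distances(edges, source, n):
    """BFS distances from `source` over an undirected dependency graph."""
    adj = [[] for _ in range(n)]
    for head, dep in edges:
        adj[head].append(dep)
        adj[dep].append(head)
    dist = [math.inf] * n
    dist[source] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == math.inf:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def distance_attention(edges, center, n, sigma=1.0):
    """Normalized weights where words farther from `center` contribute less."""
    dist = graph_distances(edges, center, n)
    scores = [math.exp(-(d ** 2) / (2 * sigma ** 2)) for d in dist]
    total = sum(scores)
    return [s / total for s in scores]

# "The(0) monkey(1) eats(2) a(3) banana(4)": (head, dependent) arcs
# eats->monkey, monkey->The, eats->banana, banana->a.
edges = [(2, 1), (1, 0), (2, 4), (4, 3)]
weights = distance_attention(edges, center=2, n=5)
```

Words one hop from the center word ``eats'' receive larger weights than words two hops away, matching the intuition that distant words contribute less to the context vector.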
\n\\vspace{-0.05in}", "id": "c418f3bb-d4d8-4dd3-87fb-212a2958082e", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "a4877cd1-8484-4f68-8049-88876f588173", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by Internal Knowledge" ], [ "subsection", "NLG Enhanced by Linguistic Features" ], [ "subsubsection", "Syntactic dependency graph" ] ], "subsections": [], "title": "Syntactic dependency graph" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1052, 1088 ], "content": "}\n\\label{sec:sem-graph}\nA semantic dependency graph represents \\textit{predicate-argument} relations between content words in a sentence and has various semantic representation schemes (e.g., DM) based on different annotation systems.\nNodes in a semantic dependency graph are extracted by semantic role labeling (SRL) or dependency parsing, and connected by different intra-semantic and inter-semantic relations~.\nSince the semantic dependency graph introduces a higher level of information abstraction that captures commonalities between different realizations of the same underlying predicate-argument structures, it has been widely used to improve text generation~. Jin et al. propose a semantic dependency guided summarization model~. They incorporate the semantic dependency graph and the input text by stacking encoders to guide the summary generation process. The stacked encoders consist of a sequence encoder and a graph encoder: the sequence encoder first reads the input text through stacked multi-head self-attention, and then the graph encoder captures semantic relationships and incorporates the semantic graph structure into the contextual-level representation. 
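As a toy illustration of how such a graph can be assembled before encoding (the SRL tuples below are hand-written stand-ins for a real semantic parser's output), predicate-argument relations can be collected into an adjacency map of the kind a graph encoder would consume:

```python
def build_semantic_graph(srl_tuples):
    """Group (predicate, role, argument) tuples into an adjacency map."""
    graph = {}
    for predicate, role, argument in srl_tuples:
        graph.setdefault(predicate, []).append((role, argument))
    return graph

# Hand-written stand-in for SRL output on "The monkey eats a banana".
srl_tuples = [("eats", "ARG0", "monkey"), ("eats", "ARG1", "banana")]
graph = build_semantic_graph(srl_tuples)
```

Real systems attach richer intra- and inter-semantic relations; this sketch only shows the basic predicate-argument grouping.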
\n\\vspace{-0.05in}", "id": "4a0973d1-25ad-4c02-8191-660f39f1001c", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "a4877cd1-8484-4f68-8049-88876f588173", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by Internal Knowledge" ], [ "subsection", "NLG Enhanced by Linguistic Features" ], [ "subsubsection", "Semantic dependency graph" ] ], "subsections": [], "title": "Semantic dependency graph" }, { "cite_extract_rate": 0.75, "cites": [ 2378, 3147, 2369 ], "content": "\\label{sec:openkg}\nFor those KGs (e.g., ConceptNet) constructed from data beyond the input text, we refer to them as \\textit{external KGs}. On the contrary, an \\textit{internal KG} is defined as a KG constructed solely from the input text. In this section, we will discuss incorporating internal KGs to help NLG~.\nAn internal KG plays an important role in understanding the input sequence, especially when it is of great length. By constructing an internal KG intermediary, redundant information can be merged or discarded, producing a substantially compressed form that represents the input document~. Besides, representations on KGs can produce a structured summary and highlight the proximity of relevant concepts, even when complex events related to the same entity span multiple sentences~.\nOne of the mainstream methods of constructing an internal KG is open information extraction (OpenIE).\nUnlike traditional information extraction (IE) methods, OpenIE is not limited to a small set of target entities and relations known\nin advance, but rather extracts all types of entities and relations found in the input text~. In this way, OpenIE facilitates the domain-independent discovery of relations extracted from text and scales to large heterogeneous corpora.\nAfter obtaining an internal KG, the next step is to learn the representation of the internal KG and integrate it into the generation model. For example, Zhu et al. 
use a graph attention network (GAT) to obtain the representation of each node, and fuse that into a transformer-based encoder-decoder architecture via attention~. Their method generates abstractive summaries with higher factual correctness. Huang et al. extend this approach by first encoding each paragraph as a sub-KG using GAT, and then connecting all sub-KGs with a Bi-LSTM~. This process models topic transitions and recurrences, which enables the identification of notable content, thus benefiting summarization. \n\\vspace{-0.05in}", "id": "d9dae76d-34fa-49af-b1bc-6337a2c466ae", "level": "subsection", "origin_cites_number": 4, "parent_id": "b426fa47-f85f-4367-a1dc-a6d9c866a437", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by Internal Knowledge" ], [ "subsection", "NLG Enhanced by Open Knowledge Graphs" ] ], "subsections": [], "title": "NLG Enhanced by Open Knowledge Graphs" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "fd996682-bb73-4d48-801b-792919f81dc2", "level": "section", "origin_cites_number": 0, "parent_id": "5f69c7fa-bf2b-4523-bd34-303cffb73a99", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ] ], "subsections": [ "c70e6955-5a55-48af-901c-b9f0878be07a", "2e94b5c0-1ff8-4d56-a733-a12ae2a57672", "c29ceb80-be3d-4dc9-8137-f14c6627e9b9", "14a1bb60-b565-432a-af53-1ea70efa8605" ], "title": "NLG enhanced by External Knowledge" }, { "cite_extract_rate": 1, "cites": [ 2366 ], "content": "One of the biggest challenges in NLG is to discover the dependencies of elements within a sequence and/or across input and output sequences. The dependencies are actually various types of \\emph{knowledge} such as commonsense, factual events, and semantic relationships. Knowledge base (KB) is a popular technology that collects, stores, and manages large-scale information for knowledge-based systems like search engines. 
It has a great number of triples composed of subjects, predicates, and objects. People also call them ``facts'' or ``factual triplets''. Recently, researchers have been designing methods to use KBs as external knowledge for learning the dependencies more easily, quickly, and accurately. \nNext, we introduce popular NLG applications enhanced by knowledge base:\n\\begin{itemize}\n \\item \\textbf{Question answering.} It is often difficult to generate proper answers only based on a given question. This is because, depending on what the question is looking for, a good answer may have different forms. It may complete the question precisely with the missing information. It may elaborate details of some part of the question. It may need reasoning and inference based on some facts and/or commonsense. So, only incorporating the input question into neural generation models often fails the task due to the lack of commonsense/factual knowledge. Related structured information about commonsense and facts can be retrieved from KBs.\n \\item \\textbf{Dialogue system.} The need for KBs in generating conversations or dialogues is related to QA but differs in two aspects. First, a conversation or dialogue could be an open discussion started by an open topic like ``\\emph{Do you have any recommendations?}'' Second, responding to an utterance at a certain step requires recalling previous contexts to determine the involved entities. The KB plays an important role in recognizing dependencies in long-range contexts. 
\n\\end{itemize}\nTo handle different kinds of relationships between the KB and input/output sequences, these methods can be categorized into two methodologies, as shown in Figure \\ref{fig:overall_kb_e}: (M1) design supervised tasks around the KB for joint optimization; (M2) enhance incorporation by selecting the KB or facts.\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{figures/kb-enhanced-two-methods.pdf}\n \\end{center}\n \\vspace{-0.1in}\n \\caption{The left figure demonstrates retrieving relevant triples, then using them for generation; the right figure demonstrates using KL to measure the proximity between the prior and posterior distributions.}\n \\label{fig:overall_kb_e}\n\\vspace{-0.15in}\n\\end{figure}\n\\vspace{-0.05in}", "id": "c70e6955-5a55-48af-901c-b9f0878be07a", "level": "subsection", "origin_cites_number": 1, "parent_id": "fd996682-bb73-4d48-801b-792919f81dc2", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG Enhanced by Knowledge Base" ] ], "subsections": [ "ce756bcf-0543-4240-af6d-ef9e29193c61", "bd237645-e27e-4fda-a09b-396270983466", "2013671e-3699-41ca-9035-5db949d200da" ], "title": "NLG Enhanced by Knowledge Base" }, { "cite_extract_rate": 0.5, "cites": [ 8625 ], "content": "}\nKnowledge bases (KBs) that acquire, store, and represent factual knowledge can be used to enhance\ntext generation. However, designing effective incorporation to achieve a desired enhancement is challenging because a vanilla Seq2Seq model often fails to represent discrete isolated concepts even though it performs well at learning smooth shared patterns (e.g., language diversity). \nTo fully utilize the knowledge bases, the idea is to jointly train neural models on multiple tasks. For example, the target task is answer sequence generation, and additional tasks include question understanding and fact retrieval in the KB. 
Knowledge can be shared across a unified encoder-decoder framework design. Typically, question understanding and fact retrieval are relevant and useful tasks, because a question can be parsed to match (e.g., via string matching, entity linking,\nnamed entity recognition) its subject and predicate with the components of a fact triple in the KB, and the answer is the object of the triple. \nKBCopy was the first work to generate responses using factual knowledge bases~. During generation, KBCopy is able to copy words from the KBs. However, directly copying relevant words from KBs is extremely challenging.\nCoreQA used both copying and retrieving mechanisms to generate answer sequences in an end-to-end fashion~. Specifically, it had a retrieval module to understand the question and find related facts from the KB. \nThen, the question and all retrieved facts are transformed into latent representations by two separate encoders. During the decoding phase, the integrated representations are fed into the decoder by performing a joint attention over both the input sequence and the retrieved facts.\nFigure \\ref{fig:overall_kb_e}(a) demonstrates a general pipeline that first retrieves relevant triples from KBs, then incorporates the top-ranked triples into the generation process.\n\\vspace{-0.05in}", "id": "ce756bcf-0543-4240-af6d-ef9e29193c61", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "c70e6955-5a55-48af-901c-b9f0878be07a", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG Enhanced by Knowledge Base" ], [ "subsubsection", "M1: Design Supervised Tasks around KB for Joint Optimization" ] ], "subsections": [], "title": "M1: Design Supervised Tasks around KB for Joint Optimization" }, { "cite_extract_rate": 0.5, "cites": [ 3128 ], "content": "}\nIdeally, the retrieved facts match the input and output sequence dependencies; however, this is 
not always true in real cases. Lian et al. observed that selecting relevant facts from KBs with retrieval models (e.g., semantic similarity) might not achieve appropriate knowledge selection~. The reason is that different kinds of selected knowledge facts can be used to generate diverse responses for the same input utterance. Given a specific utterance and response pair, the posterior distribution over the knowledge base, inferred from both the utterance and the response, may provide extra guidance on knowledge selection. The challenge lies in the discrepancy between the prior and posterior distributions. Specifically, the model learns to select effective knowledge only based on the prior distribution, so it is hard to obtain the correct posterior distribution during inference.\nTo tackle this issue, the work of Lian et al.~ and Wu et al.~ (shown in Figure \\ref{fig:overall_kb_e}(b)) trained the prior distribution to approximate the posterior distribution in order to select appropriate knowledge even without posterior information. They introduced an auxiliary loss, the Kullback-Leibler divergence loss ({KLDivLoss}), to measure the proximity between the prior distribution and the posterior distribution. The {KLDivLoss} is defined as follows:\n\\begin{equation}\n \\setlength\\abovedisplayskip{2pt} \n \\setlength\\belowdisplayskip{2pt}\n \\mathcal{L}_{\\emph{KLDiv}}(\\theta) = \\sum^{N}_{i=1} p(k = k_i|X, Y) \\log\\frac{p(k = k_i|X, Y)}{p(k = k_i|X)},\n\\end{equation}\nwhere $N$ is the number of retrieved facts. When minimizing {KLDivLoss}, the posterior distribution $p(k|X, Y)$ can be regarded as a label that the prior distribution $p(k|X)$ is trained to approximate. 
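The {KLDivLoss} above can be sketched numerically in plain Python (toy distributions for illustration, not any paper's implementation):

```python
import math

def kl_div_loss(posterior, prior, eps=1e-12):
    """KL(posterior || prior) over N retrieved facts:
    sum_i p(k_i | X, Y) * log(p(k_i | X, Y) / p(k_i | X))."""
    return sum(q * math.log((q + eps) / (p + eps))
               for q, p in zip(posterior, prior))

# Toy distributions over N = 3 retrieved facts.
posterior = [0.7, 0.2, 0.1]  # p(k | X, Y): informed by the response Y
prior = [0.4, 0.4, 0.2]      # p(k | X): all that is available at inference
loss = kl_div_loss(posterior, prior)
```

Minimizing this term pushes the prior $p(k|X)$ toward the posterior $p(k|X, Y)$, so that knowledge selection at inference, when $Y$ is unavailable, mimics the posterior choice.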
Finally, the total loss is written as the sum of the {KLDivLoss} and the {NLL} (generation) loss.\n\\vspace{-0.05in}", "id": "bd237645-e27e-4fda-a09b-396270983466", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "c70e6955-5a55-48af-901c-b9f0878be07a", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG Enhanced by Knowledge Base" ], [ "subsubsection", "M2: Enhance Incorporation by Selecting KB or Facts in KB" ] ], "subsections": [], "title": "M2: Enhance Incorporation by Selecting KB or Facts in KB" }, { "cite_extract_rate": 0.5, "cites": [ 3148, 3118, 3115 ], "content": "} \nThe relevance between triples in KBs and input sequences plays a central role in discovering knowledge for sequence generation.\nMethods in \\textbf{M1} typically follow a process that parses the input sequence and retrieves relevant facts; subsequently, a knowledge-aware output is generated based on the input sequence and the retrieved facts.\nDespite the improvement from modeling the KB with a memory network~, existing KB-enhanced methods still struggle to select precise triples.\nMethods of \\textbf{M2} improve the selection of facts, in which the ground-truth responses are used as the posterior context knowledge to supervise the training of the prior fact probability distribution.\nWu et al. used exact match and recall to measure whether the retrieved triples are used to generate the target outputs~. Table \\ref{tab:kb-quant} shows the entity recall scores of M1-based methods and M2-based methods reported in . 
We observe that compared to M1-based methods, M2-based methods can greatly improve the accuracy of triple retrieval, as well as the generation quality.\nThere are still remaining challenges in KB-enhanced methods.\nOne is that retrieved facts may contain noisy information, making the generation unstable~.\nThis problem is extremely harmful in NLG tasks, e.g., KB-based question answering and task-oriented dialogue system, since the information in KB is usually the expected entities in the response.\n\\begin{table*}[t]\n\\caption{M2-based methods can retrieve more precise triples, and further improve the generation performance.}\n\\vspace{-0.15in}\n\\centering\n\\setlength{\\tabcolsep}{2.5mm}\\scalebox{0.85}{\\begin{tabular}{|l|l|c|c|c|c|c||c|c|c|c|}\n\\hline\n\\multirow{3}*{Method} & \\multirow{3}*{Cat.} & \\multirow{3}*{Ref.} & \\multicolumn{4}{c||}{Chinese Weibo (large)~} & \\multicolumn{4}{c|}{Chinese Weibo (small)~} \\\\ \n\\cline{4-11}\n& & & \\multicolumn{2}{c|}{Entity score} & \\multicolumn{2}{c||}{Generation score} & \\multicolumn{2}{c|}{Entity score} & \\multicolumn{2}{c|}{Generation score} \\\\ \n\\cline{4-11}\n& & & Match & Recall & BLEU-2 & Dist-2 & Match & Recall & BLEU-2 & Dist-2 \\\\\n\\hline \\hline\nGenDS & M1 & & 0.97 & 0.37 & 3.42 & 4.27 & 0.75 & 0.26 & 2.09 & 1.66 \\\\\nCCM & M1 & & 1.09 & 0.37 & 4.75 & 4.87 & 0.99 & 0.28 & 3.26 & 2.59 \\\\\n\\hline\nConKADI & M2 & & - & - & - & - & \\textbf{1.48} & \\textbf{0.38} & \\textbf{5.06} & \\textbf{23.93} \\\\\nTaFact & M2 & & \\textbf{1.81} & \\textbf{0.47} & \\textbf{5.07} & \\textbf{23.56} & - & - & - & - \\\\\n\\hline\n\\end{tabular}}\n\\vspace{-0.05in}\n\\label{tab:kb-quant}\n\\end{table*}", "id": "2013671e-3699-41ca-9035-5db949d200da", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "c70e6955-5a55-48af-901c-b9f0878be07a", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG 
Enhanced by Knowledge Base" ], [ "subsubsection", "Discussion and Analysis of Different Methods" ] ], "subsections": [], "title": "Discussion and Analysis of Different Methods" }, { "cite_extract_rate": 0.842105263157894, "cites": [ 7011, 8623, 3150, 3119, 3121, 553, 3152, 3151, 1165, 3153, 3149, 1934, 8626, 3126, 3120, 8627 ], "content": "\\label{sec:know-graph}\nKnowledge graph (KG), as a type of structured human knowledge, has attracted great attention from both academia and industry. A KG is a structured representation of facts (a.k.a. knowledge triplets) consisting of entities\\footnote{For brevity, we use “entities” to denote both entities (e.g., prince) and concepts (e.g., musician) throughout the paper.}, relations, and semantic descriptions~. The terms of “knowledge base” and “knowledge graph” can be interchangeably used, but they do not have to be synonymous. The knowledge graph is organized as a graph, so the connections between entities are first-class citizens in it. {In the KG, people can easily traverse links to discover how entities are interconnected to express certain knowledge.} Recent advances in artificial intelligence research have demonstrated the effectiveness of using KGs in various applications like recommendation systems~.\nNext, we introduce popular NLG applications that have been enhanced by knowledge graph:\n\\begin{itemize}\n \\item \\textbf{Commonsense reasoning.} It aims to empower machines to capture the human commonsense from KG during generation.\n The methods exploit both structural and semantic information of the commonsense KG and perform reasoning over multi-hop relational paths, in order to augment the limited information with chains of evidence for commonsense reasoning. Popular tasks in commonsense reasoning generation include abductive reasoning (e.g., the $\\alpha$NLG task)~, counterfactual reasoning~, and entity description generation~. 
\n \\item \\noindent\\textbf{Dialogue system.} It frequently makes use of KG for the semantics in linked entities and relations~. A dialogue may shift focus from one entity to another, breaking one discourse into several segments, which can be represented as a linked path connecting the entities and their relations.\n \\item \\noindent\\textbf{Creative writing.} This task can be found in both scientific and story-telling domains. Scientific writing aims to explain natural processes and phenomena step by step, so each step can be reflected as a link on KG and the whole explanation is a path~. In story generation, the implicit knowledge in KG can facilitate the understanding of storyline and better predict what will happen in the next plot~.\n\\end{itemize}\nCompared with separate, independent knowledge triplets, knowledge graph provides comprehensive and rich entity features and relations for models to overcome the influence of the data distribution and enhance its robustness.\nTherefore, node embedding and relational path have played important roles in various text generation tasks. The corresponding techniques are knowledge graph embedding (KGE)~ and path-based knowledge graph reasoning~. Furthermore, it has been possible to encode multi-hop and high-order relations in KGs using the emerging graph neural network (GNN)~ and graph-to-sequence (Graph2Seq) frameworks~.\n\\vspace{-0.05in}\n\\begin{definition}[Knowledge graph (KG)]\nA knowledge graph (KG) is a directed and multi-relational graph composed of entities and relations which are regarded as nodes and different types of edges. 
Formally, a KG is defined as $\\mathcal{G} = (\\mathcal{U}, \\mathcal{E}, \\mathcal{R})$, where $\\mathcal{U}$ is the set of entity nodes and $\\mathcal{E} \\subseteq \\mathcal{U} \\times \\mathcal{R} \\times \\mathcal{U}$ is the set of typed edges between nodes in $\\mathcal{U}$ with a certain relation in the relation schema $\\mathcal{R}$.\n\\label{def:kg}\n\\end{definition}\nThen given the input/output sequences in the text generation task, a subgraph of the KG which is associated with the sequences can be defined as below.\n\\vspace{-0.05in}\n\\begin{definition}[Sequence-associated K-hop subgraph]\nA sequence-associated K-hop subgraph is defined as $\\mathcal{G}_{sub} = (\\mathcal{U}_{sub}, \\mathcal{E}_{sub}, \\mathcal{R})$, where $\\mathcal{U}_{sub}$ is the union of the set of entity nodes mapped through an \\emph{entity linking} function $\\psi: \\mathcal{U} \\times \\mathcal{X} \\rightarrow \\mathcal{U}_{sub}$ \\textit{and} their neighbors within \\textit{K}-hops.\nSimilarly, $\\mathcal{E}_{sub} \\subseteq \\mathcal{U}_{sub} \\times \\mathcal{R} \\times \\mathcal{U}_{sub}$ is the set of typed edges between nodes in $\\mathcal{U}_{sub}$. \n\\label{def:sks}\n\\end{definition}\nSequence-associated subgraph provides a graphical form of the task data (i.e., sequences) and thus enables the integration of KGs and the sequences into graph algorithms.\nMany methods have been proposed to learn the relationship between KG semantics and input/output sequences. 
They can be categorized into four methodologies as shown in Figure \\ref{fig:overall_kg}: (M1) incorporate knowledge graph embeddings into language generation; (M2) transfer knowledge into language model with triplet information; (M3) perform reasoning over knowledge graph via path finding strategies; and (M4) improve the graph embeddings with graph neural networks.\n\\vspace{-0.03in}", "id": "2e94b5c0-1ff8-4d56-a733-a12ae2a57672", "level": "subsection", "origin_cites_number": 19, "parent_id": "fd996682-bb73-4d48-801b-792919f81dc2", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG Enhanced by Knowledge Graph" ] ], "subsections": [ "f133a740-6739-4557-8b15-8ca5b4d0e2f6", "f8f4882c-1e70-477e-9035-663aad7f5f30", "0dd5d1f7-6679-4ea3-8424-34630dbe7456", "f848f8d7-023b-460a-8658-d07a5db87028", "957f66c6-436e-4306-94e5-deaddb7ee602" ], "title": "NLG Enhanced by Knowledge Graph" }, { "cite_extract_rate": 0.25, "cites": [ 1934 ], "content": "}\nKnowledge graph embedding (KGE) techniques learn node embedding from a KG~.\nKGE aims to capture the semantic relatedness between entity nodes from their connectivity information (i.e., different types of relations) in the KG. The primary idea is to represent entities and relations in a low-dimensional vector space $\\mathbb{R}^d$, where $d \\ll |\\mathcal{U} \\cup \\mathcal{R}|$, to reduce data dimensionality while preserving the inherent structure of the KG.\nTransE~ \nis the most widely used KGE technique. In TransE, given a KG edge $(u_i, r, u_j)$, the relation is seen as a translation vector $\\mathbf{r}$ so that the embedded entities $\\mathbf{u}_i$ and $\\mathbf{u}_j$ can be connected with low translation error, namely $\\mathbf{u}_i + \\mathbf{r} \\approx \\mathbf{u}_j$. 
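The translation principle can be illustrated with a minimal sketch (the toy embeddings below are hand-picked so that the true triplet satisfies the translation exactly; in real KGE training, embeddings are learned by minimizing this distance for true triplets):

```python
import math

def transe_score(head, rel, tail):
    """TransE energy: L2 distance ||h + r - t||; lower means more plausible."""
    return math.sqrt(sum((h + r - t) ** 2
                         for h, r, t in zip(head, rel, tail)))

# Toy 3-dimensional embeddings; a true edge should satisfy h + r ~ t.
paris = [0.9, 0.1, 0.3]
is_capital_of = [0.1, 0.6, -0.2]
france = [1.0, 0.7, 0.1]
germany = [0.2, 0.8, 0.5]

true_score = transe_score(paris, is_capital_of, france)
false_score = transe_score(paris, is_capital_of, germany)
```

The true triplet (Paris, IsCapitalOf, France) scores near zero, while the corrupted tail (Germany) yields a larger translation error.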
For example, we have $\\overrightarrow{Tokyo} + \\overrightarrow{IsCapitalOf} \\approx \\overrightarrow{Japan} $ for the knowledge edge $(\\textit{Tokyo},~ \\textit{IsCapitalOf}, ~\\textit{Japan})$. \nAs shown in Figure \\ref{fig:overall_kg}(a), a common strategy of incorporating KGE into NLG is to concatenate the original word representations ($\\textbf{x}$) with the corresponding entity representations ($\\textbf{u}$) from KGE~.\n\\vspace{-0.03in}", "id": "f133a740-6739-4557-8b15-8ca5b4d0e2f6", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "2e94b5c0-1ff8-4d56-a733-a12ae2a57672", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG Enhanced by Knowledge Graph" ], [ "subsubsection", "M1: Incorporate Knowledge Graph Embeddings into Language Generation" ] ], "subsections": [], "title": "M1: Incorporate Knowledge Graph Embeddings into Language Generation" }, { "cite_extract_rate": 1, "cites": [ 3126 ], "content": "}\nThe vector spaces of entity embeddings (from KGE) and word embeddings (from pre-trained language models) are usually inconsistent~.\nBeyond a simple concatenation,\nrecent methods have explored fine-tuning the language models directly on knowledge graph triplets.\nGuan et al. transformed the commonsense triplets (in ConceptNet and ATOMIC) into readable sentences using templates, as illustrated in Figure \\ref{fig:overall_kg}(b). 
The language model (e.g., GPT-2) is then fine-tuned on the transformed sentences to learn commonsense knowledge and improve text generation.\n\begin{figure}[t]\n \begin{center}\n \includegraphics[width=1.0\textwidth]{figures/kg-enhanced-two-methods.pdf}\n \end{center}\n \vspace{-0.15in}\n \caption{Four typical methodologies for incorporating KG semantics into text generation.}\n \label{fig:overall_kg}\n\vspace{-0.15in}\n\end{figure}\n\vspace{-0.03in}", "id": "f8f4882c-1e70-477e-9035-663aad7f5f30", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "2e94b5c0-1ff8-4d56-a733-a12ae2a57672", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG Enhanced by Knowledge Graph" ], [ "subsubsection", "M2: Transfer Knowledge into Language Model with Knowledge Triplet Information" ] ], "subsections": [], "title": "M2: Transfer Knowledge into Language Model with Knowledge Triplet Information" }, { "cite_extract_rate": 0.5, "cites": [ 1169 ], "content": "}\nKGE learns node representations from one-hop relations through a certain semantic relatedness (e.g., TransE). However, Xiong et al. argued that an intelligent machine is supposed to be able to conduct explicit reasoning over relational paths to make multiple inter-related decisions rather than merely embedding entities in the KGs~.\nTake the QA task as an example. The machine performs reasoning over KGs to handle complex queries that do not have an obvious answer, infer potential answer-related entities, and generate the corresponding answer. So, the challenge lies in identifying a subset of desired entities and mentioning them properly in a response~. Because the connected entities usually follow natural conceptual threads, they help generate reasonable and logical answers to keep conversations engaging and meaningful. 
As shown in Figure \\ref{fig:overall_kg}(c), path-based methods explore various patterns of connections among entity nodes such as meta-paths and meta-graphs. Then they learn from walkable paths on KGs to provide auxiliary guidance for the generation process. The path finding based methods can be mainly divided into two categories: (1) path ranking based methods and (2) reinforcement learning (RL) based path finding methods. \n\\vspace{-0.03in}", "id": "0dd5d1f7-6679-4ea3-8424-34630dbe7456", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "2e94b5c0-1ff8-4d56-a733-a12ae2a57672", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG Enhanced by Knowledge Graph" ], [ "subsubsection", "M3: Perform Reasoning over Knowledge Graph via Path Finding Strategies" ] ], "subsections": [ "279636fc-df7d-4e34-8428-8f99d62c0fba", "842983b3-1887-468e-a149-773a115613af" ], "title": "M3: Perform Reasoning over Knowledge Graph via Path Finding Strategies" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 3151, 3154, 3150 ], "content": "}\nPath ranking algorithm (PRA) emerges as a promising method for learning and inferring paths on large KGs~. PRA uses random walks to perform multiple bounded depth-first search processes to find relational paths. Coupled with elastic-net based learning~, PRA picks plausible paths and prunes non-ideal, albeit factually correct KG paths. For example, Tuan et al. proposed a neural conversation model with PRA on dynamic knowledge graphs~. In the decoding phase, it selected an output from two networks, a general GRU decoder network and a PRA based multi-hop reasoning network, at each time step. \nBauer et al. ranked and filtered paths to ensure both the information quality and variety via a 3-step scoring strategy: initial node scoring, cumulative node scoring, and path selection~. Ji et al. 
heuristically pruned the noisy edges between entity nodes and proposed a path routing algorithm to propagate the edge probability along\nmulti-hop paths to the entity nodes~.\n\vspace{-0.03in}", "id": "279636fc-df7d-4e34-8428-8f99d62c0fba", "level": "paragraph", "origin_cites_number": 5, "parent_id": "0dd5d1f7-6679-4ea3-8424-34630dbe7456", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG Enhanced by Knowledge Graph" ], [ "subsubsection", "M3: Perform Reasoning over Knowledge Graph via Path Finding Strategies" ], [ "paragraph", "M3.1: Path routing and ranking" ] ], "subsections": [], "title": "M3.1: Path routing and ranking" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 3149, 1169, 3150, 2378 ], "content": "}\nReinforcement learning (RL) based methods train an agent to perform reasoning and find a path in a continuous space. These methods incorporate various criteria into their path-finding reward functions, making the path-finding process flexible.\nXiong et al. proposed DeepPath, the first work that employed a Markov decision process (MDP) and used RL based approaches to find paths in KGs~. \nRL based path finding for NLG tasks typically consists of two stages~. First, the model takes a sequence as input, retrieves a starting node $u_0$ on $\mathcal{G}$, then performs multi-hop graph reasoning, and finally arrives at a target node $u_k$ that incorporates the knowledge for output sequence generation. Second, it represents the sequence $X$ and the selected path $\Phi_k(u_0,u_k)$ through two separate encoders, and decodes a sequence with multi-source attention on the input sequence and the selected path. \nPath-based knowledge graph reasoning converts the graph structure of a KG into a linear path structure that can be easily represented by sequence encoders (e.g., RNN)~. For example, Niu et al. 
encoded the selected path and the input sequence with two separate RNNs and generated the output sequence with a general attention-based RNN decoder~.\nTo enhance the RL process, Xu et al. proposed six reward functions for training the agent.\nFor example, the functions rewarded accurate arrival at the target node as well as the shortest path between the start and target nodes, i.e., minimizing the length of the selected path $\Phi_k(u_0,u_k)$~.\n\vspace{-0.03in}", "id": "842983b3-1887-468e-a149-773a115613af", "level": "paragraph", "origin_cites_number": 6, "parent_id": "0dd5d1f7-6679-4ea3-8424-34630dbe7456", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG Enhanced by Knowledge Graph" ], [ "subsubsection", "M3: Perform Reasoning over Knowledge Graph via Path Finding Strategies" ], [ "paragraph", "M3.2: Reinforcement learning based path finding" ] ], "subsections": [], "title": "M3.2: Reinforcement learning based path finding" }, { "cite_extract_rate": 1, "cites": [ 553, 3119, 7011, 3121 ], "content": "}\nThe contexts surrounding relevant entities on KGs play an important role in understanding the entities and generating proper text about their interactions~.\nFor example, in scientific writing, it is important to consider the neighboring nodes of relevant concepts on a taxonomy and/or the global context of a scientific knowledge graph~.\nHowever, neither KGE nor relational paths can fully represent such information. Graph-based representations aim at aggregating the context/neighboring information on graph data;\nand recent advances in GNN models demonstrate promising progress in graph-based representation learning~. In order to improve text generation, graph-to-sequence (Graph2Seq) models encode the structural information of the KG in a neural encoder-decoder architecture~. 
Since then, GNNs have been playing an important role in improving NLG models.\nThey have been applied to both the \emph{encoding} and \emph{decoding} phases.\n\vspace{-0.03in}", "id": "f848f8d7-023b-460a-8658-d07a5db87028", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "2e94b5c0-1ff8-4d56-a733-a12ae2a57672", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG Enhanced by Knowledge Graph" ], [ "subsubsection", "M4: Improve the Graph Embeddings with Graph Neural Networks" ] ], "subsections": [ "12dacd0c-ccdc-43ff-bef4-2efc4f30b6a8", "6684051e-311b-4b4a-b6ea-aaa93c8e2ff6" ], "title": "M4: Improve the Graph Embeddings with Graph Neural Networks" }, { "cite_extract_rate": 0.8, "cites": [ 3119, 1934, 2369, 3155 ], "content": "}\nFor the encoding phase, a general process of leveraging GNNs to incorporate a KG is to augment the semantics of a word in the input text by combining its representation with the vector of the corresponding entity node on the KG~. A pre-defined entity linking function $\psi: \mathcal{U} \times \mathcal{X} \rightarrow \mathcal{U}_{sub}$ maps words in the input sequence to entity nodes on the KG. Given an input sequence, all the linked entities and their neighbors within $K$ hops compose a \textit{sequence-associated K-hop subgraph} $\mathcal{G}_{sub}$ (formally defined in Definition \ref{def:sks}).\nFor each entity node in $\mathcal{G}_{sub}$,\nthe model uses the KG structure as well as entity and edge features (e.g., semantic descriptions if available) to learn a representation vector $\textbf{u}$. \nSpecifically, a GNN model follows a neighborhood aggregation approach that iteratively updates the representation of a node by aggregating information from its neighboring nodes and edges. After $k$ iterations of aggregation, the node representation captures the structural information within its $k$-hop neighborhood. 
Formally, the $k$-th layer of a node $u \in \mathcal{U}_{sub}$ is:\n\begin{equation}\n\textbf{u}^{(k)} = \textsc{Combine}_k (\textbf{u}^{(k-1)}, \textsc{Aggregate}_k(\big{\{} (\textbf{u}^{(k-1)}_{i}, \textbf{e}_{ij}^{(k-1)}, \textbf{u}^{(k-1)}_{j}): \forall (u_i, e_{ij}, u_j) \in \mathcal{N}(u)\big{\}})).\n\end{equation}\nThe sub-graph representation $\textbf{h}_{subG}$ is learned through a $\textsc{Readout}(\cdot)$ function from all entity node representations (i.e., $\textbf{h}_{subG} = \textsc{Readout}(\big{\{}\textbf{u}^{(k)}, u \in \mathcal{U}_{sub}\big{\}})$).\nZhou et al. were the first to design such a knowledge graph interpreter to enrich the context representations with neighbouring concepts on ConceptNet using a graph attention network (GAT)~.\n\vspace{-0.05in}", "id": "12dacd0c-ccdc-43ff-bef4-2efc4f30b6a8", "level": "paragraph", "origin_cites_number": 5, "parent_id": "f848f8d7-023b-460a-8658-d07a5db87028", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG Enhanced by Knowledge Graph" ], [ "subsubsection", "M4: Improve the Graph Embeddings with Graph Neural Networks" ], [ "paragraph", "Learning KG-aware input text representation with GNNs (Encoding)." ] ], "subsections": [], "title": "Learning KG-aware input text representation with GNNs (Encoding)." }, { "cite_extract_rate": 0.8, "cites": [ 3151, 3154, 7011, 8623, 3121, 3119, 3152, 1934, 8627, 3156, 3120, 3126 ], "content": "}\nThe sequence decoder uses an attention mechanism to find useful semantics from the representation of the KG as well as the hidden state of the input text, where the KG representation is usually generated by GNNs.\nSpecifically, the hidden state is augmented by the subgraph representation $\textbf{h}_{subG}$, i.e., $\textbf{s}_0 = \textbf{h}_n \oplus \textbf{h}_{subG} $~. 
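The Aggregate/Combine/Readout steps and the augmented initial state $\textbf{s}_0$ can be sketched as follows. This is a toy mean-pooling variant with names and dimensions of our own choosing (not a specific published model):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # toy feature dimension

# A tiny sequence-associated subgraph: (u0) --e01--> (u1) --e12--> (u2)
nodes = {"u0": rng.normal(size=D), "u1": rng.normal(size=D), "u2": rng.normal(size=D)}
edges = {("u0", "u1"): rng.normal(size=D), ("u1", "u2"): rng.normal(size=D)}

def neighbors(states, u):
    # N(u): all (u_i, e_ij, u_j) triples incident to node u.
    return [(states[a], edges[(a, b)], states[b])
            for (a, b) in edges if u in (a, b)]

W = rng.normal(size=(D, 4 * D))  # Combine weight over [u_prev ; aggregated msg]

def aggregate(triples):
    # Aggregate_k: mean-pool messages built from (u_i, e_ij, u_j).
    return np.mean([np.concatenate(t) for t in triples], axis=0)

def gnn_layer(states):
    # Combine_k: update every node from its previous state + aggregated message.
    return {u: np.tanh(W @ np.concatenate([states[u], aggregate(neighbors(states, u))]))
            for u in states}

states = dict(nodes)
for _ in range(2):  # K = 2 iterations -> information from 2-hop neighborhoods
    states = gnn_layer(states)

h_subG = np.mean(list(states.values()), axis=0)  # Readout: subgraph vector
h_n = rng.normal(size=D)                         # toy final text-encoder state
s_0 = np.concatenate([h_n, h_subG])              # s_0 = h_n (+) h_subG
```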
The decoder then attentively reads the retrieved subgraph to obtain a graph-aware context vector, and uses the vector to update the decoding state~. It adaptively chooses a generic word or an entity from the retrieved subgraph to generate output words. Because graph-level attention alone might overlook fine-grained knowledge edge information, some recent methods adopted a hierarchical graph attention mechanism~, which attentively reads the retrieved subgraph $\mathcal{G}_{sub}$ and then attentively reads all knowledge edges $\mathcal{E}_{sub}$ involved in $\mathcal{G}_{sub}$. Ji et al. added a relevance score that reflects the relevance of each knowledge edge according to the decoding state~.\n\begin{table*}[t]\n\caption{Tasks, datasets and KG sources used in different KG-enhanced papers. We also compared the performance of different models before and after incorporating KG into the generation process, in which ``w/o KG'' performance comes from the best baseline method; ``with KG'' comes from the KG-enhanced method.}\n\vspace{-0.15in}\n\begin{center}\n\scalebox{0.815}{\begin{tabular}{|l|l|c|c|l|r|r|r|c|r|}\n\hline\n\multirow{2}*{Tasks} & \multirow{2}*{Methods} & \multirow{2}*{Ref.} & \multirow{2}*{Cat.} & \multicolumn{2}{c|}{Dataset Information} & \multicolumn{3}{c|}{Effect of KG} & KG\\\n\cline{5-9}\n& & & & Name & \#Instance & w/o KG & with KG & $\Delta$BLEU & source \\\n\hline \hline\n\multirow{4}*{\makecell[l]{Common- \\sense \\ reasoning}} & KG-BART & & M4 & CommonGen & 77,449 & 28.60 & 30.90 & +2.30 & ConceptNet \\ \n\cline{2-10}\n& CE-PR & & M3 & ComVE & 30,000 & 15.70 & 17.10 & +1.60 & ConceptNet \\ \n\cline{2-10}\n& GRF & & M4 & $\alpha$NLG-ART & 60,709 & 9.62 & 11.62 & +2.00 & ConceptNet \\\n\cline{2-10}\n& MGCN & & M3 & EntDesc & 110,814 & 24.90 & 30.00 & +4.30 & Self-built KG \\ \n\hline \hline\n\multirow{5}*{\makecell[l]{Story \\ generation}} & IE+MSA & & M4 & ROCStories & \multirow{2}*{98,162} & 8.25 
& 9.36 & +1.11 & ConceptNet \\\\ \n\\cline{2-4}\\cline{7-10}\n& GRF & & M4 & (split-1) & & 10.40 & 11.00 & +0.60 & ConceptNet \\\\ \n\\cline{2-10}\n& \\multirow{2}*{KEPM} & \\multirow{2}*{} & \\multirow{2}*{M2} & ROCStories & \\multirow{2}*{98,162} & \\multirow{2}*{14.10} & \\multirow{2}*{14.30} & \\multirow{2}*{+0.20} & ConceptNet\\\\ \n& & & & (split-2) & & & & & \\& ATOMIC \\\\ \n\\cline{2-10}\n& MRG & & M3 & VisualStory & 50,000 & 3.18 & 3.23 & +0.05 & ConceptNet \\\\ \n\\hline \\hline\n\\multirow{2}*{\\makecell[l]{Scientific \\\\ writing}} & GraphWriter & & M4 & AGENDA & 40,000 & 12.20 & 14.30 & +1.90 & Self-built KG \\\\ \n\\cline{2-10}\n& PaperRobot & & M4 & PaperWriting & 27,001 & 9.20 & 13.00 & +3.80 & Self-built KG \\\\\n\\hline \\hline\n\\multirow{3}*{\\makecell[l]{Dialogue \\\\ system}} & ConceptFlow &\n & M4 & Reddit-10M & 3,384K & 1.62 & 2.46 & +0.84 & ConceptNet \\\\ \n\\cline{2-10}\n& AKGCM & & M3 & EMNLP dialog & 43,192 & 32.45 & 30.84 & \\underline{-1.61} & Self-built KG \\\\ \n\\cline{2-10}\n& AKGCM & & M3 & ICLR dialog & 21,569 & 6.74 & 6.94 & +0.20 & Self-built KG \\\\\n\\hline \\hline\n\\multirow{2}*{\\makecell[l]{Question \\\\ answering}} & \\multirow{2}*{MHPGM} & \\multirow{2}*{} & \\multirow{2}*{M3} & \\multirow{2}*{NarrativeQA} & \\multirow{2}*{46,765} & \\multirow{2}*{19.79} & \\multirow{2}*{21.07} & \\multirow{2}*{+1.28} & \\multirow{2}*{Self-built KG} \\\\\n&&&&&&&&& \\\\\n \\hline\n\\end{tabular}}\n\\end{center}\n\\vspace{-0.15in}\n\\label{tab:data-kg}\n\\end{table*}\n\\begin{table*}[t]\n\\caption{Qualitative comparison between different KG-enhanced methods.}\n\\vspace{-0.15in}\n\\begin{center}\n\\scalebox{0.86}{\\begin{tabular}{|l|c|c|c|c|c|l|l|l|}\n\\hline\n{\\multirow{2}*{Methods}} & \\multirow{2}*{Ref.} & \\multicolumn{4}{c|}{Method category} & Multi-hop info. 
& Multi-hop path & Auxiliary (knowledge \\\\\n\\cline{3-6}\n& & M1 & M2 & M3 & M4 & aggregation & reasoning & related) task(s) \\\\\n\\hline \\hline\nTHOTH & & $\\checkmark$ & & & & $\\times$ & $\\times$ & $\\times$ \\\\\n\\hline\nCCM & & & & & $\\checkmark$ & $\\times$, one-hop & $\\times$ & $\\times$ \\\\\n\\hline\nKEPM & & & $\\checkmark$ & & & $\\times$ & $\\times$ & $\\times$ \\\\\n\\hline\nAKGCM & & & & $\\checkmark$ & & $\\times$ & $\\checkmark$, Markov decision & $\\checkmark$, Path selection \\\\\n\\hline\nIE+MSA & & & & & $\\checkmark$ & $\\checkmark$, by GNN & $\\times$ & $\\times$ \\\\\n\\hline\nConceptFlow & & & & & $\\checkmark$ & $\\checkmark$, by GNN & $\\times$ & $\\times$ \\\\\n\\hline\nCE-PR & & & & $\\checkmark$ & & $\\times$ & $\\checkmark$, Path routing & $\\checkmark$, Concept selection \\\\\n\\hline\nGRF & & & & & $\\checkmark$ & $\\checkmark$, by GNN & $\\checkmark$, Path scoring & $\\checkmark$, Link prediction \\\\\n\\hline\n\\end{tabular}}\n\\end{center}\n\\vspace{-0.15in}\n\\label{tab:qual-kg}\n\\end{table*}", "id": "6684051e-311b-4b4a-b6ea-aaa93c8e2ff6", "level": "paragraph", "origin_cites_number": 15, "parent_id": "f848f8d7-023b-460a-8658-d07a5db87028", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG Enhanced by Knowledge Graph" ], [ "subsubsection", "M4: Improve the Graph Embeddings with Graph Neural Networks" ], [ "paragraph", "Dynamically attending KG representation (Decoding)." ] ], "subsections": [], "title": "Dynamically attending KG representation (Decoding)." 
}, { "cite_extract_rate": 0, "cites": [], "content": "}\n\\vspace{-0.05in}", "id": "957f66c6-436e-4306-94e5-deaddb7ee602", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "2e94b5c0-1ff8-4d56-a733-a12ae2a57672", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG Enhanced by Knowledge Graph" ], [ "subsubsection", "Discussion and Analysis of the Methodologies and Methods" ] ], "subsections": [ "87fd3061-044d-4dae-8158-e55567627c92", "f78642de-c75c-4e86-bbee-c6a8e8793d4a", "3639fe71-9f79-46f8-a74c-c3ae7b1efe9e" ], "title": "Discussion and Analysis of the Methodologies and Methods" }, { "cite_extract_rate": 0.8, "cites": [ 3119, 1934, 8626, 3120 ], "content": "}\nKnowledge graph embedding (\\textbf{M1}) was the earliest attempt to embed components of a KG including entities and relations into continuous vector spaces and use them to improve text generation. \nThose entity and relation embeddings can simply be used to enrich input text representations (e.g., concatenating embeddings), bridging connections between entity words linked from input text in latent space. \nBecause the graph projection and text generation are performed as two separate steps, the embedding vectors from knowledge graph and the hidden states from input text were in two different vector spaces. The model would have to learn to bridge the gap, which might make a negative impact on the performance of text generation.\nFine tuning pre-trained language models on the KG triplets (\\textbf{M2}) can eliminate the gap between the two vector spaces.\nNevertheless, M1 and M2 share two drawbacks.\nFirst, they only preserve information of direct (one-hop) relations in a KG, such as pair-wise proximity in M1 and KG triplet in M2, but ignore the indirect (multi-hop) relations of concepts. 
The indirect relations may provide plausible evidence of complex reasoning for some text generation tasks.\nSecond, once the KGs have been encoded by M1 or M2 methods, the generation models can no longer access the KGs themselves but only their continuous representations, so the models cannot support reasoning, such as commonsense KG reasoning, for downstream tasks.\nFor these two reasons, M1 and M2 are often used to create basic KG representations upon which KG path reasoning (M3) and GNNs (M4) can further enrich the hidden states~.\nThe path finding methods of KG reasoning (\textbf{M3}) perform multi-hop walks on the KGs beyond one-hop relations. They enable the reasoning that is needed in many text generation scenarios such as commonsense reasoning and conversational question answering.\nAt the same time, they provide better interpretability for the entire generation process, because the path selected by the KG reasoning algorithm is explicitly used for generation.\nHowever, the selected paths might not capture the full context of the reasoning process, because only a limited number of paths can be selected.\nBesides, reinforcement-learning based path finding uses heuristic rewards to drive the policy search, making the model sensitive to noise and adversarial examples.\nThe GNN and Graph2Seq algorithms (\textbf{M4}) can effectively aggregate semantic and structural information from multi-hop neighborhoods on KGs, whereas M3 considers only multi-hop paths.\nTherefore, a wide range of relevant information can be directly embedded into the encoder/decoder hidden states. Meanwhile, M4 enables back propagation for jointly optimizing the text encoder and graph encoder. \nFurthermore, the attention mechanism applied in GNN and Graph2Seq (e.g., graph attention) can explain the model's output to some extent, though the multi-hop paths from M3 offer better interpretability.\nM3 and M4 are able to use multi-hop relational information, compared to M1 and M2. 
However, they have two weak points. First, they have higher complexity than M1 and M2. In M3, the action space of path finding algorithms can be very large due to the large size and sparsity of the knowledge graph. In M4, the decoder has to attentively read both the input sequence and the knowledge graph. Second, the subgraphs retrieved by M3 and M4 might provide low coverage of useful concepts for generating the output.\nConsider, for example, using ConceptNet, a widely used commonsense KG, to retrieve subgraphs for three generative commonsense reasoning tasks. The task datasets are ComVE~, $\alpha$-NLG~, and ROCStories~. We found that 25.1\% / 24.2\% / 21.1\% of the concepts in the output can be found on ConceptNet, but only 11.4\% / 8.1\% / 5.7\% can be found on the retrieved 2-hop sequence-associated subgraph, respectively. This means that a large portion of relevant concepts on the KG are not utilized in the generation process.\n\vspace{-0.05in}", "id": "87fd3061-044d-4dae-8158-e55567627c92", "level": "paragraph", "origin_cites_number": 5, "parent_id": "957f66c6-436e-4306-94e5-deaddb7ee602", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG Enhanced by Knowledge Graph" ], [ "subsubsection", "Discussion and Analysis of the Methodologies and Methods" ], [ "paragraph", "Pros and cons." ] ], "subsections": [], "title": "Pros and cons." }, { "cite_extract_rate": 1, "cites": [ 3121, 3156, 3157, 8626, 3158 ], "content": "}\nTable \ref{tab:data-kg} summarizes tasks, datasets, and KG sources used in existing KG-enhanced works. Three important things should be mentioned. First, all the datasets in the table are public, and we include their links in Table \ref{tab:code}. CommonGen~, ComVE~ and $\alpha$-NLG~ have a public leaderboard for competition. 
\nSecond, for KG sources, we observe that eight (57.1\%) papers use ConceptNet as an external resource, while six (42.9\%) papers construct their own KGs from domain-specific corpora. For example, Koncel et al. created a scientific knowledge graph by applying the SciIE tool (science domain information extraction)~. \nBesides, Zhao et al. compared the performance of models using ConceptNet versus a self-built KG, and found that the model with the self-built KG worked better on story generation and review generation tasks~.\nThird, we observed that KG-enhanced NLG methods made the largest improvement on generative commonsense reasoning tasks, where the average improvement is +2.55\% in terms of $\Delta$BLEU, while the average improvement across all tasks is +1.32\%.\n\vspace{-0.05in}", "id": "f78642de-c75c-4e86-bbee-c6a8e8793d4a", "level": "paragraph", "origin_cites_number": 5, "parent_id": "957f66c6-436e-4306-94e5-deaddb7ee602", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG Enhanced by Knowledge Graph" ], [ "subsubsection", "Discussion and Analysis of the Methodologies and Methods" ], [ "paragraph", "Quantitative analysis." ] ], "subsections": [], "title": "Quantitative analysis." }, { "cite_extract_rate": 0.5, "cites": [ 3151, 3120 ], "content": "}\nTable \ref{tab:qual-kg} compares different KG-enhanced methods from three dimensions: multi-hop information aggregation, multi-hop path reasoning, and auxiliary knowledge graph related tasks. M3 is commonly used for multi-hop path reasoning and M4 is used for multi-hop information aggregation, except that CCM~ only aggregates one-hop neighbors. Besides, the auxiliary KG-related tasks are often used to further help the model learn knowledge from the KG. 
For example, ablation studies show that the tasks of path selection, concept selection and link prediction can further boost the generation performance. GRF~ learns these three abilities at the same time. It achieves the state-of-the-art performance on three generation tasks.\n\begin{figure}[t]\n \begin{center}\n \includegraphics[width=1.0\textwidth]{figures/kt-enhanced-two-methods.pdf}\n \end{center}\n \vspace{-0.15in}\n \caption{The left figure demonstrates retrieving relevant documents, then using them for generation; the right figure demonstrates reading a background document to conduct conversations.}\n \label{fig:overall_kt}\n\vspace{-0.1in}\n\end{figure}", "id": "3639fe71-9f79-46f8-a74c-c3ae7b1efe9e", "level": "paragraph", "origin_cites_number": 4, "parent_id": "957f66c6-436e-4306-94e5-deaddb7ee602", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG Enhanced by Knowledge Graph" ], [ "subsubsection", "Discussion and Analysis of the Methodologies and Methods" ], [ "paragraph", "Qualitative analysis." ] ], "subsections": [], "title": "Qualitative analysis." }, { "cite_extract_rate": 0.4, "cites": [ 2005, 3130 ], "content": "Knowledge grounded text refers to textual information that can provide additional knowledge relevant to the input sequence. The textual information may not be found in training corpora or structured databases, but can be obtained from massive textual data from online resources. These online resources include encyclopedias (e.g., Wikipedia), social media (e.g., Twitter), and shopping \nwebsites (e.g., Amazon reviews). Knowledge grounded text plays an important role in understanding the input sequence and its surrounding contexts. For example, Wikipedia articles may offer textual explanations or background information for the input text. 
Amazon reviews may contain the necessary descriptions and reviews needed to answer a product-related question. Tweets may contain people's comments on and summaries of an event.\nTherefore, knowledge grounded text is often taken as an important external knowledge source to help with a variety of NLG applications.\nNext, we introduce popular NLG applications enhanced by knowledge grounded text:\n\begin{itemize}\n \item \textbf{Dialogue system.} Building a fully data-driven dialogue system is difficult since most of the universal knowledge is not present in the training corpora~. The lack of universal knowledge considerably limits the appeal of fully data-driven generation methods, as they are bound to respond evasively or defectively and seldom include meaningful factual content. To infuse the response with factual information, an intelligent machine is expected to obtain the necessary background information to produce an appropriate response.\n \item \noindent\textbf{Summarization.} Seq2Seq models that purely depend on the input text tend to ``lose control'' sometimes. For example, 3\% of summaries contain fewer than three words, and 4\% of summaries repeat a word more than 99 times, as mentioned in~. Furthermore, Seq2Seq models usually focus on copying source words in their exact order, which is often sub-optimal in abstractive summarization. Therefore, leveraging summaries of documents similar to the input document as templates can provide a reference for the summarization process~.\n \item \noindent\textbf{Question answering (QA).} It is often difficult to generate proper answers based only on the given question. 
For example, without knowing any information about an Amazon product, it is hard to deliver a satisfactory answer to user questions such as \emph{``Does the laptop have a long battery life?''} or \emph{``Is this refrigerator frost-free?''} So, the product description and customer reviews can be used as a reference for answering product-related questions~.\n\end{itemize}\nTo handle different kinds of relationships between grounded text and input/output sequences, these methods can be categorized into two methodologies as shown in Figure \ref{fig:overall_kt}: (M1) guiding generation with retrieved information; (M2) modeling background knowledge into response generation.\n\vspace{-0.05in}", "id": "c29ceb80-be3d-4dc9-8137-f14c6627e9b9", "level": "subsection", "origin_cites_number": 5, "parent_id": "fd996682-bb73-4d48-801b-792919f81dc2", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG enhanced by Grounded Text" ] ], "subsections": [ "1dd2c8ed-bc5c-4bef-b423-37b20faf6dc4" ], "title": "NLG enhanced by Grounded Text" }, { "cite_extract_rate": 1, "cites": [ 3159, 7225, 2376 ], "content": "}\nBecause knowledge grounded text is not present in the training corpora, an idea is to retrieve relevant textual information (e.g., a review, a relevant document, a summary template) from \emph{external sources} based on the input text and to incorporate the retrieved grounded text into the generation process. This process is similar to designing knowledge acquisition and incorporation of KBs and KGs in text generation tasks. The difference is that grounded text is unstructured and noisy. 
So, researchers design knowledge selection and incorporation methods to address the challenges.\nBased on the number of stages, we further divide related methods into two categories: retrieve-then-generate (also known as retrieval-augmented generation, or RAG, in many existing papers~) methods (2-stage methods) and retrieve, rerank and rewrite methods (3-stage methods).\n\vspace{-0.05in}", "id": "1dd2c8ed-bc5c-4bef-b423-37b20faf6dc4", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "c29ceb80-be3d-4dc9-8137-f14c6627e9b9", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG enhanced by Grounded Text" ], [ "subsubsection", "M1: Guiding Generation with Retrieved Information" ] ], "subsections": [ "04bc4b0f-e214-44e6-a206-1736fc63f6a4", "1639fa52-01b7-46e4-b2f5-daee48310180", "8fa936eb-f794-41d7-8d93-89a8dd65817c" ], "title": "M1: Guiding Generation with Retrieved Information" }, { "cite_extract_rate": 0.8125, "cites": [ 457, 9109, 2005, 1073, 3117, 3118, 3130, 3161, 3159, 3160, 7225, 7126, 2376 ], "content": "}\nRAG follows a two-stage process: retrieval and generation. 
\nSpecifically, as shown in Figure \ref{fig:overall_kt}(a), a retriever $p(Z|X)$ first returns (usually top-K truncated) distributions over text passages given a query $X$, and then a generator $p(y_i|X, Z, y_{1:i-1})$ generates the current token based on a context of the previous tokens $y_{1:i-1}$, the original input $X$ and a retrieved passage $Z$.\nVarious methods exist for retrieving fact or review snippets, including matching from a collection of raw text entries indexed by named entities~; scoring relevant documents within a large collection by statistical approaches such as BM25~, or neural-based retrieval approaches such as dense passage retrieval (DPR)~.\nFor training the retriever and generator, most existing work has jointly optimized these two components, without any direct supervision on which document should be retrieved~.\nHowever, by asking human experts to label which document should be retrieved and adding the retrieval loss (resulting in a multi-task learning setting), the generation performance can be greatly improved~, though the labelling process is an extremely time-consuming and labor-intensive task.\n\begin{table*}[t]\n\caption{Tasks, datasets and evidence sources used in retrieve-then-generate (M1) papers. 
We also include their document(d)/sentence(s) retrieval space and the number of retrieved document(d)/sentence(s).}\n\\vspace{-0.15in}\n\\begin{center}\n\\scalebox{0.83}{\\begin{tabular}{|l|l|l|c|l|r|r|r|}\n\\hline\nEvidence & \\multirow{2}*{Tasks} & \\multirow{2}*{Methods} & \\multirow{2}*{Ref.} & \\multicolumn{2}{c|}{Dataset Information} & Retrieval & \\# Retri- \\\\\n\\cline{5-6}\nsources & & & & Name & \\#Instance & space (d/s) & eved d/s \\\\\n\\hline \\hline\n\\multirow{7}*{\\makecell[l]{Wikipedia}} & \\multirow{2}*{\\makecell[l]{Dialogue \\\\ system}} & MemNet & & \\multirow{2}*{\\makecell[l]{Wizard of \\\\ Wikipedia (WoW)}} & \\multirow{2}*{22,311} & \\multirow{2}*{5.4M/93M} & 7 \\\\\n & & SKT & & & & & 7 \\\\ \n\\cline{2-8}\n& \\multirow{3}*{\\makecell[l]{Question \\\\ answering}} & RAG & & MS-MARCO & 267,287 & 21M/- & 10 \\\\ \n\\cline{3-8}\n& & BART+DPR & & \\multirow{2}*{ELI5} & \\multirow{2}*{274,741} & 3.2M/- & - \\\\\n& & RT+C-REALM & & & & 3.2M/- & 7 \\\\\n\\cline{2-8}\n& \\multirow{2}*{\\makecell[l]{Argument \\\\ generation}} & H\\&W & & \\multirow{2}*{ChangeMyView} & \\multirow{2}*{287,152} & 5M/- & 10 \\\\ \n& & CANDELA & & & & 5M/- & 10 \\\\\n\\hline \\hline\n\\multirow{2}*{\\makecell[l]{Online platform \\\\ (e.g., Amazon)}} & \\multirow{2}*{\\makecell[l]{Dialogue (for \\\\ business)}} & AT2T & & Amazon books & 937,032 & -/131K & 10 \\\\ \n& & KGNCM & & Foursquare & 1M & -/1.1M & 10 \\\\ \n\\hline \\hline\n\\multirow{2}*{\\makecell[l]{Gigawords}} & \\multirow{2}*{\\makecell[l]{Summari- \\\\ zation}} & R$^3$Sum & & \\multirow{2}*{Gigawords} & \\multirow{2}*{3.8M} & -/3.8M & 30 \\\\ \n& & BiSET & & & & -/3.8M & 30 \\\\ \n\\hline\n\\end{tabular}}\n\\end{center}\n\\vspace{-0.1in}\n\\label{tab:data-kt}\n\\end{table*}\n\\begin{table*}[t]\n\\caption{Qualitative comparison between different grounded text enhanced methods.}\n\\vspace{-0.15in}\n\\begin{center}\n\\scalebox{0.895}{\\begin{tabular}{|l|c|c|c|c|l|l|l|}\n\\hline\n{\\multirow{2}*{Methods}} & 
\\multirow{2}*{Ref.} & \\multicolumn{3}{c|}{Method category} & \\multirow{2}*{Retrieval supervision} & Retriever & Number \\\\\n\\cline{3-5}\n& & M1.1 & M1.2 & M2 & & pre-training & of stages \\\\\n\\hline \\hline\nMemNet & & $\\checkmark$ & & & $\\checkmark$, Human annotated labels & $\\times$ & 2 \\\\\n\\hline\nSKT & & $\\checkmark$ & & & $\\checkmark$, Human annotated labels & $\\times$ & 2 \\\\\n\\hline\nR$^3$Sum & & & $\\checkmark$ & & $\\checkmark$, Pseudo labels & $\\times$ & 3, with rerank \\\\\n\\hline\nBiSET & & & $\\checkmark$ & & $\\checkmark$, Pseudo labels & $\\times$ & 3, with rerank \\\\\n\\hline\nRefNet & & & & $\\checkmark$ & $\\times$ & $\\times$ & 1, no retrieval \\\\\n\\hline\nGLKS & & & & $\\checkmark$ & $\\times$ & $\\times$ & 1, no retrieval \\\\\n\\hline\nRAG & & $\\checkmark$ & & & $\\times$ & $\\checkmark$, DPR & 2 \\\\\n\\hline\nKilt & & $\\checkmark$ & & & $\\times$ & $\\checkmark$, DPR & 2 \\\\\n\\hline\nRT+C-REALM & & $\\checkmark$ & & & $\\times$ & $\\checkmark$, REALM & 2 \\\\\n\\hline\n\\end{tabular}}\n\\end{center}\n\\vspace{-0.1in}\n\\label{tab:qual-kt}\n\\end{table*}\nGhazvininejad et al. proposed a knowledge grounded neural conversation model (KGNCM), which is the first work to retrieve review snippets from Foursquare and Twitter. Then it incorporates the snippets into dialogue response generation~. It uses an end-to-end memory network~ to generate responses based on the selected review snippets.\nLewis et al. introduced a general retrieval-augmented generation (RAG) framework by leveraging a pre-trained neural retriever and generator. It can be easily fine-tuned on downstream tasks, and it has demonstrated state-of-the-art performance on various knowledge intensive NLG tasks~. 
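To make the two-stage process above concrete, here is a minimal, self-contained sketch of retrieve-then-generate. The retriever $p(Z|X)$ is a toy bag-of-words scorer standing in for BM25 or DPR, the generator $p(y|X,Z)$ is a stub, and the final distribution marginalizes the generator over the top-K retrieved passages. The passage collection and all function names are illustrative, not taken from any cited system.

```python
import math
from collections import Counter

# Toy retrieve-then-generate sketch (all components are illustrative
# stand-ins, not the actual DPR / BART models used in the papers above).

PASSAGES = [
    "paris is the capital of france",
    "berlin is the capital of germany",
    "the eiffel tower is in paris",
]

def score(query, passage):
    # Stand-in for a dense retriever: bag-of-words overlap instead of
    # learned embeddings.
    q, p = Counter(query.split()), Counter(passage.split())
    return sum((q & p).values())

def retrieve(query, k=2):
    # p(Z|X): softmax over passage scores, truncated to the top-k passages
    # and renormalized.
    exp = [math.exp(score(query, p)) for p in PASSAGES]
    probs = [e / sum(exp) for e in exp]
    top = sorted(range(len(PASSAGES)), key=lambda i: -probs[i])[:k]
    norm = sum(probs[i] for i in top)
    return [(PASSAGES[i], probs[i] / norm) for i in top]

def generator(query, passage):
    # Stand-in generator p(y|X, Z): put all probability mass on the first
    # token of the retrieved passage.
    return {passage.split()[0]: 1.0}

def rag_generate(query, k=2):
    # Marginalize the generator over the retrieved passages:
    # p(y|X) = sum_Z p(Z|X) p(y|X, Z)
    out = Counter()
    for passage, p_z in retrieve(query, k):
        for token, p_y in generator(query, passage).items():
            out[token] += p_z * p_y
    return dict(out)

print(rag_generate("capital of france"))
```

Running the sketch puts most probability mass on tokens from the best-scoring passage; a real system replaces both stubs with learned neural models and trains them jointly as described above.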
Recently, the fusion-in-decoder methods (i.e., the decoder performs attention over the concatenation of the resulting representations of all retrieved passages~) could even outperform RAG, as reported in the KILT benchmark~.\n\vspace{-0.05in}", "id": "04bc4b0f-e214-44e6-a206-1736fc63f6a4", "level": "paragraph", "origin_cites_number": 16, "parent_id": "1dd2c8ed-bc5c-4bef-b423-37b20faf6dc4", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG enhanced by Grounded Text" ], [ "subsubsection", "M1: Guiding Generation with Retrieved Information" ], [ "paragraph", "M1.1: Retrieval-augmented generation (RAG)" ] ], "subsections": [], "title": "M1.1: Retrieval-augmented generation (RAG)" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 3130 ], "content": "$)}}\nDifferent from RAG, an $R^{3}$-based method is expected to retrieve the most precise reference document that can be directly used for rewriting/editing.\n$R^{3}$-based methods have proved successful in a number of NLG tasks such as machine translation~ and summarization~.\nIn summarization, Seq2Seq models that purely depend on the input document to generate summaries tend to deteriorate as word generation accumulates, e.g., they frequently generate irrelevant and repeated words~. Template-based summarization assumes that the golden summaries of similar sentences (i.e., templates) can provide a reference point to guide the input sentence summarization process~. These templates are often called \textit{soft templates} to distinguish them from traditional rule-based templates. Soft template-based summarization typically follows a three-step design: retrieve, rerank, and rewrite. The retrieval step aims to return a few candidate templates from a summary collection. The reranking step identifies the best template from the retrieved candidates.
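The three-step design named above can be sketched end-to-end as follows. The template bank, both scoring functions, and the rewriter are toy stand-ins: a real system uses an IR engine (e.g., BM25) for recall-oriented retrieval, a learned relevance model for reranking, and a Seq2Seq model conditioned on source and template for rewriting.

```python
# Minimal sketch of the retrieve / rerank / rewrite pipeline for
# soft-template summarization. All scoring functions are toy stand-ins.

TEMPLATE_BANK = [
    "company reports quarterly profit rise",
    "storm hits coastal town",
    "team wins championship final",
]

def word_overlap(a, b):
    return len(set(a.split()) & set(b.split()))

def retrieve(source, k=2):
    # Step 1: cheap, recall-oriented retrieval (stand-in for an IR engine).
    return sorted(TEMPLATE_BANK, key=lambda t: -word_overlap(source, t))[:k]

def rerank(source, candidates):
    # Step 2: precision-oriented reranking; length-normalized overlap
    # stands in for a neural relevance model.
    return max(candidates, key=lambda t: word_overlap(source, t) / len(t.split()))

def rewrite(source, template):
    # Step 3: rewriting; a real system runs a Seq2Seq model conditioned on
    # both source and template. Here we just splice in a source entity.
    subject = source.split()[0]
    return " ".join([subject] + template.split()[1:])

source = "acme says quarterly profit rise beat forecasts"
best = rerank(source, retrieve(source))
print(rewrite(source, best))
```

The sketch keeps the division of labor described in the text: retrieval trades precision for speed over the whole bank, reranking spends more computation on a few candidates, and rewriting fuses the source with the winning template.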
Finally, the rewriting step leverages both the source document and the template to generate more faithful and informative summaries. \n\vspace{-0.05in}", "id": "1639fa52-01b7-46e4-b2f5-daee48310180", "level": "paragraph", "origin_cites_number": 3, "parent_id": "1dd2c8ed-bc5c-4bef-b423-37b20faf6dc4", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG enhanced by Grounded Text" ], [ "subsubsection", "M1: Guiding Generation with Retrieved Information" ], [ "paragraph", "M1.2: Retrieve, rerank and rewrite ($R^{3" ] ], "subsections": [], "title": "M1.2: Retrieve, rerank and rewrite ($R^{3" }, { "cite_extract_rate": 0, "cites": [], "content": "Compared with $R^3$-based methods, RAG-based methods differ in several ways: they place less emphasis on lightly editing a retrieved item and more on aggregating content from several retrieved pieces, they learn latent retrieval, and they retrieve evidence documents rather than related training pairs.\n\vspace{-0.05in}", "id": "8fa936eb-f794-41d7-8d93-89a8dd65817c", "level": "paragraph", "origin_cites_number": 0, "parent_id": "1dd2c8ed-bc5c-4bef-b423-37b20faf6dc4", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "NLG enhanced by Grounded Text" ], [ "subsubsection", "M1: Guiding Generation with Retrieved Information" ], [ "paragraph", "Difference between RAG and $R^3$" ] ], "subsections": [], "title": "Difference between RAG and $R^3$" }, { "cite_extract_rate": 0.8, "cites": [ 2046, 439, 3117, 3162 ], "content": "A background document, with more global and comprehensive knowledge, is often used for generating informative responses and ensuring that a conversation does not deviate from its topic. Keeping a conversation grounded on a background document is referred to as background based conversation (BBC)~.
Background knowledge plays an important role in human-human conversations. For example, when talking about a movie, people often recall important points (e.g., a scene or review about the movie) and appropriately mention them in the conversation context. Therefore, an intelligent NLG model is expected to find an appropriate background snippet and generate response based on the snippet.\nAs shown in Figure \\ref{fig:overall_kt}(b), the task of BBC is often compared with machine reading comprehension (MRC), in which a span is extracted from the background document as a response to a question~. However, since BBC needs to generate natural and fluent responses, the challenge lies in not only locating the right semantic units in the background, but also referring to the right background information at the right time in the right place during the decoding phase.\nAs MRC models tie together multiple text segments to provide a unified and factual answer, many BBC models use the same idea to connect different pieces of information and find the appropriate background knowledge based on which the next response is to be generated~. For instance, Qin et al. proposed an end-to-end conversation model that jointly learned response generation together with on-demand machine reading~.\nThe MRC models can effectively encode the input utterance by treating it as a question in a typical QA task (e.g., SQuAD ) and encode the background document as the context. 
Then, they took the utterance-aware background representation as input into decoding phase.", "id": "14a1bb60-b565-432a-af53-1ea70efa8605", "level": "subsection", "origin_cites_number": 5, "parent_id": "fd996682-bb73-4d48-801b-792919f81dc2", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "M2: Modeling Background Knowledge into Response Generation" ] ], "subsections": [ "8917b74f-612b-41f1-a0a4-7a391b128fa5" ], "title": "M2: Modeling Background Knowledge into Response Generation" }, { "cite_extract_rate": 0, "cites": [], "content": "}\n\\vspace{-0.05in}", "id": "8917b74f-612b-41f1-a0a4-7a391b128fa5", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "14a1bb60-b565-432a-af53-1ea70efa8605", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "M2: Modeling Background Knowledge into Response Generation" ], [ "subsubsection", "Discussion and Analysis of Different Methods" ] ], "subsections": [ "597d7113-9a58-4a0d-81d5-cbb6890216ad", "caed0784-4ac3-40d6-9b15-52b07f7290c8" ], "title": "Discussion and Analysis of Different Methods" }, { "cite_extract_rate": 1, "cites": [ 3117 ], "content": "}\nFor M1, guiding generation with retrieved information explicitly exposes the role of world knowledge by asking the model to decide what knowledge to retrieve and use during language generation. 
\nSince retrieval-augmented generation (RAG) captures knowledge in an interpretable and modular way, it is often used for knowledge-intensive tasks such as long-form QA and argument generation.\nHowever, a knowledge retriever is expected to retrieve documents from a large-scale corpus, e.g., the entire Wikipedia, which poses a significant computational challenge.\nBesides, one input often requires retrieved text whose amount is much larger than the input itself (as indicated in Table~\ref{tab:data-kt}), which can overwhelm the generation model with information. \nFor M2, background based conversations (BBCs) avoid generating generic responses in a dialogue system and are able to generate more informative responses by exploring related background information. However, existing methods still cannot solve inherent problems effectively, such as the tendency to break complete semantic units and generate shorter responses~.\n\vspace{-0.05in}", "id": "597d7113-9a58-4a0d-81d5-cbb6890216ad", "level": "paragraph", "origin_cites_number": 1, "parent_id": "8917b74f-612b-41f1-a0a4-7a391b128fa5", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "M2: Modeling Background Knowledge into Response Generation" ], [ "subsubsection", "Discussion and Analysis of Different Methods" ], [ "paragraph", "Pros and cons." ] ], "subsections": [], "title": "Pros and cons." }, { "cite_extract_rate": 1, "cites": [ 7225, 2376 ], "content": "}\nTable \ref{tab:data-kt} summarizes the tasks, datasets and evidence sources used in existing grounded text enhanced work. Three important points should be mentioned. First, all the datasets in the table are public, and we include their links in Table \ref{tab:code}.\nSecond, Wikipedia is the most commonly used evidence source since it is the largest free online encyclopedia.
Besides, some online platforms contain plenty of product-related textual information, e.g., product reviews on Amazon, which are often used to build task/goal-oriented dialogue systems for business purposes.\nThird, the retrieval space of candidate documents is usually larger than 1 million, while only 7-10 documents are selected. So, the process of retrieving relevant documents is challenging.\nTable \ref{tab:qual-kt} compares different grounded text enhanced methods along three dimensions: retrieval supervision, pre-training of the retriever, and number of stages. First, as mentioned above, retrieving relevant documents from a large candidate set is a challenging task. To improve retrieval accuracy, four (57.1\%) papers added retrieval supervision, either via human-annotated labels or pseudo labels, resulting in a multi-task learning setting. Besides, three (42.9\%) papers used pre-trained language models to produce document representations for better retrieval. Though existing work has greatly improved retrieval accuracy, the performance is still far from satisfactory on many text generation tasks~.\nHow to make retrieval and generation mutually enhance each other is still a promising direction for grounded text enhanced text generation systems.\n\vspace{-0.05in}", "id": "caed0784-4ac3-40d6-9b15-52b07f7290c8", "level": "paragraph", "origin_cites_number": 2, "parent_id": "8917b74f-612b-41f1-a0a4-7a391b128fa5", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "NLG enhanced by External Knowledge" ], [ "subsection", "M2: Modeling Background Knowledge into Response Generation" ], [ "subsubsection", "Discussion and Analysis of Different Methods" ], [ "paragraph", "Qualitative analysis." ] ], "subsections": [], "title": "Qualitative analysis."
}, { "cite_extract_rate": 0.758620689655172, "cites": [ 457, 439, 7186, 650, 3151, 3118, 3128, 1071, 3164, 3159, 3139, 7127, 8626, 442, 2376, 3126, 3157, 3120, 3163, 8415, 3165, 3158 ], "content": "The development of general evaluation benchmarks for text generation helps promote research in related fields. Existing text generation benchmarks have not specifically focused on the tasks and datasets that are widely used for knowledge-enhanced text generation. Therefore, we re-screened the four existing text generation benchmarks, i.e., GLGE~, GEM~, Kilt~, GENIE~, and determined 9 benchmark datasets for evaluating knowledge-enhanced NLG methods. Here are our criteria for selection:\n\begin{itemize}\n \item We only consider benchmark datasets that have an open-access download link.\n \item We focus on diverse text generation tasks, involving various applications.\n \item We select at most three benchmark datasets for each text generation task.\n \item We include a mix of internal and external knowledge focused datasets.\n \item We prefer multi-reference datasets for robust automatic evaluation.\n\end{itemize}\nBased on these selection criteria, we finalize 9 knowledge-centric datasets that cover various NLG tasks, including commonsense reasoning, text summarization, question generation, generative question answering, and dialogue. The data statistics are shown in Table \ref{tab:benchmark}. Descriptions and dataset links are listed as follows:\n\begin{itemize}\n \item \textbf{Wizard of Wikipedia (WOW):} It is an open-domain dialogue dataset, where two speakers conduct an open-ended conversation that is directly grounded in knowledge retrieved from Wikipedia. (Data link: \url{https://parl.ai/projects/wizard\_of\_wikipedia/})\n \item \textbf{CommonGen:} It is a generative commonsense reasoning dataset.
Given a set of common concepts, the task is to generate a coherent sentence describing an everyday scenario using these concepts. (Data link: \url{https://inklab.usc.edu/CommonGen/})\n \item \textbf{$\alpha$NLG-ART:} It is a generative commonsense reasoning dataset. Given incomplete observations about the world, the task is to generate a valid hypothesis about the likely explanations for the partially observable past and future. (Data link: \url{http://abductivecommonsense.xyz/})\n \item \textbf{ComVE:} It is a generative commonsense reasoning dataset. The task is to generate an explanation given a counterfactual statement for sense-making. (Data link: \url{https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation})\n \item \textbf{ELI5:} It is a dataset for long-form question answering. The task is to produce explanatory multi-sentence answers for diverse questions. Web search results are used as evidence documents to answer questions. (Data link: \url{https://facebookresearch.github.io/ELI5/})\n \item \textbf{SQuAD:} It is a dataset for answer-aware question generation. The task is to generate a question that asks about the given answer span based on a given text passage or document. (Data link: \url{https://github.com/magic282/NQG})\n \item \textbf{CNN/DailyMail (CNN/DM):} It is a dataset for summarization. Given a news article, the goal is to produce a summary that represents the most important or relevant information within the original content. (Data link: \url{https://www.tensorflow.org/datasets/catalog/cnn_dailymail})\n \item \textbf{Gigaword:} It is a dataset for summarization. Similar to CNN/DM, the goal is to generate a headline for a news article. (Data link: \url{https://www.tensorflow.org/datasets/catalog/gigaword})\n \item \textbf{PersonaChat:} It is an open-domain dialogue dataset.\n It presents the task of making chit-chat more engaging by conditioning on profile information.
(Data link: \\url{https://github.com/facebookresearch/ParlAI/tree/master/projects/personachat})\n\\end{itemize}\n\\begin{table*}[t]\n\\caption{We choose 9 knowledge-enhanced NLG benchmark datasets. These datasets have been included in four existing general NLG benchmarks (i.e., GLGE~, GEM~, Kilt~, GENIE~) or in SemEval tasks.}\n\\vspace{-0.15in}\n\\begin{center}\n\\scalebox{0.85}{\\begin{tabular}{|l|c|l|r|r|r|c|c|l|}\n\\hline\n\\multirow{2}*{Tasks} & \\multirow{2}*{Ref.} & \\multicolumn{4}{c|}{Dataset Information} & Leader & In which NLG & Papers including \\\\\n\\cline{3-6}\n& & Name & \\#Train & \\#Dev. & \\#Test & board & benchmark & this dataset \\\\\n\\hline \\hline\n\\multirow{3}*{\\makecell[l]{Dialogue \\\\ system}} & \\multirow{2}*{\\makecell[l]{}} & \\multirow{2}*{\\makecell[l]{Wizard of \\\\ Wikipedia}} & \\multirow{2}*{18,430} & \\multirow{2}*{1,948} & \\multirow{2}*{1,933} & \\multirow{2}*{$\\checkmark$\\footnotemark[1]} & \\multirow{2}*{Kilt} & \\multirow{2}*{} \\\\\n&&&&&&&& \\\\\n\\cline{2-9}\n& & PersonaChat & 122,499 & 14,602 & 14,056 & $\\times$ & GLGE & \\\\\n\\hline \\hline\n\\multirow{2}*{\\makecell[l]{Question \\\\ answering}} & \\multirow{2}*{} & \\multirow{2}*{ELI5} & \\multirow{2}*{272,634} & \\multirow{2}*{1,507} & \\multirow{2}*{600} & \\multirow{2}*{$\\checkmark$\\footnotemark[3]} & \\multirow{2}*{Kilt} & \\multirow{2}*{} \\\\\n&&&&&&&& \\\\\n\\hline \\hline\n\\multirow{2}*{\\makecell[l]{Question \\\\ generation}} & \\multirow{2}*{} & \\multirow{2}*{SQuAD} & \\multirow{2}*{75,722} & \\multirow{2}*{10,570} & \\multirow{2}*{11,877} & \\multirow{2}*{$\\times$} & \\multirow{2}*{GLGE} & \\multirow{2}*{} \\\\\n&&&&&&&& \\\\\n\\hline \\hline\n\\multirow{3}*{\\makecell[l]{Commonsense \\\\ reasoning}} & & CommonGen & 67,389 & 4,018 & 6,042 & $\\checkmark$\\footnotemark[4] & GEM & \\\\\n\\cline{2-9}\n& & $\\alpha$NLG-ART & 50,481 & 7,252 & 2,976 & $\\checkmark$\\footnotemark[5] & GENIE & \\\\\n\\cline{2-9}\n& & ComVE & 25,596 & 1,428 & 2,976 & 
$\\checkmark$\\footnotemark[6] & SemEval & \\\\\n\\hline \\hline\n\\multirow{2}*{\\makecell[l]{Summarization}} & & CNN/DM & 287,226 & 13,368 & 11,490 & $\\checkmark$\\footnotemark[7] & GLGE & \\\\ \n\\cline{2-9}\n& & Gigaword & 3.8M & 189K & 1,951 & $\\checkmark$\\footnotemark[8] & GLGE & \\\\ \n\\hline\n\\end{tabular}}\n\\end{center}\n\\vspace{-0.1in}\n\\label{tab:benchmark}\n\\end{table*}\n\\footnotetext[1]{\\url{https://parl.ai/projects/wizard\\_of\\_wikipedia}}\n\\footnotetext[2]{\\url{https://nikitacs16.github.io/holl-e-website/}}\n\\footnotetext[3]{\\url{https://facebookresearch.github.io/ELI5/}}\n\\footnotetext[4]{\\url{https://inklab.usc.edu/CommonGen/leaderboard.html}}\n\\footnotetext[5]{\\url{https://leaderboard.allenai.org/genie-anlg/submissions/public}}\n\\footnotetext[6]{\\url{https://competitions.codalab.org/competitions/21080\\#results}}\n\\footnotetext[7]{\\url{https://paperswithcode.com/sota/document-summarization-on-cnn-daily-mail}}\n\\footnotetext[8]{\\url{https://paperswithcode.com/sota/text-summarization-on-gigaword}}\n\\vspace{-0.05in}", "id": "bb6a84ff-227f-4a65-a3fd-9b82bdf7f80f", "level": "section", "origin_cites_number": 29, "parent_id": "5f69c7fa-bf2b-4523-bd34-303cffb73a99", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "Benchmark, Toolkit and Leaderboard Performance" ] ], "subsections": [], "title": "Benchmark, Toolkit and Leaderboard Performance" }, { "cite_extract_rate": 0, "cites": [], "content": "Many efforts have been conducted to tackle the problem of knowledge-enhanced text generation and its related applications. To advance the field, there remains several open problems and future directions. 
Designing more effective ways to represent knowledge and integrate it into the generation process is still the most important trend in knowledge-enhanced NLG systems.\nFrom a broader perspective, we provide four directions that make focusing such efforts worthwhile now: (i) incorporating knowledge into visual-language generation tasks, (ii) learning knowledge from broader sources, especially pre-trained language models, (iii) learning knowledge from limited resources, (iv) learning knowledge in a continuous way.\n\vspace{-0.05in}", "id": "918c7c6b-1917-4b0e-a06c-ecaf632c2d80", "level": "section", "origin_cites_number": 0, "parent_id": "5f69c7fa-bf2b-4523-bd34-303cffb73a99", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "Discussion on Future Directions" ] ], "subsections": [ "4697a984-5562-45d9-8fb5-094f3d263a4f", "877f3075-82a9-4733-9944-b9510faecbb3", "81830161-ff95-4683-919d-4afae3d0cb04", "074792d8-0bcd-4698-aece-0902f744c1a0" ], "title": "Discussion on Future Directions" }, { "cite_extract_rate": 0.75, "cites": [ 3167, 3166, 3168 ], "content": "Beyond text-to-text generation tasks, recent years have witnessed a growing interest in visual-language (VL) generation tasks, such as describing visual scenes~ and answering visual-related questions~.\nAlthough success has been achieved on VL generation tasks in recent years, there is still room for improvement because image-based factual descriptions are often not enough to generate high-quality captions or answers~. External knowledge can be added in order to generate attractive image/video captions. We observe that some pioneering work has attempted to utilize external knowledge to enhance image/video captioning tasks. For example, Tran et al. proposed to detect a diverse set of visual concepts and generate captions by using an external knowledge base (i.e., Freebase), recognizing a broad range of entities such as celebrities and landmarks~.
\nZhou et al. used a commonsense knowledge graph (i.e., ConceptNet) to infer a set of terms directly or indirectly related to the words that describe the objects found in the scene by the object recognition module~.\nIn addition,\n proposed a neuro-symbolic learner for improving visual-language generation tasks (e.g., visual question answering).\nHowever, existing approaches for knowledge-enhanced visual-language generation tasks still leave considerable room for exploration. Some promising directions for future work include using other knowledge sources, such as retrieving images/text to help solve open-domain visual question answering and image/video captioning tasks; bringing in structured knowledge to provide justifications for the captions produced, tailoring captions to different audiences and contexts, etc.\n\vspace{-0.05in}", "id": "4697a984-5562-45d9-8fb5-094f3d263a4f", "level": "subsection", "origin_cites_number": 4, "parent_id": "918c7c6b-1917-4b0e-a06c-ecaf632c2d80", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "Discussion on Future Directions" ], [ "subsection", "Incorporate Knowledge into Visual-Language Generation Tasks" ] ], "subsections": [], "title": "Incorporate Knowledge into Visual-Language Generation Tasks" }, { "cite_extract_rate": 0.5, "cites": [ 3169, 3170 ], "content": "More research effort should be spent on learning to discover knowledge more broadly and combining multiple forms of knowledge from different sources to improve the generation process. Additional knowledge sources include, but are not limited to, network structures, dictionaries and tables. For example, Yu et al.~ and An et al.~ augmented the tasks of scientific paper intention detection and summarization by introducing the citation graph; Yu et al.
augmented rare word representations by retrieving their descriptions from Wiktionary and feeding them as additional input to a pre-trained language model~.\nBesides, structured knowledge and unstructured knowledge can play complementary roles in enhancing text generation. To improve knowledge richness, Fu et al. combined both structured (knowledge base) and unstructured knowledge (grounded text)~.\n\vspace{-0.05in}", "id": "877f3075-82a9-4733-9944-b9510faecbb3", "level": "subsection", "origin_cites_number": 4, "parent_id": "918c7c6b-1917-4b0e-a06c-ecaf632c2d80", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "Discussion on Future Directions" ], [ "subsection", "Learning Knowledge from Broader Sources" ] ], "subsections": [ "212793a3-642a-4b98-a22a-05ddf89dbf1c" ], "title": "Learning Knowledge from Broader Sources" }, { "cite_extract_rate": 1, "cites": [ 8623, 3171, 9, 7225 ], "content": "Pre-trained language models can learn a substantial amount of in-depth knowledge from data without any access to an external memory, acting as a parameterized implicit knowledge base~.\nHowever, as mentioned in~, directly fine-tuning pre-trained language generation models on the story generation task still suffers from \ninsufficient knowledge when representing the input text through a pre-trained encoder,\nleading to repetition, logic conflicts, and lack of long-range coherence in the generated output sequence.\nTherefore, discovering knowledge from pre-trained language models can be more flexible, such as knowledge distillation, data augmentation, and using pre-trained models as external knowledge~.
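As a concrete illustration of one of these routes, the following sketch shows the core of knowledge distillation from a pre-trained teacher LM: the student is trained to match the teacher's temperature-softened next-token distribution via a KL-divergence loss. The logits here are made up for illustration; a real setup would take them from actual teacher/student models and combine this loss with the task loss.

```python
import math

# Toy sketch of distilling knowledge from a pre-trained "teacher" LM into
# a smaller student. Logits below are invented for illustration only.

def softmax(logits, temperature=1.0):
    exp = [math.exp(l / temperature) for l in logits]
    total = sum(exp)
    return [e / total for e in exp]

def kd_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions; a
    # higher temperature exposes more of the teacher's "dark knowledge"
    # about plausible-but-not-top tokens.
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(ti * math.log(ti / si) for ti, si in zip(t, s))

teacher = [4.0, 2.0, 0.5]   # pretend next-token logits from the teacher LM
aligned = [3.9, 2.1, 0.4]   # a student that mimics the teacher
uniform = [0.0, 0.0, 0.0]   # an uninformed student

print(kd_loss(teacher, aligned), kd_loss(teacher, uniform))
```

A student whose logits track the teacher's incurs a much smaller loss than an uninformed one, which is what drives the student toward the teacher's implicit knowledge during training.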
More efficient methods of obtaining knowledge from pre-trained language models are expected.\n\vspace{-0.05in}", "id": "212793a3-642a-4b98-a22a-05ddf89dbf1c", "level": "paragraph", "origin_cites_number": 4, "parent_id": "877f3075-82a9-4733-9944-b9510faecbb3", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "Discussion on Future Directions" ], [ "subsection", "Learning Knowledge from Broader Sources" ], [ "paragraph", "Leveraging Knowledge from Pre-trained Language Models" ] ], "subsections": [], "title": "Leveraging Knowledge from Pre-trained Language Models" }, { "cite_extract_rate": 0, "cites": [], "content": "Most current NLG research is conducted on extensively labelled data to favor model training. However, this is in contrast to many real-world application scenarios, where only a few shots of examples are available for new domains. Limited data resources lead to \textit{limited knowledge} that can be learnt in new domains. For example, learning topical information of a dialogue occurring in a new domain is difficult since the topic may rarely have been discussed before; \nconstructing a syntactic dependency graph of a sequence in a low-resource language is hard since many linguistic features are highly language-specific.\nBesides, external knowledge bases are often incomplete and insufficient to cover all entities and relationships due to the human cost of collecting domain-specific knowledge triples. Therefore, quick domain adaptation is an essential capability for text generation tasks. One potential route towards addressing these issues is meta-learning, which in the context of NLG means a generation model develops a broad set of skills and pattern recognition abilities at training time, and quickly adapts to a new task given very few examples without retraining the model from scratch.
Recently, there has been rising interest in both academia and industry in investigating meta-learning for different NLG tasks.\nThus, it is a promising research direction to build efficient meta-learning algorithms that need only a small amount of task-specific fine-tuning to learn a new task quickly. For knowledge-enhanced text generation, it is of crucial importance to adapt the model quickly to new domains with limited new knowledge (e.g., only a few knowledge triples).\n\vspace{-0.05in}", "id": "81830161-ff95-4683-919d-4afae3d0cb04", "level": "subsection", "origin_cites_number": 0, "parent_id": "918c7c6b-1917-4b0e-a06c-ecaf632c2d80", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "Discussion on Future Directions" ], [ "subsection", "Learning Knowledge from Limited Resources" ] ], "subsections": [], "title": "Learning Knowledge from Limited Resources" }, { "cite_extract_rate": 0.5, "cites": [ 3172 ], "content": "A machine learning system is expected to learn continuously, accumulate the knowledge learned in previous tasks, and use it to assist future learning. This research direction is referred to as lifelong learning~. In the process, the intelligent machine becomes more and more knowledgeable and effective at learning new knowledge. To make an analogy, humans continuously acquire new knowledge and constantly update the knowledge system in the brain. However, existing knowledge-enhanced text generation systems usually do not keep updating knowledge in real time (e.g., knowledge graph expansion). \nA meaningful exploration was discussed in~.
They built a general knowledge learning engine for chatbots to enable them to continuously and interactively learn new knowledge during conversations.\nTherefore, it is a promising research direction to \ncontinuously update knowledge obtained from various information sources, empowering intelligent machines with incoming knowledge and improving performance on new text generation tasks.\n\vspace{-0.05in}", "id": "074792d8-0bcd-4698-aece-0902f744c1a0", "level": "subsection", "origin_cites_number": 2, "parent_id": "918c7c6b-1917-4b0e-a06c-ecaf632c2d80", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "Discussion on Future Directions" ], [ "subsection", "Learning Knowledge in a Continuous Way" ] ], "subsections": [], "title": "Learning Knowledge in a Continuous Way" }, { "cite_extract_rate": 0, "cites": [], "content": "In this survey, we present a comprehensive review of current representative research efforts and trends on knowledge-enhanced text generation, and expect it to facilitate future research. To summarize, this survey aims to answer two questions that commonly appear in knowledge-enhanced text generation: \textit{how to acquire knowledge} and \textit{how to incorporate knowledge to facilitate text generation}. Based on knowledge acquisition, the main content of our survey is divided into three sections according to different sources of knowledge enhancement. Based on knowledge incorporation, we first present general methods of incorporating knowledge into text generation and further discuss a number of specific ideas and technical solutions that incorporate knowledge to enhance the text generation systems in each section. Besides, we review a variety of text generation applications in each section to help practitioners learn to choose and employ the methods.\n\vspace{-0.05in}\n\section*{Acknowledgements}\nWe thank all anonymous reviewers for valuable comments.
We also appreciate the suggestions from readers of the pre-print version.\nWe thank Dr. Michael Zeng (Microsoft) and Dr. Nazneen Rajani (Salesforce) for their constructive comments and suggestions.\nWenhao Yu and Dr. Meng Jiang's research is supported by National Science Foundation grants IIS-1849816, CCF-1901059, and IIS-2119531.\nQingyun Wang and Dr. Heng Ji's research is based upon work supported by Agriculture and Food Research Initiative (AFRI) grant no. 2020-67021-32799/project accession no.1024178 from the USDA National Institute of Food and Agriculture, U.S. DARPA SemaFor Program No. HR001120C0123, DARPA AIDA Program No. FA8750-18-2-0014, and DARPA KAIROS Program No. FA8750-19-2-1004. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.\n\bibliographystyle{ACM-Reference-Format}\n\bibliography{reference_short}\n\clearpage\n\appendix", "id": "70d1664f-e38f-49e9-9a57-f9700fbe2848", "level": "section", "origin_cites_number": 0, "parent_id": "5f69c7fa-bf2b-4523-bd34-303cffb73a99", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "Conclusions" ] ], "subsections": [], "title": "Conclusions" }, { "cite_extract_rate": 0, "cites": [], "content": "\vspace{0.05in}\n\noindent \textbf{Figure \ref{fig:paper-stat}} presents the statistics of selected publications in this survey. The left figure shows the paper publishing venues. Most papers were published in top machine learning, artificial intelligence, and natural language processing conferences, such as ACL, EMNLP, AAAI, ICLR, and NeurIPS. Besides, many selected papers were published in high-impact journals, such as TNNLS, JMLR, and TACL. 
The right figure shows the paper categories. Among 160 selected papers, 87 papers (``general methods (General)'', ``topic'', ``keyword'', ``knowledge base (KB)'', ``knowledge graph (KG)'', ``grounded text (Text)'') are directly relevant to the different kinds of knowledge-enhanced text generation methods; 10 papers are relevant to benchmark datasets; 10 papers are related survey papers. Besides, the other 43 papers are about basic (pre-trained) generation methods (e.g., Seq2Seq, CopyNet, BART, T5), necessary background (e.g., TransE, OpenIE, GNN, LDA), or future directions. \n\vspace{0.05in}\n\noindent \textbf{Figure \ref{fig:paper-year}} summarizes different papers according to years, knowledge sources, and methods. \n\vspace{0.05in}\n\noindent \textbf{Table \ref{tab:leaderboard}} lists the leaderboard performance on ten knowledge-enhanced generation benchmarks. \n\vspace{0.05in}\n\noindent\textbf{Table \ref{tab:code}} lists code links and programming languages of representative open-source knowledge-enhanced text generation systems that have been introduced in this survey. 
\\\\\n\\begin{figure}[hb]\n \\vspace{-0.1in}\n \\begin{center}\n \\includegraphics[width=0.48\\textwidth]{figures/appendix/paper-venue.pdf}\n \\includegraphics[width=0.48\\textwidth]{figures/appendix/paper-cat.pdf}\n \\end{center}\n \\vspace{-0.1in}\n \\caption{Paper statistics of selected publications in this survey.}\n \\vspace{-0.2in}\n \\label{fig:paper-stat}\n\\end{figure}\n\\begin{figure}[hb]\n \\begin{center}\n \\includegraphics[width=0.7\\textwidth]{figures/paper-trend.pdf}\n \\end{center}\n \\vspace{-0.2in}\n \\caption{Knowledge-enhanced text generation has been gaining emerging interests in the recent five years.}\n \\label{fig:paper-year}\n\\vspace{-0.1in}\n\\end{figure}", "id": "4d48ae73-8c5b-48f8-b370-bc45fe5245eb", "level": "section", "origin_cites_number": 0, "parent_id": "5f69c7fa-bf2b-4523-bd34-303cffb73a99", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "Appendix" ] ], "subsections": [ "f3d88e56-31ac-4e40-b641-96e8e809dcc9" ], "title": "Appendix" }, { "cite_extract_rate": 0.686274509803921, "cites": [ 457, 3154, 3117, 3150, 1139, 3119, 7186, 3162, 3121, 3152, 38, 3127, 3151, 1934, 3118, 2369, 3130, 3173, 2401, 3143, 8554, 3128, 3115, 1071, 3139, 3159, 8626, 3126, 9, 7225, 3120, 8627, 3163, 8415, 3165 ], "content": "\\vspace{0.05in}\n\\noindent \\textbf{BLEU-$m$ (short as B-$m$):} BLEU is a weighted geometric mean of $n$-gram precision scores.\n\\vspace{0.05in}\n\\noindent \\textbf{ROUGE-$m$ (short as R-$m$):} ROUGE measures the overlap of n-grams between the reference and hypothesis; ROUGE-L measures the longest matched words using longest common sub-sequence.\n\\vspace{0.05in}\n\\noindent \\textbf{Distinct-$k$ (short as D-k):} Distinct measures the total number of unique $k$-grams normalized by the total number of generated $k$-gram tokens to avoid favoring long sentences.\n\\begin{table}\n\\caption{Leaderboard performance on ten knowledge-enhanced generation 
benchmarks.}\n\\vspace{-0.1in}\n\\begin{subtable}[t]{1.0\\textwidth}\n\\caption{Leaderboard performance on two summarization benchmark datasets with different knowledge-enhanced NLG methods. Evaluation metrics are standard n-gram based metrics: ROUGE-2 and ROUGE-L.}\n\\vspace{-0.1in}\n\\begin{center}\n\\scalebox{0.83}{\\begin{tabular}{|l|c|l|l|cc|cc|l|}\n\\hline\n{\\multirow{2}*{Methods}} & \\multirow{2}*{Ref.} & Knowledge & Method & \\multicolumn{2}{c|}{\\textbf{CNN/DM}} & \n\\multicolumn{2}{c|}{\\textbf{Gigaword}} & \\\\\n & & source & category & R-2 & R-L & R-2 & R-L & \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{9}{|c|}{Baseline methods (w/o KG)} \\\\ \\hline\nSeq2Seq & & & & 11.81 & 28.83 & 11.32 & 26.42 & with attention mechanism \\\\\nPG & & & & 15.66 & 33.42 & 17.63 & 33.66 & w/o coverage mechanism \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{9}{|c|}{Knowledge enhanced methods} \\\\ \\hline\nVHTM & & Topic & M3 & 18.05 & 37.18 & - & - & - \\\\\nSELECTOR & & Keyword & M2 & 18.31 & - & - & - & * Improve generation diversity \\\\\nFASUM & & OpenKG & - & 17.84 & 37.40 & - & - & * Improve factual correctness \\\\\nKIGN & & Keyword & M1 & 17.12 & 35.68 & 17.93 & 34.44 & - \\\\\nBottomUp & & Keyword & M2 & 18.68 & 38.34 & 17.61 & 33.54 & - \\\\\nTGVAE & & Topic & M3 & - & - & 17.27 & 33.02 & - \\\\\nHierDualPG & & Keyword & M2 & - & - & 18.06 & 34.39 & - \\\\\nR$^3$Sum & & Text & M1 & - & - & 19.03 & 34.46 & - \\\\\nBiSET & & Text & M1 & - & - & \\textbf{19.78} & \\textbf{36.87} & - \\\\ \nSemSUM & & DepGraph & - & - & - & 19.75 & 36.09 & - \\\\ \nASGARD & & OpenKG & - & \\textbf{20.37} & \\textbf{40.48} & - & - & - \\\\\n\\hline\n\\end{tabular}}\n\\end{center}\n\\label{tab:quant-kg}\n\\end{subtable}\n\\begin{subtable}[t]{1.0\\textwidth}\n\\begin{minipage}{0.48\\linewidth}\n\\centering\n\\vspace{0.05in}\n\\caption{Leaderboard performance on $\\alpha$NLG-ART dataset. 
Both B-4 and R-L are commonly used.}\n\\vspace{-0.05in}\n\\scalebox{0.9}{\\begin{tabular}{|l|c|l|c|c|} \n\\hline\nMethod & Ref. & Source & B-4 & R-L \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{5}{|c|}{Baseline methods} \\\\\n\\hline \nSeq2Seq & & - & 2.37 & 22.30 \\\\\n\\hline \nGPT-2 & & - & 9.80 & 32.90 \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{5}{|c|}{Knowledge-enhanced methods} \\\\\n\\hline \nGPT-COMeT & & KG & 9.62 & 32.88 \\\\\n\\hline \nGRF & & KG & \\textbf{11.62} & \\textbf{34.62} \\\\\n\\hline \n\\end{tabular}}\n\\end{minipage}\n\\hspace{0.15in}\n\\begin{minipage}{0.47\\linewidth}\n\\centering\n\\vspace{0.05in}\n\\caption{Leaderboard performance on ComVE dataset. Both B-4 and R-L are commonly used.}\n\\vspace{-0.05in}\n\\scalebox{0.9}{\\begin{tabular}{|l|c|l|c|c|} \n\\hline\nMethod & Ref. & Source & B-4 & R-L \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{5}{|c|}{Baseline methods} \\\\\n\\hline \nSeq2Seq & & - & 6.10 & 25.80 \\\\\n\\hline \nGPT-2 & & - & 15.70 & 36.50 \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{5}{|c|}{Knowledge-enhanced methods} \\\\\n\\hline \nCE-PR & & KG & 17.10 & 37.90 \\\\\n\\hline \nGRF & & KG & \\textbf{17.19} & \\textbf{38.10} \\\\\n\\hline \n\\end{tabular}}\n\\end{minipage}\n\\end{subtable}\n\\begin{subtable}[t]{1.0\\textwidth}\n\\begin{minipage}{0.48\\linewidth}\n\\centering\n\\vspace{0.05in}\n\\caption{Leaderboard performance on CommonGen dataset. SPICE is the primary evaluation metric.}\n\\vspace{-0.05in}\n\\scalebox{0.9}{\\begin{tabular}{|l|c|l|c|c|} \n\\hline\nMethod & Ref. 
& Source & B-4 & SPICE \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{5}{|c|}{Baseline methods} \\\\\n\\hline \nBART & & - & 31.83 & 27.99 \\\\\n\\hline \nT5 & & - & 31.96 & 28.86 \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{5}{|c|}{Knowledge-enhanced methods} \\\\\n\\hline \nEKI-BART & & Text & 35.95 & 29.59 \\\\\n\\hline \nKG-BART & & KG & 33.87 & 29.63 \\\\\n\\hline \nRE-T5 & & Text & \\textbf{40.87} & \\textbf{31.08} \\\\\n\\hline \n\\end{tabular}}\n\\end{minipage}\n\\hspace{0.15in}\n\\begin{minipage}{0.47\\linewidth}\n\\centering\n\\vspace{0.05in}\n\\caption{Leaderboard performance on Holl-E (mix-short setting) dataset. R-L are the primary metric.}\n\\vspace{-0.05in}\n\\scalebox{0.9}{\\begin{tabular}{|l|c|l|c|c|} \n\\hline\nMethod & Ref. & Source & R-L & B-4 \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{5}{|c|}{Baseline methods} \\\\\n\\hline \nSeq2Seq & & - & 21.48 & 5.26 \\\\\n\\hline \nBiDAF & & - & 35.09 & 27.44 \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{5}{|c|}{Knowledge-enhanced methods} \\\\\n\\hline \nAKGCM & & KG & 34.72 & \\textbf{30.84} \\\\\n\\hline \nRefNet & & Text & 36.17 & 29.38 \\\\\n\\hline \nGLKS & & Text & \\textbf{39.63} & - \\\\\n\\hline\n\\end{tabular}}\n\\end{minipage}\n\\end{subtable}\n\\label{tab:leaderboard}\n\\end{table}\n\\begin{table}\n\\ContinuedFloat\n\\vspace{-0.1in}\n\\begin{subtable}[t]{1.0\\linewidth}\n\\begin{minipage}{0.48\\linewidth}\n\\centering\n\\vspace{0.05in}\n\\caption{Leaderboard performance on Wizard of Wikipedia with seen (S) and unseen (UnS) test set. }\n\\vspace{-0.05in}\n\\scalebox{0.85}{\\begin{tabular}{|l|c|c|c|} \n\\hline\nMethod & Ref. 
& R-1/R-2 (S) & R-1/R-2 (UnS) \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{4}{|c|}{Baseline methods} \\\\\n\\hline \nTransformer & & 17.8/ --- & 14.0/ --- \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{4}{|c|}{Knowledge-enhanced methods} \\\\\n\\hline \nMemNet & & 16.9/ --- & 14.4/ --- \\\\\n\\hline \nPostKS & & 18.1/5.3 & 13.5/2.0 \\\\\n\\hline \nSKT & & 19.3/6.8 & 16.1/4.2 \\\\\n\\hline \nPIPM+KDBTS & & \\textbf{19.9}/\\textbf{7.3} & \\textbf{17.6}/\\textbf{5.4} \\\\\n\\hline \n\\end{tabular}}\n\\end{minipage}\n\\hspace{0.15in}\n\\begin{minipage}{0.47\\linewidth}\n\\centering\n\\vspace{0.05in}\n\\caption{State-of-the-art performance on SQuAD.}\n\\vspace{-0.05in}\n\\scalebox{0.87}{\\begin{tabular}{|l|c|l|c|} \n\\hline\nMethod & Ref. & Source & B-4 \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{4}{|c|}{Baseline methods} \\\\\n\\hline \nSeq2Seq & & - & 3.01 \\\\\n\\hline \nTransformer & & - & 3.09 \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{4}{|c|}{Knowledge-enhanced methods} \\\\\n\\hline \nNQG++ & & LF & 13.27 \\\\\n\\hline \nSELECTOR & & LF+Keyword & 15.87 \\\\\n\\hline \nG2S+BERT & & LF+DepGraph & 17.49 \\\\\n\\hline \nG2S+BERT+RL & & LF+DepGraph & \\textbf{18.30} \\\\\n\\hline\n\\end{tabular}}\n\\end{minipage}\n\\end{subtable}\n\\begin{subtable}[t]{1.0\\linewidth}\n\\begin{minipage}{0.48\\linewidth}\n\\centering\n\\vspace{0.05in}\n\\caption{Leaderboard performance on ELI5 dataset. The Kilt R-L (KRL) is the primary evaluation metric.}\n\\vspace{-0.05in}\n\\scalebox{0.9}{\\begin{tabular}{|l|c|l|c|c|} \n\\hline\nMethod & Ref. 
& Source & KRL & R-L \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{5}{|c|}{Baseline methods} \\\\\n\\hline \nT5 & & - & 0.0 & 19.1 \\\\\n\\hline \nBART & & - & 0.0 & 20.1 \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{5}{|c|}{Knowledge-enhanced methods} \\\\\n\\hline \nRAG & & Text & 1.7 & 17.4 \\\\\n\\hline \nBART+DPR & & Text & 1.9 & 17.4 \\\\\n\\hline \nRT+\\textsc{c}-REALM & & Text & \\textbf{2.4} & \\textbf{23.2} \\\\\n\\hline \n\\end{tabular}}\n\\end{minipage}\n\\hspace{0.15in}\n\\begin{minipage}{0.47\\linewidth}\n\\centering\n\\vspace{0.05in}\n\\caption{Some state-of-the-art performance on PersonaChat dataset (no leaderboard on this dataset).}\n\\vspace{-0.05in}\n\\scalebox{0.9}{\\begin{tabular}{|l|c|c|c|} \n\\hline\nMethod & Ref. & B-1/B-2 & D-1/D-2\\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{4}{|c|}{Baseline methods} \\\\\n\\hline \nSeq2Seq & & 18.2/9.3 & 2.6/7.4 \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{4}{|c|}{Knowledge-enhanced methods} \\\\\n\\hline \nMemNet(soft) & & 17.7/9.1 & 3.5/9.6 \\\\\n\\hline \nMemNet(hard) & & 18.6/9.7 & 3.7/9.9 \\\\\n\\hline \nPostKS & & 19.0/9.8 & \\textbf{4.6}/\\textbf{13.4} \\\\\n\\hline \nPEE & & \\textbf{23.2}/\\textbf{11.5} & - / - \\\\\n\\hline \n\\end{tabular}}\n\\end{minipage}\n\\end{subtable}\n\\end{table}\n\\begin{table*}[t]\n\\caption{A list of representative open-source knowledge-enhanced text generation systems.}\n\\vspace{-0.15in}\n\\begin{center}\n\\scalebox{0.72}{\\begin{tabular}{|l|c|l|c|l|}\n\\hline\n\\multirow{2}*{Task} & \\multirow{2}*{Ref.} & \\multirow{2}*{Paper title and open source code/toolkit} & Programming & Venue \\\\\n& & & language & \\& Year \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{5}{|c|}{Topic-enhanced methods} \\\\\n\\hline \\hline\n\\multirow{4}*{\\makecell[l]{Summarization}} & \\multirow{2}*{} & Topic-Aware Convolutional Neural Networks for Extreme Summarization & \\multirow{2}*{PyTorch} & EMNLP \\\\\n& & ------ Code: 
\\url{https://github.com/EdinburghNLP/XSum} & & 2018 \\\\\n\\cline{2-5}\n& \\multirow{2}*{} & Friendly Topic Assistant for Transformer Based Abstractive Summarization & \\multirow{2}*{-} & EMNLP \\\\\n& & ------ Code: \\url{https://github.com/BoChenGroup/TA} & & 2020 \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{5}{|c|}{Keyword-enhanced methods} \\\\\n\\hline \\hline\n\\multirow{4}*{\\makecell[l]{Dialogue \\\\ system}} & \\multirow{2}*{} & A Content-Introducing Approach to Generative Short-Text Conversation & \\multirow{2}*{Tensorflow} & COLING \\\\\n& & ------ Code: \\url{https://github.com/MaZhiyuanBUAA/Seq2BFforDialogueGeneration} & & 2016 \\\\\n\\cline{2-5}\n& \\multirow{2}*{} & Emotional Chatting Machine: Emotional Conversation Generation with & \\multirow{2}*{PyTorch} & AAAI \\\\\n& & Internal and External Memory ------ Code: \\url{https://github.com/loadder/ECM-tf} & & 2018 \\\\\n\\hline \\hline\n\\multirow{2}*{\\makecell[l]{Summarization}} & \\multirow{2}*{} & Coherent Comment Generation with a Graph-to-Sequence Model & \\multirow{2}*{PyTorch} & ACL \\\\\n& & ------ Code: \\url{https://github.com/lancopku/Graph-to-seq-comment-generation} & & 2018 \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{5}{|c|}{KB-enhanced methods} \\\\\n\\hline \\hline\n\\multirow{10}*{\\makecell[l]{Dialogue \\\\ system}} & \\multirow{2}*{} & Mem2Seq: Effectively Incorporating Knowledge Bases into End-to-End & \\multirow{2}*{PyTorch} & ACL \\\\\n& & Dialog Systems ------ Code: \\url{https://github.com/HLTCHKUST/Mem2Seq} & & 2019 \\\\\n\\cline{2-5}\n& \\multirow{2}*{} & Global-to-local Memory Pointer Networks for Task-Oriented Dialogue & \\multirow{2}*{PyTorch} & ICLR \\\\\n& & ------ Code: \\url{https://github.com/jasonwu0731/GLMP} & & 2019 \\\\\n\\cline{2-5}\n& \\multirow{2}*{} & Improving Knowledge-aware Dialogue Generation via Knowledge Base & \\multirow{2}*{PyTorch} & AAAI \\\\\n& & Question Answering ------ Code: \\url{https://github.com/siat-nlp/TransDG} & & 
2020 \\\\\n\\cline{2-5}\n& \\multirow{2}*{} & Diverse and Informative Dialogue Generation with Context-Specific Knowledge & \\multirow{2}*{Tensorflow} & ACL \\\\\n& & Awareness ------ Code: \\url{https://github.com/pku-sixing/ACL2020-ConKADI} & & 2020 \\\\\n\\cline{2-5}\n& \\multirow{2}*{} & TopicKA: Generating Commonsense Knowledge-Aware Dialogue Responses & \\multirow{2}*{Tensorflow} & IJCAI \\\\\n& & ------ Code: \\url{https://github.com/pku-sixing/IJCAI2020-TopicKA} & & 2020 \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{5}{|c|}{KG-enhanced methods} \\\\\n\\hline \\hline\n\\multirow{6}*{\\makecell[l]{Dialogue \\\\ system}} & \\multirow{2}*{} & Commonsense Knowledge Aware Conversation Generation with Graph & \\multirow{2}*{Tensorflow} & IJCAI \\\\\n& & Attention ------ Code: \\url{https://github.com/thu-coai/ccm} & & 2018 \\\\\n\\cline{2-5}\n& \\multirow{2}*{} & DyKgChat: Benchmarking Dialogue Generation Grounding on Dynamic & \\multirow{2}*{Tensorflow} & EMNLP \\\\\n& & Knowledge Graphs ------ Code: \\url{https://github.com/Pascalson/DyKGChat} & & 2019 \\\\\n\\cline{2-5}\n& \\multirow{2}*{} & Grounded Conversation Generation as Guided Traverses in Commonsense & \\multirow{2}*{PyTorch} & ACL \\\\\n& & Knowledge Graphs ------ Code: \\url{https://github.com/thunlp/ConceptFlow} & & 2020 \\\\\n\\hline \\hline\n\\multirow{4}*{\\makecell[l]{Scientific \\\\ writing}} & \\multirow{2}*{} & Text Generation from Knowledge Graphs with Graph Transformers & \\multirow{2}*{PyTorch} & NAACL \\\\\n& & ------ Code: \\url{https://github.com/rikdz/GraphWriter} & & 2019 \\\\\n\\cline{2-5}\n& \\multirow{2}*{} & PaperRobot: Incremental Draft Generation of Scientific Ideas & \\multirow{2}*{PyTorch} & ACL \\\\\n& & ------ Code: \\url{https://github.com/EagleW/PaperRobot} & & 2019 \\\\\n\\hline \\hline\n\\multirow{8}*{\\makecell[l]{Commonsense \\\\ reasoning \\\\ \\&\\\\ Story \\\\ generation }} & \\multirow{2}*{} & Story Ending Generation with Incremental Encoding and 
Commonsense & \\multirow{2}*{Tensorflow} & AAAI \\\\\n& & Knowledge ------ Code: \\url{https://github.com/JianGuanTHU/StoryEndGen} & & 2019 \\\\\n\\cline{2-5}\n& \\multirow{2}*{} & Language Generation with Multi-Hop Reasoning on Commonsense & \\multirow{2}*{PyTorch} & EMNLP \\\\\n& & Knowledge Graph ------ Code: \\url{https://github.com/cdjhz/multigen} & & 2020 \\\\\n\\cline{2-5}\n& \\multirow{2}*{} & KG-BART: Knowledge Graph-Augmented BART for Generative & \\multirow{2}*{PyTorch} & AAAI \\\\\n& & Commonsense Reasoning ------ Code: \\url{https://github.com/yeliu918/KG-BART} & & 2021 \\\\\n\\cline{2-5}\n& \\multirow{2}*{} & ENT-DESC: Entity Description Generation by Exploring Knowledge Graph & \\multirow{2}*{MXNet} & EMNLP \\\\\n& & ------ Code: \\url{https://github.com/LiyingCheng95/EntityDescriptionGeneration} & & 2020 \\\\\n\\hline \\hline\n\\multirow{2}*{\\makecell[l]{Question \\\\ answering}} & \\multirow{2}*{} & Commonsense for Generative Multi-Hop Question Answering Tasks & \\multirow{2}*{Tensorflow} & EMNLP \\\\\n& & ------ Code: \\url{https://github.com/yicheng-w/CommonSenseMultiHopQA} & & 2018 \\\\\n\\hline \\hline\n\\rowcolor{gray!12}\\multicolumn{5}{|c|}{Ground text-enhanced methods} \\\\ \n\\hline \\hline\n\\multirow{8}*{\\makecell[l]{Dialogue \\\\ system}} & \\multirow{2}*{} & Wizard of Wikipedia: Knowledge-Powered Conversational agents & \\multirow{2}*{PyTorch} & ICLR \\\\\n& & ------ Code: \\url{https://github.com/facebookresearch/ParlAI} & & 2019 \\\\\n\\cline{2-5}\n& \\multirow{2}*{} & Conversing by Reading: Contentful Neural Conversation with On-demand & \\multirow{2}*{PyTorch} & ACL \\\\\n& & Machine Reading ------ Code: \\url{https://github.com/qkaren/converse_reading_cmr} & & 2019 \\\\\n\\cline{2-5}\n& \\multirow{2}*{} & Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue & \\multirow{2}*{Tensorflow} & ICLR \\\\\n& & ------ Code: \\url{https://github.com/bckim92/sequential-knowledge-transformer} & & 2020 \\\\\n\\cline{2-5}\n& 
\\multirow{2}*{} & RefNet: A Reference-aware Network for Background Based & \\multirow{2}*{Tensorflow} & AAAI \\\\\n& & Conversation ------ Code: \\url{https://github.com/ChuanMeng/RefNet} & & 2020 \\\\\n\\hline \\hline\n\\multirow{2}*{\\makecell[l]{Summarization}} & \\multirow{2}*{} & BiSET: Bi-directional Selective Encoding with Template for & \\multirow{2}*{PyTorch} & ACL \\\\\n& & Abstractive Summarization ------ Code: \\url{https://github.com/InitialBug/BiSET} & & 2019 \\\\\n\\hline \\hline\n\\multirow{4}*{\\makecell[l]{Question \\\\ answering}} & \\multirow{2}*{} & Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks & \\multirow{2}*{PyTorch} & Neurips \\\\\n& & ------ Code in \\url{https://github.com/huggingface/transformers} & & 2020 \\\\\n\\cline{2-5}\n& \\multirow{2}*{} & KILT: a Benchmark for Knowledge Intensive Language Tasks & \\multirow{2}*{PyTorch} & NAACL \\\\\n& & ------ Code: \\url{https://github.com/facebookresearch/KILT} & & 2021 \\\\\n\\hline\n\\end{tabular}}\n\\end{center}\n\\label{tab:code}\n\\end{table*}\n\\end{document}\n\\endinput", "id": "f3d88e56-31ac-4e40-b641-96e8e809dcc9", "level": "subsection", "origin_cites_number": 51, "parent_id": "4d48ae73-8c5b-48f8-b370-bc45fe5245eb", "prefix_titles": [ [ "title", "A Survey of Knowledge-Enhanced Text Generation" ], [ "section", "Appendix" ], [ "subsection", "Evaluation Metrics" ] ], "subsections": [], "title": "Evaluation Metrics" } ]
104
[ 166, 9147, 2401, 2002, 790, 38, 7046, 3119, 2378, 2369, 1165, 3117, 3116, 3115, 168, 3118, 8554, 1934, 3120, 3121, 1114, 9109, 3122, 553, 259, 7011, 180, 2482, 8623, 2487, 9, 3123, 3124, 3125, 7, 3126, 457, 3129, 3128, 3130, 3127, 1996, 2005, 650, 7186, 1887, 3131, 3132, 3134, 3133, 3135, 3136, 3137, 5680, 3139, 3138, 3140, 3141, 3143, 3142, 8624, 1109, 3144, 3146, 8337, 3145, 1052, 1088, 3147, 2366, 8625, 3148, 3150, 3152, 3151, 3153, 3149, 8626, 8627, 1169, 3154, 3155, 3156, 3157, 3158, 3159, 7225, 2376, 1073, 3161, 3160, 7126, 2046, 439, 3162, 1071, 3164, 7127, 442, 3163, 8415, 3165, 3167, 3166, 3168, 3169, 3170, 3171, 3172, 1139, 3173 ]
1.091483
[ "Shail Dave", "Riyadh Baghdadi", "Tony Nowatzki", "Sasikanth Avancha", "Aviral Shrivastava", "Baoxin Li" ]
Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\ A Survey and Insights
2020
2020-07-02T04:08:40Z
cs.AR
Machine learning (ML) models are widely used in many important domains. For efficiently processing these computational- and memory-intensive applications, tensors of these over-parameterized models are compressed by leveraging sparsity, size reduction, and quantization of tensors. Unstructured sparsity and tensors with varying dimensions yield irregular computation, communication, and memory access patterns; processing them on hardware accelerators in a conventional manner does not inherently leverage acceleration opportunities. This paper provides a comprehensive survey on the efficient execution of sparse and irregular tensor computations of ML models on hardware accelerators. In particular, it discusses enhancement modules in the architecture design and the software support; categorizes different hardware designs and acceleration techniques and analyzes them in terms of hardware and execution costs; analyzes achievable accelerations for recent DNNs; highlights further opportunities in terms of hardware/software/model co-design optimizations (inter/intra-module). The takeaways from this paper include: understanding the key challenges in accelerating sparse, irregular-shaped, and quantized tensors; understanding enhancements in accelerator systems for supporting their efficient computations; analyzing trade-offs in opting for a specific design choice for encoding, storing, extracting, communicating, computing, and load-balancing the non-zeros; understanding how structured sparsity can improve storage efficiency and balance computations; understanding how to compile and map models with sparse tensors on the accelerators; understanding recent design trends for efficient accelerations and further opportunities.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "6220d89f-a486-420a-914e-59957895a313", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ] ], "subsections": [ "6d6e1c41-d730-4443-9f55-c97f70207f24", "9808a39b-5577-4a19-8e94-4d9dd70a65c5", "f772f077-d5f9-4a62-b8e3-6c1cf6e8e806", "4bd89bd4-f378-4c43-b038-faf39e54d1ba", "76637357-2ce7-49d4-924f-7e20b7606e08", "2c2b4338-ab49-49e8-95e3-2e1350db0598", "0c7ea9d1-6b8d-48fe-91f9-fa72adc10e69", "4b82dfad-305c-4c60-9b38-8189a27dd3b9", "d9ab0fd8-41d0-4648-973b-6a9930053093", "ac4d8613-1421-4c83-a4a5-66deeb11b225", "d451769e-a134-445a-a7af-73c10cbbd92a", "cce74b6f-9cb1-4065-8a80-43648099f245", "4b09d196-2359-424f-bc4b-e7c987fe6803", "92b659a6-eb60-494f-84a3-2859bf5b052c", "e04212f9-2787-4a9d-94a3-01623a9139c0" ], "title": "root" }, { "cite_extract_rate": 0.581395348837209, "cites": [ 679, 7167, 7, 836, 9103, 7005, 885, 547, 4335, 4334, 38, 1759, 2912, 784, 4333, 97, 7168, 7634, 7299, 842, 4331, 194, 4332, 4330, 7430 ], "content": "Machine learning (ML) models implement intelligence in computing systems. Different ML models are widely used in several important domains including computer vision (object classification and detection ), natural language processing , media generation , recommendation systems , medical diagnosis , large-scale scientific computing , embedded systems , mobile and edge processing , and even for designing or optimizing hardware and software systems . Domain-customized accelerators can significantly speed up their execution in an energy-efficient manner . However, the computational and memory requirements for processing these models have surged drastically . Moreover, ML models can be deeper and larger, which improves learning accuracy, but significant redundancy may exist in these often over-parameterized models . 
Therefore, recent techniques for efficient learning and inference have proposed compressing tensors of ML models. Tensors are compressed by inducing and leveraging: \\textit{(a) sparsity (zero values in tensors)} , \\textit{(b) size reduction (tensor decomposition, dimension reduction, and shape reduction)} , and \\textit{(c) quantization (precision lowering and leveraging value similarity)} . With significantly lowered computational, storage, and communication requirements, efficient processing of \\textit{compressed tensors (sparse, size-reduced, and quantized)} offers notable acceleration and energy efficiency opportunities .\n\\begin{figure*}[t!]\n\\centering\n\\centerline{\\includegraphics[width=0.95\\linewidth]{figures/sparse-accel-block-diagram}}\n\\caption{Overview of the accelerator system for processing sparse and irregular tensor computations. (Section \\ref{sec::overview-accel-compressed-tensors} provides further discussion.)}\n\\label{fig::sparse-accel-block-diagram}\n\\end{figure*}\nHardware accelerators can efficiently process tensor computations of ML models. In particular, coarse-grain spatial architectures are a common choice for hardware accelerator designs. They contain an array of processing elements (PEs) with local registers/memory and shared memory. These accelerators feature interconnects like mesh or multicast for communicating data to PEs and reusing the data spatially, which reduces the accesses to the memory hierarchy. With simple PE designs and effective spatial and temporal management of the data and computations, such architectures achieve high speedups and energy-efficiency .\nSpecial mechanisms are needed to exploit the acceleration benefits due to tensor sparsity, size reduction, and quantization. This is because, while hardware accelerators for ML can process low-precision tensors, they inherently cannot benefit from sparsity . 
They are designed for performing structured computations with regular memory accesses and communication patterns. Without special support for sparse tensors, they fetch all the data, including zero values, from memory and feed it into PEs, thereby wasting execution time. Sparsity, especially unstructured, induces irregularity in processing since non-zeros (NZs) or blocks of NZs are scattered across tensors. So, leveraging sparsity necessitates additional mechanisms to store, extract, communicate, compute, and load-balance the NZs, along with the corresponding hardware or software support. The goal is to exploit all possible forms of sparsity to considerably reduce the computation, communication, and storage of zeros while avoiding performance, power, and area overheads. Exploiting sparsity effectively depends on tailoring the data encoding and extraction, dataflow, memory banking structure, interconnect design, and write-back mechanisms. Further, it requires new representations and enables new opportunities for hardware/software/model co-designs. In this survey, we mainly discuss different accelerator designs that have leveraged the sparsity of different tensors and the corresponding opportunities for performance gains and energy efficiency. \nTensor decomposition and dimension reduction yield tensors of various sizes and asymmetric shapes . Dataflow mechanisms for executing layers of the models are typically optimized well for some commonly used layers (symmetric dimensions). They often become ill-suited for processing tensors with reduced dimensions and different functionality. So, we describe how configurable designs and flexible dataflows can help to achieve efficient execution. Sparse tensors quantized with value sharing require additional support to index a dictionary for obtaining shared values. 
The survey also discusses how accelerators leverage value similarity across inputs, weights, or outputs and support variable bit-widths of sparse tensors.\n\textbf{Contributions:} This paper provides a comprehensive survey of different techniques for efficiently executing sparse and irregular tensor computations of compact ML models on hardware accelerators. It describes the corresponding enhancements in the hardware architecture and the required software support. Specifically, \n\begin{itemize}\n \item For inference and training of different ML models, we summarize various sources of sparsity of tensors.\n \item We highlight challenges in accelerating computations of sparse (especially unstructured) and irregular-shaped tensors (e.g., dot product, convolution, and matrix multiplication) on spatial-architecture-based hardware accelerators that execute with dataflow mechanisms.\n \item We present an overview of the accelerator system along with the different hardware/software modules for sparse and irregular computations, their interfacing, and the execution flow. We provide an in-depth discussion of the need for each module, different design choices, and a qualitative analysis of these choices.\n \item We survey different accelerator systems and execution techniques for sparse tensors of ML models and provide taxonomies to categorize them based on the various hardware/software aspects of the designs.\n \item We analyze how variations in sparsity and tensor shapes of different models impact the storage efficiency of different sparsity-encodings and the reuse of tensors. 
\n \\item For designing these accelerator modules and overall accelerator system, we discuss recent trends and outline further opportunities for hardware/software/model co-designs.\n\\end{itemize} \n\\textbf{Paper organization:} \n\\begin{itemize}[leftmargin=*]\n \\item Section \\ref{sec::background} provides a brief background on different ML models, hardware accelerators for their tensor computations, and the need for further efficiency by reducing computation, storage, and communication requirements.\n \\item Section \\ref{sec::tensor-compression} discusses tensor compression and opportunities due to sparse, size-reduced, and quantized tensors and why their efficient processing requires special support.\n \\item Section \\ref{sec::overview-accel-compressed-tensors} provides an overview of the accelerator system with enhanced architectural modules and software support for sparse and irregular tensor computations (Fig. \\ref{fig::sparse-accel-block-diagram}). It also presents a case study of accelerations of recent, sparse DNNs and analyzes execution bottlenecks. In-depth discussions of individual modules follow through sections \\ref{sec::sparse-data-coding}--\\ref{sec::software-support}. Opportunities for optimizing each module further are discussed at the end of corresponding sections or subsections.\n \\item Section \\ref{sec::sparse-data-coding} illustrates common sparse data encodings, analyzes their implications in terms of storage and coding overheads, and describes the group-wise encoding of tensors.\n \\item Section \\ref{sec::NZ-data-extraction} discusses techniques for extracting matching NZs from tensors for computations. It analyzes the advantages and limitations of the centralized and in-PE extractions.\n \\item Section \\ref{sec::memory-management} discusses managing non-coherent, multi-banked, global scratchpad and hiding the memory access latency behind computations. 
It also discusses data reuse of the sparse tensors and cross-layer reuse opportunities. \n \\item Section \\ref{sec::comm-networks} discusses interconnect designs for distributing data from memory and reducing partial outputs, their bandwidth requirements, spatial data reuse, and their configurability to support multiple dataflows for execution. \n \\item Section \\ref{sec::PEArch} describes sparsity-aware dataflows and pipelined PE architecture including tailoring functional units for sparsity, bit-adaptive computing, and leveraging value similarity. \n \\item Section \\ref{sec::load-balance} discusses sources of the inter-PE and intra-PE imbalance due to sparsity and their impact, software-directed balancing, and hardware structures for dynamic balancing. \n \\item Section \\ref{sec::post-processing} describes different write-back mechanisms for collecting data from PEs and assembling the data locally in PEs or on a central module. It also discusses data layout transformations and on-the-fly encoding of sparse outputs.\n \\item Section \\ref{sec::software-support} discusses compiler support for targeting hardware accelerators, including intermediate representations for deep learning models, compiler optimizations and their automation, and ISAs and code generation for accelerators.\n \\item Section \\ref{sec::future-directions} describes recent trends and future directions in terms of developing tools and techniques for systematic exploration of hardware/software/model co-designs.\n \\item Section \\ref{sec::related-works} discusses relevant surveys that describe additional details (domain-specific models, tensor compression techniques, etc.) and can be useful to readers. 
\n\\end{itemize}", "id": "6d6e1c41-d730-4443-9f55-c97f70207f24", "level": "section", "origin_cites_number": 43, "parent_id": "6220d89f-a486-420a-914e-59957895a313", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec::background}", "id": "9808a39b-5577-4a19-8e94-4d9dd70a65c5", "level": "section", "origin_cites_number": 0, "parent_id": "6220d89f-a486-420a-914e-59957895a313", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Background: Need for Efficient Execution of ML Models on Hardware Accelerators" ] ], "subsections": [ "1fe75192-0778-4883-abf0-6e82d54675a3", "7dbbc9d2-5da7-400f-b7e4-603692ededcf", "df4922fd-4d93-44b1-926c-837577b79edd" ], "title": "Background: Need for Efficient Execution of ML Models on Hardware Accelerators" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 194, 1759, 9103, 4333, 41, 7157, 7007, 7633, 7, 38 ], "content": "Learning through ML models can be supervised (where labeled data is available), unsupervised (training samples are unlabeled), or semi-supervised. We refer non-expert readers to surveys for a detailed discussion on different learning approaches and inference and training of various models. Discussions through this survey mainly focus on accelerating different \\textit{deep neural networks (DNNs)} that are commonly used for supervised learning. \n\\textbf{Convolutional neural networks (CNNs)} are used for object classification and detection in image processing, video analysis, and autonomous vehicle systems. CNNs majorly consist of many \\textit{convolution layers (CONV)} and a \\textit{few fully-connected (FC)} layers. 
Early CONV layers capture low-level features from the images (e.g., edges and corners), which are used for constructing high-level features (e.g., shapes) by subsequent layers. Finally, the classifier aka FC layer determines the type of the objects .
\textbf{Sequence-to-sequence models} include recurrent neural networks (RNNs), gated recurrent units (GRU), long short-term memory (LSTM) , and attention mechanisms . These models are used for \textit{natural language processing (NLP)} and media processing tasks. They essentially use unidirectional or bidirectional recurrent cells at their core and process \textit{multi-layer perceptrons (MLP)} aka FC structures. 
\textbf{Models for semantic segmentation and language translation} use encoder-decoder structures with convolutions , recurrent cells, or attention layers , respectively.
\textbf{Generative adversarial networks (GANs)} are used by media generation applications. GANs use generator and discriminator networks that consist of convolution layers. 
\textbf{Graph neural networks (GNNs)} and other graph learning models are used for applications such as text classification and translation, node classification and link prediction in large social graphs, etc. They learn graph properties and infer unforeseen information. To achieve this objective, each node maintains an embedding feature vector that mixes information about its own and its neighbors' features. The nodes then recurrently aggregate features of local neighbors, perform neural network computations on the aggregated data (e.g., MLP for down-scaling embeddings), and update their embeddings. 
\textbf{Recommendation system models} consist of embedding layers (look-ups and matrix operations) , CNNs for object detection and video understanding, and RNNs for processing language models . 
Primitives like MLP or GEMM (general matrix multiply) and CONV are at the core of many models and dominate the execution. 
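Since a convolution can be lowered to a GEMM via the im2col transformation, a single optimized matrix-multiply kernel serves both CONV and MLP layers. The following minimal NumPy sketch illustrates this lowering (shapes and function names are illustrative, not taken from any particular framework):

```python
import numpy as np

def im2col(x, kh, kw):
    # x: (C, H, W) input; returns a (C*kh*kw, out_h*out_w) patch matrix
    c, h, w = x.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((c * kh * kw, out_h * out_w))
    idx = 0
    for i in range(out_h):
        for j in range(out_w):
            cols[:, idx] = x[:, i:i + kh, j:j + kw].ravel()
            idx += 1
    return cols

def conv2d_as_gemm(x, weights):
    # weights: (M, C, kh, kw) filters -> output (M, out_h, out_w)
    m, c, kh, kw = weights.shape
    out_h, out_w = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    w_mat = weights.reshape(m, c * kh * kw)        # flatten each filter
    return (w_mat @ im2col(x, kh, kw)).reshape(m, out_h, out_w)
```

This lowering is one reason a handful of GEMM-like primitives dominate execution time across otherwise different models.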
So, ML frameworks like PyTorch , TensorFlow , and Intel MKL provide efficient implementations of these primitives for execution on commodity hardware (CPUs, GPUs, FPGAs) or even specialized accelerators. Hence, our discussions mainly focus on efficiently accelerating tensor computations of MLP, CONV, and RNN operators.
\begin{figure}[t]
\centering
\centerline{\includegraphics[width=\linewidth]{figures/dataflow-accelerators}}
\caption{Abstract accelerator design for processing sparse tensors of machine learning applications. Execution of applications requires explicit management of computational, communication, and memory resources.}
\label{fig::ML-accelerators}
\end{figure}
In the “new golden age of computer architecture”, recent research efforts and commercial solutions have extensively demonstrated that domain-customized hardware accelerators significantly speed up the execution of ML models in an energy-efficient way . Typically, these specialized solutions feature \textit{spatial architectures}, which expose low-level aspects of the hardware's interconnect and storage to the hardware-software interface. Spatial architectures can be coarse-grained or fine-grained. Coarse-grained architectures feature arrays of interconnected PEs, and fine-grained designs are realized by programming FPGAs.
\\textit{Coarse-grained spatial architectures} are a common implementation choice for designing hardware accelerators for ML . As Fig. \\ref{fig::ML-accelerators} illustrates, the accelerator comprises an array of PEs that may contain private register files (RFs) and shared buffers or a scratchpad memory. PEs are simple in design (functional units with little local control), and the shared scratchpad is non-coherent with software-directed execution. Therefore, these accelerators are a few orders of magnitude more power-efficient than out-of-order CPU or GPU cores . They lead to highly energy-efficient execution of ML models that are compute-intensive and memory-intensive. Performance-critical tensor computations of ML models are relatively simple operations like element-wise or tensor additions and multiplications. So, they can be processed efficiently with structured computations on the PE-array. Moreover, private and shared memories of PEs enable high temporal reuse of the data ; with efficient data management, PEs can be continuously engaged in tensor computations while the data is communicated via memories . Additionally, interconnects like mesh or multicast enable data communication among PEs and spatial reuse of the data, lowering the accesses to off-chip memory. 
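The multicast-and-accumulate dataflow described above can be modeled functionally. The sketch below is a simplified, hypothetical model of an output-stationary PE array; it captures only the parallel MAC pattern, not any cited accelerator's microarchitecture:

```python
import numpy as np

def output_stationary_matmul(A, B):
    """Functional model of an output-stationary PE array: PE (i, j) holds
    the accumulator for C[i, j]; at time step k, A[i, k] is multicast along
    PE row i and B[k, j] along PE column j, and every PE does one MAC."""
    m, kdim = A.shape
    _, n = B.shape
    acc = np.zeros((m, n))                 # one accumulator register per PE
    for k in range(kdim):                  # kdim time steps of the dataflow
        acc += np.outer(A[:, k], B[k, :])  # all m*n PEs fire in parallel
    return acc
```

The spatial reuse is visible in the model: each fetched operand `A[i, k]` or `B[k, j]` feeds an entire row or column of PEs per time step, rather than a single multiplier.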
Thus, with minimized execution time, spatial-architecture-based hardware accelerators yield very high throughput and low latency for processing ML models.", "id": "7dbbc9d2-5da7-400f-b7e4-603692ededcf", "level": "subsection", "origin_cites_number": 11, "parent_id": "9808a39b-5577-4a19-8e94-4d9dd70a65c5", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Background: Need for Efficient Execution of ML Models on Hardware Accelerators" ], [ "subsection", "Hardware Accelerators for Machine Learning" ] ], "subsections": [], "title": "Hardware Accelerators for Machine Learning" }, { "cite_extract_rate": 0.555555555555555, "cites": [ 679, 97, 7633, 4334, 38 ], "content": "\\label{sec::need-for-compression}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{figures/openai-plot}\n \\caption{Computation requirements for the training of AI algorithms almost double every few months (Figure adopted from ).}\n \\label{fig::ML-model-compute-req}\n\\end{figure}\nWith recent advances in the development of ML models, their computational and memory requirements have increased drastically . Fig. \\ref{fig::ML-model-compute-req} provides an overview of this dramatic surge. One major reason is the rise of deeper models. For example, for processing ImageNet images, AlexNet contained five CONV and three FC layers (eight parameter layers) with the model size of 61M parameters (weights and bias) and computation of 724 MFLOPs. DNNs like ResNet-101 achieved higher classification accuracy but contained 100+ parameter layers and required processing about 7.6 GFLOPs per image. \\insight{Memory requirements for NLP models have increased massively}, e.g., from 50M--100M parameters (Transformer , 2017) to 175 billion (GPT-3 , 2020).\nWhile deeper and larger models achieve high efficiency for various tasks , they consume high execution time, energy, and memory. 
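Per-layer parameter and MAC counts behind such MFLOPs/GFLOPs figures follow from simple accounting. The sketch below uses hypothetical layer dimensions (not those of a published model) and also shows how FC layers tend to dominate the parameter count:

```python
def conv_layer_cost(c_in, c_out, k, out_h, out_w):
    """Parameter and MAC counts for one convolution layer:
    one MAC per weight per output pixel."""
    params = c_out * (c_in * k * k + 1)             # weights + biases
    macs = c_out * out_h * out_w * c_in * k * k
    return params, macs

def fc_layer_cost(n_in, n_out):
    params = n_out * (n_in + 1)                     # weights + biases
    return params, n_out * n_in

# a hypothetical small CNN: three 3x3 CONV layers and one FC classifier
layers = [conv_layer_cost(3, 64, 3, 112, 112),
          conv_layer_cost(64, 128, 3, 56, 56),
          conv_layer_cost(128, 256, 3, 28, 28),
          fc_layer_cost(256 * 28 * 28, 1000)]
total_params = sum(p for p, _ in layers)   # ~201M, mostly in the FC layer
total_macs = sum(m for _, m in layers)     # ~0.68 billion MACs per image
```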
Previous studies showed that \\insight{significant data redundancy exists in these often over-parameterized models} . So, researchers have developed techniques that compress tensors and obtain compact models, reducing computational, communication, and storage requirements significantly.", "id": "df4922fd-4d93-44b1-926c-837577b79edd", "level": "subsection", "origin_cites_number": 9, "parent_id": "9808a39b-5577-4a19-8e94-4d9dd70a65c5", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Background: Need for Efficient Execution of ML Models on Hardware Accelerators" ], [ "subsection", "Need for Further Efficient Execution" ] ], "subsections": [], "title": "Need for Further Efficient Execution" }, { "cite_extract_rate": 0.583333333333333, "cites": [ 2912, 7167, 4332, 7168, 7299, 7430, 842 ], "content": "\\label{sec::tensor-compression}\nEfficiency of executing ML models can be improved further by drastically reducing computation, communication, and memory requirements. This can be achieved by compressing tensors of ML models. Tensors are compressed by inducing and leveraging: (a) sparsity (zero values) , (b) size reduction (tensor decomposition, dimension reduction, and shape reduction) , and (c) quantization (precision lowering and value similarity) . Previous techniques have achieved highly compact models without incurring accuracy loss. For example, after applying pruning, quantization, and Huffman encoding, Deep Compression reduced the model size of AlexNet and VGG-16 by 35$\\times$ and 49$\\times$ (e.g., from 552 MB to 11.3 MB), respectively. Accelerator-aware designs can compress the model further. For AlexNet and GoogLeNet models, pruned 91\\% and 66\\% of weights and reduced computational requirements by 6.63$\\times$ and 3.43$\\times$, respectively. 
ADMM-NN applied weight pruning and quantization, thereby reducing the model size of AlexNet, VGG-16, and ResNet-50 (with up to 0.2\\% accuracy loss) by 99$\\times$, 66.5$\\times$ and 25.3$\\times$, respectively.\nThis section describes various sources of tensor sparsity which is either inherent or induced by model architecture or regularization. It describes how sparsity reduces computations, storage, and communication requirements. It also discusses techniques for reducing the size and quantization of the tensors, and how they offer advantages in terms of storage/performance/energy-efficiency. Then, it describes how compression techniques may induce irregularity in the processing and why special support is needed for efficiently processing the compressed tensors on hardware accelerators.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{figures/sparsity-structures}\n \\caption{Common sparsity structures (e.g., for a 75\\% sparse 8$\\times$8 matrix).}\n \\label{fig::sparsity-structures}\n\\end{figure}", "id": "f772f077-d5f9-4a62-b8e3-6c1cf6e8e806", "level": "section", "origin_cites_number": 12, "parent_id": "6220d89f-a486-420a-914e-59957895a313", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Acceleration Opportunities due to Compact Models and the Need for Special Support" ] ], "subsections": [ "a09f7517-1ff1-4968-99cf-053b28f0c253", "3fb7d8ac-2c30-4eb5-9d3b-8956b4a4bcb8", "08935b60-7ccb-45a9-af71-db60dc826e34", "916ba4be-6ddc-490b-b26e-ccd9d49fedca" ], "title": "Acceleration Opportunities due to Compact Models and the Need for Special Support" }, { "cite_extract_rate": 0.607843137254901, "cites": [ 8767, 4343, 4342, 1003, 7280, 848, 4346, 7, 4339, 840, 7169, 4337, 4340, 1199, 38, 4344, 4348, 4347, 1759, 7168, 4345, 8766, 7634, 842, 4331, 4336, 4338, 844, 4349, 4341, 8768 ], "content": "\\mysubsubsection{Sparsity Structures} Inherent 
sparsity is usually unstructured (e.g., of activations, gradients, or tensors of scientific computing applications), where NZ elements are randomly scattered (shaded elements in Fig. \\ref{fig::sparsity-structures}a). Applying ReLU, dropout, quantization, or fine-grain pruning also induces unstructured sparsity in input activations ($IA$) or weights ($W$). For improving execution efficiency, pruning techniques or model operators induce structured sparsity. For example, weights can be pruned in coarse-grain blocks where block shape can vary from 1-D (vector) to n-D for an n-dimension tensor . Fig. \\ref{fig::sparsity-structures}b shows 4$\\times$4 blocks for a block-sparse tensor, where each block contains all zeros or all NZs. With larger blocks, techniques often prune entire dimensions (e.g., channels or filters in CNN models) . The selection of block size and shape depends on task accuracy requirements. Alternatively, tensors are sparsified with density bounded blocks (Fig. \\ref{fig::sparsity-structures}c), where each n-D block contains a fixed ($k$) number of NZs . It equally scatters NZs throughout the tensor. NZs are located arbitrarily in the whole block, or a fixed number of NZs can be induced across each dimension of the block. Values of $k$ can be selected based on the sensitivity of the pruning to accuracy. For example, analysis of showed that for VGG-16 and ResNet-50, about 12 out of 16 elements can be pruned without any accuracy loss, and about 10 out of 16 elements for compact models like MobileNetV1 and SqueezeNetV1. To preserve accuracy while achieving high sparsity, a mixture of blocks (with different block sizes or sparsity) can also be introduced . Lastly, tensors can be pruned in patterns or conditionally with sophisticated rules (e.g., diagonally, as Fig. 
\ref{fig::sparsity-structures}d shows).
\mysubsubsection{Sources of Sparsity}
Tensors of different ML models can be sparse for multiple reasons:
$\bullet$ CNNs use the ReLU activation function, which clamps negative values to zero. So, the sparsity of input activations ($IA$-sparsity) can be about 40\% on average in CNNs, and higher in later layers (up to about 70\% ). Cao et al. reported that max-pooling can amplify it, e.g., up to 80\% for VGG-16 layers. Lee et al. showed that IA-sparsity eliminated about 40\% and 55\% of the multiply-and-accumulate (MAC) operations during CNN training and inference, respectively. For recent compact models like MobileNetV2 , IA-sparsity eliminates about 20\% of the MACs.
$\bullet$ Neural networks use dropout layers to avoid overfitting. After applying dropout, only a fraction of the activations is retained , which induces sparsity . 
$\bullet$ Pruning techniques remove unimportant weights and alleviate the overfitting of the model while maintaining the classification accuracy. Typically, weights with the least significant values can be safely pruned (during training or post-training). Pruning can regularize the learning of the model and can even increase accuracy slightly . Pruning algorithms introduce significant sparsity, e.g., more than 60\% of the weights of CONV layers and more than 90\% of the weights of FC layers can be removed ($W$-sparsity). For recent compact models like MobileNetV2 and EfficientNetB0, W-sparsity can be 50\%--93\% (80\%--85\% in point-wise convolutions) , which reduces MACs by 2.5$\times$--4.2$\times$. Similarly, more than 80\% of the weights of RNNs, GRUs, or LSTMs can be pruned , especially for medium or large models, without significantly increasing the error rate. For NLP models like Transformer and BERT , recent techniques induce 80\% and 93\% W-sparsity, which reduces total MACs by about 4.8$\times$ and 12.3$\times$, respectively. 
Besides, regularization of the models (e.g., L1 or group-lasso based) can induce unstructured or structured W-sparsity . 
Pruning of activations has also been shown to be effective . DasNet reported eliminating about 27\% and 12\% of the MACs by activation sparsification for AlexNet and MobileNet, respectively. It achieved 79\% IA-sparsity for AlexNet FC layers along with pruning 81\% of the weights, without dropping top-1 accuracy. Similarly, MASR refactored batch normalization, achieving about 60\% IA-sparsity for RNNs. For attention-based NLP models, SpAtten pruned unimportant tokens and heads. It reported reducing computations and DRAM accesses by up to 3.8$\times$ and 1.1$\times$, respectively, without accuracy loss.
$\bullet$ CNNs use Atrous (dilated) convolutions, where filters are upsampled by inserting zeros between weights . 
$\bullet$ GANs use transposed convolution in the generator network, where the input data is first upscaled by inserting zeros between values, and then convolution is applied. For transposed convolutions in different GANs, about 60\% of the MACs can be zero . Additional sparsity is introduced when GANs are forced to forget generating specific objects .
$\bullet$ Input data for object detection tasks can be inherently sparse, as only specific regions of frames are valid . For example, object detection models of autonomous driving systems process 3D LiDAR data by constructing point clouds and projecting them from the bird's eye view (top view) . The resultant images are then fed to object detection algorithms for locating the regions of interest. Recent techniques have reported that the sparsity of the input data for object detection can be 80\% or more .
$\bullet$ For efficient communication in distributed training, \emph{gradients} ($Grad$) are sparsified and compressed. 
E.g., \\textit{Grad-sparsity} can be 99\\%+ for \\textit{computer vision (CV)} or language processing tasks and 95\\%--99\\% for recommendation models .\n$\\bullet$ Input data for the tasks of recommendation systems (e.g., user-item matrix) can be inherently highly sparse, e.g., from 95\\% to 99\\% . Recommendation models compute dot products on dense-sparse or sparse-sparse data .\n$\\bullet$ GNNs process large graphs, e.g., with thousands of vertices. Depending on the real-world interactions of objects (vertices), data contain high (e.g., 75\\%--99\\%) or hyper (99\\%+) unstructured sparsity . For example, in processing large graphs with GCNs, many features of vertices are local and lead to zeros in adjacency matrices for remote nodes . GNN computations involve aggregation on sparse data and multiplications of dense matrices with dense or sparse matrices , which are often processed on separate modules of the accelerator (e.g., in HyGCN and EnGN ).\n$\\bullet$ Text corpus in text analytics applications leads to high sparsity since each document contains only a fraction of the words from the vocabulary. Such analytics applications include PCA for dimensionality reduction of the sparse data, support vector machines and regression for classification, collaborative filtering for the recommendation, and k-means for clustering the data . 
These operations involve multiplications of sparse matrices with dense or sparse vectors, where the matrix sparsity can vary from 67\% to 99\% .
While we describe leveraging sparsity for ML models, applications in many domains, including linear algebra, graph processing, and scientific computing , can be accelerated by exploiting sparsity.
\mysubsubsection{Advantages}
Sparsity allows (i) \textit{eliminating ineffectual computations}, i.e., reducing execution time and energy by processing only NZs, (ii) \textit{reducing storage} by encoding only NZ values, so more data fits in on-chip memory and off-chip memory accesses (extremely energy-consuming ) are reduced, and (iii) improving speedup due to \textit{reduced communication requirements} for data-intensive ML models.
Symmetric or high-dimensional tensors have large sizes and their processing requires more computation and memory. So, ML models are designed to reduce such requirements by using group or parallel operators , 1$\times$1 or point-wise convolutions (PW-CONV) , or dimensionality reduction with PCA . Moreover, tensors can be decomposed with spatial factorization , depth-wise separation for convolutions , or low-rank approximations . Further, tensors can be made ragged to eliminate the need for structured or rectangular shapes.
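As a back-of-the-envelope illustration of the savings from depth-wise separation, the following sketch (hypothetical layer dimensions) compares the MACs of a standard convolution with a depthwise-separable one:

```python
def conv_macs(c_in, c_out, k, out_h, out_w):
    # standard convolution: every output element costs c_in * k * k MACs
    return c_out * out_h * out_w * c_in * k * k

def dw_separable_macs(c_in, c_out, k, out_h, out_w):
    dw = c_in * out_h * out_w * k * k    # depthwise: one k x k filter per channel
    pw = c_out * out_h * out_w * c_in    # 1x1 point-wise convolution
    return dw + pw

# e.g., a 3x3 layer with 256 -> 256 channels on a 14 x 14 feature map
std = conv_macs(256, 256, 3, 14, 14)
sep = dw_separable_macs(256, 256, 3, 14, 14)
reduction = std / sep                    # approaches k*k for wide layers
```

For these dimensions the separable form needs roughly 8.7x fewer MACs, at the cost of the asymmetric, irregular-shaped operators discussed next.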
While these transformations significantly reduce storage and computations, they make tensors irregular-shaped (asymmetric).
Quantization includes precision lowering and leveraging value similarity . Precision lowering allows representing tensors (weights, activations, gradients, weight updates) at a much lower bit-width (e.g., 8b or lower for inference and 8b/16b for learning). Moreover, elements with similar values can be clustered and approximated by sharing common values (the centroids of the clusters). Further, similar output values are reused through memoization (of partial outputs or entire layers). In general, significant redundancy exists in tensor elements (particularly in the parameters of large models), and a successfully trained model generalizes well and is immune to noisy data. So, the error induced by quantization or approximation can often be tolerated by a well-trained model . It can also obviate over-fitting caused otherwise by excessive precision, thereby improving the generality of learning . To compensate for the accuracy drop due to quantization, learning algorithms fine-tune the model or use quantization-aware training . Thus, quantization or approximation techniques typically either do not degrade inference accuracy or trade it off marginally for notable execution efficiency . 
Quantization significantly reduces storage requirements and accesses to off-chip memory.
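A minimal sketch of value-sharing quantization follows: a simple k-means-style codebook over the weights, so that only small per-element indices plus the codebook need to be stored (the cluster count, initialization, and iteration budget are illustrative assumptions, not a published recipe):

```python
import numpy as np

def cluster_quantize(w, n_clusters=16, iters=20):
    """Value-sharing quantization: approximate weights by a small codebook
    (cluster centroids); only per-element codebook indices are stored."""
    flat = w.ravel()
    # initialize centroids uniformly over the weight range (simple k-means)
    centroids = np.linspace(flat.min(), flat.max(), n_clusters)
    for _ in range(iters):
        idx = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for c in range(n_clusters):
            if np.any(idx == c):             # keep empty clusters unchanged
                centroids[c] = flat[idx == c].mean()
    return centroids, idx.reshape(w.shape)

rng = np.random.default_rng(2)
w = rng.standard_normal((64, 64)).astype(np.float32)
codebook, indices = cluster_quantize(w)
w_hat = codebook[indices]        # reconstructed (approximated) weights
# storage: 4-bit indices plus a 16-entry codebook instead of 32-bit floats,
# roughly an 8x reduction (ignoring the tiny codebook itself)
```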
It also \\textit{reduces area and power}, since for quantized tensors, functional units can be simpler and energy-efficient (e.g., int8 multiplier consumes 20$\\times$ less energy than FP32 multiplier for a 45 nm process). Bus sizes can be smaller as bandwidth requirements are reduced. \nThus, with sparse, size-reduced, and quantized tensors, compact models can achieve higher accuracy as models with uncompressed tensors, while becoming amenable for deployment at the edge, mobile, or online-learning platforms due to scope for low latency, energy, and storage. So, leveraging such opportunities is crucial for further accelerations.", "id": "08935b60-7ccb-45a9-af71-db60dc826e34", "level": "subsection", "origin_cites_number": 12, "parent_id": "f772f077-d5f9-4a62-b8e3-6c1cf6e8e806", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Acceleration Opportunities due to Compact Models and the Need for Special Support" ], [ "subsection", "Opportunities Due to Quantized Tensors" ] ], "subsections": [], "title": "Opportunities Due to Quantized Tensors" }, { "cite_extract_rate": 0.266666666666666, "cites": [ 4353, 7005, 7634, 4335 ], "content": "\\label{sec::need-special-support}\nHardware accelerators efficiently process different models . But, they inherently cannot benefit from the sparsity because all the data, including the zero values of activations, weights, and gradients, have to be fetched from memory and communicated to PEs; PEs are also unable to skip ineffectual computations, wasting the execution time. Sparsity, especially unstructured, induces irregularity in processing since NZs or blocks of NZs are scattered across the tensor. Therefore, leveraging sparsity necessitates additional mechanisms to store, extract, communicate, compute, and load-balance the NZs, and corresponding hardware and software support . 
Different sparsity levels and patterns from various sources lead to unique challenges and solutions in hardware/software co-design. Therefore, our discussions throughout this survey mainly focus on exploiting tensor sparsity for accelerating compact models.
Tensor dimension reduction and tensor decomposition make tensors irregular-shaped (asymmetric), and they may also modify the functionality of the computational primitives, e.g., depthwise convolution (DW-CONV). Since execution on hardware accelerators is typically well-optimized for processing symmetric tensors with a specific dataflow mechanism, these shape transformations and the support of different functionality (e.g., DW-CONV, randomized or approximated matrix multiply ) may introduce irregularity in the processing requirements. Sustaining high utilization of computational resources therefore requires additional support, including configurable hardware architectures and flexible mappings of the functionality onto architectural resources . 
Hardware accelerators have supported low-precision tensors of fixed bit-widths and, more recently, tensors with mixed precision . However, when sparse tensors are quantized with value sharing, the codebook must be indexed with the indices stored for the approximated elements . Such irregular accesses are handled by implementing separate indirection tables in the pipelined hardware datapath . Moreover, value similarity is leveraged further by reusing computations with memoized outputs, which requires additional processing. Further, supporting different bit-widths of various sparse tensors of different models requires configurable architectures for \emph{bit-adaptive} computing . 
To sum up, compressed tensors lead to sparse and irregular computations. Their efficient acceleration requires special support, which is described in the next section. 
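As a concrete illustration of why special support is needed, even a dot product of two compressed sparse vectors requires an intersection mechanism to pair matching NZs before any MAC can fire; a minimal software sketch of such an extraction:

```python
def sparse_dot(idx_a, val_a, idx_b, val_b):
    """Dot product of two compressed sparse vectors (sorted coordinate
    lists): a two-pointer intersection feeds the MAC unit only with
    matching pairs of NZs."""
    i = j = 0
    acc = 0.0
    while i < len(idx_a) and j < len(idx_b):
        if idx_a[i] == idx_b[j]:
            acc += val_a[i] * val_b[j]   # effectual MAC
            i += 1
            j += 1
        elif idx_a[i] < idx_b[j]:
            i += 1                       # NZ in A has no partner in B
        else:
            j += 1                       # NZ in B has no partner in A
    return acc
```

In hardware, this matching must happen every cycle to keep functional units fed, which is exactly the extraction logic surveyed later.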
The appendix explains why exploiting sparsity (especially unstructured) is relatively hard on CPUs and GPUs, and how hardware accelerators with special support can achieve notable gains.
\label{sec::overview-accel-compressed-tensors}
To efficiently process sparse and irregular tensor computations, designers of accelerator systems can integrate special hardware or software modules. Integrating such modules enables orchestrating structured computations while processing the tensors in compressed formats. 
Consequently, it can lead to efficient utilization of the accelerator resources and allows exploiting acceleration opportunities. Fig. \\ref{fig::sparse-accel-block-diagram} provides an overview of the accelerator system equipped with such modules. This section briefly describes these system modules.\n\\begin{table}\n\\centering\n\\caption{Accelerators for Processing Sparse Tensors.}\n\\label{tab:overview-sparse-accel}\n\\begin{tabular}{|c|m{4.85cm}|}\n\\hline\n\\thead{Objective} & \\thead{Techniques} \\\\ \\hline\n\\makecell{Compressed data \\\\ in off-chip \\\\ memory (storage)} \n& \\\\ \\hline \n\\makecell{Compressed data \\\\ in on-chip \\\\ memory (storage)} & \\\\ \\hline \n\\makecell{Skip processing \\\\ zeros \\\\ (energy efficiency)} & \\\\ \\hline , \n\\makecell{Reduce ineffectual \\\\ computation cycles \\\\ (performance \\& energy)} & \\\\ \\hline \n\\makecell{Load balancing \\\\ (performance)} & \\\\ \\hline \n\\end{tabular}\n\\end{table}\n\\begin{table*}[!t]\n\\centering\n\\caption{Accelerator Systems Leveraging Sparsity of Different Tensors for Different ML Models.}\n\\label{tab:target-sparsity}\n\\begin{tabular}{|c|c|m{1.5cm}|m{10cm}|}\n\\hline\n\\multirow{3}{*}{\\makecell{Dynamicity \\\\ of Sparsity}} \n& Static & \\multicolumn{2}{l|}{} \\\\ \\cline{2-4} \n& \\multirow{2}{*}{Dynamic} \n& \\multicolumn{2}{l|}{} \\\\ \n& & \\multicolumn{2}{l|}{} \\\\ \\hline\n\\multirow{4}{*}{\\makecell{Tensors Treated \\\\ as Sparse}} & \\multirow{2}{*}{Weight} & Unstructured & , \\\\ \\cline{3-4} \n& & Structured & \\\\ \\cline{2-4}\n& Activation & \\multicolumn{2}{l|}{} \\\\ \\cline{2-4}\n& Both & \\multicolumn{2}{l|}{} \\\\ \\hline\n\\multirow{5}{*}{\\makecell{Primitive \\\\ Operation}} \n& \\makecell{Matrix-Vector Multiply} & \\multicolumn{2}{l|}{} \\\\ \\cline{2-4} \n& \\makecell{Matrix-Matrix Multiply} & \\multicolumn{2}{l|}{} \\\\ \\cline{2-4} \n& \\makecell{Convolution} & \\multicolumn{2}{l|}{} \\\\ \\cline{2-4}\n& \\makecell{Recurrent / Attention Layer} & 
\\multicolumn{2}{l|}{} \\\\ \\hline\n\\multicolumn{2}{|l|}{Accelerators for Learning} & \\multicolumn{2}{l|}{} \\\\ \\hline\n\\end{tabular}\n\\end{table*}\nSparse, size-reduced, and quantized tensors of ML models offer various opportunities for storage, performance, and energy efficiency. Hence, several accelerators have provided marginal or comprehensive support and leveraged some or all the opportunities. Table \\ref{tab:overview-sparse-accel} lists such common objectives and corresponding accelerator solutions that meet these objectives. \nDifferent accelerators for inference and learning exploit $W$-sparsity, $IA$-sparsity, or both, which impacts acceleration gains . Several accelerators, including Cambricon-X , exploit only static sparsity (Table \\ref{tab:target-sparsity}), e.g., when locations of zeros in weights are known beforehand for inference. Static sparsity allows offline encoding and data transformations for arranging structured computations (e.g., for systolic arrays ). Recent accelerators, including ZENA , SNAP , and EyerissV2 , leverage dynamic sparsity also. It requires determining locations of intersecting NZs in both tensors at run-time to feed functional units, on-the-fly decoding (encoding) NZs, and often balancing computations on PEs. Table \\ref{tab:target-sparsity} lists different accelerators that support static and dynamic sparsity of tensors. Now, we describe different hardware and software aspects of the accelerator system that help in leveraging sparsity effectively.\n\\textbf{Sparsity encodings:} Sparse tensors are compressed using encodings, where only NZ values are stored in a \"data\" tensor and one or more \"metadata\" tensors encode locations of NZs. Section \\ref{sec::sparse-data-coding} discusses different formats and associated costs for encoding and decoding. For different sparsity levels, it analyzes their effectiveness in terms of storage efficiency. 
E.g., tensors can be compressed by 1.8$\times$ and 2.8$\times$ for 50\% and 70\% sparsity (bitmap or RLC-2) and 7.6$\times$ and 55$\times$--60$\times$ for 90\% (RLC-4) and 99\% sparsity (CSC or RLC-7). Structured sparsity (coarse-grain block-sparse) can alleviate the overheads of metadata and fine-grained data extraction by encoding indices for only large dense blocks. For accelerating ML models, sparse tensors are also quantized, i.e., their precisions are lowered (typically int8 or int16 for inference and FP16 for learning) and often approximated by clustering data of similar values. Therefore, encoded sparse data contains quantized values of NZs. \n\textbf{NZ detection and data extraction:} In processing sparse tensors of different primitives, corresponding elements of the weight and activation tensors are multiplied and accumulated. Depending on the sparsity, accelerators need data extraction logic that decodes compressed tensors, searches within a window of NZs or indexes the buffer, and obtains matching pairs of NZs to feed the functional units for computation. Section \ref{sec::NZ-data-extraction} provides a taxonomy of different data extraction mechanisms and analyzes their implications for various sparsity levels. Up to moderate $IA$-sparsity and high $W$-sparsity, these indexing or intersection-based mechanisms efficiently extract sufficient NZs at every cycle for keeping functional units engaged. For efficient compute-bounded executions at such sparsity, accelerators reported achieving near-ideal speedups (e.g., about 80\%--97\% of the speedup corresponding to reduced operations, i.e., the \textit{sparsity-speedup ratio}). However, extraction becomes challenging at high (e.g., 90\%+) or hyper sparsity as NZs are scattered at distant locations, and execution is usually memory-bounded with low arithmetic intensity. Section \ref{sec::NZ-data-extraction} also discusses sharing of the data extraction mechanism among PEs or employing it within PEs. 
Then, it discusses opportunities for further optimizations.\n\\textbf{Memory management:} Compressed tensors are often stored in the shared on-chip memory that is non-coherent, multi-banked, and often non-unified. For a pre-determined sequence of execution, a controller or PEs initiates the accesses between off-chip and on-chip memory; their latency needs to be hidden behind computations on PEs. Section \\ref{sec::memory-management} discusses corresponding memory architectures and techniques for hiding miss penalty for sparse tensors via double-buffering or asynchronous computation and memory accesses. It describes the data reuse opportunities for various sparsity and dimensions of tensors of common DNNs and how sparsity lowers the reuse. It also discusses techniques that leverage cross-layer reuse of intermediate output layers and reduce latency. \n\\textbf{Communication networks:} Once tensor blocks are fetched from memory, they are distributed to appropriate PEs via interconnect networks (often one per operand). Efficient designs ensure that sufficient data can be fed to PEs while they perform computations. Reuse is leveraged spatially by multicast or mesh networks that communicate common data blocks to multiple PEs. It lowers accesses to memory hierarchy and communication latency. However, spatial reuse opportunities vary depending on the sparsity, NZ extraction mechanism, and mapping of the functionality on the accelerator. Section \\ref{sec::comm-networks} discusses different designs for distributing sparse and quantized tensors and reducing partial outputs. It also describes challenges in executing inter-PE communications that may become unstructured due to sparsity and the temporal and spatial mechanisms for reduction/collection of the outputs. It describes how configurable designs support various communication patterns for different sparsity, reuse, and functionality. 
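The latency-hiding idea above (overlapping off-chip transfers with computation via double-buffering) can be illustrated with a minimal sketch; the function and parameter names are ours, not from any specific accelerator:

```python
import threading
from queue import Queue

def double_buffered_execute(tiles, fetch, compute):
    """Overlap `fetch` (models a DMA transfer) with `compute`
    (models PE processing) using a 2-deep queue (ping-pong buffers)."""
    buffers = Queue(maxsize=2)

    def prefetcher():
        for tile in tiles:
            buffers.put(fetch(tile))  # blocks while both buffers are full
        buffers.put(None)             # sentinel: no more tiles

    threading.Thread(target=prefetcher, daemon=True).start()
    results = []
    while (buf := buffers.get()) is not None:
        results.append(compute(buf))
    return results
```

With this structure, the next tile's transfer proceeds while the current tile is processed; an accelerator realizes the same overlap with a DMA engine and two halves of an on-chip buffer.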
\n\\textbf{PE architecture:} Several accelerators consist of scalar PEs with fused MAC units (e.g., EIE , LNPU , and Envision ). Others contain SIMD PEs (multiple functional units) (e.g., EyerissV2 ) or vector PEs consisting of multiplier-arrays and adder-trees (e.g., Cambricon-X and SNAP ). PE architectures either directly process pairs of matching NZs extracted from tensors or use hardware logic for data extraction or coordinate computation (Fig. \\ref{fig::ML-accelerators}). Effectively utilizing functional units can be challenging for variations in sparsity, precisions, and functionality, and it may require configurable designs. Section \\ref{sec::PEArch} provides corresponding discussions and describes sparsity-aware dataflow mechanisms (mapping of tensor computations on accelerator resources) used by different accelerators. It also describes how accelerators have leveraged value similarity of tensors and the corresponding modifications in the PE architecture.\n\\textbf{Load balancing:} Depending on the distribution of zeros, the execution may end up with processing a different amount of NZs on different PEs or their functional units, which creates inter-PE or intra-PE load imbalance. Section \\ref{sec::load-balance} analyzes such sources of the imbalance and introduces a taxonomy of different load balancing techniques. Accelerators achieve load balance through either software techniques (e.g., structured pruning or data reorganization) or by providing a hardware module for dynamic work balance (through asynchronous execution or work sharing), which provides further accelerations. For example, ZENA leveraged the sparsity of both activation and weight tensors for AlexNet and VGG-16 models and reported about 32\\% additional performance gains through load balancing. 
Dynamic load balancing can provide notable speedups for high, unstructured sparsity.\n\textbf{Write-back and post-processing:} Tensor elements produced by PEs need to be collected, post-processed for further operations, and written back to the memory. PEs in different accelerators write back either sequentially or asynchronously through a shared bus or via point-to-point links. In addition, accelerators usually contain a post-processing unit that re-organizes the data (as per the dataflow mechanism of the current and next layer of the model) and encodes sparse output on the fly. Section \ref{sec::post-processing} discusses such mechanisms. \n\textbf{Compilation support:} It is important to support the execution of various ML models on accelerators and to ease porting of models from ML libraries. Section \ref{sec::software-support} discusses compiler support for sparse models and hardware accelerators. It discusses polyhedral and non-polyhedral intermediate representations and their implications on the compiler's ability to represent the code and apply code transformations. It describes challenges in supporting sparse tensors and DNN compilers that facilitate sparse tensor computations. Then, it discusses compiler optimizations including common loop optimizations and those specific to hardware intrinsics. It also describes semi-automatic optimizations for transforming the loops and data layout and automatic optimizations using cost models. 
Finally, it discusses ISAs used by accelerators and their code generation by using libraries of high-level primitives.", "id": "dfcc21bd-b4bb-440e-9f6f-0af65d5e6281", "level": "subsection", "origin_cites_number": 48, "parent_id": "4bd89bd4-f378-4c43-b038-faf39e54d1ba", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Accelerator Design for Efficient Sparse and Irregular Tensor Computations" ], [ "subsection", "Overview" ] ], "subsections": [], "title": "Overview" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 7, 4348, 848, 439, 7634, 842, 38, 840 ], "content": "\\label{sec::sparse-dnn-acceleration-case-study}\n\\begin{table}[!t]\n\\centering\n\\caption{Sparsity of Some Popular DNNs.}\n\\label{tab:sparsity-dnn-models}\n\\resizebox{0.48\\textwidth}{!}{\n\\addtolength{\\tabcolsep}{-3pt}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|}\n\\hline\n\\multirow{2}{*}{Model} & \\multirow{2}{*}{Domain} & \\multirow{2}{*}{Dataset} & \\multirow{2}{*}{\\makecell{GOps \\\\ (dense)}} & \\multicolumn{3}{c|}{Sparsity \\%} & \\multirow{2}{*}{\\makecell{Sparse\\\\ Model}} \\\\ \\cline{5-7}\n & & & & IA & W & Ops & \\\\ \\hline\nMobileNetV2 & CV & ImageNet & 0.3 & 34 & 52 & 81 & \\\\ \\hline\nEfficientNetB0 & CV & ImageNet & 0.5 & 0 & 68 & 60 & \\\\ \\hline\nTransformer & NLP & WMT En-De & 4.6 & 0 & 79 & 79 & \\\\ \\hline\n\\makecell{BERT-base-uncased } & NLP & \\makecell{SQuAD} & 9.3 & 0 & 92 & 92 & \\\\ \\hline\n\\end{tabular}\n\\addtolength{\\tabcolsep}{3pt}\n}\n\\end{table}\n\\begin{table*}[!t]\n\\centering\n\\caption{Architectural features of analyzed accelerators for sparse DNNs.}\n\\label{tab:microarch-features-sparse-DNN-accelerators}\n\\addtolength{\\tabcolsep}{-3pt}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n\\multirow{3}{*}{ID} & \\multirow{3}{*}{\\begin{tabular}[c]{@{}c@{}}Reference \\\\ Architecture\\end{tabular}} & 
\\multirow{3}{*}{\\begin{tabular}[c]{@{}c@{}}Supported \\\\ Operators\\end{tabular}} & \\multicolumn{2}{c|}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Sparsity \\\\ Leveraged\\end{tabular}}} & \\multicolumn{3}{c|}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Non-zero Data \\\\ Extraction\\end{tabular}}} & \\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}PE \\\\ Architecture\\end{tabular}} & \\multirow{3}{*}{\\begin{tabular}[c]{@{}c@{}}Work \\\\ Synch-\\\\ ronization\\end{tabular}} & \\multirow{3}{*}{\\begin{tabular}[c]{@{}c@{}}Freq.\\\\ (GHz)\\end{tabular}} & \\multirow{3}{*}{\\begin{tabular}[c]{@{}c@{}}DRAM \\\\ BW\\\\ (GBPS)\\end{tabular}} & \\multicolumn{4}{c|}{Bit-width} \\\\ \\cline{13-16} \n & & & \\multicolumn{2}{c|}{} & \\multicolumn{3}{c|}{} & & & & & \\multicolumn{2}{c|}{data} & \\multicolumn{2}{c|}{metadata} \\\\ \\cline{4-9} \\cline{13-16} \n & & & IA & W & Encoding & Discovery & Loc. & FU & & & & IA / O & W & IA & W \\\\ \\hline\nA1 & EIE & GEMM & \\multicolumn{2}{c|}{\\begin{tabular}[c]{@{}c@{}}unstructured\\end{tabular}} & CSR & \\multirow{2}{*}{Indexing} & in-PE & Scalar & Prefetch & 0.8 & 256 & 16 & 4 & N/A & 4 \\\\ \\cline{1-6} \\cline{8-16} \nA2 & \\begin{tabular}[c]{@{}c@{}}Cambricon-X\\\\ \\end{tabular} & \\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}CONV, \\\\ GEMM\\end{tabular}} & dense & \\makecell{unstru-\\\\ctured} & \\begin{tabular}[c]{@{}c@{}}COO-\\\\ 1D\\end{tabular} & & \\begin{tabular}[c]{@{}c@{}}central \\\\ (per PE)\\end{tabular} & \\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Vector (16 \\\\ multipliers \\&\\\\ adder tree)\\end{tabular}} & \\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Every \\\\ Output \\\\ Activation\\end{tabular}} & 1 & 256 & 16 & 16 & N/A & 8 \\\\ \\cline{1-2} \\cline{4-8} \\cline{11-16} \nA3 & \\begin{tabular}[c]{@{}c@{}}Cambricon-S\\\\ \\end{tabular} & & \\makecell{unstru-\\\\ctured} & \\begin{tabular}[c]{@{}c@{}}block-\\\\ sparse\\end{tabular} & \\multirow{2}{*}{Bitmap} & 
\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Inter-\\\\ section\\end{tabular}} & \\begin{tabular}[c]{@{}c@{}}central, \\\\ shared\\end{tabular} & & & 1 & 256 & 16 & 8 & 1 & 1 \\\\ \\cline{1-5} \\cline{8-16} \nA4 & \\begin{tabular}[c]{@{}c@{}}ZENA-\\\\ IA-W \\end{tabular} & CONV & \\multicolumn{2}{c|}{unstructured} & & & in-PE & Scalar & \\begin{tabular}[c]{@{}c@{}}Intra-\\\\ Workgroup\\end{tabular} & 0.2 & 12 & 16 & 16 & 1 & 1 \\\\ \\hline\n\\end{tabular}\n\\addtolength{\\tabcolsep}{3pt}\n\\end{table*}\nThis section analyzes the sparsity of recent DNN models (for NLP and CV) and the acceleration that can be achieved with some of the popular accelerators-alike architectures.\n\\textbf{DNN models:} Table \\ref{tab:sparsity-dnn-models} summarizes analyzed DNN models and their overall sparsity across all CONV, GEMM, and DW-CONV operations. For each of these DNN operations, $W$-sparsity was obtained from sparse DNN models (listed in the last column). $IA$-sparsity was obtained by performing inference with sample data (images and text sequences). $GOps$ corresponds to processing of a single image for CV models and sequences of 24\nand 107 tokens for Transformer and BERT. \n\\textbf{Accelerators:} Table \\ref{tab:microarch-features-sparse-DNN-accelerators} summarizes analyzed accelerators and their sparsity-centered features. Their architectures targeted unstructured or block sparsity of activations and/or weights. Their features represent variations across data encoding, data extraction, vector processing, memory hierarchy, NoC, and load balancing.\n\\textbf{Methodology:} To determine the impact of sparsity on achievable acceleration, we performed a data-driven analysis of the execution latency. For each DNN layer, zeros (or blocks of zeros) were induced randomly according to the sparsity of its tensors. 
The overall execution time was determined from the latency of processing on functional units, data decoding, extraction of non-zeros, work synchronization, and off-chip memory transfers, which were calculated based on analytical modeling of the microarchitectural features. The speedups were calculated over oracle processing of dense tensors at the accelerator's peak utilization of computational resources and off-chip bandwidth. In this study, we do not consider the processing of DW-CONV on these accelerators, since they are often not pruned, and their execution needs to be group-wise, which is extremely inefficient. Such unsupported performance-critical operators were assumed to be processed with dense tensors at peak utilization of hardware resources.\n\\begin{figure}[t]\n\\centering\n\\centerline{\\includegraphics[width=\\linewidth]{figures/sparsity-perf-analysis}}\n\\caption{(a) Obtained speedups for accelerators listed in Table \\ref{tab:microarch-features-sparse-DNN-accelerators}. (b) Analysis of execution time overheads for obtained accelerations.}\n\\label{fig::sparse-dnn-acceleration-analysis}\n\\end{figure}\n\\textbf{Analysis:} Fig. \\ref{fig::sparse-dnn-acceleration-analysis}(a) shows speedups of accelerators for targeted DNN models, for leveraging the sparsity of supported DNN operators. It illustrates speedups for (i) reduction in the operations due to sparsity (desired), (ii) peak utilization of accelerator's computational resources and off-chip bandwidth while leveraging sparsity, over such oracle processing of dense tensors (potential), and (iii) actual processing on accelerator over oracle processing of dense tensors (obtained). For understanding implications of execution overheads including those incurred by metadata processing and load imbalance, Fig. \\ref{fig::sparse-dnn-acceleration-analysis}(b) illustrates fractions for desired computation time and execution overheads in a stacked format. 
The overheads were extracted for layer-wise processing and then accumulated to determine the overall impact. Fractions include:\n$\bullet$ Computation time: Minimum execution time required for processing at peak on accelerator's functional units.\n$\bullet$ NZ extraction: Time required for decoding NZs from communicated operands and extracting matching operands for feeding the functional units. It also corresponds to balanced computations.\n$\bullet$ Load imbalance: Time required for on-chip processing on the accelerator, considering the imbalanced computations subjected to the accelerator's work synchronization and work sharing schemes.\n$\bullet$ DMA time: Time required for off-chip data communication via DMA transfers, in addition to on-chip processing.\nFig. \ref{fig::sparse-dnn-acceleration-analysis}(a) shows that accelerators efficiently exploited moderate sparsity. E.g., for 4.8$\times$ reductions in operations of Transformer due to $W$-sparsity, they achieved about 4$\times$--4.2$\times$ speedup. The achieved fraction of the potential speedup drops when activations are dense and weights are highly or hyper-sparse. This is because accelerators like EIE and Cambricon-X broadcast activations to PEs and extract matching pairs corresponding to NZ weights. So, communication of activations and extraction of matching NZ operands consume significant execution time, while there are fewer operations to feed the functional units (Fig. \ref{fig::sparse-dnn-acceleration-analysis}b). E.g., for BERT-base-uncased (92\% sparse weights) on SQuAD, they achieved about 7.7$\times$--8.9$\times$ speedup out of the 12.2$\times$ speedup for processing at peak. Due to block-sparse weights, computations on PEs of Cambricon-S are always balanced. Therefore, it achieved higher speedups. However, when pruning with blocks of 16$\times$16 or even 1$\times$16 (across input and output channels), inducing similar sparsity is not always possible. 
So, the reduction in operations and potential for the speedup was slightly lower for Cambricon-S (e.g., for EfficientNetB0). In general, due to high DRAM bandwidth, overheads incurred by DMA transfers were hidden (for Cambricon-X/S) or negligible for non-interleaved transfers (e.g., for EIE).\nFig. \\ref{fig::sparse-dnn-acceleration-analysis}(a) also shows that Cambricon-S and ZENA-IA-W achieved higher speedups for CV models by leveraging unstructured sparsity of activations. High IA-sparsity amplified total sparsity during processing several layers (e.g., MobileNetV2), incurring considerable excess processing in data extraction for Cambricon-X/S and in load imbalance for ZENA-IA-W. With zero-aware static sorting of filters and dynamic load balance, ZENA could overcome such imbalance. But, it would suffer through high on-chip communication time since it used only one shared bus for multicast via NoC and collecting outputs. We disregarded such communication overhead for ZENA-IA-W in this study, as most accelerators use separate NoCs or buses for alleviating communication overheads. Also, due to low DRAM bandwidth, overheads incurred by DMA transfers were higher for ZENA-IA-W, mainly for executing DW-CONVs with dense tensors.", "id": "c1d7323b-337a-4566-9a3e-103c4b3f9afa", "level": "subsection", "origin_cites_number": 12, "parent_id": "4bd89bd4-f378-4c43-b038-faf39e54d1ba", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Accelerator Design for Efficient Sparse and Irregular Tensor Computations" ], [ "subsection", "Case Study: Acceleration of DNNs and Bottleneck Analysis" ] ], "subsections": [], "title": "Case Study: Acceleration of DNNs and Bottleneck Analysis" }, { "cite_extract_rate": 1, "cites": [ 4357 ], "content": "\\label{sec::sparse-data-coding}\nA sparse tensor is compressed with an encoding format. 
An encoded tensor contains actual \\textit{data} (NZ values) and \\textit{metadata} (information about positions of NZs). Later, metadata is used by an accelerator's data indexing logic to locate and extract NZs. This section discusses commonly used encodings through an example (Fig. \\ref{fig::sparse-data-encodings}) and their implications on the storage and processing requirements. For different formats, Fig. \\ref{fig::sparse-encoding-schemes-overview} introduces a taxonomy for processing metadata during data extraction, and Table \\ref{tab:sparse-encodings-overhead} lists the corresponding storage overhead. Depending on the mapping of a layer onto the accelerator, tensors are divided into blocks (per PE-wise work) which are encoded separately. We refer to such processing as a \\textit{group-wise encoding}, which is discussed later. Finally, this section briefly describes encoding on the fly and further opportunities.\n\\begin{figure}[t]\n\\centering\n\\centerline{\\includegraphics[width=0.85\\linewidth]{figures/sparse-encoding-schemes-overview}}\n\\caption{A taxonomy for the required processing on the metadata during data extraction when a sparse tensor is encoded using different formats.}\n\\label{fig::sparse-encoding-schemes-overview}\n\\end{figure}\n\\begin{figure*}[t]\n\\centering\n\\centerline{\\includegraphics[width=0.9\\linewidth]{figures/sparse-data-encodings}}\n\\caption{Encodings to store sparse tensors in different formats. Elements with green shade encode the same NZ element. 
(Figure inspired by .)}\n\\label{fig::sparse-data-encodings}\n\\end{figure*}", "id": "76637357-2ce7-49d4-924f-7e20b7606e08", "level": "section", "origin_cites_number": 1, "parent_id": "6220d89f-a486-420a-914e-59957895a313", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Encodings for Compressing Sparse Tensors" ] ], "subsections": [ "7aaaec9f-b2f4-4e9c-b23d-5e1be0d4f563", "c38dbc5f-be9d-49a8-8c86-f1dc0a62356c", "88d6dbef-ac33-4b34-b43e-9ce569064200", "55296c8b-346d-4804-8382-bbbe3495d55b" ], "title": "Encodings for Compressing Sparse Tensors" }, { "cite_extract_rate": 0.1875, "cites": [ 4357, 514, 7169, 4358, 4335, 4344, 4347, 4354, 7634, 7805, 4359, 4341 ], "content": "\\mysubsubsection{Coordinate (COO)} It stores absolute positions of NZs. As Fig. \\ref{fig::sparse-data-encodings}(b) shows, all NZs of an uncompressed tensor $T$ are stored in a data vector $val$, and vectors $coord\\_y$ and $coord\\_x$ indicate the coordinates of each NZ value. So, COO is a natural way to express sparse tensors and is used commonly (e.g., in PyTorch). Formats adopted by FROSTT and matrix market closely resemble COO.\nThe COO format stores all coordinates in the uncompressed format. E.g., as Fig. \\ref{fig::sparse-data-encodings}(b) shows, the metadata for values '2' and '3' (same row) or '2' and '5' (same column) are not compressed, i.e., duplicate values of row and column indices exist in coordinate vectors. So, the overhead of storing $n$ coordinates per NZ value is about $\\sum_1^n { \\lceil \\log_2{d_i} \\rceil}$ bits (vector $d$ contains the tensor's dimensions). It makes COO inefficient for storing tensors with low or moderate sparsity. \nFig. \\ref{fig::sparse-data-encodings-overhead} shows storage benefits for encoding 2 MB matrices of various sparsity in different formats. 
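These normalized sizes can be approximated with a short script based on the per-format overhead formulas of Table \ref{tab:sparse-encodings-overhead} (a sketch; the RLC bit-width per NZ is a parameter, and real encoders add implementation-specific overheads):

```python
import math

def encoded_bits(fmt, dims, nnz, data_bits=16, rlc_b=4):
    """Total bits (NZ data + metadata) for a 2D tensor encoded in `fmt`,
    following the overhead formulas of the table of storage overheads."""
    d0, d1 = dims
    ptr_bits = int(math.log2(nnz)) + 1      # floor(log2 NNZ) + 1
    meta = {
        "COO":    nnz * (math.ceil(math.log2(d0)) + math.ceil(math.log2(d1))),
        "RLC":    nnz * rlc_b,
        "Bitmap": d0 * d1,
        "CSR":    nnz * math.ceil(math.log2(d1)) + (d0 + 1) * ptr_bits,
        "CSC":    nnz * math.ceil(math.log2(d0)) + (d1 + 1) * ptr_bits,
    }[fmt]
    return nnz * data_bits + meta

dims = (512, 2048)
dense = dims[0] * dims[1] * 16
for sp in (0.5, 0.9):
    nnz = round(dims[0] * dims[1] * (1 - sp))
    sizes = {f: encoded_bits(f, dims, nnz) / dense
             for f in ("COO", "RLC", "Bitmap", "CSR", "CSC")}
    print(sp, sizes)
```

E.g., at 50\% sparsity the bitmap-coded tensor occupies 0.5625 of the dense size, matching the roughly 1.8$\times$ compression quoted earlier.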
We calculated storage requirements with the analysis presented in Table \\ref{tab:sparse-encodings-overhead} and normalized them to the matrix's size in dense format. We used the Scipy library to generate matrices of various sparsity and encode them in COO, CSR, and CSC. Fig. \\ref{fig::sparse-data-encodings-overhead} shows that for a 2 MB matrix, COO achieves storage efficiency for 70\\%+ sparsity. However, COO may yield simple indexing logic, as both the data and metadata can be directly extracted.\n\\mysubsubsection{COO-1D} For \\emph{tile-wise processing} of an encoded tensor, accelerators often process only a block of NZs at a time, where block elements vary across only a single dimension. For example, Cnvlutin processes the input activations and weights across the channel direction. Therefore, the data block is encoded with COO-1D, which is just like COO, but there is only one $pos$ vector for storing coordinates of NZs in the flattened block. For instance, if we flatten $T$ and consider it a block, then the value '5' is indexed by position '3'. \n\\mysubsubsection{Run-Length Coding (RLC)} It compresses a sequence of values by replacing consecutive duplicate values with a single value and the number of repetitions (aka \\textit{run}). For RLC-encoded sparse tensor, ``run'' indicates a total number of zeros before (after) an NZ. Fig. \\ref{fig::sparse-data-encodings}(d) shows RLC encoding of $T$. Run values for '2' and '3' are '0' and '1', respectively. A few accelerators, including Eyeriss , encode both the NZs and runs altogether in the same vector $val$. For example, $T$ can be encoded as $val$: (0, 2, 1, 3, 0, 5, 4, 7).\nRLC requires a \\emph{step-processing} on metadata, as run-length needs to be calculated by accumulating runs and preceding \\textit{number of NZs (NNZs)}, for determining the position of an NZ. The storage overhead for RLC-B is NNZ $\\times$ B bits, where B is bit-width of the run. 
If a vector $d$ contains tensor dimensions, then B can be set as up to $\lceil \log_2{(\prod_1^n d_i)} \rceil$ bits for accommodating the number of leading zeros in a highly sparse tensor. When B is set lower, it cannot always capture the number of zeros as \textit{run}. Fig. \ref{fig::sparse-data-encodings}(d) shows RLC-2b encoding, where leading zeros before '7' are four. This cannot be expressed in 2 bits. As a work-around, padding zeros are inserted and treated as NZs. In this example, a padding zero is inserted between '5' and '7'; run values corresponding to the padding zero and '7' are '3' and '0', which together contribute the total run of four.\nTo accelerate CNNs with 30\%--90\% sparsity of tensors, designers have set B as two or four bits. In general, setting B as $\lfloor \log_2{(\frac{sparsity}{density})} \rfloor + 1$ bits can effectively compress tensors and provide a feasible bit-width to indicate leading zeros. Here, $sparsity$ and $density$ are the actual or anticipated fractions of zeros and non-zeros in the tensor, respectively. Thus, setting B as 1, 1, 1, 2, 4, and 7 efficiently encodes tensors with sparsity of 10\%, 30\%, 50\%, 70\%, 90\%, and 99\%, respectively, as depicted in Fig. \ref{fig::sparse-data-encodings-overhead}.\nAs RLC requires step-processing on metadata, the indexing logic needs an accumulator to determine the position of an NZ. When an encoded tensor is not processed block-wise but rather indexed by n dimensions, the indexing logic may require performing division and modulo operations on the metadata. Alternatively, a multi-dimension representation can be used where the $run$ for the coordinates of each dimension is calculated separately and stored. The overall computational cost (arithmetic and logical operations realized in hardware) for such step-processing can be low. Therefore, several accelerator designs, including Eyeriss and SCNN , used RLC or its variant. 
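The RLC-B scheme with the padding work-around can be sketched as follows; this is a minimal illustration (helper name is ours) applied to the flattened tensor of Fig. \ref{fig::sparse-data-encodings}:

```python
def rlc_encode(flat, b=2):
    """Run-length code a flattened sparse block: emit (value, run) pairs,
    where `run` counts zeros preceding the value, capped at 2**b - 1.
    When a zero run would overflow, that zero is emitted as a padding 'NZ'."""
    max_run = 2 ** b - 1
    vals, runs, run = [], [], 0
    for x in flat:
        if x == 0 and run < max_run:
            run += 1                  # absorb the zero into the current run
        else:
            vals.append(x)            # a real NZ value, or a padding zero
            runs.append(run)
            run = 0
    return vals, runs
```

Decoding simply reinserts `run` zeros before each stored value; padding zeros decode transparently because their stored value is 0.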
As run indicates repetition of a value, CompAct used an enhanced RLC format for encoding both the sparse and similar-value activations. \n\mysubsubsection{Bitmap} It stores all NZs in a tensor $val$ along with a tensor \textit{flag}, which contains 1-bit flags for all elements of an uncompressed tensor $T$. As Fig. \ref{fig::sparse-data-encodings}(e) shows, a flag indicates whether an element is NZ or not. Storage overhead for the bitmap (aka bit-mask) is $\prod_1^n d_i$ bits (where vector $d$ stores $n$ dimensions of $T$). Since bitmap stores metadata for all elements, it is effective for compressing tensors of low or moderate sparsity. Like RLC, decoding or indexing bitmap also requires step-processing. The indexing logic to locate an NZ typically consists of at least an adder and a comparator. Due to moderate storage overhead and low encoding/decoding cost, several accelerators used bitmap, including Cambricon-X , SparTen , and SIGMA , as shown in Table \ref{tab:accel-sparse-encodings}.\n\begin{figure}[t]\n\centering\n\centerline{\includegraphics[width=0.9\linewidth]{figures/sparse-data-encodings-overhead}}\n\caption{Storage benefits for encoding a sparse tensor (512$\times$2048 matrix with 16b elements) in different formats, normalized to the size of the fully dense tensor (Figure inspired by ).}\n\label{fig::sparse-data-encodings-overhead}\n\end{figure}\n\mysubsubsection{Compressed Sparse Row (CSR)} It compresses a matrix by processing each row as a sparse vector. In a CSR-coded tensor, an array $val$ contains all NZ values (ordered row-wise), and an array $idx$ stores their column indices. Array $ptr$ stores cumulative counts of NZs; the total NZs in row $i$ is obtained by calculating $ptr[i+1]$ - $ptr[i]$. The last element of $ptr$ contains the total number of NZs in $T$. 
Row-wise compression enables random accesses to any row.\nWhile COO redundantly stores row-coordinates of NZs in the same row, CSR compresses such metadata by storing NZs row-wise. For example, in Fig. \ref{fig::sparse-data-encodings}(b) (COO), \textit{coord-y} stores row indices '0' and '0' for NZs '2' and '3'. This redundancy is removed in the CSR coding of Fig. \ref{fig::sparse-data-encodings}(f), as $ptr$ stores only total NZs in each row. For compressing an M$\times$N matrix using CSR, the total storage overhead is {NNZ $\times$ $\lceil \log_2{N} \rceil$} (for $idx$) + {(M + 1) $\times$ $\lfloor \log_2{NNZ} + 1 \rfloor$} (for $ptr$). Due to high storage overhead (proportional to NNZ and the number of rows), CSR coding is efficient at high sparsity, e.g., 90\% or higher (Fig. \ref{fig::sparse-data-encodings-overhead}). \nDecoding a CSR-encoded tensor can require two-step processing of metadata. The first step locates NZs of a row by iterating over $ptr$, and the next step locates an NZ element among the NZs of the row through the column index. Accelerators efficiently process CSR-coded matrices row-wise such that $ptr$ is accessed once for fetching each row, and then the decoder iterates through $idx$ (to locate column positions).\nCSR variants can improve efficiency further. For example, $ptr$ stores duplicate values when consecutive rows are zero. Doubly CSR (DCSR) eliminates this redundancy and achieves additional compression for hyper-sparse matrices. Block CSR (BCSR) stores a block of elements in $val$ if the block contains at least one NZ. As Fig. \ref{fig::sparse-data-encodings}(k) shows, in BCSR, $idx$ indicates the column index of a block, and $ptr$ informs about the number of dense blocks located in the same rows. BCSR avoids storing blocks of all zeros and populates dense regions, and is hence suitable for encoding block-sparse structured weight tensors. 
Thus, BCSR-coded tensors can be efficiently executed not only on conventional processors but also on hardware accelerators (with additional support for appropriately indexing dense regions, e.g., ). \n\\begin{table}[t]\n\\centering\n\\caption{Storage overhead for common encodings. Vector $d$ stores $n$ dimensions of a tensor that contains $NNZ$ non-zero elements.}\n\\label{tab:sparse-encodings-overhead}\n\\begin{tabular}{|c|c|} \\hline\nFormat & Storage Overhead (bits) \\\\ \\hline\nCOO & $NNZ \\times \\sum_1^n { \\lceil \\log_2{d_i} \\rceil}$ \\\\ \\hline\nCOO-1D & $NNZ \\times \\lceil \\log_2{\\prod_1^n d_i} \\rceil$ \\\\ \\hline\nRLC & $NNZ \\times B$ \\\\ \\hline\nBitmap & $\\prod_1^n d_i$ \\\\ \\hline\nCSR & \\makecell{$NNZ \\times \\lceil \\log_2{d_1} \\rceil$ + $(d_0 + 1) \\times \\lfloor \\log_2{NNZ} + 1 \\rfloor$} \\\\ \\hline\nCSC & \\makecell{$NNZ \\times \\lceil \\log_2{d_0} \\rceil$ + $(d_1 + 1) \\times \\lfloor \\log_2{NNZ} + 1 \\rfloor$} \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\\mysubsubsection{Compressed Sparse Column (CSC)} CSC is similar to CSR, except that NZs are stored column-wise . As Fig. \\ref{fig::sparse-data-encodings}(g) shows, an array $val$ contains NZs (organized column-wise); $idx$ stores their row indices; $ptr$ informs about the total NZs in each column. The storage overhead and hardware costs for encoding/decoding tensors in CSC format are similar to those for CSR. Accelerators, including EIE and Sticker , processed high-sparsity tensors with CSC format. \nFor alleviating the high storage overhead of CSR or CSC formats due to storing $idx$ and $ptr$ arrays, a few accelerators further encode the metadata $idx$ or $ptr$. For example, EIE and EyerissV2 encode $idx$ in RLC such that elements in $idx$ indicate zeros between column indices of NZs (similar to $run$ in RLC for NZ values). Fig. \\ref{fig::sparse-data-encodings}(h) shows CSC encoding with such an RLC-encoded row index array. 
Values '2' and '5' have row indices '0' and '1', respectively, which can be encoded as '0' and '0' since there are no leading zeros before NZs '2' and '5'. Similarly, if the first column of $T$ is (0, 2, 0, 0, 5), then the row indices for '2' and '5' can be encoded as '1' and '2'. $ptr$ can also be encoded likewise (store NZs per column instead of a cumulative number). However, encoding positions relatively requires an additional processing step on the metadata. Therefore, decoding a CSR- or CSC-encoded matrix with RLC-encoded metadata can require three-step processing of metadata (additional hardware cost).\n\mysubsubsection{Compressed Sparse Fiber (CSF)} CSF provides a generalization of CSR for higher-order ($n$-dimensional) tensors by forming a tree (with $n$ levels). Nodes at level $l$ contain indices for the $l$th mode (dimension) of an uncompressed tensor $T$. The path from a root to a leaf node encodes the different coordinates of an NZ, which are stored in the nodes throughout the path; each leaf node stores an NZ value. So, the height of the tree equals the number of dimensions of $T$; the width equals the NNZs in $T$. \nFig. \ref{fig::sparse-data-encodings}(i) illustrates a mode-0 tree and corresponding arrays of index pointers. Root nodes represent the major mode (0 or $y$), and their child nodes represent the consecutive dimension (1 or $x$). As in CSR, $ptr$ informs about a group of indices corresponding to a dimension. For instance, the $ptr$ array at the beginning informs that one group of three coordinates corresponds to mode 0 ($idx$ stores the coordinates). Similarly, the next $ptr$ array informs about three different groups of coordinates for the next mode (dimension 1). The corresponding $idx$ array stores the coordinates for mode 1, separated into three groups (marked by thick outer vertical borders). \nLayering the arrays of index pointers reduces duplication of indices .
Each time a node branches to its children, it eliminates duplicate indices for the corresponding mode. Storage benefits increase with the number of dimensions and the redundancy among coordinates of NZs. The organization of the data also impacts storage efficiency. For example, Fig. \ref{fig::sparse-data-encodings}(j) shows another ordering, which eliminates storing redundant coordinates of the column (mode 1), achieving fewer nodes. For an n-mode CSF tensor, the storage overhead corresponds to more than $NNZ + n - 1$ coordinates and typically much less than $n \times NNZ$ coordinates. Works provide further details about managing higher-order tensors with CSF format. Processing metadata at each dimension requires two-step processing (just like processing $ptr$ and $idx$ in CSR), i.e., up to $2n$-step processing for an n-dimensional tensor. So, accelerator designers may opt for the CSF format when processing high-dimensional tensors with high sparsity.\n\mysubsubsection{Huffman coding} It is typically applied for compressing sparse tensors once they are quantized using precision lowering or value sharing. After quantization, values of the reduced range appear with different frequencies and can be compressed further with Huffman encoding . For example, Deep Compression pruned and quantized weights of AlexNet and VGG-16 , achieving 8b/5b indices with a codebook of 256/32 weights for CONV/FC layers. With Huffman encoding, it compressed the models further by 22\% and 36\% (total compression of 35$\times$ and 49$\times$).\n\begin{table}[!t]\n\centering\n\caption{Commonly Used Sparsity Encodings by Accelerators}\n\label{tab:accel-sparse-encodings}\n\begin{tabular}{|c|m{6cm}|}\n\hline\nCOO & \\ \hline\nCOO-1D & \\ \hline\nRLC & \\ \hline\nBitmap & \\ \hline\nCSR & \\ \hline \nCSC & \\ \hline\nCSF & \\ \hline\n\end{tabular}\n\end{table}\n\mysubsubsection{Encodings for tensors with structured sparsity}\nDensity-bounded blocks (Fig.
\ref{fig::sparsity-structures}c) can be encoded in the same way as blocks with unstructured sparsity, e.g., with bitmap , COO-1D , or RLC. So, for the same sparsity and block size, the overhead is similar to that of tile-wise processing of a tensor with unstructured sparsity. It is usually low for small block sizes (e.g., 8$\times$1 , 1$\times$4 in NVIDIA A100 Tensor Core GPU ), since the position of each NZ is indicated by a few bits. Coarse-grain block-sparse tensors (Fig. \ref{fig::sparsity-structures}b) can be encoded at block granularity, which can significantly reduce the metadata size (almost eliminated for dimensional pruning ). Cambricon-S used a bitmap to indicate the presence of each 1$\times$16 dense block with a single bit. Similarly, ERIDANUS used a few bytes to process each 8$\times$8 dense block on systolic arrays. Such encodings require indicating the position of a dense block across rows or columns, plus additional indices for higher dimensions that indicate dense blocks packed per dimension, e.g., in block CSR (Fig. \ref{fig::sparse-data-encodings}-k).\n\mysubsubsection{Other formats} Various encoding formats have been proposed that improve compression or enable efficient access to sparse tensors during execution on CPUs/GPUs (for high-performance and scientific computing). These include compressed sparse blocks (CSB) , libsvm , ELLPACK , diagonal (DIA) , dynamic CSR , delta-coded CSR , and mode-generic and mode-specific formats . Prior works including , SPARSKIT , and surveyed them along with additional formats and discussed their implications.
Libraries that support encoding tensors and performing sparse tensor computations on CPUs or GPUs include the MATLAB tensor toolbox , Intel MKL , SciPy , and cuSPARSE .
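The CSR mechanics described above (row-wise $val$, column-index $idx$, cumulative $ptr$, and the two-step decode) can be sketched in a few lines of Python; the 3$\times$5 matrix below is illustrative, not the figure's example, and the overhead formula is the one stated in the text. (Libraries such as SciPy expose the same three arrays as `data`/`indices`/`indptr` of `csr_matrix`.)

```python
import math

def csr_encode(T):
    """Encode a dense matrix in CSR: val holds NZs row-wise, idx their
    column positions, and ptr[r+1]-ptr[r] gives the NZ count of row r."""
    val, idx, ptr = [], [], [0]
    for row in T:
        for j, v in enumerate(row):
            if v != 0:
                val.append(v)
                idx.append(j)
        ptr.append(len(val))
    return val, idx, ptr

def csr_row(val, idx, ptr, r):
    """Two-step decode: ptr locates the row's NZs, idx their columns."""
    lo, hi = ptr[r], ptr[r + 1]
    return list(zip(idx[lo:hi], val[lo:hi]))

def csr_overhead_bits(M, N, nnz):
    """Metadata cost from the text:
    NNZ*ceil(log2 N) for idx + (M+1)*floor(log2 NNZ + 1) for ptr."""
    return (nnz * math.ceil(math.log2(N))
            + (M + 1) * math.floor(math.log2(nnz) + 1))

T = [[2, 3, 0, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 0, 4]]
val, idx, ptr = csr_encode(T)  # val=[2,3,1,4], idx=[0,1,2,4], ptr=[0,2,3,4]
```

For this matrix, the overhead evaluates to 24 bits: 4 NZs $\times$ 3 bits of column index, plus 4 $ptr$ entries $\times$ 3 bits each.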
EIE , Cambricon-X , and CompAct used group-wise encoding.", "id": "c38dbc5f-be9d-49a8-8c86-f1dc0a62356c", "level": "subsection", "origin_cites_number": 3, "parent_id": "76637357-2ce7-49d4-924f-7e20b7606e08", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Encodings for Compressing Sparse Tensors" ], [ "subsection", "Group-wise Encoding" ] ], "subsections": [], "title": "Group-wise Encoding" }, { "cite_extract_rate": 0.25, "cites": [ 4354, 7634 ], "content": "Accelerator designers often target only static sparsity of weights and encode them off-line, e.g., DNN inference accelerators, including EIE , Cambricon-X , and . However, on-the-fly encoding is required for efficiently processing dynamically sparsified tensors (sparse activations in the inference and tensors in training the models). Therefore, accelerators, such as CompAct , SCNN , NullHop , Cnvlutin , and Sticker employ an on-the-fly encoder. Typically, before encoding a tensor, the data is re-organized as per requirements of the group-wise encoding and dataflow mechanism for processing the subsequent layer. 
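A behavioral sketch of the idea follows; the row-wise partitioning (two rows per PE) and bitmap coding per group are illustrative choices, not any particular accelerator's dataflow, which instead picks the grouping to match its mapping of work onto PEs.

```python
def encode_bitmap(block):
    """Bitmap-encode a flat block: flags mark NZ positions."""
    flags = [1 if v != 0 else 0 for v in block]
    vals = [v for v in block if v != 0]
    return flags, vals

# A weight matrix pre-partitioned row-wise, two rows per PE; each
# group is compressed independently, so a group can be streamed to
# its PE without a central unit re-extracting per-PE work.
W = [[2, 0, 0, 3],
     [0, 1, 0, 0],
     [4, 0, 5, 0],
     [0, 0, 0, 6]]
groups = [encode_bitmap([v for row in W[i:i+2] for v in row])
          for i in range(0, len(W), 2)]
```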
So, on-the-fly encoding is often combined with assembling the outputs from PEs (section \\ref{sec::post-process-encoding} provides further details).", "id": "88d6dbef-ac33-4b34-b43e-9ce569064200", "level": "subsection", "origin_cites_number": 8, "parent_id": "76637357-2ce7-49d4-924f-7e20b7606e08", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Encodings for Compressing Sparse Tensors" ], [ "subsection", "On-the-fly Encoding" ] ], "subsections": [], "title": "On-the-fly Encoding" }, { "cite_extract_rate": 0, "cites": [], "content": "\\noindent \\textit{(i) Tailoring encoding formats for sparsity levels and patterns:} Various layers of deep learning models exhibit a wide range of sparsity (\\textit{inter-layer, intra-tensor sparsity variation}). Moreover, even within a DNN layer, sparsity among tensors can be different (\\textit{intra-layer, inter-tensor sparsity variation}). Accelerators need to support such sparsity variations effectively without incurring significant overheads for storage, encoding, and indexing. When the sparsity range or pattern of multiple tensors is diverse, designers can opt for the separate encoding of different tensors (e.g., ). These different sparsity-encodings can be utilized for off-chip storage, zero-guarding the PEs, or reducing the latency of on-chip extraction to locate intersecting NZs. When different formats are used for performance gains, the accelerator should provide hardware logic for decoding different tensors that are stored in different formats (and support for any on-the-fly encoding). 
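A behavioral sketch of such an on-the-fly encoder consuming an output stream: it emits (run, value) pairs as in RLC, padding with a zero-valued NZ when the run counter saturates. The run-length cap and the dropping of trailing zeros here are illustrative simplifications; real designs fix these per their storage format.

```python
def rlc_encode(stream, max_run=15):
    """On-the-fly RLC: emit (run, value) pairs, where run counts the
    zeros preceding each NZ. A saturated run is flushed as a
    zero-valued NZ, mirroring typical hardware RLC encoders."""
    pairs, run = [], 0
    for v in stream:
        if v == 0 and run < max_run:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    return pairs
```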
Such decoding logic may use existing data extraction mechanisms, but it will require separate/configurable decoding logic for supporting multiple formats.", "id": "55296c8b-346d-4804-8382-bbbe3495d55b", "level": "subsection", "origin_cites_number": 1, "parent_id": "76637357-2ce7-49d4-924f-7e20b7606e08", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Encodings for Compressing Sparse Tensors" ], [ "subsection", "Optimization Opportunities" ] ], "subsections": [], "title": "Optimization Opportunities" }, { "cite_extract_rate": 0.291666666666666, "cites": [ 7169, 4335, 4347, 4354, 7634, 7805, 4356 ], "content": "\\label{sec::NZ-data-extraction}\n\\begin{table}[!t]\n\\centering\n\\caption{Classification of NZ Data Extraction Techniques}\n\\label{tab:data-extraction}\n\\begin{tabular}{|c|c|c|m{3cm}|}\n\\hline\n\\makecell{Target \\\\ Sparsity} & \\makecell{PE Arch-\\\\itecture} & \\makecell{Functional Unit \\\\ Operation} & \\makecell{Accelerators} \\\\ \\hline\n\\multirow{3}{*}{\\makecell{One \\\\ Tensor}} \n& Scalar & MAC & \\\\ \\cline{2-4}\n& \\multirow{2}{*}{\\makecell{SIMD/\\\\Vector}} & Sc-Vec-Mul & \\\\ \\cline{3-4} \n& & Vec-Vec-Mul & \\\\ \\hline\n\\multirow{3}{*}{\\makecell{Both \\\\ Tensors}} \n& Scalar & MAC & \\\\ \\cline{2-4} \n& \\multirow{2}{*}{\\makecell{SIMD/\\\\Vector}} & Sc-Vec-Mul & \\\\ \\cline{3-4} \n& & Vec-Vec-Mul & \\\\ \\hline\n\\end{tabular}\n\\begin{tabular}{ccc}\n& & \\\\\n\\end{tabular}\n\\begin{tabular}{|c|m{5.6cm}|}\n\\hline\n\\makecell{Location of \\\\ Extraction Units} & \\makecell{Accelerators} \\\\ \\hline\n\\makecell{Centralized/\\\\Shared} & \\\\ \\hline\nIn-PE & \\\\ \\hline\n\\end{tabular}\n\\end{table}\nTensors are typically stored in the compressed format in the accelerator's memory. Therefore, locations of NZs that need to be processed are determined from the metadata. 
Once a matching pair is extracted (elements of two tensors that need to be added or multiplied), a PE can proceed for computations. Identifying effective NZs is the primary step towards eliminating ineffectual computations due to the sparsity of weights and/or activations. \nThis section describes different data extraction mechanisms (Table \\ref{tab:data-extraction} provides a taxonomy), their management in PEs or centrally, and their trade-offs. Then, it discusses further acceleration opportunities to exploit various sparsity levels.", "id": "2c2b4338-ab49-49e8-95e3-2e1350db0598", "level": "section", "origin_cites_number": 24, "parent_id": "6220d89f-a486-420a-914e-59957895a313", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Extraction of Matching Data for Computations on Non-Zeros" ] ], "subsections": [ "20ccd576-d8ff-4c83-bbfa-db2fd8b3f424", "6c46d7ee-8f0a-4a7a-aac7-d1bb719f64e7", "c3855c4e-a3f8-4153-9abf-839df6cb02ad" ], "title": "Extraction of Matching Data for Computations on Non-Zeros" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 4354, 7634, 4361, 4360, 4335, 7169 ], "content": "\\label{sec::data-extraction-HW}\n\\insight{A data extraction mechanism needs to feed functional units of PEs every cycle.} So, based on their processing of scalars or vectors of NZs, Table \\ref{tab:data-extraction} categorizes extraction mechanisms for (i) MAC operation on scalars, (ii) scalar-vector multiplication, and (iii) vector-vector multiplication.\n\\begin{figure}[!t]\n\\centering\n\\centerline{\\includegraphics[width=\\linewidth]{figures/NZ-detection-1-sparse-tensor-SIMDPE-svm}}\n\\caption{Data extraction in subunits of Cnvlutin PE (Figure adopted from ).}\n\\label{fig::NZ-detection-cnvlutin}\n\\end{figure}\n\\mysubsubsection{Indexing dense tensors by indices of NZs of a sparse tensor}\nDepending on sparsity, only one tensor may be treated as sparse and 
compressed (e.g., activations for Cnvlutin or weights for Cambricon-X and NVIDIA A100 ). So, the position of an NZ can be used for indexing the other (i.e., dense) tensor to extract the corresponding value. \n\\textit{MAC:} Consider the activation lane and filter lane 0 of subunit 0 in Fig. \\ref{fig::NZ-detection-cnvlutin}, which can be visualized as processing on a scalar PE. For an NZ streaming from the activation lane, matching weight can be looked up and provided to the multiplier or MAC unit. For COO-1D encoded blocks, absolute positions of NZs can be obtained directly from metadata. Otherwise, absolute positions of NZs need to be computed explicitly by decoding metadata (e.g., bitmap or RLC) through simple combinational logic consisting of AND gates, multiplexers, and adders (e.g., in and ). \n\\textit{Sc-Vec Mul:} For SIMD processing, multiple arrays are indexed with the position of an NZ. Fig. \\ref{fig::NZ-detection-cnvlutin} shows such mechanism used in Cnvlutin PEs . Each of 16 subunits in Cnvlutin PE featured an activation lane (streamed an input channel vector), 16 multipliers, and 16 filter lanes. A common NZ activation was fetched from the activation lane, and its position was used for looking up in all 16 filter lanes to obtain corresponding weights for multiplication. \n\\begin{figure}[!t]\n\\centering\n\\centerline{\\includegraphics[width=\\linewidth]{figures/NZ-detection-1-sparse-tensor-SIMDPE-vvm}}\n\\caption{Data extraction via central indexing module in Cambricon-X accelerator. The indexing module decodes weights encoded in step-indexed COO-1D format to obtain the absolute positions of NZs. Then, it extracts the activations via a parallel look-up, which are later communicated to a PE via fat-tree NoC for a vector-vector multiplication. 
(Figure adopted from .)}\n\\label{fig::NZ-detection-cambriconX}\n\\end{figure}\n\\textit{Vec-Vec Mul:} PEs of some accelerators spatially process vectors at every cycle (e.g., with 16 multipliers and an adder tree in Cambricon-X). As Fig. \\ref{fig::NZ-detection-cambriconX} illustrates, based on positions of NZs of a vector, a combinational logic with multiplexers can select matching data elements to feed the arithmetic units (e.g., in ). \\insight{An associated challenge is overheads of parallel look-up}. To exploit high sparsity, larger multiplexers need to be used for indexing the dense tensor, as positions of scattered NZs are likely distant. With the search length set as 256 (supports 93.75\\% sparsity for fetching 16 NZ elements), a central indexing module in Cambricon-X occupied about 31\\% and 35\\% of total on-chip area and power, respectively (exceeded total power of all 16 PEs) . \n\\mysubsubsection{Compare metadata of sparse tensors for extracting matching pairs of NZs}\nFor effectual computations over multiple compressed tensors, the extraction logic determines pairs of NZs (intersections) by comparing indices either from metadata streams or in multi-stage indexing.\n\\textit{MAC:} Circuitry for extracting NZ scalars can consist of one or more comparators (or AND gates for comparing bitmaps) and an additional indexing logic (e.g., in ZENA and SparTen ). The comparators match positions of NZs, and the indexing logic uses their outputs to extract the leading pair. Due to the diverse sparsity of tensors, positions of NZs may not match during comparison. Therefore, the detection logic uses several comparators to search within a large window, which usually can provide at least one pair at every cycle. Priority encoders provide the leading $n$-pairs for feeding $n$ computational units (n=1 for scalar PEs). The data extraction unit can use skip mechanisms (e.g., in ExTensor ) to quickly navigate through the lanes. 
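The comparator-based matching described above amounts to intersecting two sorted position streams; a sequential behavioral sketch follows (hardware instead compares a window of positions in parallel and uses priority encoders to pick the leading pairs).

```python
def extract_pairs(idx_a, val_a, idx_b, val_b):
    """Behavioral model of index matching: walk two sorted NZ-position
    streams and emit (value_a, value_b) pairs only at intersecting
    positions; unmatched positions are skipped (ineffectual work)."""
    pairs, i, j = [], 0, 0
    while i < len(idx_a) and j < len(idx_b):
        if idx_a[i] == idx_b[j]:
            pairs.append((val_a[i], val_b[j]))
            i += 1
            j += 1
        elif idx_a[i] < idx_b[j]:
            i += 1  # no matching NZ in the other tensor
        else:
            j += 1
    return pairs
```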
\nAlternatively, multi-stage indexing logic is used for extracting the pair. The first stage obtains the position of an NZ from one tensor for indexing the other tensor. A later stage checks whether there is a corresponding NZ in the other tensor and extracts it upon matching the positions. For example, in EIE , each PE loads an NZ activation from a queue; when it does not have any matching weights, it fetches the next activation from the queue in the next cycle. Depending on the sparsity level and pattern, the indexing-based design occasionally may not find the matching data, wasting execution cycles, i.e., functional units in the pipeline are not utilized. \n\textit{Sc-Vec Mul:} PEs in EyerissV2 use multi-stage extraction. Each SIMD PE fetches a CSC-coded activation and its position, and checks positions of NZ weights. Upon a match, it forwards the activation and weights to two MAC units. \n\begin{figure}[!t]\n\centering\n\centerline{\includegraphics[width=\linewidth]{figures/NZ-detection-2-sparse-tensor-SIMDPE-vvm-SNAP}}\n\caption{Associative index matching in SNAP (Figure adopted from ).}\n\label{fig::NZ-detection-SNAP}\n\end{figure}\n\textit{Vec-Vec Mul:} The data extraction logic that feeds multiple arithmetic units of a vector PE requires multiple comparators followed by priority encoders or multiplexers. For example, in the SNAP architecture , an associative index matching module (AIM, Fig. \ref{fig::NZ-detection-SNAP}) determines the positions of NZs in case of valid matches. Each PE of a row is interfaced with a shared AIM. Using comparison outcomes from the AIM, a sequencer in each PE determines leading pairs of matching data, which are then fed to three multipliers within the PE. Cambricon-S uses similar extraction logic, but its comparator array is just an ANDing of bits due to bitmap encoding. \n\mysubsubsection{Eliminating extraction of intersecting NZs} Some accelerators do not require extracting unstructured NZs.
\n\\begin{figure}[b]\n\\centering\n\\centerline{\\includegraphics[width=\\linewidth]{figures/orchestration-structured-sparsity}}\n\\caption{Computation of locally dense regions in ERIDANUS (Figure adopted from ). (a) Matrix multiplication with block-sparse weights. (b) Sub-matrices for processing on a 2$\\times$2 systolic array. (c) Multiplication of streaming blocks (NZs) with stationary data.}\n\\label{fig::orchestration-structured-sparsity}\n\\end{figure}\n\\textbf{Orchestrating structured computations:} A few techniques targeted high sparsity of single tensor (DNN weights). With data pruning or transformations, they achieved coarse-grain sparsity so that each PE can process a dense region of NZs. ERIDANUS proposed a pruning algorithm to cluster the weights (Fig. \\ref{fig::orchestration-structured-sparsity}a). Blocks of NZ weights are streamed to PEs of systolic arrays for conventional processing (Fig. \\ref{fig::orchestration-structured-sparsity}c). Corresponding activations are kept stationary. Partial products computed by each row of PEs are added on a separate adder tree. When block width for structured pruning can be set as the height/width of the systolic array, dot products can be accumulated linearly over the systolic array itself. Thus, \\insight{structured sparsity allows executing denser blocks conventionally on accelerators, while requiring additional support to index and communicate the blocks.} Adaptive tiling used a column-combining approach. For a sparse GEMM, NZ weights were statically combined such that each column of the systolic array could process multiple columns of input activations. Thus, it obviated the run-time data extraction and reduced total invocations of the systolic array by 2$\\times$--3$\\times$ for processing point-wise CONVs of MobileNet. 
\nCirCNN and C-LSTM proposed executing DNN operators as FFT (Fast Fourier Transform) on smaller block-circulant matrices.\n\\textbf{Coordinate computation unit:} SCNN and SqueezeFlow perform unit-strided convolutions as a Cartesian product where all elements of two blocks of tensors should be multiplied together. Due to all-to-all multiplication, no special support is required for extracting matching pairs of NZs. However, index computation is still required to determine which partial-sums should be accumulated with partial products. This calculation is performed in a ``coordinate computation unit'' that processes metadata (indices of NZs) and determines indices of outputs. These approaches require conflict detection in hardware since it can't be pre-determined which accumulators would be accessed in any cycle. Since coordinate computation unit facilitates direct processing on compressed tensors, it may also be used for computing block-level indices for processing a \\emph{coarse-grain block-sparse} tensor.", "id": "20ccd576-d8ff-4c83-bbfa-db2fd8b3f424", "level": "subsection", "origin_cites_number": 18, "parent_id": "2c2b4338-ab49-49e8-95e3-2e1350db0598", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Extraction of Matching Data for Computations on Non-Zeros" ], [ "subsection", "Non-Zero Detection and Extraction Mechanisms" ] ], "subsections": [], "title": "Non-Zero Detection and Extraction Mechanisms" }, { "cite_extract_rate": 0.2, "cites": [ 4335 ], "content": "\\mysubsubsection{Centralized} The data extraction unit can be either centralized (and shared among PEs) or within pipelines of PEs. Advantages of central mechanisms are: (i) PEs can be directly provided effective NZs for useful computations . It can also be used as a pre-processing unit for a PE-array that processes structured computations, e.g., systolic arrays or near-data accelerators. 
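A behavioral sketch of a coordinate computation unit for such a Cartesian-product dataflow follows. The output-index arithmetic assumes a unit-stride convolution without padding, which is a simplification of the designs above; the collision note in the code reflects why conflict detection is needed in hardware.

```python
def coordinate_compute(act_coords, wgt_coords):
    """Sketch of a coordinate computation unit: every NZ activation
    multiplies every NZ weight (Cartesian product), and this unit
    derives which output each partial product accumulates into.
    Assumes unit stride and no padding (illustrative)."""
    targets = []
    for (ax, ay) in act_coords:
        for (wx, wy) in wgt_coords:
            # Output coordinate for this partial product; distinct
            # products may target the same output, so accumulator
            # accesses need run-time arbitration.
            targets.append((ax - wx, ay - wy))
    return targets
```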
(ii) Centralized extraction in some architectures (e.g., Cambricon-X ) duplicates hardware for concurrent extractions for PEs. However, the module can be time-shared by multiple PEs (e.g., in SNAP ), which can reduce area and power. In fact, by leveraging structured $W$-sparsity, the module in Cambricon-S \emph{shares} extracted indices among \emph{all} PEs. (iii) Centralized logic extracts work for multiple PEs, and often it is coupled with a controller that allocates data to PEs. So, it can enable \emph{run-time load balancing}. However, a major challenge is to maintain spatial data reuse. This is because the centralized unit mostly extracts data on a per-PE basis for communication to a unique PE. So, the common data for multiple PEs cannot be multicast. SNAP overcomes this limitation by sharing a module with a row of PEs and multicasting data to PEs. The multicast occurs first, followed by PEs communicating their metadata to the extraction unit. Then, extracted indices are streamed back to a PE, which uses them to obtain data from its local RF for computations.\n\mysubsubsection{In-PE} PEs of several accelerators, such as Cnvlutin , ZENA , and EyerissV2 , extract appropriate data. In-PE extraction allows a controller to multicast or broadcast tensor elements for spatial reuse; in-PE logic then extracts the matching data. However, the challenges are: (i) in-PE logic may incur ineffectual cycles for extraction that cannot be hidden; (ii) employing inter-PE load balancing in hardware may be infeasible or costlier, as the actual work carried out by different PEs is unknown while offloading compressed tensors to PEs (until extraction in the PE datapath).
\n\\begin{figure*}[!t]\n\\centering\n\\centerline{\\includegraphics[width=\\linewidth]{figures/analysis-data-reuse}}\n\\caption{Data reuse opportunities for executing different CNN layers (dense tensors) on hardware accelerators (Figure inspired by ).} \n\\label{fig::analysis-data-reuse}\n\\end{figure*}", "id": "6c46d7ee-8f0a-4a7a-aac7-d1bb719f64e7", "level": "subsection", "origin_cites_number": 5, "parent_id": "2c2b4338-ab49-49e8-95e3-2e1350db0598", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Extraction of Matching Data for Computations on Non-Zeros" ], [ "subsection", "Centralized vs. Distributed Management" ] ], "subsections": [], "title": "Centralized vs. Distributed Management" }, { "cite_extract_rate": 0, "cites": [], "content": "\\textit{(i) Sparsity-adaptive low-cost data extraction mechanisms:} Encodings of sparse tensors are often selected with a focus on storage benefits. However, the computational overhead and hardware cost for encoding and decoding tensors should be also reduced, since they affect the performance and energy consumption. When the data extraction cannot feed $n$ pairs of NZs to $n$ computational units of a PE at every cycle, achieved speedup can be lower from the peak. Sustaining the acceleration across various sparsity of tensors can be challenging, as different extraction schemes may be cost-effective for only a certain sparsity range and patterns. For example, for similar sparsity, extraction logic with a few comparators may easily locate a pair of NZs. However, an indexing-based mechanism may be more effective, when one tensor is highly sparse and another is dense. 
Moreover, when positions of NZs in the two tensors are considerably distant (e.g., for diverse sparsity levels or for hyper-sparse tensors), the extraction logic needs to use several comparators or multiplexers for parallel lookup, so that it can extract at least one pair to feed each computational unit. Therefore, the extraction module needs to be \emph{configurable} or consist of (and select among) multiple mechanisms so that it can exploit a variety of sparsity at a modest hardware cost. \nFor the latter, it can dynamically use partial features for desired sparsity levels/patterns (power-gated otherwise).\n\textit{(ii) Tightening integration with load balancing mechanisms:} A central data extraction module can enable dynamic load balancing of work among PEs (e.g., data-driven dynamic work dispatch in GraphDynS ). As section \ref{sec::load-balance} discusses, the inter-PE imbalance can be severe due to the irregular distribution of NZs in tensor blocks that are allocated to PEs. Its mitigation by structuring the data may not always be possible (e.g., for activations/weights of some models or applications beyond deep learning). Consequently, accelerators may suffer from poor utilization of PEs and low speedup. Although some accelerators used hardware modules for dynamic balancing, further efficiency may be achieved by enhancing the centralized extraction module with additional low-cost logic.
This is because it already keeps track of the data provided to PEs, from which the number of operations performed by different PEs can be inferred.
\n\\begin{figure*}[!t]\n\\centering\n\\centerline{\\includegraphics[width=\\linewidth]{figures/analysis-data-reuse-sparse}}\n\\caption{Impact of sparsity on data reuse opportunities for accelerating CNNs and NLP models.}\n\\label{fig::analysis-data-reuse-sparse}\n\\end{figure*}", "id": "0c7ea9d1-6b8d-48fe-91f9-fa72adc10e69", "level": "section", "origin_cites_number": 7, "parent_id": "6220d89f-a486-420a-914e-59957895a313", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Memory Management of Compressed Tensors" ] ], "subsections": [ "ecce2096-4015-48ff-97f9-355f9c2a3463", "8f654738-ca19-44ff-ae76-2a74f2ebe86c", "0afeddc8-8d83-41ab-9493-366288a6e53d", "5dbe3587-d8d2-4242-b7d0-700e4d7a42ae", "ccc7ce5f-8a10-49d7-adaf-8358c7cd8a5c", "b1a6e5c9-14b7-4609-a56e-1a10c9f815be" ], "title": "Memory Management of Compressed Tensors" }, { "cite_extract_rate": 0.625, "cites": [ 784, 4362, 7633, 7, 38 ], "content": "\\label{sec::data-reuse}\n\\mysubsubsection{Reuse characteristics} Depending on the functionality of layers, there can be significant reuse of tensors. Figures \\ref{fig::analysis-data-reuse} and \\ref{fig::analysis-data-reuse-sparse} depict reuse opportunities for different layers (early CONV layers, later CONV layers, MLPs, DW-CONVs, PW-CONVs, expand or reduce layers, attention mechanism). For each tensor, the data reuse is calculated as the total number of MACs per data element. For better visualization, reuse factors and layers are plotted on a logarithmic scale. \n\\textit{Input activations:} Reuse of input activations increases with going deeper in CNNs since the number of filters increases significantly. It is also high for 'expansion' layers in bottleneck blocks (Fig. \\ref{fig::analysis-data-reuse-sparse}). DW-CONVs are an exception and present very low reuse as there is only one filter. 'Squeeze' or 'reduce' layers present moderate reuse for dense tensors. 
Reuse in FC layers or MLPs (e.g., in encoder/decoder layers of Transformers ) depends on the sizes of weight matrices (i.e., sizes of output tensors). \n\textit{Weights:} Since 2D feature maps in CNNs are usually much larger than 2D weights, weight reuse can be higher by an order of magnitude. Going deeper in CNNs, feature maps shrink spatially, which lowers the reuse. There is no weight reuse for MLPs, but increasing the batch size linearly improves the weight reuse. Video processing applications use 3D CNNs (e.g., c3d ), which can further increase the reuse opportunities for input activations and weights due to additional processing steps on consecutive frames. For NLP models such as Transformer and BERT , Fig. \ref{fig::analysis-data-reuse-sparse} illustrates weight reuse for executing a sequence of 24 and 107 tokens, respectively. $MatMul$s in the attention-based calculation are shown for a single head.\n\textit{Partial summations:} The number of input channels increases deeper in CNNs. Similarly, 'reduction' layers in bottleneck blocks involve more input channels. Both improve the reuse of partial summations. MLPs also usually provide high reuse due to larger input vectors. DW-CONVs show very low reuse because partial summations are not accumulated across input channels. \n\mysubsubsection{Impact of sparsity on reuse} Increase in sparsity can lead to lower reuse. To determine the impact of sparsity, we considered evaluations by Han et al. for pruned AlexNet and VGG-16 models. For recent DNNs like MobileNetV2 or BERT models, we considered sparse models as listed in Table \ref{tab:sparsity-dnn-models}. Then, we calculated the reuse as NZ MACs per NZ of a tensor. Fig. \ref{fig::analysis-data-reuse-sparse} plots the reuse opportunities for both dense and sparse tensors of CNNs and NLP models.
Since execution in encoder/decoder modules of NLP models is repetitive, only the unique layers of a single module are shown (sparsity averaged across all encoder/decoder modules). \nThe figure shows that for sparse models, reuse characteristics are preserved, but the reuse factor decreases for almost all layers and tensors, as compared to processing dense tensors. Primarily, this is due to the reduced number of effectual MACs. For example, for MLPs without batching, weight reuse can drop below one. It means that even if a weight matrix consists of NZs, some of them are never used due to the unavailability of matching NZs in input activations. As an exception, reuse of weights remains the same when activation sparsity is absent (e.g., EfficientNetB0 , BERT ). Similarly, with dense weights, low or moderate reuse of activations remains the same for DW-CONV or 'excite' layers, respectively.\nThe reuse of partial summations also decreases since effectual MACs per partial summation decrease with sparsity. Note that each output activation element still needs to be populated or assembled before ReLU/encoding. Due to sparsity and fewer input channels, the reuse is low or moderate in 'expansion' layers. Similarly, small matrices in processing individual attention heads exhibit low reuse. The reuse remains high for 'reduce' layers in CNNs or query and value processing and FC layers in NLP models. To sum up, although sparsity reduces the reuse of tensors, there can be high data reuse for many layers (up to $1E+04$), which should be exploited for efficient acceleration.\n\mysubsubsection{Temporally reusing data through shared on-chip memory} Like CPUs, accelerators have memory hierarchies because applications have different working set sizes. Data reuse can be leveraged \emph{temporally} (repeatedly accessing data from memory without accessing lower-level memory) and \emph{spatially} (providing the same data to multiple PEs without repeatedly accessing memory).
Once high temporal reuse has been exploited, the highest energy is spent in accessing upper-level buffers .
\n\mysubsubsection{Management of tiled data in double-buffered memory} On-chip buffers are typically not large enough to accommodate all tensors. Therefore, loops are tiled for reusing some tensors from buffers, while repeatedly accessing other tensors from the off-chip memory . Since scratchpads are non-coherent and their management is software-directed, data is transferred by direct memory accesses (DMA) . PEs are kept engaged in useful computations by interleaving computations with memory accesses. Such an objective is usually achieved by double-buffering (aka ping-pong buffers) . Loop optimization techniques, like loop tiling and ordering, can determine the sizes of tensor blocks to be managed in memories and the sequence of memory accesses for high reuse and reduced data transfers .\n\mysubsubsection{Asynchronous communication} Some accelerators hide the latency of communicating data to the shared/local memory with an asynchronous mechanism that refills the memory after some data has been consumed (e.g., in Cambricon-X ). For such execution, PEs and a DMA controller may simultaneously produce/consume data either through different banks or at the granularity of small blocks in the same bank.
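The double-buffering and asynchronous refill described above can be sketched in a few lines; the `dma_load` callback and the tile granularity below are illustrative stand-ins for a hardware DMA engine, and the prefetch overlap is modeled sequentially rather than truly concurrently:

```python
def process_layer(tiles, dma_load, compute):
    """Double-buffered (ping-pong) tile processing: while the PE array
    computes on one buffer, the DMA refills the other, so transfer
    latency hides behind computation. Real hardware runs the prefetch
    and the compute concurrently; this sketch interleaves them."""
    bufs = [None, None]
    bufs[0] = dma_load(tiles[0])                  # prime the first buffer
    results = []
    for i in range(len(tiles)):
        if i + 1 < len(tiles):                    # prefetch the next tile
            bufs[(i + 1) % 2] = dma_load(tiles[i + 1])
        results.append(compute(bufs[i % 2]))      # consume the ready buffer
    return results

# Toy usage: "loading" copies a tile; "computing" sums its elements.
out = process_layer([[1, 2], [3, 4], [5, 6]], dma_load=list, compute=sum)
```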
Similarly, when accessing shared memories via configurable communication networks , PEs can execute in a dataflow fashion and request partial refilling of their memory with new data. Such \insight{mechanisms for asynchronous communication and computations can alleviate work imbalance among PEs} caused by leveraging unstructured sparsity.\n\mysubsubsection{Impact of sparsity on the latency of memory accesses and speedup} For memory-bounded execution (e.g., MLPs), even with effective prefetching, the miss penalty may be significant. It restricts accelerators from achieving peak performance . When tensors are sparse, the amount of data that needs to be transferred from off-chip reduces significantly, leading to substantial performance gains. For example, Cambricon-S reported up to 59.6$\times$ speedup of FC layers for hyper-sparse weights. However, higher $IA$-sparsity did not provide such gains (speedup saturated at about 14$\times$) since the latency of accessing weights dominated total execution time. For processing high sparsity (e.g., 90\%+) and low reuse, it becomes challenging to keep functional units engaged in effectual computations.
This is because, with \emph{low} arithmetic intensity, required data \emph{may not be prefetched} at available bandwidth.
\label{sec::memory-bank-management}\n\mysubsubsection{Concurrent accesses to memory banks} While a single-bank memory can be easier to manage, it is infeasible to provide multiple ports for the PE-array with just one bank . Moreover, a multi-port unified memory consumes very high power and incurs longer latency . \nSo, on-chip memories are partitioned into smaller banks . For mapping a layer onto the accelerator, each bank is usually allocated to only one tensor (e.g., in EyerissV2 ). Banked buffers provide multiple read and write ports, allowing simultaneous accesses to different tensors stored in different banks . \nSometimes, a data layout reorganization is required before loading into memory banks. Such a transformation is done after loading data from DRAM or before writing outputs to DRAM, which consumes additional execution time and energy. For compressed tensors, such a transformation can be done along with the data encoding at alleviated overheads.\n\mysubsubsection{Arbitration and conflict management} Depending on the indexing logic and interconnect between memory and PEs, managing application data may require additional compilation support or hardware logic for data arbitration and conflict management .
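A toy model makes such bank conflicts concrete: irregular output offsets are hashed into memory banks, and collisions per write-back cycle are counted; provisioning more banks reduces them, as some accelerators do. The offsets and the modulo hash below are arbitrary choices for illustration:

```python
def bank_conflicts(out_offsets, n_banks):
    """Count conflicts when a batch of output elements (identified by
    arbitrary tensor offsets) is written back in one cycle: each extra
    output mapping to an already-claimed bank stalls the write-back."""
    claimed = {}
    for off in out_offsets:
        bank = off % n_banks               # simple modulo hashing into banks
        claimed[bank] = claimed.get(bank, 0) + 1
    return sum(c - 1 for c in claimed.values() if c > 1)

# Eight irregular outputs of a sparse computation, N = 8 vs 2N = 16 banks.
outs = [3, 11, 19, 6, 14, 27, 35, 8]
c_n = bank_conflicts(outs, 8)              # conflicts with N banks
c_2n = bank_conflicts(outs, 16)            # fewer conflicts with 2N banks
```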
For regular memory accesses (e.g., dense or block-sparse data), allocation and accesses to banks can be determined for mappings of layers. However, computations on unstructured sparse data can lead to accessing \emph{arbitrary} banks and require special support. E.g., outputs from PEs may need to be written to different banks. Moreover, accelerators contain accumulator-buffers , where PEs or their functional units are connected with memory banks via a crossbar. The crossbar arbitrates write-back of outputs to the appropriate bank . Since these partial outputs can correspond to non-contiguous elements in an output tensor, bank conflicts are possible during arbitration, i.e., multiple outputs need to be simultaneously handled by the same bank . To obviate conflicts, the buffer contains more banks (e.g., 2$\times$N banks for storing outputs from $N$ sources in SCNN ). It alleviates collisions in hashing irregular outputs into different memory banks. Consequently, the crossbar may require higher bandwidth and significant on-chip area (e.g., 21\% for a 16$\times$32 crossbar in each SCNN's PE).
\mysubsubsection{Reusing intermediate tensors from large on-chip memory} An intermediate feature map in a DNN is the output of a layer that serves as input to later layers. It can be kept stationary and reused from on-chip memory to reduce off-chip traffic.
Such reuse is amplified when input is the same for multiple layers due to residual connections or high cardinality (e.g., ResNeXt ). Leveraging it can be important for latency-bounded real-time applications. \insight{Sparsity-encoding and quantization make such reuse opportunities significantly more feasible} due to reduced storage requirements. Accelerators with large memories (hundreds of KBs) such as SCNN and Cnvlutin , can leverage such reuse. \n\mysubsubsection{Overcoming static bank assignment} Many accelerators process models layer-by-layer and do not leverage cross-layer reuse, i.e., write outputs for layer $L$ in DRAM and load them back later as inputs for layer $L+1$. This is more prevalent among accelerators with small memories. Moreover, bank assignment for each tensor is often fixed at design time , which enforces write-back of outputs and reloading them later in other banks as inputs while processing subsequent layers. Thus, in both cases, output activations are not reused on-chip, causing excessive off-chip memory traffic. To address this problem and exploit cross-layer reuse, shortcut-mining used a flexible architecture with decoupled physical-logical buffers. \nFor pre-known sparsity, prior techniques for statically determining the data allocation to memory banks may work well by estimating sizes of encoded tensors. However, for dynamic sparsity, conservative estimations may lead to inefficient utilization of banks, and efficient banking for non-conflicting accesses can also be challenging.
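Whether an intermediate tensor can be retained on-chip reduces to a size estimate of its encoded form. The sketch below assumes a simple bitmap encoding (one flag bit per element plus packed NZ values); the feature-map shape, density, and buffer size are illustrative:

```python
def bitmap_encoded_bits(n_elements, density, bits_per_value=8):
    """Bits needed for a bitmap-encoded tensor: one flag bit per
    element plus the packed non-zero values."""
    nnz = int(n_elements * density)
    return n_elements + nnz * bits_per_value

# 56x56x128 intermediate feature map with 8b values and 40% density.
n = 56 * 56 * 128
dense_bits = n * 8                              # ~392 kB if kept dense
encoded_bits = bitmap_encoded_bits(n, 0.40)     # ~206 kB once encoded
fits_256kb = encoded_bits <= 256 * 1024 * 8     # the dense form would not fit
```

Such an estimate is exact for pre-known sparsity; under dynamic sparsity it must be conservative, which is precisely what can leave banks underutilized.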
\n\\begin{figure}[!t]\n\\centering\n\\centerline{\\includegraphics[width=\\linewidth]{figures/fused-layer-CNNs}}\n\\caption{Fusing the execution of layers can significantly reuse intermediate activations (Figure adopted from ).}\n\\label{fig::fused-layer-CNNs}\n\\end{figure}\n\\mysubsubsection{Fused-layer execution} Fused-layer CNNs leveraged cross-layer reuse by processing a small tile of activations such that outputs for few layers can be computed alongside while retaining the corresponding data in the on-chip memory. Fig. \\ref{fig::fused-layer-CNNs} shows an example for processing an input tile of 5$\\times$5 activations (C\\_L input channels) for layer $L$ and finally obtaining 1$\\times$1 output activations (M\\_L+1 output channels) for layer $L+1$. Apart from reusing intermediate outputs for obtaining the output tile, corresponding tiles of intermediate activations and filters are maintained in the memory and reused partially for processing the next tiles (striding execution in the spatial direction). Alwani et al. reported reducing off-chip transfers of input feature maps by 28\\% for the first two layers of AlexNet and by 95\\% for the first five layers of VGG-19. Since the cascading by storing all the filters and input channels (dense tensors) requires high memory, applied it to only early layers. However, encoded sparse tensors and further tiling across filters/channels allow fitting tensors for multiple layers in the small memory, making such reuse opportunities more feasible. The tile size and number of layers that can be fused are bounded by memory capacity. So, fusion parameters depend on the actual/anticipated sparsity levels. 
For efficient executions, fusion parameters need to be explored systematically with sparsity-aware dataflows.", "id": "5dbe3587-d8d2-4242-b7d0-700e4d7a42ae", "level": "subsection", "origin_cites_number": 6, "parent_id": "0c7ea9d1-6b8d-48fe-91f9-fa72adc10e69", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Memory Management of Compressed Tensors" ], [ "subsection", "Reusing Intermediate Tensors" ] ], "subsections": [], "title": "Reusing Intermediate Tensors" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 7634 ], "content": "\\mysubsubsection{Look-ahead snoozing} Depending on the sparsity, encoding of tensors, and mapping of the layer, several banks can be unused or inactive for certain time intervals. Accelerators achieve further energy efficiency by power gating unused or inactive banks. For example, look-ahead snoozing in CompAct targeted reducing the leakage power of large on-chip SRAMs. Each bank of its activation SRAM can be power-gated. Banks unutilized during the execution of a layer were put in the deep sleep mode (maximal savings in leakage power, while not preserving any data in unused banks). Further, the period of active cycles for each bank was determined based on the data movement schedule. Then, inactive banks were snoozed during execution (i.e., connecting to the data retention voltage for consuming lower leakage power). \n\\mysubsubsection{Skipping memory hierarchy} Some layers do not provide significant reuse. Data reuse is also lowered due to sparsity and architectural choice for extracting or communicating NZs. 
Therefore, a few accelerators (e.g., EIE and Cambricon-X ) obviate storing non-reusable data in the shared memory and directly feed it to appropriate PEs (weights for MLPs).", "id": "ccc7ce5f-8a10-49d7-adaf-8358c7cd8a5c", "level": "subsection", "origin_cites_number": 3, "parent_id": "0c7ea9d1-6b8d-48fe-91f9-fa72adc10e69", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Memory Management of Compressed Tensors" ], [ "subsection", "Techniques for Further Energy-Efficiency" ] ], "subsections": [], "title": "Techniques for Further Energy-Efficiency" }, { "cite_extract_rate": 1, "cites": [ 4335 ], "content": "\\textit{(i) Managing both data and metadata in unified memory:} Accelerators often contain separate buffers for metadata (positions of NZs, indirection tables for shared values). Although such designs are easy to manage for processing tensors of some models encoded in a specific format, they may not work well across different levels of sparsity and value similarity, as storage requirements vary significantly. \nSo, designers can explore unified memory architectures for managing both data and metadata (including memory partitioning and bank management) and their trade-offs. 
It can also be leveraged to tailor efficient designs for programming FPGAs.\n\begin{figure}[!t]\n\centering\n\centerline{\includegraphics[width=\linewidth]{figures/common-NoC-designs}}\n\caption{Common NoC designs (Figure adopted from ).}\n\label{fig::NoC-designs}\n\end{figure}
\label{sec::comm-networks}\nNetwork-on-chip (NoC) is required to efficiently distribute data to PEs, exchange data between PEs (for reducing partial outputs), and collect distinct outputs back from PEs. To process data-intensive ML models, accelerators employ multiple high-bandwidth interconnects for simultaneous communication of different tensors between PEs and buffers. First, this section describes NoCs for the distribution of operands, which vary in terms of bandwidth and spatial reuse of the data. With an efficient NoC design, PEs can be kept engaged in processing data from input FIFOs or local memory, which is interleaved with the communication of the next set of data via the NoC. This section also discusses \emph{configurable NoC} designs that can support various bandwidth requirements and spatial reuse opportunities due to variations in sparsity and tensor shapes. In processing sparse tensors, \insight{unstructured reduction of partial outputs among PEs can be challenging}. This section describes different mechanisms for \emph{accumulating} the outputs \emph{temporally} or \emph{spatially} at PE level and PE-array level.
It also discusses configurable mechanisms for \\emph{asymmetric} accumulation of \\emph{variable-sized} partial outputs.", "id": "4b82dfad-305c-4c60-9b38-8189a27dd3b9", "level": "section", "origin_cites_number": 0, "parent_id": "6220d89f-a486-420a-914e-59957895a313", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Interconnects for Distributing Non-Zeros and Reducing Partial Outputs" ] ], "subsections": [ "0ed84c84-7c0f-440f-a66c-b4d0e03b9b4c", "79d994f1-db90-4ba7-9daf-de3a3cd3da3c", "9fdf5ca0-0307-4f7d-a9ba-a9873050ebdc" ], "title": "Interconnects for Distributing Non-Zeros and Reducing Partial Outputs" }, { "cite_extract_rate": 0.25, "cites": [ 7169, 4363, 7005, 4335, 4347, 4354, 7634, 7805, 4356 ], "content": "\\begin{table}[!t]\n\\centering\n\\caption{NoC Designs for Distribution of Sparse Tensors}\n\\label{tab:NoC-data-distribution}\n\\addtolength{\\tabcolsep}{-3pt}\n\\begin{tabular}{|c|c|m{5.7cm}|}\n\\hline\n\\multirow{5}{*}{\\makecell{Topo-\\\\logy}} \n & Unicast & \\\\ \\cline{2-3} \n & Multicast & \\\\ \\cline{2-3} \n & Broadcast & \\\\ \\cline{2-3} \n & Mesh & \\\\ \\cline{2-3}\n & Configurable & \\\\ \\hline\n\\multirow{2}{*}{\\makecell{Spatial \\\\ Reuse}} \n & Activations & \\\\ \\cline{2-3} \n & Weights & \\\\ \\hline \n\\end{tabular}\n\\addtolength{\\tabcolsep}{3pt}\n\\end{table}\nFig. \\ref{fig::NoC-designs} shows some common NoC designs for distributing the operands, their bandwidth, and achievable spatial reuse . Data can be reused spatially by distributing it to multiple PEs or functional units. For layers with high reuse opportunities (Fig. \\ref{fig::analysis-data-reuse}), it lowers communication and helps to hide the communication latency. Most accelerators leverage spatial reuse with multicast or broadcast NoC. They consist of configurable buses or trees that multicast the data to PEs (often in a single cycle) . 
In contrast, the mesh interconnect (e.g., in systolic arrays ) or 1D buses communicate the data and reuse it spatially with a store-and-forward mechanism. Low-reuse tensors are distributed with unicast NoCs. Table \ref{tab:NoC-data-distribution} lists common interconnect topologies used by previous accelerators for data distribution.\nCommunication requirements vary significantly depending on the sparsity of tensors, available reuse, and adopted dataflow mechanism. Prior work provides a detailed analysis of different NoC topologies and characterizes the NoC bandwidth required for different dataflows. Similarly, analytical tools model the implications of different dataflows on communication requirements and execution time. \n\mysubsubsection{Broadcast} Accelerators, including Cnvlutin , EIE , and Cambricon-S , use broadcast NoC to reuse activations for processing CONVs or MLPs. Similarly, in SCNN , weights are broadcast to PEs for executing unit-strided convolutions with input stationary dataflow. For sparsity-encoded tensors, their NZs (and positions) can be broadcast for spatial reuse, as long as the NZs are indexed or extracted afterward (e.g., in-PE extraction in Cnvlutin and EIE). In Cambricon-S, positions of intersecting NZs are extracted centrally before broadcast, but due to structured sparsity, the same extracted positions are used by all PEs. So, NZ activations are broadcast to all PEs. \n\mysubsubsection{Multicast} Eyeriss , ZENA , and SNAP use multicast NoC to reuse multiple operands spatially. For example, Eyeriss processed tensors with row-stationary dataflow where PEs of a row processed the same spatial rows of filters, and diagonal PEs processed the same spatial row of feature maps. Eyeriss facilitated such multicasting through its configurable NoCs, which consisted of row-wise and column-wise controllers for the 2D PE-array.
Each controller could be configured with a pre-determined tag value, which was compared with the row or column tag of a packet. Upon matching the tags, a row-wise controller forwarded the packet to associated column-wise controllers, and a column-wise controller forwarded it to the associated PE. Similarly, for processing bitmap-coded tensors in ZENA , a block of activations was broadcast to a row of PEs, and a block of weights was multicast to PEs of the same column. \n\mysubsubsection{Mesh} A few accelerators, including CompAct , ERIDANUS , and , use systolic arrays with mesh interconnects. Since the same data is forwarded among PEs in the same row or column, such NoCs achieve the same amount of spatial reuse as multicast NoCs. But, for sparse tensors, efficient and correct processing becomes challenging. Hence, \emph{pre-processing} is needed to cluster appropriate NZs or index an appropriate block of a structured-sparse tensor before feeding the PEs of the systolic array . \n\mysubsubsection{Unicast} SCNN , Cambricon-X , and SqueezeFlow use unicast NoC or point-to-point links. Such NoCs concurrently feed different elements to various PEs. They are used when spatial reuse of a tensor is infeasible (e.g., for weights in MLPs, when NZs are extracted beforehand, or due to dataflow requirements) or when outputs are collected simultaneously (section \ref{sec::write-back}). \nWith high bandwidth, they reduce communication latency , but can incur high area and power. \n\mysubsubsection{Configurable}\n\insight{Communication requirements vary with different dataflows} that are effective for only some DNN layers (section \ref{sec::dataflows} and Table \ref{fig::layer-characteristics}). Further, while communication may consist of gather, scatter, forward, or reduction patterns , efficient execution may demand their combination or even non-uniform patterns including multi-hop communications among PEs .
Therefore, configurable NoC designs are required, which can support various communication patterns that are amenable to different reuse and sparsity. Recent designs including EyerissV2 , microswitch-NoC , and SIGMA address some of these challenges.\n\begin{figure}[!t]\n\centering\n\centerline{\includegraphics[width=\linewidth]{figures/architecture-EyerissV2}}\n\caption{EyerissV2 accelerator architecture (Figure adopted from ).}\n\label{fig::architecture-EyerissV2}\n\end{figure}\nEyerissV2 uses a novel \emph{hierarchical-mesh} NoC, which is illustrated in Fig. \ref{fig::architecture-EyerissV2}. EyerissV2 contains 16 clusters (8$\times$2 array) of PEs and global buffers (GLBs). Each PE-cluster contains 3$\times$4 PEs, and each 12 kB GLB-cluster contains seven banks for input and output activations. At the top level, router clusters are connected through a 2D mesh, and they enable communication among different PE-clusters and GLB-clusters. For local communication between each PE-cluster and its GLB-cluster, a router-cluster with ten routers is used. Each router connects PEs with a port of the GLB cluster for accessing a GLB bank or off-chip memory (three, three, and four routers for managing input activations, weights, and partial summations, respectively). Locally, an all-to-all NoC connects all PEs of a PE-cluster to the routers for each data type. As Fig. \ref{fig::NoC-Hierarchical-Mesh-EyerissV2}(a)--(d) illustrates, it facilitates multiple communication patterns including multicast, broadcast, and unicast of the tensors.
The 2D mesh topology enables inter-cluster communications, allowing an interleaved-multicast or broadcast to all clusters.\n\begin{figure}[!t]\n\centering\n\centerline{\includegraphics[width=\linewidth]{figures/NoC-Hierarchical-Mesh-EyerissV2}}\n\caption{Different configuration modes of hierarchical mesh network in EyerissV2 architecture (Figure adopted from ).}\n\label{fig::NoC-Hierarchical-Mesh-EyerissV2}\n\end{figure}\n\begin{figure}[!b]\n\centering\n\centerline{\includegraphics[width=\linewidth]{figures/NoC-microswitches}}\n\caption{(a) Microswitch network . NoC configurations: (b) multicast (c) gather (d) local communication. (Figure adopted from .)}\n\label{fig::NoC-microswitches}\n\end{figure}\nFor an N-PE accelerator, an array of microswitches (Fig. \ref{fig::NoC-microswitches}a) contains $\log_{2}N + 1$ levels with $N$ micro-switches at each level. Each microswitch contains small combinational logic for configuration and up to two FIFOs for buffering the data during routing conflicts. With small logic and storage, data traverses through several microswitches within each cycle . All microswitches contain \emph{gather} and \emph{scatter} units, and bottom microswitches (level $\log_{2}N$) also contain local units for inter-PE communication. In top microswitches (level 0), the scatter unit connects to memory banks, and the gather unit uses round-robin-based priority logic for arbitrating the incoming data in a pipelined manner. In middle microswitches, scatter units forward data to desired lower-level links, and gather units stream the data back. In bottom microswitches, scatter and gather units stream the data, and local units connect adjacent PEs. Fig. \ref{fig::NoC-microswitches}(b)--(d) shows how configurable microswitches can enable various communication patterns.
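The tag-matching style of configurable multicast described in this subsection can be mimicked in a few lines: each row-wise and column-wise controller holds a configured tag and forwards only packets with a matching tag, so assigning one tag value to several controllers realizes multicast. The array shape and tag values below are illustrative:

```python
def multicast(packet, row_tags, col_tags):
    """Deliver packet = (row_tag, col_tag, data) to every PE (r, c)
    whose configured row and column tags match the packet's tags.
    Sharing a tag across controllers realizes multicast/broadcast;
    unique tags realize unicast."""
    row_tag, col_tag, data = packet
    return {(r, c): data
            for r, rt in enumerate(row_tags) if rt == row_tag
            for c, ct in enumerate(col_tags) if ct == col_tag}

# 3x4 PE array: rows 0 and 2 share tag 7, columns 1 and 2 share tag 1,
# so one packet reaches four PEs in a single distribution step.
delivered = multicast((7, 1, "w[0]"),
                      row_tags=[7, 5, 7],
                      col_tags=[0, 1, 1, 2])
```

Reconfiguring for a different layer amounts to rewriting the tag vectors, which is far cheaper than changing the physical topology.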
\n\\begin{figure}[!t]\n\\centering\n\\centerline{\\includegraphics[width=0.95\\linewidth]{figures/NoC-benes-SIGMA}}\n\\caption{(a) Flexible dot product engine in SIGMA accelerator features a data distribution NoC with configurable switches interconnected via Benes topology. (b)--(c): Configuration of the interconnect facilitates different unicast and multicast communication patterns. (Figure adopted from .)}\n\\label{fig::NoC-benes-SIGMA}\n\\end{figure}\nSIGMA used \\emph{Benes topology} with \\emph{configurable} switches (Fig. \\ref{fig::NoC-benes-SIGMA}a). For N source and destinations, the interconnect contains $2\\log_{2}N + 1$ levels, each with N number of 2$\\times$2 switches. Each switch receives two control signals to determine whether to forward data vertically and/or diagonally. After combining communication requirements for distributing all elements to desired multipliers, switches can be configured to forward the data, as shown in Fig. \\ref{fig::NoC-benes-SIGMA}(b)--(c).", "id": "0ed84c84-7c0f-440f-a66c-b4d0e03b9b4c", "level": "subsection", "origin_cites_number": 36, "parent_id": "4b82dfad-305c-4c60-9b38-8189a27dd3b9", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Interconnects for Distributing Non-Zeros and Reducing Partial Outputs" ], [ "subsection", "Mechanisms for Distribution of Operands" ] ], "subsections": [], "title": "Mechanisms for Distribution of Operands" }, { "cite_extract_rate": 0.23333333333333303, "cites": [ 7633, 7005, 4335, 4347, 4354, 7634, 4356 ], "content": "\\label{sec:reduction-partial-outputs}\nComputation primitives of ML models require reduction (accumulation) of partial outputs from PEs or their functional units. 
It can be done temporally and/or spatially (Table \\ref{tab:reduction-mechanisms}).\n\\mysubsubsection{Temporal} All the reductions for computing an output scalar are performed on a \\emph{single} PE (or a functional unit) during \\emph{different} cycles. Accumulations are done temporally when different PEs compute distinct outputs, e.g., for output stationary dataflow. The temporal reduction makes the processing of sparse tensors \\emph{simple} since PEs update partial outputs in their private memory or registers \\emph{without communicating} to other PEs via an additional interconnect. Therefore, it is adopted by many accelerators, including EIE , ZENA , and SparTen . However, temporal accumulation requires \\emph{indexing} the buffer for reading/writing partial outputs or accumulating computations in the output register (of MAC unit). So, it involves register/memory read and write operations, which consume higher energy than integer arithmetic . Besides, using local accumulator buffers for vector/SIMD functional units (e.g., in SCNN ) requires support for arbitration of partial outputs.\n\\mysubsubsection{Spatial} Partial outputs can be reduced spatially for an output scalar. It can be either done by functional units within a PE (e.g., adder-trees in Cambricon-X/S to sum up partial products) or inter-PE communication via a \\emph{separate interconnect} (e.g., forwarding in systolic arrays). Inter-PE spatial reduction usually requires communication among neighboring PEs and is typically achieved through a mesh or similar topology . Spatial reduction obviates buffer accesses and improves energy efficiency (e.g., by 2$\\times$--3$\\times$ , as compared to the temporal reduction on scalar PEs). These linear or tree-based reductions are typically symmetric. However, a major challenge is to enable \\emph{asymmetric and asynchronous reductions} of a variable number of partial outputs, for adapting to high sparsity, tensor shape, or target functionality (e.g., DW-CONV). 
This is because an efficient dataflow may require some of the interconnected functional units or PEs to process partial outputs for distinct output elements (e.g., different depth-wise groups); not all partial outputs can be reduced together. Hence, configurable interconnects are needed. Otherwise, for high or hyper sparsity, functional units cannot be fed enough NZs and are poorly utilized. Note that structured sparsity can alleviate imbalance by inducing patterns such that all PEs process the same number of NZs. However, configurable mechanisms are still required to support different dataflows for the variations in functionalities or tensor shapes. \n\mysubsubsection{Spatio-temporal} Partial outputs can be reduced spatially and temporally and \emph{locally} (within PEs) and \emph{globally} (across PEs). Spatial and temporal reduction of outputs depends on the mapping of the computation graph onto PEs . In spatiotemporal reduction, different PEs or their functional units compute partial outputs every cycle or every few cycles, which are first reduced spatially. The resultant partial output is then reduced temporally by updating the previous partial output in the memory. E.g., when data streams through PEs of a systolic array, there is an inter-PE spatial reduction of partial outputs (via PEs of each column). Then, the bottom PE-row provides the reduced partial outputs to accumulator buffers (CompAct , TPU ). PEs of SNAP perform spatiotemporal accumulation locally, where partial products are first spatially accumulated through a configurable adder-tree and then accumulated in the PE's memory over time. \n\mysubsubsection{Temporo-spatial} In temporospatial reduction, PEs compute partial outputs and reduce them locally over time. They are later collected and accumulated spatially via the interconnect before further processing (e.g., write-back, encoding). For example, PEs of a cluster in EyerissV2 first locally accumulate partial summations.
Then, partial outputs can be accumulated across vertically connected clusters. SCNN PEs compute output tiles corresponding to distinct input feature maps stored in their buffers. Outputs are temporally reduced by indexing the accumulator buffers. Then, overlapping fractions of incomplete outputs are exchanged among neighboring PEs for reduction. SNAP also performs temporospatial reduction at PE-array (core) level. Its PEs accumulate outputs locally over time, which are reduced spatially across horizontal/diagonal PEs by a core-level reducer.\n\begin{figure}[!t]\n\centering\n\centerline{\includegraphics[width=\linewidth]{figures/NoC-configurable-reduction}}\n\caption{Configurable spatial reduction trees: (a) Augmented reduction tree in MAERI (Figure adopted from .) (b) Forwarding adder network in SIGMA (Figure adopted from .)}\n\label{fig::NoC-configurable-reduction}\n\end{figure}\n\mysubsubsection{Configurable} \nMAERI and SIGMA employ configurable reduction trees for efficient and asymmetric spatial reduction of partial outputs. So, they can be useful for spatial processing of unstructured sparsity and variable-sized vectors for dot products. The augmented reduction tree in MAERI (Fig. \ref{fig::NoC-configurable-reduction}a) allows an asymmetric reduction of partial outputs with configurable adder switches and bidirectional forwarding links. Each 3-input adder switch can receive two partial outputs from the previous level and one via a forwarding link, and it can add and forward them. Plus, upper levels of the tree (near the root) have twice the bandwidth of lower levels, allowing simultaneous collection of multiple reduced outputs. The forwarding adder network in SIGMA (Fig. \ref{fig::NoC-configurable-reduction}b) enables similar configurable reduction but at reduced area and power. Instead of 3-input adders, it uses 2-input adders and an N:2 mux for selecting the inputs.
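The asymmetric reductions enabled by such trees can be modeled in software as a segmented reduction: each multiplier's partial product carries an output (segment) id, and products with the same id are summed independently, which is how one physical tree can serve several variable-sized dot products at once. The segment assignment below is an illustrative example, not a specific accelerator's configuration:

```python
def segmented_reduce(partials, seg_ids):
    """Reduce a flat row of partial products into per-output sums:
    seg_ids[i] names the output that partials[i] belongs to. A
    configurable adder tree performs this spatially in log-depth
    hardware; it is modeled sequentially here."""
    sums = {}
    for p, s in zip(partials, seg_ids):
        sums[s] = sums.get(s, 0) + p
    return sums

# One multiplier row serving three variable-sized dot products
# (widths 3, 4, and 1), as arises with unstructured sparsity.
out = segmented_reduce([2, 1, 4, 3, 5, 1, 1, 2],
                       [0, 0, 0, 1, 1, 1, 1, 2])
```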
Also, adders at the $0^{th}$ level allow bypassing of partial products to the next level.\n\\begin{table}[!t]\n\\centering\n\\caption{Mechanisms for Accumulations of Partial Outputs}\n\\label{tab:reduction-mechanisms}\n\\begin{tabular}{|c|m{5.1cm}|}\n\\hline\nTemporal & \\\\ \\hline\nSpatial (intra-PE) & \\\\ \\hline \nSpatial (inter-PE) & \\\\ \\hline \nSpatio-temporal & \\\\ \\hline\nTemporo-spatial & \\\\ \\hline\nConfigurable & \\\\ \\hline\n\\end{tabular}\n\\end{table}", "id": "79d994f1-db90-4ba7-9daf-de3a3cd3da3c", "level": "subsection", "origin_cites_number": 30, "parent_id": "4b82dfad-305c-4c60-9b38-8189a27dd3b9", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Interconnects for Distributing Non-Zeros and Reducing Partial Outputs" ], [ "subsection", "Mechanisms for Reduction of Partial Outputs" ] ], "subsections": [], "title": "Mechanisms for Reduction of Partial Outputs" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 4335 ], "content": "\\textit{i) Low-cost flexible interconnects for accommodating spatial reuse opportunities, dynamic communication requirements, various sparsity, and different precision:} Variations in data reuse (Fig. \\ref{fig::analysis-data-reuse-sparse}) are caused by the tensor size, functionality (e.g., stride, separable convolution), batch size, and sparsity of tensors. The communication mechanism needs to leverage available reuse by supporting various multicast and unicast patterns . Moreover, the distribution, inter-PE communication, and collection of the outputs can be done \\emph{asynchronously} and \\emph{concurrently}. These require the interconnect switches to support \\emph{dynamic} management (priority arbitration and congestion) at low cost. Furthermore, communication among distant PEs may be required (e.g., for store-and-forward or exchanging outputs during sparse computations). 
Finally, depending on sparsity and precision, the bit-width of the metadata and NZ value can differ significantly. Communicating different sizes of data and metadata can be facilitated by configurable interconnect buses and their interfacing with PEs and memory. For instance, in EyerissV2 , a 24-bit bus can supply PEs with either three 8b uncompressed values or two pairs of 8b NZ values and 4b metadata. Thus, configurable interconnect topologies should be explored for effectively serving various communication requirements. FPGAs can also be leveraged for designing accelerators with tailored interconnects.
\textit{ii) Programming of configurable interconnects and design exploration:} Configurable interconnections can support various communication patterns and dynamic data movement for sparse computations. However, compilation support is needed to program them, as they often contain parameterized multi-level switches and switches with many-to-many links between source and destination (e.g., ). Depending on the interconnect topology and the optimized dataflow, the compiler may need to select efficient paths for distributing data from source to destination switches. Additionally, the underlying topology \emph{may not} support some dataflows (e.g., without multi-hop connectivity, spatiotemporal accumulation of partial outputs from distant PEs is infeasible).
Further, a systematic methodology for mapping communication onto the interconnect topology can enable design space exploration of interconnects needed for accelerating target ML models, while minimizing the overhead of \emph{run-time reconfiguration} of the interconnect to support various dataflows.
\label{sec::PEArch}
PE architecture consists of functional units, local memory (RFs or SRAMs), and local control (instruction buffer or finite state machine). Fig. \ref{fig::PE-pipeline} shows pipeline stages for processing sparse and value-approximated tensors. Depending on its interface, a PE either receives data from the interconnect (typical) or directly accesses off-chip memory via DMA transfer. Every cycle (or every few cycles), a PE (a) processes an instruction or events based on its state , (b) fetches data from local memory or the interconnect, (c) computes tensor elements via its functional unit, and (d) writes the intermediate result to local memory or the interconnect. PEs may contain \emph{special-function} modules (e.g., for ReLU or sigmoid computations ). 
\begin{figure}[!t]
\centering
\centerline{\includegraphics[width=0.9\linewidth]{figures/PE-pipeline}}
\caption{Overview of the PE pipeline for processing sparse and value-approximated tensors (Figure adopted from ).} 
\label{fig::PE-pipeline}
\end{figure}
Processing compressed tensors considerably complicates PE design.
For example, reusing tensors temporally through local memory (e.g., in EyerissV2 , SNAP ) alleviates overheads of repeatedly accessing compressed tensors via memory hierarchy and decoding them. However, it requires communicating data to PEs before extracting NZs. Thus, the PE may require additional hardware for \\emph{extracting} or \\emph{correctly indexing} NZs (section \\ref{sec::NZ-data-extraction}). Additionally, \\insight{the selection of functional units is affected by the number of NZs that can be fed for various sparsity of tensors, support for mixed-precision, and functionality of the layers.} In such various scenarios, a single dataflow may not always be effective and can lead to significant acceleration loss. So, PE datapath needs to be \\emph{adaptive} for supporting multiple dataflows optimized for different layers and sparsity. Further, techniques for leveraging computation reuse due to \\emph{value similarity} often require enhancements in the design. PEs may also post-process outputs or generate additional metadata for communicating outputs. So, an efficient pipeline needs to hide pre-processing and post-processing latency.", "id": "d9ab0fd8-41d0-4648-973b-6a9930053093", "level": "section", "origin_cites_number": 8, "parent_id": "6220d89f-a486-420a-914e-59957895a313", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "PE Architecture Design" ] ], "subsections": [ "adae221e-0b86-440d-9247-a6adfbc6cb8f", "77a52b82-cfaf-475e-a733-c6109662e9aa", "cc845f17-b9e8-4c7d-8469-99a422bd86f0" ], "title": "PE Architecture Design" }, { "cite_extract_rate": 0.212765957446808, "cites": [ 7169, 4335, 4365, 4347, 4354, 7634, 4364, 7805, 4356, 4366 ], "content": "\\mysubsubsection{Scalar PEs} Table \\ref{tab:PE-vectorization} lists accelerators based on their functional units for scalar, SIMD, or vector processing. 
Many architectures contain an array of scalar PEs; PE datapath contains a pipelined MAC unit (e.g., EIE , SparTen ). \n\\mysubsubsection{SIMD/Vector PEs} PEs of Cnvlutin and Cambricon-S contain multiplier arrays and adder trees. By performing dot products at every cycle, they can deliver high throughput. Moreover, accumulation through adder-trees reuses data spatially, which lowers energy consumption (by 2$\\times$--3$\\times$ ) as compared to temporal accumulation on scalar PEs by reading and writing partial summations via local memory. However, a major challenge is the inefficient utilization of multipliers and adders, which often leads to ineffectual computation cycles and acceleration loss. This is because, for high sparsity, enough NZs may not be extracted to feed all multipliers at every cycle. For example, a sensitivity analysis for Cambricon-X determined that, for hyper $W$-sparsity, it accelerated CONVs by about 8$\\times$ (out of 16$\\times$ peak speedup). The utilization may be improved by employing larger indexing or extraction modules (increased on-chip area and power). Alternatively, PEs can be designed with fewer multipliers to sustain the scalability and efficiency over a wide sparsity range.\n\\begin{table}[!t]\n\\centering\n\\caption{PE Architectures for Sparse Tensor Computations}\n\\label{tab:PE-vectorization}\n\\begin{tabular}{|c|m{6.6cm}|}\n\\hline \nScalar & \\\\ \\hline\n\\makecell{SIMD /\\\\Vector} & \\\\ \\hline \n\\end{tabular}\n\\end{table}\nWhile SIMD or vector PEs achieve spatial reuse, due to fixed designs, they are utilized poorly when executing some functionalities like DW-CONV. The efficiency of SIMD PEs is further affected by high sparsity, as functional units of the PE require synchronization, and there may not be enough effectual NZs to feed all of them. \\emph{Configurable functional units} can overcome such limitations. For example, PEs of SNAP architecture use a configurable adder-tree. 
It processes inputs from three multipliers and computes different combinations of partial summations. With multiple adders and multiplexers, the PE can concurrently process different partial summations (vs. gathering through an adder-tree) without high-bandwidth crossbars. Such configurable designs can support different DNN operators (e.g., DW-CONVs).
\mysubsubsection{Multiplier-free PEs} Accelerators, such as ZENA and , use multiplier-free PEs for high energy efficiency. These PEs process tensors of very low precision (binary or ternary values) or with logarithmic quantization. So, they replace multipliers with \emph{simpler arithmetic} like 2's complement (inverters and adders or subtractors) or bit-wise shifts and additions . However, one challenge is to \emph{maintain} the accuracy of DNNs, as aggressive quantization often drops top-1 and top-5 accuracy, e.g., by 0.1\% -- 5\% . Moreover, because such simple hardware trades flexibility for efficiency, supporting various models can be challenging.
\begin{figure}[!b]
\centering
\centerline{\includegraphics[width=\linewidth]{figures/bit-adaptive-computing}}
\caption{Bit-serial processing of sparse activations in Pragmatic . (a) Bit-parallel unit. (b) Bit-serial unit. (c) Bit-serial unit in Pragmatic for processing only essential bits. (Figure adopted from ).}
\label{fig::bit-serial-pragmatic}
\end{figure}
\mysubsubsection{Bit-adaptive computing} Precision requirements for targeted accuracy can vary for different models , which can be supported by PEs with bit-adaptive computing. 
\textbf{Bit-serial computing:} Albericio et al. showed that \emph{zero bits in NZ activations} (8b or 16b precision) can exceed 50\% and proposed the Pragmatic accelerator to leverage the sparsity of activation bits. Fig. \ref{fig::bit-serial-pragmatic}(b) shows the \emph{bit-serial} computation of an inner product with AND gates, an adder tree, and bit-wise shift of the partial output.
AND gates are serially fed 1b activations (variable precision) and bit-parallel 16b weights (fixed precision). Fig. \\ref{fig::bit-serial-pragmatic}(c) shows the processing of only NZ activations in Pragmatic (essential bits indicated by their positions). Laconic achieved further accelerations by processing only NZ bits of both activations and weights. \n\\textbf{Bit-composable computing:} Bit-fusion employed fusion units consisting of an array of BitBricks. The fusion units can be configured for processing multiplications of 2b, 4b, 8b, or 16b operands. For processing NZs, PEs of CNN accelerator Envision used a single-cycle N-subword-parallel multiplier, followed by an N$\\times$48b/N reconfigurable adder. The subword-parallel design allowed the configuration of MAC units for processing the data of 4b, 8b, or 16b. SPU architecture employed DGRA, a decomposable CGRA, for efficiently processing stream-join accesses. The DGRA PE and interconnect switches enabled decomposing up to four 16b sub-word operands. DGRA also supported accessing sub-word data from the scratchpad. For DNN training with mixed-precision and sparse tensors, PEs of LNPU contained configurable MAC units that can process FP8 or FP16 tensors. Table \\ref{tab:precision} lists precisions of sparse tensors that are supported by different accelerators. The precision indicates the bit-width of input operands (activations and weights). For MAC operations, accumulators usually produce high-precision output, which can be down-scaled or truncated afterward. 
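To make the essential-bit idea concrete, it can be mimicked in software: a weight is multiplied by an activation via shift-and-add only at the positions of the activation's nonzero bits, so the cycle count tracks bit-level sparsity rather than the full bit-width. The sketch below assumes unsigned fixed-point values; function names are illustrative, not from any accelerator's toolchain:

```python
def essential_bits(x: int) -> list[int]:
    """Bit positions of the nonzero ('essential') bits of an unsigned value."""
    return [i for i in range(x.bit_length()) if (x >> i) & 1]

def bit_serial_dot(weights: list[int], activations: list[int]) -> tuple[int, int]:
    """Inner product computed bit-serially over activation bits: each
    essential bit of an activation contributes (weight << bit_position).
    Returns the result and the number of shift-and-add 'cycles' spent."""
    acc = cycles = 0
    for w, a in zip(weights, activations):
        for bit in essential_bits(a):    # zero bits cost nothing
            acc += w << bit
            cycles += 1
    return acc, cycles

# 3*10 + 5*0 + 7*1 = 37 in 3 cycles (2 + 0 + 1 essential bits), versus
# 3*16 bit-cycles for a 16b bit-serial unit that cannot skip zero bits.
assert bit_serial_dot([3, 5, 7], [10, 0, 1]) == (37, 3)
```

For signed or floating-point operands, real designs handle sign and alignment in hardware; the sketch only conveys the cycle-count argument.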
\n\\begin{table}[!t]\n\\centering\n\\caption{Precision of Sparse Tensors Supported by Accelerators}\n\\label{tab:precision}\n\\begin{tabular}{|c|m{5.8cm}|}\n\\hline \nbinary/ternary & \\\\ \\hline\nint8 & \\\\ \\hline\nint16 & \\\\ \\hline\nlogarithmic & \\\\ \\hline\nbit-adaptive & \\\\ \\hline\nFP8 & \\\\ \\hline\nFP16 & \\\\ \\hline\nFP32 & \\\\ \\hline\nFP64 & \\\\ \\hline \n\\end{tabular}\n\\end{table}\n\\mysubsubsection{Clock-gated PEs} PEs can be clock-gated when \\emph{not used} for executing a layer and for ineffectual computations. For example, Eyeriss , Thinker , and Minerva use zero detection for clock gating the datapath in the PE pipeline. PEs check whether the value being read is zero (or compare with a threshold, e.g., in MNNFast ). Based on the comparator output, their clock gating logic prevent the MAC datapath from switching in the consecutive cycle, which reduces energy (e.g., it saved power consumption of Eyeriss PE by 45\\%). Zero-skipping through flags in Envision and Sticker achieved similar savings.\n\\mysubsubsection{Optimization opportunities}\n\\textit{(i) Exploring efficient designs of functional units for various sparsity ranges/patterns and functionality:} Utilization of vector/SIMD units can drop significantly due to unstructured sparsity and functionality beyond structured dot products (e.g., DW-CONV). So, for exploring design hyperparameters such as the number of functional units, designers need to consider the impacts of sparsity, data extraction mechanism, required synchronization among computational units, and configurations required to support various functionalities. Moreover, for low sparsity, designs should deliver performance \\emph{at par} with a sparsity-oblivious design. For example, for processing dense tensors, SCNN achieved 79\\% of the performance and consumed 33\\% higher energy as compared to the baseline accelerator for processing dense tensors. 
So, designers may ensure that additional features for exploiting sparsity and configurable components do not increase the critical path latency and are power-gated if not used.", "id": "adae221e-0b86-440d-9247-a6adfbc6cb8f", "level": "subsection", "origin_cites_number": 47, "parent_id": "d9ab0fd8-41d0-4648-973b-6a9930053093", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "PE Architecture Design" ], [ "subsection", "Functional Units" ] ], "subsections": [], "title": "Functional Units" }, { "cite_extract_rate": 0.305555555555555, "cites": [ 7633, 4367, 8769, 7169, 7005, 4335, 4347, 4354, 7634, 7805, 4356 ], "content": "\\label{sec::dataflows}\n\\mysubsubsection{Background} The efficiency of executing a layer onto hardware accelerator depends on the computational, communication, and memory access patterns, which are commonly referred to as dataflow mechanisms . A dataflow refers to the spatiotemporal execution of a model layer (nested loop) on architectural resources . Here, spatial execution corresponds to how PEs exploits parallelism in the computation graph and processes different subsets of tensors. Temporal execution drives the data accessed throughout memory hierarchy and data communication via interconnects. Thus, depending on the functionality and tensor dimensions, dataflow can significantly impact the utilization of resources, data reuse, and latency hiding for memory accesses and data communication, and consequently, the execution time and energy consumption . 
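To make the notion of a dataflow concrete at the software level, the sketch below expresses one layer (a tiny 1D convolution) with two different loop orders: the first keeps each partial output local until it completes, while the second keeps each weight local and updates partial outputs in memory. Shapes and the 1D simplification are illustrative only:

```python
import numpy as np

def conv1d_output_stationary(inputs, weights):
    """Output-stationary order: each output is fully accumulated in a
    local register before moving on, so partial sums never leave the PE."""
    O = len(inputs) - len(weights) + 1
    out = np.zeros(O)
    for o in range(O):                   # outer loop over outputs
        acc = 0.0                        # partial sum stays 'stationary'
        for k in range(len(weights)):
            acc += inputs[o + k] * weights[k]
        out[o] = acc                     # a single write-back per output
    return out

def conv1d_weight_stationary(inputs, weights):
    """Weight-stationary order: each weight is loaded once and reused
    across all outputs; partial sums are instead updated in memory."""
    O = len(inputs) - len(weights) + 1
    out = np.zeros(O)
    for k in range(len(weights)):        # outer loop over weights
        w = weights[k]                   # weight stays 'stationary'
        for o in range(O):
            out[o] += inputs[o + k] * w  # read-modify-write of partial sums
    return out

x = np.arange(8, dtype=float)
w = np.array([1.0, -1.0, 2.0])
assert np.allclose(conv1d_output_stationary(x, w), conv1d_weight_stationary(x, w))
```

Both orders produce identical results; they differ only in which operand is reused innermost, and hence in the register, memory, and interconnect traffic they would generate on an accelerator.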
\n\\begin{figure}[!t]\n\\centering\n\\centerline{\\includegraphics[width=\\linewidth]{figures/dataflows}}\n\\caption{Commonly used dataflow mechanisms for executing convolution layers on hardware accelerators.}\n\\label{fig::dataflows}\n\\end{figure}\nOne way to classify dataflows is by what data is kept ``stationary'' in registers or local memory of PEs (and reused fully before eviction), while other data is being iterated over.\nSome commonly used dataflow mechanisms are output stationary, weight stationary, input stationary, row stationary, and no local reuse. Fig. \\ref{fig::dataflows} shows an example of convolution and the layout of the stationary data for mapping the convolution with these dataflows. In weight stationary dataflow, each weight (of a 2D filter) remains stationary on a unique PE, and reused many times, during processing activations (corresponding to the same input channel $C$). By processing a unique weight, each PE produces partial summations for output activations, which are communicated by PEs and accumulated before outputs are written back to the memory hierarchy. Thus, input and output activations are accessed from off-chip memory (via shared scratchpad and PE's local memory) several times, while weights are continuously reused. After reuse, a new set of weights is loaded from memory, and the execution repeats. Weight reuse is higher in processing larger feature maps (CNNs) and multi-folded for processing data in a batch (e.g., images for CNNs, tokens of sentences for NLP models). Fig. \\ref{fig::layer-characteristics} lists such characteristics of different layers. \nDataflows can be applied at a coarser level, where PEs process a data block or plane (1D/2D/3D). In a coarse weight stationary approach , each PE processes weights of an entire 2D filter (dimensions $C$ and/or $M$ are laid out spatially on PEs). Rows and columns of PEs process the data corresponding to unique input and output channels, respectively. 
So, activations need to be multicast to the PEs of a row, different weights need to be provided to each PE, and partial summations for output channels can be accumulated vertically . Similarly, in an input stationary dataflow, unique activations (or blocks of input feature maps) remain stationary and are reused. In an output stationary dataflow, each PE produces a unique output activation (corresponding to the same or different output channel) . By processing spatial data and input channels first, partial summations are accumulated and reused in the memory of each PE. With the temporal accumulation of outputs on PEs, the output stationary dataflow does not need to reduce partial outputs spatially by collecting them from appropriate PEs, which is otherwise challenging for unstructured sparse data (section \\ref{sec:reduction-partial-outputs}). Therefore, many accelerators opt for such dataflow. In no local reuse dataflow, input operands are streamed to PEs, but they are not stored in PE's memory . In row stationary dataflow, PEs of the same row process the same weights (a row of a filter), diagonal PEs process the same row of input activations, and partial summations for rows of the output feature map are accumulated through vertically connected PEs . \nThus, different dataflows uniquely exploit the spatial parallelism and reuse of different tensors. \n\\begin{figure}[!t]\n\\centering\n\\centerline{\\includegraphics[width=\\linewidth]{figures/dataflow-low-PE-utilization}}\n\\caption{Low utilization of a 16$\\times$16 PE-array in (a) coarse weight stationary dataflow when executing depth-wise layers and (b) input stationary dataflow for executing later layers of deep CNN models (Figure inspired by ).}\n\\label{fig::dataflow-low-PE-utilization}\n\\end{figure}\n\\textit{Dataflow optimization:} As dimensions of tensors are often large, many ways exist for spatiotemporally executing a layer onto the computational and memory resources of an accelerator. 
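A rough count conveys the scale of this space. Even with only two levels (e.g., PE-array and on-chip buffer), each loop bound can be split into any of its factor pairs and the loops can be ordered independently per level; the layer dimensions below are hypothetical:

```python
from math import factorial

def tile_splits(n):
    """All (inner, outer) factorizations of a loop bound across two levels."""
    return [(t, n // t) for t in range(1, n + 1) if n % t == 0]

def count_two_level_mappings(loop_bounds):
    """Mappings = (product of per-loop tiling choices) x (loop orders per level)."""
    tilings = 1
    for n in loop_bounds:
        tilings *= len(tile_splits(n))
    orders = factorial(len(loop_bounds)) ** 2  # independent order at each level
    return tilings * orders

# Hypothetical CONV layer: M=64 filters, C=64 channels, 56x56 output, 3x3 kernel
bounds = [64, 64, 56, 56, 3, 3]
assert count_two_level_mappings(bounds) == 6_502_809_600  # billions, for 2 levels
```

Real mapping spaces are larger still, since accelerators have more memory levels and a spatial/temporal choice per loop, which is why the automated exploration tools discussed next prune the space analytically.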
Optimization of the dataflow is important as it can significantly impact the performance and energy consumption . For instance, mappings with similar performance can consume an order of magnitude higher energy or vice versa. Further, as Fig. \\ref{fig::layer-characteristics} shows, reuse characteristics, tensor dimensions, functionality, and sparsity can vary significantly for different DNN layers. Hence, a single dataflow may not always be effective for acceleration. Fig. \\ref{fig::dataflow-low-PE-utilization} provides two such examples that lead to low PE-array utilization. The coarse weight stationary dataflow processes different 2D filters on different PEs. So, it is inefficient for DW-CONV. Similarly, output-stationary or input-stationary dataflows can result in low utilization of PEs for processing later layers of deep CNNs. With the vast space of execution methods and the growing development of new models (with variations in tensor dimensions), it becomes hard for non-experts to figure out optimized execution methods and designs. Therefore, many optimization tools have been proposed recently including Timeloop , dMazeRunner , MAESTRO , and Interstellar . They analytically model the execution of accelerators to estimate execution metrics and evaluate a set of mappings from the pruned space of various dataflows. \n\\begin{figure}[!t]\n\\centering\n\\centerline{\\includegraphics[width=\\linewidth]{figures/layer-characteristics}}\n\\caption{Characteristics of different DNN layers pertaining to hardware execution (Figure inspired by ).}\n\\label{fig::layer-characteristics}\n\\end{figure}\n\\mysubsubsection{Sparsity-aware dataflows} Dataflows for processing sparse tensors are typically similar to those for dense tensors while processing the data in compressed format. For correct functionality, dataflow executions are facilitated by extraction/orchestration of NZs, which is done either in PEs , on a different module , or by a separate controller. 
For example, SCNN used PT-IS-CP dataflow. It processed planar tiles of feature maps with input stationary dataflow. SCNN's PT-IS-CP-sparse dataflow extended the PT-IS-CP. It processed only NZ activations and weights in compressed format while accessing them from memory and performing computations. The coordinate computation module in each PE ensured that partial products generated by all-to-all multiplications of NZ inputs and weights were accumulated correctly and stored in appropriate buffers. Table \\ref{tab:dataflows} lists sparsity-aware dataflow mechanisms used by accelerators.\n\\begin{table}[!t]\n\\centering\n\\caption{Dataflow Mechanisms of Accelerators}\n\\label{tab:dataflows}\n\\begin{tabular}{|c|m{4.6cm}|}\n\\hline \nInput Stationary & \\\\ \\hline \nOutput Stationary & \\\\ \\hline\nWeight Stationary & \\\\ \\hline\nCoarse Weight Stationary & , \\\\ \\hline\nRow Stationary & \\\\ \\hline\n\\end{tabular}\n\\end{table}\nEyerissV2 used an enhanced row-stationary dataflow. By using statically known sparsity of weights, more NZ weights were allocated in local memories and global scratchpads. For example, each PE can store up to 192 NZ weights. Mappings of CONV and FC layers of AlexNet with row-stationary dataflow allocated 64--174 NZ weights, which corresponded to a total of 132--480 weights in the dense format. With in-PE data extraction logic, each PE only processed NZ values from CSC-encoded data. Thus, sparsity-aware dataflow can be optimized with the pre-known (or expected bounds of) sparsity and value similarity. \n\\mysubsubsection{Optimization opportunities}\n\\textit{(i) Dataflow optimizations accounting for storage and computational overheads for metadata and codebooks:} Sparse and value-shared tensors are processed along with metadata (indicates positions of NZs) and codebook (common values shared among tensor elements), respectively. 
It requires additional processing, e.g., buffer management, communication via interconnects, and indexing the appropriate values. Depending on the dataflow, such processing can amplify the execution costs, which needs to be optimized. Existing tools for optimizing dataflows target dense tensor computations. Accelerators EIE, SCNN, and Cambricon-S process sparse tensor computations but with customized dataflows. Hence, frameworks for mapping and design explorations need to consider the sparsity and value similarity of tensors and their variations across layers/models. Such tools can include additional costs corresponding to storage, communication, and extraction in their analytical models. Explorations supporting multiple dataflows can help to achieve efficient designs for handling different functionality and variations in sparsity, shapes, and quantizations of tensors.\n\\textit{(ii) Sparsity-aware resource partitioning:} Acceleration of deep learning models is scaled by simultaneously processing multiple layers. It is done either by partitioning resources of a scaled-up accelerator or on multiple accelerators (scale-out) by leveraging model- or data-parallelism . Techniques for resource partitioning aim to highly reuse data from the on-chip memory of accelerators. It involves evaluating many-to-many mappings between layers and accelerators. Such optimizations can be crucial for several applications that require low latency, real-time processing, or high frame rates (e.g., processing the frames for multiple object detection models of an autonomous vehicle's perception system). 
Exploiting sparsity can provide further opportunities due to fewer computation, communication, and storage.", "id": "77a52b82-cfaf-475e-a733-c6109662e9aa", "level": "subsection", "origin_cites_number": 36, "parent_id": "d9ab0fd8-41d0-4648-973b-6a9930053093", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "PE Architecture Design" ], [ "subsection", "Dataflow Mechanisms" ] ], "subsections": [], "title": "Dataflow Mechanisms" }, { "cite_extract_rate": 0.478260869565217, "cites": [ 4370, 4372, 4371, 836, 4369, 887, 784, 7634, 4359, 4368, 4350 ], "content": "Several techniques have leveraged value similarity for accelerating DNNs by value sharing and computation reuse (Table \\ref{tab:value-similarity}). Video frames exhibit high similarity spatially (among neighboring pixels) and temporally (over consecutive frames) . After precision lowering, values of limited range repeat frequently , which are further compressed by maintaining a codebook of unique values . With repetition of values, computation (outputs) can be reused, either partially during processing a layer or by skipping processing of a whole layer . This subsection describes such techniques and corresponding hardware enhancements.\n\\begin{table}[!t]\n\\centering\n\\caption{Techniques for leveraging value similarity.}\n\\label{tab:value-similarity}\n\\begin{tabular}{|c|c|m{3.5cm}|}\n\\hline\n\\multirow{2}{*}{\\makecell{Value sharing}}\n & Weights & \\\\ \\cline{2-3} \n & Activations & \\\\ \\hline\n\\multirow{2}{*}{\\makecell{Computation reuse \\\\ and memoization}} \n & Partial & \\\\ \\cline{2-3} \n & Full & \\\\ \\hline\n\\multicolumn{2}{|c|}{Early termination of computations} & \\\\ \\hline \n\\end{tabular}\n\\end{table}\n\\mysubsubsection{Weight similarity} Prior studies have shown that weights can be approximated with a small set of values. \nHegde et al. 
showed that for 8b weights of DNNs, each NZ value typically repeated more than 10 times, and even more than 100 times in later layers of AlexNet and ResNet-50 models. Han et al. pruned weights of DNNs with k-means clustering for value sharing. Shared unique values were represented with 4 or 5 bits without dropping classification accuracy. Local quantization (applying clustering separately over different sub-tensors) can achieve even smaller codebooks . Leveraging weight similarity can compress pruned models further by up to an order of magnitude .
Value-shared weights are processed by augmenting the PE datapath with a weight decoder (e.g., in EIE ). For processing NZ weights, the PE provides the encoded index of the weight to the decoder and obtains the shared value. 
Depending on the lookup mechanism and the total bits to be extracted at every cycle, the decoder can incur considerable area and power costs (e.g., for Cambricon-S , 32.56\% and 3.98\% of the total on-chip area and power, respectively). 
\mysubsubsection{Input similarity} Audio or video frames can contain high similarity spatially or temporally. This is because a speech signal can be quasi-stationary for a short interval. Also, successive executions of DNNs process overlapping windows of audio frames for context extraction . Feature maps for CNNs exhibit high spatial correlation . High input similarity enables storing only unique values and reusing computations via \emph{differential computing} over non-similar data. 
Riera et al. showed that after uniform linear quantization of the inputs of DNNs (e.g., C3D , EESEN , a CNN for self-driving cars ), about 61\% of input activations are the same as in the previous execution, and 66\% of computations can be avoided. Their accelerator maintains centroids of quantized inputs and the index corresponding to each input element. Then, consecutive frames are processed layer-wise with differential computing.
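The essence of such layer-wise differential computing can be sketched for a single fully-connected layer: only inputs whose quantized value changed since the previous frame contribute a correction to the memoized output. The quantization step, shapes, and function names below are hypothetical:

```python
import numpy as np

def quantize(x, step=0.25):
    """Uniform linear quantization to a grid of centroids (step is hypothetical)."""
    return np.round(x / step) * step

def fc_differential(W, x_new, x_prev_q, y_prev):
    """Incrementally update y = W @ x_q: accumulate columns of W only for
    inputs whose quantized value differs from the previous frame."""
    x_q = quantize(x_new)
    delta = x_q - x_prev_q
    changed = np.nonzero(delta)[0]               # work scales with #changed inputs
    y = y_prev + W[:, changed] @ delta[changed]
    return y, x_q, len(changed)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
x0 = rng.standard_normal(8)
y0 = W @ quantize(x0)                # first frame: full, precise pass
x1 = x0.copy()
x1[:2] += 1.0                        # next frame: only two inputs change
y1, x1_q, n = fc_differential(W, x1, quantize(x0), y0)
assert n == 2 and np.allclose(y1, W @ quantize(x1))
```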
For example, for each activation of an FC layer (of a new frame), the accelerator calculates centroid and index, and then it compares calculated centroid to memoized centroid. If the difference is zero, then output from the previous execution is reused, and the next activation is processed. Otherwise, a new value of the index is updated in the buffer, and new values for output activations are computed by accumulating multiplications of weights with the difference.\n\\begin{figure}[!t]\n\\centering\n\\centerline{\\includegraphics[width=\\linewidth]{figures/weight-similarity-UCNN}}\n\\caption{(a) Leveraging weight similarity and reuse of partial outputs . (b) Modifications in UCNN PE architecture (shaded blocks) for buffering indirection tables, partial summations of activation groups, and memoization of partial outputs. (Figure adopted from .)}\n\\label{fig::weight-similarity-UCNN}\n\\end{figure}\n\\mysubsubsection{Computation reuse (partial during processing a layer)} UCNN leverages the repetition of weights by forming activation groups (summations of activations) that share the same weight. It also reuses activation sub-groups, i.e., memoizes partial summations of activations that can repeatedly appear across different filters. Fig. \\ref{fig::weight-similarity-UCNN}(a) illustrates an example. Weights $A$ and $C$ can be shared among corresponding activation groups. For producing activation groups, subgroups like (r+s) can be reused with memoization. So, an output activation is calculated by indexing a unique weight value and corresponding activation groups. Indirection tables provide indices of the unique weight and grouped activations. Fig. \\ref{fig::weight-similarity-UCNN}(b) shows corresponding modifications in the PE datapath. UCNN reported up to 24\\% area overhead for a PE and 1.8$\\times$ speedup for CNNs as compared to execution on a baseline accelerator without exploiting weight repetition. \nSilfa et al. 
showed that for RNNs (e.g., DeepSpeech2 , EESEN ), the relative difference between the output activations over consecutive frames was about 23\\%. Leveraging temporal similarity of outputs saved about 24\\% computations with negligible accuracy loss. For predicting whether an output activation leads to a value similar to the previous output, their technique extended each RNN layer with a binary neural network (BNN). With BNN outputs correlating to actual outputs, execution of much smaller BNN layers led to an efficient prediction of the temporal output similarity. \n\\mysubsubsection{Computation reuse (skip processing of entire layer)} A few techniques predict outputs based on previous computations and skip heavy computations of some layers. Gon{\\c{c}}alves et al. showed that 18\\%--81\\% of computations in AlexNet CONV layers could be reused due to spatial (intra-frame) and temporal (inter-frame) redundancy of the inputs. They leveraged such reuse with memory look-ups and avoided executing CONVs. \nFor YOLO-v3 , it processed only 22\\%--32\\% frames while incurring negligible accuracy loss.\nBuckler et al. proposed skipping heavy processing of some CNN layers for several frames (predicted) and executing precise computations periodically for remaining (key) frames. For predicted frames, their algorithm estimated motion in the input frame. It used results for incrementally updating the output saved from the last key frame. Unlike other value similarity techniques that incur changes in PE datapath, such techniques can be efficiently executed on a separate module (e.g., EVA$^2$ ) or co-processor, while other modules of the same or different accelerator process sparse tensors of DNN layers. 
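Such key-frame/predicted-frame processing can be sketched as a simple scheduler: the precise network runs only on periodic key frames, and in-between frames reuse the memoized result through a cheap incremental update. Here `full_model` and `cheap_update` are illustrative stand-ins for the precise and approximate paths:

```python
def process_stream(frames, full_model, cheap_update, key_interval=4):
    """Run the precise model only on key frames; for predicted frames,
    reuse the memoized key-frame output via a cheap incremental update."""
    outputs, memo = [], None
    for i, frame in enumerate(frames):
        if i % key_interval == 0:      # key frame: full, precise pass
            memo = full_model(frame)
            outputs.append(memo)
        else:                          # predicted frame: reuse + update
            outputs.append(cheap_update(memo, frame))
    return outputs

# Toy stand-ins: the 'model' doubles a frame; the update offsets the memo.
full = lambda f: 2 * f
cheap = lambda memo, f: memo + 1       # e.g., a motion-compensated correction
outs = process_stream(list(range(6)), full, cheap, key_interval=3)
assert outs == [0, 1, 1, 6, 7, 7]      # frames 0 and 3 took the full path
```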
EVA$^2$ identified 78\%--96\% of the frames for AlexNet and 40\%--71\% of the frames for Faster-RCNN as predicted frames while processing the YouTube-BoundingBoxes dataset .
\mysubsubsection{Early termination of computations by predicting outputs} SnaPEA , SparseNN , and CompEND reduce ineffectual computations by early prediction of the usefulness of outputs. They check whether computations contribute to the effective inputs for the subsequent layer (e.g., ReLU or max-pooling). If not, their PEs terminate such computations. To reduce computations corresponding to output sparsity, SnaPEA statically re-ordered weights based on their signs. PEs of the SnaPEA architecture contained prediction activation units (one with each MAC), which checked the sign bit of the partial summation and raised a termination signal to notify the controller once the sign bit was set. 
\mysubsubsection{Optimization opportunities} 
\textit{(i) Joint exploration of spatial and temporal similarity of inputs, weights, and outputs:} Depending on the model's configurations (depth, layer dimensions, cardinality) and domain-specific data, opportunities for value sharing and computation reuse (at both fine-grain and coarse-grain levels) in processing activations and weights can vary considerably. A joint exploration for different tensors can help to identify storage-Ops-accuracy trade-offs for efficient acceleration. 
\textit{(ii) Leveraging value similarity through separate processing:} Determining value similarity and leveraging computation reuse often demand modifications in the PE-array, increasing the accelerator's area, latency, or power. Designers may obviate this by providing a separate and compact module for differential computing that handles the necessary pre-processing or post-processing and can be interfaced with the PE-array and on-chip memory. When required, it can trigger execution on the PE-array for structured computations.
Further, algorithms expressing the functionality of ML layers/models may be defined in terms of \emph{differential computing} (i.e., execution is conditional on an input mismatch, and results are reused otherwise). With efficient accelerator/model co-designs for differential computing of tensors, accelerators may attain structured effectual computations with fewer overheads of metadata or memoization.
\label{sec::load-balance}
Depending on the distribution of zeros, the inter-PE or intra-PE imbalance can cause low utilization of PEs or their functional units, which increases execution time and energy consumption. This section summarizes sources of such imbalance and then discusses different software-directed techniques and hardware designs for balancing the computations. Table \ref{tab:load-balance} categorizes these techniques. Software-based techniques facilitate structured computations by forming local regions of dense elements, sorting the data by combining same-sparsity tensor blocks, or regularizing models with structured pruning. Although they require little or no additional hardware, they are often limited to static $W$-sparsity. Accelerators dynamically balance computations by prefetching work in FIFOs or memory, obviating fine-grained synchronization of computations on PEs.
Some accelerators achieve further run-time balance across PEs by a central hardware module for work sharing.
\mysubsubsection{Inter-PE imbalance} Zeros in different tensors can be scattered, and their positions may not be determined statically (e.g., unstructured $IA$-sparsity). For most accelerators, the work to be performed concurrently by PEs is fixed statically. Also, executions with conventional dataflows usually require synchronization among PEs (e.g., in SCNN , Cnvlutin ), which is achieved by barriers implemented in software via instructions or in hardware via the PE architecture or controller logic. Consequently, computations per PE during each execution pass can vary drastically (inter-PE load imbalance). So, many PEs may finish their computations early, get stalled, and wait for the next set of data due to synchronized execution, while other PEs still process the previously allocated data. This increases execution time and energy consumption. Kim et al. analyzed the distribution of NZ weights in AlexNet CONV3 filters and showed that in an execution pass, the NZs processed by the leading and trailing PEs differed by up to 6.5$\times$. Similarly, up to 40\% of cycles were idle for executions of PEs in the SCNN architecture .
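To make the effect of barrier synchronization concrete, consider a toy simulation; all NZ-per-PE counts below are hypothetical, not measurements from any cited accelerator.

```python
import numpy as np

rng = np.random.default_rng(1)
num_pes, num_passes = 16, 200
# Hypothetical NZ MACs assigned to each PE in each synchronized execution pass
work = rng.integers(5, 108, size=(num_passes, num_pes))

pass_len = work.max(axis=1)          # barrier: a pass lasts as long as its slowest PE
busy_cycles = work.sum()
reserved_cycles = pass_len.sum() * num_pes
idle_frac = 1 - busy_cycles / reserved_cycles
print(f"idle fraction under synchronized passes: {idle_frac:.0%}")
```

With work fixed per pass and a barrier at every pass boundary, the idle fraction grows with the spread between the slowest and the average PE, mirroring the leading-versus-trailing-PE gap reported above.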
Sensitivity analysis for EIE showed that, without any load balance, about 47\% of the cycles were idle for the 64-PE accelerator .
\begin{table}[!t]
\centering
\caption{Classification of Load Balancing Techniques}
\label{tab:load-balance}
\begin{tabular}{|c|c|m{3.95cm}|}
\hline
\multirow{3}{*}{\makecell{Software \\ Directed}} 
& Data Clustering & \\ \cline{2-3} 
& Data Reorganization & \\ \cline{2-3}
& Model Regularization & \\ \hline 
\multirow{2}{*}{\makecell{Hardware \\ Module}} 
& Work Prefetching & \\ \cline{2-3} 
& Work Sharing & 
\\ \hline
\end{tabular}
\end{table}
\mysubsubsection{Intra-PE imbalance} For SIMD or vector PEs, intra-PE load imbalance can also contribute to a significant acceleration loss. With unstructured sparsity of one or more tensors, enough NZs may not be extracted to feed all the functional units within PEs, which causes intra-PE load imbalance. Sensitivity analysis for the SNAP accelerator showed that the utilization of multipliers falls below 80\% with moderate sparsity and drops to about 20\% at 90\% sparsity . Similarly, SCNN reported below 80\% utilization of multipliers for all GoogLeNet layers and 20\% for the last two inception modules. Moreover, a few architectures use PE designs with multiple subunits in each PE. For SIMD processing, a subunit works in synchronization with other subunits of the same PE, e.g., in Cnvlutin , , and .
With unstructured sparsity, multipliers and accumulators in some subunits can often be idle, while trailing subunits process computations.
\mysubsubsection{Clustering of NZs for populating dense data regions} As described in section \ref{sec::data-extraction-HW}, a few techniques targeted high $W$-sparsity. They used structured pruning or data combining approaches for clustering the tensor elements in locally dense regions that can be dispatched to PEs for processing in a conventional manner . Thus, they achieve high PE utilization and fewer invocations of accelerator resources. However, such techniques may not be effective when algorithms cannot generate or pack structured sparse data (e.g., dynamic unstructured sparsity).
Concise convolution rules (CCR) partitioned sparse convolutions into effective and ineffective sub-convolutions for processing locally dense regions of filters and input feature maps. It eliminated a majority of the ineffectual computations and their storage (for VGG-16, achieving reductions of about 79\% and 51\%, respectively) . Sub-convolutions after the CCR transformation were executed on the SqueezeFlow accelerator . However, with PEs performing only all-to-all multiplications, it may not support generic tensor operations; it can be challenging to extend the CCR methodology to other algorithms.
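The clustering idea can be sketched as a block-wise matrix-vector product that dispatches only locally dense blocks to (notional) PEs; the block size, tensor shapes, and sparsity level below are illustrative.

```python
import numpy as np

def block_sparse_matvec(W, x, B=4):
    """Dispatch only non-empty B x B blocks of W; all-zero blocks are skipped
    entirely, so each dispatched block runs as a conventional dense kernel."""
    n, m = W.shape
    y = np.zeros(n)
    kept = total = 0
    for i in range(0, n, B):
        for j in range(0, m, B):
            blk = W[i:i+B, j:j+B]
            total += 1
            if np.any(blk):
                y[i:i+B] += blk @ x[j:j+B]   # dense sub-computation on a PE
                kept += 1
    return y, kept, total

rng = np.random.default_rng(2)
W = rng.standard_normal((32, 32)) * (rng.random((32, 32)) < 0.1)  # ~90% sparse
x = rng.standard_normal(32)
y, kept, total = block_sparse_matvec(W, x)
assert np.allclose(y, W @ x)
```

Note that with unstructured random sparsity, few 4$\times$4 blocks are entirely zero, so typically most of the 64 blocks are still dispatched despite ~90\% sparsity; this is precisely why structured pruning or data combining is needed to form skippable dense clusters.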
\n\\begin{figure}[!t]\n\\centering\n\\centerline{\\includegraphics[width=\\linewidth]{figures/NZ-load-imbalance}}\n\\caption{Distribution of NZ weights for executing CONV1 of AlexNet with coarse weight stationary dataflow on a 4$\\times$3 PE-array. Distribution shows NZ weights in different workgroups (each workgroup contains NZs for 12 PEs): (a) without load balance (b) after sorting. (Figure inspired by .)}\n\\label{fig::NZ-load-imbalance}\n\\end{figure}\n\\mysubsubsection{Data reorganization before work allocation} In ZENA , each PE processed a different set of filters for processing a sub-workgroup. For balancing computations among these PEs, filters were sorted by sparsity and allocated to PEs such that all PEs executed filters of similar sparsity.\nTo determine the efficacy of such sorting, we considered AlexNet for ImageNet classification. We obtained the pruned model through the neural network distiller with a pruning algorithm similar to . For accelerating AlexNet CONV1 layer with coarse weight stationary dataflow, Fig. \\ref{fig::NZ-load-imbalance} presents distributions of NZs in filters before and after reorganization. For processing 64 filters of size 3$\\times$11$\\times$11 on 4$\\times$3 PEs, we consider execution through 16 different workgroups. Each workgroup contains NZ weights for concurrent processing of four filters and three channels on 12 PEs (up to 11$\\times$11 on a PE). The next workgroup is initiated once all PEs entirely use previously allocated weights. Fig. \\ref{fig::NZ-load-imbalance}(a) shows that before data re-organization, the total NZ weights allocated to PEs within workgroups differed by up to 21.4$\\times$ (5 vs. 107 for 11$\\times$11 filters) and 6.09$\\times$ on average. Fig. \\ref{fig::NZ-load-imbalance}(b) shows that after sorting the weights (both filter-wise and input channel-wise), it leads to an almost equal number of NZs for computations onto 12 PEs during each workgroup. 
The total allocated NZ weights differed by only 1.36$\\times$. \nAfter static sorting, ZENA achieved about 20\\%--32\\% more acceleration for CONV layers of AlexNet and VGG-16 . Depending on the sparsity, distribution of NZs, and achievable sorting granularity, the work allocated to PEs may differ considerably even after sorting. Moreover, such transformations are usually feasible only statically. So, ZENA also used dynamic work sharing, which we discuss in section \\ref{sec::dynamic-load-balance}.\n\\mysubsubsection{Accelerator-aware regularization of the model} Recent accelerators, including Sparse Tensor Cores in NVIDIA Ampere architecture , , and , execute models pruned with $k$:$n$ block-sparsity (e.g., 2:4 sparsity supported by Ampere ). Their PEs contain multiplexers that use indices of $k$ NZ weights to select $k$ out of $n$ dense activations. Then, functional units process extracted values. Like $k$:$n$ block-sparsity, ESE used a load-balance aware pruning for RNNs. It considered sub-matrices to be processed by different PEs and induced the same sparsity into all sub-matrices.\nIn some architectures, all PEs receive the same set of NZ activations. They process them with their unique weights and produce distinct output activations. One such architecture is Cambricon-S which used a coarse-grained pruning of weights. The block size for pruning depends on the total number of PEs (16). Over local regions, the pruning removed all connections between an input activation and all (16) output activations. 
So, when PEs processed output neurons, they processed the same number of NZ input activations and weights for computing MACs.
\label{sec::dynamic-load-balance}
\mysubsubsection{Facilitating asynchronous computations by prefetching allocated work} One way to improve PE utilization (in the presence of load imbalance) is to prefetch the allocated work for PEs and avoid their fine-grain synchronization. So, even if there is a different amount of work (e.g., MACs per input activation), all the PEs may perform effectual computations at the same time (e.g., work on different activations). Thus, each PE can be engaged in performing some computations before it runs out of the available data. This can be achieved by offloading more data into the FIFO or memory of each PE. For example, in EIE , activations are broadcast to FIFOs of all PEs. Once a PE finishes multiplying an activation by its corresponding NZ weights or does not find any matching weights, it processes the next activation from its queue. A FIFO size of 8 or higher ensured that each PE almost always had an NZ activation to process (during 87\%--91\% of computation cycles) and lowered the idle time of PEs from about 47\% to 13\% . 
Cambricon-X allows asynchronous communication of weights to PEs. A centralized data extraction mechanism provides NZ activations to each PE via a unicast network, and compressed weights are loaded in the memory of each PE (2 KB).
The memory access port is assigned to each PE for a short period, during which it fetches several chunks of weights via DMA transfer. Depending on the prefetching interval and unstructured sparsity, each PE may asynchronously work on useful computations in most of the execution cycles. 
While asynchronous execution improves the utilization of PEs, the work allocated to PEs is still fixed. Moreover, in-PE data fetching mechanisms may prevent PEs from finding pending work in other PEs and sharing it. For highly imbalanced computations, straggling PEs can still be the bottleneck.
\mysubsubsection{Centralized load balance} In some accelerators, data is multicast to one or more rows (or columns) of PEs. A central logic processes the metadata (indices) of the tensor tiles to be distributed, along with control signals from PE-rows, and determines the work distribution. Then, it feeds the fast-acting rows/lanes of PEs and facilitates work sharing. For instance, ZENA allocates work dynamically through down counters. Different PE-groups (e.g., PE-rows) process the same filters with different activation tiles. A central distribution mechanism contains down counters that store the number of remaining activation tiles for each PE-group. When a leading PE-group finishes its work (counter is zero), it obtains an activation tile from a straggling group (the one with the biggest count value) and then continues processing output activations. The work sharing improved acceleration by about 10\% for CONV layers of AlexNet and VGG-16 . Memory port contention may occur when multiple leading groups simultaneously attempt to fetch the same set of input activation tiles.
ZENA's execution mechanism overcomes this problem by reassigning only one activation tile at a time (to the leading group) and performing reassignments only during bus idle time.\n\\begin{figure}[!t]\n\\centering\n\\centerline{\\includegraphics[width=\\linewidth]{figures/load-balance-LNPU}}\n\\caption{Load balance mechanism in LNPU (Figure adopted from ).}\n\\label{fig::load-balance-LNPU}\n\\end{figure}\nLNPU uses an input load balancer (ILB) which is shared among PE-rows. As Fig. \\ref{fig::load-balance-LNPU} shows, ILB contains address generator units to determine the indices of the compressed elements that need to be fetched. Once ILB fetches them, the skip-index decoder unit determines the appropriate indices for data extraction. It pushes them along with the NZ values into the FIFO of a PE-row. It also calculates bitmaps, which are used for pushing the data (indices and NZs) selectively into FIFOs of PE-rows at run time. Due to ILB, PE utilization in LNPU was increased by 2\\%--26\\% for 10\\%--90\\% sparsity of the inputs (activations or their gradients) . 
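The down-counter-style reassignment described above for ZENA can be sketched as a toy event-driven model; the group count and per-tile cycle costs below are hypothetical.

```python
import heapq

def makespan(tile_costs, share=False):
    """Event-driven toy model: group g owns the tiles in tile_costs[g].
    With share=True, a group whose down counter hits zero takes one tile
    from the group with the largest remaining count, one at a time."""
    queues = [list(c) for c in tile_costs]
    heap = [(0, g) for g in range(len(queues))]
    heapq.heapify(heap)
    finish = 0
    while heap:
        t, g = heapq.heappop(heap)
        finish = max(finish, t)
        if not queues[g] and share:
            donor = max(range(len(queues)), key=lambda d: len(queues[d]))
            if queues[donor]:
                queues[g].append(queues[donor].pop())   # reassign one tile
        if queues[g]:
            heapq.heappush(heap, (t + queues[g].pop(), g))
    return finish

# Hypothetical per-tile cycle counts for four PE-groups with skewed sparsity
tiles = [[9] * 8, [3] * 8, [2] * 8, [1] * 8]
static, shared = makespan(tiles), makespan(tiles, share=True)
assert shared < static == 72
```

Reassigning only one tile per idle event keeps the sketch close to the bus-arbitration constraint just described, while still letting leading groups drain the straggler's counter.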
Thus, centralized load balancing mechanisms can leverage the information about data allocation for PEs and provide equal work to PEs or feed the fast-acting PEs during run-time.
\textit{(i) Software-level or hardware/software/model co-design optimizations for low-cost load balance:} Most accelerators lack special support to balance computations among PEs, e.g., to avoid area and power overheads due to special hardware logic. (a) One technique is to reorganize the data . However, it can mostly exploit only static $W$-sparsity for inference at no/low hardware cost. So, we may require additional co-design optimizations for regularizing dynamic sparsity. (b) For pre-known or estimated sparsity, sparsity-aware mapping optimizations for accelerators may identify efficient dataflows that sustain high PE utilization. (c) When sparsity can be regularized at modest accuracy loss (e.g., for several DNNs), accelerator/model co-designs can induce the balance, either via structured pruning of activations or by refactoring the operators (nonlinear activations, batch normalization , or quantization of outputs).
Consequently, the co-designs may achieve structured computations over both activations and weights to an extent, leading to further accelerations.", "id": "a6e6350a-7c3d-4517-bdb5-11afa0cc8a48", "level": "subsection", "origin_cites_number": 3, "parent_id": "ac4d8613-1421-4c83-a4a5-66deeb11b225", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Load Balancing of Effectual Computations" ], [ "subsection", "Optimization Opportunities" ] ], "subsections": [], "title": "Optimization Opportunities" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 4354, 7634 ], "content": "\\label{sec::post-processing}\nOnce PEs process allocated data, they write (partial) outputs back via interconnect. For unstructured sparsity, managing \\textit{write-backs (WBs)} can be challenging because different PEs can produce outputs of different sizes at different times. Moreover, operations like ReLU, pooling, and batch-normalization need to be performed on outputs. They are usually not performance-critical like CONV or MLP. So, they can be either executed on PEs before WBs of outputs (in SCNN , Cambricon-S , and EIE ) or post-processed on central modules (in MAERI , ZENA , and SqueezeFlow ). 
Central modules often assemble the outputs collected from PEs, transform data for the next layer, and encode sparse outputs on the fly.
\label{sec::write-back}
\mysubsubsection{Simultaneous WB} Cambricon-X and SCNN use fat-tree networks or point-to-point links, which allow simultaneous WBs from multiple PEs. Whenever ready, PEs can execute in a dataflow manner and immediately write outputs back after computations. This is important for processing unstructured sparsity because different PEs may process a different number of NZs and produce different amounts of output values for WB at different time intervals. With such high bandwidth, communication time can be reduced and interleaved with computations, which is important for processing models with low arithmetic intensity. These PEs write to a central module for post-processing (e.g., in Cambricon-X ), the on-chip memory , or off-chip memory (e.g., in SCNN ). Although simultaneous WBs are faster, such a fat-tree network can incur considerable overhead due to increased bandwidth and inefficient bandwidth utilization in some scenarios.
So, accelerator designs can instead use a common bus that is time-shared among multiple PEs; PEs can write the data back turn-wise or asynchronously.\n\\mysubsubsection{Sequential WB} PEs in several accelerator designs operate in a lock-stepped manner, where data blocks common to PEs are broadcast to them, and all PEs synchronize for processing the outputs (idle when done). Synchronized execution can allow WB in a specific sequence (e.g., a PE with the lowest PE-index writes the data first and so forth). It makes the programming of the accelerator easier. It also obviates overheads of specialized hardware/software support, which is required otherwise for asynchronous WB. \n\\mysubsubsection{Asynchronous WB} With unstructured sparsity, PEs process a different amount of data and can asynchronously request WB during the execution. For facilitating such support, accelerator designs can employ additional hardware logic. For example, ZENA used a common bus for multicasting blocks of filters and feature maps to PEs and collecting the output. Output buffers of PEs were flushed to the memory during the idle period of the bus, which avoided bus contention between broadcasting activations from memory and WB of partial summations. For prioritizing the requests from PEs to access the bus for WB, it determined the PE groups with a high number of pending output tiles.", "id": "d7a904a1-024e-4031-87c5-cfa5487bd195", "level": "subsection", "origin_cites_number": 3, "parent_id": "d451769e-a134-445a-a7af-73c10cbbd92a", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Write-Back and Post-processing" ], [ "subsection", "Write-Back from PEs" ] ], "subsections": [], "title": "Write-Back from PEs" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 4354 ], "content": "PEs often process large output tiles. So, they perform fine-grained assembling of outputs locally. 
For example, SCNN PEs use a coordinate computation unit that determines appropriate indices for arbitrating partial outputs to the local accumulator buffer. In other accelerators, PEs produce metadata and supply it with outputs for correctly indexing the memory (e.g., in ZENA ) or assembling outputs on a central module (e.g., in Cambricon-X , CoNNA ). The central module uses the metadata (e.g., output coordinates) from PEs or pre-known indices of PEs to assemble the collected outputs before WB or post-processing. In some designs, data assembling is done by global accumulators that reduce partial summations and update outputs into appropriate memory banks (e.g., SNAP ). The data assembling logic typically also handles data layout transformation (e.g., in ), which is required for processing the subsequent layer.
\mysubsubsection{Data reorganization} Accelerators are often designed for efficient vector or matrix multiplications. So, for processing convolutions, they (e.g., ) require data layout in \textit{NHWC (channels-first)} format , which is also used for processing on CPUs and GPUs. Fig. \ref{fig::data-layout-xform}(b) shows data reorganization for striding execution of the convolution of Fig. \ref{fig::data-layout-xform}(a). It shows iterative processing of the spatial data with channels-first processing. For example, an output activation $1A$ can be processed by fetching a block containing all channels of the first filter and ifmap.
Vectors corresponding to channels can be processed iteratively. Sparse data blocks are also processed similarly but with computations on the appropriate NZs.
\begin{figure}[t]
\centering
\centerline{\includegraphics[width=\linewidth]{figures/data-layout-xform}}
\caption{Data layout transformation for executing convolution. (a) Convolution of two 2$\times$3$\times$3 feature maps with two 2$\times$2$\times$2 filters. (b) Reorganizing data for striding execution. (c) Transforming feature maps into Toeplitz matrix.}
\label{fig::data-layout-xform}
\end{figure}
\mysubsubsection{Transformations to Toeplitz matrix} Processing with NHWC format allows executing CONVs as iterative vector-vector multiplications, but it requires hardware support to fetch appropriate blocks. So, for processing CONVs as sparse GEMMs, a few accelerators, including ERIDANUS and , transform sparse feature maps into a Toeplitz matrix with the \textit{im2col} transformation . Once transformed matrices are obtained and sparsity-encoded, accelerators compute sparse matrix multiplication. Fig. \ref{fig::data-layout-xform}(c) illustrates the transformation for the tensors of Fig. \ref{fig::data-layout-xform}(a). It shows that neighborhood values for computing an output with a 2D CONV are combined as a vector. For multiple input channels of ifmaps or filters, corresponding elements are stacked in column-vectors or row-vectors.
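A minimal NumPy sketch of this \textit{im2col} transformation for the (dense) tensor shapes of Fig. \ref{fig::data-layout-xform}(a); a sparse design would encode the resulting matrices before the GEMM.

```python
import numpy as np

def im2col(ifmap, fy, fx):
    """Unfold a (C, H, W) ifmap into a Toeplitz matrix of shape
    (C*fy*fx, Ho*Wo) for a unit-stride convolution."""
    C, H, W = ifmap.shape
    Ho, Wo = H - fy + 1, W - fx + 1
    cols = np.empty((C * fy * fx, Ho * Wo))
    for y in range(Ho):
        for x in range(Wo):
            cols[:, y * Wo + x] = ifmap[:, y:y+fy, x:x+fx].ravel()
    return cols

rng = np.random.default_rng(3)
ifmap = rng.standard_normal((2, 3, 3))       # 2 channels of 3x3 feature maps
filters = rng.standard_normal((2, 2, 2, 2))  # 2 filters of 2 channels, 2x2 each

cols = im2col(ifmap, 2, 2)                   # Toeplitz matrix: (8, 4)
Wm = filters.reshape(2, -1)                  # each filter flattened as a row
out = (Wm @ cols).reshape(2, 2, 2)           # CONV expressed as a GEMM

ref = np.empty((2, 2, 2))                    # direct (sliding-window) CONV
for k in range(2):
    for y in range(2):
        for x in range(2):
            ref[k, y, x] = np.sum(filters[k] * ifmap[:, y:y+2, x:x+2])
assert np.allclose(out, ref)
```

Here `cols` holds 32 values versus 18 in the ifmap, previewing the duplication overhead discussed for the Toeplitz transformation.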
However, transforming the ifmap into a Toeplitz matrix duplicates neighborhood data and yields storage overhead (Fy$\times$Fx times higher memory for a unit-strided CONV).
\label{sec::post-process-encoding}
Several accelerators, such as SparTen , SqueezeFlow , Eyeriss , and CompAct , use an encoding module. Such a module encodes blocks of the output tensor on the fly, typically before WB. It reduces accesses to off-chip memory significantly . On-the-fly encoding allows efficient processing of dynamically sparsified tensors, i.e., sparse activations for DNN inference and tensors in the training of models. It typically consumes low on-chip area and power, e.g., 2.5\% and 3.06\% for the RLC encoder-decoder unit in SqueezeFlow and 0.3\% of the total on-chip area for the RLC unit in Eyeriss. The complexity of the hardware logic required for encoding depends on the coding format (section \ref{sec::sparse-data-coding}). For example, single-step processing for bitmap, RLC, or COO-1D incurs low overhead. A central bitmap-encoder in SparTen consisted of comparators (XNOR gates) for determining NZs and additional logic for shifting NZs to populate the data vector. The encoding overhead may be lowered for block-sparse tensors, which require indicating only the positions of NZ blocks. 
Sticker facilitates sparsity-aware encoding. It uses three modes to encode DNN tensors of high, medium, or low sparsity with COO, bitmap, and dense formats.
The three modes are controlled by two threshold values. Since weights can be processed offline for DNN inference, they are pre-encoded in appropriate formats. To encode activations online, Sticker uses a sparsity adaptor module. It consists of a sparsity detector, a 4 KB buffer, an encoder, and a controller. The sparsity detector contains counters that count zeros in the activations of 16 consecutive channels. After the detector processes output activations (obtained after ReLU), they are stored in the buffer. Then, the controller determines the encoding mode with which the encoder can encode the data in the buffer.
\label{sec::software-support}
This section provides an overview of the compiler support for sparse deep learning accelerators. It focuses on:
\begin{itemize}[leftmargin=*]
 \item Intermediate representations (IRs). They determine what type of code the compiler can support and what kind of compiler transformations it can perform.
 \item Support for sparse tensors. This subsection discusses compilation challenges in supporting sparse deep learning and compilers developed to overcome these challenges.
 \item Compiler optimizations. This subsection provides an overview of state-of-the-art techniques that allow the compiler to apply advanced optimizations and generate the most efficient code from high-level neural network descriptions.
 \item Accelerator ISAs and code generation.
This subsection focuses on accelerator ISAs (e.g., instruction set for high-level tensor operations) and the compiler support for machine code generation for accelerators.\n\\end{itemize}", "id": "cce74b6f-9cb1-4065-8a80-43648099f245", "level": "section", "origin_cites_number": 0, "parent_id": "6220d89f-a486-420a-914e-59957895a313", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Compiler Support" ] ], "subsections": [ "0648e6ac-e21a-4f45-9a3c-833f135b255f", "fc3d1662-53d4-4119-840c-a54a80c25327", "5cba33c7-04e8-4faa-a124-3d6bfae52a69", "4879cb49-5aea-4686-ba02-9effd2aedd5d" ], "title": "Compiler Support" }, { "cite_extract_rate": 0.18181818181818102, "cites": [ 4377, 882, 4376, 4375 ], "content": "IR determines which types of code can be represented by the compiler, whether it can support sparse tensor computations, the types of code transformations that can be done, and even the scalability of the compiler. \n\\mysubsubsection{Need for high-level representations} A common example of low-level IR is LLVM IR which is well suited for low-level code optimizations such as register allocation but not for many high-level code optimizations needed for optimizing sparse deep learning. This is mainly because low-level IRs do not preserve information about loop structures and data layouts, and reconstructing such information is not trivial . That is why many deep learning compilers such as TVM , Tiramisu , and Halide apply many code optimizations on a high-level IR (an IR that has loops and represents multi-dimensional tensors). 
This is also one of the motivations for creating MLIR , which serves as a high-level IR for low-level compilers like LLVM.\n\\mysubsubsection{Mathematical abstractions of code} While previous IRs have focused on representing program statements and program structure, many compilers use an additional mathematical representation (abstraction) to represent iteration domains\\footnote{The iteration domain of loop iterators in a loop is all possible values that loop iterators can take.} and array accesses of statements. These mathematical representations are usually used in conjunction with the IR to simplify iteration domain and array access transformations. This subsection presents two major families of mathematical representations and compares their strengths and weaknesses. \n\\myparagraph{2.A. Polyhedral representation.}\nIt is a unified mathematical representation for the iteration domains of statements, code transformations, and dependencies. It relies on two main concepts: \\emph{integer sets} and \\emph{maps}. \\emph{Integer sets} represent iteration domains. \\emph{Maps} are used for representing memory accesses and transforming iteration domains and memory accesses.\nAn integer set is a set of integer tuples described using affine constraints. An example of a set of integer tuples is \\polyc{$\\{(1,1); (2,1); (1,2); (2,2); (1,3); (2,3)\\}$}\nInstead of listing all the tuples, we can describe the set by using affine constraints over loop iterators and symbolic constants as follows: \\polyc{$\\{S(i,j): 1 \\leq i \\leq 2 \\wedge 1 \\leq j \\leq 3\\}$} where $i$ and $j$ are the dimensions of the tuples in the set.\nA map is a relation between two integer sets. For example,\n\\polyc{$\\{S1(i,j) \\rightarrow S2(i+1,j+1) : 1 \\leq i \\leq 2 \\wedge 1 \\leq j \\leq 3\\}$}\nis a map between tuples in the set S1 and tuples in the set S2. 
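The set and map examples above can be checked mechanically. The sketch below enumerates the integer set and applies the map in plain Python; a real polyhedral compiler would use a library such as isl rather than explicit enumeration, which is shown here only for illustration:

```python
# Enumerate the integer set {S(i,j) : 1 <= i <= 2 and 1 <= j <= 3}
S = {(i, j) for i in range(1, 3) for j in range(1, 4)}

# Apply the map {S1(i,j) -> S2(i+1,j+1)} to every tuple of the set
mapped = {(i + 1, j + 1) for (i, j) in S}
```

Enumerating the set reproduces exactly the six tuples listed above, and the map shifts each of them by (1, 1).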
More details about the polyhedral model and formal definitions can be found in .
\myparagraph{Polyhedral compilers:}
Notable polyhedral compilers for deep learning include Tiramisu , Tensor Comprehensions , Diesel , and TensorFlow XLA (through the affine MLIR dialect ). General-purpose compilers that support deep learning include PENCIL , Pluto , Polly , PolyMage , AlphaZ , and CHiLL .
\myparagraph{Strengths of the polyhedral representation:}
\begin{itemize}[leftmargin=*]
    \item Unified representation: It eliminates friction within compiler IRs and greatly simplifies the design of code transformations.
    \item Instance-wise representation: The representation granularity is instances of statement executions, where each instance is a single execution of a statement during one loop iteration. Instance-wise representation includes iteration domains, data dependencies, data accesses, and code transformations, which allows the compiler to have a precise representation.
    \item Support for the whole class of affine transformations: It allows applying any affine transformation on iteration domains and data accesses. An example of a complex affine transformation is iteration space skewing, which allows extracting parallelism from multi-layer recurrent neural networks to increase hardware occupancy.
    \item Non-rectangular iteration domains: The representation allows compilers to naturally express non-rectangular iteration domains (i.e., iteration domains with an affine conditional).
\end{itemize}
\myparagraph{Weaknesses of the polyhedral representation:}
\begin{itemize}[leftmargin=*]
    \item Limited support for non-affine code: The polyhedral model mainly represents code and transformations using sets and maps described with affine constraints. So, it does not naturally support code that leads to non-affine constraints. This includes code with non-affine loop bounds, non-affine array accesses, and non-affine conditionals.
While the classical polyhedral model does not support non-affine constraints, recent work has extended the polyhedral representation to support non-affine array accesses, non-affine loop bounds, non-affine conditionals , and parametric tiling . The efficiency of these techniques has been demonstrated in practice by PENCIL and Tiramisu .\n \\item Slower compilation: While polyhedral operations are precise, they are computationally expensive. So, polyhedral compilers are slower than non-polyhedral compilers. Recent techniques reduce the number of statements by clustering groups of statements into macro-statements and scheduling macro-statements instead of individual statements , reducing the compilation time notably. \\\\\n\\end{itemize} \n\\myparagraph{2.B Non-polyhedral representation.} A common non-polyhedral representation used in deep learning compilers is \\emph{interval}-based representation. It uses intervals and interval arithmetic to represent iteration domain and code transformations, respectively. Using intervals, N-dimensional loops are represented with N-dimensional boxes, e.g., iteration domain of a loop nest can be represented as: $(i,j) \\in$ ([0, N],[2, M-2]). \n\\myparagraph{Non-polyhedral DNN compilers:}\nTheir examples include TVM , Halide , DLVM , and Latte .\n\\myparagraph{Strengths of {interval}-based representations:}\n\\begin{itemize}[leftmargin=*]\n \\item Better support for non-affine code: Non-polyhedral compilers can naturally support non-affine code transformations such as parametric tiling (loop tiling with parametric tile size). This is because the interval-based representation does not rely on using affine sets and affine relations to represent the code or dependencies. 
However, non-polyhedral compilers also have limited support for non-affine code (e.g., indirect memory accesses) and code transformations.\n \\item Faster compilation: Operations on intervals are computationally less expensive than polyhedral equivalent operations on sets of integer points, which yields faster compilation.\n\\end{itemize}\n\\myparagraph{Weaknesses of {interval}-based representations:}\n\\begin{itemize}[leftmargin=*]\n \\item Limited expressiveness: Interval-based non-polyhedral compilers cannot naturally represent non-rectangular iteration spaces (e.g., when bounds of loop iterators depend on a condition). It is also hard to perform certain complex affine transformations such as iteration space skewing. \n \\item Lack of support for programs with cyclic data-flow graphs: To simplify checking the legality of a schedule, many interval-based compilers assume that the program has an acyclic dataflow graph. This prevents users from expressing many programs with cyclic dataflow. For example, when a value produced by a loop is read by another loop, Halide does not allow fusion of the two loops (with \\texttt{compute\\_with} command). While it avoids illegal fusion, it prevents legal loop fusions in common cases. Polyhedral compilers avoid these over-conservative constraints by using dependence analysis to check for the correctness of code transformations, which enables more schedules. 
While interval-based compilers can also implement non-polyhedral dependence analysis (by computing dependence distance vectors ), it is not as precise as polyhedral dependence analysis .\n\\end{itemize}", "id": "0648e6ac-e21a-4f45-9a3c-833f135b255f", "level": "subsection", "origin_cites_number": 22, "parent_id": "cce74b6f-9cb1-4065-8a80-43648099f245", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Compiler Support" ], [ "subsection", "Intermediate Representations" ] ], "subsections": [], "title": "Intermediate Representations" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 4375 ], "content": "\\mysubsubsection{Challenges in supporting sparse tensors}\nWhile compiler support is needed in general for targeting ML hardware accelerators with diverse features, sparse tensor computations with various dataflows especially need further support. The code for manipulating sparse tensors exhibits \\insight{non-static loop bounds, non-static array accesses, and conditionals}, which are difficult to analyze at compile time. The following pseudo-code shows one example of a direct convolution with sparse tensors (bounds of $j$ and accesses of $in$ are non-static).\n\\begin{lstlisting}\nfor each output channel c_o\n for j in (W.row_ptr[c_o], W.row_ptr[c_o + 1])\n {\n coeff = W.value[j]\n offset = W.col_idx[j]\n for y in (0, out_H)\n for x in (0, out_W)\n out[c_o][y][x] += coeff*in[y*out_W+x+offset]\n }\n\\end{lstlisting}\n\\mysubsubsection{DNN compilers supporting sparsity}\nTheir examples include Tiramisu , Acorns , and Taichi .\nTiramisu supports $W$-sparsity by extending the polyhedral model in a way similar to . For example, a non-affine conditional is transformed into a predicate that is attached to computation. 
The list of accesses of the computation is the union of the accesses of the computation in the two branches of the conditional, which is an over-approximation. During code generation, a pre-processing step inserts the conditional back into generated code. Non-static loop bounds and tensor accesses are represented as parameters in the polyhedral model. Statements that define those parameters are inserted just before the original statements that have non-static code. These techniques introduce approximations in the compiler. Their efficiency was demonstrated by and confirmed by PENCIL and Tiramisu .\nAcorns optimizes the CNNs with \\textit{IA}-sparsity. It fuses operators in a computation graph of a deep CNN, followed by sparse layout conversion (which ensures that dense/sparse tensors produced by each operator are compatible with the next operation), followed by code optimization and code generation. Acorns introduces a data layout for exploiting the sparsity structure of input data in certain domains (face detection, LiDAR, etc.) where only certain data regions are NZs. For code optimization and generation, the compiler processes a set of template codes for CNN operators (e.g., convolution, pooling) and applies optimizations such as loop tiling, vectorization, and weight packing. 
It does not implement advanced loop-nest optimizations like iteration space skewing.
TACO uses a dedicated representation (iteration graphs) to generate code for sparse tensor operations and a scheduling language to guide code optimization.
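The CSR-style traversal in the pseudo-code shown earlier in this subsection can be made concrete. The sketch below is a runnable version with illustrative tensor shapes, where the weights of each output channel are stored as (offset, value) pairs in CSR-like arrays so that only non-zero coefficients are visited:

```python
def sparse_conv(in_flat, row_ptr, col_idx, value, out_C, out_H, out_W):
    """Direct convolution with W-sparsity: per output channel, weights
    are kept as (flattened offset, value) pairs in CSR-like arrays.
    Shapes and the layout convention are illustrative."""
    out = [[[0.0] * out_W for _ in range(out_H)] for _ in range(out_C)]
    for c_o in range(out_C):
        for j in range(row_ptr[c_o], row_ptr[c_o + 1]):
            coeff = value[j]
            offset = col_idx[j]  # flattened spatial offset of the weight
            for y in range(out_H):
                for x in range(out_W):
                    out[c_o][y][x] += coeff * in_flat[y * out_W + x + offset]
    return out

# Toy example: 1 output channel, 2x2 output, two non-zero weights
in_flat = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
out = sparse_conv(in_flat, row_ptr=[0, 2], col_idx=[0, 1],
                  value=[1.0, 0.5], out_C=1, out_H=2, out_W=2)
```

Note that the trip count of the `j` loop and the input accesses depend on the run-time contents of `row_ptr` and `col_idx`, which is precisely the non-static behavior that complicates compile-time analysis.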
The main advantage of this approach is that it allows a user to have full control over how code should be optimized. This is important because fully automatic optimization techniques do not always succeed in providing the best performance.
Semi-automatic deep learning compilers usually provide a library of highly optimized deep learning operators. The compiler then only needs to decide automatically whether to apply certain optimizations such as operator fusion. All other optimizations are encoded manually in the library using scheduling commands. This minimizes the number of decisions that the compiler needs to make and thus guarantees the best possible performance. Note that semi-automatic compilers usually also have automatic optimization modules, but such modules can be disabled if necessary.
\mysubsubsection{Fully automatic compilers} Tensor Comprehensions and Diesel are examples of fully automatic compilers for deep learning. Other examples of fully automatic compilers include PENCIL , Pluto , and Polly . All of them use the Pluto algorithm to automatically optimize code (choosing the schedule of statements). The main idea of the Pluto algorithm is to use integer linear programming to model the problem of automatic code optimization, where the constraints are the dependencies of the program and the objective function is the minimization of the distance between producer and consumer statements. Other compilers such as PolyMage use a custom algorithm for automatic optimization.
None of these compilers has a scheduling language, so they do not allow the user fine-grained control over optimizations. Although fully automatic compilers provide productivity, they may not always obtain the best performance. Performance can be sub-optimal because they do not have a precise cost model to decide which optimizations are profitable.
For instance, the Pluto algorithm does not consider the redundant computations, data layout, or the complexity of the control flow of generated code.\n\\myparagraph{Cost models for automatic code optimization:} The goal of an automatic code optimization pass in a compiler is to find the best combination of code optimizations that minimizes the execution time. This can be modeled as a search problem where the search space is a set of combinations of code optimizations. Then, the compiler needs a search technique and a cost model to evaluate each combination. Classical compilers use hand-tuned cost models , while others use machine learning to build cost models . Both of these models do not precisely capture hardware complexity (different memory hierarchies, out-of-order execution, hardware prefetching, communication latency, etc.). Instead, state-of-the-art models are built using deep learning for better accuracy . For example, Ithemal is a cost model that predicts the throughput of a basic block of x86 instructions and gets less than half the error of state-of-the-art hand-tuned models (llvm-mca in LLVM and Intel’s IACA).", "id": "5cba33c7-04e8-4faa-a124-3d6bfae52a69", "level": "subsection", "origin_cites_number": 15, "parent_id": "cce74b6f-9cb1-4065-8a80-43648099f245", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Compiler Support" ], [ "subsection", "Compiler Optimizations" ] ], "subsections": [], "title": "Compiler Optimizations" }, { "cite_extract_rate": 0.30769230769230704, "cites": [ 3416, 7157, 41, 4379 ], "content": "Accelerators, such as Cambricon-X , Scaledeep , Thinker , and DnnWeaver , expose a high-level ISA where some instructions perform tensor operations (e.g., dot product, matrix-matrix multiplication, convolution, pooling, and sigmoid). 
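The search formulation underlying automatic code optimization — enumerate combinations of optimization choices and score each with a cost model — can be sketched as follows. The tile-size space and the stand-in cost function below are purely illustrative; a production compiler would query a hand-tuned or learned model instead:

```python
import itertools

def pick_best_schedule(candidates, cost_model):
    """Score each combination of optimization choices with a cost
    model and keep the cheapest one (exhaustive search sketch)."""
    return min(candidates, key=cost_model)

# Search space: (tile_i, tile_j, unroll factor) combinations
space = list(itertools.product([16, 32, 64], [16, 32, 64], [1, 2, 4]))

# Stand-in cost model: penalizes tiles that overflow a 2K-element
# "cache" and imbalanced tiles, and rewards unrolling. A real model
# would predict execution time on the target hardware.
def cost(c):
    ti, tj, u = c
    footprint_penalty = max(0, ti * tj - 2048)
    imbalance = abs(ti - tj)
    return footprint_penalty + imbalance + 8 / u

best = pick_best_schedule(space, cost)
```

Even this toy space has 27 points; realistic spaces (tiling at several levels, fusion, ordering, vectorization) grow combinatorially, which is why the accuracy and speed of the cost model dominate the quality of automatic optimization.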
They simplify the compiler's mission because it can invoke high-level operations instead of generating and optimizing low-level code. However, it still has to manage data copies automatically. This subsection describes such high-level ISAs used by accelerators and machine code generation.
\mysubsubsection{Instruction sets} For tensor computations on hardware accelerators, ISAs typically feature instructions for arithmetic, logical, and data transfer operations with matrix, vector, and scalar data. Layers of ML models feature loops iterating thousands of times; dynamic instances of repetitive instructions can significantly increase the bandwidth requirements for delivering them to PEs at each cycle and the energy consumption. To mitigate such overheads, accelerators are designed with an array of vector or SIMD PEs. This allows PEs to process a single instruction for performing multiple computations on the blocks of tensors. Alternatively, PEs contain additional control logic such that they process an instruction once, but repeatedly perform the sequence of execution for a certain interval.
Cambricon ISA for machine learning contains instructions for matrix and vector processing with arithmetic and logic operations, control (conditional branch and jump), and data transfer. Each operand of the instruction is either an immediate value or one of the 64 32b general-purpose registers. The registers are used for temporarily storing scalars or register-indirect addressing of the on-chip scratchpad memory. The tensor blocks are communicated between computational units from the on-chip scratchpad, which is transparent to the compiler and programmers. The instructions support commonly used primitives in various ML models, e.g., multiplication, addition, subtraction, and division operations on matrices and vectors.
It also supports max-pooling with a vector-greater-than-merge instruction and provides dedicated instruction for random vector generation with uniform distribution of values within [0, 1]. For supporting weight update during the training of DNNs, Cambricon provides additional instructions such as outer product, scalar-matrix multiplication, and matrix-matrix addition. However, it lacks support for managing data in the local memory of PEs and configuring NoC for communication in various dataflows. Moreover, it does not provide specific instructions for handling sparsity, e.g., predicated execution of encoded sparse data. \nThe instruction set for Sticker consists of instructions for high-level operations. For processing each layer, one of the instructions is executed only once. It configures instruction registers and common control signals that correspond to the sparsity levels and tensor dimensions. Then, at a certain time interval, a dynamic 32b instruction executes for computing convolution over data blocks on PE-array. Meanwhile, the accelerator controller distributes the next instruction, if there is no collision between the current and the next instruction. It allows hiding the execution of other dynamic instructions including the write-back and encoding of outputs and transferring data between on-chip and off-chip memory.\n\\mysubsubsection{Finite state machines (FSMs)} Some accelerators use FSMs for PE executions. The parameters of FSMs or PE's control logic correspond to tensor shapes and target functionality, and they are configured once (e.g., through bit-streams ) before executing a model or a layer. Accelerator controllers (which usually initiate the data movement between on-chip and off-chip memory and configure PEs and NoC) can also contain FSMs. For example, in Thinker architecture , a finite-state controller is used for configuring the accelerator at three levels, i.e., PE-array level, model layer level, and PE level. 
Configuration word for PE-array level handles partitioning of the PE-array, and it points to the memory address of configurations for model layers. Each configuration word for a layer contains information about tensor dimensions and their memory addresses. Lastly, layer configurations for PEs correspond to PE functionality and the interval (loop iterations) of computations and idle time.\n\\mysubsubsection{Library support and code generation} The instructions for cycle-level executions or primitives are usually obtained offline. Accelerator system designers often provide users a template library that defines high-level primitives such as model layers or low-level primitives such as vector/matrix operations. Using these primitives, users can construct the model of their interest. Then, the low-level code is obtained automatically by the compiler or using the pre-defined optimized code . For example, Zhang et al. programmed Cambricon-X accelerator with a set of library functions (written in C/C++) for primitives like convolution and matrix/vector multiplication and addition. Chen et al. proposed a programming framework consisting of assembly language, an assembler, and run-time support for executing ML models with their Cambricon ISA. For executing common layers, it also replaced the primitives with pre-defined code blocks. \nTVM supports defining custom back-ends for accelerators, which was demonstrated using a vanilla accelerator with a matrix-multiply engine. For executing primitives on accelerators, TVM enables Tensorization , i.e., decoupling the target hardware intrinsic from the schedule while mapping ML operators. To demonstrate code generation for the vanilla accelerator, TVM enabled a driver library and runtime support that constructs the instructions and offloads them to the accelerator. Its code generation module translated the program into appropriate function calls of the runtime API. Moreau et al. 
leveraged the TVM stack and proposed a JIT compiler and a runtime system to generate code for a programmable VTA accelerator. \nIt is important that the accelerator can support multiple front-ends corresponding to different ML frameworks such as TensorFlow , PyTorch , and MXNet . Integration of the programming, compilation, and runtime environment with the common frameworks for ML application development is necessary for supporting different compact ML models. Leveraging the existing system stack (e.g., TVM) can provide such opportunities to accelerator system developers. Note that although TVM supports defining custom accelerator back-ends and can lower optimized mappings to accelerator-specific code, it currently does not provide support for sparse tensors.", "id": "4879cb49-5aea-4686-ba02-9effd2aedd5d", "level": "subsection", "origin_cites_number": 13, "parent_id": "cce74b6f-9cb1-4065-8a80-43648099f245", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Compiler Support" ], [ "subsection", "Accelerator ISAs and Code Generation" ] ], "subsections": [], "title": "Accelerator ISAs and Code Generation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec::future-directions}", "id": "4b09d196-2359-424f-bc4b-e7c987fe6803", "level": "section", "origin_cites_number": 0, "parent_id": "6220d89f-a486-420a-914e-59957895a313", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Trends and Future Directions" ] ], "subsections": [ "6c2f34b5-5f91-4599-987d-4e0d10ea32a6", "d52e14eb-0f98-4e17-bb3f-2760d39c8a3c", "ad9f2e9a-5ed2-40f2-b737-e0f2fac672e7", "99fbcd18-0e68-428e-8c53-012eab11e6cd" ], "title": "Trends and Future Directions" }, { "cite_extract_rate": 0.29411764705882304, "cites": [ 7806, 4332, 4380, 4381, 849 ], "content": "\\mysubsubsection{Hardware-aware 
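The flow described above — a compiler lowering high-level operators into driver-library calls that construct and offload accelerator instructions — can be sketched as below. The instruction fields and the tiny runtime are hypothetical illustrations, not the actual TVM/VTA API:

```python
class Runtime:
    """Hypothetical driver library: records high-level accelerator
    instructions and "offloads" them (here, interpreted on the host)."""
    def __init__(self):
        self.program = []

    def matmul(self, dst, a, b):
        self.program.append(("MATMUL", dst, a, b))

    def relu(self, dst, a):
        self.program.append(("RELU", dst, a))

    def run(self, tensors):
        # Stand-in for executing the instruction stream on a device
        for op, dst, *srcs in self.program:
            if op == "MATMUL":
                A, B = tensors[srcs[0]], tensors[srcs[1]]
                tensors[dst] = [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                                 for j in range(len(B[0]))]
                                for i in range(len(A))]
            elif op == "RELU":
                tensors[dst] = [[max(0, v) for v in row]
                                for row in tensors[srcs[0]]]
        return tensors

# "Code generation": a two-operator graph lowered to runtime calls
rt = Runtime()
rt.matmul("y", "x", "w")
rt.relu("out", "y")
result = rt.run({"x": [[1, -2]], "w": [[1, 0], [0, 1]]})
```

The point of the sketch is the division of labor: the compiler emits only coarse-grained operator invocations and tensor names, while the runtime constructs device instructions and manages data movement.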
compression techniques} The framework for exploring efficient model compression (either of quantization, pruning, and size reduction) should be aware of hardware features and provide directed search accordingly. For example, bit-widths of tensors that can be efficiently processed by different hardware platforms vary considerably (e.g., from multiples of 8-bits to arbitrary bit-widths). Accelerators typically support only uniform widths of tensors (activations and weights), and many accelerators do not support value sharing. Also, when hardware only supports widths that are multiple of 4 or 8 bits, quantization with other bit-widths requires zero paddings, which incurs inefficient storage and processing. Instead, the compression algorithm can opt for improving the accuracy, increasing sparsity, or trading off the bit-widths among layers for achieving higher compression and acceleration. Similarly, depending on the hardware support for fine-grained or block-sparsity, hardware-aware pruning can better achieve the compression objectives (model exploration time, performance, and energy efficiency, while meeting target accuracy). Efficiency can be further improved when compression techniques leverage execution models of hardware accelerators (e.g., energy-aware pruning ). Relatively simple logic modules of hardware accelerators have enabled recent techniques to estimate execution metrics through analytical cost models. Accommodating such cost models (including for different sparsity levels/patterns, precisions) enables the compression algorithms to select effective pruning ratios/structures, tensor shapes, and tensor precisions, which can help to achieve desired accelerations. 
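As a concrete instance of directing pruning toward hardware-supported patterns, the sketch below prunes a weight row with $k$:$n$ block-sparsity, keeping the $k$ largest-magnitude values in every block of $n$ consecutive weights. The 2:4 parameterization mirrors block patterns supported by some hardware but is only an illustrative choice here:

```python
def prune_k_of_n(row, k=2, n=4):
    """Keep the k largest-magnitude weights in each block of n
    consecutive weights and zero the rest (k:n block-sparsity)."""
    out = list(row)
    for start in range(0, len(row), n):
        block = list(range(start, min(start + n, len(row))))
        # Indices of the k largest-magnitude weights in this block
        keep = set(sorted(block, key=lambda i: abs(row[i]),
                          reverse=True)[:k])
        for i in block:
            if i not in keep:
                out[i] = 0.0
    return out

pruned = prune_k_of_n([0.9, -0.1, 0.4, 0.05, -0.7, 0.2, 0.3, -0.8])
```

Because every block carries exactly $k$ non-zeros, the resulting tensor has a fixed, hardware-friendly structure, in contrast to fine-grained pruning whose irregular non-zero positions require more elaborate indexing support.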
\n\\begin{figure}[!t]\n\\centering\n\\centerline{\\includegraphics[width=0.75\\linewidth]{figures/codesign}}\n\\caption{Co-designs can enable efficient accelerations of compact models.}\n\\label{fig::codesign}\n\\end{figure}\n\\mysubsubsection{Joint and automated exploration of sparsity, precision, and value similarity} Recent compression techniques typically employ structured or fine-grained data pruning during training with a fixed precision of tensors. Techniques for adaptive quantization often do not explore pruning. Joint explorations of pruning and quantization may achieve high compression due to the interplay of these compression mechanisms. For instance, quantization can increase sparsity considerably , as more values can be represented as zero after compressing the range . Likewise, pruning may reduce bit-widths further since fewer non-zero values in the pruned model may be expressed with a much lower numeric range and precision. Moreover, such compression techniques do not leverage temporal and spatial value similarity in inputs, outputs, or weights. So, joint exploration algorithms may be developed that use multiple compression strategies during training and automatically explore combinations that compress the model further. Recent techniques for automated explorations include CLIP-Q , , and . Exploring a wide range of compression combinations during the training may not be feasible. Therefore, model designers may reduce the space of compression choices by limiting effective options before beginning resource-extensive training, and if required, further limiting the search space by quick evaluations with a pre-trained model and fine-tuning. \nCompression benefits achieved through joint explorations need to be translated into efficient hardware accelerations. 
So, the exploration heuristic should not preclude experts from expressing a directed search for hardware-friendly executions, e.g., specifying pruning with 1D or $k$:$n$ block-sparsity, constraints for bit-widths, tolerable accuracy loss, etc. Moreover, the heuristic should also provide automated optimization/exploration of hyperparameters (including using cost models of accelerators). This is because the compression algorithm needs to adjust the strategy of pruning or quantization and its hyperparameters. For instance, the pruning algorithm needs to find out the pruning ratio for each iteration (epoch); pruning mechanism (which values to prune, e.g., below a certain threshold); pruning pattern (fine-grain, block size); bit-widths of tensors (quantization). All such hyperparameters or strategies need to be adjusted automatically (to an extent) such that the memory footprint or computations are greatly reduced, with no or tolerable accuracy loss.\n\\mysubsubsection{Value-aware neural architecture search (NAS) and accelerator/model co-designs} Techniques for NAS or AutoML can automatically obtain efficient models that surpass the accuracy of models devised by human developers. However, there remains scope for considerably improving NAS for obtaining highly compact models. Recent techniques have explored accelerator/model co-designs that support quantized models and layers of different shapes. However, the efficiency can be further amplified by including the sparsity and adaptive bit-widths of model layers and analytically considering their implications on hardware accelerators. \nA major challenge faced by the model search techniques and automated accelerator/model co-designs is the vast search space. As Fig. \\ref{fig::codesign} shows, explorations can be performed for (i) ML models (i.e., NAS) , (ii) compression strategies (e.g., automated pruning and quantization) , (iii) mappings of models on accelerators , and (iv) specifications of hardware accelerators . 
The explorations of (i) and (ii) directly impact compression and accuracy, while search optimizations for (iii) and (iv) affect the performance and energy-efficiency of the accelerator for given models. Among these exploration spaces, NAS can be significantly time-consuming (several GPU days ), followed by automated model compression (e.g., ). Therefore, the resultant joint space for value-aware NAS and accelerator/model co-designs is many-folded. So, it may require notable efforts for developing automated exploration of co-designs that can obtain extremely efficient and accelerator-friendly compact models.\n\\mysubsubsection{Facilitating structured computations of sparse tensors} Designers may opt for accelerators that are effective for structured computations of dense tensors, e.g., systolic arrays (as near-data accelerators or coupled to processor cores) and in-memory processing with resistive crossbars. While sparsity or size reduction of tensors may need to be leveraged, significant design modifications are often infeasible due to design requirements (area/power budgets) or the increase in complexity of the system stack. So, techniques for pre-processing can be developed, which can arrange structured dense regions for feeding underlying engines or achieve structured data through sparsification/reorganization at almost no accuracy loss. Such pre-processing can be done on additional hardware modules or the host processor that handles the non-performance-critical tasks. 
Such disjoint mechanisms can obviate heavy design modifications in systolic arrays (e.g., ) or in-memory/near-data processing engines (e.g., ReCom , SNNrram ) while leveraging various sparsity and value similarity opportunities across different models.", "id": "6c2f34b5-5f91-4599-987d-4e0d10ea32a6", "level": "subsection", "origin_cites_number": 17, "parent_id": "4b09d196-2359-424f-bc4b-e7c987fe6803", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Trends and Future Directions" ], [ "subsection", "Hardware/Software/Model Co-designs" ] ], "subsections": [], "title": "Hardware/Software/Model Co-designs" }, { "cite_extract_rate": 0.18181818181818102, "cites": [ 4363, 4379 ], "content": "\\mysubsubsection{Framework for analyzing performance gains of accelerators due to sparsity} Given that tensors of several ML models are sparse, it is important to design accelerator systems that can exploit performance gains for multiple models through low-cost hardware modules and enhanced software support. As we discussed in sections \\ref{sec::sparse-data-coding}--\\ref{sec::software-support}, each enhancement presents multiple implementation choices at the hardware or software level. Although crafting a cycle-level simulation infrastructure for such a wide design space may be infeasible, a data-driven quantitative model can be significantly helpful for design explorations. It can process the actual data (or discover distributions of zeros), provide high-level modeling of common choices, and estimate the performance gains for each combination of the implementation choices. For newer models or functionality, hardware designers can run through a set of implementation choices in an early design phase. They can explore the implications of sparsity for the desired choice of encoding, data extraction logic, functional units, NoC, load balancing, and dataflow mechanism. 
\n\\mysubsubsection{Accelerator design frameworks for compact models} Several frameworks for developing and simulating FPGA or ASIC based accelerators have recently been proposed, including DNNWeaver , DNNBuilder , T2S-Tensor , and HeteroCL for FPGAs and NVDLA , VTA , MAGNet , MAERI , and AutoDNNChip for specialized accelerators. Similarly, hardware construction languages or representations such as Chisel and $\\mu$IR enable expressing microarchitectural features through high-level primitives. Such infrastructures are key for the community since they can serve as a good learning resource for training the new professionals and provide a kick-starter baseline for developing new design features. \nHowever, most frameworks support designs for dense tensors of fixed bit-widths and lack support for sparsity-tailoring features. Such frameworks can provide some pre-built modules for encoding/extracting sparse data (e.g., with common formats like RLC, bitmap, or for block-sparsity), dynamic load balancing or data reorganization, configurable functional units, and configurable interconnects for sparse and bit-adaptive computing, etc. Even with limited features, they may serve as reusable logic that can be leveraged by designers for quick prototyping and design explorations. 
Further, abstractions and specifications for constructing sparsity-tailored hardware and dataflows can enable automated and efficient design explorations and easier programming of accelerators.", "id": "d52e14eb-0f98-4e17-bb3f-2760d39c8a3c", "level": "subsection", "origin_cites_number": 11, "parent_id": "4b09d196-2359-424f-bc4b-e7c987fe6803", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Trends and Future Directions" ], [ "subsection", "Design Tools and Frameworks" ] ], "subsections": [], "title": "Design Tools and Frameworks" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 4367 ], "content": "While there have been significant advances in performing inference on hardware accelerators, efficient training of the models on hardware accelerators has received relatively little attention. Training has been done in high-performance computing environments containing CPU and GPU platforms and recently on FPGAs and TPU accelerators. Hardware accelerators can offer significant benefits to the model training in both edge and datacenter-scale computing environments, and they can notably improve performance and energy efficiency, respectively. In particular, they are promising for enabling online learning on edge devices through compact models.\nAccelerators, such as , ScaleDeep , and HyPar , have been proposed for efficiently training the models. However, they either do not leverage sparsity, or may not be efficiently utilized for irregular-shaped tensors, or lack support for various precisions of weights, activations, gradients, and weight updates. This presents further opportunities for performance gains and energy efficiency. 
Additionally, designers can leverage cross-layer optimizations (e.g., by reusing the data of gradients during back-propagation) and support mixed-precision of tensors during the training of compact models.", "id": "ad9f2e9a-5ed2-40f2-b737-e0f2fac672e7", "level": "subsection", "origin_cites_number": 3, "parent_id": "4b09d196-2359-424f-bc4b-e7c987fe6803", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Trends and Future Directions" ], [ "subsection", "Accelerating Training of ML Models" ] ], "subsections": [], "title": "Accelerating Training of ML Models" }, { "cite_extract_rate": 0, "cites": [], "content": "In this work, we considered a wide variety of techniques that leverage sparsity for the machine learning domain, which represents an enormous research effort. Many other domains face similar challenges in exploiting sparsity, and accelerators have been proposed for some of the more processing-intensive domains; this includes graph processing , database operations , genomics , and compression . In some cases, computation primitives even extend across domains. For example, finding intersecting non-zeros is analogous to joins in a database context . 
Applying the lessons learned from extensive research on sparsity in an ML context can likely speed innovation in a broader context.", "id": "99fbcd18-0e68-428e-8c53-012eab11e6cd", "level": "subsection", "origin_cites_number": 7, "parent_id": "4b09d196-2359-424f-bc4b-e7c987fe6803", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Trends and Future Directions" ], [ "subsection", "Applying Techniques for Sparsity to Other Domains" ] ], "subsections": [], "title": "Applying Techniques for Sparsity to Other Domains" }, { "cite_extract_rate": 0.631578947368421, "cites": [ 7633, 4384, 4385, 7632, 4382, 885, 547, 8770, 872, 4333, 4383, 8771 ], "content": "\\label{sec::related-works}\n\\textbf{Deep learning models and their applications:} Surveys described different deep learning models along with different frameworks and datasets. Gu et al. discussed applications of CNNs in computer vision and language processing. Recent surveys have also discussed applications of deep learning in medical image analysis , biomedical applications , wireless and networking , and embedded systems . Elsken et al. surveyed techniques for neural architecture search.\n\\textbf{Compact models:} Cheng et al. surveyed techniques for parameter pruning and low-rank factorization. Wang et al. surveyed techniques for pruning, precision lowering, weight sharing, low-rank factorization, and knowledge distillation. Deng et al. described techniques to obtain compact models including sparsification, quantization, tensor decomposition, and joint-way compression. \n\\textbf{Hardware accelerators for dense ML models:} Shawahna et al. surveyed FPGA accelerators for processing dense tensor computations of deep learning applications. Venieris et al. 
discussed different CNN-to-FPGA toolchains and described their hardware architectures, design space exploration techniques, and support for different precisions of tensors. They also compared execution metrics of the designs obtained with various toolchains and those with the previously proposed FPGA accelerators for CNNs. Sze et al. presented a survey about efficiently executing DNNs on hardware accelerators. It described different DNNs, different compression techniques for compact models, and optimized dataflows for spatial architectures. Reuther et al. benchmarked executions of different ML accelerators. Li et al. discussed different ML frameworks and compilers for deep learning models.\n\\textbf{Hardware accelerators for compact ML models:} Mittal surveyed executing compact models, including BNNs, on FPGAs. It also discussed processing convolutions with the Winograd algorithm and executions on multiple FPGAs. Deng et al. surveyed hardware accelerators that support bit-adaptive computing and the data extraction modules for leveraging the sparsity of inputs, weights, or outputs. Du et al. recently proposed MinMaxNN system for dynamically switching NN models. They surveyed techniques for designing self-aware NN systems (which can continuously sense information from the environment and dynamically react), including leveraging sparsity and tensor quantization. Wang et al. surveyed hardware implementations for processing tensors of lower precisions (binary, ternary, and logarithmic quantizations). Ignatov et al. benchmarked executions of quantized deep learning models on mobile AI accelerators.\nIn contrast to the above surveys, this work highlights sources of sparsity and size reduction of tensors in ML models and challenges in efficiently executing them on hardware accelerators. 
Then, it surveys and discusses the corresponding hardware and software support, including encodings and extraction of sparse data, sparsity-aware dataflows, memory management and on-chip communication of sparse tensors while leveraging data reuse, load balancing of computations, and compiler support. It also discusses techniques for computations of mixed-precision and value-shared sparse tensors.", "id": "92b659a6-eb60-494f-84a3-2859bf5b052c", "level": "section", "origin_cites_number": 19, "parent_id": "6220d89f-a486-420a-914e-59957895a313", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Related Work" ] ], "subsections": [], "title": "Related Work" }, { "cite_extract_rate": 0.461538461538461, "cites": [ 4348, 7168, 2671, 4386, 7634, 4335 ], "content": "For efficient and hardware-friendly processing, compact deep learning models have been designed. They consume less storage and computations and consist of tensors with considerable sparsity, asymmetric shapes, and variable precisions. While these compact models can be accelerated on hardware accelerators efficiently, it requires special hardware and software support. We have highlighted challenges in efficiently accelerating their sparse and irregular tensor computations. Leveraging sparsity, especially unstructured, requires a significant redesign to store, extract, communicate, compute, and load-balance only non-zeros. Moreover, the sparsity levels and patterns due to various sources lead to unique challenges and solutions in hardware/software/model co-designs. \nIn this article, we have discussed how exploiting sparsity effectively depends on tailoring the data encoding and extraction, dataflow, memory bank structure, interconnect design, and write-back mechanisms. We provided an overview of corresponding enhancements in accelerator designs and their effectiveness in exploiting sparsity. 
Our categorization of different techniques shows how they leverage structured or unstructured sparsity of weights or activations during learning or inference of ML models (Tables \\ref{tab:overview-sparse-accel}, \\ref{tab:target-sparsity}). For recent DNNs, we analyzed achievable accelerations for a few popular accelerators (section \\ref{sec::sparse-dnn-acceleration-case-study}). The analysis showed that accelerators exploit moderate sparsity and achieve high speedups as sparsity increases. However, exploiting high or hyper sparsity can provide considerable further opportunities, which would also need efficient mechanisms for data extraction and load balancing. Also, configurable architectures for NoCs, functional units, and buffers are required for catering to various functionalities and metadata management.\nOur analysis of sparsity-encodings describes their storage efficiency for various sparsity levels and the decoding requirements. While bitmaps and RLC/CSC formats are commonly used for moderate and high sparsity, respectively, storage efficiency can be improved with block-sparse tensors (especially at hyper sparsity). We have introduced a taxonomy for non-zero extraction techniques that are used for feeding the functional units of PEs. Existing data extraction mechanisms (e.g., in EIE , EyerissV2 , Cambricon-X/S ) exploit moderate sparsity. However, they may not extract enough NZs at high or hyper sparsity of large tensors (e.g., sparse BERT ), achieving lower speedups. We also discuss how block-sparsity can simplify data extraction and facilitate balanced computations. For exploiting diverse sparsity across tensors of different models, designers can explore multiple or configurable mechanisms for decoding and extraction of non-zeros. 
However, compressed tensors allow to fit and reuse more data in on-chip memory, which reduces accesses to off-chip memory and overall latency. We have discussed techniques for memory bank management to support unstructured accesses for sparse computations and hiding the memory access latency. At high or hyper sparsity, execution may become \\emph{bandwidth-bounded}, as enough data may not be prefetched always. Hence, techniques for efficient data management (e.g., cross-layer, on-chip data reuse) and exploiting high bandwidths need to be explored. Different accelerator designs have used various interconnects for the distribution of operands, reduction of partial outputs, and collecting the outputs. They vary in terms of the bandwidth requirement and exploiting spatial data reuse. \\emph{Configurable interconnects} (e.g., in EyerissV2 , SIGMA ) are required for accelerating different DNNs of diverse sparsity, functionality, and tensor shapes, since they can support a mix of communication patterns. They are important for enabling \\emph{asymmetric} spatial accumulations of partial outputs (for sparse tensor computations) and concurrent spatial processing of different groups, e.g., for DW-CONV. \nProcessing compressed tensors can impose significant maneuvering efforts in the PE architecture design. We discuss further opportunities including configurable designs of functional units for efficient vector processing and flexible sparsity-aware dataflows for high utilization across variations in sparsity and functionality of different layers. We also surveyed techniques for approximated computing through multiplier-free PEs and leveraging temporal and spatial similarity of values, which improve execution efficiency further. Sparse tensor computations over different PEs can be highly imbalanced. We have surveyed different techniques that sustain the acceleration by balancing the work through hardware modules for asynchronous computations or work sharing (e.g., EIE , ZENA ). 
Software-directed regularization such as structured sparsity eliminates load imbalance, e.g., in leveraging weight/activation sparsity for Cambricon-S and 50\\% weight sparsity for NVIDIA A100 . Techniques including data transformations and refactoring of DNN operators may achieve low-cost load balance, including for dynamic sparsity. We have also surveyed mechanisms for asynchronous write-backs of outputs and sparsity-aware encodings on the fly. Compilation for the accelerators requires the ability to efficiently express sparsity in intermediate representations, flexibly apply different compiler optimizations, and emit efficient accelerator-specific code. The survey has discussed techniques that can enable such support and open challenges. \nAccelerator/model co-designs can efficiently leverage various precision and value similarity of different tensors and induce sparsity for accelerator-friendly executions. Automated and joint explorations of accelerator-aware compression algorithms can advance acceleration opportunities further. We have highlighted future directions for such co-designs and the system stack development (section \\ref{sec::future-directions}). In individual sections, we have also discussed further opportunities for tailoring different hardware or software enhancements for sparsity. While our discussions focused on leveraging sparsity for ML models, exploiting diverse sparsity can also aid the efficient processing of applications of other domains .\nIn conclusion, while different accelerators and compression algorithms have been proposed for efficiently processing compact ML models, it remains an active research frontier. In particular, hardware/software/model co-designs and automated and joint explorations of tensor sparsity, adaptive quantization, shape reductions, and dataflow will likely provide further opportunities for innovations across the system. 
By boosting energy-efficient accelerations of learning and inference in the cloud and at the edge, such innovations can be anticipated to further improve the intelligence of various systems and applications. 
\n\\appendix[Hardware Accelerators Can \\\\ Exploit Sparsity Better]\nExploiting acceleration opportunities due to sparsity (especially unstructured) is relatively hard for execution on CPUs and GPUs . The performance of ML models can even degrade, as compared to the execution with dense data (e.g., for a GEMM, when unstructured $W$-sparsity is below 70\\% ). For executing AlexNet layers on GPUs, one study analyzed the speedup of processing CSR-encoded matrices with cuSPARSE over dense matrices with cuBLAS. The experiments showed only \\emph{limited} speedups (below 1.4$\\times$), or even slowdowns at high sparsity. This is because unstructured sparsity may yield poor data locality for scattered effectual NZs. Moreover, it is challenging to skip ineffectual computations and equally distribute the work among multiple threads or computational units of processor cores. Zhang et al. analyzed performance benefits of executing sparse models (LeNet, AlexNet, and VGG-16) on CPU (with sparse BLAS) and GPU (with cuSPARSE) platforms, as compared to processing dense models (with Caffe ). For average sparsity of 90.94\\%, they reported a geomean speedup of only 23.34\\% on the GPU and 110\\% more time on the CPU. In their sparsity-sensitivity analysis, CPU and GPU showed marginal speedup only at moderate or high sparsity due to non-trivial costs of sparse data processing. In contrast, for Cambricon-X , performance gains were reported for 5\\% or more sparsity due to its design tailored for sparse tensor computations. For hyper sparsity, it achieved high speedups (e.g., 15.5$\\times$ for CONV and 48.5$\\times$ for FC layer), as compared to executing dense tensors . Thus, with special support for sparse and irregular tensor computations, hardware accelerators can achieve notable speedups and efficiency. 
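To make the CSR encoding referenced above concrete, here is a minimal pure-Python sketch (our illustration, not any surveyed implementation): only non-zero values and their coordinates are stored, and a sparse matrix-vector product then visits only the effectual values.

```python
def to_csr(M):
    # Encode a dense matrix in CSR format: non-zero values,
    # their column indices, and per-row pointers into those arrays.
    vals, cols, ptr = [], [], [0]
    for row in M:
        for j, x in enumerate(row):
            if x != 0:
                vals.append(x)
                cols.append(j)
        ptr.append(len(vals))
    return vals, cols, ptr

def csr_matvec(vals, cols, ptr, x):
    # y = M @ x, skipping every ineffectual (zero-valued) multiplication.
    return [sum(vals[i] * x[cols[i]] for i in range(ptr[r], ptr[r + 1]))
            for r in range(len(ptr) - 1)]

# A 75%-sparse 4x4 example matrix: CSR stores 4 values instead of 16.
M = [[0, 2, 0, 0],
     [0, 0, 0, 3],
     [1, 0, 0, 0],
     [0, 0, 4, 0]]
vals, cols, ptr = to_csr(M)
y = csr_matvec(vals, cols, ptr, [1, 1, 1, 1])  # same result as dense M @ x
```

Accelerators differ from this sketch in that the traversal, extraction of matching non-zeros, and load balancing are done in hardware, which is precisely where the design choices surveyed above come in.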
\n\\section*{Acknowledgment}\nThis work was partially supported by funding from NSF grant CCF 1723476 - NSF/Intel joint research center for Computer Assisted Programming for Heterogeneous Architectures (CAPA). The authors thank anonymous reviewers for providing valuable feedback.\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\\bibliographystyle{IEEEtran}\n\\bibliography{ref.bib}\n\\end{document}", "id": "e04212f9-2787-4a9d-94a3-01623a9139c0", "level": "section", "origin_cites_number": 13, "parent_id": "6220d89f-a486-420a-914e-59957895a313", "prefix_titles": [ [ "title", "Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models:\\\\ A Survey and Insights" ], [ "section", "Summary" ] ], "subsections": [], "title": "Summary" } ]
105
[ 679, 7167, 7, 836, 9103, 7005, 885, 547, 4335, 4334, 38, 1759, 2912, 784, 4333, 97, 7168, 7634, 7299, 842, 4331, 194, 4332, 4330, 7430, 41, 7157, 7007, 7633, 8767, 4343, 4342, 1003, 7280, 848, 4346, 4339, 840, 7169, 4337, 4340, 1199, 4344, 4348, 4347, 4345, 8766, 4336, 4338, 844, 4349, 4341, 8768, 305, 502, 301, 4352, 4351, 4350, 4353, 4355, 7804, 4354, 7805, 4356, 439, 4357, 514, 4358, 4359, 4361, 4360, 4362, 8769, 4363, 4365, 4364, 4366, 4367, 4370, 4372, 4371, 4369, 887, 4368, 4374, 4373, 2671, 4377, 882, 4376, 4375, 4378, 3416, 4379, 7806, 4380, 4381, 849, 4384, 4385, 7632, 4382, 8770, 872, 4383, 8771, 4386 ]
1.087209
[ "Yun Peng", "Byron Choi", "Jianliang Xu" ]
Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art
2020
2020-08-26T09:56:30Z
cs.LG
Graphs have been widely used to represent complex data in many applications, such as e-commerce, social networks, and bioinformatics. Efficient and effective analysis of graph data is important for graph-based applications. However, most graph analysis tasks are combinatorial optimization (CO) problems, which are NP-hard. Recent studies have focused heavily on the potential of using machine learning (ML) to solve graph-based CO problems. Most recent methods follow a two-stage framework. The first stage is graph representation learning, which embeds the graphs into low-dimension vectors. The second stage uses machine learning to solve the CO problems using the embeddings of the graphs learned in the first stage. Works on the first stage can be classified into two categories, graph embedding methods and end-to-end learning methods. For graph embedding methods, the learning of the embeddings of the graphs has its own objective, which may not rely on the CO problems to be solved. The CO problems are solved by independent downstream tasks. For end-to-end learning methods, the learning of the embeddings of the graphs does not have its own objective and is an intermediate step of the learning procedure of solving the CO problems. Works on the second stage can also be classified into two categories, non-autoregressive methods and autoregressive methods. Non-autoregressive methods predict a solution for a CO problem in one shot. A non-autoregressive method predicts a matrix that denotes the probability of each node/edge being a part of a solution of the CO problem. The solution can be computed from the matrix using search heuristics such as beam search. Autoregressive methods iteratively extend a partial solution step by step. At each step, an autoregressive method predicts a node/edge conditioned on the current partial solution, which is then used to extend it. In this survey, we provide a thorough overview of recent studies of graph learning-based CO methods. 
The survey ends with several remarks on future research directions. \\keywords{Graph Representation Learning \\and Graph Neural Network \\and Combinatorial Optimization}
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "637cec56-3ca1-4c47-a516-630783fcfed0", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ] ], "subsections": [ "62b1af43-a3bd-4e6e-864b-ee180efd70ba", "94198133-cfd6-4775-88b5-1be5172a0b6a", "ea200033-9c6a-44ef-9913-a794ab1396bb", "0dd7256f-34f8-494d-abe3-bd16bf0e03a4", "21ddc588-a04c-4204-9c43-e565cf44ed48", "069c40b2-40c4-497f-ba83-45f6d98ae47c", "f665555f-63d7-4204-a663-9a97c0e84159" ], "title": "root" }, { "cite_extract_rate": 0.44444444444444403, "cites": [ 1648, 167, 8332, 8475, 1647, 280, 187, 9086, 7007, 1646, 215 ], "content": "Graphs are ubiquitous and are used in a wide range of domains, from e-commerce , to social networking~ to bioinformatics , . Effectively and efficiently analyzing graph data is important for graph-based applications. However, many graph analysis tasks are combinatorial optimization (CO) problems, such as the traveling salesman problem (TSP) , maximum independent set (MIS) , maximum cut (MaxCut) , minimum vertex cover (MVC) , maximum clique (MC) , graph coloring (GC) , subgraph isomorphism (SI) , and graph similarity (GSim) . These graph-based CO problems are NP-hard. In the existing literature on this subject, there are three main approaches used to solve a CO problem: exact algorithms, approximation algorithms, and heuristic algorithms. Given a CO problem on a graph $G$, exact algorithms aim to compute an optimum solution. Due to the NP-hardness of the problems, the worst-case time complexity of exact algorithms is exponential to the size of $G$. To reduce time complexity, approximation algorithms find a suboptimal solution that has a guaranteed approximation ratio to the optimum, with a worst-case polynomial runtime. Nevertheless, many graph-based CO problems, such as General TSP , GC , and MC , are inapproximable with such a bounded ratio. 
Thus, heuristic algorithms are designed to efficiently find a suboptimal solution with desirable empirical performance. Despite having no theoretical guarantee of optimality, heuristic algorithms often produce good enough solutions in practice.\nThe practice of applying machine learning (ML) to solve graph-based CO problems has a long history. For example, as far back as the 1980s, researchers were using the Hopfield neural network to solve TSP , . Recently, the success of deep learning methods has led to increasing attention being paid to this subject , , , . Compared to manual algorithm designs, \n ML-based methods have several advantages in solving graph-based CO problems. First, ML-based methods can automatically identify distinct features from training data. In contrast, human algorithm designers need to study the heuristics with substantial problem-specific research based on intuition and trial-and-error. Second, for a graph-based CO problem, ML has the potential to find useful features that are hard for human algorithm designers to specify, enabling better solutions . Third, an ML-based method can adapt to a family of CO problems. For example, S2V-DQN can support TSP, MVC, and MaxCut; GNNTS can support MIS, MVC, and MC. In comparison, a handcrafted algorithm designed for one CO problem is unlikely to be adaptable to other CO problems. 
On one hand, graph embedding methods embed the nodes of a graph into low-dimension vectors. The learned embedding vectors of the graph are input to downstream machine learning tasks to solve CO problems. Graph embedding has its own learning objective, which may not rely on the CO problems to be solved. The embeddings of the graph are fixed while the downstream task is solved. On the other hand, in end-to-end learning methods, graph representation learning does not have its own learning objective and is an intermediate step of the learning procedure of solving the CO problem. The embeddings learned are specific to the CO problem being solved. \nFor the second stage, existing works can be classified into two categories: non-autoregressive methods and autoregressive methods. On one hand, non-autoregressive methods predict a solution for a graph-based CO problem in one shot. For example, for the TSP problem on a graph, a non-autoregressive method predicts a matrix, where each element of the matrix is the probability of an edge belonging to a TSP tour. The TSP tour can be computed from the matrix using beam search. On the other hand, autoregressive methods iteratively extend a partial solution step by step. For the TSP problem, at each step, an autoregressive method predicts an edge conditioned on the current partial TSP tour, which is then used to extend it.\nThere have been several surveys of graph representation learning , , , , . However, existing surveys mainly focus on the graph representation learning models and their applications in node classification, link prediction or graph classification. In contrast, we focus on using graph learning to solve CO problems.\nThere have also been several previous surveys that have discussed ML-based CO methods , , . The present survey, however, has different emphases from previous studies. 
The survey focuses on branch and bound (B\\&B) search techniques for the mixed-integer linear programming (MILP) problem. Although many graph-based CO problems can be formulated using MILP and solved using the B\\&B method, most existing ML-based methods for solving graph-based CO problems focus on graph-specific methods. Mazyavkina et al. discuss RL-based CO methods. However, there are ML-based CO methods that do not use RL. This survey is not limited to RL approaches. Lamb et al. survey the GNN-based neural-symbolic computing methods. Symbolic computing is a broad field and graph-based CO is a topic of it. In contrast, we focus on the graph-based CO problems in this survey. \nThe rest of this survey is organized as follows. Section~\\ref{back} presents the notations and preliminaries. Section~\\ref{ga4gl} summarizes graph representation learning techniques. Section~\\ref{gl4ga} discusses the use of ML to solve graph-based CO problems. Section~\\ref{future} suggests directions for future research. Section~\\ref{conc} concludes this survey. Sec.~\\ref{sec-abbr} lists the abbreviations used in this survey.", "id": "62b1af43-a3bd-4e6e-864b-ee180efd70ba", "level": "section", "origin_cites_number": 27, "parent_id": "637cec56-3ca1-4c47-a516-630783fcfed0", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{back}\nIn this section, we present some of the notations and definitions that are frequently used in this survey. \nWe denote a graph by $G$ = $(V,E, {\\bf X})$,\nwhere $V$ and $E$ are the node set and the edge set of $G$,\nrespectively, ${\\bf X}^{|V|\\times d'}$ is the matrix of initial features of all nodes, and ${\\bf x}_u$ = ${\\bf X}$$[u,\\cdot]$ denotes the initial features of node $u$. 
We may choose $u \\in G$ or $u \\in V$ to denote a node of the graph, when the choice is more intuitive. Similarly, we may use $(u,v) \\in G$ or $(u,v) \\in E$ to denote an edge of the graph. The adjacency matrix of $G$ is denoted by ${\\bf A}$. The weight of edge $(u,v)$ is denoted by \n$w_{u,v}$. We use $\\bf P$ to denote the transition matrix, where ${\\bf P}[u,v] = w_{u,v}/\\sum_{v'\\in G} w_{u,v'}$. ${\\bf P}^k = \\prod_k {\\bf P}$ is the $k$-step transition matrix and $\\bf P$ is also called the 1-step transition matrix. We use $g$ to denote a subgraph of $G$ and $G\\backslash g$ to denote the subgraph of $G$ after removing all nodes in $g$. For a node $u\\in G,u\\not\\in g$, we use $g\\cup\\{u\\}$ to denote adding the node $u$ and the edges $\\{(u,v)|v\\in g, (u,v)\\in G\\}$ to $g$. \n$G$ can be a directed or undirected graph. If $G$ is directed, $(u,v)$ and $(v,u)$ may not both be present in $E$. $N^o(u)$ and $N^i(u)$ denote the outgoing and incoming neighbors of $u$, respectively. If $G$ is undirected, $N(u)$ denotes the neighbors of $u$. 
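As a small self-contained illustration of the transition matrices defined above (our example values, not from the survey), the 1-step and $k$-step matrices can be computed directly from the edge weights:

```python
# Edge weights w_{u,v} of a hypothetical 3-node weighted graph.
W = [[0.0, 1.0, 1.0],
     [1.0, 0.0, 2.0],
     [1.0, 2.0, 0.0]]

def transition_matrix(W):
    # P[u][v] = w_{u,v} / sum over v' of w_{u,v'}
    return [[w / sum(row) for w in row] for row in W]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def k_step(P, k):
    # P^k: the k-step transition matrix (k >= 1)
    R = P
    for _ in range(k - 1):
        R = matmul(R, P)
    return R

P = transition_matrix(W)   # 1-step transition matrix
P2 = k_step(P, 2)          # 2-step transition matrix
```

Each row of $P$ (and of any $P^k$) sums to 1, which is a quick sanity check for the random-walk-based embedding methods that use these matrices.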
\nTable~\\ref{notation_table} summarizes the notations of frequently used symbols.\n\\begin{table}[t]\n\\vspace{0ex}\n\\caption{Notations of frequently used symbols and their meaning}\n\\centering\n\\begin{scriptsize}\n\\vspace{-0ex}\n\\begin{tabular}{|c|l|c|c|}\n\\hline\nNotation & Description \\\\\n\\hline \\hline\n$G=(V,E,\\bf{X})$ & a graph\\\\ \\hline\n$\\bf A$ & the adjacent matrix of $G$\\\\ \\hline\n$u,v$ & node $u$ and node $v$ of $G$\\\\ \\hline\n${\\bf X}$ & a matrix of features of all nodes\\\\ \\hline\n${\\bf x}_u$ & the features of $u$, {\\it i.e.}, a row of $\\bf X$ \\\\\\hline\n${\\bf y}$ & a signal, {\\it i.e.}, a column of $\\bf X$\\\\ \\hline\n$\\mathcal N$$_u$ & neighborhood of $u$\\\\ \\hline\n${\\bf h}_u, {\\bf h}_G$ & embedding vector of a node $u$, a graph $G$\\\\ \\hline\n${\\vec H}$ & the matrix of embedding vectors of all nodes \\\\ \\hline\n$d$ & dimension of a vector \\\\ \\hline\n$f^l_\\theta$ & the $l$-th layer of a neural network with parameter $\\theta$\\\\ \\hline\n\\end{tabular}\\label{notation_table}\n\\end{scriptsize}\n\\vspace{0ex}\n\\end{table}\nA graph-based CO problem is formulated in Definition~\\ref{def:coproblem}.\n\\begin{definition}\\label{def:coproblem}\nGiven a graph $G$ and a cost function $c$ of the subgraphs of $G$, a\nCO problem is \nto find the optimum value of $c$ or the corresponding subgraph that produces that optimum value.\n\\end{definition}\nFor example, for a graph $G$, the maximum clique (MC) problem is to find the largest clique of $G$, and the minimum vertex cover (MVC) problem is to find the minimum set of nodes that are adjacent to all edges in $G$.", "id": "94198133-cfd6-4775-88b5-1be5172a0b6a", "level": "section", "origin_cites_number": 0, "parent_id": "637cec56-3ca1-4c47-a516-630783fcfed0", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "Notations and Preliminaries" ] ], "subsections": [ 
"232a3950-892c-4149-98e9-9b7be64741a3" ], "title": "Notations and Preliminaries" }, { "cite_extract_rate": 0, "cites": [], "content": "Most existing methods that use machine learning to solve the graph-based CO problem follow the two-stage framework, as illustrated in Fig.~\\ref{fig:overview}. Given an input graph, the first stage is to learn the representation of the graph in a low-dimension embedding space. The nodes or edges of the graph are represented as embedding vectors (or {\\it embeddings} for short). The techniques for the first stage are discussed in Sec.~\\ref{ga4gl}. The second stage uses machine learning to solve the CO problem using the embeddings of the graph learned in the first stage. The techniques for the second stage are discussed in Sec.~\\ref{gl4ga}. \nThere are mainly two ways to learn graph representation in the first stage. In the first way, the embeddings are learned by graph embedding methods. Graph embedding has its own learning objectives that may not rely on the objective of the CO problem to be solved in the second stage. The CO problem is solved as a downstream task of graph embedding and the gradient of the loss of the CO problem in the second stage will not be back-propagated to the first stage. In the second way, the CO problem is solved in the end-to-end manner. The first stage does not have its own learning objective and the gradient of the second stage is back-propagated to the first stage for learning the embeddings of the graph.\nThere are mainly two different approaches to solve the CO problems in the second stage, namely non-autoregressive methods and autoregressive methods.\nThe non-autoregressive methods predict a solution for a graph-based CO problem in one shot. A non-autoregressive method predicts a matrix that denotes the probability of each node/edge being a part of a solution. The solution of the CO problem can be computed from the matrix by \nsearch heuristics, {\\it e.g.}, beam search. 
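For instance, for the MVC problem defined earlier, a predicted per-node probability map can be decoded greedily into a feasible cover; a minimal sketch with hypothetical model outputs (this simple greedy pass stands in for the search heuristics such as beam search):

```python
def decode_vertex_cover(edges, prob):
    """Greedily decode a node-probability map into a feasible vertex cover.

    edges: list of (u, v) pairs of the input graph.
    prob:  dict node -> predicted probability of being part of the solution
           (hypothetical model outputs; normally produced by the learned model).
    """
    cover, uncovered = set(), list(edges)
    # Visit nodes from most to least likely according to the model.
    for u in sorted(prob, key=prob.get, reverse=True):
        if not uncovered:
            break
        if any(u in e for e in uncovered):   # u covers at least one open edge
            cover.add(u)
            uncovered = [e for e in uncovered if u not in e]
    return cover

# A 4-node path graph and hypothetical predicted probabilities.
edges = [(0, 1), (1, 2), (2, 3)]
prob = {0: 0.1, 1: 0.9, 2: 0.8, 3: 0.2}
cover = decode_vertex_cover(edges, prob)  # {1, 2} covers every edge
```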
The autoregressive methods compute the solution by iteratively extending a partial solution. At each time step, the node/edge that is used to extend the partial solution is predicted conditioned on the current partial solution. \n\\begin{figure}\n\\centering\n\\includegraphics[width = 12cm]{fig/arch/overview2.pdf}\n\\caption{Overview of the two stages of ML-based CO methods} \n\\label{fig:overview}\n\\end{figure}", "id": "232a3950-892c-4149-98e9-9b7be64741a3", "level": "subsection", "origin_cites_number": 0, "parent_id": "94198133-cfd6-4775-88b5-1be5172a0b6a", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "Notations and Preliminaries" ], [ "subsection", "Overview of Graph Learning Based CO Methods" ] ], "subsections": [], "title": "Overview of Graph Learning Based CO Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{ga4gl}\nIn this section, we survey the graph representation learning methods that have been applied to solve graph-based CO problems. 
In Sec.~\\ref{grl:emb}, we review the graph embedding methods, which learn the embeddings of the graph independently of the downstream task of solving the CO problem; and in Sec.~\\ref{grl:etoe}, we review the end-to-end learning methods that learn the embeddings of the graph as an intermediate step of solving the CO problem.", "id": "ea200033-9c6a-44ef-9913-a794ab1396bb", "level": "section", "origin_cites_number": 0, "parent_id": "637cec56-3ca1-4c47-a516-630783fcfed0", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "Graph Representation Learning Methods" ] ], "subsections": [ "332e9bac-b9a4-441f-b1ef-a40b6551e719", "7d401168-628a-4543-9c98-1b536e518f44" ], "title": "Graph Representation Learning Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{grl:emb}\nWe first review generalized SkipGram and AutoEncoder, two widely used models in graph embedding.", "id": "332e9bac-b9a4-441f-b1ef-a40b6551e719", "level": "subsection", "origin_cites_number": 0, "parent_id": "ea200033-9c6a-44ef-9913-a794ab1396bb", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "Graph Representation Learning Methods" ], [ "subsection", "Graph Embedding Methods" ] ], "subsections": [ "9f6320ca-945d-4675-a664-d9f104a32ce8", "804ab46f-ea1a-49d6-82fe-f88a9546bda8", "f21f6a6d-cec6-46bb-9635-1e66882ebe00", "8b9b70e1-2281-42d6-a6be-d3e754e1e570", "16f5011f-7ff7-4ffd-8767-ebe3566348f9" ], "title": "Graph Embedding Methods" }, { "cite_extract_rate": 1, "cites": [ 1684, 218, 282 ], "content": "The generalized SkipGram model is extended from the well-known SkipGram model for embedding words in natural language processing. It relies on the neighborhood ${\\mathcal N}_u$ of node $u$ to learn an embedding of $u$. 
The objective is to maximize the likelihood of the nodes in ${\\mathcal N}_u$ conditioned on $u$.\n\\begin{equation}\\label{loss1_skipgram}\n\\max {P}(v_1,v_2,...,v_{|{\\mathcal N}_u|} | u), v_i\\in {\\mathcal N}_u\n\\end{equation}\nAssuming conditional independence, ${P}(v_1,v_2,...,v_{|{\\mathcal N}_u|} | u)$ = $\\prod_{v_i\\in {\\mathcal N}_u} {P}(v_i | {\\bf h}_u)$. $P(v_i | {\\bf h}_u)$ can be defined as the softmax $\\frac{\\exp({\\bf h}_{v_i}^T {\\bf h}_u)}{\\sum_{v\\in G} \\exp({\\bf h}_{v}^T {\\bf h}_u)}$. Maximizing $\\prod_{v_i\\in {\\mathcal N}_u} {P}(v_i | {\\bf h}_u)$ is then equivalent to maximizing its logarithm. Hence, (\\ref{loss1_skipgram}) becomes\n\\begin{equation}\\label{loss2_skipgram}\n\\begin{array}{ll}\n& \\max \\sum_{v_i \\in {\\mathcal N}_u} \\log {P}(v_i | {\\bf h}_u)\\\\[0.2cm]\n & = \\max \\sum_{v_i \\in {\\mathcal N}_u} \\log \\frac{\\exp({\\bf h}_{v_i}^T {\\bf h}_u)}{\\sum_{v\\in G} \\exp({\\bf h}_{v}^T {\\bf h}_u)}\\\\[0.2cm]\n\\end{array}\n\\end{equation}\nSince computing the denominator of the softmax in (\\ref{loss2_skipgram}) is time-consuming, many optimization techniques have been proposed. Negative sampling is one of the most well-known techniques. Specifically, the nodes in the neighborhood ${\\mathcal N}_u$ of $u$ are regarded as positive samples of $u$. On the other hand, the nodes not in ${\\mathcal N}_u$ are considered negative samples of $u$. 
Then, maximizing the likelihood in Formula~\\ref{loss2_skipgram} can be achieved as follows:\n\\begin{equation}\\label{maxloss_skipgram}\n\\max \\log\\sigma({\\bf h}_v^T {\\bf h}_u) + \\sum_{i=1}^K {\\mathbb E}_{\\bar{v}\\sim { P_n}} \\log\\sigma (-{\\bf h}_{\\bar{v}}^T {\\bf h}_u),\n\\end{equation}\n\\noindent\nwhere $v$ is a positive sample of $u$, $\\bar{v}$ is a negative sample, ${P_n}$ is the probability distribution of negative samples, $\\bar{v}\\sim {P_n}$ means sampling a node from the probability distribution $P_n$, $K$ is the number of negative samples, $\\sigma$ is the sigmoid activation function, and $\\mathbb E$ is the expectation.\nTo conveniently adopt the gradient descent algorithms, maximizing an objective is often rewritten as minimizing its negative. Thus, the objective function of the generalized SkipGram model is to minimize the loss $\\cal L$ as follows:\n\\begin{equation}\\label{loss3_skipgram}\n\\min {\\mathcal L} = \\min -\\log\\sigma({\\bf h}_v^T {\\bf h}_u) - \\sum_{i=1}^K {\\mathbb E}_{\\bar{v}\\sim { P_n}} \\log\\sigma (-{\\bf h}_{\\bar{v}}^T {\\bf h}_u)\n\\end{equation}\nExisting studies on the generalized SkipGram model define the neighborhood in different ways. 
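The negative-sampling objective above can be sketched in plain Python (toy embedding values; a real implementation would use an ML framework and learn the vectors ${\\bf h}$ by gradient descent):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def neg_sampling_loss(h_u, h_pos, h_negs):
    """L = -log sigma(h_v . h_u) - sum_i log sigma(-h_neg_i . h_u)."""
    loss = -math.log(sigmoid(dot(h_pos, h_u)))
    for h_neg in h_negs:
        loss -= math.log(sigmoid(-dot(h_neg, h_u)))
    return loss

# Toy 2-dimensional embeddings (hypothetical values).
aligned = neg_sampling_loss([1.0, 0.0], [1.0, 0.0], [[-1.0, 0.0]])
opposed = neg_sampling_loss([1.0, 0.0], [-1.0, 0.0], [[1.0, 0.0]])
# The loss is lower when the positive pair is aligned in the embedding
# space and the negative sample points away: aligned < opposed.
```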
For example, LINE defines the 1-hop neighbors as the neighborhood in order to preserve the second-order proximity; DeepWalk uses the random walk to define the neighborhood for preserving more global structural information of $G$.", "id": "9f6320ca-945d-4675-a664-d9f104a32ce8", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "332e9bac-b9a4-441f-b1ef-a40b6551e719", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "Graph Representation Learning Methods" ], [ "subsection", "Graph Embedding Methods" ], [ "subsubsection", "Generalized SkipGram" ] ], "subsections": [], "title": "Generalized SkipGram" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 167 ], "content": "\\label{GEMB_autoencoder}\nAutoEncoder is composed of an encoder and a decoder. \nFor a graph-based CO problem, the encoder encodes the nodes in the graph into $d$-dimensional embedding vectors. The decoder then predicts a solution to the CO problem using the node embeddings ({\\it e.g.}, PointerNet ). \nFormally, the encoder is a function \n\\[\nenc: {\\mathbb R}^{d'} \\to {\\mathbb R}^d\n\\]\n\\noindent\n$enc({\\bf x}_u)$ embeds node $u$ into ${\\bf h}_u \\in {\\mathbb R}^{d}$. \nThere are several different types of decoder. For instance, the inner product-based decoder, the reconstruction-based decoder, and the classification-based decoder are three widely-used decoders.\nThe inner product-based decoder is a function \n\\[\ndec: {\\mathbb R}^d\\times {\\mathbb R}^d \\to {\\mathbb R}\n\\]\n\\noindent\n$dec({\\bf h}_u, {\\bf h}_v)$ returns the similarity of ${\\bf h}_u$ and ${\\bf h}_v$. Let $sim(u,v)$ denote the proximity of $u$ and $v$ in $G$ ({\\it e.g.}, $A[u,v]$ in ). 
The objective function of the inner product decoder is to minimize the loss \n\\begin{equation}\\label{equ:loss_inner_prod_ae}\n{\\mathcal L} = \\sum_{(u,v)\\in {\\mathcal D}} dist(dec({\\bf h}_u,{\\bf h}_v), sim(u,v)),\n\\end{equation}\n\\noindent\nwhere $\\mathcal D$ is the training dataset and $dist$ is a user-specified distance function.\nThe reconstruction-based decoder is a function \n\\[\ndec: {\\mathbb R}^{d} \\to {\\mathbb R}^{d'}\n\\]\n\\noindent\n$dec({\\bf h}_u)$ outputs $\\hat{{\\bf x}}_u$ as the reconstruction of ${\\bf x}_u$. The objective function is to minimize the reconstruction loss\n\\[\n{\\mathcal L} = \\sum_{u\\in G} ||dec({\\bf h}_u) - {\\bf x}_u||^2_2\n\\]\nThe encoder and the decoder can be implemented by different types of neural networks, {\\it e.g.}, the multi-layer perceptron (MLP) or the recurrent neural network (RNN) .", "id": "804ab46f-ea1a-49d6-82fe-f88a9546bda8", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "332e9bac-b9a4-441f-b1ef-a40b6551e719", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "Graph Representation Learning Methods" ], [ "subsection", "Graph Embedding Methods" ], [ "subsubsection", "AutoEncoder" ] ], "subsections": [], "title": "AutoEncoder" }, { "cite_extract_rate": 0.375, "cites": [ 218, 1010, 7495 ], "content": "This subsection reviews the generalized SkipGram based graph embedding methods DeepWalk , Node2Vec , and Struc2Vec ; and the subgraph based graph embedding methods DeepGK , Subgraph2Vec , RUM , Motif2Vec , and MotifWalk . \nDeepWalk was one of the earliest works to introduce the generalized SkipGram model to graph embedding. The main idea of DeepWalk is to sample a set of truncated random walks of the graph $G$, and the nodes within a window of a random walk are regarded as co-occurring. The neighborhood of a node consists of the nodes that co-occur with it. 
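The truncated-random-walk sampling and the window-based co-occurrence can be sketched as follows (hypothetical toy graph; the walk length and window size are illustrative):

```python
import random

def random_walk(adj, start, length, rng):
    """Sample one truncated random walk of at most `length` nodes from `start`."""
    walk = [start]
    while len(walk) < length and adj[walk[-1]]:
        walk.append(rng.choice(adj[walk[-1]]))
    return walk

def window_pairs(walk, window):
    """(center, context) pairs: nodes within `window` positions co-occur."""
    pairs = []
    for i, u in enumerate(walk):
        for j in range(max(0, i - window), min(len(walk), i + window + 1)):
            if j != i:
                pairs.append((u, walk[j]))
    return pairs

# Hypothetical unweighted graph given as adjacency lists.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walk = random_walk(adj, start=0, length=5, rng=random.Random(0))
pairs = window_pairs(walk, window=2)  # training pairs fed to the SkipGram model
```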
DeepWalk uses the generalized SkipGram model with the negative sampling (refer to Formula~\\ref{loss3_skipgram}) to learn the graph embedding. \nTo incorporate more flexibility into the definition of node neighborhood, Node2Vec introduces breadth-first search (BFS) and depth-first search (DFS) in neighborhood sampling. The nodes found by BFS and DFS can capture different structural properties. Node2Vec uses the second-order random walk to simulate the BFS and DFS. (``second-order'' means that when the random walk is at the step $i$, the random walk needs to look back to the step $i-1$ to decide the step $i+1$.) Two parameters $p$ and $q$ are introduced to control the random walk. $p$ controls the probability of returning to an already visited node in the following two steps; and $q$ controls the probability of visiting a close or a far node in the following two steps. Let $u_{i}$ denote the current node in the walk and $u_{i-1}$ denote the previous node. The (unnormalized) probability of the random walk visiting the next node $u_{i+1}$ is defined below.\n\\begin{equation}\\label{def:n2v2ndrw}\n{P}(u_{i+1}|u_{i}) = \n\\begin{cases}\n\\alpha_{u_{i-1},u_{i+1}}\\times w_{u_i,u_{i+1}} & \\text{if } (u_{i},u_{i+1})\\in G\\\\\n0 & \\text{otherwise}\n\\end{cases}\n\\end{equation}\n\\[\n\\alpha_{u_{i-1},u_{i+1}} = \n\\begin{cases}\n1/p & \\text{if } dist(u_{i-1},u_{i+1}) = 0\\\\\n1 & \\text{if } dist(u_{i-1},u_{i+1}) = 1\\\\\n1/q & \\text{if } dist(u_{i-1},u_{i+1}) = 2\\\\\n\\end{cases}\n\\]\nwhere $dist(u_{i-1},u_{i+1})$ is the shortest distance from $u_{i-1}$ to $u_{i+1}$ and $w_{u_i,u_{i+1}}$ is the weight of the edge $(u_{i},u_{i+1})$.\nAn example is shown in Fig.~\\ref{fig:node2vec}. The current node of the random walk is $u_i$. There are four nodes $u_{i-1}$, $v_1$, $v_2$ and $v_3$ that can be the next node of the random walk. The probability of selecting each of them as the next node is shown in Fig.~\\ref{fig:node2vec}. 
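The biased transition weights of this second-order walk can be sketched as follows (toy graph; the values of `p` and `q` are illustrative):

```python
def next_node_weights(adj, prev, cur, p, q):
    """Unnormalized Node2Vec transition weights out of `cur`, given the
    previous node `prev` of the second-order random walk.

    adj: dict node -> dict of neighbor -> edge weight.
    """
    weights = {}
    for nxt, w in adj[cur].items():
        if nxt == prev:            # distance(prev, nxt) == 0
            alpha = 1.0 / p
        elif nxt in adj[prev]:     # distance(prev, nxt) == 1
            alpha = 1.0
        else:                      # distance(prev, nxt) == 2
            alpha = 1.0 / q
        weights[nxt] = alpha * w
    return weights

# Triangle {0,1,2} plus a pendant node 3; all edge weights 1 (hypothetical).
adj = {
    0: {1: 1.0, 2: 1.0},
    1: {0: 1.0, 2: 1.0, 3: 1.0},
    2: {0: 1.0, 1: 1.0},
    3: {1: 1.0},
}
w = next_node_weights(adj, prev=0, cur=1, p=2.0, q=4.0)
# w == {0: 0.5, 2: 1.0, 3: 0.25}: returning is damped by p, moving far by q.
```

Normalizing these weights over the candidates yields the actual transition distribution.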
\n\\begin{figure}\n\\centering\n\\includegraphics[width = 10cm]{fig/node2vec/rw2.pdf}\n\\caption{An example of selecting the next node by the second-order random walk of Node2Vec . $u_i$ is the current node of the random walk and $u_{i-1}$ is the previous node. $u_{i-1}$, $v_1$, $v_2$, and $v_3$ can be selected as the next node with the corresponding probabilities, respectively.}\n\\label{fig:node2vec}\n\\end{figure}\nStruc2Vec argues that the random walks of Node2Vec cannot find nodes that have similar structures but are far away. Struc2Vec builds a multi-layer graph $G'$ for the input graph $G$. The layer $l$ is a complete graph $G'_l$, where each node in $G$ is a node in $G'_l$ and each edge $(u,v)\\in G'_l$ is weighted by the structural similarity of the $l$-hop neighborhoods of $u$ and $v$ in $G$. In this way, two nodes that are far away in $G$ can reach each other by just one hop in $G'_l$. The nodes in $G'_l$ can have directed edges to the nodes in $G'_{l-1}$ and $G'_{l+1}$. Random walks are sampled on $G'$, and the generalized SkipGram model is used to learn the node embedding. \nBesides using paths to sample the neighborhood, many works use representative subgraphs of the input graph. The representative subgraphs may be termed motifs, graphlets or kernels in different studies. Yanardag and Vishwanathan propose DeepGK, which is the earliest work on embedding motifs. The neighborhood of a motif $g$ is defined as the motifs within a small distance from $g$. The generalized SkipGram model is used to learn the embeddings for the motifs. \nYu et al. propose a network representation learning method using motifs\n(RUM). RUM builds a motif graph $G'$ for the input graph $G$, where each node in $G'$ is a motif of $G$ and two nodes have an edge in $G'$ if the corresponding motifs share common nodes. The triangle is used as the graph motif in RUM. RUM uses random walks on the motif graph $G'$ to define the neighborhood of a motif. 
Then, the generalized SkipGram model is used to learn the embedding of the motif. An original node $u$ of $G$ may occur in multiple motifs of $G'$. RUM uses the average of the embeddings of the motifs as the embedding of $u$. \nDareddy et al. propose another type of motif graph. Given a graph $G=(V,E)$, for each motif $g$, Motif2Vec builds a motif graph $G'=(V,E')$, where the weight of an edge $(u,v)\\in E'$ is the number of motif instances of $g$ in $G$ that contain nodes $u$ and $v$. Then, Motif2Vec simulates a set of random walks on each motif graph and uses Node2Vec to learn the embeddings of the nodes in $G$.\nA similar idea is also proposed in the MotifWalk method of .\nNarayanan et al. propose Subgraph2Vec to compute the embeddings of the neighboring subgraphs of the nodes in the input graph. Let $g_u$ denote the neighboring subgraph of a node $u$; Subgraph2Vec computes ${\\bf h}_{g_u}$ using the generalized SkipGram model. The neighborhood of $g_u$ is defined as the neighboring subgraphs of the neighbors of $u$, {\\it i.e.}, $\\{g_v| v\\in N(u)\\}$.", "id": "f21f6a6d-cec6-46bb-9635-1e66882ebe00", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "332e9bac-b9a4-441f-b1ef-a40b6551e719", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "Graph Representation Learning Methods" ], [ "subsection", "Graph Embedding Methods" ], [ "subsubsection", "Generalized SkipGram Based Graph Embedding Method" ] ], "subsections": [], "title": "Generalized SkipGram Based Graph Embedding Method" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 7496, 1155, 282, 1649, 7265, 215, 1651, 1650 ], "content": "AutoEncoder-based graph embedding often preserves the graph structure properties measured by the following proximities. \n\\begin{definition}\nGiven a graph $G=(V,E)$, the {\\it first-order proximity} from $u$ to $v$ is the weight of $(u,v)$. 
If $(u,v)\\in E$, $p^{(1)}(u,v)=w_{u,v}$; otherwise, $p^{(1)}(u,v)=0$.\n\\end{definition}\nThe first-order proximity captures the direct relationship between nodes. The second-order proximity captures the similarity of the neighbors of two nodes.\n\\begin{definition}\\label{def:2ndprox}\nGiven a graph $G$, the {\\it second-order proximity} between $u$ and $v$ is $p^{(2)}(u,v)$ = $sim({\\bf p}^{(1)}(u), {\\bf p}^{(1)}(v))$, where ${\\bf p}^{(1)}(u)$ is the vector of the first-order proximity from $u$ to all other nodes in $G$, i.e., ${\\bf p}^{(1)}(u)=(p^{(1)}(u,v_1), p^{(1)}(u,v_2), ...,$ $p^{(1)}(u,v_{|V|}))$, and $sim$ is a user-specified similarity function.\n\\end{definition}\nThe first- and second-order proximities encode the local structures of a graph. Proximities to capture more global structures of a graph have also been proposed in the literature. For example, Cai et al. propose to use $p^{(k)}(u,v)$ (recursively defined, similar to Definition~\\ref{def:2ndprox}) as the $k$-th-order proximity between $u$ and $v$, Cao et al. use the $k$-step transition probability $\\bf P$$^k[u,v]$ to measure the $k$-step relationship from $u$ to $v$, Chen et al. use the node centrality, Tsitsulin et al. use the Personalized PageRank, and Ou et al. use the Katz Index and Adamic-Adar to measure more global structural properties of $G$.\nLarge-scale information\nnetwork embedding (LINE) preserves both the first- and second-order proximity in graph embedding using two AutoEncoders, one for each proximity. In order for AutoEncoder to preserve the first-order proximity, the encoder is a simple embedding lookup . The decoder outputs the estimated adjacency matrix using the node embeddings, and the objective is to minimize the loss between the estimated adjacency matrix and the ground truth.\nThe decoder of LINE is designed as follows. Since adjacent nodes $u$ and $v$ in $G$ have high first-order proximity, they should be close in the embedding space. 
LINE uses the inner product of ${\\bf h}_u$ and ${\\bf h}_v$ to measure the closeness of $u$ and $v$ in the embedding space, as shown below.\n\\begin{equation}\n{P}_1(u,v) = \\frac{1}{1+exp(-{\\bf h}_u^T {\\bf h}_v)}\n\\end{equation}\n$P_1(\\cdot,\\cdot)$ defines the estimated distribution of the first-order proximity ({\\it i.e.}, the estimated adjacency matrix). LINE ensures that the estimated distribution $P_1(\\cdot,\\cdot)$ is close to the empirical distribution $\\hat{P}_1(\\cdot,\\cdot)$ so as to preserve the first-order proximity. \n\\begin{equation}\\label{edgeprob}\n{\\mathcal L}_1 = \\min dist(\\hat{P}_1(\\cdot,\\cdot), {P}_1(\\cdot,\\cdot))\n\\end{equation}\n\\noindent\nwhere $\\hat{P}_1(u,v)=\\frac{w_{u,v}}{\\sum_{(u',v')\\in G} w_{u',v'}}$ and $dist$ is the distance between two probability distributions. If the KL-divergence is used as $dist$, ${\\mathcal L}_1$ becomes\n\\begin{equation}\\label{firstproxloss}\n{\\mathcal L}_1 = \\min -\\sum_{(u,v)\\in G} w_{u,v} \\log {P}_1(u, v)\n\\end{equation}\nIn order for AutoEncoder to preserve the second-order proximity, the encoder is a simple embedding lookup . The decoder outputs an estimated distribution between each node and its neighbors. The estimated distribution is reconstructed from the embeddings of the nodes. The objective is to minimize the reconstruction loss between the estimated distribution and the ground truth.\nThe decoder is designed as follows.\nInspired by word embedding , the neighbors of $u$ are regarded as the ``context'' of $u$. LINE uses a conditional probability $P_2(v|u)$ defined in Formula~\\ref{neighprob} to model the estimated probability of $u$ generating a neighbor $v$. \n\\begin{equation}\\label{neighprob}\n{P}_2(v | u) = \\frac{exp({\\bf h}_v'^T {\\bf h}_u)}{\\sum_{v'\\in G} exp({\\bf h}_{v'}'^T {\\bf h}_u)},\n\\end{equation}\n\\noindent\nwhere ${\\bf h}'$ is the vector of a node when the node is regarded as context. 
\n${P}_2(\\cdot | u)$ defines the estimated distribution of $u$ over the context. The nodes $u$ and $u'$ in $G$ that have a high second-order proximity should have similar estimated distributions over the context, {\\it i.e.}, ${P}_2(\\cdot | u)$ should be similar to ${P}_2(\\cdot | u')$. This can be achieved by minimizing the distance between the estimated distribution ${P}_2(\\cdot | u)$ and the empirical distribution $\\hat{P}_2(\\cdot |u)$, for each node $u$ in $G$. The empirical distribution $\\hat{P}_2(\\cdot|u)$ is defined as $\\hat{P}_2(v|u) = w_{u,v} / \\sum_{v'\\in G} w_{u,v'}$. LINE preserves the second-order proximity as follows. \n\\begin{equation}\\label{secondproxloss_general}\n{\\mathcal L}_2 = \\min \\sum_{u\\in G} dist(\\hat{P}_2(\\cdot|u), {P}_2(\\cdot | u))\n\\end{equation}\nUsing the KL-divergence for $dist$, Formula~\\ref{secondproxloss_general} produces\n\\begin{equation}\\label{secondproxloss}\n{\\mathcal L}_2 = \\min -\\sum_{(u,v)\\in G} w_{u,v} \\log {P}_2(v|u)\n\\end{equation}\nLINE trains the two AutoEncoders separately. The node embeddings generated by the two AutoEncoders are concatenated as the embeddings of the nodes. The model of LINE is also adopted by Tang et al. to embed the words in a heterogeneous text graph.\nWang et al. argue that LINE is a shallow model, in the sense that it cannot effectively capture the highly non-linear structure of a graph. Therefore, structural deep network embedding (SDNE) is proposed as a means of using the deep neural network to embed the nodes. As with LINE, SDNE also preserves the first- and second-order proximity. Both the encoder and decoder of SDNE are MLPs. Given a graph $G$, the encoder embeds ${\\bf x}_u$ to ${\\bf h}_u$, where ${\\bf x}_u$ is the $u$-th row in the adjacency matrix ${\\bf A}$ of $G$, and the decoder reconstructs $\\hat{{\\bf x}}_u$ from ${\\bf h}_u$. \nSDNE preserves the first-order proximity by minimizing the distance in the embedded space for the adjacent nodes in $G$. 
\n\\[\n{\\mathcal L}_1 = \\sum_{(u,v)\\in G} {\\bf A}[u,v]\\times ||{\\bf h}_u - {\\bf h}_v||^2_2\n\\]\nThe second-order proximity is preserved by minimizing the reconstruction loss.\n\\[\n{\\mathcal L}_2 = \\sum_{u\\in G} ||\\hat{{\\bf x}}_u - {\\bf x}_u||_2^2\n\\]\nSDNE combines ${\\mathcal L}_1$, ${\\mathcal L}_2$, and a regularizer term as the objective function and jointly optimizes them by means of a deep neural network. The first- and second-order proximity are preserved and the learned graph embedding is more robust than that of LINE. As demonstrated in experiments, SDNE outperforms LINE in several downstream tasks ({\\it e.g.}, node classification and link prediction). \nVersatile graph embedding method (VERSE) shows that the first- and second-order proximity are not sufficient to capture the diverse forms of similarity relationships among nodes in a graph.\nTsitsulin et al. propose to use a function $sim(u,v)$ to measure the similarity between any two nodes $u$ and $v$ in $G$, where $sim(\\cdot, \\cdot)$ can be any similarity function. The similarity distribution of $u$ to all other nodes can be defined by $sim(u,\\cdot)$. The encoder of VERSE is a simple embedding lookup. The decoder estimates the similarity distribution using the node embeddings, as in Formula~\\ref{neighprob}. The objective is to minimize the reconstruction loss between the estimated similarity distribution and the ground truth.\nDave et al. propose Neural-Brane to capture both node attribute information and graph structural information in the embedding of the graph. Bonner et al. 
study the interpretability of graph embedding models.", "id": "8b9b70e1-2281-42d6-a6be-d3e754e1e570", "level": "subsubsection", "origin_cites_number": 12, "parent_id": "332e9bac-b9a4-441f-b1ef-a40b6551e719", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "Graph Representation Learning Methods" ], [ "subsection", "Graph Embedding Methods" ], [ "subsubsection", "AutoEncoder Based Graph Embedding" ] ], "subsections": [], "title": "AutoEncoder Based Graph Embedding" }, { "cite_extract_rate": 0.5, "cites": [ 1652 ], "content": "The generalized SkipGram model is inspired by the word embedding model in natural language processing (NLP). Random walks of a graph, the analog of sentences in text, are widely used by the generalized SkipGram model-based methods for computing the embeddings of the graph. However, computing random walks is time-consuming. Moreover, the generalized SkipGram model is often regarded as a shallow model when compared to AutoEncoder. AutoEncoder can be made deeper by stacking more layers and has greater potential to encode the complex and non-linear relationships between the nodes of a graph . Recent work on word embedding in NLP also verifies the advantage of AutoEncoder . However, designing the architectures of the encoder and decoder and the loss function to encode the structure information of the graph is challenging.\nGraph embedding methods can precompute the embedding vectors of graphs. The advantage is that the structure information encoded in the embeddings can be transferred to different downstream tasks. Graph embedding methods learn the embeddings of the graph without considering the downstream CO problems to be solved. The embeddings may not encode the information that is critical for solving the CO problem. 
Consequently, graph embedding-based methods may perform worse than end-to-end learning methods when solving CO problems. Therefore, there have been recent studies on alternative graph representation learning methods for solving CO problems, such as end-to-end learning methods.", "id": "16f5011f-7ff7-4ffd-8767-ebe3566348f9", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "332e9bac-b9a4-441f-b1ef-a40b6551e719", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "Graph Representation Learning Methods" ], [ "subsection", "Graph Embedding Methods" ], [ "subsubsection", "Discussion" ] ], "subsections": [], "title": "Discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{grl:etoe}\nGraph neural network (GNN) and AutoEncoder are widely used in the end-to-end learning methods for solving CO problems, where computing the embeddings of the graphs is an intermediate step.", "id": "7d401168-628a-4543-9c98-1b536e518f44", "level": "subsection", "origin_cites_number": 0, "parent_id": "ea200033-9c6a-44ef-9913-a794ab1396bb", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "Graph Representation Learning Methods" ], [ "subsection", "End-to-end Method" ] ], "subsections": [ "6326b114-e63a-4e29-a364-3e3a3c398275", "992778fd-ffaa-4190-8e36-987b7ee1c9cb", "fa2dfe79-ffe2-4fc6-af27-c19493c7c008", "5f179e73-3ea6-4fde-9219-e18ee7325673" ], "title": "End-to-end Method" }, { "cite_extract_rate": 1, "cites": [ 553, 242 ], "content": "Graph neural network uses the graph convolution operation to aggregate graph structure and node content information. 
Graph convolution can be divided into two categories: i) spectral convolutions, defined using the spectra of a graph, which can be computed from the eigendecomposition of the graph's Laplacian matrix, and ii) spatial convolutions, directly defined on a graph by information propagation. \n\\vspace{1ex}\n\\noindent\n{\\it A) Graph Spectral Convolution}\n\\vspace{1ex}\nGiven an undirected graph $G$, $\\bf L$ = $\\bf I$ - ${\\bf D}^{-1/2} {\\bf A D}^{-1/2}$ is the normalized Laplacian matrix of $G$. $\\bf L$ can be decomposed into $\\bf L$ = $\\bf U\\Lambda U$$^T$, where $\\bf U$ is the matrix of eigenvectors ordered by eigenvalues, $\\bf \\Lambda$ is the diagonal matrix of eigenvalues, and $\\bf \\Lambda$$[i,i]$ is the $i$-th eigenvalue $\\lambda_i$.\nThe {\\it graph convolution} $\\ast_G$ of an input signal $\\bf s$ $\\in {\\mathbb R}^{|V|}$ with a filter ${\\bf g_\\theta}$ is defined as \n\\begin{equation}\\label{def:gspectral}\n{\\bf s} \\ast_G {\\bf g_\\theta} = {\\bf U} {\\bf g_\\theta} {\\bf U}^T {\\bf s}\n\\end{equation}\nExisting studies on graph spectral convolution all follow Formula~(\\ref{def:gspectral}), and they differ in the choice of the filter ${\\bf g_\\theta}$ . The $u$-th row of the output channel is the embedding ${\\bf h}_u$ of a node $u$. \n\\vspace{1ex}\n\\noindent\n{\\it B) Graph Spatial Convolution}\n\\vspace{1ex}\n Graph spatial convolution aggregates the information from a node's local neighborhood. Intuitively, each node sends messages based on its current embedding and updates its embedding based on the messages received from its local neighborhood. A graph spatial convolution model often stacks multiple layers, and each layer performs one iteration of message propagation. To illustrate this, we recall the definition given in GraphSAGE . 
A layer of GraphSAGE is as follows:\n\\begin{equation}\\label{gconv}\n{\\bf h}_u^l = \\sigma({\\bf W}^l [{\\bf h}_u^{l-1} || {\\bf h}_{{\\mathcal N}_u}^{l}])\n\\end{equation}\n\\begin{equation}\n{\\bf h}_{{\\mathcal N}_u}^{l} = AGG( \\{ {\\bf h}_{v}^{l-1}, v\\in {\\mathcal N}_u \\} ),\n\\end{equation}\n\\noindent\nwhere \n$l$ denotes the $l$-th layer, $||$ denotes concatenation, ${\\mathcal N}_u$ is a set of randomly selected neighbors of $u$, and $AGG$ denotes an order-invariant aggregation function. GraphSAGE suggests three aggregation functions: element-wise mean, LSTM-based aggregator, and max-pooling.", "id": "6326b114-e63a-4e29-a364-3e3a3c398275", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "7d401168-628a-4543-9c98-1b536e518f44", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "Graph Representation Learning Methods" ], [ "subsection", "End-to-end Method" ], [ "subsubsection", "Graph Neural Network" ] ], "subsections": [], "title": "Graph Neural Network" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 226, 242, 7006 ], "content": "Graph convolutional network (GCN) is a well-known graph spectral convolution model, which is an approximation of the original graph spectral convolution defined in Formula~\\ref{def:gspectral}. 
Given a graph $G$ and a one-channel input signal $\\vec s$ $\\in {\\mathbb R}^{|V|}$, GCN can output a $d$-channel signal ${\\bf H}^{|V|\\times d}$ as follows: \n\\begin{equation}\\label{equ:gcn_oneChannel}\n {\\bf H} = {\\bf s} \\ast_G {\\bf g_\\theta} = ({\\bf \\widetilde{D}}^{-1/2}{\\bf \\widetilde{A}}{\\bf \\widetilde{D}}^{-1/2}){\\bf s} {\\vec \\theta},\n\\end{equation}\n\\noindent\nwhere ${\\vec \\theta}$ is a ${1\\times d}$ trainable parameter vector of the filter, ${\\bf \\widetilde{A}} = {\\bf A} + {\\bf I}$ and ${\\bf \\widetilde{D}}$ is a diagonal matrix with ${\\bf \\widetilde{D}}[i,i]$ = $\\sum_j {\\bf \\widetilde{A}}[i,j]$. The $u$-th row of $\\bf H$ is the embedding of the node $u$, ${\\bf h}_u$. To allow a $d'$-channel input signal $\\bf S$$^{|V|\\times d'}$ and output a $d$-channel signal ${\\vec H}^{|V|\\times d}$, the filter needs to take a parameter matrix ${\\bf \\Theta}^{d'\\times d}$. Formula~\\ref{equ:gcn_oneChannel} becomes\n\\begin{equation}\\label{equ:mchannelGCN}\n{\\bf H} = {\\bf S} \\ast_G {\\bf g_\\Theta} = ({\\bf \\widetilde{D}}^{-1/2}{\\bf \\widetilde{A}}{\\bf \\widetilde{D}}^{-1/2}){\\bf S} {\\bf \\Theta}.\n\\end{equation}\nLet ${\\bf s}_i$ denote the $i$-th channel ({\\it i.e.}, column) of $\\vec S$. ${\\bf h}_u$ can then be written in the following way. 
\n\\begin{equation}\\label{equ:gcn}\n{\\bf h}_u = {\\bf \\Theta}^T {\\bf y}, {\\bf y}[i] = \\sum_{v\\in N_u\\cup\\{u\\}} \\frac{1}{\\sqrt{|N_u|}\\sqrt{|N_v|}} {\\bf s}_i[v], 1\\leq i\\leq d',\n\\end{equation}\n\\noindent\nwhere $\\vec y$ is a $d'$-dimensional column vector.\nWhen multi-layer models are considered, Formulas~\\ref{equ:mchannelGCN} and \\ref{equ:gcn} are written as Formulas~\\ref{equ:mchannelGCN_layerwise} and \\ref{equ:gcn_layerwise}, respectively, where $l$ denotes the $l$-th layer.\n\\begin{equation}\\label{equ:mchannelGCN_layerwise}\n{\\bf H}^l = ({\\bf \\widetilde{D}}^{-1/2}{\\bf \\widetilde{A}}{\\bf \\widetilde{D}}^{-1/2}){\\bf H}^{l-1} {\\bf \\Theta}^{l}\n\\end{equation}\n\\begin{equation}\\label{equ:gcn_layerwise}\n{{\\bf h}^{l}}_{u} = {{\\bf \\Theta}^{l}}{}^{T} {\\bf y}^{l}\n\\end{equation}\n\\[\n\\begin{array}{ll}\n{\\bf y}^l[i] & = \\sum_{v\\in N_u\\cup\\{u\\}} \\frac{1}{\\sqrt{|N_u|}\\sqrt{|N_v|}} {\\bf H}^{l-1}[v,i]\\\\[0.4cm]\n& = \\sum_{v\\in N_u\\cup\\{u\\}} \\frac{1}{\\sqrt{|N_u|}\\sqrt{|N_v|}} {\\bf h}_v^{l-1}[i]\n\\end{array}\n\\]\nFrom Formula~\\ref{equ:gcn_layerwise}, we can observe that GCN aggregates weighted information from a node's neighbors. In particular, for a node $u$ and a neighbor $v$ of $u$, the information from $v$ is weighted by their degrees, {\\it i.e.}, $1/\\sqrt{|N_u||N_v|}$. Graph attention network (GAT) argues that the fixed weight approach of GCN may not always be optimal. Therefore, GAT introduces the attention mechanism to graph convolution. A learnable weight function $\\alpha(\\cdot, \\cdot)$ is proposed, where $\\alpha(u,v)$ denotes the attention weight of $u$ over its neighbor $v$. Specifically, the convolution layer of GAT is as follows. 
\n\\begin{equation}\\label{equ:attention}\n{\\bf h}_{u}^{l} = \\sigma( \\sum_{v\\in N(u)} \\alpha^l(u, v) {\\bf W}^l {\\bf h}_{v}^{l-1} )\n\\end{equation}\n\\begin{equation}\n\\alpha^l(u, v) = \\frac{exp( LeakyReLU({{\\bf a}^l}{}^T[{\\bf W}^l {\\bf h}_{u}^{l-1} || {\\bf W}^l {{\\bf h}_v}^{l-1}]))}{\\sum_{v'\\in N(u)} exp( LeakyReLU({{\\bf a}^l}{}^T [{\\bf W}^l {\\bf h}_{u}{}^{l-1} || {\\bf W}^l {\\bf h}_{v'}{}^{l-1}]) )},\n\\end{equation}\n\\noindent\nwhere $||$ denotes concatenation, ${\\bf a}^l$ and ${\\bf W}^l$ are the trainable vector and matrix of parameters, respectively. \nThe attention mechanism enhances models' capacity, and hence, GAT can perform better than GCN in some downstream tasks ({\\it e.g.}, node classification). However, when $L$ layers are stacked, the $L$-hop neighbors of a node are needed to be computed. If the graph $G$ is dense or a power-law graph, there may exist some nodes that can access almost all nodes in $G$, even for a small value of $L$. The time cost can be unaffordable.\nTo optimize efficiency, Hamilton et al. propose a sampling based method (GraphSAGE). GraphSAGE randomly samples $k$ neighbors in each layer. Therefore, a model having $L$ layers only needs to expand $O(k^L)$ neighbors. Huang et al. further improve the sampling process with an adaptive sampling method. The adaptive sampling in samples the neighbors based on the embedding of $u$, as illustrated in Fig.~\\ref{fig:asgcn}(a). The efficiency is further improved by layer-wise sampling, as shown in Fig.~\\ref{fig:asgcn}(b). These sampling techniques are experimentally verified effective regarding the classification accuracy. \n\\begin{figure}\n\\centering\n\\includegraphics[width = 10cm]{fig/asgcn/asgcn.pdf}\n\\caption{Adaptive sampling of ASGCN : (a) the node-wise sampling and (b) \nthe layer-wise sampling. In the node-wise sampling, each node in a layer samples its neighbors in the next layer independently. 
In particular, a node $v$ in the $l+1$-th layer samples its neighbors in the $l$-th layer by $p(u_j|v)$. In the layer-wise sampling, in contrast, all nodes in a layer jointly sample the neighbors in the next layer, and $u_j$ is sampled based on $p(u_j|v_1,v_2,...,v_4)$. The layer-wise sampling is more efficient than the node-wise sampling.} \n\label{fig:asgcn}\n\end{figure}\nYang et al. combine the ideas of attention and sampling and propose the shortest path attention method (SPAGAN). The shortest path attention of SPAGAN has two levels, as shown in Fig.~\ref{fig:spagan}. The first level is length-specific, which embeds the shortest paths of the same length $c$ into a vector ${\bf h}_u^c$. The second level aggregates ${\bf h}_u^c$ over different values of $c$ to get the embedding ${\bf h}_u$ of $u$. \n\begin{figure}\n\centering\n\includegraphics[width = 12cm]{fig/spagan/spagan3.pdf}\n\caption{The two-level convolution of SPAGAN}\n\label{fig:spagan}\n\end{figure}\nMore specifically, let $P_u^c$ be the set of shortest paths starting from $u$ of length $c$ and $p_{u,v}$ be a shortest path from node $u$ to node $v$. ${\bf h}_{u}^c$ is computed as follows.\n\[\n{\bf h}_{u}^c = \sum_{p_{u,v}\in P_u^c} \alpha_{u,v}\phi(p_{u,v}),\n\]\nwhere $\alpha_{u,v}$ is the attention weight and $\phi(p_{u,v})$ is a mean pooling that computes the average of the embeddings of the nodes in $p_{u,v}$. 
\n\\[\n\\alpha_{u,v} = \\frac{exp(\\sigma({\\bf a}_1 [ ({\\bf W} {\\bf h}_u)||\\phi(p_{u,v})])}{\\sum_{p_{u,v'}\\in P_u^c} exp(\\sigma({\\bf a}_1 [ ({\\bf W} {\\bf h}_u)||\\phi(p_{u,v'})])},\n\\]\nwhere $\\vec a_1$ and $\\bf W$ are trainable parameters shared by all nodes, and $||$ is concatenation.\nThe second level aggregates the paths with different lengths as follows.\n\\[\n{\\bf h}_{u} = \\sigma(\\sum_{c=2}^C \\beta_c {\\bf h}_{u}^c),\n\\]\nwhere $C$ is a hyperparameter of the path length limit and $\\beta_c$ is the attention weight.\n\\[\n\\beta_{c} = \\frac{exp(\\sigma({\\bf a}_2 [ ({\\bf W} {\\bf h}_{u})||{\\bf h}_{u}^c]))}{\\sum_{c'=2}^C exp(\\sigma({\\bf a}_2 [ ({\\bf W} {\\bf h}_{u})||{\\bf h}_{u}^{c'}]))},\n\\]\nwhere ${\\bf a}_2$ is a trainable parameter vector.", "id": "992778fd-ffaa-4190-8e36-987b7ee1c9cb", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "7d401168-628a-4543-9c98-1b536e518f44", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "Graph Representation Learning Methods" ], [ "subsection", "End-to-end Method" ], [ "subsubsection", "GNN Based Graph Representation Learning" ] ], "subsections": [], "title": "GNN Based Graph Representation Learning" }, { "cite_extract_rate": 0.5, "cites": [ 167 ], "content": "\\label{end2end_autoencoder}\nFor the AutoEncoder used in the end-to-end learning, the embeddsings of the graph are computed by the encoder. The decoder outputs the probabilities of nodes/edges belonging to the solutions of the CO problems. In recent works, the encoder mainly uses RNN and attention-based model, and the decoder mainly uses MLP, RNN and attention-based model. The encoder corresponds to the first stage of the ML-based CO methods and the decoder corresponds to the second stage (see Fig.~\\ref{fig:overview}). In this subsection, we mainly focus on the encoder. 
The details of the decoders will be discussed in Section~\ref{gl4ga}.\nThe pointer network (Ptr-Net) proposed by Vinyals et al. is a seminal work on using an AutoEncoder to solve the TSP problem. The encoder of Ptr-Net is an RNN taking the nodes of the graph $G$ as input and outputting an embedding of $G$, where the order of the nodes is randomly chosen. Experiments of Ptr-Net observe that the order of the input nodes affects the quality of the TSP tour found. Therefore, the decoder of Ptr-Net introduces an attention mechanism that can assign weights to the input nodes and ignore their order. \nKool et al. use an AutoEncoder to sequentially output a TSP tour of a graph $G$. \nThe encoder stacks $L$ self-attention layers. Each layer is defined as follows. \n\[\n\hat{{\bf h}}_{v_i} = BN^l({\bf h}_{v_i}^{l-1}+MHA_i^l({\bf h}_{v_1}^{l-1},...,{\bf h}_{v_n}^{l-1}))\n\] \n\[\n{\bf h}_{v_i}^l = BN^l(\hat{{\bf h}}_{v_i}+MLP^l(\hat{{\bf h}}_{v_i})),\n\]\n\noindent\nwhere ${\bf h}_{v_i}$ denotes the embedding vector of the node $v_i$, $l$ denotes the $l$-th layer, $MHA$ denotes the multi-head attention and $BN$ denotes the batch normalization. The embedding of $G$, ${\bf h}_G = \frac{1}{n} \sum_{i=1}^n {\bf h}_{v_i}^L$, and the embedding of each node, ${\bf h}_{v_i}^L$, are input to the decoder.
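The order-invariance provided by attention can be checked directly. The sketch below uses a minimal single-head self-attention layer (identity projections for queries, keys, and values, followed by mean pooling), which is a simplification of ours and not Kool et al.'s exact encoder: the pooled graph embedding does not change when the input nodes are permuted.

```python
import math

def self_attention(X):
    # Minimal single-head self-attention with identity Q/K/V projections.
    n, d = len(X), len(X[0])
    out = []
    for i in range(n):
        scores = [sum(X[i][k] * X[j][k] for k in range(d)) / math.sqrt(d)
                  for j in range(n)]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]  # stable softmax
        Z = sum(exps)
        out.append([sum(exps[j] / Z * X[j][k] for j in range(n))
                    for k in range(d)])
    return out

def graph_embedding(X):
    # mean pooling over node embeddings: h_G = (1/n) * sum_i h_i
    H = self_attention(X)
    n, d = len(H), len(H[0])
    return [sum(H[i][k] for i in range(n)) / n for k in range(d)]
```

Self-attention is permutation-equivariant and mean pooling is permutation-invariant, so reordering the rows of `X` leaves `graph_embedding(X)` unchanged up to floating-point error.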
Each node iteratively aggregates messages from its neighbors. The structural information within $k$ hops of a node can be captured by $k$ iterations of message aggregation. A GNN does not require any node order and can support the permutation invariance of CO problems. AutoEncoder-based methods are often used for solving CO problems with sequential characteristics, {\it e.g.}, TSP. A sequence model is often used as the encoder to compute the embeddings of the graphs, and an attention mechanism is used to support permutation invariance.\nEnd-to-end learning methods learn the embeddings of the graph as an intermediate step in solving the CO problem. The learned embeddings are more specific to the CO problem being solved and are expected to lead to better solutions.\nA disadvantage of the GNN-based method is that the GNN is often shallow, due to the over-smoothing problem. The attention-based encoder can alleviate this problem, since an encoder with self-attention layers and skip connections can potentially be deeper. However, the time complexity of such an encoder on large graphs will be a bottleneck.\nFor the GNN-based method, the current trend is to use anisotropic GNNs ({\it e.g.}, GAT), which can differentiate the information propagated from different neighbors. 
For AutoEncoder-based methods, more recent studies integrate the attention mechanism with the sequence model to increase the capacity of the model and to encode inductive biases.
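The over-smoothing problem mentioned above can be illustrated directly. The sketch below (an illustration of ours, not a model from the literature) applies repeated mean aggregation over neighbors, a simplification of a deep GNN with identity weights; the node features converge to one another as the number of layers grows, which is why deep GNN stacks lose discriminative power.

```python
def smooth(h, neighbors, steps):
    # Repeated mean aggregation over a node and its neighbors:
    # the mechanism behind over-smoothing.
    for _ in range(steps):
        h = {u: [(h[u][i] + sum(h[v][i] for v in nbrs)) / (1 + len(nbrs))
                 for i in range(len(h[u]))]
             for u, nbrs in neighbors.items()}
    return h

def spread(h):
    # range of the first feature across nodes; shrinks as layers are stacked
    vals = [x[0] for x in h.values()]
    return max(vals) - min(vals)
```

On a connected graph the spread decays geometrically, so after a handful of "layers" the nodes become nearly indistinguishable.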
Table~\\ref{gpdlcompare} lists the selected graph learning-based CO methods.\nSec.~\\ref{nonauto} summarizes the recent non-autoregressive methods for traver travelling salesman problem (TSP), graph partition, graph similarity, minimum vertex cover (MVC), graph coloring, maximum independent set, graph matching and graph isomorphism. Sec.~\\ref{autoreg} presents the recent autoregressive methods for TSP, graph matching, graph alignment, MVC and maximum common subgraph. \n\\begin{table}[t]\n\\vspace{0ex}\n\\caption{Summary of selected CO methods using graph embedding}\n\\centering\n\\begin{scriptsize}\n\\vspace{-0ex}\n\\begin{tabular}{|c|c|c|}\n\\hline\nMethod & \\tabincell{c}{CO Problem} & \\tabincell{c}{Model}\\\\ \\hline\\hline\nConvNet & TSP & GNN, non-autoregressive \\\\\\hline\nDTSPGNN & TSP & GNN, non-autoregressive\\\\ \\hline\nCPNGNN & MDS, MM, MVC & GNN, non-autoregressive\\\\ \\hline\nGAP & Graph Partition & GNN, non-autoregressive \\\\ \\hline\nGMN & GED & GNN, non-autoregressive \\\\ \\hline\nSimGNN & GED & GNN, non-autoregressive \\\\ \\hline\nGRAPHSIM & GED & GNN, non-autoregressive\\\\ \\hline\nGNNGC & GColor & GNN, non-autogressive \\\\ \\hline\nSiameseGNN & Graph matching, TSP & GNN, non-autogressive \\\\ \\hline\nPCAGM & Graph matching & GNN, non-autogressive \\\\ \\hline\nIsoNN & Graph Iso. 
& AutoEncoder, non-autoregressive \\ \hline\nGNNTS & MIS, MVC, MC & GNN, non-autoregressive\\\hline\nPtr-Net & TSP & AutoEncoder, autoregressive \\ \hline\nLSTMGMatching & Graph matching & AutoEncoder, autoregressive \\ \hline\nS2V-DQN & MVC, MaxCut, TSP & GNN, autoregressive \\ \hline\nCombOptZero & MVC, MaxCut, MC & GNN, autoregressive\\ \hline\nRLMCS & MCS & GNN, autoregressive \\ \hline\nCENALP & Graph Alignment & SkipGram, autoregressive\\ \hline\nTSPImprove & TSP & AutoEncoder, autoregressive\\ \hline\nAM & TSP & AutoEncoder, autoregressive \\ \hline\n\end{tabular}\label{gpdlcompare}\n\end{scriptsize}\n\vspace{0ex}\n\end{table}
For the graph partition problem, $K$ is the number of parts, and a node $u$ is classified into the part with the largest predicted probability. There are some works that predict a score for the input graphs. For example, for the graph similarity problem, the similarity score between two graphs is predicted. \n\vspace{2ex}\n\noindent\n{\it A. Travelling Salesman Problem}\n\vspace{2ex}\nJoshi et al. propose a GNN-based model (ConvNet) to solve the TSP problem on Euclidean graphs. The graph convolution layer of ConvNet is as follows.\n\[\n{\bf h}_i^{l+1} = {\bf h}_i^l + ReLU(BN({\bf W}_1^l {\bf h}_i^l + \sum_{j\in N_i}{\bf \eta}_{ij}^l\odot{\bf W}_2^l{\bf h}_j^l))\n\]\n\[\n{\bf \eta}_{ij}^l = \frac{\sigma({\bf e}_{ij}^l)}{\sum_{j'\in N_i} \sigma({\bf e}_{ij'}^l) + \epsilon}\n\]\n\[\n{\bf e}_{ij}^{l+1} = {\bf e}_{ij}^l + ReLU(BN({\bf W}_3^l {\bf e}_{ij}^l + {\bf W}_4^l {\bf h}_i^l + {\bf W}_5^l {\bf h}_j^l)),\n\]\n\noindent\nwhere $BN$ stands for batch normalization, $\odot$ denotes the element-wise product, ${\bf \eta}$ is the attention weight, $\epsilon$ is a small value, and ${\bf W}_1,\ldots,{\bf W}_5$ are trainable parameters. \nThe embeddings of the edges outputted by the last layer of ConvNet are fed into a multilayer perceptron (MLP) to predict $p_{ij}$, the probability that the edge $e_{ij}$ belongs to the solution of TSP. The cross entropy with the ground-truth TSP tour is used as the loss. The experiments of ConvNet show that ConvNet outperforms recent autoregressive methods but falls short of standard Operations Research solvers.\nPrates et al. use a GNN to solve the decision version of TSP, which is to decide if a given graph admits a Hamiltonian route with a cost no greater than a given threshold $C$. Since the weights of edges are closely related to the cost of a route, Prates et al. compute edge embeddings in the graph convolution. 
Specifically, given a graph $G=(V,E)$, an auxiliary bipartite graph $G'=(V\\cup V',E')$ is constructed, where for each edge $(u,v)$ in $G$, $G'$ has a node $n_{u,v}$ in $V'$ and edges $(n_{u,v},u)$ and $(n_{u,v},v)$ are added to $E'$. The embeddings of the nodes and edges of $G$ can be computed by a GNN on the auxiliary graph $G'$. Finally, the embeddings of the edges of $G$ are fed into an MLP to make a binary classification. If the class label of $G$ is predicted to be 1, $G$ has a Hamiltonian route with a cost no greater than $C$; otherwise, $G$ has no such route.\n\\vspace{2ex}\n\\noindent\n{\\it B. Graph Partition}\n\\vspace{2ex}\nNazi et al. propose GAP as a method for computing a balanced partition of a graph. GAP is composed of a graph embedding module, which uses a GNN model to determine the embedding of the input graph, and a graph partition module, which uses an MLP to predict the partition of nodes. The architecture of GAP is illustrated in Fig.~\\ref{fig:gap}. The normalized cut size and the balancedness of the partition is used as the loss. GAP trained on a small graph can be generalized at the inference time on unseen graphs of larger size. \n\\begin{figure}\n\\centering\n\\includegraphics[width = 12cm]{fig/gap/gap.pdf}\n\\caption{Overview of GAP }\n\\label{fig:gap}\n\\end{figure}\nSpecifically, suppose $G=(V,E,{\\bf X})$ is to be partitioned to $K$ disjoint parts and $V_1,V_2,...,V_K$ denote the sets of nodes of the parts, respectively.\nA GNN first computes the embeddings of the nodes in $G$. Then, the MLP uses the node embeddings to predict the partition probability ${\\vec Y}^{|V|\\times K}$ for the nodes, where ${\\vec Y}[u,i]$ is the probability that node $u$ is partitioned to $V_i$. Finally, each node can be partitioned to the partition of the largest probability.\nThe loss of GAP has two components. 
The first component is to minimize the normalized cut size of the partition: \n\[\n\sum_{i=1}^K \frac{cut(V_i, \bar{V_i})}{vol(V_i)},\n\]\n\noindent\nwhere $\bar{V_i}$ denotes the nodes not in $V_i$, $cut(V_i, \bar{V_i})$ denotes the number of edges crossing $V_i$ and $\bar{V_i}$, and $vol(V_i)$ denotes the total degree of the nodes in $V_i$.\nThe second component is to minimize the distance from the balanced\npartition:\n\[\n\sum_{i=1}^K \sum_{u\in G} ({\bf Y}[u,i] - \frac{|V|}{K})^2,\n\]\n\noindent \nwhere $\frac{|V|}{K}$ is the part size of the balanced partition. The objective function of GAP is as follows.\n\[\n\min \sum_{i=1}^K \frac{cut(V_i, \bar{V_i})}{vol(V_i)} + \sum_{i=1}^K \sum_{u\in G} ({\bf Y}[u,i] - \frac{|V|}{K})^2\n\]\n\vspace{2ex}\n\noindent\n{\it C. Graph Similarity}\n\vspace{2ex}\nBai et al. propose SimGNN, a method that combines two strategies for predicting the similarity between two graphs $G_1$ and $G_2$. The first strategy compares $G_1$ and $G_2$ by comparing their global summaries ${\bf h}_{G_1}$ and ${\bf h}_{G_2}$. The second strategy uses pair-wise node comparison to provide fine-grained information as a supplement to the global summaries ${\bf h}_{G_1}$ and ${\bf h}_{G_2}$. The architecture of SimGNN is shown in Fig.~\ref{fig:simgnn}. \nAs shown in Fig.~\ref{fig:simgnn}, \nSimGNN first computes the node embeddings of the two input graphs $G_1$ and $G_2$ using a GCN. For the first strategy, SimGNN computes ${\bf h}_{G_1}$ and ${\bf h}_{G_2}$ from the node embeddings by means of an attention mechanism that can adaptively emphasize the important nodes with respect to a specific similarity metric. Then, ${\bf h}_{G_1}$ and ${\bf h}_{G_2}$ are input to a neural tensor network (NTN) to compute a similarity score vector for $G_1$ and $G_2$. \nThe attention mechanism to compute ${\bf h}_G$ is defined as follows. 
\nFor a graph $G$, SimGNN introduces a context vector ${\bf c} = tanh({\bf W} \sum_{u\in G} {\bf h}_u)$ to encode the global information of $G$. ${\bf c}$ is adaptive to the given similarity metric via ${\bf W}$. Intuitively, nodes that are close to the global context should receive more attention. Therefore, the attention weight $\alpha_u$ of a node $u$ is defined based on the inner product of ${\bf c}$ and ${\bf h}_u$: $\alpha_u = \sigma({\bf c}^T {\bf h}_u)$, where $\sigma$ is the sigmoid function. The embedding of $G$, ${\bf h}_G$, is computed as ${\bf h}_G = \sum_{u\in G} \alpha_u {\bf h}_u$. \nFor the second strategy, SimGNN constructs a pair-wise node similarity matrix $M$ by computing the inner product of ${\bf h}_{u}$ and ${\bf h}_{v}$ for each $u\in G_1,v\in G_2$. SimGNN uses a histogram of $M$ to summarize the pair-wise node similarity.\nFinally, the similarity score vector outputted by the NTN and the histogram are input to a fully connected neural network to predict the similarity between $G_1$ and $G_2$. The mean squared error between the predicted similarity and the ground truth is used as the loss of SimGNN. In the follow-up work GRAPHSIM, a CNN-based method is used to replace the histogram of SimGNN. \n\begin{figure}\n\centering\n\includegraphics[width = 12cm]{fig/simgnn/simgnn.pdf}\n\caption{Overview of SimGNN. The blue solid line illustrates the first strategy of comparing $G_1$ and $G_2$ using their global summaries ${\bf h}_{G_1}$ and ${\bf h}_{G_2}$. The orange dashed line indicates the second strategy of the fine-grained pair-wise node comparison.}\n\label{fig:simgnn}\n\end{figure}\nLi et al. propose the graph matching network (GMN) to solve the graph similarity problem. Instead of embedding each graph independently, GMN embeds two graphs $G_1$ and $G_2$ jointly by examining the matching between them. 
The matching used in GMN is soft matching, which means that a node of $G_1$ can match to all nodes of $G_2$ yet with different strengths. The embedding of $G_1$ can change based on the other graph it is compared against. At inference time, GMN can predict if the distance between two graphs is smaller than a given threshold $\\gamma$. \nGiven two graphs $G_1=(V(G_1), E(G_1))$ and $G_2=(V(G_2), E(G_2))$, the $l$-th convolution layer of GMN is defined as below. \n\\begin{equation}\n\\begin{array}{ll}\n{\\bf m}_{j\\to i} & = MLP({\\bf h}^l_i, {\\bf h}^l_j), \\forall (i,j)\\in E(G_1) \\\\[0.2cm]\n{\\bf m}_{j'\\to i'} & = MLP({\\bf h}^l_{i'}, {\\bf h}^l_{j'}), \\forall (i',j')\\in E(G_2) \\\\[0.2cm]\n{\\vec \\mu}_{j'\\to i} & = f_{match}({\\bf h}^l_i, {\\bf h}^l_{j'}), \\forall i\\in V(G_1), j'\\in V(G_2)\\\\[0.2cm]\n{\\vec \\mu}_{i\\to j'} & = f_{match}({\\bf h}^l_i, {\\bf h}^l_{j'}), \\forall i\\in V(G_1), j'\\in V(G_2)\\\\[0.2cm]\n{\\bf h}_i^{l+1} & = MLP({\\bf h}_i^l, \\sum_{j\\in G_1} {\\bf m}_{j\\to i}, \\sum_{j'\\in G_2} {\\bf \\mu}_{j'\\to i})\\\\[0.2cm]\n{\\bf h}_{j'}^{l+1} & = MLP({\\bf h}_{j'}^l, \\sum_{i'\\in G_2} {\\bf m}_{i'\\to j'}, \\sum_{i\\in G_1} {\\bf \\mu}_{i\\to j'}),\n\\end{array}\n\\end{equation}\n\\noindent\nwhere ${\\bf m}$ denotes the message aggregation of a node from its neighbors in the same graph, ${\\vec \\mu}$ is the cross-graph matching vector that measures the difference between a node in a graph and all the nodes in the other graph, and $f_{match}$ can be defined by the following attention based method.\n\\[\n\\begin{array}{ll}\n{\\vec \\mu}_{j'\\to i} & = \\alpha_{j'\\to i}({\\bf h}^l_i - {\\bf h}^l_{j'}), \\forall i\\in V(G_1), j'\\in V(G_2)\\\\[0.2cm]\n\\alpha_{j'\\to i} & = \\frac{exp(dist({\\bf h}^l_i, {\\bf h}^l_{j'}))}{\\sum_{v'\\in G_2} exp(dist({\\bf h}^l_i, {\\bf h}^l_{v'}))}\\\\[0.2cm]\n{\\vec \\mu}_{i\\to j'} & = \\alpha_{i\\to j'}({\\bf h}^l_i - {\\bf h}^l_{j'}), \\forall i\\in V(G_1), j'\\in 
V(G_2)\\\\[0.2cm]\n\\alpha_{i\\to j'} & = \\frac{exp(dist({\\bf h}^l_i, {\\bf h}^l_{j'}))}{\\sum_{v\\in G_1} exp(dist({\\bf h}^l_{v}, {\\bf h}^l_{j'}))},\\\\[0.2cm]\n\\end{array}\n\\]\n\\noindent\nwhere $dist$ is the Euclidean distance. \nSuppose GMN stacks $L$ layers. The embedding of a graph $G$ is computed as below. \n\\begin{equation}\n{\\bf h}_{G} = MLP(\\{{\\bf h}^L_i)_{i\\in G}\\}),\n\\end{equation}\n\\noindent \nwhere ${\\bf h}^L_i$ is the embedding of node $i$ outputted by the last convolution layer.\nThe objective function of GMN is to minimize the margin-based pairwise loss ${\\mathcal L} = \\max\\{0, \\gamma - t \\times (1-dist(G_1,G_2))\\}$, \nwhere $\\gamma>0$ is the given margin threshold, $dist(G_1,G_2)=||{\\bf h}_{G_1} - {\\bf h}_{G_2}||_2$ is the Euclidean distance, and $t$ is the ground truth of the similarity relationship between $G_1$ and $G_2$, {\\it i.e.}, if $G_1$ and $G_2$ are similar, $t=1$; otherwise, $t=-1$. \n\\vspace{2ex}\n\\noindent\n{\\it D. Minimum Vertex Cover}\n\\vspace{2ex}\nSato et al. , from a theoretical perspective, study the power of GNNs in learning approximation algorithms for the minimum vertex cover (MVC) problem. They prove that no existing GNN can compute a $(2 - \\epsilon)$-approximation for MVC, where $\\epsilon > 0$ is any real number and $\\Delta$ is the maximum node degree.\nMoreover, Sato et al. propose a more powerful consistent port numbering GNN (CPNGNN), which can return a $2$-approximation for MVC. The authors theoretically prove that there exists a set of parameters of CPNGNN that can be used to find an optimal solution for MVC. However, the authors do not propose a method for finding this set of parameters.\nCPNGNN is designed based on graph port numbering. Given a graph $G$, the ports of a node $u$ are pairs $(u, i)$, $1\\leq i\\leq |N_u|$, where $i$ is the port number. 
A port numbering is a function $p$ such that for any edge $(u_1,u_2)\\in G$, there exists a port $(u_1, i)$ of $u_1$ and a port $(u_2, j)$ of $u_2$ satisfying $p(u_1,i)=(u_2,j)$. Intuitively, $u_1$ can send messages from the $i$th port of $u_1$ to the $j$th port of $u_2$. If $p(u_1,i)=(u_2,j)$, $u_1$ is denoted by $p_{tail}(u_2,j)$ and $i$ is denoted by $p_n(u_2,j)$. An example of port numbering is shown in Fig.~\\ref{fig:portnumb}.\n\\begin{figure}\n\\centering\n\\includegraphics[width = 8cm]{fig/cpngnn/portnumb.pdf}\n\\caption{An example of port numbering}\n\\label{fig:portnumb}\n\\end{figure}\nCPNGNN stacks $L$ convolution layers, and the $l$-th layer is defined as follows.\n\\begin{equation}\n{\\bf h}_u^l = ReLU({\\bf W}^l [{\\bf h}_u^{l-1} || {\\bf x}_{u,1}^{l-1} || {\\bf x}_{u,2}^{l-1} || ... ||{\\bf x}_{u, |N_u|}^{l-1}])\n\\end{equation}\n\\[\n{\\bf x}_{u,i}^{l-1} = {\\bf h}_{p_{tail}(u,i)}^{l-1} || p_n(u,i),\n\\]\n\\noindent\nwhere ${\\bf W}^l$ is the trainable parameter matrix and $||$ is concatenation.\nLet ${\\bf h}^{L}_u$ denote the embedding of $u$ outputted by the last layer of CPNGNN. An MLP takes ${\\bf h}^{L}_u$ as input and outputs the prediction ${\\bf y}_u$ for $u$, where ${\\bf y}_u[1]$ and ${\\bf y}_u[0]$ are the probabilities that $u$ is in an MVC or not, respectively. Then, the nodes $\\{u|{\\bf y}_u[1] > {\\bf y}_u[0]\\}$ are outputted as an MVC of $G$. The approximation ratio of CPNGNN is 2 for MVC. \nCPNGNN can also solve the minimum dominating set (MDS) problem and the maximum matching (MM) problem with the approximation ratio $\\frac{\\Delta+1}{2}$. \n\\vspace{2ex}\n\\noindent\n{\\it E. Graph Coloring}\n\\vspace{2ex}\nLemos et al. propose a graph recurrent neural network to predict if a graph is $k$-colorable. Each node $v$ has an embedding vector ${\\bf h}_v$ and each color $c$ also has an embedding vector ${\\bf h}_c$. 
Let $\\vec A$ denote the adjacent matrix of the graph $G$ and $\\vec M$ denote the color assignment matrix, where each row of $\\vec M$ is a node of $G$ and each column of $\\vec M$ is a color. ${\\bf M}[v,c]=1$ means the node $v$ is assigned the color $c$. \nThe embeddings of the $l+1$-th iteration ${\\bf h}_v^{l+1}$ and ${\\bf h}_c^{l+1}$ are computed as follows.\n\\[\n{{\\bf h}_v}^{l+1}, {{{\\bf h}}_{nhid}}^{l+1} = RNN_1({{\\bf h}_{nhid}}^l, {\\bf A}\\times {\\bf h}_v^l, {\\bf M}\\times MLP_1({\\bf h}_c^l))\n\\]\n\\[\n{{\\bf h}_c}^{l+1}, {{\\bf h}_{chid}}^{l+1} = RNN_2({\\bf h}_{chid}^l, {\\bf M} \\times MLP_2({\\bf h}_v^l))\n\\]\nThe embeddings of nodes are fed into an MLP to predict the probability if $G$ is $k$-colorable and the loss is the binary cross entropy between the prediction and the ground-truth. Experiments of show that the proposed techniques outperform the existing heuristic algorithm Tabucol and the greedy algorithm.\n\\vspace{2ex}\n\\noindent\n{\\it F. Graph Matching}\n\\vspace{2ex}\nNowak et al. study the GNN-based model for the quadratic assignment problem, that can be used to address the graph matching problem. A siamese GNN is constructed to compute the embeddings of two graphs. Let $\\vec Y$ be the product of the embeddings of the nodes of the two graphs. A stochastic matrix is computed from $\\vec Y$ by taking the softmax along each row (or column). The cross entropy between the stochastic matrix and the ground-truth node mapping is the loss. The proposed model can also be used to solve the TSP problem, as TSP can be formulated as a quadratic assignment problem.\nWang et al. propose a GNN-based model to predict the matching of two graphs. Given two graphs $G_1$ and $G_2$, it first uses GNN to compute the embeddings of the nodes of the two graphs. Then, the embeddings are fed to a Sinkhorn layer to obtain a doubly-stochastic matrix. The cross entropy with the ground-truth node mapping is used as the loss. 
The idea of the Sinkhorn layer is to iteratively normalize each row and each column of a non-negative matrix until every row sum and every column sum equals 1. Experiments show that the proposed model outperforms the existing learning-based graph matching methods. \n\vspace{2ex}\n\noindent\n{\it G. Graph Isomorphism}\n\vspace{2ex}\nMeng and Zhang propose an isomorphic neural network (IsoNN) for learning graph embedding. The encoder has three layers: a convolution layer, a min-pooling layer, and a softmax layer. The encoder is shown in Fig.~\ref{fig:isonn}. The decoder is an MLP to predict the binary class of $G$, and the loss is the cross entropy between the prediction and the ground truth. \n\begin{figure}\n\centering\n\includegraphics[width = 10cm]{fig/isonn/arch.pdf}\n\caption{Overview of IsoNN}\n\label{fig:isonn}\n\end{figure}\nSpecifically, the encoder of IsoNN is designed as follows.\nGiven a set of motifs, the convolution layer of the encoder extracts a set of isomorphism features from $G$ for each motif. Suppose ${\bf K}_i$ is the adjacency matrix of the $i$-th motif that has $k$ nodes. The L2-norm between ${\bf K}_i$ and a $k$ by $k$ submatrix ${\bf A}_{x,y,k}$ of the adjacency matrix ${\bf A}$ of $G$ is an isomorphism feature extracted by ${\bf K}_i$ with respect to ${\bf A}_{x,y,k}$, where $x$ and $y$ denote the top-left corner of the submatrix in ${\bf A}$. IsoNN examines $k!$ permutations of ${\bf K}_i$ and extracts $k!$ isomorphism features for ${\bf A}_{x,y,k}$. The smallest one is regarded as the optimal isomorphism feature extracted by ${\bf K}_i$ for ${\bf A}_{x,y,k}$, which is computed by the min-pooling layer. Since the optimal isomorphism features for ${\bf A}_{x,y,k}$ extracted by different motifs can have different scales, the softmax layer is used to normalize them. 
Finally, the normalized isomorphism features extracted by all motifs for all values of $x$ and $y$ are concatenated as the embedding of $G$.\n\\vspace{2ex}\n\\noindent\n{\\it H. Maximum Independent Set}\n\\vspace{2ex}\nLi et al. propose a GNNTS model that combines GNN and heuristic search to compute the maximum independent set (MIS) of a graph. GNNTS trains a GCN $f$ using a set of training graphs, where the MISs of a graph can be used as the ground truth labels of the graph. \nFor a graph $G=(V,E)$, the prediction result of $f$ is a $|V|\\times 2$ matrix $\\vec Y$, where ${\\vec Y}[\\cdot, 1]$ and ${\\vec Y}[\\cdot, 0]$ are the probabilities of the nodes being in or not in an MIS of $G$, respectively. \nThe basic idea of GNNTS is to use $f$ as the heuristic function within a greedy search procedure. Specifically, in each iteration, the nodes of $G$ are sorted by ${\\bf Y}[\\cdot,1]$. The greedy algorithm picks the node $u$ with the largest value in ${\\bf Y}[\\cdot,1]$, marks $u$ as 1, and adds $u$ to a partial solution $U$. All neighbors of $u$ are marked as 0. $u$ and its neighbors are removed from $G$, and the remaining graph is input to $f$ for the next iteration. Once all nodes in $G$ are marked, $U$ is returned as the MIS of $G$.\n\\begin{figure}\n\\centering\n\\includegraphics[width = 4cm]{fig/gnnts/case.pdf}\n\\caption{Illustration of the two MISs of the square graph }\n\\label{fig:gnnts_case}\n\\end{figure}\nThe basic method described above has the disadvantage that it cannot support the case in which $G$ has multiple solutions. For the example shown in Fig.~\\ref{fig:gnnts_case}, the square graph of four nodes has two MISs and the basic method predicts that each node has a probability 0.5 of belonging to an MIS, which is not useful. 
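The greedy extraction described above can be sketched as follows; this is a toy version with the GCN's output replaced by a fixed probability vector, and the function name is ours:

```python
def greedy_mis(adj, probs):
    # Greedy extraction guided by predicted probabilities: repeatedly pick
    # the remaining node with the largest probability of being in an MIS,
    # then drop it and all of its neighbors from the graph.
    n = len(adj)
    alive = set(range(n))
    solution = []
    while alive:
        u = max(alive, key=lambda v: probs[v])
        solution.append(u)
        alive.discard(u)
        alive -= {v for v in range(n) if adj[u][v]}
    return solution

# Path graph 0-1-2: nodes 0 and 2 form the unique MIS.
mis = greedy_mis([[0, 1, 0], [1, 0, 1], [0, 1, 0]], [0.9, 0.5, 0.8])
```

On the square graph, however, a symmetric probability vector such as $(0.5, 0.5, 0.5, 0.5)$ gives this greedy rule no useful guidance, which is exactly the disadvantage illustrated above.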
\nTo address this disadvantage, the GNN $f$ is extended to output multiple prediction results, {\\it i.e.}, $f(G) = \\{f^1(G), f^2(G),...,f^m(G)\\}$, where $f^i(G)$ is a $|V|\\times 2$ matrix ${\\vec Y}^i$, $1\\leq i\\leq m$, and $m$ is a hyperparameter. Then, the GNN $f$ is used in a tree search procedure. Specifically, GNNTS maintains a tree of partial solutions, where each leaf is a partial solution to be extended. At each step, GNNTS randomly picks a leaf $n_{leaf}$ from the search tree and uses $f$ to output $m$ prediction results ${\\vec Y}^1, {\\vec Y}^2, ..., {\\vec Y}^m$. Then, for each ${\\vec Y}^i$, GNNTS uses the basic method to compute an extension of $n_{leaf}$. The $m$ newly obtained partial solutions are inserted into the search tree as the children of $n_{leaf}$. If a leaf of the search tree cannot be extended anymore, the leaf is a maximal independent set. The largest computed maximal independent set is output. GNNTS can also solve the minimum vertex cover (MVC) and maximal clique (MC) problems by reducing to MIS.", "id": "63d1e43d-cd84-4331-9867-f11642f16f9d", "level": "subsection", "origin_cites_number": 12, "parent_id": "0dd7256f-34f8-494d-abe3-bd16bf0e03a4", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "Graph Learning Based Combinatorial Optimization Methods" ], [ "subsection", "Non-autoregressive CO Methods" ] ], "subsections": [ "a434626f-6e6f-448b-a5ad-f779a8bcec6f" ], "title": "Non-autoregressive CO Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "Non-autoregressive methods output a\nsolution in one shot. The advantage is that the inference of non-autoregressive methods is faster than that of autoregressive methods . However, the probability of a node/edge being a part of a solution does not depend on that of other nodes/edges. 
Consequently, non-autoregressive methods may fail to outperform autoregressive methods on the CO problems having sequential characteristics, such as TSP. Therefore, many recent works study autoregressive methods.", "id": "a434626f-6e6f-448b-a5ad-f779a8bcec6f", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "63d1e43d-cd84-4331-9867-f11642f16f9d", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "Graph Learning Based Combinatorial Optimization Methods" ], [ "subsection", "Non-autoregressive CO Methods" ], [ "subsubsection", "Discussions" ] ], "subsections": [], "title": "Discussions" }, { "cite_extract_rate": 0.30000000000000004, "cites": [ 187, 1654, 167 ], "content": "\\label{autoreg}\nAutoregressive methods iteratively extend a partial solution. In each iteration, a node/edge is added to the partial solution. Most existing works use sequence model-based methods or reinforcement learning-based methods to iteratively extend the partial solution.\n\\vspace{1ex}\n\\noindent\n{\\it A. Sequence Model Based Methods}\n\\vspace{1ex}\nThe pointer network (Ptr-Net) proposed by Vinyals et al. is a seminal work in this category. It uses an RNN-based AutoEncoder to solve the travelling salesman problem (TSP) on a Euclidean graph. The encoder of Ptr-Net is an RNN taking the nodes of the graph $G$ as input and outputting an embedding of $G$, where the order of the nodes is randomly chosen. The decoder of Ptr-Net is also an RNN. In each time step, the decoder computes attention weights over the input nodes, and selects the input node that has the largest attention weight as output. \nSpecifically, given a graph $G$, suppose the nodes of $G$ are sequentially input as $v_1,v_2,...,v_{|V|}$ to the encoder, and the decoder sequentially outputs $v_{j_1}, v_{j_2},..., v_{j_{|V|}}$. 
Let ${\\bf a}_1,{\\bf a}_2,...,{\\bf a}_{|V|}$ and ${\\bf b}_1,{\\bf b}_2,...,{\\bf b}_{|V|}$ denote the sequences of the hidden states of the encoder and the decoder, respectively. \nFor the $k$-th time step of the decoder, the decoder selects one node in $v_1,v_2,...,v_{|V|}$ as $v_{j_k}$ by an attention weight vector ${\\bf \\alpha}^k$ over ${\\bf a}_1,{\\bf a}_2,...,{\\bf a}_{|V|}$. ${\\bf \\alpha}^k$ is defined as: \n\\[\n{\\bf \\alpha}^k[j] = {\\bf c}^T [tanh ({\\bf W}_1 {\\bf a}_j + {\\bf W}_2 {\\bf b}_k)], 1\\leq j\\leq |V|\n\\]\n\\noindent\nwhere ${\\bf c}$, ${\\bf W}_1$, and ${\\bf W}_2$ are trainable parameters.\nThen, the decoder outputs $v_{j_k}$ = $v_i$, where $i=argmax~{\\bf \\alpha}^k$. \nFor example, Fig.~\\ref{fig:ptrnet}(a) shows a Euclidean graph $G$ with four nodes and a solution $v_1,v_3,v_2,v_4$. Fig.~\\ref{fig:ptrnet}(b) shows the procedure of Ptr-Net for computing the solution. The hollow arrow marks the node that has the largest attention weight at each time step of the decoder. \n\\begin{figure}\n\\centering\n\\includegraphics[width = 10cm]{fig/pointerNet/pnet.pdf}\n\\caption{An example of using Ptr-Net . (a) shows a Euclidean graph $G$ on a 2D plane, and the solution is marked by the edges. (b) shows the encoder and the decoder of Ptr-Net for finding the solution on $G$.}\n\\label{fig:ptrnet}\n\\end{figure}\nMilan et al. propose an LSTM-based method to solve the graph matching problem. \n Given two graphs $G_1$ and $G_2$ of $n$ nodes, from the features of nodes and edges of $G_1$ and $G_2$, an $n^2$ by $n^2$ similarity matrix $\\vec M$ can be computed, where ${\\bf M}_{ij,lk}$ is the similarity of the edge $(v_i,v_j)\\in G_1$ and $(v_l,v_k)\\in G_2$, and ${\\bf M}_{ii,ll}$ is the similarity of the node $v_i$ of $G_1$ and $v_l$ in $G_2$. ${\\bf M}$ is input to the LSTM as the input feature. At each step, the LSTM predicts a matched pair of nodes. The cross entropy with the ground-truth matching is used as the loss. 
However, ${\\bf M}$ is of $O(n^4)$ size, which is too large for matching large graphs. \nDu et al. observe that link prediction and graph alignment are inherently related and the joint learning of them can benefit each other. Given two graphs $G_1$ and $G_2$, crossing edges between all nodes of $G_1$ and $G_2$ are added. The network alignment model predicts the probability of accepting a crossing edge, {\\it i.e.}, the end nodes of the crossing edge are aligned. The link prediction model predicts the probability of inserting an edge $(u,v)$ to $G_1$ based on if $(u',v')$ is in $G_2$, where $u$ and $v$ are aligned to $u'$ and $v'$, respectively. Both the network alignment model and the link prediction model need the embeddings of the nodes of $G_1$ and $G_2$, which are computed by the generalized SkipGram model using the random walks crossing the two graphs. Suppose the random walk is on $G_1$, it will switch to $G_2$ at the next step with probability $p$. If the random walk switches, the probability of walking from a node $v$ in $G_1$ to a node $u$ in $G_2$ is $p'(v,u)$. If the crossing edge between $v$ and $u$ is an accepted crossing edge, $p'(v,u)=1$; otherwise, $p'(v,u)=\\frac{w(v,u)}{Z}$, where $w(v,u)$ is the structure similarity between $v$ and $u$ and $Z=\\sum_{u'\\in G_2} w(v,u')$. $w(v,u)$ is measured by the degree distributions of the neighbors of $v$ and $u$ in $G_1$ and $G_2$, respectively. In each iteration, the pair of nodes of the two graphs having the largest predicted probability by the graph alignment model is aligned and the edges of $G_1$ and $G_2$ whose probabilities predicted by the link prediction model exceed the threshold are added to $G_1$ and $G_2$, respectively. Node embeddeings are recomputed in each iteration, as the alignment between $G_1$ and $G_2$ and the edges in $G_1$ and $G_2$ are updated. 
Experiments of \n show that link prediction and graph alignment can benefit each other and the proposed techniques are suitable for aligning graphs whose distribution of the degree of aligned nodes is close to linear or the graphs having no node attribute information. \n\\vspace{1ex}\n\\noindent\n{\\it B. Reinforcement Learning Based Searching}\n\\vspace{1ex}\nWhen iteratively extending a partial solution, each iteration selects the node in order to optimize the final solution. Such a sequential decision process can be modeled as a Markov decision process (MDP) and solved by reinforcement learning (RL). Therefore, we first presents a brief review of RL.\n\\vspace{1ex}\n\\noindent\n{\\it B.1 Review of Reinforcement Learning}\n\\vspace{1ex}\nIn RL, an agent acts in an environment, collecting rewards and updating its policy to select future actions. It can be formulated as an MDP $(S, A, T, R, \\gamma)$, where \n\\begin{itemize}\n\\item $S$ is the set of states, and some states in $S$ are end states;\n\\item $A$ is the set of actions;\n\\item $T: S\\times A\\times S\\to [0,1]$ is the transition function, $T(s,a,s')$ is the transition probability to state $s'$ after taking action $a$ in state $s$;\n\\item $R: S\\times A\\to {\\mathbb R}$ is the reward of taking action $a$ in state $s$; and\n\\item $\\gamma$ is a discount factor.\n\\end{itemize}\nThe agent uses a policy $\\pi:S\\to A$ to select an action for a state. RL is to learn an optimal policy $\\pi^*$ that can return the optimal action for each state in terms of the overall reward. RL relies on the state-value function and the action-value function to optimize the policy. The state-value function $V^\\pi(s)$ denotes the overall reward starting from the state $s$ following the policy $\\pi$. The action-value function $Q^\\pi(s,a)$ denotes the overall reward starting from the state $s$ and the action $a$ following the policy $\\pi$. 
Formally, \n\\[\nV^\\pi(s) = {\\mathbb E}_\\pi [\\sum_{t=0}^T \\gamma^t R(s_{t}, a_{t}) | s_0 = s],\n\\]\n\\[\nQ^\\pi(s,a) = {\\mathbb E}_\\pi [ \\sum_{t=0}^T \\gamma^t R(s_t, a_t) | s_0 = s, a_0 = a],\n\\]\n\\noindent\nwhere ${\\mathbb E}_\\pi$ denotes the expected value given that the agent follows the policy $\\pi$, $t$ is the time step and $T$ is the time step of reaching an ending state. The state-value function and the action-value function of the optimal policy $\\pi^*$ are denoted by $V^*$ and $Q^*$, respectively. \nRL can learn $\\pi^*$ by iteratively optimizing the value functions, which is called the value-based method. The value-based methods compute $Q^*$ and output the optimal policy $\\pi^*(s)$ = $\\max_a Q^*(s,a)$. Q-learning is a well-known value-based RL method. Suppose $Q$ is the current action-value function. At each state $s_t$, Q-learning selects the action $a_t$ by the $\\epsilon$-greedy policy, which selects $\\max_a Q(s,a)$ with probability $1-\\epsilon$ and a random action with probability $\\epsilon$, and updates $Q$ as Formula~\\ref{equ:qlearn}. \n\\begin{equation}\\label{equ:qlearn}\nQ(s_t,a_t) = Q(s_t,a_t) + \\alpha_t[R(s_t,a_t) + \\gamma\\max_a Q(s_{t+1},a) - Q(s_t,a_t)],\n\\end{equation}\n\\noindent\nwhere $\\alpha_t$ is the learning rate at the time step $t$. Q-learning converges to $Q^*$ with probability 1, if each state-action pair is performed infinitely often and $\\alpha_t$ satisfies $\\sum_{t=1}^\\infty \\alpha_t = \\infty$ and $\\sum_{t=1}^\\infty \\alpha^2_t < \\infty$.\nQ-learning needs a table, namely the Q-table, to store the action values. The size of the Q-table is $|S|\\times |A|$, which can be too large to support the applications having a large number of states and actions. Therefore, many methods have been proposed to approximate the Q-table by parameterized\nfunctions. For example, deep Q-learning network (DQN) uses a deep neural network as the function approximation of the Q-table . 
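The $\epsilon$-greedy tabular update above can be made concrete with a toy sketch; the two-state environment and all names here are hypothetical, chosen only to make the sketch runnable:

```python
import random

random.seed(0)

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.9, eps=0.1):
    # Tabular Q-learning with an epsilon-greedy policy and the update
    # rule Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if random.random() < eps:
                a = random.randrange(n_actions)          # explore
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])  # exploit
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Hypothetical 2-state chain: action 1 ends the episode with reward 1,
# action 0 stays in state 0 with reward 0.
def step(s, a):
    return (1, 1.0, True) if a == 1 else (0, 0.0, False)

Q = q_learning(n_states=2, n_actions=2, step=step)
```

In this toy chain, the learned action values make the terminating action clearly preferable, matching the optimal policy.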
\nThe value-based methods first optimize the value functions and then improve the policy based on the optimized value functions. There are also many methods that directly optimize the policy based on policy gradient. We refer the reader to for more details of RL.\n\\vspace{1ex}\n\\noindent\n{\\it B.2 Reinforcement Learning Based CO Methods}\n\\vspace{1ex}\nSince iteratively extending a partial solution of a CO problem is inherently a sequential decision process, several works use reinforcement learning (RL) to extend the partial solution. The partial solution and the input graph together determine the state of RL, whereas the node that can be added to the partial solution is the action. RL can learn an optimal policy to find the optimal node for a partial solution. \nDai et al. propose S2V-DQN that combines GNN and deep Q-learning to tackle the MVC problem. Given a graph $G$, let $U$ denote the current partial solution and $\\bar{U}=V\\backslash U$. The RL task for MVC can be formulated as follows. \n\\begin{itemize}\n\\item A state $s$ is determined by $G$ and $U$, $s=f_{state}(G,U)$. If $U$ is a vertex cover of $G$, the state is an end state;\n\\item An action $a_v$ is adding a node $v\\in \\bar{U}$ to $U$;\n\\item The transition $T(f_{state}(G,U), a_v)$ = $f_{state}(G, U\\cup\\{v\\})$; and\n\\item The reward of an action $R(s,a_v) = -1$ so as to minimize the vertex cover.\n\\end{itemize}\nThe representation of state $s$ can be computed by embedding $G$ and $U$ using a GNN as follows. 
\n\\begin{equation}\\label{equ:gnn_stru2vrl}\nf_{state}(G,U) = \\sum_v {\\bf h}^L_v\n\\end{equation}\n\\[\n{\\bf h}^l_u = ReLU({\\vec\\theta_1} x_u + {\\vec \\theta_2}\\sum_{v\\in N_u} {\\bf h}_v^{l-1}+{\\vec \\theta_3}\\sum_{v\\in N_u} ReLU({\\vec \\theta_7} w_{u,v})),\n\\]\n\\noindent\nwhere $L$ is the total number of layers of the GNN, $x_u=1$ if $u\\in U$ and $x_u=0$ otherwise, $w_{u,v}$ is the weight of the edge $(u,v)$, and ${\\vec\\theta_1},{\\vec\\theta_2},{\\vec\\theta_3}$, and ${\\vec\\theta_7}$ are trainable parameters.\nWe can use the embedding of $v$, ${\\bf h}_v$, to represent the action $a_v$. The representations of the state $s$ and the action $a_v$ are fed into an MLP to compute $Q(s,a_v)$ as below. \n\\begin{equation}\nQ(s,a_v) = {\\vec \\theta_4} ReLU(Concat({\\vec \\theta_5}\\sum_{u\\in V} {\\bf h}^L_u, {\\vec \\theta_6}{\\bf h}^L_v)),\n\\end{equation}\n\\noindent\nwhere ${\\vec\\theta_4}, {\\vec\\theta_5}$, and ${\\vec\\theta_6}$ are trainable parameters.\nDeep Q-learning is used to optimize the parameters. After the MLP and the GNN are trained, they can be generalized to compute the MVC for unseen graphs. S2V-DQN can also solve the MaxCut and TSP problems.\nBai et al. propose to compute the maximum common subgraph (MCS) of two graphs using GNN and Q-learning. Given two graphs $G_1$ and $G_2$, the partial solution is a subgraph $g_1$ of $G_1$ and a subgraph $g_2$ of $G_2$ such that $g_1$ and $g_2$ are isomorphic. The RL task for MCS is formulated as follows.\n\\begin{itemize}\n\\item A state $s$ is determined by $G_1$, $G_2$, $g_1$ and $g_2$, $s=f_{state}(G_1,G_2,g_1,g_2)$. If $g_1$ and $g_2$ cannot be extended, the state is an end state;\n\\item An action $a_{u,v}$ is to select a node $u$ from $G_1\\backslash g_1$ and a node $v$ from $G_2\\backslash g_2$ and add them to $g_1$ and $g_2$, respectively;\n\\item The transition $T(f_{state}(G_1,G_2,g_1,g_2), a_{u,v}) = f_{state}(G_1,G_2,g_1\\cup\\{u\\}, g_2\\cup\\{v\\})$. 
The isomorphism between $g_1\\cup\\{u\\}$ and $g_2\\cup\\{v\\}$ needs to be assured; and\n\\item The reward $R(s, a_{u,v})$ = 1.\n\\end{itemize}\n\\begin{figure}\n\\centering\n\\includegraphics[width = 11cm]{fig/rlmcs/qlearn.pdf}\n\\caption{Overview of RLMCS}\n\\label{fig:rlmcs_beam}\n\\end{figure}\nThe representation of the state $s$ can be computed by a GNN on an auxiliary graph $G'$. $G'$ is constructed by adding a pseudo node $n_s$ connecting to the nodes in $g_1$ and the nodes in $g_2$. Then, a GNN is used to compute the node embeddings for $G'$. Note that the node embeddings change with the extension of the partial solution $g_1$ and $g_2$. ${\\bf h}_{G_1}$ and ${\\bf h}_{G_2}$ can be computed by the summation of the embeddings of the nodes in $G_1$ and $G_2$, respectively. The concatenation of ${\\bf h}_{n_s}$, ${\\bf h}_{G_1}$ and ${\\bf h}_{G_2}$ is the representation of the state $s$.\nThe action $a_{u,v}$ is represented by the concatenation of ${\\bf h}_u$ and ${\\bf h}_v$. The representations of the states and the actions are fed into an MLP to predict $Q$.\nFig.~\\ref{fig:rlmcs_beam}(a)-(b) show an example. \nRather than just selecting the one action with the largest Q-value as in , Bai et al. propose to select $k$ actions utilizing beam search. At each time step, the agent of RL is allowed to transit to at most the $k$ best next states. The beam search builds an exploration tree, where each node of the tree is a state and each edge of the tree is an action. Fig.~\\ref{fig:rlmcs_beam}(c) shows an example of $k=3$. The partial solution is returned as a maximal common subgraph if it cannot be extended. The largest one among the computed maximal common subgraphs is output. \nInspired by AlphaGo Zero, which has surpassed humans in the game Go, Abe et al. propose CombOptZero, combining GNN and Monte Carlo tree search (MCTS)-based RL to solve the MVC problem. The formulation of the RL task is the same as in S2V-DQN . 
The key difference is that CombOptZero uses the MCTS-based searching for the next action. \nFor a state $s$, suppose $U$ is the partial solution, a GNN embeds $G$ and $U$ and outputs two vectors ${\\bf p}$ and ${\\bf v}$, where ${\\bf p}[a]$ is the probability of taking the action $a$ for the state, and ${\\bf v}[a]$ is the estimated overall reward from the state $s$ with action $a$. $\\bf p$ and $\\bf v$ are input to a MCTS, which can produce a better action prediction ${\\bf p}'$ than $\\bf p$. $argmax_a~{\\bf p}'[a]$ is outputted as the optimal action selected for $s$. CombOptZero can also solve the MaxCut problem.\nKool et al. use AutoEncoder to sequentially output a TSP tour of a graph $G$. \nThe encoder stacks $L$ self-attention layers. The details of the encoder are presented in Sec.~\\ref{end2end_autoencoder}.\nThe decoder of sequentially predicts the next node to be added to the partial solution $seq$, {\\it i.e.}, a partial TSP tour. At the $t$-th step of decoding, $seq$ has $t-1$ nodes. A special context vector ${\\bf h}_c$ is introduced to represent the decoding context. At the $t$-th step of decoding, ${\\bf h}_c = {\\bf h}_G || {\\bf h}_{seq_{t-1}} || {\\bf h}_{seq_0}$, where $||$ denotes concatenation, $seq_0$ denotes the 0-th node in $seq$ and $seq_{t-1}$ denotes the $t-1$-th node in $seq$. The embedding of a node $v_i$ is computed as ${\\bf h}_{v_i}=\\sum_{j\\in N_i} \\alpha_j {\\bf W}_1 {\\bf h}_j$, where the attention weight $\\alpha_j=\\frac{e^{u_j}}{\\sum_{v_{j'}\\in N_i} e^{u_{j'}}}$ and $u_j=({\\bf W}_2{\\bf h}_c)^T ({\\bf W}_3{\\bf h}_{v_j})$, if $v_j\\not\\in seq$; otherwise, $u_j=-\\infty$. The probability of choosing $v_i$ to add to $seq$ at the $t$-th step is $p_{v_i}=\\frac{e^{u_j}}{\\sum_{v_{j'}\\in G} e^{u_{j'}}}$. ${\\bf W}_1, {\\bf W}_2$ and ${\\bf W}_3$ are trainable parameters. The REINFORCE algorithm is used to train the model. 
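The masking of visited nodes in this decoder (setting $u_j=-\infty$ before the softmax) can be sketched with a small NumPy example; `next_node_probs` and the score values are our own illustrative choices:

```python
import numpy as np

def next_node_probs(scores, visited):
    # Decoder-style node selection: attention scores of nodes already in
    # the partial tour are masked to -inf, then a softmax gives the
    # probability of choosing each remaining node.
    u = np.asarray(scores, dtype=float).copy()
    u[list(visited)] = -np.inf
    e = np.exp(u - u.max())   # exp(-inf) = 0, so masked nodes get probability 0
    return e / e.sum()

p = next_node_probs([1.0, 2.0, 0.5, 1.5], visited={1, 3})
```

Visited nodes receive probability exactly 0, and the remaining probability mass is shared among the unvisited nodes in proportion to their attention scores.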
\nThe experiments presented in show that the proposed method can support several problems related to TSP, including the Vehicle Routing Problem (VRP), Orienteering Problem (OP), Prize Collecting TSP (PCTSP) and Stochastic PCTSP (SPCTSP) with the same set of hyperparameters. However, the proposed method does not outperform specialized algorithms for TSP ({\\it e.g.}, Concorde).\nThere are works that do not iteratively extend a partial solution to a solution of a CO problem but iteratively improve a suboptimal solution to a better solution. For example, \nWu et al. propose to improve the solution of TSP ({\\it i.e.}, a TSP tour) on $G$ using RL. The MDP is defined as follows. A TSP tour of $G$ is a state $s=(v_1, v_2, ..., v_n)$, where $n$ is the number of nodes in $G$ and $v_i\\neq v_j$ for $i\\neq j$. A 2-opt operator is an action: it selects a pair of nodes $v_i$ and $v_j$ in $s$ and reverses the order of the nodes between $v_i$ and $v_j$ in $s$. The transition of an action is deterministic. The reward of an action is the reduction of the tour length with respect to the best TSP tour found so far. The Transformer architecture is adopted to compute node embeddings. The compatibility of a pair of nodes $v_i$ and $v_j$ is computed as $({\\bf W}_1 {\\bf h}_i)^T ({\\bf W}_2{\\bf h}_j)$, where ${\\bf W}_1$ and ${\\bf W}_2$ are trainable parameters. The compatibilities of all pairs of nodes are stored in a matrix $\\vec Y$. $\\vec Y$ is fed into a masked softmax layer as follows.\n\\[\n{\\bf Y}'_{i,j}=\\left\\{\\begin{array}{ll}\nC\\cdot tanh({\\bf Y}_{i,j}), & \\text{if $i\\neq j$}\\\\\n-\\infty, & \\text{if $i=j$}\n\\end{array}\\right.\n\\]\n\\[\nP = softmax({\\bf Y}'),\n\\]\n\\noindent\nwhere $P_{i,j}$ is the probability of selecting the pair of nodes $v_i$ and $v_j$ in the 2-opt operator. REINFORCE is used to train the model. 
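The 2-opt action itself, on a tour stored as a Python list, can be sketched as follows; the function name is ours, and the arguments are positions in the tour rather than node ids:

```python
def two_opt(tour, i, j):
    # Apply the 2-opt move: reverse the order of the nodes between
    # positions i and j (inclusive) of the tour.
    assert 0 <= i < j < len(tour)
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

new_tour = two_opt([0, 1, 2, 3, 4], 1, 3)
```

Each move keeps the set of visited nodes unchanged and only reorders a contiguous segment, which is why the reward can be measured purely by the change in tour length.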
The experiments reported in show that the proposed techniques outperform the heuristic algorithms for improving TSP tours.", "id": "8219174f-8a93-4666-9076-7142eed07e54", "level": "subsection", "origin_cites_number": 10, "parent_id": "0dd7256f-34f8-494d-abe3-bd16bf0e03a4", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "Graph Learning Based Combinatorial Optimization Methods" ], [ "subsection", "Autoregressive CO Methods" ] ], "subsections": [], "title": "Autoregressive CO Methods" }, { "cite_extract_rate": 0.4, "cites": [ 187, 1652 ], "content": "Non-autoregressive methods predict the probabilities that each node/edge being a part of a solution in one shot. The cross entropy between the predicted probabilities and the ground-truth solution of the CO problem is used as the loss function. Autoregressive methods predict the node/edge to add to the partial solution step by step. The inference of non-autoregressive methods can be faster than autoregressive methods, as when performing inference non-autoregressive methods predict a solution in one shot. Fast inference is desired for some real-time decision-making tasks, {\\it e.g.}, the vehicle routing problem. However, non-autoregressive methods inherently ignore some sequential characteristics of some CO problems, {\\it e.g.}, the TSP problem. Autoregressive methods can explicitly model this sequential inductive bias by attention mechanism or recurrent neural networks. Experimental comparison in shows that the autoregressive methods can outperform the non-autoregressive methods in terms of the quality of the tour found for the TSP problem but takes much longer time. However, for the problem without sequential characteristic, non-autoregressive methods can produce better solution, {\\it e.g.}, molecule generation task .\nThe non-autoregressive methods need the ground-truth solution for supervised training. 
This is a drawback of the non-autoregressive methods, as it is hard to compute the ground-truth solution for the CO problems on large graphs, considering the NP-hardness of the CO problems. The autoregressive methods with reinforcement learning-based searching do not need the ground truth, which gives them the potential to support larger graphs. Moreover, the supervised learning of non-autoregressive models that have a large number of parameters can make the models memorize the training instances, and the generalization to unseen instances is limited. Although reinforcement learning can overcome this problem, the sample efficiency needs to be improved. \nRegarding the comparison with traditional heuristic algorithms for the CO problems, current learning-based CO methods can have competitive performance with the traditional problem-specific heuristic algorithms on small graphs, but they do not scale well to large graphs. As large graphs have been emerging in many applications, there has been a trend of studying learning-based methods on large graphs.\nThe techniques and ideas of traditional heuristics for the CO problems can benefit the learning-based CO methods. For example, Dai et al. show that incorporating the idea of adding the farthest nodes first and the 2-opt operation of traditional heuristics can improve the performance of the learning-based method for TSP. 
Exploring the chance of integrating the ideas and operations of traditional heuristics into the learning-based methods is attracting increasing research attention .", "id": "3d76407d-b235-468e-973f-ceb7c0440bf5", "level": "subsection", "origin_cites_number": 5, "parent_id": "0dd7256f-34f8-494d-abe3-bd16bf0e03a4", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "Graph Learning Based Combinatorial Optimization Methods" ], [ "subsection", "Discussions" ] ], "subsections": [], "title": "Discussions" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 187 ], "content": "\\label{future}\n Although there are recent significant advances of using graph learning models in solving several different CO problems, graph learning-based methods for CO problems are still at the early stage and there are many open problems for further studies. Some possible directions are listed as follows.\n{\\bf Encoding global information of graphs.} In many graph-based CO problems, the global information of the graph is needed for solving the CO problem ({\\it e.g.}, graph edit distance, TSP). However, existing graph learning models, especially graph convolution, is aggregating local information from neighbors. Although more global information can be obtained by adding more graph convolution layers, there may be a non-trivial over-smooth problem. Therefore, how to effectively encode more global information of graphs is an important direction.\n{\\bf Designing task-dependent model.} A GNN architecture is used to support diverse types of CO problems. However, each problem has its own characteristics. How to encode inductive bias into GNN architectures in order to better capture the characteristics of the CO problems is an important direction. \nThe loss function that is generally used in classification or regression ({\\it e.g.}, cross entropy) is widely used in the learning-based methods for solving CO problems. 
However, the general loss function may not have a strong relationship with the objective of the CO problems. For example, switching two nodes in a TSP tour will produce a TSP tour of very different score with respect to the objective of TSP. However, the two TSP tours can have the same loss in terms of cross entropy . Therefore, designing the problem-specific loss function needs to be studied.\n{\\bf Generalization.} Most existing learning-based methods for a CO problem cannot outperform traditional heuristic algorithms specifically designed for the CO problem on a larger graph or the graphs unseen in training, although the learning-based methods can be on par with or better than the traditional heuristic algorithm on small graphs. Therefore, an important direction is to rethink the learning pipeline for CO\nproblem in order to generalize to larger graphs and unseen graphs . \n{\\bf Integration of traditional heuristics.} Integrating traditional heuristics can improve the performance of learning-based CO methods. For example, Dai et al. present that incorporating the idea of adding the farthest nodes first and the 2-opt operation of traditional heuristics can improve the performance of the learning-based method for TSP. Therefore, identifying the operations of traditional heuristics of a CO problem that can benefit the learning-based methods for the CO problem and integrating the operations appropriately into the learning procedure need to be studied. \n{\\bf Supporting many graphs.} Most existing graph learning based CO methods focus on a graph or two graphs. 
\nAnother possible future direction is to study the problems that involve a large number of graphs, for example, by optimizing query evaluation on a large graph database, such as graph similarity search, graph pattern matching and subgraph isomorphism search.", "id": "21ddc588-a04c-4204-9c43-e565cf44ed48", "level": "section", "origin_cites_number": 3, "parent_id": "637cec56-3ca1-4c47-a516-630783fcfed0", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "Future Work" ] ], "subsections": [], "title": "Future Work" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{conc}\nIn this survey, we provided a thorough overview of the recent graph learning methods for solving CO problems. Existing works fall into two main categories. First, non-autoregressive methods predict the solution of a CO problem in one shot. Second, autoregressive methods iteratively extend a partial solution step by step. Heuristic search and reinforcement learning are widely used in the autoregressive methods to extend the partial solution. In these graph learning based CO methods, a graph is represented as numerical vectors. We also surveyed the recent graph representation learning methods, including the generalized SkipGram-based methods, the AutoEncoder-based methods and the GNN-based methods. 
Several possible directions for future research are discussed as well.", "id": "069c40b2-40c4-497f-ba83-45f6d98ae47c", "level": "section", "origin_cites_number": 0, "parent_id": "637cec56-3ca1-4c47-a516-630783fcfed0", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec-abbr}\nML, machine learning; GNN, graph neural network; DL, deep learning; RL, reinforcement learning; CNN, convolutional neural network; DNN, deep neural\nnetwork; RNN, recurrent neural network; MLP, multi-layer perceptron; MDP, Markov decision process; MCTS, Monte Carlo tree search; CO, combinatorial optimization; MVC, minimum vertex cover; MIS, maximum independent set; TSP, travelling salesman problem; GC, graph coloring; MDS, minimum dominating set; MM, maximum matching; MaxCut, maximum cut; MC, maximum clique; SI, subgraph isomorphism; GSim, graph similarity; MF, matrix factorization; B\\&B, branch and bound, MILP, mixed-integer linear programming; BFS, breadth-first search; DFS, depth-first search. \n\\bibliographystyle{spmpsci} \n\\bibliography{template} \n\\end{document}", "id": "f665555f-63d7-4204-a663-9a97c0e84159", "level": "section", "origin_cites_number": 0, "parent_id": "637cec56-3ca1-4c47-a516-630783fcfed0", "prefix_titles": [ [ "title", "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art \n" ], [ "section", "List of abbreviations" ] ], "subsections": [], "title": "List of abbreviations" } ]
106
[ 1648, 167, 8332, 8475, 1647, 280, 187, 9086, 7007, 1646, 215, 1684, 218, 282, 1010, 7495, 7496, 1155, 1649, 7265, 1651, 1650, 1652, 553, 242, 226, 7006, 179, 1655, 1657, 1653, 1656, 7497, 1654 ]
1.222034
[ "Kalvik Jakkala" ]
Deep Gaussian Processes: A Survey
2021
2021-06-21T13:59:47Z
cs.LG
Gaussian processes are one of the dominant approaches in Bayesian learning. Although the approach has been applied to numerous problems with great success, it has a few fundamental limitations. Multiple methods in the literature have addressed these limitations. However, there has not yet been a comprehensive survey of these topics. Most existing surveys focus on only one particular variant of Gaussian processes and their derivatives. This survey details the core motivations for using Gaussian processes, their mathematical formulations, limitations, and research themes that have flourished over the years to address said limitations. Furthermore, one particular research area, Deep Gaussian Processes (DGPs), has improved substantially in the past decade. The significant publications that advanced the forefront of this research area are outlined in this survey. Finally, a brief discussion on open problems and research directions for future work is presented at the end.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "6486154a-d131-4b97-8685-8ede70aa3f14", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Deep Gaussian Processes: A Survey" ] ], "subsections": [ "3dfa318f-9e94-413e-b4be-a0335bf846ae", "56c4a585-532f-4fd4-ba62-a9a533a8bdb1", "e30dcd13-95dc-487d-a194-1ca12905053f", "d89feaff-8daf-46c9-b741-b54627b16b16", "6b1d5f60-4c1a-4ef7-9f03-b1502c6407db", "81f57f2c-9b98-47b8-a6ad-8cbc3942b7ca", "cbf4d6cf-6b44-4200-9536-7a8dc186c6a4", "cf9221b2-05ad-498a-bf27-3fcff7c2e0b9" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "There have been numerous advances in the field of machine learning in recent years. Most of these advances can be attributed to backpropagation, large datasets, and computational resource improvements. However, most of the currently popular machine learning approaches, mainly deep learning methods, are based on frequentist approaches, which entail making any prediction decisions by studying the correlations among features and predictions in a dataset. The problem with such an approach is that it is easy to overfit to the available dataset and risk learning unwanted biases in the dataset. \nFurthermore, current approaches make it difficult and unintuitive to introduce any prior domain knowledge into a prediction model. Several real-world problems have domain experts; incorporating their knowledge could result in substantially better models. However, most deep learning approaches do not accommodate such incorporations and require application-specific methods to be developed to address such an issue.\nPrediction uncertainty is an important metric that needs to be estimated by reliable models. Most data sources contain non-negligible quantities of noise that might hinder the performance of a prediction model. It is also not uncommon to have test data samples that do not closely resemble the training dataset distribution.
In such cases, it is essential to know the model's prediction uncertainty. If the model were to be used in a mission-critical task without accounting for its prediction uncertainty, it could lead to catastrophic results. \nAnother major drawback of conventional deep learning approaches is model comparison. Deep learning approaches are parametric and require explicit definitions for the model architecture. Moreover, model architectures are application specific. Often multiple model architectures need to be compared against each other to determine which is the best model for a task. However, it is nontrivial to factor in model size in terms of its parameter count and accuracy for comparison. \nThe limitations mentioned above are addressed by Bayesian approaches with varying degrees of ease and efficiency. We can incorporate domain knowledge with a prior distribution, prediction uncertainty can be estimated with prediction variance, and models can be aptly compared against each other with the Bayes factor.\nApart from the advantages mentioned above, another interesting feature of Bayesian methods is that they facilitate causal modeling of any system or process. Indeed, most classification or regression problems require a chain of sub-decisions, each of which would lead to the final prediction. However, conventional deep learning approaches are not particularly amenable to specifying such causal models. The Bayesian framework, along with do-calculus, can be used to specify such structures in models. \nThe advantages of Bayesian approaches raise the question of why they don't have widespread adoption yet. Bayesian approaches often incur heavy computational expense or outright intractabilities, making them infeasible for several problems. Nonetheless, these methods are steeped in history and have been used to solve numerous problems with substantial ramifications. Time and time again, the Bayesian framework has proved itself to be worthy of further research.
\nThis paper considers one particular type of Bayesian approach, i.e., Gaussian processes. The method has its roots in Stochastic processes, a field of study dedicated to modeling random processes with Probability theory. Most problems of interest are usually not deterministic processes, or even if they are, one might not have access to all the information needed to model them as such. Stochastic processes mathematically accommodate such uncertainties, and Gaussian processes are one particular variant of Stochastic processes. \nI start my exposition by detailing Gaussian processes, their advantages, and their disadvantages. However, the main focus of this survey is Deep Gaussian Processes (DGPs). I will describe some of the Gaussian Processes' prominent variants crucial for building DGPs and explain the critical DGP approaches.", "id": "3dfa318f-9e94-413e-b4be-a0335bf846ae", "level": "section", "origin_cites_number": 6, "parent_id": "6486154a-d131-4b97-8685-8ede70aa3f14", "prefix_titles": [ [ "title", "Deep Gaussian Processes: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "A collection of random variables is called a Gaussian Process (GP) if the joint distribution of any finite subset of its variables is a Gaussian. Gaussian processes can be thought of as a reinterpretation or a generalization of Gaussian distributions. Gaussian processes are essentially Gaussian distributions with infinite dimensions. Each dimension of the distribution corresponds to an input data sample, and the distribution mean represents each data input's associated label. \nGaussian processes are fascinating because of their analytical properties. Gaussian processes, being Gaussian distributions, have closed-form analytical solutions to summation, conditioning, and marginalization under certain conditions.
Given a Gaussian distribution on a dataset $\\mathbf{X}$ with mean $\\boldsymbol{\\mu}$ and covariance $\\boldsymbol{\\Sigma}$, the marginals and conditionals are given by the following\n$$\n\\mathbf{X}=\\left(\\begin{array}{l}\n\\mathbf{X}_{1} \\\\\n\\mathbf{X}_{2}\n\\end{array}\\right), \\quad \\boldsymbol{\\mu}=\\left(\\begin{array}{l}\n\\boldsymbol{\\mu}_{1} \\\\\n\\boldsymbol{\\mu}_{2}\n\\end{array}\\right), \\quad \\boldsymbol{\\Sigma}=\\left(\\begin{array}{ll}\n\\boldsymbol{\\Sigma}_{11} & \\boldsymbol{\\Sigma}_{12} \\\\\n\\boldsymbol{\\Sigma}_{21} & \\boldsymbol{\\Sigma}_{22}\n\\end{array}\\right)\n$$\n$$\n\\begin{aligned}\np\\left(\\mathbf{X}_{1}\\right) &=\\mathcal{N}\\left(\\mathbf{X}_{1} \\mid \\boldsymbol{\\mu}_{1}, \\boldsymbol{\\Sigma}_{11}\\right) \\\\\np\\left(\\mathbf{X}_{2}\\right) &=\\mathcal{N}\\left(\\mathbf{X}_{2} \\mid \\boldsymbol{\\mu}_{2}, \\boldsymbol{\\Sigma}_{22}\\right) \\\\ \\\\\np\\left(\\mathbf{X}_{1} \\mid \\mathbf{X}_{2}\\right) &=\\mathcal{N}\\left(\\mathbf{X}_{1} \\mid \\boldsymbol{\\mu}_{1 \\mid 2}, \\boldsymbol{\\Sigma}_{1 \\mid 2}\\right) \\\\\n\\boldsymbol{\\mu}_{1 \\mid 2} &=\\boldsymbol{\\mu}_{1}+\\boldsymbol{\\Sigma}_{12} \\boldsymbol{\\Sigma}_{22}^{-1}\\left(\\mathbf{X}_{2}-\\boldsymbol{\\mu}_{2}\\right) \\\\\n\\boldsymbol{\\Sigma}_{1 \\mid 2} &=\\boldsymbol{\\Sigma}_{11}-\\boldsymbol{\\Sigma}_{12} \\boldsymbol{\\Sigma}_{22}^{-1} \\boldsymbol{\\Sigma}_{21}\n\\end{aligned}\n$$\nMoreover, as mentioned before, deep neural networks require the specification of model architectures, which are usually application-specific. However, GPs do not suffer from such issues as they are non-parametric. GPs consider the entire function space and marginalize it to obtain their predictions. The approach has two main advantages: first, one need not decide the model's complexity; second, the marginalization process induces Occam's razor, thereby overcoming any overfitting issues.
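The marginal and conditional identities above can be checked numerically. A minimal numpy sketch follows; the distribution parameters and the observed value are illustrative assumptions, not from the survey:

```python
import numpy as np

# Illustrative 3-D Gaussian; the partition follows the block identities above.
mu = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.5, 0.2],
                  [0.3, 0.2, 1.0]])

i1, i2 = [0], [1, 2]                     # index sets of the partition X1, X2
S11 = Sigma[np.ix_(i1, i1)]
S12 = Sigma[np.ix_(i1, i2)]
S21 = Sigma[np.ix_(i2, i1)]
S22 = Sigma[np.ix_(i2, i2)]

x2 = np.array([1.2, -0.8])               # observed value of X2

# p(X1 | X2 = x2): closed-form conditional mean and covariance
mu_cond = mu[i1] + S12 @ np.linalg.solve(S22, x2 - mu[i2])
Sigma_cond = S11 - S12 @ np.linalg.solve(S22, S21)
```

Note that the conditional covariance does not depend on the observed value `x2`, only on the partition of `Sigma`, exactly as in the identities above.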
\nOne might wonder if modeling any arbitrary process as a Gaussian would result in poor performance. However, Gaussians are omnipresent, and the Central Limit Theorem applies to them. The theorem states that any dataset's mean follows a Gaussian distribution as the number of samples from the distribution increases. So, assuming enough data is available, a Gaussian approximation usually results in reasonable estimates. \nAnother attractive property of Gaussians is that the distribution has the maximum entropy given a dataset's mean and variance. The first two moments, mean and variance, are usually the only metrics that we can accurately compute from data. As a result, using Gaussians to estimate distributions results in the most informative data distribution estimates. \nFinally, another essential property of Gaussian processes was established by Neal, who showed that GPs are equivalent to a single-layered perceptron with infinitely many hidden units. Such networks were, in turn, known to be universal approximators that can approximate any function given enough data.", "id": "56c4a585-532f-4fd4-ba62-a9a533a8bdb1", "level": "section", "origin_cites_number": 4, "parent_id": "6486154a-d131-4b97-8685-8ede70aa3f14", "prefix_titles": [ [ "title", "Deep Gaussian Processes: A Survey" ], [ "section", "Preliminary" ] ], "subsections": [], "title": "Preliminary" }, { "cite_extract_rate": 0, "cites": [], "content": "I have detailed the critical advantages of Bayesian approaches and why researchers are interested in Gaussian processes specifically. This section further elaborates on GPs. I give an intuition of GPs, their mathematical formulation, and an intuitive interpretation of the terms in that formulation. Furthermore, I will explain kernel functions and list some limitations of GPs.
\n\\begin{figure}[htp]\n \\centering\n \\includegraphics[width=\\linewidth]{figs/GP_ill.pdf}\n \\caption{Illustration of a Gaussian Process on the MNIST dataset.}\n \\label{fig:mnist_gp}\n\\end{figure}\nGPs, as mentioned before, are Gaussian distributions interpreted as probabilistic mappings from an input data space to its output label space. This mapping has a convenient interpretation in function space. Formally, we assume a function $f$ maps an input space $\\mathbf{X}$ to an output space $\\mathbf{y}$. This mapping is characterized by a noise term $\\epsilon$.\n$$\n\\mathbf{y}=f(\\mathbf{X})+\\boldsymbol{\\epsilon}, \\boldsymbol{\\epsilon} \\sim \\mathcal{N}\\left(\\mathbf{0}, \\beta^{-1} \\mathbf{I}\\right)\n$$\nIn a non-probabilistic approach, a parametric form for the function $f$ is considered, and Maximum A Posteriori (MAP) is used to estimate the function parameters. GPs, on the other hand, assume a distribution over the functions $f$. The function values are also called latent variables, and any predictions in GPs involve marginalizing over the entire function space. Fig.~\\ref{fig:mnist_gp} illustrates this process. Furthermore, the distribution over the functions is parametrized by a mean vector and a covariance matrix. The covariance is computed with a similarity metric known as a kernel function $K$. \n$$\n\\begin{aligned}\\mathbf{f}|\\mathbf{X} & \\sim \\mathcal{G} \\mathcal{P}\\left(\\mu(\\mathbf{X}), K\\left(\\mathbf{X}, \\mathbf{X}\\right)\\right) \\\\\n\\mu(\\mathbf{X}) &=\\mathbb{E}[\\mathbf{f}(\\mathbf{X})] \\\\\nK\\left(\\mathbf{X}, \\mathbf{X}\\right) &=\\mathbb{E}\\left[(\\mathbf{f}(\\mathbf{X})-\\mu(\\mathbf{X}))\\left(\\mathbf{f}\\left(\\mathbf{X}\\right)-\\mu\\left(\\mathbf{X}\\right)\\right)^{\\top}\\right]\n\\end{aligned}\n$$\nMoreover, by treating the function outputs as noisy versions of the labels, the distribution over $\\mathbf{y}$ conditioned on the input $\\mathbf{X}$ can be obtained as follows.
\n$$\n\\begin{aligned}\np(\\mathbf{y} \\mid \\mathbf{f}) &= \\mathcal{N}\\left(\\mathbf{f}, \\beta^{-1} \\mathbf{I}\\right) \\\\\np(\\mathbf{y} \\mid \\mathbf{X}) &= \\int p(\\mathbf{y} \\mid \\mathbf{f})p(\\mathbf{f} \\mid \\mathbf{X}) d \\mathbf{f}\n\\end{aligned}\n$$\nThe above marginalization is the likelihood of the dataset. It is used to fit hyperparameters involved in the GP formulation by maximizing the training dataset's likelihood. However, we are also interested in predicting labels for new data samples. We can compute the predictions by considering a joint distribution over the function values of data $\\mathbf{X}$ and $\\mathbf{X}_*$ from the training and testing datasets denoted by $\\mathbf{f}$ and $\\mathbf{f}_*$ respectively. \n$$\n\\left[\\begin{array}{l}\n\\mathbf{f} \\\\\n\\mathbf{f}_{*}\n\\end{array}\\right] \\sim \\mathcal{N}\\left(\\mathbf{0},\\left[\\begin{array}{ll}\nK(\\mathbf{X}, \\mathbf{X}) & K\\left(\\mathbf{X}, \\mathbf{X}_{*}\\right) \\\\\nK\\left(\\mathbf{X}_{*}, \\mathbf{X}\\right) & K\\left(\\mathbf{X}_{*}, \\mathbf{X}_{*}\\right)\n\\end{array}\\right]\\right)\n$$\nWe are interested in $\\mathbf{f}_*$ as it is the noise-free realization of $\\mathbf{y}_*$. The posterior distribution of $\\mathbf{f}_*$ can be obtained by conditioning it with $\\mathbf{X}_*$, $\\mathbf{X}$, and $\\mathbf{y}$ as follows. 
\n$$\np(\\mathbf{f_*} \\mid \\mathbf{X}_*, \\mathbf{X}, \\mathbf{y}) = \\frac{1}{p({\\mathbf{y}})} \\int p(\\mathbf{y} \\mid \\mathbf{f}) p(\\mathbf{f}, \\mathbf{f_*}) d \\mathbf{f}\n$$\n$$\n\\begin{aligned}\np(\\mathbf{f_*} \\mid \\mathbf{X}_*, \\mathbf{X}, \\mathbf{y}) \\sim \\mathcal{N}(& K\\left(\\mathbf{X}_{*}, \\mathbf{X}\\right) K(\\mathbf{X}, \\mathbf{X})^{-1} \\mathbf{f}, \\\\\n&\\left.K\\left(\\mathbf{X}_{*}, \\mathbf{X}_{*}\\right)-K\\left(\\mathbf{X}_{*}, \\mathbf{X}\\right) K(\\mathbf{X}, \\mathbf{X})^{-1} K\\left(\\mathbf{X}, \\mathbf{X}_{*}\\right)\\right)\n\\end{aligned}\n$$\nThe above conditional prediction distribution's mean is treated as the model prediction since a Gaussian distribution's mode coincides with its mean. The mean term of the predictive distribution can be interpreted as a weighted average of the training set labels. The weights are defined by the kernel matrices, which assign higher weights to data points close to the test samples in feature space. \nThe variance term has two parts; the first term on the left can be assumed to be the prior variance from the testing data. And the second term can be thought of as the variance induced by the training data. By subtracting the two terms, the initial test data variance is reduced by the training data's information.", "id": "e30dcd13-95dc-487d-a194-1ca12905053f", "level": "section", "origin_cites_number": 2, "parent_id": "6486154a-d131-4b97-8685-8ede70aa3f14", "prefix_titles": [ [ "title", "Deep Gaussian Processes: A Survey" ], [ "section", "Gaussian Processes" ] ], "subsections": [ "c6c2ac5e-548c-4fdf-bce7-58db461fbad2", "9f83acea-8464-481b-8b97-f23de22e7231" ], "title": "Gaussian Processes" }, { "cite_extract_rate": 0, "cites": [], "content": "Unlike a conventional Gaussian distribution, a GP has infinite dimensions; therefore, explicitly defining a covariance matrix would be infeasible. 
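Before turning to kernel functions, the predictive equations above can be realized in a few lines of numpy. This is a minimal sketch with an illustrative squared-exponential kernel and toy 1-D data; unlike the noise-free conditioning above, the observation noise is folded into $K(\mathbf{X}, \mathbf{X})$ and $\mathbf{y}$ is used in place of $\mathbf{f}$, a standard practical variant:

```python
import numpy as np

def sqexp(A, B, ell=1.0, sigma=1.0):
    # Squared Exponential kernel between row sets A (n,q) and B (m,q)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma**2 * np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
X = np.linspace(0.0, 5.0, 20)[:, None]          # toy training inputs
y = np.sin(X).ravel() + 0.05 * rng.normal(size=20)
Xs = np.array([[2.5]])                          # test input
noise = 0.05**2                                 # beta^{-1}, observation noise variance

Knn = sqexp(X, X) + noise * np.eye(len(X))      # K(X,X) with noise folded in
Ksn = sqexp(Xs, X)                              # K(X*,X)
Kss = sqexp(Xs, Xs)                             # K(X*,X*)

# Predictive mean and variance of f_* following the conditioning formulas above
mean = Ksn @ np.linalg.solve(Knn, y)
var = Kss - Ksn @ np.linalg.solve(Knn, Ksn.T)
```

The prediction at the test point is a kernel-weighted average of the training labels, and the predictive variance is the prior variance `Kss` reduced by the information in the training data, mirroring the interpretation given above.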
GPs resort to using kernel functions to determine the covariance matrix, allowing for a compact representation of the kernel matrix. \nKernel functions measure data similarity, and GPs use it to produce a positive semi-definite (PSD) matrix. Kernel functions can be interpreted as dot products between data points in a high dimensional manifold without explicitly mapping them to such a manifold. Hence, these functions allow us to define meaningful similarity metrics relevant to a given task in a computationally efficient manner. \nNumerous kernel functions have been used in literature, each with its own set of properties that make it useful for different tasks. Enumerating and explaining all such functions is out of this paper's scope, and the readers are referred to for a detailed overview. However, two kernels are often used in literature, particularly the Squared Exponential kernel and the Automatic Relevance Detection (ARD) kernel functions detailed below. \nThe Squared Exponential kernel, also known as the Radial Basis Function and Gaussian kernel, has an exponential form. It has two parameters, the length scale $\\ell$, which determines how close a pair of data points should be in order to be correlated. And it has a variance scale $\\sigma$, which quantifies the variance induced from data points. The function is parameter efficient as it has only two parameters, and its exponential form allows for analytical computations of integrals in some cases. For a pair of inputs $\\mathbf{x}$, $\\mathbf{x}^{'} \\in \\mathbb{R}^q$, the Gaussian kernel is defined as follows\n$$\nK\\left(\\mathbf{x}, \\mathbf{x}^{'}\\right)=\\sigma^{2} \\exp \\left(-\\frac{1}{2 \\ell^{2}} \\sum_{j=1}^{q}\\left(\\mathbf{x}_{j}-\\mathbf{x}^{'}_{j}\\right)^{2}\\right)\n$$\nAnother important kernel function is the Automatic Relevance Detection (ARD) kernel. It has a similar form to the Gaussian kernel. However, it has weight parameters for each dimension of the input space. 
We can use the weights to scale down irrelevant features and increase the importance of the relevant ones. \n$$\nK\\left(\\mathbf{x}, \\mathbf{x}^{'}\\right)=\\sigma^{2} \\exp \\left(-\\frac{1}{2} \\sum_{j=1}^{q}w_j\\left(\\mathbf{x}_{j}-\\mathbf{x}^{'}_{j}\\right)^{2}\\right)\n$$\nThe weight parameter in the ARD kernel function can be defined manually. However, such an approach is infeasible as we usually don't know in advance if a feature is essential or not. In practice, the weights are determined from the training dataset, which allows us to obtain kernel function parametrizations that are most suited for the task. Indeed, there are two primary approaches to determine the function parameters, Maximum A Posteriori (MAP) and the Bayesian approach. \nThe MAP approach considers the log probability of the dataset $\\log p(\\mathbf{y} \\mid \\mathbf{X})$. The optimal parameters are computed by taking the log probability derivative with respect to the kernel parameters. Depending on the dataset size, the parameters can be calculated analytically but, in some cases, one needs to resort to approaches like gradient descent to determine the parameters. \n$$\n\\log p(\\mathbf{f} \\mid \\mathbf{X})=-\\frac{1}{2} \\mathbf{f}^{\\top} K^{-1} \\mathbf{f}-\\frac{1}{2} \\log |K|-\\frac{n}{2} \\log 2 \\pi \\\\\n$$\n$$\n\\log p(\\mathbf{y} \\mid \\mathbf{X})=-\\frac{1}{2} \\mathbf{y}^{\\top}\\left(K+\\sigma_{n}^{2} I\\right)^{-1} \\mathbf{y}-\\frac{1}{2} \\log \\left|K+\\sigma_{n}^{2} I\\right|-\\frac{n}{2} \\log 2 \\pi\n$$\nA limitation of using MAP to determine the kernel parameters is that it could lead to overfitting. The problem's severity varies depending on the dataset size, the GP's variant, and the kernel function being used. Nonetheless, we can mitigate the problem by using a Bayesian approach. \nA Bayesian approach defines a distribution over the kernel parameters and computes the posterior parameter distribution conditioned on the training dataset. 
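As a concrete illustration of the MAP objective above, the following numpy sketch evaluates the log marginal likelihood log p(y | X) under an ARD kernel; the toy data, weight settings, and noise level are illustrative assumptions. Upweighting the relevant input dimension should raise the likelihood:

```python
import numpy as np

def ard(A, B, w, sigma=1.0):
    # ARD kernel: per-dimension weights w_j scale each feature's contribution
    d2 = (w * (A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma**2 * np.exp(-0.5 * d2)

def log_marginal(X, y, w, noise=0.1):
    # log p(y | X) for the noisy GP, matching the MAP objective above;
    # -0.5*log|K| is computed from the Cholesky factor's diagonal
    n = len(y)
    K = ard(X, X, w) + noise**2 * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * n * np.log(2 * np.pi)

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))
y = np.sin(2.0 * X[:, 0])        # only the first feature is relevant

ll_irrelevant = log_marginal(X, y, w=np.array([0.0, 1.0]))
ll_relevant = log_marginal(X, y, w=np.array([1.0, 0.0]))
```

In practice the weights would be optimized by gradient ascent on this objective rather than compared by hand as done here.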
Although the Bayesian approach is ideal, it is usually intractable to analytical computations. As such, one has to resort to sampling approaches such as Hamiltonian Monte Carlo (HMC) methods to determine the parameter distributions. However, HMC methods have their limitations and introduce additional computational costs that might be infeasible.", "id": "c6c2ac5e-548c-4fdf-bce7-58db461fbad2", "level": "subsection", "origin_cites_number": 2, "parent_id": "e30dcd13-95dc-487d-a194-1ca12905053f", "prefix_titles": [ [ "title", "Deep Gaussian Processes: A Survey" ], [ "section", "Gaussian Processes" ], [ "subsection", "Kernel Function" ] ], "subsections": [], "title": "Kernel Function" }, { "cite_extract_rate": 0, "cites": [], "content": "Although GPs have several advantages, they also have a few key limitations which hinder their use in most machine learning problems. Particularly, there are three main problems:\n\\begin{enumerate}\n\\item Computation cost\n\\item Storage cost\n\\item Hierarchical feature extraction\n\\end{enumerate}\n\\begin{figure}[htp]\n \\centering\n \\includegraphics[width=0.6\\linewidth]{figs/GP.pdf}\n \\caption{Overview of Gaussian Processes and their variants.}\n \\label{fig:overview}\n\\end{figure}\nThe computational cost of GPs can be rather substantial; one needs to invert the kernel matrix to get the predictive distribution of a GP. The kernel matrix's size is $n \\times n$ where $n$ is the number of data points in the training dataset. Inverting such a matrix takes $O(n^3)$ computation time. Furthermore, once the kernel matrix inverse is available, it takes $O(n)$ and $O(n^2)$ time to determine the predictive distribution's mean and variance of a new data point.\nAdditionally, since GPs require the entire training dataset's storage, the storage cost is $O(n^2)$. Depending on the size of the dataset, the storage costs substantially limit the scalability of the approach. 
Moreover, if the GP were to be used in an environment where the training dataset size keeps increasing, the computation and storage costs could overwhelm the entire process, rendering the benefits of GP far too expensive. As such, GPs are usually only feasible for datasets with about $1000-3000$ data points.\nAnother major drawback of GPs is the lack of kernel functions capable of handling structured data wherein one needs to consider hierarchical feature extraction to determine the similarity of a pair of data points properly. Such an issue often arises in data such as images but, it is also prevalent in simpler vector datasets. Conventional kernel functions are ill-equipped to handle such correlations, and one needs to resort to deep feature extraction like the ones used in Deep learning models. However, such feature extraction still needs to be confined to a Bayesian framework to retain a GP's advantages.\nSparse Gaussian Processes address the computational and storage costs. And the feature extraction issue is addressed by Deep Gaussian Processes. I explain some of the prominent methods for Sparse and Deep GPs that have been developed over the last two decades in the following sections. 
Fig.~\\ref{fig:overview} shows a flow chart that relates the limitations to the GP variants that address them.", "id": "9f83acea-8464-481b-8b97-f23de22e7231", "level": "subsection", "origin_cites_number": 0, "parent_id": "e30dcd13-95dc-487d-a194-1ca12905053f", "prefix_titles": [ [ "title", "Deep Gaussian Processes: A Survey" ], [ "section", "Gaussian Processes" ], [ "subsection", "Limitations" ] ], "subsections": [], "title": "Limitations" }, { "cite_extract_rate": 0.2, "cites": [ 7220 ], "content": "\\begin{wrapfigure}{r}{0.5\\textwidth}\n \\begin{center}\n \\includegraphics[width=0.2\\textwidth]{figs/SGP.pdf}\n \\end{center}\n \\caption{Overview of Sparse Gaussian Processes.}\n \\label{fig:sgp}\n\\end{wrapfigure}\nGiven the computational and storage requirements that hinder the widespread use of GPs, a substantial number of papers have tried to address the problem; they are collectively referred to as Sparse Gaussian Processes (SGPs). Fig.~\\ref{fig:sgp} depicts the prominent methods covered in this section. The terminology stems from the way most of these approaches address the issue. Because the primary problem is the inversion of a covariance matrix, most methods try to introduce sparsity and reduce the matrix size that needs to be inverted while retaining the original matrix's performance. \nThis section focuses on some of the well-known methods crucial for developing some of the Deep Gaussian Process methods that will be detailed in the upcoming section. A complete overview of all SGPs is outside this survey's scope; the readers are referred to for a thorough summary. \nThe Nyström approximation is a well-known approach to reducing the inversion cost of the covariance matrix in GPs. The Nyström approximation allows one to generate a low-rank approximation of any kernel matrix. The approach is applied to GPs by selecting $m$ data points with $m \\ll n$ from the training set.
Then a low rank approximation $\\tilde{K}$ to the kernel matrix is computed as shown below\n$$\\tilde{K} = K_{n,m} K_{m,m}^{-1} K_{m,n}$$\nHere, $K_{n, m}$ represents the kernel matrix computed from the $n$ and $m$ data points in the training dataset and the selected subset, respectively. The same notation is used for the other kernel matrices. The approximation needs the inversion of only an $m \\times m$ matrix, thereby reducing the computation cost from $ O(n^3)$ to $O(m^3)$.\nHowever, the approximation assumes that the data comes from a low rank manifold, which would be the case if the data dimension $d<n$. In such a case, the low rank approximation would be exact and result in no information loss. But, the selected $m$ data points also influence the approximation. It is possible to have data points that result in a poor approximation even if the data is from a low dimensional manifold. \nIn practice, most datasets have more data points than the number of features; thus, the method is applicable in most cases. However, selecting the data points is crucial to the approximation's performance. Williams and Seeger used a random subset of $m$ data points in their approach . Although the method works, the unsophisticated data selection process limited the method's performance. \nSnelson and Ghahramani addressed the subset selection for Nyström approximation by treating the subset as model parameters and termed them pseudo data. The pseudo data is assumed to be synthetic and does not necessarily correspond to any data point available in the training dataset. Indeed, they could take values that are some combination of the training dataset. \nThe pseudo points were computed using maximum likelihood. However, to use maximum likelihood, one needs to parameterize the GP with pseudo points appropriately. 
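The Nyström construction above is straightforward to sketch. Here is an illustrative numpy version with randomly chosen points in the spirit of Williams and Seeger; the dataset, kernel, and sizes are assumptions for demonstration:

```python
import numpy as np

def sqexp(A, B, ell=1.0):
    # Squared Exponential kernel between row sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))            # toy dataset, n = 200

m = 40                                   # number of selected points, m << n
idx = rng.choice(len(X), size=m, replace=False)
Z = X[idx]                               # randomly chosen subset

K_nm = sqexp(X, Z)
K_mm = sqexp(Z, Z) + 1e-8 * np.eye(m)    # small jitter for numerical stability
K_tilde = K_nm @ np.linalg.solve(K_mm, K_nm.T)   # rank-m Nystrom approximation

K_nn = sqexp(X, X)                       # full kernel, only for comparison here
rel_err = np.linalg.norm(K_nn - K_tilde) / np.linalg.norm(K_nn)
```

Only the `m x m` block is inverted, giving the $O(m^3)$ cost mentioned above; the quality of `K_tilde` depends on how well the selected subset covers the data, which is exactly what the pseudo-point methods discussed here try to improve.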
Snelson and Ghahramani introduced a distribution over the pseudo points and considered a joint distribution over latent representations of data from the training, testing and, pseudo data points given by $\\mathbf{f}, \\mathbf{f_*}$ and $\\mathbf{m}$. The authors then marginalized the pseudo points to obtain the posterior distribution as shown below\n$$\n\\begin{aligned}\np(\\mathbf{y} \\mid \\mathbf{X}, \\mathbf{m}) &= \\mathcal{N}\\left(\\mathbf{y} \\mid K_{nm} K_{mm}^{-1} \\mathbf{m}, \\mathbf{\\Lambda}+\\sigma^{2} \\mathbf{I}\\right) \\\\\n\\mathbf{\\Lambda} &=\\operatorname{diag}(\\boldsymbol{\\lambda}) \\\\\n\\lambda_{n} &= K_{nn}-K_{mn}^{\\top} K_{mm}^{-1} K_{mn} \\\\\np\\left(\\mathbf{f}_{*}, \\mathbf{f}\\right) &= \\int p\\left(\\mathbf{f}_{*}, \\mathbf{f}, \\mathbf{m}\\right) \\mathrm{d} \\mathbf{m} =\\int p\\left(\\mathbf{f}_{*}, \\mathbf{f} \\mid \\mathbf{m}\\right) p(\\mathbf{m}) \\mathrm{d} \\mathbf{m} \\\\\np(\\mathbf{m}) &=\\mathcal{N}\\left(\\mathbf{0}, K_{\\mathbf{m}, \\mathbf{m}}\\right)\n\\end{aligned}\n$$\nAlthough using maximum likelihood to determine the pseudo points' distribution works in practice, using maximum likelihood runs the risk of overfitting. It would be ideal to use a Bayesian approach to compute the pseudo points distribution conditioned with the training set. Unfortunately, such an approach is infeasible as it becomes intractable to analytical solutions of the pseudo points. Moreover, the approach works by assuming that the joint distribution $p\\left(\\mathbf{f}_{*}, \\mathbf{f}\\right)$ can be partitioned as follows\n$$\np\\left(\\mathbf{f}_{*}, \\mathbf{f}\\right) =\\int p\\left(\\mathbf{f}_{*}, \\mid \\mathbf{m}\\right)p\\left(\\mathbf{f} \\mid \\mathbf{m}\\right) p(\\mathbf{m}) \\mathrm{d} \\mathbf{m} \\\\\n$$\nThe assumption limits the information a GP obtains from the training set to be induced only through the pseudo set. Hence, the pseudo points are also referred to as inducing inputs. 
The factorization assumption limits the capacity of the model and affects its accuracy. Notably, the prior distribution that is assumed for the pseudo set substantially influences the results. \nSnelson and Ghahramani treated the pseudo points as hyperparameters and introduced a prior over them, resulting in an inaccurate posterior compared to a vanilla GP. The inaccuracy was a consequence of the formulation of the kernel approximation. Titsias addressed the overfitting and inexact posterior issue by considering a variational approach. The approach introduces a lower bound that can be optimized to determine the inducing inputs and the kernel hyperparameters. The bound shown below can be used to solve for the inducing points and kernel hyperparameters. We can then use the inducing points for computing the predictive distribution. \n$$\n\\begin{aligned}\n\\mathcal{F} &= \\log \\left[\\mathcal{N}\\left(\\mathbf{y} \\mid \\mathbf{0}, \\sigma^{2} I+Q_{nn}\\right)\\right]-\\frac{1}{2 \\sigma^{2}} \\operatorname{Tr}(\\widetilde{K}) \\\\\nQ_{nn} &= K_{nm} K^{-1}_{mm} K_{mn} \\\\\n\\widetilde{K} &= K_{nn}-K_{nm} K^{-1}_{mm} K_{mn} \n\\end{aligned}\n$$\nHowever, this marginal bound didn't have the factorization required to apply stochastic gradient descent. Hensman et al. improved on the work of Titsias by developing a new bound that could be optimized with stochastic gradient descent. Unlike Titsias's approach, the method did not require the entire dataset at once to compute the variational parameters. It used the following bound.
\n$$\n\\begin{aligned}\n\\mathcal{F}=\\sum_{i=1}^{n}\\{& \\log \\mathcal{N}\\left(\\mathbf{y}_{i} \\mid k_{i}^{\\top} K_{mm}^{-1} \\mathbf{v}, \\beta^{-1}\\right) \\\\\n&\\left.-\\frac{1}{2} \\beta \\widetilde{k}_{i, i}-\\frac{1}{2} \\operatorname{tr}\\left(\\mathbf{S} \\mathbf{\\Lambda}_{i}\\right)\\right\\} \\\\\n&-\\mathrm{KL}(q(\\mathbf{u}) \\| p(\\mathbf{u})) \\\\\n\\end{aligned}\n$$\n$$\n\\begin{aligned}\nq(\\mathbf{u}) &= \\mathcal{N} (\\mathbf{u} \\mid \\mathbf{v}, \\mathbf{S}) \\\\\n\\mathbf{\\Lambda}_{i} &= \\beta K^{-1}_{mm} k_i k_i^{\\top} K^{-1}_{mm}\n\\end{aligned}\n$$\nHere, $\\mathbf{u}$ is the set of feature space representations of the inducing points, and $k_i$ is the $i^{th}$ column of $K_{mn}$. Hensman et al. showed the approach to scale well to large datasets while retaining the reduced model complexity of $O(m^3)$.", "id": "d89feaff-8daf-46c9-b741-b54627b16b16", "level": "section", "origin_cites_number": 5, "parent_id": "6486154a-d131-4b97-8685-8ede70aa3f14", "prefix_titles": [ [ "title", "Deep Gaussian Processes: A Survey" ], [ "section", "Sparse Gaussian Processes" ] ], "subsections": [], "title": "Sparse Gaussian Processes" }, { "cite_extract_rate": 0, "cites": [], "content": "The methods discussed so far primarily addressed the computation and storage cost issues. This section introduces a variant of GPs that can be trained in an unsupervised manner. Gaussian Process Latent Variable Models (GPLVMs) assume the feature space is a latent space with unknown data distribution. The latent space distribution is then learned during the training phase. Although the method may seem unrelated, it plays a significant role in some of the Deep Gaussian Processes that I will cover in the next section. \nLawrence showed that if the function space of a GP is constrained to linear function space, the GP can be interpreted as a probabilistic variant of Principal Component Analysis (PCA).
Moreover, if the function space is relaxed to a non-linear space defined by a kernel function, it can be interpreted as probabilistic non-linear PCA. The approach assumes a standard Gaussian prior over the input space and maximizes the log probability of the dataset $p( \\mathbf{y} \\mid \\mathbf{X}, \\beta)$ with respect to the latent inputs $\\mathbf{X}$. \nThe latent space distribution cannot be computed analytically because of the non-linearities introduced by the kernel function. However, Lawrence showed that the distribution could be estimated using the Expectation-Maximization algorithm. As a consequence, the approach returns only the mode of the latent distribution.\nAdditionally, GPLVMs were shown to be useful for reconstructing inputs with partially observed features. Such a scenario often occurs in image reconstruction or denoising tasks.\nAlthough GPLVMs are very useful for unsupervised tasks, the original method assumed access to the full kernel matrix, which entails storing the entire training dataset and inverting an $n \\times n$ kernel matrix. Lawrence addressed this issue by showing that most sparse Gaussian process approaches can be turned into GPLVMs. However, the approach still gave a MAP estimate of the latent space and risked overfitting to the training dataset. \nTitsias and Lawrence addressed the overfitting issue by proposing a Bayesian approach. Instead of finding a MAP solution to the latent space, they proposed a variational approach. However, naive use of the variational approach to finding the data distribution introduces intractabilities. Titsias and Lawrence addressed the problem by incorporating Titsias's variational approach to SGPs. 
The introduction of pseudo points canceled out the intractable terms in the variational bound for GPLVMs and resulted in a feasible optimization bound shown below \n$$\n\\begin{aligned}\n\\mathcal{F}(q) \\geq \\log \\left(\\int e^{\\left\\langle\\log \\mathcal{N}\\left(\\mathbf{y}_{d} \\mid \\boldsymbol{\\alpha}_{d}, \\beta^{-1} I\\right)\\right\\rangle_{q(\\mathbf{X})}}p\\left(\\mathbf{m}_{d}\\right) d \\mathbf{m}_{d}\\right) \\\\\n-\\frac{\\beta}{2} \\operatorname{Tr}\\left(\\left\\langle K_{nn}\\right\\rangle_{q(\\mathbf{X})}\\right)+\\frac{\\beta}{2} \\operatorname{Tr}\\left(K_{mm}^{-1}\\left\\langle K_{mn} K_{nm}\\right\\rangle_{q(\\mathbf{X})}\\right)\n\\end{aligned}\n$$\nHere, $q$ is the variational distribution over pseudo points $\\mathbf{m}$ and $\\mathbf{\\alpha} = K_{nm}K^{-1}_{mm}\\mathbf{m}$. The subscript $d$ is used to denote each dimension of the features. \nGPLVMs have been applied to numerous applications and their variants that we did not cover here. The readers are referred to for an in-depth survey of GPLVMs.", "id": "6b1d5f60-4c1a-4ef7-9f03-b1502c6407db", "level": "section", "origin_cites_number": 6, "parent_id": "6486154a-d131-4b97-8685-8ede70aa3f14", "prefix_titles": [ [ "title", "Deep Gaussian Processes: A Survey" ], [ "section", "Gaussian Process Latent Variable Model" ] ], "subsections": [], "title": "Gaussian Process Latent Variable Model" }, { "cite_extract_rate": 0.3125, "cites": [ 7224, 7221, 363, 7222, 7223 ], "content": "Although SGPs addressed the computation cost issue, GPs remain inapplicable to a lot of applications. The reason being the kernel function. The most commonly used kernel functions have relatively simple similarity metrics. However, in specific datasets, one might have to use a different similarity metric in different regions of the input space. A similarity metric that can extract such features would have to utilize a hierarchical structure for feature extraction. 
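One way to obtain such hierarchical behaviour is to feed the output of one GP into another, but the composite is then no longer Gaussian. A small simulation (assuming a unit-variance RBF kernel and a fixed seed; this is an illustration, not a method from the literature) makes this concrete: under a stationary kernel the single-point marginal of the second layer is still Gaussian, yet increments of the second layer become a scale mixture of Gaussians with heavy tails:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * (a - b) ** 2 / ell ** 2)

x1, x2 = 0.0, 0.5      # two fixed inputs
N = 20000              # Monte Carlo draws of the two-layer process

# Layer 1: joint Gaussian draws of (f1(x1), f1(x2)) under an RBF kernel.
K1 = np.array([[1.0, rbf(x1, x2)], [rbf(x1, x2), 1.0]])
L1 = np.linalg.cholesky(K1 + 1e-9 * np.eye(2))
F1 = L1 @ rng.normal(size=(2, N))

# Layer 2: given f1, the increment f2(x1) - f2(x2) is Gaussian with variance
# 2 - 2*k(f1(x1), f1(x2)); marginally it is therefore a scale mixture.
v = 2.0 - 2.0 * rbf(F1[0], F1[1])
d = np.sqrt(v) * rng.normal(size=N)
g = rng.normal(size=N)  # plain Gaussian sample for comparison

def excess_kurtosis(s):
    s = s - s.mean()
    return (s ** 4).mean() / (s ** 2).mean() ** 2 - 3.0

print(excess_kurtosis(d))  # clearly positive: heavy-tailed, non-Gaussian
print(excess_kurtosis(g))  # close to zero for a Gaussian
```

The clearly positive excess kurtosis of the composed increments confirms that the stacked model has left the Gaussian family.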
\nOne strategy for the problem would be to stack GPs, similar to how perceptrons are stacked in an MLP. But stacking GPs such that one layer's output becomes the following layer's input makes them highly non-linear and intractable to analytical solutions. Moreover, a stacked GP would not even correspond to a GP anymore, as the posterior distribution can take on any arbitrary distribution. Nevertheless, such methods are usually referred to as Deep Gaussian Processes (DGPs). Several authors have tried to model and fit such models; this section explains the development of these methods. \nOne of the earliest DGP approaches is that of Lawrence and Moore. They considered a GPLVM model, but a GP was assumed for the input space prior distribution, making it a two layered DGP. The DGP resulted in the following likelihood function, which cannot be marginalized analytically. \n$$\np\\left(\\mathbf{y}\\mid\\mathbf{t}\\right) =\\int p\\left(\\mathbf{y}\\mid\\mathbf{X}\\right) p\\left(\\mathbf{X}\\mid\\mathbf{t}\\right)\\mathrm{d} \\mathbf{X} \\\\\n$$\nHere, $\\mathbf{t}$ is the input at the first layer, and $\\mathbf{X}$ are the intermediate representations passed to the second layer GP. Lawrence and Moore considered a MAP solution to the above problem, obtained by maximizing the following. \n$$\n\\log p\\left(\\mathbf{X}\\mid\\mathbf{y, t}\\right) =\\log p\\left(\\mathbf{y}\\mid\\mathbf{X}\\right) + \\log p\\left(\\mathbf{X}\\mid\\mathbf{t}\\right) + \\text {const} \\\\\n$$\nThe authors also showed that deeper hierarchies could be modeled with such an approach. However, the model was limited to a MAP solution, which is highly susceptible to overfitting. \nDamianou et al. proposed a variational approach to the overfitting problem. They also considered a 2-layered stacked GP, but a naive variational bound for such a model introduces intractabilities similar to a GPLVM. 
However, the authors showed that the variational approach used for Bayesian GPLVMs could also be utilized to formulate a variational bound for a 2-layered GP. The final bound is shown below, with $q(\\mathbf{X})$ as the variational distribution.\n$$\n\\mathcal{F}=\\hat{\\mathcal{F}}-\\operatorname{KL}(q(X) \\| p(X \\mid \\mathbf{t})) \\\\\n\\hat{\\mathcal{F}}=\\int q(X) \\log p(y \\mid \\mathbf{f}) p(\\mathbf{f} \\mid X) \\mathrm{d} X \\mathrm{~d} \\mathbf{f}\n$$\nFurthermore, Damianou and Lawrence improved the bound shown above by generalizing the variational bound to a DGP with an arbitrary number of layers. The bound shown below can be used on a DGP with two or more layers. \n$$\n\\begin{aligned}\n\\mathcal{F}&=\\mathbf{g}_{Y}+\\mathbf{r}_{X}+\\mathcal{H}_{q(\\mathbf{X})}-\\operatorname{KL}(q(\\mathbf{Z}) \\| p(\\mathbf{Z})) \\\\\n\\mathbf{g}_{Y}&=g\\left(\\mathbf{Y}, \\mathbf{F}^{Y}, \\mathbf{U}^{Y}, \\mathbf{X}\\right) \\\\\n\\quad&=\\left\\langle\\log p\\left(\\mathbf{Y} \\mid \\mathbf{F}^{Y}\\right)+\\log \\frac{p\\left(\\mathbf{U}^{Y}\\right)}{q\\left(\\mathbf{U}^{Y}\\right)}\\right\\rangle_{p\\left(\\mathbf{F}^{Y} \\mid \\mathbf{U}^{Y}, \\mathbf{X}\\right) q\\left(\\mathbf{U}^{Y}\\right) q(\\mathbf{X})} \\\\\n\\mathbf{r}_{X}&=r\\left(\\mathbf{X}, \\mathbf{F}^{X}, \\mathbf{U}^{X}, \\mathbf{Z}\\right) \\\\\n&=\\left\\langle\\log p\\left(\\mathbf{X} \\mid \\mathbf{F}^{X}\\right)+\\log \\frac{p\\left(\\mathbf{U}^{X}\\right)}{q\\left(\\mathbf{U}^{X}\\right)}\\right\\rangle_{p\\left(\\mathbf{F}^{X} \\mid \\mathbf{U}^{X}, \\mathbf{Z}\\right) q\\left(\\mathbf{U}^{X}\\right) q(\\mathbf{X}) q(\\mathbf{Z})}\n\\end{aligned}\n$$\nHere, $\\mathbf{Y}$ represents the output multidimensional label space, $\\mathbf{Z}$ represents the latent variables in the input layer, and $\\mathbf{X}$ represents the latent inputs in intermediate layers. 
$\\mathbf{U}$ and $\\mathbf{F}$ are the values of the latent function corresponding to inducing points and latent inputs, respectively; their superscript denotes the layer to which they belong. Additionally, $\\mathcal{H}$ represents the entropy of the distribution shown in its subscript, and $\\operatorname{KL}$ is the standard KL-divergence. Fig.~\\ref{fig:dgp} shows the DGP model architecture from . \n\\begin{figure}[htp]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{figs/DGP.pdf}\n \\caption{Deep Gaussian Process model overview .}\n \\label{fig:dgp}\n\\end{figure}\nAgain, the crux of the approach relied on the variational trick of introducing inducing points as presented in . Damianou and Lawrence conducted experiments on the MNIST dataset wherein they showed a 5-layer DGP could be used for the image classification task. \nA limitation of the approach in is that the number of variational parameters that need to be learned increases linearly with the number of data points in the training set. And it involved inverting matrices which is a computationally expensive operation, thereby limiting its scalability. Dai et al. addressed this by introducing a back-constraint. The constraint allowed them to define the latent variables' mean terms as deterministic functions of the latent variables themselves via an MLP. The approach reduced the number of variational parameters. Moreover, Dai et al. also showed that their approach could be trained in a distributed manner, allowing the model to be scaled to large datasets.\nSalimbeni and Deisenroth recently proposed an approach that addressed the layer independence issue of prior DGP approaches. The DGP in assumed independence of GPs across layers and only considered the correlations within a layer. However, Salimbeni and Deisenroth argue that such an approach is equivalent to a single GP with each GP's input coming from a GP itself. 
The authors also reported that some layers get turned off when using that DGP. \nSalimbeni and Deisenroth presented a new variational bound that retains the exact model posterior while maintaining the correlations both within and across adjacent layers. However, Salimbeni and Deisenroth showed such an approach is not amenable to analytic computation, but the bound can still be optimized with Monte Carlo sampling techniques. Such an approach is computationally expensive, but it can be parallelized by exploiting the factorization of the DGP across output dimensions. Furthermore, the method also requires sampling during inference, but its performance is substantially better than prior works. \n$$\n\\mathcal{F}=\\sum_{n=1}^{N} \\mathbb{E}_{q\\left(\\mathbf{f}_{n}^{L}\\right)}\\left[\\log p\\left(\\mathbf{y}_{n} \\mid \\mathbf{f}_{n}^{L}\\right)\\right]-\\sum_{l=1}^{L} \\mathrm{KL}\\left[q\\left(\\mathbf{U}^{l}\\right) \\| p\\left(\\mathbf{U}^{l} ; \\mathbf{Z}^{l-1}\\right)\\right]\n$$\nIn the optimization bound shown above, subscripts denote individual data samples in the dataset, and superscripts indicate a layer in the DGP. The rest of the terms follow the same convention as the one used by Damianou et al.\nI briefly mentioned that DGPs do not necessarily correspond to Gaussian processes. Still, the methods discussed so far do model the posterior distribution as a Gaussian, each with its own assumptions. Havasi et al. presented a technique that departs even more from a conventional GP. The authors show that since a Gaussian is uni-modal, using it to model the posterior gives poor results. Instead, they suggest using a multi-modal distribution that can better capture the true posterior distribution. \nHowever, it is not possible to formulate an analytical solution to a multi-modal posterior. We can use variational inference to learn a multi-modal posterior. 
Still, one needs to determine the exact form of the variational distribution, which is difficult as we usually don't know the posterior distribution in advance. Havasi et al. circumvent this issue by using Stochastic Gradient Hamiltonian Monte Carlo (SGMCMC) method to estimate the posterior. The approach can determine the inducing points by sampling from the true posterior instead of using a variational distribution. \nAlthough the approach far exceeds the performance of prior DGPs and is the current state-of-the-art, it still has its limitations. Remarkably, the SGMCMC method is difficult to tune as it introduces its own parameters in addition to the ones that already have to be estimated for DGP. Several MCMC method variants attempt to improve upon SGMCMC, but none of those approaches have been applied to DGPs. \nThe DGPs we discussed so far attempted to develop variants of GPs that can model hierarchical features in data, which was done by assuming a feed forward network with each node of the network being modeled as a GP. It is the most popular method to address the problem and has resulted in approaches that can get reasonably promising results. However, there are other approaches as well which do not consider such an explicit feed forward network.\nWilson et al. proposed a method that uses deep neural networks as the kernel function, termed deep kernel. Unlike a Gaussian kernel, the deep kernel produced a vector output, and a GP was assigned to each of the vector elements. Wilson et al. further combined the GPs with an additive structure that facilitated its training with an analytical bound. \nWilson et al. showed that their approach was good at several tasks. However, the deep neural network architecture needs to be task specific, and its parameters are susceptible to overfitting given its large parameter count. \nAnother interesting perspective on DGPs was presented by Lee et al. . 
Until now, the discussed methods combined GPs in different ways to achieve an aggregate non-linear latent function space. Lee et al. developed a method that considers an entire function space that consists of non-linear functions. Unlike prior approaches, the function space was not constrained to a specific subspace as a consequence of a particular kernel function being used. The approach can be regarded as a generalization of Neal, who showed the equivalence of an infinitely wide single layer neural network to a GP. Lee et al. showed the equivalence of a GP to an infinitely wide deep neural network. \nLee et al. showed the approach was on par with some neural networks trained with gradient descent while retaining its uncertainty estimation. Furthermore, the uncertainty estimates were proportional to the model accuracy. But the cost of computing the kernel matrix grows polynomially with the dataset size, making the approach infeasible for some problems. Moreover, the approach only considered deep neural networks with fully connected layers and ReLU activation functions. \nGarnelo et al. presented an approach with a similar ethos and introduced Neural Processes (NPs). However, instead of considering a neural network with asymptotically increasing depth and width, a deep neural network was used in place of a Gaussian distribution parametrized by a kernel function to define $p(\\mathbf{f} \\mid \\mathbf{X})$. \n$$\n\\begin{aligned}\np(\\mathbf{y} \\mid \\mathbf{f}) &= \\mathcal{N}\\left(\\mathbf{f}, \\beta^{-1} \\mathbf{I}\\right) \\\\\np(\\mathbf{y} \\mid \\mathbf{X}) &= \\int p(\\mathbf{y} \\mid \\mathbf{f})\\underbrace{p(\\mathbf{f} \\mid \\mathbf{X})}_\\text{Deep neural network} d \\mathbf{f}\n\\end{aligned}\n$$\nThe deep neural network was trained using amortized variational inference. The consequence of such an approach is that the function space defined by deep neural networks allows us to extract hierarchical features and retain a probabilistic interpretation. 
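To make the data flow concrete, here is a shape-level sketch of a heavily simplified, deterministic NP-style forward pass (random untrained weights, hypothetical layer sizes; the latent variable $z$ and the training loop are omitted): per-point context encodings are averaged into a context vector, which is concatenated with each target input and decoded into a predictive mean and standard deviation.

```python
import numpy as np

rng = np.random.default_rng(0)
dx, dy, dh, dr = 1, 1, 16, 8   # hypothetical input/output/hidden/context sizes

# Random (untrained) weights for a one-hidden-layer encoder and decoder.
We1 = rng.normal(size=(dx + dy, dh)); We2 = rng.normal(size=(dh, dr))
Wd1 = rng.normal(size=(dx + dr, dh)); Wd2 = rng.normal(size=(dh, 2 * dy))

def np_forward(xc, yc, xt):
    """NP-style pass: encode context pairs -> aggregate -> decode targets."""
    h = np.tanh(np.concatenate([xc, yc], axis=1) @ We1) @ We2  # per-point codes
    r = h.mean(axis=0)                # permutation-invariant context vector r_c
    inp = np.concatenate([xt, np.tile(r, (len(xt), 1))], axis=1)
    out = np.tanh(inp @ Wd1) @ Wd2
    mu, log_sigma = out[:, :dy], out[:, dy:]
    return mu, np.exp(log_sigma)      # predictive mean and (positive) std

xc = rng.uniform(-1, 1, size=(5, dx)); yc = np.sin(3 * xc)  # 5 context points
xt = np.linspace(-1, 1, 7).reshape(-1, dx)                  # 7 target points
mu, sigma = np_forward(xc, yc, xt)
print(mu.shape, sigma.shape)   # (7, 1) (7, 1)
```

Because the context vector is a mean over per-point encodings, the prediction is invariant to the ordering of the context set, mirroring the exchangeability of a stochastic process.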
The model, however, needs to be trained with meta-learning, an approach wherein multiple diverse datasets or tasks are used to train the same model. Meta-learning is used because each function in the function space corresponds to a sequence of inputs or a task. Considering multiple tasks allows the DNN to approximate the variability of the function space. While training, a context vector $\\mathbf{r}_c$ is passed to the DNN to indicate the task currently being considered, as shown in Fig.~\\ref{fig:NPs}. \n\\begin{figure}[htp]\n \\centering\n \\includegraphics[width=0.6\\linewidth]{figs/NPs.pdf}\n \\caption{Neural Processes model architecture.}\n \\label{fig:NPs}\n\\end{figure}\nMoreover, to retain the probabilistic interpretation, a latent variable $z$ was introduced, which captured uncertainty in the context data. The implication is that, unlike vanilla GPs, whose uncertainty comes from the kernel function and its function space, NPs derive their uncertainty from data. As such, the provided context can significantly influence the model's performance and can be considered similar to inducing points in SGPs. \nAdditionally, the model does not assume a Gaussian prior or posterior, allowing one to fit any data distribution. Garnelo et al. showed that their approach produces good predictive distributions while being parameter efficient and fast compared to a vanilla Gaussian process.\nNonetheless, the method assumed a predefined DNN model architecture, which needs to be task specific. Also, the model is only an approximation to some stochastic process using a DNN, and it isn't possible to guarantee the approximation quality of the DNN. Furthermore, the meta-learning requirement imposes a large training computation cost, and the datasets considered have to be similar to the primary dataset of interest. \nFinally, Yang et al. recently proposed an Energy-based Process (EBP). 
EBPs are a generalization of Neural Processes, as they utilize Energy-based models to approximate $p(\\mathbf{f} \\mid \\mathbf{X})$ instead of a MAP trained DNN, as shown below, where $f_w$ is the Energy model and $Z$ is the partition function. \n$$\np(\\mathbf{f} \\mid \\mathbf{X}) = \\frac{\\exp (f_w(\\mathbf{X}))}{Z(f_w)}\n$$\nBy utilizing an Energy based model, the authors were able to show that vanilla GPs and NPs can be recovered from EBPs as special cases. The Energy based formulation also allows one to approximate the conditional $p(\\mathbf{f} \\mid \\mathbf{X})$ with arbitrary distributions, unlike GPs and NPs, which are limited to Gaussian distributions and the distribution defined by the DNN, respectively. \nUnlike feedforward networks, which are trained to predict a label $y$ given an input $\\mathbf{X}$, Energy based models predict the Energy of a pair $(\\mathbf{X}, y)$. A well trained Energy-based model will output low Energy for well matched $(\\mathbf{X}, y)$ pairs and high Energy for mismatched pairs. As such, the prediction task in these models becomes a minimization task, wherein one needs to find the label $y$ that has low Energy for the given data $\\mathbf{X}$. \nThe consequence of training such a model to approximate our conditional in the stochastic process is that the function space is not constrained to any predefined subspace. However, Energy based models are challenging to train and require several tricks to stabilize the training process. 
Furthermore, it takes longer to train such models similar to models trained with Meta-learning.", "id": "81f57f2c-9b98-47b8-a6ad-8cbc3942b7ca", "level": "section", "origin_cites_number": 16, "parent_id": "6486154a-d131-4b97-8685-8ede70aa3f14", "prefix_titles": [ [ "title", "Deep Gaussian Processes: A Survey" ], [ "section", "Deep Gaussian Processes" ] ], "subsections": [], "title": "Deep Gaussian Processes" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 7224, 7221, 7223 ], "content": "Gaussian processes have come a long way from their origins. Although many of the limitations have been addressed, there remain open problems and research directions that have not been thoroughly explored. \nOne such problem is the assumption of factored output dimensions. The assumption has been made in all the methods mentioned in this paper. It dictates that each output dimension is independent of the other. The assumption allowed factorizations that simplify some derivations, and in some cases, the assumption is required for the method to be tractable. However, the assumption might not hold in some datasets. Addressing this factorization assumption would be an interesting line of research. \nAnother issue is that most SGP and DGP methods require careful model initializations and hyperparameter tuning without which, the model does not converge. Nevertheless, there aren't any formalized rules for determining model initializations and hyperparameters for most methods that could guarantee good model convergence. The tuning issue is particularly a problem when using MCMC approaches and remains to be solved. \nAdditionally, MCMC methods have been shown to be successful at training DGP. But, the method need not be limited to DGP. Indeed, the technique might result in good results even for SGP. The primary motivation of using MCMC methods was to address the non-Gaussian posterior. 
Although vanilla GPs might not have a non-Gaussian posterior, the assumptions made in inducing-point methods often change this. As such, it might be worth exploring the feasibility of MCMC for training SGPs. \nSimilarly, there are several variants of SG-MCMC methods that have not been benchmarked against DGPs. Havasi et al. only considered the original SGMCMC approach. However, numerous variants of the method have been introduced that improve upon SGMCMC, some of which might result in stable training dynamics.\nDeep kernels in themselves were found to be highly susceptible to overfitting. However, Wilson et al. only considered vanilla DNNs. But DNNs could be used as Bayesian approximators, as shown by Gal and Ghahramani. Such an approach might mitigate some of the overfitting issues. Furthermore, one might also consider methods such as Bayes by Backprop to train deep kernels. It would be interesting to find out the ramifications of such an approach to deep kernels. \nGarnelo et al. considered a similar approach, but they considered a DNN approximation to a distribution over the function space itself. It required an explicit definition of a DNN, which needs to be task specific. Also, the model performance depends on the context vector to estimate prediction uncertainty, which isn't consistent with a proper stochastic process. Kim et al. introduced a variant of Neural Processes which uses attention to improve the context vectors. Perhaps we could modify the attention mechanism to take the test data into account and generate an uncertainty estimate that incorporates it.\nMoreover, most methods assume a relatively constrained function space, defined either by a kernel function or by a DNN. However, that need not be the case; perhaps we can consider multiple function spaces by utilizing models such as chunked hypernetworks to generate both the model parameters and the model architecture, thus greatly expanding the stochastic process's modeling capacity. 
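As a concrete baseline for the DNN-as-Bayesian-approximator idea mentioned above, MC dropout keeps dropout active at prediction time and reads the spread of repeated stochastic forward passes as predictive uncertainty. A minimal sketch (a toy one-hidden-layer network with untrained random weights, so the numbers are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(1, 64))
W2 = rng.normal(size=(64, 1))
p_keep = 0.9                          # keep probability for dropout

def mc_dropout_predict(x, T=200):
    """Run T stochastic forward passes with dropout left on at test time."""
    x = np.atleast_2d(x)
    preds = []
    for _ in range(T):
        mask = rng.random((1, 64)) < p_keep    # fresh dropout mask per pass
        h = np.tanh(x @ W1) * mask / p_keep    # inverted-dropout scaling
        preds.append(h @ W2)
    preds = np.stack(preds)                    # shape (T, n, 1)
    return preds.mean(axis=0), preds.std(axis=0)   # predictive mean and spread

mu, sd = mc_dropout_predict(np.array([[0.3]]))
print(mu.shape, sd.shape)   # (1, 1) (1, 1); sd > 0 reflects mask variability
```

The spread is cheap to obtain but is only an approximation to a posterior; whether it mitigates deep-kernel overfitting in practice would have to be tested empirically.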
\nEnergy-based models appear to be another viable approach to expanding the function space, but the method is challenging to train and incurs substantial computational costs. Moreover, even model inference is an expensive operation, requiring Hamiltonian Monte Carlo methods for sampling. \nFinally, there is the issue of scalability. Although some DGP methods have been shown to scale well to large datasets, they have not been thoroughly benchmarked on highly structured datasets such as ImageNet. The problem lies in the model depth required to achieve good performance on such a dataset. Unlike MNIST, ImageNet requires DNNs that are substantially deeper. DGPs, however, are usually only tested on models with up to ten layers. It would be valuable to study and understand how DGPs scale to such a dataset.", "id": "cbf4d6cf-6b44-4200-9536-7a8dc186c6a4", "level": "section", "origin_cites_number": 9, "parent_id": "6486154a-d131-4b97-8685-8ede70aa3f14", "prefix_titles": [ [ "title", "Deep Gaussian Processes: A Survey" ], [ "section", "Discussion" ] ], "subsections": [], "title": "Discussion" }, { "cite_extract_rate": 0.5, "cites": [ 7224 ], "content": "Gaussian Processes are fascinating in themselves. Their non-parametric form, analytical properties, and ability to model uncertainty are coveted in machine learning. However, they are plagued with limitations, particularly their significant computation and storage cost. Also, conventional kernel functions restrict the family of functions that a GP could model. \nSparse Gaussian Processes attempt to address the storage and computation cost. One dominant approach to SGPs is to use the Nyström approximation. The approach entails using variational methods to model the distribution of pseudo points for a fully Bayesian treatment. Several methods have been proposed along this line of research, each with its advantages and limitations. \nFurthermore, GPLVM was a step towards DGP. 
However, hierarchical feature representation was not the intended use case. It was proposed as an approach for probabilistic PCA and unsupervised learning. The Bayesian GPLVM improved upon the original method by introducing a purely Bayesian training approach. BGPLVMs facilitated the propagation of latent space uncertainty to the posterior, thereby establishing a technique to propagate uncertainties through non-linearities in GPs.\nMost of the DGP approaches considered SGPs and GPLVMs to address the issue of hierarchical feature representation. The primary trend in DGPs is to stack GPs in a feedforward manner and train them with approaches used to train SGPs and GPLVMs. However, such an approach has its limitations. The optimization bounds that were developed were not always tight, and some methods were limited to analytical solutions, which imposed scalability limitations on such techniques. \nMoreover, stacking GPs makes the model parametric, as it requires a predefined model depth and layer width. Lee et al. considered these issues and attempted to solve them by modeling the latent function space as the space of deep neural networks. But the approach is not yet feasible for real world applications and requires more work to get there.\nGarnelo et al. consider a stochastic process parametrized with a DNN instead of a Gaussian distribution with a kernel function to define the latent function space. Still, the approach requires modeling a task specific neural network and is only an approximation to an unknown stochastic process. Energy-based Processes address this limitation, but the method isn't mature enough as of yet. \nIn conclusion, GPs are an excellent approach for modeling data. The overall trend of the field seems to be a transition away from the Gaussian assumption and toward general stochastic processes. 
The method has come a long way from its infancy, but there are still open problems that need to be addressed for it to be elevated to the prominence that it deserves. \n\\bibliographystyle{apalike}\n\\bibliography{references}\n\\end{document}", "id": "cf9221b2-05ad-498a-bf27-3fcff7c2e0b9", "level": "section", "origin_cites_number": 2, "parent_id": "6486154a-d131-4b97-8685-8ede70aa3f14", "prefix_titles": [ [ "title", "Deep Gaussian Processes: A Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
107
[ 7220, 7224, 7221, 363, 7222, 7223 ]
1.054095
[ "Feng Xia", "Ke Sun", "Shuo Yu", "Abdul Aziz", "Liangtian Wan", "Shirui Pan", "Huan Liu" ]
Graph Learning: A Survey
2021
2021-05-03T09:06:01Z
cs.LG
Graphs are widely used as a popular representation of the network structure of connected data. Graph data can be found in a broad spectrum of application domains such as social systems, ecosystems, biological networks, knowledge graphs, and information systems. With the continuous penetration of artificial intelligence technologies, graph learning (i.e., machine learning on graphs) is gaining attention from both researchers and practitioners. Graph learning proves effective for many tasks, such as classification, link prediction, and matching. Generally, graph learning methods extract relevant features of graphs by taking advantage of machine learning algorithms. In this survey, we present a comprehensive overview on the state-of-the-art of graph learning. Special attention is paid to four categories of existing graph learning methods, including graph signal processing, matrix factorization, random walk, and deep learning. Major models and algorithms under these categories are reviewed respectively. We examine graph learning applications in areas such as text, images, science, knowledge graphs, and combinatorial optimization. In addition, we discuss several promising research directions in this field.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "64dae0de-0654-4c28-90d7-95856f2349e2", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Graph Learning: A Survey" ] ], "subsections": [ "adfae688-a94e-4b79-9ffa-32308b8cb709", "b80c97cb-21ee-4d1e-8989-5d7676554f7d", "ad0cb974-233d-49a2-87c6-a6386791ce72", "6779c1aa-f870-4ba7-91a9-b423ac0b7ff3", "81c691a3-3630-4eef-92a6-3d901c8fb5ce" ], "title": "root" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 6092, 6093, 6094, 212 ], "content": "\\label{sec:introduction}\n\\IEEEPARstart{G}{raphs}, also referred to as networks, can be extracted from various real-world relations among abundant entities. Some common graphs have been widely used to formulate different relationships, such as social networks, biological networks, patent networks, traffic networks, citation networks, and communication networks~. A graph is often defined by two sets, i.e., vertex set and edge set. Vertices represent entities in graph, whereas edges represent relationships between those entities. Graph learning has attracted considerable attention because of its wide applications in the real world, such as data mining and knowledge discovery. Graph learning methods have gained increasing popularity for capturing complex relationships, as graphs exploit essential and relevant relations among vertices~. For example, in microblog networks, the spread trajectory of rumors can be tracked by detecting information cascades. In biological networks, new treatments for difficult diseases can be discovered by inferring protein interactions. In traffic networks, human mobility patterns can be predicted by analyzing the co-occurrence phenomenon with different timestamps~. 
Efficient analysis of these networks depends heavily on how the networks are represented.", "id": "adfae688-a94e-4b79-9ffa-32308b8cb709", "level": "section", "origin_cites_number": 6, "parent_id": "64dae0de-0654-4c28-90d7-95856f2349e2", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Introduction" ] ], "subsections": [ "184cb113-77bd-4cc7-8492-66a0ed82a4fa", "f0ded957-2bcf-40e1-958b-807b63d5a972", "0f442737-0148-49f3-aafe-a19417e9a54e" ], "title": "Introduction" }, { "cite_extract_rate": 0.7333333333333331, "cites": [ 6098, 6095, 318, 6096, 6100, 212, 8975, 1010, 6099, 6097, 550 ], "content": "Generally speaking, graph learning refers to machine learning on graphs. Graph learning methods map the features of a graph to feature vectors with the same dimensions in the embedding space. A graph learning model or algorithm directly converts the graph data into the output of the graph learning architecture without projecting the graph into a low dimensional space. Most graph learning methods are based on or generalized from deep learning techniques, because deep learning techniques can encode and represent graph data as vectors. The output vectors of graph learning are in continuous space. The target of graph learning is to extract the desired features of a graph. Thus the representation of a graph can be easily used by downstream tasks such as node classification and link prediction without an explicit embedding process. Consequently, graph learning is a more powerful and meaningful technique for graph analysis.\nIn this survey paper, we examine machine learning methods on graphs in a comprehensive manner. As shown in Fig.~\\ref{fig1}, we focus on existing methods that fall into the following four categories: graph signal processing (GSP) based methods, matrix factorization based methods, random walk based methods, and deep learning based methods. 
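Before expanding on these categories, it helps to fix the basic matrix objects they operate on. A minimal sketch (toy undirected graph, not from the surveyed works) builds the adjacency matrix $A$, the degree matrix $D$, and the combinatorial Laplacian $L = D - A$, a central operator in spectral and GSP-style analysis:

```python
import numpy as np

# Toy undirected graph on 4 vertices, given as an edge list.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0      # symmetric adjacency for an undirected graph

D = np.diag(A.sum(axis=1))       # degree matrix
L = D - A                        # combinatorial graph Laplacian

# Every row of L sums to zero, so the all-ones vector is an eigenvector with
# eigenvalue 0 -- the starting point of spectral decomposition on graphs.
print(L @ np.ones(n))            # [0. 0. 0. 0.]
```

The eigenvectors of $L$ play the role of a Fourier basis for signals on the graph, which is what allows signal-processing notions such as sampling and filtering to be carried over to the irregular graph domain.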
Roughly speaking, GSP deals with sampling and recovery of graph, and learning topology structure from data. Matrix factorization can be divided into graph Laplacian matrix factorization and vertex proximity matrix factorization. Random walk based methods include structure-based random walk, structure and node information based random walk, random walk in heterogeneous networks, and random walk in time-varying networks. Deep learning based methods include graph convolutional networks, graph attention networks, graph auto-encoder, graph generative networks, and graph spatial-temporal networks. Basically, the model architectures of these methods/techniques differ from each other. This paper presents an extensive review of the state-of-the-art graph learning techniques.\n\\begin{figure*}[htb]\n\t\\centering\n\t\\includegraphics[width=6in]{fig1.png}\n\t\\caption{The categorization of graph learning.}\\label{fig1}\n\\end{figure*}\nTraditionally, researchers adopt an adjacency matrix to represent a graph, which can only capture the relationship between two adjacent vertices. However, many complex and irregular structures cannot be captured by this simple representation. When we analyze large-scale networks, traditional methods are computationally expensive and hard to be implemented in real-world applications. Therefore, effective representation of these networks is a paramount problem to solve~. Network Representation Learning (NRL) proposed in recent years can learn latent features of network vertices with low dimensional representation~. When the new representation has been learned, previous machine learning methods can be employed for analyzing the graph data as well as discovering relationships hidden in the data.\nWhen complex networks are embedded into a latent, low dimensional space, the structural information and vertex attributes can be preserved~. Thus the vertices of networks can be represented by low dimensional vectors. 
These vectors can be regarded as input features for previous machine learning methods. Graph learning methods pave the way for graph analysis in the new representation space, and many graph analytical tasks, such as link prediction, recommendation and classification, can be solved efficiently~. Graphical network representation sheds light on various aspects of social life, such as communication patterns, community structure, and information diffusion~. According to the attributes of vertices, edges and subgraphs, graph learning tasks can be divided into three categories, namely vertex based, edge based, and subgraph based tasks. The relationships among vertices in a graph can be exploited for, e.g., classification, risk identification, clustering, and community detection~. By predicting the presence of edges between two vertices in a graph, we can perform recommendation and knowledge reasoning, for instance. Based on the classification of subgraphs~, the graph can be used for, e.g., polymer classification and 3D visual classification. For GSP, it is important to design suitable graph sampling methods that preserve the features of the original graph, so that the original graph can be recovered efficiently~. Graph recovery methods can be used for constructing the original graph in the presence of incomplete data~. Afterwards, graph learning can be exploited to learn the topology structure from graph data. In summary, graph learning can be used to tackle the following challenges, which are difficult to solve by traditional graph analysis methods~.
\begin{enumerate}
 \item \textbf{Irregular domains:} Data collected by traditional sensors have a clear grid structure. However, graphs lie in an irregular domain (i.e., non-Euclidean space). In contrast to a regular domain (i.e., Euclidean space), data in non-Euclidean space are not ordered regularly, and distance is hence difficult to define. 
As a result, basic methods based on traditional machine learning and signal processing cannot be directly generalized to graphs.
 \item \textbf{Heterogeneous networks:} In many cases, the networks involved in traditional graph analysis algorithms are homogeneous. The corresponding modeling methods only consider the direct connections of the network and strip out other information deemed irrelevant, which significantly simplifies the processing but is prone to information loss. In the real world, the types of vertices and the edges among them are usually diverse, such as in the academic network shown in Fig.~\ref{fig:Heterogenous}. Thus it is not easy to discover potential value from heterogeneous information networks with abundant vertices and edges.
 \item \textbf{Distributed algorithms:} In big social networks, there are often millions of vertices and edges~. Centralized algorithms cannot handle such networks, since their computational complexity increases significantly with the number of vertices. The design of distributed algorithms for dealing with big networks is a critical problem yet to be solved~. One major benefit of distributed algorithms is that they can be executed on multiple CPUs or GPUs simultaneously, and hence the running time can be reduced significantly.
\end{enumerate}", "id": "184cb113-77bd-4cc7-8492-66a0ed82a4fa", "level": "subsection", "origin_cites_number": 15, "parent_id": "adfae688-a94e-4b79-9ffa-32308b8cb709", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Introduction" ], [ "subsection", "What is Graph Learning?" ] ], "subsections": [], "title": "What is Graph Learning?" }, { "cite_extract_rate": 0.8888888888888881, "cites": [ 553, 3344, 7233, 215, 212, 9134, 219, 550 ], "content": "There are several surveys that are partially related to the scope of this paper. 
Unlike these surveys, we aim to provide a comprehensive overview of graph learning methods, with a focus on four specific categories. In particular, graph signal processing is introduced as one approach for graph learning, which is not covered by other surveys.
Goyal and Ferrara~ summarized graph embedding methods, such as matrix factorization and random walk based methods, and their applications in graph analysis. Cai et al.~ reviewed graph embedding methods based on problem settings and embedding techniques. Zhang et al.~ summarized NRL methods under two categories, i.e., unsupervised NRL and semi-supervised NRL, and discussed their applications. Nickel et al.~ introduced knowledge extraction methods from two aspects: latent feature models and graph based models. Akoglu et al.~ reviewed state-of-the-art techniques for event detection in data represented as graphs, and their applications in the real world. Zhang et al.~ summarized deep learning based methods for graphs, such as graph neural networks (GNNs), graph convolutional networks (GCNs) and graph auto-encoders (GAEs). Wu et al.~ reviewed state-of-the-art GNN methods and discussed their applications in different fields. Ortega et al.~ introduced GSP techniques for representation, sampling and learning, and discussed their applications. Huang et al.~ examined the applications of GSP in functional brain imaging and addressed the problem of how to perform brain network analysis from a signal processing perspective.
In summary, none of the existing surveys provides a comprehensive overview of graph learning. They only cover some parts of graph learning, such as network embedding and deep learning based network representation. The NRL and GNN based surveys do not cover GSP techniques. In contrast, we review GSP techniques in the context of graph learning, as GSP is an important approach for GNNs. 
Specifically, this survey paper integrates state-of-the-art machine learning techniques for graph data, gives a general description of graph learning, and discusses its applications in various domains.", "id": "f0ded957-2bcf-40e1-958b-807b63d5a972", "level": "subsection", "origin_cites_number": 9, "parent_id": "adfae688-a94e-4b79-9ffa-32308b8cb709", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Introduction" ], [ "subsection", "Related Surveys" ] ], "subsections": [], "title": "Related Surveys" }, { "cite_extract_rate": 0, "cites": [], "content": "The contributions of this paper can be summarized as follows.
\begin{itemize}
 \item\textbf{A comprehensive overview of state-of-the-art graph learning methods:} we present an integrated introduction to graph learning methods, including, e.g., technical sketches, application scenarios, and potential research directions.
 \item\textbf{Taxonomy of graph learning:} we give a technical classification of mainstream graph learning methods from the perspective of theoretical models. Technical descriptions are provided wherever appropriate to improve understanding of the taxonomy.
 \item\textbf{Insights into future directions in graph learning:} besides qualitative analysis of existing methods, we shed light on potential research directions in the field of graph learning by summarizing several open issues and relevant challenges.
\end{itemize}
The rest of this paper is organized as follows. An overview of graph learning approaches, covering graph signal processing based methods, matrix factorization based methods, random walk based methods, and deep learning based methods, is provided in Section~\ref{sec:algorithm}. The applications of graph learning are examined in Section~\ref{sec:appication}. Some future directions as well as challenges are discussed in Section~\ref{sec:issues}. 
We conclude the survey in Section~\ref{sec:con}.
\begin{figure}[]
\centering
\includegraphics[width=0.4\textwidth]{fig2.png}
\caption{Heterogeneous academic network~.}
\label{fig:Heterogenous}
\end{figure}", "id": "0f442737-0148-49f3-aafe-a19417e9a54e", "level": "subsection", "origin_cites_number": 1, "parent_id": "adfae688-a94e-4b79-9ffa-32308b8cb709", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Introduction" ], [ "subsection", "Contributions and Organization" ] ], "subsections": [], "title": "Contributions and Organization" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 166, 9134, 219, 244, 550 ], "content": "\label{sec:algorithm}
The feature vectors that represent various categorical attributes are viewed as the input in previous machine learning methods. However, the mapping from the input feature vectors to the output prediction results needs to be handled by graph learning~. Deep learning has been regarded as one of the most successful techniques in artificial intelligence~. Exploiting deep learning to extract complex patterns from massive amounts of irregular data has been found very useful in various fields, such as pattern recognition and image processing. Consequently, how to utilize deep learning techniques to extract patterns from complex graphs has attracted a lot of attention. Deep learning on graphs, such as GNNs, GCNs, and GAEs, has been recognized as a powerful technique for graph analysis~. Besides, GSP has also been proposed to deal with graph analysis~. One of the most typical scenarios is that a set of values reside on a set of vertices, and these vertices are connected by edges~. Graph signals can be adopted to model various phenomena in the real world. For example, in social networks, Facebook users can be viewed as vertices, and their friendships can be modeled as edges. The number of followers of each vertex can then be regarded as a signal on this social network. 
With such a signal model, many techniques in classical signal processing (e.g., convolution, filtering, and wavelets) can be employed for GSP with suitable modifications~.
In this section, we review graph learning models and algorithms under the four categories mentioned before, namely GSP based methods, matrix factorization based methods, random walk based methods, and deep learning based methods. In Table~\ref{tab:abbreviations}, we list the abbreviations used in this paper.
\begin{table}[htbp]
 \centering
 \caption{Definitions of abbreviations}
 \begin{tabular}{cc}
 \toprule
 \textbf{Abbreviation} & \textbf{Definition} \\
 \midrule
 PCA & Principal component analysis \\
 NRL & Network representation learning \\
 LSTM & Long short-term memory (networks) \\
 GSP & Graph signal processing \\
 GNN & Graph neural network \\
 GMRF & Gauss-Markov random field \\
 GCN & Graph convolutional network \\
 GAT & Graph attention network \\
 GAN & Generative adversarial network \\
 GAE & Graph auto-encoder \\
 ASP & Algebraic signal processing \\
 RNN & Recurrent neural network \\
 CNN & Convolutional neural network \\
 \bottomrule
 \end{tabular}
 \label{tab:abbreviations}
\end{table}", "id": "b80c97cb-21ee-4d1e-8989-5d7676554f7d", "level": "section", "origin_cites_number": 6, "parent_id": "64dae0de-0654-4c28-90d7-95856f2349e2", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ] ], "subsections": [ "b98b1608-63ed-40e4-8a31-3e8023ef0207", "d840a266-c9d1-40a4-b01c-161e41f531dc", "3d125124-0f76-495a-849b-3574679d280e", "9610d7f6-5599-46e1-869b-238d95753770" ], "title": "Graph Learning Models and Algorithms" }, { "cite_extract_rate": 0, "cites": [], "content": "Signal processing is a traditional subject that processes signals defined in a regular data domain. In recent years, researchers have extended concepts of traditional signal processing to graphs. 
Classical signal processing techniques and tools, such as the Fourier transform and filtering, can thus be used to analyze graphs. In general, graphs are a kind of irregular data, which are hard to handle directly. As a complement to learning methods based on structures and models, GSP provides a new perspective for the spectral analysis of graphs. Derived from signal processing, GSP can explain graph properties such as connectivity and similarity. Fig.~\ref{fig:US} gives a simple example of graph signals at a certain time point, defined as the values observed at each vertex. In a graph, these observed values can be regarded as graph signals; each node is thereby mapped to the real number field in GSP. The main task of GSP is to extend signal processing approaches to mine implicit information in graphs.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{fig3.png}
\caption{The measurements of PM2.5 from different sensors on July 5, 2014 (data source: https://www.epa.gov/).}
\label{fig:US}
\end{figure}", "id": "b98b1608-63ed-40e4-8a31-3e8023ef0207", "level": "subsection", "origin_cites_number": 0, "parent_id": "b80c97cb-21ee-4d1e-8989-5d7676554f7d", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ], [ "subsection", "Graph Signal Processing" ] ], "subsections": [ "ae89c93b-e71a-4b64-9f6b-e73bb685f27f", "0f2c92c2-0d30-4395-818c-bed78b2932d8", "e6b8b008-dd20-4650-9ffa-d2d8c5d120f5", "51f7053a-6fd9-43c5-a057-1f594aea4b94" ], "title": "Graph Signal Processing" }, { "cite_extract_rate": 0.555555555555555, "cites": [ 8499, 245, 9134, 6101, 244 ], "content": "A meaningful representation of graphs has contributed a lot to the rapid growth of graph learning. There are two main models of GSP, i.e., adjacency matrix based GSP~ and Laplacian based GSP~. 
Adjacency matrix based GSP comes from algebraic signal processing (ASP)~, which interprets linear signal processing from the perspective of algebraic theory. Linear signal processing involves signals, filters, signal transformations, etc., and can be applied in both continuous and discrete time domains. ASP extends the basic assumptions of linear signal processing to general algebraic spaces. By selecting the signal model appropriately, ASP can recover different instances of linear signal processing. In adjacency matrix based GSP, the signal model is generated from a shift. Similar to traditional signal processing, a shift in GSP is a filter in the graph domain~. GSP usually defines graph signal models using adjacency matrices as shifts. Signals of a graph are normally defined at the vertices.
Laplacian based GSP originates from spectral graph theory. High dimensional data are projected into a low dimensional space spanned by part of the Laplacian basis~. Some researchers exploited sensor networks~ to achieve distributed processing of graph signals, while others solved the problem globally under the assumption that the signals on the graph are smooth. Unlike the adjacency matrix, the Laplacian matrix is symmetric with real and non-negative edge weights, and is used for undirected graphs.
Although the two models use different matrices as basic shifts, most of the notions in GSP are derived from signal processing. Notions with different definitions in these models may have similar meanings, and all of them correspond to concepts in signal processing. Signals in GSP are values defined on graphs, usually written as a vector \(\bm{s}=[ s_{0} ,s_{1} ,\dots ,s_{N-1}] \in \mathbb{C}^{N}\), where \(N\) is the number of vertices and each element of the vector represents the value on a vertex. 
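The vector view of graph signals just described can be made concrete in a few lines of code. The following minimal NumPy sketch (the four-vertex sensor graph and its measurements are invented for illustration) stores one observation per vertex and uses the standard Laplacian quadratic form \(\bm{s}^{T}\bm{L}\bm{s}\), with \(\bm{L}=\bm{D}-\bm{W}\), to quantify how strongly the signal varies across edges:

```python
import numpy as np

# Adjacency matrix of an invented 4-vertex sensor graph (unit edge weights).
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Graph signal: one measurement per vertex, a vector s with N = 4 entries.
s = np.array([35.0, 33.0, 34.0, 80.0])

# Laplacian L = D - W; s^T L s equals the sum over edges of w_ij * (s_i - s_j)^2.
D = np.diag(W.sum(axis=1))
L = D - W
smoothness = s @ L @ s
print(smoothness)  # dominated by the large (34 - 80)^2 jump on the last edge
```

A small value of the quadratic form indicates a smooth signal; here the outlier at the last vertex dominates the result.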
Some studies~ allow complex-valued signals, even though most applications are based on real-valued signals.
In the context of adjacency matrix based GSP, a graph can be represented as a triple \(G(V,E,\bm{W})\), where \(V\) is the vertex set, \(E\) is the edge set and \(\bm{W}\) is the adjacency matrix. With this definition, we can also define the degree matrix \(\bm{D}\), a diagonal matrix with \(\bm{D}_{ii}=d_{i}\), where \(d_{i}\) is the degree of vertex \(i\). The graph Laplacian is defined as \(\bm{L}=\bm{D}-\bm{W}\), and the normalized Laplacian as \(\bm{L}_{norm}=\bm{D}^{-1/2}\bm{L}\bm{D}^{-1/2}\). Filters in signal processing can be seen as functions that amplify or attenuate relevant frequencies and eliminate irrelevant ones. Multiplication by a matrix in a linear space corresponds to rescaling, which matches the filtering operation in the frequency domain. Hence we can use matrix multiplication as a filter in GSP, written as \(\bm{s}_{out} =\bm{Hs}_{in}\), where \(\bm{H}\) stands for a filter.
The shift is an important concept for describing variation in a signal, and time-invariant filters are used frequently~. In fact, there are different choices of shifts in GSP. Adjacency matrix based GSP uses \(\bm{A}\) as the shift, Laplacian based GSP uses \(\bm{L}\)~, and some researchers also use other matrices. Following time invariance in traditional signal processing, shift invariance is defined in GSP: if a filter commutes with the shift, i.e., \(\bm{AH}=\bm{HA}\), it is shift-invariant. It has been proved that a shift-invariant filter can be represented as a polynomial in the shift. The properties of the shift are vital, as they determine the form of other definitions such as the Fourier transform and frequency.
In adjacency matrix based GSP, the eigenvalue decomposition of the shift \(\bm{A}\) is \(\bm{A}=\bm{V\Lambda V}^{-1}\). 
\\(\\bm{V}\\) is the matrix of eigenvectors \\([\\bm{v}_{0}, \\bm{v}_{1},\\dots,\\bm{v}_{N-1}]\\) and\n\\[\n \\bm{\\Lambda} =\n \\begin{bmatrix}\n \\bm{\\lambda}_{0} & & \\\\\n & \\ddots & \\\\\n & & \\bm{\\lambda}_{N-1}\n \\end{bmatrix}\n\\]\nis a diagonal matrix of eigenvalues. The Fourier transform matrix is the inverse of \\(\\bm{V}\\), i.e., \\(\\bm{F}=\\bm{V}^{-1}\\). Frequency of shift is defined as total variation, which states the difference after shift \\[TV_{G} =||\\bm{v}_{k} -\\frac{1}{\\lambda _{max}} \\bm{A}\\bm{v}_{k} ||_{1},\\] where \\(\\frac{1}{\\lambda _{max}}\\) is a normalized factor of matrix. It means that the frequencies of eigenvalue far away from the largest eigenvalues on complex plane are large. A large frequency means that signals are changed with a large scale after shift filtering. The differences between minimum and maximum \\(\\bm{\\lambda}\\) can be seen in Fig.~\\ref{fig:lambda}. Generally, the total variation tends to be relatively low with larger frequency, and vice versa. Eigenvectors of larger eigenvalues can be used to construct low-frequency filters, which capture fundamental characteristics, and smaller ones can be employed to capture the variation among neighbor nodes.\n\\begin{figure}[htb]\n\\centering\n\\subfigure[The maximum frequency]{\n\\label{maxfre}\n\\includegraphics[width=0.5\\textwidth]{fig4-1.png}}\n\\subfigure[The minimum frequency]{\n\\label{minfre}\n\\includegraphics[width=0.5\\textwidth]{fig4-2.png}}\n\\caption{Illustration of difference between minimum and maximum frequencies.}\n\\label{fig:lambda}\n\\end{figure}\nFor topology learning problems, we can distinguish the corresponding solutions depending on known information. When topology information is partly known, we can use the known information to infer the whole graph. In case the topology information is unknown while we still can observe the signals on the graph, the topology structure has to be inferred from the signals. 
The former is often solved as a sampling and recovery problem, while blind topology inference is also known as graph topology (or structure) learning.", "id": "ae89c93b-e71a-4b64-9f6b-e73bb685f27f", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "b98b1608-63ed-40e4-8a31-3e8023ef0207", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ], [ "subsection", "Graph Signal Processing" ], [ "subsubsection", "Representation on Graphs" ] ], "subsections": [], "title": "Representation on Graphs" }, { "cite_extract_rate": 0.8125, "cites": [ 6108, 6112, 6109, 6107, 6105, 6102, 6110, 6106, 6104, 245, 8975, 6103, 6111 ], "content": "Sampling is not a new concept in GSP. In conventional signal processing, a sampling problem normally requires reconstructing the original signals from as few samples as possible while retaining all of their information: too few samples lead to information loss, while more samples require more storage. The well-known Nyquist-Shannon sampling theorem gives a sufficient condition for the perfect recovery of signals in the time domain.
Researchers have migrated these sampling theories into GSP to study the sampling problem on graphs. As the volume of data is large in some real-world applications such as sensor networks and social networks, sampling less and recovering better are vital for GSP. In fact, most algorithms and frameworks solving sampling problems require that the graph models correlations within the signals observed on it~. The sampling problem can be defined as reconstructing signals from samples on a subset of vertices, and the signals are usually band-limited. The Nyquist-Shannon sampling theorem was extended to graph signals in~. Based on the normalized Laplacian matrix, a sampling theorem and cut-off frequency are defined for GSP. 
Moreover, the authors provided a method for computing the cut-off frequency from a given sampling set and a method for choosing a sampling set for a given bandwidth. It should be noted that the sampling theorem proposed therein applies only to undirected graphs. Since the Laplacian matrix represents undirected graphs only, sampling theory for directed graphs adopts the adjacency matrix. An optimal operator with a guarantee of perfect recovery was proposed in~, and it is robust to noise for general graphs.
One of the explicit distinctions between classical signal processing and GSP is that signals of the former fall in a regular domain while those of the latter fall in an irregular domain. For sampling and recovery problems, classical signal processing samples continuous signals and recovers them from the samples, whereas GSP samples a discrete sequence and recovers the original sequence from the samples. Accordingly, the solution is generally separated into two parts, i.e., finding sampling vertex sets and reconstructing the original signals based on various models.
When the dataset is small, we can handle the signal and shift directly. However, for a large-scale dataset, some algorithms require matrix decomposition to obtain the frequencies and must store the eigenvalues during the procedure, which is almost impossible to realize. As a simple technique applicable to large-scale datasets, random methods can also be used in sampling. Puy et al.~ proposed two sampling strategies: a non-adaptive one depending on a parameter and an adaptive random sampling strategy. By relaxing the optimization constraint, they extended random sampling to large-scale graphs. Another common strategy is greedy sampling. For example, Shomorony and Avestimehr~ proposed an efficient method based on linear algebraic conditions that can exactly compute the cut-off frequency. 
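The two-part pipeline described above (choosing a sampling vertex set, then reconstructing the signal) can be sketched for a band-limited signal as follows. The path graph, the bandwidth \(K=3\), and the hand-picked sampling set are all illustrative assumptions; the sampling set here is fixed by hand rather than optimized as in the cited strategies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Path graph on 8 vertices and its Laplacian L = D - W (illustrative setup).
n = 8
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

# A band-limited signal: a combination of the K lowest-frequency eigenvectors.
lam, U = np.linalg.eigh(L)           # eigenvalues in ascending order
K = 3
s = U[:, :K] @ rng.normal(size=K)    # ground truth with bandwidth K

# Step 1: choose a sampling vertex set (|S| >= K) and observe the signal there.
S = [0, 3, 5, 7]
y = s[S]

# Step 2: least squares reconstruction within the band-limited signal space.
coeffs, *_ = np.linalg.lstsq(U[S, :K], y, rcond=None)
s_rec = U[:, :K] @ coeffs
```

Because the sampled rows of the spectral basis have full column rank here, the reconstruction matches the original signal up to numerical round-off.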
Chamon and Ribeiro~ provided near-optimal guarantees for greedy sampling, which bound its performance in the worst case.
All of the sampling strategies mentioned above can be categorized as selection sampling, where signals are observed on a subset of vertices. Besides selection sampling, there exists a type of sampling called aggregation sampling~, which uses as input the observations taken at a single vertex after sequential applications of the graph shift operator.
Similar to classical signal processing, the reconstruction task on graphs can also be interpreted as a data interpolation problem. By projecting the samples onto a proper signal space, researchers obtain interpolated signals. Least squares reconstruction is an available method in practice. Gadde and Ortega~ defined a generative model for signal recovery derived from a pairwise Gaussian random field (GRF) and a covariance matrix on graphs. Under the sampling theorem, the reconstruction of graph signals can be viewed as maximum a posteriori inference on the GRF with a low-rank approximation. Wang et al.~ aimed at the distributed reconstruction of time-varying band-limited signals, where distributed least squares reconstruction (DLSR) was proposed to recover the signals iteratively. DLSR can track time-varying signals and achieve perfect reconstruction. Di Lorenzo et al.~ proposed a least mean squares (LMS) strategy for adaptive estimation. LMS enables online reconstruction and tracking from observations on a subset of vertices, and also allows the subset to vary over time. Moreover, a sparse online estimation method was proposed to solve problems with unknown bandwidth.
Another common technique for recovering original signals is based on smoothness, which is used for inferring missing values in low-frequency graph signals. Wang et al.~ defined the concept of local set. 
Based on this concept, two iterative methods were proposed to recover band-limited signals on graphs. Besides, Romero et al.~ advocated kernel regression as a framework for GSP modeling and reconstruction, and two multi-kernel methods were proposed that handle parameter selection for the estimators by solving a single optimization problem. In addition, some researchers investigated different recovery problems with compressed sensing~.
There also exists some research on sampling different kinds of signals, such as smooth graph signals, piecewise-constant signals and piecewise-smooth signals~. Chen et al.~ gave a unified framework to analyze graph signals. The reconstruction of a known graph signal was studied in~, where the signal is sparse, meaning that only a few vertices are non-zero. Three kinds of reconstruction schemes corresponding to various seeding patterns were examined. By analyzing single simultaneous injections, single successive value injections, and multiple successive simultaneous injections, the conditions for perfect reconstruction on any vertices were derived.", "id": "0f2c92c2-0d30-4395-818c-bed78b2932d8", "level": "subsubsection", "origin_cites_number": 16, "parent_id": "b98b1608-63ed-40e4-8a31-3e8023ef0207", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ], [ "subsection", "Graph Signal Processing" ], [ "subsubsection", "Sampling and Recovery" ] ], "subsections": [], "title": "Sampling and Recovery" }, { "cite_extract_rate": 0.6875, "cites": [ 6113, 6118, 6114, 6105, 8976, 6117, 6116, 5167, 6119, 8977, 6115 ], "content": "In most application scenarios, graphs are constructed according to the correlations among entities. For example, in sensor networks, the correlations between sensors are often consistent with geographic distance. Edges in social networks are defined as relations such as friends or colleagues~. 
In biochemical networks, edges are generated by interactions. Although GSP is an efficient framework for solving problems on graphs such as sampling, reconstruction, and detection, it lacks a step for extracting relations from datasets. Connections exist in many datasets without explicit records. Fortunately, they can be inferred in many ways.
As a result, researchers want to learn complete graphs from datasets. The problem of learning a graph from a dataset is stated as estimating the graph Laplacian, or the graph topology~. Generally, such methods require the graph to satisfy some properties, such as sparsity and smoothness. Smoothness is a widespread assumption for networks generated from datasets; it is usually used to constrain the observed signals and provide a reasonable prior for graph signals, and researchers have applied it to graph topology learning. The intuition behind smoothness based algorithms is that most signals on the graph are smooth, so that the observed signals are dominated by the lowest frequencies. Dong et al.~ adopted a factor analysis model for graph signals, and imposed a Gaussian prior on the latent variables to obtain a Principal Component Analysis (PCA) like representation. Kalofolias~ formulated the objective as a weighted \(l_1\) problem and designed a general framework to solve it.
The Gauss-Markov Random Field (GMRF) is also a widely used model for graph topology learning in GSP. GMRF based graph topology learning selects the graph under which the observed signals are most likely to have been generated by a GMRF. Egilmez et al.~ formulated the problem as maximum a posteriori parameter estimation of a GMRF, where the graph Laplacian serves as the precision matrix. Pavez and Ortega~ also formulated the problem as precision matrix estimation, where the rows and columns are updated iteratively by optimizing a quadratic problem. Both of them restrict the resulting matrix to be a Laplacian. In~, Pavez et al. 
chose a two-step framework to find the structure of the underlying graph. First, a graph topology inference step is employed to select a proper topology; then, a generalized graph Laplacian is estimated. An error bound of the Laplacian estimation is computed, which can be utilized to obtain a matrix of a specific form as the precision matrix estimate. It is one of the first works that suggests adjusting the model to obtain a graph satisfying the requirements of various problems.
Diffusion is also a relevant model that can be exploited to solve the topology inference problem~. Diffusion refers to the process in which a node continuously influences its neighborhoods; in graphs, nodes with larger values have a higher influence on their neighboring nodes. Using a few components to represent signals helps to find the main factors of signal formation. Diffusion models often assume independent and identically distributed signals. Pasdeloup et al.~ introduced the concept of valid graphs to explain signals and assumed that the signals are observed after diffusion. Segarra et al.~ also assumed that there exists a diffusion process in the shift and that the signals can be observed. The signals in~ were explained as a linear combination of a few components.
For time series recorded in data, researchers tried to construct time-sequential networks. For instance, Mei and Moura~ proposed a methodology to estimate graphs, which considers both time and space dependencies and models them by an auto-regressive process. Segarra et al.~ proposed a method that can be seen as an extension of graph learning, aiming to solve the problem of jointly identifying a graph filter and its input signal.
For recovery methods, a well-known partial inference problem is recommendation~. The typical algorithm used in recommendation is collaborative filtering (CF)~. 
Given the observed ratings in a matrix, the objective of CF is to estimate the full rating matrix. Huang et al.~ demonstrated that collaborative filtering can be viewed as a specific band-stop graph filter on networks representing correlations between users and items. Furthermore, linear latent factor methods can also be modeled as a band-limited interpolation problem.", "id": "e6b8b008-dd20-4650-9ffa-d2d8c5d120f5", "level": "subsubsection", "origin_cites_number": 16, "parent_id": "b98b1608-63ed-40e4-8a31-3e8023ef0207", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ], [ "subsection", "Graph Signal Processing" ], [ "subsubsection", "Learning Topology Structure from Data" ] ], "subsections": [], "title": "Learning Topology Structure from Data" }, { "cite_extract_rate": 0, "cites": [], "content": "GSP algorithms have strict limitations on the experimental data, which leads to fewer real-world applications. Moreover, GSP algorithms require the input data to be exactly the whole graph, which means that a part of the graph data cannot serve as the input. Therefore, the computational complexity of such methods can be significantly high. In comparison with other kinds of graph learning methods, the scalability of GSP algorithms is relatively poor.", "id": "51f7053a-6fd9-43c5-a057-1f594aea4b94", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "b98b1608-63ed-40e4-8a31-3e8023ef0207", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ], [ "subsection", "Graph Signal Processing" ], [ "subsubsection", "Discussion" ] ], "subsections": [], "title": "Discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "Matrix factorization is a method of decomposing a matrix into its components. These components have a lower dimension and can be used to represent the original information of a network, such as the relationships among nodes. 
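As a first illustration of this idea, the following NumPy sketch (the small two-community adjacency matrix is invented) decomposes an adjacency matrix with the SVD and keeps only two components, which yields low dimensional vertex features and a low-rank reconstruction of the relationships:

```python
import numpy as np

# Adjacency matrix of a small illustrative network with two communities.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

# Decompose into components via the SVD: A = U diag(S) Vt.
U, S, Vt = np.linalg.svd(A)

# Keep only k components: a lower-dimensional summary of the network,
# whose rows can serve as vertex representations.
k = 2
emb = U[:, :k] * S[:k]           # k-dimensional vertex features
A_rec = emb @ Vt[:k]             # rank-k reconstruction of the relationships
err = np.linalg.norm(A - A_rec)  # information lost by the truncation
```

The truncation error `err` is exactly the energy of the discarded singular values, so the two kept components summarize most of the network's structure.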
Matrix factorization based graph learning methods adopt a matrix to represent graph characteristics like vertex pairwise similarity, and the vertex embedding can be achieved by factorizing this matrix~. Early graph learning approaches usually utilized matrix factorization based methods to solve the graph embedding problem. The input of matrix factorization is the non-relational high dimensional data feature represented as a graph. The output of matrix factorization is a set of vertex embedding. If the input data lies in a low dimensional manifold, the graph learning for embedding can be treated as a dimension-reduced problem that preserves the structure information. There are mainly two types of matrix factorization based graph learning. One is graph Laplacian matrix factorization, and the other is vertex proximity matrix factorization.", "id": "d840a266-c9d1-40a4-b01c-161e41f531dc", "level": "subsection", "origin_cites_number": 1, "parent_id": "b80c97cb-21ee-4d1e-8989-5d7676554f7d", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ], [ "subsection", "Matrix Factorization Based Methods" ] ], "subsections": [ "da066372-7801-4798-8580-d18b8701c6a3", "e751efbf-c93c-4087-8e00-0c156e9e7629", "251e27bc-08dc-47f1-8d80-b51a205b0267" ], "title": "Matrix Factorization Based Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "The preserved graph characteristics can be expressed as pairwise vertex similarities. Generally, there are two kinds of graph Laplacian matrix factorization, i.e., transductive and inductive matrix factorization. The former only embeds the vertices contained in the training set, and the latter can embed the vertices that are not contained in the training set. The general framework has been designed in~, and the graph Laplacian matrix factorization based graph learning methods have been summarized in~. 
The Euclidean distance between two feature vectors is directly adopted in the initial Metric Multidimensional Scaling (MDS)~ to find the optimal embedding. The neighborhoods of vertices are not considered in MDS, i.e., any pair of training instances is considered as connected. Subsequent studies tackle this issue by constructing a $k$-nearest-neighbor graph to extract the data feature: the top $k$ most similar neighbors of each vertex are connected to it. A similarity matrix is calculated by exploiting different methods, so that the graph characteristics can be preserved as much as possible.
Recently, researchers have designed more sophisticated models. The performance of the earlier matrix factorization model Locality Preserving Projection (LPP) can be improved by introducing anchors, as in Anchorgraph-based Locality Preserving Projection (AgLPP)~. The graph structure can be captured by using a local regression model and a global regression process based on Local and Global Regressive Mapping (LGRM)~. The global geometry can be preserved by using local spline regression~.
More information can be preserved by exploiting auxiliary information. An adjacency graph and a labelled graph were constructed in~. The objective function of LPP preserves the local geometric structure of the datasets~. An adjacency graph and a relational feedback graph were constructed in~ as well. Graph Laplacian regularization, k-means, and PCA were considered simultaneously in RF-Semi-NMF-PCA~.
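To make the central object of these methods concrete, the following stdlib-only sketch builds the unnormalized graph Laplacian $L = D - W$ from a toy symmetric similarity matrix of the kind produced by a $k$-nearest-neighbor construction. The graph and weights are illustrative only; in practice the eigenvectors of $L$ (computed with any linear-algebra routine) give the low dimensional embedding.

```python
# Sketch: unnormalized graph Laplacian L = D - W for a small similarity graph.
# W is a symmetric weight (similarity) matrix with zero diagonal;
# D is the diagonal degree matrix with D[i][i] = sum of row i of W.

def laplacian(W):
    n = len(W)
    degrees = [sum(row) for row in W]
    return [[(degrees[i] if i == j else 0.0) - W[i][j] for j in range(n)]
            for i in range(n)]

# Toy symmetric similarity matrix over 4 vertices (illustrative values).
W = [
    [0.0, 1.0, 0.5, 0.0],
    [1.0, 0.0, 0.0, 0.2],
    [0.5, 0.0, 0.0, 1.0],
    [0.0, 0.2, 1.0, 0.0],
]

L = laplacian(W)
# Every row of a graph Laplacian sums to zero, and L inherits the symmetry of W.
assert all(abs(sum(row)) < 1e-12 for row in L)
assert all(L[i][j] == L[j][i] for i in range(4) for j in range(4))
```
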
Other works, e.g.,~, adopt semi-definite programming to learn the adjacency graph that maximizes the pairwise distances.", "id": "da066372-7801-4798-8580-d18b8701c6a3", "level": "subsubsection", "origin_cites_number": 15, "parent_id": "d840a266-c9d1-40a4-b01c-161e41f531dc", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ], [ "subsection", "Matrix Factorization Based Methods" ], [ "subsubsection", "Graph Laplacian Matrix Factorization" ] ], "subsections": [], "title": "Graph Laplacian Matrix Factorization" }, { "cite_extract_rate": 0, "cites": [], "content": "Apart from solving the above generalized eigenvalue problem, another approach of matrix factorization is to factorize vertex proximity matrix directly. In general, matrix factorization can be used to learn the graph structure from non-relational data, and it is applicable to learn homogeneous graphs.\nBased on matrix factorization, vertex proximity can be approximated in a low dimensional space. The objective of preserving vertex proximity is to minimize the error. The Singular Value Decomposition (SVD) of vertex proximity matrix was adopted in~. There are some other approaches such as regularized Gaussian matrix factorization~, low-rank matrix factorization~, for solving SVD.", "id": "e751efbf-c93c-4087-8e00-0c156e9e7629", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "d840a266-c9d1-40a4-b01c-161e41f531dc", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ], [ "subsection", "Matrix Factorization Based Methods" ], [ "subsubsection", "Vertex Proximity Matrix Factorization" ] ], "subsections": [], "title": "Vertex Proximity Matrix Factorization" }, { "cite_extract_rate": 0, "cites": [], "content": "Matrix factorization algorithms operate on an interaction matrix to decompose several lower dimension matrices. The process brings some drawbacks. 
For example, the algorithms require a large memory when the decomposed matrices become large. In addition, matrix factorization algorithms are not applicable to supervised or semi-supervised tasks with the training process.", "id": "251e27bc-08dc-47f1-8d80-b51a205b0267", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "d840a266-c9d1-40a4-b01c-161e41f531dc", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ], [ "subsection", "Matrix Factorization Based Methods" ], [ "subsubsection", "Discussion" ] ], "subsections": [], "title": "Discussion" }, { "cite_extract_rate": 0.5, "cites": [ 6120 ], "content": "Random walk is a convenient and effective way to sample networks~. This method can generate sequences of nodes meanwhile preserving original relations between nodes. Based on network structure, NRL can generate feature vectors of vertices so that downstream tasks can mine network information in a low dimensional space. An example of NRL is shown in Fig.~\\ref{fig:NRL}. The image in Euclidean space is shown in Fig.~\\ref{enc}, and the corresponding graph in non-Euclidean space is shown in Fig.~\\ref{noenc}. 
As one of the most successful NRL algorithms, random walks play an important role in dimensionality reduction.\n\\begin{figure}[htb]\n\\centering\n\\subfigure[Image in Euclidean space]{\n\\label{enc}\n\\includegraphics[width=0.4\\textwidth]{fig5a}}\n\\subfigure[Graph in non-Euclidean space]{\n\\label{noenc}\n\\includegraphics[width=0.4\\textwidth]{fig5b.png}}\n\\caption{An example of NRL mapping an image from Euclidean space into non-Euclidean space.}\n\\label{fig:NRL}\n\\end{figure}", "id": "3d125124-0f76-495a-849b-3574679d280e", "level": "subsection", "origin_cites_number": 2, "parent_id": "b80c97cb-21ee-4d1e-8989-5d7676554f7d", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ], [ "subsection", "Random Walk Based Methods" ] ], "subsections": [ "aeae4916-091c-4fb3-9196-b086759a00f2", "b0c51e02-fd00-4569-aa41-615bfe766b54", "9bc956d7-ac64-4377-afad-8de347ff8311", "ba7ea660-f325-425d-9818-58b7145cd6a2", "79cc0d88-cd98-44e0-a1cb-3c845388c13f" ], "title": "Random Walk Based Methods" }, { "cite_extract_rate": 0.625, "cites": [ 8978, 218, 1010, 6121, 282 ], "content": "Graph-structured data have various data types and structures. The information encoded in a graph is related to graph structure and vertex attributes, which are the two key factors affecting the reasoning of networks. In real-world applications, many networks only have structural information, but lack vertex attribute information. How to identify network structure information effectively, such as important vertices and invisible links, attracts the interest of network scientists~. Graph data have high dimensional characteristics. Traditional network analysis methods cannot be used for analyzing graph data in a continuous space.\nIn recent years, various NRL methods have been proposed, which preserve rich structural information of networks. 
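The sampling primitive shared by random walk based NRL methods can be sketched in a few lines: repeatedly hop to a uniformly chosen neighbor and record the visited vertices. The graph and parameters below are illustrative, not taken from any particular cited method.

```python
import random

def random_walk(adj, start, length, rng=None):
    """Generate one uniform random walk of at most `length` vertices from `start`."""
    rng = rng or random.Random(0)
    walk = [start]
    while len(walk) < length:
        neighbors = adj[walk[-1]]
        if not neighbors:          # dead end: stop the walk early
            break
        walk.append(rng.choice(neighbors))
    return walk

# Toy undirected graph as an adjacency list.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
rng = random.Random(0)
walks = [random_walk(adj, v, 5, rng) for v in adj for _ in range(2)]
# Every consecutive pair of vertices in a walk must be an edge of the graph.
assert all(b in adj[a] for w in walks for a, b in zip(w, w[1:]))
```

Treating such walks as "sentences" of vertex "words" is what lets word embedding machinery be reused on graphs.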
DeepWalk~ and Node2vec~ are two representative methods that generate network representations from basic network topology information. These methods use random walk models to generate random vertex sequences on networks. By treating vertices as words and the generated random sequences of vertices as word sequences (sentences), the models learn the embedding representation of the vertices by feeding these sequences into the Word2vec model~. The principle of the learning model is to maximize the co-occurrence probability of vertices, as in Word2vec. In addition, Node2vec shows that networks have complex structural characteristics, and different network structure samplings can obtain different results. The sampling mode of DeepWalk is not enough to capture the diversity of connection patterns in networks. Node2vec designs a random walk sampling strategy, which can sample the networks with a preference for breadth-first or depth-first sampling by adjusting the parameters.
The NRL algorithms mentioned above focus on the first-order proximity information of vertices. Tang et al.~ proposed a method called LINE for large-scale network embedding. LINE can preserve both the first-order and second-order proximities. First-order proximity refers to the one-hop neighborhood between two vertices, and second-order proximity considers neighbors that are two hops away. LINE is not a deep learning based model, but it is often compared with deep edge modeling based methods.
It has been shown that network structure information plays an important role in various network analysis tasks.
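The breadth-first/depth-first bias of Node2vec can be sketched through its second-order transition weights: given the previous vertex and the current vertex, a candidate next vertex is weighted by $1/p$ if it returns to the previous vertex, $1$ if it is a neighbor of the previous vertex, and $1/q$ otherwise, where $p$ and $q$ are the return and in-out parameters. The stdlib-only code below is an illustrative sketch, not the reference implementation.

```python
import random

def node2vec_weights(adj, prev, cur, p, q):
    """Unnormalized Node2vec second-order transition weights from cur, given prev."""
    weights = []
    for nxt in adj[cur]:
        if nxt == prev:            # return to the previous vertex
            weights.append(1.0 / p)
        elif nxt in adj[prev]:     # distance 1 from prev: BFS-like move
            weights.append(1.0)
        else:                      # distance 2 from prev: DFS-like move
            weights.append(1.0 / q)
    return weights

def biased_walk(adj, start, length, p, q, rng=None):
    rng = rng or random.Random(0)
    walk = [start, rng.choice(adj[start])]
    while len(walk) < length:
        prev, cur = walk[-2], walk[-1]
        w = node2vec_weights(adj, prev, cur, p, q)
        walk.append(rng.choices(adj[cur], weights=w, k=1)[0])
    return walk

# Toy graph; small q biases the walk away from prev (depth-first-like exploration).
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
walk = biased_walk(adj, 0, length=6, p=4.0, q=0.25)
assert all(b in adj[a] for a, b in zip(walk, walk[1:]))
```

Setting $p$ large and $q$ small pushes the walk outward (DFS-like), while the opposite keeps it near the start vertex (BFS-like).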
In addition to this structural information, network attributes in the original network space are also critical in modeling the formation and evolution of the network~.", "id": "aeae4916-091c-4fb3-9196-b086759a00f2", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "3d125124-0f76-495a-849b-3574679d280e", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ], [ "subsection", "Random Walk Based Methods" ], [ "subsubsection", "Structure Based Random Walks" ] ], "subsections": [], "title": "Structure Based Random Walks" }, { "cite_extract_rate": 0.555555555555555, "cites": [ 7009, 6123, 7495, 6122, 3953 ], "content": "In addition to network topology, many types of networks also have rich vertex information, such as vertex content or label in networks. Yang et al.~ proposed an algorithm called TADW. The model is based on DeepWalk and considers the text information of vertices. The MMDW~ is another model based on DeepWalk, which is a kind of semi-supervised network embedding algorithm, by leveraging labelling\ninformation of vertices to enhance the performance.\nFocusing on the structural identity of nodes, Ribeiro et al.~ formulated a framework named Struc2vec. The framework considers nodes with similar local structure rather than neighborhood and labels of nodes. With hierarchy to evaluate structural similarity, the framework constrains structural similarity more stringently. The experiments indicate that DeepWalk and Node2vec are worse than Struc2vec which considers structural identity. There are some other NRL models, such as Planetoid~, which learn network representation using the feature of network structure and vertex attribute information. It is well known that vertex attributes provide effective information for improving network representation and help to learn embedded vector space. 
In the case of a relatively sparse network topology, vertex attribute information can be used as supplementary information to improve the accuracy of the representation. In practice, how to use vertex information effectively and how to apply this information to network vertex embedding are the main challenges in NRL.
Researchers investigate random walk based NRL not only on vertices but also on graphs. Adhikari et al.~ proposed an unsupervised scalable algorithm, Sub2Vec, to learn embeddings of arbitrary subgraphs. To be more specific, they proposed a method to measure the similarities between subgraphs without disturbing local proximity. Narayanan et al.~ proposed graph2vec, which is a neural embedding framework. Modeled on neural document embedding models, graph2vec treats a graph as a document and the rooted subgraphs around vertices as words. By migrating the model to graphs, the performance of graph2vec significantly exceeds other substructure representation learning algorithms.
Generally, a random walk can be regarded as a Markov process: the next state of the process depends only on the last state, which is known as a Markov chain. Inspired by vertex-reinforced random walks, Benson et al.~ presented the spacey random walk, a non-Markovian stochastic process. As a specific type of a more general class of vertex-reinforced random walks, it takes the view that the fraction of time spent on each vertex relates to the long-term behavior of dynamical systems. They proved that the dynamical systems converge to a stationary distribution under sufficient conditions.
Recently, with the development of the Generative Adversarial Network (GAN), researchers have combined random walks with the GAN method~. Existing research on NRL can be divided into generative models and discriminative models. GraphGAN~ integrated these two kinds of models and plays a game-theoretical minimax game between them. Through the process of the game, the performance of both models can be strengthened.
Random walk is used as a generator in the game. NetGAN~ is a generative model that can model network in real applications. The method takes the distribution of biased random walk as input, and can produce graphs with known patterns. It preserves important topology properties and does not need to define them in model definition.", "id": "b0c51e02-fd00-4569-aa41-615bfe766b54", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "3d125124-0f76-495a-849b-3574679d280e", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ], [ "subsection", "Random Walk Based Methods" ], [ "subsubsection", "Structure and Vertex Information Based Random Walks" ] ], "subsections": [], "title": "Structure and Vertex Information Based Random Walks" }, { "cite_extract_rate": 0.23529411764705802, "cites": [ 1155, 1419, 6094, 1421 ], "content": "In reality, most networks contain more than one type of vertex, and hence networks are heterogeneous. Different from homogeneous NRL, heterogenous NRL should well reserve various relationships among different vertices~. Considering the ubiquitous existence of heterogeneous networks, many efforts have been made to learn network representations of heterogeneous networks. Compared to homogeneous NRL, the proximity among entities in heterogeneous NRL is more than a simple measure of distance or closeness. The semantics among vertices and links should be considered. Some typical scenarios include knowledge graphs and social networks.\nKnowledge graph is a popular research domain in recent years. A vital part in knowledge base population is relational inference. The central problem of relational inference is inferring unknown knowledge from the existing facts in knowledge bases~. There are three types of common relational inference method in general: statistical relational learning (SRL), latent factor models (LFM) and random walk models (RWM). 
Relational learning methods based on statistics lack generality and scalability. As a result, latent factor model based graph embedding and relational path based random walks have been adopted more widely.
In a knowledge graph, there exist various vertices and various types of relationships among different vertices. For example, in a scholar related knowledge graph~, the types of vertices include scholar, paper, publication venue, institution, etc. The types of relationships include coauthorship, citation, publication, etc. The key idea of knowledge graph embedding is to embed vertices and their relationships into a low dimensional vector space, while the inherent structure of the knowledge graph is preserved~.
For relational path based random walks, the path ranking algorithm (PRA) is a path finding method using random walks to generate relational features on graph data~. Random walks in PRA are performed with restart, and the generated path features are combined with a logistic regression. However, PRA cannot predict a connection between two vertices if there does not exist a path between them. Gardner et al.~ introduced two ways to improve the performance of PRA. One method enables more efficient processing to incorporate a new corpus into the knowledge base, while the other uses a vector space to reduce the sparsity of surface forms. To resolve cascading errors in knowledge base construction, Wang and Cohen~ proposed a model that jointly performs information extraction and knowledge base construction with a recursive random walk. Using the latent context of the text, the model obtains additional improvement. Liu et al.~ developed a new random walk based learning algorithm named Hierarchical Random-walk inference (HiRi). It is a two-tier scheme: the upper tier recognizes relational sequence patterns, and the lower tier captures information from subgraphs of knowledge bases.
Another widely-investigated type of heterogeneous network is the social network, such as online social networks and location based social networks.
Social networks are heterogeneous in nature because of the different types of vertices and relations. There are two main ways to embed heterogeneous social networks: meta-path based approaches and random walk based approaches.
A meta-path in heterogeneous networks is defined as a sequence of vertex types encoding significant composite relations among various types of vertices. Aiming to exploit the rich information in social networks carried by the various types of relationships among vertices, Fu et al.~ proposed HIN2Vec, which is a representation learning framework based on meta-paths. HIN2Vec is a neural network model, and the meta-paths are embedded in two independent phases, i.e., training data preparation and representation learning. Experimental results on various social network datasets show that the HIN2Vec model is able to automatically learn vertex vectors in heterogeneous networks to support a variety of applications. Metapath2vec~ was designed by formalizing meta-path based random walks to construct the neighborhoods of a vertex in heterogeneous networks. It takes advantage of a heterogeneous skip-gram model to perform vertex embedding.
Meta-path based methods require either prior knowledge for optimal meta-path selection or extended computations for path length selection. To overcome these challenges, random walk based approaches have been proposed. Hussein et al.~ proposed the JUST model, which is a heterogeneous graph embedding approach using random walks with jump and stay strategies so that the aforementioned bias can be overcome effectively. Another method that does not require prior knowledge for meta-path definition is MPDRL~, meta-path discovery with reinforcement learning. This method employs a reinforcement learning algorithm to perform multi-hop reasoning to generate path instances, and then summarizes the important meta-paths using the Lowest Common Ancestor principle.
Shi et al.~ proposed the HERec model, which utilizes the heterogeneous information network embedding for providing accurate recommendations in social networks. HERec is designed based on a random walk based approach for generating meaningful vertex sequences for heterogeneous network embedding. HERec can effectively adopt the auxiliary information in heterogeneous information networks. Other typical heterogeneous social network embedding approaches include, e.g., PTE~ and SHNE~.", "id": "9bc956d7-ac64-4377-afad-8de347ff8311", "level": "subsubsection", "origin_cites_number": 17, "parent_id": "3d125124-0f76-495a-849b-3574679d280e", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ], [ "subsection", "Random Walk Based Methods" ], [ "subsubsection", "Random Walks in Heterogeneous Networks" ] ], "subsections": [], "title": "Random Walks in Heterogeneous Networks" }, { "cite_extract_rate": 0.25, "cites": [ 242 ], "content": "Network is evolving over time, which means that new vertices may emerge and new relations may appear. Therefore, it is significant to capture the temporal behaviour of networks in network analysis. Many efforts have been made to learn time-varying network embedding (e.g., dynamic networks or temporal networks)~. In contrast to static network embedding, time-varying NRL should consider the network dynamics, which means that old relationships may become invalid and new links may appear.\nThe key of time-varying NRL is to find a suitable way to incorporate the time characteristic into embedding via reasonable updating approaches. Nguyen et al.~ proposed the CTDNE model for continuous dynamic network embedding based on random walk with \"chronological\" paths which can only move forward as time goes on. Their model is more suitable for time-dependent network representation that can capture the important temporal characteristics of continuous-time dynamic networks. 
Results on various datasets show that CTDNE outperforms static NRL approaches. Zuo et al.~ proposed the HTNE model which is a temporal NRL approach based on the Hawkes process. HTNE can well integrate the Hawkes process into network embedding so that the influence of historical neighbors on the current neighbors can be accurately captured.\nFor unseen vertices in a dynamical network, GraphSAGE~ was presented to efficiently generate embeddings for new vertices in network. In contrast to methods that training embedding for every vertex in the network, GraphSAGE designs a function to generate embedding for a vertex with features of the neighborhoods locally. After sampling neighbors of a vertex, GraphSAGE uses different aggregators to update the embedding of the vertex. However, current graph neural methods are proficient of only learning local neighborhood information and cannot directly explore the higher-order proximity and the community structure of graphs.", "id": "ba7ea660-f325-425d-9818-58b7145cd6a2", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "3d125124-0f76-495a-849b-3574679d280e", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ], [ "subsection", "Random Walk Based Methods" ], [ "subsubsection", "Random Walks in Time-varying Networks" ] ], "subsections": [], "title": "Random Walks in Time-varying Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "As mentioned before, random walk is a fundamental way to sample networks. The sequences of nodes could preserve the information of network structure. However, there are some disadvantages of this method. For example, random walk relies on random strategies, which creates some uncertain relations of nodes. To reduce this uncertainty, it needs to increase the number of samples, which will significantly increase the complexity of algorithms. 
Some random walk variants could preserve local and global information of networks, but they might not be effective in adjusting parameters to adapt to different types of networks.", "id": "79cc0d88-cd98-44e0-a1cb-3c845388c13f", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "3d125124-0f76-495a-849b-3574679d280e", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ], [ "subsection", "Random Walk Based Methods" ], [ "subsubsection", "Discussion" ] ], "subsections": [], "title": "Discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "Deep learning is one of the hottest areas over the past few years. Nevertheless, it is an attractive and challenging task to extend the existing neural network models, such as Recurrent Neural Networks (RNNs) or Convolutional Neural Networks (CNNs), to graph data. Gori et al.~ proposed a GNN model based on recursive neural network. In this model, a transfer function is implemented, which maps the graph or its vertices to an m-dimensional Euclidean space. 
In recent years, lots of GNN models have been proposed.", "id": "9610d7f6-5599-46e1-869b-238d95753770", "level": "subsection", "origin_cites_number": 1, "parent_id": "b80c97cb-21ee-4d1e-8989-5d7676554f7d", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ], [ "subsection", "Deep Learning on Graphs" ] ], "subsections": [ "208cf390-35c2-4759-957e-630de3176dd3", "b75e6fab-d3c1-4722-9e3c-4f5032b18f80", "66e692cf-fb7f-41d4-85ba-1bdd02dff439", "e5ad0c9e-68fa-412c-b001-9342caaabe3d", "45ad40ac-209a-468b-a7d6-814a40408855", "7cc791fe-91f8-407d-ba5f-868a9d09aff0" ], "title": "Deep Learning on Graphs" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 25, 7012, 225, 8313, 3966, 4001, 228, 7213, 23, 7212, 213, 216, 235, 6124, 8332 ], "content": "GCN works on the basis of grid structure domain and graph structure domain~.\n\\textbf{Time Domain and Spectral Methods}. Convolution is one of a common operation in deep learning. However, since graph lacks a grid structure, standard convolution over images or text cannot be directly applied to graphs. Bruna et al.~ extended the CNN algorithm from image processing to the graph using the graph Laplacian matrix, dubbed as spectral graph CNN. The main idea is similar to Fourier basis for signal processing. Based on~, Henaff et al.~ defined kernels to reduced the learning parameters by analogizing the local connection of CNNs on the image. Defferrard et al.~ provided two ways for generalizing CNNs to graph structure data based on graph theory. One method is to reduce the parameters by using polynomial kernel, and this method can be accelerated by using Chebyshev polynomial approximation. The other method is the special pooling method, which is pooling on the binary tree constructed from vertices. An improved version of was introduced by Kipf and Welling~. The proposed method is a semi-supervised learning method for graphs. 
The algorithm employs a simple and effective neural network with a layer-wise propagation rule, which is based on a first-order approximation of spectral convolutions on graphs and can operate directly on graphs.
There are some other time domain based methods. Based on the mixture model of CNNs, for instance, Monti et al.~ generalized the CNN to non-Euclidean space. Zhou and Li~ proposed a new CNN graph modeling framework, which designs two modules for graph structure data: a K-order convolution operator and an adaptive filtering module. In addition, the high-order adaptive graph convolution network (HA-GCN) framework proposed in~ is a general architecture that is suitable for many vertex-centric and graph-centric applications. Manessi et al.~ proposed a dynamic graph convolution network algorithm for dynamic graphs. The core idea of the algorithm is to combine an extension of graph convolution with an improved Long Short-Term Memory (LSTM) algorithm, and then train the downstream recursive unit by using graph structure data and vertex features. The spectral based NRL methods have many applications, such as vertex classification~, traffic forecasting~, and action recognition~.
\textbf{Space Domain and Spatial Methods}. Spectral graph theory provides a convolution method on graphs, but many NRL methods directly apply the convolution operation on graphs in the space domain. Niepert et al.~ applied graph labeling procedures such as the Weisfeiler-Lehman kernel on graphs to generate a unique ordering of vertices. The generated sub-graphs can then be fed to the traditional CNN operation in the space domain. Duvenaud et al.~ designed Neural fingerprints (FP), a spatial method using the first-order neighbors, similar to the GCN algorithm.
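The first-order propagation rule underlying GCN-style layers can be written as $H^{(l+1)} = \sigma(\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}H^{(l)}W^{(l)})$ with $\hat{A} = A + I$. The pure-Python sketch below implements one such layer on a toy graph; the graph, feature values, and weights are illustrative only.

```python
import math

def matmul(A, B):
    """Plain nested-list matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def gcn_layer(A, H, W):
    """One GCN layer: ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    n = len(A)
    # Add self-loops: A_hat = A + I.
    A_hat = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    # Symmetric normalization by the degree of A_hat.
    d_inv_sqrt = [1.0 / math.sqrt(sum(row)) for row in A_hat]
    A_norm = [[d_inv_sqrt[i] * A_hat[i][j] * d_inv_sqrt[j] for j in range(n)]
              for i in range(n)]
    Z = matmul(matmul(A_norm, H), W)
    return [[max(0.0, z) for z in row] for row in Z]  # ReLU nonlinearity

# Toy path graph on 3 vertices, 2-d input features, 2-d output features.
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
W = [[0.5, -0.5], [0.5, 0.5]]
H1 = gcn_layer(A, H, W)
assert len(H1) == 3 and len(H1[0]) == 2   # one embedding row per vertex
assert all(v >= 0.0 for row in H1 for v in row)
```

Stacking such layers mixes information from increasingly distant neighborhoods, which is the mechanism the spectral and spatial variants above refine in different ways.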
Atwood and Towsley~ proposed another convolution method, called the diffusion-convolutional neural network, which incorporates a transition probability matrix and replaces the characteristic basis of convolution with a diffusion basis. Gilmer et al.~ reformulated existing models into a single common framework, and exploited this framework to discover new variations. Allamanis et al.~ represented the structure of code from both syntactic and semantic perspectives, and utilized the GNN method to recognize program structures.
Zhuang and Ma~ designed dual graph convolution networks (DGCN), which use both a diffusion basis and an adjacency basis. DGCN uses two convolutions: one is the characteristic form of a polynomial filter, and the other replaces the adjacency matrix with the PPMI (Positive Pointwise Mutual Information) matrix of the transition probabilities~. Dai et al.~ proposed the SSE algorithm, which uses asynchronous stochastic updates to learn vertex representations so as to improve learning efficiency. In this model, a recursive method is adopted to learn the vertex latent representations, and sampled batch data are utilized to update the parameters. The recursive function of SSE is calculated from the weighted average of the historical state and the new state. Zhu et al.~ proposed a graph smoothing splines neural network which exploits non-smooth node features and global topological knowledge, such as centrality, for graph classification. Gao et al.~ proposed a large scale graph convolution network (LGCN) based on vertex feature information. In order to scale to large graphs, they proposed a sub-graph training strategy, which trains on sampled sub-graphs in small batches.
Based on a deep generative graph model, a novel method called DeepNC for inferring the missing parts of a network was proposed in~.\n\\begin{figure*}[htb]\n\t\\centering\n\t\\includegraphics[width=4.5in,height=1.5in]{fig6.png}\n\t\\caption{A brief history of algorithms of deep learning on graphs.}\n\t\\label{fig3}\n\\end{figure*}\nA brief history of deep learning on graphs is shown in Fig.~\\ref{fig3}. GNN has attracted lots of attention since 2015, and it is widely studied and used in various fields.", "id": "208cf390-35c2-4759-957e-630de3176dd3", "level": "subsubsection", "origin_cites_number": 21, "parent_id": "9610d7f6-5599-46e1-869b-238d95753770", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ], [ "subsection", "Deep Learning on Graphs" ], [ "subsubsection", "Graph Convolutional Networks" ] ], "subsections": [], "title": "Graph Convolutional Networks" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 252, 180, 218, 38, 27 ], "content": "In sequence-based tasks, attention mechanism has been regarded as a standard~. GNNs achieve lots of benefits from the expanded model capacity of attention mechanisms. GATs are a kind of spatial-based GCNs~. It takes the attention mechanism into consideration when determining the weights of vertex's neighbors. Likewise, Gated Attention Networks (GAANs) also introduced the multi-head attention mechanism for updating the hidden state of some vertices~. Unlike GATs, GAANs employ a self-attention mechanism which can compute different weights for different heads. Some other models such as graph attention model (GAM) were proposed for solving different problems~. Take GAM as an example, the purpose of GAM is to handle graph classification. Therefore, GAM is set to process informative parts by visiting a sequence of significant vertices adaptively. 
The GAM model contains an LSTM network whose parameters encode historical information, policies, and other information generated from exploration of the graph. Attention Walks (AWs) are another kind of learning model based on GNNs and random walks~. In contrast to DeepWalk, AWs use differentiable attention weights when factorizing the co-occurrence matrix~.
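The core of GAT-style models above, i.e., learning a weight for each neighbor, can be sketched as a single attention head. The concatenation-plus-LeakyReLU scoring follows the original GAT formulation; the toy vectors and parameter values are assumptions.

```python
import numpy as np

def gat_attention(h_i, neighbors, W, a, slope=0.2):
    """alpha_ij = softmax_j(LeakyReLU(a^T [W h_i || W h_j])) over neighbors j."""
    Wh_i = W @ h_i
    scores = []
    for h_j in neighbors:
        z = np.concatenate([Wh_i, W @ h_j])
        s = float(a @ z)
        scores.append(s if s > 0 else slope * s)   # LeakyReLU
    scores = np.array(scores)
    e = np.exp(scores - scores.max())              # numerically stable softmax
    return e / e.sum()

h_i = np.array([1.0, 0.0])
neighbors = [np.array([0.0, 1.0]), np.array([1.0, 1.0])]
W = np.eye(2)                       # toy linear transform
a = np.array([0.1, 0.2, 0.3, 0.4])  # toy attention parameters
alpha = gat_attention(h_i, neighbors, W, a)
print(alpha)                        # per-neighbor weights, summing to 1
```

Multi-head variants such as GAANs run several such heads with separate parameters and combine (and, in GAAN, gate) their outputs.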
Variational graph auto-encoder~ is another successful approach that employs GCN as an encoder and a link prediction layer as a decoder. Its successor, adversarially regularized variational graph auto-encoder~, adds a regularization process with an adversarial training approach to learn a more robust embedding.", "id": "66e692cf-fb7f-41d4-85ba-1bdd02dff439", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "9610d7f6-5599-46e1-869b-238d95753770", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ], [ "subsection", "Deep Learning on Graphs" ], [ "subsubsection", "Graph Auto-Encoders" ] ], "subsections": [], "title": "Graph Auto-Encoders" }, { "cite_extract_rate": 1, "cites": [ 222, 6125, 259, 6983 ], "content": "The purpose of graph generative networks is to generate graphs according to the given observed set of graphs. Many previous methods of graph generative networks have their own application domains. For example, in natural language processing, the semantic graph or the knowledge graph is generated based on the given sentences. Some general methods have been proposed recently. One kind of them considers the generation process as the formation of vertices and edges. Another kind is to employ generative adversarial training. Some GCNs based graph generative networks such as molecular generative adversarial networks (MolGAN) integrate GNN with reinforcement learning~. Deep generative models of graphs (DGMG) achieves a hidden representation of existing graphs by utilizing spatial-based GCNs~. There are some knowledge graph embedding algorithms based on GAN and Zero-Shot Learning~. 
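The encoder/decoder split described above can be illustrated with the inner-product decoder used by (variational) graph auto-encoders: node embeddings Z are mapped back to edge probabilities via sigmoid(Z Z^T). Here Z is a random placeholder standing in for the output of a learned (e.g., GCN) encoder.

```python
import numpy as np

def inner_product_decoder(Z):
    """Reconstruct edge probabilities: A_hat = sigmoid(Z @ Z.T)."""
    logits = Z @ Z.T
    return 1.0 / (1.0 + np.exp(-logits))

rng = np.random.default_rng(0)
Z = rng.normal(size=(4, 2))       # placeholder for encoder output
A_hat = inner_product_decoder(Z)
print(A_hat.shape)                # (4, 4) matrix of edge probabilities
```

Training minimizes a reconstruction loss between A_hat and the observed adjacency matrix (plus a KL regularization term in the variational case).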
Vyas et al.~ proposed a generalized zero-shot learning model, which can find unseen semantics in knowledge graphs.
Recent studies in this direction have discussed the use of GCNs, the combination of GCNs with RNN or CNN, and recursive structures for graph structures~.\n\\begin{table*}[htbp]\n \\centering\n \\caption{Summary of graph learning methods and their applications}\n \\begin{tabular}{clll}\n \\toprule\n \\textbf{Categories} & \\multicolumn{1}{c}{\\textbf{Algorithms}} & \\multicolumn{1}{c}{\\textbf{Neural Component}} & \\multicolumn{1}{c}{\\textbf{Applications}} \\\\\n \\midrule\n \\multicolumn{1}{c}{\\multirow{8}[16]{*}{Time Domain and\\newline{} Spectral Methods}}\n & SNLCN~ & Graph Neural Network &\n Classification\\\\\n\\cmidrule{2-4} & DCN~ & Spectral Network & Classification \\\\\n\\cmidrule{2-4} & ChebNet~ & Convolution Network & Classification \\\\\n\\cmidrule{2-4} & GCN~ & Spectral Network & Classification \\\\\n\\cmidrule{2-4} & HA-GCN~ & GCN & Classification \\\\\n\\cmidrule{2-4} & Dynamic GCN~ & GCN, LSTM & Classification \\\\\n\\cmidrule{2-4} & DCRNN~ & Diffusion Convolution Network & Traffic Forecasting \\\\\n\\cmidrule{2-4} & ST-GCN ~ & GCN & Action Recognition \\\\\n \\midrule\n \\multicolumn{1}{c}{\\multirow{7}[14]{*}{Space Domain and\\newline{} Spatial Methods}} & PATCHY-SAN~ & Convolutional Network & \\multicolumn{1}{p{11.94em}}{Runtime Analysis,\\newline{}Feature Visualization,\\newline{}Graph Classification} \\\\\n\\cmidrule{2-4} & \\multicolumn{1}{p{11.5em}}{Neural FP~} & & Sub-graph Classification \\\\\n\\cmidrule{2-4} & DCNN~ & DCNN & Classification \\\\\n\\cmidrule{2-4} & DGCN~ & \\multicolumn{1}{p{11.5em}}{Graph-Structure-Based Convolution, PPMI-Based Convolution.} & Classification \\\\\n\\cmidrule{2-4} & SSE~ & & Vertex Classification \\\\\n\\cmidrule{2-4} & LGCN~ & Convolutional Neural Network & Vertex Classification \\\\\n\\cmidrule{2-4} & STGCN~ & Gated Sequential Convolution & Traffic Forecasting \\\\\n \\midrule\n \\multicolumn{1}{l}{\\multirow{12}[24]{*}{Deep Learning\\newline{} Model Based Methods}} & GATs~ & 
\\multicolumn{1}{l}{\\multirow{3}[6]{*}{Attention Neural Network}} & Classification \\\\\n\\cmidrule{2-2}\\cmidrule{4-4} & GAAN~ & & Vertex Classification \\\\\n\\cmidrule{2-2}\\cmidrule{4-4} & GAM~ & & Graph Classification \\\\\n\\cmidrule{2-4} & Aws~ & \\multicolumn{1}{l}{\\multirow{4}[8]{*}{Auto-encoder Neural Network}} & \\multicolumn{1}{p{11.94em}}{Link Prediction, \\newline{}Sensitivity Analysis,\\newline{}Vertex Classification} \\\\\n\\cmidrule{2-2}\\cmidrule{4-4} & SDNE~ & & \\multicolumn{1}{p{11.94em}}{Classification,\\newline{} Link Prediction, \\newline{}Visualization} \\\\\n\\cmidrule{2-2}\\cmidrule{4-4} & DNGR~ & & Clustering, Visualization \\\\\n\\cmidrule{2-2}\\cmidrule{4-4} & DRNE~& & \\multicolumn{1}{p{11.94em}}{Regular Equivalence Prediction,\\newline{}Structural Role Classification,\\newline{}Network Visualization} \\\\\n\\cmidrule{2-4} & MolGAN~ & \\multicolumn{1}{l}{\\multirow{2}[4]{*}{Generative Neural Network}} & Generative Model \\\\\n\\cmidrule{2-2}\\cmidrule{4-4} & DGMG~ & & Molecule Generation \\\\\n\\cmidrule{2-4} & DCRNN~ & Diffusion Convolution Network & Traffic Forecasting \\\\\n\\cmidrule{2-4} & STGCN~ & Gated Sequential Convolution & \\\\\n\\cmidrule{2-4} & ST-GCN~ & GCNs & Action Recognition \\\\\n \\bottomrule\n \\end{tabular}\n \\label{tab:addlabel}\n\\end{table*}", "id": "45ad40ac-209a-468b-a7d6-814a40408855", "level": "subsubsection", "origin_cites_number": 25, "parent_id": "9610d7f6-5599-46e1-869b-238d95753770", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Graph Learning Models and Algorithms" ], [ "subsection", "Deep Learning on Graphs" ], [ "subsubsection", "Graph Spatial-Temporal Networks" ] ], "subsections": [], "title": "Graph Spatial-Temporal Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "In this context, the task of graph learning can be seen as optimizing the objective function by using gradient descent algorithms. 
Therefore, the performance of deep learning based NRL models is influenced by the gradient descent algorithms, which may encounter challenges such as local optima and the vanishing gradient problem.
Table II lists the neural components and applications of various graph learning methods.", "id": "ad0cb974-233d-49a2-87c6-a6386791ce72", "level": "section", "origin_cites_number": 1, "parent_id": "64dae0de-0654-4c28-90d7-95856f2349e2", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Applications" ] ], "subsections": [ "718d1a88-a8ed-4488-8cc4-5730817b768a", "d71dfd60-8d36-44a3-8f75-1c26ea482baf", "74a3bf0e-c434-4b67-a633-48f77c44c49b", "6e167047-2d60-4243-89db-1578cae21cde", "f41d2e3e-418d-40a1-96ae-40e2ba10a214", "98e00854-a5c9-4846-b094-59c75cc496f6" ], "title": "Applications" }, { "cite_extract_rate": 0, "cites": [], "content": "There are several datasets and benchmarks used to evaluate the performance of graph learning approaches for various tasks such as link prediction, node classification, and graph visualization. For instance, datasets like Cora\\footnote{https://relational.fit.cvut.cz/dataset/CORA} (citation network), Pubmed\\footnote{https://catalog.data.gov/dataset/pubmed} (citation network), BlogCatalog\\footnote{http://networkrepository.com/soc-BlogCatalog.php} (social network), Wikipedia\\footnote{https://en.wikipedia.org/wiki/Wikipedia:Database\\_download} (language network) and PPI\\footnote{https://openwetware.org/wiki/Protein-protein\\_interaction\\_databases} (biological network) include nodes, edges, labels or attributes of nodes. Some research institutions developed graph learning libraries, which include common and classical graph learning algorithms. For example, OpenKE\\footnote{https://github.com/thunlp/OpenKE} is a Python library for knowledge graph embedding based on PyTorch. The open-source framework has the implementations of RESCAL, HolE, DistMult, ComplEx, etc. 
CogDL\\footnote{https://github.com/THUDM/cogdl/} is a graph representation learning framework, which can be used for node classification, link prediction, graph classification, etc.", "id": "718d1a88-a8ed-4488-8cc4-5730817b768a", "level": "subsection", "origin_cites_number": 0, "parent_id": "ad0cb974-233d-49a2-87c6-a6386791ce72", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Applications" ], [ "subsection", "Datasets and Open-source Libraries" ] ], "subsections": [], "title": "Datasets and Open-source Libraries" }, { "cite_extract_rate": 0.75, "cites": [ 7011, 8410, 180, 242, 1157, 274 ], "content": "Many data are in textual form coming from various resources like web pages, emails, documents (technical and corporate), books, digital libraries and customer complains, letters, patents, etc. Textual data are not well structured for obtaining any meaningful information as text often contains rich context information. There exist abundant applications around text, including text classification, sequence labeling, sentiment classification, etc. Text classification is one of the most classical problems in natural language processing. Popular algorithms proposed to handle this problem include GCNs~, GATs~, Text GCNs~, and Sentence LSTM~. Sentence LSTM has also been applied to sequence labeling, text generation, multi-hop reading comprehension, etc~. Syntactic GCN was proposed to solve semantic role labeling and neural machine translation~. Gated Graph Neural Networks (GGNNs) can also be used to address neural machine translation and text generation~. 
For relation extraction, Tree LSTM, Graph LSTM, and GCN are better suited~.
Modeling real-world physical systems is one of the most fundamental aspects of understanding human intelligence. Representing objects as vertices and relations as edges between them is a simple but effective way to model physics. Battaglia et al.~ proposed interaction networks (IN) to predict and infer diverse physical systems, in which IN takes objects and relationships as input. Based on IN, the interactions can be reasoned about and the effects can be applied, so that physical dynamics can be predicted. Visual interaction networks (VIN) can make predictions from pixels by first learning a state code from two consecutive input frames per object~.\nOther graph network based models have been developed to address chemistry and biology problems. Calculating molecular fingerprints, i.e., using feature vectors to represent molecules, is a central step. Researchers~ proposed neural graph fingerprints using GCNs to calculate substructure feature vectors. Some studies focused on protein interface prediction, a challenging issue with significant applications in biology. Besides, GNNs can be used in biomedical engineering as well. Based on protein-protein interaction networks, Rhee et al.~ used graph convolution and protein relation networks to classify breast cancer subtypes.
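The object/relation decomposition behind interaction networks can be sketched as one round of message passing: each edge produces an effect, and each object aggregates the effects it receives. The linear "relation" and "object" functions below are toy placeholders for the learned networks in IN.

```python
import numpy as np

def interaction_step(states, edges, relation_fn, object_fn):
    """One IN-style update: per-edge effects, aggregated per receiver."""
    effects = [np.zeros_like(s) for s in states]
    for sender, receiver in edges:
        effects[receiver] = effects[receiver] + relation_fn(states[sender], states[receiver])
    return [object_fn(s, e) for s, e in zip(states, effects)]

relation_fn = lambda s, r: 0.1 * (s - r)   # toy attractive effect
object_fn = lambda s, e: s + e             # toy state update
states = [np.array([0.0]), np.array([1.0]), np.array([3.0])]
edges = [(0, 1), (2, 1)]                   # objects 0 and 2 act on object 1
new_states = interaction_step(states, edges, relation_fn, object_fn)
print([float(s[0]) for s in new_states])   # object 1 is pulled toward its senders
```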
Therefore, the embedding of an OOKB entity can be aggregated from existing entities. Such algorithms achieve reasonable performance in both the KBC and OOKB settings. Likewise, GCNs can also be used to solve the problem of cross-lingual knowledge graph alignment. The main idea is to embed entities from different languages into an integrated embedding space, and then align these entities according to their embedding similarities.\nGenerally speaking, knowledge graph embedding can be categorized into two types: translational distance models and semantic matching models. Translational distance models aim to learn low-dimensional vectors of the entities in a knowledge graph by employing distance-based scoring functions. These methods calculate the plausibility of a triple as the distance between two entities after a translation determined by the relation between them. Among current translational distance models, TransE~ is the most influential one. TransE models relationships between entities by interpreting relations as translations operating on the low-dimensional embeddings. Inspired by TransE, TransH~ was proposed to overcome the disadvantages of TransE in dealing with 1-to-N, N-to-1, and N-to-N relations by introducing relation-specific hyperplanes. Instead of hyperplanes, TransR~ introduces relation-specific spaces to address the flaws of TransE. Meanwhile, various extensions of TransE have been proposed to enhance knowledge graph embeddings, such as TransD~ and TransF~. On the basis of TransE, DeepPath incorporates reinforcement learning methods for learning relational paths in knowledge graphs. By designing a reward function involving accuracy, efficiency, and path diversity, the path-finding process becomes better controlled and more flexible.\nSemantic matching models utilize similarity-based scoring functions. They measure the plausibility of triples by matching the latent semantics of entities and relations in a low-dimensional vector space.
Typical models of this type include RESCAL~, DistMult~, ANALOGY~, etc.", "id": "f41d2e3e-418d-40a1-96ae-40e2ba10a214", "level": "subsection", "origin_cites_number": 11, "parent_id": "ad0cb974-233d-49a2-87c6-a6386791ce72", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Applications" ], [ "subsection", "Knowledge Graphs" ] ], "subsections": [], "title": "Knowledge Graphs" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 178, 187 ], "content": "Classical problems such as traveling salesman problem (TSP) and minimum spanning tree (MST) have been solved by using different heuristic solutions. Recently, deep neural networks have been applied to these problems. Some solutions make further use of GNNs thanks to their structures. Bello et al.~ first proposed such kind of methods to solve TSP. Their method mainly contains two steps, i.e., a parameterized reward pointer network and a strategy gradient module for training. Khalil et al.~ improved this work with GNN and achieved better performance by two main procedures. First, they used structure2vec to achieve vertex embedding and then input them into Q-learning module for decision-making. This work also proves the embedding ability of GNN. Nowak et al. ~ focused on the secondary assignment problem, i.e., measuring the similarity of two graphs. The GNN model learns each graph's vertex embedding and uses the attention mechanism to match the two graphs. Other studies use GNNs directly as the classifiers, which can perform the intensive prediction on graphs with two sides. 
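The contrast between the two families can be made concrete with their scoring functions: TransE scores a triple (h, r, t) by the negative distance between h + r and t, while DistMult (a semantic matching model) uses a trilinear product. The toy embeddings below are hand-picked for illustration, not learned.

```python
import numpy as np

def transe_score(h, r, t):
    """Translational distance: plausible triples have h + r close to t."""
    return -float(np.linalg.norm(h + r - t))

def distmult_score(h, r, t):
    """Semantic matching: trilinear product <h, r, t>."""
    return float(np.sum(h * r * t))

h = np.array([0.2, 0.1])
r = np.array([0.3, -0.1])
t = np.array([0.5, 0.0])
print(transe_score(h, r, t))    # close to 0, since h + r is (almost) exactly t
print(distmult_score(h, r, t))
```

Training typically contrasts scores of observed triples against corrupted ones, e.g., with a margin-based ranking loss in TransE.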
The rest of the model facilitates diverse choices and effective training.", "id": "98e00854-a5c9-4846-b094-59c75cc496f6", "level": "subsection", "origin_cites_number": 3, "parent_id": "ad0cb974-233d-49a2-87c6-a6386791ce72", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Applications" ], [ "subsection", "Combinatorial Optimization" ] ], "subsections": [], "title": "Combinatorial Optimization" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:issues}\nIn this section, we briefly summarize several future research directions and open issues for graph learning.\n\\textbf{Dynamic Graph Learning}: For the purpose of graph learning, most existing algorithms are suitable for static networks without specific constraints. However, dynamic networks such as traffic networks vary over time. Therefore, they are hard to deal with. Dynamic graph learning algorithms have rarely been studied in the literature. It is of significant importance that dynamic graph learning algorithms are designed to maintain good performance, especially in the case of dynamic graphs.\n\\textbf{Generative Graph Learning}: Inspired by the generative adversarial networks, generative graph learning algorithms can unify the generative and discriminative models by playing a game-theoretical min-max game. This generative graph learning method can be used for link prediction, network evolution, and recommendation by boosting the performance of generative and discriminative models alternately and iteratively.\n\\textbf{Fair Graph Learning}: Most graph learning algorithms rely on deep neural networks, and the resulting vectors may have captured undesired sensitive information. 
Bias existing in the network may thus be reinforced, and hence it is of significant importance to integrate fairness metrics into graph learning algorithms to address the inherent bias issue.\n\textbf{Interpretability of Graph Learning}: Graph learning models are generally complex since they incorporate both graph structure and feature information. The interpretability of graph learning (based) algorithms remains unsolved since these models are still black boxes. For example, drug discovery can be achieved by graph learning algorithms, yet it is unknown how a drug is discovered or the reason behind its discovery. The interpretability of graph learning needs to be further studied.
Hussein Abbass at University of New South Wales, Yuchen Sun, Jiaying Liu, Hao Ren at Dalian University of Technology, and anonymous reviewers for their valuable comments and suggestions.\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\\bibliographystyle{IEEEtran}\n\\bibliography{GraphLearningSurvey}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{fx}}]{Feng Xia}\n(M'07$-$SM'12) received the B.Sc. and Ph.D. degrees from Zhejiang University, Hangzhou, China. He is currently an Associate Professor and Discipline Leader in School of Engineering, IT and Physical Sciences, Federation University Australia. Dr. Xia has published 2 books and over 300 scientific papers in international journals and conferences. His research interests include data science, computational intelligence, social computing, and systems engineering. He is a Senior Member of IEEE and ACM.\n\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{sk}}]{Ke Sun} received the B.Sc. and M.Sc. degrees from Shandong Normal University, Jinan, China. He is currently Ph.D. Candidate in Software Engineering at Dalian University of Technology, Dalian, China. His research interests include deep learning, network representation learning, and knowledge graph.\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{ys}}]{Shuo Yu}(M'20) received the B.Sc. and M.Sc. degrees from Shenyang University of Technology, China, and the Ph.D. degree from Dalian University of Technology, Dalian, China. She is currently a Post-Doctoral Research Fellow with the School of Software, Dalian University of Technology. She has published over 30 papers in ACM/IEEE conferences, journals, and magazines. 
Her research interests include network science, data\nscience, and computational social science.\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{aziz}}]{Abdul Aziz} received the Bachelor's degree in computer science from COMSATS Institute of Information Technology, Lahore Pakistan in 2013 and Master degree in Computer science from National University of Computer \\& Emerging Sciences Karachi in 2018. He is currently a PhD student at the Alpha Lab, Dalian University of Technology, China. His research interests include big data, information retrieval, graph learning, and social computing.\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{wlt}}]{Liangtian Wan}(M'15) received the B.S. degree and the Ph.D. degree from Harbin Engineering University, Harbin, China, in 2011 and 2015, respectively. From Oct. 2015 to Apr. 2017, he has been a Research Fellow at Nanyang Technological University, Singapore. He is currently an Associate Professor of School of Software, Dalian University of Technology, China. He is the author of over 70 papers. His current research interests include data science, big data and graph learning.\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{srp}}]{Shirui Pan} received a Ph.D. in computer science from the University of Technology Sydney (UTS), Australia. He is currently a lecturer with the Faculty of Information Technology, Monash University, Australia. His research interests include data mining and machine learning. Dr Pan has published over 60 research papers in top-tier journals and conferences.\\end{IEEEbiography}\n\\begin{IEEEbiography}[{\\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{hl}}]{Huan Liu}\n(F'12) received the B.Eng. degree in computer science and electrical engineering from Shanghai Jiaotong University and the Ph.D. 
degree in computer science from the University of Southern California. He is currently a Professor of computer science and engineering at Arizona State University. His research interests include data mining, machine learning, social computing, and artificial intelligence, investigating problems that arise in many real-world applications with high-dimensional data of disparate forms. His well-cited publications include books, book chapters, and encyclopedia entries and conference, and journal papers. He is a Fellow of IEEE, ACM, AAAI, and AAAS.\\end{IEEEbiography}\n\\end{document}", "id": "81c691a3-3630-4eef-92a6-3d901c8fb5ce", "level": "section", "origin_cites_number": 0, "parent_id": "64dae0de-0654-4c28-90d7-95856f2349e2", "prefix_titles": [ [ "title", "Graph Learning: A Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
9
[ 6092, 6093, 6094, 212, 6098, 6095, 318, 6096, 6100, 8975, 1010, 6099, 6097, 550, 553, 3344, 7233, 215, 9134, 219, 166, 244, 8499, 245, 6101, 6108, 6112, 6109, 6107, 6105, 6102, 6110, 6106, 6104, 6103, 6111, 6113, 6118, 6114, 8976, 6117, 6116, 5167, 6119, 8977, 6115, 6120, 8978, 218, 6121, 282, 7009, 6123, 7495, 6122, 3953, 1155, 1419, 1421, 242, 25, 7012, 225, 8313, 3966, 4001, 228, 7213, 23, 7212, 213, 216, 235, 6124, 8332, 252, 180, 38, 27, 229, 322, 257, 222, 6125, 259, 6983, 8312, 7011, 8410, 1157, 274, 6126, 4003, 1719, 1165, 6127, 1166, 178, 187 ]
1.672432
[ "Jingfeng Yang", "Hongye Jin", "Ruixiang Tang", "Xiaotian Han", "Qizhang Feng", "Haoming Jiang", "Bing Yin", "Xia Hu" ]
Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond
2023
2023-04-26T17:52:30Z
cs.CL
This paper presents a comprehensive and practical guide for practitioners and end-users working with Large Language Models (LLMs) in their downstream natural language processing (NLP) tasks. We provide discussions and insights into the usage of LLMs from the perspectives of models, data, and downstream tasks. Firstly, we offer an introduction and brief summary of current GPT- and BERT-style LLMs. Then, we discuss the influence of pre-training data, training data, and test data. Most importantly, we provide a detailed discussion about the use and non-use cases of large language models for various natural language processing tasks, such as knowledge-intensive tasks, traditional natural language understanding tasks, natural language generation tasks, emergent abilities, and considerations for specific tasks. We present various use cases and non-use cases to illustrate the practical applications and limitations of LLMs in real-world scenarios. We also try to understand the importance of data and the specific challenges associated with each NLP task. Furthermore, we explore the impact of spurious biases on LLMs and delve into other essential considerations, such as efficiency, cost, and latency, to ensure a comprehensive understanding of deploying LLMs in practice. This comprehensive guide aims to provide researchers and practitioners with valuable insights and best practices for working with LLMs, thereby enabling the successful implementation of these models in a wide range of NLP tasks. A curated list of practical guide resources of LLMs, regularly updated, can be found at \url{https://github.com/Mooler0410/LLMsPracticalGuide}.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "65198946-bcab-42d9-bc99-da3a8beb9233", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ] ], "subsections": [ "50506660-07a7-4316-9f4f-7eb13e615050", "5e50f17a-7d9a-4376-9534-28733a19fad8", "ecb8bb46-39ea-4696-8c8e-aa03c3c2e7c8", "8bfedb53-eb7f-45de-b964-352dab91a005", "ff91c03f-4bf0-4721-898f-5219db670483", "36ebc08b-941e-409b-9405-648aacfcf028" ], "title": "root" }, { "cite_extract_rate": 1, "cites": [ 1549, 8461, 1550 ], "content": "\\label{sec:intro}\nIn recent years, the rapid development of Large language Models has been revolutionizing the field of natural language processing~. These powerful models have shown great potential in addressing a variety of NLP tasks, ranging from natural language understanding~(NLU) to generation tasks, even paving the way to Artificial General Intelligence (AGI). However, utilizing these models effectively and efficiently requires a practical understanding of their capabilities and limitations, as well as the data and tasks involved in NLP. \nTo provide a guide for partitioners and end-users, this work focuses on the practical aspects of working with LLMs in downstream NLP tasks. \nThis guide aims to provide practical advice on why or why not to choose LLMs for a given task, as well as guidance on how to select the most suitable LLM, taking into account factors such as model sizes, computational requirements, and the availability of domain-specific pre-trained models.\nThis work offers a thorough understanding of LLMs from a practical perspective, therefore, empowers practitioners and end-users with the practical knowledge needed to successfully leverage the power of LLMs for their own NLP tasks.\nOur work is structured as follows. 
First, our work offers a brief introduction to LLMs by discussing the most important models, such as GPT-style and BERT-style architectures. \nThen, we delve into the critical factors that influence model performance from the data perspective, including pre-training data, training/tuning data, and test data. \nLast and most importantly, we dive deep into various concrete NLP tasks, offering insights into the applicability of LLMs for knowledge-intensive tasks, traditional NLU tasks, and generation tasks, along with the emergent abilities that these models possess and challenging real-world scenarios. We provide detailed examples to highlight both the successful use cases and the limitations of LLMs in practice.\nTo analyze the abilities of large language models, we compare them with fine-tuned models. \nAt present, there is no universally recognized definition for LLMs and fine-tuned models. With practical utility in mind, we adopt the following definitions in this article: LLMs are huge language models pretrained on large amounts of data without tuning on data for specific tasks; fine-tuned models are typically smaller language models which are also pretrained and then further tuned on a smaller, task-specific dataset to optimize their performance on that task\\footnote{From a practical standpoint, we consider models with fewer than 20B parameters to be fine-tuned models. While it is possible to fine-tune even larger models like PaLM (540B), in reality, it can be quite challenging, particularly for academic research labs and small teams. Fine-tuning a model with 3B parameters can still be a daunting task for many individuals or organizations.}. \nThis work summarizes the following main practical guides for using LLMs:\n\\begin{itemize}\n \\item \\textbf{Natural language understanding.} Employ the exceptional generalization ability of LLMs when facing out-of-distribution data or with very little training data. 
\n \\item \\textbf{Natural language generation.} Utilize LLMs' capabilities to create coherent, contextually relevant, and high-quality text for various applications.\n \\item \\textbf{Knowledge-intensive tasks.} Leverage the extensive knowledge stored in LLMs for tasks requiring domain-specific expertise or general world knowledge.\n \\item \\textbf{Reasoning ability.} Understand and harness the reasoning capabilities of LLMs to improve decision-making and problem-solving in various contexts.\n\\end{itemize}", "id": "50506660-07a7-4316-9f4f-7eb13e615050", "level": "section", "origin_cites_number": 3, "parent_id": "65198946-bcab-42d9-bc99-da3a8beb9233", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.8518518518518511, "cites": [ 679, 9115, 7460, 1552, 1150, 1558, 1551, 826, 9, 856, 8385, 1557, 8462, 1556, 1555, 1559, 1554, 7462, 7, 7461, 7463, 1553, 11 ], "content": "\\begin{figure}[tp]\n \\begin{adjustwidth}{-0.0cm}{}\n \\centering\n \\includegraphics[width=1.00\\textwidth]{./models-colorgrey-2.pdf}\n \\end{adjustwidth} \n \\caption{The evolutionary tree of modern LLMs traces the development of language models in recent years and highlights some of the most well-known models. Models on the same branch have closer relationships. Transformer-based models are shown in non-\\textcolor{YGrey}{grey} colors: decoder-only models in the \\textcolor{GreyBlue2}{blue} branch, encoder-only models in the \\textcolor{GreyPink2}{pink} branch, and encoder-decoder models in the \\textcolor{GreyGreen2}{green} branch. The vertical position of the models on the timeline represents their release dates. Open-source models are represented by solid squares, while closed-source models are represented by hollow ones. 
The stacked bar plot in the bottom right corner shows the number of models from various companies and institutions.}\\label{fig:tree}\n\\end{figure}\nThis section provides a brief introduction to state-of-the-art LLMs. These models differ in their training strategies, model architectures, and use cases. To provide a clearer understanding of the LLM landscape, we categorize them into two types: encoder-decoder or encoder-only language models and decoder-only language models. In Figure~\\ref{fig:tree}, we show the detailed evolution process of language models. From the evolutionary tree, we make the following interesting observations: \n\\begin{enumerate}[label=\\alph*)]\n \\item Decoder-only models have been gradually dominating the development of LLMs. In the early stages of LLM development, \\textcolor{GreyBlue}{decoder-only} models were not as popular as \\textcolor{GreyPink}{encoder-only} and \\textcolor{GreyGreen}{encoder-decoder} models. However, after 2021, with the introduction of the game-changing GPT-3, decoder-only models experienced a significant boom. Meanwhile, after the initial explosive growth brought about by BERT, encoder-only models gradually began to fade away.\n \\item OpenAI consistently maintains its leadership position in the LLM field, both currently and potentially in the future. Other companies and institutions are struggling to catch up with OpenAI in developing models comparable to GPT-3 and the current GPT-4. This leadership position may be attributed to OpenAI's steadfast commitment to its technical path, even when it was not widely acknowledged initially.\n \\item Meta contributes significantly to open-source LLMs and promotes research on LLMs. When considering contributions to the open-source community, particularly those related to LLMs, Meta stands out as one of the most generous commercial companies, as all the LLMs developed by Meta are open-sourced.\n \\item LLMs exhibit a tendency towards closed-sourcing. 
In the early stages of LLM development (before 2020), the majority of models were open-sourced. However, with the introduction of GPT-3, companies have increasingly opted to close-source their models, such as PaLM, LaMDA, and GPT-4. Consequently, it has become more difficult for academic researchers to conduct experiments on LLM training. As a result, API-based research could become the predominant method in the academic community.\n \\item Encoder-decoder models remain promising, as this type of architecture is still being actively explored, and most of them are open-sourced. Google has made substantial contributions to open-source encoder-decoder architectures. However, the flexibility and versatility of decoder-only models seem to make Google's insistence on this direction less promising.\n\\end{enumerate}\nWe also briefly summarize the characteristics and the representative LLMs of each type in Table~\\ref{tab:llms}.\n\\begin{table}[tp]\n\\centering\n\\caption{Summary of Large Language Models.}\n\\label{tab:llms}\n\\resizebox{1\\textwidth}{!}{\n \\begin{tabular}{c|rl|c}\n \\toprule\n \\multicolumn{1}{c|}{} &\\multicolumn{2}{c|}{\\textbf{Characteristic}} & \\multicolumn{1}{c}{\\textbf{LLMs}} \\\\ \\midrule\n \\multirow{5}{*}{Encoder-Decoder or Encoder-only} & & & \\multirow{4}{6cm}{ELMo~, BERT~, RoBERTa~, DistilBERT~, BioBERT~, XLM~, XLNet~, ALBERT~, ELECTRA~, T5~, GLM~, XLM-E~, ST-MoE~, AlexaTM~} \\\\ \n &Training: &Masked Language Models & \\\\\n &Model type: &Discriminative & \\\\\n (BERT-style) &Pretrain task:&Predict masked words & \\\\\n & & & \\\\\n \\midrule\n \\multirow{5}{*}{Decoder-only} & & & \\multirow{4}{6cm}{GPT-3~, OPT~, 
PaLM~, BLOOM~, MT-NLG~, GLaM~, Gopher~, Chinchilla~, LaMDA~, GPT-J~, LLaMA~, GPT-4~, BloombergGPT~} \\\\\n &Training: & Autoregressive Language Models & \\\\\n &Model type: &Generative & \\\\\n (GPT-style) &Pretrain task:&Predict next word & \\\\\n & & & \\\\\n \\bottomrule\n \\end{tabular}\n}\n\\end{table}", "id": "5e50f17a-7d9a-4376-9534-28733a19fad8", "level": "section", "origin_cites_number": 27, "parent_id": "65198946-bcab-42d9-bc99-da3a8beb9233", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for Models" ] ], "subsections": [ "64e27ad0-4c47-4785-a472-a13fd6fa2f43", "5356f595-71f3-4390-8b5e-38319883856a" ], "title": "Practical Guide for Models" }, { "cite_extract_rate": 1, "cites": [ 7, 826, 9 ], "content": "Natural language data is readily available, and unsupervised training paradigms have been proposed to better utilize extremely large datasets; this motivates the unsupervised learning of natural language. One common approach is to predict masked words in a sentence while considering the surrounding context. This training paradigm is known as the Masked Language Model. This type of training allows the model to develop a deeper understanding of the relationships between words and the context in which they are used. These models are trained on a large corpus of texts using techniques such as the Transformer architecture and have achieved state-of-the-art results in many NLP tasks, such as sentiment analysis and named entity recognition. Notable examples of Masked Language Models include BERT~, RoBERTa~, and T5~. 
MLMs have become an important tool in the field of natural language processing due to their success in a wide range of tasks.", "id": "64e27ad0-4c47-4785-a472-a13fd6fa2f43", "level": "subsection", "origin_cites_number": 3, "parent_id": "5e50f17a-7d9a-4376-9534-28733a19fad8", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for Models" ], [ "subsection", "BERT-style Language Models: Encoder-Decoder or Encoder-only" ] ], "subsections": [], "title": "BERT-style Language Models: Encoder-Decoder or Encoder-only" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 679, 1556, 7463, 7462, 1554 ], "content": "Although language models are typically task-agnostic in architecture, these methods require fine-tuning on datasets of the specific downstream task. Researchers found that scaling up language models significantly improves the few-shot, even zero-shot performance~. The most successful models for better few-shot and zero-shot performance are Autoregressive Language Models, which are trained by generating the next word in a sequence given the preceding words. These models have been widely used for downstream tasks such as text generation and question answering. Examples of Autoregressive Language Models include GPT-3~, OPT~, PaLM~, and BLOOM~. The game changer, GPT-3, for the first time, demonstrated reasonable few-/zero-shot performance via prompting and in-context learning, \nthus showing the superiority of autoregressive language models. There are also models such as Codex~ that are optimized for specific tasks such as code generation, and BloombergGPT~ for the financial domain. 
The recent breakthrough is ChatGPT, which refines GPT-3 specifically for conversational tasks, resulting in more interactive, coherent, and context-aware conversations for various real-world applications.", "id": "5356f595-71f3-4390-8b5e-38319883856a", "level": "subsection", "origin_cites_number": 6, "parent_id": "5e50f17a-7d9a-4376-9534-28733a19fad8", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for Models" ], [ "subsection", "GPT-style Language Models: Decoder-only" ] ], "subsections": [], "title": "GPT-style Language Models: Decoder-only" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, we discuss the critical role that data plays in selecting appropriate models for downstream tasks. The impact of data on the models' effectiveness starts during the pre-training stage and continues through to the training and inference stages.\n\\begin{applebox}{Remark 1}\n\\begin{enumerate}[leftmargin=0.4cm]\n\\item LLMs generalize better than fine-tuned models in downstream tasks facing out-of-distribution data, such as adversarial examples and domain shifts.\n\\item LLMs are preferable to fine-tuned models when working with limited annotated data, and both can be reasonable choices when abundant annotated data is available, depending on specific task requirements.\n\\item It's advisable to choose models pre-trained on fields of data that are similar to downstream tasks.\n\\end{enumerate}\n\\end{applebox}", "id": "ecb8bb46-39ea-4696-8c8e-aa03c3c2e7c8", "level": "section", "origin_cites_number": 0, "parent_id": "65198946-bcab-42d9-bc99-da3a8beb9233", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for Data" ] ], "subsections": [ "7cd2af4d-ab98-4dec-93b3-0692677a51d7", "c18800ff-43df-4c70-b530-3188cbb5488b", "4f7f7e54-1a01-4226-aeb4-46d63ceee443" ], "title": "Practical 
Guide for Data" }, { "cite_extract_rate": 0.8, "cites": [ 472, 1560, 7462, 1554 ], "content": "Pre-training data plays a pivotal role in the development of large language models. As the foundation of the remarkable capabilities of LLMs, the quality, quantity, and diversity of pre-training data influence the performance of LLMs significantly~. The commonly used pretraining data consists of a myriad of text sources, including books, articles, and websites. The data is carefully curated to ensure a comprehensive representation of human knowledge, linguistic nuances, and cultural perspectives. The importance of pretraining data lies in its capacity to inform the language model with a rich understanding of word knowledge, grammar, syntax, and semantics, as well as the ability to recognize context and generate coherent responses.\nThe diversity of pretraining data also plays a crucial role in shaping the model's performance, and the selection of LLMs highly depends on the components of the pretraining data. For example, PaLM~ and BLOOM~ excel in multilingual tasks and machine translation with an abundance of multilingual pretraining data. Moreover, PaLM's performance in Question Answering tasks is enhanced by incorporating a considerable amount of social media conversations and the Books corpus~. Likewise, code execution and code completion capabilities of GPT-3.5~(code-davinci-002) are amplified by the integration of code data in its pretraining dataset. 
In brief, when selecting LLMs for downstream tasks, it is advisable to choose the model pre-trained on a similar field of data.", "id": "7cd2af4d-ab98-4dec-93b3-0692677a51d7", "level": "subsection", "origin_cites_number": 5, "parent_id": "ecb8bb46-39ea-4696-8c8e-aa03c3c2e7c8", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for Data" ], [ "subsection", "Pretraining data" ] ], "subsections": [], "title": "Pretraining data" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 679, 1563, 1562, 8463, 1561 ], "content": "When deploying a model for downstream tasks, it is essential to consider three primary scenarios based on the availability of annotated data: zero, few, and abundant. In this section, we provide a succinct overview of the appropriate models to employ for each scenario.\n\\noindent\\textbf{Zero annotated data}: \nIn scenarios where annotated data is unavailable, utilizing LLMs in a zero-shot setting proves to be the most suitable approach. LLMs have been shown to outperform previous zero-shot methods~. Additionally, the absence of a parameter update process ensures that catastrophic forgetting is avoided since the language model parameters remain unaltered.\n\\noindent\\textbf{Few annotated data}: In this case, the few-shot examples are directly incorporated in the input prompt of LLMs, known as in-context learning, and these examples can effectively guide LLMs to generalize to the task. As reported in~, one-shot and few-shot performance shows significant gains, even matching the performance of the SOTA fine-tuned open-domain models. LLMs' zero/few-shot ability can be further improved by scaling~. \n Alternatively, some few-shot learning methods have been developed to enhance fine-tuned models, such as meta-learning or transfer learning~. However, their performance might be inferior to that of LLMs due to fine-tuned models' smaller scale and overfitting. 
\n\\noindent\\textbf{Abundant annotated data}: With a substantial amount of annotated data for a particular task available, both fine-tuned models and LLMs can be considered. In most cases, fine-tuning the model can fit the data well. However, LLMs can still be used to meet some constraints such as privacy~. In this scenario, the choice between using a fine-tuned model or an LLM is task-specific and also depends on many factors, including desired performance, computational resources, and deployment constraints. \nIn brief: LLMs are more versatile w.r.t. the data availability, while fine-tuned models can be considered with abundant annotated data.", "id": "c18800ff-43df-4c70-b530-3188cbb5488b", "level": "subsection", "origin_cites_number": 6, "parent_id": "ecb8bb46-39ea-4696-8c8e-aa03c3c2e7c8", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for Data" ], [ "subsection", "Finetuning data" ] ], "subsections": [], "title": "Finetuning data" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 364, 1564, 7057, 7464 ], "content": "\\label{sec:test_data}\nWhen deploying LLMs for downstream tasks, we often face challenges stemming from distributional differences between the test/user data\nand the training data. These disparities may encompass domain shifts~, out-of-distribution variations~, or even adversarial examples~. \nSuch challenges significantly hinder fine-tuned models' effectiveness in real-world applications. Fine-tuned models fit a specific distribution and have a poor ability to generalize to OOD data. However, LLMs perform quite well in such scenarios because they do not have an explicit fitting process.\nMoreover, recent advancements have further enhanced the ability of language models in this regard.\nThe Reinforcement Learning from Human Feedback (RLHF) method has notably enhanced LLMs' generalization capabilities~. 
For example, InstructGPT demonstrates proficiency in following various instructions for a wide range of tasks and occasionally complying with instructions in different languages, even though such instructions are scarce. \nSimilarly, ChatGPT exhibits consistent advantages on most adversarial and out-of-distribution (OOD) classification and translation tasks~. Its superiority in understanding dialogue-related texts led to an impressive performance on the DDXPlus dataset~, a medical diagnosis dataset designed for OOD evaluation.\n\\footnotetext{As we mention in Section \\ref{sec:intro}, LLMs are pretrained on large and diverse datasets without fine-tuning, while fine-tuned models are typically pretrained on a large dataset and then further trained on a smaller, task-specific dataset to optimize their performance on that task.}", "id": "4f7f7e54-1a01-4226-aeb4-46d63ceee443", "level": "subsection", "origin_cites_number": 6, "parent_id": "ecb8bb46-39ea-4696-8c8e-aa03c3c2e7c8", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for Data" ], [ "subsection", "Test data/user data " ] ], "subsections": [], "title": "Test data/user data " }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, we discuss in detail the use cases and non-use cases for LLMs in various downstream NLP tasks and the corresponding model abilities. In Figure~\\ref{fig:decision}, we summarize all discussions into a decision flow. 
It can serve as a quick guide when facing a new task.", "id": "8bfedb53-eb7f-45de-b964-352dab91a005", "level": "section", "origin_cites_number": 0, "parent_id": "65198946-bcab-42d9-bc99-da3a8beb9233", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for NLP Tasks" ] ], "subsections": [ "8d97a985-4400-4b64-81e8-92641bd467a1", "dc0358fa-c709-43bf-add5-35e70a95583a", "53896e85-9b59-4292-8c97-3e89d6904421", "d91bd4a1-b1bc-4161-af0b-784b2df163c2", "5fe8f0e4-6376-4a8f-bc4b-9868ac9f320c", "1371d8e3-f812-4bcf-b123-202530b09d42" ], "title": "Practical Guide for NLP Tasks" }, { "cite_extract_rate": 0, "cites": [], "content": "Traditional NLU tasks are fundamental tasks in NLP, including text classification, named entity recognition~(NER), entailment prediction, and so on. Many of them are designed to serve as intermediate steps in larger AI systems, such as NER for knowledge graph construction. \n\\begin{applebox}{Remark 2}\n Fine-tuned models are generally a better choice than LLMs for traditional NLU tasks, but LLMs can help when strong generalization ability is required.\n\\end{applebox}\n\\begin{figure}[tp]\n \\centering\n \\includegraphics[width=\\textwidth]{./decision.pdf}\n \\caption{The decision flow for choosing LLMs or fine-tuned models~\\protect\\footnotemark for user's NLP applications. The decision flow helps users assess whether their downstream NLP applications at hand meet specific conditions and, based on that evaluation, determine whether LLMs or fine-tuned models are the most suitable choice for their applications. During the decision process in the figure, $\\vcenter{\\hbox{\\includegraphics[scale=0.35]{./Y.pdf}}}$ means meeting the condition, and $\\vcenter{\\hbox{\\includegraphics[scale=0.35]{./N.pdf}}}$ means not meeting the condition. 
The yellow circle for $\\vcenter{\\hbox{\\includegraphics[scale=0.35]{./Y.pdf}}}$ of the last condition means that no model works well on this kind of application.}\\label{fig:decision} \n\\end{figure}", "id": "8d97a985-4400-4b64-81e8-92641bd467a1", "level": "subsection", "origin_cites_number": 0, "parent_id": "8bfedb53-eb7f-45de-b964-352dab91a005", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for NLP Tasks" ], [ "subsection", "Traditional NLU tasks" ] ], "subsections": [ "1b7558df-4da4-47a6-adf9-d64950daf5a5", "a45a154b-7a13-4553-aec0-d2f95ef46a1c" ], "title": "Traditional NLU tasks" }, { "cite_extract_rate": 0.8823529411764701, "cites": [ 1569, 1570, 1174, 8465, 1571, 1566, 9, 8464, 1568, 1098, 1145, 1565, 446, 1567, 1554 ], "content": "In most natural language understanding tasks, such as tasks in GLUE and SuperGLUE, fine-tuned models still have better performance if such tasks come with rich, well-annotated data and contain very few out-of-distribution examples in the test sets. For different tasks and datasets, the gap between small fine-tuned models and LLMs varies. \nIn text classification, on most datasets, LLMs perform slightly worse than fine-tuned models. \nFor sentiment analysis, such as on IMDB~ and SST~, fine-tuned models and LLMs perform equally well. For toxicity detection, which is another iconic text classification task, the gap is much larger. No LLM performs well on this task, and on CivilComments~ even the best one is only better than random guessing~. On the other hand, most popular fine-tuned models can obtain much better performance~, \nand the Perspective API~\\footnote{https://perspectiveapi.com} is still one of the best for detecting toxicity. 
This API is powered by a multilingual BERT-based model, which is tuned on publicly available toxicity data, \nand several smaller single-language CNNs distilled from this model.\nThis might be due to the fact that toxicity is defined by subtle nuances in linguistic expressions, and large language models are unable to accurately comprehend this task solely based on the provided input. \nThe trend of performance gaps is similar in some other tasks. For natural language inference~(NLI) tasks, on most datasets, such as on RTE~ and SNLI~, fine-tuned models perform better than LLMs, while on some datasets such as CB~, LLMs have obtained comparable performance to fine-tuned models~. For question answering~(QA), on SQuADv2~, QuAC~ and many other datasets, fine-tuned models have superior performance, while on CoQA~, LLMs perform as well as fine-tuned models~. \nIn information retrieval~(IR) tasks, LLMs are not widely exploited yet. One major reason is that IR tasks are fundamentally different from others. There is no natural way to transform the thousands of candidate texts into the few-/zero-shot form required by LLMs. The existing evaluation results on MS MARCO (regular/TREC)~ show that methods based on fine-tuned models have better performance~. In this evaluation, the LLMs rank passages in an unorthodox way, which requires the LLMs to produce probabilities for passages one by one. \nFor some low-level intermediate tasks, which are not intended for regular users but rather for high-level tasks, such as named entity recognition~(NER) and dependency parsing, there are not enough results from LLMs, because most current evaluations of LLMs focus on practical tasks. According to available evaluation results, for the NER task, CoNLL03~ is still a challenge for LLMs~, where the performance of fine-tuned models is around twice that of LLMs. 
These intermediate tasks may vanish soon because LLMs can take over high-level tasks without the help of those intermediate tasks~(e.g. dependency parsing for coding tasks; NER for some text generation tasks). \nIn brief, for most traditional NLU tasks, a fine-tuned model is a better choice in terms of the performance on benchmark datasets and the computational cost. The scale of LLMs is usually $10\\times$ or even $100\\times$ larger than that of fine-tuned models. One possible cause for the inferior performance of LLMs on certain tasks can be the design of instructions/prompts. Transforming input from tasks like IR and sentence labeling into a few-/zero-shot instruction form is non-trivial. There may be better ways to adapt language models to traditional NLP tasks in the future. On the other hand, the upper limit of capabilities of fine-tuned models has not been reached, and some methods like FLAN-tuning~ can further boost the performance on NLU tasks. Another interesting finding is that on NLU tasks, after fine-tuning, masked language models, like T5, are better than most auto-regressive language models at the same scale, while some recent results imply that this gap can be bridged by scaling.", "id": "1b7558df-4da4-47a6-adf9-d64950daf5a5", "level": "subsubsection", "origin_cites_number": 17, "parent_id": "8d97a985-4400-4b64-81e8-92641bd467a1", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for NLP Tasks" ], [ "subsection", "Traditional NLU tasks" ], [ "subsubsection", "No use case" ] ], "subsections": [], "title": "No use case" }, { "cite_extract_rate": 1, "cites": [ 1570, 8466 ], "content": "However, there are still some NLU tasks suitable for LLMs.\nOne of the representative tasks is miscellaneous text classification~. 
In contrast to classic domain-specific text classification tasks such as sentiment analysis, miscellaneous text classification deals with a diverse range of topics and categories that may not have a clear or strong relationship with one another. It is closer to real-world cases and hard to format for fine-tuned models.\nAnother is the Adversarial NLI~(ANLI). It is a difficult dataset composed of adversarially mined natural language inference questions in three rounds (R1, R2, and R3). LLMs have shown superior performance on ANLI, especially on R2 and R3. Both examples demonstrate the exceptional ability of LLMs to generalize well on out-of-distribution and sparsely annotated data in traditional NLP tasks, surpassing that of fine-tuned models. We discussed this in Section~\\ref{sec:test_data} above.", "id": "a45a154b-7a13-4553-aec0-d2f95ef46a1c", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "8d97a985-4400-4b64-81e8-92641bd467a1", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for NLP Tasks" ], [ "subsection", "Traditional NLU tasks" ], [ "subsubsection", "Use case" ] ], "subsections": [], "title": "Use case" }, { "cite_extract_rate": 0, "cites": [], "content": "Natural Language Generation broadly encompasses two major categories of tasks, with the goal of creating coherent, meaningful, and contextually appropriate sequences of symbols. The first type focuses on converting input texts into new symbol sequences, as exemplified by tasks like paragraph summarization and machine translation. The second type, \"open-ended\" generation, aims to generate text or symbols from scratch to accurately match input descriptions such as crafting emails, composing news articles, creating fictional stories, and writing code. 
\n\\begin{applebox}{Remark 3}\n Due to their strong generation ability and creativity, LLMs show superiority at most generation tasks. \n\\end{applebox}", "id": "dc0358fa-c709-43bf-add5-35e70a95583a", "level": "subsection", "origin_cites_number": 0, "parent_id": "8bfedb53-eb7f-45de-b964-352dab91a005", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for NLP Tasks" ], [ "subsection", "Generation tasks" ] ], "subsections": [ "df2b931d-26af-45d4-8e32-1713e5a9975d", "38534ecc-2634-4ce7-bc5a-773915850229" ], "title": "Generation tasks" }, { "cite_extract_rate": 0.777777777777777, "cites": [ 1574, 679, 8433, 1575, 1572, 1109, 1576, 7465, 1573, 9115, 8467, 7463, 1554, 7462 ], "content": "Generation tasks require models to have a comprehensive understanding of the input contents or requirements and a certain level of creativity. This is what LLMs excel at. \nFor summarization tasks, although LLMs do not have an obvious advantage over fine-tuned models under traditional automatic evaluation metrics, such as ROUGE~, human evaluation results indicate that humans tend to prefer the results generated by LLMs~ compared to those of fine-tuned models. For example, on CNN/DailyMail~ and XSUM~, fine-tuned models like Brio~ and Pegasus~ have much better performance than any LLMs w.r.t. ROUGE, but LLMs like OPT~ perform far better in human evaluation considering all aspects including faithfulness, coherence, and relevance~. This demonstrates the superiority of LLMs in summarization tasks. On the other hand, it implies that either current summarization benchmarks do not contain high-quality summaries or the automatic metrics are not suitable for evaluating summarization.\nIn machine translation~(MT), LLMs can perform competent translation, although the average performance is slightly worse than some commercial translation tools~ considering some automatic metrics like BLEU. 
LLMs are particularly good at translating low-resource language texts into English. For example, in the Romanian-English translation task of WMT'16~, zero-shot or few-shot LLMs can perform better than the SOTA fine-tuned model. This is mainly due to the fact that English resources compose the main part of the pre-training data. BLOOM~ is pre-trained on more multi-lingual data, leading to better translation quality in both rich-resource and low-resource translation. Another interesting finding is that BLOOM achieves good translation quality among Romance languages, even for translation from Galician, which is not included in the pre-training data. One reasonable explanation is that texts from some languages in the same language group can help the LLMs learn more from the similarity. \nIf more multi-lingual texts can be added to the pre-training data, the translation capability may be improved further. \n Additionally, LLMs are highly skilled in open-ended generation. One example is that news articles generated by LLMs are almost indistinguishable from real news articles to human readers~.\n LLMs are remarkably adept at code synthesis as well. Either for text-code generation, such as HumanEval~ and MBPP~, or for code repairing, such as DeepFix~, LLMs can perform quite well. GPT-4 can even solve 25\\% of LeetCode problems, which are non-trivial for most human coders~. With training on more code data, the coding capability of LLMs can be improved further~. 
Although LLMs perform well on such tasks, the code they generate should still be tested carefully to catch any subtle bugs, which is one of the main challenges in applying LLMs to code synthesis.", "id": "df2b931d-26af-45d4-8e32-1713e5a9975d", "level": "subsubsection", "origin_cites_number": 18, "parent_id": "dc0358fa-c709-43bf-add5-35e70a95583a", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for NLP Tasks" ], [ "subsection", "Generation tasks" ], [ "subsubsection", "Use case" ] ], "subsections": [], "title": "Use case" }, { "cite_extract_rate": 1, "cites": [ 1576, 1554, 7462 ], "content": "Fine-tuned models, such as DeltaLM+Zcode~, still perform best on most rich-resource translation and extremely low-resource translation tasks. In rich-resource machine translation, fine-tuned models slightly outperform LLMs~. And in extremely low-resource machine translation, such as English-Kazakh translation, fine-tuned models perform significantly better than LLMs.", "id": "38534ecc-2634-4ce7-bc5a-773915850229", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "dc0358fa-c709-43bf-add5-35e70a95583a", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for NLP Tasks" ], [ "subsection", "Generation tasks" ], [ "subsubsection", "No use case" ] ], "subsections": [], "title": "No use case" }, { "cite_extract_rate": 0, "cites": [], "content": "Knowledge-intensive NLP tasks refer to a category of tasks that have a strong reliance on background knowledge, domain-specific expertise, or general real-world knowledge. These tasks go beyond simple pattern recognition or syntax analysis. 
Instead, they are highly dependent on memorization and proper utilization of knowledge about specific entities, events, and common sense of our real world.\n\begin{applebox}{Remark 4}\n \begin{enumerate}[leftmargin=0.4cm]\n \item LLMs excel at knowledge-intensive tasks due to their massive real-world knowledge.\n \item LLMs struggle when the knowledge requirements do not match their learned knowledge, or when they face tasks that only require contextual knowledge, in which case fine-tuned models can work as well as LLMs.\n \end{enumerate}\n\end{applebox}", "id": "53896e85-9b59-4292-8c97-3e89d6904421", "level": "subsection", "origin_cites_number": 0, "parent_id": "8bfedb53-eb7f-45de-b964-352dab91a005", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for NLP Tasks" ], [ "subsection", "Knowledge-intensive tasks" ] ], "subsections": [ "4d1dc079-bbff-4662-91db-5d057b42e5c3", "6fe6c7af-dbdf-4ef9-af06-9fb96da0f351" ], "title": "Knowledge-intensive tasks" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 451, 440, 9115, 7466, 1554 ], "content": "In general, with billions of training tokens and parameters, LLMs have much more real-world knowledge than fine-tuned models. \nClosed-book question-answering tasks require the model to answer a given question about factual knowledge without any external information. It does require the memorization of real-world knowledge in the model. LLMs perform better on nearly all datasets, such as on NaturalQuestions~, WebQuestions~, and TriviaQA~. On TriviaQA, even zero-shot LLMs are still much better~. \nThe massive multitask language understanding~(MMLU)~ is also highly knowledge-intensive. It contains multiple-choice questions spanning over 57 different subjects and requires general knowledge of the model. 
It's pretty challenging even for LLMs, although the newly released GPT-4~ outperforms existing models by a considerable margin in English with a satisfactory 86.5\% accuracy.\nAlso, some tasks in Big-bench, which are designed to probe LLMs and extrapolate their future capabilities, rely heavily on the memorization of real-world knowledge. In such tasks, the performance of some LLMs is better than the average level of humans, and even comparable to the best human performance. For example, the task \textit{Hindu\_knowledge} requires models to give facts about Hindu mythology, \textit{Periodic Elements} requires the capability of predicting the element name from the periodic table, and \textit{Physics} tests the physics knowledge of models by asking for the formula needed to solve a given physics problem.", "id": "4d1dc079-bbff-4662-91db-5d057b42e5c3", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "53896e85-9b59-4292-8c97-3e89d6904421", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for NLP Tasks" ], [ "subsection", "Knowledge-intensive tasks" ], [ "subsubsection", "Use case" ] ], "subsections": [], "title": "Use case" }, { "cite_extract_rate": 0, "cites": [], "content": "There are some other tasks requiring knowledge that differs from the real-world knowledge learned by LLMs. In such tasks, LLMs are not notably superior.\nSome tasks only require the model to capture the self-contained knowledge in the contexts. The knowledge in the contexts from the input is enough for the model to make predictions. For these tasks, small fine-tuned models can work pretty well. One such task is machine reading comprehension~(MRC). An MRC task provides several paragraphs and requires the model to predict the answer to questions based on these paragraphs. 
We've discussed MRC in the previous section because it's also a traditional NLU task. \nAnother scenario is that the knowledge within LLMs about the real world is useless to the task, or even the required knowledge is counterfactual to the real world. As a result, the LLMs cannot work well on such tasks. In some cases, inconsistent knowledge may even make the LLMs worse than random guessing. For example, in Big-Bench, the MNIST ASCII task requires the model to identify the digit represented by ASCII art. The capability required by this task has nothing to do with real-world knowledge. Also, in the Inverse Scaling Phenomenon competition~, the task \textit{redefine math} redefines a common symbol and requires the model to choose between the original meaning and the meaning derived from the redefinition. What it requires conflicts with the LLMs' knowledge, and thus LLMs even perform worse than random guessing. \nAs an alternative to real-world knowledge in LLMs, access to extra knowledge is allowed, and models can thus get enough knowledge for a task via retrieval augmentation. The basic idea of retrieval augmentation is to add an extra information retrieval step prior to making predictions, in which some useful texts related to the task will be retrieved from a large corpus. Then, the model will make predictions based on both the input contexts and the retrieved texts. With retrieved additional information, the closed-book task can become "open-book". In such a scenario, fine-tuned models are pretty good with much smaller sizes, because the required knowledge can be obtained by retrieval. 
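The retrieve-then-read pipeline described above can be sketched in a few lines. The corpus, scoring function, and prompt format below are hypothetical stand-ins: real systems use BM25 or dense retrievers over large corpora (e.g., Wikipedia) and feed the assembled prompt to a reader model.

```python
import math

# Toy corpus standing in for an external knowledge source (illustrative content only).
CORPUS = [
    "Paris is the capital and most populous city of France.",
    "The Amazon rainforest produces roughly 20 percent of Earth's oxygen.",
    "Mount Everest is the highest mountain above sea level.",
]

def score(query: str, doc: str) -> float:
    """Simple lexical-overlap score; real retrievers use BM25 or dense embeddings."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / math.sqrt(len(d))

def retrieve(query: str, k: int = 1) -> list:
    """Step 1: fetch the k most relevant passages from the corpus."""
    return sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)[:k]

def answer(query: str) -> str:
    """Step 2: condition the reader on the retrieved context plus the question."""
    context = " ".join(retrieve(query))
    prompt = f"Context: {context}\nQuestion: {query}\nAnswer:"
    return prompt  # a real system would feed this prompt to a (possibly small) reader model

print(retrieve("What is the capital of France?")[0])
```

Because the required knowledge now arrives in the prompt rather than in the parameters, a much smaller fine-tuned reader can match an LLM on such "open-book" tasks, as the NaturalQuestions example below illustrates.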
For example, on NaturalQuestions~, with extra corpus, retrieval augmented models~ are much better than any other methods.", "id": "6fe6c7af-dbdf-4ef9-af06-9fb96da0f351", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "53896e85-9b59-4292-8c97-3e89d6904421", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for NLP Tasks" ], [ "subsection", "Knowledge-intensive tasks" ], [ "subsubsection", "No use case" ] ], "subsections": [], "title": "No use case" }, { "cite_extract_rate": 1, "cites": [ 472, 7460 ], "content": "Scaling of LLMs~(e.g. parameters, training computation, etc.) can greatly empower pretrained language models. With the model scaling up, a model generally becomes more capable in a range of tasks. Reflected in some metrics, the performance shows a power-law relationship with the model scale. For example, the cross-entropy loss which is used to measure the performance for language modeling decreases linearly with the exponential increase in the model scale, which is also called 'scaling-law'~. For some crucial abilities, such as reasoning, scaling the model has gradually transformed these abilities from a very low state to a usable state, and even approaching human capabilities. In this section, we provide an overview of the usage of LLMs in terms of the abilities and behaviors of LLMs along with scaling.\n\\begin{applebox}{Remark 5}\n\\begin{enumerate}[leftmargin=0.4cm]\n\\item With the exponential increase of model scales, LLMs become especially capable of reasoning like arithmetic reasoning and commonsense reasoning.\n\\item Emergent abilities become serendipity for uses that arise as LLMs scale up, such as ability in word manipulation and logical ability.\n\\item In many cases, performance does not steadily improve with scaling due to the limited understanding of how large language models' abilities change as they scale up. 
\n\\end{enumerate}\n\\end{applebox}", "id": "d91bd4a1-b1bc-4161-af0b-784b2df163c2", "level": "subsection", "origin_cites_number": 2, "parent_id": "8bfedb53-eb7f-45de-b964-352dab91a005", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for NLP Tasks" ], [ "subsection", "Abilities Regarding Scaling" ] ], "subsections": [ "8bd46f70-2108-4b07-bbf3-03ca764ca7bc", "2c1ba13d-9548-4fa8-831d-01c8e52af880", "755f4ab5-f9fb-4404-acd4-e549da46cd2e" ], "title": "Abilities Regarding Scaling" }, { "cite_extract_rate": 1, "cites": [ 1578, 679, 1580, 444, 1579, 458, 9115, 443, 1577 ], "content": "Reasoning, which involves making sense of information, drawing inferences, and making decisions, is one of the essential aspects of human intelligence. It is challenging for NLP. Many existing reasoning tasks can be classified into commonsense reasoning and arithmetic reasoning.\n\\noindent\\textbf{Arithmetic reasoning/problem solving}. \nThe arithmetic reasoning capability of LLMs benefits greatly from the scaling of model size. For GPT-3, the ability of two-digit addition only becomes apparent when the number of parameters exceeds 13B~. Tasks to test arithmetic reasoning are trivial for humans \nand designed to challenge the capability of transferring natural language into mathematical symbols and multi-step inference. On GSM8k~, SVAMP~ and AQuA~, LLMs, as generalists, have competitive performance with most methods which have task-specific designs. And GPT-4 overperforms any other methods~, even some huge models particularly tuned for arithmetic problems~. Nevertheless, it should be noted that, without the intervention of external tools, LLMs may occasionally make mistakes in performing basic calculations, although chain-of-thought~(CoT) prompting~ can significantly improve LLMs' ability in calculations. \n\\noindent\\textbf{Commonsense reasoning}. 
Commonsense reasoning not only requires LLMs to remember factual knowledge but also requires LLMs to do several inference steps about the facts. Commonsense reasoning ability increases gradually with the growth of model size. Compared to fine-tuned models, LLMs retain their superiority on most datasets, such as StrategyQA~ and ARC-C~. Especially on ARC-C, which contains difficult questions in science exams from grade 3 to grade 9, GPT-4 achieves accuracy close to 100\%~(96.3\%)~.", "id": "8bd46f70-2108-4b07-bbf3-03ca764ca7bc", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "d91bd4a1-b1bc-4161-af0b-784b2df163c2", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for NLP Tasks" ], [ "subsection", "Abilities Regarding Scaling" ], [ "subsubsection", "Use Case with Reasoning" ] ], "subsections": [], "title": "Use Case with Reasoning" }, { "cite_extract_rate": 1, "cites": [ 679, 1554 ], "content": "Scaling of models also endows the model with some unprecedented, fantastic abilities that go beyond the power-law rule. These abilities are called "emergent abilities". As defined in , \textit{emergent abilities of LLMs are abilities that are not present in smaller-scale models but are present in large-scale models}. This means such abilities cannot be predicted by extrapolating the performance improvements on smaller-scale models and the model suddenly gains good performance on some tasks once the scale exceeds a certain range. The emergent ability is typically unpredictable and surprising, leading to tasks that emerge randomly or unexpectedly. We examine concrete examples of the emergent abilities of LLMs and provide them as an important reference for deciding whether to leverage LLMs' emergent abilities.\nHandling word manipulation is a typical emergent ability. 
It refers to the ability to learn symbolic manipulations, such as the reversed words~, in which the model is given a word spelled backwards, and must output the original word. For example, GPT-3~ shows the emergent ability on word sorting and word unscrambling tasks. PaLM~ exhibits the emergent ability on ASCII word recognition~\footnote{Asking models to identify the word displayed as ASCII art, https://github.com/google/BIG-bench/tree/main/bigbench/benchmark\_tasks/ascii\_word\_recognition} and hyperbaton~\footnote{Asking models to choose the English sentence with adjectives in the "correct" order within two choices, https://github.com/google/BIG-bench/tree/main/bigbench/benchmark\_tasks/hyperbaton} tasks. \nThe logical abilities of language models tend to emerge as the model scales up, such as logical deduction, logical sequence, and logic grid puzzles. Additionally, other tasks, such as advanced coding (e.g., auto debugging, code line description), and concept understanding (e.g., novel concepts, simple Turing concepts), are also use cases with the emergent abilities of large language models.", "id": "2c1ba13d-9548-4fa8-831d-01c8e52af880", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "d91bd4a1-b1bc-4161-af0b-784b2df163c2", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for NLP Tasks" ], [ "subsection", "Abilities Regarding Scaling" ], [ "subsubsection", "Use Cases with Emergent Abilities" ] ], "subsections": [], "title": "Use Cases with Emergent Abilities" }, { "cite_extract_rate": 1, "cites": [ 9115, 1581 ], "content": "Although in most cases, as discussed above, larger models bring better performance, there are still many exceptions that should be considered when choosing the appropriate model. 
\nOn certain tasks, as the size of LLMs increases, performance begins to decrease, such as Redefine-math: tests whether language models are able to work with common symbols when they are redefined to mean something else; Into-the-unknown: requires the model to choose which piece of information would help answer a question; Memo-trap: asks an LM to write a phrase in a way that starts like a famous quote but ends differently\footnote{More such tasks include: modus-tollens, pattern-matching-suppression, prompt-injection, repetitive-algebra and sig-figs. You can check them on: https://github.com/inverse-scaling/prize}. This is also called the \textit{Inverse Scaling Phenomenon}.\nAnother interesting phenomenon observed in the scaling of LLMs is called the \textit{U-shaped Phenomenon}~. As the name implies, this phenomenon refers to the observation that, as LLM size increases, performance on certain tasks initially improves but then starts to decline before eventually improving again, as on tasks such as: Hindsight-neglect: it tests whether language models are able to assess whether a bet was worth taking based on its expected value; NegationQA: this task takes an existing multiple-choice dataset and negates a part of each question to see if language models are sensitive to negation; Quote-repetition: it asks models to repeat back sentences given in the prompt, with few-shot examples to help it recognize the task. \nHence the risk of diminishing performance should be noted, and if the task is similar to those we just discussed, careful consideration should be given to whether or not to use huge LLMs. \nGaining a deeper understanding of emergent abilities, the inverse scaling phenomenon, and the U-shape phenomenon in LLMs is essential for advancing research in this field. In a certain sense, the U-shape phenomenon suggests that small-scale models and huge-scale models make predictions with different internal mechanisms. 
From this perspective, the U-shape phenomenon can be seen as a transformation of the inverse-scaling phenomenon due to some emergent abilities from sufficiently large models~. GPT-4~ exhibits a reversal of the inverse scaling phenomenon in some cases, such as on a task called Hindsight Neglect. The explanation for these behaviors of LLMs during scaling is still an open problem. Several hypotheses have been proposed. For emergent abilities, one explanation is that there may be multiple key steps for a task and the LLM cannot handle this task until it's large enough to handle every step, and another explanation is focused on the granularity of evaluation metrics~. For the inverse-scaling and U-shape phenomena, the explanations mainly focus on the model's over-reliance on information from its prior rather than the input prompts, valid but misleading few-shot examples, and distracting easier tasks within a hard task~.", "id": "755f4ab5-f9fb-4404-acd4-e549da46cd2e", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "d91bd4a1-b1bc-4161-af0b-784b2df163c2", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for NLP Tasks" ], [ "subsection", "Abilities Regarding Scaling" ], [ "subsubsection", "No-Use Cases and Understanding" ] ], "subsections": [], "title": "No-Use Cases and Understanding" }, { "cite_extract_rate": 0, "cites": [], "content": "This section explores miscellaneous tasks that were not covered in the previous discussions, to better understand LLMs' strengths and weaknesses. \n\begin{applebox}{Remark 6}\n\begin{enumerate}[leftmargin=0.4cm]\n \item Fine-tuned models or specified models still have their space in tasks that are far from LLMs' pretraining objectives and data.\n \item LLMs are excellent at mimicking humans, data annotation, and generation. 
They can also be used for quality evaluation in NLP tasks and have bonuses like interpretability.\n\end{enumerate}\n\end{applebox}", "id": "5fe8f0e4-6376-4a8f-bc4b-9868ac9f320c", "level": "subsection", "origin_cites_number": 0, "parent_id": "8bfedb53-eb7f-45de-b964-352dab91a005", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for NLP Tasks" ], [ "subsection", "Miscellaneous tasks" ] ], "subsections": [ "40ab9810-5556-4e41-9750-0e49b3f51ab4", "85fadad0-1368-4d30-9626-74f1ecc1c17b" ], "title": "Miscellaneous tasks" }, { "cite_extract_rate": 1, "cites": [ 7467, 9115, 1582, 1583 ], "content": "LLMs generally struggle with some tasks due to differences in objectives and training data.\nAlthough LLMs have achieved remarkable success in various natural language processing tasks, their performance in regression tasks has been less impressive. For example, ChatGPT's performance on the GLUE STS-B dataset, which is a regression task evaluating sentence similarity, is inferior to that of a fine-tuned RoBERTa. Regression tasks typically involve predicting a continuous value rather than a discrete label, posing unique challenges for LLMs. One primary reason for their subpar performance is the inherent difference between the language modeling objective and the regression task objective. LLMs are designed to predict the next word in a sequence or generate coherent text, with their pre-training focused on capturing linguistic patterns and relationships. Consequently, their internal representations may not be well-suited for modeling continuous numerical outputs. \nBesides, LLMs have predominantly been trained on text data, focusing on capturing the intricacies of natural language processing. As a result, their performance on multimodal data, which involves handling multiple data types such as text, images, audio, video, actions, and robotics, remains largely unexplored. 
And fine-tuned multimodal models, like BE\textsc{i}T and PaLI~, still dominate many tasks such as visual question answering~(VQA) and image captioning. Nonetheless, the recently introduced GPT-4~ has taken a step toward multimodal fusion, but there is still a lack of detailed evaluation of its capabilities.", "id": "40ab9810-5556-4e41-9750-0e49b3f51ab4", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "5fe8f0e4-6376-4a8f-bc4b-9868ac9f320c", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for NLP Tasks" ], [ "subsection", "Miscellaneous tasks" ], [ "subsubsection", "No use case" ] ], "subsections": [], "title": "No use case" }, { "cite_extract_rate": 0.636363636363636, "cites": [ 1576, 1586, 9142, 8468, 1584, 1585, 1561 ], "content": "LLMs are particularly suitable for certain tasks. \nLLMs are very good at mimicking humans, acting as a chatbot, and performing various kinds of tasks. The LLMs-powered ChatGPT\footnote{https://chat.openai.com} is surprising for its consistency, reliability, informativeness, and robustness during multiple utterances with humans. The human-feedback procedure plays an important role in acquiring such abilities.\nLLMs can act as both a good annotator and a data generator for data augmentation, such as in. Some LLMs have been found to be as good as human annotators~ in some tasks. And the collected texts from GPT-3.5~(text-davinci-003) have been used as human-like instruction-following demonstrations to train other language models~. \nLLMs can also be used for quality assessment on some NLG tasks, such as summarization and translation. On summarization tasks, GPT-4 as an evaluator achieves a higher correlation with humans than other methods by a large margin~. Some other evaluators based on LLMs~ also show good human alignment in more NLG tasks, especially compared with traditional automatic metrics. 
But the LLM evaluator may have a bias towards the LLM-generated texts~.\nAlso, as we discussed above, some abilities of LLMs bring bonuses in addition to performance improvement, such as interpretability. The CoT reasoning ability of LLMs can show how an LLM reaches the prediction, which is a good interpretation on the instance level, while it also improves the performance.", "id": "85fadad0-1368-4d30-9626-74f1ecc1c17b", "level": "subsubsection", "origin_cites_number": 11, "parent_id": "5fe8f0e4-6376-4a8f-bc4b-9868ac9f320c", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for NLP Tasks" ], [ "subsection", "Miscellaneous tasks" ], [ "subsubsection", "Use case" ] ], "subsections": [], "title": "Use case" }, { "cite_extract_rate": 1, "cites": [ 1587, 364, 8469 ], "content": "In the last part of this section, we would like to discuss the usage of LLMs and fine-tuned models in real-world "tasks". We use the term "tasks" loosely, as real-world scenarios often lack well-formatted definitions like those found in academia. Many requests to models cannot even be treated as NLP tasks. Models face challenges in the real world from three perspectives:\n\begin{itemize}\n \item \textbf{Noisy/Unstructured input}. Real-world input comes from real-world non-experts. They have little knowledge about how to interact with the model or may not even be able to use text fluently. As a result, real-world input data can be messy, containing typos, colloquialisms, and mixed languages, unlike the well-formed data used for pre-training or fine-tuning. \n \item \textbf{Tasks not formalized by academia}. In real-world scenarios, tasks are often ill-defined by academia and much more diverse than those in academic settings. Users frequently present queries or requests that do not fall neatly into predefined categories, and sometimes multiple tasks are in a single query. 
\n \\item \\textbf{Following users' instructions}. A user's request may contain multiple implicit intents~(e.g. specific requirement to output format), or their desired predictions may be unclear without follow-up questions. Models need to understand user intents and provide outputs that align with those intents. \n\\end{itemize}\nEssentially, these challenges in the real world come from that users' requests deviate significantly from the distribution of any NLP datasets designed for specific tasks. Public NLP datasets are not reflective of how the models are used~. \n\\begin{applebox}{Remark 7}\n LLMs are better suited to handle real-world scenarios compared to fine-tuned models. However, evaluating the effectiveness of models in the real world is still an open problem.\n\\end{applebox}\nHandling such real-world scenarios requires coping with ambiguity, understanding context, and handling noisy input. Compared to fine-tuned models, LLMs are better equipped for this because they have been trained on diverse data sets that encompass various writing styles, languages, and domains. Additionally, LLMs demonstrate a strong ability to generate open-domain responses, making them well-suited for these scenarios. Fine-tuned models, on the other hand, are often tailored to specific, well-defined tasks and may struggle to adapt to new or unexpected user requests. They heavily rely on clear objectives and well-formed training data that specify the types of instructions the models should learn to follow. Fine-tuned models may struggle with noisy input due to their narrower focus on specific distributions and structured data. An additional system is often required as an assistant for fine-tuned models to process unstructured context, determine possible intents, and refine model responses accordingly.\nAdditionally, some mechanics such as instruction tuning~ and human alignment tuning~ further boost the capabilities of LLMs to better comprehend and follow user instructions. 
These methods improve the model's ability to generate helpful, harmless, and honest responses while maintaining coherence and consistency~. While both methods can make LLMs better generalize to unseen tasks and instructions, it has been noticed that human labelers prefer models tuned for human alignment~ to models tuned with instructions from public NLP tasks, such as FLAN~ and T0~. The reason may be similar to reasons for fine-tuned models' inferiority: public NLP tasks/datasets are designed for easy and automatic evaluation, and they can only cover a small part of real-world usage.\nOne of the main issues when it comes to real-world scenarios is how to evaluate whether the model is good or not. Without any formalized tasks or metrics, the evaluation of model effectiveness can only rely on feedback from human labelers. Considering the complexity and cost of human evaluation, there's no massive and systematic comparison between fine-tuned models and LLMs yet. Nevertheless, the huge success and popularity of LLMs such as ChatGPT have confirmed the superiority of LLMs to some extent.", "id": "1371d8e3-f812-4bcf-b123-202530b09d42", "level": "subsection", "origin_cites_number": 3, "parent_id": "8bfedb53-eb7f-45de-b964-352dab91a005", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Practical Guide for NLP Tasks" ], [ "subsection", "Real world "tasks"" ] ], "subsections": [], "title": "Real world "tasks"" }, { "cite_extract_rate": 0, "cites": [], "content": "Although LLMs are suitable for various downstream tasks, there are some other factors to consider, such as efficiency and trustworthiness. \nOur discussion of efficiency encompasses the training cost, inference latency, and parameter-efficient tuning strategies for LLMs. 
Meanwhile, our examination of trustworthiness includes robustness \\& calibration, fairness \\& biases, potential spurious correlations, and the safety challenges in LLMs.\n\\begin{applebox}{Remark 8}\n\\begin{enumerate}[leftmargin=0.4cm]\n \\item Light, local, fine-tuned models should be considered rather than LLMs, especially for those who are sensitive to the cost or have strict latency requirements. Parameter-Efficient tuning can be a viable option for model deployment and delivery.\n \\item The zero-shot approach of LLMs prohibits the learning of shortcuts from task-specific datasets, which is prevalent in fine-tuned models. Nevertheless, LLMs still demonstrate a degree of shortcut learning issues.\n \\item Safety concerns associated with LLMs should be given utmost importance as the potentially harmful or biased outputs, and hallucinations from LLMs can result in severe consequences. Some methods such as human feedback have shown promise in mitigating these problems.\n\\end{enumerate}\n\\end{applebox}", "id": "ff91c03f-4bf0-4721-898f-5219db670483", "level": "section", "origin_cites_number": 0, "parent_id": "65198946-bcab-42d9-bc99-da3a8beb9233", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Other considerations" ] ], "subsections": [ "f0e841b3-8f4d-49ae-ba5d-a934fbb0c47a", "ea598959-0f56-4d8e-a556-bc38157b9a19", "8ca777cb-b1a7-42cb-88ed-4b9e175861af" ], "title": "Other considerations" }, { "cite_extract_rate": 0, "cites": [], "content": "In real-world deployment, performance, cost, and latency are all important considerations, not just the performance of the models. 
While some parameter-efficient methods have been developed, practitioners must balance efficiency with effectiveness in practice.", "id": "f0e841b3-8f4d-49ae-ba5d-a934fbb0c47a", "level": "subsection", "origin_cites_number": 0, "parent_id": "ff91c03f-4bf0-4721-898f-5219db670483", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Other considerations" ], [ "subsection", "Efficiency" ] ], "subsections": [ "7141aff7-7e78-47a2-9234-7f2210caca46" ], "title": "Efficiency" }, { "cite_extract_rate": 0.4, "cites": [ 679, 1588 ], "content": "LLMs have grown increasingly larger in recent years, with models such as GPT-1, GPT-2, and GPT-3 featuring 117 million, 1.5 billion, and 175 billion parameters, respectively. The cost of training an LLM is heavily influenced by its size, with estimates suggesting that training the 11B parameter variant of T5 costs well over \$1.3 million for a single run, while a single training run of GPT-3 175B requires \$4.6 million~.\nThe energy consumption for training large models is equally impressive. The total energy consumption for training a transformer model with 6B parameters to completion is estimated to be around 103.5 MWh~. Google reports that training PaLM consumed about 3.4 GWh in about two months~.\nFurthermore, the dataset size also scales rapidly with the size of the model, with GPT-3 175B trained on 499 billion tokens~. 
Another key metric that reflects the computing cost is Flops, with GPT-3 175B requiring $3.14 \\times 10^{23}$ Flops, while a T5 11B model only requires $3.30 \\times 10^{22}$, which is 10 times less.\nIn addition to these costs, hardware requirements are also substantial.\nOpenAI has collaborated with Microsoft on a supercomputer hosted in the Microsoft Azure cloud, consisting of 285k CPU cores and 10k high-end GPUs to support the training of large models.\nFor users of the OpenAI API, pricing varies based on the model and usage, with options such as GPT-3.5-turbo charging \\$0.002 per 1k tokens for chat service. However, for users who require custom models, training costs \\$0.03 per 1k tokens, while usage costs \\$0.12 per 1k tokens~.\nTherefore, for users who cannot afford such a large cost, such as small startups, individual users, etc., a small, fine-tuned model is a better and more reasonable choice.", "id": "7141aff7-7e78-47a2-9234-7f2210caca46", "level": "paragraph", "origin_cites_number": 5, "parent_id": "f0e841b3-8f4d-49ae-ba5d-a934fbb0c47a", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Other considerations" ], [ "subsection", "Efficiency" ], [ "paragraph", "Cost" ] ], "subsections": [ "a3a2a7f0-6713-43d4-b283-83b86312b545", "00fb88d7-4ae2-4791-8f99-852ee021df33" ], "title": "Cost" }, { "cite_extract_rate": 0, "cites": [], "content": "Latency is a crucial factor to consider in real-world applications of LLMs. Inference time is a commonly used metric to measure latency, which is highly dependent on the model size, architecture, and token size. For instance, the inference time for the GPT-J 6B model is 0.077s, 0.203s, and 0.707s when the max token size is set to 2, 8, and 32, respectively. 
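The compute and pricing figures above can be sanity-checked with a few lines of arithmetic (all values are the ones quoted in this section; the "10 times" FLOPs gap works out to roughly 9.5x):

```python
# Training-compute ratio quoted above: GPT-3 175B vs. T5 11B.
gpt3_flops = 3.14e23
t5_flops = 3.30e22
ratio = gpt3_flops / t5_flops
print(f"GPT-3 / T5 training FLOPs: {ratio:.1f}x")  # roughly 9.5x, i.e. about 10 times

def api_cost_usd(n_tokens: int, price_per_1k_usd: float) -> float:
    """Token-based API pricing is linear: cost scales with token count."""
    return n_tokens / 1000 * price_per_1k_usd

# e.g., a 500-token gpt-3.5-turbo chat completion at the quoted $0.002 / 1k tokens
print(f"${api_cost_usd(500, 0.002):.4f}")
```

Simple back-of-the-envelope estimates like this are usually enough to decide whether an LLM API or a small local fine-tuned model is the more economical option for a given workload.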
Additionally, when the max token size is fixed at 32, the inference time for the InstructGPT model~(davinci v2) is 1.969s.\nAs LLMs are often too large to be run on a single user's machine, companies provide LLM services via APIs. The API latency can vary depending on the user's location, and the average latency of the OpenAI API service for a single request can range from a few hundred milliseconds to several seconds.\nIn scenarios where high latency is not acceptable, large LLMs may not be appropriate. For example, scalability is critical in many information retrieval applications. To deploy information retrieval systems on the web, search engines require very efficient inference for systems to be useful. The idealized denoised inference time for the InstructGPT davinci v2 (175B*) model is 0.21s per request (i.e., a query-passage pair to be scored), which is too slow for web search engines.", "id": "a3a2a7f0-6713-43d4-b283-83b86312b545", "level": "paragraph", "origin_cites_number": 0, "parent_id": "7141aff7-7e78-47a2-9234-7f2210caca46", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Other considerations" ], [ "subsection", "Efficiency" ], [ "paragraph", "Cost" ], [ "paragraph", "Latency" ] ], "subsections": [], "title": "Latency" }, { "cite_extract_rate": 0.75, "cites": [ 1589, 1590, 1591 ], "content": "In practice, we may tune the model on some specific datasets. Parameter-Efficient Tuning (PET) is an efficient technique to tune a small portion of model parameters (or extra parameters) while freezing most parameters of the pre-trained LLMs. The main goal of PET is to greatly decrease the computational and storage costs while keeping the performance of the original models. The common techniques for PET are LoRA~, Prefix Tuning~, P-Tuning~. 
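The idea behind LoRA in particular can be sketched in a few lines: the pretrained weight $W$ stays frozen and only two small low-rank factors $B$ and $A$ are trained, so the effective weight becomes $W + BA$. The dimensions below are illustrative, not taken from the original paper:

```python
import numpy as np

d, r = 768, 8                      # hidden size, low rank (r << d)
rng = np.random.default_rng(0)

W = rng.normal(0, 0.02, (d, d))    # frozen pretrained weight
A = rng.normal(0, 0.01, (r, d))    # trainable down-projection
B = np.zeros((d, r))               # trainable up-projection, zero at init

x = rng.normal(size=(4, d))        # a batch of hidden states
h = x @ (W + B @ A).T              # adapted forward pass

# With B initialized to zero, the adapted layer equals the frozen one,
# so training starts exactly from the pretrained model.
assert np.allclose(h, x @ W.T)

# Trainable parameters: 2*d*r instead of d*d (~2% here).
print(A.size + B.size, W.size)  # 12288 589824
```

Only $A$ and $B$ receive gradients, which is where the memory and storage savings come from.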
As an illustration, the LoRA method maintains the weights of the pre-trained model and incorporates low-rank matrices into every layer of the Transformer architecture. This approach considerably minimizes the number of parameters that require training for subsequent tasks, thereby increasing overall efficiency. Alpaca-LoRA\\footnote{https://github.com/tloen/alpaca-lora} proposes integrating Low-Rank Adaptation (LoRA) into LLaMA-Alpaca, which enables fine-tuning LLaMA within hours on a single RTX 4090. All these PET methods can be helpful either for fine-tuning a model to a specific task or tuning LLMs to meet special requirements like human alignment.", "id": "00fb88d7-4ae2-4791-8f99-852ee021df33", "level": "paragraph", "origin_cites_number": 4, "parent_id": "7141aff7-7e78-47a2-9234-7f2210caca46", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Other considerations" ], [ "subsection", "Efficiency" ], [ "paragraph", "Cost" ], [ "paragraph", "Parameter-Efficient Tuning" ] ], "subsections": [], "title": "Parameter-Efficient Tuning" }, { "cite_extract_rate": 0, "cites": [], "content": "Given that LLMs are now involved in sensitive areas such as healthcare, finance, and law, it is crucial to ensure that they are trustworthy and capable of producing reliable output.", "id": "ea598959-0f56-4d8e-a556-bc38157b9a19", "level": "subsection", "origin_cites_number": 0, "parent_id": "ff91c03f-4bf0-4721-898f-5219db670483", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Other considerations" ], [ "subsection", "Trustworthiness" ] ], "subsections": [ "f734ffb5-17e3-4bf6-a0ec-98a1940f21e8" ], "title": "Trustworthiness" }, { "cite_extract_rate": 0.75, "cites": [ 1570, 1593, 1592 ], "content": "The accuracy and robustness of the LLMs are shown to have a very strong correlation~. 
Models that have high accuracy on a scenario also tend to have good robustness. However, zero-shot robustness becomes worse after tuning on extra application-specific task data~. This may be due to overfitting, which leads to poor generalizability due to the extremely high complexity of the model and the limited training samples from downstream tasks~.\nIn a similar vein, it has been observed that fine-tuning a model can result in significant miscalibrations, owing to over-parameterization~. Therefore, fine-tuned models may not be an optimal choice when robustness and calibration are critical considerations.\nHowever, human alignment has been found to be a potential solution for enhancing model robustness. InstructGPT davinci v2 (175B*) has been shown to outperform other models in terms of robustness. On the other hand, achieving optimal calibration of the model depends on the scenario and adaptation procedure employed.", "id": "f734ffb5-17e3-4bf6-a0ec-98a1940f21e8", "level": "paragraph", "origin_cites_number": 4, "parent_id": "ea598959-0f56-4d8e-a556-bc38157b9a19", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Other considerations" ], [ "subsection", "Trustworthiness" ], [ "paragraph", "Robustness and Calibration" ] ], "subsections": [ "c2f1af17-91a6-4092-a7ef-cb6dce389eaf", "0ed97184-e734-48cc-9123-7af7d4155541" ], "title": "Robustness and Calibration" }, { "cite_extract_rate": 0.5, "cites": [ 1570, 7468 ], "content": "LLMs have been shown to exhibit disparate treatment and impact, perpetuating societal biases and potentially leading to discrimination~. To ensure fairness and equity for all users, it is crucial to address these issues in the development and deployment of NLP models. 
Disparities in performance between demographic groups can serve as an indicator of fairness problems.\nLLMs are particularly susceptible to fairness issues, as significant performance disparities have been observed across demographic categories such as dialect, religion, gender, and race~. However, research has shown that aligning models with human instructions can improve LLM performance regardless of their size, with the InstructGPT model~(davinci v2) exhibiting smaller performance disparities than other LLMs~.", "id": "c2f1af17-91a6-4092-a7ef-cb6dce389eaf", "level": "paragraph", "origin_cites_number": 4, "parent_id": "f734ffb5-17e3-4bf6-a0ec-98a1940f21e8", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Other considerations" ], [ "subsection", "Trustworthiness" ], [ "paragraph", "Robustness and Calibration" ], [ "paragraph", "Fairness and Bias" ] ], "subsections": [], "title": "Fairness and Bias" }, { "cite_extract_rate": 0.8571428571428571, "cites": [ 1594, 1595, 1596, 8470, 8471, 1597 ], "content": "The shortcut learning problem has been observed in various natural language understanding tasks under the pretraining and fine-tuning paradigm, where models heavily rely on spurious correlations between input and labels in the fine-tuning data for prediction . For example, in reading comprehension tasks, fine-tuned models tend to focus on the lexical matching of words between the question and the original passage, neglecting the intended reading comprehension task itself . \nIn contrast, large language models are not directly trained on fine-tuned datasets, which makes it less likely for them to learn shortcut features present in the fine-tuned dataset, thereby enhancing the model's generalization capabilities. However, LLMs are not infallible and may exhibit some shortcut learning during in-context learning. 
For example, recent preliminary studies have begun investigating the robustness of prompt-based methods in large-scale language models . One such study evaluates the few-shot learning performance of GPT-3 on text classification and information extraction tasks, and reveals that the examined LLMs are susceptible to majority label bias and position bias, where they tend to predict answers based on the frequency or position of the answers in the training data. Moreover, these LLMs exhibit common token bias, favoring answers that are prevalent in their pre-training corpus. Recent studies show that this positional bias can be mitigated by selecting proper prompts .\nIn summary, while LLMs significantly reduce the shortcut learning problem prevalent in fine-tuned models, they still exhibit some shortcut learning issues and should be approached with caution when deploying them in downstream applications.", "id": "0ed97184-e734-48cc-9123-7af7d4155541", "level": "paragraph", "origin_cites_number": 7, "parent_id": "f734ffb5-17e3-4bf6-a0ec-98a1940f21e8", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Other considerations" ], [ "subsection", "Trustworthiness" ], [ "paragraph", "Robustness and Calibration" ], [ "paragraph", "Spurious Biases" ] ], "subsections": [], "title": "Spurious Biases" }, { "cite_extract_rate": 0.5, "cites": [ 9115 ], "content": "LLMs have demonstrated their extremely strong capabilities in many areas such as reasoning, knowledge retention, and coding. As they become more powerful and human-like, their potential to influence people's opinions and actions in significant ways grows. 
As a result, some new safety challenges to our society should be considered, and they have attracted much attention in recent works~.", "id": "8ca777cb-b1a7-42cb-88ed-4b9e175861af", "level": "subsection", "origin_cites_number": 2, "parent_id": "ff91c03f-4bf0-4721-898f-5219db670483", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Other considerations" ], [ "subsection", "Safety challenges" ] ], "subsections": [ "9b79094e-a1ad-490d-aec6-2aea389d1b8c" ], "title": "Safety challenges" }, { "cite_extract_rate": 0.5, "cites": [ 364 ], "content": "The potential for LLMs to \"hallucinate,\" or generate nonsensical or untruthful content, can have significant negative impacts on the quality and reliability of information in various applications. As LLMs become increasingly convincing and believable, users may develop an overreliance on them and trust them to provide accurate information in areas with which they are somewhat familiar. This can be particularly dangerous if the model produces content that is entirely false or misleading, leading to incorrect decisions or actions taken based on that information. Such outcomes can have serious consequences in many domains, such as healthcare, finance, or public policy, where the accuracy and reliability of information are critical. 
To mitigate these issues, reinforcement learning from human feedback (RLHF) is widely used~ and LLMs themselves have been integrated into the loop~.", "id": "9b79094e-a1ad-490d-aec6-2aea389d1b8c", "level": "paragraph", "origin_cites_number": 2, "parent_id": "8ca777cb-b1a7-42cb-88ed-4b9e175861af", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Other considerations" ], [ "subsection", "Safety challenges" ], [ "paragraph", "Hallucinations" ] ], "subsections": [ "e2bac6f3-e69e-48de-8eff-74f68663d0f4", "bac6b5f9-b7c5-47a6-915f-6aa4c0ba130a" ], "title": "Hallucinations" }, { "cite_extract_rate": 0.5, "cites": [ 1598 ], "content": "Due to the high coherence, quality, and plausibility of texts generated by LLMs, harmful content from LLMs can cause significant harm, including hate speech, discrimination, incitement to violence, false narratives, and even social engineering attacks. Implementing safeguards to detect and correct such content can serve as mitigation~. These LLMs can also have dual-use potential by providing requested illicit information, leading to risks such as the proliferation of weapons~ and even terrorist attack planning. It is crucial to ensure that these LLMs are used responsibly, with safeguards in place to prevent harm. Also, in existing work, feedback from humans plays an important role in getting rid of harmful outputs.", "id": "e2bac6f3-e69e-48de-8eff-74f68663d0f4", "level": "paragraph", "origin_cites_number": 2, "parent_id": "9b79094e-a1ad-490d-aec6-2aea389d1b8c", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Other considerations" ], [ "subsection", "Safety challenges" ], [ "paragraph", "Hallucinations" ], [ "paragraph", "Harmful content" ] ], "subsections": [], "title": "Harmful content" }, { "cite_extract_rate": 0, "cites": [], "content": "LLMs can face serious security issues. 
An example is the issue of user privacy. It is reported that Samsung employees were using ChatGPT to process their work when they inadvertently leaked top-secret data, including the source code of a new program, internal meeting minutes related to hardware, etc. The Italian data protection agency declared that OpenAI, the developer of ChatGPT, illicitly gathered personal user data, leading Italy to become the first government to prohibit ChatGPT over privacy concerns~.", "id": "bac6b5f9-b7c5-47a6-915f-6aa4c0ba130a", "level": "paragraph", "origin_cites_number": 1, "parent_id": "9b79094e-a1ad-490d-aec6-2aea389d1b8c", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Other considerations" ], [ "subsection", "Safety challenges" ], [ "paragraph", "Hallucinations" ], [ "paragraph", "Privacy" ] ], "subsections": [], "title": "Privacy" }, { "cite_extract_rate": 1, "cites": [ 8472, 1599 ], "content": "Recent advances in large language models have been revolutionizing the field of natural language processing. Effectively using LLMs requires understanding their capabilities and limitations for various NLP tasks. This work presents a practical guide to working with LLMs for downstream NLP tasks. We first discuss prominent models like GPT-style and BERT-style architectures and the factors influencing their performance. We then explore using LLMs for downstream tasks, including knowledge-intensive tasks, NLU, and NLG tasks, as well as providing concrete examples of successes and limitations. This practical guide offers insights into LLMs and best practices for harnessing LLMs across NLP tasks. 
We hope it will enable researchers and practitioners to leverage their potential and drive innovation in language technologies.\nIn the following, we outline future challenges for LLMs:\n\\begin{itemize}[leftmargin=0.4cm]\n \\item\\textbf{Evaluation of proposed models on real-world ``datasets''.} \n Existing deep learning models are primarily evaluated on standard academic datasets, such as ImageNet, which have been milestones in deep learning development. However, standard academic datasets are limited and cannot exactly reflect real-world performance. As models advance, it is crucial to assess them on more diverse, complex, and realistic data that reflect real-world needs. Evaluating models on real-world ``datasets'', in addition to academic ones, will provide a more rigorous test of their capabilities, as well as a better understanding of their effectiveness in real-world applications. This ensures that the models are capable of addressing real-world challenges and delivering practical solutions. \n \\item \\textbf{Model Alignment.} \n Ensuring that increasingly powerful and autonomous models align with human values and priorities is essential. Methods must be developed to guarantee that these models behave as intended and do not optimize for undesirable outcomes. It is crucial to integrate alignment techniques from the start of the model development process. Model transparency and interpretability are also important factors for evaluating and ensuring alignment. Additionally, as we look toward the future, an even more daunting challenge looms: aligning superhuman systems. While this task currently lies beyond our immediate needs, it is important to consider and prepare for the potential implications of aligning such advanced systems, as they may present unique complexities and ethical concerns~. 
\n \\item\\textbf{Safety Alignment.} While discussion of AI existential risks is important, concrete research is needed to guarantee the safe development of advanced AI. This includes techniques for interpretability, scalable oversight and governance, and formal verification of model properties. Safety should be considered not just as an add-on but as an integral part of the model-building process.\n \\item\\textbf{Performance Prediction with Scaling.} It is difficult to anticipate how model performance will change as model size and complexity increase dramatically. Developing methods to better predict model performance after scaling up or as new architectures are developed would allow for more efficient use of resources and accelerated progress. Some possibilities include: training a smaller 'seed' model and extrapolating its growth, simulating the effects of increased scale or model tweaks, and benchmarking iterations of the model at different scales to build scaling laws. These could provide insight into the performance of models even before they are built.\n\\end{itemize}\n\\bibliographystyle{plain}\n\\bibliography{./H.J, ./Q.F, ./X.H, ./R.X}\n\\end{document}", "id": "36ebc08b-941e-409b-9405-648aacfcf028", "level": "section", "origin_cites_number": 2, "parent_id": "65198946-bcab-42d9-bc99-da3a8beb9233", "prefix_titles": [ [ "title", "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" ], [ "section", "Conclusion and Future Challenges" ] ], "subsections": [], "title": "Conclusion and Future Challenges" } ]
108
[ 1549, 8461, 1550, 679, 9115, 7460, 1552, 1150, 1558, 1551, 826, 9, 856, 8385, 1557, 8462, 1556, 1555, 1559, 1554, 7462, 7, 7461, 7463, 1553, 11, 472, 1560, 1563, 1562, 8463, 1561, 364, 1564, 7057, 7464, 1569, 1570, 1174, 8465, 1571, 1566, 8464, 1568, 1098, 1145, 1565, 446, 1567, 8466, 1574, 8433, 1575, 1572, 1109, 1576, 7465, 1573, 8467, 451, 440, 7466, 1578, 1580, 444, 1579, 458, 443, 1577, 1581, 7467, 1582, 1583, 1586, 9142, 8468, 1584, 1585, 1587, 8469, 1588, 1589, 1590, 1591, 1593, 1592, 7468, 1594, 1595, 1596, 8470, 8471, 1597, 1598, 8472, 1599 ]
1.191284
[ "Qingxiu Dong", "Lei Li", "Damai Dai", "Ce Zheng", "Jingyuan Ma", "Rui Li", "Heming Xia", "Jingjing Xu", "Zhiyong Wu", "Tianyu Liu", "Baobao Chang", "Xu Sun", "Lei Li", "Zhifang Sui" ]
A Survey on In-context Learning
2022
2022-12-31T15:57:09Z
cs.CL
With the increasing capabilities of large language models (LLMs), in-context learning (ICL) has emerged as a new paradigm for natural language processing (NLP), where LLMs make predictions based on contexts augmented with a few examples. It has been a significant trend to explore ICL to evaluate and extrapolate the ability of LLMs. In this paper, we aim to survey and summarize the progress and challenges of ICL. We first present a formal definition of ICL and clarify its correlation to related studies. Then, we organize and discuss advanced techniques, including training strategies, prompt designing strategies, and related analysis. Additionally, we explore various ICL application scenarios, such as data engineering and knowledge updating. Finally, we address the challenges of ICL and suggest potential directions for further research. We hope that our work can encourage more research on uncovering how ICL works and improving ICL.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "66ef2e50-c617-457c-84da-f8257ab93ea9", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey on In-context Learning" ] ], "subsections": [ "24204e94-b539-4b86-bbee-4acd9e25a204", "c5b589e0-c439-4d4c-88cc-1b6df436c094", "d04516c1-e5c9-4a71-b557-9810697071f4", "45f129c0-f2ad-411c-a646-2578b65d6743", "55f7b1b0-ead8-405c-a87a-2eebb0d97e59", "0a759d57-0691-46bd-b44b-b65962f3c7a4", "802fa11b-661a-42b8-8518-6da66bcd6b28", "b2ae7d73-bac0-43bb-ae4d-92f917aae87e", "2aa08bb9-e45f-4156-b22d-928383da2a66", "634951dd-b69d-404b-ade3-4141eca1b76e", "e33c6336-e49c-4261-adca-a52c2331dc95", "6f8ecda4-ab0a-4fa9-99a1-7bcb0f3e77fe" ], "title": "root" }, { "cite_extract_rate": 0.631578947368421, "cites": [ 1578, 2189, 8470, 679, 3368, 7697, 8556, 1552, 1554, 3369, 2445, 9115 ], "content": "\\label{sec:intro}\nWith the scaling of model size and data size~, large language models (LLMs) demonstrate the in-context learning (ICL) ability, that is, learning from a few examples in the context. \nMany studies have shown that LLMs can perform a series of complex tasks through ICL, such as solving mathematical reasoning problems~. These strong abilities have been widely verified as emerging abilities for large language models~. \nThe key idea of in-context learning is to learn from analogy. Figure~\\ref{fig:icl} gives an example that describes how language models make decisions via ICL.\nFirst, ICL requires a few demonstration examples to form a prompt context. These examples are usually written in natural language templates. \nThen, ICL concatenates a query question and the piece of prompt context together to form the input, which is then fed into the language model for prediction.\nDifferent from supervised learning, which requires a training stage that uses backward gradients to update model parameters, ICL does not perform parameter updates. 
The model is expected to learn the pattern hidden in the demonstration and accordingly make the right prediction. \n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{fig/icl.pdf}\n \\caption{Illustration of in-context learning. ICL requires a prompt context containing a few demonstration examples written in natural language templates. Taking this prompt and a query as the input, large language models are responsible for making predictions.}\n \\label{fig:icl}\n\\end{figure}\n\\input{fig/taxonomy.tex}\nAs a new paradigm, ICL has multiple attractive advantages. \nFirst, since the demonstration is written in natural language, it provides an interpretable interface to communicate with LLMs~.\nThis paradigm makes it much easier to incorporate human knowledge into LLMs by changing the demonstration and templates~. \nSecond, in-context learning is similar to the decision process of human beings by learning from analogy~. \nThird, compared to supervised training, ICL is a training-free learning framework. \nThis could not only greatly reduce the computational costs for adapting the model to new tasks, but also make language-model-as-a-service~ possible, so ICL can be easily applied to large-scale real-world tasks.\nDespite being promising, there are also interesting questions and intriguing properties that require further investigation in ICL. \nAlthough a range of vanilla GPT models show excellent ICL capability, several studies have found that this capability can be significantly improved through adaptation during pretraining~.\nMoreover, the performance of ICL is sensitive to specific settings, including the prompt template, the selection and order of demonstration examples, and other factors~. Additionally, optimizing the conciseness of demonstration examples and improving the computational efficiency of ICL are critical areas of ongoing research~. 
\nFurthermore, despite preliminary explanations~, the underlying working mechanism of ICL remains unclear and requires further investigation.\nWith the rapid growth of studies in ICL, our survey aims to raise the community's awareness of the current progress.\nIn the following sections, we delve into an in-depth discussion of related studies, and we summarize the key findings in Appendix~\\ref{app:takeaway}. \nWe highlight the challenges and potential directions and hope our work provides a useful roadmap for beginners interested in this area and sheds light on future research.\nThe strong performance of ICL relies on two stages: (1) the training stage that cultivates the ICL ability of LLMs, and (2) the inference stage where LLMs predict according to task-specific demonstrations.\nIn terms of the training stage, LLMs are directly trained on language modeling objectives, such as left-to-right generation. Although the models are not specifically optimized for in-context learning, they still exhibit the ICL ability.\nExisting studies on ICL basically take a well-trained LLM as the backbone, and thus this survey will not cover the details of pretraining language models. \nRegarding the inference stage, as the input and output labels are all represented in interpretable natural language templates, there are multiple directions for improving ICL performance. \nWe organize the current progress in ICL following the taxonomy above (as shown in Figure~\\ref{taxo_of_icl}). \nWith a formal definition of ICL~(\\S\\ref{sec:formulation}), \nwe provide a detailed discussion of the ICL-augmented pretraining and warmup approaches~(\\S\\ref{sec:training}) and the demonstration designing strategies (\\S\\ref{sec:prompt_tuning}). \\S\\ref{sec:analysis} provides in-depth discussions of current explorations on unveiling the secrets behind ICL. 
\nWe further provide useful evaluation and resources~(\\S\\ref{sec:evaluation}), potential applications~(\\S\\ref{sec:application}) and potential challenges~(\\S\\ref{sec:challege_future}) in Appendix", "id": "24204e94-b539-4b86-bbee-4acd9e25a204", "level": "section", "origin_cites_number": 19, "parent_id": "66ef2e50-c617-457c-84da-f8257ab93ea9", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 679 ], "content": "\\label{sec:formulation}\nFollowing , we here provide a formal definition of in-context learning:\n\\begin{quote}\n\\textsl{In-context learning is a paradigm that allows language models to learn tasks given only a few examples in the form of demonstration.}\n\\end{quote}\nFormally, given a query input text $x$ and a set of candidate answers $Y = \\{y_1, \\ldots, y_m\\}$, a pretrained language model $\\mathcal{M}$ takes the candidate answer with the maximum score as the prediction,\\footnote{$Y$ could be class labels or a set of free-text phrases.} conditioned on a demonstration set $C$.\n$C$ contains an optional task instruction $I$ and $k$ demonstration examples, thus $C = \\{ I, s(x_1, y_1), \\ldots, s(x_k, y_k) \\}$ or $C = \\{ s^{\\prime}(x_1, y_1, I), \\ldots, s^{\\prime}(x_k, y_k, I) \\}$, where $s^{\\prime}(x_i, y_i, I)$ is an example written in natural language according to the task.\nThe likelihood of a candidate answer $y_j$ comes from a scoring function $f$ on the whole input sequence:\n\\begin{equation}\n P( y_j \\mid x) \\triangleq\n f_\\mathcal{M} ( y_j, C, x)\n\\end{equation}\nThe final predicted label $\\hat y$ is the candidate answer with the highest probability:\n\\begin{equation}\n \\hat y = \\arg\\max_{y_j \\in Y } P(y_j \\mid x). 
\n\\end{equation}\nAccording to the definition, we can see that ICL differs from related concepts as follows: (1) \\textit{Prompt Learning}: prompts can be discrete templates or soft parameters that encourage the model to predict the desired output. ICL can be regarded as a subclass of prompt tuning where the demonstration examples are part of the prompt. made a thorough survey on prompt learning, but ICL was not included in their study. (2) \\textit{Few-shot Learning}: few-shot learning is a general machine learning approach that involves adapting model parameters to perform a task with a limited number of supervised examples~. In contrast, ICL does not require parameter updates and is directly performed on pretrained LLMs.", "id": "c5b589e0-c439-4d4c-88cc-1b6df436c094", "level": "section", "origin_cites_number": 3, "parent_id": "66ef2e50-c617-457c-84da-f8257ab93ea9", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Definition and Formulation" ] ], "subsections": [], "title": "Definition and Formulation" }, { "cite_extract_rate": 1, "cites": [ 7711, 7136 ], "content": "\\label{sec:training}\nAlthough LLMs have demonstrated promising ICL capability directly, many studies revealed that these ICL capabilities can be further enhanced through specialized training before inference~.", "id": "d04516c1-e5c9-4a71-b557-9810697071f4", "level": "section", "origin_cites_number": 2, "parent_id": "66ef2e50-c617-457c-84da-f8257ab93ea9", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Model Training" ] ], "subsections": [ "803089ad-8508-470d-8df7-06e2822a82ca", "cfccaa5e-3b97-42f9-888b-ff671fa0d565" ], "title": "Model Training" }, { "cite_extract_rate": 0.5, "cites": [ 7136 ], "content": "\\label{sec:pretraining}\nOne straightforward direction to boost the ICL capability of LLMs is through pretraining or continual pretraining.\nFor instance, and proposed to reorganize pretraining corpora by aggregating related 
contexts, making models learn to reason across prior demonstrations. Differently, introduced a meta-distillation pretraining process, which allows LLMs to reason with distilled demonstration vectors, thereby enhancing ICL efficiency without compromising its effectiveness.\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.92\\columnwidth]{fig/crop_train.pdf}\n \\caption{Illustration of model training methods to enhance ICL capabilities through two different stages: pretraining and warmup.}\n \\label{fig:train_method}\n\\end{figure}", "id": "803089ad-8508-470d-8df7-06e2822a82ca", "level": "subsection", "origin_cites_number": 2, "parent_id": "d04516c1-e5c9-4a71-b557-9810697071f4", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Model Training" ], [ "subsection", "Pretraining" ] ], "subsections": [], "title": "Pretraining" }, { "cite_extract_rate": 0.875, "cites": [ 8534, 1587, 7711, 8649, 1553, 2198, 7468 ], "content": "\\label{sec:warmup}\nAnother way to enhance ICL ability is adding a continual training stage between pretraining and ICL inference, which we call model warmup for short. \nWarmup is an optional procedure for ICL, which adjusts LLMs before inference by modifying or adding parameters.\nAs most pretraining data are not tailored for ICL~, researchers have introduced various warmup strategies to bridge the gap between pretraining and ICL inference. Both and proposed to continually finetune LLMs on a broad range of tasks with multiple demonstration examples, which boosts ICL abilities.\nTo encourage the model to learn input-label mappings from the context, proposed symbol tuning, which substitutes natural language labels (e.g., ``positive/negative sentiment'') with arbitrary symbols (e.g., ``foo/bar''). proposed a self-supervised method to align raw text with ICL formats in downstream tasks. Besides, multiple studies have indicated the potential value of instructions~. 
Tuning the 137B LaMDA-PT~ on over 60 datasets verbalized via natural language instruction templates, FLAN~ improves the ability of LLMs to follow instructions, boosting both the zero-shot and few-shot ICL performance. and proposed to further scale up instruction tuning with more than 1000+ task instructions.", "id": "cfccaa5e-3b97-42f9-888b-ff671fa0d565", "level": "subsection", "origin_cites_number": 8, "parent_id": "d04516c1-e5c9-4a71-b557-9810697071f4", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Model Training" ], [ "subsection", "Warmup" ] ], "subsections": [], "title": "Warmup" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:prompt_tuning}\n\\label{sec:demo}\n\\input{tabs/promptmethods}\nIn this section, we focus on the principles of ICL during inference, including demonstration organization~(\\S \\ref{sec:demonstration_org}) and instruction formatting~(\\S \\ref{sec:instruction}) .", "id": "45f129c0-f2ad-411c-a646-2578b65d6743", "level": "section", "origin_cites_number": 0, "parent_id": "66ef2e50-c617-457c-84da-f8257ab93ea9", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Prompt Designing" ] ], "subsections": [ "bc0582f7-8484-43f1-8d1e-ca6b7d828fbb", "3bdaadb6-7ca4-4ca2-8e38-e78de33e240f", "f8a812bb-8dc9-4815-84d6-18b13d203e1a" ], "title": "Prompt Designing" }, { "cite_extract_rate": 1, "cites": [ 8470, 1594 ], "content": "\\label{sec:demonstration_org}\nMany studies have shown that the performance of ICL strongly relies on the demonstration surface, including the selection, formatting, and ordering of demonstration examples~. 
\nIn this subsection, we survey demonstration organization strategies and classify them into three categories, as shown in Table~\\ref{tab:promptmethods}.", "id": "bc0582f7-8484-43f1-8d1e-ca6b7d828fbb", "level": "subsection", "origin_cites_number": 2, "parent_id": "45f129c0-f2ad-411c-a646-2578b65d6743", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Prompt Designing" ], [ "subsection", "Demonstration Organization" ] ], "subsections": [ "fee666c2-68ae-4733-89a0-9f9017fc554d", "862f6284-d97d-4dbe-b05c-0f108c797835", "38ac0700-6366-4274-a02b-fc58997d5d1c" ], "title": "Demonstration Organization" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:select}\nDemonstrations selection aims to answer a fundamental question: \\textsl{Which samples are good examples for ICL?} We categorize the related studies into two approaches: unsupervised methods based on predefined metrics and supervised methods.", "id": "fee666c2-68ae-4733-89a0-9f9017fc554d", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "bc0582f7-8484-43f1-8d1e-ca6b7d828fbb", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Prompt Designing" ], [ "subsection", "Demonstration Organization" ], [ "subsubsection", "Demonstration Selection" ] ], "subsections": [ "732044bb-34fe-421d-b1c8-9440e96dc829", "a86b2ef8-d0e3-4802-9253-132694da9348" ], "title": "Demonstration Selection" }, { "cite_extract_rate": 0.777777777777777, "cites": [ 3374, 3370, 3373, 2189, 8650, 3372, 3371 ], "content": "A straightforward approach to selecting ICL examples is to choose the nearest neighbors of input instances based on their similarities~. Distance metrics, such as L2 distance or cosine similarity based on sentence embeddings, are commonly used for this purpose. For example, proposed KATE, the first $k$NN-based unsupervised retriever for selecting in-context examples. 
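A minimal sketch of this kind of similarity-based selection (in the spirit of kNN retrieval such as KATE), with toy two-dimensional embeddings standing in for a real sentence encoder; the helper names and data are illustrative, not from any cited implementation:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def select_demonstrations(query_emb, pool, k=2):
    # Rank the candidate pool by cosine similarity to the query
    # and keep the k nearest examples as in-context demonstrations.
    ranked = sorted(pool, key=lambda ex: cosine(query_emb, ex["emb"]), reverse=True)
    return ranked[:k]

pool = [
    {"text": "great movie", "label": "positive", "emb": [0.9, 0.1]},
    {"text": "terrible plot", "label": "negative", "emb": [0.1, 0.9]},
    {"text": "loved the acting", "label": "positive", "emb": [0.8, 0.2]},
]
demos = select_demonstrations([1.0, 0.0], pool, k=2)
print([d["text"] for d in demos])  # → ['great movie', 'loved the acting']
```

In practice the embeddings would come from a sentence encoder and the pool would be the labeled training set; the ranking step is the same.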
Similarly, $k$-NN cross-lingual demonstrations can be retrieved for multi-lingual ICL to strengthen source-target language alignment~. proposed to combine graphs and confidence scores to select diverse and representative examples. In addition to distance metrics, mutual information~ and perplexity~ have proven valuable for prompt selection without labeled examples or specific LLMs. Furthermore, using output scores \nof LLMs as unsupervised metrics has shown effectiveness in demonstration selection~. Particularly, selected the best subset permutation of $k$NN examples by minimizing the code length needed to compress and transmit label $y$ given $x$ and $C$. used infoscore, i.e., the average of $P(y|x_i,y_i,x) P(y|x)$ for all $(x,y)$ pairs in a validation set with a diversity regularization.", "id": "732044bb-34fe-421d-b1c8-9440e96dc829", "level": "paragraph", "origin_cites_number": 9, "parent_id": "fee666c2-68ae-4733-89a0-9f9017fc554d", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Prompt Designing" ], [ "subsection", "Demonstration Organization" ], [ "subsubsection", "Demonstration Selection" ], [ "paragraph", "Unsupervised Method" ] ], "subsections": [], "title": "Unsupervised Method" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 3374, 3375, 3377, 2189, 3378, 3376 ], "content": "Though off-the-shelf retrievers offer convenient services for extensive NLP tasks, they are heuristic and sub-optimal due to the lack of task-specific supervision. To address this issue, numerous supervised methods have been developed~. EPR~ introduced a two-stage method to train a dense retriever for demonstration selection. For a specific input, it first utilized unsupervised methods (e.g., BM25) to recall similar examples as candidates and then used this data to build a supervised dense retriever. Following EPR, adopted a unified demonstration retriever to select demonstrations across different tasks. 
Unlike prior work that retrieves individual demonstrations, proposed retrieving entire demonstration sets to model inter-relationships between examples. Additionally, introduced AdaICL, a model-adaptive method that employs LLM to predict the unlabeled data set, generating an uncertainty score for each instance.\nBased on prompt tuning, viewed LLMs as topic models that can infer concepts $\\theta$ from a few demonstrations and generate tokens based on these concepts. They represent latent concepts with task-related concept tokens, which are learned to maximize $P(y|x,\\theta)$. Demonstrations are selected based on their likelihood to infer the concept variable using $P(\\theta|x,y)$. Additionally, reinforcement learning was introduced by for example selection. They formulated demonstration selection as a Markov decision process~ and selected demonstrations via Q-learning. The action is choosing an example, and the reward is defined as the accuracy of a labeled validation set. \n\\begin{table}[t] \n\\centering \n\\setlength{\\tabcolsep}{2pt} \n{ \\fontsize{9pt}{11pt}\\selectfont \n\\begin{tabular}{lccccccc} \n\\toprule \n\\bf Model & \\bf Method & \\bf SST5 & \\bf SST2 & \\bf CQA & \\bf SNLI & \\bf News & \\bf Avg \\\\ \n\\midrule \n\\multirow{3}{*}{GPT2} \n& topk & 40.1 & 74.9 & 30.2 & 39.7&62.7 & 49.5\\\\ \n& votek & 32.4 & 51.0 & 29.8 & 35.8& 25.5 & 34.9 \\\\ \n& mdl & \\textbf{43.3} & \\textbf{86.7} & \\textbf{32.7} & \\textbf{41.4}& \n\\textbf{68.0} & \\textbf{54.4}\\\\ \n\\midrule \n\\multirow{3}{*}{GPT-J} \n& topk & \\textbf{46.9} & 84.6 & 58.4 & \\textbf{60.7} & \\textbf{69.1} & \\textbf{63.9} \\\\ \n& votek & 33.8 & 87.3 & 63.4 & 43.1& 25.3 & 50.6\\\\ \n& mdl & 37.6 & \\textbf{87.9} & \\textbf{64.1} & 59.8 & 68.2 &63.5\\\\ \n\\midrule \n\\multirow{3}{*}{Qwen2} \n& topk & 54.1 & 83.3 & 76.3 & \\textbf{68.2} &64.9 & \\textbf{69.4}\\\\ \n& votek & \\textbf{55.3} & \\textbf{86.9} & 76.1 &51.6& \\bf 65.3 & 67.0\\\\ \n& mdl & 54.6 & 86.1 & \\textbf{77.1} &65.0& 
63.2 &69.2\\ \n \\midrule \n \\multirow{3}{*}{Llama3} \n & topk & 53.0 & \\textbf{90.3} & 76.1 & \\textbf{64.0} & 74.0 & \\textbf{71.5}\\\\ \n & votek & \\textbf{54.9} & 88.9 & 72.6 & 57.7& \\textbf{78.3} & 70.5\\\\ \n & mdl & 54.4 & 89.1 & \\textbf{76.5} & 59.9 & 74.6 &70.9 \\\\ \n \\bottomrule \n \\end{tabular}} \n \\caption{Fair comparison of demonstration selection methods. CQA and News are abbreviations of Commonsense QA and AG News, respectively. The best results are \\textbf{bolded}. Our experiments on topk~, votek~, mdl~ show that topk selects the most effective examples on average.} \n \\label{tab:experiment_design_centered_model_names} \n \\end{table} \nFor a more intuitive comparison of these unsupervised methods, we conduct experiments with topk~, votek~, and mdl~. The results are shown in Table~\\ref{tab:experiment_design_centered_model_names}. The details of the experiment can be found in Appendix \\ref{app:experiment}.", "id": "a86b2ef8-d0e3-4802-9253-132694da9348", "level": "paragraph", "origin_cites_number": 10, "parent_id": "fee666c2-68ae-4733-89a0-9f9017fc554d", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Prompt Designing" ], [ "subsection", "Demonstration Organization" ], [ "subsubsection", "Demonstration Selection" ], [ "paragraph", "Supervised Method" ] ], "subsections": [], "title": "Supervised Method" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 8651, 3379 ], "content": "\\label{sec:reformatting}\nIn addition to directly selecting examples from training data, another research trend involves utilizing LLMs to reformat the representation of existing demonstrations~. For instance, proposed generating demonstrations directly from LLMs to reduce the reliance on external demonstration data. Structured Prompting proposed to encode demonstration examples separately with special positional embeddings, which are then provided to the test examples using a rescaled attention mechanism. 
Diverging from these methods, other approaches focus on modifying the latent representation of demonstrations~. Specifically, developed In-Context Vectors (ICVs) derived from the latent embeddings of demonstration examples in LLMs. These ICVs are used during inference to adjust the latent states of the LLM, thereby enhancing the model's ability to follow the demonstrations more effectively.", "id": "862f6284-d97d-4dbe-b05c-0f108c797835", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "bc0582f7-8484-43f1-8d1e-ca6b7d828fbb", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Prompt Designing" ], [ "subsection", "Demonstration Organization" ], [ "subsubsection", "Demonstration Reformatting" ] ], "subsections": [], "title": "Demonstration Reformatting" }, { "cite_extract_rate": 1, "cites": [ 3369, 2189, 8470 ], "content": "\\label{sec:order}\nOrdering the selected demonstration examples is also an important aspect of demonstration organization. have proven that order sensitivity is a common problem that persists across various models. To handle this problem, previous studies have proposed several training-free methods for sorting demonstration examples. Particularly, arranged examples based on their proximity to the input, positioning the closest example as the rightmost demonstration. introduced global and local entropy metrics, finding a positive correlation between these metrics and the ICL performance. Consequently, they utilized the entropy metric to determine the optimal demonstration ordering. 
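The proximity-based ordering described above can be sketched as follows (toy embeddings and helper names are illustrative; the rule is simply that the most similar demonstration is placed last, i.e., rightmost in the prompt):

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def order_demonstrations(query_emb, demos):
    # Sort ascending by similarity to the query so that the most similar
    # demonstration ends up rightmost, closest to the test input in the prompt.
    return sorted(demos, key=lambda ex: cosine(query_emb, ex["emb"]))

demos = [
    {"text": "A", "emb": [0.9, 0.1]},
    {"text": "B", "emb": [0.2, 0.8]},
    {"text": "C", "emb": [0.6, 0.4]},
]
ordered = order_demonstrations([1.0, 0.0], demos)
print([d["text"] for d in ordered])  # → ['B', 'C', 'A']
```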
Additionally, ICCL~ suggested ranking demonstrations from simple to complex, thereby gradually increasing the complexity of demonstration examples during the inference process.", "id": "38ac0700-6366-4274-a02b-fc58997d5d1c", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "bc0582f7-8484-43f1-8d1e-ca6b7d828fbb", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Prompt Designing" ], [ "subsection", "Demonstration Organization" ], [ "subsubsection", "Demonstration Ordering" ] ], "subsections": [], "title": "Demonstration Ordering" }, { "cite_extract_rate": 0.8888888888888881, "cites": [ 3380, 3381, 1578, 424, 3382, 3383, 3384, 2467 ], "content": "\\label{sec:instruction}\nA common way to format demonstrations is to concatenate examples $(x_1, y_1), \\ldots, (x_k, y_k)$ with a template $\\mathcal{T}$ directly. However, in some tasks that need complex reasoning (e.g., math word problems and commonsense reasoning), it is not easy to learn the mapping from $x_i$ to $y_i$ with only $k$ demonstrations. Although template engineering has been studied in prompting~, some researchers aim to design a better format of demonstrations for ICL by describing tasks with the instruction $I$. found that given several demonstration examples, LLMs can generate task instructions themselves. Considering the generation abilities of LLMs, proposed an Automatic Prompt Engineer for automatic instruction generation and selection.\nTo further improve the quality of the automatically generated instructions, several strategies propose using LLMs to bootstrap off their own generations~. \nAdditionally, chain-of-thought (CoT)~ introduces intermediate reasoning steps between inputs and outputs to enhance problem-solving and comprehension. 
Recent advancements have also emphasized enhancing step-by-step reasoning in models~.", "id": "3bdaadb6-7ca4-4ca2-8e38-e78de33e240f", "level": "subsection", "origin_cites_number": 9, "parent_id": "45f129c0-f2ad-411c-a646-2578b65d6743", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Prompt Designing" ], [ "subsection", "Instruction Formatting" ] ], "subsections": [], "title": "Instruction Formatting" }, { "cite_extract_rate": 1, "cites": [ 679, 3385 ], "content": "\\label{sec:scoring}\n\\input{tabs/score_func}\nThe scoring function determines how to transform the predictions of a language model into an estimation of the likelihood of a specific answer. The Direct method uses the conditional probability of candidate answers represented by tokens in the model's vocabulary~. The answer with the highest probability is selected as the final answer, but this method restricts template design by requiring answer tokens to be at the end of input sequences.\nPerplexity (PPL) is another commonly used metric that computes the sentence perplexity of the entire input sequence \\( S_j = \\{ C, s(x, y_j, I) \\} \\), which includes tokens from demonstration examples \\( C \\), the input query \\( x \\), and the candidate label \\( y_j \\). PPL evaluates the probability of the sentence, eliminating token position limitations but requiring additional computation time. proposed using channel models (Channel) to compute the conditional probability in reverse, estimating the likelihood of the input query given the label. This approach requires language models to generate every token in the input, potentially boosting performance under imbalanced training data. 
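The Direct and PPL scoring can be sketched with a toy probability table standing in for a real LLM (the table, contexts, and helper names are illustrative assumptions, not an actual model API):

```python
import math

# Toy conditional probabilities P(answer token | context); stands in for an LLM.
TOY_LM = {
    ("review: great .", "positive"): 0.7,
    ("review: great .", "negative"): 0.3,
}

def direct_score(context, label):
    # Direct: conditional probability of the candidate answer token.
    return TOY_LM[(context, label)]

def ppl_score(token_probs):
    # PPL of a candidate-completed sequence S_j: exp of the mean negative
    # log-probability over all tokens; lower perplexity = more likely sequence.
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

best = max(["positive", "negative"], key=lambda y: direct_score("review: great .", y))
print(best)                                    # → positive
print(round(ppl_score([0.5, 0.7, 0.7]), 3))    # → 1.598
```

With a real LLM, `direct_score` would read the answer token's probability from the final position, while `ppl_score` would aggregate per-token probabilities over the whole sequence, which is why PPL removes the token-position restriction at extra compute cost.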
We summarize all three scoring functions in Table~\\ref{tab:score_func}.", "id": "f8a812bb-8dc9-4815-84d6-18b13d203e1a", "level": "subsection", "origin_cites_number": 2, "parent_id": "45f129c0-f2ad-411c-a646-2578b65d6743", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Prompt Designing" ], [ "subsection", "Scoring Function" ] ], "subsections": [], "title": "Scoring Function" }, { "cite_extract_rate": 0.4, "cites": [ 3386, 7712 ], "content": "\\label{sec:analysis}\nTo understand ICL, recent studies attempt to investigate what influences ICL performance~ and why ICL works~. \nIn this section, we present a detailed elaboration of influencing factors~(\\S \\ref{sec:inf_factors}) and learning mechanisms~(\\S \\ref{sec:mech}) of ICL, as illustrated in Figure~\\ref{fig:factor}.\n\\begin{figure*}\n \\centering\n \\vspace{-1.0cm}\n \\includegraphics[width=0.9\\textwidth]{fig/icl_ana.pdf}\n \\caption{Summary of factors that have a relatively strong correlation to ICL performance and different perspectives to explain why ICL works.}\n \\label{fig:factor}\n\\end{figure*}", "id": "55f7b1b0-ead8-405c-a87a-2eebb0d97e59", "level": "section", "origin_cites_number": 5, "parent_id": "66ef2e50-c617-457c-84da-f8257ab93ea9", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Analysis" ] ], "subsections": [ "3427e8cb-a3e1-4bac-aca3-43f553eaaaa2", "b67a0787-b89f-4174-9d2f-f1c0b7ad9f5a" ], "title": "Analysis" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:inf_factors}\nWe discuss relevant research addressing \\textsl{what influences ICL performance}, including factors both in the pretraining stage and in the inference stage.", "id": "3427e8cb-a3e1-4bac-aca3-43f553eaaaa2", "level": "subsection", "origin_cites_number": 0, "parent_id": "55f7b1b0-ead8-405c-a87a-2eebb0d97e59", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Analysis" ], [ "subsection", "Influencing Factors" ] ], 
"subsections": [ "9175a2f0-51d6-4413-abde-252cc2e7cae3", "5bc33e85-fc18-430c-a28b-9f08a9afcf64" ], "title": "Influencing Factors" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 3388, 3386, 679, 3387, 8556 ], "content": "\\label{sec:inf_factors_pre}\nWe first introduce the factors in the pretraining stage that influence ICL performance. The diversity of pretraining corpora significantly impacts ICL performance~. \nIn particular, found that the source domain is more important than the corpus size, suggesting that combining multiple corpora may lead to the emergence of ICL ability. Similarly, empirically identified a task diversity threshold beyond which LLMs exhibit strong ICL capabilities in unseen tasks.\nAnother line of research investigates the impact of data distribution on ICL~. For instance, demonstrated that ICL capability emerges when the training data exhibits specific distributional properties, such as burstiness, wherein items appear in clusters rather than being uniformly distributed over time.\nBeyond these works, several studies have investigated the impact of model architecture and training process on ICL performance~. investigated the emergent abilities of many large-scale models on multiple tasks. They suggested that a pretrained model acquires some emergent ICL abilities when it reaches a large scale of pretraining steps or model parameters. 
pointed out that the in-context samples should attend to each other during inference, indicating that current causal LLMs may lead to suboptimal ICL performance.", "id": "9175a2f0-51d6-4413-abde-252cc2e7cae3", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "3427e8cb-a3e1-4bac-aca3-43f553eaaaa2", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Analysis" ], [ "subsection", "Influencing Factors" ], [ "subsubsection", "Pretraining Stage" ] ], "subsections": [], "title": "Pretraining Stage" }, { "cite_extract_rate": 0.75, "cites": [ 3389, 3377, 2189, 8470, 7712, 3390, 427, 3391, 7137 ], "content": "\\label{sec:inf_factors_infe}\nDuring inference, there are also multiple properties of demonstration examples that influence ICL performance. proved that input-label settings such as the pairing format, the exposure of label space, and the input distribution contribute substantially to ICL performance. \nHowever, contrary to the conclusion in that\ninput-label mapping matters little to ICL, later studies showed that accurate input-label mapping influences ICL performance significantly~. further pointed out that flipped or semantically-unrelated input-label mappings can also be learned.\nFrom the perspective of demonstration construction, recent literature focuses on the diversity and simplicity of demonstrations~, the order of samples~, and the similarity between demonstrations and queries~. For example, found that demonstration samples with embeddings closer to those of the query samples typically yield better performance than those with more distant embeddings.\nNotably, despite efforts to refine demonstrations to optimize the performance, there still remain clear feature biases during ICL inference~. 
Overcoming strong prior biases and ensuring the model gives equal weight to all contextual information remain challenges~.", "id": "5bc33e85-fc18-430c-a28b-9f08a9afcf64", "level": "subsubsection", "origin_cites_number": 12, "parent_id": "3427e8cb-a3e1-4bac-aca3-43f553eaaaa2", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Analysis" ], [ "subsection", "Influencing Factors" ], [ "subsubsection", "Inference Stage" ] ], "subsections": [], "title": "Inference Stage" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:mech}\nFrom a learning mechanism perspective, we delve into the research addressing why ICL is effective.", "id": "b67a0787-b89f-4174-9d2f-f1c0b7ad9f5a", "level": "subsection", "origin_cites_number": 0, "parent_id": "55f7b1b0-ead8-405c-a87a-2eebb0d97e59", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Analysis" ], [ "subsection", "Learning Mechanism" ] ], "subsections": [ "603d2bd4-3e98-4a01-9bd8-b6aadc1927f2", "974284f2-bba9-44b8-96d0-90963643f1ad" ], "title": "Learning Mechanism" }, { "cite_extract_rate": 0.375, "cites": [ 3393, 3392, 7713 ], "content": "\\label{sec:mech_fun}\nThe ICL capability is intimately connected to specific functional modules within Transformers.\nAs one of the core components, the attention module is a focal point in the study of ICL mechanism~. 
Particularly, identified specific attention heads, referred to as ``induction heads'', that can replicate previous patterns for next-token prediction, thus progressively developing ICL capabilities.\nAdditionally, focused on the information flow in Transformers and found that during the ICL process, demonstration label words serve as anchors, which aggregate and distribute key information for the final prediction.", "id": "603d2bd4-3e98-4a01-9bd8-b6aadc1927f2", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "b67a0787-b89f-4174-9d2f-f1c0b7ad9f5a", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Analysis" ], [ "subsection", "Learning Mechanism" ], [ "subsubsection", "Functional Modules" ] ], "subsections": [], "title": "Functional Modules" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:mech_theo}\nIn this subsection, we introduce the theoretical interpretations of ICL from different views.", "id": "974284f2-bba9-44b8-96d0-90963643f1ad", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "b67a0787-b89f-4174-9d2f-f1c0b7ad9f5a", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Analysis" ], [ "subsection", "Learning Mechanism" ], [ "subsubsection", "Theoretical Interpretation" ] ], "subsections": [ "a6fbde0e-09e4-47ee-819b-4e8355af9aaf", "b4efe669-d087-41b2-9a4c-d1b8e85990ac", "7efb0396-f265-4288-97b3-a8cba48ccb14" ], "title": "Theoretical Interpretation" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 3396, 3368, 3387, 3395, 3394 ], "content": "In the Bayesian framework, ICL is explained as implicit Bayesian inference, where models perform ICL by identifying a shared latent concept among examples~. Additional perspectives suggest that LLMs encode the Bayesian Model Averaging algorithm via the attention mechanism~. 
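This implicit Bayesian inference view can be written compactly (a sketch using the survey's notation, where $\theta$ is the latent concept inferred from the demonstrations $C$):

```latex
% ICL prediction as marginalization over a latent concept inferred from C:
P(y \mid x, C) = \int_{\theta} P(y \mid x, \theta)\, P(\theta \mid C)\, \mathrm{d}\theta
```

Intuitively, as demonstrations accumulate, $P(\theta \mid C)$ concentrates on the concept the examples share, which is what lets the model locate the task.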
As the number of in-context examples increases, implicit Bayesian inference becomes analogous to kernel regression~.", "id": "a6fbde0e-09e4-47ee-819b-4e8355af9aaf", "level": "paragraph", "origin_cites_number": 7, "parent_id": "974284f2-bba9-44b8-96d0-90963643f1ad", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Analysis" ], [ "subsection", "Learning Mechanism" ], [ "subsubsection", "Theoretical Interpretation" ], [ "paragraph", "Bayesian View" ] ], "subsections": [], "title": "Bayesian View" }, { "cite_extract_rate": 0.42857142857142805, "cites": [ 8653, 3393, 8652 ], "content": "Gradient descent offers another valuable lens for understanding ICL. identified a dual form between Transformer attention and gradient descent, finding that GPT-based ICL behaves similarly to explicit fine-tuning from multiple perspectives. Other studies have attempted to establish connections between ICL and gradient descent in simplified regression settings~. For instance, showed that linear attention-only Transformers with manually constructed parameters are closely related to models learned by gradient descent. found that self-attention-only Transformers exhibit similarities with models trained via gradient descent. However, the simplified settings used in these studies have led to debates about the direct applicability of these connections in real-world contexts~. 
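The dual form referred to above can be sketched in the simplified, unnormalized linear-attention setting these analyses typically use (keys $k_i$, values $v_i$, query $q$; the analogy treats each value-key outer product like a rank-one gradient update, with $e_i$ an error signal for input $x_i$):

```latex
% Linear attention collapses into a query applied to an accumulated weight matrix,
\mathrm{LinAttn}(V, K, q) = \sum_{i} v_i \, (k_i^{\top} q) = \Big( \sum_{i} v_i k_i^{\top} \Big) q,
% which mirrors a linear layer after rank-one gradient updates:
(W_0 + \Delta W)\, q, \qquad \Delta W = \sum_{i} e_i \, x_i^{\top}.
```

This is a simplification: full softmax attention does not collapse this way, which is one source of the debates noted above.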
argued that Transformers perform ICL on linear regression using higher-order optimization techniques rather than gradient descent.", "id": "b4efe669-d087-41b2-9a4c-d1b8e85990ac", "level": "paragraph", "origin_cites_number": 7, "parent_id": "974284f2-bba9-44b8-96d0-90963643f1ad", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Analysis" ], [ "subsection", "Learning Mechanism" ], [ "subsubsection", "Theoretical Interpretation" ], [ "paragraph", "Gradient Descent View" ] ], "subsections": [], "title": "Gradient Descent View" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 3397, 3399, 7714, 3391, 3398 ], "content": "Beyond connecting ICL with a single algorithm, researchers have analyzed it from various perspectives, including ability decoupling, algorithmic learning, and information theory. decoupled ICL capabilities into task recognition ability and task learning ability, each manifesting under different conditions. Another typical theory abstracts ICL as an algorithmic learning problem~, where Transformers dynamically select algorithms, such as gradient descent and ridge regression, tailored to different ICL instances. Moreover, utilized information theory to show an error bound for ICL under linguistically motivated assumptions, explaining how next-token prediction can bring about the ICL ability. \nThese analytical studies have taken an essential step to explain ICL. However, most of them focused on simple tasks and small models. 
Extending such analyses to more extensive tasks and larger models may be the next step to consider.", "id": "7efb0396-f265-4288-97b3-a8cba48ccb14", "level": "paragraph", "origin_cites_number": 6, "parent_id": "974284f2-bba9-44b8-96d0-90963643f1ad", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Analysis" ], [ "subsection", "Learning Mechanism" ], [ "subsubsection", "Theoretical Interpretation" ], [ "paragraph", "Other Views" ] ], "subsections": [], "title": "Other Views" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 3401, 1578, 424, 3400, 7565, 8654, 3001, 9122, 3005, 3002, 3003, 7715, 3004, 1576 ], "content": "\\label{sec:application}\n\\label{app}\nGiven its user-friendly interface and lightweight prompting method, ICL has broad applications in traditional NLP tasks~.\nParticularly, by using demonstrations that explicitly guide the reasoning process, ICL manifests remarkable effects on tasks requiring complex reasoning~ and compositional generalization~. \nWe explore several emerging and prevalent applications of ICL, including data engineering, model augmentation, and knowledge updating.\n\\textbf{1) Data Engineering:} Unlike traditional methods such as human annotation and noisy automatic annotation, ICL generates relatively high-quality data at a lower cost, leading to improved performance~.\n\\textbf{2) Model Augmentation:} The context-flexible nature of ICL shows promise in model augmentation. It can enhance retrieval-augmented methods by prepending grounding documents to the input~. Additionally, ICL for retrieval demonstrates potential in steering models toward safer outputs~.\n\\textbf{3) Knowledge Updating:} LLMs often contain outdated or incorrect knowledge~. 
ICL has demonstrated efficacy in revising such knowledge through carefully crafted demonstrations, yielding higher success rates compared to gradient-based methods~.\nAs mentioned above, ICL has yielded significant benefits on both traditional and emergent NLP applications. \nThe tremendous success of ICL in NLP has inspired researchers to explore its potential in various modalities beyond text (elaborated in Appendix~\\ref{app:vision}), including vision ~, vision-language~, as well as speech applications~.", "id": "0a759d57-0691-46bd-b44b-b65962f3c7a4", "level": "section", "origin_cites_number": 21, "parent_id": "66ef2e50-c617-457c-84da-f8257ab93ea9", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Application" ] ], "subsections": [], "title": "Application" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:challege_future}\nIn this section, we review existing challenges and discuss future directions for ICL.", "id": "802fa11b-661a-42b8-8518-6da66bcd6b28", "level": "section", "origin_cites_number": 0, "parent_id": "66ef2e50-c617-457c-84da-f8257ab93ea9", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Challenges and Future Directions" ] ], "subsections": [ "64592261-2e34-4c89-82f0-e119f7459b09" ], "title": "Challenges and Future Directions" }, { "cite_extract_rate": 0.5, "cites": [ 3403, 3402 ], "content": "The use of demonstrations in ICL introduces two challenges: (1) higher computational costs with an increasing number of demonstrations (\\textit{efficiency}), and (2) fewer learnable samples due to the maximum input length of LLMs (\\textit{scalability}). Prior research has attempted to mitigate these issues by distilling lengthy demonstrations into compact vectors~ or expediting LLM inference times~. However, these methods often involve a trade-off in performance or necessitate access to model parameters, which is impractical for closed-source models like ChatGPT and Claude~. 
Thus, enhancing the scalability and efficiency of ICL with more demonstrations remains a significant challenge.", "id": "64592261-2e34-4c89-82f0-e119f7459b09", "level": "paragraph", "origin_cites_number": 4, "parent_id": "802fa11b-661a-42b8-8518-6da66bcd6b28", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Challenges and Future Directions" ], [ "paragraph", "Efficiency and Scalability" ] ], "subsections": [ "04759d55-93a4-4f25-82cf-dfb0d48ffd43" ], "title": "Efficiency and Scalability" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 3372 ], "content": "ICL heavily relies on high-quality demonstrations selected from annotated examples, which are often scarce in low-resource languages and tasks. This scarcity poses a challenge to the generalization ability of ICL~. Given that there is a substantial discrepancy in the availability of annotated high-resource data and low-resource data, the potential to leverage high-resource data to address low-resource tasks is highly appealing~.", "id": "04759d55-93a4-4f25-82cf-dfb0d48ffd43", "level": "paragraph", "origin_cites_number": 3, "parent_id": "64592261-2e34-4c89-82f0-e119f7459b09", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Challenges and Future Directions" ], [ "paragraph", "Efficiency and Scalability" ], [ "paragraph", "Generalization" ] ], "subsections": [ "4c62fbf4-3315-4e53-a63b-a83cbb5b16c3" ], "title": "Generalization" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 7716, 3404 ], "content": "Recent advances in context-extended LLMs have spurred research into the impact of ICL when using an increasing number of demonstration examples~. However, researchers have found that increasing the number of demonstrations does not necessarily enhance performance and may even be detrimental. These performance declines indicate a need for further investigation. 
Additionally, developed LongICLBench, which includes diverse extreme-label classification tasks, revealing further weaknesses of LLMs in comprehending extended demonstrations.", "id": "4c62fbf4-3315-4e53-a63b-a83cbb5b16c3", "level": "paragraph", "origin_cites_number": 3, "parent_id": "04759d55-93a4-4f25-82cf-dfb0d48ffd43", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Challenges and Future Directions" ], [ "paragraph", "Efficiency and Scalability" ], [ "paragraph", "Generalization" ], [ "paragraph", "Long-context ICL" ] ], "subsections": [], "title": "Long-context ICL" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:conclusion}\nIn this paper, we comprehensively review the existing literature on ICL, examining advanced techniques, conducting analytical studies, discussing relevant applications, and identifying critical challenges and potential directions for future research. To our knowledge, this is the first comprehensive survey dedicated to ICL. We aim to highlight the current state of research in ICL and provide insights to guide future work in this promising area.\n\\section*{Limitations}\nThis paper offers a comprehensive examination and summary of current methodologies and analyses in the area of In-Context Learning (ICL). However, given the extensive body of related work, particularly in demonstration design and the principle analysis of ICL, we may have overlooked some equally valuable contributions. Additionally, we outline several future directions for research in ICL, including long-context ICL, efficiency and scalability in ICL, etc. 
We plan to leave these aspects for future work.\n\\bibliography{custom}\n\\appendix", "id": "b2ae7d73-bac0-43bb-ae4d-92f917aae87e", "level": "section", "origin_cites_number": 0, "parent_id": "66ef2e50-c617-457c-84da-f8257ab93ea9", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{takeaway}\n\\label{app:takeaway}\nThrough a comprehensive literature review of ICL, we have discovered takeaways across several domains. These include training, demonstration design, scoring functions, analysis, and ICL applications that go beyond text.", "id": "2aa08bb9-e45f-4156-b22d-928383da2a66", "level": "section", "origin_cites_number": 0, "parent_id": "66ef2e50-c617-457c-84da-f8257ab93ea9", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Takeaway" ] ], "subsections": [ "df1883cb-dd19-4cdd-b9d0-8f645890723b", "eb0a8c85-e141-48a1-b138-458c646f65b6", "432b76b1-df24-43b9-840c-fb0dccf6c9ea", "547c04d6-4f21-4f35-95d5-4d2b00594ab2", "1c2b901c-e1cc-4779-9e45-8ae68dd7893f" ], "title": "Takeaway" }, { "cite_extract_rate": 0, "cites": [], "content": "To further enhance ICL capabilities, methods have been proposed to train LLMs at the pretraining stage and to warm them up before ICL inference.\n \\textbf{$\\Diamond$ Takeaway}: \n(1) The key idea of training before inference is to bridge the gap between pretraining and downstream ICL formats by introducing objectives close to in-context learning. Warmup is optional for ICL as many pretrained LLMs have manifested the ICL ability. \n(2) Compared to in-context finetuning, which involves demonstration examples, instruction finetuning without demonstrations is simpler and more popular. All these warmup methods improve the ICL capability by updating the model parameters, which implies that the ICL capability of the original LLMs has great potential for improvement. 
Therefore, although ICL does not strictly require model warmup, we recommend adding a warmup stage before ICL inference.\n(3) \nThe performance advancement made by warmup encounters a plateau when increasingly scaling up the training data, indicating that LLMs only need a small amount of data to adapt to learn from the context during warmup.", "id": "df1883cb-dd19-4cdd-b9d0-8f645890723b", "level": "subsection", "origin_cites_number": 0, "parent_id": "2aa08bb9-e45f-4156-b22d-928383da2a66", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Takeaway" ], [ "subsection", "Training" ] ], "subsections": [], "title": "Training" }, { "cite_extract_rate": 0, "cites": [], "content": "The performance of ICL strongly relies on the demonstration surface, including the selection, formatting, and ordering of demonstration examples.\n \\textbf{$\\Diamond$ Takeaway}: \n (1) Demonstration selection strategies improve the ICL performance, but most of them are instance-level. Since ICL is mainly evaluated under few-shot settings, the corpus-level selection strategy is more important yet underexplored. \n (2) The output score or probability distribution of LLMs plays an important role in instance selection. \n (3) For k demonstrations, the size of the search space of permutations is k!. How to find the best orders efficiently or how to approximate the optimal ranking better is also a challenging question. \n(4) Adding chain-of-thoughts can effectively decompose complex reasoning tasks into intermediate reasoning steps. During inference, multi-stage demonstration designing strategies are applied to generate CoTs better. How to improve the CoT prompting ability of LLMs is also worth exploring. \n(5) In addition to human-written demonstrations, the generative nature of LLMs can be utilized in demonstration designing. LLMs can generate instructions, demonstrations, probing sets, chain-of-thoughts, and so on. 
By using LLM-generated demonstrations, ICL can largely get rid of human efforts on writing templates.", "id": "eb0a8c85-e141-48a1-b138-458c646f65b6", "level": "subsection", "origin_cites_number": 0, "parent_id": "2aa08bb9-e45f-4156-b22d-928383da2a66", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Takeaway" ], [ "subsection", "Demonstration Organization" ] ], "subsections": [], "title": "Demonstration Organization" }, { "cite_extract_rate": 0, "cites": [], "content": "The scoring function determines how to transform the predictions of a language model into an estimation of the likelihood of a specific answer. The answer with the highest probability is selected as the final answer.\n \\textbf{$\\Diamond$ Takeaway}: \n(1) Although directly adopting the conditional probability of candidate answers is efficient, this method still poses some restrictions on the template design. Perplexity is also a simple and widely used scoring function. This method has universal applications, including both classification tasks and generation tasks. However, both methods are still sensitive to the demonstration surface, while Channel is a remedy that especially works under imbalanced data regimes. \n(2) Existing scoring functions all compute a score straightforwardly from the conditional probability of LLMs. 
There is limited research on calibrating the bias or mitigating the sensitivity via scoring strategies.", "id": "432b76b1-df24-43b9-840c-fb0dccf6c9ea", "level": "subsection", "origin_cites_number": 0, "parent_id": "2aa08bb9-e45f-4156-b22d-928383da2a66", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Takeaway" ], [ "subsection", "Scoring Function" ] ], "subsections": [], "title": "Scoring Function" }, { "cite_extract_rate": 0, "cites": [], "content": "Numerous analytical studies investigate influencing factors of ICL during both the pretraining and inference stages, and attempt to figure out the learning mechanisms of ICL from the perspective of functional modules and theoretical interpretation.\n \\textbf{$\\Diamond$ Takeaway}: \n(1)\nKnowing why ICL works and what factors may influence it can help us improve the ICL performance.\n(2) \nAlthough some analytical studies have taken a preliminary step to explain ICL, most of them are limited to simple tasks and small models. \nExtending the analysis to more extensive tasks and larger models may be the next step to be considered. \n(3) Among existing work, explaining ICL with gradient descent seems to be a reasonable, general, and promising direction for future research. 
\nIf we build clear connections between ICL and gradient-descent-based learning, we can borrow ideas from the history of traditional deep learning to improve ICL.", "id": "547c04d6-4f21-4f35-95d5-4d2b00594ab2", "level": "subsection", "origin_cites_number": 0, "parent_id": "2aa08bb9-e45f-4156-b22d-928383da2a66", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Takeaway" ], [ "subsection", "Analysis" ] ], "subsections": [], "title": "Analysis" }, { "cite_extract_rate": 1, "cites": [ 7717 ], "content": "The tremendous success of ICL in NLP has inspired researchers to explore in-context learning in different modalities beyond natural language with promising results.\n\\textbf{$\\Diamond$ Takeaway}: \n(1) Properly formatted data (e.g., interleaved image-text datasets for vision-language tasks) and architecture designs are key factors for activating the potential of in-context learning. Exploring it in a more complex structured space such as for graph data is challenging and promising~.\n(2) Findings in textual in-context learning demonstration design and selection cannot be trivially transferred to other modalities. Domain-specific investigation is required to fully leverage the potential of in-context learning in various modalities.", "id": "1c2b901c-e1cc-4779-9e45-8ae68dd7893f", "level": "subsection", "origin_cites_number": 1, "parent_id": "2aa08bb9-e45f-4156-b22d-928383da2a66", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Takeaway" ], [ "subsection", "In-context Learning Beyond Text" ] ], "subsections": [], "title": "In-context Learning Beyond Text" }, { "cite_extract_rate": 0.4, "cites": [ 3405, 1096, 7718 ], "content": "\\label{app:experiment}\nIn the experiment, we utilize 8 demonstrations and test on gpt2~, gptj~, LLaMA3-8B-Instruct and Qwen2-7B-Instruct~. All experiments are executed on a single NVIDIA A100 (80G). For datasets we choose sst2~, sst5~, commonsense\\_qa~, ag\\_news~ and snli~. 
For the last two datasets, we only select 1000 examples from the training set for retrieval and the first 1000 examples from the test set for testing. During the inference phase, a PPL-based approach is employed. The entire code framework is built upon OpenICL~, for which we extend our gratitude to the authors.", "id": "634951dd-b69d-404b-ade3-4141eca1b76e", "level": "section", "origin_cites_number": 10, "parent_id": "66ef2e50-c617-457c-84da-f8257ab93ea9", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Experimental Detail" ] ], "subsections": [], "title": "Experimental Detail" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:evaluation}", "id": "e33c6336-e49c-4261-adca-a52c2331dc95", "level": "section", "origin_cites_number": 0, "parent_id": "66ef2e50-c617-457c-84da-f8257ab93ea9", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Evaluation and Resources" ] ], "subsections": [ "bd2bad96-bfd7-49ee-9cce-84cf58960935", "15a026a5-d602-4a24-8aa8-2f69176da092", "0351607d-6a8b-444a-8c0d-074db1af9e50" ], "title": "Evaluation and Resources" }, { "cite_extract_rate": 1, "cites": [ 1565, 679, 1569, 3379 ], "content": "As a general learning paradigm, ICL can be examined on various traditional datasets and benchmarks, e.g., SuperGLUE~, SQuAD~. \nImplementing ICL with 32 randomly sampled examples on SuperGLUE, ~ found that GPT-3 can achieve results comparable to state-of-the-art (SOTA) finetuning performance on COPA and ReCoRD, but still falls behind finetuning on most NLU tasks.\n~ showed the potential of scaling up the number of demonstration examples. However, the improvement brought by scaling is very limited. 
At present, compared to finetuning, there still remains some room for ICL to improve on traditional NLP tasks.\n\\input{tabs/tab_dataset.tex}", "id": "bd2bad96-bfd7-49ee-9cce-84cf58960935", "level": "subsection", "origin_cites_number": 4, "parent_id": "e33c6336-e49c-4261-adca-a52c2331dc95", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Evaluation and Resources" ], [ "subsection", "Traditional Tasks" ] ], "subsections": [], "title": "Traditional Tasks" }, { "cite_extract_rate": 0.75, "cites": [ 2221, 2215, 7466, 1550, 2220, 5965 ], "content": "In the era of large language models with in-context learning capabilities, researchers are more interested in evaluating the intrinsic capabilities of large language models without downstream task finetuning~.\nTo explore the capability limitations of LLMs on various tasks, \n~ proposed the BIG-Bench~, a large benchmark covering \na wide range of tasks, including linguistics, chemistry, biology, social behavior, and beyond. \nThe best models have already outperformed the average reported human-rater results on 65\\% of the BIG-Bench tasks through ICL~. To further explore tasks actually unsolvable by current language models, proposed a more challenging ICL benchmark, BIG-Bench Hard (BBH). BBH includes 23 unsolved tasks, constructed by selecting challenging tasks where the state-of-the-art model performances are far below human performance. Besides, researchers are searching for inverse scaling tasks,\\footnote{\\url{https://github.com/inverse-scaling/prize}} that is, tasks where model performance reduces when scaling up the model size. 
Such tasks also highlight potential issues with the current paradigm of ICL.\nTo further probe the model generalization ability, ~ proposed OPT-IML Bench, consisting of 2000 NLP tasks from 8 existing benchmarks, especially benchmark for ICL on held-out categories.\nSpecifically, a series of studies focus on exploring the reasoning ability of ICL.~ generated an example from a synthetic world model\nrepresented in first-order logic and parsed the ICL generations into symbolic proofs for formal analysis. They found that LLMs can make correct individual deduction steps via ICL.\n~ constructed the MGSM benchmark to evaluate the chain-of-thought reasoning abilities of LLMs in multilingual settings, finding that LLMs manifest complex reasoning across multiple languages.\nTo further probe more sophisticated planning and reasoning abilities of LLMs, ~ provided multiple test cases for evaluating various reasoning abilities on actions and change, where existing ICL methods on LLMs show poor performance.\nIn addition, ~ proposed a benchmark called SAMSum, which is a human-annotated dataset specifically designed for multi-turn dialogue summarization, to evaluate the quality of dialogue summaries generated by LLMs via ICL.", "id": "15a026a5-d602-4a24-8aa8-2f69176da092", "level": "subsection", "origin_cites_number": 8, "parent_id": "e33c6336-e49c-4261-adca-a52c2331dc95", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Evaluation and Resources" ], [ "subsection", "New Challenging Tasks" ] ], "subsections": [], "title": "New Challenging Tasks" }, { "cite_extract_rate": 1, "cites": [ 7718 ], "content": "Noticing that ICL methods are often implemented differently and evaluated using different LLMs and tasks, developed OpenICL, an open-source toolkit enabling flexible and unified ICL assessment. 
With its adaptable architecture, OpenICL facilitates the combination of distinct components and offers state-of-the-art retrieval and inference techniques to accelerate the integration of ICL into advanced research.", "id": "0351607d-6a8b-444a-8c0d-074db1af9e50", "level": "subsection", "origin_cites_number": 1, "parent_id": "e33c6336-e49c-4261-adca-a52c2331dc95", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "Evaluation and Resources" ], [ "subsection", "Open-source Tools" ] ], "subsections": [], "title": "Open-source Tools" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{app:vision}\nThe tremendous success of ICL in NLP has inspired researchers to explore its potential in different modalities, including visual, vision+language and speech tasks as well.", "id": "6f8ecda4-ab0a-4fa9-99a1-7bcb0f3e77fe", "level": "section", "origin_cites_number": 0, "parent_id": "66ef2e50-c617-457c-84da-f8257ab93ea9", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "In-Context Learning Beyond Text" ] ], "subsections": [ "4a7fc6b4-2d7c-41b4-8888-0d93b87f5fb8", "0c600619-28cd-4f67-844f-b94c602c69e1", "2a212c7e-84d8-4ea3-a5e1-494fcfbb99c9" ], "title": "In-Context Learning Beyond Text" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 3406, 3001, 3407, 9144 ], "content": "\\begin{figure}\n \\centering\n \\includegraphics[width=0.49\\textwidth]{fig/mm-case.pdf}\n \\caption{Image-only and textual augmented prompting for visual in-context learning.}\n \\label{fig:mm_case}\n\\end{figure}\nEmploying masked auto-encoders (MAE) for image patch infilling, the model trained by generates consistent output images at inference, demonstrating robust ICL capabilities for tasks like image segmentation. This method is expanded in Painter , which incorporates multiple tasks to develop a generalist model with competitive performance. 
SegGPT further builds on this by integrating diverse segmentation tasks and exploring ensemble techniques to enhance example quality. Additionally, introduce the Prompt Diffusion model, the first diffusion-based model with ICL abilities, guided by an extra text prompt for more precise image generation, as illustrated in Figure~\\ref{fig:mm_case}.\nSimilar to ICL in NLP, the effectiveness of visual in-context learning greatly depends on the choice of demonstration images, as shown in research by and . To optimize this, examine two strategies: using an unsupervised retriever to select the nearest samples with an existing model, and a supervised approach to train a specialized retriever to boost ICL performance. These approaches improve results by ensuring semantic similarity and better alignment in viewpoint, background, and appearance. Beyond retrieval, also investigate a prompt fusion technique to further enhance outcomes.", "id": "4a7fc6b4-2d7c-41b4-8888-0d93b87f5fb8", "level": "subsection", "origin_cites_number": 6, "parent_id": "6f8ecda4-ab0a-4fa9-99a1-7bcb0f3e77fe", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "In-Context Learning Beyond Text" ], [ "subsection", "Visual In-Context Learning" ] ], "subsections": [], "title": "Visual In-Context Learning" }, { "cite_extract_rate": 0.9, "cites": [ 7565, 3003, 2243, 7564, 3015, 2237, 2229, 3408, 2238 ], "content": "In the vision-language domain, a vision encoder paired with a frozen language model demonstrates multi-modal few-shot learning capabilities after training on image-caption datasets, as shown by the Frozen model . Extending this, Flamingo integrates a vision encoder with large language models (LLMs) for enhanced in-context learning across multi-modal tasks, leveraging large-scale web corpora . Similarly, Kosmos-1 exhibits zero-shot, few-shot, and multi-modal chain-of-thought prompting abilities . 
METALM introduces a semi-causal language modeling objective to achieve strong ICL performance across vision-language tasks . The ICL-D3IE approach employs a novel in-context learning framework that iteratively updates diverse demonstrations (including hard, layout-aware, and formatting demonstrations) to train large language models (LLMs) for enhanced document information extraction (DIE). Recent advancements include creating instruction tuning datasets from existing vision-language tasks or with advanced LLMs like GPT-4, and connecting LLMs with powerful vision foundational models like BLIP-2 for multi-modal learning .", "id": "0c600619-28cd-4f67-844f-b94c602c69e1", "level": "subsection", "origin_cites_number": 10, "parent_id": "6f8ecda4-ab0a-4fa9-99a1-7bcb0f3e77fe", "prefix_titles": [ [ "title", "A Survey on In-context Learning" ], [ "section", "In-Context Learning Beyond Text" ], [ "subsection", "Multi-Modal In-Context Learning" ] ], "subsections": [], "title": "Multi-Modal In-Context Learning" }, { "cite_extract_rate": 1, "cites": [ 3002, 3004 ], "content": "In the speech area, ~ treated text-to-speech synthesis as a language modeling task. \nThey use audio codec codes as an intermediate representation and propose the first TTS framework with strong in-context learning capability. \nSubsequently, VALLE-X~ extends the idea to multi-lingual scenarios, demonstrating superior performance in zero-shot cross-lingual text-to-speech synthesis and zero-shot speech-to-speech translation tasks.\n\\end{document}
109
[ 1578, 2189, 8470, 679, 3368, 7697, 8556, 1552, 1554, 3369, 2445, 9115, 7711, 7136, 8534, 1587, 8649, 1553, 2198, 7468, 1594, 3374, 3370, 3373, 8650, 3372, 3371, 3375, 3377, 3378, 3376, 8651, 3379, 3380, 3381, 424, 3382, 3383, 3384, 2467, 3385, 3386, 7712, 3388, 3387, 3389, 3390, 427, 3391, 7137, 3393, 3392, 7713, 3396, 3395, 3394, 8653, 8652, 3397, 3399, 7714, 3398, 3401, 3400, 7565, 8654, 3001, 9122, 3005, 3002, 3003, 7715, 3004, 1576, 3403, 3402, 7716, 3404, 7717, 3405, 1096, 7718, 1565, 1569, 2221, 2215, 7466, 1550, 2220, 5965, 3406, 3407, 9144, 2243, 7564, 3015, 2237, 2229, 3408, 2238 ]
1.024331
[ "Akrati Saxena", "Sudarshan Iyengar" ]
Centrality Measures in Complex Networks: A Survey
2020
2020-11-14T01:55:11Z
cs.SI
In complex networks, each node has some unique characteristics that define the importance of the node based on the given application-specific context. These characteristics can be identified using various centrality metrics defined in the literature. Some of these centrality measures can be computed using local information of the node, such as degree centrality and semi-local centrality measure. Others use global information of the network like closeness centrality, betweenness centrality, eigenvector centrality, Katz centrality, PageRank, and so on. In this survey, we discuss these centrality measures and the state of the art literature that includes the extension of centrality measures to different types of networks, methods to update centrality values in dynamic networks, methods to identify top-k nodes, approximation algorithms, open research problems related to the domain, and so on. The paper is concluded with a discussion on application specific centrality measures that will help to choose a centrality measure based on the network type and application requirements.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "6773f6d5-118a-4a4b-ab25-f5ac5b895812", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ] ], "subsections": [ "1a06195d-152a-44a2-82b3-04d1fc9fc651", "71218188-70f9-437f-9b95-51ca95b32683", "5e497a2d-9d11-4ca1-b91c-030c2e79cb0a", "6306c16c-fa57-44c4-bc09-40622f955f9d", "a2c63b19-259d-46ba-ba80-92708981d707", "cb521dba-dace-41a1-8478-d8f347f3c503", "5242a34a-dcae-41c1-bf0e-c0cdcff90c07", "d46b1247-4503-4fb3-9300-7e6dee95952b", "ff5648bf-dbc7-4db3-bccd-4fd365b87c8f", "daa3664b-35a8-4d0b-b5a1-43bbb0f3c08b", "62ad3af5-4922-4a34-81ec-5b5896fc098d" ], "title": "root" }, { "cite_extract_rate": 0.11111111111111101, "cites": [ 9132, 8897 ], "content": "Complex networks are encountered frequently in our day to day lives such as World Wide Web , Internet , Social Friendship networks , Collaboration networks , etc. In these varieties of the networks, each node possesses some unique characteristics that are used to define its importance based on the given application context. In various real-life applications, we need to identify highly influential or important nodes; for example, if we want to set up a service center for the public, then which location is best for this, or if we want to provide free samples of the product then to whom we should give it, or if we want to identify popular people in society then how to find them? The importance of a node changes based on the given application context. A high degree node can be influential in spreading the information in the local neighborhood, but it will not be able to make the information viral globally if it is not connected with the other influential/core nodes . Similarly, a high betweenness node will be important to spread the information across the communities, but it might not be influential locally based on its connections in the local community. 
Researchers have studied these phenomena and defined various centrality measures to identify important nodes based on the application requirements like degree centrality , semi-local centrality , closeness centrality , betweenness centrality , eigenvector centrality , katz centrality , PageRank , and so on.\nThese centrality measures can be categorized as local centrality measures and global centrality measures. Centrality measures that can be calculated using local information of the node are called local centrality measures like degree centrality, semi-local centrality, etc. But the computation of global centrality measures, such as closeness centrality, betweenness centrality, eigenvector centrality, coreness centrality, PageRank, etc., requires the entire structure of the network. So, these centrality measures are called global centrality measures as they can not be computed without global information. These measures have high computational complexity. \nNext, we discuss the main research problems that have attracted researchers in this area.\n\\begin{enumerate}\n\\item \\textbf{Extensions:} \nIn 1977, Freeman proposed three main centrality measures to identify the importance of nodes based on its local and global connectivity . The proposed definitions were applicable for undirected and unweighted networks. But these unweighted networks are not enough to convey the complete information of the system. The different complex systems are represented using a variety of networks like directed networks, weighted networks, multiplex networks, and so on. These networks require redefining the centrality metrics to measure the importance of nodes. We will discuss these extensions in the current report.\n\\item \\textbf{Approximation Algorithms:} The computation of global centrality measures takes more time in large scale networks due to their high computational complexity. 
So, researchers have proposed approximation methods to quickly compute centrality values in real-world large-scale networks. \nThese approximation methods can be efficiently used to compare two nodes, where we don't need actual centrality values.\n\\item \\textbf{Update centrality values in dynamic networks:} Real-world networks are highly dynamic. It will be very costly to compute global centrality measures every time the network is updated, such as by the addition or deletion of nodes or edges. So, researchers have proposed efficient methods to update centrality values in dynamic networks whenever there is a change in the network.\n\\item \\textbf{Identification of top-k nodes:} In many applications, we are only interested in identifying the top few nodes, like identifying top-k important nodes in the Internet system to provide instant backup, selecting top-k people to provide free samples, etc. There are methods to identify top-k nodes without computing the centrality value of all nodes. \nThis reduces the overall complexity of the method.\n\\item \\textbf{Ranking of a node:} The main objective of defining centrality measures is to rank nodes. There is not much work in this direction. In one of our previous works, we have proposed a method to estimate the degree rank of a node using local information. To measure the global rank of a node using local information based on other centrality measures is still an open research question.\n\\item \\textbf{Applications:} The applications of centrality measures are highly dependent on the requirements. The discussion on each centrality measure is concluded with its applications. 
We also discuss which centrality measure can be applied to what types of applications in the Applications section.\n\\item \\textbf{Others:} A few other related works on global centrality measures include the computation of centrality measures in distributed networks, parallel algorithms to quickly compute centrality values, correlations of different centrality measures, hybrid centrality measures, and so on. The works that study the correlation of different centrality measures include . We will discuss these in brief in section 3.\n\\end{enumerate}\nThe brief categorization of research work on centrality measures is shown in Figure 1. Other research directions include the proposal of hybrid and application-specific centrality measures. These centrality measures work better for some specific types of networks.\n\\begin{figure}[]\n\\centering\n\\includegraphics[width=14cm]{rank2.png}\n\\caption{Categorization of research work on Centrality Measures}\n\\end{figure}\nCentrality measures have also been defined for a group of nodes to measure how central the group is with respect to the given network. For example, the closeness centrality of a group of nodes can be computed to understand how close these nodes are in the given network. Similarly, the coreness of a group of nodes can be computed to measure how tightly knit these nodes are with each other and with the rest of the network.\nLike nodes, complex networks also have unique characteristics. \nA few simple parameters for comparing two networks are their densities, diameters, rates of change, clustering coefficients, assortativity, and so on. Apart from these simple parameters, there are some other methods that can be used to compare more complex properties of the network, like core-periphery profiling, which shows how central the given network is . The work on centrality measures can be categorized as shown in Figure 2. 
In this report, our main focus is on the centrality measures defined for nodes.\n\\begin{figure}[]\n\\centering\n\\includegraphics[width=12cm]{rank1.png}\n\\caption{Categories of Centrality Measures}\n\\end{figure}\nThis report is structured as follows. In Section 2, we discuss preliminaries and definitions of centrality measures. Next, we discuss the state of the art on centrality measures. It is followed by the discussion on various real-world complex networks and the centrality measures that have been applied to study those networks. The paper is concluded in Section \\ref{con}.", "id": "1a06195d-152a-44a2-82b3-04d1fc9fc651", "level": "section", "origin_cites_number": 18, "parent_id": "6773f6d5-118a-4a4b-ab25-f5ac5b895812", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "A graph is represented as $G(V, E)$, where $V$ represents a set of the nodes, and $E$ represents a set of the edges. $n$ is the total number of nodes, and $m$ is the total number of edges in the network. $u,v,w,..$ represent nodes of the network. A binary network can be represented using a matrix $A$, where $a_{ij} =1$, if $i_{th}$ and $j_{th}$ nodes are connected with each other else $a_{ij}=0$. Similarly, a weighted network can be represented using an adjacency matrix $W$, where $w_{ij}$ represents the weight of a link between $i_{th}$ and $j_{th}$ nodes. 
We will use the following terminology throughout the discussion:\n\\begin{itemize}\n\\item Adjacent Nodes: The nodes that are connected via an edge are called adjacent nodes.\n\\item Degree: Degree of a node $u$ is denoted by $k_u$, which represents the total number of neighbors of the node.\n\\item In-degree: In-degree of a node $u$ is denoted by $k^{in}_u$, representing the total number of nodes with a directed link towards $u$.\n\\item Out-degree: Out-degree of a node $u$ is denoted by $k^{out}_u$, representing the total number of connections that $u$ has towards other nodes in the network.\n\\item Neighbor Set: $\\Gamma(u)$ represents the set of all neighbors of node $u$.\n\\item Strength $(s_u)$: In weighted networks, the strength of a node denotes the sum of weights of all edges connected to that node. It is defined as $s_{u} = \\sum_{v}w_{uv}$, and it is equivalent to the degree of a node in an unweighted network. In-strength and out-strength can be computed similarly to the in-degree and out-degree of a directed network.\n\\item Walk: $u_1, u_2, .... , u_p$ is called a walk between nodes $u_1$ and $u_p$, if $\\forall i$, $u_i$ is connected to $u_{i+1}$, where $i \\epsilon (1, p-1)$. Here $p-1$ is the length of the walk.\n\\item Path: $u_1, u_2, .... , u_p$ is called a path between nodes $u_1$ and $u_p$, if $\\forall i$, $u_i$ is connected to $u_{i+1}$, where $i \\epsilon (1, p-1)$. In a path, a node cannot be revisited. In a circular path, only the first and last nodes are the same. Here $p-1$ is the length of the path.\n\\item Distance: $d(u,v)$ represents the distance between two nodes $u$ and $v$ if there exists a path between nodes $u$ and $v$, and there exists no path of smaller length between them.\n\\item Average distance: The average distance in a given graph represents the average number of edges that need to be traversed from one node to another node in the network. 
It is computed as the average of the length of the shortest path for all possible pairs of nodes in the graph. Average distance is given by\n$dis_{avg} = \\frac{1}{n(n-1)} \\sum_{\\forall u,v \\& u \\neq v} d(u,v)$\n\\item Network Diameter: The maximum distance between any pair of nodes in the given graph $G(V, E)$ is the diameter of the network.\n\\item Sampling: When the data set is too large, the access and analysis of the data are very slow and expensive. In this case, we use a fraction of available data to make inferences about the whole dataset. This technique is referred to as sampling. In a graph, sampling can be of different types like node sampling, edge sampling etc. In node sampling, set $V'$ is called a sampled set of nodes if $ \\forall u \\epsilon V' \\Rightarrow u \\epsilon V$. Similarly, in edge sampling set $E'$ is called sampled set of edges, if $ \\forall (u,v) \\epsilon E' \\Rightarrow (u,v) \\epsilon E$. Based on the used technique, sampling methods can be categorized as follows:\n\\begin{enumerate}\n\\item Graph Traversal: In graph traversal techniques, a node can be visited only once while sampling, for example, breadth-first traversal, depth-first traversal, etc.\n\\item Random Walk: Random walk is a process to traverse a network. It starts from a node, and at each step, it moves to one of its neighbors uniformly at random. In random walks, a node or an edge can be sampled more than once. Based on the requirement, the random walk can also be weighted where the probability of moving at a new node is not equal for all neighbors. It can be directly or inversely proportional to the degree or can be decided using any other function.\n\\end{enumerate}\n\\item Breadth First Traversal (BFT): Breadth-First Traversal (BFT) is an algorithm for traversing a graph. It starts from a root node and explores neighbors of the root node. At each step, it traverses through all neighbors of the nodes that were explored in the last step. 
The time complexity of BFT in the worst case is $O(|V| + |E|)$ when the graph is stored as an adjacency list, since every vertex and every edge will be explored once, and $O(|V|^2)$ when the graph is stored as an adjacency matrix; the auxiliary space required by the traversal itself is $O(|V|)$.\n\item Disconnected Graph: The graph $G(V,E)$ is called a disconnected graph if there exists at least one pair of nodes in the given graph such that there is no path between them.\n\item Induced Subgraph: $H(V', E')$ is the induced subgraph of $G(V,E)$ if it satisfies the following conditions:\n\begin{itemize}\n\item $\forall u \in V' \Rightarrow u \in V$\n\item $E' = \{(u,v) \in E : u \in V', v \in V'\}$, i.e., $E'$ contains every edge of $E$ whose both endpoints lie in $V'$\n\end{itemize}\n\item Component of a Graph: Given a graph $G(V, E)$, if $H$ is a subgraph of $G$ then it is called a component of graph $G$ if,\n\begin{itemize}\n\item $H$ is a connected graph.\n\item $H$ is not contained in any connected subgraph of $G$ which has more vertices or edges than $H$.\n\end{itemize}\n\end{itemize}", "id": "71218188-70f9-437f-9b95-51ca95b32683", "level": "section", "origin_cites_number": 0, "parent_id": "6773f6d5-118a-4a4b-ab25-f5ac5b895812", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Preliminaries" ] ], "subsections": [ "0abc2473-a09c-40ba-8e86-d1a684965742" ], "title": "Preliminaries" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, we are going to discuss the definitions of all basic centrality measures.\n\begin{definition}{\textbf{Degree Centrality:}}\nThe degree centrality of a node $u$ is defined as,\n\begin{center}\n$C_D(u)= \frac{k_u}{n-1}$\n\end{center}\n\end{definition}\n\begin{definition}{\textbf{Closeness Centrality:}}\nCloseness centrality represents the closeness of a given node to every other node of the network. In precise terms, it is the inverse of the farness, which in turn is the sum of the distances to all other nodes. 
The closeness centrality of a node $u$ is defined as,\n\begin{center}\n$C_C(u)= \frac{n-1}{\sum_{\forall v, v \neq u}d(u,v)}$\n\end{center}\n\end{definition}\nIn disconnected graphs, the distance is not defined for all pairs of nodes, so this definition of closeness centrality cannot be applied to disconnected graphs.\n\begin{definition}{\textbf{Betweenness Centrality:}}\nThe betweenness centrality of a given node $u$ is based on the number of shortest paths passing through the node . This measure quantifies the number of times a node acts as a bridge along the shortest paths between pairs of nodes. The betweenness centrality of a node $u$ is defined as,\n\begin{center}\n$C_B(u)= \frac{\sum_{s \neq u \neq t}\frac{\partial_{st}(u)}{\partial_{st}}}{(n-1)(n-2)/2}$\n\end{center}\nwhere $\partial_{st}(u)$ represents the number of shortest paths between nodes $s$ and $t$ with node $u$ acting as an intermediate node, and $\partial_{st}$ is the total number of shortest paths between $s$ and $t$.\n\end{definition}\n\begin{definition}{\textbf{Eigenvector Centrality:}}\nEigenvector centrality is used to measure the influence of a node in the network . It assigns a relative index value to all nodes in the network based on the concept that connections with high-indexed nodes contribute more to the score of a node than connections with low-indexed nodes.\nThe eigenvector centrality for a graph $G(V,E)$ is given as,\n\begin{center}\n$C_E(u)= (1/\lambda) \sum A_{uv} C_E(v)$\n\end{center}\nwhere the sum runs over the neighbors $v$ of $u$ and $\lambda$ is a constant. With a simple rearrangement we can express it as the eigenvector equation,\n\begin{center}\n$AX = \lambda X$\n\end{center}\n\end{definition}\n\begin{definition}{\textbf{Katz Centrality:}}\nKatz centrality was introduced by Katz in 1953 to measure the influence of a node . It counts the walks between pairs of nodes and assigns them weights that decrease with their lengths, as shorter walks are more important for information flow than longer ones. 
The contribution of a walk of length $P$ is proportional to $s^P$, where $s \in (0,1)$ is an attenuation factor (for the series to converge, $s$ must also be smaller than the reciprocal of the largest eigenvalue of $A$). It is defined as,\n\begin{center}\n$K= sA + s^2A^2 + s^3A^3 + ... + s^PA^P + .... = (I-sA)^{-1} -I$\n\end{center}\nwhere $I$ is the identity matrix and $A$ is the adjacency matrix of the graph.\n\end{definition}\n\begin{definition}{\textbf{Coreness:}}\nA node $u$ has coreness $C_S(u) = i$ if it belongs to a maximal connected subgraph $H(V_1, E_1)$ in which every node $u \in V_1$ has degree $k_u \geq i$, where $k_u$ is the degree of node $u$ in the induced subgraph $H$.\n\end{definition}\nThis definition of coreness is based on the K-shell decomposition method that was proposed by Seidman .\n\section*{Centrality Measures}\nIn the following subsections, we will discuss the state-of-the-art literature on the different centrality measures. We cover their variations, extensions to different types of networks (such as weighted, directed, and multilayer networks), algorithms to update centrality measures in dynamic networks, approximation algorithms, methods to identify top-k nodes, and their applications.", "id": "0abc2473-a09c-40ba-8e86-d1a684965742", "level": "subsection", "origin_cites_number": 4, "parent_id": "71218188-70f9-437f-9b95-51ca95b32683", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Preliminaries" ], [ "subsection", "Definitions" ] ], "subsections": [], "title": "Definitions" }, { "cite_extract_rate": 0, "cites": [], "content": "The term degree centrality was first coined in graph theory, where it is also called degree or valence. The degree simply denotes the number of neighbors of a node. Degree centrality is the most basic centrality measure defined in network science, and it can be computed as $C_D(u)= {k_u}/{(n-1)}$. As per the definition, degree centrality is normalized using the total number of nodes, so the first question that comes to mind is why it is not simply equal to the degree. 
Let us take an example: there are two graphs $A$ and $B$ in Figure \\ref{degfig}, and in both graphs, nodes $a$ and $b$ have degree $1$. But if we analyze it, then we can simply say that node $b$ is more important, as it is connected with $25\\%$ of nodes in the given network, but node $a$ is only connected with $10\\%$ of the nodes. So, degree centrality is normalized using total possible connections to analyze it in a better way. It will help us to compare two nodes that belong to two different networks regardless of network size. In 1982 Grofman proposed a game-theoretic approach to measure degree centrality in social networks .\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=10cm]{degree_centrality1.jpg}\n\\end{center}\n\\caption{Graphs A and B have two nodes a and b respectively, both having the same degree but different degree centrality.}\n\\label{degfig}\n\\end{figure}", "id": "5e497a2d-9d11-4ca1-b91c-030c2e79cb0a", "level": "section", "origin_cites_number": 1, "parent_id": "6773f6d5-118a-4a4b-ab25-f5ac5b895812", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Degree Centrality" ] ], "subsections": [ "8ec5c082-0fbc-42e3-bba9-d1db34f51ae1", "a9f9d0ce-4abe-48c0-846c-ebb40bb9db6e", "09b9544d-1019-4023-b360-65a29303f3c5", "6a7a51a4-ee82-46cf-aeb8-52cc0513c37a" ], "title": "Degree Centrality" }, { "cite_extract_rate": 0.17647058823529402, "cites": [ 7921, 7922, 4885 ], "content": "In directed networks, in-degree and out-degree can be used to define in-degree centrality and out-degree centrality of a node. 
In-degree centrality is defined as,\n\begin{center}\n$C_{D_{in}}(u) = \frac{k^{in}_u}{n-1}$\n\end{center}\nwhere $C_{D_{in}}(u)$ represents the in-degree centrality of node $u$, and $k^{in}_u$ represents the in-degree of node $u$.\nSimilarly, out-degree centrality can be defined as,\n\begin{center}\n$C_{D_{out}}(u) = \frac{k^{out}_u}{n-1}$\n\end{center}\nwhere $C_{D_{out}}(u)$ represents the out-degree centrality of node $u$, and $k^{out}_u$ represents the out-degree of node $u$.\nIn disconnected networks, the degree centrality of a node can be computed by taking its degree within the connected component to which the node belongs. For isolated nodes (having degree zero), the degree centrality is zero.\nDegree centrality has also been extended to weighted networks, where the strength of the node is used to define it. In weighted networks, the strength of a node $s_u$ denotes the sum of the weights of all the edges connected to that node. It is defined as,\n\begin{center}\n$s_{u} = \sum_{v}w_{uv}$\n\end{center}\nIn weighted networks, degree centrality considers two important parameters: 1. the strength of the node, and 2. the degree of the node (the total number of connections that the node has). These two parameters capture two different aspects of the network. For example, if two nodes have the same strength but a different number of connections, the two nodes cannot be rated equally; whether more or fewer connections are preferable depends on the application. To incorporate both parameters, Opsahl proposed a generalized degree centrality for weighted networks . It is defined as,\n\begin{center}\n$C_D^{w \alpha}(u)= k_u^{(1-\alpha)} \times s_u^{\alpha}$\n\end{center}\nwhere $\alpha$ is a tuning parameter that can be set based on the requirement. If this parameter is between 0 and 1, more importance is given to the degree, whereas if it is set above 1, more preference is given to the strength of the node. 
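As an illustration, Opsahl's generalized degree centrality can be sketched in a few lines of Python (a minimal sketch; the dictionary-based graph representation and function name are our own, not from the cited work):

```python
def generalized_degree(wadj, alpha):
    """Opsahl's C_D^{w,alpha}(u) = k_u^(1-alpha) * s_u^alpha, where k_u is
    the number of ties of node u and s_u is the sum of its tie weights."""
    scores = {}
    for u, nbrs in wadj.items():
        k = len(nbrs)            # degree: number of connections
        s = sum(nbrs.values())   # strength: total weight of the connections
        scores[u] = (k ** (1 - alpha)) * (s ** alpha)
    return scores

# `wadj` maps each node to {neighbor: weight}.
wadj = {'a': {'b': 2, 'c': 3}, 'b': {'a': 2}, 'c': {'a': 3}}
plain_degree = generalized_degree(wadj, 0)   # alpha = 0 recovers the degree
strength = generalized_degree(wadj, 1)       # alpha = 1 recovers the strength
```

Setting $\alpha = 0$ ignores the weights entirely, while $\alpha = 1$ ignores the number of ties; intermediate values trade the two aspects off against each other.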
But it is still difficult to determine the exact value of $\alpha$. Wei et al. proposed a method to select the optimal value of the tuning parameter $\alpha$ . Yustiawan et al. used this centrality metric to identify influential nodes in online social networks . This metric can be further extended to directed weighted networks, where the in-degree and out-degree of the node can be considered to define in-degree and out-degree centrality, respectively.\nKretschmer used the information of weighted ties to extend degree centrality . They verified the proposed method on collaboration and citation networks. Rachman et al. created a weighted network of Twitter, where the weight of the ties is based on the following/follower relationship and on the number of tweet interactions such as mentions, replies, and retweets . They showed that the Kretschmer method can be efficiently used on the Twitter network to identify influential nodes.\nDegree centrality has been used for decades to identify important and influential nodes in a network. Given the simplicity of degree centrality, many extensions have been proposed to better rank nodes using the degree together with other local parameters. Chen et al. studied the correlation between the clustering coefficient and the influential power of a node. They showed that local clustering has a negative impact on information spreading and on making new connections. They analyzed the impact of clustering and proposed a local ranking algorithm named ClusterRank that considers the number of neighbors, the neighbors' influence, and their clustering coefficient . They simulated the experiment using the susceptible-infected-recovered (SIR) spreading model and showed that ClusterRank is much more efficient than other benchmark algorithms such as PageRank and LeaderRank. They also verified the proposed method on undirected networks and showed that it is more efficient than degree centrality and coreness. 
It runs 15 times faster than the PageRank algorithm, so it can be used in real-life applications.\nYang et al. further studied the impact of triangular links on the influential power of a node and proposed a node prominence profile method that considers preferential attachment and triadic closure to determine the importance of a node . The degree centrality of a node can convey more information if the degree centrality of its neighbors is also considered. Ai et al. introduced the neighbor vector centrality metric, which also considers the degree distribution of a node's neighbors . Their results show that it is highly efficient for measuring the importance of a node, and it can be easily computed on large networks.\nIn real-life applications, we are sometimes interested in selecting a group of nodes that is highly influential as a whole. Such a group can be a collection of nodes that might not be highly influential individually but that, as a group, have the highest influential power. Elbirt studied the degree centrality property of a network at the node level as well as at the group level . They further compared degree centrality across different topologies like the ring lattice, small world, random network, core-periphery, and scale-free networks.\nCsato proposed a generalized degree centrality based on the idea that connections with more interconnected nodes contribute more to centrality than connections with less central ones . The generalized degree centrality $k'_u$ for a node $u$ is defined implicitly by\n\begin{center}\n$k_u=k'_u + \varepsilon \sum_{v \in V \setminus \{u\}} a_{uv}[k'_u - k'_v]$\n\end{center}\nwhere $\varepsilon$ is a constant used to decide the importance of the neighbors. Generalized degree centrality is preferable to plain degree centrality, as it considers both the degree and the role of the neighbors when measuring a node's importance. 
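Since the defining equation is implicit in $k'$, one way to compute it is by fixed-point iteration. The Python sketch below is our own formulation of this idea (not code from the cited paper), and it assumes $\varepsilon$ is small enough for the iteration to converge:

```python
def generalized_degree_csato(adj, eps=0.05, iters=200):
    """Solve k_u = k'_u + eps * sum_v a_uv (k'_u - k'_v) for k' by
    fixed-point iteration, starting from the plain degrees."""
    kp = {u: float(len(vs)) for u, vs in adj.items()}
    for _ in range(iters):
        # k'_u  <-  k_u - eps * sum over neighbors of (k'_u - k'_v)
        kp = {u: len(adj[u]) - eps * sum(kp[u] - kp[v] for v in adj[u])
              for u in adj}
    return kp
```

On a regular graph every node keeps its plain degree (the neighbor differences cancel), while in a star the leaves are pulled above degree 1 by their highly connected center and the center drops below its plain degree, matching the intuition that connections to well-connected nodes count for more.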
It gives results similar to eigenvector centrality that we will discuss later.\nFrom the above discussion, it is clear that degree centrality alone is not enough to give a clear picture of the importance of a node in real-world networks. In 2013 Abbasi et al. proposed hybrid centrality measures called Degree-Degree, Degree-Closeness, Degree-Betweenness by combining existing centrality measures . The Degree-Degree centrality of a node is calculated by summing up the degree of its neighbors. Similarly, Degree-Closeness and Degree-Betweenness centrality of a node is calculated just by summing up the closeness and betweenness centrality of its neighbors, respectively. They also studied the correlation and importance of the proposed centrality measures with respect to other existing centrality measures. The proposed method was verified on the co-authorship network and gave a good correlation with the ranking of scholars. They have also extended the proposed method for weighted networks.\nIn traditional degree centrality, we only consider the number of neighbors, but we do not consider the duration of the link. This information can give us more insights to better understand the position of a node in the given network. Uddien et al. proposed a time-variant approach to degree centrality measure, called TSDC (time scale degree centrality) . TSDC considers both the presence and duration of the links between nodes. They simulate the proposed approach on the doctor-patient network, where a link is established when a patient is hospitalized, and the duration of the link represents the length of the stay. If the length of the stay is smaller, TSDC can give a better explanation than traditional degree centrality. This method can be applied to real-world networks that are highly dynamic and where the strength of a link has more impact on the node.\nDegree centrality is also extended to other types of networks, such as multiplex networks , hypergraphs , and so on. Kapoor et al. 
extended degree centrality for hypergraphs and proposed two centrality measures called: 1. strong tie degree centrality, and 2. weak tie degree centrality . They verified their methods on two real-world networks, DBLP (computer science collaboration network) and CR3 (group network of a popular Chinese multi-player online game), and showed that the proposed methods outperform degree centrality. Brodka et al. proposed multilayered degree centrality called CLDC (cross-layer degree centrality) for multilayer social networks . They check the efficacy of their method on online real-world social networks containing ten layers. The proposed method is also extended to directed networks where the cross-layer indegree centrality $(CLDC_{In})$, and cross-layer out-degree centrality $(CLDC_{Out})$ can be defined using in-degree and out-degree centrality, respectively.", "id": "8ec5c082-0fbc-42e3-bba9-d1db34f51ae1", "level": "subsection", "origin_cites_number": 17, "parent_id": "5e497a2d-9d11-4ca1-b91c-030c2e79cb0a", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Degree Centrality" ], [ "subsection", "Extensions" ] ], "subsections": [], "title": "Extensions" }, { "cite_extract_rate": 0, "cites": [], "content": "In recent years, the attention of researchers has been shifted towards the more challenging task of identifying top-k nodes in a given network. It mainly focuses on finding top-k nodes with high accuracy and without having knowledge of the entire network. The main challenge is that the complexity of the proposed methods should be many folds lesser than the actual method so that they can be applied in real-life applications. Lim et al. proposed a sampling technique to estimate top-k central nodes in the network . To verify the accuracy of the proposed method, the authors investigated two types of errors: 1. sampling error and 2. identification error. 
A sampling error occurs when a node that is actually among the top-k most central nodes is not included in the sample collected by the sampling algorithm. Similarly, if a sampled top-k node is not identified as a top-k influential node, this is referred to as an identification error. They showed that, among the analyzed methods, random-walk sampling yields a low sampling error. The proposed technique can also be used for other centrality measures. They further showed that degree centrality can be used in the sampling process to reduce the identification error for closeness and betweenness centrality.", "id": "a9f9d0ce-4abe-48c0-846c-ebb40bb9db6e", "level": "subsection", "origin_cites_number": 1, "parent_id": "5e497a2d-9d11-4ca1-b91c-030c2e79cb0a", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Degree Centrality" ], [ "subsection", "Identify Top-k Nodes" ] ], "subsections": [], "title": "Identify Top-k Nodes" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 5354, 5353, 5355 ], "content": "The main objective of defining centrality measures is to rank the nodes in a given network . The degree centrality rank of a node $u$ can be computed as $R(u) = \sum_{v} X_{uv} + 1$, where $X_{uv} = 1$ if $C_D(v) > C_D(u)$ and $X_{uv} = 0$ otherwise. A node having the highest centrality value is ranked $1$, and all nodes having the same centrality value have the same rank . Currently, we follow a two-step process to compute the rank: 1. compute the centrality value of all nodes, and 2. compare these values to determine the rank of the node. This method requires the entire network to measure the rank of a single node. Real-world networks are growing very fast and are highly dynamic, which motivates researchers to propose efficient methods to estimate the centrality rank of a node without computing the centrality values of all the nodes. \nSaxena et al. 
proposed a method to estimate the degree rank in real-world scale-free networks, and proved that the expected degree rank of a node $u$ can be computed as $E[R_{G}(u)] \approx n \left( \frac{k_{max}^{1-\gamma} - (k_{u}+1)^{1-\gamma}}{ k_{max}^{1-\gamma} - k_{min}^{1-\gamma} } \right) + 1$, where $n$ denotes the network size, $k_{max}$ and $k_{min}$ denote the maximum and minimum degree in the network respectively, and $k_u$ denotes the degree of node $u$. The authors also proposed methods to estimate the degree ranking using different sampling techniques, such as uniform sampling, random walks , and Metropolis-Hastings random walks . The sampling methods were used to collect a small sample set of nodes for the rank estimation, and the results showed that sampling $1\%$ of the nodes is adequate to estimate the degree rank of a node with high accuracy. The proposed methods were simulated on both synthetic and real-world scale-free networks, and the accuracy was evaluated using absolute and weighted error functions. The proposed methods were further extended to estimate the degree rank in random networks .", "id": "09b9544d-1019-4023-b360-65a29303f3c5", "level": "subsection", "origin_cites_number": 9, "parent_id": "5e497a2d-9d11-4ca1-b91c-030c2e79cb0a", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Degree Centrality" ], [ "subsection", "Ranking" ] ], "subsections": [], "title": "Ranking" }, { "cite_extract_rate": 0, "cites": [], "content": "Imamverdiyev et al. used degree centrality to understand the dynamic relationships among a group of female BSc students . They analyzed how these relationships change when there are special events and how they stabilize again. It is also important to understand how the importance of different nodes in a network changes over time. Holme et al. studied the dynamics of networking agents using degree and closeness centrality measures . 
Agents use strategies based on local information to improve their chance of success, which is directly related to their position in the network. The success of an agent increases with its closeness centrality and with the size of the connected component it belongs to, while it decreases with its degree. Thus the score function is defined as,\n\begin{center}\n$s(u)=\left\{\begin{matrix}\nC_C(u)/k_u & ,if & k_u > 0 \\\n0 & ,if & k_u = 0\n\end{matrix}\right.$\n\end{center}\nThey also showed that the network stabilizes itself between a fragmented state and a state having a giant component, and that the level of fragmentation decreases as the size of the network increases. \nIn online social networks, the spread of a meme depends on the influential power of the source node, and various parameters have been considered to identify these influential nodes. The different centrality measures depend strongly on the network parameters and structure. Maharani et al. observed the difference between the top ten influential nodes on Twitter according to degree centrality and eigenvector centrality , and the results showed a significant difference among the top-10 influential users. Wambeke et al. studied the underlying social network of trades using degree and eigenvector centrality . They analyzed the social network data of a project over seven months and showed that such analysis can be very helpful for the management team to understand the underlying activities in the network. Martino et al. used degree and eigenvector centrality to study attention-deficit/hyperactivity disorder (ADHD) on the brain connectivity network . Degree centrality has also been used to identify influential spreaders for propagating true information in the network, in order to minimize the negative impact of fake news. Budak et al. showed that selecting a minimal group of users having the highest degree to disseminate true information in the network gives good results in mitigating bad information. 
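The agent score function discussed above can be sketched directly; the snippet below is our own illustration (closeness is computed with a breadth-first traversal, and the adjacency-list graph representation is assumed):

```python
from collections import deque

def closeness(adj, u):
    """Freeman closeness: (n-1) / sum of shortest-path distances from u."""
    dist = {u: 0}
    q = deque([u])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return (len(adj) - 1) / sum(dist.values())

def agent_score(adj, u):
    """Holme et al.'s score: s(u) = C_C(u) / k_u, and 0 for isolated nodes."""
    k = len(adj[u])
    return closeness(adj, u) / k if k > 0 else 0.0
```

On a path $a$--$b$--$c$ the endpoint $a$ scores higher than the center $b$, reflecting that the score rewards closeness but penalizes degree.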
\nIn this section, we have discussed various extensions and applications of degree centrality in real-world networks. They are mainly used due to their simplicity and fast computation in dynamic online networks like Facebook, Twitter, and the WWW. The degree centrality of a node conveys its importance in the local neighborhood, but it is not able to depict its importance with respect to the global parameters of the network. To understand the impact of network structure on the importance of a node, researchers have defined many other centrality measures, which we discuss in the following sections.", "id": "6a7a51a4-ee82-46cf-aeb8-52cc0513c37a", "level": "subsection", "origin_cites_number": 6, "parent_id": "5e497a2d-9d11-4ca1-b91c-030c2e79cb0a", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Degree Centrality" ], [ "subsection", "Applications" ] ], "subsections": [], "title": "Applications" }, { "cite_extract_rate": 0, "cites": [], "content": "In many real-world applications, information travels through the shortest paths. In such applications, a node is highly influential if it has short distances to the other nodes. This network property is captured by closeness centrality. The closeness centrality of a node denotes how close the node is to all other nodes in the given network, and it is inversely proportional to the farness of the node. Freeman defined the closeness centrality as $C_C(u)= \frac{n-1}{\sum_{\forall v, v \neq u}d(u,v)}$.\nOne similar centrality measure is graph centrality, which was proposed by Hage and Harary and is the reciprocal of a node's eccentricity. 
It is defined as,\n\\begin{center}\n$C_G(u) = \\frac{1}{max_{v \\in V}d(u,v)}$\n\\end{center}", "id": "6306c16c-fa57-44c4-bc09-40622f955f9d", "level": "section", "origin_cites_number": 1, "parent_id": "6773f6d5-118a-4a4b-ab25-f5ac5b895812", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Closeness Centrality" ] ], "subsections": [ "e81cdea4-785a-4685-9ee0-4dc67c3aa884", "ef0ba2e1-ad34-4f92-b0d8-20f3ff355e02", "7e4d694c-eb6b-4ce5-b9da-29d0278c90fd", "5665b0a6-fbe5-4876-ab34-9b15c2469286", "ea4e7ea1-7b2d-42d4-8192-22ff2f00ab1f", "c0f81fc6-0c8b-46a2-8c41-07d2843839bd", "e22aeb58-4958-4d70-b60e-b2aa5169836f" ], "title": "Closeness Centrality" }, { "cite_extract_rate": 0.08333333333333301, "cites": [ 8898 ], "content": "Researchers have also extended the closeness centrality definition for different types of networks. In weighted networks, closeness centrality of a node $u$ can be defined as,\n\\begin{center}\n$C_C^w(u) = \\left [\\sum_{\\forall v}d^w(u,v) \\right ]^{-1}$\n\\end{center}\nwhere $v \\epsilon V$ and $d^w(u,v)$ is the shortest weighted distance between nodes $u$ and $v$. Ruslan et al. used arithmetic mean approach and extended closeness centrality for weighted networks . The proposed centrality metric was simulated on synthetic networks. They showed that the proposed method gives better results to identify influential nodes in weighted networks.\nDu et al. proposed effective distance closeness centrality (EDCC) for directed networks that can also be applied to undirected, weighted, and unweighted networks . The effective distance was proposed by Brockmann to analyze disease spread . In EDCC, the authors used Susceptible-Infected (SI) model to evaluate the performance of the proposed method. SI model is used to measure the influential power of a node. This model is used to simulate information flow in a given complex network. In this model, all nodes are considered uninfected in the starting. 
Then a few nodes are infected to seed the information spread, and once a node is infected, its status changes to infected. An infected node gets one chance to infect its neighbors in each step: it can infect any of its neighbors with some fixed probability in the next iteration. This model thus captures the exponential information flow observed in real-world networks. The authors used this model to rank nodes based on the information flow. The efficiency of the proposed method was verified on four real-world networks. Brandes and Fleischer proposed the idea that information spreads like an electric current in a network, since it does not spread only along the shortest paths . They proposed a variation of closeness centrality based on this idea and also proposed a faster method to compute it.\nIn a disconnected network, the basic definition of closeness centrality is not able to rank nodes properly. The traditional closeness centrality is defined as,\n\begin{center}\n$C_C(u) \propto \left [\sum_{v}d(u,v) \right ]^{-1}$\n\end{center}\nIf the graph is disconnected, then at least one term in the summation will be $\infty$, so the summation will be $\infty$ and the closeness centrality of all nodes will be $0$. One workaround is to compute the closeness centrality of each node with respect to the connected component to which the node belongs, but this ignores a lot of other information that is present in the network.\nIn 2001, Latora et al. proposed a closeness centrality for disconnected networks . It is defined as,\n\begin{center}\n$C_C(u) = \sum_{v} {\frac{1}{d(u,v)}} $\n\end{center}\nIn 2006, Dangalchev proposed another definition of closeness centrality that can be used for disconnected graphs . It is defined as,\n\begin{center}\n$C_C(u) = \sum_{v} {\frac{1}{2^{d(u,v)}}} $\n\end{center}\nThis definition is also easier to compute than the traditional one. Yang et al. 
showed that the proposed extensions of closeness centrality to disconnected graphs are not true extensions , as they do not rank nodes in the same way as closeness centrality does. In 2009, Rochat proposed the harmonic centrality index as an alternative to the closeness centrality index for disconnected networks . The author showed that the results are the same on connected networks, with the same computational complexity; the main benefit is that it can also be used for disconnected networks.\nIn real-world social networks, a node can be part of multiple communities, which gives rise to overlapping community structure. Tarkowski et al. defined closeness centrality for networks having overlapping community structure . They used a cooperative game-theoretic approach to compute the closeness centrality of a node in polynomial time complexity. They also verified their results on the Warsaw public transportation network. The use of game-theoretic approaches for other centrality measures, like betweenness centrality and eigenvector centrality, is still an open problem.\nBarzinpour et al. extended closeness centrality to multilayer networks . They explained how the inter-layer and intra-layer connections can be used to obtain the true position of a node based on closeness centrality.\nYu et al. combined closeness centrality and degree centrality to identify important nodes in a given network . The algorithm is defined as,\n\begin{enumerate}\n\item Calculate the shortest distance matrix of the given graph.\n\item Calculate the node importance evaluation matrix using degree centrality. 
This is defined as,\n\\begin{center}\n$E=\\begin{bmatrix}\nk_1 & a_{12}k_1k_2/2m & \\cdots & a_{1n}k_1k_n/2m\\\\\na_{21}k_1k_2/2m & k_2 & \\cdots & a_{2n}k_2k_n/2m \\\\\n\\vdots & \\vdots & \\vdots & \\vdots \\\\\na_{n1}k_1k_n/2m & a_{n2}k_2k_n/2m & \\cdots & k_n\n\\end{bmatrix}$\n\\end{center}\nwhere $k_i$ is degree of node $i$.\n\\item Improve node importance evaluation matrix using closeness centrality.\n\\begin{center}\n$H=\\begin{bmatrix}\nk_1C_1 & a_{12}k_1k_2C_2/2m & \\cdots & a_{1n}k_1k_nC_n/2m\\\\\na_{21}k_1k_2C_1/2m & k_2C_2 & \\cdots & a_{2n}k_2k_nC_n/2m \\\\\n\\vdots & \\vdots & \\vdots & \\vdots \\\\\na_{n1}k_1k_nC_1/2m & a_{n2}k_2k_nC_2/2m & \\cdots & k_nC_n\n\\end{bmatrix}$\n\\end{center}\nwhere $C_i$ is closeness centrality of node $i$.\n\\item Calculate importance of each node using given formula that is defined as,\n\\begin{center}\n$I_i = \\sum_{j=1}^{n}H_{ij} = k_iC_i + \\sum_{j=1,i\\neq j}^{n} H_{ij}$\n\\end{center}\n\\end{enumerate}\nThus, the proposed centrality metric considers both local and global information of the node. The proposed method is verified on real-world networks like the Advanced Research Project Agency (ARPA) network and AIDS network as well as synthetic networks.", "id": "e81cdea4-785a-4685-9ee0-4dc67c3aa884", "level": "subsection", "origin_cites_number": 12, "parent_id": "6306c16c-fa57-44c4-bc09-40622f955f9d", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Closeness Centrality" ], [ "subsection", "Extensions" ] ], "subsections": [], "title": "Extensions" }, { "cite_extract_rate": 0.375, "cites": [ 7923, 8899 ], "content": "The complexity to compute the closeness centrality of all nodes is $n$ times the complexity of BFT. Eppstein et al. proposed a randomized approximation algorithm to fast compute closeness centrality in weighted networks . The proposed method calculates the centrality of all vertices in $O(m)$ time, where $m$ is the number of edges. Cohen et al. 
proposed a method to approximate closeness centrality for directed and undirected networks . The proposed approach is a combination of an exact and a sampling method. In the sampling method, a few nodes are sampled uniformly at random, and a BFT is executed from each of them. These BFTs determine the shortest distances between the sampled nodes and all other nodes. In the hybrid approach, while calculating the closeness centrality of a node, a threshold distance is decided. To compute the closeness centrality of a node, its exact distance is computed to all nodes within the threshold distance. For the nodes that fall outside the threshold distance, two approaches are used: 1. if the node is a sampled node, its exact distance is already known; otherwise, 2. an approximation method is used to estimate the shortest distance to the node. Thus, the closeness centrality of a node is calculated in linear time using this approach.
Rattigan used the concept of a network structure index (NSI) to approximate the values of centrality measures that require identifying shortest paths in the given network . Shi et al. developed a software package gpu-fan (GPU-based Fast Analysis of Networks) for fast computation of centrality measures in large networks . The package can also be used for other shortest-path-based centrality measures, such as betweenness, eccentricity, and stress centrality. Eppstein and Wang proposed an approximation to compute closeness centrality . The proposed method estimates the centrality value of all nodes in $O(n)$ time with a $(1 + \epsilon)$ approximation factor.
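The pivot-sampling idea underlying these estimators can be illustrated with a minimal sketch (in the spirit of Eppstein and Wang's estimator; function and variable names here are illustrative, and a connected, undirected graph given as an adjacency dict is assumed):

```python
import random
from collections import deque

def approx_closeness(adj, k, seed=0):
    """Estimate closeness centrality from k sampled pivot nodes
    (pivot sampling in the spirit of Eppstein and Wang).
    Assumes a connected, undirected graph {node: [neighbours]}."""
    nodes = list(adj)
    pivots = random.Random(seed).sample(nodes, k)
    dist_sum = dict.fromkeys(nodes, 0)
    for s in pivots:
        # one BFT per pivot gives its distance to every node
        dist = {s: 0}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        for u in nodes:
            dist_sum[u] += dist[u]
    n = len(nodes)
    # scale the sampled distance sum up to all n nodes, then invert
    return {u: (n - 1) * k / (dist_sum[u] * n) if dist_sum[u] else 0.0
            for u in nodes}
```

With $k = n$ pivots the estimate coincides with the exact closeness values; smaller $k$ trades accuracy for the $O(k \cdot m)$ running time discussed above.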
Some other approximation methods for closeness centrality include .
Real-world networks are highly dynamic, and their structure keeps changing at every moment through the addition and removal of nodes or edges. Kas et al. proposed a method to update closeness centrality in dynamic networks . Whenever nodes or edges are added, removed, or modified, the set of affected nodes is identified, and the all-pairs shortest paths are updated accordingly. In 2013, Yen proposed an algorithm called CENDY (Closeness centrality and avErage path leNgth in DYnamic networks) to update closeness centrality when an edge is updated . This approach also updates the average path length of the network by computing only a very small number of shortest paths. Sariyuce et al. proposed a method to update closeness centrality using the level difference information of breadth-first traversal . They also used biconnected component decomposition, spike-shaped shortest-distance distributions, and identical vertices techniques to improve their results.
The proposed method achieves speedups over the traditional non-incremental algorithm by a factor of 43.5 on small networks and 99.7 on large networks .
In 2006, Bader et al. proposed a parallel algorithm to compute closeness centrality that executes a breadth-first traversal (BFT) from each vertex as a root . If there are $p$ processors, then the time taken to compute the closeness centrality of all the $V$ vertices is $O( \frac{\left |V \right |}{p} (\left | V \right | + \left | E \right |))$. Some other network analysis libraries are also available to compute closeness centrality in parallel . Lehmann and Kaufmann proposed a method for decentralized computation of closeness centrality and graph centrality . The proposed method is computationally expensive for large-scale real-world complex networks. Wang et al. also proposed a distributed algorithm to compute closeness centrality .
They showed that the proposed method estimates closeness centrality with $91\%$ accuracy in terms of ordering on random geometric, Erd\H{o}s-R\'enyi, and Barab\'asi-Albert graphs.
In real-life applications, we are mostly not interested in computing the closeness centrality of all nodes; practical applications focus on identifying the few top nodes having the highest closeness centrality. Some algorithms to identify these top-k nodes are discussed here. Ufimtsev proposed an algorithm to identify high closeness centrality nodes using group testing . Okamoto et al. proposed a method to rank the k-highest closeness centrality nodes using a hybrid of approximate and exact algorithms . Olsen et al. presented an efficient technique to find the k-most central nodes based on closeness centrality . They used intermediate results of the centrality computation to minimize the computation time. The proposed method uses $O(V+E)$ additional space, and it is 142 times faster than the conventional method, in which the closeness centrality of each node is computed independently.
Bergamini et al. proposed a faster method to identify top-k nodes in undirected networks . They used BFT information to determine an upper bound on closeness centrality and halt the process once the top-k nodes having the highest closeness centrality have been identified. They proposed two methods that can be used based on network properties.
The first one can be used for small-world graphs with a low diameter, and the second one for graphs having a high diameter. The proposed method outperforms state-of-the-art methods for computing top-k nodes.
Wehmuth et al. proposed a method named DACCER (Distributed Assessment of the Closeness CEntrality Ranking) to estimate the closeness ranking of nodes using local information . They showed that the DACCER rank is highly correlated with the closeness centrality rank for both real-world and synthetic networks. The DACCER centrality is computed using the local neighborhood volume of the node. It is defined as,
\begin{center}
$Vol(H_h^u) = \sum_{v \in H_h^u} k_v$
\end{center}
where $k_v$ is the degree of node $v$, and $h$ is the level of breadth-first traversal (BFT). $H_h^u$ denotes the set of all nodes that belong to the $h$-level BFT of node $u$. By definition, the volume of a node is the sum of the degrees of all nodes that belong to the $h$-level BFT of the given node. The results show that for $h=2$, the ranking is highly correlated with the closeness centrality ranking. This value can easily be computed for each node using a 2-level BFT, without requiring information about the entire network. Lu et al.
extended this method and proposed MDACCER (Modified Distributed Assessment of the Closeness CEntrality Ranking) to compute closeness centrality in parallel environments such as General Purpose Graphics Processing Units (GPGPUs) .
The classical method of computing the closeness centrality rank of a node first computes the closeness centrality of all nodes and then compares the closeness value of the given node with the others to determine its rank. The time complexity of the first step is $O(n \cdot m)$, and of the second step $O(n)$, so the overall time complexity of this process is $O(n \cdot m)+O(n)= O(n \cdot m)$, which is very high. This makes the method infeasible for real-life applications on large networks. Saxena et al. studied the structural properties of closeness centrality in real-world networks and observed that the reverse rank\footnote{In the reverse ranking, the node having the lowest closeness value has the highest rank, namely $1$, and the node having the highest closeness value has the lowest rank.} versus closeness centrality follows a \textit{sigmoid curve}. Once the parameters of the sigmoid equation are estimated, they can be used to quickly estimate the closeness rank of a node. The authors proposed methods to estimate these parameters using network properties.
They further proposed heuristic methods to estimate the closeness rank of a node in $O(m)$ time, which is a huge improvement over the classical method. The accuracy of the proposed methods was measured using absolute and weighted error functions and correlation coefficients. The results showed that the proposed methods can be used efficiently for large-scale complex networks of different types .
Closeness centrality has been applied in many important research areas. Newman used closeness centrality to better understand collaboration networks . Yan et al. also used closeness centrality to understand various parameters of collaboration networks . Sporns et al. used closeness centrality to identify hubs in the brain network . Park et al. proposed a method to measure closeness centrality among workflow-actors of workflow-supported social network models . Kim et al. proposed an estimation-driven, closeness-centrality-based ranking algorithm named RankCCWSSN (Rank Closeness Centrality Workflow-supported Social Network) for large-scale workflow-supported social networks . They showed that the proposed method is about $50\%$ more time-efficient than the traditional method. The method can easily be extended to weighted workflow-supported social networks.
Zhang et al. used closeness centrality to identify the communities of a few nodes by using the community information of other nodes . Jarukasemratana et al. proposed a community detection method using closeness centrality . Ko et al.
proposed the closeness preferential attachment (CPN) model to create synthetic networks using closeness centrality . According to the closeness preferential attachment law, a new node joining the network prefers to connect to nodes that help it increase its closeness to the entire network. They compared the CPN model with the BA model. In the CPN model, each node tries to decrease its distance to the rest of the network, but this results in a longer average distance than in the BA model. Other applications of closeness centrality can be found in .
In this section, we have discussed the state-of-the-art literature on closeness centrality; next, we will discuss betweenness centrality.
In 1973, Granovetter emphasized the inequality of edges in a network and introduced the idea of weak ties . The edges of a network can be broadly categorized as weak ties or strong ties. Strong ties represent relationships between people who contact each other frequently, and weak ties represent relationships with less communication frequency. Granovetter's work was the first of its kind to distinguish between the edges of a network. It shows that some edges play the role of bridges for information flow more frequently than others.
Similarly, in complex networks, the uniqueness of a node can be determined by its importance in the information flow in the network. This unique characteristic is captured by the betweenness centrality of the node.
Betweenness centrality is based on the flow of information through nodes, so it is also called flow centrality. It assumes that information always flows through the shortest paths, as water and electricity do. It accounts for the number of shortest paths passing through a node, which explains the importance of the node with respect to the information flow. The basic definition of betweenness centrality is, $C_B(u)= \frac{\sum_{s \neq u \neq t}\frac{\partial_{st}(u)}{\partial_{st}}}{(n-1)(n-2)/2}$, where $\partial_{st}(u)$ represents the number of shortest paths between nodes $s$ and $t$ with node $u$ acting as an intermediate node in the shortest path. It is an extension of the stress centrality measure that was proposed by Shimbel in 1953 . Stress centrality is based on the total number of shortest paths passing through a node; it is defined as,
\begin{center}
$C_S(u)= \sum_{s,t \in V, s \neq t} \sigma_{st}(u) $
\end{center}
where $\sigma_{st}(u)$ is the number of shortest paths from $s$ to $t$ that pass through $u$.
The time complexity of computing betweenness centrality naively is very high $(O(n^3))$, as it counts the total number of shortest paths passing through a node for each pair of nodes present in the network.
In 2001, Brandes proposed a fast method to compute the betweenness centrality of all nodes. The proposed algorithm takes $O(n+m)$ space, and $O(nm)$ and $O(nm + n^2 \log n)$ time on unweighted and weighted networks, respectively . The proposed method executes a BFT from a node and counts the number of shortest paths passing through each node using this information. Thus, the total number of shortest paths passing through a node is counted by executing a BFT once from each node.
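The BFT-based counting scheme described above can be sketched as follows. This is a simplified illustration for unweighted, undirected graphs, with illustrative function and variable names; the second phase back-propagates pair dependencies as in Brandes' recurrence:

```python
from collections import deque, defaultdict

def brandes_betweenness(adj):
    """Betweenness centrality via Brandes-style accumulation for an
    unweighted, undirected graph given as {node: [neighbours]}."""
    C = dict.fromkeys(adj, 0.0)
    for s in adj:
        # Phase 1: BFT from s, counting shortest paths (sigma)
        order = []                          # nodes by non-decreasing distance
        preds = defaultdict(list)           # shortest-path predecessors
        sigma = dict.fromkeys(adj, 0.0)
        sigma[s] = 1.0
        dist = dict.fromkeys(adj, -1)
        dist[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:             # w discovered for the first time
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:  # edge (v, w) lies on a shortest path
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Phase 2: back-propagate pair dependencies in reverse BFT order
        delta = dict.fromkeys(adj, 0.0)
        while order:
            w = order.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                C[w] += delta[w]
    n = len(adj)
    # each unordered pair was counted twice; normalise as in the definition
    norm = (n - 1) * (n - 2)                # = 2 * (n-1)(n-2)/2
    return {v: (C[v] / norm if n > 2 else 0.0) for v in C}
```

On a path graph $a$--$b$--$c$, for example, this assigns the middle node $b$ the maximum normalized value of $1$.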
The complexity of a BFT on an undirected network is $O(m)$, so the complexity of betweenness centrality is $O(nm)$.
Betweenness centrality has also been extended based on specific application requirements. Freeman proposed a family of new centrality measures based on betweenness centrality . These measures can be used for both connected and disconnected networks. Brandes also proposed several variations of the standard betweenness centrality metric, including endpoints, proximal betweenness, k-betweenness, length-scaled betweenness, linearly scaled betweenness, edge betweenness, group betweenness, q-measures, stress centrality, and load centrality . He also extended betweenness centrality to valued networks and multigraphs. Brandes and Fleischer proposed a variation of betweenness centrality based on the assumption that information spreads like an electric current in the network .
As the computation of this centrality measure is very costly on large-scale networks, they also proposed an approximation method for it.
Betweenness centrality only considers the number of shortest paths passing through a node. It assumes that information always flows through the shortest paths, but this might not always be true in real-world scenarios. Information can also pass through longer paths with some probability. Newman considered this fact and proposed a betweenness measure that considers all paths but assigns more importance to shorter paths . The centrality of a node is computed using random walks, and it is directly proportional to how often the node is traversed during a random walk. He also showed that the proposed measure can be calculated using matrix inversion methods. Several other approximation algorithms include .
Lehmann and Kaufmann proposed an efficient method for decentralized computation of betweenness centrality and stress centrality .
Betweenness centrality has a higher computational cost than other centrality measures.
In dynamic networks, it is not feasible to recompute the centrality values whenever the network is updated. Some methods have been proposed to update betweenness centrality in dynamic networks, which we discuss next.
Lee et al. proposed a framework called QUBE to update betweenness centrality upon edge insertion and deletion within the network . The proposed method is based on the biconnected component decomposition of the graph. When an edge is inserted or deleted, the centrality values within the updated biconnected component are recomputed from scratch. If the decomposition is affected by the edge insertion/deletion, the modified graph is first decomposed into biconnected components again. The performance of QUBE highly depends on the distribution of vertices across the biconnected components. In real-world networks, the component size is large, so it does not reduce the update time significantly. The authors show the performance of the proposed method on smaller graphs having a low density; it performs significantly well on small graphs with a tree-like structure having many small biconnected components.
Green et al. also proposed a method to update betweenness centrality values rather than recomputing them from scratch upon edge insertions or deletions . This approach is based on storing the whole data structure used by the previous betweenness centrality update kernel. This storage helps reduce the computation time significantly, as some of the centrality values remain the same. The method uses quadratic storage space, so it is impractical for large networks.
In 2015, Chernoskutov et al. proposed a method to approximate betweenness centrality values in dynamic networks . This method contains two steps: 1. condense the initial graph to obtain a smaller version, and 2. approximate betweenness centrality on the smaller graph and extrapolate it to the large graph.
The proposed method gives a speedup of $60\%$ on real-world networks.
Agryzkov et al. proposed the random walk betweenness centrality index to rank nodes based on the concept of pagerank . First, the adapted pagerank algorithm is executed to rank nodes based on their importance, and then the final ranking of the nodes is computed using the betweenness centrality ranking. They analyzed the proposed method on a real urban street network and compared it with other centrality measures. Kourtellis et al. proposed k-path centrality to identify nodes having high betweenness centrality . They used a randomized algorithm to identify nodes with high k-path centrality and showed that such nodes also have high betweenness centrality. The proposed method executes faster than existing methods on real-world and synthetic networks. The fast estimation of betweenness centrality rank is still an open research question.
Newman used betweenness centrality to study collaboration networks and showed the effect of funneling .
It shows that most of the shortest paths of a node pass through only its top few collaborators, and the remaining collaborators participate in only a very small number of shortest paths. Leydesdorff used betweenness centrality to study the citation network of journals . The results help us understand that betweenness centrality can measure the interdisciplinarity of journals using local citation environments. Abbasi et al. showed that betweenness centrality is a better measure for preferential attachment than degree and closeness centrality in collaboration networks . Other applications that have used betweenness centrality include .
Brandes et al. studied the dependency between closeness and betweenness centrality . Before this work, researchers used to consider both of these centrality measures independently; this was the first work of its kind to show the inter-dependency of both centrality measures mathematically. Real-world scale-free networks can be categorized into three categories based on the average nearest neighbor degree: 1. assortative networks, 2. disassortative networks, and 3. neutral networks . In assortative networks, a node with a high degree tends to be connected to other nodes having high degrees. A few examples of assortative networks are social networks, co-authorship networks, actor networks, and so on. In disassortative networks, a node with a high degree tends to be connected to other nodes having low degrees and vice versa; examples include the Internet network, the WWW network, biological networks, and so on. Goh et al. studied the correlation of betweenness centrality in scale-free networks . The results show that the betweenness centrality correlation behaves the same in disassortative and neutral networks, but in assortative networks, it shows a different pattern: a node is connected to other nodes having the same influential power.
These results are highly important for understanding information dynamics in different networks.
In this section, we have discussed betweenness centrality, its fast computation algorithms, approximation methods, and its extensions. There has not been any significant work on identifying top-k nodes by betweenness centrality, as the complexity of computing one node's centrality is equivalent to that of computing the centrality of all nodes. Betweenness centrality is highly applicable in real-life applications, as discussed in the later part of the section.
PageRank is the key measure behind the success of search engines, as it helps find the top results for a given search query. Pagerank is a global centrality measure that needs the entire network to measure the importance of one node. It measures the importance of a node based on the importance of its neighbors. Thus, it is an iterative process that uses global information to estimate where a node stands in the network. The first method to compute PageRank was proposed by Brin and Page in 1998 while developing the ranking module for the prototype of Google . Various other methods have also been proposed to quickly compute the pagerank value of a node.
In big dynamic real-world networks, random walk based methods are used to estimate the rank of a node. To compute the pagerank, a few crawlers are started to walk on the network. Initially, the counter of every node is set to zero. When a crawler reaches a node, it increases its counter by one and moves to one of its neighbors uniformly at random.
While taking random walks, a crawler can get stuck in some part of the network, in a community, or on a node with no outgoing links. To handle such situations, the teleportation facility is used. In teleportation, when a crawler reaches a node, then with probability $q$ (where $0 < q < 1$) it selects a node uniformly at random from the entire network and jumps to it, and with probability $(1-q)$ it moves to one of its neighbors at random. It is observed that $q \approx 0.15$ gives good results in real-world networks.
Pagerank of a node is defined as,
\begin{center}
$P(u)=\frac{q}{n}+(1-q)\sum_{v:v\rightarrow u} {P(v)/k^{out}_v}$
\end{center}
where $n$ is the total number of nodes in the network, $q$ is the teleportation factor, and $k^{out}_v$ is the out-degree of node $v$. $v \rightarrow u$ denotes a link from $v$ to $u$. Thus, the pagerank value $P(u)$ is the probability of finding the crawler at node $u$ once the process converges to a stable state.
Pagerank has also been extended for different types of networks. Xing and Ghorbani extended pagerank to weighted networks and proposed the weighted PageRank algorithm (WPR). The WPR method considers the importance of the links and distributes rank scores based on the popularity of the nodes.
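The fixed-point equation for $P(u)$ given above can also be evaluated deterministically by power iteration instead of simulating crawlers; a minimal sketch (illustrative names; dangling nodes, which have no out-links, redistribute their mass uniformly so the total probability stays $1$) is:

```python
def pagerank(out_links, q=0.15, iters=100):
    """Power-iteration evaluation of
    P(u) = q/n + (1-q) * sum over links v->u of P(v)/k_out(v).
    out_links: {node: [successors]}."""
    nodes = list(out_links)
    n = len(nodes)
    P = {u: 1.0 / n for u in nodes}       # start from the uniform distribution
    for _ in range(iters):
        nxt = {u: q / n for u in nodes}   # teleportation share
        for v in nodes:
            succ = out_links[v]
            if succ:
                share = (1.0 - q) * P[v] / len(succ)
                for u in succ:            # each out-link carries P(v)/k_out(v)
                    nxt[u] += share
            else:
                for u in nodes:           # dangling node: spread mass uniformly
                    nxt[u] += (1.0 - q) * P[v] / n
        P = nxt
    return P
```

On a directed 3-cycle, for instance, the iteration converges to the uniform vector, as symmetry dictates.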
The results show that the WPR method performs better than the classic PageRank method in terms of finding pages relevant to a given search query. Pagerank has also been extended for temporal networks , multilayer networks , and hypergraphs .
Fiala extended pagerank to bibliographic networks, since in these networks we have temporal and meta-information about citations and authors. The proposed methods weigh citations between authors based on information from the co-authorship network, and the methods were tested on the Web of Science dataset of computer science journal articles to determine the most prominent computer scientists in the period 1996–2005.
\subsubsection*{Customized Pagerank}
Many research papers discuss customized ranking and how to improve users' experience on search engines. These specialized rankings are suitable for many particular applications. The main underlying idea of specialized ranking is that page importance is not absolute; it depends on the particular needs of a search engine or a user. For example, if a user is searching for a list of all top institutes, then the home pages of these institutes will not be the most important results; the user would rather get a page that consolidates information about all top institutes with their rankings. Specialized pagerank can also consider the user's history and preferences to decide the order in which to display search query results. For example, different institutes might be interested in customizing the ranking algorithm for their specific environment. Scarselli et al. proposed a neural network model to compute customized page ranks in the World Wide Web . Many approaches have been proposed for specialized page ranking based on the topic, user, or search query . Honglun et al.
have compared various techniques for obtaining personalized rankings and have written a survey on the same .
Amento et al. analyzed the correlation between pagerank and in-degree based on five queries . They report $60\%$ average precision as observed by the human subjects. Some other studies also show the correlation of pagerank with in-degree in the web network . The plot between pagerank and in-degree follows a power-law distribution with a broad tail having power-law exponent $\gamma \approx 2.1$. Grolmusz showed that the pagerank in an undirected graph is not directly proportional to the degree . The author proposed upper and lower bounds for the pagerank distribution and explained necessary and sufficient conditions for the PageRank to be proportional to the degree.
Litvak et al. performed experiments and observed that pagerank and in-degree both obey a power law with the same exponent . They presented a mathematical model using a stochastic equation to explain this phenomenon. They also showed that the tail behavior of the PageRank and the in-degree differs only by a multiplicative factor and derived a closed-form expression for the same. They further proposed Monte Carlo methods to identify top-k personalized PageRank lists . A few other works propose efficient approaches to identify top-k nodes based on the requirement .
In 2008, Fortunato et al. used the mean-field theory to understand the correlation of pagerank with the in-degree of a node .
They showed that the pagerank is directly proportional to the in-degree, modulo an additive constant. They also showed that the global ranking $R(P)$ of a node based on pagerank $P$ can be defined as,
\begin{center}
$R(P) \approx AP^{-\beta}$
\end{center}
where $\beta = \gamma - 1 \approx 1.1$, $\gamma$ is the power-law exponent of the pagerank distribution, and $A$ is a proportionality constant.
The complete process includes the following steps:
\begin{enumerate}
\item Compute the pagerank of a node using the following equation:
\begin{center}
$P(k) = \frac{q}{n} + \frac{1-q}{n}\frac{k_{in}}{\left\langle k_{in}\right\rangle }$
\end{center}
This gives the average PageRank of all nodes having in-degree $k_{in}$.
\item Compute the global rank $R(P)$ of the node.
\item A page with global ranking $R$ can be placed at any position in a hit list of length $h$ based on the query. The local ranking $r$ of the node is computed as,
\begin{center}
$r=R\frac{h}{n}$
\end{center}
\end{enumerate}
Using this approach, we can estimate the rank of a node if we know its in-degree. The combined expression to calculate the local rank of a node can be written as,
\begin{center}
$r=\frac{Ah}{(\frac{q}{n} + \frac{1-q}{n}\frac{k_{in}}{\left\langle k_{in}\right\rangle })^{1.1} n}$
\end{center}
Broder et al. proposed a framework to compute random walk based pagerank values for web networks . The proposed method shows a speedup of 2.1, and the Spearman correlation of the resulting ranking with pagerank is 0.95. Kamvar et al. proposed a technique to rank nodes in large real-world directed networks . They partitioned the link matrix into blocks, and the local ranking of each node is calculated in the corresponding block. The block-level rank is then used to estimate the global rank of the node. This method gives fast results, as a distributed computing environment can be used to calculate local ranks in different blocks.
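The three-step mean-field rank estimate of Fortunato et al. discussed above reduces to a one-line computation per node. A sketch follows, with illustrative parameter names; $A$ and $\beta$ are treated as network-dependent constants that must be fitted to the network at hand:

```python
def local_rank(k_in, mean_k_in, n, h, A, q=0.15, beta=1.1):
    """Mean-field estimate of a page's position in a hit list of
    length h, computed from its in-degree alone.
    A and beta are network-dependent constants (beta = gamma - 1)."""
    # Step 1: mean-field PageRank value from the in-degree
    P = q / n + (1 - q) / n * (k_in / mean_k_in)
    # Step 2: global rank via the power-law rank-value relation R = A * P^(-beta)
    R = A * P ** (-beta)
    # Step 3: rescale the global rank to the hit list, r = R * h / n
    return R * h / n
```

As expected from the formula, a node with a higher in-degree receives a higher mean-field PageRank value and hence a smaller (better) estimated local rank.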
Shariaty also analyzed the impact of neighbors on the pagerank of a node and proposed an approximation method based on local neighborhood information . There are some other works that use local information to estimate pagerank values .\nRichardson et al. proposed a machine-learning-based approach to approximate static pagerank values using user history and other static features . Liu et al. used two sampling methods, 1. direct sampling and 2. adaptive sampling, to approximate Google pagerank values . The direct sampling method samples the transition matrix once and uses the sample directly in the PageRank computation, whereas the adaptive sampling method samples the transition matrix multiple times. In adaptive sampling, the sample rate can be adjusted iteratively as the computing procedure proceeds. They simulated the methods on six real-world datasets, and the results show that the proposed methods can achieve higher computational efficiency. Luh used a latent semantic analysis approach to approximate Google pagerank .\nIn the pagerank algorithm, we can also modify the sampling technique to make the values converge faster or to get application-specific results. Boldi et al. analyzed crawl strategies and studied whether the results obtained by partial crawling can represent the global ranking . They analyzed when the crawling process can be stopped so that the announced result is very close to the actual ranking of the nodes. They performed the experiment on real-world networks and compared the rankings using Kendall's coefficient . Results show that if the sampling strategy computes the pagerank quickly, then the resulting ranking is badly correlated with the actual ranking; the results are the opposite for synthetic random graphs.\nKeong et al. proposed a modified random surfer model, which makes the number of iterations required to compute PageRank more predictable .
They showed theoretically that 30 iterations are enough to accumulate the total PageRank up to 0.992, and 50 iterations are enough to accumulate it up to 0.9997. Borgatti et al. studied the effect of sampling on different centrality measures like degree centrality, closeness centrality, betweenness centrality, and eigenvector centrality . They showed that the accuracy of centrality measures decreases as the sample size decreases. Maiya et al. also studied the impact of sampling techniques to identify highly influential nodes .\nHaveliwala presented convergence results for deciding the number of iterations that are required to get stable pagerank values . Yu et al. proposed the IRWR (Incremental Random Walk with Restart) approach to update the pagerank in $O(1)$ time in dynamic networks . Specifically, they proposed a fast incremental algorithm that computes proximities efficiently and exactly whenever an edge is updated. Sarma et al. proposed random-walk-based distributed algorithms for computing pagerank in directed and undirected graphs . The first approach takes $O(\log n/q)$ rounds in both directed and undirected networks, where $n$ is the total number of nodes and $q$ is the teleportation factor. They also proposed a faster algorithm for undirected networks that takes $O(\sqrt{\log n}/q)$ rounds. Berkhin has written a survey on pagerank computing that can be referred to for further information .", "id": "53b79b4d-61fd-4ef6-b815-cbef59c788df", "level": "subsection", "origin_cites_number": 26, "parent_id": "cb521dba-dace-41a1-8478-d8f347f3c503", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "PageRank Centrality" ], [ "subsection", "Approximation Methods" ] ], "subsections": [], "title": "Approximation Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "Like other centrality measures, various approaches have been proposed to update pagerank in dynamic networks.
Pagerank is mainly used in the WWW network, which is highly dynamic. Desikan et al. proposed a method to update pagerank in evolving networks based on the first-order Markov Model . In the WWW network, whenever any change occurs, it mainly affects a small part of the graph, and the remaining large part is unchanged. The pagerank of a node depends on the nodes that have a directed link towards it and is independent of its outgoing links. They carefully analyzed the changed and unchanged parts and their dependencies to compute the pagerank incrementally. They divided the network into two parts: 1. the first partition $Q$, which has no incoming links from the other partition, and 2. the second partition $P$, which includes the remaining nodes. Now, the pagerank of partition $Q$ can be computed separately and then scaled and merged with the rest of the network to get the actual PageRank values of the nodes in $Q$. The scaling is done by considering the number of nodes in both partitions. Berberich et al. proposed a normalized PageRank metric to compare two nodes, and the proposed score is robust to non-local changes in the graph, unlike the standard PageRank method .", "id": "d4f5b3bb-fbb9-47aa-a5ec-db1b0768c01b", "level": "subsection", "origin_cites_number": 2, "parent_id": "cb521dba-dace-41a1-8478-d8f347f3c503", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "PageRank Centrality" ], [ "subsection", "Update in Dynamic Networks" ] ], "subsections": [], "title": "Update in Dynamic Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "In real-world networks, the total number of nodes is very large. So, most of the time users are interested in finding the top-k pages (where $k$ is typically between 10 and 100) based on the search query and user preference. The exact ranking of lower-ranked nodes is not very important.
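As a minimal illustration of the selection step only (the works cited here additionally avoid computing all scores in the first place; the scores below are hypothetical), the top-k entries of a precomputed score table can be extracted without sorting everything:

```python
import heapq

# hypothetical PageRank scores, for illustration only
scores = {"a": 0.31, "b": 0.08, "c": 0.22, "d": 0.05, "e": 0.34}

def top_k(scores, k):
    """Return the k highest-scoring nodes in O(n log k) time,
    without sorting the entire score table."""
    return heapq.nlargest(k, scores, key=scores.get)

print(top_k(scores, 3))  # ['e', 'a', 'c']
```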
Many researchers have looked into this problem and have proposed different methods to find the top-k nodes .", "id": "7c9ccf33-6faa-4d85-af1c-a7df6725cf55", "level": "subsection", "origin_cites_number": 7, "parent_id": "cb521dba-dace-41a1-8478-d8f347f3c503", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "PageRank Centrality" ], [ "subsection", "Identify Top-k Nodes" ] ], "subsections": [], "title": "Identify Top-k Nodes" }, { "cite_extract_rate": 0.4, "cites": [ 8903, 8905, 5358, 8904 ], "content": "Apart from the WWW network, pagerank is also used to rank nodes in different types of networks, such as citation networks , collaboration networks , social networks , protein interaction networks , brain networks , semantic networks , and so on. Some online social networks, such as LinkedIn and ResearchGate, ask for users' endorsements of their special skills. A directed graph can then be created using this endorsement information, where the nodes are the users and the edges represent the endorsement scores. Roses et al. used a pagerank method to rank these nodes and verified their results on a synthetic network with 1493 nodes . Cheng et al. used pagerank and HITS to rank nodes in journal citation networks . In journal citation networks, nodes represent journals, and there is a directed weighted edge between two nodes $(u,v)$ if journal $u$ cites journal $v$, with the edge weight depending on the number of citations. In the WWW network there are no self-loops, but in the journal citation network all journals have self-citations, which create self-loops in the final network. Their results show that pagerank and HITS can be used to rank journals and give a better ranking than the ISI impact factor. \nIn this section, we have discussed pagerank, its extensions, its variations, and approximation methods to compute it. Pagerank is widely used to rank nodes when a node's importance depends on its neighbors.
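A minimal sketch of such a journal ranking on a toy weighted citation graph (the journal names, citation counts, and teleportation value $q=0.15$, i.e. the conventional damping 0.85, are illustrative):

```python
def pagerank(edges, q=0.15, tol=1e-10):
    """Weighted PageRank by power iteration. `edges` maps
    (u, v) -> weight, meaning u cites v; self-loops are allowed.
    Assumes every node has at least one outgoing link."""
    nodes = sorted({x for e in edges for x in e})
    n = len(nodes)
    out_w = {u: 0.0 for u in nodes}
    for (u, _), w in edges.items():
        out_w[u] += w
    pr = {u: 1.0 / n for u in nodes}
    while True:
        new = {u: q / n for u in nodes}  # teleportation mass
        for (u, v), w in edges.items():
            new[v] += (1 - q) * pr[u] * w / out_w[u]
        if sum(abs(new[u] - pr[u]) for u in nodes) < tol:
            return new
        pr = new

# toy journal citation network: (citing, cited) -> number of citations
citations = {("J1", "J2"): 30, ("J2", "J1"): 10,
             ("J3", "J1"): 25, ("J3", "J2"): 5,
             ("J1", "J1"): 8}            # self-citations form a self-loop
ranks = pagerank(citations)
```

Here J3 cites but is never cited, so it ends up with only the teleportation mass $q/n$, while J1 and J2 rank above it.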
In the next section, we will discuss the coreness of a node, which represents how well the node is connected to other important nodes as well as to periphery nodes in the network.", "id": "d91892b5-6506-4f10-9964-e7b20b3a9c5f", "level": "subsection", "origin_cites_number": 10, "parent_id": "cb521dba-dace-41a1-8478-d8f347f3c503", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "PageRank Centrality" ], [ "subsection", "Applications" ] ], "subsections": [], "title": "Applications" }, { "cite_extract_rate": 0, "cites": [], "content": "Real-world networks have a self-regulatory evolving phenomenon that gives rise to a core-periphery structure. The hierarchical organization of the network gives birth to the core-periphery structure, which coexists with the community structure. The concept of the core-periphery structure was first proposed by Borgatti and Everett . The core is a very dense nucleus of the network that is highly connected with periphery nodes. In social networks, the core nodes are the highly elite people of the society. Similarly, in a co-authorship network, the core nodes are the pioneering researchers of the area.\nSeidman proposed the k-shell decomposition method to identify core nodes in an unweighted network. The k-shell algorithm is a well-known method in social network analysis to find the tightly knit group of influential core nodes in a given network. This method divides the entire network into shells and assigns a shell index to each node. The innermost shell has the highest shell index $C_s(max)$ and is called the nucleus of the network.\nThis algorithm works by recursively pruning the nodes from lower degrees to higher degrees. First, we recursively remove all nodes of degree one until no node of degree 1 remains. All these nodes are assigned shell index $C_s=1$. In a similar fashion, nodes of degree 2, 3, 4, ... are pruned step by step.
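This recursive pruning can be sketched as follows (a minimal implementation for an undirected graph given as adjacency sets; nodes whose degree drops to the current $k$ or below during a pass get the same shell index $k$):

```python
def k_shell(adj):
    """Shell index of every node by recursive pruning.
    `adj` maps a node to the set of its neighbours (undirected graph)."""
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    shell, k = {}, 0
    while adj:
        k += 1
        while True:
            # all nodes whose current degree is <= k
            doomed = [u for u, vs in adj.items() if len(vs) <= k]
            if not doomed:
                break
            for u in doomed:
                shell[u] = k
                for v in adj.pop(u):
                    if v in adj:
                        adj[v].discard(u)
    return shell

# triangle a-b-c with a pendant node d attached to a
g = {"a": {"b", "c", "d"}, "b": {"a", "c"},
     "c": {"a", "b"}, "d": {"a"}}
print(k_shell(g))  # {'d': 1, 'a': 2, 'b': 2, 'c': 2}
```

The pendant node d is pruned in the first pass (shell 1); the triangle survives until $k=2$ and forms the nucleus.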
When we remove nodes of degree $k$, if there appears any node of degree less than $k$, it will also be removed in the same step. All these nodes are assigned shell index $k$. Here, a higher shell index represents higher coreness. Vladimir Batagelj et al. proposed an order $O(m)$ algorithm to calculate the coreness of all nodes, where $m$ is the total number of edges in the graph .", "id": "5242a34a-dcae-41c1-bf0e-c0cdcff90c07", "level": "section", "origin_cites_number": 3, "parent_id": "6773f6d5-118a-4a4b-ab25-f5ac5b895812", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Coreness Centrality" ] ], "subsections": [ "ec9f5694-d519-4ae0-9ccb-83b639d27e23", "2c5fc386-89d6-4d9a-9b74-ff63f8f53a0e", "c96565f4-eeea-4ffb-9de1-7fc7aba85a1c", "0f8ffac2-7217-4837-accc-5e5f9e7f0d85", "ef8e7bae-b2bb-4850-91ea-8c86acf1284d" ], "title": "Coreness Centrality" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 5362, 5360, 5359, 5361 ], "content": "Initially, the k-shell decomposition method was defined for unweighted undirected networks, but recently it has been extended to different types of networks. Garas et al. extended the k-shell method to identify core-periphery structure in weighted networks . They define the weighted degree that considers both the degree as well as the weights of the connected edges. Then the weighted degree is used while applying the k-shell decomposition method. Eidsaa and Almaas also proposed a method to identify core-periphery structure in weighted networks where they only consider the strength of the nodes while pruning them in each iteration , and this method is referred to as S-shell or strength decomposition algorithm. The strength of a node is defined as, $s_i=\\sum_{j \\epsilon \\Gamma (i)} W_{ij}$, where $W_{ij}$ denotes the weight of an edge connecting nodes $i$ and $j$. Wei et al. 
proposed an edge-weighting k-shell method that considers both the degree and the edge weights, where the weight of an edge is computed by adding the degrees of its two endpoints . The relative importance of these two parameters can be set using a tuning parameter, which varies from 0 to 1. If it is set to 0, then complete importance is given to the edge weights, and if it is set to 1, then complete importance is given to the degree of the node.\nThe shell-index assigns the same index value to many nodes, which actually might have different influential power . Zeng et al. modified the k-shell decomposition method and proposed a mixed degree decomposition (MDD) method, which considers both the residual degree and the exhausted degree of the nodes while assigning them index values . Liu et al. proposed an improved ranking method that considers both the k-shell value of a node and its distance to the highest k-shell nodes . The proposed method computes the shortest distance from all nodes to the highest k-shell nodes, so it has high computational complexity. Liu et al. showed that some core-like groups are formed in real-world networks, which are not true cores . The nodes in these groups are tightly connected with each other but have very few links outside. Based on this observation, the authors filtered out redundant links that have low diffusion power but support the formation of such non-pure core groups, and then applied the k-shell decomposition method. The authors show that the coreness computed on this new graph is a better metric of influential power, and it is highly correlated with the spreading power computed using the SIR model on the original graph. \nResearchers have also proposed hybrid centrality measures by combining the k-shell with other existing centrality measures. Hou et al. introduced the concept of all-around score to find influential nodes .
All around score of a node can be defined as, $Score=\\sqrt{\\left \\| d \\right \\|^2 + \\left \\| C_B \\right \\| ^2 +\\left \\| k_s \\right \\|^2 }$, where $d$ is the degree, $C_B$ is the betweenness centrality, and $k_s$ is the shell-index of the node. The degree takes care of local connectivity of the node, betweenness takes care of shortest paths that represent global information, and k-shell represents the position of the node with respect to the center. The total time complexity of the complete process is $O(nm)$, as it depends on the complexity of betweenness centrality that has the highest complexity. Basaras et al. proposed a hybrid centrality measure based on degree and shell-index and showed that it works better than the traditional shell-index . Bae and Kim proposed a method where the centrality value of a node is computed based on its neighbors' shell-index value; it thus considers both degree and shell-index value of the nodes . The results show that the proposed method outperforms other methods in scale-free networks with community structure. Tatti and Gionis proposed a graph decomposition method that considers both the connectivity as well as the local density while the k-shell decomposition method only considers the connectivity of the nodes . The running time of the proposed algorithm is $O(|V|^2|E|)$. They further proposed a linear-time algorithm that computes a factor-2 approximation of the optimal decomposition value. All the proposed centrality measures have better monotonicity, but all these measures require global information of the network to be computed, and so, they are not favorable in large-scale networks. Basaras et al. 
showed that the hybrid of degree and coreness (k-shell index) centrality could be efficiently used to identify influential spreaders .", "id": "ec9f5694-d519-4ae0-9ccb-83b639d27e23", "level": "subsection", "origin_cites_number": 12, "parent_id": "5242a34a-dcae-41c1-bf0e-c0cdcff90c07", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Coreness Centrality" ], [ "subsection", "Extensions" ] ], "subsections": [], "title": "Extensions" }, { "cite_extract_rate": 0.5, "cites": [ 8906 ], "content": "Lu et al. showed the relationship between degree, H-index , and coreness of a node . In real world networks, it is observed that the coreness is highly correlated with H-index. H-index family of a node is represented as $H(u) = (h_u^{(0)}, h_u^{(1)}, .... , h_u^{(l)})$, where $l$ is the distance of the farthest node from $u$. $h_u^{(0)}$ is the zero order h-index of the node that is equal to the degree of node $u$, $h_u^{(0)} = k_u$. $h_u^{(i)}$ index of a node is calculated using $h_v^{(i-1)}$ index of its neighbors, where $v \\epsilon \\Gamma(u)$. If we calculate H-index family of a node then it converges to coreness of the node. This method provides us a new perspective to understand the coreness of a node. In k-shell decomposition method we compute coreness using recursive removal of the nodes but in this method, coreness is computed using an iterative procedure. Fred et al. derived a function to compute the coreness (kshell no) using h-index of the node . The function is proposed as,\n\\begin{center}\n$\\frac{m \\cdot ln(c(d))}{ln(d)}=\\frac{ln(h)}{ln(N)}$\n\\end{center}\nwhere, $N$ represents total number of nodes, $d$ is degree of node, $m= 1/(\\alpha \\cdot \\beta)$, $\\alpha$ is the power-law coefficient for the degree distribution, and $\\beta$ is power law exponent from the degree-coreness correlation ($c(d)=d^\\beta$). 
They also empirically verify this correlation on the co-keyword network of mathematical journals.", "id": "2c5fc386-89d6-4d9a-9b74-ff63f8f53a0e", "level": "subsection", "origin_cites_number": 2, "parent_id": "5242a34a-dcae-41c1-bf0e-c0cdcff90c07", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Coreness Centrality" ], [ "subsection", "Approximation Methods" ] ], "subsections": [], "title": "Approximation Methods" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 5363, 8907 ], "content": "Real-world networks are highly dynamic, and it will not be feasible to recompute the shell-index of each node for every single change in the network. Li et al. proposed a method to update the shell-index value of the affected nodes whenever a new edge is added or deleted from the network . The proposed method updates the coreness of the affected nodes whenever a new edge is added or deleted from the network. Dasari et al. proposed a k-shell decomposition algorithm called ParK that reduces the number of random access calls and the size of the working set while computing the shell-index in larger graphs . They further proposed a faster algorithm that involves parallel execution to compute the k-shell in larger graphs. Sariyuce et al. proposed the first incremental k-core decomposition algorithms for streaming networks . They show that the proposed method has a million times speed-up than the original k-shell method on a network having 16 million nodes. Miorandi et al. also proposed methods to rank nodes based on coreness in real-world dynamic networks.\nJakma et al. proposed the first continuous, distributed method to compute shell-index in dynamic graphs . Pechlivanidou et al. proposed a distributed algorithm based on MapReduce to compute the k-shell of the graph . Montresor et al. proposed an algorithm to compute k-shell in live distributed systems . 
They further show that the execution time of the algorithm is bounded by $1+\\sum_{u \\in V}[d(u)-k_s(u)]$, and it gives an 80 percent reduction in execution time on the considered real-world networks.", "id": "c96565f4-eeea-4ffb-9de1-7fc7aba85a1c", "level": "subsection", "origin_cites_number": 7, "parent_id": "5242a34a-dcae-41c1-bf0e-c0cdcff90c07", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Coreness Centrality" ], [ "subsection", "Update in Dynamic Networks" ] ], "subsections": [], "title": "Update in Dynamic Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "There is not much work on identifying the top-k core nodes in real-world networks or coreness rank. The solutions to these problems will really help network scientists identify influential nodes and better understand the phenomenon of dynamic processes, such as an epidemic, influential spread, or information diffusion taking place on complex networks. Saxena and Iyengar proposed a method to estimate the shell-index of a node using local neighborhood information, and the efficiency of the estimator was verified using the monotonicity and SIR spreading model. The authors further discussed hill-climbing based methods to identify the top-ranked nodes using the proposed estimator. The results on real-world networks show that, on average, a top-ranked node can be reached in a small number of steps. 
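A greedy hill climb of this kind can be sketched as follows (an illustrative sketch, not the authors' exact procedure; the toy graph and precomputed shell indices below are made up):

```python
def hill_climb(adj, coreness, start):
    """Greedily move from `start` to the neighbour with the highest
    coreness until no neighbour improves on the current node.
    Returns the node reached and the number of steps taken."""
    node, steps = start, 0
    while True:
        best = max(adj[node], key=lambda v: coreness[v], default=node)
        if coreness.get(best, 0) <= coreness[node]:
            return node, steps
        node, steps = best, steps + 1

# toy graph: triangle a-b-c with pendant d; shell indices precomputed
g = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
core = {"a": 2, "b": 2, "c": 2, "d": 1}
print(hill_climb(g, core, "d"))  # ('a', 1): one step reaches the core
```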
The authors also proposed a heuristic method to fast estimate the percentile rank of a node based on the proposed estimator and structural properties of real-world networks.", "id": "0f8ffac2-7217-4837-accc-5e5f9e7f0d85", "level": "subsection", "origin_cites_number": 1, "parent_id": "5242a34a-dcae-41c1-bf0e-c0cdcff90c07", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Coreness Centrality" ], [ "subsection", "Identify Top-k Nodes" ] ], "subsections": [], "title": "Identify Top-k Nodes" }, { "cite_extract_rate": 0.264705882352941, "cites": [ 5366, 5368, 8909, 5364, 8897, 5365, 5367, 8908 ], "content": "In 2010, Kitsak et al. showed that the nodes of the nucleus are highly influential. If the infection is started from any single node of the core, it will spread more than if it is started from any periphery node. Saxena et al. showed the importance of core nodes in information diffusion on the networks having mesoscale structures . Several other works support the fact that core-nodes play a crucial role in making information viral . The k-shell method has also been used to identify influential leaders in the communities based on their influence propagation .\nCatini et al. used shell-index to identify clusters in PubMed scientific publications . To identify the clusters, a graph is created where the nodes are the publications. There is an edge between two nodes if the distance between the corresponding researchers' locations is less than the threshold. In the experiments, the authors have taken the threshold of 1 km. Based on the k-shell decomposition, the authors categorize the cities into monocore and multicore. Later on, the journal impact factors are used to quantify the quality of research of each core. 
Results show that the k-shell decomposition method can be used to identify the research hub clusters.\nCore-periphery structure has been studied in a wide variety of networks, such as financial networks , human-brain networks , nervous system of C. elegans worm , blog-networks , collaboration network , protein interaction networks , communication network of software development team , hollywood collaboration network , language network , youtube social interaction network , metabolic networks etc. Carmi et al. used k-shell decomposition method to analyze the hierarchical structure of the network . Researchers have studied the evolution of core-periphery structure using k-shell method and proposed evolving models to generate synthetic networks based on their observations . Karwa et al. proposed a method to generate all graphs for a given shell-index sequence .", "id": "ef8e7bae-b2bb-4850-91ea-8c86acf1284d", "level": "subsection", "origin_cites_number": 34, "parent_id": "5242a34a-dcae-41c1-bf0e-c0cdcff90c07", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Coreness Centrality" ], [ "subsection", "Applications" ] ], "subsections": [], "title": "Applications" }, { "cite_extract_rate": 0.14285714285714202, "cites": [ 7925 ], "content": "We have discussed all main centrality measures defined in network science. Researchers have combined some of these centrality measures or extended them to define new centrality measures based on the requirement. In this section, we will discuss some of these centrality measures with their specific properties.\n\\begin{enumerate}\n\\item All-around Score: Identification of the most influential nodes in complex networks is an important issue for more efficient spread of the information. Hou et al. introduced the concept of all-around score to find influential nodes . 
All-around score of a node can be defined as,\n\begin{center}\n$d=\sqrt{\left \| k \right \|^2 + \left \| C_B \right \| ^2 +\left \| C_S \right \|^2 }$\n\end{center}\nwhere $k$ is the degree, $C_B$ is the betweenness centrality, and $C_S$ is the k-shell index of the node. Thus, we consider three metrics to define the importance of a node. The degree takes care of the local connectivity of the node, betweenness takes care of shortest paths that represent global information, and the k-shell represents the position of the node with respect to the center. The total time complexity of the complete process is $O(nm)$, as it is dominated by the complexity of betweenness centrality, which is the highest. Results show that the all-around distance could be a more effective and stable indicator of the influential power of a node.\n\item Alpha Centrality: Eigenvector centrality does not give good results in some specific kinds of networks. Bonacich et al. proposed a centrality measure called alpha centrality that gives results similar to eigenvector centrality . It gives good comparable results for networks where eigenvector centrality cannot be applied. Alpha centrality can be defined as,\n\begin{center}\n$x=\alpha A^T x + e$\n\end{center}\nwhere $e$ is a vector containing extra information. The parameter $\alpha$ represents the relative importance of endogenous versus exogenous factors. The matrix solution can be written as,\n\begin{center}\n$x=(I - \alpha A^T)^{-1}e$\n\end{center}\nNewman et al. showed that under some conditions, the efficacy of eigenvector centrality is impaired, as it gives more importance to a small number of nodes in the network . This mainly happens in networks having hubs or a power-law degree distribution. So, they proposed an alternative centrality measure based on the nonbacktracking matrix that gives similar results in dense networks. However, it gives better results where eigenvector centrality fails.
The complexity of the new centrality measure is the same as the standard one, so it can be easily used for large real-world networks. \n\\item Synthesize Centrality ($SC(u)$): Liu et al. defined synthesize centrality to identify opinion leaders in social networks . Opinion leaders have an important influence on information propagation, so it is important to efficiently identify them to understand this dynamic phenomenon. Synthesize centrality is defined as follows:\n\\begin{center}\n$SC(u)= \\frac{C_D(u) + C_B(u)}{C_C(u)}$\n\\end{center}\nExperimental results show that if a node is identified as an opinion leader using SC centrality, then there is a high probability that it will be an opinion leader using HITS and PageRank. The proposed method has high efficiency, and its accuracy is increased as the number of opinion leaders increase.\n\\item C-Index: Yan et al. studied the competence of researchers in a weighted collaboration network and proposed a centrality measure called C-index based on that . The c-index is computed using the degree, strength, and centrality information of the neighbors of the node. They show that C-index is highly correlated with the competence of the collaboration network. It follows power-law distribution in the weighted scale-free networks. They also propose two more extensions of the c-index centrality measure called iterative c-index and cg-index.\n\\item Sociability Centrality Index: This centrality metric is used to measure the social skill of a node in a large scale social network . The proposed centrality measure is based on TOPSIS and Genetic Algorithm. All other centrality measures that we have discussed measure the importance of the node based on its topological location in the network. But the social importance of the node depends on some other features also. The proposed metric considers the psychological and sociological features with the topological location to socially rank a node. 
The proposed centrality measure is tested on real-world datasets, and it outperforms other existing measures to rank nodes based on social skills.\n\\end{enumerate}", "id": "d46b1247-4503-4fb3-9300-7e6dee95952b", "level": "section", "origin_cites_number": 7, "parent_id": "6773f6d5-118a-4a4b-ab25-f5ac5b895812", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Other Centrality Measures" ] ], "subsections": [], "title": "Other Centrality Measures" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, we are going to discuss applications of centrality measures to understand specific types of networks and the centrality measures that can be applied to the same.", "id": "ff5648bf-dbc7-4db3-bccd-4fd365b87c8f", "level": "section", "origin_cites_number": 0, "parent_id": "6773f6d5-118a-4a4b-ab25-f5ac5b895812", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Centrality Applications in Real-World Networks" ] ], "subsections": [ "3eefc717-c206-45f2-979b-ebfa222a06a0", "301e0671-404b-4579-aafb-006950d4282c", "b7641404-acae-45cb-bb13-cb40fffc6e16", "a05df858-7540-41bc-8389-b2de9d12d643", "5be998a1-5324-4347-aad7-245f259f1c22", "e93a901d-37ce-4916-ac9e-666eebdf75d6", "be450504-fc01-4b5d-94ce-738752fed0c7" ], "title": "Centrality Applications in Real-World Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "Wang et al. studied the air transportation network of China from the perspective of network science . The authors study the network structure and the centrality measures of each city in the network. They analyze the clustering coefficient, degree centrality, closeness centrality, and betweenness centrality of each node to examine its importance in the network under different contexts. 
They further study the correlation of these centrality measures with other characteristics of the nodes, like the number of seats, frequency of transportation, gross regional domestic product (GRDP), and so on. The network analysis of a transportation network can be used to better understand the structure of the cities and their connectivity with the main central cities. There are various points that can be explored more deeply, such as how these networks evolve, how a new city starts making connections in the network, how the location and size of a city affect its connectivity, and so on.", "id": "3eefc717-c206-45f2-979b-ebfa222a06a0", "level": "subsection", "origin_cites_number": 1, "parent_id": "ff5648bf-dbc7-4db3-bccd-4fd365b87c8f", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Centrality Applications in Real-World Networks" ], [ "subsection", "Air Transportation Network" ] ], "subsections": [], "title": "Air Transportation Network" }, { "cite_extract_rate": 0, "cites": [], "content": "Erciyes studied a biological network where the nodes represent cells and the edges represent the interactions between the connected cells . He performed the centrality analysis on this network to identify important nodes and edges. Other related works include ; please refer to the papers for more details.", "id": "301e0671-404b-4579-aafb-006950d4282c", "level": "subsection", "origin_cites_number": 6, "parent_id": "ff5648bf-dbc7-4db3-bccd-4fd365b87c8f", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Centrality Applications in Real-World Networks" ], [ "subsection", "Biological Networks" ] ], "subsections": [], "title": "Biological Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "Centrality measures are used to study the brain network, and they help to understand various brain disorders. Sporns et al. used closeness centrality to identify hubs in the brain network . Martino et al.
used degree and eigenvector centrality to study attention-deficit/hyperactivity disorder (ADHD) on the brain connectivity network .", "id": "b7641404-acae-45cb-bb13-cb40fffc6e16", "level": "subsection", "origin_cites_number": 2, "parent_id": "ff5648bf-dbc7-4db3-bccd-4fd365b87c8f", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Centrality Applications in Real-World Networks" ], [ "subsection", "Brain Network" ] ], "subsections": [], "title": "Brain Network" }, { "cite_extract_rate": 0.5, "cites": [ 8906 ], "content": "There are various centrality measures that have been defined to rank researchers based on the analysis of the citation network, such as h-index , g-index, and so on. Vitanov studied some of these centrality metrics like h-index , variations of h-index, g-index, $i_n$-index, on citation network for the assessment of researchers . He further discusses m-index, p-index, $IQ_p$-index, A-index, and R-index with respect to the success of a researcher. The paper can be referred to for further details.", "id": "a05df858-7540-41bc-8389-b2de9d12d643", "level": "subsection", "origin_cites_number": 2, "parent_id": "ff5648bf-dbc7-4db3-bccd-4fd365b87c8f", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Centrality Applications in Real-World Networks" ], [ "subsection", "Citation Network" ] ], "subsections": [], "title": "Citation Network" }, { "cite_extract_rate": 0, "cites": [], "content": "Borgatti has used the degree, closeness, betweenness, and eigenvector centrality for analyzing the sexual network . He throws light on one important point of the shortest path centrality measures like closeness and betweenness centrality. The author mentions that these centrality measures can be used if the information or disease spread in all directions at a time, and flow through the shortest paths. 
These measures cannot help, however, if the information flows in one direction at a time, as in a sexual network, where a node is in a relation with only one other node at any given time. In such applications, we can use application-specific centrality measures that are extensions or hybrids of the main centrality measures, based on trails rather than shortest paths. A similar example is rumor spreading on social networks: a rumor can spread along any path, not only the shortest ones, and it can reach a node any number of times. Only if we know that the information flows along the shortest paths can betweenness centrality be used to measure the importance of a node in such networks.", "id": "5be998a1-5324-4347-aad7-245f259f1c22", "level": "subsection", "origin_cites_number": 1, "parent_id": "ff5648bf-dbc7-4db3-bccd-4fd365b87c8f", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Centrality Applications in Real-World Networks" ], [ "subsection", "Sexual Network" ] ], "subsections": [], "title": "Sexual Network" }, { "cite_extract_rate": 0.5, "cites": [ 5369 ], "content": "We have already discussed many applications of centrality measures in social networks. Here we further discuss some very specific centrality measures that have been proposed for social networks. Wang et al. proposed a method to measure node centrality in directed and weighted networks based on the connectivity of the node . The proposed method is verified on the task of identifying the most influential nodes, and the results show that it outperforms other existing methods. 
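For concreteness, basic measures such as degree and closeness centrality can be computed directly from an adjacency list in a few lines. The sketch below is purely illustrative and not taken from any of the cited works; the graph and function names are our own:

```python
from collections import deque

def bfs_distances(adj, source):
    """Shortest-path (hop) distances from source in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def degree_centrality(adj):
    """Fraction of the other n-1 nodes each node is directly tied to."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    """(n-1) divided by the sum of shortest-path distances to all others."""
    n = len(adj)
    result = {}
    for v in adj:
        total = sum(bfs_distances(adj, v).values())
        result[v] = (n - 1) / total if total > 0 else 0.0
    return result

# A small star graph: hub 'a' tied to 'b', 'c', 'd'.
star = {'a': ['b', 'c', 'd'], 'b': ['a'], 'c': ['a'], 'd': ['a']}
print(degree_centrality(star)['a'])     # 1.0: the hub touches every other node
print(closeness_centrality(star)['b'])  # 0.6: distances 1, 2, 2 -> 3/5
```

On the star graph the hub attains the maximum score under both measures, matching the intuition behind the rankings discussed above.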
Centrality measures have also been used to study students' social networks to understand knowledge diffusion, peer support, homophily, teamwork, academic performance, and so on .", "id": "e93a901d-37ce-4916-ac9e-666eebdf75d6", "level": "subsection", "origin_cites_number": 2, "parent_id": "ff5648bf-dbc7-4db3-bccd-4fd365b87c8f", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Centrality Applications in Real-World Networks" ], [ "subsection", "Social Network" ] ], "subsections": [], "title": "Social Network" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 8910 ], "content": "Porta et al. used centrality measures to understand the networks of urban streets and intersections . Crucitti et al. studied centrality in urban street patterns of different world cities represented as networks in geographical space. The results show that self-organized cities have scale-free properties, as observed in nonspatial networks, while planned cities do not. 
Other works on centrality applications on urban street networks include .", "id": "be450504-fc01-4b5d-94ce-738752fed0c7", "level": "subsection", "origin_cites_number": 6, "parent_id": "ff5648bf-dbc7-4db3-bccd-4fd365b87c8f", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Centrality Applications in Real-World Networks" ], [ "subsection", "Urban Street Network" ] ], "subsections": [], "title": "Urban Street Network" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{enumerate}\n\\item Centralities based on shortest paths: closeness, betweenness, stress, graph.\n\\item Following groups of centrality measures are based on similar concepts:\n\\begin{enumerate}\n\\item Closeness, Harmonic\n\\item Betweenness, Load, Stress, Flow\n\\item EigenVector, Katz, Pagerank\n\\item Coreness, H-index, C-index\n\\end{enumerate}\n\\item Endpoints, proximal betweenness, k-betweenness, length-scaled betweenness, linearly scaled betweenness, edge betweenness, group betweenness, stress centrality, and load centrality are a modification of betweenness centrality.\n\\item The performance of various centrality measures is verified either using the well-known ranking of the nodes or using spreading models like SI, SIR, SIS, and so on.\n\\end{enumerate}", "id": "daa3664b-35a8-4d0b-b5a1-43bbb0f3c08b", "level": "section", "origin_cites_number": 0, "parent_id": "6773f6d5-118a-4a4b-ab25-f5ac5b895812", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Quick points" ] ], "subsections": [], "title": "Quick points" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{con}\nIn this paper, we have discussed various centrality measures that are used to identify important and influential nodes in real-world networks. As we have discussed, degree centrality is the first basic centrality measure that was used to rank nodes based on their degrees. 
It was later combined with other parameters, such as the clustering coefficient, the degree of neighbors, the age of ties, and so on, to rank nodes by considering local neighborhood properties. Then there are centrality measures, such as closeness, betweenness, and flow centrality, which are based on the concept of shortest paths. These centrality measures are dependent on each other, and their correlation is discussed in the paper. Next, we discussed eigenvector centrality, pagerank, and coreness; these centrality measures assign importance to a node based on the importance of its neighbors. The applications of different centrality measures are briefly discussed, along with experimentally grounded reasons why one specific measure is more applicable than others in a given situation. The paper also covers various hybrid centrality measures that have been proposed to rank nodes more effectively. In the last section, we discuss various real-world networks and the centrality measures that have been applied for the analysis of these networks.\n\\bibliographystyle{unsrt}\n\\bibliography{mybib}\n\\end{document}", "id": "62ad3af5-4922-4a34-81ec-5b5896fc098d", "level": "section", "origin_cites_number": 0, "parent_id": "6773f6d5-118a-4a4b-ab25-f5ac5b895812", "prefix_titles": [ [ "title", "Centrality Measures in Complex Networks: A Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
110
[ 9132, 8897, 7921, 7922, 4885, 5354, 5353, 5355, 8898, 7923, 8899, 8900, 5356, 7924, 8901, 5357, 8902, 8903, 8905, 5358, 8904, 5362, 5360, 5359, 5361, 8906, 5363, 8907, 5366, 5368, 8909, 5364, 5365, 5367, 8908, 7925, 5369, 8910 ]
0.853924
[ "Yashar Deldjoo", "Tommaso Di Noia", "Felice Antonio Merra" ]
A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks
2020
2020-05-20T19:17:11Z
cs.IR
Latent-factor models (LFM) based on collaborative filtering (CF), such as matrix factorization (MF) and deep CF methods, are widely used in modern recommender systems (RS) due to their excellent performance and recommendation accuracy. However, success has been accompanied with a major new arising challenge: \textit{many applications of machine learning (ML) are adversarial in nature}~\cite{DBLP:series/synthesis/2018Vorobeychik}. In recent years, it has been shown that these methods are vulnerable to adversarial examples, i.e., subtle but non-random perturbations designed to force recommendation models to produce erroneous outputs. The goal of this survey is two-fold: (i) to present recent advances on adversarial machine learning (AML) for the security of RS (i.e., attacking and defense recommendation models), (ii) to show another successful application of AML in generative adversarial networks (GANs) for generative applications, \updated{thanks to their ability for learning (high-dimensional) data distributions.} In this survey, we provide an exhaustive literature review of \updated{74} articles published in major RS and ML journals and conferences. This review serves as a reference for the RS community, working on the security of RS or on generative models using GANs to improve their quality.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "68e5db2c-bcee-48f2-8fe1-51da0b40c965", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks" ] ], "subsections": [ "0e8838fc-ecb8-4564-b02a-62fa54f70b8c", "cde5e9dc-556a-489c-8400-af776f3af7b8", "853aaac3-266e-4a22-bdef-6c605bd09586", "76c138c6-bcb1-42a0-b007-917968172e54", "aef46f5a-0d3e-4fef-b9e3-7d5fc876059b" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "{3}{\\z@}\n {-18\\p@ \\@plus -4\\p@ \\@minus -4\\p@}\n {4\\p@ \\@plus 2\\p@ \\@minus 2\\p@}\n {\\normalfont\\normalsize\\bfseries\\boldmath\\itshape\n \\rightskip=\\z@ \\@plus 8em\\pretolerance=10000 }}\n\\makeatother\n\\begin{document}\n\\title[A survey on Adversarial Recommender Systems: from Attack/Defense strategies to Generative Adversarial Networks]{A survey on Adversarial Recommender Systems: from Attack/Defense strategies to Generative Adversarial Networks}\n\\author{Yashar Deldjoo}\n\\orcid{0000-0002-6767-358X}\n\\email{yashar.deldjoo@poliba.it}\n\\author{Tommaso Di Noia}\n\\orcid{0000-0002-0939-5462}\n\\email{tommaso.dinoia@poliba.it}\n\\author{Felice Antonio Merra}\n\\authornote{Authors are listed in alphabetical order. Corresponding author: Felice Antonio Merra.}\n\\email{felice.merra@poliba.it}\n\\orcid{1234-5678-9012}\n\\affiliation{\n \\institution{Polytechnic University of Bari}\n \\streetaddress{Via Orabona, 4}\n \\city{Bari}\n \\state{Italy}\n \\postcode{70125}\n}\n\\renewcommand{\\shortauthors}{Y. Deldjoo, T. Di Noia and F.A. Merra}\n\\begin{abstract}\nLatent-factor models (LFM) based on collaborative filtering (CF), such as matrix factorization (MF) and deep CF methods, are widely used in modern recommender systems (RS) due to their excellent performance and recommendation accuracy. 
However, success has been accompanied with a major new arising challenge: \\textit{many applications of machine learning (ML) are adversarial in nature}~. In recent years, it has been shown that these methods are vulnerable to adversarial examples, i.e., subtle but non-random perturbations designed to force recommendation models to produce erroneous outputs. \nThe goal of this survey is two-fold: (i) to present recent advances on adversarial machine learning (AML) for the security of RS (i.e., attacking and defense recommendation models), (ii) to show another successful application of AML in generative adversarial networks (GANs) for generative applications, \\updated{thanks to their ability for learning (high-dimensional) data distributions.} In this survey, we provide an exhaustive literature review of \\updated{74} articles published in major RS and ML journals and conferences. This review serves as a reference for the RS community, working on the security of RS or on generative models using GANs to improve their quality.\n\\end{abstract}\n\\maketitle", "id": "0e8838fc-ecb8-4564-b02a-62fa54f70b8c", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "68e5db2c-bcee-48f2-8fe1-51da0b40c965", "prefix_titles": [ [ "title", "A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks" ], [ "subsubsection", "\\@startsection{subsubsection" ] ], "subsections": [], "title": "\\@startsection{subsubsection" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 314, 891, 7318, 892, 1189, 923, 1187, 1186, 1188 ], "content": "\\label{sec:Introduction}\nIn the age of data deluge, where users face a new form of information explosion, recommender systems (RS) have emerged as a paradigm of information push to lessen decision anxieties and consumer confusion by over-choice. 
RS enhance users' decision-making process and support sales\nby personalizing item recommendations for each user and helping them discover novel products. RS are a pervasive part of user experience online today and serve as the primary choice for many consumer-oriented companies such as Amazon, Netflix, and Google (e.g., YouTube~).\nAmong different recommendation techniques, collaborative filtering (CF) methods have been the mainstream of recommendation research both in academia and industry due to their superb recommendation quality. CF builds on the fundamental assumption that users who have expressed similar interests in the past will maintain similar choices in future~, and infers target user preference over unseen items by leveraging behavioral data of other users and exploiting similarities in their behavioral patterns.\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.90\\linewidth]{Pictures/From_CF_to_SecureCF.pdf}\n \\caption{Milestones of CF recommender models.}\n \\label{fig:milestones}\n\\end{figure}\nMilestones in CF models over the last three decades are illustrated in Figure~\\ref{fig:milestones}. We can identify two major eras in development of CF models based on their main objective:\n\\begin{enumerate}\n \\item The era focused on maximizing/enhancing the recommendation accuracy and beyond-accuracy; \n \\item The post neural era, the transition era from classical learning to adversarial machine learning.\n\\end{enumerate}\n\\noindent \\textbf{Accuracy maximization and beyond-accuracy enhancement era.} In this era, the main effort of research and practitioner-scholars was concentrated on the \\dquotes{golden objective} of \\textit{maximizing recommendation accuracy}. Consequently, machine-learned models tend to use any available signal in the data to reach this goal, even though some of the data contained noise as the results of users' misoperations. 
We distinguish between two classes of CF techniques in this era: (i) classical non-neural CF and (ii) deep neural CF, each described in the following.\n\\begin{itemize} \n \\item \\textbf{Classical non-neural CF.} The start of this era dates back to the 1990s, and it is still progressing. Over these three decades, CF methods have been the subject of active research by the RS community, resulting in a diverse set of models and evaluation measures to assess the effectiveness of these models. We can classify these CF approaches along various dimensions. For example, from a \\textit{learning paradigm} perspective, CF models can be classified into (i) \\textit{memory-based CF} and (ii) \\textit{model-based CF} models, in which the former category makes recommendations based on the similarity of user-user interactions (i.e., user-based neighborhood models) or item-item interactions (i.e., item-based neighborhood models), while the latter category predicts users' feedback on unseen items using latent factor models such as matrix factorization (MF)~. From the \\textit{model training} perspective, it is possible to categorize these models based on the loss functions employed, according to (i) \\textit{point-wise} loss, where the goal is to optimize towards a predefined ground-truth (e.g., the matrix factorization approach based on SVD), (ii) \\textit{pairwise ranking loss}, where the goal is to optimize personalized ranking (e.g., matrix factorization based on BPR), and (iii) \\textit{list-wise} loss, where the objective is to reflect the distance between the reference list and the output list ~.\n\t\\item \\textbf{Deep neural CF.} Another milestone is concerned with the success of \\dquotes{neural} technology in machine learning (ML). DNNs have been shown to be capable of providing remarkable accuracy in several predictive tasks and domains, such as image classification~ and speech recognition~, among others. 
In the field of RS, DNNs have been shown useful for the recommendation in several ways such as extracting deep features (via using CNNs), modeling item content in CF models by integrating side item information, building CF models by parameterizing latent factor models into layers of a DNN (deep CF), and modeling sequential relations (via using RNNs). As for deep-CF approaches, while MF assumes that the \\textit{linear interaction} between user and item latent factors can explain observed feedback, deep CF models can model a more complex representation of hidden latent factors by \\textit{parametrization of MF via a DNN}. \n\\end{itemize}\n\\updated{The above system types have been redesigned to use a wealth of side information beyond the URM into the recommendation models to make RS adapted in specific domains. The surveys~ provide a good frame of reference for CF methods leveraging rich side information.} \n\\noindent \\textbf{The post neural era, the transition era from classical learning to adversarial machine learning.} Despite the significant success of DNNs to solve a variety of complex prediction tasks on non-structured data such as images, recently, they have been demonstrated to be vulnerable to \\textit{adversarial examples}. Adversarial examples (or adversarial samples) are subtle but non-random perturbations \\textit{designed} to dictate a ML model to produce erroneous outputs (e.g., to misclassify an input sample). The subject started booming after the pioneering work by~ reported the vulnerability of DNNs against adversarial samples for the image classification task. It has been shown that by adding a negligible amount of adversarial perturbation on an image (e.g., a panda), a CNN classifier could misclassify the image in another class (e.g., a gibbon) with high confidence. 
These results were quite shocking, since it was expected that state-of-the-art DNNs that generalize well on unknown data would not change the label of a test image under a slight, human-imperceptible perturbation. Algorithms that aim to find such adversarial perturbations are referred to as \\textit{adversarial attacks}. As ML models are involved in many consumer safety and security-intensive tasks such as autonomous driving, facial recognition, and camera surveillance, adversarial attacks pose significant concerns for the security and integrity of the deployed ML models.\nIn the field of RS, numerous works have reported the failure of machine-learned recommendation models, i.e., latent-factor models (LFM) based on CF like MF and deep CF methods widely adopted in modern RS, against adversarial attacks. For instance, showed that by exposing the model parameters of BPR~ to adversarial and random perturbations, the value of nDCG is decreased by -21.2\\% and -1.6\\% respectively, a staggering difference of approximately 13 times. One main explanation for such behavior is that adversarial attacks exploit the imperfections and approximations made by the ML model during the training phase to control the model's outcomes in an engineered way~.\nAdversarial machine learning (AML) is an emerging research field that combines the best practices in the areas of ML, robust statistics, and computer security~. 
It is concerned with the design of learning algorithms that can resist adversarial attacks, studies the capabilities and limitations of the attacker, and investigates suitable countermeasures to design more secure learning algorithms~.\nThe pivotal distinguishing characteristic of AML is the notion of \\dquotes{min-max} game, in which two competing players play a zero-sum differential game, one --- i.e., the attacker --- tries to \\textit{maximize} the likelihood of the attack success, while the other --- i.e., the defender --- attempts to \\textit{minimize} the risk in such a worst-case scenario. In the context of RS, the defender players can be a machine-learned model such as BPR or a neural network, while the attacker is the adversarial model.\nTo protect models against adversarial attacks, \\textit{adversarial training} has been proposed. It is a defensive mechanism whose goal is not to detect adversarial examples, instead to build models that perform equally well with adversarial and clean samples. Adversarial training consists of injecting adversarial samples ---generated via a specific attack model such as FGSM~ or BIM~--- into each step of the training process. It has been reported ---both in RS~ and ML~--- that this process leads to robustness against adversarial samples (based on the specific attack type on which the model was trained on), and better generalization performance on clean samples. For instance, in~, the authors show that the negative impact of adversarial attacks measured in terms of nDCG is reduced from -8.7\\% to -1.4\\% when using adversarial training instead of classical training.\nThe above discussion highlights the failure of classical ML models (trained on clean data) in adversarial settings and advocates the importance of AML as a new paradigm of learning to design more secure models. 
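To make the FGSM perturbation rule mentioned above concrete, here is a minimal sketch on a toy logistic model; the model, feature values, and helper names are illustrative assumptions of ours, not taken from any surveyed paper:

```python
import math

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step: x_adv = x + eps * sign(grad_x loss), for a logistic
    model p = sigmoid(w.x + b) with binary cross-entropy loss."""
    z = sum(xi * wi for xi, wi in zip(x, w)) + b
    p = 1.0 / (1.0 + math.exp(-z))        # model prediction in (0, 1)
    grad = [(p - y) * wi for wi in w]     # d(loss)/dx_i = (p - y) * w_i
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Perturbing a 2-feature input against label y = 1.
print(fgsm_perturb([0.0, 0.0], 1, [1.0, -1.0], 0.0, 0.1))  # [-0.1, 0.1]
```

Each feature moves by exactly eps in the direction that increases the loss, mirroring the image example above, where a tiny engineered perturbation flips the model's output.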
Nevertheless, the attractiveness of AML that exploits the power of two adversaries within a \\dquotes{min-max} game is not limited to security applications and has been exploited to build novel \\textit{generative} models, namely generative adversarial networks (GANs). The key difference is as follows: the models used in AML for security (or attack and defense) focus only on a class of discriminative models (e.g., classifiers), whereas GANs build upon both discriminative and generative models.\nA GAN is composed of two components: the generator $G$ and the discriminator $D$. The training procedure of a GAN is a min-max game between $G$, optimized to craft fake samples such that $D$ cannot distinguish them from real ones, and $D$, optimized to classify original samples from generated ones correctly. \nThrough the interplay between these two components, the model reaches the Nash equilibrium where $G$ has learned to mimic the ground-truth data distribution, e.g., a profile of a particular user. 
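The interplay just described corresponds to the standard GAN min-max objective, restated here for reference:
\[
\min_{G} \max_{D} \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_{z}}\big[\log\big(1 - D(G(z))\big)\big]
\]
where $p_{\text{data}}$ is the real data distribution (e.g., observed user profiles) and $p_{z}$ is the prior over the generator's input noise; at the Nash equilibrium mentioned above, the distribution of $G(z)$ matches $p_{\text{data}}$.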
In the present survey, we identified different applications for GAN-based RS, which include improving the negative sampling step in learning-to-rank objective functions~, fitting the generator to predict missing ratings by leveraging both temporal~ and side information~, and augmenting the training dataset~.", "id": "cde5e9dc-556a-489c-8400-af776f3af7b8", "level": "section", "origin_cites_number": 27, "parent_id": "68e5db2c-bcee-48f2-8fe1-51da0b40c965", "prefix_titles": [ [ "title", "A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks" ], [ "section", "Introduction" ] ], "subsections": [ "8c4057d1-01d8-489a-89ad-d33f198b61f8", "7eeb464d-45a5-4117-9380-47e8cc4f5aa9", "6d22fd18-7538-4ae7-ae2e-7ef1308188eb", "6ee875b7-2355-4ad2-9101-387bc242ccfe" ], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\updated{The adversarial learning paradigm uses a unified and theoretically principled \\dquotes{min-max} optimization formulation at its heart to advance research in adversarial robustness. Nonetheless, the utility of this general min-max optimization has been demonstrated in another category of applications, named GANs, to learn (high-dimensional) data distributions. The current survey at hand aims to help the reader learn fundamental concepts in these two application areas of AML and provide a comprehensive literature review on the new trend of AML research in the RecSys field.}\n\\begin{enumerate}\n \\item \\textit{AML for the robustness/security of RS}: This is the \\dquotes{principal application} of AML in RS, which focuses on adversarial attacks on and defense of recommendation models, which we present in Section~\\ref{sec:security}.\n \\item \\textit{Application of AML in GANs}: This is a \\dquotes{derived topic} from AML that is focused on \\dquotes{generative} learning models. 
We identified four types of applications in this category, namely: \\textit{improving CF recommendation}, \\textit{context-aware recommendation}, \\textit{cross-domain recommendation} and \\textit{complementary recommendation}, which we present in Section~\\ref{sec:GAN}.\n\\end{enumerate}\nOverall, AML-based recommendation scenarios are highly relevant to the field of RS. Indeed, in recent years, a growing number of relevant research works have been proposed. Despite this success, research in AML-RS is overly scattered with each paper focusing on a particular task, domain, or architecture.\nOne major objective of this survey is to categorize state-of-the-art research in the field based on several identified dimensions in order to provide a richer understanding of the different facets of the AML-RS. \nOur ultimate motivation is to lay the foundation for a more standardized approach for reproducible research works in the field.\nThe practical outcome of the present survey includes:\n\\begin{enumerate}\n \\item To the best of our knowledge, this is the first work that provides a \\textit{comprehensive understanding} of AML in RS domain, unifying the advances made in the communities of ML and RS;\n \\item This survey sheds lights on two successful applications of AML, namely: adversarial attacks and defenses and GANs, both using the concept of \\dquotes{min-max} game at their core. 
It provides an extensive literature review of the existing research, specifically:\n \\begin{itemize}\n \\item For AML-based RS focused on security: we present a unified problem formulation and discuss the existing adversarial attack studies on RS from various perspectives, in particular attack and defense models, recommendation dimensions, as well as evaluation and attack metrics used in different papers.\n \\item For GAN-based RS: we provide a conceptual view of recommendation approaches incorporating GANs to address the item recommendation task, and we review an extensive body of research, which we classify according to generator type, discriminator type, and training paradigm. We also categorize the existing research into several distinctive high-level goals (e.g., complementary recommendation in the fashion domain, context-aware recommendation, etc.).\n \\end{itemize}\n \\item We created an open-source repository\\footnote{Table with AML-RS publications at \\url{https://github.com/sisinflab/adversarial-recommender-systems-survey}} that includes all reviewed research articles and is updated over time. 
The aim of this repository is to facilitate benchmarking AML in the RS field by providing links to the released code and the datasets used for the evaluation.\n\\end{enumerate}", "id": "8c4057d1-01d8-489a-89ad-d33f198b61f8", "level": "subsection", "origin_cites_number": 0, "parent_id": "cde5e9dc-556a-489c-8400-af776f3af7b8", "prefix_titles": [ [ "title", "A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks" ], [ "section", "Introduction" ], [ "subsection", "Main contributions and outcome of the survey" ] ], "subsections": [], "title": "Main contributions and outcome of the survey" }, { "cite_extract_rate": 0, "cites": [], "content": "To identify the relevant publications that constitute the state of the art on adversarial learning in recommender systems, we mainly relied on publications indexed in major computer science bibliography databases, namely DBLP (\\url{https://dblp.uni-trier.de/}) and Scopus (\\url{https://www.scopus.com}). In addition, realizing that many top-tier venues also publish related works that may not necessarily be indexed in the above databases, we also gathered a number of related publications by searching directly through Google Scholar. Our search strategy was composed of two main stages: \n\\begin{enumerate}\n\\item relevant publication collection, \n\\item filtering and preparing the final list.\n\\end{enumerate} \nWe also collected publications referenced in the already selected ones.\nAs for the first stage, we queried the term \\dquotes{adversarial recommend} in the above-mentioned indexing services. While the search in DBLP returns publications containing the query term in the \\textit{title}, the search results from Scopus and Google Scholar return publications containing the query \\textit{both in the title and the content}, thereby altogether forming a complete list of identified research works. 
We collected all resulting publications from the DBLP, Scopus, and Google Scholar searches. In the second stage, we went through all the collected research works and removed all irrelevant ones. These, for instance, could include works that used AML for an application other than RS, e.g., in Computer Vision or Speech Enhancement.\nWe mostly turned our attention to journal publications and papers published in top conferences, and to a lesser extent to workshop publications or works published in entry-level venues. Some of the considered journals and conferences include: \\updated{RecSys, SIGIR, WSDM, TheWebConference, IJCAI, and KDD}.", "id": "7eeb464d-45a5-4117-9380-47e8cc4f5aa9", "level": "subsection", "origin_cites_number": 0, "parent_id": "cde5e9dc-556a-489c-8400-af776f3af7b8", "prefix_titles": [ [ "title", "A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks" ], [ "section", "Introduction" ], [ "subsection", "Strategies for literature search" ] ], "subsections": [], "title": "Strategies for literature search" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 1187, 1190, 313, 7336 ], "content": "\\updated{While there exist several survey articles on general RS topics, for example~, to the best of our knowledge, none of them focuses on the application of AML techniques in the recommendation task. In contrast, we provide a comprehensive literature review and extended taxonomy on the application of AML for security purposes (i.e., adversarial attacks and defenses) and generative models (i.e., in GAN-based systems). This classification is further accompanied by the identification of main application trends, possible future directions, and novel open challenges.}\n\\updated{The current literature review can nonetheless be seen as related to other surveys such as~. 
In particular,~ introduce AML as a novel application of DL by pointing to only a few works in the field, such as~, without providing a detailed study on the topic of AML for RS. The work by~, on the other hand, is centered around the application of AML for graph learning in a general ML setting. Although link prediction techniques can be adapted from graph-learning-based systems to perform the item recommendation task (e.g., by predicting a user's connection with an item), this work remains far from the focus of the current survey. We would also like to acknowledge the existence of related surveys on the application of AML in other tasks not directly related to RS, for example for the CV field by~, on classical ML models by~, and on GAN applications in CV and NLP tasks by~.}\nWe can also highlight that part of the material presented in this survey has been presented as a tutorial at the WSDM'20~ and the RecSys'20 conference.\\footnote{Tutorial slides at ~\\url{https://github.com/sisinflab/amlrecsys-tutorial}}", "id": "6d22fd18-7538-4ae7-ae2e-7ef1308188eb", "level": "subsection", "origin_cites_number": 14, "parent_id": "cde5e9dc-556a-489c-8400-af776f3af7b8", "prefix_titles": [ [ "title", "A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks" ], [ "section", "Introduction" ], [ "subsection", "Survey Context and Related Surveys" ] ], "subsections": [], "title": "Survey Context and Related Surveys" }, { "cite_extract_rate": 0, "cites": [], "content": "\\updated{In the subsequent core section of this survey, we present foundational concepts of AML and its application for the security of ML and RS models in Section~\\ref{sec:security}. Afterward, in Section~\\ref{subsec:AML_RS}, we present a literature review of state-of-the-art research on the application of AML for the security of recommendation models and categorize the research works based on several identified dimensions. 
Section~\\ref{sec:GAN} sheds light on another exciting application for AML in GAN-based systems and reveals several applications of that in the RecSys domain. Section~\\ref{sec:conclusion} rounds off the survey by presenting the findings and open research directions.}\n\\begin{table}[t]\n\\centering\n\\caption{List of abbreviations used throughout this paper.} \n\\label{tbl:abbr}\n\\scalebox{0.80}{\n \\begin{tabular}{l|l}\n\\toprule\n\\multicolumn{1}{c}{\\textbf{Abbreviation}} & \\multicolumn{1}{c}{\\textbf{Term}} \\\\\n\\bottomrule\nAI & Artificial Intelligence \\\\ \\hline\nAML & Adversarial Machine Learning \\\\ \\hline\nAMR & Adversarial Multimedia Recommendation \\\\ \\hline\nC\\&W &Carlini and Wagner \\\\\\hline\nCA & Context-Aware RS \\\\ \\hline\nCBF-RS & Content-Based Filtering RS\\\\ \\hline\nCF-RS & Collaborative Filtering RS \\\\ \\hline\nCS & Cold start \\\\ \\hline\nCD-RS & Cross-Domain RS\\\\ \\hline\nCV & Computer Vision \\\\ \\hline\nDL & Deep Learning \\\\ \\hline\nDNN & Deep Neural Network \\\\ \\hline\nERM & Empirical Risk Minimization \\\\ \\hline \nFGSM & Fast Gradient Sign Method \\\\ \n\\bottomrule\n\\end{tabular}\n\\quad\n \\begin{tabular}{l|l}\n\\toprule\n\\multicolumn{1}{c}{\\textbf{Abbreviation}} & \\multicolumn{1}{c}{\\textbf{Term}} \\\\\n\\bottomrule\nFNCF & Feature-based Neural Collaborative Filtering \\\\ \\hline\nGAN & Generative Adversarial Network \\\\ \\hline\nG-RS & Graph-based RS \\\\ \\hline\nGRU & Gated Recurrent Unit \\\\ \\hline\nIR & Information Retrieval \\\\ \\hline\nLFM & Latent Factor Model \\\\ \\hline\nLSTM & Long Short-Term Memory \\\\ \\hline\nMF & Matrix Factorization \\\\ \\hline\nML & Machine Learning \\\\ \\hline\nNLP & Natural Language Processing \\\\ \\hline\nRS & Recommender Systems \\\\ \\hline\nSM & Social Media \\\\ \\hline\nSN & Social Network \\\\ \\hline\nURM & User Rating Matrix \\\\ \n\\bottomrule\n\\end{tabular}\n}\n\\end{table}", "id": "6ee875b7-2355-4ad2-9101-387bc242ccfe", "level": "subsection", 
"origin_cites_number": 0, "parent_id": "cde5e9dc-556a-489c-8400-af776f3af7b8", "prefix_titles": [ [ "title", "A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks" ], [ "section", "Introduction" ], [ "subsection", "Structure of the Survey" ] ], "subsections": [], "title": "Structure of the Survey" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:security}\nFor security concerns to be addressed appropriately in today's ML systems, there is a need to bridge the knowledge gap between the ML and computer security communities. Adversarial machine learning (AML) is a recently proposed and popularized approach that lies at the intersection of the above fields, combining the best of both. The main goal of AML for security is to build systems that learn well under normal conditions and that, when under attack ---in particular under \\textit{adversarial attack}--- can respond rapidly and\nsafeguard ML models against emerging adversarial threats.\nAs the literature on AML for security emerged in the context of ML, in this section we first discuss the fundamentals of attacks on, and defenses of, ML models (cf. Section~\\ref{subsec:AML_att_def}). We then present AML-based RS focused on security applications, for which we survey the identified literature in the field and classify it based on several methodological and evaluation-related dimensions (cf. 
Section~\\ref{subsec:AML_RS}).", "id": "853aaac3-266e-4a22-bdef-6c605bd09586", "level": "section", "origin_cites_number": 0, "parent_id": "68e5db2c-bcee-48f2-8fe1-51da0b40c965", "prefix_titles": [ [ "title", "A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks" ], [ "section", "Adversarial Machine Learning for Security of RS" ] ], "subsections": [ "37d5f95e-65ad-40e8-a04c-82ef99007910", "6267834f-3f7b-44b2-a329-ab08802823e3" ], "title": "Adversarial Machine Learning for Security of RS" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{subsec:AML_att_def}\nThroughout this section, we consider a supervised learning --- classification --- task. Assume a training dataset $\\mathcal{D}$ of $n$ pairs $(x, y) \\in \\mathcal{X} \\times \\mathcal{Y}$, where $x$ is the input sample, and $y$ is its corresponding class label. The problem of classification is formulated as finding a candidate function $f_\\theta: \\mathcal{X} \\rightarrow \\mathcal{Y}$ that can predict the class label $y$ for the input sample $x$, where $\\theta$ is the model parameter. This leads to solving an empirical risk minimization (ERM) problem of the form \n\\[\n\\min_{\\theta} \\sum_{(x_i,y_i) \\in \\mathcal{D}} \\ell(f(x_i;\\theta),y_i)\n\\] \nwhere $\\ell(.)$ is the empirical risk function (the loss function). 
Various adversarial attacks aim to find a non-random perturbation $\\delta$ to produce an adversarial example $x^{adv} = x + \\delta$ that can cause an erroneous prediction (e.g., misclassification) as we will see in the following section.", "id": "37d5f95e-65ad-40e8-a04c-82ef99007910", "level": "subsection", "origin_cites_number": 0, "parent_id": "853aaac3-266e-4a22-bdef-6c605bd09586", "prefix_titles": [ [ "title", "A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks" ], [ "section", "Adversarial Machine Learning for Security of RS" ], [ "subsection", "Foundations of adversarial machine learning" ] ], "subsections": [ "8a2feeca-6f33-4ae6-8808-36429930926a", "9668c3aa-18b8-4212-b46b-d6733b069086" ], "title": "Foundations of adversarial machine learning" }, { "cite_extract_rate": 0.6190476190476191, "cites": [ 314, 891, 1193, 890, 1192, 1191, 892, 923, 1196, 7337, 916, 1194, 1195 ], "content": "}\nIn recent years, the advances made in deep learning (DL) have considerably strengthened the capabilities of ML models in a wide range of predictive tasks such as classification of images and other unstructured data. Notwithstanding their great success, recent studies have shown that ML/DL models are not immune to security threats from adversarial use of AI. We can classify attacks against an ML model along two main dimensions: attack \\textit{timing} and attack \\textit{goal}.\n \\begin{figure}[!t]\n \\centering\n \\includegraphics[width=0.70\\paperwidth]{Pictures/posion_evasion_attack.pdf}\n \\caption{A schematic representation of the distinction between \\textit{evasion} attacks and \\textit{poisoning} attacks.}\n \\label{figure:attack_scheme}\n\\end{figure}\n\\textbf{Attack timing.} As illustrated in Fig.~\\ref{figure:attack_scheme}, an adversary can attack an ML model at two main stages of the learning pipeline: during \\textit{training} or \\textit{production}. 
These two categories of attacks are respectively known as (i) \\textit{training-time attack} (a.k.a. causative or poisoning attack)~ and (ii) \\textit{inference-time attack} (a.k.a. exploratory or evasion attack)~.\n \\begin{itemize}\n \\item \\textit{Poisoning attack.} Data poisoning attacks are realized by injecting false data points into the training data with the goal of corrupting/degrading the model (e.g., the classifier). Poisoning attacks have been explored in the literature for a variety of tasks~, such as (i) attacks on binary classification for tasks such as label flipping or against kernelized SVM~, (ii) attacks on unsupervised learning such as clustering and anomaly detection~, and (iii) attacks on the matrix completion task in RS~. As an example, in the pioneering work by~, the authors propose a poisoning attack based on properties of the SVM optimal solution that could significantly degrade the classification test accuracy. \n \\item \\textit{Evasion attack.} \\updated{Unlike poisoning attacks, evasion attacks do not interfere with training data. They adjust malicious samples during the inference phase. These attacks are also named \\textit{decision-time} attacks referring to their attempt to \\textit{evade} the \\textit{decision} made by the learned model at test time~. For instance, evasion attacks can be used to evade spam~, as well as network intrusion~ detectors. Recently, evasion attacks have been conducted by crafting \\textit{adversarial examples}: subtle, non-random, human-imperceptible perturbations added to original data to cause the learned model to produce erroneous output.~ were the first to discover that some carefully selected perturbations that are barely perceptible to the human eye, when added to an image, could lead a well-trained DNN to misclassify the adversarial image with high confidence.}\n \\end{itemize}\n\\textbf{Attack goal.} Attacks are conducted for different goals. 
We can distinguish between two main classes of attack goals: (i) \\textit{untargeted attacks} and (ii) \\textit{targeted attacks}. To provide the reader with an intuitive insight into the mechanisms behind adversarial attacks and defense strategies, we define them formally for a classification task~.\n\\begin{definition}[Untargeted adversarial attack~]\\label{def:untar_attack} The goal of the attacker in an \\textbf{untargeted adversarial attack} (misclassification) is to add a minimal amount of perturbation $\\delta$ on the input sample $x$ such that it can cause incorrect classification.\n Given $f(x;\\theta) = y$, an Untargeted Adversarial Attack is formulated as:\n \\begin{equation}\n \\label{eq:att_untrg}\n \\begin{aligned}\n & \\min_{\\delta}\n & & \\left\\lVert \\delta \\right\\rVert \\\\\n & \\text{s.t.:}\n & & f(x + \\delta; \\theta) \\neq y , \\ \\ x + \\delta \\in [0,1]^n\n \\end{aligned}\n \\end{equation}\nThe second constraint $x + \\delta \\in [0,1]^n$ is a value-clipping constraint needed for images, to bound the adversarial samples to a predefined range so that the perturbed images remain valid after the attack. Alternatively, we can formulate the problem as an unconstrained optimization problem where the goal of the attacker is to \\textit{maximize} the loss between the perturbed sample $x + \\delta$ and true class $y$\n \\begin{equation}\n \\label{eq:att_rel_gen}\n \\begin{aligned}\n & \\underset{\\delta: \\left\\lVert \\delta \\right\\rVert \\leq \\epsilon }{\\text{max}}\n & &\\ell(f(x+\\delta; \\theta),y) \n \\end{aligned}\n \\end{equation}\nObviously, since adding an unbounded amount of noise on the input will eventually lead to a classification error, the attacker works with a norm-constrained form of noise, that is, $\\left\\lVert \\delta \\right\\rVert \\leq \\epsilon$ for some exogenously given $\\epsilon$. 
\\qed\n\\end{definition}\nIn the context of DNN, the above attacks are categorized based on the norm used to represent the magnitude of the noise, according to the following norm types~: $l_0$, $l_1$, $l_2$, and $l_{\\infty}$.\n\\begin{definition}[Targeted adversarial attack~]\n \\label{def:tar_attack}\nThe goal of the attacker in a \\textbf{targeted adversarial attack} is to perturb the input by adding a minimum amount of perturbation $\\delta$ such that it can force the model to misclassify the perturbed sample into an illegitimate target class (a.k.a. misclassification label). Given $f(x;\\theta) = y$, with $y \\neq y_t$, we formulate the problem as:\n \\begin{equation}\n \\label{eq:att_trg}\n \\begin{aligned}\n & \\underset{\\delta}{\\text{min}}\n & & \\left\\lVert \\delta \\right\\rVert \\\\\n & \\text{s.t.:}\n & & f(x + \\delta; \\theta) = y_t\n \\end{aligned}\n \\end{equation}\nSimilarly, the above problem can be expressed as an unconstrained optimization problem \n \\begin{equation}\n \\label{eq:att_rel_trg}\n \\begin{aligned}\n & \\underset{\\delta: \\left\\lVert \\delta \\right\\rVert \\leq \\epsilon }{\\text{min}}\n & & \\ell(f(x+\\delta; \\theta), y_t) \n \\end{aligned}\n \\end{equation} \n \\qed \n\\end{definition}\nThe most common attack types so far exploited in the community of RS are the fast gradient sign method (FGSM)~ and Carlini and Wagner (C\\&W) attacks, which belong to the $l_{\\infty}$- and $l_2$-norm attack types, respectively. 
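The untargeted and targeted objectives above can be illustrated numerically. The following is a minimal, hypothetical sketch (not code from any surveyed paper): a two-feature logistic model, with a one-step sign-gradient heuristic standing in for the norm-bounded perturbation; all names and values are illustrative.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, x, y):
    # cross-entropy for a toy logistic model p(y=1|x) = sigmoid(w . x)
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return -math.log(p) if y == 1 else -math.log(1.0 - p)

def grad_x(w, x, y):
    # gradient of the logistic loss w.r.t. the input x: (p - y) * w
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return [(p - y) * wi for wi in w]

def perturb(w, x, y, eps, targeted=False):
    # untargeted: step *along* sign(grad) to increase the loss on the true y;
    # targeted:   step *against* sign(grad) to decrease the loss on target y
    g = grad_x(w, x, y)
    s = -1.0 if targeted else 1.0
    return [xi + s * eps * ((gi > 0) - (gi < 0)) for xi, gi in zip(x, g)]
```

With an l-infinity budget `eps`, the untargeted call raises the loss on the true class while the targeted call lowers the loss on the attacker-chosen class, mirroring the two formulations above.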
We provide the formal definition of the FGSM and C\\&W attacks here.\n\\begin{definition}[FGSM attack~]\nThe fast gradient sign method (FGSM)~ utilizes the \\textit{sign of the gradient} of the loss function to find a perturbation that maximizes the training loss (in the untargeted case)\n \\begin{equation}\n \\label{eq:att_linf}\n \\delta = \\epsilon \\cdot \\sign (\\bigtriangledown_x \\ell(f(x;\\theta),y))\n \\end{equation}\nwhere $\\epsilon$ (perturbation level) represents the attack strength and $\\bigtriangledown_x$ is the gradient of the loss function w.r.t. input sample $x$. The adversarial example is generated as $x^{adv} = x + \\delta$. FGSM applies an $l_{\\infty}$-bound constraint $||\\delta||_{\\infty} \\leq \\epsilon$ with the original idea of encouraging perceptual similarity between the original and perturbed samples. The unconstrained FGSM simply seeks a perturbation that maximizes the loss value. The corresponding approach for targeted FGSM~ is\n \\begin{equation}\n \\label{eq:att_linf_targeted}\n \\delta = - \\epsilon \\cdot \\sign (\\bigtriangledown_x \\ell(f(x;\\theta),y_t))\n \\end{equation}\nwhere the goal is to maximize the conditional probability $p(y_t|x)$ for a given input $x$.\\qed \n\\end{definition}\nSeveral variants of FGSM have been proposed in the literature~. For instance, the fast gradient value (FGV) method~, which instead of using the sign of the gradient vector as in FGSM, uses the actual value of the gradient vector to modify the adversarial change, or the basic iterative method (BIM)~ (a.k.a. iterative FGSM), which applies the FGSM attack multiple times \\textit{iteratively} using a small step size and within a total acceptable input perturbation level. 
The core idea of the C\\&W attack is to replace the standard loss function --- typically cross-entropy --- with an empirically-chosen loss function and use it in an \\textit{unconstrained optimization formulation} given by\n\\begin{equation}\n \\underset{\\delta}{\\text{min}} \\left\\lVert \\delta \\right\\rVert_p^p + c \\cdot h(x+\\delta, y_t)\n\\end{equation}\nwhere $h(\\cdot)$ is the candidate loss function.\\qed \n\\end{definition}\nThe C\\&W attack has been used with several norm-type constraints on the perturbation ($l_0$, $l_2$, $l_{\\infty}$), among which the $l_2$-bound constraint has been reported to be most effective~.\n\\textbf{Adversarial attacks on RS - challenges and differences with ML tasks.} In spite of the similarities between ML classification and recommendation learning tasks, there are considerable differences/challenges in adversarial attacks on RS compared with ML, and in the degree to which the subject has been studied in the respective communities:\n\\begin{itemize}\n \\item \\textit{Poisoning vs. adversarial attack.} In the beginning, the main focus of the RS research community was on \\textit{hand-engineered} fake user profiles (a.k.a. shilling attacks) against rating-based CF~. Given a URM with $n$ real users and $m$ items, the goal of a shilling attack is to augment a fraction of malicious users $\\lfloor{\\alpha n} \\rfloor$ ($\\lfloor{.}\\rfloor$ is the floor operation) to the URM ($\\alpha \\ll 1$), in which each malicious user profile can contain ratings for at most $C$ items. The ultimate goal is to harvest recommendation outcomes toward an illegitimate benefit, e.g., pushing some targeted items into the top-$K$ list of users for market penetration. 
Shilling attacks against RS have an established literature whose development has passed through two main milestones: the first one ---since the early 2000s--- in which the literature focused on building hand-crafted fake profiles whose rating assignments follow different strategies, e.g., random, popular, love-hate, and bandwagon attacks, among others~; the second research direction started in 2016 when the first ML-optimized attack was proposed by~ on factorization-based RS. That work introduced a novel type of data poisoning attack that applies the adversarial learning paradigm for generating poisoning input data. Nonetheless, given their significant impact against modern recommendation models, the research works focusing on \\textit{machine-learned adversarial attacks} against RS have recently received great attention from the research community, \\updated{e.g., consider~.}\n \\item \\textit{CF vs. classification models:} Attacks against classification tasks focus on enforcing the wrong prediction of individual instances in the data. In RS, however, the mainstream attacks rely on CF principles, i.e., mining similarity in opinions of like-minded users to compute recommendations. This interdependence between users and items can, on the one hand, \\textit{improve robustness} of CF, since predictions depend on a group of instances rather than on an individual one, and, on the other hand, may cause \\textit{cascade effects}, where an attack on an individual user may impact other, neighboring users~.\n \\item \\textit{Attack granularity and application type:} Adversarial examples created for image classification tasks are built on a continuous real-valued representation of image data (i.e., pixel values), but in RS, the raw values are user/item IDs and ratings that are discrete. Perturbing these discrete entities is infeasible since it may lead to changing the semantics of the input, e.g., loosely speaking applying $ID+\\delta$ can result in a new user $ID$. 
Therefore, existing adversarial attacks in the field of ML are not trivially transferable to RS problems. Furthermore, in the context of CV --- attacks against images --- the perturbations often need to be \\dquotes{human-imperceptible} or \\dquotes{inconspicuous} (i.e., may be visible but not suspicious)~. How to capture these nuances when designing attacks on RS remains an open challenge.\n\\end{itemize}", "id": "8a2feeca-6f33-4ae6-8808-36429930926a", "level": "subsubsection", "origin_cites_number": 21, "parent_id": "37d5f95e-65ad-40e8-a04c-82ef99007910", "prefix_titles": [ [ "title", "A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks" ], [ "section", "Adversarial Machine Learning for Security of RS" ], [ "subsection", "Foundations of adversarial machine learning" ], [ "subsubsection", "\\updated{Adversarial attacks on ML models" ] ], "subsections": [], "title": "\\updated{Adversarial attacks on ML models" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 892, 1187, 923, 681 ], "content": "From a broad perspective, defense mechanisms against adversarial attacks can be classified as \\textit{detection} methods and methods seeking to increase the \\textit{robustness} of the learning model. The goal of this section is to briefly review approaches that build \\textit{robust ML} models in adversarial settings. The prominent methods used in RS are (i) robust optimization~ and (ii) the distillation method~.\n\\textbf{Robust optimization against adversarial attacks.}\nAt the heart of the robust optimization method is the assumption that every sample in the training data $\\mathcal{D}$ can be a source for adversarial behavior. 
It performs an ERM against a specific adversary on each sample in $\\mathcal{D}$ and applies a zero-sum game between the prediction and attack adversaries,\nleading to the following robust optimization framework\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:robust_opt_2}\n \\min_{\\theta} \\sum_{(x_i,y_i) \\in \\mathcal{D}} \\max_{\\delta: \\left\\lVert \\delta \\right\\rVert \\leq \\epsilon} \\ell(f(x_i+\\delta; \\theta),y_i)\n \\end{aligned}\n\\end{equation}\nwhere $\\epsilon$ is an upper bound on the magnitude of the adversarial perturbation $\\delta$. The ultimate goal in robust optimization is that the prediction model will perform equally well with adversarial and clean inputs.\n\\begin{definition}[Adversarial training~]\nThe goal of adversarial training is to build a robust model from the ground up on a training set augmented with adversarial examples. Adversarial regularization is one of the most investigated techniques for adversarial training, which utilizes an approximation of the worst-case loss function, i.e., $\\max_{\\delta: \\left\\lVert \\delta \\right\\rVert \\leq \\epsilon} \\ell(f(x_i + \\delta; \\theta),y_i)$, as the regularizer.\n\\begin{equation}\n \\label{eq:robust_opt_adv_reg}\n \\ell_{T} = \\underbrace{\\min_{\\theta} \\sum_{(x_i,y_i) \\in \\mathcal{D}} [\\ell(f(x_i; \\theta),y_i) + \\lambda \\underbrace{\\max_{\\delta: \\left\\lVert \\delta \\right\\rVert \\leq \\epsilon} \\ell(f(x_i + \\delta; \\theta),y_i)}_\\text{optimal attack model}]}_\\text{optimal robustness-preserving prediction}\n\\end{equation}\n\\qed \n\\end{definition}\nAs can be noted, the inner maximization finds the strongest attack against the prediction model that is subject to adversarial perturbation. The outer minimization estimates the strongest defense against a given attack by giving up a level of accuracy due to the regularization. 
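The inner-maximization/outer-minimization loop can be made concrete with a minimal, hypothetical sketch on a toy two-feature logistic model (not code from any surveyed system): the inner step approximates the worst-case perturbation with one sign-gradient (FGSM-style) step, and the outer step descends on the clean-plus-adversarial regularized loss. The function and parameter names are illustrative.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad(w, x, y, wrt_w=True):
    # gradient of the logistic loss: (p - y) * x  w.r.t. w,  (p - y) * w  w.r.t. x
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    v = x if wrt_w else w
    return [(p - y) * vi for vi in v]

def adversarially_train(data, eps=0.1, lam=0.5, lr=0.5, epochs=40):
    """Adversarial regularization: clean loss + lam * one-step worst-case loss."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            # inner max: one-step FGSM surrogate for the worst-case perturbation
            gx = grad(w, x, y, wrt_w=False)
            x_adv = [xi + eps * ((g > 0) - (g < 0)) for xi, g in zip(x, gx)]
            # outer min: descend on clean loss + lam * adversarial loss
            gw = [c + lam * a for c, a in zip(grad(w, x, y), grad(w, x_adv, y))]
            w = [wi - lr * gi for wi, gi in zip(w, gw)]
    return w
```

In this sketch `lam` plays the role of the regularization weight that trades clean accuracy against robustness, as discussed in the text.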
The parameter $0<\\lambda<1$ controls the trade-off between accuracy (on clean data) and robustness (on perturbed data).\n\\begin{exmp}[Adversarial training of BPR-MF]\nBPR is the state-of-the-art method for personalized ranking on implicit feedback. The main idea behind BPR is to maximize the distance between positively and negatively rated items. Given the training dataset $D$ composed of positive and negative items for each user, and the triple $(u,i,j)$ (user $u$, a positive item $i$ and negative item $j$), the BPR objective function, maximized over $\\Theta$, is defined as \n\\begin{equation}\n \\label{eq:BPR_loss}\n \\ell_{BPR}(\\mathcal{D} | \\Theta) = \\sum_{(u,i,j) \\in \\mathcal{D}} \\ln \\, \\sigma(\\hat{x}_{ui}(\\Theta)-\\hat{x}_{uj}(\\Theta))-\\lambda\\left \\| \\Theta \\right \\|^2\n\\end{equation}\nwhere $\\sigma$ is the logistic function, and $\\hat{x}_{ui}$ is the predicted score for user $u$ on item $i$ and $\\hat{x}_{uj}$ is the predicted score for user $u$ on item $j$; $\\lambda \\left \\| \\Theta \\right \\|^2$ is a regularization method to prevent over-fitting.\\footnote{As can be noted, BPR can be viewed as a classifier on the triple $(u,i,j)$, where the goal of the learner is to classify the difference $\\hat{x}_{ui}-\\hat{x}_{uj}$ as correct label +1 for a positive triple sample and 0 for a negative instance.} Adversarial training of BPR-MF, analogous to Eq.~\\ref{eq:robust_opt_adv_reg}, can be formulated as\n\\begin{equation}\n \\label{eq:robust_bpr_adv_reg}\n \\ell_{APR} = \\underbrace{\\min_{\\Theta} \\, [\\ell_{BPR}(\\mathcal{D} | \\Theta) + \\lambda \\underbrace{\\max_{\\delta: \\left\\lVert \\delta \\right\\rVert \\leq \\epsilon} \\ell_{BPR}(\\mathcal{D} | \\Theta + \\delta)}_\\text{optimal attack model against BPR}]}_\\text{optimal robustness-preserving defense}\n\\end{equation} \\qed \n\\end{exmp}\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=0.7\\paperwidth]{Pictures/adversarial_framework_v2.pdf}\n
 \\caption{A notional view of the adversarial recommendation framework, integrating adversarial perturbations on user and item model parameters and their side-information embeddings.}\n \\label{figure:adv_framework}\n\\end{figure}\nWe do not report details on \\textit{distillation} as a defense strategy since it is not very common for RS.", "id": "9668c3aa-18b8-4212-b46b-d6733b069086", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "37d5f95e-65ad-40e8-a04c-82ef99007910", "prefix_titles": [ [ "title", "A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks" ], [ "section", "Adversarial Machine Learning for Security of RS" ], [ "subsection", "Foundations of adversarial machine learning" ], [ "subsubsection", "Defense against adversarial attacks" ] ], "subsections": [], "title": "Defense against adversarial attacks" }, { "cite_extract_rate": 0.5, "cites": [ 313, 902, 917, 1197 ], "content": "\\label{subsec:AML_RS}\nIn this section, we focus on state-of-the-art approaches to the application of AML in RS research. 
RS that employ AML for security applications in recommendation tasks follow the simplified steps sketched in Fig.~\\ref{figure:adv_framework}.\nIn the following, in addition to providing concise summaries of the surveyed works, for a convenient overview, we categorize the reviewed research articles in Table~\\ref{tbl:adl_attack} according to the following dimensions:\n\\begin{itemize}\n \\item \\textbf{Model.} This column lists the model name and provides the reference to the main paper.\n \\item \\textbf{Attack and Defense Model.} This column represents the main \\textit{attack} and \\textit{defense} strategies applied on various recommendation models and the \\textit{attack granularity} on the system.\n \\begin{enumerate}\n \\item \\textit{Attack model.}\n Among all attack strategies proposed in the CV community~, the most dominant attack approaches in RS to date have been \\textit{FGSM} and \\textit{C\\&W}, \\updated{and \\textit{Other} strategies (e.g., multi-step adversarial attacks~, GAN-based attack models~).}\n \\item \\textit{Defense model.} As for the best defense strategy against attacks, we have found \\textit{adversarial training (a.k.a. adversarial regularization)} to be the most commonly adopted approach irrespective of the attack model, while \\textit{distillation} is adopted only by a single paper~.\n \\item \\textit{Attack granularity.} This column represents the level of data on which the adversarial perturbation is added. It is important to note that while in the computer vision domain these perturbations are added on raw data (e.g., pixel values), in RS they are applied on the model parameters of the recommendation model, as illustrated in Fig.~\\ref{figure:adv_framework}. 
In particular, adversarial perturbations are added to one of the following data: (i) directly on the \\textit{user profile} (i.e., user rating profile), (ii) \\textit{user and item latent factor model parameters} in an LFM, e.g., according to $\\mathbf{p}'_u = \\mathbf{p}_u + \\delta$, $\\mathbf{q}'_i = \\mathbf{q}_i + \\delta$ in which $\\mathbf{p}_u, \\mathbf{q}_i \\in \\mathbb{R}^F$ are $F$-dimensional embedding factors whose linear interaction explains an unobserved preference; (iii) and (iv) \\textit{embeddings representing side information of users and items}, respectively.\n \\end{enumerate}\n \\item \\textbf{Recommendation \\& Learning.}\n The core recommendation models that we consider in this survey are CBF, CF and CA. We also consider hybrid systems but we do not specify a placeholder for such systems; if an approach uses both CBF+CF, we simply mark both corresponding columns, regardless of which hybridization technique it uses~. Instead, given the ML (optimization)-based approach of most of the considered papers, we categorize papers based on the recommendation prediction model according to \\textit{linear LFM} (e.g., MF or variations thereof, such as PMF), \\textit{linear tensor factorization} (TF), \\textit{non-linear models based on auto-encoder} (NL-AE) and \\textit{neural network} (NL-NN); furthermore, we classify the loss function used in the core optimization model of the attack and defense scenarios based on \\textit{BPR}~ and \\textit{cross-entropy}.\n\\updated{To further help categorization of the research works, we split them according to three adversarial objectives, i.e., adversarial attacks and defenses targeting: (i) the accuracy of recommendations, (ii) the privacy of users, and (iii) the bias and fairness of recommendation models, as shown in Table~\\ref{tbl:adl_attack}.}\n\\input{Tables/table_security}", "id": "6267834f-3f7b-44b2-a329-ab08802823e3", "level": "subsection", "origin_cites_number": 8, "parent_id": 
"853aaac3-266e-4a22-bdef-6c605bd09586", "prefix_titles": [ [ "title", "A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks" ], [ "section", "Adversarial Machine Learning for Security of RS" ], [ "subsection", "Adversarial Machine Learning for Attack and Defense on RS" ] ], "subsections": [ "6f7afaec-eb67-418a-bbfd-b446bed85c06" ], "title": "Adversarial Machine Learning for Attack and Defense on RS" }, { "cite_extract_rate": 0.521739130434782, "cites": [ 1199, 890, 917, 97, 681, 1201, 892, 1187, 1186, 1202, 1200, 1198 ], "content": "\\updated{Adversarial attacks against RS models have primarily focused on the degradation of recommendation predictive performance. Thus, in this section we review research works that perform adversarial attacks to undermine the accuracy of recommendation models and measure the impact of the adversarial attack (and defense) via an accuracy evaluation metric.}\nLooking at Table~\\ref{tbl:adl_attack} globally, we note that adversarial personalized ranking (APR)~ by He et al. was the first work that formally addressed AML to improve the robustness of BPR-MF. After this pioneering work, in the following years, a growing number of works have considered the application of AML for different recommendation tasks. Another interesting observation is the co-occurrence of the attack type FGSM and defense model adversarial training (AdReg). In fact, the adversarial training procedure based on FGSM is the first defense strategy proposed by~ to train DNNs resistant to adversarial examples. The authors attribute the improvement in robustness to adversarial examples to the fact that the proposed procedure minimizes the error on adversarially perturbed data.\n\\input{Tables/table_security_datasets_metrics}\nFurthermore, in Table~\\ref{tbl:adl_evaluation}, we provide an overview of the presented approaches from the perspective of experimental evaluation. 
In particular, we classify the surveyed works according to the \\textit{preference score} used for building/training the recommender models (implicit vs. explicit, i.e., rating-based, feedback), the prominent \\textit{evaluation metrics} utilized for the offline evaluation of attack success (NDCG, HR, Success Rate, F1, distortion, Precision, and MAP), the \\textit{domain} of focus (e.g., movie, music, social media, business) and the \\textit{datasets} used for evaluation. We may notice that most of the approaches have been tested on an \\textit{implicit} preference type. As for the evaluation metrics, HR is the most adopted one, followed by nDCG, with a partial overlap among approaches adopting them both. As for the application domain of the datasets used for the evaluation, \\textit{movie} is the most adopted one. This is mainly due to the popularity of the MovieLens datasets (in their two variants 1M and 100k). Interestingly, \\textit{tourism} is an emerging domain thanks to the availability of the Yelp dataset. Finally, we observe that the large majority of the baselines are based on MF approaches. The following section will provide a detailed description of the most prominent approaches.\n\\setlength{\\parindent}{15pt} [\\textbf{APR}] are the first to propose an adversarial learning framework for recommendation. The proposed model, called \\textit{adversarial personalized ranking (APR)}, examines the robustness of BPR-MF to adversarial perturbation on the user and item embeddings of BPR-MF~. The authors verify the success of using adversarial training as a defense strategy against adversarial perturbations and demonstrate the competitive results in applying adversarial training on BPR-MF. \\updated{Recently,~ studied the application of iterative FGSM-based perturbation techniques and demonstrated the inefficacy of the de facto defense mechanism APR in protecting the recommender against such multi-step attacks. 
For instance, the authors showed that the defended model loses more than 60\\% of its accuracy under iterative perturbations, while only less than 9\\% in the case of FGSM ones.} \n\\\\\\setlength{\\parindent}{15pt} [\\textbf{AMR}] put another BPR model, namely visual BPR (VBPR), under the adversarial framework. VBPR is built upon BPR and extends it by incorporating visual dimensions (originally based on deep CNN features) by using an embedding matrix. In~, the authors first motivate the importance of adversarial training for VBPR by visually depicting how a surprisingly modest amount of adversarial perturbation ($\\epsilon = 0.007$) added on raw image pixels \\textemdash~where the added noise is barely perceivable to the human eye \\textemdash~can alter recommendation ranking outcomes of VBPR and produce erroneous results. The proposed model therefore consists of constructing adversarial perturbations under the FGSM attack model and adding them to the deep latent feature of items' images extracted by a CNN (i.e., ResNet50~), with the goal of learning robust image embedding parameters. One of the key insights about this work is that it does not add perturbations directly on raw image pixels for two main reasons: (i) it would require the feature extractor (CNN) component and the recommender model to be trained end-to-end, with overfitting issues on the CNN due to the sparsity of user-item feedback data, and (ii) it would be a time-consuming operation because at each update of the recommender model it is necessary to update all the CNN parameters. 
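To give a feel for why such small feature-level perturbations matter, here is a minimal, hypothetical sketch (all embeddings are made-up numbers, not taken from AMR): one FGSM-style step on an item's deep image feature, in the direction that decreases its dot-product score with the user embedding, is enough to demote the item below a competitor.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def demote_item(p_u, f_i, eps):
    # the gradient of the score  p_u . f_i  w.r.t. f_i is p_u itself, so
    # subtracting eps * sign(p_u) is the score-minimizing FGSM-style step
    return [fi - eps * ((pu > 0) - (pu < 0)) for fi, pu in zip(f_i, p_u)]

p_u = [0.9, -0.3, 0.5]   # user embedding (hypothetical)
f_i = [0.6, 0.1, 0.4]    # deep image feature of item i (hypothetical)
f_j = [0.5, 0.2, 0.3]    # deep image feature of item j (hypothetical)
f_i_adv = demote_item(p_u, f_i, eps=0.2)
```

Each coordinate moves by at most `eps`, yet the pairwise ranking between the two items flips, which is the kind of effect AMR's adversarial training is designed to resist.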
Regarding (i), the key insight is that adversarial training approaches (i.e., APR and AMR) can lead to model parameters that enhance generalization capability \\textemdash~in other words, improved general recommendation performance even when the model is not exposed to adversarial perturbation. Concerning (ii), it has been demonstrated that the impact of adversarial perturbation on classical recommendation models (e.g., MF-BPR or VBPR) is significantly larger than that of its random-noise counterpart under a similar perturbation level. For instance,~ shows that by exposing MF to adversarial and random noise, the test nDCG decreases by 21.2\\% and 1.6\\%, respectively \\textemdash~i.e., an impact roughly 13 times larger. Dimension (iii) constitutes the core of the system validations in these works, in which compelling evidence has been provided on the vulnerability of classical recommendation models to adversarial examples, or equivalently the robustness of the proposed training framework against adversarial samples. To provide an illustrative example, in~ it has been shown, in an experiment on the Amazon dataset, that by changing the perturbation level from $\\epsilon = 0.05$ to $\\epsilon = 0.2$, the decrease in nDCG ranges from 8.7\\% to 67.7\\%, whereas for AMR it varies from 1.4\\% to 20.2\\%. These results suggest that approaches using adversarial learning instead of classical learning are significantly more robust against adversarial perturbations.\\\\\n\\setlength{\\parindent}{15pt} [\\textbf{AdvIR}] In~, the authors propose a system to address CF recommendation based on implicit feedback. The main issue in learning from implicit interactions is the scarcity of negative feedback compared with positive feedback, known as the one-class problem. Sampling uniformly from unobserved data, known as \\textit{negative sampling}, has been introduced in prior work to address this issue. 
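Uniform negative sampling can be sketched in a few lines; this is a generic, hypothetical helper (not AdvIR's actual code) that draws item ids the user has not interacted with.

```python
import random

def sample_negatives(positives, n_items, k, rng=None):
    """Draw k distinct item ids uniformly from the items NOT in `positives`.

    Assumes k <= n_items - len(positives), otherwise the loop cannot finish.
    """
    rng = rng or random.Random(0)   # deterministic default seed for the sketch
    negatives = set()
    while len(negatives) < k:
        j = rng.randrange(n_items)  # uniform over the full catalog
        if j not in positives:      # keep only unobserved items
            negatives.add(j)
    return sorted(negatives)
```

Pairing each positive interaction with negatives sampled this way yields the (user, positive, negative) triples that pairwise learners such as BPR train on; AdvIR's contribution is to replace this uniform choice with adversarially informed samples.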
The proposed system in~ is called AdvIR, which entails an adversarial sampling and training framework to learn recommendation models from implicit interactions. The system applies adversarial training to positive and negative interactions separately, to create informative adversarial positive/negative samples. The proposed adversarial training approach works for both discrete and continuous input by adding the adversarial perturbation directly to the input vector (e.g., the one-hot encoded user-id).\\
\setlength{\parindent}{15pt} [\textbf{ACAE / FG-ACAE}] use the adversarial training framework for a neural network-based recommendation model, namely the collaborative denoising auto-encoder (CDAE)~, based on which the authors propose two variations: (i) the adversarial collaborative auto-encoder (ACAE) and (ii) the fine-grained adversarial collaborative auto-encoder (FG-ACAE). ACAE applies adversarial noise on the encoder and decoder parameters and adopts an adversarial training framework. FG-ACAE considers the impact of adversarial noise in a more fine-grained manner. In particular, in FG-ACAE adversarial noise is added not only on the encoder and decoder but also on the user's embedding matrix as well as on the hidden layers of the network. Furthermore, to increase the flexibility of training, all the noise factors in ACAE and FG-ACAE are controlled by different parameters. The experimental results confirm the trend that AdReg may improve the model's robustness against adversarially perturbed input, as well as the generalization performance of recommenders.\\
\setlength{\parindent}{15pt} [\textbf{ATF}] combine tensor factorization and adversarial learning to improve the robustness of pairwise interaction tensor factorization (PITF)~ for context-aware recommendation.
A comparison with standard tensor models on tag recommendation shows that the adversarial framework outperforms state-of-the-art tensor-based recommenders.\\
\setlength{\parindent}{15pt} [\textbf{FNCF}] address security issues raised by C\&W attacks~. The authors propose to make neural network-based collaborative filtering models (e.g., NCF~) more robust by using knowledge distillation~ instead of adversarial (re)training. The framework integrates knowledge distillation with the injection of additive adversarial noise at training time. Experiments demonstrate that this system enhances the robustness of the treated recommender model.\\
\setlength{\parindent}{15pt} [\textbf{SACRA}] propose a novel recommender model, named Click Feedback-Aware Network (CFAN), to provide query suggestions considering the sequential search queries issued by the user and her history of clicks.
The authors employ additional adversarial (re)training epochs (i.e., adding adversarial perturbations on item embeddings) to improve the robustness of the model. \updated{A similar approach has also been implemented by~ to robustify a self-attention sequential recommender model, named [\textbf{SAO}].}\\
\setlength{\parindent}{15pt} [\textbf{TAaMR}] explore the influence of targeted adversarial attacks (i.e., FGSM and PGD~) against original product images used to extract deep features in state-of-the-art visual recommender models (i.e., VBPR~ and AMR~). The authors verify that recommendation lists can be altered such that a low-recommended product category can be pushed by adding adversarial noise to product images in a human-imperceptible way.
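The core FGSM perturbation used by these attacks (and by AMR's adversarial training) can be sketched in a few lines of numpy; here the gradient is a hypothetical stand-in for the true gradient of the model's loss with respect to the item features:

```python
import numpy as np

def fgsm_perturb(features, grad, eps):
    """FGSM-style perturbation: shift every feature component by eps in
    the direction of the sign of the loss gradient (an L-infinity budget)."""
    return features + eps * np.sign(grad)

# Toy 4-dimensional item feature vector and a hypothetical loss gradient.
phi = np.array([0.50, -1.20, 0.30, 0.00])
grad = np.array([0.90, -0.10, 0.00, -2.00])

phi_adv = fgsm_perturb(phi, grad, eps=0.007)

# The attack never moves any coordinate by more than eps.
assert np.max(np.abs(phi_adv - phi)) <= 0.007 + 1e-12
```

Note that where the gradient is exactly zero (third coordinate above), the sign is zero and the feature is left untouched; the perturbation budget $\epsilon$ plays the same role as in the experiments reported by these works.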
\updated{Within a similar scenario,~ verify, in the \textbf{VAR} framework, the inefficacy of the adversarial robustification of the image feature extractor component in protecting the visual recommender from such adversarial perturbations}.\\
\updated{
\setlength{\parindent}{15pt} [\textbf{AIP}] Similar to TAaMR~,~ propose a series of adversarial attacks to increase the recommendability of items by perturbing the product images. The authors model three levels of adversary knowledge (i.e., high, medium, and low) with corresponding adversarial image perturbation strategies. Furthermore, they validate the inefficacy of JPEG compression and bit depth reduction as two possible defense mechanisms.}
\updated{Another main area of concern is user privacy and averting the negative consequences of adversarial attacks on user privacy. Recently, in the light of privacy-violation scandals such as Cambridge Analytica~, privacy-protection laws such as the GDPR have been enacted, and the US Congress and other jurisdictions have moved to legislate new disclosure laws. Thus, attempts have been made to build machine-learned recommendation models that offer a privacy-by-design architecture, such as federated learning~, or ones based on differential privacy~.
Nonetheless, several works have recently challenged user data confidentiality via adversarial attacks, for example in the context of \textit{social recommenders}, which have been widely studied in these scenarios. For instance,~ propose a privacy-preserving framework to protect users from adversaries that want to infer, or reconstruct, their historical interactions and social connections.~ propose to defend users' privacy from inference attacks with a privacy-oriented social media data publishing framework optimized to preserve the recommendation performance, while domain-independent recommendation algorithms have been developed by~ as an MF extension. All these works use the differential privacy~ technique to reduce privacy violations.}
\updated{
\setlength{\parindent}{15pt} [\textbf{PAT}]~ propose an adversarial training procedure, Domain-Adversarial Training~, to build a privacy-adversarial method that prevents data leakage. The intuition is to train an LFM with the classical minimax paradigm, where the model learns its parameters by minimizing both the recommendation cost and an adversarial regularization component related to the privacy violation.
}\\
\updated{
\setlength{\parindent}{15pt} [\textbf{RAP}]~ propose an adversarial learning procedure to protect users from attribute-inference attacks.
The model, named Recommendation with Attribute Protection (RAP), simultaneously learns to maximize the users' gain from the recommendation while minimizing the adversary's capability of inferring users' personal attributes (e.g., gender, age, and occupation).}
\updated{Another area of concern is related to the biases and fairness of recommendations. RS assist users in many life-affecting scenarios such as medical, financial, or job-related ones. Unfair recommendations could have far-reaching consequences, impacting people's lives and placing minority groups at a significant disadvantage~. From a RS perspective, where users are first-class citizens, fairness is a \textit{multi-sided concept}, and the utility of recommendations needs to be studied by considering the benefits of multiple groups of individuals~, for instance based on the user-centered utility and the vendor-centered utility (e.g., profitability). In the literature, a few research works have exploited the adversarial training procedure to reduce the biased/unfair impact of recommendations.}
\\\updated{\setlength{\parindent}{15pt} [\textbf{DPR}]
Inspired by~,~ propose a debiased personalized ranking (DPR) model composed of two components: an adversary model, i.e., a multi-layer perceptron network, and a classical recommender model, i.e., BPR-MF.
During the adversarial training, the adversary tries to infer the group of an item, while the recommender model tries to reduce both the recommendation error and the adversary's capability of identifying the true class of the item. The central intuition is to unbias the recommender model by enhancing the similarity of the predicted score distributions between different item groups. Extensive experiments demonstrate that DPR reduces the under-recommendation bias while retaining accurate recommendations.} \\
\updated{
\setlength{\parindent}{15pt} [\textbf{FAN}] design a fairness-aware news recommendation (FAN) method via adversarial learning. Similar to DPR~, the authors extend the neural news recommendation with multi-head self-attention (NRMS) model~ with an adversarial component trained to infer sensitive user characteristics, while the recommender model is regularized to limit the adversary's ability to infer users' attributes. The intuition is that adversarial training generates bias-free user embeddings that can in turn be used to produce fairness-aware recommendations.}
~\label{sec:GAN}
What we presented in Section~\ref{sec:security} deals with the class of \dquotes{discriminative} models, where the main aim is to learn the conditional probability $p(y|x)$.
\nThe focus of the current section is on a novel class of \\dquotes{generative} models, named \\textit{Generative Adversarial Networks} (GANs). \nLoosely speaking, a generative model cares about the \\textit{generative} process behind data ---or product features in a recommendation scenario --- to categorize the data instances. Here the focus is on learning $p(x|y)$ from the data.\nGANs are a powerful class of generative models that use two networks ---trained simultaneously in a zero-sum game--- with one network focused on data generation and the other one centered on discrimination. The adversarial learning scheme --- or the min-max game --- which lies in the heart of GANs empowers these ML models with phenomenal capabilities such as the ability to model high-dimensional distributions. As a result, these networks have been exploited to solve challenging problems in computer vision. \nThe research in RS community has used the generalization (or in technical term data distribution capturing) potential of GANs as an opportunity to solve a variety of tasks relevant to RS.\nAs it can be noted the term \\dquotes{adversarial} inside generative adversarial networks refers to the learning scheme used by these models and \\textit{not the application}. In other words, the application of GANs for RS covers variety of aspects not limited to the security of RS, as we will see in the subsequent sections.\n\\updated{In the following, we present the foundations of GANs in Section~\\ref{subsec:found_GAN}. Section~\\ref{subsec:GAN_RS} provides a conceptual framework on applications of GANs in RS. Finally, in Section~\\ref{subsec:GAN_appl_RS} we focus on the new trends of GAN research in RS and provide a detailed literature review on this topic by categorizing the research works according to collaborative (cf. Section~\\ref{subsec:GAN_CF_rec}), context-aware (cf. Section~\\ref{subsec:GAN_CA_rec}), and cross domain (cf. 
Section~\\ref{subsec:GAN_CR_rec})}.\n\\begin{figure}[t]\n \\includegraphics[width=0.85\\textwidth]{Pictures/GAN_joint.pdf}\n \\caption{Schematic comparison of two well-known GAN models: (a) conventional vanilla GAN with filled color and (b) Conditional GAN (which includes the dashed blue entities)}\n \\label{fig:gan_architecture}\n\\end{figure}", "id": "76c138c6-bcb1-42a0-b007-917968172e54", "level": "section", "origin_cites_number": 0, "parent_id": "68e5db2c-bcee-48f2-8fe1-51da0b40c965", "prefix_titles": [ [ "title", "A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks" ], [ "section", "Adversarial Learning for GAN-based Recommendation" ] ], "subsections": [ "6dd4510f-4915-4716-a591-27cd3851d8dc", "65d97e24-cbc8-4621-a296-e433b8a989c8", "485abc46-712e-46d8-8a44-993d95b3a599" ], "title": "Adversarial Learning for GAN-based Recommendation" }, { "cite_extract_rate": 1, "cites": [ 529, 7217, 64 ], "content": "\\label{subsec:found_GAN}\nGANs are deep generative models proposed by~ in 2014. A GAN is composed of two components, a generator $\\mathcal{G}$, and a discriminator $\\mathcal{D}$. The generator works to capture the real data distribution to generate adversarial examples and fool the discriminator, while the discriminator endeavors to distinguish the fake examples from real ones.\nThis competition, known as adversarial learning, ends when the components reach the Nash equilibrium. The GAN architecture is shown in Figure~\\ref{fig:gan_architecture}.\n\\begin{definition}[Conventional Vanilla GAN~]\\label{def:vanilla-GAN}\nAssume that we are given a dataset of input samples $x \\in \\mathcal{X}$, where $\\mathbb{P}_{\\mathcal{X}}$ represents the probability distribution of the original data and suppose $z \\in \\mathcal{Z}$ denotes a sample from some latent space $\\mathcal{Z}$. We are interested in sampling from $\\mathbb{P}_{\\mathcal{X}}$. 
The goal of GAN is to train the generator $\mathcal{G}$ to transform samples $z \sim \mathbb{P}_{\mathcal{Z}}$ into $g_{\theta}(z) \sim \mathbb{P}_{\theta}$ such that $ \mathbb{P}_{\theta} \approx \mathbb{P}_{\mathcal{X}}$. The role of the discriminator $\mathcal{D}$ is to distinguish $\mathbb{P}_{\theta}$ and $\mathbb{P}_{\mathcal{X}}$ by training a classifier $f_{\phi}$. The training involves solving the following min-max objective
 \begin{equation}
 \label{eq:gan_basic}
 \min_{\theta} \max_{\phi} L(\mathcal{G}_\theta, \mathcal{D}_{\phi}) = \, \mathbb{E}_{x \sim \mathbb{P}_{\mathcal{X}}} \, \log f_{\phi}(x) + \mathbb{E}_{z \sim \mathbb{P}_{\mathcal{Z}}} \log (1-f_{\phi}(g_{\theta}(z)))
 \end{equation}
where $\theta$ and $\phi$ are the model parameters of the generator and the discriminator, respectively, learned during the training phase. \qed
\end{definition}
Different choices of the distance measure lead to different GAN models, e.g., Vanilla GAN (based on the Jensen-Shannon divergence)~, Wasserstein GAN (based on the Wasserstein distance)~, and Conditional GAN (based on class conditioning of both the generator and discriminator)~.
\begin{definition}[Conditional-GAN (CGAN)]
Conditional GAN extends the conventional GAN by incorporating an extra condition information term $c$ on both the input of the generator $\mathcal{G}$ and the discriminator $\mathcal{D}$, thus conditioning them on this new term
 \begin{equation}
 \label{eq:cgan_basic}
 \min_{\theta} \max_{\phi} L(\mathcal{G}_\theta, \mathcal{D}_{\phi}) = \, \mathbb{E}_{x \sim \mathbb{P}_{\mathcal{X}}} \, \log f_{\phi}(x|c) + \mathbb{E}_{z \sim \mathbb{P}_{\mathcal{Z}}} \log (1-f_{\phi}(g_{\theta}(z|c)))
 \end{equation}
where $c$ can represent any auxiliary information to the networks, such as class labels, content features, data from other domains, and so forth.
\qed
\end{definition}
\label{subsec:GAN_RF}
GANs have been successfully applied in state-of-the-art RS to learn recommendation models. Since the first pioneering GAN-based work, IRGAN~ in 2017, we have witnessed rapid adoption of these network architectures in many traditional and novel applications and domains. In this section, we provide a conceptual framework that shows how GANs are employed in the RS domain and sheds light on the particularities and differences of GAN application in RecSys and ML.
\begin{figure}[!h]
 \centering
 \includegraphics[width=0.85\textwidth]{Pictures/gan_cf_new2.pdf}
 \caption{A conceptual view of GAN-CF incorporating GAN to address item recommendation task.}
 \label{fig:GAN-CF}
\end{figure}
\noindent {\textbf{GAN-CF problem formulation and conceptual model.}}\label{subsec:GAN_RS}
The prominent recommendation models in the literature that successfully apply GAN~ for the CF task utilize the two-player min-max game with an objective function built on top of Eq.~\ref{eq:gan_basic}.
\begin{definition}[The GAN-CF model~]
Let $\mathcal{U}$ and $\mathcal{I}$ denote a set of users and items in a system, respectively.
The training objective is given by
 \begin{equation}
 \label{eq:irgan_basic}
 \min_{\theta} \max_{\phi} L(\mathcal{G}_\theta, \mathcal{D}_{\phi}) = \, \mathbb{E}_{i \sim \mathbb{P}_{\mathcal{X}}(i|u)} \, \log f_{\phi}(i|u) + \mathbb{E}_{\hat{i} \sim \mathbb{P}_{\theta}(\hat{i}|u)} \log \, (1-f_{\phi}(\hat{i} | u))
 \end{equation}
where $i \in \mathcal{I}$ is an item receiving implicit (or explicit) feedback from user $u \in \mathcal{U}$ (e.g., a purchase) and $\hat{i} \in \mathcal{I}$ is a generated item.
\qed
\end{definition}
A few important observations can be made here: (i) the output of the generator $\mathcal{G}$ is a set of item indices deemed relevant to user $u$; (ii) both $\mathcal{G}$ and $\mathcal{D}$ are \textit{user-conditioned}, signifying that model parameters are learnt in a \textit{personalized fashion}; (iii) the GAN-based CF works do not use the noise term as input (to $\mathcal{G}$), as the goal is to generate one unique ---yet plausible--- item rather than a set of items. Figure~\ref{fig:GAN-CF} summarizes these aspects conceptually.
\input{Tables/sampling_strategy}
\noindent {\textbf{Discrete outcome and sampling strategies.}}
The parameters in the GAN-CF model are learned in an end-to-end fashion.
However, before we can benefit from this training paradigm, the system needs to solve a critical issue that does not exist in the original GAN presented in Definition~\ref{def:vanilla-GAN}, which operates on a sampled noise signal. The generation of recommendation lists is a discrete sampling operation, i.e., performed over discrete candidate items (see Figure~\ref{fig:GAN-CF}). Thus, the gradients derived from the objective function in Eq.~\eqref{eq:irgan_basic} cannot be directly used to optimize the generator via gradient descent, as happens in the original GAN formulation, where gradients are applied to differentiable values (e.g., images and videos).
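One widely adopted remedy replaces the hard categorical draw with a differentiable relaxation. A minimal numpy sketch of the Gumbel-Softmax trick (with a hypothetical logit vector standing in for the generator's item scores) illustrates the idea:

```python
import numpy as np

def gumbel_softmax(logits, tau, gumbel):
    """Differentiable relaxation of a categorical draw: perturb the logits
    with Gumbel(0, 1) noise, then take a temperature-scaled softmax.
    As tau -> 0 the output approaches a one-hot sample."""
    y = (logits + gumbel) / tau
    y = y - y.max()              # subtract max for numerical stability
    expy = np.exp(y)
    return expy / expy.sum()

rng = np.random.default_rng(0)
logits = np.array([2.0, 0.5, -1.0, 0.1])   # hypothetical item scores from G
noise = -np.log(-np.log(rng.uniform(1e-10, 1.0, size=logits.shape)))

sharp = gumbel_softmax(logits, tau=0.1, gumbel=noise)
smooth = gumbel_softmax(logits, tau=5.0, gumbel=noise)

assert np.isclose(sharp.sum(), 1.0) and np.isclose(smooth.sum(), 1.0)
assert sharp.max() > smooth.max()   # lower temperature -> closer to one-hot
```

Because the relaxed sample is a smooth function of the logits, gradients from $\mathcal{D}$ can flow back into $\mathcal{G}$; the reinforcement-learning alternative instead treats the hard sample as an action and applies policy gradients.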
To obtain a differentiable sampling strategy in GAN-CF models, two sampling strategies are proposed in the literature, one based on a reinforcement learning algorithm and one on the Gumbel-Softmax differentiable sampling procedure~, summarized in Table~\ref{tbl:smple_strg}.
\label{subsec:GAN_appl_RS}
We have identified a total of \updated{53 papers} that integrate GAN in order to accomplish a particular RS-related task, and we classified them according to:
\begin{enumerate}
 \item Collaborative Recommendation
 \item Context-aware Recommendation
 \item Cross-domain Recommendation
 \item Fashion Recommendation
\end{enumerate}
We present Table~\ref{tbl:gan-approaches} to summarize the proposed models and provide insights about the constituting building blocks of the GAN model. From a global perspective, we can see a correlation between the class of $\mathcal{G}$, $\mathcal{D}$ and the recommendation task. For example, recurrent models based on RNNs are used for CA \textbf{\textit{Temporal-aware Rec.}} tasks, areas where these models can better capture the sequence information. In contrast, for \textbf{\textit{Collaborative Rec.}} tasks, other model classes are commonly used (e.g., Linear LFM, MLP, and so on).
It is interesting to note that CNNs are used for the majority of works in \textbf{\textit{Fashion Rec.}} From a training perspective, we can see that both point-wise and pair-wise models are almost equally used in all these works, perhaps indicating that point-wise training is still a useful method for many GAN-based RS models. In the following, we review each of these application scenarios by describing the most prominent approaches.
\vspace{-2mm}
\label{subsec:GAN_CF_rec}
GANs have been shown to be powerful in generating relevant recommendations --- in particular, using the CF approach --- and capable of successfully competing with state-of-the-art models in the field of RS. We have identified the following reasons for the potential of GANs in RS: (i) they are able to generalize well and learn unknown user preference distributions, and thus can model user preference in complex settings (e.g., IRGAN~ and CFGAN~); (ii) they are capable of generating more informative negative samples than random sampling in pairwise learning tasks (e.g., APL~, DASO~); and (iii) they can be used for data augmentation (e.g., AugCF~ and RAGAN~).\\
\setlength{\parindent}{15pt} [\textbf{IRGAN}] The work by Wang et
al.~ is presumably the first attempt to integrate the generative and discriminative approaches to IR under the same roof by proposing a GAN-based IR model. The authors demonstrate the application of IRGAN to web search, item recommendation and question answering tasks, where for the item recommendation task the query is constructed from the user's historical interactions.
During adversarial learning ---the min-max game--- the generator learns the actual distribution of relevant items as much as possible. It turns out that this novel training idea results in a more satisfactory recommendation accuracy than optimizing traditional purely discriminative loss functions based on pointwise, or pairwise, objectives.
\input{Tables/gan_rs.tex}
\\ \setlength{\parindent}{15pt} [\textbf{GraphGAN}] propose GraphGAN, a graph-based representation learning (a.k.a. network embedding) approach for CF recommendation. Graph-based analysis is gaining momentum in recent years due to the ubiquity of graphs in real-world problems, such as modeling user preference for item recommendation as well as social graphs in social media (SM) networks, co-occurrence graphs in linguistics, citation graphs in research, knowledge graphs, and so forth. The central idea of network embedding is to represent each entity in a graph with a lower-dimensional latent representation to facilitate tasks within the network and prediction over entities. For example, such a latent representation makes it possible to perform prediction for supervised tasks, while the distance between node embedding vectors can serve as a useful measure in unsupervised tasks. GraphGAN can be viewed as a graph-based representation of IRGAN, where queries/items are \textit{nodes} of the graph.
For a given node $v_c$, the objective of $\mathcal{G}$ is to learn the ground-truth connectivity distribution over vertices $p_{true}(v|v_c)$, whereas $\mathcal{D}$ aims to discern whether or not a connectivity should reside between vertex pairs $(v,v_c)$. GraphGAN furthermore proposes the graph softmax as $\mathcal{G}$ ---instead of the traditional softmax--- which appears to boost the computational efficiency of the training (graph sampling and embedding learning) performed by $\mathcal{G}$.\\
\setlength{\parindent}{15pt} [\textbf{GAN-HNBR}] From an application perspective, GAN-based graph representations have also been applied in more niche domains of RS, including personalized citation recommendation. The goal is to recommend research articles for citation by using a content-based and author-based representation~ or by learning a heterogeneous bibliographic network representation (HBNR). propose GAN-HNBR ---a GAN-based citation recommendation model--- that can embed a bibliographic network consisting of heterogeneous vertex content features, such as papers and authors, into a common shared latent space and provide personalized citation recommendations.\\
\setlength{\parindent}{15pt} [\textbf{CFGAN}] CFGAN has been introduced in~ to address a problem with \textit{discrete items} in IRGAN, where $\mathcal{G}$ produces at each iteration a single item index, which is a discrete entity in nature. This is different from the original GAN in the CV domain, in which the output of $\mathcal{G}$ is an image (i.e., a vector). The generation of discrete item indices by $\mathcal{G}$ results in a poor sampling of items from the pool of available alternatives (i.e., samples identical to the ground-truth), deteriorating the performance of $\mathcal{G}$ and $\mathcal{D}$ ---instead of improving it--- during the min-max training iterations.
CFGAN introduces \\textit{vector-wise\ntraining} in which $\\mathcal{G}$ generates continuous-valued vectors to avoid misleading $\\mathcal{D}$, which in turn improves the performance of both $\\mathcal{G}$ and $\\mathcal{D}$. The authors show the improvement of CFGAN over IRGAN and GraphGAN baselines. As an example, with regards to P@20 on the Ciao dataset, the improvement is $100\\%$ for CFGAN vs. IRGAN (0.45 v.s. 0.23) and $160\\%$ for CFGAN vs. GraphGAN (0.45 v.s. 0.17), which turns to be a significant improvement of the recommendation accuracy.\\\\\n\\setlength{\\parindent}{15pt} [\\textbf{Chae et al.}] propose an auto-encoder-based GAN, in which an auto-encoder (AE) is used as $\\mathcal{G}$ to model the underlying distribution of user preferences over items. The primary motivation behind this work is that conventional MF-based approaches are linear. Instead, the proposed system can generate non-linear latent factor models and uncover more complex relationships in the underlying user-item interaction matrix.\\\\\n\\setlength{\\parindent}{15pt} [\\textbf{VAE}] An adversarial variational auto-encoder (VAE) is adopted by~, where the authors propose the usage of a GAN to regularize the VAE by imposing an arbitrary prior to the latent representation (based on implicit feedback). Similar works can be found in~, which exploits a VAE to enhance the robustness of adversarial examples. 
The authors furthermore present the Wasserstein distance with gradient penalty.\\
\updated{
\setlength{\parindent}{15pt} [\textbf{GAN-VAE-CF}] design an adversarial generative network to learn inter-item interactions, used to generate informative \textit{niche} negative samples that are paired with positive ones to reduce the popularity bias and promote the recommendation of long-tail products.}\\
\setlength{\parindent}{15pt} [\textbf{CALF}] Other issues of IRGAN, such as sparsity (causing vanishing gradients and update instability) and discrete values (preventing optimization via gradient descent), are addressed in~. The proposed solution is named convolutional adversarial latent factor model (CALF); it employs a CNN to learn correlations between embeddings and Rao-Blackwell sampling to deal with discrete values when optimizing CALF.\\
\setlength{\parindent}{15pt} [\textbf{PD-GAN}] The authors of~ propose a solution to improve the diversity of CF-based recommendation with a GAN based on personalized diversification.\\
\setlength{\parindent}{15pt} [\textbf{LambdaGAN}] In~, the authors propose LambdaGAN ---a GAN model with a lambda ranking strategy--- that improves the recommendation performance in a pairwise ranking setting by incorporating the LambdaRank~ function into the adversarial learning of the proposed GAN-based CF framework.\\
\setlength{\parindent}{15pt} [\textbf{VAEGAN}] A variant of VAE is introduced in~ to address the limited expressiveness of the inference model and latent features, which reduces the generalization performance of the model. The proposed solution, named adversarial variational autoencoder GAN (VAEGAN), is a more expressive and flexible model that better approximates the posterior distribution by combining VAEs and GAN.
This work is one of the first works to propose the application of adversarial variational Bayes (AVB)~ to perform the adversarial training.
\label{subsec:GAN_CA_rec}
Although long-term preference modeling has proven to be effective in several domains~, recent research indicates that users' preferences are highly variable based on the user's context, e.g., time, location, and mood~. Context provides the background of the \textit{user objective} for using the system and can be exploited to generate more relevant recommendations.
\noindent \textbf{Temporal-aware Recommendation.}
In real applications, users' preferences change over time, and modeling such \textit{temporal evolution} is needed for effective recommendation. While the long-term preferences of users change slowly, their \textit{short-term preferences} can be seen as more dynamic and changing more rapidly. Predicting short-term user preference has recently been studied in the context of \textit{session-based} and \textit{sequential recommendations}. A temporal extension of SVD++ towards the modeling of temporal dynamics, named TimeSVD++, has been proposed in~.
It has also been reported that time-aware inputs (e.g., click-logs, sessions) are effectively modeled by recurrent neural networks (RNNs).\nFor instance, the authors of~ proposed to model sequential user clicks to output session-based recommendations with a gated recurrent unit (GRU), while the authors of~ proposed to integrate an LSTM model, to capture both the user and the item temporal evolution, and MF to model stationary preferences. \n\updated{\nBoth LSTM and GRU are variants of RNNs deployed to reduce the gradient vanishing problem by including a mechanism to enable the model to learn the historical evolution of users' behavior. In particular, GRUs are generally preferred over LSTMs since they have to learn fewer model parameters and, consequently, require less memory than LSTMs~. \n}\nInspired by the accuracy improvements of IRGAN, GAN-based models have been combined in temporal frameworks to boost the recommendation performance in sequence-aware recommendation tasks. \n\setlength{\parindent}{15pt} [\textbf{RecGAN}] In~, the authors propose to incorporate in a single framework both the temporal modeling capabilities of RNN and the latent feature modeling power of the \textit{min-max game}. The proposed framework, named RecGAN, implements both the generator and the discriminator with the Gated Recurrent Unit (GRU)~, in order to make $\mathcal{G}$ capable of predicting a sequence of relevant items based on the dynamic evolution of the user's preferences.\\\\\n\setlength{\parindent}{15pt} [\textbf{PLASTIC \& LSIC}] Differently from RecGAN, which implements only an RNN cell to capture the dynamic evolution of the user's behavior, the authors of~ propose to combine MF and RNN in an adversarial recommendation framework to model long- and short-term user-item associations, respectively. 
The proposed framework, named PLASTIC, adopts MF and LSTM cells into $\mathcal{G}$ to account for the varying aspect of both users and items, while a two-input Siamese network ---built by using an MF and an RNN--- as $\mathcal{D}$ encodes both the \textit{long-term} and \textit{session-based information} in the pair-wise scenario.\\\\\n\setlength{\parindent}{15pt} [\textbf{NMRN-GAN}] Recent studies have shown that adversarially created close-to-observed negative samples are capable of improving the user and item representation. The authors of~ introduce \textit{GAN-based negative sampling} for streaming recommendation. Instead of using a random sampling strategy, which is static and hardly contributes towards the training of the recommender model, adversarially generated negative samples are more informative. NMRN-GAN uses a key-value memory network~ to keep the model's long-term and short-term memory, combined with a GAN-based negative sampling strategy to create more instructive negative samples, thus improving the training effectiveness and the quality of the recommendation model.\\\\\n\setlength{\parindent}{15pt} [\textbf{GAN-CQDN}] A GAN-based solution has been proposed in~ for sequence-aware recommendation in conjunction with reinforcement learning (RL). The main aim here is to model the dynamics of the user's status and long-term performance. The authors propose GAN-CQDN, an RL-based recommender system that exploits a GAN to model user behavior dynamics and learn the user's reward function. The advantage of using a GAN is that it improves the representation of the user profile as well as the reward function according to the learned user profile, and it accommodates online changes for new users. \\\\\n\updated{\n\setlength{\parindent}{15pt} [\textbf{MFGAN}] The authors of~ implement the generator as a Transformer-based network to predict, for each user, the next relevant item, whose importance will be judged by multiple factor-specific discriminators. 
The discriminators are a set of Transformer-based binary classifiers that measure the recommendation relevance with respect to additional information (e.g., item popularity, semantic information, and price). Similar to MFGAN~, the authors of~ propose an Adversarial Oracular Seq2seq learning for sequential Recommendation (AOS4Rec) framework to enhance the recommendation performance of Transformer-based, or RNN-based, next-item recommenders.\n}\\\\\n\noindent \textbf{Geographical-aware Recommendation.} Another relevant application of contextual information is point-of-interest (POI) recommendation. In this field, many approaches have been proposed over the years, especially after the mobile revolution. Location-based social networks (LBSNs) have attracted millions of users to share rich information, such as experiences and tips. Point-of-Interest (POI) recommender systems play an important role in LBSNs since they can help users explore attractive locations as well as help social network service providers design location-aware advertisements for POIs.\\\\\n\setlength{\parindent}{15pt} [\textbf{Geo-ALM}] In~, the authors propose Geo-ALM, a GAN-based POI recommender that integrates geographical features (POI and region features) with a GAN to achieve (better) POI recommendation. In the proposed system, $\mathcal{G}$ improves the random negative sampling approach in the pairwise POI recommendation scenario, which leads to better representations of users and items and enhances recommendation quality with respect to state-of-the-art models.\\\\\n\setlength{\parindent}{15pt} [\textbf{APOIR}] Inspired by the advances in POI recommendation performance under GAN-based frameworks, the authors of~ propose adversarial point-of-interest recommendation (APOIR) to learn latent user representations in a generative manner. The main novelty of the proposed framework is the integration of POIs' geographical features and the users' social relations into the reward function used to optimize $\mathcal{G}$. 
The reward function acts as a context-aware regularizer of $\mathcal{G}$, which is the recommender component of the proposed APOIR model.", "id": "57530d1b-af15-4cf3-a9ed-4da067fd7b5c", "level": "subsubsection", "origin_cites_number": 17, "parent_id": "485abc46-712e-46d8-8a44-993d95b3a599", "prefix_titles": [ [ "title", "A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks" ], [ "section", "Adversarial Learning for GAN-based Recommendation" ], [ "subsection", "GAN-based Recommendation Models: State of the Art" ], [ "subsubsection", "Context-aware Recommendation" ] ], "subsections": [], "title": "Context-aware Recommendation" }, { "cite_extract_rate": 0.27272727272727204, "cites": [ 1189, 8425, 1208 ], "content": "\label{subsec:GAN_CR_rec}\nRecommender models are usually designed to compute recommendations for items belonging to a single domain. Items belonging to a specific domain share characteristics and attributes, which are intrinsically similar, and domain-specific recommendation models allow the designer to study these characteristics individually. However, \textit{single-domain} recommendation faces numerous challenges. The first challenge refers to the well-known \textit{cold-start} problem, which arises when insufficient interactions exist in the considered domain. Second, users' interests and needs span across different application areas; large e-commerce sites, like Amazon or eBay, store users' preference scores related to products/services of various domains ---from books and products to online movies and music. As companies strive to increase the diversity of products or services offered to users, cross-domain recommendation can help such companies to increase sales productivity by offering personalized cross-selling or bundle recommendations for items from multiple domains~. 
The third aspect is a novel research idea related to discovering relationships between items (e.g., images) of two different domains. For example, can a machine achieve a human-level understanding to recommend a fashion item consistent with user taste/style in another domain such as media or visual scenery? \n\\setlength{\\parindent}{15pt} [\\textbf{FR-DiscoGAN}]\nIn~, the authors propose a cross-domain GAN to generate fashion designs from the sceneries. In the proposed hypothetical scenario, the user can specify via a query her POI to visit (e.g., mountain, beach) together with keywords describing a season (i.e., spring, summer, fall, and winter). The core idea is to automatically generate fashion items (e.g., clothes, handbags, and shoes) whose useful features (i.e., style) match the natural scenery specified by the user. For instance, the system can recommend a collection of fashion items that look cool/bright for visiting a beach in summer, even though the actual preference of the user is black-style clothes. The role of GAN is to learn associations between scenery and fashion images. In the field of ML and CV, the problem is termed as \\dquotes{style transfer} or \\dquotes{image to image translation} problem~. \\\\\n\\setlength{\\parindent}{15pt} [\\textbf{VAE-GAN-CC}]\nAn effective cross-domain recommendation system relies on \\textit{capturing both similarities and differences} among features of domains and exploiting them for improving recommendation quality in multiple domains. Single-domain algorithms have difficulty in uncovering the specific characteristics of each domain. To solve this problem, some approaches extract latent features of the domains by a separate network~. Although these approaches might be successful in capturing characteristic features of each domain, they do not establish the similarity between features of multiple domains. 
To extract both homogeneous and divergent features in multiple domains, the authors of~ propose a generic cross-domain recommendation system that takes as input the user interaction history (click vector) in each domain, maps the vectors to a shared latent space using two AEs, and then uses $\mathcal{G}$ to remap the underlying latent representation to click vectors. The main novelty of this work lies in building/linking a shared latent space between domains, which in turn facilitates \textit{domain-to-domain} translation. In particular, the former is realized by enforcing a weight-sharing constraint related to variational auto-encoders, i.e., the encoder-generator pair $\{\mathcal{E}_A, \mathcal{G}_A\}$ and $\{\mathcal{E}_B, \mathcal{G}_B\}$ and using cycle-consistency (CC) as a weight-sharing constraint. Finally, two separate adversarial discriminators are employed to determine whether the translated vectors are realistic. The final system is built on a VAE-GAN-CC network, which extends the unsupervised image-to-image translation network in the CV domain~ for RS applications, and is thus named domain-to-domain translation model (D2D-TM).\\\\\n\setlength{\parindent}{15pt} [\textbf{DASO}]\nInspired by the efficacy of \textit{adversarial negative sampling} techniques proposed by~, the authors address the limitation of typical negative sampling in the \textit{social recommendation} domain in transferring users' information from the social domain to the item domain. The proposed Deep Adversarial SOcial recommendation (DASO) system harnesses the power of adversarial learning to dynamically generate difficult negative samples for \textit{user-item} and \textit{user-user pairs}, to guide the network to learn better user and item representations. 
\\\\\n\updated{\n\setlength{\parindent}{15pt} [\textbf{Asr}] The authors of~ propose an adversarial social regularization (Asr) framework to improve the item recommendation performance by integrating contextual information, i.e., users' social connections, within a GAN-based approach. Furthermore, the proposed framework is agnostic to the underlying recommender model, guaranteeing applicability in several settings and domains. For instance, the authors demonstrate the framework's efficacy within a large set of models (e.g., VAE).}\\\\\n\setlength{\parindent}{15pt} [\textbf{CnGAN}]\nThe authors of~ propose GAN for cross-network (CnGAN) to address one of the significant shortcomings of cross-network recommendation, namely the missing preference scores of \textit{non-overlapping users}. These users exist in the source domain but not in the target domain, and thus, their preferences about items in the target domain are not available. In the proposed work, $\mathcal{G}$ learns the mapping of user preferences from \textit{target to source} and generates more \textit{informative preferences} on the source domain. $\mathcal{D}$ uses the synthetically generated preferences (generated from $\mathcal{G}$) to provide recommendations for users who only have interactions on the target network (non-overlapping users). The authors also propose two novel loss functions ---a content-wise and a user-wise loss function--- to guide the min-max training process better. 
The authors validate the effectiveness of the system against state-of-the-art models both in terms of accuracy and beyond-accuracy measures (novelty, diversity).", "id": "9a0c94e9-c849-443b-a1f3-b16c512135c9", "level": "subsubsection", "origin_cites_number": 11, "parent_id": "485abc46-712e-46d8-8a44-993d95b3a599", "prefix_titles": [ [ "title", "A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks" ], [ "section", "Adversarial Learning for GAN-based Recommendation" ], [ "subsection", "GAN-based Recommendation Models: State of the Art" ], [ "subsubsection", "Cross-domain Recommendation" ] ], "subsections": [], "title": "Cross-domain Recommendation" }, { "cite_extract_rate": 0.583333333333333, "cites": [ 1001, 1209, 1211, 117, 1213, 1212, 1210 ], "content": "\label{subsec:GAN_FA_rec}\nMost conventional RS are not suitable for application in the fashion domain due to unique characteristics hidden in this domain. For instance, people do not blindly follow the crowd when buying clothes, nor do they buy the same fashion item twice~. \nAnother aspect is related to the notion of \textit{complementary} relationship for recommending a personalized fashion outfit. It is natural for humans to establish a sense of relationship between products based on their visual appearance. \nRecently, GAN-based models have shown promising performance for outfit recommendation, being able to compete with state-of-the-art fashion recommendation models in the field, such as Siamese-based networks~. Finally, another new application of GANs is related to exploiting the \textit{generative power of GANs} to synthesize real-looking fashion clothes. 
This aspect can inspire the aesthetic appeal/curiosity of customers and designers and motivate them to explore the space of\npotential fashion styles.\n\setlength{\parindent}{15pt} [\textbf{CRAFT}] The authors of~ address the problem of recommending complementary fashion items based on visual features by using an adversarial process that resembles a GAN and uses a conditional feature transformer as $\mathcal{G}$ and a discriminator $\mathcal{D}$. One main distinction between this work and the prior literature is that the $\langle$input, output$\rangle$ pair for $\mathcal{G}$ are both features (here features are extracted using pre-trained CNNs~), instead of $\langle$image, image$\rangle$ or hybrid types such as $\langle$image, features$\rangle$ explored in numerous previous works~. This would allow the network to learn the relationship between items directly in the feature space, spanned by the features extracted. The proposed system is named complementary recommendation using adversarial feature transform (CRAFT) since in the model, $\mathcal{G}$ acts like a feature transformer that ---for a given query product image $q$--- maps the source feature $s_{q}$ into a complementary target feature $\hat{t}_{q}$ by playing a min-max game with $\mathcal{D}$, whose aim is to classify features as real or fake. For training, the system relies on learning the co-occurrence of item pairs in real images. In summary, the proposed method does not generate new images; instead, it learns how to generate features of the complementary items conditioned on the query item. \n\setlength{\parindent}{15pt} [\textbf{DVBPR}] Deep visual Bayesian personalized ranking (DVBPR)~ is presumably one of the first works that exploit the \textit{visual generative power of the GAN} in the fashion recommendation domain. It aims at generating clothing images based on user preferences. 
Given a user and a fashion item category (e.g., tops, t-shirts, and shoes), the proposed system generates new images ---i.e., clothing items--- that are consistent with the user's preferences. The contributions of this work are two-fold: first, it builds an end-to-end learning framework based on the Siamese-CNN framework. Instead of using the features extracted in advance, it constructs an end-to-end system that turns out to improve the visual representation of images. Second, it uses a GAN-based framework to generate images that are consistent with the user's taste. Iteratively, $\mathcal{G}$ learns to generate a product image integrating a \textit{user preference maximization objective}, while $\mathcal{D}$ tries to distinguish crafted images from real ones. Generated images are quantitatively compared with real images using the preference score (mean objective value), inception score~, and opposite SSIM~. This comparison shows an improvement in preference prediction in comparison with non-GAN based images. At the same time, the qualitative comparison demonstrates that the generated images are realistic and plausible, yet they are quite different from any images in the original dataset ---they have standard shape and color profiles, but quite different styles.\n\setlength{\parindent}{15pt} [\textbf{MrCGAN}] The authors of~ propose a compatibility learning framework that allows the user to visually explore candidate \textit{compatible prototypes} (e.g., a white T-shirt and a pair of blue-jeans). The system uses a metric-regularized conditional GAN (MrCGAN) to pursue the item generation task. It takes as input a projected prototype (i.e., the transformation of a query image in the latent \"Compatibility Space\"). It produces as output a synthesized image of a compatible item (the authors consider a compatibility notion based on the complementarity of the query item across different catalog categories). 
Similar to the evaluation protocol in~, the authors conduct online user surveys to evaluate whether their model could produce images that are perceived as compatible. The results show that MrCGAN can generate compatible and realistic images under the compatibility learning setting compared to baselines.\n\setlength{\parindent}{15pt} [\textbf{Yang et al.} \& \textbf{$c^+$GAN}] The authors of~ address the same problem setting as MrCGAN~ by proposing a fashion clothing framework composed of two parts: a clothing recommendation model based on BPR combined with visual features and a GAN-based complementary clothing item generator. Notably, the generation component takes as input a piece of clothing recommended in the recommendation model and generates clothing images of other categories (i.e., top, bottom, or shoes) to build up a set of complementary items. The authors follow a similar qualitative and quantitative evaluation procedure as DVBPR~ and further propose a \textit{compatibility index} to measure the compatibility of the generated set of complementary items. 
A similar approach has also been proposed in $c^+$GAN~, to generate a bottom fashion item paired with a given top item.", "id": "6e7196de-82aa-4d00-b39f-67020a488ecb", "level": "subsubsection", "origin_cites_number": 12, "parent_id": "485abc46-712e-46d8-8a44-993d95b3a599", "prefix_titles": [ [ "title", "A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks" ], [ "section", "Adversarial Learning for GAN-based Recommendation" ], [ "subsection", "GAN-based Recommendation Models: State of the Art" ], [ "subsubsection", "Fashion Recommendation" ] ], "subsections": [], "title": "Fashion Recommendation" }, { "cite_extract_rate": 0.75, "cites": [ 7305, 64, 63 ], "content": "\label{sec:conclusion}\nIn this paper, we have surveyed a wide variety of tasks in which adversarial machine learning (AML) is important to attack/defend a recommendation model as well as improve the generalization performance of the model itself. This broad range of applications can be categorized into two ---objective-wise distinct--- technologies: (i) AML for improving security (cf. Section~\ref{sec:security}) and (ii) AML used in generative adversarial networks (GANs) exploited for numerous tasks such as better CF recommendation, context-aware recommendation, cross-domain systems, or visually-aware fashion item/outfit recommendation (cf. Section~\ref{sec:GAN}). The common point of both technologies is the joint min-max optimization used for training models, in which two competing players play a zero-sum differential game until they reach an equilibrium. 
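As a minimal sketch (written in the standard GAN notation as an illustrative assumption, since the exact losses vary across the surveyed systems), this joint min-max objective can be stated as:

```latex
% Illustrative sketch only: the generic zero-sum objective shared by the surveyed
% adversarial approaches. G and D stand for the system-specific generator/attacker
% and discriminator/defender components discussed above.
\begin{equation*}
\min_{\theta_{G}} \max_{\theta_{D}} \;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right]
+ \mathbb{E}_{z \sim p_{z}}\left[\log\left(1 - D(G(z))\right)\right]
\end{equation*}
```

where $\theta_{G}$ and $\theta_{D}$ denote the parameters of the two competing players, and the equilibrium mentioned above corresponds to a saddle point of this objective.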
To the best of our knowledge, this is the first work that sums up the advances of AML applications in recommendation settings and proposes a clear taxonomy to classify such applications.\nWe put forward where it is most worthwhile to invest in AML-RS research and introduce the following open research directions:\n\noindent\emph{\underline{Bridging the gap between attack/defense models in the ML/CV and RS domain.}} \nAs the prior literature of AML for security emerged in the field of machine learning (ML) and computer vision (CV), there remains a large gap between advances made in those fields and those in RS. Consider the questions: \dquotes{Attacks for images are designed to be human-imperceptible or inconspicuous (i.e., may be visible but not suspicious). How can we capture these notions for designing attacks in RS?}; furthermore, \dquotes{Images are continuous-valued data while a user profile is discrete data. Modifying users' profiles completely changes the semantics of their behaviors. What is the best approach to treat these nuances in RS attack designs?} \\\n\emph{\underline{Choice of recommendation models.}} Modern recommendation models exploit a\nwealth of side-information beyond the user-item matrix such as social-connections, multimedia content, semantic data, among others. However, most of the attacks against recommendation systems are designed and validated against CF systems. Investigating the impact of adversarial attacks against these ---heterogeneous in nature--- data types remains an open, highly interesting challenge, e.g., consider adversarial attacks against music, image, and video recommendation models leveraging multimedia content. 
In this regard, we also recognize attacks against state-of-the-art deep and graph-based models as another highly-valued research direction.\\\n\emph{\underline{Definition of attack threat model.}} Research in the RS community lacks a common evaluation approach for attacking/defending scenarios such as the one introduced by Carlini et al.~. For instance, it is important to define a common attacker threat model to establish in advance the attacker's knowledge and capabilities to make the attack (or defense) reproducible and comparable with novel proposals.\\\n\emph{\underline{Move the attention towards beyond-accuracy goals in recommendation.}}\nAccording to our survey, most of the identified research works focus on accuracy metrics such as HR and nDCG. Consider the question: \dquotes{What is the impact of adversarial attacks and defenses in other evaluation objectives of RS, for instance, diversity, novelty, and fairness of recommendations}. The impact on these metrics could be, in principle, the main objective of a new breed of attack strategies aiming to compromise the diversity/novelty of results.\\\n\emph{\underline{Scalability and stability of learning.}} We identify the need to further explore the learning stability problems of the discrete item sampling strategy used to train the generator. This has already been identified as a big problem when GAN-based RS are applied in real scenarios with huge catalogues. A point of study may be that of novel GAN models proposed in computer vision (e.g., WGAN~, LSGAN~, and BEGAN~).\\\n\emph{\underline{User preference learning with GANs.}} An interesting and already established application of AML-RS is to exploit the generative power of GANs to produce more plausible user-rating profiles that can be used to improve recommendations in the cold-user scenario or improve the prediction performance in warm-start settings. 
We consider such applications extremely interesting, and we motivate further research in this direction to resolve the well-known cold-start obstacles in recommendation settings.\n\\input{Tables/Recommender_Models.tex}\n\\bibliographystyle{ACM-Reference-Format}\n\\bibliography{refs_final}\n\\end{document}\n\\endinput", "id": "aef46f5a-0d3e-4fef-b9e3-7d5fc876059b", "level": "section", "origin_cites_number": 4, "parent_id": "68e5db2c-bcee-48f2-8fe1-51da0b40c965", "prefix_titles": [ [ "title", "A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks" ], [ "section", "Summary and Future Directions" ] ], "subsections": [], "title": "Summary and Future Directions" } ]
111
[ 314, 891, 7318, 892, 1189, 923, 1187, 1186, 1188, 1190, 313, 7336, 1193, 890, 1192, 1191, 1196, 7337, 916, 1194, 1195, 681, 902, 917, 1197, 1199, 97, 1201, 1202, 1200, 1198, 1203, 1204, 529, 7217, 64, 782, 1207, 1206, 5180, 1205, 243, 8425, 1208, 1001, 1209, 1211, 117, 1213, 1212, 1210, 7305, 63 ]
1.025264
[ "Wenqi Wang", "Run Wang", "Lina Wang", "Zhibo Wang", "Aoshuang Ye" ]
Towards a Robust Deep Neural Network in Texts: A Survey
2019
2019-02-12T02:42:54Z
cs.CL
Deep neural networks (DNNs) have achieved remarkable success in various tasks (\eg{}, image classification, speech recognition, and natural language processing (NLP)). However, researchers have demonstrated that DNN-based models are vulnerable to adversarial examples, which cause erroneous predictions by adding imperceptible perturbations into legitimate inputs. Recently, studies have revealed adversarial examples in the text domain, which could effectively evade various DNN-based text analyzers and further bring the threats of the proliferation of disinformation. In this paper, we give a comprehensive survey on the existing studies of adversarial techniques for generating adversarial texts written by both English and Chinese characters and the corresponding defense methods. More importantly, we hope that our work could inspire future studies to develop more robust DNN-based text analyzers against known and unknown adversarial techniques. We classify the existing adversarial techniques for crafting adversarial texts based on the perturbation units, helping to better understand the generation of adversarial texts and build robust models for defense. In presenting the taxonomy of adversarial attacks and defenses in the text domain, we introduce the adversarial techniques from the perspective of different NLP tasks. Finally, we discuss the existing challenges of adversarial attacks and defenses in texts and present the future research directions in this emerging and challenging field.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "697307fb-5058-4940-99fb-a281f51f4034", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ] ], "subsections": [ "d277d9a2-375a-4a41-9ddb-db3a89d81595", "7637a2bd-4cd6-4630-89e2-0afc850edf5f", "744ede91-1e59-47e5-9ed0-4ceac7757786", "60350839-4653-460f-8134-0be4be772e64", "1804625a-81a9-4a52-8749-90a4382611c3", "1246daa9-3531-4f92-b8a9-9822a164a00e", "c668fad8-3b0a-4b93-8d38-3007046ffb10", "cf4bc9dc-d910-4f84-b886-04185306a490" ], "title": "root" }, { "cite_extract_rate": 0.6756756756756751, "cites": [ 5718, 205, 892, 3863, 313, 5829, 5828, 314, 8957, 5826, 4722, 209, 5832, 5827, 5830, 893, 2465, 2401, 3859, 5833, 5834, 3699, 4235, 4078, 5831 ], "content": "\\label{introduction}\n\tNowadays, deep neural networks (DNNs) have shown their great power in addressing masses of challenging problems in various areas, such as computer vision , audio , and natural language processing (NLP) . Due to their tremendous success, DNN-based systems are widely deployed in the physical world, including many security-critical areas . However, a series of studies have found that crafted inputs by adding imperceptible perturbations could easily fool DNNs. These modified inputs are so-called adversarial examples, which bring potential security threats to DNN-based systems even in the black-box scenario where the target system is not available to attackers. For example, Figure \\ref{isnatncefigure} shows an adversarial attack on the physical sentiment analysis system named ParallelDots\\footnote{\\url{https://www.paralleldots.com}}. In this case, we cannot obtain any knowledge of the system architecture, model parameters, and training data. However, it fails to distinguish the adversarial example correctly and output erroneous results. 
In fighting against the threats of adversarial examples, researchers have conducted numerous works on attacks and defenses, leading to a dramatic increase in both theory and application techniques, varying from images to texts. Here, we focus on the adversarial examples in the text domain rather than the well-investigated image domain. \n\t\begin{figure}[t]\n\t\t\centering\n\t\t\subfigure[original input]{\n\t\t\t\begin{minipage}[t]{0.5\linewidth}\n\t\t\t\t\centering\n\t\t\t\t\includegraphics[width=\linewidth]{instance_a.pdf}\n\t\t\t\end{minipage}\n\t\t}\n\t\t\subfigure[adversarial text]{\n\t\t\t\begin{minipage}[t]{0.5\linewidth}\n\t\t\t\t\centering\n\t\t\t\t\includegraphics[width=\linewidth]{instance_b.pdf}\n\t\t\t\end{minipage}\n\t\t}\n\t\t\centering\n\t\t\caption{Instance of an adversarial attack on the popular text analysis system, ParallelDots. ParallelDots provides a series of APIs for various NLP tasks (\eg{}, sentiment analysis) that have achieved state-of-the-art (SOTA) performance. We employ a popular adversarial technique based on the genetic algorithm to craft adversarial texts and evade ParallelDots. We can find that the text is predicted as negative with high confidence when the words \emph{inspire} and \emph{wonderful} in the original input are simply replaced by \emph{touch} and \emph{good}, respectively.}\n\t\t\label{isnatncefigure}\n\t\end{figure}\nIn NLP, DNNs are widely employed in many fundamental tasks (\eg{}, text classification, natural language inference, and machine translation). Unfortunately, these DNN-based systems suffer obvious performance degradation when facing adversarial examples. Papernot \etal{} first found that attackers could generate adversarial examples by adding imperceptible noises into texts, which would induce classifiers to produce incorrect results. Since then, an arms race has started on the text domain battleground, resulting in a surge of studies in this emerging field. 
Most of the adversarial attacks in texts focus on specific NLP tasks, which will bring potential security concerns to users. For instance, in the real world, when booking food online, users tend to search for nearby recommended restaurants in mobile apps and read reviews of their products. The service providers will give suggestions according to the posted comments via various techniques like sentiment analysis. However, these DNN-based text analyzers could be easily fooled by adversarial examples. Attackers can interfere with product ratings by posting adversarial texts. More seriously, attackers can maliciously propagate disinformation via adversarial texts to reap profits and cause profit losses to consumers. Thus, effective defense methods need to be devised, and robust models should be developed for the community.\n\tFor defense, countermeasures have been proposed to enhance the robustness of DNN-based text analyzers. Nevertheless, they are obviously not prepared for the emerging threats of adversarial examples, so continuous efforts are still needed. Figure \ref{statisticsjpg} shows the publications of adversarial examples in recent years, and it reveals that numerous studies are developing various adversarial techniques which pose challenges to defense. At present, adversarial text detection and model enhancement are two mainstream ideas in fighting against the threats of adversarial texts, but both of them exhibit obvious weaknesses. For instance, adversarial text detection is only suitable for certain adversarial attacks. Model enhancement like adversarial training falls short in distinguishing adversarial texts generated by unknown adversarial techniques. In summary, tackling unknown adversarial techniques, generalizing to different languages, and remaining effective across a wide range of NLP tasks are the three main obstacles for existing defense methods. 
To bridge this striking gap, it is urgent to inspire researchers to invest in the study of adversarial attacks and defenses in the text domain. Thus, a comprehensive survey is needed to present the preliminary knowledge and introduce the challenges of this field.\n\t\begin{figure}[t]\n\t\t\centering\n\t\t\subfigure[publications in all areas]{\n\t\t\t\begin{minipage}[t]{0.5\linewidth}\n\t\t\t\t\centering\n\t\t\t\t\includegraphics[width=\linewidth]{all_areas_color.png}\n\t\t\t\end{minipage}\n\t\t}\n\t\t\subfigure[publications in texts]{\n\t\t\t\begin{minipage}[t]{0.5\linewidth}\n\t\t\t\t\centering\n\t\t\t\t\includegraphics[width=\linewidth]{text_color.png}\n\t\t\t\end{minipage}\n\t\t}\n\t\t\centering\n\t\t\caption{Publications of adversarial examples. Figure \ref{statisticsjpg}(a) shows the number of publications in the field of adversarial examples, collected by Carlini, covering a wide range such as image, audio, text, \etc{}. Figure \ref{statisticsjpg}(b) represents the number of publications in the adversarial text domain.}\n\t\t\label{statisticsjpg}\n\t\end{figure}\n\tIn adversarial attacks and defenses, several surveys focus on the image domain, but few on texts. Here, we introduce the three existing surveys on texts and list the differences between them. \n\t\begin{itemize}\n\t\t\item In 03/2019, Belinkov \etal{} mainly focused on the interpretability of machine learning in NLP. They only review some attacks to understand these models' failures, but their work does not survey defense methods against adversarial attacks.\n\t\t\item In 03/2020, Xu \etal{} systematically reviewed cutting-edge algorithms in the fields of images, graphs, and texts. For adversarial attacks in texts, they only describe some methods according to different NLP tasks, but they do not analyze which kind of attack is suitable for each task, nor do they compare the similarities and differences between these methods.
Meanwhile, the authors also pay no attention to defenses in the text domain.\n\t\t\item In 04/2020, Zhang \etal{} mainly compared attack methods in the image domain and described how adversarial attacks are implemented in texts. They divide adversarial attacks into black-box and white-box attacks, just as in the image domain. However, this classification does not reflect how adversarial examples are generated in NLP. Due to the difference between texts and images, adversarial examples can be classified as char-level, word-level, sentence-level, and multi-level attacks according to the perturbation units in texts. Besides, the specially designed defense method (\ie{}, spelling check) in NLP is not introduced in their \textit{defense} section.\n\t\t\item In addition, all of them lack some important guidelines, such as the differences between Chinese-based and English-based adversarial examples, the interpretability of adversarial examples, and the combination with other interesting works (\eg{}, adding adversarial perturbations into deepfake texts to fool deepfake detectors ).\n\t\end{itemize}\n\tIn this paper, we review the studies of adversarial examples in the text domain with the goal of building robust DNN-based text analyzers by understanding the generation of adversarial texts, the weaknesses and strengths of existing defense methods, and the adversarial techniques for different NLP tasks. The contributions of our work are summarized as follows. \n\t\begin{itemize}\n\t\t\item We review not only adversarial attacks and defenses in the text domain, but also interpretation, imperceptibility, and certification works. Our systematic and comprehensive review helps newcomers understand this research field.\n\t\t\item The prior three surveys only focus on works related to English-based models, and none of them reviews the efforts of evaluating the robustness of Chinese-based models.
We bridge this gap and analyze the differences of adversarial examples between English-based and Chinese-based models.\n\t\t\\item We classify the adversarial texts into \\textit{char-level}, \\textit{word-level}, \\textit{sentence-level}, and \\textit{multi-level} according to the perturbation units in generating adversarial texts. Additionally, we focus on the adversarial attacks for the different NLP tasks. We hope this could inspire future researchers to understand the generation of adversarial texts and further develop general and effective defense methods for these NLP tasks.\n\t\t\\item We combine adversarial examples with model analysis methods to study the adversarial attacks and defenses. We review related analysis methods to explore NLP models' behaviors, contributing to proving the rationality of adversarial attack and defense methods.\n\t\\end{itemize}\n\tThe rest of this paper is organized as follows. We first give the preliminary knowledge of adversarial examples in Section \\ref{background}. Section \\ref{adversarialattacksintext} reviews the adversarial attacks for text classification. Attacks on other NLP tasks are presented in Section \\ref{adversarialexamplesonothertasks}. We introduce the defense methods in Section \\ref{defense}. Section \\ref{Chinesebasedmodels} shows related works on Chinese-based models, and we analyze the differences from English-based ones. Finally, Section \\ref{discussion} discusses our findings and the challenges from the reviewed works, which can shed new light on the following research direction. 
Section \\ref{conclusion} gives a conclusion about our comprehensive survey.", "id": "d277d9a2-375a-4a41-9ddb-db3a89d81595", "level": "section", "origin_cites_number": 37, "parent_id": "697307fb-5058-4940-99fb-a281f51f4034", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{background}\n\tThis section gives the basic preliminary knowledge of adversarial examples, including the formula descriptions, interpretation and classification of adversarial examples, metrics for measuring the imperceptibility of adversarial texts, and available public datasets in texts.", "id": "7637a2bd-4cd6-4630-89e2-0afc850edf5f", "level": "section", "origin_cites_number": 0, "parent_id": "697307fb-5058-4940-99fb-a281f51f4034", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Preliminaries" ] ], "subsections": [ "61de5970-1eb5-4dd1-a72c-b42c60495dfe", "af01e25a-8888-4896-a952-a40b9d1d8076", "e34d2918-2c0c-4ca5-986f-0cac9b8993dd", "517dcaa0-072e-462f-aa02-a4b70f3d4127", "5a33b7a9-ff14-42ad-a78b-cab87026c4d5", "31cd2141-07aa-4e48-843f-12381de7f937" ], "title": "Preliminaries" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{adversarialexampleformulation}\n\tTo present a more intuitive understanding of the definitions, we give some formula descriptions about DNN, adversarial examples, and robustness of DNN models.\n\t\\textbf{DNN.} A typical DNN can be presented as the function $F: X \\rightarrow Y$, which maps from an input set $X$ to a label set $Y$. $\\emph{Y}$ is a set of $k$ classes like $\\{1, 2, \\ldots, k\\}$. For a sample $\\emph{x} \\in X$, it is correctly classified by $\\emph{F}$ to the truth label $\\emph{y}$, \\ie{}, $F(x)=y$. 
\n\t\\textbf{Adversarial Examples.} An attacker aims at adding the small perturbation $\\varepsilon$ in $\\emph{x}$ to create adversarial example $\\emph{x'}$, such that $F(\\emph{x'}) = \\emph{y'}(\\emph{y} \\not= \\emph{y'})$. At the same time, $\\emph{x'}$ not only needs to fool $F$ but also should be imperceptible to humans. To enforce the generated $\\emph{x'}$ to be imperceptible, a series of metrics (\\eg, semantic similarity) are adopted to achieve this goal, \\ie{}, $\\parallel\\varepsilon\\parallel < \\delta$. $\\delta$ is a threshold to limit the size of perturbations. \n\t\\textbf{Robustness.} In defending against adversarial examples, a robust DNN model should tolerate the adversarial attacks and output the correct predictions in tackling the imperceptible additive noises . Hence, the prediction of the adversarial example $\\emph{x'}$ should be $y$ rather than $y'$ in a robust DNN model, \\ie{}, $F(x')=y$. The defense methods to enhance the robustness of models should tackle a wide range of $\\varepsilon$ effectively.", "id": "61de5970-1eb5-4dd1-a72c-b42c60495dfe", "level": "subsection", "origin_cites_number": 1, "parent_id": "7637a2bd-4cd6-4630-89e2-0afc850edf5f", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Preliminaries" ], [ "subsection", "Formula Descriptions" ] ], "subsections": [], "title": "Formula Descriptions" }, { "cite_extract_rate": 1, "cites": [ 967, 969, 5836, 892, 5837, 5835 ], "content": "Answering why adversarial examples exist can help us devise more effective and practical defense methods. In recent years, researchers have been continuously exploring this question since they observed the adversarial examples in 2014. However, the existence of adversarial examples is still an open question to the community. 
Here, we briefly introduce the recent efforts in exploring this question.\n\t\begin{enumerate}[leftmargin=*]\n\t\t\item \textbf{Model Linear Hypothesis.} Goodfellow \etal{} proposed the linearity hypothesis and claimed that the existence of adversarial examples was caused by the linear behavior of DNNs in high-dimensional space. The classifier is not sensitive to an adversarial perturbation added to a single input dimension, but it will misbehave when the perturbations are applied to all dimensions. Other studies also support the linear hypothesis as an explanation for the vulnerability of DNNs. However, Sabour \etal{} doubted this claim and demonstrated that the linearity hypothesis did not apply to their work because of the internal representation of adversarial examples in the DNN.\n\t\t\item \textbf{Data Distribution Influence.} Shafahi \etal{} claimed that DNN models were vulnerable to adversarial examples due to the data distribution. For a dataset, if adjacent pixels in images are highly correlated, the model is relatively robust against adversarial examples based on these images. When the pixels are spread out and less correlated, the vulnerability to adversarial examples increases. This explains the existence of adversarial examples in images well, but it is not applicable to texts.\n\t\t\item \textbf{Input Features.} Ilyas \etal{} demonstrated that adversarial examples were not bugs, but features. Input features can be classified as robust and fragile; both types are useful for prediction, but adversarial examples are generated when the perturbations are added to the fragile features. Thus, adversarial examples widely exist in the image, text, and other domains.\n\t\end{enumerate}\n\tHowever, the reason for the existence of adversarial examples is still not clear, though continuous efforts have been made in recent years.
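The linearity hypothesis above can be illustrated numerically: under a perturbation bounded by $\epsilon$ in each dimension, a linear unit's activation can shift by as much as $\epsilon\Vert w\Vert_{1}$, which grows with the input dimensionality. A minimal sketch with toy weights (not reproducing any cited experiment):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.01                             # per-dimension perturbation bound

for n in (10, 1_000, 100_000):         # increasing input dimensionality
    w = rng.choice([-1.0, 1.0], n)     # toy weights of a single linear unit
    x = np.zeros(n)                    # toy input
    x_adv = x + eps * np.sign(w)       # worst-case (FGSM-style) direction
    shift = w @ (x_adv - x)            # activation change = eps * ||w||_1
    print(n, shift)                    # grows linearly with n (~0.1, 10, 1000)
```

Each coordinate moves by only 0.01, yet the activation shift scales with the number of dimensions, matching the intuition that small per-dimension noise accumulates into a large change in high-dimensional linear models.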
Recently, DNN models have achieved tremendous success in many challenging tasks, but they largely remain black boxes, which has drawn continuous efforts to open them. Practical experience merely tells us that deeper and wider networks improve performance, but it offers little insight into why these models are susceptible to adversarial examples. Understanding the working mechanism of DNNs will be a key step in explaining the existence of adversarial examples. We hope that our survey could inspire future researchers to investigate this open question, which will promote the usage of DNNs in safety-critical areas.
Szegedy \etal{} first found that adversarial examples generated from one neural network could also make another network misbehave, even when the two networks were trained on different datasets, reflecting the transferability of adversarial examples. Therefore, attackers can train a substitute model and utilize this transferability to attack target models to which they have no access or only restricted queries. Recent studies show that different types of adversarial attacks have different transferability. For instance, adversarial examples generated by one-step gradient-based methods are more transferable than those generated by iterative methods, but their attack abilities show the opposite trend.
Hence, generating adversarial examples with high transferability is not only a prerequisite for carrying out black-box attacks, but also a metric for evaluating the generality of attacks.
Figure \ref{fig2} presents a classification of adversarial attacks and defenses. Adversarial attacks can be divided into black-box and white-box attacks according to the attacker's knowledge of target models. Additionally, attacks can be divided into targeted and non-targeted attacks based on whether a specific erroneous output is desired. In defending against adversarial examples, the mainstream ideas are adversarial example detection and model enhancement. Adversarial example detection involves observing the minor differences between legitimate inputs and adversarial examples.
Model enhancement denotes modifying the model architecture and updating its parameters.\n\t\begin{figure}[t]\n\t\t\centering\n\t\t\setlength{\belowcaptionskip}{-0.3cm}\n\t\t\includegraphics[width=\linewidth]{fig2.pdf}\n\t\t\caption{Classification of adversarial attacks and defenses.}\t\label{fig2}\n\t\end{figure}
\label{typesofadversarialattack}\n\tAdversarial attacks can be conducted in both white-box and black-box scenarios. In the white-box scenario, adversaries have full access to target models. They can generate effective adversarial examples by leveraging the target models' knowledge, including model architectures, parameters, and training data. In the black-box scenario, adversaries cannot obtain any knowledge of the target models. They utilize the transferability of adversarial examples or repeated queries for optimization to perform a black-box attack. \n\tAccording to the desire of adversaries, adversarial attacks can be divided into targeted and non-targeted attacks. In the targeted attack, the generated adversarial example $\emph{x'}$ is purposefully classified into a specified class $t$, which is the adversary's target. This process mainly relies on increasing the confidence score of class $t$. In the non-targeted attack, the adversary only aims at fooling the model rather than expecting a particular output. The result $\emph{y'}$ can be any class except for $\emph{y}$.
Contrary to the targeted attack, the non-targeted attack operates via reducing the confidence score of the correct class $y$.\n\tIn the text domain, adversarial attacks can be classified as char-level, word-level, sentence-level, and multi-level (shown in Figure \ref{AEintexts}) according to the perturbation units used in generating adversarial examples. Char-level attacks indicate that adversaries modify several characters in words to generate adversarial examples that can fool the detectors. Specifically, the modifications are mostly misspellings, and the common operations include insertion, swap, deletion, and flip. Word-level attacks involve various word perturbations. Attackers generate adversarial examples by inserting, replacing, or deleting certain words in various manners. Sentence-level attacks usually insert a sentence into a text or rewrite the sentence while maintaining its meaning. Multi-level attacks incorporate more than one of the three perturbation types to achieve imperceptible attacks with high success rates.\n\t\begin{figure}[t]\n\t\t\centering\n\t\t\setlength{\belowcaptionskip}{-0.3cm}\n\t\t\includegraphics[width=\linewidth]{AEintext.pdf}\n\t\t\caption{Classification of adversarial texts based on the perturbation units.} \label{AEintexts}\n\t\end{figure}
\label{typesofadversarialdefense}\n\tIn defending against adversarial attacks, the goal is to build a robust DNN model that tackles various known and unknown adversarial techniques
well. In existing studies, the mainstream defense strategies can be divided into adversarial example detection and model enhancement. \n\tAdversarial example detection refers to directly distinguishing adversarial examples from legitimate inputs based on observed subtle differences. Model enhancement involves parameter updates or architecture modifications, such as adversarial training and adding additional layers. In texts, spelling check and adversarial training are two major ways of defending against adversarial attacks. The spelling check is a detection method specially designed for NLP, while adversarial training is a general approach employed in image, text, audio, \etc{}
\label{metric}\n\tAdversarial examples are crafted by adding imperceptible perturbations into legitimate inputs to incur erroneous output labels. In the image domain, various metrics are adopted to measure the imperceptibility of adversarial examples. The $L_{p}$ norm is the most commonly used one, defined as\n\t\begin{equation} \label{triangle}\n\t\Vert\triangle c \Vert_{p}=\sqrt[p]{\sum_{i=1}^{n} |c'_{i}-c_{i}|^{p}} \n\t\end{equation}\n\twhere $\triangle c$ represents the perturbations, and $c'_{i}$ and $c_{i}$ are the $i$-th factors in the $n$-dimensional vectors $\vec{c'}$ and $\vec{c}$, respectively.
Formula \\eqref{triangle} represents a series of distances, where $p$ could be 0 , 2 , $\\infty$ , and so on. Specially, when $p$ is equal to zero, $\\Vert\\triangle c \\Vert_{0}$=$\\sum bool(c_{i}\\ne 0)$. $bool$ is a logical function with 0 or 1. \n\tHowever, it is impossible to borrow the ``imperceptible'' measurements in images to texts. In comparison with the pixel-level modification in images, perturbations for generating adversarial texts operate the characters, words, or sentences, which are visible to humans due to the introduced grammatical and spelling errors. A successful attack needs to maintain the semantic meaning of the generated adversarial texts the same as the original ones and be imperceptible to humans. Therefore, imperceptible adversarial examples (\\eg{}, Figure \\ref{isnatncefigure}) in the text domain should satisfy the following basic requirements. (1) No obvious errors could be easily observed by human eyes. (2) The crafted adversarial texts should convey the same semantic meaning as the original ones. (3) The model output on the adversarial text and the legitimate input should be different, which means an erroneous output occurred. Thus, the majority of metrics adopted in images can not be directly applied in measuring texts due to the symbolic representations of perturbations in texts rather than the number representations like the pixel. Next, we detail the metrics (\\eg{}, Euclidean distance, Edit distance, Cosine similarity, and Jaccard Similarity Coefficient) employed for measuring the imperceptibility of adversarial texts. \n\t\\textbf{Euclidean Distance}. The original Euclidean distance is the beeline from one point to another in Euclidean space. As the mapping of text to this space, it acts as a metric to calculate the similarity between two objects, which are represented as vectors. 
Thus, given two word vectors $\vec{m}=(m_1,\ldots, m_i, \ldots, m_k)$ and $\vec{n}=(n_1,\ldots, n_i, \ldots, n_k)$, the Euclidean distance $ED$ between these two vectors is defined as\n\t\begin{equation} \label{EuclideanDistance}\n\tED\!=\!\sqrt{(m_1\!-\!n_1)^2\!+\!\cdots\!+\!(m_i\!-\!n_i)^2\!+\!\cdots\!+\!(m_k\!-\!n_k)^2}\n\t\end{equation}\n\twhere $m_{i}$ and $n_{i}$ are the $i$-th factors in the two $k$-dimensional vectors, respectively. The lower the distance is, the more similar they are. \n\t\textbf{Edit Distance}. Edit distance refers to the minimum number of editing operations required to convert one string into another, and the Levenshtein distance is a widely used edit distance. For two strings $a$ and $b$, the Levenshtein distance $lev$ is calculated by \n\t\begin{eqnarray} \label{editDistance}\n\tlev(i,j)=\left\{\n\t\begin{array}{ll}\n\t\max (i,j), & \min (i,j)=0 \\\n\t\min\left\{ \begin{array}{l}\n\tlev(i-1,j)+1 \\\n\tlev(i,j-1)+1 \\\n\tlev(i-1,j-1)+1_{a_i\neq b_j} \n\t\end{array}\n\t\right. & otherwise\n\t\end{array} \n\t\right.\n\t\end{eqnarray}\n\twhere $lev(i,j)$ is the distance between the first $i$ characters of $a$ and the first $j$ characters of $b$, and $1_{a_i\neq b_j}$ equals 1 if $a_i\neq b_j$ and 0 otherwise. The lower it is, the more similar the two strings are.\n\t\textbf{Cosine Similarity}. Cosine similarity refers to the similarity between two vectors measured by the cosine of the angle between them. For two given word vectors $\vec{m}$ and $\vec{n}$, the cosine similarity $CD$ is calculated by \n\t\begin{equation} \label{cosinesimilarity}\n\tCD = \frac{\vec{m} \cdot \vec{n}}{\Vert m \Vert \cdot \Vert n \Vert} = \frac{\sum\limits_{i=1}^k m_i \times n_i}{\sqrt{\sum\limits_{i=1}^k (m_i)^2} \times \sqrt{\sum\limits_{i=1}^k (n_i)^2}} \n\t\end{equation}\n\tCompared with the Euclidean distance, the cosine similarity pays more attention to the difference between the directions of two vectors.
The more consistent their directions are, the more similar they are. \n\t\\textbf{Jaccard Similarity Coefficient}. The Jaccard similarity coefficient is used to compare the similarity between a limited sample set. For two given sets A and B, their Jaccard similarity coefficient $J(A, B)$ is calculated by\n\t\\begin{equation} \\label{JaccardSimilarity}\n\tJ\\left(A, B\\right) = |A \\cap B| / |A \\cup B| \n\t\\end{equation}\n\twhere $0 \\leq J(A,B) \\leq 1$. The closer the value of $J(A,B)$ is to 1, the more similar they are. In texts, intersection $A \\cap B$ refers to similar words in the samples, and union $A \\cup B$ is all words without duplication.\n\tThese aforementioned metrics are widely applied in tackling various machine learning tasks. Euclidean distance and cosine distance accept vectors for calculation, while the Jaccard similarity coefficient and edit distance directly operate on the raw texts without any transformations into vectors. Particularly, Michel \\etal{} proposed a natural criterion for adversarial texts on sequence-to-sequence models. This work focuses on evaluating the semantic equivalence between adversarial examples and the original ones. 
Experimental results show that strict constraints are useful for keeping meaning-preserving, but the performance compared with the aforementioned metrics needs further research efforts.", "id": "5a33b7a9-ff14-42ad-a78b-cab87026c4d5", "level": "subsection", "origin_cites_number": 16, "parent_id": "7637a2bd-4cd6-4630-89e2-0afc850edf5f", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Preliminaries" ], [ "subsection", "Metrics on Imperceptibility in texts" ] ], "subsections": [], "title": "Metrics on Imperceptibility in texts" }, { "cite_extract_rate": 0.7812500000000001, "cites": [ 4846, 5848, 5849, 5843, 5847, 5829, 9133, 5840, 5844, 3103, 5845, 5842, 5851, 1632, 5830, 5850, 2565, 5839, 5841, 5833, 8012, 5846, 4235, 7586, 8959 ], "content": "\\label{datasets}\n\tWe survey the top-tier conferences and journals in artificial intelligence (AI) and NLP (\\eg{}, ICLR, ACL, AAAI, EMNLP, IJCAI, NAACL, COLING, TACL, TKDE, and JMLR) to collect the employed databases in texts. Table \\ref{datasetsinthetable} shows the details of the widely-adopted datasets employed in the studies of adversarial attacks.\n\t\\begin{table*}[t]\n\t\t\\scriptsize\n\t\t\\centering\n\t\t\\caption{Twelve popular text datasets employed in the studies of adversarial attacks. The second column shows the name of each data with the source download link. The following three columns give a brief description, size, and application in which NLP task. 
NLI is short for natural language inference, NMT is short for neural machine translation, and QA is short for question and answer.}\n\t\t\\label{datasetsinthetable}\n\t\t\\begin{adjustbox}{width=\\linewidth,center}\n\t\t\t\\begin{tabular}{|l|l|l|l|l|}\n\t\t\t\t\\hline \n\t\t\t\t\\multirow{3}{*}{Task} & \\multirow{3}{*}{Name} & \\multirow{3}{*}{Description} & \\multirow{3}{*}{Size} & \\multirow{3}{*}{Application} \\\\\n\t\t\t\t& & & & \\tabularnewline\n\t\t\t\t& & & & \\tabularnewline\n\t\t\t\t\\hline \n\t\t\t\t\\hline\n\t\t\t\t\\multirow{8}{*}{classification} & AG's news\\footnotemark[2] & News from over 2,000 sources & 144K & \\tabularnewline\n\t\t\t\t\\cline{2-5}\n\t\t\t\t& DBPedia\\footnotemark[2] & Structured content from Wikimedia projects & 45K & \\tabularnewline\n\t\t\t\t\\cline{2-5}\n\t\t\t\t& Amazon\\footnotemark[2] & Product reviews on Amazon & 2 million & \\tabularnewline\n\t\t\t\t\\cline{2-5}\n\t\t\t\t& Yahoo\\footnotemark[2] & Yahoo! Answers Comprehensive Questions & 1.4 million & \\tabularnewline\n\t\t\t\t\\cline{2-5}\n\t\t\t\t& Yelp\\footnotemark[2] & User reviews of merchants & 140K & \\tabularnewline\n\t\t\t\t\\cline{2-5}\n\t\t\t\t& IMDB\\footnotemark[2] & polarized movie reviews & 50K & \\tabincell{l}{ \\\\ } \\tabularnewline\n\t\t\t\t\\cline{2-5}\n\t\t\t\t& MR\\footnotemark[3] & movie-review data & 10K & \\tabularnewline\n\t\t\t\t\\cline{2-5}\n\t\t\t\t& SST\\footnotemark[4] & standard sentiment dataset from Stanford & 240K & \\tabularnewline\n\t\t\t\t\\hline\n\t\t\t\tQA & SQuAD\\footnotemark[5] & dataset for question answering and reading comprehension from Wikipedia & 100K & \\tabularnewline\n\t\t\t\t\\hline\n\t\t\t\t\\multirow{2}{*}{NLI}& SNLI\\footnotemark[6] & human-written English sentence pairs & 570K & \\tabularnewline\n\t\t\t\t\\cline{2-5}\n\t\t\t\t& MultiNLI\\footnotemark[7] & crowd-sourced collection of sentence pairs & 433K & \\tabularnewline\n\t\t\t\t\\hline\n\t\t\t\tNMT & WMT14\\footnotemark[8] & parallel texts (\\eg{}, 
German/English) for translation models & -- & \tabularnewline\n\t\t\t\t\hline\n\t\t\t\end{tabular}\n\t\t\end{adjustbox}\n\t\end{table*}\n\tBeyond investigating the popular text databases employed in adversarial attacks, we also explore how many datasets are adopted in the recent works and which tasks they apply to. Figure \ref{howmanydatasets} presents the proportion of datasets used for each NLP task. We can observe that more than half of the databases focus on text classification, which is a critical NLP task.\n\t\begin{figure}[t]\n\t\t\centering\n\t\t\setlength{\belowcaptionskip}{-0.3cm} \n\t\t\includegraphics[width=\linewidth]{pie_Chart.png}\n\t\t\caption{Statistics of datasets used in the research of adversarial attacks. In total, 52 different datasets are employed in related works. All of them can be found in Table \ref{tabpaper_information}.} \label{howmanydatasets}\n\t\end{figure}\n\t\footnotetext[2]{\url{https://course.fast.ai/datasets}}\n\t\footnotetext[3]{\url{http://www.cs.cornell.edu/people/pabo/movie-review-data/}}\n\t\footnotetext[4]{\url{https://github.com/stanfordnlp/sentiment-treebank}}
\label{adversarialattacksintext}\n\tIn recent studies, the majority of adversarial attacks focus on fooling text classification systems.
Next, we detail the four types of adversarial attacks based on the perturbation units: char-level attacks, word-level attacks, sentence-level attacks, and multi-level attacks.
Char-level attacks indicate that adversaries modify several characters in words to generate adversarial examples that can fool the detectors. The modifications are typically misspellings, and the operations include insertion, swapping, deletion, and flipping. Although this kind of attack can achieve a high success rate, misspellings can be easily detected. Next, we introduce some representative char-level attacks. \n\tGao \etal{} proposed a char-level attack, called DeepWordBug, to generate adversarial examples in the black-box scenario, which follows a two-step pipeline. The first stage quantifies the importance of words and determines which one to change. The calculation process for the first stage is given by \n\t\begin{equation} \label{firststage}\n\t\begin{split}\n\tCS(x_i)=&[F(x_1,\ldots,x_{i-1},x_i)-F(x_1,x_2,\ldots,x_{i-1})]+\\&\lambda[F(x_i,x_{i+1},\ldots,x_n)-F(x_{i+1},\ldots,x_n)]\n\t\end{split}\n\t\end{equation}\n\twhere $CS(x_i)$ represents the importance score of the $i$-th word in $(x_{1},\ldots,x_{n})$, evaluated by the function $F$, and $\lambda$ is a hyper-parameter.
The second stage adds imperceptible perturbations to the selected words through swapping, flipping, deletion, and insertion. Meanwhile, edit distance is used to preserve the readability of generated adversarial examples. \n\tGil \\etal{} derived a new method DISTFLIP based on HotFlip . The authors distill the knowledge of the procedure in HotFlip to train their model. Through the trained model, the authors generate adversarial examples to conduct a black-box attack. This method performs better than HotFlip on a toxicity classifier, and its run-time in generating adversarial examples is ten times faster than HotFlip. However, whether the knowledge of arbitrary white-box attacks can be distilled in this way remains unclear.", "id": "9d7f9e26-b7ed-40f5-92b8-f3630a83eebd", "level": "subsection", "origin_cites_number": 4, "parent_id": "744ede91-1e59-47e5-9ed0-4ceac7757786", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Adversarial Attacks for classification in Texts" ], [ "subsection", "Char-level Attacks" ] ], "subsections": [], "title": "Char-level Attacks" }, { "cite_extract_rate": 0, "cites": [], "content": "Word-level attacks manipulate the whole word rather than several characters in words. Hence, the modifications are more imperceptible to humans than char-level attacks. Common manipulations include insertion, deletion, and replacement. 
According to the way of selecting manipulated words, the word-level adversarial attack can be classified into gradient-based, importance-based, and other attacks.\n\t\\footnotetext[5]{\\url{https://datarepository.wolframcloud.com/resources/SQuAD-v1.1}}\n\t\\footnotetext[6]{\\url{https://nlp.stanford.edu/projects/snli/}}\n\t\\footnotetext[7]{\\url{https://cims.nyu.edu/~sbowman/multinli/}}\n\t\\footnotetext[8]{\\url{http://www.statmt.org/wmt14/translation-task.html}}", "id": "b4a6e968-9839-41d5-a721-50141a19865b", "level": "subsection", "origin_cites_number": 0, "parent_id": "744ede91-1e59-47e5-9ed0-4ceac7757786", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Adversarial Attacks for classification in Texts" ], [ "subsection", "Word-level Attacks" ] ], "subsections": [ "8fce6467-8e06-4ccb-9a18-af7004b0280c", "cb07270b-e207-4c05-8ba5-bcabec467a6d", "5e766cfb-bd34-4e06-a371-319b57a74027" ], "title": "Word-level Attacks" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 5841, 5842, 5830, 892, 894 ], "content": "Studies on adversarial examples in the image domain are more active than those in texts. Inspired by the fast gradient sign method (FGSM) in the image domain, attackers generate adversarial examples by calculating the gradient of text vectors in a model.\n\tAs far as we know, Papernot \\etal{} first studied the problem of adversarial examples in texts and contributed to producing adversarial input sequences. The authors leverage computational graph unfolding to evaluate the forward derivative (\\ie{}, the model’s Jacobian $J(x)$), which is related to the search of modified words. In detail, they utilize FGSM to guide the perturbations, and it can be represented as \n\t\\begin{eqnarray}\n\tsign(J(x)[i,\\arg\\max_{0,1}(p_j)])\n\t\\end{eqnarray} \n\twhere $i$ is the $i$-th word in an input sequence, and $p_j$ indicates the probability of belonging to category $j$. 
If the value of $\\arg\\max_{0,1}(p_j)$ changes, the modification is effective. However, the words in input sequences are iteratively selected for substitution. Hence, there may exist grammatical errors in the generated adversarial examples. \n\tDifferent from Papernot \\etal{} , Samanta \\etal{} employed FGSM to evaluate the important or salient words, which deeply affect the classification results when they are removed. Three modification strategies (\\textit{i.e.}, insertion, replacement, and deletion) are introduced to craft the top $k$ words with the highest importance, where $k$ is a threshold. Except for the deletion strategy, both insertion and replacement on the top $k$ words require an additional dictionary for operation. Thus, the authors establish a pool of candidates for each word in the experiments, including synonyms, typos, and type-specific keywords. However, constructing the candidate pool is computationally expensive, and there may be no candidate pool for some top $k$ words in the actual inputs. \n\tUnlike the above methods, Sato \\etal{} proposed iAdv-Text by adding perturbations in the embedding space. iAdv-Text formulates this as an optimization problem, which jointly minimizes the objective function $\\mathcal{J}_{iAdvT}(D,W)$ on the entire training dataset $D$ with parameters $\\emph{W}$. The optimization procedure is shown in Eq.~\\eqref{eq5}:\n\t\\begin{equation} \\label{eq5}\n\t\\begin{split}\n\t\\mathcal{J}_{iAdvT}(D,W) = &\\frac{1}{|D|}\\mathop{\\arg\\min}_{W}\\{\\sum_{(\\hat{X},\\hat{Y})\\in D}\\ell(\\hat{X},\\hat{Y},W)+\\\\&\\lambda\\sum_{(\\hat{X},\\hat{Y})\\in D}\\alpha_{iAdvT}\\}\n\t\\end{split}\n\t\\end{equation}\n\twhere $\\hat{X}$ and $\\hat{Y}$ represent the inputs and labels, respectively. $\\lambda$ is a hyper-parameter to balance the two loss functions. $\\ell(\\hat{X},\\hat{Y},W)$ is the loss function of the individual training sample $(\\hat{X},\\hat{Y})$ in $D$. 
$\\alpha_{iAdvT}$ is a maximization process to find the worst-case weights of the direction vectors, calculated by\n\t\\begin{equation} \\label{functionofiAdvT}\n\t\\alpha_{iAdvT} = \\frac{\\epsilon g}{\\Vert g \\Vert_2}, g = \\nabla_{\\alpha}\\ell(\\vec{w} + \\sum_{k=1}^{|V|}a_kd_k, \\hat{Y}, W)\n\t\\end{equation}\n\twhere $\\sum_{k=1}^{|V|}a_kd_k$ is the perturbation generated from each input on its word embedding vector $\\vec{w}$, $\\epsilon$ is a hyper-parameter to control adversarial perturbations, $a_{k}$ is the $k$-th factor of a $|V|$-dimensional word embedding vector $\\alpha$, $d_{k}$ is the $k$-th factor of a $|V|$-dimensional direction vector $\\vec{d}$, which is a mapping from one word to another in embedding space. iAdv-Text restricts the direction of perturbations with cosine similarity for finding a substitution, which is in a pre-defined vocabulary rather than an unknown word. \n\tBehjati \\etal{} designed universal perturbations that can be added to any input. They optimize the gradient of the loss function $loss$ to construct a word vocabulary $V$ containing words that apply to all data. The optimization is shown in Eq.~\\eqref{Behjatietal}:\n\t\\begin{eqnarray}\\label{Behjatietal}\n\tw'_i=\\arg\\min_{w'_i \\in V} \\cos(emb(w'_i),(emb(w_i)+\\alpha r_i))\n\t\\end{eqnarray}\n\twhere $r_i=\\nabla_{emb(w_i)}loss(l,f(x'))$. $emb(w_i)$ is the corresponding embedding of word $w_i$, $l$ represents the label. If it is a targeted attack, $l$ is the target label, and the learning rate $\\alpha$ is negative. Otherwise, $l$ is the ground truth label, and $\\alpha$ is positive. The words in $V$ will be inserted into texts to generate adversarial examples. 
However, the insertion occurs at the beginning of the input sequence leading to grammatical errors and breaking the imperceptibility rule.", "id": "8fce6467-8e06-4ccb-9a18-af7004b0280c", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "b4a6e968-9839-41d5-a721-50141a19865b", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Adversarial Attacks for classification in Texts" ], [ "subsection", "Word-level Attacks" ], [ "subsubsection", "Gradient-based attacks" ] ], "subsections": [], "title": "Gradient-based attacks" }, { "cite_extract_rate": 0.4, "cites": [ 5849, 8959 ], "content": "By analyzing the existing methods of generating adversarial texts, researchers noticed that the importance of each word in determining the final predictions is vastly different. Based on this initial idea, researchers have launched successful attacks by computing the importance of words and modifying these high valuable words. Importance-based attacks usually follow a two-step pipeline.\n\t\\begin{enumerate}[leftmargin=*]\n\t\t\\item Calculating the importance of words by querying the target model multiple times.\n\t\t\\item Modifying these important words via insertion, deletion, or replacement. \n\t\\end{enumerate}\n\tRen \\etal{} designed a synonym replacement method PWWS working on the word-level. The authors construct a synonym set $\\mathbb{L}_i$ for each word in the inputs. To search for suitable substitutions, an optimization process in formula \\eqref{synonymset} is conducted by maximizing the word saliency of these words.\n\t\\begin{equation} \\label{synonymset}\n\t\\begin{split}\n\tR(w_i,\\mathbb{L}_{i})=\\arg\\max{P(y_{true}|x)-P(y_{true}|x_{i}')}\n\t\\end{split}\n\t\\end{equation}\n\twhere $R(w_i,\\mathbb{L}_{i})$ substitutes the best candidate synonym $w_{i}^{*}$ of $i$-th word in the text $x$. $x_{i}'$ is obtained by replacing the $i$-th word in $x$ with each candidate. 
$P(y|x)$ is the classification probability of $x$. After that, the final process determines the order of replacement in $x$. \n\tHsieh \\etal{} thought that the changes in words with the highest or lowest attention scores could substantially undermine self-attentive models' predictions. Hence, they exploit the attention scores as a potential source of vulnerability and modify target words with the highest or lowest scores. A random word in the vocabulary is greedily selected to replace the target word until the attack succeeds. Although a constraint on the embedding distance is imposed to keep the modified text semantically similar, the vocabulary construction is not clear. Similarly, Yang \\etal{} greedily searched for the weak spot of the input sentence by replacing a word with the padding. If the probability changes much after modification, the word will be replaced with a randomly selected word in the vocabulary. Due to the lack of constraints, this attack sometimes changes the semantics of the original sentence. \n\tJin \\etal{} presented Textfooler, a black-box attack to fool the bidirectional encoder representations from transformers model (BERT) on text classification. They first identify the important words for the target model and then prioritize replacing them with synonyms until the prediction is altered. The word importance $I_{w_i}$ is calculated as \n\t\\begin{eqnarray}\n\tI_{w_i}=\\left\\{\n\t\\begin{array}{ll}\n\tF_Y(X)-F_Y(X_{w_i}), & \\text{if } F(X)=F(X_{w_i})=Y \\\\\n\tF_Y(X)-F_Y(X_{w_i})+F_{\\hat{Y}}(X_{w_i})-F_{\\hat{Y}}(X), & \\\\\n\t\\quad \\text{if } F(X)=Y, F(X_{w_i})=\\hat{Y}, \\text{ and } Y\\neq \\hat{Y}\n\t\\end{array} \n\t\\right.\n\t\\end{eqnarray}\n\twhere $w_i$ is the $i$-th word in $X$. $F_Y(\\cdot)$ represents the prediction score for the $Y$ label. 
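The two-step pipeline shared by these importance-based attacks can be sketched as follows. This is a hedged, minimal illustration rather than any particular published method: all names are hypothetical, word importance is measured by the removal strategy (drop in the true-class probability when a word is deleted), and substitution proceeds greedily over a synonym dictionary.

```python
# Generic sketch of the two-step importance-based pipeline (hypothetical
# names).  `prob_true` is a black-box returning the probability the model
# assigns to the ground-truth class; `candidates` maps words to synonyms.
def leave_one_out_scores(words, prob_true):
    """Step 1: importance of a word = drop in the true-class probability
    when that word is removed from the input."""
    base = prob_true(words)
    return [base - prob_true(words[:i] + words[i + 1:])
            for i in range(len(words))]

def importance_attack(words, prob_true, candidates, threshold=0.5):
    """Step 2: greedily substitute words in decreasing importance order
    until the true-class probability falls below the decision threshold."""
    scores = leave_one_out_scores(words, prob_true)
    order = sorted(range(len(words)), key=scores.__getitem__, reverse=True)
    adv = list(words)
    for i in order:
        for cand in candidates.get(adv[i], []):
            trial = adv[:i] + [cand] + adv[i + 1:]
            if prob_true(trial) < prob_true(adv):  # keep harmful swaps only
                adv = trial
                break
        if prob_true(adv) < threshold:
            break
    return adv

# Toy usage: a "sentiment model" that counts positive words.
POS = {"great", "excellent"}
prob_true = lambda ws: sum(w in POS for w in ws) / len(ws)
adv = importance_attack(["a", "great", "excellent", "film"], prob_true,
                        {"great": ["fine"], "excellent": ["okay"]})
# adv == ["a", "fine", "excellent", "film"]
```

Real attacks differ mainly in how step 1 is scored (deletion, attention, gradients) and in how the candidate set is built (synonym lexicons, embeddings, masked language models).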
\n\tConsidering the limitations (\\textit{i.e.,} out-of-context and unnaturally complex token replacements) of synonym substitution methods, Garg \\etal{} used contextual perturbations from a BERT masked language model to generate adversarial examples. They search for important words similar to Jin \\etal{} . Then, words from the pre-trained BERT masked language model are used to replace the important words in the inputs or inserted to adjacent positions.", "id": "cb07270b-e207-4c05-8ba5-bcabec467a6d", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "b4a6e968-9839-41d5-a721-50141a19865b", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Adversarial Attacks for classification in Texts" ], [ "subsection", "Word-level Attacks" ], [ "subsubsection", "Importance-based attacks" ] ], "subsections": [], "title": "Importance-based attacks" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 5833, 2565, 5829 ], "content": "Apart from gradient-based and importance-based attacks, researchers also propose other ways to create adversarial examples such as the genetic algorithm and particle swarm optimization-based search algorithm.\n\tAlzantot \\etal{} proposed a continuous optimization method at the word-level using the genetic algorithm (GA) . GA initializes the generation by nearest neighbor replacement. The optimization is computed as\n\t\\begin{equation} \\label{nearestneighborreplacement}\n\tx_{adv}=\\mathcal{P}^{g-1}_{\\arg\\max f(\\mathcal{P}^{g-1})_{target}} \n\t\\end{equation}\n\twhere $\\mathcal{P}^{g-1}$ represents the $(g-1)$-th generation, $f$ is the target model, $x_{adv}$ means the best individuals in this generation that can fool $f$ to produce incorrect predictions. If the samples in $x_{adv}$ do not satisfy the requirements, they are selected as the next generation to repeat the previous optimization process, where $\\mathcal{P}^{g}={x_{adv}}$. 
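The population-based search loop described above can be sketched as follows. This is a hedged illustration only: crossover is omitted for brevity, elitism plus mutation stands in for the full genetic operators, and all helper names and the toy victim model are hypothetical.

```python
import random

# Sketch of a population-based (GA-style) search for adversarial examples.
# `fitness` queries the victim model for the target-class probability and
# `perturb` replaces one word with a nearest neighbour in embedding space.
def genetic_attack(seed, fitness, perturb, pop_size=8, generations=20):
    rng = random.Random(0)
    population = [perturb(seed, rng) for _ in range(pop_size)]
    for _ in range(generations):
        best = max(population, key=fitness)
        if fitness(best) > 0.5:      # decision flipped: attack succeeds
            return best
        # elitism + mutation: keep the best individual and refill the
        # population with its mutated copies (the next generation).
        population = [best] + [perturb(best, rng)
                               for _ in range(pop_size - 1)]
    return None

# Toy usage with a deterministic neighbour substitution.
fitness = lambda words: 1.0 if words[0] == "poor" else 0.0
perturb = lambda words, rng: ["poor"] + words[1:]
adv = genetic_attack(["bad", "movie"], fitness, perturb)
# adv == ["poor", "movie"]
```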
Different from DeepWordBug , GA utilizes Euclidean distance to maintain the semantics. \n\tZang \\etal{} proposed a novel attack model, which incorporated the sememe-based word substitution method and particle swarm optimization-based search algorithm. For each word in a given text, the authors use HowNet to find its sememe and then add words with the same sememe label to the word list. After that, a particle swarm optimization algorithm is applied to search for adversarial examples in a discrete search space composed of all the word lists. This work addresses the lack of search space reduction methods and the inefficiency of optimization algorithms, significantly improving the success rate of adversarial attacks.", "id": "5e766cfb-bd34-4e06-a371-319b57a74027", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "b4a6e968-9839-41d5-a721-50141a19865b", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Adversarial Attacks for classification in Texts" ], [ "subsection", "Word-level Attacks" ], [ "subsubsection", "Other attacks" ] ], "subsections": [], "title": "Other attacks" }, { "cite_extract_rate": 1, "cites": [ 5854, 2470, 1632, 168, 4235, 7186, 5847 ], "content": "Compared with char-level and word-level attacks, the sentence-level attack is more flexible. The modified sentence can be inserted at the beginning, middle, or end of the text as long as the semantics and grammar are correct. To some extent, the sentence-level attack can be seen as a special kind of word-level attack manipulated by adding some ordered words. This kind of attack usually appears in other NLP tasks like natural language inference (NLI) , neural machine translation (NMT) , reading comprehension (RC) , and question answering (QA) . In text classification, this kind of attack is much less common than the others. 
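As a minimal illustration of this idea, a sentence-level attack can simply try a candidate distractor sentence at every insertion position and keep the first variant that flips the prediction. The classifier and distractor list below are toy assumptions, not a published method.

```python
# Hedged sketch of a sentence-level attack: try each distractor at the
# beginning, middle, and end of the text and keep the first insertion
# that changes the classifier's prediction.
def sentence_level_attack(sentences, distractors, predict):
    original = predict(sentences)
    for d in distractors:
        for pos in range(len(sentences) + 1):
            trial = sentences[:pos] + [d] + sentences[pos:]
            if predict(trial) != original:
                return trial
    return None  # no successful insertion found

# Toy classifier: majority vote over per-sentence polarity cues.
def predict(sents):
    pos = sum("good" in s for s in sents)
    neg = sum("bad" in s for s in sents)
    return "positive" if pos > neg else "negative"

adv = sentence_level_attack(["the plot is good"],
                            ["the acting is bad"], predict)
# adv == ["the acting is bad", "the plot is good"]
```

Practical attacks replace the brute-force distractor list with generated paraphrases or syntax-controlled rewrites, and add semantic and grammatical constraints on the inserted sentence.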
\n\tIyyer \\etal{} designed syntactically controlled paraphrase networks (SCPNS) for generating adversarial examples by grammar conversion, which relies on the encoder-decoder architecture of SCPNS. Given a sequence and a corresponding target syntax structure, the authors encode them by a bidirectional LSTM and decode them by LSTM. The decoder is augmented with soft attention over encoded states and the copy mechanism . They then modify the inputs to the decoder to incorporate the target syntax structure for the generation. The syntactically adversarial sentences can not only fool pre-trained models but also improve their robustness to syntactic variation. However, the measurement of paraphrase quality and grammaticality requires much human effort. Moreover, the structure of the sentence is changed, although the semantic difference is small.", "id": "b35348ef-663e-45ed-a6ed-04b953a814d0", "level": "subsection", "origin_cites_number": 7, "parent_id": "744ede91-1e59-47e5-9ed0-4ceac7757786", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Adversarial Attacks for classification in Texts" ], [ "subsection", "Sentence-level Attacks" ] ], "subsections": [], "title": "Sentence-level Attacks" }, { "cite_extract_rate": 0.625, "cites": [ 5833, 3103, 5844, 5846, 914 ], "content": "Multi-level attacks incorporate at least two of the three aforementioned attack types to create more imperceptible adversarial examples with higher success rates. Therefore, compared with single-level methods, multi-level attacks are computationally more expensive and more complicated.\n\tLiang \\etal{} utilized the FGSM to determine what, where, and how to insert, remove, and modify. They use the natural language watermarking technique to ensure that the generated adversarial examples do not compromise their utility. In the white-box scenario, they define hot training phrases and hot sample phrases by computing the cost gradients of inputs. 
The former sheds light on what to insert, and the latter implies where to insert, remove, and modify. In the black-box scenario, the hot training phrases and hot sample phrases are obtained through the fuzzing technique. When an input is fed to the target model, they use isometric whitespace to substitute the origin word each time. The difference between the results before and after modification is the deviation of each word. The larger it is, the more significant the corresponding word is to the classification. Hence, hot training phrases are the most frequent words in the set of inputs, which consist of the largest deviation words for each training sample. Hot sample phrases are the words with the largest deviation for every test sample.\n\tLike one pixel attack in the image domain, a similar method named HotFlip was proposed by Ebrahimi \\etal{}. HotFlip is a white-box attack in texts, which relies on an atomic flip operation to swap one character with another by gradient computation. Compared with DeepWordBug , the adversarial examples from HotFlip are more imperceptible due to the fewer modifications. The flip operation is represented by \n\t\\begin{equation} \\label{eq10}\n\t\\begin{split}\n\t\\vec{v}_{ijb} = &(\\vec{0},\\ldots;(\\vec{0},\\ldots(0,0,\\ldots,0,-1,0,\\ldots,1,0)_j,\\\\&\\ldots,\\vec{0})_i;\\vec{0},\\ldots)\n\t\\end{split}\n\t\\end{equation}\n\tThe formula \\eqref{eq10} means that the $j$-th character of $i$-th word in an example is changed from $a$ to $b$, which are both characters at $\\emph{a}$-th and $\\emph{b}$-th places in the alphabet. -1 and 1 are the corresponding positions for $a$ and $b$, respectively. The alteration from directional derivative along this vector is calculated to find the biggest growth in the loss $\\emph{J}(x, y)$. 
The procedure of calculation is shown in \n\t\\begin{equation} \\label{biggestincrease}\n\t\\max\\nabla_{x}J(x, y)^T\\cdot\\vec{v}_{ijb} = \\mathop{\\max}_{ijb}\\frac{\\partial J^{(b)}}{\\partial x_{ij}} - \\frac{\\partial J^{(a)}}{\\partial x_{ij}}\n\t\\end{equation}\n\twhere $x_{ij}$ is a one-hot vector, which denotes the $\\emph{j}$-th character of $\\emph{i}$-th word, $y$ refers to the corresponding label vector, $T$ is a transpose function. Apart from character-level attack, HotFlip could also be used on word-level by different modifications. Although HotFlip performs well, only a few successful adversarial examples are generated with one or two flips under strict constraints, thus it is not suitable for a large-scale experiment.\n\tLi \\etal{} proposed an attack framework TextBugger for generating adversarial examples, which could mislead the deep learning-based text understanding system in both black-box and white-box settings. Similar to DeepWordBug , TextBugger also searches for important words to modify. In the white-box scenario, Jacobian matrix $J$ is used to calculate the importance of each word.\n\t\\begin{equation}\n\tC_{x_i} = J_{F(i,y)} = \\frac{\\partial F_y(x)}{\\partial x_i}\n\t\\end{equation}\n\twhere $F_y(\\cdot)$ represents the confidence value of class $y$, $C_{x_i}$ is the important score of $i$-th word in $x$. Then, similar modification strategies like DeepWordBug are used to generate both character-level and word-level adversarial examples. In the black-box scenario, the authors segment documents into sequences, and then they query the target model to filter out sentences with different predicted labels from the original ones. 
The odd sequences are sorted in an inverse order according to their confidence scores calculated by the removal operation as\n\t\\begin{equation} \\label{removingmethod}\n\t\\begin{split}\n\tC_{x_i} = &F_y\\left(x_1,\\ldots,x_{i-1},x_i,x_{i+1},\\ldots,x_n\\right) \\\\& - F_y\\left(x_1,\\ldots,x_{i-1},x_{i+1},\\ldots,x_n\\right)\n\t\\end{split}\n\t\\end{equation}\n\tThe final modification process is the same as that in the white-box setting.\n\tCompared with Deep-fool and TextBugger , Vijayaraghavan \\etal{} applied reinforcement learning to generate adversarial examples in a black-box setting, following an encoder-decoder framework. They extract character and word information from inputs encoded to produce hidden representations of words. Then, an attention mechanism is applied to the decoder for identifying the most relevant text units that highly affect the predictions. During the decoding step, the perturbation vectors are added to those units, and the creations are optimized using target model predictions.", "id": "c7760775-486e-4856-bb89-ff5980a44113", "level": "subsection", "origin_cites_number": 8, "parent_id": "744ede91-1e59-47e5-9ed0-4ceac7757786", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Adversarial Attacks for classification in Texts" ], [ "subsection", "Multi-level Attacks" ] ], "subsections": [], "title": "Multi-level Attacks" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{adversarialexamplesonothertasks}\n\tIn section \\ref{adversarialattacksintext}, we have reviewed adversarial attacks for the text classification task. 
Next, we address some remaining questions about adversarial texts, namely which adversarial examples can attack other kinds of NLP systems or applications and how they are generated in these cases.", "id": "60350839-4653-460f-8134-0be4be772e64", "level": "section", "origin_cites_number": 0, "parent_id": "697307fb-5058-4940-99fb-a281f51f4034", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Adversarial Techniques for Other NLP Tasks" ] ], "subsections": [ "f683ab1e-0a81-471f-a050-f908dd0f2567", "4ba8a06d-f447-476f-bfe0-ec4eccddab8a", "859d59d9-5e21-4fb3-a3a7-1fd9c29ea03a", "f0afc92f-597d-4c92-8fd2-8578028934b7", "b1d84ee1-0f72-4dee-9d20-df6037c76035" ], "title": "Adversarial Techniques for Other NLP Tasks" }, { "cite_extract_rate": 0.5, "cites": [ 4235 ], "content": "\\label{RCS}\n\tIn the reading comprehension task, the machine answers a query after reading a given context. As a constraint, the answer to the query must be a span (\\ie{}, several consecutive words) that can be found in the original context. Attackers usually modify the content through sentence-level attacks and induce the system to produce a different answer to the query.\n\tTo explore whether reading comprehension systems are vulnerable to adversarial examples, Jia \\etal{} inserted sentence-level adversarial perturbations into paragraphs to test the systems without changing the answers or misleading humans. They extract nouns and adjectives in the question and replace them with antonyms. Meanwhile, named entities and numbers are changed to the nearest word in GloVe embedding space. The modified question is transformed into a declarative sentence as the adversarial example, which is then concatenated to the end of the original paragraph. This process is called ADDSENT by the authors. Another variant, ADDANY, randomly chooses the words used to craft the distracting sentence. 
Compared with ADDSENT, ADDANY does not consider the grammaticality of sentences, and it needs to query the model several times. This work's core idea is to draw the models' attention to the generated sequences rather than the original sequences to produce incorrect answers. \n\tCurrently, there is no good defense method to resist this kind of attack. Analyzing the coherence of contextual semantics may be helpful for detection.", "id": "f683ab1e-0a81-471f-a050-f908dd0f2567", "level": "subsection", "origin_cites_number": 2, "parent_id": "60350839-4653-460f-8134-0be4be772e64", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Adversarial Techniques for Other NLP Tasks" ], [ "subsection", "Attack on Reading Comprehension Systems" ] ], "subsections": [], "title": "Attack on Reading Comprehension Systems" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 5851, 5843, 5829 ], "content": "\\label{NLI}\n\tNatural language inference mainly judges the semantic relationship between two sentences (\\ie{}, premise and hypothesis) or two words. To some extent, it can be regarded as a classification task that requires the model to focus on semantic understanding.\n\tLi \\etal{} designed a word-level attack to affect the model's inference of entity relationships. The method follows the same two-step pipeline of importance-based attacks, and the importance of each word $I_{w_i}$ is calculated as \n\t\\begin{eqnarray} \\label{nNaturaLanguageInf}\n\tI_{w_i}=o_y(S)-o_y(S_{\\backslash w_i})\n\t\\end{eqnarray} \n\twhere $o_y(S)$ denotes the logit output by the target model for the correct label $y$, and $S_{\\backslash w_i}$ represents $S$ with the word $w_i$ removed. Unlike synonym or similar-word substitution in the embedding space , a BERT is used for the word replacement, ensuring the semantic similarity and grammatical correctness of generated adversarial examples. 
Considering the word segmentation of a text, the authors divide the substitution into two parts. If an important word is a single word after segmentation, it is iteratively replaced by the candidates calculated via BERT. Otherwise, the phrase containing an important word is iteratively replaced by the candidates, which are phrases as well. \n\tMinervini \\etal{} cast the generation of adversarial examples as an optimization problem and proposed a novel multi-level attack. The authors maximize the proposed inconsistency loss $J_{I}$ to search for substitution sets $S$ (\\textit{i.e.}, adversarial examples) by using a language model as \n\t\\begin{equation}\n\t\\begin{split}\n\t\\mathop{maximize}\\limits_{S} J_{I}(S) = &\\left[p(S;body)-p(S;head)\\right]_{+}, \\\\&s.t. \\log p_{L}(S)\\leq\\tau\n\t\\end{split}\n\t\\end{equation}\n\twhere $[x]_{+}=\\max(0,x)$. $p_{L}(S)$ refers to the probability of the sentences in $S$.\n\t\\begin{itemize}\n\t\t\\item $\\tau$: a threshold on the perplexity of generated sequences\n\t\t\\item $\\lbrace{X_{1},\\ldots,X_{n}}\\rbrace$: the set of universally quantified variables in a rule\n\t\t\\item $S= \\lbrace{X_{1}\\to s_{1},\\ldots,X_{n}\\to s_{n}}\\rbrace$: a mapping from $\\lbrace{X_{1},\\ldots,X_{n}}\\rbrace$ to sequences\n\t\t\\item $p(S; body)$ and $p(S; head)$: probability of the given rule, after replacing $X_{i}$ with the corresponding sentence $s_{i}$\n\t\t\\item $body$ and $head$: represent the premise and the conclusion of the NLI rules\n\t\\end{itemize}\n\tHowever, the generated adversarial examples may have different semantics from the originals, because the semantic changes between the modified and original words are ignored.", "id": "4ba8a06d-f447-476f-bfe0-ec4eccddab8a", "level": "subsection", "origin_cites_number": 5, "parent_id": "60350839-4653-460f-8134-0be4be772e64", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Adversarial Techniques for Other NLP Tasks" ], [ 
"subsection", "Attack on Natural Language Inference Models" ] ], "subsections": [], "title": "Attack on Natural Language Inference Models" }, { "cite_extract_rate": 1, "cites": [ 5854, 7217, 1003, 4846, 5857, 5856, 5855, 5845, 1096 ], "content": "\\label{NMT}\n\tMachine translation means that the machine can automatically map one language to another. Attackers slightly modify the content of an input language, resulting in the failure to obtain the expected translation result.\n\tBelinkov \\etal{} conducted a char-level black-box attack to explore the vulnerability of three different neural machine translation models . The authors devise adversarial examples depending on natural and synthetic language errors, including typos, misspellings, etc. Although these generations can easily fool three different models, they are also visible due to grammatical problems. Apart from the black-box attack, Ebrahimi \\etal{} and Cheng \\etal{} generated adversarial examples with gradient optimization in the white-box settings. Compared with Belinkov \\etal{}, Ebrahimi \\etal{} have demonstrated that char-level adversarial examples in black-box attacks are much weaker than white-box ones in most cases, and the generations in the word-level from Cheng \\etal{} are more fluent and better. \n\tZou \\etal{} generated adversarial examples in the word-level via a new paradigm based on reinforcement learning. They construct a generator based on the generative adversarial networks (GANs) to create adversarial examples as follows. The environment (\\textit{i.e.,} discriminator and victim NMT model) receives the tokens of current sentences into the agent to select suitable candidates for replacing the target tokens. The candidates of each token are collected through the victim NMT model within Euclidean distance. After modification, the discriminator receives these modified sentences and returns the agent a surviving feedback signal. 
The process is repeated until the termination signal is received. Unlike previous works, this work balances semantic approximation and attack effectiveness through self-supervision, achieving good results on both.", "id": "859d59d9-5e21-4fb3-a3a7-1fd9c29ea03a", "level": "subsection", "origin_cites_number": 9, "parent_id": "60350839-4653-460f-8134-0be4be772e64", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Adversarial Techniques for Other NLP Tasks" ], [ "subsection", "Attack on Machine Translation Models" ] ], "subsections": [], "title": "Attack on Machine Translation Models" }, { "cite_extract_rate": 0.5714285714285711, "cites": [ 5840, 890, 8012, 4235 ], "content": "Inspired by the model's sensitivity to semantically similar questions, Gan \\etal{} generated diverse paraphrased questions at the sentence level, guiding target models to produce different answers. They obtain all n-grams (up to 6-grams) from the source questions via a language model, and the stopwords in them are removed. Then, they search the paraphrase database for paraphrases of the remaining n-grams. As a constraint, the equivalence score between the substitutions and the n-grams needs to be greater than 0.25. After paraphrase generation, a pre-trained model is applied to filter the generated questions with a score greater than 0.95. Compared with adversarial attacks on reading comprehension systems, this work modifies the question rather than the declarative content, as shown in Figure \\ref{mrcandqaaaa}. Tan \\etal{} also guided the question and answer model to produce incorrect answers by perturbing the words in the questions. However, the generations are likely to have grammatical errors and are easily detected.\n\tTo deal with the non-differentiable and discrete attributes of texts, Wang \\etal{} proposed a tree-based autoencoder to transfer the discrete text into a continuous representation space for creating adversarial perturbations. 
They firstly select the adversarial seed (\\textit{i.e.,} the input sentence) transferred into a continuous embedding. Next, the optimization similar to C\\&W attack is conducted upon the embedding to search for perturbations. Finally, the modified embedding is decoded back to adversarial examples with semantic similarity. In the targeted attack for question and answer models, the declarative sentence obtained by reconstructing the target answer and question is treated as an adversarial seed. The generated adversarial examples are inserted into the end of the paragraph like Jia \\etal{}. \n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\subfigure[attack on RC]{\n\t\t\t\\begin{minipage}[t]{0.5\\linewidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics[width=\\linewidth]{QAandRC_a.pdf}\n\t\t\t\\end{minipage}\n\t\t}\n\t\t\\subfigure[attack on QA]{\n\t\t\t\\begin{minipage}[t]{0.5\\linewidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics[width=\\linewidth]{QAandRC_b.pdf}\n\t\t\t\\end{minipage}\n\t\t}\n\t\t\\centering\n\t\t\\caption{Adversarial attacks on the machine reading comprehension and question and answer systems. 
(a) is from Jia \\etal{}, and (b) is the instance in Gan \\etal{} .}\n\t\t\\label{mrcandqaaaa}\n\t\\end{figure}", "id": "f0afc92f-597d-4c92-8fd2-8578028934b7", "level": "subsection", "origin_cites_number": 7, "parent_id": "60350839-4653-460f-8134-0be4be772e64", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Adversarial Techniques for Other NLP Tasks" ], [ "subsection", "Attack on Question and Answer Systems" ] ], "subsections": [], "title": "Attack on Question and Answer Systems" }, { "cite_extract_rate": 0.7234042553191491, "cites": [ 8419, 1171, 5865, 5848, 5855, 5870, 9133, 5844, 2470, 790, 8385, 168, 4036, 1632, 38, 5833, 8012, 5853, 4235, 2058, 7586, 912, 1113, 5868, 673, 303, 5866, 5854, 5860, 1348, 5850, 7796, 5839, 5862, 5861, 5846, 5869, 919, 5849, 5843, 1901, 8418, 5845, 5857, 1139, 5867, 439, 5842, 5851, 5830, 2565, 1173, 5856, 862, 8959, 4846, 5847, 5859, 5829, 5840, 3103, 923, 1096, 7, 5858, 5863, 5841, 5864 ], "content": "\\begin{table*}[!t]\n\t\t\\centering\n\t\t\\caption{Summary of existing adversarial attacks. We mainly show the category, time, work, targeted/non-targeted, black/white, model, data, task, gradient related or not, and project url. For each of them, the papers are sorted by category and time. 
T/N is short for the targeted/non-targeted attack, W/B is short for the white/black-box attack, C is short for text classification, RC is short for reading comprehension, and TS is short for text summarization.} \n\t\t\\label{tabpaper_information}\n\t\t\\begin{adjustbox}{width=\\linewidth,center}\n\t\t\t\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}\n\t\t\t\t\\hline\n\t\t\t\t\\multirow{3}{*}{Category} & \\multirow{3}{*}{Time} & \\multirow{3}{*}{Work} & \\multirow{3}{*}{\\tabincell{c}{Targeted/ \\\\ Non-targeted}} & \\multirow{3}{*}{\\tabincell{c}{White/ \\\\ Black}} & \\multirow{3}{*}{Model} & \\multirow{3}{*}{Data} & \\multirow{3}{*}{task} & \\multirow{3}{*}{Gradient} & \\multirow{3}{*}{Project URL} \\\\\n\t\t\t\t& & & & & & & & & \\tabularnewline \n\t\t\t\t& & & & & & & & & \\tabularnewline\n\t\t\t\t\\hline \n\t\t\t\t\\hline\n\t\t\t\t\\multirow{5}{*}{\\tabincell{c}{char}} & 2018.1.26 & Gao & N & B & LSTM & \\tabincell{c}{Enron Spam Dataset \\\\ IMDB} & C & N & \\url{https://github.com/QData/deepWordBug} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2018.4.30 & Belinkov & N & B & \\tabincell{c}{char-CNN \\\\ Nematus \\\\ char2char } & \\tabincell{c}{WCPC , RWSE\\footnotemark[9] \\\\ MERLIN , MAE } & NMT & N & \\url{https://github.com/ybisk/charNMT-noise}\t\\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2018.8.20 & Ebrahimi & T & binary & char-CNN & TED & NMT & Y & \\url{https://github.com/jebivid/adversarial-nmt} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2019.6.2 & Gil & N & B & GRU & Toxic Comment\\footnotemark[10] & C & N & \\url{https://github.com/orgoro/white-2-black}\n\t\t\t\t\\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2020.4.7 & Wang & binary & W & BERT & \\tabincell{c}{THUCNews\\footnotemark[11] \\\\ Wechat Finance Dataset} & C & Y & — \\tabularnewline\n\t\t\t\t\\hline\n\t\t\t\t\\multirow{21}{*}{\\tabincell{c}{word}} & 2016.11.2 & Papernot & binary\t& W & LSTM & IMDB & C & Y & — \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2017.4.9 & 
Samanta & N & W & CNN & IMDB,twitter & C & Y\t& — \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2018.7.13 & Sato & N & W & FFNN, LSTM & \\tabincell{c}{IMDB,RCV1 \\\\ Elec\\footnotemark[12],MR \\\\ Dbpedia} & C & Y & \\url{https://github.com/aonotas/interpretable-adv} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2018.7.15 & Mudrakarta & T & W & \\tabincell{c}{LSTM, NP \\\\ QANet}\t& \\tabincell{c}{VQA 1.0 , SQuAD \\\\ WikiTableQuestions} & RC,QA & Y & \\url{ https://github.com/pramodkaushik/acl18\\_results} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2018.7.15 & Glockner & N & B & \\tabincell{c}{bi-LSTM,ESIM \\\\ DAM} & SNLI & NLI & N & \\url{https://github.com/BIU-NLP/Breaking\\_NLI} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2018.10.31 & Alzantot & T & B & LSTM,RNN & IMDB,SNLI & C & N & \\url{https://github.com/nesl/nlp\\_adversarial\\_examples} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2019.5.12 & Behjati & binary & W & LSTM & AG’s news,SST & C & Y & — \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2019.7.28 & Ren & N & B & char-CNN, LSTM & \\tabincell{c}{AG's news,IMDB \\\\ Yahoo! 
Answers} & C & N & \\url{https://github.com/JHL-HUST/PWWS/} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2019.7.28 & Hsieh & binary & binary & \\tabincell{c}{LSTM,BERT \\\\ Transformer} & Yelp,MultiNLI,WMT15\\footnotemark[2] & C,NMT & N & — \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2019.7.28 & Cheng & T & W & Transformer & LDC corpus, WMT14 & NMT & Y & — \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2019.7.28 & Zhang & T & binary & bi-LSTM,BiDAF & IMDB,SNLI\t& C,NLI\t& Y\t& \\url{https://github.com/LC-John/Metropolis-Hastings-Attacker} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2020.2.7\t& Jin\t& N\t& B\t& CNN,LSTM,BERT\t& \\tabincell{c}{AG’s news,IMDB,Fake\\footnotemark[13] \\\\ Yelp,MR,SNLI,MultiNLI} & C,NLI & N & \t\\url{https://github.com/jind11/TextFooler} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2020.2.7 & Cheng & binary & W & seq2seq & \\tabincell{c}{DUC2003, DUC2004 \\\\ Gigaword, WMT15} & NMT,TS & Y & \\url{https://github.com/cmhcbb/Seq2Sick} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2020.3 & Yang & N & B & CNN,LSTM & IMDB,Yahoo! 
Answers & C & N\t& — \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2020.7.5 & Zang\t& T\t& B\t& LSTM, BERT\t& IMDB,SST,SNLI\t& C,NLI\t& N\t& \\url{https://github.com/thunlp/SememePSO-Attack} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2020.7.5\t& Zou & N & W & \\tabincell{c}{RNN-Search \\\\ Transformer} & WMT14 & NMT\t& N & — \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2020.7.5 & Tan & N & B & \\tabincell{c}{BERT, BiDAF \\\\ Transformer \\\\ Seq2Seq } & SQuAD, WMT14 & QA, NMT & N &\t\\url{https://github.com/salesforce/morpheus} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2020.7.5 & Zheng & binary & binary & parser & English Penn Treebank & C & Y & \t\\url{https://github.com/zjiehang/DPAttack} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2020.9.7 & Garg\t& N\t& B\t& CNN,LSTM,BERT\t& \\tabincell{c}{Amazon,Yelp,IMDB \\\\ MR,MPQA\\footnotemark[14] \\\\ SUBJ,TREC\\footnotemark[15] }\t& C & N\t& — \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2020.9.7 & Li & N & B & BERT & \\tabincell{c}{Yelp, IMDB, AG’s news \\\\ SNLI, MultiNLI} & C,NLI & N & \\url{https://github.com/LinyangLee/BERT-Attack} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2021.2.9 & Maheshwary & N & B & \\tabincell{c}{CNN,LSTM,BERT \\\\ ESIM,InferSent} & \\tabincell{c}{AG's news, MR, Yelp \\\\ Yahoo Answers, IMDB \\\\ SNLI, MultiNLI} & C,NLI & N & — \\tabularnewline\n\t\t\t\t\\hline\n\t\t\t\t\\multirow{7}{*}{\\tabincell{c}{sentence}} & 2017.9.7 & Jia & T & B & \\tabincell{c}{Match-LSTM \\\\ BiDAF } & SQuAD & RC & N & \\url{https://github.com/robinjia/adversarial-squad} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2018.4.30 & Zhao & N & B & LSTM,Google Translate & SNLI & NLI,NMT & N & \\url{https://github.com/zhengliz/natural-adversary} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2018.7.15 & Ribeiro & N & B & \\tabincell{c}{Visual7W \\\\ fastText} & Visual7W data,MR,IMDB & QA,C & N &\t\\url{https://github.com/marcotcr/sears} 
\\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2018.11.16 & Iyyer & N & B & LSTM & SST,SICK\t& C\t& N\t& \\url{https://github.com/miyyer/scpn} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2018.11.16 & Wang & T & B & BSAE & SQuAD & RC & N & — \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2019.7 & Wallace & T & W & RNN,IR & Quizbowl questions & QA & N & \\url{https://github.com/Eric-Wallace/trickme-interface/} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2019.7.28 & Gan & T & B & \\tabincell{c}{BERT , DrQA \\\\ BiDAF } & SQuAD & QA & N & — \\tabularnewline\n\t\t\t\t\\hline\n\t\t\t\t\\multirow{11}{*}{\\tabincell{c}{multi}} & 2018.7.13\t& Liang & T & binary & char-CNN & \\tabincell{c}{Dbpedia, MR, MPQA \\\\ Customer review\\footnotemark[16]} & C & Y & — \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2018.7.15 & Ebrahimi & N & W & char-CNN,LSTM & AG’s news & C & Y & \\url{https://github.com/AnyiRao/WordAdver} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2018.10.31 & Minervini\t& N\t& W\t& \\tabincell{c}{DAM , ESIM \\\\ bi-LSTM} & SNLI, MultiNLI &\tNLI\t& Y\t& \\url{https://github.com/uclnlp/adversarial-nli} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2018.10.31 & Blohm & N & binary & CNN,LSTM & MovieQA\\footnotemark[17] & QA & N & \\url{https://github.com/DigitalPhonetics/reading-comprehension} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2018.10.31 & Niu & N & B & VHRED,RL & \\tabincell{c}{Dialogue Corpus \\\\ CoCoA} & Dialogue & N & \\url{https://github.com/WolfNiu/AdversarialDialogue} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2019.2.24\t& Li & N & binary & CNN,LSTM & IMDB,MR & C & Y & — \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2019.6.2 & Zhang & N & B & \\tabincell{c}{BOW,LSTM,BERT \\\\ ESIM.DecAtt \\\\ DIIN} & \\tabincell{c}{Quora Question Pairs \\\\ Wikipedia\\footnotemark[18]} & C & N & \\url{https://g.co/dataset/paws} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2019.9.20\t& Vijayaraghavan & N & 
B & CNN & AG’s news, IMDB & C & N & — \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2019.11.4\t& Wallace & T & W & \\tabincell{c}{Bi-LSTM,ESIM \\\\ DAM,BiDAF} & SST,SNLI,SQuAD & C,NLI,RC & Y & \\url{https://github.com/Eric-Wallace/universal-triggers} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2020.9.7 & Wang & T & W & \\tabincell{c}{BERT, Transformer \\\\ BiDAF} & Yelp, SQuAD & C,QA & Y & \\url{https://github.com/AI-secure/T3} \\tabularnewline\n\t\t\t\t\\cline{2-10}\n\t\t\t\t& 2020.12.29 & Li & N & B & BERT & \\tabincell{c}{Sogou,IflyTek \\\\ Weibo,Law34\\footnotemark[19]} & C & N & — \\tabularnewline\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t\\end{adjustbox}\n\t\\end{table*}\n\tAdversarial attack methods have developed rapidly in recent years. Across the four categories (\\ie{}, char, word, sentence, and multi-level), high-quality adversarial texts are becoming more difficult for the human eye to detect. On the other hand, the diversified legitimate and generated texts also promote the development of adversarial attacks and defenses. Therefore, there is still much room for improvement in generating adversarial examples, in aspects such as transferability and real-world deployment. \n\tTo demonstrate the generation methods of adversarial texts and their corresponding attributes in detail, we build Table \\ref{tabpaper_information} and Table \\ref{instance}. Through the two tables, we summarize and analyze promising trends in generation methods.\n\t\\textbf{Possible problems with white-box attacks.} Table \\ref{tabpaper_information} summarizes the existing adversarial attacks in texts. We observe that the majority of white-box attacks in Table \\ref{tabpaper_information} rely on gradient optimization for the attack. Gradient-based methods are widely used in the image domain with many variants , which can also be applied to texts. 
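As a rough illustration of the gradient use just mentioned, the plain-Python sketch below applies an FGSM-style step to a word embedding under a toy linear scorer. This is a hedged illustration only, not any cited paper's method: the weights and embedding are made up, and real text attacks must additionally project the perturbed embedding back onto a discrete token.

```python
# Illustrative-only sketch: one gradient-sign step on a word embedding
# in the spirit of gradient-based attacks. W and e are toy values.
W = [0.7, -1.2, 0.4, 0.1]                 # toy linear scorer weights
score = lambda e: sum(w * x for w, x in zip(W, e))

e = [0.2, 0.5, -0.3, 1.0]                 # embedding of one word
grad = W                                  # d(score)/d(e) for a linear scorer
sign = lambda v: 1.0 if v > 0 else -1.0
e_adv = [x - 0.5 * sign(g) for x, g in zip(e, grad)]  # step that lowers the score

assert score(e_adv) < score(e)            # target score strictly drops
```

For a nonzero weight vector, this step lowers the score by exactly half the sum of the absolute weights, which is why the assertion always holds for a linear scorer.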
However, there are some shortcomings in using gradients, such as vanishing and exploding gradient problems and the limited access to target models. In addition, gradient masking can render the gradients useless in some cases, causing gradient-based methods to fail. \n\t\\footnotetext[9]{\\url{https://www.informatik.tu-darmstadt.de/ukp/research\\_6/data/index.en.jsp}}\n\t\\footnotetext[10]{\\url{https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/}}\n\t\\footnotetext[11]{\\url{https://github.com/thunlp/THUCTC}}\n\t\\textbf{Black-box attacks in the real world.} Table \\ref{tabpaper_information} also shows that the current focus of research is on more realistic black-box attacks. In practice, it is difficult for adversaries to obtain full knowledge of target models, so they do not know what dataset and model the defenders use. In this case, a black-box attack that can effectively deceive systems deployed in the physical world is desired. However, existing black-box attacks do not meet this requirement. We created 1,000 adversarial examples via various black-box attacks to evaluate their performance on a deployed classification system, ParallelDots, but only about 9\\% of the samples could successfully fool the system. Although these adversarial examples are not specially designed for ParallelDots, the results reflect the insufficient transferability of adversarial examples in real applications. 
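The fooling-rate evaluation described above can be sketched as follows. The keyword classifier is a toy stand-in for a deployed hard-label service (it is an assumption for illustration, not the ParallelDots API), and the fooling rate simply counts label flips over a set of adversarial examples.

```python
# Toy black-box evaluation: query a hard-label classifier with
# adversarial examples and report the fraction that flip the label.
def black_box(text):
    """Toy hard-label sentiment classifier (stand-in for a remote service)."""
    return 1 if "great" in text.lower() or "special" in text.lower() else 0

def fooling_rate(pairs):
    """Fraction of (adversarial_text, original_label) pairs that flip the label."""
    fooled = sum(black_box(adv) != label for adv, label in pairs)
    return fooled / len(pairs)

pairs = [("a grea+ film", 1),   # char-level perturbation: label flips
         ("a great film", 1)]   # unperturbed control: label unchanged
assert fooling_rate(pairs) == 0.5
```

Note that the classifier here returns only a hard label; attacks that need a confidence score would have to be adapted, as discussed next.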
On the other hand, if a model's output is the hard label rather than a score (\\textit{e.g.,} the output of Figure \\ref{isnatncefigure}(a) is 1 rather than 64.30\\%), the black-box attacks such as importance-based ones need to be improved and adapted to new situations.\n\t\\footnotetext[12]{\\url{http://riejohnson.com/cnn_data.html}}\n\t\\footnotetext[13]{\\url{https://www.kaggle.com/c/fake-news/data}}\n\t\\footnotetext[14]{\\url{http://mpqa.cs.pitt.edu/}}\n\t\\footnotetext[15]{\\url{https://cogcomp.seas.upenn.edu/Data/QA/QC/}}\n\t\\textbf{Evaluation by human eyes.} Furthermore, outstanding adversarial texts not only achieve a high success rate to fool DNNs, but also need to have good readability, semantic similarity, and imperceptibility. Hence, we can also evaluate generated adversarial examples through instances (in Table \\ref{instance}). Modifications on texts are generally divided into char-level, word-level, sentence-level, and multi-level. The char-level operates on the characters, and others modify words or sentences. In Table \\ref{instance}, the word-level adversarial examples seem more imperceptible than the char-level ones, although people are robust against misspellings . Nevertheless, some char-level methods also perform very well such as HotFlip . Generally, the more operations there are, the easier it is to be perceived. The more imperceptible the perturbations are, the better the readability and semantic similarity would be.\n\t\\begin{table*}[t]\n\t\t\\centering\n\t\t\\caption{Instances of some adversarial attacks. The content in parentheses is the modified word or sentence. After modification, the output of the model changes from one category to another. 
NNR is short for nearest neighbor replacement, ST is short for synonym substitution, GC is short for Grammar conversion.}\n\t\t\\label{instance}\n\t\t\\begin{tabular}{|p{1cm}<{\\centering}|p{1.7cm}<{\\centering}|p{12.5cm}|p{1.1cm}<{\\centering}|}\n\t\t\t\\hline\n\t\t\tCategory & Work & \\multicolumn{1}{c|}{Instance} & Operation \\\\\n\t\t\t\\hline\n\t\t\t\\multirow{1}{*}{char} & Gao & \\tabincell{l}{This film has a special place (\\textbf{plcae}) in my heart (\\textbf{herat}). \\quad \\textbf{positive $\\rightarrow$ negative}} & swap \\\\\n\t\t\t\\hline\n\t\t\t\\multirow{3}{*}{word} & Alzantot & \\tabincell{l}{A runner (\\textbf{racer}) wants to head for the finish line. \\quad \\textbf{86\\%Entailment$\\rightarrow$43\\%Contradiction}} & NNR \\\\ \n\t\t\t\\cline{2-4}\n\t\t\t& Ren & \\tabincell{l}{seoul allies calm on nuclear (\\textbf{atomic}) shock. south korea’s key allies play down a shock admission \\\\ its scientists experimented to enrich uranium. \\quad \\textbf{74.25\\%Sci/Tech$\\rightarrow$86.66\\%World}} & ST \\\\\n\t\t\t\\cline{2-4}\n\t\t\t& Garg & \\tabincell{l}{Our server was great (\\textbf{enough}) and we had perfect service (\\textbf{but}). \\quad \\textbf{positive $\\rightarrow$ negative}} & insert \\\\\n\t\t\t\\hline\n\t\t\t\\multirow{1}{*}{sentence} & Iyyer & \\tabincell{l}{there is no pleasure in watching a child suffer. (\\textbf{in watching the child suffer, there is no pleasure.}) \\\\ \\textbf{negative $\\rightarrow$ positive}} & GC\\\\\n\t\t\t\\hline\n\t\t\t\\multirow{1}{*}{multi} & Liang & \\tabincell{l}{The Old Harbor Reservation Parkways are three \\sout{historic} roads in the Old Harbor area of Boston.\\\\ (\\textbf{Some exhibitions of Navy aircrafts were held here.}) They are part of the Boston parkway system \\\\ designed by Frederick Law Olmsted. They include all of William J. Day Boulevard running from \\\\ Castle (\\textbf{Cast1e}) Island to Kosciuszko Circle along Pleasure Bay and the Old Harbor shore. 
The part \\\\ of Columbia Road from its northeastern end at Farragut Road west to Pacuska Circle (formerly \\\\ called Preble Circle). \\textbf{87.3\\%Building$\\rightarrow$95.7\\%Means of Transportation}} & \\tabincell{c}{Insert \\\\ delete \\\\ flip} \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{table*}", "id": "b1d84ee1-0f72-4dee-9d20-df6037c76035", "level": "subsection", "origin_cites_number": 94, "parent_id": "60350839-4653-460f-8134-0be4be772e64", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Adversarial Techniques for Other NLP Tasks" ], [ "subsection", "Summary of Adversarial Attacks" ] ], "subsections": [], "title": "Summary of Adversarial Attacks" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{defense}\n\tThe constant arms race between adversarial attacks and defenses quickly invalidates outdated methods . Advanced adversarial attacks will inevitably be developed, requiring effective defense methods to fight against the threats. In this section, we describe the effective methods proposed against adversarial attacks in texts, which can be divided into adversarial example detection and model enhancement. 
The former directly detects adversarial perturbations, while the latter aims at enhancing the robustness of models.\n\t\\footnotetext[16]{\\url{https://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html}}\n\t\\footnotetext[17]{\\url{http://movieqa.cs.toronto.edu/leaderboard/}}\n\t\\footnotetext[18]{\\url{https://dumps.wikimedia.org/}}\n\t\\footnotetext[19]{\\url{https://github.com/LinyangLee/CN-TC-datasets}}", "id": "1804625a-81a9-4a52-8749-90a4382611c3", "level": "section", "origin_cites_number": 1, "parent_id": "697307fb-5058-4940-99fb-a281f51f4034", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Defenses Against Adversarial Attacks in Texts" ] ], "subsections": [ "0bb687e6-9e9d-44aa-8ea1-968b7a310601", "063d028b-08cd-40f0-9602-c426e8ef4b67", "c8ca8542-4edc-4b76-a92d-61d84ef18eb0" ], "title": "Defenses Against Adversarial Attacks in Texts" }, { "cite_extract_rate": 0.75, "cites": [ 4078, 3103, 5871 ], "content": "For defense, it is natural to consider whether an adversarial example can be directly identified based on the difference between adversarial examples and legitimate texts. In char-level attacks, the majority of modified words are misspellings due to the operations. Similarly, these words may also be unknown words in some word-level attacks (\\eg{}, the word substitution attack). This naturally suggests detecting adversarial examples by checking misspellings and unknown words. Unknown words are low-frequency or unseen words in the pre-trained model's vocabulary. Misspellings can also be treated as unknown words, as they are out of the vocabulary.\n\tPruthi \\etal{} built a word recognition model in front of the downstream classifier to distinguish adversarial examples at the char level. The recognition model treats misspellings as unknown words. Three feedback mechanisms are applied to deal with these words. (1) The model passes the word through, regardless of whether it is adversarial. 
(2) Neutral words like `a' or `the' are used for replacement. (3) The word recognition model is retrained with a larger and less-specialized corpus. This work outperforms the general spelling check and adversarial training. Besides, Li \\etal{} applied a context-aware spelling check service to detect misspellings. However, experimental results show that the detection is effective on char-level modifications and only partly useful against word-level attacks. The spelling check method is also not suitable for adversarial examples based on other languages like Chinese .\n\tFurthermore, Zhou \\etal{} introduced three components (\\ie{}, discriminator, estimator, and recovery) to the model, which was used to discriminate both char-level and word-level perturbations. The components are trained with the original corpus in the training phase. When new text is fed to the detector, the discriminator classifies each token representation as perturbed or not. If a token is marked as adversarial, the estimator generates an approximate embedding vector to replace the token representation for recovery. The detector relies heavily on the original corpus, so it fails on adversarial examples from other corpora. Meanwhile, the rationale for judging whether a token is perturbed from its neighboring tokens remains unclear.", "id": "0bb687e6-9e9d-44aa-8ea1-968b7a310601", "level": "subsection", "origin_cites_number": 4, "parent_id": "1804625a-81a9-4a52-8749-90a4382611c3", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Defenses Against Adversarial Attacks in Texts" ], [ "subsection", "Detecting Misspellings and Unknown Words" ] ], "subsections": [], "title": "Detecting Misspellings and Unknown Words" }, { "cite_extract_rate": 1, "cites": [ 5872, 943, 5494 ], "content": "The direct detection of inputs will fail when faced with some word-level and sentence-level attacks that involve no unknown words. 
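The unknown-word check underlying the detectors above can be sketched as follows. The vocabulary here is built from a toy corpus rather than a real pre-trained model's word list, and the misspelled tokens mimic char-level swaps; this is an illustration of the idea, not any cited detector's implementation.

```python
# Minimal out-of-vocabulary detection: tokens absent from a
# frequency-based vocabulary are flagged as potential perturbations.
from collections import Counter

def build_vocab(corpus, min_freq=1):
    """Vocabulary of tokens appearing at least min_freq times."""
    counts = Counter(tok for line in corpus for tok in line.lower().split())
    return {tok for tok, c in counts.items() if c >= min_freq}

def flag_unknown(text, vocab):
    """Return tokens not covered by the vocabulary."""
    return [tok for tok in text.lower().split() if tok not in vocab]

vocab = build_vocab(["this film has a special place in my heart"])
adv = "this film has a special plcae in my herat"   # two char swaps
assert flag_unknown(adv, vocab) == ["plcae", "herat"]
```

A real detector would then correct or replace the flagged tokens (or retrain on a broader corpus), as the feedback mechanisms above describe.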
Therefore, researchers defend against adversarial attacks by ensuring the security of models. Mainstream model enhancement methods include changing the model architecture or updating the model parameters, such as adversarial training , certified robustness training , and functional enhancement .", "id": "063d028b-08cd-40f0-9602-c426e8ef4b67", "level": "subsection", "origin_cites_number": 3, "parent_id": "1804625a-81a9-4a52-8749-90a4382611c3", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Defenses Against Adversarial Attacks in Texts" ], [ "subsection", "Model Enhancement" ] ], "subsections": [ "bec8e5d9-ca56-4850-8db6-a3501aae190e", "f6552377-06ba-48b5-91a6-4f62f8128dab", "9219db05-661b-4254-be49-5062c0611b8c" ], "title": "Model Enhancement" }, { "cite_extract_rate": 0.6923076923076921, "cites": [ 5873, 8013, 7217, 1003, 917, 4235, 892, 5874, 5839 ], "content": "\\label{Adversarialrainin}\n\tIn texts, adversarial training and its variants are widely applied to defend against adversarial examples. Researchers mix adversarial examples with the original ones for retraining, improving the models' tolerance to adversarial examples. \n\tWang \\etal{} applied data augmentation to create diverse samples for adversarial training. Compared with the AddSent-trained model , abundant data and semantic relations can overcome model overstability and increase model robustness. Similarly, Wang \\etal{} and Wang \\etal{} extracted synonyms for the training data and replaced the original words to construct a larger dataset for retraining. Nevertheless, these methods are specially designed for the synonym substitution attack and may fail against other kinds of adversarial attacks, such as char-level ones.\n\tThe aforementioned methods are traditional adversarial training, which directly constructs the training dataset from all the adversarial and legitimate examples. 
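A minimal sketch of this kind of synonym-substitution augmentation is given below; the synonym table is a toy stand-in for the lexical resources the cited works actually use, and the function simply enlarges the training set with label-preserving variants.

```python
# Toy synonym-substitution data augmentation for adversarial training:
# each word with a known synonym yields one extra (text, label) variant.
SYNONYMS = {"great": ["excellent", "fine"], "bad": ["poor"]}

def augment(sentence, label):
    """Return the original example plus one-word synonym variants."""
    tokens = sentence.split()
    variants = [(sentence, label)]            # keep the original example
    for i, tok in enumerate(tokens):
        for syn in SYNONYMS.get(tok.lower(), []):
            variants.append((" ".join(tokens[:i] + [syn] + tokens[i + 1:]), label))
    return variants

train = [("the film is great", 1)]
augmented = [v for x, y in train for v in augment(x, y)]
assert len(augmented) == 3                    # original + two synonym variants
```

The augmented set is then used for ordinary retraining; as noted above, this protects mainly against the substitutions the synonym table covers.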
Yet, researchers have demonstrated that traditional adversarial training is weak against iterative attacks and proposed iterative training methods in the image domain . Inspired by the iterative methods in images, Liu \\etal{} and Liu \\etal{} created new adversarial examples at each epoch and added them for training. Through iterative optimization of the loss functions, the retrained models are more robust against adversarial examples. \n\tUnlike the above methods, Xu \\etal{} proposed a novel adversarial training approach LexicalAT based on GANs . LexicalAT has two components, a generator and a classifier. The generator creates adversarial examples with the designed replacement actions, and the classifier returns feedback on each action by calculating the absolute difference in probability between the adversarial examples and the corresponding original texts. Then, the generator maximizes the expectation of the feedback by policy gradient, and the classifier minimizes the loss function until convergence. LexicalAT combines a knowledge base and adversarial learning and improves the robustness of sentiment classification models to a certain degree. Dinan \\etal{} conducted similar work to construct robust models. In contrast, they use crowdworkers instead of the generator in Xu \\etal{} , so their approach requires considerable human effort.\n\tIn adversarial training, data diversity is the key factor in determining the robustness of models, and it relies on heuristic approximations to the worst-case perturbations. 
As a result, adversarial training is vulnerable to unknown attacks.", "id": "bec8e5d9-ca56-4850-8db6-a3501aae190e", "level": "subsubsection", "origin_cites_number": 13, "parent_id": "063d028b-08cd-40f0-9602-c426e8ef4b67", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Defenses Against Adversarial Attacks in Texts" ], [ "subsection", "Model Enhancement" ], [ "subsubsection", "Adversarial Training" ] ], "subsections": [], "title": "Adversarial Training" }, { "cite_extract_rate": 1, "cites": [ 5872, 38, 8960 ], "content": "DNNs contain many built-in functions as well as functions that can be added externally. Researchers utilize specially designed functions to reduce differences between the representations of adversarial examples and legitimate samples in the model, thus eliminating the impact of adversarial perturbations. Here, we introduce existing works in this aspect.\n\tJones \\etal{} constructed a robust encoding function to map inputs to a smaller, discrete encoding space. They construct a vocabulary containing the most frequent words in texts. By clustering the words in the vocabulary, all words within a cluster share the same encoding. These encodings are the models' training data, \\ie{}, $f_{\\alpha}=g(\\alpha (x))$. $f_{\\alpha}$ is the classifier, and $g(\\cdot)$ receives the outputs from the encoding function $\\alpha$. For an adversarial example, the perturbed words and their corresponding original words fall into the same cluster (stability), while unperturbed words are unaffected (fidelity). However, the performance is restricted by the size of the vocabulary. Besides, the trade-off between stability and fidelity also needs more detailed analysis.\n\tLi \\etal{} incorporated external knowledge into the multi-head attention for enhancing the robustness of NLI systems. 
The model can search for external knowledge when conducting NLP tasks, helping it explore beyond the data distribution of specific tasks. Experimental results show a significant improvement in defending against adversarial examples when the knowledge is added to the cross-encoder in their models. Although the method needs no extra parameters and is suitable for any model with attention units, the quality and size of the external knowledge limit its performance.", "id": "f6552377-06ba-48b5-91a6-4f62f8128dab", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "063d028b-08cd-40f0-9602-c426e8ef4b67", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Defenses Against Adversarial Attacks in Texts" ], [ "subsection", "Model Enhancement" ], [ "subsubsection", "Functional Improvement" ] ], "subsections": [], "title": "Functional Improvement" }, { "cite_extract_rate": 0.7826086956521741, "cites": [ 5871, 2476, 5872, 3103, 5873, 5878, 5877, 1096, 1901, 8960, 5875, 5874, 5839, 1159, 5876, 4078, 1619, 5494 ], "content": "The detection of unknown words and adversarial training partially mitigate the threats of adversarial examples, but they are unlikely to find worst-case adversaries due to the complexity of the search space arising from discrete text perturbations . Hence, researchers have proposed certified robustness training to search for a boundary. Under some conditions, a model is guaranteed to be robust against an attack, \\ie{}, the attack cannot cross the boundary, no matter how adversaries create adversarial examples. \n\tJia \\etal{} presented certified robustness training by optimizing the interval bound propagation (IBP) upper bound , which can limit the loss under worst-case perturbations and compress the living space of adversarial examples. This method is provably robust to word substitution attacks on IMDB and SNLI. 
Similarly, Huang \\etal{} also applied IBP for certified robustness training. However, IBP only works to certify DNNs with continuous inputs, so it is not applicable to other models, such as the char-level one . In contrast, Ko \\etal{} proposed a gradient-based approach to find the minimum distortion of neural networks or lower bounds for robustness quantification. It is suitable for various models and has no restrictions like IBP, but it is inefficient and poses a computational challenge for Transformer verification due to the self-attention mechanism.\n\tConsidering that the current certification methods only deal with naive DNNs, Shi \\etal{} proposed a novel method to certify the robustness of more complex Transformers. A Transformer model is decomposed into a number of sub-layers. Each sub-layer contains multiple positions, and each position consists of multiple neurons. The global lower and upper bounds of each neuron (\\textit{w.r.t.} the input within the perturbation space) are calculated to efficiently obtain a safety guarantee by reducing the distance between bounds. Compared with IBP-based methods , the certified robustness bounds in this work are much tighter, and the method also identifies word importance, similar to importance-based methods.\n\tYe \\etal{} designed a structure-free certified defense method that can guarantee the robustness of any pre-trained model. They construct a smoothed classifier $g^{RS}$ by introducing random substitutions from a synonym set, where $R$ represents the perturbed words and $S$ refers to the corresponding substitutions. The certification of the newly constructed model is defined as\n\t\\begin{eqnarray}\n\t\\Delta x \\overset{def}{=} \\min_{X'\\in S_X}g^{RS}(X',y)-\\max_{X'\\in S_X}g^{RS}(X',c) > 0\n\t\\end{eqnarray}\n\twhere $X'$ represents the modified sentences in the synonym set $S_X$, and $c$ is any label except the true label $y$. 
If the minimum score of the true label exceeds the maximum score of any other label (\\textit{i.e.,} $\\Delta x > 0$), the smoothed classifier is certified robust.\n\tHowever, this kind of method is largely affected by the models, testing data, and optimization methods. It is not as general as the detection of unknown words and adversarial training.\n\t\\begin{table*}[t]\n\t\t\\scriptsize\n\t\t\\centering\n\t\t\\caption{Summary information of defense methods against adversarial examples. We mainly show the category, time, work, model, data, attack, and project URL.}\n\t\t\\label{defenseinformation}\n\t\t\\begin{adjustbox}{width=\\linewidth,center}\n\t\t\t\\begin{tabular}{|c|c|c|c|c|c|c|}\n\t\t\t\t\\hline \n\t\t\t\t\\multirow{3}{*}{Category} & \\multirow{3}{*}{Time} & \\multirow{3}{*}{Work} & \\multirow{3}{*}{Model} & \\multirow{3}{*}{Attack} & \\multirow{3}{*}{NLP Task} & \\multirow{3}{*}{Project URL} \\\\\n\t\t\t\t& & & & & & \\tabularnewline\n\t\t\t\t& & & & & & \\tabularnewline\n\t\t\t\t\\hline \n\t\t\t\t\\hline\n\t\t\t\t\\multirow{3}{*}{Detection} & 2019.2.24 & Li & Microsoft Azure & char-level & C & — \\tabularnewline\n\t\t\t\t\\cline{2-7}\n\t\t\t\t& 2019.7.28 & Pruthi & bi-LSTM, BERT & word, char-level & C & \\url{https://github.com/danishpruthi/adversarial-misspellings} \\tabularnewline\n\t\t\t\t\\cline{2-7}\n\t\t\t\t& 2019.11.4 & Zhou & BERT & word,char-level & C & \\url{https://github.com/joey1993/bert-defender} \\tabularnewline\n\t\t\t\t\\hline\n\t\t\t\t\\multirow{7}{*}{\\tabincell{c}{Adversarial \\\\ training}} & 2018.11.16 & Wang & BSAE & sentence-level & RC & — \\tabularnewline\n\t\t\t\t\\cline{2-7}\n\t\t\t\t& 2019.9.15 & Wang & CNN,LSTM & word-level & C & — \\tabularnewline\n\t\t\t\t\\cline{2-7}\n\t\t\t\t& 2019.11.4 & Xu & CNN,LSTM,BERT & word-level & C & \\url{https://github.com/lancopku/LexicalAT} \\tabularnewline\n\t\t\t\t\\cline{2-7}\n\t\t\t\t& 2019.11.4 & Dinan & BERT & char,word-level & dialogue & — \\tabularnewline\n\t\t\t\t\\cline{2-7}\n\t\t\t\t& 2020.2.7 & Liu & \\tabincell{c}{QANet, 
BERT \\\\ ERNIE2.0 } & sentence-level & RC & — \\tabularnewline\n\t\t\t\t\\cline{2-7}\n\t\t\t\t& 2020.2.7 & Liu & char-CNN, LSTM & char, word-level & C & — \\tabularnewline\n\t\t\t\t\\cline{2-7}\n\t\t\t\t& 2020.8.28 & Wang & CNN, LSTM & word-level & C & \\url{https://github.com/Raibows/RSE-Adversarial-Defense} \\tabularnewline\n\t\t\t\t\\hline\n\t\t\t\t\\multirow{2}{*}{\\tabincell{c}{Functional \\\\ Improvement}} & 2019.8.31 & Li & \\tabincell{c}{DAM, BERT \\\\ Transformer} & word-level & NLI & — \\tabularnewline\n\t\t\t\t\\cline{2-7}\n\t\t\t\t& 2020.7.5 & Jones & BERT & char-level & C, NLI & — \\tabularnewline\n\t\t\t\t\\hline\n\t\t\t\t\\multirow{5}{*}{Certification} & 2019.6.9 & Ko & LSTM & char, word-level & C & \\url{https://github.com/ZhaoyangLyu/POPQORN} \\tabularnewline\n\t\t\t\t\\cline{2-7}\n\t\t\t\t& 2019.7.28 & Huang & CNN & char, word-level & C & \\url{https://github.com/deepmind/interval-bound-propagation/tree/master/examples/language/} \\tabularnewline\n\t\t\t\t\\cline{2-7}\n\t\t\t\t& 2019.11.4 & Jia & BOW, CNN, LSTM & word-level & C, NLI & \\url{https://github.com/robinjia/certified-word-sub} \\tabularnewline\n\t\t\t\t\\cline{2-7}\n\t\t\t\t& 2020.4.30 & Shi & Transformer & word-level & C & \\url{https://github.com/shizhouxing/Robustness-Verification-for-Transformers} \\tabularnewline\n\t\t\t\t\\cline{2-7}\n\t\t\t\t& 2020.7.5 & Ye & CNN, BERT & word-level & C & \\url{https://github.com/lushleaf/Structure-free-certified-NLP} \\tabularnewline\n\t\t\t\t\\cline{2-7}\n\t\t\t\t& 2020.8.12 & Li & \\tabincell{c}{TextCNN \\\\ bi-LSTM} & char, word-level & C & — \\tabularnewline\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t\\end{adjustbox}\n\t\\end{table*}", "id": "9219db05-661b-4254-be49-5062c0611b8c", "level": "subsubsection", "origin_cites_number": 23, "parent_id": "063d028b-08cd-40f0-9602-c426e8ef4b67", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Defenses Against Adversarial Attacks in Texts" ], [ 
"subsection", "Model Enhancement" ], [ "subsubsection", "Certification" ] ], "subsections": [], "title": "Certification" }, { "cite_extract_rate": 0.533333333333333, "cites": [ 2910, 5881, 5833, 5879, 5883, 5880, 205, 5882 ], "content": "The aforementioned methods shown in Table \\ref{tabpaper_information} and Table \\ref{defenseinformation} are actual ways for adversarial attacks and defenses, but none of them explain theoretically why NLP models give different predictions. However, analyzing and explaining models' abnormal behavior is the fundamental way to carry out or solve adversarial attacks, which is lacking at present. \n\tAt present, the related model analysis works in NLP take legitimate data as inputs and observe the behavior of DNNs. According to the objects, we divide analysis methods into two categories: external input and model's internal structure. These works have confirmed the theoretical correctness of some existing methods. They also help us have a better understanding of DNNs and then propose stronger attacks and defenses. \n\t\\textbf{External input.} Studies have demonstrated that the changes of external inputs (\\eg{}, input composition or representation ) will affect the outputs of models. For example, Arras \\etal{} extended the layer-wise relevance propagation (LRP) method to LSTM, producing reliable explanations of which words were responsible for attributing sentiment in individual texts. Gupta \\etal{} proposed layer wise semantic accumulation (LISA) method to explain how to build semantics for a recurrent neural network (RNN) and how the saliency patterns act in the decision. During these findings, the authors analyze the sensitiveness of RNNs about different inputs to check the increase or decrease in prediction. 
The two works support the theoretical correctness of importance-based attacks such as DeepWordBug , PWWS , and Textfooler .\n\t\\textbf{Internal structure.} Exploring the behavior of the internal units of the model is a more direct analysis method. Aubakirova \\etal{} presented activation clustering to track the maximally activated neurons. Similarly, Dalvi \\etal{} studied individual neurons capturing certain properties that are deemed important for the task. In this way, they increase model transparency and uncover the importance of individual parameters, helping understand the inner workings of DNNs. Researchers have realized defense methods by manipulating neurons in the image domain . Whether this is feasible for texts is worth exploring. \n\tJacovi \\etal{} presented an analysis of the inner workings of CNNs for processing text. They demonstrated that the filters capture semantic classes of ngrams, and that max-pooling separates the ngrams related to the final classification from the others. By inserting several ngrams, one can drive the filters to produce results beyond the intended extraction, leading to misclassification. Wallace \\etal{} applied HotFlip to AllenNLP for interpreting models' weaknesses. However, these works only analyze the behavior of different models and do not go deep into the network's internal mechanisms, which would contribute to the implementation of defense methods.\n\tIndeed, researchers can combine adversarial examples with existing model analysis methods to explore and analyze models' behavior, such as which layer of the model changes the prediction and the differences in propagation path (\\ie{}, composed of activated neurons) between adversarial examples and legitimate inputs. 
The combination can inspire us to come up with more effective ways to eliminate the vulnerability of models.", "id": "c8ca8542-4edc-4b76-a92d-61d84ef18eb0", "level": "subsection", "origin_cites_number": 15, "parent_id": "1804625a-81a9-4a52-8749-90a4382611c3", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Defenses Against Adversarial Attacks in Texts" ], [ "subsection", "Theoretical Analysis" ] ], "subsections": [], "title": "Theoretical Analysis" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Chinesebasedmodels}\n\tEnglish and Chinese are the two most popular languages in the world. However, DNNs process inputs in the two languages differently, so adversarial examples behave differently across them. Next, we introduce the works on adversarial attacks and defenses targeting Chinese.", "id": "1246daa9-3531-4f92-b8a9-9822a164a00e", "level": "section", "origin_cites_number": 0, "parent_id": "697307fb-5058-4940-99fb-a281f51f4034", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Adversarial examples in Chinese-based models" ] ], "subsections": [ "a67ace58-d936-4133-a08b-2b60867592d1", "be0fa632-bb80-49d9-a42a-a71c97003dd8" ], "title": "Adversarial examples in Chinese-based models" }, { "cite_extract_rate": 0.8, "cites": [ 886, 5859, 5843, 5870 ], "content": "Adversarial attacks in Chinese-based models are different from those in English due to the attributes of the text. First, Chinese texts need segmentation before being fed to the models. Second, each token after segmentation is a single character, word, or phrase. Operations such as char-level swapping and simple word-level phrase substitution are not well suited to generating good adversarial examples in Chinese. To deal with these challenges, researchers investigate and design new ways to generate adversarial Chinese texts. 
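To make the segmentation step concrete, the sketch below implements toy forward maximum matching (the vocabulary is hypothetical; real segmenters such as jieba use richer models, but the point stands: a token can be a single character, a word, or a phrase):

```python
def fmm_segment(text, vocab, max_len=4):
    # Forward maximum matching: at each position, greedily take the
    # longest dictionary entry; fall back to a single character.
    # A toy stand-in for real Chinese segmenters.
    tokens, i = [], 0
    while i < len(text):
        for n in range(min(max_len, len(text) - i), 0, -1):
            piece = text[i:i + n]
            if n == 1 or piece in vocab:
                tokens.append(piece)
                i += n
                break
    return tokens
```

Because token boundaries depend on the dictionary, perturbing a single character can change the entire segmentation, which is one reason English-style edit operations transfer poorly to Chinese.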
\n\tWang \\etal{} proposed a Chinese char-level attack against BERT, which can map the discrete text into a high-dimensional embedding space. Due to the mapping capability of BERT, attackers can search the high-dimensional embedding space for modifications. The perturbed embeddings are mapped back to the characters with the closest semantics. In the targeted attack scenario, the objective function $g(\\cdot)$ following the C\\&W attack is defined as \n\t\\begin{eqnarray}\n\tg(x')=\\max \\left[\\max\\left\\{f(x')_i:i\\neq t\\right\\}-f(x')_t,-\\kappa\\right]\n\t\\end{eqnarray}\n\twhere $f(x')_i$ is the $i$-th element in a logit vector from BERT model $f$. $\\kappa$ encourages the optimization to find a perturbed character $x'$ classified as class $t$ with high probability. \n\tIn the non-targeted attack scenario, $g(\\cdot)$ is slightly different from the targeted attack scenario.\n\t\\begin{eqnarray}\n\tg(x')=\\max \\left[f(x')_t-\\max\\left\\{f(x')_i:i\\neq t\\right\\},-\\kappa\\right]\n\t\\end{eqnarray}\n\tHere, $t$ is the original class of the input. Although the embedding of $x'$ is closest to that of the original character $x$, their semantics may differ, and sometimes $x'$ is unnatural to Chinese readers. \n\tLi \\etal{} followed importance-based methods to quantify the importance of each segment, which is then replaced by pieces from a pre-constructed vocabulary. The pieces are similar to the segments and can be a single character, word, or phrase. The generations in this work are more natural and semantically closer to the original than those of Wang \\etal{} . We treat the phrases in pieces as special sentences that are shorter than normal, so that the attack can be seen as a multi-level one. \n\tAdversarial examples in Chinese are shown in Figure \\ref{chinesebasedsamples}. When the Chinese text is slightly modified, the model's prediction changes from one class to another. However, the meaning of the translations changes more obviously. 
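The two objectives above can be collapsed into one helper (a sketch: `logits` stands for the vector $f(x')$, and the function name is ours, not from the authors' code):

```python
def cw_objective(logits, t, kappa=0.0, targeted=True):
    # C&W-style objective from the two equations above. Minimizing the
    # returned value drives the search: the targeted form pushes class
    # t's logit above all others; the non-targeted form pushes the
    # original class t below its best rival. kappa is the margin.
    others = max(v for i, v in enumerate(logits) if i != t)
    if targeted:
        return max(others - logits[t], -kappa)   # targeted g(x')
    return max(logits[t] - others, -kappa)       # non-targeted g(x')
```

Once the objective reaches $-\kappa$, the attack has succeeded with the requested confidence margin and the search can stop.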
If we feed the Chinese text and its translation into Chinese-based and English-based classifiers respectively, the consistency of the two models' outputs is worth exploring as a basis for judging adversarial examples.\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\subfigure[Wang et al.]{\n\t\t\t\\begin{minipage}[t]{0.5\\linewidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics[width=\\linewidth]{AEinChinese_a.pdf}\n\t\t\t\\end{minipage}\n\t\t}\n\t\t\\subfigure[Li et al.]{\n\t\t\t\\begin{minipage}[t]{0.5\\linewidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics[width=\\linewidth]{AEinChinese_b.pdf}\n\t\t\t\\end{minipage}\n\t\t}\n\t\t\\centering\n\t\t\\caption{Adversarial examples in Chinese. (a) is from Wang \\etal{} , and (b) is from Li \\etal{} .}\n\t\t\\label{chinesebasedsamples}\n\t\\end{figure}", "id": "a67ace58-d936-4133-a08b-2b60867592d1", "level": "subsection", "origin_cites_number": 5, "parent_id": "1246daa9-3531-4f92-b8a9-9822a164a00e", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Adversarial examples in Chinese-based models" ], [ "subsection", "Attack" ] ], "subsections": [], "title": "Attack" }, { "cite_extract_rate": 0, "cites": [], "content": "Countermeasures are urgently needed to handle the threat of adversarial attacks. Nevertheless, there are few defense methods against adversarial texts, let alone against Chinese texts. The mainstream methods to enhance the robustness of English-based models are adversarial training and spelling check, but they are sometimes not suitable for Chinese-based models. The reasons are shown below.\n\t\\begin{itemize}\n\t\t\\item Typos in Chinese include additional/missing words, wrong word order, homophonic/similar words, and semantic errors. Spelling check for English cannot be applied to Chinese.\n\t\t\\item Adversarial training requires a lot of data to achieve a good result and is always sensitive to unknown attacks. 
The rich meaning and diverse composition of Chinese make it easy to generate adversarial examples. Simply modifying the words can change the semantics. Hence, it is difficult to defend against such diverse Chinese-based adversarial examples.\n\t\\end{itemize}\n\tTo bridge this striking gap, Li \\etal{} proposed TEXTSHIELD, a defense method specially designed for Chinese-based text classification models. The main components of TEXTSHIELD are an NMT model trained with adversarial–benign text pairs and a feature extraction framework. When a Chinese text is fed to TEXTSHIELD, it is first translated into English and then back into Chinese. Finally, the Chinese-based models extract the semantic, glyph, and phonetic-level features of the corrected Chinese texts and fuse them for classification. The translation avoids the interference of perturbations from raw texts, and the multi-modal embedding features provide more valuable information for classification. However, the performance of TEXTSHIELD relies heavily on the two pre-trained models, the NMT model and the Chinese text classification model. TEXTSHIELD will fail when adversarial examples can fool both of these models.", "id": "be0fa632-bb80-49d9-a42a-a71c97003dd8", "level": "subsection", "origin_cites_number": 1, "parent_id": "1246daa9-3531-4f92-b8a9-9822a164a00e", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Adversarial examples in Chinese-based models" ], [ "subsection", "Defense" ] ], "subsections": [], "title": "Defense" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{discussion}\n\tIn the previous sections, detailed descriptions of adversarial attacks and defenses in texts enable readers to gain a quicker and better understanding of this area. 
Next, we present more general observations and shed some light on further work in this area.", "id": "c668fad8-3b0a-4b93-8d38-3007046ffb10", "level": "section", "origin_cites_number": 0, "parent_id": "697307fb-5058-4940-99fb-a281f51f4034", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Discussions" ] ], "subsections": [ "897a908d-1634-4984-b8c2-781e7254006a", "f49d96fb-676f-4a07-8aa2-02c0071c73c6", "99244c31-822a-4beb-a33c-8d2843150f76" ], "title": "Discussions" }, { "cite_extract_rate": 0.8, "cites": [ 5841, 5833, 3103, 5842, 5830, 5846, 7586, 5829 ], "content": "We have reviewed over 40 published or pre-printed papers on the topic of adversarial example generation. In reviewing these attacks, we have made some interesting findings, including challenges, which may shed new light on designing more powerful attacks.\n\t\\textbf{Limitation of char-level attacks.} Compared with word-level and sentence-level attacks, char-level attacks are more obvious to human eyes and easier to detect with spelling-check tools. Besides, it is difficult to generate a successful sample by modifying only one or two characters. In most cases, people have to increase the number of modified characters to generate adversarial examples, resulting in reduced imperceptibility and readability. \n\t\\textbf{The failure of transferability in reality.} Currently, the majority of studies on adversarial texts are about theoretical models and rarely related to practical applications. We have used the adversarial examples presented in recent works to attack ParallelDots, as in Figure \\ref{isnatncefigure}, but most of the adversarial examples are ineffective and can be correctly classified. Only a few samples successfully fool this system, which means that the transferability of these adversarial examples is poor. For physical NLP systems, we cannot obtain any knowledge of them, and the number of queries may sometimes be limited. 
Hence, transferability is the main choice for attacking these physical applications, and it is the key factor for practical attacks. \n\t\\textbf{Lacking general methods.} There are as yet no adversarial perturbations in texts that can fool any DNN-based model (so-called universal adversarial perturbations). Although Wallace \\etal{} find input-agnostic sequences that can trigger specific classifications to generate universal adversarial examples, these sequences impact the readability of inputs, and the generated samples are offensive in nature. \n\t\\textbf{Lacking better evaluation methods.} Most studies evaluate the performance of adversarial attacks using success rate or accuracy. Only a few works take speed, scale, and efficiency into consideration, and even these only list the attacks' running time. Whether there is a relationship among the scale of the dataset, the time consumed, and the success rate of adversarial attacks is still unknown. If there exists such a relationship, the trade-off among these three aspects may be a research point in future work, like the related study of speed in adversarial examples. Besides, the experimental results on different datasets vary even when the attack method is the same. Whether the type and amount of data may affect adversarial attacks is worth pondering.", "id": "897a908d-1634-4984-b8c2-781e7254006a", "level": "subsection", "origin_cites_number": 10, "parent_id": "c668fad8-3b0a-4b93-8d38-3007046ffb10", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Discussions" ], [ "subsection", "Generation of Adversarial Examples" ] ], "subsections": [], "title": "Generation of Adversarial Examples" }, { "cite_extract_rate": 0.555555555555555, "cites": [ 973, 5884, 3103, 5885, 893 ], "content": "We have reviewed nearly 20 published or pre-printed papers on the topic of defense methods against adversarial attacks. 
In reviewing these methods, we have made some interesting findings, including challenges, which may shed new light on designing more robust models.\n\t\\textbf{Application of adversarial examples.} In order to ensure the safety of the model, we can actively employ adversarial samples to expose its vulnerabilities for further improvement. For example, Blohm \\etal{} generated adversarial examples to discover the limitations of their machine reading comprehension model. In different scenarios, their model is robust against meaning-preserving lexical substitutions but fails under importance-based attacks. Fortunately, some other attributes (\\eg{}, answer by elimination via ranking plausibility) can be added to improve the model's performance. Cheng \\etal{} proposed a projected gradient method to verify the robustness of seq2seq models. They find that seq2seq models are more robust to adversarial attacks than CNN-based classifiers. Through various adversarial examples, we can learn which features, functions, or models better resist these attacks, which guides us on where to start and how to improve.\n\t\\textbf{Lacking benchmarks.} Various methods have been proposed to study adversarial attacks and defenses in texts, but there is no benchmark. Researchers use different datasets (in Section \\ref{datasets}) in their works, making it difficult to compare these methods' advantages and disadvantages. Meanwhile, it also affects the selection of metrics. There is no exact statement about which metric is better in a given situation and why it is more useful than others. Some comparisons have been made in Textbugger with several metrics. The best metric in that work may only be suitable there and ineffective in other works. \n\t\\textbf{Generalization abilities of detectors.} Tackling unknown adversarial attacks is one of the main challenges for defense. In the past four years, researchers have been working towards this goal to design a general method. 
However, none of the existing works meets this need. We think that future work can focus more on designing a general defense for a single NLP task and then extending it to other NLP tasks. \n\t\\textbf{A Platform for research.} To enable a quick start in this area, it is necessary to establish an open-source toolbox (\\eg{}, AdvBox and cleverhans in the image domain) for the research on adversarial texts. The toolboxes in the image domain integrate existing representative methods of generating adversarial images. With them, people can easily conduct further studies, which reduces the time spent on re-implementation and promotes the development of this field. Compared with those in the image domain, the visual analytics framework proposed by Laughlin \\etal{} lacks diverse attack and defense methods. TextAttack contains some representative attacks, including char-level, word-level, and sentence-level ones. If more attack and defense methods can be incorporated into it, the toolbox will become more powerful.", "id": "f49d96fb-676f-4a07-8aa2-02c0071c73c6", "level": "subsection", "origin_cites_number": 9, "parent_id": "c668fad8-3b0a-4b93-8d38-3007046ffb10", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Discussions" ], [ "subsection", "Defense Methods Against Adversarial Attacks" ] ], "subsections": [], "title": "Defense Methods Against Adversarial Attacks" }, { "cite_extract_rate": 0.8235294117647051, "cites": [ 5886, 5887, 5159, 975, 5156, 5888, 5890, 5889, 1603, 8957, 917, 5834, 5157, 5831 ], "content": "In future work, studies on adversarial examples can start from the following aspects: As an attacker, it is worth designing universal perturbations like those in the image domain . Any text with universal perturbations can induce a model to produce an incorrect output. Moreover, stronger universal perturbations could fool multiple models, or even any model on any text. 
On the other hand, enhancing transferability is meaningful for more practical black-box attacks, and the combination of optimization-based and transferability-based methods is another viable way, as in the work in . In contrast, defenders would prefer to fix this vulnerability in DNNs completely, but that is no less difficult than redesigning a network. Both are long and arduous tasks requiring the joint efforts of many people. At the moment, defenders can adapt methods from the image domain to text for improving the robustness of DNNs, \\textit{e.g.}, adversarial training, adding an extra layer, optimizing the cross-entropy function, or weakening the transferability of adversarial examples.\n\tIn addition, the combination of deepfake and adversarial examples (also called AdvDeepfakes) is a worthy research direction. Deepfake refers to techniques that use artificial intelligence (AI), especially GANs, to synthesize and edit fake images that humans can hardly distinguish from real ones. In response to this emerging challenge, researchers have constructed various deepfake detectors , but they fail to detect AdvDeepfakes, where attackers add adversarial perturbations to deepfake images. Inspired by these works, whether fake-text detectors are robust against AdvDeepfakes needs further exploration. On the other hand, it also encourages researchers to build more robust detectors.", "id": "99244c31-822a-4beb-a33c-8d2843150f76", "level": "subsection", "origin_cites_number": 17, "parent_id": "c668fad8-3b0a-4b93-8d38-3007046ffb10", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Discussions" ], [ "subsection", "Further Work" ] ], "subsections": [], "title": "Further Work" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{conclusion}\n\tThis paper presents a comprehensive survey about adversarial attacks and defenses on DNNs in texts. 
Although DNNs achieve high performance on a wide variety of NLP tasks, they are inherently vulnerable to adversarial examples. Hence, great attention has been paid to the security problems caused by adversarial examples. We have integrated existing adversarial attacks and defenses, focusing on recent works in texts. The threats of adversarial attacks are real, but defense methods have fallen far behind. Most existing works have limitations, such as restricted application scenarios and constraint conditions. More attention should be paid to the problems caused by adversarial examples, and designing models that are truly robust against adversarial attacks remains an open issue.\n\t\\section*{Acknowledgments}\n\tThis work was partly supported by the National Natural Science Foundation of China under No. 61876134, U1536204 and U1836112.\n\t\\bibliographystyle{IEEEtran}\n\t\\bibliography{reference}\n\\end{document}", "id": "cf4bc9dc-d910-4f84-b886-04185306a490", "level": "section", "origin_cites_number": 0, "parent_id": "697307fb-5058-4940-99fb-a281f51f4034", "prefix_titles": [ [ "title", "Towards a Robust Deep Neural Network in Texts: A Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
112
[ 5718, 205, 892, 3863, 313, 5829, 5828, 314, 8957, 5826, 4722, 209, 5832, 5827, 5830, 893, 2465, 2401, 3859, 5833, 5834, 3699, 4235, 4078, 5831, 967, 969, 5836, 5837, 5835, 923, 5838, 7305, 975, 890, 3103, 8958, 894, 902, 906, 914, 4846, 5848, 5849, 5843, 5847, 9133, 5840, 5844, 5845, 5842, 5851, 1632, 5850, 2565, 5839, 5841, 8012, 5846, 7586, 8959, 5853, 5852, 5854, 2470, 168, 7186, 7217, 1003, 5857, 5856, 5855, 1096, 8419, 1171, 5865, 5870, 790, 8385, 4036, 38, 2058, 912, 1113, 5868, 673, 303, 5866, 5860, 1348, 7796, 5862, 5861, 5869, 919, 1901, 8418, 1139, 5867, 439, 1173, 862, 5859, 7, 5858, 5863, 5864, 5871, 5872, 943, 5494, 5873, 8013, 917, 5874, 8960, 2476, 5878, 5877, 5875, 1159, 5876, 1619, 2910, 5881, 5879, 5883, 5880, 5882, 886, 973, 5884, 5885, 5886, 5887, 5159, 5156, 5888, 5890, 5889, 1603, 5157 ]
1.443787
[ "Pengzhen Ren", "Yun Xiao", "Xiaojun Chang", "Po-Yao Huang", "Zhihui Li", "Brij B. Gupta", "Xiaojiang Chen", "Xin Wang" ]
A Survey of Deep Active Learning
2020
2020-08-30T04:28:31Z
cs.LG
Active learning (AL) attempts to maximize a model's performance gain while annotating the fewest samples possible. Deep learning (DL) is greedy for data and requires a large amount of data supply to optimize a massive number of parameters if the model is to learn how to extract high-quality features. In recent years, due to the rapid development of internet technology, we have entered an era of information abundance characterized by massive amounts of available data. As a result, DL has attracted significant attention from researchers and has been rapidly developed. Compared with DL, however, researchers have a relatively low interest in AL. This is mainly because before the rise of DL, traditional machine learning required relatively few labeled samples, meaning that early AL was rarely accorded the value it deserves. Although DL has made breakthroughs in various fields, most of this success is due to a large number of publicly available annotated datasets. However, the acquisition of a large number of high-quality annotated datasets consumes a lot of manpower, making it infeasible in fields that require high levels of expertise (such as speech recognition, information extraction, medical images, etc.). Therefore, AL is gradually coming to receive the attention it is due. It is therefore natural to investigate whether AL can be used to reduce the cost of sample annotation while retaining the powerful learning capabilities of DL. As a result of such investigations, deep active learning (DeepAL) has emerged. Although research on this topic is quite abundant, there has not yet been a comprehensive survey of DeepAL-related works; accordingly, this article aims to fill this gap. We provide a formal classification method for the existing work, along with a comprehensive and systematic overview. In addition, we also analyze and summarize the development of DeepAL from an application perspective. 
Finally, we discuss the confusion and problems associated with DeepAL and provide some possible development directions.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "69f98c5d-4a32-4911-97b2-4b657d88f671", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ] ], "subsections": [ "aed28b61-7567-46ed-896a-a6d1ddec0f8c", "504a0a59-0bf6-4525-8c0b-ff9673369a89", "3de3bdd7-51c6-4365-9c87-09efb8923a32", "dcc2782c-e704-4c3c-9abb-4c1b9d13e14c", "eb711660-bd71-47e8-b216-91168204d5d2", "533d8a22-af2d-4c05-ba9a-53183224b451" ], "title": "root" }, { "cite_extract_rate": 0.421052631578947, "cites": [ 5507, 1044, 5511, 4023, 5509, 5508, 5510, 5506 ], "content": "Both deep learning (DL) and active learning (AL) are a subfield of machine learning. DL is also called representation learning . It originates from the study of artificial neural networks and realizes the automatic extraction of data features. DL has strong learning capabilities due to its complex structure, but this also means that DL requires a large number of labeled samples to complete the corresponding training. With the release of a large number of large-scale data sets with annotations and the continuous improvement of computer computing power, DL-related research has ushered in large development opportunities. Compared with traditional machine learning algorithms, DL has an absolute advantage in performance in most application areas. \nAL focuses on the study of data sets, and it is also known as query learning . AL assumes that different samples in the same data set have different values for the update of the current model, and tries to select the samples with the highest value to construct the training set. Then, the corresponding learning task is completed with the smallest annotation cost. Both DL and AL have important applications in the machine learning community. Due to their excellent characteristics, they have attracted widespread research interest in recent years. 
More specifically, DL has achieved unprecedented breakthroughs in various challenging tasks; however, this is largely due to the publication of massive labeled datasets . Therefore, DL is limited by the high cost of sample labeling in some professional fields that require rich knowledge. In comparison, an effective AL algorithm can theoretically achieve exponential acceleration in labeling efficiency . This large potential saving in labeling costs is a fascinating prospect. However, the classic AL algorithms also find it difficult to handle high-dimensional data . Therefore, the combination of DL and AL, referred to as DeepAL, is expected to achieve superior results. DeepAL has been widely utilized in various fields, including image recognition , text classification , visual question answering and object detection , etc. Although a rich variety of related work has been published, DeepAL still lacks a unified classification framework. To fill this gap, in this article, we will provide a comprehensive overview of the existing DeepAL-related work \\footnote{We searched about 270 related papers on \\href{https://dblp.uni-trier.de/}{DBLP} using \"deep active learning\" as the keyword. We reviewed the relevance of these papers to DeepAL one by one, eliminated irrelevant papers (those merely containing a few keywords) and papers with missing information, and manually added some papers that do not contain these keywords but use DeepAL-related methods or relate to our current discussion. Finally, the survey references were constructed. The references are up to date as of November 2020. They include 103 conference papers, 153 journal papers, 3 books , 1 research report , and 1 dissertation . There are 28 unpublished papers.}, along with a formal classification method. 
The contributions of this survey are summarized as follows:\n\\begin{itemize}\n \\item As far as we know, this is the first comprehensive review work in the field of deep active learning.\n \\item We analyze the challenges of combining active learning and deep learning, and systematically summarize and categorize existing DeepAL-related work for these challenges.\n \\item We conduct a comprehensive and detailed analysis of DeepAL-related applications in various fields and future directions.\n\\end{itemize}\nIn the remainder of this article, we first briefly review the development status of DL and AL in their respective fields. Subsequently, in Section \\ref{sec:The necessity and challenge of combining DL and AL}, the necessity and challenges of combining DL and AL are explicated. In Section \\ref{sec: Deep Active Learning}, we conduct a comprehensive and systematic summary and discussion of the various strategies used in DeepAL. In Section \\ref{sec: Various applications of DeepAL}, we review various applications of DeepAL in detail. In Section \\ref{sec: Discussion and future directions}, we conduct a comprehensive discussion on the future direction of DeepAL. Finally, in Section \\ref{sec: Summary and conclusions}, we summarize and conclude this survey.", "id": "aed28b61-7567-46ed-896a-a6d1ddec0f8c", "level": "section", "origin_cites_number": 19, "parent_id": "69f98c5d-4a32-4911-97b2-4b657d88f671", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Introduction" ] ], "subsections": [ "a6c42c1c-9b1a-4227-8773-697a4f5c0acd", "eb4b2e52-eb80-4448-b71f-5daaed837f6b" ], "title": "Introduction" }, { "cite_extract_rate": 0.41509433962264103, "cites": [ 684, 5515, 5514, 2909, 514, 5512, 166, 8923, 4023, 5516, 7950, 7948, 7217, 4507, 1496, 97, 5513, 5509, 7300, 7949, 7584, 8924 ], "content": "DL attempts to build appropriate models by simulating the structure of the human brain. 
The McCulloch-Pitts (MCP) model proposed in 1943 by is regarded as the beginning of modern DL. Subsequently, in 1986, introduced backpropagation into the optimization of neural networks, which laid the foundation for the subsequent rapid development of DL. In the same year, Recurrent Neural Networks (RNNs) were first proposed. In 1998, the LeNet network made its first appearance, representing one of the earliest uses of deep neural networks (DNNs). However, these pioneering early works were limited by the computing resources available at the time and did not receive as much attention and investigation as they should have . In 2006, Deep Belief Networks (DBNs) were proposed and used to explore deeper networks, prompting neural networks to be rebranded as DL.
AlexNet is considered the first CNN deep learning model, and it greatly improved image classification results on large-scale datasets (such as ImageNet). In the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC)-2012 competition , AlexNet won the championship, leading the runner-up by nearly 10\% in top-5 test error rate. AlexNet uses the ReLU (Rectified Linear Unit) activation function to effectively suppress the vanishing gradient problem, while the use of multiple GPUs greatly improves the training speed of the model.
Subsequently, DL began to win championships in various competitions and demonstrated very competitive results in many fields, such as visual data processing, natural language processing, speech processing, and many other well-known applications . From the perspective of automation, the emergence of DL has transformed the manual design of features in machine learning into automatic extraction . It is precisely because of this powerful automatic feature extraction capability that DL has demonstrated such unprecedented advantages in many fields.
After decades of development, the research work related to DL is very rich.
In Fig.\\ref{fig:DL}, we present a standard deep learning model example: convolutional neural network (CNN) . Based on this approach, similar CNNs are applied to various image processing tasks. In addition, RNNs and GANs (Generative Adversarial Networks) are also widely utilized. Beginning in 2017, DL gradually shifted from the initial feature extraction automation to the automation of model architecture design ; however, this still has a long way to go. \n\\begin{figure}[!tp] \n\t\\centering \n\t\\subfloat[Structure diagram of convolutional neural network.] \n \t{\\centering \n \t\t\\includegraphics[width = 0.45\\textwidth]{figure/DL}\n \t\t\\label{fig:DL}\n \t}\\hspace{5mm}\n\t\\subfloat[The pool-based active learning cycle.] \n \t{\\centering \n \t\t\\includegraphics[width = 0.45\\textwidth]{figure/AL}\n \t\t\\label{fig:AL}\n \t}\\\\\n\t\\subfloat[A typical example of deep active learning.] \n \t{\\centering \n \t\t\\includegraphics[width = 0.95\\textwidth]{figure/DAL}\n \t\t\\label{fig:DeepAL}\n \t}\n\t\\caption{Comparison of typical architectures of DL, AL, and DeepAL. (a) A common DL model: Convolutional Neural Network. (b)The pool-based AL cycle: Use the query strategy to query the sample in the unlabeled pool $U$ and hand it over to the oracle for labeling, then add the queried sample to the labeled training dataset $L$ and train, and then use the newly learned knowledge for the next round of querying. Repeat this process until the label budget is exhausted or the pre-defined termination conditions are reached. (c) A typical example of DeepAL: The parameters $\\theta$ of the DL model are initialized or pre-trained on the labeled training set $L_0$, and the samples of the unlabeled pool $U$ are used to extract features through the DL model. Select samples based on the corresponding query strategy, and query the label in querying to form a new label training set $L$, then train the DL model on $L$, and update $U$ at the same time. 
Repeat this process until the label budget is exhausted or the pre-defined termination conditions are reached (see Section \ref{sec: DeepAL Stopping Strategy} for stopping strategy details).} 
	\label{fig:AL_DL_DAL} 
\end{figure}
Thanks to the publication of a large number of annotated datasets , in recent years, DL has made breakthroughs in various fields including machine translation , speech recognition , and image classification . However, this comes at the cost of a large number of manually labeled datasets, as DL is extremely data-hungry. While, in the real world, obtaining large unlabeled datasets is relatively simple, manually labeling them comes at a high cost; this is particularly true for fields where labeling requires a high degree of professional knowledge . For example, the labeling and description of lung lesion images of COVID-19 patients must be completed by experienced clinicians, and it is clearly impractical to demand that such professionals complete a large amount of medical image labeling. Similar fields also include speech recognition , medical imaging , recommender systems , information extraction , satellite remote sensing and robotics , machine translation and text classification , etc.
Therefore, a way of maximizing the performance gain of the model when annotating a small number of samples is urgently required.
AL is just such a method, dedicated to studying how to obtain as much performance gain as possible by labeling as few samples as possible. More specifically, it aims to select the most useful samples from the unlabeled dataset and hand them over to the oracle (e.g., a human annotator) for labeling, to reduce the cost of labeling as much as possible while still maintaining performance. In terms of application scenarios, AL approaches can be divided into membership query synthesis , stream-based selective sampling and pool-based AL . Membership query synthesis means that the learner can request the label of any unlabeled sample in the input space, including samples generated by the learner. Moreover, the key difference between stream-based selective sampling and pool-based sampling is that the former makes an independent judgment on whether the label of each sample arriving in the data stream needs to be queried, while the latter chooses the best samples to query based on the evaluation and ranking of the entire dataset. Related research on stream-based selective sampling is mainly aimed at the application scenarios of small mobile devices that require timeliness, because such devices often have limited storage and computing capabilities. Pool-based sampling, which is more common in AL research papers, is more suitable for large devices with sufficient computing and storage resources.
In Fig.\\ref{fig:AL}, we illustrate the framework diagram of the pool-based active learning cycle. In the initial state, we can randomly select one or more samples from the unlabeled pool $U$, give this sample to the oracle query label to get the labeled dataset $L$, and then train the model on $L$ using supervised learning. Next, we use this new knowledge to select the next sample to be queried, add the newly queried sample to $L$, and then conduct training. This process is repeated until the label budget is exhausted or the pre-defined termination conditions are reached (see Section \\ref{sec: DeepAL Stopping Strategy} for stopping strategy details).\nIt is different from DL by using manual or automatic methods to design models with high-performance feature extraction capabilities. AL starts with datasets, primarily through the design of elaborate query rules to select the best samples from unlabeled datasets and query their labels, in an attempt to reduce the labeling cost to the greatest extent possible. Therefore, the design of query rules is crucial to the performance of AL methods. Related research is also quite rich. For example, in a given set of unlabeled datasets, the main query strategies include the uncertainty-based approach , diversity-based approach and expected model change . In addition, many works have also studied hybrid query strategies , taking into account the uncertainty and diversity of query samples, and attempting to find a balance between these two strategies. Because separate sampling based on uncertainty often results in sampling bias , the currently selected sample is not representative of the distribution of unlabeled datasets. On the other hand, considering only strategies that promote diversity in sampling may lead to increased labeling costs, as may be a considerable number of samples with low information content will consequently be selected. More classic query strategies are examined in . 
Although there is a substantial body of existing AL-related research, AL still faces the problem of scaling to high-dimensional data (e.g., images, text, and video) ; thus, most AL works tend to concentrate on low-dimensional problems . In addition, AL often queries high-value samples based on features extracted in advance and does not itself have the ability to extract features.
\label{sec:The necessity and challenge of combining DL and AL}
DL has a strong learning capability in the context of high-dimensional data processing and automatic feature extraction, while AL has significant potential to effectively reduce labeling costs. Therefore, an obvious approach is to combine DL and AL, as this will greatly expand their application potential. This combined approach, referred to as DeepAL, was proposed by considering the complementary advantages of the two methods, and researchers have high expectations for the results of studies in this field. However, although AL-related research on query strategy is quite rich, it is still quite difficult to apply this strategy directly to DL. This is mainly due to:
\begin{itemize}
\item \textbf{Model uncertainty in Deep Learning.} The query strategy based on uncertainty is an important direction of AL research. In classification tasks, although DL can use the softmax layer to obtain the probability distribution of the label, in practice these probability estimates are often over-confident.
The SR (Softmax Response) of the final output is unreliable as a measure of confidence, and the performance of this method will thus be even worse than that of random sampling .
\item \textbf{Insufficient data for labeled samples.} AL often relies on a small amount of labeled sample data to learn and update the model, while DL is often very greedy for data . The labeled training samples provided by the classic AL method are thus insufficient to support the training of traditional DL. In addition, the one-by-one sample query method commonly used in AL is also not applicable in the DL context .
\item \textbf{Processing pipeline inconsistency.} The processing pipelines of AL and DL are inconsistent. Most AL algorithms focus primarily on the training of classifiers, and the various query strategies utilized are largely based on fixed feature representations. In DL, however, feature learning and classifier training are jointly optimized. Only fine-tuning the DL models in the AL framework, or treating them as two separate problems, may thus cause divergence issues .
\end{itemize}
To address the first problem, some researchers have applied Bayesian deep learning to deal with the high-dimensional mini-batch samples with fewer queries in the AL context , thereby effectively alleviating the problem of the DL model being over-confident about its output results.
To solve the problem of insufficient labeled sample data, researchers have considered using generative networks for data augmentation or assigning pseudo-labels to high-confidence samples to expand the labeled training set . Some researchers have also used labeled and unlabeled datasets to combine supervised and semi-supervised training across AL cycles .
In addition, the empirical research in shows that the previous heuristic-based AL query strategy is invalid when it is applied to DL in batch settings; therefore, to replace the one-by-one query strategy of classic AL, many researchers focus on improving batch-mode sample query strategies , taking both the amount of information and the diversity of batch samples into account.
Furthermore, to deal with the pipeline inconsistency problem, researchers have considered modifying the combined framework of AL and DL to make the proposed DeepAL model as general as possible, an approach that can be extended to various application fields. This is of great significance to the promotion of DeepAL. For example, embeds the idea of AL into DL and consequently proposes a task-independent architecture design.
\label{sec: Deep Active Learning}
In this section, we will provide a comprehensive and systematic overview of DeepAL-related works. Fig.\ref{fig:DeepAL} illustrates a typical example of DeepAL model architecture. The parameters $\theta$ of the deep learning model are initialized or pre-trained on the labeled training set $L_0$, while features are extracted from the samples of the unlabeled pool $U$ by the deep learning model. The next steps are to select samples based on the corresponding query strategy, and query their labels from the oracle to form a new labeled training set $L$, then train the deep learning model on $L$ and update $U$ at the same time.
This process is repeated until the label budget is exhausted or the predefined termination conditions are reached (see Section \ref{sec: DeepAL Stopping Strategy} for stopping strategy details). From the DeepAL framework example in Fig.\ref{fig:DeepAL}, we can roughly divide the DeepAL framework into two parts: namely, the AL query strategy on the unlabeled dataset and the DL model training method. These will be discussed and summarized in Sections \ref{sec: Query Strategy Optimization in DeepAL} and \ref{sec: Insufficient Data in DeepAL}, respectively. Next, we will discuss the efforts DeepAL has made toward model generalization in Section \ref{sec: Common Framework DeepAL}. Finally, we briefly discuss the stopping strategy in DeepAL in Section \ref{sec: DeepAL Stopping Strategy}.
In the standard supervised setting of DeepAL, our main goal is to design a query strategy $Q$, $U^n\stackrel{Q}{\longrightarrow}L^m$, using the deep model $f\in \mathcal{F}, f:\mathcal{X}\rightarrow\mathcal{Y}$. The optimization problem of DeepAL in a supervised environment can be expressed as follows:
\begin{equation}
\mathop{\arg\min}_{L^m\subseteq U^n, (\mathrm{x,y}) \in L^m, (x,y) \in U^n} \mathbb{E}_{(x,y)}[\ell(f(\mathrm{x}),\mathrm{y})],
\end{equation}
where $\ell(\cdot)\in \mathcal{R}^+$ is the given loss function, and we expect that $m\ll n$. Our goal is to make $m$ as small as possible while ensuring a predetermined level of accuracy. Therefore, the query strategy $Q$ in DeepAL is crucial to reducing the labeling cost.
Next, we will conduct a comprehensive and systematic review of DeepAL's query strategies from the following five aspects.
\begin{itemize}
 \item \textit{Batch Mode DeepAL (BMDAL).} The batch-based query strategy is the foundation of DeepAL. The one-by-one sample query strategy in traditional AL is inefficient and not applicable to DeepAL, so it is replaced by a batch-based query strategy.
 \item \textit{Uncertainty-based and Hybrid Query Strategies.} An uncertainty-based query strategy ranks samples by model uncertainty to select the samples to be queried. The greater the uncertainty of a sample, the more likely it is to be selected. However, this is likely to ignore the relationship between samples.
Therefore, a method that considers multiple sample attributes is called a hybrid query strategy.
 \item \textit{Deep Bayesian Active Learning (DBAL).} Active learning based on Bayesian convolutional neural networks is called deep Bayesian active learning.
 \item \textit{Density-based Methods.} The density-based method is a query strategy that attempts to find, from the perspective of the dataset, a core subset representing the distribution of the entire dataset, thereby reducing the cost of annotation.
 \item \textit{Automated Design of DeepAL.} Automated design of DeepAL refers to methods that use automated techniques to design the AL query strategies or DL models that have an important impact on DeepAL performance.
\end{itemize}
\label{sec:Batch Mode DeepAL (BMDAL)}
The main difference between DeepAL and classical AL is that DeepAL uses batch-based sample querying. In traditional AL, most algorithms use a one-by-one query method, which leads to frequent retraining of the learning model with little change in the training data. The training set obtained by this query method is not only inefficient for training the DL model, but can also easily lead to overfitting. Therefore, it is necessary to investigate BMDAL in more depth.
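As a naive point of departure, a batch query can simply rank the pool by a per-sample uncertainty score and take the top-$K$. The sketch below uses predictive entropy and margin, two standard uncertainty scores discussed below; the function names are our own illustrative choices:

```python
import numpy as np

def entropy_score(probs):
    """Predictive entropy per sample; probs has shape (n, k).
    Higher entropy means a more uncertain sample."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def margin_score(probs):
    """Negated margin between the top-2 class probabilities,
    so that higher score again means more uncertain."""
    part = np.sort(probs, axis=1)
    return -(part[:, -1] - part[:, -2])

def query_batch(probs, k, score_fn=entropy_score):
    """Return indices of the k most uncertain pool samples."""
    return np.argsort(score_fn(probs))[-k:][::-1]
```

As the text below explains, scoring each sample independently like this tends to select batches of informative but mutually redundant samples, which motivates the correlation-aware formulations that follow.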
In the context of BMDAL, at each acquisition step, we score a batch of candidate unlabeled data samples $\\mathcal{B}=\\{x_1,x_2,...,x_b\\}\\subseteq U$ based on the acquisition function $a$ used and the deep model $f_{\\theta}(L)$ trained on $L$, to select a new batch of data samples $\\mathcal{B}^*=\\{x_1^ *,x_2^*,...,x_b^*\\}$. This problem can be formulated as follows:\n\\begin{equation}\n\\mathcal{B}^*= \\mathop{\\arg\\max}_{\\mathcal{B}\\subseteq U} a_{batch}(\\mathcal{B},f_{\\theta}(L)),\n\\end{equation}\nwhere $L$ is labeled training set. In order to facilitate understanding, we also use $ D_{train}$ to represent the labeled training set. \n\\begin{figure}[!tp] \n\t\\centering \n\t\\subfloat[Batch query strategy considering only the amount of information.] \n \t{\\centering \n \t\t\\includegraphics[width = 0.4\\textwidth]{figure/One_by_one}\n \t\t\\label{fig:One_by_one}\n \t}\\hspace{8mm}\n\t\\subfloat[Batch query strategy considering both information volume and diversity.] \n \t{\\centering \n \t\t\\includegraphics[width = 0.4\\textwidth]{figure/Batch}\n \t\t\\label{fig:Batch}\n \t}\n\t\\caption{A comparison diagram of two batch query strategies, one that only considers the amount of information and one that considers both the amount and diversity of information. The size of the dots indicates the amount of information in the samples, while the distance between the dots represents the similarity between the samples. The points shaded in gray indicate the sample points to be queried in a batch.} \n\t\\label{fig:OneBatch} \n\\end{figure}\nA naive approach would be to continuously query a batch of samples based on the one-by-one strategy. For example, adopts the method of batch acquisition and chooses BALD (Bayesian Active Learning by Disagreement) to query top-$K$ samples with the highest scores. 
\nThe acquisition function $ a_{BALD} $ of this idea is expressed as follows:\n\\begin{equation}\n \\label{eq: BALD}\n \\begin{aligned}\n &a_{\\mathrm{BALD}}\\left(\\left\\{x_{1}, \\ldots, x_{b}\\right\\}, \\mathcal{P}\\left(\\omega \\mid D_{train}\\right)\\right)=\\sum_{i=1}^{b} \\mathbb{I}\\left(y_{i} ; \\omega \\mid x_{i}, D_{train}\\right),\\\\\n &\\mathbb{I}\\left(y ; \\boldsymbol{\\omega} \\mid x, D_{train}\\right)=\\mathbb{H}\\left(y \\mid x, D_{train}\\right)-\\mathbb{E}_{\\mathcal{P}\\left(\\boldsymbol{\\omega} \\mid D_{train}\\right)}\\left[\\mathbb{H}\\left(y \\mid x, \\boldsymbol{\\omega}, D_{train}\\right)\\right],\n \\end{aligned}\n\\end{equation}\nwhere $ \\mathbb{I}\\left(y ; \\boldsymbol{\\omega}\\mid x, D_{train}\\right) $ used in BALD is to estimate the mutual information between model parameters and model predictions. The larger the mutual information value $\\mathbb{I}(*)$, the higher the uncertainty of the sample. The condition of $\\boldsymbol{\\omega}$ on $D_{train}$ indicates that the model has been trained with $D_{train}$. And $\\omega \\sim \\mathcal{P}\\left(\\omega \\mid D_{train}\\right)$ represents the model parameters of the current Bayesian model. $\\mathbb{H(*)}$ represents the entropy of the model prediction. $\\mathbb{E}[H(*)]$ is the expectation of the entropy of the model prediction over the posterior of the model parameters. Equation (\\ref{eq: BALD}) considers each sample independently and selects samples to construct a batch query dataset in a one-by-one way.\nClearly, however, this method is not feasible, as it is very likely to choose a set of information-rich but similar samples. The information provided to the model by such similar samples is essentially the same, which not only wastes labeling resources, but also makes it difficult for the model to learn genuinely useful information. In addition, this query method that considers each sample independently also ignores the correlation between samples. 
This is likely to lead to local decisions that make the batch sample set of queries insufficiently optimized. Therefore, how to simultaneously consider the correlation between different query samples is the primary problem for BMDAL. To solve the above problems, BatchBALD expands BALD, which considers the correlation between data points by estimating the joint mutual information between multiple data points and model parameters. The acquisition function of BatchBALD can be expressed as follows:\n\\begin{equation}\n\\begin{aligned}\n &a_{\\text {BatchBALD }}\\left(\\left\\{x_{1}, \\ldots, x_{b}\\right\\}, \\mathcal{P}\\left(\\omega \\mid D_ {train }\\right)\\right)=\\mathbb{I}\\left(y_{1}, \\ldots, y_{b} ; \\omega \\mid x_{1}, \\ldots, x_{b}, D_ {train }\\right),\\\\\n &\\mathbb{I}\\left(y_{1: b} ; \\boldsymbol{\\omega} \\mid x_{1: b}, D_ {train }\\right)=\\mathbb{H}\\left(y_{1: b} \\mid x_{1: b}, D_ {train }\\right)-\\mathbb{E}_{\\mathcal{P}\\left(\\boldsymbol{\\omega} \\mid D_ {train }\\right)} \\mathbb{H}\\left(y_{1: b} \\mid x_{1: b}, \\boldsymbol{\\omega}, D_ {train }\\right),\n\\end{aligned}\n\\end{equation}\nwhere $x_{1}, \\ldots, x_{b}$ and $y_{1}, \\ldots, y_{b}$ are represented by joint random variables $x_{1: b}$ and $y_{1: b}$ in a product probability space, and $\\mathbb{I}\\left(y_{1: b} ; \\boldsymbol{\\omega} \\mid x_{1: b}, D_ {train }\\right)$ denotes the mutual information between these two random variables. BatchBALD considers the correlation between different query samples by designing an explicit joint mutual information mechanism to obtain a better query batch sample set. \nThe batch-based query strategy forms the basis of the combination of AL and DL, and related research on this topic is also very rich. 
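For reference, the per-sample BALD score of Equation (\ref{eq: BALD}) is commonly estimated from $T$ stochastic forward passes (e.g., MC dropout): the entropy of the mean prediction minus the mean entropy of the individual predictions. A minimal sketch of the naive top-$K$ variant follows; BatchBALD's joint mutual information requires a more involved greedy estimator and is omitted here:

```python
import numpy as np

def bald_scores(mc_probs):
    """mc_probs: (T, N, C) class probabilities from T stochastic passes.

    Returns the per-sample mutual information I(y; w | x, D_train):
    H[mean_t p] - mean_t H[p] (entropy-of-mean minus mean-of-entropy).
    High score = passes disagree = high epistemic uncertainty."""
    eps = 1e-12
    mean_p = mc_probs.mean(axis=0)                               # (N, C)
    h_mean = -np.sum(mean_p * np.log(mean_p + eps), axis=1)      # H(y | x, D)
    h_each = -np.sum(mc_probs * np.log(mc_probs + eps), axis=2)  # (T, N)
    return h_mean - h_each.mean(axis=0)                          # mutual info

def bald_top_k(mc_probs, k):
    """Naive one-by-one top-K acquisition from the BALD scores."""
    return np.argsort(bald_scores(mc_probs))[-k:][::-1]
```

A sample whose passes disagree strongly (e.g., alternating confident predictions) receives a high score, while a sample that is consistently predicted as 50/50 scores near zero — the disagreement, not the raw softmax entropy, is what BALD measures.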
We will provide a detailed overview and discussion of BMDAL query strategies in the following sections.
\label{sec: Hybrid query strategy}
Because the uncertainty-based approach is simple in form and has low computational complexity, it is a very popular query strategy in AL. This query strategy is mainly used in certain shallow models (e.g., SVM or KNN ). This is mainly because the uncertainty of these models can be accurately obtained by traditional uncertainty sampling methods.
In uncertainty-based sampling, learners try to select the most uncertain samples to form a batch query set. For example, in margin sampling , the margin $M$ is defined as the difference between the highest and the second highest predicted probability of a sample as follows: 
$M=P\left(y_{1} \mid x\right)-P\left(y_{2} \mid x\right),$
where \(y_1\) and \(y_2\) are the first and second most probable labels predicted for the sample \(x\) under the current model. The smaller the margin $M$, the greater the uncertainty of the sample $x$. The AL algorithm calculates the margin $M$ of all unlabeled samples and selects the top-$K$ samples with the smallest margin as the batch query set.
Information entropy is also a commonly used measure of uncertainty.
For a $k$-class task, the information entropy $\mathbb{E}(x)$ of sample $x$ can be defined as follows:
\begin{equation}
\label{eq: entropy}
\mathbb{E}(x)=-\sum_{i=1}^{k} P(y_i\mid x) \cdot \log \left(P(y_i\mid x)\right),
\end{equation}
where $P(y_i\mid x)$ is the probability that the current sample $x$ is predicted to be class $y_i$. The greater the entropy of a sample, the greater its uncertainty. Therefore, the top-$K$ samples with the largest information entropy should be selected. More query strategies based on uncertainty can be found in .
There are many DeepAL methods that directly utilize an uncertainty-based sampling strategy.
However, DFAL (DeepFool Active Learning) contends that these methods are easily fooled by adversarial examples; thus, it focuses on the study of examples near the decision boundary, and actively uses the information these adversarial examples provide about the input spatial distribution in order to approximate their distance to the decision boundary. This adversarial query strategy can effectively improve the convergence speed of CNN training. 
Nevertheless, as analyzed in Section \ref{sec:Batch Mode DeepAL (BMDAL)}, this can easily lead to insufficient diversity of batch query samples (such that relevant knowledge regarding the data distribution is not fully utilized), which in turn leads to low or even invalid DL model training performance. 
A feasible strategy would thus be to use a hybrid query strategy in a batch query, taking into account both the information volume and diversity of samples in either an explicit or implicit manner.
The performance of early Batch Mode Active Learning (BMAL) algorithms is often excessively reliant on the measurement of similarity between samples.
In addition, these algorithms are often only good at exploitation (learners tend to focus only on samples near the current decision boundary, corresponding to high-information query strategies), meaning that the samples in the query batch cannot represent the true data distribution of the feature space (due to the insufficient diversity of batch sample sets). To address this issue, Exploration-P uses a deep neural network to learn the feature representation of the samples, then explicitly calculates the similarity between the samples. At the same time, the processes of exploitation and exploration (in the early stages of model training, learners use random sampling strategies for exploration purposes) are balanced to enable more accurate measurement of the similarity between samples. 
More specifically, Exploration-P uses the information entropy in Equation (\ref{eq: entropy}) to estimate the uncertainty of sample $x$ under the current model. The uncertainty of the selected sample set $S$ can be expressed as $E(S) = \sum_{x_i\in S}\mathbb{E}(x_i)$. Furthermore, to measure the redundancy between samples in the selected sample set $S$, Exploration-P uses $R(S)$ to represent the redundancy of the selected sample set $S$:
\begin{equation}
 R(S) = \sum_{x_i\in S}\sum_{x_j\in S}Sim(x_i,x_j),\quad Sim(x_i,x_j) = f(x_i)\mathcal{M}f(x_j),
\end{equation}
where $f(x)$ represents the features of sample $x$ extracted by the deep learning model $f$, $Sim(x_i,x_j)$ measures the similarity between two samples, and $\mathcal{M}$ is a similarity matrix (when $\mathcal{M}$ is the identity matrix, the similarity of two samples is the inner product of their feature vectors; $\mathcal{M}$ can also be learned as a parameter of $f$). Therefore, the selected sample set $S$ is expected to have the largest uncertainty and the smallest redundancy.
For this reason, Exploration-P considers these two strategies jointly, and the final objective is defined as:
\begin{equation}
\mathrm{I}(S)=E(S)-\frac{\alpha}{|S|} R(S),
\end{equation}
where $\alpha$ balances the weight of the two components of the hybrid query strategy, uncertainty and redundancy.
Moreover, DMBAL (Diverse Mini-Batch Active Learning) adds an informativeness weight to the optimization objective of K-means, and further presents an in-depth study of a hybrid query strategy that considers sample informativeness and diversity under the mini-batch sample query setting. DMBAL can easily be extended from generalized linear models to DL; this not only increases the scalability of DMBAL but also increases the diversity of actively queried samples in the mini-batch. Fig.\ref{fig:OneBatch} illustrates a schematic diagram of this idea. This hybrid query strategy is quite popular. For example, WI-DL (Weighted Incremental Dictionary Learning) mainly considers the two stages of a DBN. In the unsupervised feature learning stage, the key consideration is the representativeness of the data, while in the supervised fine-tuning stage, the uncertainty of the data is considered; these two indicators are then integrated and finally optimized using the proposed weighted incremental dictionary learning algorithm.
Although the above improvements have resulted in good performance, there is still a hidden danger that must be addressed: namely, that diversity-based strategies are not appropriate for all datasets. More specifically, the richer the category content of the dataset and the larger the batch size, the better the effect of diversity-based methods; by contrast, an uncertainty-based query strategy will perform better with smaller batch sizes and less rich content. These characteristics depend on the statistical properties of the dataset.
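A greedy batch selection in the spirit of Exploration-P's objective $\mathrm{I}(S)=E(S)-\frac{\alpha}{|S|}R(S)$ — an entropy reward penalized by similarity to already-chosen samples — can be sketched as follows. This is our own greedy approximation for illustration, not the paper's exact optimization procedure:

```python
import numpy as np

def hybrid_select(feats, probs, k, alpha=1.0):
    """Greedy hybrid batch selection: uncertainty minus redundancy.

    feats : (n, d) L2-normalized sample features (so Sim with M = I
            is the inner product, i.e., cosine similarity)
    probs : (n, c) predicted class probabilities
    Returns the indices of the k greedily selected samples."""
    ent = -np.sum(probs * np.log(probs + 1e-12), axis=1)  # E(x): uncertainty
    sim = feats @ feats.T                                 # Sim(x_i, x_j), M = I
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(len(feats)):
            if i in selected:
                continue
            # Redundancy of candidate i against the already-selected set
            red = sum(sim[i, j] for j in selected)
            gain = ent[i] - alpha * red / (len(selected) + 1)
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return selected
```

With two near-duplicate high-entropy samples in the pool, the redundancy penalty forces the second pick onto a dissimilar sample even if its entropy is slightly lower, which is exactly the exploitation/exploration balance the hybrid strategies above aim for.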
In the BMDAL context, when the data are unfamiliar and potentially unstructured, it is impossible to determine in advance which AL query strategy is more appropriate. In light of this, BADGE (Batch Active learning by Diverse Gradient Embeddings) samples groups of points that are disparate and of high magnitude when represented in a hallucinated gradient space, meaning that both the prediction uncertainty of the model and the diversity of the samples in a batch are considered simultaneously. Most importantly, BADGE can achieve an automatic balance between prediction uncertainty and sample diversity without the need for manual hyperparameter adjustment. 
Moreover, while BADGE considers this hybrid query strategy in an implicit way, WAAL (Wasserstein Adversarial Active Learning) proposes a hybrid query strategy that explicitly balances uncertainty and diversity. In addition, WAAL uses the Wasserstein distance to model the interactive procedure in AL as a distribution matching problem, derives losses from it, and then decomposes WAAL into two stages: DNN parameter optimization and query batch selection. 
TA-VAAL (Task-Aware Variational Adversarial Active Learning) also explores the balance of this hybrid query strategy. The assumption underpinning TA-VAAL is that the uncertainty-based method does not make good use of the overall data distribution, while the data distribution-based method often ignores the structure of the task. Consequently, TA-VAAL proposes to integrate the loss prediction module and the concept of RankCGAN into VAAL (Variational Adversarial Active Learning) , enabling both the data distribution and the model uncertainty to be considered. TA-VAAL has achieved good performance on various balanced and unbalanced benchmark datasets. The structure diagram of TA-VAAL and VAAL is presented in Fig.\ref{fig:TA-VAAL}.
\begin{figure}[!tp] 
	\centering 
	\includegraphics[width = 0.95\textwidth]{figure/TA-VAAL}
	\caption{Structure comparison chart of VAAL and TA-VAAL . 
1) VAAL uses labeled data and unlabeled data in a semi-supervised way to learn the latent representation space of the data, then selects the unlabeled data with the largest amount of information according to the latent space for labeling. 2) TA-VAAL expands VAAL and integrates the loss prediction module and RankCGAN into VAAL in order to consider data distribution and model uncertainty simultaneously.} \n\t\\label{fig:TA-VAAL}\n\\end{figure}\nNotably, although the hybrid query strategy achieves superior performance, the uncertainty-based AL query strategy is more convenient to combine with the output of the softmax layer of DL. Thus, the query strategy based on uncertainty is still widely used.", "id": "7e186f19-788f-4246-9ae0-6cbed00c6a9e", "level": "subsubsection", "origin_cites_number": 25, "parent_id": "0d6ad266-3e42-4732-a729-183187f8427f", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Deep Active Learning" ], [ "subsection", "Query Strategy Optimization in DeepAL" ], [ "subsubsection", "Uncertainty-based and Hybrid Query Strategies" ] ], "subsections": [], "title": "Uncertainty-based and Hybrid Query Strategies" }, { "cite_extract_rate": 0.518518518518518, "cites": [ 1044, 4641, 8717, 1042, 4599, 8316, 4610, 5530, 1050, 5529, 5528, 4618, 5527, 6991 ], "content": "As noted in Section \\ref{sec:The necessity and challenge of combining DL and AL}, which analyzes the challenge of combining DL and AL, the acquisition function based on uncertainty is an important research direction of many classic AL algorithms. Moreover, traditional DL methods rarely represent such model uncertainty.\n To solve the above problems, Deep Bayesian Active Learning appears. 
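To make the BADGE procedure described above concrete, the following sketch builds the hallucinated gradient embeddings and selects a diverse, high-magnitude batch via k-means++ seeding. This is a simplified illustration rather than the released implementation; in particular, seeding with the largest-norm embedding instead of a uniformly random point is our own choice, made for determinism:

```python
import numpy as np

def badge_embeddings(probs, hidden):
    """Hallucinated gradient embeddings: for each sample, the gradient of the
    cross-entropy loss w.r.t. the last linear layer, taking the model's own
    argmax prediction as the pseudo-label.

    probs: (n, c) softmax outputs; hidden: (n, d) penultimate-layer features.
    """
    n, c = probs.shape
    pseudo = probs.argmax(axis=1)
    delta = probs.copy()
    delta[np.arange(n), pseudo] -= 1.0          # (p - onehot(y_hat))
    # per-sample outer product, flattened to shape (n, c*d)
    return (delta[:, :, None] * hidden[:, None, :]).reshape(n, -1)

def kmeanspp_select(emb, k, rng):
    """k-means++ seeding over embeddings: later centers are drawn with
    probability proportional to squared distance to the nearest center,
    which jointly favors high gradient magnitude and batch diversity."""
    first = int(np.argmax((emb ** 2).sum(axis=1)))   # our deterministic seed
    centers = [first]
    d2 = ((emb - emb[first]) ** 2).sum(axis=1)
    for _ in range(k - 1):
        idx = int(rng.choice(len(emb), p=d2 / d2.sum()))
        centers.append(idx)
        d2 = np.minimum(d2, ((emb - emb[idx]) ** 2).sum(axis=1))
    return centers
```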
Given an input set $X$ and outputs $Y$ belonging to one of $c$ classes, a probabilistic neural network model can be defined as $f(\mathrm{x};\theta)$; $p(\theta)$ is a prior on the parameter space $\theta$ (usually Gaussian), and the likelihood $p(\mathrm{y} = c|\mathrm{x},\theta)$ is usually given by $\mathrm{softmax}(f(\mathrm{x};\theta))$. Our goal is to obtain the posterior distribution over $\theta$, as follows:
 \begin{equation}
 p(\theta|X, Y)=\frac{p(Y | X, \theta) p(\theta)}{p(Y | X)}.
 \end{equation}
 For a given new data point $\mathrm{x}^*$, $\hat{\mathrm{y}}$ is predicted by:
 \begin{equation}
 p\left(\hat{\mathrm{y}} | \mathrm{x}^{*}, X, Y\right)=\int p\left(\hat{\mathrm{y}} | \mathrm{x}^{*}, \theta\right) p(\theta | X, Y) d \theta=\mathbb{E}_{\theta \sim p(\theta | X, Y)}\left[f\left(\mathrm{x}^{*} ; \theta\right)\right].
 \end{equation}
DBAL combines BCNNs (Bayesian Convolutional Neural Networks) with AL methods to adapt BALD to the deep learning environment, thereby developing a new AL framework for high-dimensional data. This approach adopts the above method to first perform Gaussian prior modeling on the weights of a CNN, and then uses variational inference to obtain the posterior distribution of network predictions. In addition, in practice, researchers often also use a powerful and low-cost MC-dropout (Monte-Carlo dropout) stochastic regularization technique to obtain posterior samples, consequently attaining good performance on real-world datasets . Moreover, this regularization technique has been proven to be equivalent to variational inference . 
However, a core-set approach points out that DBAL is unsuitable for large datasets due to the need for batch sampling.
It should be noted here that while DBAL allows the use of dropout in testing for better confidence estimation, the analysis presented in contends that the performance of this method is similar to the performance of using neural network SR as uncertainty sampling, which requires vigilance. \nIn addition, \nDEBAL (Deep Ensemble Bayesian Active Learning) argues that the pattern collapse phenomenon in the variational inference method leads to the overconfident prediction characteristic of the DBAL method. For this reason, DEBAL combines the expressive power of ensemble methods with MC-dropout to obtain better uncertainty in the absence of trading representativeness.\nFor its part, BatchBALD opts to expand BALD to the batch query context; this approach no longer calculates the mutual information between a single sample and model parameters but rather recalculates the mutual information between the batch samples and the model parameters to jointly score the batch of samples. This enables BatchBALD to more accurately evaluate the joint mutual information. Inspired by the latest research on Bayesian core sets , ACS-FW (Active Bayesian CoreSets with Frank-Wolfe optimization) reconstructed the batch structure to optimize the sparse subset approximation of the log-posterior induced by the entire dataset. Using this similarity, ACS-FW then employs the Frank-Wolfe algorithm to enable effective Bayesian AL at scale, while its use of random projection has made it still more popular. Compared with other query strategies (e.g., maximizing the predictive entropy\n(MAXENT) and BALD ), ACS-FW achieves better coverage across the entire data manifold. 
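A minimal sketch of the MC-dropout BALD acquisition discussed above is given below. The helper names are ours, and the $T$ stochastic forward passes (running the network $T$ times with dropout kept active at test time) are assumed to be computed elsewhere; the score is the mutual information $\mathbb{I}[y;\theta|x] = \mathbb{H}[\mathbb{E}_T\, p] - \mathbb{E}_T\,\mathbb{H}[p]$:

```python
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy along the class axis."""
    return -np.sum(p * np.log(p + 1e-12), axis=axis)

def bald_scores(mc_probs):
    """BALD acquisition from MC-dropout samples.

    mc_probs: (T, n, c) class probabilities from T stochastic forward passes.
    Returns the mutual information between prediction and model parameters:
    entropy of the mean prediction minus the mean per-pass entropy. High
    scores mark samples the passes disagree on, i.e., the most informative.
    """
    mean_p = mc_probs.mean(axis=0)                 # predictive distribution
    predictive_entropy = entropy(mean_p)
    expected_entropy = entropy(mc_probs).mean(axis=0)
    return predictive_entropy - expected_entropy

def acquire(mc_probs, k):
    """Top-k unlabeled indices by BALD score (a simple batch query)."""
    return np.argsort(-bald_scores(mc_probs))[:k]
```

Note that a sample on which all passes agree scores zero even if each pass is itself uncertain; BALD targets model (epistemic) uncertainty, not label noise.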
\nDPEs (Deep Probabilistic Ensembles) introduces an expandable DPEs technology, which uses a regularized ensemble to approximate the deep BNN, and then evaluates the classification effect of these DPEs in a series of large-scale visual AL experiments.\nActiveLink (Deep Active Learning for Link Prediction in Knowledge Graphs) is inspired by the latest advances in Bayesian deep learning . Adopting the Bayesian view of the existing neural link predictors, it expands the uncertainty sampling method by using the basic structure of the knowledge graph, thereby creating a novel DeepAL method. ActiveLink further noted that although AL can sample efficiently, the model needs to be retrained from scratch for each iteration in the AL process, which is unacceptable in the DL model training context. A simple solution would be to use newly selected data to train the model incrementally, or to combine it with existing training data ; however, this would cause the model to be biased either towards a small amount of newly selected data or towards data selected early in the process. In order to solve this bias problem, ActiveLink adopts a principled and unbiased incremental training method based on meta-learning. More specifically, in each AL iteration, ActiveLink uses the newly selected samples to update the model parameters, then approximates the meta-objective of the model's future prediction by generalizing the model based on the samples selected in the previous iteration. 
This enables ActiveLink to strike a balance between the importance of the newly and previously selected data, and thereby to achieve an unbiased estimation of the model parameters.\nIn addition to the above-mentioned DBAL work, due to the lesser parameter of BNN and the uncertainty sampling strategy being similar to traditional AL, the research on DBAL is quite extensive, and there are many works related to this topic .", "id": "f610f09b-72e3-4867-aa7a-fd61d236efa2", "level": "subsubsection", "origin_cites_number": 27, "parent_id": "0d6ad266-3e42-4732-a729-183187f8427f", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Deep Active Learning" ], [ "subsection", "Query Strategy Optimization in DeepAL" ], [ "subsubsection", "Deep Bayesian Active Learning (DBAL)" ] ], "subsections": [], "title": "Deep Bayesian Active Learning (DBAL)" }, { "cite_extract_rate": 0.555555555555555, "cites": [ 5524, 1044, 5531, 8927, 1042, 4641, 5517, 5528, 5532, 5518 ], "content": "The term, density-based method, mainly refers to the selection of samples from the perspective of the set (core set ). The construction of the core set is a representative query strategy. This idea is mainly inspired by the compression idea of the core set dataset and attempts to use the core set to represent the distribution of the feature space of the entire original dataset, thereby reducing the labeling cost of AL. \nFF-Active (Farthest First Active Learning) is based on this idea and uses the farthest-first traversal in the space of neural activation over a representation layer to query consecutive points from the pool. It is worth noting here that FF-Active and Exploration-P resemble the way in which random queries are used in the early stages of AL to enhance AL's exploration ability, which prevents AL from falling into the trap of insufficient sample diversity. 
Similarly, increasing the diversity of the samples in a query batch helps to mitigate the sampling bias problem in batch querying. 
The Core-set approach attempts to solve this problem by constructing a core subset: it formulates sample selection as a k-Center problem, so that a model learned on the selected core set is competitive with one learned on the remaining data. However, the Core-set approach requires a large distance matrix to be built on the unlabeled dataset, meaning that this search process is computationally expensive; this disadvantage becomes more apparent on large-scale unlabeled datasets .
Active Palmprint Recognition applies DeepAL to high-dimensional and complex palmprint recognition data. Similar to the core-set concept, it regards AL as a binary classification task: it is expected that the labeled and unlabeled sample sets will have the same data distribution, making the two difficult to distinguish; that is, the goal is to find a labeled core subset with the same distribution as the original dataset. More specifically, because a heuristic generative model simulating the data distribution is difficult to train and unsuitable for high-dimensional and complex data such as palm prints, the author instead considers whether a sample can be distinguished from the unlabeled or labeled dataset with a high degree of confidence. Those samples that can be clearly distinguished are obviously different from the data distribution of the core annotation subset; such samples are then added to the annotation dataset for the next round of training.
Previous core-set-based methods often simply try to query data points that are as far apart as possible in order to cover all points of the data manifold, without considering density; this results in the queried data points over-representing sample points from sparse areas of the manifold.
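The k-Center greedy heuristic at the heart of the Core-set approach can be sketched as follows. This is a simplified version assuming precomputed network features; the full method additionally tightens this greedy initialization with a robust mixed-integer program:

```python
import numpy as np

def k_center_greedy(feats, labeled_idx, budget):
    """Greedy 2-approximation for the k-Center objective.

    feats: (n, d) feature vectors of the whole pool (from the network).
    labeled_idx: indices of already-labeled points (the initial centers).
    budget: number of new points to query.
    Each step picks the point farthest from its nearest center, so the
    selected set covers the feature space with (near-)minimal radius.
    """
    # distance from every point to its nearest current center
    min_d = np.min(
        np.linalg.norm(feats[:, None, :] - feats[labeled_idx][None, :, :], axis=2),
        axis=1,
    )
    chosen = []
    for _ in range(budget):
        i = int(np.argmax(min_d))
        chosen.append(i)
        min_d = np.minimum(min_d, np.linalg.norm(feats - feats[i], axis=1))
    return chosen
```

The full pairwise-distance computation here is exactly the cost the text criticizes: it scales quadratically with the pool size, which is why large-scale variants subsample or approximate the distances.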
Similar to , DAL (Discriminative Active Learning) also regards AL as a binary classification task and further aims to make the queried labeled dataset indistinguishable from the unlabeled dataset. The key advantage of DAL is that it can sample from the unlabeled dataset in proportion to the data density, without biasing the sample points towards sparse regions of the manifold. Moreover, the method proposed by DAL is not limited to classification tasks and is conceptually easy to transfer to other new tasks.
In addition to the corresponding query strategy, some researchers have also considered the impact of batch query size on query performance. For example, focus primarily on the optimization of query strategies in smaller batches, while recommended expanding the query scale of AL for large-scale sampling (10k or 500k samples at a time). Moreover, by integrating hundreds of models and reusing intermediate checkpoints, the distributed searching of training data on large-scale labeled datasets can be efficiently realized with a small computational cost. also proved that the performance of using the entire dataset for training is not the upper limit of performance, as well as that AL based on subsets specifically may yield better performance.
Furthermore, the attributes of the dataset itself also have an important impact on the performance of DeepAL. With this in mind, GA (Gradient Analysis) assesses the relative importance of image data in common datasets and proposes a general data analysis tool design to facilitate a better understanding of the diversity of training examples in the dataset.
GA finds that not all datasets can be trained on a small sub-sample set because the relative difference of sample importance in some datasets is almost negligible; therefore, it is not advisable to blindly use smaller sub-datasets in the AL context.\nIn addition, finds that compared with the Bayesian deep learning approach (Monte-Carlo dropout ) and density-based methods, ensemble-based AL can effectively offset the imbalance of categories in the dataset during the acquisition process, resulting in more calibration prediction uncertainty, and thus better performance.\nIn general, density-based methods primarily consider the selection of core subsets from the perspective of data distribution. There are relatively few related research methods, which suggests a new possible direction for sample querying.\n\\begin{figure}[!tp] \n\t\\centering \n\t\\subfloat[Active learning pipeline.] \n \t{\\centering \n \t\t\\includegraphics[width = 0.27\\textwidth]{figure/ALp}\n \t\t\\label{fig:ALp}\n \t}\\hspace{2mm}\n\t\\subfloat[Reinforced Active Learning (RAL) .] \n \t{\\centering \n \t\t\\includegraphics[width = 0.27\\textwidth]{figure/RAL}\n \t\t\\label{fig:RAL}\n \t}\\hspace{2mm}\n \\subfloat[Deep Reinforcement Active\nLearning (DRAL) .] 
\n \t{\\centering \n \t\t\\includegraphics[width = 0.37\\textwidth]{figure/DRAL}\n \t\t\\label{fig:DRAL}\n \t}\n\t\\caption{Comparison of standard AL, RAL and DRAL pipelines.} \n\t\\label{fig:ALp_RAL} \n\\end{figure}", "id": "fb0f4ba8-e10b-45d4-bec6-9cbf53f668c0", "level": "subsubsection", "origin_cites_number": 18, "parent_id": "0d6ad266-3e42-4732-a729-183187f8427f", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Deep Active Learning" ], [ "subsection", "Query Strategy Optimization in DeepAL" ], [ "subsubsection", "Density-based Methods" ] ], "subsections": [], "title": "Density-based Methods" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 7951, 5531, 5533, 5534, 5516, 5506 ], "content": "DeepAL is composed of two parts: deep learning and active learning. Manually designing these two parts requires a lot of energy and their performance is severely limited by the experience of researchers. Therefore, it has important significance to consider how to automate the design of deep learning models and active learning query strategies in DeepAL.\nTo this end, redefines the heuristic AL algorithm as a reinforcement learning problem and introduces a new description through a clear selection strategy. In addition, some researchers have also noted that, in traditional AL workflows, the acquisition function is often regarded as a fixed known prior, and that it will not be known whether this acquisition function is appropriate until the label budget is exhausted. This makes it impossible to flexibly and quickly tune the acquisition function. Accordingly, one good option may be to use reinforcement learning to dynamically tune the acquisition function. \nRAL (Reinforced Active Learning) proposes to use BNN as a learning predictor for acquisition functions. 
As such, all probability information provided by the BNN predictor will be combined to obtain a comprehensive probability distribution; subsequently, the probability distribution is sent to a BNN probabilistic policy network, which performs reinforcement learning in each labeling round based on the oracle feedback. This feedback will fine-tune the acquisition function, thereby continuously improving its quality. \nDRAL (Deep Reinforcement Active Learning) adopts a similar idea and designs a deep reinforcement active learning framework for the person Re-ID task. This approach uses the idea of reinforcement learning to dynamically adjust the acquisition function so as to obtain high-quality query samples. Fig.\\ref{fig:ALp_RAL} presents a comparison between traditional AL, RAL and DRAL pipelines. \nThe pipeline of AL is shown in Fig.\\ref{fig:ALp}. The standard AL pipeline usually consists of three parts. The oracle provides a set of labeled data; the predictor (here, BNN) is used to learn these data and provides predictable uncertainty for the guide. The guide is usually a fixed, hard-coded acquisition function that picks the next sample for the oracle to restart the cycle. \nThe pipeline of RAL (Reinforced Active Learning) is shown in Fig.\\ref{fig:RAL}. RAL replaces the fixed acquisition function with the policy BNN. The policy BNN learns in a probabilistic manner, obtains feedback from the oracle, and learns how to select the next optimal sample point (new parts in red) in a reinforcement learning-based manner. Therefore, RAL can adjust the acquisition function more flexibly to adapt to the existing dataset. \nThe pipeline of DRAL (Deep Reinforcement Active Learning) is shown in Fig.\\ref{fig:DRAL}. DRAL utilizes a deep reinforcement active learning framework for the person Re-ID task. 
For each query anchor (probe), the agent (reinforcement active learner) selects sequential instances from the gallery pool during the active learning process and hands them to the oracle to obtain manual annotations with binary feedback (positive/negative). The state evaluates the similarity relationships between all instances, and rewards are calculated based on the oracle's feedback to adjust the agent's queries.
On the other hand, Active-iNAS (Active Learning with incremental Neural Architecture Search) notices that most previous DeepAL methods assume that a suitable DL model has been designed for the current task, meaning that their primary focus is on how to design an effective query mechanism; however, the existing DL model is not necessarily optimal for the current DeepAL task. Active-iNAS accordingly challenges this assumption and uses NAS (neural architecture search) technology to dynamically search for the most effective model architectures while conducting active learning. 
There is also some work devoted to providing a convenient performance comparison platform for DeepAL; for example, discusses and studies the robustness and reproducibility of the DeepAL method in detail, and presents many useful suggestions.
In general, these query strategies are not independent of each other but are rather interrelated. The batch-based query mode of BMDAL provides the basis for updating and training the DL model on AL query samples. Although the query strategies in DeepAL are rich and complex, they are largely designed to take the diversity and uncertainty of query batches in BMDAL into account. 
Previous uncertainty-based methods often ignore the diversity within the batch; methods that address this shortcoming can be roughly divided into two categories: those that design a mechanism that explicitly encourages batch diversity in the input or learned representation space, and those that directly measure the mutual information (MI) of the entire batch.
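The idea of learning the acquisition function from oracle feedback, which RAL and DRAL realize with policy networks, can be illustrated at its simplest as a bandit over candidate strategies. This is a drastic simplification of our own, not the RAL/DRAL architecture; the reward signal (e.g., validation-accuracy gain after labeling a queried batch) is assumed to be supplied by the surrounding AL loop:

```python
import random

class AcquisitionBandit:
    """Epsilon-greedy bandit over a set of acquisition functions.

    Each candidate query strategy is an arm; after a labeling round, the
    chosen arm is rewarded with any scalar improvement signal, so the
    acquisition function is tuned online rather than fixed in advance.
    """
    def __init__(self, strategies, epsilon=0.1, seed=0):
        self.strategies = strategies                # name -> callable(pool, k)
        self.value = {name: 0.0 for name in strategies}
        self.count = {name: 0 for name in strategies}
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def pick(self):
        """Explore a random strategy with prob. epsilon, else exploit."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.strategies))
        return max(self.value, key=self.value.get)

    def update(self, name, reward):
        """Incremental mean update of the chosen arm's estimated value."""
        self.count[name] += 1
        self.value[name] += (reward - self.value[name]) / self.count[name]
```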
CEAL gradually feeds the samples from the unlabeled dataset to the initialized CNN, after which the CNN classifier outputs two types of samples: a small number of uncertain samples and a large number of samples with high prediction confidence. A small number of uncertain samples are labeled through the oracle, and the CNN classifier is used to automatically assign pseudo-labels to a large number of high-prediction confidence samples. These two types of samples are then used to fine-tune the CNN, and the updated process is repeated.} \n\t\\label{fig:CEAL} \n\\end{figure}\nFor example, CEAL (Cost-Effective Active Learning) enriches the training set by assigning pseudo-labels to samples with high confidence in model prediction in addition to the labeled dataset sampled by the query strategy. This expanded training set is then also used in the training of the DL model. This strategy is shown in Fig.\\ref{fig:CEAL}.\nAnother very popular strategy involves performing unsupervised training on labeled and unlabeled datasets and incorporating other strategies to train the entire network structure.\nFor example, WI-DL notes that full DBN training requires a large number of training samples, and it is impractical to apply DBN to a limited training set in an AL context. Therefore, in order to improve the training efficiency of DBN, WI-DL employs a combination of unsupervised feature learning on all datasets and supervised fine-tuning on labeled datasets.\nAt the same time, some researchers have considered using GAN (Generative Adversarial Networks) for data augmentation. \nFor example, GAAL (Generative Adversarial Active Learning) introduced the GAN to the AL query method for the first time. 
GAAL aims to use generative learning to generate samples with more information than the original dataset.\nHowever, random data augmentation does not guarantee that the generated samples will have more information than those contained in the original data, and could thus represent a waste of computing resources. \nAccordingly, BGADL (Bayesian Generative Active Deep Learning) expands the idea of GAAL and proposes a Bayesian generative active deep learning method. More specifically, BGADL combines the generative adversarial active learning , Bayesian data augmentation , ACGAN (Auxiliary-Classifier Generative Adversarial Networks) and VAE (Variational Autoencoder) methods, with the aim of generating samples of disagreement regions belonging to different categories. Structure comparison between GAAL and BGADL is presented in Fig.\\ref{fig:GAAL_BGADL}.\n\\begin{figure}[!tp] \n\t\\centering \n\t\\subfloat[Generative adversarial active learning (GAAL).] \n \t{\\centering \n \t\t\\includegraphics[width = 0.4\\textwidth]{figure/GAAL}\n \t\t\\label{fig:GAAL}\n \t}\\hspace{3mm}\n\t\\subfloat[Bayesian generative active deep learning (BGADL).] \n \t{\\centering \n \t\t\\includegraphics[width = 0.5\\textwidth]{figure/BGADL}\n \t\t\\label{fig:BGADL}\n \t}\n\t\\caption{Structure comparison chart of GAAL and BGADL . For more details, please see .} \n\t\\label{fig:GAAL_BGADL} \n\\end{figure}\nSubsequently, VAAL and ARAL (Adversarial Representation Active Learning) borrowed from several previous methods not only to train the network using labeled and unlabeled datasets but also to introduce generative adversarial learning into the network architecture for data augmentation purposes, thereby further improving the learning ability of the network.\nIn more detail, VAAL noticed that the batch-based query strategy based on uncertainty not only readily leads to insufficient sample diversity, but is also highly susceptible to interference from outliers. 
In addition, density-based methods are susceptible to $p$-norm limitations when applied to high-dimensional data, resulting in computed distances that are overly concentrated . To this end, VAAL proposes to use an adversarial representation learning method to distinguish between the latent-space encodings of labeled and unlabeled data, thus reducing interference from outliers. 
VAAL also uses labeled and unlabeled data to jointly train a VAE in a semi-supervised manner; the goal here is to deceive the adversarial network into predicting that all data points come from the labeled pool, in order to solve the problem of distance concentration. VAAL can learn an effective low-dimensional latent representation on a large-scale dataset, and further provides an effective sampling method by jointly learning the representation and the uncertainty.
Subsequently, ARAL expanded VAAL , aiming to use as few manually annotated samples as possible while still making full use of the existing or generated data information in order to improve the model's learning ability. In addition to using labeled and unlabeled datasets, ARAL also uses samples produced by deep generative networks to jointly train the entire model. ARAL comprises both VAAL and adversarial representation learning . By using VAAL to learn the latent feature representation space of the labeled and unlabeled data, the unlabeled samples with the largest amount of information can be selected accordingly. At the same time, both real and generated data are used to enhance the model's learning ability through adversarial representation learning . Similarly, TA-VAAL also extends VAAL by using the global data structure from VAAL and local task-related information from the learning loss for sample querying purposes. We present the framework of ARAL in Fig.\ref{fig:VAAL_ARAL}. 
\n\\begin{figure}[!tp] \n\t\\centering \n\t\\includegraphics[width = 0.95\\textwidth]{figure/VAAL_ARAL}\n\t\\caption{The overall structure of ARAL . ARAL uses not only real datasets (both labeled and unlabeled), but also generated datasets to jointly train the network. The whole network consists of an encoder ($E$), generator ($G$), discriminator ($D$), classifier ($C$) and sampler ($S$), and all parts of the model are trained together.} \n\t\\label{fig:VAAL_ARAL} \n\\end{figure}\nUnlike ARAL and VAAL , which use labeled and unlabeled datasets for adversarial representation learning, SSAL (Semi-Supervised Active Learning) implements a new training method. More specifically, SSAL uses unsupervised, supervised, and semi-supervised learning methods across AL cycles, and makes full use of existing information for training without increasing the cost of labeling as much as possible. In more detail, the process is as follows: before the AL starts, first use labeled and unlabeled data for unsupervised pretraining. In each AL learning cycle, first, perform supervised training on the labeled dataset, then perform semi-supervised training on all datasets. This represents an attempt to devise a wholly new training method. The author finds that, compared with the difference between the sampling strategies, this model training method yields a surprising performance improvement.\nAs analyzed above, this kind of exploration of training methods and data utilization skills is also essential; in fact, the resultant performance gains may even exceed those generated by changing the query strategy. 
Applying these techniques enables the full use of existing data without any associated increase in labeling costs, which helps in resolving the issue of the number of AL query samples being insufficient to support the updating of the DL model.", "id": "d6b7acac-8ffe-421c-a9ab-57e8404c786d", "level": "subsection", "origin_cites_number": 18, "parent_id": "3de3bdd7-51c6-4365-9c87-09efb8923a32", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Deep Active Learning" ], [ "subsection", "Data Expansion of Labeled Samples in DeepAL" ] ], "subsections": [], "title": "Data Expansion of Labeled Samples in DeepAL" }, { "cite_extract_rate": 0.47058823529411703, "cites": [ 2009, 810, 8923, 5508, 5523, 5537, 1050, 825 ], "content": "\\label{sec: Common Framework DeepAL}\nAs mentioned in Section \\ref{sec:The necessity and challenge of combining DL and AL}, a processing pipeline inconsistency exists between AL and DL; thus, only fine-tuning the DL model in the AL framework, or simply combining AL and DL to treat them as two separate problems, may cause divergence. For example, first conducts offline supervised training of the DL model on two different types of session datasets to grant basic conversational capabilities to the backbone network, then enables the online AL stage to interact with human users, enabling the model to be improved in an open way based on user feedback. AL-DL proposes an AL method for DL models with DBNs, while ADN further proposes an active deep network architecture for sentiment classification. proposes an AL algorithm using CNN for captcha recognition. However, generally speaking, the above methods first perform routine supervised training on this depth model on the labeled dataset, then actively sample based on the output of the depth model. 
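As an illustration of the pseudo-labeling strategy with which CEAL expands the training set, the following sketch splits the unlabeled pool into oracle queries and auto-labeled high-confidence samples. The fixed entropy threshold is our simplification; CEAL gradually adjusts this threshold across rounds:

```python
import numpy as np

def split_pool(probs, entropy_threshold, query_size):
    """CEAL-style split of the unlabeled pool (simplified sketch).

    probs: (n, c) softmax outputs on the unlabeled pool.
    Returns (query_idx, pseudo_idx, pseudo_labels):
      - query_idx: the most uncertain samples, sent to the oracle;
      - pseudo_idx: high-confidence samples, auto-labeled with the model's
        own argmax prediction and added to the fine-tuning set.
    """
    ent = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    query_idx = np.argsort(-ent)[:query_size]      # most uncertain first
    confident = ent < entropy_threshold
    confident[query_idx] = False                   # the two sets never overlap
    pseudo_idx = np.flatnonzero(confident)
    return query_idx, pseudo_idx, probs[pseudo_idx].argmax(axis=1)
```

After labeling the queried samples, both sets are used to fine-tune the network and the split is recomputed, matching the loop in Fig.\ref{fig:CEAL}.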
There are many similar related works that adopt this split approach, treating the training of AL and deep models as two independent problems and consequently increasing the possibility that the two will diverge. Although this method achieved some success at the time, a general framework that closely combines the two tasks of DL and AL would play a vital role in the performance improvement and promotion of DeepAL.
CEAL is one of the first works to combine AL and DL in order to solve the problem of deep image classification. CEAL merges deep convolutional neural networks into AL, and consequently proposes a novel DeepAL framework. It sends samples from the unlabeled dataset to the CNN step by step, after which the CNN classifier outputs two types of samples: a small number of uncertain samples and a large number of samples with high prediction confidence. A small number of uncertain samples are labeled by the oracle, and the CNN classifier is used to automatically assign pseudo-labels to a large number of high-prediction-confidence samples. Then, these two types of samples are used to fine-tune the CNN and the update process is repeated. In Fig.\ref{fig:CEAL}, we present the overall framework of CEAL. Moreover, HDAL (Heuristic Deep Active Learning) uses a similar framework for face recognition tasks: it combines AL with a deep CNN model to integrate feature learning and AL query model training.
In addition, Fig.\ref{fig:DeepAL} illustrates a widespread general framework for DeepAL tasks. Related works include , among others. More specifically, proposes a framework that uses an FCN (Fully Convolutional Network) and AL to solve the medical image segmentation problem using a small number of annotations. It first trains FCN on a small number of labeled datasets, then extracts the features of the unlabeled datasets through FCN, using these features to estimate the uncertainty and similarity of unlabeled samples.
This strategy, which is similar to that described in Section \\ref{sec: Hybrid query strategy}, helps to select highly uncertain and diverse samples to be added to the labeled dataset in order to start the next stage of training. \nActive Palmprint Recognition proposes a similar DeepAL framework for the palmprint recognition task. The difference is that, inspired by domain adaptation , Active Palmprint Recognition regards AL as a binary classification task: it is expected that the labeled and unlabeled sample sets have the same data distribution, making the two difficult to distinguish. Supervised training can then be performed directly on a small number of labeled datasets, which reduces the burden associated with labeling. \nAnother work proposes a DeepAL framework for defect detection. This approach performs uncertainty sampling based on the output features of the detection model to generate a list of candidate samples for annotation. In order to further take the diversity of defect categories in the samples into account, it designs an average margin method to control the sampling ratio of each defect category.\n\\begin{figure}[!tp] \n\t\\centering \n\t\\includegraphics[width = 0.95\\textwidth]{figure/AL-MV}\n\t\\caption{Taking a common CNN as an example, this figure presents a comparison between the traditional uncertainty measurement method and the uncertainty measurement method of synthesizing information in two stages (i.e., the feature extraction stage and task learning stage).} \n\t\\label{fig:AL-MV} \n\\end{figure}\nIn the above methods, it is common for the final output of the DL model to be used as the basis for determining the uncertainty or diversity of a sample (Active Palmprint Recognition uses the output of the first fully connected layer). Different from these methods, some works have also used the output of the DL model's middle hidden layers. 
As analyzed in Section \\ref{sec: Hybrid query strategy} and Section \\ref{sec:The necessity and challenge of combining DL and AL}, due to the difference in learning paradigms between the deep and shallow models, the traditional uncertainty-based query strategy cannot be directly applied to the DL model. In addition, unlike the shallow model, the deep model can be regarded as being composed of two stages, namely the feature extraction stage and the task learning stage. It is inaccurate to use only the output of the last layer of the DL model as the basis for evaluating the sample prediction uncertainty; this is because the uncertainty of the DL model is in fact composed of the uncertainty of these two stages. A schematic diagram of this concept is presented in Fig.\\ref{fig:AL-MV}. \nTo this end, AL-MV (Active Learning with Multiple Views) treats the features from different hidden layers in the middle of the CNN as multiview data, taking the uncertainty of both stages into account; the AL-MV algorithm is designed to adaptively weight the uncertainty of each layer, enabling more accurate measurement of the sampling uncertainty.\n LLAL (Learning Loss for Active Learning) also uses a similar idea. More specifically, LLAL designs a small loss prediction module that is attached to the target network, using the outputs of multiple hidden layers of the target network as the input of the loss prediction module. The loss prediction module is trained to predict the target loss of the unlabeled dataset, while the top-$K$ strategy is used to select the query samples. LLAL achieves a task-agnostic AL framework design at a small parameter cost and further achieves competitive performance on a variety of mainstream visual tasks (namely, image classification, target detection, and human pose estimation).\nA similar strategy has also been used to implement a DeepAL framework for finger bone segmentation tasks. 
This work uses a Deeply Supervised U-Net as the segmentation network, then uses the outputs of the multilevel segmentation hidden layers together with the output of the last layer as the input of AL; this information is then integrated to form the basis for evaluating the amount of information in each sample. \nWe take LLAL as an example to explicate the overall network structure of this idea in Fig.\\ref{fig:LLAL}.\n\\begin{figure}[!tp] \n\t\\centering \n\t\\includegraphics[width = 0.95\\textwidth]{figure/LLAL}\n\t\\caption{The overall framework of LLAL . The black line represents the stage of training model parameters, optimizing the overall loss composed of the target loss and the loss-prediction loss. The red line represents the sample query phase of AL. The output of the multiple hidden layers of the DL model is used as the input of the loss prediction module, while the top-$K$ unlabeled data points are selected according to the predicted losses and assigned labels by the oracle.} \n\t\\label{fig:LLAL} \n\\end{figure}\nResearch on the general framework is highly beneficial to the development and promotion of DeepAL, as such a task-independent framework can be conveniently transplanted to other fields. In the current fusion of DL and AL, DL is primarily responsible for feature extraction, while AL is mainly responsible for sample querying; thus, a deeper and tighter fusion will help DeepAL achieve better performance. Of course, this will require additional exploration and effort on the part of researchers. 
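As a concrete illustration of this generic loop, the CEAL-style split of the unlabeled pool into oracle-queried and pseudo-labeled subsets can be sketched as follows; the least-confidence measure, the threshold value, and all names are our own illustrative assumptions rather than details from the cited papers:

```python
import numpy as np

def ceal_style_split(probs, k=10, tau=0.95):
    """One round of a CEAL-style split: the k least-confident samples
    are sent to the oracle, while samples whose top-class probability
    exceeds tau receive pseudo-labels from the current classifier.
    `probs` holds softmax outputs of shape (n_samples, n_classes)."""
    confidence = probs.max(axis=1)
    oracle_idx = np.argsort(confidence)[:k]   # least confident -> oracle
    confident = confidence >= tau
    confident[oracle_idx] = False             # never pseudo-label queried samples
    pseudo_idx = np.flatnonzero(confident)
    pseudo_labels = probs[pseudo_idx].argmax(axis=1)
    return oracle_idx, pseudo_idx, pseudo_labels
```

Both subsets would then be used to fine-tune the network before the next round, mirroring the iterative update described for CEAL above.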
Finally, the challenges of combining DL and AL and related work on the corresponding solutions are summarized in Table \\ref{tab: DAL_challenges_solutions}.", "id": "e325634b-5e6a-4c21-ae7a-d63f06cf344c", "level": "subsection", "origin_cites_number": 17, "parent_id": "3de3bdd7-51c6-4365-9c87-09efb8923a32", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Deep Active Learning" ], [ "subsection", "DeepAL Generic Framework" ] ], "subsections": [], "title": "DeepAL Generic Framework" }, { "cite_extract_rate": 0.584905660377358, "cites": [ 1044, 7951, 4641, 5537, 4024, 5522, 2009, 8923, 8927, 1042, 5517, 4599, 7952, 5523, 5533, 5532, 4610, 5531, 5536, 1050, 5528, 5527, 6991, 8926, 5526, 5509, 5508, 5520, 5518, 8924, 5519 ], "content": "\\label{sec: DeepAL Stopping Strategy}\nIn addition to querying strategies and training methods, an appropriate stopping strategy has an important impact on DeepAL performance. At present, most DeepAL methods use a predefined stopping criterion, and when the criterion is satisfied, they stop querying labels from the oracle. Such predefined stopping criteria include the maximum number of iterations, a minimum threshold on the change in classification accuracy, the minimum number of labeled samples, the expected accuracy value, etc.\nAlthough simple, these predefined criteria are likely to cause DeepAL to fail to achieve optimal performance: premature termination of AL annotation querying leads to large performance losses in the model, while excessive annotation wastes a lot of the annotation budget. Therefore, Stabilizing Predictions (SP) makes a comprehensive review of AL stopping strategies and proposes an AL stopping strategy based on stability prediction. 
Specifically, the SP predivides a part of the samples from the unlabeled dataset to form a stop set (the stop set does not need to be labeled), and the SP checks the prediction stability on the stop set in each iteration. When the prediction performance of the model on the stop set stabilizes, the iteration is stopped. A well-trained model often has a stable predictive ability, and SP takes advantage of this feature. The predivided stop set does not require specific labeling information, which avoids additional labeling costs contrary to the purpose of AL. Although SP is a stopping strategy proposed mainly for AL, it also is relevant for DeepAL.\n\\begin{table}[!tp]\n \\centering\n \\scriptsize\n \\caption{The challenges of combining DL and AL, as well as a summary of related work on the corresponding solutions.}\n \\begin{tabular}{|l|l|l|c|l|}\\toprule\n \\hline\n Challenges & Solutions & Foundation & Category& Publications\\\\\\hline\n \\multirow{7}{*}{\\makecell[l]{Model\\\\uncertainty\\\\in Deep\\\\Learning}} & \\multirow{7}{*}{\\makecell[l]{Query\\\\strategy\\\\optimization}} & \\multirow{7}{*}{\\makecell[l]{Batch\\\\Mode\\\\DeepAL\\\\(BMDAL)}}& \\makecell[l]{Uncertainty-based\\\\ and Hybrid \\\\Query Strategies}&\\makecell[l]{}\\\\\n \\cline{4-5}\n &&&\\makecell[l]{Deep Bayesian\\\\Active Learning\\\\(DBAL)}& \\makecell[l]{\\\\}\\\\\n \\cline{4-5}\n &&&\\makecell[l]{Density-based\\\\Methods}& \\\\\n \\cline{4-5}\n &&&\\makecell[l]{Automated\\\\Design of DeepAL}& \\\\\n \\hline\n \\makecell[l]{Insufficient\\\\data for\\\\labeled samples} & \\makecell[l]{Data\\\\expansion of\\\\labeled samples}&\\multicolumn{2}{c|}{-}& \\\\\n \\hline\n \\makecell[l]{Processing\\\\pipeline\\\\inconsistency} & \\makecell[l]{Common\\\\framework\\\\DeepAL}&\\multicolumn{2}{c|}{-}&\\makecell[l]{\\\\}\\\\\n \\hline\n \\end{tabular}\n \\label{tab: DAL_challenges_solutions}\n\\end{table}", "id": "334f2d49-93f9-4238-9f87-25f656785fde", "level": "subsection", "origin_cites_number": 53, 
"parent_id": "3de3bdd7-51c6-4365-9c87-09efb8923a32", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Deep Active Learning" ], [ "subsection", "DeepAL Stopping Strategy" ] ], "subsections": [], "title": "DeepAL Stopping Strategy" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec: Various applications of DeepAL}\nToday, DeepAL has been applied to areas including but not limited to visual data processing (such as object detection, semantic segmentation, etc.), NLP (such as machine translation, text classification, semantic analysis, etc.), speech and audio processing, social network analysis, medical image processing, wildlife protection, industrial robotics, and disaster analysis, among other fields. In this section, we provide a systematic and detailed overview of existing DeepAL-related work from an application perspective.", "id": "dcc2782c-e704-4c3c-9abb-4c1b9d13e14c", "level": "section", "origin_cites_number": 0, "parent_id": "69f98c5d-4a32-4911-97b2-4b657d88f671", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Application of DeepAL in fields such as vision and NLP" ] ], "subsections": [ "4e9307b4-41e6-4c33-a90c-b467ddf251a8", "84077737-96d6-4cf9-bb3e-a190b2e3eb47", "eedaf731-74b8-49c4-895b-ed1869063c9a" ], "title": "Application of DeepAL in fields such as vision and NLP" }, { "cite_extract_rate": 0, "cites": [], "content": "Just as DL is widely used in the computer vision field, the first field in which DeepAL is expected to reach its potential is that of computer vision. 
In this section, we mainly discuss DeepAL-related research in the field of visual data processing.", "id": "4e9307b4-41e6-4c33-a90c-b467ddf251a8", "level": "subsection", "origin_cites_number": 0, "parent_id": "dcc2782c-e704-4c3c-9abb-4c1b9d13e14c", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Application of DeepAL in fields such as vision and NLP" ], [ "subsection", "Visual Data Processing" ] ], "subsections": [ "ebf2ea49-34b5-4e93-9905-e894e39da2ba", "bb9e3757-032e-41ad-b515-b103736802f7", "af0a8362-ff08-4ae6-9a01-e0c6659ec7da" ], "title": "Visual Data Processing" }, { "cite_extract_rate": 0.210526315789473, "cites": [ 5538, 1050, 8924, 5539 ], "content": "As with DL, the classification and recognition of images in DeepAL form the basis for research into other vision tasks. One of the most important problems that DeepAL faces in the field of image vision tasks is that of how to efficiently query samples of high-dimensional data (an area in which traditional AL performs poorly) and obtain satisfactory performance at the smallest possible labeling cost.\nTo solve this problem, CEAL assigns pseudo-labels to samples with high confidence and adds them to the highly uncertain sample set queried using the uncertainty-based AL method, then uses the expanded training set to train the DeepAL model image classifier.\n first integrated the criteria of AL into the deep belief network and subsequently conducted extensive research on classification tasks on a variety of real uni-modal and multi-modal datasets.\nWI-DL uses the DeepAL method to simultaneously consider the two selection criteria of maximizing representativeness and uncertainty on hyperspectral image (HSI) datasets for remote sensing classification tasks. Similarly, also studied the classification of HSI. introduces AL to initialize HSI and then performs transfer learning. 
This work also recommends constructing and connecting higher-level features to source and target HSI data in order to further overcome the cross-domain disparity. Another work proposes a unified deep network combined with active transfer learning, thereby training the HSI classifier well while using less labeled training data.\nMedical image analysis is also an important application. For example, one study explores the use of AL rather than random learning to train convolutional neural networks for tissue (e.g., stroma, lymphocytes, tumor, mucosa, keratin pearls, blood, and background/adipose) classification tasks.\nA comprehensive review of DeepAL-related methods in the field of medical image analysis has also been conducted.\nAs discussed above, since the annotation of medical images requires strong professional knowledge, it is usually both very difficult and very expensive to find well-trained experts willing to perform annotations. In addition, DL has achieved impressive performance on various image feature tasks. Therefore, a large number of works continue to focus on combining DL and AL in order to apply DeepAL to the field of medical image analysis .\nThe DeepAL method is also used to classify in situ plankton and perform the automatic counting of cells .\nIn addition, DeepAL also has a wide range of applications in our daily life. For example, one work proposes an AL algorithm that uses CNN for verification code recognition. 
It can use the ability to obtain labeled data for free to avoid human intervention and greatly improve the recognition accuracy when less labeled data is used.\nHDAL combines the excellent feature extraction capabilities of deep CNN and the ability to save on AL labeling costs to design a heuristic deep active learning framework for face recognition tasks.", "id": "ebf2ea49-34b5-4e93-9905-e894e39da2ba", "level": "subsubsection", "origin_cites_number": 19, "parent_id": "4e9307b4-41e6-4c33-a90c-b467ddf251a8", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Application of DeepAL in fields such as vision and NLP" ], [ "subsection", "Visual Data Processing" ], [ "subsubsection", "Image classification and recognition" ] ], "subsections": [], "title": "Image classification and recognition" }, { "cite_extract_rate": 0.25, "cites": [ 8923, 5508, 5537 ], "content": "Object detection and semantic segmentation have important applications in various fields, including autonomous driving, medical image processing, and wildlife protection. However, these fields are also limited by the higher sample labeling cost. Thus, the lower labeling cost of DeepAL is expected to accelerate the application of the corresponding DL models in certain real-world areas where labeling is more difficult.\nOne work designs a DeepAL framework for object detection, which uses a layered architecture as a committee in the spirit of \"query by committee\" to select the image set to be queried, while at the same time introducing a similar exploration/exploitation trade-off strategy.\nDeepAL is also widely used in natural biological fields and industrial applications. For example, one framework uses deep neural networks to quickly and automatically extract transferable information, and further combines transfer learning and AL to design a DeepAL framework for species identification and counting in camera trap images. 
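The "query by committee" selection mentioned above is commonly scored with vote entropy over the committee members' hard predictions; the following is a minimal sketch under our own illustrative conventions (it is not code from the cited detection framework):

```python
import numpy as np

def vote_entropy(committee_preds, n_classes):
    """Query-by-committee disagreement via vote entropy.
    `committee_preds` has shape (n_members, n_samples), each row being
    one member's hard class predictions; a higher score means more
    disagreement among the members and hence a better query candidate."""
    n_samples = committee_preds.shape[1]
    scores = np.zeros(n_samples)
    for c in range(n_classes):
        frac = (committee_preds == c).mean(axis=0)  # vote share of class c
        nz = frac > 0
        scores[nz] -= frac[nz] * np.log(frac[nz])
    return scores
```

A sample on which all members agree scores zero; the queried image set would be the samples with the highest scores.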
\nOne study uses unmanned aerial vehicles (UAV) to obtain images for wildlife detection purposes; moreover, to enable this wildlife detector to be reused, the study uses AL and introduces transfer sampling (TS) to find the corresponding area between the source and target datasets, thereby facilitating the transfer of data to the target domain. \nAnother work proposes a DeepAL framework for deep object detection in autonomous driving to train LiDAR 3D object detectors.\nA further work proposes the adaptation of a widespread DeepAL framework to defect detection in real industries, along with an uncertainty sampling method for use in generating candidate label categories. This work uses the average margin method to set the sampling scale of each defect category and is thus able to obtain the required performance with less labeled data.\nIn addition, DeepAL also has important applications in the area of medical image segmentation. For example, one approach proposes an AL-based transfer learning mechanism for medical image segmentation, which can effectively improve the image segmentation performance on a limited labeled dataset.\nAnother approach combines FCN and AL to create a DeepAL framework for biological-image segmentation. This work uses the uncertainty and similarity information provided by the FCN to extend the maximum set cover problem, significantly reducing the required labeling workload by pointing out the most effective labeling areas.\nDASL (Deep Active Self-paced Learning) proposes a deep region-based network, Nodules R-CNN, for pulmonary nodule segmentation tasks. This work generates segmentation masks for use as examples, and at the same time, combines AL and SPL (Self-Paced Learning) to propose a new deep active self-paced learning strategy that reduces the labeling workload.\nRelated work proposes a Nodule-plus Region-based CNN for pulmonary nodule detection and segmentation in 3D thoracic Computed Tomography (CT). 
This work combines AL and SPL strategies to create a new deep self-paced active learning (DSAL) strategy, which reduces the annotation workload and makes effective use of unannotated data.\nFurther work proposes a new deeply supervised active learning method for finger bone segmentation tasks. This model can be fine-tuned in an iterative and incremental learning manner and uses the output of the intermediate hidden layer as the basis for sample selection. Compared with full annotation, it achieved comparable segmentation results using fewer samples.", "id": "bb9e3757-032e-41ad-b515-b103736802f7", "level": "subsubsection", "origin_cites_number": 12, "parent_id": "4e9307b4-41e6-4c33-a90c-b467ddf251a8", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Application of DeepAL in fields such as vision and NLP" ], [ "subsection", "Visual Data Processing" ], [ "subsubsection", "Object detection and semantic segmentation" ] ], "subsections": [], "title": "Object detection and semantic segmentation" }, { "cite_extract_rate": 0.4, "cites": [ 5540, 5506 ], "content": "Compared with the image task that only needs to process information in the spatial dimension, the video task also needs to process the information in the temporal dimension.\nThis makes the task of annotating the video more expensive, which also means that the need to introduce AL has become more urgent. DeepAL also has broader application scenarios in this field.\nFor example, one work proposes to use imitation learning to perform navigation tasks. The visual environment and actions taken by the teacher viewed from a first-person perspective are used as the training set. Through training, it is hoped that students will become able to predict and execute corresponding actions in their own environment. 
When performing tasks, students use deep convolutional neural networks for feature extraction, learn imitation strategies, and further use the AL method to select samples with insufficient confidence, which are added to the training set to update the action strategy. This approach significantly improves the initial strategy using fewer samples.\nDeActive proposes a DeepAL activity recognition model. Compared with the traditional DL activity recognition model, DeActive requires fewer labeled samples, consumes fewer resources, and achieves high recognition accuracy.\nOne work minimizes the annotation cost of the video-based person Re-ID dataset by integrating AL into the DL framework. Similarly, another work proposes a deep reinforcement active learning method for person Re-ID, using oracle feedback to guide the agent (i.e., the model in the reinforcement learning process) in selecting the next uncertain sample. The agent selection mechanism is continuously optimized through alternately refined reinforcement learning strategies.\nFurther work proposes an active learning object detection method based on convolutional neural networks for pedestrian target detection in video and static images.
Below, we introduce some of the most famous DeepAL methods in the NLP field.", "id": "84077737-96d6-4cf9-bb3e-a190b2e3eb47", "level": "subsection", "origin_cites_number": 0, "parent_id": "dcc2782c-e704-4c3c-9abb-4c1b9d13e14c", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Application of DeepAL in fields such as vision and NLP" ], [ "subsection", "Natural Language Processing (NLP)" ] ], "subsections": [ "6785e494-c738-484a-9553-4331052998ea", "60de43bd-c79a-412f-89c5-24e11735adfa", "a8571c36-5a98-407d-b652-ba8c8c39187c", "2110095a-f072-45b4-9a58-3fb5bf39160a", "04105094-0a90-4d96-8404-fd3673003366" ], "title": "Natural Language Processing (NLP)" }, { "cite_extract_rate": 1, "cites": [ 7948, 5541 ], "content": "Machine translation has very important application value, but it usually requires a large number of parallel corpora as a training set. For many low-resource language pairs, building such a corpus requires a very high cost.\nFor this reason, proposes to use the AL framework to select information source sentences to construct a parallel corpus. It proposes two effective sentence selection methods for AL: selection based on semantic similarity and decoder probability. Compared with traditional methods, the two proposed sentence selection methods show considerable advantages.\n proposes a curriculum learning framework related to AL for machine translation tasks. It can decide which training samples to show to the model during different periods of training based on the estimated difficulty of a sample and the current competence of the model. This method not only effectively improves the training efficiency but also obtains a good accuracy improvement. 
This kind of thinking is also very valuable for DeepAL's sample selection strategy.", "id": "6785e494-c738-484a-9553-4331052998ea", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "84077737-96d6-4cf9-bb3e-a190b2e3eb47", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Application of DeepAL in fields such as vision and NLP" ], [ "subsection", "Natural Language Processing (NLP)" ], [ "subsubsection", "Machine translation" ] ], "subsections": [], "title": "Machine translation" }, { "cite_extract_rate": 0.4, "cites": [ 5509, 5542, 4023, 1096 ], "content": "Text classification tasks also face the challenge of excessive labeling costs, such as patent classification and clinical text classification . These labeling tasks often need to be completed by experts; moreover, the datasets and the texts in each document are often very large, which makes it difficult for human experts to complete the corresponding labeling tasks.\nOne work claims to be the first AL method for text classification with CNNs; it focuses on selecting those samples that have the greatest impact on the embedding space. It proposes a method for sentence classification that selects instances containing words whose embeddings are likely to be updated with the greatest magnitude, thereby rapidly learning discriminative, task-specific embeddings. The authors also extend this method to text classification tasks, outperforming the baseline AL methods in both sentence and text classification. Another work proposes a new DeepAL framework for text classification tasks that uses an RNN as the acquisition function in AL; the proposed method can effectively reduce the number of labeled instances required for deep learning while saving training time without reducing model accuracy. \nOther research focuses on the problem of sampling bias in deep active classification and applies active text classification to large-scale text corpora . 
These methods generally show better performance than that of the traditional AL-based baseline methods, and more relevant DeepAL-based text classification applications can be found in .", "id": "60de43bd-c79a-412f-89c5-24e11735adfa", "level": "subsubsection", "origin_cites_number": 10, "parent_id": "84077737-96d6-4cf9-bb3e-a190b2e3eb47", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Application of DeepAL in fields such as vision and NLP" ], [ "subsection", "Natural Language Processing (NLP)" ], [ "subsubsection", "Text classification" ] ], "subsections": [], "title": "Text classification" }, { "cite_extract_rate": 0, "cites": [], "content": "In this typical NLP task, the aim is to make the computer understand a natural language description.\nThe relevant application scenarios are numerous and varied, including but not limited to sentiment classification, news identification, etc. \nMore specifically, for example, one approach uses restricted Boltzmann machines (RBM) to construct an active deep network (ADN), then conducts unsupervised training on the labeled and unlabeled datasets. ADN uses a large number of unlabeled datasets to improve the model's generalization ability, and further employs AL in a semi-supervised learning framework, unifying the selection of labeled data and classifiers in a semi-supervised classification framework; this approach obtains competitive results on sentiment classification tasks.\nAnother work proposes a human-computer collaborative learning system for news accuracy detection tasks (that is, identifying misleading and false information in news) that utilizes only a limited number of annotation samples. This system is a deep AL-based model that uses 1-2 orders of magnitude fewer annotation samples than fully supervised learning. 
Such a reduction in the number of samples greatly accelerates the convergence speed of the model and results in an astonishing 25\\% average gain in detection performance.", "id": "a8571c36-5a98-407d-b652-ba8c8c39187c", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "84077737-96d6-4cf9-bb3e-a190b2e3eb47", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Application of DeepAL in fields such as vision and NLP" ], [ "subsection", "Natural Language Processing (NLP)" ], [ "subsubsection", "Semantic analysis" ] ], "subsections": [], "title": "Semantic analysis" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 5543 ], "content": "Information extraction aims to extract and simplify the most important information from large texts, which is an important basis for correlation analysis between different concepts.\nOne work uses relevant tweets from disaster-stricken areas to extract information that facilitates the identification of infrastructure damage during earthquakes. To this end, it combines RNN- and GRU-based models with AL, using AL-based methods to pre-train the model so that it will retrieve tweets featuring infrastructure damage in different regions, thereby significantly reducing the manual labeling workload. \nIn addition, entity resolution (ER) is the task of recognizing the same real entities with different representations across databases and represents a key step in knowledge base creation and text mining. \nOne study uses the combination of DL and AL to determine how the technical level of NER (Named Entity Recognition) can be improved in the case of a small training set. 
One study developed a DL-based ER method that combines transfer learning and AL to design an architecture that allows for the learning of a model that is transferable from high-resource environments to low-resource environments.\nAnother work proposes a novel ALPNN (Active Learning Policy Neural Network) design to recognize the concepts and relationships in large EEG (electroencephalogram) reports; this approach can help humans extract available clinical knowledge from a large number of such reports.", "id": "2110095a-f072-45b4-9a58-3fb5bf39160a", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "84077737-96d6-4cf9-bb3e-a190b2e3eb47", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Application of DeepAL in fields such as vision and NLP" ], [ "subsection", "Natural Language Processing (NLP)" ], [ "subsubsection", "Information extraction" ] ], "subsections": [], "title": "Information extraction" }, { "cite_extract_rate": 1, "cites": [ 2009, 5544 ], "content": "Intelligent question-answering is also a common processing task in the NLP context, and DL has achieved impressive results in these areas. However, the performance of these applications still relies on the availability of massive labeled datasets; AL is expected to bring new hope to this challenge.\nThe automatic question-answering system has a very wide range of applications in industry, and DeepAL is also highly valuable in this field. \nFor example, one system uses the online AL strategy combined with the DL model to achieve an open-domain dialogue by interacting with real users and learning incrementally from user feedback in each round of dialogue.\nAnother work finds that the tasks for which AL strategies are typically designed (e.g., classification) often have only one correct answer, and that the corresponding uncertainty-based measurements are often calculated based on the output of the model. 
Many real-world vision tasks often have multiple correct answers, which leads to the overestimation of uncertainty measures and sometimes even worse performance than random sampling baselines. For this reason, this work proposes to estimate the uncertainty in the model's hidden space rather than in its output space for Visual Question Answer (VQA) generation, thus overcoming the paraphrasing nature of language.", "id": "04105094-0a90-4d96-8404-fd3673003366", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "84077737-96d6-4cf9-bb3e-a190b2e3eb47", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Application of DeepAL in fields such as vision and NLP" ], [ "subsection", "Natural Language Processing (NLP)" ], [ "subsubsection", "Question-answering" ] ], "subsections": [], "title": "Question-answering" }, { "cite_extract_rate": 0.24074074074074, "cites": [ 5514, 5538, 4023, 5545, 5257, 7948, 1050, 5542, 5508, 5539, 5544, 2893, 2009, 5540, 7796, 4968, 8465, 5541, 5509, 7953, 5506, 5543, 8923, 8422, 1096, 8924 ], "content": "The emergence of DeepAL is exciting, as it is expected to reduce annotation costs by orders of magnitude while maintaining performance levels. For this reason, DeepAL is also widely used in other fields.\nThese applications include, but are not limited to, gene expression, robotics, wearable device data analysis, social networking, ECG signal analysis, etc.\nFor some more specific examples, MLFS (Multi-Level Feature Selection) combines DL and AL to select genes/miRNAs based on expression profiles and proposes a novel multi-level feature selection method. MLFS also considers the biological relationship between miRNAs and genes and applies this method to miRNA expansion tasks.\nMoreover, failures of real-world robots are expensive. 
proposes a risk-aware resampling technique; this approach uses AL together with existing solvers and DL to optimize the robot's trajectory, enabling it to effectively deal with the collision problem in scenes with moving obstacles, and verifies the effectiveness of the DeepAL method on a real nano-quadcopter.\n further proposes an active trajectory generation framework for the inverse dynamics model of the robot control algorithm, which enables the systematic design of the information trajectory used to train the DNN inverse dynamics module.\nIn addition, uses sensors installed in wearable devices or mobile terminals to collect user movement information for human activity recognition purposes. proposes a DeepAL framework for activity recognition with context-aware annotator selection. ActiveHARNet (Active Learning for Human Activity Recognition) proposes a resource-efficient deep ensembled model that supports incremental learning and inference on the device, utilizes the approximation in the BNN to represent the uncertainty of the model, and further proves the feasibility of ActiveHARNet deployment and incremental learning on two public datasets.\nFor its part, DALAUP (Deep Active Learning for Anchor User Prediction) designs a DeepAL framework for anchor user prediction in social networks that reduces the cost of annotating anchor users and improves the prediction accuracy.\nDeepAL is also used in the classification of electrocardiogram (ECG) signals. For example, proposes an active DL-based ECG signal classification method. proposes an AL-based ECG classification method using eigenvalues and DL. 
The use of the AL method effectively reduces the cost of having medical experts label ECG signals.\nFurthermore, the cost of label annotation in the speech and audio fields is also relatively high.\n finds that a model trained on a corpus composed of thousands of recordings collected by a small number of speakers does not generalize to new domains; the authors therefore developed a practical scheme that involves using AL to train deep neural networks for speech emotion recognition tasks when label resources are limited.\nIn general, the current applications of DeepAL are mainly focused on visual image processing tasks, although there are also applications in NLP and other fields. Compared with DL and AL, DeepAL is still in the preliminary stage of research, meaning that the corresponding classic works are relatively few; however, it still has the same broad application scenarios and practical value as DL. In addition, in order to facilitate readers' access to specific applications of DeepAL in related fields, we have classified and summarized all application scenarios and datasets used by survey-related work in Section \\ref{sec: Various applications of DeepAL} in detail. 
The specific information is shown in Table \\ref{tab: Application of DeepAL}.\n\\begin{table}[!tp]\n\\caption{DeepAL's research examples in Vision, NLP and other fields.}\n \\centering\n \\scriptsize\n \\begin{threeparttable}\n \\begin{tabular}{|c|c|l|l|c|}\\toprule\n \\hline\n Field & Task & Publications & Datasets & Scenes\\\\ \\hline\n \\multirow{21}{*}{\\makecell[c]{Vision}} & \\multirow{7}{*}{\\makecell[c]{Image \\\\classification \\\\ and\\\\ recognition}} & \\makecell[l]{} & \\makecell[l]{CACD , Caltech-256 , VidTIMIT ,\\\\ CK , MNIST , CIFAR 10 ,\\\\ emoFBVP , MindReading \\\\Cool PHP CAPTCHA } & \\makecell[c]{Handwritten numbers, \\\\face, CAPTCHA\\\\ recognition, etc.}\\\\\\cline{3-5}\n && & \\makecell[l]{PaviaC, PaviaU, Botswana ,\\\\ Salinas Valley, Indian Pines , \\\\Washington DC Mall, Urban } & \\makecell{Hyperspectral\\\\ image}\\\\\n \\cline{3-5}\n &&\\makecell[l]{\\\\\\\\}& \\makecell[l]{Erie County , EEG , \\\\BreaKHis , \\\\ SVEB, SVDB } & Biomedical\\\\\n \\cline{2-5}\n & \\multirow{4}{*}{\\makecell[c]{\\makecell[c]{Object \\\\detection}}} & &VOC , Kitti & -- \\\\\n \\cline{3-5}\n &&& \\makecell[l]{SS , eMML , NACTI\\tnote{1}, CCT\\tnote{2}, UAV\\tnote{3}} & \\makecell{Biodiversity survey}\\\\\n \\cline{3-5}\n && &KITTI & Autonomous driving\\\\\n \\cline{3-5}\n && & NEU-DET & Defect detection\\\\\n \\cline{2-5}\n & \\makecell{Semantic \\\\segmentation} && \\makecell[l]{SPIM , Confocal, LIDC-IDRI ,\\\\MICCAI, Lymph node }& Bio-medical image \\\\\\cline{2-5}\n & \\multirow{5}{*}{\\makecell[c]{Video \\\\processing}} & & Mash-simulator\\tnote{4} & \\makecell[c]{Autonomous\\\\ navigation}\\\\\n \\cline{3-5}\n &&&\\makecell[l]{OPPORTUNITY , WISDM ,\\\\ SenseBox , Skoda Daphnet ,CASAS }&Smart home\\\\\n \\cline{3-5}\n &&&\\makecell[l]{PRID , MARS , BDD100K ,\\\\DukeMTMC-VideoReID ,\\\\ CityPersons , Caltech Pedestrian}&Person Re-id\\\\\n \\hline\n \\multirow{14}{*}{\\makecell[c]{NLP}} & \\makecell[c]{Machine\\\\ translation} & &\\makecell[l]{OPUS , 
UNPC ,\\\\IWSLT, WMT }&\\makecell[c]{Ind-En, Ch-En, En-Vi, \\\\Fr-En, En-De, etc.}\\\\\n \\cline{2-5}\n & \\makecell[c]{Text \\\\classification}&&\\makecell[l]{CR\\tnote{5}, Subj, MR\\tnote{6}, MuR\\tnote{7}, DR \\\\AGN, DBP, AMZP, AMZF, YRF }&-- \\\\ \\cline{2-5}\n & \\multirow{3}{*}{\\makecell[c]{Semantic \\\\analysis}} & &MOV , BOO, DVDs, ELE, KIT &\\makecell[c]{Sentiment \\\\classification}\\\\\n \\cline{3-5}\n &&&\\makecell[l]{KDnugget’s Fake News\\tnote{8}, \\\\Harvard Dataverse , Liar }&\\makecell[c]{News veracity \\\\detection}\\\\\n \\cline{2-5}\n &\\multirow{5}{*}{\\makecell[c]{Information \\\\extraction}}&&Italy, Iran-Iraq, Mexico earthquake dataset & Disaster assessment \\\\\n \\cline{3-5}\n &&&Temple University Hospital\\tnote{10}& \\makecell[c]{Electroencephalography\\\\(EEG) reports}\\\\\\cline{3-5}\n &&&\\makecell[l]{CoNLL , NCBI , MedMentions ,\\\\OntoNotes , DBLP, FZ, AG , Cora } &\\makecell[c]{Named entity \\\\recognition (NER)}\\\\\n \\cline{2-5}\n &\\multirow{2}{*}{\\makecell[c]{Question\\\\answering}}&&CMDC , JabberWacky’s chatlogs\\tnote{9} &Dialogue generation\\\\\n \\cline{3-5}\n &&&Visual Genome , VQA &\\makecell[c]{Visual question answer (VQA)}\\\\\n \\hline\n \\multirow{7}{*}{Other} &\\multirow{7}{*}{--}& &BC, HCC, Lung&Gene expression\\\\\\cline{3-5}\n &&&EATG , Crazyflie 2.0\\tnote{11} &Robotics\\\\\\cline{3-5}\n &&&HHAR , NWFD & Smart device\\\\\\cline{3-5}\n &&&Foursquare, Twitter & Social network\\\\\\cline{3-5}\n &&&MIT-BIH , INCART, SVDB & \\makecell[c]{Electrocardiogram (ECG)\\\\signal classification}\\\\\\cline{3-5}\n &&&MSP-Podcast & \\makecell[c]{Speech emotion recognition}\\\\\\hline\n \\end{tabular}\n \\begin{tablenotes}\n \\footnotesize\n \\item[1] http://lila.science/datasets/nacti\n \\item[2] http://lila.science/datasets/caltech-camera-traps\n \\item[3] http://kuzikus-namibia.de/xe\\_index.html\n \\item[4] https://github.com/idiap/mash-simulator \n \\item[5] www.cs.uic.edu/liub/FBS/sentiment-analysis.html\n \\item[6] Subj 
and MR datasets are available at: http://www.cs.cornell.edu/people/pabo/movie-review-data/\n \\item[7] http://www.cs.jhu.edu/˜mdredze/datasets/sentiment/\n \\item[8] https://github.com/GeorgeMcIntire/fake\\_real\\_news\\_dataset\n \\item[9] http://www.jabberwacky.com/j2conversations. JabberWacky is an in-browser, open-domain, retrieval-based bot.\n \\item[10] https://www.isip.piconepress.com/projects/tuh\\_eeg/\n \\item[11] https://www.bitcraze.io/\n \\item[--] Non-specific application scenarios\n \\end{tablenotes}\n \\end{threeparttable}\n \\label{tab: Application of DeepAL}\n\\end{table}", "id": "eedaf731-74b8-49c4-895b-ed1869063c9a", "level": "subsection", "origin_cites_number": 108, "parent_id": "dcc2782c-e704-4c3c-9abb-4c1b9d13e14c", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Application of DeepAL in fields such as vision and NLP" ], [ "subsection", "Other Applications" ] ], "subsections": [], "title": "Other Applications" }, { "cite_extract_rate": 0.875, "cites": [ 1042, 5523, 4024, 514, 5534, 8926, 5522 ], "content": "\\label{sec: Discussion and future directions}\nDeepAL combines the common advantages of DL and AL: it inherits not only DL's ability to process high-dimensional image data and conduct automatic feature extraction but also AL's potential to effectively reduce annotation costs. DeepAL, therefore, has fascinating potential especially in areas where labels require high levels of expertise and are difficult to obtain.\nMost recent work reveals that DeepAL has been successful in many common tasks. DeepAL has attracted the interest of a large number of researchers by reducing the cost of annotation and its ability to implement the powerful feature extraction capabilities of DL; consequently, the related research work is also extremely rich. 
However, there are still a large number of unanswered questions on this subject.\nAs has been discovered, the results reported on the random sampling baseline (RSB) differ significantly between different studies. For example, under the same settings, using 20\\% of the labeled data of CIFAR 10, the RSB performance reported by is 13\\% higher than that in . Secondly, the same DeepAL method may yield different results in different studies. For example, using 40\\% of the labeled data of CIFAR 100 and VGG16 as the extraction network, the reported results of and differ by 8\\%. Furthermore, the latest DeepAL research also exhibits some inconsistencies. For example, and point out that diversity-based methods have always been better than uncertainty-based methods, and that uncertainty-based methods perform worse than RSB; however, the latest research of shows that this is not the case.\nCompared with AL's strategic selection of high-value samples, RSB has been regarded as a strong baseline . However, the above problems reveal an urgent need to design a general performance evaluation platform for DeepAL work, as well as to determine a unified high-performance RSB. Secondly, the reproducibility of different DeepAL methods is also an important issue. Highly reproducible DeepAL methods make it easier to evaluate the performance of different approaches. A common evaluation platform should be used for experiments under consistent settings, and snapshots of experimental settings should be shared. In addition, multiple repeated experiments with different initializations under the same experimental conditions should be implemented, as this could effectively avoid misleading conclusions caused by experimental setup problems. Researchers should pay sufficient attention to these inconsistent studies to enable them to clarify the principles involved. On the other hand, adequate ablation experiments and transfer experiments are also necessary. 
The former will make it easier for us to determine which improvements bring about performance gains, while the latter can help to ensure that the AL selection strategy does indeed select high-value samples regardless of the specific dataset.\nThe current research directions regarding DeepAL methods focus primarily on the improvement of AL selection strategies, the optimization of training methods, and the improvement of task-independent models. \nAs noted in Section \\ref{sec: Query Strategy Optimization in DeepAL}, the improvement of AL selection strategies is currently centered on explicitly or implicitly combining uncertainty-based and diversity-based query strategies. Moreover, hybrid selection strategies are increasingly favored by researchers. \nMeanwhile, the optimization of training methods mainly focuses on labeled datasets, unlabeled datasets, or the use of methods such as GAN to expand data, as well as the hybrid training method of unsupervised, semi-supervised, and supervised learning across the AL cycle. This training method promises to deliver even more performance improvements than are thought to be achievable through changes to the selection strategy. In fact, this compensates for the fact that DL models require a large number of labeled training samples while AL selects only a limited number of labeled samples. In addition, the use of unlabeled or generated datasets is also conducive to making full use of existing information without adding to the annotation costs. Furthermore, the incremental training method is also an important research direction. From a computing resource perspective, it is unacceptable to train a deep model from scratch in each cycle. While naive incremental training can cause model parameters to drift, the huge potential savings on resources are quite attractive. Although related research remains quite scarce, this is still a very promising research direction. 
\nTask independence is also an important research direction, as it helps to make DeepAL models more directly and widely extensible to other tasks. However, the related research remains insufficient, and the corresponding DeepAL methods tend to focus only on the uncertainty-based selection method. Because DL itself is easier to integrate with the uncertainty-based AL selection strategy, we believe that uncertainty-based methods will continue to dominate task-independent research directions in the future. On the other hand, it may also be advisable to explicitly take the diversity-based selection strategy into account; of course, this will also give rise to great challenges. \nIn addition, it should be pointed out that blindly pursuing the idea of training models on smaller subsets would be unwise, as the relative difference in sample importance in some datasets with a large variety of content and a large number of samples can almost be ignored.\nThere is no conflict between the above-mentioned improvement directions; thus, a mixed improvement strategy is an important development direction for the future. In general, DeepAL research has significant practical application value in terms of both labeling costs and application scenarios; however, DeepAL research remains in its infancy at present, and there is still a long way to go in the future.", "id": "eb711660-bd71-47e8-b216-91168204d5d2", "level": "section", "origin_cites_number": 8, "parent_id": "69f98c5d-4a32-4911-97b2-4b657d88f671", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Discussion and future directions" ] ], "subsections": [], "title": "Discussion and future directions" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec: Summary and conclusions}\nFor the first time, the necessity and challenges of combining traditional active learning and deep learning have been comprehensively analyzed and summarized. 
In response to these challenges, we analyze and compare existing work from three perspectives: query strategy optimization, labeled sample data expansion, and model generality. In addition, we also summarize the stopping strategy of DeepAL. Then, we review the related work of DeepAL from the perspective of the application. Finally, we conduct a comprehensive discussion on the future direction of DeepAL. As far as we know, this is the first comprehensive and systematic review in the field of deep active learning.\n\\begin{acks}\nThis work was partially supported by the NSFC under Grant (No.61972315 and No.62072372) and the Shaanxi Science and Technology Innovation Team Support Project under grant agreement (No.2018TD-026) and the Australian Research Council Discovery Early Career Researcher Award (No.DE190100626).\n\\end{acks}\n\\bibliographystyle{ACM-Reference-Format}\n\\normalem\n\\bibliography{DeepAL}\n\\appendix\n\\end{document}\n\\endinput", "id": "533d8a22-af2d-4c05-ba9a-53183224b451", "level": "section", "origin_cites_number": 0, "parent_id": "69f98c5d-4a32-4911-97b2-4b657d88f671", "prefix_titles": [ [ "title", "A Survey of Deep Active Learning" ], [ "section", "Summary and conclusions" ] ], "subsections": [], "title": "Summary and conclusions" } ]
113
[ 5507, 1044, 5511, 4023, 5509, 5508, 5510, 5506, 684, 5515, 5514, 2909, 514, 5512, 166, 8923, 5516, 7950, 7948, 7217, 4507, 1496, 97, 5513, 7300, 7949, 7584, 8924, 5517, 8716, 8812, 5518, 5519, 5521, 4641, 1042, 5523, 1050, 5520, 4618, 5522, 5524, 8925, 8717, 4024, 2009, 5525, 8926, 5526, 4599, 8316, 4610, 5530, 5529, 5528, 5527, 6991, 5531, 8927, 5532, 7951, 5533, 5534, 7100, 1001, 7952, 5536, 5535, 5680, 810, 5537, 825, 5538, 5539, 5540, 5541, 5542, 1096, 5543, 5544, 5545, 5257, 2893, 7796, 4968, 8465, 7953, 8422 ]
1.413549
[ "Lifeng Han", "Gareth J. F. Jones", "Alan F. Smeaton" ]
Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods
2021
2021-05-05T18:28:10Z
cs.CL
To facilitate effective translation modeling and translation studies, one of the crucial questions to address is how to assess translation quality. From the perspectives of accuracy, reliability, repeatability and cost, translation quality assessment (TQA) itself is a rich and challenging task. In this work, we present a high-level and concise survey of TQA methods, including both manual judgement criteria and automated evaluation metrics, which we classify into further detailed sub-categories. We hope that this work will be an asset for both translation model researchers and quality assessment researchers. In addition, we hope that it will enable practitioners to quickly develop a better understanding of the conventional TQA field, and to find corresponding closely relevant evaluation solutions for their own needs. This work may also serve to inspire further development of quality assessment and evaluation methodologies for other natural language processing (NLP) tasks in addition to machine translation (MT), such as automatic text summarization (ATS), natural language understanding (NLU) and natural language generation (NLG). \footnote{authors GJ and AS in alphabetic order}
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "6a75a2f0-f319-4440-b296-cb2eb0722a2e", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ] ], "subsections": [ "3b626ccf-78b0-4e5d-a303-dac86144c819", "4cdd5363-4d5c-4f8e-b175-a5f7de42d3ca", "49723647-b1d3-43c7-acc5-d99388d7916e", "3217cd33-b9b0-4887-8ed0-a4a6e0619721", "97d72fde-8d63-4c85-976d-be6d9a7b7fc8" ], "title": "root" }, { "cite_extract_rate": 0.1, "cites": [ 6987, 1551, 38, 303 ], "content": "Machine translation (MT) research, starting from the 1950s , has been one of the main research topics in computational linguistics (CL) and natural language processing (NLP), and has influenced and been influenced by several other language processing tasks such as parsing and language modeling. Starting from rule-based methods to example-based, and then statistical methods , to the current paradigm of neural network structures , MT quality continue to improve. However, as MT and translation quality assessment (TQA) researchers report, MT outputs are still far from reaching human parity . MT quality assessment is thus still an important task to facilitate MT research itself, and also for downstream applications. TQA remains a challenging and difficult task because of the richness, variety, and ambiguity phenomena of natural language itself, e.g. the same concept can be expressed in different word structures and patterns in different languages, even inside one language .\nIn this work, we introduce human judgement and evaluation (HJE) criteria that have been used in standard international shared tasks and more broadly, such as NIST , WMT , and IWSLT . 
We then introduce automated TQA methods, including the automatic evaluation metrics that were proposed within these shared tasks and beyond.\nRegarding Human Assessment (HA) methods, we categorise them into traditional and advanced sets, with the first set including intelligibility, fidelity, fluency, adequacy, and comprehension, and the second set including task-oriented, extended criteria, utilizing post-editing, segment ranking, crowd source intelligence (direct assessment), and revisiting traditional criteria. \nRegarding automated TQA methods, we classify these into three categories including simple n-gram based word surface matching, deeper linguistic feature integration such as syntax and semantics, and deep learning (DL) models, with the first two regarded as traditional and the last one regarded as advanced due to the recent appearance of DL models for NLP. We further divide each of these three categories into sub-branches, each with a different focus. Of course, this classification does not have clear boundaries. For instance, some automated metrics are involved in both n-gram word surface similarity and linguistic features. This paper differs from the existing works by introducing recent developments in MT evaluation measures, the different classifications from manual to automatic evaluation methodologies, the introduction of more recently developed quality estimation (QE) tasks, and its concise presentation of these concepts.\nWe hope that our work will shed light on this field and offer a useful guide for both MT researchers and researchers in other relevant NLP disciplines, from the similarity and evaluation point of view, to find useful quality assessment methods, either from the manual or automated perspective, inspired by this work. 
This might include, for instance, natural language generation , natural language understanding , and automatic summarization .\nThe rest of the paper is organized as follows: Sections 2 and 3 present human assessment and automated assessment methods respectively;\nSection 4 presents some discussions and perspectives; Section 5 summarizes our conclusions and future work. We also list some further relevant readings in the appendices, such as evaluating methods of TQA itself, MT QE, and mathematical formulas.\\footnote{This work is based on an earlier preprint edition }", "id": "3b626ccf-78b0-4e5d-a303-dac86144c819", "level": "section", "origin_cites_number": 40, "parent_id": "6a75a2f0-f319-4440-b296-cb2eb0722a2e", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section we introduce human judgement methods, as reflected in Fig.~\\ref{fig:humanMTEfig}. This categorises these human methods as Traditional and Advanced. 
\n\\begin{figure}[!t]\n\\centering\n\\includegraphics*[height=2.1in,width=3in]{./figs/human_evaluation_pic-cropped.pdf}\n\\caption{Human Assessment Methods}\n\\label{fig:humanMTEfig}\n\\end{figure}", "id": "4cdd5363-4d5c-4f8e-b175-a5f7de42d3ca", "level": "section", "origin_cites_number": 0, "parent_id": "6a75a2f0-f319-4440-b296-cb2eb0722a2e", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Human Assessment Methods" ] ], "subsections": [ "fc4c49f1-0f76-4ba1-ba85-8421b5bef6f1", "86788fac-2abe-4bf3-b54e-a503b4d497fc" ], "title": "Human Assessment Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "fc4c49f1-0f76-4ba1-ba85-8421b5bef6f1", "level": "subsection", "origin_cites_number": 0, "parent_id": "4cdd5363-4d5c-4f8e-b175-a5f7de42d3ca", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Human Assessment Methods" ], [ "subsection", "Traditional Human Assessment" ] ], "subsections": [ "86b7c19b-9ae2-4434-89bc-d593bbaba95c", "8657b1df-8e26-485a-824f-646c0165d7b4", "b65e3fc1-d892-41c1-b387-0055ce4e646b" ], "title": "Traditional Human Assessment" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nThe earliest human assessment methods for MT can be traced back to around 1966. They include the intelligibility and fidelity used by the automatic language processing advisory committee (ALPAC) .\nThe requirement that a translation is intelligible means that, as far as possible, the translation should read like normal, well-edited prose and be readily understandable in the same way that such a sentence would be understandable if originally composed in the translation language. 
The requirement that a translation is of high fidelity or accuracy includes the requirement that the translation should, as little as possible, twist, distort, or controvert the meaning intended by the original.", "id": "86b7c19b-9ae2-4434-89bc-d593bbaba95c", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "fc4c49f1-0f76-4ba1-ba85-8421b5bef6f1", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Human Assessment Methods" ], [ "subsection", "Traditional Human Assessment" ], [ "subsubsection", "\\emph{Intelligibility and Fidelity" ] ], "subsections": [], "title": "\\emph{Intelligibility and Fidelity" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nIn the 1990s, the Advanced Research Projects Agency (ARPA) created a methodology to evaluate machine translation systems using the adequacy, fluency and comprehension of the MT output, which was adapted in MT evaluation campaigns including .\n\\begin{comment}\n\\begin{eqnarray}\n\\text{Comprehension}=\\frac{\\#\\text{Correct}}{6} \\\\\n\\text{Fluency}=\\frac{\\frac{\\text{Judgment point}-1}{\\text{S}-1}}{\\#\\text{Sentences in passage}} \\\\\n\\text{Adequacy}=\\frac{\\frac{\\text{Judgment point}-1}{\\text{S}-1}}{\\#\\text{Fragments in passage}} \n\\end{eqnarray}\n\\end{comment}\nTo set up this methodology, the human assessor is asked to look at each fragment, delimited by syntactic constituents and containing sufficient information, and judge its adequacy on a 1-to-5 scale. Results are computed by averaging the judgments over all of the decisions in the translation set.\nFluency evaluation is compiled in the same manner as for adequacy, except that the assessor is to make intuitive judgments on a sentence-by-sentence basis for each translation. Human assessors are asked to determine whether the translation is good English without reference to the correct translation. 
Fluency evaluation determines whether a sentence is well-formed and fluent in context.\nComprehension relates to ``Informativeness'', whose objective is to measure a system's ability to produce a translation that conveys sufficient information, such that people can gain necessary information from it. The reference set of expert translations is used to create six questions, each with six possible answers, including ``none of the above'' and ``cannot be determined''.", "id": "8657b1df-8e26-485a-824f-646c0165d7b4", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "fc4c49f1-0f76-4ba1-ba85-8421b5bef6f1", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Human Assessment Methods" ], [ "subsection", "Traditional Human Assessment" ], [ "subsubsection", "\\emph{Fluency, Adequacy and Comprehension" ] ], "subsections": [], "title": "\\emph{Fluency, Adequacy and Comprehension" }, { "cite_extract_rate": 0, "cites": [], "content": "}\n\\newcite{BangaloreRambowWhittaker2000} classified accuracy into several categories including simple string accuracy, generation string accuracy, and two corresponding tree-based accuracy metrics. \\newcite{reeder2004amta} found the correlation between fluency and the number of words it takes to distinguish between human translation and MT output.\nThe ``Linguistics Data Consortium (LDC)'' \\footnote{https://www.ldc.upenn.edu} designed two five-point scales representing fluency and adequacy for the annual NIST MT evaluation workshop. The developed scales became a widely used methodology when manually evaluating MT by assigning values. The five-point scale for adequacy indicates how much of the meaning expressed in the reference translation is also expressed in a translation hypothesis; the second five-point scale indicates how fluent the translation is, involving both grammatical correctness and idiomatic word choices. 
\n\\newcite{SpeciaHajlaouiHallettAziz2011} conducted a study of MT adequacy and broke it into four levels, from score 4 to 1: highly adequate, the translation faithfully conveys the content of the input sentence; fairly adequate, where the translation generally conveys the meaning of the input sentence, there are some problems with word order or tense/voice/number, or there are repeated, added or non-translated words; poorly adequate, the content of the input sentence is not adequately conveyed by the translation; and completely inadequate, the content of the input sentence is not conveyed at all by the translation.", "id": "b65e3fc1-d892-41c1-b387-0055ce4e646b", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "fc4c49f1-0f76-4ba1-ba85-8421b5bef6f1", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Human Assessment Methods" ], [ "subsection", "Traditional Human Assessment" ], [ "subsubsection", "\\emph{Further Development" ] ], "subsections": [], "title": "\\emph{Further Development" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "86788fac-2abe-4bf3-b54e-a503b4d497fc", "level": "subsection", "origin_cites_number": 0, "parent_id": "4cdd5363-4d5c-4f8e-b175-a5f7de42d3ca", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Human Assessment Methods" ], [ "subsection", "Advanced Human Assessment" ] ], "subsections": [ "93d527d3-de18-43b1-b60b-3a6951749f62", "ef5972d4-4a1b-4a40-bb42-5f8a26ea3b38", "ba679a83-9ad1-4e02-9762-6d392ccebb03", "c7f2eb04-ae8b-4c29-b84a-25c3c1647921", "d8a1163f-5f13-4fc9-b1ed-35ea52883e29", "55ed2831-702c-4a98-a341-f81079a61df4" ], "title": "Advanced Human Assessment" }, { "cite_extract_rate": 0, "cites": [], "content": "}\n\\newcite{WhiteTaylor1998} developed a task-oriented evaluation methodology for Japanese-to-English translation to measure MT systems in 
light of the tasks for which their output might be used. They seek to associate the diagnostic scores assigned to the output used in the DARPA (Defense Advanced Research Projects Agency) \\footnote{https://www.darpa.mil} evaluation with a scale of language-dependent tasks, such as scanning, sorting, and topic identification. They develop an MT proficiency metric with a corpus of multiple variants which are usable as a set of controlled samples for user judgments. The principal steps include identifying the user-performed text-handling tasks, discovering the order of text-handling task tolerance, analyzing the linguistic and non-linguistic translation problems in the corpus used in determining task tolerance, and developing a set of source language patterns which correspond to diagnostic target phenomena. A brief introduction to task-based MT evaluation work was shown in their later work .\n\\newcite{Voss06task-basedevaluation} introduced task-based MT output evaluation by the extraction of three types of elements: \\textit{who}, \\textit{when}, and \\textit{where}. 
They extended their work later into event understanding .", "id": "93d527d3-de18-43b1-b60b-3a6951749f62", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "86788fac-2abe-4bf3-b54e-a503b4d497fc", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Human Assessment Methods" ], [ "subsection", "Advanced Human Assessment" ], [ "subsubsection", "\\emph{Task-oriented" ] ], "subsections": [], "title": "\\emph{Task-oriented" }, { "cite_extract_rate": 0, "cites": [], "content": "}\n\\newcite{KingbelisHovy2003} extends a large range of manual evaluation methods for MT systems which, in addition to the earlier mentioned accuracy, include \\textit{suitability}, whether even accurate results are suitable in the particular context in which the system is to be used; \\textit{interoperability}, whether with other software or hardware platforms; \\textit{reliability}, i.e., don't break down all the time or take a long time to get running again after breaking down; \\textit{usability}, easy to get the interfaces, easy to learn and operate, and looks pretty; \\textit{efficiency}, when needed, keep up with the flow of dealt documents; \\textit{maintainability}, being able to modify the system in order to adapt it to particular users; and \\textit{portability}, one version of a system can be replaced by a new version, because MT systems are rarely static and they tend to improve over time as resources grow and bugs are fixed.", "id": "ef5972d4-4a1b-4a40-bb42-5f8a26ea3b38", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "86788fac-2abe-4bf3-b54e-a503b4d497fc", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Human Assessment Methods" ], [ "subsection", "Advanced Human Assessment" ], [ "subsubsection", "\\emph{Extended Criteria" ] ], "subsections": [], "title": "\\emph{Extended Criteria" }, { 
"cite_extract_rate": 0, "cites": [], "content": "}\nOne alternative method to assess MT quality is to compare the post-edited correct translation to the original MT output. This type of evaluation is, however, time-consuming and depends on the skills of the human assessor and post-editing performer. One example of a metric that is designed in such a manner is the human translation error rate (HTER) . It is based on the number of editing steps needed to transform an automatic translation into a reference translation. Here, a human assessor has to find the minimum number of insertions, deletions, substitutions, and shifts to convert the system output into an acceptable translation. HTER is defined as the sum of the number of editing steps divided by the number of words in the acceptable translation.", "id": "ba679a83-9ad1-4e02-9762-6d392ccebb03", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "86788fac-2abe-4bf3-b54e-a503b4d497fc", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Human Assessment Methods" ], [ "subsection", "Advanced Human Assessment" ], [ "subsubsection", "\\emph{Utilizing Post-editing" ] ], "subsections": [], "title": "\\emph{Utilizing Post-editing" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nIn the WMT metrics task, human assessment based on segment ranking was often used. Human assessors were frequently asked to provide a complete ranking over all the candidate translations of the same source segment . In the WMT13 shared-tasks , five systems were randomly selected for the assessor to rank. Each time, the source segment and the reference translation were presented together with the candidate translations from the five systems. The assessors ranked the systems from 1 to 5, allowing tied scores. For each ranking, there was the potential to provide as many as 10 pairwise results if there were no ties.
The collected pairwise rankings were then used to assign a corresponding score to each participating system to reflect the quality of the automatic translations. The assigned scores could also be used to reflect how frequently a system was judged to be better or worse than other systems when they were compared on the same source segment, according to the following formula:\n\\begin{small}\n\\begin{eqnarray}\n\\frac{\\#\\text{better pairwise rankings}}{\\#\\text{total pairwise comparisons}-\\#\\text{tied comparisons}}\n\\end{eqnarray}\n\\end{small}", "id": "c7f2eb04-ae8b-4c29-b84a-25c3c1647921", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "86788fac-2abe-4bf3-b54e-a503b4d497fc", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Human Assessment Methods" ], [ "subsection", "Advanced Human Assessment" ], [ "subsubsection", "\\emph{Segment Ranking" ] ], "subsections": [], "title": "\\emph{Segment Ranking" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nGiven the very low human inter-agreement scores reported for the WMT segment ranking task, researchers started to address this issue by exploring new human assessment methods, as well as seeking reliable automatic metrics for segment level ranking . \n\\newcite{graham-etal-2013-continuous} noted that the lower agreements from WMT human assessment might be caused partially by the interval-level scales the human assessors had to choose from when judging the quality of each segment. For instance, an assessor may face a situation where none of the fixed categories they are forced to choose between reflects their actual judgement. In light of this rationale, they proposed continuous measurement scales (CMS) for human TQA using fluency criteria.
This was implemented by introducing the crowdsource platform Amazon MTurk, with some quality control methods such as the insertion of \\textit{bad-reference} and \\textit{ask-again}, and statistical significance testing. This methodology was reported to improve both intra-annotator and inter-annotator consistency. Detailed quality control methodologies, including statistical significance testing, were documented in direct assessment (DA) .", "id": "d8a1163f-5f13-4fc9-b1ed-35ea52883e29", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "86788fac-2abe-4bf3-b54e-a503b4d497fc", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Human Assessment Methods" ], [ "subsection", "Advanced Human Assessment" ], [ "subsubsection", "\\emph{Crowd Source Intelligence" ] ], "subsections": [], "title": "\\emph{Crowd Source Intelligence" }, { "cite_extract_rate": 0, "cites": [], "content": "}\n\\newcite{popovic-2020coling-informative} criticized the traditional human TQA methods because they fail to reflect real problems in translation by assigning scores and ranking several candidates from the same source. Instead, \\newcite{popovic-2020coling-informative} designed a new methodology by asking human assessors to mark all problematic parts of candidate translations, either words, phrases, or sentences. \nTwo questions that were typically asked of the assessors related to \\textit{comprehensibility} and \\textit{adequacy}. The first criterion considers whether the translation is understandable, or understandable but with errors; the second measures whether the candidate translation has a different meaning to the original text, or maintains the meaning but with errors.
Both criteria take into account whether parts of the original text are missing in translation.\nUnder a similar experimental setup, \\newcite{popovic-2020conll-relations} also summarized the most frequent error types that the annotators recognized as misleading translations.", "id": "55ed2831-702c-4a98-a341-f81079a61df4", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "86788fac-2abe-4bf3-b54e-a503b4d497fc", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Human Assessment Methods" ], [ "subsection", "Advanced Human Assessment" ], [ "subsubsection", "\\emph{Revisiting Traditional Criteria" ] ], "subsections": [], "title": "\\emph{Revisiting Traditional Criteria" }, { "cite_extract_rate": 0, "cites": [], "content": "Manual evaluation suffers from several disadvantages: it is time-consuming, expensive, not tunable, and not reproducible. Due to these aspects, automatic evaluation metrics have been widely used for MT. Typically, these compare the output of MT systems against human reference translations, but there are also some metrics that do not use reference translations. A human reference translation is usually offered in one of two ways: a single reference, or multiple references for a single source sentence . \nAutomated metrics often measure the overlap in words and word sequences, as well as word order and edit distance. We classify these kinds of metrics as ``simple n-gram word surface matching''. Further developed metrics also take linguistic features into account such as syntax and semantics, including POS, sentence structure, textual entailment, paraphrase, synonyms, named entities, multi-word expressions (MWEs), semantic roles and language models. We classify these metrics that utilize the linguistic features as ``Deeper Linguistic Features (aware)''.
This classification is only for easier understanding and better organization of the content. It is not easy to separate these two categories clearly since sometimes they merge with each other. For instance, some metrics from the first category might also use certain linguistic features. Furthermore, we will introduce some recent models that apply deep learning to the TQA framework, as in Fig.~\\ref{fig:automaticMTEfig}. Due to space limitations, we present the MT quality estimation (QE) task, which does not rely on reference translations during the automated computation, in the appendices.\n\\begin{figure}[!t]\n\\centering\n\\includegraphics*[height=2.5in,width=3in]{./figs/automatic_evaluation_pic2_v2-cropped.pdf}\n\\caption{Automatic Quality Assessment Methods}\n\\label{fig:automaticMTEfig}\n\\end{figure}", "id": "49723647-b1d3-43c7-acc5-d99388d7916e", "level": "section", "origin_cites_number": 2, "parent_id": "6a75a2f0-f319-4440-b296-cb2eb0722a2e", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Automated Assessment Methods" ] ], "subsections": [ "c869edf1-2c53-4da5-86c1-d2ef5c593009", "d713479f-7f15-44e5-bf5f-52ad1cd69bd4", "b8ef0d14-0a42-48b7-a8a8-5796e644be16" ], "title": "Automated Assessment Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "c869edf1-2c53-4da5-86c1-d2ef5c593009", "level": "subsection", "origin_cites_number": 0, "parent_id": "49723647-b1d3-43c7-acc5-d99388d7916e", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Automated Assessment Methods" ], [ "subsection", "N-gram Word Surface Matching" ] ], "subsections": [ "f3602051-b0f0-44ce-bd8f-6d29b7050ba0", "9367b4f1-e5a9-4ff8-812a-cf010a99fc21", "29c3cffa-9b80-428d-a95a-c6a482283078" ], "title": "N-gram Word Surface Matching" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nBy calculating the minimum number
of editing steps to transform MT output to reference, \\newcite{SuWenShin1992} introduced the word error rate (WER) metric into MT evaluation. This metric, inspired by the Levenshtein distance (or edit distance), takes word order into account; the operations include insertion (adding a word), deletion (dropping a word) and replacement (or substitution, replacing one word with another), and the score is derived from the minimum number of editing steps needed to match two sequences.\n\\begin{comment}\n\\begin{eqnarray}\n\\text{WER}=\\frac{\\text{substitution+insertion+deletion}}{\\text{reference}_{\\text{length}}}.\n\\end{eqnarray}\n\\end{comment}\nOne of the weak points of the WER metric is that word ordering is not handled appropriately. WER penalizes the output heavily when its word order is ``wrong'' according to the reference: in the Levenshtein distance, mismatches in word order require the deletion and re-insertion of the misplaced words. However, due to the diversity of language expressions, some sentences with so-called ``wrong'' order according to WER also prove to be good translations. To address this problem, the position-independent word error rate (PER) introduced by \\newcite{TillmannVogelNeyZubiagaSawaf1997} is designed to ignore word order when matching output and reference. Without taking word order into account, PER counts the number of times that identical words appear in both sentences.
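As an illustration, the WER and PER computations just described can be sketched in Python. This is a minimal sketch with naive whitespace tokenization; the `wer` and `per` function names are ours, not from the original papers:

```python
# Minimal sketch of WER (Levenshtein-based) and PER -- an illustration,
# not a reference implementation; tokenization is naive whitespace splitting.
from collections import Counter

def edit_distance(hyp, ref):
    """Token-level Levenshtein distance: insertions, deletions, substitutions."""
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def wer(hypothesis, reference):
    """WER = edit distance / reference length."""
    hyp, ref = hypothesis.split(), reference.split()
    return edit_distance(hyp, ref) / len(ref)

def per(hypothesis, reference):
    """Position-independent error rate: ignores word order entirely."""
    hyp, ref = Counter(hypothesis.split()), Counter(reference.split())
    correct = sum((hyp & ref).values())
    hyp_len, ref_len = sum(hyp.values()), sum(ref.values())
    return 1 - (correct - max(0, hyp_len - ref_len)) / ref_len
```

Note how PER scores a fully reordered hypothesis as error-free while WER penalizes the reordering.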
Depending on whether the translated sentence is longer or shorter than the reference translation, the remaining words count as either insertions or deletions.\n\\begin{comment}\n\\begin{small}\n\\begin{equation}\n\\text{PER}\n=1-\\frac{\\text{correct}-\\max(0,\\text{output}_{\\text{length}}-\\text{reference}_{\\text{length}})}{\\text{reference}_{\\text{length}}}.\n\\end{equation}\n\\end{small}\n\\end{comment}\nAnother way to overcome the excessive penalty on word order in the Levenshtein distance is to add a novel editing step that allows the movement of word sequences from one part of the output to another. This is something a human post-editor would do with the cut-and-paste function of a word processor. In this light, \\newcite{SnoverDorrSchwartzMicciulla2006} designed the translation edit rate (TER) metric that adds block movement (jumping action) as an editing step. The shift option is performed on a contiguous sequence of words within the output sentence. \n\\begin{comment}\nThe TER score is calculated as:\n\\begin{equation} \n\\text{TER}=\\frac{\\#\\text{of edit}}{\\#\\text{of average reference words}}\n\\end{equation}\n\\end{comment}\nFor the edits, a block movement of any number of contiguous words over any distance has the same cost as a single-word operation such as insertion, deletion or substitution.
produced by the MT output and the human translation references at the corpus level. BLEU calculates precision scores with n-grams sized from 1-to-4, together multiplied by the coefficient of brevity penalty (BP). If there are multi-references for each candidate sentence, then the nearest length as compared to the candidate sentence is selected as the effective one.\nIn the BLEU metric, the n-gram precision weight $\\lambda_n$ is usually selected as a uniform weight. However, the 4-gram precision value can be very low or even zero when the test corpus is small. To weight more heavily those n-grams that are more informative, \\newcite{Doddington2002} proposes the NIST metric with the information weight added.\n\\begin{comment}\n\\begin{eqnarray} \n\\text{Info}=\\log_{2}\\big(\\frac{\\#\\text{occurrence of}~w_1,\\cdots,w_{n-1}}{\\#\\text{occurrence of}~ w_1,\\cdots,w_n}\\big)\n\\end{eqnarray}\n\\end{comment}\nFurthermore, \\newcite{Doddington2002} replaces the geometric mean of co-occurrences with the arithmetic average of $n$-gram counts, extends the $n$-gram into 5-gram ($N=5$), and selects the average length of reference translations instead of the nearest length.\nROUGE is a recall-oriented evaluation metric, which was initially developed for summaries, and inspired by BLEU and NIST. \nROUGE has also been applied in automated TQA in later work . \nThe F-measure is the combination of precision ($P$) and recall ($R$), which was firstly employed in information retrieval (IR) and latterly adopted by the information extraction (IE) community, MT evaluations, and others.\n\\newcite{turian2006evaluation} carried out experiments to examine how standard measures such as precision, recall and F-measure can be applied to TQA and showed the comparisons of these standard measures with some alternative evaluation methodologies. 
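The n-gram matching described above can be illustrated with a simplified sentence-level BLEU. This is a hedged sketch (single reference, uniform weights, no smoothing), not the official BLEU script:

```python
# Simplified sentence-level BLEU: clipped n-gram precisions (n = 1..max_n),
# uniformly weighted geometric mean, times the brevity penalty.
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    hyp, ref = hypothesis.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_ng, ref_ng = ngrams(hyp, n), ngrams(ref, n)
        clipped = sum((hyp_ng & ref_ng).values())  # counts clipped by the reference
        if clipped == 0:
            return 0.0  # any zero precision zeroes the geometric mean
        log_prec += math.log(clipped / sum(hyp_ng.values())) / max_n
    # Brevity penalty: 1 when the hypothesis is at least as long as the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(log_prec)
```

A perfect match yields 1.0, while a hypothesis shorter than the reference is discounted by the brevity penalty even when all of its n-grams match.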
\n\\begin{comment}\n\\begin{eqnarray}\nF_{\\beta}=(1+\\beta^{2})\\frac{PR}{R+\\beta^{2}P}\n\\end{eqnarray}\n\\end{comment}\n\\newcite{BanerjeeLavie2005} designed METEOR as a novel evaluation metric. METEOR is based on the general concept of flexible unigram matching, precision and recall, including the match between words that are simple morphological variants of each other with identical word stems and words that are synonyms of each other. To measure how well-ordered the matched words in the candidate translation are in relation to the human reference, METEOR introduces a penalty coefficient, different to what is done in BLEU, by employing the number of matched chunks.\n\\begin{comment}\n\\begin{align} \n\\text{Penalty}&=0.5 \\times ( \\frac{\\#\\text{chunks}}{\\#\\text{matched unigrams}})^3,\\\\\n\\text{METEOR}&=\\frac{10PR}{R+9P}\\times (1-\\text{Penalty}).\n\\end{align}\n\\end{comment}", "id": "9367b4f1-e5a9-4ff8-812a-cf010a99fc21", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "c869edf1-2c53-4da5-86c1-d2ef5c593009", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Automated Assessment Methods" ], [ "subsection", "N-gram Word Surface Matching" ], [ "subsubsection", "\\emph{Precision and Recall" ] ], "subsections": [], "title": "\\emph{Precision and Recall" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nThe right word order plays an important role in ensuring high quality translation output. However, language diversity also allows different appearances or structures of a sentence. How to penalize genuinely wrong word order, i.e. wrongly structured sentences, without penalizing ``correct'' different order, i.e. candidate sentences whose word order differs from the reference but which are well structured, has attracted a lot of interest from researchers.
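The METEOR-style F-mean and fragmentation penalty described in the previous subsection can be sketched as follows. This illustration uses exact unigram matches only (the real METEOR also matches stems and synonyms), and the greedy alignment and `meteor_like` name are our own simplifications:

```python
def meteor_like(hypothesis, reference):
    """METEOR-style sketch: recall-weighted F-mean discounted by a
    fragmentation penalty based on the number of matched chunks.
    Exact unigram matches only; greedy alignment is a simplification."""
    hyp, ref = hypothesis.split(), reference.split()
    # Greedy left-to-right alignment of exact matches.
    used, align = set(), []
    for i, w in enumerate(hyp):
        for j, r in enumerate(ref):
            if j not in used and w == r:
                align.append((i, j))
                used.add(j)
                break
    matches = len(align)
    if matches == 0:
        return 0.0
    # A chunk is a maximal run of matches adjacent in both sentences.
    chunks = 1
    for (i1, j1), (i2, j2) in zip(align, align[1:]):
        if not (i2 == i1 + 1 and j2 == j1 + 1):
            chunks += 1
    precision, recall = matches / len(hyp), matches / len(ref)
    fmean = 10 * precision * recall / (recall + 9 * precision)
    penalty = 0.5 * (chunks / matches) ** 3
    return fmean * (1 - penalty)
```

With all words matched but fully reordered, every match forms its own chunk and the penalty reaches its maximum of 0.5.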
In fact, the Levenshtein distance (\\textit{Section 3.1.1}) and n-gram based measures also contain word order information. \nFeaturing the explicit assessment of word order and word choice, \\newcite{WongKit2009} developed the evaluation metric ATEC (assessment of text essential characteristics). This is also based on precision and recall criteria, but with a position difference penalty coefficient attached. The word choice is assessed by matching word forms at various linguistic levels, including surface form, stem, sound and sense, and further by weighing the informativeness of each word. \n\\begin{comment}\nto add back: Combining the precision, order, and recall information together, \\newcite{ChenKuhnLarkin2012} develop an automatic evaluation metric PORT that is initially for the tuning of the MT systems to output higher quality translation. \nMeanwhile, LEPOR...\n\\end{comment}\nPartially inspired by this, our work LEPOR \nis designed as a combination of augmented evaluation factors including $n$-gram based \\textit{word order penalty} in addition to \\textit{precision}, \\textit{recall}, and \\textit{enhanced sentence-length penalty}. The LEPOR metric (including \\textit{h}LEPOR) was reported with top performance on the English-to-other (Spanish, German, French, Czech and Russian) language pairs in ACL-WMT13 metrics shared tasks for \\textit{system level} evaluation . 
The n-gram based variant \\textit{n}LEPOR was also analysed by MT researchers as one of the three best performing \\textit{segment level} automated metrics (together with METEOR and sentBLEU-MOSES) that correlated with human judgement at a level that was not significantly outperformed by any other metrics, on Spanish-to-English, in addition to an aggregated set of overall tested language pairs .", "id": "29c3cffa-9b80-428d-a95a-c6a482283078", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "c869edf1-2c53-4da5-86c1-d2ef5c593009", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Automated Assessment Methods" ], [ "subsection", "N-gram Word Surface Matching" ], [ "subsubsection", "\\emph{Revisiting Word Order" ] ], "subsections": [], "title": "\\emph{Revisiting Word Order" }, { "cite_extract_rate": 0, "cites": [], "content": "Although some of the previously outlined metrics incorporate linguistic information, e.g. synonyms and stemming in METEOR and part of speech (POS) in LEPOR, the simple n-gram word surface matching methods mainly focus on the exact matches of the surface words in the output translation. \nThe advantages of the metrics based on the first category (simple n-gram word matching) are that they perform well in capturing translation fluency , are very fast to compute and have low cost. On the other hand, there are also some weaknesses, for instance, syntactic information is rarely considered and the underlying assumption that a good translation is one that shares the same word surface lexical choices as the reference translations is not justified semantically. Word surface lexical similarity does not adequately reflect similarity in meaning. 
Translation evaluation metrics that reflect meaning similarity need to be based on similarity of semantic structure and not merely flat lexical similarity.", "id": "d713479f-7f15-44e5-bf5f-52ad1cd69bd4", "level": "subsection", "origin_cites_number": 1, "parent_id": "49723647-b1d3-43c7-acc5-d99388d7916e", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Automated Assessment Methods" ], [ "subsection", "Deeper Linguistic Features" ] ], "subsections": [ "619c1e9f-3e9b-411f-83bf-184493be173f", "318fd55b-90fd-49cb-9a5c-39102a55f40f" ], "title": "Deeper Linguistic Features" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nSyntactic similarity methods usually employ the features of morphological POS information, phrase categories, phrase decompositionality or sentence structure generated by linguistic tools such as a language parser or chunker.\nIn grammar, a \\textbf{POS} is a linguistic category of words or lexical items, which is generally defined by the syntactic or morphological behaviour of the lexical item. Common linguistic categories of lexical items include noun, verb, adjective, adverb, and preposition. To reflect the syntactic quality of automatically translated sentences, researchers incorporate POS information into their evaluations. Using IBM model 1, \\newcite{PopovicVilarAvramidisBurchardt2011} evaluate translation quality by calculating the similarity scores of source and target (translated) sentences without using a reference translation, based on the morphemes, 4-gram POS and lexicon probabilities. \\newcite{DahlmeierLiuNg2011} developed the TESLA evaluation metrics, combining the synonyms of bilingual phrase tables and POS information in the matching task. Other similar work using POS information includes .\nIn linguistics, a \\textbf{phrase} may refer to any group of words that form a constituent, and so functions as a single unit in the syntax of a sentence.
To measure an MT system's performance in translating new text types, and to determine in what ways the system itself could be extended to deal with them, \\newcite{PovlsenUnderwoodMusicNeville1998} carried out a study of an English-to-Danish MT system. The syntactic constructions are explored with more complex linguistic knowledge, such as the identification of fronted adverbial subordinate clauses and prepositional phrases. Assuming that similar grammatical structures should occur in both source and translations, \\newcite{AvramidisPopovicVilarBurchardt2011} perform evaluation on source (German) and target (English) sentences employing the features of sentence length ratio, unknown words, phrase numbers including noun phrase, verb phrase and prepositional phrase. Other similar work using phrase similarity includes that uses noun phrases and verb phrases from chunking, that only uses the noun phrase chunking in automatic evaluation, and that designs a universal phrase tagset for French to English MT evaluation.\n\\textbf{Syntax} is the study of the principles and processes by which sentences are constructed in particular languages. To address the overall goodness of a translated \\textbf{sentence's structure}, \\newcite{LiuGildea2005} employ constituent labels and head-modifier dependencies from a language parser as syntactic features for MT evaluation. They compute the similarity of dependency trees. Their experiments show that adding syntactic information can improve evaluation performance, especially for predicting the fluency of translation hypotheses.
Other works that use syntactic information in evaluation include and that use an automatic shallow parser and the RED metric that applies dependency trees.", "id": "619c1e9f-3e9b-411f-83bf-184493be173f", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "d713479f-7f15-44e5-bf5f-52ad1cd69bd4", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Automated Assessment Methods" ], [ "subsection", "Deeper Linguistic Features" ], [ "subsubsection", "\\emph{Syntactic Similarity" ] ], "subsections": [], "title": "\\emph{Syntactic Similarity" }, { "cite_extract_rate": 0.13636363636363602, "cites": [ 8759, 3145, 8760 ], "content": "}\nIn contrast to syntactic information, which captures overall grammaticality or sentence structure similarity, the semantic similarity of automatic translations and the source sentences (or references) can be measured by employing semantic features.\nTo capture the semantic equivalence of sentences or text fragments, \\textbf{named entity} knowledge is taken from the literature on named-entity recognition, which aims to identify and classify atomic elements in a text into different entity categories . The most commonly used entity categories include the names of persons, locations, organizations and time . In the MEDAR2011 evaluation campaign, one baseline system based on Moses utilized an Open NLP toolkit to perform named entity detection, in addition to other packages. The low performances from the perspective of named entities cause a drop in fluency and adequacy. In the quality estimation of the MT task in WMT 2012, introduced features including named entity, in addition to discriminative word lexicon, neural networks, back off behavior and edit distance.
Experiments on individual features showed that, from the perspective of increasing the correlation with human judgments, the named entity feature contributed the most to the overall performance, in comparison to the impacts of other features.\n\\textbf{Multi-word Expressions} (MWEs) set obstacles for MT models due to their complexity in presentation as well as idiomaticity . To investigate the effect of MWEs in MT evaluation (MTE), \\newcite{mwe4mte2015} focused on the \\textit{compositionality} of noun compounds. They identify the \\textbf{noun compounds} first from the system outputs and reference with the Stanford parser. The matching scores of the system outputs and reference sentences are then recalculated, adding up to the Tesla metric, by considering the predicted compositionality of identified noun compound phrases. Our own recent work in this area provides an extensive investigation into various MT errors caused by MWEs.\n\\textbf{Synonyms} are words with the same or close meanings. One of the most widely used synonym databases in the NLP literature is WordNet , which is an English lexical database grouping English words into sets of synonyms. WordNet classifies words mainly into four kinds of POS categories: Noun, Verb, Adjective, and Adverb, without prepositions, determiners, etc. Synonymous words or phrases are organized using the unit of synsets. Each synset is a hierarchical structure with the words at different levels according to their semantic relations. \n\\textbf{Textual entailment} is usually used as a directive relation between text fragments. If the truth of one text fragment TA follows another text fragment TB, then there is a directional relation between TA and TB (TB $\\Rightarrow$ TA). Instead of the pure logical or mathematical entailment, textual entailment in natural language processing (NLP) is usually performed with a relaxed or loose definition .
For instance, according to text fragment TB, if it can be inferred that the text fragment TA is \\textit{most likely} to be true then the relationship TB $\\Rightarrow$ TA is also established. Since the relation is directive, it means that the inverse inference (TA $\\Rightarrow$ TB) is not ensured to be true . \\newcite{castillo-estrella-2012-semantic} present a new approach for MT evaluation based on the task of ``Semantic Textual Similarity''. This problem is addressed using a textual entailment engine based on WordNet semantic features. \nA \\textbf{paraphrase} restates the meaning of a passage of text using other words, and can be seen as bidirectional textual entailment . Instead of the literal, word-by-word and line-by-line translation of a metaphrase, a paraphrase represents a dynamic equivalent. Further knowledge of paraphrases from the aspect of linguistics is introduced in the works by . \\newcite{SnoverDorrSchwartzMicciulla2006} describe a new evaluation metric TER-Plus (TERp). Sequences of words in the reference are considered to be paraphrases of a sequence of words in the hypothesis if that phrase pair occurs in the TERp phrase table. \n\\textbf{Semantic roles} are employed by researchers as linguistic features in MT evaluation. To utilize semantic roles, sentences are usually first shallow parsed and entity tagged. Then the semantic roles are used to specify the arguments and adjuncts that occur in both the candidate translation and reference translation. For instance, the semantic roles introduced by \\newcite{GimenezMarquez2007,GimenezMarquez2008} include causative agent, adverbial adjunct, directional adjunct, negation marker, and predication adjunct, etc.
In a further development, \\newcite{LoWu2011a,LoWu2011b} presented the MEANT metric designed to capture the predicate-argument relations as structural relations in semantic frames, which are not reflected in the flat semantic role label features in the work of \\newcite{GimenezMarquez2007}. Furthermore, instead of using uniform weights, \\newcite{LoTurmuluruWu2012} weight the different types of semantic roles as empirically determined by their relative importance to the adequate preservation of meaning. Generally, semantic roles account for the semantic structure of a segment and have proved effective in assessing adequacy of translation.\n\\textbf{Language models} are also utilized by MT evaluation researchers. A statistical language model usually assigns a probability to a sequence of words by means of a probability distribution. \\newcite{GamonAueSmets2005} propose LM-SVM, a method combining a language model and a support vector machine, to investigate the possibility of evaluating MT quality and fluency in the absence of reference translations.
They evaluate the performance of the system when used as a classifier for identifying highly dis-fluent and ill-formed sentences.\nGenerally, the linguistic features mentioned above, including both syntactic and semantic features, are combined in two ways, either by following a machine learning approach , or by combining a wide variety of metrics in a simpler and more straightforward way, such as .", "id": "318fd55b-90fd-49cb-9a5c-39102a55f40f", "level": "subsubsection", "origin_cites_number": 22, "parent_id": "d713479f-7f15-44e5-bf5f-52ad1cd69bd4", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Automated Assessment Methods" ], [ "subsection", "Deeper Linguistic Features" ], [ "subsubsection", "\\emph{Semantic Similarity" ] ], "subsections": [], "title": "\\emph{Semantic Similarity" }, { "cite_extract_rate": 0, "cites": [], "content": "We briefly list some works that have applied deep learning and neural networks to TQA which are promising for further exploration. \nFor instance, \\newcite{guzman2015MTE_NN,Guzmn17MTE_NN}\nuse neural networks (NNs) for pairwise TQA modeling to choose the better hypothetical translation by comparing candidate translations with a reference, integrating syntactic and semantic information into NNs.\n\\newcite{Gupta_reval:a2015} proposed LSTM networks based on dense vectors to conduct TQA, while \\newcite{DBLP:conf/nlpcc/MaMZWGJL16} designed a new metric based on bi-directional LSTMs, which is similar to the work of \\newcite{guzman2015MTE_NN} but with less complexity by allowing the evaluation of a single hypothesis with a reference, instead of a pairwise situation.\n\\subsection*{Appendix A: Evaluating TQA}\n\\subsection*{A.1: Statistical Significance}\nIf different MT systems produce translations with different qualities on a dataset, how can we ensure that the systems genuinely differ in quality?
To explore this problem, \\newcite{Koehn2004} presents an investigation of statistical significance testing for MT evaluation. The bootstrap re-sampling method is used to compute the statistical significance intervals for evaluation metrics on small test sets. Statistical significance usually refers to two separate notions, one of which is the p-value, the probability that the observed data will occur by chance in a given single null hypothesis. The other one is the ``Type I'' error rate of a statistical hypothesis test, which is also called ``false positive'' and measured by the probability of incorrectly rejecting a given null hypothesis in favour of a second alternative hypothesis .\n\\subsection*{A.2: Evaluating Human Judgment}\nSince human judgments are usually trusted as the gold standards that automatic MT evaluation metrics should try to approach, the reliability and coherence of human judgments are very important. Cohen's kappa agreement coefficient is one of the most commonly used evaluation methods . For the problem of nominal scale agreement between two judges, there are two relevant quantities $p_{0}$ and $p_{c}$. $p_{0}$ is the proportion of units in which the judges agreed and $p_{c}$ is the proportion of units for which agreement is expected by chance. The coefficient $k$ is simply the proportion of chance-expected disagreements which do not occur, or alternatively, it is the proportion of agreement after chance agreement is removed from consideration:\n\\begin{equation} \nk = \\frac{p_0-p_c}{1-p_c}\n\\end{equation}\n\\noindent where $p_0-p_c$ represents the proportion of the cases in which beyond-chance agreement occurs and is the numerator of the coefficient .\n\\subsection*{A.3: Correlating Manual and Automatic Score}\nIn this section, we introduce three correlation coefficient algorithms that have been widely used at recent WMT workshops to measure the closeness of automatic evaluation and manual judgments.
The choice of correlation algorithm depends on whether scores or ranks schemes are utilized.\n\\subsubsection*{\\emph{Pearson Correlation}}\nPearson's correlation coefficient is commonly represented by the Greek letter $\\rho$. The correlation between random variables X and Y denoted as $\\rho_{XY}$ is measured as follows .\n\\begin{equation} \n\\rho_{XY} = \\frac{cov(X,Y)}{\\sqrt{V(X)V(Y)}}=\\frac{\\sigma_{XY}}{\\sigma_X\\sigma_Y}\n\\end{equation}\nBecause the standard deviations of variable $X$ and $Y$ are higher than 0 ($\\sigma_X>0$ and $\\sigma_Y>0$), if the covariance $\\sigma_{XY}$ between $X$ and $Y$ is positive, negative or zero, the correlation score between X and Y will correspondingly result in positive, negative or zero, respectively. Based on a sample of paired data $(X, Y)$ as $(x_i,y_i), i=1\\,to\\,n$ , the Pearson correlation coefficient is calculated as:\n\\begin{equation} \\rho_{XY} = \\frac{\\sum_{i=1}^{n}(x_i-\\mu_x)(y_i-\\mu_y)}{\\sqrt{\\sum_{i=1}^{n}(x_i-\\mu_x)^2}\\sqrt{\\sum_{i=1}^{n}(y_i-\\mu_y)^2}}\n\\end{equation}\n\\noindent where $\\mu_x$ and $\\mu_y$ specify the means of discrete random variable $X$ and $Y$ respectively.\n\\subsubsection*{\\emph{Spearman rank Correlation}}\nSpearman rank correlation coefficient, a simplified version of Pearson correlation coefficient, is another algorithm to measure the correlations of automatic evaluation and manual judges, e.g. in WMT metrics task . 
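The sample Pearson formula above translates directly into code. A minimal illustrative Python sketch (names are ours, not taken from any referenced implementation):

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient for paired observations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # numerator: sum of co-deviations from the means
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    # denominator: product of the root sums of squared deviations
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```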
When there are no ties, Spearman rank correlation coefficient, which is sometimes specified as (rs) is calculated as:\n\\begin{equation} \nrs_{\\varphi(XY)} = 1- \\frac{6\\sum_{i=1}^{n}d_{i}^2}{n(n^2-1)}\n\\end{equation}\n\\noindent where $d_i$ is the difference-value (D-value) between the two corresponding rank variables $(x_i-y_i)$ in $\\vec{X}=\\{x_1,x_2,...,x_n\\}$ and $\\vec{Y}=\\{y_1,y_2,...,y_n\\}$ describing the system $\\varphi$.\n\\subsubsection*{\\emph{Kendall's $\\tau$}}\nKendall's $\\tau$ has been used in recent years for the correlation between automatic order and reference order . It is defined as:\n\\begin{equation} \n\\tau=\\frac{\\text{num concordant pairs}-\\text{num discordant pairs}}{\\text{total pairs}}\n\\end{equation}\nThe latest version of Kendall's $\\tau$ is introduced in . \\newcite{LebanonLafferty2002} give an overview work for Kendall's $\\tau$ showing its application in calculating how much the system orders differ from the reference order. More concretely, \\newcite{Lapata2003} proposed the use of Kendall's $\\tau$, a measure of rank correlation, to estimate the distance between a system-generated and a human-generated gold-standard order.\n\\subsection*{A.4: Metrics Comparison}\nThere are researchers who did some work about the comparisons of different types of metrics. For example, \\newcite{callison2006re,callison2007meta,lavie2013automated} mentioned that, through some qualitative analysis on some standard data set, BLEU cannot reflect MT system performance well in many situations, i.e. higher BLEU score cannot ensure better translation outputs. There are some recently developed metrics that can perform much better than the traditional ones especially on challenging sentence-level evaluation, though they are not popular yet such as nLEPOR and SentBLEU-Moses . 
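Both rank correlations above admit short direct implementations. The Python sketch below is illustrative only; as the no-ties Spearman formula requires, it assumes the score values in each vector are distinct:

```python
def spearman_rho(xs, ys):
    """Spearman correlation via the no-ties rank-difference formula."""
    n = len(xs)
    def ranks(v):
        # map each (distinct) value to its 1-based rank
        return {value: i + 1 for i, value in enumerate(sorted(v))}
    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((rx[x] - ry[y]) ** 2 for x, y in zip(xs, ys))
    return 1 - 6 * d2 / (n * (n * n - 1))

def kendall_tau(xs, ys):
    """Kendall's tau: (concordant - discordant) pairs over total pairs."""
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) // 2)
```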
Such comparison will help MT researchers to select the appropriate metrics to use for specialist tasks.", "id": "b8ef0d14-0a42-48b7-a8a8-5796e644be16", "level": "subsection", "origin_cites_number": 14, "parent_id": "49723647-b1d3-43c7-acc5-d99388d7916e", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Automated Assessment Methods" ], [ "subsection", "Neural Networks for TQA" ] ], "subsections": [], "title": "Neural Networks for TQA" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, we examine several topics that can be considered for further development of MT evaluation fields.\nThe first aspect is that development should involve both n-gram word surface matching and the deeper linguistic features. Because natural languages are expressive and ambiguous at different levels , simple n-gram word surface similarity based metrics limit their scope to the lexical dimension and are not sufficient to ensure that two sentences convey the same meaning or not. For instance, and report that simple n-gram matching metrics tend to favor automatic statistical MT systems. If the evaluated systems belong to different types that include rule based, human aided, and statistical systems, then the simple n-gram matching metrics, such as BLEU, give a strong disagreement between these ranking results and those of the human assessors. So deeper linguistic features are very important in the MT evaluation procedure. \nHowever, inappropriate utilization, or abundant or abused utilization, of linguistic features will result in limited popularity of measures incorporating linguistic features. In the future, how to utilize the linguistic features in a more accurate, flexible and simplified way, will be one challenge in MT evaluation. 
Furthermore, the MT evaluation from the aspects of semantic similarity is more reasonable and reaches closer to the human judgments, so it should receive more attention.\nThe second major aspect is that MT quality estimation (QE) tasks are different to traditional MT evaluation in several ways, such as extracting reference-independent features from input sentences and their translation, obtaining quality scores based on models produced from training data, predicting the quality of an unseen translated text at system run-time, filtering out sentences which are not good enough for post processing, and selecting the best translation among multiple systems. This means that with so many challenges, the topic will continuously attract many researchers.\nThirdly, some advanced or challenging technologies that can be further applied to MT evaluation include the deep learning models , semantic logic form, and decipherment model.", "id": "3217cd33-b9b0-4887-8ed0-a4a6e0619721", "level": "section", "origin_cites_number": 5, "parent_id": "6a75a2f0-f319-4440-b296-cb2eb0722a2e", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Discussion and Perspectives" ] ], "subsections": [], "title": "Discussion and Perspectives" }, { "cite_extract_rate": 0.052631578947368, "cites": [ 4273 ], "content": "In this paper we have presented a survey of the state-of-the-art in translation quality assessment methodologies from the viewpoints of both manual judgements and automated methods. This work differs from conventional MT evaluation review work by its concise structure and inclusion of some recently published work and references.\nDue to space limitations, in the main content, we focused on conventional human assessment methods and automated evaluation metrics with reliance on reference translations. 
However, we also list some interesting and related work in the appendices, such as the quality estimation in MT when the reference translation is not presented during the estimation, and the evaluating methodology for TQA methods themselves.\nHowever, this arrangement does not affect the overall understanding of this paper as a self contained overview. \nWe believe this work can help both MT and NLP researchers and practitioners in identifying appropriate quality assessment methods for their work. We also expect this work might shed some light on evaluation methodologies in other NLP tasks, due to the similarities they share, such as text summarization , natural language understanding , \nnatural language generation , as well as programming language (code) generation .\n\\section*{Acknowledgments}\nWe appreciate the comments from Derek F. Wong, editing help from Ying Shi (Angela), and the anonymous reviewers for their valuable reviews and feedback.\nThe ADAPT Centre for Digital Content Technology is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund. The input of Alan Smeaton is part-funded by Science Foundation Ireland \nunder grant number SFI/12/RC/2289 (Insight Centre). \n\\bibliographystyle{acl_natbib}\n\\bibliography{nodalida2021}\n\\section*{Appendices}\n\\input{sec-Evaluate-TQA}\n\\subsection*{Appendix B: MT QE}\nIn past years, some MT evaluation methods that do not use manually created gold reference translations were proposed. These are referred to as ``Quality Estimation (QE)\". Some of the related works have already been introduced in previous sections. \nThe most recent quality estimation tasks can be found at WMT12 to WMT20 . These defined a novel evaluation metric that provides some advantages over the traditional ranking metrics. The DeltaAvg metric assumes that the reference test set has a number associated with each entry that represents its extrinsic value. 
Given these values, their metric does not need an explicit reference ranking, the way that Spearman ranking correlation does. The goal of the DeltaAvg metric is to measure how valuable a proposed ranking is according to the extrinsic values associated with the test entries.\n\\begin{equation} \nDeltaAvg_v[n]= \\frac{\\sum\\limits_{k=1}^{n-1}V(S_{1,k})}{n-1}-V(S)\n\\end{equation}\nFor scoring, two task evaluation metrics were used that have traditionally been used for measuring performance in regression tasks: Mean Absolute Error (MAE) as a primary metric, and Root of Mean Squared Error (RMSE) as a secondary metric. For a given test set S with entries $s_{i},1\\leqslant{i}\\leqslant|S|$, $H(s_i)$ is the proposed score for entry $s_i$ (hypothesis), and $V(s_i)$ is the reference value for entry $s_i$ (gold-standard value).\n\\begin{eqnarray} \n\\text{MAE}&=&\\frac{\\sum_{i-1}^{N}|H(s_i)-V(s_i)|}{N}\\\\\n\\text{RMSE}&=&\\sqrt{\\frac{\\sum_{i-1}^{N}(H(s_i)-V(s_i))^2}{N}}\n\\end{eqnarray}\nwhere $N=|S|$. Both these metrics are non-parametric, automatic and deterministic (and therefore consistent), and extrinsically interpretable.\nSome further readings on MT QE are the comparison between MT evaluation and QE \\newcite{SPECIA2010MTE_vs_QE} and the QE framework model QuEst ;\nthe weakly supervised approaches for quality estimation and the limitations analysis of QE Supervised Systems , and unsupervised QE models ; the recent shared tasks on QE .\nIn very recent years, the two shared tasks, i.e. MT quality estimation and traditional MT evaluation metrics, have tried to integrate into each other and benefit from both knowledge. 
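The MAE and RMSE definitions above can be sketched as plain Python, mirroring the formulas term by term (illustrative only; hypothesis and gold-standard scores are passed as parallel lists):

```python
import math

def mae(hyp_scores, gold_scores):
    """Mean Absolute Error: the primary QE regression metric."""
    n = len(gold_scores)
    return sum(abs(h - v) for h, v in zip(hyp_scores, gold_scores)) / n

def rmse(hyp_scores, gold_scores):
    """Root Mean Squared Error: the secondary QE regression metric."""
    n = len(gold_scores)
    return math.sqrt(sum((h - v) ** 2 for h, v in zip(hyp_scores, gold_scores)) / n)
```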
For instance, in WMT2019 shared task, there were 10 reference-less evaluation metrics which were used for the QE task, ``QE as a Metric\", as well .\n\\subsection*{Appendix C: Mathematical Formulas}\nSome mathematical formulas that are related to aforementioned metrics: \nSection 2.1.2 - Fluency / Adequacy / Comprehension:\n\\begin{eqnarray}\n\\text{Comprehension}=\\frac{\\#\\text{Correct}}{6} \\\\\n\\text{Fluency}=\\frac{\\frac{\\text{Judgment point}-1}{\\text{S}-1}}{\\#\\text{Sentences in passage}} \\\\\n\\text{Adequacy}=\\frac{\\frac{\\text{Judgment point}-1}{\\text{S}-1}}{\\#\\text{Fragments in passage}} \n\\end{eqnarray}\n\\noindent\nSection 3.1.1 - Editing Distance:\n\\begin{comment}\n\\end{comment}\n\\begin{eqnarray}\n\\text{WER}=\\frac{\\text{substitution+insertion+deletion}}{\\text{reference}_{\\text{length}}}.\n\\end{eqnarray}\n\\begin{small}\n\\begin{equation}\n\\text{PER}\n=1-\\frac{\\text{correct}-\\max(0,\\text{output}_{\\text{length}}-\\text{reference}_{\\text{length}})}{\\text{reference}_{\\text{length}}}.\n\\end{equation}\n\\end{small}\n\\begin{equation} \n\\text{TER}=\\frac{\\#\\text{of edit}}{\\#\\text{of average reference words}}\n\\end{equation}\n\\noindent\nSection 3.1.2 - Precision and Recall:\n\\begin{align}\n\\text{BLEU}&=\\text{BP}\\times\\exp\\sum_{n=1}^{N}\\lambda_{n}\\log{\\text{Precision}_{\\text{n}}},\\\\\n\\text{BP}&=\n\\begin{cases}\n1 & \\text{if } c>r,\\\\\ne^{1-\\frac{r}{c}} & \\text{if } c\\leq r.\n\\end{cases}\n\\end{align}\n\\noindent\nwhere $c$ is the total length of candidate translation, and $r$ refers to the sum of effective reference sentence length in the corpus. 
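To make the WER and brevity-penalty formulas above concrete, here is an illustrative Python sketch; the dynamic program is a standard word-level Levenshtein distance, and both function names are our own:

```python
import math

def word_error_rate(hypothesis, reference):
    """WER: word-level Levenshtein edits divided by reference length."""
    hyp, ref = hypothesis.split(), reference.split()
    # dp[i][j] = minimum edits turning hyp[:i] into ref[:j]
    dp = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        dp[i][0] = i
    for j in range(len(ref) + 1):
        dp[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            sub = 0 if hyp[i - 1] == ref[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution or match
    return dp[-1][-1] / len(ref)

def brevity_penalty(c, r):
    """BLEU brevity penalty: 1 if c > r, else e^(1 - r/c)."""
    return 1.0 if c > r else math.exp(1 - r / c)
```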
Below is the NIST metric, followed by F-measure, METEOR and LEPOR:\n\\begin{eqnarray} \n\\text{Info}=\\log_{2}\\big(\\frac{\\#\\text{occurrence of}~w_1,\\cdots,w_{n-1}}{\\#\\text{occurrence of}~ w_1,\\cdots,w_n}\\big)\n\\end{eqnarray}\n\\begin{eqnarray}\nF_{\\beta}=(1+\\beta^{2})\\frac{PR}{R+\\beta^{2}P}\n\\end{eqnarray}\n\\begin{align} \n\\text{Penalty}&=0.5 \\times ( \\frac{\\#\\text{chunks}}{\\#\\text{matched unigrams}})^3\\\\\n\\text{METEOR}&=\\frac{10PR}{R+9P}\\times (1-\\text{Penalty})\n\\end{align}\n\\begin{align}\n\\text{LEPOR}&=LP \\times NPosPenal \\times Harmonic(\\alpha R, \\beta P)\n\\end{align}\n\\begin{gather*} \n\\text{\\textit{h}LEPOR} = Harmonic(w_{LP}LP,\n\\\\ w_{NPosPenal}NPosPenal, w_{HPR}HPR)\n\\end{gather*}\n\\begin{gather*} \n\\text{\\textit{n}LEPOR} = LP \\times NPosPenal \\\\\n\\times \\exp(\\sum_{n=1}^{N}w_n \\log HPR) \\end{gather*}\n\\noindent\nwhere, in our own metric LEPOR and its variations, \\textit{n}LEPOR (\\textit{n}-gram \\textit{precision} and \\textit{recall} LEPOR) and \\textit{h}LEPOR (\\textit{harmonic} LEPOR), \\textit{P} and \\textit{R} are for precision and recall, \\textit{LP} for length penalty, \\textit{NPosPenal} for n-gram position difference penalty, and \\textit{HPR} for harmonic mean of precision and recall, respectively .\n\\end{document}", "id": "97d72fde-8d63-4c85-976d-be6d9a7b7fc8", "level": "section", "origin_cites_number": 19, "parent_id": "6a75a2f0-f319-4440-b296-cb2eb0722a2e", "prefix_titles": [ [ "title", "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods" ], [ "section", "Conclusions and Future Work" ] ], "subsections": [], "title": "Conclusions and Future Work" } ]
114
[ 6987, 1551, 38, 303, 8759, 3145, 8760, 4273 ]
0.944833
[ "Alina Matei", "Andreea Glavan", "Estefania Talavera" ]
Deep learning for scene recognition from visual data: a survey
2020
2020-07-03T16:53:18Z
cs.CV
The use of deep learning techniques has exploded during the last few years, resulting in a direct contribution to the field of artificial intelligence. This work aims to be a review of the state-of-the-art in scene recognition with deep learning models from visual data. Scene recognition is still an emerging field in computer vision, which has been addressed from a single image and dynamic image perspective. We first give an overview of available datasets for image and video scene recognition. Later, we describe ensemble techniques introduced by research papers in the field. Finally, we give some remarks on our findings and discuss what we consider challenges in the field and future lines of research. This paper aims to be a future guide for model selection for the task of scene recognition. \keywords{Scene Recognition \and Ensemble Techniques \and Deep Learning \and Computer Vision}
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "6f522f76-96be-4dfd-b594-a033e89ef933", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Deep learning for scene recognition from visual data: a survey" ] ], "subsections": [ "5df6f572-89a6-4c24-823e-d93660f8cead", "79a61d81-4d66-4bab-b8b4-c2f1bc5a0abd", "e3f50ddb-f307-447f-aaab-8cac8c08ebbf", "d4982858-e156-4d6f-886c-d30e036e7507", "18e00f63-c6d8-4834-9b0c-749bf6b3c447" ], "title": "root" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 4582, 4583 ], "content": "Recognizing scenes is a task that humans do on a daily basis. When walking down the street and going from one location to the other, tends to be easy for a human to identify where s/he is located. During the past years, deep learning architectures, such as Convolutional Neural Networks (CNNs) have outperformed traditional methods in many classification tasks. These models have shown to achieve high classification performance when large and variety datasets are available for training. Nowadays, the available visual data is not only presented in a static format, as an image, but also in a dynamic format, as video recordings. The analysis of videos adds an additional level of complexity since the inherent temporal aspect of video recordings must be considered: a video can capture scenes which suffer temporal alterations. Scene recognition with deep learning has been addressed by ensemble techniques that combine different levels of semantics extracted from the images, e.g. recognized objects, global information, and context at different scales. \nDeveloping robust and reliable models for the automatic recognition of scenes is of importance in the field of intelligent systems and artificial intelligence since it directly supports real-life applications. For instance, \\textit{Scene and event recognition} has been previously addressed in the literature . 
\textit{Scene recognition for robot localization} with indoor localization for mobile robots is one of the emerging application scopes of scene recognition . According to the authors of , in the following two decades, every household could own a social robot employed for housekeeping, surveillance or companionship tasks. In the field of lifelogging, collections of photo-sequences have proven to be a rich tool for the understanding of the behaviour of people. In , methods were developed for the analysis of egocentric images collected by wearable cameras. The above-mentioned approaches address the recognition of scenes either following an image-based approach or a video or photo-sequence based approach.\nAs contributions, (i) to the best of our knowledge, this is the first survey that collects works that address the task of scene recognition with deep learning from visual data, both from images and videos. Moreover, (ii) we describe available datasets which assisted the fast \nadvancement in the field.\nThis paper is structured as follows: in Section \ref{sec:datasets} we discuss the available datasets supporting scene and object focused recognition. Section \ref{sec:methods} addresses the methodology of the state-of-the-art techniques and approaches discussed in the paper at hand. Furthermore, in Section \ref{sec:discussion} we discuss the presented approaches. 
Finally, in Section \\ref{sec:conclusion} we draw some conclusions.", "id": "5df6f572-89a6-4c24-823e-d93660f8cead", "level": "section", "origin_cites_number": 7, "parent_id": "6f522f76-96be-4dfd-b594-a033e89ef933", "prefix_titles": [ [ "title", "Deep learning for scene recognition from visual data: a survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.14285714285714202, "cites": [ 486, 4584 ], "content": "\\label{sec:datasets}\nThe latest advancements in deep learning methods for scene recognition are motivated by the availability of large and exhaustive datasets and hardware that allows the training of deep networks. Thus, deep learning CNNs are applied to tackle the complexity and high variance of the task of scene recognition. \nThe inherent difficulty of scene recognition is related to the nature of the images depicting a scene context. Two major challenges were described in : \n\\begin{itemize}\n \\item \\textit{Visual inconsistency} refers to low inter-class variance. Some scene categories can share similar visual appearances which create the issue of class overlaps. Since images belonging to two different classes can be easily confused with one another, the class overlap cannot be neglected. \n \\item \\textit{Annotation ambiguity} describes a high intra-class variance of scene categories. Demarcation of the categories is a subjective process which is highly dependent on the experience of the annotators, therefore images from the same category can showcase significant differences in appearance.\n\\end{itemize}\nThe majority of the available datasets are focused on object categories providing labels , bounding boxes or segmentations . ImageNet , COCO (Common Objects in Context), and Open Images are well known in the field of object recognition. 
Even though these datasets were built for object recognition, transfer learning has been shown to be an effective approach when aiming to apply them for scene recognition. \n\begin{figure}[h!]\n    \caption{Example of samples of the publicly available datasets as described in Table \ref{tab1}. Samples are presented from the same classes amongst similar datasets (i.e. scene, video and object centric) in order to emphasize the diversity of the image and video data. For the video-centric datasets (i.e. Maryland \"in-the-wild\", YUPENN, YUP++) representative video frames are presented.}\n    \centering\n    \includegraphics[width=\columnwidth]{images/image_place_classification.jpg}\n    \label{fig:places}\n\end{figure}\nIn the literature we can find the 15-scenes , MIT Indoor67 , SUN397 , and Places365 as scene-centered datasets. More specifically, the Places project introduced Places365 as a reference dataset, which is composed of 434 scenes which account for 98\% of the type of scenes a person can encounter in the natural and man-made world.\nA total of 10 million images were gathered, out of which 365 scene categories were chosen to be part of the dataset. Several annotators were asked to label every image and images with contradicting labels were discarded. Currently, the dataset is available in the Places365-standard format (i.e. 365 categories, roughly 1 million images training set, validation set with 50 images per class and test with 900 images per class) and the Places365-challenge format which extends the training set to 8 million image samples in total. 
With a dataset of this magnitude, the training of CNNs exclusively on data describing scenes becomes feasible.\n\\begin{table}[]\n\\caption{An overview of publicly available datasets for the task of scene recognition.}\n\\label{tab1}\n\\centering\n\\resizebox{\\columnwidth}{!} {\n\\begin{tabular}{c|c|c|c|c|c|c}\n\\multirow{2}{*}{Dataset} & \\multirow{2}{*}{Data} & \\multirow{2}{*}{ \\#Classes } & \\multicolumn{2}{c|}{ Classification of } & \\multicolumn{2}{c}{ Labelled as } \\\\\n& & & \\ Images \\ & \\ Streams \\ & \\ Object \\ & \\ Scenes \\\\ \\hline\nPlaces365 & 1M images & 365 &\\checkmark & & &\\checkmark \\\\\nMIT Indoor67 & 15620 images & 67 &\\checkmark & & &\\checkmark \\\\\nSUN397 & 108754 images & 397 &\\checkmark & & &\\checkmark \\\\\n15 scene & 4000 images & 15 &\\checkmark & & &\\checkmark \\\\ \\hline\nMaryland `in-the-wild' & 10 videos & 13 & &\\checkmark & &\\checkmark \\\\\nYUPENN & 410 videos & 14 & &\\checkmark & &\\checkmark \\\\\nYUP++ & 1200 videos & 20 & & \\checkmark& &\\checkmark \\\\ \\hline\nImagenet & 3.2M images & 1000 &\\checkmark & &\\checkmark & \\\\\nCOCO & 1.5M images & 80 & \\checkmark& &\\checkmark & \\\\\nOpen Images & 1.7M images & 600 &\\checkmark & &\\checkmark & \n\\end{tabular}}\n\\end{table}\nScene recognition also encloses dynamic scene data; due to the limited amount of available datasets which include such data, most of the research efforts in this sub-field also include gathering suitable experimental data. Here we highlight the Maryland `in-the-wild' , YUPENN , YUP++ datasets. The dataset in poses new challenges by introducing more complex data, i.e. videos with camera motion. The scope of the categories that are being recorded amongst the three datasets presented is not nearly as exhaustive as in the case of the objects and scenes datasets mentioned above. 
This is an indicator of the incipient status of research in this particular area of scene recognition.\nThe original models proposed by the authors of the and datasets were not based on deep learning techniques. The authors of the Maryland `in-the-wild' , introduced a chaotic system framework for describing the videos. The authors' proposed pipeline extracts a 960-dimensional Gist descriptor per video frame. Each dimension is considered a time-series, from which the chaotic invariants are computed. Traditional classifiers, such as KNN and SVM, are used for the final classification. In , the authors introduced the YUPENN dataset and for its analysis, they proposed a spatiotemporal oriented energy feature representation of the videos which they classify using KNN.\nAn overview of the described datasets is provided in Table \ref{tab1}. In Figure \ref{fig:places} we complete the quantitative overview of the datasets by presenting representative image samples for each of the datasets described.", "id": "79a61d81-4d66-4bab-b8b4-c2f1bc5a0abd", "level": "section", "origin_cites_number": 14, "parent_id": "6f522f76-96be-4dfd-b594-a033e89ef933", "prefix_titles": [ [ "title", "Deep learning for scene recognition from visual data: a survey" ], [ "section", "Datasets for scene recognition" ] ], "subsections": [], "title": "Datasets for scene recognition" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:methods}\nIn this section, we describe relevant aspects of the state-of-the-art methods on scene recognition with deep learning. The choice for deep architectures is motivated by the complexity of the task: since the images are not described semantically the models used are aimed at learning generic contextual features of the scenes, which are captured by the high-level convolutional layers. \nPrior to deep learning, visual recognition techniques have made extensive use of object recognition when faced with such problems . 
The scenes would be recognized based on exhaustive lists of objects identified in the scene. However, other challenges appear such as object detection and their high appearance variability. The combination of object detection and overall context recognition showed promising results. \nFocusing on deep learning research papers, we group them based on the type of the analysed datasets, images or videos. We present their performances and limitations in the context of the evaluated datasets. \n\\begin{comment}\n\\begin{table}[h!]\n\\centering\n\\caption{Deep learning techniques employed in the literature to classify scenes in images and videos. We use FT for fine-tuning, AE for Autoencoder, ... \\textcolor{red}{to fill in}}\n\\resizebox{8cm}{!} {\n\\begin{tabular}{l|l|l|l}\n\\hline\n\\multicolumn{1}{c|}{Year} & \\multicolumn{1}{c|}{Method} & \\multicolumn{1}{c|}{Deep Learning method} & \\multicolumn{1}{c}{Dataset(s) [Accuracy Top-1]} \\\\ \\hline\n(example 1:) 2019 & Yong et al. [ref] & FT ResNet & Places365 [85\\%] and COCO [60\\%]\\\\ \\hline\n(example 1:) 2019 & Davis et al. [ref] & AE & YUP++ [\\%] and COCO [\\%]\\\\ \\hline\n\\end{tabular}\n}\n\\label{tab:reviewMethods}\n\\end{table}\n\\end{comment}", "id": "e3f50ddb-f307-447f-aaab-8cac8c08ebbf", "level": "section", "origin_cites_number": 3, "parent_id": "6f522f76-96be-4dfd-b594-a033e89ef933", "prefix_titles": [ [ "title", "Deep learning for scene recognition from visual data: a survey" ], [ "section", "Frameworks for scene recognition" ] ], "subsections": [ "0b0773e6-00c7-4a39-b323-e6466e694b7f", "432992b7-3780-4789-a144-e1a4f1b21793" ], "title": "Frameworks for scene recognition" }, { "cite_extract_rate": 0.45454545454545403, "cites": [ 514, 4582, 305, 97, 4585 ], "content": "Several works have addressed the recognition of scenes based on single image analysis. The best well-known work on scene recognition was introduced in , which relied on the Places365 dataset. 
\n\\begin{table}[h!]\n\\centering\n\\caption{Top-5 classification accuracy of the trained networks on the validation and test splits of the Places365 dataset. Apart from the ResNet architecture which has been fine-tuned over Places365, the other architectures are trained from scratch.}\n\\begin{tabular}{l|c|c}\nArchitectures & \\multicolumn{2}{c}{Top-5 accuracy}\\\\\ntrained on Places365 & \\ Validation set \\ & \\ Test set \\ \\\\\n\\hline \\hline\nPlaces365 AlexNet & 82.89\\% & 82.75\\% \\\\\nPlaces365 GoogleNet & 83.88\\% & 84.01\\% \\\\\nPlaces365 VGG & 84.91\\% & 85.01\\% \\\\\nPlaces365 ResNet & 85.08\\%& 85.07\\%\\\\\n\\end{tabular}\n \\label{tab:original_performance_places365}\n\\end{table}\nDeep learning architectures have been trained over the Places365 dataset. The approach proposed by the authors of literature is to exploit the vast dataset at hand by training three popular CNNs architectures (i.e. AlexNet , GoogLeNet , VGG16 ) on the Places dataset. The performance of these architectures over the validation and test splits of the Places365 dataset are presented in Table \\ref{tab:original_performance_places365}. When introducing a new dataset, it became a ritual to test the generalization capabilities of weights trained over Places365. Thus, authors fine-tune these specialised networks trained on Places365 over newly available datasets. For instance, the VGG16, pre-trained on the Places365 dataset, achieved a 92.99\\% accuracy on the SUN Attribute dataset . To compare the performance of the above approaches for static scene recognition, the following datasets are considered: 15 scenes dataset , MIT Indoor 67 and SUN 397 . 
An overview of the comparison of the quantitative results is presented in Table \\ref{tab:res_overview}.\n\\begin{table}\n\\caption{An overview of the quantitative comparison in terms of accuracy between methods for single image classification for the 15 scenes, MIT Indoor, SUN 397 datasets.} \n\\centering\n \\begin{tabular}{l| c| c| c}\n & 15 scenes & MIT Indoor & SUN 397 \\\\\n \\hline \n Places365 AlexNet & 89.25\\% & 70.72\\% & 56.12\\% \\\\\n Places365 GoogleNet & 91.25\\% & 73.20\\% & 58.37\\% \\\\\n Places365 VGG & 91.97\\% & 76.53\\% & \\textbf{63.24\\%} \\\\\n Hybrid1365 VGG & \\textbf{92.15\\%} & \\textbf{79.49\\%} & 61.77\\% \\\\\n 7-scale Hybrid VGG & \\textbf{94.08\\%} & 80.22\\% \\iffalse (w/ FT) \\fi & 63.19\\%* \\\\\n 7-scale Hybrid AlexNet & 93.90\\% & \\textbf{80.97\\%} \\iffalse (w/ FT) \\fi & \\textbf{65.38\\%} \\\\\n \\end{tabular}\n \\label{tab:res_overview}\n\\end{table}\nFurthermore, in the authors experimented with the ResNet152 residual network architecture, fine-tuned over the Places365. This work achieved a top-5 accuracy of 85.08\\% and 85.07\\% on the validation and, respectively, the test set of the Places365 dataset, as shown in Table \\ref{tab:original_performance_places365}.\nThe use of the semantic and contextual composition of the image has been proposed by various approaches. For instance, in , the authors proposed the Hybrid1365 VGG architecture, a combination of deep learning techniques trained for object and scene recognition. The method uses different scales at which objects appear in a scene can facilitate the classification process by targeting distinct regions of interest within the image. Objects usually appear at lower scales. Therefore, the object classifier should target local scopes of the image. In contrast, the scene classifier should be aimed at the global scale, in order to capture contextual information. They concluded that it is possible to extend the performance obtained individually by each method. 
The Hybrid1365 VGG architecture scores the highest average accuracy of 81.48\\% over all the experiments conducted for the place-centric CNN approach (has the highest performance for 2 out of 3 comparison datasets as shown in Table \\ref{tab:res_overview}).\nThe dataset biases which arise under different scaling conditions of the images is addressed in , by involving a multi-scale model which combines various CNNs specialized either on object or place knowledge. The authors combined the training data available in the Places and ImageNet datasets. The knowledge learned from the two datasets is coupled in a scale-adaptive way. In order to aggregate the extracted features over the architectures used, simple max pooling\\footnote{Max pooling is a pooling operation which computes the maximum value in each patch of a feature map; it is employed for down-sampling input representations.} is adopted in order to down-sample the feature space. If the scaling operation is significant, the features of the data can drastically change from describing scene data to object data. The architectures are employed to extract features in parallel from patches, which represent the input image at increasingly larger scale versions. The multi-scale model combines several AlexNet architectures . The hybrid multi-scale architecture uses distinctive models for different scale ranges; depending on the scale range, the most suitable model is chosen from object-centric CNN (pre-trained on ImageNet), scene-centric CNN (pre-trained on Places365) or a fine-tuned CNN (adapted to the corresponding scale based on the dataset at hand). In total, seven scales were considered; the scales were obtained by scaling the original images between $227\\times227$ and $1827\\times1827$ pixels. For the final classification given by the multi-scale hybrid approach, the concatenation of the fc7 features (i.e. features extracted by the 7th fully connected layer of the CNN) from the seven networks are considered. 
Principal Component Analysis (PCA) is used to reduce the feature space. This model obtained the highest accuracy of 95.18\\% on the 15 scenes dataset .\nThe hybrid approaches presented in and achieve higher accuracy than a human expert, which was quantified as 70.60\\%. This indicates that the combination of object-centric and scene-centric knowledge can potentially establish a new performance standard for scene recognition.\n\\begin{comment}\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=\\columnwidth]{images/scene_obj_scale.png}\n \\caption{Multi-scale hybrid architecture combining scale-specific networks as proposed by . ImageNet and Places pre-trained CNNs are combined according to the corresponding scale of the image patch. This method adapts test data to the underlying train data of the networks to account for dataset biased. Scale specific features are obtained using max-pooling at each scale, then concatenated into a final multi-scale feature. }\n \\label{fig:obj_scene_scale}\n\\end{figure}\n\\end{comment}", "id": "0b0773e6-00c7-4a39-b323-e6466e694b7f", "level": "subsection", "origin_cites_number": 11, "parent_id": "e3f50ddb-f307-447f-aaab-8cac8c08ebbf", "prefix_titles": [ [ "title", "Deep learning for scene recognition from visual data: a survey" ], [ "section", "Frameworks for scene recognition" ], [ "subsection", "Static scene recognition" ] ], "subsections": [], "title": "Static scene recognition" }, { "cite_extract_rate": 0.583333333333333, "cites": [ 4588, 2899, 4587, 8804, 4589, 4586, 8803 ], "content": "While early research in the field of scene recognition has been directed at single images, lately attention has been naturally drawn towards scene recognition from videos. CNNs have shown promising results for the general task of scene recognition in single images and have the potential to be also generalized to video data. To achieve this generalization, the spatio-temporal nature of dynamic scenes must be considered. 
While static scenes (depicted as single images) only present spatial features, videos also capture temporal transformations which affect the spatial aspect of the scene. Therefore, one challenge related to the task of scene classification from videos is creating a model which is powerful enough to capture both the spatial and temporal information of the scene. However, there are few works on video analysis for scene recognition. \nIn the works introduced in , the authors relied on Long Short Term Memory networks (LSTMs) for video description. However, they did not focus on recognizing the scenes. \n\\begin{table}[h!]\n \\caption{Overview of the results achieved by the spatio-temporal residual network (T-ResNet) proposed in over the YUP++ dataset.}\n \\centering\n \\begin{tabular}{l | c| c |c}\n & YUP++ static & YUP++ moving & YUP++ complete \\\\\n \\hline \n ResNet & 86.50\\% & 73.50\\% & 85.90\\% \\\\\n T-ResNet & \\textbf{92.41\\%} & \\textbf{81.50\\%} & \\textbf{89.00\\%}\n \\end{tabular}\n \\label{tab:results_dynamic}\n\\end{table}\nIn , the authors introduced the T-ResNet architecture, alongside the YUP++ dataset, which established a new benchmark in the sub-field of dynamic scene recognition. The T-ResNet is based on a residual network that was pre-trained on the ImageNet dataset . It employs transfer learning to adapt the spatial-centric residual architecture to a spatio-temporal-centric network. The results achieved by the architecture were only compared with the classical ResNet architecture as shown in Table \\ref{tab:results_dynamic}. \nThe superiority of the T-ResNet is evident: it achieves an accuracy of 92.41\\% on the YUP++ static camera partition, 81.50\\% on the YUP++ moving camera partition and finally 89.00\\% on the entire YUP++ dataset. This demonstrates the superiority of the spatio-temporal approach. The T-ResNet model exhibits strong performance for classes with linear motion patterns, e.g. classes `elevator', `ocean', `windmill farm'. 
However, for scene categories presenting irregular or mixed defining motion patterns the performance is negatively impacted, e.g. classes `snowing' and `fireworks'. The authors of observed that T-ResNet exhibits difficulties distinguishing intrinsic scene dynamics from the additional motion of the camera. Further research is required to account for this difference.\n\\begin{comment}\nSince they test their network on action recognition, we can exclude it from this analysis\nThe authors of conducted experiments to analyse the generalization ability of the proposed network on action recognition tasks. For the challenge of action recognition, optical flow is a discriminative feature. The proposed T-ResNet architecture presents a healthy performance improvement on two popular action recognition datasets (i.e. UCF101 , HMDB51 ) in contrast to previously achieved state-of-the-art results as shown in Table \\ref{tab:t-resnet_qualitative}. The performance boost of T-ResNet, according to , is based on the added temporal filters which are 1x1 in spatial dimension and span only 3 instances in time, which come at a very low cost.\n\\begin{table}[]\n \\caption{Accuracy comparison of the T-ResNet architecture and state-of-the-art models on action recognition datasets (UCF101 , HMDB51 ) as presented in .}\n \\centering\n \\begin{tabular}{l l l}\n & UCF101 & HMDB51 \\\\\n \\hline \\hline\n & \\textit{State of the art} & \\\\\n \\hline\n Literature & 92.40\\% & 62.00\\% \\\\\n Literature & 92.50\\% & 65.40\\% \\\\\n Literature & 93.40\\% & 66.40\\% \\\\\n \\hline \\hline\n & \\textit{T-ResNet } &\\\\\n \\hline\n Appearance & 85.40\\% & 51.30\\% \\\\\n Flow & 89.10\\% & 62.00\\% \\\\\n Fusion & \\textbf{93.90\\%} & \\textbf{67.20\\%} \\\\\n \\end{tabular}\n \\label{tab:t-resnet_qualitative}\n\\end{table}\n\\end{comment}", "id": "432992b7-3780-4789-a144-e1a4f1b21793", "level": "subsection", "origin_cites_number": 12, "parent_id": "e3f50ddb-f307-447f-aaab-8cac8c08ebbf", 
"prefix_titles": [ [ "title", "Deep learning for scene recognition from visual data: a survey" ], [ "section", "Frameworks for scene recognition" ], [ "subsection", "Dynamic scene recognition" ] ], "subsections": [], "title": "Dynamic scene recognition" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 514, 305, 4585 ], "content": "\\label{sec:discussion}\nThe novel availability of large, exhaustive datasets, such as the Places Database, is offering significant support for further research for the challenge of scene recognition. The combination of scene-centric and object-centric knowledge has proven superior to only considering the scene context. Dynamic scene recognition reached new state-of-the-art performance through the approach of adapting spatial networks to the task, transforming the network to also consider the temporal aspect of the scenes. These emerging spatio-temporal networks are suitable for video data captured with a static camera. However, it still faces difficulties in the case of added camera motion.\nOne observation arising from methods addressing single image analysis scene recognition is that deeper CNN architectures such as GoogLeNet or VGG are not superior in all cases. For the hybrid multi-scale model combining scene-centric and object-centric networks in , experiments using VGG architecture for more than two-scales (two VGG networks) obtained disappointing results, inferior to the baseline performance achieved with one single scale (one network). Since the multi-scale hybrid model entails seven different scales, it can be inferred that VGG becomes noisy when applied on small input image patches.\nAddressing the task of scene recognition from the global features that describe an image, the CNNs are expected to learn deep features that are relevant for the contextual clues present in the image. 
The literature observes that the low-level convolutional layers detect low-level visual concepts such as object edges and textures, while the high-level layers activate on entire objects and scene parts. Even though the model has been previously trained on an exclusively places-centric dataset, the network still identifies semantic clues in the image by detecting objects alongside contextual clues. Therefore, CNNs trained on the Places Database (which does not contain object labels) could still be employed for object detection. \nAnother observation, arising from training the same architecture on datasets with different numbers of scene categories (i.e. and Places365), is that having more categories leads to better results as well as more predicted categories. We can observe that the AlexNet architecture trained on Places205 (the version prior to Places365) obtains 57.2\\% accuracy, while the same architecture trained on Places365 obtains 57.7\\% accuracy. For the places-CNN approach, two main types of misclassifications occur: on the one hand, less-typical activities happening in a scene context (e.g. taking a photo at a construction site) and, on the other hand, images depicting multiple scene parts. A possible solution, as proposed by , would be to assign multiple ground-truth labels in order to capture the content of an image more precisely.\nThe results achieved by the T-ResNet model illustrate the potential of spatio-temporal networks for video analysis. The transformation from a purely spatial network to a spatio-temporal one can succeed on the basis of a very small training set (i.e. only 10\\% of the YUP++ dataset introduced) as proven by . 
Well-initialized spatial networks can be efficiently transformed to extract spatio-temporal features, therefore, in theory, most networks that perform well on single image analysis could be easily adapted to video analysis.", "id": "d4982858-e156-4d6f-886c-d30e036e7507", "level": "section", "origin_cites_number": 5, "parent_id": "6f522f76-96be-4dfd-b594-a033e89ef933", "prefix_titles": [ [ "title", "Deep learning for scene recognition from visual data: a survey" ], [ "section", "Discussion" ] ], "subsections": [], "title": "Discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:conclusion}\nIn this work, we describe the state-of-the-art on deep learning for scene recognition. Furthermore, we presented some of the applications of scene recognition to emphasize the importance of this topic. We argue that the main factor to consider is the type of data on which recognition and classification are applied. Since the task of scene recognition is not entirely subjective due to the nature of the scene images and the scene categories overlap, no one particular method can be generalized to all scene recognition tasks. This paper will aid professionals in making an informed decision about which approach best fits their scene recognition challenge. \nWe have found room for research in the field of video analysis and expect that numerous works will emerge in the coming years.\n\\bibliographystyle{splncs04}\n\\bibliography{bibliography}\n\\end{document}", "id": "18e00f63-c6d8-4834-9b0c-749bf6b3c447", "level": "section", "origin_cites_number": 0, "parent_id": "6f522f76-96be-4dfd-b594-a033e89ef933", "prefix_titles": [ [ "title", "Deep learning for scene recognition from visual data: a survey" ], [ "section", "Conclusions" ] ], "subsections": [], "title": "Conclusions" } ]
115
[ 4582, 4583, 486, 4584, 514, 305, 97, 4585, 4588, 2899, 4587, 8804, 4589, 4586, 8803 ]
0.892977
[ "Huawei Huang", "Wei Kong", "Sicong Zhou", "Zibin Zheng", "Song Guo" ]
A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools
2020
2020-07-07T14:50:32Z
cs.DC
To draw a roadmap of current research activities of the blockchain community, we first conduct a brief overview of state-of-the-art blockchain surveys published in the recent 5 years. We found that those surveys are basically studying the blockchain-based applications, such as blockchain-assisted Internet of Things (IoT), business applications, security-enabled solutions, and many other applications in diverse fields. However, we think that a comprehensive survey towards the essentials of blockchains by exploiting the state-of-the-art theoretical modelings, analytic models, and useful experiment tools is still missing. To fill this gap, we perform a thorough survey by identifying and classifying the most recent high-quality research outputs that are closely related to the theoretical findings and essential mechanisms of blockchain systems and networks. Several promising open issues are also summarized finally for future research directions. We wish this survey can serve as a useful guideline for researchers, engineers, and educators about the cutting-edge development of blockchains in the perspectives of theories, modelings, and tools.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "71dcb80c-95ed-47fb-909c-9249db523d8e", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ] ], "subsections": [ "86beefe2-e56f-4ee8-b0cb-5275202efa70", "1d23888a-2da9-4dfa-85ca-5b4d1b9044f9", "952d99e8-1c8e-4de4-9790-1ae69d217ecb", "88c7cab9-6a17-48c0-8631-92a4e4e5aca6", "4358fa51-9a15-47c5-bdf1-24771001391e", "476e3f3a-32cd-4b56-a840-1d3541093fd4", "588c4324-eb6d-4b36-b0cf-212c18117c63", "50684cfe-ad96-46e5-b0ad-556bae0c19c7" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:introduction}\n \\modified{Centralized security mechanisms are prone to a single point of failure, meaning that once a centralized component is compromised, the whole system would cease to function. The decentralization of blockchain can eliminate such concerns without the need for a trusted third party. With the benefit of their decentralized characteristics,} blockchains have been deeply integrated into multiple applications that are closely related to every aspect of our daily life, such as cryptocurrencies, business applications, smart cities, Internet-of-Things (IoT) applications, etc. 
\n In the following, before discussing the motivation of this survey, we first conduct a brief exposition of the state-of-the-art blockchain survey articles published in recent years.\n }", "id": "86beefe2-e56f-4ee8-b0cb-5275202efa70", "level": "section", "origin_cites_number": 0, "parent_id": "71dcb80c-95ed-47fb-909c-9249db523d8e", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Introduction" ] ], "subsections": [ "699aa7dd-44f0-4ce9-8112-4d474559a02d", "4e654309-2d1f-439e-a689-57061b3e4461", "abed2d3c-0cf3-490a-9dc7-b107c7e7b739" ], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "To identify the position of our survey, we first collect 67 state-of-the-art blockchain-related survey articles. The number of surveys in each category is shown in Fig. \\ref{fig:category}. We see that the top-three most popular topics of blockchain-related surveys are IoT \\& IIoT, Consensus Protocols, and Security \\& privacy.\nWe also classify those existing surveys and their chronological distribution in Fig. \\ref{fig:classification}, from which we discover that i) the number of surveys published in each year increases dramatically, and ii) the diversity of topics also becomes greater over time.\nIn detail, we summarize the publication years, topics, and other metadata of these surveys in Table \\ref{Table:taxonomy} and Table \\ref{Table:taxonomyPart2}. \nBasically, those surveys can be classified into the following 7 groups. \\modified{The overall principle of the classification is based on the different aspects of blockchain covered in the surveys. In \\textit{Group-1}, the different abstraction layers of blockchain protocols and their intrinsic properties are the main focus. In \\textit{Group-2}, the behaviour of blockchain clients is analysed by means of data mining. 
In \\textit{Group-3}, blockchain is treated as a complicated and rewarding environment in which hard choices are made by means of AI or game theory. Surveys that analysed the integration of blockchain and decision-making techniques are also classified in this group. In \\textit{Group-4}, the integration of blockchain and different communication techniques is covered. In \\textit{Group-5} and \\textit{Group-6}, the applications of blockchain are reviewed. We singled out surveys on IoT applications due to their popularity. \\textit{Group-7} comprises works that provide a holistic overview of blockchain.\n} \n\\begin{figure*}[t]\n\\centering\n\t\\vspace{-3mm}\n\\includegraphics[width=0.7\\textwidth]{./figures/categories_V2.png}\n\t\\vspace{-2mm}\n\\caption{The categories and number of state-of-the-art blockchain-related surveys published in recent years.}\n\\label{fig:category}\n\t\\vspace{-5mm}\n\\end{figure*}", "id": "699aa7dd-44f0-4ce9-8112-4d474559a02d", "level": "subsection", "origin_cites_number": 0, "parent_id": "86beefe2-e56f-4ee8-b0cb-5275202efa70", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Introduction" ], [ "subsection", "Taxonomy of State-of-the-art Blockchain Surveys" ] ], "subsections": [ "6a255f9f-fd0e-48ca-84c9-b6c7842b7759", "c73a4ffd-bdc8-4f7a-a0ec-414444ab2fcd", "63b445de-c10c-4306-86a4-e8fae830d25a", "8ccfbc8f-00ce-4423-b3c9-b4b963fd8924", "bf72abc0-fa7b-4c46-86b5-a4ca4d48ed47", "0d615fcf-d016-447c-856a-d9395512d589", "1f0803e0-b29b-4fcc-9608-4cebe12ee97c" ], "title": "Taxonomy of State-of-the-art Blockchain Surveys" }, { "cite_extract_rate": 0.228571428571428, "cites": [ 2147, 1295, 2150, 2149, 2146, 2145, 2148, 2151 ], "content": "}\nThe first group is related to the essentials of the blockchain. A large number of consensus protocols, algorithms, and mechanisms have been reviewed and summarized in . 
For example, motivated by the lack of a comprehensive literature review regarding the consensus protocols for blockchain networks, Wang \\textit{et al.} emphasized both the system design and the incentive mechanism behind those distributed blockchain consensus protocols, such as Byzantine Fault Tolerant (BFT)-based protocols and Nakamoto protocols. From a game-theoretic viewpoint, the authors also studied how such consensus protocols affect the consensus participants in blockchain networks.\n Among the surveys of smart contracts , Atzei \\textit{et al.} focused on the security vulnerabilities and programming pitfalls that can occur in Ethereum smart contracts.\nDwivedi \\textit{et al.} developed a systematic taxonomy of smart-contract languages, while Zheng \\textit{et al.} conducted a survey on the challenges, recent technical advances and typical platforms of smart contracts.\n Sharding techniques are viewed as promising solutions to the scalability and low-performance problems of blockchains. Several survey articles provide systematic reviews on sharding-based blockchain techniques. 
For example, Wang \\textit{et al.} focused on the general design flow and critical design challenges of sharding protocols.\n Next, Yu \\textit{et al.} mainly discussed the intra-consensus security, atomicity of cross-shard transactions, and other advantages of sharding mechanisms.\n\\begin{figure*}[t]\n\\centering\n\t\\vspace{-4mm}\n\\includegraphics[width=0.8\\textwidth]{./figures/classification.png}\n\t\\vspace{-3mm}\n\\caption{\\modified{The category and number of state-of-the-art blockchain surveys organized in a chronological order.}}\n\\label{fig:classification}\n\t\\vspace{-3mm}\n\\end{figure*}\nRegarding scalability, Chen \\textit{et al.} analyzed the scalability technologies in terms of efficiency improvement and function extension of blockchains, while Zhou \\textit{et al.} compared and classified the existing scalability solutions from the perspective of different layers.\nThen, Zamyatin \\textit{et al.} conducted a systematic classification of protocols for cross-chain communication.\n\\modified{Further on interoperability, Belchior \\textit{et al.} defined related terms and provided interesting directions.}\nDuring the investigations on security \\& privacy issues,\nTaylor \\textit{et al.} reviewed the cyber security space of blockchains, covering the security of blockchain in different directions such as IoT, AI data, and sidechains.\n Dasgupta \\textit{et al.} discussed general security issues of blockchains from theory to implementation, such as vulnerabilities, malicious attacks, risks of blockchain applications, etc.\n Ma \\textit{et al.} focused on security, privacy and trust issues in crowdsourcing services.\n Under the background of big data, Tariq \\textit{et al.} reviewed the security challenges of fog computing-enabled IoT applications, in which blockchain techniques play the role of a security enabler.\n In contrast, emphasized the privacy issues of blockchain systems and blockchain-based 
applications.\n\\begin{table*}[t]\n\\centering\n\\footnotesize\n\\caption{\\modified{Taxonomy of existing blockchain-related surveys (Part 1).}}\n\\label{Table:taxonomy}\n\\begin{tabular}{|p{0.11\\textwidth}|p{0.14\\textwidth}|p{0.1\\textwidth}|p{0.029\\textwidth}|p{0.46\\textwidth}|}\n\\hline\n \\textbf{Group}&\\textbf{Category}&\\textbf{Ref.}&\\textbf{Year} &\\textbf{Topic}\\\\\n\\hline\n \\multirow{18}*{\\textit{Group-1:}}\n & \\multirow{6}*{Consensus} & \n Sankar & 2017 & Consensus protocols on blockchain applications\\\\\n\t\\cline{3-5}\n \\multirow{18}*{\\textbf{Blockchain}}\n & \\multirow{6}*{Protocols} & Yuan & 2018 & Blockchain consensus algorithms \\\\\n\t\\cline{3-5}\n \\multirow{18}*{\\textbf{Essentials}}\n & {}&\\multirow{1}*{Wang } & 2018 & Consensus mechanisms and mining management in blockchains\\\\\n\t\\cline{3-5}\n & {}&\\multirow{1}*{Garay } & \\multirow{1}*{2018} & Consensus Taxonomy in Blockchain Era \\\\\n\t\\cline{3-5}\n {}&&\\multirow{1}*{Nguyen } & \\multirow{1}*{2018} & Consensus Algorithms Used in Blockchains \\\\\n\t\\cline{3-5}\n {}&&\\multirow{1}*{Wang } & \\multirow{1}*{2019} & Consensus and mining strategy in blockchain networks\\\\\n\t\\cline{3-5}\n {}&&\\multirow{1}*{Bano } & \\multirow{1}*{2019} & Consensus in the Age of Blockchains\\\\\n\t\\cline{3-5}\n {}&&\\multirow{1}*{Xiao }& 2020 & Distributed Consensus Protocols for Blockchain Networks\\\\\n\t\\cline{2-5}\n\t&\\multirow{3}*{Smart Contract} &\n\t\\multirow{1}*{Atzei } & 2016 & Attacks on Ethereum smart contracts\\\\\n\t\\cline{3-5}\n {}&&\\multirow{1}*{Dwived } & 2019 & Blockchain-Based Smart-Contract Languages\\\\\t\n\t\\cline{3-5}\n {}&&\\multirow{1}*{Zheng } & 2020 & Challenges, Advances and Platforms of Smart Contracts\\\\\n\t\\cline{2-5}\n &\\multirow{2}*{Sharding} & \\multirow{1}*{Wang } & 2019 & \n Sharding on blockchains \\\\\n\t\\cline{3-5}\n\t{}&& \\multirow{1}*{Yu } & 2020 & \n Sharding in Blockchains \\\\\n\t\\cline{2-5}\n\t&\\multirow{2}*{Scalability} & \n 
\\multirow{1}*{Pan } & 2018 & Scalability of blockchain technology\\\\\n\t\\cline{3-5}\n\t{}&&\\multirow{1}*{Zhou } & 2020 & Solutions to Scalability of Blockchain\\\\\n\t\\cline{2-5}\n &\\multirow{2}*{Cross-chain} & \\multirow{1}*{Zamyatin } & \\multirow{1}*{2019} & \\multirow{1}*{Cross-ledger Communications}\\\\\n \\cline{3-5}\n\t{}&&\\multirow{1}*{\\modified{Belchior } } & \\modified{2020} & \\modified{Interoperability solutions and problems}\\\\\n\t\\cline{2-5}\n & \\multirow{5}*{Security} & \\multirow{1}*{Taylor } & 2019 & Blockchain cyber security\\\\\n\t\\cline{3-5}\n\t&\\multirow{5}*{\\& Privacy} &\\multirow{1}*{Dasgupta } & 2019 & Security perspective of Blockchain\\\\\n\t\\cline{3-5}\n {}&&\\multirow{1}*{Ma } & \\multirow{1}*{2019} & Blockchain technology in crowdsourcing services\\\\\n\t\\cline{3-5}\n {}&&\\multirow{1}*{Tariq } & 2019 & Security of big data in blockchain-enabled IoT applications\\\\\n\t\\cline{3-5}\n {}&&\\multirow{1}*{Feng } & 2019 & Privacy protection in blockchain systems\\\\\n\t\\cline{3-5}\n {}&&\\multirow{1}*{Soni } & \\multirow{1}*{2019} & Security, privacy and potential applications of Blockchain\\\\\n \\cline{3-5}\n {}&&\\multirow{1}*{\\modified{Xie }} & \\multirow{1}*{\\modified{2020}} & \\modified{Analysis of vulnerabilities, attacks, and other risks of cloud exchange markets}\\\\\n \\hline\n\t\\hline\n \\textit{Group-2:}\n & \\multirow{1}*{Data Analytics} & \\multirow{1}*{Chen } & 2018 & Blockchain data analysis\\\\\n\t\\cline{2-5}\n \\textbf{Data Mining}\n\t& \\multirow{1}*{Ponzi Scheme} &\\multirow{1}*{Bartoletti } & 2020 & Dissecting Ponzi schemes on Ethereum\\\\\n\t\\cline{3-5}\n\t\\hline\t\n\t\\hline\t\n \t\\textit{Group-3:}\n\t& \\multirow{1}*{Artificial Intelligence}& \\multirow{1}*{Salah } & 2019 & Blockchain for Artificial Intelligence \\\\\n\t\\cline{3-5}\n\t \\textbf{Decision-}\n\t{}&\\multirow{1}*{(AI)}&\\multirow{1}*{Zheng } & 2020 & Blockchain and Artificial Intelligence\\\\\n\t\\cline{2-5}\n\t 
\\textbf{Making}\n\t&\\multirow{1}*{Machine Learning} & \\multirow{1}*{Chen } & 2018 & Privacy and security design when integrating ML and blockchain \\\\\n\t\\cline{3-5}\n\t \\textbf{Techniques}\n\t{}& \\multirow{1}*{(ML)} &\\multirow{1}*{Liu } & 2020 & {Blockchain and ML for Comm. and Networking Systems}\\\\\n\t\\cline{2-5}\n & \\multirow{1}*{Game Theory} & \\multirow{1}*{Liu } & 2019 & Game theories on blockchain\\\\\n\t\\hline\n\t\\hline\n \t\\textit{Group-4:}\n\t&\\multirow{1}*{Cloud Computing} \n\t& Park & 2017 & {Blockchain security in cloud computing}\\\\\n\t\\cline{2-5}\n \t\\textbf{New Comm.}\n\t&\\multirow{2}*{Edge Computing} \n\t& \\multirow{1}*{Xiong } & 2018 & Blockchain meets edge computing\\\\\n\t\\cline{3-5}\n \t\\textbf{Networking}\n\t {} &&\\multirow{1}*{Yang } & \\multirow{1}*{2019} & Integration of blockchain and edge computing systems\\\\\n\t\\cline{2-5}\n & \\multirow{1}*{5G and Beyond}\n & Nguyen & 2019 & Blockchain for 5G and Beyond Networks\\\\\n\t\\hline\n\\end{tabular}\n\\end{table*}", "id": "6a255f9f-fd0e-48ca-84c9-b6c7842b7759", "level": "subsubsection", "origin_cites_number": 35, "parent_id": "699aa7dd-44f0-4ce9-8112-4d474559a02d", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Introduction" ], [ "subsection", "Taxonomy of State-of-the-art Blockchain Surveys" ], [ "subsubsection", "Blockchain Essentials" ] ], "subsections": [], "title": "Blockchain Essentials" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 2146 ], "content": "}\n The direction of data analytics for blockchains has not yet received too much attention. 
The existing survey studies are shown as follows.\n Chen \\textit{et al.} summarized seven typical research issues of data analysis in blockchains, such as entity recognition, privacy identification, network risk parsing, network visualization and portrait, analysis of the cryptocurrency market, etc.\n Recently, Bartoletti \\textit{et al.} reviewed the Ponzi schemes hiding in Ethereum, aiming to discover the scam behavior and analyze their impact. The authors focused on multiple viewpoints, such as identification methods and the impact of Ponzi schemes on the blockchain ecosystem.\n Finally, Xie \\textit{et al.} provided an overview of the security and privacy issues, management of transactions, and reputation systems of cloud exchange, where blockchain technology is used as a key enabler.", "id": "c73a4ffd-bdc8-4f7a-a0ec-414444ab2fcd", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "699aa7dd-44f0-4ce9-8112-4d474559a02d", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Introduction" ], [ "subsection", "Taxonomy of State-of-the-art Blockchain Surveys" ], [ "subsubsection", "Data Mining and Analytics" ] ], "subsections": [], "title": "Data Mining and Analytics" }, { "cite_extract_rate": 0.2, "cites": [ 2148 ], "content": "}\n Blockchains can bring many security advantages to many other fields. On the other hand, blockchain networks also rely on decision-making techniques such as artificial intelligence (AI) , machine learning , and game theory . This is because the tuning of blockchain network parameters, analysis of user behavior patterns, detection of malicious attacks, identification of market risks, etc., play critical roles in the performance, security, and health of blockchain systems and blockchain networks. 
\n For example,\n Salah \\textit{et al.} studied how blockchain technologies can benefit key problems of AI.\n Zheng \\textit{et al.} proposed the concept of \\textit{blockchain intelligence} and pointed out the opportunities for these two fields to benefit each other.\n Next, Chen \\textit{et al.} discussed the privacy-preserving and secure design of machine learning when blockchain techniques are incorporated.\n Liu \\textit{et al.} presented an overview of the opportunities and applications of integrating blockchains and machine learning technologies in the context of communications and networking.\n Recently, game theoretical solutions have been reviewed when they are applied to blockchain security issues such as malicious attacks and selfish mining, as well as to resource allocation in the management of mining. Both the advantages and disadvantages of game theoretical solutions and models were discussed.", "id": "63b445de-c10c-4306-86a4-e8fae830d25a", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "699aa7dd-44f0-4ce9-8112-4d474559a02d", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Introduction" ], [ "subsection", "Taxonomy of State-of-the-art Blockchain Surveys" ], [ "subsubsection", "Decision-Making Techniques" ] ], "subsections": [], "title": "Decision-Making Techniques" }, { "cite_extract_rate": 0.2, "cites": [ 2147, 2145, 2154, 2152, 7084, 8527, 2153 ], "content": "}\n First, Park \\textit{et al.} discussed how to take advantage of blockchains in cloud computing with respect to security solutions. 
Xiong \\textit{et al.} then investigated how to facilitate blockchain applications in mobile IoT and edge computing environments.\n Yang \\textit{et al.} identified various perspectives including motivations, frameworks, and functionalities when integrating blockchain with edge computing.\n Nguyen \\textit{et al.} presented a comprehensive survey when blockchain meets 5G networks and beyond. The authors focused on the opportunities that blockchain can bring for 5G technologies, which include cloud computing, mobile edge computing, SDN/NFV, network slicing, D2D communications, 5G services, and 5G IoT applications.\n\\begin{table*}[t]\n\\centering\n\\footnotesize\n\\caption{Taxonomy of existing blockchain-related surveys (Part 2).}\n\\label{Table:taxonomyPart2}\n\\begin{tabular}{|p{0.1\\textwidth}|p{0.11\\textwidth}|p{0.1\\textwidth}|p{0.029\\textwidth}|p{0.5\\textwidth}|}\n\\hline\n \\textbf{Group}&\\textbf{Category}&\\textbf{Ref.}&\\textbf{Year} &\\textbf{Topic}\\\\\n\\hline\n \\multirow{12}*{\\textit{Group-5:}}\n &\\multirow{15}*{IoT, IIoT} & \\multirow{1}*{Christidis } & 2016 & Blockchains and Smart Contracts for IoT\\\\\n\t\\cline{3-5}\n \t \\multirow{12}*{\\textbf{IoT \\& IIoT}}\n {}&&\\multirow{1}*{Ali } & 2018 & Applications of blockchains in IoT\\\\\n\t\\cline{3-5}\n {}&&\\multirow{1}*{Fernandez } & 2018 & Usage of Blockchain for IoT\\\\\n\t\\cline{3-5}\n {}&&\\multirow{1}*{Kouicem } & 2018 & IoT security\\\\\n\t\\cline{3-5}\n {}&&\\multirow{1}*{Panarello } & 2018 & Integration of Blockchain and IoT \\\\\n\t\\cline{3-5}\n {}&&\\multirow{1}*{Dai } & 2019 & Blockchain for IoT\\\\\n\t\\cline{3-5}\n\t{}&&\\multirow{1}*{Wang } & 2019 & Blockchain for IoT\\\\ \n\t\\cline{3-5}\n {}&&\\multirow{1}*{Nguyen } & 2019 & Integration of Blockchain and Cloud of Things\\\\\n\t\\cline{3-5}\n {}&&\\multirow{1}*{Restuccia } & 2019 & Blockchain technology for IoT\\\\\n\t\\cline{3-5}\n {}&&\\multirow{1}*{Cao } & 2019 & {Challenges in distributed consensus of 
IoT}\\\\\n\t\\cline{3-5}\n {}&&\\multirow{1}*{Park } & 2020 & Blockchain Technology for Green IoT\\\\\n\t\\cline{3-5}\n {}&&\\multirow{1}*{Lao } & \\multirow{1}*{2020} & IoT Applications in Blockchain Systems\\\\\n\t\\cline{3-5}\n {}&&\\multirow{1}*{Alladi } & 2019 & Blockchain Applications in Industry 4.0 and IIoT\\\\\n \t\\cline{3-5}\n {}&&\\multirow{1}*{Zhang } & 2019 & {5G Beyond for IIoT based on Edge Intelligence and Blockchain}\\\\\n \\cline{2-5}\n & \\multirow{1}*{UAV} & {Alladi } & 2020 & Blockchain-based UAV applications\\\\\n\t\\hline\n\t\\hline\n \\multirow{10}*{\\textit{Group-6:}}\n\t& \\multirow{1}*{General} & \\multirow{1}*{Lu } & 2018 & Functions, applications and open issues of Blockchain\\\\\n\t\\cline{3-5}\n\t \\multirow{10}*{\\textbf{Blockchain}}\n & \\multirow{1}*{Applications} &\\multirow{1}*{Casino } & \\multirow{1}*{2019} & Current status, classification and open issues of Blockchain Apps\\\\\n\t\\cline{2-5}\n\t \\multirow{10}*{\\textbf{Applications}}\n\t&\\multirow{2}*{Agriculture} & \\multirow{1}*{Bermeo } & 2018 & Blockchain technology in agriculture\\\\\n\t\\cline{3-5}\n\t{} && \\multirow{1}*{Ferrag } & \\multirow{1}*{2020} & Blockchain solutions to Security and Privacy for Green Agriculture\\\\\n\t\\cline{2-5}\n\t&\\multirow{1}*{SDN} & \\multirow{1}*{Alharbi } & 2020 & Deployment of Blockchains for Software Defined Networks\\\\\n\t\\cline{2-5}\n\t&\\multirow{1}*{Business Apps} & \\multirow{1}*{Konst. 
} & \\multirow{1}*{2018} & \\multirow{1}*{Blockchain-based business applications} \\\\\n\t\\cline{2-5}\n\t& \\multirow{1}*{Smart City} & \\multirow{1}*{Xie } & \\multirow{1}*{2019} & Blockchain technology applied in smart cities\\\\\n\t\\cline{2-5}\n\t& \\multirow{2}*{Smart Grids} & \\multirow{1}*{Alladi } & 2019 & Blockchain in Use Cases of Smart Grids\\\\\n\t\\cline{3-5}\n\t{} && \\multirow{1}*{Aderibole } & \\multirow{1}*{2020} & Smart Grids based on Blockchain Technology \\\\\n\t\\cline{2-5}\n\t& \\multirow{1}*{File Systems} & \\multirow{1}*{Huang } & \\multirow{1}*{2020} & \\multirow{1}*{Blockchain-based Distributed File Systems, IPFS, Filecoin, etc.}\\\\\n\t\\cline{2-5}\n\t& \\multirow{1}*{Space Industry} & \\multirow{1}*{Torky } & \\multirow{1}*{2020} & \\multirow{1}*{Blockchain in Space Industry}\\\\\n\t\\cline{2-5}\n\t& \\multirow{1}*{COVID-19} & \\multirow{1}*{Nguyen } & \\multirow{1}*{2020} & \\multirow{1}*{Combat COVID-19 using Blockchain and AI-based Solutions}\\\\\n\t\\hline\n\t\\hline\n \\multirow{3}*{\\textit{Group-7:}}\n\t& \\multirow{4}*{Overview} & \\multirow{1}*{Yuan } & 2016 & The state of the art and future trends of Blockchain \\\\\n\t\\cline{3-5}\n \\multirow{3}*{\\textbf{General}}\n & \\multirow{4}*{\\& Outlook} & Zheng & 2017 & Architecture, Consensus, and Future Trends of Blockchains\\\\\n\t\\cline{3-5}\n\t\\multirow{3}*{\\textbf{Overview}}\n\t{} && \\multirow{1}*{Zheng } & 2018 & Challenges and opportunities of Blockchain\\\\\n\t\\cline{3-5}\n\t{} && \\multirow{1}*{Yuan } & 2018 & Blockchain and cryptocurrencies \\\\\n\t\\cline{3-5}\n\t{} && \\multirow{1}*{Kolb } & 2020 & Core Concepts, Challenges, and Future Directions in Blockchains\\\\\n\\hline\n\\end{tabular}\n\\end{table*}", "id": "8ccfbc8f-00ce-4423-b3c9-b4b963fd8924", "level": "subsubsection", "origin_cites_number": 35, "parent_id": "699aa7dd-44f0-4ce9-8112-4d474559a02d", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], 
[ "section", "Introduction" ], [ "subsection", "Taxonomy of State-of-the-art Blockchain Surveys" ], [ "subsubsection", "New Communications Networking" ] ], "subsections": [], "title": "New Communications Networking" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 2152, 2153, 7084, 8527, 2154 ], "content": "}\n The blockchain-based applications for Internet of Things (IoT) and Industrial Internet of Things (IIoT) have received the largest amount of attention from both academia and industry. \n For example, as a pioneer work in this category, Christidis \\textit{et al.} provided a survey about how blockchains and smart contracts promote the IoT applications.\n Later on, Nguyen \\textit{et al.} presented an investigation of the integration between blockchain technologies and cloud of things with in-depth discussion on backgrounds, motivations, concepts and architectures.\n Recently, Park \\textit{et al.} emphasized on the topic of introducing blockchain technologies to the sustainable ecosystem of green IoT.\n For the IIoT, Zhang \\textit{et al.} discussed the integration of blockchain and edge intelligence to empower a secure IIoT framework in the context of 5G and beyond.\n In addition, when applying blockchains to the unmanned aerial vehicles (UAV), Alladi \\textit{et al.} reviewed numerous application scenarios covering both commercial and military domains such as network security, surveillance, etc.", "id": "bf72abc0-fa7b-4c46-86b5-a4ca4d48ed47", "level": "subsubsection", "origin_cites_number": 15, "parent_id": "699aa7dd-44f0-4ce9-8112-4d474559a02d", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Introduction" ], [ "subsection", "Taxonomy of State-of-the-art Blockchain Surveys" ], [ "subsubsection", "IoT \\& IIoT" ] ], "subsections": [], "title": "IoT \\& IIoT" }, { "cite_extract_rate": 0, "cites": [], "content": "}\nBlockchains have spawned enormous number of applications in 
various fields. The research areas covered by the existing surveys on the blockchain-based applications include general applications , agriculture , Software-defined Networking (SDN) , business applications , smart city , smart grids , distributed file systems , space industry , and COVID-19 .\nSome of those surveys are reviewed as follows.\nLu \textit{et al.} performed a literature review on the fundamental features of blockchain-enabled applications. Through the review, the authors aim to anticipate the development trajectory of blockchain technologies. \nThen, Casino \textit{et al.} presented a systematic survey of blockchain-enabled applications in the context of multiple sectors and industries. Both the current status and the prospective characteristics of blockchain technologies were identified.\n In more specific directions, Bermeo \textit{et al.} presented a review of research works on applying blockchain technologies to agriculture. Through an overview of the primary studies published between 2016 and 2018, they found some interesting phenomena, e.g., a large portion of the relevant papers address food supply chain problems, and researchers from the Asian community dominate the blockchain-based agriculture studies.\n Later on, Ferrag \textit{et al.} concentrated on the security and privacy issues of green IoT-based agriculture. 
They also investigated how blockchain solutions and consensus algorithms could be adapted to green IoT-based agriculture.\nAlharbi then described how blockchain technologies can be integrated into SDN architecture to provide security, confidentiality, and integrity.\nKonstantinidis \textit{et al.} discussed the various applications of blockchain technology in the business sectors.\n Xie \textit{et al.} provided a literature review on the smart city services involving blockchain technologies, such as smart citizen, smart healthcare, smart transportation, management of supply chain, etc.\n Then, based on the blockchain technology, the two surveys on smart grids discussed the conceptual model, different use cases, energy trading processes, efficient power generation and distribution strategies, system maintenance and diagnosis for grid facilities, and security and privacy preservation in smart grid domains.\n Huang \textit{et al.} reviewed the integration of blockchain-based solutions and distributed file systems. Taking the Inter-Planetary File System (IPFS) and Swarm as two representative distributed file systems, the authors introduced the principle and structure, as well as the state-of-the-art studies, of blockchain-empowered distributed file systems and their utilization scenarios.\n Next, Torky \textit{et al.} conducted a systematic discussion of the conceptual exploration of adopting blockchain technology in the space industry. A blockchain-based satellite network, namely SpaceChain, has been initially implemented as a case study of the proposed blockchain-empowered satellite system.\n As a timely survey on combating the coronavirus (COVID-19), Nguyen \textit{et al.} presented a comprehensive review of integrating blockchain and AI technologies while fighting the coronavirus crisis. 
The roles of blockchain in tackling the pandemic span a wide range of applications, such as population tracking, privacy preservation for citizens, supply chain management, and other tracking services.", "id": "0d615fcf-d016-447c-856a-d9395512d589", "level": "subsubsection", "origin_cites_number": 11, "parent_id": "699aa7dd-44f0-4ce9-8112-4d474559a02d", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Introduction" ], [ "subsection", "Taxonomy of State-of-the-art Blockchain Surveys" ], [ "subsubsection", "Blockchain Applications" ] ], "subsections": [], "title": "Blockchain Applications" }, { "cite_extract_rate": 0, "cites": [], "content": "}\n The final group of survey articles overviewed the basic concepts of blockchains and cryptocurrencies, the fundamental research challenges, and general issues such as consensus algorithms, solutions to scalability and security, privacy-preserving issues, etc. Finally, the authors highlighted further potential technical challenges and open issues to shed light on future studies of blockchain technologies.\n\textbf{Summary of Survey-Article Review}: Through the brief review of the state-of-the-art surveys, we have found that blockchain technologies have been adaptively integrated into a growing range of application sectors. 
The blockchain theory and technology will bring substantial innovations, incentives, and a great number of application scenarios in diverse fields.\n Based on the analysis of those survey articles, we believe that there will be more survey articles published in the near future, very likely in the areas of sharding techniques, scalability, interoperability, smart contracts, big data, AI technologies, 5G and Beyond, edge computing, cloud computing, and many other fields.", "id": "1f0803e0-b29b-4fcc-9608-4cebe12ee97c", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "699aa7dd-44f0-4ce9-8112-4d474559a02d", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Introduction" ], [ "subsection", "Taxonomy of State-of-the-art Blockchain Surveys" ], [ "subsubsection", "General Overview \\& Outlook" ] ], "subsections": [], "title": "General Overview \\& Outlook" }, { "cite_extract_rate": 0, "cites": [], "content": "{\\color{black}\n Via the overview, shown in Table \\ref{Table:taxonomy}, Table \\ref{Table:taxonomyPart2}, Fig. \\ref{fig:category} and Fig. 
\\ref{fig:classification}, of the existing blockchain-related surveys, we have found that a survey of the state-of-the-art theories, modelings and useful tools that can i) improve the performance of blockchains, and ii) help better understand blockchains, is still missing.\n In particular, the following directions need in-depth investigations.", "id": "4e654309-2d1f-439e-a689-57061b3e4461", "level": "subsection", "origin_cites_number": 0, "parent_id": "86beefe2-e56f-4ee8-b0cb-5275202efa70", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Introduction" ], [ "subsection", "Motivation of This Survey" ] ], "subsections": [ "cba48f3b-4d3d-43a3-91c4-faed5dc07e93", "93998f25-61e9-41fa-84b6-a47dcbacdadf", "860f63a4-a14f-4a93-8d8e-e39c054ed50b" ], "title": "Motivation of This Survey" }, { "cite_extract_rate": 0, "cites": [], "content": "The performance of blockchains includes a number of metrics such as throughput, latency, storage efficiency, reliability, scalability, interoperability, and etc. Many theories can be devoted to improving the performance metrics of blockchains. For example, the following perspectives are worthy paying more efforts.\n \\textbf{Scalability Solutions}. Although blockchain is viewed as a distributed and public database of transactions and has become a platform for decentralized applications, blockchains still face the scalability problem. For example, the system throughput is not scalable with the increasing size of a blockchain network. Thus, the scalability solutions of blockchain are still required further studying. The promising solutions to improve the scalability of blockchains include sharding-based and multiple-chain \\& cross-chain techniques.\n \\textbf{New Protocols and Infrastructures}. 
\n Several classic consensus protocols, such as the practical Byzantine-fault tolerant (PBFT) protocol and the proof-of-work (PoW) protocol, have been widely adopted by popular blockchain systems. However, those classic protocols cannot meet all consensus requirements arising in emerging new blockchains. Thus, it is necessary to review new protocols and infrastructures proposed to serve new scenarios of blockchain-based applications.", "id": "cba48f3b-4d3d-43a3-91c4-faed5dc07e93", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "4e654309-2d1f-439e-a689-57061b3e4461", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Introduction" ], [ "subsection", "Motivation of This Survey" ], [ "subsubsection", "Theories to Improving the Performance of Blockchains" ] ], "subsections": [], "title": "Theories to Improving the Performance of Blockchains" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 2146 ], "content": "The existing studies on better understanding blockchains that have been reviewed by other surveys mainly focus on security and privacy issues and the analysis of the cryptocurrency market, for example, the identification of Ponzi schemes and other scam behaviors.\n In our opinion, to better understand blockchains, a wider range of topics should also be emphasized.\n \textbf{Graph-based Theories}. Beyond the classic graph \modified{knowledge} already applied to blockchains, such as the Merkle tree and directed acyclic graph (DAG) techniques, general graph-based analytical techniques are powerful approaches to finding insights behind the transactions, smart contracts, and the network structure of blockchains. \n \textbf{Stochastic Modelings, Queueing Theories and Analytical Models}. \n Several phases of blockchain networks can be described using stochastic modelings, queueing theories, and analytical models. 
Based on these theoretical models, researchers can conduct property analysis of a blockchain network, perform stability analysis, derive failure probabilities, model the mining procedure, estimate blockchain confirmation time, explore the synchronization process of the Bitcoin network and other working principles of blockchains, and understand how blockchains respond to different network conditions and even malicious attacks.\n \textbf{Data Analytics for Cryptocurrency Blockchains}. \n Security issues of cryptocurrency blockchains and their markets are attracting more and more attention. Although several surveys have already studied the Ponzi schemes in Ethereum and other general security and privacy issues of blockchain systems, those surveys mainly emphasized the identification approaches and the impacts on blockchain systems.\n In contrast, in our survey, we review the latest studies that exploit data analytics techniques to detect market risks in cryptocurrency ecosystems, where the risks include not only Ponzi schemes but also cryptojacking, market-manipulation mining, and money-laundering activities. 
Furthermore, we also review a few studies utilizing data science and stochastic modelings to produce a portrait of cryptoeconomic systems.", "id": "93998f25-61e9-41fa-84b6-a47dcbacdadf", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "4e654309-2d1f-439e-a689-57061b3e4461", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Introduction" ], [ "subsection", "Motivation of This Survey" ], [ "subsubsection", "Modelings and Techniques for Better Understanding Blockchains" ] ], "subsections": [], "title": "Modelings and Techniques for Better Understanding Blockchains" }, { "cite_extract_rate": 0, "cites": [], "content": "In the aforementioned 66 surveys, we find that there is still no single article focusing on the performance measurements, datasets and experiment tools for blockchains. Instead, our survey in this article particularly reviews: i) performance measurements with respect to throughput, end-to-end confirmation delays of transactions, forking rate, resource utilization, scalability, etc.; and ii) useful evaluation tools and datasets dedicated to blockchain experiments.\n In summary, with this article, we would like to fill the gap by emphasizing the cutting-edge theoretical studies, modelings, and useful tools for blockchains.\n Particularly, we try to include the latest high-quality research outputs that have not been included in other existing survey articles.\n We believe that this survey can shed new light on the further development of blockchains.", "id": "860f63a4-a14f-4a93-8d8e-e39c054ed50b", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "4e654309-2d1f-439e-a689-57061b3e4461", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Introduction" ], [ "subsection", "Motivation of This Survey" ], [ "subsubsection", "Useful Measurements, Datasets and Experiment Tools for 
Blockchains" ] ], "subsections": [], "title": "Useful Measurements, Datasets and Experiment Tools for Blockchains" }, { "cite_extract_rate": 0, "cites": [], "content": "Our survey presented in this article includes the following contributions.\n\\begin{itemize}\n\t\\item We conduct a brief classification of existing blockchain surveys to highlight the meaning of our literature review shown in this survey.\n\t\\item We then present a comprehensive investigation on the state-of-the-art theoretical modelings, analytics models, performance measurements, and useful experiment tools for blockchains, blockchain networks, and blockchain systems.\n\t\\item Several promising directions and open issues for future studies are also envisioned finally.\n\\end{itemize}\n \\begin{figure}[t]\n \\centering\n\t\\vspace{-3mm}\n \\includegraphics[width=0.56\\textwidth]{./figures/PaperStructure.png}\n\t\\vspace{-2mm}\n \\caption{The structure of this article.}\n \\label{fig:structure}\n \\vspace{-5mm}\n \\end{figure}\nThe structure of this survey is shown in Fig. \\ref{fig:structure} and organized as follows.\nSection \\ref{sec:preliminary} introduces the preliminaries of blockchains.\nSection \\ref{sec:theoryImproving} summarizes the state-of-the-art theoretical studies that improve the performance of blockchains.\nIn Section \\ref{sec:understand}, we then review various modelings and analytic models that help understand blockchains.\nDiverse measurement approaches, datasets, and useful tools for blockchains are overviewed in Section \\ref{sec:tools}.\nWe outlook the open issues in Section \\ref{sec:openissue}.\nFinally, Section \\ref{sec:conclusion} concludes this article. 
\n}\n{\\color{black}", "id": "abed2d3c-0cf3-490a-9dc7-b107c7e7b739", "level": "subsection", "origin_cites_number": 0, "parent_id": "86beefe2-e56f-4ee8-b0cb-5275202efa70", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Introduction" ], [ "subsection", "Contribution of Our Survey" ] ], "subsections": [], "title": "Contribution of Our Survey" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:preliminary}\n Blockchain is a promising paradigm for content distribution and distributed consensus over P2P networks. In this section, we present the basic concepts, definitions and terminologies of blockchains appeared in this article. \\modified{Due to the frequent use of acronyms in this paper, we will include an acronym table, i.e., Table \\ref{Table:acronyms}, in this section.}\n\\begin{table*}[t]\n\\caption{\\modified{Acronym Table}}\n\\centering\n\\footnotesize\n\\begin{tabular}{|p{0.25\\textwidth}|p{0.25\\textwidth}|}\n\\hline\n\\textbf{Acronym} &\\textbf{Meaning}\\\\\n \\hline\n AI & Artificial Intelligence\\\\\n \\hline\n BFT & Byzantine Fault Tolerant\\\\\n \\hline\n CA & Contract Account\\\\\n \\hline\n CapsNet & Capsule Network\\\\\n \\hline\n CCG & Contract Creation Graph\\\\\n \\hline\n CIG & Contract Invocation Graph\\\\\n \\hline\n DAG & Directed Acyclic Graph\\\\\n \\hline\n DApp & Distributed Application\\\\\n \\hline\n EHG & Extreme High Graph\\\\\n \\hline\n ELG & Extreme Low Graph\\\\\n \\hline\n EOA & External Owned Account\\\\\n \\hline\n ETG & Extreme Transaction Graph\\\\\n \\hline\n EVM & Ethereum Virtual Machine\\\\\n \\hline\n IIoT & Industrial Internet of Things\\\\\n \\hline\n IoT & Internet of Things\\\\\n \\hline\n IPFS & Inter-Planetary File System\\\\\n \\hline\n L2S & Latency-to-Shard\\\\\n \\hline\n MFG & Money Flow Graph\\\\\n \\hline\n ML & Machine Learning\\\\\n \\hline\n MMR & Monitor Multiplexing Reading\\\\\n \\hline\n NMG & Normal Graph\\\\\n 
\\hline\n PBFT & Practical Byzantine-fault Tolerant\\\\\n \\hline\n PoS & Proof of Stake\\\\\n \\hline\n PoT & Proof-of-Trust\\\\\n \\hline\n PoW & Proof of Work\\\\\n \\hline\n RDMA & Remote Direct Memory Access\\\\\n \\hline\n SDN & Software-defined Networking\\\\\n \\hline\n SPV & Simple Payment Verification\\\\\n \\hline\n T2S & Transaction-to-Shard\\\\\n \\hline\n TX & Transaction\\\\\n \\hline\n UAV & Unmanned Aerial Vehicles \\\\\n \\hline\n UTXO & Unspent Transaction Output\\\\\n \\hline\n\\end{tabular}\n\\label{Table:acronyms}\n\\end{table*}", "id": "1d23888a-2da9-4dfa-85ca-5b4d1b9044f9", "level": "section", "origin_cites_number": 0, "parent_id": "71dcb80c-95ed-47fb-909c-9249db523d8e", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Preliminaries of Blockchains" ] ], "subsections": [ "a1e013e4-8c60-4688-acb5-68434c282aa8", "ee89a45d-70ed-4ea5-b465-e3eface4c8a0", "cd1c6281-ede4-44fb-bea8-ec04cba453ea" ], "title": "Preliminaries of Blockchains" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "a1e013e4-8c60-4688-acb5-68434c282aa8", "level": "subsection", "origin_cites_number": 0, "parent_id": "1d23888a-2da9-4dfa-85ca-5b4d1b9044f9", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Preliminaries of Blockchains" ], [ "subsection", "Prime Blockchain Platforms" ] ], "subsections": [ "6b128908-2c93-401b-96c2-38104339dd18", "22837043-f856-468f-925e-ce2b735cc666", "4df01591-1fef-4698-b044-a7acd85340c7", "ace8f5fe-bbdc-4138-bc22-3d578fac4043" ], "title": "Prime Blockchain Platforms" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 7550 ], "content": "Bitcoin is viewed as the blockchain system that executes the first cryptocurrency. It builds upon two major techniques, i.e., \\textit{Nakamoto Consensus} and \\textit{UTXO Model}, which are introduced as follows.\n \\textbf{Nakamoto Consensus}. 
To achieve an agreement of blocks, Bitcoin adopts the Nakamoto Consensus, in which miners generate new blocks by solving a puzzle. In such a puzzle-solving process, also referred to as mining, miners need to calculate a nonce value that fits the required difficulty level . Through changing the difficulty, Bitcoin system can maintain a stable rate of block-generation, which is about one block per 10 minutes.\n When a miner generates a new block, it broadcasts this message to all the other miners in the network. If others receive this new block, they add this block to their local chain. If all of the other miners receive this new block timely, the length of the main chain increases by one. However, because of the network delays, not always all the other miners can receive a new block in time. When a miner generates a block before it receives the previous one, a fork yields. Bitcoin addresses this issue by following the rule of longest chain.\n \\textbf{UTXO Model}. The Unspent Transaction Output (UTXO) model is adopted by cryptocurrencies like Bitcoin, and other popular blockchain systems . A UTXO is a set of digital money, each represents a chain of ownership between the owners and the receivers based on the cryptography technologies.\n In a blockchain, the overall UTXOs form a set, in which each element denotes the unspent output of a transaction, and can be used as an input for a future transaction. 
A client may own multiple UTXOs, and the total balance of this client is calculated by summing up all associated UTXOs.\n Using this model, blockchains can efficiently prevent double-spend attacks.", "id": "6b128908-2c93-401b-96c2-38104339dd18", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "a1e013e4-8c60-4688-acb5-68434c282aa8", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Preliminaries of Blockchains" ], [ "subsection", "Prime Blockchain Platforms" ], [ "subsubsection", "Bitcoin" ] ], "subsections": [], "title": "Bitcoin" }, { "cite_extract_rate": 0, "cites": [], "content": "Ethereum is an open-source blockchain platform enabling the function of smart contracts.\n As the token of Ethereum, \textit{Ether} is rewarded to the miners who conduct computation to secure the consensus of the blockchain.\n Ethereum executes on decentralized Ethereum Virtual Machines (EVMs), in which scripts run on a network consisting of public Ethereum nodes. Compared with Bitcoin, the EVM's instruction set is considered Turing-complete. Ethereum also introduces an internal pricing mechanism, called \textit{gas}. A unit of gas measures the amount of computational effort needed to execute operations in a transaction. Thus, the gas mechanism helps restrain spam in smart contracts.\n Ethereum 2.0 is an upgraded version of the original Ethereum. The upgrades include a transition from PoW to Proof-of-Stake (PoS) and a throughput improvement based on sharding technologies.\n \modified{The comparison between Bitcoin \& Ethereum is summarized in Table \ref{Table:compareBandE}.}\n \modified{\textbf{Account/Balance Model}. Unlike Bitcoin, where states are composed of UTXOs, Ethereum adopts a more common and straightforward model used by banks, the Account/Balance Model. 
In every account, an incrementing transaction counter, the nonce, is implemented to prevent double-spending attacks, serving as a complement to the model's simple structure. There are basically two types of accounts, \textit{external owned accounts} (EOAs) and \textit{contract accounts} (CAs), controlled by private keys and contract codes, respectively.}\n\begin{table*}[t]\n\caption{\modified{Comparison between Bitcoin \& Ethereum}}\n\centering\n\footnotesize\n\begin{tabular}{|p{0.1\textwidth}|p{0.13\textwidth}|p{0.17\textwidth}|p{0.1\textwidth}|}\n\hline\n\textbf{} &\textbf{State Model} &\textbf{Consensus Protocols} &\textbf{Throughput}\\\n\hline\n\t\textbf{Bitcoin}\n\t& UTXO & PoW & 3 to 7 TPS\\\n\t\hline\n \textbf{Ethereum1.0}\n\t& Account/Balance & PoW & 7 to 15 TPS\\\n\t\hline\n\t\textbf{Ethereum2.0}\n\t& Account/Balance & PoS Sharding & Unknown\\\n\t\hline\n\end{tabular}\n\label{Table:compareBandE}\n\end{table*}\n\modified{", "id": "22837043-f856-468f-925e-ce2b735cc666", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "a1e013e4-8c60-4688-acb5-68434c282aa8", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Preliminaries of Blockchains" ], [ "subsection", "Prime Blockchain Platforms" ], [ "subsubsection", "Ethereum" ] ], "subsections": [], "title": "Ethereum" }, { "cite_extract_rate": 0, "cites": [], "content": "Hyperledger Fabric is a popular permissioned blockchain platform for industrial use. In industry, goals are quite different from those of cryptocurrency systems. Greater significance is attached to lower maintenance cost, higher throughput performance and permission control. In a permissioned setting, a node knows the identities of the other nodes, even though it may not trust them. 
With different levels of trust among users, different consensus protocols can be customized for fault tolerance.\n}", "id": "4df01591-1fef-4698-b044-a7acd85340c7", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "a1e013e4-8c60-4688-acb5-68434c282aa8", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Preliminaries of Blockchains" ], [ "subsection", "Prime Blockchain Platforms" ], [ "subsubsection", "Hyperledger Fabric" ] ], "subsections": [], "title": "Hyperledger Fabric" }, { "cite_extract_rate": 0, "cites": [], "content": "EOSIO is another popular blockchain platform, released by the company \textit{block.one} in 2018. Different from Bitcoin and Ethereum, the smart contracts of EOSIO do not need to pay transaction fees. Its throughput is claimed to reach millions of transactions per second. Furthermore, EOSIO also enables low block-confirmation latency, low-overhead BFT finality, etc. These features have attracted a large number of users and developers to quickly and easily deploy decentralized applications in a governed blockchain. 
For example, in total 89,800,000 EOSIO blocks were generated in less than one and a half years after its first launch.", "id": "ace8f5fe-bbdc-4138-bc22-3d578fac4043", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "a1e013e4-8c60-4688-acb5-68434c282aa8", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Preliminaries of Blockchains" ], [ "subsection", "Prime Blockchain Platforms" ], [ "subsubsection", "EOSIO" ] ], "subsections": [], "title": "EOSIO" }, { "cite_extract_rate": 0, "cites": [], "content": "The consensus mechanism in blockchains provides fault tolerance to achieve an agreement on the same state of the blockchain network, such as a single state of all transactions in a cryptocurrency blockchain.\n Popular proof-based consensus protocols include PoW and PoS.\n In PoW, miners compete with each other to solve a puzzle whose solution is difficult to produce but easy for others to verify. Once a miner yields a required nonce value through a huge number of attempts, it is rewarded with a certain amount of cryptocurrency for creating a new block.\n In contrast, PoS doesn't have miners. Instead, a new block is forged by \textit{validators} selected randomly within a committee. The probability of being chosen as a validator is linearly related to the size of its stake.\n PoW and PoS are both adopted as consensus protocols for the security of cryptocurrencies. The former is based on CPU power, and the latter on coin age. 
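The stake-proportional validator selection described above can be sketched as follows (a simplified illustration; production PoS protocols add unbiased randomness beacons, coin-age weighting, and slashing, and the validator names are hypothetical):

```python
import random

def pick_validator(stakes: dict, seed: int) -> str:
    """Select the next block forger with probability proportional to stake."""
    rng = random.Random(seed)  # stand-in for a shared randomness beacon
    return rng.choices(list(stakes), weights=list(stakes.values()), k=1)[0]

stakes = {"v1": 60, "v2": 30, "v3": 10}
# Over many rounds, v1 (holding 60% of total stake) forges about 60% of blocks.
wins = sum(pick_validator(stakes, seed) == "v1" for seed in range(10_000))
```

No puzzle-solving is involved: selection costs one weighted random draw, which is where the energy advantage of PoS over PoW comes from.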
Therefore, PoS has a lower energy cost and is less vulnerable to the 51\% attack.", "id": "ee89a45d-70ed-4ea5-b465-e3eface4c8a0", "level": "subsection", "origin_cites_number": 0, "parent_id": "1d23888a-2da9-4dfa-85ca-5b4d1b9044f9", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Preliminaries of Blockchains" ], [ "subsection", "Consensus Mechanism" ] ], "subsections": [], "title": "Consensus Mechanism" }, { "cite_extract_rate": 0, "cites": [], "content": "Blockchain, as a distributed and public database of transactions, has become a platform for decentralized applications. Despite its increasing popularity, blockchain technology faces the scalability problem: throughput does not scale with the increasing network size. Thus, scalable blockchain protocols that can solve the scalability issues are still urgently needed. Many different directions, such as \textit{Off-chain}, \textit{DAG}, and \textit{Sharding} techniques, have been exploited to address the scalability of blockchains.\nHere, we present several representative terms related to scalability.", "id": "cd1c6281-ede4-44fb-bea8-ec04cba453ea", "level": "subsection", "origin_cites_number": 0, "parent_id": "1d23888a-2da9-4dfa-85ca-5b4d1b9044f9", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Preliminaries of Blockchains" ], [ "subsection", "Scalability of Blockchains" ] ], "subsections": [ "1bc71002-be3a-4b13-8591-b63f5f29b22f", "ef410d3d-cbe3-4eaf-b2f1-d9badabb9dfa", "f8845832-e1dc-4cb6-ade0-886771ebac9b", "e58ab384-f3f9-447c-8c35-09379b060820" ], "title": "Scalability of Blockchains" }, { "cite_extract_rate": 0, "cites": [], "content": "Contrary to on-chain transactions, which are processed on the blockchain and visible to all nodes of the blockchain network, off-chain transactions are processed outside the blockchain through a third-party 
guarantor who endorses the correctness of the transaction. \n The on-chain transactions incur longer latencies since the confirmation of an on-chain transaction has to go through multiple steps. In contrast, the off-chain techniques can execute the off-chain transactions instantly because those transactions do not need to wait in a queue as they would on an on-chain network.", "id": "1bc71002-be3a-4b13-8591-b63f5f29b22f", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "cd1c6281-ede4-44fb-bea8-ec04cba453ea", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Preliminaries of Blockchains" ], [ "subsection", "Scalability of Blockchains" ], [ "subsubsection", "Off-chain Techniques" ] ], "subsections": [], "title": "Off-chain Techniques" }, { "cite_extract_rate": 0, "cites": [], "content": "Mathematically, a DAG is a finite directed graph where no directed cycles exist. In the context of blockchain, DAG is viewed as a revolutionary technology that can upgrade blockchain to a new generation. This is because DAG is blockless, and all transactions link to multiple other transactions following a topological order on a DAG network. Thus, data can move directly between network participants. This results in a faster, cheaper and more scalable solution for blockchains.\n In fact, the bottleneck of blockchains mainly lies in the structure of blocks. 
Thus, the blockless DAG could probably be a promising solution for substantially improving the scalability of blockchains.", "id": "ef410d3d-cbe3-4eaf-b2f1-d9badabb9dfa", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "cd1c6281-ede4-44fb-bea8-ec04cba453ea", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Preliminaries of Blockchains" ], [ "subsection", "Scalability of Blockchains" ], [ "subsubsection", "DAG" ] ], "subsections": [], "title": "DAG" }, { "cite_extract_rate": 0, "cites": [], "content": "The consensus protocol of Bitcoin, i.e., Nakamoto Consensus, has significant drawbacks in transaction throughput and network scalability. To address these issues, the \\textit{sharding} technique is one of the outstanding approaches, which improves the throughput and scalability by partitioning the blockchain network into several small shards such that each can process a bunch of unconfirmed transactions in parallel to generate medium blocks. Such medium blocks are then merged together in a final block. \n Basically, the sharding technique includes \\textit{Network Sharding}, \\textit{Transaction Sharding} and \\textit{State Sharding}.", "id": "f8845832-e1dc-4cb6-ade0-886771ebac9b", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "cd1c6281-ede4-44fb-bea8-ec04cba453ea", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Preliminaries of Blockchains" ], [ "subsection", "Scalability of Blockchains" ], [ "subsubsection", "Sharding Technique" ] ], "subsections": [], "title": "Sharding Technique" }, { "cite_extract_rate": 0, "cites": [], "content": "One shortcoming of the sharding technique is that the malicious network nodes residing in the same shard may collude with each other, resulting in security issues. 
Therefore, the sharding-based protocols exploit a \\textit{reshuffling} strategy to address such security threats. However, reshuffling brings the \\textit{cross-shard} data migration. Thus, how to efficiently handle the cross-shard transactions becomes an emerging topic in the context of sharding blockchains.\n}", "id": "e58ab384-f3f9-447c-8c35-09379b060820", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "cd1c6281-ede4-44fb-bea8-ec04cba453ea", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Preliminaries of Blockchains" ], [ "subsection", "Scalability of Blockchains" ], [ "subsubsection", "Cross-Shard Transactions" ] ], "subsections": [], "title": "Cross-Shard Transactions" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:theoryImproving}", "id": "952d99e8-1c8e-4de4-9790-1ae69d217ecb", "level": "section", "origin_cites_number": 0, "parent_id": "71dcb80c-95ed-47fb-909c-9249db523d8e", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Theories to Improving the Performance of Blockchains" ] ], "subsections": [ "ac3c9506-8452-4524-a356-ccef714e8bef", "b5de63b1-5d76-4633-938e-da9b3948c3e1", "f3a78dd5-a681-47f6-a4a2-e800a7104483" ], "title": "Theories to Improving the Performance of Blockchains" }, { "cite_extract_rate": 0, "cites": [], "content": "Latest Theories to Improving Blockchain Performance}}\n \\modified{Summary of this subsection is included in Table \\ref{Table:theories}.}", "id": "ac3c9506-8452-4524-a356-ccef714e8bef", "level": "subsection", "origin_cites_number": 0, "parent_id": "952d99e8-1c8e-4de4-9790-1ae69d217ecb", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Theories to Improving the Performance of Blockchains" ], [ "subsection", " {\\color{black" ] ], "subsections": [ 
"e4b30937-4b91-4f31-905b-7791ea4498ba", "137d8029-8f96-4c96-9505-5586f472e891", "036e9cec-3b08-4d7c-b493-33908a004238" ], "title": " {\\color{black" }, { "cite_extract_rate": 0.5, "cites": [ 2163, 8528, 8529, 2158, 2161, 2162, 2159, 7551, 2155, 2160, 2156, 7552, 2157 ], "content": "Throughput \\& Latency}}\n {\\color{black} \n Aiming to reduce the confirmation latency of transactions to milliseconds, Hari \\textit{et al.} proposed a high-throughput, low-latency, deterministic confirmation mechanism called ACCEL for accelerating Bitcoin's block confirmation. The key findings of this paper include how to identify the singular blocks, and how to use singular blocks to reduce the confirmation delay. Once the confirmation delay is reduced, the throughput increases accordingly.\n Two obstacles have hindered the scalability of the cryptocurrency systems. The first one is the low throughput, and the other one is the requirement for every node to duplicate the communication, storage, and state representation of the entire blockchain network. Wang \\textit{et al.} studied how to solve the above obstacles. Without weakening decentralization and security, the proposed Monoxide technique offers a linear scale-out ability by partitioning the workload. Meanwhile, they preserved the simplicity of the blockchain system and amplified its capacity. The authors also proposed a novel \\textit{Chu-ko-nu} mining mechanism, which ensures the cross-zone atomicity, efficiency and security of the blockchain system with thousands of independent zones. Then, the authors conducted experiments to evaluate the scalability performance of the proposed Monoxide with respect to TPS, the overheads of cross-zone transactions, the confirmation latency of transactions, etc.\n For Bitcoin, low \\textit{throughput} and long \\textit{transaction confirmation latency} are two critical bottleneck metrics. 
To overcome these two bottlenecks, Yang \\textit{et al.} designed a new blockchain protocol called Prism, which achieves a scalable throughput as high as 70,000 transactions per second, while ensuring the full security of Bitcoin. \n The Prism project is open-sourced on GitHub. The instances of Prism can be flexibly deployed on commercial cloud platforms such as AWS.\n However, the authors also admitted that although the proposed Prism has a high throughput, its confirmation latency still remains as large as 10 seconds since there is only a single \\textit{voter chain} in Prism. A promising solution is to introduce a large number of such voter chains, each of which is not necessarily secure. Even if every voter chain is attacked with a probability as high as 30\\%, the success rate of attacking half of all voter chains is still theoretically very low. Thus, the authors believed that using multiple voter chains would be a good solution to reducing the confirmation latency while not sacrificing system security.\n Considering that Ethereum simply allocates transactions to shards according to their account addresses rather than relying on the workload or the complexity of transactions, the resource consumption of transactions in each shard is unbalanced. As a consequence, the network transaction throughput is degraded. To solve this problem, Woo \\textit{et al.} proposed a heuristic algorithm named GARET, which is a gas consumption-aware relocation mechanism for improving throughput in sharding-based Ethereum environments. In particular, the proposed GARET can relocate transaction workloads of each shard according to the gas consumption. 
The experiment results show that GARET achieves a higher transaction throughput and a lower transaction latency compared with existing techniques.\n }\n\\begin{table*}[t]\n\\caption{Latest Theories of Improving the Performance of Blockchains.}\n\\centering\n\\footnotesize\n\\begin{tabular}{|p{0.08\\textwidth}|p{0.03\\textwidth}|p{0.12\\textwidth}|p{0.23\\textwidth}|p{0.35\\textwidth}|}\n\\hline\n\\textbf{Emphasis} &\\textbf{Ref.} &\\textbf{Recognition} &\\textbf{{\\color{blue}Challenge}}& \\textbf{Methodology}\\\\\n\\hline\n\t\\multirow{11}*{Throughput} \n\t& & ACCEL: Reduce the confirmation delay of blocks & {\\color{blue}Most of the blockchain applications desire fast confirmation of their transactions} & Authors proposed a high-throughput, low-latency, deterministic confirmation mechanism, aiming to accelerate Bitcoin's block confirmation. \\\\\n\t\\cline{2-5}\n\t\\multirow{6}*{\\& Latency} & & Monoxide & {\\color{blue}Scalability issues, and efficient processing of cross-shard transactions} & The proposed Monoxide offers a linear scale-out by partitioning workloads. Particularly, the \\textit{Chu-ko-nu} mining mechanism enables the cross-zone atomicity, efficiency and security of the system. \\\\\n\t\\cline{2-5}\n\t{ } & & Prism & {\\color{blue}Low transaction throughput and large transaction confirmation latency of Bitcoin} & Authors proposed a new blockchain protocol, i.e., Prism, aiming to achieve a scalable throughput with the full security of Bitcoin. \\\\\n\t\\cline{2-5}\n\t& & GARET & {\\color{blue}How to place transactions into shards considering the complexity of transactions or the workload generated by transactions} & Authors proposed a gas consumption-aware relocation mechanism for improving throughput in sharding-based Ethereum. 
\\\\\n\t\\hline\n\t\\multirow{8}*{Storage} \n\t& & Erasure code-based & {\\color{blue}How to reduce the storage consumption of blockchains} & Authors proposed a new type of low-storage blockchain nodes using erasure code theory to reduce the storage space of blockchains. \\\\\n\t\\cline{2-5}\n\t\\multirow{4}*{Efficiency} \n\t& & Jidar: Data-Reduction Strategy & {\\color{blue}How to reduce the data consumption of Bitcoin's blocks} & Authors proposed a data reduction strategy for Bitcoin, namely Jidar, in which each node only has to store the transactions of interest and the related Merkle branches from the complete blocks. \\\\\n\t\\cline{2-5}\n\t& & \\textit{Segment blockchain} & {\\color{blue} To reduce the storage of blockchain systems while maintaining the decentralization without sacrificing security} & Authors proposed a data-reduced storage mechanism named \\textit{segment blockchain} such that each node only has to store a segment of the blockchain. \\\\\n\t\\hline\n\t\\multirow{5}*{Reliability} \n\t& & Availability of blockchains & {\\color{blue} The availability of read and write on blockchains is uneven} & Authors studied the availability for blockchain-based systems, where the read and write availability conflict with each other. \\\\\n\t\\cline{2-5}\n\t\\multirow{1}*{Analysis} \n\t& & Reliability prediction & {\\color{blue}The reliability of blockchain peers is unknown} & Authors proposed H-BRP to predict the reliability of blockchain peers by extracting their reliability parameters. 
\\\\\n\t\\hline\n\\end{tabular}\n\\label{Table:theories}\n\\end{table*}\n\\begin{table*}[t]\n\\caption{Latest Scalability Solutions to Improving the Performance of Blockchains.}\n\\centering\n\\footnotesize\n\\begin{tabular}{|p{0.12\\textwidth}|p{0.03\\textwidth}|p{0.12\\textwidth}|p{0.62\\textwidth}|}\n\\hline\n\\textbf{Emphasis} &\\textbf{Ref.} &\\textbf{Recognition} &\\textbf{Methodology}\\\\\n\\hline\n\t\\multirow{24}*{Solutions to}\n\t& & Elastico & Authors proposed a new distributed agreement protocol for the permission-less blockchains, called Elastico, which is viewed as the first secure candidate for a sharding protocol towards the open public blockchains.\\\\\n\t\\cline{2-4}\n\t\\multirow{20}*{Sharding} & & Monoxide & \\modified{The proposed Monoxide enables the system to handle transactions through a number of independent zones. This scheme essentially follows the principle of the sharding mechanism.} \\\\\n\t\\cline{2-4}\n\t\\multirow{18}*{blockchains}\n\t& & Rapidchain & Authors proposed a new sharding-based protocol for public blockchains that achieves a sub-linear increase of intra-committee communications with the number of committee \\modified{members}.\\\\\n\t\\cline{2-4}\n\t{ } & & SharPer & Authors proposed a permissioned blockchain system named \\textit{SharPer}, which adopts sharding techniques to improve scalability of cross-shard transactions.\\\\\n\t\\cline{2-4}\n\t{ } & & D-GAS & Authors proposed a dynamic load balancing mechanism for Ethereum shards, i.e., D-GAS. 
It reallocates Tx accounts by their gas consumption on each shard.\\\\\n\t\\cline{2-4}\n\t{ } & & NRSS & Authors proposed a new node-rating based sharding scheme, i.e., NRSS, for blockchains, aiming to improve the throughput of committees.\\\\\n\t\\cline{2-4}\n\t{ } & & OptChain & Authors proposed a new sharding paradigm, called OptChain, mainly used for optimizing the placement of transactions into shards.\\\\\n\t\\cline{2-4}\n\t{ } & & Sharding-based scaling system & Authors proposed an efficient shard-formation protocol that assigns nodes into shards securely, and a distributed transaction protocol that can guard against malicious Byzantine fault \\modified{coordinators}.\\\\\n\t\\cline{2-4}\n\t{ } & & SSChain & Authors proposed a non-reshuffling structure called SSChain, which supports both transaction sharding and state sharding while eliminating huge data-migration across shards.\\\\\n\t\\cline{2-4}\n\t{ } & & Eunomia & Authors proposed Eunomia, which is a permissionless parallel-chain protocol for realizing a global ordering of blocks.\\\\\n\t\\cline{2-4}\n\t{ } & & Vulnerability of Sybil attacks & Authors systematically analyzed the vulnerability of Sybil attacks in protocol Elastico.\\\\\n\t\\cline{2-4}\n\t{ } & & n/2 BFT Sharding approach & Authors proposed a new blockchain sharding approach that can tolerate up to 1/2 of the Byzantine nodes within a shard.\\\\\n\t\\cline{2-4}\n\t{ } & & CycLedger & Authors proposed a protocol CycLedger to pave a way towards scalability, security and incentive for sharding blockchains.\\\\\n\t\\hline\n\t\\multirow{10}*{Interoperability }\n\t& & Interoperability architecture & Authors proposed a novel interoperability architecture that supports the cross-chain cooperations among multiple blockchains, and a novel Monitor Multiplexing Reading (MMR) method for the passive cross-chain communications.\\\\\n\t\\cline{2-4}\n\t\\multirow{6}*{of multiple-chain}\n\t& & HyperService & Authors proposed a programming platform that 
provides interoperability and programmability over multiple heterogeneous blockchains.\\\\\n\t\\cline{2-4}\n\t\\multirow{4}*{systems}\n\t& & Protocol \\textit{Move} & Authors proposed a programming model for smart-contract developers to create DApps that can interoperate and scale in a multiple-chain \\modified{environment}.\\\\\n\t\\cline{2-4}\n\t{ } & & Cross-cryptocurrency TX protocol & Authors proposed a decentralized cryptocurrency exchange protocol enabling cross-cryptocurrency transactions based on smart contracts deployed on Ethereum.\\\\\n\t\\cline{2-4}\n\t{ } & & Cross-chain comm. & Authors conducted a systematic classification of cross-chain communication protocols.\\\\\n\t\\hline\n\\end{tabular}\n\\label{Table:scalability}\n\\end{table*}", "id": "e4b30937-4b91-4f31-905b-7791ea4498ba", "level": "subsubsection", "origin_cites_number": 26, "parent_id": "ac3c9506-8452-4524-a356-ccef714e8bef", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Theories to Improving the Performance of Blockchains" ], [ "subsection", " {\\color{black" ], [ "subsubsection", " {\\color{black" ] ], "subsections": [], "title": " {\\color{black" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 2157, 2159 ], "content": "Storage Efficiency}}\n {\\color{black}\n The transactions generated at real-time make the size of blockchains keep growing. For example, the storage efficiency of original-version Bitcoin has received much criticism since it requires to store the full transaction history in each Bitcoin peer. Although some revised protocols advocate that only the full-size nodes store the entire copy of whole ledger, the transactions still consume a large storage space in those full-size nodes. 
To alleviate this problem, several pioneering studies proposed storage-efficient solutions for blockchain networks.\n For example, by exploiting an erasure code-based approach, Perard \\textit{et al.} proposed a low-storage blockchain mechanism, aiming to lower the storage requirement of blockchains. The new low-storage nodes only have to store the linearly encoded fragments of each block. The original blockchain data can be easily recovered by retrieving fragments from other nodes under the erasure-code framework. Thus, this type of blockchain nodes allows blockchain clients to reduce the storage capacity. The authors also tested their system on the low-configuration Raspberry Pi to show the effectiveness, which demonstrates the possibility of running blockchains on IoT devices.\n Then, Dai \\textit{et al.} proposed Jidar, which is a data reduction strategy for Bitcoin. In Jidar, each node only has to store the transactions of interest and the related Merkle branches from the complete blocks. All nodes verify transactions collaboratively by a query mechanism. This approach seems very promising for the storage efficiency of Bitcoin. Their experiments show that the proposed Jidar can reduce the storage overhead of each peer \\modified{to} about 1\\% compared with the original Bitcoin. \n Under a similar idea, Xu \\textit{et al.} reduced the storage of blockchains using a \\textit{segment blockchain} mechanism, in which each node only needs to store a segment of the blockchain. The authors also proved that the proposed mechanism endures a failure probability $(\\phi/n)^m$ if an adversary party commits a collusion with less than a number $\\phi$ of nodes and each segment is stored by a number $m$ of nodes. 
This theoretical result is useful for the storage design of blockchains when developing a particular segment mechanism towards data-heavy distributed applications.\n }", "id": "137d8029-8f96-4c96-9505-5586f472e891", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "ac3c9506-8452-4524-a356-ccef714e8bef", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Theories to Improving the Performance of Blockchains" ], [ "subsection", " {\\color{black" ], [ "subsubsection", "{\\color{black" ] ], "subsections": [], "title": "{\\color{black" }, { "cite_extract_rate": 0.5, "cites": [ 8528 ], "content": "Reliability of Blockchains}}\n {\\color{black}\n \\modified{As a decentralized mechanism for data protection, the reliability of blockchains plays an important role in preventing data falsification. The following works studied the fundamental supporting mechanisms to achieve data falsification prevention.}\n The availability of blockchains is a key factor for blockchain-based distributed applications (DApps). However, such availability guarantees of blockchain systems are unknown. To this end, Weber \\textit{et al.} studied the availability limitations of two popular blockchains, i.e., Bitcoin and Ethereum. The authors found that the availability of reading and writing operations conflict with each other. Through measuring and analyzing the transactions of Ethereum, they observed that the DApps could be stuck in an uncertain state while transactions are pending in a blockchain system. This observation suggests that blockchains should perhaps support some built-in transaction-abort options for DApps. The authors finally presented techniques that can alleviate the availability limitations of Ethereum and Bitcoin blockchains.\n In public blockchains, the system clients typically join the blockchain network through a third-party peer. 
Thus, the reliability of the selected blockchain peer is critical to the security of clients in terms of both resource-efficiency and monetary issues. To enable clients to evaluate and choose reliable blockchain peers, Zheng \\textit{et al.} proposed a hybrid reliability prediction model for blockchains named H-BRP, which is able to predict the reliability of blockchain peers by extracting their reliability parameters.\n }", "id": "036e9cec-3b08-4d7c-b493-33908a004238", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "ac3c9506-8452-4524-a356-ccef714e8bef", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Theories to Improving the Performance of Blockchains" ], [ "subsection", " {\\color{black" ], [ "subsubsection", "{\\color{black" ] ], "subsections": [], "title": "{\\color{black" }, { "cite_extract_rate": 0, "cites": [], "content": "Scalability-Improving Solutions}}\n {\\color{black}\n One of the critical bottlenecks of today's blockchain systems is scalability. For example, the throughput of a blockchain is not scalable when the network size grows. To address this dilemma, a number of scalability approaches have been proposed. 
In this part, we present an overview of the most recent solutions with respect to sharding techniques, interoperability among multiple blockchains, and other approaches.\n \\modified{We summarize this subsection in Table \\ref{Table:scalability}.}\n }", "id": "b5de63b1-5d76-4633-938e-da9b3948c3e1", "level": "subsection", "origin_cites_number": 0, "parent_id": "952d99e8-1c8e-4de4-9790-1ae69d217ecb", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Theories to Improving the Performance of Blockchains" ], [ "subsection", " {\\color{black" ] ], "subsections": [ "1fbd5a47-b54d-41aa-af6b-8381354f8fe4", "d687ebbd-b2bf-4d03-af43-dfca83ba86b8" ], "title": " {\\color{black" }, { "cite_extract_rate": 0.538461538461538, "cites": [ 2158, 2161, 2156, 7552, 2155, 2160, 8529 ], "content": "Solutions to Sharding Blockchains}}\n {\\color{black}\n Bitcoin's transaction throughput does not scale well. The solutions that use classical Byzantine consensus protocols do not work in an open environment like cryptocurrencies. To solve the above problems, Luu \\textit{et al.} proposed a new distributed agreement protocol for the permission-less blockchains, called \\textit{Elastico}, which is viewed as the first secure candidate for a sharding protocol towards the open public blockchains that tolerate a constant fraction of Byzantine-fault network nodes. The key idea in Elastico is to partition the network into smaller committees, each of which processes a disjoint set of transactions or a \\textit{shard}. The number of committees grows linearly in the total computational power of the network. Using Elastico, the blockchain's transaction throughput increases almost linearly with the computational power of the network.\n Some early-stage sharding blockchain protocols (e.g., Elastico) improve the scalability by enforcing multiple groups of committees to work in parallel. 
However, this manner still requires an amount of communication for verifying every transaction that increases linearly with the number of nodes within a committee. Thus, the benefit of the sharding policy was not fully exploited. As an improved solution, Zamani \\textit{et al.} proposed a Byzantine-resilient sharding-based protocol, namely RapidChain, for permissionless blockchains. Taking advantage of block pipelining, RapidChain improves the throughput by using a sound intra-committee consensus. The authors also developed an efficient cross-shard verification method to avoid broadcast messages flooding the whole network.\n To make the throughput scale with the network size, Gao \\textit{et al.} proposed a scalable blockchain protocol, which leverages both sharding and Proof-of-Stake consensus techniques. Their experiments were performed in an Amazon EC2-based simulation network. Although the results showed that the throughput of the proposed protocol increases with the network size, the performance was still not high; for example, the maximum throughput was 36 transactions per second and the transaction latency was around 27 seconds.\n Aiming to improve the efficiency of cross-shard transactions, Amiri \\textit{et al.} proposed a permissioned blockchain system named \\textit{SharPer}, which strives for the scalability of blockchains by dividing and reallocating different data shards to various network clusters. \n The major contributions of the proposed SharPer include the related algorithm and protocol associated with the SharPer model. In the authors' previous work, they had already proposed a permissioned blockchain, while in this paper the authors extended it by introducing a consensus protocol in the processing of both intra-shard and cross-shard transactions. Finally, SharPer was devised by adopting sharding techniques. 
One of the important contributions is that SharPer can be used in networks where there is a high percentage of non-faulty nodes. Furthermore, this paper also contributes a flattened consensus protocol w.r.t. the order of cross-shard transactions among all involved clusters.\n Considering that Ethereum places each group of transactions on a shard by their account addresses, the workloads and complexity of transactions in shards are apparently unbalanced. This manner further damages the network throughput. To address this uneven problem, Kim \\textit{et al.} proposed D-GAS, which is a dynamic load balancing mechanism for Ethereum shards. Using D-GAS, the transaction workloads of accounts on each shard can be reallocated according to their gas consumption. The target is to maximize the throughput of those transactions. The evaluation results showed that the proposed D-GAS achieved up to 12\\% higher transaction throughput and 74\\% lower transaction latency compared with other existing techniques.\n}\n{\\color{black}\n The random sharding strategy causes imbalanced performance gaps among different committees in a blockchain network. Those gaps yield a bottleneck of transaction throughput. Thus, Wang \\textit{et al.} proposed a new sharding policy for blockchains named NRSS, which exploits node rating to assess network nodes according to their transaction-verification performance. After such evaluation, all network nodes will be reallocated to different committees aiming at filling the previous imbalanced performance gaps. Through the experiments conducted on a local blockchain system, the results showed that NRSS improves throughput by around 32\\% under sharding techniques.\n }\n{\\color{black}\nSharding has been proposed mainly to improve the scalability and the throughput performance of blockchains. A good sharding policy should minimize the cross-shard communications as much as possible. 
A classic design of sharding is the \\textit{Transactions Sharding}. However, such Transactions Sharding exploits the \\textit{random sharding} policy, which leads to a dilemma that most transactions are cross-shard. To this end, Nguyen \\textit{et al.} proposed a new sharding paradigm differing from the random sharding, called OptChain, which can minimize the number of cross-shard transactions. The authors achieved their goal through the following two aspects. First, they designed two metrics, named T2S-score (Transaction-to-Shard) and L2S-score (Latency-to-Shard), respectively. T2S-score aims to measure how likely a transaction should be placed into a shard, while L2S-score is used to measure the confirmation latency when placing a transaction into a shard. Next, they utilized a well-known PageRank analysis to calculate T2S-score and proposed a mathematical model to estimate L2S-score. Finally, how does the proposed OptChain place transactions into shards based on the combination of T2S and L2S scores? In brief, they introduced another metric composed of both T2S and L2S, called \\textit{temporal fitness} score. For a given transaction $u$ and a shard $S_i$, OptChain computes the temporal fitness score for the pair $\\langle u, S_i \\rangle$. Then, OptChain just puts transaction $u$ into the shard with the highest temporal fitness score.\n Similar to , Dang \\textit{et al.} proposed a new shard-formation protocol, in which the nodes of different shards are re-assigned into different committees to reach a certain safety degree. In addition, they also proposed a coordination protocol to handle the cross-shard transactions towards guarding against the Byzantine-fault malicious coordinators. 
The experiment results showed that the throughput reaches a few thousand TPS in both a local cluster with 100 nodes and a large-scale Google cloud platform testbed.\n Considering that the reshuffling operations lead to huge data migration in the sharding-based protocols, Chen \\textit{et al.} devised a non-reshuffling structure called SSChain. This new sharding-based protocol can avoid the overhead of data migration while enabling both transaction sharding and state sharding. Their evaluation results showed that SSChain achieves at least 6500 TPS in a network with 1800 nodes, with no periodic data migration needed.\n Multiple chains can help increase the throughput of the blockchain. However, one issue in multiple-chain systems must be solved. That is, the logical ordering of blocks generated should be guaranteed, because the correct logical order is critical to the confirmation of transactions. To this end, Niu \\textit{et al.} proposed Eunomia, which is a permissionless parallel-chain protocol towards a global ordering of blocks. The authors implemented Eunomia by exploiting a fine-grained UTXO sharding model, in which the conflicted transactions can be well handled, and the protocol is proved to be Simple Payment Verification (SPV) friendly.\n Although the sharding techniques have received much interest recently, it should be noticed that the committee organization easily attracts Sybil attacks, in which a malicious node can compromise the consensus by creating multiple dummy committee members in the vote phase of the consensus protocol. To address such Sybil attacks, Rajab \\textit{et al.} systematically formulated a model and performed an analysis w.r.t. the vulnerability of Sybil attacks in the pioneer sharding protocol Elastico. The authors found that the blockchain nodes that have high hash-computing power are capable of manipulating the Elastico protocol using a large number of Sybil IDs. 
The other two conditions of Sybil attacks were derived and evaluated by numerical simulations.\n The traditional sharding blockchain protocols can only endure up to 1/3 Byzantine-fault nodes within a shard. This weak BFT feature means that the number of nodes inside a shard cannot be small if the shard is to function securely. To improve the sustainability of blockchain sharding, Xu \\textit{et al.} proposed a new BFT sharding approach that can tolerate at most 1/2 Byzantine-fault nodes existing inside a shard. This approach benefits the throughput of decentralized databases.\n Although the existing sharding-based protocols, e.g., Elastico, OmniLedger and RapidChain, have gained a lot of attention, they still have some drawbacks. For example, the mutual connections among all honest nodes require a large amount of communication resources. Furthermore, there is no incentive mechanism driving nodes to participate in the sharding protocol actively. To solve those problems, Zhang \\textit{et al.} proposed \\textit{CycLedger}, which is a protocol designed for the sharding-based distributed ledger towards scalability, reliable security, and incentives. The proposed CycLedger is able to select a leader and a subset of nodes for each committee that handle the intra-shard consensus and the synchronization with other committees. A semi-commitment strategy and a recovery processing scheme were also proposed to deal with system crashes. In addition, the authors also proposed a reputation-based incentive policy to encourage nodes to behave honestly. 
\n}", "id": "1fbd5a47-b54d-41aa-af6b-8381354f8fe4", "level": "subsubsection", "origin_cites_number": 13, "parent_id": "b5de63b1-5d76-4633-938e-da9b3948c3e1", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Theories to Improving the Performance of Blockchains" ], [ "subsection", " {\\color{black" ], [ "subsubsection", "{\\color{black" ] ], "subsections": [], "title": "{\\color{black" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 7551, 2163, 2162 ], "content": "Multiple-Chain \\& Cross-Chain: Interoperability amongst Multiple Blockchains}}\n\\begin{figure}[t]\n\t\\centering\n\t\\vspace{-3mm}\n\t\\includegraphics[width=0.5\\textwidth]{./figures/InteroperabilityMap.pdf}\n\t\\vspace{-2mm}\n\t\\caption{\\modified{The illustration of interoperability across blockchains . The left figure demonstrates the indirect way of interoperability that requires a centralized third party. The right figure demonstrates the direct way of interoperability without the presence of any third party.}}\n\t\\label{fig:interoperabilityMap}\n\t\\vspace{-3mm}\n\\end{figure}\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.5\\textwidth]{./figures/Interoperability.pdf}\n\t\\vspace{-2mm}\n\t\\caption{\\modified{The interoperability of blockchains . Passive mode is shown in the left figure, in which the source chain is monitored by the destination chain instead of actively sending information to the destination chain as shown in the right figure.}}\n\t\\label{fig:interoperability}\n\t\\vspace{-3mm}\n\\end{figure}\n {\\color{black}\n The interoperability of blockchains plays a significant role for the cross-chain transactions. Such interoperability mainly includes the effective communications and data exchange amongst multiple blockchains, as shown in Fig. \\ref{fig:interoperabilityMap}. A lot of theoretical and practical issues of this direction need urgent solutions. 
Some representative studies are reviewed as follows.\n To enable rich functionalities and capabilities for future blockchain ecosystems, Jin \textit{et al.} proposed a novel interoperability architecture that supports cross-chain cooperation among multiple blockchains, such as Bitcoin and Ethereum. The authors classified the interoperability of multiple-chain ecosystems into passive and active modes, which are shown in Fig. \ref{fig:interoperability}. Then, the authors introduced a particular method, called Monitor Multiplexing Reading (MMR), dedicated to passive cross-chain communications. \n Following the widespread adoption of smart contracts, the role of blockchains has been upgraded from token exchanges into programmable state machines. Thus, blockchain interoperability must evolve accordingly. To help realize this new type of interoperability among multiple heterogeneous blockchains, Liu \textit{et al.} proposed HyperService, which includes two major components: a programming framework allowing developers to create cross-chain applications, and a universal interoperability protocol for the secure implementation of DApps on blockchains. The authors implemented a 35,000-line prototype to prove the practicality of HyperService. Using the prototype, the end-to-end delays of cross-chain DApps and the aggregated platform throughput can be measured conveniently.\nIn an ecosystem that consists of multiple blockchains, interoperability among those different blockchains is an essential issue. To help smart-contract developers build DApps, Fynn \textit{et al.} proposed a practical \textit{Move} protocol that works across multiple blockchains.
The basic idea of this protocol is to support a move operation that relocates objects and smart contracts from one blockchain to another.\nRecently, to enable cross-cryptocurrency transactions, Tian \textit{et al.} proposed a decentralized cryptocurrency exchange strategy implemented on Ethereum through smart contracts.\n Additionally, a great number of studies of cross-chain communications are included in , in which readers can find a systematic classification of cross-chain communication protocols.\n }", "id": "d687ebbd-b2bf-4d03-af43-dfca83ba86b8", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "b5de63b1-5d76-4633-938e-da9b3948c3e1", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Theories to Improving the Performance of Blockchains" ], [ "subsection", " {\color{black" ], [ "subsubsection", " {\color{black" ] ], "subsections": [], "title": " {\color{black" }, { "cite_extract_rate": 0, "cites": [], "content": "New Protocols and Infrastructures}}\n\modified{This subsection is summarized in Table \ref{Table:newProto}.}", "id": "f3a78dd5-a681-47f6-a4a2-e800a7104483", "level": "subsection", "origin_cites_number": 0, "parent_id": "952d99e8-1c8e-4de4-9790-1ae69d217ecb", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Theories to Improving the Performance of Blockchains" ], [ "subsection", "{\color{black" ] ], "subsections": [ "341f13b6-0035-412f-901d-c5b08e02bddd", "91d2ab40-ef07-4c79-984b-1702f95f6b77" ], "title": "{\color{black" }, { "cite_extract_rate": 0.22222222222222202, "cites": [ 2165, 2164 ], "content": "New Protocols for Blockchains}}\n {\color{black}\n David \textit{et al.} proposed a provably secure PoS protocol named \textit{Ouroboros Praos}, which particularly exploits forward-secure digital signatures and a verifiable random function such that the proposed Ouroboros Praos
can endure adaptive corruption of any participant by an adversary under a given message-delivery delay. \nIn blockchain systems, a node only connects to a small number of neighbor nodes. Mutual communications are achieved by gossip-like P2P messages. Based on such P2P gossip communications, Buchman \textit{et al.} proposed a new protocol named Tendermint, which provides a new termination mechanism that simplifies the BFT consensus protocol.\n In Monoxide proposed by , the authors devised a novel proof-of-work scheme, named \textit{Chu-ko-nu mining}. This new proof protocol encourages a miner to create multiple blocks in different zones simultaneously with a single PoW solving effort. This mechanism keeps the effective mining power in each zone almost equal to the total physical mining power of the entire network. Thus, Chu-ko-nu mining increases the attack threshold for each zone to 50\%. Furthermore, Chu-ko-nu mining reduces the energy spent on mining new blocks, because many more blocks can be produced in each round of normal PoW mining.\n Online crowdsourcing services face the challenge of finding a suitable consensus protocol. By leveraging the advantages of the blockchain, such as the traceability of service contracts, Zou \textit{et al.} proposed a new consensus protocol, named \textit{Proof-of-Trust} (PoT) consensus, for crowdsourcing and the general online service industries.
Basically, the PoT consensus protocol leverages trust management of all service participants, and it works as a hybrid blockchain architecture in which a consortium blockchain is integrated with a public service network.\n }\n\begin{table*}[t]\n\caption{New Protocols \\& Infrastructures for Improving the Performance of Blockchains.}\n\centering\n\footnotesize\n\begin{tabular}{|p{0.1\textwidth}|p{0.03\textwidth}|p{0.14\textwidth}|p{0.6\textwidth}|}\n\hline\n\textbf{Emphasis} &\textbf{Ref.} &\textbf{Recognition} &\textbf{Methodology}\\\\\n\hline\n\t\multirow{6}*{New Protocols}\n\t& & Ouroboros Praos & Authors proposed a new secure proof-of-stake protocol named \textit{Ouroboros Praos}, which is proved secure in the semi-synchronous adversarial setting.\\\\\n\t\cline{2-4}\n\t{ } & & Tendermint & Authors proposed a new BFT consensus protocol for the wide area network organized by the gossip-based P2P network under adversarial conditions.\\\\\n\t\cline{2-4}\n\t{ } & & Chu-ko-nu mining & Authors proposed a novel proof-of-work scheme, named \textit{Chu-ko-nu mining}, which incentivizes miners to create multiple blocks in different zones with only a single PoW mining effort.\\\\\n\t\cline{2-4}\n\t{ } & & Proof-of-Trust (PoT) & Authors proposed a novel Proof-of-Trust consensus for the online services of crowdsourcing.\\\\\n\t\hline\n\t\multirow{10}*{New} \n\t& & StreamChain & Authors proposed to shift the block-based distributed ledgers to a new paradigm of \textit{stream transaction processing} to achieve low end-to-end latencies without much affecting throughput.\\\\\n\t\cline{2-4}\n\t\multirow{6}*{Infrastructures }\n\t& & CAPER: Cross-App Trans.
handling & Authors proposed a permissioned blockchain named CAPER that can well manage both the internal and the cross-application transactions for distributed applications.\\\\\n\t\\cline{2-4}\n\t\\multirow{4}*{\\& Architectures} \n\t& & Optimal mining for miners & Authors proposed an edge computing-based blockchain network architecture, aiming to allocate optimal computational resources for miners.\\\\\n\t\\cline{2-4}\n\t{ } & & AxeChain: Useful Mining & Authors proposed a new framework for practical PoW blockchains called AxeChain, which can spend computing power of blockchains to solve arbitrary practical problems submitted by system clients.\\\\\n\t \\cline{2-4}\n\t{ } & & Non-linear blockchain system & Authors explored three major metrics of blockchains, and devised a non-linear blockchain system.\\\\\n\t\\hline\n\\end{tabular}\n\\label{Table:newProto}\n\\end{table*}", "id": "341f13b6-0035-412f-901d-c5b08e02bddd", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "f3a78dd5-a681-47f6-a4a2-e800a7104483", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Theories to Improving the Performance of Blockchains" ], [ "subsection", "{\\color{black" ], [ "subsubsection", "{\\color{black" ] ], "subsections": [], "title": "{\\color{black" }, { "cite_extract_rate": 0.2, "cites": [ 2165 ], "content": "New Infrastructures \\& Architectures for Blockchains}}\n {\\color{black}\n Conventionally, block-based data structure is adopted by permissionless blockchain systems as blocks can efficiently amortize the cost of cryptography. However, the benefits of blocks are saturated in today's permissioned blockchains since the block-processing introduces large batching latencies. 
For distributed ledgers that are neither geo-distributed nor PoW-based, Istv{\'a}n \textit{et al.} proposed to shift the traditional block-based data structure to the paradigm of \textit{stream-like transaction processing}. The premier advantage of such a paradigm shift is to largely shrink the end-to-end latencies of permissioned blockchains. The authors developed a prototype of their concept based on Hyperledger Fabric. The results showed that the end-to-end latencies reached sub-10 ms and the throughput was close to 1500 TPS.\n Permissioned blockchains have a number of limitations, such as poor performance, privacy leakage, and inefficient cross-application transaction handling. To address those issues, Amiri \textit{et al.} proposed CAPER, a permissioned blockchain that can well handle cross-application transactions for distributed applications. In particular, CAPER constructs its blockchain ledger as a DAG and handles cross-application transactions by adopting three specific consensus protocols, i.e., a global consensus using a separate set of orderers, a hierarchical consensus protocol, and a \textit{one-level} consensus protocol.\n Then, Chang \textit{et al.} proposed an edge computing-based blockchain architecture, in which edge-computing providers supply computational resources for blockchain miners. The authors then formulated a two-phase Stackelberg game for the proposed architecture, aiming to find the Stackelberg equilibrium of the theoretically optimal mining scheme.\n Next, Zheng \textit{et al.} proposed a new infrastructure for practical PoW blockchains called AxeChain, which aims to exploit the precious computing power of miners to solve practical problems submitted by system users. The authors also analyzed the trade-off between the energy consumption and the security guarantees of AxeChain.
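As background for these PoW-centric designs, the plain hash puzzle whose energy cost AxeChain tries to repurpose can be sketched as a brute-force nonce search. This is a toy illustration with an invented header and our own function names, not the AxeChain mechanism itself:

```python
import hashlib

def mine(header: bytes, difficulty_bits: int, max_nonce: int = 1 << 24):
    """Brute-force a nonce whose double-SHA-256 digest, read as an integer,
    falls below a target encoding `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in range(max_nonce):
        digest = hashlib.sha256(hashlib.sha256(
            header + nonce.to_bytes(8, "big")).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest
    raise RuntimeError("no valid nonce within the search bound")

# Each extra difficulty bit roughly doubles the expected search work.
nonce, digest = mine(b"toy-block-header", difficulty_bits=16)
```

The exponential growth of the expected search work in the difficulty parameter is exactly the energy expenditure that useful-work proposals such as AxeChain seek to redirect toward practical problems.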
This study opens up a new direction for pursuing high energy efficiency in meaningful PoW protocols.\n With the non-linear (e.g., graph-based) structures adopted by blockchain networks, researchers are becoming interested in the performance improvements brought by new data structures. To gain insights into such non-linear blockchain systems, Chen \textit{et al.} performed a systematic analysis by taking three critical metrics into account, i.e., \textit{full verification}, \textit{scalability}, and \textit{finality-duration}. The authors revealed that it is impossible to achieve a blockchain that fulfills all three metrics at the same time. Any blockchain designer must consider the trade-off among these three properties. \n }", "id": "91d2ab40-ef07-4c79-984b-1702f95f6b77", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "f3a78dd5-a681-47f6-a4a2-e800a7104483", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Theories to Improving the Performance of Blockchains" ], [ "subsection", "{\color{black" ], [ "subsubsection", "{\color{black" ] ], "subsections": [], "title": "{\color{black" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:understand}\n\modified{We summarize various analytical models for blockchain networks in Table \ref{Table:modelings} and Table \ref{Table:analytics}.}", "id": "88c7cab9-6a17-48c0-8631-92a4e4e5aca6", "level": "section", "origin_cites_number": 0, "parent_id": "71dcb80c-95ed-47fb-909c-9249db523d8e", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Various Modelings and Techniques for Better Understanding Blockchains" ] ], "subsections": [ "261fc701-f180-4af9-9a74-cfe186fc88a1", "d700d74e-91e2-48b1-b9de-8fc3eda7469c", "0a3a3a4c-0a54-493e-8d44-710a340c3f03", "64cabce6-95cc-4fe3-9b56-cad94bfb2e57", "bf36c972-b50d-40cf-8f5d-faeccc9f5939" ], "title": "Various Modelings and Techniques for Better Understanding Blockchains" }, { "cite_extract_rate": 0.416666666666666, "cites": [ 2167, 8530, 1302, 2168, 2166 ], "content": "Graph-based Theories}}\n {\color{black}\n Graphs are widely used in blockchain networks. For example, the Merkle tree has been adopted by Bitcoin, and several blockchain protocols, such as Ghost , Phantom , and Conflux , constructed their blocks using the directed acyclic graph (DAG) technique.\n Different from those generalized graph structures, in this part we review the most recent studies that exploit graph theories for a better understanding of blockchains.\n Since the transactions in blockchains are easily structured into graphs, graph theories and graph-based data mining techniques are viewed as good tools to discover interesting findings hidden in the graphs of blockchain networks. \n Some representative recent studies are reviewed as follows. \nLeveraging the techniques of graph analysis, Chen \textit{et al.} characterized three major activities on Ethereum, i.e., money transfer, the creation of smart contracts, and the invocation of smart contracts.
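Such activity graphs are mechanical to construct from raw transfer records. As a toy illustration (addresses and amounts are invented, and the helper names are our own), a money-flow graph and a simple out-degree statistic of the kind used in these analyses can be built as follows:

```python
from collections import defaultdict

# Invented transfer records: (sender, receiver, amount).
transfers = [
    ("a", "b", 5.0), ("a", "c", 2.0), ("b", "c", 1.0),
    ("c", "a", 0.5), ("d", "c", 3.0),
]

def build_mfg(edges):
    """Adjacency-list money-flow graph: address -> list of (payee, amount)."""
    graph = defaultdict(list)
    for src, dst, amount in edges:
        graph[src].append((dst, amount))
    return graph

def out_degree_hist(graph):
    """Out-degree histogram, a basic statistic behind power-law tests."""
    hist = defaultdict(int)
    for src in graph:
        hist[len(graph[src])] += 1
    return dict(hist)

mfg = build_mfg(transfers)
print(out_degree_hist(mfg))   # {2: 1, 1: 3}
```

On real chains, the same construction is run over millions of transactions, and the resulting degree, flow, and connectivity statistics feed the forensics and prediction tasks surveyed in this subsection.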
The major contribution of this paper is that it performed the first systematic investigation and proposed new approaches based on cross-graph analysis, which can address two security issues existing in Ethereum: attack forensics and anomaly detection.\nParticularly, w.r.t. the graph theory, the authors mainly concentrated on the following two aspects:\n\begin{enumerate}\n \item \textit{Graph Construction}: They identified four types of transactions that are not related to money transfer, smart contract creation, or smart contract invocation.\n \item \textit{Graph Analysis}: Then, they divided the remaining transactions into three groups according to the activities they triggered, i.e., the money flow graph (MFG), the smart contract creation graph (CCG) and the contract invocation graph (CIG).\n \end{enumerate}\n In this manner, the authors delivered many useful insights into transactions that are helpful for addressing the security issues of Ethereum.\n Similarly, by processing the Bitcoin transaction history, Akcora \textit{et al.} and Dixon \textit{et al.} modeled the transfer network as an extreme transaction graph. Through the analysis of chainlet activities in the constructed graph, they proposed to use GARCH-based forecasting models to identify the financial risk of the Bitcoin market for cryptocurrency users. \n An emerging research direction associated with blockchain-based cryptocurrencies is to understand the network dynamics behind the graphs of those blockchains, such as the transaction graph, because people wonder what the connection is between the price of a cryptocurrency and the dynamics of the overlying transaction graph. To answer such a question, Abay \textit{et al.} proposed Chainnet, which is a computationally lightweight method for learning the graph features of blockchains. The authors also disclosed several insightful findings.
For example, it is the topological feature of the transaction graph, rather than its degree distribution, that impacts the prediction of Bitcoin price dynamics.\n Furthermore, utilizing the Mt. Gox transaction history, Chen \textit{et al.} also exploited the graph-based data-mining approach to uncover the market manipulation of Bitcoin. The authors constructed three graphs, i.e., the extreme high graph (EHG), the extreme low graph (ELG), and the normal graph (NMG), based on the initial processing of the transaction dataset. Then, they discovered many correlations between market manipulation patterns and the price of Bitcoin.\n In the other direction, based on \textit{address graphs}, Victor \textit{et al.} studied the ERC20 token networks through analyzing smart contracts of the Ethereum blockchain. Different from other graph-based approaches, the authors focused their attention on the address graphs, i.e., token networks. With all network addresses, each token network is viewed as an overlay graph of the entire set of Ethereum network addresses. Similar to , the authors presented the relationship between transactions by exploiting graph-based analysis, in which the arrows can denote both the invoking functions between transactions and smart contracts, and the token transfers between transactions. The findings presented by this study help us gain a good understanding of token networks in terms of time-varying characteristics, such as the usage patterns of the blockchain system. An interesting finding is that around 90\% of all transfers stem from the top 1000 token contracts. That is to say, less than 10\% of token recipients have transferred their tokens. This finding is contrary to the viewpoint proposed by , where Somin \textit{et al.} showed that the full transfers seem to obey a power-law distribution. However, the study indicated that those transfers in token networks likely do not follow a power law.
The authors attributed these observations to the following three possible reasons: 1) most token users have no incentive to transfer their tokens and instead simply hold them; 2) the majority of inactive tokens are treated as something like unwanted spam; 3) a small portion, i.e., approximately 8\%, of users intended to sell their tokens to a market exchange.\n Recently, Zhao \textit{et al.} explored the account creation, account voting, money transfer and contract authorization activities of early-stage EOSIO transactions through graph-based metric analysis. Their study revealed abnormal transactions such as voting gangs and frauds.\n }", "id": "261fc701-f180-4af9-9a74-cfe186fc88a1", "level": "subsection", "origin_cites_number": 12, "parent_id": "88c7cab9-6a17-48c0-8631-92a4e4e5aca6", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Various Modelings and Techniques for Better Understanding Blockchains" ], [ "subsection", "{\color{black" ] ], "subsections": [], "title": "{\color{black" }, { "cite_extract_rate": 0.38095238095238004, "cites": [ 2167, 8531, 2169, 7553, 2166, 8530, 2168, 2170 ], "content": "Stochastic Modelings}}\n {\color{black}\n Latencies of block transfer and processing generally exist in blockchain networks, since the large number of miner nodes are geographically distributed. Such delays increase the probability of forking and the vulnerability to malicious attacks. Thus, it is critical to know how the network dynamics caused by block propagation latencies and fluctuations in miners' hashing power impact blockchain performance metrics such as the block generation rate. To find the connection between those factors, Papadis \textit{et al.} developed stochastic models to derive the blockchain evolution in a wide-area network.
Their results offer practical insights into the design issues of blockchains, for example, how to change the mining difficulty in the PoW consensus while guaranteeing an expected block generation rate or a level of immunity to adversarial attacks. The authors then performed analytical studies and simulations to evaluate the accuracy of their models. This stochastic analysis opens a door for us to gain a deeper understanding of the dynamics in a blockchain network.\nTowards the stability and scalability of blockchain systems, Gopalan \textit{et al.} also proposed a stochastic model for a blockchain system. During their modeling, a structural asymptotic property called \textit{one-endedness} was identified. The authors also proved that a blockchain system is one-ended if it is stochastically stable. The upper and lower bounds of the stability region were also studied. The authors found that the stability bounds are closely related to the conductance of the P2P blockchain network. Those findings are insightful in that researchers can use them to assess the scalability of blockchain systems deployed on large-scale P2P networks.\n Although the sharding protocol is viewed as a very promising solution to the scalability of blockchains and has been adopted by multiple well-known blockchains such as RapidChain , OmniLedger , and Monoxide , the failure probability for a committee under a sharding protocol is still unknown. To fill this gap, Hafid \textit{et al.} proposed a stochastic model to capture the security analysis of sharding-based blockchains using a probabilistic approach. With the proposed mathematical model, the upper bound of the failure probability was derived for a committee. In particular, three probability inequalities were used in their model, i.e., Chebyshev, Hoeffding, and Chv{\'a}tal.
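The quantity bounded in this line of work can also be computed exactly for small parameters: the probability that a committee sampled without replacement contains at least a 1/3 fraction of malicious nodes is a hypergeometric tail. A minimal Python sketch (numbers invented for illustration, not the model of Hafid \textit{et al.} itself):

```python
from math import comb, ceil

def committee_failure_prob(N, M, n, resiliency=1/3):
    """Exact hypergeometric tail: probability that a committee of n nodes,
    drawn without replacement from N nodes of which M are malicious,
    contains at least a `resiliency` fraction of malicious members."""
    threshold = ceil(n * resiliency)
    total = comb(N, n)
    return sum(comb(M, k) * comb(N - M, n - k)
               for k in range(threshold, n + 1)) / total

# Invented numbers: shards of 50 sampled from 1000 nodes, 250 adversarial.
p = committee_failure_prob(1000, 250, 50)
```

Inequalities such as Chebyshev's or Hoeffding's upper-bound this tail in closed form, which avoids evaluating the combinatorial sums for large networks.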
The authors claim that the proposed stochastic model can be used to analyze the security of any Sharding-based protocol.\n }\n\\begin{table*}[t]\n\\caption{Various Modelings, Techniques and Theories for Better Understanding Blockchains.}\n\\centering\n\\footnotesize\n\\begin{tabular}{|p{0.07\\textwidth}|p{0.11\\textwidth}|p{0.07\\textwidth}|p{0.21\\textwidth}|p{0.38\\textwidth}|}\n\\hline\n\\textbf{Category}&\\textbf{Emphasis} &\\textbf{Ref.} &\\textbf{Metrics} &\\textbf{Methodology \\& Implications}\\\\\n\\hline\n\t\\multirow{12}*{Graph-} \n\t& \\multirow{8}*{Transactions} & & Cross-graph analysis of Ethereum & Via graph analysis, authors extracted three major activities, i.e., money transfer, smart contracts creation, and smart contracts invocation.\\\\\n\t\\cline{3-5}\n\t\\multirow{8}*{based} \n\t&\\multirow{4}*{mining}& & Features of transaction graphs & Proposed an extendable and computationally efficient method for graph representation learning on Blockchains.\\\\\n\t\\cline{3-5}\n\t\\multirow{6}*{Theories} \n\t&{}& & Market manipulation\npatterns & Authors exploited the graph-based data-mining approach to reveal the market manipulation evidence of Bitcoin.\\\\\n\t\\cline{3-5}\n\t{ }&{}& & Clustering coefficient, assortativity of TX graph & Authors exploited the graph-based analysis to reveal the abnormal transactions of EOSIO.\\\\\n\t\\cline{2-5}\n\t{ }&\\multirow{5}*{Token networks}& & Token-transfer distributions & Authors studied the token networks through analyzing smart contracts of Ethereum blockchain based on graph analysis.\\\\\n\t\\cline{3-5}\n\t{ }&{}& & Extreme chainlet activity & Authors proposed graph-based analysis models for assessing the financial investment risk of \\modified{Bitcoin}.\\\\\n\t\\hline\n\t\\multirow{12}*{Stochastic} \n\t & {Blockchain network analysis} & & Block completion rates, and the probability of a successful adversarial attack & Authors derived stochastic models to capture critical blockchain properties, and to evaluate 
the impact of blockchain propagation latency on key performance metrics. This study provides useful insights into the design issues of blockchain networks.\\\\\n\t\cline{2-5}\n\t\multirow{4}*{Modelings} \n\t& \multirow{3}*{Stability analysis} & \multirow{1}*{ } & Time to consistency, cycle length, consistency fraction, age of information & Authors proposed a network model which can identify the stochastic stability of blockchain systems. \n\t\\\\\n\t\cline{2-5}\n\t{} & Failure probability analysis & & Failure probability of a committee, sums of upper-bounded hypergeometric and binomial distributions for each epoch & Authors proposed a probabilistic model to derive the security analysis under sharding blockchain protocols. This study can tell how to keep the failure probability smaller than a defined threshold for a specific sharding protocol.\\\\\n\t\hline\n\t\multirow{14}*{Queueing} \n\t& Mining procedure and block-generation & & The average number of TX in the arrival queue and in a block, and the average confirmation time of TX & Authors developed a \modified{Markovian} batch-service queueing system to express the mining process and the generation of new blocks in the miners' pool.\\\\\n\t\cline{2-5}\n\t\multirow{10}*{Theories} \n\t& Block-confirmation time & & The residual lifetime of a block till the next block is confirmed & Authors proposed a theoretical framework to deeply understand the transaction confirmation time, by integrating queueing theory and machine learning techniques. \\\\\n\t\cline{2-5}\n\t& Synchronization process of Bitcoin network & & Stationary queue-length distribution & Authors proposed an infinite-server model with a random fluid limit for the Bitcoin network.\\\\\n\t\cline{2-5}\n\t& Mining resource allocation & & Mining resource for miners, queueing stability & Authors proposed a Lyapunov optimization-based queueing analytical model to study the allocation of mining resources for PoW-based blockchain networks.
\\\\\n \t\cline{2-5}\n\t& Blockchain's theoretical working principles & & number of TX per block, mining interval of each block, memory pool size, waiting time, number of unconfirmed TX & Authors proposed a queueing theory-based model to gain a better understanding of the theoretical working principles of blockchain networks. \\\\\n\t\hline\n\end{tabular}\n\label{Table:modelings}\n\end{table*}", "id": "d700d74e-91e2-48b1-b9de-8fc3eda7469c", "level": "subsection", "origin_cites_number": 21, "parent_id": "88c7cab9-6a17-48c0-8631-92a4e4e5aca6", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Various Modelings and Techniques for Better Understanding Blockchains" ], [ "subsection", "{\color{black" ] ], "subsections": [], "title": "{\color{black" }, { "cite_extract_rate": 0.47368421052631504, "cites": [ 7550, 8531, 2169, 2172, 8533, 2171, 7554, 2170, 8532 ], "content": "Queueing Theories for Blockchain Systems}}\n {\color{black}\n In blockchain networks, several stages of the mining process and the generation of new blocks can be formulated as queueing systems, such as the transaction-arrival queue, the transaction-confirmation queue, and the block-verification queue. Thus, a growing number of studies are exploiting queueing theory to disclose the mining and consensus mechanisms of blockchains. Some recent representative works are reviewed as follows.\n To develop a queueing theory of blockchain systems, Li \textit{et al.} devised a batch-service queueing system to describe the mining and creation of new blocks in the miners' pool. For the blockchain queueing system, the authors exploited a GI/M/1-type continuous-time Markov process.
Then, they derived the stability condition and the stationary probability matrix of the queueing system utilizing matrix-geometric techniques.\n Then, observing that the confirmation delays of Bitcoin transactions are larger than those of conventional credit card systems, Ricci \textit{et al.} proposed a theoretical framework integrating queueing theory and machine learning techniques to gain a deep understanding of the transaction confirmation time. The reason the authors chose queueing theory for their study is that a queueing model is well suited to revealing how the different blockchain parameters affect transaction latencies. Their measurement results showed that Bitcoin users experience a delay that is slightly larger than the residual time of a block confirmation.\n Frolkova \textit{et al.} formulated the synchronization process of the Bitcoin network as an infinite-server model. The authors derived a closed form for the model that can be used to capture the stationary queue distribution. Furthermore, they also proposed a random-style fluid limit under service latencies.\n On the other hand, to evaluate and optimize the performance of blockchain-based systems, Memon \textit{et al.} proposed a simulation model by exploiting queueing theory. In the proposed model, the authors constructed an M/M/1 queue for the memory pool and an M/M/c queue for the mining pool, respectively. This model can capture multiple critical statistical metrics of blockchain networks, such as the number of transactions per new block, the mining interval of a block, the transaction throughput, and the waiting time in the memory pool.\n Next, Fang \textit{et al.} proposed a queueing analytical model to allocate mining resources for general PoW-based blockchain networks. The authors formulated the queueing model using Lyapunov optimization techniques.
Based on this stochastic theory, a dynamic allocation algorithm was designed to find a trade-off between mining energy and queueing delay. Different from the aforementioned work , the proposed Lyapunov-based algorithm does not need to make any statistical assumptions on the arrivals and services.\n }\n\begin{table*}[t]\n\caption{Various Analytics Models for Better Understanding Blockchain Networks.}\n\centering\n\footnotesize\n\begin{tabular}{|p{0.15\textwidth}|p{0.03\textwidth}|p{0.2\textwidth}|p{0.48\textwidth}|}\n\hline\n\textbf{Emphasis} &\textbf{Ref.} &\textbf{Metrics} &\textbf{Methodology \\& Implications}\\\\\n\hline\n \t\multirow{4}*{Applicability}\n\t & & Public verifiability, transparency, privacy, integrity, redundancy, and trust anchor & Authors proposed the first structured analytical methodology that can help decide whether a particular application system indeed needs a blockchain, either permissioned or permissionless, as its technical solution.\\\\\n\t\cline{2-4}\n \t\multirow{1}*{of blockchains} & \modified{} & \modified{Scalability, efficiency and privacy issues in cloud for blockchains} & \modified{Authors proposed a novel upper-bound privacy leakage based approach to identify intermediate data sets partitioned and distributed in cloud for encryption.
This approach can significantly improve the scalability and efficiency of privacy-preserving data processing in the cloud.}\\\\\n\t\hline\n \t\multirow{5}*{Exploration of}\n\t & & Temporal information and the multiplicity features of Ethereum transactions & Authors proposed an analytical model based on the multiplex network theory for understanding Ethereum transactions.\\\\\n\t\cline{2-4}\n \t\multirow{1}*{Ethereum transactions}\n \t& & Pending time of Ethereum transactions & Authors conducted a characterization study of Ethereum by focusing on the pending time, and attempted to find the correlation between pending time and fee-related parameters of Ethereum.\\\\\n\t\hline\n \tModeling the competition over multiple miners & & Competing mining resources of miners of a cryptocurrency blockchain & Authors exploited the Game Theory to find Nash equilibria while peers are competing for mining resources.\\\\\n\t\hline\n\tA neat bound of consistency latency & & Consistency of a PoW blockchain & Authors derived a neat bound of mining latencies that helps understand the consistency of Nakamoto's blockchain consensus in asynchronous networks.\\\\\n\t\hline\n\tNetwork connectivity & & Consensus security & Authors proposed an analytical model to evaluate the impact of network connectivity on the consensus security of PoW blockchain under different adversary models.\\\\\n\t\hline\n\tHow Ethereum responds to sharding & & Balance among shards, number of TX that would involve multiple shards, the amount of data relocated across shards & Authors studied how sharding impacts Ethereum by first modeling Ethereum as a graph, and then assessing the three metrics mentioned when partitioning the graph.\\\\\n\t\hline\n\tRequired properties of sharding protocols & & Consistency and Scalability & Authors proposed an analytical model to evaluate whether a protocol for sharded distributed ledgers fulfills necessary properties.\\\\\n\t\hline\n\tVulnerability by forking 
attacks & & Hashrate power, net cost of an attack & Authors proposed a fine-grained vulnerability analytical model of blockchain networks under intentional forking attacks, taking advantage of large deviation theory. \\\\\n\t\hline\n\tCounterattack to double-spend attacks & & Robustness parameter, vulnerability probability & Authors studied how to defend against and even counterattack double-spend attacks in PoW blockchains.\\\\\n\t\hline\n\tLimitations of PBFT-based blockchains & & Performance of blockchain applications, Persistence, Possibility of forks & Authors studied and identified several misalignments between the requirements of permissioned blockchains and the classic BFT protocols.\\\\\n\t\hline\n\t\modified{Unified analysis of different PoX consensus schemes} & & \modified{Resource sensitivity, system convergence, and resource fairness} & \modified{Authors proposed a new Markov model to unify the analysis of the steady-state for weighted resource distribution of different PoX-based blockchains.} \\\\\n\t\hline\n\end{tabular}\n\label{Table:analytics}\n\end{table*}
Analytical Models for Blockchain Networks}}\n \modified{This subsection is summarized in Table \ref{Table:analytics}.}\n {\color{black}\n For people considering whether a blockchain system is needed for their business, a notable fact is that blockchain is not applicable to all real-life use cases. 
To help analyze whether blockchain is appropriate to a specific application scenario, Wust \textit{et al.} provided the first structured analytical methodology and applied it to three representative scenarios, i.e., supply chain management, interbank payments, and decentralized autonomous organizations.\n \modified{Another article proposed a novel upper-bound privacy-leakage approach to identify which intermediate datasets, partitioned and distributed in the cloud, need encryption. This approach can significantly improve the scalability and efficiency of privacy-preserving data processing in the cloud. This study provides insights into scalability, efficiency, and privacy issues in the cloud for blockchains.}\n Although Ethereum has gained much popularity since its debut in 2014, the systematic analysis of Ethereum transactions remains insufficiently explored. Therefore, Lin \textit{et al.} proposed to model the transactions using multiplex network techniques. The authors then devised several random-walk strategies for graph representation of the transaction network. This study could help us better understand the temporal data and the multiplicity features of Ethereum transactions.\nTo better understand the network features of an Ethereum transaction, Sousa \textit{et al.} focused on the pending time, which is defined as the latency from the time a transaction is observed to the time it is packed into the blockchain. The authors tried to find correlations between such pending time and fee-related parameters such as gas and gas price. Surprisingly, their data-driven empirical analysis showed no clear correlation between those factors, which is counterintuitive.\n To achieve a consensus on the state of a blockchain, miners have to compete with each other by invoking a certain proof mechanism, say PoW. 
Such competition among miners is a key component of public blockchains such as Bitcoin. \n To model the competition among multiple miners of a cryptocurrency blockchain,\nAltman \textit{et al.} exploited game theory to find Nash equilibria while peers are competing for mining resources. The proposed approach helps researchers understand such competition well. However, the authors also mentioned that they did not study the punishment and cooperation between miners over repeated games. Those open topics will be very interesting for future studies.\n\modified{\n Besides competitions among individual miners, there are also competitions among mining pools. Malicious pools can launch DDoS attacks to overload the victim pool's manager with invalid share submissions. The delay in verifying extra share submissions potentially impairs the hash power of the victim pool and thus undermines the potential reward for pool miners. Knowing that the chance of getting a reward is smaller, miners in the victim pools would migrate to other mining pools, which would further weaken the victim pools. To better understand this kind of competition, Wu \textit{et al.} proposed a stochastic game-theoretic model in a two-mining-pool case. The authors used the Q-learning algorithm to find the Nash equilibrium and maximize the long-term payoffs. The experiment showed that the smaller mining pool is more likely to attack the larger one. Also, mining pools tend to adopt a lower attack level when the DDoS attack cost increases.\n}\n To ensure the consistency of a PoW blockchain in an asynchronous network, Zhao \textit{et al.} performed an analysis and derived a neat bound around $\frac{2\mu}{\ln (\mu/\nu)}$, where $\mu + \nu = 1$, with $\mu$ and $\nu$ denoting the fractions of computation power dominated by the honest and adversarial miners, respectively. 
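Plugging numbers into this bound makes its behavior concrete; the short sketch below is only an illustrative evaluation of the formula $2\mu/\ln(\mu/\nu)$, not the authors' exact theorem statement:

```python
import math

def consistency_bound(mu: float) -> float:
    """Evaluate 2*mu / ln(mu/nu) with nu = 1 - mu, where mu is the
    honest fraction of computation power (honest majority assumed)."""
    if not 0.5 < mu < 1.0:
        raise ValueError("the bound assumes an honest majority: 0.5 < mu < 1")
    nu = 1.0 - mu
    return 2.0 * mu / math.log(mu / nu)

# The bound tightens as honest mining power grows:
for mu in (0.6, 0.75, 0.9):
    print(f"mu = {mu:.2f}  ->  bound = {consistency_bound(mu):.3f}")
```

As expected, the value shrinks monotonically as the honest fraction $\mu$ increases.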
Such a neat bound of mining latencies helps us well understand the consistency of Nakamoto's blockchain consensus in asynchronous networks. \n Bitcoin's consensus security is built upon the honest-majority assumption. Under this assumption, the blockchain system is thought secure only if the majority of miners are honest while voting towards a global consensus. Recent research suggests that network connectivity, the forks of a blockchain, and the mining strategy are major factors that impact the security of consensus in the Bitcoin blockchain. To provide pioneering concrete modelings and analysis, Xiao \textit{et al.} proposed an analytical model to evaluate the impact of network connectivity on the consensus security of PoW blockchains.\nTo validate the effectiveness of the proposed analytical model, the authors applied it to two adversary scenarios, i.e., the \textit{honest-but-potentially-colluding} and \textit{selfish mining} models.\n Although sharding is viewed as a prevalent technique for improving the scalability of blockchain systems, several essential questions remain: what can we expect from, and what price must be paid for, introducing the sharding technique to Ethereum? To answer those questions, Fynn \textit{et al.} studied how sharding works for Ethereum by modeling Ethereum as a graph. Via partitioning the graph, they evaluated the trade-off between the edge-cut and balance. Several practical insights have been disclosed. For example, three major components, e.g., computation, storage and bandwidth, play a critical role when partitioning Ethereum; a good design of incentives is also necessary for adopting a sharding mechanism.\n As mentioned multiple times, the sharding technique is viewed as a promising solution to improving the scalability of blockchains. However, the properties of a sharded blockchain under a fully adaptive adversary are still unknown. 
To this end, Avarikioti \textit{et al.} defined \textit{consistency} and \textit{scalability} for sharded blockchain protocols. The limitations on the security and efficiency of sharding protocols were also derived. Then, they analyzed these two properties in the context of multiple popular sharding-based protocols such as \textit{OmniLedger}, \textit{RapidChain}, \textit{Elastico}, and \textit{Monoxide}. Several interesting conclusions have been drawn. For example, the authors concluded that Elastico and Monoxide failed to guarantee the balance between the consistency and scalability properties, while OmniLedger and RapidChain fulfill all requirements of a robust sharded blockchain protocol. \n Forking attacks have become a common threat faced by the blockchain market. The related existing studies mainly focus on the detection of such attacks through transactions. However, this manner cannot prevent the forking attacks from happening. To resist forking attacks, Wang \textit{et al.} studied the fine-grained vulnerability of blockchain networks caused by intentional forks using large deviation theory. This study can help set the robustness parameters for a blockchain network, since the vulnerability analysis provides the correlation between the robustness level and the vulnerability probability. In detail, the authors found that it is much more cost-efficient to set the robustness-level parameters than to spend computational capability on lowering the attack probability.\n Existing economic analysis reported that attacks towards PoW mining-based blockchain systems can be cheap under a specific condition when renting sufficient hashrate capability. Moroz \textit{et al.} studied how to defend against double-spend attacks in an interesting reverse direction. The authors found that the counterattack of victims can lead to a classic game-theoretic \textit{War of Attrition} model. 
This study showed that double-spend attacks on some PoW-based blockchains are actually cheap. However, defending against or even counterattacking such double-spend attacks is possible when victims own the same capacity as the attacker.\n Although BFT protocols have attracted a lot of attention, there are still a number of fundamental limitations unaddressed when running blockchain applications based on the classical BFT protocols. Those limitations include one related to low performance, and two related to the gaps between the state machine replication and blockchain models (i.e., the lack of strong persistence guarantees and the occurrence of forks). To identify those limitations, Bessani \textit{et al.} first studied them using a digital coin blockchain App called SmartCoin and a popular BFT replication library called BFT-SMART; then they discussed how to tackle these limitations in a protocol-agnostic manner. The authors also implemented an experimental permissioned blockchain platform, namely SmartChain. Their evaluation results showed that SmartChain can address the aforementioned limitations and significantly improve the performance of a blockchain application. \n {\color{blue}\n The Nakamoto protocol is designed to solve the Byzantine Generals Problem for permissionless blockchains. However, a general analytical model is still missing for capturing the steady-state profit of each miner against the competitors.\n To this end, Yu \textit{et al.} studied the weighted resource distribution of proof-based consensus engines, referred to as Proof-of-X (PoX), in large-scale networks. The proposed Markov model attempts to unify the analysis of different PoX mechanisms considering three new unified metrics, i.e., resource sensitivity, system convergence, and resource fairness. 
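The steady-state computation underlying such a Markov model can be sketched with plain power iteration; the 3-state transition matrix below is a toy example chosen for illustration, not data from the paper:

```python
def stationary(P, iters=100_000, tol=1e-12):
    """Return the stationary distribution pi (pi = pi P) of an ergodic
    Markov chain with row-stochastic transition matrix P, via power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        nxt = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(pi, nxt)) < tol:
            return nxt
        pi = nxt
    return pi

# Hypothetical chain over three resource-share states of a PoX engine:
P = [[0.90, 0.05, 0.05],
     [0.10, 0.80, 0.10],
     [0.04, 0.06, 0.90]]
print([round(p, 4) for p in stationary(P)])  # long-run time share per state
```

The resulting vector gives the long-run fraction of time the chain spends in each state, which is the quantity a steady-state analysis of weighted resource distribution reasons about.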
}\n }\n\\begin{table*}[h!t]\n\\caption{Data Analytics for Better Understanding Cryptocurrency Blockchains.}\n\\centering\n\\footnotesize\n\\begin{tabular}{|p{0.14\\textwidth}|p{0.05\\textwidth}|p{0.19\\textwidth}|p{0.5\\textwidth}|}\n\\hline\n\\textbf{Emphasis} &\\textbf{Ref.} &\\textbf{Metrics} &\\textbf{Methodology \\& Implications}\\\\\n\\hline\n \t\\multirow{3}*{Cryptojacking}\n\t & \\multirow{2}*{} & Hardware performance counters & Authors proposed a machine learning-based solution to prevent cryptojacking attacks.\\\\\n\t\\cline{2-4}\n \t\\multirow{1}*{detection}\n\t& \\multirow{2}*{} & Various system resource utilization & Authors proposed an in-browser cryptojacking detection approach (CapJack), based on the latest CapsNet.\\\\\n\t\\hline\n\tMarket-manipulation mining & \\multirow{2}*{ } & Various graph characteristics of transaction graph & Authors proposed a mining approach using the exchanges collected from the transaction networks.\\\\\n\t\\hline\n\tPredicting volatility of Bitcoin price & \\multirow{2}*{ } & Various graph characteristics of extreme chainlets & Authors proposed a graph-based analytic model to predict the intraday financial risk of Bitcoin market.\\\\\n\t\\hline\n\tMoney-laundering detection & \\multirow{2}*{} & Various graph characteristics of transaction graph & Authors exploited machine learning models to detect potential money laundering activities from Bitcoin transactions.\\\\\n\t\\hline\n\t\\multirow{3}*{Ponzi-scheme} & \\multirow{2}*{} & Factors that affect scam persistence & Authors analyzed the demand and supply perspectives of Ponzi schemes on Bitcoin ecosystem.\\\\\n\t\\cline{2-4}\n\t\\multirow{1}*{detection} \n\t& & Account and code features of smart contracts & Authors detected Ponzi schemes for Ethereum based on data mining and machine learning approaches.\\\\\n\t\\hline\n\tDesign problem of cryptoeconomic systems & \\multirow{3}*{} & Price of XNS token, Subsidy of App developers & Authors presented a practical 
evidence-based example to show how data science and stochastic modeling can be applied to designing cryptoeconomic blockchains.\\\\\n\t\hline\n\tPricing mining hardware & \multirow{2}*{} & \multirow{2}*{Miner revenue, ASIC value} & Authors studied the correlation between the price of mining hardware (ASIC) and the value volatility of the underlying cryptocurrency.\\\\\n\t\hline\n\end{tabular}\n\label{Table:analyticsCrptocurrency}\n\end{table*}
Data Analytics for Cryptocurrency Blockchains}}\n \modified{This subsection is summarized in Table \ref{Table:analyticsCrptocurrency}.}
Market Risks Detection}}\n {\color{black}\n As aforementioned, Akcora \textit{et al.} proposed a graph-based predictive model to forecast the investment risk of the Bitcoin market.\n On the other hand, with the tremendously increasing price of 
cryptocurrencies such as Bitcoin, hackers are aggressively utilizing any available computational resources to participate in mining. Thus, web users face severe risks from cryptocurrency-hungry hackers. For example, \textit{cryptojacking} attacks have drawn growing attention. In this type of attack, a mining script is embedded secretly by a hacker without the user's knowledge. When the script is loaded, mining begins in the background of the system and a large portion of hardware resources is requisitioned for mining.\n To tackle cryptojacking attacks, Tahir \textit{et al.} proposed a machine learning-based solution, which leverages hardware performance counters as the critical features and achieves high accuracy in classifying parasitic miners. The authors also built their approach into a browser extension towards widespread real-time protection for web users.\nSimilarly, Ning \textit{et al.} proposed \textit{CapJack}, an in-browser cryptojacking detector based on deep capsule network (CapsNet) technology.\n As mentioned previously, to detect potential manipulation of the Bitcoin market, Chen \textit{et al.} proposed a graph-based mining approach to study the evidence from the transaction network built from the Mt. Gox transaction history. The findings of this study suggest that the cryptocurrency market requires regulation.\n To predict drastic price fluctuations of Bitcoin, Dixon \textit{et al.} studied the impact of extreme transaction graph (ETG) activity on the intraday dynamics of Bitcoin prices. 
The authors utilized chainlets (subgraphs of the transaction graph) for developing their predictive models.\n}
Ponzi Schemes Detection}}\n {\color{black}\n The Ponzi scheme, a classic scam, is taking advantage of mainstream blockchains such as Ethereum. Data mining technologies are widely used for detecting Ponzi schemes.\n Several representative studies are reviewed as follows.\n Vasek \textit{et al.} analyzed the demand and supply of Ponzi schemes in the Bitcoin ecosystem. The authors were interested in the reasons that make those Ponzi frauds succeed in attracting victims, and in the lifetime of those scams.\n To detect such Ponzi schemes and foster a healthier blockchain economic environment, Chen \textit{et al.} proposed a machine learning-based classification model by exploiting data mining on Ethereum smart contracts. 
The experimental results showed that the proposed detection model can identify Ponzi schemes even at the very beginning, when those schemes are just created.\n }
Money-Laundering Detection}}\n {\color{black}\n Although Bitcoin has received enormous attention, it is also criticized for facilitating criminal financial activities such as Ponzi schemes and money laundering.\n For example, Seo \textit{et al.} mentioned that money laundering conducted in the underground market can be detected using the Bitcoin mixing services. However, they did not present an essential anti-money laundering strategy in their paper. \n In contrast, utilizing a transaction dataset collected over three years, Hu \textit{et al.} performed in-depth detection for discovering money laundering activities on the Bitcoin network. 
To distinguish money laundering transactions from regular ones, the authors proposed four types of classifiers based on graph features of the transaction graph, i.e., classifiers based on immediate neighbors, deepwalk embeddings, node2vec embeddings, and decision trees.\n }
Portrait of Cryptoeconomic Systems}}\n {\color{black}\n It is not common to introduce data science and stochastic simulation modeling into the design problem of cryptoeconomic engineering. Laskowski \textit{et al.} presented a practical evidence-based example to show how this methodology can be applied to designing cryptoeconomic blockchains.\n Yaish \textit{et al.} discussed the relationship between cryptocurrency mining and the market price of the special hardware (ASICs) that supports PoW consensus. The authors showed that the decreasing volatility of Bitcoin's price has a counterintuitive negative impact on the value of mining hardware. This is because, when Bitcoin becomes widely adopted and its volatility decreases, miners are less financially incentivized to participate in mining. 
This study also revealed that an ASIC mining device could be imitated by a combination of bonds and underlying cryptocurrencies such as bitcoins.\n }
\label{sec:tools}\n\modified{Measurements are summarized in Table \ref{Table:measurements}, and datasets are summarized in Table \ref{Table:datasetFramework}.}
Performance Measurements and Datasets for Blockchains}}\n {\color{black}\n Although diverse blockchains have been proposed in recent years, very few efforts have been devoted to measuring the performance of different blockchain systems. Thus, this part reviews representative studies on performance measurements for blockchains. 
The measurement metrics include throughput, security, scalability, etc.\n As a pioneer work in this direction, Gervais \textit{et al.} proposed a quantitative framework, using which they studied the security and performance of several PoW blockchains, such as Bitcoin, Litecoin, Dogecoin and Ethereum. The authors focused on multiple metrics of the security model, e.g., stale block rate, mining power, mining costs, the number of block confirmations, propagation ability, and the impact of eclipse attacks. They also conducted extensive simulations for the four aforementioned blockchains with respect to the impact of block interval, the impact of block size, and throughput. By evaluating how network parameters affect the security of PoW blockchains, researchers can compare security performance objectively, which helps them devise optimal adversarial strategies and appropriate security provisions for PoW blockchains.\n\begin{table*}[t]\n\caption{Various performance measurements of blockchains.}\n\centering\n\footnotesize\n\begin{tabular}{|p{0.03\textwidth}|p{0.15\textwidth}|p{0.22\textwidth}|p{0.48\textwidth}|}\n\hline\n\textbf{Ref.}&\textbf{Target Blockchains} &\textbf{Metrics} &\textbf{Implementation / Experiments / Methodology}\\\\\n\hline\n\t\multirow{4}*{} & General mining-based blockchains, e.g., Bitcoin and Ethereum & TPS, the overheads of cross-zone transactions, the confirmation latency of transactions, etc. & Monoxide was implemented utilizing C++. RocksDB was used to store blocks and TX. The real-world testing system was deployed on a distributed configuration consisting of 1200 virtual machines, each with 8 cores and 32 GB of memory. 
In total 48,000 blockchain nodes were exploited in the testbed.\\\\\n\t\hline\n\t\multirow{5}*{} & \multirow{5}*{General blockchains} & Throughput and confirmation latency, scalability under different numbers of clients, forking rate, and resource utilization (CPU, network bandwidth) & The Prism testbed is deployed on Amazon EC2 instances each with 16 CPU cores, 16 GB RAM, 400 GB NVMe SSD, and a 10 Gbps network interface. In total 100 Prism client instances are connected into a random 4-regular graph topology.\\\\\n\t\hline\n\t\multirow{3}*{ }& \multirow{3}*{Ethereum} & TX throughput, the makespan of transaction latency & The proposed GARET algorithm was measured to outperform existing techniques by up to 12\% in TX throughput, and decrease the makespan of TX latency by about 74\% under various conditions in sharded Ethereum. \\\\\n\t\hline\n\t\multirow{5}*{} & \hspace{0.15\textwidth} Bitcoin, Litecoin, Dogecoin, Ethereum & Block interval, block size, and throughput & Proposed a quantitative framework, using which they studied the security and performance of several PoW blockchains. Via the evaluation of network parameters about the security of PoW blockchains, researchers can make trade-offs between security provisions and performance objectively.\\\\\n\t\hline\n\t\multirow{3}*{} & \multirow{3}*{Hyperledger Fabric} & Execution time, latency, throughput, scalability vs the number of blockchain nodes & Presented performance measurement and analysis of Hyperledger Fabric version 0.6 and version 1.0.\\\\\n\t\hline\n\t\multirow{4}*{} & Ethereum, Parity, CITA, Hyperledger Fabric & TPS, Average response delay, Transactions per CPU, TX per memory second, TX per disk I/O and TX per network data & Proposed a scalable framework for monitoring the real-time performance of blockchain systems. 
The authors evaluated four popular blockchain systems, i.e., Ethereum, Parity, CITA and Hyperledger Fabric.\\\\\n\t\hline\n\t\multirow{5}*{} & \multirow{5}*{Private blockchains} & Throughput and latency, Scalability, Fault tolerance and security, and other micro measurements, e.g., CPU utilization, Network utilization, etc. & The authors proposed Blockbench for measuring and analyzing multiple performance aspects of private blockchain systems. Through Blockbench, the authors revealed several insightful bottlenecks and trade-offs in designing blockchain software.\\\\\n\t\hline\n\t\multirow{3}*{} & \multirow{3}*{Ethereum} & Network size and geographic distribution of Ethereum network nodes & Proposed a network monitoring tool named NodeFinder, which is designed to find the unusual network properties of Ethereum network nodes from the underlying P2P network perspective.\\\\\n\t\hline\n\t\multirow{3}*{} & \multirow{3}*{Bitcoin network} & TPS, network latency, number of forks, and mining rewards & The authors proposed a local Bitcoin network simulator to study the performance of Bitcoin under different network conditions including various topologies, network latencies, packet loss rates, and mining difficulties.\\\\\n\t\hline\n\end{tabular}\n\label{Table:measurements}\n\end{table*}\n Nasir \textit{et al.} conducted performance measurements and discussion of two versions of Hyperledger Fabric. The authors focused on metrics including execution time, transaction latency, throughput, and scalability versus the number of nodes in blockchain platforms. Several useful insights have been revealed for the two versions of Hyperledger Fabric. 
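Basic metrics of this kind, e.g., throughput and mean block interval, can be derived directly from block metadata; the sketch below runs over synthetic (timestamp, transaction count) tuples with hypothetical numbers, not real chain data:

```python
def chain_metrics(blocks):
    """Compute throughput (TPS) and mean block interval from a
    time-ordered list of (unix_timestamp, tx_count) block tuples."""
    if len(blocks) < 2:
        raise ValueError("need at least two blocks")
    times = [t for t, _ in blocks]
    elapsed = times[-1] - times[0]
    # Transactions confirmed after the first observed block:
    total_tx = sum(c for _, c in blocks[1:])
    intervals = [b - a for a, b in zip(times, times[1:])]
    return {"tps": total_tx / elapsed,
            "mean_block_interval": sum(intervals) / len(intervals)}

# Synthetic chain: one block every 10 s, 100 transactions per block.
m = chain_metrics([(0, 100), (10, 100), (20, 100), (30, 100)])
print(m)  # tps = 300/30 = 10.0, mean_block_interval = 10.0
```

Real measurement frameworks add many refinements (warm-up windows, per-client latency percentiles, resource counters), but the core throughput arithmetic is of this form.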
\n As mentioned previously, the authors of Monoxide evaluated their proposal w.r.t. metrics including the scalability of TPS as the number of network zones increases, the overhead of both cross-zone transactions and storage size, the confirmation latency of transactions, and the orphan rate of blocks.\n Likewise, the authors of Prism performed rich measurements of their proposed new blockchain protocol under limited network bandwidth and CPU resources. The evaluated performance includes the distribution of block propagation delays, the relationship between block size and mining rate, block size versus assembly time, the expected time to reach consensus on a block hash, the expected time to reach consensus on blocks, etc. \n Later, Zheng \textit{et al.} proposed a scalable framework for monitoring the real-time performance of blockchain systems. This work evaluated four popular blockchain systems, i.e., Ethereum, Parity, Cryptape Inter-enterprise Trust Automation (CITA) and Hyperledger Fabric, in terms of several metrics including \textit{transactions per second}, \textit{average response delay}, \textit{transactions per CPU}, \textit{transactions per memory second}, \textit{transactions per disk I/O} and \textit{transactions per network data}.\n Such comprehensive performance evaluation results offered rich viewpoints on the four popular blockchain systems.\n The experimental logs and technical report can be accessed from \url{http://xblock.pro}. \n Recently, Zheng \textit{et al.} extended their work and released a new open-source dataset framework, called XBlock-ETH, for the data-driven analysis of Ethereum. XBlock-ETH contains multiple types of Ethereum data such as transactions, smart contracts and tokens. Thus, researchers can extract and explore Ethereum data using XBlock-ETH. The authors first collected and cleaned the most recent on-chain dataset from Ethereum. 
Then, they presented how to perform basic exploration of these datasets to make the best use of them. Like their previous work, those datasets and processing codes can be found on the aforementioned webpage \textit{xblock.pro}.\n In a similar work by the same team, the authors proposed another new dataset framework dedicated to EOSIO, named XBlock-EOS, which also includes multiple types of rich on-chain/off-chain datasets such as transactions, blocks, smart contracts, internal/external EOS transfer events, tokens, accounts and resource management. To show how to utilize the proposed framework, the authors presented comprehensive statistics and explorations using those datasets, for example, blockchain analysis, smart contract analysis, and cryptocurrency analysis. Finally, this study also discussed future directions of XBlock-EOS on topics including: i) data analysis based on off-chain data to provide off-chain user behavior for blockchain developers, ii) exploring new features of EOSIO data that are different from those of Ethereum, and iii) conducting a joint analysis of EOSIO with other blockchains.\n }\n\begin{table*}[t]\n\caption{Blockchain Dataset Frameworks and Evaluation Tools.}\n\centering\n\footnotesize\n\begin{tabular}{|p{0.15\textwidth}|p{0.1\textwidth}|p{0.03\textwidth}|p{0.58\textwidth}|}\n\hline\n\textbf{Recognition}&\textbf{Target} &\textbf{Ref.} &\textbf{Utilization}\\\\\n\hline\n\t\multirow{3}*{XBlock-ETH} & \multirow{3}*{Ethereum} & \multirow{3}*{} & Authors released a new open-source dataset framework for analysis of Ethereum, i.e., XBlock-ETH, which includes multiple types of Ethereum datasets such as transactions, smart contracts and tokens.
\\\\\n\t\hline\n\t\multirow{2}*{XBlock-EOS} & \multirow{2}*{EOS} & \multirow{2}*{} & Authors proposed a new dataset framework dedicated to EOSIO, named XBlock-EOS, to show how to perform comprehensive statistics and exploration of EOSIO datasets.\\\\\n\t\hline\n\t\multirow{2}*{BlockSci} & {General blockchains} & \multirow{2}*{} & Authors proposed an open-source software platform, named BlockSci, for the analysis of blockchains.\\\\\n\t\hline\n\t\multirow{2}*{Blockbench} & General blockchains & \multirow{2}*{} & Authors proposed a benchmarking framework for measuring the data processing capability and performance of different layers of a blockchain system.\\\\\n\t\hline\n\t\multirow{2}*{NodeFinder} & Ethereum nodes & \multirow{2}*{} & Authors proposed a measuring tool named NodeFinder, to investigate the opaque network characteristics of Ethereum network nodes.\\\\\n\t\hline\n\tNetwork simulator for Bitcoin & \multirow{2}*{Bitcoin} & \multirow{2}*{} & Authors proposed a configurable network simulator for the performance measurements of Bitcoin using lightweight virtualization technologies.\\\\\n\t\hline\n\end{tabular}\n\label{Table:datasetFramework}\n\end{table*}", "id": "403127a8-201c-43b8-a139-52fe4169de74", "level": "subsection", "origin_cites_number": 16, "parent_id": "4358fa51-9a15-47c5-bdf1-24771001391e", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Useful Measurements, Datasets and Experiment Tools for Blockchains" ], [ "subsection", "{\\color{black" ] ], "subsections": [], "title": "{\\color{black" }, { "cite_extract_rate": 0.75, "cites": [ 2177, 7556, 2179 ], "content": "Useful Evaluation Tools for Blockchains}}\n{\color{black}\n Kalodner \textit{et al.} proposed BlockSci, which is designed as an open-source software platform for blockchain analysis.
Under the architecture of BlockSci, the raw blockchain data is parsed to produce the core blockchain data including transaction graph, indexes and scripts, which are then provided to the analysis library. Together with the auxiliary data including P2P data, price data and user tags, a client can either directly query or read through a Jupyter notebook interface.\n To evaluate the performance of private blockchains, Dinh \textit{et al.} proposed a benchmarking framework, named Blockbench, which can measure the data processing capability and the performance of various layers of a blockchain system. Using Blockbench, the authors then performed detailed measurements and analysis of three blockchains, i.e., Ethereum, Parity and Hyperledger. The results revealed useful insights into those three blockchain systems. For example, today's blockchains are not scalable w.r.t. data processing workloads, and several bottlenecks should be considered while designing different layers of a blockchain from a software engineering perspective.\n Ethereum has received enormous attention regarding mining challenges, the analytics of smart contracts, and the management of block mining. However, few efforts have been devoted to information dissemination from the perspective of P2P networks. To fill this gap, Kim \textit{et al.} proposed a measuring tool named NodeFinder, which aims to discover the opaque network properties of Ethereum network nodes. Through a three-month-long data collection on the P2P network, the authors found several previously unobserved differences between the Ethereum network and other popular P2P networks like BitTorrent, Bitcoin and Gnutella in terms of network size and geographic distribution.\n Recently, by exploiting lightweight virtualization technologies, Alsahan \textit{et al.} developed a configurable network simulator for the performance measurements of Bitcoin.
The proposed simulator allows users to configure diverse network conditions, such as blockchain network topology, link delays, and mining difficulties, to emulate the real-world operation environment. Using this simulator, experiments can be performed to measure the Bitcoin network under various network conditions. It also supports security-attack tests and point-of-failure simulations. The authors also made this simulator open-source on GitHub.\n }", "id": "f94facb6-7d61-4fb1-b046-dd39e22272c1", "level": "subsection", "origin_cites_number": 4, "parent_id": "4358fa51-9a15-47c5-bdf1-24771001391e", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Useful Measurements, Datasets and Experiment Tools for Blockchains" ], [ "subsection", "{\\color{black" ] ], "subsections": [], "title": "{\\color{black" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:openissue}\nIn this section, we envision the open issues and promising directions for future studies.", "id": "476e3f3a-32cd-4b56-a840-1d3541093fd4", "level": "section", "origin_cites_number": 0, "parent_id": "71dcb80c-95ed-47fb-909c-9249db523d8e", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Open Issues and Future Directions" ] ], "subsections": [ "a98dc6df-34f4-454f-8b42-bf8560139253", "364f9f64-9808-4826-ae95-be46e04b5c57", "f5c9ddcf-d114-40db-ad3c-975e9479c546", "d17e51c2-575c-44bc-8172-d077f27a24a0" ], "title": "Open Issues and Future Directions" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "a98dc6df-34f4-454f-8b42-bf8560139253", "level": "subsection", "origin_cites_number": 0, "parent_id": "476e3f3a-32cd-4b56-a840-1d3541093fd4", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Open Issues and Future Directions" ], [ "subsection",
"Performance-Improving Issues" ] ], "subsections": [ "4fa04e67-a427-4f57-aeb0-845c436deb97", "6032c497-2ca7-4677-b776-3350a459afe1", "c3924808-862c-4bf5-a51f-a3062d031e4d", "e023aabf-3038-4697-9da3-f553e79d68a6", "ba4709ea-2fda-4266-aed3-ff5876a87f93", "b083057b-2291-4712-9d8d-1c19458f9031", "ec2fc8f7-fb6c-4565-931a-bd1759b1d54f", "e0d8fb60-5210-452e-9e5f-79fd02100137" ], "title": "Performance-Improving Issues" }, { "cite_extract_rate": 0, "cites": [], "content": "Scalability is still a severe challenge for most of the blockchain systems.\n For example, the PBFT consensus protocols issue $O(n^{2})$ messages, where $n$ is the number of participants. This quadratic message complexity makes large-scale deployment unrealistic.\n Therefore, new distributed practical Byzantine protocols and theoretical modelings of scalability solutions, such as sidechain, subchain, off-chain, sharding technique, DAG, and even chain-less proposals, are urgently needed for scalable blockchains.", "id": "4fa04e67-a427-4f57-aeb0-845c436deb97", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "a98dc6df-34f4-454f-8b42-bf8560139253", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Open Issues and Future Directions" ], [ "subsection", "Performance-Improving Issues" ], [ "subsubsection", "Scalability Issues" ] ], "subsections": [], "title": "Scalability Issues" }, { "cite_extract_rate": 0, "cites": [], "content": "The sharding technique includes three typical categories, i.e., transaction sharding, network sharding, and state sharding. Through an extensive review of the existing studies of sharding techniques, we found that the resilient mechanisms for sharding blockchains are still missing.
In state sharding particularly, once failures occur on blockchain nodes, how to ensure the correct recovery of the real-time running states of the failed blockchain node(s) is critical to the resilience and robustness of the blockchain.", "id": "6032c497-2ca7-4677-b776-3350a459afe1", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "a98dc6df-34f4-454f-8b42-bf8560139253", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Open Issues and Future Directions" ], [ "subsection", "Performance-Improving Issues" ], [ "subsubsection", " Resilient Mechanisms for Sharding Technique " ] ], "subsections": [], "title": " Resilient Mechanisms for Sharding Technique " }, { "cite_extract_rate": 0, "cites": [], "content": "Although a number of committee-based sharding protocols have been proposed, those protocols can only tolerate at most 1/3 adversaries. Thus, more robust Byzantine agreement protocols need to be devised. Furthermore, all the sharding-based protocols incur additional cross-shard traffic and latency because of the cross-shard transactions. Therefore, the cross-shard performance in terms of throughput, latency and other metrics has to be well guaranteed in future studies.\n On the other hand, the cross-shard transactions are inherent to cross-shard protocols.
Thus, the pros and cons of such correlation between different shards are worth investigating using certain modelings and theories such as graph-based analysis.", "id": "c3924808-862c-4bf5-a51f-a3062d031e4d", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "a98dc6df-34f4-454f-8b42-bf8560139253", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Open Issues and Future Directions" ], [ "subsection", "Performance-Improving Issues" ], [ "subsubsection", " Cross-Shard Performance " ] ], "subsections": [], "title": " Cross-Shard Performance " }, { "cite_extract_rate": 0, "cites": [], "content": "On cross-chain operations, is essentially a pioneer step towards practical blockchain-based ecosystems. Following this roadmap paved by , we are excited to anticipate that the subsequent related investigations will appear in the near future.\n For example, although the inter-chain transaction experiments achieved initial success, we believe that the secure cross-chain transaction accelerating mechanisms are still on the way.\n In addition, further improvements are still required for the interoperability among multiple blockchains, such as decentralized load balancing smart contracts for sharded blockchains.", "id": "e023aabf-3038-4697-9da3-f553e79d68a6", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "a98dc6df-34f4-454f-8b42-bf8560139253", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Open Issues and Future Directions" ], [ "subsection", "Performance-Improving Issues" ], [ "subsubsection", " Cross-Chain Transaction Accelerating Mechanisms " ] ], "subsections": [], "title": " Cross-Chain Transaction Accelerating Mechanisms " }, { "cite_extract_rate": 0, "cites": [], "content": "Although multiple-chain techniques can improve the throughput by exploiting the parallel mining of multiple
chain instances, how to construct and manage the blocks in all chains in a globally consistent order is still a challenge to the multiple-chain based scalability protocols and solutions.", "id": "ba4709ea-2fda-4266-aed3-ff5876a87f93", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "a98dc6df-34f4-454f-8b42-bf8560139253", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Open Issues and Future Directions" ], [ "subsection", "Performance-Improving Issues" ], [ "subsubsection", " Ordering Blocks for Multiple-Chain Protocols " ] ], "subsections": [], "title": " Ordering Blocks for Multiple-Chain Protocols " }, { "cite_extract_rate": 0, "cites": [], "content": "To improve the performance of blockchains, for example, to reduce the latency of transaction confirmation, some advanced network technologies, such as RDMA (Remote Direct Memory Access) and high-speed network cards, can be exploited to accelerate data access among miners in blockchain networks.", "id": "b083057b-2291-4712-9d8d-1c19458f9031", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "a98dc6df-34f4-454f-8b42-bf8560139253", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Open Issues and Future Directions" ], [ "subsection", "Performance-Improving Issues" ], [ "subsubsection", "Hardware-assisted Accelerating Solutions for Blockchain Networks" ] ], "subsections": [], "title": "Hardware-assisted Accelerating Solutions for Blockchain Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "The blockchain network is built over P2P networks, which include several typical layers, such as the MAC layer, routing layer, network layer, and application layer. The BFT-based protocols essentially operate at the network layer.
In fact, performance improvements can be achieved by proposing various protocols, algorithms, and theoretical models for other layers of the blockchain network.", "id": "ec2fc8f7-fb6c-4565-931a-bd1759b1d54f", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "a98dc6df-34f4-454f-8b42-bf8560139253", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Open Issues and Future Directions" ], [ "subsection", "Performance-Improving Issues" ], [ "subsubsection", "Performance Optimization in Different Blockchain Network Layers" ] ], "subsections": [], "title": "Performance Optimization in Different Blockchain Network Layers" }, { "cite_extract_rate": 0, "cites": [], "content": "Big data and blockchain have several characteristics that are contrary to each other. For example, big data is a centralized management technology with an emphasis on privacy preservation oriented to diverse computing environments. The data processed by big data technology should ensure nonredundancy and an unstructured architecture in a large-scale computing network. In contrast, blockchain technology builds on a decentralized, transparent and immutable architecture, in which the data type is simple, and data is structured and highly redundant. Furthermore, the performance of blockchains requires scalability and the off-chain computing paradigm.\nThus, how to integrate those two technologies together and pursue their mutual benefit is an open issue that is worth in-depth study.
For example, the potential research topics include how to design a suitable new blockchain architecture for big data technologies, and how to break the isolated data islands using blockchains while safeguarding the privacy of big data.", "id": "e0d8fb60-5210-452e-9e5f-79fd02100137", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "a98dc6df-34f4-454f-8b42-bf8560139253", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Open Issues and Future Directions" ], [ "subsection", "Performance-Improving Issues" ], [ "subsubsection", "Blockchain-assisted BigData Networks" ] ], "subsections": [], "title": "Blockchain-assisted BigData Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "{\color{blue}\nAlthough the state-of-the-art studies have reviewed a lot of modelings and theories for better understanding blockchains, more sophisticated approaches and insightful mechanisms are still needed to help researchers gain a new level of perception of high-performance blockchain systems.}\nSome interesting directions are summarized here to inspire subsequent investigations.\n\begin{itemize}\n\item Exploiting more general queueing theories to capture the real-world arrival process of transactions, mining new blocks, and other queueing-related blockchain phases.\n\item Applying priority-based service policies while dealing with transactions and new blocks, to meet a predefined security or regulation level.\n\item Developing more general probabilistic models to characterize the correlations among the multiple performance parameters of blockchain systems.\n\end{itemize}", "id": "364f9f64-9808-4826-ae95-be46e04b5c57", "level": "subsection", "origin_cites_number": 0, "parent_id": "476e3f3a-32cd-4b56-a840-1d3541093fd4", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Open Issues and
Future Directions" ], [ "subsection", "Issues for Better Understanding Blockchains Further" ] ], "subsections": [], "title": "Issues for Better Understanding Blockchains Further" }, { "cite_extract_rate": 0, "cites": [], "content": "}", "id": "f5c9ddcf-d114-40db-ad3c-975e9479c546", "level": "subsection", "origin_cites_number": 0, "parent_id": "476e3f3a-32cd-4b56-a840-1d3541093fd4", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Open Issues and Future Directions" ], [ "subsection", "\\modified{Security Issues of Blockchains" ] ], "subsections": [ "d7ecf26d-cc42-4e58-b63b-f8a860916de7", "62504ae1-f41c-4e73-9673-857ec6b3b230", "f913cfbd-254b-45db-8050-1ea3d2a21178" ], "title": "\\modified{Security Issues of Blockchains" }, { "cite_extract_rate": 0, "cites": [], "content": "From the previous overview, we observe that most of the existing works under this category discuss blockchain-based security and privacy-preserving applications. In fact, security and privacy are also critical issues of the blockchain itself. For example, the privacy of transactions could be compromised by attackers. However, dedicated studies focusing on those issues are still insufficient.", "id": "d7ecf26d-cc42-4e58-b63b-f8a860916de7", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "f5c9ddcf-d114-40db-ad3c-975e9479c546", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Open Issues and Future Directions" ], [ "subsection", "\\modified{Security Issues of Blockchains" ], [ "subsubsection", "Privacy-Preserving for Blockchains" ] ], "subsections": [], "title": "Privacy-Preserving for Blockchains" }, { "cite_extract_rate": 0, "cites": [], "content": "Cryptojacking miners reportedly exist in web browsers according to .
This type of malicious code commandeers the hardware resources of web users, such as computational capability and memory. Thus, anti-cryptojacking mechanisms and strategies need to be developed to protect normal browser users.", "id": "62504ae1-f41c-4e73-9673-857ec6b3b230", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "f5c9ddcf-d114-40db-ad3c-975e9479c546", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Open Issues and Future Directions" ], [ "subsection", "\\modified{Security Issues of Blockchains" ], [ "subsubsection", "Anti-Cryptojacking Mechanisms for Malicious Miners" ] ], "subsections": [], "title": "Anti-Cryptojacking Mechanisms for Malicious Miners" }, { "cite_extract_rate": 0, "cites": [], "content": "The security issues of cryptocurrency blockchains, such as double-spend attacks and frauds in smart contracts, have attracted growing attention from both industrial and academic fields. However, little effort has been committed to theoretical investigations of the security issues of cryptocurrency blockchains. For example, the exploration of punishment and cooperation between miners over multiple chains is an interesting topic for cryptocurrency blockchains.
Thus, we expect to see broader perspectives on modeling the behaviors of both attackers and counterattackers in the context of monetary blockchain attacks.", "id": "f913cfbd-254b-45db-8050-1ea3d2a21178", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "f5c9ddcf-d114-40db-ad3c-975e9479c546", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Open Issues and Future Directions" ], [ "subsection", "\\modified{Security Issues of Blockchains" ], [ "subsubsection", "Security Issues of Cryptocurrency Blockchains" ] ], "subsections": [], "title": "Security Issues of Cryptocurrency Blockchains" }, { "cite_extract_rate": 0, "cites": [], "content": "Most beginners in the blockchain field face a lack of powerful simulation/emulation tools for verifying their new ideas or protocols. Therefore, powerful simulation/emulation platforms that make it easy to deploy scalable testbeds for experiments would be very helpful to the research community.", "id": "d17e51c2-575c-44bc-8172-d077f27a24a0", "level": "subsection", "origin_cites_number": 0, "parent_id": "476e3f3a-32cd-4b56-a840-1d3541093fd4", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Open Issues and Future Directions" ], [ "subsection", "Powerful Experimental Platforms for Blockchains" ] ], "subsections": [], "title": "Powerful Experimental Platforms for Blockchains" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:conclusion} \nThrough a brief review of state-of-the-art blockchain surveys, we first found that a dedicated survey focusing on the theoretical modelings, analytical models and useful experiment tools for blockchains is still missing.\nTo fill this gap, we then conducted a comprehensive survey of the state-of-the-art on blockchains, particularly in the perspectives of theories, modelings,
and measurement/evaluation tools.\nThe taxonomy of each topic presented in this survey tried to convey the new protocols, ideas, and solutions that can improve the performance of blockchains, and to help people better understand blockchains at a deeper level.\nWe believe our survey provides timely guidance on the theoretical insights of blockchains for researchers, engineers, educators, and general readers.", "id": "588c4324-eb6d-4b36-b0cf-212c18117c63", "level": "section", "origin_cites_number": 0, "parent_id": "71dcb80c-95ed-47fb-909c-9249db523d8e", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" }, { "cite_extract_rate": 0, "cites": [], "content": "This work was supported in part by the Key-Area Research and Development Program of Guangdong Province (No. 2019B020214006), the National Natural Science Foundation of China (No. 61902445, No. 61722214, No. 61872310), the Fundamental Research Funds for the Central Universities of China (No.19lgpy222), the Guangdong Basic and Applied Basic Research Foundation (No. 2019A1515011798), the Hong Kong RGC Research Impact Fund (RIF) (No. R5060-19, No. R5034-18), General Research Fund (GRF) (No. 152221/19E), and the Collaborative Research Fund (CRF) (No. C5026-18G).\n\bibliographystyle{IEEEtran}\n\bibliography{reference}\n\end{document}", "id": "50684cfe-ad96-46e5-b0ad-556bae0c19c7", "level": "section", "origin_cites_number": 0, "parent_id": "71dcb80c-95ed-47fb-909c-9249db523d8e", "prefix_titles": [ [ "title", "A Survey of State-of-the-Art on Blockchains: Theories, Modelings, and Tools" ], [ "section", "Acknowledgement" ] ], "subsections": [], "title": "Acknowledgement" } ]
116
[ 2147, 1295, 2150, 2149, 2146, 2145, 2148, 2151, 2154, 2152, 7084, 8527, 2153, 7550, 2163, 8528, 8529, 2158, 2161, 2162, 2159, 7551, 2155, 2160, 2156, 7552, 2157, 2165, 2164, 2167, 8530, 1302, 2168, 2166, 8531, 2169, 7553, 2170, 2172, 8533, 2171, 7554, 8532, 2173, 2174, 7555, 2175, 306, 2176, 1292, 2177, 7556, 2178, 2179 ]
0.851524
[ "Rudzidatul Akmam Dziyauddin", "Dusit Niyato", "Nguyen Cong Luong", "Mohd Azri Mohd Izhar", "Marwan Hadhari", "Salwani Daud" ]
Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey
2019
2019-12-17T03:44:06Z
cs.NI
Autonomous Vehicles (AVs) generate a plethora of data to support various vehicle applications. Thus, a platform with big storage and high computation capability is necessary, and this is possible with the presence of Cloud Computing (CC). However, the computation for vehicular networks at the cloud suffers from several drawbacks, such as latency and cost, due to the proximity issue. As a solution, the computing capability has recently been proposed at the edge of vehicular networks, which is known as Vehicle Edge Computing (VEC). This leads to other open problems: how vehicles offload and compute data at edge nodes, and how data is cached in edge nodes and then disseminated to other vehicles. In this paper, we initially present an overview of VEC architectures, including types of layers, fog nodes, communication technologies and also vehicle applications, which are used in data offloading and dissemination scenarios. Since mobility is critical to the VEC performance, the mobility model used in the VEC scenario is also discussed. We extensively review the Computation Offloading (ComOf) techniques as well as Content Caching and Delivery (CachDel) approaches for VEC. We finally highlight some key research challenges, issues and future works in the paper. {\it Keywords}- vehicular edge, offloading, caching, computing, dissemination, resource allocation, scheduling, mobility, architecture
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "73e1092a-6f4c-4f17-8375-efb82b86042e", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ] ], "subsections": [ "9414e4fd-a9ee-48f2-a3f1-5ac2847d0fb0", "e68aeb1a-f5ba-42ba-a8c4-a6e6091ace7b", "430bbdcf-db7f-4ad9-9856-13b97566a0b4", "2d41e993-b0c6-4adb-816f-465c68bab53e", "4d6558b7-cdb9-410d-9e30-acaefcdc90b0" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:architecture}\nTo realise edge computing in a vehicular environment, the architecture of VEC is essential as it dictates the granularity of the resource management algorithms. This section gives an overview of VEC and then discusses the VEC architecture in terms of types of layers, edge nodes and communication technologies employed. The mobility models considered for the vehicles are also highlighted in this section.", "id": "9414e4fd-a9ee-48f2-a3f1-5ac2847d0fb0", "level": "section", "origin_cites_number": 0, "parent_id": "73e1092a-6f4c-4f17-8375-efb82b86042e", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Architecture of vehicular edge computing" ] ], "subsections": [ "cdcc7f36-97e1-4a93-8366-281a104a993e", "139c9067-1957-4234-b7fe-e2aa0c87d5f7", "a2e8b174-5757-4a65-b71c-881cc26e7e99", "8fb1215b-9ac0-4214-bdf7-1609161bffa4", "58dcbc64-c99c-4986-bfcb-c648958162f7", "94c89fe2-819e-4e1d-bc2d-ef924adeafac", "4502ad02-043a-4aaf-9c44-5efe5745a27a" ], "title": "Architecture of vehicular edge computing" }, { "cite_extract_rate": 0.2, "cites": [ 2919, 2917, 2916, 2918, 2915 ], "content": "The three layers of a VEC architecture, namely the cloud layer, edge layer and smart vehicular layer in , are similar to the general edge computing architecture in .
Although the survey includes the three VEC layers , our survey, in order to focus on ComOf and CachDel, defines the VEC layers in more detail, where the edge cloud layer is further divided into a vehicle edge nodes layer and a stationary edge nodes layer. We then expand the three-layer architecture into a four-layer one, whereby the nodes in the edge layer are categorised into the vehicle edge nodes and stationary edge nodes layers.\nFigure \ref{fig:VEC-layer} illustrates a general four-tier VEC architecture, starting from the bottom layer, namely, smart vehicles, vehicle edge nodes, stationary edge nodes, and the top layer that is a centralised cloud. The layers are briefly explained as follows. \\\\\n\begin{figure*}[h]\n\t\center{\includegraphics[width=0.7\textwidth]\t{figures/newlayer12122019.jpg}}\n\t\caption{A general four-layer VEC architecture}\n\t\label{fig:VEC-layer}\n\end{figure*}\n\begin{itemize} \n\t\item \textbf{Smart Vehicles} \n\t\tAny type of vehicle that can be autonomous and is equipped with OBUs and a number of sensors, including Radio Detection and Ranging (RADAR), Light Detection and Ranging (LIDAR), Global Positioning System (GPS), videos and cameras. The OBU has features of computation, storage and also networking . The smart vehicles are sometimes known as service requestors , client vehicles , task vehicles and AVs in the VEC architecture. The vehicles under this layer can upload or download segmented/unsegmented data to or from nearby edge nodes. The former is often described as offloading, and the latter is known as content delivery or dissemination. \\\\\n\t\item \textbf{Vehicular edge nodes} \n\tSmart vehicles with available resources can establish a vehicular cloudlet and share their resources with the requesting smart vehicles within their coverage. The vehicle edge node can be considered to be a vehicle service provider or a service vehicle .
Note that the vehicles can be either moving or parked .\\\\\n\t\item \textbf{Stationary edge nodes}\n\tAn RSU or a typical cellular BS is connected to an MEC server for high computation and storage capability. The MEC server can be co-located at the BS or RSU . However, the cost can be reduced if a pool of RSUs can share the MEC server. The RSU is preferable to the BS for serving the vehicles due to its proximity to smart vehicles and mobile edge nodes . In case there is no RSU coverage, the BS can still provide the edge computational service.\\\\\n\t\item \textbf{Centralised cloud}\n\tA centralised cloud consists of a large number of dedicated servers with great computation and storage capability, as well as stable connections. Nevertheless, the trade-offs are a high cost of computational service and long delays.\\\\\n\end{itemize} \nIn the next section, we will describe the VEC layers that are used in the existing ComOf and CachDel schemes.", "id": "cdcc7f36-97e1-4a93-8366-281a104a993e", "level": "subsection", "origin_cites_number": 25, "parent_id": "9414e4fd-a9ee-48f2-a3f1-5ac2847d0fb0", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Architecture of vehicular edge computing" ], [ "subsection", "Overview of vehicular edge computing" ] ], "subsections": [], "title": "Overview of vehicular edge computing" }, { "cite_extract_rate": 0.1, "cites": [ 2919, 2916, 2918, 2915 ], "content": "In general, the VEC architecture may consist of a different number of tiers, namely two, three or four. Table \ref{type-layers} tabulates some examples of VEC architectures in terms of layers, names of the VEC systems, and their advantages and disadvantages.
\n\\begin{table*}[!h]\n\t\\centering\n\t\\caption{A summary of VEC architectures}\n\t\\label{type-layers}\n\t\\begin{tabular}{|C{1.5cm}|C{4cm}|L{3.5cm}|L{3.5cm}|L{3.5cm}|}\\hline\n\t\t\\textbf{Related Works} & \\textbf{Name of Layers/planes} & \\textbf{Name of System} &\\textbf{Advantages} &\\textbf{Disadvantages}\\\\ \\hline\n\t\t\t &\\begin{minipage}[t] {0.2\\textwidth} \\begin{itemize} \\item Computing domain��(vehicular cloud computing and vehicular edge computing) \\item Data domain \\end{itemize} \\end{minipage} & Big data enabled EnerGy-efficient vehIcular edge computing (BEGIN) &big data analytics &high computing platform and high cost \\\\ \\hline\n\t\t\t &\\begin{minipage}[t] {0.2\\textwidth} \\begin{itemize} \\item Data Plane\t\\item Control Plane \\end{itemize} \\end{minipage} & Software-Defined Vehicular Edge Computing (SD-VEC) & SDN controls resource management for high efficiency &high overhead for control messages \\\\ \\hline\n\t\t\t &\\begin{minipage}[t] {0.2\\textwidth} \\begin{itemize} \\item Vehicular social edge computing \\item End user \\end{itemize} \\end{minipage} & Vehicular Vocial Edge Computing (VSEC) &vehicular social edge computing layer satisfies QoE &content catching can reduce storage of vehicles\\\\ \\hline\n\t\t &\\begin{minipage}[t] {0.2\\textwidth} \\begin{itemize} \\item Edge cloud \\item Vehicles \\end{itemize} \\end{minipage} & Vehicular Edge Computing Networks (VECN) , Fog-enhanced Radio Access Networks (FeRANs) & low delay and overhead & insufficient resources for high density areas \\\\ \\hline\n\t\t &\\begin{minipage}[t] {0.2\\textwidth} \\begin{itemize} \\item Vehicles, access \\item Edge \\item Central or core cloud \\end{itemize} \\end{minipage} & Software-Defined and FC-based Vehicular NETwork (SDFC-VeNET) &additional resources or resource management controller & high overhead and long delay \\\\ \\hline\n\t\t\t &\\begin{minipage}[t] {0.2\\textwidth} \\begin{itemize} \\item Smart Pervasive Service layer (L-SPS) 
\item Dynamic resource adaption layer (L-DRA) \item Collaborative network component layer (L-CNC) \end{itemize} \end{minipage} &Smart identifier network (SINET) & consists of function-node autonomic mapping and service-function autonomic mapping & high complexity for real implementation \\ \hline
		 &\begin{minipage}[t] {0.2\textwidth} \begin{itemize} \item Cloud-enabled control layer \item Mobile edge computing layer \item Multi-access connected cloudlet layer \end{itemize} \end{minipage}	&Vehicular Edge Multi-Access Network (VE-MAN) &cloud-enabled control layer has a global view of the traffic environment and network state & the vehicular cloudlet leader might change periodically, which can interrupt the offloading and computing service \\ \hline
		 &\begin{minipage}[t] {0.2\textwidth}\begin{itemize} \item Application \item Control (i.e. SDN Controller) \item Transmission, caching and computation \item Road and vehicular plane \end{itemize} \end{minipage}		 &Software-defined vehicle networks with MEC & SDN controller & no vehicles as edge nodes\\	\hline 			 
		 &\begin{minipage}[t] {0.2\textwidth} \begin{itemize} \item Cloud \item Local Authority and VEC Servers \item Network Infrastructure (RSU and BS) \item Road and vehicular plane \end{itemize} \end{minipage} &Distributed REputAtion Management System (DREAMS) & local authority handles vehicle reputation, monitors networks, records information and updates the blacklist & high overhead with the reputation information \\ \hline
		 &\begin{minipage}[t] {0.2\textwidth} \begin{itemize} \item Requesting vehicles \item Service provider \item VEC Servers \item Parked vehicles \end{itemize} \end{minipage}	 	& Parked Vehicle Edge Computing (PVEC) & service provider divides a task into multiple subtasks, selects parked vehicles for computing and manages the reward & high overhead \\ \hline
	\end{tabular}
\end{table*}
Most ComOf works focus on the 
two primary layers, namely the vehicle and edge layers , which can be distinguished by the types of edge nodes used, as discussed in the next subsection. However, some works introduced different kinds of two-tier designs: Big data enabled EnerGy-efficient vehIcular edge computing (BEGIN) proposed a computing domain and a data domain, whilst Software-Defined Vehicular Edge Computing (SD-VEC) proposed a data plane and a control plane in their VEC architectures. The two layers can also take the form of types of computing, i.e., cloud and edge computing layers .
The third layer of the VEC architecture involves the cloud, such as a core or central cloud , cloud computing , a cloud-enabled control layer , a regional cloud , a public cloud , a cloud server layer , or a remote cloud . The key advantage of the third layer is to provide additional computational resources in the case of insufficient resources at the edge nodes. Unlike others, the VEC architecture proposed in consists of three unique layers, as seen in Table \ref{type-layers}. 
Subsequently, the application layer , local authority , city-wide controller or service provider forms the fourth and also the last layer of the VEC architecture. The entities in this layer deal with the Quality of Service (QoS), resource allocation or reward policy . However, the bottleneck is the high overhead, since control messages, for instance reporting the amount of storage offered and the computation time, may be present in the system. Regardless of the number of tiers in the VEC, a layer related to social edge computing is interestingly introduced for the scenario of vehicular social communication. 
The social relationships of vehicles are created based on their social interests and ties , their preferences in content selection , or the temporary storage of current contents or movies.
Apart from that, enabling technologies, such as big data , blockchain , Software-Defined Networking (SDN) and Network Function Virtualisation (NFV), are also incorporated in VEC systems. SDN and NFV bring new insights, as their benefits lie in the functionalities presented in Table \ref{sdn-table} and Table \ref{nfv-table}, respectively. Typically, SDN is used to separate the control plane and data plane in serving VEC, and even to separate the control planes of cloud and fog servers . Since the SDN controller stores the contextual information of vehicles , such as vehicle identification, location, speed, link, and contents, in a database, it can derive optimal offloading decisions, including vehicle-to-vehicle (V2V) paths and handover . With this information and the optimal decision, the performance of a VEC system can then be predicted . In addition, resource abstraction and slicing as well as wireless access interworking are handled by SDN. Therefore, SDN has the potential to improve the agility, reliability, scalability, and latency performance of VEC .
On the other hand, NFV primarily concentrates on the creation and configuration of virtual resources, whereby a task is variably segmented depending on the available virtual machines (VMs) . Beyond that, spectrum slicing, load balancing and power control can also be administered by NFV . Therefore, NFV offers scalable virtual resources for scheduling different tasks in a timely manner. 
The connected edge server, including the SDN/NFV controller, offers important capabilities for identifying and selecting edge nodes, allocating and migrating tasks, and giving rewards to the AVs that undertake the residual workloads .
\begin{table*}[!h]
	\centering
	\caption{Functionalities of SDN in VEC}
	\label{sdn-table}
	\begin{tabular}{|c|l|} \hline
		\textbf{Related Works} &\textbf{Functions of SDN} \\ \hline
		 &\begin{minipage}[t] {0.5\textwidth} \begin{itemize} \item abstracts and slices physical infrastructure resources into distinct virtual vehicular networks \item administers the complex control and management functionalities \end{itemize} \end{minipage} \\ \hline 
		 &\begin{minipage}[t] {0.5\textwidth} \begin{itemize} \item facilitates flexible and dynamic network management by separating control plane and data plane \item collects all the data plane information periodically \item facilitates optimal task offloading \end{itemize} \end{minipage} \\ \hline 
		 &\begin{minipage}[t] {0.5\textwidth} \begin{itemize} \item splits and handles control plane and data plane \item handles wireless access interworking \item abstracts and reallocates diverse radio spectrum resources to BSs \end{itemize} \end{minipage} \\ \hline
		 &\begin{minipage}[t] {0.5\textwidth} \begin{itemize} \item provides control functions, including radio resource management, mobility management, communication management, traffic management \item receives real-time vehicle information, such as speed, traffic density, channel state information (CSI) and queue state information (QSI) \end{itemize} \end{minipage} \\ \hline 
		 &\begin{minipage}[t] {0.5\textwidth} \begin{itemize} \item collects information from vehicles, RSUs, BSs and servers within the fog cell (information-gathering module) \item manages different wireless networks (wireless vehicles network manager module) 
\item generates control directives and forwards them to all devices in the fog cell (forward and control instructions module) \item manages the link status communications (link status management module) \end{itemize} \end{minipage} \\ \hline 
		 &\begin{minipage}[t] {0.5\textwidth} \begin{itemize} \item stores contextual information of vehicles \item computes the V2V path \item controls the switching for V2V offloading \end{itemize} \end{minipage}\\ \hline 
		 &\begin{minipage}[t] {0.5\textwidth} \begin{itemize} \item updates the currently requested storage, location and link state \item acquires and predicts the system state \item executes the algorithm and determines the optimal decision \item sends flow control information and decision policies to all edge nodes \end{itemize} \end{minipage} \\ \hline
	\end{tabular}
\end{table*}
\begin{table*}[!h]
	\centering
	\caption{Functionalities of NFV in VEC}
	\label{nfv-table}
	\begin{tabular}{|c|l|} \hline
		\textbf{Related Works} &\textbf{Functions of Network Function Virtualisation} \\ \hline
		 &\begin{minipage}[t] {0.5\textwidth} \begin{itemize} \item manages resources centrally and dynamically \item adjusts the transmit power of the wireless router \item executes spectrum slicing at edge nodes \end{itemize} \end{minipage} \\ \hline
		 &\begin{minipage}[t] {0.5\textwidth} \begin{itemize} \item divides resources into several independent VMs \item controls and adjusts the size of the task assigned to each VM and also its processing rate\end{itemize} \end{minipage} \\ \hline
		 &\begin{minipage}[t] {0.5\textwidth} \begin{itemize} \item task balancing for computation and storage between MEC servers \end{itemize} \end{minipage} \\ \hline
		 &\begin{minipage}[t] {0.5\textwidth} \begin{itemize} \item provides network virtualisation mapping based on function-group of vehicles (i.e. 
multimedia entertainment, mobile business, and location-based services) \end{itemize} \end{minipage} \\ \hline
		 &\begin{minipage}[t] {0.5\textwidth} \begin{itemize} \item creates and configures different VMs according to the quantity of data offloaded \end{itemize} \end{minipage} \\ \hline
	\end{tabular}
\end{table*}
\label{sec-fognodes}
Edge nodes are the nodes in distributed edge networks that promptly compute, store and transmit data. In general, vehicles that are over-utilised or face slow computation offload (i.e., upload) their data to nearby edge nodes. On the other hand, contents such as movies can be cached at and delivered from the edge nodes instead of being fetched directly from the central cloud. Table \ref{types-fognodes} presents the types of edge nodes employed in ComOf and CachDel for VEC. 
The edge nodes are commonly classified into the following two types:
SENs are computing nodes that are co-located at cellular BSs , RSUs , wireless access routers or other stationary infrastructure. SENs are often connected to an MEC server , VEC server , VM server , or SDN controller for managing data computation, storage and distribution in edge networks. Moreover, a pool of edge nodes can share the computational resources and communication resources of a single MEC/VEC server . Such sharing may reduce the cost of VEC implementation compared with the former approach. Nevertheless, the comparison of latency performance between the two remains an open question. To support cooperation and interoperability between fog servers, a localised coordinator server is introduced . In addition, a SEN with an MEC server has a higher-grade computing platform and higher power consumption, and is more expensive than the Vehicular Edge Nodes (VENs). 
Since the network operator provides the SENs, the cost and revenue of computational resources receive great attention when designing optimal offloading and caching mechanisms.
VENs are smart vehicles equipped with communication modules and OBUs with computing and low storage capabilities, following the concept of Vehicle as a Resource (VaaR) . As shown in Table \ref{types-fognodes}, the types of VENs considered in the offloading and caching works are cars, UAVs or buses . The mobility of fog nodes opens up new opportunities, such as on-demand computing, where moving vehicles may ubiquitously offer their available computational resources, particularly in areas without any SEN, such as rural areas. Leveraging Parked Vehicles (PVs) in parking areas, for example at airports or shopping malls, as primary VENs for computing and caching is also a promising solution for VEC. Moreover, parked vehicles integrated with a fog node controller acting as a data centre can expand the storage capacity and improve the performance of delivery services in vehicular networks. However, the key limitation is that a cluster of parked cars cannot be fully used as supplemental resources when their power is turned off. Although a VEN has a small storage capacity compared with a SEN, vehicles are likely to offer cheaper computation. 
Another significant issue is that not all VENs, including PVs, are keen to offer their computational resources to other vehicles. To address this, a VEN with a good reputation should receive a token or reward based on the acquired utility. 
\begin{table*}[h]
	\centering
	\caption{VEC edge nodes}
	\label{types-fognodes}
	\begin{tabular}{|l|l|L{8cm}|}
		\hline
		Categories & Types of edge nodes & Related Works \\ \hline
		\multirow{4}{*}{Vehicular Edge Nodes} & Vehicles & \\ \cline{2-3} 
		& Parked Vehicles & \\ \cline{2-3} 
		& Buses & \\ \cline{2-3} 
		& UAVs & \\ \hline
		\multirow{8}{*}{Stationary Edge Nodes} & BS & \\ \cline{2-3} 
		& RSU & \\ \cline{2-3} 
		& Wireless Access Router & \\ \cline{2-3} 
		& RSU with MEC/Fog server & \\ \cline{2-3} 
		& RSU with VEC server & \\ \cline{2-3} 
		& RSU with VM server & \\ \cline{2-3} 
		& BS with MEC server & \\ \cline{2-3} 
		& RSU and BS connected to MEC server & \\ \hline
	\end{tabular}
\end{table*}
Another important aspect is the term cloudlet, which has been used to represent the coverage and connection of VENs and SENs, such as a bus-based cloudlet , a vehicle-based cloudlet , a roadside cloudlet , or a cloudlet layer . Similar to the cloud, a cloudlet constitutes a group of servers that are close to the requesting user, offering moderate computation power with short communication latency . If edge nodes encounter insufficient computational resources, a central cloud computing platform is the final alternative for computation and storage, at the expense of high computational cost, high communication overhead and a long delay. 
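To make the local-versus-edge-versus-cloud trade-off discussed above concrete, the first-order latency of each option can be sketched as transmission time plus processing time. The snippet below is purely illustrative: the task size, CPU speeds, link rates and backhaul delay are hypothetical numbers, not values from any surveyed work.

```python
# Illustrative first-order latency model for a VEC offloading decision.
# All workload and resource figures below are hypothetical.

def local_latency(task_cycles, cpu_hz):
    """Time to execute the task on the vehicle's own OBU."""
    return task_cycles / cpu_hz

def offload_latency(task_bits, uplink_bps, task_cycles, node_cpu_hz,
                    backhaul_delay_s=0.0):
    """Upload time + remote execution time (+ backhaul delay for the cloud)."""
    return task_bits / uplink_bps + task_cycles / node_cpu_hz + backhaul_delay_s

# A 4-Mbit task requiring 2e9 CPU cycles (hypothetical workload).
bits, cycles = 4e6, 2e9
t_local = local_latency(cycles, cpu_hz=1e9)                 # OBU only
t_ven   = offload_latency(bits, 10e6, cycles, 4e9)          # nearby VEN
t_sen   = offload_latency(bits, 20e6, cycles, 16e9)         # RSU with MEC server
t_cloud = offload_latency(bits, 20e6, cycles, 64e9,
                          backhaul_delay_s=0.1)             # central cloud

print(f"local={t_local:.3f}s VEN={t_ven:.3f}s SEN={t_sen:.3f}s cloud={t_cloud:.3f}s")
```

Under these assumed numbers the SEN offers the lowest latency, while the cloud's faster CPU is offset by its backhaul delay, mirroring the qualitative trade-off described above.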
Next, we discuss the communication technologies used between edge nodes and requesting vehicles.
Table \ref{type-communication} summarises the underlying communication technologies from vehicles to edge nodes and the cloud, or vice versa. The communication technologies employed in VEC are broadly divided into vehicle-to-vehicle (V2V) , vehicle-to-infrastructure (V2I) or infrastructure-to-vehicle (I2V) , and infrastructure-to-infrastructure (I2I) . Specifically, the Wireless Access in Vehicular Environments (WAVE) standard, also known as IEEE 802.11p, enables vehicles to communicate with other vehicles (i.e., V2V) on the Dedicated Short-Range Communications (DSRC) frequency band . In the United States, out of the 75 MHz of spectrum allocated to DSRC applications at the 5.9 GHz frequency band, seven 10-MHz channels are assigned and 5 MHz is reserved as a guard band . Besides that, WAVE also supports multihop communication among vehicles. Apart from DSRC, a feasible fronthaul link is assumed for a vehicle to communicate with a UAV acting as a fog node . For parked vehicles in a building, wired Ethernet connections or available wireless hotspots can be used to establish the connections . 
The benefits of the wired setup are faster data upload and download speeds and secure, reliable connectivity, but the cost of VEC implementation might increase.
V2I or I2V denotes communication from a vehicle to an RSU/BS, or vice versa. The enabler for vehicle-to-RSU (V2RSU) communication can be IEEE 802.11p , IEEE 802.22 (TV whitespace) , or wireless local area networks (WLAN)/WiFi . IEEE 802.11p, often known as DSRC, is exclusively defined to support emerging Intelligent Transport System (ITS) applications in Vehicular Ad Hoc Networks (VANETs). Other wireless alternatives, such as WLAN or cellular networks, are thus likely unable to address some delay-sensitive and location-dependent ITS applications. The key difference between WLAN and DSRC is that WLAN users can only communicate with the access point after association and authentication procedures, which require several seconds. In contrast, the exchange of data in DSRC can be performed without the association and authentication procedures. In other words, the vehicles can immediately send or receive data once they switch to the communication channel, without waiting for link establishment. Meanwhile, frequent interaction between vehicles and infrastructure over cellular networks may incur high payments for VEC customers. DSRC thus appears to be the most appealing solution to support intelligent transport systems. 
Apart from the RSU, a vehicle may offload to a BS, under the category of infrastructure, via cellular networks, WiMAX , Orthogonal Frequency-Division Multiple Access (OFDMA) , Long-Term Evolution (LTE)/4G , or 5G . Due to its scalability and high efficiency, the authors in suggested using 5G throughout the VEC communication chain, from smart vehicles up to the data centres. 
I2I communication is established between infrastructures, for example RSU-BS, RSU-cloud, and BS-cloud, either via cellular networks , 5G or wired links . 
Several works have assumed that autonomous vehicles are equipped with multiple radio access network interfaces or full-duplex radios for connecting to edge nodes. Multiple communication modules in vehicles may lead to robust and reliable connectivity, particularly during handover, and are beneficial in rural areas. Nevertheless, power consumption and the practicality of heterogeneous vehicular communication pose some challenges in VEC.
\begin{table*}[h]
	\centering
	\caption{Communication technologies for VEC}
	\label{type-communication}
	\begin{tabular}{|l|l|} \hline
		\textbf{Communication protocols} &\textbf{Related Works} \\ \hline
		DSRC & \\ \hline
		Full duplex radio & \\ \hline
		IEEE 802.22/TV White space & \\ \hline
		5G & \\ \hline
		OFDMA & \\ \hline
		LTE/4G & \\ \hline
		Wireless LAN/WIFI/IEEE 802.11p & \\ \hline
		WiMAX & \\ \hline
	\end{tabular}
\end{table*}
Table~\ref{v-app} lists the three categories of vehicle applications, namely critical applications (CAs), high-priority applications (HPAs) and low-priority applications (LPAs) . In general, the vehicle manufacturer develops the CAs, which include autonomous driving and safety support applications as the primary services initiated by the vehicles. 
Examples of autonomous driving applications are electronic stability control, automatic braking and adaptive cruise control; examples of safety support applications are collision warning, signal violation warning, and emergency vehicle notification . HPAs are typically driving-related and optional safety-enhancing applications, such as high definition (HD) maps , field-of-view enhancement , location sight recognition and road conditions . The HD map encompasses the three-dimensional locations of roadmap elements (e.g., lane markings, signboards, crosswalks, barriers) .
On the other hand, LPAs are non-critical applications, such as infotainment , speech recognition , and video processing , that support a driver in instructing specific tasks. Although CAs require low bandwidth, low latency is mandatory to mitigate road accidents and deaths. On the contrary, LPA services may require high bandwidth and real-time delivery, but their QoS is not as stringent as that of CAs. Therefore, it is vital for each service to be associated with its latency requirement, quality constraints, and workload profile, either in an SDN or in a fog controller/coordinator. 
\begin{table*}[h]
	\centering
	\caption{Categories of vehicle applications}
	\label{v-app}
	\begin{tabular}{L{4cm}L{5cm}L{2cm}L{2cm}L{2cm}L{2cm}}
		\hline
		Categories of Vehicle Applications &Examples & Traffic Priority & Bandwidth &Latency \\ \hline
		Critical Applications (CAs) & autonomous driving, road-safety applications & Highest & Low &Low \\
		High-Priority Applications (HPAs) & Image aided navigation, social-based application (i.e. 
intervehicle), parking navigation system, information services, traffic management, optional safety applications & High &Low to medium & Low to medium \\
		Low-Priority Applications (LPAs) & Infotainment, multimedia, speech processing, passenger entertainment (interactive gaming) & Medium to Low &Medium to High &Real-time \\ \hline
	\end{tabular}
\end{table*}
Table~\ref{sum-app} summarises the types of traffic model attempted in the VEC offloading and downloading works. A significant number of works consider either fixed or uniformly distributed data , followed by Poisson-distributed traffic . Nevertheless, only a few works employ video , safety-sensitive messages , high definition (HD) maps , or real-time object recognition and face recognition as the traffic model.
\begin{table*}[h]
	\centering
	\caption{Types of traffic used in VEC}
	\label{sum-app}
	\begin{tabular}{|l|l|}
		\cline{1-2}
		\textbf{Traffic model} & \textbf{Related Works} \\ \hline
		Audio transcoder & \\ \hline
		Coded packet & \\ \hline
		Delay-tolerant data & \\ \hline
		Elastic services & \\ \hline
		EV calendars (i.e. UDP) & \\ \hline
		Fixed data/Uniform distribution data & \\ \hline
		HD Maps		& \\ \hline
		Inelastic services & \\ \hline
		Location sight recognition service, parking lot detection & \\ \hline 
		M/M/1 Queue & \\ \hline
		Poisson Distribution & \\ \hline
		Real content			& \\ \hline
		Real-time object recognition, Face recognition 		& \\ \hline
		Safety-sensitive packets or messages		& \\ \hline
		Socially-aware applications & \\ \hline
		Video & \\ \hline
	\end{tabular}
\end{table*}
The authors in characterised the vehicle application as an \textit{M/M/1} queue, which represents the task arrivals at the MEC server. 
However, disagreed and instead modelled the serving process at MEC servers as an \textit{M/M/c} queuing model with \textit{c} computation threads, given the Poisson distribution of task arrivals from the vehicles. The authors in suggested a \textit{G/M/c} model because vehicular tasks follow a more general arrival process rather than a Poisson distribution, which is evident in their results . 
The works in differ from others in terms of how they represent the vehicle applications. The authors in assumed coded packets to be cached at vehicles for the resource allocation optimisation problem. The researchers in dealt with the scheduling of electric vehicle energy demands via user-datagram-protocol-based calendars for both charging and discharging requests, whereas used time-series analysis, namely a seasonal auto-regressive integrated moving average model, to capture content demand patterns based on actual content request logs. On the other hand, explored fog computing under the assumption of social-based vehicular services, such as Waze, Mooveit, SocialDrive, CaravanTrack, and GeoVanet, in the Social Internet of Vehicles (SOIV) . 
Thus far, only a few efforts have investigated diverse traffic types in the offloading and caching works. While considered delay-sensitive traffic (i.e., safety-related packets) and delay-tolerant traffic (i.e., downloading HD maps), focused on HPAs (i.e., safety-related vehicular services) and LPAs. Besides that, examined the elastic and inelastic groups of services. The elastic group is typically tolerant of latency and bandwidth, and explicitly divides into two: traditional elastic services (e.g., data transmission) and interactive elastic services (e.g., online chatting). In contrast, the inelastic group, defined for delay-sensitive services, can also be divided into two: hard real-time services (e.g., Voice-over-Internet Protocol) and soft real-time services (e.g., Video on Demand or live streaming). 
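For intuition on the \textit{M/M/1} and \textit{M/M/c} assumptions discussed above, the mean response time of both queues can be computed in closed form via the Erlang-C formula. The sketch below uses hypothetical arrival and service rates, not figures from any surveyed work.

```python
import math

def mmc_response_time(lam, mu, c):
    """Mean response time of an M/M/c queue; c = 1 reduces to M/M/1."""
    rho = lam / (c * mu)          # server utilisation; must be < 1 for stability
    assert rho < 1, "unstable queue"
    a = lam / mu                  # offered load in Erlangs
    # Erlang-C probability that an arriving task has to wait
    tail = a ** c / (math.factorial(c) * (1 - rho))
    p_wait = tail / (sum(a ** k / math.factorial(k) for k in range(c)) + tail)
    return p_wait / (c * mu - lam) + 1 / mu   # mean waiting + mean service time

lam, mu = 8.0, 10.0   # hypothetical: 8 tasks/s arriving, 10 tasks/s per thread
print(mmc_response_time(lam, mu, 1))   # M/M/1: equals 1/(mu - lam) = 0.5 s
print(mmc_response_time(lam, mu, 2))   # M/M/2: markedly lower response time
```

For $c=1$ the Erlang-C expression collapses to the familiar M/M/1 result $1/(\mu-\lambda)$, which serves as a convenient sanity check.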
All these four kinds of services are considerably explored by . On the contrary, explored a hundred vehicles concurrently running three types of applications, namely audio transcoding, face recognition and parking lot detection, with the same deadline of 10 time slots but different processing density levels (i.e., 400, 2500 and 100000).
To determine which edge nodes should appropriately serve the vehicle applications, the vehicle must first prioritise its applications and serve the CAs itself so that the latency is guaranteed, unless insufficient computation arises. Meanwhile, HPAs may be offloaded to nearby VENs and SENs. The low-priority applications are likely to be computed by all edge nodes and even the core cloud platform. Note that offloading HPAs and LPAs to other edge nodes may only take place when the vehicle faces insufficient computational resources. Next, the mobility models considered in the VEC offloading and caching works are discussed.
The mobility model in VEC characterises the movements of vehicles with regard to their locations, velocities and directions over a period of time. In the mobility model, researchers also incorporate the distribution and movement of the vehicles in a specific real-world area or region. The model is essential for demonstrating the performance of VEC closer to reality. In VEC, significant works used Simulation of Urban MObility (SUMO) , followed by specific vehicle speed, vehicle acceleration and other models . 
They are detailed as follows:
SUMO is an open-source, multi-modal traffic simulator for vehicles, public transport, and pedestrians. It is a microscopic simulator in which the vehicles can be modeled explicitly on actual lanes or highways. Generally, SUMO can transform a real map, often from Open Street Map (OSM), into a simplified road topology, which is then integrated into the SUMO simulator with a specific configuration of vehicle speeds or actual vehicle movements from a trace file . The Luxembourg SUMO Traffic scenario (LuST) is employed in to emulate the real traffic of Luxembourg, divided into two regions, namely highways and urban. Highways consist of high-speed vehicles, whereas the urban region has long inter-vehicle distances, resulting in a low density of vehicles.
On the other hand, real-world maps of three different roads in China are imported into SUMO for simulations . Three different velocity models, namely a constant-velocity model, a vehicle-following model, and a travelling-time statistical model, are generated. The traffic simulator SUMO is integrated with the network simulator OMNeT++, which enables the use of real maps from OSM for the G6 Highway in Beijing . Other works also used SUMO to simulate a particular area in France with vehicle speeds of 30 km/h and 80 km/h , and Al-Ain City in the United Arab Emirates with a maximum speed of 100 km/h . 
Furthermore, content distribution works also use SUMO to generate the movement trajectories of vehicles in the areas of Ottawa, Canada and San Francisco , which are obtained from OSM.\nIn contrast, SUMO is used to simulate the mobility of the vehicles without the actual road topology, i.e., three lanes, at speeds of 60 km/h and 80 km/h .", "id": "433d7b55-dc47-4966-93fc-7e241a7a6521", "level": "subsubsection", "origin_cites_number": 12, "parent_id": "94c89fe2-819e-4e1d-bc2d-ef924adeafac", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Architecture of vehicular edge computing" ], [ "subsection", "Mobility Model" ], [ "subsubsection", "SUMO" ] ], "subsections": [], "title": "SUMO" }, { "cite_extract_rate": 0.07692307692307601, "cites": [ 2920 ], "content": "Referring to , the speed of the vehicles used is Gaussian distributed and varies from 30 km/h to 60 km/h within an urban area of 400 $km^2$ with 400 distributed BS-fogs . A speed range between 70 km/h and 150 km/h is used in\nmost of the works , as shown in Table \\ref{mobility}. In addition to that, the proposed DREAMS assumed speeds ranging from 50 to 150 km/h and accelerations ranging from 0.5 to 1.5 $m/s^2$ . Other speed ranges of the vehicles are also explored in VEC, such as 10 m/s and 20 m/s , 2 to 20 m/s , and 0 to 27.7 m/s . 
On the other hand, the data caching works investigated an average vehicle mobility of about 100 km/h and low mobility between 40 km/h and 60 km/h .", "id": "d98d50d1-cc8a-40bc-ab1d-46475b1ac5aa", "level": "subsubsection", "origin_cites_number": 13, "parent_id": "94c89fe2-819e-4e1d-bc2d-ef924adeafac", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Architecture of vehicular edge computing" ], [ "subsection", "Mobility Model" ], [ "subsubsection", "Vehicle Speed Range" ] ], "subsections": [], "title": "Vehicle Speed Range" }, { "cite_extract_rate": 0.09523809523809501, "cites": [ 2918, 2915 ], "content": "Another exciting work in used the trajectory data of all green taxi and limousine trips in New York City (NYC) for simulations. Similarly, vehicle trajectory prediction based on GPS and GIS big data is also examined in . The mobility of vehicles is modeled by discrete random jumps characterised by the contact time or sojourn time and the transmission frequency . It is assumed that the vehicle connects to the same vehicle edge node and RSU within the contact time. To evaluate the impact of vehicle mobility on the resources, stochastic geometry is applied to model the random vehicular networks, and the locations of nodes are generated by a Poisson point process (PPP) . Another work used the Manhattan Grid as the vehicle mobility model in the proposed software-defined vehicular networks . 
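As a toy illustration of how such synthetic mobility inputs are typically generated, the sketch below draws vehicle positions from a homogeneous Poisson point process over a rectangular region and speeds from a Gaussian distribution clipped to a plausible range; the area size, intensity and speed parameters are arbitrary example values, not figures from the cited works.

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's method for a Poisson random variate."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= limit:
            return k - 1

def ppp_positions(intensity_per_km2, width_km, height_km, rng):
    """Homogeneous PPP: Poisson-distributed count, uniform locations."""
    n = poisson_sample(intensity_per_km2 * width_km * height_km, rng)
    return [(rng.uniform(0, width_km), rng.uniform(0, height_km)) for _ in range(n)]

def gaussian_speed(mean_kmh, std_kmh, lo_kmh, hi_kmh, rng):
    """Gaussian-distributed speed clipped to [lo_kmh, hi_kmh]."""
    return min(hi_kmh, max(lo_kmh, rng.gauss(mean_kmh, std_kmh)))

rng = random.Random(42)
vehicles = [(x, y, gaussian_speed(45, 10, 30, 60, rng))
            for x, y in ppp_positions(0.5, 20, 20, rng)]
print(len(vehicles), "vehicles placed")
```

In a full simulation the same seed would then drive channel and contact-time models, so that offloading results remain reproducible across runs.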
\n\\begin{table*}[!htb]\n\t\\centering\n\t\\caption{Mobility model or speed used in data offloading, caching and dissemination for VEC}\n\t\\label{mobility} \n\t\\begin{tabular}{|L{4cm}|L{7cm}|}\n\t\t\\hline\n\t\t\\textbf{Mobility model/speed} &\\textbf{Related Works} \\\\ \\hline\n\t\tSUMO & Luxembourg, China , France , UAE , Canada , San Francisco \\\\ \\hline\n\t\tSpeed ranges & \t50 to 150 km/h , 50 to 120 km/h , 80 km/h to 150 km/h , 80 km/h to 140 km/h , 80 km/h to 120 km/h , 70 km/h to 130 km/h , 10 m/s and 20 m/s , 2-20 m/s , 0 to 27.7 m/s , 0.5 to 1.5 $m/s^2$ \\\\ \\hline\n\t\tManhattan Grid Model & \\\\ \\hline\n\t\tVehicle trajectory prediction model & \\\\ \\hline\n\t\tDiscrete random jumps & \\\\ \\hline\n\t\tStochastic geometry and Poisson & \\\\ \\hline\n\t\\end{tabular}\n\\end{table*}", "id": "d3dc5ff0-ffc6-49b5-9640-df305a1168be", "level": "subsubsection", "origin_cites_number": 21, "parent_id": "94c89fe2-819e-4e1d-bc2d-ef924adeafac", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Architecture of vehicular edge computing" ], [ "subsection", "Mobility Model" ], [ "subsubsection", "Miscellaneous" ] ], "subsections": [], "title": "Miscellaneous" }, { "cite_extract_rate": 0, "cites": [], "content": "The motivation for adopting edge computing in vehicular networks is primarily to solve the latency issue, as the edge nodes are in closer proximity to the vehicles than the central cloud. The problem becomes worse when an autonomous vehicle runs many applications yet has a limited storage capacity. In VEC, a decision on which edge nodes compute which task for which vehicle is critical to meeting a low latency. 
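As a toy illustration of such placement decisions, the sketch below encodes the priority rule described in the Vehicle Applications subsection (CPAs computed locally, HPAs preferring a nearby VEN/SEN, LPAs allowed anywhere including the core cloud); the function name, capacity threshold and node labels are illustrative assumptions, not a cited scheme.

```python
def place_task(app_class, local_cpu_free, nearby_edges, cloud_available=True):
    """Toy placement rule for vehicle applications by priority class.

    app_class: 'CPA' (critical), 'HPA' (high priority) or 'LPA' (low priority).
    local_cpu_free: fraction of the vehicle's own CPU still available.
    nearby_edges: list of reachable edge-node names (VENs/SENs).
    Returns the chosen execution site.
    """
    if app_class == 'CPA':
        # Critical tasks stay on the vehicle unless it is nearly out of capacity.
        if local_cpu_free > 0.1:
            return 'local'
        return nearby_edges[0] if nearby_edges else 'local'
    if app_class == 'HPA':
        # High-priority tasks prefer a nearby VEN/SEN, falling back to local.
        return nearby_edges[0] if nearby_edges else 'local'
    # Low-priority tasks may go anywhere, including the core cloud.
    if nearby_edges:
        return nearby_edges[0]
    return 'cloud' if cloud_available else 'local'

print(place_task('CPA', 0.5, ['SEN-1']))   # local
print(place_task('HPA', 0.5, ['VEN-2']))   # VEN-2
print(place_task('LPA', 0.5, []))          # cloud
```

Real schemes replace the first-available choice of `nearby_edges[0]` with a selection over workload, transmission rate and cost, which is exactly the optimisation problem surveyed in the following sections.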
In addition to that, a high-density vehicular network poses significant challenges to the computation and storage resources of geo-distributed edge nodes, and thus optimum resource allocation is essential.", "id": "4502ad02-043a-4aaf-9c44-5efe5745a27a", "level": "subsection", "origin_cites_number": 0, "parent_id": "9414e4fd-a9ee-48f2-a3f1-5ac2847d0fb0", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Architecture of vehicular edge computing" ], [ "subsection", "Computation Offloading and Content Delivery Issues" ] ], "subsections": [ "6b9f5c91-9345-47e9-a549-8c7a5023f9a9", "9b0a19da-5751-48b1-b6a4-8f4dc8736d96", "be5db7e5-1fa1-42b8-b64b-0a193cdf9b4e" ], "title": "Computation Offloading and Content Delivery Issues" }, { "cite_extract_rate": 0.5, "cites": [ 2921, 2922 ], "content": "In general, MEC in non-vehicular networks, e.g., mobile cellular networks, is served by MEC servers with several deployment options, such as co-located with the BS , at the radio network controllers , or farther from the UE at the core network . On the other hand, considerable VEC offloading works assumed deploying the server at the RSU (see Table \\ref{types-fognodes}) besides the BS. Another primary difference is that various kinds of vehicles with specific mobility are used as edge nodes instead of mobile devices, which gives rise to fast fading channels that affect the VEC performance. In terms of the communication protocol, DSRC is mostly used for V2I communication in the case of RSUs as edge nodes. Nevertheless, other cellular networks, as highlighted previously, are also evaluated in the VEC works. In VEC, the requesting vehicles are surrounded by several edge nodes (i.e., SEN and VEN) for the offloading decision. Identifying a reliable edge node for consistent connectivity is a big challenge because of the vehicle speed. 
Therefore, the edge nodes are currently characterised and selected in terms of available workload, central processing unit (CPU) processing rate, energy consumption, radio transmission rate, offloading cost, security and reputation. \nSince the requesting vehicle carries tempo-spatial data, unlike in mobile cellular networks, another concern is how to optimally execute the offloading, including whether the data from the vehicle should be segmented or not. Furthermore, optimum task allocation over the computational resources of edge nodes is essential to satisfy the stringent latency. Under the assumption of vehicles with multiple communication interfaces, the decision on which radio access network to use is also an important problem, but thus far only limited work has highlighted this. From the perspective of VENs, underutilised ones can inevitably serve many vehicles within their boundaries; to a certain extent, a high density of vehicles, particularly at road segments or junctions, can substantially serve as edge nodes, yet probably also has a high offloading requirement. Small segments within a region, or small cloudlets, may reduce the offloading complexity. However, this would lead to the issues of VEC overlapping regions and interference, called interoffloading and intraoffloading .", "id": "6b9f5c91-9345-47e9-a549-8c7a5023f9a9", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "4502ad02-043a-4aaf-9c44-5efe5745a27a", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Architecture of vehicular edge computing" ], [ "subsection", "Computation Offloading and Content Delivery Issues" ], [ "subsubsection", "Data computation offloading" ] ], "subsections": [], "title": "Data computation offloading" }, { "cite_extract_rate": 0, "cites": [], "content": "Content caching at multiple locations near the users has been widely used in wireless networks. 
It is beneficial in reducing the content access delay while at the same time improving the content hit ratio, response time and network delivery rate. Caching popular content at small base stations, and even at the UE, can be exploited to mitigate the burden on the backhaul and the high-cost transmission from the macro base station. In contrast, the contents in VEC are time-varying, location-dependent and delay-constrained, such as automated driving services . In other words, the popular contents in VEC are likely short term with regard to location. Therefore, the service content must be\ncompletely fetched within a few VEC cloudlets, or else the quality of the service will deteriorate. It is challenging for edge nodes to optimize the data transmitted through wireless networks while satisfying the content deadlines, due to unbalanced traffic and different densities of vehicular networks. Another issue concerns the content placement policy for choosing the optimum edge nodes for caching, which leads to a high cache hit ratio.\nBoth the dynamic topology of vehicular networks and the spatial distribution of data chunks have a great impact on content dissemination speed . The underlying V2V and V2I wireless communication is important for determining an efficient content distribution. The edge nodes may use the contextual information of the vehicles within their coverage to schedule the contents to the requesting vehicles. 
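To make the cache-hit-ratio notion concrete, the sketch below caches the k most popular contents at an edge node and computes the resulting hit ratio under a Zipf-like request popularity, a standard modelling assumption in caching studies; the content count, cache size and skew parameter are illustrative values, not a placement policy from the surveyed works.

```python
def zipf_popularity(n_contents, alpha=0.8):
    """Normalised Zipf request probabilities for contents ranked 1..n."""
    weights = [1.0 / (rank ** alpha) for rank in range(1, n_contents + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def cache_hit_ratio(n_contents, cache_size, alpha=0.8):
    """Hit ratio when the edge node caches the cache_size most popular items."""
    probs = zipf_popularity(n_contents, alpha)
    return sum(probs[:cache_size])

# Caching the top 100 of 1000 contents already captures a large share of requests.
print(round(cache_hit_ratio(1000, 100), 3))
```

In VEC the popularity vector itself is short-lived and location-dependent, which is why the static top-k policy above is only a baseline against which the surveyed time- and location-aware schemes improve.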
\n\\begin{figure*}[]\n\t\\center{\\includegraphics[width=0.7\\textwidth]\t{figures/Handover.jpg}}\n\t\\caption{Interoffloading and Intraoffloading scenario}\n\t\\label{fig:interoffload}\n\\end{figure*}", "id": "9b0a19da-5751-48b1-b6a4-8f4dc8736d96", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "4502ad02-043a-4aaf-9c44-5efe5745a27a", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Architecture of vehicular edge computing" ], [ "subsection", "Computation Offloading and Content Delivery Issues" ], [ "subsubsection", "Content Caching and Delivery" ] ], "subsections": [], "title": "Content Caching and Delivery" }, { "cite_extract_rate": 0, "cites": [], "content": "Mobility is critical when the vehicle migrates from one fog cloudlet to another while data offloading or downloading is still ongoing. As depicted in Figure \\ref{fig:interoffload}, this type of case is referred to as intraoffloading/intradownloading, in which the RSUs are connected to the same MEC. The issue becomes substantial when the vehicle hands over to an adjacent BS, as shown in the figure for the interoffloading case. The cell dwell time (i.e., cell residence time) of vehicles varies and is affected by several factors, such as road capacity and conditions, speed limits, traffic lights and so forth. As highlighted previously, the unbalanced traffic and varying cell dwell times lead to a resource utilisation issue at the fog nodes that deserves an optimal strategy. 
The moving vehicles also suffer from service interruptions due to the fast fading channel and to the varying availability of fog nodes for computation in a particular area.", "id": "be5db7e5-1fa1-42b8-b64b-0a193cdf9b4e", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "4502ad02-043a-4aaf-9c44-5efe5745a27a", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Architecture of vehicular edge computing" ], [ "subsection", "Computation Offloading and Content Delivery Issues" ], [ "subsubsection", "Service Handover" ] ], "subsections": [], "title": "Service Handover" }, { "cite_extract_rate": 0.12244897959183601, "cites": [ 2919, 2917, 7118, 2916, 2918, 2915 ], "content": "\\label{sec:ComOf}\n\\begin{figure*}[]\n\t\\center{\\includegraphics[width=0.6\\textwidth]\t{figures/OFFLOADING.jpg}}\n\t\\caption{Data Computation offloading scenario}\n\t\\label{fig:offload}\n\\end{figure*}\nThe primary reason for adopting edge computing in VEC is to solve the latency issue, as the edge nodes are in proximity to the requesting vehicles. Figure \\ref{fig:offload} illustrates the data computation offloading scenario for non-highly-dense and highly dense networks. Typically, the vehicles send the data to the core cloud, which results in a significant delay. Considering several constraints such as the deadline, quality loss, transmission rate, energy efficiency, and fog capacity, the process of task allocation across VENs and SENs is formulated as an optimization problem, which is known to be a non-deterministic polynomial-time hard (NP-hard) problem or a mixed integer nonlinear program (MINLP) . The latency can be calculated based on the transmission time, including the round trip time between the requesting vehicle and the edge node, the queuing and processing times at the edge node, and also the handover time for the inter-offloading scenario . 
The existing optimisation schemes in VEC broadly emphasised several elements, such as QoS, energy, monetary, and security. Table \\ref{rm-summary} summarises the characteristics of the existing offloading optimisation schemes in VEC. In this survey, the schemes are broadly divided into three groups, which are single objective optimisation, joint optimisation, and multi-objective optimisation. The three groups are discussed as follows:\n\\begin{table*}[]\n\t\\centering\n\t\\caption{Summary of resource management characteristics in Vehicular Edge Computing}\n\t\\label{rm-summary}\n\t\\begin{tabular}{|C{2cm}|C{1.5cm}|C{1.5cm}|C{1.5cm}|C{1cm}|C{1cm}|C{1cm}|C{1cm}|C{1cm}|C{1cm}|C{2cm}|C{1cm}|C{2cm}|}\n\t\t\\hline\n\t\t\\textbf{References} &\\textbf{Mobility} &\\textbf{QoS-Transmission Rate} &\\textbf{QoS -Acceptance Ratio or ratio of job offloaded} & \\textbf{QoS- Dateline} & \\textbf{Reward System} & \\textbf{Security} & \\textbf{Energy Efficiency} & \\textbf{Revenue} & \\textbf{Cost} \\\\ \\hline\n\t\t\t\t\t\t& \\checkmark\t\t& \t & \t & \\checkmark\t & & \t\t& \t\t& \t\t& \\checkmark \\\\ \\hline\n\t\t \t\t\t& \\checkmark & & & \\checkmark & & \\checkmark & & \\checkmark & \\checkmark \\\\ \\hline\n\t\t \t\t\t& \\checkmark & & & \\checkmark & & & \\checkmark & \\checkmark & \\checkmark \\\\ \\hline\n\t\t \t\t\t& & & & & & \\checkmark & & & \\checkmark \\\\ \\hline\n\t\t \t\t\t& & & & & & & & & \\checkmark \\\\ \\hline\n\t\t \t\t\t& \\checkmark & & & \\checkmark & & & & & \\\\ \\hline\n\t\t\t\t\t& \\checkmark & & & \\checkmark & \\checkmark & & & & \\checkmark \\\\ \\hline\n\t\t & \\checkmark & & \\checkmark & \\checkmark & & & & & X \\\\ \\hline\n\t\t & \\checkmark & & & \\checkmark & & & & & \\checkmark \\\\ \\hline\n\t\t & \\checkmark & & \\checkmark & \\checkmark & & & \\checkmark & \\checkmark &\\checkmark \\\\ \\hline\n\t\t & \\checkmark & & & \\checkmark & & & & & \\\\ \\hline\n\t\t & & & & \\checkmark & & & & & \\\\ \\hline\n\t\t & \\checkmark & & & 
\\checkmark & \\checkmark & & & \\checkmark & \\checkmark \\\\ \\hline\n\t\t & \\checkmark & & & \\checkmark & & \\checkmark & & & \\\\ \\hline\n\t\t & \\checkmark & & \\checkmark & \\checkmark & & & \\checkmark & & \\\\ \\hline\n\t\t & \\checkmark & & & \\checkmark & & & & & \\\\ \\hline \n\t\t & \\checkmark & & \\checkmark & \\checkmark & \\checkmark & & & & \\\\ \\hline\n\t\t & \\checkmark & & \\checkmark & & \\checkmark & & & & \\checkmark \\\\ \\hline\n\t\t & \\checkmark & & \\checkmark & & & & & \\checkmark & \\checkmark \\\\ \\hline\n\t\t & \\checkmark & & \\checkmark & & & & \\checkmark & & \\\\ \\hline\n\t\t &\t&\t &\\checkmark\t &\t &\t &\t &\t &\t &\\\\ \\hline\n\t\t &\t&\t &\t &\\checkmark\t &\t &\t &\t &\t &\\\\ \\hline\n\t\t &\\checkmark\t&\t &\t &\\checkmark\t &\\checkmark\t &\t &\t &\t &\\\\ \\hline\n\t\t &\\checkmark\t&\\checkmark\t &\\checkmark\t &\\checkmark\t &\t &\t &\\checkmark\t &\t &\\\\ \\hline\n\t\t &\t&\t &\t &\\checkmark\t & &\t &\t &\t &\\\\ \\hline\n\t\t &\\checkmark &\\checkmark & & & & &\\checkmark & & \\\\ \\hline\n\t\t &\\checkmark & & &\\checkmark& & &\\checkmark & & \\\\ \\hline\n\t\t & & & &\\checkmark &\\checkmark & &\\checkmark & & \\\\ \\hline\n\t\t &\\checkmark & & &\\checkmark & & & & & \\\\ \\hline\n\t\t &\\checkmark & & &\\checkmark & & & & & \\\\ \\hline\n\t\t &\\checkmark & & &\\checkmark & & & & & \\\\ \\hline\n\t\t &\\checkmark & & &\\checkmark & & & & & \\\\ \\hline\n\t\t & & & &\\checkmark & & & & & \\\\ \\hline\n\t\t &\\checkmark & & &\\checkmark & & & & &\\checkmark \\\\ \\hline\n\t\t &\\checkmark & & &\\checkmark &\\checkmark & &\\checkmark &\\checkmark &\\checkmark \\\\ \\hline\n\t\t &\\checkmark & & &\\checkmark & & & & &\\checkmark \\\\ \\hline\n\t\t &\\checkmark & & &\\checkmark & & & & & \\\\ \\hline\n\t\t & & & &\\checkmark & & &\\checkmark & & \\\\ \\hline\n\t\t &\\checkmark & & &\\checkmark & & &\\checkmark & & \\\\ \\hline\n\t\t &\\checkmark & & & & & & & & \\\\ \\hline\n\t\t &\\checkmark & &\\checkmark 
&\\checkmark & & & & & \\\\ \\hline\n\t\t &\\checkmark & & &\\checkmark & & & & & \\\\ \\hline\n\t\t &\\checkmark & &\\checkmark &\\checkmark & & & & & \\\\ \\hline\n\t\t & & & &\\checkmark & & &\\checkmark &\\checkmark &\\checkmark \\\\ \\hline\n\t\t & & & &\\checkmark & & &\\checkmark & & \\\\ \\hline\n\t\t &\\checkmark & &\\checkmark & & & & & &\\checkmark \\\\ \\hline\n\t\\end{tabular}\n\\end{table*}", "id": "e68aeb1a-f5ba-42ba-a8c4-a6e6091ace7b", "level": "section", "origin_cites_number": 49, "parent_id": "73e1092a-6f4c-4f17-8375-efb82b86042e", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Data Computation Offloading" ] ], "subsections": [ "288edc35-fe23-4d8e-92fb-034cb49c2b69", "3279bcb8-6544-4c00-b33f-bb7add033abe", "42ce72f8-993b-4588-b1fa-4157947f7dcb" ], "title": "Data Computation Offloading" }, { "cite_extract_rate": 0, "cites": [], "content": "A single optimisation (SO) is defined for the computation offloading technique that maximises or minimises a single type of parameter, for instance, latency, energy or cost. Tables \\ref{so} summarises the existing SO schemes in VEC and their advantages and disadvantages. 
The SO schemes are explained below based on their key parameters.", "id": "288edc35-fe23-4d8e-92fb-034cb49c2b69", "level": "subsection", "origin_cites_number": 0, "parent_id": "e68aeb1a-f5ba-42ba-a8c4-a6e6091ace7b", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Data Computation Offloading" ], [ "subsection", "Single Optimisation Approaches" ] ], "subsections": [ "c86573d9-1582-4efa-a8c0-f67f2040c0e1", "06cd5acd-e061-4242-87e2-692c156f82bb", "54a390e1-db9c-4110-a967-5b0635b9d907" ], "title": "Single Optimisation Approaches" }, { "cite_extract_rate": 0.2, "cites": [ 2916, 2915 ], "content": "Significant works formulated the optimisation problem for task offloading with regard to the minimisation of latency , followed by message response time , message overhead , and the maximisation of routing and path lifetime . \nA two-stage collaborative task offloading scheme that comprises vehicular function partition and task assignment stages is proposed . Based on task similarity and task computing, the former classified the vehicles into two kinds of sub-cloudlets, namely a task offloading sub-cloudlet and a task computing sub-cloudlet. The latter stage then used graph theory and a maximum weight independent set for a two-sided matching process between the requesting vehicles and edge nodes, minimising the service latency and balancing the load of heterogeneous edge nodes. However, the results demonstrated high latency when there is a small number of resource-rich vehicles in the network. Meanwhile, suggested an adaptive volatile upper confidence bound algorithm with load-awareness and occurrence-awareness, where a utility function based on the classic multi-armed bandit is designed for V2V. The work is then extended by presenting an adaptive learning-based task offloading algorithm to minimise the average offloading delay . 
The algorithm improved the average delay by between 10\\% and 13\\% compared with the former algorithm, but the scenario involves only a single requesting vehicle. Another delay optimisation work presented pricing-based one-to-one matching and one-to-many matching algorithms for task offloading, primarily at an RSU, subject to minimising the total network delay . The advantage is that the algorithm is validated on three different road conditions, which are a straight road, an urban road with a traffic light, and a winding road, extracted from the realistic road topologies of Beijing and Guangdong, China. Interestingly, the lowest average delay is achieved when the offloading takes place on the straight road. \nOn the other hand, minimised the response time for each time slot using steps based on branch-and-bound and Edmonds-Karp algorithms. The message response time is the summation of the response time of the cloudlet, the response time of parked vehicles and the response time of moving vehicles, as well as the delay caused by incoming messages . The key advantage of the work is that the offloading is performed at both moving and parked vehicles. The findings demonstrated that the average response time remained below 1 s as message arrival rates in Shanghai increased, and dropped considerably as the total number of parked vehicle-based edge nodes rises. Similarly, devised the task scheduling optimisation problem by minimising the average response time of computation based on a modified genetic algorithm where integer coding was used. One major drawback of both works is that no mobility analysis is conducted. In contrast, devised the computation offloading problem subject to minimising the computation overhead based on game theory, i.e., a strategic game, and was able to achieve a Nash Equilibrium. 
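Stepping back, the multi-armed-bandit view of edge-node selection used in the adaptive UCB works above can be illustrated with a plain UCB1 learner that treats each candidate node as an arm and converts observed delay into a reward in [0, 1]. This is a minimal generic sketch, not the adaptive volatile UCB algorithm of the cited work; the node names and delay values are invented.

```python
import math
import random

class UCB1NodeSelector:
    """Minimal UCB1 learner over candidate edge nodes (arms)."""
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.counts = {n: 0 for n in self.nodes}
        self.totals = {n: 0.0 for n in self.nodes}
        self.t = 0

    def select(self):
        self.t += 1
        for n in self.nodes:               # play every arm once first
            if self.counts[n] == 0:
                return n
        def ucb(n):
            mean = self.totals[n] / self.counts[n]
            return mean + math.sqrt(2.0 * math.log(self.t) / self.counts[n])
        return max(self.nodes, key=ucb)

    def update(self, node, delay_s):
        self.counts[node] += 1
        self.totals[node] += max(0.0, 1.0 - delay_s)  # lower delay, higher reward

# Toy environment: true mean offloading delays (s) per candidate node.
rng = random.Random(0)
true_delay = {"VEN-1": 0.05, "SEN-1": 0.50, "RSU-1": 0.90}
sel = UCB1NodeSelector(true_delay)
for _ in range(500):
    node = sel.select()
    sel.update(node, max(0.0, rng.gauss(true_delay[node], 0.05)))
print(max(sel.counts, key=sel.counts.get))  # the learner concentrates on the fastest node
```

The exploration bonus is what handles the "volatile" aspect only crudely here; the cited works additionally reset or discount statistics as vehicles appear and disappear.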
The offloading problem can also be solved using the total utility of jobs completed at an instantaneous time, and an Autonomous Vehicular Edge framework is proposed that comprises four main phases, namely job caching, discovery, ant-colony optimisation-based scheduling and data transmission. The limitation of the work is that high computation is required.\nCompared with others, the computation offloading problem can also be solved based on the longest lifetime of the V2V routing path and the recovery of a broken V2V path, using LifeTime-based Network State Routing and LifeTime-based Path Recovery algorithms, respectively . The results demonstrated that the average lifetime of V2V paths increased with low-mobility vehicles due to stable connections. The works in might be more persuasive if the authors considered QoS and mobility.", "id": "c86573d9-1582-4efa-a8c0-f67f2040c0e1", "level": "subsubsection", "origin_cites_number": 10, "parent_id": "288edc35-fe23-4d8e-92fb-034cb49c2b69", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Data Computation Offloading" ], [ "subsection", "Single Optimisation Approaches" ], [ "subsubsection", "QoS Improvement" ] ], "subsections": [], "title": "QoS Improvement" }, { "cite_extract_rate": 0, "cites": [], "content": " explored the offloading optimisation problem from the same ground of minimising the sum of energy consumption for local task execution and for offloading tasks from the mobile device (MD). Despite that, the works are distinctive in terms of the type of edge nodes used, which are a bus-based cloudlet and a vehicle-based cloudlet . A vehicle-based cloudlet relaying scheme allows inter-cloudlet offloading, or falls back to the MD if there is no cloudlet available. Because the offloading rate performance increases with the number of buses in the cloudlet as well as with the number of cloudlets , the completion time is found to improve in both cases. 
The key advantage of is that multi-cloudlets and high mobility are considered, yet the computation of the application completion time is not properly discussed. Another technique, called Energy Consumption Minimisation (ECM) and Resource Reservation (RR) assignment for the UE, is proposed based on dynamic programming . Simulation results demonstrated that the RR assignment could achieve near-optimal performance with proper parameter settings, and the tasks were offloaded to multiple serving vehicles via multi-radio access, which is beneficial for energy saving.", "id": "06cd5acd-e061-4242-87e2-692c156f82bb", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "288edc35-fe23-4d8e-92fb-034cb49c2b69", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Data Computation Offloading" ], [ "subsection", "Single Optimisation Approaches" ], [ "subsubsection", "Energy Efficiency" ] ], "subsections": [], "title": "Energy Efficiency" }, { "cite_extract_rate": 0.11111111111111101, "cites": [ 2917, 2916, 2915 ], "content": "The monetary aspect involves the price of a unit of computing resource , the price of VEC offloading time , the price of energy , the price of transmission bandwidth or link , the incentive or reward paid to the VEN and the revenue generated by the service provider . In general, monetary-based optimisation problems significantly adopted game theory, such as the Stackelberg game , a contract-theoretic approach , and auctions . 
Only a few works presented offloading optimisation problems based on machine learning, i.e., deep reinforcement learning .\n\\begin{figure*}[htp]\n\t\\center{\\includegraphics[width=0.7\\textwidth]\t{figures/offloadingv2.jpg}}\n\t\\caption{A Predictive Combination-mode Offloading mechanism }\n\t\\label{fig:offloadv2}\n\\end{figure*}\nIn an optimal Predictive Combination-mode Offloading mechanism , the vehicle offloads the task file to the MEC server ahead via a number of vehicle hops, as illustrated in Figure \\ref{fig:offloadv2}. The strength of the proposed scheme is that the transmission mode adapts to vehicle speeds; however, the vehicle hops may cause some delay. The results demonstrated that the mobile-edge computing servers reduced the cost of computational resources and transmission, besides achieving a fast response time, compared with direct vehicle-to-infrastructure (V2I), i.e., an RSU with a multihop backhaul RSU relay. \nConsidering the minimisation of the predicted cost of the load and also the load distribution among MEC servers, a load-aware MEC offloading method called Load-Aware Mode is proposed . It is seen that the proposed scheme can reduce the total cost by up to 65\\% and achieve approximately 100\\% task success ratio. A mobility-aware and a location-based task offloading are introduced in based on Newton-Raphson methods, minimising the completed system cost, which includes the communication and computation costs of the requesting vehicle, while satisfying the task latency. Numerical results showed that the proposed schemes can reduce the system costs by approximately 50\\% compared with other techniques, and the latency constraint was satisfied. However, both works overlook the types of vehicle applications that could be investigated.\nExploring a machine learning technique, specifically deep reinforcement learning, can solve the offloading problem by minimising the expected cumulative discounted reward (i.e., task execution delay) for multi-task services. 
The advantage of the work is that dynamic VEC environment is studied, but the results are not explicitly discussed.\n\\begin{table*}[!htb]\n\t\\centering\n\t\\caption{Single optimisation schemes in VEC}\n\t\\label{so}\n\t\\begin{tabular}{|C{1.5cm}|C{2cm}|L{3cm}|L{3cm}|L{3cm}|L{3cm}|}\n\t\t\\hline\n\t\t{\\bf Related Works}\t&{\\bf Optimisation Type} &{\\bf Optimisation Utility} &{\\bf Optimsiation Techniques} &{\\bf Advantages} &{\\bf Disadvatages} \\\\ \\hline\n\t\t &Optimal &Minimise service latency &two-sided matching algorithm &Load balancing & High latency when small number of resource-rich vehicles\\\\ \\hline\n\t\t &Suboptimal &Minimise average offloading delay &multi-armed bandit (MAB) & Input data size and occurrence of vehicles alert & Only a single requesting vehicle \\\\ \\hline \t \t\n\t\t &Optimal &Minimise total offloading delay &One-to-one and one-to-many matching & Three different mobility model and road conditions &No partial offloading at local vehicle \\\\ \\hline\n\t\t &Optimal & Minimise computation overhead & Game theory (i.e., strategic game) &Computation overhead & No QoS delay and mobility \\\\ \\hline\n\t\t & Near-optimal & Maximise total utility of jobs completed & Ant Colony Optimisation (ACO) and a brute-force method & Successful offloading &High computation time \\\\ \\hline\n\t\t &Optimal &Maximum lifetime-based routing and path recovery & path routing &V2V path broken (e.g. 
vehicle run away) &No latency analysis and mobility \\\\ \\hline\t \t\n\t\t &Optimal &Minimise average response time of computing &Modified genetic algorithm and statistical priority & Mutual dependent tasks & No mobility analysis\\\\ \\hline\n\t\t &Optimal &Minimise message response time & brand-and-bound algorithm, Edmond Karps-algorithm & Offloading to both moving and parked vehicles & No mobility analysis \\\\ \\hline \t\n\t\t\\multirow{3}{3cm}{} & \\multirow{3}{*}{Optimal} & \\multirow{3}{3cm}{Minimise energy consumption for offloading } & Sequential task graph \\\\ \\cline{4-4} \n\t\t& & & Exhaustive Method \\\\ \\cline{4-4} \n\t\t&\n\t\t& & Semi-Markov Decision Process (SMDP) and value iteration algorithm & Multi-cloudlet and high mobility & Application completion time is uncertain \\\\ \\hline \n\t\t &Optimal &Minimise user's energy consumption for offloading &Dynamic programming & Vehicle fog node serves multiple tasks or vehicles & No specific applications and mobility \\\\ \\hline\n\t\t &optimal &Minimise the cost of offloading & Load-aware mode (LAM) &Multi vehicular MEC networks &No specific application and delay performance \\\\ \\hline\n\t\t &Suboptimal &Minimise system costs (communication and computation) &Convex problem solved using Newton-Raphson &Mobility and cooperation between MEC servers &No specific applications\\\\ \\hline\n\t\t & Optimal & Minimise offloading cost of both data transmission and task computation resources & Game theory with one mixed strategies & Local and fog computing, adaptive transmission mode & Number of offloading hops may cause significant delay \\\\ \\hline\n\t\t &Optimal & Minimise expected cumulative discounted reward & Deep reinforcement learning & Dynamic VEC environment state &No mobility\\\\ \\hline \t\n\t\\end{tabular}\n\\end{table*}", "id": "54a390e1-db9c-4110-a967-5b0635b9d907", "level": "subsubsection", "origin_cites_number": 27, "parent_id": "288edc35-fe23-4d8e-92fb-034cb49c2b69", "prefix_titles": [ [ 
"title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Data Computation Offloading" ], [ "subsection", "Single Optimisation Approaches" ], [ "subsubsection", "Prices Optimisation" ] ], "subsections": [], "title": "Prices Optimisation" }, { "cite_extract_rate": 0, "cites": [], "content": "Hybrid optimisation schemes are classified as optimisation techniques that propose a combination of several mechanisms or parameters, such as joint offloading ratio and resource allocation, joint QoS and energy efficiency, and joint security with QoS. In contrast to single optimisation, joint optimisation is often designed based on the summation of a number of parameters.", "id": "3279bcb8-6544-4c00-b33f-bb7add033abe", "level": "subsection", "origin_cites_number": 0, "parent_id": "e68aeb1a-f5ba-42ba-a8c4-a6e6091ace7b", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Data Computation Offloading" ], [ "subsection", "Hybrid Optimisation Approaches" ] ], "subsections": [ "b44392ea-aa83-444e-a2ab-bb3d47bc1b45", "739ae665-c748-490f-bf8e-f2a3e5307d49", "7a9bf2f1-72d5-40d2-805c-2634483eddfb", "1c7c873f-3e5d-44ef-aea8-5162674e14d2" ], "title": "Hybrid Optimisation Approaches" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 2919, 7118 ], "content": "\\begin{figure*}[htp]\n\t\\center{\\includegraphics[width=0.7\\textwidth]\t{figures/offloadingv3.jpg}}\n\t\\caption{System model used in Joint Optimisation for Selection, Computation and Offloading algorithm }\n\t\\label{fig:offloadv3}\n\\end{figure*}\nThe authors in proposed a joint optimisation for selection, computation and offloading algorithm (JSCO) to maximise the QoS-based system utility. The system model consists of multiple VEC servers and multiple vehicles, as illustrated in Figure \\ref{fig:offloadv3}. 
The problem is solved using a relaxation-and-rounding method and Lagrange multipliers. The proposed JSCO demonstrated fair allocation of VEC servers to the requesting vehicles, besides optimum computation resource and offloading ratio. Similarly, presented a joint offloading proportion and resource allocation optimisation (JOPRAO) for all vehicles to minimise the task completion time. The key difference between the two works lies in the offloading ratio: introduced the amount of data offloaded to the VEC server with the remainder computed locally, while offloaded the sub-task between a vehicle edge node and the MEC server. The offloading proportion of a vehicle increases with its computation capability (i.e., MHz) and enables lower delay , but this is not specifically discussed in . Another work formulated a network utility maximisation consisting of two-level resource allocation, system bandwidth slicing, and resource partitioning for AVs. The work is then extended to three aggregate network-wide utility maximisation problems (refer to Table \\ref{jo-tech}) focusing on transmit power control, solved by linear programming relaxation, a first-order Taylor series approximation, and an alternate concave search algorithm . However, both works do not consider AVs as computing edge nodes. Unlike others, solved the offloading problem by jointly considering two different QoS parameters, latency and task allocation quality, using Linear Programming based Optimization and Binary Particle Swarm Optimization prior to a dynamic task allocation approach . 
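The relaxation-and-rounding idea behind such server-selection schemes can be sketched in a few lines. Note that the `relax_and_round` helper, its utility values, and the capacity below are hypothetical illustrations for the general technique, not the cited formulation: binary server-selection variables are first relaxed to fractional values and then rounded back to a feasible assignment.

```python
# Toy relaxation-and-rounding for VEC server selection (illustrative only).
# Each vehicle picks one server; utilities and capacities are hypothetical.

def relax_and_round(utilities, capacity):
    """utilities[v][s]: utility of vehicle v on server s."""
    n_servers = len(utilities[0])
    # Relaxation step: fractional assignment proportional to utility.
    frac = []
    for row in utilities:
        total = sum(row)
        frac.append([u / total for u in row])
    # Rounding step: greedily assign each vehicle to its highest
    # fractional server that still has spare capacity.
    load = [0] * n_servers
    assignment = []
    for row in frac:
        order = sorted(range(n_servers), key=lambda s: -row[s])
        for s in order:
            if load[s] < capacity:
                assignment.append(s)
                load[s] += 1
                break
    return assignment

# Two servers (capacity 1 each), two vehicles both preferring server 0;
# rounding under capacity spreads them across the servers.
print(relax_and_round([[3.0, 1.0], [2.0, 1.0]], capacity=1))  # [0, 1]
```

The capacity check in the rounding step is what keeps the integer solution feasible after the relaxation is discarded.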
The study might be far more convincing if the authors had considered multi-vehicular cloudlets.", "id": "b44392ea-aa83-444e-a2ab-bb3d47bc1b45", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "3279bcb8-6544-4c00-b33f-bb7add033abe", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Data Computation Offloading" ], [ "subsection", "Hybrid Optimisation Approaches" ], [ "subsubsection", "Hybrid QoS" ] ], "subsections": [], "title": "Hybrid QoS" }, { "cite_extract_rate": 0, "cites": [], "content": " developed the total utility maximisation by jointly combining computation bits and caching bits while optimising the transmit power of each vehicle and the UAV trajectory, adaptive to the instantaneous time and environment. To solve the power optimisation problem, the energy-aware dynamic power optimisation problem was presented in the non-cooperation and cooperation cases under a fixed UAV trajectory. In the former case the vehicles competed for the resources with each other, whereas in the latter case all the vehicles cooperatively shared their common interest by forming a grand coalition. Simulation results demonstrated that the cooperation case with optimised trajectory achieved the best performance. However, the proposed optimisation scheme has high complexity for a real implementation. \nMeanwhile, combined energy with QoS in their proposed QoS Loss Minimization algorithm. It comprises static RSU estimated power minimisation, a Temporal Energy Balancing Algorithm and a Spatial Energy Balancing Algorithm under a constraint of the delay workload. The proposed algorithm significantly reduced the QoS loss owing to power deficiency of SRSU.\nThe researchers in devised a joint control algorithm of workload assignment and VM pool resource allocation by minimising the energy consumption of vehicles for task processing. 
The proposed algorithm also reconciled the application latency with the incentive of vehicles in the long term. The online task scheduling algorithm, TaskSche, designed an efficient task workload assignment policy based on graph transformation and a knapsack-based VM pool resource allocation policy as the core components.\nTo minimise the energy consumption for the offloading, the authors in formulated a joint workload offloading and power control problem. The problem is solved by using an alternating direction method of multipliers (ADMM) based energy-efficient resource allocation algorithm. For a given transmission power, increasing the workload offloading portion decreased the energy consumption, while the transmission power itself had a negative impact on the energy performance. However, the aforementioned approaches suffer from a serious weakness in terms of vehicle mobility.", "id": "739ae665-c748-490f-bf8e-f2a3e5307d49", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "3279bcb8-6544-4c00-b33f-bb7add033abe", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Data Computation Offloading" ], [ "subsection", "Hybrid Optimisation Approaches" ], [ "subsubsection", "Hybrid Energy" ] ], "subsections": [], "title": "Hybrid Energy" }, { "cite_extract_rate": 0, "cites": [], "content": "The deep reinforcement learning approach with the multi-timescale framework is developed to solve the joint optimal resource allocation in minimising the cost of communication, caching, and computing for VEC . Due to the NP-hardness of the problem, a mobility-aware reward estimation is proposed, yet a critical metric such as latency is not considered, for simplicity. Likewise, explored a reverse auction mechanism based on Vickrey-Clarke-Groves for the cost of V2V computation offloading in which the requesting vehicles and the service provider acted as buyers and seller, respectively. 
Since the VCG mechanism, although optimal, is NP-hard, a suboptimal solution called the unilateral-matching-based mechanism is proposed and evaluated. The sub-optimal performance was close to the optimal one; the proposed sub-optimal mechanism served more vehicles as the number of sellers increased, and computation-intensive applications could be processed approximately 75\\% faster than locally. However, the study considers offloading to the seller first rather than the local vehicle, which can lead to high payment.\nBesides that, minimised the system cost, i.e., network overhead and execution time of the computing task, and formulated an optimal policy as a Partially Observable Markov Decision Process to select the network access, computing node and also caching node. Simulation results demonstrated that the proposed framework reduced the system cost significantly across different offloaded data sizes. The benefit of the proposed framework is that various edge nodes are considered, from local vehicles to the cloud computing server, but the revenue to the service provider is not highlighted.\nMeanwhile, formulated two optimisation problems for Vehicle Terminals (VTs) and Mobile Radio Side Units (MRSUs) by minimising the average cost of VTs and the MRSUs in MEC-enabled IEEE 802.22 cognitive vehicular networks. The problems are solved based on Lyapunov optimisation theory with a low-complexity online algorithm called Dual-side Dynamic Joint Task Offloading and Resource Allocation Algorithm in Vehicular Networks (DDORV), but vehicle application and mobility were not specified. 
The simulation results demonstrated that the DDORV achieved the highest profit for MRSU apart from the lowest cost of VTs, with a trade-off in the queue backlog at VTs.\nInvestigating a different optimisation problem, minimised the total computation overhead, defined as the weighted sum of task completion time and monetary cost for using cloud resources of the MEC provider. The original problem is decomposed into two subproblems: a resource allocation (RA) problem with fixed task offloading, and a node selection (NS) problem using the optimal-value function of the RA. Although the cost of the computational resources of both vehicle and core cloud is considered, the main limitation of the work is that the mobility effect remains to be investigated.", "id": "7a9bf2f1-72d5-40d2-805c-2634483eddfb", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "3279bcb8-6544-4c00-b33f-bb7add033abe", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Data Computation Offloading" ], [ "subsection", "Hybrid Optimisation Approaches" ], [ "subsubsection", "Hybrid Prices" ] ], "subsections": [], "title": "Hybrid Prices" }, { "cite_extract_rate": 0.125, "cites": [ 2919, 7118 ], "content": "The reputation value of a vehicle is computed to exhibit the trustworthiness of vehicles before the offloading takes place . The computation is based on previous experience with the targeted vehicles, recommendations from the neighbouring vehicles and also from a central authority. A bargaining game is formulated for the service provider to consider the vehicle reputation value in the bargaining power and also the required task allocation, which is known as reputation-assisted optimisation. A vehicle with high reputation is given high priority to determine its required resources, resulting in high user satisfaction. 
Despite that, the exchange of reputation messages between vehicles and the VEC server might lead to a high overhead in the networks. Another similar work demonstrated data security and load balancing for the proposed 5G-enabled Intelligent Transport System (ITS) framework. It integrated the unmanned aerial vehicles (UAVs), dispatcher, edge nodes, and aggregator to maximise the processing capabilities, minimise delay, and maximise the security of the ITS. With the Bloom-filter-based security protocol used in the proposed framework, the delay can be reduced by increasing the number of edge nodes and transmitting UAVs. One concern with the emergence of UAVs as edge nodes is the battery lifetime for handling the data. \n\\begin{table*}[h]\n\t\\centering\n\t\\caption{Hybrid Optimisation in VEC}\n\t\\label{jo-tech}\n\t\\begin{tabular}{|C{1.5cm}|C{2cm}|L{3cm}|L{3cm}|L{3cm}|L{3cm}|}\n\t\t\\hline\n\t\t{\\bf Related Works}\t&{\\bf Optimisation} &{\\bf Policy/Optimisation Problem} &{\\bf Techniques} &{\\bf Advantages} &{\\bf Disadvantages} \\\\ \\hline\n\t\t, & Near-optimal & Maximise system utility of VEC server selection, offloading ratio and computation resource & Relaxation for VEC selection, rounding method optimisation for computation resource and Lagrangian method for offloading ratio & Offloading ratio between VEC server and local computation is considered & High-complexity algorithm \\\\ \\hline\n\t\t &Optimal &Minimise task completion time & Joint offloading proportion and resource allocation optimization (JOPRAO) &Offloading proportion decision and analysis &No mobility analysis \\\\ \\hline \n\t\t &Suboptimal &Maximise network throughput with guaranteed QoS &Linear programming relaxation and first-order Taylor series approximation and an alternate concave search (ACS) algorithm &Optimal transmit power and network slicing &No resource allocation from other vehicles \\\\ \\hline \n\t\t &Optimal and suboptimal &Minimise latency and quality loss & Linear 
Programming based Optimization (LBO) and Binary Particle Swarm Optimization (BPSO) &Quality loss ratio of application &No inter-zone or inter-cloudlets investigation \\\\ \\hline \n\t\t &Optimal & Maximise total utility by optimising transmit power of vehicle and UAV trajectory & Dynamic programming method and search algorithm &Optimise power of vehicle &High complexity \\\\ \\hline\n\t\t &Optimal &Minimise QoS loss and SRSU power consumption & QoS Loss Minimisation &RSU energy consumption and UE association & No mobility analysis \\\\ \\hline\n\t\t &Suboptimal &Minimise energy consumption via energy-efficient workload and power control &Non-linear fractional programming &Energy efficient offloading and power control & No mobility analysis \\\\ \\hline\n\t\t &Optimal &Minimise energy consumption of network-wide vehicles for task processing while satisfying latency and vehicle incentive &Graph transformation and a knapsack-based VM pool resource allocation policy &Energy and incentive in proposed algorithm &No mobility analysis \\\\ \\hline \n\t\t & Sub-optimal &Minimise cost of communication, storage and computation & Deep Q-learning with multi-timescale framework (i.e. large and small scale) & Application-aware offloading and long term reward &No latency analysis \\\\ \\hline\n\t\t & Optimal and suboptimal & Maximise group efficiency, i.e., total utility offloading computation and expected cost & Vickrey-Clarke-Groves (VCG) based reverse auction mechanism (optimal) and a unilateral-matching-based algorithm (suboptimal) & Payment procedure to seller, i.e., vehicle fog node & Offloading to the seller first rather than local vehicle \\\\ \\hline\n\t\t & Optimal & Minimise system cost, i.e., network overhead and execution time of task computing & Partially Observable Markov Decision Process (POMDP) & Computing edge nodes vary from local storage, mobile edge (i.e. 
MEC server) & No revenue to service provider \\\\ \\hline \n\t\t &Optimal &Minimise the cost of vehicle terminals and maximise the profit of MRSU &Lyapunov optimisation &Cost and profit &No specific vehicular applications and mobility \\\\ \\hline \n\t\t &Optimal &Minimise the total computation overhead (i.e. weighted-sum of task completion time and monetary cost for using cloud resources) & Karush-Kuhn-Tucker (KKT) conditions and branch-and-bound algorithm & Cost of resources (vehicles and core cloud) &No mobility performance \\\\ \\hline\n\t\t & Single optimal & Reputation of vehicle and pricing policy & Multi-weighted subjective logic and bargaining game & Reputation-based optimisation for vehicles & High overhead with the reputation messages \\\\ \\hline\n\t\t & Optimal & Maximise processing capabilities (Pc), minimise delay (De) for proactive decision making, and maximise the security (Se) of the 5G enabled ITS framework & A triple Bloom filter probabilistic data structure (PDS) based scheduling technique &Three objective functions, the processing capabilities, delay and security & Battery and storage constraints with the emergence of UAVs in handling data \\\\ \\hline\n\t\\end{tabular}\n\\end{table*}", "id": "1c7c873f-3e5d-44ef-aea8-5162674e14d2", "level": "subsubsection", "origin_cites_number": 16, "parent_id": "3279bcb8-6544-4c00-b33f-bb7add033abe", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Data Computation Offloading" ], [ "subsection", "Hybrid Optimisation Approaches" ], [ "subsubsection", "Hybrid Security" ] ], "subsections": [], "title": "Hybrid Security" }, { "cite_extract_rate": 0, "cites": [], "content": "The multi-objective optimisation (MOO) category covers optimisation techniques consisting of several independent utility functions for the task offloading and computing. 
In general, the QoS-based optimisation is separately combined with other optimisation variables, such as energy , and monetary cost , in addition to several QoS parameters . The existing MOO frameworks are explained as follows.", "id": "42ce72f8-993b-4588-b1fa-4157947f7dcb", "level": "subsection", "origin_cites_number": 4, "parent_id": "e68aeb1a-f5ba-42ba-a8c4-a6e6091ace7b", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Data Computation Offloading" ], [ "subsection", "Multi Optimisation Approaches" ] ], "subsections": [ "e55290b2-81b7-4071-b44e-ef9ce4c6c6ff", "a8539832-ee7b-4555-bea9-08c2c243996f", "9e15d3a4-0e07-4b85-9ce8-943b4d4fd255", "d47ce87f-54dc-4bc3-b729-2c4fa4c7eb40" ], "title": "Multi Optimisation Approaches" }, { "cite_extract_rate": 0, "cites": [], "content": "The study in formulated the resource allocation and delay optimisation scheme as a Markov decision process (MDP) in SDFC-VeNET, which can be solved via an equivalent Bellman equation. The solution was simplified subject to two stages, macro policy and micro policy, where a linear approximation method and a stochastic approximation were exploited, respectively. The macro policy and micro policy handled the complete system state, i.e., traffic density state, channel state and queue state, and resource allocation, respectively. The limitation of the work is that it relied on the CSI of RRHs and the QSI of vehicles, which incurs high overhead. Another work designed a novel multi-objective task scheduling algorithm based on a bat algorithm that optimised two objective functions, which were to minimise the total execution time and to maximise the total number of successful tasks. Extensive performance evaluation demonstrated that the proposed algorithm can reduce the task execution time and achieved a high offloading ratio as well as a high number of successful tasks. 
However, the work does not specify the type of vehicle application used.", "id": "e55290b2-81b7-4071-b44e-ef9ce4c6c6ff", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "42ce72f8-993b-4588-b1fa-4157947f7dcb", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Data Computation Offloading" ], [ "subsection", "Multi Optimisation Approaches" ], [ "subsubsection", "Several QoS metrics" ] ], "subsections": [], "title": "Several QoS metrics" }, { "cite_extract_rate": 0, "cites": [], "content": "The authors in developed a meta-heuristic approach called Hybrid Adaptive Particle Swarm Optimisation (PSO), which was optimised using a Genetic Algorithm (GA). In this work, each of the three layers computed its fitness values according to three main objectives, namely i) reduced network latency, ii) decreased energy consumption of the system, and also iii) increased availability of virtual machines. The work suffers from a high number of iterations required for convergence to the optimal value. The authors in investigated an interesting problem by considering intra-fog resource management in the local fog server (LFS) and inter-fog resource management in the coordinator server. In the former, a convex optimisation model was developed by minimising the expected total energy consumption of the fog server while satisfying the data processing rate.\nOn the other hand, for the inter-fog resource management, the optimal traffic was derived by minimising the maximum delay time of all fog servers and migrating massive data to a nearby fog server (i.e., min-max optimisation). 
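As a rough illustration of the swarm-based search that such meta-heuristic schemes build on, the following is a minimal vanilla PSO over a toy fitness summing a latency-like and an energy-like term. The fitness function, weights, and coefficients below are assumptions for the sketch, not the cited hybrid PSO-GA design.

```python
import random

# Minimal particle swarm optimisation over a toy one-dimensional fitness
# combining latency and energy terms (illustrative only; not the cited
# hybrid PSO-GA scheme).
def fitness(x):
    latency = (x - 2.0) ** 2          # hypothetical latency term
    energy = 0.5 * (x - 4.0) ** 2     # hypothetical energy term
    return latency + energy           # analytic minimum at x = 8/3

def pso(n_particles=20, iters=100, seed=1):
    rng = random.Random(seed)
    pos = [rng.uniform(0.0, 10.0) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                    # per-particle best positions
    gbest = min(pos, key=fitness)     # global best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # Standard velocity update: inertia + cognitive + social pull.
            vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=fitness)
    return gbest

print(round(pso(), 2))  # converges near the analytic optimum 8/3 ≈ 2.67
```

A GA layer, as in the cited work, would typically mutate or recombine the particles between iterations to escape local optima; it is omitted here for brevity.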
However, the work has dealt with fog servers and not with vehicles as edge nodes.", "id": "a8539832-ee7b-4555-bea9-08c2c243996f", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "42ce72f8-993b-4588-b1fa-4157947f7dcb", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Data Computation Offloading" ], [ "subsection", "Multi Optimisation Approaches" ], [ "subsubsection", "QoS and Energy" ] ], "subsections": [], "title": "QoS and Energy" }, { "cite_extract_rate": 0, "cites": [], "content": "The work in used minimal processing time delay as an initial stage to solve the optimal payment of each user for CPU, RAM, and storage space of the edge device by using the Lagrangian method. Then, the maximum utility of the VSEC was set as the second-stage optimisation by using the same process, i.e., the Lagrangian method, for determining the optimal resource allocation scheme. Results demonstrated that the proposed optimisation approach achieved a shorter end-to-end delay and completion time compared with existing approaches, due to a low number of control messages in the network and the offloading based on the available capacity at each server. The study would have been more useful if the authors had considered rewards for the vehicles. Other researchers in proposed four phases in the Phasing Virtual Network Embedding algorithm, which were a function-group topology using k-core decomposition, backbone part mapping, and edge part mapping phases based on Bloom filter, and the last phase was link mapping using a shortest-path-tree algorithm, the distributed Bellman-Ford, in the SINET-based vehicular networks. The mapping process for resource allocation was based on two key objectives, which were the maximum revenue ratio and maximum acceptance ratio. Although the proposed algorithm outperforms others in revenue and cost performance, the QoS subject to the deadline is not yet discussed. 
The work in investigated a bandwidth allocation model for TES, IES, HRTS, and SRTS, solved via a two-step approach. In the first step, all the sub-optimal solutions are provided based on a Lagrangian algorithm. In the second step, the highest utility is selected as an optimal solution. The model is assumed to reduce the serving time by tuning the available bandwidth to the services, without considering any constraints.", "id": "9e15d3a4-0e07-4b85-9ce8-943b4d4fd255", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "42ce72f8-993b-4588-b1fa-4157947f7dcb", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Data Computation Offloading" ], [ "subsection", "Multi Optimisation Approaches" ], [ "subsubsection", "QoS and Monetary" ] ], "subsections": [], "title": "QoS and Monetary" }, { "cite_extract_rate": 0, "cites": [], "content": "Unlike previous MOO, scheduled the EV demands (i.e., charging and discharging) using a calendar policy whereby the calendars were scheduled by the appropriate fog data center based on the vehicle's location. The optimal calendar was then selected for each vehicle considering parameters such as waiting time to plug in, the price at a specific time, distance to the EV public supply stations (EVPSS), and the demand/supply energy curve. However, the authors overlooked the price of the energy consumption in the optimisation.\n proposed a Stackelberg game model between a service provider (leader) and PVs (followers). The former was designed to minimise the overall cost of users, and the latter was to maximise the utility of PVs under the constraint of given rewards. To achieve the Stackelberg equilibrium, a subgradient-based iterative algorithm was proposed to determine the workload allocation among the PVs and concurrently minimise the overall cost of users, which led to high complexity. 
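The leader-follower structure of such Stackelberg pricing can be sketched as a simple best-response loop. The linear follower demand, the demand intercepts, and the subgradient step below are illustrative assumptions, not the cited formulation:

```python
# Toy two-stage Stackelberg pricing sketch (illustrative only).
# The leader posts a unit price for computing service; each follower i
# best-responds with demand d_i = max(0, a_i - price); the leader then
# adjusts the price by a subgradient step on its revenue.
def follower_demand(a, price):
    return [max(0.0, ai - price) for ai in a]

def leader_pricing(a, step=0.01, iters=2000):
    price = 0.0
    n = len(a)
    for _ in range(iters):
        demand = follower_demand(a, price)
        # Subgradient of revenue price * sum(demand) w.r.t. price,
        # valid while every demand is positive: sum(a) - 2*n*price.
        if all(d > 0 for d in demand):
            grad = sum(a) - 2 * n * price
        else:
            grad = -1.0  # back off when some follower drops out
        price += step * grad
    return price

a = [4.0, 5.0, 6.0]                 # followers' demand intercepts
print(round(leader_pricing(a), 2))  # analytic optimum sum(a)/(2n) = 2.5
```

The fixed point of the iteration is the leader's revenue-maximising price given the followers' best responses, i.e., the Stackelberg equilibrium of this toy game.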
The total number of served vehicles was twice that of the existing schemes when the average workload was 2500 GHz, i.e., CPU cycles. has a slightly different Stackelberg game model, whereby the game leader, i.e., RSU, issued a computing service price to vehicles within its coverage in the first stage. The utility function of the service provider is based on the total provision revenue minus the total electricity cost, the latter of which is not considered in . The multi-objective optimisation is transformed into a single objective by introducing the linear weighting function. In the second stage, each vehicle as a game follower optimised its offloading proportion based on the computed service price. Then, the formulated Stackelberg game can be solved using backward induction iteratively, and the obtained strategy at each stage was shown to converge to a Nash equilibrium. \nAdopting a contract theoretic approach, maximised the revenue of the service provider by identifying the cost of computation resources and maximised the utility of vehicles based on computation resources and energy saving. Another solution is that the BS designed a contract associated with the distinct performance of each vehicle and rewarded the vehicles based on their types, resulting in a maximum payoff . However, the study seems to initiate a high number of contracts in the network, and the latency performance is not investigated. The work is then extended in , where a contract-matching algorithm consists of two stages: a contract-based incentive mechanism for vehicles to promptly share their resources, and a pricing-based stable matching algorithm for the assignment of UE's tasks with the preferred vehicle. The key advantage of the work is that the scenario in which vehicles' private information (i.e., preference on resource sharing and the total amount of available resources) is not known at the BS, called information asymmetry, is compared with that of information symmetry. 
On the other hand, proposed a single-round multi-item parking reservation auction with two different rules, an allocation rule and a payment rule, to determine the parking allocation and the corresponding parking payment, respectively. The work is then extended to a multi-round multi-item parking reservation auction for the optimal offload price. The simulation revealed that with the optimal offload price, the proposed multi-round auction can improve both the profit of the fog node controller and the utility of parked vehicles. The research does not consider the computational resources of edge nodes other than the parked vehicles.\n\\begin{table*}[!htb]\n\t\\centering\n\t\\caption{Multi Objective Optimisation Schemes in VEC}\n\t\\label{mo-tech}\n\t\\begin{tabular}{|C{1.5cm}|C{2cm}|L{3cm}|L{3cm}|L{3cm}|L{3cm}|}\n\t\t\\hline\n\t\t{\\bf Related Works}\t&{\\bf Optimisation} &{\\bf Policy/Optimisation Problem} &{\\bf Techniques} &{\\bf Advantages} &{\\bf Disadvantages} \\\\ \\hline\t\t\n\t\t & Optimal & Minimise delay & Linear approximation method and stochastic approximation method &Macro and micro policies & Dependent on CSI of RRH and QSI of vehicles \\\\ \\hline\n\t\t &Optimal &Maximise fitness values at every layer: vehicular, roadside and network cloud. \t&Hybrid Particle Swarm Optimisation (HPSO) & Offload subtasks to the VMs of the cloud that yields a maximum fitness value\t\t& High number of iterations for the convergence of optimal value \t\t\t \\\\ \\hline\n\t\t &Optimal &Minimise energy consumption while satisfying processing data rate (i.e. intra-fog), minimise delay time while migrating massive data (inter-fog) & Convex optimisation, a min-max optimisation solved by KKT &Resource management within a virtualised fog server and between fog servers (i.e. 
handover) &No vehicular fog nodes \\\\ \\hline\n\t\t &Optimal & Minimise total execution time and maximise the total weight & Bat algorithm & Number of vehicle clusters & No specific applications \\\\ \\hline \n\t\t & Optimal & Minimise total processing time for the optimal payment of user and maximise VSEC utility related to the optimal payment, the resource consumption, the number of devices, and the number of users & Lagrangian theory and closed form expression & Various edge nodes ranging from CPU, RAM and storage space of edge devices and the user's optimal prices for each edge resource \t&No reward for serving vehicles \\\\ \\hline\n\t\t & Optimal & Maximise revenue ratio and acceptance ratio. & Phasing Virtual Network Embedding (PVNE) & Map or allocate virtual network onto the shared vehicular network & Vehicles require a small database for the resource evaluation table \\\\ \\hline\n\t\t &Suboptimal and optimal &Maximise total utility functions of four services (TES, IES, HRTS and SRTS) &Lagrangian algorithm &Bandwidth allocation for four types of services & No mobility analysis \\\\ \\hline\n\t\t & Optimal & Minimise response time of scheduling calendars and minimise transmission delay of data &Calendar policy & Search optimum calendar for EV & No cost or pricing considered\\\\ \\hline\n\t\t & Optimal & Maximise utility function of service provider & A two-stage Stackelberg game with backward induction (i.e. Nash Equilibrium) & Price of electricity for computing as the cost, and total service provision revenue & Only RSUs as the edge node \\\\ \\hline\n\t\t & Optimal & Minimise overall cost of users and maximise utility of parked vehicles & Stackelberg game and a subgradient-based iterative algorithm (i.e. 
Nash equilibrium) & Resource allocation between parked vehicles and service provider & High complexity \\\\ \\hline\n\t\t & Optimal & Maximise the service provider (VEX provider) revenue while improving the utilities of vehicles & A contract theoretic approach & Revenue of service provider and cost of the computation resources and energy saving & Massive number of contracts, no latency analysis \\\\ \\hline\n\t\t &Suboptimal & Maximise utility of base station, minimise total delay of overall networks & Contract-based incentive mechanism solved by KKT, matching based assignment & Contract design with/without information asymmetry at Base Station &High complexity \\\\ \\hline\n\t\t & Optimal & Maximise the payment of FNC and maximise aggregate utility of smart vehicles &Auction game, Vickrey-Clarke-Groves & Optimal parking allocation &Offloading to vehicles only \\\\ \\hline\n\t\\end{tabular}\n\\end{table*}", "id": "d47ce87f-54dc-4bc3-b729-2c4fa4c7eb40", "level": "subsubsection", "origin_cites_number": 15, "parent_id": "42ce72f8-993b-4588-b1fa-4157947f7dcb", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Data Computation Offloading" ], [ "subsection", "Multi Optimisation Approaches" ], [ "subsubsection", "Energy and Monetary Combination" ] ], "subsections": [], "title": "Energy and Monetary Combination" }, { "cite_extract_rate": 0.1, "cites": [ 2917, 2920, 2918 ], "content": "\\label{sec:CachDel}\n\\begin{figure*}[htp]\n\t\\center{\\includegraphics[width=0.7\\textwidth]\t{figures/DOWNLOADING.jpg}}\n\t\\caption{Content caching and delivery}\n\t\\label{fig:download}\n\\end{figure*}\nVehicular data can be characterised by three primary groups, namely location-centric, user-centric, and vehicle-centric . 
When the vehicle drives into a new city, the driver possibly acquires some spatial information on the attractions, road conditions, live traffic, favourite restaurants or available parking spaces, i.e., location-centric. Meanwhile, infotainment services like video or games may be requested by the vehicle passengers and can be analysed in terms of the users' demographics, i.e., user-centric. Information regarding the vehicles, for instance car safety, road tax, car service or built-in sensor data, can also be cached at specific storage, i.e., vehicle-centric. To relax the burden on cloud computing, some data, such as location-centric and user-centric, can be cached locally via RSUs, vehicles or edge servers and shared with other vehicles in a timely manner, as depicted in Figure \\ref{fig:download}. The advent of IoV and VSNs has made V2V caching and communication a reality. Caching mobile contents at the edge of networks may reduce backhaul congestion and absorb peak traffic, apart from lowering latency . Nevertheless, the high variability in vehicular connectivity and rapid changes of the vehicular network topology pose some challenges on data safety and accuracy. \nIn retrieving the vehicular contents, Information-Centric Networking (ICN) uses the content name instead of the IP address of the caching node and is prominently applied in the vehicular networks. It brings certain benefits, such as reducing the response time and the overwhelming load on the content provider. In addition, the key feature of the ICN is to store the most popular contents as a priority. \nTable \\ref{sum-datacaching} summarises the characteristics of data caching and dissemination approaches in VEC. For convenience, the data caching and dissemination approaches are classified into two categories: homogeneous caching and heterogeneous caching. Table \\ref{cc-schemes} lists the content caching and dissemination schemes together with their advantages and disadvantages. 
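The popularity-priority principle of ICN caching mentioned above can be sketched with a toy cache that evicts the least popular stored content when full. The content names and capacity below are hypothetical illustrations, not part of any specific cited scheme:

```python
from collections import Counter

# Minimal popularity-priority cache sketch for ICN-style content storage
# (illustrative only): each request raises the content's popularity count,
# and when the cache is full the least popular cached item is evicted
# only in favour of a strictly more popular newcomer.
class PopularityCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.popularity = Counter()  # request counts per content name
        self.cache = set()           # names currently cached

    def request(self, name):
        self.popularity[name] += 1
        hit = name in self.cache
        if not hit:
            if len(self.cache) < self.capacity:
                self.cache.add(name)
            else:
                victim = min(self.cache, key=lambda c: self.popularity[c])
                if self.popularity[name] > self.popularity[victim]:
                    self.cache.remove(victim)
                    self.cache.add(name)
        return hit

cache = PopularityCache(capacity=2)
for name in ["traffic", "traffic", "parking", "video", "traffic"]:
    cache.request(name)
print(sorted(cache.cache))  # ['parking', 'traffic']
```

Here "video" never displaces "parking" because it is not strictly more popular; in ICN deployments the popularity estimate is typically smoothed over time rather than a raw counter.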
The following explains the details of the data cache and dissemination mechanisms.\n\\begin{table*}[htp]\n\t\\centering\n\t\\caption{Summary of Data Caching and Dissemination Approaches in VEC}\n\t\\label{sum-datacaching}\n\t\\begin{tabular}{|C{2cm}|C{1.5cm}|C{1.5cm}|C{1.5cm}|C{1cm}|C{1cm}|C{1cm}|C{1cm}|C{1cm}|C{1cm}|C{2cm}|C{1cm}|C{2cm}|}\n\t\t\\hline\n\t\t\\textbf{References} &\\textbf{Mobility} &\\textbf{QoS-Transmission Rate} &\\textbf{QoS -Acceptance Ratio/Hit Ratio} & \\textbf{QoS-Delay} & \\textbf{Reward System} & \\textbf{Security} & \\textbf{Energy Efficiency} & \\textbf{Revenue} & \\textbf{Cost} \\\\ \\hline\n\t\t & & & & & \\checkmark & \\checkmark & & & \\\\ \\hline\n\t\t & & & &\\checkmark & & & & & \\\\ \\hline\n\t\t &\\checkmark &\t &\t &\\checkmark\t &\t &\t &\t &\t &\\\\ \\hline\n\t\t &\\checkmark & & &\\checkmark & & & & &\\checkmark \\\\ \\hline\n\t\t &\\checkmark & & & &\\checkmark &\\checkmark & & & \\\\ \\hline\n\t\t &\\checkmark &\\checkmark & & & & & & & \\\\ \\hline\n\t\t &\\checkmark & &\\checkmark &\\checkmark & & & & & \\\\ \\hline\n\t\t\t\t\t& \\checkmark & & & \\checkmark & \\checkmark & & & & \\checkmark \\\\ \\hline\n\t\t & & \\checkmark & & & \\checkmark & & & \\checkmark & \\checkmark \\\\ \\hline\n\t\t & &\\checkmark & & &\\checkmark & & &\\checkmark &\\checkmark \\\\ \\hline\n\t\t &\\checkmark & & &\\checkmark & & & & & \\\\ \\hline\n\t\t &\\checkmark & & &\\checkmark & & & & & \\\\ \\hline\n\t\t &\t&\t & &\\checkmark\t &\t &\t &\t & &\\\\ \\hline\n\t\t & & & &\\checkmark & & & & &\\checkmark\\\\ \\hline\n\t\t & & & &\\checkmark & & & & & \\\\ \\hline\n\t\t &\\checkmark &\\checkmark & & & & & & & \\\\ \\hline\n\t\t &\\checkmark &\\checkmark & &\\checkmark & & & & & \\\\ \\hline\n\t\t &\\checkmark &\\checkmark & & & && & &\\\\ \\hline\n\t\t &\\checkmark & &\\checkmark &\\checkmark & & && & \\\\ \\hline\n\t\t &\\checkmark & &\\checkmark &\\checkmark & & & & & \\\\ \\hline\n\t\t &\\checkmark & &\\checkmark &\\checkmark & & & & & 
\\\\ \hline\n\t\t &\checkmark &\checkmark & &\checkmark & & & & & \\\\ \hline\n\t\end{tabular}\n\end{table*}", "id": "430bbdcf-db7f-4ad9-9856-13b97566a0b4", "level": "section", "origin_cites_number": 30, "parent_id": "73e1092a-6f4c-4f17-8375-efb82b86042e", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Content Caching and Delivery" ] ], "subsections": [ "3cf6aabb-fc2e-4743-a34c-32f7c0336c4d", "8dfe075a-4711-44d9-91bb-4f6536b19b69" ], "title": "Content Caching and Delivery" }, { "cite_extract_rate": 0, "cites": [], "content": "Homogeneous cache is defined for a single type of edge node, e.g., a vehicle or an RSU, that caches and shares the contents. In this paper, we broadly categorise homogeneous caching approaches into two groups: non-cooperative and cooperative homogeneous caching.", "id": "3cf6aabb-fc2e-4743-a34c-32f7c0336c4d", "level": "subsection", "origin_cites_number": 0, "parent_id": "430bbdcf-db7f-4ad9-9856-13b97566a0b4", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Content Caching and Delivery" ], [ "subsection", "Homogeneous Edge Nodes" ] ], "subsections": [ "35065ded-5e9b-46b9-a84d-80ba8c3f3d91", "ee4f2911-a32a-4cb7-9114-a4e64608b89d" ], "title": "Homogeneous Edge Nodes" }, { "cite_extract_rate": 0, "cites": [], "content": "In non-cooperative caching, each edge node stores and disseminates content without relying on other homogeneous edge nodes. The authors in explored a handoff decision between two RSUs for the caching service as their model. A multi-object auction was presented in to solve the RSU-caching problem, with the objective of maximising the total amount of downloaded data . The advantages of the work are that a caching-handoff mechanism and the handoff delay are considered. 
The results demonstrated that the cache-enabled RSUs were fully utilised when the vehicle density was moderate. Interestingly, increasing the data block size can reduce the total amount of downloaded segmented data, owing to the wasted space of unallocated data blocks, and the reduction is critical for unsegmented data. However, the study was conducted for a low mobility of 20 m/s. Meanwhile, a joint peer discovery, power control, and channel selection problem was formulated to maximise the sum of weighted transmission rates, subject to the physical-social score and the spectrum efficiency, for matching the vehicles, i.e., V-TX and V-RX . The case where multiple content providers are assigned the same spectrum resource or content consumer can be solved using a price-rising strategy, yet merely as a V2V edge solution. For edge server caching, two dynamic queuing-theory-based scheduling schemes were proposed: one based on a probabilistic function that sends a job to a server by computing the ratio of the mean response time at a server to its transient response time, and another that considered the queue length of the server . A formal compositional method called Performance Evaluation Process Algebra was used to model the scheduling algorithms in a fog-based vehicular network. Because of the consistent queue length, the algorithm based on the mean and transient response times outperformed in an unstable vehicular server system. 
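The two queueing-based dispatch rules described above can be sketched as follows. This is an illustrative reconstruction, not the authors' exact formulation: the mean and transient response times per server are assumed to be measured elsewhere, and a job is routed with probability proportional to the mean/transient ratio, so servers currently responding faster than their long-run average attract more jobs.

```python
import random

def dispatch_by_response_ratio(mean_rt, transient_rt, rng=random.random):
    """Pick a server index with probability proportional to
    mean_rt[i] / transient_rt[i] (a ratio > 1 means the server is
    currently faster than its long-run average)."""
    ratios = [m / t for m, t in zip(mean_rt, transient_rt)]
    total = sum(ratios)
    r, acc = rng() * total, 0.0
    for i, w in enumerate(ratios):
        acc += w
        if r <= acc:
            return i
    return len(ratios) - 1

def dispatch_shortest_queue(queue_lengths):
    """Alternative rule from the same work: send the job to the
    server with the shortest queue."""
    return min(range(len(queue_lengths)), key=queue_lengths.__getitem__)
```

Passing a deterministic `rng` makes the probabilistic rule reproducible for testing; in simulation the default `random.random` would be used.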
However, the work only investigates edge servers, and the detailed VEC architecture is not well discussed.", "id": "35065ded-5e9b-46b9-a84d-80ba8c3f3d91", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "3cf6aabb-fc2e-4743-a34c-32f7c0336c4d", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Content Caching and Delivery" ], [ "subsection", "Homogeneous Edge Nodes" ], [ "subsubsection", "Non-cooperative caching" ] ], "subsections": [], "title": "Non-cooperative caching" }, { "cite_extract_rate": 0, "cites": [], "content": "In cooperative caching, edge nodes of the same kind cooperatively cache and disseminate the contents by some means.\nA Robust and Distributed Reputation System (REPSYS) consists of three different modules, namely the reputation module (reputation collection and evaluation), the trust module, and the decision module based on Bayesian classification. The nodes monitor and evaluate the neighbouring nodes' behaviours with regard to the reputation rating and trust rating, besides the recommendations of other nodes, for cooperation . The work was then extended to an incentive mechanism for vehicles caching and disseminating data in content-centric cellular-based vehicular delay-tolerant networks . The system achieved a high detection rate for misbehaving nodes, but its bottleneck was the long detection time. On the other hand, vehicles can also be classified in terms of vehicle types and storage capabilities . Data dissemination in the dense Internet of Vehicles (IoV) can be abstracted as a complete graph using graph theory . Two replication-based algorithms, a deterministic algorithm and a distributed randomised algorithm, were designed for heterogeneous vehicular networks occupied by different types of vehicles with varying storage capabilities. 
Interestingly, the proposed randomised algorithm improved the delivery ratio up to 80\% and reduced the latency to below 1.5 ms, owing to the varied capabilities of vehicles. \n\begin{figure*}[htp]\n\t\centering\n\t\includegraphics[width=0.6\textwidth]{figures/caching.jpg}\n\t\caption{ICN-based mmWave vehicular framework }\n\t\label{fig:caching}\n\end{figure*}\nAnother cooperative caching work assumed that the contents are cached in vehicles' storage and proposed a novel ICN-based mmWave vehicular framework, illustrated in Figure \ref{fig:caching}. A decentralised vehicle association algorithm called the Adaptive-beamwidth Weighted Association Scheme (ABWAS) was developed to efficiently match vehicles by maximising the content dissemination efficiency, which jointly optimised the content dissemination rate and the number of retrieved content segments. A vehicle with a higher dissemination rate can transmit the maximum number of content segments within the beam coherence time; therefore, the content retrieval latency is low. The reason is that ABWAS adjusts the vehicles and their associated beamwidths to retrieve more content segments at a higher transmission rate, despite high overheads.\nOn the other hand, a joint peer discovery, spectrum allocation, and route selection optimisation problem was solved using a big-data-integrated coalition formation game approach via D2D-V2V multihop content distribution . In particular, a V-RX that received the content can also cache it and hence serve other adjacent vehicles. The proposed utility function minimises the average network delay, which is considered the individual payoff of each coalition member. The technique can serve approximately 90\% of the vehicles in the area and achieve an average network delay below 3 ms as the number of resource blocks (RBs) increases. 
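As a simplified stand-in for the association and coalition schemes above, the following hypothetical Python sketch greedily pairs content providers (V-TX) with requesters (V-RX) by descending achievable rate, with each vehicle matched at most once; the real schemes additionally weight social scores, beamwidths or coalition utilities, which are omitted here.

```python
def greedy_v2v_matching(rates):
    """Greedy one-to-one matching of V-TX (rows) to V-RX (columns).
    rates[i][j] is the achievable transmission rate from provider i
    to requester j, assumed known from channel estimation.
    Returns a dict {tx_index: rx_index}."""
    pairs = sorted(
        ((rates[i][j], i, j)
         for i in range(len(rates))
         for j in range(len(rates[0]))),
        reverse=True,
    )
    used_tx, used_rx, matching = set(), set(), {}
    for rate, i, j in pairs:
        # take the highest remaining rate whose endpoints are both free
        if i not in used_tx and j not in used_rx:
            matching[i] = j
            used_tx.add(i)
            used_rx.add(j)
    return matching
```

Greedy matching is only an approximation of the optimum, but it is a common baseline for such association problems and runs in near-sorting time.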
\nIn addition, Interest packet transmissions were introduced in the proposed location-based and information-centric (LoICen) architecture, which consisted of three components, namely content request, content-location management, and content delivery. The redundant data transmission problem was solved by checking the vehicles' Pending Interest Table (PIT) prior to the Interest packet arrival. The data can be sent based on either a location-based or an agnostic search, i.e., the link stability-based Interest forwarding (LISIC) protocol , to identify the vehicle that cached the required content. LoICen outperformed in terms of content delivery ratio, delay, and overhead due to the location-specific mechanism. However, the specific component that handles LoICen is not mentioned; this is probably a BS or a coordinator server. All the previously mentioned cooperative caching methods suffer a serious limitation: only vehicles act as caching nodes, i.e., V2V, with no other edge nodes involved.", "id": "ee4f2911-a32a-4cb7-9114-a4e64608b89d", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "3cf6aabb-fc2e-4743-a34c-32f7c0336c4d", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Content Caching and Delivery" ], [ "subsection", "Homogeneous Edge Nodes" ], [ "subsubsection", "Cooperative caching" ] ], "subsections": [], "title": "Cooperative caching" }, { "cite_extract_rate": 0.22222222222222202, "cites": [ 2917, 2920 ], "content": "Heterogeneous cache involves several kinds of edge nodes that cooperatively store and disseminate the contents to the vehicles. Cooperative data caching and delivery among a variety of edge nodes are essential in the heterogeneous cache. 
We review the optimisation of such caching based on game theory , graph theory , network-based technologies and miscellaneous approaches.", "id": "8dfe075a-4711-44d9-91bb-4f6536b19b69", "level": "subsection", "origin_cites_number": 9, "parent_id": "430bbdcf-db7f-4ad9-9856-13b97566a0b4", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Content Caching and Delivery" ], [ "subsection", "Heterogenous Edge Nodes" ] ], "subsections": [ "2beaa411-7cb6-4a73-ba3c-2cbe674f96eb", "7c527afe-80c4-4d5e-8245-72afc076c27a", "8a235b8d-bce3-4eac-bdba-3431d5eda369", "b4a48d96-b0fb-4aa2-a047-997aba0f288e" ], "title": "Heterogenous Edge Nodes" }, { "cite_extract_rate": 0, "cites": [], "content": "In general, auction games , coalition formation games , and Stackelberg games were exploited to optimise the cost of data caching and dissemination, or to optimise it jointly with the transmission capabilities . A novel auction game model jointly considered the content dissemination requirements of an edge computing device (ECD) with regard to the transmission capability of vehicles and the prices. A distributed two-stage relay selection algorithm was formulated for the ECD to select the optimal vehicle bids with the lowest cost for relaying the content to other vehicles, using a first-price sealed-bid auction. However, the study does not consider the content deadline. In contrast, leveraging the idle storage of parked vehicles in multiple parking areas, an iterative ascending-price auction-based caching algorithm was presented . The resource blocks of parked vehicles were assigned to the highest bid, which was the difference between the valuation of caching and the cost paid. The auction process might cause some delay in highly dense vehicular networks. 
Another work on the cooperation among vehicles with regard to content interests (content cached in vehicles) and content requests (content to be downloaded) was formulated based on a coalition formation game . The selection of the optimal access link was subject to the minimum cost of the content downloading time and the content price . The aforementioned game-based approaches achieved the optimal strategy for each vehicle at a minimal cost , and yielded higher revenue to the ECDs than the conventional schemes. Adopting the content-centric framework , a pricing model based on a Stackelberg game was developed for delivering the contents competitively from an RSU or parked vehicles, and cooperatively from both, to the moving vehicles . The cost was computed based on the computational task on a unit size of the resource (e.g., content). The proposed gradient-based iteration algorithm decreased the prices of the RSU and the parking area with increasing transmission rate until the Nash Equilibrium was achieved. However, the work has a high algorithm complexity, and the content deadline is not addressed.", "id": "2beaa411-7cb6-4a73-ba3c-2cbe674f96eb", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "8dfe075a-4711-44d9-91bb-4f6536b19b69", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Content Caching and Delivery" ], [ "subsection", "Heterogenous Edge Nodes" ], [ "subsubsection", "Game Theory" ] ], "subsections": [], "title": "Game Theory" }, { "cite_extract_rate": 0.5, "cites": [ 2920, 2917 ], "content": "Generally, graph theory connects the vehicles and the edge nodes for content placement and delivery, whereby the vertices and edges can be distinguished for each proposed mechanism. The construction of the graph was assumed to take place at the base station , RSU or edge server . 
Cooperative sharing of a large volume of vehicular data from both OBUs and RSUs was developed using an undirected-neighbour-graph-based scheduling scheme called Balanced MWIS (BMWIS), which transformed the content distribution problem into a maximum weight independent set (MWIS) problem . The results demonstrated that the proposed BMWIS achieved the lowest average delay and a high number of served nodes in each scheduling period. A two-level edge computing architecture was presented whereby a contact graph was constructed for the content placement and solved using a tree-based heuristic method. Meanwhile, an approximation method was used to address the conflict graph for cooperative content sharing between vehicles. Another technique was based on periodic location reports from vehicles, where the edge server constructed a contact graph representing the links between the vehicles for a content placement solution . A vehicle with a substantial gain, which was proportional to the urgency of the content, had priority for broadcasting in the time slot. The implementation of graph-theory-based caching might have the shortcoming of additional processing at the BS or RSU.\nIn addition, mmWave data sharing algorithms for V2V communications based on graph theory scheduling were proposed in . A vertex weighting function represented the priority of each transmission, whereby a high priority was assigned to the vehicle farthest from the intersection, and a low priority was given to data near the intersection, i.e., max-distance scheduling. This is because the coverage around the high-density intersection is overlapped. The work is beneficial in solving the interference between beams using an approximation method called a conflict rule to improve spatial reuse, but the research might face redundant data at the intersection. 
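The MWIS-style scheduling used by these graph-based schemes can be illustrated with a standard greedy heuristic (a sketch under our own assumptions, not the exact BMWIS algorithm): each vertex is a candidate transmission with a weight (e.g., urgency or gain), edges mark conflicting transmissions, and the scheduler repeatedly admits the heaviest remaining transmission and drops everything that conflicts with it.

```python
def greedy_mwis(weights, conflicts):
    """Greedy maximum-weight independent set on a conflict graph.
    weights[v] is the priority of transmission v; conflicts is a set
    of vertex pairs that cannot be scheduled in the same slot.
    Returns the sorted list of scheduled transmissions."""
    remaining = set(range(len(weights)))
    scheduled = []
    while remaining:
        v = max(remaining, key=lambda u: weights[u])   # heaviest transmission
        scheduled.append(v)
        # drop v and every transmission conflicting with v
        remaining -= {v} | {u for u in remaining
                            if (u, v) in conflicts or (v, u) in conflicts}
    return sorted(scheduled)
```

MWIS is NP-hard in general, which is why the surveyed works resort to heuristics or approximation methods such as this one.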
With the weight parameter, the results demonstrated that the data could be shared over a large geographical area.", "id": "7c527afe-80c4-4d5e-8245-72afc076c27a", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "8dfe075a-4711-44d9-91bb-4f6536b19b69", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Content Caching and Delivery" ], [ "subsection", "Heterogenous Edge Nodes" ], [ "subsubsection", "Graph Theory" ] ], "subsections": [], "title": "Graph Theory" }, { "cite_extract_rate": 0.2, "cites": [ 2920 ], "content": "In general, data caching and dissemination for vehicular networks can also be solved using content-centric networking , blockchain , and network message protocols . The use of parked vehicles for delivering contents over VSNs based on D2D communication is highlighted . With CCN, the vehicles only request the name of the required content from the parked vehicles without additional overhead, and the process of content interest sending, content distribution, and content replacement is detailed in . The proposed technique achieved a considerably high number of successful content transmissions and the shortest download delay, but required a high algorithm complexity. Another similar work employed a group of content-centric units (CCUs) to work with vehicles and RSUs in the proposed Content-Centric Vehicular Networks framework . A CCU can serve one or multiple RSUs, and even one RSU can be attached to several CCUs. The priority for content storage is subject to the request time, the arrival rate of vehicles, and the request content distribution, i.e., Zipf, which can likely cause high overheads for maintaining this information. However, with this information, the lowest-priority content can be appropriately removed. 
A replica of the required content in a selected nearby CCU was delivered to the vehicle via an RSU.\n\begin{figure*}[htp]\n\t\centering\n\t\includegraphics[width=0.8\textwidth]{figures/blockchain.png}\n\t\caption{Secure data storage and sharing using blockchain in VEC }\n\t\label{fig:blockchain}\n\end{figure*}\nFollowing the recent blockchain technology, Figure \ref{fig:blockchain} shows secure data storage and sharing using blockchain in VEC . Smart contracts on the vehicular blockchain were proposed for securing RSU data, together with a reputation-based data sharing scheme among vehicles, called a three-weight subjective logic model, for selecting the most reliable data source. The vehicle coin, a specific crypto-currency for vehicular edge computing and networks, is rewarded as an incentive to the edge nodes in three kinds of cases: resource storage contribution, new data block updates, and data provision. A local controller, as seen in the figure, records the total amount of data storage contributed by edge nodes. The edge nodes (i.e., RSUs), acting as data aggregators, periodically integrate the received raw data into a data block and request verification from other edge nodes by broadcasting the data block.\nNevertheless, the proposed blockchain-based method would be more pervasive if QoS were taken into consideration, as in . Exploring a different mechanism, an edge-based data dissemination protocol was developed for traffic safety messages from RSUs to vehicles, and the protocol was integrated with a route request (RREQ) message and a route reply (RREP) message for delay-tolerant applications . The farthest vehicles were scheduled in the earliest timeslot for minimal delay, similar to . 
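The reputation-based selection of the most reliable data source discussed above can be illustrated with a generic subjective-logic opinion. This is a hedged sketch of the general technique, not the cited work's exact three-weight model: belief, disbelief and uncertainty are formed from counted positive and negative interactions, and uncertainty shrinks as evidence accumulates.

```python
def subjective_logic_opinion(positive, negative, prior=0.5):
    """Return (belief, disbelief, uncertainty, reputation) from counted
    positive/negative interactions with a data source. Reputation is
    the expected opinion: belief + prior * uncertainty."""
    total = positive + negative
    uncertainty = 1.0 / (total + 1)          # less evidence -> more uncertainty
    belief = (1 - uncertainty) * positive / total if total else 0.0
    disbelief = (1 - uncertainty) * negative / total if total else 0.0
    reputation = belief + prior * uncertainty
    return belief, disbelief, uncertainty, reputation
```

A requesting vehicle would compute this score per candidate source and fetch from the source with the highest reputation; with no interaction history the score falls back to the prior.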
If a vehicle received a duplicate of the messages, the message broadcasting was terminated to minimise the overheads.", "id": "8a235b8d-bce3-4eac-bdba-3431d5eda369", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "8dfe075a-4711-44d9-91bb-4f6536b19b69", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Content Caching and Delivery" ], [ "subsection", "Heterogenous Edge Nodes" ], [ "subsubsection", "Network-based Protocols and Technologies" ] ], "subsections": [], "title": "Network-based Protocols and Technologies" }, { "cite_extract_rate": 0.148148148148148, "cites": [ 2917, 2920, 2918, 2923 ], "content": "Another optimisation problem was formulated to minimise the average completion time and solved using a modified genetic-algorithm-based joint scheduling in which integer coding was used . The edge cloud sent a service request to the BS, which was aware of the vehicular channel conditions, and hence the BS transmitted the request to the vehicle for data processing. However, poor performance was observed in the case of high-mobility cooperation. Another caching solution introduced a utility function to maximise the hit ratio using a cross-entropy-based dynamic content caching algorithm . The requesting vehicles can fetch the content from several candidates, namely other moving vehicles, an RSU, or the remote content server, such that the average delay is kept low. Considering content popularity, content size, and the content cache capacity of the vehicle and RSU, the proposed technique achieved the highest hit ratio and the lowest relative delay and overhead. Another work formulated cost-effective resource sharing as a mixed integer nonlinear programming optimisation problem to minimise the video service provider's cost in IoV . 
The target user received the video once the VSP had optimally retrieved it either from a moving vehicle or from the cloud. An incentive mechanism was introduced to encourage the vehicles to share their contents with the BS. With a massive number of vehicles on the road, a multihop vehicle-to-vehicle connection can be used instead of a multihop backhaul relay. Apart from that, it was demonstrated that the VSP's cost decreased exponentially with the vehicle user's speed between 50 and 120 km/h. This is because low-speed VUs can sufficiently fetch the data from other vehicles within the delay constraint, while a fast vehicle has a limited BS streaming process. However, both works in have a drawback in terms of high algorithm complexity and computation.\n\begin{table*}[!htb]\n\t\centering\n\t\caption{Content Caching and Delivery Approaches in VEC}\n\t\label{cc-schemes}\n\t\begin{tabular}{|C{1cm}|C{1.5cm}|C{1.5cm}|L{3cm}|L{3cm}|L{3cm}|L{3cm}|}\n\t\t\hline\n\t\t{\bf Related Works}\t&{\bf Optimisation Type} &{\bf Optimality} &{\bf Optimisation Utility} &{\bf Optimisation Techniques} &{\bf Advantages} &{\bf Disadvantages}\\\\ \hline\n\t\t &Single &Optimal &Maximum reputation rating & Modified Bayesian Approach & Vehicle's reputation and trust & High overhead \\\\ \hline\t\t\n\t\t &Single &Optimal &Maximise total number of valid content received &Graph theory &Content distribution and sharing & BS constructed the undirected neighbour graph \\\\ \hline\n\t\t & Single &Suboptimal &Maximise total gain of transmission relevant to urgency of the content &Tree-based heuristic method on contact graph (content placement) and approximation method on conflict graph (content sharing) & Base station and autonomous vehicles for content placement & Additional processing for content popularity \\\\ \hline\n\t\t &Single & Optimal & Minimise the cost (i.e. 
content downloading time and price) &Coalition formation game & Cooperation between vehicles & High complexity \\\\ \hline\t\n\t\t &Single &Optimal & Maximise the final reputation of vehicle service provider & Consortium blockchain and smart contract &Data security and vehicle coin & High signalling and overheads\\\\ \hline\n\t\t &Joint &Optimal &Maximise the weighted transmission rate &Pricing strategy & Power control and interference &V2V communications only \\\\ \hline\n\t\t &Single &Optimal &Minimise delay &Data dissemination protocol & Various vehicular applications &High overheads \\\\ \hline \n\t\t &Single &Optimal &Minimise VSP's cost &Mixed nonlinear integer programming (MNIP) & Mobility analysis & Algorithm complexity \\\\ \hline\n\t\t &Multi-objective &Optimal &Maximise the utility of requesting vehicles (content delivery time and content cost), maximise the individual utility of RSU and parked vehicles based on profits gained &Stackelberg game &Competitive and cooperative cases between RSUs and parked vehicles & Algorithm complexity \\\\ \hline\n\t\t &Joint &Optimal &Minimise bids of vehicles &Auction game &Cost and revenue & No deadline \\\\ \hline\n\t\t &Multi-objective &Optimal & Minimise content priority and CCU distance &Information-Centric Networking (ICN) & Temporal content cached and CCU & High overheads\\\\ \hline\n\t\t &Single &Optimal &Minimise average response time of computing &Modified genetic algorithm and statistical priority & Cooperative task scheduling in vehicular cloud &Poor in high mobility cooperation\\\\ \hline\n\t\t &Single &Optimal & Minimise expected cumulative discounted reward (i.e. 
task execution delay) for all tasks & Deep reinforcement learning & VEC environment state & Algorithm complexity \\\\ \hline\n\t\t &Single &Suboptimal & Maximise the bid of content provider &Auction game & Latency, content popularity and cost & Auction process delay for resource assignment \\\\ \hline\n\t\t &Single &Optimal &Minimum response time &Queueing theory & Prediction of response time & Edge server only\\\\ \hline\t\n\t\t &Single &Suboptimal &Maximise total downloaded data &Multi-object auction and graph theory &Segmented data and caching-specific handoff &Low mobility only 20 m/s \\\\ \hline \n\t\t &Joint &Optimal &Maximise content dissemination efficiency &Alpha-fair utility function &Vehicle associations with mmWave beam & V2V communications only\\\\ \hline \n\t\t &Single &Suboptimal & Maximum weight of vertex &Graph-based &Beam interference and priority transmission & Redundant data at intersections \\\\ \hline\n\t\t &Single &Optimal &Minimum defer time (latency) &LoICen &Mitigate redundant content dissemination, location-based and agnostic search & High overheads \\\\ \hline\n\t\t &Single &Optimal & Maximum gap between the vehicle values &Complete graph, i.e., graph theory &Heterogeneous vehicles &Classification of vehicle types\\\\ \hline\n\t\t &Single &Optimal &Maximum hit ratio &Cross-entropy-based dynamic caching algorithm &Content size, popularity and edge node cache size & High computation \\\\ \hline\n\t\t &Joint &Optimal & Minimum average network delay &Coalition formation game & Channel condition and resource block & No content information\\\\ \hline\n\t\end{tabular}\t \n\end{table*}", "id": "b4a48d96-b0fb-4aa2-a047-997aba0f288e", "level": "subsubsection", "origin_cites_number": 27, "parent_id": "8dfe075a-4711-44d9-91bb-4f6536b19b69", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Content Caching and Delivery" ], [ "subsection", "Heterogenous Edge 
Nodes" ], [ "subsubsection", "Miscellaneous" ] ], "subsections": [], "title": "Miscellaneous" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:issues}\nThis section discusses the key challenges in deploying the VEC, some open issues, and potential future works.", "id": "2d41e993-b0c6-4adb-816f-465c68bab53e", "level": "section", "origin_cites_number": 0, "parent_id": "73e1092a-6f4c-4f17-8375-efb82b86042e", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Key Challenges, Open Issues and Future Works" ] ], "subsections": [ "3b9fb9a3-6976-4dee-84f4-0884e8d64311", "c94a2699-36dc-46dd-bc4d-78f4f8584171", "a668d8ae-703b-47b8-a3ab-1d749db5b77e" ], "title": "Key Challenges, Open Issues and Future Works" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "3b9fb9a3-6976-4dee-84f4-0884e8d64311", "level": "subsection", "origin_cites_number": 0, "parent_id": "2d41e993-b0c6-4adb-816f-465c68bab53e", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Key Challenges, Open Issues and Future Works" ], [ "subsection", "Key Challenges" ] ], "subsections": [ "ff6623ac-19fe-4519-914f-1eb19cbb3170", "e784b342-ac91-4bad-a71d-05f025e08008", "f63a3fdd-fb2a-427f-9f72-f2e420c8c600" ], "title": "Key Challenges" }, { "cite_extract_rate": 0, "cites": [], "content": "Substantial vehicular applications, especially infotainment, involve time-varying spatial types of data. Offloading and disseminating such data necessitates regularly updating the caches and purging old or unpopular data. As a result, the edge nodes, particularly vehicles, consume some energy and hence become drained quickly. Besides that, highly dense vehicular networks, e.g., at intersections or in urban areas, are overlapped by a number of VEC regions, resulting in a flood of redundant data. 
Thus, data offloading and caching must be coordinated and synchronised between the overlapped areas.", "id": "ff6623ac-19fe-4519-914f-1eb19cbb3170", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "3b9fb9a3-6976-4dee-84f4-0884e8d64311", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Key Challenges, Open Issues and Future Works" ], [ "subsection", "Key Challenges" ], [ "subsubsection", "Temporal and Spatial Vehicular Data" ] ], "subsections": [], "title": "Temporal and Spatial Vehicular Data" }, { "cite_extract_rate": 0, "cites": [], "content": "The mobility of vehicles leads to a dynamic vehicular network topology in which the attached RSUs, the neighbouring vehicles and even the routing often vary, in particular in the case of high mobility. It is even worse when some rural areas are out of vehicular or mobile network coverage. The vehicle edge node might be equipped with multiple communication modules, which adds hardware and algorithm complexity and certainly increases battery consumption. The challenge is to sustain seamless data offloading and dissemination within the QoS requirements regardless of the circumstances and terrain. As we know, highly dense vehicular networks certainly contain many vehicle cloudlets. The primary challenge here is how to tackle the interference among inter-cloudlet, intra-cloudlet, and mobile network communications. 
Another big challenge is to search for an optimal V2V communication link, since there is a fleet of moving vehicles around.", "id": "e784b342-ac91-4bad-a71d-05f025e08008", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "3b9fb9a3-6976-4dee-84f4-0884e8d64311", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Key Challenges, Open Issues and Future Works" ], [ "subsection", "Key Challenges" ], [ "subsubsection", "Dynamic vehicular networks and unstable communication" ] ], "subsections": [], "title": "Dynamic vehicular networks and unstable communication" }, { "cite_extract_rate": 0, "cites": [], "content": "Relying only on the capability of stationary edge nodes (SENs), i.e., RSUs and edge servers, for computing and caching is most likely impossible, since billions of connected cars are anticipated to be served. The request for content from a moving vehicle is forwarded to the RSUs when this vehicle enters the coverage area. As the number of requests keeps increasing, the load on an RSU to process the requests becomes heavy. Therefore, the vehicle as an edge node, either moving or parked, could relieve the burden of SEN computing. However, the challenge is how reliably a vehicle can give full cooperation as an edge computing node. Vehicle owners are likely to share their computing resources with their families, relatives, friends or people they know, yet not with strangers. 
The key challenge is to attract the vehicle owners with certain benefits so that they continuously support the VEC regardless of who the requestors are.", "id": "f63a3fdd-fb2a-427f-9f72-f2e420c8c600", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "3b9fb9a3-6976-4dee-84f4-0884e8d64311", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Key Challenges, Open Issues and Future Works" ], [ "subsection", "Key Challenges" ], [ "subsubsection", "Vehicle Cooperation" ] ], "subsections": [], "title": "Vehicle Cooperation" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "c94a2699-36dc-46dd-bc4d-78f4f8584171", "level": "subsection", "origin_cites_number": 0, "parent_id": "2d41e993-b0c6-4adb-816f-465c68bab53e", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Key Challenges, Open Issues and Future Works" ], [ "subsection", "Open Issues" ] ], "subsections": [ "695f3976-7805-40f2-af1e-eebe9ed2fd9c", "bc98ec12-c79e-4c84-a44e-ee723428e061" ], "title": "Open Issues" }, { "cite_extract_rate": 0, "cites": [], "content": "Numerous VEC architectures have been proposed, with different names, edge nodes and frameworks. The underlying architecture is of utmost importance because the computation offloading, the caching and the resource management are solved with regard to the architecture. Thus, the VEC architecture is still an open issue with several options. Communication with the RSUs or edge servers using DSRC and mobile cellular technologies has been widely proposed by previous works. 
However, a reliable V2V communication technology is still uncertain.", "id": "695f3976-7805-40f2-af1e-eebe9ed2fd9c", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "c94a2699-36dc-46dd-bc4d-78f4f8584171", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Key Challenges, Open Issues and Future Works" ], [ "subsection", "Open Issues" ], [ "subsubsection", "Vehicular Edge Computing Architecture and Communication" ] ], "subsections": [], "title": "Vehicular Edge Computing Architecture and Communication" }, { "cite_extract_rate": 0, "cites": [], "content": "ICN and VSN have been applied in data offloading, caching and dissemination works for VEC. Adopting content-centric, location-centric or social-centric information in the edge cache is also an open issue, as each has certain advantages and disadvantages. Considering the large volume, variety, and temporal and spatial features of vehicular data, it is important to classify and analyse the vehicle data effectively. 
The integration of big data, SDN, NFV and machine learning in the actual context of VEC remains an open issue.", "id": "bc98ec12-c79e-4c84-a44e-ee723428e061", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "c94a2699-36dc-46dd-bc4d-78f4f8584171", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Key Challenges, Open Issues and Future Works" ], [ "subsection", "Open Issues" ], [ "subsubsection", "Data Retrieval Information and Big Data Analytics" ] ], "subsections": [], "title": "Data Retrieval Information and Big Data Analytics" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "a668d8ae-703b-47b8-a3ab-1d749db5b77e", "level": "subsection", "origin_cites_number": 0, "parent_id": "2d41e993-b0c6-4adb-816f-465c68bab53e", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Key Challenges, Open Issues and Future Works" ], [ "subsection", "Future Works" ] ], "subsections": [ "be48f54c-992e-4d36-8aad-6bae3b0395f9", "f4cf4bc0-f370-4f70-952b-fc6ac3f03dcb", "02faa3f7-0324-4831-92ff-b422152bcd19", "c98b7c4d-e7c3-4f99-8e38-d337382f9279", "ce2abac7-aae6-412b-83fb-9ca2f46c1ced" ], "title": "Future Works" }, { "cite_extract_rate": 0, "cites": [], "content": "A number of data offloading works addressed energy efficiency, while only a few data caching and dissemination works highlighted this issue. Joint optimisation of energy, monetary (e.g. incentives) and QoS in providing fine-grained resource management in VEC, particularly for both cases of computational offloading and data caching and delivery, is a potential research direction. Substantial works explored a single type of vehicle application and its associated QoS. 
Greater attention must also be paid to multiple applications and their QoS requirements.", "id": "be48f54c-992e-4d36-8aad-6bae3b0395f9", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "a668d8ae-703b-47b8-a3ab-1d749db5b77e", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Key Challenges, Open Issues and Future Works" ], [ "subsection", "Future Works" ], [ "subsubsection", "Energy Efficiency and Various Vehicle Applications" ] ], "subsections": [], "title": "Energy Efficiency and Various Vehicle Applications" }, { "cite_extract_rate": 0, "cites": [], "content": "The proposed framework involves a variety of data, whose sources range from mobile devices to infrastructure. Therefore, stronger data security and privacy mechanisms should be developed. The underlying security technology should be investigated to ensure secure communication and also maintain the confidentiality of data. \nThere are serious security issues in current vehicular communication, including false information release, traffic scene forgery, and so on. Therefore, security authentication and privacy protection are the main concerns in vehicular networks. It is necessary for a vehicular user to recognise reliable traffic information for safe driving. 
Moreover, vehicular user information should be protected to prevent data breaches.", "id": "f4cf4bc0-f370-4f70-952b-fc6ac3f03dcb", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "a668d8ae-703b-47b8-a3ab-1d749db5b77e", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Key Challenges, Open Issues and Future Works" ], [ "subsection", "Future Works" ], [ "subsubsection", "Data Security and Privacy" ] ], "subsections": [], "title": "Data Security and Privacy" }, { "cite_extract_rate": 0, "cites": [], "content": "For effective utilisation of scarce resources, resource reuse and network densification are adopted in future cellular networks. Interference becomes more severe in densely deployed topologies. Thus, the interference problem is serious in vehicular networks, especially co-channel interference. Additionally, D2D technology is applied in vehicular networks. How to deal with the interference between D2D communication and traditional communication needs to be investigated.", "id": "02faa3f7-0324-4831-92ff-b422152bcd19", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "a668d8ae-703b-47b8-a3ab-1d749db5b77e", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Key Challenges, Open Issues and Future Works" ], [ "subsection", "Future Works" ], [ "subsubsection", "Interference Coordination" ] ], "subsections": [], "title": "Interference Coordination" }, { "cite_extract_rate": 0, "cites": [], "content": "In VSNs, due to similar social activities, vehicles may have the same interest in certain content and exchange it with each other. However, the content is relayed by RSUs or VENs, where direct communication between vehicles cannot be efficiently provided. 
Investigating multiple communication protocols for V2V and V2I is important for demonstrating performance enhancement, in particular for the cases of inter-offloading and intra-offloading. Similar to mobile social users who use mobile devices to access mobile social networks, vehicles can also form a vehicular social network. Vehicles that have the same interests may exchange contents (e.g., traffic status and road conditions) with each other. A major research issue in such networks is to understand the social relations between vehicles, which could be quite different from other types of social networks.", "id": "c98b7c4d-e7c3-4f99-8e38-d337382f9279", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "a668d8ae-703b-47b8-a3ab-1d749db5b77e", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Key Challenges, Open Issues and Future Works" ], [ "subsection", "Future Works" ], [ "subsubsection", "Advancement of Vehicle Communication and Technologies Support" ] ], "subsections": [], "title": "Advancement of Vehicle Communication and Technologies Support" }, { "cite_extract_rate": 0, "cites": [], "content": "As a future research direction for vehicles, the unmanned vehicle is having a profound impact on the automotive industry and intelligent transportation systems. The maturity and development of vehicular network technology bring reliable basic support for future unmanned vehicles. However, the unmanned vehicle still faces great challenges. Some technical problems need to be resolved, such as the sensitivity and accuracy of the sensors. 
In addition, a quantified general technical standard and an operation standard are vital for unmanned vehicles.", "id": "ce2abac7-aae6-412b-83fb-9ca2f46c1ced", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "a668d8ae-703b-47b8-a3ab-1d749db5b77e", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Key Challenges, Open Issues and Future Works" ], [ "subsection", "Future Works" ], [ "subsubsection", "Advancement of UAV Edge Node" ] ], "subsections": [], "title": "Advancement of UAV Edge Node" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:conclusion}\nThe paper has presented a comprehensive survey on computation offloading as well as data caching and delivery approaches, including the optimisation techniques. We initially introduced a general architecture of VEC and then discussed in detail existing VEC architectures and frameworks, such as the VEC layers, edge nodes, communication technologies, types of vehicle applications and mobility models. The architecture is essential in designing and developing the computation offloading, data caching and dissemination techniques for VEC. Detailed reviews, findings and comparisons of existing optimisation techniques for computation offloading as well as content caching and delivery have been thoroughly discussed. Finally, some key challenges, open issues, and future works were highlighted to help guarantee the feasibility and practicality of VEC. \n\\bibliographystyle{IEEEtran}\n\\bibliography{referencesDatabase}{}\n\\end{document}", "id": "4d6558b7-cdb9-410d-9e30-acaefcdc90b0", "level": "section", "origin_cites_number": 0, "parent_id": "73e1092a-6f4c-4f17-8375-efb82b86042e", "prefix_titles": [ [ "title", "Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
117
[ 2919, 2917, 2916, 2918, 2915, 7118, 2920, 2921, 2922, 2923 ]
1.085703
[ "Jinjie Ni", "Tom Young", "Vlad Pandelea", "Fuzhao Xue", "Erik Cambria" ]
Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey
2021
2021-05-10T14:07:49Z
cs.CL
Dialogue systems are a popular natural language processing (NLP) task, as they are promising in real-life applications. They are also a complicated task, since many NLP tasks deserving study are involved. As a result, a multitude of novel works on this task have been carried out, and most of them are deep learning based due to their outstanding performance. In this survey, we mainly focus on deep learning based dialogue systems. We comprehensively review state-of-the-art research outcomes in dialogue systems and analyze them from two angles: model type and system type. Specifically, from the angle of model type, we discuss the principles, characteristics, and applications of different models that are widely used in dialogue systems. This will help researchers become acquainted with these models and see how they are applied in state-of-the-art frameworks, which is rather helpful when designing a new dialogue system. From the angle of system type, we discuss task-oriented and open-domain dialogue systems as two streams of research, providing insight into the related hot topics. Furthermore, we comprehensively review the evaluation methods and datasets for dialogue systems to pave the way for future research. Finally, some possible research trends are identified based on recent research outcomes. To the best of our knowledge, this survey is the most comprehensive and up-to-date one at present for deep learning based dialogue systems, extensively covering the popular techniques\footnote{The frameworks, topics, and datasets discussed originate from an extensive literature review of state-of-the-art research. We have tried our best to cover all but may still omit some works. Readers are welcome to provide suggestions regarding omissions and mistakes in this article. We also intend to update this article over time as new approaches or definitions are proposed and used by the community}. 
We believe that this work is a good starting point for academics who are new to dialogue systems or those who want to quickly grasp up-to-date techniques in this area.\\ \textbf{Keywords}\ \ Dialogue systems, Chatbots, Conversational AI, Natural Language Processing, Deep learning
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "743f4886-7aea-4caf-9d50-52e5013fbaf3", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ] ], "subsections": [ "ed56c257-dd24-43df-9ed3-a67ef9197cd1", "6989df99-9194-4243-be01-3851ccdbbcf9", "d107f8de-54e0-4843-bd82-941b70519bc2", "5826cc8e-0271-43d0-9eea-71af25a29e66", "bc9566fa-d483-4ea7-9525-13fe2649796a", "f65a71d6-ddc8-4d98-80e5-8bcdf95dbaa4", "641bce50-fbff-48dd-9b23-43912a76e4cb" ], "title": "root" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 1107, 1149, 1878, 1879, 7478, 7455, 1876, 7074, 8506, 1877 ], "content": "\\label{intro}\nDialogue systems (or chatbots) are playing a bigger role in the world. People may still have a stereotype that chatbots are those rigid agents in their phone calls to a bank. However, thanks to the revival of artificial intelligence, the modern chatbots can converse with rich topics ranging from your birthday party to a speech given by Biden, and, if you want, they can even book a place for your party or play the speech video. At present, dialogue systems are one of the hot topics in NLP and are highly demanded in industry and daily life. The market size of chatbot is projected to grow from \\$2.6 billion in 2021 to \\$9.4 billion by 2024 at a compound annual growth rate (CAGR) of 29.7\\% \\footnote{Statistic source: \\url{https://markets.businessinsider.com}} and 80\\% of businesses are expected to be equipped with chatbot automation by the end of 2021 \\footnote{Statistic source: \\url{https://outgrow.co}}.\nDialogue systems perform chit-chat with human or serve as an assistant via conversations. By their applications, dialogue systems are commonly divided into two categories: task-oriented dialogue systems (TOD) and open-domain dialogue systems (OOD). 
Task-oriented dialogue systems solve specific problems in a certain domain, such as movie ticket booking, restaurant table reservation, etc. Instead of focusing on task completion, open-domain dialogue systems, which are usually fully data-driven, aim to chat with users without task and domain restrictions~. Both task-oriented and open-domain dialogue systems can be seen as a mapping $\\varphi$ from user message $U = \\{\\mathrm{\\mathbf{u}}^{(1)}, \\mathrm{\\mathbf{u}}^{(2)}, ... , \\mathrm{\\mathbf{u}}^{(i)}\\}$ to agent response $R = \\{\\mathrm{\\mathbf{r}}^{(1)}, \\mathrm{\\mathbf{r}}^{(2)}, ... , \\mathrm{\\mathbf{r}}^{(j)}\\}$: $R = \\varphi(U)$, where $\\mathrm{\\mathbf{u}}^{(i)}$ and $\\mathrm{\\mathbf{r}}^{(j)}$ denote the $i$th token of the user message and the $j$th token of the agent response respectively. In many open-domain and task-oriented dialogue systems, this mapping also considers a source of external knowledge/database $K$ as input: $R = \\varphi(U, K)$. Table \\ref{An example of inputs and outputs of task-oriented and open-domain dialogue systems in datasets} presents examples of inputs and outputs of task-oriented and open-domain dialogue systems. More specific details and works will be discussed in Sections \\ref{Task-oriented Dialogue Systems} and \\ref{Open-Domain Dialogue Systems}.\n\\begin{table}[h]\n\\caption{Examples of inputs and outputs of task-oriented and open-domain dialogue systems in datasets. 
Some datasets provide external knowledge annotations for each dialogue pair, e.g., in task-oriented dialogue systems, the external knowledge can be retrieved from restaurant databases; in open-domain dialogue systems, it can be retrieved from commonsense knowledge graphs (KG).}\n\\label{An example of inputs and outputs of task-oriented and open-domain dialogue systems in datasets} \n\\centering\n\\begin{tabular}{m{2cm}m{3cm}m{3cm}m{3.7cm}}\n\\toprule\n\\multicolumn{1}{c}{\\textbf{Category}} & \\multicolumn{1}{c}{\\textbf{User message ($U$)}} & \\multicolumn{1}{c}{\\textbf{Agent response ($R$)}} & \\multicolumn{1}{c}{\\textbf{External Knowledge ($K$)}}\\\\\\midrule\n\\multicolumn{1}{c}{Task-oriented} & I need to find a nice restaurant in Madrid that serves expensive Thai food. & There is a restaurant called \\textit{Bangkok City} located at 9 Red Ave. & \\multicolumn{1}{c}{restaurant database} \\\\\\hline\n\\multicolumn{1}{c}{Open-domain} & I love the grilled fish so much! & Yeah, it's a famous \\textit{Chinese dish}. & \\multicolumn{1}{c}{commonsense KG} \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\nTraditional task-oriented dialogue systems are organized in a pipeline structure and consist of four functional modules: Natural Language Understanding, Dialogue State Tracking, Policy Learning, and Natural Language Generation, which will be discussed in detail in Section~\\ref{Task-oriented Dialogue Systems}. Many state-of-the-art works design end-to-end task-oriented dialogue systems to achieve better optimization compared with pipeline methods. Open-domain dialogue systems are generally divided into three categories: generative systems, retrieval-based systems, and ensemble systems. Generative systems apply sequence-to-sequence models (see Section \\ref{Vanilla Sequence-to-sequence Models (Encoder-decoder Models)}) to map the user message and dialogue history into a response sequence that may not appear in the training corpus. 
By contrast, retrieval-based systems try to select a pre-existing response from a certain response set. Ensemble systems combine generative methods and retrieval-based methods in two ways: retrieved responses can be compared with generated responses to choose the best among them; generative models can also be used to refine the retrieved responses~. Generative systems can produce flexible and dialogue context-related responses, while sometimes they lack coherence\\footnote{The quality of being logical and consistent not only between words/subwords but also between responses of different timesteps.} and tend to make dull responses. Retrieval-based systems select responses from human response sets and thus are able to achieve better coherence in surface-level language. However, retrieval systems are restricted by the finiteness of the response sets, and sometimes the responses retrieved show a weak correlation with the dialogue context~. \n\\begin{figure}[t]\n\\begin {center}\n\\includegraphics[width=1.0\\textwidth]{ArticleStructure-eps-converted-to.pdf}\n\\caption{The overall diagram of this article}\n\\label{The overall diagram of this article}\n\\end {center}\n\\end{figure}\nFor dialogue systems, existing surveys~ are either outdated or not comprehensive. Some definitions in these papers are no longer used at present, and a lot of new works and topics are not covered. In addition, most of them lack a multi-angle analysis. Thus, in this survey, we comprehensively review high-quality works in recent years with a focus on deep learning-based approaches and provide insight into state-of-the-art research from both the model angle and the system angle. Moreover, this survey updates the definitions/names according to state-of-the-art research. E.g., we name \"open-domain dialogue systems\" instead of \"chit-chat dialogue systems\" because most of the articles (roughly 70\\% according to our survey) use the former name. 
We also extensively cover the diverse hot topics in dialogue systems and extend some new topics that are popular in the current research community (such as Domain Adaptation, Dialogue State Tracking Efficiency, End-to-end methods for task-oriented dialogue systems; Controllable Generation, Interactive Training, and Visual Dialogue for open-domain dialogue systems).\nTraditional dialogue systems are mostly rule-based~ and non-neural machine learning based systems. Rule-based systems are easy to implement and can respond naturally, which contributed to their popularity in earlier industry products. However, the dialogue flows of these systems are predetermined, which confines their applications to certain scenarios. Non-neural machine learning based systems usually perform template filling to manage certain tasks. These systems are more flexible compared with rule-based systems because the dialogue flows are not predetermined. However, they cannot achieve high F1 scores~ in template filling\\footnote{Template filling is an efficient approach to extract and structure complex information from text to fill in a pre-defined template. It is mostly used in task-oriented dialogue systems.} and are also restricted in application scenarios and response diversity because of the fixed templates. Most if not all state-of-the-art dialogue systems are deep learning-based systems (neural systems). The rapid growth of deep learning improves the performance of dialogue systems~. Deep learning can be viewed as representation learning with multilayer neural networks. Deep learning architectures are widely used in dialogue systems and their subtasks. Section~\\ref{Neural Models in Dialogue Systems} discusses various popular deep learning architectures. 
\nApart from dialogue systems, there are also many dialogue-related tasks in NLP, including but not limited to question answering, reading comprehension, dialogue disentanglement, visual dialogue, visual question answering, dialogue reasoning, conversational semantic parsing, dialogue relation extraction, dialogue sentiment analysis, hate speech detection, MISC detection, etc. In this survey, we also touch on some works tackling these dialogue-related tasks, since the design of dialogue systems can benefit from advances in these related areas. \nWe produced a diagram for this article to help readers familiarize themselves with the overall structure (Figure~\\ref{The overall diagram of this article}). In this survey, Section~\\ref{intro} briefly introduces dialogue systems and deep learning; Section~\\ref{Neural Models in Dialogue Systems} discusses the neural models popular in modern dialogue systems and the related work; Section~\\ref{Task-oriented Dialogue Systems} introduces the principles and related work of task-oriented dialogue systems and discusses the research challenges and hot topics; Section~\\ref{Open-Domain Dialogue Systems} briefly introduces the three kinds of systems and then focuses on hot topics in open-domain dialogue systems; Section~\\ref{Evaluation Approaches} reviews the main evaluation methods for dialogue systems; Section~\\ref{Data sets} comprehensively summarizes the datasets commonly used for dialogue systems; finally, Section~\\ref{Conclusions and Trends} concludes the paper and provides some insight into research trends.", "id": "ed56c257-dd24-43df-9ed3-a67ef9197cd1", "level": "section", "origin_cites_number": 14, "parent_id": "743f4886-7aea-4caf-9d50-52e5013fbaf3", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Neural Models in Dialogue Systems}\nIn 
this section, we introduce neural models that are popular in state-of-the-art dialogue systems and related subtasks. We also discuss the applications of these models or their variants in modern dialogue systems research to provide readers with a picture from the model's perspective. This will help researchers become acquainted with these models and see how they are applied in state-of-the-art frameworks, which is rather helpful when designing a new dialogue system. The models discussed include: Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Vanilla Sequence-to-sequence Models, Hierarchical Recurrent Encoder-Decoder (HRED), Memory Networks, Attention Networks, Transformer, Pointer Net and CopyNet, Deep Reinforcement Learning models, Generative Adversarial Networks (GANs), Knowledge Graph Augmented Neural Networks. We start with some classical models (e.g., CNNs and RNNs), and readers who are familiar with their principles and corresponding applications in dialogue systems can choose to read selectively.", "id": "6989df99-9194-4243-be01-3851ccdbbcf9", "level": "section", "origin_cites_number": 0, "parent_id": "743f4886-7aea-4caf-9d50-52e5013fbaf3", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ] ], "subsections": [ "6876bff4-a6c3-43bf-9ce6-c990efac86f9", "381d8d12-53b8-46e7-a77f-e37b433a9889", "24b037f7-a365-4ed3-9d8f-9d31b8ec91c5", "70447676-b1e3-4e3b-9af1-131f5cf9895b", "c34913d1-da34-4487-a755-149ff84b17e8", "6b37701b-9c79-447a-9547-3a51e8427dbe", "08350a4c-0975-417d-8781-84e0835277fa", "7dc8033c-2bea-42f9-8af0-47edede88ab2" ], "title": "Neural Models in Dialogue Systems" }, { "cite_extract_rate": 0.45454545454545403, "cites": [ 514, 7075, 305, 1881, 7528, 499, 1882, 97, 1880, 9092 ], "content": "\\label{Convolutional Neural Network}\nDeep neural networks have been considered one of the most powerful models. 
`Deep' refers to the fact that they are multilayer, extracting features by stacking feed-forward layers. Feed-forward layers can be defined as: $y = \\sigma(Wx + b)$. Here, $\\sigma$ is an activation function; $W$ and $b$ are trainable parameters. The feed-forward layers are powerful due to the activation function, which makes the otherwise linear operation non-linear. However, there exist some problems when using feed-forward layers. Firstly, the operations of feed-forward layers or multilayer neural networks are essentially template matching and do not consider the specific structure of the data. Furthermore, the fully connected mechanism of traditional multilayer neural networks causes an explosion in the number of parameters and thus leads to generalization problems. proposed LeNet-5, an early CNN. The invention of CNNs mitigates the above problems to some extent.\n\\begin{figure}\n\\begin {center}\n\\includegraphics[width=0.65\\textwidth]{textCNN-eps-converted-to.pdf}\n\\caption{A CNN architecture for text classification }\n\\label{Illustration of a CNN architecture for text classification}\n\\end {center}\n\\end{figure} \nCNNs (Figure~\\ref{Illustration of a CNN architecture for text classification}) usually consist of convolutional layers, pooling layers and feed-forward layers. Convolutional layers apply convolution kernels to perform the convolution operation: \n\\begin{equation}\nG(m, n) = (f*h)(m, n) = \\sum_{j}\\sum_{k}h(j, k)f(m-j, n-k)\n\\label{1}\n\\end{equation}\nWhere $m$ and $n$ are the row and column indexes of the result matrix, respectively. $f$ denotes the input matrix and $h$ denotes the convolutional kernel. The pooling layers perform down-sampling on the result of convolutional layers to get a higher level of features, and the feed-forward layers map them into a probability distribution to predict class scores. 
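To make the convolution operation concrete, the formula $G(m, n) = \\sum_{j}\\sum_{k}h(j, k)f(m-j, n-k)$ can be implemented naively as below (a sketch for exposition only; the function name is our own choice, out-of-range indices are simply skipped, and practical CNN libraries use heavily optimized routines instead):

```python
# Naive full 2D convolution implementing G(m, n) = sum_j sum_k h(j, k) f(m-j, n-k)
def conv2d_full(f, h):
    M, N = len(f), len(f[0])  # input matrix size
    J, K = len(h), len(h[0])  # kernel size
    out = [[0] * (N + K - 1) for _ in range(M + J - 1)]
    for m in range(M + J - 1):
        for n in range(N + K - 1):
            s = 0
            for j in range(J):
                for k in range(K):
                    # only index pairs falling inside the input contribute
                    if 0 <= m - j < M and 0 <= n - k < N:
                        s += h[j][k] * f[m - j][n - k]
            out[m][n] = s
    return out
```

For instance, convolving a 2x2 input with a 2x2 kernel yields a 3x3 result, matching the $(M+J-1) \\times (N+K-1)$ output size implied by the double sum.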
\nA sliding window feature enables convolution layers to capture local features and the pooling layers can produce hierarchical features. These two mechanisms give CNNs the local perception and global perception ability, helping to capture some specific inner structures of data. The parameter sharing mechanism eases the parameter explosion problem and overfitting problem because the reduction of trainable parameters leads to less model complexity, improving the generalization ability. \nDue to these good properties, CNNs have been widely applied in many works. Among them, the Computer Vision tasks benefit the most for that the Spatio-temporal data structures of images or videos are perfectly captured by CNNs. For more detailed mechanism illustrations and other variants of CNNs, readers can refer to these representative algorithm papers or surveys:~. In this survey, we focus on dialogue systems.\nRecent years have seen a dramatic increase in applications of CNNs in NLP. Many tasks take words as basic units. However, phrases, sentences, or even paragraphs are also useful to semantic representations. As a result, CNNs are an ideal tool for the hierarchical modeling of language~. \\\\\nCNNs are good textual feature extractors, but they may not be ideal sequential encoders. Some dialogue systems~ directly used CNNs as the encoder of utterances or knowledge, but most of the state-of-the-art dialogue systems such as~ and~ chose to use CNNs as a hierarchical feature extractor after encoding the text information, instead of directly applying them as encoders. This is due to the fixed input length and limited convolution span of CNNs. Generally, there are two main situations where CNNs are used to process encoded information in dialogue systems. The first situation is applying CNNs to extract features directly based on the feature vectors from the encoder~ and~. 
Within the works above,~ extracted features from character-level embeddings, illustrating the hierarchical extraction capability of CNNs. Another situation in which CNNs are used is extracting feature maps in response retrieval tasks. Some works built retrieval-based dialogue systems~. They used separate encoders to encode dialogue context and candidate responses and then used a CNN as an extractor of the similarity matrix calculated from the encoded dialogue context and candidate responses. Their experiments showed that this method can achieve good performance in response retrieval tasks. \nThe main reason why more recent works do not choose CNNs as dialogue encoders is that they fail to extract the information across temporal sequence steps continuously and flexibly~. Some models introduced later do not process data points independently, which are desirable models for encoders.", "id": "6876bff4-a6c3-43bf-9ce6-c990efac86f9", "level": "subsection", "origin_cites_number": 22, "parent_id": "6989df99-9194-4243-be01-3851ccdbbcf9", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Convolutional Neural Networks" ] ], "subsections": [], "title": "Convolutional Neural Networks" }, { "cite_extract_rate": 0.5, "cites": [ 302 ], "content": "\\label{RNNs and Vanilla Sequence-to-sequence Models}\nNLP tasks including dialogue-related tasks try to process and analyze sequential language data points. Even though standard neural networks, as well as CNNs, are powerful learning models, they have two main limitations~. One is that they assume the data points are independent of each other. While it is reasonable if the data points are produced independently, essential information can be missed when processing interrelated data points (e.g., text, audio, video). 
Additionally, their inputs are usually of fixed length, which is a limitation when processing sequential data varying in length. Thus, a sequential model that can represent the sequential information flow is desirable. \nMarkov models like Hidden Markov Models (HMMs) are traditional sequential models, but due to the time complexity of the inference algorithm~ and because the size of the transition matrix grows significantly with the discrete state space, in practice they are not applicable to problems involving large hidden state spaces. The property that the hidden states of Markov models are only affected by the immediately preceding hidden states further limits the power of this model.\nRNN models were not proposed recently, but they largely solve the above problems, and some variants can achieve state-of-the-art performance in dialogue-related tasks as well as many other NLP tasks. The inductive bias of recurrent models is irreplaceable in many scenarios, and many up-to-date models incorporate the recurrence.", "id": "381d8d12-53b8-46e7-a77f-e37b433a9889", "level": "subsection", "origin_cites_number": 2, "parent_id": "6989df99-9194-4243-be01-3851ccdbbcf9", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Recurrent Neural Networks and Vanilla Sequence-to-sequence Models" ] ], "subsections": [ "00e7a832-6875-4fc6-baf9-32d2cdf913fc", "d75bfb88-be21-4eeb-a183-37e9e21984a9", "dfed0923-4020-4afa-b76f-8e06042171e2", "7f1c59ed-d5fb-4f13-9a57-64f2c307cf44", "906c1f98-e038-4512-912d-3641ff570f8c" ], "title": "Recurrent Neural Networks and Vanilla Sequence-to-sequence Models" }, { "cite_extract_rate": 0.125, "cites": [ 1883 ], "content": "In 1982, Hopfield introduced an early family of RNNs to solve pattern recognition tasks~. and~ introduced two kinds of RNN architectures respectively. 
Generally, modern RNNs can be classified into Jordan-type RNNs and Elman-type RNNs. \n\\begin{figure}\n\\begin {center}\n \\begin{subfigure}[b]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.6\\textwidth]{JordonRNN_c-eps-converted-to.pdf}\n \\caption{Jordan-type RNNs}\n \\label{Jordan type RNNs}\n \\end{subfigure}\n\\hfill\n \\begin{subfigure}[b]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.6\\textwidth]{ElmanRNN_c-eps-converted-to.pdf}\n \\caption{Elman-type RNNs}\n \\label{Elman type RNNs}\n \\end{subfigure}\n\\caption{Graphical models of two basic types of RNNs}\n\\label{Two basic types of RNNs}\n\\end {center}\n\\end{figure} \nThe Jordan-type RNNs are shown in Figure~\\ref{Jordan type RNNs}. $x_t$, $h_t$, and $y_{t}$ are the inputs, hidden state, and output of time step $t$, respectively. $W_h$, $W_y$ and $U_h$ are weight matrices. Each update of the hidden state is determined by the current input and the output of the last time step, while each output is determined by the current hidden state. Thus the hidden state and output of time step $t$ can be calculated as: \n\\begin{equation}\nh_t = \\sigma_h (W_h x_t + U_h y_{t-1} + b_h)\n\\label{2}\n\\end{equation}\n\\begin{equation}\ny_t = \\sigma_y (W_y h_t + b_y)\n\\label{3}\n\\end{equation}\nWhere $b_h$ and $b_y$ are biases. $\\sigma_h$ and $\\sigma_y$ are activation functions. \nThe Elman-type RNNs are shown in Figure~\\ref{Elman type RNNs}. The difference is that each hidden state is determined by the current input and the hidden state of the last time step. Thus the hidden state and output of time step $t$ can be calculated as: \n\\begin{equation}\nh_t = \\sigma_h (W_h x_t + U_h h_{t-1} + b_h)\n\\label{4}\n\\end{equation}\n\\begin{equation}\ny_t = \\sigma_y (W_y h_t + b_y)\n\\label{5}\n\\end{equation}\nIn theory, simple RNNs can model long-term dependencies, but in practical training such long-range dependencies are difficult to learn~.
When backpropagating errors over many time steps, simple RNNs suffer from problems known as gradient vanishing and gradient explosion~. Some solutions were proposed to solve these problems~, which led to the inventions of some variants of traditional recurrent networks.", "id": "00e7a832-6875-4fc6-baf9-32d2cdf913fc", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "381d8d12-53b8-46e7-a77f-e37b433a9889", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Recurrent Neural Networks and Vanilla Sequence-to-sequence Models" ], [ "subsubsection", "Jordan-Type and Elman-Type RNNs" ] ], "subsections": [], "title": "Jordan-Type and Elman-Type RNNs" }, { "cite_extract_rate": 0, "cites": [], "content": "~ introduced gate mechanisms in LSTM mainly to address the gradient vanishing problem. Input gate, forget gate and output gate were introduced to decide how much information from new inputs and past memories should be reserved. The model can be described by the following equations: \n\\begin{equation}\n\\hat{h}^{(t)} = tanh\\left(W^{\\hat{h}x} x^{(t)} + W^{\\hat{h}h} h^{(t-1)} + b_{\\hat{h}}\\right)\n\\end{equation}\n\\begin{equation}\ni^{(t)} = \\sigma\\left(W^{ix} x^{(t)} + W^{ih} h^{(t-1)} + b_i\\right)\n\\end{equation}\n\\begin{equation}\nf^{(t)} = \\sigma\\left(W^{fx} x^{(t)} + W^{fh} h^{(t-1)} + b_f\\right)\n\\end{equation}\n\\begin{equation}\no^{(t)} = \\sigma\\left(W^{ox} x^{(t)} + W^{oh} h^{(t-1)} + b_o\\right)\n\\end{equation}\n\\begin{equation}\ns^{(t)} = \\hat{h}^{(t)} \\odot i^{(t)} + s^{(t-1)} \\odot f^{(t)}\n\\end{equation}\n\\begin{equation}\nh^{(t)} = tanh (s^{(t)})\\odot o^{(t)}\n\\end{equation}\nWhere $t$ represents time step $t$. $i$, $f$ and $o$ are gates, denoting input gate, forget gate and output gate respectively. 
$x$, $\\hat{h}$, $s$ and $h$ are input, short-term memory, long-term memory and output respectively. $b$ denotes biases and $W$ denotes weight matrices. $\\odot$ denotes element-wise multiplication. \nThe intuition of the term ``Long Short-Term Memory\" is that the proposed model applies both long-term and short-term memory vectors to encode the sequential data, and uses gate mechanisms to control the information flow. The performance of LSTM is impressive: although the model was proposed in 1997, it has achieved state-of-the-art results in many NLP tasks as a backbone model.", "id": "d75bfb88-be21-4eeb-a183-37e9e21984a9", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "381d8d12-53b8-46e7-a77f-e37b433a9889", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Recurrent Neural Networks and Vanilla Sequence-to-sequence Models" ], [ "subsubsection", "LSTM" ] ], "subsections": [], "title": "LSTM" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 243, 39 ], "content": "Inspired by the gating mechanism,~ proposed Gated Recurrent Unit (GRU), which can be modeled by the equations:\n\\begin{equation}\nz^{(t)} = \\sigma\\left(W^z x^{(t)} + U^{z} h^{(t-1)} + b_z\\right)\n\\end{equation}\n\\begin{equation}\nr^{(t)} = \\sigma\\left(W^r x^{(t)} + U^{r} h^{(t-1)} + b_r\\right)\n\\end{equation}\n\\begin{equation}\n\\hat{h}^{(t)} = tanh\\left(W^h x^{(t)} + U^{h} (r^{(t)} \\odot h^{(t-1)}) + b_h\\right)\n\\end{equation}\n\\begin{equation}\nh^{(t)} = (1-z^{(t)})\\odot h^{(t-1)} + z^{(t)} \\odot \\hat{h}^{(t)}\n\\end{equation}\nWhere $t$ represents time step $t$. $z$ and $r$ are gates, denoting update gate and reset gate respectively. $x$, $\\hat{h}$ and $h$ are input, candidate activation vector and output respectively. $b$ is bias while $W$ and $U$ are weight matrices. $\\odot$ denotes element-wise multiplication. 
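As a concrete illustration, the GRU update equations above can be sketched in plain NumPy; the dimensions, random initialization, and helper names below are illustrative assumptions, not details from the original papers:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, params):
    # One GRU time step, following the equations above:
    #   z = sigmoid(Wz x + Uz h_prev + bz)      (update gate)
    #   r = sigmoid(Wr x + Ur h_prev + br)      (reset gate)
    #   h_hat = tanh(Wh x + Uh (r * h_prev) + bh)
    #   h = (1 - z) * h_prev + z * h_hat
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h_prev + bz)
    r = sigmoid(Wr @ x + Ur @ h_prev + br)
    h_hat = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)  # candidate activation
    return (1.0 - z) * h_prev + z * h_hat             # additive, gated update

# Illustrative sizes: 4-dimensional inputs, 3-dimensional hidden state.
rng = np.random.default_rng(0)
d_in, d_h = 4, 3
params = [rng.normal(size=s) for s in
          [(d_h, d_in), (d_h, d_h), (d_h,)] * 3]  # Wz,Uz,bz, Wr,Ur,br, Wh,Uh,bh
h = np.zeros(d_h)
for x in rng.normal(size=(5, d_in)):  # run the cell over a 5-step sequence
    h = gru_step(x, h, params)
```

Because the state is updated additively through the gates rather than overwritten at every step, information and gradients can flow across many time steps, which is the property discussed next.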
\nLSTM and GRU, as two types of gating units, are very similar to each other~. The most prominent common point between them is that from time step $t$ to time step $t+1$, an additive component is introduced to update the state whereas simple RNNs always replace the activation. Both LSTM and GRU keep certain old components and mix them with new contents. This property enables the units to remember the information of history steps farther back and, more importantly, avoid gradient vanishing problems when backpropagating the error. \nThere also exist several differences between them. LSTM exposes its memory content under the control of the output gate, while the same content in GRU is in an uncontrolled manner. Additionally, different from LSTM, GRU does not independently gate the amount of new memory content being added. And if looking from experimental perspective, GRU has fewer parameters, which contributes to its faster convergence and better generalization ability. It has also been shown that GRU can achieve better performance in smaller datasets~. However,~ showed that LSTM cells exhibited consistently better performance in a large-scale analysis of Neural Machine Translation.", "id": "dfed0923-4020-4afa-b76f-8e06042171e2", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "381d8d12-53b8-46e7-a77f-e37b433a9889", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Recurrent Neural Networks and Vanilla Sequence-to-sequence Models" ], [ "subsubsection", "GRU" ] ], "subsections": [], "title": "GRU" }, { "cite_extract_rate": 0, "cites": [], "content": "In sequence learning, not only the past information is essential to the model inference, the future information should also be considered to achieve a better inference ability. 
proposed the bi-directional recurrent neural networks (BRNNs), which had two kinds of hidden layers: the first encoded information from past time steps while the second encoded information in a flipped direction. The model can be described using the equations: \n\\begin{equation}\nh^{(t)} = \\sigma\\left(W^{hx} x^{(t)} + W^{hh} h^{(t-1)} + b_h\\right)\n\\end{equation}\n\\begin{equation}\nz^{(t)} = \\sigma\\left(W^{zx} x^{(t)} + W^{zz} z^{(t+1)} + b_z\\right)\n\\end{equation}\n\\begin{equation}\n\\hat{y}^{(t)} = softmax\\left(W^{yh} h^{(t)} + W^{yz} z^{(t)} + b_y\\right)\n\\end{equation}\nWhere $h$ and $z$ are the two hidden layers. Other variables are defined in the same way as in the case of LSTMs and GRUs.", "id": "7f1c59ed-d5fb-4f13-9a57-64f2c307cf44", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "381d8d12-53b8-46e7-a77f-e37b433a9889", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Recurrent Neural Networks and Vanilla Sequence-to-sequence Models" ], [ "subsubsection", "Bidirectional Recurrent Neural Networks" ] ], "subsections": [], "title": "Bidirectional Recurrent Neural Networks" }, { "cite_extract_rate": 0.583333333333333, "cites": [ 1876, 7076, 1885, 1890, 1887, 1886, 8507, 1891, 2401, 1884, 9092, 1889, 1880, 1888 ], "content": "\\label{Vanilla Sequence-to-sequence Models (Encoder-decoder Models)}\n~ first proposed the sequence-to-sequence model to solve the machine translation tasks. The sequence-to-sequence model aimed to map an input sequence to an output sequence by first using an encoder to map the input sequence into an intermediate vector and a decoder further generated the output based on the intermediate vector and history generated by the decoder. 
The equations below illustrate the encoder-decoder model: \n\\begin{equation}\nEncoder: h_t = E(h_{t-1}, x_{t}) \n\\end{equation}\n\\begin{equation}\nDecoder: y_t = D(h_t, y_{t-1})\n\\end{equation}\nWhere $t$ is the time step, $h$ is the hidden vector and $y$ is the output vector. $E$ and $D$ are the sequential cells used by the encoder and decoder respectively. The last hidden state of the encoder is the intermediate vector, and this vector is usually used to initialize the first hidden state of the decoder. At encoding time, each hidden state is decided by the hidden state of the previous time step and the input at the current time step, while at decoding time, each hidden state is decided by the current hidden state and the output of the previous time step. \nThis model is powerful because it is not restricted to fixed-length inputs and outputs. Instead, the length of the source sequence and target sequence can differ. Based on this model, many more advanced sequence-to-sequence models have been developed, which will be discussed in this and subsequent sections. \\\\\nRNNs play an essential role in neural dialogue systems for their strong ability to encode sequential text information. RNNs and their variants are found in many dialogue systems. Task-oriented systems apply RNNs as encoders of dialogue context, dialogue state, knowledge base entries, and domain tags~. Open-domain systems apply RNNs as dialogue history encoders~, among which retrieval-based systems model dialogue history and candidate responses together~. In knowledge-grounded systems, RNNs are encoders of outside knowledge sources (e.g., background, persona, topic, etc.)~. \nFurthermore, as the decoder of sequence-to-sequence models in dialogue systems~, RNNs usually decode the hidden state of utterance sequences by greedy search or beam search~. These decoding mechanisms cause problems like generic responses, which will be discussed in later sections. 
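To make the decoding scheme concrete, the following NumPy sketch runs a toy encoder-decoder with greedy search; the tiny vocabulary, random weights, and function names are illustrative assumptions, not an implementation of any cited system:

```python
import numpy as np

rng = np.random.default_rng(1)
V, d = 6, 8                       # toy vocabulary size and hidden size
E = rng.normal(size=(V, d))       # token embeddings
W_enc = rng.normal(size=(d, d)) * 0.1
W_dec = rng.normal(size=(d, d)) * 0.1
W_out = rng.normal(size=(V, d)) * 0.1
EOS = 0                           # end-of-sequence token id

def encode(tokens):
    # Encoder: h_t = tanh(W h_{t-1} + E[x_t]); the last hidden
    # state serves as the intermediate (context) vector.
    h = np.zeros(d)
    for t in tokens:
        h = np.tanh(W_enc @ h + E[t])
    return h

def greedy_decode(context, max_len=10):
    # Decoder: initialize from the encoder context, then repeatedly
    # feed back the previously emitted token and take the argmax
    # of the output logits (greedy search).
    h, prev, out = context, EOS, []
    for _ in range(max_len):
        h = np.tanh(W_dec @ h + E[prev])
        prev = int(np.argmax(W_out @ h))  # locally best token at each step
        if prev == EOS:
            break
        out.append(prev)
    return out

response = greedy_decode(encode([3, 1, 4]))
```

Greedy search commits to the locally best token at every step; beam search instead keeps the $k$ highest-scoring partial sequences, which partly mitigates, but does not remove, the generic-response problem mentioned above.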
\nSome works~ combined RNNs as a part of dialogue representation models to train dialogue embeddings and further improved the performance of dialogue-related tasks. These embedding models were trained on dialogue tasks and present more dialogue features. They consistently outperformed state-of-the-art contextual representation models (e.g., BERT, ELMo, and GPT) in some dialogue tasks when these contextual representation models were not fine-tuned for the specific tasks.", "id": "906c1f98-e038-4512-912d-3641ff570f8c", "level": "subsubsection", "origin_cites_number": 24, "parent_id": "381d8d12-53b8-46e7-a77f-e37b433a9889", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Recurrent Neural Networks and Vanilla Sequence-to-sequence Models" ], [ "subsubsection", "Vanilla Sequence-to-sequence Models (Encoder-decoder Models)" ] ], "subsections": [], "title": "Vanilla Sequence-to-sequence Models (Encoder-decoder Models)" }, { "cite_extract_rate": 0.846153846153846, "cites": [ 1107, 8508, 1113, 7529, 2401, 1893, 1892, 7455, 1885, 1894, 38 ], "content": "\\label{Hierarchical Recurrent Encoder-Decoder (HRED)}\nHierarchical Recurrent Encoder-Decoder (HRED) is a context-aware sequence-to-sequence model. It was first proposed by~ to address the context-aware online query suggestion problem. It was designed to be aware of history queries and the proposed model can provide rare and high-quality results. \nWith the popularity of the sequence-to-sequence model,~ extended HRED to the dialogue domain and built an end-to-end context-aware dialogue system. HRED achieved noticeable improvements in dialogue and end-to-end question answering. This work attracted even more attention than the original paper for that dialogue systems are a perfect setting for the application of HRED. 
Traditional dialogue systems~ generated responses based on single-turn messages, which sacrificed the information in the dialogue history. combined dialogue history turns with a window size of 3 as the input of a sequence-to-sequence model for response generation, which is also limited because it encodes the dialogue history only at the token level. The ``turn-by-turn\" characteristic of dialogue indicates that turn-level information also matters. The HRED learned both token-level and turn-level representations, thus exhibiting promising dialogue context awareness. \n\\begin{figure}\n\\begin {center}\n\\includegraphics[width=1\\textwidth]{HRED_ORI-eps-converted-to.pdf}\n\\caption{The HRED model in a dialogue setting~}\n\\label{The HRED model in a dialogue setting}\n\\end {center}\n\\end{figure} \nFigure~\\ref{The HRED model in a dialogue setting} represents the HRED in a dialogue setting. HRED models the token-level and turn-level sequences hierarchically with two levels of RNNs: a token-level RNN consisting of an encoder and a decoder, and a turn-level context RNN. The encoder RNN encodes the utterance of each turn token by token into a hidden state. This hidden state is then taken as the input of the context RNN at each turn-level time step. Thus the turn-level context RNN iteratively keeps track of the history utterances. The hidden state of the context RNN at turn $t$ represents a summary of the utterances up to turn $t$ and is used to initialize the first hidden state of the decoder RNN, which is similar to a standard decoder in sequence-to-sequence models~. All three RNNs described above apply GRU cells as the recurrent unit, and the parameters of the encoder and decoder are shared for each utterance. \n~ further proposed Latent Variable Hierarchical Recurrent Encoder-Decoder (VHRED) to model complex dependencies between sequences.
Based on HRED, VHRED incorporated a latent variable into the decoder and turned the decoding process into a two-step generation process: sampling a latent variable at the first step and then generating the response conditionally. VHRED was trained with a variational lower bound on the log-likelihood and exhibited promising improvements in the diversity, length, and quality of generated responses. \\\\\nMany recent works in dialogue-related tasks apply HRED-based frameworks to capture hierarchical dialogue features. argued that standard HRED processed all contexts in dialogue history indiscriminately. Inspired by the architecture of Transformer~, they proposed ReCoSa, a self-attention-based hierarchical model. It first applied LSTM to encode token-level information into context hidden vectors and then calculated the self-attention for both the context vectors and masked response vectors. At the decoding stage, the encoder-decoder attention was calculated to facilitate the decoding. proposed a hierarchical model consisting of three hierarchies: the discourse level, which captures global knowledge; the pair level, which captures topic information in utterance pairs; and the utterance level, which captures content information. Such a multi-hierarchy structure contributed to its higher-quality responses in terms of diversity, coherence, and fluency. applied HRED and VGG-19 as a multimodal HRED (MHRED). The HRED encoded hierarchical dialogue context while VGG-19 extracted visual features for all images in the corresponding turn. With the addition of a position-aware attention mechanism, the model showed more diverse and accurate responses in a visually grounded setting. learned dialogue context representations via four sub-tasks, three of which (next-utterance generation, masked-utterance retrieval, and inconsistency identification) made use of HRED as the context encoder, and good performance was achieved.
used HRED to encode the dialogue history between therapists and patients to categorize therapist and client MI behavioral codes and predict future codes. applied an LSTM-based VHRED to address the two-agent and multi-agent dialogue structure induction problem in an unsupervised fashion. On top of that, they applied a Conditional Random Field model in two-agent dialogues and a non-projective dependency tree in multi-agent dialogues, both of them achieving better performance in dialogue structure modeling.", "id": "24b037f7-a365-4ed3-9d8f-9d31b8ec91c5", "level": "subsection", "origin_cites_number": 13, "parent_id": "6989df99-9194-4243-be01-3851ccdbbcf9", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Hierarchical Recurrent Encoder-Decoder (HRED)" ] ], "subsections": [], "title": "Hierarchical Recurrent Encoder-Decoder (HRED)" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 7077, 9093, 9109, 1898, 1896, 1695, 1897, 1895 ], "content": "\\label{Memory Networks}\nMemory is a crucial component when addressing problems regarding past experiences or outside knowledge sources. The hippocampus of human brains and the hard disk of computers are the components that humans and computers depend on for reading and writing memories. Traditional models rarely have a memory component, thus lacking the ability of knowledge reusing and reasoning. RNNs iteratively pass history information across time steps, which, to some extent, can be viewed as a memory model. However, even for LSTM, which is a powerful variant of RNN equipped with a long-term and short-term memory, the memory module is too small and facts are not explicitly discriminated, thus not being able to compress specific knowledge facts and reuse them in tasks. \n~ proposed memory networks, a model that is endowed with a memory component. 
As described in their work, a memory network has five modules: a memory module which stores the representations of memory facts; an `I' module which maps the input memory facts into embedded representations; a `G' module which decides the update of the memory module; an `O' module which generates the output conditioned on the input representation and memory representation; and an `R' module which organizes the final response based on the output of the `O' module. This model needs a strong supervision signal for each module and thus is not practical to train in an end-to-end fashion. \n~ extended their prior work to an end-to-end memory network, which is commonly accepted as the standard memory network because it is easy to train and apply. \n\\begin{figure}\n\\begin {center}\n\\includegraphics[width=1\\textwidth]{MemoryNetwork_c-eps-converted-to.pdf}\n\\caption{The structure of end-to-end memory networks~}\n\\label{The structure of end-to-end memory network}\n\\end {center}\n\\end{figure} \nFigure~\\ref{The structure of end-to-end memory network} represents the proposed end-to-end memory networks. Its architecture consists of three stages: weight calculation, memory selection, and final prediction. \n\\textbf{Weight calculation.} The model first converts the input memory set $\\{x_i\\}$ into memory representations $\\{m_i\\}$ using a representation model $A$. Then it maps the input query into its embedding space using another representation model $B$, obtaining an embedding vector $u$. The final weights are calculated as follows: \n\\begin{equation}\np_i = Softmax(u^Tm_i)\n\\end{equation}\nWhere $p_i$ is the weight corresponding to each input memory $x_i$ conditioned on the query.
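Taken together with the memory-selection and final-prediction stages described next, the single-hop computation is compact enough to sketch in NumPy; the vocabulary size, bag-of-words encodings, and random embedding matrices $A$, $B$, $C$, $W$ here are illustrative assumptions rather than the original setup:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

rng = np.random.default_rng(2)
d, V, n_mem = 8, 20, 5
A = rng.normal(size=(V, d))   # input-memory embedding (model A)
B = rng.normal(size=(V, d))   # query embedding (model B)
C = rng.normal(size=(V, d))   # output-memory embedding (model C)
W = rng.normal(size=(V, d))   # final prediction matrix

memories = rng.integers(0, V, size=(n_mem, 4))  # 5 memory facts, 4 tokens each
query = rng.integers(0, V, size=4)

# Bag-of-words representations of the memory facts and the query.
m = np.array([A[f].sum(axis=0) for f in memories])
c = np.array([C[f].sum(axis=0) for f in memories])
u = B[query].sum(axis=0)

p = softmax(m @ u)            # stage 1: weights p_i = Softmax(u^T m_i)
o = p @ c                     # stage 2: soft memory selection o = sum_i p_i c_i
a_hat = softmax(W @ (o + u))  # stage 3: final answer distribution
```

The soft selection in stage 2 is what keeps the whole computation differentiable end-to-end.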
\n\\textbf{Memory selection.} Before generating the final prediction, a selected memory vector is generated by first encoding the input memory $x_i$ into an embedded vector $c_i$ using another representation model $C$, then calculating the weighted sum over the $\\{c_i\\}$ using the weights calculated in the previous stage: \n\\begin{equation}\no = \\sum_i p_ic_i\n\\end{equation}\nWhere $o$ represents the selected memory vector. This vector is a soft, weighted combination of the memory representations rather than any single stored memory. The soft memory selection facilitates differentiability in gradient computing, which makes the whole model end-to-end trainable. \n\\textbf{Final prediction.} The final prediction is obtained by mapping the sum of the selected memory vector $o$ and the embedded query $u$ into a probability vector $\\hat{\\alpha}$:\n\\begin{equation}\n\\hat{\\alpha} = Softmax(W(o+u))\n\\end{equation}\n\\\\\nMany dialogue-related works incorporate memory networks into their framework, especially for tasks involving an external knowledge base, such as task-oriented dialogue systems, knowledge-grounded dialogue systems, and QA.\n\\paragraph*{Memory networks for task-oriented dialogue systems}~ argued that state-of-the-art task-oriented dialogue systems tended to combine dialogue history and knowledge base entries in a single memory module, which harmed the response quality. They proposed a task-oriented system that consists of three memory modules: two long-term memory modules storing the dialogue history and the knowledge base respectively, and a working memory module that memorizes two distributions and controls the final word prediction. trained a task-oriented dialogue system with a ``Two-teacher-one-student\" framework to improve the knowledge retrieval and response quality of their memory networks. They first trained two teacher networks using reinforcement learning with complementary goal-specific reward functions respectively.
Then with a GAN framework, they trained two discriminators to teach the student memory network to generate responses similar to those of the teachers, transferring the expert knowledge from the two teachers to the student. The advantage is that this training framework needs only weak supervision and the student network can benefit from the complementary targets of teacher networks. solved the dialogue state tracking in task-oriented dialogue systems with a memory network that memorized the dialogue states. Different from other works, they did not update all dialogue states in the memory module from scratch. Instead, their model first predicted which states needed to be updated and then overwrote the target states. By selectively overwriting the memory module, they improved the efficiency of the dialogue state tracking task. applied the MemN2N~ as task-oriented utterance encoder, memorizing the existing responses and dialogue history. Then they used model-agnostic meta-learning (MAML)~ to train the framework to retrieve correct responses in a few-shot fashion. \n\\paragraph*{Memory networks for open-domain dialogue systems}~ proposed a knowledge-grounded chit-chat system. A memory network was used to store query-response pairs and at the response generation stage, the generator produced the response conditioned on both the input query and memory pairs. It extracted key-value information from the query-response pairs in memory and combined them into token prediction. proposed to use meta-words to generate responses in open-domain systems in a controllable way. Meta-words are phrases describing response attributes. Using a goal-tracking memory network, they memorized the meta-words and generated responses based on the user message while incorporating meta-words at the same time. performed multi-step reasoning conditioned on a dialogue history memory module and a visual memory module. 
Two memory modules recurrently refined the representation to perform the next reasoning process. Experimental results illustrated the benefits of combining image and dialogue clues to improve the performance of visual dialogue systems. trained a reinforcement learning agent to decide which memory vector can be replaced when the memory module is full to improve the accuracy and efficiency of the document-grounded question-answering task. They solved the scalability problem of memory networks by learning the query-specific value corresponding to each memory. solved the same problem in a conversational machine reading task. They proposed an Explicit Memory Tracker (EMT) to decide whether the provided information in memory is enough for final prediction. Furthermore, a coarse-to-fine strategy was applied for the agent to make clarification questions to request additional information and refine the reasoning.", "id": "70447676-b1e3-4e3b-9af1-131f5cf9895b", "level": "subsection", "origin_cites_number": 12, "parent_id": "6989df99-9194-4243-be01-3851ccdbbcf9", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Memory Networks" ] ], "subsections": [], "title": "Memory Networks" }, { "cite_extract_rate": 1, "cites": [ 303 ], "content": "\\label{Attention and transformer}\nAs introduced in Section~\\ref{RNNs and Vanilla Sequence-to-sequence Models}, traditional sequence-to-sequence models decode the token conditioning on the current hidden state and output vector of last time step, which is formulated as: \n\\begin{equation}\n\\label{P(y_i|y_1,}\nP(y_i|y_1, ..., y_{i-1}, x) = g(y_{i-1}, h_i)\n\\end{equation}\nWhere g is a sequential model which maps the input vectors into a probability vector. \nHowever, such a decoding scheme is limited when the input sentence is long. RNNs are not able to encode all information into a fixed-length hidden vector. 
proved via experiments that a sequence-to-sequence model performed worse when the input sequence got longer. Also, for the limited-expression ability of a fixed-length hidden vector, the performance of the decoding scheme in Equation (\\ref{P(y_i|y_1,}) largely depends on the first few steps of decoding, and if the decoder fails to have a good start, the whole sequence would be negatively affected.", "id": "c34913d1-da34-4487-a755-149ff84b17e8", "level": "subsection", "origin_cites_number": 1, "parent_id": "6989df99-9194-4243-be01-3851ccdbbcf9", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Attention and Transformer" ] ], "subsections": [ "48488ad5-3502-400b-8c27-e6d2cb7225d3", "9450e615-73c8-4c3b-b0da-30bd7bfa4b93" ], "title": "Attention and Transformer" }, { "cite_extract_rate": 1, "cites": [ 9109, 168 ], "content": "\\begin{figure}[t]\n\\begin {center}\n\\includegraphics[width=0.35\\textwidth]{attention-eps-converted-to.pdf}\n\\caption{The attention model~}\n\\label{The attention model}\n\\end {center}\n\\end{figure} \n proposed the attention mechanism in the machine translation task. They described the method as ``jointly align and translate\", which illustrated the sequence-to-sequence translation model as an encoder-decoder model with attention. At the decoding stage, each decoding state would consider which parts of the encoded source sentence are correlated, instead of depending only on the immediate prior output token. 
The output probability distribution can be described as: \n\\begin{equation}\nP(y_i|y_1, ..., y_{i-1}, x) = g(y_{i-1}, s_i, c_i)\n\\end{equation}\nWhere $i$ denotes the $i^{th}$ time step; $y_i$ is the output token, $s_i$ is the decoder hidden state and $c_i$ is the weighted source sentence: \n\\begin{equation}\ns_i = f(s_{i-1}, y_{i-1}, c_i)\n\\end{equation}\n\\begin{equation}\nc_i = \\sum_{j = 1}^{T_x}\\alpha_{ij}h_j\n\\end{equation}\nWhere $\\alpha_{ij}$ is the normalized weight score: \n\\begin{equation}\n\\alpha_{ij} = \\frac{exp(e_{ij})}{\\sum_{k = 1}^{T_x}exp(e_{ik})}\n\\end{equation}\n$e_{ij}$ is the similarity score between $s_{i-1}$ and $j^{th}$ encoder hidden state $h_j$, where the score is predicted by the similarity model $a$: \n\\begin{equation}\ne_{ij} = a(s_{i-1}, h_j)\n\\end{equation}\nFigure~\\ref{The attention model} illustrates the attention model, where t and T denote time steps of decoder and encoder respectively.\nMemory networks are similar to attention networks in the way they operate, except for the choice of the similarity model. In memory networks, the encoded memory can be viewed as the encoded source sentence in attention. 
However, the memory model proposed by~ chose cosine distance as the similarity model while the attention proposed by~ used a feed-forward network which is trainable together with the whole sequence-to-sequence model.", "id": "48488ad5-3502-400b-8c27-e6d2cb7225d3", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "c34913d1-da34-4487-a755-149ff84b17e8", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Attention and Transformer" ], [ "subsubsection", "Attention" ] ], "subsections": [], "title": "Attention" }, { "cite_extract_rate": 0.8571428571428571, "cites": [ 1902, 1899, 1904, 7302, 7530, 8385, 1876, 38, 1903, 1900, 1901, 7371, 7, 7078, 7370, 1884, 790, 1499 ], "content": "Before transformers, most works combined attention with recurrent units, except for few works such as and . Recurrent models condition each hidden state on the previous hidden state and the current input and are flexible in sequence length. However, due to their sequential nature, recurrent models cannot be trained in parallel, which severely undermines their potential. proposed Transformer, which entirely utilized attention mechanisms without any recurrent units and deployed more parallelization to speed up training. It applied self-attention and encoder-decoder attention to achieve local and global dependencies respectively. \nFigure~\\ref{The transformer model} represents the transformer. The following details its key mechanisms. \n\\begin{figure}[h]\n\\begin {center}\n\\includegraphics[width=0.7\\textwidth]{Transformer_c-eps-converted-to.pdf}\n\\caption{The transformer model~}\n\\label{The transformer model}\n\\end {center}\n\\end{figure} \n\\paragraph*{Encoder-decoder} The Transformer consists of an encoder and a decoder. The encoder maps the input sequence $(x_1, \\ldots,x_n)$ into continuous hidden states $(z_1, \\ldots,z_n)$. 
The decoder further generates the output sequence $(y_1, \\ldots,y_n)$ based on the hidden states of the encoder. The probability model of the Transformer is in the same form as that of the vanilla sequence-to-sequence model introduced in Section~\\ref{Vanilla Sequence-to-sequence Models (Encoder-decoder Models)}. stacked 6 identical encoder layers and 6 identical decoder layers. An encoder layer consists of a multi-head attention component and a simple feed-forward network, both of which apply residual connections. The structure of a decoder layer is almost the same as that of an encoder layer, except for an additional encoder-decoder attention layer, which computes the attention between the decoder hidden states of the current time step and the encoder output vectors. The input of the decoder is partially masked to make sure that each prediction is based only on the previous tokens, preventing predictions from conditioning on future information. The inputs of both the encoder and the decoder use a positional encoding mechanism. \n\\paragraph*{Self-attention} For an input sentence $x = (x_1, \\ldots, x_n)$, each token $x_i$ corresponds to three vectors: query, key, and value. The self-attention computes the attention weight for every token $x_i$ against all other tokens in $x$ by multiplying the query of $x_i$ with the keys of all the remaining tokens one-by-one. For parallel computing, the query, key, and value vectors of all tokens are combined into three matrices: Query (Q), Key (K), and Value (V). The self-attention of an input sentence $x$ is computed by the following formula: \n\\begin{equation}\nAttention(Q, K, V) = softmax(\\frac{QK^T}{\\sqrt{d_k}})V\n\\end{equation}\nWhere $d_k$ is the dimension of queries or keys.
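The self-attention formula above maps directly to code; this NumPy sketch (with arbitrary small dimensions and random projections, chosen only for illustration) computes single-head scaled dot-product self-attention for one sentence:

```python
import numpy as np

def softmax_rows(z):
    # Row-wise softmax, numerically stabilized.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project every token into query/key/value spaces, then compute
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    weights = softmax_rows(Q @ K.T / np.sqrt(d_k))  # (n_tokens, n_tokens)
    return weights @ V, weights

rng = np.random.default_rng(3)
n_tokens, d_model, d_k = 5, 16, 8
X = rng.normal(size=(n_tokens, d_model))            # embedded input sentence
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
```

Each row of `weights` sums to one, so every output token is a convex combination of the value vectors; multi-head attention, described next, simply runs several such projections in parallel and concatenates the results.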
\n\\paragraph*{Multi-head attention} To jointly consider the information from different subspaces of embedding, query, key, and value vectors are mapped into $h$ vectors of identical shapes by using different linear transformations, where $h$ denotes the number of heads. Attention is computed on each of these vectors in parallel, and the results are concatenated and further projected. The multi-head attention can be described as: \n\\begin{equation}\nMultiHead(Q, K, V) = Concat(head_1, ..., head_h)W^O\n\\end{equation}\nWhere $head_i = Attention (QW_i^Q, KW_i^K, VW_i^V)$ and $W$ denotes the linear transformations. \n\\paragraph*{Positional encoding} The proposed transformer architecture has no recurrent units, which means that the order information of sequence is dismissed. The positional encoding is added with input embeddings to provide positional information. The paper chooses cosine functions for positional encoding: \n\\begin{equation}\nPE_{(pos, 2i)} = sin(pos/10000^{2i/d_{model}})\n\\end{equation}\n\\begin{equation}\nPE_{(pos, 2i+1)} = cos(pos/10000^{2i/d_{model}})\n\\end{equation}\nWhere $pos$ denotes the position of the target token and $i$ denotes the dimension, which means that each dimension of the positional matrix uses a different wavelength for encoding. \n\\paragraph*{Transformer-based pretrain models and Transformer variants} Recently, many transformer-based pretrain models have been developed. Unlike Embeddings from Language Model (ELMo) proposed by~, which is an LSTM-based contextual embedding model, transformer-based pretrain models are more powerful. Two most popular models are GPT-2 \\footnote{https://openai.com/blog/better-language-models/} and BERT~. GPT-2 and BERT both consist of 12 transformer blocks and BERT is further improved by making the training bi-directional. They are powerful due to their capability of adapting to new tasks after pretraining. This property helped achieve significant improvements in many NLP tasks. 
Many Transformer variants have also emerged, designed to reduce the model parameters/computational complexity or to improve the performance of the original Transformer in diverse scenarios. and systematically summarize the state-of-the-art Transformer variants for interested readers.\\\n\paragraph*{Attention for dialogue systems} Attention is a mechanism to capture the importance of different parts of a sequence. applied a two-level attention to generate words. Given the user message and candidate responses selected by a retrieval system, the generator first computes word-level attention weights, then uses sentence-level attention to rescale the weights. This two-level attention helps the generator capture the varying importance of candidates given the encoded context. used an attention-based recurrent architecture to generate responses. They designed a multi-level encoder-decoder in which the multi-level encoder maps raw words, low-level clusters, and high-level clusters into hierarchical embedded representations while the multi-level decoder leveraged the hierarchical representations using attention and then generated responses. At each decoding stage, the model calculated two attention weights for the output of the higher-level decoder and the hidden state of the current level's encoder. computed multi-head self-attention for the outputs of a dialogue act predictor. Unlike the transformer, which concatenates the outputs of different heads, they passed the outputs directly to the next multi-head layer. The stacked multi-head layers then generated the responses with dialogue acts as the input. \n\paragraph*{Transformers for dialogue systems} Transformers are powerful sequence-to-sequence models, and their encoders also serve as good dialogue representation models. built a transformer-based response retrieval model for task-oriented dialogue systems. 
A two-channel transformer encoder was designed for encoding user messages and responses, both of which were initially represented as unigrams and bigrams. A simple cosine distance was then applied to calculate the semantic similarity between the user message and the candidate response. built multiple incremental transformer encoders to encode multi-turn conversations and their related document knowledge. The encoded utterance and related document of the previous turn were treated as a part of the input of the next turn's transformer encoder. The pretrained model was adaptable to multiple domains with only a small amount of data from the target domain. used stacked transformers for dialogue generation pretraining. Besides the response generation task, they also pretrained the model together with a latent act prediction task. A latent variable was applied to solve the ``one-to-many\" problem in response generation. The multi-task training scheme improved the performance of the proposed transformer pretraining model. \n\paragraph*{Transformer-based pretrained models for dialogue systems} Large transformer-based pretrained models are adaptable to many tasks and are thus popular in recent works. used GPT as a sequence-to-sequence model to directly generate utterances and compared the performances under single- and multi-input settings. first used a probability model to retrieve a related news corpus and then combined the news corpus and dialogue context as the input of a GPT-2 generator for response generation. They proposed that by using discourse pattern recognition and interrogative type prediction as two subtasks for multi-task learning, the dialogue modeling could be further improved. used BERT as an encoder of context and candidate responses in their goal-based response retrieval system while~ built Co-BERT, a BERT-based response selection model, to retrieve empathetic responses given a persona-based training corpus. 
built a knowledge-grounded dialogue system in a synthesized fashion. They used both BERT and GPT-2 to perform knowledge selection and response generation jointly, where BERT was for knowledge selection and GPT-2 generated responses based on dialogue context and the selected knowledge.", "id": "9450e615-73c8-4c3b-b0da-30bd7bfa4b93", "level": "subsubsection", "origin_cites_number": 21, "parent_id": "c34913d1-da34-4487-a755-149ff84b17e8", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Attention and Transformer" ], [ "subsubsection", "Transformer" ] ], "subsections": [], "title": "Transformer" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Pointer Net and CopyNet}", "id": "6b37701b-9c79-447a-9547-3a51e8427dbe", "level": "subsection", "origin_cites_number": 0, "parent_id": "6989df99-9194-4243-be01-3851ccdbbcf9", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Pointer Net and CopyNet" ] ], "subsections": [ "9ca2d24a-6bb9-4b81-806f-f9d5afbcc584", "d99b10b7-d2d5-4695-943b-2adfea1a87db" ], "title": "Pointer Net and CopyNet" }, { "cite_extract_rate": 1, "cites": [ 2401, 1905, 167 ], "content": "In some NLP tasks like dialogue systems and question-answering, the agents sometimes need to directly quote from the user message. Pointer Net~ (Figure~\\ref{Pointer Net}) solved the problem of directly copying tokens from the input sentence. 
\n\\begin{figure}\n\\begin {center}\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{ptn1-eps-converted-to.pdf}\n \\caption{Sequence-to-sequence}\n \\end{subfigure}\n\\hfill\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{ptn2-eps-converted-to.pdf}\n \\caption{Pointer Net}\n \\end{subfigure}\n\\caption{\\textbf{(a)} \\textit{Sequence-to-sequence} - The RNN (blue) processes the input sequence to produce a code vector, which is then used by the probability chain rule and another RNN to generate the output sequence (purple). The dimensionality of the problem determines the output dimensionality, which remains constant through training and inference. \\textbf{(b)} \\textit{Pointer Net} - The input sequence is converted to a code (blue) by an encoding RNN, which is fed to the generating network (purple). The generating network generates a vector at each step that modulates a content-based attention process across inputs. The attention mechanism produces a softmax distribution with a dictionary size equal to the input length.~}\n\\label{Pointer Net}\n\\end {center}\n\\end{figure} \nTraditional sequence-to-sequence models~ with an encoder-decoder structure map a source sentence to a target sentence. Generally, these models first map source sentence into hidden state vectors with an encoder, and then predict the output sequence based on the hidden states. The sequence prediction is accomplished step-by-step, each step predicting one token using greedy search or beam search. The overall sequence-to-sequence model can be described by the following probability model: \n\\begin{equation}\nP(C^P|P;\\theta) = \\prod_{i=1}^{m(P)}p(C_i|C_1, ..., C_{i-1}, P; \\theta)\n\\end{equation}\nWhere $(P, C_p)$ constitutes a training pair, $P$ = $\\{P_1, ..., P_n\\}$ denotes the input sequence and $C_p$ = $\\{C_1, ..., C_{m(p)}\\}$ denotes the ground target sequence. $\\theta$ is a decoder model. 
\nThe sequence-to-sequence models have either vanilla backbones or attention-based backbones. Vanilla models predict the target sequence based only on the last hidden state of the encoder and pass it across different decoder time steps. Such a mechanism restricts the information received by the decoder at each decoding stage. Attention-based models consider all hidden states of the encoder at each decoding step and calculate their importance when utilizing them. To compare the mechanism of Pointer Net and Attention, we present the equations explained in Section~\ref{RNNs and Vanilla Sequence-to-sequence Models} here again. The decoder predicts the token conditioned partially on the weighted sum of encoder hidden states $d_i$: \n\begin{equation}\nd_i = \sum_{j=1}^{T_x}\alpha_{ij}h_j\n\end{equation}\nWhere $\alpha_{ij}$ is the normalized weight score:\n\begin{equation}\n\alpha_{ij} = \frac{exp(e_{ij})}{\sum_{k=1}^{T_x}exp(e_{ik})}\n\end{equation}\n$e_{ij}$ is the similarity score between $s_{i-1}$ and the $j$-th encoder hidden state $h_j$, where the score is predicted by the similarity model $a$: \n\begin{equation}\ne_{ij} = a(s_{i-1}, h_j)\n\end{equation}\nAt each decoding step, both vanilla and attention-based sequence-to-sequence models predict a distribution over a fixed dictionary $X = \{x_1, ..., x_n\}$, where $x_i$ denotes the tokens and $n$ denotes the total count of different tokens in the training corpus. However, when copying words from the input sentence, we do not need such a large dictionary. Instead, $n$ equals the number of tokens in the input sequence (including repeated ones) and is not fixed since it changes according to the length of the input sequence. 
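The attention computation above can be made concrete with a small NumPy sketch; note that the resulting weight vector $\alpha_i$ has exactly one entry per input position, which is the property that makes it reusable as a distribution over the input sequence. All weight matrices below are random stand-ins for learned parameters:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_weights(s_prev, H, W1, W2, v):
    """e_ij = a(s_{i-1}, h_j) with an additive (feed-forward) similarity model a,
    then alpha_i = softmax over the input positions j."""
    e = np.array([v @ np.tanh(W1 @ h + W2 @ s_prev) for h in H])
    return softmax(e)

rng = np.random.default_rng(0)
d, n = 8, 5                                   # hidden size, input length T_x
H = rng.normal(size=(n, d))                   # encoder hidden states h_1..h_n
W1 = rng.normal(size=(d, d))                  # stand-ins for the trained
W2 = rng.normal(size=(d, d))                  # parameters of the model a
v = rng.normal(size=d)
alpha = attention_weights(rng.normal(size=d), H, W1, W2, v)
d_i = alpha @ H                               # context vector d_i = sum_j alpha_ij h_j
```

An attention-based decoder consumes `d_i`; Pointer Net instead outputs `alpha` itself, so its "dictionary" automatically matches the input length.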
Pointer Net made a simple change to the attention-based sequence-to-sequence models: instead of predicting the token distribution based on the weighted sum of encoder hidden states $d_i$, it directly used the normalized weights $\\alpha_i$ as predicted distribution: \n\\begin{equation}\nP(C_i|C_1, ..., C_{i-1}, P) = \\alpha_i\n\\end{equation}\nWhere $\\alpha_i$ is a set of probability numbers $\\{\\alpha_i^1, ..., \\alpha_i^j\\}$ which represents the probability distribution over the tokens of the input sequence. Obviously, the \\textit{token prediction} problem is now transformed into \\textit{position prediction} problem, where the model only needs to predict a position in the input sequence. This mechanism is like a pointer that points to its target, hence the name ``Pointer Net\".", "id": "9ca2d24a-6bb9-4b81-806f-f9d5afbcc584", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "6b37701b-9c79-447a-9547-3a51e8427dbe", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Pointer Net and CopyNet" ], [ "subsubsection", "Pointer Net" ] ], "subsections": [], "title": "Pointer Net" }, { "cite_extract_rate": 0.5882352941176471, "cites": [ 1114, 1891, 1909, 1906, 1889, 1908, 1907, 1905, 7076, 7531 ], "content": "\\begin{figure}\n\\begin {center}\n\\includegraphics[width=1.0\\textwidth]{CopyNet-eps-converted-to.pdf}\n\\caption{The overall architecture of CopyNet~}\n\\label{The overall architecture of CopyNet}\n\\end {center}\n\\end{figure} \nIn real-world applications, simply copying from the source message is not enough. Instead, in tasks like dialogue systems and QA, agents also require the ability to generate words that are not in the source sentence. CopyNet~ (Figure~\\ref{The overall architecture of CopyNet}) was proposed to incorporate the copy mechanism into traditional sequence-to-sequence models. 
The model decides at each decoding stage whether to copy from the source or generate a new token not in the source. \nThe encoder of CopyNet is the same as that of a traditional sequence-to-sequence model, whereas the decoder has some differences compared with a traditional attention-based decoder. When predicting the token at time step $t$, it combines the probabilistic models of generate-mode and copy-mode: \n\begin{equation}\nP(y_t|s_t, y_{t-1}, c_t, M) = P_g(y_t|s_t, y_{t-1}, c_t, M) + P_c(y_t|s_t, y_{t-1}, c_t, M)\n\end{equation}\nWhere $t$ is the time step. $s_t$ is the decoder hidden state and $y_t$ is the predicted token. $c_t$ and $M$ represent the weighted sum of encoder hidden states and the encoder hidden states respectively. The subscripts $g$ and $c$ denote generate-mode and copy-mode respectively. \nMoreover, although the decoder still uses $y_{t-1}$ and the weighted attention vector $c_t$ to update its hidden state, $y_{t-1}$ is uniquely encoded with both its embedding and its location-specific hidden state. CopyNet also combines attentive read and selective read to capture information from the encoder hidden states, where the selective read is the same method used in Pointer Net. Different from the Neural Turing Machines~, the CopyNet has a location-based mechanism that enables the model to be aware of specific details in the training data in a more subtle way. \\\nThe copy mechanism is suitable for dialogues involving terminologies or external knowledge sources, and it is popular in knowledge-grounded or task-oriented dialogue systems. \n\paragraph*{Copy mechanism for knowledge-grounded dialogue systems} For knowledge-grounded systems, external documents or dialogues are sources to copy from. combined a recurrent knowledge interactive decoder with a knowledge-aware pointer network to achieve both knowledge-grounded generation and knowledge copy. 
In the proposed model, they first calculated the attention distribution over external knowledge, then used two pointers referring to dialogue context and knowledge source respectively to copy out-of-vocabulary (OOV) words. applied a multi-class classifier to flexibly fuse three distributions: generated words, generated knowledge entities, and copied query words. They used Context-Knowledge Fusion and Flexible Mode Fusion to perform the knowledge retrieval, response generation, and copying jointly, making the generated responses precise, coherent, and knowledge-infused. proposed a Cross Copy Network to copy from internal utterance (dialogue history) and external utterance (similar cases) respectively. They first used pretrained language models for similar case retrieval, then combined the probability distribution of two pointers to make a prediction. They only experimented with court debate and customer service content generation tasks, where similar cases were easy to obtain. \n\\paragraph*{Copy mechanism for task-oriented dialogue systems} Many dialogue state tracking tasks generate slots and slot values using a copy component~. Among them~,~ and~ solved the problem of multi-domain dialogue state tracking. proposed TRAnsferable Dialogue statE generator (TRADE), a copy-based dialogue state generator. The generator decoded the slot value multiple times for each possible (domain, slot) pair, then a slot gate was applied to decide which pair belonged to the dialogue. The output distribution was a copy of the slot values belonging to the selected (domain, slot) pairs from vocabulary and dialogue history. used a different copy strategy from TRADE. Instead of using the whole dialogue history as the copy source, they copied state values from user utterances and system messages respectively, which took the slot-level context as input. proposed slot connection mechanism to efficiently utilize existing states from other domains. 
Attention weights were calculated to measure the connection between the target slot and related slot-value tuples in other domains. Three distributions over token generation, dialogue context copying, and past state copying were finally gated and fused to predict the next token. combined a pointer network with a template-based tree decoder to fill the templates recursively and hierarchically. Copy mechanisms also alleviated the problem of expensive data annotation in end-to-end task-oriented dialogue systems. Copy-augmented dialogue generation models were proven to perform significantly better than strong baselines with limited domain-specific or multi-domain data~. \n\\paragraph*{Copy mechanism for dialogue-related tasks} Pointer networks and CopyNet are also used to solve other dialogue-related tasks. applied a pointer net for online conversation disentanglement. The pointer module pointed to the ancestor message to which the current message replies and a classifier predicted whether two messages belonged to the same thread. In dialogue parsing tasks, the pointer net is used as the backbone parsing model to construct discourse trees~. 
used a pointer-generator framework to perform machine reading comprehension over a long span, where the copy mechanism reduced the need to include target answers in the context.", "id": "d99b10b7-d2d5-4695-943b-2adfea1a87db", "level": "subsubsection", "origin_cites_number": 17, "parent_id": "6b37701b-9c79-447a-9547-3a51e8427dbe", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Pointer Net and CopyNet" ], [ "subsubsection", "CopyNet" ] ], "subsections": [], "title": "CopyNet" }, { "cite_extract_rate": 0.45454545454545403, "cites": [ 1409, 7477, 1910, 1911, 1390 ], "content": "\label{Deep Reinforcement Learning Models and Generative Adversarial Network}\nIn recent years, two exciting approaches have exhibited the potential of artificial intelligence. The first one is deep reinforcement learning, which outperforms humans in many complex problems such as large-scale games, conversations, and car-driving. Another technique is GAN, which shows amazing capability in generation tasks. Data samples generated by GAN models, such as articles, paintings, and even videos, are sometimes indistinguishable from human creations. \nAlphaGo~ rekindled research interest in reinforcement learning in recent years~. Reinforcement learning is a branch of machine learning aiming to train agents to perform appropriate actions while interacting with a certain environment. It is one of the three fundamental machine learning branches, with supervised learning and unsupervised learning being the other two. It can also be seen as an intermediate between supervised learning and unsupervised learning because it only needs weak signals for training. 
\n\\begin{figure}\n\\begin {center}\n\\includegraphics[width=0.60\\textwidth]{ReinforcementLearning_c-eps-converted-to.pdf}\n\\caption{The reinforcement learning framework}\n\\label{The reinforcement learning framework}\n\\end {center}\n\\end{figure} \nFigure~\\ref{The reinforcement learning framework} illustrates the reinforcement learning framework, consisting of an agent and an environment. The framework is a Markov Decision Process (MDP)~, which can be described by a five-tuple M = $\\langle S, A, P, R, \\gamma \\rangle $. $S$ denotes an infinite set of environment states; $A$ denotes a set of actions that agent chooses from conditioned on a given environment state $s$; $P$ is the transition probability matrix in MDP, denoting the probability of an environment state transfer after agent takes an action; $R$ is an average reward the agent receives from the environment after taking an action under state $s$; $\\gamma$ is a discount factor. The flow of this framework is a loop of the following two steps: the agent first makes an observation on the current environment state $s_t$ and chooses an action based on its policy; then according to the transition probability matrix $P$, the environment's state transfers to $s_{t+1}$, and simultaneously provides a reward $r_t$. \nReinforcement learning is applicable to solve many challenges in dialogue systems because of the agent-environment nature of a dialogue system. A two-party dialogue system consists of an agent, which is an intelligent chatbot, and an environment, which is usually a user or a user simulator. Here we mainly discuss deep reinforcement learning. \nDeep reinforcement learning means applying deep neural networks to model the value function or policy of the reinforcement learning framework. ``Deep model\" is in contrast to the ``shallow model\". The shallow model normally refers to traditional machine learning models like Decision Trees or KNN. 
Feature engineering, which is usually based on shallow models, is time and labor consuming, and also over-specified and incomplete. Different from that, deep neural models are easy to design and have a strong fitting capability, which contributes to many breakthroughs in recent research. Deep representation learning gets rid of human labor and exploits hierarchical features in data automatically, which strengthens the semantic expressiveness and domain correlations significantly. \nWe discuss two typical reinforcement models: Deep Q-Networks~ and REINFORCE~. They belong to \\textit{Q-learning} and \\textit{policy gradient} respectively, which are two families of reinforcement learning.", "id": "08350a4c-0975-417d-8781-84e0835277fa", "level": "subsection", "origin_cites_number": 11, "parent_id": "6989df99-9194-4243-be01-3851ccdbbcf9", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Deep Reinforcement Learning Models and Generative Adversarial Networks" ] ], "subsections": [ "56effa6f-a8d6-4b3c-ae24-e55788013b58", "8ba3c013-e2ea-4dc8-8fdb-25b0f01a33a4", "4f044a3f-a039-4b68-b908-6a3a121e297f" ], "title": "Deep Reinforcement Learning Models and Generative Adversarial Networks" }, { "cite_extract_rate": 0.25, "cites": [ 7478 ], "content": "A Deep Q-Network is a value-based RL model. It determines the best policy according to the Q-function: \n\\begin{equation}\n\\pi^{*}(s) = arg\\max_{a}Q^{*}(s, a)\n\\end{equation}\nWhere $Q^{*}(s, a)$ is an optimal Q-function and $\\pi^{*}(s)$ is the corresponding optimal policy. In Deep Q-Networks, the Q function is modeled using a deep neural network, such as CNNs, RNNs, etc. 
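The greedy-policy relation $\pi^{*}(s) = arg\max_{a}Q^{*}(s, a)$ reduces to a row-wise argmax once the Q-function's outputs are laid out per state and action. A minimal sketch with invented Q-values standing in for a trained Q-network:

```python
import numpy as np

# Q[s, a]: invented values for 3 states and 2 actions, not learned from any task.
Q = np.array([[0.1, 0.9],
              [0.7, 0.2],
              [0.4, 0.6]])

def greedy_policy(Q, s):
    """pi*(s) = argmax_a Q*(s, a): pick the action with the highest Q-value."""
    return int(np.argmax(Q[s]))

actions = [greedy_policy(Q, s) for s in range(3)]  # -> [1, 0, 1]
```

With a deep Q-network, `Q[s]` would be replaced by a forward pass that maps the state to one Q-value per action; the argmax step is unchanged.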
\nAs in , the parameters of the Q model are updated using the rule: \n\begin{equation}\n\theta \leftarrow \theta + \alpha \underbrace{\left (r_t+\gamma\max_{a_{t+1}}Q(s_{t+1}, a_{t+1}; \theta) - Q(s_t, a_t; \theta) \right )}_\text{temporal\ difference} \bigtriangledown_{\theta}Q(s_t, a_t; \theta)\n\end{equation}\nWhere $(s_t, a_t, r_t, s_{t+1})$ is an observed transition. $\alpha$ denotes the step size and the parameter update is calculated using temporal difference~. However, this update mechanism suffers from instability and demands a large number of training samples. There are two typical tricks for a more efficient and stable parameter update.\nThe first method is experience replay~. Instead of using one training sample at a time to update the parameters, it uses a buffer to store training samples, and iteratively retrieves training samples from the buffer pool to perform parameter updates. It avoids training on samples whose distribution changes too quickly over time, which increases the learning stability; further, it uses each training sample multiple times, which improves sample efficiency. \nThe second is two-network implementation~. This method uses two networks in Q-function optimization, one being the Q-network and the other a target network. The target network is used to calculate the temporal difference, and its parameters $\theta_{target}$ are frozen while training, aligning with $\theta$ periodically. 
The parameters are then updated with the following rule: \n\\begin{equation}\n\\theta \\leftarrow \\theta + \\alpha \\underbrace{\\left (r_t+\\gamma\\max_{a_{t+1}}Q(s_{t+1}, a_{t+1}; \\theta_{target}) - Q(s_t, a_t; \\theta) \\right )}_\\text{temporal\\ difference\\ with\\ a\\ target\\ network} \\bigtriangledown_{\\theta}Q(s_t, a_t; \\theta)\n\\end{equation}\nSince $\\theta_{target}$ does not change in a period of time, the target network calculates the temporal difference in a stable manner, which facilitates the convergence of training.", "id": "56effa6f-a8d6-4b3c-ae24-e55788013b58", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "08350a4c-0975-417d-8781-84e0835277fa", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Deep Reinforcement Learning Models and Generative Adversarial Networks" ], [ "subsubsection", "Deep Q-Networks" ] ], "subsections": [], "title": "Deep Q-Networks" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 1912 ], "content": "REINFORCE is a policy-based RL algorithm that has no value network. It optimizes the policy directly. The policy is parameterized by a policy network, whose output is a distribution over continuous or discrete actions. A long-term reward is computed for evaluation of the policy network by collecting trajectory samples of length $H$: \n\\begin{equation}\nJ(\\theta) = E \\left[ \\sum_{t=1}^{H}\\gamma^{t-1}r_t|a_t \\sim \\pi(s_t;\\theta) \\right]\n\\end{equation}\n$J(\\theta)$ denotes a long-term reward and the goal is to optimize the policy network in order to maximize $J(\\theta)$. 
Here stochastic gradient ascent\footnote{Stochastic gradient ascent simply uses the negated objective function of stochastic gradient descent.} is used as an optimizer: \n\begin{equation}\n\theta \leftarrow \theta + \alpha \bigtriangledown_{\theta}J(\theta)\n\end{equation}\nWhere $\bigtriangledown_{\theta}J(\theta)$ is computed by: \n\begin{equation}\n\bigtriangledown_{\theta}J(\theta) = \sum_{t=1}^{H-1}\gamma^{t-1}\left(\bigtriangledown_{\theta}log\pi(a_t|s_t; \theta) \sum_{h=t}^{H}\gamma^{h-t}r_h \right)\n\label{bigtriangledown}\n\end{equation} \nBoth models have their advantages: Deep Q-Networks are more sample efficient while REINFORCE is more stable~. REINFORCE is more popular in recent works. Modern research involves larger action spaces, which means that value-based RL models like Deep Q-Networks are often unsuitable for such problems. Value-based methods ``select an action to maximize the value\", which means that their action sets should be discrete and moderate in scale. Policy gradient methods such as REINFORCE, in contrast, predict the action via policy networks directly, which places no restriction on the action space. As a result, policy gradient methods are more suitable for tasks involving a larger action space. \nConsidering the respective benefits of Q-learning and policy gradient methods, some work has been done combining the value- and policy-based methods. The actor-critic algorithm~ was proposed to alleviate the high variance of the gradient estimate in policy gradient methods. It estimates a value function for the term $\sum_{h=t}^{H}\gamma^{h-t}r_h$ in Equation (\ref{bigtriangledown}) and incorporates it in policy optimization. 
Equation (\\ref{bigtriangledown}) is then transformed into the formula below:\n\\begin{equation}\n\\bigtriangledown_{\\theta}J(\\theta) = \\sum_{t=1}^{H-1}\\gamma^{t-1}\\left(\\bigtriangledown_{\\theta}log\\pi(a_t|s_t; \\theta) \\hat{Q}(s_t, a_t, h) \\right)\n\\end{equation} \nWhere $\\hat{Q}(s_t, a_t, h)$ stands for the value function estimated.", "id": "8ba3c013-e2ea-4dc8-8fdb-25b0f01a33a4", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "08350a4c-0975-417d-8781-84e0835277fa", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Deep Reinforcement Learning Models and Generative Adversarial Networks" ], [ "subsubsection", "REINFORCE" ] ], "subsections": [], "title": "REINFORCE" }, { "cite_extract_rate": 0.648648648648648, "cites": [ 1914, 7217, 1921, 7079, 1923, 1920, 1925, 1918, 1928, 7530, 1919, 1876, 1926, 1917, 1878, 1915, 1913, 1686, 1922, 1348, 8507, 1916, 1924, 1927 ], "content": "It is easy to link the actor-critic model with another framework - GANs~ because of their similar inner structure and logic~. Actually, there are quite a few recent works in dialogue systems that train GANs with reinforcement learning framework~. \n\\begin{figure}\n\\begin {center}\n\\includegraphics[width=0.8\\textwidth]{GAN_c-eps-converted-to.pdf}\n\\caption{The GAN framework}\n\\label{The GAN model}\n\\end {center}\n\\end{figure} \nFigure~\\ref{The GAN model} represents the GAN consisting of a generator and a discriminator where the training process can be viewed as a competition between them: the generator tries to generate data distributions to fool the discriminator while the discriminator attempts to distinguish between real data (real) and generated data (fake). 
During training, the generator takes noise as input and generates a data distribution while the discriminator takes real and fake data as input and the binary annotation as the label. The whole GAN model is trained end-to-end as a connection of generator and discriminator to minimize the following cross-entropy losses: \n\begin{equation}\nL_1(D, G) = -E_{\omega\sim P_{data}}[logD(\omega)]-E_{z\sim N(0,I)}[log(1-D(G(z)))]\n\end{equation} \n\begin{equation}\nL_2(D, G) = -E_{z\sim N(0,I)}[logD(G(z))]\n\end{equation} \nWhere $L_1$ and $L_2$ form a bilevel loss, with $D$ and $G$ being the discriminator and generator respectively. $z \sim N (0, I)$ is the noise input of the generator and $\omega$ is the input of the discriminator. \n\paragraph*{Relationship between RL and GAN} GAN can be viewed as a special actor-critic~. In the learning architecture of GAN, the generator acts as the actor and the discriminator acts as the critic or environment which gives the real/fake feedback as a reward. However, the actions taken by the actor cannot change the states of the environment, which means that the learning architecture of GAN is a stateless Markov decision process. Also, the actor has no access to the state of the environment and generates a data distribution simply conditioned on Gaussian noise, which means that the generator in the GAN framework is a blind actor/agent. In a nutshell, GAN is a special actor-critic where the actor is blind and the whole process is a stateless MDP. \\\nThe interactive nature of dialogue systems motivates the wide application of reinforcement learning and GAN models in dialogue research. \n\paragraph*{RL for task-oriented dialogue systems} One common application of reinforcement learning in dialogue systems is the reinforced dialogue management in task-oriented systems. Dialogue state tracking and policy learning are two typical modules of a dialogue manager. and~ trained the dialogue state tracker with reinforcement learning. 
Both of them combined a reward manager into their tracker to enhance tracking accuracy. For the policy learning module, reinforcement learning seems to be the best choice, since almost all recent related works learned the policy with reinforcement learning~. The increasing preference for reinforcement learning in policy learning is attributable to the nature of the task: the model predicts a dialogue action (action) based on the states from the DST module (state), which accords perfectly with the function of the agent in the reinforcement learning framework. \n\\paragraph*{RL for open-domain dialogue systems} Due to the huge action space needed to generate language directly, many open-domain dialogue systems trained with a reinforcement learning framework do not generate responses but instead select responses. Retrieval-based systems have a limited action set and are suitable to be trained in a reinforcement learning scheme. Some works achieved promising performance in retrieval-based dialogue tasks~. However, retrieval systems fail to generalize to all user messages and may give unrelated responses~, which makes generation-based dialogue systems preferable. Still considering the action space problem, some works built their systems by combining retrieval and generative methods~. chose to first retrieve a set of n-best response candidates and then generated responses based on the retrieved results and the user message. Comparatively,~ first generated and retrieved candidate responses with different dialogue models and then trained a scoring model with online reinforcement learning to select responses from both generated and retrieved responses. Since training a generative dialogue agent using reinforcement learning from scratch is particularly difficult, first pretraining the agent with supervised learning to warm-start is a good choice.
,~,~ and~ applied this pretrain-and-finetune strategy on dialogue learning and achieved outstanding performance, which proved that reinforcement learning can improve the response quality of data-driven chatbots. Similarly, pretrain-and-finetune is also applicable to domain transfer problems. Some works pretrained the model in a source domain and expanded the domain area with reinforcement training~. \n\\paragraph*{RL for knowledge grounded dialogue systems} Some systems use reinforcement learning to select from outside information such as personas, documents, knowledge graphs, etc., and generate responses accordingly. and~ performed persona selection and persona-based response generation simultaneously and trained their agents with a reinforcement framework. and~ built document-grounded systems. Similarly, they used reinforcement learning to accomplish document selection and knowledge-grounded response generation. There were also some works combining knowledge graphs into dialogue systems and treating them as an outside knowledge source~. In such a reinforced training framework, the agent chooses an edge based on the current node and state at each step and then combines the knowledge into the response generation process. \n\\paragraph*{RL for dialogue related tasks} Dialogue-related tasks like dialogue relation extraction~, question answering~ and machine reading comprehension~ benefit from reinforcement learning as well because of their interactive nature and the scarcity of annotated data. \n\\paragraph*{GAN for dialogue systems} The application of GAN in dialogue systems is divided into two streams. The first sees the GAN framework applied to enhance response generation~. The discriminator distinguishes generated responses from human responses, which incentivizes the agent (also the generator in GAN) to generate higher-quality responses. Another stream uses GAN as an evaluation tool of dialogue systems~.
After training the generator and discriminator as a whole framework, the discriminator is used separately as a scorer to evaluate the performance of a dialogue agent and was shown to achieve a higher correlation with human evaluation compared with traditional reference-based metrics like BLEU, METEOR, ROUGE-L, etc. We discuss the evaluation of dialogue systems as a challenge in Section~\\ref{Evaluation Approaches}.", "id": "4f044a3f-a039-4b68-b908-6a3a121e297f", "level": "subsubsection", "origin_cites_number": 37, "parent_id": "08350a4c-0975-417d-8781-84e0835277fa", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Deep Reinforcement Learning Models and Generative Adversarial Networks" ], [ "subsubsection", "GANs" ] ], "subsections": [], "title": "GANs" }, { "cite_extract_rate": 0.5454545454545451, "cites": [ 1933, 1931, 1935, 8509, 1932, 1937, 1934, 1913, 1165, 1936, 1929, 1930 ], "content": "\\label{Knowledge Graph Augmented Neural Networks}\t \nSupervised training with annotated data tries to learn the knowledge distribution of a dataset. However, a dataset is comparatively sparse and thus learning a reliable knowledge distribution needs a huge amount of annotated data~. \nKnowledge Graph (KG) is attracting more and more research interests in recent years. KG is a structured knowledge source consisting of entities and their relationships~. In other words, KG is the knowledge facts presented in graph format. \n\\begin{figure}\n\\begin {center}\n\\includegraphics[width=0.7\\textwidth]{KG_c-eps-converted-to.pdf}\n\\caption{Entities and relations in knowledge graph~}\n\\label{Entities and relations in knowledge graph}\n\\end {center}\n\\end{figure} \nFigure~\\ref{Entities and relations in knowledge graph} shows an example of a KG consisting of entities and their relationships. 
A KG is stored in triples under the Resource Description Framework (RDF). For example, Albert Einstein, University of Zurich, and their relationship can be expressed as \\textit{(Albert Einstein, GraduateFrom, University of Zurich)}. \nKnowledge graph augmented neural networks first represent the entities and their relations in a lower-dimensional space, then use a neural model to retrieve relevant facts~. Knowledge graph representation learning can generally be divided into two categories: structure-based representations and semantically-enriched representations. Structure-based representations use multi-dimensional vectors to represent entities and relations. Models such as TransE~, TransR~, TransH~, TransD~, TransG~, TransM~, HolE~ and ProjE~ belong to this category. Semantically-enriched representation models like NTN~, SSP~ and DKRL~ combine semantic information into the representation of entities and relations. The neural retrieval models also follow two main directions: distance-based matching models and semantic matching models. Distance-based matching models~ consider the distance between projected entities, while semantic matching models~ calculate the semantic similarity of entities and relations to retrieve facts. \\\\\n\\paragraph*{Knowledge graph augmented dialogue systems} Knowledge-grounded dialogue systems benefit greatly from the structured knowledge format of KG, where facts are widely intercorrelated. Reasoning over a KG is an ideal approach for combining commonsense knowledge into response generation, resulting in accurate and informative responses~. proposed AttnIO, a bi-directional graph exploration model for knowledge retrieval in knowledge-grounded dialogue systems. Attention weights were calculated at each traversing step, and thus the model could choose a broader range of knowledge paths instead of choosing only one node at a time. In such a scheme, the model could predict adequate paths even when having only the destination node as the label.
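The distance-based intuition behind structure-based models such as TransE, mentioned above, can be sketched numerically: a triple (head, relation, tail) scores well when the head embedding plus the relation embedding lands near the tail embedding. The 3-dimensional toy embeddings below are illustrative assumptions, not vectors from any trained model.

```python
# Minimal sketch of the TransE scoring idea: head + relation ~ tail
# should hold for plausible triples. The toy embeddings are assumptions.

def transe_distance(head, relation, tail):
    """L2 distance ||h + r - t||; lower means a more plausible triple."""
    return sum((h + r - t) ** 2 for h, r, t in zip(head, relation, tail)) ** 0.5

h = [1.0, 0.0, 2.0]            # e.g. embedding of a head entity
r = [0.5, 1.0, -1.0]           # embedding of a relation
true_tail = [1.5, 1.0, 1.0]    # lies exactly at h + r
wrong_tail = [0.0, 0.0, 0.0]   # an unrelated entity

plausible = transe_distance(h, r, true_tail)
implausible = transe_distance(h, r, wrong_tail)
```

In actual TransE training, such embeddings are learned with a margin-based ranking loss that pushes the distance of corrupted (negative) triples above that of observed ones.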
built ConceptFlow, a dialogue agent that guided conversations toward more meaningful future topics. It traversed a commonsense knowledge graph to explore concept-level conversation flows. Finally, it used a gate to decide whether to generate from vocabulary words, central concept words, or outer concept words. proposed to generate persona-based responses by first using COMET~ to expand a persona sentence in context along 9 relation types and then applying a pretrained model to generate responses based on the dialogue history and the persona variable. used a knowledge graph as an external knowledge source in task-oriented dialogue systems to incorporate domain-specific knowledge in the response. First, the dialogue history was parsed as a dependency tree and encoded into a fixed-length vector. Then they applied multi-hop reasoning over the graph using the attention mechanism. The decoder finally predicted tokens either by copying from graph entities or by generating vocabulary words. proposed DialKG Walker for the conversational reasoning task. They computed a zero-shot relevance score between the predicted KG embedding and the ground KG embedding to facilitate cross-domain predictions. Furthermore, they applied an attention-based graph walker to generate graph paths based on the relevance scores. evaluated dialogue systems by combining the utterance-level contextualized representation and the topic-level graph representation. They first constructed the dialogue graph based on encoded (context, response) pairs and then reasoned over the graph to get a topic-level graph representation.
The final score was calculated by passing the concatenated vector of contextualized representation and graph representation to a feed-forward network.", "id": "7dc8033c-2bea-42f9-8af0-47edede88ab2", "level": "subsection", "origin_cites_number": 22, "parent_id": "6989df99-9194-4243-be01-3851ccdbbcf9", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Neural Models in Dialogue Systems" ], [ "subsection", "Knowledge Graph Augmented Neural Networks" ] ], "subsections": [], "title": "Knowledge Graph Augmented Neural Networks" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 1149 ], "content": "\\label{Task-oriented Dialogue Systems} \nThis section introduces task-oriented dialogue systems including modular and end-to-end systems. Task-oriented systems solve specific problems in a certain domain such as movie ticket booking, restaurant table reserving, etc. We focus on deep learning-based systems due to the outstanding performance. For readers who want to learn more about traditional rule-based and statistical models, there are several surveys to refer to~. \nThis section is organized as follows. We first discuss modular and end-to-end systems respectively by introducing the principles and reviewing recent works. After that, we comprehensively discuss related challenges and hot topics for task-oriented dialogue systems in recent research to provide some important research directions. \n\\begin{figure}\n\\begin {center}\n\\includegraphics[width=1\\textwidth]{TODS_c-eps-converted-to.pdf}\n\\caption{Structure of a task-oriented dialogue system in the task-completion pipeline}\n\\label{Structure of a task-oriented dialogue system in the task-completion pipeline}\n\\end {center}\n\\end{figure} \nA task-oriented dialogue system requires stricter response constraints because it aims to accurately handle the user message. 
Therefore, modular methods were proposed to generate responses in a more controllable way. The architecture of a modular-based system is depicted in Figure~\\ref{Structure of a task-oriented dialogue system in the task-completion pipeline}. It consists of four modules:\n\\textbf{Natural Language Understanding (NLU)}. This module converts the raw user message into semantic slots, together with classifications of domain and user intention. However, some recent modular systems omit this module and use the raw user message as the input of the next module, as shown in Figure~\\ref{Structure of a task-oriented dialogue system in the task-completion pipeline}. Such a design aims to reduce the propagation of errors between modules and alleviate the impact of the original error~. \n\\textbf{Dialogue State Tracking (DST)}. This module iteratively calibrates the dialogue states based on the current input and dialogue history. The dialogue state includes related user actions and slot-value pairs. \n\\textbf{Dialogue Policy Learning}. Based on the calibrated dialogue states from the DST module, this module decides the next action of a dialogue agent. \n\\textbf{Natural Language Generation (NLG)}. This module converts the selected dialogue actions into surface-level natural language, which is usually the ultimate form of response. \nAmong them, Dialogue State Tracking and Dialogue Policy Learning constitute the Dialogue Manager (DM), the central controller of a task-oriented dialogue system. Usually, a task-oriented system also interacts with an external Knowledge Base (KB) to retrieve essential knowledge about the target task. 
For example, in a movie ticket booking task, after understanding the requirement of the user message, the agent interacts with the movie knowledge base to search for movies with specific constraints such as movie name, time, cinema, etc.", "id": "d107f8de-54e0-4843-bd82-941b70519bc2", "level": "section", "origin_cites_number": 6, "parent_id": "743f4886-7aea-4caf-9d50-52e5013fbaf3", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Task-oriented Dialogue Systems" ] ], "subsections": [ "911ba578-546a-4bd9-9297-a80c87034374", "8f1634d2-eae5-4272-8131-ae5350bc959b", "26825009-67d7-4477-ab8a-347c8195cab6", "00985231-ac70-4498-9a05-cfa0cd269aca", "4d7d2060-883f-4a0b-9ff6-e0a44b6921c4", "a753676c-7e09-4d59-8e23-8df1a027166a" ], "title": "Task-oriented Dialogue Systems" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 9117, 7483, 1938, 1939, 8510, 7528, 1942, 1940, 1941 ], "content": "\\label{Natural Language Understanding}\nIt has been proven that the NLU module impacts the whole system significantly in the term of response quality~. The NLU module converts the natural language message produced by the user into semantic slots and performs classification. Table~\\ref{The output example of an NLU module} shows an example of the output format of the NLU module. The NLU module manages three tasks: domain classification, intent detection, and slot filling. Domain classification and intent detection are classification problems, which use classifiers to predict a mapping from the input language sequence to a predefined label set. In the given example, the predicted domain is ``\\textit{movie}\" and the intent is ``\\textit{find\\_movie}\". Slot filling is a tagging problem, which can be viewed as a sequence-to-sequence task. It maps a raw user message into a sequence of slot names. 
In the example, the NLU module reads the user message ``\\textit{Recommend a movie at Golden Village tonight.}\" and outputs the corresponding tag sequence. It recognizes ``\\textit{Golden Village}\" as the place to go, which is tagged as ``\\textit{B\\_desti}\" and ``\\textit{I\\_desti}\" for the two words respectively. Similarly, the token ``\\textit{tonight}\" is converted into ``\\textit{B\\_time}\". `B' represents the beginning of a chunk, and `I' indicates that this tag is inside a target chunk. For those unrelated tokens, an `O' is used indicating that this token is outside of any chunk of interest. This tagging method is called Inside-Outside-Beginning (IOB) tagging~, which is a common method in Named-Entity Recognition (NER) tasks. \n\\begin{table}\n\\caption{The output example of an NLU module}\n\\label{The output example of an NLU module} \n\\centering\n\\begin{tabular}{|c|ccccccc|}\n\\hline\n\\textbf{Sentence} & \\cellcolor{grey5}Recommend & \\cellcolor{grey5}a & \\cellcolor{grey5}movie & \\cellcolor{grey5}at & \\cellcolor{blue4}Golden & \\cellcolor{blue4}Village & \\cellcolor{orange4}tonight \\\\\\cline{1-1}\n\\textbf{Slots} & \\cellcolor{grey5}O & \\cellcolor{grey5}O & \\cellcolor{grey5}O & \\cellcolor{grey5}O & \\cellcolor{blue4}B-desti & \\cellcolor{blue4}I-desti & \\cellcolor{orange4}B-time\\\\\\hline\n\\textbf{Intent} & & & & find\\_movie & & & \\\\\\hline\n\\textbf{Domain} & & & & movie & & & \\\\\\hline\n\\end{tabular}\n\\end{table}\n\\paragraph*{Techniques for domain classification and intent detection} Domain classification and intent detection belong to the same category of tasks. Deep learning methods are proposed to solve the classification problems of dialogue domain and intent. and~ were the first who successfully improved the recognition accuracy of dialogue intent. They built deep convex networks to combine the predictions of a prior network and the current utterances as an integrated input of a current network. 
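As a concrete complement to the IOB scheme above, the tagged example in the table can be decoded into slot-value pairs with a minimal sketch; this decoding routine is illustrative post-processing, not a component of any cited system.

```python
# Sketch of decoding an IOB tag sequence into {slot_name: value} pairs,
# the usual post-processing step after slot filling.

def decode_iob(tokens, tags):
    """Group B-/I- tagged tokens into slot-value pairs."""
    slots, name, chunk = {}, None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if name:                      # close any open chunk first
                slots[name] = " ".join(chunk)
            name, chunk = tag[2:], [tok]
        elif tag.startswith("I-") and name == tag[2:]:
            chunk.append(tok)             # continue the current chunk
        else:                             # "O" or an inconsistent tag
            if name:
                slots[name] = " ".join(chunk)
            name, chunk = None, []
    if name:                              # flush a chunk ending the sentence
        slots[name] = " ".join(chunk)
    return slots

tokens = ["Recommend", "a", "movie", "at", "Golden", "Village", "tonight"]
tags = ["O", "O", "O", "O", "B-desti", "I-desti", "B-time"]
slots = decode_iob(tokens, tags)
```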
A deep learning framework was also used to classify the dialogue domain and intent in a semi-supervised fashion~. To address the difficulty of training a deep neural network for domain and intent prediction, Restricted Boltzmann Machines (RBMs) and Deep Belief Networks (DBNs) were applied to initialize the parameters of deep neural networks~. To make use of the strengths of RNNs in sequence processing, some works used RNNs as utterance encoders and made predictions for intent and domain categories~. used a CNN to extract hierarchical text features for intent detection and illustrated the sequence classification capabilities of CNNs. proposed a model for intent classification of short utterances. Short utterances are hard for intent detection because of the lack of information in a single dialogue turn. This paper used RNN and CNN architectures to incorporate the dialogue history, thus obtaining the context information as an additional input besides the current turn's message. The model achieved promising performance on three intent classification datasets. More recently,~ pretrained Task-Oriented Dialogue BERT (TOD-BERT) and significantly improved the accuracy in the intent detection sub-task. The proposed model also exhibited a strong few-shot learning capability and could effectively alleviate the data insufficiency issue in a specific domain. \n\\paragraph*{Techniques for slot filling} The slot filling problem is also called semantic tagging, a sequence classification problem. It is more challenging in that the model needs to predict multiple labels at a time. Deep Belief Nets (DBNs) exhibit promising capabilities in the learning of deep architectures and have been applied in many tasks including semantic tagging. used a DBN-initialized neural network to complete slot filling in the call-routing task. built a DBN-based sequence tagger.
In addition to the NER input features used in traditional taggers, they also combined part of speech (POS) and syntactic features as a part of the input. The recurrent architectures benefited the sequence tagging task in that they could keep track of the information along past timesteps to make the most of the sequential information. first argued that instead of simply predicting words, RNN Language Models (RNN-LMs) could be applied in sequence tagging. On the output side of RNN-LMs, tag labels were predicted instead of normal vocabularies. and~ further investigated the impact of different recurrent architectures in the slot filling task and found that all RNNs outperformed the Conditional Random Field (CRF) baseline. As a powerful recurrent model, LSTM showed promising tagging accuracy on the ATIS dataset owing to the memory control of its gate mechanism~. \n argued that the shallow output representations of traditional semantic tagging lacked the ability to represent the structured dialogue information. To improve, they treated the slot filling task as a template-based tree decoding task by iteratively generating and filling in the templates. Different from traditional sequence tagging methods,~ tackled the slot filling task by treating it as a turn-based span extraction task. They applied the conversational pretrained model ConveRT and utilized the rich semantic information embedded in the pretrained vectors to solve the problem of in-domain data insufficiency. The inputs of ConveRT are the requested slots and the utterance, while the output is a span of interest as the slot value. \n\\paragraph*{Unifying domain classification, intent detection, and slot filling} Some works choose to combine domain classification, intent detection, and slot filling into a multitask learning framework to jointly optimize the shared latent space. applied a bi-directional RNN-LSTM architecture to jointly perform three tasks. 
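The joint objective shared by these multi-task NLU models can be illustrated with a toy numeric sketch: one cross-entropy term for intent classification plus an averaged per-token term for slot tagging. The probabilities and the weighting factor below are assumptions for illustration only, not values from any cited work.

```python
import math

# Toy sketch of a joint NLU loss: intent cross-entropy plus an averaged
# per-token slot-tagging cross-entropy, combined with a weighting factor.

def cross_entropy(prob_of_gold):
    """Negative log-likelihood of the gold label."""
    return -math.log(prob_of_gold)

def joint_nlu_loss(intent_prob, slot_probs, slot_weight=1.0):
    """L = CE(intent) + slot_weight * mean CE over slot tags."""
    intent_loss = cross_entropy(intent_prob)
    slot_loss = sum(cross_entropy(p) for p in slot_probs) / len(slot_probs)
    return intent_loss + slot_weight * slot_loss

# The model assigns 0.8 to the gold intent and the listed per-token
# probabilities to the gold slot tags.
loss = joint_nlu_loss(0.8, [0.9, 0.7, 0.95])
```

Minimizing such a combined loss updates the shared encoder with gradients from both sub-tasks, which is the mechanism behind the shared latent space mentioned above.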
augmented the traditional RNN encoder-decoder model with an attention mechanism to manage intent detection and slot filling. The slot filling applied explicit alignment. proposed an end-to-end memory network and used a memory module to store user intent and slot values in history utterances. Attention was further applied to iteratively select relevant intent and slot values at the decoding stage. Multi-task learning of three NLU subtasks contributed to the domain scaling and facilitated the zero-shot or few-shot training when transferring to a new domain~. captured the hierarchical structure of dialogue semantics in NLU multi-task learning by applying a capsule-based neural network. With a dynamic routing-by-agreement strategy, the proposed architecture raised the accuracy of both intent detection and slot filling on the SNIPS-NLU and ATIS dataset. \n\\paragraph*{Novel perspectives} More recently, some novel ideas appear in NLU research, which provides new possibilities for further improvements. Traditional NLU modules rely on the text converted from the audio message of the user using the Automatic Speech Recognition (ASR) module. However,~ jumped over the ASR module and directly used audio signals as the input of NLU. They found that by reducing the module numbers of a pipeline system, the predictions were more robust since fewer errors were broadcasted. argued that Natural Language Understanding (NLU) and Natural Language Generation (NLG) were reversed processes. Thus, their dual relationship could be exploited by training with a dual-supervised learning framework. 
The experiments exhibited improvement in both tasks.", "id": "911ba578-546a-4bd9-9297-a80c87034374", "level": "subsection", "origin_cites_number": 27, "parent_id": "d107f8de-54e0-4843-bd82-941b70519bc2", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Task-oriented Dialogue Systems" ], [ "subsection", "Natural Language Understanding" ] ], "subsections": [], "title": "Natural Language Understanding" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 1946, 1947, 1948, 1944, 1943, 1891, 1945, 1909, 1897 ], "content": "\\label{Dialogue State Tracking }\nDialogue State Tracking (DST) is the first module of a dialogue manager. It tracks the user's goal and related details every turn based on the whole dialogue history to provide the information based on which the Policy Learning module (next module) decides the agent action to make. \n\\paragraph*{Differences between NLU and DST} The NLU and DST modules are closely related. Both NLU and DST perform slot filling for the dialogue. However, they actually play different roles. The NLU module tries to make classifications for the current user message such as the intent and domain category as well as the slot each message token belongs to. For example, given a user message ``\\textit{Recommend a movie at Golden Village tonight.}\", the NLU module will convert the raw message into ``$inform (domain = movie;\\ destination = Golden Village;\\ date = today;\\ time = evening)$\", where the slots are usually filled by tagging each word of the user message as described in Section~\\ref{Natural Language Understanding}. However, the DST module does not classify or tag the user message. Instead, it tries to find a slot value for each slot name in a pre-existing slot list based on the whole dialogue history. 
For example, there is a pre-existing slot list ``$intent:\\_;\\ domain:\\_;\\ name:\\_;\\ pricerange:\\_;\\ genre:\\_;\\ destination:\\_;\\ date:\\_$\", where the underscore behind the colon is a placeholder denoting that this place can be filled with a value. Every turn, the DST module will look up the whole dialogue history up to the current turn and decide which content can be filled in a specific slot in the slot list. If the user message ``\\textit{Recommend a movie at Golden Village tonight.}\" is the only message in a dialogue, then the slot list can be filled as ``$intent: inform;\\ domain: movie;\\ name: None;\\ pricerange: None;\\ genre: None;\\ destination: Golden Village;\\ date: today$\", where the slots unspecified by the user up to current turn can be filled with ``$None$\". To conclude, the NLU module tries to tag the user message while the DST module tries to find values from the user message to fill in a pre-existing form. Some dialogue systems took the output of the NLU module as the input of DST module~, while others directly used raw user messages to track the state~. \nDialogue State Tracking Challenges (DSTCs), a series of popular challenges in DST, provides benchmark datasets, standard evaluation frameworks, and test-beds for research~. The DSTCs cover many domains such as restaurants, tourism, etc. \nA dialogue state contains all essential information to be conveyed in the response~. As defined in DSTC2~, the dialogue state of a given dialogue turn consists of informable slots \\textit{Sinf} and requestable slots \\textit{Sreq}. Informable slots are attributes specified by users to constrain the search of the database while requestable slots are attributes whose values are queried by the user. For example, the serial number of a movie ticket is usually a requestable slot because users seldom assign a specific serial number when booking a ticket. 
Specifically, the dialogue state has three components:\n\\begin{itemize}\n \\item \\textbf{Goal constraint corresponding with informable slots}. The constraints can be specific values mentioned by the user in the dialogue or a special value. Special values include \\textit{Dontcare} indicating the user's indifference about the slot and \\textit{None} indicating that the user has not specified the value in the conversation yet. \n \\item \\textbf{Requested slots}. It can be a list of slot names queried by the user seeking answers from the agent. \n \\item \\textbf{Search method of current turn}. It consists of values indicating the interaction categories. \\textit{By constraints} denotes that the user tries to specify constraint information in his requirement; \\textit{by alternatives} denotes that the user requires an alternative entity; \\textit{finished} indicates that the user intends to end the conversation. \n\\end{itemize}\nHowever, considering the numerous challenges such as tracking efficiency, tracking accuracy, domain adaptability, and end-to-end training, many alternative representations have been proposed recently, which will be discussed later. \n\\begin{figure}\n\\begin {center}\n\\includegraphics[width=1\\textwidth]{DSTeg-eps-converted-to.pdf}\n\\caption{An example of DST procedure~}\n\\label{An example Dialogue State Tracking procedure}\n\\end {center}\n\\end{figure} \nFigure~\\ref{An example Dialogue State Tracking procedure} is an example of the DST process for 4 dialogue turns in a restaurant table booking task. The first column includes the raw dialogue utterances, with $S$ denoting the system message and $U$ denoting the user message. The second column includes the N-best output lists of the NLU module and their corresponding confidence scores. The third column includes the labels of a turn, indicating the ground truth slot-value pairs. The fourth column includes the example DST outputs and their corresponding confidence scores. 
The fifth column indicates the correctness of the tracker output. \nEarlier works use hand-craft rules or statistical methods to solve DST tasks. While widely used in industry dialogue systems, rule-based DST methods~ have many restrictions such as limited generalization, high error rate, low domain adaptability, etc~. Statistical methods~ also suffer from noisy conditions and ambiguity~. \nRecently, many neural trackers have emerged. Neural trackers have multiple advantages over rule-based and statistical trackers. In general, they are categorized into two streams. The first stream has predefined slot names and values, and each turn the DST module tries to find the most appropriate slot-value pairs based on the dialogue history; the second stream does not have a fixed slot value list, so the DST module tries to find the values directly from the dialogue context or generate values based on the dialogue context. Obviously, the latter one is more flexible and in fact, more and more works are solving DST in the second way. We discuss the works of both categories here. \n\\paragraph*{Neural trackers with predefined slot names and values} The first stream can be viewed as a multi-class or multi-hop classification task. For multi-class classification DST, the tracker predicts the correct class from multiple values but this method suffers from high complexity when the value set grows large. On the other hand, for the multi-hop classification tasks, the tracker reads only one slot-value pair at a time and performs binary prediction. Working in this fashion reduces the model complexity but raises the system reaction time since for each slot there will be multiple tracking processes. was the first who used a deep learning model in the DST tasks. They integrated many feature functions (e.g., SLU score, Rank score, Affirm score, etc.) as the input of a neural network, then predict the probability of each slot-value pair. 
applied an RNN as a neural tracker to gain awareness on dialogue context. proposed a multi-hop neural tracker which took the system output and user utterances as the first two inputs (to model the dialogue context), and the candidate slot-value pairs as the third input. The tracker finally made a binary prediction on the current slot-value pair based on the dialogue history. \n\\paragraph*{Neural trackers with unfixed slot names and values} The second stream attracts more attention because it not only reduces the model and time complexity of DST tasks but also facilitates end-to-end training of task-oriented dialogue systems. Moreover, it is also flexible when the target domain changes. proposed belief span, a text span of the dialogue context corresponding to a specific slot. They built a two-stage CopyNet to copy and store slot values from the dialogue history. The slots were stored to prepare for neural response generation. The belief span facilitated the end-to-end training of dialogue systems and increased the tracking accuracy in out-of-vocabulary cases. Based on this,~ proposed the minimal belief span and argued that it was not scalable to generate belief states from scratch when the system interacted with APIs from diverse domains. The proposed MinTL framework operated \\textit{insertion (INS)}, \\textit{deletion (DEL)} and \\textit{substitution (SUB)} on the dialogue state of last turn based on the context and the minimal belief span. proposed the TRADE model. The model also applied the copy mechanism and used a soft-gated pointer-generator to generate the slot value based on the domain-slot pair and encoded dialogue context. argued that simply concatenating the dialogue context was not preferable. Alternatively, they used \\textit{[sys]} and \\textit{[usr]} to discriminate the system and user messages. This simple long context modeling method achieved a 7.03\\% improvement compared with the baseline. 
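Stepping back from individual models, the form-filling view of DST described at the beginning of this subsection (a pre-existing slot list initialized with the special value None and overwritten turn by turn) can be sketched as follows; the slot names reuse the illustrative list given earlier, and the update rule is a deliberately simplified stand-in for the neural trackers surveyed here.

```python
# Simplified form-filling sketch of DST: a fixed slot list starts at None
# and is overwritten each turn with the (slot, value) pairs extracted from
# the user message. Slot names follow the illustrative list above.

SLOT_NAMES = ["intent", "domain", "name", "pricerange", "genre",
              "destination", "date"]

def init_state():
    """All slots start with the special value None (not yet specified)."""
    return {slot: None for slot in SLOT_NAMES}

def update_state(state, turn_slots):
    """One DST step: overwrite tracked values with this turn's findings."""
    new_state = dict(state)
    for slot, value in turn_slots.items():
        if slot in new_state:
            new_state[slot] = value
    return new_state

state = init_state()
state = update_state(state, {"intent": "inform", "domain": "movie",
                             "destination": "Golden Village", "date": "today"})
```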
proposed Tree Encoder-Decoder (TED) architecture which utilized a hierarchical tree structure to represent the dialogue states and system acts. The TED generated tree-structured dialogue states of the current turn based on the dialogue history, dialogue action, and dialogue state of the last turn. This approach led to a 20\\% improvement on the state-of-the-art DST baselines which represented dialogue states and user goals in a flat space. built an interactive encoder to exploit the dependencies within a turn and between turns. Furthermore, they used the attention mechanism to construct the slot-level context for user and system respectively, which were embedding vectors based on which the generator copied values from the dialogue context. applied BERT to perform multi-task learning and generated the dialogue state. They first encoded word-level and turn-level contexts. Then they retrieved the relevant information for each slot from the context by applying both word-level and turn-level attention. Furthermore, the slot values were predicted based on the retrieved information. Similarly,~ used BERT for slot value prediction. They performed Slot Attention (SA) to retrieve related spans and Value Normalization (VN) to convert the spans into final values. 
proposed the Meta-Reinforced MultiDomain State Generator (MERET), a dialogue state generator further finetuned with policy-gradient reinforcement learning.
\label{Policy Learning}
The policy learning module is the other component of a dialogue manager. This module controls which action will be taken by the system based on the output dialogue states from the DST module. Assuming that we have the dialogue state $S_t$ of the current turn and the action set $A = \{a_1, ..., a_n\}$, the task of this module is to learn a mapping function $f$: $S_t \to a_i \in A$. This module is comparatively simpler than other modules in terms of task definition, but the task itself is challenging~. For example, in the tasks of movie ticket and restaurant table booking, if the user books a two-hour movie slot and intends to go for dinner afterwards, then the agent should be aware that the time gap between the movie slot and the restaurant slot has to be more than two hours, since the commuting time from the cinema to the restaurant must be considered.
Supervised learning and reinforcement learning are the mainstream training methods for dialogue policy learning~. Policies learned in a supervised fashion exhibit great decision-making ability~. In some specific tasks, the supervised policy model can complete tasks precisely, but the training process depends entirely on the quality of the training data.
Moreover, the annotated datasets require intensive human labor, and the decision ability is restricted by the specific task and domain, showing weak transfer capability. With the prevalence of reinforcement learning methods, more and more task-oriented dialogue systems use reinforcement learning to learn the policy. Dialogue policy learning fits the reinforcement learning setting, since a reinforcement learning agent likewise learns a policy mapping environment states to actions.
Usually, the environment of reinforcement policy learning is a real or simulated user, a setting in which the training is called online learning. However, it is data- and time-consuming to learn a policy from scratch in the online learning scenario, so warm-start methods are needed to speed up the training process. used expert data to restrict the initial action-space exploration. applied a teacher-student learning framework to transfer the teacher's expert knowledge to the target network in order to warm-start the system.
\paragraph*{Reinforcement policy learning techniques} Almost all recent dialogue policy learning works are based on reinforcement learning methods. Online learning is an ideal approach to get training samples iteratively for a reinforcement learning agent, but human labor is very limited. proposed Budget-Conscious Scheduling (BCS) to better utilize limited user interactions, where the user interaction is seen as the budget. The BCS used a probability scheduler to allocate the budget during training. Also, a controller decided whether to use real user interactions or simulated ones. Furthermore, a goal-based sampling model was applied to simulate the experiences for policy learning. Such a budget-controlling mechanism achieved ideal performance in the practical training process.
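The mapping $f: S_t \to a_i$ that a reinforcement-learned policy approximates can be illustrated with a tabular Q-learning toy. The states, actions, and reward below are hypothetical stand-ins for the learned representations a real dialogue agent would use.

```python
# Toy tabular Q-learning view of dialogue policy learning: the policy
# maps a dialogue state S_t to the action with the highest learned
# Q value. States, actions, and the reward are hypothetical.

ACTIONS = ["request_area", "request_food", "inform_restaurant"]

def q_update(Q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One Q-learning backup for a (state, action, reward, next_state) turn."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

def greedy_policy(Q, state):
    """The learned mapping f: S_t -> a, realized as an argmax over Q."""
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

Q = {}
# one simulated dialogue turn: informing a restaurant once slots are filled
q_update(Q, "slots_filled", "inform_restaurant", reward=1.0, next_state="end")
print(greedy_policy(Q, "slots_filled"))
```

In practice the Q table is replaced by a neural network (as in the DQN-style and MCTS-based approaches discussed below), but the state-to-action mapping being learned is the same.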
Considering the difficulty of obtaining real online user interactions and the huge amount of annotated data required to train user simulators,~ proposed Multi-Agent Dialog Policy Learning, in which two agents interact with each other, acting as the user and the system respectively and learning their policies simultaneously. Furthermore, they incorporated a role-specific reward to facilitate role-based response generation. A high task completion rate was observed in experiments. introduced Monte Carlo Tree Search with Double-q Dueling network (MCTS-DDU), where decision-time planning was proposed instead of background planning. They used Monte Carlo simulation to perform a tree search over the dialogue states. trained expert demonstrators in a weakly supervised fashion to perform Deep Q-learning from Demonstrations (DQfD). Furthermore, Reinforced Fine-tune Learning was proposed to facilitate domain transfer. In reinforcement dialogue policy learning, the agent usually receives feedback only at the end of the dialogue, which is not efficient for learning. proposed an innovative reward learning method that constrained the dialogue progress according to expert demonstrations. The expert demonstrations could be annotated or not, so the approach was not labor-intensive. proposed to co-generate the dialogue actions and responses to maintain the inherent semantic structures of dialogue. Similarly,~ proposed a unified framework to simultaneously perform dialogue state tracking, dialogue policy learning, and response generation. Experiments showed that unified frameworks perform better both on their sub-tasks and in domain adaptability. used a knowledge graph to provide prior knowledge of the action set and solved the policy learning task in a graph-grounded fashion. By combining a knowledge graph, a long-term reward was obtained to provide the policy agent with a long-term vision while choosing actions.
Also, the candidate actions were of higher quality due to the prior knowledge, and the policy learning was performed in a more controllable way.
\label{Natural Language Generation}
Natural Language Generation (NLG) is the last module of a task-oriented dialogue system pipeline. It converts the dialogue actions generated by the dialogue manager into a final natural language representation. For example, assuming ``\textit{Inform (name = Wonder Woman;\ genre = Action;\ desti = Golden Village)}'' to be the dialogue action from the policy learning module, the NLG module converts it into a language representation such as ``\textit{There is an action movie named Wonder Woman at Golden Village.}''
Traditional NLG modules are pipeline systems. As defined by~, the standard pipeline of NLG consists of four components, as shown in Figure~\ref{The pipeline NLG system}.
\begin{figure}
\begin{center}
\includegraphics[width=1\textwidth]{NLG_c-eps-converted-to.pdf}
\caption{The pipeline NLG system}
\label{The pipeline NLG system}
\end{center}
\end{figure}
The core modules of this pipeline are Content Determination, Sentence Planning, and Surface Realization, as proposed by~. further improved the NLG pipeline by adding three more components: lexicalization, referring expression generation, and aggregation. However, this model has the drawback that the system input is ambiguous.
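As a toy illustration of the surface-realization step, the dialogue action in the example above can be realized by filling a delexicalized template. The template and slot names are simplified assumptions; the neural generators discussed next replace this hand-written mapping with learned models.

```python
# Toy surface realization: turn a dialogue action such as
# Inform(name=..., genre=..., desti=...) into a natural-language
# response by filling a delexicalized template. The template and
# slot names are illustrative assumptions only.

def realize(act_type, slots, templates):
    """Fill the placeholders of the template chosen by the act type."""
    template = templates[act_type]
    return template.format(**slots)

templates = {
    "Inform": "There is an {genre} movie named {name} at {desti}.",
}
action = {"name": "Wonder Woman", "genre": "action", "desti": "Golden Village"}
print(realize("Inform", action, templates))
# -> There is an action movie named Wonder Woman at Golden Village.
```

Template systems of this kind are reliable but rigid, which is precisely the limitation that motivates the data-driven generators reviewed below.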
\paragraph*{End-to-end NLG techniques} Deep learning methods were further applied to enhance NLG performance, and the pipeline was collapsed into a single module. End-to-end natural language generation has achieved promising improvements and is the most popular way to perform NLG in recent work. argued that language generation should be fully data-driven and not depend on any expert rules. They proposed a statistical language model based on RNNs to learn response generation with semantic constraints and grammar trees. Additionally, they used a CNN reranker to further select better responses. Similarly, an LSTM model was used by~ to learn sentence planning and surface realization simultaneously. further improved the generation quality on multiple domains using GRUs. The proposed generator consistently generated high-quality responses on multiple domains. To improve the domain adaptability of recurrent models,~ proposed to first train the recurrent language model on data synthesized from out-of-domain datasets and then finetune it on a comparatively smaller in-domain dataset. This training strategy proved effective in human evaluation. Context awareness is important in dialogue response generation because depending only on the dialogue action of the current turn may cause illogical responses. built an attention-based Context-Aware LSTM (CA-LSTM) combining target user questions, all semantic values, and dialogue actions as input to generate context-aware responses in QA. Likewise,~ concatenated the preceding user utterance with the dialogue action vector and fed it into an LSTM model. put a syntax constraint on their neural response generator. A two-stage sequence generation process was proposed: first, a syntactic dependency tree was generated to obtain a structured representation of the dialogue utterance; then, the second-stage generator integrated sentence planning and surface realization and produced natural language representations.
\paragraph*{Robust natural language generation} More recent works have focused on the reliability and quality of generated responses. A tree-structured semantic representation was proposed by~ to achieve better content planning and surface realization performance. They further designed a novel beam search algorithm to improve the semantic correctness of the generated responses. To avoid mistakes such as missing or redundant slot values in generated responses,~ proposed the Iterative Rectification Network (IRN), a framework trained with supervised learning and finetuned with reinforcement learning. It iteratively rectified generated tokens by incorporating a slot inconsistency penalty into its reward. applied large-scale pretrained models to NLG tasks. After comparing single-input and multi-input methods, they concluded that different types of input context cause different inductive biases in generated responses, and further proposed to exploit this characteristic to better adapt a pretrained model to a new task. addressed the NLG reliability problem in conversational QA. Though with different pipeline structures, they used similar methods to increase the fluency and semantic correctness of the generated responses. They proposed Syntactic Transformations (STs) to generate candidate responses and used BERT to rank their quality. These generated responses can be viewed as an augmentation of the original dataset to be further used in NLG model learning. proposed a method to create datasets with rich style markups from easily available user reviews. They further trained multiple NLG models on the generated data to perform joint control of semantic correctness and language style. Similarly,~ put forward a data augmentation approach that put a restriction on response generation.
Though this restriction caused dull and less diverse responses, they argued that in task-oriented systems, reliability was more important than diversity.
\label{End-to-end methods}
The modules discussed above can achieve good performance on their respective tasks, with the help of recent relevant advances. However, there exist two significant drawbacks in modular systems~: (1) Modules in many pipeline systems are sometimes not differentiable, which means that errors from the end cannot be propagated back to each module. In real dialogue system training, usually the only signal is the user response, while other supervised signals like dialogue states and dialogue actions are scarce. (2) Though the modules jointly contribute to the success of a dialogue system, the improvement of one module may not necessarily raise the response accuracy or quality of the whole system. This causes additional training of other modules, which is labor-intensive and time-consuming. Additionally, due to the handcrafted features in pipeline task-oriented systems, such as dialogue states, it is usually hard to transfer modular systems to another domain, since the predefined ontologies require modification.
There exist two main methods for the end-to-end training of task-oriented dialogue systems.
One is to make each module of a pipeline system differentiable, so that the whole pipeline can be viewed as a large differentiable system whose parameters can be optimized by back-propagation in an end-to-end fashion~. The other is to use a single end-to-end module, usually a multi-task learning neural model, to perform both knowledge base retrieval and response generation.
\paragraph*{End-to-end trainable pipeline TOD} The increasing application of neural models has made it possible for modules to be differentiable. While many modules are easily differentiable, there remains one task that makes differentiation challenging: the knowledge base query. Many task-oriented dialogue systems require an external knowledge source to retrieve the knowledge facts requested by the user. For example, in the restaurant table booking task, a knowledge fact can be an available slot at one specific restaurant. Traditional methods use a symbolic query to match entries based on their attributes: the system performs semantic parsing on the user message to form a symbolic query according to the user goal~. However, this retrieval process is not differentiable, which prevents the whole framework from being end-to-end trainable. With the application of key-value memory networks~,~ used the key-value retrieval mechanism to retrieve relevant facts. The proposed architecture was augmented with the attention mechanism to compute the relevance between utterance representations of the dialogue and key representations of the knowledge base. presented a soft retrieval mechanism that uses a ``soft'' posterior distribution over the knowledge base to replace symbolic queries. They further combined this soft retrieval mechanism with a reinforcement learning framework to achieve complete end-to-end training based on user feedback.
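The ``soft'' retrieval idea above can be sketched as attention over knowledge-base keys. The tiny hand-made vectors below are stand-ins for learned embeddings; a real system computes the query from a neural dialogue encoder.

```python
import math

# Sketch of "soft" knowledge-base retrieval: instead of a symbolic
# query, attention weights over KB keys are computed from a dialogue
# encoding, and the retrieved value is a differentiable weighted mix.
# All vectors here are hand-made, purely illustrative embeddings.

def softmax(scores):
    m = max(scores)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def soft_retrieve(query, keys, values):
    """query: encoding of the dialogue; keys/values: the knowledge base."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)                # attention over KB entries
    mixed = sum(w * v for w, v in zip(weights, values))
    return mixed, weights

query = [1.0, 0.0]                 # dialogue encoding leaning toward entry 0
keys = [[1.0, 0.0], [0.0, 1.0]]    # e.g. two restaurant entries
values = [4.5, 3.0]                # e.g. a rating stored under each key
mixed, weights = soft_retrieve(query, keys, values)
```

Because every step is a smooth function of the query, gradients from the response loss can flow back through the retrieval, which is exactly what the symbolic-query approach prevents.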
proposed Hybrid Code Networks (HCNs), which encoded domain-specific knowledge into software and system action templates, achieving the differentiability of the knowledge retrieval module. They did not explicitly model the dialogue states but instead learned a latent representation, and optimized the HCN using supervised learning and reinforcement learning jointly. used GPT-2 to form a neural pipeline and performed domain prediction, dialogue state tracking, policy learning, knowledge retrieval, and response generation in a pipeline fashion. The system could easily interact with external systems because it output explicit intermediate results from each module, and was thus interpretable. Likewise,~ built a neural pipeline with GPT-2 and explicitly generated results for each neural module as well.
\paragraph*{End-to-end trainable single module TOD} More recent works tend not to build their end-to-end systems in a pipeline fashion. Instead, they use complex neural models to implicitly represent the key functions and integrate the modules into one. Research on task-oriented end-to-end neural models focuses either on training methods or on model architecture, which are the keys to response correctness and quality.~ proposed an incremental learning framework to train their end-to-end task-oriented system. The main idea was to build an uncertainty estimation module to evaluate the confidence of the generated responses. If the confidence score was higher than a threshold, the response would be accepted, while a human response would be introduced if the confidence score was low. The agent could also learn from human responses via online learning. used model-agnostic meta-learning (MAML) to jointly improve adaptability and reliability with only a handful of training samples in a real-life online service task.
Similarly,~ also trained the end-to-end neural model using MAML to facilitate domain adaptation, which enabled the model to first train on rich-resource tasks and then on new tasks with limited data. proposed Minimalist Transfer Learning (MinTL) to plug in large-scale pretrained models for domain transfer in dialogue task completion. To maintain the sequential correctness of generated responses,~ trained an inconsistent-order detection module in an unsupervised fashion. This module detected whether an utterance pair was ordered or not, guiding the task-completing agent towards generating more coherent responses. proposed a ``Two-Teacher One-Student'' training framework. In the first stage, two teacher models were trained in a reinforcement learning framework, with the objectives of retrieving knowledge facts and generating human-like responses respectively. In the second stage, the student network was forced to mimic the output of the teacher networks. Thus, the expert knowledge of the two teacher networks was transferred to the student network. introduced a constrained decoding method to improve the semantic correctness of the responses generated by the proposed end-to-end system. Many end-to-end task-oriented systems use a memory module to store relevant knowledge facts and dialogue history. argued that a single memory module was not enough for precise retrieval. They used two long-term memory modules to store the knowledge tuples and dialogue history respectively, and then a working memory was applied to control the token generation. proposed the LAtent BElief State (LABES) model, which treated the dialogue states as discrete latent variables to reduce the reliance on turn-level DST labels. To solve the data insufficiency problem in some tasks,~ augmented the response generation model with a paraphrase model in their end-to-end system. The paraphrase model, which aimed to augment the training samples, was jointly trained with the whole framework.
leveraged the graph structure information of both a knowledge graph and the dependency tree of the dialogue context. They proposed a recurrent cell architecture to learn representations on the graph and performed multi-hop reasoning to exploit the entity links in the knowledge graph. With the augmentation of graph information, consistent improvements were achieved on two task-oriented datasets.
\label{Research Challenges and Hot Topics}
In this section, we review recent works in task-oriented dialogue systems and point out frequently studied topics to provide some important research directions.
This section can be seen as an augmentation of the literature review in the previous sections, which discussed techniques developed for each module; here we focus on some specific problems to be solved in the current research community.
The Natural Language Understanding task converts the user message into a predefined format of semantic slots. A popular way to perform NLU is by finetuning large-scale pretrained language models. compared many pretrained language models, including BERT-based and GPT-based systems, on three subtasks of task-oriented dialogue systems: domain identification, intent detection, and slot tagging. This empirical study aimed to provide insights and guidelines on pretrained model selection and application for related research. pretrained TOD-BERT and outperformed strong baselines on the intent detection task. The proposed model also had a strong few-shot learning ability to alleviate the data insufficiency problem. proposed Span-ConveRT, a pretrained model designed for the slot-filling task.
It viewed the slot-filling task as a turn-based span extraction problem and also performed well in the few-shot learning scenario.
Another challenge and hot topic in NLU research is the domain transfer problem, which is also a key issue for task-oriented dialogue systems. built an RNN-LSTM architecture for multi-task learning of the domain classification, intent detection, and slot-filling problems. Training samples from multiple domains were combined in a single model, where the data of the respective domains reinforced each other. used a multi-task learning framework to leverage slot name encoding and slot description encoding, thus implicitly aligning the slot-filling model across domains.
Likewise,~ also applied slot descriptions to exploit the similar semantic concepts between slots of different domains, which solved the sub-optimal concept alignment and long training time problems encountered in past works involving multi-domain slot-filling.
Domain adaptability is also a significant topic for dialogue state trackers. Domain transfer in DST is challenging for three main reasons~: (1) Slot values in ontologies are different when the domain changes, which accounts for the incompatibility of models. (2) When the domain changes, the slot number will also change, causing different numbers of model parameters. (3) Hand-crafted lexicons make generalization over domains difficult. used delexicalized n-gram features to solve the domain incompatibility problem by replacing all specified slot names and values with generic symbols. introduced Levenshtein belief spans (Lev), which were short context spans relating to the user message. Different from previous methods, which generated the dialogue state from scratch, they performed substitution (SUB), deletion (DEL), and insertion (INS) based on past states to alleviate the dependency on annotated in-domain training samples.
applied model-agnostic meta-learning (MAML) to first learn on several source domains and then adapt to the target domain, while~ improved zero-shot transfer learning by synthesizing in-domain data using an abstract conversation model and the domain ontology. modeled explicit slot connections to exploit the existing slots appearing in other domains. Thus, the tracker could copy slot values from the connected slots directly, alleviating the burden of reasoning and learning. proposed Value Normalization (VN) to convert supporting dialogue spans into state values and could achieve high accuracy with only 30\% of the ontology available.
Tracking efficiency is another hot topic in dialogue state tracking research. Usually, there are multiple states within a dialogue, so computing the slot values without any redundant steps becomes very significant when attempting to reduce the reaction time of a system. argued that predicting the dialogue state from scratch at every turn was not efficient. They proposed to first predict the operations to be taken on each of the slots (i.e., Carryover, Delete, Dontcare, Update), and then perform the respective operations as predicted. used a slot connection mechanism to directly copy slot values from the source slot, which reduced the expense of reasoning.
and~ proposed slot attention to calculate the relations between the slots and the dialogue context, thus focusing only on the relevant slots at each turn.
The environment of the policy learning framework has been a long-standing problem. built a user simulator to model the user feedback as the reward signal of an environment. They modeled a stack-like user agenda to iteratively change the user goal and thus shift the dialogue states. While using a user simulator for environment modeling seems promising in that it involves less human interaction,~ argued that training a user simulator requires a large amount of annotated data. proposed Multi-Agent Dialog Policy Learning, in which two agents interact with each other, acting as the user and the system respectively and learning their policies simultaneously.
Furthermore, they incorporated a role-specific reward to facilitate role-based response generation, and each agent also acted as the environment of the other.
Response consistency in NLG is a challenging problem since it cannot be solved by simply augmenting the training samples. Instead, additional corrections or regulations have to be designed. proposed the Semantically Controlled LSTM (SC-LSTM), which used a semantic planning gate to control the retention or abandonment of dialogue actions, thus ensuring response consistency. Likewise,~ also applied a gating mechanism to jointly perform sentence planning and surface realization, where dialogue action features were gated before entering the GRU cells. proposed the Iterative Rectification Network (IRN), which incorporated a slot inconsistency reward into the reinforcement learning framework.
Thus, the model iteratively checked the correctness of slots and their corresponding values.
End-to-end systems are usually fully data-driven, which contributes to their robust and natural responses. However, because of the finiteness of annotated training samples, a hot research topic is figuring out how to increase the response quality of end-to-end task-oriented dialogue systems with limited data. Using rule-based methods to constrain response generation is one way to improve response quality. used a linearized tree-structured representation as input to obtain control over discourse-level and sentence-level semantic concepts. used templates to improve the semantic correctness of generated responses. They broke down the response generation into a two-stage process: first generating semantically correct but possibly incoherent responses based on the slots, with the constraint of templates; then, in the second stage, pretrained language models were applied to re-organize the generated utterances into coherent ones. Training the network with reinforcement learning was another strategy to alleviate the reliance on annotated data. trained two teacher networks using a reinforcement learning framework with the objectives of knowledge retrieval and response generation respectively. Then the student network learned to produce responses by mimicking the output of the teacher networks.
In the supervised setting,~ alternatively optimized the learning strategy to improve the learning efficiency of models given limited data. They combined a meta-learning algorithm with human-machine interaction and achieved significant improvements compared with strong baselines not trained with meta-learning. A more direct way to solve the data finiteness problem in supervised learning is augmenting the dataset~, which also improves the response quality to some extent. Additionally, pretraining large-scale models on a common corpus and then applying them in a domain that lacks annotated data has been a popular approach in recent years~.
Retrieval-based methods are rare in task-oriented systems because the candidate entries are usually insufficient to cover all possible responses, which often involve specific knowledge from an external knowledge base. However,~ argued that in situations not related to specific knowledge facts, retrieval-based methods are more precise and effective. They first pretrained the response selection model on general-domain corpora and then finetuned it on small target-domain data. Experiments on six datasets from different domains proved the effectiveness of the pretrained response selection model.
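Response selection of this kind can be sketched as scoring candidates against the context. The token-overlap score below is a crude, purely illustrative stand-in for the trained matching models (e.g. pretrained encoders) used in the works above.

```python
# Minimal stand-in for retrieval-based response selection: score each
# candidate response against the dialogue context and return the best.
# A real system would use a trained matching model; a simple
# token-overlap score is used here for illustration only.

def select_response(context, candidates):
    ctx = set(context.lower().split())

    def score(cand):
        toks = set(cand.lower().split())
        return len(ctx & toks) / (len(toks) or 1)   # overlap ratio

    return max(candidates, key=score)

context = "any good italian restaurant in the centre"
candidates = [
    "The weather is nice today .",
    "There is a good italian restaurant in the centre .",
    "I like action movies .",
]
print(select_response(context, candidates))
# -> There is a good italian restaurant in the centre .
```

Because the candidates are human-written, such systems inherit surface-level fluency, but they are bounded by the coverage of the response set, as noted above.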
constructed Spatio-temporal context features to facilitate response selection, and achieved significant improvements on the Ubuntu IRC dataset.", "id": "e92e645f-2ae7-4734-a88d-4c0770b49cee", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "a753676c-7e09-4d59-8e23-8df1a027166a", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Task-oriented Dialogue Systems" ], [ "subsection", "Research Challenges and Hot Topics" ], [ "subsubsection", "Retrieval Methods for Task-oriented Dialogue Systems" ] ], "subsections": [], "title": "Retrieval Methods for Task-oriented Dialogue Systems" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 1878, 1879, 1876 ], "content": "\\label{Open-Domain Dialogue Systems}\nThis section discusses open-domain dialogue systems, which are also called chit-chat dialogue systems or non-task-oriented dialogue systems. Almost all state-of-the-art open-domain dialogue systems are based on neural methods. We organize this section by first briefly introducing the concepts of different branches of open-domain dialogue systems, and then we focus on different research challenges and hot topics. We view these challenges and hot topics as different research directions in open-domain dialogue systems. \nInstead of managing to complete tasks, open-domain dialogue systems aim to perform chit-chat with users without the task and domain restriction~ and are usually fully data-driven. Open-domain dialogue systems are generally divided into three categories: generative systems, retrieval-based systems, and ensemble systems. Generative systems apply sequence-to-sequence models to map the user message and dialogue history into a response sequence that may not appear in the training corpus. By contrast, retrieval-based systems try to find a pre-existing response from a certain response set. 
Ensemble systems combine generative methods and retrieval-based methods in two ways: retrieved responses can be compared with generated responses to choose the best among them; generative models can also be used to refine the retrieved responses~. Generative systems can produce flexible and dialogue context-related responses while sometimes they lack coherence and tend to make dull responses. Retrieval-based systems select responses from human response sets and thus are able to achieve better coherence in surface-level language. However, retrieval systems are restricted by the finiteness of the response sets and sometimes the responses retrieved show a weak correlation with the dialogue context~. \nIn the next few subsections, we discuss some research challenges and hot topics in open-domain dialogue systems. We aim to help researchers quickly grasp the current research trends via a systematic discussion on articles solving certain problems.", "id": "5826cc8e-0271-43d0-9eea-71af25a29e66", "level": "section", "origin_cites_number": 5, "parent_id": "743f4886-7aea-4caf-9d50-52e5013fbaf3", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Open-Domain Dialogue Systems" ] ], "subsections": [ "97b90f01-228e-4fca-8e73-3e9c38599644", "15040386-9985-4a7b-b3eb-c0983deb7879", "f8d6fc87-26f7-49c9-902d-f8065a58c726", "eb50140f-0b60-46ca-9f6c-fb3022bd2b2e", "d40ff0a5-9123-4e55-9d4b-3ebf66c1c8df", "4f512ac0-9a4b-4003-b468-e454b7984291", "1e37c053-4db9-4621-b612-0f20ddc4f0e8", "f02aaeda-a67a-4281-8f5b-7a4469e907c2", "fb7b9548-89f9-498a-b8cc-7136406cea76", "36aad892-1810-4a4a-81d4-a4cd64eaa0dd" ], "title": "Open-Domain Dialogue Systems" }, { "cite_extract_rate": 0.8823529411764701, "cites": [ 1107, 1976, 1113, 7529, 1893, 1977, 1971, 1972, 7455, 1975, 1978, 1973, 1885, 1894, 1974 ], "content": "\label{Context Awareness}\nDialogue context consists of user and system messages and is an important source of 
information for dialogue agents to generate responses because dialogue context decides the conversation topic and user goal~. A context-aware dialogue agent responds not only depending on the current message but also based on the conversation history. The earlier deep learning-based systems added up all word representations in dialogue history or used a fixed-size window to focus on the recent context~. proposed Hierarchical Recurrent Encoder-Decoder (HRED), which was ground-breaking in building context-awareness dialogue systems. They built a word-level encoder to encode utterances and a turn-level encoder to further summarize and deliver the topic information over past turns. augmented the hierarchical neural networks with the attention mechanism to help the model focus on more meaningful parts of dialogue history. \nBoth generative and retrieval-based systems rely heavily on dialogue context modeling. proposed Conversational Semantic Relationship RNN (CSRR) to model the dialogue context in three levels: utterance-level, pair-level, and discourse-level, capturing content information, user-system topic, and global topic respectively. argued that the hierarchical encoder-decoder does not lay enough emphasis on certain parts when the decoder interacted with dialogue contexts. Also, they claimed that attention-based HRED models also suffered from position bias and relevance assumption insufficiency problems. Therefore, they proposed ReCoSa, whose architecture was inspired by the transformer. The model first used a word-level LSTM to encode dialogue contexts, and then self-attention was applied to update the utterance representations. In the final stage, an encoder-decoder attention was computed to facilitate the response generation process. 
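The two-level encoding idea behind HRED-style models can be illustrated with a deliberately simplified stand-in, where mean pooling replaces the learned word-level and turn-level RNN encoders (the 2-d embedding values below are hypothetical, for illustration only):

```python
def mean_pool(vectors):
    # Component-wise average of a list of equal-length vectors.
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def hierarchical_encode(dialogue, embed):
    # Word level: pool word embeddings into one vector per utterance.
    utterance_vecs = [mean_pool([embed[w] for w in utt.split()]) for utt in dialogue]
    # Turn level: pool utterance vectors into a single context vector.
    return mean_pool(utterance_vecs)

# Hypothetical 2-d embedding table.
embed = {"hi": [1.0, 0.0], "there": [0.0, 1.0],
         "how": [1.0, 1.0], "are": [0.0, 0.0], "you": [2.0, 0.0]}
context_vec = hierarchical_encode(["hi there", "how are you"], embed)
```

In real HRED the two pooling steps are recurrent encoders with learned parameters, and the turn-level state conditions the decoder; the nesting of utterance summaries inside a dialogue summary is the same.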
Additionally,~ examined several applications of large-scale pretrained models in dialogue context learning, providing guidance for large-scale network selection in context modeling.\nSome works propose structured attention to improve context-awareness. learned structured dialogue context by combining structured attention with a Variational Recurrent Neural Network (VRNN). Comparatively,~ examined the RST discourse tree model proposed by~ and observed little or even no discourse structures in the learned latent tree. Thus, they argued that structured attention did not benefit dialogue modeling and sometimes might even harm the performance. \nInterestingly,~ not only utilized dialogue history, but also future conversations. Considering that in real inference situations dialogue agents cannot be explicitly aware of future information, they first trained a scenario-based model jointly on past and future context and then used an imitation framework to transfer the scenario knowledge to a target network. \nBetter context modeling improves the response selection performance in retrieval-based dialogue systems . proposed Interaction-over-Interaction network (IoI), which consisted of multiple interaction blocks to perform deeper interactions between dialogue context and candidate responses. organized the dialogue history into conversation threads by performing classifications on their dependency relations. They further used a pretrained Transformer model to encode the threads and candidate responses to compute the matching score. argued that response-retrieval datasets should not only be annotated with relevant or irrelevant responses. Instead, a greyscale metric should be used to measure the relevance degree of a response given the dialogue context, thus increasing the context-awareness ability of retrieval models. 
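The context-response matching performed by such retrieval models can be sketched in a heavily simplified form, with bag-of-words cosine similarity standing in for the learned encoders and interaction blocks (the dialogue snippets are hypothetical):

```python
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two sparse bag-of-words Counters;
    # missing words count as zero in a Counter, so only overlap contributes.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_candidates(context, candidates):
    # Score each candidate response against the concatenated dialogue
    # context and return candidates sorted by matching score, best first.
    ctx = Counter(" ".join(context).lower().split())
    scored = [(cosine(ctx, Counter(c.lower().split())), c) for c in candidates]
    return [c for _, c in sorted(scored, key=lambda x: -x[0])]

history = ["do you like jazz music", "yes jazz is great"]
candidates = ["jazz music is wonderful", "the weather is nice today"]
best = rank_candidates(history, candidates)[0]
```

Neural matching models replace the bag-of-words vectors with contextualized representations and compute the score through cross-attention, but the retrieve-score-rank pipeline is the same.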
\nThe dialogue rewriting problem aims to convert several messages into a single message conveying the same information, and dialogue context awareness is crucial to this task~. modeled multi-turn dialogues via dialogue rewriting and benefited from the conciseness of rewritten utterances.", "id": "97b90f01-228e-4fca-8e73-3e9c38599644", "level": "subsection", "origin_cites_number": 17, "parent_id": "5826cc8e-0271-43d0-9eea-71af25a29e66", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Open-Domain Dialogue Systems" ], [ "subsection", "Context Awareness" ] ], "subsections": [], "title": "Context Awareness" }, { "cite_extract_rate": 0.8, "cites": [ 7077, 1917, 1979, 1981, 7075, 1876, 1980, 7076, 7532, 8507, 1894, 8514 ], "content": "\label{Response Coherence}\nCoherence is one of the qualities that a good generator seeks~. Coherence means maintaining logic and consistency in a dialogue, which is essential in an interaction process, since a response with weak consistency in logic and grammar is hard to understand. Coherence is a hot topic in generative systems but not in retrieval-based systems because candidate responses in retrieval methods are usually human responses, which are naturally coherent. \nRefining the order or granularity of sentence functions is a popular strategy for improving the language coherence.~ improved the response coherence via the task of inconsistent order detection. The dialogue systems learned response generation and order detection jointly, which was self-supervised multi-task learning. presented the concept of meta-words. Meta-words were diverse attributes describing the response. Learning dialogue based on meta-words helped promote response generation in a more controllable way. used three granularities of encoders to encode raw words, low-level clusters, and high-level clusters. 
The architecture was called Vocabulary Pyramid Network (VPN), which performed a multi-pass encoding and decoding process on hierarchical vocabularies to generate coherent responses. also built a three-level hierarchical dialogue model to capture richer features and improved the response quality. built Cross Copy Networks (CCN), which used a copy mechanism to copy from similar dialogues based on the current dialogue context. Thus, the system benefited from the pre-existing coherent responses, which alleviated the need to perform the reasoning process from scratch. \nMany works employ strategies to achieve response coherence on a higher level, which improves the overall quality of the generated responses. improved the logical consistency of generated utterances by incorporating an unlikelihood loss to control the distribution mismatches. proposed a Generation-Evaluation framework that evaluated the qualities, including coherence, of the generated response. The feedback was further seen as a reward signal in the reinforcement learning framework and guided the model to a better dialogue strategy via policy gradient, thus improving the response quality. raised response quality by ranking generated responses based on user feedback like upvotes, downvotes, and comments on social networks. built a retrieval-enhanced generation model, which enhanced the generated responses in two ways. First, a discriminator was trained with the help of a retrieval system, and then the generator was trained in a GAN framework under the supervision signal of the discriminator. Second, retrieved responses were also used as a part of the generator input to provide a coherent example for the generator. achieved globally coherent dialogue by constructing a knowledge graph from corpora. They further performed graph walks to decide ``what to say\" and ``how to say\", thus improving the dialogue flow coherence. 
proposed an assessment approach for dialogue coherence evaluation by combining the dialogue act prediction in a multi-task learning framework and learned rich dialogue representations. \nData-wise methods for better response coherence have also emerged.~ proposed to annotate sentence functions in existing conversation datasets to improve the sentence logic and coherence of generated responses. focused on data effectiveness as well. They filtered out low-quality utterance pairs by scoring the relatedness and connectivity, which proved to be effective in improving the response coherence. presented a method for evaluating dataset utterance pairs' quality in terms of connectedness and relatedness. The proposed scoring technique is based on research findings that have been widely disseminated in the conversation and linguistics communities. included a weighting model in their neural architecture. The weighting model, which is based on conversation data, assigns a numerical weight to each training sample that reflects its intrinsic quality for dialogue modeling, and achieved good results in experiments.", "id": "15040386-9985-4a7b-b3eb-c0983deb7879", "level": "subsection", "origin_cites_number": 15, "parent_id": "5826cc8e-0271-43d0-9eea-71af25a29e66", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Open-Domain Dialogue Systems" ], [ "subsection", "Response Coherence" ] ], "subsections": [], "title": "Response Coherence" }, { "cite_extract_rate": 0.631578947368421, "cites": [ 1987, 8514, 1107, 1985, 1971, 1984, 1917, 1986, 1983, 1982, 7455, 1877 ], "content": "\label{Response Diversity}\nBland and generic responses are a long-standing problem in generative dialogue systems. 
Because of the high frequency of generic responses like ``\textit{I don't know}\" in training samples and the beam search decoding scheme of neural sequence-to-sequence models, generative dialogue systems tend to respond with universally acceptable but meaningless utterances~. For example, to respond to the user message ``\textit{I really want to have a meal}\", the agent tends to choose simple responses like ``\textit{It's OK}\" instead of responding with more complicated sentences like recommendations and suggestions. \nEarlier works solved this challenge by modifying the decoding objective or adding a reranking process. replaced the traditional likelihood objective $p(R|C)$ with mutual information. The optimization of the mutual information objective aims to achieve Maximum Mutual Information (MMI). Specifically, the task is to find the best response $R$ based on the dialogue context $C$, in order to maximize their mutual information: \n\begin{equation}\n\begin{split}\n\hat{R} &= \arg\max_{R}{\log\frac{P(C,R)}{P(C)P(R)}} \\&= \arg\max_{R}{\log P(R|C)-\log P(R)}\n\end{split}\n\label{logP(R|C)-logP(R)}\n\end{equation} \nThe objective $p(R|C)$ causes the model to choose responses with high probability even if the response is unconditionally frequent in the dataset, thus causing it to ignore the content of $C$. Maximizing the mutual information as in Equation (\ref{logP(R|C)-logP(R)}) solves this issue by achieving a trade-off between safety and relevance. \nWith a similar intuition as described above, increasing response diversity by modifying the decoding scheme at inference time has been explored in earlier works.~ combined a dissimilarity term into the beam search objective and proposed Diverse Beam Search (DBS) to promote diversity. Similarly,~ proposed a stochastic beam search algorithm by performing stochastic sampling when choosing top-B responses. In the beam search algorithm, siblings sharing the same parent nodes tended to lead to similar sequences. 
Inspired by this,~ penalized siblings sharing the same parent nodes using an additional term in the beam search objective. This encouraged the algorithm to search more diverse paths by expanding from different parent nodes. Some works further added a reranking stage to select more diverse responses in the generated N-best list~. \nA user message can be mapped into multiple acceptable responses, which is also known as the one-to-many mapping problem. considered the one-to-many mapping problem in open-domain dialogue systems and proposed a two-stage generation model to increase response diversity: the first stage extracting common features of multiple ground truth responses and the second stage extracting the distinctive ones. solved the one-to-many mapping problem via a classification task to learn latent semantic representations. Thus, given one example response, different ones could be generated by exploring the semantically close vectors in the latent space. \nDifferent training strategies have been proposed to increase response diversity. used human instinct or pre-defined objective as a reward signal in a reinforcement learning setting to prompt the agent to avoid generating dull responses. Still, in a reinforcement learning framework,~ performed counterfactual reasoning to explore the potential response space. Given a pre-existing response, the model inferred another policy, which represented another possible response, thus increasing the response diversity.~ used a negative training method to minimize the generation of bland responses. They first collected negative samples and then gave negative training signals based on these samples to fine-tune the model, impeding the model from generating bland responses. To achieve a better performance,~ synthesized different dialogue models designed for response diversity based on boosting training. The ensemble model significantly outperformed each of its base models. 
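The MMI reranking objective introduced above can be made concrete with a minimal sketch that reorders an N-best list by $\log P(R|C) - \lambda \log P(R)$; the candidate probabilities below are hypothetical toy values, not outputs of any cited system:

```python
import math

def mmi_rerank(candidates, lam=0.5):
    # candidates: list of (response, p_r_given_c, p_r), where p_r_given_c is
    # the seq2seq likelihood and p_r a language-model prior. The penalty
    # -lam * log p(R) pushes unconditionally frequent (generic) responses
    # down the list.
    scored = [(math.log(p_rc) - lam * math.log(p_r), r) for r, p_rc, p_r in candidates]
    return [r for _, r in sorted(scored, reverse=True)]

nbest = [
    ("i don't know", 0.30, 0.20),               # likely, but generic
    ("try the jazz bar downtown", 0.20, 0.01),  # less likely, but specific
]
reranked = mmi_rerank(nbest)
```

With $\lambda = 0.5$ the specific response overtakes the generic one even though its conditional likelihood is lower, which is exactly the safety-relevance trade-off the MMI objective targets.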
\nUtilizing external knowledge sources is another way to improve the diversity of generated responses because it can enrich the content. built a common-sense dialogue generation model which seeks highly related knowledge facts based on the dialogue history. Likewise,~ incorporated external knowledge sources to diversify the response generation, but the difference was that they utilized non-conversational texts like news articles as relevant knowledge facts, which were obviously easier to obtain. used a memory module to abstract and store useful information in the training corpus for generating diverse responses. \nAnother approach to diversify the response generation is to make modifications to the training corpus. solved the challenge by filtering out the generic responses in the dataset using an entropy-based algorithm, which was simple but effective. Augmented with human feedback data,~ proposed that the generated responses could be reranked via a response ranking framework trained on the human feedback data and responses with higher quality including diversity were selected. 
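The entropy-based filtering idea mentioned above can be sketched as follows; this is a simplified illustration, not the cited algorithm's exact form. A response that follows many different contexts has a high-entropy context distribution and is therefore likely generic:

```python
import math
from collections import Counter, defaultdict

def response_entropy(pairs):
    # pairs: list of (context, response). For each response, compute the
    # entropy of its source-context distribution: responses that appear
    # after many distinct contexts (high entropy) are likely generic.
    contexts = defaultdict(Counter)
    for ctx, resp in pairs:
        contexts[resp][ctx] += 1
    entropy = {}
    for resp, cnt in contexts.items():
        total = sum(cnt.values())
        entropy[resp] = -sum((n / total) * math.log(n / total) for n in cnt.values())
    return entropy

pairs = [
    ("how was the movie", "i don't know"),
    ("what's the capital of peru", "i don't know"),
    ("any plans tonight", "i don't know"),
    ("what's the capital of peru", "lima"),
]
ent = response_entropy(pairs)
# "i don't know" follows three distinct contexts; "lima" follows one.
```

Filtering then amounts to dropping training pairs whose response entropy exceeds a threshold.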
proposed to change the data collection pipeline by iteratively computing the diversity of responses from different human participants in dataset construction and selected those participants who tended to generate informative and diverse responses.", "id": "f8d6fc87-26f7-49c9-902d-f8065a58c726", "level": "subsection", "origin_cites_number": 19, "parent_id": "5826cc8e-0271-43d0-9eea-71af25a29e66", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Open-Domain Dialogue Systems" ], [ "subsection", "Response Diversity" ] ], "subsections": [], "title": "Response Diversity" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1988, 1913, 1989, 7481 ], "content": "\label{Speaker Consistency and Personality Response}\nIn open-domain dialogue systems, one big issue is that the responses are entirely learned from training data. Inconsistent responses may be received when the system is asked about personal facts (e.g., age, hobbies). If the dataset contains multiple utterance pairs answering the query of age, the generated response tends to shift between them, which is unacceptable because personal facts are usually not random. Thus, for a data-driven chatbot, it is necessary to be aware of its role and respond based on a fixed persona. \nExplicitly modeling the persona is the main strategy in recent works.~ proposed a persona-based dialogue generator consisting of a Receiver and a Transmitter. The Receiver was responsible for modeling the interlocutor's persona through several turns' chat while the Transmitter generated utterances based on the persona of agent and interlocutor, together with conversation content. The proposed model supported conversations between two persona-based chatbots by modeling each other's persona. 
Without training with additional Natural Language Inference labels,~ built an imaginary listener following a normal generator, which reasoned over the tokens generated by the generator and predicted a posterior distribution over the personas in a certain space. After that, a self-conscious speaker generated tokens aligned with the predicted persona. Likewise,~ used an augmented GPT-2 to reason over the past conversations and model the target actor's persona, conditioning on which persona consistency was achieved. \nResponding with personas needs to condition on some persona descriptions. For example, to build a generous agent, descriptions like ``\\textit{I am a generous person}\" are needed as a part of the model input. However, these descriptions require hand-crafted feature design, which is labor intensive. proposed to use Model-Agnostic Meta-Learning (MAML) to adapt to new personas with only a few training samples and needed no persona description. relied on external knowledge sources to expand current persona descriptions so that richer persona descriptions were obtained, and the model could associate current descriptions with some commonsense facts. \n~ argued that traditional persona-based systems were one-stage systems and the responses they generated still contain many persona inconsistent words. To tackle this issue, they proposed a three-stage architecture to ensure persona consistency. 
A generate-delete-rewrite mechanism was implemented to remove the unacceptable words generated in prototype responses and rewrite them.", "id": "eb50140f-0b60-46ca-9f6c-fb3022bd2b2e", "level": "subsection", "origin_cites_number": 6, "parent_id": "5826cc8e-0271-43d0-9eea-71af25a29e66", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Open-Domain Dialogue Systems" ], [ "subsection", "Speaker Consistency and Personality-based Response" ] ], "subsections": [], "title": "Speaker Consistency and Personality-based Response" }, { "cite_extract_rate": 0.5714285714285711, "cites": [ 1904, 1991, 1962, 1990 ], "content": "\\label{Empathetic Response}\nEmpathy means being able to sense other people's feelings~. An empathetic dialogue system can sense the user's emotional changes and produce appropriate responses with a certain sentiment. This is an essential topic in chit-chat systems because it directly affects the user's feeling and to some extent decides the response quality. Industry systems such as Microsoft's Cortana, Facebook M, Google Assistant, and Amazon's Alexa are all equipped with empathy modules~. \nThere are two ways to generate utterances with emotion: one is to use explicit sentiment words as a part of input; another is to implicitly combine neural words~. proposed a unified framework that uses a lexicon-based attention to explicitly plugin emotional words and a sequence-level emotion classifier to classify the output sequence, implicitly guiding the generator to generate emotional responses through backpropagation. used CoBERT for persona-based empathetic response selection and further investigated the impact of persona on empathetic responses. blended the skills of being knowledgeable, empathetic, and role-aware in one open-domain conversation model and overcame the bias issue when blending these skills. 
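The lexicon-based side of such approaches can be illustrated with a toy scorer that prefers candidate responses containing emotion-lexicon words; the lexicon and candidates below are hypothetical and stand in for the learned lexicon-based attention:

```python
def emotion_score(response, lexicon):
    # Fraction of response tokens that appear in the emotion lexicon.
    tokens = response.lower().split()
    return sum(t in lexicon for t in tokens) / len(tokens)

# A tiny hypothetical lexicon of empathetic/emotional words.
lexicon = {"sorry", "glad", "hope", "wonderful", "sad"}

candidates = [
    "i am so sorry to hear that and hope you feel better",
    "ok noted",
]
best = max(candidates, key=lambda r: emotion_score(r, lexicon))
```

In the neural versions, this hard lexicon match is replaced by an attention distribution over emotional words, and a sequence-level emotion classifier supplies the implicit training signal.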
\nSince the available datasets for empathetic conversations are scarce,~ provided a new benchmark and dataset for empathetic dialogue systems. constructed a dialogue dataset with rich emotional markups from user reviews and further proposed a novel way to generate similar datasets with rich markups.", "id": "d40ff0a5-9123-4e55-9d4b-3ebf66c1c8df", "level": "subsection", "origin_cites_number": 7, "parent_id": "5826cc8e-0271-43d0-9eea-71af25a29e66", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Open-Domain Dialogue Systems" ], [ "subsection", "Empathetic Response" ] ], "subsections": [], "title": "Empathetic Response" }, { "cite_extract_rate": 0.8571428571428571, "cites": [ 1995, 1993, 2001, 1996, 7533, 1887, 1999, 1997, 1994, 2000, 1992, 1998 ], "content": "\label{Controllable Generation}\nControllable dialogue generation is an important line of work in open-domain dialogue systems since solely learning from data sample distributions causes many uncertain responses. Some dialogue systems are grounded on external knowledge such as knowledge graphs and documents. However, grounding alone without explicit control and semantic targeting may induce output that is accurate but vague.\nWe may draw inspiration from prior work on language generation and machine translation since, like dialogue systems, they are generation-based or sequence-to-sequence problems. Some related work aimed to enforce user-specified constraints, most notably using lexical constraints . These methods exclusively use constraints at inference time. Constraints can also be incorporated into the latent space during training, resulting in better predictions. Other studies have looked at non-lexical constraints, but they haven't looked into how they can help with grounding external knowledge. 
These publications also assume that the system can always be given (gold) constraints, which limits the ability to demonstrate larger benefits of the approaches.\nControllable text generation has also been used to extract high-level style information from contextual information in text style transfer and other tasks , allowing the former to be independently modified. learns an interpretable representation for dialogue systems using discrete latent actions. While existing studies employ ``style\" descriptors (e.g., positive/negative, formal/informal) as control signals, use specific lexical constraints to regulate generation, allowing for finer semantic control. Content-planned generation focuses response generation on a small number of essential words or table entries. This line of work, however, does not take the discourse context into consideration, which is critical for response generation.", "id": "4f512ac0-9a4b-4003-b468-e454b7984291", "level": "subsection", "origin_cites_number": 14, "parent_id": "5826cc8e-0271-43d0-9eea-71af25a29e66", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Open-Domain Dialogue Systems" ], [ "subsection", "Controllable Generation" ] ], "subsections": [], "title": "Controllable Generation" }, { "cite_extract_rate": 0.8571428571428571, "cites": [ 8508, 1902, 2002, 1887, 2004, 2003 ], "content": "\label{Conversation Topic}\nDaily chats of people usually involve a topic or goal. Actually, a topic or goal is the key to keeping each participant engaged in a conversation and is thus essential to a chatbot. In real applications, a good topic model helps to retrieve related knowledge and guide the conversation instead of passively responding to the user's message~. 
For example, if the user mentions ``\\textit{I like sunny days}\", a topic-aware system may reason over relevant external knowledge and produce responses like ``\\textit{I know there is a nice park near the seaside, have you ever been there before?}\". Thus, the agent pushes the conversation to a more engaging stage and enriches the dialogue content. \nAlmost all topic-aware dialogue agents need to model explicit topics, which can be entities from external knowledge-base, or topic embeddings that have some semantic meaning.~ tried to change the traditional passive response fashion and radically pursue active guidance of conversation. The dialogue agent consists of a leader and a follower, where the leader reasons over a knowledge graph and decides the conversation topic. Likewise, a common-sense knowledge graph was used by~ to lead the conversation topic and make recommendations. built a topic-aware retrieval-based chatbot. It aimed to guide the conversation topic to the target one step by step. It used a keyword predictor to predict turn-level keywords and selected the discourse-level keyword based on that. The discourse-level keyword was further fed into the retrieval model to retrieve responses regarding a certain topic. built a multi-view sequence-to-sequence model to learn dialogue topics by first extracting dialogue structures of unstructured chit-chat dialogues, then generating topic summaries using BART decoder. \nIn some applications of certain scenarios the conversation topic is essential, and these are where the topic-aware dialogue agents can be applied to.~ studied the topic-aware chatbot in counseling conversations. In counseling conversations, the agent led the dialogue topic by deciding between empathetically addressing a situation within the current range and moving on to a new target resolution. 
studied chatbots in the psychotherapy treatment area and built a topic prediction model to forecast the behavior codes for upcoming conversations, thus guiding the dialogue.", "id": "1e37c053-4db9-4621-b612-0f20ddc4f0e8", "level": "subsection", "origin_cites_number": 7, "parent_id": "5826cc8e-0271-43d0-9eea-71af25a29e66", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Open-Domain Dialogue Systems" ], [ "subsection", "Conversation Topic" ] ], "subsections": [], "title": "Conversation Topic" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 2008, 2005, 2006, 7530, 1934, 1900, 1936, 1890, 2007 ], "content": "\label{Knowledge-Grounded System}\nExternal knowledge such as common-sense knowledge is a significant source of information when organizing an utterance. Humans associate current conversation context with their experiences and memories and produce meaningful related responses; this capability accounts for the gap between human and machine chit-chat systems. As discussed, the earlier chit-chat systems are simply variants of machine translation systems, which can be viewed as sequence-to-sequence language models. However, dialogue generation is much more complicated than machine translation because of the higher freedom and vaguer constraints. Thus, chit-chat systems cannot simply consist of a sequence-to-sequence mapping since appropriate and informative responses are always related to some external common-sense knowledge. Instead, there must be a module incorporating world knowledge. \nMany researchers have devoted their efforts to building knowledge-grounded dialogue systems. A representative model is memory networks introduced in Section~\ref{Memory Networks}. Knowledge-grounded systems use Memory Networks to store external knowledge, and the generator retrieves relevant knowledge facts from it at the generation stage~.~ built a memory-augmented conversation model. 
The proposed model abstracted from the training samples and stored useful ones in the memory module. built a knowledge-grounded dialogue generation system based on GPT-2. They combined a knowledge selection module into the language model and learned knowledge selection and response generation simultaneously.~ proposed Knowledge-Interaction and knowledge Copy (KIC). They performed recurrent knowledge interactions during the decoding phase to compute an attention distribution over the memory. Then they performed knowledge copy using a knowledge-aware pointer network to copy knowledge words according to the attention distribution computed. \nDocuments contain a large amount of knowledge facts, but they have the drawback that they are usually too long to retrieve useful information from .~ built a multi-turn document-grounded system. They used an incremental transformer to encode multi-turns' dialogue context and respective documents retrieved. In the generation phase, they designed a two-stage generation scheme. The first stage took dialogue context as input and generated coherent responses; the second stage utilized both the utterance from the first stage and the document retrieved for the current turn for response generation. In this case, selecting knowledge based on both dialogue context and generated response was called posterior knowledge selection, while selecting knowledge with only dialogue context was called prior knowledge selection, which only utilized prior information. built a document quotation model in online conversations and investigated the consistency between quoted sentences and latent dialogue topics. \nKnowledge graphs are another source of external information, and they are becoming more and more popular in knowledge-grounded systems because of their structured nature. proposed a dialogue-conditioned graph traversal model for knowledge-grounded dialogue systems. 
The proposed model leveraged attention flows of two directions and fully made use of the structured information of knowledge graph to flexibly decide the expanding range of nodes and edges. Likewise,~ applied graph attention to traverse the concept space, which was a common-sense knowledge graph. The graph attention helped to move to more meaningful nodes conditioning on dialogue context. applied knowledge graphs as an external source to control a coarse-level utterance generation. Thus, the conversation was supported by common-sense knowledge, and the agent guided the dialogue topic in a more reasonable way. built a retrieval system retrieving responses based on the graph reasoning task. They used a graph walker to traverse the graph conditioning on symbolic transitions of the dialogue context. proposed Graph-enhanced Representations for Automatic Dialogue Evaluation (GRADE), a novel evaluation metric for open-domain dialogue systems. This metric considered both contextualized representations and topic-level graph representations. The main idea was to use an external knowledge graph to model the conversation logic flow as a part of the evaluation criteria. \nKnowledge-grounded datasets containing context-knowledge-response triples are scarce and hard to obtain. collected a large dataset consisting of more than 26000 turns of improvised dialogues which were further grounded with a larger movie corpus as external knowledge. Also tackling the data insufficiency problem,~ proposed a method that did not require context-knowledge-response triples for training and was thus data-efficient. They viewed knowledge as a latent variable to bridge the context and response. 
The variational approach learned the parameters of the generator from both a knowledge corpus and a dialogue corpus which were independent of each other.", "id": "f02aaeda-a67a-4281-8f5b-7a4469e907c2", "level": "subsection", "origin_cites_number": 15, "parent_id": "5826cc8e-0271-43d0-9eea-71af25a29e66", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Open-Domain Dialogue Systems" ], [ "subsection", "Knowledge-Grounded System" ] ], "subsections": [], "title": "Knowledge-Grounded System" }, { "cite_extract_rate": 1, "cites": [ 1914, 2009, 7079, 8515, 7459 ], "content": "\\label{Interactive Training}\nInteractive training, also called human-in-loop training, is a unique training method for dialogue systems. Annotated data is fixed and limited, not being able to cover all dialogue settings. Also, it takes a long time to train a good system. But in some industrial products, the dialogue systems need not be perfect when accomplishing their tasks. Thus, interactive training is desirable because the dialogue systems can improve themselves via interactions with users anywhere and anytime, which is a more flexible and cheap way to finetune the parameters. \nTraining schemes with the above intuition have been developed in recent years.~ introduced a reinforcement learning-based online learning framework. The agent interacted with a human dialogue partner and the partner provided feedback as a reward signal. first trained the agent with two-stage supervised learning, and then used an interaction-based reinforcement learning to finetune. Every time the user chose the best one from K responses generated by the pretrained model and then responded to this selected response. Instead of learning through being passively graded,~ proposed a model that actively asked questions to seek improvement. Active learning was applicable to both offline and online learning settings. 
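The K-best feedback scheme described above (the user picks the best of K generated responses, and the choice is logged as training signal) can be sketched as a minimal collection loop. The names `collect_kbest_feedback`, `generate_k`, `user_pick`, and the toy stand-ins below are hypothetical placeholders for illustration, not components of any cited system:

```python
def collect_kbest_feedback(generate_k, user_pick, dialogue_contexts):
    """Collect (context, preferred_response) pairs from user choices.

    generate_k(context) -> list of K candidate responses (the pretrained model).
    user_pick(context, candidates) -> index of the candidate the user selects.
    The collected pairs can later serve as supervised fine-tuning data.
    """
    feedback = []
    for context in dialogue_contexts:
        candidates = generate_k(context)
        chosen = candidates[user_pick(context, candidates)]
        feedback.append((context, chosen))
    return feedback


# Toy stand-ins for the pretrained generator and the human rater.
def toy_generate_k(context):
    return ["{} -> reply {}".format(context, i) for i in range(3)]


def toy_user_pick(context, candidates):
    # Deterministic placeholder for a human choice.
    return len(context) % len(candidates)


pairs = collect_kbest_feedback(toy_generate_k, toy_user_pick, ["hi", "how are you"])
```

In a real system the collected pairs would be fed back into fine-tuning, closing the human-in-the-loop cycle.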
argued that most conversation samples an agent saw happened after it was pretrained and deployed. Thus, they proposed a framework to train the agent from the real conversations it participated in. The agent evaluated the satisfaction score of the user from the user's response at each turn and explicitly requested user feedback when it thought that a mistake had been made. The user feedback was further used for learning. placed the interactive learning in a cooperative game and tried to learn a long-term implicit strategy via the REINFORCE algorithm. Some of these works have been adopted in industry products, and this is a very promising direction for study.", "id": "fb7b9548-89f9-498a-b8cc-7136406cea76", "level": "subsection", "origin_cites_number": 5, "parent_id": "5826cc8e-0271-43d0-9eea-71af25a29e66", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Open-Domain Dialogue Systems" ], [ "subsection", "Interactive Training" ] ], "subsections": [], "title": "Interactive Training" }, { "cite_extract_rate": 0.896551724137931, "cites": [ 2016, 2014, 1275, 2019, 7080, 2018, 2023, 2011, 2012, 2017, 2015, 7534, 2020, 7535, 2013, 1278, 1272, 2024, 8516, 2010, 768, 1886, 2022, 1895, 2021, 1279 ], "content": "\\label{Visual Dialogue}\nMore and more researchers cast their eyes to a broader space and are no longer restricted to NLP. The combination of CV and NLP, giving rise to tasks like visual question answering, has attracted a lot of interest. The VQA task is to answer a question based on the content of a picture or video. Recently, this has evolved into a more challenging task: visual dialogue, which conditions a dialogue on the visual information and dialogue history. The dialogue consists of a series of queries, and the query form is usually more informal, which is why it is more complicated than VQA. 
\n\\begin{figure}[ht]\n\\begin {center}\n\\includegraphics[width=1.0\\textwidth]{VDBERT-eps-converted-to.pdf}\n\\caption{The architecture of VD-BERT, a state-of-the-art visual dialogue system~}\n\\label{VD-BERT}\n\\end {center}\n\\end{figure} \n\\begin{figure}[ht]\n\\begin {center}\n\\includegraphics[width=1.0\\textwidth]{IMAGECHAT-eps-converted-to.pdf}\n\\caption{ Three samples from the IMAGE-CHAT dataset~}\n\\label{IMAGECHAT}\n\\end {center}\n\\end{figure} \nVisual dialogue can be seen as a multi-step reasoning process over a series of questions . learned semantic representation of the question based on dialogue history and a given image, and recurrently updated the representation. proposed a set of image-based tasks and provided strong baselines. employed R-CNN as an image encoder and fused the visual and dialogue modality with a VD-BERT. The proposed architecture achieved sufficient interactions between multi-turn dialogue and images. The proposed architecture is shown as an example model for Visual Dialogue tasks in Figure~\\ref{VD-BERT}.\nCompared with image-grounded dialogue systems, video-grounded systems are more interesting but also more challenging. There are two main challenges of video dialogue, as claimed by~. One is that both spatial and temporal features exist in the video, which increases the difficulty of feature extraction. Another is that video dialogue features span across multiple conversation turns and thus are more complicated. A GPT-2 model was applied by~, being able to fuse multi-modality information over different levels. Likewise,~ built a multi-modal transformer network to incorporate information from different modalities and further applied a query-aware attention to extract context-related features from non-text modalities. proposed a Bi-directional Spatio-Temporal Learning (BiST) leveraging temporal-to-spatial and spatial-to-temporal reasoning process and could adapt to the dynamically evolving semantics in the video. 
\nSome researchers hold different opinions on the effectiveness of dialogue history in visual dialogue. proposed that many expressions were already mentioned in previous turns and they built a visual dialogue model grounded on both image and conversation history. They further proved that better performance was achieved when grounding the model on dialogue context. However,~ argued that though with dialogue history the visual dialogue model could achieve better results, in fact only a small proportion of cases benefited from the history. Furthermore, they proved that existing evaluation metrics for visual dialogue promoted generic responses. \nThe visual dialogue task benefits a lot from the pretraining-based learning. The popularity of NLP pretraining sparked interest in multi-modal pretraining. VideoBERT is widely recognized as the pioneering work in the field of multimodal pretraining. It's a model that's been pre-trained on video frame features and text. CBT , which is similarly pretrained on video-text pairs, is a contemporary work of VideoBERT. For video representation learning, used unlabeled narrated films. More researchers have focused their attention on visual-linguistic pretraining, inspired by the early work in multi-modal pretraining. For this objective, there are primarily two types of model designs. The single-stream model is one example. used a BERT model to process the concatenation of objects and words and pre-trained it with three standard tasks. Similar methods were proposed by and , but with more pretraining tasks and larger datasets. With an adversarial training technique, further enhanced the model. employed the same architecture, but incorporated single-modal data and pre-trained the object detector. Instead of using recognized objects, sought to enter pixels directly. The object labels were used by to improve cross-modal alignment. suggested a single-stream model that learns both caption generation and VQA tasks at the same time. 
The two-stream model is another type of model architecture. suggested a two-stream model with co-attention and solely used in-domain data to train the model. introduced a similar architecture with a more complex co-attention model, which they pretrained with out-of-domain data, and improved VilBERT with multi-task learning. recently added the scene graph to the model, which improved performance. Aside from these studies, looked at the impact of pretraining dataset selection on downstream task performance.\nThe annotation of visual dialogue is laborious and thus the datasets are scarce. Recently, some researchers have tried to tackle the data insufficiency problem. collected a dataset (IMAGE-CHAT, shown in Figure~\\ref{IMAGECHAT}) of image-grounded human-human conversations in which speakers are asked to perform role-playing based on an emotional mood or style offered, since the usage of such characteristics is also a significant factor in engagingness. constructed a visual-grounded dialogue dataset. Interestingly, it additionally annotated the eye-gaze locations of the interlocutor in the image to provide information on what the interlocutor was paying attention to. proposed a method to utilize the VQA data when adapting to a new task, minimizing the requirement of dialogue data which is expensive to annotate.", "id": "36aad892-1810-4a4a-81d4-a4cd64eaa0dd", "level": "subsection", "origin_cites_number": 29, "parent_id": "5826cc8e-0271-43d0-9eea-71af25a29e66", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Open-Domain Dialogue Systems" ], [ "subsection", "Visual Dialogue" ] ], "subsections": [], "title": "Visual Dialogue" }, { "cite_extract_rate": 1, "cites": [ 1917 ], "content": "\\label{Evaluation Approaches}\nEvaluation is an essential part of research in dialogue systems. 
It is not only a way to assess the performance of agents, but it can also be a part of the learning framework which provides signals to facilitate the learning~. This section discusses the evaluation methods in task-oriented and open-domain dialogue systems.", "id": "bc9566fa-d483-4ea7-9525-13fe2649796a", "level": "section", "origin_cites_number": 1, "parent_id": "743f4886-7aea-4caf-9d50-52e5013fbaf3", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Evaluation Approaches" ] ], "subsections": [ "9bd94015-0475-4994-8ae7-7fd82646dbd6", "0cc842c0-1c24-47ec-9ef4-fe8a2955fc39" ], "title": "Evaluation Approaches" }, { "cite_extract_rate": 0.4, "cites": [ 8519, 8517, 1954, 8518 ], "content": "\\label{Evaluation Methods for Task-oriented Dialogue Systems}\nTask-oriented systems aim to accomplish tasks and thus have more direct metrics evaluating their performance such as task completion rate and task completion cost. Some evaluation methods also involve metrics like BLEU to compare system responses with human responses, which will be discussed later. In addition, human-based evaluation and user simulators are able to provide real conversation samples. \nTask Completion Rate is the rate of successful events in all task completion attempts. It measures the task completion ability of a dialogue system. For example, in movie ticket booking tasks, the Task Completion Rate is the fraction of dialogues that meet all requirements specified by the user, such as movie time, cinema location, movie genre, etc. The task completion rate was applied in many task-oriented dialogue systems~. Additionally, some works~ used partial success rate. \nTask Completion Cost is the resources required when completing a task. Time efficiency is a significant metric belonging to Task Completion Cost. 
In dialogue-related tasks, the number of conversation turns is usually used to measure the time efficiency and dialogue with fewer turns is preferred when accomplishing the same task. \nHuman-based Evaluation provides user dialogues and user satisfaction scores for system evaluation. There are two main streams of human-based evaluation. One is to recruit human labor via crowdsourcing platforms to test-use a dialogue system. The crowdsource workers converse with the dialogue systems about predefined tasks and then metrics like Task Completion Rate and Task Completion Cost can be calculated. Another is computing the evaluation metrics in real user interactions, which means that evaluation is done after the system is deployed in real use. \nUser Simulator provides simulated user dialogues based on pre-defined rules or models. Since recruiting human labor is expensive and real user interactions are not available until a mature system is deployed, user simulators are able to provide task-oriented dialogues at a lower cost. There are two kinds of user simulators. One is agenda-based simulators~, which only feed dialogue systems with the pre-defined user goal as a user message, without surface realization. 
Another is model-based simulators~, which generate user utterances using language models given constraint information.", "id": "9bd94015-0475-4994-8ae7-7fd82646dbd6", "level": "subsection", "origin_cites_number": 10, "parent_id": "bc9566fa-d483-4ea7-9525-13fe2649796a", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Evaluation Approaches" ], [ "subsection", "Evaluation Methods for Task-oriented Dialogue Systems" ] ], "subsections": [], "title": "Evaluation Methods for Task-oriented Dialogue Systems" }, { "cite_extract_rate": 0.6153846153846151, "cites": [ 1921, 1979, 8514, 2031, 2026, 1917, 2028, 2027, 2032, 2025, 2029, 7454, 2030, 1936, 7458, 7455 ], "content": "\\label{Evaluation Methods for Open-domain Dialogue Systems}\nEvaluation of open-domain dialogue systems has long been a challenging problem. Unlike task-oriented systems, there is no clear metric like task completion rate or task completion cost. Both human and automatic evaluation methods have been developed for ODD systems in recent years. Human evaluation has been adopted by many works to converse with and rate dialogue agents. However, human evaluation is not an ideal approach because human labor is expensive and the evaluation results are highly subjective, varying from person to person. Researchers tend to hire crowdsourced workers or random people to conduct human evaluation, both of which have two main drawbacks: 1. The evaluator group is highly random, and there exists a huge gap between people with different knowledge levels or from different domains. 2. Though individual bias could be weakened by increasing the number of evaluators, the evaluator group cannot be very large because of limited budgets (in the articles mentioned above, the sizes of human evaluator groups are usually 5-20). Thus, automatic and objective metrics are desirable. 
In general, there are two categories of automatic metrics in recent research: word-overlap metrics and neural metrics. \nWord-overlap Metrics, widely used in Machine Translation and Summarization tasks, calculate the similarity between the generated sequence and the ground truth sequence. Representative metrics like BLEU~ and ROUGE~ are n-gram matching metrics. METEOR~ was further proposed with an improvement based on BLEU. It identified the paraphrases and synonyms between the generated sequence and the ground truth. extended BLEU by exploiting numerical ratings of responses. argued that word-overlap metrics did not correlate well with human evaluation. These metrics are effective in Machine Translation because each source sentence has a ground truth to compare with, whereas in dialogues there may be many possible responses corresponding to one user message, and thus an acceptable response may receive a low score if simply computing word-overlap metrics. \nNeural Metrics are metrics computed by neural models. Neural methods improve the evaluation effectiveness in terms of adaptability compared with word-overlap metrics, but they require an additional training process. used an RNN and a CNN model to extract turn-level features in a sequence and give the score. proposed RUBER, which was an automatic metric combining referenced and unreferenced components. The referenced one computed the similarity between generated response representations and ground truth representations, while the unreferenced one learned a scoring model to rate the query-response pairs. learned representations of dialogue utterances using an RNN and then computed the dot-product between the generated response and the ground truth response as an evaluation score. and~ used the discriminator of a GAN framework to distinguish the generated responses from human responses. If a generated response achieved a high confidence score, this was indicative of a human-like response, and thus desirable. 
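As a concrete illustration of the word-overlap family discussed above, a clipped n-gram precision — the core quantity behind BLEU — can be computed as follows. This is a generic single-reference sketch, not the full BLEU metric (no brevity penalty, no geometric mean over n):

```python
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Clipped n-gram precision of a candidate response against a single
    reference; the core quantity behind word-overlap metrics such as BLEU."""
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    if not cand_ngrams:
        return 0.0
    # Clip each candidate n-gram count by its count in the reference,
    # so repeating a reference word cannot inflate the score.
    clipped = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    return clipped / sum(cand_ngrams.values())
```

Note how a perfectly acceptable reply that shares no words with the reference (e.g. "see you later" against "good bye now") scores 0, which is exactly the weakness of word-overlap metrics for dialogue noted above.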
\nEvaluation of open-domain dialogue systems is a hot topic at present and many researchers cast their eyes on this task recently. Some papers introduce two or more custom evaluation metrics for better evaluation, such as response diversity, response consistency, naturalness, knowledgeability, understandability, etc., to study \"what to evaluate\". evaluated the generated responses by designing two metrics. One was the informativeness metric calculating information utilization over turns. Another was the coherence metric, which was predicted by GRUs, given the response, context, and background as input. Likewise,~ designed scoring functions to compute connectivity of utterance pairs and content relatedness as two evaluation metrics and used another fusion function to combine the metrics. combined four metrics in their automatic evaluation framework: the context coherence metric based on GPT-2; phrase fluency metric based on GPT-2; diversity metric based on n-grams; logical self-consistency metric based on textual-entailment-inference. proposed a reference-free evaluation metric. They annotated responses considering the following qualities: Understandable (0-1), Maintains Context (1-3), Natural (1-3), Uses Knowledge (0-1), Interesting (1-3), Overall Quality (1-5). Furthermore, a transformer was trained on these annotated dialogues to compute the score of quality. \nApart from \"what to evaluate\", there are also a multitude of papers studying \"how to evaluate\", which focus more on refining the evaluation process.~ proposed a three-stage framework to denoise the self-rating process. They first performed dialogue flow anomaly detection via self-supervised representation learning, and then the model was fine-tuned with smoothed self-reported user ratings. Finally, they performed a denoising procedure by calculating the Shapley value and removed the samples with negative values. 
trained RoBERTa as a response scorer to achieve reference-free and semi-supervised evaluation. constructed a test set by first generating several responses based on one user message and then human evaluation was performed to annotate each response with a score, where the response with the highest score was taken as a true response and the remainder taken as false responses. Dialogue systems were further evaluated by comparing the response selection accuracy on the test set, where a cross-entropy loss was calculated between the generated response and candidate responses to perform the selection operation. Likewise,~ trained a BERT-based model to discriminate between true and false responses, where false responses were automatically generated. The model was further used to predict the evaluation score of a response based on dialogue context. argued that responses should not be simply evaluated based on their surface-level features, and instead the topic-level features were more essential. They incorporated a common-sense graph in their evaluation framework to obtain topic-level graph representations. The topic-level graph representation and utterance-level representation were jointly considered to evaluate the coherence of responses generated by open-domain dialogue systems. \nRanking is also an approach that evaluates dialogue systems effectively.~ leveraged large-scale human feedback data such as upvotes, downvotes, and replies to learn a GPT-2-based response ranker. Thus, responses were evaluated by their rankings given by the ranker. also evaluated the dialogue systems by ranking. They proposed a low-cost human-involved evaluation framework, in which different conversational agents conversed with each other and the human's responsibility was to annotate whether the generated utterance was human-like or not. 
The systems were evaluated by comparing the number of turns in which their responses were judged to be human-like.", "id": "0cc842c0-1c24-47ec-9ef4-fe8a2955fc39", "level": "subsection", "origin_cites_number": 26, "parent_id": "bc9566fa-d483-4ea7-9525-13fe2649796a", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Evaluation Approaches" ], [ "subsection", "Evaluation Methods for Open-domain Dialogue Systems" ] ], "subsections": [], "title": "Evaluation Methods for Open-domain Dialogue Systems" }, { "cite_extract_rate": 1, "cites": [ 2033 ], "content": "\\label{Data sets}\nThe dataset is one of the most essential components in dialogue system research. Nowadays, the available datasets are insufficient for both task-oriented and open-domain dialogue systems, especially for those tasks requiring additional annotations . For task-oriented dialogue systems, data can be collected via two main methods. One is to recruit human labor via crowdsourcing platforms to produce dialogues for a given task. Another is to collect dialogues in real task completions like film ticket booking. For open-domain dialogue systems, apart from dialogues collected in real interactions, social media is also a significant source of data. Some social media companies such as Twitter and Reddit provide API access to a small proportion of posts, but these services are restricted by many legal terms, which affects the reproducibility of research. As a result, many recent works in dialogue systems collect their own datasets for training and testing. \nIn this section, we review and categorize these datasets and make a comprehensive summary. 
To our best knowledge, Table~\\ref{Datasets for Task-oriented dialogue systems} and~\\ref{Datasets for Open-domain dialogue systems} cover almost all available datasets used in recent task-oriented or open-domain dialogue systems.", "id": "f65a71d6-ddc8-4d98-80e5-8bcdf95dbaa4", "level": "section", "origin_cites_number": 1, "parent_id": "743f4886-7aea-4caf-9d50-52e5013fbaf3", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Datasets" ] ], "subsections": [ "56826a6d-4d55-43b1-bf5f-ce8c1a11a5e5", "0fab74f0-8cd6-437f-aa56-de316105926e" ], "title": "Datasets" }, { "cite_extract_rate": 0.8148148148148141, "cites": [ 2036, 7452, 8521, 7331, 2039, 2033, 1548, 2037, 7528, 1962, 1908, 2034, 7076, 450, 1903, 8520, 8419, 8512, 1943, 2038, 2035, 1959 ], "content": "\\begin{longtable}{>{\\centering\\arraybackslash}y{2.5cm} y{5cm} >{\\centering\\arraybackslash}y{2cm} >{\\centering\\arraybackslash}y{2.5cm}}\n\\caption{Datasets for Task-oriented dialogue systems}\n\\label{Datasets for Task-oriented dialogue systems} \n\\\\\\hline\n\\textbf{Name}\t&\t\\centering\\arraybackslash\\textbf{Description}\t&\t\\centering\\arraybackslash\\textbf{Task}\t&\t \\textbf{Origin}\t \\\\\\hline\nSchema\t&\tA dataset mainly for dialogue state tracking.\t&\tDialogue State Tracking\t&\t~\t \\\\\\hline\nMetaLWOZ\t&\tCollected by crowdsourcing platforms, spanning over 227 tasks and 47 domains. This dataset is designed for learning in unseen domains.\t&\tDomain Transfer\t&\t~\t \\\\\\hline\nE2E\t&\tA dataset for end-to-end dialogue generation in restaurant domain. 
Data is collected in crowdsourced fashion.\t&\tEnd-to-end Task-oriented Dialogue Systems\t&\t~\t \\\\\hline\nMSR-E2E\t&\tContain dialogues spanning over 3 domains: movie-ticket booking, restaurant reservation, and taxi booking.\t&\tEnd-to-end Task-oriented Dialogue Systems\t&\t~\t \\\\\hline\nYELPNLG\t&\tA corpus consisting of utterances spanning over different restaurant attributes.\t&\tNatural Language Generation\t&\t~\t \\\\\hline\nClinical Conversation data set\t&\tIt consists of conversations between physicians and participants.\t&\tNatural Language Understanding\t&\t~\t \\\\\hline\nOOS\t&\tA large-scale dataset for intent detection.\t&\tNatural Language Understanding\t&\t~\t \\\\\hline\nATIS\t&\tA dataset consisting of voice calls from people who intend to make flight reservations.\t&\tNatural Language Understanding; Dialogue State Tracking\t&\t~\t \\\\\hline\nMultiWOZ\t&\tHuman-human written conversations with rich annotations spanning over multiple domains.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\hline\nSNIPS-NLU\t&\tTask-oriented dialogue dataset collected in a crowdsourced fashion. 
It was used to train voice assistant agents.\t&\tTask-oriented Dialogue\t&\t\\url{https://github.com/snipsco/nlubenchmark}\t \\\\\\hline\nbAbI\t&\tRestaurant table reservation dialogues.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\\hline\nJDC\t&\tA Chinese customer service dataset, consisting of context-response pairs.\t&\tTask-oriented Dialogue\t&\t\\url{https://www.jddc.jd.com}\t \\\\\\hline\nUbuntuV2\t&\tIt consists of dialogues collected via Ubuntu question-answering forum.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\\hline\nMICROSOFT DIALOGUE CHALLENGE data set\t&\tA task-oriented dataset collected via Amazon Mechanical Turk.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\\hline\nWOZ\t&\tTask-oriented data collected in crowdsourced fashion.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\\hline\nDSTC series\t&\tMulti-domain task-oriented dataset.\t&\tTask-oriented Dialogue\t&\t\\url{https://www.microsoft.com/en-us/research/event/dialog-state-tracking-challenge/}\t \\\\\\hline\nSimDial\t&\tSimulated conversations spanning over multiple domains.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\\hline\nSMD\t&\tHuman-human dialogues in weather, navigation and scheduling domain.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\\hline\nBANKING\t&\tQuestion-answer pairs with 77 categories in e-banking domain.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\\hline\nWeather forecast\t&\tA task-oriented dataset in the weather domain.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\\hline\nMedDialog-(EN,CN)\t&\tLarge scale dataset in medical domain consisting of conversations between doctors and patients\t&\tTask-oriented Dialogue\t&\t~\t \\\\\\hline\nCamRest\t&\tIt consists of human-human multi-turn dialogues in restaurant domain.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\\hline\nTaskmaster\t&\tContain dialogues spanning over 6 domains. 
It has an average of 22.9 conversational turns per dialogue.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\hline\nFrames\t&\tConversational dataset with annotations of semantic frame tracking.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\hline\nJDDC\t&\tA Chinese customer service dataset, consisting of context-response pairs.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\hline\nCourt Debate Dataset\t&\tA task-oriented dataset in the judicial field containing court debate conversations.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\hline\nTreeDST\t&\tA task-oriented dataset annotated with tree-structured dialogue states and agent acts.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\hline\nRiSAWOZ\t&\tContain utterances for 12 domains, annotated with rich semantic information.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\hline\nCambridge Restaurant\t&\tA task-oriented dataset in the restaurant booking field.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\hline\nSB-TOP\t&\tA task-oriented dataset with semantic parsing annotation. It spans over 4 domains: Reminder, Weather, Calling and Music.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\hline\nGSIM\t&\tA machine-machine task-oriented dataset. 
It covers two domains: restaurant table booking and movie ticket booking.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\\hline\nSGD\t&\tA schema-guided dataset spanning over multiple domains.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\\hline\ncite-8K\t&\tA task-oriented dataset collected in restaurant booking calls.\t&\tTask-oriented Dialogue\t&\t~\t \\\\\n\\hline\n\\end{longtable}", "id": "56826a6d-4d55-43b1-bf5f-ce8c1a11a5e5", "level": "subsection", "origin_cites_number": 27, "parent_id": "f65a71d6-ddc8-4d98-80e5-8bcdf95dbaa4", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Datasets" ], [ "subsection", "Datasets for Task-oriented Dialogue Systems" ] ], "subsections": [], "title": "Datasets for Task-oriented Dialogue Systems" }, { "cite_extract_rate": 0.592105263157894, "cites": [ 2054, 2051, 2060, 1902, 2053, 2042, 2021, 2064, 2043, 2061, 2050, 2056, 2052, 2057, 7075, 7081, 1565, 2058, 1904, 2044, 2065, 2055, 2045, 442, 650, 1145, 7082, 2062, 1890, 1882, 2041, 8522, 457, 2047, 1153, 2046, 2049, 2066, 2059, 2063, 2040, 2067, 8523, 2048, 1990 ], "content": "\\begin{longtable}{>{\\centering\\arraybackslash}y{2.5cm} y{5cm} >{\\centering\\arraybackslash}y{2cm} >{\\centering\\arraybackslash}y{2.5cm}}\n\\caption{Datasets for Open-domain dialogue systems}\n\\label{Datasets for Open-domain dialogue systems} \n\\\\\\hline\n\\textbf{Name}\t&\t\\centering\\arraybackslash\\textbf{Description}\t&\t\\centering\\arraybackslash\\textbf{Task}\t&\t \\textbf{Origin} \\\\\\hline\nLarge-Scale Corpus for Conversation Disentanglement\t&\tA dataset consisting of messages annotated with reply-structure graphs for dialogue disentanglement.\t&\tConversation Disentaglement\t&\t~\t \\\\\\hline\nDuConv\t&\tCollected in conversations between a conversation leader and a conversation follower. 
\t&\tConversation Topic\t&\t~\t \\\\\hline\nPERSUASION FOR GOOD\t&\tA topic-oriented dataset annotated with persuasion strategies.\t&\tConversation Topic\t&\t~\t \\\\\hline\nMutualFriends\t&\tA topic-oriented dataset based on bot-bot strategic conversations.\t&\tConversation Topic\t&\t~\t \\\\\hline\nSAMSum\t&\tA large-scale dialogue summary dataset.\t&\tConversation Topic\t&\t~\t \\\\\hline\nOpenDialKG\t&\tIt consists of conversations between two agents and each dialogue corresponds with a knowledge graph path annotation.\t&\tConversation Topic; Dialogue Reasoning\t&\t~\t \\\\\hline\ndoc2dial\t&\tA dataset consisting of conversations annotated with goals and associated documents.\t&\tConversation Topic; Knowledge-Grounded System\t&\t~\t \\\\\hline\nDialEdit\t&\tA dataset constructed for image editing via conversational language instructions.\t&\tConversational Image Editing\t&\t~\t \\\\\hline\nCHART DIALOGS\t&\tA dataset containing dialogues describing matplotlib plot features.\t&\tConversational Plotting\t&\t~\t \\\\\hline\nCONAN\t&\tA multilingual dataset for tackling hate speech.\t&\tDialogue Classification\t&\t~\t \\\\\hline\nDialogue NLI\t&\tAn NLI dataset with sentences annotated with entailment (E), neutral (N), or contradiction (C).\t&\tDialogue Inference\t&\t~\t \\\\\hline\nMuTual\t&\tA dialogue reasoning dataset containing English listening comprehension exams.\t&\tDialogue Reasoning\t&\t~\t \\\\\hline\nRST-DT\t&\tIt consists of samples from 385 news articles annotated with dialogue features.\t&\tDiscourse Parsing\t&\t~\t \\\\\hline\nNLPCC\t&\tA dataset consisting of emotional classification data.\t&\tEmpathetic Response\t&\t\\url{http://tcci.ccf.org.cn/nlpcc.php}\t \\\\\hline\nMELD\t&\tA multi-party conversational dataset with emotion annotations.\t&\tEmpathetic Response\t&\t~\t \\\\\hline\nEMPATHETIC DIALOGUES\t&\tA dataset containing conversations annotated with emotion labels.\t&\tEmpathetic Response\t&\t~\t 
\\\\\\hline\nIEMOCAP\t&\tContains multi-party dialogues. Each dialogue is annotated with an emotion label.\t&\tEmpathetic Response\t&\t~\t \\\\\\hline\nEmoryNLP\t&\tCollected from Friends' TV series, annotated with emotion labels.\t&\tEmpathetic Response\t&\t~\t \\\\\\hline\nMojiTalk\t&\tA large-scale dataset collected from Twitter, including emojis.\t&\tEmpathetic Response\t&\t~\t \\\\\\hline\nCBET\t&\tA dialogue dataset annotated with nine emotion labels: surprise, anger, love, sadness, joy, fear, guilt, disgust and thankfulness.\t&\tEmpathetic Response\t&\t~\t \\\\\\hline\nStanford Politeness Corpus\t&\tA conversational dataset annotated with politeness labels.\t&\tEmpathetic Response\t&\t~\t \\\\\\hline\nAIT-2018\t&\tCollected in SemEval-2018 Task 1: Affect in Tweets.\t&\tEmpathetic Response\t&\t~\t \\\\\\hline\nEMOTyDA\t&\tA dataset containing short videos about multi-party conversations, each annotated with respective emotion.\t&\tEmpathetic Response; Visual Dialogue\t&\t~\t \\\\\\hline\nWizard of Wikipedia\t&\tA large-scale dataset consisting of conversations grounded with Wikipedia knowledge.\t&\tKnowledge-Grounded System\t&\t~\t \\\\\\hline\nCMU DoG\t&\tA dataset consisting of conversations grounded with Wikipedia articles about popular movies.\t&\tKnowledge-Grounded System\t&\t~\t \\\\\\hline\nHoll-E\t&\tContains dialogues grounded with documents.\t&\tKnowledge-Grounded System\t&\t~\t \\\\\\hline\nInterview\t&\tA dataset containing multi-party conversations in the form of interviews.\t&\tKnowledge-Grounded System\t&\t~\t \\\\\\hline\nCuriosity\t&\tAn open-domain dataset annotated with pre-existing user knowledge and dialogue acts, also grounding in Wikipedia.\t&\tKnowledge-Grounded System\t&\t~\t \\\\\\hline\nKdConv\t&\tA Chinese knowledge-grounded dialogue dataset. 
\t&\tKnowledge-Grounded System\t&\t~\t \\\\\\hline\nELI5\t&\tA QA dataset grounded with retrieved documents.\t&\tKnowledge-Grounded System\t&\t~\t \\\\\\hline\nTopical Chat\t&\tA knowledge-grounded dataset where the knowledge spans over eight different topics.\t&\tKnowledge-Grounded System; Conversation Topic\t&\t~\t \\\\\\hline\nWHERE ARE YOU?\t&\tA dialogue dataset annotated with localization information.\t&\tLocalization Dialogue\t&\t~\t \\\\\\hline\nMMD\t&\tA multi-modal dataset consisting of dialogues between sales agents and shoppers.\t&\tMulti-modal Dialogue\t&\t~\t \\\\\\hline\nOpenSubtitles\t&\tA multilingual dataset made up of movie captions, containing about 8 billion words.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nNTCIR\t&\tA social media dataset collected from Sina Weibo.\t&\tOpen-domain Dialogue\t&\t\\url{http://research.nii.ac.jp/ntcir/data/data-en.html}\t \\\\\\hline\nTwitter\t&\tA social media dataset collected from Twitter.\t&\tOpen-domain Dialogue\t&\t\\url{https://github.com/Marsan-Ma-zz/chat_corpus}\t \\\\\\hline\nDouban Conversation Corpus\t&\tA social media dataset collected from Douban.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nE-commerce Dialogue Corpus\t&\tIt consists of conversations between customers and customer service staff on Taobao.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nREDDIT\t&\tA social media dataset collected from REDDIT.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nSTC-SeFun\t&\tA social media dataset collected from Tieba, Zhidao, Douban and Weibo.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nDailyDialog\t&\tA dataset consisting of daily dialogues, annotated with conversation intention and emotion information.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nPDTB\t&\tDialogue dataset annotated with discourse relations.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nLuna\t&\tDialogue dataset with Italian relation annotations.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nEdina-DR\t&\tDialogue dataset with English 
relation annotations, which is based on Luna data set.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nCornell Movie Dialog Corpus\t&\tA dialogue dataset collected via IMDB database.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nReddit Movie Dialogue Dataset\t&\tA movie dialogue dataset collected from Reddit.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nLIGHT\t&\tA dialogue dataset with configurable text adventure environment.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nThis American Life\t&\tA media dialogue dataset collected in long-form expository podcast episodes.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nRadioTalk\t&\tA media dialogue dataset collected from radio transcripts.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nFrench EPAC\t&\tA media dialogue dataset collected from news.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nTREC Conversational Assistance \t&\tAn open-domain dataset spanning 30 conversation topics.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nSearch as a Conversation\t&\tA dataset for conversations with search engines.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nAmazon Alexa Prize Competition\t&\tA dataset containing real-world conversations between Amazon Alexa customers and Gunrock, which is a champion chatbot.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nSwitchBoard\t&\tAn open-domain dataset containing English phone conversations.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nZhihu\t&\tA Chinese social media dataset with posts and comments.\t&\tOpen-domain Dialogue\t&\t\\url{https://www.zhihu.com}\t \\\\\\hline\nSPOLIN\t&\tA dataset containing yes-and conversations.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nCRD3\t&\tA dataset collected in the role-playing game Dungeons and Dragons.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nBaidu Zhidao\t&\tA Chinese social media dataset with posts and comments.\t&\tOpen-domain Dialogue\t&\t\\url{https://zhidao.baidu.com/}\t \\\\\\hline\nWebis Gmane Email Corpus 2019\t&\tA conversational 
dataset collected from 153M emails.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nLibreSpeech Corpus\t&\tContains 500 hours of speech produced by 1252 participants.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nMotivational Interviewing\t&\tA dialogue dataset about conversational psychotherapy.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nSubTle Corpus\t&\tContact Ameixa for data.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nTED-LIUM\t&\tTED-talk monologues.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nECG NLPCC 2017 Data\t&\tConversational dataset extracted from Weibo.\t&\tOpen-domain Dialogue\t&\t~\t \\\\\\hline\nSEMEVAL15\t&\tQA dataset with answer quality annotations via Amazon Mechanical Turk.\t&\tQuestion Answering\t&\t~\t \\\\\\hline\nAMAZONQA\t&\tA QA dataset solving one-to-many problems.\t&\tQuestion Answering\t&\t~\t \\\\\\hline\nTGIF-QA\t&\tA video-grounded QA dataset.\t&\tQuestion Answering\t&\t~\t \\\\\\hline\nQuAC\t&\tA QA dataset with 14K QA dialogues.\t&\tQuestion Answering\t&\t~\t \\\\\\hline\nSQuAD\t&\tA question-answering dataset collected in crowdsourced fashion.\t&\tQuestion Answering\t&\t~\t \\\\\\hline\nLIF\t&\tA dataset constructed based on QuAC.\t&\tQuestion Answering\t&\t~\t \\\\\\hline\nYelp\t&\tIt consists of customer reviews from the Yelp Dataset Challenge.\t&\tResponse Retrieval\t&\t~\t \\\\\\hline\nDebates\t&\tThe dataset consists of debates on Congressional bills.\t&\tResponse Retrieval\t&\t~\t \\\\\\hline\nPERSONACHAT\t&\tIt provides profile information of the agents and background of users.\t&\tSpeaker Consistency and Personality Response\t&\t~\t \\\\\\hline\nKvPI\t&\tContains consistency annotations between response and corresponding key-value profiles.\t&\tSpeaker Consistency and Personality Response\t&\t~\t \\\\\\hline\nConvAI2\t&\tA dataset constructed on the basis of Persona-Chat, each conversation having profiles from a set containing persona candidates.\t&\tSpeaker Consistency and Personality Response\t&\t~\t 
\\\\\\hline\nPEC\t&\tAn open-domain dataset annotated with persona labels.\t&\tSpeaker Consistency and Personality Response; Empathetic Response\t&\t~\t \\\\\\hline\nGuessWhat?!\t&\tA visual dialogue dataset for a two-player game about object recognition.\t&\tVisual Dialogue\t&\t~\t \\\\\\hline\nVisDial\t&\tA visual dialogue dataset whose images are obtained from the COCO dataset.\t&\tVisual Dialogue\t&\t\\url{https://visualdialog.org/data}\t \\\\\\hline\nAVSD\t&\tA video-grounded dialogue dataset.\t&\tVisual Dialogue\t&\t~\t \\\\\\hline\nVFD\t&\tA visual dialogue dataset annotated with unique eye-gaze locations.\t&\tVisual Dialogue\t&\t~\t \\\\\\hline\nPhotoBook\t&\tA dataset for task-oriented visual dialogues.\t&\tVisual Dialogue\t&\t~\t \\\\\\hline\nIGC\t&\tA dataset containing conversations discussing a given image.\t&\tVisual Dialogue\t&\t~\t \\\\\\hline\nImage-Chat\t&\tContains conversations grounded with images. The conversations are also annotated with personality.\t&\tVisual Dialogue; Speaker Consistency and Personality Response\t&\t~\t \\\\\n\\hline\n\\end{longtable}", "id": "0fab74f0-8cd6-437f-aa56-de316105926e", "level": "subsection", "origin_cites_number": 76, "parent_id": "f65a71d6-ddc8-4d98-80e5-8bcdf95dbaa4", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Datasets" ], [ "subsection", "Datasets for Open-domain Dialogue Systems" ] ], "subsections": [], "title": "Datasets for Open-domain Dialogue Systems" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 2014, 2068, 1921 ], "content": "\\label{Conclusions and Trends} \nMore and more researchers are investigating conversational tasks. One factor contributing to the popularity of conversational tasks is the increasing demand for chatbots in industry and daily life. Industry agents like Apple's Siri, Microsoft's Cortana, Facebook M, Google Assistant, and Amazon's Alexa have brought huge convenience to people's lives. 
Another reason is that a considerable amount of natural language data is in the form of dialogues, which contributes to the efforts in dialogue research. \nIn this paper we discuss dialogue systems from two perspectives: model and system type. Building dialogue systems is a complicated but promising task because it involves the whole process of communication between agent and human. The works of recent years show an overwhelming preference towards neural methods, whether in task-oriented or open-domain dialogue systems. Neural methods outperform traditional rule-based methods, statistical methods and machine learning methods because neural models have stronger fitting ability and require less hand-crafted feature engineering. \nWe systematically summarized and categorized the latest works in dialogue systems, and also in other dialogue-related tasks. We hope these discussions and insights provide a comprehensive picture of the state-of-the-art in this area and pave the way for further research. Finally, we discuss some possible research trends arising from the works reviewed: \n\\paragraph*{Multimodal dialogue systems} The world is multimodal and humans observe it via multiple senses such as vision, hearing, smell, taste, and touch. In a conversational interaction, humans tend to make responses not only based on text, but also on what they see and hear. Thus, some researchers argue that chatbots should also have such abilities to blend information from different modalities. There are some recent works trying to build multimodal dialogue systems~, but these systems are still far from mature. \n\\paragraph*{Multitask dialogue systems} Dialogue systems are categorized into task-oriented and open-domain systems. Such a research boundary has existed for a long time because task-oriented dialogue systems involve dialogue states, which constrain the decoding process. 
However, works in end-to-end task-oriented dialogue systems and knowledge-grounded open-domain systems provide a possibility of blending these two categories into a single framework, or even a single model. Such blended dialogue systems perform as assistants and chatbots simultaneously. \n\\paragraph*{Corpus exploration on the Internet} In Section~\\ref{Data sets} we reviewed many datasets for dialogue systems training. However, data is still far from enough to train a perfect dialogue system. Many learning techniques are designed to alleviate this problem, such as reinforcement learning, meta-learning, transfer learning, and active learning. But many works ignore a significant source of information, which is the dialogue corpus on the Internet. There is a large volume of conversational data on the Internet, but this raw corpus is largely inaccessible because much of it is in a messy condition. In the future, dialogue agents should be able to explore useful corpora on the Internet in real-time for training. This can be achieved by standardizing online corpus access and their related legal terms. Moreover, real-time conversational corpus exploration can be an independent task that deserves study.\n\\paragraph*{User modeling} User modeling is a hot topic in both dialogue generation~ and dialogue systems evaluation~. Basically, the user modeling module tries to simulate the real decisions and actions of a human user. It makes decisions based on the dialogue state or dialogue history. In dialogue generation tasks, modeling the user helps the agent converse more coherently, based on the background information or even speaking habits. Besides that, a mature user simulator can provide an interactive training environment, which reduces the reliance on annotated training samples when training a dialogue system. In dialogue systems evaluation tasks, a user simulator provides user messages to test a dialogue agent. 
More recent user simulators also give feedback concerning the responses generated by the dialogue agent. However, user modeling is a challenging task, since both explicit user simulation and implicit user modeling are comparable in difficulty to response generation itself. Since response generation systems are not perfect yet, user modeling can still be a topic worthy of study.\n\\paragraph*{Dialogue generation with a long-term goal} Most of our daily conversations are chitchats without any purpose. However, there are quite a few scenarios when we purposely guide the conversation content to achieve a specific goal. Current open-domain dialogue systems tend to model the conversation without a long-term goal, which does not exhibit enough intelligence. There are some recent works that apply reinforcement policy learning to model a long-term reward \nwhich encourages the agent to converse with a long-term goal, such as the work of~. This line of work points towards stronger artificial intelligence, which is useful in some real-life applications such as negotiation or story-telling chatbots.\n\\section*{Acknowledgements}\nThis research/project is supported by A*STAR under its Industry Alignment Fund (LOA Award I1901E0046).\n\\bibliographystyle{spbasic} \n\\bibliography{Bib} \n\\end{document}", "id": "641bce50-fbff-48dd-9b23-43912a76e4cb", "level": "section", "origin_cites_number": 9, "parent_id": "743f4886-7aea-4caf-9d50-52e5013fbaf3", "prefix_titles": [ [ "title", "Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey" ], [ "section", "Conclusions and Trends" ] ], "subsections": [], "title": "Conclusions and Trends" } ]
10
[ 1107, 1149, 1878, 1879, 7478, 7455, 1876, 7074, 8506, 1877, 514, 7075, 305, 1881, 7528, 499, 1882, 97, 1880, 9092, 302, 1883, 243, 39, 7076, 1885, 1890, 1887, 1886, 8507, 1891, 2401, 1884, 1889, 1888, 8508, 1113, 7529, 1893, 1892, 1894, 38, 7077, 9093, 9109, 1898, 1896, 1695, 1897, 1895, 303, 168, 1902, 1899, 1904, 7302, 7530, 8385, 1903, 1900, 1901, 7371, 7, 7078, 7370, 790, 1499, 1905, 167, 1114, 1909, 1906, 1908, 1907, 7531, 1409, 7477, 1910, 1911, 1390, 1912, 1914, 7217, 1921, 7079, 1923, 1920, 1925, 1918, 1928, 1919, 1926, 1917, 1915, 1913, 1686, 1922, 1348, 1916, 1924, 1927, 1933, 1931, 1935, 8509, 1932, 1937, 1934, 1165, 1936, 1929, 1930, 9117, 7483, 1938, 1939, 8510, 1942, 1940, 1941, 1946, 1947, 1948, 1944, 1943, 1945, 1949, 1951, 1954, 1950, 1952, 1953, 1955, 1958, 1957, 1961, 1962, 1956, 1963, 1960, 8511, 1959, 1966, 8512, 1964, 1967, 1965, 1968, 1970, 1969, 8513, 1976, 1977, 1971, 1972, 1975, 1978, 1973, 1974, 1979, 1981, 1980, 7532, 8514, 1987, 1985, 1984, 1986, 1983, 1982, 1988, 1989, 7481, 1991, 1990, 1995, 1993, 2001, 1996, 7533, 1999, 1997, 1994, 2000, 1992, 1998, 2002, 2004, 2003, 2008, 2005, 2006, 2007, 2009, 8515, 7459, 2016, 2014, 1275, 2019, 7080, 2018, 2023, 2011, 2012, 2017, 2015, 7534, 2020, 7535, 2013, 1278, 1272, 2024, 8516, 2010, 768, 2022, 2021, 1279, 8519, 8517, 8518, 2031, 2026, 2028, 2027, 2032, 2025, 2029, 7454, 2030, 7458, 2033, 2036, 7452, 8521, 7331, 2039, 1548, 2037, 2034, 450, 8520, 8419, 2038, 2035, 2054, 2051, 2060, 2053, 2042, 2064, 2043, 2061, 2050, 2056, 2052, 2057, 7081, 1565, 2058, 2044, 2065, 2055, 2045, 442, 650, 1145, 7082, 2062, 2041, 8522, 457, 2047, 1153, 2046, 2049, 2066, 2059, 2063, 2040, 2067, 8523, 2048, 2068 ]
1.083216
[ "Richard Dazeley", "Peter Vamplew", "Francisco Cruz" ]
Explainable reinforcement learning for broad-XAI: a conceptual framework and survey
2021
2021-08-20T05:18:50Z
cs.AI
Broad Explainable Artificial Intelligence (\textit{Broad-XAI}) moves away from interpreting individual decisions based on a single datum and aims to provide integrated explanations from multiple machine learning algorithms into a coherent explanation of an agent’s behaviour that is aligned to the communication needs of the explainee. Reinforcement Learning (RL) methods, we propose, provide a potential backbone for the cognitive model required for the development of Broad-XAI. RL represents a suite of approaches that have had increasing success in solving a range of sequential decision-making problems. However, these algorithms all operate as black-box problem solvers, where they obfuscate their decision-making policy through a complex array of values and functions. EXplainable RL (XRL) is a relatively recent field of research that aims to develop techniques to extract concepts from the agent's: perception of the environment; intrinsic/extrinsic motivations/beliefs; Q-values, goals and objectives. This paper aims to introduce a conceptual framework, called the Causal XRL Framework (CXF), that unifies the current XRL research and uses RL as a backbone to the development of Broad-XAI. Additionally, we recognise that RL methods have the ability to incorporate a range of technologies to allow agents to adapt to their environment. CXF is designed for the incorporation of many standard RL extensions and integrated with external ontologies and communication facilities so that the agent can answer questions that explain outcomes and justify its decisions. This paper aims to: establish XRL as a distinct branch of Explainable Artificial Intelligence (XAI); introduce a conceptual framework for XRL; review existing approaches explaining agent behaviour; and identify opportunities for future research. Finally, this paper will discuss how additional information can be extracted and ultimately integrated into models of communication, facilitating the development of Broad-XAI.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "764857f2-7694-4295-9f0c-97d9b42fc288", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Explainable reinforcement learning for broad-XAI: a conceptual framework and survey" ] ], "subsections": [ "0425b479-03b8-41ed-b4b4-e2f51453e399", "6dc72259-37ed-463c-a276-92c3d3719a1c", "f80a8ecb-1143-4a2a-b637-ad3fc834f0f0", "6767e243-8c89-4c62-9618-f81e30a5eb31", "eb006b52-7d27-46c8-93fd-2b3539a15b1f", "afb54c54-3a76-432d-b11f-bd0d4d1d33ad", "2dd5f5b4-e62c-4bdb-8223-f5bd1220aad1" ], "title": "root" }, { "cite_extract_rate": 0.148148148148148, "cites": [ 4538, 1798, 4537, 8801 ], "content": "\\label{Sec: Introduction}\nSuccesses, such as AlphaGo , autonomous vehicles and playing atari video games , saw the MIT Technology Review list Reinforcement Learning (RL) as one of the top ten technologies of 2017 . However, while RL can often solve complex sequential decision-making problems, the algorithms currently operate as a black-box, where experts must analyse vast amounts of data and functions to determine why they make particular decisions. For example, during AlphaGo's second challenge against Lee Sedol (ranked 9-dan) AlphaGo's 37th turn surprised both commentators and Lee Sedol, which turned the course of the game in AlphaGo's favour . David Silver, DeepMind researcher, reportedly had no insight into why AlphaGo made such a creative move until he had investigated the actual calculations made by the program . For these systems to go the next step and be used by everyday non-expert users that are not able to inspect an agent's internal representation of its policy, they must be able to provide explanations for their behaviour.\nInvestigations into the development of explanation facilities for Artificial Intelligence (AI) have progressed steadily for nearly fifty years. 
For instance, Knowledge-Based Systems (KBS) researchers and designers such as \\citeauthor{shortliffe1975model} first discussed approaches to providing explanations in the mid-70s through rule tracing. These were later applied in a range of early projects such as \\citeauthor{davis1977production} , \\citeauthor{swartout1983xplain} and \\citeauthor{chandrasekaran1988explanation} . These early ideas were later extended in domains such as Bayesian networks , early neural network systems and recommender systems . \nThe last decade has seen a surge in the uptake of machine learning systems, while at the same time the systems have become increasingly obfuscated and non-transparent to end-users. This has resulted in significant growth in research, and funding, for the development of eXplainable Artificial Intelligence (XAI) and Interpretable Machine Learning (IML)\\footnote{The literature also uses the term Explainable Machine Learning (XML) interchangeably with IML. As most of this work is also focused on the interpretation of algorithms' decision-making this paper will refer to all this work as IML.} . Drivers for this increase include: increased social anxiety towards automated systems; increases in funding such as Defense Advanced Research Projects Agency (DARPA) ; legislative changes such as the European Union's new General Data Protection Regulation ; and, increased interest from futurists and start-up companies like AGI Innovations and bons.ai .\nThe majority of current research has focused on the interpretation of an AI's decision. \\citeauthor{miller2017explainable} , emphasizes this view, stating that AI researchers typically build explainability from their perspective resulting in mostly interpretable or debugging like explanations. However, for AI to succeed, it must provide trusted and socially acceptable systems and, therefore, should be modelled on philosophical, psychological and cognitive science models of human explanation. 
\\citeauthor{dazeley2021levels} identified a set of levels of explainability that are adapted from Animal Cognitive Ethology’s levels of intentionality , combined with human social contexts . After reviewing the literature across these levels, \\citeauthor{dazeley2021levels} shows that the majority of XAI research is focused on the lowest level of Zero-order explanations. Furthermore, that a fully \\textit{Broad-XAI} system requires the full range of XAI levels to provide an integrated conversational explanation. While \\citeauthor{dazeley2021levels} argues that a mix of technologies can be used to develop broad-XAI, RL approaches were identified as one approach for providing a backbone supporting explanations across multiple levels of explainability.\nResearch into explaining a decision made by a RL-based agent has been increasing with several papers being published recently. Occasionally this work is subsumed under the banner of XAI and IML. XAI covers explanation of any AI based decision and, therefore, encompasses a broad range of decision-making situations --- including those conducted in an RL agent. \nHowever, explainability for an RL agent, while clearly a subset of XAI and with similarities to IML, has distinct characteristics that requires its explicit separation from current XAI and IML research. The term eXplainable Reinforcement Learning (XRL) has begun to emerge recently to cover research into explaining agent's decisions during temporally separated decision-making tasks. Separating explanation for RL from general XAI and IML provides the opportunity for RL researchers to more easily identify potential avenues of research and to track developments for providing explanations from an RL agent. \nConsidering explanation in an RL context allows extra focus on the types of explanations possible in RL and how they can be combined to provide levels of explanation to better facilitate understanding and acceptance by different users. 
\\citeauthor{heuillet2020explainability} and \\citeauthor{wallkotter2021explainable} have both provided in-depth surveys of the issues and abilities that reinforcement learning and embodied agents can provide. These papers pull together a number of papers that have explored the potential of explainable systems in interactive temporal agents. In this paper, we aim to go beyond exploring the current work alone and instead put forward a conceptual framework that sets up a structure for providing Broad-XAI. The objective is to promote the research and development of systems that can explain behaviour from integrated systems built on a foundation of RL. \nInteractive temporal agents built on this framework would be able to explain decisions and outcomes that provide for the three key areas of human explanation identified by \\citeauthor{miller2017explanationBook} : contrastive explanation, attribution theory and explanation selection. This framework will be tied to an accepted psychological model of explanation that allows for user controlled and conversational levels of explanation, as discussed by \\citeauthor{dazeley2021levels} . In so doing this paper suggests that, if RL decisions can be explained using human models of explanation, then they can build more trust and social acceptance. In presenting this framework, this paper will discuss plausible approaches to developing each component, as well as identify current work in each area. \nThis paper is structured with six further parts. The next section will provide a background to XAI and argue how XRL presents a distinct domain to be pursued. Section \\ref{Sec: Framework} will propose the conceptual framework for XRL and discuss how this integrates with human models of explainability. Section \\ref{Sec:Review} will provide a review of current approaches to the initial stage of the framework, while section \\ref{Sec:Opportunities} will identify future research opportunities for the advanced stages of the framework. 
Section \\ref{Sec:Using} will discuss how the framework can be integrated into models of communication to better facilitate the development of \\textit{Broad-XAI}. Finally, section \\ref{Sec:Conclusion} will summarise the paper and its contributions.", "id": "0425b479-03b8-41ed-b4b4-e2f51453e399", "level": "section", "origin_cites_number": 27, "parent_id": "764857f2-7694-4295-9f0c-97d9b42fc288", "prefix_titles": [ [ "title", "Explainable reinforcement learning for broad-XAI: a conceptual framework and survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.055555555555555004, "cites": [ 4538, 4539 ], "content": "\\label{Sec: XAI}\n\\citeauthor{Harari2016} suggests that humans have always been a socially oriented species that have utilised their unique ability to articulate myths as integral to the social fabric. A myth is a story that aims to \\textit{explain} historical events or natural/social phenomena , which helps guide future behaviour. There is no onus on the myth to be grounded in fact (e.g. religion, ideology, nationalism, money), simply that it provides an explanation for the world that allows us to operate socially --- if everyone else accepts the myth then large groups of people can be socially coordinated . The emphasis here is that explanation is fundamental to human social interaction and trust, and therefore, key to the social acceptance of artificial intelligent agents. However, while explanation has been studied by philosophers since Socrates, and over the last fifty years by psychologists and cognitive scientists, what it actually is, is still an open question . 
As with the development of Artificial Intelligence, where research is hampered by people's poor understanding of intelligence, research into explainability is similarly restricted by a poor understanding of human explanation.\nEXplainable Artificial Intelligence (XAI) is the general title given to the field of research aiming to generate explanations of AI systems that satisfy people's requirements in understanding and accepting the decisions made. There is a huge body of work providing a range of ways of interpreting black-box algorithms with mostly limited success. Various surveys have reviewed some of this work . \\citeauthor{miller2017explainable} , however, argues that the majority of researchers make XAI systems that are specific to their area of AI and that the primary aim behind these systems is to debug --- rather than also considering the end-users' requirements. For instance, there are many explanation systems developed for image processing Convolutional Neural Networks (CNN) that universally focus on identifying areas of an image or the parts of the network that contributed the most to a particular result . \\citeauthor{dazeley2021levels} suggests that these `narrow' XAI approaches that only focus on the individual task at hand do not provide the details required by users of the ever-increasing integrated intelligences currently appearing in the market. These emerging systems, such as autonomous cars, require \\textit{Broad-XAI} approaches that merge the decision-making of several integrated systems into a coherent explanation .\n\\citeauthor{dazeley2021levels} suggest that most XAI research, which is often referred to as Interpretable Machine Learning (IML), corresponds to Zero-order (Reaction) explanations --- where the zero refers to the absence of any explanation of the system's intentionality. Such approaches focus on explaining how the input just received was interpreted and how that input affected the resulting output. 
They argue that this foundational level is crucial to the development of Broad-XAI, but higher levels need to be developed for everyday users to accept decisions made by these systems. \\citeauthor{dazeley2021levels} suggests a set of levels, reproduced in Figure \\ref{fig:Levels}, that build up an explanation based on the level of intentionality utilised when making the decision. For instance: First-order (Disposition) details an agent's intention such as its current goal or objective; Second-order (Social) justifies its behaviour based on a prediction of other actors' intentions; and, N\\textsuperscript{th}-order (Cultural) provides an explanation of how it has modified its actions based on what it believes other actors' expectations are of its behaviour. Interestingly, there have been several attempts to develop approaches for these higher levels. \\citeauthor{dazeley2021levels}'s meta-survey identifies diverse subfields of XAI research, such as Explainable Agency , Goal-driven XAI , Memory-aware XAI , Socially-aware XAI ; Cultural-aware XAI , Meta-explanation , Utility-driven XAI and Deceptive Explanations .\n\\begin{figure}\n \\centering\n \\includegraphics[trim={5.2cm 3.0cm 6.2cm 2.6cm},clip,width=20.0pc]{Images/F1_Levels.pdf}\n \\caption{Levels of Explanation for XAI as proposed by \\citeauthor{dazeley2021levels} . Zero-order explanations cover an agent's immediate reaction to its environment and provides no indication of intentionality. First-order explanations justifies an agent's behaviour based on its current goal or objective, which is based on its current disposition. A Second-order explanation details an agent's behaviour in the context of the social situation it perceives, taking into account other actors in the environment. N\\textsuperscript{th}-order explanations discuss how the agent has modified its behaviour on account of cultural expectations it believes are currently relevant. 
Meta explanations reflect on the reasoning process used in generating the explanation.}\n \label{fig:Levels}\n\end{figure}\nAll these subfields, however, are focused on developing approaches for explaining that individual component of an explanation, whereas Broad-XAI requires an integrated approach across all levels that affect an agent's decision. RL is a Machine Learning technique that potentially covers all these levels to some degree and offers a starting point for developing integrated explanations. However, current research in this space is relatively limited. Hence, the aim in this paper is to present a conceptual framework of how RL can be used to provide explanations across all levels of explainability, and thereby provide a foundation for the development of Broad-XAI.
While trialling a sequence of actions, it will occasionally receive feedback in the form of a positive or negative reward, which it will then attribute to those actions taken, reinforcing those behaviours to increase or decrease their selection in the future. \nThis has similarities to supervised learning, in that an agent learns a mapping from input (state) to output (action), but unlike supervised approaches the reward can be distributed temporally, as it may not receive the reward until many actions have been taken. Formally, as defined by \citeauthor{Sutton2018book} and shown in Figure \ref{fig:RL}, in the RL model the agent and environment interact through a series of discrete time steps, $t$. Each time step the agent receives a representation of the environment's current \textit{state}, $s_t \in \mathcal{S}$, where $\mathcal{S}$ is the set of all possible states. In a fully Markov Decision Process (MDP)\footnote{RL is also often applied in environments that are not fully Markov. For instance, Semi-MDPs require information from previous states to determine the outcomes of actions taken in future states. There is also significant work on Hidden, Partially Observable, Continuous-time and Multiobjective Markov processes, etc.} the agent uses only this state information to select an \textit{action}, $a_t \in \mathcal{A}(s_t)$, where $\mathcal{A}(s_t)$ represents the set of all possible actions in state, $s_t$. In the subsequent time step, $t+1$, the agent receives a numerical \textit{reward}, $R_{t+1} \in \mathcal{R} \subset \mathbb{R}$, along with the new state, $s_{t+1}$. \nEssentially, an RL agent learns a mapping from each state to an action, which expresses the agent's behaviour. In model-based methods the agent optimises the trajectory of its behaviour to minimise cost, while value-based methods maximise the reward explicitly through a value-function.
This mapping is commonly referred to as a \textit{policy}, and is denoted $\pi$, where $\pi(s, a)$ represents an individual mapping from state, $s$, to action, $a$\footnote{Methods used for prediction rather than control often learn a value per state rather than per state/action pair.}. \n\begin{figure}\n \centering\n \includegraphics[trim={1.8cm 5.8cm 4.5cm 6.5cm},clip,width=20pc]{Images/F2_RL_model.pdf} \n \caption{Standard Reinforcement Learning model as presented by . An agent interacts with an environment through a series of discrete interactions. Each iteration it receives the current state of the environment and decides on an action to be taken. The next iteration it receives the new state and sometimes will also receive a numerical, positive or negative, reward which is used to train the previous choices.}\n \label{fig:RL}\n\end{figure}\nThere are numerous extensions to the basic RL approach that are frequently used in the literature. These are not the focus of this paper, but they do frequently add interesting information to the RL approach that allows for significantly improved explanations, and hence need some discussion. For instance, one difficulty with RL is that, as the state space grows, so does the complexity of the agent's search for a solution. Hence, function approximation techniques such as Neural Networks are frequently utilised, giving rise to the field of Deep RL (DRL) . Secondly, while most RL assigns a single goal to an agent, such as pick up the rubbish in the room and put it in the bin, there is substantial work in multi-goal RL. In such systems the agent not only must achieve its goal, but must also select the appropriate sub-goal to pursue . Finally, a goal represents the agent's ultimate objective; however, Multiobjective RL (MORL) assumes that there can often be other conflicting objectives that also need to be balanced with the primary objective .
For instance, an agent may have the goal of tidying the room, and therefore its primary objective is to do this efficiently; however, it may have a secondary objective not to damage any delicate things while accomplishing the primary objective . \nRL, from an explanation point of view, is of particular interest as it is often regarded as differing from supervised learning approaches . Supervised Learning techniques map each input to an output individually, so on their own the only explanation required is to identify the input components or the processing stages that created the resulting classification. Each instance classified is regarded as a standalone instance and any \textit{local} explanation is inherently based on this fact. Additionally, classifiers may provide \textit{global} explanations that show how particular hyper-parameters or sets of training examples caused different outcomes for the classifier as a whole . These causal explanations are important for system developers or designers in understanding issues like training bias. However, supervised methods do not typically provide a mechanism for providing \textit{local} causal explanations that explain individual decisions or behaviours of a system for non-technical end-users.\nRL-based systems, however, have an implicit relationship between each instance. This is because the next state has only been visited because of the action taken in the previous state\footnote{Assuming a purely deterministic MDP. Clearly in many situations the new state only partially results from the previous state/action and sometimes other factors could also contribute to the resulting state, such as wheel slippage, other actors in the environment etc.}. This creates a temporal dependency between states, actions and subsequent states.
These temporal dependencies, typically referred to as \textit{transitions} and denoted as $T(s_t, a, s_{t+1})$, provide an implied causation for that individual transition. A sequence of transitions, either when reflecting on past transitions or a prediction of future transitions, can potentially provide causal networks that can be used to explain a number of details, such as why actions were chosen according to some long-term goal . So, while an individual transition is similar to an individual classification in supervised methods, the temporal sequence of transitions allows us to provide causal-based, temporally-extended explanations. \nAdditionally, supervised learning uses the learnt mapping to provide a classification or regression value with the aim of getting the \textit{`right'} answer, whereas RL aims to maximise a reward signal, which symbolises the goal or objective of the agent. Many approaches to RL have been developed to identify sub-goals , or may have alternative objectives that the agent can switch between, such as . These approaches mean the aim of the agent that guides its behaviour is not automatically going to be known to people affected by an agent operating in a human-agent shared environment. These approaches allow us to explain an agent's intentionality behind its behaviour, and thus facilitate the provision of First-order explanations . \nThese fundamental differences between RL and supervised approaches to machine learning require us to think differently about explanation than simple interpretation --- the common approach in Machine Learning. Of interest is the ability to provide introspective, causal and contrastive explanations within a single platform. RL is an approach that allows us to potentially develop Broad-XAI systems. The aim of the remainder of this paper is to develop and present a conceptual framework for the development of Broad-XAI utilising RL as the basic backbone.
Within the context of this framework this paper will survey current attempts to provide explanations (section \ref{Sec:Review}) and discuss potential approaches, not yet attempted, that will promote further research and development into Broad-XAI (section \ref{Sec:Opportunities}).
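To make concrete the idea that a logged sequence of transitions $T(s_t, a, s_{t+1})$ can be replayed as a chain of implied causes, the following minimal Python sketch illustrates the concept. It is an illustration of the ideas discussed in this section, not an implementation from the surveyed literature, and all names (e.g. `TransitionLog`, the room-tidying states) are hypothetical:

```python
from collections import namedtuple

# Hypothetical illustration: each recorded transition T(s_t, a, s_{t+1}) is an
# implied cause, so an episode can be rendered as a temporally-extended
# causal explanation rather than a set of standalone classifications.
Transition = namedtuple("Transition", ["state", "action", "reward", "next_state"])

class TransitionLog:
    """Records the temporal sequence of transitions for one episode."""

    def __init__(self):
        self.episode = []

    def record(self, state, action, reward, next_state):
        self.episode.append(Transition(state, action, reward, next_state))

    def causal_chain(self):
        # Each action is explained as the cause of the state that followed it;
        # rewards are attached where they were received.
        return [
            f"in {t.state}, taking '{t.action}' led to {t.next_state}"
            + (f" (reward {t.reward:+g})" if t.reward else "")
            for t in self.episode
        ]

log = TransitionLog()
log.record("at_door", "open_door", 0, "in_room")
log.record("in_room", "pick_up_rubbish", 0, "holding_rubbish")
log.record("holding_rubbish", "drop_in_bin", 1, "room_tidy")
for step in log.causal_chain():
    print(step)
```

Reflecting on a past episode in this way, or predicting a future sequence of transitions, provides the causal backbone over which the explanations surveyed below can be constructed.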
\nDrawing on knowledge structures such as scripts, plans, goals, and themes suggested by \\citeauthor{schank2013scripts} , \\citeauthor{bohm2015people} extended ideas in attribution theory to develop a Casual Explanation Network (CEN), Figure \\ref{fig:CEN}, based on the actual explanations provided by people. This model emphasizes preconceptions about a causal relationship when providing explanations of behaviour. It builds on the idea that people will often want to explain others' behaviour not only in terms of why a particular behaviour occurred, but also what happened before to cause that behaviour and what is likely to happen in the future. \\citeauthor{bohm2015people} propose a taxonomy that classifies both behaviour and explanations and is built around the intentionality that lead to the behaviour.\n\\begin{figure}\n \\centering\n \\includegraphics[trim={1.4cm 2.2cm 1.2cm 2.8cm},clip,width=21pc]{Images/F3_Causal_Network.pdf}\n \\caption{A reproduction of the Causal Explanation Network (CEN) model for human lay causal explanations as suggested by \\citeauthor{bohm2015people} . Each node represents a component used by people when explaining a person's behaviour, while the arcs between nodes indicate the causal links between these concepts when people provide an explanation.}\n \\label{fig:CEN}\n\\end{figure}\nThe CEN, Figure \\ref{fig:CEN} identifies seven categories that are relevant when considering the causal thinking about an actor's behaviour. This network is represented with a directed graph consisting of two sources and one sink. The end point, or sink, is the \\textit{outcome}, which is the final result of any behaviours. These outcomes are a result of either a person's intentional goal-directed \\textit{actions} or as a result of unintentional and uncontrolled \\textit{events}, such as tripping over. A person's \\textit{goal}, represents the future states that the person is striving for, which can be caused by higher-order goals. 
The goal can also be caused by the \textit{temporary state} or what can be thought of as their momentary disposition based on emotions, evaluations, mental states, motivational states, or bodily states (e.g. hunger, pain). This temporary state (momentary disposition) is in turn affected by the person's personality traits or attitudes, referred to as \textit{disposition}, that are the result of long-term, ingrained, culturally-based behaviours. The temporary state can also be caused by \textit{stimulus attributes}, representing the features of the person or object toward which the behaviour was directed. For example, a person explaining the outcome of only passing an exam may state that it was too difficult (stimulus attribute) causing them to be upset (temporary state) so they altered their goal to make sure they at least passed. \nFigure \ref{fig:CEN} shows causal lines between these nodes indicating the causal directions provided in a person's explanations. These do not necessarily reflect the full and direct sequence of causes for outcomes, but they do represent the causal explanations that people \textit{typically} use . For instance, if a person trips (event) they may explain that they are clumsy (disposition) and that, fearing injury (temporary state), they attempted to arrest their fall (goal) by reaching out their hand (action), resulting in scratches on their hand. When asked what happened to their hand, they may provide the full causal path or simply explain the shortened causal path indicating they had tripped. This allows the explainee to fill in the gaps with their own general understanding of probable causes. Similar choices are provided for causal paths between other nodes. In this way, an explanation does not always require the full causal path from event, stimulus attribute or disposition through goal and action.
This approach elegantly agrees with \citeauthor{lombrozo2007simplicity}'s suggestion that an explanation should rely on as few causes (simple) as possible that cover the outcomes. \nThe CEN's focus on causal behaviour being the basis of explanations of intentionality aligns with \citeauthor{dazeley2021levels}'s suggested levels of explanation for XAI. These levels were built upon Animal Ethology's idea of explaining behaviour through levels of intentionality . Furthermore, the taxonomy of causal behaviour suggested in the CEN aligns well with the operating paradigm of an RL agent, and therefore, its application to XRL would be useful in providing structure to the generation of causal explanations from an RL agent. This paper proposes to merge these ideas from \citeauthor{dazeley2021levels} with the CEN, suggested by \citeauthor{bohm2015people} , to form a framework, referred to as the Causal XRL Framework (CXF), and taxonomy for how XRL can generate causal explanations.\nFigure \ref{fig:Framework} is an adaptation of Figure \ref{fig:CEN} to facilitate the same causal pathways for explanation, but with categories aligned to RL and those indicated by \citeauthor{dazeley2021levels}'s suggested levels of explanation. Included in this diagram is a mapping of XAI levels indicating the degree of intentionality that can be provided at each category of behaviour. This causal structure is intended to operate in a similar way to that suggested by \citeauthor{bohm2015people} . An \textit{outcome}, represented by changes in the environment or the agent itself, is caused by either an intentional \textit{action} by the agent or by an unintended or uncontrolled sequence of events. These \textit{events} could be due to stochastic actions, such as wheel slippage, or external actors. \nIn RL an action is caused by an agent pursuing a particular \textit{goal} or \textit{objective}.
This may be a single goal or a hierarchy of goals, each of which can be cycled through to generate the explanation of its behaviour. A goal may be aligned to a single objective or to multiple objectives that must be balanced . The agent may switch between these goals/objectives due to internal changes in priorities or progression in solving a larger goal. These internal changes are what \citeauthor{bohm2015people} labels as temporary states; however, this name could be confused with the perceived state of the RL agent and, hence, is avoided in the CXF framework. \citeauthor{dazeley2021levels} , on the other hand, refers to this same concept as disposition --- referring to an agent's internal disposition. Therefore, to align with \citeauthor{dazeley2021levels} , a \textit{disposition} in this sense is the same as a temporary state in the CEN model, and represents temporary internal motivations such as a change in parameter, simulated emotion or safety threshold being passed. \n\begin{figure*}\n \centering\n \includegraphics[trim={0.4cm 2.2cm 0.4cm 2.8cm},clip,width=30pc]{Images/F4_Conceptual_Framework.pdf}\n \caption{Conceptual Framework for Explainable Reinforcement Learning, referred to as the Causal XRL Framework (CXF), is based on the CEN given in Figure \ref{fig:CEN}. Each node represents a process used by an agent when deciding on its behaviour. Each arc, joining nodes, represents the causal relationships that should be utilised when generating an explanation of an agent's behaviour.}\n \label{fig:Framework}\n\end{figure*}\nSimilarly, \citeauthor{bohm2015people}'s CEN model referred to disposition as an overarching set of long-term personality traits about how a person responds to situations. While there is no direct reference to responses to perceived cultural expectation, it is clear that disposition is the node where this would be best captured.
As the temporary state node was renamed to disposition, the disposition node has also been renamed, to align with \citeauthor{dazeley2021levels}'s notion of cultural expectations. Therefore, in this model an \textit{expectation} refers to the ultimate aim of the agent to achieve what is expected of it. \citeauthor{dazeley2021levels} suggests that expectations refer to a range of cultural conditions placed on an agent's operation. In essence, expectations in this framework are the same as dispositions in the CEN. Finally, an agent's current disposition, and therefore its goal/objective, action and ultimately the outcome, are caused by what is perceived by the agent. \textit{Perception} is not only the literal state, but also the result of any feature extraction, inference placed over what is perceived, or belief state in a Partially Observable MDP (POMDP). \nAdditionally, this framework is readily applicable to Multiagent Reinforcement Learning (MARL) domains . For example, a MARL agent operating globally can simply use this framework directly with the understanding that the action space is a vector of actions that are similarly derived from its goals and higher-order influences. This aligns with and extends current state-of-the-art explainable MARL . However, a decentralised model presents a larger problem for the provision of explanations. A decentralised model requires agents to act independently of each other, and therefore, provide explanations of their behaviour independently. However, these agents require a sophisticated communication model between the agents to allow them to adjust their behaviour based on the other agents . The CXF framework directly facilitates this MARL model. For example, when an agent changes its behaviour because of another agent's communication or action, the CXF model allows us to incorporate this behaviour as a causal event that potentially alters the agent's intrinsic disposition and goals.
This approach allows for sophisticated models of explanation that incorporate teamwork directly into the causal framework. This decentralised model can be further extended to AI-Human collaborative teams where we require an explanation of an agent's action in response to events caused by the human collaborators.\nUltimately, this framework is aimed at promoting future directions of research into explaining RL behaviour, but it also provides a lens for examining the current state of the art. The framework described in Figure \ref{fig:Framework} is beyond the majority of current XRL research. Hence, this paper also presents a Simplified Conceptual Framework, which captures the majority of current XRL work. The simplified framework, Figure \ref{fig:Simplified Framework}, shows the types of behaviours that can be explained when using a traditional approach to RL, as described in section \ref{Sec: XRL}. As can be seen, this model only includes behaviours caused by what is perceived and the actions taken by the agent. It can also be observed that these behaviours all align with Zero-order explanations , and therefore, do not include any explanation of intentionality. \nIn this simplified model it is assumed that an agent has a single preset goal and the objective is to maximise the reward in achieving that goal. In such a situation the goal is often known to the user or can be observed over time through observation of behaviour . When utilising a predefined goal as its only objective, the agent's actions are directed toward achieving that goal, and any explanation of those actions is given in terms of the target of that behaviour. \citeauthor{dazeley2021levels} argues that there is no need to explain that the action is aimed at accomplishing the goal in such a system. In situations where the goal itself is possibly unknown to an end-user, the developers can incorporate details of the preset goal directly into any explanation of its behaviour.
Equally, if the agent cannot alter its goal then no change to disposition or expectation can affect the goal being pursued. The \textit{Goal} node in Figure \ref{fig:Framework} is aimed at identifying how the current goal affected the action selected and why that is the current goal, based on the agent's current dispositions or expectations. Hence, the simplified model has no need to include causal explanations of these higher-level intentions. Similarly, the general RL model makes no attempt to model events outside its control, making explanations of these also irrelevant. With the removal of goals/objectives, dispositions, expectations and events, an RL agent cannot utilise those causal paths; therefore, the simplified framework must include a causal path from perception to action, skipping those behaviours included in the full framework. Because this causal path is not part of the full framework, it is included only as a dotted line.\n\begin{figure*}\n \centering\n \includegraphics[trim={2.6cm 11.8cm 1.9cm 2.8cm},clip,width=30pc]{Images/F5_Simplified_Conceptual_Framework.pdf}\n \caption{Simplified Conceptual Framework for Explainable Reinforcement Learning, referred to as the Simplified-CXF, representing causal explanations in traditional RL. This framework includes a causal link between perception and action that is not included in the full model. This link replaces several behavioural components representing deeper causal paths that are assumed to not be modelled or explained in this Simplified-CXF.
Note, this simplified model only includes Zero-order explanations and therefore does not include any explanation of intentionality.}\n \\label{fig:Simplified Framework}\n\\end{figure*}", "id": "f80a8ecb-1143-4a2a-b637-ad3fc834f0f0", "level": "section", "origin_cites_number": 20, "parent_id": "764857f2-7694-4295-9f0c-97d9b42fc288", "prefix_titles": [ [ "title", "Explainable reinforcement learning for broad-XAI: a conceptual framework and survey" ], [ "section", "Conceptual Framework for Explainable Reinforcement learning (XRL)" ] ], "subsections": [], "title": "Conceptual Framework for Explainable Reinforcement learning (XRL)" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Sec:Review}\nThe term, eXplainable Reinforcement learning (XRL) only appeared in research publications recently and is often published as Interpretable Machine Learning (IML). However, the aim of this paper is to show that the idea of explaining the behaviour of an RL agent, while sometimes related, is often quite distinct and separate to traditional IML; provides opportunities for deeper explanations to provide user trust and acceptance; already has a substantial body of research; and, still has significant avenues for future work. This section represents the second substantive component of this paper, which will review current work and discuss opportunities for future research. Rather than using a traditional taxonomy of approaches, it will review the literature in light of the Simplified-CXF discussed in section \\ref{Sec: Framework}.\nThe following subsections will discuss each of the processes used by an agent to influence its choice of behaviour. This will include a discussion of the possible types of causal explanations that each process can contribute. Finally, for each type of causal explanation pathway, this paper will both discuss current approaches to explaining that causal link, as well as suggest additional approaches that could be utilised. 
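As an illustration of how the causal pathways in Figure \ref{fig:Framework} could be traversed when generating an explanation, the following Python sketch represents the framework's nodes as a directed graph and returns the fewest-cause path to the outcome, echoing \citeauthor{lombrozo2007simplicity}'s preference for simple explanations. The edge set is a simplified assumption for illustration only, not the complete set of pathways in the CXF:

```python
# Hypothetical sketch: CXF nodes as a directed causal graph.  The edges below
# are a simplified assumption based on the CXF diagram, not the full framework.
CXF_EDGES = {
    "expectation": ["disposition"],
    "perception":  ["disposition"],
    "disposition": ["goal"],
    "goal":        ["action"],
    "action":      ["outcome"],
    "event":       ["outcome"],
}

def shortest_causal_path(start, end="outcome"):
    """Breadth-first search for the fewest-cause path from a behaviour node
    to the outcome, i.e. the simplest causal explanation the graph offers."""
    frontier = [[start]]
    while frontier:
        path = frontier.pop(0)
        if path[-1] == end:
            return path
        for nxt in CXF_EDGES.get(path[-1], []):
            frontier.append(path + [nxt])
    return None  # no causal route from this node to the outcome

print(" -> ".join(shortest_causal_path("perception")))
```

An explanation generator built on this structure could then present the full path, or a shortened path (e.g. `event -> outcome`), leaving the explainee to fill in the gaps, as people do in the CEN.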
In discussing these points it will start with the nodes represented in the Simplified-CXF and discuss the opportunities available in the more advanced components of the full CXF in section \\ref{Sec:Opportunities}. The first subsection \\ref{Sec:PerceptionsSimple} will discuss explanations of what the agent has perceived and how that perception has affected the actions and outcomes. Subsection \\ref{Sec:Actions} will discuss explanations based on why actions are selected and how they caused the resulting outcomes.", "id": "6767e243-8c89-4c62-9618-f81e30a5eb31", "level": "section", "origin_cites_number": 0, "parent_id": "764857f2-7694-4295-9f0c-97d9b42fc288", "prefix_titles": [ [ "title", "Explainable reinforcement learning for broad-XAI: a conceptual framework and survey" ], [ "section", "Simplified Framework: Reviewing Explainable Reinforcement Learning" ] ], "subsections": [ "8ed4cf71-4ba2-4ddf-a534-6bf2f80d79ae", "ffa3ebdc-2d49-4b63-a5c0-072912b38735" ], "title": "Simplified Framework: Reviewing Explainable Reinforcement Learning" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 3586, 4538, 7830, 1910 ], "content": "\\label{Sec:PerceptionsSimple}\nAt its fundamental level, an RL algorithm is learning to do two things: receive information about the environment and use this to decide on an action to make in response. These two fundamental operations of an RL system represent the first two types of XRL discussed in this paper. The fundamental nature of these operations are also indicated in Figure \\ref{fig:Simplified Framework}, with them being recognised as providing Zero-order explanations . That is, these operations represent a purely reactionary level of processing with zero intentionality. The first of these operations is to perceive the environment, which represents a significant amount of research in XRL. This section will briefly overview this class of XRL and discuss some example approaches. 
As identified by the simplified conceptual framework, Figure \ref{fig:Simplified Framework}, the perceptual stage not only explains what the agent has perceived, but also how that perception resulted in the action taken and the outcome observed. Therefore, explanations of an agent's perception aim to detail one or more of the following:\n\begin{enumerate}\n \item \textit{Perception:} what did the agent perceive as the current environment?\n \item \textit{Introspective:} how the perceived state contributed to the action being selected?\n \item \textit{Contrastive:} why didn't the perceived state cause some other action to be selected?\n \item \textit{Counterfactual:} what changes in perception would be required to cause an alternative action to be selected?\n \item \textit{Influenced:} how did the perceived state affect the outcome?\n\end{enumerate}\nIn the simple discrete RL situation, each state or state/action pair can be represented with a mapping directly to the preferred action. However, in most realistic problems the state space is too large or continuous for such a direct mapping. Instead, one of several approaches can be used such as function approximation , hierarchical representations , state aggregation , relational methods or options . \nTo perform these approximations RL researchers generally utilise a range of traditional supervised learning approaches. For instance, the utilisation of Deep Neural Networks (DNN) is so common that a separate branch of research, known as Deep RL (DRL), has emerged, which now represents approximately 32\% of RL papers published in 2019\footnote{At time of writing, using Google Scholar, the number of papers with a title including the phrase ``Deep Reinforcement Learning'' was 1710 and the number of ``Reinforcement Learning''-titled papers was 5310. This is a crude estimation and almost certainly lower than the true percentage as many researchers assume DRL when discussing RL.}.
DRL methods utilise a DNN to map large state spaces to Q-values (regression) or directly to actions (classification) . In many cases the supervised learning model used requires some level of adaptation to handle the temporal aspects of RL. For instance, DRL methods frequently utilise various forms of experience replay to improve convergence . However, regardless of the learning process, the perception of the environment at any single moment is essentially the same process used in the supervised version.", "id": "8ed4cf71-4ba2-4ddf-a534-6bf2f80d79ae", "level": "subsection", "origin_cites_number": 12, "parent_id": "6767e243-8c89-4c62-9618-f81e30a5eb31", "prefix_titles": [ [ "title", "Explainable reinforcement learning for broad-XAI: a conceptual framework and survey" ], [ "section", "Simplified Framework: Reviewing Explainable Reinforcement Learning" ], [ "subsection", "Explanation of Perceptions" ] ], "subsections": [ "1b313733-2959-4e2c-b1a6-f549d7e2b18e", "04c64be5-957c-441b-80fb-7ee6f86c259a", "422b7f0a-8a65-4c10-885b-805a00aed2f6" ], "title": "Explanation of Perceptions" }, { "cite_extract_rate": 0.4, "cites": [ 8500, 8502 ], "content": "\\label{Sec:IML}\nReliance on traditional supervised learning for function approximation means that XRL-Perception is essentially the process of interpreting the function used to model the state. Therefore, XRL-Perception is closely aligned with \\textit{Interpretable Machine Learning} (IML) methods . IML is a well-established field with substantial work already having been done. The aim of this paper is not to resurvey IML work in detail - except to discuss how this work can be related to XRL specifically. According to \\citeauthor{molnar2019interpretable} there are several approaches to interpreting machine learning models, as shown in Figure \\ref{fig:IML}. 
This suggests that IML typically produces one or more of the following types of interpretation:\n\\begin{figure}\n \\centering\n \\includegraphics[trim={1.2cm 4.2cm 0.3cm 2.5cm},clip,width=20pc]{Images/F6_IML.pdf}\n \\caption{Types of Interpretation that can be generated from an Interpretable Machine Learning model. This diagram is derived from the taxonomy described by \\citeauthor{molnar2019interpretable} .}\n \\label{fig:IML}\n\\end{figure}\n\\begin{itemize}\n \\item a \\textit{feature summary}, using \\textit{statistics} or \\textit{visualisations}, showing the features, and the relationships between them, that were of most importance in reaching the outcome.\n \\item a representation of the \\textit{internal model's} operation, such as the rules or neurons that fired, or the pathways followed through the evaluation process.\n \\item the identification of similar or related \\textit{data points}, such as an image from the same class.\n \\item the construction of a secondary \\textit{intrinsically interpretable model}, which may then use one of the above methods to provide an interpretation. \n\\end{itemize}\nDeep learning methods for IML tend to focus on visualisations of features found in the input (feature summaries) and neuron/layer activity (internal models), with some examples of using specifically designed neural networks to provide interpretations --- see \\citeauthor{gilpin2018explaining} for a detailed discussion. 
Regardless of the approach used to interpret these models, they can all be utilised to provide an interpretation of an RL model's perception of the current state.", "id": "1b313733-2959-4e2c-b1a6-f549d7e2b18e", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "8ed4cf71-4ba2-4ddf-a534-6bf2f80d79ae", "prefix_titles": [ [ "title", "Explainable reinforcement learning for broad-XAI: a conceptual framework and survey" ], [ "section", "Simplified Framework: Reviewing Explainable Reinforcement Learning" ], [ "subsection", "Explanation of Perceptions" ], [ "subsubsection", "XRL-Perception with Interpretable Machine Learning (IML)" ] ], "subsections": [], "title": "XRL-Perception with Interpretable Machine Learning (IML)" }, { "cite_extract_rate": 0.818181818181818, "cites": [ 1409, 514, 8802, 4557, 3580, 4555, 8801, 4554, 4556 ], "content": "\\label{Sec:Introspective XRL Perception}\nDue to this alignment of perception and traditional IML, there has been limited research specifically on perception in the context of XRL . However, there are two primary issues that make perception in XRL distinct from traditional IML. The first is that spatially similar states may still require different control rules, making generalisation difficult. This contrasts with most traditional supervised approaches, which can afford local generalisation. Secondly, perceptually similar states may in fact be significantly temporally separated . This is often cited as the reason pooling layers are absent from many DRL approaches, as pooling is designed to identify local generalisable patterns . Therefore, research into XRL-perception has largely focused on providing explanations that help developers better understand the learning process, improve interpretation of the policy, and support debugging and parameter tuning . 
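One simple introspection probe, in the spirit of the saliency work discussed in this section, is to perturb each input feature and measure the effect on the agent's value estimate. The linear Q-function below is purely illustrative, a stand-in for a trained network; in image domains the same idea is typically applied by blurring patches rather than nudging scalar features.

```python
def q_value(state):
    # Stand-in for a trained value network; the weights are made up for illustration.
    weights = [0.1, 0.9, 0.0, 0.4]
    return sum(w * s for w, s in zip(weights, state))

def perturbation_saliency(q, state, eps=1e-3):
    """Score each feature by how much a small nudge to it moves the value estimate."""
    base = q(state)
    scores = []
    for i in range(len(state)):
        nudged = list(state)
        nudged[i] += eps
        scores.append(abs(q(nudged) - base) / eps)
    return scores

scores = perturbation_saliency(q_value, [1.0, 1.0, 1.0, 1.0])
most_salient = scores.index(max(scores))  # the feature the value estimate leans on most
```

A developer can use such scores to check whether the agent is attending to features that actually matter for the task.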
\nOne approach, used by both \\citeauthor{mnih2015human} and \\citeauthor{zahavy2016graying}, employed t-Distributed Stochastic Neighbor Embedding (t-SNE) on recorded neural activations to identify and visualise the similarity of states. \\citeauthor{zahavy2016graying} also displayed hand-crafted policy features over the low-dimensional t-SNE to better describe what each sub-manifold represents. A second approach, used by \\citeauthor{wang2015dueling} and \\citeauthor{zahavy2016graying}, was to use Jacobian Saliency Maps to better analyse how different features affect the network. \\citeauthor{shi2020self} uses a self-supervised interpretable network (SSINet) to locate the causal features most used by an agent in its action selection. \nThese approaches are complex to understand and do not easily provide a reasonable explanation to a non-expert user. Saliency maps provide a reasonable level of understandability when using image-based state spaces, but the Jacobian approach, borrowed from IML, can provide poor results as its outputs have no relationship with the physical meaning of entities in the image. This problem can be exacerbated in an RL agent due to the spatial similarity of states. \\citeauthor{greydanus2017visualizing} improved this approach by utilising the unique dual use of networks in the Asynchronous Advantage Actor-Critic (A3C) algorithm to separately represent both the critic's value assignment and the actor's actions. \\citeauthor{greydanus2017visualizing} then used these more accurate maps to visualise an agent's perception over time during the training process. This approach provides an important example of detecting features and identifying which features caused the agent to take a particular action, and separately, which ones were associated with particular outcomes, such as the highest rewards. \n\\citeauthor{verma2018programmatically} presented a unique approach to performing introspection of an RL agent's perception by altering the RL framework itself. 
This work introduced the Programmatically Interpretable RL (PIRL) approach, where policies are initially learnt using DRL. This network is then used to direct a search over programmatic policies using Neurally Directed Program Synthesis (NDPS). During this repeated search process, a set of interesting perception patterns is maintained that minimises the distance between the DRL and NDPS (oracle) models. The completed oracle can then be inspected to identify causal links between feature vectors and the actions taken and/or outputs.", "id": "04c64be5-957c-441b-80fb-7ee6f86c259a", "level": "subsubsection", "origin_cites_number": 11, "parent_id": "8ed4cf71-4ba2-4ddf-a534-6bf2f80d79ae", "prefix_titles": [ [ "title", "Explainable reinforcement learning for broad-XAI: a conceptual framework and survey" ], [ "section", "Simplified Framework: Reviewing Explainable Reinforcement Learning" ], [ "subsection", "Explanation of Perceptions" ], [ "subsubsection", "Introspective XRL-Perception" ] ], "subsections": [], "title": "Introspective XRL-Perception" }, { "cite_extract_rate": 0.44444444444444403, "cites": [ 4560, 4558, 1820, 4559 ], "content": "\\label{Sec:Results of XRL-Perception}\nPerceiving the state is of particular interest to developers when validating a system's operation. It can also reassure a non-expert user that the features being relied upon are appropriate, provided this is combined with the resulting effect of what was perceived. Simply informing the user of the action and the resulting change in the environment is implied in the previously discussed approaches, as these are generally easily observed and do not require an explanation. However, the ability of a system to provide either contrastive or counterfactual explanations can be very valuable to a non-expert user, as such information is not easily observable from the agent's behaviour. 
Such explanation facilities should not only identify the features that led to the selected action, but also suggest why another action was not selected (contrastive), or which features would need to be observed to cause a different action/outcome (counterfactual). \nConceptually, counterfactual thinking and contrastive explanations are viewed as very different concepts. However, they are really just different views of the same predictive mechanism . A counterfactual focuses on a prediction of what would happen under different initial circumstances, whereas a contrastive explanation details what change was needed to get a particular outcome. A counterfactual can be derived by providing a case study, or example fictitious state (sometimes referred to as a `distractor' state or image), and observing the result. The real outcome, along with the fictitious outcome, can then be compared to provide the counterfactual explanation . \nContrastive explanations, however, are not as simple because there is no specific start state, but instead a specific result that is of interest. The approaches in the last section cannot readily provide such explanations. For example, generating contrastive or counterfactual explanations requires us to identify the features that are missing from the input space. One approach is to present multiple distractors and find the closest to the required conclusion . This, however, is computationally expensive, and impossible when there are infinite possible distractors, such as in continuous state problems. Recent methods for generating missing features, such as the Contrastive Explanation Method (CEM) , have been proposed. These systems effectively identify absent pixels using a perturbation variable or through Contrastive Layer-wise Relevance Propagation (CLRP) .\nIn RL, however, there is a temporal relationship between states and the outcomes that can be used to map a sequence of changes over time. 
This creates additional possibilities for providing contrastive explanations, and by extension counterfactual explanations as well. One approach is to identify those states that are critical to a human understanding the result of an agent, such as \\citeauthor{huang2017leveraging}'s utilisation of DBSCAN to identify such states. An alternative approach used recently, especially with non-image-based inputs, was that of \\citeauthor{hayes2017improving} . This work uses hand-crafted state features specifically identified for being semantically meaningful to humans. This vector of features is used to generate a list of predicates that can be searched to identify the subsets of actions commonly associated with them. Therefore, this approach can explain why an action was selected in terms of features perceived by the agent. This approach is not, however, readily usable in large state spaces or where hand-crafted features cannot be provided. There is potential in using an agent's perception to generate contrastive and counterfactual explanations; however, XRL researchers do not currently appear to have pursued this approach significantly. Instead, the focus of XRL has been on explaining the choice of actions and performing causal analysis of those choices. 
These approaches will be further discussed in the following section \\ref{Sec:Actions}.", "id": "422b7f0a-8a65-4c10-885b-805a00aed2f6", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "8ed4cf71-4ba2-4ddf-a534-6bf2f80d79ae", "prefix_titles": [ [ "title", "Explainable reinforcement learning for broad-XAI: a conceptual framework and survey" ], [ "section", "Simplified Framework: Reviewing Explainable Reinforcement Learning" ], [ "subsection", "Explanation of Perceptions" ], [ "subsubsection", "Results of XRL-Perception" ] ], "subsections": [], "title": "Results of XRL-Perception" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 4561 ], "content": "\\label{Sec:Actions}\nWhile the provision of explanations of an agent's perception is interesting, and in many cases required by the explainee, such explanations are not particularly unique to RL. In fact, in reviewing the literature above, very little referred to RL specifically. As discussed in section \\ref{Sec: XRL}, the reason XRL is different to IML approaches is due to the temporal nature of RL. This temporality is evident when considering how an action taken by an agent affects the outcome. These explanations are inherently \\textit{temporal explanations} as they detail a prediction of the expected future efficacy of an action. Temporal explanations detail relations between temporal constraints, such as delays between causes and effects and were first investigated in temporal abductive reasoning and recommendation systems . The CXF, Figure \\ref{fig:Framework}, and simplified CXF, Figure \\ref{fig:Simplified Framework}, indicate that an explanation can include why an agent took particular actions and how those actions caused particular results. 
\nTherefore, explanations of an agent's actions aim to detail one or more of the following: \n\\begin{enumerate}\n \\item \\textit{Introspective:} why was an action chosen?\n \\item \\textit{Contrastive:} why wasn't another action chosen?\n \\item \\textit{Influenced:} how did the action taken affect the outcome?\n \\item \\textit{Counterfactual:} what prior behaviour would have resulted in a particular alternative action being selected?\n\\end{enumerate}\nThe first point addresses an explainee's requirement to understand the choice of action and why the agent predicts it is a better choice than the alternatives. This can be presented in one of two forms: providing a visual representation of the path; or stating how the action leads to the eventual aim. For example, imagine an agent takes an action a user wants justified. It could present a map showing where the agent is currently located and the path it plans to follow, from which the user can see that the selected action follows this path. The user could also be shown the best path should an alternative action be taken. This approach is of course regularly used in navigation recommendation systems such as Google Maps. Non-navigation discrete tasks can also use this approach by representing the MDP as a graph, using nodes and arcs to represent concepts the explainee will understand. An alternative approach is to state that the agent has selected a particular action because it has a measurably better result on a desirable quality as defined by the reward function, such as a higher chance of success, a reduced cost, or safer or smoother operation. 
In either case the agent is being asked to make a prediction about both its future behaviour and how it expects the environment to respond.", "id": "ffa3ebdc-2d49-4b63-a5c0-072912b38735", "level": "subsection", "origin_cites_number": 3, "parent_id": "6767e243-8c89-4c62-9618-f81e30a5eb31", "prefix_titles": [ [ "title", "Explainable reinforcement learning for broad-XAI: a conceptual framework and survey" ], [ "section", "Simplified Framework: Reviewing Explainable Reinforcement Learning" ], [ "subsection", "Explanation of Actions" ] ], "subsections": [ "58268db7-f505-4f50-bdf0-d8b653ba0029", "c4df1aa7-7c55-46d5-b1f7-6aa0dfbd163a", "853e84ad-5b5c-4f6c-8096-818b4260f046" ], "title": "Explanation of Actions" }, { "cite_extract_rate": 0.25, "cites": [ 4565, 4562, 4567, 4563, 4566, 7831, 4564 ], "content": "\\label{Sec:Model-basedApproaches}\nEarly research into explaining why an action is preferred when accomplishing a particular task can be traced back to some of the earliest work in explaining the reasoning of expert systems . An expert system generates a conclusion through a series of inferences. These inferences represent a sequence of reasoning steps that can be considered actions during a problem-solving process. Explaining these involved providing either a rule trace of the inferences/actions taken or a trace of key, previously identified, decision points. These early ideas were later extended in domains such as Bayesian Networks (BN) , where explanations were generated from the relations between variables or through visual representations of relations between nodes . Decision Networks or influence diagrams further extended BNs through the incorporation of utility nodes. These models help the decision process by selecting the path with the maximum utility, where explanations have been generated by reducing the optimal decision table .\nAn MDP, as used in RL, can be considered to be a \\textit{dynamic} decision network . 
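The path-style justification described above can be sketched on a toy deterministic MDP treated as a graph of nodes and arcs. The states, actions and policy here are hand-specified purely for illustration; in practice the policy would be learnt or planned.

```python
# Deterministic transitions: transitions[state][action] -> next state.
transitions = {
    "dock":    {"north": "hall", "east": "storage"},
    "hall":    {"north": "goal", "east": "storage"},
    "storage": {"north": "hall"},
    "goal":    {},
}
policy = {"dock": "north", "hall": "north", "storage": "north"}  # assumed given

def planned_path(state, goal, max_steps=10):
    """Trace the policy forward to produce the waypoint sequence that
    justifies the current action choice."""
    path = [state]
    while state != goal and len(path) <= max_steps:
        state = transitions[state][policy[state]]
        path.append(state)
    return path

path = planned_path("dock", "goal")
explanation = ("I chose to go {} because it follows my planned route: {}."
               .format(policy["dock"], " -> ".join(path)))
```

The same trace run from an alternative first action would give the user the comparison path mentioned above.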
Similar approaches have been applied in deterministic or decision theoretic planning because these focus on the entire decision path followed, and therefore, rely on a model of the environment. XAI-Planning (XAIP) approaches are well placed to provide explanations for planning tasks with MDPs. \\citeauthor{fox2017explainable} provides a roadmap for the development of XAIP. These methods have a model of the environment in which they operate, and therefore, are inherently more transparent and understandable as they can use their model directly in their explanations. These approaches allow a more direct utilisation of the historical BN and DN methods. For instance, \\citeauthor{krarup2019model} uses waypoints for explanation, where this use of an execution-trace is a similar approach to that of rule traces and tracing nodes through a BN or DN. Similar approaches of generating explanations from actions using a model can be seen in other recent research . \\citeauthor{fox2017explainable} identifies several questions that XAIP can answer. Ignoring questions regarding if and when to replan, which are specific to XAIP, these questions align with the previously mentioned aims for explaining actions. 
While planning approaches are not the focus of this paper, \\citeauthor{chakraborti2020emerging} provides an extensive and recent survey of XAIP identifying the recent growth.", "id": "58268db7-f505-4f50-bdf0-d8b653ba0029", "level": "subsubsection", "origin_cites_number": 28, "parent_id": "ffa3ebdc-2d49-4b63-a5c0-072912b38735", "prefix_titles": [ [ "title", "Explainable reinforcement learning for broad-XAI: a conceptual framework and survey" ], [ "section", "Simplified Framework: Reviewing Explainable Reinforcement Learning" ], [ "subsection", "Explanation of Actions" ], [ "subsubsection", "Model-based XRL-Behaviour" ] ], "subsections": [], "title": "Model-based XRL-Behaviour" }, { "cite_extract_rate": 0.538461538461538, "cites": [ 4572, 4568, 4570, 4545, 4571, 4552, 4569 ], "content": "\\label{Sec:Introspective XRL-Behaviour}\nA direct adaptation of the BN and DN approaches is not as evident in value-based RL. \\citeauthor{cruz2019XRL} , \\citeauthor{hayes2017improving} , and \\citeauthor{lee2019complementary} could be considered attempts to do this by essentially developing a model of the environment during exploration. The models built can then be used to generate an explanation, such as a prediction of the likelihood of reaching a goal, and how long until it is reached, from each state/action pair. \\citeauthor{hayes2017improving} learns its model entirely separately from the agent, while \\citeauthor{cruz2019XRL} build the model internally. The approaches are inherently still RL, as the model is not used for planning purposes and the agent still learns entirely from experience. However, by building a model of the environment, these approaches allow an RL agent to present a similar level and range of transparency to that exhibited by the model-based approaches. \nThese learnt-model based approaches can also be used to provide users with an overview of the model through Policy Summarization or similar approaches . 
These global explanation approaches learn key state/action pairs that globally characterize the agent’s behaviour. Using Inverse RL techniques, a policy can be inferred, and a summary formed from multiple examples of agent behaviour. The intuition is that policy summaries, like waypoints, can help people generalise and anticipate agent behaviour . Another approach is to abstract away from low-level decisions and provide explanations from this higher level. \\citeauthor{beyret2019dot} used Hierarchical RL to perform these layered abstractions and recognised their applicability to providing explanations, and \\citeauthor{acharya2020explaining} used a decision tree classifier to learn which state features were most likely to predict particular behaviours. \nUltimately, without a model, value-based approaches are hampered in their ability to explain an action in terms of the eventual aim. While people may think such an agent's aim is to achieve a goal, it is in fact only to maximise the long-term average reward. \\citeauthor{cruz2020XRobot} extended their learnt-model approach to provide the same explanations without requiring the memory overhead of learning a model, thereby enabling these explanations in larger environments, including those requiring deep learning based function approximation. To do this \\citeauthor{cruz2020XRobot} proposed two approaches: learning-based and introspection-based. The first approach was to directly learn a probability value $P$ during training, while the second, referred to as introspection-based, was to infer the value directly from the agent's $Q$-value using a numerical transformation. These approaches allow an agent to explain why one action is preferred over another in terms of outcomes, in a similar way to XAIP approaches. \nWhat is interesting about these approaches is that rather than learning a model they use introspection of available information to provide explanations. 
\\textit{Introspection} is the utilisation of internal data for explanation as opposed to external frameworks that explain through observation. This introspective approach has also been utilised by \\citeauthor{sequeira2020interestingness} , which actively builds a database of historical interactions, allowing for simple information like, observations, actions and transitions; along with inferred probabilities such as the prediction error. While this work as presented is not built to specifically answer questions, it does provide details that can provide additional analytics to the user and could easily utilise these statistics to provide such answers. This work has since been extended to provide short video highlights of key interactions .", "id": "c4df1aa7-7c55-46d5-b1f7-6aa0dfbd163a", "level": "subsubsection", "origin_cites_number": 13, "parent_id": "ffa3ebdc-2d49-4b63-a5c0-072912b38735", "prefix_titles": [ [ "title", "Explainable reinforcement learning for broad-XAI: a conceptual framework and survey" ], [ "section", "Simplified Framework: Reviewing Explainable Reinforcement Learning" ], [ "subsection", "Explanation of Actions" ], [ "subsubsection", "Introspective XRL-Behaviour" ] ], "subsections": [], "title": "Introspective XRL-Behaviour" }, { "cite_extract_rate": 0.478260869565217, "cites": [ 3217, 4573, 4575, 4545, 7834, 7507, 4574, 7833, 4549, 7832, 4564 ], "content": "\\label{Sec:Results of XRL-Behaviour}\nWhen providing an explanation using the above techniques the system can simply state the reason the action selected is a good choice for achieving its goal. This, however, will often result in a relatively meaningless explanation that it chose the best, fastest, cheapest etc, depending on the choice of reward. Instead, as discussed in section \\ref{Sec:Results of XRL-Perception}, an explanation aiming to improve trust and acceptance would ideally be presented in contrast to an alternative action. 
These \\textit{contrastive explanations} are presented as \\textit{fact} and \\textit{foil} , where the same fact, the action selected, can have multiple foils, any one of the actions not selected. Providing contrastive and counterfactual explanations of XRL-Behaviour involves comparing outcomes from alternative transition paths through the MDP. \nThe most common approach to providing these explanations is to develop a model of the agent's behaviour using a separate observer that learns the agent's behaviour. There have been several generic explanation facilities that can perform this task, such as \\citeauthor{pocius2019strategic} , which extends Local Interpretable Model-Agnostic Explanations (LIME) and can provide contrastive explanations of any type of agent's behaviour --- not solely an RL agent's. These generic explanation facilities can predict behaviour, but do not explain the agent's internal reasoning for its behaviour.\nExtending \\citeauthor{hayes2017improving} , \\citeauthor{van2018contrastive} provides contrastive explanations based on the result of transitions. The approach uses a provided model of the transition network, but acknowledges this can be learnt through the observation of behaviour, by translating state features and actions to a predefined domain-specific ontology. The system then compares a user-selected foil to the taken actions to provide explanations of outcome differences. \\citeauthor{cashmore2019towards} provides a generic planning wrapper that builds on \\citeauthor{fox2017explainable}'s roadmap for XAIP, to provide these contrastive explanations for known MDPs as a service. Rather than using an \\textit{a priori} model, \\citeauthor{madumal2019explainable} used a learnt model to extensively study the generation of both contrastive and counterfactual explanations for explaining recommendations in the game of Starcraft II . 
\\citeauthor{madumal2019explainable} learns a Structural Causal Model (SCM) during training and analyses this model to understand how states led to different outcomes. \nTo investigate the ability to provide a value-based approach, \\citeauthor{cruz2020XRobot} illustrates that contrastive explanations on the likely success or failure of actions and the time to a result can be provided by an agent using the introspection-based approach to transforming the Q-values directly. \\citeauthor{khan2009minimal} developed an approach to generate explanations for why a recommendation has been provided to a user, called a Minimal Sufficient Explanation (MSE). In this approach, a recommendation equates to an action and the approach tries to explain why that action is regarded as optimal. It takes one step beyond simply saying the action selected has the highest Q-value and thus is the optimal action, and instead provides reasons according to templated justifications about frequency of expected future rewards. \nTwo possible approaches to providing contrastive explanations is through the utilisation of either reward decomposition or multi-objective Reinforcement Learning (MORL) . Reward Decomposition separates each of the different rewards into semantically meaningful reward types allowing actions to be explained in terms of trade-offs between the separate rewards . \nOne avenue to providing contrastive explanation that has only recently been attempted is through the utilisation of multiobjective RL (MORL) . MORL approaches maintain a vector of Q-values for each reward and at any given time there may be several Pareto-optimal policies offering different trade-offs between the objectives. Such approaches, such as reward decomposition, allow an agent to compare the known results of these policies that aligned with different actions. 
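A minimal sketch of how decomposed value estimates could feed a fact/foil template follows. The component names and numbers are invented for illustration and are not drawn from any of the cited systems.

```python
# Per-action Q-values split into semantically meaningful reward components.
q_components = {
    "turn_left":  {"progress": 0.6, "safety": 0.9, "energy": 0.5},
    "turn_right": {"progress": 0.8, "safety": 0.3, "energy": 0.6},
}

def total(action):
    return sum(q_components[action].values())

def contrastive(fact, foil):
    """Explain why `fact` was preferred to the user-supplied `foil` in terms
    of the reward components on which it wins."""
    wins = [c for c in q_components[fact]
            if q_components[fact][c] > q_components[foil][c]]
    return ("I chose {} over {} because it scores higher on: {}."
            .format(fact, foil, ", ".join(wins)))

chosen = max(q_components, key=total)   # fact: the action actually selected
message = contrastive(chosen, "turn_right")
```

Deriving the foil automatically, rather than requiring the user to supply it, remains the open problem noted later in this section.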
\\citeauthor{sukkerd2018toward} and its extension \\citeauthor{sukkerd2020tradeofffocused} , along with work by , are the first papers to directly pursue this approach to contrastive explanation. This model-based approach generates quality-attribute-based contrastive explanations to compare actions against alternative objectives. In value-based RL there is also one known attempt to use multiple objectives using reward decomposition\\footnote{The authors do not identify their work as MORL, and strictly speaking decomposed rewards are not necessarily conflicting (a generally accepted component of an MORL problem), but the approach uses the same principles that would underpin an explainable MORL approach.} . This approach performs RL in an Adaptive Based Programming formalism that allows annotations of decision points with ontological information for explanation. Currently there is significant opportunity to pursue explainable MORL approaches for contrastive and counterfactual explanations. \nThe above approaches assume there is only one foil (alternative action), or that the user knows which foil they want the agent to compare with the selected action. However, this can be tedious, difficult or sometimes impossible for the user to provide. For instance, in an autonomous car it is not practical to go through all the alternative angles a steering wheel could have been turned to observe alternative results. Deriving the foil from the context is part of the explanation facility's task but, apart from some attempts in IML , has not been widely discussed in the context of RL. While not the focus of their research, some work in dynamic programming, such as \\citeauthor{erwig2020explanations} , has found that the context for contrastive explanations can be anticipated by identifying principal and minor categories and using these to anticipate user questions through value decomposition. 
As yet, foil prediction does not appear to have been transferred to value-based RL.", "id": "853e84ad-5b5c-4f6c-8096-818b4260f046", "level": "subsubsection", "origin_cites_number": 23, "parent_id": "ffa3ebdc-2d49-4b63-a5c0-072912b38735", "prefix_titles": [ [ "title", "Explainable reinforcement learning for broad-XAI: a conceptual framework and survey" ], [ "section", "Simplified Framework: Reviewing Explainable Reinforcement Learning" ], [ "subsection", "Explanation of Actions" ], [ "subsubsection", "Results of XRL-Behaviour" ] ], "subsections": [], "title": "Results of XRL-Behaviour" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 4538, 1798 ], "content": "\\label{Sec:Opportunities}\nExplaining perception, action and the causal outcomes of each, discussed above, represent the majority of current XRL research. These explanation facilities are important but focus primarily on providing debugging style explanations for developers . \\citeauthor{dazeley2021levels} argued that this represents only a zero-order, or reactionary level, of explanation and does not provide the broad-XAI required to develop user trust and acceptance. While there is still plenty of scope for interesting advances in the above simplified-CXF, this paper suggests there are significant possibilities for higher-level explanations built on an RL foundation. 
This section will discuss each of the remaining components of the full framework and how existing extensions to RL can be utilised to provide Broad-XAI facilities in XRL.", "id": "eb006b52-7d27-46c8-93fd-2b3539a15b1f", "level": "section", "origin_cites_number": 3, "parent_id": "764857f2-7694-4295-9f0c-97d9b42fc288", "prefix_titles": [ [ "title", "Explainable reinforcement learning for broad-XAI: a conceptual framework and survey" ], [ "section", "Full Framework: Opportunities for Explainable Reinforcement Learning" ] ], "subsections": [ "bf04f43e-dda6-4be4-868b-71304c482dde", "79e60728-fc44-4502-8d93-5cb833e8b2de", "d668aedc-3432-474f-b72d-b27087fe1d02", "e63d507d-a49b-4f8a-8e49-ae1568c15341" ], "title": "Full Framework: Opportunities for Explainable Reinforcement Learning" }, { "cite_extract_rate": 0.25, "cites": [ 4538, 4548, 4568, 7833, 4549, 4576, 4546 ], "content": "\\label{Sec:Goals}\nExplaining an agent's goal and how it caused the selected action has been recognised as a potential future direction of research for XRL . \\textit{Goal-driven explanation}, also referred to as \\textit{eXplainable Goal-Driven AI (XGDAI)}, is an emerging area of importance in the XAI literature with recent papers surveying the concept . This recent work shows a growing recognition that the only way people will accept an agent's behaviour is if the system provides details around the context in which its decision was based . \\citeauthor{langley2017explainable} describes this as \\textit{explainable agency} and is considered as a first-order explanation , where the aim is to communicate the agent's Theory of Mind . Goal-driven explainability is primarily focused on Belief, Desire, Intention (BDI) agents , although recognises the potential for XRL in providing such explanations. In particular, \\citeauthor {sado2020explainable} accepts approaches to explaining actions, discussed in section \\ref{Sec:Actions}, as a post-hoc and domain independent approach to explaining behaviour. 
\nThe difficulty is that an RL agent does not explicitly project the effect of its actions and associate them with a goal. Therefore, when there is no model, RL is essentially learning a habit, rather than a goal . For most applications, this distinction is trivial as there is only a single goal and the agent learns a habit for how to solve it. Beyond informing the user of what the goal is, explaining the choice of goal (when there is only one to choose from) is relatively meaningless. Therefore, for XRL to provide meaningful goal explanation it should have multiple goals that it could be pursuing at any given time. This utilisation of multiple goals, while not part of the standard RL framework, is a well-established approach with extensions to RL such as hierarchical , multi-goal , and multi-objective . This paper argues that more meaningful goal-based explanations can be provided if RL made greater use of these methods. \nAs seen in section \\ref{Sec:Actions}, the first attempts to utilise MORL to provide contrastive explanations have been published. The aim for a goal-based explanation, though, would be to extend this initial work and answer questions about the XRL-goal being selected and how that goal affected the action selection. For instance, \\citeauthor{karimpanal2017identification} identifies `interesting states' and learns how to find them using off-policy learning while focusing on its primary objective. Attaching a goal-based explanation to this would allow an explanation of how actions could also lead to, or avoid, alternative objectives. A second example would be an agent that is performing a primary task but has an alternative objective to avoid dangerous situations ; here, an explanation facility can provide contrastive explanations for an action on the basis of the primary or secondary objective, e.g. \"While X was the fastest action, I chose Y because it was safer\". 
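A selection rule producing exactly this style of explanation can be sketched with a thresholded-lexicographic ordering over two objectives. The actions, values and threshold are illustrative assumptions, not taken from the cited work.

```python
# Vector-valued estimates per action: (speed, safety), higher is better for both.
q_vectors = {"shortcut": (0.9, 0.2), "main_road": (0.6, 0.8)}
SAFETY_MIN = 0.5  # assumed minimum acceptable value on the safety objective

def choose(qs):
    """Prefer the fastest action whose safety clears the threshold; fall back
    to considering every action if none does."""
    safe = {a: v for a, v in qs.items() if v[1] >= SAFETY_MIN}
    pool = safe or qs
    return max(pool, key=lambda a: pool[a][0])

def explain(qs, chosen):
    fastest = max(qs, key=lambda a: qs[a][0])
    if fastest == chosen:
        return "I chose {} because it was fastest and safe enough.".format(chosen)
    return ("While {} was the fastest action, I chose {} because it was safer."
            .format(fastest, chosen))

action = choose(q_vectors)
message = explain(q_vectors, action)
```

Because the objective vector is explicit, the explanation falls directly out of the selection rule rather than requiring a separate observer model.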
\nMulti-goal and Hierarchical RL provide mechanisms for identifying alternative or sub-goals to problems and switching or progressing through these during a problem-solving process. At this stage there do not appear to be any attempts to provide explanations based on the currently selected goal as a means of providing better contextual information to a user. However, this paper has suggested the provision of such explanations would be a valuable area of pursuit. For instance, \citeauthor{beyret2019dot}'s approach could be extended to provide an explanation for the currently active goal through a tree traversal of potential goals using way-points during the inference process.", "id": "bf04f43e-dda6-4be4-868b-71304c482dde", "level": "subsection", "origin_cites_number": 28, "parent_id": "eb006b52-7d27-46c8-93fd-2b3539a15b1f", "prefix_titles": [ [ "title", "Explainable reinforcement learning for broad-XAI: a conceptual framework and survey" ], [ "section", "Full Framework: Opportunities for Explainable Reinforcement Learning" ], [ "subsection", "Explanation of Goals" ] ], "subsections": [], "title": "Explanation of Goals" }, { "cite_extract_rate": 0.421052631578947, "cites": [ 4538, 4540, 4542, 4568, 4545, 3588, 7835, 4549 ], "content": "\\label{Sec:Dispositions}\nAgents that are changing their goals and/or objectives do not generally do so randomly. Some agents may do so because they have learnt that a sequence of sub-goals is required to achieve their primary goal . Others may have multiple conflicting objectives , such as achieving a task while maintaining a safe working environment . This process of changing goals or objectives is a result of variations made in an agent's internal disposition . It is important that an agent is, therefore, able to include in its explanation how its current internal disposition has influenced the current choice of goal or objective. 
\nThis change can be caused by: an observation that the prior goal was no longer appropriate for achieving its primary goal; an observed change in the environment, possibly by an external actor; or, a change in an internal simulation of an emotion, belief or desire. In Cognitive Science, the theory of Motivated Control investigates how behaviour is coordinated to achieve meaningful outcomes . In particular, \citeauthor{pezzulo2018hierarchical} discusses the multidimensional and hierarchical nature of goals in decision making. Essentially, people weigh up conflicting objectives through a hierarchy of goals . Through careful introspection it is possible for an RL agent to identify these changes in its internal disposition and provide an explanation for these changes. Such an explanation would represent a first-order explanation and provide a valuable insight into an agent's reasoning for a human observer.\nCurrently there have not been any examples of explaining such dispositional RL systems, but there are numerous examples of agent-based systems, including RL, that adapt their goal autonomously during their operation. Intrinsically motivated RL has been researched for two decades, where agents construct a hierarchy of reusable skills dynamically . These agents change their operating goal due to internal changes such as motivations . While methods such as \citeauthor{beyret2019dot} explain an action relative to a goal, they could be extended to explain the motivation behind the choice of goal and skill.\nDisposition and motivation are not just hierarchical, but also multidimensional . For instance, \citeauthor{vamplew2017steering} and \citeauthor{vamplew2015reinforcement} used an algorithm referred to as Q-steering to provide the agent with the ability to switch between objectives autonomously. 
When objectives are in conflict, the agent can have an internal desire to focus on one over another, and while it pursues that objective the desire to switch to an alternative objective often increases until that change is made. This approach has potential in several domains where autonomous balancing of objectives is required. An explanation identifying the reason behind switching between policies would provide a user with valuable information.\nThe recently emerging research in Emotion-aware Explainable AI (EXAI) methods illustrates an interest in providing explanations for an agent's internal dispositions . This work focuses on self-explaining emotions and can identify important beliefs and desires. While this work is based on a BDI framework, \citeauthor{dazeley2021levels} argues that this can be extended to XRL. One example of this approach in RL is \citeauthor{barros2020Moody} which uses \citeauthor{cruz2020XRobot}'s introspection-based approach to identify an explanation, which is used to provide a self-explanation so that it can self-determine its intrinsic `mood' concerning its performance in competitive games. This approach uses an explanation that informs the agent's behaviour directly. However, \citeauthor{barros2020Moody}'s approach does not currently provide an explanation for how this dispositional change has affected its current goal. 
Currently providing such an explanation is not evident in the XRL literature and represents an opportunity for future research.", "id": "79e60728-fc44-4502-8d93-5cb833e8b2de", "level": "subsection", "origin_cites_number": 19, "parent_id": "eb006b52-7d27-46c8-93fd-2b3539a15b1f", "prefix_titles": [ [ "title", "Explainable reinforcement learning for broad-XAI: a conceptual framework and survey" ], [ "section", "Full Framework: Opportunities for Explainable Reinforcement Learning" ], [ "subsection", "Explanation of Disposition" ] ], "subsections": [], "title": "Explanation of Disposition" }, { "cite_extract_rate": 0.42857142857142805, "cites": [ 4538, 7836, 3766, 4579, 978, 4580, 4578, 979, 4577 ], "content": "\\label{Sec:Events}\nIn many real-world applications, an RL agent will be required to deal with stochastic and dynamic environments . In such environments unplanned events will occur, potentially creating unexpected outcomes. An explainable agent, in such an environment, will be expected to explain how that event caused an outcome, or provide a full causal path detailing how the event caused any changes in the agent's disposition, goal or action selection. For an agent to provide such an explanation it must be able to predict the future states that would arise independent of the presence and actions of other actors within the environment. The agent's responses, in terms of disposition, goals and actions, to the expected state and the actual state can then be compared to provide such an explanation. An extension of this model would also be able to explain what the event was that changed the environment from that which was expected. To do so, it must be able to model the nature of stochastic events or model external actors' behaviour to understand how they may affect the environment. Therefore, this type of explanation requires the agent to perform a second-order, or social, level of explanation . 
\nThere is a range of value-based approaches to optimising an agent's behaviour in such environments. For instance, Robust RL , and specially designed training mechanisms can provide value-based solutions for learning and adapting in stochastic and dynamic environments. However, these approaches rarely predict the future state or model changes in the environment explicitly. Therefore, providing an explanation facility with such approaches is unlikely to provide suitable results. The nature of requiring an explicit prediction for such an explanation excludes the direct application of value-based RL methods without some form of separate predictive model being developed. For instance, one approach used for this is the utilisation of generative adversarial networks (GANs) and even recurrent generative adversarial networks (RGANs) . In RL these methods are more frequently being referred to as Predictive State Encoders and are used to generate future states, also called belief states, and to predict dynamic actors' behaviour . Similarly, in model-based methods there has been significant work in developing multiple models of a domain, where prediction errors are used to select the controller or policy . \nThere is evidence of this being a valuable form of explanation from work in BDI agents . As the name suggests, BDI agents use knowledge engineering principles to explicitly model beliefs, desires and intentions for an agent. Using knowledge-based graph traversals, the beliefs about events and external actors can be a component of an integrated broad explanation of the system's behaviour. Similarly, outside of BDI research, there have been planning methods developed for providing such an explanation. Based on knowledge engineering principles, these approaches utilise abductive reasoning to generate explanations . 
\citeauthor{molineaux2011learning} presents a particularly interesting method that learns an event-model to explain anomalies through generative abductive reasoning over historical observations in partially observable dynamic environments. Currently, explaining events in stochastic and dynamic environments has not been done in the RL space. As XRL research moves away from debugging-style explanation towards non-expert focused explanations, providing event-based explanations is an important future research direction.", "id": "d668aedc-3432-474f-b72d-b27087fe1d02", "level": "subsection", "origin_cites_number": 21, "parent_id": "eb006b52-7d27-46c8-93fd-2b3539a15b1f", "prefix_titles": [ [ "title", "Explainable reinforcement learning for broad-XAI: a conceptual framework and survey" ], [ "section", "Full Framework: Opportunities for Explainable Reinforcement Learning" ], [ "subsection", "Explanation of Events" ] ], "subsections": [], "title": "Explanation of Events" }, { "cite_extract_rate": 0.06451612903225801, "cites": [ 4538, 4581 ], "content": "\\label{Sec:Expectations}\nTraditional RL has a single goal that is generally defined by the reward engineer implementing the solution. This goal is an articulation of the expectation being placed on the agent. Due to the `hard coded' nature of this expectation there is little need to explain how this expectation has caused its behaviour. However, this approach only allows for the development of very narrowly defined agents and is not applicable as agents become increasingly integrated with society. Such systems must adapt to their dynamic surroundings, changing their disposition and goals based on the cultural expectations of the external society with which they are integrated. Any such system must also be able to explain what expectations it is using to modify its behaviour. 
\citeauthor{dazeley2021levels} extensively discussed these N\textsuperscript{th}-order explanations and the need for an autonomous agent operating in a human-AI integrated environment to model the cultural expectations that other actors may have on how the agent should behave. \nExpectations may be easily codifiable rules such as government-enforced laws, military rules of engagement, ethical guidelines or business rules, or they can be more abstract, learnt, or niche rules such as staying out of the doctor's way when they are rushing through an emergency ward. To meet these expectations an agent is required to change its behaviour away from its primary objective, whatever that might be. These changes in behaviour represent an area that must be explained as it may not be obvious to observers why an agent behaved in the way that it did. In particular, the agent should be able to articulate what expectation it is pursuing at any given time, why it selected that expectation, and how that changed its behaviour. \nOnly agents that actively maintain a model of the expectations being placed upon them would require such explanations and currently this can only be done in RL through the incorporation of secondary systems. For instance, behaviour modelling has been studied in several fields such as BDI-based Normative Agents ; Game Theory ; Emotion-Driven or Emotion Augmentation learning ; and, most directly, by Social Action research, which models the external demands placed on an agent that affect its goals or actions . Direct use of expectation in RL is evident where some systems are designed to incorporate social and cultural awareness into their action selection mechanism, such as pedestrian and crowd avoidance systems . \nLike explanations of events, there are currently no known examples of XRL research into providing explanations of such systems. 
One particularly interesting recent study by \\citeauthor{kampik2019explaining} uses the idea of Explicability , where an agent can perform actions and make decisions based on human expectations. \\citeauthor{kampik2019explaining} developed an approach and taxonomy for sympathetic actions that incorporate a utility for socially beneficial behaviour at the detriment of the agent's own personal gain. This system then provided explanations for the agent's behaviour resulting from these expectations. Furthermore, \\citeauthor{kampik2019explaining} recognises the relevance and applicability of this approach to RL based systems. Identifying papers in this space is, however, difficult as there is no defined research domain for this research and papers are often published under more generic fields such as understandability , transparency , predictability .", "id": "e63d507d-a49b-4f8a-8e49-ae1568c15341", "level": "subsection", "origin_cites_number": 31, "parent_id": "eb006b52-7d27-46c8-93fd-2b3539a15b1f", "prefix_titles": [ [ "title", "Explainable reinforcement learning for broad-XAI: a conceptual framework and survey" ], [ "section", "Full Framework: Opportunities for Explainable Reinforcement Learning" ], [ "subsection", "Explanation of Expectations" ] ], "subsections": [], "title": "Explanation of Expectations" }, { "cite_extract_rate": 0.310344827586206, "cites": [ 4538, 4579, 4573, 4545, 7507, 4580, 8802, 7833, 7837 ], "content": "\\label{Sec:Using}\nThe work discussed previously focused on how each of the individual components of the Causal XRL Framework (CXF) has, or could be, implemented. This section will briefly look at how the CXF can be implemented and used. To some extent this can simply involve implementing all of the approaches in a single explanation facility for an agent. For instance, a system could initially use a technique such as \\citeauthor{greydanus2017visualizing} to identify the active features of a state. 
These key features could then also be used to learn causal links between actions and outcomes, creating a model similar to those developed by \citeauthor{madumal2019explainable} , \citeauthor{khan2009minimal} or \citeauthor{cruz2019XRL} . The combination of these approaches could provide answers to many of the reactive explanations required of the Simple-CXF. Extending these approaches to incorporate multiple objectives , or reward decomposition would allow expressive contrastive and counterfactual explanations that would also facilitate the explanation of the goals and dispositions behind those choices. Second and N\textsuperscript{th}-order explanations of events and expectations would require an agent to construct models of other actors in dynamic environments using approaches such as Predictive State Encoders , Emotion Augmentation , or Social Action . Methods would then need to be developed to explain how these models affected the agent's expectations, disposition or its interpretation of an event. Such an approach combining all these elements would accomplish the idea of explaining the full details of the decision.\nOne important example illustrating the potential of this combined approach was a study of non-experts carried out by \citeauthor{anderson2019mental} . This study of 124 participants found that there was a significant improvement in the explainee's mental model of the agent's behaviour when both XRL-Perception and XRL-Behavioural explanations were provided, when compared to only providing one or no explanation. This suggests that the combined approach was of value in improving people's understanding. However, \citeauthor{anderson2019mental} also found that the combined explanation created disproportionately high cognitive loads for the explainee. This suggests that providing explanations across all categories would be unwieldy and difficult for most people to understand --- simply because there is too much, potentially conflicting, information. 
This result aligns with \citeauthor{lombrozo2007simplicity}'s suggestion that an explanation should invoke as few causes as possible (simple), cover as many events as possible (general), and maintain consistency with people's prior knowledge (coherent) . Therefore, simply merging explanation facilities brings us no closer to presenting explanations that improve understanding, and hence trust and social acceptance. \n\citeauthor{dazeley2021levels} presents a model for conversational interaction for explaining an AI agent's behaviour, reproduced in Figure \ref{fig:Quiescence}. The proposed model suggests that the agent presents the explanations incrementally over a sequence of interaction cycles. It is suggested that such a model would start at the highest level of intentionality in its explanation (N\textsuperscript{th}-order) and progress down the pyramid, Figure \ref{fig:Levels}, until the explainee reaches a point of \textit{Quiescence} (state of being quiet) representing a measure of stability in the user's understanding and acceptance, where they no longer require deeper explanations. This model of conversational explanation aligns with the CXF proposed in this paper. Due to each of the CXF categories being aligned to the levels of explanation , this paper proposes that an implementation of the CXF can use this model to break the range of explanation types down and only present those that are required for the user at that time. \n\begin{figure*}\n \centering\n \includegraphics[trim={0.4cm 4.1cm 0.6cm 4.0cm},clip,width=30pc]{Images/F7_Quiescence.pdf} \n \caption{A conversational model for explanation, where an agent iterates through three stages until the explainee is satisfied. The agent starts at the highest level of explanation and progresses down the pyramid, Figure \ref{fig:Levels}, to more specific explanations. 
This conversational model is from \citeauthor{dazeley2021levels} .}\n \label{fig:Quiescence}\n\end{figure*}\nIn this model the user would initially pose a query concerning an agent's decision, either explicitly or implicitly, which is first interpreted. The second stage attempts to identify and clarify any assumptions. This stage allows the agent to skip higher levels of explanation and go straight to the lower level explanations to address any assumptions if required. For example, if the user asks `why didn't you catch the ball?', there is an assumption that the agent was aware that there was a ball, or that it did not succeed in catching the ball. Therefore, in resolving such assumptions the agent should first determine if it was aware of a ball, and secondly, whether the outcome was in fact that no ball was caught. In the event that the assumptions are incorrect, e.g. there was no ball in its perception, then the explanation provided in the last stage skips the higher levels and provides the relevant lower level explanation. If there are no assumptions or if they are correct then the agent provides the highest level explanation as this is the most general. In the final stage the agent uses ontological models, visualisations, etc., to provide a causal explanation at the determined level. \nThis approach ensures that the explanation is coherent, focused on the explainee's context, while otherwise being as general as possible. In the event that the explanation does not satisfy the user they will either ask a follow-up question or, through body language, indicate they are not satisfied. In such situations, the agent simply progresses to the next lower level explanation. The process ends once: the user expresses satisfaction; they change their questions to a new topic; or, all available explanations have been provided. 
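As a purely illustrative sketch (not an implementation from the cited work; the level names, function signatures and feedback model are all hypothetical), the descent-to-quiescence loop described above could be expressed as:

```python
# Hypothetical sketch of the conversational explanation loop described above.
# The agent starts at the most general level of explanation and descends the
# pyramid until the explainee signals quiescence. All names are illustrative.

LEVELS = ["expectation", "disposition", "goal", "action", "perception"]

def explain(query, explanations, is_satisfied):
    """Yield (level, explanation) pairs from the most general level downward.

    explanations: dict mapping a level name to an available explanation string.
    is_satisfied: callable modelling the explainee's feedback after each answer.
    """
    for level in LEVELS:
        if level not in explanations:
            continue  # no explanation available at this level; skip it
        answer = explanations[level]
        yield level, answer
        if is_satisfied(answer):
            return  # quiescence reached: stop descending
```

A real dialogue manager would replace `is_satisfied` with the explicit follow-up questions or body-language cues described above, and the assumption-resolution stage would decide where in `LEVELS` to start.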
This interactive approach to communicating explanations to a user represents a process where the agent aims to facilitate the development of a shared mental model with a human. This shared mental model is key in many situations, particularly in team-based and socially integrated domains. Development of these shared mental models has been previously explored by \citeauthor{tabrez2019improving} where an agent uses a process referred to as Reward Augmentation and Repair through Explanation (RARE), based on inverse RL, to infer the most likely `reward' function used by a human collaborator and explain how that differs from the optimal function. In other words, this project is similar to this paper's approach in that it is providing an explanation in the context of the explainee's current understanding. \nCurrently, there is no attempt to build a facility like the CXF in the XRL literature, apart from some attempts to combine perception with behaviour and suggested extensions to combine actions with goals . The approach has been extended more thoroughly outside of XRL, using generic explanation facilities. These systems observe interactions by the agent and use the learnt model to provide explanations, such as Local Interpretable Model-Agnostic Explanations (LIME) and Black Box Explanations through Transparent Approximations (BETA) . Both of these approaches provide explanations across a subset of the components in the CXF.\nOne particularly notable example is \citeauthor{neerincx2018using} , which extended LIME and separated perceptual explanations from the cognitive processing to provide holistic explanations. The cognitive processing component incorporated goal and dispositional explanation based on emotion-based explanations. Finally, the approach incorporated ontological and interaction design patterns to communicate explanations. 
This approach represents the most advanced implementation utilising levels of explanation based on intentionality and could be interpreted using multiple sections of the CXF.", "id": "afb54c54-3a76-432d-b11f-bd0d4d1d33ad", "level": "section", "origin_cites_number": 29, "parent_id": "764857f2-7694-4295-9f0c-97d9b42fc288", "prefix_titles": [ [ "title", "Explainable reinforcement learning for broad-XAI: a conceptual framework and survey" ], [ "section", "Using the Causal Explainable Reinforcement Learning Framework" ] ], "subsections": [], "title": "Using the Causal Explainable Reinforcement Learning Framework" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 4538 ], "content": "\\label{Sec:Conclusion}\nReinforcement Learning (RL) is widely acknowledged as one of three subfields of Machine Learning, where an agent learns through interaction with the environment using trial-and-error. However, research in eXplainable RL (XRL) is often published under the area of Interpretable Machine Learning (IML), along with supervised learning approaches to explanation. This categorisation, however, misrepresents the possibilities that XRL presents. This paper's aim was to articulate how XRL is distinct when compared to IML, and that it offers the potential to go well beyond simply interpreting decisions. More importantly, it argued that XRL could be the foundation for the development of truly \\textit{Broad-XAI} systems that are capable of providing trusted and socially acceptable AI systems to the wider public. In order to illustrate this, the paper's second contribution was to provide a conceptual framework, referred to as the Causal XRL Framework (CXF), that highlights the range of explanations that can be provided. 
This framework was used to review the current extent of research that has been carried out and to identify opportunities for future research.\nThe Causal XRL Framework (CXF), presented in Figure \ref{fig:Framework}, is based on the Causal Explanation Network (CEN) suggested by \citeauthor{bohm2015people} . The CEN presents a cognitive science view of how people explain behaviour and extends prior work in attribution theory . Like the CEN, the CXF identifies seven components to causal thinking about an actor's behaviour. This directed graph of causal relationships includes a single sink node representing the outcome. This outcome is caused either by an intentional action by an agent or by an unintended or uncontrolled sequence of events. These events could be a result of stochastic actions or external actors. The agent's actions are caused by a goal, which in turn may be altered by its internal and temporary disposition, such as a change in parameter, simulated emotion or safety threshold being passed. Finally, the disposition can be affected by external cultural expectations placed upon the agent or by its perception of the world. A simplified framework, referred to as the Simplified-CXF, containing only perception and action causes for an outcome, was also provided, which represents the majority of current research in XRL.\nIn surveying the current state-of-the-art research into XRL, this paper discussed how most of the XRL-Perception was derived directly from IML research. This connection with IML is due to RL's utilisation of standard supervised learning approaches for function approximation and state feature extraction. However, there were also several examples moving beyond straight IML, providing both model and value-based extensions that were specific to XRL. These methods use introspection of the RL framework to identify causal relationships between the perceived features and either the action selected or the outcome that resulted. 
Of particular interest was the emergence of methods used for generating counterfactual and contrastive explanations based on these causal links between state-features and outcomes.\nXRL-behaviour, the sub-branch of XRL concerned with the agent's choice of action and the effect it has on the outcome, was explored in detail. In particular, the temporal nature of RL allows causal explanation over a sequence of actions. In surveying the literature, a number of approaches were identified. One branch of extensive research in this space is model-based RL methods. These methods extended historical domains like Bayesian Networks and Decision Networks in providing dynamic explanations. A second branch that has seen recent results is using introspection in value-based RL, where agents either learn a transition model during interaction with the environment or convert internal values into meaningful explanations.\nFinally, in discussing the full framework, this paper identified several opportunities for further research in XRL, such as using hierarchical, multi-goal, multi-objective and intrinsically motivated RL techniques in Goal-driven explanation and emotion-aware explainable AI (EXAI). This paper also discussed hypothetical approaches to the development of event-based and expectation-based explanations, such as utilising predictive state encoders and explicability in RL. Currently, these areas have been studied by other fields of explanation, such as BDI agents and Social Action, but represent the fringe of XRL research. This paper suggests these are exciting areas of future study that this field should pursue so that RL can be more widely used in real-world human-agent mixed application domains. 
\n\\bibliographystyle{cas-model2-names}\n\\bibliography{cas-refs}\n\\balance\n\\end{document}", "id": "2dd5f5b4-e62c-4bdb-8223-f5bd1220aad1", "level": "section", "origin_cites_number": 3, "parent_id": "764857f2-7694-4295-9f0c-97d9b42fc288", "prefix_titles": [ [ "title", "Explainable reinforcement learning for broad-XAI: a conceptual framework and survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
118
[ 4538, 1798, 4537, 8801, 4539, 4548, 4540, 4542, 1388, 1344, 3168, 4544, 4545, 4541, 4543, 4547, 4549, 1912, 4546, 4553, 4550, 4552, 4551, 3586, 7830, 1910, 8500, 8502, 1409, 514, 8802, 4557, 3580, 4555, 4554, 4556, 4560, 4558, 1820, 4559, 4561, 4565, 4562, 4567, 4563, 4566, 7831, 4564, 4572, 4568, 4570, 4571, 4569, 3217, 4573, 4575, 7834, 7507, 4574, 7833, 7832, 4576, 3588, 7835, 7836, 3766, 4579, 978, 4580, 4578, 979, 4577, 4581, 7837 ]
1.068117
[ "James Kotary", "Ferdinando Fioretto", "Pascal Van Hentenryck", "Bryan Wilder" ]
End-to-End Constrained Optimization Learning: A Survey
2021
2021-03-30T14:19:30Z
cs.LG
This paper surveys the recent attempts at leveraging machine learning to solve constrained optimization problems. It focuses on surveying the work on integrating combinatorial solvers and optimization methods with machine learning architectures. These approaches hold the promise to develop new hybrid machine learning and optimization methods to predict fast, approximate, solutions to combinatorial problems and to enable structural logical inference. This paper presents a conceptual review of the recent advancements in this emerging area.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "76c7c5c6-a4be-4647-adfa-d8a4a864aace", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ] ], "subsections": [ "5b1ec1d9-6282-42dc-8ea1-5f5904327969", "ddacfebb-65b0-4627-848f-b49b55402afa", "f4bf48cc-5f58-4a56-a503-f30a6717807c", "ebf5b173-97b6-40d2-a04f-4417e9f6aa4a", "37ceb366-2e0f-4885-b587-327310d7db04", "8df9af23-b0c9-4b28-a15b-e5cf13a6159d", "18ed630f-d9cd-45a7-af17-fcea1c0396d5", "c0c3784c-24d5-427a-8e19-a1e786b10c0f" ], "title": "root" }, { "cite_extract_rate": 0.5, "cites": [ 9086 ], "content": "\\emph{Constrained optimization} (CO) has made a profound impact in industrial and societal applications in numerous fields, including transportation, supply chains, energy, scheduling, and the allocation of critical resources. The availability of algorithms to solve CO problems is highly dependent on their form, and they range from problems that are efficiently and reliably solvable, to problems that provably have no efficient method for their resolution. \nOf particular interest in many fields are \\emph{combinatorial optimization} problems, which are characterized by discrete state spaces, and whose solutions are often combinatorial in nature, involving the selection of subsets, permutations, paths through a network, or other discrete structures to compose a set of optimal decisions. They are known for their difficulty, and are often NP-Hard.\nDespite their complexity, many CO problems are solved routinely and the AI and Operational Research communities have devised a wide spectrum of techniques and algorithms to effectively leverage the problem structure and solve many hard CO problem instances within a reasonable time and with high accuracy. 
While this success has made possible the deployment of CO solutions in many real-world contexts, the complexity of these problems often prevents them from being adopted in contexts of a repeated (e.g., involving expensive simulations, multi-year planning studies) or real-time nature, or when they depend in nontrivial ways on empirical data. \nHowever, in many practical cases, one is interested in solving problem instances sharing similar patterns. Therefore, machine learning (ML) methods appear to be a natural candidate to aid CO decisions and have recently gained traction in the nascent area at the intersection between CO and ML.\nCurrent research areas in the intersection of CO and ML can be categorized into two main directions: {\emph{ML-augmented CO}} and {\emph{End-to-End CO learning}}. \nThe former focuses on using ML to aid the decisions performed within an optimization algorithm used to solve CO problems. The latter involves the combination of ML and CO techniques to form integrated models which predict solutions to optimization problems. The survey subdivides this context into two main application domains: {\bf (1)} the fast, approximate prediction of solutions to CO problems and {\bf (2)} the integration of data-driven inference with CO solvers for structural logical inference.\nWhile there exists work surveying ML-augmented CO methods , the more modern end-to-end CO learning methods lack a cohesive and critical analysis. 
The goal of this survey is to address this gap: it provides a focused overview of the work to date on end-to-end CO learning, offers a critical analysis, and poses a set of open questions and directions.", "id": "5b1ec1d9-6282-42dc-8ea1-5f5904327969", "level": "section", "origin_cites_number": 2, "parent_id": "76c7c5c6-a4be-4647-adfa-d8a4a864aace", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "A constrained optimization (CO) problem poses the task of minimizing an \\emph{objective function} $f: \\mathbb{R}^n \\to \\mathbb{R}_+$ of one or more variables $\\bm{z} \\in \\mathbb{R}^n$, subject to the condition that a set of \\emph{constraints} $\\mathcal{C}$ are satisfied between the variables: \n\\begin{equation}\n\\label{eq:opt}\n\\mathcal{O} = \\argmin_{\\bm{z}} f(\\bm{z}) \\;\\; \n \\text{subject to} \\;\\;\n \\bm{z} \\in \\mathcal{C}.\n\\end{equation}\nAn assignment of values $\\bm{z}$ which satisfies $\\mathcal{C}$ is called a \\emph{feasible solution}; if, additionally, $f(\\bm{z}) \\leq f(\\bm{w})$ for all feasible $\\bm{w}$, it is called an \\emph{optimal solution}.\nA well-understood class of optimization problems is that of \\emph{convex} problems, those in which the constraint set ${\\cal C}$ is a convex set, and the objective $f$ is a convex function. Convex problems have the favorable properties of being efficiently solvable with strong theoretical guarantees on the existence and uniqueness of solutions . \nA particularly common constraint set arising in practical problems takes the form \n$ {\\cal{C}}= \\{ \\bm{z} \\;:\\; \\bm{A} \\bm{z} \\leq \\bm{b} \\}$,\nwhere $\\bm{A} \\in \\mathbb{R}^{m \\times n}$ and $\\bm{b} \\in \\mathbb{R}^m$. \nIn this case, ${\\cal{C}}$ is a convex set. If the objective $f$ is an affine function, the problem is referred to as a \\emph{linear program} (LP). 
When a linearly constrained problem includes a quadratic objective rather than a linear one, the result is called a \\emph{quadratic program} (QP). \nIf, in addition, some subset of a problem's variables are required to take integer values, it is called a \\emph{mixed integer program} (MIP). While LPs and QPs with convex objectives belong to the class of convex problems, the introduction of integrality constraints ($\\bm{z} \\in \\mathbb{N}^n$) results in a much more difficult problem. The feasible set of a MIP consists of distinct points in $\\mathbb{R}^n$, and is not only nonconvex but also disjoint; the resulting problem is, in general, NP-Hard. \nFinally, nonlinear programs (NLPs) are optimization problems where some of the constraints or the objective function are nonlinear. Many NLPs are nonconvex and cannot be efficiently solved .\nOf particular interest are the \\emph{mixed integer linear programs} (MILPs), linear programs in which a subset of variables is required to take integral values. \nThis survey is primarily concerned with optimization problems involving linear constraints, a linear or quadratic objective, and either continuous or integral variables, or a combination thereof.", "id": "ddacfebb-65b0-4627-848f-b49b55402afa", "level": "section", "origin_cites_number": 2, "parent_id": "76c7c5c6-a4be-4647-adfa-d8a4a864aace", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "Preliminaries: Constrained Optimization" ] ], "subsections": [ "adb13d68-5daf-4450-b862-529d027bcad0" ], "title": "Preliminaries: Constrained Optimization" }, { "cite_extract_rate": 0, "cites": [], "content": "A well-developed theory exists for solving convex problems. \nMethods for solving LP problems include \\emph{simplex} methods , \\emph{interior point} methods and \\emph{Augmented Lagrangian} methods .\nEach of these methods has a variant that applies when the objective function is convex. 
Convex problems are in the class of $\\bm{P}$, and can be assumed to be reliably and efficiently solvable in most contexts. For a review on convex optimization and Lagrangian duality, the interested reader is referred to .\nMILPs require a fundamentally different approach, as their integrality constraints put them in the class of NP-Hard problems. \n\\emph{Branch and bound} is a framework combining optimization and search, representable by a search tree in which an LP \\emph{relaxation} of the MILP is formed at each node by dropping integrality constraints, and efficiently solved using solution methods for linear programming. Subsequently, a variable $z_i$ with (fractional) value $a_i$ in the relaxation is selected for branching to two further nodes. In the right node, the constraint $z_i \\geq \\lceil a_i \\rceil$ is imposed and in the left, $z_i \\leq \\lfloor a_i \\rfloor$; bisecting the search space and discarding the fractional values in between the bounds. The solution of each LP relaxation provides a lower bound on the MILP's optimal objective value. When an LP relaxation happens to admit an integral solution, an upper bound is obtained. The minimal upper bound found thus far is used, at each node, for pruning.\nFinally, \\emph{Constraint Programming} is an additional effective paradigm adopted to solve industrial-sized MI(L)P and discrete combinatorial programs.\n\\iffalse\nThe decision of which variable to select for branching at each node can have an important effect on the efficiency of branch and bound. Various branching rules have been developed which aim to reduce the total size of a branch and bound search tree without investing too much computational time in each variable selection, and often function by assigning \\emph{scores} to each variable based on auxiliary calculations, with the highest-scoring variable being selected for branching. 
\\emph{Primal Heuristics} are algorithms used to compute approximate solutions to MILP problems, and are often employed as subroutines in branch and bound at select nodes with the goal of finding a better incumbent solution at the expense of additional computation.\n\\fi\n\\iffalse", "id": "adb13d68-5daf-4450-b862-529d027bcad0", "level": "paragraph", "origin_cites_number": 5, "parent_id": "ddacfebb-65b0-4627-848f-b49b55402afa", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "Preliminaries: Constrained Optimization" ], [ "paragraph", "Optimization Solving Methods" ] ], "subsections": [ "b5f0e1a4-2fd6-42a4-a70b-606858f8371d", "6d5c4c01-100c-4df2-a573-49747643c743" ], "title": "Optimization Solving Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "For an optimization problem of the form:\n\\begin{flalign}\n x^* = \\qquad \\bm{\\argmin}_{\\bm{x} \\in \\mathbf{R^n}}\\;\\;& f( x ) && \\label{obj_kkt} \\\\\n\t\\mbox{\\bf subject to:}\\;\\; \n\t\t\t& g(x) \\leq 0, && \n\t\t\t& h(x) =0 && \\label{con_kkt}\n\\end{flalign}\nThe \\emph{Lagrangian} is defined to be \n$$ \\mathcal{L}(x, \\mu, \\lambda) = f(x) + \\sum_i \\mu_i g_i(x) + \\sum_j \\lambda_j h_j(x). $$ \nThen the Lagrange dual function is \n$$ \\cal{D}(\\mu, \\lambda) = \\inf_x \\mathcal{L}(x, \\mu, \\lambda) $$ \nand the Lagrange dual \\emph{problem} is defined as\n$$ \\max_{\\mu \\geq 0, \\lambda} \\; \\cal{D}(\\mu, \\lambda). $$\nThe dual problem has the useful property that any of its feasible solutions provides a lower bound on the optimal value of the original so-called \\emph{primal} problem (\\ref{obj_kkt} - \\ref{con_kkt}).", "id": "b5f0e1a4-2fd6-42a4-a70b-606858f8371d", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "adb13d68-5daf-4450-b862-529d027bcad0", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "Preliminaries: Constrained Optimization" ], [ 
"paragraph", "Optimization Solving Methods" ], [ "subsubsection", "Lagrangian Duality" ] ], "subsections": [], "title": "Lagrangian Duality" }, { "cite_extract_rate": 0, "cites": [], "content": "For any optimization problem expressed in the form (\\ref{obj_kkt} - \\ref{con_kkt}),\nThe Karush-Kuhn-Tucker (KKT) conditions are a set of equations which provide a necessary condition for optimality of a candidate solution. If $f$ and $g$ are convex functions and $h$ is affine, the conditions are also sufficient: \n$$ \\nabla f (x^*) + \\sum_i \\mu_i \\nabla g_i(x^*) \n + \\sum_j \\lambda_j \\nabla h_j (x^*) = 0 \\label{kkt1}$$\n$$ g(x) \\leq 0 $$\n$$ h(x) = 0 $$\n$$ \\sum_i \\mu_i g_i(x^*) = 0 $$\nIn the case where $f$ is a quadratic function and $g,h$ are linear, the KKT conditions form an efficiently solvable linear system. \n\\fi", "id": "6d5c4c01-100c-4df2-a573-49747643c743", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "adb13d68-5daf-4450-b862-529d027bcad0", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "Preliminaries: Constrained Optimization" ], [ "paragraph", "Optimization Solving Methods" ], [ "subsubsection", "Karush–Kuhn–Tucker Conditions" ] ], "subsections": [], "title": "Karush–Kuhn–Tucker Conditions" }, { "cite_extract_rate": 0.5, "cites": [ 166 ], "content": "Supervised deep learning can be viewed as the task of\napproximating a complex non-linear mapping from targeted data. Deep\nNeural Networks (DNNs) are deep learning architectures composed of a\nsequence of layers, each typically taking as inputs the results of the previous layer . Feed-forward neural networks are basic DNNs where the layers are fully connected and the function connecting the layer is given by\n\\(\n \\bm{o} = \\pi(\\bm{W} \\bm{x} + \\bm{b}),\n\\)\nwhere $\\bm{x} \\!\\in\\! \\mathbb{R}^n$ and is the input vector, \n$\\bm{o} \\!\\in\\! \\mathbb{R}^m$ the output vector, $\\bm{W} \\!\\in\\! 
\n\\mathbb{R}^{m \\times n}$ a matrix of weights, and $\\bm{b} \\!\\in\\! \n\\mathbb{R}^m$ a bias vector. The function $\\pi(\\cdot)$ is\noften non-linear (e.g., a rectified linear unit (ReLU)).\nSupervised learning tasks consider datasets $\\bm{\\chi}\\!=\\!\\{\\bm{x}_i, \\bm{y}_i\\}_{i=1}^N$ consisting of $N$ data points with $\\bm{x}_i \\in \\mathcal{X}$ being a feature vector and $\\bm{y}_i \\in \\mathcal{Y}$ the associated targets. The goal is to learn a model ${\\cal M}_\\theta : \\mathcal{X} \\to \\mathcal{Y}$, where $\\theta$ is a vector of real-valued parameters, and whose quality is measured in terms of a nonnegative, and assumed differentiable, \\emph{loss function} $\\mathcal{L}: \\mathcal{Y} \\times \\mathcal{Y} \\to \\mathbb{R}_+$. The learning task minimizes the empirical risk (ERM) objective:\n\\begin{equation}\n\\label{eq:erm}\n \\min_\\theta J({\\cal M}_\\theta, \\bm{\\chi}) = \\frac{1}{N} \\sum_{i=1}^N \n \\mathcal{L}({\\cal M}_\\theta(\\bm{x}_i), \\bm{y}_i).\n\\end{equation}\nMost of the techniques surveyed in this work use (variants of) DNNs whose training conforms to the objective above. Other notable classes of deep learning methods used to solve CO problems are Sequence Models and Graph Neural Networks (GNNs), which are reviewed below, and \nReinforcement Learning (RL), which differs from supervised learning in not requiring labeled input/output pairs and is concerned with learning a policy that maximizes some expected reward function. 
\nWe refer the reader to \n for an extensive overview of RL.", "id": "f4bf48cc-5f58-4a56-a503-f30a6717807c", "level": "section", "origin_cites_number": 2, "parent_id": "76c7c5c6-a4be-4647-adfa-d8a4a864aace", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "Preliminaries: Deep Learning" ] ], "subsections": [ "e17bae29-4071-448d-919e-2d412e836afe" ], "title": "Preliminaries: Deep Learning" }, { "cite_extract_rate": 1, "cites": [ 167, 168 ], "content": "Recurrent neural networks (RNNs) have proven highly effective for tasks requiring pattern recognition in sequential data. A basic RNN uses a \\emph{recurrent layer} to compute a sequence of output arrays $\\bm{y}_1, \\ldots, \\bm{y}_N$ from a sequence of input arrays $\\bm{x}_1, \\ldots, \\bm{x}_N$. Each of several hidden units $\\bm{h}_t$ depends on its corresponding input array $\\bm{x}_t$ along with the previous hidden unit $\\bm{h}_{t-1}$ in the following manner:\n$$ \n\\bm{h}_t = \\pi( \\bm{W}^x \\bm{x}_t + \\bm{W}^h \\bm{h}_{t-1} + \\bm{b}) \n$$\nwhere $\\pi$ is a nonlinear activation function, and the weight matrices $\\bm{W}^x$, $\\bm{W}^h$ and the additive bias $\\bm{b}$ are learned parameters, applied at each unit of the recurrent layer. \nIn their basic form, RNNs are known to suffer from vanishing gradient issues: the propagation of gradients through many hidden units by the chain rule can lead to loss of information as gradient magnitudes diminish. Modern RNNs use gating mechanisms, as in gated recurrent units (GRUs) and long short-term memory (LSTM) models, which can better preserve relevant information across many time steps. \n\\emph{Sequence-to-Sequence} models build on recurrent neural networks with the aim of extending the framework to handle input and output sequences of variable size. 
This is accomplished by the introduction of an \\emph{encoder-decoder} architecture in which an encoder LSTM maps an input sequence ${\\bf x} = (\\bm{x}_1,\\ldots,\\bm{x}_N)$ to a fixed-length context vector $\\bm{c}$, by way of the hidden encoder states $(e_1,\\ldots,e_N)$ (typically, $\\bm{c} = e_N$). \nA subsequent decoder LSTM is used to map the context vector to a sequence of hidden decoder states $(d_1,\\ldots,d_M)$, which are used to target a sequence of labeled vectors $\\bm{y} = (\\bm{y}_1,\\ldots, \\bm{y}_M)$. That is, some function $g$ is trained to approximate the likelihood of predicting $\\bm{y}_t$ given the context vector $\\bm{c}$, the current decoder state, and all previous $\\bm{y}_i$:\n\\begin{equation*}\n\t\\Pr(\\bm{y}_t | \\{ \\bm{y}_1, \\ldots, \\bm{y}_{t-1} \\}, \\bm{c}) = \n\tg(\\bm{y}_{t-1}, \\bm{d}_t, \\bm{c}),\n\\end{equation*}\nwhere the overall training goal is to model the probability of predicting sequence $\\bf y$. This joint probability is decomposed into a product of ordered conditionals: \n\\begin{equation*}\n\t\\Pr({\\bf y}) = \\prod_{t=1}^M \n\t\t\\Pr(\\bm{y}_t | \\{ \\bm{y}_1, \\ldots, \\bm{y}_{t-1} \\}, \\bm{c}).\n\\end{equation*} \nThe primary downside to this approach is the reliance on a single fixed-length vector to encode the input sequence, resulting in limited expressivity, particularly when the input sequences observed at test time are of a different length than those used during training. \n\\emph{Attention} mechanisms are used to address this shortcoming. They use the same encoder design, but allow the decoder to adaptively use any subset of the encoder's hidden units during decoding. 
\nIn place of a single context vector, a \\emph{sequence} of context vectors $\\bm{c}_r$ (one per decoder unit) is computed as \n\\begin{subequations}\n\\begin{flalign}\n\\label{eq:attention1}\n \\bm{c}_r &= \\textstyle \\sum_{t = 1}^N \\bm{a}_{rt} e_t \\quad \\forall r \\leq M\\\\\n\\label{eq:attention2}\n \\bm{a}_{rt} &= \\textstyle \\frac{ \\exp(\\bm{u}_{rt}) }{ \\sum_{k=1}^N \\exp(\\bm{u}_{rk}) }\\\\\n\\label{eq:attention3}\n \\bm{u}_{rt} &= \\bm{v}^T \\pi(\\bm{W}_1 e_t + \\bm{W}_2 d_r), \n\\end{flalign}\n\\end{subequations}\nwhere $\\bm{a}$ above is computed as the softmax of $\\bm{u}$, the result of the \\emph{alignment} model (\\ref{eq:attention3}) in which $\\bm{v}$, $\\bm{W}_1$, and $\\bm{W}_2$ are learnable parameters and $\\pi$ is a nonlinear activation function. The values $\\bm{a}_{rt}$ constitute a sequence of \\emph{attention} vectors which measure the alignment between decoder state $d_r$ and encoder state $e_t$. Each attention vector is used as a set of weights to form a linear combination of encoder states, used as a context vector to inform subsequent predictions.\n\\emph{Pointer networks} are a special use case of attention mechanisms, used when the intended output of a network is a permutation of its input. Rather than utilizing the attention vector to assist output sequence prediction, pointer networks treat the attention vector itself as the prediction. That is, \n\\begin{align*}\n\t&\\bm{u}_{rt} = \\bm{v}^T \\pi(\\bm{W}_1 e_t + \\bm{W}_2 d_r) \n\t\\quad \\forall 1 \\leq t \\leq N, \\\\\n\t&\\Pr(\\bm{y}_t | \\{ \\bm{y}_1, \\ldots, \\bm{y}_{t-1} \\}, \\bm{x}) \n\t= \\text{softmax}( \\bm{u}_r ),\n\\end{align*}\nwhere $\\bm{x}$ is again the input sequence and $\\bm{y}$ is a permutation of $\\bm{x}$. 
The attention vector is viewed as a pointer to a particular encoder state, and can be used in subsequent predictions.", "id": "e17bae29-4071-448d-919e-2d412e836afe", "level": "paragraph", "origin_cites_number": 2, "parent_id": "f4bf48cc-5f58-4a56-a503-f30a6717807c", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "Preliminaries: Deep Learning" ], [ "paragraph", "Sequence Models" ] ], "subsections": [ "7b33fbbc-4278-453b-9eb3-0e52a8ec7184" ], "title": "Sequence Models" }, { "cite_extract_rate": 0.5, "cites": [ 169 ], "content": "Graph Neural Networks (GNNs) learn to represent each node $v$ of a graph with a \\emph{state embedding} $h_v$, which depends on information from its neighboring nodes. The goal of these embeddings is to provide useful latent information about each node to subsequent components of a prediction model. Let $x_v, x_{co[v]}, h_{ne[v]}, x_{ne[v]}$ represent, respectively, the features of node $v$, the features of its edges, the state embeddings of its neighbors, and the features of its neighbors. Then consider two functions: \n\\begin{subequations}\n\\label{eq:gnn}\n \\begin{flalign}\n \\label{eq:gnn1}\n h_v &= f(x_v, x_{co[v]}, h_{ne[v]}, x_{ne[v]}) \\\\\n\t\\label{eq:gnn2}\n o_v &= g( h_v, x_v)\n \\end{flalign}\n\\end{subequations}\nEquivalently to (\\ref{eq:gnn1}) and (\\ref{eq:gnn2}), \n\\begin{subequations}\n \\begin{flalign}\n\t\\label{eq:gnn3}\n \tH &= F(H,X)\\\\\n\t\\label{eq:gnn4}\n \tO &= G(H,X_N)\n \\end{flalign}\n\\end{subequations}\nwhere $X$ and $H$ represent the concatenation of all features and all state embeddings, and $X_N$ contains only the node-wise features. When $F$ is a contractive mapping, the equation has a fixed point solution $H^*$ which can be found by repeated application of \\eqref{eq:gnn3}. When $F$ and $G$ are interpreted as feedforward neural networks, their parameters can be trained to produce state embeddings that are optimized to assist a particular learning task. 
\nSeveral variants and enhancements to the basic GNN have been proposed; provides a thorough survey of techniques and applications. \n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.91\\linewidth]{fig_1.pdf}\n \\caption{Machine Learning and Constrained Optimization.\n \\label{fig:survey_overview}}\n\\end{figure*}", "id": "7b33fbbc-4278-453b-9eb3-0e52a8ec7184", "level": "paragraph", "origin_cites_number": 2, "parent_id": "e17bae29-4071-448d-919e-2d412e836afe", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "Preliminaries: Deep Learning" ], [ "paragraph", "Sequence Models" ], [ "paragraph", "Graph Neural Networks" ] ], "subsections": [], "title": "Graph Neural Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "Current research areas in the synthesis of constrained optimization and machine learning can be categorized into two main directions: \\emph{ML-augmented CO}, which focuses on using ML to aid the performance of CO problem solvers, and \\emph{End-to-End CO learning (E2E-COL)}, which focuses on integrating combinatorial solvers or optimization methods into deep learning architectures.\nA diagram illustrating the main research branches within this area is provided in Figure \\ref{fig:survey_overview}. \nThe area related to End-to-End CO learning is the focus of this survey and is concerned with the data-driven prediction of solutions to CO problems. 
This paper divides this area into:\n{\\bf (1)} approaches that develop ML architectures to predict fast, approximate solutions to predefined CO problems, \nfurther categorized into \\emph{learning with constraints} and \\emph{learning on graphs}, and\n{\\bf (2)} approaches that exploit CO solvers as neural network layers for the purpose of structured logical inference, referred to here as the \\emph{Predict-and-Optimize} paradigm.", "id": "ebf5b173-97b6-40d2-a04f-4417e9f6aa4a", "level": "section", "origin_cites_number": 0, "parent_id": "76c7c5c6-a4be-4647-adfa-d8a4a864aace", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "Overview of ML and CO" ] ], "subsections": [], "title": "Overview of ML and CO" }, { "cite_extract_rate": 0.6153846153846151, "cites": [ 8325, 172, 9086, 173, 170, 7004, 7195, 171 ], "content": "ML-augmented CO involves the augmentation of already highly-optimized CO solvers with ML inference models. \nTechniques in this area draw on both supervised and RL frameworks to develop more efficient approaches to various aspects of CO solving for both continuous and discrete combinatorial problems. \nIn the context of combinatorial optimization, these are broadly categorized into methods that learn to guide the search decisions in branch and bound solvers, and methods that guide the application of primal heuristics within branch and bound. The former include low-cost emulation of expensive branching rules in mixed integer programming , prediction of optimal combinations of low-cost variable scoring rules to derive more powerful ones , and learning to cut in cutting plane methods within MILP solvers. The latter include prediction of the most effective nodes at which to apply primal heuristics , \n and specification of primal heuristics such as the optimal choice of variable partitions in large neighborhood search . 
The reader is referred to the excellent surveys for a thorough account of techniques developed within ML-augmented combinatorial optimization.\nTechniques in this area have also used ML to improve decisions in continuous CO problems; they include learning restart strategies , learning rules to ignore some optimization variables by leveraging the expected sparsity of the solutions, consequently leading to faster solvers , and learning active constraints to reduce the size of the problem to be fed into a CO solver\n. \n\\iffalse\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{COaugML_flowchart.pdf}\n \\caption{Machine Learning Trained to Augment Subroutines in an Optimization Solver\n \\label{fig:COaugML}}\n\\end{figure}\n\\fi\n\\iffalse\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{endtoend_flowchart.pdf}\n \\caption{End-to-End Learning for the Prediction of CO Solutions.\n \\label{fig:net_arch}}\n\\end{figure}\n\\fi", "id": "37ceb366-2e0f-4885-b587-327310d7db04", "level": "section", "origin_cites_number": 13, "parent_id": "76c7c5c6-a4be-4647-adfa-d8a4a864aace", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "ML-augmented CO" ] ], "subsections": [], "title": "ML-augmented CO" }, { "cite_extract_rate": 0.8, "cites": [ 167, 7196, 174, 175 ], "content": "\\iffalse \nBy contrast with approaches that use machine learning to augment search-based CO solvers and configure their subroutines, a diverse body of work studies the application of ML to learn and predict solutions \\emph{directly} without the use of an external solver. \nOn one hand, several frameworks have been proposed for the incorporation of constraints into end-to-end learning, for the prediction of feasible or near-feasible solutions to both continuous and discrete constrained optimization problems. 
Another category of work frames the task of predicting solutions to CO problems as a sequence-to-sequence model, with the goal of producing outputs as near-optimal permutations of variable-sized inputs. \n\\fi \nA diverse body of work within the end-to-end CO learning literature has focused on developing ML architectures to predict fast, approximate solutions to predefined CO problems \\emph{end-to-end} without the use of CO solvers at the time of inference, by observing a set of solved instances or execution traces. These approaches contrast with those that use ML to augment search-based CO solvers and configure their subroutines to direct the solver to find solutions efficiently. This survey categorizes the literature on \\emph{predicting CO solutions} into {\\bf (1)} methods that incorporate constraints into end-to-end learning, for the prediction of feasible or near-feasible solutions to both continuous and discrete constrained optimization problems, and {\\bf (2)} methods that learn combinatorial solutions on graphs, with the goal of producing outputs as {combinatorial structures from variable-sized inputs}.\nThese two categories, referred to as \\emph{learning with constraints} and \\emph{learning CO solutions}, respectively, are reviewed next.\n\\iffalse \nTechniques range from the integration of constrained optimization techniques into the training cycle of a learning model, named here \\rev{\\emph{learning with constraints}} , to the training of deep learning architectures capable of \\rev{\\emph{predicting combinatorial solutions}}, largely via the use of graph neural networks and attention mechanisms and without the explicit introduction of constraint representations . 
\n\\fi", "id": "8df9af23-b0c9-4b28-a15b-e5cf13a6159d", "level": "section", "origin_cites_number": 5, "parent_id": "76c7c5c6-a4be-4647-adfa-d8a4a864aace", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "E2E-COL: Predicting CO Solutions" ] ], "subsections": [ "ec4dbca1-ff4d-425e-b736-2d6599b5f2c3", "8a5e64b8-7c31-42ac-8506-68227059b0ff" ], "title": "E2E-COL: Predicting CO Solutions" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 178, 176, 7196, 166, 177, 174 ], "content": "The methods below consider datasets $\\bm{\\chi} = \\{\\bm{x}_i, \\bm{y}_i\\}_{i=1}^N$ whose inputs $\\bm{x}_i$ describe some problem instance \nspecification, such as matrix $\\bm{A}$ and vector $\\bm{b}$ describing linear constraints in MILPs, \nand the labels $\\bm{y}_i$ describe complete solutions to problem ${\\cal O}$ with input $\\bm{x}_i$. \nNotably, each sample may specify a different problem instance (with \ndifferent objective function coefficients and constraints).\nAn early approach to the use of ML for predicting CO problem solutions was presented by , which used Hopfield Networks () with modified energy functions to emulate the objective of a traveling salesman problem (TSP), and applied Lagrange multipliers to enforce feasibility to the problem's constraints. It was shown in however, that this approach suffers from several weakness, notably sensitivity to parameter initialization and hyperparameters. As noted in , similar approaches have largely fallen out of favor with the introduction of practically superior methods. 
\nFrameworks that exploit Lagrangian duality to guide the prediction of approximate solutions to satisfy the problem's constraints have found success in the context of continuous NLPs including energy optimization as well as constrained prediction on tasks such as transprecision computing and fair classification .\nOther end-to-end learning approaches have demonstrated success on the prediction of solutions to constrained problems by injecting information about constraints from targeted feasible solutions. Recently, presented an iterative process of using an external solver for discrete or continuous optimization to adjust targeted solutions to more closely match model predictions while maintaining feasibility, reducing the degree of constraint violation in the model predictions in subsequent iterations.", "id": "ec4dbca1-ff4d-425e-b736-2d6599b5f2c3", "level": "subsection", "origin_cites_number": 9, "parent_id": "8df9af23-b0c9-4b28-a15b-e5cf13a6159d", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "E2E-COL: Predicting CO Solutions" ], [ "subsection", "Learning with Constraints" ] ], "subsections": [], "title": "Learning with Constraints" }, { "cite_extract_rate": 0, "cites": [], "content": "By contrast with approaches learning solutions to unstructured CO problems, a variety of methods learn to solve CO problems cast on graphs. The development of deep learning architectures, such as sequence models and attention mechanisms, as well as GNNs, has provided a natural toolset for these tasks. \nThe survey categorizes these modern approaches broadly based on whether they rely on supervised learning or reinforcement learning. It observes that: {\\bf (1)} Supervised learning concerns training models that take as input a representation of a problem instance and output a solution. Target solutions must be provided for this training task; this is often problematic as it entails solving, typically, NP-hard problems. 
{\\bf (2)} Optimization problems have a native objective function, which in principle can be used in lieu of a loss function in supervised learning (possibly obviating the need for training labels) and can serve as a natural reward function in reinforcement learning.", "id": "8a5e64b8-7c31-42ac-8506-68227059b0ff", "level": "subsection", "origin_cites_number": 0, "parent_id": "8df9af23-b0c9-4b28-a15b-e5cf13a6159d", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "E2E-COL: Predicting CO Solutions" ], [ "subsection", "Learning Solutions on Graphs" ] ], "subsections": [ "4fdf0275-0165-4757-8c41-5dfb136135f6", "104d240a-8076-48fb-8447-9b4789edeb96" ], "title": "Learning Solutions on Graphs" }, { "cite_extract_rate": 1, "cites": [ 179, 167, 175 ], "content": " introduced the \\emph{pointer network}, in which a sequence-to-sequence model uses an encoder-decoder architecture paired with an attention mechanism to produce permutations over inputs of variable size. Noting that certain CO problems provide a compelling use case for their architecture, the authors test it for solving the Traveling Salesman (TSP), Delaunay Triangulation, and Convex Hull problem variants. In each, the solution to a problem instance can be expressed as a single permutation. They develop a pointer network model to predict approximately optimal solutions by learning from previously solved instances in a supervised manner, and demonstrate some ability to generalize over variable-sized problem instances. \nFor example, in the $2D$ Euclidean TSP, the pointer network's inputs are the $2D$ coordinates of each city that must be visited. A predicted permutation represents a tour over all cities, and each target label is a permutation representing the pre-computed minimum-length tour over its corresponding input coordinates. Any city is directly reachable from any other by a straight line whose length is equal to the Euclidean distance. 
In general, the approach introduced in this work applies only to problems whose solutions take the form of a single permutation and where all permutations are feasible (e.g., the problem does not feature constraints on which tours are allowed in the TSP).\nDespite the subsequent trend toward RL-oriented frameworks, as discussed in the next section, studied a purely supervised learning method for the general Quadratic Assignment Problem (QAP). The proposed solution was based on the use of simple graph neural networks trained on representations of individual problem instances and their targeted solutions. The TSP is chosen as a test problem, along with two graph matching problems (all instances of the QAP). Inferences from the model take the form of approximate permutation matrices, which are converted into feasible tours by a beam search. The authors note promising accuracy results on small instances, but the ability for trained models to generalize their performance to larger-sized instances was not shown.\n built on this concept by applying a Graph Convolutional Network model to the $2D$ Euclidean TSP. Aside from the enhanced deep learning architecture, most aspects of the approach remain similar. However, accurate results are reported on problem instances much larger than in . 
The authors observe that compared to contemporary reinforcement learning approaches, their training method is more sample-efficient, but results in models that do not generalize nearly as well to problem instances larger than those used in training.", "id": "4fdf0275-0165-4757-8c41-5dfb136135f6", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "8a5e64b8-7c31-42ac-8506-68227059b0ff", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "E2E-COL: Predicting CO Solutions" ], [ "subsection", "Learning Solutions on Graphs" ], [ "subsubsection", "Supervised Learning" ] ], "subsections": [], "title": "Supervised Learning" }, { "cite_extract_rate": 0.7894736842105261, "cites": [ 181, 188, 185, 7197, 182, 184, 186, 189, 175, 178, 180, 183, 167, 190, 187 ], "content": "The pointer network architecture of was adopted by , who proposed to train an approximate TSP solver with reinforcement learning rather than supervised learning. The transition to RL was motivated partly by the difficulties associated with obtaining target solutions that are optimal, and the existence of nonunique optimal solutions to TSP instances. 
Rather than generating and targeting precomputed solutions, the authors present an \emph{actor-critic} RL framework using expected tour length $L(\pi | g)$ as the reward signal, where $g$ and $\pi$ represent a graph (problem instance) and a permutation (a tour over the graph):\n\begin{equation}\n\label{eq:belloreward}\n J(\theta | g) = \mathbb{E}_{\pi \sim p_{\theta}(\pi|g)}L(\pi | g).\n\end{equation}\nA policy, represented by the underlying pointer network with parameters $\theta$, is optimized using the well-known REINFORCE rule for computing policy gradients:\n\begin{equation}\n\label{eq:bellograd}\n \nabla_{\theta} J(\theta | g) = \mathbb{E}_{\pi \sim p_{\theta}(\pi|g)} \n\left[ \left( L(\pi | g) - b(g) \right) \nabla_{\theta} \log p_{\theta} \; (\pi | g) \right] \n\end{equation}\nThe total training objective is then the expectation of $J(\theta | g)$ over a distribution of graphs.\nThe policy gradient calculation requires a baseline function $b$ which estimates the expected reward. In this work, an auxiliary \emph{critic} network with its own set of parameters is trained to predict the expected tour length for any graph in supervised fashion, using empirically observed tours from the most recent policy as training labels. At the time of inference, two methods are available for producing tours from a trained model: {\bf (1)} A set of tours can be sampled from the stochastic policy by varying the softmax temperature parameter, from which the shortest is selected, or {\bf (2)} an active search method which modifies the stochastic policy based on solutions sampled during inference.\n departed from the RNN encoder scheme used in prior works, arguing that when a problem's defining data do not adhere to a natural order (e.g. city coordinates in the TSP), it need not be modeled sequentially.
This provides motivation to leave out the encoder RNN and consider combinatorial problems whose state changes over time, replacing the role of encoder hidden states with representations of the problem in each time frame. The authors consider a capacitated vehicle routing problem (VRP/CVRP) in which demands are associated with points in space and change over time; a vehicle with limited capacity must satisfy all the demands by delivering supply from a central depot while minimizing total tour length. This setting also diverges from those previously discussed in that solutions no longer take the form of permutations, since an optimal tour may now include several trips to the supply depot or a demand sink. \nThe static (location coordinates) and dynamic (demand value) data defining the problem are concatenated into a single embedding vector $\bm{x}_t$ for each time frame $t$. In each decoder step, the attention layer computes a context vector $c_t$ based only on $\bm{x}_t$ and the previous decoder state $d_t$. A final prediction at time $t$ indicates the next visited location based on $\bm{x}_t$ and $c_t$, and the embeddings $\bm{x}_t$ can then be updated based on the resulting state-changes. The training consists of an actor-critic method similar to that of .\nSeveral improvements on these approaches exist. Notably, \n developed a general reinforcement learning framework based on a Graph Attention Network architecture and trained with the REINFORCE policy gradients, and presented significant improvements in accuracy on 2D Euclidean TSP over , , , and . \nDifferently from previous approaches, \nfocused on CO instances with hard constraints and considered TSP variants in which certain tours are infeasible. \nA multi-level RL framework is proposed in which each layer of a hierarchy learns a different policy, from which actions can be sampled.
Lower layers learn policies that result in feasible solutions by using reward functions that bias feasibility, and provide latent variables to higher layers which learn to optimize a given objective while sampling actions from lower layers. Layer-wise policy optimization uses the REINFORCE algorithm, training each layer in turn before fixing its weights and sampling actions to train the next layer. Each policy is represented by a graph neural network to produce context embeddings that capture inter-node relationships from node-wise inputs, and an attention-based decoder to predict distributions over candidate actions as in prior works. When the direct output $\bm{u}_i$ from layer $i$ is predicted at a particular decoding step, it is combined with the output of the lower layer $i-1$ to ensure mutual compatibility in the policy distribution\n$$ \bm{p}_i = \textrm{softmax}( \bm{u}_i + \alpha \bm{u}_{i-1} ) $$ where $\alpha$ is a trainable parameter. \nFinally, adopted a different RL approach based on a greedy heuristic framework which determines approximate solutions by the sequential selection of graph nodes. The selection policy is learned using a combination of Q-learning and fitted Q-iteration, by optimizing the original objective directly. The graph embedding network \texttt{structure2vec} is used to represent the policy of the learned greedy algorithm. It is used to produce feature embeddings for each node of the graph, which are updated between actions and capture relationships between neighboring nodes, given the current state of a partial solution. It is noted that this framework excels in combinatorial problems that have a true graphical structure, as opposed to most previously studied applications (primarily the TSP and VRP variants), in which the underlying 'graph' is typically fully connected.
The Minimum Vertex Cover, Max Cut, and TSP are used as test problems, and a strong ability to generalize over various problem sizes is advertised in comparison to prior approaches. \n\iffalse\n\begin{table*}[t]\n\centering\n\resizebox{0.8\linewidth}{!}{\n\begin{tabular}{rl lll lll}\n\toprule\nRef & Constrained Model & Loss Function & Model Approximation & Gradient Approximation\\\\\n\midrule\n & QP & Any & None & Exact \\\\ \n & LP & Any & Quad. Reg. & Exact \\\\ \n & LP & Regret & None & Subgradient \\\\ \n & MILP & Any & Cuts $+$ Quad. Reg. & Exact \\\\\n & MILP & Regret & None & Subgradient \\\\ \n & LP & Any & Log-barrier & Subgradient \\\\\n & Blackbox & Any & None & Subgradient \\\\ \n & LP & Any & None & Stochastic \\\\ \n & MAXSAT & Any & SDP Approximation & Exact\\\\\n\bottomrule\n\end{tabular}\n}\n \caption{Comparison of main predict-and-optimize learning frameworks.}\n \label{end-to-end-compare}\n\end{table*}\n\fi\n\begin{figure}[!t]\n \centering\n \includegraphics[width=1.0\linewidth]{fig2.pdf}\n \caption{Predict-and-optimize framework; gradients of a solver output (solution) must be computed with respect to its input (problem parameters) in order to maximize empirical model performance.\n \label{fig:pred_opt_flowchart}}\n\end{figure}\n\iffalse \nOn the other hand, the integration of combinatorial solvers into deep learning architectures has shown promise in generalizing the applicability of ML to new settings, especially settings in which discrete logical decisions must be made based on empirical data. This area is distinguished from the prediction of CO solutions from solved examples, in that CO problems which function as neural network layers are parametrized by previous layers and solved by fully featured external solvers, for the purpose of structured logical inference. The primary difficulty in these contexts is the formation of useful gradients to backpropagate through layers which depend on discrete solvers.
\nVarious approaches have been proposed to this end, ranging from techniques for developing useful subgradients over the output of a CO solver (), to the formation of continuous approximations to CO solvers which admit exact gradients (, , ) This nascent subfield shows substantial promise for opportunities to extend the applicability of machine learning to new areas and bridge the gap between discrete logic and empirical, data-driven inference models. \nMultiple lines of work have resulted in distinct approaches to this form of end-to-end model training. As such, several nomenclatures have been suggested for the modeling paradigm, including \"task-based end-to-end learning\" , \"predict-and-optimize\" , and \"decision-focused learning\" . In this survey, we will favor the term \"Predict-and-Optimize\".\n\\emph{This survey outlines the current state of this exciting subfield}.\n\\fi", "id": "104d240a-8076-48fb-8447-9b4789edeb96", "level": "subsubsection", "origin_cites_number": 19, "parent_id": "8a5e64b8-7c31-42ac-8506-68227059b0ff", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "E2E-COL: Predicting CO Solutions" ], [ "subsection", "Learning Solutions on Graphs" ], [ "subsubsection", "Reinforcement Learning" ] ], "subsections": [], "title": "Reinforcement Learning" }, { "cite_extract_rate": 1, "cites": [ 184 ], "content": "A burgeoning topic in the intersection of ML and CO is the fusion of prediction (ML) and decision (CO) models, in which decision models are represented by partially defined optimization problems, whose specification is completed by parameters that are predicted from data. The resulting composite models employ constrained optimization as a neural network layer and are trained \\emph{end-to-end}, based on the optimality of their decisions. 
This setting is altogether different in motivation from those previously discussed, in which the goal was to solve predefined CO instances with increased efficiency. The goal here is the synthesis of predictive and prescriptive techniques to create ML systems that learn to make decisions based on empirical data.\nThe following constrained optimization problem is posed, in which the objective function $f_{\bm{y}}$ and feasible region ${\cal C}_{\bm{y}}$ depend on a parameter vector $\bm{y}$:\n\begin{equation}\n\label{eq:opt_theta}\n\mathcal{O}(\bm{y}) = \argmin_{\bm{z}} f_{\bm{y}}(\bm{z}) \;\; \n \text{subject to} \;\;\n \bm{z} \in \mathcal{C}_{\bm{y}}.\n\end{equation}\nThe goal is to use supervised learning to predict the unspecified parameters $\hat{\bm{y}}$ from empirical data. The learning task is performed so that the optimal solution ${\cal O}(\hat{\bm{y}})$ best matches a targeted optimal solution ${\cal O}(\bm{y})$, relative to some appropriately chosen loss function. The empirical data in this setting are defined abstractly as belonging to a dataset $\chi$, and can represent any empirical observations correlating with targeted solutions to \eqref{eq:opt_theta} for some $\bm{y}$. See Figure \ref{fig:pred_opt_flowchart} for an illustration.\nThis framework aims to improve on simpler two-stage approaches, in which a conventional loss function (e.g. MSE) is used to target labeled parameter vectors $\bm{y}$ that are provided in advance, before solving the associated CO problem to obtain a decision. Such approaches are suboptimal in the sense that predictions of $\bm{y}$ do not take into account the accuracy of the resulting solution ${\cal O}(\bm{y})$ during training.\nWe note that there are two ways to view the predictions that result from these integrated models.
If $\\hat{\\bm{y}}$ is viewed as the prediction, then the calculation of ${\\cal O}(\\hat{\\bm{y}})$ is absorbed into the loss function $\\mathcal{L}(\\hat{\\bm{y}},\\bm{y})$, which targets the provided parameter vectors. Otherwise, the loss function $\\mathcal{L}( {\\cal O}(\\hat{\\bm{y}}) , {\\cal O}(\\bm{y}))$ is considered separately from the decision model and aims to match computed optimal solutions to targeted ones. \nOne advantage sought in either case is the opportunity to minimize during training the ultimate error in the computed optimal objective values $f_{\\hat{\\bm{y}}}({\\cal O}(\\hat{\\bm{y}}))$, relative to those of the target data. This notion of training loss is known as \\emph{regret:\n$$\n\\textsl{regret}(\\hat{\\bm{y}},\\bm{y}) = \n f_{\\hat{\\bm{y}}}({\\cal O}(\\hat{\\bm{y}}))-f_{\\bm{y}}({\\cal O}(\\bm{y})).\n$$\nOtherwise the optimal solution ${\\cal O}(\\bm{y})$ is targeted and one can use\n$\\textsl{regret}({\\cal O}(\\hat{\\bm{y}}),{\\cal O}(\\bm{y}))$, regardless of whether the corresponding $\\bm{y}$ is available.} \nDepending on the techniques used, it may be possible to minimize the regret without access to ground-truth solutions, as in , since the targeted solutions ${\\cal O}(\\bm{y})$ contribute only a constant term to the overall loss.\nIt is worth mentioning that because the optimization problem in \\eqref{eq:opt_theta} is viewed as a function, the existence of nonunique optimal solutions is typically not considered. The implication then is that ${\\cal O}(\\bm{y})$ is directly determined by $\\bm{y}$.\nTraining these end-to-end models involves the introduction of external CO solvers into the training loop of a ML model, often a DNN. Note that combinatorial problems with discrete state spaces do not offer useful gradients; viewed as a function, the \\emph{argmin} of a discrete problem is piecewise constant. 
\n\\emph{The challenge of forming useful approximations to \n$\\frac{\\partial \\mathcal{L}}{\\partial y } $ is\n central in this context and must be addressed in order to perform backpropagation}. \n It may be approximated directly, but a more prevalent strategy is to model $\\frac{\\partial {\\cal O}(y)}{\\partial y } $ and $ \\frac{\\partial \\mathcal{L}}{\\partial {\\cal O}}$ separately, in which case the challenge is to compute the former term by \\emph{differentiation through argmin}. Figure \\ref{fig:pred_opt_flowchart} shows the role of this gradient calculation in context.", "id": "18ed630f-d9cd-45a7-af17-fcea1c0396d5", "level": "section", "origin_cites_number": 1, "parent_id": "76c7c5c6-a4be-4647-adfa-d8a4a864aace", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "E2E-COL: Predict-and-Optimize" ] ], "subsections": [ "9e5522bf-ee70-4b18-a897-c4d23d321cb0", "0f2d353e-1220-4538-a10a-8caa0f7d6834", "fe5ff831-da77-4c1e-be94-54014b589ead" ], "title": "E2E-COL: Predict-and-Optimize" }, { "cite_extract_rate": 0.5, "cites": [ 7197 ], "content": "One catalyst for the development of this topic was the introduction of \\emph{differentiable optimization layers}, beginning with which introduced a GPU-ready QP solver that offers exact gradients for backpropagation by differentiating the KKT optimality conditions of a quadratic program at the time of solving, and utilizing information from the forward pass to solve a linear system for incoming gradients, once the outgoing gradients are known. 
Subsequently, proposed a \\emph{predict-and-optimize} model in which QPs with stochastic constraints were integrated in-the-loop to provide accurate solutions to inventory and power generator scheduling problems specified by empirical data.", "id": "9e5522bf-ee70-4b18-a897-c4d23d321cb0", "level": "subsection", "origin_cites_number": 2, "parent_id": "18ed630f-d9cd-45a7-af17-fcea1c0396d5", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "E2E-COL: Predict-and-Optimize" ], [ "subsection", "Quadratic Programming" ] ], "subsections": [], "title": "Quadratic Programming" }, { "cite_extract_rate": 0.8571428571428571, "cites": [ 182, 183, 7197, 184, 186, 189 ], "content": "Concurrent with , an alternative methodology for end-to-end learning with decision models, called \\emph{smart predict-and-optimize} (SPO), was introduced by , which focused on prediction with optimization of constrained problems with linear objectives, in which the cost vector is predicted from data and the feasible region $\\mathcal{C}$ is invariant to the parameter $\\bm{y}$:\n\\begin{equation}\n\\label{eq:opt_spo}\n\\mathcal{O}(\\bm{y}) = \\argmin_{\\bm{z}} \\bm{y}^T\\bm{z} \\;\\; \n \\text{subject to} \\;\\;\n \\bm{z} \\in \\mathcal{C}.\n\\end{equation}\nThe target data in this work are the true cost vectors $\\bm{y}$, and an inexact subgradient calculation is used for the backpropagation of regret loss \n$\\mathcal{L}(\\hat{\\bm{y}}, \\bm{y}) = \\hat{y}^T ( {\\cal O}(\\hat{\\bm{y}}) - {\\cal O}(\\bm{y}) )$ on the decision task, by first defining a convex surrogate upper bound on regret called the \\emph{SPO+loss}, for which it is shown that ${\\cal O}(\\bm{y})-{\\cal O}(2\\hat{\\bm{y}} - \\bm{y})$ is a useful subgradient. \nSince this work is limited to the development of surrogate loss functions on regret from the optimization task, it does not apply to learning tasks in which the full solution to an optimization problem is targeted. 
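The SPO+ subgradient is cheap to form on a toy instance (our illustration, with an arbitrary hand-picked finite feasible set): two calls to the linear-objective oracle give ${\cal O}(\bm{y})$ and ${\cal O}(2\hat{\bm{y}} - \bm{y})$, and their difference is the update direction; it vanishes when the prediction equals the true cost vector.

```python
import numpy as np

# A toy finite feasible set (rows are candidate binary solutions; ours, purely
# illustrative -- e.g. indicator vectors of paths in a small graph).
feasible = np.array([[1, 0, 1, 0],
                     [0, 1, 0, 1],
                     [1, 1, 0, 0],
                     [0, 0, 1, 1]], dtype=float)

def solve(y):
    """O(y) = argmin_{z in feasible} y^T z (the linear-objective oracle)."""
    return feasible[np.argmin(feasible @ y)]

def spo_plus_subgradient(y_hat, y_true):
    """Subgradient of the SPO+ surrogate loss w.r.t. the predicted costs:
    O(y_true) - O(2*y_hat - y_true)."""
    return solve(y_true) - solve(2 * y_hat - y_true)

y_true = np.array([1.0, 2.0, 3.0, 0.5])
y_hat = np.array([2.5, 0.4, 1.0, 3.0])       # a (bad) cost prediction
print(spo_plus_subgradient(y_hat, y_true))   # nonzero: prediction causes regret
print(spo_plus_subgradient(y_true, y_true))  # all zeros: perfect prediction
```

Note that no gradient of the solver itself is needed; the oracle is only ever called, never differentiated, which is what lets the approach wrap arbitrary linear-objective solvers.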
The paper includes a discussion justifying the method's use on problems with discrete constraints in $\\mathcal{C}$, as in combinatorial optimization, but experimental results are not provided on that topic. It is, however, demonstrated that the approach succeeds in a case where $\\mathcal{C}$ is convex but nonlinear. \n introduced an alternative framework to predict-and-optimize linear programming problems, based on exact differentiation of a smoothed surrogate model. While LPs are special cases of QPs, the gradient calculation of does not directly apply due to singularity of the KKT conditions when the objective function is purely linear. This is addressed by introducing a small quadratic regularization term to the LP objective $f_{\\bm{y}}(\\bm{z}) = \\bm{y}^T \\bm{z}$ so that the problem in \\eqref{eq:opt_theta} becomes\n\\begin{equation}\n\\label{eq:opt_theta_LP}\n\\mathcal{O}(\\bm{y}) = \\argmin_{\\bm{z}} \\bm{y}^T \\bm{z} + \\epsilon\\|\\bm{z}\\|^2 \\;\\;\n \\text{subject to} \\;\\;\n \\bm{A}\\bm{z} \\leq \\bm{b}.\n\\end{equation}\nThe resulting problems approximate the desired LP, but have unique solutions that vary smoothly as a function of their parameters, allowing for accurate backpropagation of the result. \nThe integrated model is trained to minimize the expected optimal objective value across all training samples, which is equivalent to minimizing the regret loss but without the need for a target dataset.\n This work demonstrated success on problems such as the knapsack (using LP relaxation) and bipartite matching problems where a cost vector is predicted from empirical data (e.g., historical cost data for knapsack items), and is shown to outperform two-stage models which lack integration of the LP problem. 
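The effect of the quadratic regularizer is explicit on a one-dimensional toy problem (a closed-form illustration of ours, not the cited experiments): without regularization, the minimizer of $yz$ over $0 \le z \le 1$ jumps from 1 to 0 as $y$ crosses zero, while adding $\epsilon z^2$ gives $z^*(y) = \mathrm{clip}(-y/2\epsilon,\, 0,\, 1)$, a continuous, almost-everywhere differentiable function of $y$.

```python
import numpy as np

eps = 0.1

def lp_solution(y):
    """argmin_{0<=z<=1} y*z : piecewise constant, with a jump at y = 0."""
    return 1.0 if y < 0 else 0.0

def regularized_solution(y):
    """argmin_{0<=z<=1} y*z + eps*z**2 : the unconstrained minimizer is
    -y/(2*eps), clipped to the box -- continuous in y."""
    return float(np.clip(-y / (2 * eps), 0.0, 1.0))

ys = np.linspace(-0.5, 0.5, 11)
print([lp_solution(y) for y in ys])                      # jumps 1 -> 0 at y = 0
print([round(regularized_solution(y), 2) for y in ys])   # ramps smoothly 1 -> 0
```

As `eps` shrinks, the ramp steepens back toward the original jump, which is the usual trade-off between the fidelity of the surrogate and the usefulness of its gradients.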
We note that although the differentiable QP solving framework of is capable of handling differentiation with respect to any objective or constraint coefficient, this work only reports results on tasks in which the \emph{cost} vector is parameterized within the learning architecture, and constraints are held constant across each sample. \n\emph{This limitation is common to all of the works described below, as well.} \nNext, introduced an altogether different approach to obtaining approximate gradients for the {\em argmin} of a linear program. An interior point method is used to compute the solution of a homogeneous self-dual embedding with early stopping, and the method's log-barrier term is recovered and used to solve for gradients in the backward pass. Equivalently, this can be viewed as the introduction of a log-barrier regularization term, by analogy to the QP-based approach of :\n\begin{equation*}\n\label{eq:opt_theta_QP}\n\mathcal{O}(\bm{y}) = \argmin_{\bm{z}} \bm{y}^T\bm{z} - \n \lambda\left(\sum_i \ln (z_i) \right) \;\;\n \text{subject to} \;\;\n \bm{A}\bm{z} \leq \bm{b}.\n\end{equation*}\nFurther, the method's performance on end-to-end learning tasks is evaluated against the QP approach of on LP-based predict-and-optimize tasks, citing stronger accuracy results on energy scheduling and knapsack problems with costs predicted from data. \n detailed an approach based on stochastic perturbation to differentiate the output of linear programs with respect to their cost vectors. The output space of the LP problem is smoothed by adding low-amplitude random noise to the cost vector and computing the expectation of the resulting solution in each forward pass. This can be done in Monte Carlo fashion and in parallel across the noise samples.
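The smoothing effect of perturbation can be checked numerically (our sketch; we use Gumbel noise, for which the perturbed expectation has a known closed form via the Gumbel-max trick, rather than the noise distribution of the original method): over the one-hot feasible set, the expectation of the perturbed argmax equals a softmax of the rescaled scores, and a Monte Carlo average converges to it.

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.array([1.0, 0.2, -0.5])   # score/cost vector predicted upstream
sigma = 0.5                      # perturbation amplitude

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Perturbed argmax over the one-hot feasible set {e_1, e_2, e_3}:
# Monte Carlo average of one-hot argmax vectors under y + sigma * Gumbel noise.
samples = 200_000
noise = rng.gumbel(size=(samples, y.size))
idx = np.argmax(y + sigma * noise, axis=1)
mc_estimate = np.bincount(idx, minlength=y.size) / samples

# Gumbel-max trick: the expectation is exactly softmax(y / sigma).
print(mc_estimate, softmax(y / sigma))  # the Monte Carlo average matches
```

The smoothed map is differentiable in `y`, and the noise samples are independent, so the averaging parallelizes trivially, as the text notes.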
The gradient calculation views the solver as a black box in this approach, and does not require the explicit solving of LP for operations that can be mathematically formulated as LP, but are simple to perform (e.g., sorting and ranking). Results include a replication of the shortest path experiments presented in , in which a model integrated with convolutional neural networks is used to approximate the shortest path through stages in a computer game, solely from images.", "id": "0f2d353e-1220-4538-a10a-8caa0f7d6834", "level": "subsection", "origin_cites_number": 7, "parent_id": "18ed630f-d9cd-45a7-af17-fcea1c0396d5", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "E2E-COL: Predict-and-Optimize" ], [ "subsection", "Linear Programming" ] ], "subsections": [], "title": "Linear Programming" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 181, 182, 185, 184, 186, 189 ], "content": " extended the approach of to integrate MILP within the end-to-end training loop, with the aim of utilizing more expressive NP-Hard combinatorial problems with parameters predicted from data. This is done by reducing the MILP with integer constraints to a LP by a method of cutting planes. In the ideal case, the LP that results from the addition of cutting planes has the same optimal solution as its mixed-integer parent. Exact gradients can then be computed for its regularized QP approximation as in . Although the LP approximation to MILP improves with solving time, practical concerns arise when the MILP problem cannot be solved to completion. Each instance of the NP-Hard problem must be solved in each forward pass of the training loop, which poses clear obstructions in terms of runtime. 
\nA disadvantage of the approach is that cutting-plane methods are generally considered to be inferior in efficiency to staple methods like branch and bound.\nImproved results were obtained on portfolio optimization and diverse bipartite matching problems, when compared to LP-relaxation models following the approach of .\n investigated the application of the same SPO approach to NP-Hard combinatorial problems. Primary among the challenges faced in this context is, as in , the computational cost of solving hard problems within every iteration of the training. The authors found that continuous relaxations of common MILP problems (e.g. knapsack) often offer subgradients of comparable quality to the full mixed-integer problem with respect to the SPO loss, so that training end-to-end systems with hard CO problems can be simplified in such cases by replacing the full problem with an efficiently solvable relaxation, an approach termed \emph{SPO-relax}. The authors put continuous relaxations into the broader category of ``weaker oracles'' for the CO solver, which also includes approximation algorithms (e.g. greedy approximation for knapsack). \nThe main results showed that SPO-relax achieves accuracy competitive with the full SPO approach but with shorter training times on a handful of discrete problems. The SPO-relax approach was also compared against the formulation of on equivalent relaxations, but no clear winner was determined. \n introduced a new idea for approximating gradients over discrete optimization problems for end-to-end training, which relies on viewing a discrete optimization problem as a function of its defining parameters (in this context coming from previous layers), whose range is piecewise constant. The only requirement is that the objective be linear.
The gradient calculation combines the outputs of two calls to an optimization solver; one in the forward pass on initial parameters $\\bm{y}$, and one in the backward pass on perturbed parameters $\\bar{\\bm{y}}$. The results are used to construct a piecewise \\emph{linear} function which approximates the original solver's output space, but has readily available gradients. Because the gradient calculation is agnostic to the implementation of the solver, it is termed \"black-box differentiation\". As such, input parameters to the solver do not correspond explicitly to coefficients in the underlying optimization problem. Results on end-to-end learning for the shortest path problem, TSP and min-cost perfect matching are shown. In each case, the discrete problem's specification is implicitly defined in terms of images, which are used to predict parameters of the appropriate discrete problem through deep networks. The optimal solutions coming from blackbox solvers are expressed as binary indicator matrices in each case and matched to targeted optimal solutions using a Hamming distance loss function.\nFinally, presented a differentiable solver for the MAXSAT problem, another problem form capable of representing NP-Hard combinatorial problems. Approximate gradients are formed by first approximating the MAXSAT problem as a related semidefinite program (SPD), then differentiating its solution analytically during a specialized coordinate descent method which solves the SDP. The successful integration of MAXSAT into deep learning is demonstrated with a model trained to solve sudoku puzzles represented only by handwritten images. \n\\iffalse\n\\rem{\nThe frameworks discussed above for end-to-end learning with constrained optimization models can be roughly categorized in terms of their approach for obtaining gradients to the constrained \\rev{\\emph{argmax}} problem and the classes of constrained models that they accomodate. 
Note that the majority involve the formation of some approximation, due to the fact that discrete functions inherently lack well-defined gradients. The exception is , which is concerned with quadratic programs not characterized by a discrete state space. The remaining methods either depend on exact differentation of a smoothed surrogate model, or calculation of approximate gradients for the exact model. In the former cases, approaches to smoothing involve quadratic regularization (, ), log-barrier regularization (), stochastic perturbation(), or SDP approximation in the case of . Finally, the SPO approaches merit a further distinction on account of their reliance on assuming a particular loss function (the \\emph{regret}, or suboptimality of the solver output's objective value), whereas the remaining approaches allow a choice of loss function by differentiating the solver's argmin function directly. \n}\n\\fi", "id": "fe5ff831-da77-4c1e-be94-54014b589ead", "level": "subsection", "origin_cites_number": 9, "parent_id": "18ed630f-d9cd-45a7-af17-fcea1c0396d5", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "E2E-COL: Predict-and-Optimize" ], [ "subsection", "Combinatorial Optimization" ] ], "subsections": [], "title": "Combinatorial Optimization" }, { "cite_extract_rate": 0.75, "cites": [ 183, 185, 184 ], "content": "The current state of the art in integrating combinatorial optimization with end-to-end machine learning \nshows promise on challenging tasks for which there was previously no viable approach. Further, it has been demonstrated that a variety of non-overlapping approaches can be effective. 
Despite these encouraging results, a number of challenges remain that must be addressed to allow an integration that lives up to its full potential.\n{\bf (1)} Despite the variety of approaches, the success of the predict-and-optimize paradigm has been demonstrated on a relatively limited set of optimization problems and a majority of reported experimental results focus on linear programming formulations. \nChallenges posed by the parametrization of constraints stand in the way of broader applications, but have not yet been addressed. \n{\bf (2)} Issues associated with the runtime of combinatorial solvers in-the-loop still make some potential applications impractical. \n{\bf (3)} Additionally, despite being possible in theory, the role of the CO model in-the-loop has not been generalized successfully beyond being applied as the final layer of a deep model. The use of additional layers beyond the solver, or even compositions of CO solving layers, could potentially lead to new applications if the practical challenges were to be overcome. \n{\bf (4)} In predicting solutions to CO problems, the current methods cannot reliably guarantee the problem constraints to be satisfied. This critical shortcoming may be addressed by integrating ML approaches with methods from the robust optimization literature or by developing ad-hoc layers to project the predictions onto the feasible space. {\bf (5)} While it has been observed in limited contexts that predict-and-optimize frameworks based on optimization layers are competitive only when the underlying constrained problem is convex, this area still lacks theoretical results providing guarantees on their viability or performance.\nFinally, {\bf (6)} the need for uniform benchmark experiments and systematic comparisons between the predict-and-optimize frameworks is apparent.
 provided a study comparing the approaches of and , along with problem-specific approaches, on knapsack problems but did not conclude strongly as to which method should be preferred. Further, reports that SPO performs comparably on the knapsack problem whether the LP relaxation or the full problem is used, but does not show that this effect generalizes to other NP-Hard problems. This signals a need for comprehensive studies that test performance on a variety of hard CO problems.\nAlthough the approaches surveyed are still in an early stage of their development, and have been adopted mainly for academic purposes, we strongly believe that the integration between combinatorial optimization and machine learning is a promising direction for the development of new, transformative tools in combinatorial optimization and learning. \n\section*{Acknowledgments}\nThis research is partially supported by NSF grant 2007164. Its views and conclusions are those of the authors only and should not be interpreted as representing the official policies, either expressed or implied, of the sponsoring organizations, agencies, or the U.S.~government.\n\bibliographystyle{named}\n\bibliography{ijcai21}\n\end{document}", "id": "c0c3784c-24d5-427a-8e19-a1e786b10c0f", "level": "section", "origin_cites_number": 4, "parent_id": "76c7c5c6-a4be-4647-adfa-d8a4a864aace", "prefix_titles": [ [ "title", "End-to-End Constrained Optimization Learning: A Survey" ], [ "section", "Challenges and Research Directions" ] ], "subsections": [], "title": "Challenges and Research Directions" } ]
11
[ 9086, 166, 167, 168, 169, 8325, 172, 173, 170, 7004, 7195, 171, 7196, 174, 175, 178, 176, 177, 179, 181, 188, 185, 7197, 182, 184, 186, 189, 180, 183, 190, 187 ]
1.144833
[ "Nur Imtiazul Haque", "Md Hasan Shahriar", "Md Golam Dastgir", "Anjan Debnath", "Imtiaz Parvez", "Arif Sarwat", "Mohammad Ashiqur Rahman" ]
Machine Learning in Generation, Detection, and Mitigation of Cyberattacks in Smart Grid: A Survey
2020
2020-09-01T05:16:51Z
cs.CR
Smart grid (SG) is a complex cyber-physical system that utilizes modern cyber and physical equipment to run at an optimal operating point. Cyberattacks are the principal threats confronting the usage and advancement of such state-of-the-art systems. The advancement of SG has added a wide range of technologies, equipment, and tools to make the system more reliable, efficient, and cost-effective. Despite attaining these goals, the threat space for adversarial attacks has also expanded because of the extensive implementation of cyber networks. Due to its promising computational and reasoning capability, machine learning (ML) is being used to launch and defend against cyberattacks in SG by attackers and system operators, respectively. In this paper, we provide a comprehensive summary of cyberattack generation, detection, and mitigation schemes by reviewing state-of-the-art research in the SG domain. Additionally, we summarize the current research in a structured way using a tabular format. We also present the shortcomings of the existing works and possible future research directions based on our investigation.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "779753ce-850d-42dc-864e-3ee7783627a1", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Machine Learning in Generation, Detection, and Mitigation of Cyberattacks in Smart Grid: A Survey" ] ], "subsections": [ "cad211a4-1b7d-42e5-80c1-410e1a4dc6d1", "a194fe68-fa2b-494e-a0bf-8df5c90dae7d", "177d8436-9b97-40d6-a640-e71768806d8f", "18633bc6-d50e-4b6a-baeb-61d8385a22a4", "5a14d348-f16b-43ae-9072-36fa8337a104", "9fd43cce-1854-415c-9af9-acffa298cd38", "9e3c4149-3eac-4f7f-aa28-ad4b705ad3b0" ], "title": "root" }, { "cite_extract_rate": 0.130434782608695, "cites": [ 6802, 6801, 6803 ], "content": "\\label{Sec:Introduction}\nIn the modern power system, a vast number of intelligent devices form a cyber network to monitor, control, and protect the physical network. The cyber-physical (CP) networks' inter-dependency is the backbone of the modern smart grid (SG). Fig.~\\ref{architecture} illustrates the typical multi-layer architecture of SG, composed of physical, data acquisition, communication, and application layers. The physical layer consists of generation, transmission, and distribution networks~. SG incorporates distributed energy resources (DERs) such as solar, wind, hydro, etc., connected to the grid with converters for extracting maximum power~. The data acquisition layer consists of smart sensors and measurement devices, where the smart devices collect data and transmit them to the communication layer. The communication layer includes a wide variety of wired/wireless technologies and network devices, and transmits data to the energy management system (EMS), which optimizes, monitors, and sends control signals to the actuators.\nThough the cyber layers improve the efficiency of the SG, they might put the system at higher risk by expanding the attack space. 
An attacker can compromise vulnerable points, disrupting the monitoring and control of the physical equipment~. \nAdditionally, demand response, energy efficiency, dynamic electricity markets, distributed automation, etc. are the key features of the SG network~.\nAll these salient features make the SG a very complex network. Machine learning (ML) is a prominent tool capable of extracting patterns from complex network data without being explicitly programmed. Recently, many researchers have been using ML to analyze the cybersecurity of SG.\n\\begin{figure}[h]\n\\centering\n\\vspace{-5pt}\n \\includegraphics[scale=0.7]{figures/architecture_smart_grid.pdf}\n \\vspace{-5pt}\n \\caption{Architecture of the cyber-physical systems of smart grid}\n \\label{architecture}\n\\vspace{-5pt}\n\\end{figure}\n\\begin{table*}[!ht]\n\\scriptsize\n\\centering\n\\caption{Classification of ML-based attack generation techniques in smart grid\\label{tab:attack}}\n\\begin{tabular}{|l|l|l|l|l|l|l|l|l|}\n\\hline\n\\textbf{Reference} & \\textbf{Institution} & \\textbf{Year} & \\textbf{Attack Type} & \\textbf{Attack Goal} & \\textbf{Category} & \\textbf{\\begin{tabular}[c]{@{}l@{}}ML \\\\ Algorithm\\end{tabular}} & \\textbf{Performance} & \\textbf{Testbed} \\\\ \\hline\nPaul et al.~ & \\begin{tabular}[c]{@{}l@{}}South Dakota State \\\\ University, USA\\end{tabular} & & & \\begin{tabular}[c]{@{}l@{}}Generation loss, \\\\ Line outage\\end{tabular} & Unsupervised & K-means & \\begin{tabular}[c]{@{}l@{}}Line outage-63 \\\\ Generation loss- \\\\ 12029 MW\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}W\\&W 6, IEEE 7, \\\\ 8, 300 bus\\end{tabular} \\\\ \\cline{1-2} \\cline{5-9} \nNi et al.~ & \\begin{tabular}[c]{@{}l@{}}South Dakota State \\\\ University, USA\\end{tabular} & \\multirow{-2}{*}{2019} & & \\begin{tabular}[c]{@{}l@{}}Optimal Multistage \\\\ Attack Sequence \\\\ for Line outage\\end{tabular} & & & \\begin{tabular}[c]{@{}l@{}}Line outage- \\\\ 
30\\%, Generation \\\\ loss-60\\%\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}W\\&W 6 bus and\\\\ IEEE 39 bus\\end{tabular} \\\\ \\cline{1-3} \\cline{5-5} \\cline{8-9} \nNi et al.~ & \\begin{tabular}[c]{@{}l@{}}South Dakota State \\\\ University, USA\\end{tabular} & 2017 & & \\begin{tabular}[c]{@{}l@{}}Optimal Attack \\\\ Sequence for \\\\ Blackout\\end{tabular} & & & & \\begin{tabular}[c]{@{}l@{}}W\\&W 6 bus and \\\\ IEEE 30 bus\\end{tabular} \\\\ \\cline{1-3} \\cline{5-5} \\cline{9-9} \nYan et al.~ & \\begin{tabular}[c]{@{}l@{}}University of Rhode \\\\ Island, USA\\end{tabular} & 2016 & \\multirow{-4}{*}{\\begin{tabular}[c]{@{}l@{}}Line \\\\ Switching\\end{tabular}} & \\begin{tabular}[c]{@{}l@{}}Optimal Attack \\\\ Sequence for \\\\ Blackout\\end{tabular} & \\multirow{-3}{*}{Reinforcement} & \\multirow{-3}{*}{Q-learning} & \\multirow{-3}{*}{\\begin{tabular}[c]{@{}l@{}} Generation \\\\loss- 160.1278\\\\MW and blackout\\end{tabular}} & \\begin{tabular}[c]{@{}l@{}}IEEE 5, 24, and \\\\ 300 bus\\end{tabular} \\\\ \\hline\nChen et al.~ & \\begin{tabular}[c]{@{}l@{}}Tsinghua University, \\\\ China\\end{tabular} & 2019 & & \\begin{tabular}[c]{@{}l@{}}Disrupt Automatic \\\\ Voltage Control\\end{tabular} & Reinforcement & Q-learning & \\begin{tabular}[c]{@{}l@{}} Voltage sag- 0.5 pu \\end{tabular} & IEEE 39 bus \\\\ \\cline{1-3} \\cline{5-9} \nAhmadian et al.~ & \\begin{tabular}[c]{@{}l@{}}University of \\\\ Houston, USA\\end{tabular} & & & Maximize Cost & Unsupervised & GAN & \\begin{tabular}[c]{@{}l@{}}Generated fake \\\\ load data\\end{tabular} & IEEE 5 bus \\\\ \\cline{1-2} \\cline{5-9} \nNawaz et al.~ & \\begin{tabular}[c]{@{}l@{}}Air University, \\\\ Pakistan\\end{tabular} & \\multirow{-2}{*}{2018} & \\multirow{-3}{*}{FDIA} & State Estimation & Supervised & LR & \\begin{tabular}[c]{@{}l@{}}Generated stealthy \\\\ FDI attack vectors\\end{tabular} & IEEE 5 bus \\\\ \\hline\n\\end{tabular}\n\\vspace{-5pt}\n\\end{table*}\nUntil now, a few ML-based security surveys have 
been conducted in the smart grid domain. \nA detailed overview of ML-based security analysis of the false data injection (FDI) attack in SG has been presented by Musleh et al.~. However, its focus was limited to a single attack type. Hossain et al. conducted a study on the application of big data and ML in the electrical power grid~. Most of the existing review papers do not cover recent trends in the application of ML to the security study of SG. This survey paper provides a review of state-of-the-art applications of ML in attack generation, detection, and mitigation schemes in the SG. After introducing the existing and emerging ML-based security issues, this paper aims to inspire researchers to develop security solutions that increase the resiliency of the SG.", "id": "cad211a4-1b7d-42e5-80c1-410e1a4dc6d1", "level": "section", "origin_cites_number": 23, "parent_id": "779753ce-850d-42dc-864e-3ee7783627a1", "prefix_titles": [ [ "title", "Machine Learning in Generation, Detection, and Mitigation of Cyberattacks in Smart Grid: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Sec:Machine_Learning_Based_Attack_Generation}\nML-based attacks in the SG domain are less explored. Table~\\ref{tab:attack} summarizes the ML-based attack generation in SG. According to the existing research works, four types of ML algorithms are utilized to generate malicious data to launch an attack in the SG. K-means and Q-learning algorithms are used to launch line switching attacks, while Q-learning, generative adversarial networks (GAN), and linear regression (LR) models are used to generate false data injection (FDI) attacks. \nPaul et al. used load ranking and K-means clustering as two different approaches to select the most vulnerable transmission lines in the SG for creating contingencies~. 
They found that clustering-based algorithms perform better in tripping transmission lines, whereas load ranking shows better results in attaining higher generation loss. In~, Yan et al. proposed Q-learning-based cyberattacks on different buses of the system that lead the system to a blackout. Ni et al. proposed another reinforcement learning-based sequential line switching attack to initiate a blackout~. They recently proposed a multistage game using a Q-learning algorithm to create transmission line outages and generation loss~. Nawaz et al. proposed an LR-based FDI attack generator against the state estimation of the SG. They implemented and evaluated their model on the IEEE 5 bus system~. Ahmadian et al. presented a GAN-based false load data generator in~. The attacker's goal was to maximize the generation cost by injecting that false load data into the system. Recently, Chen et al. also presented a Q-learning-based FDI attack generator against automatic voltage control using a partially observable Markov decision process and were able to create a voltage sag on the IEEE 39 bus system~.", "id": "a194fe68-fa2b-494e-a0bf-8df5c90dae7d", "level": "section", "origin_cites_number": 7, "parent_id": "779753ce-850d-42dc-864e-3ee7783627a1", "prefix_titles": [ [ "title", "Machine Learning in Generation, Detection, and Mitigation of Cyberattacks in Smart Grid: A Survey" ], [ "section", "Machine Learning Based Attack Generation" ] ], "subsections": [], "title": "Machine Learning Based Attack Generation" }, { "cite_extract_rate": 0.15625, "cites": [ 6807, 6805, 6808, 6806, 6804 ], "content": "\\label{Sec:ML_Attack_Detection}\nA wide range of research has been conducted to detect various attacks in SG leveraging ML approaches. In this section, we review the existing research efforts on ML-based attack detection in various segments of the SG, as summarized in Table~\\ref{tab:detection}. 
In the following subsections, we discuss the detection techniques of a few prevalent cyberattacks.\n\\begin{table*}[!ht]\n\\scriptsize\n\\centering\n\\caption{Classification of ML-based attack detection techniques in smart grid}\n\\label{tab:detection}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|}\n\\hline\n\\textbf{Category} & \\textbf{\\begin{tabular}[c]{@{}c@{}}ML \\\\ Algorithm\\end{tabular}} & \\textbf{Attack} & \\textbf{\\begin{tabular}[c]{@{}c@{}}Number of \\\\Features\\end{tabular}} & \\textbf{Data Collection} & \\textbf{Testbed} & \\textbf{Performance} & \\textbf{Reference} \\\\ \\hline\n & & & 304 & \\begin{tabular}[c]{@{}c@{}}MATPOWER \\\\ simulation tool\\end{tabular} & IEEE 118 bus & 99\\% accuracy & \\\\ \\cline{4-8} \n & & & NA & NA & IEEE 30 bus & 96.1\\% accuracy & \\\\ \\cline{4-8} \n & & \\multirow{-3}{*}{FDI} & 34 & \\begin{tabular}[c]{@{}c@{}}MATPOWER \\\\ simulation tool\\end{tabular} & IEEE 14 bus & 90.79\\% accuracy & \\\\ \\cline{3-8} \n & & IL & NA & \\begin{tabular}[c]{@{}c@{}}Ground truth \\\\ profile database\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}IEC 61850 \\\\ conforming testbed\\end{tabular} & 91\\% accuracy & \\\\ \\cline{3-8} \n & & CC & 233 & SE-MF datasets & \\begin{tabular}[c]{@{}c@{}}IEEE 14, 39-, 57 \\\\ and 118-bus systems\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}99.954\\% accuracy \\\\ and 0.939 F1-score\\end{tabular} & \\\\ \\cline{3-8} \n & \\multirow{-6}{*}{SVM} & \\begin{tabular}[c]{@{}c@{}}DoS, R2L, \\\\ and U2R\\end{tabular} & NA & NSL-KDD dataset & NA & \\begin{tabular}[c]{@{}c@{}}0.67\\% FPR and\\\\ 2.15\\% FNR\\end{tabular} & \\\\ \\cline{2-8} \n & & & NA & NA & IEEE 30 bus & 95.1\\% accuracy & \\\\ \\cline{4-8} \n & & \\multirow{-2}{*}{FDI} & 34 & \\begin{tabular}[c]{@{}c@{}}MATPOWER \\\\ simulation tool\\end{tabular} & IEEE 14 bus & 85.59\\% accuracy & \\\\ \\cline{3-8} \n & \\multirow{-3}{*}{KNN} & CC & 233 & SE-MF datasets & \\begin{tabular}[c]{@{}c@{}}IEEE 14, 39-, 57 \\\\ and 118-bus systems\\end{tabular} & 
77.234\\% accuracy & \\\\ \\cline{2-8} \n & ENN & FDI & NA & NA & IEEE 30 bus & 100\\% accuracy & \\\\ \\cline{2-8} \n & & \\begin{tabular}[c]{@{}c@{}}BF, DoS/DDoS, \\\\ PS, etc.\\end{tabular} & 80 & \\begin{tabular}[c]{@{}c@{}}CIC-IDS2017 \\\\ dataset\\end{tabular} & NA & 99.6\\% accuracy & \\\\ \\cline{3-8} \n & \\multirow{-2}{*}{DT} & DoS & 2 & NA & NA & 100\\% & \\\\ \\cline{2-8} \n & & DoS & NA & NS2 simulation tool & NA & 99\\% accuracy & \\\\ \\cline{3-8} \n & \\multirow{-2}{*}{NB} & CC & 233 & SE-MF datasets & \\begin{tabular}[c]{@{}c@{}}IEEE 14, 39-, 57 \\\\ and 118-bus systems\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}67.321\\% accuracy \\\\ and 0.631 F1-score\\end{tabular} & \\\\ \\cline{2-8} \n & CDBfN & FDI & NA & \\begin{tabular}[c]{@{}c@{}}MATPOWER \\\\ simulation tool\\end{tabular} & IEEE 118, 300 bus & 98\\% accuracy & \\\\ \\cline{2-8} \n & & & 48 & \\begin{tabular}[c]{@{}c@{}}Irish Social Science \\\\ Data Archive Center\\end{tabular} & NA & 84.37\\% accuracy & \\\\ \\cline{4-8} \n & & \\multirow{-2}{*}{FDI} & 34 & \\begin{tabular}[c]{@{}c@{}}MATPOWER \\\\ simulation tool\\end{tabular} & IEEE 14 bus & 81.78\\% accuracy & \\\\ \\cline{3-8} \n & \\multirow{-3}{*}{ANN} & CC & 233 & SE-MF datasets & \\begin{tabular}[c]{@{}c@{}}IEEE 14, 39, 57 \\\\ and 118-bus systems\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}86.469\\% accuracy \\\\ and 0.863 F1-score\\end{tabular} & \\\\ \\cline{2-8} \n & DBsN & FDI & NA & NA & \\begin{tabular}[c]{@{}c@{}}IEEE 39, 118, \\\\ and 2848 bus\\end{tabular} & 99\\% accuracy & \\\\ \\cline{2-8} \n & \\begin{tabular}[c]{@{}c@{}}DL model \\\\ (Novel)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}DoS/DDoS, \\\\ PS etc.\\end{tabular} & 80 & CIC-IDS 2017 dataset & NA & 99.99\\% accuracy & \\\\ \\cline{2-8} \n & EDAD & CC & 233 & SE-MF datasets & \\begin{tabular}[c]{@{}c@{}}IEEE 14, 39-, 57 \\\\ and 118-bus systems\\end{tabular} & 90\\% accuracy & \\\\ \\cline{2-8} \n & XGBoost & \\begin{tabular}[c]{@{}c@{}}XSS, SQLI, 
DoS/\\\\ DDoS, PS, etc.\\end{tabular} & 71 & CIC-IDS2018 dataset & NA & \\begin{tabular}[c]{@{}c@{}}99.87\\% precision \\\\ and 99.75\\% recall\\end{tabular} & \\\\ \\cline{2-8} \n & \\begin{tabular}[c]{@{}c@{}}DT coupled \\\\ SVM (Novel)\\end{tabular} & ET & NA & NA & NA & 92.5\\% accuracy & \\\\ \\cline{2-8} \n & Adaboost & CC & 233 & SE-MF datasets & \\begin{tabular}[c]{@{}c@{}}IEEE 14, 39-, 57 \\\\ and 118-bus systems\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}85.958\\% accuracy \\\\ and 0.852 F1-score\\end{tabular} & \\\\ \\cline{2-8} \n & AIRS & \\begin{tabular}[c]{@{}c@{}}DoS, R2L, \\\\ U2R, and PA\\end{tabular} & NA & NSL-KDD dataset & NA & \\begin{tabular}[c]{@{}c@{}}1.3\\% FPR, and\\\\ 26.32\\% FNR\\end{tabular} & \\\\ \\cline{2-8} \n & Multi-SVM & DoS & 44 & ADFA-LD & NA & 90\\% accuracy & \\\\ \\cline{2-8}\n & Autoencoder ANN & FDI & 339 & NA & IEEE 118 bus & 95.05\\% accuracy & \\\\\n \\cline{2-8}\n & NARX ANN & FDI & NA & OPAL-RT simulator & DC microgrid system & 95.05\\% accuracy & \\\\\n \\cline{2-8}\n & & & 112 & \\begin{tabular}[c]{@{}c@{}}MATPOWER \\\\ simulation tool\\end{tabular} & IEEE 30 bus & \\begin{tabular}[c]{@{}c@{}}99.9\\% accuracy,\\\\ 91.529\\% precision, \\\\ and 85.02\\% recall\\end{tabular} & \\\\ \\cline{4-8} \n\\multirow{-28}{*}{Supervised} & \\multirow{-2}{*}{RNN} & \\multirow{-2}{*}{FDI} & 41 & NSL-KDD dataset & IEEE 39 bus & 90\\% accuracy & \\\\ \\hline\n & Statistical & FDI & 304 & \\begin{tabular}[c]{@{}c@{}}MATPOWER \\\\ simulation tool\\end{tabular} & IEEE 118 bus & 99\\% accuracy & \\\\ \\cline{2-8} \n & \\begin{tabular}[c]{@{}c@{}}ART and SOM \\\\ based classifier \\\\ (Novel)\\end{tabular} & FDI & 24 & \n \\begin{tabular}[c]{@{}c@{}}Real-time digital \\\\ simulator (RTDS)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}RTDS hardware \\\\ in the loop testbed\\end{tabular} & 90\\% accuracy & \\\\ 
\\cline{2-8} \n & CLONALG & \\begin{tabular}[c]{@{}c@{}}DoS, R2L, \\\\ U2R\\end{tabular} & NA & NSL-KDD dataset & NA & \\begin{tabular}[c]{@{}c@{}}0.7\\% FPR, and\\\\ 21.02\\% FNR\\end{tabular} & \\\\ \\cline{2-8} \n\\multirow{-4}{*}{Unsupervised} & iForest & CC & 233 & SE-MF datasets & \\begin{tabular}[c]{@{}c@{}}IEEE 14, 39-, 57 \\\\ and 118-bus systems\\end{tabular} & 90\\% accuracy & \\\\ \\hline\n\\end{tabular}\n\\vspace{-5pt}\n\\end{table*}\n\\begin{table*}[!htb]\n\\centering\n\\scriptsize\n\\caption{Classification of ML-based attack mitigation techniques in smart grid\\label{tab:mitigation}}\n\\begin{tabular}{|l|l|r|l|l|l|l|}\n\\hline\n\\textbf{Reference} & \\textbf{Institution} & \\multicolumn{1}{l|}{\\textbf{\\begin{tabular}[c]{@{}l@{}}Publication \\\\ Year\\end{tabular}}} & \\textbf{Attack} & \\textbf{\\begin{tabular}[c]{@{}l@{}}ML Model \\\\ Type\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}l@{}}ML \\\\ Algorithm\\end{tabular}} & \\textbf{Testbed} \\\\ \\hline\nChen et al. & \\begin{tabular}[c]{@{}l@{}}Tsinghua University Beijing,\\\\ China\\end{tabular} & 2019 & \\multirow{3}{*}{FDI} & Reinforcement & Q-learning & IEEE 39 bus \\\\ \\cline{1-3} \\cline{5-7} \nLi et al. & \\begin{tabular}[c]{@{}l@{}}North China Electric Power \\\\ University, China\\end{tabular} & 2019 & & \\multirow{2}{*}{Unsupervised} & GAN & \\begin{tabular}[c]{@{}l@{}}IEEE 30, and \\\\ 118 bus\\end{tabular} \\\\ \\cline{1-3} \\cline{6-7} \nWei et al. & University of Akron, USA & 2016 & & & DBfN & \\begin{tabular}[c]{@{}l@{}}New England 39\\\\ bus power system\\end{tabular} \\\\ \\hline\nAn et al. & \\begin{tabular}[c]{@{}l@{}}Xi'an Jiaotong University, \\\\ China\\end{tabular} & 2019 & \\multirow{2}{*}{DIA} & Reinforcement & Q-learning & \\begin{tabular}[c]{@{}l@{}}IEEE 9, 14, and \\\\ 30 bus system\\end{tabular} \\\\ \\cline{1-3} \\cline{5-7} \nParvez et al. 
& \\begin{tabular}[c]{@{}l@{}}Florida International University,\\\\ USA\\end{tabular} & 2016 & & Supervised & KNN & AMI network \\\\ \\cline{1-4} \\cline{5-7} \nMaharjan et al. & \\begin{tabular}[c]{@{}l@{}}University of Texas at Dallas,\\\\ USA\\end{tabular} & 2019 & \\multirow{2}{*}{DUA} & Unsupervised & SVM & MPEI \\\\ \\cline{1-3} \\cline{6-7} \nRen et al. & \\begin{tabular}[c]{@{}l@{}}Nanyang Technological \\\\ University, Singapore\\end{tabular} & 2019 & & & GAN & \\begin{tabular}[c]{@{}l@{}}New England 39 \\\\ bus\\end{tabular} \\\\ \\cline{1-4} \\cline{6-7} \nShahriar et al. & Florida International University & 2020 & \\multirow{2}{*}{LAD} & & GAN & KDD-99 \\\\ \\cline{1-3} \\cline{6-7} \nYing et al. & Zhejiang University, China & 2019 & & & GAN & Synthetic \\\\ \\hline\n\\end{tabular}\n\\vspace{-5pt}\n\\end{table*}", "id": "177d8436-9b97-40d6-a640-e71768806d8f", "level": "section", "origin_cites_number": 32, "parent_id": "779753ce-850d-42dc-864e-3ee7783627a1", "prefix_titles": [ [ "title", "Machine Learning in Generation, Detection, and Mitigation of Cyberattacks in Smart Grid: A Survey" ], [ "section", "Machine Learning Based Attack Detection" ] ], "subsections": [ "b66662ab-5d6e-4009-8543-57c2e49ddcba", "9fff6266-4315-4cf6-a7e6-ca38f41ff9da", "110c8dd3-5cb4-4c42-8efc-f886a0ff2069", "7137ee30-f9d0-4315-86d5-b549060ddf45" ], "title": "Machine Learning Based Attack Detection" }, { "cite_extract_rate": 0.22222222222222202, "cites": [ 6804, 6805 ], "content": "Most of the research efforts attempted to detect stealthy FDI attacks using ML. Esmalifalak et al. employed a support vector machine (SVM)-based technique and a statistical anomaly detection approach for this purpose~. They showed that SVM outperforms the statistical approach when the model is trained with sufficient data. He et al. proposed a conditional deep belief network (CDBfN)-based detection scheme that extracts temporal features from distributed sensor measurements~. 
The proposed detection scheme is robust across various attacked measurements and environmental noise levels. Moreover, it can perform better than SVM and artificial neural network (ANN)-based detection mechanisms. Karimipour et al. proposed a continuous, computationally efficient, and independent mechanism using a feature extraction scheme and a time series partitioning method to detect FDI attacks~. This work used the dynamic Bayesian network (DBsN) concept and Boltzmann machine-based learning algorithms to detect unobservable attacks. Valdes et al. presented a novel intrusion detection system (IDS) utilizing adaptive resonance theory (ART) and self-organizing maps (SOM) to differentiate normal, fault, and attack states in a distribution substation system~. Yan et al. viewed the FDI attack detection problem as a binary classification problem and attempted to solve it using three different algorithms: SVM, K-nearest neighbor (KNN), and extended nearest neighbor (ENN)~. Their experimental analysis showed that all these algorithms could be tuned to attain optimal performance in FDI attack detection. Ayad et al. examined the use of a recurrent neural network (RNN)-based method that, unlike other learning methods, deals with temporal and spatial correlations between the measurements to recognize FDI attacks in SG~. Niu et al. presented a deep learning-based framework combining a convolutional neural network (CNN) and a long short term memory (LSTM) network to detect novel FDI attacks~. Sakhnini et al. analyzed three different algorithms (e.g., ANN, KNN, and SVM) that incorporate different feature selection (FS) techniques and used a genetic algorithm (GA) as the optimal FS method for power systems. 
The authors of~ proposed nonlinear autoregressive exogenous (NARX) neural networks to estimate DC current and voltage to detect FDI attacks in DC microgrids and showed that the proposed method successfully identifies FDI attacks during transient and steady-state conditions. The autoencoder ANN-based FDI attack detection mechanism has also been investigated \nby Wang et al.~.", "id": "b66662ab-5d6e-4009-8543-57c2e49ddcba", "level": "subsection", "origin_cites_number": 9, "parent_id": "177d8436-9b97-40d6-a640-e71768806d8f", "prefix_titles": [ [ "title", "Machine Learning in Generation, Detection, and Mitigation of Cyberattacks in Smart Grid: A Survey" ], [ "section", "Machine Learning Based Attack Detection" ], [ "subsection", "False Data Injection Attack" ] ], "subsections": [], "title": "False Data Injection Attack" }, { "cite_extract_rate": 0, "cites": [], "content": "Ahmed et al. tried to detect covert cyber (CC) attacks in several research efforts. In one of their works, they proposed two Euclidean distance-based abnormality recognition schemes for detecting anomalies in the state estimation measurement features (SE-MF) dataset~. In another work, they leveraged several ML methods (KNN, SVM, multi-layer perceptron (MLP), naïve Bayes (NB), and AdaBoost) to identify a CC attack in the SE information that is gathered through a communication network of the smart grid~, along with a GA for optimizing the features. Their findings revealed that KNN has lower CC attack detection performance than the other ML methods. They also proposed an unsupervised ML-based mechanism utilizing a state-of-the-art algorithm called isolation forest (iForest) to distinguish CC attacks in SG systems using non-labeled information. 
The proposed mechanism can considerably improve detection performance under periodic operational conditions~.", "id": "9fff6266-4315-4cf6-a7e6-ca38f41ff9da", "level": "subsection", "origin_cites_number": 3, "parent_id": "177d8436-9b97-40d6-a640-e71768806d8f", "prefix_titles": [ [ "title", "Machine Learning in Generation, Detection, and Mitigation of Cyberattacks in Smart Grid: A Survey" ], [ "section", "Machine Learning Based Attack Detection" ], [ "subsection", "Covert Cyber Attack" ] ], "subsections": [], "title": "Covert Cyber Attack" }, { "cite_extract_rate": 0, "cites": [], "content": "Energy Theft (ET) detection in SG mostly leverages supervised ML algorithms. Ford et al. examined a novel utilization of ANN and fine-grained smart meter information to report energy fraud, accomplishing a higher energy theft detection rate~. Jindal et al. proposed an extensive top-down scheme utilizing a decision tree (DT) and SVM. In contrast to other works, the proposed scheme is proficient in accurately detecting constant power theft at each level of the power transmission system~.", "id": "110c8dd3-5cb4-4c42-8efc-f886a0ff2069", "level": "subsection", "origin_cites_number": 2, "parent_id": "177d8436-9b97-40d6-a640-e71768806d8f", "prefix_titles": [ [ "title", "Machine Learning in Generation, Detection, and Mitigation of Cyberattacks in Smart Grid: A Survey" ], [ "section", "Machine Learning Based Attack Detection" ], [ "subsection", "Electricity Theft" ] ], "subsections": [], "title": "Electricity Theft" }, { "cite_extract_rate": 0, "cites": [], "content": "Denial of Service (DoS) and Distributed Denial of Service (DDoS) attack detection has received significant research focus. Vijayanand et al. proposed a novel DoS attack detection framework utilizing diverse multi-layer deep learning algorithms for precisely identifying the attacks by analyzing smart meter traffic~. 
In another work, they used a novel approach named multi-SVM for DoS attack detection~. Zhang et al. attempted to detect anomalies in a multi-layer network structure using SVM, the clonal selection algorithm (CLONALG), and the artificial immune recognition system (AIRS). \nAccording to their performance analysis, the SVM-based IDS outperforms CLONALG and AIRS~. Radoglou et al. proposed a novel IDS for AMI using DT. Boumkheld et al. proposed an NB classifier-based centralized IDS that accumulates all information sent by the data collector, which requires massive memory and computational resources. Roy et al. used extreme gradient boosting (XGBoost) and random forest (RF) on the CIC-IDS2018 dataset for detecting various SG cyberattacks, including DoS~.\nMoreover, the ML-based detection of cross-site scripting (XSS) and SQL injection (SQLI), unknown routing attack (URA), brute force (BF), information leakage (IL), port scanning (PS), remote to local (R2L), and user to root (U2R) attacks has also gained some attention in the recent research.", "id": "7137ee30-f9d0-4315-86d5-b549060ddf45", "level": "subsection", "origin_cites_number": 5, "parent_id": "177d8436-9b97-40d6-a640-e71768806d8f", "prefix_titles": [ [ "title", "Machine Learning in Generation, Detection, and Mitigation of Cyberattacks in Smart Grid: A Survey" ], [ "section", "Machine Learning Based Attack Detection" ], [ "subsection", "Denial of Service Attack" ] ], "subsections": [], "title": "Denial of Service Attack" }, { "cite_extract_rate": 0.11111111111111101, "cites": [ 6808 ], "content": "\\label{Sec:Machine_Learning_Based_Attack_Mitigation}\n\\begin{figure*}[t]\n \\begin{center}\n \\subfigure[]\n {\n \\label{gen}\n \\includegraphics[width=0.32\\textwidth, keepaspectratio=true]{figures/bar_generation.pdf}\n }\n \\hspace{-15pt}\n \\subfigure[]\n {\n \\label{det}\n \\includegraphics[width=0.32\\textwidth, keepaspectratio=true]{figures/bar_detection.pdf}\n }\n \\hspace{-15pt}\n \\subfigure[]\n {\n \\label{mit}\n 
\\includegraphics[width=0.32\\textwidth, keepaspectratio=true]{figures/bar_mitigation.pdf}\n } \n \\hspace{-15pt}\n \\end{center}\n \\vspace{-10pt}\n \\caption{Pie-charts of the most commonly used ML techniques in a) generation, b) detection, and c) mitigation of cyberattacks in smart grid}\n \\label{Fig:piechart}\n\\vspace{-10pt}\n\\end{figure*}\nAttack mitigation is the strategy of minimizing the effect of malicious attacks while maintaining the functionality of the system.\nTable~\\ref{tab:mitigation} summarizes the ML-based attack mitigation in SG.\nWei et al. proposed a deep belief network (DBfN)-based cyber-physical model to identify and mitigate FDI attacks while maintaining the transient stability of wide-area monitoring systems (WAMSs)~. An et al. modeled a deep-Q-network (DQN) detection scheme to defend against data integrity attacks (DIA) in AC power systems~ and showed that the DQN detection outperforms the baseline schemes in terms of detection accuracy and rapidity. Chen et al. presented a Q-learning-based mitigation technique for FDI attacks on the automatic voltage controller~. They replaced the suspected data with their maximum likelihood estimation (MLE) values to enhance the security of the state estimation and OPF-based controls. In~, Maharjan et al. proposed an SVM-based resilient SG network with DERs to mitigate the data unavailability attack (DUA). Parvez et al. proposed a localization-based key management system using the KNN algorithm for node/meter authentication in AMI networks~. They showed that the source meter could be authenticated accurately by the KNN algorithm utilizing the pattern of sending frequency, packet size, and distance between two meters. Ren et al. proposed a GAN model that predicts the missing/unavailable PMU data even without observability and network topology~. Shahriar et al. proposed a GAN-based approach to generate a synthetic attack dataset from the existing attack data. 
They achieved up to a 91\\% F1-score in detecting different cyberattacks for the emerging smart grid technologies~. In another work, Ying et al. proposed a similar GAN-based approach, achieving a 4\\% increase in the attack detection accuracy~. Li et al. proposed another GAN-based model to defend against FDI attacks~, which predicts the deviation of the measurements and recovers the compromised sensors.", "id": "18633bc6-d50e-4b6a-baeb-61d8385a22a4", "level": "section", "origin_cites_number": 9, "parent_id": "779753ce-850d-42dc-864e-3ee7783627a1", "prefix_titles": [ [ "title", "Machine Learning in Generation, Detection, and Mitigation of Cyberattacks in Smart Grid: A Survey" ], [ "section", "Machine Learning Based Attack Mitigation" ] ], "subsections": [], "title": "Machine Learning Based Attack Mitigation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Sec:future_research_direction}\nFig.~\\ref{Fig:piechart} shows the pie-charts of ML techniques used in attack generation, detection, and mitigation for the SG network. Fig.~\\ref{det} illustrates that a lot of research has been conducted on the application of different ML algorithms in attack detection, whereas the generation and mitigation fields are comparatively less explored. As SG is dynamic and intermittent in nature, researchers are mostly applying Q-learning to generate real-time attacks, as shown in Fig.~\\ref{gen}. On the other hand, GAN has the capability of generating missing data with considerable accuracy; thus, as shown in Fig.~\\ref{mit}, it is prominently used in attack mitigation strategies. However, GAN also has the potential to generate attack data considering the system's topology and states. Hence, future researchers can focus on GAN and the other less explored algorithms such as ANN, RNN, KNN, DT, etc. 
in the attack generation and mitigation strategies.", "id": "5a14d348-f16b-43ae-9072-36fa8337a104", "level": "section", "origin_cites_number": 0, "parent_id": "779753ce-850d-42dc-864e-3ee7783627a1", "prefix_titles": [ [ "title", "Machine Learning in Generation, Detection, and Mitigation of Cyberattacks in Smart Grid: A Survey" ], [ "section", "Future Research Direction" ] ], "subsections": [], "title": "Future Research Direction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Sec:Conclusion}\nML has been creating new dimensions for both attackers and defenders with respect to scalability and accuracy due to its wide range of applications in SG. \nTherefore, it has drawn the attention of researchers to conducting security-related investigations using emerging ML algorithms. In this paper, we review current research works related to the potential ML-based attack generation, detection, and mitigation schemes for future researchers. In addition, we present a tabular form summarizing the existing studies in an organized way that would help future researchers to focus on the under-explored areas.", "id": "9fd43cce-1854-415c-9af9-acffa298cd38", "level": "section", "origin_cites_number": 0, "parent_id": "779753ce-850d-42dc-864e-3ee7783627a1", "prefix_titles": [ [ "title", "Machine Learning in Generation, Detection, and Mitigation of Cyberattacks in Smart Grid: A Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Sec:Acknowledgement}\nThis research was supported in part by the U.S.
National Science Foundation under awards \\#1553494 and \\#1929183.\n\\bibliographystyle{unsrt}\n\\bibliography{main_paper} \n\\end{document}", "id": "9e3c4149-3eac-4f7f-aa28-ad4b705ad3b0", "level": "section", "origin_cites_number": 0, "parent_id": "779753ce-850d-42dc-864e-3ee7783627a1", "prefix_titles": [ [ "title", "Machine Learning in Generation, Detection, and Mitigation of Cyberattacks in Smart Grid: A Survey" ], [ "section", "Acknowledgement" ] ], "subsections": [], "title": "Acknowledgement" } ]
119
[ 6802, 6801, 6803, 6807, 6805, 6808, 6806, 6804 ]
1.240968
[ "Emna Baccour", "Naram Mhaisen", "Alaa Awad Abdellatif", "Aiman Erbad", "Amr Mohamed", "Mounir Hamdi", "Mohsen Guizani" ]
Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence
2021
2021-05-04T23:42:06Z
cs.DC
Artificial intelligence (AI) has witnessed a substantial breakthrough in a variety of Internet of Things (IoT) applications and services, spanning from recommendation systems and speech processing applications to robotics control and military surveillance. This is driven by the easier access to sensory data and the enormous scale of pervasive/ubiquitous devices that generate zettabytes of real-time data streams. Designing accurate models using such data streams, to revolutionize the decision-taking process, inaugurates pervasive computing as a worthy paradigm for a better quality-of-life (e.g., smart homes and self-driving cars). The confluence of pervasive computing and artificial intelligence, namely Pervasive AI, expanded the role of ubiquitous IoT systems from mainly data collection to executing distributed computations with a promising alternative to centralized learning, presenting various challenges, including privacy and latency requirements. In this context, intelligent resource scheduling should be envisaged among IoT devices (e.g., smartphones, smart vehicles) and infrastructure (e.g., edge nodes and base stations) to avoid communication and computation overheads and ensure maximum performance. In this paper, we conduct a comprehensive survey of the recent techniques and strategies developed to overcome these resource challenges in pervasive AI systems. Specifically, we first present an overview of pervasive computing, its architecture, and its intersection with artificial intelligence. We then review the background, applications and performance metrics of AI, particularly Deep Learning (DL) and reinforcement learning, running in a ubiquitous system. Next, we provide a deep literature review of communication-efficient techniques, from both algorithmic and system perspectives, of distributed training and inference across the combination of IoT devices, edge devices and cloud servers. Finally, we discuss our future vision and research challenges.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "7aedf107-610a-4e90-9d9d-b0a1b2dcb03c", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ] ], "subsections": [ "2ca60ad7-b1a8-4a6c-8e91-ae361d2993c4", "d3eb54a2-bc51-4bd4-9d8e-214b42e424f4", "3712d67e-e696-4951-a604-d1fa86a3fe72", "e43e3646-a694-4588-a6e6-93cc3e3f42c2", "8f96916e-e394-41c3-94f4-9283f9cfdbaa", "caa8daf7-97f9-471c-b534-e135a402b2d0", "9df0fd96-bdea-482d-88b7-240ff428f334", "01d3f0a9-9cb9-438d-bb09-071f097b269c", "8fd87342-e326-4cd7-af47-a1b93c69a12c" ], "title": "root" }, { "cite_extract_rate": 0.5, "cites": [ 3409, 97, 514, 301, 7008, 8655, 859, 3410 ], "content": "\\IEEEPARstart{D}{riven} by the recent development and prevalence of computing power, \nInternet of Things (IoT) systems, and big data, a booming era of AI has emerged, covering a wide spectrum of applications including natural language processing , speech recognition , computer vision , and robotics . Owing to these breakthroughs, AI has achieved unprecedented improvements in multiple sectors of academia, industry, and daily services in order to improve human productivity and lifestyle. As an example, multiple intelligent IoT applications have been designed, such as self-driving cars, disease mapping services, smart home appliances, manufacturing robots, and surveillance systems. In this context, studies estimate that AI will have a higher impact on the global Gross Domestic Product (GDP) by 2030, accounting for \\$13 trillion in additional gains compared to 2018 . 
\n\\begin{figure*}[!h]\n\\centering\n\t\\frame{\\includegraphics[scale=0.55]{Figures/introduction3.pdf}}\n\t\\caption{A scenario illustrating examples of pervasive AI techniques.}\n\t\\label{intro}\n\\end{figure*}\nThe popularity of AI is also related to the abundance of storage and computing devices, ranging from server clusters in the cloud to personal phones and computers, and further to wearables and IoT units. In fact, the unprecedented amount of data generated by the massive number of ubiquitous devices opens up an attractive opportunity to provide intelligent IoT services that can transform all aspects of our modern life and fuel the continuous advancement of AI. Statistics forecast that, by 2025, the number of devices connected to the internet will reach more than 500 billion owing to the maturity of their sensing capabilities and affordable prices. Furthermore, reports revealed that these devices will generate an enormous amount of data, reaching more than 79 ZB by 2025, and will increase the economic gains up to \\$11 trillion by the same year . \nWith the rapid evolution of AI and the enormous bulk of data generated by pervasive devices, conventional wisdom resorts to centralized cloud servers for analytics. In fact, the high performance of AI systems applied to multiple fields comes at the expense of a huge memory requirement and an intensive computational load to perform both training and inference phases, which requires powerful servers.\n\\begin{comment} More specifically, training an intelligent model is computationally expensive because of the large number of parameters, reaching millions for deep networks, that need to be repetitively fine-tuned over hundreds of iterations. Similarly, the inference phase is computationally intensive due to the high dimension of raw data (e.g., high resolution images) and millions of tasks (e.g., multiplications and max-pooling) in deep networks .
To this end, the resource consumption has been considered an important parameter to evaluate the performance of AI models.\n\\end{comment}\nHowever, this approach is no longer sustainable as it introduces several challenges: (1) the appearance of a new breed of services and the advent of delay-sensitive technologies spanning from self-driving cars to Virtual and Augmented Reality (VR/AR) make cloud-based approaches inadequate for AI tasks due to the long transmission delays. More precisely, the aforementioned applications are real-time and cannot allow any additional latency or connectivity loss. For example, autonomous cars sending camera frames to remote servers need to receive prompt inferences to detect potential obstacles and apply brakes . 
(4) In the same context, offloading the data to remote servers encounters also scalability issues, as the access to the cloud can become a bottleneck when the number of data sources increases, particularly if some devices import irrelevant and noisy inputs. (5) Nowadays, Explainable AI (XAI) has become extremely popular, aiming to enhance the transparency of learning and detect prediction errors. However, consigning AI tasks to the cloud makes the whole process a black-box vis-a-vis the end-user, and prevents model decomposability and debugging. \nPushing AI to the network edge has been introduced as a viable solution to face latency, privacy, and scalability challenges described earlier. As such, the large amount of computational tasks can be handled by edge devices without exchanging the related data with the remote servers, which guarantees agile IoT services owing to the physical proximity of computing devices to the data sources . In the case when the AI tasks can only be executed at the cloud datacenters, the edge devices can be used to pre-process the data and polish it from noisy inputs in order to reduce the transmission load . Furthermore, the edge network can play the role of a firewall that enhances the privacy by discarding sensitive information prior to data transfer. A variety of edge devices can be candidate for executing different AI tasks with different computation requirements, ranging from edge servers provisioned with GPUs, to smart-phones with strong processors and even small IoT wearable with Raspberry Pi computing. 
These edge devices have been continuously improving to fit for deep AI models.\nIn spite of this technological advancement, a large range of pervasive devices used in countless fields of our daily life still suffers from limited power and memory, such as smart home IoT appliances, sensors, and gaming gears.\nGiven the limited resources of edge-devices, computing the full AI model in one device may be infeasible, particularly when the task requires high computational load, e.g., Deep Neural Networks (DNN). A promising solution is to opt for pervasive computing, where different data storage and processing capacities existing everywhere (e.g., distributed cloud datacenters, edge servers, and IoT devices.) cooperate to accomplish AI tasks that need large memory and intensive computation. This marriage of pervasive computing and AI has given rise to a new research area, which garnered considerable attention from both academia and industry. The new research area comprises different concepts (e.g., federated learning, active learning, etc.) that suggest to distribute AI tasks into pervasive devices for multiple objectives. In this paper, we propose to gather all existing concepts having different terminologies under the same umbrella that we named \\textit{“Pervasive AI\"}. Indeed, we define the pervasive AI as \\textit{“The intelligent and efficient distribution of AI tasks and models over/amongst any types of devices with heterogeneous capabilities in order to execute sophisticated global missions”}. \nThe pervasive AI concepts are firstly introduced to solve the described challenges of centralized approaches (e.g., on-cloud or on-device computation):\n(1) To preserve privacy and reduce the huge overhead of data collection and the complexity of training an astronomical dataset, \\textit{Federated Learning (FL)} is proposed, where raw data are stored in their source entities and the model is trained collaboratively. 
Particularly, each entity computes a local model using its collected data, then sends the results to a fusion server to aggregate the global model. Such an approach suggests the distribution of data and the assembly of the trained AI models. (2) To cope with the limited resources provided by edge devices and simultaneously avoid latency overheads caused by cloud transmissions, the inference task is distributed among ubiquitous devices located in the proximity of the source. The basic idea is to divide the trained model into segments; subsequently, each segment is assigned to a participant. Each participant passes its output to the next one until the final prediction is generated. In other words, the \\textit{Pervasive Inference} covers the distribution of the established model resulting from the training phase. (3) Some AI techniques are inherently distributed, such as Multi-Agent Reinforcement Learning (MARL) or Multi-agent Bandits (MAB), where agents cooperate to build and improve a policy in real time, enabling them to take on-the-fly decisions/actions based on the environment status. In this case, the distribution covers the online creation and update of the Reinforcement Learning (RL) policy. A scenario illustrating some pervasive AI techniques is presented in Fig. \\ref{intro}.\nThe pervasive AI exploits the on-device computation capacities to collaboratively achieve learning tasks. This requires \ncareful scheduling to wisely use the available resources without resorting to remote computing. Yet, some intensive AI tasks can only be performed by involving the cloud servers, which results in higher communication costs. 
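As a minimal, purely illustrative sketch (the function and the numbers below are our own, not drawn from any surveyed work), the FL aggregation step in (1) — local updates followed by a data-size-weighted average at the fusion server, in the spirit of FedAvg — can be written as:

```python
# Illustrative FedAvg-style aggregation at the fusion server.
# Each entity reports (n_k, w_k): its local sample count and its
# locally updated weight vector. The server computes
#   w = sum_k (n_k / n) * w_k   over all participating entities.

def fedavg_aggregate(client_updates):
    """client_updates: list of (num_samples, weights) pairs,
    where weights is a flat list of model parameters."""
    total = sum(n for n, _ in client_updates)
    dim = len(client_updates[0][1])
    global_w = [0.0] * dim
    for n, w in client_updates:
        for i in range(dim):
            global_w[i] += (n / total) * w[i]
    return global_w

# Example: two entities with unequal data sizes.
updates = [(100, [1.0, 2.0]), (300, [3.0, 4.0])]
print(fedavg_aggregate(updates))  # [2.5, 3.5]
```

In a full system, the server would then broadcast the aggregated weights back to the entities for the next round; the cost of this repeated exchange is precisely the communication overhead that motivates the resource-efficient techniques surveyed in this paper.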
Therefore, leveraging the small and ubiquitous resources and managing the enormous communication overheads present a major bottleneck for the pervasive AI.", "id": "2ca60ad7-b1a8-4a6c-8e91-ae361d2993c4", "level": "section", "origin_cites_number": 16, "parent_id": "7aedf107-610a-4e90-9d9d-b0a1b2dcb03c", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Introduction" ] ], "subsections": [ "54ccf8f4-8f81-43ab-ac06-0326b1642776", "e2cbaac2-fe2b-4ece-bd59-5c80205f2c6b" ], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "In this survey, we focus on the confluence of the two emerging paradigms: pervasive computing and artificial intelligence, which we name \\textit{Pervasive AI}. The pervasive AI is a promising research field, in which the system design is highly correlated with the resource constraints of the ubiquitous participants (e.g., memory, computation, bandwidth, and energy) and the communication overheads between them. More specifically, the size of some deep AI models, their computational requirements, and their energy consumption may exceed the available memory (e.g., RAM) or the power supply capacity of some devices, which restricts them from participating in the collaborative system. Furthermore, the process of decentralized training or inference may involve a large number of participants that potentially communicate over wireless links, which creates new challenges related to channel capacities and conditions, the delay performance, and the privacy aspect. 
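To make these constraints concrete, the following toy sketch (our own illustration, with hypothetical layer sizes and memory budgets) greedily assigns consecutive model segments to devices under per-device memory limits; a real scheduler would additionally weigh computation, energy, and channel conditions:

```python
# Illustrative greedy partitioning of a model's layers across
# pervasive devices with limited memory (all numbers hypothetical).
# Layers are kept in order; a new device is used whenever the
# current one cannot hold the next layer.

def partition_layers(layer_sizes_mb, device_budgets_mb):
    """Return a list of per-device layer-index lists, or None
    if the devices cannot host the whole model."""
    assignment = [[] for _ in device_budgets_mb]
    dev, used = 0, 0.0
    for idx, size in enumerate(layer_sizes_mb):
        # move on to the next device until the layer fits
        while dev < len(device_budgets_mb) and used + size > device_budgets_mb[dev]:
            dev, used = dev + 1, 0.0
        if dev == len(device_budgets_mb):
            return None  # model exceeds the usable capacity
        assignment[dev].append(idx)
        used += size
    return assignment

# Example: a 5-layer model over three heterogeneous devices.
print(partition_layers([20, 30, 10, 40, 5], [60, 45, 50]))
# [[0, 1, 2], [3, 4], []]
```

Even this simplified version shows why partitioning and participant selection are coupled: changing a single device budget can reshuffle the whole layer-to-device mapping, and hence the amount of intermediate data sent over the wireless links.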
Therefore, the pervasive AI should rely on various parameters, including the optimal AI partitioning, the efficient design of architectures and algorithms managing the distributed learning, and the smart selection and scheduling of pervasive participants supported by efficient communication protocols.\nIn addition, all the on-device constraints should be taken into consideration, such as memory, computation, and energy, not to mention the privacy requirements of the system. Finally, the load of real-time inferences (e.g., an area that needs 24/7 surveillance), the pace of data collection (e.g., weather monitoring), and the dynamics of the studied environment should also be considered, as they highly impact the number of selected participants and the parallelization strategies. In this paper, we survey the aforementioned challenges in deploying pervasive AI models and algorithms. Particularly, we provide a deep study of resource-efficient distributed learning for the training phase, the inference tasks, and the real-time training and decision process. We start by identifying the motives behind establishing a pervasive AI system for IoT applications and the corresponding communication and resource challenges.", "id": "54ccf8f4-8f81-43ab-ac06-0326b1642776", "level": "subsection", "origin_cites_number": 0, "parent_id": "2ca60ad7-b1a8-4a6c-8e91-ae361d2993c4", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Introduction" ], [ "subsection", "Our scope" ] ], "subsections": [], "title": "Our scope" }, { "cite_extract_rate": 0, "cites": [], "content": "The contributions of this paper are summarized as follows:\n\\begin{itemize}\n \\item We present an overview of pervasive computing and introduce its architecture and potential participants. \n \\item We provide a brief background of artificial intelligence, particularly deep learning and reinforcement learning.
We also describe the frameworks that support AI tasks and the metrics that assess their performance. Furthermore, we present multiple IoT applications in which pervasive AI can be useful.\n \\item For each phase of the AI (i.e., training and inference), we profile the communication and computation models and review the state-of-the-art. A comparison between different existing works, lessons learned, and recent use cases are also provided. \n \\item We conclude with an elaborate discussion of our future vision and identify some open challenges that may inspire promising new research ideas.\n\\end{itemize}\n\\begin{figure*}[h]\n\\centering\n\\hspace{-9 mm}\n\t\\includegraphics[scale=0.5]{Figures/taxonomyV4.pdf}\n\t\\caption{Pervasive AI survey roadmap.}\n\t\\label{journalSkeleton}\n\\end{figure*}\nThe rest of this paper is organized as follows: Sections \\ref{pervasive_systems} and \\ref{AI} present the fundamentals of pervasive computing and artificial intelligence, respectively. In section \\ref{related_surveys}, we introduce the related surveys and we highlight the novelty of our paper. Section \\ref{Pervasive_training} presents the related studies that investigated the potential of \\emph{federated learning} schemes in different domains. Moreover, it highlights the use of FL within UAV swarms for cooperative target recognition as a case study. \nThen, we investigate diverse \\emph{reinforcement learning} schemes, and \\emph{active learning}. \nSpecifically, we focus on the state-of-the-art algorithms that study the trade-off between the utilized communication resources and the performance, which allows us next to evaluate the strengths and weaknesses of the discussed approaches. In section \\ref{PI}, we present a deep study of the \\emph{pervasive inference}. Particularly, we review the state-of-the-art approaches adopting different splitting strategies and managing the existing pervasive resources to distribute the inference. 
Next, we compare the performance of these works and discuss the learned lessons and potential use cases. \nIn section \\ref{privacy_AI}, we discuss the privacy and security issues associated with pervasive AI and the corresponding mitigation strategies proposed in the literature.\nWe discuss the future vision and open challenges, in section \\ref{future}. Finally, we conclude in section \\ref{conclusion}. More details about the road map of the paper are illustrated in Fig. \\ref{journalSkeleton}. \n\\begin{comment}\nAdditionally, the list of acronyms is presented in Table \\ref{acro}. \n\\begin{table}[h!]\n\\centering\n\\footnotesize\n\\caption{List of Acronyms}\n\\label{acro}\n\\begin{tabular}{ll}\nAC & Actor-Critic \\\\\nAE & Auto-Encoder \\\\\nAI & Artificial Intelligence \\\\\nAL & Active Learning \\\\\nAR & Augmented Reality \\\\\nBM & Bolzman Machines \\\\\nBS & Base Station \\\\\nCNN & Convolutional Neural Network \\\\\nConv & Convolutional \\\\\nCTDE & \\begin{tabular}[c]{@{}l@{}}Centralized Training and\\\\ Decentralized Execution\\end{tabular} \\\\\nDAG & Directed Acyclic Graph \\\\\nDB & Distributed Bandits \\\\\nDDPG & Deep Deterministic Policy Gradient \\\\\nDL & Deep Learning \\\\\nDNN & Deep Neural Network \\\\\nDPPO & \\begin{tabular}[c]{@{}l@{}}Distributed Proximal Policy\\\\ Optimization\\end{tabular} \\\\\nDQL & Deep Q-Learning \\\\\nDQN & Deep Q-Network \\\\\nDRL & Deep Reinforcement Learning \\\\\nECG & Electrocardiogram \\\\\nEEG & Electroencephalogram \\\\\nFANET & Flying Ad-hoc Network \\\\\nFB & Federated Bandit \\\\\nFc & Fully connected \\\\\nFL & Federated Learning \\\\\nFNN & Feed forward Neural Network \\\\\nGAN & Generative Adversial Networks \\\\\nGDP & Gross Domestic Product \\\\\nIID & \\begin{tabular}[c]{@{}l@{}}Independent and Identically\\\\ Distributed\\end{tabular} \\\\\nIoT & Internet of Things \\\\\nIoV & Internet of Vehicles \\\\\nLSTM & Long Short Term Memory \\\\\nMAB & Multi-Agent Bandit \\\\\nMADDPG & 
\\begin{tabular}[c]{@{}l@{}} Multi-Agent Deep Deterministic \\\\ Policy Gradient\\end{tabular} \\\\\nMARL & \\begin{tabular}[c]{@{}l@{}}Multi-Agent Reinforcement\\\\ Learning\\end{tabular} \\\\\nMDP & Markov Decision Process \\\\\nMEC & Mobile Edge Computing \\\\\nMLP & Multi-Layer Percepteron \\\\\nNN & Neural Network \\\\\nPAC & Probably Approximately Correct \\\\\nPAIaas & Pervasive AI as a service \\\\\nPOMDP & \\begin{tabular}[c]{@{}l@{}}Partially Observable Markov\\\\ Decision Process\\end{tabular} \\\\\nPOMG & \\begin{tabular}[c]{@{}l@{}}Partially Observable Markov\\\\ Game\\end{tabular} \\\\\nppb & part per billion \\\\\nppm & part per million \\\\\nPPO & Proximate Policy Optimization \\\\\nQoE & Quality of Experience \\\\\nQoS & Quality of Service \\\\\nRL & Reinforcement Learning \\\\\nrMSE & \\begin{tabular}[c]{@{}l@{}}regularized Maximum Likelihood\\\\ Estimation\\end{tabular} \\\\\nRNN & Recurrent Neural Network \\\\\nSGD & Stochastic Gradient Descent \\\\\nSINR & \\begin{tabular}[c]{@{}l@{}}Signal to Interference plus \\\\ Noise Ratio\\end{tabular} \\\\\nTPU & Tensor Processing Unit \\\\\nUAV & Unmanned Aerial Vehicle \\\\\nUCB & Upper Confidence Bound \\\\\nUE & User Equipment\\\\\nVR & Virtual Reality \\\\\nWAN & Wide Area Network \\\\\nXAI & Explainable AI \\\\\n\\begin{tabular}[c]{@{}l@{}}6G, 5G,\\\\ 4G\\end{tabular} & Sixth, Fifth, Fourth Generations\n\\end{tabular}\n\\end{table}\n\\end{comment}", "id": "e2cbaac2-fe2b-4ece-bd59-5c80205f2c6b", "level": "subsection", "origin_cites_number": 0, "parent_id": "2ca60ad7-b1a8-4a6c-8e91-ae361d2993c4", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Introduction" ], [ "subsection", "Contributions and structure of the paper" ] ], "subsections": [], "title": "Contributions and structure of the paper" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{pervasive_systems}", "id": 
"d3eb54a2-bc51-4bd4-9d8e-214b42e424f4", "level": "section", "origin_cites_number": 0, "parent_id": "7aedf107-610a-4e90-9d9d-b0a1b2dcb03c", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of pervasive computing" ] ], "subsections": [ "143aad4c-4161-4906-8b9b-0b1f3dde98c5", "fa09ade2-b90a-43ea-b14f-f6ce789482a1", "f5e2a266-16ab-494b-85d0-e254d34de1e8", "716b2cc0-a652-4114-b3fd-f2e5117f0b41" ], "title": "Fundamentals of pervasive computing" }, { "cite_extract_rate": 0, "cites": [], "content": "Pervasive computing , also named ubiquitous computing, is the growing trend to embed computational capabilities in all devices in order to enable them to communicate efficiently and accomplish any computing task, while minimizing their resource consumption, e.g., battery, memory, CPU time, etc. Pervasive computing can occur in any device, in any format, in any place, and at any time. More specifically, it can span from resource-constrained devices to highly performant servers and can involve cloud datacenters, mobile edge computing servers, mobile devices, wearable computers, embedded systems, laptops, tablets, a pair of smart glasses, and even a refrigerator or a TV. These ubiquitous devices are constantly connected and available for any task. In other words, we are not talking anymore about devices acting on passive data. 
Instead, the pervasive systems are able to collect, process, and communicate any data type or size, understand their surroundings, adapt to the input context, and enhance humans’ experiences and lifestyles.", "id": "143aad4c-4161-4906-8b9b-0b1f3dde98c5", "level": "subsection", "origin_cites_number": 2, "parent_id": "d3eb54a2-bc51-4bd4-9d8e-214b42e424f4", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of pervasive computing" ], [ "subsection", "Definition" ] ], "subsections": [], "title": "Definition" }, { "cite_extract_rate": 0, "cites": [], "content": "The pervasive systems are characterized by highly heterogeneous devices (see Fig. \\ref{participants}), where the critical challenge is to design a scalable infrastructure able to dynamically discover different components, manage their interconnection and interaction, interpret their context, and adapt rapidly to the deployment of new software and user interfaces. 
A pervasive system can be composed of:\n\\begin{figure}[!h]\n\\centering\n\t\\includegraphics[scale=0.5]{Figures/Pervasive_architecture.pdf}\n\t\\caption{Ubiquitous participants.}\n\t\\label{participants}\n\\end{figure}", "id": "fa09ade2-b90a-43ea-b14f-f6ce789482a1", "level": "subsection", "origin_cites_number": 0, "parent_id": "d3eb54a2-bc51-4bd4-9d8e-214b42e424f4", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of pervasive computing" ], [ "subsection", "Ubiquitous participants" ] ], "subsections": [ "8ccdc4eb-a5b6-4d13-8e48-3be24f209874", "235df246-40b3-4d92-8355-a2651c0250ba", "4f570801-4296-4207-90ac-1a00d464f57e", "7813dea6-2572-4021-8c0a-9e4ed8282bb8", "21b234be-e825-4776-8cf4-215e80d7e822" ], "title": "Ubiquitous participants" }, { "cite_extract_rate": 0.2, "cites": [ 3411 ], "content": "Cloud computing is defined as delivering on-demand services from storage, management, advertising, and computation to artificial intelligence and natural language processing, following different pricing models, such as pay-as-you-go and subscription-based billing.\nHence, instead of owning computing servers, companies, operators, and end-users can exploit the high-performance facilities offered by the cloud service provider. In this way, they can benefit from better computational capacities, while reducing the cost of owning and maintaining a computation infrastructure, and paying only for their requested services. 
Cloud computing underpins a broad range of services, including data storage, cloud back-up of photos, video streaming services, and online gaming.", "id": "8ccdc4eb-a5b6-4d13-8e48-3be24f209874", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "fa09ade2-b90a-43ea-b14f-f6ce789482a1", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of pervasive computing" ], [ "subsection", "Ubiquitous participants" ], [ "subsubsection", "Data center and cloud servers" ] ], "subsections": [], "title": "Data center and cloud servers" }, { "cite_extract_rate": 0, "cites": [], "content": "Edge computing is introduced as a solution to bring cloud facilities to the vicinity of users in order to minimize the perceived service latency, relieve data transmission, and ease cloud congestion. In other words, edge computing has become an essential complement to the cloud and even a substitute in some scenarios. Services and computing capabilities equipped at the edge of cellular networks are called Mobile Edge Computing (MEC) facilities . 
Deploying MEC servers within the edge Base Stations (BSs) allows providing location and context awareness, deploying new services quickly and flexibly, and enhancing the Quality of Service (QoS).", "id": "235df246-40b3-4d92-8355-a2651c0250ba", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "fa09ade2-b90a-43ea-b14f-f6ce789482a1", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of pervasive computing" ], [ "subsection", "Ubiquitous participants" ], [ "subsubsection", "Mobile Edge Computing (MEC) servers" ] ], "subsections": [], "title": "Mobile Edge Computing (MEC) servers" }, { "cite_extract_rate": 0, "cites": [], "content": "Cloudlets are the network components that connect cloud computing to mobile computing (e.g., a cluster of computers). This network part presents the middle layer of the three-tier hierarchical architecture composed of mobile devices, micro-clouds, and cloud data centers. The role of cloudlets is to define the algorithms and implement the functionalities that support low-latency edge-cloud task offloading.", "id": "4f570801-4296-4207-90ac-1a00d464f57e", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "fa09ade2-b90a-43ea-b14f-f6ce789482a1", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of pervasive computing" ], [ "subsection", "Ubiquitous participants" ], [ "subsubsection", "Cloudlet devices" ] ], "subsections": [], "title": "Cloudlet devices" }, { "cite_extract_rate": 0, "cites": [], "content": "Fog and cloud computing share the same set of services provided to end-users, such as storage, networking, computing, and artificial intelligence. However, the cloud architecture is composed of fully distributed large-scale data centers. 
Meanwhile, fog services focus on IoT devices in a specific geographical area and target applications requiring real-time response such as live streaming, interactive applications, and online collective gaming. Examples include phones, wearable health monitoring devices, connected vehicles, etc.", "id": "7813dea6-2572-4021-8c0a-9e4ed8282bb8", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "fa09ade2-b90a-43ea-b14f-f6ce789482a1", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of pervasive computing" ], [ "subsection", "Ubiquitous participants" ], [ "subsubsection", "Fog devices" ] ], "subsections": [], "title": "Fog devices" }, { "cite_extract_rate": 0, "cites": [], "content": "In most of the studies, the interpretation of edge devices (i.e., edge nodes and IoT devices) is still ambiguous , which means the difference between end or IoT devices and edge nodes is still unclear. Yet, common consensus defines the end-devices/IoT as ubiquitous gadgets that are embedded with processing capacities, sensors, and software, serving to connect and exchange data with other systems over different communication networks.\nMeanwhile, the edge nodes are defined as devices in higher levels including fog nodes, MEC servers, and cloudlets. The edge nodes are expected to possess high storage and computation capacities and to offer high-quality networking and processing service with a lower response time compared to the cloud remote servers. \nDriven by the expansion and pervasiveness of the computing devices, we believe that the heterogeneity of ubiquitous systems will increase in the future. 
These devices have to interact seamlessly and coherently, despite their difference in terms of software and hardware capacities.", "id": "21b234be-e825-4776-8cf4-215e80d7e822", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "fa09ade2-b90a-43ea-b14f-f6ce789482a1", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of pervasive computing" ], [ "subsection", "Ubiquitous participants" ], [ "subsubsection", "Edge devices" ] ], "subsections": [], "title": "Edge devices" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "f5e2a266-16ab-494b-85d0-e254d34de1e8", "level": "subsection", "origin_cites_number": 0, "parent_id": "d3eb54a2-bc51-4bd4-9d8e-214b42e424f4", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of pervasive computing" ], [ "subsection", "Architecture and intersection with AI" ] ], "subsections": [ "b9c46377-8cf8-458d-b66d-5ecaa53b0e15" ], "title": "Architecture and intersection with AI" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 3410 ], "content": "\\begin{figure}[H]\n\\centering\n\t\\includegraphics[scale=0.645]{Figures/UbiLayers.pdf}\n\t\\caption{Pervasive architecture.}\n\t\\label{Architecture}\n\\end{figure}\nFig. 
\ref{Architecture} illustrates the hierarchical architecture of a pervasive system , which is composed of three layers:\n\begin{itemize}\n \item Data source layer: the data is collected from different monitored sources, generating information about the physical world or human activities, multimedia data such as images and audio, and social media information.\n \item Data management layer: this layer involves the storage and integration of heterogeneous data incoming from pervasive sources, the cleaning and pre-processing that tailor the context of the system, and the data analytics that convert the raw information into useful and personalized insights using multiple approaches, such as business intelligence and artificial intelligence.\n \item Application layer: finally, the insights generated by the previous layer are used to offer multiple intelligent applications, such as health-advisor and smart-home applications. \n\end{itemize}\nIn our paper, we focus only on the data management layer, specifically on data analytics using artificial intelligence. 
The data source layer is thoroughly discussed in , whereas the application layer can be found in .", "id": "b9c46377-8cf8-458d-b66d-5ecaa53b0e15", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "f5e2a266-16ab-494b-85d0-e254d34de1e8", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of pervasive computing" ], [ "subsection", "Architecture and intersection with AI" ], [ "subsubsection", "Architecture" ] ], "subsections": [], "title": "Architecture" }, { "cite_extract_rate": 0, "cites": [], "content": "\begin{figure}[H]\n\centering\n\t\includegraphics[scale=0.6]{Figures/distVSPervasiveV4.pdf}\n\t\caption{AI forms and considerations.}\n\t\label{distVSPervasiveV4}\n\end{figure}\n Artificial intelligence techniques are centralized by design, and most of the challenges revolve around accuracy, complexity, computation power, memory, and explainability. With the evolution and migration to edge computing, characterized by scarce resources, solving these issues, as well as facing the new privacy-related challenges, becomes crucial. This called for AI distribution, where the training and the inference (e.g., data, models, and policies) are split into smaller parts in order to reduce local computation and memory overheads, while considering the privacy and latency constraints. However, the nascent IoT applications deployed on large-scale IoT devices have driven distributed computation towards further dispersion, which urged the support of interoperability, high heterogeneity, scalability, context-awareness, smart resource allocation, coordination, and invisibility, which are the characteristics of pervasive computing. To this end, the intersection between AI and pervasive computing came to light, paving the way to introduce \textit{\"Pervasive AI\"}. As shown in Fig. 
\\ref{distVSPervasiveV4} , Pervasive AI is a special class of distributed AI, where the decentralization of AI models is managed using intelligent techniques that take into consideration the IoT resource constraints, their heterogeneity, the application context, etc.", "id": "716b2cc0-a652-4114-b3fd-f2e5117f0b41", "level": "subsection", "origin_cites_number": 0, "parent_id": "d3eb54a2-bc51-4bd4-9d8e-214b42e424f4", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of pervasive computing" ], [ "subsection", "Intersection with AI" ] ], "subsections": [], "title": "Intersection with AI" }, { "cite_extract_rate": 1, "cites": [ 166 ], "content": "\\label{AI}\nSince approaches and techniques reviewed in this survey rely on artificial intelligence and deep neural networks, we start first by providing a brief background of deep learning. A deeper and detailed review of AI can be found in the reference book in .", "id": "3712d67e-e696-4951-a604-d1fa86a3fe72", "level": "section", "origin_cites_number": 1, "parent_id": "7aedf107-610a-4e90-9d9d-b0a1b2dcb03c", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ] ], "subsections": [ "5df83101-053a-4cfe-9d04-c09b58e36f6b", "9342660a-236d-4249-92d7-89fa948e8ef1", "8e6febe5-4bb9-43f7-b364-4dc285c93213", "b85221a3-fb27-44cf-b1c0-77f43b2c6992", "6caa0663-4e5f-484e-83a1-18c728b2a299" ], "title": "Fundamentals of Artificial Intelligence" }, { "cite_extract_rate": 0, "cites": [], "content": "Even though AI has recently gained enormous attention, it is not a new term and it was initially coined in 1956. \nMultiple techniques and procedures fall under AI broad umbrella, such as rule-based systems, expert systems, control systems, and well-known machine learning algorithms. 
Machine learning generally includes three categories, which are supervised, unsupervised and reinforcement learning. An important branch of machine learning is deep learning that can be supervised or unsupervised and it is based on simulating the biological nervous system and performing the learning through subsequent layers transformation. As most of the pervasive applications are led by deep learning techniques and recently reinforcement learning, the crossover between the above-mentioned domains (shown in Fig. \\ref{DL}) defines the scope of this paper.\n\\begin{figure}[!h]\n\\centering\n\t\\includegraphics[scale=0.56]{Figures/DL3.pdf}\n\t\\caption{Relation between AI,\nmachine learning, deep learning, and reinforcement learning. This survey mainly focuses on\npervasive deep and reinforcement learning.}\n\t\\label{DL}\n\\end{figure}\n\\begin{figure*}[!h]\n\\centering\n\t\\includegraphics[scale=0.7]{Figures/DNN_types.pdf}\n\t\\caption{ Examples of NN structures: (a) Multilayer Perceptron (MLP), (b) Convolutional Neural Network (CNN), (c) Residual Neural Network, (d) Randomly Wired Neural Network.}\n\t\\label{DNN_types}\n\\end{figure*}", "id": "5df83101-053a-4cfe-9d04-c09b58e36f6b", "level": "subsection", "origin_cites_number": 0, "parent_id": "3712d67e-e696-4951-a604-d1fa86a3fe72", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Background" ] ], "subsections": [ "1bf807f0-03f6-4024-96db-b31ce1b91bb8", "12ea514f-8733-4c19-9346-9ed4ba7a8c74" ], "title": "Background" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 514, 97, 7430, 895, 305 ], "content": "In the following, we briefly present an overview of the most common deep learning networks. \nNeural networks consist of a first input layer, one or multiple hidden layers, and a last output layer, as shown in Fig. \\ref{DNN_types}. 
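This layered structure (input layer, hidden layers, output layer, each feeding the next) can be sketched as a minimal forward pass; the layer sizes below are illustrative, not taken from the survey:

```python
import numpy as np

def relu(x):
    # Element-wise non-linearity applied after each hidden layer.
    return np.maximum(0.0, x)

def mlp_forward(x, weights, biases):
    # Propagate the input through the stack: each layer's output
    # feeds the next layer; the final layer emits the prediction.
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    return h @ weights[-1] + biases[-1]

rng = np.random.default_rng(0)
# Illustrative sizes: 4 inputs -> 8 hidden units -> 3 outputs.
weights = [rng.standard_normal((4, 8)), rng.standard_normal((8, 3))]
biases = [np.zeros(8), np.zeros(3)]
logits = mlp_forward(rng.standard_normal(4), weights, biases)
print(logits.shape)  # (3,)
```

Stacking more weight matrices in the same loop is all that is needed to deepen the network.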
When the neural network contains a high number of sequential layers, it can be called Deep Neural Network (DNN). The DNN layers include smaller units, namely neurons.\nMost commonly, the output of one layer is the input of the next layer and the output of the final layer is either a classification or a feature. The correctness of the prediction is assessed by the loss function that calculates the error between the true and predicted values. \nThe DNN networks have various structures. Hence, we introduce the fundamentals of the most known types as follows: \n\\begin{comment}\n\\begin{table*}[]\n\\footnotesize\n\\tabcolsep=0.08cm\n\\caption{Parameters comparison of state-of-the-art DNNs trained on ImageNet , in terms of flops\\protect\\footnotemark.\\\\ \nMACC: Multiply-ACCumulate operations.\n}\n\\label{Macc}\n\\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}\n\\hline\n\\textbf{Model} & \\textbf{Comp} & \\textbf{Add} & \\textbf{Div} & \\textbf{MACC} & \\textbf{Activations} & \\textbf{params} & \\begin{tabular}[c]{@{}l@{}}\\textbf{size} \\\\\\textbf{(Mb)}\\end{tabular} & \\textbf{pros} & \\textbf{cons} \\\\ \\hline\nVGG 16 & 196.85 M & 10 K & 10 K & 154.7 G & 288.03 M & 138.36 M & 512.2 & \\begin{tabular}[c]{@{}l@{}}- Spatial exploitation.\\\\ - Simple and homogeneous\\\\ topology.\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}- Computationally expensive\\\\ fully connected layers.\\end{tabular} \\\\ \\hline\nAlexNet & 17.69 M & 4.78 M & 9.55 M & 7.27 G & 20.81 M & 60.97 M & 217 & \\begin{tabular}[c]{@{}l@{}}- Spatial exploitation.\\\\ - Low, medium, and high\\\\ feature extraction.\\\\ - Introduces regularization \\\\ in CNN.\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}- Inactive neurons in the first\\\\ layers.\\\\ - Large filter size that causes\\\\ artifacts aliasing in the output\\\\ feature maps.\\end{tabular} \\\\ \\hline\nGoogleNet & 161.07 M & 8.83 M & 16.64 M & 16.04 G & 102.19 M & 7 M & 40 & \\begin{tabular}[c]{@{}l@{}}- Spatial exploitation.\\\\ - Multi-scale 
layers.\\\\ - Reduces number of params by\\\\ using bottleneck and average\\\\ pooling layers.\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}- Potential lose of important\\\\ information because of \\\\ representational bottleneck.\\end{tabular} \\\\ \\hline\n\\begin{tabular}[c]{@{}l@{}}ResNet 50 \\\\ \\\\\\\\ \\end{tabular} & 10.89 M & 16.21 M & 10.59 M & 3.87 G & 46.72 M & 25.56 M & 97.7 & \\multirow{2}{*}{\\begin{tabular}[c]{@{}l@{}}- Depth and Multi-path.\\\\ - Introduce residual learning.\\\\ - Solve the vanishing gradient\\\\ problem.\\end{tabular}} & \\multirow{2}{*}{\\begin{tabular}[c]{@{}l@{}}- Complex architecture.\\\\ - Multiple layers have no \\\\ contribution for the inference.\\\\ - Potential re-learning of \\\\ redundant feature maps.\\end{tabular}} \\\\ \\cline{1-8}\n\\begin{tabular}[c]{@{}l@{}}ResNet 152 \\\\ \\\\ \\\\\\end{tabular} & 22.33 M & 35.27 M & 22.03 M & 11.3 G & 100.11 M & 60.19 M & 230 & & \\\\ \\hline\nSqueezeNet & 9.67 M & 226 K & 1.51 M & 861.34 M & 12.58 M & 1.25 M & 4.7 & - \\begin{tabular}[c]{@{}l@{}} Squeezes non-important \\\\ features. \\end{tabular}& - Lower accuracy. 
\\\\ \\hline\n\\end{tabular}\n\\end{table*}\n\\end{comment}", "id": "1bf807f0-03f6-4024-96db-b31ce1b91bb8", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "5df83101-053a-4cfe-9d04-c09b58e36f6b", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Background" ], [ "subsubsection", "Deep learning and Deep Neural Networks" ] ], "subsections": [ "010c4789-ae88-4520-ae39-2e919e67a236", "af0fb1b2-5a8e-4998-a900-bf8d2e43b2f2", "b14650c1-0115-4222-87d4-83d1e8ac5490", "aa5955bb-48c6-4f39-a8a9-642b9947d3a2" ], "title": "Deep learning and Deep Neural Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "If the output of one layer is fed forward to the subsequent layer, the Neural Network (NN) is termed as the Feed Forward NN (FNN). The baseline FNN is called MLP or Vanilla. As shown in Fig. \\ref{DNN_types} (a), each layer is Fully connected (Fc) to the next one and the output is sent to the next layer’s perceptron without any additional computation or recursion other than the activation function.", "id": "010c4789-ae88-4520-ae39-2e919e67a236", "level": "paragraph", "origin_cites_number": 0, "parent_id": "1bf807f0-03f6-4024-96db-b31ce1b91bb8", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Background" ], [ "subsubsection", "Deep learning and Deep Neural Networks" ], [ "paragraph", "Multilayer Perceptron (MLP)" ] ], "subsections": [], "title": "Multilayer Perceptron (MLP)" }, { "cite_extract_rate": 0.5, "cites": [ 514 ], "content": "\\label{CNN}\nProcessing vision-based tasks (e.g., image data), using MLP, potentially requires a deep model with a huge number of perceptrons, as for each data pixel a perceptron is assigned, which 
makes the network hard to train and scale. One of the successors of MLP is the CNN, which was introduced to solve this problem by defining additional pre-processing layers, i.e., convolutional (conv) and pooling layers, as shown in Fig. \ref{DNN_types} (b). \nFurthermore, the convolutional layer includes a set of learnable parameters, namely filters, which have the same number of channels as the input feature maps but smaller spatial dimensions. Each filter channel slides along the length and width of the corresponding input feature map and computes the inner product with the data. The summation of all the channel outputs produces one feature map. Consequently, the number of output feature maps equals the number of filters, as illustrated in Fig. \ref{Conv}. The main difference between the Fc and the conv layers is that each neuron in an Fc network is connected to the entire input, whereas a conv neuron is connected to only a subset of the input. The second basic component of the CNN is the pooling operation, whose objective is to reduce the spatial size of the input feature maps and minimize the computation time. 
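The convolutional operation just described (each filter sliding over the input, channel-wise inner products summed into one feature map per filter) can be sketched directly; the tensor shapes are illustrative:

```python
import numpy as np

def conv2d(x, filters):
    # x: (channels, height, width); filters: (num_filters, channels, k, k).
    # Each filter is slid over the input; the channel-wise inner products
    # are summed, producing exactly one output feature map per filter.
    F, C, k, _ = filters.shape
    _, H, W = x.shape
    out = np.zeros((F, H - k + 1, W - k + 1))
    for f in range(F):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[f, i, j] = np.sum(x[:, i:i + k, j:j + k] * filters[f])
    return out

rng = np.random.default_rng(0)
# 3-channel 8x8 input, five 3x3 filters -> five 6x6 feature maps.
fmaps = conv2d(rng.standard_normal((3, 8, 8)), rng.standard_normal((5, 3, 3, 3)))
print(fmaps.shape)  # (5, 6, 6)
```

Note how the output depth (5) equals the number of filters, matching the description above.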
\nA milestone for CNN applied to computer vision problems is the design of AlexNet and VGG .\n\begin{figure}[h]\n\centering\n\t\includegraphics[scale=0.65]{Figures/Conv_2.pdf}\n\t\caption{Convolutional task.}\n\t\label{Conv}\n\end{figure}", "id": "af0fb1b2-5a8e-4998-a900-bf8d2e43b2f2", "level": "paragraph", "origin_cites_number": 2, "parent_id": "1bf807f0-03f6-4024-96db-b31ce1b91bb8", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Background" ], [ "subsubsection", "Deep learning and Deep Neural Networks" ], [ "paragraph", "Convolutional Neural Networks (CNN)" ] ], "subsections": [], "title": "Convolutional Neural Networks (CNN)" }, { "cite_extract_rate": 0.5, "cites": [ 97 ], "content": "\label{DRN}\nFollowing the success of AlexNet and VGG, deep residual networks have achieved a new breakthrough in computer vision challenges in recent years. In particular, residual networks paved the way for the deep learning community to train networks of hundreds and even thousands of layers, while achieving high performance.\nResNet is the state-of-the-art variant of the residual network. This model uses so-called shortcut/skip connections that skip multiple nodes and feed the intermediate output to a destination layer (see Fig. \ref{DNN_types} (c)), which serves as a memory for the model. A similar idea is applied in the Long Short Term Memory (LSTM) networks , where a forget gate is added to control the information that will be fed to the next time step. 
LSTM belongs to the Recurrent Neural Networks (RNN) family.", "id": "b14650c1-0115-4222-87d4-83d1e8ac5490", "level": "paragraph", "origin_cites_number": 2, "parent_id": "1bf807f0-03f6-4024-96db-b31ce1b91bb8", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Background" ], [ "subsubsection", "Deep learning and Deep Neural Networks" ], [ "paragraph", "Deep Residual Networks" ] ], "subsections": [], "title": "Deep Residual Networks" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 3413, 3412, 7217 ], "content": "The aforementioned networks focus on connecting operations such as convolutional tasks through carefully chosen, sequential paths. Unlike previous DNNs, randomly wired networks arbitrarily connect the same operations throughout the sequential micro-architectures, as shown in Fig. \ref{DNN_types} (d). Still, some decisions are required to design a random DNN, such as the number of stages that down-sample feature maps using max pooling and the number of nodes to deploy in each stage. The advantage of randomly wired networks over the other models is that the training is faster, the number of weights is reduced, and the memory footprint is optimized.\\\nFig. \ref{DNN_types} presents the NN structures introduced in this section, which serve as background for the following sections. 
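Among the structures above, the residual skip connection admits a particularly compact sketch (a toy illustration of y = F(x) + x, not actual ResNet code):

```python
import numpy as np

def residual_block(x, W1, W2):
    # y = F(x) + x: the input skips the transformations and is added
    # back to the block output, easing training of very deep stacks.
    h = np.maximum(0.0, x @ W1)
    return h @ W2 + x

rng = np.random.default_rng(1)
x = rng.standard_normal(16)
W1 = rng.standard_normal((16, 16))
W2 = np.zeros((16, 16))
# With the second transformation zeroed, the block reduces to identity,
# which is why residual stacks are easy to deepen without degradation:
y = residual_block(x, W1, W2)
print(np.allclose(y, x))  # True
```

The identity check illustrates why adding residual blocks cannot make the representable function worse: a block can always learn to pass its input through.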
Other state-of-the-art structures achieved unprecedented performance in multiple deep learning applications , including Recurrent Neural Networks (RNNs) , Auto-Encoders (AEs) , and Generative Adversarial Networks (GANs) ; however, detailed overview of all models falls outside the scope of this paper.", "id": "aa5955bb-48c6-4f39-a8a9-642b9947d3a2", "level": "paragraph", "origin_cites_number": 5, "parent_id": "1bf807f0-03f6-4024-96db-b31ce1b91bb8", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Background" ], [ "subsubsection", "Deep learning and Deep Neural Networks" ], [ "paragraph", "Randomly Wired Networks" ] ], "subsections": [], "title": "Randomly Wired Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "Reinforcement learning, also known as sequential decision making, refers to techniques that update the model/policy at each time step, i.e., when receiving each new instance of data. \nThe advantage of RL is that it is adaptable, as it does not have any knowledge or assumption about the data distribution. In this way, if the trend of data drifts or morphs, the policy or the model can adapt to the changes on the fly.\n\\begin{comment}\nIn a typical online learning formulation, an agent interacts with an environment by performing actions at discrete time steps. Each of these actions results in a feedback signal that is referred to as reward, which describes the goodness of that action. This is fundamentally different from supervised learning, where the true values of the trained data are known and the aim is to learn a model capable of classifying the inference data samples or forecasting targeted features. The online learning problem is similar to the optimal control problem in the field of systems control . 
The difference is that a perfect model that describes the environment is used in the latter, whereas the online system model is only estimated from trials.\nTo motivate for online learning, we present some real-world systems modeled and solved using algorithms from the online settings. Consider a website that wants to maximize the engagement and relevance of articles presented to users. When a new user arrives, the website needs to decide on an article header to show and observe whether or not the user interacts with this article. In this example, the selected action is the article to display, and the reward is binary, $1$ if clicked, $0$ otherwise. One can think of many recommendation-based systems that could be modeled similarly, such as movie-recommender systems where actions are which movie to recommend, and web search where the actions are which results to show. In these scenarios, the reward is assessed according to the users' satisfaction.\n\\end{comment}", "id": "12ea514f-8733-4c19-9346-9ed4ba7a8c74", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "5df83101-053a-4cfe-9d04-c09b58e36f6b", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Background" ], [ "subsubsection", "Reinforcement Learning (RL)" ] ], "subsections": [ "5bf9ef40-94eb-4878-940a-3cab123f3a0e", "769a0a3f-bd0b-40db-9bd3-432dcab1bb75" ], "title": "Reinforcement Learning (RL)" }, { "cite_extract_rate": 0, "cites": [], "content": "The bandit problem represents the simplest RL formulation, where an agent interacts with an environment by performing actions at discrete time steps. Each of these actions results in a feedback signal that is referred to as reward, which describes the goodness of that action. Consider a website that wants to maximize the engagement and relevance of articles presented to users. 
When a new user arrives, the website needs to decide on an article header to show and observe whether or not the user interacts with this article. In this example, the selected action is the article to display, and the reward is binary, $1$ if clicked, $0$ otherwise.\nNote that a critical assumption in bandits is that actions do not have any effect on the agent other than causing a sample of a reward signal. In cases where actions may transform the environment from one well-described state to another, a Markov Decision Process (MDP) is required to model the problem and the formulation is known as MDP reinforcement learning.", "id": "5bf9ef40-94eb-4878-940a-3cab123f3a0e", "level": "paragraph", "origin_cites_number": 0, "parent_id": "12ea514f-8733-4c19-9346-9ed4ba7a8c74", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Background" ], [ "subsubsection", "Reinforcement Learning (RL)" ], [ "paragraph", "Bandit learning" ] ], "subsections": [], "title": "Bandit learning" }, { "cite_extract_rate": 0.5, "cites": [ 620, 1390, 2219 ], "content": "This RL concept is based on learning how to map the MDP's states to actions in order to maximize the long-term reward signal. The RL agent is not told which action to choose; instead, it discovers the actions that achieve the highest reward by trying different combinations and receiving immediate gains and penalties, a process that can be modeled as an MDP. Different from bandit learning, a chosen action in RL impacts not only the immediate reward, but also all subsequent situations and their related rewards. \nDeep Reinforcement Learning (DRL) combines reinforcement learning and deep learning, as illustrated in Fig. \ref{DRL}. 
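The article-selection bandit described above (binary click reward) can be sketched with epsilon-greedy action selection; the click probabilities below are invented for illustration:

```python
import random

def run_bandit(click_probs, steps=5000, eps=0.1, seed=0):
    # Epsilon-greedy: estimate each article's click rate from the
    # observed 0/1 rewards and mostly show the current best article.
    rng = random.Random(seed)
    counts = [0] * len(click_probs)
    values = [0.0] * len(click_probs)
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(len(click_probs))               # explore
        else:
            a = max(range(len(click_probs)), key=values.__getitem__)  # exploit
        reward = 1 if rng.random() < click_probs[a] else 0    # click or not
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]         # running mean
    return values

est = run_bandit([0.05, 0.30, 0.10])  # hypothetical per-article click rates
print(est)
```

After enough steps the estimate for the second article dominates, so the greedy choice converges to the highest-click-rate action.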
The DRL is well-suited, and even indispensable, when the environment is highly dynamic and high-dimensional and the number of states is large or continuous. \n\begin{figure}[!h]\n\centering\n\t\includegraphics[scale=0.6]{Figures/DRL.pdf}\n\t\caption{Deep Reinforcement Learning (DRL) design.}\n\t\label{DRL}\n\end{figure}\nVariants of DRL include the deep policy gradient RL , the Deep Q-Networks (DQN) , Distributed Proximal Policy Optimization (DPPO) , and Asynchronous Advantage Actor-Critic .", "id": "769a0a3f-bd0b-40db-9bd3-432dcab1bb75", "level": "paragraph", "origin_cites_number": 6, "parent_id": "12ea514f-8733-4c19-9346-9ed4ba7a8c74", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Background" ], [ "subsubsection", "Reinforcement Learning (RL)" ], [ "paragraph", "Markov Decision Process (MDP)-based Learning" ] ], "subsections": [], "title": "Markov Decision Process (MDP)-based Learning" }, { "cite_extract_rate": 0, "cites": [], "content": "The assessment of the DNN performance depends on the proximity-aware IoT application where deep learning is used. For example, for object detection, face authentication, or self-driving cars, the accuracy is of ultrahigh importance. Yet, some performance metrics are general and not specific to any application, including latency, memory footprint, and energy consumption. 
An overview of different performance metrics is presented as follows:", "id": "9342660a-236d-4249-92d7-89fa948e8ef1", "level": "subsection", "origin_cites_number": 0, "parent_id": "3712d67e-e696-4951-a604-d1fa86a3fe72", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Performance metrics" ] ], "subsections": [ "706e42d3-2b4c-434f-bc3b-d8e33723a0be", "379e8563-e924-49b5-9a29-fce15806c7cc", "d648b147-e10b-4bf1-bfd9-553b4585cffa", "51a1f717-cc74-4cd5-ad83-e394483fb00f", "2594c2d2-e21b-4f8a-b476-5aacfed5fb63" ], "title": "Performance metrics" }, { "cite_extract_rate": 0, "cites": [], "content": "The latency, typically measured in milliseconds, is defined as the required time to perform the whole inference/training process, which includes the data pre-processing, data transmission, the classification process or the model training, and the post processing. Real-time applications led by artificial intelligence (e.g., autonomous vehicles and AR/VR gaming) have usually stringent latency constraints, of around 100 ms . 
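The end-to-end latency defined above can be measured stage by stage with a simple timer; the stages below are illustrative stand-ins for real pre-processing, model inference, and post-processing:

```python
import time

def timed(stage_fn, arg):
    # Return the stage output and its elapsed time in milliseconds.
    start = time.perf_counter()
    out = stage_fn(arg)
    return out, (time.perf_counter() - start) * 1e3

def preprocess(x):
    return [v / 255.0 for v in x]   # stand-in for real pre-processing

def infer(x):
    return sum(x)                   # stand-in for the model forward pass

def postprocess(y):
    return round(y, 3)              # stand-in for post-processing

x = list(range(256))
latencies = {}
x, latencies['pre'] = timed(preprocess, x)
y, latencies['infer'] = timed(infer, x)
y, latencies['post'] = timed(postprocess, y)
total_ms = sum(latencies.values())  # end-to-end latency in milliseconds
print(sorted(latencies))            # ['infer', 'post', 'pre']
```

Breaking the total down per stage shows where a pipeline violates a latency budget such as the 100 ms bound mentioned above.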
Hence, processing near the data source is advantageous for a fast inference response.\nThe latency metric is affected by different factors, such as the size of the DNN model, the computational capacity of the host device, and the transmission efficiency.", "id": "706e42d3-2b4c-434f-bc3b-d8e33723a0be", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "9342660a-236d-4249-92d7-89fa948e8ef1", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Performance metrics" ], [ "subsubsection", "Latency" ] ], "subsections": [], "title": "Latency" }, { "cite_extract_rate": 0, "cites": [], "content": "Unlike the cloud and edge servers, the IoT devices are battery-limited (e.g., commercial drones). Moreover, the communication and computation overhead caused by deep model training/inference incurs huge energy consumption. Hence, energy efficiency, typically measured in nanojoules, is of great importance in the context of edge AI, and it primarily depends on the size of the DNN and the capabilities of the computing device.", "id": "379e8563-e924-49b5-9a29-fce15806c7cc", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "9342660a-236d-4249-92d7-89fa948e8ef1", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Performance metrics" ], [ "subsubsection", "Energy efficiency" ] ], "subsections": [], "title": "Energy efficiency" }, { "cite_extract_rate": 1, "cites": [ 3414 ], "content": "To perform DNN training/inference, significant cycles are spent on memory data transfer to/from the computational array, which makes it a highly intensive and challenging task. 
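The scale of these memory and compute costs can be estimated per layer. A sketch for a single convolutional layer follows; the 3-to-64-channel, 3x3, 224x224 shape is an illustrative VGG-style first-layer assumption, not a figure quoted from the survey:

```python
def conv_layer_cost(in_ch, out_ch, k, out_h, out_w, bytes_per_weight=4):
    # Parameter count, float32 memory, and MACCs for one k x k conv layer.
    params = out_ch * in_ch * k * k + out_ch          # weights + biases
    memory_bytes = params * bytes_per_weight
    # One k*k*in_ch inner product per output element, for each filter.
    maccs = out_h * out_w * out_ch * in_ch * k * k
    return params, memory_bytes, maccs

# Illustrative: 3x3 conv mapping 3 -> 64 channels on a 224x224 output map.
params, memory_bytes, maccs = conv_layer_cost(3, 64, 3, 224, 224)
print(params, maccs)  # 1792 86704128
```

Summing such per-layer costs over a whole network reproduces whole-model totals of the kind discussed next.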
For example, VGG 16 and AlexNet require 512 MB and 217 MB of memory, respectively, to store more than 138 M and 60 M weights, and their model complexities reach 154.7 G and 7.27 G Multiply-ACCumulate operations (MACC) . Such amounts of memory and computation, typically measured in Megabytes and numbers of multiplications respectively, are infeasible to execute on power- and resource-constrained devices with a real-time response.", "id": "d648b147-e10b-4bf1-bfd9-553b4585cffa", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "9342660a-236d-4249-92d7-89fa948e8ef1", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Performance metrics" ], [ "subsubsection", "Computation and memory footprint" ] ], "subsections": [], "title": "Computation and memory footprint" }, { "cite_extract_rate": 0, "cites": [], "content": "The communication overhead impacts the performance of the system when the DNN computation is offloaded to the cloud or other edge participants. Hence, it is indispensable to minimize this overhead, particularly in costly network infrastructures. The data overhead, typically measured in Megabytes, depends on the input and on how the model is designed, i.e., the types and configuration of the layers that determine the output size, in addition to the communication technology. 
Furthermore, fault-tolerance should be guaranteed to deal with communication failures efficiently.\n\begin{comment}\n\begin{figure*}[!h]\n\centering\n\t\includegraphics[scale=0.64]{Figures/DL_application_services.pdf}\n\t\caption{AI pervasive application and the related foundational DL services reviewed in this survey.}\n\t\label{services}\n\end{figure*}\n\end{comment}", "id": "51a1f717-cc74-4cd5-ad83-e394483fb00f", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "9342660a-236d-4249-92d7-89fa948e8ef1", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Performance metrics" ], [ "subsubsection", "Communication Overhead" ] ], "subsections": [], "title": "Communication Overhead" }, { "cite_extract_rate": 0.5, "cites": [ 3415, 603 ], "content": "IoT devices produce and offload a massive amount of data every second, which can result in serious privacy vulnerabilities and security attacks such as white-box attacks , data poisoning , and membership attacks . Guaranteeing the robustness and privacy of the DNN system has become a primary concern for the deep learning community. The traditional wisdom resorts to data encryption, pre-processing, and watermarking. Yet, all these solutions can be neutralized using model stealing attacks. Hence, more sophisticated defenses need to be designed to secure the DNN training and execution through data distribution. The robustness of a privacy mechanism is judged by its ability to protect the data from attacks while maintaining the accuracy performance.\nTo design an efficient deep learning network or select the adequate one for the targeted application, a large number of hyperparameters need to be considered. Therefore, understanding the trade-off between these parameters (e.g., latency, accuracy, energy, privacy, and memory) 
is essential before designing the model. Recently, automated machine learning frameworks responsible for DNN selection and parameter tuning have been introduced, such as Talos .", "id": "2594c2d2-e21b-4f8a-b476-5aacfed5fb63", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "9342660a-236d-4249-92d7-89fa948e8ef1", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Performance metrics" ], [ "subsubsection", "Privacy" ] ], "subsections": [], "title": "Privacy" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 3416 ], "content": "Several hardware and software libraries are publicly available for pervasive devices, particularly resource-limited ones, to enable DNN training and inference. As a first example, Google TensorFlow is an open-source deep learning framework released in 2015 to execute DNN tasks on heterogeneous distributed systems based on their estimated computational and communication capacities; it was later optimized to suit resource-constrained devices (e.g., Raspberry Pi) and GPU execution.\nAnother lightweight deep learning framework, developed by Facebook, is Caffe2, which provides a straightforward way to experiment with heterogeneous deep learning models on low-power devices.\nCore ML and DeepLearningKit are two machine learning frameworks commercialized by Apple to support pre-trained models on iPhone/iPad devices. More specifically, Core ML was designed to leverage the CPU/GPU of the end device for deep learning applications such as natural language and image processing, while DeepLearningKit supports more complex networks such as CNNs and is designed to utilize the GPU more efficiently for iOS-based applications.\nSince pervasive AI is still in its early stages, only a few frameworks are dedicated specifically to distributed learning. 
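Frameworks for distributed learning typically synchronize parameters across workers through a shared key-value store. A toy push/pull gradient-averaging sketch follows (a hypothetical stand-in, not any real framework's API):

```python
import numpy as np

class ToyKVStore:
    # Minimal parameter-server sketch: workers push gradients, the
    # store averages them and applies one update to shared weights.
    def __init__(self, init_weights, lr=0.1):
        self.weights = np.array(init_weights, dtype=float)
        self.lr = lr
        self._pushed = []

    def push(self, grad):
        self._pushed.append(np.asarray(grad, dtype=float))

    def pull(self):
        if self._pushed:
            avg = np.mean(self._pushed, axis=0)   # synchronize workers
            self.weights -= self.lr * avg
            self._pushed = []
        return self.weights.copy()

store = ToyKVStore([1.0, 1.0])
store.push([1.0, 0.0])   # gradient from worker 1
store.push([0.0, 1.0])   # gradient from worker 2
w = store.pull()
print(w)  # [0.95 0.95]
```

Averaging before the update keeps every worker's copy of the weights identical after each pull, which is the core of synchronous data-parallel training.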
One of these deep learning frameworks is MXNet , which is used for pervasive training. MXNet \nuses KVStore\\footnote{www.kvstore.io} to synchronize parameters shared among participants during the learning process. To monitor the utilization of pervasive resources, Ganglia is designed to identify memory, CPU, and network requirements of the training and track the hardware usage for each participant. As for the inference phase, authors in designed a hardware prototype targeting distributed deep learning for on-device prediction.", "id": "8e6febe5-4bb9-43f7-b364-4dc285c93213", "level": "subsection", "origin_cites_number": 6, "parent_id": "3712d67e-e696-4951-a604-d1fa86a3fe72", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Pervasive frameworks for AI" ] ], "subsections": [], "title": "Pervasive frameworks for AI" }, { "cite_extract_rate": 0, "cites": [], "content": "Deep learning methods have brought substantial breakthroughs in a broad range of IoT applications, spanning from signal and natural language processing to image and motion recognition.\nIn this section, we review the accomplishments of deep learning in different domains where pervasive computing is needed, including intelligent vehicles and robots, smart homes and cities, and virtual reality/augmented reality.", "id": "b85221a3-fb27-44cf-b1c0-77f43b2c6992", "level": "subsection", "origin_cites_number": 0, "parent_id": "3712d67e-e696-4951-a604-d1fa86a3fe72", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Pervasive AI for IoT Applications" ] ], "subsections": [ "b6b18d3b-681d-4dfa-b91f-43cba1cb5f32", "ab510be1-c383-4515-9a75-263d3b0597d4", "d6893849-c38f-4d25-bff3-63dfab11eba3", 
"91bbca78-24e7-4690-962f-3d81d6f33231" ], "title": "Pervasive AI for IoT Applications" }, { "cite_extract_rate": 0.5, "cites": [ 3417, 887 ], "content": "Recently, DNNs have been widely used to control a variety of mobile platforms such as drones, robots, and vehicles, in order to achieve critical tasks. \nIn this context, applications such as driving assistance, autonomous driving, and mobility mapping have become more reliable and commonly used in intelligent mobile systems. As an example, in , the captured image from the vehicle's front-facing camera is used to decide the steering angle and keep the car in the middle of the lane.\nThe ever-improving online learning is broadly exploited for UAV/robot guidance, including the work in where drones learn how to navigate and avoid obstacles while searching for target objects. Several start-ups are also using DL for their self-driving systems, such as Amazon's Prime Air UAVs used to deliver packages , and Uber's self-navigating cars .", "id": "b6b18d3b-681d-4dfa-b91f-43cba1cb5f32", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "b85221a3-fb27-44cf-b1c0-77f43b2c6992", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Pervasive AI for IoT Applications" ], [ "subsubsection", "Intelligent vehicles, robots, and drones" ] ], "subsections": [], "title": "Intelligent vehicles, robots, and drones" }, { "cite_extract_rate": 0.25, "cites": [ 3418 ], "content": "The concept of a smart home covers a large range of applications that contribute to enhancing the productivity, convenience, and life quality of the house occupants. Nowadays, many smart appliances are able to connect to the internet and offer intelligent services, such as smart air conditioners, smart televisions, and lighting control systems. 
Most of these appliances require the deployment of wireless controllers and sensors in walls, floors, and corners to collect data for motion recognition DL services. Speech/voice DL recognition services are also involved for better home control; a well-known example is Amazon Alexa . \nCompared to smart homes, smart city services are more relevant to the deep learning community as the data collected from different ubiquitous participants is huge and highly heterogeneous, which allows high-quality analysis. Examples involve waste classification , energy consumption and smart grid , and parking control .", "id": "ab510be1-c383-4515-9a75-263d3b0597d4", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "b85221a3-fb27-44cf-b1c0-77f43b2c6992", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Pervasive AI for IoT Applications" ], [ "subsubsection", "Smart homes and cities" ] ], "subsections": [], "title": "Smart homes and cities" }, { "cite_extract_rate": 0.5, "cites": [ 8656 ], "content": "VR is designed to create an artificial environment, where users are placed into a 3D experience, while AR can be defined as a VR that inserts artificial objects into the real environment. Popular examples of applications using AR/VR include the tactile internet and holographic telepresence , and multi-player VR games. The latency of virtual reality systems is measured in terms of the “motion-to-photon” metric, which is defined as the delay from moving the headset to updating the display according to the movement. This motion-to-photon latency should be in the range of tens to hundreds of milliseconds . Offloading the VR/AR computation to remote cloud servers may incur higher latencies exceeding the required constraints. 
Hence, on-device computation is indispensable to achieve real-time performance.\n\\begin{comment}", "id": "d6893849-c38f-4d25-bff3-63dfab11eba3", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "b85221a3-fb27-44cf-b1c0-77f43b2c6992", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Pervasive AI for IoT Applications" ], [ "subsubsection", "Virtual Reality (VR) and Augmented Reality (AR)" ] ], "subsections": [], "title": "Virtual Reality (VR) and Augmented Reality (AR)" }, { "cite_extract_rate": 0.2, "cites": [ 3419 ], "content": "The potential applications of DNN aiming to enhance networking performance are countless, particularly after the emergence of the sixth generation (6G). Different from previous generations, the 6G paradigm is based on supporting a wider variety of AI services spanning from high-performance servers to resource-limited devices, making “connected things” evolve into “connected intelligence”. Applications of DL in the new generation networks involve adaptive resource allocation to serve users in real-time , device-to-device (D2D) task offloading using online learning and localization services , proactive caching to minimize remote communication and reduce latency , network energy efficiency , and privacy and data security . 
\n\\end{comment}", "id": "91bbca78-24e7-4690-962f-3d81d6f33231", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "b85221a3-fb27-44cf-b1c0-77f43b2c6992", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Pervasive AI for IoT Applications" ], [ "subsubsection", "5G/6G intelligent networks" ] ], "subsections": [], "title": "5G/6G intelligent networks" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, we reviewed state-of-the-art deep learning and reinforcement learning techniques, examined their performance metrics, and presented some of their applications that may require pervasive deployment. In this context, multiple conclusions can be stated:\n\\begin{itemize}\n \\item The AI proximity-aware IoT applications have different requirements and each one has its distinctive performance keys. For example, VR/AR is highly sensitive to delays and cannot tolerate any motion sickness. Meanwhile, the applications relying on UAVs and moving robots have stringent requirements in terms of energy to accomplish their missions. For the surveillance applications, the accuracy is paramount.\n However, such requirements come with other costs. More specifically, lower delays and energy consumption can be achieved using small DL networks that generate fast inference and can be deployed locally. On the other hand, high accuracy requires deep networks that incur higher memory and computation utilization and consequently higher communication overheads for remote execution. 
\n Therefore, understanding the requirements of the targeted application and the trade-off between different hyper-parameters is crucial for selecting the adequate AI model and the processing device.\n \item The common characteristic of most AI applications, particularly IoT applications that require real-time data collection, is the need for prompt responses and fast analytics that should not be piled up for later processing. Hence, centralized solutions such as cloud-based data analytics are no longer feasible, due to the communication overheads. Pervasive computation has emerged as a solution that enables the deployment of AI in the proximity of the data source for latency-sensitive applications, and in collaboration with high-performance servers for better computational resources.\n \item Understanding the application requirements and the pervasive environment, and wisely selecting the data shape and the adopted AI technique, is critical for determining the distribution mode. More specifically, the privacy constraints and the size of the data open the door for federated learning, where each entity trains its data locally. The low latency requirements and the limited resources imposed by some pervasive systems push for the partitioning of inference, where the AI model is split into smaller segments. 
Finally, the dynamics of the system, the unavailability of labeled data and the inherently decentralized architectures call for the reinforcement learning where agents are distributed.\n\\end{itemize}\n\\begin{comment}\nThis approach requires an IID datasets.\nIn this case, the partitioning strategy, the size of segments, and the parallelization approach are conditioned by the chosen model (i.e., layers' types, connections, and data reduction ability.), and the number of ubiquitous participants and their capacities.\n\\end{comment}\nAfter understanding the motivations for \\textit{pervasive AI} and the requirements of the IoT applications and their related AI models, we present different distribution modes and their communication and computation models in the subsequent sections. We review, first, the pervasive training including federated learning, multi-agent RL, and active learning, and then we survey the pervasive inference. However, we start by presenting the related surveys and highlighting our paper novelty.", "id": "6caa0663-4e5f-484e-83a1-18c728b2a299", "level": "subsection", "origin_cites_number": 0, "parent_id": "3712d67e-e696-4951-a604-d1fa86a3fe72", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Fundamentals of Artificial Intelligence" ], [ "subsection", "Lessons learned" ] ], "subsections": [], "title": "Lessons learned" }, { "cite_extract_rate": 0.5862068965517241, "cites": [ 1315, 9123, 3420, 7719, 659, 3427, 8657, 3428, 3426, 3425, 1313, 3422, 660, 547, 3423, 3424, 3421 ], "content": "\\label{related_surveys}\n\\begin{table*}[!h]\n\\footnotesize\n\\centering\n\\tabcolsep=0.09cm\n\\caption{Comparison with existing surveys.}\n\\label{tab:Related_works}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n\\textbf{Refs} &\\textbf{Summary} & \\multicolumn{2}{c|}{\\textbf{AI/pervasivity}} & \\multicolumn{3}{c|}{\\textbf{Scope}} & 
\\multicolumn{3}{c|}{\\textbf{AI technique}} & \\multicolumn{2}{c|}{\\textbf{Topic}} \\\\ \\hline\n & & \\begin{tabular}[c]{@{}c@{}}AI on pervasive\\\\ networks\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}AI for pervasive\\\\ networks\\end{tabular} & cloud & \\begin{tabular}[c]{@{}c@{}}edge\\\\ servers\\end{tabular} & IoT & DI & FL & MARL & \\begin{tabular}[c]{@{}c@{}}Deployment:\\\\ hardware,\\\\ software\\\\ techniques, \\\\protocols.\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Management: \\\\ communication, \\\\ resource allocation, \\\\ and algorithms\\end{tabular} \\\\ \\hline\n\\ {}\\begin{tabular}[c]{@{}c@{}} \\\\ (2020-2021)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Deep Learning \\\\ applications for the \\\\ Mobile Edge \\\\ computing networks\\end{tabular} & \\xmark& \\begin{tabular}[c]{@{}c@{}}\\cmark\\\\ 5G,\\\\ wireless \\\\ networks\\end{tabular}& \\cmark& \\cmark & \\cmark& \\cmark& \\cmark& \\xmark& \\cmark& \\xmark\\\\ \\hline\n\\ {}\\begin{tabular}[c]{@{}c@{}}\\\\ (2019)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Efficient usage of IoT\\\\ hardware and software\\\\ for AI applications\\end{tabular} & \\cmark& \\xmark& \\xmark& \\xmark& \\cmark& \\xmark& \\xmark& \\xmark& \\cmark& \\xmark\\\\ \\hline\n\\ {}\\begin{tabular}[c]{@{}c@{}}\\\\ (2019-2020)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Enabling AI on \\\\ edge networks\\end{tabular} & \\cmark & \\cmark& \\xmark& \\cmark& \\cmark& \\cmark& \\cmark& \\xmark& \\cmark& \\xmark\\\\ \\hline\n\\ {}\\begin{tabular}[c]{@{}c@{}} \\\\ (2018)\\end{tabular}& \\begin{tabular}[c]{@{}c@{}}Enabling AI on \\\\ edge networks\\end{tabular} & \\cmark& \\xmark& \\xmark& \\cmark& \\cmark& \\cmark& \\xmark& \\xmark& \\cmark& \\xmark\\\\ \\hline\n\\ {}\\begin{tabular}[c]{@{}c@{}}\\\\ (2018-2020)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Decision making in \\\\multi-agent \\\\ systems and related \\\\applications\\end{tabular} & \\xmark& \\xmark& \\xmark& \\xmark& \\xmark& \\xmark& \\xmark& \\cmark& \\xmark& 
\\xmark\\\\ \\hline\n\\ {}\\begin{tabular}[c]{@{}c@{}}\\\\ (2020)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Deep RL \\\\ for IoT systems\\end{tabular} & \\cmark& \\cmark& \\xmark& \\xmark& \\cmark& \\xmark& \\xmark& \\xmark& \\cmark& \\cmark\\\\ \\hline\n\\ {}\\begin{tabular}[c]{@{}c@{}} \\\\ (2020)\\end{tabular}& \\begin{tabular}[c]{@{}c@{}}Deep RL for \\\\wireless networks\\end{tabular} & \\cmark& \\begin{tabular}[c]{@{}c@{}}\\cmark\\\\ wireless \\\\ networks\\end{tabular} & \\xmark& \\cmark& \\cmark& \\xmark& \\xmark& \\cmark& \\cmark& \\xmark\\\\ \\hline\n\\ {}\\begin{tabular}[c]{@{}c@{}} \\\\ (2020)\\end{tabular}& \\begin{tabular}[c]{@{}c@{}}Distributed ML\\end{tabular} & \\cmark& \\begin{tabular}[c]{@{}c@{}}\\xmark\\end{tabular}& \\xmark& \\xmark& \\xmark& \\xmark& \\xmark& \\xmark& \\xmark& \\cmark\\\\ \\hline\n\\ {}\\begin{tabular}[c]{@{}c@{}} \\\\ (2019)\\end{tabular}& \\begin{tabular}[c]{@{}c@{}}Communication for ML \\\\ and \\\\ML for communication\\end{tabular} & \\cmark& \\begin{tabular}[c]{@{}c@{}}\\cmark\\\\ wireless \\\\ networks\\end{tabular}& \\xmark& \\cmark& \\cmark& \\xmark& \\cmark& \\xmark& \\xmark& \\cmark\\\\ \\hline\n\\ {}\\begin{tabular}[c]{@{}c@{}} \\\\ (2020)\\end{tabular}& \\begin{tabular}[c]{@{}c@{}}Communication efficient \\\\ edge AI\\end{tabular} & \\cmark& \\cmark& \\xmark& \\cmark& \\cmark& \\cmark& \\cmark& \\xmark& \\xmark& \\cmark\\\\ \\hline\n\\ {}\\begin{tabular}[c]{@{}c@{}} \\\\ (2019)\\end{tabular}& \\begin{tabular}[c]{@{}c@{}}AI on mobile \\\\ and wireless networks\\end{tabular} & \\cmark& \\begin{tabular}[c]{@{}c@{}}\\cmark\\\\ 5G,\\\\ wireless \\\\ networks\\end{tabular}& \\xmark& \\cmark& \\cmark& \\xmark& \\xmark& \\xmark& \\cmark& \\cmark\\\\ \\hline\n\\ {}\\begin{tabular}[c]{@{}c@{}} \\\\ (2020)\\end{tabular}& \\begin{tabular}[c]{@{}c@{}}Distributed Training \\\\of DNN\\end{tabular} & \\cmark& \\begin{tabular}[c]{@{}c@{}}\\xmark \\end{tabular}& \\xmark& \\xmark& \\xmark& \\xmark& \\xmark& \\xmark& \\xmark& 
\\cmark\\\\ \\hline\n\\ {}\\begin{tabular}[c]{@{}c@{}} \\\\ (2021)\\end{tabular}& \\begin{tabular}[c]{@{}c@{}}Federated learning \\\\for IoT applications\\end{tabular} & \\xmark& \\begin{tabular}[c]{@{}c@{}}\\cmark \\end{tabular}& \\cmark& \\cmark& \\cmark& \\xmark& \\cmark& \\xmark& \\xmark& \\xmark\\\\ \\hline\n\\ {}\\begin{tabular}[c]{@{}c@{}} \\\\ (2020)\\end{tabular}& \\begin{tabular}[c]{@{}c@{}}Enabling protocols, \\\\ technologies\\\\ for federated learning\\end{tabular} & \\cmark& \\cmark& \\cmark& \\cmark& \\cmark& \\xmark& \\cmark& \\xmark& \\cmark& \\xmark\\\\ \\hline\n\\ {}\\begin{tabular}[c]{@{}c@{}} \\\\ (2020)\\end{tabular}& \\begin{tabular}[c]{@{}c@{}}Architecture, design and\\\\ applications of centralized,\\\\ distributed and federated \\\\learning\\end{tabular} & \\cmark& \\cmark& \\cmark& \\cmark& \\cmark& \\xmark& \\cmark& \\xmark& \\cmark& \\cmark\\\\ \\hline\n\\ {}\\begin{tabular}[c]{@{}c@{}} \\\\ (2020)\\end{tabular}& \\begin{tabular}[c]{@{}c@{}}Enabling protocols, \\\\technologies\\\\ for federated learning\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}\\cmark\\\\ Vehicular\\\\ IoT\\end{tabular} & \\xmark& \\xmark& \\xmark& \\cmark& \\xmark& \\cmark& \\xmark& \\cmark& \\xmark\\\\ \\hline\n\\rowcolor[HTML]{ECF4FF} \nOur paper & Pervasive AI & \\cmark& \\xmark& \\cmark& \\cmark& \\cmark& \\cmark& \\cmark& \\cmark& \\xmark& \\cmark\\\\ \\hline\n\\end{tabular}\n\\end{table*}\nThe intersection of pervasive computing and AI is still in its early stage, which attracts the researchers to review the existing works and provide innovative insights, as illustrated in Table \\ref{tab:Related_works}. First, many efforts discussed the applications of artificial intelligence that support edge networks, \nin order to meet the networking requirements. Multiple edge contexts are explored such as healthcare, smart cities, and grid energy. 
As an example, two recent surveys provided an in-depth discussion of the usage of AI in wireless and 5G networks to empower caching and offloading, resource scheduling and sharing, and network privacy. These surveys touched upon pervasive AI, particularly federated learning and distributed inference. However, the distribution was discussed briefly as one of the techniques that further enable AI at the edge. In our survey, the applications of AI for pervasive networks are not the main topic. Instead, the deployment of AI on pervasive devices is the scope of this paper. \nThe surveys in conducted a comprehensive review on the systems, architectures, frameworks, software, technologies, and algorithms that enable AI computation on edge networks and discussed the advantages of edge computing to support the AI deployment compared to cloud approaches. However, even though they dedicated a short part to distributed AI, these papers did not discuss the resource and communication challenges of pervasive computing nor the partitioning techniques of AI (e.g., splitting strategies of the trained DNN models or the training data). Moreover, they did not consider cloud computing as an indispensable part of the distributed system. Therefore, unlike the previous surveys , we present an in-depth review that covers the resource, communication, and computation challenges of distributed AI among ubiquitous devices. More specifically, applying the same classical communication and computation techniques adopted in centralized approaches to pervasive AI is not trivial. As an alternative, both pervasive computing systems and distributed AI techniques are tailored to take into consideration the heterogeneous resources of participants, the AI model, and the requirements of the system.\nThese customized strategies for pervasive AI are the main focus of our survey.\nMulti-agent reinforcement learning has not been reviewed by any of the previous papers. 
Other papers surveyed single-agent and multi-agent RL, such as . In these tutorials, the authors conducted comprehensive studies to show that single-agent RL is no longer sufficient to meet the requirements of emerging networks in terms of efficiency, latency, and reliability. In this context, they highlighted the importance of cooperative MARL to develop decentralized and scalable systems. They also surveyed the existing decision-making models, including game theory and Markov decision processes, and they presented an overview of the evolution of cooperative and competitive MARL in terms of reward optimization, policy convergence, and performance improvement. Finally, the applications of MARL for networking problems are also reported. However, despite this recent popularity of MARL, the algorithms designed to achieve efficient communication between agents and minimal computation have not been surveyed yet. To the best of our knowledge, we are the first to survey the computation and communication challenges faced in achieving a consensus on the distributed RL policy. In other words, our focus is not the performance of the RL policy. Instead, we survey the computational load, communication schemes, and architectures experienced by cooperative agents during learning and execution.\nUnlike the aforementioned papers, the authors of focused only on distributed machine learning. In these papers, they covered the training phase and data partitioning. The survey in discussed the issues of learning from data characterized by its large volume, different types of samples, uncertainty, incompleteness, and low value density. Solutions to minimize the learning complexity and divide the data are introduced in , where the authors reviewed the algorithms and decision rules to fragment large-scale data into distributed datasets. 
The paper in described the architectures and topologies of nodes participating in distributed training by presenting existing frameworks and communication patterns that can be employed to exchange states. The authors in presented a contemporary and comprehensive survey of distributed ML techniques, which includes the applicability of such a concept to wireless communication networks, and the computation and communication efficiency. However, this survey, along with the previous works, focuses only on the training phase. Also, the authors do not provide a comprehensive and deep summary of the complexity and the computation and communication efficiency witnessed by different decentralized architectures, and the amount of data shared by participants, particularly for multi-agent reinforcement learning. Our survey aims to bridge the gap by providing a comprehensive review of distributed AI, including both training and inference phases. More specifically, we thoroughly study the architectures of federated learning, active learning, and reinforcement learning, and the partitioning strategies of trained DNN models. Then, for each approach, we show the impact on the communication and computation complexity and the algorithms scheduling the collaboration between devices. \nFinally, the authors in provided a deep review of the communication challenges of AI-based applications on edge networks. Specifically, the survey in provided insights about allocating mobile and wireless network resources for AI learning tasks. However, the distribution of AI techniques was not targeted in this latter paper. The surveys in are considered the closest ones to our topic as they explored communication-efficient AI distribution. However, they mainly focused on the training phase, i.e., federated learning, whereas pervasive inference and MARL were not studied. 
The inference distribution is briefly discussed in from a communication angle, without considering other constraints such as memory and computation, nor presenting the partitioning strategies (i.e., splitting of the trained model), which highly impact the distribution process, the parallelization technique, and participant orchestration. Our paper represents a holistic survey that covers all AI tasks that require cooperation between pervasive devices, motivated either by the application requirements or by the system design and the AI model.\nOur search of related papers has been conducted through different databases and engines, including IEEE Xplore, ScienceDirect, and ArXiv; the papers have been chosen from a time frame set between 2017 and 2021, in addition to some well-established research works. More specifically, we selected all surveys with high citation rates that cover AI, pervasive computing, federated learning, reinforcement learning, bandit learning, deep learning applications in IoT systems, and AI deployment on edge networks. 
In the rest of our survey, we review the research conferences and journal papers with solid results that provide comprehensive studies on resource-efficient distributed inference and training.\n\\begin{comment}", "id": "e43e3646-a694-4588-a6e6-93cc3e3f42c2", "level": "section", "origin_cites_number": 29, "parent_id": "7aedf107-610a-4e90-9d9d-b0a1b2dcb03c", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Related surveys and paper novelty" ] ], "subsections": [ "06f12287-75d8-4b08-8068-87f5281dd1ae" ], "title": "Related surveys and paper novelty" }, { "cite_extract_rate": 0, "cites": [], "content": "Our search of related papers has been conducted through different databases and engines, which are: (1) IEEE Xplorer, (2) ScienceDirect, (3) ACM Digital Library, (4) Springer Link, (5) ArXiv, and (6) Google scholar; and the papers have been chosen from a time frame set between 2018 and 2021. The selection process comprises two steps: First, we filter the papers by assessing the relevance of the titles and screening the abstracts. Second, the selected papers are evaluated through a full text reading for a final inclusion and exclusion. Particularly, the final inclusion criteria are as follow:\n\\begin{itemize}\n \\item Books, reviews, and surveys that cover similar topics and related research questions, including reviews on AI, pervasive computing, federated learning, reinforcement learning, bandit learning, deep learning applications in IoT systems, and AI deployment on edge networks. 
\n \\item Research conferences and journal papers that provide comprehensive studies and extensive experiments on: (1) DNN splitting techniques, distribution of inference, and tasks allocation on pervasive devices to empower IoT applications, (2) FL model aggregation mechanisms and their impacts on the communication between participants, (3) multi-agent online learning including multi-arms bandits, multi-agent reinforcement learning, and active learning.\n \\item Papers with sufficient findings, solid results, and high citation rates according to Google scholar.\n\\end{itemize}\nThe final exclusion criteria are as follow:\n\\begin{itemize}\n \\item Papers that elaborate only low-level communication or protocols between hardware devices, which is not the focus of this work.\n \\item Papers that study the privacy and security issues caused by the distribution and the corresponding defense strategies, which is out of our scope.\n \\item Papers that cover the distribution of AI among different processors and CPUs of a single machine.\n \\item Papers that adopt distributed and decentralized AI, but the partitioning, distribution, collaborations between participants, and resource challenges are not one of the main contributions.\n\\end{itemize}\n\\end{comment}", "id": "06f12287-75d8-4b08-8068-87f5281dd1ae", "level": "subsection", "origin_cites_number": 0, "parent_id": "e43e3646-a694-4588-a6e6-93cc3e3f42c2", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Related surveys and paper novelty" ], [ "subsection", "Papers Selection Strategy" ] ], "subsections": [], "title": "Papers Selection Strategy" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{Pervasive_training}\n\\vspace{-0.1cm}\nIn this section, we discuss the pervasive training, where the fitting of the model or the learning policy is accomplished within the distributed devices. 
Particularly, we present the resource management for federated learning, multi-agent reinforcement learning, and active learning. These techniques are distributed by design, which means their objective is to train the learning model within pervasive devices. More specifically, the concepts of federated learning and active learning are based on guaranteeing the privacy of the pervasive data by training each set locally at its source. Similarly, multi-agent reinforcement learning is designed to be implemented in a system comprising multiple independent/related entities that interact with the same environment. However, the distribution concepts of these techniques are different. In fact, in federated learning, the data is distributed and each participant creates a local model. Then, the global model is obtained by aggregating these pervasive models. Meanwhile, in active learning, the participants collaborate to label the data, and each one can benefit from on-the-fly labeled samples incoming from the others. Finally, agents in MARL collaborate to converge to the policy that ensures the selection of the best actions.", "id": "8f96916e-e394-41c3-94f4-9283f9cfdbaa", "level": "section", "origin_cites_number": 0, "parent_id": "7aedf107-610a-4e90-9d9d-b0a1b2dcb03c", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive training" ] ], "subsections": [ "c4778902-b31a-4153-8252-7d9c704e337a", "da7d1e06-40e4-4c83-bd80-f6bc0ee710aa", "73c038a7-dfdd-4ff9-b0ca-29be710e2068" ], "title": "Pervasive training" }, { "cite_extract_rate": 0.5, "cites": [ 582 ], "content": "\\label{FL}\nDespite the great potential of deep learning in different applications, it still has major challenges that need to be addressed. 
These challenges are mainly due to the massive amount of data needed for training deep learning models, which imposes severe communication overheads on both the network and end-users. Moreover, the conventional way of transferring the acquired data to a central server for training comes with many privacy concerns that several applications may not tolerate. In this context, the need for intelligent and on-device DL training has emerged. More specifically, instead of moving the data from the users to a centralized data center, the server broadcasts a pre-trained model to all pervasive data-sources. Then, each participant deploys and personalizes this generic model by training it on its own data locally. In this way, privacy is guaranteed as the data is processed within the host. On-device training has been widely used in many applications, such as the medical field, assistance services, and smart education. However, this no-round-trip training technique prevents the end-devices from benefiting from others' experiences, which limits the performance of the local models. To this end, Federated Learning (FL) has been advanced, where end-users can fine-tune their learning models while preserving privacy and local data processing. Then, these local models (i.e., model updates) are aggregated and synchronized (averaged) at a centralized server, before being sent back to the end-users. This process is repeated several times (i.e., communication rounds) until reaching convergence. Accordingly, each participant builds a model from its local data and benefits from other experiences, without violating privacy constraints. FL was proposed by Google researchers in 2016 , and since then, it has witnessed unprecedented growth in both industry and academia. \nWe present in what follows an overview of this emerging pervasive learning technique, i.e., Federated Learning. In particular, we introduce the computation and communication models of the FL techniques. 
Then, we present a brief summary of the related works in the literature, while highlighting a use case that considers the application of FL within UAV swarms. It is worth mentioning that the FL can be used for both online and offline learning (i.e., the training can be performed on static datasets at once, or continuously training on new data received by different participants). \n\\begin{comment}\n\\begin{figure}[!h]\n\\centering\n\t\\includegraphics[scale=0.6]{Figures/FL_taxonomy2.pdf}\n\t\\caption{Outline of federated learning.} \n\t\\label{FL_outline}\n\\end{figure}\n\\end{comment}", "id": "c4778902-b31a-4153-8252-7d9c704e337a", "level": "subsection", "origin_cites_number": 2, "parent_id": "8f96916e-e394-41c3-94f4-9283f9cfdbaa", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive training" ], [ "subsection", "Federated Learning" ] ], "subsections": [ "7cc9c4fa-1160-4383-a4bc-5fa5eb5646f1", "1b661b5c-6fb5-4606-987f-cc5635444cf4", "eed40686-a7ba-4e6f-be93-9abfee6a5cbd", "77508e12-2980-420b-9045-084b9804e210" ], "title": "Federated Learning" }, { "cite_extract_rate": 0.5714285714285711, "cites": [ 602, 3430, 3429, 582 ], "content": "}\nGenerally, the FL system is composed of two main entities, which are the data-sources (i.e., owners of data or pervasive participants) and the centralized server (i.e., model owner). Let $N$ denote the number of data-sources. Each one of these devices has its own dataset $D_i$. This private data is used to train the local model $m_i$, and then the local parameters are sent to the centralized server. Next, the local models are collected and aggregated onto a global model $m_G=\\bigcup_{i=1}^{N} m_i$. The FL is different from training in the remote server, where the distributed data are collected and aggregated first, i.e., $D_G=\\bigcup_{i=1}^{N} D_i$, and then one model $m$ is trained centrally. 
We assume that data-sources are honest and submit their real data or their true local models to the centralized server. Otherwise,\ncontrol and incentive techniques are used to guarantee the reliability of FL, including .\nTypically, the life cycle of FL is composed of multiple communication rounds that are completed when the centralized model reaches a satisfactory accuracy. Each round includes the following steps:\n\\begin{itemize}\n \\item \\textit{Initialization of FL:} The centralized server fixes the training task, the data shape, the initial model parameters, and the learning process (e.g., learning rate). This initial model $m_G^0$ is broadcasted to the selected participants.\n \\item \\textit{Training and updating the local models:} Based on the current global model $m_G^t$, each data-source $i$ utilizes its own data $D_i$ to update the local model $m_i^t$. We note that $t$ denotes the current round index. Hence, at each step $t$, the goal of each participant is to find the optimal parameters minimizing the loss function $L(m_i^t)$ defined as:\n \\begin{equation}\n m_i^{t*}= argmin_{m_i^{t}} L(m_i^t).\n \\end{equation}\n Subsequently, the updated parameters of the local models are offloaded to the server by all selected participants.\n \\item \\textit{Global model aggregation:} The received parameters are aggregated into one global model $m_G^{t+1}$, which is in turn sent back to the data owners. This process is repeated continuously until reaching convergence. The server's goal is to minimize the global loss function presented as follows: \n \\begin{equation}\n L(m^t_G)= \\frac{1}{N} \\sum\\limits_{i=1}^{N}L(m^t_i).\n \\end{equation}\n The aggregation of the global model is the most important phase of FL. A classical and straightforward aggregation technique, namely FedAvg, was proposed in the original Google paper . 
In this technique, the centralized server tries to minimize the global loss function by averaging the aggregation following the equation below:\n \\begin{equation}\n m_G^{t+1}=\\sum\\limits_{i=1}^{N}\\frac{|D_i|}{\\sum\\limits_{j=1}^{N}|D_j|}m_i^{t+1},\n \\end{equation}\n where $D_i$ is the local dataset. The FL system is iterated continuously until the convergence of the global loss function or reaching a desirable accuracy.\n\\end{itemize}\n\\begin{comment}\n\\begin{algorithm}\n\\caption{FederatedAveraging (FedAvg) }\n\\label{alg:FL}\n\\begin{algorithmic}[1]\n\\State \\textbf{Input:} $N$: participants, $D_i$: dataset of the device $i$, $B$: local mini-batches, $E$: number of local epochs, $T$: number of rounds, $\\rho$: learning rate, c: fraction of participants.\n\\State \\textbf{Output:} Global model $m_G$.\n\\State \\textbf{Initialization of FL:} initialize $m_G^0$\n\\State \\textbf{Global model aggregation:} \n\\For{t=1...T} \n\\State $NP\\leftarrow max(c.N,1)$\n\\State $P \\leftarrow$ select random $NP$ participants\n\\For {$i \\in P$ \\textbf{in parallel}}\n\\State $m_i^{t+1}\\leftarrow$ \\textbf{Local model update} $(i,m_G^t)$\n\\EndFor\n\\State $m_G^{t+1}=\\sum\\limits_{i=1}^{N}\\frac{|D_i|}{\\sum\\limits_{j=1}^{N}|D_j|}m_i^{t+1}$ \\text{\\footnotesize\\% Averaging aggregation}\n\\EndFor\n\\State \\textbf{Local model update} (i,m):\n\\State $d \\leftarrow$ split $D_i$ into batches of size $B$\n\\For{j=1..E}\n\\For{samples $b \\in d$}\n\\State $m \\leftarrow m-\\rho \\Delta L(m,b)$ \\text{\\footnotesize\\% $\\Delta L$ is the gradient of $L$}\n\\EndFor\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\\end{comment}\nA major challenge in FL is the large communication and energy overhead related to exchanging the models updates between different end-users, and the centralized server . 
Such overheads depend on multiple parameters, including the size of the models' updates, the number of participating users, the number of epochs per user, and the number of communication rounds required to reach convergence. Particularly, the energy consumed by an FL participant $i$ characterized by a frequency $f$, a local dataset $D_i$, and a number of local epochs $E$, is given by :\n\\begin{equation}\n\\label{FL_energy}\ne^c_i= E \\times (\\phi\\gamma |D_i| f^2),\n\\end{equation}\nwhere $\\phi$ is the number of CPU cycles required to compute one input instance, and $\\gamma$ is a constant related to the CPU. The latency required to compute the local model can be expressed as: \n\\begin{equation}\n\\label{FL_latency}\nt^c_i= E \\times (\\frac{\\phi|D_i|}{f}).\n\\end{equation}\nFrom equations (\\ref{FL_energy}) and (\\ref{FL_latency}), we can see that a trade-off exists between the local training latency and the consumed energy. More specifically, for a fixed accuracy determined by the number of local epochs and a fixed frequency, the latency grows with the size of the private data. If the data size and the accuracy are fixed, increasing the CPU frequency reduces the local computation time. However, minimizing the latency comes at the expense of energy consumption, which increases with the square of the operating frequency. 
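To make the latency/energy trade-off above concrete, the following sketch evaluates both expressions for a single participant. All numeric values (cycles per sample, CPU constant, dataset size, candidate frequencies) are illustrative assumptions, not figures from the surveyed works.

```python
# Sketch of the local-training trade-off from the two equations above.
# All parameter values are assumed for illustration only.

def local_energy(E, phi, gamma, D, f):
    """Energy of one participant: e^c = E * (phi * gamma * |D| * f^2)."""
    return E * (phi * gamma * D * f ** 2)

def local_latency(E, phi, D, f):
    """Latency of local training: t^c = E * (phi * |D| / f)."""
    return E * (phi * D / f)

E, phi, gamma, D = 5, 20, 1e-27, 10_000  # epochs, cycles/sample, CPU constant, samples
for f in (1e9, 2e9):                     # candidate CPU frequencies (Hz)
    print(f"f={f:.0e} Hz -> energy={local_energy(E, phi, gamma, D, f):.4f} J, "
          f"latency={local_latency(E, phi, D, f) * 1e3:.2f} ms")
```

Doubling the frequency halves the local latency but quadruples the energy term, matching the $f^2$ dependence discussed above.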
\n\\begin{figure*}[!h]\n\\centering\n\t\\includegraphics[scale=0.4]{Figures/FL_designs1.pdf}\n\t\\caption{The FL architectures considered in the literature: (a) one-layer FL, (b) edge-assisted FL.}\n\t\\label{fig: FL_arch}\n\\end{figure*}\nThe transmission time to share the model updates between the centralized server and the different FL participants mainly depends on the channel quality, the number of devices, and the number of global rounds, and is given by:\n\\begin{equation}\n\\label{FL_transmission}\nt^T= T \\times \\sum\\limits_{i=1}^{N}\\frac{K}{\\rho_i},\n\\end{equation}\nwhere $K$ is the size of the model parameters shared with the server and $\\rho_i$ is the data rate of participant $i$. On the other hand, the total energy consumed during the federated learning process using the local transmit powers $P_i$ is equal to:\n\\begin{equation}\ne^T= T \\times \\sum\\limits_{i=1}^{N}\\frac{KP_i}{\\rho_i}.\n\\end{equation}\nFrom the above equations, we can see that the local iterations $E$ and the global communication rounds $T$ are very important to optimize the energy, computation, and communication costs. Particularly, for a relative local accuracy $\\theta_l$, $E$ can be expressed as follows : \n\\begin{equation}\n\\label{FL_E}\nE= \\alpha \\times log(\\frac{1}{\\theta_l}),\n\\end{equation}\nwhere $\\alpha$ is a parameter that depends on the dataset size and local sub-problems. The global number of rounds $T$ required to reach the targeted accuracy $\\theta_G$ is upper bounded as :\n\\begin{equation}\n\\label{FL_T}\nT= \\frac{\\zeta log(\\frac{1}{\\theta_G})}{1-\\theta_l}.\n\\end{equation}\nWe note that $\\zeta log(\\frac{1}{\\theta_G})$ is used instead of $O(log(\\frac{1}{\\theta_G}))$, where $\\zeta$ is a positive constant. The computation cost, which depends on the local iterations $E$, and the communication cost, which depends on the global rounds $T$, are contradictory. 
It means, minimizing $E$ implies maximizing $T$ to update the local parameters frequently, which results in increasing the convergence latency. \nTo summarize, FL pervasiveness aspects that are being tackled by different studies, to reduce communication and energy overheads, may include: \n\\begin{enumerate}\n\t\\item reducing communication frequency, i.e., number of communication rounds;\n\t\\item reducing the number of local iterations;\n\t\\item selecting minimum number of participating users in the training process; \n\t\\item optimizing local devices operating frequencies;\n\t\\item minimizing the entropy of the models updates by using lossy compression schemes; \n\t\\item using efficient encoding schemes in communicating models updates. \n\\end{enumerate} \nIn what follows, we categorize different presented FL schemes in the literature, based on the system architecture, namely one-layer FL and edge-assisted FL. The former refers to user-cloud architecture, where different users share their learning models with a cloud or centralized server for aggregation, while the latter refers to user-edge-cloud architecture, where edge nodes are leveraged to reduce communication overheads and accelerate FL convergence (see Figure \\ref{fig: FL_arch}). 
\n\\begin{comment}\n\\begin{table*}[!h]\n\\centering\n\\caption{Comparison between privacy-aware FL strategies.\\\\\n(H: High, M: Medium, L: Low).}\n\\label{tab:privacy}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\begin{tabular}[c]{@{}c@{}}\\textbf{Privacy-aware}\\\\ \\textbf{strategy}\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}\\textbf{Privacy}\\\\ \\textbf{level}\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}\\textbf{Accuracy}\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}\\textbf{Compatibility}\\\\ \\textbf{with IoT devices}\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}\\textbf{Communication}\\\\ \\textbf{overhead}\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}\\textbf{Computation}\\\\ \\textbf{overhead}\\end{tabular} \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Differential Privacy \\end{tabular} & M & \\xmark & \\cmark & \\xmark & M \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Secure multi-party \\\\ computation \\end{tabular} & H & \\xmark & \\xmark & \\xmark & M \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Homomorphic Encryption \\\\ \\end{tabular} & H & \\xmark & \\xmark & \\xmark & M \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Hybrid protocols \\end{tabular} & H & \\xmark & \\xmark & \\xmark & M \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Blockchain \\end{tabular} & H & \\cmark & \\xmark & \\cmark & H \\\\ \\hline\n\\end{tabular}\n\\end{table*}\n\\end{comment}", "id": "7cc9c4fa-1160-4383-a4bc-5fa5eb5646f1", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "c4778902-b31a-4153-8252-7d9c704e337a", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive training" ], [ "subsection", "Federated Learning" ], [ "subsubsection", "Profiling computation and communication models \\label{sec:Fundamentals" ] ], "subsections": [], "title": "Profiling computation and communication models \\label{sec:Fundamentals" }, { "cite_extract_rate": 0, "cites": [], "content": "", 
"id": "1b661b5c-6fb5-4606-987f-cc5635444cf4", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "c4778902-b31a-4153-8252-7d9c704e337a", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive training" ], [ "subsection", "Federated Learning" ], [ "subsubsection", "Resource management for Federated learning" ] ], "subsections": [ "e77d3744-620c-4405-b29f-e28af04c0418", "1c741ea7-6543-4f53-818d-35f8e7b21dd2" ], "title": "Resource management for Federated learning" }, { "cite_extract_rate": 0.583333333333333, "cites": [ 7138, 619, 7721, 7720, 582, 616, 3431 ], "content": "} \nThe efficiency of FL concept has been proved by different experiments on various datasets. \nIn particular, the proposed model in presented a one-layer FL, where the available users/devices could exchange their local models with a centralized server that collects the local models and forms a global model. \nAfterward, several extensions have been proposed to the original FL. The investigated problems/approaches in FL, considering one-layer architecture, can be categorized into:\n\\begin{itemize}\n\t\\item studying the convergence behaviour of the proposed FL schemes from a theoretical perspective, while optimizing the learning process given limited computational and communication resources ; \n\t\\item considering partial user participation for the FL aggregation process in a resource-constrained environment while balancing between the model accuracy and communication cost ; \n\t\\item presenting communication-efficient schemes that aim at reducing the FL communications cost by adopting distinct sparsification and compression techniques . \n\\end{itemize} \nThe effect of non-Independent and Identically Distributed (non-IID) data on the performance of FL has been investigated in . 
This work illustrated, theoretically and empirically, that highly skewed non-IID data (i.e., the local data at different users are not identically distributed) can substantially decrease the accuracy of the obtained trained model by up to $55\\%$. \nTo solve this issue, the authors suggested sharing a small subset of data among all participants. By integrating these data from the neighboring participants with the local data at each participant, the local dataset becomes less skewed. However, sharing data among the available participants is not always feasible, given the strict privacy constraints and the communication cost of sharing such data. \nThe convergence analysis of the FedAvg scheme using non-IID data has been investigated in for strongly convex problems. \nIn , the authors first studied the convergence behaviour of a gradient-descent based FL scheme on non-IID data from a theoretical point of view. \nAfter that, the obtained convergence bound is used to develop a control mechanism, for resource-limited systems, that adjusts the frequency of the global model aggregation in real-time while minimizing the learning loss. \nA new FL algorithm, named FEDL, is presented in . This algorithm used a local surrogate function that enables each user to solve its local problem approximately up to a certain accuracy level. The authors presented the linear convergence rate of FEDL as a function of the local accuracy and hyper-learning rate. Then, a resource allocation problem over wireless networks was formulated, using FEDL, to capture the trade-off between the training time of FEDL and the user's energy consumption. \nIn , the effect of the participation of all users in the FL algorithm has been studied. Indeed, it is shown that increasing the number of participating users may increase the learning time, since the central server has to wait for \\textit{stragglers}, i.e., participants with bad wireless channels or large computational delay. 
\nTo overcome the impact of \\textit{stragglers}, different schemes have been proposed to select the best subset of users that can participate in the FL aggregation . \nFor instance, the authors in presented a control algorithm, leveraging reinforcement learning, in order to accelerate the FL convergence by obtaining the subset of users that can participate in each communication round of FL, while accounting for the effect of non-IID data distribution. \n{\nTo maintain the balance between computational and communication costs and global model accuracy, the authors in presented a joint optimization model for data and user selection. } \nIn , the problem of user selection to minimize the FL training time was investigated for Cell-Free massive Multiple-Input Multiple-Output (CFmMIMO) networks. \nAlternatively, sparsification and compression techniques are used to decrease the entropy of the exchanged models in the FL process. \nIn particular, instead of communicating dense models' updates, the authors in presented a framework that aims at accelerating the distributed stochastic gradient descent process by exchanging sparse updates (i.e., forwarding only the fraction of entries with the biggest magnitude of each gradient). 
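The sparse-update idea just described can be sketched as a simple top-$k$ magnitude filter. The residual returned below stands for the error that is typically accumulated locally and added back to the next gradient; the function is purely illustrative, not the scheme of any particular cited work.

```python
# Illustrative top-k gradient sparsification: only the k largest-magnitude
# entries are communicated; the rest are zeroed and kept as a local residual.

def top_k_sparsify(gradient, k):
    """Keep the k largest-magnitude entries; return (sparse_grad, residual)."""
    keep = sorted(range(len(gradient)), key=lambda i: abs(gradient[i]), reverse=True)[:k]
    sparse = [g if i in keep else 0.0 for i, g in enumerate(gradient)]
    residual = [g - s for g, s in zip(gradient, sparse)]
    return sparse, residual

grad = [0.1, -2.0, 0.05, 1.5, -0.3]
sparse, residual = top_k_sparsify(grad, k=2)
print(sparse)    # [0.0, -2.0, 0.0, 1.5, 0.0]
print(residual)  # [0.1, 0.0, 0.05, 0.0, -0.3]
```

Only two of the five entries are transmitted, which is the source of the communication savings these schemes target.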
\nIn , a sparse ternary compression technique was presented to compress both the upstream and downstream communications of FL, using sparsification, error accumulation, ternarization, and optimal Golomb encoding.", "id": "e77d3744-620c-4405-b29f-e28af04c0418", "level": "paragraph", "origin_cites_number": 12, "parent_id": "1b661b5c-6fb5-4606-987f-cc5635444cf4", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive training" ], [ "subsection", "Federated Learning" ], [ "subsubsection", "Resource management for Federated learning" ], [ "paragraph", "One-layer FL \\label{sec:Single" ] ], "subsections": [], "title": "One-layer FL \\label{sec:Single" }, { "cite_extract_rate": 0.448275862068965, "cites": [ 7721, 3434, 582, 3433, 619, 652, 643, 3431, 7720, 646, 7139, 7138, 3432 ], "content": "} \nSome studies have considered edge-assisted FL architecture to tackle the problem of non-IID data. For example, the authors in extended the work in in order to analytically prove the convergence of the edge-assisted FedAvg algorithm. Then, this work was further extended in to mitigate the effect of \\textit{stragglers} by proposing probabilistic users selection scheme. \nThe authors in presented two strategies to prevent the bias of training caused by non-IID data. The first strategy was applied before training the global model by performing data augmentation to tackle the challenge of non-IID data. The second strategy utilized mediators, i.e., edge nodes, to reschedule the training of the participants based on the distribution distance between the mediators. \nIn , the impact of non-IID data in edge-assisted FL architecture was investigated and compared to the centralized FL architecture. This study defined the main parameters that affect the learning process of edge-assisted FL. 
\nTable \\ref{tab:Fl} presents the taxonomy of the federated learning techniques described in this section.\n\\begin{table*}[]\n\\centering\n\\footnotesize\n\\tabcolsep=0.09cm\n\\caption{Taxonomy of federated learning techniques.}\n\\label{tab:Fl}\n\\begin{tabular}{|l|l|l|l|l|l|l|l|}\n\\hline\n\\textbf{Refs} & \\textbf{Year} & \\textbf{FL devices} & \\textbf{Architecture} & \\textbf{Trained model} & \\textbf{Aggregation algorithm} & \\textbf{Dataset} & \\textbf{Targeted metrics} \\\\ \\hline\n & 2017 & \\begin{tabular}[c]{@{}l@{}}Mobile\\\\ devices\\end{tabular} & One-layer & \\begin{tabular}[c]{@{}l@{}}- 2NN\\\\ - CNN \\\\ - LSTM\\end{tabular} & FedAvg & \\begin{tabular}[c]{@{}l@{}}- CIFAR-10 \\\\ - MNIST \\end{tabular} & - Accuracy vs rounds \\\\ \\hline\n & 2018 & \\begin{tabular}[c]{@{}l@{}}Mobile and\\\\ IoT devices\\end{tabular} & One-layer & - CNN & Enhanced FedAvg & \\begin{tabular}[c]{@{}l@{}}- CIFAR-10\\\\ - MNIST\\\\ - KWS \\end{tabular} & \\begin{tabular}[c]{@{}l@{}}- Accuracy vs rounds\\\\ - Shared data\\\\ - Weight divergence\\end{tabular} \\\\ \\hline\n & 2019 & End-users & One-layer & - Logistic regression & FedAvg & - MNIST & \\begin{tabular}[c]{@{}l@{}}- Global loss vs rounds\\\\ - Rounds vs local epochs\\end{tabular} \\\\ \\hline\n & 2019 & Edge nodes & One-layer & \\begin{tabular}[c]{@{}l@{}}- Squared-SVM\\\\ - Linear regression,\\\\ - K-means\\\\ - CNN\\end{tabular} & FedAvg & \\begin{tabular}[c]{@{}l@{}}- MNIST \\\\ - Energy \\\\ - User Knowledge \\\\ Modeling \\\\ - CIFAR-10\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}- Loss vs nodes\\\\ - Accuracy vs nodes\\end{tabular} \\\\ \\hline\n & 2019 & End-users & One-layer & \\xmark & \\begin{tabular}[c]{@{}l@{}}Non-weighted\\\\ averaging\\end{tabular} & \\xmark & \\begin{tabular}[c]{@{}l@{}}- Communication vs \\\\ computation time\\\\ - Learning time vs energy\\end{tabular} \\\\ \\hline\n & 2019 & End-users & One-layer & - CNN & Averaging & \\begin{tabular}[c]{@{}l@{}}- CIFAR-10\\\\ - Fashion-MNIST 
\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}- Accuracy vs time\\\\ - Number of participants\\end{tabular} \\\\ \\hline\n & 2020 & \\begin{tabular}[c]{@{}l@{}}Mobile\\\\ devices\\end{tabular} & One-layer & - CNN & \\begin{tabular}[c]{@{}l@{}}FedAvg with users seclection\\\\ Favor\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}- MNIST\\\\ - Fashion-MNIST\\\\ - CIFAR-10\\end{tabular} & -Accuracy vs rounds \\\\ \\hline\n & 2020 & End-users & One-layer & \\begin{tabular}[c]{@{}l@{}}- MLP\\\\ - CNN\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}FedAvg\\end{tabular} & - MNIST & - Accuracy vs rounds \\\\ \\hline\n & 2020 & \\begin{tabular}[c]{@{}l@{}}Mobile\\\\ devices\\end{tabular} & One-layer & \\xmark & \\xmark & \\xmark & \\begin{tabular}[c]{@{}l@{}}- Transmission time\\\\ - Loss\\end{tabular} \\\\ \\hline\n & 2020 & \\begin{tabular}[c]{@{}l@{}}Mobile \\\\ devices\\end{tabular} & One-layer & \\begin{tabular}[c]{@{}l@{}}- VGG11\\\\ - CNN\\\\ - LSTM \\\\ - Logistic regression\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Weighted averaging with \\\\ Top-k sparsified communication\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}- CIFAR\\\\ - KWS\\\\ - MNIST\\\\ - Fashion-MNIST\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}- Communication delay\\\\ - Accuracy\\end{tabular} \\\\ \\hline\n & 2020 & \\begin{tabular}[c]{@{}l@{}}IoT\\\\ devices\\end{tabular} & One-layer & \\begin{tabular}[c]{@{}l@{}}- 2NN\\\\ - CNN\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}communication-efficient\\\\ FedAvg (CE-FedAvg)\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}- CIFAR-10\\\\ - MNIST\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}- Uploaded data\\\\ - communication rounds\\\\ - convergence time\\end{tabular} \\\\ \\hline\n & 2020 & UAVs & One-layer & \\xmark & FedAvg & \\xmark & - Rounds vs bandwidth \\\\ \\hline\n & 2020 & UAVs & One-layer & - FCN & FedAvg & CRAWDAD & \\begin{tabular}[c]{@{}l@{}}- Accuracy vs rounds\\\\ - local learning time\\end{tabular} \\\\ \\hline\n & 2020 & UAVs & One-layer & - CNN & 
FedAvg & - MNIST & - Utility of participants \\\\ \\hline\n & 2020 & UAVs & One-layer & \\begin{tabular}[c]{@{}l@{}}- LSTM\\\\ - GRU\\\\ - AQNet \\end{tabular} & FedAvg & \\begin{tabular}[c]{@{}l@{}}- Ground and aerial \\\\Sensing Data collected \\\\ by authors\\end{tabular} & - Energy consumption \\\\ \\hline\n & 2020 & End-users & Edge-assisted & - CNN & Hierarchical FedAvg & \\begin{tabular}[c]{@{}l@{}}- CIFAR-10\\\\ - MNIST\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}- Accuracy vs epochs\\\\ - Training time\\\\ - Energy consumption\\end{tabular} \\\\ \\hline\n & 2020 & End-users & Edge-assisted & \\begin{tabular}[c]{@{}l@{}}- FCN \\\\ - LeNet-5\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Weighted averaging with\\\\ Effective Data Coverage (EDC)\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}- MNIST\\\\ - Aerofoil \\end{tabular} & \\begin{tabular}[c]{@{}l@{}}- Accuracy vs rounds\\\\ - Training time\\\\ - Energy consumption\\end{tabular} \\\\ \\hline\n & 2021 & \\begin{tabular}[c]{@{}l@{}}Mobile\\\\ devices\\end{tabular} & Edge-assisted & - CNN & FedAvg & \\begin{tabular}[c]{@{}l@{}}- EMNIST \\\\ - CINIC-10 \\\\ - CIFAR-10\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}- Accuracy vs rounds\\\\ - Accuracy vs epochs\\\\ - Storage requirement\\end{tabular} \\\\ \\hline\n & 2021 & End-users & Edge-assisted & \\begin{tabular}[c]{@{}l@{}}- FCN\\\\ - CNN\\end{tabular} & FedAvg & \\begin{tabular}[c]{@{}l@{}}- MNIST\\\\ - Fashion-MNIST\\\\ - CIFAR-10\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}- Accuracy vs rounds\\\\ - Accuracy vs edge \\\\ distance distribution\\\\ - Speed\\end{tabular} \\\\ \\hline\n\\end{tabular}\n\\end{table*}", "id": "1c741ea7-6543-4f53-818d-35f8e7b21dd2", "level": "paragraph", "origin_cites_number": 29, "parent_id": "1b661b5c-6fb5-4606-987f-cc5635444cf4", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive training" ], [ "subsection", "Federated 
Learning" ], [ "subsubsection", "Resource management for Federated learning" ], [ "paragraph", "Edge-assisted FL \\label{sec:Hierarchical" ] ], "subsections": [], "title": "Edge-assisted FL \\label{sec:Hierarchical" }, { "cite_extract_rate": 0.5, "cites": [ 3434, 3432 ], "content": "\\label{sec:Cases}\n\\begin{figure}[h!]\n\t\\centering\n\t\t\\scalebox{1.4}{\\frame{\\includegraphics[width=0.53\\columnwidth]{Figures/FL_2.pdf}}}\n\t\\caption{An example of FL applications in UAV-assisted environment.}\n\t\\label{fig:FL_CS}\n\\end{figure}\nNowadays, deep learning has been widely used in Flying Ad-hoc Network (FANET). Different tasks can be executed using DL techniques at UAV swarms, such as coordinated trajectory planning and jamming attack defense . However, due to the related massive network communication overheads, forwarding the generated large amount of data from the UAV swarm to a centralized entity, e.g., ground base stations, makes implementing centralized DL challenging. \nAs a promising solution, FL was introduced within a UAV swarm in several studies to avoid transferring raw data, while forwarding only local trained models' updates to the centralized entity that generates the global model and send it to the end-user and all participants over the intra-swarm network (see Fig. \\ref{fig:FL_CS}). \nIn , the authors present a FL framework for the swarm of wirelessly connected UAVs flying at the same altitude. The considered swarm includes a leader UAV and a set of followers UAVs. It is assumed that each follower collects data while flying and implements FL for executing inference tasks such as trajectory planning and cooperative target recognition. Hence, each follower exploits its gathered data to train its own learning model, then forwarding its model's updates to the leading UAV. All received models are then aggregated at the leading UAV to generate a global FL model, that will be used by the following UAVs in the next iteration. 
\nInterestingly, investigates the impact of wireless factors (such as fading, transmission delay, and UAV antenna angle deviations) on the performance of FL within the UAV swarms. The authors present the convergence analysis of FL while highlighting the communication rounds needed to obtain FL convergence. Using this analysis, a joint power allocation and scheduling optimization problem is then formulated and solved for the UAV swarm network in order to minimize the FL convergence time. The proposed problem considers the resource limitations of UAVs in terms of: (1) the strict energy limitations due to the energy consumed by learning, communications, and flying during FL convergence; and (2) delay constraints imposed by the control system that guarantees the stability of the swarm.", "id": "eed40686-a7ba-4e6f-be93-9abfee6a5cbd", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "c4778902-b31a-4153-8252-7d9c704e337a", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive training" ], [ "subsection", "Federated Learning" ], [ "subsubsection", "Use case: Learning in the sky" ] ], "subsections": [], "title": "Use case: Learning in the sky" }, { "cite_extract_rate": 0.5, "cites": [ 7138 ], "content": "Despite the prompt development of diverse DL techniques in different areas, they still impose a major challenge, which is: How can we efficiently leverage the massive amount of data generated from pervasive IoT devices for training DL models if these data cannot be shared/transferred to a centralized server? \n\\begin{itemize}\n\\item FL has emerged as a promising privacy-preserving collaborative learning scheme to tackle this issue by enabling multiple collaborators to jointly train their deep learning models, using their local-acquired data, without the need of revealing their data to a centralized server . 
\n\\item The model aggregation mechanisms are the most discussed in the FL literature; they are applied to address communication efficiency, system and model performance, reliability issues, statistical heterogeneity, data security, and scalability. More specifically, one-layer FL approaches are the most studied in previous works, even though researchers have recently been investigating decentralized strategies.\n\\item A major dilemma in FL is the large communication overhead associated with transferring the models' updates. Typically, by following the main steps of the FL protocol, every node or collaborator has to send a full model update in every communication round. Such an update has the same size as the trained model, which can be in the range of gigabytes for densely-connected DL models . Given that a large number of communication rounds may be needed to reach FL convergence on big datasets, the overall communication cost of FL can become prohibitive or even infeasible. Thus, minimizing the communication overheads associated with the FL process is still an open research area.\n\\item We also remark that, despite the considerable number of studies that have provided significant insights into different FL scenarios and user selection schemes, optimizing the performance and wireless resource usage for edge-assisted FL is still missing. Most of the existing schemes for FL suffer from slow convergence. 
Also, considering FL schemes in highly dynamic networks, such as vehicular networks, or resource-constrained environments, such as healthcare systems, is still challenging.\n\\end{itemize}\n\\begin{comment}\n\\begin{figure}[!h]\n\\centering\n\t\\includegraphics[scale=0.635]{Figures/OL_1.pdf}\n\t\\caption{Outline of online learning.}\n\t\\label{OL_outline}\n\\end{figure}\n\\end{comment}", "id": "77508e12-2980-420b-9045-084b9804e210", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "c4778902-b31a-4153-8252-7d9c704e337a", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive training" ], [ "subsection", "Federated Learning" ], [ "subsubsection", "Lessons learned" ] ], "subsections": [], "title": "Lessons learned" }, { "cite_extract_rate": 0, "cites": [], "content": "In reinforcement learning , the agent learns how to map the environment's states to actions in order to maximize a reward signal. This agent is not told which actions to choose but instead should discover which ones lead to the best reward by trying them. In the most general case, actions affect not only the immediate reward but also the next environment's states and potentially all subsequent rewards, which is called \\textit{Markov decision process reinforcement learning}. In the simplified and special case of RL with a single state, the agent only needs to detect the best action that maximizes the current reward (bandit objective) without accounting for the transition to any other state. This non-associative setting, namely \\textit{multi-arm bandit learning}, offers less modeling power for real systems but avoids much of the complexity of the full reinforcement learning problem.\nIn this section, we discuss multi-agent Reinforcement Learning (MARL) in both of its forms. 
The \"Multi-agent\" prefix indicates the existence of multiple collaborative agents \footnote{Note that while competitive settings can also be modeled, the focus of this section is on systems that aim to jointly optimize an objective function with minimum resource utilization (pervasive AI systems). Thus, competitive and zero-sum games will not be deeply surveyed.} aiming to optimize a specific criterion by learning from past experience (i.e., past interactions). We note that RL was originally proposed to model a single agent interacting with the environment and aiming to maximize its reward. However, in pervasive computing systems, where there are numerous but resource-limited agents (i.e., devices), collaboration becomes essential to leverage the potential of the collective experience of these devices.\nMotivated by the prevalence of collaboration in pervasive systems, we review in this section distributed MAB and MDP MARL algorithms from a resource utilization perspective. As has been the case throughout the paper, we are interested in the performance/resource-management trade-offs.
Specifically, we propose a taxonomy based on the performance obtained under specific resource budgets (e.g., a number of communication rounds).
\ref{multi-bandits}. Fundamentally, there exists a set of actions $\mathcal{K}$ (10 actions in the figure), where each action $a$ results in a reward sampled from a distribution $\mathcal{D}_a$ (Gaussians in the example illustrated in Fig. \ref{multi-bandits}). \n\begin{algorithm}\n\caption{Basic bandit problem}\n\label{alg:bp}\n\begin{algorithmic}[1]\n \renewcommand{\algorithmicrequire}{\textbf{Input:}}\n \renewcommand{\algorithmicensure}{\textbf{Output:}}\n\Require The set of $K$ actions (arms) $\mathcal{K}$\n\State \textbf{for} each round $t\in[T]$:\n\State \hskip1em Algorithm picks an action $a_t$\n\State \hskip1em Environment returns a reward $r_t \sim \mathcal{D}_{a_t}$\n\end{algorithmic}\n\end{algorithm}\nThe problem instance is fully specified by the time horizon $T$ and the mean reward vector (the vector of the expected reward of each action/arm) $\boldsymbol{\mu}=(u_a)_{a\in \mathcal{K}}$, where $u_a = \mathbb{E}[\mathcal{D}_a]$. The optimal policy is simply choosing the action whose expected value is the highest, i.e., $a_* = \argmax_a u_a$. However, as this action is not known a priori ($\mathcal{D}_a$ is not known), it has to be estimated online from samples. Thus, it is inevitable that some sub-optimal actions will be picked while building certainty about the optimal one. A reasonable performance measure is the \emph{Regret}, which is defined as the difference between the optimal policy's cumulative rewards and the cumulative rewards achieved by a solution algorithm:\n\begin{equation}\n R_T = \underbrace{u_* \times T}_{\text{\shortstack{Optimal policy's\\cumulative rewards}}} - \underbrace{\sum_{t=1}^T u_{a_t}}_{\text{\shortstack{An algorithm's cumulative\\ rewards}}}\n\end{equation}\nIn other words, the regret $R_T$ is the sum of \emph{per-step regrets}.
A per-step regret at time step $t$ is simply the difference between the best action's expected reward $u_*$ and the expected reward $u_{a_t}$ of the action $a_t$ chosen by the algorithm. Thus, it represents how much reward is missed because the best action is not known and has to be estimated from samples. Solution algorithms typically prove sub-linear regret growth (i.e., the per-step regret goes to zero as time progresses; in this way, learning is achieved). The best achievable regret bound for the described bandit problem was proven to be $O(log T)$ .\nSeveral solution algorithms with optimal performance guarantees have been proposed in the literature , which fall generally into two categories: explore-then-commit and optimism-based algorithms. The explore-then-commit class, e.g., the successive elimination algorithm, acts in phases and eliminates arms using increasingly sensitive hypothesis tests. On the other hand, optimism-based algorithms, such as the Upper Confidence Bound (UCB) algorithm, build a confidence interval for the reward of each action and select the action with the highest upper bound. The asymptotic performance of both classes is similar. Note that performance guarantees are also classified into instance-dependent bounds, which depend on problem-specific information such as the gap between the best and second-best arms, and instance-independent (i.e., worst-case) regret bounds. These algorithms are recently being extended to model pervasive computing through two main MAB formulations: \emph{distributed} and \emph{federated} bandits, as shown in Fig. \ref{fed-bandits}. \nIn distributed bandits, agents aim to solve the same bandit instance (i.e., quickly discover the best action), represented by the action set and the corresponding generating distributions. Meanwhile, in the federated bandit setting, agents handle different bandit instances and utilize each others' experiences to solve them.
While the terminology used to describe the exact problem is sometimes ambiguous in the literature (distributed, federated, and decentralized are sometimes used interchangeably), in this work we adopt the recent convention of reserving the term federated for the case where each agent faces a different (but related to the others') problem instance, while keeping the term distributed for the case where the instance is the same for all agents but the decision making is distributed across the agents. \n\begin{figure}[!h]\n\centering\n\t\includegraphics[scale=0.32]{Figures/FLBandit4.pdf}\n\t\caption{Multi-agent bandits formulations: (a) Distributed Bandits: each agent collaborates with others to identify the best action in the same environment (b) Federated bandits: each agent collaborates with others to identify the best global action using biased local samples. In this example, the local environments were generated (e.g., sampled) from a global one.}\n\t\label{fed-bandits}\n\end{figure}
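To make the explore-exploit mechanics of optimism-based algorithms concrete, the sketch below (our own illustration, not code from any surveyed work) runs the UCB rule on a synthetic Gaussian bandit instance and accumulates the pseudo-regret; the mean vector `mu`, the unit noise variance, and the horizon are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ucb_regret(mu, T):
    """Run UCB on a Gaussian bandit with true means `mu` and
    return the cumulative pseudo-regret after T rounds."""
    K = len(mu)
    counts = np.zeros(K)      # times each arm was pulled
    means = np.zeros(K)       # empirical mean reward of each arm
    regret = 0.0
    for t in range(1, T + 1):
        if t <= K:
            a = t - 1         # pull each arm once to initialize
        else:
            # optimism: pick the arm with the highest upper confidence bound
            bonus = np.sqrt(2.0 * np.log(t) / counts)
            a = int(np.argmax(means + bonus))
        r = rng.normal(mu[a], 1.0)                 # reward ~ D_a
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]     # incremental mean update
        regret += mu.max() - mu[a]                 # per-step regret u_* - u_{a_t}
    return regret

mu = np.array([0.2, 0.4, 0.9, 0.5])
print(ucb_regret(mu, 5000))   # grows like O(log T), far below the linear worst case
```

Plotting the returned regret against $T$ would show the characteristic logarithmic flattening that distinguishes a learning algorithm from a naive (linear-regret) one.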
These agents communicate according to some networking topology. In many contexts, the sequential decision-making problem at hand is distributed by nature. For example, consider a recommender system deployed over multiple servers in different locations. While every server aims to always recommend the best item, it is intuitive to reuse each other's experiences and cut the time needed to learn individually. Furthermore, since frequent communication may violate latency constraints, it is desirable that this collaboration and reuse of experience incur minimum communication overhead. \nWhile classical single-agent bandit algorithms date back to around 2002, their multi-agent counterparts are much more recent, with new state-of-the-art algorithms currently being proposed. The work in initiated the interest in the communication-regret trade-off. The authors established a non-trivial bound on the regret, given an explicitly stated bound on the number of exchanged messages. However, they focused on the full-information setting, assuming that the agents observe the rewards of all actions at each round, not only that of the picked action, as is the case in bandit settings. Nonetheless, this work initiated the interest in studying the same trade-off under bandit settings. The authors of considered the partial feedback (i.e., bandit) setting and presented an optimal trade-off between performance and communication. This work did not consider regret as the performance criterion but rather assumed the less common ``best arm identification'' setup, where the goal is to purely explore in order to eventually identify the best arm with high probability after some number of rounds. The authors in studied the regret of distributed bandits with gossiping-based P2P communication specialized to their setup, where at every step, each agent communicates only with two other randomly selected agents.
The work in studied the regret under the assumption that the reward obtained by each agent is observable by all its neighbors. The work in proposed a collaborative UCB algorithm on a graph network of agents and studied the effect of the communication graph topology on the regret bound. This line of work was improved in , which requires less global information about the communication graph and removes the graph-dependent factor multiplying the time horizon in the regret bound. \nOther works go beyond merely studying the effect of the network topology on the regret bound and explicitly account for the communication resources used. The authors in deduced an upper bound on the number of communicated bits needed, proving that the regret bound in can be achieved with a finite number of communicated bits. However, the interesting question, particularly from the perspective of pervasive computing design, is whether the use of communication resources can also be bounded, i.e., whether the order-optimal regret bound can be guaranteed under a cap on the number of communicated bits or messages.\nThe work in established the first logarithmic upper bound on the number of communication rounds needed for an optimal regret bound. The authors considered a complete-graph network topology, wherein the agents are initialized with disjoint sets of arms. As time progresses, a gossiping protocol is used to spread the best-performing arm among agents. The authors showed that, with high probability, all agents become aware of the best arm while communicating at progressively sparser (halving) periods. The authors generalized this work with a sequel formulation , which relaxes the assumption of a complete graph and introduces the option for agents to pull information. However, this approach still uses the same gossiping style of communication. According to , this dependence on pair-wise gossiping communication results in a sub-optimal instance-independent regret bound.
The authors in focused on the regret-communication trade-off in the distributed bandit problem. The networking model utilizes a central node with which all agents communicate. Initially, agents work independently to eliminate bad arms. Then, they start communicating with the central node at the end of each epoch, where the epochs' durations grow exponentially, leading to a logarithmic bound on the number of needed messages. \n \begin{table*}[!h]\n\centering\n\footnotesize\n\tabcolsep=0.09cm\n\caption{ Multi-agent stochastic bandit learning literature.}\n\label{table:MAB_previous_work}\n\begin{tabular}{|c|c|c|c|c|c|}\n\hline\n\textbf{Refs} & \shortstack{\textbf{Problem} \\ \textbf{Formulation}} &\t\shortstack{\textbf{Communication} \\ \textbf{Model}} & \shortstack{\textbf{Communication}\\\textbf{Guarantee}}\t& \textbf{Regret Guarantee} & \textbf{Method}\\ \hline\nP2P-$\epsilon$-Greedy & DB & Two Neighbors on a graph & $O(T)$& $O(T)$ & Gossiping arms estimates. \\ \hline \ncoop-UCB2 & DB & Neighbors on a graph & $O(T)$& $O(logT)$ & \shortstack{A running Consensus on \\the estimates of arms total rewards.} \\ \hline\nUCB-Network & DB & \shortstack{Multiple \\ (Graph and centralized)} & $O(T)$& $O(logT)$ & \shortstack{Identifying and utilizing \\dominating sets in the network.} \\ \hline \nDDUCB & DB & Neighbors on a graph & $O(T)$& \shortstack{$O(logT)$ \\ (with improved constants)}& \shortstack{A running Consensus on \\the estimates of arms total rewards.} \\ \hline\n & DB & \shortstack{Individual neighbors \\ on a complete graph} & $O(constant)$& $O(logT)$ & \shortstack{Gossiping among different \\local Poisson clocks.} \\ \hline \nGosInE & DB & \shortstack{Neighbors on \\ a complete graph}& $\Omega(T)$& $O(logT)+C_G$ & \shortstack{Gossiping and information \\ pulling.}\\ \hline \nDPE2 & DB & Neighbors on a graph & $O(constant)$& $O(logT)$ & \shortstack{Leader-election to handle \\exploration 
(exploration is centralized).} \\ \hline \nDEMAB & DB & Centralized coordinator & $O(logT)$& $O(logT)$ &\shortstack{ Utilizing public randomness \\to divide arms among clients. }\\ \hline \nLCC-UCB & DB & \shortstack{Multiple \\(Graph and centralized)} & $O(logT)$& $O(logT)$ &\shortstack{ Communicating estimates \\after epochs of doubling lengths. }\\ \hline \n & FB & Neighbors on a graph & $O(logT)$ & $O(logT+C_G)$ & \shortstack{Selecting the best arm \\according to voting.}\\ \hline \nGossipUCB & FB & Neighbors on a graph &$O(T)$ & \shortstack{$O(max\{logT,log_{C_G}N\})$} & \shortstack{Maintaining a local belief \\that is updated through gossiping.} \\ \hline \n & FB & Centralized coordinator & $O(logT)$ & $O(logT)$ & \shortstack{Aggregating estimates through \\the controller until a fixed point in time.} \\ \hline\n & FB & Centralized coordinator & $O(logT)$ & $O(logT)$ & \shortstack{Mixed target learning objective\\ based on local and global objectives.} \\ \hline\n & FB & Neighbors on a graph& $O(T)$& $O(logT)$& \shortstack{Agents use estimates of their\\ neighbors weighted by a similarity metric.}\\ \hline\n\end{tabular}\n\end{table*}\nThe work in presents a state-of-the-art distributed bandit learning algorithm. The authors proposed algorithms for both fully connected and partially connected graphs (i.e., assuming either that every agent can broadcast to all others, or that agents can communicate only with a subset of the others). Similar to elimination-based algorithms, the proposed algorithm proceeds in epochs of doubling lengths, only communicating at the end of an epoch, thus guaranteeing a logarithmic need for communication resources. The communicated messages contain only the ID of the action played most often. Furthermore, the regret is proved to be optimal even in the instance-independent sense, for reasonable values of the time horizon (i.e., $log(T)>2^{14}$).
During each epoch, agents maintain a set of arms recommended by other agents at the ends of previous epochs and run a UCB algorithm over this set.
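As an illustration of the epoch-based, communication-frugal pattern shared by several of the algorithms above (a sketch in the spirit of elimination-style methods, not the exact algorithm of any cited paper), the snippet below lets agents sample locally and synchronize merged (count, mean) statistics only at the end of epochs of doubling length, so the number of synchronization rounds is logarithmic in the horizon. All constants (number of agents, horizon, confidence radius) are illustrative assumptions.

```python
import math
import random

def distributed_elimination(mu, n_agents=4, horizon=4096, seed=1):
    """Agents sample surviving arms locally and exchange merged
    (count, mean) statistics only at the end of epochs of doubling
    length, so the number of synchronizations is O(log horizon)."""
    rnd = random.Random(seed)
    K = len(mu)
    active = list(range(K))
    stats = [[0, 0.0] for _ in range(K)]    # merged [count, mean] per arm
    t, epoch_len, comm_rounds = 0, 2, 0
    while t < horizon and len(active) > 1:
        # local phase: each agent pulls every surviving arm epoch_len times
        for _ in range(n_agents):
            for a in active:
                for _ in range(epoch_len):
                    r = rnd.gauss(mu[a], 1.0)
                    c, m = stats[a]
                    stats[a] = [c + 1, m + (r - m) / (c + 1)]
        t += n_agents * epoch_len * len(active)
        comm_rounds += 1                    # one synchronization per epoch
        # elimination on the merged statistics (confidence-radius rule)
        rad = {a: math.sqrt(2 * math.log(t) / stats[a][0]) for a in active}
        best_lcb = max(stats[a][1] - rad[a] for a in active)
        active = [a for a in active if stats[a][1] + rad[a] >= best_lcb]
        epoch_len *= 2                      # doubling epochs
    return active, comm_rounds

arms, rounds = distributed_elimination([0.1, 0.3, 0.8, 0.4])
print(arms, rounds)
```

The returned `comm_rounds` stays in the single digits here, while a naive design that synchronizes at every pull would need thousands of exchanges over the same horizon.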
In this way, communication is needed at the beginning of each round. Recently, studied this federated setting, where the global arm mean vector is the average of the local ones. Although the authors did not provide a bound on the number of messages that need to be exchanged, the communication model considers a partially connected graph, where each agent communicates only with its neighbors, with a focus on constrained communication resources. The algorithm consists of two main steps: First, each agent shares a set of local information with its neighbors (the number of times each arm was sampled and its sample mean). Second, a gossip update is performed, where each agent incorporates the information received from its neighbors when updating its estimate of each arm's mean. \n presented a more general formulation, where the global mean vector is not necessarily the average of the local ones. Instead, the local means are themselves \emph{samples} from a distribution whose mean is unknown. The local observations of each agent are, in turn, samples from its local distributions. The communication model is similar to supervised federated learning, where agents communicate periodically with an orchestrator that updates the estimates of the arms' payoffs and instructs the agents on which arms to keep and which to eliminate. Although the communication is periodic, the total number of communication rounds is bounded (logarithmic with respect to the horizon $T$). This is because the number of agents incorporated in the learning process decays exponentially with time. Such an approach works since the average of the clients' local means concentrates exponentially fast around the global mean (a known result from concentration-of-measure analyses). \nA setting that is slightly different from federated bandits was studied in . The difference is that, although agents have similar yet not identical local models, the reward for each agent is actually sampled from its local distribution.
Thus, each agent is trying to identify the best arm in its local instance by using information from other agents about similar arms. This work differs from the aforementioned approaches, where the agents' rewards are sampled from the \emph{global} distribution that they are collaboratively trying to estimate from biased local observations. \nTable \ref{table:MAB_previous_work} summarizes the works on MAB problems. It lists the problem formulation: distributed bandits (DB) or federated bandits (FB), the communication model (i.e., the network topology), the communication guarantee (i.e., the number of messages needed to achieve the performance), the regret guarantee (i.e., the growth of the regret with respect to the time horizon), and the main characteristics of the method, which describe how the rewards' estimates are communicated among the agents (recall that the agents aim to collectively learn accurate estimates of the reward distributions). $C_G$ denotes a constant related to the communication graph or gossiping matrix and $N$ is the number of agents.
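The following toy simulation sketches the federated-bandit setting described above (our simplified illustration, not the algorithm of any specific paper): each client observes a biased local instance, and an orchestrator periodically averages the clients' sample means, which concentrate around the hidden global means. The noise scales, phase count, and elimination tolerance are all assumptions.

```python
import numpy as np

def federated_elimination(global_mu, n_clients=20, phases=6, seed=2):
    """Each client observes a *biased* local instance (its local means are
    noisy versions of the hidden global ones). An orchestrator averages the
    clients' sample means once per phase -- O(phases) communication
    rounds -- and eliminates arms that look globally suboptimal."""
    rng = np.random.default_rng(seed)
    K = len(global_mu)
    # local mean vectors: noisy versions of the hidden global instance
    local_mu = global_mu + rng.normal(0.0, 0.1, size=(n_clients, K))
    active = np.arange(K)
    pulls_per_phase = 8
    for phase in range(phases):
        # local phase: every client samples each active arm from ITS instance
        est = np.zeros((n_clients, len(active)))
        for c in range(n_clients):
            samples = rng.normal(local_mu[c, active, None], 1.0,
                                 size=(len(active), pulls_per_phase))
            est[c] = samples.mean(axis=1)
        # one upload per client per phase: only sample means are sent
        global_est = est.mean(axis=0)       # concentrates around global_mu
        tol = 0.5 / (phase + 1)             # shrinking tolerance (assumption)
        active = active[global_est >= global_est.max() - tol]
        if len(active) == 1:
            break
    return active

print(federated_elimination(np.array([0.2, 0.5, 0.9, 0.6])))
```

Note that no client ever uploads raw reward samples, only aggregates, mirroring the privacy-motivated design discussed in this subsection.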
Due to the recent growth in data generation, local geo-distributed servers are often used to support applications that rely on recommender systems. Furthermore, privacy concerns sometimes limit the ability of these local servers to share data with other servers. The work in studies the case of a set of servers that run a recommender system for their respective clients. The goal of each server is to recommend the most popular content across all servers. However, due to latency constraints, communication at every decision-making step is infeasible. Moreover, sharing individual reward samples violates privacy, as all servers would learn about a particular user's choices and preferences. For these reasons, the authors proposed and utilized a federated bandits algorithm (Fed-UCB), which communicates only $logT$ times over a horizon of $T$ rounds to minimize recommendation latency. At each communication round, only the sample \emph{means} are communicated, preserving a certain level of privacy (additional improvements are also discussed).
Finally, the performance of the system is shown to be near-optimal, thus achieving the goal of recommending the best item across all servers while meeting the privacy and communication constraints.
Second, every regret guarantee is accompanied by an upper bound on communication resource usage (e.g., the maximum number of exchanged messages or exchanged bits).\n \item Towards the federation of the bandit framework: When the bandit instances faced by the agents are biased local instances, the federated bandits framework arises. In such a situation, agents need to learn with the help of a logically centralized controller, similar to supervised federated learning, in order to estimate the true global instance and the true best action. However, if agents are not interested in solving a hidden global instance but rather only their own, they may reuse their peers' experience together with an instance-similarity metric to help them solve their own instances .\n\end{itemize}
That is, we examine what communication topology and protocol are used between agents, and how these choices affect the performance (the rewards obtained by all agents).
\ref{fig:MARL}, are often modeled as a Partially Observable Markov Game (POMG), which is a multi-agent extension of the Partially Observable Markov Decision Process (POMDP). POMGs are represented by the tuple $(\mathcal{N},\mathcal{S},\mathcal{A},\mathcal{O},\mathcal{T},\mathcal{R}, \gamma)$, where:\n\begin{itemize}\n \item $\mathcal{N}$ is the set of all agents.\n \item $s_t \in \mathcal{S}$ is a possible configuration of all the agents at time $t$.\n \item $a_t \in \mathcal{A}$ is a possible action vector of the agents, where $\mathcal{A} = \mathcal{A}_1 \times \mathcal{A}_2 \times \dots \times \mathcal{A}_N$.\n \item $o_t \in \mathcal{O}$ is a possible observation of the agents, where $\mathcal{O} = \mathcal{O}_1 \times \mathcal{O}_2 \times \dots \times \mathcal{O}_N$.\n \item $\mathcal{T}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \mapsto [0,1]$ is the state transition probability.\n \item $\mathcal{R}$ is the set of reward functions of all the agents, $r_i: \mathcal{S} \times \mathcal{A} \mapsto \mathbb{R}$.\n \item $\gamma$ is the reward discount factor.\n\end{itemize}\nEach agent aims to find a policy $\pi_n$ that maximizes its own reward. If the rewards are not the same for all agents, the framework is referred to as a mixed setting. When the rewards are identical for all agents (i.e., $r_n=r \quad \forall n \in \mathcal{N}$), the POMG is collaborative (a setting also known as a Dec-POMDP), and each agent's policy aims at maximizing the shared total reward. In the following, we discuss algorithms that may work in one or both settings. The main focus will be on the communication aspects (i.e., topology and cost) of MARL algorithms. \nHardness results exist for solving the POMG under several settings, including the case of a tabular representation of the spaces and the cases where (linear or nonlinear) function approximation is used.
The main solution approaches are policy gradient and value-based methods .\nPolicy gradient methods parametrize the agents' policies within a class of functions and use gradient-based optimization on an objective function (i.e., the reward obtained by the policy). Value-based methods aim to generalize the famous Q-learning algorithm to the multi-agent setting, either by making each agent learn its own Q-function and treat the others as part of a non-stationary environment, or by learning a global Q-function.\nThe optimization in policy gradient methods is done on the objective function $J(\boldsymbol{\theta}) \doteq v_{\pi_{\boldsymbol{\theta}}}(s_0)$, which is the value of starting from the initial state $s_0$ and following the parametrized policy $\pi_{\boldsymbol{\theta}}$ thereafter. The gradient of this function is proportional, in expectation, to the per-step quantity:\n\begin{equation}\n (G_t - b(S_t)) \frac{\nabla \pi (A_t|S_t, \boldsymbol{\theta}_t)} {\pi (A_t|S_t, \boldsymbol{\theta}_t)}.\n\end{equation}\nAs shown in the policy gradient algorithm , $G_t$ is the return from time step $t$ and $b$ is any function of the state, referred to as the baseline. If the baseline is the zero function, the resulting update is the REINFORCE algorithm. Another popular option is the value function of the state. If this state value function is updated through bootstrapping, the resulting method is called actor-critic. Thus, actor-critic methods are policy gradient methods that use the state value function as a baseline ($b(s)=V(s)$) and update this function through bootstrapping.\nReaders may refer to for more details and a comparison between these approaches. As will be clarified next, each work tunes different parts of these main solution approaches according to the application.
In the sequel, we present a classification of MARL literature from a pervasive computing perspective.\nThe Centralized Training and Decentralized Execution (CTDE) approach is first proposed in . This approach leverages the assumption that, in most application scenarios, the initial training is done on centralized simulators, where agents can communicate with each other with no communication cost. This phase is denoted as centralized training. Then, at deployment, agents are assumed not to communicate at all, or to conduct limited communication with each other, and they rely on their ``experience'' at the training phase to actually execute their collaborative policies.\n\\textbullet{ Communication only at training:}\nThe advantage of such an approach is that it does not require communication between agents upon deployment and thus incurs no communication cost. However, this comes at the cost of losing adaptability, which is the major motivation behind online learning. Such a loss might occur in the case of a major shift in the environment model between training and deployment, where the learned coordinated policies are no longer performant and new coordination is needed. 
The main workaround is to monitor the agents' performance and re-initiate the centralized training phase to learn new coordinated policies whenever needed.\nThis approach has been popularized by recent methods such as VDN , QMIX , and QTRAN . These works adopt the \\emph{value function factorization} technique, where factorizations of the global value function in terms of individual (i.e., depending only on local observations) value functions are learned during centralized training. Then, the global function (i.e., neural network) can be discarded at execution time, and each individual agent utilizes only the local function. When each agent acts greedily according to its local network, global optimality can still be guaranteed since, at the training phase, these local networks were trained according to gradient signals with respect to the global reward.\nAnother approach to solving POMGs is actor-critic. The CTDE version of actor-critic approaches is represented by learning a centralized critic, which evaluates the global action, and a decentralized policy network, which takes an action based only on the local observation. During training, the actor and critic networks are jointly learned, and hence the global critic ``guides'' the training of the actors. Then, at execution, the global critic may be discarded, and only the actors are used. The works in present a deep deterministic policy gradient method that follows the described approach, where each agent learns a centralized critic and a decentralized actor. Similarly, follows the same approach, but all agents learn the same critic. Multiple other variations are built on the same DDPG algorithm, aiming to either enhance performance through incorporating an attention mechanism, or to reduce the use of communication resources (e.g., a limited budget on the number of messages, or designing the message as (a part of) an agent's state). 
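The value function factorization idea can be illustrated with a toy VDN-style mixer, where $Q_{tot}$ is the sum of per-agent utilities. Because the sum is monotonic in each local term, per-agent greedy actions jointly maximize $Q_{tot}$, which is exactly what allows the global network to be discarded at execution. The vectors below stand in for per-agent network outputs and are purely illustrative.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Two agents, 3 local actions each; q1/q2 play the role of the
# per-agent utility networks Q_i(o_i, a_i) evaluated at fixed o_i.
q1 = rng.normal(size=3)
q2 = rng.normal(size=3)

def q_tot(a1: int, a2: int) -> float:
    """VDN mixing: Q_tot(a) = Q_1(a_1) + Q_2(a_2)."""
    return q1[a1] + q2[a2]

# Centralized argmax over the joint action space ...
joint_best = max(itertools.product(range(3), range(3)),
                 key=lambda a: q_tot(*a))
# ... equals each agent acting greedily on its own local utility:
local_best = (int(np.argmax(q1)), int(np.argmax(q2)))
```

QMIX generalizes this by replacing the sum with any mixing network that is monotonic in each local value, preserving the same decentralized-greedy property.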
\\begin{table*}\n\\centering\n\\footnotesize\n\\tabcolsep=0.09cm\n\\caption{Communication-cognizant multi-agent reinforcement learning literature.}\n\\label{table:rl_previous_work}\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\textbf{Refs} & \\textbf{Framework} & \\shortstack{\\textbf{Learning} \\\\ \\textbf{algorithm}} & \\shortstack{\\textbf{Communication}\\\\\\textbf{scheme}} & \\shortstack{\\textbf{Communication}\\\\\\textbf{decision}} \\\\ \\hline\n\\shortstack{VDN , QMIX ,\\\\ QTRAN } & CTDE & Value-based & NA & \\shortstack{Always during training,\\\\ None at execution.} \\\\ \\hline\n\\shortstack{MADDPG , \\\\ COMA} & CTDE & Actor-critic-based & NA & \\shortstack{Always during training, \\\\ None at execution.} \\\\ \\hline\n IMAC & CTDE with learned comm. & Policy gradient & \\shortstack{Learned source \\\\ and destination} & \\shortstack{At every step (limited size).} \\\\ \\hline\n ATOC & CTDE with learned comm. & Actor-critic-based & \\shortstack{Gated communication \\\\with neighbors} & \\shortstack{When network topology \\\\changes.} \\\\ \\hline\n IC3Net & CTDE with learned comm. & Policy gradient & \\shortstack{Gated communication \\\\with neighbors} & \\shortstack{Communicate when necessary, \\\\possibly many messages per round.} \\\\ \\hline\n ACML & CTDE with learned comm. & Actor-critic-based & \\shortstack{Gated communication \\\\with neighbors} & \\shortstack{Communicate when necessary, \\\\respecting a limited bandwidth.} \\\\ \\hline\n & Fully decentralized & Value-based & Indirect & No message passing. \\\\ \\hline\n & Fully decentralized & Actor-critic-based & With neighbors & At every step. \\\\ \\hline\n & Fully decentralized & Actor-critic-based & With neighbors & At every step (limited size). \\\\ \\hline\n & Fully decentralized & Policy gradient & \\shortstack{Broadcast to all through \\\\central controller} & At every step. 
\\\\ \\hline\n\\end{tabular}\n\\end{table*}\n\\textbullet{ Learned communication:}\nAn important line of work within the MARL community is the study of learned communication between agents. In these settings, agents are allowed to send arbitrary bits through a communication channel to their peers in order to convey useful information for collaboration. These agents need to learn \\emph{what} to send, and \\emph{how} to interpret the received messages so that they inform each other's action selection. Thus, the agents are effectively learning communication protocols, which is a difficult task .\nWhile the learned communication can be trained centrally and executed in a decentralized fashion, agents can still communicate at the execution phase through a limited-bandwidth channel. Hence, we distinguish this setting from the works discussed in the previous subsection, although similar approaches can be followed: for example, the works in discard the critic at execution (a setup sometimes used interchangeably with the term CTDE) while still maintaining learned communication, parameter sharing, and gradient pushing during training; at execution, the exchanged messages are discretized.\nWithin the learned communication line of work, the authors in aimed to learn to schedule communication between agents in a wireless environment and focused only on the collision avoidance mechanism in the wireless environment. In , an information-theoretic approach was used to compress the content of the message; in addition, the source and destination are learned through a scheduler. On the other hand, a popular line of work targeted the design of so-called \\emph{gating mechanism} techniques in order to accomplish the efficiency of the learned communication protocols. 
In this line of work, agents train a gating network, which generates a binary action to specify whether the agent should communicate with others or not at a given time step, limiting the number of communicated bits/messages needed to realize a certain desirable behavior. investigates the adaptability of these communication protocols and demonstrates the importance of communicating only with selected groups. Specifically, agents cannot distinguish messages that are particularly important to them (i.e., have implications on their actions) from the messages of all other agents. Thus, the authors introduce an attention scheme where an attention unit within each agent receives the encoded local observation and action intention of the agent to decide whether communication with others in its observable field is needed. The communication group dynamically changes and persists only when necessary.\nThe authors in looked at \\textit{communication at scale} and proposed an Individualized Controlled Continuous Communication Model (IC3Net), where agents are trained according to their own rewards (hence the approach can also work in competitive scenarios). Then, they demonstrated that their designed gating mechanism allows agents to block their communication, which is useful in competitive scenarios and reduces communication in cooperative scenarios by opting out from sending unnecessary messages. However, the effect of the gate on communication efficiency was not thoroughly studied, and the focus was instead on the emerging behavior. The work in presents the state of the art on efficient learned communication. The authors introduced the Actor-Critic Message Learner (ACML), wherein the gate adaptively prunes less beneficial messages. To quantify the benefit of an action, Gated-ACML adopts a global Q-value difference as well as a specially designed threshold. Then, it applies the gating value to prune the messages that carry no useful information. 
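A toy sketch of such a threshold-based gate, in the spirit of the Q-value-difference pruning just described: a message is sent only when the estimated gain in value from communicating exceeds a threshold. The Q-values and threshold below are illustrative, not taken from any of the cited papers.

```python
from typing import List

def gated_messages(q_with_msg: List[float], q_without_msg: List[float],
                   threshold: float) -> List[bool]:
    """Binary gate per agent: communicate only if the estimated
    Q-value gain from sending the message exceeds the threshold."""
    return [(qw - qo) > threshold
            for qw, qo in zip(q_with_msg, q_without_msg)]

# Three agents: only the first one's message is worth its bandwidth.
gates = gated_messages(q_with_msg=[1.0, 0.4, 0.2],
                       q_without_msg=[0.3, 0.35, 0.25],
                       threshold=0.1)  # -> [True, False, False]
```

In the learned-communication setting, both the value estimates and the gate itself are produced by trained networks; the hard threshold here only illustrates the pruning decision.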
The authors showed that, surprisingly, not only does communication efficiency increase significantly, but in specific scenarios the performance even improves as a result of well-tuned communication. The reason is that, since the communication protocol is learned, it is likely to carry redundant information that agents do not decode successfully. The proposed gating mechanism can also be integrated with several other learned communication methods.\nIn fully decentralized reinforcement learning, there is no distinction between training and testing environments. Thus, the communication model stays the same throughout the agents' interaction with the environment. Under these settings, we recognize two extreme cases. In the first, agents do not communicate with each other and learn to coordinate solely through the obtained rewards. In this case of no communication, the major challenge faced by the agents is the non-stationarity of the environment: an environment is non-stationary from the perspective of the agents when the distribution of next states varies for the same current state-action pair. Fully decentralized DRL was recently popularized by . In , the authors proposed a 3-dimensional replay buffer whose axes are the episode index, timestep index, and agent index. 
It was illustrated that conditioning on data from that buffer helps agents to minimize the effect of the perceived non-stationarity of the environment.\nOn the other extreme, agents can be modeled as able to communicate at every step. Specifically, the problem of graph-networked agents is investigated in . In this paper, agents are connected via a time-varying and possibly sparse communication graph. The policy of each agent takes actions that are based on the local observation and the neighbors' messages to maximize the globally averaged return. The authors proposed fully decentralized actor-critic algorithms and provided convergence guarantees when the value functions are approximated by linear functions. However, a possible disadvantage of this algorithm is that the full parameter vector of the value function is required to be transmitted at each step. This has been addressed in , where graph-networked agents are also assumed, but each agent broadcasts only one (scaled) entry of its estimate of the parameters. This significantly reduces the communication cost (given that communication occurs at every iteration). The paper also does not assume a bidirectional communication matrix and deals with unidirectional ones, which is a more general formulation that appeals to more applications. The decentralized actor-critic-based algorithm also solves the distributed reinforcement learning problem for strongly connected graphs with linear value function approximation.\n considered the communication efficiency of fully decentralized agents, but with the assumption of a centralized controller. The paper utilizes policy gradient solution methods, where the controller aggregates the gradients of the agents to update the policy parameters. This process is akin to client selection in federated learning. The authors propose a process to determine which clients should communicate with the controller based on the amount of progress in their local optimization. 
They also propose a methodology to quantify the importance of the local gradient (i.e., the local optimization progress) and then only involve agents who are above a certain threshold. Following this approach, the authors showed that the performance (i.e., cumulative reward) is similar to the case where all clients participate, with considerable savings in communication rounds.\nTable \\ref{table:rl_previous_work} summarizes the works discussed above according to their communication model and their approach to handling the communication-performance tradeoff. We first identify the framework (CTDE, CTDE with learned communication, or fully decentralized) as well as the learning framework (value, policy gradient, or actor-critic). Note that these MARL algorithmic frameworks, which are based on a single-agent variant of the problem, involve learning the state-space transition operator, as it plays a major role in estimating the future expected sum of rewards. \nThen, we list two important configurations. First, the communication scheme states \\emph{how} agents communicate with each other. In CTDE, the training is done in simulation; thus, agents are logically centralized and do not communicate. If no messages are passed between agents and their collaboration is solely learned through rewards, then the communication scheme is \\textit{indirect}. Otherwise, it is either \\textit{gated with neighbors} directly or through a \\textit{central controller}. 
Lastly, the communication decision states \\emph{when} the communication is made, which can be at every step (with optimized message size or not), or according to other conditions as detailed in the discussion.\n\\begin{figure}[h!]\n\t\\centering\n\t\t\\scalebox{1.8}{\\includegraphics[width=0.46\\columnwidth]{Figures/UC_uav1.pdf}}\n\t\\caption{UAV-assisted networks: UAV agents are trained to deduce collaborative policies for providing compute/communication resources to on-ground equipment.}\n\t\\label{fig:uav_MARL}\n\\end{figure}\nUAVs have provided new potential for extending the communication coverage and even the compute capabilities of devices in areas where full networking infrastructure is not present. This is done through wireless communication between UAVs and on-ground equipment, enabling that equipment to extend its connectivity and potentially offload tasks to a broader network relayed by the flying devices.\nThe work in aims at utilizing UAVs to provide intermittent compute/communication resource support for user equipments (UEs) on the ground. The benefits of such UAV-assisted scenarios are numerous, including creating dynamic networking graphs without the need for full networking infrastructure, which can be of extreme value in disaster response, for example. 
Nonetheless, the UAVs need to optimize their trajectory paths so that they cover the widest areas with minimum movement (i.e., energy consumption) and maximum utility (i.e., providing the resources to the UEs that need them the most). However, such optimization is shown to be intractable. Thus, the authors opted for learning-assisted (i.e., data-driven) methods. Since centralized training was possible in their tackled scenario, they used a CTDE algorithm, specifically Multi-Agent DDPG (MADDPG). In MADDPG, agents aim to jointly optimize the UE load of each UAV while minimizing the overall energy consumption experienced by the UEs, which depends on the UAVs' trajectories and offloading decisions. Following the MADDPG algorithm, the UAVs' observations were \\textit{communicated} among them to deduce the collaborative policy at training. At execution, no message sharing was needed. This resulted in satisfactory performance due to the accurate simulator. However, as discussed earlier, environments that are expected to change require the use of other algorithms that maintain periodic, learned, or persistent communication after deployment.\nWe note that the application of reinforcement learning in resource-constrained environments (e.g., IoT devices), requiring the design of communication-aware techniques, is still scarce. 
Most testing for these methods is done in artificial environments like OpenAI's Particle environments , or video games like StarCraft II , which is a typical practice in the RL community since success in these environments is often indicative of broader applicability.\n\\begin{itemize}\n \\item Most of the MDP MARL works focused on performance gains and benchmarking, with little attention to resource utilization. This is because MARL is applied to games and other areas (e.g., robotics) where the priority is performance. IoT applications, where resource utilization plays a major role, are yet to make full use of state-of-the-art MARL algorithms.\n \\item \\textit{CTDE, a practical middle ground}: We note that CTDE is the most adopted framework in pervasive/IoT scenarios. We attribute this to the simplicity of the way agents communicate in this framework. Specifically, a CTDE algorithm leverages the fact that training is often done in simulators, where there is no communication cost, and agents may freely share experience tuples, network parameters, and observations in order to train policies that can be executed later on based only on local observations. This approach seems to model most of the pervasive computing applications, where agents do not need to start training while being decentralized. 
In this framework, actor-critic-based algorithms are more popular, where a centralized critic network that uses the observations of all agents guides the training of a decentralized policy network that uses only the local observations. The critic network can be discarded at execution time, enabling decentralized execution. The framework is emerging as a possible alternative to the fully decentralized extremes, where agents communicate at every step or do not communicate at all and try to indirectly and independently learn collaborative policies . \n \\item \\textit{Scheduling for efficient learned communication}: In learned communication, agents learn to encode and decode useful messages. In this area, gating mechanisms are the main tools towards efficient communication . In gate design, agents learn when to send and when to refrain from sending a message by quantifying the benefit (i.e., reward) of the actions following this communication. More general \\emph{scheduler} modules investigate the design of communication modules that also learn to minimize the content of the messages (i.e., compressing the communication messages) . 
Overall, scheduling mechanisms are being increasingly used in MARL settings with learned communication, in order to cope with the limited-bandwidth problems often encountered in practical scenarios.\n\\end{itemize}\n{\n\\label{AL}\nAmong pervasive training schemes, AL has emerged as a promising and effective concept. Herein, we first present a brief overview of the concept of AL, then we discuss some recent applications of AL presented in the literature.\n\\label{AL_Overview}\nThe main idea behind AL is that an active learner is allowed to actively select over time the most informative data to be added to its training dataset in order to enhance its learning goals , . 
\nHence, in the AL framework, the training dataset is not static, which means that the training dataset and learning model are progressively updated in order to continuously improve the learning quality . \nSpecifically, the main steps of AL are: (1) acquiring new data from the contiguous nodes; (2) picking out the most informative data to append to the training dataset; (3) retraining the learning model using the newly-acquired data. Hence, the communication overheads associated with different AL schemes will depend on:\n\\begin{itemize}\n\t\\item The type and amount of data exchanged between the contiguous nodes. We remark here that contiguous nodes can exchange labels, features, or samples. Hence, depending on the type and amount of exchanged data, there will always be a tradeoff between enhancing the performance and decreasing the communication overheads. \n\t\\item The number of selected nodes that will be considered in the AL process. \n\\end{itemize}\nIt is worth mentioning also that FL allows multiple nodes to cooperatively train a global model without sharing their local data, which differs from AL in many ways. In particular, FL requires synchronization between the different cooperative nodes, in addition to the presence of a centralized node (or server) to generate the global model. 
Thus, AL and FL address orthogonal problems: the former leverages the newly-acquired data from the contiguous nodes to retrain its model, while the latter trains its model in a distributed manner by sharing the model's updates with the contiguous nodes .\n\\label{AL_Applications}\nTraditionally, AL algorithms depend on the presence of an accurate classifier that generates the ground-truth labels for unlabeled data. However, this assumption becomes hard to maintain in several real-time applications, such as crowdsourcing applications and automated vehicles. Specifically, in crowdsourcing, many sources are typically weak labelers, which may generate noisy data, i.e., data that may be affected by errors due to low resolution and age-of-information problems. \nHowever, most of the existing studies on AL investigate the effect of noisy data (or imperfect labels) on binary classification problems , while few works consider the general problem of multi-class or multi-labeled data . \nOne of the main problems in crowdsourcing is how to collect a large amount of high-quality labeled data, given that the labeling can be done by volunteers or non-expert labelers. Hence, the process of acquiring a large amount of labeled data turns out to be challenging, computationally demanding, resource hungry, and often redundant. 
Moreover, crowdsourced data with cheap labels comes with its own problems: although the labels are cheap, it is still expensive to handle the resulting label noise. \nThus, when data/labelers are not selected carefully, the acquired data may be very noisy , due to many reasons such as varying degrees of competence, labelers' biases, and disingenuous behavior, which significantly affects the performance of supervised learning. \nSuch challenges have encouraged researchers to design innovative schemes that can enhance the quality of the data acquired from different labelers. \nFor instance, tackles deep learning techniques' demand for large datasets by presenting an AL-based solution that leverages multiple freely accessible crowdsourced geographic datasets to increase the dataset size. However, in order to effectively deal with the noisy labels extracted from these data and avoid performance degradation, the authors have proposed a customized loss function that integrates multiple datasets by assigning different weights to the acquired data based on the estimated noise. \n enhances the performance of supervised learning with noisy labels in crowdsourcing systems by introducing a simple quality metric and selecting the $\\epsilon$-optimal labeled data samples. The authors investigate the data subset selection problem based on the Probably Approximately Correct (PAC) learning model. Then, they consider the majority voting label integration method and propose two data selection algorithms that optimally select a subset of $k$ samples with high labeling quality. \nIn , the authors investigate the problem of imbalanced noisy data, where the acquired labeled data are not uniformly distributed across the different classes. \nThe authors therein aim to label training data given noisy labels received from diverse sources. 
Then, they use their learning model to predict the labels of new unlabeled data and update the model until some conditions are met (e.g., the performance of the learned model meets a predefined requirement, or it cannot be improved any further). Specifically, for labeled data, they implemented a label integration and data selection scheme that considers data uncertainty and the class imbalance level, while classifying the unlabeled data using the trained model before adding them to the training dataset. Hence, the proposed framework presents two core procedures: label integration and sample selection. In the label integration procedure, a Positive LAbel Threshold (PLAT) algorithm is used to infer the correct label from the received noisy labels of each sample in the training set. After that, three sample selection schemes are proposed to enhance the learning performance; these schemes are respectively based on the uncertainty derived from the received noisy labels, the uncertainty derived from the learned model, and a combination of both. \nA different application of AL is investigated in , where AL is exploited for incremental face identification. Conventional incremental face recognition approaches, such as incremental subspace approaches, have limited performance in complex and large-scale environments. Typically, the performance may drastically drop when the training data of face images is either noisy or insufficient. Moreover, most existing incremental methods suffer from noisy data or outliers when updating the learning model. Hence, the authors in present an active self-paced learning framework, which combines active learning and Self-Paced Learning (SPL). The latter refers to a recently developed learning approach that mimics the learning process of humans by gradually adding easy to more complex data to the training set, where easy data is that with high classification confidence. 
\nIn particular, this study aims to solve the incremental face identification problem by building a classifier that progressively selects and labels the most informative samples in an active self-paced way, then adds them to the training set. \nAL has also been considered in various applications of intelligent transportation systems. For instance, the authors in investigate the vehicle type recognition problem, in which labeling a sufficient amount of data in surveillance images is very time consuming. To tackle this problem, this work leveraged fully labeled web data to decrease the required labeling time of surveillance images using deep transfer learning. Then, the unlabeled images with high uncertainty are selected to be queried in order to be added later to the training set. Indeed, the cross-domain similarity metric is linearly combined with the entropy in the objective function of the query criterion to actively select the best samples. \nUltimately, we highlight that most of the studies presented so far build their AL framework around specific classifiers (or learning models), which cannot be easily reused with other learning models . 
Accordingly, obtaining an optimal label integration and data selection strategy that can be used with generic multi-class classification techniques is still worth further investigation.}\n{\n\\begin{figure}[h!]\n\t\\centering\n\t\t\\scalebox{1.9}{\\includegraphics[width=0.49\\columnwidth]{Figures/AL.pdf}}\n\t\\caption{AL for a time-varying vehicular network.}\n\t\\label{fig:CS}\n\\end{figure}\nTraditional machine learning models require massive, accurately labeled datasets for training in order to ensure high classification accuracy for new data as it arrives . This assumption cannot be guaranteed in many real-time applications, such as connected and autonomous vehicles. Indeed, vehicles are typically weak labelers (i.e., data sources that generate labels with low classification confidence). Hence, they may acquire/generate noisy data, e.g., data generated by cameras in the presence of fog or rain. Also, in a highly dynamic environment like a vehicular network, not only can the data generated by the vehicles' classifiers have low classification accuracy, but the data received from neighboring vehicles may also be prone to noise and communication errors. Hence, the authors in have tackled these challenges by proposing a cooperative AL framework for connected vehicles. 
The main goal of this work is two-fold: (1) selecting the optimal set of labelers, i.e., those considered to be the most reliable ones; (2) selecting a maximally diverse subset of high-quality data that are locally generated and/or received from neighboring vehicles, to be used for updating the learning model at each vehicle. 
In , the time-varying vehicular network shown in Figure \ref{fig:CS} is considered. 
It is assumed that each vehicle can communicate and exchange information only with the neighboring vehicles located within its communication range. For instance, the set $\mathcal{N}_{v_0}(t) = \left\{v_{j}, v_{j+1}, v_{j+2}\right\}$ at time $t$ means that only three vehicles stay in the communication range of vehicle $v_0$. 
Furthermore, this framework considers two types of data: a multiple-labeled online dataset and an offline/historical labeled dataset. 
The online dataset is considered as sequences of samples that arrive from neighboring vehicles or are generated at vehicle $v_0$ within time $T$ (i.e., the period of time during which vehicle $v_0$ is exposed to a certain road view). At time $T$, vehicle $v_0$ receives a sequence of training samples/labels that contain input features and are associated with multiple noisy labels generated by the vehicles sending data to $v_0$. 
The presented framework in includes five main stages, as described below: 
\begin{enumerate}
	\item \textbf{Offline learning:} Initially, each vehicle uses its own offline/historical training data to generate an initial learning model with a certain accuracy level. 
	\item \textbf{Online labeling:} The vehicle starts to collect new information through its local sensors or from neighboring vehicles. This information can include labels, features, or samples, depending on the adopted operational mode. 
\n\t\\item \\textbf{Label integration:} After acquiring the new information, each vehicle obtains an aggregated label for the received data using different proposed label integration strategies. \n\t\\item \\textbf{Labeler selection:} After monitoring the behavior of the neighboring vehicles, each vehicle selects a subset of high-quality labelers, based on their reputation values that are estimated from the past interactions using subjective logic model. \n\t\\item \\textbf{Data selection and models update:} Finally, each vehicle selects the maximally diverse collection of high-quality samples to update its learning model. \n\\end{enumerate}\nThe proposed AL framework in depicts its efficiency for connected automated vehicles, as follows: \n(1) it allows to increase the amount of acquired data at different vehicles during the training phase; \n(2) it accounts for the labelers' accuracy, data freshness, and data diversity while selecting the optimal subset of labelers and data to be included in the training set; \n(3) Using different real-world datasets, it could provide $5-10\\%$ increase in the classification accuracy compared to the state-of-the-art approaches that consider random data selection, while enhancing the classification accuracy by $6\\%$ compared to random labelers selection approaches. 
\n}", "id": "a7289a44-2845-45ba-b0a2-8896e5daae85", "level": "paragraph", "origin_cites_number": 2, "parent_id": "925ca5d6-5fa4-4ce1-a4e4-cdb64f74b962", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive training" ], [ "subsection", "Active Learning (AL)" ], [ "paragraph", "Overview" ], [ "paragraph", "Use case: AL for Connected Vehicles" ] ], "subsections": [], "title": "Use case: AL for Connected Vehicles" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{PI}\n\\begin{comment}\n\\begin{figure}[!h]\n\\hspace{-5mm}\n\\centering\n\t\\includegraphics[scale=0.55]{Figures/IVA.pdf}\n\t\\caption{Outline of pervasive inference section.}\n\t\\label{IVA}\n\\end{figure}\n\\end{comment}\nIn this section, we discuss the pervasive inference, where the trained model is partitioned and different segments are distributed among ubiquitous devices. It is worth mentioning that the training method of the distributed model is out of the scope of this section. Fig. \\ref{pervasive_AI} illustrates different scenarios, where the distribution can solve the challenges presented by the centralized approaches. In the following subsections, the communication and computation components of the pervasive inference are introduced. Then, the resource management approaches for the distribution are reviewed and a use cases is described.\n\\begin{figure}[!h]\n\\centering\n\t\\includegraphics[scale=0.37]{Figures/Pervasive_inference.pdf}\n\t\\caption{Pervasive inference system in multiple scenarios. 
}\n\t\\label{pervasive_AI}\n\\end{figure}", "id": "caa8daf7-97f9-471c-b534-e135a402b2d0", "level": "section", "origin_cites_number": 0, "parent_id": "7aedf107-610a-4e90-9d9d-b0a1b2dcb03c", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ] ], "subsections": [ "6392d18e-faaf-420e-81d4-75b4fe5f17bf", "ca471488-ff37-437f-9c43-59fb13ca9bae", "6f7ddb5a-d1b9-4bec-a004-bea50c65f5dd" ], "title": "Pervasive Inference" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{profiling}\nThe computation and communication models present the mechanisms to formulate different operations and functions into an optimization problem in order to facilitate the theoretical analysis of DNN distribution. More specifically, we discuss the computational requirements of different DNN tasks, the wireless communication latency between different pervasive participants and their energy consumption.", "id": "6392d18e-faaf-420e-81d4-75b4fe5f17bf", "level": "subsection", "origin_cites_number": 0, "parent_id": "caa8daf7-97f9-471c-b534-e135a402b2d0", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Profiling computation and communication models" ] ], "subsections": [ "21a6c43d-7438-4644-821d-fd45df28a551", "8ef4c5b6-14e4-460d-a25c-bb3d171f49bd", "8c559b64-76c6-4153-9d7e-6d099c92eee7" ], "title": "Profiling computation and communication models" }, { "cite_extract_rate": 0, "cites": [], "content": "Various parameters play a critical role to model the computational tasks of different segments of the DNN network including latency, generality, scalability and context awareness. 
In this subsection, we describe the computation models of two popular splitting strategies adopted in the literature, which are the per-layer and per-segment splitting. These models are presented after introducing some definitions.", "id": "21a6c43d-7438-4644-821d-fd45df28a551", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "6392d18e-faaf-420e-81d4-75b4fe5f17bf", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Profiling computation and communication models" ], [ "subsubsection", "Computation models" ] ], "subsections": [ "1fc2e5e9-36e2-4079-a976-bf35b5ed230d", "7d397ed8-ad6f-4062-8b87-70911b16a406", "d238ccd5-4472-4a57-a890-fda0e3ecd93a" ], "title": "Computation models" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 3455, 3357, 3413, 514, 7198 ], "content": "\\mbox{}\\\\\n\\textbf{\\indent Binary offloading}: Relatively simple or highly complex tasks that cannot be divided into sub-tasks and have to be computed as a whole, either locally at the source devices or sent to the remote servers because of resource constraints, are called binary offloading tasks. These tasks can be denoted by the three-tuple notation $T(K,\\tau, c)$. This commonly used notation illustrates the size of the data to be classified presented by $K$ and the constraint $\\tau$ (e.g., completion deadline, the maximum energy, or the required accuracy). 
The computational load required to execute the input data of the DNN task is modeled as a variable $c$, typically defined as the number of required multiplications .
Although binary offloading has been widely studied in the literature, we note that it is out of the scope of this survey, which covers the pervasiveness and distribution of AI tasks.
\newline
\textbf{ \indent Partial offloading}: In practice, DNN classification is composed of multiple subtasks (e.g., layers execution, multiplication tasks, and feature maps creation), which makes it possible to implement fine-grained (partial) computations. More specifically, the AI task can be split into two or more segments, where the first one can be computed at the source device and the others are offloaded to pervasive participants (either remote servers or neighboring devices). 
\begin{figure}[h!]
\centering
	\includegraphics[scale=0.6]{Figures/parallelization_2.pdf}
	\caption{Inference parallelization: data and model parallelization.}
	\label{parallelization}
\end{figure}
\textbf{Data parallelization:} The most manageable form of partial offloading is data parallelization, where duplicated offloaded segments are independent and can be arbitrarily divided into different groups and executed by different participants of the pervasive computing system, e.g., segments from different classification requests (as shown in Fig. \ref{parallelization} (a)). We highlight that the inputs to parallel segments are independent and can be different or alike.
\textbf{Model parallelization:} A more sophisticated partial offloading pattern is model parallelization, where the execution of one task is split across multiple pervasive devices. Accordingly, the input data is also split and fed to different parallel segments, and their outputs are merged again. In this offloading pattern, the dependency between different tasks cannot be ignored, as it affects the execution of the inference. 
Particularly, the computation order of different tasks (e.g., layers) cannot be determined arbitrarily, because the outputs of some segments serve as the inputs of others (as shown in Fig. \ref{parallelization} (b)). In this context, the inter-dependency between different computational parts of the DNN model needs to be defined. It is worth mentioning that the literature contains several slightly different definitions of data and model parallelism; in our paper, we opted for the definitions presented in .
\textbf{Typical dependencies:} Different DNN networks can be abstracted as task-call graphs. These graphs are generally represented by Directed Acyclic Graphs (DAGs), which have a finite directed structure with no cycles. Each DNN graph is defined as $G(V,E)$, where the set of vertices $V$ represents the different segments of the network, while the set of edges $E$ denotes their relations and dependencies. Typically, three types of dependencies contribute to determining the partition strategies: the sequential dependency, which occurs in conventional CNN networks with sequential layers and without any residual block (e.g., VGG ); the parallel dependency, which depicts the relation between different tasks in the same layer (e.g., different feature maps transformations); and the general dependency, existing in general DNN models (e.g., randomly wired CNNs ). The different dependencies are depicted in Fig. \ref{partitioning}. The required computation workload and memory are specified for each vertex in $V$, and the amounts of input and output data are defined on the edges.
\begin{figure}[!h]
\centering
	\includegraphics[scale=0.6]{Figures/Partitionning_2.pdf}
	\caption{Typical topologies of DNNs and partitioning strategies.}
	\label{partitioning}
\end{figure}
Based on the presented dependencies, two partition strategies can be introduced, namely per-layer and per-segment partitioning (see Fig. \ref{partitioning}). 
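As a side illustration, such a task-call graph can be encoded and scheduled with Python's standard \texttt{graphlib}; the toy graph below (hypothetical task names, not taken from a specific model) mixes sequential and parallel dependencies:

```python
from graphlib import TopologicalSorter

# Task-call graph of a toy DNN with a parallel branch (general dependency):
# conv1 feeds two parallel feature-map tasks that are merged before the
# output layer. Each task maps to the set of tasks it depends on.
dag = {
    "conv1": set(),
    "branch_a": {"conv1"},
    "branch_b": {"conv1"},
    "merge": {"branch_a", "branch_b"},
    "fc_out": {"merge"},
}

# A topological order gives a valid execution schedule of the segments.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

Any scheduler distributing these segments across participants must respect this ordering, while independent tasks (here `branch_a` and `branch_b`) can run in parallel on different devices.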
Per-layer partitioning divides the model into sets of layers and allocates each set to a pervasive participant (e.g., an IoT device or a remote server). On the other hand, per-segment partitioning splits the DNN model into smaller tasks, such as feature maps transformations, multiplication tasks, and even per-neuron segments.
\textbf{Computation latency:}
The primary and most common engine of pervasive devices to perform local computation is the CPU. The performance of the CPU is assessed by its cycle frequency (clock speed) $f$ or its multiplication speed $e$ . In the literature, authors adopt the multiplication speed to characterize the performance of the devices executing the deep inference. In practice, $e$ is bounded by a maximum value $e_{max}$ reflecting the limitation of the device's computation capacity. Based on the model introduced for binary offloading, the computation latency of the inference task $T(K,\tau,c)$ is calculated as follows :
\begin{equation}
 \begin{aligned}
 t^c=\frac{c}{e}.
 \end{aligned}
 \label{eq:1}
\end{equation}
Importantly, a higher computational capacity $e_{max}$ is desirable to minimize the computation latency, at the cost of energy consumption. As end-devices are energy constrained, the energy consumption of the local computation is considered a key measure for evaluating the inference efficiency. More specifically, a high amount of energy consumed by AI applications is not desirable for end-devices due to the incurred cost. Similarly, significant energy consumption at edge nodes (e.g., access points or MEC servers)
increases the cost envisaged by the service providers.\n\\begin{table*}[]\n\\footnotesize\n\\tabcolsep=0.09cm\n\\caption{Characteristics of different splitting strategies:\\\\\n{\\footnotesize A: After, B: Before, $N_{Fc}$: number of fully connected layers, $n$: number of input neurons (Fc), $m$: number of output neurons (Fc), $H_1, W_1, D_1$: dimensions of the input data (Conv), $H_2, W_2, D_2$: dimensions of the output data (Conv), $H_f, W_f, D_1$: dimensions of the filter (Conv), $k/D_2$: number of filters, $d_x,d_y$: dimensions of the spatial splitting (Conv), $N$: Number of participants, $k'_i$: Number of segments per participant.}}\n\\label{tab:splitting}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n\\hline\n\\begin{tabular}[c]{@{}c@{}}\\textbf{Partitioning}\\\\ \\textbf{strategy}\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}\\textbf{$N^{o}$ of smallest} \\\\\\textbf{segments}\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}\\textbf{Activation}\\\\ \\textbf{task}\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}\\textbf{Inputs}\\\\ \\textbf{per segment}\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}\\textbf{Filters weights}\\\\ \\textbf{per device}\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}\\textbf{Outputs}\\\\ \\textbf{per segment}\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}\\textbf{Computation}\\\\ \\textbf{per segment}\\end{tabular} &\\begin{tabular}[c]{@{}c@{}}\\textbf{Transmitted data}\\\\ \\textbf{per layer}\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}\\textbf{Merging}\\\\ \\textbf{strategy}\\end{tabular} \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Per-layer:\\\\ Fully-connected (Fc)\\end{tabular} & $N_{Fc}$ & A & $n$ & \\xmark & $m$& $n \\times m$ &$n+m$ & Seq\\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Per-segment: Output\\\\ splitting for Fc layers\\end{tabular} & $\\sum\\limits^{N_{Fc}}_{i=1} m_i$ & B/A & $n$ & \\xmark& 1 & $n$ & $n \\times N +m$& Concat \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Per-segment: Input \\\\ splitting for Fc layers\\end{tabular} & 
$\\sum\\limits_{i=1}^{N_{Fc}} n_i$& A & 1 & \\xmark & $m$ & $m$ & $N \\times m +n$& Sum \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Per-layer:\\\\ Convolution (Conv)\\end{tabular} & $N_{Conv}$ & A & $H_1 \\times W_1 \\times D_1$ & \\begin{tabular}[c]{@{}c@{}}$k \\times D_1 \\times$ \\\\ $(H_f \\times W_f)$\\end{tabular} & $H_2 \\times W_2 \\times k$ & \\begin{tabular}[c]{@{}c@{}} $cp= D_1 \\times $\\\\ $(W_f \\times H_f) \\times$ \\\\ $ k \\times (W_2 \\times H_2)$ \\end{tabular} & \\begin{tabular}[c]{@{}c@{}} $H_1 \\times W_1 \\times D_1 +$\\\\ $H_2 \\times W_2 \\times k$\\end{tabular} & Seq \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Per-segment: channel\\\\ splitting for Conv\\end{tabular} & \\begin{tabular}[c]{@{}c@{}} $ \\sum\\limits_{i=1}^{N_{Conv}}k_i$ \\end{tabular} & B/A & $H_1 \\times W_1 \\times D_1$ & \\begin{tabular}[c]{@{}c@{}}$k'_i \\times D_1 \\times$ \\\\ $(H_f \\times W_f)$\\end{tabular} & $H_2 \\times W_2$ & $\\frac{cp}{k}$ &\\begin{tabular}[c]{@{}c@{}} $(N \\times H_1 \\times W_1 \\times D_1)$ \\\\$+ (k \\times H_2 \\times W_2)$\\end{tabular} &Concat \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Per-segment: spatial\\\\ splitting for Conv\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}$\\sum\\limits_{i=1}^{ N_{Conv}}\\frac{H_1^i\\times W_1^i}{d^i_x \\times d^i_y}$ \\end{tabular}& B/A & \\begin{tabular}[c]{@{}c@{}}$\\frac{H_1 \\times W_1 \\times D_1}{d_x \\times d_y} +$ \\\\ $padding$\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}$k \\times D_1 \\times$ \\\\ $(H_f \\times W_f)$\\end{tabular} & $\\frac{H_2 \\times W_2 \\times k}{d_x \\times d_y}$ & $cp /\\frac{H_1 \\times W_1}{d_x\\times d_y}$&\\begin{tabular}[c]{@{}c@{}} $H_1 \\times W_1 \\times D_1 +$\\\\ $H_2 \\times W_2 \\times k+$ \\\\ $N\\times padding$\\end{tabular}& Concat \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Per-segment: filter \\\\ splitting for Conv\\end{tabular} & \\begin{tabular}[c]{@{}c@{}} $\\sum\\limits_{i=1}^{N_{Conv}}D^i_1\\times k_i$\\end{tabular} & A & $H_1 \\times W_1$ & 
\\begin{tabular}[c]{@{}c@{}}$k'_i \\times$ \\\\ $(H_f \\times W_f)$\\end{tabular} & $H_2 \\times W_2$ &$ \\frac{cp}{D_1 \\times k}$ & \\begin{tabular}[c]{@{}c@{}} $(D_1 \\times H_1 \\times W_1)+$ \\\\$ (N \\times H_2 \\times W_2 \\times k)$\\end{tabular}& \\begin{tabular}[c]{@{}c@{}}Sum+ \\\\concat \\end{tabular}\\\\ \\hline\n\\end{tabular}\n\\end{table*}\n\\textbf{Computation energy:}\nIf the inference is executed at the data source, the consumed energy is mainly associated to the task computation. In contrast, if the task is delegated to remote servers or to neighboring devices, the power consumption consists of the required energy to transfer the data between participants, the amount of energy consumed for the computation of different segments, and the energy required to await and receive the classification results. Suppose that the inference task/sub-task $T_i$ takes a time $t^c_i$ to be computed locally in the device participating in the pervasive inference and let $P_i$ denote the processing power to execute the task per second. The energy consumed to accomplish an inference task $T_i$ locally at the computing device is equal to :\n\\begin{equation}\\label{eq:2}\n \\begin{aligned}\n e^{local}_i= t^c_i \\times P_i.\n \\end{aligned}\n\\end{equation}\nNext, we profile the DNN partitioning strategies presented in the literature, in terms of computation and memory requirements first and then in terms of communicated data to offload the output of segments. The key idea of partitioning a DNN network is to evenly or unequally distributing the computational load and the data weights across pervasive devices intending to participate in the inference process, while minimizing the classification latency. A partitioning can be achieved by simply segmenting the model per-layer or set of layers (see Fig. \\ref{parallelization} (a)) or by splitting the layers' tasks (see Fig. \\ref{parallelization} (b)). 
Then, each part is mapped to a participant.", "id": "1fc2e5e9-36e2-4079-a976-bf35b5ed230d", "level": "paragraph", "origin_cites_number": 7, "parent_id": "21a6c43d-7438-4644-821d-fd45df28a551", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Profiling computation and communication models" ], [ "subsubsection", "Computation models" ], [ "paragraph", "Overview and definitions" ] ], "subsections": [], "title": "Overview and definitions" }, { "cite_extract_rate": 0, "cites": [], "content": "As previously mentioned, the computational load of each layer is measured as the number of multiplications required to accomplish the layer's goal .\n\\newline\n\\textbf{\\indent Fully-connected layers:} The computation requirement of a fully-connected layer can be calculated as follows:\n\\begin{equation}\\label{eq:4}\n \\begin{aligned}\n c^{Fc}=n \\times m,\n \\end{aligned}\n\\end{equation}\nwhere $n $ represents the number of the input neurons and $m$ is the number of the output neurons. \n\\newline\n\\textbf{\\indent Convolutional layers:} The computation load of a convolution layer can be formulated as follows :\n\\begin{equation}\\label{eq:5}\n \\begin{aligned}\n c^{conv}=D_1 \\times (W_f \\times H_f) \\times D_2 \\times (W_2 \\times H_2).\n \\end{aligned}\n\\end{equation}\nWe remind that $D_1$ is the number of input channels of the convolutional layer which is equal to the number of feature maps generated by the previous layer, $(W_f \\times H_f)$ denotes the spatial size of the layer’s filter, $D_2$ represents the number of filters and $(W_2 \\times H_2)$ represents the spatial size of the output feature map (see Fig. \\ref{Conv}).\nThe computational load introduced by pooling and ReLU layers can be commonly neglected, as these layers do not require any multiplication task . 
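As a quick numerical sketch of eqs. (\ref{eq:4}) and (\ref{eq:5}), the multiplication counts can be computed directly from the layer shapes; the shapes below are illustrative (an AlexNet-like first convolutional layer and a small classifier layer), not tied to a specific deployment:

```python
# Multiplication counts per layer, following eqs. (4) and (5).
def fc_load(n, m):
    # Fully-connected layer: input neurons x output neurons.
    return n * m

def conv_load(D1, Hf, Wf, D2, H2, W2):
    # Convolutional layer: input channels x filter area x filters x output area.
    return D1 * (Wf * Hf) * D2 * (W2 * H2)

print(conv_load(D1=3, Hf=11, Wf=11, D2=96, H2=55, W2=55))  # ~105M multiplications
print(fc_load(n=4096, m=1000))                              # ~4M multiplications
```

Such per-layer counts are precisely the $c$ values fed into the latency model $t^c = c/e$ when deciding which layers to place on which participant.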
We highlight that the per-layer splitting is motivated by the sequential dependency between layers. This dependency does not permit the model parallelism nor the latency minimization. Instead, it allows the resource-constrained devices to participate in the AI inference.", "id": "7d397ed8-ad6f-4062-8b87-70911b16a406", "level": "paragraph", "origin_cites_number": 1, "parent_id": "21a6c43d-7438-4644-821d-fd45df28a551", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Profiling computation and communication models" ], [ "subsubsection", "Computation models" ], [ "paragraph", "Per-layer splitting" ] ], "subsections": [], "title": "Per-layer splitting" }, { "cite_extract_rate": 0, "cites": [], "content": "\\mbox{}\\\\\n\\textbf{\\indent Fully-connected layers:} We start by profiling the fully-connected layer partitioning. More specifically, the computations of different neurons $y_i$ in a fully-connected layer are independent. Hence, their executions can be distributed, and model parallelism can be applied to minimize the inference latency. Two methods are introduced in the literature (e.g., ), which are the output and input partitioning as shown in Fig. \\ref{FC}. \n\\begin{figure}[h]\n\\centering\n\t\\includegraphics[scale=0.27]{Figures/FC_a.pdf}\n\t\\caption{Partitioning of fully connected layers.}\n\t\\label{FC}\n\\end{figure}\n\\begin{itemize}\n \\item \\textit{Output splitting}: the computation of each neuron $y_i$ is performed in a single participant that receives all input data $\\{x_1,x_2,...,x_n\\}$, as highlighted in Fig. \\ref{FC} (a). Later, when the computation of all neurons is done, results are merged by concatenating the output of all devices in the correct order. The activation function can be applied on each device or after the merging process. 
\n \\item \\textit{Input splitting}: each participant computes a part of all output neurons. Fig. \\ref{FC} (b) illustrates an example, where each device executes $\\frac{1}{n}$ of the required multiplications. By adopting this partitioning method, only a part of the input, $x_i$, is fed to each participant. Subsequently, when all participants accomplish their tasks, summations are performed to build the output neurons. However, in contrast to the output-splitting method, the activation function can only be applied after the merging process. \n\\end{itemize}\n\\begin{figure*}[!h]\n\\centering\n\t\\includegraphics[scale=0.38]{Figures/Splitting2.pdf}\n\t\\caption{Partitioning of convolutional layer: (a) is an output splitting, and (b) and (c) are input splittings.}\n\t\\label{splitting}\n\\end{figure*}\n\\textbf{\\indent Convolutional layers:} \nNext, we illustrate different partitioning strategies of the convolutional layer. As described in the previous section \\ref{CNN}, each filter is responsible to create one of the feature maps of the output data (Fig. \\ref{Conv}). We remind that the dimensions of the input data are $H_1 \\times W_1 \\times D_1$, the dimensions of the $k$ filters are $H_f \\times W_f \\times D_f$, and the dimensions of the output feature maps are defined by $H_2 \\times W_2 \\times D_2$. We note that by definition $D_1$ is equal to $D_f$ and $k$ is equal to $D_2$. Furthermore, each filter contains $D_1 \\times (H_f \\times W_f)$ weights and performs $D_1 \\times (H_f \\times W_f)$ multiplications per output element. Similarly to the fully connected layers, two partitioning strategies characterize the convolutional layer, namely the input and output splitting. In this context, the output splitting includes the channel partitioning, meanwhile, the input splitting consists of the spatial and filter partitioning strategies (see Fig. \\ref{splitting}). 
These splitting strategies are introduced and adopted by multiple recent works, including , for which we will thoroughly review the resource management techniques in the following section.
\begin{itemize}
 \item \textit{Channel splitting}: each participant computes one or multiple non-overlapping output feature maps, which serve as input channels for the next layer. This implies that each device $i$ possesses only $1 \leq k'_i \leq k$ filters responsible for generating $k'_i$ feature maps, where $\sum_i k_i'=k$. In addition to the $k'_i$ filters, the entire input data is fed to each device to compute different outputs. In this way, filters' weights are distributed across participants, ($k'_i \times D_1 \times H_f \times W_f$) each, and the total number of multiplications is equal to $D_1 \times (H_f \times W_f) \times k'_i \times (W_2 \times H_2)$ per device. The channel partitioning strategy allows model parallelization, and consequently inference acceleration. At the end, when all devices finish their tasks, the different feature maps are concatenated depth-wise, with a complexity equal to $O(k)$. We emphasize that the activation function can be applied before merging at each device or once at the concatenation device. Fig. \ref{splitting} (a) shows an example of channel partitioning.
 \item \textit{Spatial splitting}: this fine-grained splitting divides the input spatially, along the x- or y-axis, in order to jointly assemble the output data, as shown in Fig. \ref{splitting} (b). Let $d_x$ and $d_y$ define the split dimensions on the x-axis and y-axis, respectively. Therefore, the input data are partitioned into segments of size ($d_x \times d_y$), and each group of segments can be transmitted to a device. Furthermore, each part allocated to a participant needs to be extended with overlapping elements from the neighboring parts, so that the convolution can be performed on the borders. 
Compared to channel splitting, in which all the input data is copied to all participants together with parts of the filters, spatial splitting distributes only parts of the data, with all the filters, to each device. This means that, in addition to its segment of the input data, an amount of ($k \times D_1 \times W_f \times H_f$) weights must be transmitted to and stored at each device. Note that storing the filters is considered a one-time memory cost, as they will be used for all subsequent inferences. Also, the total number of multiplications is reduced per device, and each one executes only $(\frac{d_x \times d_y}{H_1 \times W_1})$ of the computational load per segment. When all computations are done, the output data is concatenated spatially with a complexity of $O(\frac{H_2\times W_2}{d_x \times d_y})$, and the activation function can be applied before or after the merging process. Note that, for simplicity, we presented for spatial splitting the case where filters do not apply any size reduction.
 \item \textit{Filter splitting}: in this splitting strategy, both filters and input data are split channel-wise, with $k'_i$ channels assigned to each participant $i$. Figure \ref{splitting} (c) illustrates the convolution of the input data by one filter in order to produce one feature map. In this example, the input channels and one of the filters are divided across 4 devices, which implies that each device stores only its assigned channels of the input data and the filter, so the memory footprint is also divided. The computational load is also reduced, such that each participant executes $k'_i \times (H_f \times W_f) \times (W_2 \times H_2)$ multiplications. In the end, all partial outputs are summed to create one feature map, and the activation function can only be applied after the merging process. A concatenation task is then performed once all feature maps are created. 
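The merging rules above can be checked numerically. The following sketch (a naive stride-1 valid convolution with illustrative shapes) verifies that channel splitting with depth-wise concatenation and filter splitting with element-wise summation both reproduce the full convolution:

```python
import numpy as np

def conv2d(x, w):
    """Naive valid convolution, stride 1. x: (H1,W1,D1), w: (k,Hf,Wf,D1)."""
    H1, W1, D1 = x.shape
    k, Hf, Wf, _ = w.shape
    H2, W2 = H1 - Hf + 1, W1 - Wf + 1
    out = np.zeros((H2, W2, k))
    for f in range(k):
        for i in range(H2):
            for j in range(W2):
                out[i, j, f] = np.sum(x[i:i+Hf, j:j+Wf, :] * w[f])
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 4))     # H1 x W1 x D1
w = rng.standard_normal((3, 3, 3, 4))  # k filters of size Hf x Wf x D1

full = conv2d(x, w)

# Filter splitting: input and filter channels divided over 2 devices;
# the partial feature maps are summed during merging.
split = conv2d(x[:, :, :2], w[:, :, :, :2]) + conv2d(x[:, :, 2:], w[:, :, :, 2:])
assert np.allclose(full, split)

# Channel splitting: each device holds a disjoint subset of filters and the
# whole input; outputs are concatenated depth-wise.
merged = np.concatenate([conv2d(x, w[:2]), conv2d(x, w[2:])], axis=2)
assert np.allclose(full, merged)
```

The equivalences hold because convolution is linear in the input channels (filter splitting) and the output feature maps are computed independently per filter (channel splitting).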
\n\\end{itemize}\nTable \\ref{tab:splitting} summarizes the computation and memory characteristics of different splitting strategies. In this table, we present the number of smallest segments per model, the input, output and computation requirements for each small segment, the weights of filters assigned to each device owing $k'_i$ segments, and the transmitted data per layer when having $N$ participants.", "id": "d238ccd5-4472-4a57-a890-fda0e3ecd93a", "level": "paragraph", "origin_cites_number": 4, "parent_id": "21a6c43d-7438-4644-821d-fd45df28a551", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Profiling computation and communication models" ], [ "subsubsection", "Computation models" ], [ "paragraph", "Per-segment splitting" ] ], "subsections": [], "title": "Per-segment splitting" }, { "cite_extract_rate": 0, "cites": [], "content": "The latency is of paramount importance, in AI applications. 
Hence, minimizing the communication delay and the data transmission by designing an efficient DNN splitting is the main focus of pervasive inference.", "id": "8ef4c5b6-14e4-460d-a25c-bb3d171f49bd", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "6392d18e-faaf-420e-81d4-75b4fe5f17bf", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Profiling computation and communication models" ], [ "subsubsection", "Communication models" ] ], "subsections": [ "f8893cd8-c14c-44f9-9a91-88831f18771f", "5c345260-93fc-4a3d-8c36-14e1d707d1fd", "f3ecc089-5afc-46ea-992f-8f6e09b49bb8" ], "title": "Communication models" }, { "cite_extract_rate": 0.5, "cites": [ 3357 ], "content": "\\mbox{}\\\\\n\\textbf{ \\indent Communication latency}: In the literature, the communication channels between different pervasive devices are abstracted as bit-pipes with either constant rates or random rates with a defined distribution. However, this simplified bit-pipe model is insufficient to illustrate the fundamental properties of wireless propagation. More specifically, wireless channels are characterized by different key aspects, including: (1) the multi-path fading caused by the reflections from objects existing in the environment (e.g., walls, trees, and buildings); (2) the interference with other signals occupying the same spectrum due to the broadcast nature of the wireless transmissions, which reduces their Signal-to-Interference-plus-Noise-Ratios (SINRs) and increases the probability of errors; (3) bandwidth shortage, motivating the research community to exploit new spectrum resources, design new spectrum sharing and aggregation, and propose new solutions (e.g., in-device caching and data compression). 
Based on these characteristics, the communication/upload latency between two devices, either resource-constrained devices or high-performant servers, can be expressed as follows:\n\\begin{equation}\\label{eq:6}\n \\begin{aligned}\n t^u=\\frac{K}{\\rho_{i,j}},\n \\end{aligned}\n\\end{equation}\nwhere $K$ is the size of the transmitted data and $\\rho_{i,j}$ is the achievable data rate between two participants $i$ and $j$.\n\\begin{comment}\ndefined as follows:\n\\begin{equation}\\label{eq:7}\n \\begin{aligned}\n \\rho_{i,j}=B_i \\times log_2(1+\\Gamma_{i,j}),\n \\end{aligned}\n\\end{equation}\n$B_i$ denotes the bandwidth of the device $i$. Furthermore, the average SINR of the link between $i$ and $j$, namely $\\Gamma_{i,j}$, is given by:\n\\begin{equation}\\label{eq:8}\n \\begin{aligned}\n \\Gamma_{i,j}=\\frac{P_{i,j}h_{i,j}}{\\sum_{q, q\\neq j} I_{q,j} + \\sigma^2},\n \\end{aligned}\n\\end{equation}\nwhere $P_{i,j}$ and $h_{i,j}$ are the transmit power and the channel gain between $i$ and $j$, $\\sigma^2$ is the Gaussian noise, and $\\sum_{q, q\\neq j} I_{q,j}$ is the total interference power at the receiver $j$ resulting from neighboring devices transmitting over the same channel. \n\\end{comment}\nThe total transmission latency $t^T$ of the entire inference is related to the type of dependency between different layers of the model. This latency is defined in eq. (\\ref{eq:9}), if the dependency is sequential (e.g., layers) and in eq. (\\ref{eq:10}) if the dependency is parallel (e.g., feature maps). 
In case the dependency is general (e.g., randomly wired networks), we formulate the total latency as the sum of the sequential communications plus the maximum of the parallel transmissions.\n\begin{equation}\label{eq:9}\n \begin{aligned}\nt^T=\sum_{s=1}^{S} t^u_s.\n \end{aligned}\n\end{equation}\n\begin{equation}\label{eq:10}\n \begin{aligned}\nt^T= \max(t^u_s, \quad \forall s \in \{1,\dots,S\}).\n \end{aligned}\n\end{equation}\n\newline\n\textbf{ \indent Communication energy}: The energy consumed to offload the inference sub-tasks to other participants comprises the energy spent on outward data transmissions and on receiving the classification results generated by the last segment of the task $T$. This energy is formulated as follows:\n\begin{equation}\label{eq:3}\n \begin{aligned}\n e^{ofd}_i= t^u_i \cdot P_i+\sum_s\sum_k\sum_j \frac{K_s}{\rho_{k,j}} \cdot P_k \cdot X_{k,s}X_{j,s+1},\n \end{aligned}\n\end{equation}\nwhere $t^u_i$ is the upload delay to send the original data/task $i$ to the first participant, $K_s$ is the output of segment $s$ (e.g., layers or feature maps), $\rho_{k,j}$ denotes the data rate of the communication, $X_{k,s}$ is a binary variable indicating whether participant $k$ executes segment $s$, and $P_k$ denotes the transmit power of participant $k$. \nUsing only its onboard battery and resources, the source-generating device may not be able to accomplish the inference task within the required delays and the energy constraint. 
In such a case, partitioning the task among neighboring devices or offloading the whole inference to the remote servers are desirable solutions.", "id": "f8893cd8-c14c-44f9-9a91-88831f18771f", "level": "paragraph", "origin_cites_number": 2, "parent_id": "8ef4c5b6-14e4-460d-a25c-bb3d171f49bd", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Profiling computation and communication models" ], [ "subsubsection", "Communication models" ], [ "paragraph", "Overview" ] ], "subsections": [], "title": "Overview" }, { "cite_extract_rate": 0, "cites": [], "content": "Per-layer partitioning is characterized by a simple dependency between different segments and a higher data transmission per device. Indeed, the computation of one Fc layer per participant costs the system a total communication overhead equal to $(n+m)$. Meanwhile, the allocation of a convolutional layer requires a transmission load equal to $(H_1 \times W_1 \times D_1) + (H_2 \times W_2 \times k)$.", "id": "5c345260-93fc-4a3d-8c36-14e1d707d1fd", "level": "paragraph", "origin_cites_number": 0, "parent_id": "8ef4c5b6-14e4-460d-a25c-bb3d171f49bd", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Profiling computation and communication models" ], [ "subsubsection", "Communication models" ], [ "paragraph", "Per-layer splitting" ] ], "subsections": [], "title": "Per-layer splitting" }, { "cite_extract_rate": 0, "cites": [], "content": "The per-segment partitioning requires a higher total transmission load with a smaller computation and memory footprint per device. In other words, this type of partitioning trades communication for memory. 
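This communication-memory trade can be made concrete with a small helper that compares the extra data each per-segment strategy transmits, following the overhead expressions summarized in Table~\ref{tab:splitting}. The sketch below is purely illustrative; all layer dimensions are hypothetical examples, not taken from a cited benchmark.

```python
# Extra data transmitted by each per-segment strategy, relative to
# per-layer allocation, over N devices (sizes are illustrative).
def fc_overhead(n, m, N):
    # n inputs, m outputs of a fully-connected layer
    return {"output_split": n * (N - 1),   # full input broadcast to N devices
            "input_split": m * (N - 1)}    # partial output vectors merged

def conv_overhead(H1, W1, D1, H2, W2, k, N, padding):
    return {"channel": (N - 1) * H1 * W1 * D1,  # input copied to every device
            "spatial": N * padding,              # border (padding) exchange
            "filter": (N - 1) * H2 * W2 * k}     # partial outputs merged

def cheapest(costs):
    return min(costs, key=costs.get)

# A 4096 -> 1000 Fc layer over 4 devices: merging 1000-value partial
# outputs beats broadcasting 4096 inputs.
print(cheapest(fc_overhead(4096, 1000, 4)))  # input_split
print(cheapest(conv_overhead(56, 56, 64, 56, 56, 128, 4, 2 * 56 * 64)))  # spatial
```

In practice, such a comparison would be evaluated per layer, since the decisive dimensions change along the network.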
More details are illustrated in Table \\ref{tab:splitting}, where the output and input splitting of the fully-connected layers have a total communication overhead of $n \\times (N-1)$ and $m \\times (N-1)$ respectively compared to the per-layer distribution. Hence, depending on the input and output sizes, namely $n$ and $m$, the optimal partition strategy can be selected. Regarding the convolutional layers, the channel splitting has an overhead of $(N-1) \\times H_1 \\times W_1 \\times D1$ since a copy of the entire input data needs to be broadcasted to all devices, the spatial splitting pays an overhead of the padding equal to $N \\times padding$, and the filter splitting has an overhead of $(N-1) \\times H_2 \\times W_2 \\times k$ incurred in the merging process.\n\\begin{figure*}[!h]\n\\centering\n\t\\includegraphics[scale=0.37]{Figures/distribution2.pdf}\n\t\\caption{Resource management for distributed inference.}\n\t\\label{distribution}\n\\end{figure*}", "id": "f3ecc089-5afc-46ea-992f-8f6e09b49bb8", "level": "paragraph", "origin_cites_number": 0, "parent_id": "8ef4c5b6-14e4-460d-a25c-bb3d171f49bd", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Profiling computation and communication models" ], [ "subsubsection", "Communication models" ], [ "paragraph", "Per-segment splitting" ] ], "subsections": [], "title": "Per-segment splitting" }, { "cite_extract_rate": 0, "cites": [], "content": "The main lessons acquired from the review of the splitting strategies are:\n\\begin{itemize}\n \\item The performance of model parallelism is always better than that of data parallelism in terms of latency minimization, as it allows computing multiple sub-tasks simultaneously. 
Meanwhile, the data parallelism pays the high costs of merging and transmitting the same inputs, either for fault-tolerance purposes or to handle multiple concurrent requests.\n \\item The choice of the parallelism mode, highly depends on the partitioning strategy and the dependency between different segments. For example, in the per-layer splitting with a sequential dependency, the model parallelization cannot be applied to compute different fragments. On the other hand, the general and parallel dependencies pave the way to distribute concurrent segments.\n \\item Data parallelism is highly important for AI applications with a high load of inference requests, such as 24/7 monitoring systems and VR/AR applications. In such scenarios, the classifications and feature learning are required every short interval of time. Generally, source devices do not have sufficient resources to compute this huge load of inferences. In this case, distributing the requests within neighboring devices and parallelizing their computations, contribute to minimizing the queuing time.\n \\item Understanding the characteristics of the pervasive computing system is compulsory for selecting the partition strategy. More specifically, the per-layer distribution is more adequate for systems with a lower number of participants and higher pervasive capacities. For example, VGG19 has 19 layers and accordingly needs a maximum of 19 participants. More importantly, these devices are required to be able to accommodate the computation demand of convolutional layers. Meanwhile, opting for fine-grained partitioning results in small fragments that fit in resource-limited devices, such as sensors. However, a high number of sensors (e.g., $\\sum_{i}^{N_{conv}}D_1^i \\times k_i$ segments for filter splitting.) should be involved to accomplish the inference. 
\n \\item Choosing the most optimal partitioning for the per-segment strategy highly depends on the proprieties of the DNN network, including the channel sizes, the number of filters, the size of feature maps, and the number of neurons. Particularly, for Fc splitting, $m$ and $n$ are the decisive variables for choosing input or output partitioning. For convolutional layers, the size of the channels and filters, and the capacities of participants are the decisive parameters to select the strategy. In terms of memory requirements, the channel splitting requires copying the whole input channels to all devices along with a part of the filters. Meanwhile, the spatial splitting copies all the filters and a part of the data, whereas the filter splitting needs only a part of the channels and filters. In terms of transmission load, the spatial splitting has less output data per segment compared to channel and filter strategies. Finally, the channel splitting has a higher computational load per device. Still, it incurs less dependency between segments.\n\\end{itemize}\n\\vspace{-0.2cm}", "id": "8c559b64-76c6-4153-9d7e-6d099c92eee7", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "6392d18e-faaf-420e-81d4-75b4fe5f17bf", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Profiling computation and communication models" ], [ "subsubsection", "Lessons learned" ] ], "subsections": [], "title": "Lessons learned" }, { "cite_extract_rate": 0, "cites": [], "content": "The joint computational and transmission resource management plays a key role in achieving low inference latency and efficient energy consumption. In this section, we conduct a comprehensive review of the existing literature on resource management for deep inference distribution and segments allocation on pervasive systems. 
We start by discussing the remote collaboration, which consists of the cooperation between the data source and remote servers to achieve the DNN inference. In this part, we determine the key design methodologies and considerations (e.g., partitioning strategies and number of split points) in order to shorten the classification delays. Subsequently, more complex collaboration, namely localized collaboration, is examined, where multiple neighboring devices are coordinated to use the available computational and wireless resources and accomplish the inference tasks with optimized energy, delays, and data sharing.", "id": "ca471488-ff37-437f-9c43-59fb13ca9bae", "level": "subsection", "origin_cites_number": 0, "parent_id": "caa8daf7-97f9-471c-b534-e135a402b2d0", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Resource management for distributed inference" ] ], "subsections": [ "695e849b-fe1c-443f-9b10-15b02a8ec11b", "ac6550c4-66e5-4a7e-ba42-b511a72e29ea", "5c6d2e7f-95b8-4a0a-84eb-fbf75fef871b" ], "title": "Resource management for distributed inference" }, { "cite_extract_rate": 0, "cites": [], "content": "The remote collaboration encompasses two approaches, the binary and the partial offloading defined in the previous section. The binary offloading consists of delegating the DNN task from a single data-generating device to a single powerful remote entity (e.g., edge or cloud server), with an objective to optimize the classification latency, accuracy, energy, and cost (see Fig. \\ref{distribution} (a)). The decision will be whether to offload the entire DNN or not, depending on the hardware capability of the device, the size of the data, the network quality, and the DNN model, among other factors. Reference papers covering binary offloading of deep learning include DeepDecision and MCDNN\n. 
The authors of these papers based their studies on empirical measurements of the trade-offs between the aforementioned parameters. Binary offloading has been thoroughly investigated in the literature for different contexts. However, DNN offloading has a particular characteristic that distinguishes it from other networking tasks, namely the freedom to choose the type, the parameters, and the depth of the neural network according to the available resources.\nAs the scope of this survey is pervasive AI, we focus on partial offloading, which covers the per-layer distribution with one or multiple split points, along with the per-segment distribution.", "id": "695e849b-fe1c-443f-9b10-15b02a8ec11b", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "ca471488-ff37-437f-9c43-59fb13ca9bae", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Resource management for distributed inference" ], [ "subsubsection", "Remote collaboration" ] ], "subsections": [ "f8b27998-c7bb-4ec7-a541-7fcabaa1cb5a", "c3827174-8142-4574-b07e-3af2b4f28a29", "c17ed41c-1986-47f8-a1ad-682f2828329c" ], "title": "Remote collaboration" }, { "cite_extract_rate": 0.4, "cites": [ 3458, 689, 3456, 3457 ], "content": "The partial offloading leverages the unique structure of the deep model, particularly its layers, to allow collaborative inference between the source device and the remote servers. More specifically, in such an offloading approach, some layers are executed on the data-generating device whereas the rest are computed by the cloud or the edge servers, as shown in Fig. \ref{distribution} (b). In this way, latency is potentially reduced owing to the high computing capacity of the powerful remote entities. 
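As a toy illustration of the binary decision discussed above (all numbers are assumptions, not measurements from the cited works), full offloading pays the input-transfer time $K/\rho$ in exchange for the server's faster compute:

```python
# Minimal binary-offloading rule: offload the whole DNN iff remote compute
# plus input transfer beats local compute. All values are illustrative.
def should_offload(local_s, remote_s, input_bits, rate_bps):
    transfer_s = input_bits / rate_bps      # upload latency t^u = K / rho
    return remote_s + transfer_s < local_s

# 1.2 s on-device vs 0.1 s on the server, a 4 Mb input over a 10 Mb/s link:
print(should_offload(1.2, 0.1, 4e6, 10e6))  # True  (0.5 s < 1.2 s)
# Same link, but a fast accelerator on the device (0.3 s locally):
print(should_offload(0.3, 0.1, 4e6, 10e6))  # False (0.5 s > 0.3 s)
```

Real systems replace these constants with predictions that track the DNN model, the device hardware, and the current network state.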
Furthermore, the latency of communicating the intermediate data resulting from the DNN partitioning must be small enough for the split to yield an overall classification-time benefit. The key idea behind per-layer partitioning is that, after the shallow layers, the intermediate data is relatively small compared to the original raw data thanks to the successive filters. This speeds up the transmission over the network, which motivates partitioning at deep layers. \nNeurosurgeon is one of the first works that investigated layer-wise partitioning, where the split point is decided intelligently depending on the network conditions. Particularly, the authors examined deeply the status quo of cloud and on-device inference and confirmed that the wireless network is the bottleneck of the cloud approach, and that the mobile device can outperform the cloud servers only when equipped with a GPU. As a next step, the authors investigated the DNN split performance, in terms of computation and output data size, for multiple state-of-the-art DNNs over multiple types of devices and wireless networks, and concluded that layers have significantly different characteristics. Based on the computation and the latency to transmit the output data of the DNN layers, the optimal partition points that minimize the energy consumption and end-to-end latency are identified. Finally, after collecting these data, Neurosurgeon is trained to predict the power consumption and latency based on the layer type and network configuration, and to dynamically partition the model between the data source and the cloud server.\nHowever, while DNN splitting significantly reduces the inference latency by leveraging the computational resources of the remote server, this strategy is constrained by the characteristics of intermediate layers, which can still generate large volumes of data, as is the case for VGG16, illustrated in Fig. \ref{VGG16}. 
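A brute-force sketch of single-split-point selection in this spirit is given below. Per-layer latencies and output sizes are assumed to be known inputs (Neurosurgeon itself predicts them with trained regressors); the numbers are illustrative.

```python
# Enumerate candidate split points and pick the one with minimum
# end-to-end latency: layers [0, s) run on the device, the rest remotely.
def best_split(device_lat, server_lat, out_bits, rate_bps, input_bits):
    """s = 0 means full offload; s = len(device_lat) means fully local
    (only the final prediction is shipped)."""
    n = len(device_lat)
    best = None
    for s in range(n + 1):
        sent = input_bits if s == 0 else out_bits[s - 1]
        total = sum(device_lat[:s]) + sent / rate_bps + sum(server_lat[s:])
        if best is None or total < best[1]:
            best = (s, total)
    return best

dev = [0.05, 0.20, 0.30]   # per-layer device latency (s), assumed
srv = [0.01, 0.02, 0.03]   # per-layer server latency (s), assumed
out = [2e6, 5e5, 1e4]      # per-layer output size (bits), assumed
split, total = best_split(dev, srv, out, rate_bps=1e7, input_bits=8e6)
print(split)  # 1 (split after the first layer)
```

The search is linear in the number of layers, so re-running it whenever the network rate changes is cheap.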
\n\\begin{figure}[!h]\n\\centering\n\t\\includegraphics[scale=0.5]{Figures/VGG16.pdf}\n\t\\caption{The transmitted data size between different layers of the VGG16 network.}\n\t\\label{VGG16}\n\\end{figure}\n\\newline\nTo tackle the problem of sized intermediate data, the authors of proposed to combine the early-exit strategy, namely BranchyNet , with their splitting approach. The objective is to execute only few layers and exit the model without resorting to the cloud, if the accuracy is satisfactory. In this way, the model inference is accelerated, while sacrificing the accuracy of the classification. We note that BranchyNet is a model trained to tailor the right size of the network with minimum latency and higher accuracy. Accordingly, both models cooperate to select the optimal exit and split points. \nThe authors extended the work by replacing both trained models with a reinforcement learning strategy , namely Boomerang. This RL approach offers a more flexible and adaptive solution to real-time networks and presents less complex and more optimal split and exit points’ selection. The early-exit strategy is also proposed along with the layer-wise partitioning by the ADDA approach , where authors implemented the first layers on the source device and encouraged the exit point before the split point to use only local computing and eliminate the transmission time. Similarly, authors in , formulated the problem of merging the exit point selection and the splitting strategy, while aiming to minimize the transmission energy, instead of focusing on latency. \nIn addition to using the early-exit to accelerate the inference, other efforts adopted compression combined with the partitioning to reduce the shared data between collaborating entities. 
Authors in introduced a distribution approach with feature space encoding, where the edge device computes up to an intermediate layer, compresses the output features (loss-less or lossy), and offloads the compressed data to the host device to compute the rest of the inference, which enhances the bandwidth utilization. To maintain high accuracy, the authors proposed to re-train the DNN with the encoded features on the host side. The works in also suggested compressing the intermediate data through quantization, aiming at reducing the transmission latency between edge and cloud entities. The authors examined the trade-off between the output data quantization and the model accuracy for different partitioning scenarios. Then, they designed accordingly a model to predict the edge and cloud latencies and the communication overhead. Finally, they formulated an optimization problem to find the optimal split layer constrained by the accuracy requirements. To make the solution adaptive to runtime, an RL-based channel-wise feature compression, namely JALAD, is introduced by the authors in . Pruning is another compression technique proposed in to be joined with the partitioning strategy. The authors introduced a 2-step pruning framework, where the first step mainly focuses on the reduction of the computation workload and the second one handles the removal of non-important features transmitted between collaborative entities, which results in less computational and offloading latency. 
This can be done by pruning the input channels, as their height, length, and number impact directly the size of the output data and the computation requirements, which we illustrated in Table \\ref{tab:splitting}.", "id": "f8b27998-c7bb-4ec7-a541-7fcabaa1cb5a", "level": "paragraph", "origin_cites_number": 10, "parent_id": "695e849b-fe1c-443f-9b10-15b02a8ec11b", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Resource management for distributed inference" ], [ "subsubsection", "Remote collaboration" ], [ "paragraph", "Per-layer distribution - one split point" ] ], "subsections": [], "title": "Per-layer distribution - one split point" }, { "cite_extract_rate": 0.413793103448275, "cites": [ 3455, 3456, 3457, 3460, 2672, 3458, 97, 689, 3462, 3459, 3461, 8660 ], "content": "Solely offloading the deep learning computation to the cloud can violate the latency constraints of the AI application requiring real-time and prompt intervention. Meanwhile, using only the edge nodes or IoT devices can deprive the system from powerful computing resources and potentially increase the processing time. Hence, a judicious selection of multiple cuts and distribution between different resources, i.e., IoT device – edge server – cloud, contribute to establishing a trade-off between minimizing the transmission time and exploiting the powerful servers. Additionally, the layers of the DNN model are not always stacked in a sequential dependency. More specifically, layers can be arranged in a general dependency as shown in Fig. \\ref{partitioning} (c), where some of them can be executed in parallel or do not depend on the output of the previous ones. 
In this case, adopting an optimized \\textit{back and forth} distribution strategy, where the end-device and the remote servers parallelize the computation of the layers and merge the output, can be beneficial for the inference latency. Authors in designed a Dynamic Adaptive DNN Surgery (DADS) scheme that optimally distributes complex structured deep models, presented by DAG graphs, under variable network conditions. In case the load of requests is light, the min-cut problem is applied to minimize the overall delay of processing one frame of the DNN structure. When the load condition is heavy, scheduling the computation of multiple requests (data parallelization) is envisaged using the 3-approximation ratio algorithm that maximizes the parallelization of the frames from different requests. Complex DNN structures were also the focus in , where the authors used the shortest path problem to formulate the allocation of different frames of the DNN \\textit{back and forth} between the cloud and the end-device. The path, in this case, is defined as latency or energy of the end-to-end inference.\nOn the other hand, \\textit{hierarchical architecture} for sequential structures is very popular as a one way distribution solution to establish a trade-off between transmission latency and computation delay (see Fig. \\ref{distribution} (c)). The papers in proposed to divide the trained DNN over a hierarchical distribution, comprising “IoT-edge-cloud” resources. Furthermore, they adopted the state-of-the-art work BranchyNet to early exit the inference if the system has a good accuracy. In this way, fast, private, and localized inference using only shallow layers becomes possible at the end and edge devices, and an offloading to the cloud is only performed when additional processing is required. 
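The control flow of such hierarchical early-exit inference can be sketched as follows; the two "models" are stand-in functions, not real DNNs, and the confidence threshold is an assumed parameter:

```python
# Hierarchical early-exit in the spirit of DDNN/BranchyNet: a shallow
# device-side exit answers confident samples locally; the rest escalate
# to the cloud.
def hierarchical_infer(x, device_exit, cloud_model, threshold=0.8):
    label, confidence = device_exit(x)
    if confidence >= threshold:
        return label, "device"       # local exit: fast, private, no upload
    return cloud_model(x), "cloud"   # escalate the hard sample

device_exit = lambda x: ("cat", 0.95) if x == "easy" else ("unsure", 0.40)
cloud_model = lambda x: "dog"

print(hierarchical_infer("easy", device_exit, cloud_model))  # ('cat', 'device')
print(hierarchical_infer("hard", device_exit, cloud_model))  # ('dog', 'cloud')
```

Tuning the threshold controls the fraction of traffic that never leaves the device, which is the bandwidth saving these systems report.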
Hierarchical distribution can also be combined with compression strategies to reduce the size of the transmitted data and, accordingly, minimize the communication delay and the overall inference time, e.g., using encoding techniques as done in . Authors in also opted for hierarchical offloading, while focusing primarily on the fault-tolerance of the shared data. Particularly, authors in considered two fault-tolerance methods, namely reassigning and monitoring, where the first one consists of assigning all layer tasks at least once, after which the unfinished tasks are reassigned to all participants regardless of their current state. This method generates a considerable communication and latency overhead related to allocating redundant tasks, particularly to devices with limited capacities. Hence, a second strategy is designed to monitor the availability of devices before the re-assignment. Finally, the work in proposed to add skip blocks to the DNN model and include at least one block in each partition, to enhance the robustness of the system in case the connection to the previous layer fails.\n\begin{comment}\nDraft performance table superseded by the version of Table \ref{tab:performance} below.
$\\times$ \\\\ \\hline\nJALAD & 1.1 $\\times$ $\\rightarrow$ 11.7 $\\times$ & \\xmark & \\xmark & 1.4 $\\times$ $\\rightarrow$ 25.1 $\\times$ & \\xmark \\\\ \\hline\nJointDNN & 18 $\\times$ $\\rightarrow$ 32 $\\times$ & \\xmark & 18 $\\times$ $\\rightarrow$ 32 $\\times$ & \\xmark & \\xmark \\\\ \\hline\n & 4.81 $\\times$ & 25.6 $\\times$ & \\xmark & 6.01 $\\times$ & \\xmark \\\\ \\hline\n & \\xmark & \\xmark & 0.016 J $\\rightarrow$ 0.0482 J & \\xmark & \\xmark \\\\ \\hline\nDADS & 8.08 $\\times$ & \\xmark & \\xmark & 14.01 $\\times$ & \\xmark \\\\ \\hline\nAuto tuning & 1.13 $\\times$ $\\rightarrow$ 1.7 $\\times$ & \\xmark & \\xmark & 85\\% $\\rightarrow$ 99\\% & \\xmark \\\\ \\hline\nDDNN & \\xmark & 20 $\\times$ & \\xmark & \\xmark & \\xmark \\\\ \\hline\nCOLT-OPE & 3 $\\times$ & \\xmark & \\xmark & \\xmark & \\xmark \\\\ \\hline\n & 0.35 $\\times$ $\\rightarrow$ 5.28 $\\times$ & \\xmark & \\xmark & \\xmark & \\xmark \\\\ \\hline\nDINA& 2.4 $\\times$ $\\rightarrow$ 4.2 $\\times$& \\xmark & \\xmark & \\xmark & \\xmark \\\\ \\hline\nMoDNN& 2.17 $\\times$ $\\rightarrow$ 4.28 $\\times$& \\xmark & \\xmark & \\xmark & \\xmark \\\\ \\hline\n& 34\\% $\\rightarrow$ 84\\% & \\xmark & \\xmark & \\xmark & \\xmark \\\\ \\hline\nAAIoT & 1 $\\times$ $\\rightarrow$ 10 $\\times$& \\xmark & \\xmark & \\xmark & \\xmark \\\\ \\hline\nDeepWear & 5.08 $\\times$ $\\rightarrow$ 23 $\\times$& \\xmark & 53.5\\% $\\rightarrow$ 85.5\\% & \\xmark & \\xmark \\\\ \\hline\n\\end{tabular}\n\\end{table*}\n\\end{comment}\n\\begin{table*}[]\n\\centering\n\\footnotesize\n\\tabcolsep=0.09cm\n\\begin{threeparttable}\n\\caption{Performance of distribution strategies compared to:\n\\protect \\begin{tikzpicture}\n\\protect\\filldraw[color=black!60, fill=white!5, thick](-1,0) circle (0.15);\n\\protect\\end{tikzpicture} cloud only;\n\\protect\\begin{tikzpicture}\n\\protect\\filldraw[color=black!60, fill={rgb,255:red,218; green,232; blue,252}, thick](-1,0) circle (0.15);\n\\protect\\end{tikzpicture} 
on-device only;\\\\\n\\protect\\begin{tikzpicture}\n\\protect\\filldraw[color=black!60, fill={rgb,255:red,213;green,232;blue,212}, thick](-1,0) circle (0.15); \n\\end{tikzpicture} edge-server only.\n}\n\\label{tab:performance}\n\\begin{tabular}{|l|l|l|l|l|l|l|}\n\\hline\n\\begin{tabular}[c]{@{}l@{}}\\textbf{Refs}\\end{tabular} & \\textbf{Latency} & \\textbf{Bandwidth} & \\textbf{Energy} & \\textbf{computation/ memory} & \\textbf{throughput} & \\begin{tabular}[c]{@{}l@{}}\\textbf{Inference}\\\\ \\textbf{rate}\\end{tabular} \\\\ \\hline\nNeurosurgeon & 3.1 $\\times$ $\\rightarrow$ 40.7 $\\times$ & \\xmark & 59.5 \\% $\\rightarrow$ 94.7\\% & \\xmark & 1.5 $\\times$ $\\rightarrow$ 6.7 $\\times$ & \\xmark \\\\ \\hline\n\\begin{tabular}[c]{@{}l@{}}Edgent \\\\ Boomerang \\end{tabular} & \\cellcolor[HTML]{DAE8FC}2.3 $\\times$ & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark \\\\ \\hline\n & 1.2 $\\times$ $\\rightarrow$ 2 $\\times$ & \\xmark & \\xmark & \\xmark & \\xmark & \\xmark \\\\ \\cline{2-7} \n\\multirow{-2}{*}{ADDA } & \\cellcolor[HTML]{DAE8FC}1.7 $\\times$ $\\rightarrow$ 3 $\\times$ & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark \\\\ \\hline\n & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}15.3 $\\times$ & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}16.5 $\\times$ & \\cellcolor[HTML]{DAE8FC}\\xmark \\\\ \\cline{2-7} \n\\multirow{-2}{*}{} & \\cellcolor[HTML]{D5E8D4}\\xmark & \\cellcolor[HTML]{D5E8D4}\\xmark & \\cellcolor[HTML]{D5E8D4}2.3 $\\times$ & \\cellcolor[HTML]{D5E8D4}\\xmark & \\cellcolor[HTML]{D5E8D4}2.5 $\\times$ & \\cellcolor[HTML]{D5E8D4}\\xmark \\\\ \\hline\nJALAD & 1.1 $\\times$ $\\rightarrow$ 11.7 $\\times$ & \\xmark & \\xmark & \\xmark & 
\\xmark & \\xmark \\\\ \\hline\nJointDNN & 3 $\\times$ & \\xmark & 7 $\\times$ & \\xmark & \\xmark & \\xmark \\\\ \\hline\n & 8.08 $\\times$ & \\xmark & \\xmark & 14.01 $\\times$ & \\xmark & \\xmark \\\\ \\cline{2-7} \n\\multirow{-2}{*}{DADS } & \\cellcolor[HTML]{D5E8D4}6.45 $\\times$ & \\cellcolor[HTML]{D5E8D4}\\xmark & \\cellcolor[HTML]{D5E8D4}\\xmark & \\cellcolor[HTML]{D5E8D4}8.31 $\\times$ & \\cellcolor[HTML]{D5E8D4}\\xmark & \\cellcolor[HTML]{D5E8D4}\\xmark \\\\ \\hline\nAuto tuning & 1.13 $\\times$ $\\rightarrow$ 1.7 $\\times$ & \\xmark & \\xmark & 85\\% $\\rightarrow$ 99\\% & \\xmark & \\xmark \\\\ \\hline\nDDNN & \\xmark & 20 $\\times$ & \\xmark & \\xmark & \\xmark & \\xmark \\\\ \\hline\n & 2 $\\times$ & \\xmark & \\xmark & \\xmark & \\xmark & \\xmark \\\\ \\cline{2-7} \n\\multirow{-2}{*}{COLT-OPE } & \\cellcolor[HTML]{DAE8FC}4 $\\times$ & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark \\\\ \\hline\n & 48.11 \\% & \\xmark & \\xmark & \\xmark & \\xmark & \\xmark \\\\ \\cline{2-7} \n\\multirow{-2}{*}{} & \\cellcolor[HTML]{DAE8FC}39.75 \\% & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}70\\% & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark \\\\ \\hline\nDINA & \\cellcolor[HTML]{D5E8D4}2.6 $\\times$ $\\rightarrow$ 4.2 $\\times$ & \\cellcolor[HTML]{D5E8D4}\\xmark & \\cellcolor[HTML]{D5E8D4}\\xmark & \\cellcolor[HTML]{D5E8D4}\\xmark & \\cellcolor[HTML]{D5E8D4}\\xmark & \\cellcolor[HTML]{D5E8D4}\\xmark \\\\ \\hline\nMoDNN & \\cellcolor[HTML]{DAE8FC}2.17 $\\times$ $\\rightarrow$ 4.28 $\\times$ & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark \\\\ \\hline\nAAIoT & \\cellcolor[HTML]{D5E8D4}1 $\\times$ $\\rightarrow$ 10 $\\times$ & 
\\cellcolor[HTML]{D5E8D4}\\xmark & \\cellcolor[HTML]{D5E8D4}\\xmark & \\cellcolor[HTML]{D5E8D4}\\xmark & \\cellcolor[HTML]{D5E8D4}\\xmark & \\cellcolor[HTML]{D5E8D4}\\xmark \\\\ \\hline\nDeepWear & \\cellcolor[HTML]{DAE8FC}5.08 $\\times$ $\\rightarrow$ 23 $\\times$ & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}53.5\\% $\\rightarrow$ 85.5\\% & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark \\\\ \\hline\n & \\cellcolor[HTML]{DAE8FC}2 $\\times$ $\\rightarrow$ 6 $\\times$ & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark \\\\ \\hline\n & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}1.7 $\\times$ $\\rightarrow$ 4.69 $\\times$ \\\\ \\hline\nDeepThings & \\cellcolor[HTML]{DAE8FC}0.6 $\\times$ $\\rightarrow$ 3 $\\times$ & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}68\\% & \\cellcolor[HTML]{DAE8FC}\\xmark & \\cellcolor[HTML]{DAE8FC}\\xmark \\\\ \\hline\n\\end{tabular}\n\\begin{tablenotes}\n \\footnotesize\n \\item - The results in the table present the enhancement of the proposed strategies compared to the baseline approaches.\n \\item - $\\times$ stands for the number of times the metric is improved, i.e., how many times the latency, bandwidth usage, energy, computation, and memory are reduced, and how many times the throughput and inference rate are increased compared to the baselines.\n\\end{tablenotes}\n\\end{threeparttable}\n\\end{table*}", "id": "c3827174-8142-4574-b07e-3af2b4f28a29", "level": "paragraph", "origin_cites_number": 29, "parent_id": "695e849b-fe1c-443f-9b10-15b02a8ec11b", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on 
Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Resource management for distributed inference" ], [ "subsubsection", "Remote collaboration" ], [ "paragraph", "Per-layer distribution - back and forth, and hierarchical distribution" ] ], "subsections": [], "title": "Per-layer distribution - back and forth, and hierarchical distribution" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 3463 ], "content": "The per-segment partitioning is generally more popular when distributing the inference among IoT devices with limited capacities, as some devices, such as sensors, cannot execute the entire layer of a deep network. Furthermore, per-segment partitioning creates a huge dependency between devices; and consequently, multiple communications with remote servers are required. That is why only few works adopted this strategy for inference collaboration between end devices and edge/fog servers, including . Authors in proposed a spatial splitting (see Fig. \\ref{splitting} (b)) that minimizes the communication overhead per device. Then, a distribution solution is designed based on the matching theory and the swap matching problem , to jointly accomplish the DNN inference. The matching theory is a mathematical framework in economics that models interactions between two sets of selfish agents, each one is competing to match agents of the other set. 
The objective was to reduce the total computation time while increasing the utilization of the resources related to the two sets of IoT and fog devices.", "id": "c17ed41c-1986-47f8-a1ad-682f2828329c", "level": "paragraph", "origin_cites_number": 3, "parent_id": "695e849b-fe1c-443f-9b10-15b02a8ec11b", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Resource management for distributed inference" ], [ "subsubsection", "Remote collaboration" ], [ "paragraph", "Per-segment distribution" ] ], "subsections": [], "title": "Per-segment distribution" }, { "cite_extract_rate": 0, "cites": [], "content": "Another line of work considers the distribution of DNN computation across multiple edge participants, as shown in Fig. \\ref {distribution} (d). These participants present neighboring nodes that co-exist in the same vicinity, e.g., IoT devices or fog nodes. 
The model distribution over neighboring devices can be classified into two types: the per-layer distribution where each participant performs the computation of one layer or more and the per-segment allocation where smaller segments of the model are allocated to resource-limited devices.", "id": "ac6550c4-66e5-4a7e-ba42-b511a72e29ea", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "ca471488-ff37-437f-9c43-59fb13ca9bae", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Resource management for distributed inference" ], [ "subsubsection", "Localized collaboration" ] ], "subsections": [ "eb0b0f83-fecd-4d90-aa43-aa41c69f497a", "a7725cd7-ce64-48a9-a5dc-49ecbf3f3f1c" ], "title": "Localized collaboration" }, { "cite_extract_rate": 0.5, "cites": [ 3455, 3459 ], "content": "The layer-wise partitioning can itself be classified under two categories, the one splitting point strategy where only two participants are involved and multiple splitting points where two or more devices are collaborating. For example, the DeepWear approach splits the DNN into two sub-models that are separately computed on a wearable and a mobile device. First, the authors conducted in-depth measurements on different devices and for multiple models to demystify the performance of the wearable-side DL and study the potential gain from the partial offloading. The derived conclusions are incorporated into a prediction-based online scheduling algorithm that judiciously determines how, and when to offload, in order to minimize latency and energy consumption of the inference. On the other hand, authors in proposed a methodology for optimal placement of CNN layers among multiple IoT devices, while being constrained by their computation and memory capacities. 
This methodology minimizes the decision-making latency, measured as the sum of the processing times and the transmission delays between participants. Furthermore, the proposed technique can be applied both to CNNs with a fixed number of layers and to CNNs with early exits. Similarly, authors in proposed a CNN multi-splitting approach, namely AAIoT, to accelerate the inference process. Unlike the above-mentioned efforts, AAIoT deploys the layers of the neural network on a multi-layer IoT architecture. More specifically, the lowest-layer device is the data source, and the higher-layer devices have more powerful capacities. Offloading the computation to higher participants trades transmission latency for reduced computation time, whereas delegating the computation to lower participants brings no benefit to the system. An optimal solution and an online algorithm based on dynamic programming are designed to derive the best offloading strategy. 
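The core calculation behind single-split-point selection can be sketched as follows; the per-layer latency profiles, intermediate data sizes, and bandwidth value below are hypothetical, and the snippet is an illustration rather than any surveyed system's implementation:

```python
# Sketch: pick the single split point s that minimizes end-to-end latency.
# Layers [0..s) run on the device, layers [s..L) on the server, and the
# intermediate tensor produced at the split must be uploaded in between.
def best_split(device_ms, server_ms, tx_kb, kb_per_ms):
    # device_ms[i] / server_ms[i]: hypothetical latency of layer i (ms).
    # tx_kb[s]: data uploaded when splitting before layer s
    #           (tx_kb[0] = raw input, tx_kb[L] = 0 for fully local).
    L = len(device_ms)

    def total(s):
        return sum(device_ms[:s]) + tx_kb[s] / kb_per_ms + sum(server_ms[s:])

    return min(range(L + 1), key=total)

# Hypothetical 3-layer profile: a pooling-like layer 1 shrinks the data.
split = best_split([5, 10, 20], [1, 2, 4], [500, 100, 20, 0], kb_per_ms=10)
```

On such profiles the optimum tends to sit just after a layer with a small output, which is consistent with the intuition behind per-layer strategies; with a very slow link, the same routine degenerates to fully local execution.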
Other than capacity-constrained IoT devices, the distribution of the inference process over cloudlets in a 5G-enabled MEC system is the focus of the work in , where authors proposed to minimize the energy consumption, while meeting stringent delay requirements of AI applications, using a RL technique.", "id": "eb0b0f83-fecd-4d90-aa43-aa41c69f497a", "level": "paragraph", "origin_cites_number": 4, "parent_id": "ac6550c4-66e5-4a7e-ba42-b511a72e29ea", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Resource management for distributed inference" ], [ "subsubsection", "Localized collaboration" ], [ "paragraph", "Per-layer distribution" ] ], "subsections": [], "title": "Per-layer distribution" }, { "cite_extract_rate": 0.09090909090909001, "cites": [ 3464 ], "content": "The per-segment distribution is defined as allocating fine-grained partitioned DNN on lightweight devices such as Raspberry Pis. The partitioning strategy is based on the system configuration and the pervasive network characteristics, including the memory, computation, and communication capabilities of the IoT devices and their number. The segmentation of the DNN models varies from neurons partitioning to channels, spatial, and filters splitting, as discussed in section \\ref{profiling}. For example, the work in opted for the spatial splitting (see Fig. \\ref{splitting} (b)), where the input and the output feature maps are partitioned into a grid and distributed among lightweight devices. The authors proposed to allocate the cells along the longer edge of the input matrix (rows or columns) to each participant, in order to reduce the padding overhead produced by the spatial splitting. Different segments are distributed to IoT devices according to the load-balancing principles using the MapReduce model. 
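The essence of such row-wise spatial splitting, where each band carries the halo rows its convolution window needs, can be sketched as follows (a toy NumPy illustration with a 3x3 kernel, not any paper's implementation):

```python
import numpy as np

def conv3x3_valid(x, k):
    # Naive 3x3 valid convolution (cross-correlation) run by each worker.
    H, W = x.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * k)
    return out

def spatial_split_conv(x, k, parts):
    # Split the output rows into `parts` bands; each band's input slice
    # includes 2 extra halo rows so no worker needs data from a neighbor.
    H = x.shape[0]
    bounds = np.linspace(0, H - 2, parts + 1, dtype=int)  # output-row cuts
    outs = [conv3x3_valid(x[bounds[p]:bounds[p + 1] + 2], k)
            for p in range(parts)]
    return np.vstack(outs)  # stitched result equals the undivided conv
```

Stitching the per-band outputs reproduces the undivided convolution exactly; the price of the splitting is the duplicated halo rows, which is precisely the padding overhead that allocating cells along the longer edge tries to reduce.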
The same rows/columns partitioning is proposed in , namely the data-lookahead strategy. More specifically, each block contains data from other blocks within the same layer such that its connected blocks in subsequent layers can be executed independently, without requesting intermediate/padding data from other participants. The spatial splitting is also adopted in , where authors proposed a Fused Tile Partitioning (FTP) method. This method fuses the layers and divides them into a grid. Then, cells connected across layers are assigned to one participant, which largely reduces the communication overhead and the memory footprint. \nThe previous works introduced homogeneous partitioning, where all segments are similar. Unlike these strategies, authors in proposed a heterogeneous partitioning of the input data to be compatible with IoT systems containing devices with different capabilities, ranging from small participants that fit only a few cells to high-capacity participants suitable for full-layer computation. For the same purpose, authors in jointly conducted per-layer and per-segment partitioning, where the neurons and links of the network are modeled as a DAG. In this work, grouped convolution techniques are used to boost the model parallelization across different nodes of the graph. The papers in studied different partitioning strategies for the convolutional layers (channel, spatial, and filter splitting) and the fully-connected layers (output and input splitting). They emphasized that the optimal splitting depends greatly on the parameters of the CNN, and that the inference speedup depends on the number of tasks to be parallelized, which is itself determined by the adopted splitting method. Hence, no single partitioning approach can benefit all types of CNNs. 
Based on these conclusions, a dynamic heuristic is designed to select the most adequate splitting and model parallelism for different inference scenarios.\nTable \\ref{tab:performance} shows the performance of these techniques in terms of latency, bandwidth, energy, computation, memory, and throughput, whereas Table \\ref{tab:my-table} presents a comparison between different distributed inference techniques introduced in this section.", "id": "a7725cd7-ce64-48a9-a5dc-49ecbf3f3f1c", "level": "paragraph", "origin_cites_number": 11, "parent_id": "ac6550c4-66e5-4a7e-ba42-b511a72e29ea", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Resource management for distributed inference" ], [ "subsubsection", "Localized collaboration" ], [ "paragraph", "Per-segment distribution" ] ], "subsections": [], "title": "Per-segment distribution" }, { "cite_extract_rate": 0.28947368421052605, "cites": [ 3455, 3465, 3456, 3457, 3460, 2672, 3464, 3458, 3462, 3459, 3461 ], "content": "The lessons acquired from the literature review covering the DNN distribution can be summarized as follows:\n\\begin{itemize}\n \\item The per-layer strategy with remote collaboration is the most studied approach in the literature, owing to its simple splitting scheme and its assets in using high-performance servers while reducing the transmitted data. 
However, such strategies may not be efficient in terms of privacy or for networks with unstable transmission links, and hence may not be suitable for all applications.\n \\item In per-layer strategies, selecting the split points depends on multiple parameters: the capacity of the end device, which constrains the length of the first segment; the characteristics of the network (e.g., Wi-Fi, 4G, or LTE), which impact the transmission time; and the DNN topology, which determines the intermediate data size.\n \\item Deep neural networks whose pooling layers reduce the data only slightly, or whose fully-connected layers have similar sizes, undergo small variations in the per-layer data size. In this case, remote collaboration brings little benefit for data transmission. Hence, compression (e.g., quantization, pruning, and encoding) can be a good solution to benefit from the remote capacity with minimal communication overhead.\n \\item Recently, although the area is not yet mature, multiple efforts have focused on localized inference through per-segment distribution, which involves resource-limited devices and avoids transmission to remote servers. These works targeted model parallelization and aimed to maximize the concurrent computation of different segments within the same request. However, fewer works covered data parallelization and real-time adaptability to the dynamics and number of requests. In particular, the inference load strongly impacts how segments should be distributed to fit the capacities of the participants.\n \\item Adopting a mixed partitioning strategy is advantageous for heterogeneous systems composed of high- and low-capacity devices and multiple DNNs, as it allows the pervasive capacities to be fully utilized while minimizing the dependency and data transmission between devices. 
\n\\end{itemize}\n\\begin{table*}[]\n\\centering\n\\footnotesize\n\\tabcolsep=0.09cm\n\\caption{Comparison between Distributed Inference techniques.}\n\\label{tab:my-table}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n\\textbf{Refs} & \\textbf{Year} & \\textbf{End-Device} & \\begin{tabular}[c]{@{}c@{}}$N^{o}$ \\textbf{. of}\\\\ \\textbf{end}\\\\ \\textbf{Devices}\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}\\textbf{Localized} \\\\ \\textbf{inference}\\end{tabular} & \\textbf{Context} & \\begin{tabular}[c]{@{}c@{}}\\textbf{Real-time}\\\\\\textbf{processing}\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}\\textbf{Partitioning}\\\\ \\textbf{mechanism}\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}$N^{o}$\\textbf{. of}\\\\ \\textbf{partitions}\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}\\textbf{Model or data}\\\\ \\textbf{parallelism}\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}\\textbf{other}\\\\ \\textbf{techniques}\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}\\textbf{Runtime} \\\\ \\textbf{adaptability}\\end{tabular} \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Neuroseurgeon\\\\ \\end{tabular} & 2017 & Tegra TKI & 1 & \\xmark & \\xmark & \\xmark & Per-layer & 1 & \\xmark & \\xmark & \\xmark \\\\ \\hline\nDDNN & 2017 & \\xmark & Many & \\xmark & \\xmark & \\cmark & Per-layer & Many & Data & Early exit & \\xmark \\\\ \\hline\nMoDNN & 2017 & LG Nexus 5 & 4 & \\cmark & \\xmark & \\xmark & Per-segment & Many & Model & \\xmark & \\xmark \\\\ \\hline\nEdgent & 2018 & RaspBerry Pi 3 & 1 & \\xmark & \\xmark & \\cmark & Per-layer & 1 & \\xmark & Early exit & \\xmark \\\\ \\hline\n & 2018 & \\xmark & 1 & \\xmark & \\xmark & \\xmark & Per-layer & 1 & \\xmark & Compression & \\xmark \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}DeepThings \\\\ \\end{tabular} & 2018 & RaspBerry Pi 3 & Many & \\cmark & \\xmark & \\cmark & Per-segment & Many & Model & \\xmark & \\xmark \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Collaborative\\\\ robots \\end{tabular} & 2018 & RaspBerry Pi & 12 & 
\\cmark & \\begin{tabular}[c]{@{}c@{}}Robots and\\\\ image \\\\recognition\\end{tabular} & \\cmark & Per-segment & Many & Both & \\xmark & \\cmark \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Musical chair\\\\ \\end{tabular} & 2018 & RaspBerry Pi & Many & \\cmark & \\begin{tabular}[c]{@{}c@{}}object/action \\\\ recognition\\end{tabular} & \\cmark & Per-segment & Many & Both & \\xmark & \\cmark \\\\ \\hline\nHDDNN & 2018 & \\xmark & Many & \\xmark & \\xmark & \\cmark & Per-layer & Many & Data & Encryption & \\xmark \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Auto tuning\\\\ \\end{tabular} & 2018 & Jetson TX2 & Many & \\xmark & \\xmark & \\xmark & Per-layer & Many & \\xmark & Quantization & \\xmark \\\\ \\hline\nJALAD & 2018 & \\begin{tabular}[c]{@{}c@{}}GPU \\\\Quadro k620 \\end{tabular}& 1 & \\xmark & \\xmark & \\xmark & Per-layer & 1 & \\xmark & Quantization & \\cmark \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}} KLP \\\\ \\end{tabular} & \\begin{tabular}[c]{@{}c@{}}2018\\\\ 2019\\end{tabular} & STM32F469 & Many & \\cmark & \\xmark & \\xmark & Per-segment & Many & Model & \\xmark & \\xmark \\\\ \\hline\nADDA & 2019 & RaspBerry Pi 3 & 1 & \\xmark & \\xmark & \\xmark & Per-layer & 1 & \\xmark & Early exit & \\xmark \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Boomerang\\\\ \\end{tabular} & 2019 & RaspBerry Pi 3 & 1 & \\xmark & \\xmark & \\cmark & Per-layer & 1 & \\xmark & Early exit & \\cmark \\\\ \\hline\n & 2019 & Krait CPU & 12 & \\cmark & \\begin{tabular}[c]{@{}c@{}}sensors\\\\ fault tolerance\\end{tabular} & \\cmark & \\xmark & Many & Model & \\xmark & \\cmark \\\\ \\hline\n & 2019 & \\begin{tabular}[c]{@{}c@{}}RaspBerry Pi\\\\ STM32H7\\end{tabular} & Many & \\cmark & \\xmark & \\xmark & Per-layer & Many & Data & \\xmark & \\xmark \\\\ \\hline\nDADS & 2019 & \\begin{tabular}[c]{@{}c@{}}RaspBerry Pi 3\\\\ model B\\end{tabular} & 1 & \\xmark & \\xmark & \\cmark & Per-layer & Many & \\xmark & \\xmark & \\cmark \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}COLT-OPE \\\\ 
\\end{tabular} & 2019 & \\xmark & 1 & \\xmark &\\xmark & \\xmark & Per-layer & Many & \\xmark & Early exit & \\cmark \\\\ \\hline\nEDDL & 2019 & Fog nodes & Many & \\cmark & \\xmark & \\xmark & \\begin{tabular}[c]{@{}c@{}}Per-layer\\\\ Per-segment\\end{tabular} & Many & Model & \\begin{tabular}[c]{@{}c@{}}Sparsification\\\\ Early exit\\end{tabular} & \\xmark \\\\ \\hline\n & 2019 & \\begin{tabular}[c]{@{}c@{}}GPU \\\\ GTX1080 \\end{tabular}& 1 & \\xmark & \\xmark & \\cmark & Per-layer & 1 & \\xmark & \\xmark & \\cmark \\\\ \\hline\n & 2019 & \\xmark & 7 & \\cmark & \\xmark & \\xmark & \\begin{tabular}[c]{@{}c@{}}Per-layer\\\\ Per-segment\\end{tabular} & Many & Model & \\xmark & \\xmark \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}deepFogGuard\\\\ \\end{tabular} & 2019 & \\xmark & Many & \\xmark & \\xmark & \\xmark & Per-layer & Many & \\xmark & \\xmark & \\xmark \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}2steps-pruning\\\\ \\end{tabular} & 2019 & \\xmark & 2 & \\xmark & \\xmark & \\xmark & Per-layer & 1 & \\xmark & pruning & \\xmark \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}JointDNN \\\\ \\end{tabular}& 2019 & jetson tx2 & 1 & \\xmark & \\xmark & \\xmark & Per-layer & 1 & \\xmark & \\xmark & \\cmark \\\\ \\hline\nAAIoT & 2019 & \\begin{tabular}[c]{@{}c@{}}Raspberry Pi,\\\\ Mobile PC, \\\\ Desktop PC, \\\\ Server\\end{tabular} & Many & \\cmark & \\xmark & \\xmark & Per-layer & Many & \\xmark & \\xmark & \\xmark \\\\ \\hline\nMWWP & 2020 & \\xmark & Many & \\xmark & health care & \\cmark & Per-layer & Many & Data & \\xmark & \\cmark \\\\ \\hline\n & 2020 & Raspberry Pi & Many & \\cmark & \\begin{tabular}[c]{@{}c@{}}multi-view \\\\ object \\\\ detection\\end{tabular} & \\xmark & Per-segment & Many & Model & Compression & \\xmark \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}CONVENE \\\\ \\end{tabular} & 2020 & \\xmark & 1 & \\xmark & \\begin{tabular}[c]{@{}c@{}}Parallel data \\\\ sharing on \\\\antennas\\end{tabular} & \\xmark & Per-segment & Many & Model & \\xmark & \\cmark 
\\\\ \\hline\nDINA & 2020 & \\xmark & Many & \\xmark & \\xmark & \\xmark & Per-segment & Many & Both & \\xmark & \\cmark \\\\ \\hline\n & 2020 & \\xmark & Many & \\cmark & \\begin{tabular}[c]{@{}c@{}}Intelligent \\\\Connected\\\\ Vehicles\\end{tabular} & \\cmark & \\xmark & Many & \\xmark & \\xmark & \\xmark \\\\ \\hline\n & 2020 & \\xmark & 1 & \\xmark & \\xmark & \\xmark & Per- layer & 1 & \\xmark & Compression & \\xmark \\\\ \\hline\n & 2020 & Huawei & 1 & \\xmark & \\begin{tabular}[c]{@{}c@{}}augmented \\\\reality\\\\ in 5G\\end{tabular} & \\cmark & Per-layer & 2 & Data & Early-exit & \\cmark \\\\ \\hline\n & 2020 & Raspberry Pi 3 & Many & \\cmark & \\begin{tabular}[c]{@{}c@{}}Visual based \\\\ applications\\end{tabular} & \\xmark & Per-segment & Many & Both & \\xmark & \\xmark \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Deep Wear \\\\ \\end{tabular}& 2020 & Android wear & 2 & \\cmark & \\begin{tabular}[c]{@{}c@{}} Wearable\\\\ devices\\end{tabular} & \\cmark & Per-layer & 1 & \\xmark & Compression & \\cmark \\\\ \\hline\n & 2021 & Cloudlet & Many & \\cmark & 5 G & \\cmark & Per-layer & Many & Data & \\xmark & \\cmark \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}DistPrivacy \\\\ \\end{tabular} & 2021 & \\begin{tabular}[c]{@{}c@{}}Raspberry Pi\\\\ STM32H7\\\\ LG Nexus 5\\end{tabular} & Many & \\cmark & Data privacy & \\cmark & Per-segment & Many & Both & \\xmark & \\cmark \\\\ \\hline\n\\end{tabular}\n\\end{table*}", "id": "5c6d2e7f-95b8-4a0a-84eb-fbf75fef871b", "level": "subsubsection", "origin_cites_number": 38, "parent_id": "ca471488-ff37-437f-9c43-59fb13ca9bae", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Resource management for distributed inference" ], [ "subsubsection", "Lessons learned" ] ], "subsections": [], "title": "Lessons learned" }, { "cite_extract_rate": 0.2, "cites": [ 8648 ], "content": "Currently, 
robotic systems have been progressively converging to computationally expensive AI networks for tasks such as path planning and object detection. However, resource-limited robots, such as low-power UAVs, have insufficient on-board battery or computational resources to scalably execute highly accurate neural networks. \n\\begin{comment}\nIn surveillance applications, the aim is to monitor specific objects or identify threats within the target region. Moving devices are the most suitable technology to provide information about the target object from different angles, which makes the identification more accurate. These data-generating devices are only responsible for collecting the data, while servers with higher capacities generate the identification results. The traditional wisdom resorts to cloud or edge servers to compute heavy tasks. However, due to the harsh environments where robots move (e.g., military border zones, forests, and offshore oil reserves), the communication with remote servers is strongly affected by the weather. Also, the processing might be difficult or even impossible because of the interference resulting from the UAV altitude or the underground environment (e.g., high-rise building effect on path loss). Furthermore, as surveillance devices send high-resolution images to cloud/edge servers at each small interval of time and knowing that incidents rarely occur, the large data volume transmitted by source units has become problematic, particularly for such systems characterized by an unstable bandwidth availability. Because of this tremendous amount of data obtained during the robots mission, AI should be integrated within the design of the devices. 
However, moving robots come with distinct features, often understated, such as that communicating with the remote servers while moving incurs unstable latency, more energy consumption, and potential loss of data.\n\\end{comment}\n\\begin{figure}[!h]\n\\centering\n\t\\frame{\\includegraphics[scale=0.53]{Figures/UAVs.pdf}}\n\t\\caption{A fire detection scenario with distributed DNN.}\n\t\\label{UAVs_mobility}\n\\end{figure}\nThe work in examined the case of per-layer distribution with one split point between one UAV and one MEC server (see Fig. \\ref{UAVs_mobility}). More specifically, the authors proposed a framework for an AI-based visual target tracking system, where the low-level layers of the DNN classifier are deployed on the UAV device and the high-level layers are assigned to the remote servers. The classification can be performed using only the low-level layers if the image quality is good. Otherwise, the output of these layers should be further processed in the MEC server for higher accuracy. In this context, the authors formulated a weighted-sum cost minimization problem for binary offloading and partial offloading, while taking into consideration the error rate/accuracy, the data quality, the communication bandwidth, and the computing capacities of the MEC and the UAV. The offloading probability is derived for the binary offloading scheme, and the offloading ratio (i.e., the segment of the DNN to execute in the MEC) is obtained for the partial offloading scheme. In this model, the mobility of the UAVs (i.e., the distance between the UAV and the server) is captured through the transmission data rate between the device and the MEC. Additionally, the distance between the UAV and the target impacts the quality of the image and consequently the offloading decisions. 
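The shape of such a weighted-sum offloading objective can be illustrated with the toy sketch below; every constant (workload, CPU speeds, channel model, power figures) is a made-up assumption for illustration, not a value from the paper:

```python
# Toy partial-offloading objective: choose the fraction rho of the DNN
# workload executed on the MEC server, trading latency against UAV energy.
def cost(rho, dist, w_lat=1.0, w_energy=0.5):
    work = 100.0                        # total workload (Mcycles, made up)
    f_uav, f_mec = 1.0, 10.0            # CPU speeds (Gcycles/s, made up)
    data_mb = 2.0 * rho                 # intermediate data grows with rho (toy)
    rate = 20.0 / (1.0 + dist / 100.0)  # Mb/s, degrades with UAV-MEC distance
    latency = ((1 - rho) * work / (f_uav * 1e3)   # on-board compute
               + data_mb / rate                   # upload of intermediate data
               + rho * work / (f_mec * 1e3))      # server compute
    energy = (1 - rho) * work * 0.8e-3 + (data_mb / rate) * 0.3  # Joules (toy)
    return w_lat * latency + w_energy * energy

def best_ratio(dist, grid=101):
    # Grid search over rho in [0, 1] for a given UAV-MEC distance.
    return min((i / (grid - 1) for i in range(grid)),
               key=lambda rho: cost(rho, dist))
```

With these toy numbers, a UAV close to the MEC station offloads aggressively, while a distant one keeps the inference on board, mirroring the role the UAV-server distance plays through the transmission rate in the model above.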
In the proposed framework, multiple trade-offs are experienced:\n\\begin{itemize}\n \\item Accuracy comes at the expense of delay and transmitted data: if most of the images have poor quality, the system cannot achieve a low average latency, as on-board inference alone is not sufficient. For this reason, inferences should be extended judiciously to the segment allocated in the MEC, particularly when the environment is challenging (e.g., bad weather) or the target is highly dynamic.\n \\item A trade-off also exists between accuracy and latency on the one hand and the position of the UAV on the other: when the device is close to the target, high-resolution images can be taken, which yields good on-board accuracy and avoids data offloading. Being close to the targets is not always possible, particularly in harsh environments or when the surveillance should remain covert.\n \\item Battery life is extended at the expense of inference latency: energy can be saved by decreasing the processing coefficient, which enlarges the computation time of the classification.\n \\item The split point selection: if the intermediate data is smaller than the raw data, offloading is encouraged to enhance the accuracy.\n\\end{itemize}\nAn online solution of this offloading trade-off using reinforcement learning is presented in .\nThe previous works adopted the per-layer approach with a single split point and remote collaboration. This strategy is more adequate for flying devices, which can enhance their link quality by approaching the MEC stations. However, for ground robots, offloading segments of the inference to remote servers costs the system a large transmission overhead and high energy consumption. Authors in studied the distribution of the DNN among ground robots and profiled the energy consumed for such tasks, both when moving and in idle mode. 
Several conclusions are stated:\n\\begin{itemize}\n \\item When the robot is idle, the DNN computation and offloading increase the power consumption of the device by 50\\%.\n \\item If the device is moving, the DNN execution causes high spikes in power consumption, which may limit the device to attain a high performance as this variation incurs a frequent change of the power saving settings in the CPU.\n \\item Distributing the inference contributes to reducing the energy consumed per device, even-though the total power consumption is higher. This is due to the reduced computation and memory operations per device and the idle time experienced after offloading the tasks.\n\\end{itemize}\nBased on the energy study of moving robots, the authors proposed to distribute the DNN model into smaller segments among multiple low-power robots to achieve an equilibrium of performance in terms of energy and number of executed tasks . Still, the distribution of the model into small segments (e.g., filter splitting) requires the intervention of a large number of robots that are highly dependent, which is not realistic.", "id": "6f7ddb5a-d1b9-4bec-a004-bea50c65f5dd", "level": "subsection", "origin_cites_number": 5, "parent_id": "caa8daf7-97f9-471c-b534-e135a402b2d0", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Pervasive Inference" ], [ "subsection", "Use case: Distribution on moving robots" ] ], "subsections": [], "title": "Use case: Distribution on moving robots" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{privacy_AI}\nEven though the pervasive AI has presented unprecedented opportunities to empower IoT applications, it gave rise to novel security and privacy concerns. In fact, if servers and participants are not controlled or owned by one operator, they are considered malicious by nature. 
Particularly, sensitive information can be leaked while sharing intermediate data or updates between participants. Moreover, an untrusted participant can alter the local data or send wrong parameters to slow the learning or mislead the system. In this section, we overview the privacy (i.e., one of the devices revealing private information about others) and security (i.e., one of the devices injects false information to disrupt the collective behavior of the devices) challenges and we survey different approaches that address these issues.", "id": "9df0fd96-bdea-482d-88b7-240ff428f334", "level": "section", "origin_cites_number": 0, "parent_id": "7aedf107-610a-4e90-9d9d-b0a1b2dcb03c", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Privacy of pervasive AI systems" ] ], "subsections": [ "8c5da2a2-c8ee-423c-8129-9a53ffb36936", "20737907-d0e1-4e53-a293-c60550831e94", "81fd49d3-5877-495b-ac06-bc69dbc0d31a" ], "title": "Privacy of pervasive AI systems" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "8c5da2a2-c8ee-423c-8129-9a53ffb36936", "level": "subsection", "origin_cites_number": 0, "parent_id": "9df0fd96-bdea-482d-88b7-240ff428f334", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Privacy of pervasive AI systems" ], [ "subsection", "Privacy for pervasive training" ] ], "subsections": [ "3b093e3f-f514-445b-a29d-c2ae8d01e7d3", "15b70f61-0ec4-40e9-b32c-0dabef9489fc" ], "title": "Privacy for pervasive training" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 3466, 3469, 3468, 3467 ], "content": "In some federated learning settings, participants can randomly join or leave the training process, which raises various vulnerabilities from different sources, including malicious servers, insider adversaries and outsider attackers. 
More specifically, aggregation servers can be honest-but-curious, inspecting the models without introducing any changes. On the other hand, potentially malicious servers , as well as untrusted participants, can tamper with the model during the learning rounds, or dissimulate participation in order to obtain the final aggregated model without actually contributing any data, or while contributing only a small number of samples. This attack is called free-riding . Outsider eavesdroppers can also intercept the communication channels between trusted devices and the server to spoof the model or inject noisy data (data poisoning). Authors in proved that it is possible to extract sensitive information from a trained model, as it encodes correlations between the training samples. The research work in showed that the confidence information returned by ML classifiers enables new model inversion attacks that allow an adversary to reconstruct samples of training subjects with high accuracy. Inferring sensitive information is also possible by querying the prediction model.\nBandit and MARL algorithms are also prone to attacks if one of the agents is compromised. In general, if any of the agents starts communicating false data (i.e., false data injection attacks), the regret guarantees in bandits and the convergence behavior in MARL no longer hold. 
In addition to these expected effects, recent works in have demonstrated that a malicious agent may not only be disruptive but can also actively sway the policy into malicious objectives by driving other agents to reach a policy of its choice.", "id": "3b093e3f-f514-445b-a29d-c2ae8d01e7d3", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "8c5da2a2-c8ee-423c-8129-9a53ffb36936", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Privacy of pervasive AI systems" ], [ "subsection", "Privacy for pervasive training" ], [ "subsubsection", "Privacy and security challenges" ] ], "subsections": [], "title": "Privacy and security challenges" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "15b70f61-0ec4-40e9-b32c-0dabef9489fc", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "8c5da2a2-c8ee-423c-8129-9a53ffb36936", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Privacy of pervasive AI systems" ], [ "subsection", "Privacy for pervasive training" ], [ "subsubsection", "Defense techniques and solutions" ] ], "subsections": [ "1ea199c0-0420-410c-b0c2-bd29b8f983bb", "7b2f9a6d-5726-4ea5-9dea-c62d51533af1", "540f89bb-bbb8-4bf5-af2d-f366725ca68a", "7aaeb880-f099-4baf-9dca-eec3253adebf", "a0a5e105-95e7-4a00-81ed-60edad5342b8", "bf40eab4-c4ab-4d73-ae5e-83a863f08334" ], "title": "Defense techniques and solutions" }, { "cite_extract_rate": 0.8, "cites": [ 3471, 7727, 3470, 7608 ], "content": "DP is a data perturbation technique that was first introduced for ML in . In DP, a statistical noise is injected to the sensitive data to mask it and an algorithm is called differentially private, if its output cannot give any insight or reveal any information about its input. 
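A minimal sketch of the Gaussian mechanism applied to a shared model update, assuming standard L2 clipping and the classic sigma calibration from (epsilon, delta), is shown below; the clipping bound and privacy parameters are illustrative:

```python
import math
import random

def privatize(update, clip=1.0, epsilon=1.0, delta=1e-5):
    # Clip the update to L2 norm <= clip, bounding one participant's
    # influence (and hence the sensitivity of the aggregate).
    norm = math.sqrt(sum(u * u for u in update))
    scale = min(1.0, clip / norm) if norm > 0 else 1.0
    clipped = [u * scale for u in update]
    # Classic Gaussian-mechanism calibration: sigma grows as epsilon
    # shrinks, so stronger privacy means noisier (less accurate) updates.
    sigma = clip * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    return [u + random.gauss(0.0, sigma) for u in clipped]
```

The sigma formula makes the accuracy/privacy tension explicit: halving epsilon doubles the noise added to every coordinate of the shared update.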
DP has been widely used to preserve the privacy of the learning, although it is often criticized for its effect on the accuracy of the results due to noise growth. Hence, a careful calibration between the privacy level and the model usability is needed. The use of differential privacy for distributed learning systems has become a very active research area. The authors in proposed a differentially private stochastic gradient descent technique that adds random noise (e.g., via the Gaussian mechanism) to the trained parameters before sending them to the aggregation server. Then, during the local training, each participant keeps estimating the probability that an attacker succeeds in exploiting the shared data, until a predefined threshold is reached, at which point it stops the process. Moreover, in each round, the aggregation server chooses participants at random. In this way, neither the local parameters nor the global model can be exploited, as the attacker has no information about the devices participating in the current round. DP has also been used to ensure the agents' privacy in federated bandits , where the authors considered the federated bandit formulation with contextual information (in both the centralized-aggregator and p2p communication styles). The authors provided regret and privacy guarantees such that the peers, or the central aggregator, do not learn individual agents' samples.
While concealing the agents' contributions during the training, a trade-off between privacy and learning performance should be established. In this context, authors in tested the performance of FL applied to real-world healthcare datasets while securing the private data using differential privacy techniques. Results show that a significant performance loss is incurred as the privacy level is increased.
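As a concrete illustration of the noise-adding step, the following sketch clips a local gradient and applies the Gaussian mechanism before the update is shared with the aggregation server; the clipping norm and noise multiplier are illustrative values, not those of the cited works.

```python
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a local gradient to a maximum L2 norm, then add Gaussian noise
    (the Gaussian mechanism) before the update leaves the participant."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

local_grad = np.array([3.0, 4.0])        # L2 norm 5, clipped down to norm 1
shared_update = privatize_gradient(local_grad)
```

With a fixed clipping norm, a larger noise multiplier strengthens privacy but further degrades the aggregated model's accuracy, which is exactly the calibration discussed above.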
This encouraged the research community to propose alternative approaches to ensure privacy in federated learning.", "id": "1ea199c0-0420-410c-b0c2-bd29b8f983bb", "level": "paragraph", "origin_cites_number": 5, "parent_id": "15b70f61-0ec4-40e9-b32c-0dabef9489fc", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Privacy of pervasive AI systems" ], [ "subsection", "Privacy for pervasive training" ], [ "subsubsection", "Defense techniques and solutions" ], [ "paragraph", "Differential Privacy (DP)" ] ], "subsections": [], "title": "Differential Privacy (DP)" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 3472 ], "content": "HE is a form of encryption that allows computations to be performed directly on ciphertexts such that, once decrypted, the results match those obtained by computing on the original data.
The approach in and ensures the integrity of the DL training process against outsider attackers as well as an honest-but-curious server. The key idea is to encode and compress the parameters of the trained neural networks before sharing them with the server. Then, these aggregated updates are directly computed with a decoder on the server. This guarantees their privacy during the communication and after decoding. Although the encryption technique can preclude the server from extracting information from local models, it costs the system more communication rounds and cannot prevent collusion between the server and a malicious participant. To solve this problem, authors in proposed to adopt a hybrid solution which integrates both lightweight homomorphic encryption and differential privacy.
In this work, intentional noise is added to perturb the original parameters in case the curious server colludes with one of the participants to obtain the encryption parameters.
Even though encryption is a robust approach to achieve privacy preservation for many applications, its adoption for deep learning faces various challenges, as it can only be deployed for computations of limited degree and complexity. In other words, fully homomorphic schemes are still not efficient for practical use.", "id": "7b2f9a6d-5726-4ea5-9dea-c62d51533af1", "level": "paragraph", "origin_cites_number": 3, "parent_id": "15b70f61-0ec4-40e9-b32c-0dabef9489fc", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Privacy of pervasive AI systems" ], [ "subsection", "Privacy for pervasive training" ], [ "subsubsection", "Defense techniques and solutions" ], [ "paragraph", "Homomorphic Encryption (HE)" ] ], "subsections": [], "title": "Homomorphic Encryption (HE)" }, { "cite_extract_rate": 0.5, "cites": [ 3473 ], "content": "Blockchain is a recent distributed ledger system initially designed for cryptocurrencies and later increasingly applied to IoT systems, where a record of transactions is maintained distributively in a peer-to-peer network . Authors in proposed to use a blockchain-based communication scheme to exchange updates in a distributed ML system, with the aim of leveraging the blockchain's security features in the learning process.
In this scheme, local models are shared and verified in the trusted blockchain network. Furthermore, this framework can prevent participants from free-riding, as their updates are checked and they receive rewards proportional to the number of trained data samples. However, in contrast to vanilla FL, Block FL needs to take into consideration the extra delay incurred by the blockchain network.
To address this, Block FL is formulated by considering communication, computation, and the block generation rate, i.e., the proof-of-work difficulty. A possible drawback of this approach is its vulnerability to any latency increase.
Also, the use of blockchain adds a significant cost for implementing and maintaining miners.", "id": "540f89bb-bbb8-4bf5-af2d-f366725ca68a", "level": "paragraph", "origin_cites_number": 2, "parent_id": "15b70f61-0ec4-40e9-b32c-0dabef9489fc", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Privacy of pervasive AI systems" ], [ "subsection", "Privacy for pervasive training" ], [ "subsubsection", "Defense techniques and solutions" ], [ "paragraph", "Blockchain-based solutions" ] ], "subsections": [], "title": "Blockchain-based solutions" }, { "cite_extract_rate": 1, "cites": [ 3474 ], "content": "SMC is a family of cryptographic protocols whose goal is to keep everything except the output secret when multiple participants jointly compute an arbitrary function over their private inputs. A study in has used SMC to build FL systems.
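At the core of such secure aggregation protocols, clients add pairwise random masks that cancel out in the server-side sum, so individual uploads look random while the aggregate stays exact. A toy sketch with hypothetical client updates (real protocols derive the masks from shared secrets and handle dropouts):

```python
import numpy as np

def masked_updates(updates, seed=42):
    """For each client pair (i, j), draw a random mask; client i adds it and
    client j subtracts it. Each upload is obscured, but the masks cancel
    when the server sums all uploads."""
    rng = np.random.default_rng(seed)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(len(updates)):
        for j in range(i + 1, len(updates)):
            mask = rng.normal(size=updates[i].shape)
            masked[i] += mask
            masked[j] -= mask
    return masked

clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
aggregate = sum(masked_updates(clients))    # matches the true sum of updates
```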
The proposed protocols rely on secret sharing, which adds a new round at the beginning of the process for key sharing, a double-masking round that protects against a potentially malicious server, and a server-mediated key agreement that minimizes trust.", "id": "7aaeb880-f099-4baf-9dca-eec3253adebf", "level": "paragraph", "origin_cites_number": 1, "parent_id": "15b70f61-0ec4-40e9-b32c-0dabef9489fc", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Privacy of pervasive AI systems" ], [ "subsection", "Privacy for pervasive training" ], [ "subsubsection", "Defense techniques and solutions" ], [ "paragraph", "Secure Multi-Party Computation (SMC)" ] ], "subsections": [], "title": "Secure Multi-Party Computation (SMC)" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 3475, 3476 ], "content": "Data poisoning is one of the most destructive attacks against ML, where an attacker injects poisoned data (e.g., mislabeled samples and wrong parameters) into the dataset, which can mislead the learning process. Authors in and propose secure decentralized techniques to protect the learning against data poisoning, as well as other system attacks. A zero-sum game is proposed to formulate the conflicting objectives between honest participants that utilize Distributed Support Vector Machines (DSVMs) and a malicious attacker that can change data samples and labels. This game characterizes the contention between the honest learner and the attacker. Then, a fully distributed and iterative algorithm is developed based on the Alternating Direction Method of Multipliers (ADMoM) to procure the instantaneous responses of different agents.
Blockchain-based solutions can also be used to prevent the FL system from data poisoning attacks.", "id": "a0a5e105-95e7-4a00-81ed-60edad5342b8", "level": "paragraph", "origin_cites_number": 3, "parent_id": "15b70f61-0ec4-40e9-b32c-0dabef9489fc", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Privacy of pervasive AI systems" ], [ "subsection", "Privacy for pervasive training" ], [ "subsubsection", "Defense techniques and solutions" ], [ "paragraph", "Prevention against data poisoning" ] ], "subsections": [], "title": "Prevention against data poisoning" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 3477, 3478 ], "content": "Most of the aforementioned techniques protect the private data from outsider attackers while assuming that the server is trustful and participants are honest. However, one malicious insider can cause serious privacy threats. Motivated by this challenge, authors in proposed a collaborative DL framework to solve the problem of internal attackers. The key idea is to select only a small number of gradients to share with the server and similarly receive only a part of the global parameters instead of uploading and updating the whole set of parameters. In this way, a malicious participant cannot have the whole information and hence cannot infer it. However, this approach suffers from accuracy loss. Furthermore, authors in presented a new attack based on Generative Adversarial Networks (GANs) that can infer sensitive information from a victim participant even with just a portion of shared parameters. In the same context, a defense approach based on GANs is designed by authors in , in which participants generate artificial data that can replace the real samples. In this way, the trained model is called federated generative model and the private data parameters are not exposed to external malicious devices. 
Still, this approach can lead to potential learning instability and performance reduction due to the fake data used in the training.", "id": "bf40eab4-c4ab-4d73-ae5e-83a863f08334", "level": "paragraph", "origin_cites_number": 3, "parent_id": "15b70f61-0ec4-40e9-b32c-0dabef9489fc", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Privacy of pervasive AI systems" ], [ "subsection", "Privacy for pervasive training" ], [ "subsubsection", "Defense techniques and solutions" ], [ "paragraph", "Other techniques" ] ], "subsections": [], "title": "Other techniques" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "20737907-d0e1-4e53-a293-c60550831e94", "level": "subsection", "origin_cites_number": 0, "parent_id": "9df0fd96-bdea-482d-88b7-240ff428f334", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Privacy of pervasive AI systems" ], [ "subsection", "Privacy for pervasive inference" ] ], "subsections": [ "cf0c8e7a-5d04-424c-9716-c82e2f5c654d", "1497a56b-96a9-4eaf-9373-f2b742ebc789" ], "title": "Privacy for pervasive inference" }, { "cite_extract_rate": 0.538461538461538, "cites": [ 3465, 3480, 3483, 3484, 3482, 3481, 3479 ], "content": "The data captured by end-devices and sent to remote servers (e.g., from cameras or sensors to cloud servers) may contain sensitive information such as camera images, GPS coordinates of critical targets, or vital signs of patients. Exposing these data has become a big security concern for the deep learning community. This issue is even more concerning when the data is collected from a small geographical area (e.g., edge computing) involving a set of limited and cooperating users. 
In fact, if an attacker reveals some data (even public or slightly sensitive), a DL classifier can be trained to automatically infer the private data of a known community. These attacks, which pose severe privacy threats, are called inference attacks: they analyze trivial or publicly available data to illegitimately acquire knowledge about more sensitive information, without accessing it directly, by exploiting statistical correlations. An example of a popular inference attack is the Cambridge Analytica scandal in 2016, where public data of Facebook users were exploited to predict their private attributes (e.g., political views and location). Some well-known inference attacks are summarized in Table \ref{inference_attacks}.
\begin{table}[h]
\centering
\caption{Examples of inference attacks.}
\label{inference_attacks}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Inference attacks} & \textbf{Exposed data} & \textbf{Sensitive data} \\ \hline
\begin{tabular}[c]{@{}c@{}}Side-channel attacks\\ \end{tabular} & \begin{tabular}[c]{@{}c@{}}Processing time,\\ power consumption.\end{tabular} & \begin{tabular}[c]{@{}c@{}}Cryptographic\\ keys\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Location inference\\ attacks \end{tabular} & \begin{tabular}[c]{@{}c@{}}Smartphones' sensor\\ data.\end{tabular} & Location \\ \hline
\begin{tabular}[c]{@{}c@{}}Feature inference\\ attacks \end{tabular} & \begin{tabular}[c]{@{}c@{}}Prediction results,\\ partial features of the\\ DNN model.\end{tabular} & DNN structure \\ \hline
\begin{tabular}[c]{@{}c@{}}Membership inference\\ attacks \end{tabular} & \begin{tabular}[c]{@{}c@{}}Confidence level \\ of classes,\\ gradients.\end{tabular} & \begin{tabular}[c]{@{}c@{}}Membership of \\ a sample to a\\ dataset.\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Attribute inference \\ attacks \end{tabular} & \begin{tabular}[c]{@{}c@{}}Social data, likes, \\ friends.\end{tabular} &
\begin{tabular}[c]{@{}c@{}}Gender, ages,\\ preferences.\end{tabular} \\ \hline
\end{tabular}
\end{table}
Edge computing naturally enhances the privacy of sensitive information by minimizing data transfer to the cloud over the public internet. However, additional privacy techniques should be adopted to further protect the data from eavesdroppers. In this context, in addition to enabling the pervasive deployment of neural networks, DNN splitting has also been used for privacy purposes. That is, by partitioning the model, partially processed data is sent to the untrusted party instead of raw data. In fact, in contrast to the training data, which belongs to a specific dataset and generally follows a statistical distribution, the inference samples are random and harder to revert. Furthermore, the model parameters are independent of the input data, which makes the inference process reveal less information about the sample . While privacy is preserved, the inevitable challenge of DNN partitioning remains: selecting the splitting point that meets the latency requirements of the system.
\begin{table*}[!h]
\centering
\caption{Comparison between privacy-aware distribution strategies.\\
(H: High, M: Medium, L: Low).}
\label{tab:privacy}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\begin{tabular}[c]{@{}c@{}}\textbf{Privacy-aware}\\ \textbf{strategy}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Privacy}\\ \textbf{level}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Accuracy}\\ \textbf{preserving}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{DNN}\\ \textbf{re-training}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Compatibility}\\ \textbf{with IoT and DNNs}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Partitioning}\\ \textbf{strategy}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Communication}\\ \textbf{overhead}\end{tabular} &
\\begin{tabular}[c]{@{}c@{}}\\textbf{Computation}\\\\ \\textbf{overhead on}\\\\ \\textbf{source-device}\\end{tabular} \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Deep split \\end{tabular} & H & \\cmark & \\xmark & \\cmark & per-layer & L & H \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Feature extraction \\\\ \\end{tabular} & L & \\xmark & \\xmark & \\cmark & per-layer & L & M \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Noise addition\\\\ \\end{tabular} & M & \\xmark & \\cmark & \\xmark & per-layer & M & H \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Cryptography \\end{tabular} & H & \\xmark & \\cmark & \\xmark & per-layer & M & H \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}Privacy-aware\\\\ partitioning \\end{tabular} & M & \\cmark & \\xmark & \\cmark & \\begin{tabular}[c]{@{}c@{}}Filter\\\\ splitting \\end{tabular} & H & L \\\\ \\hline\n\\end{tabular}\n\\end{table*}", "id": "cf0c8e7a-5d04-424c-9716-c82e2f5c654d", "level": "subsubsection", "origin_cites_number": 13, "parent_id": "20737907-d0e1-4e53-a293-c60550831e94", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Privacy of pervasive AI systems" ], [ "subsection", "Privacy for pervasive inference" ], [ "subsubsection", "Privacy and security challenges" ] ], "subsections": [], "title": "Privacy and security challenges" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "1497a56b-96a9-4eaf-9373-f2b742ebc789", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "20737907-d0e1-4e53-a293-c60550831e94", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Privacy of pervasive AI systems" ], [ "subsection", "Privacy for pervasive inference" ], [ "subsubsection", "Defense techniques and solutions" ] ], "subsections": [ "e43326f0-f867-4d51-8b73-92517f516e6f", "ed39e46e-8f57-429c-b574-55bab1950337", 
"d99cbce9-c2af-49b6-aad3-1202b27bc2c5", "12d8d2c7-f137-4970-9abe-a14165af9428" ], "title": "Defense techniques and solutions" }, { "cite_extract_rate": 0.42857142857142805, "cites": [ 3485, 629, 3484 ], "content": "Authors in proposed to extract the features sufficient and necessary to conduct the classification from the original image or from one of the layers' outputs using an encoder and transmit these data to the centralized server for inference. This approach prevents the exposure of irrelevant information to the untrusted party that may use it for unwanted inferences.\nThe work in also proposed feature extraction for data privacy, while achieving a trade-off between on-device computation, the size of transmitted data, and security constraints. In fact, selecting the split layer from where the data will be extracted intrinsically presents a security compromise. Particularly, as we go deeper in the DNN network, the features become more task specific and the irrelevant data that can involve sensitive information are mitigated . Hence, if the split is performed in a deep layer, the privacy is more robust and the transmission overhead is lower. However, a higher processing load is imposed on the source device. The latter work , along with the work in , advised to perform deep partition in case the source device has enough computational capacity. If the source device is resource-constrained, the model should be partitioned in the shallow layers, although most of the output features are not related to the main task. Authors in proposed a solution based on Siamese fine-tuning and dimensionality reduction to manipulate the intermediate data and send only the primary measures without any irrelevant information. In addition to enhancing privacy, this mechanism contributes to reducing the communication overhead between the end-device and the remote server. 
\nHowever, to this end, the arms race between attacks and defenses for DNN models has come to a forefront, as the amount of extracted features can be sufficient for adversary approaches to recover the original image. Whereas, less shared features may also result in low classification accuracy. The works in proposed adversarial attacks to predict the inference input data (or the trained model), using only available features from shared outputs between participants. Authors in focused particularly on the privacy threats presented by the DNN distribution; and accordingly, designed a white-box attack assuming that the structure of the trained model is known and the intermediate data can be inverted through a regularized Maximum Likelihood Estimation (rMSE). Additionally, a black-box attack is also proposed, where the malicious participant only has knowledge about his segment and attempts to design an inverse DNN network to map the received features to the targeted input and recover the original data. Authors demonstrated that reversing the original data is possible, when the neural system is distributed into layers.", "id": "e43326f0-f867-4d51-8b73-92517f516e6f", "level": "paragraph", "origin_cites_number": 7, "parent_id": "1497a56b-96a9-4eaf-9373-f2b742ebc789", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Privacy of pervasive AI systems" ], [ "subsection", "Privacy for pervasive inference" ], [ "subsubsection", "Defense techniques and solutions" ], [ "paragraph", "Features extraction" ] ], "subsections": [], "title": "Features extraction" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 3479, 3483 ], "content": "Adding noise to the intermediate data is adopted in . In this paper, the authors proposed to perform a simple data transformation in the source-device to extract relevant features and add noise. 
Next, these features extracted from shallow layers are sent to the cloud to complete the inference. To maintain a high classification accuracy, the neural network is re-trained on a dataset containing noisy samples. However, adding noise to the intermediate data costs the system additional energy consumption and computational overhead. Therefore, the splitting should be done at a layer where the output size is minimal, though the latter work did not describe the partitioning strategy. The Shredder approach resolved this dilemma by considering the computation overhead during the noise injection process. The idea is to conduct an offline machine learning training to find the noise distribution that strikes a balance between privacy (i.e., information loss) and accuracy. In this way, the DNN model does not require retraining with the noisy data, and the network can be cut at any point to directly apply the noise distribution. The partitioning decision is based on the communication and computation costs. A higher privacy level and lower communication overhead are guaranteed when the split is performed at deep layers; however, the allocation at the end-device becomes less scalable. Adding noise or extracting task-specific data can be included under the umbrella of differential privacy, which at a high level ensures that the model does not reveal any information about the private input data, while still providing satisfactory classification. The performance of differential privacy is assessed by a privacy budget parameter $\epsilon$ that denotes the level of distinguishability.
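Combining the two ideas, the sketch below cuts the network where the intermediate output is smallest and perturbs that output with the Laplace mechanism, whose scale is sensitivity$/\epsilon$; the layer sizes, sensitivity, and budget are illustrative values.

```python
import numpy as np

layer_output_sizes = {1: 4096, 2: 2048, 3: 512, 4: 1024}   # hypothetical DNN

def pick_split_point(sizes):
    """Cut where the intermediate output is minimal, so that noise is
    injected (and data transmitted) at the cheapest point."""
    return min(sizes, key=sizes.get)

def laplace_perturb(features, sensitivity=1.0, epsilon=0.5, rng=None):
    """Laplace mechanism: noise scale = sensitivity / epsilon, so a tighter
    privacy budget (smaller epsilon) means heavier noise."""
    rng = rng or np.random.default_rng(0)
    scale = sensitivity / epsilon
    return features + rng.laplace(0.0, scale, size=features.shape)

split = pick_split_point(layer_output_sizes)       # layer 3 in this example
noisy_features = laplace_perturb(np.zeros(layer_output_sizes[split]))
```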
Authors in conducted theoretical analysis to minimize $\\epsilon$, while considering accuracy and the communication overhead to offload the intermediate features among fog participants.", "id": "ed39e46e-8f57-429c-b574-55bab1950337", "level": "paragraph", "origin_cites_number": 3, "parent_id": "1497a56b-96a9-4eaf-9373-f2b742ebc789", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Privacy of pervasive AI systems" ], [ "subsection", "Privacy for pervasive inference" ], [ "subsubsection", "Defense techniques and solutions" ], [ "paragraph", "Noise addition" ] ], "subsections": [], "title": "Noise addition" }, { "cite_extract_rate": 0, "cites": [], "content": "Cryptography is another technique that can be used to protect the distributed inference. The main idea is to encrypt the input data and process it using a model trained on encrypted dataset, in a way the intermediate data cannot be used by a malicious participant. 
Little research, including , has investigated encrypted DNN distribution, as this approach suffers from a prohibitive computation and communication overhead that exacerbates the complexity of the inference process, particularly when executed on resource-constrained devices.", "id": "d99cbce9-c2af-49b6-aad3-1202b27bc2c5", "level": "paragraph", "origin_cites_number": 1, "parent_id": "1497a56b-96a9-4eaf-9373-f2b742ebc789", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Privacy of pervasive AI systems" ], [ "subsection", "Privacy for pervasive inference" ], [ "subsubsection", "Defense techniques and solutions" ], [ "paragraph", "Cryptography" ] ], "subsections": [], "title": "Cryptography" }, { "cite_extract_rate": 1, "cites": [ 3465 ], "content": "All the previous techniques apply additional operations to secure the shared data, e.g., feature extraction, noise addition, and encryption, which overload the pervasive devices with computational overhead. Different from previous works, DistPrivacy used the partitioning scheme itself to guarantee the privacy of the data. In fact, all the existing privacy-aware approaches adopted the per-layer distribution of the DNN model. This partitioning strategy produces intermediate shared information that can be easily reverted using adversarial attacks. The main idea in is to divide the data resulting from each layer into small segments and distribute them to multiple IoT participants, which contributes to hiding the properties of the original image, as each participant holds only a small amount of information. Particularly, the authors adopted the filter splitting strategy, in such a way that each device computes only a part of the feature maps. However, as stated in section \ref{profiling}, this partitioning strategy results in large data transmission between participants.
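The filter-splitting strategy can be illustrated with a toy pointwise (1x1) convolution, assigning disjoint subsets of the filters to four devices so that each one computes only a slice of the feature maps; all shapes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 8, 8))        # input tensor: 3 channels, 8x8 pixels
filters = rng.normal(size=(16, 3))    # 16 pointwise filters over 3 channels

def conv1x1(inp, f):
    """Pointwise convolution: each filter mixes the input channels per pixel."""
    return np.einsum('oc,chw->ohw', f, inp)

# Assign the 16 filters to 4 devices; each produces 4 feature maps, so no
# single participant holds the full intermediate representation.
assignments = np.array_split(np.arange(16), 4)
parts = [conv1x1(x, filters[idx]) for idx in assignments]

# Concatenating the partial maps recovers the undivided output -- but note
# that every device needed the whole input tensor, which illustrates the
# transmission overhead of this splitting strategy.
recombined = np.concatenate(parts, axis=0)
```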
Therefore, the authors formulated an optimization that establishes a trade-off between privacy and communication overhead.\nTable \\ref{tab:privacy} illustrates different privacy-aware strategies for distributed inference existing in the literature and shows their performance. We can see that choosing the adequate strategy depends on the requirements of the pervasive computing system, as multiple trade-offs need to be established, such as the security level and accuracy, or the computation and communication loads.\n\\begin{comment}\n\\begin{figure}[!h]\n\\centering\n\t\\includegraphics[scale=0.41]{Figures/privacy2.pdf}\n\t\\caption{Privacy-aware distribution strategies.}\n\t\\label{privacy_aware}\n\\end{figure}\n\\end{comment}", "id": "12d8d2c7-f137-4970-9abe-a14165af9428", "level": "paragraph", "origin_cites_number": 1, "parent_id": "1497a56b-96a9-4eaf-9373-f2b742ebc789", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Privacy of pervasive AI systems" ], [ "subsection", "Privacy for pervasive inference" ], [ "subsubsection", "Defense techniques and solutions" ], [ "paragraph", "Distribution for privacy" ] ], "subsections": [], "title": "Distribution for privacy" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{itemize}\n\\item \nCurrent works addressing pervasive AI privacy proved their efficiency while trying to maintain an acceptable accuracy. However, some of these efforts incur significant extra communication and computation costs, while others incorporate new hyper-parameters that not only affect the accuracy but also\ndistress the communication.\n \\item Most of the efforts in the literature explored the attacks that target the federated learning and the possible defenses. However, only little research investigated the threats facing distributed inference. 
More specifically, changing the intermediate data or injecting malicious information to mislead the prediction has not been studied yet.
 \item Limited efforts have investigated privacy and security in multi-agent reinforcement learning. This could be attributed to the satisfactory performance of the well-known defense mechanisms (e.g., DP) when applied to multi-agent systems without many modifications. It is also worth mentioning that these privacy and security protocols introduce additional communication and computation requirements, which are already high in the case of multi-agent learning. Thus, most works assume that the agents are trusted and focus on minimizing communication and computational resource utilization.
\end{itemize}
\begin{figure*}[!h]
\centering
	\includegraphics[scale=0.43]{Figures/FutureDirections1.pdf}
	\caption{Future directions and open challenges.}
	\label{future_illustrated}
\end{figure*}", "id": "81fd49d3-5877-495b-ac06-bc69dbc0d31a", "level": "subsection", "origin_cites_number": 0, "parent_id": "9df0fd96-bdea-482d-88b7-240ff428f334", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Privacy of pervasive AI systems" ], [ "subsection", "Lessons Learned" ] ], "subsections": [], "title": "Lessons Learned" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{future}
In this section, we present a list of open challenges and issues facing pervasive AI systems, and we propose some promising ideas to mitigate these issues. Specifically, we introduce the opportunities to integrate pervasive AI into emerging systems, and we suggest some future directions for efficient distributed inference and enhanced federated learning algorithms. Finally, we present some innovative ideas related to the new concepts of multi-agent reinforcement learning. Fig.
\\ref{future_illustrated} presents an illustration of the proposed directions.", "id": "01d3f0a9-9cb9-438d-bb09-071f097b269c", "level": "section", "origin_cites_number": 0, "parent_id": "7aedf107-610a-4e90-9d9d-b0a1b2dcb03c", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Future directions and open challenges" ] ], "subsections": [ "7787760b-199f-46be-baca-8a1610f30587", "94d6caee-ed7e-43e0-93c5-bd7d0a529643", "e41078b4-d37b-46ef-9b13-b3186be7dfd7", "56e15db5-6a15-4c2d-a643-b7f908271672" ], "title": "Future directions and open challenges" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "7787760b-199f-46be-baca-8a1610f30587", "level": "subsection", "origin_cites_number": 0, "parent_id": "01d3f0a9-9cb9-438d-bb09-071f097b269c", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Future directions and open challenges" ], [ "subsection", "Deployment of Pervasive AI in emerging systems" ] ], "subsections": [ "b44fd244-9c67-458c-95ac-769538143b6d", "f2960d2d-9b22-4113-b829-dc95ef3927f3", "90762f2e-5c4e-458b-bcc8-8e357bb67e8a" ], "title": "Deployment of Pervasive AI in emerging systems" }, { "cite_extract_rate": 1, "cites": [ 7140 ], "content": "While the 5G main goal is to provide high speed mobile services, the 6G pledges to establish next-generation softwarization and improve the network configurability in order to support pervasive AI services deployed on ubiquitous devices. However, the research on 6G is still in its infancy, and only the first steps are taken to conceptualize its design, study its implementation, and plan for use cases. 
Toward this end, academia and industry communities should move from theoretical studies of AI distribution to real-world deployment and standardization, aiming to establish the concept of Pervasive AI-as-a-service (PAIaas). PAIaas allows service operators and AI developers to be more domain-specific and to focus on enhancing users’ quality of experience, instead of worrying about task distribution. Moreover, it makes it possible to systematize mass production and to unify the interfaces to access the joint software that gathers all participants and applications.
Some recent works, including and , have started to design distributed AI services. However, the authors neither presented an end-to-end architecture enclosing the whole process nor envisaged an automated, trusted management of service provisioning.
In this context, several considerations should be examined first: (1) the design of an incentive mechanism to motivate different nodes to take over AI tasks and sacrifice their memory, energy, communication, and computation resources in exchange for rewards (e.g., monetary remuneration or free access to services); (2) in addition to the security of the private data, the security of the participants’ information should also be guaranteed (e.g., locations, identifiers, and capacities). Recently, blockchain has gained wide popularity as a decentralized ledger that manages transaction records across distributed devices while ensuring trusted communication. Moreover, the aforementioned incentive mechanism can also be handled by blockchain systems. More specifically, all source devices and pervasive nodes first register with the blockchain system to benefit from the distributed AI or to participate in the computation. Then, data-generating devices request help to accomplish a task and, at the same time, submit a transaction application with a reward to the blockchain. Next, when the joining devices complete the offloaded tasks, they return the results to the source device and validate the completion of the transaction. Finally, the recorded participants are rewarded according to their contributions to the blockchain transaction. The edge-based blockchain has promising potential to prevent the security threats of transferring data between heterogeneous, decentralized, and untrusted devices. However, this approach is still in its infancy.
In particular, deploying it on resource-constrained devices is challenging due to the huge energy and computation load of blockchain mining .
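The four-step workflow above can be illustrated with a minimal, hypothetical in-memory ledger. All class, method, and device names below are illustrative assumptions; a real deployment would add consensus, mining, and cryptographic identities:

```python
from dataclasses import dataclass, field

# Hypothetical minimal ledger sketching the incentive workflow described above:
# (1) register, (2) post a task with a reward, (3) record completions,
# (4) validate the transaction and distribute the reward.

@dataclass
class Transaction:
    task_id: str
    requester: str
    reward: float
    workers: list = field(default_factory=list)  # participants who completed sub-tasks
    validated: bool = False

class IncentiveLedger:
    def __init__(self):
        self.members = set()   # step 1: devices registered to the system
        self.pending = {}      # open transactions awaiting completion
        self.balances = {}     # accumulated rewards per participant

    def register(self, device):
        self.members.add(device)
        self.balances.setdefault(device, 0.0)

    def submit_task(self, task_id, requester, reward):
        # step 2: a source device posts a task together with a reward
        assert requester in self.members, "requester must be registered"
        self.pending[task_id] = Transaction(task_id, requester, reward)

    def complete(self, task_id, worker):
        # step 3: a participant returns its result and is recorded
        assert worker in self.members, "worker must be registered"
        self.pending[task_id].workers.append(worker)

    def validate(self, task_id):
        # step 4: the transaction is validated and the reward is split
        # equally among contributors (an illustrative reward rule)
        tx = self.pending.pop(task_id)
        tx.validated = True
        share = tx.reward / len(tx.workers)
        for w in tx.workers:
            self.balances[w] += share
        return tx

ledger = IncentiveLedger()
for d in ("camera-1", "edge-a", "edge-b"):
    ledger.register(d)
ledger.submit_task("infer-42", "camera-1", reward=1.0)
ledger.complete("infer-42", "edge-a")
ledger.complete("infer-42", "edge-b")
tx = ledger.validate("infer-42")
print(ledger.balances["edge-a"], ledger.balances["edge-b"])  # 0.5 0.5
```

The equal-split reward rule is only one possibility; a contribution-weighted split (e.g., by FLOPs executed) would fit the same interface.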
More specifically, by distributing the inference, the model becomes algorithmically transparent, and each segment can be interpreted and clustered by its importance for the prediction. (2) Moreover, one of the most important directions supporting XAI is model debugging. Debugging a DNN makes it possible to detect errors and understand their sources and their influence on misleading the prediction. A distributed model produces fine-grained outputs that help to follow the inference process and localize errors before they reach the prediction layer. (3) A third direction to explain AI is the extraction of data samples that are highly correlated with the results generated by the model. In fact, similarly to how humans analyze examples when trying to understand a process, samples are analyzed to grasp the inner correlations derived by the AI model. Federated learning is based on clustering data entries and training local models. This technique permits narrowing the search for examples and enables detecting the inputs that most influence the model behavior.
Research on XAI is still in its infancy, and pervasive DNN computing appears to be a promising environment in which to track the AI process and interpret its results.
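As a toy illustration of the decomposability idea above (the random weights and segment boundaries are purely illustrative assumptions), one can ablate the hidden units held by each device and rank the segments by how much the output changes:

```python
import numpy as np

# Toy two-layer network whose hidden units are split into three device-held
# segments; ablating one segment at a time measures its importance for the
# prediction. All values are random and illustrative.

rng = np.random.default_rng(2)
W1, W2 = rng.standard_normal((12, 5)), rng.standard_normal((3, 12))
x = rng.standard_normal(5)
segments = [range(0, 4), range(4, 8), range(8, 12)]  # hidden units per device

def predict(mask=None):
    h = np.maximum(W1 @ x, 0)          # ReLU hidden layer
    if mask is not None:
        h[mask] = 0.0                  # ablate one segment's activations
    return W2 @ h

base = predict()
importance = [np.linalg.norm(base - predict(list(s))) for s in segments]
print(int(np.argmax(importance)))      # index of the most influential segment
```

Clustering segments by such importance scores is one concrete way the distributed setting exposes internals that a monolithic model would hide.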
As an example, capturing high-quality images makes it possible to obtain good predictions using smaller networks. In this scenario, early-exit techniques or squeezed models can be adopted.
Because of these network dynamics, pervasive systems deploying distributed inference need a well-designed online resource orchestration and participant-selection strategy to support a large number of AI services with minimum latency. Meanwhile, the heterogeneous and limited resources and the high-dimensional parameters of the DNN should be taken into consideration. In section \ref{PI}, we introduced existing theoretical approaches to split different DNNs and distribute the resultant segments across pervasive devices to optimize pervasive computing . Nonetheless, most of them focused on how to partition the model in order to maximize model parallelization and minimize the dependency between participants. Yet, no relevant work has deeply studied the performance of inference distribution or reported the bottlenecks and gains of such an approach in long-term online resource orchestration, under different request loads and dynamic behavior of participants and sources. In other words, data parallelization is not well investigated in the literature, where sources can generate multiple requests at the same time and offload them to neighboring devices. In this scenario, the critical decision of each device is whether to process the same task from different requests, minimizing the memory needed to store the filters’ weights, or to compute sequential tasks from the same request, reducing the transmission among participants. Furthermore, age-aware inference is an important factor to foresee in online data parallelization. In fact, some requests are highly sensitive to delays and need to be processed in a timely manner, such as those of self-driving cars, whereas others are less critical, including machine translation and recommendation systems.
Thus, prioritizing urgent tasks and assigning them better resources and more intensive data parallelization is of high importance. We believe that pervasive AI computing should focus more on online configuration to implement the above vision.
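A minimal sketch of such age-aware prioritization, with hypothetical deadlines and device speeds (all names and numbers are illustrative), could pair the most urgent request with the fastest free device:

```python
import heapq

# Illustrative greedy scheduler: urgent requests (smallest deadline) are
# dequeued first and matched to the fastest currently-available device.

def schedule(requests, devices):
    """requests: list of (deadline_s, name); devices: dict name -> speed."""
    queue = [(deadline, name) for deadline, name in requests]
    heapq.heapify(queue)                                   # most urgent first
    free = sorted(devices, key=devices.get, reverse=True)  # fastest device first
    assignment = {}
    while queue and free:
        _, req = heapq.heappop(queue)
        assignment[req] = free.pop(0)                      # urgent task gets the best resource
    return assignment

plan = schedule(
    requests=[(5.0, "translation"), (0.1, "self-driving"), (1.0, "surveillance")],
    devices={"edge-gpu": 10.0, "phone": 2.0, "sensor-hub": 1.0},
)
print(plan)  # {'self-driving': 'edge-gpu', 'surveillance': 'phone', 'translation': 'sensor-hub'}
```

An online orchestrator would re-run such a matching as requests arrive and devices join or leave, rather than once over a fixed batch.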
A notable recent work in and proposed to use the distribution itself for data privacy, without applying any additional task that requires computation overhead. In fact, per-segment splitting leads by design to assigning only some features of the input data to each participant. The authors of this work applied filter partitioning and conducted empirical experiments to test the efficiency of black-box attacks on different segment sizes (i.e., the number of feature maps per device). The lower the number of feature maps per device, the higher the privacy. However, filter partitioning incurs a high communication load and dependency between devices. This study is still immature. Other partitioning strategies (e.g., channel and spatial) can be examined to identify the optimal partitioning and distribution that guarantee satisfactory privacy and minimum resource utilization per participant.
Recent works, e.g., , proposed to avoid remote AI inferences and leverage the computation capacity of ground robots to accomplish the predictive tasks. However, only a few works have covered the distribution of the inference among flying drones, which are characterized by faster navigation, higher power consumption, and the ability to reach areas with high interference (e.g., high-rise buildings) compared to ground devices . Moreover, recent efforts did not cover path planning for different moving robots to complete their assigned missions while performing latency-aware predictions. More specifically, the time period from capturing the data to the moment when the tasks from all points are collected should be minimized by optimizing the devices’ trajectories and planning close paths for participants handling subsequent segments. Furthermore, the trajectories of devices with available resources should cross the paths of nodes that need to offload tasks because of resource constraints.
In this context, the split point is chosen based on the size of the shared data, the resource capability of the end-device, and the network capacity. This DNN partitioning approach may work well for standard sequential models, where filters sequentially reduce the size of the intermediate data. However, state-of-the-art networks do not only include sequential layers with shrinking outputs. Indeed, generative adversarial networks (GANs) have proved their efficiency for image generation, image quality enhancement, text-to-image synthesis, etc. Auto-encoders have also shown good performance for image generation, compression, and denoising. These types of networks have large-sized inputs and outputs. Hence, despite the reduced intermediate data, the cloud servers have to return large-sized results to the source device, which implies a high transmission overhead. Another family of efficient neural networks is the RNN (see section \ref{DRN}) , used mostly for speech recognition and natural language processing. These networks include loops in their structures and multiple outputs per layer, which imposes multiple communications with remote servers in case of partitioning. Other complex DNN structures, such as randomly wired networks and Boltzmann Machines (BMs) with non-sequential dependencies, defeat the remote-collaboration wisdom. Keeping up with ever-advancing deep learning designs is a major challenge for per-layer splitting, particularly for remote collaboration.
Based on these insights, the scheduling of DNN partitioning should have various patterns depending on the model structure.", "id": "1a610d15-77eb-49a9-a0e6-ea927023f779", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "94d6caee-ed7e-43e0-93c5-bd7d0a529643", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Future directions and open challenges" ], [ "subsection", "Efficient algorithms for pervasive inference" ], [ "subsubsection", "Remote inference of non-sequential DNN models" ] ], "subsections": [], "title": "Remote inference of non-sequential DNN models" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 3488, 3460 ], "content": "When a deep neural network is split into small segments and distributed among multiple physical devices, the risk of nodes failure is increased, which leads to performance drop and even inference abortion. The typical networking wisdom resorts to re-transmission mechanisms along with scheduling redundant paths. These failure management techniques inevitably consume additional bandwidths. The DNNs are characterized by a unique structure that may enclose skip connections, convolutional neural connections, and recurrent links. These features of state-of-the-art networks implicitly increase the robustness and resiliency of the joint inference. More specifically, skip blocks allow receiving information from an intermediate layer in addition to the data fed from the previous one. These connections, serving as a memory for some DL models (e.g., ResNet), can play the role of fault-tolerant paths. If one of the devices fails or leaves the joint system, information from a prior participant can still be propagated forward to the current device via the skip blocks, which adds some failure resiliency. 
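For the standard sequential case discussed above, the split-point choice can be sketched as a simple search over layer cuts. The per-layer latencies, tensor sizes, and bandwidth below are toy assumptions, not measurements:

```python
# Minimal sketch of split-point selection for a purely sequential model:
# pick the layer cut minimizing total latency = device compute up to the cut
# + transfer of the crossing tensor + cloud compute for the remaining layers.

def best_split(device_ms, cloud_ms, sizes_mb, bandwidth_mbps):
    """sizes_mb[k] = data crossing the network when splitting after layer k
    (sizes_mb[0] is the raw input). Returns (best_cut, latency_ms)."""
    n = len(device_ms)
    options = []
    for cut in range(n + 1):                    # cut = 0: all cloud; cut = n: all device
        transfer_ms = sizes_mb[cut] * 8 / bandwidth_mbps * 1000
        total = sum(device_ms[:cut]) + transfer_ms + sum(cloud_ms[cut:])
        options.append((total, cut))
    latency, cut = min(options)
    return cut, latency

# Toy 4-layer model: intermediate data shrinks with depth, cloud is 10x faster.
cut, ms = best_split(
    device_ms=[30, 40, 50, 20],
    cloud_ms=[3, 4, 5, 2],
    sizes_mb=[4.0, 2.0, 0.5, 0.1, 0.01],
    bandwidth_mbps=10.0,
)
print(cut, round(ms, 1))  # 4 148.0
```

With this slow link, fully on-device execution wins; exactly this kind of search breaks down for the GAN, auto-encoder, and RNN structures discussed above, whose crossing tensors are not a single shrinking activation.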
Skip connections have proved an unprecedented ability to enhance the accuracy of deep models, in addition to their potential to strengthen the fault-tolerance of pervasive computing. However, they incur transmission overheads, particularly in failure-free systems. Thus, a trade-off between accuracy, resilience, and resource utilization should be envisaged. Another vision to be investigated is to train the system without skip connections and use them only in case of failures. This idea is inspired by the Dropout technique, which is used to reduce overfitting: it randomly drops some neurons during training and activates all of them during inference. Studying the impact of cutting off some transmissions during inference for different splitting strategies, while re-thinking dropout-style training, is an interesting way to strengthen the fault-tolerance of pervasive computing. Very recent works have started to discuss such insights; however, they are still immature.
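A toy forward pass (random weights and an illustrative segment count, not any specific architecture) shows how identity skip paths let the joint inference survive a participant failure:

```python
import numpy as np

# Chain of per-device residual segments with identity skip connections.
# If a device fails, its segment is skipped and the activation is carried
# forward over the skip path, so inference degrades gracefully instead of
# aborting.

rng = np.random.default_rng(0)
segments = [rng.standard_normal((8, 8)) * 0.1 for _ in range(4)]  # one block per device

def forward(x, failed=frozenset()):
    for i, w in enumerate(segments):
        if i in failed:
            continue                 # skip path: pass x through unchanged
        x = np.tanh(w @ x) + x       # residual block: f(x) + skip(x)
    return x

x = rng.standard_normal(8)
full = forward(x)
degraded = forward(x, failed={2})    # device 2 left the joint system
print(np.allclose(full, degraded))   # False: output differs, but inference completes
```

Quantifying the accuracy drop of such degraded paths against the bandwidth saved by omitting skip transmissions is exactly the trade-off the paragraph above calls for.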
However, in terms of memory, only the size of the input data is considered, whereas the memory needed to store the DNN weights is never taken into account.
For example, the VGG-16 model has 138 M parameters and requires over 500 MB to store its filters . What worsens the situation is that some partitions impose copying the filters to all participants (e.g., spatial splitting). Moreover, if the intelligent application is driven by multiple DNN models and different segments are assigned to each device, a huge memory burden is experienced. Therefore, data-locality-aware algorithms should be designed. More specifically, the distribution system has to account for the past tasks assigned to each participant and try to maximize the re-usability of previously-stored weights, with consideration for the capacity of the devices. Minimizing the number of weights assigned to each participant not only contributes to reducing memory usage, but also strengthens the privacy of the structure against white-box attacks .
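A greedy sketch of such a data-locality-aware assignment (device names, capacities, and segment sizes are illustrative assumptions) reuses cached weights first and ships new copies only when necessary:

```python
# Greedy placement: prefer a device that already caches a segment's weights;
# otherwise pick the device with the most free memory and ship the weights,
# tracking how much weight traffic the policy incurs.

def assign(tasks, cache, capacity_mb, seg_size_mb):
    """tasks: list of segment ids; cache: dict device -> set of cached segments."""
    placement, shipped = {}, 0.0
    for seg in tasks:
        # prefer any device that already holds this segment's weights
        holder = next((d for d, segs in cache.items() if seg in segs), None)
        if holder is None:
            holder = max(cache, key=lambda d: capacity_mb[d] - len(cache[d]) * seg_size_mb)
            cache[holder].add(seg)
            shipped += seg_size_mb   # weights must be transferred once
        placement[seg] = holder
    return placement, shipped

cache = {"edge-a": {"conv1"}, "edge-b": set()}
placement, shipped = assign(
    tasks=["conv1", "conv2", "conv1"],
    cache=cache, capacity_mb={"edge-a": 64, "edge-b": 128}, seg_size_mb=16.0,
)
print(placement, shipped)  # {'conv1': 'edge-a', 'conv2': 'edge-b'} 16.0
```

The repeated `conv1` request costs no extra traffic because its weights are already cached on `edge-a`, which is the re-usability effect argued for above.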
One of the interesting examples of nanomedicine is the detection of diabetes by analyzing human breath.
Before nanotechnology, it was not possible to precisely detect nano-scale biomarkers. Nowadays, intelligent and invisible nano-sensors can be trained to sniff human breath and analyze the concentration of specific particles. Still, the full potential of nanomedicine (e.g., drug-delivery systems and precision cancer medicine) is yet to be realized.
To guarantee that nanoparticles achieve the targeted objectives, large amounts of data and computational analysis are required. While traditional techniques call for an in-depth understanding of biological and chemical knowledge, AI only requires training data. Thus, it is highly interesting to integrate AI to evaluate and formulate nanoscale particles . However, these particles suffer from a small energy capacity that limits their communication with remote devices (e.g., handheld mobiles and computers). Hence, distributing the inference within the nano-sensors can provide localized processing and minimize data transmission. In this context, new partitioning strategies should be envisaged, as the existing ones do not fit the extremely limited computational resources of the particles. In particular, even neuron, spatial, or filter splitting, which involves numerous multiplications, is considered too complex for such particles.
Thus, per-multiplication partitioning and the related dependency between millions of nano-participants have to be investigated to ensure the practicality of this futuristic convergence between pervasive AI and nanotechnology.", "id": "742cb413-f464-4d59-ac3e-62bcedb0ef4c", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "94d6caee-ed7e-43e0-93c5-bd7d0a529643", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Future directions and open challenges" ], [ "subsection", "Efficient algorithms for pervasive inference" ], [ "subsubsection", "Pervasive inference for nanotechnology applications" ] ], "subsections": [], "title": "Pervasive inference for nanotechnology applications" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "e41078b4-d37b-46ef-9b13-b3186be7dfd7", "level": "subsection", "origin_cites_number": 0, "parent_id": "01d3f0a9-9cb9-438d-bb09-071f097b269c", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Future directions and open challenges" ], [ "subsection", "Enhanced federated learning algorithms" ] ], "subsections": [ "e7f9569c-7e49-4ace-9767-eb159db38879", "55b167bc-3d59-475a-b656-22fc4ca1df47" ], "title": "Enhanced federated learning algorithms" }, { "cite_extract_rate": 0, "cites": [], "content": "Given the main limitations of FL in terms of communication overheads and slow convergence, combining AL concept with emerging FL schemes would be of great interest. Since most of the existing schemes for FL suffer from slow convergence, a novel active FL solution would be needed, which exploits the distributed nature of FL, while coping with highly dynamic environments and ensuring adequately fast convergence. 
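The envisioned per-multiplication partitioning can be sketched for a single neuron (toy weights and inputs, purely for illustration): each nano-participant performs exactly one multiplication, and an aggregator sums the partial products:

```python
# One neuron's dot product split at the finest granularity: each participant
# computes a single weight-input product; an aggregator adds them up.

weights = [0.5, -1.0, 2.0, 0.25]
inputs = [2.0, 1.0, 0.5, 4.0]

partials = [w * x for w, x in zip(weights, inputs)]  # one multiplication per participant
neuron_output = sum(partials)                         # aggregation step
print(neuron_output)  # 2.0
```

The open question raised above is precisely the cost of this design: millions of such participants imply a dependency and aggregation structure far denser than neuron- or filter-level splitting.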
Indeed, the heterogeneity of the local training data at distributed participating nodes, together with considering all nodes in the FL process, can significantly slow down the convergence. Full node participation forces the centralized server to wait for the stragglers. Thus, we envision that: (1) exchanging some side information between different participating nodes (e.g., unique data samples or the class distribution) can significantly help in tackling the data heterogeneity problem; (2) adopting partial node participation through efficient user selection schemes can play an important role in decreasing communication overheads and accelerating the convergence. A preliminary study of this approach can be found in .
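A toy simulation of such partial participation (synthetic linear data; the top-k-fastest selection rule is an illustrative assumption, not the cited scheme) shows that averaging only the selected clients' updates still converges while never waiting for stragglers:

```python
import numpy as np

# Federated averaging with partial participation: each round, only the k
# fastest clients run a local gradient step and the server averages them.
# Data, speeds, and hyperparameters are synthetic and illustrative.

rng = np.random.default_rng(1)
n_clients, dim, k = 8, 4, 3
true_w = rng.standard_normal(dim)             # ground-truth linear model
speeds = rng.uniform(1, 10, n_clients)        # higher = faster device

def local_update(w, lr=0.1, samples=32):
    X = rng.standard_normal((samples, dim))
    y = X @ true_w                            # local data drawn from the same model
    grad = X.T @ (X @ w - y) / samples
    return w - lr * grad

w = np.zeros(dim)
for _ in range(50):
    selected = np.argsort(speeds)[-k:]        # user selection: top-k fastest clients
    updates = [local_update(w) for _ in selected]
    w = np.mean(updates, axis=0)              # aggregate only the selected clients

print(float(np.linalg.norm(w - true_w)) < 0.5)  # True
```

In a heterogeneous-data setting, a purely speed-based rule would bias the model toward fast clients' distributions, which is why the envisioned selection schemes must also account for data statistics.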
Hence, blending \textit{inter-data parallelism}, where the training data is distributed, and \textit{intra-data parallelism}, where the intermediate data of the model are partitioned, can be a feasible solution to enable training on non-GPU devices. Certainly, the practicality, gains, and bottlenecks of such an approach are to be examined and studied, as the backpropagation characterizing the training phase imposes a huge dependency and communication load between devices.
interesting to see their implications for practical applications, for example, quantifying the effect of bounded communication resources or energy in wearable devices and of congestion between edge nodes. Similarly, quantifying how improved regret bounds translate into actual Quality of Experience (QoE) metrics can be promising.
Potential regret improvements and communication-resource utilization in the multi-agent settings of non-stochastic and infinite-action bandits remain to be investigated.
While the effect of this computational heterogeneity is heavily studied in supervised federated learning , it is not yet investigated in either distributed or federated bandits.", "id": "bc4fd7b2-df00-4f6c-908e-107c575662d1", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "56e15db5-6a15-4c2d-a643-b7f908271672", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Future directions and open challenges" ], [ "subsection", "Communication-efficient multi-agent reinforcement learning" ], [ "subsubsection", "Heterogeneity of Bandit agents" ] ], "subsections": [], "title": "Heterogeneity of Bandit agents" }, { "cite_extract_rate": 0, "cites": [], "content": "Methods that train in a logically centralized server and then execute in a decentralized manner (CTDE) are able to communicate less (or even not at all) at execution while being able to learn a good joint policy due to the central training phase, as illustrated earlier. However, their adaptability is not guaranteed when dealing with a non-stationary environment, and they might require re-training to adapt to the new environment. On the other hand, fully decentralized agents can continue learning throughout their deployment but need to communicate more often to reason about their joint action. Otherwise, learning can be challenging and might diverge . A natural goal is to design adaptable methods that communicate conservatively, which is the main motivation behind scheduling in learned communication. 
Thus, more work is needed to address the question of adaptable and communication-cognizant MARL.", "id": "dd1e210e-a2bb-4c97-b36c-37babb19b7c3", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "56e15db5-6a15-4c2d-a643-b7f908271672", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Future directions and open challenges" ], [ "subsection", "Communication-efficient multi-agent reinforcement learning" ], [ "subsubsection", " MARL performance/communication trade-off" ] ], "subsections": [], "title": " MARL performance/communication trade-off" }, { "cite_extract_rate": 0, "cites": [], "content": "Several communication characteristics have not been investigated under the MDP and POMG settings. For example, while delay, noise, failure, and time-varying topologies are vital factors in today's practical networks, they were not considered in most MARL papers. These factors were, however, considered in other optimization frameworks like multi-agent (distributed) convex optimization . Some works have started to study bandwidth and multiple-access aspects . 
Yet, it is important to study the performance of emerging policies of MARL under realistic networking constraints.", "id": "318b6540-67a3-4077-858a-f0b556a00e15", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "56e15db5-6a15-4c2d-a643-b7f908271672", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Future directions and open challenges" ], [ "subsection", "Communication-efficient multi-agent reinforcement learning" ], [ "subsubsection", "MARL under networking constraints" ] ], "subsections": [], "title": "MARL under networking constraints" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{conclusion}\nRecently, AI and pervasive computing have drawn the attention of academia and industrial verticals, as their confluence has proved highly effective in enhancing human productivity and lifestyle. Particularly, the computing capacities offered by the massive number of ubiquitous devices open up an attractive opportunity to fuel the continuously advancing and pervasive IoT services, transforming all aspects of our modern life. In this survey, we presented a comprehensive review of the resource allocation and communication challenges of pervasive AI systems, enabling support for a plethora of latency-sensitive applications. More specifically, we first presented the fundamentals of AI networks, applications and performance metrics, and the taxonomy of pervasive computing and its intersection with AI. Then, we summarized the resource management algorithms for distributed training and inference. In this context, partitioning strategies, architectures, and communication issues and solutions were extensively reviewed. Additionally, relevant use cases were described and futuristic applications were discussed. 
The challenges encountered in this paper revolve around choosing the categorization of different AI distribution strategies, as, for example, MARL can be classified under the umbrella of pervasive training, pervasive decision-making, or simply pervasive online learning. 
Multiple challenges remain to be addressed to further improve performance, resource management, privacy, and avant-garde applications. Therefore, we presented our vision of technical challenges and directions that may emerge in the future, along with some opportunities for innovation. We hope that this survey will elicit fruitful discussion and inspire promising new ideas. 
\section*{Acknowledgment}
This work was made possible by NPRP grant NPRP12S-0305-190231 and NPRP13S-0205-200265 from the Qatar National Research Fund (a member of Qatar Foundation). The findings achieved herein are solely the responsibility of the authors. 
\ifCLASSOPTIONcaptionsoff
 \newpage
\fi
\bibliographystyle{IEEEtran}
\bibliography{References}
\balance
\end{document}", "id": "8fd87342-e326-4cd7-af47-a1b93c69a12c", "level": "section", "origin_cites_number": 0, "parent_id": "7aedf107-610a-4e90-9d9d-b0a1b2dcb03c", "prefix_titles": [ [ "title", "Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
120
[ 3409, 97, 514, 301, 7008, 8655, 859, 3410, 3411, 166, 7430, 895, 305, 3413, 3412, 7217, 620, 1390, 2219, 3414, 3415, 603, 3416, 3417, 887, 3418, 8656, 3419, 1315, 9123, 3420, 7719, 659, 3427, 8657, 3428, 3426, 3425, 1313, 3422, 660, 547, 3423, 3424, 3421, 582, 602, 3430, 3429, 7138, 619, 7721, 7720, 616, 3431, 3434, 3433, 652, 643, 646, 7139, 3432, 3435, 3439, 7723, 7724, 3436, 3437, 9098, 3438, 7722, 7725, 8658, 3440, 7726, 3444, 3448, 3450, 3441, 3449, 3442, 3447, 3446, 8659, 3443, 3445, 1050, 3452, 3451, 3454, 3453, 3455, 3357, 7198, 3458, 689, 3456, 3457, 3460, 2672, 3462, 3459, 3461, 8660, 3463, 3464, 3465, 8648, 3466, 3469, 3468, 3467, 3471, 7727, 3470, 7608, 3472, 3473, 3474, 3475, 3476, 3477, 3478, 3480, 3483, 3484, 3482, 3481, 3479, 3485, 629, 7140, 3486, 3487, 3488, 3489 ]
1.142178
[ "Nishant Subramani", "Alexandre Matton", "Malcolm Greaves", "Adrian Lam" ]
A Survey of Deep Learning Approaches for OCR and Document Understanding
2020
2020-11-27T03:05:59Z
cs.CL
Documents are a core part of many businesses in many fields such as law, finance, and technology among others. Automatic understanding of documents such as invoices, contracts, and resumes is lucrative, opening up many new avenues of business. The fields of natural language processing and computer vision have seen tremendous progress through the development of deep learning such that these methods have started to become infused in contemporary document understanding systems. In this survey paper, we review different techniques for document understanding for documents written in English and consolidate methodologies present in literature to act as a jumping-off point for researchers exploring this area.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "7dee3631-5c10-4ff0-83ae-c48619c9e5c2", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ] ], "subsections": [ "e24da37f-c2e7-4e31-814a-850569408926", "255c2a97-23ca-4f9c-97e1-17bc8daf6522", "ab1c72d1-5fe4-43bf-8325-08c660ce0dd5", "872cb3d9-2ab2-43da-9f85-2dd63e5d76ea", "5c3c58c6-8219-43ce-81f3-0bc1c7ebaf0b", "764f42ce-1e52-4887-abf2-823b9019a5c2" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "Humans compose documents to record and preserve information. As information carrying vehicles, documents are written using different layouts to represent diverse sets of information for a variety of different consumers.\nIn this work, we look at the problem of document understanding for documents written in English. Here, we take the term document understanding to mean the automated process of reading, interpreting, and extracting information from the written text and illustrated figures contained within a document's pages. From the perspective as practitioners of machine learning, this survey covers the methods by which we build models to automatically understand documents that were originally composed for human consumption.\nDocument understanding models take in documents and segment pages of documents into useful parts (i.e. regions corresponding to a specific table or property), often using optical character recognition (OCR)~ with some level of document layout analysis.\nThese models use this information to understand the contents of the document at large, e.g. 
that this region or bounding box corresponds to an address.\nIn this survey, we focus on these aspects of document understanding at a more granular level and discuss popular methods for these tasks.\nOur goal is to summarize the approaches present in modern document understanding and highlight current trends and limitations.\nIn Section~\\ref{sec:document-processing-understanding}, we discuss some general themes in modern NLP and document understanding and provide a framework for building end-to-end automated document understanding systems.\nNext, in Section~\\ref{sec:ocr}, we look at the best methods for OCR encompassing both text detection (Section~\\ref{sec:text-detection}) and text transcription (Section~\\ref{sec:text-transcription}).\nWe take a broader view of the document understanding problem in section~\\ref{sec:layout-analysis}, presenting multiple approaches to document layout analysis: the problem of locating relevant information on each page.\nFollowing this, we discuss popular approaches for information extraction (Section~\\ref{sec:information-extraction}).", "id": "e24da37f-c2e7-4e31-814a-850569408926", "level": "section", "origin_cites_number": 1, "parent_id": "7dee3631-5c10-4ff0-83ae-c48619c9e5c2", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.769230769230769, "cites": [ 795, 303, 206, 115, 800, 798, 8385, 38, 7, 791, 793, 797, 792, 7298, 796, 8384, 2401, 794, 790, 799 ], "content": "\\label{sec:document-processing-understanding}\nDocument processing historically involved handcrafted rule-based algorithms~, but with the widespread success of deep learning~, computer vision (CV) and natural language processing (NLP) based methods have come to the fore.\nAdvancements in object detection and image segmentation have led to systems that edge close to human performance on a variety of 
tasks~.\nAs a result, these methods have been applied to a variety of other domains including NLP and speech~.\nSince documents can be read and viewed as a visual information medium, many practitioners leverage computer vision techniques as well and use them for text detection and instance segmentation~. \nWe cover specific methods to do these in Sections~\ref{sec:text-detection} and~\ref{sec:instance-segmentation}.\nThe widespread success and popularity of large pretrained language models such as ELMo and BERT have caused document understanding to shift towards using deep learning based models~.\nThese models can be fine-tuned for a variety of tasks and have replaced word vectors as the de-facto standard for pretraining for natural language tasks.\nHowever, language models, both recurrent neural network based and transformer based~, struggle with long sequences~.\nGiven that texts can be very dense and long in business documents, model architecture modifications are necessary.\nThe simplest approach is to truncate documents into smaller sequences of 512 tokens such that pretrained language models can be used off-the-shelf~. Another approach that has gained traction recently is based on reducing the complexity of the self-attention component of transformer-based language models .\nAll effective, modern, end-to-end document understanding systems present in the literature integrate multiple deep neural network architectures for both reading and comprehending a document's content. Since documents are made for humans, not machines, practitioners must combine CV as well as NLP architectures into a unified solution. While specific use cases will dictate the exact techniques used, a full end-to-end system employs:\n\begin{itemize}\n \item A computer-vision based document layout analysis module, which partitions each document page into distinct content regions. 
This model not only delineates between relevant and irrelevant regions, but also serves to categorize the type of content it identifies.\n \\item An optical character recognition (OCR) model, whose purpose is to locate and faithfully transcribe all written text present in the document. Straddling the boundary between CV and NLP, OCR models may either use document layout analysis directly or solve the problem in an independent fashion.\n \\item Information extraction models that use the output of OCR or document layout analysis to comprehend and identify relationships between the information that is being conveyed in the document. Usually specialized to a particular domain and task, these models provide the structure necessary to make a document machine readable, providing utility in document understanding.\n\\end{itemize}\nIn the following sections, we expand upon these concepts that constitute an end-to-end document understanding solution.", "id": "255c2a97-23ca-4f9c-97e1-17bc8daf6522", "level": "section", "origin_cites_number": 26, "parent_id": "7dee3631-5c10-4ff0-83ae-c48619c9e5c2", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ], [ "section", "Document Processing \\& Understanding" ] ], "subsections": [], "title": "Document Processing \\& Understanding" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:ocr}\nOCR has two primary components: text detection and text transcription.\nGenerally, these two components are separate and employ different models for each task.\nBelow, we discuss state-of-the-art methods for each of these components and show how a document can be processed through different generic OCR systems. 
See Figure~\ref{fig:ocr-figure} for details.", "id": "ab1c72d1-5fe4-43bf-8325-08c660ce0dd5", "level": "section", "origin_cites_number": 0, "parent_id": "7dee3631-5c10-4ff0-83ae-c48619c9e5c2", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ], [ "section", "Optical Character Recognition" ] ], "subsections": [ "db6bda7f-5ad5-4c06-9040-d3c4c7598936", "db0f4447-7e9b-466c-9b8c-6262fb55d937", "fbc2aaa2-b236-4423-af03-87a3371531ce", "7012ceef-6a64-491f-99cb-bbe0d0a38dd7", "e9b51816-66b8-415f-93f7-12c02ebbd0d5" ], "title": "Optical Character Recognition" }, { "cite_extract_rate": 1, "cites": [ 801 ], "content": "\label{sec:text-detection}\nText detection is the task of finding text present in a page or image.\nThe input, an image, is often represented by a three-dimensional tensor, $C \times H \times W$, where $C$ is the number of channels (often three, for red, green and blue), $H$ is the height, and $W$ is the width of the image.\nText detection is a challenging problem because text comes in a variety of shapes and orientations and can often be distorted.\nWe explore two common ways researchers pose the text detection problem: as an object detection task and as an instance segmentation task. A text detection model must either learn to output coordinates of bounding boxes around text (object detection), or a mask, where pixels with text are marked and pixels without are not (instance segmentation).\n\begin{figure}[t]\n \centering\n \includegraphics[width=\textwidth]{OCR.png}\n \caption{Here, we show the general OCR process. A document can take the left path and go through an object detection model, which outputs bounding boxes, and a transcription model that transcribes the text in each of those bounding boxes. 
If the document takes the middle path, the document passes through a generic text instance segmentation model that colors pixels black if they contain text and a text transcription model that transcribes the regions of text the instance segmentation model identifies. If the document takes the right path, the document goes through a character-specific instance segmentation model, which outputs which character a pixel corresponds to. All paths produce the same structured output. The document comes from FUNSD~.}\n \label{fig:ocr-figure}\n\end{figure}", "id": "db6bda7f-5ad5-4c06-9040-d3c4c7598936", "level": "subsection", "origin_cites_number": 1, "parent_id": "ab1c72d1-5fe4-43bf-8325-08c660ce0dd5", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ], [ "section", "Optical Character Recognition" ], [ "subsection", "Text Detection" ] ], "subsections": [ "54eb0570-92bb-4bac-b84d-741d65bfa806", "c239ee98-1a29-46e9-b939-21eaed9cfb9d" ], "title": "Text Detection" }, { "cite_extract_rate": 0.777777777777777, "cites": [ 807, 209, 804, 803, 802, 806, 805 ], "content": "\label{sec:text-detection-object-detection}\nTraditionally, text detection revolved around hand-crafting features to detect characters~.\nAdvances in deep learning, especially in object detection and semantic segmentation, have led to a change in how text detection is tackled.\nUsing these well-performing object detectors from the traditional computer vision literature, such as the Single-Shot MultiBox Detector (SSD) and Faster R-CNN models~, practitioners build efficient text detectors.\nOne of the first papers applying a regression-based detector for text is TextBoxes~. 
\nThey added long default boxes that have large aspect ratios to SSD, in order to adapt the object detector to text.\nSeveral papers built on this work to make regression-based models resilient to orientations, like the Deep Matching Prior Network (DMPNet) and the Rotation-Sensitive Regression Detector (RRD)~.\nOther papers have a similar approach to the problem, but develop their own proposal network that is tuned towards text rather than towards natural images.\nFor instance, combine convolutional networks with recurrent networks using a vertical anchor mechanism in their Connectionist Text Proposal Network to improve accuracy for horizontal text.\nObject detection models are generally evaluated via an intersection over union (IoU) metric and an F1 score.\nThe metric computes how much of a candidate bounding box overlaps with the ground truth bounding box (the intersection) divided by the total space occupied by both the candidate and ground truth bounding boxes (the union).\nNext, an IoU threshold $\\tau$ is chosen to determine which predicted boxes count as true positives (IoU $\\geq \\tau$).\nThe remainder are classified as false positives.\nAny box that the model fails to detect is classified as a false negative.\nUsing those definitions, an F1 score is computed to evaluate the object detection model.", "id": "54eb0570-92bb-4bac-b84d-741d65bfa806", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "db6bda7f-5ad5-4c06-9040-d3c4c7598936", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ], [ "section", "Optical Character Recognition" ], [ "subsection", "Text Detection" ], [ "subsubsection", "Text Detection as Object Detection" ] ], "subsections": [], "title": "Text Detection as Object Detection" }, { "cite_extract_rate": 0.8, "cites": [ 791, 808, 810, 809 ], "content": "\\label{sec:text-detection-instance-segmentation}\nText detection in documents has its own unique set of challenges: 
notably, the text is usually dense and documents contain a lot more text than is typically present in natural images. To combat this density problem, text detection can be posed as an ultra-dense instance segmentation task.\nInstance segmentation is the task of classifying each pixel of an image into specific, pre-defined categories.\nSegmentation-based text detectors work at the pixel level to identify regions of text.\nThese per-pixel predictions are often used to estimate probabilities of text regions, characters, and their relationships among adjacent characters in a unified framework.\nPractitioners use popular segmentation methods like Fully Convolutional Networks (FCN) to detect text~, improving upon object detection models, especially when text is misaligned or distorted.\nSeveral papers build on this segmentation foundation to output word bounding areas by extracting bounding areas directly from the segmentation output~.\nTextSnake extends this further by predicting the text region, center line, direction of text, and candidate radius from an FCN~.\nThese features are then combined with a striding algorithm to extract the central axis points to reconstruct the text instance.", "id": "c239ee98-1a29-46e9-b939-21eaed9cfb9d", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "db6bda7f-5ad5-4c06-9040-d3c4c7598936", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ], [ "section", "Optical Character Recognition" ], [ "subsection", "Text Detection" ], [ "subsubsection", "Text Detection as Instance Segmentation" ] ], "subsections": [], "title": "Text Detection as Instance Segmentation" }, { "cite_extract_rate": 0.5, "cites": [ 811 ], "content": "\label{sec:text-transcription}\nWhile most papers cited above try to directly detect words or even lines of words, some papers argue that character-level detection is an easier problem than general text detection because characters are less ambiguous 
than text lines or words.\nCRAFT uses an FCN model to output a two-dimensional Gaussian heatmap for each character~.\nCharacters that are close together are then grouped together in a rotated rectangle that has the smallest area possible to still encapsulate the set of characters.\nMore recently, combine global, word-level, and character-level features obtained using Region Proposal Networks (RPN) to great success.\nMost of the models described above were mainly developed for scene text detection, but can be easily adapted to document text detection to handle difficult cases like distorted text.\nWe expect less distortion in documents than in natural images, but poorly scanned documents or documents with certain fonts could still pose these problems.", "id": "db0f4447-7e9b-466c-9b8c-6262fb55d937", "level": "subsection", "origin_cites_number": 2, "parent_id": "ab1c72d1-5fe4-43bf-8325-08c660ce0dd5", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ], [ "section", "Optical Character Recognition" ], [ "subsection", "Word-level versus character-level" ] ], "subsections": [], "title": "Word-level versus character-level" }, { "cite_extract_rate": 0.7000000000000001, "cites": [ 243, 813, 815, 814, 812, 168, 8384 ], "content": "\label{sec:text-transcription}\nText transcription is the task of transcribing the text present in an image.\nThe input, an image, is often a crop corresponding to either a character, word, or sequence of words, and has dimension $C \times H' \times W'$.\nA text transcription model must learn to ingest this cropped image and output a sequence of tokens belonging to some pre-specified vocabulary $V$.\n$V$ often corresponds to a set of characters. For digit recognition for instance, this is the most intuitive approach~.\nOtherwise, $V$ can also correspond to a set of words, similarly to a word-level language modeling problem~. 
\nIn both cases, the problem can be framed as a multi-class classification problem with the number of classes equal to the size of the vocabulary $V$.\nWord-level text transcription models require more data as the number of classes in the multi-class classification problem is much larger than for character-level. On one hand, predicting words instead of characters decreases the probability of making small typos (like replacing an \"a\" by an \"o\" in a word like \"elephant\"). On the other, limiting oneself to a word-level vocabulary means that it is not possible to transcribe words which are not part of this vocabulary. This problem doesn't exist at the character-level, as the number of characters is limited. As long as we know the language of the document, it is straightforward to build a vocabulary which contains all the possible characters. Subword units are a viable alternative~, as they alleviate the issues present in both word and character level transcription. \nRecently the research community has moved towards using recurrent neural networks, specifically recurrent models with LSTM or GRU units on top of a convolutional image feature extractor~.\nTo transcribe a token, two different decoding mechanisms are often used.\nOne is standard greedy decoding or beam search using an attention-based sequence decoder with cross entropy loss~, exactly like decoding with a conditional language model.\nSometimes images are poorly oriented or misaligned, reducing the effectiveness of standard sequence attention.\nTo overcome this,~ uses attention alignment, encoding spatial information of characters directly, while~ use spatial attention mechanisms directly.\nThe second way in which transcription decoding is often done is with connectionist temporal classification (CTC) loss~, a common loss function in speech which models repeated characters in sequence outputs well.\nThe majority of text transcription models borrow from advances in sequence modeling for both text and 
speech and often can utilize these advancements well with only minor adjustments.\nAs a result, practitioners seldom directly tackle this aspect relative to the other components of the document understanding task.", "id": "fbc2aaa2-b236-4423-af03-87a3371531ce", "level": "subsection", "origin_cites_number": 10, "parent_id": "ab1c72d1-5fe4-43bf-8325-08c660ce0dd5", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ], [ "section", "Optical Character Recognition" ], [ "subsection", "Text Transcription" ] ], "subsections": [], "title": "Text Transcription" }, { "cite_extract_rate": 0.8, "cites": [ 818, 817, 816, 815 ], "content": "\\label{sec:combined-models}\nEnd-to-end approaches combine text detection and text transcription in order to improve both components jointly~.\nFor instance, if the text prediction has a very low probability, it means the detected box either did not capture the entire word or captured something that is not text.\nAn end-to-end approach may be very effective in this case.\nCombining these two methods is fairly common and both Fast Oriented Text Spotting (FOTS) and TextSpotter with Explicit Alignment and Attention sequentially combine these models to train end-to-end~.\nThese approaches use shared convolutions as features to both text detection and recognition, and implement methods for complex orientations of text.\n introduce TextDragon, an end-to-end model that performs well on distorted text by utilizing a differentiable region of interest slide operator, which specializes in correcting distortions in regions of interest. \nMask TextSpotter is another end-to-end model that combines region proposal networks for bounding boxes with text and character segmentation~.\nThese recent works show the power of end-to-end OCR solutions in reducing errors.\nYet, having separate text detection and text recognition models offers more flexibility. First, the two models can be trained separately. 
In the case where only a small dataset is available to train the whole OCR module, but a lot of text recognition data is easily accessible, it makes sense to leverage this large amount of data when training the recognition model. Moreover, with two separate models, it is easy to compute two separate sets of metrics and have a more complete understanding of where the bottleneck might be.\nHence, both two-model and end-to-end approaches are viable. Whether one is better than the other mainly depends on the data available and what one wants to achieve.", "id": "7012ceef-6a64-491f-99cb-bbe0d0a38dd7", "level": "subsection", "origin_cites_number": 5, "parent_id": "ab1c72d1-5fe4-43bf-8325-08c660ce0dd5", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ], [ "section", "Optical Character Recognition" ], [ "subsection", "End-to-end models" ] ], "subsections": [], "title": "End-to-end models" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 820, 821, 819, 801 ], "content": "\label{sec:text-detection-datasets}\nMost of the literature revolves around scene text detection, rather than document text detection, and reports results on those datasets.\nSome of the major ones are ICDAR~, Total-Text~, CTW1500~, and SynthText~.\n present FUNSD, a dataset for text detection, transcription, and document understanding with 199 fully annotated forms comprising 31k word-level bounding boxes. Another recent document understanding dataset comes from the ICDAR 2019 Robust Reading Challenge on Scanned Receipts OCR and Information Extraction (SROIE). It contains 1000 whole scanned receipt images, with line-level annotations for text detection/transcription, and labels for Key Information Extraction. The website contains a ranking of the solutions proposed to address this problem. 
As solutions are still posted after the end of the competition, it is a good way to keep track of the most recent methods.", "id": "e9b51816-66b8-415f-93f7-12c02ebbd0d5", "level": "subsection", "origin_cites_number": 6, "parent_id": "ab1c72d1-5fe4-43bf-8325-08c660ce0dd5", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ], [ "section", "Optical Character Recognition" ], [ "subsection", "Datasets for Text Detection \\& Transcription" ] ], "subsections": [], "title": "Datasets for Text Detection \\& Transcription" }, { "cite_extract_rate": 0.25, "cites": [ 823, 822, 8386 ], "content": "\\label{sec:layout-analysis}\nDocument layout analysis is the process of locating and categorizing regions of interest on a picture or scanned image of a page.\nBroadly, most approaches can be distilled into page segmentation and logical structural analysis~.\nPage segmentation methods focus on appearance and use visual cues to partition pages into distinct regions; the most common are text, figures, images, and tables. \nIn contrast, logical structural analysis focuses on providing finer-grained semantic classifications for these regions, i.e. 
identifying a region of text that is a paragraph and distinguishing that from a caption or document title.\nResearch in methods for document layout analysis has a long history, both in academia and industry.~\footnote{The first ISO standard that defined aspects of modern-day document layout analysis was drafted four decades ago: ISO 8613-1:1989}\nFrom the first pioneering heuristic approaches~, to multi-stage classical machine learning systems~, the evolution of document layout analysis methods is now dominated by end-to-end differentiable methods~.", "id": "872cb3d9-2ab2-43da-9f85-2dd63e5d76ea", "level": "section", "origin_cites_number": 12, "parent_id": "7dee3631-5c10-4ff0-83ae-c48619c9e5c2", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ], [ "section", "Document Layout Analysis" ] ], "subsections": [ "09eca77f-2430-4aa1-a5bd-c41cc3f4ea98", "0df51f92-2d49-4e68-81c9-e79dcaaa3c41", "1e0244b9-1bab-424d-a81b-a4a89f4a6821" ], "title": "Document Layout Analysis" }, { "cite_extract_rate": 0.625, "cites": [ 823, 8387, 97, 824, 825 ], "content": "\label{sec:instance-segmentation}\nWhen applied to the problem of layout analysis in business documents, instance segmentation methods predict per-pixel labels to categorize regions of interest.\nSuch methods are flexible and easily adapt to the coarser-grained task of page segmentation or the more-specific task of logical structural analysis.\n\begin{figure}[t]\n \label{fig:layout-analysis}\n \centering\n \includegraphics[width=\textwidth]{layout-analysis-fig.png}\n \caption{A document is passed through a generic layout analysis model, resulting in a layout segmentation mask with the following classes: figure (green), figure caption (orange), heading (purple), paragraph (red), and algorithm (blue). 
The document has been reproduced with permission~.}\n\\end{figure}\nIn , the authors describe an end-to-end neural network that combines both text and visual features in an encoder-decoder architecture that also incorporates an unsupervised pretraining network.\nDuring inference, their approach uses a downsampling cascade of pooling layers to encode visual information, which is fed into a symmetrical upsampling cascade for decoding. \nAt each cascade level, the produced encoding is also directly passed into the respective decoding block, concatenating the down- and up-sampled representations.\nThis architecture ensures that visual feature information at different levels of resolution is considered during the encoding and decoding process~.\nFor the final decoding layer, localized text embeddings are supplied alongside the computed visual representation.\nThis U-Net-inspired encoding-decoding architecture has been adopted for document layout analysis in several different approaches~.\nThe method in , and later extended by via additional text embeddings, use convolution maxpooling layers with large filter sizes to feed the document image through a ResNet bottleneck~.\nThe representation is then processed by bilinear upsampling layers and smaller 1x1 and 3x3 convolution layers.\nBoth works are used to perform layout analysis on historical documents and newspapers from multiple European languages, respectively.\nIn , the authors combine the U-Net architecture pattern with trainable multiplication layers.\nThis layer type is specialized for extracting co-occurrence texture features from the network's convolution feature maps, which are effective for locating regions that have periodically repeating information, such as tables.", "id": "09eca77f-2430-4aa1-a5bd-c41cc3f4ea98", "level": "subsection", "origin_cites_number": 8, "parent_id": "872cb3d9-2ab2-43da-9f85-2dd63e5d76ea", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document 
Understanding" ], [ "section", "Document Layout Analysis" ], [ "subsection", "Instance Segmentation for Layout Analysis" ] ], "subsections": [], "title": "Instance Segmentation for Layout Analysis" }, { "cite_extract_rate": 0.75, "cites": [ 7, 8388, 826, 8386, 828, 827 ], "content": "Obtaining high quality training data for layout analysis is a labor intensive task that requires both mechanical precision and an understanding of the document's contents.\nAs a consequence of the difficulties in layout annotation of documents from brand new domains, several approaches exist to either leverage structure in unlabeled data or use well-defined rule sets to generate synthetic labeled documents to further improve generalizability and performance of document layout analysis systems.\nMasked language models such as BERT and RoBERTa have shown effective empirical performance on many downstream NLP tasks~.\nInspired by the pretraining strategy in BERT and RoBERTa, define a Masked Visual-Language Model, which randomly masks input tokens and uses the model to predict the masked tokens. Unlike BERT, their method provides the 2-D positional embedding of the token during this masked prediction task, which enables the model to combine both semantic and spatial relationships between textual elements. Mentioned earlier in section~\\ref{sec:instance-segmentation}, introduce an auxiliary document image reconstruction task in their broader instance segmentation-based network. During training, this auxiliary module uses a separate upsampling decoder that, without the aid of skip connections, predicts the original pixel values from the encoded representation.\nWhile pretraining lets practitioners gain more value from their unlabeled documents, this technique alone is not always sufficient to effectively surmount data scarcity concerns. 
Relying on the intuition that many business and academic documents have repeated patterns in both content and page-level organization, several approaches have emerged to manufacture synthetic, labeled data in order to provide data suitable for a pretraining-like routine .\nIn , the authors propose a three-stage method for synthesizing new labeled documents. First, they generate the document by randomly choosing a document background from a set of nearly 200 known document backgrounds. Second, they use a grid-based layout method to define both individual document element content and their respective sizes. Third, their process introduces corruptions, such as Gaussian blur and random image crops. This modular, rule-based synthetic document generation approach creates a heterogeneous dataset to make pretraining of layout analysis models more robust.\nAlternatively, instead of defining rules to generate a heterogeneous set of documents, several synthesizing procedures take cues from data augmentation methods.\n and describe general-purpose toolkits that use an existing set of labeled documents to introduce deformations and perturbations in source images.\nImportantly, such changes to the training data are balanced so as to preserve the original semantic content while still exposing the model training to realistic errors that it must account for during inference on unseen data.", "id": "0df51f92-2d49-4e68-81c9-e79dcaaa3c41", "level": "subsection", "origin_cites_number": 8, "parent_id": "872cb3d9-2ab2-43da-9f85-2dd63e5d76ea", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ], [ "section", "Document Layout Analysis" ], [ "subsection", "Addressing Data Scarcity and Alternative Approaches" ] ], "subsections": [], "title": "Addressing Data Scarcity and Alternative Approaches" }, { "cite_extract_rate": 0.5, "cites": [ 829, 831, 830, 828 ], "content": "Recently, there has been a deluge of datasets specifically 
targeting the document layout analysis problem. The International Conference on Document Analysis and Recognition (ICDAR) has produced several datasets from its various annual competitions; the most recent from 2017 and 2019 provide gold-standard data for document layout analysis and other document processing tasks .\nOn the larger side, DocBank is a collection of half a million document pages with token-level annotations suitable for training and evaluating document layout analysis systems . The authors constructed this dataset using weak supervision , matching data from the LaTeX source of known PDFs to form annotations. Similarly, created PubLayNet by automatically matching XML content representations for over one million PDFs on PubMed Central™, consisting of approximately 360 thousand document images. While not full document layout, have created PubTabNet from PubMed Central as well. Their data consists of 568 thousand table images alongside HTML representations of their content.", "id": "1e0244b9-1bab-424d-a81b-a4a89f4a6821", "level": "subsection", "origin_cites_number": 8, "parent_id": "872cb3d9-2ab2-43da-9f85-2dd63e5d76ea", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ], [ "section", "Document Layout Analysis" ], [ "subsection", "Datasets for Layout Analysis" ] ], "subsections": [], "title": "Datasets for Layout Analysis" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:information-extraction} \nThe goal of information extraction for document understanding is to take documents that may have diverse layouts and extract information into a structured format.\nExamples include receipt understanding to identify item names, quantities, and prices and form understanding to identify different key-value pairs.\nExtracting information from documents goes beyond simply reading text on a page, as it is often necessary to learn page layouts for complete understanding.\nAs such, recent 
enhancements have extended text encoding strategies for documents by additionally encoding structural and visual information of text in a variety of ways.", "id": "5c3c58c6-8219-43ce-81f3-0bc1c7ebaf0b", "level": "section", "origin_cites_number": 0, "parent_id": "7dee3631-5c10-4ff0-83ae-c48619c9e5c2", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ], [ "section", "Information Extraction" ] ], "subsections": [ "aa2dae4e-3f50-46e8-8455-72949ee2d942", "9c50ea32-b469-4b93-a5cd-7b6e8b439660", "a3d3f144-355d-445f-9178-9818964739ce", "8926c720-3179-44ea-97bb-88e166093c96" ], "title": "Information Extraction" }, { "cite_extract_rate": 0.75, "cites": [ 8388, 38, 832 ], "content": "Multiple sequence tagging approaches have been proposed which augment current named entity recognition (NER) methods by embedding attributes of 2D bounding boxes and merging them with text embeddings to create models which are simultaneously aware of both context and spatial positioning when extracting information.\n embeds the pair of $x,y$ coordinates that define a bounding box using two different embedding tables and pretrains a masked language model (LM).\nDuring pretraining, text is randomly masked but the 2D positional embeddings are retained.\nThis model can then be fine-tuned on a downstream task.\nAlternatively, the bounding box coordinates can also be embedded using $\\sin$ and $\\cos$ functions like positional encoding methods~.\nOther features can also be embedded such as the line or sequence number~.\nIn this scenario, the document is preprocessed to assign a line number to each individual token. Each token is then ordered from left to right and given a sequential position. 
Finally, both the line and sequential positions are embedded.\nWhile these strategies have seen success, relying solely on the line number or bounding box coordinates can be misleading when the document has been scanned on an uneven surface, leading to curved text.\nAdditionally, bounding box based embeddings still miss critical visual information such as typographical emphases (bold, italics) and images such as logos.\nTo overcome these, a crop of the image corresponding to the token of interest can be embedded using a Faster R-CNN model to create token image embeddings which are combined with the 2D positional embeddings~.", "id": "aa2dae4e-3f50-46e8-8455-72949ee2d942", "level": "subsection", "origin_cites_number": 4, "parent_id": "5c3c58c6-8219-43ce-81f3-0bc1c7ebaf0b", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ], [ "section", "Information Extraction" ], [ "subsection", "2D Positional Embeddings" ] ], "subsections": [], "title": "2D Positional Embeddings" }, { "cite_extract_rate": 1, "cites": [ 834, 796, 835, 833 ], "content": "Information extraction for documents can also be framed as a computer vision challenge wherein the goal of the model is to semantically segment information or regress bounding boxes over the areas of interest.\nThis strategy helps preserve the 2D layout of the document and allows models to take advantage of 2D correlations.\nWhile it is theoretically possible to learn strictly from the document image, directly embedding textual information into the image simplifies the task for models to understand the 2D textual relationships.\nIn these cases, an encoding function is applied onto a proposed textual level (i.e. 
character, token, word) to create individual embedding vectors.\nThese vectors are transposed into each pixel that comprises the bounding box corresponding to the embedded text, ultimately creating an image of $W \\times H \\times D$ where $W$ is the width, $H$ is the height, and $D$ is the embedding dimension.\nProposed variants are listed as follows:\n\\begin{enumerate}\n \\item \\textit{CharGrid} embeds characters with a one-hot encoding into the image~ \n \\item \\textit{WordGrid} embeds individual words using word2vec or FastText~ \n \\item \\textit{BERTgrid} finetunes BERT on task-specific documents and is used to obtain contextual wordpiece vectors~\n \\item \\textit{C+BERTgrid} combines context-specific and character vectors~\n\\end{enumerate}\nWhen comparing the grid methods, C+BERTgrid has shown the best performance, likely due to its contextualized word vectors combined with a degree of resiliency to OCR errors. \n proposes an alternative to directly applying text embeddings to the image.\nA grid is projected on top of the image and a mapping function assigns each token to a unique cell in the grid.\nModels then learn to assign each cell in the grid to a class.\nThis method significantly reduces the dimensionality due to its grid system, while still retaining the majority of the 2D spatial relationships.", "id": "9c50ea32-b469-4b93-a5cd-7b6e8b439660", "level": "subsection", "origin_cites_number": 4, "parent_id": "5c3c58c6-8219-43ce-81f3-0bc1c7ebaf0b", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ], [ "section", "Information Extraction" ], [ "subsection", "Image Embeddings" ] ], "subsections": [], "title": "Image Embeddings" }, { "cite_extract_rate": 1, "cites": [ 37, 8387, 832 ], "content": "Unstructured text on documents can also be represented as graph networks, where the nodes in a graph represent different textual segments.\nTwo nodes are connected with an edge if they are cardinally 
adjacent to each other, allowing the relationship between words to be modeled directly~.\nAn encoder such as a BiLSTM encodes text segments into nodes~.\nEdges can be represented as a binary adjacency matrix or a richer matrix, encoding additional visual information such as the distance between segments or the shape of the source and target nodes~.\nA graph convolutional network is then applied at different receptive fields in a similar fashion to dilated convolutions~ to ensure that both local and global information can be learned~.\nAfter this, the representation is passed to a sequence tagging decoder.\nDocuments can also be represented as a directed graph and processed with a spatial dependency parser~. \nIn this representation, nodes are represented by textual segments, but field nodes denoting the node type are used to initialize each DAG.\nIn addition, two kinds of edges are defined:\n\\begin{enumerate}\n \\item Edges that group together segments belonging to the same category (STORENAME $\\rightarrow$ Peet's $\\rightarrow$ Coffee; a field node followed by two nodes representing a store name)\n \\item Edges that connect relationships between different groups (Peet's $\\rightarrow$ 94107; a zipcode).\n\\end{enumerate}\nA transformer with an additional 2D positional embedding is used to spatially encode the text.\nAfter this, the task is to predict the relationship matrix for each edge type.\nThis method can represent arbitrarily deep hierarchies and can be applied to complicated document layouts.", "id": "a3d3f144-355d-445f-9178-9818964739ce", "level": "subsection", "origin_cites_number": 3, "parent_id": "5c3c58c6-8219-43ce-81f3-0bc1c7ebaf0b", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ], [ "section", "Information Extraction" ], [ "subsection", "Documents as Graphs" ] ], "subsections": [], "title": "Documents as Graphs" }, { "cite_extract_rate": 0.5, "cites": [ 837, 830, 520, 836 ], "content": 
"\\label{sec:table-extraction}\nTabular data extraction remains a challenging aspect of information extraction due to the wide variety of table formats and complex hierarchies. Table datasets typically involve multiple tasks~. The first task is table detection, which involves localizing the bounding box containing the table(s) inside the document. The next task is table structure recognition, which requires extracting the row, column, and cell information into a common format. This can be taken one step further to table recognition, which requires understanding both the structural information as well as the content by classifying cells within the table itself~. As textual and visual features are equally important for properly extracting and understanding tables, many diverse methods have been proposed to perform this task.\nOne such proposal named TableSense performs both table detection and structure recognition~. \nTableSense uses a three-stage approach: cell featurization, object detection with convolutional models, and an uncertainty-based active learning sampling mechanism. TableSense's proposed architecture for table detection performs significantly better than traditional methods in computer vision such as YOLO-v3 or Mask R-CNN~.\nSince this approach does not work well for general spreadsheets, ~ extend the previous work by using a multitask framework to jointly learn table regions, structural components of spreadsheets, and cell types.\nThey add an additional stage, which leverages language models to learn the semantic contents of table cells in order to flatten complex tables into a single standard format.\n propose TUTA, which focuses on understanding the content within tables after the structure has been determined. The authors present three new objectives for language model pretraining for table understanding by using tree-based transformers. 
The objectives introduced for pretraining are designed to help the model understand tables at the token, cell, and table level. The authors (1) mask a proportion of tokens, depending on the table cell, for the model to predict; (2) randomly mask particular cell headers for the model to predict the header string based on its location; and (3) provide the table with context, such as table titles or descriptions that may or may not be associated with it, for the model to identify which contextual elements are positively associated with the table. The transformer architecture is modified to reduce distractions from attention by limiting the attention connections to items based on a cell's hierarchical distance to another cell. Fine-tuning TUTA has demonstrated state-of-the-art performance on multiple datasets for cell type classification.", "id": "8926c720-3179-44ea-97bb-88e166093c96", "level": "subsection", "origin_cites_number": 8, "parent_id": "5c3c58c6-8219-43ce-81f3-0bc1c7ebaf0b", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ], [ "section", "Information Extraction" ], [ "subsection", "Tables" ] ], "subsections": [], "title": "Tables" }, { "cite_extract_rate": 0, "cites": [], "content": "Document understanding is a hot topic in industry and has immense monetary value.\nMost documents are private data corresponding to private contracts, invoices, and records.\nAs a result, openly available datasets are hard to come by and have not been a focus for academia relative to other application areas.\nThe academic literature on methodologies to tackle document understanding is similarly sparse compared to areas with an abundance of publicly available data such as image classification and translation.\nHowever, the most effective approaches for document understanding make use of recent advancements in deep neural network modeling.\nEnd-to-end document understanding is achievable by creating an integrated system that performs layout 
analysis, optical character recognition, and domain-specific information extraction.\nIn this survey, we attempt to consolidate and organize the methodologies present in the literature in order to serve as a jumping-off point for scholars and practitioners alike who want to explore document understanding.\n\\bibliography{references}\n\\end{document}", "id": "764f42ce-1e52-4887-abf2-823b9019a5c2", "level": "section", "origin_cites_number": 0, "parent_id": "7dee3631-5c10-4ff0-83ae-c48619c9e5c2", "prefix_titles": [ [ "title", "A Survey of Deep Learning Approaches for OCR and Document Understanding" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
121
[ 795, 303, 206, 115, 800, 798, 8385, 38, 7, 791, 793, 797, 792, 7298, 796, 8384, 2401, 794, 790, 799, 801, 807, 209, 804, 803, 802, 806, 805, 808, 810, 809, 811, 243, 813, 815, 814, 812, 168, 818, 817, 816, 820, 821, 819, 823, 822, 8386, 8387, 97, 824, 825, 8388, 826, 828, 827, 829, 831, 830, 832, 834, 835, 833, 37, 837, 520, 836 ]
1.38076
[ "Sina Mohseni", "Haotao Wang", "Zhiding Yu", "Chaowei Xiao", "Zhangyang Wang", "Jay Yadawa" ]
Taxonomy of Machine Learning Safety: A Survey and Primer
2021
2021-06-09T05:56:42Z
cs.LG
The open-world deployment of Machine Learning (ML) algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities such as interpretability, verifiability, and performance limitations. Research explores different approaches to improve ML dependability by proposing new models and training techniques to reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks. However, there is a missing connection between ongoing ML research and well-established safety principles. In this paper, we present a structured and comprehensive review of ML techniques to improve the dependability of ML algorithms in uncontrolled open-world settings. From this review, we propose the \textit{Taxonomy of ML Safety} that maps state-of-the-art ML techniques to key engineering safety strategies. Our taxonomy of ML safety presents a safety-oriented categorization of ML techniques to provide guidance for improving dependability of the ML design and development. The proposed taxonomy can serve as a safety checklist to aid designers in improving coverage and diversity of safety strategies employed in any given ML system.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "81a424eb-ce19-4d20-b1e1-67e3f071c9db", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ] ], "subsections": [ "db1f4cf7-1d58-49ec-b135-7e260c7fb808", "2dbfe4c5-7909-4b49-8773-ed3edc158c92", "5ab20adb-f657-4a79-9b60-59d62e0df59b", "7165617c-b651-4c8c-bb23-2d8f417c79d3", "c40bab66-fefa-4f2f-adb6-c822fcacd658", "3df7c8ef-e02c-4fce-9d3b-80b1e2233171", "d2689171-2b34-421a-b1bf-c43fead1b2cb", "09cbc33b-2cc6-42f5-9853-07558520833f", "27093b3d-f55a-4c01-aeb4-419662c36eee", "f2d6188f-3ee7-40ae-b23a-f8e40643e814" ], "title": "root" }, { "cite_extract_rate": 0.583333333333333, "cites": [ 6976, 166, 1605, 1606, 1356, 1604, 892 ], "content": "Advancements in machine learning (ML) have been one of the most significant innovations of the last decade. \nAmong different ML models, Deep Neural Networks (DNNs)~ are well-known and widely used for their powerful representation learning from high-dimensional data such as images, texts, and speech. \nHowever, as ML algorithms enter sensitive real-world domains with trustworthiness, safety, and fairness prerequisites, the need for corresponding techniques and metrics for high-stakes domains is more noticeable than before. \nHence, researchers in different fields propose guidelines for \\textit{Trustworthy AI}~, \\textit{Safe AI} , and \\textit{Explainable AI}~ as stepping stones for next-generation Responsible AI . \nFurthermore, government reports and regulations on AI accountability~, trustworthiness , and safety are gradually creating laws that mandate protecting citizens' data privacy rights, ensuring fair data processing, and upholding safety for AI-based products.\nThe development and deployment of ML algorithms for open-world tasks come with reliability and dependability challenges rooted in model performance, robustness, and uncertainty limitations . 
\nUnlike traditional code-based software, ML models have fundamental safety drawbacks, including performance limitations on their training set and run-time robustness constraints in their operational domain. \nFor example, ML models are fragile to unprecedented domain shift that could easily occur in open-world scenarios. \nData corruptions and natural perturbations~ are other factors affecting ML models. \nMoreover, from the security perspective, it has been shown that DNNs are susceptible to adversarial attacks that make small perturbations to the input sample (indistinguishable by the human eye) but can fool a DNN~. \nDue to the lack of verification techniques for DNNs, validation of ML models is often limited to performance measures on standardized test sets and end-to-end simulations on the operation design domain. \nRealizing that dependable ML models are required to achieve safety, we observe the need to investigate gaps and opportunities between conventional engineering safety standards and a set of ML safety-related techniques. \n\\begin{figure*}\n\\vspace{-0.5em}\n \\centering\n \\includegraphics[width=0.99\\columnwidth]{figures/roadmap-3.PNG}\n \\vspace{-0.6em}\n \\caption{Paper Roadmap: we first identify key engineering safety requirements (first column) that are limited or not readily applicable to complex ML algorithms (second column). 
From there, we present a review of safety-related ML research followed by their categorization (third column) into three strategies: \\textit{(1) Inherently Safe Models}, \\textit{(2) Enhancing Model Performance and Robustness}, and \\textit{(3) Run-time Error Detection}.} \n \\label{fig:roadmap}\n \\vspace{-1em}\n\\end{figure*}\n\\vspace{-0.5em}", "id": "db1f4cf7-1d58-49ec-b135-7e260c7fb808", "level": "section", "origin_cites_number": 12, "parent_id": "81a424eb-ce19-4d20-b1e1-67e3f071c9db", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Introduction" ] ], "subsections": [ "a35757a0-9173-4c32-83b7-39d8a998db89", "100ff0d0-97f6-4d36-903a-6350d5ac0a70" ], "title": "Introduction" }, { "cite_extract_rate": 0.5714285714285711, "cites": [ 1609, 1356, 1608, 1607 ], "content": "ML safety includes diverse hardware and software techniques for the safe execution of algorithms in open-world applications . \nIn this paper, we limit our scope to ML algorithm design only, and not the execution of those algorithms on platforms. \nWith that being said, we also mainly focus on ``in situ'' techniques to improve run-time dependability and not on further techniques for the efficiency of the network or training. \nWe used a structured and iterative methodology to find ML safety-related papers and categorize this research as summarized in Table \\ref{tab:ref-table}. \nIn our iterative paper selection process, we started with reviewing key research papers from AI and ML safety (e.g., ) and software safety literature and standards (e.g., ) to identify mutual safety attributes between engineering safety and ML techniques. 
\nNext, we conducted an upward and downward literature investigation using top computer science conference proceedings, journal publications, and the \\textit{Google Scholar} search engine to maintain reasonable literature coverage and balance the number of papers on each ML safety attribute.\nFigure \\ref{fig:roadmap} presents the overall organization of this paper. \nWe first review the background on common safety terminologies and situate ML safety limitations with reference to conventional engineering safety requirements in Section \\ref{sec:background}. \nIn Section \\ref{sec:error-types} we discuss a unified ``big picture'' of different ML error types for real-world applications and common benchmark datasets to evaluate models for these errors. \nNext, we propose an ML safety taxonomy in Section \\ref{sec:categorization} to organize ML techniques into safety strategies, with Table \\ref{tab:ref-table} as an illustration of the taxonomy with a summary of representative papers on each subcategory. \nSections \\ref{sec:safe-design}, \\ref{sec:robustness-performance}, and \\ref{sec:error-detection} constitute the main body of the reviewed papers, organized into ML solutions and techniques for each safety strategy. \nFinally, Section \\ref{sec:discussion} presents a summary of key takeaways and a discussion of open problems and research directions for ML safety.\n\\vspace{-0.5em}", "id": "a35757a0-9173-4c32-83b7-39d8a998db89", "level": "subsection", "origin_cites_number": 7, "parent_id": "db1f4cf7-1d58-49ec-b135-7e260c7fb808", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Introduction" ], [ "subsection", "Scope, Organization, and Survey Method" ] ], "subsections": [], "title": "Scope, Organization, and Survey Method" }, { "cite_extract_rate": 0, "cites": [], "content": "In this paper, we review challenges and opportunities to achieve ML safety for open-world safety-critical applications. 
\nWe first review dependability limitations and challenges for ML algorithms in comparison to engineering safety standard requirements. \nThen, we decompose ML dependability needs into three safety strategies for (1) achieving inherently safe ML design, (2) improving model performance and robustness, and (3) building run-time error detection solutions for ML. \nFollowing our categorization of safety strategies, we present a structured and comprehensive review of 300 papers from a broad spectrum of state-of-the-art ML research and safety literature. \nWe propose a unifying taxonomy (Table \\ref{tab:ref-table}) that serves ML researchers and designers as a collection of best practices and allows one to check the coverage and diversity of safety strategies employed in any given ML system. \nAdditionally, the taxonomy of ML safety lays down a road map to safety needs in ML and assists in assessing technology readiness for each safety strategy. \nWe review open challenges and opportunities for each strategy and present a summary of key takeaways at the end.\n\\vspace{-0.5em}", "id": "100ff0d0-97f6-4d36-903a-6350d5ac0a70", "level": "subsection", "origin_cites_number": 0, "parent_id": "db1f4cf7-1d58-49ec-b135-7e260c7fb808", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Introduction" ], [ "subsection", "Objectives and Contributions" ] ], "subsections": [], "title": "Objectives and Contributions" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:background}\nIn order to introduce and categorize ML safety techniques, we start with reviewing background on engineering safety strategies and investigate safety gaps between design and development of code-based software and ML algorithms. 
\n\\vspace{-0.3em}", "id": "2dbfe4c5-7909-4b49-8773-ed3edc158c92", "level": "section", "origin_cites_number": 0, "parent_id": "81a424eb-ce19-4d20-b1e1-67e3f071c9db", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Background" ] ], "subsections": [ "4a4802ac-4ed5-4b71-8adf-dfa52b54866a", "50ce9f94-55df-4570-8b4a-f17b9fcdec3c", "695446ed-62d6-49c2-a600-711627e36e58" ], "title": "Background" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1605, 1614, 1356, 1612, 1615, 1608, 1610, 1613, 1607, 1611 ], "content": "\\label{sec:terminology}\nRelated survey papers dive into ML and AI safety topics to analyze the problem domain, review existing solutions, and make suggestions on future directions .\nSurvey papers cover diverse topics including safety-relevant characteristics in reinforcement learning~, verification of ML components~, adversarial robustness , anomaly detection , and ML uncertainty , and aim to draw the connection between well-established engineering safety principles and ML safety limitations.\n introduce four major research problems to improve ML safety, namely robustness, monitoring, alignment, and external safety for ML models.\n presents high-level guidelines for teams, organizations, and industries to increase the reliability, safety, and trustworthiness of next-generation Human-Centered AI systems.\nMore recently, multiple surveys present a holistic review of ML promises and pitfalls for safety-critical autonomous systems.\nFor instance, demonstrate a systematic presentation of a 4-stage ML lifecycle, including data management, model training, model verification, and deployment. 
\nAuthors present itemized safety assurance requirements for each stage and review methods that support each requirement.\nIn a later work, add ML safety assurance scoping and the safety requirements elicitation stages to ML lifecycle to establish the fundamental link between system-level hazard and risk analysis and unit-level safety requirements. \nIn a broader context, study challenges and limitations of existing ML system development tools and platforms (MLOps) in achieving \\textit{Responsible AI} principles such as data privacy, transparency, and safety. \nAuthors report their findings with a list of operationalized Responsible AI principles and their benefits and drawbacks.\nAlthough prior work targeted different aspects and characteristics of ML safety and dependability, in this paper, we elaborate on ML safety concept by situating open-world safety challenges with ongoing ML research.\nParticularly, we combine ML safety concerns between engineering and research communities to uncover mutual goals and accelerate safety developments.\n\\vspace{-0.3em}", "id": "4a4802ac-4ed5-4b71-8adf-dfa52b54866a", "level": "subsection", "origin_cites_number": 15, "parent_id": "2dbfe4c5-7909-4b49-8773-ed3edc158c92", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Background" ], [ "subsection", "Related Surveys" ] ], "subsections": [], "title": "Related Surveys" }, { "cite_extract_rate": 0, "cites": [], "content": "We introduce terminologies related to ML safety by clarifying the relationship between ML \\textit{Safety}, \\textit{Security}, and \\textit{Dependability} that are often interchangeably used in the literature. 
\n\\textit{Safety} is a \\textit{System-Level} concept as a set of processes and strategies to minimize the risk of hazards due to malfunctioning of system components.\nSafety standards such as IEC 61508 and ISO 26262 mandate complete analysis of hazards and risks, documentation for system architecture and design, detailed development process, and thorough verification strategies for each component, integration of components, and final system-level testing. \n\\textit{Dependability} is a \\textit{Unit-Level} concept to ensure performance and robustness of the software in its operational domain. \nWe define ML dependability as the model's ability to minimize test-time prediction error. \nTherefore, a highly dependable ML algorithm is expected to be robust to natural distribution shifts within their intended operation design domain. \n\\textit{Security} is both a \\textit{System-Level} and a \\textit{Unit-Level} concept to protected from harm or other non-desirable (e.g., data theft, privacy violation) outcomes caused by adversaries. \nNote that engineering guidelines distinguish safety hazards (e.g., due to natural perturbations) from security hazards (e.g., due to adversarial perturbations) as the latter intentionally exploits system vulnerabilities to cause harm. \nHowever, the term safety is often loosely used in ML literature to refer to the dependability of algorithms against adversaries .\nIn this paper, we focus on unit-level strategies to maintain the dependability of ML algorithms in an intelligent system rather than the safety of a complex AI-based system as a whole.\nWe also cover adversarial training and detection techniques as a part of unit-level safety strategies regardless of the role of the adversary in generating the attack. 
\n\\vspace{-0.3em}", "id": "50ce9f94-55df-4570-8b4a-f17b9fcdec3c", "level": "subsection", "origin_cites_number": 3, "parent_id": "2dbfe4c5-7909-4b49-8773-ed3edc158c92", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Background" ], [ "subsection", "Related Terminologies" ] ], "subsections": [], "title": "Related Terminologies" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1609, 1616 ], "content": "Engineering safety broadly refers to the management of operations and events in a system in order to protect its users by minimizing hazards, risks, and accidents.\nGiven the importance of dependability of the system's internal components (hardware and software), various engineering safety standards have been developed to ensure the system's functional safety based on two fundamental principles of safety life cycle and failure analysis. \nBuilt on collection of best practices, engineering safety processes discover and eliminate design errors followed by a probabilistic analysis of safety impact of possible system failures (i.e., failure analysis). \nSeveral efforts attempted to extend engineering safety standards to ML algorithms .\nFor example, European Union Aviation Safety Agency released a report on concepts of design assurance for neural networks~ that introduces safety assurance and assessment for learning algorithms in safety-critical applications. \nIn another work, Siebert et al.~ present a guideline to assess ML system quality from different aspects specific to ML algorithms including data, model, environment, system, and infrastructure in an industrial use case. \nHowever, the main body of engineering standards do not account for the statistical nature of ML algorithms and errors occurring due to the inability of the components to comprehend the environment. 
\nIn a recent review of automotive functional safety for ML-based software, Salay et al.~ present an analysis that shows about 40\\% of software safety methods do not apply to ML models. \nGiven the dependability limitations of ML algorithms and lack of adaptability for traditional software development standards, we identify 5 open safety challenges for ML and briefly review active research topics for closing these safety gaps in the following. \nWe extensively review the techniques for each challenge later in Sections \\ref{sec:safe-design}, \\ref{sec:robustness-performance}, and \\ref{sec:error-detection}.\n\\vspace{-0.3em}", "id": "695446ed-62d6-49c2-a600-711627e36e58", "level": "subsection", "origin_cites_number": 3, "parent_id": "2dbfe4c5-7909-4b49-8773-ed3edc158c92", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Background" ], [ "subsection", "Engineering Safety Limitations in ML" ] ], "subsections": [ "3d87c36f-64f8-4c05-85b0-adff49363df6", "7d7b4464-79b0-4c2d-92ca-eaf1875e72a2", "8fc22a2d-6cc6-4338-8dec-b82314bc214e", "528565de-9cae-43e4-afbf-da8fc4a82f0c", "9f2878be-f51f-4687-9bfa-175b3443c0bf" ], "title": "Engineering Safety Limitations in ML" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1346, 1356 ], "content": "Documenting and reviewing the software specification is a crucial step in engineering safety; however, formal design specification of ML models is generally not feasible, as the models learn patterns from large training sets to discriminate (or generate) their distributions for new unseen input. \nTherefore, ML algorithms learn the target classes through their training data (and regularization constraints) rather than formal specification. \nThe lack of specifiability could cause a mismatch between ``designer objectives'' and ``what the model actually learned'', which could result in unintended functionality of the system. 
\nThe data-driven optimization of model variables in ML training makes it challenging to define and pose specific safety constraints. \nSeshia et al.~ surveyed the landscape of formal specification for DNNs to lay an initial foundation for formalizing and reasoning about properties of DNNs. \nTo fill this gap, a common practice is to achieve partial design specification through training data specification and coverage.\nAnother practical way to overcome the design specification problem is to break ML components into smaller algorithms (with smaller tasks) to work in a hierarchical structure.\nIn the case of intelligent agents, safety-enforcing regularization terms , and simulation environments are suggested to specify and verify training goals for the agent. \n\\vspace{-0.3em}", "id": "3d87c36f-64f8-4c05-85b0-adff49363df6", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "695446ed-62d6-49c2-a600-711627e36e58", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Background" ], [ "subsection", "Engineering Safety Limitations in ML" ], [ "subsubsection", "Design Specification" ] ], "subsections": [], "title": "Design Specification" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1617, 499 ], "content": "Implementation transparency is an important requirement in engineering safety which gives the ability to trace back design requirements from the implementations. \nHowever, advanced ML models trained on high-dimensional data are not transparent. \nThe very large number of variables in the models makes them incomprehensible or a so-called black-box for design review and inspection. 
\nIn order to achieve traceability, significant research has been performed on interpretability methods for DNN to provide instance explanations of model prediction and DNN intermediate feature layers~.\nIn autonomous vehicles application, propose VisualBackProp technique and show that a DNN algorithm trained to control a steering wheel would in fact learn patterns of lanes, road edges, and parked vehicles to execute the targeted task. \nHowever, the completeness of interpretability methods to grant traceability is not proven yet~, and in practice, interpretability techniques are mainly used by designers to improve network structure and training process rather than support a safety assessment. \n\\vspace{-0.3em}", "id": "7d7b4464-79b0-4c2d-92ca-eaf1875e72a2", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "695446ed-62d6-49c2-a600-711627e36e58", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Background" ], [ "subsection", "Engineering Safety Limitations in ML" ], [ "subsubsection", "Implementation Transparency" ] ], "subsections": [], "title": "Implementation Transparency" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 1614, 1618, 1619 ], "content": "Design and implementation verification is another demanding requirement for unit testing to meet engineering safety standards. \nFor example, coding guidelines for software safety enforce the elimination of dead or unreachable functions.\nDepending upon the safety integrity level, complete statement, branch coverage, or modified condition and decision coverage are required to confirm the adequacy of the unit tests. \nComing to DNNs, formally verifying their correctness is challenging and in fact provably an NP-hard~ problem due to the high dimensionality of the data. \nTherefore, reaching complete testing and verification of the operational design domain is not feasible for domains like image and video. 
\nAs a result, researchers proposed new techniques such as searching for unknown-unknowns~ and predictor-verifier training~, and simulation-based toolkits guided by formal models and specifications. \nOther techniques, including neuron coverage and fuzz testing in neural networks incorporate these aspects. \nNote that formal verification of shallow and linear models for low dimensional sensor data does not carry verification challenges of the image domain. \n\\vspace{-0.3em}", "id": "8fc22a2d-6cc6-4338-8dec-b82314bc214e", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "695446ed-62d6-49c2-a600-711627e36e58", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Background" ], [ "subsection", "Engineering Safety Limitations in ML" ], [ "subsubsection", "Testing and Verification." ] ], "subsections": [], "title": "Testing and Verification." }, { "cite_extract_rate": 0, "cites": [], "content": "Engineering safety standards treat the ML models as a black box and suggest using methods to improve model performance and robustness.\nHowever, improving model performance and robustness is still an open problem and a vast research topic. \nUnlike code-based algorithms, statistical learning algorithms typically contain a residual error rate (due to false positive and false negative predictions) on the test set. 
\nIn addition to the error rate on the test set, \n\\textit{operational error} is referred to as the model's error rate that commonly occurs in open-world deployment.\nSection \\ref{sec:robustness-performance} reviews various approaches like introducing larger networks, training regularization, active learning and data collection, and domain generalization techniques to increase the model's ability to learn generalizable representations for open-world applications.\n\\vspace{-0.3em}", "id": "528565de-9cae-43e4-afbf-da8fc4a82f0c", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "695446ed-62d6-49c2-a600-711627e36e58", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Background" ], [ "subsection", "Engineering Safety Limitations in ML" ], [ "subsubsection", "Performance and Robustness" ] ], "subsections": [], "title": "Performance and Robustness" }, { "cite_extract_rate": 0, "cites": [], "content": "Engineering safety standards suggest run-time monitoring functions as preventive solutions for various system errors, including less frequent transient errors.\nMonitoring functions in code-based algorithms are based on a rule-set to detect hardware errors and software crashes in the target operational domain. \nHowever, designing monitoring functions to predict ML error (e.g., false positive and false negative errors) is different in nature. \nML models generate prediction probability that could be used to predict uncertainty for run-time validation of predictions. 
\nHowever, research shows that prediction probability in complex models like DNN does not fully represent uncertainty and hence can not guarantee failure prediction~.\nSection \\ref{sec:error-detection} reviews different approaches for run-time uncertainty estimation and detection of outlier samples and adversarial attacks.\n\\vspace{-0.5em}", "id": "9f2878be-f51f-4687-9bfa-175b3443c0bf", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "695446ed-62d6-49c2-a600-711627e36e58", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Background" ], [ "subsection", "Engineering Safety Limitations in ML" ], [ "subsubsection", "Run-time Monitoring" ] ], "subsections": [], "title": "Run-time Monitoring" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:error-types}\nDependability is the model's ability to minimize prediction risk on test samples. \nIn this section, we categorize model dependability into three different dimensions: generalization error (Sec. \\ref{sec:generalization_error}), robustness against distributional shift (Sec. \\ref{sec:distributional_shift}), and robustness against adversarial attacks (Sec. 
\\ref{sec:adversarial_attack}).\nAdditionally, we review benchmark datasets commonly used for each type of model dependability.\n\\vspace{-0.5em}", "id": "5ab20adb-f657-4a79-9b60-59d62e0df59b", "level": "section", "origin_cites_number": 0, "parent_id": "81a424eb-ce19-4d20-b1e1-67e3f071c9db", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ] ], "subsections": [ "99c3df0a-d1fc-465c-8107-46ba7f6eadd7", "1fce065f-a3de-4536-a770-2283500a55da", "4b004293-0e23-42b6-b373-82d9f6603af1" ], "title": "ML Dependability" }, { "cite_extract_rate": 0.4, "cites": [ 1620, 1621 ], "content": "\\label{sec:generalization_error}\nThe first and foremost goal of machine learning is to minimize the \\emph{generalization error}.\nGiven a hypothesis $h$ (e.g., a DNN with learned parameters), the generalization error (also know as the \\emph{true error} and denoted as $R(h)$) is defined as the expected error of $h$ on the data distribution $\\mathcal{D}$~:\n$\n R(h) = \\text{Pr}_{(x,y) \\sim \\mathcal{D}}[h(x)\\neq y] = \\mathbb{E}_{(x,y) \\sim \\mathcal{D}}[\\mathbbm{1}_{h(x)\\neq y}],\n$\nwhere $(x,y)$ is a pair of data and label sampled from $\\mathcal{D}$, and $\\mathbbm{1}$ is the indicator function. \nThe generalization error is not directly computable, since $\\mathcal{D}$ is usually unknown. \nThe \\textit {de facto} practical solution is to learn $h$ by empirical risk minimization (ERM) on the training set $\\mathbb{S}_{S}=\\{(x_i,y_i)\\}_{i=1}^{N_S}$ and estimate its generalization error by the \\emph{empirical error} on the holdout test set $\\mathbb{S}_{T}=\\{(x_i,y_i)\\}_{i=1}^{N_T}$. 
\nFormally, the empirical error $\\hat{R}(h)$ is defined as the mean error on a finite set of data points $\\mathbb{S} \\sim \\mathcal{D}^{m}$~:\n$\n \\hat{R}(h) = \\frac{1}{m} \\sum_{i=1}^{m} \\mathbbm{1}_{h(x_i)\\neq y_i},\n$\nwhere $\\mathbb{S} \\sim \\mathcal{D}^{m}$ means $\\mathbb{S}=\\{(x_i,y_i)\\}_{i=1}^m \\overset{i.i.d.}{\\sim} \\mathcal{D}$.\nThe training and test sets are all sampled from the same distribution $\\mathcal{D}$ but are disjoint.\nRecent years have witnessed the successful application of this holdout evaluation methodology to monitoring the progress of many ML fields, especially where large-scale labeled datasets are available. \nThe generalization error can be affected by many factors, such as training set quality (e.g., imbalanced class distribution , noisy labels ), model capacity, and the training methods (e.g., using pre-training , regularization , etc.)", "id": "99c3df0a-d1fc-465c-8107-46ba7f6eadd7", "level": "subsection", "origin_cites_number": 5, "parent_id": "5ab20adb-f657-4a79-9b60-59d62e0df59b", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ], [ "subsection", "Generalization Error" ] ], "subsections": [ "4f251806-d885-4c8d-9398-8818922bc1b6" ], "title": "Generalization Error" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1623, 1622 ], "content": "Model generalization is commonly evaluated on a separate test set provided for the dataset.\nHowever, recent research has found limitations of such evaluation strategy. \nFor example, showed that the fixed ImageNet test set is not sufficient to reliably evaluate the generalization ability of state-of-the-art image classifiers, due to the insufficiency in representing the rich visual open-world.\n observed that noisy data collection pipeline can lead to a systematic misalignment between the training sets and the real-world tasks. 
\n\\vspace{-0.5em}", "id": "4f251806-d885-4c8d-9398-8818922bc1b6", "level": "paragraph", "origin_cites_number": 3, "parent_id": "99c3df0a-d1fc-465c-8107-46ba7f6eadd7", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ], [ "subsection", "Generalization Error" ], [ "paragraph", "Benchmark Datasets:" ] ], "subsections": [], "title": "Benchmark Datasets:" }, { "cite_extract_rate": 0.75, "cites": [ 1626, 1625, 1624 ], "content": "\\label{sec:distributional_shift}\nWhen measuring generalization error, we assume the source training set $\\mathbb{S}_S$ and the target test set $\\mathbb{S}_T$ are from the same data distribution. \nHowever, this assumption is frequently violated in real world applications. In other words, we will have $p_S(x,y) \\neq p_T(x,y)$ where $p_S(x,y)$ and $p_T(x,y)$ are the joint probability density distributions of data $x$ and label $y$ on the training and test distributions, respectively.\nSuch mismatch between training and test data distribution is known as \\textit{dataset shift}~ (also termed as \\textit{distributional shift} or \\textit{domain shift} in literature).\nIn this paragraph, we describe two most common distributional shifts and their benchmark datasets. \n\\textit{Covariate shift} means that the training and test distributions of the input covariate $x$ are different (i.e., $p_S(x) \\neq p_T(x)$), while the labeling function remains the same (i.e., $p_S(y|x) = p_T(y|x)$). \nCovariate shift may occur due to natural perturbations (e.g., weather and lighting changes), data changes over time (e.g., seasonal variations of data), and even more subtle digital corruptions on images (e.g., JPEG compression and low saturation). \n\\textit{Label distribution shift} (also know as \\textit{prior probability shift}) is the scenario when the marginal distributions of $y$ changes while the class-conditional distribution remains the same. 
Formally, it is defined as $p_S(y) \\neq p_T(y), p_S(x|y) = p_T(x|y)$. \nPrior probability shift is typically concerned in applications where the label $y$ is the casual variable (e.g., pneumonia) for the observed feature $x$ (e.g., chest X-ray) . For example, if we trained a pneumonia predictor using the chest X-ray data collected in summer (when $p(y)$ is low), we may still require it to be accurate on patients visiting in winter (when $p(y)$ is high).\nA special case for label distributional shift is long-tailed recognition , where the training set is long-tail distributed (i.e., $p_S(y)$ follows a long-tailed distribution) while the test set is balanced (i.e., $p_T(y)$ roughly follows a uniform distribution).\nBeyond the above reasonably foreseeable domain shifts, a model may also encounter out-of-distribution (OOD) test samples with semantic contents unseen in the training distribution. \nFor example, for a classifier trained on hand-written digit images, a Roman character is an OOD test sample (while a printed digit is a test sample with domain gap).\n\\textit{OOD detection} is the approach to detect such OOD samples, whose predictions will be rejected (see Section \\ref{sec:ood-detection} for details).", "id": "1fce065f-a3de-4536-a770-2283500a55da", "level": "subsection", "origin_cites_number": 4, "parent_id": "5ab20adb-f657-4a79-9b60-59d62e0df59b", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ], [ "subsection", "Robustness against Distributional Shifts" ] ], "subsections": [ "51c360c4-dd04-4b3d-bb90-115509be7968" ], "title": "Robustness against Distributional Shifts" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 8473, 1606, 7058, 7493, 1627 ], "content": "Several variants of the ImageNet dataset have been introduced to benchmark robustness against distributional shifts of models trained on the original ImageNet dataset.\n introduce two variants of the original ImageNet 
validation set: ImageNet-C benchmark for input corruption robustness and the ImageNet-P dataset for input perturbation robustness. \n present ImageNet-A which contains real-world unmodified samples that falsify state-of-the-art image classifiers. \n present a series of benchmarks for measuring model robustness to variations on image renditions (ImageNet-R benchmark), imaging time and geographic location (StreetView benchmark), and objects size, occlusion, camera viewpoint, and zoom (DeepFashion Remixed benchmark). \n collected ImageNet-V2 using the same data source and collection pipeline as the original ImageNet paper .\nThis new benchmark leads to the observation that the prediction accuracy of even the best image classifiers are still highly sensitive to minutiae of the test set distribution and extensive hyperparameter tuning. \nShifts is a very recently published benchmark dataset for distributional shifts beyond the computer vision tasks.", "id": "51c360c4-dd04-4b3d-bb90-115509be7968", "level": "paragraph", "origin_cites_number": 6, "parent_id": "1fce065f-a3de-4536-a770-2283500a55da", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ], [ "subsection", "Robustness against Distributional Shifts" ], [ "paragraph", "Benchmark Datasets for Covariate Shift:" ] ], "subsections": [ "c2612af4-5ff3-4213-a3f7-f69a4283f2a7", "5a48c2d9-5e39-4f7f-904d-afb8149d4bbe" ], "title": "Benchmark Datasets for Covariate Shift:" }, { "cite_extract_rate": 1, "cites": [ 1628, 1626 ], "content": "Synthetic label distribution shift is commonly used for evaluation : the test samples are manually sampled according to a predefined target label distribution $p_T(y)$ which is different from the source label distribution $p_S(y)$.\nRecently, constructed a real-world label distribution shift dataset for the paper category prediction task. 
Specifically, the authors collected papers from 23 categories on arXiv and extracted the tf-idf vector from the abstract as the features.", "id": "c2612af4-5ff3-4213-a3f7-f69a4283f2a7", "level": "paragraph", "origin_cites_number": 2, "parent_id": "51c360c4-dd04-4b3d-bb90-115509be7968", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ], [ "subsection", "Robustness against Distributional Shifts" ], [ "paragraph", "Benchmark Datasets for Covariate Shift:" ], [ "paragraph", "Benchmark Datasets for Label Distribution Shift:" ] ], "subsections": [], "title": "Benchmark Datasets for Label Distribution Shift:" }, { "cite_extract_rate": 1, "cites": [ 8474, 1629, 1630, 7058 ], "content": "The semantically coherent OOD (SC-OOD) benchmarks are the latest ones on CIFAR10 and CIFAR100, which fixed some sever defects of previous designs.\nImageNet-O , containing 2000 images from 200 classes within ImageNet-22k and outside ImageNet, is a commonly used for ImageNet.\nHendrycks et al. presented three large-scale and high-resolution OOD detection benchmarks for multi-class and multi-label image classification, object detection, and semantic segmentation, respectively. \nThe recent ``SegmentMeIfYouCan'' benchmark has two tasks: anomalous object segmentation and road obstacle segmentation. 
\n\\vspace{-0.5em}", "id": "5a48c2d9-5e39-4f7f-904d-afb8149d4bbe", "level": "paragraph", "origin_cites_number": 4, "parent_id": "51c360c4-dd04-4b3d-bb90-115509be7968", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ], [ "subsection", "Robustness against Distributional Shifts" ], [ "paragraph", "Benchmark Datasets for Covariate Shift:" ], [ "paragraph", "Benchmark Datasets for OOD Detection:" ] ], "subsections": [], "title": "Benchmark Datasets for OOD Detection:" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 910, 1631, 892, 904, 1632 ], "content": "\\label{sec:adversarial_attack}\nAdversarial attacks add synthetic perturbations (termed as adversarial perturbations) onto the original clean sample. \nThe perturbed sample, known as adversarial sample, is intended to cause wrong predictions on machine learning models, while keeping the identical semantic meaning of the clean sample. \nDifferent forms of adversarial perturbations have been studied on different types of data.\nOn image data, typical forms include the $\\ell_p$ constrained additive perturbation , spatial perturbation~, and semantically meaningful perturbation~. 
\nBeyond the image data, adversarial attacks can also be designed, such as by altering the shape of 3D surfaces , by replacing words with synonyms or rephrasing the sentence in natural language data, by applying adversarial printable 2D patches on real-world physical objects , etc.", "id": "4b004293-0e23-42b6-b373-82d9f6603af1", "level": "subsection", "origin_cites_number": 7, "parent_id": "5ab20adb-f657-4a79-9b60-59d62e0df59b", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ], [ "subsection", "Robustness against Adversarial Attacks" ] ], "subsections": [ "0deede3d-fca4-4ba3-9be6-c9ef51953225" ], "title": "Robustness against Adversarial Attacks" }, { "cite_extract_rate": 1, "cites": [ 1633, 7059, 912, 888, 1634 ], "content": "Robustness against adversarial attacks (also known as adversarial robustness) is usually evaluated as the empirical performance (e.g., accuracy in image classification tasks) on the adversarial samples. \nThe key requirement for a faithful evaluation is that the attackers should try their best to break the model. \nThere are two commonly used strategies to achieve this goal.\nFirst, an ensemble of multiple strong and diverse adversarial attacks should be simultaneously used for evaluation. \nA typical setting for such ensemble is the AutoAttack , which consists of four state-of-the-art white-box attacks and two state-of-the-art black-box attacks. \nAnother setting is used to create evaluation benchmarks of ImageNet-UA and CIFAR-10-UA , which contains both $\\ell_p$ constrained adversarial attacks and real-world adversarial attacks such as worst-case image fogging. 
\nSecond, the attacks should be carefully designed to prevent the ``gradient obfuscation'' effect .\nSince the success of traditional white-box attacks depends on accurate calculation of model gradient, they may fail if the model gradient is not easily accessible (e.g., the model has non-differential operations). \nAs a result, evaluating model robustness on such attacks may provide a false sense of robustness. \n proposed three enhancement for traditional white-box attacks as solutions for common causes of gradient obfuscation.\nOther solutions include designing adaptive attacks for each specific defense strategy .\n\\vspace{-0.5em}", "id": "0deede3d-fca4-4ba3-9be6-c9ef51953225", "level": "paragraph", "origin_cites_number": 5, "parent_id": "4b004293-0e23-42b6-b373-82d9f6603af1", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ], [ "subsection", "Robustness against Adversarial Attacks" ], [ "paragraph", "Benchmark Datasets:" ] ], "subsections": [], "title": "Benchmark Datasets:" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:error-types}\nWe define ML dependability as the model's ability to minimize prediction risk on a given test set. \nUnlike code-based algorithms, the dependability of ML algorithms is bounded to the model's learning capacity and statistical assumptions such as independent and identically distribution (i.i.d) relation of source and target domains. \nHowever, maintaining data distribution assumptions when deployed in the open-world is challenging and results in different types of prediction errors.\nIn this section, we decompose model dependability limitations into three prediction error types: (i) Generalization Error, (ii) Distributional Error, and (iii) Adversarial Error as a unified ``big picture'' for dependable and robust ML models in open-world. 
\nAdditionally, we review benchmark datasets commonly used for evaluating model dependability.\n\\vspace{-0.5em}", "id": "7165617c-b651-4c8c-bb23-2d8f417c79d3", "level": "section", "origin_cites_number": 0, "parent_id": "81a424eb-ce19-4d20-b1e1-67e3f071c9db", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ] ], "subsections": [ "be8a4443-5ba9-4476-bf69-4db3f1947256", "9efc7c12-9232-4d0a-a249-d332b0eefee2", "82c86db6-92ce-4a1a-ae20-2b136b857e5b" ], "title": "ML Dependability" }, { "cite_extract_rate": 0.4, "cites": [ 1620, 1621 ], "content": "\\label{sec:generalization_error}\nThe first and foremost goal of machine learning is to minimize the \\emph{Generalization Error}.\nGiven a hypothesis $h$ (e.g., a model with learned parameters), the generalization error (also know as the \\emph{true error} and denoted as $R(h)$) is defined as the expected error of $h$ on the data distribution $\\mathcal{D}$~:\n$\n R(h) = \\text{Pr}_{(x,y) \\sim \\mathcal{D}}[h(x)\\neq y] = \\mathbb{E}_{(x,y) \\sim \\mathcal{D}}[\\mathbbm{1}_{h(x)\\neq y}],\n$\nwhere $(x,y)$ is a pair of data and labels sampled from $\\mathcal{D}$, and $\\mathbbm{1}$ is the indicator function. \nHowever, the generalization error is not directly computable since $\\mathcal{D}$ is usually unknown. \nThe \\textit {de facto} practical solution is to learn $h$ by empirical risk minimization (ERM) on the training set $\\mathbb{S}_{S}=\\{(x_i,y_i)\\}_{i=1}^{N_S}$ and then estimate its generalization error by the \\emph{empirical error} on the holdout test set $\\mathbb{S}_{T}=\\{(x_i,y_i)\\}_{i=1}^{N_T}$. 
\nFormally, the empirical error $\\hat{R}(h)$ is defined as the mean error on a finite set of data points $\\mathbb{S} \\sim \\mathcal{D}^{m}$~:\n$\n \\hat{R}(h) = \\frac{1}{m} \\sum_{i=1}^{m} \\mathbbm{1}_{h(x_i)\\neq y_i},\n$\nwhere $\\mathbb{S} \\sim \\mathcal{D}^{m}$ means $\\mathbb{S}=\\{(x_i,y_i)\\}_{i=1}^m \\overset{i.i.d.}{\\sim} \\mathcal{D}$.\nThe training and test sets are all sampled from the same distribution $\\mathcal{D}$ but are disjoint.\nRecent years have witnessed the successful application of this holdout evaluation methodology to monitoring the progress of many ML fields, especially where large-scale labeled datasets are available. \nThe generalization error can be affected by many factors, such as training set quality (e.g., imbalanced class distribution , noisy labels ), model learning capacity, and training method (e.g., using pre-training or regularization ).\n\\vspace{-0.5em}", "id": "be8a4443-5ba9-4476-bf69-4db3f1947256", "level": "subsection", "origin_cites_number": 5, "parent_id": "7165617c-b651-4c8c-bb23-2d8f417c79d3", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ], [ "subsection", "Generalization Error" ] ], "subsections": [ "35556dc3-956e-446b-86ea-9a4aba6a9db7" ], "title": "Generalization Error" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1623, 1622 ], "content": "Model generalization is commonly evaluated on a separate i.i.d test set provided for the dataset. \nHowever, recent research has found limitations of this evaluation strategy. \nFor example, showed that the fixed ImageNet test set is not sufficient to reliably evaluate the generalization ability of state-of-the-art image classifiers due to the insufficiency in representing the rich visual open-world.\nIn another work, observed that a noisy data collection pipeline could lead to a systematic misalignment between the training sets and the real-world tasks. 
\n\\vspace{-0.6em}", "id": "35556dc3-956e-446b-86ea-9a4aba6a9db7", "level": "paragraph", "origin_cites_number": 3, "parent_id": "be8a4443-5ba9-4476-bf69-4db3f1947256", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ], [ "subsection", "Generalization Error" ], [ "paragraph", "Benchmark Datasets:" ] ], "subsections": [], "title": "Benchmark Datasets:" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:distributional_shift}\nWe define \\emph{Distributional Error} as the increase in model generalization error when the i.i.d assumption between the source training set $\\mathbb{S}_S$ and target test set $\\mathbb{S}_T$ is violated. \nEliminating distributional error is particularly important for real-world applications because the i.i.d assumption is frequently violated in uncontrolled settings. \nIn other words, we will have $p_S(x,y) \\neq p_T(x,y)$ where $p_S(x,y)$ and $p_T(x,y)$ are the joint probability density distributions of data $x$ and label $y$ on the training and test distributions, respectively.\nSuch mismatch between training and test data distribution is known as \\textit{Distributional Shift} (also termed as \\textit{Dataset Shift} or \\textit{Domain Shift}).\nIn the following, we review the three most common roots of distribution shifts and their benchmark datasets. 
\n\\vspace{-0.6em}", "id": "9efc7c12-9232-4d0a-a249-d332b0eefee2", "level": "subsection", "origin_cites_number": 1, "parent_id": "7165617c-b651-4c8c-bb23-2d8f417c79d3", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ], [ "subsection", "Distributional Error" ] ], "subsections": [ "afe3ecb8-ce9a-4d47-b542-819aee8be1f0" ], "title": "Distributional Error" }, { "cite_extract_rate": 0, "cites": [], "content": "refers to a change in the test distribution of the input covariate $x$ compared to training distribution so that $p_S(x) \\neq p_T(x)$ while the labeling function remains the same $p_S(y|x) = p_T(y|x)$. \nCovariate shift may occur due to natural perturbations (e.g., weather and lighting changes), data changes over time (e.g., seasonal variations of data), and even more subtle digital corruptions on images (e.g., JPEG compression and low saturation). \n\\vspace{-0.6em}", "id": "afe3ecb8-ce9a-4d47-b542-819aee8be1f0", "level": "paragraph", "origin_cites_number": 0, "parent_id": "9efc7c12-9232-4d0a-a249-d332b0eefee2", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ], [ "subsection", "Distributional Error" ], [ "paragraph", "Covariate Shift" ] ], "subsections": [ "d0f4826a-d2b9-4bb3-b60d-6be6a87ea56f", "d00802b4-7548-4c7e-9a67-5a411a43d078", "2690b4ad-96cb-41e9-ab12-fc91f8ec6b1c", "f363f76a-5f79-4bde-9da2-7cd8033e8b1e", "e55a5a70-2897-4aae-b74f-c1515fc973d1" ], "title": "Covariate Shift" }, { "cite_extract_rate": 1, "cites": [ 1626, 1625 ], "content": "is the scenario when the marginal distributions of $y$ changes while the class-conditional distribution remains the same. \nLabel distribution shift is also know as \\textit{prior probability shift} and formally defined as $p_S(y) \\neq p_T(y), p_S(x|y) = p_T(x|y)$. 
\nLabel distribution shift is typically concerned in applications where the label $y$ is the casual variable for the observed feature $x$ . \nFor example, a trained model to predict pneumonia (i.e., label $y$) using chest X-ray data (i.e., features $x$) that was collected during summer time (when $p(y)$ is low), should still require it to be accurate on patients (i.e., new inputs) visiting in winter time (when $p(y)$ is high) regardless of label distribution shift. \nLong-tailed distribution is a special case for label distributional shift where the training set $p_S(y)$ follows a long-tailed distribution but the test set is balanced (i.e., $p_T(y)$ roughly follows a uniform distribution).\n\\vspace{-0.6em}", "id": "d0f4826a-d2b9-4bb3-b60d-6be6a87ea56f", "level": "paragraph", "origin_cites_number": 2, "parent_id": "afe3ecb8-ce9a-4d47-b542-819aee8be1f0", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ], [ "subsection", "Distributional Error" ], [ "paragraph", "Covariate Shift" ], [ "paragraph", "Label Distribution Shift" ] ], "subsections": [], "title": "Label Distribution Shift" }, { "cite_extract_rate": 1, "cites": [ 1624 ], "content": "are test time inputs that are outliers to the training set without any semantic content shared with the training distribution, which is considered beyond reasonably foreseeable domain shifts. \nFor example, given a model trained to recognize handwritten characters in English, a Roman character with a completely disjoint label space is an Out-of-Distribution (OOD) test sample. 
\nOOD detection is a common approach to detect such outlier samples, whose predictions should be abstained (see Section \\ref{sec:ood-detection} for details).\n\\vspace{-0.6em}", "id": "d00802b4-7548-4c7e-9a67-5a411a43d078", "level": "paragraph", "origin_cites_number": 1, "parent_id": "afe3ecb8-ce9a-4d47-b542-819aee8be1f0", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ], [ "subsection", "Distributional Error" ], [ "paragraph", "Covariate Shift" ], [ "paragraph", "Out-of-Distribution Samples" ] ], "subsections": [], "title": "Out-of-Distribution Samples" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 8473, 1606, 7058, 7493, 1627 ], "content": "Several variants of the ImageNet dataset have been introduced to benchmark distributional error (i.e., evaluating robustness against distributional shifts) when the model is trained on the original ImageNet dataset.\n introduce two variants of the original ImageNet validation set: ImageNet-C benchmark for input corruption robustness and the ImageNet-P dataset for input perturbation robustness. \nImageNet-A sorts out unmodified samples from ImageNet test set that falsifies state-of-the-art image classifiers. \n present a series of benchmarks for measuring model robustness to variations on image renditions (ImageNet-R benchmark), imaging time or geographic location (StreetView benchmark), and objects size, occlusion, camera viewpoint, and zoom (DeepFashion Remixed benchmark). \n collected ImageNet-V2 using the same data source and collection pipeline as the original ImageNet paper .\nThis new benchmark leads to the observation that the prediction accuracy of even the best image classifiers are still highly sensitive to minutiae of the test set distribution and extensive hyperparameter tuning. \nShifts is another recent benchmark dataset for distributional shifts beyond the computer vision tasks. 
\n\\vspace{-0.6em}", "id": "2690b4ad-96cb-41e9-ab12-fc91f8ec6b1c", "level": "paragraph", "origin_cites_number": 6, "parent_id": "afe3ecb8-ce9a-4d47-b542-819aee8be1f0", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ], [ "subsection", "Distributional Error" ], [ "paragraph", "Covariate Shift" ], [ "paragraph", "Benchmark Datasets for Covariate Shift:" ] ], "subsections": [], "title": "Benchmark Datasets for Covariate Shift:" }, { "cite_extract_rate": 1, "cites": [ 1628, 1626 ], "content": "Synthetic label distribution shift is a common benchmarking method in which the test set is manually sampled according to a predefined target label distribution $p_T(y)$ that is different from the source label distribution $p_S(y)$ . \n is an example of a real-world label distribution shift benchmark for text domain.\n\\vspace{-0.6em}", "id": "f363f76a-5f79-4bde-9da2-7cd8033e8b1e", "level": "paragraph", "origin_cites_number": 2, "parent_id": "afe3ecb8-ce9a-4d47-b542-819aee8be1f0", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ], [ "subsection", "Distributional Error" ], [ "paragraph", "Covariate Shift" ], [ "paragraph", "Benchmark Datasets for Label Distribution Shift:" ] ], "subsections": [], "title": "Benchmark Datasets for Label Distribution Shift:" }, { "cite_extract_rate": 1, "cites": [ 8474, 1629, 1630, 7058 ], "content": "Test sets of natural images with disjoint label space are typically used for benchmarking OOD detection. \nFor example, a model trained on the CIFAR10 dataset may use ImageNet (samples that do not overlap with CIFAR10 labels) as OOD test samples. \nImageNet-O , containing 2000 images from 200 classes within ImageNet-22k and outside ImageNet is an example for the ImageNet-1k dataset.\nHendrycks et al. 
presented three large-scale and high-resolution OOD detection benchmarks for multi-class and multi-label image classification, object detection, and semantic segmentation, respectively. 
 presents semantically coherent OOD (SC-OOD) benchmarks for the CIFAR10 and CIFAR100 datasets.
In another work,  presents benchmarks for anomalous object segmentation and road obstacle segmentation. 
\vspace{-0.5em}
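OOD detection benchmarks are typically scored threshold-free, for example by the AUROC of an in-distribution (ID) versus OOD confidence score; a minimal sketch (the scores below are hypothetical):

```python
def auroc(id_scores, ood_scores):
    """Probability that a randomly chosen ID sample scores higher than a randomly
    chosen OOD sample (ties count 0.5) -- the threshold-free AUROC commonly used
    to report OOD detection performance."""
    pairs = [(i, o) for i in id_scores for o in ood_scores]
    wins = sum(1.0 if i > o else 0.5 if i == o else 0.0 for i, o in pairs)
    return wins / len(pairs)

# Hypothetical confidence scores: ID samples tend to score higher than OOD ones.
id_scores = [0.9, 0.8, 0.95, 0.7]
ood_scores = [0.4, 0.6, 0.75, 0.3]
print(auroc(id_scores, ood_scores))  # 15 of 16 pairs correctly ordered -> 0.9375
```

A perfect detector separates the two score sets completely (AUROC 1.0), while an uninformative one scores 0.5.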
\n\\vspace{-0.5em}", "id": "82c86db6-92ce-4a1a-ae20-2b136b857e5b", "level": "subsection", "origin_cites_number": 7, "parent_id": "7165617c-b651-4c8c-bb23-2d8f417c79d3", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ], [ "subsection", "Adversarial Error" ] ], "subsections": [ "be2bc610-355a-486f-934f-1c8e08af7bda" ], "title": "Adversarial Error" }, { "cite_extract_rate": 1, "cites": [ 1633, 7059, 912, 888, 1634 ], "content": "Evaluating adversarial error (also known as adversarial robustness) is usually done by measuring empirical performance (e.g., accuracy in image classification tasks) on a set of adversarial samples. \nHowever, the key requirement for a faithful evaluation is to use strong and diverse unseen attacks to break the model. \nThere are two commonly used strategies to achieve this goal.\nFirst, an ensemble of multiple strong and diverse adversarial attacks should be simultaneously used for evaluation. \nFor instance, AutoAttack consists of four state-of-the-art white-box attacks and two state-of-the-art black-box attacks. \n present another setting by creating evaluation benchmarks of ImageNet-UA and CIFAR-10-UA, which contain both $\\ell_p$ constrained adversarial attacks and real-world adversarial attacks such as worst-case image fogging. \nSecond, the attacks should be carefully designed to prevent the ``gradient obfuscation'' effect .\nSince the success of traditional white-box attacks depends on the accurate calculation of model gradients, they may fail if the model gradients are not easily accessible (e.g., the model has non-differential operations). \nAs a result, evaluating model robustness on such attacks may provide a false sense of robustness. 
\n proposed three enhancements for traditional white-box attacks as solutions for common causes of gradient obfuscation.\nOther solutions include designing adaptive attacks for each specific defense strategy (see Section \\ref{sec:adv-detection} for details).\n\\vspace{-0.5em}", "id": "be2bc610-355a-486f-934f-1c8e08af7bda", "level": "paragraph", "origin_cites_number": 5, "parent_id": "82c86db6-92ce-4a1a-ae20-2b136b857e5b", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Dependability" ], [ "subsection", "Adversarial Error" ], [ "paragraph", "Benchmark Datasets:" ] ], "subsections": [], "title": "Benchmark Datasets:" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:categorization}\nLooking at fundamental limitations of code-based software safety for machine learning on one hand and research debt in AI safety on the other hand, we review and organize practical ML solutions into a taxonomy for ML safety. \nThe proposed taxonomy unifies ML dependability objective with engineering safety strategies for their safe execution in open-world scenarios. \nOur ML safety taxonomy is followed by a systematic and broad review of relevant ML techniques to serve as a way to checkup coverage and diversity of safety strategies employed in any given ML system. \nAs illustrated in Figure \\ref{fig:roadmap} and Table \\ref{tab:ref-table}, we propose categorizing ML techniques in three following safety strategies: \n\\begin{itemize}\n \\item \\emph{(1) Inherently Safe Model:} refers to techniques for designing ML models that are intrinsically error-free or verifiable to be error-free in their intended target domain. \n We review model transparency and formal methods such as model specification, verification, and formal testing as main pillars to achieve the inherently safe design. 
\n However, there are many open challenges for these solutions to guarantee ML safety.\n \\item \\emph{(2) Enhancing Performance and Robustness:} refers to techniques to increase model performance (on the source domain) and robustness against distributional shifts. \n Perhaps the most commonly used in practice, these techniques contribute to safety by improving the operational performance of ML algorithms.\n We review key approaches and techniques such as training regularization, domain generalization, adversarial training, etc. \n \\item \\emph{(3) Run-time Error Detection} refers to strategies to detect model mispredictions at the run-time (or test-time) to prevent model errors from becoming system failures. \n This strategy can help mitigating hazards related to ML performance limitations in the operational domain. \n We review key approaches and techniques for model uncertainty estimation, out-of-distribution detection, and adversarial attack detection. \n\\end{itemize}\nAdditionally, we emphasize on safety-oriented \\textit{Human-AI Interaction} design as a type of \\textit{Procedural Safeguards} to prioritize end-user awareness and trust, and misuse prevention for non-experts end-users of ML-based products.\nAlso, we differentiate ML safety from Security because the external factors (i.e., attacker) which intentionally exploit system vulnerabilities are the security threats rather than a design limitation. \nTable~\\ref{tab:ref-table} presents a summary of reviewed techniques and papers for each safety strategy. \nReviewed papers are organized into different solutions (middle column) to group papers into individual research approaches. 
\nWe go through details and describe complement each other in the following sections.\n\\input{tables/table-2.tex}\n\\vspace{-0.5em}", "id": "c40bab66-fefa-4f2f-adb6-c822fcacd658", "level": "section", "origin_cites_number": 0, "parent_id": "81a424eb-ce19-4d20-b1e1-67e3f071c9db", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "ML Safety Taxonomy" ] ], "subsections": [], "title": "ML Safety Taxonomy" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:safe-design}\nAchieving inherently safe ML algorithms that are provably error-less w.r.t. is still an open problem (NP-hard in high-dimensional domains) despite being trivial for code-based algorithms.\nIn this section, we review the three main requirements to achieve safe ML algorithms as being \\textit{(1) model transparency}, \\textit{(2) formal specification}, and \\textit{(3) formal verification and testing}.\nThese three requirements aim to formulate high-level system design specifications into low-level task specifications, leading to transparent system design, and formal verification or testing for model specifications.\n\\input{sections/5.1.transparency}\n\\input{sections/5.2.specification}\n\\input{sections/5.3.verification}\n\\vspace{-0.5em}", "id": "3df7c8ef-e02c-4fce-9d3b-80b1e2233171", "level": "section", "origin_cites_number": 0, "parent_id": "81a424eb-ce19-4d20-b1e1-67e3f071c9db", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Inherently Safe Design" ] ], "subsections": [ "1cfe93e9-6cf7-4487-b4a4-cecec977f6bc" ], "title": "Inherently Safe Design" }, { "cite_extract_rate": 0.75, "cites": [ 1636, 1635, 1637 ], "content": "The first main challenge of designing inherently safe ML models lies in the computation complexity and scalability of solutions . 
\nAs ML models are becoming exponentially more complex, it will become extremely difficult to impose specifications and perform verification mechanisms that are well-adapted for large ML models. \nA practical solution could be the modular approach presented by for scaling up formal methods to large ML systems, even when some components (such as perception) do not themselves have precise formal specifications. \nOn the other hand, recent advancements in 3D rendering and simulation have introduced promising solutions for end-to-end testing and semi-formal verification in simulated environments.\nHowever, it is challenging to mitigate the gap between simulation and real-world situations, causing questionable transfer of simulated verification and testing results. \nRecent work starts exploring how simulated formal simulation aid in designing real-world tests . \nAdditionally, thorough and scenario-based simulations enable system verification in broader terms such as monitoring interactions between ML modules in a complex system.\n\\vspace{-0.5em}", "id": "1cfe93e9-6cf7-4487-b4a4-cecec977f6bc", "level": "subsection", "origin_cites_number": 4, "parent_id": "3df7c8ef-e02c-4fce-9d3b-80b1e2233171", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Inherently Safe Design" ], [ "subsection", "Challenges and Opportunities" ] ], "subsections": [], "title": "Challenges and Opportunities" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:robustness-performance}\nEnhancing model performance and robustness is the most common strategy to improve product quality and reduce the safety risk of ML models in the open world. \nSpecifically, techniques to enhance model performance and robustness reduce different model error types to gain dependability w.r.t criteria reviewed in Section \\ref{sec:error-types}. 
\nIn the following, we review and organize ML solutions to improve performance and robustness into three parts focusing on (1) robust network architecture, (2) robust training, and (3) data sampling and manipulation. \n\\input{sections/6.1.model-capacity}\n\\input{sections/6.2.training-regularization}\n\\input{sections/6.3.adv-training}\n\\input{sections/6.4.domain-generalization}\n\\input{sections/6.5.data-and-augmentaion}\n\\vspace{-0.5em}", "id": "d2689171-2b34-421a-b1bf-c43fead1b2cb", "level": "section", "origin_cites_number": 0, "parent_id": "81a424eb-ce19-4d20-b1e1-67e3f071c9db", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Enhancing Model Performance and Robustness" ] ], "subsections": [ "af841c0e-903c-48e9-bd59-90f001bbc018" ], "title": "Enhancing Model Performance and Robustness" }, { "cite_extract_rate": 0.875, "cites": [ 1642, 1641, 7494, 1639, 1622, 1640, 1638 ], "content": "Despite advances in defending different types of naturally occurring distributional shifts and synthetic adversarial attacks, there lacks systematic efforts to tackle robustness limitations in a unified framework to cover the ``in-between\" cases within this spectrum. \nIn fact, in many cases, techniques proposed to enhance one type of robustness do not translate to benefiting other types of robustness. \nFor example, showed that top-performing robust training methods for one type of distribution shift may even harm the robustness on other different distribution shifts. \nAnother less explored direction for ML robustness is to benefit from multi-domain and multi-source training data for improved representation learning. \nThe rich contexts captured from sensor sets with diverse orientations and data modality may improve prediction robustness compared to a single input source (e.g., single camera). 
\nFor example, a recent paper showed that large models trained on multi-modality data, such as CLIP , can significantly improve representation learning to detect domain shift. \nBased on the above finding, a promising direction for future research is to design multi-modality training methods which explicitly encourage model robustness. \nAnother under-exploited approach for model robustness is run-time self-checking based on various temporal and semantic coherence of data. \nFaithful and effective evaluation of model robustness is another open challenge in real-world applications. \nTraditional evaluation approaches are designed based on the availability of labeled test sets on the target domain. \nHowever, in a real-world setting, the target domain may be constantly shifting and making the test data collection inefficient and inaccurate. \nTo address this issue, recent work propose more practical settings to evaluate model robustness with only unlabeled test data or selective data labeling . \nUnlike training datasets and evaluation benchmarks commonly used in research, a safety-aware training set requires extensive data capturing, cleaning, and labeling to increase the coverage of unknown edge cases by collecting them directly from the open-world. \nTechnique like active learning , object re-sampling , and self-labeling allow for efficient and targeted dataset improvements which can directly translate to model performance improvements. \nGenerative Adversarial Networks (GAN) could be an underway trend for generating effective large-scale vision datasets. \nFor example, propose DatasetGAN, an automatic procedure to generate realistic image datasets with semantically segmented labels. 
\n\\vspace{-0.5em}", "id": "af841c0e-903c-48e9-bd59-90f001bbc018", "level": "subsection", "origin_cites_number": 8, "parent_id": "d2689171-2b34-421a-b1bf-c43fead1b2cb", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Enhancing Model Performance and Robustness" ], [ "subsection", "Challenges and Opportunities" ] ], "subsections": [], "title": "Challenges and Opportunities" }, { "cite_extract_rate": 1, "cites": [ 1644, 1643 ], "content": "\\label{sec:error-detection}\nThe third strategy for ML safety is to detect model errors at run-time. \nAlthough the robust training methods discussed in Section \\ref{sec:RT} can significantly improve model robustness, they cannot entirely prevent run-time errors. \nAs a result, run-time monitoring to detect any potential prediction errors is necessary from the safety standpoint. \n\\textit{Selective prediction}, also known as prediction with reject option, is the main approach for run-time error detection .\nSpecifically, it requires the model to cautiously provide predictions only when they have high confidence in the test samples.\nOtherwise, when the model detects potential anomalies, it will trigger fail-safe plans to prevent system failure.\nSelective prediction can significantly improve model robustness at the cost of test set coverage. \nIn this section, we first review methods for model calibration and uncertainty quantification (Sec. \\ref{sec:uncertainty-estimation}) and then go over technique to adopt such methods on specific application scenarios: out-of-distribution detection (Sec. \\ref{sec:ood-detection}) and adversarial attack detection (Sec. \\ref{sec:adv-detection}). 
\n\\input{sections/7.1.uncertainty-estimation}\n\\input{sections/7.2.ood-detection}\n\\input{sections/7.3.adversarial-attack}\n\\vspace{-0.5em}", "id": "09cbc33b-2cc6-42f5-9853-07558520833f", "level": "section", "origin_cites_number": 2, "parent_id": "81a424eb-ce19-4d20-b1e1-67e3f071c9db", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Run-time Error Detection" ] ], "subsections": [ "34569813-910e-4527-af24-a3b2ae876212" ], "title": "Run-time Error Detection" }, { "cite_extract_rate": 0.5, "cites": [ 1645, 7494, 888 ], "content": "An open challenge in OOD detection is to improve performance on near-OOD samples that are visually similar to ID samples but yet outliers w.r.t. semantic meanings. \nThis scenario is very common in fine-grained image classification and analysis domains where target ID samples could be highly similar to OOD samples. \nRecent papers have made attempts in this more challenging scenario ; however, the OOD detection performance on near-OOD samples is still much worse than that performance on far-OOD samples (i.e., visually more distinct samples). \nAnother open research direction is to propose techniques for efficient OOD sample selection and training.\nIn a recent work, present ATOM as an empirically verified technique for mining informative auxiliary OOD training data. \nHowever, this direction remains under-explored, and many useful measures such as gradient norms could be investigated for OOD training efficiency and performance. \nDetecting adversarial examples will remain open research as new attacks are introduced to challenge and defeat detection methods .\nGiven the overhead computational costs for both generating and detecting adversarial samples, an efficient way to nullify attacks could be is to benefit from multi-domain inputs, temporal data characteristics, and domain knowledge from a known clean training set. 
\nA related example is the work by that studies the spatial consistency property in the semantic segmentation task by randomly selecting image patches and cross-checking model predictions among the overlap area. \n\\vspace{-0.5em}", "id": "34569813-910e-4527-af24-a3b2ae876212", "level": "subsection", "origin_cites_number": 6, "parent_id": "09cbc33b-2cc6-42f5-9853-07558520833f", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Run-time Error Detection" ], [ "subsection", "Challenges and Opportunities" ] ], "subsections": [], "title": "Challenges and Opportunities" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 1605, 1612, 1615, 1604, 1610 ], "content": "\\label{sec:discussion}\nIn our survey, we presented a review of fundamental ML limitations in engineering safety methods; followed by a taxonomy of safety-related techniques in ML. \nThe impetus of this work was to leverage from both engineering safety strategies and state-of-the-art ML techniques to enhance the dependability of ML components in autonomous systems. \nHere we summarize key takeaways from our survey and continue with recommendations on each item for future research. \n\\vspace{-0.5em}\n\\subsection*{T1: Engineering Standards Can Support ML Product Safety} \nSafety needs for design, development, and deployment of ML learning algorithms have subtle distinctions with code-based software. Our analyses are aligned with prior work and indicate that conventional engineering safety standards are not directly applicable on ML algorithms design. Consequently, relevant industrial safety standards suggest enforcing limitations on operation domain of critical ML functions to minimize potential hazards due to ML malfunction. 
The limitations enforced on ML functions are due to the lack of ML technology readiness and are intended to minimize the risk of hazard to an acceptable level.
Additionally, recent regulations mandate data privacy and algorithmic transparency, which in turn could encourage new principles in responsible ML development and deployment pipelines. 
Our recommendation is aligned with safety standards: perform thorough risk and hazard assessments for ML components and limit their functionalities to minimize the risk of failure.
\vspace{-0.5em}
\subsection*{T2: The Value of ML Safety Taxonomy}
The main contribution of this paper is to establish a meaningful \textit{ML Safety Taxonomy} based on ML characteristics and limitations to directly benefit from engineering safety practices. 
Specifically, our taxonomy of ML safety techniques maps key engineering safety principles into relevant ML safety strategies to understand and emphasize the impact of each ML solution on model reliability.
The proposed taxonomy is supported by a comprehensive review of related literature and a hierarchical table of representative papers (Table 1) to categorize ML techniques into three major safety strategies and subsequent solutions. 
The benefit of the ML safety taxonomy is to break down the problem space into smaller components, help lay down a road map for safety needs in ML, and identify plausible future research directions. 
We remark on existing challenges and plausible directions as a way to gauge technology readiness for each safety strategy within the main body of the literature review in Sections 5, 6, and 7. 
However, given the fast pace of developments in the field of ML, a thorough assessment of technology readiness may not be one-size-fits-all for ML systems. 
On the other hand, the proposed ML safety taxonomy can benefit from emerging technology concepts such as Responsible AI to take the social and legal aspects of safety into account.
\n\\vspace{-0.5em}\n\\subsection*{T3: Recommendations for Choosing ML Safety Strategies} \nA practical way to improve the safety of complex ML products is to diversify ML safety strategies and hence minimize the risk of hazards associated with ML malfunction. We recognize multiple reasons to benefit from diversification of safety strategies. To start with, as no ML solution guarantees error-free performance in open-world environments, a design based on a collection of diverse solutions could learn a more complete data representation and hence achieve higher performance and robustness.\nIn other words, a design based on a collection of diverse solutions is more likely to maintain robustness under unforeseen distribution shifts, also known as edge cases.\nAdditionally, the overlaps and interactions between ML solutions boost overall performance and reduce development costs. \nFor instance, scenario-based testing for model validation can directly impact data collection and training set quality in design and development cycles. Another example is the positive effects of transfer learning and domain generalization on uncertainty quantification and OOD detection. Lastly, diverse strategies can be applied at different stages of design, development, and deployment of the ML lifecycle, which enables continuous monitoring of ML safety across all ML product teams.\n\\vspace{-0.5em}\n\\subsection*{T4: Recommendations for Safe AI Development Frameworks}\nML system development tools and platforms (MLOps) aim to automate and unify the design, development, and deployment of ML systems with a collection of best practices. \nPrior work has emphasized MLOps tools to minimize development and maintenance costs in large-scale ML systems. 
\nWe propose that existing and emerging MLOps frameworks support and prioritize the adaptation and monitoring of ML safety strategies at both the system and software levels.\nA safety-oriented ML lifecycle incorporates all aspects of ML development, from constructing the safety scope and requirements, to data management, model training and evaluation, and open-world deployment and monitoring. \nIndustry-oriented efforts in safety-aware MLOps can unify necessary tools and metrics and increase accessibility for all AI development teams. \nRecent emerging concepts such as Responsible AI and Explainable AI aim to build safe AI systems that ensure data privacy, fairness, and human-centered values in AI development. \nThese emerging AI concepts can go beyond the functional safety of the intelligent system and help prevent end-users (e.g., the driver of an autonomous vehicle) from unintentionally misusing the system due to over-trust and user unawareness.", "id": "27093b3d-f55a-4c01-aeb4-419662c36eee", "level": "section", "origin_cites_number": 7, "parent_id": "81a424eb-ce19-4d20-b1e1-67e3f071c9db", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Discussion and Conclusions" ] ], "subsections": [], "title": "Discussion and Conclusions" }, { "cite_extract_rate": 0, "cites": [], "content": "We proposed a categorization for a broad range of ML techniques that support algorithmic safety for open-world deployment of ML-based software.\nOur organization of ML techniques is supported by a thorough review of key papers in each branch. \nAt the end, we identified open problems and research directions that can benefit from the research community. \nWe hope our categorization and review help in building new ML safety guidelines and development frameworks for safety-critical applications. 
\n{\\small\n\\setlength{\\bibsep}{0pt}\n\\bibliographystyle{abbrvnat}\n\\bibliography{Bibliography}\n}\n\\end{document}", "id": "f2d6188f-3ee7-40ae-b23a-f8e40643e814", "level": "section", "origin_cites_number": 0, "parent_id": "81a424eb-ce19-4d20-b1e1-67e3f071c9db", "prefix_titles": [ [ "title", "Taxonomy of Machine Learning Safety: A Survey and Primer" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
122
[ 6976, 166, 1605, 1606, 1356, 1604, 892, 1609, 1608, 1607, 1614, 1612, 1615, 1610, 1613, 1611, 1616, 1346, 1617, 499, 1618, 1619, 1620, 1621, 1623, 1622, 1626, 1625, 1624, 8473, 7058, 7493, 1627, 1628, 8474, 1629, 1630, 910, 1631, 904, 1632, 1633, 7059, 912, 888, 1634, 1636, 1635, 1637, 1642, 1641, 7494, 1639, 1640, 1638, 1644, 1643, 1645 ]
1.07983
[ "Hassan Sajjad", "Nadir Durrani", "Fahim Dalvi" ]
Neuron-level Interpretation of Deep NLP Models: A Survey
2021
2021-08-30T11:54:21Z
cs.CL
The proliferation of deep neural networks in various domains has seen an increased need for interpretability of these models. Preliminary work done along this line and papers that surveyed such, are focused on high-level representation analysis. However, a recent branch of work has concentrated on interpretability at a more granular level of analyzing neurons within these models. In this paper, we survey the work done on neuron analysis including: i) methods to discover and understand neurons in a network, ii) evaluation methods, iii) major findings including cross architectural comparisons that neuron analysis has unraveled, iv) applications of neuron probing such as: controlling the model, domain adaptation etc., and v) a discussion on open issues and future research directions.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "81d577c7-6481-41cd-8b79-48e73c6a5850", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ] ], "subsections": [ "41d0c2e5-0bc9-4e11-bba7-da658b523704", "7f95fb19-41ce-4054-b385-cc3c3594ebec", "e12954b7-fc6e-4c95-ba61-28fbdef9e14b", "99234958-aa61-4591-b8c4-762dd38f56da", "bafcff9d-1ddd-433e-9988-8693300da7f1", "acbc66e6-5814-4f0a-848b-86b17e924faa", "a2492ee2-c312-4c75-b088-6f29f214f5e9" ], "title": "root" }, { "cite_extract_rate": 0.769230769230769, "cites": [ 2401, 7622, 8598, 7623, 7165, 7, 7621, 8599, 168, 2909 ], "content": "Models trained using Deep Neural Networks (DNNs) have constantly pushed the state-of-the-art in \nvarious Natural Language Processing (NLP) problems, such as Language Modeling and Machine Translation. Despite this remarkable revolution, the black-box nature of deep neural networks \nhas remained a major bottleneck to their large-scale adoption -- especially in applications where fairness, trust, accountability, reliability, and ethical decision-making are considered critically important metrics, or at least as important as the model's performance. \nThis opaqueness of Deep Neural Networks has spurred a new area of research to analyze and understand these models. A plethora of papers have been written in the past five years on interpreting deep NLP models, in particular to answer one question: \\textit{\\textbf{What knowledge is learned within representations?}} We term \nthis line of \nwork \n\\emph{Representation Analysis}. 
\nRepresentation Analysis thrives on post-hoc decomposability, where we analyze \nthe embeddings to uncover linguistic (and non-linguistic) concepts\\footnote{Please refer to Section~\\ref{sec:definitions} for a formal definition.} that are captured as the network is trained towards an NLP task.\nA majority of the work on \\emph{Representation Analysis} has focused on a holistic view of the representations, i.e. how much knowledge of a certain concept is learned within representations as a whole (see \\newcite{belinkov-etal-2020-analysis} for a survey of this line of work). Recently, a more fine-grained neuron interpretation has started to gain attention. In addition to \nthe holistic view of the representation, \\emph{Neuron Analysis} provides insight into a fundamental question: \\textbf{\\textit{How is knowledge structured within these representations?}} In particular, it targets \nquestions such as:\n\\begin{itemize}\n\\item What \nconcepts are learned within neurons of the network?\n\\item Are there neurons \nthat specialize in learning particular concepts?\n\\item How localized, distributed, and redundant is the knowledge preserved within neurons of the network?\n\\end{itemize}\nAnswers to these questions entail potential benefits beyond understanding the inner workings of models, \nfor example: i) controlling bias and manipulating the system’s behaviour by identifying relevant neurons with respect to a prediction, ii) model distillation by removing less useful neurons, iii) efficient feature selection by selecting the most salient neurons and removing the redundant ones, iv) neural architecture search by guiding the search with important neurons. 
\nThe work on neuron analysis has explored various \ndirections such as: proposing novel methods to \ndiscover concept neurons, analyzing and comparing \narchitectures \nusing neuron distributions, and \nenabling applications of neuron analysis.\nIn this survey, we aim to provide a broad perspective of the field with in-depth coverage of each of these directions. We propose a matrix of seven attributes to compare various neuron analysis methods. Moreover, \nwe discuss the open issues and promising future directions in this area.\nThe survey is organized as follows: \nSection~\\ref{sec:definitions} defines the terminologies and formally introduces neuron analysis. Section~\\ref{sec:methods} covers various neuron analysis methods and compares them \nusing seven attributes.\nSection~\\ref{sec:evaluation} presents the \ntechniques that have been used to evaluate the effectiveness of neuron analysis methods. \nSection~\\ref{sec:findings} discusses the findings of neuron analysis methods. \nLastly, \nSection~\\ref{sec:applications} showcases various applications of the presented methods and Section~\\ref{sec:conclude} \ntouches upon the open issues and future research directions. 
\n\\begin{figure*}[]\n \\centering\n\t\\includegraphics[width=0.98\\linewidth]{figures/overview.png}\n\t\\caption{Overview of neuron analysis summarizing the three objectives as discussed in Section~\\ref{sec:definitions}}\n\t\\label{fig:overview}\n\\end{figure*}\n\\begin{table*}[t]\n\t\\centering\n\t\\footnotesize\n\t\\begin{tabular}{c|cccccccc}\n\t\\toprule\n\t\tWords & Obama & receives & Netanyahu & in & the & capital & of & USA \\\\ \n\t\t\\midrule\n\t\tSuffix & -- & s & -- & -- & -- & -- & -- & -- \\\\\n\t\tPOS & NNP & VBZ & NNP & IN & DT & NN & IN & NP \\\\ \n\t\tSEM & PER & ENS & PER & REL & DEF & REL & REL & GEO \\\\ \n\t\tChunk & B-NP & B-VP & B-NP & B-PP & B-NP & I-NP & B-PP & B-NP \\\\\n\t\tCCG & NP & ((S[dcl]$\\backslash$NP)/PP)/NP & NP & PP/NP & NP/N & N & (NP$\\backslash$NP)/NP & NP\n \\\\\n \t\\bottomrule\n\t\\end{tabular}\n \\caption{Example sentences with different word-level concepts. POS: Parts of Speech tags, SEM: Semantic tags, Chunk: Chunking tags, CCG: Combinatory Categorial Grammar tags}\n\t\\label{tab:example-annotation}\n\\end{table*}", "id": "41d0c2e5-0bc9-4e11-bba7-da658b523704", "level": "section", "origin_cites_number": 13, "parent_id": "81d577c7-6481-41cd-8b79-48e73c6a5850", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:definitions}\nIn this section, we define the terminologies used in \nthe paper and \nthe objective of neuron analysis more formally.", "id": "7f95fb19-41ce-4054-b385-cc3c3594ebec", "level": "section", "origin_cites_number": 0, "parent_id": "81d577c7-6481-41cd-8b79-48e73c6a5850", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Definitions" ] ], "subsections": [ "21bba470-4b52-4e27-9862-3bf703985965" ], "title": "Definitions" }, { "cite_extract_rate": 0, "cites": [], "content": 
"Neural networks,\nsuch as RNNs or Transformer models, consist of various components such as gates/cells, blocks, layers, attention heads, etc. We use the term \\emph{neuron} (also called \\emph{features}, \\emph{experts}, and \\emph{units} in the literature) to refer to the output of a single dimension from any \nneural network component. \nFor example, in the BERT base model, \nthe output of a layer block has 768 neurons and the output of an attention head has 64 neurons. \nMoreover, we refer to individual neurons that learn a single concept as \\emph{focused neurons}, and a set of neurons that in combination represent a concept as \n\\emph{group neurons}.", "id": "21bba470-4b52-4e27-9862-3bf703985965", "level": "paragraph", "origin_cites_number": 0, "parent_id": "7f95fb19-41ce-4054-b385-cc3c3594ebec", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Definitions" ], [ "paragraph", "Neuron" ] ], "subsections": [ "65899ebe-19d0-45be-896c-f5c6f0db4be4" ], "title": "Neuron" }, { "cite_extract_rate": 0, "cites": [], "content": "A concept represents a coherent fragment of knowledge, such as ``a class containing certain objects as elements, where the objects have certain properties''.\nFor example, a concept could be lexical: e.g. words ending with the suffix ``ed'', morphological: e.g. gerund verbs, or semantic: e.g. names of cities. We loosely define a concept $\\mathcal{C}$ as \\textbf{\\textit{a group of words that are coherent with respect to a linguistic property}}. 
Table \\ref{tab:example-annotation} shows an example sentence with different concept annotations.", "id": "65899ebe-19d0-45be-896c-f5c6f0db4be4", "level": "paragraph", "origin_cites_number": 0, "parent_id": "21bba470-4b52-4e27-9862-3bf703985965", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Definitions" ], [ "paragraph", "Neuron" ], [ "paragraph", "Concept" ] ], "subsections": [ "f61b1fa4-eeb3-4157-96e8-3936cf2188df" ], "title": "Concept" }, { "cite_extract_rate": 0, "cites": [], "content": "Figure~\\ref{fig:overview} presents an overview of various objectives in neuron analysis. \nFormally, given a model $\\mathcal{M}$ and a set of neurons $\\mathcal{N}$ (which may consist of all the neurons in the network or a specific subset from particular components like a layer or an attention head) and a concept $\\mathcal{C}$, neuron analysis aims to achieve one of the following objectives: \n\\begin{itemize}\n\t\\item For a concept $\\mathcal{C}$, find a ranked list of $|\\mathcal{N}|$ neurons\n\twith respect to the concept (dotted blue line)\n\t\\item Given a neuron $n_i \\in \\mathcal{N}$, find a set of concepts $|\\mathcal{C}|$ the neuron represents (dashed purple line)\n\t\\item Given a set of neurons, find a subset of neurons that encode similar knowledge (solid green line)\n\\end{itemize}\nThe former two aim to understand what concepts are encoded within the learned representation. The last objective analyzes how knowledge is distributed across neurons.\n Each neuron $n_i \\in \\mathcal{N}$ is represented as a vector of activation values over some dataset $\\mathcal{D}$. \nHere, every element of the vector corresponds to a word.\nFor phrase or sentence-level concepts, an aggregation of neuron activations over words in the phrase/sentence is used. 
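The span-level aggregation just mentioned can be sketched minimally as follows (NumPy; the pooling choice, the function name, and the toy values are illustrative assumptions, not taken from any cited work):

```python
import numpy as np

def aggregate_activations(word_acts, spans, pool="mean"):
    """Aggregate one neuron's per-word activations over phrase spans.

    word_acts: 1-D array, one activation value per word.
    spans: list of (start, end) word-index ranges, one per phrase.
    """
    pooled = []
    for start, end in spans:
        segment = word_acts[start:end]
        pooled.append(segment.mean() if pool == "mean" else segment.max())
    return np.array(pooled)

# One neuron's activations over an 8-word sentence, aggregated for two phrases.
acts = np.array([0.1, 0.9, 0.8, 0.2, 0.1, 0.7, 0.6, 0.3])
phrase_vec = aggregate_activations(acts, [(1, 3), (5, 7)])
print(phrase_vec)
```

Mean pooling is only one choice; max pooling or a last-word representation are equally common in practice.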
Alternatively, the \\texttt{[CLS]} token representation is also used for transformer models that are transfer-learned towards a downstream NLP task.", "id": "f61b1fa4-eeb3-4157-96e8-3936cf2188df", "level": "paragraph", "origin_cites_number": 0, "parent_id": "65899ebe-19d0-45be-896c-f5c6f0db4be4", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Definitions" ], [ "paragraph", "Neuron" ], [ "paragraph", "Concept" ], [ "paragraph", "Objective" ] ], "subsections": [], "title": "Objective" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 8598, 2911, 2910, 8599, 2909 ], "content": "\\label{sec:methods}\nWe have classified the work done on neuron analysis into five broad categories of methods, namely: i) visualizations, ii) corpus-based, iii) probing-based, iv) causation-based, and v) miscellaneous methods, based on a set of attributes we describe below: \n\\begin{itemize}\n \\item Scope: Does the method provide global \n or local \n interpretation? \n Global methods accumulate statistics across a set of examples to discover the role of a neuron. \n Local methods provide interpretation of a neuron in a particular example and may not necessarily reflect its role over a large corpus.\n \\item Input and Output: What is the input (e.g. a set of neurons or concepts) to the method and what does it output? \n \\item Scalability: Can the method be scaled to a larger set of neurons?\n \\item HITL: Does the method require a human-in-the-loop for interpretation?\n \\item Supervision: Does the method depend on labeled data to provide interpretation?\n \\item Causation: Is the interpretation connected with the model's prediction? \n\\end{itemize}\nTable~\\ref{tab:neuronmethods} summarizes and compares each method in light of these attributes. 
We discuss them in detail below.\\footnote{Table \\ref{tab:appendix:neuronmethods} in Appendix gives a more comprehensive list.}\n\\begin{table*}[t]\n\\centering\n\\footnotesize\n\\begin{tabular}{p{0.19\\textwidth}p{0.08\\textwidth}p{0.1\\textwidth}p{0.1\\textwidth}p{0.1\\textwidth}p{0.06\\textwidth}p{0.1\\textwidth}p{0.06\\textwidth}}\n\\toprule\n & Scope & Input & Output & Scalability & HITL & Supervision & Causation \\\\\n\\midrule\n\\rowcolor[HTML]{D9EAD3} \n\\multicolumn{8}{l}{\\cellcolor[HTML]{D9EAD3}\\textbf{Visualization}} \\\\\n\\rowcolor[HTML]{D9EAD3} \n & local & neuron & concept & low & yes & no & no \\\\\n\\rowcolor[HTML]{A4C2F4} \n\\multicolumn{8}{l}{\\cellcolor[HTML]{A4C2F4}\\textbf{Corpus-based methods}} \n \\\\\n\\multicolumn{8}{l}{\\cellcolor[HTML]{A4C2F4}{Concept Search}} \\\\\n\\rowcolor[HTML]{A4C2F4} \n\\cellcolor[HTML]{A4C2F4} & global & neuron & concept & low & yes & no & no \\\\\n\\rowcolor[HTML]{A4C2F4} \n & global & neuron & concept & high & no & no & no \\\\\n\\rowcolor[HTML]{A4C2F4} \n\\multicolumn{8}{l}{\\cellcolor[HTML]{A4C2F4}{Neuron Search}} \\\\\n\\rowcolor[HTML]{A4C2F4} \n\\cellcolor[HTML]{A4C2F4} & global & concept & neurons & high & no & yes & no \\\\\n\\rowcolor[HTML]{EA9999} \n\\multicolumn{8}{l}{\\cellcolor[HTML]{EA9999}\\textbf{Probing-based methods}} \\\\\n\\rowcolor[HTML]{EA9999} Linear & global & concept & neurons & high & no & yes & no \\\\\n\\rowcolor[HTML]{EA9999} \nGaussian & global & concept & neurons & high & no & yes & no \\\\\n\\rowcolor[HTML]{D9D9D9} \n\\multicolumn{8}{l}{\\cellcolor[HTML]{D9D9D9}\\textbf{Causation-based methods}} \\\\\n\\rowcolor[HTML]{D9D9D9} \nAblation & both & concept/ class & neurons & medium & no & no & yes \\\\\n\\rowcolor[HTML]{D9D9D9} \nKnowledge attribution & local & concept/ class & neurons & high & no & no & yes \\\\\n\\rowcolor[HTML]{FFE599} \n\\multicolumn{8}{l}{\\cellcolor[HTML]{FFE599}\\textbf{Miscellaneous methods}} \\\\\n\\rowcolor[HTML]{FFE599} \nCorpus generation & global & 
neuron & concept & low & yes & no & no \\\\\n\\rowcolor[HTML]{FFE599} \nMatrix factorization & local & neurons & neurons & low & yes & no & no \\\\\n\\rowcolor[HTML]{FFE599} \nClustering & global & neurons & neurons & high & yes & no & no \\\\\n\\rowcolor[HTML]{FFE599} \nMulti model search & global & neurons & neurons & high & yes & no & no \\\\ \n\\bottomrule\n\\end{tabular}\n\\caption{Comparison of neuron analysis methods based on various attributes. The exhaustive list of citations for each method is provided in the text.} \n\\label{tab:neuronmethods}\n\\end{table*}", "id": "e12954b7-fc6e-4c95-ba61-28fbdef9e14b", "level": "section", "origin_cites_number": 7, "parent_id": "81d577c7-6481-41cd-8b79-48e73c6a5850", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Neuron Analysis Methods" ] ], "subsections": [ "01454814-cf97-4f62-adb2-09d747c8cf8b", "5176e758-71b9-4fce-87a1-c46ce47d1a3e", "2ae46c4c-2724-42b1-ad00-d1ffeade8ae0", "d96870c8-fadd-42d3-a191-1b8ead129436", "b6dbcfb8-3c71-4e58-962e-8dce2aa7597a" ], "title": "Neuron Analysis Methods" }, { "cite_extract_rate": 0.5, "cites": [ 2911 ], "content": "\\label{sec:method:visualization}\nA simple way to discover the role of a neuron is \nby visualizing its activations and manually identifying \nthe underlying concept over a set of sentences.\nGiven that deep NLP models are trained using billions of neurons, it is \nimpossible to visualize all \nthe neurons. A number of clues have been used to shortlist the \nneurons for visualization, \nfor example, selecting saturated neurons, high/low variance neurons, or ignoring dead neurons when using the ReLU activation function.\\footnote{Saturated neurons have a gradient value of zero. 
Dead neurons have an activation value of zero.}", "id": "01454814-cf97-4f62-adb2-09d747c8cf8b", "level": "subsection", "origin_cites_number": 2, "parent_id": "e12954b7-fc6e-4c95-ba61-28fbdef9e14b", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Neuron Analysis Methods" ], [ "subsection", "Visualization" ] ], "subsections": [ "acce36b1-fbcb-4699-a007-45b0a0f27eec" ], "title": "Visualization" }, { "cite_extract_rate": 0, "cites": [], "content": "While visualization is a simple approach to find an explanation for a neuron, it has \nsome major limitations: i) it is qualitative \nand subjective, ii) it cannot be scaled to the entire network due to an extensive human-in-the-loop effort, iii) it is difficult to interpret polysemous neurons that acquire multiple roles in different contexts, \niv) \nit is ineffective in identifying \\emph{group neurons},\nand lastly, v) not all neurons are visually interpretable. Visualization nevertheless remains a useful tool when applied in combination with other interpretation methods that are discussed below.", "id": "acce36b1-fbcb-4699-a007-45b0a0f27eec", "level": "paragraph", "origin_cites_number": 0, "parent_id": "01454814-cf97-4f62-adb2-09d747c8cf8b", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Neuron Analysis Methods" ], [ "subsection", "Visualization" ], [ "paragraph", "Limitation" ] ], "subsections": [], "title": "Limitation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:method:corpus_selection}\nCorpus-based methods discover the role of neurons by aggregating statistics over data activations. \nThey establish a connection between a neuron and a concept using co-occurrence between a neuron's activation values and the existence of the concept in the underlying input instances (e.g. 
words, phrases, or the entire sentence).\nCorpus-based methods are global interpretation methods\nas they interpret the role of a neuron over a set of inputs. \nThey can be effectively used in combination with the visualization method to reduce the search space \nfor finding \nthe most relevant portions of data that activate a neuron, thus significantly reducing the human-in-the-loop effort.\nCorpus-based methods can be broadly classified into two sets: i) the methods that take a neuron as an input and identify the concept the neuron has learned (\\emph{Concept Search}), and ii) the methods that take a concept as input and identify the neurons learning the concept (\\emph{Neuron Search}).", "id": "5176e758-71b9-4fce-87a1-c46ce47d1a3e", "level": "subsection", "origin_cites_number": 0, "parent_id": "e12954b7-fc6e-4c95-ba61-28fbdef9e14b", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Neuron Analysis Methods" ], [ "subsection", "Corpus-based Methods" ] ], "subsections": [ "735b06d6-f03b-4895-a81b-72d5ab51d096" ], "title": "Corpus-based Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "This set of methods takes a neuron as an input and searches for a concept that the neuron has learned. They sort the input instances based on the activation values of the given neuron. The top activating instances indicate the concept the neuron has learned. \n\\newcite{kadar-etal-2017-representation} discovered neurons that learn various linguistic concepts using this approach. \nThey extracted top-20, 5-gram contexts for each neuron based on the magnitude of activations and manually \nidentified the underlying concepts. This manual effort of identifying concepts is cumbersome and requires a \nhuman-in-the-loop. \\newcite{Na-ICLR} addressed this by using lexical concepts of various granularities. Instead of 5-gram contexts, they extracted top-k activating sentences for each neuron. 
They parsed the \nsentences to \ncreate concepts (words and phrases) using the nodes of the parse trees.\nThey then created synthetic sentences that highlight a concept, e.g. a particular word occurring in all synthetic sentences. The neurons that activate strongly on these sentences are considered to have learned the concept. This methodology is \nuseful in analyzing neurons that are responsible for multi-word concepts such as phrases and idiomatic collocations. \nHowever, the synthetic sentences are often ungrammatical and \nthis leads to the risk of identifying neurons that exhibit \narbitrary behavior (like repetition) instead of concept-specific behavior.", "id": "735b06d6-f03b-4895-a81b-72d5ab51d096", "level": "paragraph", "origin_cites_number": 0, "parent_id": "5176e758-71b9-4fce-87a1-c46ce47d1a3e", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Neuron Analysis Methods" ], [ "subsection", "Corpus-based Methods" ], [ "paragraph", "Concept Search" ] ], "subsections": [ "9c1dac3a-95ef-4ce8-8f52-2f04ded75c99", "8d48d6c2-fde9-4077-9f56-667365f35cb2" ], "title": "Concept Search" }, { "cite_extract_rate": 0, "cites": [], "content": "The second class of corpus-based methods \naims to discover neurons \nfor a given concept.\nThe underlying idea is the same, \ni.e. to establish a link between the concept and neurons based on co-occurrence statistics, but in the opposite direction. The activation values play a role in weighting these links to obtain a ranked list of neurons against the concept.\n\\newcite{Mu-Nips} \nachieved this by creating a binary mask of a neuron based on a threshold on its activation values for every sentence in the corpus. Similarly, they created a binary mask for every concept based on its presence or absence in a sentence. 
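One possible realization of this binary-mask setup, together with a simple intersection-over-union overlap score, is sketched below (NumPy; the activation threshold and toy data are assumptions for illustration):

```python
import numpy as np

def iou_score(neuron_acts, concept_mask, threshold=0.5):
    """Overlap between one neuron and one concept over a corpus of sentences.

    neuron_acts: per-sentence activation of the neuron (e.g. max over words).
    concept_mask: 1 if the concept occurs in the sentence, else 0.
    """
    neuron_mask = neuron_acts > threshold              # binarize the neuron
    inter = np.logical_and(neuron_mask, concept_mask).sum()
    union = np.logical_or(neuron_mask, concept_mask).sum()
    return inter / union if union else 0.0

acts = np.array([0.9, 0.1, 0.8, 0.2, 0.7])   # toy per-sentence activations
concept = np.array([1, 0, 1, 0, 0])          # concept present in sentences 0, 2
print(iou_score(acts, concept))              # 2 in intersection / 3 in union
```

A high score indicates that the neuron fires on roughly the same sentences in which the concept occurs.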
They then computed the overlap between a given neuron mask vector and a concept mask vector using intersection-over-union (IoU),\n and used these to generate compositional explanations. In contrast, \\newcite{suau2020finding} used the values of neuron activations as prediction scores and computed the average precision per neuron and per concept. Finally, \\newcite{antverg2022on} considered the mean activation values of a neuron with respect to instances that possess the concept of interest.\nThe two families of methods give complementary views of neuron interpretation. While \\emph{Neuron Search} methods aim to find the neuron that has learned a concept, \\emph{Concept Search} methods generate explanations for neurons by aligning them with a concept.", "id": "9c1dac3a-95ef-4ce8-8f52-2f04ded75c99", "level": "paragraph", "origin_cites_number": 0, "parent_id": "735b06d6-f03b-4895-a81b-72d5ab51d096", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Neuron Analysis Methods" ], [ "subsection", "Corpus-based Methods" ], [ "paragraph", "Concept Search" ], [ "paragraph", "Neuron Search" ] ], "subsections": [], "title": "Neuron Search" }, { "cite_extract_rate": 0, "cites": [], "content": "The corpus-based methods do not model the selection of \\emph{group neurons} that work together to learn a concept. Concept Search methods consider every neuron independently. 
Similarly, Neuron Search methods do not find the correlation of a group of neurons with respect to the given concept.", "id": "8d48d6c2-fde9-4077-9f56-667365f35cb2", "level": "paragraph", "origin_cites_number": 0, "parent_id": "735b06d6-f03b-4895-a81b-72d5ab51d096", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Neuron Analysis Methods" ], [ "subsection", "Corpus-based Methods" ], [ "paragraph", "Concept Search" ], [ "paragraph", "Limitation" ] ], "subsections": [], "title": "Limitation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:method:probing_classifier}\nProbing-based methods train \ndiagnostic classifiers over activations to identify neurons with respect to pre-defined concepts. \nThey are global interpretation methods that discover \na set of neurons with respect to each concept using supervised data annotations. \nThey are highly scalable and can be easily applied to a large set of neurons and \nover a large set of concepts. \nIn the following, we cover two types of classifiers used for probing.", "id": "2ae46c4c-2724-42b1-ad00-d1ffeade8ae0", "level": "subsection", "origin_cites_number": 1, "parent_id": "e12954b7-fc6e-4c95-ba61-28fbdef9e14b", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Neuron Analysis Methods" ], [ "subsection", "Probing-based Methods" ] ], "subsections": [ "0b2f96c4-b1e2-449a-904b-792f6e29e584" ], "title": "Probing-based Methods" }, { "cite_extract_rate": 0, "cites": [], "content": "The idea \nis to train a linear classifier towards the concept of interest, using the activation vectors generated by the model being analyzed. \nThe weights assigned to neurons (features to the classifier) serve as their importance score with respect to the concept. The regularization of the classifier directly affects the weights and therefore the ranking of neurons. 
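As a rough sketch of this setup — synthetic activations, a logistic-regression probe, and neurons ranked by absolute weight — one might write (scikit-learn; the data, penalty choice, and hyper-parameters are assumptions, not values from the cited works):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "activations": 500 tokens x 100 neurons; neuron 7 drives the concept.
X = rng.normal(size=(500, 100))
y = (X[:, 7] > 0).astype(int)          # toy concept labels

# Probe with an elastic-net penalty (the saga solver supports l1_ratio).
probe = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=1.0, max_iter=5000)
probe.fit(X, y)

# Rank neurons by absolute probe weight; the salient neuron should rank first.
ranking = np.argsort(-np.abs(probe.coef_[0]))
print(ranking[:5])
```

Swapping `l1_ratio` between 0 and 1 moves the probe between pure L2 and pure L1 behavior, which changes how spiky or grouped the resulting ranking is.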
\\newcite{Radford} used L1 regularization, which forces the classifier to learn spiky weights, indicating the selection of very few specialized neurons learning a concept, while setting the majority of neurons' weights to zero. \\newcite{lakretz-etal-2019-emergence}, on the other hand, used L2 regularization to encourage grouping of features. This translates to discovering \\emph{group neurons} that are jointly responsible for a concept. \\newcite{dalvi:2019:AAAI} used ElasticNet regularization, which combines the benefits of L1 and L2, accounting for both highly correlated \n\\emph{group neurons} and \nspecific \\emph{focused neurons} with respect to a concept.", "id": "0b2f96c4-b1e2-449a-904b-792f6e29e584", "level": "paragraph", "origin_cites_number": 0, "parent_id": "2ae46c4c-2724-42b1-ad00-d1ffeade8ae0", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Neuron Analysis Methods" ], [ "subsection", "Probing-based Methods" ], [ "paragraph", "Linear Classifiers" ] ], "subsections": [ "9c7f994b-e743-49b7-a00e-12debba70995", "c702664b-d405-4265-b074-5e7e60ae0cde", "44833605-7cfd-471f-aa27-30a550d6e97c" ], "title": "Linear Classifiers" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 2910 ], "content": "A pitfall of \nprobing classifiers is whether a probe \nfaithfully reflects the concept \nlearned within the representation or just \nmemorizes the task. Researchers have mitigated this pitfall for some analyses by using random initialization of neurons and control tasks to demonstrate that the knowledge is indeed encoded within the neurons and not due to the probe's capacity for memorization. Another discrepancy in the neuron probing framework, which especially affects linear classifiers, is that variance patterns in neurons differ strikingly across the layers. 
\\newcite{sajjad2021:znorm} suggested applying z-normalization as a pre-processing step to any neuron probing method to alleviate this issue.", "id": "9c7f994b-e743-49b7-a00e-12debba70995", "level": "paragraph", "origin_cites_number": 3, "parent_id": "0b2f96c4-b1e2-449a-904b-792f6e29e584", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Neuron Analysis Methods" ], [ "subsection", "Probing-based Methods" ], [ "paragraph", "Linear Classifiers" ], [ "paragraph", "Limitation" ] ], "subsections": [], "title": "Limitation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\newcite{torroba-hennigen-etal-2020-intrinsic} trained a generative classifier with the assumption that neurons exhibit a Gaussian distribution. They fit a multivariate Gaussian over all neurons and \nextracted individual probes for single neurons. A \ncaveat of their approach is that activations do not always follow a \\emph{Gaussian prior} in practice -- hence restricting their analysis to only the neurons that satisfy this criterion. 
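A single-neuron variant of such a generative probe — one Gaussian fitted per class to a neuron's activations, scored by accuracy — might be sketched as follows (synthetic data; this is a simplified illustration, not the cited implementation):

```python
import numpy as np

def gaussian_neuron_probe(acts, labels):
    """Fit a 1-D Gaussian per class for one neuron; return training accuracy."""
    classes = np.unique(labels)
    params = {c: (acts[labels == c].mean(), acts[labels == c].std() + 1e-8)
              for c in classes}

    def log_lik(x, mu, sigma):
        # Log-density of N(mu, sigma^2), dropping the shared constant term.
        return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)

    scores = np.stack([log_lik(acts, *params[c]) for c in classes])
    preds = classes[np.argmax(scores, axis=0)]
    return (preds == labels).mean()

rng = np.random.default_rng(0)
labels = np.array([0] * 200 + [1] * 200)
# A neuron that separates the two classes well vs. one that carries no signal.
good = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])
noise = rng.normal(0, 1, 400)
print(gaussian_neuron_probe(good, labels), gaussian_neuron_probe(noise, labels))
```

Neurons whose per-class Gaussians separate the classes score high; a neuron with no signal stays near chance.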
Moreover, the interpretation is limited to single neurons, and identifying groups of neurons requires an expensive greedy search.
In addition to the shortcomings discussed above, a major limitation of probing-based methods is the requirement of supervised data for training the classifier, thus limiting the analysis only to pre-defined or annotated concepts.
The methods we have discussed so far are limited to identifying neurons that have learned the encoded concepts. They do not inherently reflect the importance of these neurons to the model's performance.
\nCausation-based methods \nidentify neurons with respect to model's prediction.", "id": "d96870c8-fadd-42d3-a191-1b8ead129436", "level": "subsection", "origin_cites_number": 0, "parent_id": "e12954b7-fc6e-4c95-ba61-28fbdef9e14b", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Neuron Analysis Methods" ], [ "subsection", "Causation-based methods" ] ], "subsections": [ "13b0a245-e39d-453e-a3fc-6924bc8e4535" ], "title": "Causation-based methods" }, { "cite_extract_rate": 0, "cites": [], "content": "The central idea behind ablation is to notice the affect of a neuron on model's performance by varying its value. This is done either by clamping its value to zero or a fixed value \nand observe\nthe change in network's performance.\nAblation \nhas been effectively used to find i) salient neurons with respect to a model (unsupervised), ii) salient neurons with respect to a \nparticular output class in the network (supervised). The former identifies neurons that incur a large drop in model's performance \nwhen ablated~. The latter selects neurons that cause the model to flip its prediction with respect to a certain class~. 
Here, the output class serves as the concept against which we want to find the salient neurons.
Identifying \emph{group neurons} requires ablating all possible combinations of neurons, which is an NP-hard problem~. Several researchers have tried to circumvent this by using leave-one-out estimates~, beam search~, learning an end-to-end differentiable prediction model~, and using correlation clustering to group similar neurons before ablation~. Nevertheless, all these approaches are approximations and may incur search errors.
Attribution-based methods highlight the importance of input features and neurons with respect to a prediction~. \newcite{knowledgeneurons} used an attribution-based method to identify salient neurons with respect to a relational fact.
They hypothesized that factual knowledge is stored in the neurons of the feed-forward neural networks of the Transformer model and used integrated gradients~ to identify the top neurons that express a relational fact. The work of \newcite{knowledgeneurons} shows the applicability of attribution methods in discovering causal neurons with respect to a concept of interest and is a promising research direction.
Attribution-based methods highlight salient neurons with respect to a prediction, but what concepts these salient neurons have learned remains unknown. \newcite{knowledgeneurons} worked around this by limiting their study to model classes where each class serves as a concept.
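The integrated gradients computation underlying such attributions can be sketched on a toy differentiable scorer; the weights and scoring function below are hypothetical stand-ins for a real model's forward pass:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "model": a logistic scorer over 4 neuron activations (weights assumed).
w = np.array([2.0, -1.0, 0.0, 0.5])

def f(x):
    return sigmoid(w @ x)

def grad_f(x):
    s = f(x)
    return s * (1.0 - s) * w  # analytic gradient of sigmoid(w.x) w.r.t. x

def integrated_gradients(x, baseline, steps=100):
    # Midpoint Riemann-sum approximation of the path integral of gradients
    # along the straight line from the baseline to the input.
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_f(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.ones(4)
baseline = np.zeros(4)
attr = integrated_gradients(x, baseline)

# Completeness: attributions approximately sum to f(x) - f(baseline).
print(attr, attr.sum(), f(x) - f(baseline))
```

In the cited work the same idea is applied to the activations of the feed-forward sub-layers, with neurons (rather than raw inputs) receiving the attribution scores.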
Attribution-based methods can be enriched by complementing them with other neuron analysis methods, such as corpus search, that associate salient neurons with a concept.
In this section, we cover a diverse set of methods that do not fit into the above defined categories.
A large body of neuron analysis methods identifies neurons with respect to pre-defined concepts, and the scope of the search is limited to the corpus used to extract the activations. It is possible that a neuron represents a diverse concept that is not featured in the corpus. The \emph{Corpus Generation} method addresses this problem by generating novel sentences that maximize a neuron's activations. These sentences unravel hidden information about a neuron, helping the annotator better describe its role. Corpus generation has been widely explored in Computer Vision.
For example, \\newcite{erhan_gradient_ascent_techreport} used gradient ascent to generate synthetic input images that maximize the activations of a neuron. However, a \ngradient ascent can not be directly applied in NLP, because \nof the discrete inputs. \\newcite{poerner-etal-2018-interpretable} worked around this problem by using \\emph{Gumble Softmax} \nand showed their method to surpass Concept Search method\n in interpreting neurons.", "id": "e575a499-8789-438e-a289-b70ab9937e62", "level": "paragraph", "origin_cites_number": 0, "parent_id": "b6dbcfb8-3c71-4e58-962e-8dce2aa7597a", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Neuron Analysis Methods" ], [ "subsection", "Miscellaneous Methods" ], [ "paragraph", "Corpus Generation" ] ], "subsections": [ "a5e83337-536f-4f30-aedf-3456fb8780e2", "b3e7a150-417b-44e0-97ec-6f50054cb263", "863e9a16-34a7-4fda-bf13-ae7754a0f257", "df26e196-706f-4b60-9a89-3e107a266050", "ad8984bf-85df-417a-9b16-e3cf25c6e107", "37454bb4-d38b-4e8e-bfde-25cf80670475", "bc445ca4-f7f2-470e-891c-27da137c052a" ], "title": "Corpus Generation" }, { "cite_extract_rate": 0, "cites": [], "content": "Although the corpus generation method has the benefit of generating novel patterns that explain a neuron beyond the space of the underlying corpus, it often generates nonsensical patterns and sentences that are difficult to analyze in isolation. 
A thorough evaluation is necessary to know its true potential and efficacy in NLP.
The Matrix Factorization (MF) method decomposes a large matrix into a product of smaller matrices of factors, where each factor represents a group of elements performing a similar function. Given a model, the activations of an input sentence form a matrix. MF can be effectively applied to decompose this activation matrix into smaller matrices of factors, where each factor consists of a set of neurons that learn a concept. MF is a local interpretation method. It is commonly used in analyzing vision models~. We could not find any research using MF on NLP models. To the best of our knowledge, \newcite{alammar2020explaining} is the only blog post that introduces it in the NLP domain.
Compared to the previously discussed unsupervised methods, MF has an innate benefit of discovering \emph{group neurons}.
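As an illustrative sketch (not taken from the cited post), non-negative matrix factorization from scikit-learn can decompose a synthetic activation matrix into factors, each of which induces a candidate group of neurons; the factor count is an assumed hyperparameter:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic non-negative activations for one sentence: 20 tokens x 50 neurons.
# (Real activations may need shifting or clipping to be non-negative.)
A = rng.random((20, 50))

# Decompose into 5 factors; the factor count is an assumed hyperparameter.
nmf = NMF(n_components=5, init="nndsvda", random_state=0, max_iter=500)
W = nmf.fit_transform(A)   # token-to-factor weights   (20 x 5)
H = nmf.components_        # factor-to-neuron weights  (5 x 50)

# Neurons loading most heavily on a factor form a candidate neuron group.
for k in range(5):
    group = np.argsort(-H[k])[:5]
    print(f"factor {k}: neurons {group}")
```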
However, it is still non-trivial to identify the number of groups (factors) to decompose the activation matrix into. Moreover, the scope of the method is limited to local interpretation.
Clustering is another effective way to analyze groups of neurons in an unsupervised fashion. The intuition is that if a group of neurons learns a specific concept, then their activations would form a cluster. \newcite{mayes_under_hood:2020} used UMAP~ to project activations to a low-dimensional space and performed K-means clustering to group neurons. \newcite{dalvi-2020-CCFS} aimed at identifying redundant neurons in the network. They first computed correlations between pairs of neuron activations and used hierarchical clustering to group them. Neurons with highly correlated behavior are clustered together and are considered redundant in the network.
Similar to the Matrix Factorization method, the number of clusters is a hyperparameter that needs to be pre-defined or selected empirically.
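A minimal sketch of this correlation-based grouping, with synthetic activations and an assumed distance threshold:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Synthetic activations: 500 tokens x 30 neurons; neurons 0-9 are noisy
# copies of one signal, i.e. redundant by construction.
base = rng.normal(size=500)
X = rng.normal(size=(500, 30))
X[:, :10] = base[:, None] + 0.05 * rng.normal(size=(500, 10))

# Distance between neurons: 1 - |Pearson correlation| of their activations.
corr = np.corrcoef(X.T)
dist = 1.0 - np.abs(corr)

# Average-linkage hierarchical clustering on the condensed distance matrix.
iu = np.triu_indices(30, k=1)
Z = linkage(dist[iu], method="average")
labels = fcluster(Z, t=0.3, criterion="distance")  # threshold is illustrative
print(labels)
```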
A small number of clusters may result in dissimilar neurons in the same group, while a large number of clusters may lead to similar neurons being split across different groups.
Multi-model search is based on the intuition that salient information is shared across the models trained towards a task, i.e., if a concept is important for a task, then all models optimized for the task should learn it. The search involves identifying neurons that behave similarly across the models. \newcite{bau2018identifying} used Pearson correlation to compute a similarity score of each neuron of a model with respect to the neurons of other models. They aggregated the correlations for each neuron using several methods, with the aim of highlighting different aspects of the model.
\nMore specifically, they used \\emph{Max Correlation} to capture concepts that emerge strongly in multiple models, \\emph{Min Correlation} to select neurons that are correlated with many models though they are not among the top correlated neurons, \\emph{Regression Ranking} to find individual neurons whose information is distributed among multiple neurons of other models, and \\emph{SVCCA}~ to capture information that may be distributed in fewer dimensions than the whole representation.", "id": "37454bb4-d38b-4e8e-bfde-25cf80670475", "level": "paragraph", "origin_cites_number": 1, "parent_id": "e575a499-8789-438e-a289-b70ab9937e62", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Neuron Analysis Methods" ], [ "subsection", "Miscellaneous Methods" ], [ "paragraph", "Corpus Generation" ], [ "paragraph", "Multi-model Search" ] ], "subsections": [], "title": "Multi-model Search" }, { "cite_extract_rate": 0, "cites": [], "content": "All the methods discussed in this section, require human-in-the-loop to provide explanation for the underlying neurons. They can nevertheless be useful in tandem with the other interpretation methods. 
For example \\newcite{dalvi:2019:AAAI} intersected the neurons discovered via the probing classifier and the multi-model search to describe salient neurons in the NMT models.", "id": "bc445ca4-f7f2-470e-891c-27da137c052a", "level": "paragraph", "origin_cites_number": 0, "parent_id": "e575a499-8789-438e-a289-b70ab9937e62", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Neuron Analysis Methods" ], [ "subsection", "Miscellaneous Methods" ], [ "paragraph", "Corpus Generation" ], [ "paragraph", "Limitation" ] ], "subsections": [], "title": "Limitation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:evaluation}\nIn this section, we survey the evaluation methods used to measure the correctness of the neuron analysis methods. \nDue to the absence of interpretation benchmarks, it is difficult to precisely define ``correctness''. Evaluation \nmethods in interpretation mostly resonate with the underlying method to discovered salient neurons. For example visualization methods often require qualitative evaluation via human in the loop, probing methods claim correctness of their rankings using classifier accuracy as a proxy. 
\n\\newcite{antverg2022on} highlighted \nthis discrepancy \nand suggested to disentangle the analysis methodology from the evaluation framework, for example by using a principally different evaluation method compared to the underlying neuron analysis method.\nIn the following, we summarize various evaluation methods and their usage in the literature.", "id": "99234958-aa61-4591-b8c4-762dd38f56da", "level": "section", "origin_cites_number": 0, "parent_id": "81d577c7-6481-41cd-8b79-48e73c6a5850", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Evaluation" ] ], "subsections": [ "c9257f95-cd52-45b8-b280-c0930569d879", "5edbf230-71a8-4547-9d2f-863f40c545e4", "3d1d3234-720d-4a0d-ab3c-0808efcf1913", "0d3e159a-37d1-497d-a6a4-417d376b1a11", "e75f1cf6-52a1-4a01-9210-65304fa77755" ], "title": "Evaluation" }, { "cite_extract_rate": 0, "cites": [], "content": "While ablation has been used to discover salient neurons for the model, it has also been used to evaluate the efficacy of the selected neurons. More concretely, given a ranked list of neurons (e.g. the output of the probing method), we ablate neurons in the model in the order of their importance and measure the effect on the performance. \nThe idea is that removing the top neurons should result in a larger drop in performance compared to randomly selected neurons.\n\\newcite{dalvi:2019:AAAI, durrani-etal-2020-analyzing} \nused ablation in the probing classifier to demonstrate correctness of their neuron ranking method. 
\nSimilarly \\newcite{bau2018identifying} showed that ablating the most salient neurons, discovered using multi-model search, in NMT models lead to a much bigger drop in performance as opposed to removing randomly selected neurons.", "id": "c9257f95-cd52-45b8-b280-c0930569d879", "level": "subsection", "origin_cites_number": 0, "parent_id": "99234958-aa61-4591-b8c4-762dd38f56da", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Evaluation" ], [ "subsection", "Ablation" ] ], "subsections": [], "title": "Ablation" }, { "cite_extract_rate": 0, "cites": [], "content": "Given salient neurons with respect to a concept, a simple method to evaluate their correctness is to train a classifier using them as features and predict the concept of interest. The performance of the classifier relative to a classifier trained using random neurons and least important neurons is used as a metric to gauge the efficacy of the selected salient neurons. However, it is important to ensure that the probe is truly representing the concepts encoded within the learned representations and not memorizing them during classifier training. 
\newcite{hewitt-liang-2019-designing} introduced Controlled Tasks Selectivity as a measure to gauge this.
\newcite{durrani-etal-2020-analyzing} adapted controlled tasks for neuron probing to show that their probes indeed reflect the underlying linguistic tasks.
Information theoretic metrics such as mutual information have also been used to interpret representations of deep NLP models~. Here, the goal is to measure the amount of information a representation provides about a linguistic property. \newcite{torroba-hennigen-etal-2020-intrinsic} used mutual information to evaluate the effectiveness of their Gaussian-based method by calculating the mutual information between a subset of neurons and linguistic concepts.
Another evaluation method, derived from the \emph{Concept Search} methodology, measures the alignment between neurons and the discovered concept by weighing how selectively each neuron responds to the concept. Selectivity is computed by taking the difference between the average activation value of a neuron over a set of sentences where the underlying concept occurs and over a set where it does not.
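In code, with synthetic activations and an assumed concept mask, the selectivity computation reduces to a difference of means:

```python
import numpy as np

rng = np.random.default_rng(0)

# Activations of one neuron over 200 sentences; the first 100 sentences
# are assumed to contain the concept (mask given), the rest do not.
has_concept = np.array([True] * 100 + [False] * 100)
act = np.where(has_concept,
               rng.normal(2.0, 1.0, size=200),   # concept sentences
               rng.normal(0.0, 1.0, size=200))   # non-concept sentences

# Selectivity: mean activation on concept sentences minus mean elsewhere.
selectivity = act[has_concept].mean() - act[~has_concept].mean()
print(f"selectivity = {selectivity:.2f}")
```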
A high selectivity value is obtained when a neuron is sensitive to the underlying concept and not to other concepts.
\emph{Visualization} has been used as a qualitative measure to evaluate the selected neurons. For example, \newcite{dalvi:2019:AAAI} visualized the top neurons and showed that they focus on very specific linguistic properties. They also visualized the top-k activating words for the top neurons per concept to demonstrate the efficacy of their method. Visualization can be a very effective tool to evaluate the interpretations when it works in tandem with other methods, e.g., using Concept Search or probing-based methods to reduce the search space to only highly activating concepts or the most salient neurons for these concepts, respectively.
Work done on neuron interpretation in NLP is predominantly focused on questions such as: \textit{i) what concepts are learned within neurons? and ii) how is the knowledge structured within representations?} We now iterate through the various findings that the neuron analysis methods described above have unravelled.
Based on our main driving questions, we classify these into two broad categories: \textit{i) concept discovery and ii) architectural analysis}.
In the following, we survey what lexical concepts or core-linguistic phenomena are learned by the neurons in the network.
Some of the research done on neuron analysis, particularly the work using visualization and concept search methods, identified neurons that capture lexical concepts.
\newcite{karpathy2015visualizing} found neurons that learn the position of a word in the input sentence: activating positively at the beginning, becoming neutral in the middle, and turning negative towards the end. \newcite{li-etal-2016-visualizing} found intensification neurons that activate for words that intensify a sentiment, for example, ``I like this movie \textbf{a lot}'' or ``the movie is \textbf{incredibly} good''. Similarly, they discovered neurons that capture ``negation''. Both intensification neurons and negation neurons are relevant for the sentiment classification task, for which the model under study was trained.
\newcite{kadar-etal-2017-representation} identified neurons that capture related groups of concepts in a multi-modal image captioning task. For example, they discovered neurons that learn electronic items (``camera, laptop, cables'') and salad items (``broccoli, noodles, carrots'', etc.). Similarly, \newcite{Na-ICLR} found neurons that learn lexical concepts related to legislative terms, e.g., ``law, legal'', etc. They also found neurons that learn phrasal concepts. \newcite{poerner-etal-2018-interpretable} showed that \emph{Concept Search} can be enhanced via \emph{Corpus Generation}. They provided a finer interpretation of the neurons by generating synthetic instances.
For example, they showed that a ``horse racing'' neuron identified via the concept search method was in fact a general ``racing'' neuron, by generating novel contexts for this neuron.
A number of studies probed for neurons that capture core-linguistic concepts such as morphology, semantic tags, etc. Probing for linguistic structure is important to understand models' capacity to generalize~.\footnote{But it is not the only reason to carry out such an analysis.}
For example, the holy grail in machine translation is that a proficient model needs to be aware of word morphology, grammatical structure, and semantics to do well~. Below we discuss major findings along this line of work:
\textbf{Neurons specialize in core linguistic concepts.}
\newcite{dalvi:2019:AAAI}, in their analysis of LSTM-based NMT models, found neurons that capture core linguistic concepts such as nouns, verb forms, numbers, articles, etc.
They also showed that \textbf{the number of neurons responsible for a concept varies based on the nature of the concept.}
For example: closed class\footnote{Closed class concepts are part of language where new words are not added as the language evolves, for example functional words such as \emph{can, be}, etc.
In contrast, open class concepts are a pool where new words are constantly added as the language evolves, for example "chillax", a verb formed by blending "chill" and "relax".} concepts such as \emph{Articles} (morphological category) and \emph{Months of Year} (semantic category) are localized to fewer neurons, whereas open class concepts such as \emph{nouns} (morphological category) or \emph{event} (semantic category) are distributed among a large number of neurons.
\textbf{Neurons exhibit monosemous and polysemous behavior.} \newcite{xin-etal-2019-part} found neurons exhibiting a variety of roles, where a few neurons were exclusive to a single concept while others were polysemous in nature and captured several concepts. \newcite{suau2020finding} discovered neurons that capture different senses of a word. Similarly, \newcite{bau2018identifying} found a switch neuron that activates positively for present-tense verbs and negatively for past-tense verbs.
\textbf{Neurons capture syntactic concepts and complex semantic concepts.}
\newcite{lakretz-etal-2019-emergence} discovered neurons that capture subject-verb agreement within LSTM gates. \newcite{karpathy2015visualizing} also found neurons that activate within quotes and brackets, capturing long-range dependencies.
\newcite{Na-ICLR} aligned neurons with syntactic parses to show that neurons learn syntactic phrases.
\newcite{esther_causativity_iwcs21} analyzed complex semantic properties underlying a given sentence.
In contrast to analyzing neurons with respect to a pre-defined concept, researchers have also interpreted the concepts captured by the most salient neurons of the network. For example, in their analysis of the encoder of LSTM-based models, \newcite{bau2018identifying} used Pearson correlation to discover salient neurons in the network. Among the most important neurons, they found neurons that learn the position of a word in the sentence. Other neurons found included parentheses, punctuation, and conjunction neurons.
Moreover, \newcite{LiMJ16a} found that the two most salient neurons in \emph{GloVe} were the frequency neurons that play an important role in all predictions.
The question of whether core-linguistic concepts are important for the end performance is a less explored area. \newcite{dalvi:2019:AAAI} compared neurons learning morphological and semantic concepts with an unsupervised ranking of neurons with respect to their effect on the end performance. They found that the \textbf{model is more sensitive to the top neurons obtained using the unsupervised ranking than to those learning linguistic concepts}.
They showed that the unsupervised ranking of neurons is dominated by position information and other closed class categories such as conjunction and punctuation, which, according to the ablation experiment, are more critical for the end performance than linguistic concepts.
Alongside studying what concepts are captured within deep NLP models, researchers have also studied: i) how are these concepts organized in the network? ii) how distributed and redundant are they? and iii) how does this compare across architectures? Such an analysis is helpful in developing a better understanding of the network and can potentially be useful in architecture search and model distillation.
Human languages are hierarchical in structure, with morphology and phonology at the bottom, followed by lexemes and then syntactic structures. Concepts such as semantics and pragmatics sit at the top of the hierarchy.
\\newcite{durrani-etal-2020-analyzing} analyzed linguistic hierarchy by studying the spread of neurons across layers in various pre-trained language models. They extracted salient neurons with respect to different linguistic concepts (e.g., morphology and syntax) and found that \\textbf{neurons that capture word morphology were predominantly found in the lower and middle layers and those learning about syntax were found at the higher layers.} The observation was found to hold in both LSTM- and transformer-based architectures, and is in line with the findings of representation analysis~.\nSimilarly, \\newcite{suau2020finding} analyzed sub-modules within GPT and RoBERTa transformer blocks and showed that lower layers within a transformer block accumulate more salient neurons than higher layers on the tasks of word sense disambiguation or homograph detection. They also found that the neurons that learn homographs are distributed \nacross the network as opposed to sense neurons that were more predominantly found at the lower layers.", "id": "7fa73ffb-ac60-4843-b89b-74e21b86fdfd", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "cea9da8e-992b-426c-93a7-c5b11c55c707", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Findings" ], [ "subsection", "Architectural Analysis" ], [ "subsubsection", "Information Distribution" ] ], "subsections": [], "title": "Information Distribution" }, { "cite_extract_rate": 0, "cites": [], "content": "While it is exciting to see that networks somewhat preserve linguistic hierarchy, \nmany authors found that information is not discretely preserved at any individual layer, but is distributed \nand is redundantly present in the network. 
This is an artifact of various training choices such as dropout that encourages the model to distribute knowledge across the network.\nFor example, \\newcite{LiMJ16a} found specialized frequency neurons in a \\emph{GloVe} model trained without dropout, as opposed to the variant trained with dropout where the information was more redundantly available.\n\\newcite{dalvi-2020-CCFS} showed that a significant amount of redundancy existed within pre-trained models. They showed that 85\\% of the neurons across the network are redundant and at least 92\\% of the neurons can be removed when optimizing towards a downstream task in feature-based transfer learning. \n\\begin{figure*}[]\n \\centering\n \\begin{subfigure}[b]{0.23\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures/POS-BERT.png}\n \\caption{POS -- BERT}\n \\label{fig:posbert}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.23\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures/CCG-BERT.png}\n \\caption{CCG -- BERT}\n \\label{fig:ccgbert}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.23\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures/POS-XLNet.png}\n \\caption{POS -- XLNet}\n \\label{fig:posxlnet}\n \\end{subfigure} \n \\begin{subfigure}[b]{0.23\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures/CCG-XLNet.png}\n \\caption{CCG -- XLNet}\n \\label{fig:ccgxlnet}\n \\end{subfigure} \n \\caption{Distribution of top neurons spread across different layers for each task. 
X-axis = Layer number, Y-axis = Number of neurons selected from that layer -- Figure borrowed from ~}\n \\label{fig:layerwise}\n\\end{figure*}", "id": "4cf4109a-6955-4b21-92d7-bd060337d902", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "cea9da8e-992b-426c-93a7-c5b11c55c707", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Findings" ], [ "subsection", "Architectural Analysis" ], [ "subsubsection", "Distributivity and Redundancy" ] ], "subsections": [], "title": "Distributivity and Redundancy" }, { "cite_extract_rate": 0, "cites": [], "content": "The distribution of neurons across the network has led researchers to draw interesting cross-architectural comparisons. \n\\newcite{wu:2020:acl} performed correlation clustering of neurons across \narchitectures \nand found that different architectures may have similar representations, but their individual neurons behave differently. \n\\newcite{torroba-hennigen-etal-2020-intrinsic} compared neurons in contextualized (BERT) embedding with neurons in the static embedding (fastText) and found that fastText required two neurons to capture any morphosyntactic phenomenon as opposed to BERT which required up to 35 neurons to obtain the same performance. \\newcite{durrani-etal-2020-analyzing} showed that the \\textbf{linguistic knowledge in BERT (auto-encoder) is highly distributed across the network as opposed to XLNet (auto-regressive) where neurons from a few layers are mainly responsible for a concept} (see Figure~\\ref{fig:layerwise}). Similarly \\newcite{suau2020finding} compared RoBERTa and GPT (auto-encoder vs. generative) models and found differences in the distribution of expert neurons. \\newcite{durrani-FT} extended the cross-architectural comparison towards fine-tuned models. 
They showed that after fine-tuning on GLUE tasks, the neurons capturing linguistic knowledge are regressed to lower layers in RoBERTa and XLNet as opposed to BERT where it is still retained at the higher layers.", "id": "14170394-df17-4343-be25-dfa34fb781fb", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "cea9da8e-992b-426c-93a7-c5b11c55c707", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Findings" ], [ "subsection", "Architectural Analysis" ], [ "subsubsection", "Comparing Architectures" ] ], "subsections": [], "title": "Comparing Architectures" }, { "cite_extract_rate": 0.5, "cites": [ 7623 ], "content": "\\label{sec:findingSummary}\nBelow is a summary of the key findings that emerged from the work we covered in this survey. Neurons learned within Deep NLP models capture\nnon-trivial linguistic knowledge ranging from \nlexical phenomenon such as morphemes, words and multi-word expressions to highly complex global phenomenon such as semantic roles and syntactic dependencies. Neuron analysis resonates with the findings of representation analysis in demonstrating that the networks follow linguistic hierarchy. Linguistic neurons are distributed across the network based on their complexity, with lower layers focused on the lexical concepts and middle and higher layers learning global phenomenon based on long-range contextual dependencies. While the networks preserve linguistic hierarchy, many authors showed that information is not discretely preserved, but is rather distributed and redundantly present in the network. It was also shown that a small optimal subset of neurons w.r.t any concept can be extracted from a network. On another dimension, a few works showed that some concepts are localized to fewer neurons while others are distributed to a large group. 
Finally, some interesting cross architectural analyses were drawn based on how the neurons are distributed within their layers.", "id": "6b1c7149-f2cf-4421-9201-478661544d82", "level": "subsection", "origin_cites_number": 2, "parent_id": "bafcff9d-1ddd-433e-9988-8693300da7f1", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Findings" ], [ "subsection", "Summary of Findings" ] ], "subsections": [], "title": "Summary of Findings" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:applications}\nNeuron analysis leads to various applications beyond interpretation of deep models. In this section, we present several applications of neuron analysis: i) controlling model's behavior, ii) model distillation and efficiency, iii) domain adaptation, and iv) generating compositional explanations.", "id": "acbc66e6-5814-4f0a-848b-86b17e924faa", "level": "section", "origin_cites_number": 0, "parent_id": "81d577c7-6481-41cd-8b79-48e73c6a5850", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Applications" ] ], "subsections": [ "89efe35b-298a-4c64-8753-8bb52e849b71", "2f3d6668-aa53-4579-81b0-a851c660c4f2", "43c50e7a-b02c-4207-bfa4-3f95c1dd34bb", "6e876fc4-ab74-4618-939f-9ea5453ddcf3" ], "title": "Applications" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:manipulation}\nOnce we have identified neurons that capture a certain concept learned in a model, \nthese can \nbe utilized for controlling the model's behavior w.r.t to that concept. \n\\newcite{bau2018identifying} identified \\emph{Switch Neurons} in \nNMT models that activate positively for the present-tense verbs and negatively for the past-tense verbs. By manipulating the values of \nthese neurons, \nthey were able to successfully change output translations from present to past tense during inference. 
The authors additionally found neurons that capture \\emph{gender} and \\emph{number agreement} concepts and manipulated them to control the system's output. Another effort along this line was carried by \\newcite{suau2020finding}, where they \nmanipulated the neurons responsible for a concept in the GPT model and generated sentences around specific topics of interest. Recently~\\newcite{knowledgeneurons} manipulated salient neurons of relational facts and demonstrated their ability to update and erase knowledge about a particular fact.\nControlling \nmodel's behavior using neurons \nenables on-the-fly manipulation of output, \nfor example it can be used to debias the output of the model against sensitive attributes like race and gender.", "id": "89efe35b-298a-4c64-8753-8bb52e849b71", "level": "subsection", "origin_cites_number": 0, "parent_id": "acbc66e6-5814-4f0a-848b-86b17e924faa", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Applications" ], [ "subsection", "Controlling Model's Behavior" ] ], "subsections": [], "title": "Controlling Model's Behavior" }, { "cite_extract_rate": 0, "cites": [], "content": "Deep NLP models are trained using hundreds of millions of parameters, limiting their applicability in computationally constrained environments. Identifying salient neurons and sub-networks can be useful for model distillation and efficiency. \\newcite{dalvi-2020-CCFS} devised an efficient feature-based transfer learning procedure, stemmed from their redundancy analysis. By exploiting layer and neuron-specific redundancy in the transformer models, they were able to reduce the feature set size to less than 10\\% neurons for several tasks while maintaining more than 97\\% of the performance. 
The procedure achieved a speedup of up to 6.2x in computation time for sequence labeling tasks as opposed to using all the features.", "id": "2f3d6668-aa53-4579-81b0-a851c660c4f2", "level": "subsection", "origin_cites_number": 0, "parent_id": "acbc66e6-5814-4f0a-848b-86b17e924faa", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Applications" ], [ "subsection", "Model Distillation and Efficiency" ] ], "subsections": [], "title": "Model Distillation and Efficiency" }, { "cite_extract_rate": 0, "cites": [], "content": "Identifying the salient neurons with respect to a domain can be effectively used for domain adaptation and generalization. \\newcite{gu2021pruningthenexpanding} proposed a domain adaptation method using neuron pruning to target the problem of catastrophic forgetting of the general domain when fine-tuning a model for a target domain. They introduced a three step adaptation process: \ni) rank neurons based on their importance, ii) prune the unimportant neurons from the network and retrain with student-teacher framework, iii) expand the network to its original size and fine-tune towards in-domain, freezing the salient neurons and adjusting only the unimportant neurons. Using this approach helps \nin avoiding catastrophic forgetting of the general domain while also obtaining optimal performance on the in-domain data.", "id": "43c50e7a-b02c-4207-bfa4-3f95c1dd34bb", "level": "subsection", "origin_cites_number": 0, "parent_id": "acbc66e6-5814-4f0a-848b-86b17e924faa", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Applications" ], [ "subsection", "Domain Adaptation" ] ], "subsections": [], "title": "Domain Adaptation" }, { "cite_extract_rate": 0, "cites": [], "content": "Knowing the association of a neuron with a concept enables explanation of model's output. 
\\newcite{Mu-Nips} identified neurons that learn certain concepts in vision and NLP models. Using a composition of logical operators, they provided an explanation of model's prediction. \nFigure~\\ref{fig:composition} presents an explanation using a gender-sensitive neuron. The neuron activates for contradiction when the premise contains the word \\textit{man}. Such explanations provide a way to generate adversarial examples that change model's predictions. \n\\begin{figure}[]\n \\centering\n \\includegraphics[width=0.95\\linewidth]{figures/gender_compositional_explanation.png}\n \\caption{Compositional explanation using neuron 870 on the NLI task -- Figure borrowed from \\newcite{Mu-Nips}}\n \\label{fig:composition}\n\\end{figure}", "id": "6e876fc4-ab74-4618-939f-9ea5453ddcf3", "level": "subsection", "origin_cites_number": 0, "parent_id": "acbc66e6-5814-4f0a-848b-86b17e924faa", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Applications" ], [ "subsection", "Compositional Explanations" ] ], "subsections": [], "title": "Compositional Explanations" }, { "cite_extract_rate": 0.55, "cites": [ 8598, 2911, 8599, 2912, 2910, 2913, 2914, 2909 ], "content": "\\label{sec:conclude}\nIn the following section, we discuss several open issues and limitations related to methods, evaluation and datasets. Moreover, we provide potential future directions vital to the progress of neuron and model interpretation.\n\\begin{itemize}\n \\item DNNs are distributed in nature which encourages groups of neurons to work together to learn a concept. The current analysis methods, at large, ignore interaction between neurons while discovering neurons with respect to a concept. Trying all possible combination of neurons is a computationally intractable problem. 
A linear classifier using ElasticNet regularization~ considers grouping of features during training -- however, its effectiveness in handling grouped neurons has not been empirically validated. Evolutionary algorithms\\footnote{\\url{https://en.wikipedia.org/wiki/Evolutionary_algorithm}} do not make any assumption about the underlying distribution of the features, and they have been effectively used for feature selection with multivariate features. Exploring them for neuron selection is a promising research direction to probe towards latent concepts in these models.\n \\item A large number of interpretation studies rely on human-defined linguistic concepts to probe a model. It is possible that the models do not strictly adhere to the human-defined concepts and learn novel concepts about the language. \n This results in an incorrect or incomplete analysis. \n Several researchers made strides in this direction by analyzing hidden structures in the input representations in an unsupervised manner. They discovered the existence of novel structures not captured in the human-defined categories. \n \\newcite{dalvi2022discovering} also proposed\n BERT ConceptNet, \n a manual annotation of the latent concepts in BERT. Introducing similar datasets across other models \n enables model-centric interpretation, and is a promising research direction. \n \\item While a lot of work has been done on analyzing how knowledge is encoded within the learned representations, the question of whether it is used by the model during prediction is a less explored area. \n Ablation and knowledge attribution methods are two neuron interpretation methods that intrinsically use causal relations to select concept neurons. A few other studies evaluated the causal relation of the selected concept neurons via ablation or by clamping their activation values~ and observed the change in the model's prediction. 
However, most of the studies do not take into account the causal relation as part of the method or the evaluation of their method. The causal relation with respect to concept neurons is important for understanding their importance to the overall prediction, and it leads the way towards practical applications such as debiasing, model distillation and domain adaptation. \n \\item The work on neuron interpretation lacks standard evaluation benchmarks, and therefore studies conducted on identical models are not comparable. For example, there exists no gold annotation of neurons with respect to a certain dataset or a class. \n The curation of standard evaluation benchmarks \n is an essential step towards improving methods of interpretation of deep neural network models.\n \\item The neuron analysis methods vary in their theoretical foundations as well as the perspective they aim to capture with respect to a given concept. This results in a selection of neurons that may not strictly align across all methods.\n For example, \\emph{Visualization}, \\emph{Neuron Search} and \\emph{Corpus Search} discover neurons that are highly focused on a specific task (like the \"less\" suffix or the POS \"TO\" concept), while \\emph{Probing-based} methods discover rankings of neurons that highlight grouping behavior within the neurons targeting broad concepts like POS \"Nouns\". \n Therefore, the choice of which neuron interpretation method to use is not straightforward and depends on various factors, such as the nature of the concept to investigate, the availability of supervised data for the concept of interest, etc. Apart from these high-level guiding principles, a thorough comparison of methods with respect to the nature of the concept of interest is needed to fully understand the strengths and weaknesses of each approach. \\newcite{antverg2022on} is one such effort in this direction that compares three neuron interpretation methods. 
\n \\item Neuron-level interpretation opens door for a number of applications useful for the successful deployment of DNN systems (Section \\ref{sec:applications}). However, most of the research conducted in this direction is preliminary. For example, there are many open research questions in \\textbf{controlling system's behaviour} using neurons such as: i) are all concepts manipulatable? ii) how to identify neurons that can be controlled to change the output? iii) is high distributiveness a hindrance for controlling model's behavior? iv) and whether disentangled ~ and sparse models~ may serve as a better alternate on this front? Addressing these questions will enable a more reliable control of the deep NLP models and entail numerous applications such as removing bias and adapting the system to novel domains.\n\\end{itemize}\n\\bibliography{acl2020,anthology}\n\\bibliographystyle{acl_natbib}\n\\appendix\n\\newpage\n\\begin{table*}[t]\n\\centering\n\\footnotesize\n\\begin{tabular}{p{0.19\\textwidth}p{0.08\\textwidth}p{0.1\\textwidth}p{0.1\\textwidth}p{0.1\\textwidth}p{0.06\\textwidth}p{0.1\\textwidth}p{0.06\\textwidth}}\n\\toprule\n & Scope & Input & Output & Scalability & HITL & Supervision & Causation \\\\\n\\midrule\n\\rowcolor[HTML]{D9EAD3} \n\\multicolumn{8}{l}{\\cellcolor[HTML]{D9EAD3}\\textbf{Visualization}} \\\\\n\\rowcolor[HTML]{D9EAD3} \n & local & neuron & concept & low & yes & no & no \\\\\n\\rowcolor[HTML]{D9EAD3} \n & & & & & & & \\\\\n\\rowcolor[HTML]{A4C2F4} \n\\multicolumn{8}{l}{\\cellcolor[HTML]{A4C2F4}\\textbf{Corpus-based methods}} \n \\\\\n\\multicolumn{8}{l}{\\cellcolor[HTML]{A4C2F4}{Concept Search}} \\\\\n\\rowcolor[HTML]{A4C2F4} \n\\cellcolor[HTML]{A4C2F4} & global & neuron & concept & low & yes & no & no \\\\\n\\rowcolor[HTML]{A4C2F4} \n & global & neuron & concept & high & no & no & no \\\\\n\\rowcolor[HTML]{A4C2F4} \n\\multicolumn{8}{l}{\\cellcolor[HTML]{A4C2F4}{Neuron Search}} \\\\\n\\rowcolor[HTML]{A4C2F4} \n\\cellcolor[HTML]{A4C2F4} & 
global & concept & neurons & high & no & yes & no \\\\\n\\rowcolor[HTML]{EA9999} \n\\multicolumn{8}{l}{\\cellcolor[HTML]{EA9999}\\textbf{Probing-based methods}} \\\\\n\\rowcolor[HTML]{EA9999} Linear & global & concept & neurons & high & no & yes & no\n\\\\\n\\rowcolor[HTML]{EA9999} \nRandom Forest & global & concept & neurons & high & no & yes & no \\\\\n\\rowcolor[HTML]{EA9999} \nGaussian & global & concept & neurons & high & no & yes & no \\\\\n\\rowcolor[HTML]{D9D9D9} \n\\multicolumn{8}{l}{\\cellcolor[HTML]{D9D9D9}\\textbf{Causation-based methods}} \\\\\n\\rowcolor[HTML]{D9D9D9} \nAblation & both & concept/ class & neurons & medium & no & no & yes \\\\\n\\rowcolor[HTML]{D9D9D9} \nKnowledge attribution & local & concept/ class & neurons & high & no & no & yes \\\\\n\\rowcolor[HTML]{FFE599} \n\\multicolumn{8}{l}{\\cellcolor[HTML]{FFE599}\\textbf{Miscellaneous methods}} \\\\\n\\rowcolor[HTML]{FFE599} \nCorpus generation & global & neuron & concept & low & yes & no & no \\\\\n\\rowcolor[HTML]{FFE599} \nMatrix factorization & local & neurons & neurons & low & yes & no & no \\\\\n\\rowcolor[HTML]{FFE599} \nClustering & global & neurons & neurons & high & yes & no & no \\\\\n\\rowcolor[HTML]{FFE599} \nMulti model search & global & neurons & neurons & high & yes & no & no \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Comparison of neuron analysis methods based on various attributes. The exhaustive list of citations for each method are provided in the text.} \n\\label{tab:appendix:neuronmethods}\n\\end{table*}\n\\end{document}", "id": "a2492ee2-c312-4c75-b088-6f29f214f5e9", "level": "section", "origin_cites_number": 20, "parent_id": "81d577c7-6481-41cd-8b79-48e73c6a5850", "prefix_titles": [ [ "title", "Neuron-level Interpretation of Deep NLP Models: A Survey" ], [ "section", "Open Issues and Future Directions" ] ], "subsections": [], "title": "Open Issues and Future Directions" } ]
123
[ 2401, 7622, 8598, 7623, 7165, 7, 7621, 8599, 168, 2909, 2911, 2910, 8600, 2912, 2913, 2914 ]
1.452704
[ "Yoon-Ho Choi", "Peng Liu", "Zitong Shang", "Haizhou Wang", "Zhilong Wang", "Lan Zhang", "Junwei Zhou", "Qingtian Zou" ]
Using Deep Learning to Solve Computer Security Challenges: A Survey
2019
2019-12-12T01:42:09Z
cs.CR
Although using machine learning techniques to solve computer security challenges is not a new idea, the rapidly emerging Deep Learning technology has recently triggered a substantial amount of interests in the computer security community. This paper seeks to provide a dedicated review of the very recent research works on using Deep Learning techniques to solve computer security challenges. In particular, the review covers eight computer security problems being solved by applications of Deep Learning: security-oriented program analysis, defending return-oriented programming (ROP) attacks, achieving control-flow integrity (CFI), defending network attacks, malware classification, system-event-based anomaly detection, memory forensics, and fuzzing for software security.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "825459b6-d132-4d0c-9bfb-51997111ee0c", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ] ], "subsections": [ "c51401bd-580f-44a9-9612-66a18edfb9ac", "93654628-99a6-4537-9533-0c31e4f97c48", "7a06b173-0676-41cd-b4b7-f793958a3ed0", "2b839e56-71e1-4fc1-8d17-ba5b1a399e32", "140dccf7-ea9a-4de7-9296-5070c5f2b2c7", "489e2f98-db44-4e56-96fb-648e2b5625b0", "6ae1eea5-7d10-4735-bfd3-ef2d1286e877", "b422094a-ee97-4a26-a7d7-a5e21d36fbf2", "eb166b74-7da0-4abe-af7f-7cfe3d3d385a", "18fd9c41-b209-4bf4-b2a4-ffffb8f5fc68", "e83e405a-97db-4a67-84c0-b0487015c734", "88571d44-18fa-4576-92d1-ac9e7eab286f", "eabe40e8-0f6c-4026-8136-7e9f93471cea", "db0be638-d4d6-4911-bbe6-b3c95e7bc168", "37d47822-0bb0-46c8-8835-473014f3b19f" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "Using machine learning techniques to solve computer security challenges is not a new idea. For example, in the year of 1998, Ghosh and others in proposed to train a (traditional) neural network based anomaly detection scheme(i.e., detecting anomalous and unknown intrusions against programs); in the year of 2003, Hu and others in and Heller and others in applied Support Vector Machines to based anomaly detection scheme (e.g., detecting anomalous Windows registry accesses). \nThe machine-learning-based computer security research investigations during 1990-2010, however, have not been very impactful. For example, to the best of our knowledge, none of the machine learning applications proposed in has been incorporated into a widely deployed intrusion-detection commercial product. 
\nRegarding why not very impactful, although researchers in the computer security community seem to have different opinions, the following remarks by Sommer and Paxson (in the context of intrusion detection) have resonated with many researchers: \n\\begin{itemize}\n \\item Remark A: ``It is crucial to have a clear picture of what problem a system targets: what specifically are the attacks to be detected? The more narrowly one can define the target activity, the better one can tailor a detector to its specifics and reduce the potential for misclassifications.\" \n \\item Remark B: ``If one cannot make a solid argument for the relation of the features to the attacks of interest, the resulting study risks foundering on serious flaws.\" \n\\end{itemize}\n{\\color{myblack} These insightful remarks, though well aligned with the machine learning techniques used by security researchers during 1990-2010, could become a less significant concern with Deep Learning (DL), \na rapidly emerging machine learning technology, due to the following observations. \nFirst, Remark A implies that even if the same machine learning method is used, \none algorithm employing a cost function that is based on a more specifically defined target attack activity could perform substantially better than another algorithm deploying a less specifically defined cost function. \nThis could be a less significant concern with DL, since a few recent studies have shown that even if the target attack activity is not narrowly defined, \na DL model could still achieve very high classification accuracy. \nSecond, Remark B implies that if feature engineering is not done properly, the trained machine learning models could be plagued by serious flaws. 
\nThis could be a less significant concern with DL, since many deep learning neural networks require less feature engineering than conventional machine learning techniques.\n}\nAs stated in , ``DL is a statistical technique that exploits large quantities of data as training sets for a network with multiple hidden layers, called a deep neural network (DNN). A DNN is trained on a dataset, generating outputs, calculating errors, and\nadjusting its internal parameters. Then the process is repeated hundreds of thousands of times until the network achieves an acceptable level of performance. It has proven to be an effective technique for image classification, object detection, speech recognition, and\nnatural language processing––problems that challenged researchers for decades. By learning from data, DNNs can solve some problems much more effectively, and also solve problems that were never solvable before.\"\nNow let's take a high-level look at how DL could \nmake it substantially easier to overcome the challenges \nidentified by Sommer and Paxson . \nFirst, one major advantage of DL is that it makes \nlearning algorithms less dependent on feature engineering. This characteristic of DL makes it easier to \novercome the challenge indicated by Remark B. \nSecond, another major advantage of DL is that it \ncould achieve high classification accuracy with \nminimum domain knowledge. This characteristic of DL makes it easier to overcome the challenge indicated by Remark A. \n\\vspace*{2mm} \n\\textbf{Key observation. } \\textit{The above discussion indicates that DL could be a game changer in applying machine learning techniques to solving computer security challenges. }\n\\vspace*{2mm} \nMotivated by this observation, this paper seeks to provide a \\textit{dedicated} review of the very recent research works on using Deep Learning techniques to solve computer security challenges. 
It should be noted that, since this paper aims to provide a dedicated review, non-deep-learning techniques and their security applications are out of the scope of this paper. \nThe remainder of the paper is organized as follows. In Section~\\ref{sec:workflow}, we present a four-phase workflow framework which we use to summarize the existing works in a unified manner. In Sections~\\ref{sec:programanalysis}-\\ref{sec:fuzzing}, we review eight computer security problems being solved by applications of Deep Learning. In Section~\\ref{sec:dis}, we discuss certain similarities and dissimilarities among the existing works. In Section~\\ref{sec:fur}, we mention four further areas of investigation. In Section~\\ref{sec:con}, we conclude the paper.", "id": "c51401bd-580f-44a9-9612-66a18edfb9ac", "level": "section", "origin_cites_number": 5, "parent_id": "825459b6-d132-4d0c-9bfb-51997111ee0c", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:workflow}\nWe found that a four-phase workflow framework can provide a unified way to summarize all the research works surveyed in this paper. In particular, each surveyed work employs a particular workflow when using machine learning techniques to solve a computer security challenge, and each such workflow consists of two or more phases. 
\nBy ``a unified way\", we mean that every workflow surveyed by us is essentially an instantiation of a common workflow pattern which is shown in Figure \\ref{fig:4phases}.", "id": "93654628-99a6-4537-9533-0c31e4f97c48", "level": "section", "origin_cites_number": 0, "parent_id": "825459b6-d132-4d0c-9bfb-51997111ee0c", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A four-phase workflow framework can summarize the existing works in a unified manner" ] ], "subsections": [ "ab1ab2db-6a69-42b1-bffa-aa304b490923", "cd39d418-ca95-49d8-bc15-bb9f46240292", "45755ebb-4b09-42be-a1ba-c9a11a6f4ec7" ], "title": "A four-phase workflow framework can summarize the existing works in a unified manner" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 318 ], "content": "The four phases, shown in Figure \\ref{fig:4phases}, are defined as follows. To make the definitions of the four phases more tangible, we use a running example to illustrate each of the four phases. \n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[scale=0.80]{figure/overview.pdf}\n\\caption{Overview of the four-phase workflow} \n\\label{fig:4phases}\n\\end{center}\n\\end{figure}\n\\begin{phase}{Obtaining the Raw Data}\\label{phase1}\nIn this phase, certain raw data are collected.\n\\begin{runexample}\nWhen Deep Learning is used to detect suspicious events in a Hadoop distributed file system (HDFS), the raw data are usually the events (e.g., a block is allocated, read, written, replicated, or deleted) that have happened to each block. \nSince these events are recorded in Hadoop logs, the log files hold the raw data. \nSince each event is uniquely identified by a particular (block ID, timestamp) tuple, we could simply view\nthe raw data as $n$ event sequences. Here $n$ is the total number of blocks in the HDFS. \nFor example, the raw data collected in \nin total consists of 11,197,954 events. 
\nSince 575,139 blocks were in the HDFS, there were \n575,139 event sequences in the raw data, and on average\neach event sequence had 19 events. One such event sequence is shown as follows: \n\\footnotesize\n\\begin{verbatim}\n081110 112428 31 INFO dfs.FSNamesystem: BLOCK* NameSystem.allocateBlock:\n /user/root/rand/_temporary/_task_200811101024_0001_m_001649_0/\n part-01649.blk_-1033546237298158256\n081110 112428 9602 INFO dfs.DataNode$DataXceiver: \n Receiving block blk_-1033546237298158256 src: /10.250.13.240:54015\n dest:/10.250.13.240:50010\n081110 112428 9982 INFO dfs.DataNode$DataXceiver: \n Receiving block blk_-1033546237298158256 src: /10.250.13.240:52837 \n dest:/10.250.13.240:50010\n081110 112432 9982 INFO dfs.DataNode$DataXceiver: \n writeBlock blk_-1033546237298158256 received exception\n java.io.IOException:Could not read from stream\n\\end{verbatim}\n\\normalsize\n\\end{runexample}\n\\end{phase}\n\\begin{phase}{Data Preprocessing}\\label{phase2}\nBoth Phase~\\ref{phase2} and Phase~\\ref{phase3} aim to properly \nextract and represent the useful \ninformation held in the raw data collected in Phase I. \nBoth Phase~\\ref{phase2} and Phase~\\ref{phase3} are closely related\nto feature engineering. \nA key difference between Phase~\\ref{phase2} and Phase~\\ref{phase3} \nis that Phase~\\ref{phase3} is completely dedicated to \nrepresentation learning, while Phase~\\ref{phase2} is \nfocused on all the information extraction and data \nprocessing operations that are \\textbf{not} based on \nrepresentation learning. \n\\begin{runexample}\nLet's revisit the \naforementioned HDFS. \nEach recorded event is described by unstructured text. \nIn Phase~\\ref{phase2}, the unstructured text\nis parsed to a data structure that shows the event type and a list of event variables in (name, value) pairs. \nSince there are 29 types of events in the HDFS, \neach event is represented by an integer from 1 to 29 according to its type. 
In this way, the aforementioned example event sequence can be transformed to: 
 \begin{verbatim} 
 22, 5, 5, 7
 \end{verbatim}
\end{runexample}
\end{phase}
\begin{phase}{Representation Learning}\label{phase3}
This phase aims at, as stated in the representation learning literature, ``learning representations of the data that make it easier to extract useful information when building classifiers or other predictors.''
\begin{runexample}
Let's revisit the same HDFS. 
Although DeepLog directly employed one-hot vectors to represent the event types without representation learning, one may instead view an event type as a word in a structured language and use the word embedding technique to represent each event type. It should be noticed that 
the word embedding technique is a representation 
learning technique. 
\end{runexample}
\end{phase}
\begin{phase}{Classifier Learning}\label{phase4}
This phase aims to build specific classifiers or other predictors through Deep Learning. 
\begin{runexample}
Let's revisit the same HDFS.
DeepLog used Deep Learning to build a stacked LSTM neural network for anomaly detection. 
For example, let's consider the event sequence \{22,5,5,5,11,9,11,9,11,9,26,26,26\}, in which each integer represents the event type of the corresponding event in the event sequence. 
Given a window size $h$ = 4, the (input sample, output label) pairs used to train DeepLog will be: \{22,5,5,5 $\rightarrow$ 11\}, \{5,5,5,11 $\rightarrow$ 9\}, \{5,5,11,9 $\rightarrow$ 11\}, and so forth. 
In the detection stage, DeepLog examines each individual event. 
It determines if an event is treated as normal or abnormal according to whether the event's type is predicted by the LSTM neural network, given the history of event types.\nIf the event's type is among the top $g$ predicted types, \nthe event is treated as normal; otherwise, it is treated as abnormal.\n\\end{runexample}\n\\end{phase}", "id": "ab1ab2db-6a69-42b1-bffa-aa304b490923", "level": "subsection", "origin_cites_number": 3, "parent_id": "93654628-99a6-4537-9533-0c31e4f97c48", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A four-phase workflow framework can summarize the existing works in a unified manner" ], [ "subsection", "Definitions of the four phases" ] ], "subsections": [], "title": "Definitions of the four phases" }, { "cite_extract_rate": 0.17391304347826, "cites": [ 8016, 8017, 8014, 3859, 8019, 3947, 8015, 8018 ], "content": "In this subsection, we use the four-phase workflow framework to summarize two representative works for each security problem. \n{\\color{myblack}\nSystem security includes many sub research topics. \nHowever, not every research topics are suitable to adopt deep learning-based methods due to their intrinsic characteristics. \nFor these security research subjects that can combine with deep-learning, \nsome of them has undergone intensive research in recent years, \nothers just emerging. \nWe notice that there are 5 mainstream research directions in system security. 
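To make Phases III and IV of this running example concrete, the following is a minimal sketch (ours, not DeepLog's actual implementation; the function names are hypothetical, and a hand-written probability table stands in for the trained LSTM's predicted distribution) of the window-based training-pair construction and the top-$g$ normality check described above:

```python
def make_training_pairs(event_types, h=4):
    """Slide a window of size h over an event-type sequence and
    emit (history, next_event_type) pairs, as in the running example."""
    return [(event_types[i:i + h], event_types[i + h])
            for i in range(len(event_types) - h)]


def is_normal(event_type, predicted_probs, g=2):
    """Top-g rule: the event is treated as normal iff its type is among
    the g most probable next types under the model's prediction."""
    top_g = sorted(predicted_probs, key=predicted_probs.get, reverse=True)[:g]
    return event_type in top_g


seq = [22, 5, 5, 5, 11, 9, 11, 9, 11, 9, 26, 26, 26]
pairs = make_training_pairs(seq, h=4)
# pairs[0] is ([22, 5, 5, 5], 11), pairs[1] is ([5, 5, 5, 11], 9), ...

# A made-up predicted distribution standing in for the LSTM's output:
probs = {11: 0.7, 9: 0.2, 26: 0.1}
```

With $g = 2$ and this made-up distribution, event types 11 and 9 would be treated as normal, while type 26 would be flagged as abnormal.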
This paper mainly focuses on system security, so other mainstream research directions (e.g., deepfake detection) are out of scope.
We therefore cover these 5 widely studied research directions, together with 3 emerging research directions, in our survey:
\begin{enumerate}
\item In security-oriented program analysis, malware classification (MC), system-event-based anomaly detection (SEAD), memory forensics (MF), and defending network attacks, deep-learning-based methods have already undergone intensive research. 
\item In defending return-oriented programming (ROP) attacks, control-flow integrity (CFI), and fuzzing, deep-learning-based methods are emerging research topics.
\end{enumerate}
We select two representative works for each research topic in our survey.
Our criteria for selecting papers mainly include: 1) Pioneer (one of the first papers in this field); 
2) Top (published at a top conference or in a top journal);
3) Novelty; 4) Citation (the paper is highly cited); 
5) Effectiveness (the paper reports strong results);
6) Representative (the paper is a representative work for a branch of the research direction). 
\nTable~\\ref{tab:reason} lists the reasons why we choose each paper, which is ordered according to their importance.\n}\n\\begin{table*}[ht]\n \\footnotesize\n \\caption{List of criteria we used to choose representative work for each research topic.}\n \\label{tab:reason}\n \\makeatletter\n \\newcommand{\\widenhline}{\n \\noalign {\\ifnum 0=`}\\fi \\hrule height 0.5pt\n \\futurelet \\reserved@a \\@xhline\n }\n \\newcolumntype{I}{!{\\vrule width 1pt}}\n \\makeatother\n \\centering\n \\small\n \\begin{tabular}{cIc|c|c|c} \n \\hline \n \\hline \n \\diagbox[dir=NW]{Paper}{Order} & 1 & 2 & 3 & 4 \\\\ \\widenhline\n \\hline \n RFBNN~ & Pioneer & Top & Novelty & Citations \\\\ \n \\hline \n EKLAVYA~ & Top & Novelty & Citation & N/A \\\\ \n \\hline \n ROPNN~ & Pioneer & Novelty & Effectiveness & N/A \\\\ \n \\hline \n HeNet~ & Effectiveness & Novelty & Citation & N/A \\\\ \n \\hline \n Barnum~ & Pioneer & Novelty & N/A & N/A \\\\ \n \\hline \n CFG-CNN~ & Representative & N/A & N/A & N/A \\\\ \n \\hline \n 50b(yte)-CNN & Novelty & Effectiveness & N/A & N/A \\\\ \n \\hline \n PCNN~ & Novelty & Effectiveness & N/A & N/A \\\\ \n \\hline \n Resenberg~ & Novelty & Effectiveness & Top & Representative \\\\ \n \\hline \n DeLaRosa~ & Novelty & Representative & N/A & N/A \\\\ \n \\hline \n DeepLog~ & Pioneer & Top & Citations & N/A \\\\ \n \\hline \n DeepMem~ & Pioneer & Top & N/A & N/A \\\\ \n \\hline \n NeuZZ~ & Novelty & Top & Effectiveness & N/A \\\\ \n \\hline \n Learn \\& Fuzz~ & Pioneer & Novelty & Top & N/A \\\\ \n \\hline \n \\hline \n \\end{tabular} \n \\end{table*}\nThe summary for each paper we selected is shown in Table~\\ref{Table:Summary}. \nThere are three columns in the table. 
In the first column, we list the eight security problems: security-oriented program analysis, defending return-oriented programming (ROP) attacks, control-flow integrity (CFI), defending network attacks (NA), malware classification (MC), system-event-based anomaly detection (SEAD), memory forensics (MF), and fuzzing for software security. 
In the second column, we list two recent representative works for each security problem. 
In the ``Summary'' column, we sequentially describe how the four phases are deployed in each work, and then list the evaluation results for each work in terms of accuracy (ACC), precision (PRC), recall (REC), F1 score (F1), false-positive rate (FPR), and false-negative rate (FNR), respectively.
\begin{center} 
\scriptsize
\begin{ThreePartTable}
\begin{TableNotes}
 \item [1] Deep Learning metrics are often not available in fuzzing papers. Typical fuzzing metrics used for evaluations are: code coverage, pass rate and bugs.
\end{TableNotes}
\newcolumntype{C}[1]{>{\centering\arraybackslash}p{#1}}
\begin{longtable}{p{1.4cm}<{\centering}p{1.4cm}<{\centering}cccccc}
\captionsetup{width=.95\textwidth}
\caption{\footnotesize Solutions using Deep Learning for eight security problems. 
The metrics in the Evaluation column include accuracy (ACC), precision (PRC), recall (REC), $F_{1}$ score ($F_{1}$), false positive rate (FPR), and false negative rate (FNR).}\n\\label{Table:Summary}\\\\\n\\toprule \\toprule\n\\textbf{Security Problem} & \\textbf{Works} & \\multicolumn{6}{c}{\\textbf{Summary}} \\\\\n\\cmidrule[1pt]{1-8}\n\\endfirsthead\n\\multicolumn{7}{c}{{Continued on Next Page\\ldots}} \\\\\n\\endfoot\n\\insertTableNotes \n\\endlastfoot\n {\\multirow{5}{1.5cm}{Security Oriented Program Analysis~}} \n & {\\multirow{2}{*}{RFBNN~}} & \\multicolumn{3}{C{5cm}}{Phase I} & \\multicolumn{3}{C{5cm}}{Phase II} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n && \\multicolumn{3}{C{5.7cm}}{Dataset comes from previous paper~, consisting of 2200 separate binaries. \n 2064 of the binaries were for Linux, obtained from the coreutils, binutils, and findutils packages. \n The remaining 136 for Windows consist of binaries from popular open-source projects.\n Half of the binaries were for x86, and the other half for x86-64. 
} 
 & \multicolumn{3}{C{5.7cm}}{They extract fixed-length subsequences (1000-byte chunks) from the code sections of binaries, then use ``one-hot encoding'', which converts a byte into a $\mathbb{Z}^{256}$ vector.} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{2}{C{4.4cm}}{Phase III} & \multicolumn{2}{C{3.8cm}}{Phase IV} & \multicolumn{2}{C{2.8cm}}{Evaluation} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{2}{C{4.4cm}}{ N/A } 
 & \multicolumn{2}{C{3.8cm}}{ Bi-directional RNN}
 & 
 \begin{tabular}[t]{p{0.2cm}<{\centering}p{0.1cm}<{\centering}p{0cm}p{0.2cm}<{\centering}p{0.1cm}<{\centering}}
 ACC:&98.4\%&&PRE:&N/A\\
 REC:&0.97&&$F_{1}$:&0.98\\
 FPR:&N/A&&FNR:&N/A\\
 \end{tabular} 
 \\
 \cmidrule[1pt]{2-8}
 & {\multirow{2}{*}{EKLAVYA}} & \multicolumn{3}{C{5cm}}{Phase I} & \multicolumn{3}{C{5cm}}{Phase II} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{3}{C{5.7cm}}{They adopted source code from previous work~ as their raw data,
 then obtained two datasets by using two commonly used compilers: gcc and clang, with different optimization levels ranging from O0 to O3 for both x86 and x64. 
 They obtained the ground truth for the function arguments by parsing the DWARF debug information.
 Next, they extract functions from the binaries and remove functions which are duplicates of other functions in the dataset. 
 Finally, they match each caller snippet with its callee body. } 
 & \multicolumn{3}{C{5.7cm}}{Tokenizing the hexadecimal value of each instruction.} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{2}{C{4.4cm}}{Phase III} & \multicolumn{2}{C{3.8cm}}{Phase IV} & \multicolumn{2}{C{2.8cm}}{Evaluation} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{2}{C{4.4cm}}{ Word2vec technique to compute word embeddings. 
} 
 & \multicolumn{2}{C{3.8cm}}{RNN}
 & 
 \begin{tabular}[t]{p{0.2cm}<{\centering}p{0.1cm}<{\centering}p{0cm}p{0.2cm}<{\centering}p{0.1cm}<{\centering}}
 ACC:&81.0\%&&PRE:&N/A\\
 REC:&N/A&&$F_{1}$:&N/A\\
 FPR:&N/A&&FNR:&N/A\\
 \end{tabular} \\
 \cmidrule[0.8pt]{1-8}
 {\multirow{5}{1.5cm}{Defending Return Oriented Programming Attacks
 }} 
 & {\multirow{2}{*}{ROPNN }} & \multicolumn{3}{C{5cm}}{Phase I} & \multicolumn{3}{C{5cm}}{Phase II} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 && \multicolumn{3}{C{5.7cm}}{The data is a set of gadget chains obtained from existing programs. A gadget searching tool, ROPGadget, is used to find available gadgets. Gadgets are chained based on whether the produced gadget chain is executable on a CPU emulator. The raw data is represented in hexadecimal form of instruction sequences. } 
 & \multicolumn{3}{C{5.7cm}}{Form one-hot vector for bytes.} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{2}{C{4.4cm}}{Phase III} & \multicolumn{2}{C{3.8cm}}{Phase IV} & \multicolumn{2}{C{2.8cm}}{Evaluation} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{2}{C{4.4cm}}{ N/A } 
 & \multicolumn{2}{C{3.8cm}}{ 1-D CNN}
 & 
 \begin{tabular}[t]{p{0.2cm}<{\centering}p{0.1cm}<{\centering}p{0cm}p{0.2cm}<{\centering}p{0.1cm}<{\centering}}
 ACC:&99.9\%&&PRE:&0.99\\
 REC:&N/A&&$F_{1}$:&0.01\\
 FPR:&N/A&&FNR:&N/A\\
 \end{tabular} \\
 \cmidrule[1pt]{2-8}
 & {\multirow{2}{*}{HeNet }} & \multicolumn{3}{C{5cm}}{Phase I} & \multicolumn{3}{C{5cm}}{Phase II} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{3}{C{5.7cm}}{Data is acquired from Intel PT, which is a processor trace tool that can log control flow data. The Taken Not-Taken (TNT) packet and the Target IP (TIP) packet are the two packets of interest. 
Logged as binary numbers, information about executed branches can be obtained from TNT, and the binary executed can be obtained from TIP. Then the binary sequences are transformed, byte by byte, into sequences of values between 0-255, called pixels.
 } 
 & \multicolumn{3}{C{5.7cm}}{Given the pixel sequences, slice the whole sequence and reshape to form sequences of images for neural network training.
 } \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{2}{C{4.4cm}}{Phase III} & \multicolumn{2}{C{3.8cm}}{Phase IV} & \multicolumn{2}{C{2.8cm}}{Evaluation} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{2}{C{4.4cm}}{ Word2vec technique to compute word embeddings. } 
 & \multicolumn{2}{C{3.8cm}}{DNN}
 & 
 \begin{tabular}[t]{p{0.2cm}<{\centering}p{0.1cm}<{\centering}p{0cm}p{0.2cm}<{\centering}p{0.1cm}<{\centering}}
 ACC:&98.1\%&&PRE:&0.99\\
 REC:&0.96&&$F_{1}$:&0.97\\
 FPR:&0.01&&FNR:&0.04\\
 \end{tabular} \\ 
 \cmidrule[0.8pt]{1-8}
 {\multirow{5}{1.5cm}{Achieving Control Flow Integrity }} 
 & {\multirow{2}{*}{Barnum}} & \multicolumn{3}{C{5cm}}{Phase I} & \multicolumn{3}{C{5cm}}{Phase II} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 && \multicolumn{3}{C{5.7cm}}{The raw data, which is the exact sequence of instructions executed, was generated by combining the program binary, obtained immediately before the program opens a document, and the Intel\textsuperscript{\textregistered} PT trace. Intel\textsuperscript{\textregistered} PT's built-in filtering options are set to CR3 and current privilege level (CPL), so that only the program activity in the user space is traced. } 
 & \multicolumn{3}{C{5.7cm}}{The raw instruction sequences are summarized into Basic Blocks with IDs assigned and are then sliced into manageable subsequences with a fixed window size of 32, found experimentally. 
Only sequences ending on indirect calls, jumps and returns are analyzed, since control-flow hijacking attacks always occur there. The label is the next BBID in the sequence. } \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{2}{C{4.4cm}}{Phase III} & \\multicolumn{2}{C{3.8cm}}{Phase IV} & \\multicolumn{2}{C{2.8cm}}{Evaluation} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{2}{C{4.4cm}}{ N/A } \n & \\multicolumn{2}{C{3.8cm}}{LSTM}\n & \n \\begin{tabular}[t]{p{0.2cm}<{\\centering}p{0.1cm}<{\\centering}p{0cm}p{0.2cm}<{\\centering}p{0.1cm}<{\\centering}}\n ACC:&N/A\\%&&PRE:&0.98\\\\\n REC:&1.00&&$F_{1}$:&0.98\\\\\n FPR:&0.98&&FNR:&0.02\\\\\n \\end{tabular} \\\\ \n \\cmidrule[1pt]{2-8}\n & {\\multirow{2}{*}{CFG-CNN }} & \\multicolumn{3}{C{5cm}}{Phase I} & \\multicolumn{3}{C{5cm}}{Phase II} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{3}{C{5.7cm}}{The raw data is instruction level control-flow graph constructed from program assembly code by an algorithm proposed by the authors. While in the CFG, one vertex corresponds to one instruction and one directed edge corresponds to an execution path from one instruction to another. The program sets for experiments are obtained from popular programming contest CodeChief.\n } \n & \\multicolumn{3}{C{5.7cm}}{Since each vertex of the CFG represents an instruction with complex information that could be viewed from different aspects, including instruction name, type, operands etc., a vertex is represented as the sum of a set of real valued vectors, corresponding to the number of views (e.g. \\textit{addq 32,\\%rsp} is converted to linear combination of randomly assigned vectors of \\textit{addq value, reg}). 
The CFG is then sliced by a set of fixed size windows sliding through the entire graph to extract local features on different levels.} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{2}{C{4.4cm}}{Phase III} & \\multicolumn{2}{C{3.8cm}}{Phase IV} & \\multicolumn{2}{C{2.8cm}}{Evaluation} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{2}{C{4.4cm}}{ N/A } \n & \\multicolumn{2}{C{3.8cm}}{DGCNN with different numbers of views and with or without operands}\n & \n \\begin{tabular}[t]{p{0.2cm}<{\\centering}p{0.1cm}<{\\centering}p{0cm}p{0.2cm}<{\\centering}p{0.1cm}<{\\centering}}\n ACC:&84.1\\%&&PRE:&N/A\\\\\n REC:&N/A&&$F_{1}$:&N/A\\\\\n FPR:&N/A&&FNR:&N/A\\\\\n \\end{tabular} \\\\ \n \\cmidrule[0.8pt]{1-8}\n {\\multirow{5}{1.5cm}{Defending Network Attacks }} \n & {\\multirow{2}{*}{50b(yte)-CNN}} & \\multicolumn{3}{C{5cm}}{Phase I} & \\multicolumn{3}{C{5cm}}{Phase II} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n && \\multicolumn{3}{C{5.7cm}}{Open dataset UNSW-NB15 is used. First, tcpdump tool is utilised to capture 100 GB of the raw traffic (i.e. PCAP files) containing benign activities and 9 types of attacks. The Argus, Bro-IDS (now called Zeek) analysis tools are then used and twelve algorithms are developed to generate totally 49 features with the class label. In the end, the total number of data samples is 2,540,044 which are stored in CSV files. } \n & \\multicolumn{3}{C{5.7cm}}{The first 50 bytes of each network traffic flow are picked out and each is directly used as one feature input to the neural network. 
} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{2}{C{4.4cm}}{Phase III} & \multicolumn{2}{C{3.8cm}}{Phase IV} & \multicolumn{2}{C{2.8cm}}{Evaluation} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{2}{C{4.4cm}}{ N/A } 
 & \multicolumn{2}{C{3.8cm}}{ CNN with 2 hidden fully connected layers}
 & 
 \begin{tabular}[t]{p{0.2cm}<{\centering}p{0.1cm}<{\centering}p{0cm}p{0.2cm}<{\centering}p{0.1cm}<{\centering}}
 ACC:&N/A\%&&PRE:&N/A\\
 REC:&N/A&&$F_{1}$:&0.93\\
 FPR:&N/A&&FNR:&N/A\\
 \end{tabular} \\ 
 \cmidrule[1pt]{2-8}
 & {\multirow{2}{*}{PCCN}} & \multicolumn{3}{C{5cm}}{Phase I} & \multicolumn{3}{C{5cm}}{Phase II} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{3}{C{5.7cm}}{Open dataset CICIDS2017, which contains benign and 14 types of attacks, is used. Background benign network traffic is generated by profiling the abstract behavior of human interactions. Raw data are provided as PCAP files, and the results of the network traffic analysis using CICFlowMeter are provided as CSV files. In the end the dataset contains 3,119,345 data samples and 83 features categorized into 15 classes (1 normal + 14 attacks). } 
 & \multicolumn{3}{C{5.7cm}}{Extract a total of 1,168,671 flow data, including 12 types of attack activities, from the original dataset. Those flow data are then processed and visualized into grey-scale 2D graphs. 
The visualization method is not specified.} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{2}{C{4.4cm}}{Phase III} & \\multicolumn{2}{C{3.8cm}}{Phase IV} & \\multicolumn{2}{C{2.8cm}}{Evaluation} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{2}{C{4.4cm}}{ N/A } \n & \\multicolumn{2}{C{3.8cm}}{Parallel cross CNN.}\n & \n \\begin{tabular}[t]{p{0.2cm}<{\\centering}p{0.1cm}<{\\centering}p{0cm}p{0.2cm}<{\\centering}p{0.1cm}<{\\centering}}\n ACC:&N/A\\%&&PRE:&0.99\\\\\n REC:&N/A&&$F_{1}$:&0.99\\\\\n FPR:&N/A&&FNR:&N/A\\\\\n \\end{tabular} \\\\ \n \\cmidrule[0.8pt]{1-8}\n {\\multirow{5}{1.5cm}{Malware Classification }} \n & {\\multirow{2}{*}{Rosenberg}} & \\multicolumn{3}{C{5cm}}{Phase I} & \\multicolumn{3}{C{5cm}}{Phase II} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n && \\multicolumn{3}{C{5.7cm}}{The android dataset has the latest malware families and their variants, each with the same number of samples. The samples are labeled by VirusTotal. Then Cuckoo Sandbox is used to extract dynamic features (API calls) and static features (string). To avoid some anti-forensic sample, they applied YARA rule and removed sequences with less than 15 API calls. After preprocessing and balance the benign samples number, the dataset has 400,000 valid samples. } \n & \\multicolumn{3}{C{5.7cm}}{Long sequences cause out of memory during training LSTM model. So they use sliding window with fixed size and pad shorter sequences with zeros. One-hot encoding is applied to API calls. For static features strings, they defined a vector of 20,000 Boolean values indicating the most frequent Strings in the entire dataset. 
If the sample contain one string, the corresponding value in the vector will be assigned as 1, otherwise, 0.} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{2}{C{4.4cm}}{Phase III} & \\multicolumn{2}{C{3.8cm}}{Phase IV} & \\multicolumn{2}{C{2.8cm}}{Evaluation} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{2}{C{4.4cm}}{ N/A } \n & \\multicolumn{2}{C{3.8cm}}{ They used RNN, BRNN, LSTM, Deep LSTM, BLSTM, Deep BLSTM, GRU, bi-directional GRU, Fully-connected DNN, 1D CNN in their experiments}\n & \n \\begin{tabular}[t]{p{0.2cm}<{\\centering}p{0.1cm}<{\\centering}p{0cm}p{0.2cm}<{\\centering}p{0.1cm}<{\\centering}}\n ACC:&98.3\\%&&PRE:&N/A\\\\\n REC:&N/A&&$F_{1}$:&N/A\\\\\n FPR:&N/A&&FNR:&N/A\\\\\n \\end{tabular} \\\\ \n \\cmidrule[1pt]{2-8}\n & {\\multirow{2}{*}{DeLaRosa}} & \\multicolumn{3}{C{5cm}}{Phase I} & \\multicolumn{3}{C{5cm}}{Phase II} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{3}{C{5.7cm}}{The windows dataset is from Reversing Labs including XP, 7, 8, and 10 for both 32-bit and 64-bit architectures and gathered over a span of twelve years (2006-2018). They selected nine malware families in their dataset and extracted static features in terms of bytes, basic, and assembly features. } \n & \\multicolumn{3}{C{5.7cm}}{For bytes-level features, they used a sliding window to get the histogram of the bytes and compute the associated entropy in a window; for basic features, they created a fixed-sized feature vector given either a list of ASCII strings, or extracted import and metadata information from the PE Header(Strings are hashed and calculate a histogram of these hashes by counting the occurrences of each value); for assembly features, the disassembled code generated by Radare2 can be parsed and transformed into graph-like data structures such as call graphs, control flow graph, and instruction flow graph. 
} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{2}{C{4.4cm}}{Phase III} & \multicolumn{2}{C{3.8cm}}{Phase IV} & \multicolumn{2}{C{2.8cm}}{Evaluation} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{2}{C{4.4cm}}{ N/A} 
 & \multicolumn{2}{C{3.8cm}}{N/A}
 & 
 \begin{tabular}[t]{p{0.2cm}<{\centering}p{0.1cm}<{\centering}p{0cm}p{0.2cm}<{\centering}p{0.1cm}<{\centering}}
 ACC:&90.1\%&&PRE:&N/A\\
 REC:&N/A&&$F_{1}$:&N/A\\
 FPR:&N/A&&FNR:&N/A\\
 \end{tabular} \\ 
 \cmidrule[0.8pt]{1-8}
 {\multirow{5}{1.5cm}{System Event Based Anomaly Detection\\}} 
 & {\multirow{2}{*}{DeepLog }} & \multicolumn{3}{C{5cm}}{Phase I} & \multicolumn{3}{C{5cm}}{Phase II} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 && \multicolumn{3}{C{5.7cm}}{More than 24 million raw log entries with the size of 2412 MB are recorded from the 203-node HDFS. Over 11 million log entries with 29 types are parsed, which are further grouped into 575,061 sessions according to block identifiers. These sessions are manually labeled as normal or abnormal by HDFS experts. Finally, the constructed HDFS dataset contains 575,061 sessions of logs, among which 16,838 sessions were labeled as anomalous. } 
 & \multicolumn{3}{C{5.7cm}}{The raw log entries are parsed into different log types using Spell, which is based on the longest common subsequence. 
There are total 29 log types in HDFS dataset} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{2}{C{4.4cm}}{Phase III} & \\multicolumn{2}{C{3.8cm}}{Phase IV} & \\multicolumn{2}{C{2.8cm}}{Evaluation} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{2}{C{4.4cm}}{DeepLog directly utilized one-hot vector to represent 29 log key without represent learning} \n & \\multicolumn{2}{C{3.8cm}}{ A stacked LSTM with two hidden LSTM layers.}\n & \n \\begin{tabular}[t]{p{0.2cm}<{\\centering}p{0.1cm}<{\\centering}p{0cm}p{0.2cm}<{\\centering}p{0.1cm}<{\\centering}}\n ACC:&N/A\\%&&PRE:&0.95\\\\\n REC:&0.96&&$F_{1}$:&0.96\\\\\n FPR:&N/A&&FNR:&N/A\\\\\n \\end{tabular} \\\\ \n \\cmidrule[1pt]{2-8}\n & {\\multirow{2}{*}{LogAnom }} & \\multicolumn{3}{C{5cm}}{Phase I} & \\multicolumn{3}{C{5cm}}{Phase II} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{3}{C{5.7cm}}{LogAnom also used HDFS dataset, which is same as DeepLog. } \n & \\multicolumn{3}{C{5.7cm}}{The raw log entries are parsed to different log templates using FT-Tree according the frequent combinations of log words. 
There are total 29 log templates in HDFS dataset} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{2}{C{4.4cm}}{Phase III} & \\multicolumn{2}{C{3.8cm}}{Phase IV} & \\multicolumn{2}{C{2.8cm}}{Evaluation} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{2}{C{4.4cm}}{ LogAnom employed Word2Vec to represent the extracted log templates with more semantic information } \n & \\multicolumn{2}{C{3.8cm}}{Two LSTM layers with 128 neurons}\n & \n \\begin{tabular}[t]{p{0.2cm}<{\\centering}p{0.1cm}<{\\centering}p{0cm}p{0.2cm}<{\\centering}p{0.1cm}<{\\centering}}\n ACC:&N/A\\%&&PRE:&0.97\\\\\n REC:&0.94&&$F_{1}$:&0.96\\\\\n FPR:&N/A&&FNR:&N/A\\\\\n \\end{tabular} \\\\ \n \\cmidrule[0.8pt]{1-8}\n {\\multirow{5}{1.5cm}{Memory Forensics }} \n & {\\multirow{2}{*}{DeepMem}} & \\multicolumn{3}{C{5cm}}{Phase I} & \\multicolumn{3}{C{5cm}}{Phase II} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n && \\multicolumn{3}{C{5.7cm}}{400 memory dumps are collected on Windows 7 x86 SP1 virtual machine with simulating various random user actions and forcing the OS to randomly allocate objects. The size of each dump is 1GB. 
} 
 & \multicolumn{3}{C{5.7cm}}{Construct a memory graph from the memory dumps, where each node represents a segment between two pointers and an edge is created if two nodes are neighbors} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{2}{C{4.4cm}}{Phase III} & \multicolumn{2}{C{3.8cm}}{Phase IV} & \multicolumn{2}{C{2.8cm}}{Evaluation} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{2}{C{4.4cm}}{Each node is represented by a latent numeric vector from the embedding network.} 
 & \multicolumn{2}{C{3.8cm}}{ Fully Connected Network (FCN) with ReLU layer.}
 & 
 \begin{tabular}[t]{p{0.2cm}<{\centering}p{0.1cm}<{\centering}p{0cm}p{0.2cm}<{\centering}p{0.1cm}<{\centering}}
 ACC:&N/A\%&&PRE:&0.99\\
 REC:&0.99&&$F_{1}$:&0.99\\
 FPR:&0.01&&FNR:&0.01\\
 \end{tabular} \\ 
 \cmidrule[1pt]{2-8}
 & {\multirow{2}{*}{MDMF~}} & \multicolumn{3}{C{5cm}}{Phase I} & \multicolumn{3}{C{5cm}}{Phase II} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{3}{C{5.7cm}}{Create a dataset of benign host memory snapshots running normal, non-compromised software, including software that executes in many of the malicious snapshots. The benign snapshot is extracted from memory after ample time has passed for the chosen programs to open. By generating samples in parallel to the separate malicious environment, the benign memory snapshot dataset is created. 
} 
 & \multicolumn{3}{C{5.7cm}}{Various representations for the memory snapshots, including byte sequences and images, without relying on domain knowledge of the OS.} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{2}{C{4.4cm}}{Phase III} & \multicolumn{2}{C{3.8cm}}{Phase IV} & \multicolumn{2}{C{2.8cm}}{Evaluation} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 & & \multicolumn{2}{C{4.4cm}}{ N/A} 
 & \multicolumn{2}{C{3.8cm}}{Recurrent Neural Network with LSTM cells, and Convolutional Neural Network composed of multiple layers, including pooling and fully connected layers, for image data}
 & 
 \begin{tabular}[t]{p{0.2cm}<{\centering}p{0.1cm}<{\centering}p{0cm}p{0.2cm}<{\centering}p{0.1cm}<{\centering}}
 ACC:&98.0\%&&PRE:&N/A\\
 REC:&N/A&&$F_{1}$:&N/A\\
 FPR:&N/A&&FNR:&N/A\\
 \end{tabular} \\ 
 \cmidrule[0.8pt]{1-8}
 {\multirow{5}{1.5cm}{Fuzzing }} 
 & {\multirow{2}{*}{L-Fuzz}} & \multicolumn{3}{C{5cm}}{Phase I} & \multicolumn{3}{C{5cm}}{Phase II} \\
 \cmidrule[\lightrulewidth](lr){3-8}\addlinespace[0ex]
 && \multicolumn{3}{C{5.7cm}}{The raw data are about 63,000 non-binary PDF objects, sliced in fixed size, extracted from 534 PDF files that are provided by the Windows fuzzing team and were previously used for prior extended fuzzing of the Edge PDF parser. 
} \n & \\multicolumn{3}{C{5.7cm}}{N/A} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{2}{C{4.4cm}}{Phase III} & \\multicolumn{2}{C{3.8cm}}{Phase IV} & \\multicolumn{2}{C{2.8cm}}{Evaluation} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{2}{C{4.4cm}}{N/A} \n & \\multicolumn{2}{C{3.8cm}}{ Char-RNN}\n & \n \\begin{tabular}[t]{p{0.2cm}<{\\centering}p{0.1cm}<{\\centering}p{0cm}p{0.2cm}<{\\centering}p{0.1cm}<{\\centering}}\n ACC:&N/A\\%&&PRE:&N/A\\\\\n REC:&N/A&&$F_{1}$:&0.93\\\\\n FPR:&N/A&&FNR:&N/A\\\\\n \\end{tabular} \\\\ \n \\cmidrule[1pt]{2-8}\n & {\\multirow{2}{*}{NEUZZ }} & \\multicolumn{3}{C{5cm}}{Phase I} & \\multicolumn{3}{C{5cm}}{Phase II} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{3}{C{5.7cm}}{For each program tested, the raw data is collected by running AFL-2.52b on a single core machine for one hour. The training data are byte level input files generated by AFL, and the labels are bitmaps corresponding to input files. For experiments, NEUZZ is implemented on 10 real-world programs, the LAVA-M bug dataset, and the CGC dataset.\n } \n & \\multicolumn{3}{C{5.7cm}}{N/A } \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{2}{C{4.4cm}}{Phase III} & \\multicolumn{2}{C{3.8cm}}{Phase IV} & \\multicolumn{2}{C{2.8cm}}{Evaluation} \\\\\n \\cmidrule[\\lightrulewidth](lr){3-8}\\addlinespace[0ex]\n & & \\multicolumn{2}{C{4.4cm}}{ N/A} \n & \\multicolumn{2}{C{3.8cm}}{NN }\n & \n \\begin{tabular}[t]{p{0.2cm}<{\\centering}p{0.1cm}<{\\centering}p{0cm}p{0.2cm}<{\\centering}p{0.1cm}<{\\centering}}\n ACC:&N/A\\%&&PRE:&N/A\\\\\n REC:&N/A&&$F_{1}$:&0.93\\\\\n FPR:&N/A&&FNR:&N/A\\\\\n \\end{tabular} \\\\ \n \\bottomrule\n \\bottomrule\n \\end{longtable}\n \\begin{TableNotes}\n \\item [1] Deep Learning metrics are often not available in fuzzing papers. 
Typical fuzzing metrics used for evaluations are: code coverage, pass rate and bugs.\n \end{TableNotes}\n\end{ThreePartTable}\n\end{center}", "id": "cd39d418-ca95-49d8-bc15-bb9f46240292", "level": "subsection", "origin_cites_number": 46, "parent_id": "93654628-99a6-4537-9533-0c31e4f97c48", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A four-phase workflow framework can summarize the existing works in a unified manner" ], [ "subsection", "Using the four-phase workflow framework to summarize some representative research works" ] ], "subsections": [], "title": "Using the four-phase workflow framework to summarize some representative research works" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{lab:three_questions}\nData representation (or feature engineering) plays an important role in solving security problems with Deep Learning. This is because data representation is a way to take advantage of human ingenuity and prior knowledge to extract and organize the discriminative information from the data. Many efforts in deploying machine learning algorithms in the security domain actually go into the design of preprocessing pipelines and data transformations that result in a representation of the data that supports effective machine learning. \nIn order to expand the scope and ease of applicability of machine learning in the security domain, it would be highly desirable to find a proper way to represent the data in the security domain, which can entangle and hide more or less the different explanatory factors of variation behind the data. \nTo let this survey adequately reflect the important role\nplayed by data representation, our review will focus on how the following three questions are answered by the existing works:\n\label{threequestions}\n\begin{itemize}\n\item {\bf Question 1:} Is Phase~\ref{phase2} pervasively done in the literature? 
When Phase~\\ref{phase2} is skipped in a work, are there any particular reasons? \n\\item {\\bf Question 2:} Is Phase~\\ref{phase3} employed in the literature? \n When Phase~\\ref{phase3} is skipped in a work, are there any particular reasons? \n\\item {\\bf Question 3:} When solving different security problems, is there any commonality in terms of the (types of) classifiers learned in Phase~\\ref{phase4}? Among the works solving the same security problem, is there dissimilarity in terms of classifiers learned in Phase~\\ref{phase4}?\n\\end{itemize}\n\\newlength\\treeheight\n\\setlength{\\treeheight}{8cm}\n\\begin{figure}[htbp]\n \\centering \n\\begin{tikzpicture}[\n grow = right,\n sibling distance = 5em,\n anchor=west,\n growth parent anchor=east, \n parent anchor=east, \n level distance=0.5cm, \n every node/.style={},\n level 1/.style={sibling distance=\\treeheight/8},\n level 2/.style={sibling distance=\\treeheight/8},\n level distance = 8.5em,\n every node/.style = {font=\\footnotesize},\n sloped\n]\n \\node [root] {\\textbf{Phase~\\ref{phase3}}}\n child { node [env] {\\textbf{class 1}}\n edge from parent node [below] {\\texttt{no consideration}} }\n child { node [dummy] {}\n child { node [dummy] {}\n child {node(pass) [env] {\\textbf{class 4}}\n edge from parent node [below, align=center]{\\texttt{comparison}}}\n child {node [env] {\\textbf{class 3}}\n edge from parent node [above, align=center]{\\texttt{no comparison}}}\n edge from parent node [below] {\\texttt{adoption}} }\n child { node(plugin) [env] {\\textbf{class 2}}\n edge from parent node [above, align=center]{\\texttt{no adoption}}}\n edge from parent node [above] {\\texttt{consideration}}};\n\\end{tikzpicture}\n\\caption{Classification tree for different Phase~\\ref{phase3} methods. 
Here, \texttt{consideration}, \texttt{adoption}, and \texttt{comparison} indicate that a work considers Phase~\ref{phase3}, adopts Phase~\ref{phase3}, and makes a comparison with other methods, respectively.} \n\label{fig:dec}\n\end{figure}\nTo group the Phase~\ref{phase3} methods across different applications of Deep Learning in solving the same security problem, we introduce a classification tree as shown in Figure~\ref{fig:dec}. The classification tree categorizes the Phase~\ref{phase3} methods in our selected survey works into four classes. First, class 1 includes the Phase~\ref{phase3} methods which do not consider representation learning. Second, class 2 includes the Phase~\ref{phase3} methods which consider representation learning but do not adopt it. Third, class 3 includes the Phase~\ref{phase3} methods which consider and adopt representation learning but do not compare the performance with other methods. Finally, class 4 includes the Phase~\ref{phase3} methods which consider and adopt representation learning and compare the performance with other methods.\nIn the remainder of this paper, we take a closer look at how each of the eight security problems is being solved by applications of Deep Learning in the literature.", "id": "45755ebb-4b09-42be-a1ba-c9a11a6f4ec7", "level": "subsection", "origin_cites_number": 0, "parent_id": "93654628-99a6-4537-9533-0c31e4f97c48", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A four-phase workflow framework can summarize the existing works in a unified manner" ], [ "subsection", "Methodology for reviewing the existing works" ] ], "subsections": [], "title": "Methodology for reviewing the existing works" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:programanalysis}", "id": "7a06b173-0676-41cd-b4b7-f793958a3ed0", "level": "section", "origin_cites_number": 0, "parent_id": "825459b6-d132-4d0c-9bfb-51997111ee0c",
"prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in solving security-oriented program analysis challenges" ] ], "subsections": [ "b3244be8-d71e-4845-98c5-8fb6977b0e43", "191a0f6f-88fb-4873-81a3-03aec397dafc", "f6c1d9cc-db45-4789-9e65-d948d0bcace9" ], "title": "A closer look at applications of Deep Learning in solving security-oriented program analysis challenges" }, { "cite_extract_rate": 0.25, "cites": [ 3947 ], "content": "In recent years, security-oriented program analysis has been widely used in software security. For example, symbolic execution and taint analysis are used to discover, detect and analyze vulnerabilities in programs. Control flow analysis, data flow analysis and pointer/alias analysis are important components \nwhen enforcing many security strategies, \nsuch as control flow integrity, data flow integrity and dangling pointer elimination.\nReverse engineering is used by defenders and attackers to understand the logic of a program without source code.\n{\color{myblack} In security-oriented program analysis, there are many open problems, such as precise pointer/alias analysis, \naccurate and complete reverse engineering, complex constraint solving, program de-obfuscation, and so on. \nSome problems have been theoretically proven to be NP-hard, and others still need lots of human effort to solve. \nEach of them requires a lot of domain knowledge and experience from experts to develop better solutions. \nEssentially, the main challenges in solving them through traditional approaches \nare due to the sophisticated rules between the features and labels, which may change in different contexts. \nTherefore, on the one hand, it takes a large quantity of human effort to develop rules to solve the problems; \non the other hand, even the most experienced expert cannot guarantee completeness. 
\nFortunately, deep learning methods are adept at finding relations between features and labels if given a large amount of training data. \nThey can quickly and comprehensively find all the relations if the training samples are representative and effectively encoded.}\nIn this section, we will review the four very recent representative works that use Deep Learning for security-oriented program analysis. We observed that they focused on different goals. Shin, et al. designed a model~ to identify function boundaries. EKLAVYA~ was developed to learn function types. Gemini~ was proposed to detect similarity among functions. DEEPVSA~ was designed to learn the memory region of an indirect addressing from the code sequence. Among these works, we select two representative works~ and then \nsummarize the analysis results in Table~\ref{Table:Summary} in detail.\nOur review will be centered around the three questions described in Section~\ref{threequestions}. In the remainder of this section, we will first provide a set of observations, and then the indications. 
Finally, we provide some general remarks.", "id": "b3244be8-d71e-4845-98c5-8fb6977b0e43", "level": "subsection", "origin_cites_number": 4, "parent_id": "7a06b173-0676-41cd-b4b7-f793958a3ed0", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in solving security-oriented program analysis challenges" ], [ "subsection", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.25, "cites": [ 3947 ], "content": "From a close look at the very recent applications using Deep Learning for solving security-oriented program analysis challenges, we observed the following: \n\begin{itemize}[label={}]\n\item \observation{} \textit{All of the works in our survey used binary files as their raw data.}\nPhase~\ref{phase2} in our survey had one similar and straightforward goal~\textendash~extracting code sequences from the binary. The difference among them was that the code sequence was extracted directly from the binary file when solving problems in static program analysis, \nwhile it was extracted from the program execution when solving problems in dynamic program analysis.\n\item \observation{*} \textit{Most data representation methods generally took into account the domain knowledge.}\nMost data representation methods generally took into account the domain knowledge, i.e., what kind of information they wanted to preserve when processing their data. Note that the feature selection has a wide influence on Phase~\ref{phase2} and Phase~\ref{phase3}, for example, on embedding granularities and representation learning methods. Gemini~ selected function-level features, while the other works in our survey selected instruction-level features. 
To be specific, all the works except Gemini~ vectorized code sequences at the instruction level.\n\item \observation{} \n\textit{To better support data representation for high performance, some works adopted representation learning.}\nFor instance, DEEPVSA~ employed a representation learning method, i.e., bi-directional LSTM, to learn data dependency within instructions. EKLAVYA~ adopted a representation learning method, i.e., the word2vec technique, to extract inter-instruction information. It is worth noting that Gemini~ adopts the Structure2vec embedding network in its siamese architecture in Phase~\ref{phase4} (see details in Observation~3.\ref{obs:bi}). The Structure2vec embedding network learned information from an attributed control flow graph.\n\item \observation{} \textit{According to our taxonomy, most works in our survey were classified into class 4.}\nTo compare the Phase~\ref{phase3} methods, we introduced a \nclassification tree with three layers, as shown in Figure~\ref{fig:dec}, to group different works into four categories. The decision tree grouped our surveyed works into four classes according to whether they considered representation learning, whether they adopted representation learning, and whether they compared their methods with others', respectively, when designing their framework. According to our taxonomy, EKLAVYA~ and DEEPVSA~ were grouped into class 4 shown in Figure~\ref{fig:dec}. Also, Gemini's work~ and Shin, et al.'s work~ belonged to class 1 and class 2 shown in Figure~\ref{fig:dec}, respectively.\n\item \observation{} \label{obs:reason} \textit{All the works in our survey explain why they adopted or did not adopt a representation learning algorithm.}\nTwo works in our survey adopted representation learning for different reasons: to enhance the model's ability of generalization~; and to learn the dependency within instructions~.\nIt is worth noting that Shin, et al. 
did not adopt representation learning because they wanted to preserve the ``attractive'' features of neural networks over other machine learning methods~\textendash~simplicity. As they stated, ``first, neural networks can learn directly from the original representation with minimal preprocessing (or ``feature engineering'') needed.'' and ``second, neural networks can learn end-to-end, where each of its constituent stages are trained simultaneously in order to best solve the end goal.'' Although Gemini~ did not adopt representation learning when processing its raw data, the Deep Learning models in its siamese structure consisted of two graph embedding networks and one cosine function.\n\item \observation{*} \textit{The analysis results showed that a suitable representation learning method could improve the accuracy of Deep Learning models.}\nDEEPVSA~ designed a series of experiments to evaluate the effectiveness of its representation method. By combining with the domain knowledge, EKLAVYA~ employed t-SNE plots and analogical reasoning to explain the effectiveness of their representation learning method in an intuitive way.\n\item \observation{*} \label{obs:bi} \textit{Various Phase IV methods were used.}\nIn Phase~\ref{phase4}, Gemini~ adopted a siamese architecture model which consisted of two Structure2vec embedding networks and one cosine function. \nThe siamese architecture took two functions as its input, and produced the similarity score as the output. The other three works~ adopted bi-directional RNN, RNN, and bi-directional LSTM, respectively. Shin, et al. adopted bi-directional RNN because they wanted to combine both the past and the future information in making a prediction for the present instruction~. DEEPVSA~ adopted bi-directional RNN to enable their model to infer memory regions in both forward and backward ways. 
\n\end{itemize} \nThe above observations seem to suggest the following indications: \n\begin{itemize}[label={}]\n\item \indication{} \textit{Phase~\ref{phase3} is not always necessary.}\nNot all authors regard representation learning as a good choice,\neven though some case experiments show that representation learning can improve the final results.\nThey value the simplicity of Deep Learning methods more, and suppose that the adoption of representation learning weakens that simplicity.\n\item \indication{}\n\textit{Even though the ultimate objective of Phase~\ref{phase3} in the four surveyed works is to train a model with better accuracy, \nthey have different specific motivations as described in Observation~3.\ref{obs:reason}.}\nWhen authors choose representation learning, \nthey usually try to convince people of the effectiveness of their choice by \nempirical or theoretical analysis. \n\item \indication{*} \n\textit{Observation~3.\ref{obs:bi} indicates that authors usually refer to the domain knowledge when designing the architecture of a Deep Learning model.}\nFor instance, the works we reviewed commonly adopt bi-directional RNN when their prediction is partly based on future information in the data sequence.\n\end{itemize} \n{\color{myblack}", "id": "191a0f6f-88fb-4873-81a3-03aec397dafc", "level": "subsection", "origin_cites_number": 4, "parent_id": "7a06b173-0676-41cd-b4b7-f793958a3ed0", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in solving security-oriented program analysis challenges" ], [ "subsection", "Key findings from a closer look" ] ], "subsections": [], "title": "Key findings from a closer look" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:pa:dis}\nDespite the effectiveness and agility of deep learning-based methods, \nthere are still some challenges in developing a scheme with high accuracy 
due to the hierarchical data structure, \nnoisy data, and unbalanced data composition in program analysis. For instance, an instruction sequence, a typical data sample in program analysis, \ncontains a three-level hierarchy: sequence--instruction--opcode/operand. \nTo make things worse, each level may contain many different structures, e.g., one-operand instructions and multi-operand instructions,\nwhich makes it harder to encode the training data.\n}", "id": "f6c1d9cc-db45-4789-9e65-d948d0bcace9", "level": "subsection", "origin_cites_number": 0, "parent_id": "7a06b173-0676-41cd-b4b7-f793958a3ed0", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in solving security-oriented program analysis challenges" ], [ "subsection", "Discussion" ] ], "subsections": [], "title": "Discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:rop}", "id": "2b839e56-71e1-4fc1-8d17-ba5b1a399e32", "level": "section", "origin_cites_number": 0, "parent_id": "825459b6-d132-4d0c-9bfb-51997111ee0c", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in defending ROP attacks" ] ], "subsections": [ "ef251879-f266-4be9-8fa8-01fd19ef0e47", "32365617-f623-49b8-900b-d6d6aa7866c0", "f75ed293-f4d1-4fc0-8785-900fbb7f8d7a" ], "title": "A closer look at applications of Deep Learning in defending ROP attacks" }, { "cite_extract_rate": 0.4, "cites": [ 8016, 8018 ], "content": "Return-oriented programming (ROP) attack is one of the most dangerous code reuse attacks, which allows the attackers to launch control-flow hijacking attacks without injecting any malicious code. Rather, it leverages particular instruction sequences (called ``gadgets'') widely existing in the program space to achieve Turing-complete attacks~. 
Gadgets are instruction sequences that end with a \textit{RET} instruction. \nTherefore, they can be chained together by specifying the return addresses on the program stack.\nMany traditional techniques could be used to detect ROP attacks, such as control-flow integrity (CFI~), \nbut many of them either have a low detection rate or a high runtime overhead. \nROP payloads do not contain any code. \nIn other words, analyzing an ROP payload without the context of the program's memory dump is meaningless. \nThus, the most popular way of detecting and preventing ROP attacks is control-flow integrity. \n{\color{myblack} The challenge after acquiring the instruction sequences is that it is hard to recognize whether the control flow is normal. \nTraditional methods use the control flow graph (CFG) to identify whether the control flow is normal, \nbut attackers can design instruction sequences which follow the normal control flow defined by the CFG.\nIn essence, it is very hard to design a CFG that excludes every single possible combination of instructions that can be used to launch ROP attacks. \nTherefore, using data-driven methods could help eliminate such problems.}\nIn this section, we will review the very recent three representative works that use Deep Learning for defending ROP attacks: ROPNN , HeNet and DeepCheck . \nROPNN aims to detect ROP attacks, HeNet aims to detect malware using CFI, and DeepCheck aims at detecting all kinds of code reuse attacks.\n{\color{myblack} Specifically, ROPNN is designed to protect a single program at a time, and its training data are generated from real-world programs along with their execution. \nFirstly, it generates its benign and malicious data by ``chaining-up'' the normally executed instruction sequences and\n``chaining-up'' gadgets with the help of a gadget generation tool, respectively, \nafter the memory dumps of programs are created.\nEach data sample is a byte-level instruction sequence labeled as ``benign'' or ``malicious''. 
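The stack layout behind gadget chaining can be illustrated with a minimal Python sketch; the gadget addresses, the helper name, and the 64-bit word size are hypothetical assumptions for illustration, not taken from any real binary or from ROPNN's code:

```python
import struct

# Hypothetical gadget addresses (illustrative only, not from a real binary).
POP_RDI_RET = 0x400686   # pop rdi; ret
SYSCALL_RET = 0x40068e   # syscall; ret

def build_rop_chain(gadget_addrs):
    # Lay the gadget addresses out as consecutive 64-bit little-endian
    # words, mimicking return addresses placed on the program stack.
    return b''.join(struct.pack('<Q', addr) for addr in gadget_addrs)

# Each RET pops the next 8-byte word, transferring control gadget by gadget;
# the middle word here is a data operand consumed by the pop instruction.
chain = build_rop_chain([POP_RDI_RET, 0xdeadbeef, SYSCALL_RET])
```

Note how the payload itself is nothing but addresses and operand words, which is why, as the text observes, a payload carries little information without the instructions at those addresses.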
\nSecondly, ROPNN will be trained using both malicious and benign data.\nThirdly, the trained model is deployed to a target machine. After the protected program starts, \nthe executed instruction sequences will be traced and fed into the trained model, and \nthe protected program will be terminated once the model finds that the instruction sequences are likely to be malicious.\nHeNet is also proposed to protect a single program.\nIts malicious data and benign data are generated by collecting trace data through Intel PT from malware and normal software, respectively.\nBesides, HeNet preprocesses its dataset and shapes each data sample in the format of an image,\nso that it could implement transfer learning from a model pre-trained on ImageNet. \nThen, HeNet is trained and deployed on machines with the Intel PT feature to collect and classify the program's execution trace online. \nThe training data for DeepCheck are acquired from CFGs, \nwhich are constructed by disassembling the programs and using the information from Intel PT. \nAfter the CFG for a protected program is constructed, \nthe authors sample benign instruction sequences by chaining up basic blocks that are connected by edges, \nand sample malicious instruction sequences by chaining up those that are not connected by edges.\nAlthough a CFG is needed during training, there is no need to construct the CFG after the training phase. \nAfter deployment, instruction sequences will be constructed by leveraging Intel PT on the protected program. \nThen the trained model will classify whether the instruction sequences are malicious or benign.} \nWe observed that none of the works considered Phase~\ref{phase3}, so all of them belong to class 1 according to our taxonomy as shown in Figure~\ref{fig:dec}. The analysis results of ROPNN and HeNet are shown in Table \ref{Table:Summary}. Also, we observed that the three works had different goals. \nOur review will be centered around the three questions described in Section~\ref{threequestions}. 
In the remainder of this section, we will first provide a set of observations, and then the indications. Finally, we provide some general remarks.", "id": "ef251879-f266-4be9-8fa8-01fd19ef0e47", "level": "subsection", "origin_cites_number": 5, "parent_id": "2b839e56-71e1-4fc1-8d17-ba5b1a399e32", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in defending ROP attacks" ], [ "subsection", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.4, "cites": [ 8016, 8018 ], "content": "From a close look at the very recent applications using Deep Learning for defending return-oriented programming attacks, we observed the following: \n\begin{itemize}[label={}]\n\item \observation{}\label{rop:obs1} \textit{All the works in this survey focused on data generation and acquisition.}\nIn ROPNN , malicious samples (gadget chains) were generated using an automated gadget generator (i.e. ROPGadget~) and a CPU emulator (i.e. Unicorn~). ROPGadget was used to extract instruction sequences that could be used as gadgets from a program, and Unicorn was used to validate the instruction sequences.\nCorresponding benign samples (gadget-chain-like instruction sequences) were generated by disassembling a set of programs.\nDeepCheck~ builds on the key idea of control-flow integrity~. It generates the program's run-time control flow through a new feature of Intel CPUs (Intel Processor Trace), then compares the run-time control flow with the program's control-flow graph (CFG) generated through static analysis. Benign instruction sequences are those within the program's CFG, and vice versa. In HeNet~, the program's execution trace was extracted in a similar way as in DeepCheck. Then, each byte was transformed into a pixel with an intensity between 0 and 255. 
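The byte-to-pixel transformation described for HeNet can be sketched as follows; the helper name, the example trace, and the row width are illustrative assumptions, not HeNet's actual code:

```python
def trace_to_image_rows(trace, width):
    # Interpret every trace byte as a pixel intensity (0-255) and slice
    # the stream into fixed-width rows, forming an image-like 2D layout
    # in the spirit of HeNet-style preprocessing; trailing bytes that do
    # not fill a whole row are dropped.
    pixels = list(trace)  # iterating bytes yields integers 0-255
    n_rows = len(pixels) // width
    return [pixels[r * width:(r + 1) * width] for r in range(n_rows)]

# A made-up 32-byte trace becomes 4 rows of 8 pixels each.
rows = trace_to_image_rows(bytes(range(32)), width=8)
```

Such an image-shaped representation is what makes transfer learning from an ImageNet-pretrained model plausible, as the text notes.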
Known malware samples and benign software samples were used to generate malicious data and benign data, respectively.\n\item \observation{}\label{rop:obs2} \textit{None of the ROP works in this survey deployed Phase~\ref{phase3}.}\nBoth ROPNN and DeepCheck used binary instruction sequences for training. In ROPNN, one byte was used as the very basic element for data pre-processing. Bytes were formed into one-hot matrices and flattened for a 1-dimensional convolutional layer. In DeepCheck , a half-byte was used as the basic unit. Each half-byte (4 bits) was transformed to decimal form ranging from 0 to 15 as the basic element of the input vector, then was fed into a fully-connected input layer. On the other hand, HeNet used different kinds of data. By the time this survey was drafted, the source code of HeNet was not available to the public and thus, the details of the data pre-processing could not be investigated. However, it is still clear that HeNet used binary branch information collected from Intel PT rather than binary instructions. In HeNet, each byte was converted to one decimal number ranging from 0 to 255. Byte sequences were sliced and formed into image sequences (each pixel represented one byte) for a fully-connected input layer.\n\item \observation{}\label{rop:obs3} \textit{Fully-connected neural networks were widely used.}\nOnly ROPNN used a 1-dimensional convolutional neural network (CNN) when extracting features. Both HeNet and DeepCheck used fully-connected neural networks (FCN). None of the works used recurrent neural networks (RNN) or their variants.\n\end{itemize}\nThe above observations seem to indicate the following indications:\n\begin{itemize}\n\item \indication{} \textit{It seems that one of the most important factors in the ROP problem is feature selection and data generation.} \nAll three works use very different methods to collect/generate data, and all the authors provide very strong evidence and/or arguments to justify their approaches. 
ROPNN~ was trained on the malicious and benign instruction sequences. However, there is no clear boundary between benign instruction sequences and malicious gadget chains. This weakness may impair the performance when applying ROPNN to real-world ROP attacks. As opposed to ROPNN, DeepCheck utilizes a CFG to generate training basic-block sequences. However, since the malicious basic-block sequences are generated by randomly connecting nodes without edges, it is not guaranteed that all the malicious basic-blocks are executable. HeNet generates its training data from malware. Technically, HeNet could be used to detect any binary exploits, but their experiment focuses on ROP attacks and achieves 100\% accuracy. This shows that the source of data in the ROP problem does not need to be related to ROP attacks to produce very impressive results.\n\item \indication{} \textit{Representation learning seems not critical when solving ROP problems using Deep Learning.} \nMinimal processing of data in binary form seems to be enough to transform the data into a representation that is suitable for neural networks. Certainly, it is also possible to represent the binary instructions at a higher level, such as opcodes, or to use embedding learning. However, as stated in , it appears that the performance will not change much by doing so. The only benefit of representing input data at a higher level is to reduce irrelevant information, but it seems that the neural network by itself is good enough at extracting features.\n\item \indication{} \textit{The choice of neural network architecture does not have much influence on the effectiveness of defending ROP attacks.}\nBoth HeNet and DeepCheck utilize standard DNNs and achieved comparable results on ROP problems. One can infer that the input data can be easily processed by neural networks, and the features can be easily detected after proper pre-processing. 
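As a concrete illustration of the minimal pre-processing discussed above, the half-byte encoding attributed to DeepCheck (each 4-bit nibble mapped to a decimal value in 0-15) can be sketched as follows; the helper name and the example bytes are hypothetical:

```python
def bytes_to_nibbles(seq):
    # Split each byte of an instruction sequence into its high and low
    # 4-bit halves, yielding integers in 0-15, mirroring the half-byte
    # preprocessing described for DeepCheck (a sketch, not its code).
    out = []
    for b in seq:
        out.append(b >> 4)     # high nibble
        out.append(b & 0x0F)   # low nibble
    return out

# 0xC3 (ret) and 0x5F (pop rdi) become four nibble values.
vec = bytes_to_nibbles(bytes([0xC3, 0x5F]))
```

The resulting vector feeds a fully-connected input layer directly, with no learned representation in between.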
\n\end{itemize} \nIt is not surprising that researchers are not very interested in representation learning for ROP problems, as stated in Observation~4.\ref{rop:obs1}. \nSince ROP attacks focus on gadget chains, it is straightforward for researchers to choose the gadgets as their training data directly.\nIt is easy to map the data into a numerical representation with minimal processing. An example is that one can map a binary executable to its hexadecimal ASCII representation, which could be a good representation for a neural network.\nInstead, researchers focus more on data acquisition and generation. In ROP problems, the amount of data is very limited. Unlike malware and logs, ROP payloads normally only contain addresses rather than code, and these addresses do not contain any information without the instructions at the corresponding addresses. It is thus meaningless to collect all the payloads. To the best of our knowledge, all the previous works use picked instruction sequences rather than payloads as their training data, even though they are hard to collect.\n{\color{myblack}", "id": "32365617-f623-49b8-900b-d6d6aa7866c0", "level": "subsection", "origin_cites_number": 5, "parent_id": "2b839e56-71e1-4fc1-8d17-ba5b1a399e32", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in defending ROP attacks" ], [ "subsection", "Key findings from a closer look" ] ], "subsections": [], "title": "Key findings from a closer look" }, { "cite_extract_rate": 0, "cites": [], "content": "Even though the Deep Learning based method no longer faces the challenge of designing a very complex fine-grained CFG,\nit suffers from a limited number of data sources.\nGenerally, Deep Learning based methods require lots of training data.\nHowever, real-world malicious data for the ROP attack are very hard to find, \nbecause, compared with benign data, malicious data need to be carefully 
crafted and there is no existing database that collects all the ROP attacks. \nWithout a sufficiently representative training set, the accuracy of the trained model cannot be guaranteed.\n}", "id": "f75ed293-f4d1-4fc0-8785-900fbb7f8d7a", "level": "subsection", "origin_cites_number": 0, "parent_id": "2b839e56-71e1-4fc1-8d17-ba5b1a399e32", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in defending ROP attacks" ], [ "subsection", "Discussion" ] ], "subsections": [], "title": "Discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:cfi}", "id": "140dccf7-ea9a-4de7-9296-5070c5f2b2c7", "level": "section", "origin_cites_number": 0, "parent_id": "825459b6-d132-4d0c-9bfb-51997111ee0c", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in achieving CFI" ] ], "subsections": [ "8a9bfe29-ab05-420c-8d3e-6315c4c81104", "c1dded6e-8a6b-4620-9d13-3dae3c743a0f", "4ceced50-465a-46a2-8dec-6e876e45ca6c" ], "title": "A closer look at applications of Deep Learning in achieving CFI" }, { "cite_extract_rate": 0, "cites": [], "content": "The basic ideas of control-flow integrity (CFI) techniques, proposed by Abadi in 2005, can be dated back to 2002, when Kiriansky and his fellow researchers proposed an idea called program shepherding, a method of monitoring the execution flow of a program when it is running by enforcing some security policies. The goal of CFI is to detect and prevent control-flow hijacking attacks, by restricting every critical control flow transfer to a set that can only appear in correct program executions, according to a pre-built CFG. 
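Conceptually, the CFI policy just described reduces to a lookup: a critical control transfer is allowed only if it corresponds to an edge of the pre-built CFG. A minimal sketch, with a made-up edge table and hypothetical addresses:

```python
# Allowed transfer targets per source site, as would be derived from a
# pre-built CFG; all addresses here are hypothetical.
CFG_EDGES = {
    0x401000: {0x401200, 0x401400},  # indirect call site -> valid entries
    0x4012F0: {0x401010},            # return site -> valid return target
}

def cfi_check(src, dst):
    # Permit a critical control transfer only if (src -> dst) is an edge
    # that can appear in a correct execution according to the CFG.
    return dst in CFG_EDGES.get(src, set())
```

A transfer whose target lies outside the edge set, for example a jump into a gadget, is rejected; the difficulty the section goes on to discuss is that the edge set itself is hard to make both complete and precise.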
Traditional CFI techniques typically leverage some knowledge, gained from either dynamic or static analysis of the target program, combined with some code instrumentation methods, to ensure the program runs on a correct track. \n{\color{myblack}However, the problems of traditional CFI are: (1) existing CFI implementations are not compatible with some important code features~; \n(2) CFGs generated by static, dynamic or combined analysis cannot always be precisely completed due to some open problems~; \n(3) there always exists a certain level of compromise between accuracy, performance overhead and other important properties~.}\nRecent research has proposed to apply Deep Learning to detecting control flow violations. Their results show that, compared with traditional CFI implementations, the security coverage and scalability were enhanced in such a fashion~. \nTherefore, we argue that Deep Learning could be another approach which deserves more attention from CFI researchers who aim at achieving control-flow integrity more efficiently and accurately.\nIn this section, we will review the very recent three representative papers that use Deep Learning for achieving CFI. Among the three, two representative papers are already summarized phase-by-phase in Table \ref{Table:Summary}. We refer interested readers to Table~\ref{Table:Summary} for a concise overview of those two papers. \nOur review will be centered around the three questions described in Section~\ref{threequestions}. In the remainder of this section, we will first provide a set of observations, and then the indications. 
Finally, we provide some general remarks.", "id": "8a9bfe29-ab05-420c-8d3e-6315c4c81104", "level": "subsection", "origin_cites_number": 8, "parent_id": "140dccf7-d2f9-4296-9296-5070c5f2b2c7", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in achieving CFI" ], [ "subsection", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "From a close look at the very recent applications using Deep Learning for achieving control-flow integrity, we observed the following: \n\begin{itemize}[label={}]\n\item \observation{} \textit{None of the related works realizes preventive\footnote{We refer readers to~ which systemizes the knowledge of protections by CFI schemes.} prevention of control-flow violations.}\nAfter a thorough literature search, we observed that security researchers are quite behind the trend of applying Deep Learning techniques to solve security problems. We found only one paper that uses Deep Learning techniques to directly enhance the performance of CFI~. This paper leveraged Deep Learning to detect document malware by checking the program's execution traces generated by hardware. Specifically, the CFI violations were checked in an offline mode. So far, no work has realized Just-In-Time checking of a program's control flow.\nIn order to provide more insightful results, in this section we do not narrow our focus to CFI schemes that detect attacks at run-time, but extend our scope to papers that make good use of control-flow-related data, combined with Deep Learning techniques. In one work, researchers used a self-constructed instruction-level CFG to detect program defects. In another work, researchers used a lazy-binding CFG to detect sophisticated malware~.
\n\item \observation{} \textit{Diverse raw data were used for evaluating CFI solutions.}\nIn all surveyed papers, two kinds of control-flow-related data are used: program instruction sequences and CFGs. Barnum et al.~ employed statically and dynamically generated instruction sequences acquired by program disassembly and Intel\textsuperscript{\textregistered} Processor Trace. CNNoverCFG~ used a self-designed algorithm to construct an instruction-level control-flow graph. Minh Hai Nguyen et al.~ used their proposed lazy-binding CFG to reflect the behavior of malware DEC.\n\item \observation{} \textit{All the papers in our survey adopted Phase~\ref{phase2}.}\nAll the related papers in our survey employed Phase~\ref{phase2} to process their raw data before sending them into Phase~\ref{phase3}. In Barnum~, the instruction sequences from program run-time tracing were sliced into basic blocks. Then, they assigned each basic block a unique basic-block ID (BBID). Finally, due to the nature of control-flow hijacking attacks, they selected the sequences ending with an indirect branch instruction (e.g., indirect call/jump, return, and so on) as the training data. In CNNoverCFG~, each instruction in the CFG was labeled with its attributes from multiple perspectives, such as opcode, operands, and the function it belongs to. The training data are sequences generated by traversing the attributed control-flow graph. Nguyen and others~ converted the lazy-binding CFG to the corresponding adjacency matrix and treated the matrix as an image as their training data.\n\item \observation{} \textit{None of the papers in our survey adopted Phase~\ref{phase3}.}\nWe observed that none of the papers we surveyed adopted Phase~\ref{phase3}. Instead, they directly adopted a numerical representation as their training data. Specifically, Barnum grouped the instructions into basic blocks, then represented basic blocks with uniquely assigned IDs.
In CNNoverCFG~, each instruction in the CFG was represented by a vector associated with its attributes. Nguyen and others directly used the hashed value of the bit-string representation. \n\item \observation{} \textit{Various Phase~\ref{phase4} models were used.}\nBarnum utilized the BBID sequence to monitor the execution flow of the target program, which is sequence-type data. Therefore, they chose an LSTM architecture to better learn the relationships between instructions. In the other two papers, they trained a CNN and a directed-graph-based CNN to extract information from a control-flow graph and an image, respectively.\n\end{itemize}\nThe above observations seem to indicate the following indications: \n\begin{itemize}[label={}]\n\item \indication{} \textit{None of the existing works achieves Just-In-Time CFI violation detection.} \nIt is still a challenge to tightly embed a Deep Learning model in program execution. All existing works adopted lazy checking~\textendash~checking the program's execution trace after its execution.\n\item \indication{} \textit{There is no unified opinion on how to generate malicious samples.}\nData are hard to collect for control-flow hijacking attacks. The researchers must carefully craft malicious samples. \nIt is not clear whether the ``handcrafted'' samples can reflect the nature of the control-flow hijacking attack.
\n\item \indication{*} \textit{The choice of methods in Phase~\ref{phase2} is based on researchers' security domain knowledge.}\n\end{itemize}\n{\color{myblack}", "id": "c1dded6e-8a6b-4620-9d13-3dae3c743a0f", "level": "subsection", "origin_cites_number": 4, "parent_id": "140dccf7-d2f9-4296-9296-5070c5f2b2c7", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in achieving CFI" ], [ "subsection", "Key findings from a closer look" ] ], "subsections": [], "title": "Key findings from a closer look" }, { "cite_extract_rate": 0, "cites": [], "content": "The strength of using deep learning to solve CFI problems is that it can avoid the complicated process of developing algorithms to build acceptable CFGs for the protected programs.\n Compared with the traditional approaches, \n the DL-based method could spare CFI designers from \n studying the language features of the targeted program and could also avoid the open problem (pointer analysis) in control-flow analysis. \n Therefore, DL-based CFI provides us a more generalized, scalable, and secure solution. \n However, since using DL for CFI is still at an early stage, \n it remains unclear which kinds of control-flow-related data are more effective in this research area.
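As a concrete illustration of one of the control-flow data representations discussed in this section, a CFG can be flattened into an adjacency matrix and treated as a single-channel image for a CNN; the graph and node ordering below are hypothetical examples, not taken from any surveyed paper:

```python
# Hedged sketch: encoding a hypothetical CFG as an adjacency-matrix "image",
# in the spirit of the surveyed approach that feeds such matrices to a CNN.
edges = [("entry", "check"), ("check", "ok"), ("check", "fail"),
         ("ok", "exit"), ("fail", "exit")]
nodes = sorted({n for edge in edges for n in edge})   # fixed node ordering
index = {n: i for i, n in enumerate(nodes)}

# One grayscale "pixel" per node pair: 255 where an edge exists, 0 elsewhere.
matrix = [[0] * len(nodes) for _ in range(len(nodes))]
for src, dst in edges:
    matrix[index[src]][index[dst]] = 255

print(len(matrix), "x", len(matrix[0]))  # a 5 x 5 single-channel "image"
```

The fixed node ordering matters: without it, the same CFG could yield different images, which is one reason such representations remain hard to compare.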
\n Additionally, applying DL in real-time control-flow violation detection remains an untouched area and needs further research.\n}", "id": "4ceced50-465a-46a2-8dec-6e876e45ca6c", "level": "subsection", "origin_cites_number": 0, "parent_id": "140dccf7-d2f9-4296-9296-5070c5f2b2c7", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in achieving CFI" ], [ "subsection", "Discussion" ] ], "subsections": [], "title": "Discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:network}", "id": "489e2f98-db44-4e56-96fb-648e2b5625b0", "level": "section", "origin_cites_number": 0, "parent_id": "825459b6-d132-4d0c-9bfb-51997111ee0c", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in defending network attacks" ] ], "subsections": [ "5a44217d-8be6-48cc-a51f-f6f1d3cf701f", "40d2e6f3-afe7-4f4a-b9e1-92d27481af19", "9c2f52d4-6488-42e7-9a94-df1335aac208" ], "title": "A closer look at applications of Deep Learning in defending network attacks" }, { "cite_extract_rate": 0, "cites": [], "content": "Network security is becoming more and more important as we increasingly depend on networks for our daily lives, work, and research. Some common network attack types include probe, denial of service (DoS), remote-to-local (R2L), etc. Traditionally, people try to detect these attacks using signatures, rules, and unsupervised anomaly detection algorithms.
\n{\color{myblack}However, signature-based methods can be easily fooled by slightly changing the attack payload; rule-based methods need experts to regularly update rules; and unsupervised anomaly detection algorithms tend to raise lots of false positives.}\nRecently, people have been trying to apply Deep Learning methods to network attack detection.\nIn this section, we will review the seven very recent representative works that use Deep Learning for defending network attacks. build neural networks for multi-class classification, whose class labels include one benign label and multiple malicious labels for different attack types. ignores normal network activities and proposes a parallel cross convolutional neural network (PCCN) to classify the type of malicious network activities. applies Deep Learning to detecting a specific attack type, the distributed denial of service (DDoS) attack. explores both binary classification and multi-class classification for benign and malicious activities. Among these seven works, we select two representative works~ and summarize the main aspects of their approaches regarding whether the four phases exist in their works, and what exactly they do in each phase if it exists. We direct interested readers to Table~\ref{Table:Summary} for a concise overview of these two works.\nOur review will be centered around the three questions described in Section~\ref{threequestions}. In the remainder of this section, we will first provide a set of observations, and then we provide the indications.
Finally, we provide some general remarks.", "id": "5a44217d-8be6-48cc-a51f-f6f1d3cf701f", "level": "subsection", "origin_cites_number": 7, "parent_id": "489e2f98-db44-4e56-96fb-648e2b5625b0", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in defending network attacks" ], [ "subsection", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "From a close look at the very recent applications using Deep Learning for solving network attack challenges, we observed the following: \n\begin{itemize}[label={}]\n\item \observation{} \textit{All the seven works in our survey used public datasets, such as UNSW-NB15~ and CICIDS2017~.} \nThe public datasets were all generated in test-bed environments, with unbalanced simulated benign and attack activities. For attack activities, the dataset providers launched multiple types of attacks, and the numbers of malicious data for those attack activities were also unbalanced. \n\item \observation{} \textit{The public datasets were given in one of two data formats, i.e., PCAP and CSV.} \nOne was raw PCAP or parsed CSV format, containing network packet-level features, and the other was also CSV format, containing network flow-level features, which showed the statistical information of many network packets. Out of all the seven works, some used packet information as raw inputs, others used flow information as raw inputs, and some explored both cases.\n\item \observation{} \textit{In order to parse the raw inputs, preprocessing methods, including one-hot vectors for categorical texts, normalization of numeric data, and removal of unused features/data samples, were commonly used.} \nCommonly removed features include IP addresses and timestamps. also removed port numbers from used features.
By doing this, they claimed that they could ``avoid over-fitting and let the neural network learn characteristics of packets themselves''. One outlier was that, when using packet-level features in one experiment, blindly chose the first 50 bytes of each network packet without any feature-extraction process and fed them into the neural network.\n\item \observation{} \textit{Using image representation improved the performance of security solutions using Deep Learning.}\nAfter preprocessing the raw data, some works transformed the data into an image representation, while others directly used the original vectors as input data. Also, one work explored both cases and reported better performance using the image representation.\n\item \observation{} \textit{None of the seven surveyed works considered representation learning.} \nAll the seven surveyed works belonged to class 1 shown in Figure~\ref{fig:dec}. They either directly fed the processed vectors into the neural networks, or changed the representation without explanation. One research work~ provided a comparison of two different representations (vectors and images) for the same type of raw input. However, the other works applied different preprocessing methods in Phase~\ref{phase2}. That is, since the different preprocessing methods generated different feature spaces, it was difficult to compare the experimental results. \n\item \observation{} \textit{Binary classification models showed better results in most experiments.}\nAmong all the seven surveyed works, some focused on one specific attack type and only did binary classification to classify whether the network traffic was benign or malicious. Others included more attack types and did multi-class classification to classify the type of malicious activities, and some explored both cases.
As for multi-class classification, the accuracy for some classes was good, while the accuracy for other classes, usually classes with much fewer data samples, suffered up to 20\% degradation.\n\item \observation{} \textit{Data representation influenced the choice of neural network model.} \n\end{itemize}\nThe above observations seem to indicate the following indications: \n\begin{itemize}[label={}]\n\item \indication{} \textit{All works in our survey adopt some kind of preprocessing method in Phase~\ref{phase2}, because the raw data provided in the public datasets are either not ready for neural networks, or of too low quality to be directly used as data samples.}\nPreprocessing methods can help increase neural network performance by improving the quality of the data samples. Furthermore, by reducing the feature space, preprocessing can also improve the efficiency of neural network training and testing. Thus, Phase~\ref{phase2} should not be skipped; if it is, the performance of the neural network is expected to go down considerably.\n\item \indication{} \textit{Although Phase~\ref{phase3} is not employed in any of the seven surveyed works, none of them explains the reason for it. Also, none of them takes representation learning into consideration.}\n\item \indication{} \textit{Because no work uses representation learning, its effectiveness is not well studied.} \nAmong other factors, it seems that the choice of preprocessing methods has the largest impact, because it directly affects the data samples fed to the neural network.\n\item \indication{} \textit{There is no guarantee that CNN also works well on images converted from network features.} \nSome works that use the image data representation use CNN in Phase~\ref{phase4}.
Although CNN has been proven to work well on image classification problems in recent years, there is no guarantee that it also works well on images converted from network features.\n\end{itemize}\nFrom the observations and indications above, we hereby present two recommendations: (1) Researchers can try to generate their own datasets for the specific network attack they want to detect. As stated, the public datasets have highly unbalanced numbers of data samples for different classes. Doubtlessly, such unbalance is the nature of real-world network environments, in which normal activities are the majority, but it is not good for Deep Learning. One work tries to solve this problem by oversampling the malicious data, but it is better to start with a balanced dataset. (2) Representation learning should be taken into consideration. Some possible ways to apply representation learning include: (a) applying the word2vec method to packet binaries and to categorical numbers and texts; (b) using K-means as a one-hot vector representation instead of randomly encoding texts. We suggest that any change of data representation be better justified by explanations or comparison experiments.\n{\color{myblack}", "id": "40d2e6f3-afe7-4f4a-b9e1-92d27481af19", "level": "subsection", "origin_cites_number": 9, "parent_id": "489e2f98-db44-4e56-96fb-648e2b5625b0", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in defending network attacks" ], [ "subsection", "Key findings from a closer look" ] ], "subsections": [], "title": "Key findings from a closer look" }, { "cite_extract_rate": 0, "cites": [], "content": "One critical challenge in this field is the lack of high-quality datasets suitable for applying deep learning. \nAlso, there is no agreement on how to apply domain knowledge to training deep learning models for network security problems.
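The preprocessing steps that recur across the surveyed works (dropping identifier fields, one-hot encoding categorical values, and min-max normalizing numeric values) can be sketched as follows; the flow-record fields and values are hypothetical, not taken from any particular dataset:

```python
# Hedged sketch of the recurring preprocessing steps (all field names are
# hypothetical): drop identifiers, one-hot categoricals, min-max numerics.
records = [
    {"src_ip": "10.0.0.1", "proto": "tcp", "duration": 1.5, "bytes": 300},
    {"src_ip": "10.0.0.2", "proto": "udp", "duration": 0.5, "bytes": 80},
]

DROP = {"src_ip"}                                  # identifiers, removed
CATEGORICAL = {"proto": ["tcp", "udp", "icmp"]}    # one-hot encoded
NUMERIC = ["duration", "bytes"]                    # min-max normalized

# min-max statistics over the whole dataset
lo = {f: min(r[f] for r in records) for f in NUMERIC}
hi = {f: max(r[f] for r in records) for f in NUMERIC}

def to_vector(rec):
    rec = {k: v for k, v in rec.items() if k not in DROP}  # drop identifiers
    vec = []
    for field, values in CATEGORICAL.items():              # one-hot encoding
        vec += [1.0 if rec[field] == v else 0.0 for v in values]
    for field in NUMERIC:                                  # min-max scaling
        span = (hi[field] - lo[field]) or 1.0
        vec.append((rec[field] - lo[field]) / span)
    return vec

print(to_vector(records[0]))  # [1.0, 0.0, 0.0, 1.0, 1.0]
```

Note that the resulting feature space depends entirely on these choices, which is why works using different preprocessing are hard to compare.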
\nResearchers have been using different pre-processing methods, data representations and model types, \nbut few of them have enough explanation on why such methods/representations/models are chosen, especially for data representation.\n}", "id": "9c2f52d4-6488-42e7-9a94-df1335aac208", "level": "subsection", "origin_cites_number": 0, "parent_id": "489e2f98-db44-4e56-96fb-648e2b5625b0", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in defending network attacks" ], [ "subsection", "Discussion" ] ], "subsections": [], "title": "Discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:malware}", "id": "6ae1eea5-7d10-4735-bfd3-ef2d1286e877", "level": "section", "origin_cites_number": 0, "parent_id": "825459b6-d132-4d0c-9bfb-51997111ee0c", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in malware classification" ] ], "subsections": [ "4fef8609-2a73-4dd1-bd78-9ca01992a913", "61b44751-fc07-481a-9c8d-113f52966cad", "748c94fe-9360-43db-917c-2b4348d57075", "97e6b700-3b7a-4ab7-98d6-2b09214a3c77", "2325c563-de2f-4499-a9b4-54f36b2f6906", "44d604f2-3cb2-4a3b-a80a-fca37bb7619f" ], "title": "A closer look at applications of Deep Learning in malware classification" }, { "cite_extract_rate": 0.08333333333333301, "cites": [ 3859 ], "content": "The goal of malware classification is to identify malicious behaviors in software with static and dynamic features like control-flow graph and system API calls. \nMalware and benign programs can be collected from open datasets and online websites. \n{\\color{myblack}\nBoth the industry and the academic communities have provided approaches to detect malware with static and dynamic analyses. 
\nTraditional methods such as behavior-based signatures, dynamic taint tracking, and static data-flow analysis require experts to manually investigate unknown files. \nHowever, those hand-crafted signatures are not sufficiently effective because attackers can rewrite and reorder the malware. \nFortunately, neural networks can automatically detect large-scale malware variants with superior classification accuracy.\n}\nIn this section, we will review the twelve very recent representative works that use Deep Learning for malware classification. selects three different kinds of static features to classify malware. also use static features from the PE files to classify programs. extracts behavioral feature images using an RNN to represent the behaviors of the original programs. transforms malicious behaviors using representation learning without a neural network. explores an RNN model with API call sequences as programs' features. skip Phase~\ref{phase2} by directly transforming the binary file into an image to classify the file. applies dynamic features to analyze malicious behaviors. combines static features and dynamic features to represent programs' features. Among these works, we select two representative works and identify the four phases in their works, as shown in Table~\ref{Table:Summary}. \nOur review will be centered around the three questions described in Section~\ref{threequestions}. In the remainder of this section, we will first provide a set of observations, and then we provide the indications.
Finally, we provide some general remarks.", "id": "4fef8609-2a73-4dd1-bd78-9ca01992a913", "level": "subsection", "origin_cites_number": 12, "parent_id": "6ae1eea5-7d10-4735-bfd3-ef2d1286e877", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in malware classification" ], [ "subsection", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.08333333333333301, "cites": [ 3859 ], "content": "From a close look at the very recent applications using Deep Learning for solving malware classification challenges, we observed the following: \n\begin{itemize}[label={}]\n\item \observation{} \textit{Features selected in malware classification were grouped into three categories: static features, dynamic features, and hybrid features.}\nTypical static features include metadata, PE import features, byte/entropy features, string features, and assembly opcode features derived from the PE files. De La Rosa, Kilgallon, et al. took three kinds of static features: byte-level, basic-level (strings in the file, the metadata table, and the import table of the PE header), and assembly-level features. Some works directly considered binary code as static features.\nDifferent from static features, dynamic features were extracted by executing the files to retrieve their behaviors during execution. The behaviors of programs, including API function calls, their parameters, files created or deleted, websites and ports accessed, etc., were recorded by a sandbox as dynamic features. The process behaviors, including operation names and their result codes, were extracted. The process memory, tri-grams of system API calls, and one corresponding input parameter were chosen as dynamic features. An API call sequence for an APK file was another representation of dynamic features~. \nStatic features and dynamic features were combined as hybrid features~.
For static features, Xu and others in~ used permissions, networks, calls, providers, etc. For dynamic features, they used system call sequences.\n\item \observation{} \textit{In most works, Phase~\ref{phase2} was inevitable because extracted features needed to be vectorized for Deep Learning models.}\nThe one-hot encoding approach was frequently used to vectorize features. Bag-of-words (BoW) and \textit{n}-grams were also considered to represent features. Some works brought the concept of word frequency from NLP to convert the sandbox file to fixed-size inputs. Hashing features into a fixed-size vector was used as an effective method to represent features. A byte histogram based on byte analysis and a byte-entropy histogram with a sliding-window method were considered~. In , De La Rosa and others embedded strings by hashing the ASCII strings to a fixed-size feature vector. For assembly features, they extracted four different levels of granularity: operation level (instruction-flow graph), block level (control-flow graph), function level (call graph), and global level (graphs summarized). Bigram, trigram, and four-gram vectors and \textit{n}-gram graphs were used for the hybrid features. \n\item \observation{} \textit{Most Phase~\ref{phase3} methods were classified into class 1.}\nFollowing the classification tree shown in Figure~\ref{fig:dec}, most works were classified into class 1, except two works, which belonged to class 3. To reduce the input dimension, Dahl et al. performed feature selection using mutual information and random projection. Tobiyama et al.
generated behavioral feature images using an RNN.\n\item \observation{} \textit{After extracting features, two kinds of neural network architectures, i.e., one single neural network and multiple neural networks with a combined loss function, were used.}\nHierarchical structures, like convolutional layers, fully connected layers and classification layers, were used to classify programs. A deep stack of denoising autoencoders was also introduced to learn programs' behaviors. De La Rosa and others trained three different models with different features to compare which static features are relevant for the classification model. Some works investigated LSTM models for sequential features~. \nTwo networks with different features as inputs were used for malware classification by combining their outputs with a dropout layer and an output layer. In , one network transformed PE metadata and import features using feedforward neurons, while another leveraged convolutional network layers with opcode sequences. Lifan Xu et al. constructed a few networks and combined them using a two-level multiple kernel learning algorithm.\n\end{itemize}\nThe above observations seem to indicate the following indications: \n\begin{itemize}[label={}]\n\item \indication{} \textit{Except for two works that transform binaries into images, most surveyed works need to adopt methods to vectorize extracted features.} \nThe vectorization methods should not only keep the syntactic and semantic information in features, but also consider the definition of the Deep Learning model. \n\item \indication{} \textit{Only limited works have shown how to transform features using representation learning.} \nBecause some works assume that dynamic and static sequences, like API calls and instructions, have syntactic and semantic structure similar to natural language, some representation learning techniques like word2vec may be useful in malware detection.
In addition, for the control-flow graph, call graph and other graph representations, graph embedding is a potential method to transform those features.\n\end{itemize} \n{\color{myblack}", "id": "61b44751-fc07-481a-9c8d-113f52966cad", "level": "subsection", "origin_cites_number": 12, "parent_id": "6ae1eea5-7d10-4735-bfd3-ef2d1286e877", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in malware classification" ], [ "subsection", "Key findings from a closer look" ] ], "subsections": [], "title": "Key findings from a closer look" }, { "cite_extract_rate": 0, "cites": [], "content": "Though several pieces of research have been done on malware detection using Deep Learning, \nit is hard to compare their methods and performances because of two uncertainties in their approaches. \nFirst, the Deep Learning model is a black box; researchers cannot detail which kinds of features the model learned or explain why their model works. \nSecond, feature selection and representation affect the model's performance. Because they do not use the same datasets, \nresearchers cannot prove their approaches~\textendash~including the selected features and the Deep Learning model~\textendash~are better than others. \nThe reason why few researchers use open datasets is that existing open malware datasets are out of date and limited.
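As an illustration of one of the vectorization styles surveyed in this section, n-grams over an API call sequence can be hashed into a fixed-size feature vector; the API names below are hypothetical examples, not drawn from any surveyed dataset:

```python
# Hedged sketch: hashing n-grams of a hypothetical API call sequence into a
# fixed-size feature vector, one of the vectorization styles surveyed above.
import hashlib

def ngram_hash_vector(calls, n=3, dim=16):
    """Count each n-gram in a bucket chosen by a stable hash of the n-gram."""
    vec = [0] * dim
    for i in range(len(calls) - n + 1):
        gram = "|".join(calls[i:i + n])
        # md5 gives a hash that is stable across runs, unlike Python's hash()
        bucket = int(hashlib.md5(gram.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1
    return vec

trace = ["OpenFile", "ReadFile", "Connect", "Send", "CloseFile"]
vec = ngram_hash_vector(trace)
print(len(vec), sum(vec))  # 16 buckets holding the 3 tri-gram counts
```

The fixed dimension makes traces of any length comparable, at the cost of possible hash collisions between different n-grams.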
\nAlso, researchers need to crawl benign programs from app stores, so their raw programs will be diverse.\n}\n\section{A closer look at applications of Deep Learning in \nsystem-event-based anomaly detection}\n\label{sec:anomaly}", "id": "748c94fe-9360-43db-917c-2b4348d57075", "level": "subsection", "origin_cites_number": 0, "parent_id": "6ae1eea5-7d10-4735-bfd3-ef2d1286e877", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in malware classification" ], [ "subsection", "Discussion" ] ], "subsections": [], "title": "Discussion" }, { "cite_extract_rate": 0.16666666666666602, "cites": [ 8019 ], "content": "System logs record significant events at various critical points, which can be used to debug the system's performance issues and failures. \nMoreover, log data are available in almost all computer systems and are a valuable resource for understanding system status. \n{\color{myblack}\nThere are a few challenges in anomaly detection based on system logs. \nFirstly, the raw log data are unstructured, while their formats and semantics can vary significantly. \nSecondly, logs are produced by concurrently running tasks. \nSuch concurrency makes it hard to apply workflow-based anomaly detection methods. \nThirdly, logs contain rich information and complex types, including text, real values, IP addresses, timestamps, and so on. \nThe information contained in each log also varies. \nFinally, there are massive numbers of logs in every system. \nMoreover, each anomaly event usually incorporates a large number of logs generated over a long period. \n}\nRecently, a large number of scholars have employed deep learning techniques \nto detect anomaly events in system logs and diagnose system failures. \nThe raw log data are unstructured, while their formats and semantics can vary significantly.
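One common way to impose structure on such free-form logs is to abstract variable fields into templates, so that lines differing only in parameters collapse to the same event type; the log lines and regular expressions below are hypothetical examples, not the parser of any surveyed system:

```python
# Hedged sketch: abstracting variable fields of hypothetical log lines into
# templates, a common first step before feeding logs to a DL model.
import re

PATTERNS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),   # IPv4 addresses
    (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<HEX>"),           # hex values
    (re.compile(r"\b\d+\b"), "<NUM>"),                      # decimal numbers
]

def to_template(line):
    # Apply IP/HEX first so plain-number rewriting cannot eat their digits.
    for pattern, token in PATTERNS:
        line = pattern.sub(token, line)
    return line

a = to_template("Receiving block 561 from 10.0.0.7 size 6710")
b = to_template("Receiving block 804 from 10.0.0.9 size 9124")
print(a)          # Receiving block <NUM> from <IP> size <NUM>
print(a == b)     # both lines collapse to the same template: True
```

Real parsers such as Spell or Drain infer templates from the data rather than from hand-written patterns, but the output they produce is of this same shape.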
\nTo detect anomaly events, the raw logs usually should be parsed into structured data; \nthe parsed data can then be transformed into a representation that supports an effective deep learning model. \nFinally, anomaly events can be detected by a deep-learning-based classifier or predictor.\nIn this section, we will review the six very recent representative papers that use deep learning for system-event-based anomaly detection. DeepLog utilizes an LSTM to model the system log as a natural language sequence, which automatically learns log patterns from normal events, and detects anomalies when log patterns deviate from the trained model. LogAnom employs Word2vec to extract semantic and syntactic information from log templates. Moreover, it uses sequential and quantitative features simultaneously. Desh uses an LSTM to predict node failures that occur in supercomputing systems from HPC logs. Andy Brown et al. presented RNN language models augmented with attention for anomaly detection in system logs. LogRobust uses FastText to represent the semantic information of log events, which can identify and handle unstable log events and sequences. Christophe Bertero et al. map log words to a high-dimensional metric space using Google's word2vec algorithm and take them as features for classification. Among these six papers, we select two representative works and summarize the four phases of their approaches. We direct interested readers to Table~\ref{Table:Summary} for a concise overview of these two works.\nOur review will be centered around the three questions described in Section~\ref{threequestions}. In the remainder of this section, we will first provide a set of observations, and then we provide the indications.
Finally, we provide some general remarks.", "id": "97e6b700-3b7a-4ab7-98d6-2b09214a3c77", "level": "subsection", "origin_cites_number": 6, "parent_id": "6ae1eea5-7d10-4735-bfd3-ef2d1286e877", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in malware classification" ], [ "subsection", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.125, "cites": [ 8019 ], "content": "From a close look at the very recent applications using deep learning for solving system-event-based anomaly detection challenges, we observed the following: \n\begin{itemize}[label={}]\n\item \observation{} \textit{Most of our surveyed papers evaluated their performance using public datasets.} \nBy the time we wrote this survey, only two works in used their private datasets. \n\item \observation{} \textit{Most works in this survey adopted Phase~\ref{phase2} when parsing the raw log data.} \nAfter reviewing the six works proposed recently, we found that five works employed parsing techniques, while only one work did not.\n\par DeepLog parsed the raw log into different log types using Spell, which is based on the longest common subsequence. Desh parsed the raw log into constant messages and variable components. Loganom parsed the raw log into different log templates using FT-Tree according to the frequent combinations of log words. Andy Brown et al. parsed the raw log into word and character tokenizations. LogRobust extracted its log events by abstracting away the parameters in the message. Christophe Bertero et al. considered logs as regular text without parsing.\n\item \observation{}\label{log:obs:3} \textit{Most works have considered and adopted Phase~\ref{phase3}.} \nAmong these six works, only DeepLog represented the parsed data using one-hot vectors without learning. Moreover, Loganom compared their results with DeepLog.
That is, DeepLog belongs to class 1 and Loganom belongs to class 4 in Figure \\ref{fig:dec}, while the other four works fall into class 3.\n\\par The four works used word embedding techniques to represent the log data. Andy Brown et al. employed attention vectors to represent the log messages. \n\\par DeepLog employed the one-hot vector to represent the log type without learning. We conducted an experiment replacing the one-hot vector with trained word embeddings. \n\\item \\observation{} \\textit{Evaluation results were not compared using the same dataset.}\nDeepLog employed the one-hot vector to represent the log type without learning, i.e., it employed Phase~\\ref{phase2} without Phase~\\ref{phase3}. However, Christophe Bertero et al. considered logs as regular text without parsing, and used Phase~\\ref{phase3} without Phase~\\ref{phase2}. The precision of both methods is very high (greater than 95\\%). Unfortunately, the evaluations of the two methods used different datasets.\n\\item \\observation{}\\label{log:obs:5} \\textit{Most works employed LSTM in Phase~\\ref{phase4}.}\nFive works employed LSTM in Phase~\\ref{phase4}, while Christophe Bertero et al. tried different classifiers including naive Bayes, neural networks and random forest.\n\\end{itemize}\nThe above observations seem to indicate the following indications: \n\\begin{itemize}[label={}]\n\\item \\indication{} \\textit{Phase~\\ref{phase2} has a positive effect on accuracy if well-designed.}\nSince Christophe Bertero et al. consider logs as regular text without parsing, we can say that Phase~\\ref{phase2} is not strictly required. 
However, we can find that most of the works employed parsing techniques to extract structural information and remove useless noise.\n\\item \\indication{} \\textit{Most of the recent works use trained representations to represent parsed data.} \nAs shown in Table \\ref{tab:deeplog}, we can find that Phase~\\ref{phase3} is very useful and can improve detection accuracy.\n\\item \\indication{} \\textit{Phase~\\ref{phase2} and Phase~\\ref{phase3} cannot be skipped simultaneously.} \nNeither Phase~\\ref{phase2} nor Phase~\\ref{phase3} is individually required. However, all methods have employed Phase~\\ref{phase2} or Phase~\\ref{phase3}.\n\\item \\indication{} \\textit{Observation~8.\\ref{log:obs:3} indicates that the trained word embedding format can improve the anomaly detection accuracy, as shown in Table~\\ref{tab:deeplog}.}\n\\begin{center}\n\\normalsize\n \\begin{threeparttable}\n \\caption{Comparison between word embedding and one-hot representation.}\n \\label{tab:deeplog}\n\\begin{tabular}{p{3.5cm}||p{1.5cm}p{1.5cm}p{2cm}p{2cm}p{2cm}} \n\\hline \n\\hline \nMethod & FP~\\tnote{1} & FN~\\tnote{2} & Precision & Recall & F1-measure\\\\ \n\\hline \nWord Embedding~\\tnote{3} & 680 & 219 & 96.069\\% & 98.699\\% & 97.366\\% \\\\ \n\\hline \nOne-hot Vector~\\tnote{4} & 711 & 705 & 95.779\\% & 95.813\\% & 95.796\\% \\\\ \n\\hline \nDeepLog~\\tnote{5} & 833 & 619 & 95\\% & 96\\% & 96\\% \\\\ \n\\hline \n\\hline \n\\end{tabular} \n \\begin{tablenotes}\n \\item \\tnote{1}FP: false positive; \\tnote{2}FN: false negative; \\tnote{3}Word Embedding: log keys are embedded by Continuous Bag of Words; \\tnote{4}One-hot Vector: we reproduced the results according to DeepLog; \\tnote{5}DeepLog: original results presented in the paper.\n \\end{tablenotes}\n\\end{threeparttable}\n\\end{center}\n\\item \\indication{} \\textit{Observation~8.\\ref{log:obs:5} indicates that most of the works adopt LSTM to detect anomaly events.}\nWe can find that most of the works adopt LSTM to detect anomaly 
event, since log data can be considered as sequence and there can be lags of unknown duration between important events in a time series. LSTM has feedback connections, which can not only process single data points, but also entire sequences of data.\n\\end{itemize}\nAs our consideration, neither Phase~\\ref{phase2} nor Phase~\\ref{phase3} is required in system event-based anomaly detection. However, Phase~\\ref{phase2} can remove noise in raw data, and Phase~\\ref{phase3} can learn a proper representation of the data. Both Phase~\\ref{phase2} and Phase~\\ref{phase3} have a positive effect on anomaly detection accuracy. Since the event log is text data that we can't feed the raw log data into deep learning model directly, Phase~\\ref{phase2} and Phase~\\ref{phase3} can't be skipped simultaneously.\n{\\color{myblack}", "id": "2325c563-de2f-4499-a9b4-54f36b2f6906", "level": "subsection", "origin_cites_number": 8, "parent_id": "6ae1eea5-7d10-4735-bfd3-ef2d1286e877", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in malware classification" ], [ "subsection", "Key findings from a closer look" ] ], "subsections": [], "title": "Key findings from a closer look" }, { "cite_extract_rate": 0, "cites": [], "content": "Deep learning can capture the potentially nonlinear and high dimensional dependencies among log entries from the training data that correspond to abnormal events. \nIn that way, it can release the challenges mentioned above. \nHowever, it still suffers from several challenges. \nFor example, how to represent the unstructured data accurately and automatically without human knowledge. 
\n}", "id": "44d604f2-3cb2-4a3b-a80a-fca37bb7619f", "level": "subsection", "origin_cites_number": 0, "parent_id": "6ae1eea5-7d10-4735-bfd3-ef2d1286e877", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in malware classification" ], [ "subsection", "Discussion" ] ], "subsections": [], "title": "Discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:forensic}", "id": "b422094a-ee97-4a26-a7d7-a5e21d36fbf2", "level": "section", "origin_cites_number": 0, "parent_id": "825459b6-d132-4d0c-9bfb-51997111ee0c", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in solving memory forensics challenges" ] ], "subsections": [ "ed74da18-12e2-4bfa-bb44-42045c61fbee", "813fd68a-2956-4446-890c-dfcf57d31b98", "f9e62eaa-9f6d-4b05-9039-e20814eeb2bb" ], "title": "A closer look at applications of Deep Learning in solving memory forensics challenges" }, { "cite_extract_rate": 0, "cites": [], "content": "In the field of computer security, memory forensics is security-oriented forensic analysis of a computer’s memory dump. \nMemory forensics can be conducted against OS kernels, user-level applications, as well as mobile devices. \nMemory forensics outperforms traditional disk-based forensics because although secrecy attacks can erase their footprints on disk, \nthey would have to appear in memory . \nThe memory dump can be considered as a sequence of bytes, \nthus memory forensics usually needs to extract security semantic information from raw memory dump to find attack traces. \n{\\color{myblack} \nThe traditional memory forensic tools fall into two categories: signature scanning and data structure traversal. \nThese traditional methods usually have some limitations. 
\nFirstly, they need expert knowledge of the related data structures to create signatures or traversing rules. \nSecondly, attackers may directly manipulate data and pointer values in kernel objects to evade detection, \nand it then becomes even more challenging to create signatures and traversing rules that cannot be easily violated by malicious manipulations, system updates, and random noise. \nFinally, the high-efficiency requirement often sacrifices robustness. \nFor example, an efficient signature scan tool usually skips large memory regions that are unlikely to have the relevant objects and relies on simple but easily tamperable string constants. \nAn important clue may thus be hidden in the ignored regions.\n} \nIn this section, we will review four very recent representative works that use Deep Learning for memory forensics. DeepMem recognized the kernel objects from raw memory dumps by generating abstract representations of kernel objects with a graph-based Deep Learning approach. MDMF detected OS- and architecture-independent malware from memory snapshots with several pre-processing techniques, domain-unaware feature selection, and a suite of machine learning algorithms. MemTri predicted the likelihood of criminal activity in a memory image using a Bayesian network, based on evidence data artefacts generated by several applications. Dai et al. monitored the malware process memory and classified malware according to memory dumps, by transforming the memory dump into grayscale images and adopting a multi-layer perceptron as the classifier. \nAmong these four works, two representative works are already summarized phase-by-phase in Table 1. We direct interested readers to Table~\\ref{Table:Summary} for a concise overview of these two works.\nOur review will be centered around the three questions raised in Section~\\ref{threequestions}. In the remainder of this section, we will first provide a set of observations, and then we provide the indications. 
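The dump-to-grayscale-image conversion used by Dai et al. (and similarly by MDMF) can be sketched as follows; the row width and zero padding are our own illustrative choices rather than the exact parameters of those papers:

```python
def dump_to_image(dump: bytes, width: int = 16) -> list:
    """Reshape a raw memory dump into a 2-D grayscale 'image':
    each byte becomes one pixel intensity in [0, 255], and the
    tail is zero-padded so every row has the same width."""
    padded = dump + b"\x00" * (-len(dump) % width)
    return [list(padded[i:i + width]) for i in range(0, len(padded), width)]

image = dump_to_image(bytes(range(40)), width=16)  # 3 rows of 16 pixels
```

The resulting matrix can then be fed to an image classifier (a multi-layer perceptron or CNN, as in the surveyed works) without any signature or traversal rules.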
Finally, we provide some general remarks.", "id": "ed74da18-12e2-4bfa-bb44-42045c61fbee", "level": "subsection", "origin_cites_number": 4, "parent_id": "b422094a-ee97-4a26-a7d7-a5e21d36fbf2", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in solving memory forensics challenges" ], [ "subsection", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "From a close look at the very recent applications using Deep Learning for solving memory forensics challenges, we observed the followings: \n\\begin{itemize}[label={}]\n\\item \\observation{}\\label{mem:obs1} \\textit{Most methods used their own datasets for performance evaluation, while none of them used a public dataset.} \nDeepMem was evaluated on self-generated dataset by the authors, who collected a large number of diverse memory dumps, and labeled the kernel objects in them using existing memory forensics tools like Volatility. MDMF employed the MalRec dataset by Georgia Tech to generate malicious snapshots, while it created a dataset of benign memory snapshots running normal software.\nMemTri ran several Windows 7 virtual machine instances with self-designed suspect activity scenarios to gather memory images. Dai et al. built the Procdump program in Cuckoo sandbox to extract malware memory dumps. \nWe found that each of the four works in our survey generated their own datasets, while none was evaluated on a public dataset. \n\\item \\observation{}\\label{mem:obs2} \\textit{Among the four works, two works employed Phase~\\ref{phase2} while the other two works did not employ.}\nDeepMem devised a graph representation for a sequence of bytes, taking into account both adjacency and points-to relations, to better model the contextual information in memory dumps. 
MemTri first identified the running processes within the memory image that match the target applications, and then employed regular expressions to locate evidence artefacts in the memory image. MDMF and Dai et al. transformed the memory dump into an image directly.\n\\item \\observation{}\\label{mem:obs3} \\textit{Among the four works, only DeepMem employed Phase~\\ref{phase3}, for which it used an embedding method to represent a memory graph.} \nMDMF directly fed the generated memory images into the training of a CNN model. Dai et al. used the HOG feature descriptor for detecting objects, while MemTri extracted evidence artefacts as the input of a Bayesian network. In summary, DeepMem belonged to class 3 shown in Figure~\\ref{fig:dec}, while the other three works belonged to class 1 shown in Figure~\\ref{fig:dec}. \n\\item \\observation{}\\label{mem:obs4} \\textit{All four works employed different classifiers, even when the types of input data were the same.} \nDeepMem chose a fully connected network (FCN) model that has multi-layered hidden neurons with ReLU activation functions, followed by a softmax layer as the last layer. MDMF evaluated its performance on both traditional machine learning algorithms and Deep Learning approaches including CNN and LSTM. Its results showed that the accuracy of different classifiers did not differ significantly. MemTri employed a Bayesian network model that is designed with three layers, i.e., a hypothesis layer, a sub-hypothesis layer, and an evidence layer. Dai et al. used a multi-layer perceptron model including an input layer, a hidden layer and an output layer as the classifier. 
\n\\end{itemize}\nThe above observations seem to indicate the following indications:\n\\begin{itemize}[label={}]\n\\item \\indication{} \\textit{There is a lack of public datasets for evaluating the performance of different Deep Learning methods in memory forensics.}\nFrom Observation~9.\\ref{mem:obs1}, we find that none of the four works surveyed was evaluated on public datasets. \n\\item \\indication{} \\textit{From Observation~9.\\ref{mem:obs2}, we find that it is disputable whether one should employ Phase~\\ref{phase2} when solving memory forensics problems.}\nSince both MDMF and Dai et al. directly transformed a memory dump into an image, Phase~\\ref{phase2} is not required in these two works. However, since there is a large amount of useless information in a memory dump, we argue that appropriate preprocessing could improve the accuracy of the trained models. \n\\item \\indication{} \\textit{From Observation~9.\\ref{mem:obs3}, we find that Phase~\\ref{phase3} has received little attention in memory forensics.} \nMost works did not employ Phase~\\ref{phase3}. Among the four works, only DeepMem employed Phase~\\ref{phase3}, during which it used embeddings to represent a memory graph. The other three works did not learn any representations before training a Deep Learning model. \n\\item \\indication{} \\textit{For Phase~\\ref{phase4} in memory forensics, different classifiers can be employed.}\nWhich kind of classifier to use seems to be determined by the features used and their data structures. From Observation~9.\\ref{mem:obs4}, we find that the four works actually employed different kinds of classifiers even when the types of input data were the same. It is very interesting that MDMF obtained similar results with different classifiers including traditional machine learning and Deep Learning models. However, the other three works did not discuss why they chose a particular kind of classifier. 
\n\\end{itemize}\nSince a memory dump can be considered as a sequence of bytes, the data structure of a training data example is straightforward. If the memory dump is transformed into a simple form in Phase~\\ref{phase2}, it can be directly fed into the training process of a Deep Learning model, and as a result Phase~\\ref{phase3} can be ignored. However, if the memory dump is transformed into a complicated form in Phase~\\ref{phase2}, Phase~\\ref{phase3} could be quite useful in memory forensics. \nRegarding the answer for Question 3 at Section~\\ref{threequestions}, it is very interesting that during Phase~\\ref{phase4} different classifiers can be employed in memory forensics. Moreover, MDMF has shown that they can obtain similar results with different kinds of classifiers. Nevertheless, they also admit that with a larger amount of training data, the performance could be improved by Deep Learning. \n{\\color{myblack}", "id": "813fd68a-2956-4446-890c-dfcf57d31b98", "level": "subsection", "origin_cites_number": 4, "parent_id": "b422094a-ee97-4a26-a7d7-a5e21d36fbf2", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in solving memory forensics challenges" ], [ "subsection", "Key findings from a closer look" ] ], "subsections": [], "title": "Key findings from a closer look" }, { "cite_extract_rate": 0, "cites": [], "content": "An end-to-end manner deep learning model can learn the precise representation of memory dump automatically to release the requirement for expert knowledge. \nHowever, it still needs expert knowledge to represent data and attacker behavior. 
\nAttackers may also directly manipulate data and pointer values in kernel objects to evade detection.\n}", "id": "f9e62eaa-9f6d-4b05-9039-e20814eeb2bb", "level": "subsection", "origin_cites_number": 0, "parent_id": "b422094a-ee97-4a26-a7d7-a5e21d36fbf2", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in solving memory forensics challenges" ], [ "subsection", "Discussion" ] ], "subsections": [], "title": "Discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:fuzzing}", "id": "eb166b74-7da0-4abe-af7f-7cfe3d3d385a", "level": "section", "origin_cites_number": 0, "parent_id": "825459b6-d132-4d0c-9bfb-51997111ee0c", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in security-oriented fuzzing" ] ], "subsections": [ "896ebfd0-c54b-4625-834c-50838b67a7e3", "63c19e14-9f44-4182-a0ff-fa0ca6d47abe", "e9b0e749-e076-4b69-872a-5fbd8c4db1fd" ], "title": "A closer look at applications of Deep Learning in security-oriented fuzzing" }, { "cite_extract_rate": 0.5, "cites": [ 8014, 8017 ], "content": "Fuzzing of software security is one of the state of art techniques that people use to detect software vulnerabilities. The goal of fuzzing is to find all the vulnerabilities exist in the program by testing as much program code as possible. Due to the nature of fuzzing, this technique works best on finding vulnerabilities in programs that take in input files, like PDF viewers or web browsers. \nA typical workflow of fuzzing can be concluded as: given several seed input files, the fuzzer will mutate or fuzz the seed inputs to get more input files, with the aim of expanding the overall code coverage of the target program as it executes the mutated files. 
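The seed-mutate-measure workflow just described can be sketched as a toy loop; `run_target` below is our own stand-in for executing an instrumented program and collecting branch coverage, not a real harness:

```python
import random

def run_target(data: bytes) -> set:
    """Toy stand-in for the instrumented target: returns the set of
    'branch ids' this input exercises (invented logic for illustration)."""
    branches = {0}                          # entry branch, always hit
    if any(b >= 0x80 for b in data):
        branches.add(1)                     # shallow, easy-to-reach branch
    if b"BUG" in data:
        branches.add(2)                     # deep branch random flips rarely hit
    return branches

def fuzz(seeds, rounds=500, rng=None):
    """Minimal mutation-based fuzzing loop: flip one random byte of a
    corpus entry; keep the mutant only if it covers a new branch."""
    rng = rng or random.Random(0)
    corpus = [bytes(s) for s in seeds]
    covered = set()
    for s in corpus:
        covered |= run_target(s)
    for _ in range(rounds):
        mutant = bytearray(rng.choice(corpus))
        mutant[rng.randrange(len(mutant))] = rng.randrange(256)
        cov = run_target(bytes(mutant))
        if not cov <= covered:              # new coverage -> keep the mutant
            covered |= cov
            corpus.append(bytes(mutant))
    return covered, corpus

covered, corpus = fuzz([b"\x00AAA"])
```

A real fuzzer such as AFL replaces `run_target` with instrumented execution and uses far richer mutation operators; the deep branch above illustrates why blind byte flips often waste executions, which is exactly the redundancy problem discussed next.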
\n{\\color{myblack} Although there have already been various popular fuzzers, fuzzing still cannot bypass its problem of sometimes redundantly testing input files which cannot improve the code coverage rate. \nSome input files mutated by the fuzzer even cannot pass the well-formed file structure test.} Recent research has come up with ideas of applying Deep Learning in the process of fuzzing to solve these problems.\nIn this section, we will review the very recent four representative works that use Deep Learning for fuzzing for software security. Among the three, two representative works are already summarized phase-by-phase in Table \\ref{Table:Summary}. We direct interested readers to Table \\ref{Table:Summary} for a concise overview of those two works. \nOur review will be centered around three questions described in Section~\\ref{threequestions}. In the remaining of this section, we will first provide a set of observations, and then we provide the indications. Finally, we provide some general remarks.", "id": "896ebfd0-c54b-4625-834c-50838b67a7e3", "level": "subsection", "origin_cites_number": 4, "parent_id": "eb166b74-7da0-4abe-af7f-7cfe3d3d385a", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in security-oriented fuzzing" ], [ "subsection", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 8017, 8014 ], "content": "From a close look at the very recent applications using Deep Learning for solving security-oriented program analysis challenges, we observed the followings: \n\\begin{itemize}[label={}]\n \\item \\observation{} \\textit{Deep Learning has only been applied in mutation-based fuzzing.}\n Even though various of different fuzzing techniques, \n including symbolic execution based fuzzing~, tainted analysis based fuzzing~ and hybrid fuzzing~ have been proposed so far, we 
observed that all the works we surveyed employed Deep Learning methods to assist the most primitive form of fuzzing~\\textendash~mutation-based fuzzing.\n Specifically, they adopted Deep Learning to assist the fuzzing tool's input mutation. We found that they commonly did it in two ways: 1) training Deep Learning models to tell how to efficiently mutate the input to trigger more execution paths~;\n 2) training Deep Learning models to tell how to keep the mutated files compliant with the program's basic semantic requirements~.\n Besides, all three works trained different Deep Learning models for different programs,\n which means that knowledge learned from one program cannot be applied to other programs.\n \\item \\observation{} \\textit{Similarity among all the works in our survey existed when choosing the training samples in Phase I.}\n The works in this survey had a common practice, i.e., using the input files directly as training samples of the Deep Learning model. Learn\\&Fuzz used character-level PDF object sequences as training samples. Neuzz regarded input files directly as byte sequences and fed them into the neural network model. Mohit Rajpal et al. also used byte-level representations of input files as training samples.\n \\item \\observation{} \\textit{Difference between all the works in our survey existed when assigning the training labels in Phase I.}\n Despite the similarity of the training samples researchers decided to use, there was a huge difference in the training labels that each work chose. \n Learn\\&Fuzz directly used the character sequences of PDF objects as labels, the same as the training samples but shifted by one position, which is a common generative model technique already broadly used in speech and handwriting recognition. 
\n Unlike Learn\\&Fuzz, Neuzz and Rajpal’s work used a bitmap and a heatmap, respectively, as training labels, \n with the bitmap demonstrating the code coverage status of a certain input, \n and the heatmap demonstrating the efficacy of flipping one or more bytes of the input file. \n As a term well known among fuzzing researchers, the bitmap was gathered directly from the results of AFL. The heatmap used by Rajpal et al. was generated by comparing the code coverage supported by the bitmap of one seed file \n and the code coverage supported by bitmaps of the mutated seed files. If there was an acceptable level of code coverage expansion when executing the mutated seed files, demonstrated by more ``1''s instead of ``0''s in the corresponding bitmaps, the byte-level differences between the original seed file and the mutated seed files were highlighted. Since those bytes should be the focus of later mutation, the heatmap was used to denote the location of those bytes.\n The different label usage in each work was due to the different kinds of knowledge each work aimed to learn. For a better understanding, let us note that we can simply regard a Deep Learning model as a simulation of a ``function''. Learn\\&Fuzz~ wanted to learn valid mutations of a PDF file that were compliant with the syntax and semantic requirements of PDF objects. Its model could be seen as a simulation of $f(x, \\theta)=y$, where $x$ denotes a sequence of characters in PDF objects and $y$ represents the sequence obtained by shifting the input sequence by one position. It generated new PDF object character sequences given a starting prefix once the model was trained. In Neuzz, an NN (neural network) model was used to do program smoothing, which simulated a smooth surrogate function that approximated the discrete branching behaviors of the target program, taking the form 
$f(x, \\theta)=y$, where $x$ denoted the program's byte-level input and $y$ represented the corresponding edge coverage bitmap. In this way, the gradient of the surrogate function was easily computed, thanks to NNs' support for efficient computation of gradients and higher-order derivatives. Gradients could then be used to guide the direction of mutation in order to achieve greater code coverage. In Rajpal and others' work, they designed a model to predict good (and bad) locations to mutate in input files based on past mutations and the corresponding code coverage information. Here, the $x$ variable also denoted the program's byte-level input, but the $y$ variable represented the corresponding heatmap. \n \\item \\observation{} \\textit{Various lengths of input files were handled in Phase~\\ref{phase2}.}\n Deep Learning models typically accept fixed-length input, whereas the input files for fuzzers often have different lengths. Two different approaches were used among the three works we surveyed: splitting and padding. Learn\\&Fuzz~ dealt with this mismatch by concatenating all the PDF object character sequences together and then splitting the large character sequence into multiple training samples of a fixed size. Neuzz~ solved this problem by setting a maximum input file size threshold and then padding smaller input files with null bytes. From additional experiments, they also found that a modest threshold gave them the best result, and that enlarging the input file size did not grant them additional accuracy. Aside from preprocessing training samples, Neuzz also preprocessed training labels and reduced the label dimension by merging edges that always appeared together into one edge, in order to prevent the multicollinearity problem, which could prevent the model from converging to a small loss value. Rajpal and others used a similar splitting mechanism to Learn\\&Fuzz's to split their input files into either 64-bit or 128-bit chunks. 
Their chunk size was determined empirically and was treated as a tunable parameter of their Deep Learning model, and their approach did not require concatenating the sequences beforehand. \n \\item \\observation{} \\textit{All the works in our survey skipped Phase~\\ref{phase3}.}\n According to our definition of Phase~\\ref{phase3}, none of the works in our survey considered representation learning. Therefore, all three works fell into class 1 shown in Figure~\\ref{fig:dec}. That said, Rajpal and others considered the numerical representation of byte sequences. They claimed that since one-byte binary data does not always represent a magnitude but sometimes a state, representing one byte as a value ranging from 0 to 255 could be suboptimal. They therefore used a lower-level 8-bit representation.\n\\end{itemize}\nThe above observations seem to indicate the following indications: \n\\begin{itemize}[label={}]\n\\item \\indication{} \\textit{No alteration to the input files seems to be a correct approach.}\nAs far as we are concerned, this is due to the nature of fuzzing. That is, since every bit of the input files matters, any slight alteration to the input files could either lose important information or add redundant information for the neural network model to learn. \n\\item \\indication{} \\textit{Evaluation criteria should be chosen carefully when judging mutation.}\nInput files are always used as training samples when applying Deep Learning techniques to fuzzing problems. Through this common practice, researchers share the desire to let the neural network model learn what the mutated input files should look like. But the criterion for judging an input file actually has two levels: on the one hand, a good input file should be correct in syntax and semantics; on the other hand, a good input file should be the product of a useful mutation, which triggers the program to behave differently from previous execution paths. 
This idea of a fuzzer that can generate semantically correct input file could still be a bad fuzzer at triggering new execution path was first brought up in Learn\\&Fuzz. We could see later on works trying to solve this problem by using either different training labels or use neural network to do program smoothing. We encouraged fuzzing researchers, when using Deep Learning techniques, to keep this problem in mind, in order to get better fuzzing results. \n\\item \\indication{} \\textit{Works in our survey only focus on \\textit{local knowledge}.}\nIn brief, some of the existing works~ \nleveraged the Deep Learning model to learn the relation between program's input and its behavior and used the knowledge that learned from history to guide future mutation.\nFor better demonstration, we defined the knowledge that only applied in one program as \\textit{local knowledge}.\nIn other words, this indicates that the \\textit{local knowledge} cannot direct fuzzing on other programs.\n\\end{itemize} \n{\\color{myblack}", "id": "63c19e14-9f44-4182-a0ff-fa0ca6d47abe", "level": "subsection", "origin_cites_number": 6, "parent_id": "eb166b74-7da0-4abe-af7f-7cfe3d3d385a", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in security-oriented fuzzing" ], [ "subsection", "Key findings from a closer look" ] ], "subsections": [], "title": "Key findings from a closer look" }, { "cite_extract_rate": 0, "cites": [], "content": "Corresponding to the problems conventional fuzzing has, the advantages of applying DL in fuzzing are that DL’s learning ability can ensure mutated input files follow the designated grammar rules better. \nThe ways in which input files are generated are more directed, and will, therefore, guarantee the fuzzer to increase its code coverage by each mutation. 
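As a concrete footnote to the length-handling observations above, the two normalization strategies (splitting as in Learn\&Fuzz and Rajpal et al., padding as in Neuzz) can be sketched as follows; the chunk size and threshold values are illustrative, not those of the original systems:

```python
def split_into_chunks(data: bytes, chunk_size: int) -> list:
    """Splitting strategy: cut one long input into fixed-size training
    samples (the trailing partial chunk is simply dropped here; how to
    handle it is a design choice)."""
    return [data[i:i + chunk_size]
            for i in range(0, len(data) - chunk_size + 1, chunk_size)]

def pad_to_threshold(data: bytes, threshold: int) -> bytes:
    """Padding strategy: truncate to a maximum size and pad smaller
    inputs with null bytes so all samples share one length."""
    return (data + b"\x00" * threshold)[:threshold]

chunks = split_into_chunks(b"ABCDEFGH", 3)   # -> [b"ABC", b"DEF"]
fixed = pad_to_threshold(b"ABCDE", 8)        # -> b"ABCDE\x00\x00\x00"
```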
\nHowever, even if the advantages can be clearly demonstrated by the two papers we discuss above, \nsome challenges still exist, including mutation judgment challenges that are faced both by traditional fuzzing techniques and fuzzing with DL, \nand the scalability of fuzzing approaches. \nWe would like to raise several interesting questions for the future researchers: 1) Can the knowledge learned from the fuzzing history of one program be applied to direct testing on other programs? 2) If the answer to question one is positive, we can suppose that \\textit{global knowledge} across different programs exists? \nThen, can we train a model to extract the \\textit{global knowledge}? 3) Whether it is possible to combine \\textit{global knowledge} and \\textit{local knowledge} when fuzzing programs?\n}", "id": "e9b0e749-e076-4b69-872a-5fbd8c4db1fd", "level": "subsection", "origin_cites_number": 0, "parent_id": "eb166b74-7da0-4abe-af7f-7cfe3d3d385a", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "A closer look at applications of Deep Learning in security-oriented fuzzing" ], [ "subsection", "Discussion" ] ], "subsections": [], "title": "Discussion" }, { "cite_extract_rate": 0.25, "cites": [ 8017, 8019 ], "content": "\\label{sec:dis}\nUsing high-quality data in Deep Learning is important as much as using well-structured deep neural network architectures. That is, obtaining quality data must be an important step, which should not be skipped, even in resolving security problems using Deep Learning. So far, this study demonstrated how the recent security papers using Deep Learning have adopted data conversion (Phase~\\ref{phase2}) and data representation (Phase~\\ref{phase3}) on different security problems. 
Our observations and indications showed a clear picture of how security experts generate quality data when using Deep Learning.\nSince we did not review all the existing security papers using Deep Learning, the generality of our observations and indications is somewhat limited. Note that our selected papers for review have been published recently at prestigious security and reliability conferences such as USENIX Security and ACM CCS. Thus, our observations and indications help to understand how most security experts have used Deep Learning to solve the well-known eight security problems from program analysis to fuzzing.\nOur observations show that we should transform raw data into formats ready for resolving security problems using Deep Learning, through data cleaning, data augmentation and so on. Specifically, we observe that Phase~\\ref{phase2} and Phase~\\ref{phase3} methods have mainly been used for the following purposes: \n\\begin{itemize}\n\\item To clean the raw data to make the neural network (NN) models easier to interpret\n\\item To reduce the dimensionality of data (e.g., principal component analysis (PCA), t-distributed stochastic neighbor embedding (t-SNE))\n\\item To scale input data (e.g., normalization)\n\\item To make NN models understand more complex relationships depending on security problems (e.g., memory graphs)\n\\item To simply change various raw data formats into a vector format for NN models (e.g., one-hot encoding and word2vec embedding) \n\\end{itemize} \nIn the following, we further discuss the question, ``What if Phase~\\ref{phase2} is skipped?'', rather than the question, ``Is Phase~\\ref{phase3} always necessary?''. \nThis is because most of the selected papers do not consider Phase~\\ref{phase3} methods (76\\%), or adopt them with no concrete reasoning (19\\%). 
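Two of the routine conversions listed above, one-hot encoding and input scaling, can be sketched in a few lines (a minimal illustration of ours, not taken from any surveyed paper):

```python
def one_hot_encode(values):
    """Map each categorical value to a vector of 0s with a single 1."""
    vocab = sorted(set(values))
    index = {v: i for i, v in enumerate(vocab)}
    return [[1.0 if i == index[v] else 0.0 for i in range(len(vocab))]
            for v in values]

def min_max_scale(numbers):
    """Scale numeric inputs into [0, 1] so no raw feature dominates."""
    lo, hi = min(numbers), max(numbers)
    span = (hi - lo) or 1  # avoid division by zero for constant inputs
    return [(n - lo) / span for n in numbers]

vectors = one_hot_encode(["push", "mov", "push"])  # vocab: ["mov", "push"]
scaled = min_max_scale([0, 5, 10])                 # -> [0.0, 0.5, 1.0]
```

Note that, unlike a trained word2vec embedding, the one-hot vector carries no learned similarity between categories, which is exactly the distinction Phase 3 is about.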
\nSpecifically, we demonstrate how Phase~\\ref{phase2} has been adopted according to eight security problems, different types of data, various models of NN and various outputs of NN models, \nin depth. \nOur key findings are summarized as follows:\n\\begin{itemize}\n \\item How to fit security domain knowledge into raw data has not been well-studied yet.\n \\item While raw text data are commonly parsed after embedding, raw binary data are converted using various Phase~\\ref{phase2} methods.\n \\item Raw data are commonly converted into a vector format to fit well to a specific NN model using various Phase~\\ref{phase2} methods.\n \\item Various Phase~\\ref{phase2} methods are used according to the relationship between output of security problem and output of NN models.\n\\end{itemize}", "id": "18fd9c41-b209-4bf4-b2a4-ffffb8f5fc68", "level": "section", "origin_cites_number": 8, "parent_id": "825459b6-d132-4d0c-9bfb-51997111ee0c", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "Discussion" ] ], "subsections": [ "f52936f0-ab3c-4c04-a13a-cce90812bd88" ], "title": "Discussion" }, { "cite_extract_rate": 0, "cites": [], "content": "is skipped?}\nFrom the analysis results of our selected papers for review, we roughly classify Phase~\\ref{phase2} methods into the following four categories. \n\\begin{itemize}\n\\item \\textit{Embedding:} The data conversion methods that intend to convert high-dimensional discrete variables into low-dimensional continuous vectors~. \n\\item \\textit{Parsing combined with embedding: } The data conversion methods that constitute an input data into syntactic components in order to test conformability after embedding. \n\\item \\textit{One-hot encoding:} A simple embedding where each data belonging to a specific category is mapped to a vector of $0$s and a single $1$. 
Here, the low-dimension transformed vector is not managed.\n\\item \\textit{Domain-specific data structures:} A set of data conversion methods which generate data structures capturing domain-specific knowledge for different security problems, e.g., memory graphs~. \n\\end{itemize}", "id": "f52936f0-ab3c-4c04-a13a-cce90812bd88", "level": "subsection", "origin_cites_number": 2, "parent_id": "18fd9c41-b209-4bf4-b2a4-ffffb8f5fc68", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "Discussion" ], [ "subsection", "What if Phase~\\ref{phase2" ] ], "subsections": [ "bacc96c5-0683-4d8d-9306-175374cc3048", "edf7ab4a-8f8d-4b23-b0cb-1a4ad0ba7c0a", "0d175350-6042-4687-be8d-31fdc9929c6b", "511731fd-089d-42e4-abe6-7d760a1fc6a7" ], "title": "What if Phase~\\ref{phase2" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{figure}\n\\centering\n\\begin{tikzpicture}\n [\n pie chart,\n slice type={embed}{myblue}{vertical lines},\n slice type={pars}{myyellow}{grid},\n slice type={hot}{mypurple}{north west lines},\n slice type={none}{mygreen}{crosshatch},\n slice type={other}{myred}{dots},\n pie values/.style={font={\\small}},\n scale=1.3\n ]\n \\pie[yshift=2.4cm]\n {PA}{100/embed}\n \\pie[xshift=2.4cm,yshift=2.4cm]\n {ROP}{66.7/embed,33.4/hot}\n \\pie[xshift=4.8cm,yshift=2.4cm]\n {CFI}{66.7/embed,33.4/other}\n \\pie[xshift=7.2cm,yshift=2.4cm]\n {NA}{100/pars}\n \\pie[]\n {MC}{41.7/embed,41.7/hot,16.7/none}\n \\pie[xshift=2.4cm]\n {SEAD}{83.4/pars,16.7/none}\n \\pie[xshift=4.8cm]\n {MF}{100/other}\n \\pie[xshift=7.2cm]\n {FUZZING}{33.4/embed,66.7/none}\n \\legend[shift={(0cm,-1cm)}]{{Embedding}/embed, {Parsing \\& Embedding}/pars}\n \\legend[shift={(3cm,-1cm)}]{{One-hot}/hot, {None}/none}\n \\legend[shift={(6cm,-1cm)}]{{Other}/other}\n\\end{tikzpicture}\n\\caption{Statistics of Phase~\\ref{phase2} methods for eight security problems}\n\\label{fig:data2}\n\\end{figure}\nWe observe that over 93\\% of the papers 
use one of the above-classified Phase~\ref{phase2} methods. 7\% of the papers do not use any of the \nabove-classified methods, and these papers are mostly solving a software fuzzing problem. \nSpecifically, we observe that 35\% of the papers use a Category 1 (i.e., embedding) method; \n30\% of the papers use a Category 2 (i.e., parsing combined with embedding) method;\n15\% of the papers use a Category 3 (i.e., one-hot encoding) method; \nand 13\% of the papers use a Category 4 (i.e., domain-specific data structures) method. Regarding why one-hot encoding is not widely used, we found that \nmost security data include categorical input values, which are not \ndirectly analyzed by Deep Learning models. \nFrom Figure~\ref{fig:data2}, we also observe that different Phase~\ref{phase2} methods are used for different security problems. First, PA, ROP and CFI should convert raw data into a vector format using embedding because they commonly collect instruction sequences from binary data. Second, NA and SEAD use parsing combined with embedding because raw data such as network traffic and system logs consist of complex attributes with different formats, such as categorical and numerical input values. Third, we observe that MF uses various data structures because memory dumps from the memory layout are unstructured. Fourth, fuzzing generally uses no data conversion since Deep Learning models are used to generate new input data with the same data format as the original raw data. Finally, we observe that MC commonly uses one-hot encoding and embedding because malware binaries and well-structured security log files include categorical, numerical and unstructured data in general. These observations indicate that the type of data strongly influences the use of Phase~\ref{phase2} methods. We also observe that only MF among the eight security problems commonly transforms raw data into well-structured data embedding specialized security domain knowledge. 
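The ``parsing combined with embedding'' pattern used by NA and SEAD can be illustrated with a hypothetical sketch; the log line, the ad-hoc parser, and the stand-in embedding function below are all invented for illustration:

```python
# Hypothetical sketch of "parsing combined with embedding" for a system
# log line (a real system would use a grammar-aware parser for
# timestamps, PIDs, IP addresses, etc.).
log_line = "sshd[211]: Failed password for root from 10.0.0.5"

# Parsing: split the raw text into syntactic components.
tokens = log_line.replace("[", " ").replace("]", " ").replace(":", " ").split()

def embed(token, dim=3):
    # Deterministic stand-in for a learned embedding table: in practice
    # these values would be trained jointly with the NN model.
    return [(ord(c) % 10) / 10.0 for c in (token * dim)[:dim]]

# Embedding: the parsed token sequence becomes a (len(tokens) x dim)
# matrix of dense vectors, a format NN models can consume directly.
sequence = [embed(t) for t in tokens]
```

Parsing first reduces redundancy (brackets, separators) so that the embedding step only has to cover the meaningful tokens.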
This observation indicates that various conversion methods of raw data into well-structure data which embed various security domain knowledge are not yet studied in depth.", "id": "bacc96c5-0683-4d8d-9306-175374cc3048", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "f52936f0-ab3c-4c04-a13a-cce90812bd88", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "Discussion" ], [ "subsection", "What if Phase~\\ref{phase2" ], [ "subsubsection", "Findings on eight security problems" ] ], "subsections": [], "title": "Findings on eight security problems" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{figure}[t!]\n \\centering\n \\begin{tikzpicture}\n \\begin{axis}[\n width = 0.95*\\textwidth,\n height = 5cm,\n major x tick style = transparent,\n ybar=2*\\pgflinewidth,\n bar width=14pt,\n ymajorgrids = true,\n ylabel = {PERCENTAGE (\\%)},\n symbolic x coords={Embedding,Parsing \\& Embdding,One-hot, None, Others},\n xtick = data,\n scaled y ticks = false,\n enlarge x limits=0.15,\n ymin=0,\n legend cell align=left,\n legend style={\n at={(1,1.05)},\n anchor=south east,\n column sep=3ex,\n font=\\footnotesize,\n },\n extra y ticks = 0.4,\n extra y tick labels={},\n extra y tick style={grid=major, major grid style={thick,draw=black}},\n label style={font=\\scriptsize},\n tick label style={font=\\footnotesize},\n nodes near coords,\n every node near coord/.append style={font=\\scriptsize},\n ]\n \\addplot[style={myblue,fill=myblue,mark=none}]\n coordinates {(Embedding, 51.9) (Parsing \\& Embdding,0.0) (One-hot,22.3)(None,14.9)(Others,11.2)};\n \\addplot[style={myyellow,fill=myyellow,mark=none}]\n coordinates {(Embedding, 0.0) (Parsing \\& Embdding,92.4) (One-hot,0.0)(None,7.6)(Others,0.0)};\n \\legend{Binary,Text}\n \\end{axis}\n \\end{tikzpicture}\n \\caption{Statistics of Phase~\\ref{phase2} methods on type of data.}\n \\label{fig:data1-2}\n\\end{figure}\nNote that according to 
types of data, a NN model works better than the others. For example, CNN works well with images but does not work with text. From Figure ~\\ref{fig:data1-2} for raw binary data, we observe that 51.9\\%, 22.3\\% and 11.2\\% of security papers use embedding, one-hot encoding and $Others$, respectively. Only 14.9\\% of security papers, especially related to fuzzing, do not use one of Phase~\\ref{phase2} methods. This observation indicates that binary input data which have various binary formats should be converted into an input data type which works well with a specific NN model. From Figure ~\\ref{fig:data1-2} for raw text data, we also observe that 92.4\\% of papers use parsing with embedding as the Phase~\\ref{phase2} method. Note that compared with raw binary data whose formats are unstructured, raw text data generally have the well-structured format. Raw text data collected from network traffics may also have various types of attribute values. Thus, raw text data are commonly parsed after embedding to reduce redundancy and dimensionality of data.", "id": "edf7ab4a-8f8d-4b23-b0cb-1a4ad0ba7c0a", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "f52936f0-ab3c-4c04-a13a-cce90812bd88", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "Discussion" ], [ "subsection", "What if Phase~\\ref{phase2" ], [ "subsubsection", "Findings on different data types" ] ], "subsections": [], "title": "Findings on different data types" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{figure}[t!]\n \\centering\n \\begin{subfigure}[b]{0.90\\textwidth}\n \\begin{tikzpicture}\n [\n pie chart,\n slice type={dnn}{myblue}{grid},\n slice type={cnn}{myyellow}{north west lines},\n slice type={rnn}{mylightblue}{crosshatch},\n slice type={lstm}{mygreen}{vertical lines},\n slice type={gnn}{myred}{dots},\n slice type={snn}{myorange}{horizontal lines},\n slice type={combination}{mylightblue}{north east 
lines},\n slice type={dbn}{mybrown}{fivepointed stars},\n pie values/.style={font={\\small}},\n scale=1.3\n ]\n \\pie{Embdeeing}{42.9/dnn,28.6/rnn,14.3/lstm,7.2/gnn,7.2/dbn}\n \\pie[xshift=2.2cm]\n {Parsing \\& Embddding}{16.7/dnn,16.7/cnn,8.4/rnn,50/lstm,8.4/snn}\n \\pie[xshift=4.4cm]\n {One-hot}{33.4/dnn,33.4/cnn,16.7/lstm,16.7/combination}\n \\pie[xshift=6.6cm]\n {None}{20/dnn,40/cnn,20/rnn,20/snn}\n \\pie[xshift=8.8cm]\n {Others}{66.7/cnn,33.4/gnn}\n \\legend[shift={(0cm,-1cm)}]{{DNN}/dnn, {CNN}/cnn}\n \\legend[shift={(3cm,-1cm)}]{{RNN}/rnn, {LSTM}/lstm}\n \\legend[shift={(6cm,-1cm)}]{{GNN}/gnn, {SNN}/snn}\n \\legend[shift={(8cm,-1cm)}]{{Combination}/combination,{DBN}/dbn}\n \\end{tikzpicture}\n \\caption{Statistics of Phase~\\ref{phase2} methods for eight security problems.}\n \\label{fig:data2}\n\\end{subfigure}\n\\hfill\n \\begin{subfigure}[b]{1.00\\textwidth}\n \\centering\n \\begin{tikzpicture}\n [\n pie chart,\n slice type={embed}{myblue}{crosshatch},\n slice type={parsing}{myyellow}{vertical lines},\n slice type={onehot}{mylightblue}{grid},\n slice type={none}{mygreen}{dots},\n slice type={others}{myred}{north west lines},\n pie values/.style={font={\\small}},\n scale=1.3\n ]\n \\pie[yshift=2.4cm]\n {DNN}{54.6/embed,18.2/parsing,18.2/onehot,9.1/none}\n \\pie[xshift=2.2cm,yshift=2.4cm]\n {CNN}{25/parsing,25/onehot,25/none,25/others}\n \\pie[xshift=4.4cm,yshift=2.4cm]\n {RNN}{66.7/embed,16.7/parsing,16.7/none}\n \\pie[xshift=6.6cm,yshift=2.4cm]\n {LSTM}{22.3/embed,66.7/parsing,11.2/onehot}\n \\pie{GNN}{50/embed,50/others}\n \\pie[xshift=2.2cm]\n {SNN}{50/parsing,50/none}\n \\pie[xshift=4.4cm]\n {Combination}{100/onehot}\n \\pie[xshift=6.6cm]\n {DBN}{100/embed}\n \\legend[shift={(0cm,-1cm)}]{{Embedding}/embed, {Parsing \\& Embddding}/parsing}\n \\legend[shift={(3cm,-1cm)}]{{One-hot encoding}/onehot, {None}/none}\n \\legend[shift={(6cm,-1cm)}]{{Others}/others}\n \\end{tikzpicture}\n \\caption{Phase~\\ref{phase2} methods over type of NN.}\n 
\\label{fig:data1-4}\n \\end{subfigure}\n \\caption{Statistics of Phase~\\ref{phase2} methods for various types of NNs.}\n \\label{fig:data12}\n\\end{figure}\nAccording to types of the converted data, a specific NN model works better than the others. For example, CNN works well with images but does not work with raw text. From Figure ~\\ref{fig:data1-3}, we observe that use of embedding for DNN (42.9\\%), RNN (28.6\\%) and LSTM (14.3\\%) models approximates to 85\\%. This observation indicates that embedding methods are commonly used to generate sequential input data for DNN, RNN and LSTM models. Also, we observe that one-hot encoded data are commonly used as input data for DNN (33.4\\%), CNN (33.4\\%) and LSTM (16.7\\%) models. This observation indicates that one-hot encoding is one of common Phase~\\ref{phase2} methods to generate numerical values for image and sequential input data because many raw input data for security problems commonly have the categorical features. We observe that the CNN (66.7\\%) model uses the converted input data using the $Others$ methods to express the specific domain knowledge into the input data structure of NN networks. 
This is because general vector formats including graph, matrix and so on can also be used as an input value of the CNN model.\n\\begin{figure}[t!]\n \\centering\n \\begin{subfigure}[b]{0.95\\textwidth}\n \\centering\n \\begin{tikzpicture}\n [\n pie chart,\n slice type={generation}{myblue}{crosshatch},\n slice type={detection}{myyellow}{vertical lines},\n slice type={classification}{mylightblue}{grid},\n pie values/.style={font={\\small}},\n scale=1.3\n ]\n \\pie{Embedding}{100/classification}\n \\pie[xshift=2.2cm]\n {Parsing \\& Embddding}{58.3/classification,41.7/detection}\n \\pie[xshift=4.4cm]\n {One-hot encoding}{100/classification}\n \\pie[xshift=6.6cm]\n {None}{60/classification,20/detection,20/generation}\n \\pie[xshift=8.8cm]\n {Others}{66.7/classification,33.4/detection}\n \\legend[shift={(0cm,-1cm)}]{{Data Generation}/generation}\n \\legend[shift={(3cm,-1cm)}]{{Object Detection}/detection}\n \\legend[shift={(6cm,-1cm)}]{{Classification}/classification}\n \\end{tikzpicture}\n \\caption{Output of NN over Phase~\\ref{phase2} methods.}\n \\label{fig:data1-5}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.78\\textwidth}\n \\centering\n \\begin{tikzpicture}\n [\n pie chart,\n slice type={embed}{mylightblue}{north west lines},\n slice type={parsing}{myyellow}{crosshatch},\n slice type={onehot}{myblue}{vertical lines},\n slice type={none}{mygreen}{grid},\n slice type={others}{myred}{dots},\n pie values/.style={font={\\small}},\n scale=1.3\n ]\n \\pie{Classification}{43.8/embed,21.9/parsing,18.8/onehot,9.4/none,6.3/others}\n \\pie[xshift=2.2cm]\n {Object Detection}{71.5/parsing,14.3/none,14.3/others}\n \\pie[xshift=4.4cm]\n {Data Generation}{100/none}\n \\legend[shift={(-0.5cm,-1cm)}]{{Embedding}/embed, {Parsing \\& Embddding}/parsing}\n \\legend[shift={(2cm,-1cm)}]{{One-hot encoding}/onehot, {None}/none}\n \\legend[shift={(4cm,-1cm)}]{{Others}/others}\n \\end{tikzpicture}\n \\caption{Phase~\\ref{phase2} methods over output of NN.}\n \\label{fig:data1-6}\n 
\\end{subfigure}\n \\caption{Statistics of Phase~\\ref{phase2} methods for various output of NN.}\n \\label{fig:data13}\n\\end{figure}\nFrom Figure ~\\ref{fig:data1-4}, we observe that DNN, RNN and LSTM models commonly use embedding, one-hot encoding and parsing combined with embedding. For example, we observe security papers of 54.6\\%, 18.2\\% and 18.2\\% models use embedding, one-hot encoding and parsing combined with embedding, respectively. We also observe that the CNN model is used with various Phase~\\ref{phase2} methods because any vector formats such as image can generally be used as an input data of the CNN model.", "id": "0d175350-6042-4687-be8d-31fdc9929c6b", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "f52936f0-ab3c-4c04-a13a-cce90812bd88", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "Discussion" ], [ "subsection", "What if Phase~\\ref{phase2" ], [ "subsubsection", "Findings on various models of NN" ] ], "subsections": [], "title": "Findings on various models of NN" }, { "cite_extract_rate": 0, "cites": [], "content": "According to the relationship between output of security problem and output of NN, we may use a specific Phase~\\ref{phase2} method. For example, if output of security problem is given into a class (e.g., normal or abnormal), output of NN should also be given into classification.\nFrom Figure ~\\ref{fig:data1-5}, we observe that embedding is commonly used to support a security problem for classification (100\\%). Parsing combined with embedding is used to support a security problem for object detection (41.7\\%) and classification (58.3\\%). One-hot encoding is used only for classification (100\\%). These observations indicate that classification of a given input data is the most common output which is obtained using Deep Learning under various Phase~\\ref{phase2} methods. 
\nFrom Figure~\\ref{fig:data1-6}, we observe that security problems, whose outputs are classification, commonly use embedding (43.8\\%) and parsing combined with embedding (21.9\\%) as the Phase~\\ref{phase2} method. We also observe that security problems, whose outputs are object detection, commonly use parsing combined with embedding (71.5\\%). However, security problems, whose outputs are data generation, commonly do not use the Phase~\\ref{phase3} methods. These observations indicate that a specific Phase~\\ref{phase2} method has been used according to the relationship between output of security problem and use of NN models.", "id": "511731fd-089d-42e4-abe6-7d760a1fc6a7", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "f52936f0-ab3c-4c04-a13a-cce90812bd88", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "Discussion" ], [ "subsection", "What if Phase~\\ref{phase2" ], [ "subsubsection", "Findings on output of NN models" ] ], "subsections": [], "title": "Findings on output of NN models" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:fur}\nSince any Deep Learning models are stochastic, each time the same Deep Learning model is fit even on the same data, \nit might give different outcomes. \nThis is because deep neural networks use random values such as random initial weights. \nHowever, if we have all possible data for every security problem, we may not make random predictions. \nSince we have the limited sample data in practice, we need to get the best-effort prediction results using the given Deep Learning model, \nwhich fits to the given security problem. \nHow can we get the best-effort prediction results of Deep Learning models for different security problems? \nLet us begin to discuss about the stability of evaluation results for our selected papers for review. 
\nNext, we will show the influence of security domain knowledge on prediction results of Deep Learning models.\nFinally, we will discuss some common issues in those fields.", "id": "e83e405a-97db-4a67-84c0-b0487015c734", "level": "section", "origin_cites_number": 0, "parent_id": "825459b6-d132-4d0c-9bfb-51997111ee0c", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "Further areas of investigation" ] ], "subsections": [ "8161d984-3f1f-47fb-a176-016d10c0acf1", "a5830c70-7cd9-401c-9287-bc03e0d1520a", "4fe95820-75c2-4ff6-bb6b-bdb18e167231", "1fff475a-d723-4b2c-bac7-55b4c1174e65" ], "title": "Further areas of investigation" }, { "cite_extract_rate": 0, "cites": [], "content": "When evaluating neural network models, Deep Learning models commonly use three methods: train-test split; train-validation-test split; and $k$-fold cross validation. A train-test split method splits the data into two parts, i.e., training and test data. Even though a train-test split method makes the stable prediction with a large amount of data, predictions vary with a small amount of data. A train-validation-test split method splits the data into three parts, i.e., training, validation and test data. Validation data are used to estimate predictions over the unknown data. $k$-fold cross validation has $k$ different set of predictions from $k$ different evaluation data. Since $k$-fold cross validation takes the average expected performance of the NN model over $k$-fold validation data, the evaluation result is closer to the actual performance of the NN model.\nFrom the analysis results of our selected papers for review, we observe that 40.0\\% and 32.5\\% of the selected papers are measured using a train-test split method and a train-validation-test split method, respectively. Only 17.5\\% of the selected papers are measured using $k$-fold cross validation. 
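The $k$-fold procedure summarized above can be sketched in a few lines of plain Python (a hypothetical illustration with a dummy scoring function, not code from any surveyed paper):

```python
# Hypothetical sketch of k-fold cross validation: every sample serves as
# test data exactly once, and the k scores are averaged for a more
# stable estimate than a single train-test split.
def k_fold_indices(n_samples, k):
    fold_size, remainder = divmod(n_samples, k)
    folds, start = [], 0
    for i in range(k):
        size = fold_size + (1 if i < remainder else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(samples, k, train_and_score):
    folds = k_fold_indices(len(samples), k)
    scores = []
    for i, test_idx in enumerate(folds):
        # Train on all folds except fold i, then score on fold i.
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        scores.append(train_and_score(train_idx, test_idx))
    return sum(scores) / k  # expected average over k evaluations

# Dummy scorer: reports the fraction of samples held out per fold.
avg = cross_validate(list(range(10)), 5, lambda tr, te: len(te) / 10)
```

Averaging the $k$ fold scores (and inspecting their variance) is what brings the estimate closer to the model's actual performance than a single split.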
This observation implies that even though the selected papers report accuracy of nearly 99\% or higher, or an F1 score of 0.99, most solutions using Deep Learning might not show the same performance on noisy data with randomness.\nTo get stable prediction results of Deep Learning models for different security problems, we might reduce the influence of the randomness of data on Deep Learning models. At least, it is recommended to consider the following methods:\n\begin{itemize}\n \item \textbf{Do experiments using the same data many times}: To get a stable prediction with a small amount of sample data, we might control the randomness of data by using the same data many times.\n \item \textbf{Use cross validation methods, e.g., $k$-fold cross validation}: The expected average and variance from $k$-fold cross validation estimate how stable the proposed model is.\n\end{itemize}", "id": "8161d984-3f1f-47fb-a176-016d10c0acf1", "level": "subsection", "origin_cites_number": 0, "parent_id": "e83e405a-97db-4a67-84c0-b0487015c734", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "Further areas of investigation" ], [ "subsection", "How stable are evaluation results?" ] ], "subsections": [], "title": "How stable are evaluation results?" }, { "cite_extract_rate": 0, "cites": [], "content": "When selecting an NN model that analyzes an application dataset, e.g., the MNIST dataset~, we should understand that the problem is to classify a handwritten digit given as a $28\times28$ black-and-white image. 
Also, to solve the problem with high classification accuracy, it is important to know which part of each handwritten digit mainly influences the outcome of the problem, i.e., domain knowledge.\nWhile solving a security problem, knowing and using security domain knowledge for each security problem \nis also important for the following reasons (we label the observations and indications related to domain knowledge with `$*$'): \n{\color{myblack}\nFirstly, \textit{dataset generation, preprocessing and feature selection highly depend on domain knowledge.} \nDifferent from image classification and natural language processing, \nraw data in the security domain cannot be sent into the NN model directly. \nResearchers need to adopt strong domain knowledge to generate, extract, or clean the training set. \nAlso, in some works, domain knowledge is adopted in data labeling\nbecause labels for data samples are not straightforward.\nSecondly, \textit{domain knowledge helps with the selection of DL models and their hierarchical structure.}\nFor example, the neural network architecture (hierarchical and bi-directional LSTM) designed in DEEPVSA~ \nis based on domain knowledge in instruction analysis.\nThirdly, \textit{domain knowledge helps to speed up the training process.}\nFor instance, adopting strong domain knowledge to clean the training set \nhelps to speed up the training process while keeping the same performance. \nHowever, due to the influence of the randomness of data on Deep Learning models, \ndomain knowledge should be carefully adopted to avoid a potential decrease in accuracy. 
\nFinally, \\textit{domain knowledge helps with the interpretability of models' prediction.}\nRecently, researchers try to explore the interpretability of the deep learning model in security areas,\nFor instance, LEMNA~ and EKLAVYA~\nexplain how the prediction was made by models from different perspectives.\nBy enhancing the trained models‘ interpretability, they can improve their approaches' accuracy and security.\nThe explanation for the relation between input, hidden state, and the final output is based on domain knowledge.", "id": "a5830c70-7cd9-401c-9287-bc03e0d1520a", "level": "subsection", "origin_cites_number": 4, "parent_id": "e83e405a-97db-4a67-84c0-b0487015c734", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "Further areas of investigation" ], [ "subsection", "How does security domain knowledge influence the performance of security solutions using Deep Learning?" ] ], "subsections": [], "title": "How does security domain knowledge influence the performance of security solutions using Deep Learning?" }, { "cite_extract_rate": 0, "cites": [], "content": "In this section, we will discuss the common challenges when applying DL to solving security problems. \nThese challenges as least shared by the majority of works, if not by all the works.\nGenerally, we observe 7 common challenges in our survey:\n\\begin{enumerate}\n \\item The raw data collected from the software or system usually contains lots of noise. \n \\item The collected raw is untidy. For instance, the instruction trace, the \n Untidy data: variable length sequences, \n \\item Hierarchical data syntactic/structure. As discussed in Section~\\ref{sec:pa:dis}, \n the information may not simply be encoded in a single layer, rather, it is encoded hierarchically, and the syntactic is complex.\n \\item Dataset generation is challenging in some scenarios. Therefore, the generated training data might be less representative or unbalanced. 
\n \\item Different for the application of DL in image classification, and natural language process, which is visible or understandable, the relation between data sample and its label is not intuitive, and hard to explain.\n\\end{enumerate}", "id": "4fe95820-75c2-4ff6-bb6b-bdb18e167231", "level": "subsection", "origin_cites_number": 0, "parent_id": "e83e405a-97db-4a67-84c0-b0487015c734", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "Further areas of investigation" ], [ "subsection", "Common challenges" ] ], "subsections": [], "title": "Common challenges" }, { "cite_extract_rate": 0, "cites": [], "content": "Finally, we investigate the availability of the trained model and the quality of the dataset.\nGenerally, the availability of the trained models affects its adoption in practice,\nand the quality of the training set and the testing set will affect the credibility of testing results and comparison between different works. \nTherefore, we collect relevant information to answer the following four questions and shows the statistic in Table~\\ref{tab:dataset}:\n\\begin{enumerate}\n \\item Whether a paper's source code is publicly available?\n \\item Whether raw data, which is used to generate the dataset, is publicly available?\n \\item Whether its dataset is publicly available?\n \\item How are the quality of the dataset?\n\\end{enumerate}\n\\input{sections/table3}\nWe observe that both the percentage of open source of code and dataset in our surveyed fields is low,\nwhich makes it a challenge to reproduce proposed schemes, make comparisons between different works, and adopt them in practice.\nSpecifically, the statistic shows that 1) the percentage of open source of code in our surveyed fields is low, only 6 out of 16 paper published their model’s source code.\n2) the percentage of public data sets is low. 
Although the raw data in half of the works are publicly available, only 4 out of 16 papers fully or partially published their datasets; \n3) the quality of the datasets is not guaranteed; for instance, most of the datasets are unbalanced. \nThe performance of security solutions, even those using Deep Learning, might vary across datasets. \nTraditionally, when evaluating different NN models in image classification, standard datasets such as MNIST for recognizing 10 handwritten digits and \nCIFAR10~ for recognizing 10 object classes are used for performance comparison of different NN models. \nHowever, there are no known standard datasets for evaluating NN models on different security problems. \nDue to such a limitation, we observe that most security papers using Deep Learning do not compare the performance of different security solutions even when they consider the same security problem. \nThus, it is recommended to generate and use a standard dataset for a specific security problem for comparison.\nIn conclusion, we think that there are three aspects that need to be improved in future research:\n\begin{enumerate}\n \item Developing standard datasets. \n \item Publishing source code and datasets. \n \item Improving the interpretability of models.\n\end{enumerate}\n}", "id": "1fff475a-d723-4b2c-bac7-55b4c1174e65", "level": "subsection", "origin_cites_number": 1, "parent_id": "e83e405a-97db-4a67-84c0-b0487015c734", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "Further areas of investigation" ], [ "subsection", "Availability of trained model and quality of dataset." ] ], "subsections": [], "title": "Availability of trained model and quality of dataset." }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:con}\nThis paper seeks to provide a dedicated review of the very recent research works on using Deep Learning techniques to solve computer security challenges. 
In particular, the review covers eight computer security problems being solved by applications of Deep Learning: security-oriented program analysis, defending ROP attacks, achieving CFI, defending network attacks, malware classification, system-event-based anomaly detection, memory forensics, and fuzzing for software security. Our observations of the reviewed works indicate that the literature of using Deep Learning techniques to solve computer security challenges is still at an earlier stage of development.", "id": "88571d44-18fa-4576-92d1-ac9e7eab286f", "level": "section", "origin_cites_number": 0, "parent_id": "825459b6-d132-4d0c-9bfb-51997111ee0c", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" }, { "cite_extract_rate": 0, "cites": [], "content": "Not applicable.", "id": "eabe40e8-0f6c-4026-8136-7e9f93471cea", "level": "section", "origin_cites_number": 0, "parent_id": "825459b6-d132-4d0c-9bfb-51997111ee0c", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "Availability of data and materials" ] ], "subsections": [], "title": "Availability of data and materials" }, { "cite_extract_rate": 0, "cites": [], "content": "This work was supported by \nARO W911NF-13-1-0421 (MURI), \nNSF CNS-1814679, and ARO W911NF-15-1-0576.", "id": "db0be638-d4d6-4911-bbe6-b3c95e7bc168", "level": "section", "origin_cites_number": 0, "parent_id": "825459b6-d132-4d0c-9bfb-51997111ee0c", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "Funding" ] ], "subsections": [], "title": "Funding" }, { "cite_extract_rate": 0, "cites": [], "content": "We are grateful to the anonymous reviewers for their useful comments and suggestions. 
\n\\bibliographystyle{unsrt}\n\\bibliography{reference}\n\\end{document}", "id": "37d47822-0bb0-46c8-8835-473014f3b19f", "level": "section", "origin_cites_number": 0, "parent_id": "825459b6-d132-4d0c-9bfb-51997111ee0c", "prefix_titles": [ [ "title", "Using Deep Learning to Solve Computer Security Challenges: A Survey" ], [ "section", "Acknowledgements" ] ], "subsections": [], "title": "Acknowledgements" } ]
124
[ 318, 8016, 8017, 8014, 3859, 8019, 3947, 8015, 8018 ]
0.996055
[ "Yi Tay", "Mostafa Dehghani", "Dara Bahri", "Donald Metzler" ]
Efficient Transformers: A Survey
2020
2020-09-14T20:38:14Z
cs.LG
Transformer model architectures have garnered immense interest lately due to their effectiveness across a range of domains like language, vision and reinforcement learning. In the field of natural language processing for example, Transformers have become an indispensable staple in the modern deep learning stack. Recently, a dizzying number of \emph{``X-former''} models have been proposed - Reformer, Linformer, Performer, Longformer, to name a few - which improve upon the original Transformer architecture, many of which make improvements around computational and memory \emph{efficiency}. With the aim of helping the avid researcher navigate this flurry, this paper characterizes a large and thoughtful selection of recent efficiency-flavored ``X-former'' models, providing an organized and comprehensive overview of existing work and models across multiple domains.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "84c962b5-92c2-4d52-a6ce-d3060c601e98", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ] ], "subsections": [ "1978fd93-ec47-4f73-b68a-5ad7017384f0", "d93522a9-7156-4a25-9283-15966d4a97a9", "331ef788-f428-4379-af9c-107d866ac486", "568b3cb1-3180-48cc-9b2a-86346db55bc0", "aac8d8dc-16e0-4e77-bff5-61ffbdf5ecd0" ], "title": "root" }, { "cite_extract_rate": 0.894736842105263, "cites": [ 679, 1465, 1473, 7360, 798, 38, 7333, 7298, 1488, 7, 4730, 1453, 8384, 1472, 4731, 9, 1461 ], "content": "Transformers~ are a formidable force in the modern deep learning stack. Transformers are pervasive and have made tremendous impact in many fields such as language understanding~ and image processing~. As such, it is only natural that a wealth of research has been dedicated to making fundamental improvements to the model over the past few years~. This immense interest has also spurred research into more efficient variants of the model~.\nThere has been such a surge of Transformer model variants proposed recently, that researchers and practitioners alike may find it challenging to keep pace with the rate of innovation. As of this writing and this manuscript's first draft (circa August 2020), there have been nearly a dozen new efficiency-focused models proposed in just the past 6 months. Thus, a survey of the existing literature is both beneficial for the community and quite timely.\nThe self-attention mechanism is a key defining characteristic of Transformer models. The mechanism can be viewed as a graph-like inductive bias that connects all tokens in a sequence with a relevance-based pooling operation. A well-known concern with self-attention is the quadratic time and memory complexity, which can hinder model scalability in many settings. There has been an overwhelming influx of model variants proposed recently that address this problem. 
We hereinafter name this class of models \\textit{``efficient Transformers''}. \nThe \\emph{efficiency} of a model can be interpreted in a variety of ways. It might refer to the memory footprint of the model, which is of importance when the memory of accelerators on which the model is running is limited. Efficiency might also refer to computational costs, e.g. the number of FLOPs, both during training and inference. In particular, for on-device applications, models often must operate within a highly constrained computational budget. Throughout this survey, we refer to the efficiency of Transformers both in terms of memory and computation. We are especially interested in how such models perform when they are applied to large inputs. \nEfficient self-attention models are crucial in applications that model long sequences. For example, documents, images, and videos are all often composed of a relatively large number of pixels or tokens. Efficiency in processing long sequences is therefore paramount for widespread adoption of Transformers.\nThis survey sets out to provide a comprehensive overview of the recent advances made in this class of models. We are primarily interested in modeling advances and architectural innovations that improve the general efficiency of Transformers, including but not limited to tackling the quadratic complexity issue of the self-attention mechanism or reducing the computation costs by means such as pooling and/or sparsity. We also briefly discuss general improvements and other efficiency improvements such as parameter sharing.\nWe propose a taxonomy of efficient Transformer models, characterizing them by their technical innovation and primary use case. Specifically, we review Transformer models that have applications in both language and vision domains, attempting to consolidate the literature across the spectrum. 
We also provide a detailed walk-through of many of these models and draw connections between them.", "id": "1978fd93-ec47-4f73-b68a-5ad7017384f0", "level": "section", "origin_cites_number": 19, "parent_id": "84c962b5-92c2-4d52-a6ce-d3060c601e98", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "Introduction" ] ], "subsections": [ "925a6e17-12dc-4a97-bc86-dbb3a6df52b1" ], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "This manuscript went through a round of revision in December 2021 (approximately a year and 4 months later after the first manuscript was written). The main changes involve adding our discussions to better reflect the state of research at this current point of time (new models, new paradigms) and also accurately reflect the current meta trends surrounding this research area. A retrospective section is posed near the end of the paper. See Appendix for a meaningful change log of what has happened as we transitioned to V2 of this survey.", "id": "925a6e17-12dc-4a97-bc86-dbb3a6df52b1", "level": "paragraph", "origin_cites_number": 0, "parent_id": "1978fd93-ec47-4f73-b68a-5ad7017384f0", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "Introduction" ], [ "paragraph", "Author notes on the updated version (December 2021)" ] ], "subsections": [ "46766dd5-31e6-4f8f-8d8d-17f539d3c014" ], "title": "Author notes on the updated version (December 2021)" }, { "cite_extract_rate": 0.5, "cites": [ 38 ], "content": "We wanted to post the update to arxiv in Jan but forgot about it. 
We lightly revised it again in Mar by adding newer SOTA sparse models such as ST-MoE-32B .\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{figs/transformer_arch.pdf}\n \\vspace{-2em}\n \\label{fig:transformer_arch}\n\\caption{Architecture of the standard Transformer~}\n\\label{stats}\n\\end{figure}", "id": "46766dd5-31e6-4f8f-8d8d-17f539d3c014", "level": "paragraph", "origin_cites_number": 2, "parent_id": "925a6e17-12dc-4a97-bc86-dbb3a6df52b1", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "Introduction" ], [ "paragraph", "Author notes on the updated version (December 2021)" ], [ "paragraph", "Author notes on the updated version (March 2022)" ] ], "subsections": [], "title": "Author notes on the updated version (March 2022)" }, { "cite_extract_rate": 1, "cites": [ 57, 38 ], "content": "This section provides an overview of the well-established Transformer architecture~. Transformers are multi-layered architectures formed by stacking Transformer blocks on top of one another. \nTransformer blocks are characterized by a multi-head self-attention mechanism, a position-wise feed-forward network, layer normalization~ modules and residual connectors. The input to the Transformer model is often a tensor of shape $\\reals^B \\times \\reals^N$, where $B$ is the batch size, $N$ the sequence length.\nThe input first passes through an embedding layer that converts each one-hot token representation into a $d_{model}$ dimensional embedding, i.e., $\\reals^B \\times \\reals^N \\times \\reals^{d_{model}}$. The new tensor is then additively composed with positional encodings and passed through a multi-headed self-attention module. Positional encodings can take the form of a sinusoidal input (as per~) or be trainable embeddings.\nThe inputs and output of the multi-headed self-attention module are connected by residual connectors and a layer normalization layer. 
The output of the multi-headed self-attention module is then passed to a two-layered feed-forward network which has its inputs/outputs similarly connected in a residual fashion with layer normalization. The sub-layer residual connection with layer norm is expressed as:\n\\begin{align*}\nX = \\text{LayerNorm}(F_S(X)) + X \n\\end{align*}\nwhere $F_S$ is the sub-layer module which is either the multi-headed self-attention or the position-wise feed-forward layers.", "id": "d93522a9-7156-4a25-9283-15966d4a97a9", "level": "section", "origin_cites_number": 2, "parent_id": "84c962b5-92c2-4d52-a6ce-d3060c601e98", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "Background on Transformers" ] ], "subsections": [ "2401a3ad-badb-490a-aa57-1f203fac9801", "a8d1ee9c-6011-42fd-8ebc-c270b514e543", "3e948159-1fd5-4d90-bdf1-7c0775042d3d", "bf77b2c0-220f-4f15-ad9a-4b0683e44d0f", "7b7010c5-4356-460b-b201-085f8da41a41", "0c503c99-673c-4fd8-b89d-b8e2ac45d86a" ], "title": "Background on Transformers" }, { "cite_extract_rate": 0, "cites": [], "content": "The Transformer model leverages a multi-headed self-attention mechanism. The key idea behind the mechanism is for each element in the sequence to learn to gather from other tokens in the sequence. The operation for a single head is defined as:\n\\begin{align*}\nA_h = \\text{Softmax}(\\alpha Q_hK_h^\\top)V_h, \n\\end{align*}\nwhere $X$ is a matrix in $\\reals^{N \\times d}$, $\\alpha$ is a scaling factor that is typically set to $\\frac{1}{\\sqrt{d}}$, $Q_h=X\\bm{W}_q, K_h=X\\bm{W}_k$ and $V_h=X\\bm{W}_v$ are linear transformations applied on the temporal dimension of the input sequence, $\\bm{W}_q, \\bm{W}_k, \\bm{W}_v \\in \\mathbb{R}^{d \\times \\frac{d}{H}}$ are the weight matrices (parameters) for the query, key, and value projections that project the input $X$ to an output tensor of $\\frac{d}{H}$ dimensions per head, and $H$ is the number of heads. 
Softmax is applied row-wise.\nThe outputs of heads $A_1 \\cdots A_{H}$ are concatenated together and passed into a dense layer. The output $Y$ can thus be expressed as $Y = \\bm{W}_o[A_1 \\cdots A_{H}]$, where $\\bm{W}_{o}$ is an output linear projection. Note that the computation of $A$ is typically done in a parallel fashion by considering tensors of $\\reals^B \\times \\reals^N \\times \\reals^{H} \\times \\reals^{\\frac{d}{H}}$ and computing the linear transforms for all heads in parallel.\nThe attention matrix $A=QK^\\top$ is chiefly responsible for learning alignment scores between tokens in the sequence. In this formulation, the dot product between each element/token in the query ($Q$) and key ($K$) is taken. This drives the self-alignment process in self-attention whereby tokens learn to \\textit{gather} from each other.", "id": "2401a3ad-badb-490a-aa57-1f203fac9801", "level": "subsection", "origin_cites_number": 0, "parent_id": "d93522a9-7156-4a25-9283-15966d4a97a9", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "Background on Transformers" ], [ "subsection", "Multi-Head Self-Attention" ] ], "subsections": [], "title": "Multi-Head Self-Attention" }, { "cite_extract_rate": 0, "cites": [], "content": "The outputs of the self-attention module are then passed into a two-layered feed-forward network with ReLU activations. This feed-forward layer operates on each position independently. 
This is expressed as follows:\n\\begin{align*}\nF_2(ReLU(F_1(X_A))) \n\\end{align*}\nwhere $F_1$ and $F_2$ are feed-forward functions of the form $Wx + b$.", "id": "a8d1ee9c-6011-42fd-8ebc-c270b514e543", "level": "subsection", "origin_cites_number": 0, "parent_id": "d93522a9-7156-4a25-9283-15966d4a97a9", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "Background on Transformers" ], [ "subsection", "Position-wise Feed-forward Layers" ] ], "subsections": [], "title": "Position-wise Feed-forward Layers" }, { "cite_extract_rate": 0, "cites": [], "content": "Each Transformer block can be expressed as:\n\\begin{align*}\nX_A &= \\text{LayerNorm}(\\text{MultiheadAttention}(X, X)) + X\\\\\nX_B &= \\text{LayerNorm}(\\text{PositionFFN}(X_A)) + X_A \n\\end{align*}\nwhere $X$ is the input of the Transformer block and $X_B$ is the output of the Transformer block. Note that the $\\text{MultiheadAttention}()$ function accepts two argument tensors, one for query and the other for key-values. If the first argument and second argument are the same input tensor, this is the $\\text{MultiheadSelfAttention}$ mechanism.", "id": "3e948159-1fd5-4d90-bdf1-7c0775042d3d", "level": "subsection", "origin_cites_number": 0, "parent_id": "d93522a9-7156-4a25-9283-15966d4a97a9", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "Background on Transformers" ], [ "subsection", "Putting it all together" ] ], "subsections": [], "title": "Putting it all together" }, { "cite_extract_rate": 1, "cites": [ 1488, 707, 4732, 8454 ], "content": "The computation costs of Transformers are derived from multiple factors. Firstly, the memory and computational complexity required to compute the attention matrix is quadratic in the input sequence length, i.e., $N \\times N$. In particular, the $QK^\\top$ matrix multiplication operation alone consumes $N^2$ time and memory. 
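The background equations above (single-head attention, multi-head concatenation, the position-wise FFN, and the residual-plus-layer-norm composition) can be condensed into a minimal NumPy sketch. This is an illustrative toy, not a reference implementation: the parameter names and the omission of bias terms are our simplifications, and the explicit $N \times N$ matrix materialized by `Q @ K.T` is precisely the quadratic-cost term just discussed.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable row-wise softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def self_attention_head(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_head = Q.shape[-1]
    # The explicit N x N attention matrix: the quadratic-cost term.
    A = softmax((Q @ K.T) / np.sqrt(d_head))
    return A @ V

def transformer_block(X, params):
    # Multi-head attention: run H heads, concatenate, project with Wo.
    heads = [self_attention_head(X, Wq, Wk, Wv)
             for (Wq, Wk, Wv) in params["heads"]]
    attn = np.concatenate(heads, axis=-1) @ params["Wo"]
    X_A = layer_norm(attn) + X  # sub-layer residual, as written in the text
    # Position-wise two-layer FFN with ReLU.
    ffn = np.maximum(X_A @ params["W1"], 0.0) @ params["W2"]
    return layer_norm(ffn) + X_A
```

Note that the sketch follows the text's composition `LayerNorm(F_S(X)) + X` verbatim rather than the more common `LayerNorm(X + F_S(X))` ordering.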
This restricts the overall utility of self-attentive models in applications which demand the processing of long sequences. Memory restrictions tend to apply more to training (due to gradient updates) and generally have a lesser impact on inference (no gradient updates). The quadratic cost of self-attention impacts speed$\\footnote{We would like to emphasize that complexity does not always translate to real world throughput or latency. A model of linear complexity can be slower than a model with quadratic complexity in practice.}$ in both training and inference. The compute costs of the self-attention mechanism contribute partially to the overall compute cost of the Transformer. A non-trivial amount of compute still stems from the two-layer feed-forward network at every Transformer block (approximately half the compute time and/or FLOPs). The complexity of the FFN is linear with respect to sequence length but is generally still costly. Hence, a large portion of recent work has explored sparsity~ as a means to scale up the FFN without incurring compute costs. Efficient attention and efficient models are generally orthogonal - although some efficient attention methods explicitly aim to reduce the sequence length~ and as a result also save computation costs in both aspects. Efficiency and computational costs are generally a complicated affair and we would suggest readers peruse~ for more details on trade-offs, intricacies, etc.", "id": "bf77b2c0-220f-4f15-ad9a-4b0683e44d0f", "level": "subsection", "origin_cites_number": 4, "parent_id": "d93522a9-7156-4a25-9283-15966d4a97a9", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "Background on Transformers" ], [ "subsection", "On the compute cost of Transformers" ] ], "subsections": [], "title": "On the compute cost of Transformers" }, { "cite_extract_rate": 1, "cites": [ 679, 7, 9 ], "content": "It is important to note the differences in how the Transformer blocks are used. 
Transformers can primarily be used in three ways, namely: (1) \\emph{encoder-only} (e.g., for classification), (2) \\emph{decoder-only} (e.g., for language modeling), and (3) \\emph{encoder-decoder} (e.g., for machine translation). In encoder-decoder mode, there are usually multiple multi-headed self-attention modules, including a standard self-attention in both the encoder and the decoder, along with an encoder-decoder cross-attention that allows the decoder to utilize information from the encoder.\nThis influences the design of the self-attention mechanism. In the encoder mode, there is no restriction or constraint that the self-attention mechanism has to be causal, i.e., dependent solely on the present and past tokens. In the encoder-decoder setting, self-attention used in the decoder (i.e. across decoding positions) must be causal since each auto-regressive decoding step can only depend on previous tokens, whereas the self-attention used in the encoder need not. Fulfilling this requirement can prove challenging for many efficient self-attention designs.\nThe mode of usage of a Transformer model generally depends on the target application. Given an input sequence, the sequence is typically passed through an encoder stack. At this stage, there might be two options. For multi-class classification, a linear layer with Softmax outputs typically projects the sequence representation down to the number of classes. In the case of BERT~, this is a \\textit{[CLS]} token that is appended to the start of the sequence as a prefix. Recent work has also explored the usage of Encoder-Decoder architectures for classification, such as T5~. Decoder-only models are typically used for generation and are trained using a language modeling objective (of predicting the next token). Due to the nature of the loss, these models are often superior for open-ended generation~. 
A decoder-only model needs to be causal and an upper triangular mask needs to be applied to prevent tokens from peeping into the future. We refer interested readers to~ for more detailed descriptions of the various Transformer modes.", "id": "7b7010c5-4356-460b-b201-085f8da41a41", "level": "subsection", "origin_cites_number": 3, "parent_id": "d93522a9-7156-4a25-9283-15966d4a97a9", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "Background on Transformers" ], [ "subsection", "Transformer Mode" ] ], "subsections": [], "title": "Transformer Mode" }, { "cite_extract_rate": 0.8571428571428571, "cites": [ 1453, 8384, 7298, 793, 1473, 7 ], "content": "Transformers have a wide range of applications ranging from language to vision, speech and reinforcement learning. The architecture was initially introduced within the context of sequence-to-sequence machine translation in NLP. Following which, most of the applications of Transformers have been within the context of language - given the concurrent advance of pretrained models such as BERT~. Many early improvements in this line of efficient Transformers are therefore focused on language processing applications~. For historical reasons, this survey paper leans slightly towards language. However, it is also worth noting that a substantial number of papers considered in our survey also consider multimodal applications whereby a sequence processor is required. For example, several of the surveyed works consider generative modeling tasks on images or other modalities such as proteins. 
\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{figs/Venn_Transformers_new.pdf}\n \\caption{Taxonomy of Efficient Transformer Architectures.}\n \\label{fig:taxonomy}\n\\end{figure}", "id": "0c503c99-673c-4fd8-b89d-b8e2ac45d86a", "level": "subsection", "origin_cites_number": 7, "parent_id": "d93522a9-7156-4a25-9283-15966d4a97a9", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "Background on Transformers" ], [ "subsection", "Applications" ] ], "subsections": [], "title": "Applications" }, { "cite_extract_rate": 0.7741935483870961, "cites": [ 1484, 4735, 7840, 707, 1473, 1494, 798, 7366, 1133, 1503, 1182, 1482, 7333, 1490, 7298, 793, 1488, 4733, 1453, 4731, 7370, 4734, 8454, 1499 ], "content": "In this section, we provide a high-level overview of efficient Transformer models. We begin by presenting a characterization of the different models. Table~\\ref{tab:summarytable} lists the efficient Transformers released to date while Figure \\ref{fig:taxonomy} presents a graphical overview of several key efficient Transformer models. 
\n\\begin{table}[t]\n \\centering\n \\small\n \\begin{tabular}{l|c|c|l}\n \\hline\n Model / Paper & Complexity & Decode & Class\\\\\n \\hline\n Memory Compressed~ & $\\mathcal{O}(N_c^2)$& $\\checkmark$ & FP+M \\\\\n Image Transformer~ & $\\mathcal{O}(N.m)$& $\\checkmark$ & FP \\\\\n Set Transformer~ & $\\mathcal{O}(kN)$& $\\text{\\xmark}$ & M\\\\ \n Transformer-XL~ & $\\mathcal{O}(N^2)$ & $\\checkmark$ & RC \\\\ \n Sparse Transformer ~ & $\\mathcal{O}(N \\sqrt{N})$ & $\\checkmark$& FP\\\\\n Reformer~ & $\\mathcal{O}(N \\log N)$ & $\\checkmark$ & LP\\\\\n Routing Transformer~ & $\\mathcal{O}(N\\sqrt{N)}$ & $\\checkmark$ & LP \\\\\n Axial Transformer~ & $\\mathcal{O}(N \\sqrt{N})$ & $\\checkmark$ & FP \\\\\n Compressive Transformer~ & $\\mathcal{O}(N^2)$ & $\\checkmark$& RC \\\\\n Sinkhorn Transformer~ & $\\mathcal{O}(B^2)$ & $\\checkmark$ & LP \\\\\n Longformer~ & $\\mathcal{O}(n(k+m))$ & $\\checkmark$ & FP+M\\\\ \n ETC~ & $\\mathcal{O}(N_g^2 + N N_g)$ & $\\text{\\xmark}$ & FP+M\\\\ \n Synthesizer~ & $\\mathcal{O}(N^2)$ & $\\checkmark$ & LR+LP\\\\\n Performer~ & $\\mathcal{O}(N)$ & $\\checkmark$ & KR\\\\\n Funnel Transformer~ & $\\mathcal{O}(N^2)$ & $\\checkmark$ & FP+DS\\\\\n Linformer~ & $\\mathcal{O}(N)$ & $\\text{\\xmark}$ & LR \\\\\n Linear Transformers ~ & $\\mathcal{O}(N)$ & $\\checkmark$ & KR \\\\\n Big Bird~ &$\\mathcal{O}(N)$ &$\\text{\\xmark}$ & FP+M\\\\\n Random Feature Attention~ & $\\mathcal{O}(N)$ & $\\checkmark$ & KR\\\\\n Long Short Transformers~ &$\\mathcal{O}(kN)$ & $\\checkmark$ & FP + LR \\\\ \n Poolingformer~ & $\\mathcal{O}(N)$ & $\\text{\\xmark}$ & FP+M \\\\\n Nystr\\\"{o}mformer~ & $\\mathcal{O}(kN)$ & $\\text{\\xmark}$ & M+DS\\\\\n Perceiver~ &$\\mathcal{O}(kN)$ &$\\checkmark$ & M+DS \\\\ \n Clusterformer~ & $\\mathcal{O}(N\\log N)$ & $\\text{\\xmark}$ & LP \\\\\n Luna~ & $\\mathcal{O}(kN)$ &$\\text{\\checkmark}$ & M\\\\\n TokenLearner~ & $\\mathcal{O}(k^2)$ & $\\text{\\xmark}$ &DS \\\\\n \\hline\n Adaptive Sparse Transformer~ & 
$\\mathcal{O}(N^2)$ & $\\text{\\checkmark}$ & Sparse \\\\\n Product Key Memory~ & $\\mathcal{O}(N^2)$ & $\\checkmark$& Sparse\\\\\n Switch Transformer~ & $\\mathcal{O}(N^2)$& $\\checkmark$ & Sparse\\\\\n ST-MoE~ & $\\mathcal{O}(N^2)$& $\\checkmark$ & Sparse\\\\\n GShard~ & $\\mathcal{O}(N^2)$ & $\\checkmark$ & Sparse \\\\\n Scaling Transformers~ & $\\mathcal{O}(N^2)$ &$\\checkmark$ & Sparse\\\\\n GLaM~ & $\\mathcal{O}(N^2)$ &$\\checkmark$ & Sparse\\\\ \n \\hline\n \\end{tabular}\n \\caption{Summary of Efficient Transformer Models. Models in the first section are mainly efficient attention methods. Models in the subsequent lower section generally refer to sparse models. Class abbreviations include: FP = Fixed Patterns or Combinations of Fixed Patterns, M = Memory, LP = Learnable Pattern, LR = Low-Rank, KR = Kernel, RC = Recurrence, and DS = Downsampling. Furthermore, $N$ generally refers to the sequence length and $B$ is the local window (or block) size. $N_g$ and $N_c$ denote global model memory length and convolutionally-compressed sequence lengths respectively.}\n \\label{tab:summarytable}\n\\end{table}", "id": "331ef788-f428-4379-af9c-107d866ac486", "level": "section", "origin_cites_number": 31, "parent_id": "84c962b5-92c2-4d52-a6ce-d3060c601e98", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ] ], "subsections": [ "dce52792-e50d-4180-8ffb-06888cef96b1", "1068ae2e-644b-469f-bc75-6922be1837a6" ], "title": "A Survey of Efficient Transformer Models" }, { "cite_extract_rate": 0.8125, "cites": [ 1484, 4735, 7840, 7053, 707, 1473, 4737, 4736, 1494, 798, 1133, 1182, 1482, 7333, 1490, 7298, 793, 1488, 1451, 4531, 1501, 1453, 1472, 4734, 7370, 8454 ], "content": "This section outlines a general taxonomy of efficient Transformer models, characterized by their core techniques and primary use case. 
While the primary goal of most of these models is to improve the memory complexity of the self-attention mechanism, we also include methods that improve the general efficiency of the Transformer architecture.\n\\begin{itemize}\n\\item \\textbf{Fixed Patterns (FP)} - The earliest modifications to self-attention simply sparsify the attention matrix by limiting the field of view to fixed, predefined patterns such as local windows and block patterns of fixed strides.\n\\begin{itemize}\n \\item \\textbf{Blockwise Patterns} The simplest example of this technique in practice is the blockwise (or chunking) paradigm which considers blocks of local receptive fields by chunking input sequences into fixed blocks. Examples of models that do this include Blockwise~ and/or Local Attention~. Chunking input sequences into blocks reduces the complexity from $N^2$ to $B^2$ (block size) with $B \\ll N$, significantly reducing the cost. These blockwise or chunking methods serve as a basis for many more complex models. \n \\item \\textbf{Strided Patterns} Another approach is to consider strided attention patterns, i.e., only attending at fixed intervals. Models such as Sparse Transformer~ and/or Longformer~ employ strided or ``dilated'' windows.\n \\item \\textbf{Compressed Patterns} - Another line of attack here is to use some pooling operator to down-sample the sequence length to be a form of fixed pattern. For instance, Compressed Attention~ uses strided convolution to effectively reduce the sequence length.\n\\end{itemize}\n\\item \\textbf{Combination of Patterns (CP)} - The key idea of combined\\footnote{We note that this is also often referred to as factorization approaches, e.g., in~. We decide to refer to this class of models as combination approaches because (1) it is a better fit to what these models are actually doing and (2) to avoid confusion with matrix factorization or low-rank approaches. 
} approaches is to improve coverage by combining two or more distinct access patterns. For example, the Sparse Transformer~ combines strided and local attention by assigning half of its heads to each pattern. Similarly, Axial Transformer~ applies a sequence of self-attention computations given a high dimensional tensor as input, each along a single axis of the input tensor. In essence, the combination of patterns reduces memory complexity in the same way that fixed patterns do. The difference, however, is that the aggregation and combination of multiple patterns improves the overall coverage of the self-attention mechanism.\n\\item \\textbf{Learnable Patterns (LP)} - A natural extension to fixed, pre-determined patterns is the use of \\emph{learnable} ones. Unsurprisingly, models using learnable patterns aim to learn the access pattern in a data-driven fashion. A key characteristic of learning patterns is to determine a notion of token relevance and then assign tokens to buckets or clusters~. Notably, Reformer~ introduces a hash-based similarity measure to efficiently cluster tokens into chunks. In a similar vein, the Routing Transformer~ employs online $k$-means clustering on the tokens. Meanwhile, the Sinkhorn Sorting Network~ exposes the sparsity in attention weights by learning to sort blocks of the input sequence. In all these models, the similarity function is trained end-to-end jointly with the rest of the network. The key idea of learnable patterns is still to exploit fixed patterns (chunked patterns). However, this class of methods learns to sort/cluster the input tokens - enabling a more optimal global view of the sequence while maintaining the efficiency benefits of fixed patterns approaches.\n\\item \\textbf{Neural Memory} - Another prominent method is to leverage a learnable side memory module that can access multiple tokens at once. 
A common form is \\emph{global} neural\\footnote{We use the term neural here to refer to a representation-like memory that is often manifested in the model.} memory which is able to access the entire sequence. The global tokens act as a form of model memory that learns to gather from input sequence tokens. This was first introduced in Set Transformers~ as the \\textit{inducing points} method. These parameters are often interpreted as ``memory'' and are used as a form of \\textit{temporary} context for future processing. This can be thought of as a form of parameter attention~. Global memory tokens are also used in ETC~ and Longformer~. With a limited amount of neural memory (or inducing points), we are able to perform a preliminary \\textit{pooling}-like operation of the input sequence to compress the input sequence - a neat trick to have at one's disposal when designing efficient self-attention modules.\n\\item \\textbf{Low-Rank Methods} - Another emerging technique is to improve efficiency by leveraging low-rank approximations of the self-attention matrix. The key idea is to assume low-rank structure in the $N\\times N$ matrix. The Linformer~ is a classic example of this technique, as it projects the length dimension of keys and values to a lower-dimensional representation ($N \\rightarrow k$). It is easy to see that the low-rank method ameliorates the memory complexity problem of self-attention because the $N \\times N$ matrix is now decomposed to $N \\times k$.\n\\item \\textbf{Kernels} - Another recently popular method to improve the efficiency of Transformers is to view the attention mechanism through kernelization. The usage of kernels~ enables clever mathematical re-writing of the self-attention mechanism to avoid explicitly computing the $N \\times N$ matrix. Since kernels are a form of approximation of the attention matrix, they can also be viewed as a type of low-rank approach~. 
Examples of recent work in this area include Performers, Linear Transformers and Random Feature Attention (RFA,~).\n\\item \\textbf{Recurrence} - A natural extension to the blockwise method is to connect these blocks via recurrence. Transformer-XL~ proposed a segment-level recurrence mechanism that connects multiple segments and blocks. These models can, in some sense, be viewed as \\textit{fixed pattern} models. However, we decided to give it its own category due to its deviation from other block / local approaches.\n\\item \\textbf{Downsampling} - Another popular method of reducing computation cost is to reduce the resolution of the sequence, hence reducing computation costs by a commensurate factor. Examples of this class of models include Perceiver~, Funnel Transformers~, Swin Transformer~, and Charformer~ models. Notably, there might also be some form of overlap of this class of models with models that leverage \\textit{memory} tokens as models such as Set Transformer can also be viewed as a form of downsampling, albeit within the attention mechanism. The recent Nystr\\\"{o}mformer~, on the surface, may seem like a low-rank or kernel-based approach. However, it is actually a downsampling approach where the \\textit{`landmarks`} are simply stride-based pooling - in similar spirit to Set Transformer, Funnel Transformer or Perceiver. \n\\item \\textbf{Sparse Models and Conditional Computation} - While not targeted specifically at the attention modules, sparse models \\textit{sparsely} activate a subset of the parameters which generally improves the parameter to FLOPs ratio. Examples of this class of model include Switch Transformers~, ST-MoE , GShard~, Product-Key Memory Layers~. Within the scope of our studied models, sparse models typically operate on an adaptive basis in which the sparsity is typically learned (via a mixture-of-experts-like mechanism). Within this context, we can also consider sparsification of attention weights to fall under this paradigm. 
For this reason, we believe there is a close connection to fixed or learned patterns in attention. However, we believe that the emergence of an entire research direction~ based on sparsity should warrant a new category of efficient Transformers.\n\\end{itemize}\nWe note that these buckets are a broad characterization of the different efficient Transformer models. In reality, there is no sharp boundary between the buckets as models may be comprised of multiple technical innovations. For example, the $k$-means clustering in Routing Transformer~ can also be interpreted as a form of global model memory approach, since one can view the centroids as parameterized model memory. In Reformer, however, clustering is used to learn the sparsity pattern of the attention weights. Additionally, pooling~ can also be interpreted as a form of model memory mechanism. We also note that the recent xformer models (circa December 2021) have started adopting some form of two-staged attention mechanism. Many times, these attention mechanisms explicitly combine one or more flavours of the above, e.g., local windows and then memory in Poolingformer~, or Long Short Transformers~ that utilize low rank attention with fixed windows (e.g., a combination of local attention with Linformer-like inductive bias).", "id": "dce52792-e50d-4180-8ffb-06888cef96b1", "level": "subsection", "origin_cites_number": 32, "parent_id": "331ef788-f428-4379-af9c-107d866ac486", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "A Taxonomy of Efficient Transformers" ] ], "subsections": [], "title": "A Taxonomy of Efficient Transformers" }, { "cite_extract_rate": 0, "cites": [], "content": "This section delves into the details of several key efficient Transformer models, discussing their pros, cons, and unique talking points. 
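As a concrete anchor for the taxonomy's simplest bucket, fixed blockwise patterns, the following NumPy sketch (our own illustrative toy, not any specific paper's implementation) contrasts full attention with chunked local attention. With block size $B \ll N$, each block costs $B^2$, so the total cost drops from $N^2$ to roughly $N \cdot B$.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable row-wise softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def full_attention(Q, K, V):
    # O(N^2): materializes the full N x N attention matrix.
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

def blockwise_attention(Q, K, V, block=4):
    # Fixed-pattern ("chunking") attention: each token attends only
    # within its own block, so cost is (N / B) * B^2 = N * B.
    N, d = Q.shape
    out = np.empty_like(V)
    for start in range(0, N, block):
        s = slice(start, start + block)
        out[s] = softmax(Q[s] @ K[s].T / np.sqrt(d)) @ V[s]
    return out
```

The per-block $B \times B$ matrices replace the single global $N \times N$ one; models such as Sparse Transformer or Longformer then combine such local windows with strided or global patterns to recover coverage.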
The goal here is not to exhaustively detail all such models, but rather to cover a representative sample of models.", "id": "1068ae2e-644b-469f-bc75-6922be1837a6", "level": "subsection", "origin_cites_number": 0, "parent_id": "331ef788-f428-4379-af9c-107d866ac486", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ] ], "subsections": [ "5c93d5a4-2735-4123-9126-b5f6deff845f", "f3229814-d986-4490-8337-ac4d6c1f1b7a", "02a47794-db5c-4bb8-8d87-cf564d18fe45", "3d5a3287-d154-4aea-a184-9211b8d98e15", "0f8216da-de45-4a85-9828-916ee5003ac5", "6e85534f-068e-4a9c-ab6e-7292792a2be8", "735b2410-fd41-456d-8b2c-a547b050834b", "3bd19aae-a8c0-4c3c-af82-4fd6d74ee8e8", "fedcf63b-a401-4b9d-9ca0-3aa532a5dc1a", "f4887ab8-1a5a-4c38-a880-3fcd20b37b2e", "00fb8acf-94cc-4b0a-96a9-b2321e28bdbd", "00c367aa-0d44-4c9b-aa7c-c00fe6726789", "af6a261e-54bd-47b2-bc1e-362d6141ff6e", "c86fa3dc-2cb0-454b-b346-23e3e03e17ae", "4587bece-9f38-4735-9cf0-5c393b64224d", "76930262-894e-4e4c-855b-52de1d76e24c", "397e594a-88ee-4d92-94b6-456d9e680abe", "3a6ff1e6-8d3f-41b7-899b-c04469e7b874", "d74b6b7e-1acf-4525-be7f-2f0b21a45203" ], "title": "Detailed Walk-through of Efficient Transformer Models" }, { "cite_extract_rate": 0.8, "cites": [ 1453, 7333, 2576, 1484, 7298, 7370, 793, 1494, 1473, 1133, 7366, 798 ], "content": "We begin by discussing local and fixed patterns models such as the Memory Compressed Transformer~ and Image Transformer~. We then discuss the Set Transformers~, an early approach for utilizing global model memory. Following which, we move on to models that utilize combinations of patterns such as Sparse Transformers~, CCNet~, and Axial Transformers~. Next, we discuss Longformer~ and ETC~, as examples of memory-based Sparse Transformer approaches. 
Our detailed walkthrough then moves on to models that incorporate learnable patterns (LP) such as Routing Transformers~, Reformer~ and Sinkhorn Transformers~. After which, we introduce Linformer~ and Synthesizers~, models that can be considered low-rank factorization approaches. We then discuss models based on kernel approaches such as Performer~ and Linear Transformers~. Following which, we discuss the models that are based on segment-based recurrence such as Transformer-XL~ and Compressive Transformers~. Finally, we discuss the family of Sparse models which primarily leverage Mixture-of-Experts (MoE) type architectures and conditional computation to achieve computational efficiency. The logical flow of this section is aimed to be loosely chronological instead of categorically organized (with the exception of certain buckets like recurrence or sparsity that are more orthogonal approaches). We believe this is pedagogically helpful.", "id": "5c93d5a4-2735-4123-9126-b5f6deff845f", "level": "paragraph", "origin_cites_number": 15, "parent_id": "1068ae2e-644b-469f-bc75-6922be1837a6", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "paragraph", "Structure of this section" ] ], "subsections": [], "title": "Structure of this section" }, { "cite_extract_rate": 1, "cites": [ 1133 ], "content": "Memory Compressed Transformer~ is one of the early attempts at modifying Transformers to better handle longer sequences. 
The modification introduced by Memory Compressed Transformers is twofold: localizing the attention span and using memory-compressed attention.\nA straightforward solution for dealing with long sequences in Transformers is to limit the attention span to a local neighborhood. proposed dividing the input sequence into blocks of similar length so that self-attention can be computed within each block independently. This keeps the cost of attention per block constant, thus the number of activations scales linearly with the input length.\nThe idea behind memory-compressed attention is to reduce the number of keys and values using a strided convolution, while the queries remain unchanged. 
This reduces the size of the attention matrix, and the cost of the attention computation, by a compression factor that depends on the kernel size and stride of the convolution. Unlike local attention, memory-compressed attention lets the model exchange information globally across the whole input sequence.\nFor a block size of $b$, the computational and memory cost of self-attention in each block is $\\mathcal{O}(b^2)$. Given there are $n/b$ blocks, the computational and memory cost of local attention is $\\mathcal{O}(b \\cdot n)$. 
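As a concrete illustration, the following is a minimal numpy sketch of memory-compressed attention. It uses strided average pooling as a stand-in for the learned strided convolution, omits multi-head and projection details, and all function and variable names are ours, not from the original paper:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compress(x, k):
    # Stand-in for the learned strided convolution: average-pool
    # non-overlapping windows of size k along the sequence axis.
    n, d = x.shape
    return x[: n - n % k].reshape(-1, k, d).mean(axis=1)

def memory_compressed_attention(Q, K, V, k=2):
    # Queries stay at full length n; keys/values shrink to n // k,
    # so the attention matrix is (n, n // k) instead of (n, n).
    Kc, Vc = compress(K, k), compress(V, k)
    weights = softmax(Q @ Kc.T / np.sqrt(Q.shape[-1]))
    return weights @ Vc, weights

rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
out, weights = memory_compressed_attention(Q, K, V, k=2)
```

Because the keys and values are pooled by a factor of $k$, the attention matrix has shape $(n, n/k)$ rather than $(n, n)$.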
For memory-compressed attention, applying a convolution with kernel size and stride $k$ reduces the computational and memory cost of the attention mechanism to $\\mathcal{O}(n \\cdot n/k)$.\nImage Transformer~, inspired by convolutional neural networks, restricts the receptive field of self-attention to only local neighborhoods. This helps the model scale up to process larger batch sizes while keeping the likelihood loss tractable. Besides the efficiency, adapting the notion of locality can be a desirable inductive bias for processing images. 
Image Transformer adopts an encoder-decoder architecture, where the encoder generates a contextualized representation for every pixel-channel in the inputs and the decoder autoregressively generates one channel per pixel at each time step.\nLimiting the receptive field to a local neighborhood~ addresses the issues with the computational and memory costs of running global self-attention on large inputs, but changing the neighborhood per query position would prohibit packing the computations of the self-attention into two matrix multiplications. To avoid that, Image Transformer proposes partitioning the inputs into ``query blocks'' and their associated ``memory blocks'', where for all queries from a single query block, the model attends to the same memory block.\nThere are two different schemes for choosing query blocks and their associated memory block neighborhoods: \\emph{1-dimensional local attention} and \\emph{2-dimensional local attention}. Here we briefly explain these schemes in the decoder case. 
\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs/image_transformer_1.pdf}\n \\caption{1-dimensional local attention}\n \\label{fig:image_transformer-1d}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs/image_transformer_2.pdf}\n \\caption{2-dimensional local attention}\n \\label{fig:image_transformer-2d}\n \\end{subfigure}\n \\caption{Attention span in Image Transformer on a two-dimensional input.}\n \\label{fig:image_transformer}\n\\end{figure}\nFor the 1-dimensional local attention, the image is flattened in the raster order\\footnote{Given a 2D image as a grid of pixels, the horizontally left-to-right scanning of pixels, line-by-line, creates a raster order.} and partitioned into non-overlapping query blocks $Q$ of length $l_q$, and for each query block, a memory block $M$ is built from the same pixels in $Q$ as well as a fixed number of pixels, $l_m$, generated before the query pixel. In 1-dimensional local attention, pixels are thus generated in raster order. \nFor the 2-dimensional local attention, the image is partitioned into multiple non-overlapping rectangular query blocks of length $l_q = w_q \\times h_q$. The memory block extends the query block by $h_m$ pixels to the top and by $w_m$ pixels to the left and to the right, so $l_m = (w_q \\times h_q) + 2 \\times (h_m + w_m)$.\nA query pixel can attend to every position in its memory block. In the 2-dimensional local attention, pixels in the image are generated one query block after another. 
Blocks are generated in raster order, as are the pixels inside each block.\nIn Image Transformer, the attention matrix has the shape of $l_q \\times m$, where $l_q$ is the chosen length for the query blocks and $m$ is the length of the memory block (which is in fact $l_q + l_m$). Given that memory blocks do not overlap, we have to compute $n / l_q$ attention matrices. Thus the memory and computational complexity of Image Transformer is $\\mathcal{O}(n\\cdot m)$.\nImage Transformer, and in general restricting the context in the attention mechanism to a local neighborhood, can decrease the cost of memory and computation at the price of losing the global receptive field. This can be an issue where global information is required to solve the task. 
Also, local-attention has quadratic complexity with respect to the region length, thereby introducing an extra hyper-parameter in the trade-off between performance and computational complexity.", "id": "23f00919-597b-4b4f-b4e1-9b61ef53d23b", "level": "paragraph", "origin_cites_number": 0, "parent_id": "02a47794-db5c-4bb8-8d87-cf564d18fe45", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Image Transformer" ], [ "paragraph", "Restrictions" ] ], "subsections": [], "title": "Restrictions" }, { "cite_extract_rate": 0.5, "cites": [ 1523 ], "content": "The Set Transformer~ adapts the Transformer model for \\emph{set-input} problems - that is, problems wherein the input is a set of features and the output is some function of this set (and is thereby invariant to the permutation, or ordering, of the input features). The Set Transformer leverages attention to capture interactions between elements of the input set. Furthermore, it applies the idea of \\emph{inducing points} from the sparse Gaussian process literature to reduce the complexity of attention from quadratic to linear in the size of the input set.\nProblems involving sets of objects often have a \\emph{permutation invariance} property: the target value for the set is the same regardless of the order of the objects in the set. proved that all permutation-invariant functions can be represented by the following functional form:\n\\begin{align*}\n\\text{network}\\left(\\{x_1,\\dots, x_N\\}\\right) = \\rho\\left(\\text{pool}\\left(\\{\\phi(x_1),\\dots,\\phi(x_N)\\}\\right)\\right),\n\\end{align*}\nwhere the pooling function $\\text{pool}$ is a simple summation and $\\phi$ and $\\rho$ are continuous functions. 
This form can be interpreted as the composition of an \\emph{encoder} $\\phi$ and \\emph{decoder} $\\rho\\left(\\text{pool}(\\cdot)\\right)$.\nWhile this form is a universal approximator in the space of permutation-invariant functions, it is unclear how well such models fit tasks in practice. The Set Transformer proposes a solution that can be viewed as an encoder and pooled decoder, but where, unlike the form given above, the encoder and decoder can attend to input elements individually and the pooling function is parameterized.", "id": "3d5a3287-d154-4aea-a184-9211b8d98e15", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "1068ae2e-644b-469f-bc75-6922be1837a6", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Set Transformer" ] ], "subsections": [ "5c61d8dc-eca9-4585-9657-44c54041507e", "65e9033a-faf1-4149-891c-da5618de18f2" ], "title": "Set Transformer" }, { "cite_extract_rate": 0, "cites": [], "content": "The model introduces the following constructs: ``Multihead Attention Block'' (MAB), ``Set Attention Block'' (SAB), ``Induced Set Attention Block'' (ISAB), and ``Pooling by Multihead Attention'' (PMA). They are defined as follows.\n\\begin{align*}\n\\mathbf{\\textbf{MAB}(X, Y)} &:= \\text{LayerNorm}\\left(H + \\text{rFF}(H)\\right),\\\\\nH &:= \\text{LayerNorm}\\left(X + \\text{MultiheadAttention}(X, Y)\\right),\\\\\n\\mathbf{\\textbf{SAB}(X)} &:= \\text{MAB}(X, X),\\\\\n\\mathbf{\\textbf{ISAB}_m(X)} &:= \\text{MAB}\\left(X, \\text{MAB}(I_m, X)\\right).\\\\\n\\mathbf{\\textbf{PMA}_k(X)} &:= \\text{MAB}\\left(S_k, \\text{rFF}(X)\\right).\n\\end{align*}\nHere, $X \\in \\reals^{N \\times d}$ represents $N$ $d$-dimensional input/outputs stacked row-wise and $\\text{rFF}$ is a parameterized feed-forward layer that operates on each row of its input matrix separately. 
$I_m \\in \\reals^{m \\times d}$ represents $m$ \\emph{trainable} $d$-dimensional ``inducing points'' while $S_k \\in \\reals^{k \\times d}$ represent $k$ trainable $d$-dimensional ``seed vectors'' (with $k$ set to $1$ except when $k > 1$ correlated outputs are needed).\nThe Set Transformer's encoder is just $N$ layers of either SAB or ISAB (with $N$ often set to $2$ in practice) while its decoder is given by:\n\\begin{align*}\n\\mathbf{\\textbf{Decoder}(X)} := \\text{rFF}\\left(\\text{SAB}\\left(\\text{PMA}_k(X)\\right)\\right).\n\\end{align*}\nIt is straightforward to see that both ISAB and SAB are \\emph{permutation equivariant} - in other words, if the input is permuted in some way then the corresponding output of the block is permuted in exactly the same way. Meanwhile, the pooling layer PMA is permutation invariant. Since functional composition, i.e. layering, preserves these properties, the Set Transformer encoder-decoder combination is permutation invariant.", "id": "5c61d8dc-eca9-4585-9657-44c54041507e", "level": "paragraph", "origin_cites_number": 0, "parent_id": "3d5a3287-d154-4aea-a184-9211b8d98e15", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Set Transformer" ], [ "paragraph", "Attention Blocks" ] ], "subsections": [], "title": "Attention Blocks" }, { "cite_extract_rate": 0, "cites": [], "content": "We can understand the $m$ inducing points $I_m$ learned in each ISAB layer as a form of static model memory. In addition to reducing the $\\mathcal{O}(N n^2)$ complexity of the self-attending SAB layer to $\\mathcal{O}(N m n)$, a reduction particularly valuable when the input set is large, the inducing points effectively encode some global structure that helps explain its inputs. 
For example, in the problem of \\emph{amortized clustering}, where one attempts to learn to map an input set of points to the centers of clusters of points inside the set, the inducing points learned could be appropriately distributed so that the encoder can effectively compare query elements with each other implicitly via their proximity to the inducing points.\nThe trainable $k$ seeds $S_k$ used in the pooling layer $\\text{PMA}_k$ can be viewed as static model memory in a similar light, reducing the memory and runtime complexity of the architecture. \n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs/transformer_attentino_pattern.pdf}\n \\caption{Transformer}\n \\label{fig:dense_att}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs/sparse_transformer_attentino_pattern.pdf}\n \\caption{Sparse Transformer}\n \\label{fig:sparse_att}\n \\end{subfigure}\n \\caption{Illustration of patterns of the attention matrix for dense self-attention in Transformers and sparse fixed attention in Sparse Transformers. 
Blue in the right diagram represents the local self-attention while green represents the strided component of the sparse attention.}\n \\label{fig:sparse_transformer}\n\\end{figure}", "id": "65e9033a-faf1-4149-891c-da5618de18f2", "level": "paragraph", "origin_cites_number": 0, "parent_id": "3d5a3287-d154-4aea-a184-9211b8d98e15", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Set Transformer" ], [ "paragraph", "Efficiency" ] ], "subsections": [], "title": "Efficiency" }, { "cite_extract_rate": 1, "cites": [ 793 ], "content": "The Sparse Transformer~ presents a simple initial attempt to reduce the quadratic complexity of the standard self-attention mechanism. The key idea is to reduce the dense attention matrix to a sparse version by only computing attention on a sparse number of $q_i,k_j$ pairs. Sparse Transformer employs fixed attention patterns which are defined by strides and local neighborhoods. 
Computation is \\textit{factorized}, wherein local and stride patterns are split amongst the heads.\nHalf of the heads in the Sparse Transformer are dedicated to local attention.\n\\begin{align*}\n \\hat{A}_{ij} = \n \\begin{cases}\n Q_{i}K_{j}^\\top, & \\text{if } \\lfloor{{j}/{N}}\\rfloor = \\lfloor{i/{N}}\\rfloor\\\\\n 0 & \\text{otherwise}\n\\end{cases}\n\\end{align*}\nwhere $\\hat{A}_{ij}$ is the attention weight of the pair $q_i,k_j$ and $\\lfloor \\: \\rfloor$ denotes the floor operation. In this case, we only compute the attention if $\\lfloor{{j}/{N}}\\rfloor = \\lfloor{i/{N}}\\rfloor$ (within the same block).\nThe other half of the heads are dedicated to fixed strided patterns. 
Concretely,\n\\begin{align*}\n \\hat{A}_{ij} = \n \\begin{cases}\n Q_{i}K_{j}^\\top, & \\text{if } (i-j) \\bmod N = 0 \\\\\n 0 & \\text{otherwise}\n\\end{cases}\n\\end{align*}\nThe final result of the factorized sparse attention is visualized in Figure~\\ref{fig:sparse_transformer}. We refer interested readers to~ for some additional theoretical analysis of the expressiveness of the sparse attention mechanism.\nThe modification in the self-attention mechanism does not alter the parameter costs of the model since the model still retains the $Q,K,V$ transforms from the original Transformer model. 
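The two factorized patterns can be visualized by building the corresponding boolean mask directly; a small non-causal sketch (the causal masking used for decoding, and the split of the patterns across heads, are omitted):

```python
import numpy as np

def sparse_attention_mask(n, N):
    # True where attention is computed: same-block entries (the local
    # pattern) or positions a multiple of N apart (the strided pattern).
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    local = (i // N) == (j // N)
    strided = (i - j) % N == 0
    return local | strided

mask = sparse_attention_mask(16, 4)
```

Each row keeps only $N + n/N - 1$ entries, so the mask has $\mathcal{O}(n(N + n/N))$ nonzeros instead of $n^2$.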
The memory complexity of the attention layer is reduced from $\\mathcal{O}(n^2)$ to $\\mathcal{O}(n\\log n)$.\nThe Sparse Transformer implementation requires custom GPU kernels to implement a specific block-sparse variant of matrix-matrix-multiplication and cannot be easily implemented on other hardware such as TPUs.\nAxial Transformer~ uses factorization in a simple yet effective setup for the self-attention mechanism to process large inputs that are organized as multidimensional tensors. Instead of applying attention to the flattened version of the input, Axial Transformer simply applies multiple attentions, each along a single axis of the input tensor. Each attention, in fact, mixes information along a particular axis, while keeping information along other axes independent. 
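This axis-wise factorization can be sketched in a few lines of numpy for the 2-dimensional, unmasked case (projections and the causal shifting used for decoding are omitted; the names are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attend(x):
    # Unmasked single-head self-attention over a (length, d) sequence;
    # projections omitted for brevity.
    return softmax(x @ x.T / np.sqrt(x.shape[-1])) @ x

def axial_attention(grid):
    # Row attention: mix information along axis 1; rows are independent.
    rows = np.stack([self_attend(row) for row in grid])
    # Column attention: mix along axis 0; columns are independent.
    return np.stack([self_attend(rows[:, j]) for j in range(rows.shape[1])],
                    axis=1)

rng = np.random.default_rng(0)
b, d = 6, 8
grid = rng.normal(size=(b, b, d))  # a b x b grid of d-dimensional cells
out = axial_attention(grid)
```

For a $b \times b$ grid, this performs $2b$ attention calls on length-$b$ sequences, i.e., $\mathcal{O}(b \cdot b^2)$ work in total.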
Since the length of any single axis is typically much smaller than the total number of elements, Axial Transformer significantly saves computation and memory. \n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.4\\linewidth]{figs/axial_transforme.pdf}\n \\caption{Attention span in Axial Transformer on a two-dimensional input.}\n \\label{fig:axial_transforme}\n\\end{figure}\nAxial Transformer offers an encoder-decoder architecture. For the decoding, to be able to implement the causal mask, Axial Transformer combines axial attentions with shift operations. For instance, for a model on 2-dimensional tensors, pixels are generated in raster order and to do that, first, the model encodes all pixels through an unmasked row and unmasked column attention. Then, for each row, the model applies an unmasked row and masked column attention to integrate the previously sampled rows. Finally, the model shifts the encoded representation up to make sure the conditioning information satisfies causality, and runs a masked row-attention to sample a new row in the image.\nAn advantage of Axial Transformer over similar methods like Sparse Transformer is that while it provides the global receptive field, it is straightforward to implement and does not require a custom kernel for an efficient implementation.", "id": "6e85534f-068e-4a9c-ab6e-7292792a2be8", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "1068ae2e-644b-469f-bc75-6922be1837a6", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Axial Transformer" ] ], "subsections": [ "e6f4cc3d-ce12-4ce7-a0a2-ec8435de2cb1" ], "title": "Axial Transformer" }, { "cite_extract_rate": 0, "cites": [], "content": "In terms of memory and computational complexity, on a square image of size $N$, Axial Transformer performs the attention computation in 
$\\mathcal{O}(N \\sqrt{N})$, which saves a factor of $\\mathcal{O}(\\sqrt{N})$ over standard self-attention. For instance, for a square image with $N$ pixels, organized in a $b\\times b$ grid, Axial Transformer runs $b$ attention sequences of length $b$, which is of complexity $\\mathcal{O}(b \\cdot b^2)$. In a more general case, for a $d$-dimensional tensor of shape $N = N^{1/d}\\times \\ldots \\times N^{1/d}$, Axial Transformer saves a $\\mathcal{O}(N^{(d-1)/d})$ factor of resources over standard self-attention.\nLongformer~ is a variant of Sparse Transformer. \nIts key distinction compared to Sparse Transformer is ``Dilated Sliding Windows'', which can enable better long-range coverage without sacrificing sparsity. This is achieved by increasing the receptive fields with gaps in the attention patterns. 
The Longformer also gradually increases the receptive field as the model goes deeper, dedicating lower levels for modeling local patterns and upper levels for modeling global patterns.", "id": "735b2410-fd41-456d-8b2c-a547b050834b", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "1068ae2e-644b-469f-bc75-6922be1837a6", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Longformer" ] ], "subsections": [ "7d2388a1-a977-4b23-ad67-c0357a865382", "0b8f611d-6391-4f1b-9ee6-a5f7705bf9d3" ], "title": "Longformer" }, { "cite_extract_rate": 0, "cites": [], "content": "For classification tasks, Longformer adopts global memory tokens that have access to all input sequences.", "id": "7d2388a1-a977-4b23-ad67-c0357a865382", "level": "paragraph", "origin_cites_number": 0, "parent_id": "735b2410-fd41-456d-8b2c-a547b050834b", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Longformer" ], [ "paragraph", "Global Attention" ] ], "subsections": [], "title": "Global Attention" }, { "cite_extract_rate": 0, "cites": [], "content": "The complexity of the model is reduced from $\\mathcal{O}(n^2)$ to $\\mathcal{O}(nk)$ where $k$ is the size of the window. 
When using global attention, the Longformer creates another set of query-key-value projections for this global attention, doubling the parameter cost at the attention layer.\nThe ETC model~ is another variation in the Sparse Transformer family. It introduces a new global-local attention mechanism. There are four components to this new attention mechanism, namely (1) global-to-global (g2g), (2) global-to-local (g2l), (3) local-to-global (l2g), and (4) local-to-local (l2l) attention. Aside from the original input to the model, ETC introduces $n_g$ auxiliary tokens as a prefix to the original input sequence. These tokens are regarded as global tokens and take part in global-to-$*$ and $*$-to-global attention. The local-to-local component acts as the local attention with a fixed radius of $k$. Overall, ETC is quite similar to Longformer in the way it introduces global auxiliary tokens. 
These tokens are trainable parameters and can be interpreted as a form of model memory that pools across the sequence to collect global sequence information.", "id": "3bd19aae-a8c0-4c3c-af82-4fd6d74ee8e8", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "1068ae2e-644b-469f-bc75-6922be1837a6", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Extended Transformer Construction (ETC)" ] ], "subsections": [ "d737f00b-4208-4d24-b4dd-d8fe32a75633", "55a9071f-fb42-4931-8ffe-2a0ef5f2b691" ], "title": "Extended Transformer Construction (ETC)" }, { "cite_extract_rate": 0, "cites": [], "content": "The memory complexity of the ETC model is $\\mathcal{O}(n_{g}^2 + n_{g}N)$, where $n_g$ is the number of global tokens and $N$ is the input sequence length.", "id": "d737f00b-4208-4d24-b4dd-d8fe32a75633", "level": "paragraph", "origin_cites_number": 0, "parent_id": "3bd19aae-a8c0-4c3c-af82-4fd6d74ee8e8", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Extended Transformer Construction (ETC)" ], [ "paragraph", "Memory and Parameter Complexity" ] ], "subsections": [], "title": "Memory and Parameter Complexity" }, { "cite_extract_rate": 0, "cites": [], "content": "Intuitively, it is easy to observe that ETC cannot be used for auto-regressive decoding. 
This is because the global attention makes it impossible to compute a valid causal mask: global tokens attend to, and are attended by, future positions.", "id": "55a9071f-fb42-4931-8ffe-2a0ef5f2b691", "level": "paragraph", "origin_cites_number": 0, "parent_id": "3bd19aae-a8c0-4c3c-af82-4fd6d74ee8e8", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Extended Transformer Construction (ETC)" ], [ "paragraph", "Restrictions" ] ], "subsections": [], "title": "Restrictions" }, { "cite_extract_rate": 0.5, "cites": [ 1499 ], "content": "The BigBird model~ is another Transformer for modeling longer sequences and is primarily built on top of ETC~. The BigBird model comprises several key components, namely (1) global tokens, (2) random attention (queries attend to random keys), and (3) fixed patterns (local sliding windows).", "id": "fedcf63b-a401-4b9d-9ca0-3aa532a5dc1a", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "1068ae2e-644b-469f-bc75-6922be1837a6", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "BigBird" ] ], "subsections": [ "630e500c-c499-48b6-bbdc-81f42298b08d", "0f0083e5-d21c-4dbd-80f4-c0e1df48e911", "9e377e46-7851-48e4-922d-a4af6ee6fa5b", "f607bc9e-b56c-4faf-95e5-5f05510d0b07", "59396049-db11-4345-a29a-399e64369535" ], "title": "BigBird" }, { "cite_extract_rate": 0, "cites": [], "content": "Fundamentally, the idea of using global model memory can be traced all the way back to Longformer/ETC and the Set Transformer model. Notably, the global model memory in BigBird is extended to contain tokens within the sequence, instead of simply parameterized model memory. 
The authors call this the \\textit{`internal transformer construction (ITC)'} in which a subset of indices is selected as global tokens. This can be interpreted as a model-memory-based approach.", "id": "630e500c-c499-48b6-bbdc-81f42298b08d", "level": "paragraph", "origin_cites_number": 0, "parent_id": "fedcf63b-a401-4b9d-9ca0-3aa532a5dc1a", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "BigBird" ], [ "paragraph", "Global Attention" ] ], "subsections": [], "title": "Global Attention" }, { "cite_extract_rate": 0, "cites": [], "content": "The window-ed attention was first proposed in early local-based attention models (Image Transformer, Compressed Attention and/or Sparse Transformer). In BigBird, each query attends to $w/2$ tokens to the left and $w/2$ tokens to the right. This corresponds to a fixed pattern (FP) approach.", "id": "0f0083e5-d21c-4dbd-80f4-c0e1df48e911", "level": "paragraph", "origin_cites_number": 0, "parent_id": "fedcf63b-a401-4b9d-9ca0-3aa532a5dc1a", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "BigBird" ], [ "paragraph", "Sliding Window Attention" ] ], "subsections": [], "title": "Sliding Window Attention" }, { "cite_extract_rate": 0, "cites": [], "content": "Finally, each query attends to $r$ random keys. 
This random pattern is fixed, i.e., it does not depend on the content of the sequence.", "id": "9e377e46-7851-48e4-922d-a4af6ee6fa5b", "level": "paragraph", "origin_cites_number": 0, "parent_id": "fedcf63b-a401-4b9d-9ca0-3aa532a5dc1a", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "BigBird" ], [ "paragraph", "Random Attention" ] ], "subsections": [], "title": "Random Attention" }, { "cite_extract_rate": 0, "cites": [], "content": "The memory complexity of the self-attention is linear, i.e., $\mathcal{O}(n)$. The BigBird model does not introduce new parameters beyond the Transformer model.", "id": "f607bc9e-b56c-4faf-95e5-5f05510d0b07", "level": "paragraph", "origin_cites_number": 0, "parent_id": "fedcf63b-a401-4b9d-9ca0-3aa532a5dc1a", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "BigBird" ], [ "paragraph", "Memory and Parameter Complexity" ] ], "subsections": [], "title": "Memory and Parameter Complexity" }, { "cite_extract_rate": 0, "cites": [], "content": "Similar to ETC, the BigBird model cannot be used to autoregressively decode; hence, it qualifies as an encoder-only model.", "id": "59396049-db11-4345-a29a-399e64369535", "level": "paragraph", "origin_cites_number": 0, "parent_id": "fedcf63b-a401-4b9d-9ca0-3aa532a5dc1a", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "BigBird" ], [ "paragraph", "Restrictions" ] ], "subsections": [], "title": "Restrictions" }, { "cite_extract_rate": 1, "cites": [ 1453 ], "content": "The Routing Transformer~ is a content-based sparse attention mechanism. 
It proposes a clustering-based attention mechanism that learns the attention sparsity in a data driven fashion. The first step is to project $Q$ and $K$ into a routing matrix $R$ of dimensions $n \\times d$.\n\\begin{align}\nR = QW_R + KW_R \n\\end{align}\nwhere $W_R$ is a $d \\times d$ orthonormal projection matrix.", "id": "f4887ab8-1a5a-4c38-a880-3fcd20b37b2e", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "1068ae2e-644b-469f-bc75-6922be1837a6", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Routing Transformer" ] ], "subsections": [ "1e4a5e25-b500-4ec9-9ea1-7afad7fdb8ed", "65c92855-5cde-4bd1-a99a-6f5f97b107d6", "78d22117-a762-41fe-9dda-353afb116d29" ], "title": "Routing Transformer" }, { "cite_extract_rate": 1, "cites": [ 7053 ], "content": "The $R$ matrix undergoes $k$-means clustering with a series of parameterized cluster centroids $u_1, u_2 \\cdots c_k$. The $k$-means in Routing Transformer is trained in an online fashion. To ensure a similar number of tokens in each cluster, the model initializes $\\sqrt{n}$ clusters, computes each token's distance against the cluster centroid, and takes an equal top-$k$ for each centroid. 
Since the cluster centroids are trainable parameters, this is also reminiscent of the \\emph{all-attention} layer proposed by~.", "id": "1e4a5e25-b500-4ec9-9ea1-7afad7fdb8ed", "level": "paragraph", "origin_cites_number": 1, "parent_id": "f4887ab8-1a5a-4c38-a880-3fcd20b37b2e", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Routing Transformer" ], [ "paragraph", "$k$-means Clustering" ] ], "subsections": [], "title": "$k$-means Clustering" }, { "cite_extract_rate": 0, "cites": [], "content": "The routing strategy is then defined as:\n\\begin{align}\nX'_i = \\sum_{j \\in C_i, j \\leq i} A_{ij} V_j \n\\end{align}\nwhere $C_i$ is the cluster that vector $R_i$ is assigned to. In other words, the token at $i$ only attends to tokens in the same cluster.", "id": "65c92855-5cde-4bd1-a99a-6f5f97b107d6", "level": "paragraph", "origin_cites_number": 0, "parent_id": "f4887ab8-1a5a-4c38-a880-3fcd20b37b2e", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Routing Transformer" ], [ "paragraph", "Routing Strategy" ] ], "subsections": [], "title": "Routing Strategy" }, { "cite_extract_rate": 0, "cites": [], "content": "The Routing Transformer introduces additional parameters in the clustering mechanism, namely $k \\times d$ centroid vectors and a $W_r$ projection matrix. 
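As a rough, non-causal sketch of this routing (taking $W_R$ as the identity for brevity, so the routing vectors are simply $Q + K$, and using hard nearest-centroid assignment rather than the balanced online $k$-means of the paper):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def routed_attention(Q, K, V, centroids):
    """Cluster-routed attention sketch (non-causal, W_R taken as identity).

    Each position is assigned to its nearest centroid and attends only to
    positions in the same cluster, instead of all n positions.
    """
    R = Q + K                                              # routing vectors, (n, d)
    dist = ((R[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    assign = dist.argmin(axis=1)                           # cluster id per position
    d = Q.shape[-1]
    Y = np.zeros_like(V)
    for c in range(len(centroids)):
        idx = np.flatnonzero(assign == c)
        if idx.size:                                       # full attention inside cluster
            Y[idx] = softmax(Q[idx] @ K[idx].T / np.sqrt(d)) @ V[idx]
    return Y

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 4)) for _ in range(3))
Y = routed_attention(Q, K, V, centroids=rng.standard_normal((2, 4)))
```

With a single centroid the routing degenerates to full softmax attention, which is a convenient sanity check.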
The memory complexity is $\\mathcal{O}(n^{1.5})$.", "id": "78d22117-a762-41fe-9dda-353afb116d29", "level": "paragraph", "origin_cites_number": 0, "parent_id": "f4887ab8-1a5a-4c38-a880-3fcd20b37b2e", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Routing Transformer" ], [ "paragraph", "Memory and Parameter Complexity" ] ], "subsections": [], "title": "Memory and Parameter Complexity" }, { "cite_extract_rate": 0, "cites": [], "content": "Reformer~ is another efficient attention model based on locality sensitive hashing (LSH). Reformer also introduces \\emph{reversible} Transformer layers, which contribute to further reducing its memory footprint.", "id": "00fb8acf-94cc-4b0a-96a9-b2321e28bdbd", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "1068ae2e-644b-469f-bc75-6922be1837a6", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Reformer" ] ], "subsections": [ "1d58e696-ebea-4c10-899c-631ef9e4af30", "91e55ab1-ac58-4c1f-8ff8-3d7a5090e5e2", "dc0daa65-603f-4a91-b89d-0f572426fd69" ], "title": "Reformer" }, { "cite_extract_rate": 0, "cites": [], "content": "The LSH attention introduces parameter-sharing between query and keys. It hashes the query-keys into buckets using a random-projection based hashing function. The key idea is that nearby vectors should obtain a similar hash while distant vectors should not, hence being termed as \\textit{`locality sensitive'}. To perform hashing, a random matrix $R \\in \\reals^{k \\times b/2}$ is first introduced. Next, The hashing function is defined as:\n\\begin{align}\nh(x) = \\text{arg max}([xR;-xR]) \n\\end{align}\nwhere $[;]$ is the concatenation of two vectors. 
For all queries, attention is computed if and only if the query and key hashes match, i.e., $h(q_i)=h(k_j)$. In other words, attention is computed amongst query and keys if they fall in the same hash bucket. In order to maintain causal masking, Reformer assigns and maintains a position index for every query and key. It is therefore able to compare if each query key comparison is auto-regressively valid.", "id": "1d58e696-ebea-4c10-899c-631ef9e4af30", "level": "paragraph", "origin_cites_number": 0, "parent_id": "00fb8acf-94cc-4b0a-96a9-b2321e28bdbd", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Reformer" ], [ "paragraph", "LSH Attention" ] ], "subsections": [], "title": "LSH Attention" }, { "cite_extract_rate": 0, "cites": [], "content": "The key idea behind LSH attention is to classify tokens into buckets and then process them bucket by bucket in a chunked fashion. To this end, queries are first sorted by bucket number and then by sequence order within the same bucket. During computation, tokens only attend to the same bucket in its own chunk and previous chunk. The chunking and sorted bucketing techniques help to improve the overall efficiency of the Reformer model.", "id": "91e55ab1-ac58-4c1f-8ff8-3d7a5090e5e2", "level": "paragraph", "origin_cites_number": 0, "parent_id": "00fb8acf-94cc-4b0a-96a9-b2321e28bdbd", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Reformer" ], [ "paragraph", "Memory Efficiency with LSH Attention" ] ], "subsections": [], "title": "Memory Efficiency with LSH Attention" }, { "cite_extract_rate": 0, "cites": [], "content": "The memory complexity of Reformer is $\\mathcal{O}(n \\log n)$. 
In terms of parameter costs, Reformer shares queries and keys, which reduces the cost of the QKV transforms by a third. The random projections are not trainable parameters and hence do not incur parameter costs. Overall, Reformer has fewer parameters than vanilla Transformers. The reversible layers in Reformer also reduce the memory consumption during training by enabling activations to be reconstructed from the next layer's. This reduces memory cost since this eliminates the need to store activations for all layers during backpropagation.", "id": "dc0daa65-603f-4a91-b89d-0f572426fd69", "level": "paragraph", "origin_cites_number": 0, "parent_id": "00fb8acf-94cc-4b0a-96a9-b2321e28bdbd", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Reformer" ], [ "paragraph", "Parameter and Memory Complexity" ] ], "subsections": [], "title": "Parameter and Memory Complexity" }, { "cite_extract_rate": 1, "cites": [ 1473 ], "content": "This section introduces the Sparse Sinkhorn Transformer~. The Sinkhorn Transformer belongs to the family of \\textit{learned patterns}. This model is a chunked/blocked model that learns sparse patterns by re-sorting the input key and values in a block-wise fashion and then applying local block-based attention. 
\n\\begin{align*}\n A_{ij} = \n \\begin{cases}\n (Q_{i}\\psi_S(K)_{j}^\\top),& \\text{if} \\lfloor{{j}/{N}}\\rfloor = \\lfloor{i/{N}}\\rfloor\\\\\n 0 & \\text{otherwise}\n\\end{cases}\n\\end{align*}\nwhere $\\psi_S$ applies a sorting operator on the sequence length dimension.", "id": "00c367aa-0d44-4c9b-aa7c-c00fe6726789", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "1068ae2e-644b-469f-bc75-6922be1837a6", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Sinkhorn Transformers" ] ], "subsections": [ "5dec351a-2c52-44ff-94ae-393492ba90b0", "929b46cd-0569-444b-a0d3-83247d432041", "5403c490-0e45-4bdb-9b9a-54b777bd2a94" ], "title": "Sinkhorn Transformers" }, { "cite_extract_rate": 0, "cites": [], "content": "The sorting operator is parameterized by a meta sorting network. Let $X$ be the input sequence of dimension $N \\times d$. \n\\begin{equation}\n\\psi_S(X) = \\phi_S(F_S(\\textsc{BlockSum}(X)))\\:\\textsc{BlockShape}(X)\n\\end{equation}\nwhere $F_S(.)$ is a parameterized function such as a two layer feed-forward network with ReLU activation. The output of $F_S(.)$ is a tensor of $n_B \\times n_B$. The BlockSum function learns the sum embeddings of local blocks. The BlockShape function reshapes the input tensor into $\\mathbb{R}^{N \\times d} \\rightarrow \\mathbb{R}^{n_B \\times b \\times d}$. 
Here, we note that $N = n_B \\times b$, where $b$ is the size of the block and $n_B$ is the number of total blocks.", "id": "5dec351a-2c52-44ff-94ae-393492ba90b0", "level": "paragraph", "origin_cites_number": 0, "parent_id": "00c367aa-0d44-4c9b-aa7c-c00fe6726789", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Sinkhorn Transformers" ], [ "paragraph", "Sorting Network" ] ], "subsections": [], "title": "Sorting Network" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 1473, 8834 ], "content": "$\\phi$ is the Sinkhorn balancing operator~ which converts the $n_B \\times n_B$ matrix into a soft permutation matrix. Specifically, a series of row- and column-wise normalizations are applied on the matrix output of $F_S\\text{BlockSum}(X)$. For the sake of brevity, we do not delve into details of this operation. Further details can be found at~.", "id": "929b46cd-0569-444b-a0d3-83247d432041", "level": "paragraph", "origin_cites_number": 3, "parent_id": "00c367aa-0d44-4c9b-aa7c-c00fe6726789", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Sinkhorn Transformers" ], [ "paragraph", "Sinkhorn Sorting" ] ], "subsections": [], "title": "Sinkhorn Sorting" }, { "cite_extract_rate": 0, "cites": [], "content": "The memory complexity of the Sinkhorn Transformer is $\\mathcal{O}(b^2)$ where $b$ is the block size and $b=\\frac{N}{N_b}$. Additional parameter costs are incurred from the meta sorting network $F_S(.)$. 
The number of additional parameters is therefore $2d^2$ when a two layer ReLU network is used as the sorting network.", "id": "5403c490-0e45-4bdb-9b9a-54b777bd2a94", "level": "paragraph", "origin_cites_number": 0, "parent_id": "00c367aa-0d44-4c9b-aa7c-c00fe6726789", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Sinkhorn Transformers" ], [ "paragraph", "Parameter and Memory Complexity" ] ], "subsections": [], "title": "Parameter and Memory Complexity" }, { "cite_extract_rate": 1, "cites": [ 7333 ], "content": "Linformer~ is an efficient Transformer based on the idea of low-rank self-attention.", "id": "af6a261e-54bd-47b2-bc1e-362d6141ff6e", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "1068ae2e-644b-469f-bc75-6922be1837a6", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Linformer" ] ], "subsections": [ "c49f2e0b-42ca-40f6-8fb6-e52017ec9adb", "8e69f67d-d374-4a43-8756-988c01988c0a" ], "title": "Linformer" }, { "cite_extract_rate": 1, "cites": [ 4741 ], "content": "Linformer projects the $N \\times d$ dimensional keys and values to $k \\times d$ dimensions using additional projection layers. Note that this is a reduction on the length dimension instead of the key and value dimensions. This can \nGiven the newly projected keys ($K'$) and values ($V'$), the $QK'$ matrix is now $(N \\times k)$ dimensions instead of $(N \\times N)$. The attention matrix $\\text{Softmax}(QK')$ multiplies with $V' \\in \\mathbb{R}^{k \\times d}$ to result in an output tensor of dimensions $N \\times d$. To some extent, Linformer is reminiscent of depth-wise convolutions~. 
A projection on the length dimension causes mixing of sequence information (dimension-wise) in a single transformation. Hence, it is non-trivial to maintain causal masking and/or prevent mixing of past and future information when computing attention scores. The formulation of Linformer (for each attention head) can be expressed as:\n\\begin{align}\nSoftmax(\\frac{1}{\\sqrt{d_k}}XW^{Q}_{i}(E_i X W_i^K)) \\cdot F_iXW_i^V \n\\end{align}\nwhere $W^{Q,K,V}$ are the default linear transformation of $X$ into queries (as per vanilla Transformer) and $E_{i}, F_i$ are additional $k \\times N$ projection of the key and values into $k \\times d$ tensors.", "id": "c49f2e0b-42ca-40f6-8fb6-e52017ec9adb", "level": "paragraph", "origin_cites_number": 1, "parent_id": "af6a261e-54bd-47b2-bc1e-362d6141ff6e", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Linformer" ], [ "paragraph", "Low-Rank Projections on Length Dimensions" ] ], "subsections": [], "title": "Low-Rank Projections on Length Dimensions" }, { "cite_extract_rate": 0, "cites": [], "content": "The memory complexity of Linformer is $\\mathcal{O}(n)$. There is only a minimal parameter costs of the Linformer due to the extra $N \\times k$ length projections. 
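A single-head sketch of this low-rank attention (unmasked, with random matrices standing in for the learned length projections $E$ and $F$):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def linformer_head(Q, K, V, E, F):
    """Linformer-style attention: keys/values compressed along the length
    dimension, so the attention matrix is (N, k) rather than (N, N)."""
    d = Q.shape[-1]
    A = softmax(Q @ (E @ K).T / np.sqrt(d))   # (N, k) attention matrix
    return A @ (F @ V)                        # (N, d) output

rng = np.random.default_rng(0)
N, d, k = 64, 16, 8
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
E, F = (rng.standard_normal((k, N)) / np.sqrt(N) for _ in range(2))
Y = linformer_head(Q, K, V, E, F)
```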
If $k$ is sufficiently small, there is negligible parameter costs incurred.", "id": "8e69f67d-d374-4a43-8756-988c01988c0a", "level": "paragraph", "origin_cites_number": 0, "parent_id": "af6a261e-54bd-47b2-bc1e-362d6141ff6e", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Linformer" ], [ "paragraph", "Parameter and Memory Complexity" ] ], "subsections": [], "title": "Parameter and Memory Complexity" }, { "cite_extract_rate": 1, "cites": [ 8384, 1494 ], "content": "The Performer~ model is characterized by its Generalized Attention mechanism and its usage of random Kernels.", "id": "c86fa3dc-2cb0-454b-b346-23e3e03e17ae", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "1068ae2e-644b-469f-bc75-6922be1837a6", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Performer" ] ], "subsections": [ "d73836ea-6083-4eef-9ce6-4bb374a06c4b", "e8ae810f-af02-44aa-8944-35d696f51a07", "caea67e9-e356-4384-bfb8-f76e36779e95" ], "title": "Performer" }, { "cite_extract_rate": 0, "cites": [], "content": "The generalized attention entangles $Q_i,K_j$ with a kernel function $K$. 
The attention matrix in Performer is computed via:\n\\begin{align}\nA = [g(Q_i^\\top)K(Q_i^\\top K_j^\\top) h(K_j^\\top)] \n\\end{align}\nwhere $K(.)$ is a kernel function that maps $d \\times d$ to a scalar value $\\mathbb{R}$ and $g,h$ are functions that map $d$ to a scalar value $\\mathbb{R}$.", "id": "d73836ea-6083-4eef-9ce6-4bb374a06c4b", "level": "paragraph", "origin_cites_number": 0, "parent_id": "c86fa3dc-2cb0-454b-b346-23e3e03e17ae", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Performer" ], [ "paragraph", "Generalized Attention" ] ], "subsections": [], "title": "Generalized Attention" }, { "cite_extract_rate": 1, "cites": [ 1494 ], "content": "The above computation is still quadratic in complexity. Hence, the Performer leverages approximation tricks to avoid storing and computing the $N \\times N$ attention matrix. It leverages \\textit{orthogonal random features} (ORF) for doing so. The final attention output $Y$ of the Performer is described as follows:\n\\begin{align}\nY = \\hat{D}^{-1}(Q'((K')^\\top V))\n\\end{align}\nwhere $\\hat{D}=\\text{diag}(Q'((K')^\\top1_N))$, $Q'=D_Q\\phi(Q^\\top)^\\top$, and $K'=D_K\\phi(K^\\top)^\\top$. Note that $D_Q=g(Q_i^\\top),D_K=h(K_i^\\top)$. The function $\\phi(x)$ is defined as:\n\\begin{align}\n\\phi(X)= \\frac{c}{\\sqrt{M}}f(Wx +b)^\\top\n\\end{align}\nwhere $c > 0$ is a constant, $W \\in \\mathbb{R}^{M \\times d}$ is a random feature matrix and $M$ is the dimensionality of this matrix that controls the number of random features. We are able to see that we do not explicitly compute $A=QK^\\top$ and hence avoid paying the $N^2$ cost. 
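A minimal bidirectional sketch, substituting plain Gaussian features for the orthogonal ones and using the positive feature map $\phi(x)=\exp(Wx-\|x\|^2/2)/\sqrt{M}$ that unbiasedly estimates the exponential kernel:

```python
import numpy as np

def favor_attention(Q, K, V, M=1024, seed=0):
    """Kernelized softmax-attention approximation in the spirit of FAVOR.

    Uses positive random features phi(x) = exp(W x - ||x||^2 / 2) / sqrt(M)
    (plain Gaussian W here; the paper uses orthogonal features) and never
    materializes the N x N attention matrix: Y = D^{-1} Q' ((K')^T V).
    """
    d = Q.shape[-1]
    Q, K = Q / d ** 0.25, K / d ** 0.25      # fold in the 1/sqrt(d) scaling
    W = np.random.default_rng(seed).standard_normal((M, d))
    phi = lambda X: np.exp(X @ W.T - 0.5 * (X ** 2).sum(-1, keepdims=True)) / np.sqrt(M)
    Qp, Kp = phi(Q), phi(K)                  # (N, M) random-feature maps
    D = Qp @ Kp.sum(axis=0)                  # per-row normalizer, shape (N,)
    return (Qp @ (Kp.T @ V)) / D[:, None]

rng = np.random.default_rng(1)
N, d = 32, 8
Q, K, V = (rng.standard_normal((N, d)) * 0.5 for _ in range(3))
Y = favor_attention(Q, K, V)
```

Because the implied attention weights are positive and normalized, each output row stays inside the convex hull of the value rows.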
For rigorous theoretical analysis and further details, we refer interested readers to~.", "id": "e8ae810f-af02-44aa-8944-35d696f51a07", "level": "paragraph", "origin_cites_number": 1, "parent_id": "c86fa3dc-2cb0-454b-b346-23e3e03e17ae", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Performer" ], [ "paragraph", "Fast Attention via Orthogonal Random Features (FAVOR)" ] ], "subsections": [], "title": "Fast Attention via Orthogonal Random Features (FAVOR)" }, { "cite_extract_rate": 0, "cites": [], "content": "The complexity of the bi-directional FAVOR algorithm is $\\mathcal{O}(Md + N d + MN)$ where $M$ is the dimensionality of the random features. It is worth noting that the unidirectional variations cannot be causally masked in an efficient linear-time fashion. As such, during training, running unidirectional (causal) implementation of kernel-based attention on an autoregressive task can be several times slower than vanilla Transformer during \\textit{parallelized} training due to the need to do a left to right pass (i.e., scan operation) in similar spirit to Recurrent neural networks. Since many autoregressive tasks trained via parallelization and teacher forcing, this makes training Performer on a generative task prohibitively slow. In order for KV to be causally masked efficiently, one would have to manifest the $d \\times d$ KV matrix at every time step - recovering a quadratic complexity model. We feel this is one of the intricate points that highlight how efficient memory complexity might not equate a faster or more efficient model in practice. We highlight that this only happens during autoregressive training. 
The inference-time for incremental decoding, however, would benefit from a speed up.", "id": "caea67e9-e356-4384-bfb8-f76e36779e95", "level": "paragraph", "origin_cites_number": 0, "parent_id": "c86fa3dc-2cb0-454b-b346-23e3e03e17ae", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Performer" ], [ "paragraph", "Parameter/Memory Complexity and Compute Costs" ] ], "subsections": [], "title": "Parameter/Memory Complexity and Compute Costs" }, { "cite_extract_rate": 1, "cites": [ 798, 4742 ], "content": "The Linear Transformer~ improves the complexity of self-attention from quadratic to linear by using a kernel-based formulation of self-attention and the associative property of matrix products. Furthermore, it reduces attention with causal masking (which is used in auto-regressive decoding) to a linear-time, constant memory recurrent neural network (RNN). The model has been shown to improve inference speeds up to \\emph{three orders of magnitude} without much loss in predictive performance. Linear Transformers are similar to Performers with the exception of the kernel function and therefore also suffer from the same drawbacks (unable to be parallelized across the time dimension during training in an autoregressive teacher forced setting). \nThe method rests on the simple but powerful observation that the accumulated value $V_i'$ for the query $Q_i$ in position $i$ can be written as:\n\\begin{align*}\nV_i' &= \\frac{\\sum_{j=1}^p \\text{sim}(Q_i, K_j) V_j}{\\sum_{j=1}^p \\text{sim}(Q_i, K_j)}.\n\\end{align*}\nHere, $p = N$ in full, unmasked attention and $p = i$ in the case of causal masking. Now, in usual softmax attention, $\\text{sim}(q, k) = \\exp\\left(\\frac{q^T k}{\\sqrt{d}}\\right)$. Linear Transformer, however, expresses the similarity as a kernel function. 
That is, $\\text{sim}(q, k) := \\phi(q)^T \\phi(k)$, where $\\phi$ is a, possibly high-dimensional, feature map. With this choice,\nwe can rewrite $V_i'$ as:\n\\begin{align*}\nV_i' &= \\frac{\\phi(Q_i)^T S_p}{\\phi(Q_i)^T Z_p},\\\\\nS_p &:= \\sum_{j=1}^p \\phi(K_j) V_j^T,\\\\\nZ_p &:= \\sum_{j=1}^p \\phi(K_j).\n\\end{align*}\nFor unmasked attention, since $p = N$ we only need to compute $S_N$ and $Z_N$ once and we reuse them for the computation at every position $0 \\leq i \\leq N$. For causal attention, the $S_i$'s and $Z_i$'s can be viewed as states of an RNN that are updated by the following recurrence relations:\n\\begin{align*}\nS_i &= S_{i-1} + \\phi(K_i)V_i^T,\\\\\nZ_i &= Z_{i-1} + \\phi(K_i)\n\\end{align*}\nwith initial condition $S_0 = Z_0 = 0$.\nIf the dimension of the key, query, and values are all $d$ and the cost to compute $\\phi$ is $\\mathcal{O}(c)$, then the overall run-time complexity of Linear Transformer is $\\mathcal{O}{(N c d)}$. The authors choose\n\\begin{align*}\n\\phi(x) = \\text{elu}(x) + 1,\n\\end{align*}\nwhere $\\text{elu}(\\cdot)$ denotes the exponential linear unit~. With this choice of feature map, $c = d$ and the end-to-end complexity of the model is $\\mathcal{O}(N d^2)$.", "id": "4587bece-9f38-4735-9cf0-5c393b64224d", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "1068ae2e-644b-469f-bc75-6922be1837a6", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Linear Transformer" ] ], "subsections": [], "title": "Linear Transformer" }, { "cite_extract_rate": 1, "cites": [ 7366, 7660 ], "content": "Synthesizer models~ are an attempt to study and investigate the true importance of conditioning within the self-attention mechanism and are also the first attempts at unconditional token-mixing. 
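Before moving on, the causal $S_i, Z_i$ recurrence of the Linear Transformer above can be made concrete as an RNN with $\mathcal{O}(d^2)$ state (a sketch using the paper's $\phi(x)=\text{elu}(x)+1$):

```python
import numpy as np

def phi(x):
    """elu(x) + 1: the positive feature map used by the Linear Transformer."""
    return np.where(x > 0, x + 1.0, np.exp(x))

def causal_linear_attention(Q, K, V):
    """Causal linear attention as an RNN: state S (d x d_v) and Z (d,) are
    updated per position, so time is O(N d^2) and memory is O(d^2)."""
    S = np.zeros((Q.shape[1], V.shape[1]))
    Z = np.zeros(Q.shape[1])
    out = np.empty_like(V)
    for i in range(len(Q)):
        S += np.outer(phi(K[i]), V[i])    # S_i = S_{i-1} + phi(K_i) V_i^T
        Z += phi(K[i])                    # Z_i = Z_{i-1} + phi(K_i)
        out[i] = (phi(Q[i]) @ S) / (phi(Q[i]) @ Z)
    return out

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((10, 4)) for _ in range(3))
Y = causal_linear_attention(Q, K, V)
```

This evaluates exactly the same quantity as the masked quadratic formulation, just in a different order.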
In~, the authors study a synthetic self-attention module in which attention weights are approximated instead of being computed by pairwise dot products. Synthesizers are only implicitly related to efficient Transformers and can be considered more as a MLP-Mixer~. However, the factorized variants can be considered a low-rank efficient Transformer model.", "id": "76930262-894e-4e4c-855b-52de1d76e24c", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "1068ae2e-644b-469f-bc75-6922be1837a6", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Synthesizers" ] ], "subsections": [ "fdaeb482-2e71-4c41-b491-baf36a77a316", "6ae3e20a-147d-40e2-a3fb-62409e04a2a5", "3bb7cddd-5c54-4035-8463-d7656227ca82", "0229e228-e7a1-4b7e-8609-3ff42cc61031" ], "title": "Synthesizers" }, { "cite_extract_rate": 0, "cites": [], "content": "In the Dense Synthesizer, each token $x_i$ is projected to a vector of length $N$ using a two-layered non-linear feed-forward network. The computation of the attention matrix $A$ is described as:\n\\begin{align}\nA = W_2(\\sigma_{R}(W_1(X)+b))+b \n\\end{align}\nwhere $X \\in \\mathbb{R}^{N \\times d}$ is the input sequence, $W_2 \\in \\mathbb{R}^{d \\times N}, W_1 \\in \\mathbb{R}^{d \\times d}$, and $\\sigma_R$ is the ReLU activation function. Given $A$, the output of the Synthetic Dense function is computed as:\n\\begin{align}\nY = \\text{Softmax}(A)G(X). 
\n\\end{align}\nwhere $G(X)$ is another parameterized function $\\mathbb{R}^{N \\times d} \\rightarrow \\mathbb{R}^{N \\times d}$.", "id": "fdaeb482-2e71-4c41-b491-baf36a77a316", "level": "paragraph", "origin_cites_number": 0, "parent_id": "76930262-894e-4e4c-855b-52de1d76e24c", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Synthesizers" ], [ "paragraph", "Dense Synthesizers" ] ], "subsections": [], "title": "Dense Synthesizers" }, { "cite_extract_rate": 1, "cites": [ 7366 ], "content": "Another variant of the Synthesizer model uses random matrices for $A$. In this case, the output can be expressed by:\n\\begin{align}\nY = \\text{Softmax}(R)G(X). \n\\end{align}\nwhere $R \\in \\mathbb{R}^{N \\times N}$ is a trainable and/or non-trainable matrix. In~, the authors show that Random Synthesizers achieve competitive performance.", "id": "6ae3e20a-147d-40e2-a3fb-62409e04a2a5", "level": "paragraph", "origin_cites_number": 1, "parent_id": "76930262-894e-4e4c-855b-52de1d76e24c", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Synthesizers" ], [ "paragraph", "Random Synthesizers" ] ], "subsections": [], "title": "Random Synthesizers" }, { "cite_extract_rate": 0, "cites": [], "content": "The Dense and Random Synthesizers also come with factorized variants that consider a low-rank structure of the attention matrix. The factorized Random Synthesizer can be written as:\n\\begin{align}\nY = \\text{Softmax}(R_{1}R_{2}^{\\top})G(X). \n\\end{align}\nwhere $R_{1},R_{2} \\in \\mathbb{R}^{N \\times k}$. 
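The Random Synthesizer and its factorized variant can be sketched in NumPy as follows (a minimal illustration; we take $G(X)$ to be a plain linear map $XW_g$, which is our own simplifying assumption):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def random_synthesizer(X, R, W_g):
    """Full Random Synthesizer: attention weights ignore the input entirely.

    X: (N, d) input tokens; R: (N, N) random/trainable matrix; W_g: (d, d).
    """
    return softmax(R) @ (X @ W_g)               # Softmax(R) G(X)

def factorized_random_synthesizer(X, R1, R2, W_g):
    """Low-rank variant: R is replaced by R1 @ R2.T with R1, R2 in R^{N x k}."""
    return softmax(R1 @ R2.T) @ (X @ W_g)       # Softmax(R1 R2^T) G(X)
```

When $R = R_1 R_2^{\top}$, the factorized variant reproduces the full one exactly while storing only $2(N \times k)$ parameters instead of $N^2$.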
On the other hand, the Dense Synthesizer can be factorized as follows:\n\\begin{align}\nA=H_B(B)* H_C(C) \\:\\: \\text{where} \\: \\: B, C = F_B(X_i), F_C(X_i), \n\\end{align}\nwhere $F_B(.)$ projects $X_i$ onto $b$ dimensions and $F_C(.)$ projects $X_i$ onto $c$ dimensions with $c \\times b=N$. $H_B$ and $H_C$ are tiling and repeating functions, respectively.", "id": "3bb7cddd-5c54-4035-8463-d7656227ca82", "level": "paragraph", "origin_cites_number": 0, "parent_id": "76930262-894e-4e4c-855b-52de1d76e24c", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Synthesizers" ], [ "paragraph", "Factorized Variants" ] ], "subsections": [], "title": "Factorized Variants" }, { "cite_extract_rate": 0, "cites": [], "content": "For Random Synthesizers that adopt a non-trainable $R$, there is no need to store $N^2$ activations at this layer. For the trainable Random Synthesizer, the memory complexity and parameter complexity remain $N^2$. However, there is no need to compute $N^2$ dot products, reducing the computational costs significantly. The Factorized Random Synthesizers reduce the parameter costs to $2(N \\times k)$.", "id": "0229e228-e7a1-4b7e-8609-3ff42cc61031", "level": "paragraph", "origin_cites_number": 0, "parent_id": "76930262-894e-4e4c-855b-52de1d76e24c", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Synthesizers" ], [ "paragraph", "Parameter and Memory Complexity" ] ], "subsections": [], "title": "Parameter and Memory Complexity" }, { "cite_extract_rate": 1, "cites": [ 7370 ], "content": "The Transformer-XL model~ relies on segment-based recurrence. 
Segment-based recurrence can be considered an orthogonal approach to the other techniques discussed since it does not explicitly sparsify the dense self-attention matrix. Instead, it connects adjacent blocks with a recurrent mechanism.", "id": "397e594a-88ee-4d92-94b6-456d9e680abe", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "1068ae2e-644b-469f-bc75-6922be1837a6", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Transformer-XL" ] ], "subsections": [ "88c6813f-47fa-4dc3-a9e3-657d8d1e35c7", "823ca029-539c-4f78-8287-5e12bd14fed9" ], "title": "Transformer-XL" }, { "cite_extract_rate": 0, "cites": [], "content": "The recurrent mechanism in Transformer-XL is described as:\n\\begin{align}\n\\tilde{\\bm{h}}^{n-1}_{\\tau+1} &= [\\text{SG}(\\bm{h}^{n-1}_{\\tau}) \\odot \\bm{h}^{n-1}_{\\tau +1}] \\\\ \nq^{n}_{\\tau+1}, k^{n}_{\\tau+1}, v^{n}_{\\tau+1} &= \\bm{h}^{n-1}_{\\tau+1}\\bm{W}^\\top_q \\:,\\: \\tilde{\\bm{h}}^{n-1}_{\\tau+1}\\bm{W}^\\top_k \\:,\\: \\tilde{\\bm{h}}^{n-1}_{\\tau+1}\\bm{W}^\\top_v \\\\ \n\\bm{h}^{n}_{\\tau+1} &= \\text{Transformer}(q^{n}_{\\tau+1}, k^{n}_{\\tau+1}, v^{n}_{\\tau+1})\n\\end{align}\nwhere SG() is the stop gradient function, $\\odot$ is the concatenation of two sequences along the length dimension. 
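A minimal sketch of this segment-level recurrence for a single attention layer might look as follows (NumPy; the shapes are our own choices, and in an autodiff framework the cached states would be detached, e.g. via `.detach()`/`stop_gradient`, to realize SG):

```python
import numpy as np

def xl_segment_step(h_prev, h_curr, Wq, Wk, Wv):
    """One Transformer-XL attention step over a new segment.

    h_prev: (L_prev, d) cached hidden states of the previous segment,
            treated as constants (gradients stopped -- implicit in NumPy).
    h_curr: (L_curr, d) hidden states of the current segment.
    """
    # h_tilde = [SG(h_prev) . h_curr]: concatenate along the length dimension
    h_tilde = np.concatenate([h_prev, h_curr], axis=0)
    q = h_curr @ Wq.T        # queries come only from the new segment
    k = h_tilde @ Wk.T       # keys/values see the extended context
    v = h_tilde @ Wv.T
    scores = q @ k.T / np.sqrt(q.shape[-1])
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v          # (L_curr, d)
```

The sketch omits causal masking within the current segment and the relative position terms, which the full model includes.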
Notably, the keys and values are conditioned on the extended context $\\tilde{\\bm{h}}^{n-1}_{\\tau +1}$ rather than on $\\bm{h}^{n-1}_{\\tau +1}$ alone.", "id": "88c6813f-47fa-4dc3-a9e3-657d8d1e35c7", "level": "paragraph", "origin_cites_number": 0, "parent_id": "397e594a-88ee-4d92-94b6-456d9e680abe", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Transformer-XL" ], [ "paragraph", "Segment Recurrence" ] ], "subsections": [], "title": "Segment Recurrence" }, { "cite_extract_rate": 1, "cites": [ 7370 ], "content": "Transformer-XL introduces novel relative position encodings. In this scheme, absolute positional encodings are not added to the content embeddings. Instead, they are only considered while computing attention weights where they can be replaced with relative position encodings. Since the relative position encodings are not directly relevant to the efficiency of the model, we refer interested readers to~ for more details.", "id": "823ca029-539c-4f78-8287-5e12bd14fed9", "level": "paragraph", "origin_cites_number": 1, "parent_id": "397e594a-88ee-4d92-94b6-456d9e680abe", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Transformer-XL" ], [ "paragraph", "Relative Positional Encodings" ] ], "subsections": [], "title": "Relative Positional Encodings" }, { "cite_extract_rate": 0, "cites": [], "content": "Compressive Transformers~ are a natural extension of the Transformer-XL model. The key idea behind the Compressive Transformer is to maintain a fine-grained memory of past segment activations. 
This is unlike Transformer-XL, which discards past activations as it moves across segments.", "id": "3a6ff1e6-8d3f-41b7-899b-c04469e7b874", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "1068ae2e-644b-469f-bc75-6922be1837a6", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Compressive Transformers" ] ], "subsections": [ "7d92286f-0365-47e4-a43c-a4c7001bfa8e", "f7752592-442d-4971-9bca-f5c1513a9ccc", "6d0e83f8-e9c8-4eb3-9d1a-89a36ebf43f0" ], "title": "Compressive Transformers" }, { "cite_extract_rate": 0, "cites": [], "content": "The Compressive Transformer is characterized by a dual model memory system - a primary model memory and a secondary compressed model memory. It maintains a model memory with $n_m$ memory slots and $n_{cm}$ compressive memory slots. Whenever the model accepts a new input segment, the oldest $n_s$ activations in the primary model memory are moved to the compressed model memory where a compression function is applied.", "id": "7d92286f-0365-47e4-a43c-a4c7001bfa8e", "level": "paragraph", "origin_cites_number": 0, "parent_id": "3a6ff1e6-8d3f-41b7-899b-c04469e7b874", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Compressive Transformers" ], [ "paragraph", "Model Memory" ] ], "subsections": [], "title": "Model Memory" }, { "cite_extract_rate": 0, "cites": [], "content": "These memories are compressed with a variety of compression functions such as (1) mean/max pooling (2) 1D convolutions, (3) dilated convolutions, and (4) most used (e.g., sorted by usage of attention).", "id": "f7752592-442d-4971-9bca-f5c1513a9ccc", "level": "paragraph", "origin_cites_number": 0, "parent_id": 
"3a6ff1e6-8d3f-41b7-899b-c04469e7b874", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Compressive Transformers" ], [ "paragraph", "Compression" ] ], "subsections": [], "title": "Compression" }, { "cite_extract_rate": 0, "cites": [], "content": "In order to better retain memories over long sequences, the Compressive Transformer implements an auto-encoding loss that learns to reconstruct the original memory from its compressed version, i.e., $L^{ae}=|| \\text{old\\_mem} - g(\\text{new\\_cm}^{(i)})||$ where $g(.) : \\mathbb{R}^{\\frac{n_s}{c} \\times d} \\rightarrow \\mathbb{R}^{n_s \\times d}$ is a parameterized function. A second objective, attention reconstruction, is a lossy objective that attempts to reconstruct the attention over the model memory instead of losslessly reconstructing the model memory itself.", "id": "6d0e83f8-e9c8-4eb3-9d1a-89a36ebf43f0", "level": "paragraph", "origin_cites_number": 0, "parent_id": "3a6ff1e6-8d3f-41b7-899b-c04469e7b874", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Compressive Transformers" ], [ "paragraph", "Memory Reconstruction" ] ], "subsections": [], "title": "Memory Reconstruction" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 707, 8454 ], "content": "In this section we describe the family of Sparse models. Sparse models typically achieve a high parameter to FLOP ratio by sparsely activating a subset of parameters or activations. It is good to note that while most of the works within the scope of this survey deal with efficient attention, the scope of sparse models goes beyond the attention module, and sparsity is in fact applied more frequently to the feed forward layers~. 
In this section, we discuss the prime variant for Sparse models, i.e., the Mixture-of-Experts based Sparse models which include models such as GShard~, Switch Transformer~ and GLaM~.", "id": "d74b6b7e-1acf-4525-be7f-2f0b21a45203", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "1068ae2e-644b-469f-bc75-6922be1837a6", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Sparse Models" ] ], "subsections": [ "d2e5ca8f-69fa-4690-8ec1-61d368c85d69" ], "title": "Sparse Models" }, { "cite_extract_rate": 0, "cites": [], "content": "The key idea behind MoE is to route token $x_{i}$ to a set of selected experts determined by a routing function. The routing function typically computes a linear combination over experts using the softmax function and can be interpreted as a form of gating mechanism. The top-k gate values are then selected for each token $x_{i}$ and the final output of that layer is determined by a linear combination of the selected top-k experts. This MoE layer remains foundational and fundamental to many MoE architectures, with the exception of certain implementation details. 
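The routing described above can be sketched as follows (a simplified, hypothetical implementation that ignores capacity factors, load-balancing losses, and batched expert dispatch):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_layer(x, W_gate, experts, k=2):
    """Route each token to its top-k experts and mix their outputs.

    x:       (N, d) tokens
    W_gate:  (d, E) router weights, E = number of experts
    experts: list of E callables, each mapping (d,) -> (d,)
    """
    gates = softmax(x @ W_gate)                # (N, E) router probabilities
    out = np.zeros_like(x)
    for i, token in enumerate(x):
        topk = np.argsort(gates[i])[-k:]       # indices of the top-k experts
        weights = gates[i, topk] / gates[i, topk].sum()  # renormalize over top-k
        for w, e in zip(weights, topk):
            out[i] += w * experts[e](token)    # only k experts are evaluated
    return out
```

Only $k$ of the $E$ experts run per token, which is what yields the high parameter-to-FLOP ratio.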
For example, Switch uses a top-1 routing strategy while GShard uses a group-level top-2 gating.", "id": "d2e5ca8f-69fa-4690-8ec1-61d368c85d69", "level": "paragraph", "origin_cites_number": 0, "parent_id": "d74b6b7e-1acf-4525-be7f-2f0b21a45203", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "A Survey of Efficient Transformer Models" ], [ "subsection", "Detailed Walk-through of Efficient Transformer Models" ], [ "subsubsection", "Sparse Models" ], [ "paragraph", "Mixture-of-Experts" ] ], "subsections": [], "title": "Mixture-of-Experts" }, { "cite_extract_rate": 0, "cites": [], "content": "This section explores the state of research pertaining to this class of efficient models.", "id": "568b3cb1-3180-48cc-9b2a-86346db55bc0", "level": "section", "origin_cites_number": 0, "parent_id": "84c962b5-92c2-4d52-a6ce-d3060c601e98", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "Discussion" ] ], "subsections": [ "37fcf857-20c6-44d7-a0e9-f78ff08cb42d", "112ca8cf-1604-4b9f-a8d3-e45e9926df02", "8dd33c6a-c511-4b8f-a7b9-6a64b3094a2f", "91d98cd0-2945-44ee-bd10-bca2c14e83d0" ], "title": "Discussion" }, { "cite_extract_rate": 0.764705882352941, "cites": [ 1453, 7333, 7298, 4731, 793, 451, 1473, 1494, 798, 7, 461, 4743 ], "content": "While the field is bustling with new Transformer models, there is not an easy way to compare these models side by side. Many research papers select their own benchmarks to showcase the abilities of the proposed model. This is also coupled with different hyperparameter settings like model sizes and configurations which can make it difficult to correctly attribute the reason for the performance gains.\nMoreover, some papers conflate this with pretraining~ which makes it even harder to distinguish the relative performance of these different models. 
It is still a mystery which fundamental efficient Transformer block one should consider using.\nOn one hand, there are multiple models that focus on generative modeling, showcasing the ability of the proposed Transformer unit on auto-regressive modeling of sequences. To this end, Sparse Transformers~, Adaptive Transformers~, Routing Transformers~ and Reformers~ are mainly focused on generative modeling tasks. These benchmarks typically involve language modeling and/or pixel-wise image generation on datasets such as wikitext~ and/or ImageNet~/CIFAR~. Models that use segment-based recurrence such as Transformer-XL and Compressive Transformers are also focused on long-range language modeling tasks such as PG-19. \nOn the other hand, a collection of models is mainly focused on encoder-only tasks such as question answering, reading comprehension and/or selections from the GLUE benchmark. For example, the ETC model~ only runs experiments on question answering benchmarks such as NaturalQuestions~ or TriviaQA~. Meanwhile, the Linformer~ focuses on subsets of the GLUE~ benchmark. This split is very natural and intuitive, since models like ETC and Linformer cannot be used in an auto-regressive fashion. This exacerbates the challenges associated with comparing these encoder-only models with the other models.\nThere are models that focus on a balance of both. Longformer~ tries to balance this by running benchmarks on both generative modeling and encoder-only tasks. The Sinkhorn Transformer~ compares on both generative modeling tasks as well as encoder-only tasks. \nAdditionally, it is also worth noting that, although Seq2Seq machine translation (MT) was one of the problems that popularized Transformer models, not many of these efficient Transformer models are evaluated on MT tasks. This is likely because sequence lengths in MT are not long enough to warrant the usage of these models. 
\nWhile generative modeling, GLUE tasks and/or question answering appear to be the common evaluation benchmarks adopted by many of these models, there are several niche benchmarks that a small isolated number of papers choose to evaluate on. For starters, the Performer model~ evaluates on masked language modeling on proteins, deviating from serious head-on comparisons with other efficient Transformer models. The Linear Transformer~ also evaluates on speech recognition, which is a rare benchmark amongst this group of papers. \nThere have been recent attempts to unify evaluation of Efficient Transformers, namely the Long Range Arena (LRA)~, which benchmarked 10 different xformer variants on long-range modeling tasks. It is good to note that LRA was designed for evaluating Transformers in encoder-only mode and does not consider generative (or autoregressive) tasks that require causal masking.", "id": "37fcf857-20c6-44d7-a0e9-f78ff08cb42d", "level": "subsection", "origin_cites_number": 17, "parent_id": "568b3cb1-3180-48cc-9b2a-86346db55bc0", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "Discussion" ], [ "subsection", "On Evaluation" ] ], "subsections": [], "title": "On Evaluation" }, { "cite_extract_rate": 0.7619047619047611, "cites": [ 2576, 4735, 707, 1473, 1494, 798, 1503, 1133, 7333, 4744, 7298, 793, 7371, 4531, 1453, 8454 ], "content": "When matching our broad categorization against the timeline of the introduction of these models, we are able to see the trend that the community is taking towards designing efficient Transformer models. Early work in this area has primarily been focused on more intuitive and simple approaches such as \\textit{fixed patterns}. To this end, most early work in this area is based on block/local patterns such as Image Transformer~, Compressed Attention~, Blockwise Transformer~ or the local windows in Sparse Transformer~. 
\nThe paradigm of factorizing various fixed patterns was first introduced in~ and CCNet~. Around this same time, we start to observe early traces of \\textit{model-memory}-based approaches in both the inducing point methods of the Set Transformer~ and the global nodes of the Star Transformer~ model.\nWe observe that the next wave of models comes in the form of learnable sparsity patterns. Reformer~ and Routing Transformers~ are very similar in the sense that they are models that learn to cluster/bucket tokens before performing attention. The key difference is the means to that end, whereby Reformer uses a hashing function while the Routing Transformer uses online $k$-means for cluster assignment. In parallel, Sinkhorn Transformers~ are also based on the idea of sorting, albeit at the block level. These three models largely follow a similar paradigm of re-arranging sequences for efficient computation of attention scores.\nNext, we then observe several extensions that are largely built off the Sparse Transformer paradigm. The ETC~ and Longformer~ models are very similar ideas that are fundamentally Sparse Transformer extensions. These models incorporate the notion of a global model memory, which is reminiscent of the Set Transformer's inducing point method or the global model memory of the Star Transformer. Modifications to strides, such as using dilated windows, were also proposed in the Longformer work.\nThe most recent wave of models comprises models based on low-rank approximation or kernel methods, e.g., Low-Rank Transformer~, Linformer~, Performer~ and/or Linear Transformers~. However, given the state of evaluation and the high parallelism of research, it is quite unclear whether this low-rank or kernel paradigm is actually better than the learnable pattern (LP) or model-memory-based efficient Transformer models. 
\nMore recently, there have been more models that propose a two-pronged or two-step attention mechanism combining techniques from different approaches. The Long Short Transformer~ is a dynamic form of Linformer combined with Fixed Pattern attention mechanisms. On the other hand, models like Poolingformer also explicitly construct a two-level attention mechanism with techniques reminiscent of memory-based approaches and local attention. Scatter Brain~ is a new work that attempts to unify sparse (fixed pattern) attention with low-rank attention. Two-stage attention mechanisms are also proposed by Luna~.\nOn the side, it is important to note that the recurrence-based models (Transformer-XL and Compressive Transformers) seem to operate orthogonally and are not as directly comparable to the other models. We also observe that Sparse models~, which are not only applicable to attention modules, have recently been emerging, becoming more popular, and have demonstrated considerable success in recent months~.", "id": "112ca8cf-1604-4b9f-a8d3-e45e9926df02", "level": "subsection", "origin_cites_number": 21, "parent_id": "568b3cb1-3180-48cc-9b2a-86346db55bc0", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "Discussion" ], [ "subsection", "On Model Design Trends" ] ], "subsections": [], "title": "On Model Design Trends" }, { "cite_extract_rate": 0.807692307692307, "cites": [ 4743, 4752, 4745, 7567, 1150, 4749, 4514, 4485, 4751, 4750, 856, 7366, 681, 4519, 7660, 8835, 8361, 4747, 4746, 2488, 4748 ], "content": "While this paper is mainly focused on (1) the computational and memory complexity of the self-attention module and (2) sparsity and adaptive computation, we briefly summarize several orthogonal efforts that may also contribute to model efficiency, scalability, and overall usability of Transformer models. \n\\begin{itemize}\n\\item \\textbf{Weight Sharing} - Sharing parameters of the Transformer models would help in reducing overall model size. 
The Universal Transformers~ tie attention and transition weights across layers. Similarly, Albert~ does the same parameter sharing across layers. On the other hand, the Quaternion Transformer~ proposes a weight sharing scheme inspired by Hamilton products that locally shares the components in the linear transformation layers.\n\\item \\textbf{Quantization / Mixed Precision} - Learning mixed precision models has the potential to improve memory costs. Q-BERT~ is a model that quantizes Transformer models to ultra-low precision. Meanwhile, mixed precision training~ is a highly popular technique to reduce the memory costs of training Transformers. Quantization-aware training has also been applied to Transformer models~. \n\\item \\textbf{Inference-time Efficiency and Network Pruning} - Multiple research directions explore improving Transformer efficiency at inference time. One prime example is network pruning. An example is to prune attention heads during inference~. This has been shown to cause minimal degradation of performance on downstream tasks. On the other hand, a ``block'' pruning approach has been proposed~ which can make a Transformer 2.4x faster with little loss in predictive performance on language tasks. Another line of work involves fast exiting during inference, which allows us to skip computation if the model is confident in its predictions~. \n\\item \\textbf{Knowledge Distillation} - \nKnowledge distillation (KD)~ has been a useful technique for transferring the knowledge learned from a larger teacher model to a smaller student model. The smaller model can then be efficiently deployed into production. There have been many attempts to distill large Transformer models. For example, DistilBERT~, task-specific distillation~ and TinyBERT~.\n\\item \\textbf{Neural Architecture Search (NAS)} -\nSearching for more efficient Transformer architectures is also a common strategy. 
One line of work~ proposed the Neural Architecture Transformer (NAT), using NAS to search for more compact and efficient Transformers by removing redundant operations. Another~ proposed HAT (Hardware-aware Transformers), a method that leverages NAS and uses hardware efficiency feedback as a reward signal.\n\\item \\textbf{Task Adapters} - This line of research has been primarily focused on the problem of fine-tuning large Transformers on $T$ tasks while aiming to reuse parameters across a variety of tasks. The key idea is that task adapters~ enable the reuse of parameters across tasks and remove the need to serve $T$ models in production - resulting in overall parameter savings. A modest number of models have been proposed, such as PALS~, MAD-X~ and HyperGrid~.\n\\item \\textbf{Alternative Architectures} - A considerable amount of effort has gone into designing Transformer alternatives. Amongst the many alternatives considered, a prominent line of emerging research belongs to the family of MLP Mixers~. Different mixing operations have been proposed, such as the G-MLP~ and FNet~. Synthesizers~, although commonly referred to as an efficient attention method, are also an early manifestation of the mixer line of work, as the random matrices similarly act as an unconditioned mixing operation. A recent promising line of work based on Structured State Spaces~ has also demonstrated very promising results on long-range modeling. Lastly, convolutional models are generally more efficient than Transformers since convolutional kernels operate on a fixed, small local neighborhood around the input token. 
Recent work~ shows that, when pre-trained, these more efficient convolutional models can sometimes match the predictive performance of Transformer ones.\n\\end{itemize}", "id": "8dd33c6a-c511-4b8f-a7b9-6a64b3094a2f", "level": "subsection", "origin_cites_number": 26, "parent_id": "568b3cb1-3180-48cc-9b2a-86346db55bc0", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "Discussion" ], [ "subsection", "Brief Discussion on Orthogonal Efficiency Efforts" ] ], "subsections": [], "title": "Brief Discussion on Orthogonal Efficiency Efforts" }, { "cite_extract_rate": 0.7307692307692301, "cites": [ 4753, 4735, 7840, 707, 4743, 4737, 1503, 1182, 7660, 4732, 7333, 8835, 7298, 7841, 793, 1488, 1501, 8384, 1499 ], "content": "With our timely V2 update of this survey (updated December 2021), we present retrospective thoughts about how the field has evolved over the past year or so. Since the last update, it is undeniable that more xformer variants have emerged to offer more efficient alternatives to vanilla Transformers.\nNotably, examples of these include Nystr\\\"{o}mformer~, Perceiver~, RFA~, Luna~ and Long Short Transformer~. There were also other notable models that sprang up around the time when this manuscript was published and narrowly missed inclusion in the first edition (e.g., Funnel Transformer~). Amongst all the new xformer variants, it is good to note that most do not stray away from the fundamental concepts presented in the first version. Our taxonomy and categorization were more or less broad enough to capture many of these models, as they use fundamental ideas that are already present in existing work and therefore can be categorized appropriately. \nMany works can be thought of as explicit combinations of existing techniques (two-staged or combinations of two method classes) or improvements over existing methods (a dynamic formulation of Linformer's low-rank projection, or better kernels for Linear Transformers). 
Even though many existing \\textit{`memory'} models utilize a form of downsampling to achieve a speed and efficiency gain, we erected a new categoriation of \\textit{`downsampling'} to better reflect this new emerging trend~.\nOver the past year, it is evident that a lot of research investment have been poured into making quadratic attention scalable, in terms of complexity, or sometimes memory. \nAt this juncture, it is good to ponder about real tangible need for linear-time attention. Many applications even in language and vision are still dominated by vanilla Transformers with quadratic attention and \\emph{none of these xformer variants have caught on as the defacto standard}. There might be multiple explanations from multiple angles for this phenomena. Firstly, linear attention (e.g., Performer) models struggle to be competitive on common benchmarks, as noted from multiple sources~. \nIt is good to note that, apart from toy setups or specific domains and problems, they have never been battle-tested against common paradigms like pretrain-and-finetuning only up till recently. Meanwhile, local attention models based on fixed and/or learned patterns such as Sparse Transformers~, Longformer~, ETC~ or BigBird~ have seen more reasonable usage, especially within the areas of long context question answering. 
\nHowever, the high intrinsic implementation complexity of methods such as ETC~ (which substantially increases code complexity by having so many different directions of attention), Swin Transformer~ or Longformer~ (which require custom CUDA kernels, making them prohibitive on hardware such as TPUs) might be a reason why these models have yet to find themselves serving as a good, simple-to-use drop-in Transformer replacement.\nAs noted by~, for applications that need to flex on sequence length and memory needs from time to time, it might suffice to \\textit{`just sequentially process it'}, even if that might not be inherently as satisfying as finding a theoretical approximation. In parallel,~ suggests that local attention, when done right, can be a really tough baseline to beat. \nA notable fact about the barrage of efficient attention models is the overloading of the term \\textit{efficient}. It is commonly misunderstood that efficient attention models always imply that the Transformer is fast. The truth is that many of these efficient attention models, owing to their innovation constraints, may make the model much slower. Moreover, many linear attention models do not observe any speed or memory gain at all if the sequence length is short. Many of them have extraordinarily painful requirements to achieve causal masking (or TPU packing)~ and often have to substantially trade off throughput for linear complexity. On the other hand, some models cannot be packed or causally masked at all. More notes and discussions about this efficiency misnomer can be found in this paper~, which we encourage readers to also peruse.\nThis update also extends the original scope of efficient attention based xformer models to sparse models, even if they did not necessarily target the attention modules. We believe that sparse models were a necessary addition to the scope of this paper given their recent signs of promise. 
A special note was made to recognize the work done in alternative architectures in the past year (in the section on orthogonal directions). Mixer-type architectures have garnered some interest in computer vision but seem to not perform well on language. Meanwhile, alternative models based on Structured State Spaces such as S4 have solved the hardest Path-X task in the Long Range Arena benchmark. It will be exciting to see how a model such as S4 would perform at scale, and under pretrained conditions.\nAs the year comes to a close and as we reflect back on the amazing advances made by the community, we begin to ponder the future of \\textit{efficient transformers} and what the ideal transformer model should look like. We think that the ideal xformer should take care of the quadratic memory problem, while retaining universality (e.g., do well on most tasks and not only on long-range tasks). The ideal xformer should also not trade off speed for memory and should not sacrifice the ability to be TPU-packed and/or causally masked. It should ideally be simple and not make use of rigid hard-coding or excessive engineering, i.e., it should be elegant and scale well. Ideally, efficiency would be baked right into the next generation of Transformers instead of always having a side variant that one could use for long-context tasks. While we cannot explicitly point at any of the xformer variants as the definitive one that has solved the efficiency problem in Transformers, we are optimistic that, given the pace of advance, \\textit{the} true xformer will emerge eventually. 
It is then a question of whether that new xformer will still be a Transformer.", "id": "91d98cd0-2945-44ee-bd10-bca2c14e83d0", "level": "subsection", "origin_cites_number": 26, "parent_id": "568b3cb1-3180-48cc-9b2a-86346db55bc0", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "Discussion" ], [ "subsection", "A Retrospective on the Past Year and Future Research Directions" ] ], "subsections": [], "title": "A Retrospective on the Past Year and Future Research Directions" }, { "cite_extract_rate": 0, "cites": [], "content": "In this paper we surveyed the literature on efficient Transformer models especially pertaining to the quadratic complexity of the self-attention module. We provided a taxonomy and high-level abstraction of the core techniques employed in this class of new models. We characterized the existing models based on techniques and provided a comprehensive walkthrough of several of the efficient Transformer models. Finally, we discussed the evaluation landscape of these models along with their design trends. We ended off with a brief discussion of other parallel orthogonal efforts that may improve the efficiency of Transformer models in general. \textbf{Note:} This survey may be revised again bi-annually or annually. Feel free to send feedback to our email address. While we may not reply to all, we certainly would read them. We also welcome anonymous feedback to \url{https://forms.gle/kqjmhSDEQrmL4Egk6}.\n\acks{The authors would like to thank the numerous authors who sent us feedback via email. We tried our best to incorporate most of the suggestions as we saw fit. 
We also thank Tamas Sarlos for feedback on this manuscript.}\n\\newpage\n\\vskip 0.2in\n\\bibliography{references}\n\\end{document}", "id": "aac8d8dc-16e0-4e77-bff5-61ffbdf5ecd0", "level": "section", "origin_cites_number": 0, "parent_id": "84c962b5-92c2-4d52-a6ce-d3060c601e98", "prefix_titles": [ [ "title", "Efficient Transformers: A Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
125
[ 679, 1465, 1473, 7360, 798, 38, 7333, 7298, 1488, 7, 4730, 1453, 8384, 1472, 4731, 9, 1461, 57, 707, 4732, 8454, 793, 1484, 4735, 7840, 1494, 7366, 1133, 1503, 1182, 1482, 1490, 4733, 7370, 4734, 1499, 7053, 4737, 4736, 1451, 4531, 1501, 2576, 4738, 1523, 4739, 4740, 8834, 4741, 4742, 7660, 451, 461, 4743, 4744, 7371, 4752, 4745, 7567, 1150, 4749, 4514, 4485, 4751, 4750, 856, 681, 4519, 8835, 8361, 4747, 4746, 2488, 4748, 4753, 7841 ]
1.153391
[ "Bo Han", "Quanming Yao", "Tongliang Liu", "Gang Niu", "Ivor W. Tsang", "James T. Kwok", "Masashi Sugiyama" ]
A Survey of Label-noise Representation Learning: Past, Present and Future
2020
2020-11-09T13:16:02Z
cs.LG
Classical machine learning implicitly assumes that labels of the training data are sampled from a clean distribution, which can be too restrictive for real-world scenarios. However, statistical-learning-based methods may not train deep learning models robustly with these noisy labels. Therefore, it is urgent to design Label-Noise Representation Learning (LNRL) methods for robustly training deep models with noisy labels. To fully understand LNRL, we conduct a survey study. We first clarify a formal definition for LNRL from the perspective of machine learning. Then, via the lens of learning theory and empirical study, we figure out why noisy labels affect deep models' performance. Based on the theoretical guidance, we categorize different LNRL methods into three directions. Under this unified taxonomy, we provide a thorough discussion of the pros and cons of different categories. More importantly, we summarize the essential components of robust LNRL, which can spark new directions. Lastly, we propose possible research directions within LNRL, such as new datasets, instance-dependent LNRL, and adversarial LNRL. We also envision potential directions beyond LNRL, such as learning with feature-noise, preference-noise, domain-noise, similarity-noise, graph-noise and demonstration-noise.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "8584763c-0388-415b-b9cf-b52c55e5eeb8", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ] ], "subsections": [ "8f91a9db-0932-4ad2-8d81-8a09c1a2a034", "49424193-a97c-418b-973a-fb5cf7ba8945", "0631056c-9b2b-4b86-b30a-fc08a6ef0620", "75c16b96-db92-4306-8203-f235a688b5ca", "71193050-fa8c-4cca-acf2-cdfbe4b84da7", "6c23077d-879e-4355-a4b2-e8036af47caf", "e9c1d328-7cdf-466e-af77-cbf803cadee3", "7fc05d3a-d347-4130-a07b-b2c8d5945c56", "8583ac9c-4327-4ed2-a97f-e59cc1711cc5", "e3c66c9a-8067-4068-981d-ba74f39dbf1c", "5aa2deed-36b5-4ac8-be9c-4fe127b1b643" ], "title": "root" }, { "cite_extract_rate": 0.28571428571428503, "cites": [ 3340, 4451 ], "content": "\\label{sec:introduction}}\n\\IEEEPARstart{``H}{ow} can a learning algorithm cope with incorrect training examples?'' \nThis is the question raised in Dana\nAngluin's paper entitled ``Learning From Noisy Examples'' \nin 1988~. \nShe made the statement that, ``when the teacher may make independent random errors in classifying the example data, \nthe strategy of selecting the most consistent rule for the sample is sufficient, and usually requires a feasibly small number of examples, \nprovided noise affects less than half the examples on average''. \nIn other words, she claimed that a learning algorithm can cope with incorrect training examples, \nonce the noise rate is less than one half under the random noise model. \nOver the last 30 years, her seminal research opened a new door to machine learning, \nsince standard machine learning assumes that the label information is fully clean and intact. 
\nMore importantly, her research reflected the real-world environment, as labels or annotations are often noisy and imperfect in real scenarios.\nFor example, the surge of deep learning dates from 2012, when Geoffrey Hinton's team leveraged AlexNet (i.e., deep neural networks)~ to win the ImageNet challenge~ by a clear margin. However, due to the huge quantity of data, the ImageNet-scale dataset was necessarily annotated by distributed workers in Amazon Mechanical Turk~\footnote{\url{https://www.mturk.com/}}. Due to their limited knowledge, distributed workers cannot annotate specific tasks with 100\% accuracy, which naturally brings noisy labels. Another vivid example lies in medical applications, where datasets are typically small. However, it requires domain expertise to label medical data, which often suffers from high inter- and intra-observer variability, leading to noisy labels. Note that noisy labels cause wrong model predictions, which might further influence decisions that impact human health negatively. Lastly, noisy labels are ubiquitous in speech domains, e.g., Voice-over-Internet-Protocol (VoIP) calls~. In particular, due to unstable network conditions, VoIP calls are easily prone to various speech impairments, whose causes must be identified through user feedback. Such user feedback can be viewed as the cause labels, which are highly noisy, since most users lack the domain expertise to accurately articulate the impairment in the perceived speech.\nAll the above noisy cases stem from daily life and cannot be avoided. Therefore, it is urgent to build a robust learning algorithm for handling noisy labels with theoretical guarantees. 
In this survey paper, we term such a robust learning paradigm \emph{label-noise learning}, and the noisy training data $(x,\bar{y})$ is sampled from a corrupted distribution $p(X,\bar{Y})$, where we assume that the features are intact but the labels are corrupted.\nAs far as we know, label-noise learning spans two important ages in machine learning: statistical learning (i.e., shallow learning) and representation learning (i.e., deep learning). In the age of statistical learning, label-noise learning focused on designing noise-tolerant losses or unbiased risk estimators~. However, in the age of representation learning, label-noise learning has more options to combat noisy labels, such as designing biased risk estimators or leveraging memorization effects of deep networks~.", "id": "8f91a9db-0932-4ad2-8d81-8a09c1a2a034", "level": "section", "origin_cites_number": 7, "parent_id": "8584763c-0388-415b-b9cf-b52c55e5eeb8", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Introduction" ] ], "subsections": [ "d5f540df-029d-43dd-a7e5-321601e3f096", "7766badc-add8-4639-96b5-b3218a5670f0", "2da140bc-2788-4e0b-bd79-619ad3d1062b" ], "title": "Introduction" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 4452, 4453 ], "content": "Label-noise representation learning has become very important for both academia and industry. There are two reasons behind this. First, from the essence of the learning paradigm, deep supervised learning requires a lot of well-labeled data, which can be too costly, especially for many start-ups. However, deep unsupervised learning (even self-supervised learning) is too immature to work very well in complex real-world scenarios. Therefore, as deep weakly-supervised learning, label-noise representation learning has naturally attracted much attention and has become a hot topic. 
Second, from the aspect of data, many real-world scenarios lack purely clean annotations, such as financial data, web data, and biomedical data. These have directly motivated researchers to explore label-noise representation learning.\nAs far as we know, there exist three pioneering surveys related to label noise. \nFrenay and Verleysen~ focused on discussing label-noise statistical learning, \ninstead of label-noise representation learning.\nAlthough Algan et al.~ and Karimi et al.~ focused on deep learning with noisy labels, \nboth of them only considered image (or medical image) classification tasks. \nMoreover, their surveys were written from the applied perspective, instead of discussing methodology and its underlying theory.\nTo compensate for them and go beyond, we want to contribute to the label-noise representation learning area as follows.\n\footnote{An up-to-date list of papers related to label-noise representation learning is here: \url{https://github.com/bhanML/label-noise-papers}.}\n\begin{itemize}[leftmargin=*]\n\item From the perspective of machine learning, we give the formal definition for label-noise representation learning (LNRL). \nThe definition is not only general enough to include the existing LNRL, but also specific enough to clarify what the goal of LNRL is and how we can solve it.\n\item Via the lens of learning theory, \nwe provide a deeper understanding of why noisy labels affect the performance of deep models. \nMeanwhile, we report the generalization of deep models under noisy labels, which coincides with our theoretical understanding.\n\item We perform an extensive literature review from the age of representation learning, \nand categorize existing works in a unified taxonomy in terms of data, objective and optimization. \nThe pros and cons of different categories are analyzed. 
\nWe also present a summary of insights for each category.\n\\item Based on the above observations, we can spark new directions in label-noise representation learning. Beyond label-noise representation learning, we propose several promising future directions, \nsuch as learning with noisy features, preferences, domains, similarities, graphs, and demonstrations. We hope they can provide some insights.\n\\end{itemize}", "id": "d5f540df-029d-43dd-a7e5-321601e3f096", "level": "subsection", "origin_cites_number": 3, "parent_id": "8f91a9db-0932-4ad2-8d81-8a09c1a2a034", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Introduction" ], [ "subsection", "Motivation and Contribution" ] ], "subsections": [], "title": "Motivation and Contribution" }, { "cite_extract_rate": 0.75, "cites": [ 4452, 4453, 4454 ], "content": "The position of this survey is explained as follows.\nFrenay and Verleysen~ mainly summarized the methods of label-noise statistical learning (LNSL), which cannot be used for deep learning models directly. Note that although both the LNSL and LNRL approaches address the same problem setting, they are fundamentally different. First, the underlying theories should be different due to different hypothesis space (see Section~\\ref{sec:thm:obj}); Second, the potential solution should be different due to different models (see Section~\\ref{sec:opt}). Meanwhile, LNSL may fail to handle large-scale data with label noise, while LNRL is good at handling such data.\nAlthough Algan et al.~ and Karimi et al.~ respectively summarized some methods of label-noise representation learning, both of them discussed from the perspective of applications, i.e., (medical) image analysis. Recently, Song et al.~ summarized some methods of label-noise representation learning from the view of methodology. However, their categorization is totally different from ours in philosophy. 
In our survey, we first introduce label-noise representation learning from three general views: input data, objective functions and optimization policies, with more theoretical understanding.", "id": "7766badc-add8-4639-96b5-b3218a5670f0", "level": "subsection", "origin_cites_number": 4, "parent_id": "8f91a9db-0932-4ad2-8d81-8a09c1a2a034", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Introduction" ], [ "subsection", "Position of the Survey" ] ], "subsections": [], "title": "Position of the Survey" }, { "cite_extract_rate": 0, "cites": [], "content": "The remainder of this survey is organized as follows. Section~\\ref{sec:related} provides the related literature of label-noise learning, and the full version can be found in Appendix~1.\nSection~\\ref{sec:overview} provides an overview of the survey, \nincluding the formal definition of LNRL, core issues, and a taxonomy of existing works in terms of data, objectives and optimizations.\nSection~\\ref{sec:data} is for methods that leverage the noise transition matrix to solve LNRL. \nSection~\\ref{sec:obj} is for methods that modify the objective function to make LNRL feasible. \nSection~\\ref{sec:opt} is for methods that leverage the characteristics of deep networks to address LNRL. \nIn Section~\\ref{sec:fuworks}, we propose future directions for LNRL. Beyond LNRL, the survey discloses several promising future directions. 
We conclude this survey in Section~\\ref{sec:conclusions}.", "id": "2da140bc-2788-4e0b-bd79-619ad3d1062b", "level": "subsection", "origin_cites_number": 0, "parent_id": "8f91a9db-0932-4ad2-8d81-8a09c1a2a034", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Introduction" ], [ "subsection", "Organization of the Survey" ] ], "subsections": [], "title": "Organization of the Survey" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:related}\nWe divide the development of label-noise learning into three stages as follows. Note that the full version of related literature can be found in Appendix 1.", "id": "49424193-a97c-418b-973a-fb5cf7ba8945", "level": "section", "origin_cites_number": 0, "parent_id": "8584763c-0388-415b-b9cf-b52c55e5eeb8", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Related Literature" ] ], "subsections": [ "e51e8a4c-20a9-4b3e-bfc1-c3a76092ab77", "d53a14f2-40a6-407f-87a3-8bc8c9e37b1b", "d193ddb4-8216-4df5-ae96-889c7d05b2a7" ], "title": "Related Literature" }, { "cite_extract_rate": 0.375, "cites": [ 3454, 8734, 3453 ], "content": "Before delving into label-noise representation learning, \nwe give a brief overview of some milestone works in label-noise statistical learning. In 1988, Angluin et al.~ proved that a learning algorithm can handle incorrect training examples robustly, when the noise rate is less than one half under the random noise model. Lawrence and Sch\\\"olkopf~ constructed a kernel Fisher discriminant to formulate the label-noise problem as a probabilistic model. Bartlett et al.~ justified that most loss functions are not completely robust to label noise. This means that classifiers based on label-noise learning algorithms are still affected by label noise.\nDuring this period, a lot of works emerged and contributed to this area. 
For example, Crammer et al.~ proposed the online Passive-Aggressive perceptron algorithm to cope with label noise. Natarajan et al.~ formally formulated an unbiased risk estimator for binary classification with noisy labels. This work was very important to the area, since it was the first work to provide guarantees for risk minimization under random label noise. Meanwhile, Scott et al.~ studied the classification problem under the class-conditional noise model, and proposed a way to handle asymmetric label noise. In contrast, van Rooyen et al.~ proposed the unhinge loss to tackle symmetric label noise. Liu and Tao~ proposed a method using anchor points to estimate the noise rate, and further leveraged importance reweighting to design surrogate loss functions for class-conditional label noise.\nIn 2015, research on label-noise learning shifted from statistical learning to representation learning, since deep learning models became mainstream due to their better empirical performance. Therefore, it is urgent to design label-noise representation learning methods for robustly training deep models with noisy labels.", "id": "e51e8a4c-20a9-4b3e-bfc1-c3a76092ab77", "level": "subsection", "origin_cites_number": 8, "parent_id": "49424193-a97c-418b-973a-fb5cf7ba8945", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Related Literature" ], [ "subsection", "Early Stage" ] ], "subsections": [], "title": "Early Stage" }, { "cite_extract_rate": 0.785714285714285, "cites": [ 3340, 4139, 4130, 7162, 4183, 4145, 4455, 7773, 4128, 7191, 4136 ], "content": "There are three seminal works in label-noise representation learning with noisy labels from 2015. For example, Sukhbaatar et al.~ introduced an extra but constrained linear ``noise'' layer on top of the softmax layer, which adapts the network outputs to model the noisy label distribution. 
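The noise-layer idea just described can be sketched as composing the model's clean-class posterior with a row-stochastic transition matrix $T$, where $T_{ij} = p(\bar{y}=j \mid y=i)$. The snippet below is a hedged toy illustration (hand-picked 3-class symmetric noise values, not code or numbers from the original paper):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Row-stochastic noise transition matrix: T[i, j] = p(noisy label j | clean label i).
# Illustrative values: 30% symmetric label flipping over 3 classes.
T = np.array([[0.70, 0.15, 0.15],
              [0.15, 0.70, 0.15],
              [0.15, 0.15, 0.70]])

logits = np.array([[2.0, 0.1, -1.0]])  # raw network outputs for one example
p_clean = softmax(logits)              # modeled clean posterior p(y | x)
p_noisy = p_clean @ T                  # "noise layer": p(noisy j | x) = sum_i p(y=i|x) T[i, j]

# Training minimizes cross-entropy of p_noisy against the *observed noisy* label,
# so the extra layer absorbs the noise while the base model fits clean classes.
noisy_label = 0
loss = -np.log(p_noisy[0, noisy_label])
```

Because the rows of $T$ sum to one, the adapted outputs remain a valid distribution, and gradients flow through $T$ back into the clean-label model.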
Reed et al.~ augmented the prediction objective with the notion of consistency via soft and hard bootstrapping. Intuitively, this bootstrapping procedure allows the learner to disagree with an inconsistent training label and re-label the training data to improve its label quality. Azadi et al.~ proposed an auxiliary image regularization technique, which exploits the mutual context information among training data, and encourages the model to select reliable labels.\nFollowing the seminal works, Goldberger et al.~ introduced a nonlinear ``noise'' adaptation layer on top of the softmax layer. Patrini et al.~ proposed the forward and backward loss correction approaches simultaneously. Both Wang et al.~ and Ren et al.~ leveraged the same philosophy, namely data reweighting, to learn with label noise. Jiang et al.~ was the first to leverage small-loss tricks to handle label noise. However, they trained only a single network iteratively, which inherits the accumulated error. To alleviate this, Han et al.~ trained two deep neural networks, and each network backpropagated the data selected by its peer network and updated itself.\nIn the context of representation learning, classical methods, such as estimating the noise transition matrix, regularization and designing losses, remain popular for handling label noise. For instance, Hendrycks et al.~ leveraged trusted examples to estimate the gold transition matrix, which approximates the true transition matrix well. Han et al.~ proposed a ``human-in-the-loop'' idea to easily estimate the transition matrix. Zhang et al.~ introduced an implicit regularization called mixup, which constructs virtual training data by linear interpolations of features and labels in training data. Zhang et al.~ generalized both the categorical cross entropy loss and mean absolute error loss by the negative Box-Cox transformation. 
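The mixup regularizer mentioned above fits in a few lines. Below is a minimal sketch with toy shapes and a hypothetical `mixup` helper name (not the authors' code): a virtual example is a convex combination of two real ones, with features and one-hot labels mixed by the same Beta-sampled weight.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    # mixup: build a virtual example as a convex combination of two real
    # examples; features and one-hot labels share the same mixing weight.
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=(2, 5))  # two toy feature vectors
y1 = np.array([1.0, 0.0])         # one-hot labels for a 2-class toy problem
y2 = np.array([0.0, 1.0])
x_mix, y_mix = mixup(x1, y1, x2, y2, rng=rng)
```

The mixed label still sums to one, so the usual cross-entropy objective applies unchanged to the virtual example.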
Ma et al.~ developed a dimensionality-driven learning strategy, which can learn robust low-dimensional subspaces capturing the true data distribution.", "id": "d53a14f2-40a6-407f-87a3-8bc8c9e37b1b", "level": "subsection", "origin_cites_number": 14, "parent_id": "49424193-a97c-418b-973a-fb5cf7ba8945", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Related Literature" ], [ "subsection", "Emerging Stage" ] ], "subsections": [], "title": "Emerging Stage" }, { "cite_extract_rate": 0.818181818181818, "cites": [ 4456, 4135, 4253, 4182, 3342, 7133, 7781, 4457, 7774 ], "content": "\label{sec:intro:mat}\nSince 2019, label-noise representation learning has flourished in top conference venues. Arazo et al.~ formulated clean and noisy samples as a two-component (clean-noisy) beta mixture model on the loss values. Hendrycks et al.~ empirically demonstrated that pre-training can improve model robustness against label corruption for large-scale noisy datasets. Under the criterion of balanced error rate (BER) minimization, Charoenphakdee et al.~ proposed the barrier hinge loss. In contrast to selecting samples via small-loss tricks, Thulasidasan et al.~ introduced abstention-based training, which allows deep networks to abstain from learning on confusing samples but to learn on non-confusing samples. Following the re-weighting strategy, Shu et al.~ parameterized the weighting function adaptively as a one-layer multilayer perceptron called Meta-Weight-Net.\nMenon et al.~ mitigated the effects of\nlabel noise from an optimization lens, which naturally introduced the partially Huberised loss. Nguyen et al.~ proposed a self-ensemble label filtering method to progressively filter out the wrong labels during training. 
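Several of the methods in this stage (like the small-loss tricks mentioned earlier) share a simple core: rank examples by their current loss and treat the small-loss fraction as probably clean. A toy sketch of that selection step, under the assumption that per-example losses are already computed (illustrative only, not any paper's implementation):

```python
import numpy as np

def small_loss_selection(losses, forget_rate):
    # Keep the (1 - forget_rate) fraction of examples with the smallest loss;
    # memorization effects suggest these are the most likely to be clean.
    n_keep = int((1.0 - forget_rate) * len(losses))
    return np.argsort(losses)[:n_keep]

# Toy per-example losses: the two large values stand in for mislabeled data.
losses = np.array([0.1, 2.5, 0.3, 0.2, 3.1, 0.6])
keep = small_loss_selection(losses, forget_rate=0.5)
# keep -> the indices of the 3 smallest losses
```

In practice the forget rate is scheduled over epochs (starting near zero while the network has not yet memorized noise), and the selected subset is what each training step backpropagates on.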
Li et al.~ modeled the per-sample loss distribution with a mixture model to dynamically divide the training data into a labeled set with clean samples and an unlabeled set with noisy samples. Lyu et al.~ proposed a provable curriculum loss, which can adaptively select samples for robust stagewise training. Han et al.~ proposed a versatile approach called scaled stochastic integrated gradient underweighted ascent (SIGUA). SIGUA uses stochastic gradient descent on good data, while using scaled stochastic gradient ascent on bad data rather than dropping those data. Five years after the birth of \textit{Clothing1M}, Jiang et al.~ proposed a new but realistic type of noisy dataset called ``web-label noise'' (or \textit{red noise}).", "id": "d193ddb4-8216-4df5-ae96-889c7d05b2a7", "level": "subsection", "origin_cites_number": 11, "parent_id": "49424193-a97c-418b-973a-fb5cf7ba8945", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Related Literature" ], [ "subsection", "Flourished Stage" ] ], "subsections": [], "title": "Flourished Stage" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:overview}\nIn this section, we first provide the notation used throughout the paper in Section~\ref{sec:notation}. \nA formal definition of the LNRL problem is given in Section~\ref{sec:pro-def} with concrete examples. As the LNRL problem relates to many machine learning problems, we discuss their relatedness and differences in Section~\ref{sec:rl-pro}. In Section~\ref{sec:core-issue}, we reveal the core issues that make the LNRL problem hard. 
Then, according to how existing works handle the core issues, we present a unified taxonomy in Section~\\ref{sec:theorem}.", "id": "0631056c-9b2b-4b86-b30a-fc08a6ef0620", "level": "section", "origin_cites_number": 0, "parent_id": "8584763c-0388-415b-b9cf-b52c55e5eeb8", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Overview" ] ], "subsections": [ "f8390a02-ef0e-4fe2-8beb-c43e76a2460c", "b3a21496-800f-4f0a-881b-d3aec107b3de", "7b9dd4ab-a69b-48f1-a78f-013060d44af2", "97fe083e-72eb-430b-97b3-31ebe3822d18", "4bcba53a-4669-4302-85dd-bd4ebe0c203f", "16fef8bc-4031-464d-8299-ecae7d1eb048" ], "title": "Overview" }, { "cite_extract_rate": 1, "cites": [ 4458 ], "content": "\\label{sec:notation}\nLet $x$ be features and $y$ be labels. Consider a supervised learning task $T$: LNRL deals with a data set $\\mathcal{D} = \\{\\bar{\\mathcal{D}}^{\\text{tr}},\\mathcal{D}^{\\text{te}}\\}$ consisting of training set $\\bar{\\mathcal{D}}^{\\text{tr}} = \\{(x_i,\\bar{y}_i)\\}_{i=1}^N$ and test set $\\mathcal{D}^{\\text{te}} = \\{x^{\\text{te}}\\}$, \nwhere training set $\\bar{\\mathcal{D}}^{\\text{tr}}= \\{(x_i,\\bar{y}_i)\\}_{i=1}^N$ is independently drawn from a corrupted distribution $\\bar{D} = p(X,\\bar{Y})$ ($\\bar{Y}$ denotes the corrupted label).\nNote that ($X,Y$) denotes the variable, while ($x,y$) denotes its sampled value.\nFor the corrupted distribution $p(X,\\bar{Y})$, we assume that the features are intact but the labels are corrupted. Let $p(X, Y)$ be the ground-truth (i.e., non-corrupted) joint probability distribution of features $x$ and label $y$, \nand $f^*$ be the (Bayes) optimal hypothesis from $x$ to $y$. \nTo approximate $f^*$, \nthe objective requires a hypothesis space $\\mathcal{H}$ of hypotheses \n$f_{\\theta}(\\cdot)$ parameterized by $\\theta$. 
\nAn algorithm contains the\noptimization policy to search through $\\mathcal{H}$ in order to find $\\theta^*$ that corresponds to the optimal function in the hypothesis for $\\bar{\\mathcal{D}}^{\\text{tr}}$: $f_{\\theta^*} \\in \\mathcal{H}$.\nIntuitively, LNRL learns to discover $f_{\\theta^*}$ by fitting $\\bar{\\mathcal{D}}^{\\text{tr}}$ robustly, which can assign correct labels for $\\mathcal{D}^{\\text{te}}$. LNRL methods robustly train deep neural networks with noisy labels, \nwhere hypotheses $f_{\\theta}(\\cdot)$ can be modeled by deep neural networks. Since the hypothesis space $\\mathcal{H}$ is sufficiently complex for deep neural networks, $f_{\\theta^*} \\in \\mathcal{H}$ is expected to approximate the Bayes optimal $f^*$ well~. \n\\begin{table*}[!tp]\n\\centering\n\\caption{Three LNRL examples based on Definition 2.2.}\n\\vspace{-10px}\n\\begin{tabular}{c|c|c}\n\t\\hline\n\t$T$ & $E = (x,y)$ & $P$ \\\\\\hline\n\tweb-scale image classification & (ImageNet, crowdsourced labels) & test accuracy \\\\\\hline\n\tintelligent healthcare & (medical data, annotations by variability) & error rate \\\\\\hline\n\tservice call analysis & (perceived speech, user rating) & quality rate of call \\\\\\hline\n\\end{tabular}\n\\label{tab:LNRL-exp}\n\\end{table*}", "id": "f8390a02-ef0e-4fe2-8beb-c43e76a2460c", "level": "subsection", "origin_cites_number": 1, "parent_id": "0631056c-9b2b-4b86-b30a-fc08a6ef0620", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Overview" ], [ "subsection", "Notation" ] ], "subsections": [], "title": "Notation" }, { "cite_extract_rate": 0.5, "cites": [ 895, 892, 4115 ], "content": "\\label{sec:pro-def}\nAs LNRL is naturally a sub-area in machine learning, \nbefore giving the definition of LNRL, let us recall how machine learning is defined in literature.\nWe borrow Tom Mitchell's definition here, \nwhich is shown in 
Definition~\\ref{def:clsml}.\n\\begin{definition}\n\\label{def:clsml}\n(Machine Learning~).\n A computer program is said to learn from experience $E$ with respect to some classes of task $T$ and performance measure $P$ if its performance can improve with $E$ on $T$ measured by $P$.\n\\end{definition}\nThe above definition is quite classical, which has been widely adopted in the machine learning community.\nIt means that a machine learning problem is defined by three key components: $E$, $T$ and $P$. \nFor instance, consider a speech recognition task ($T$, e.g., \nApple Siri\\footnote{\\url{https://en.wikipedia.org/wiki/Siri}}), \nmachine learning programs can improve its recognition accuracy ($P$) via training with a large-scale speech data set ($E$) offline.\nAnother example of $T$ is the hot topic in the security area, \ncalled empirical defense~. \nIn a high-level, \nmachine learning algorithms can make deep neural networks defensive against malicious cases. \nSpecifically, a stop sign crafted by malicious people may cause an accident to autonomous vehicles, \nwhich employ deep neural networks to recognize the sign. \nHowever, \nafter adversarial training with adversarial examples ($E$), \nthe robust generalization ($P$) of deep neural networks can improve a lot, \nwhich may avoid the above accident with large probability. \nThe above-mentioned classical applications of machine learning require a lot of ``correctly'' supervised information $\\{(x^{(i)},y^{(i)})\\}_{i=1}^N$ for the given tasks. \nHowever, this may be difficult and even impossible. \nAs far as we know, LNRL is a special case of machine learning, which belongs to weakly supervised learning~. 
\nIntuitively, LNRL targets acquiring good learning performance with ``incorrectly'' (a.k.a., noisy) supervised information provided by data set $\bar{D}$, \ndrawn independently from a corrupted distribution $p(X,\bar{Y})$.\nThe noisy supervised information refers to training data set $\bar{D}^{\text{tr}}$, which consists of the intact input features $x^{(i)}$ but with corrupted labels $\bar{y}^{(i)}$. More importantly, LNRL focuses on training deep neural networks robustly, which has many special characteristics, such as memorization effects~. Formally, we define LNRL in Definition~\ref{LNRL}.\n\begin{definition}\label{LNRL}\n(Label-Noise Representation Learning (LNRL)). \nLNRL is a special but common case of machine learning problems (specified by $E$, $T$ and $P$), \nwhere $E$ contains noisy supervised information for the target $T$. \nMeanwhile, deep neural networks will be leveraged to model the target $T$ directly. \n\end{definition}\nTo understand this definition better, let us show three classical scenarios of LNRL (Table \ref{tab:LNRL-exp}):\n\begin{itemize}[leftmargin=*]\n\item \textit{Image}: Large-scale image data (e.g., ImageNet~) \nis the key factor to drive the second surge of deep learning from 2012. Note that it is impossible to annotate such large-scale data individually, which motivates us to leverage crowdsourcing techniques (e.g., Amazon Mechanical Turk). However, the quality of crowdsourced data is normally low with a certain degree of label noise. Therefore, an important task ($T$) is to robustly train deep neural networks with crowdsourced data ($E$), and the trained deep models can be evaluated via the test accuracy ($P$).\n\item \textit{Healthcare}: Healthcare is highly relevant to each individual, whose data requires machine learning techniques for deep and intelligent analysis. 
However, intelligent healthcare ($T$) requires domain expertise to label medical data first, which often suffers from high inter- and intra-observer variability, leading to noisy medical data ($E$). Note that noisy labels cause a high error rate ($P$) of deep model predictions, which might further influence decisions that impact human health negatively.\n\item \textit{Speech}: In the speech recognition task (e.g., Apple Siri), the machine learning program can improve its recognition accuracy via training with a large-scale speech data set offline. However, noisy labels are ubiquitous in speech domains, e.g., the task of rating service calls ($T$). Due to differences in personal mood and understanding, service calls are easily prone to different ratings ($E$) for the same service. Such ratings can be viewed as labels, which are highly noisy since most users lack the domain expertise to accurately rate the speech service. Therefore, it is critical to robustly train deep neural networks with the user rating ($E$), and evaluate the trained model via the quality rate of service calls ($P$).\n\end{itemize}\nAs noisy supervised information related to $T$ is directly contained in $E$, it is quite natural that common deep supervised learning approaches will fail on LNRL problems. One of the recent findings (i.e., memorization effects~) in the deep learning area may explain this: due to the high model capacity, deep neural networks will eventually fit and memorize label noise. 
Therefore, when facing the noisy data $E$, LNRL methods make the learning of the target $T$ feasible by leveraging the intrinsic characteristics of deep neural networks, e.g., memorization effects.", "id": "b3a21496-800f-4f0a-881b-d3aec107b3de", "level": "subsection", "origin_cites_number": 6, "parent_id": "0631056c-9b2b-4b86-b30a-fc08a6ef0620", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Overview" ], [ "subsection", "Problem Definition" ] ], "subsections": [], "title": "Problem Definition" }, { "cite_extract_rate": 0.6875, "cites": [ 4459, 8794, 7778, 4461, 4462, 4176, 4464, 4460, 4463, 7191, 139 ], "content": "\\label{sec:rl-pro}\nIn this section, we discuss the relevant learning problems of LNRL. The relatedness and difference with respect to LNRL are clarified as follows.\n\\begin{itemize}[leftmargin=*]\n\\item \\textit{Semi-supervised Learning (SSL)}~ learns the hypothesis $f$ from experience $E$ consisting of both labeled and unlabeled data, where unlabeled data will be normally given pseudo labels. Since the labeling process may be not fully correct and noisy, SSL has some relation with LNRL. However, standard SSL methods assumes that labeled data are fully clean, which is different from LNRL, where labeled data are still noisy to some degree.\n\\item \\textit{Positive-unlabeled Learning (PUL)}~ learns the hypothesis $f$ from experience $E$ consisting of only positive labeled and unlabeled data. Similar to SSL, unlabeled data will be normally given pseudo labels. However, PUL assumes that labeled data are fully clean and only positive.\n\\item \\textit{Complementary Learning (CL)}~ specifies a class that a pattern does NOT belong to. Namely, CL learns the hypothesis $f$ from experience $E$ consisting of only complementary data. Since the labeling process cannot fully exclude the uncertainty, namely belonging to which categories, CL has some relation with LNRL. 
However, CL requires that all diagonal entries of the transition matrix are zeros. Sometimes, the transition matrix is not required to be invertible empirically. \n\\item \\textit{Unlabeled-unlabeled Learning (UUL)}~ \nallows training a binary classifier from two unlabeled datasets with different class priors. \nDifferent from SSL/PUL, there are two sets of unlabeled data in UUL instead of one set.\n\\end{itemize}", "id": "7b9dd4ab-a69b-48f1-a78f-013060d44af2", "level": "subsection", "origin_cites_number": 16, "parent_id": "0631056c-9b2b-4b86-b30a-fc08a6ef0620", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Overview" ], [ "subsection", "Relevant Learning Problems" ] ], "subsections": [], "title": "Relevant Learning Problems" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:core-issue}\nWhen machine learning is carried out in an ideal environment, the data comes with clean supervision. \nIn this survey, we consider the classification problem.\nThen, the $\\ell$-risk is defined as follows \n\\begin{equation}\n\\label{eq:clean-risk}\nR_{\\ell,D}(f_{\\theta}) := \\mathbb{E}_{(x,y)\\sim D} [\\ell(f_{\\theta}(x), y)],\n\\end{equation}\nwhere $(x,y)$ is a clean example drawn independently from the clean distribution $D$, $f_{\\theta}$ is a learning model \n(e.g., a deep neural network) parameterized by $\\theta$ and $\\ell$ is normally the cross-entropy loss. \nHowever, when machine learning is used in a real-world environment,\nlabels can be corrupted. \nNamely, the $\\ell$-risk under the noisy distribution should be\n$\\mathbb{E}_{(x,\\bar{y})\\sim \\bar{D}} [\\ell(f_{\\theta}(x), \\bar{y})]$,\nwhere $(x, \\bar{y})$ is a sample (with a possibly corrupted label) drawn independently from the noisy distribution $\\bar{D}$. 
\nUnder finite samples, \nthe empirical $\\tilde{\\ell}$-risk under $\\bar{D}$ is as follows.\n\\begin{equation}\n\\label{eq:noisy-risk}\n\\widehat{R}_{\\tilde{\\ell},\\bar{D}}(f_{\\theta})\n:= \\nicefrac{1}{N}\\sum\\nolimits_{i=1}^N \\tilde{\\ell}(f_{\\theta}(x_i), \\bar{y}_i),\n\\end{equation}\nwhere $\\tilde{\\ell}$ is a suitably modified loss, which also serves as a noise-tolerant objective.\n\\begin{figure}\n\\begin{center}\n\\centerline{\\includegraphics[width=0.35\\textwidth]{forward_mnist_35.pdf}}\n\\vspace{-10px}\n\\caption{We empirically demonstrate the generalization difference between the original $\\ell$ and the corrected $\\tilde{\\ell}$ (cf. Theorem~\\ref{fw-theorem} in Section~\\ref{sec:bfcorr}). \n\tWe choose \\textit{MNIST} with 35\\% uniform noise as the noisy data. \n\tThere is an obvious gap between $\\ell$ and $\\tilde{\\ell}$ on noisy \\textit{MNIST}.}\n\\label{motivation-fig}\n\\end{center}\n\\end{figure}\nHere, we empirically demonstrate the generalization difference between $\\ell$ and $\\tilde{\\ell}$ under label noise (Figure~\\ref{motivation-fig}), where $\\tilde{\\ell}$ is a forward-corrected loss (Theorem~\\ref{fw-theorem} in Section~\\ref{sec:bfcorr}) and experimental details can be found in Appendix 3. \nGenerally, the aim of LNRL is to ``construct'' a noise-tolerant $\\tilde{\\ell}$ such that the $f_{\\theta}$ learned by minimizing \\eqref{eq:noisy-risk} well approximates the optimal $f_{\\theta^*}$ that minimizes \\eqref{eq:clean-risk}. Specifically, via a suitably constructed $\\tilde{\\ell}$, we can learn a robust deep classifier $f_{\\theta}$ from the noisy training examples that can assign clean labels for test instances. 
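To make the averaging in \eqref{eq:noisy-risk} concrete, here is a minimal NumPy sketch; the toy softmax outputs and the plain cross-entropy standing in for $\tilde{\ell}$ are our own illustrative choices, not part of any particular method:

```python
import numpy as np

def cross_entropy(probs, labels):
    # Per-example cross-entropy -log p(y_i | x_i); probs is (N, C),
    # labels take values in {0, ..., C-1}.
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

def empirical_risk(probs, noisy_labels, loss_fn=cross_entropy):
    # (1/N) * sum_i loss(f(x_i), y_bar_i), evaluated on observed noisy labels.
    return loss_fn(probs, noisy_labels).mean()

# Toy softmax outputs f(x_i) for N = 3 examples and C = 2 classes.
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.5, 0.5]])
noisy_labels = np.array([0, 1, 0])  # observed (possibly corrupted) labels
risk = empirical_risk(probs, noisy_labels)
```

Swapping `cross_entropy` for a corrected loss changes only `loss_fn`; the empirical averaging of \eqref{eq:noisy-risk} stays the same.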
\nNext, \nwe take a theoretical look at label-noise learning, which will help us build $\\tilde{\\ell}$ more effectively.", "id": "97fe083e-72eb-430b-97b3-31ebe3822d18", "level": "subsection", "origin_cites_number": 0, "parent_id": "0631056c-9b2b-4b86-b30a-fc08a6ef0620", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Overview" ], [ "subsection", "Empirical Risk under Label Noise" ] ], "subsections": [], "title": "Empirical Risk under Label Noise" }, { "cite_extract_rate": 1, "cites": [ 4452, 4453 ], "content": "\\label{sec:theorem}\nIn contrast to , via the lens of learning theory, \nwe provide a systematic way to understand LNRL. \nOur focus is to explore why noisy labels affect the performance of deep models. To figure this out, we should rethink the essence of learning with noisy labels. Normally, there are three key ingredients in label-noise learning problems: the data, the objective function and the optimization policy.\nAt a high level, \nthere are three rules of thumb that explain how to handle LNRL.\n\\begin{itemize}[leftmargin=*]\n\\item For \\textit{data}, \nthe key is to discover the underlying noise transition pattern, which directly links the clean class posterior and the noisy class posterior. Based on this insight, it is critical to design an accurate estimator of the noise transition matrix $T$.\n\\item For the \\textit{objective function}, \nthe key is to design a noise-tolerant $\\tilde{\\ell}$ in \\eqref{eq:noisy-risk}, which can be more robust than standard loss functions. Based on this insight, it is critical to learn a robust classifier on noisy data, which can provably converge to the classifier learned on clean data.\n\\item For the \\textit{optimization policy}, \nthe key is to explore the dynamic process of optimization, which relates to memorization. 
Based on this insight, it is critical to trade off overfitting and underfitting when training deep networks, e.g., via early stopping and small-loss tricks, where small-loss tricks backpropagate small-loss data based on memorization effects of deep networks.\n\\end{itemize}\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.65\\textwidth]{taxonomy.pdf}\n\t\\caption{A taxonomy of LNRL based on the focus of each method. For each technique branch, we list a few representative works here.}\n\t\\label{fig:taxonomy}\n\\end{figure*}", "id": "4bcba53a-4669-4302-85dd-bd4ebe0c203f", "level": "subsection", "origin_cites_number": 2, "parent_id": "0631056c-9b2b-4b86-b30a-fc08a6ef0620", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Overview" ], [ "subsection", "Theoretical Understanding" ] ], "subsections": [ "40bbe856-0103-4142-b0b9-919e0afe4d65", "ba460c14-2e7c-4cf5-a4e6-b39c7e298384", "06e02b1d-b8d0-449d-9ce4-7aaaa853d7cc" ], "title": "Theoretical Understanding" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 3340, 4120, 3453, 4465, 4152 ], "content": "\\label{sec:thm:idata}\nSpecifically, \nfrom the perspective of the data, the focus is to build up the noise transition matrix, \nwhich models the process of label corruption. \nIn general, there are two types of label noise: \ninstance-dependent label noise (e.g., $p(\\bar{Y}|Y,X)$) and instance-independent label noise (e.g., $p(\\bar{Y}|Y)$). \nFor instance-dependent label noise, the noise transition matrix can be represented as \n$T(X)$, \nwhich depends on features. \nHowever, \nit can be ill-posed to learn the transition matrix $T(X)$ by only exploiting noisy data, \ni.e., the transition matrix is unidentifiable~.\nTherefore, \nwe emphasize instance-independent label noise here, \nand the noise transition matrix can be represented as $T$, \nwhich is independent of features. 
\nIn this case, the noise transition matrix $T$ approximately models the process of label corruption. \nAn instance $x^i$ is said to be an anchor point of the $i$-th clean class if $p(Y = e_i|x^i)=1$, where $Y = e_i$ means $Y$ belongs to the $i$-th class. The transition matrix can be obtained via\n\\begin{align}\np(\\bar{Y} = e_j | x^i) \n& = \\sum\\nolimits_{k=1}^{C} p(\\bar{Y} = e_j | Y = e_k, x^i) p(Y = e_k|x^i),\\notag\n\\\\\n& = p(\\bar{Y} = e_j | Y = e_i, x^i)p(Y = e_i|x^i),\\notag\n\\\\\n& = p(\\bar{Y} = e_j | Y = e_i, x^i) = T_{ij}.\\label{eq:trans}\n\\end{align}\nNote that if \nanchor points are hard to identify, we can use $x^i = \\arg\\max_{x}p(\\bar{Y} =i|x)$ ~. This transition matrix is very important, since it can bridge the noisy class posterior and clean class posterior, i.e., $p(Y|x) = T^{-1}p(\\bar{Y}|x)$. \nIn practice, this transition matrix has been employed to build a risk-consistent estimator via loss correction or a classifier-consistent estimator via hypotheses correction~. 
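The anchor-point identity in \eqref{eq:trans} translates directly into code. Below is a minimal NumPy sketch; the posterior matrix is a toy stand-in for a trained network's estimates of $p(\bar{Y}|x)$, which in practice come from a model fit on the noisy data:

```python
import numpy as np

def estimate_transition_matrix(noisy_posteriors):
    # noisy_posteriors: (N, C) array; row n holds estimates of p(Ybar = e_j | x_n).
    # For each clean class i, pick x^i = argmax_x p(Ybar = e_i | x) as an
    # (approximate) anchor point, then read off row i of T as in Eq. (trans):
    # T_ij = p(Ybar = e_j | x^i).
    _, num_classes = noisy_posteriors.shape
    T_hat = np.zeros((num_classes, num_classes))
    for i in range(num_classes):
        anchor = np.argmax(noisy_posteriors[:, i])
        T_hat[i] = noisy_posteriors[anchor]
    return T_hat

# Toy posteriors: rows 0 and 1 behave like anchor points of classes 0 and 1.
posteriors = np.array([[0.9, 0.1],
                       [0.2, 0.8],
                       [0.55, 0.45]])
T_hat = estimate_transition_matrix(posteriors)
```

With exact anchor points, each recovered row of $\widehat{T}$ equals the corresponding row of the true $T$; with only approximate anchors, the estimate degrades with the quality of the noisy-posterior model.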
\nBesides, for inconsistent algorithms, the diagonal entries of this matrix are used to select reliable examples for further robust training~.", "id": "40bbe856-0103-4142-b0b9-919e0afe4d65", "level": "subsubsection", "origin_cites_number": 6, "parent_id": "4bcba53a-4669-4302-85dd-bd4ebe0c203f", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Overview" ], [ "subsection", "Theoretical Understanding" ], [ "subsubsection", "Perspective of Data" ] ], "subsections": [], "title": "Perspective of Data" }, { "cite_extract_rate": 0.42857142857142805, "cites": [ 3454, 4119, 4466 ], "content": "\\label{sec:thm:obj}\nFrom the perspective of the objective function, \nthe focus is to derive statistical consistency \nguarantees for a robust \n$\\tilde{\\ell}$~.\nLet $f$ be a deep network with $d$ layers and ReLU activation function,\nlet $R^*=R_D(f^*)$ denote the Bayes risk for the Bayes optimal classifier $f^*$ under the clean distribution $D$~,\nlet $\\hat{f} = \\arg\\min_{f\\in\\mathcal{H}} \\widehat{R}_{\\tilde{\\ell},\\bar{D}}(f)$,\nand let $L_{\\rho}$ denote the Lipschitz constant of $\\tilde{\\ell}$.\nAssume the Frobenius norms of the weight matrices $W_1,\\ldots,W_d$ are at most $M_1,\\ldots,M_d$,\nand that $x$ is upper-bounded by $B$, i.e., $\\Vert x \\Vert \\leq B$ for any $x$. \nWith probability at least $1-\\delta$, if $\\ell$ is classification-calibrated~, \nthere exists a non-decreasing function $\\xi_{\\ell}$ with $\\xi_{\\ell}(0)=0$ such that \n\\begin{equation*}\n\\begin{split}\n& R_D (\\hat{f}) - R^* \\leq \\xi_{\\ell}\\big(\\min\\nolimits_{f\\in\\mathcal{H}} R_{\\ell,D}(f) - \\min\\nolimits_f R_{\\ell,D}(f) \n\\\\\n& \\quad + 4L_{\\rho}\\mathcal{R}(\\mathcal{H}) + 2 \\sqrt{\\log(\\nicefrac{1}{\\delta}) / 2N}\\big), \n\\\\\n& \\! \\leq \\! \\xi_{\\ell}\\big(\\min_{f\\in\\mathcal{H}} R_{\\ell,D}(f) \n\\! - \\! \\min_f R_{\\ell,D}(f) \n\\! + \\! 4L_{\\rho} C \n\\! + \\! 
2 \\sqrt{\\log(\\nicefrac{1}{\\delta})/2N}\\big),\n\\end{split}\n\\end{equation*}\nwhere \n$C = B(\\sqrt{2 d \\log 2} \\! + \\! 1) \\prod\\nolimits_{i=1}^d M_i/\\sqrt{N}$ and\n$R_D(\\hat{f}) = \\mathbb{E}_{(x,y) \\sim D}[1_{\\{\\text{sign}(\\hat{f}(x))\\neq y\\}}]$ \ndenotes the risk of $\\hat{f}$ w.r.t. the 0-1 loss.\nNote that\nfor a deep neural network, \nits Rademacher complexity $\\mathcal{R}(\\mathcal{H})$ of the function class $\\mathcal{H}$ is upper-bounded by $C$~.\n\\begin{remark}\nThe above conclusions denote that the learned $\\hat{f}$ (using noise-tolerant $\\tilde{\\ell}$ over noisy $\\bar{D}$) can approach the Bayes optimal $f^*$, when increasing the richness of the class $\\mathcal{H}$ and the data size~$N$.\n\\end{remark}\nNote that $\\min_{f\\in\\mathcal{H}} R_{\\ell,D}(f) - \\min_f R_{\\ell,D}(f)$ denotes the approximation error for employing the hypothesis class $\\mathcal{H}$.\nAccording to the universal approximation theorem~, \nif a certain deep network model is employed, $\\mathcal{H}$ will be a universal hypothesis class and thus contains the Bayes optimal classifier. \nThen, $\\min_{f\\in\\mathcal{H}} R_{\\ell,D}(f) - \\min_f R_{\\ell,D}(f)=0$. \nThis means that by employing a proper deep network, \nthe upper bound will converge to zero by increasing the training sample size $N$. Since $R_D(\\hat{f})$ is always bigger than or equal to $R^*$, \nwe have that $R_D(\\hat{f})$ will converge to $R^*$. 
\nThis further means that $\\hat{f}$ learned from noisy data (independently drawn from $\\bar{D}$) will converge to Bayes optimal $f^*$ defined by the clean data.", "id": "ba460c14-2e7c-4cf5-a4e6-b39c7e298384", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "4bcba53a-4669-4302-85dd-bd4ebe0c203f", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Overview" ], [ "subsection", "Theoretical Understanding" ], [ "subsubsection", "Perspective of Objective Function" ] ], "subsections": [], "title": "Perspective of Objective Function" }, { "cite_extract_rate": 0.6666666666666661, "cites": [ 4115, 3630 ], "content": "From the perspective of the optimization policy,\nthe focus is to explore the dynamic process of optimization.\nTake the early stopping~, \nwhich is a simple yet effective trick to avoid overfitting on noisy labels,\nas an example. \nAssume an \ninitial weight matrix having entries following the standard normal distribution, \nnamely $W^0 \\sim \\mathcal{N}(0,1)$ entries, and $W^{\\tau}$ is updated via stochastic gradient descent with step size $\\eta$, \ni.e., $W^{\\tau+1} = W^{\\tau} - \\eta \\nabla \\ell(W^{\\tau})$. If $\\varepsilon_0 \\leq \\delta\\lambda(C)^2/K^2$ and $\\rho \\leq \\delta/8$ (cf. Definition 1.2 in ), \nthen after $I \\propto \\nicefrac{\\lVert C \\rVert^2}{\\lambda(C)}$ steps, there are two conclusions with high probability. First, the model $W^I$ predicts the true label function $\\hat{y}(x)$ for all input $x$ that lies within the $\\varepsilon_0$-neighborhood of a cluster center $\\{c_k\\}_{k=1}^K$.\nNamely, $ \\hat{y}(x) = \\arg\\min_{l}|f_{W^I}(x) - \\alpha_l|$, \nwhere $\\{\\alpha_l\\}_{l=1}^{\\bar{K}}\\in[-1,1]$ denotes the labels associated with each class, and each label belongs to $\\bar{K} (\\leq K$) classes (cf. Definition 1.1 in ). 
Second, for all training samples, the distance to the initial weight matrix satisfies\n\\begin{equation*}\n\\lVert W^{\\tau} - W^0 \\rVert_\\mathrm{F} \\lesssim \\big(\\sqrt{K} + \\nicefrac{\\tau \\varepsilon_0 K^2}{\\lVert C \\rVert^2} \\big),\n\\end{equation*}\nwhere $0 \\leq \\tau \\leq I$ and $A \\lesssim B$ denotes $A \\leq \\beta B$ with some constant $\\beta$. \n\\begin{remark}\nThe above conclusions demonstrate that gradient descent with early stopping (i.e., $I$ steps) \ncan be robust when training deep neural networks. \nMoreover, the final network weights do not stray far from the initial weights for robustness, \nsince the distance between the initial model and final model grows with the square root of the number of clusters $\\sqrt{K}$.\n\\end{remark}\nIntuitively, due to memorization effects, deep neural networks will eventually overfit noisy training data~. Thus, it is a good strategy to stop training early, since deep neural networks fit clean training data in the first few epochs. This suggests that the robust weights are not far from the initial weights.", "id": "06e02b1d-b8d0-449d-9ce4-7aaaa853d7cc", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "4bcba53a-4669-4302-85dd-bd4ebe0c203f", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Overview" ], [ "subsection", "Theoretical Understanding" ], [ "subsubsection", "Perspective of Optimization Policy" ] ], "subsections": [], "title": "Perspective of Optimization Policy" }, { "cite_extract_rate": 0.8, "cites": [ 4139, 3630, 4115, 139 ], "content": "\\label{sec:taxonomy}\nBased on the above theoretical understanding, we categorize these works into three general perspectives:\n\\begin{enumerate}[leftmargin=*]\n\\item \\textit{Data}~: \nFrom the perspective of data, \nthe key is to build up the noise transition matrix $T$, which explores the data relationship between clean and noisy labels. 
Take loss correction as an example. We first model and estimate $T$ between latent $Y$ and observed $\\bar{Y}$. Then, via the estimated matrix, different correction techniques can generate $\\tilde{\\ell}$ from the original $\\ell$. \nMathematically, \n$\\widehat{R}_{\\tilde{\\ell},\\bar{D}}(f_{\\theta}) := \\frac{1}{n}\\sum_{i=1}^n \\tilde{\\ell}(f_{\\theta}(x_i), \\bar{y}_i)$,\nwhere \n$\\ell \\xrightarrow{T} \\tilde{\\ell}$, and $\\tilde{\\ell}$ is a corrected loss transitioning from $\\ell$ via $T$.\n\\item \\textit{Objective}: From the perspective of the objective, we can construct $\\tilde{\\ell}$ by augmenting the objective function, \ni.e., the original $\\ell$, \nvia either explicit or implicit regularization.\nFor instance, we may augment $\\ell$ by an auxiliary regularizer explicitly. Meanwhile, we may augment $\\ell$ by designing implicit regularization algorithms, such as soft-/hard-bootstrapping~\nand virtual adversarial training (VAT)~. \nMeanwhile, we can construct $\\tilde{\\ell}$ by reweighting the objective function $\\ell$. Lastly, we can also construct and design $\\tilde{\\ell}$ directly. Thus, $\\tilde{\\ell}$ has three options:\n\\begin{itemize}\n\\item $\\tilde{\\ell} = \\ell + r$, where $r$ denotes a regularization; \n\\item $\\tilde{\\ell} = \\sum_i w_i\\ell_i$, where $\\ell_i$ denotes $i$-th sub-objective \nwith the coefficient $w_i$; \n\\item $\\tilde{\\ell}$ has a special format $\\ell'$ independent of $\\ell$.\n\\end{itemize}\n\\item \\textit{Optimization}~: From the perspective of optimization, we can construct $\\tilde{\\ell}$ by leveraging the memorization effects of deep models. For example, due to the memorization effects, deep models tend to fit easy (clean) patterns first, and then over-fit complex (noisy) patterns gradually. 
Based on this observation, we can backpropagate the small-loss examples, which is equal to constructing the restricted $\\tilde{\\ell}$\nwhere $\\tilde{\\ell} = \\text{sort}(\\ell,1-\\tau)$, namely, sorting $\\ell$ from small to large, and fetching $1-\\tau$ percentage of small losses ($\\tau$ is the noise rate).\n\\end{enumerate}\nAccordingly, existing works can be categorized into a unified taxonomy as shown in Figure~\\ref{fig:taxonomy}. \nWe will detail each category in the sequel. Note that the discussion of three perspectives can be found in Appendix 2.", "id": "16fef8bc-4031-464d-8299-ecae7d1eb048", "level": "subsection", "origin_cites_number": 5, "parent_id": "0631056c-9b2b-4b86-b30a-fc08a6ef0620", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Overview" ], [ "subsection", "Taxonomy" ] ], "subsections": [], "title": "Taxonomy" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:data}\nMethods in this section solve the LNRL problem by estimating the noise transition matrix, which builds up the relationship between latent clean labels and observed noisy labels. First, we explain what the noise transition matrix is and why this matrix is important. Then, we introduce three common ways to leverage the noise transition matrix for combating label noise. The first way is to leverage an adaptation layer in the end-to-end deep learning system to mimic the noise transition matrix, which bridges latent clean labels and observed noisy labels. The second way is to estimate the noise transition matrix empirically, and further correct the cross-entropy loss by the estimated matrix. 
Lastly, the third way is to leverage prior knowledge to ease the estimation burden.", "id": "75c16b96-db92-4306-8203-f235a688b5ca", "level": "section", "origin_cites_number": 0, "parent_id": "8584763c-0388-415b-b9cf-b52c55e5eeb8", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Data" ] ], "subsections": [ "af17c79e-e64b-44a4-8276-3856c711a6d8", "f273f1da-19be-4d85-b744-80cb017f8b56", "a2fe8b6a-bc6a-4d6f-a8b8-e7a1f65578df", "77ea0429-d362-45c0-928b-10900efd468b" ], "title": "Data" }, { "cite_extract_rate": 0.5, "cites": [ 3454, 7773 ], "content": "Before introducing three common ways, we first define what the noise transition matrix is, and explain why the noise transition matrix is important.\n\\begin{definition}\n(Noise transition matrix~) Suppose that the observed noisy label $\\bar{y}$ is drawn independently from a corrupted distribution $p(X,\\bar{Y})$, where features are intact. Meanwhile, there exists a corruption process, transition from the latent clean label $y$ to the observed noisy label $\\bar{y}$. Such a corruption process can be approximately modeled via a \\textit{noise transition matrix} $T$, where $T_{ij} = p(\\bar{y} = e_j| y = e_i)$. \n\\end{definition}\nTo further understand $T$, \nwe present two representative \nstructures of $T$ (Figure~\\ref{fig:repT}): \n(1) Sym-flipping~; \n(2) Pair-flipping~. 
\nThe definition of the corresponding $T$ is shown in Figure~\\ref{fig:repT},\nwhere $\\tau$ is the noise rate and $n$ is the number of classes.\n\\begin{figure}[ht]\n\\centering\n\\subfigure[Sym-flipping.]\n{\\includegraphics[height=0.08\\textheight]{sym}}\n\\qquad\n\\subfigure[Pair-flipping.]\n{\\includegraphics[height=0.084\\textheight]{pair}}\n\\vspace{-8px}\n\\caption{Two representatives of transition matrix $T$.}\n\\label{fig:repT}\n\\end{figure}\nSpecifically, the Sym-flipping structure models the common classification scenario under label noise, where the class of a clean label can uniformly flip into the other classes. Meanwhile, the Pair-flipping structure models the fine-grained classification scenario, where the class (e.g., Norwich terrier) of a clean label can flip into its adjacent class (e.g., Norfolk terrier) instead of a far-away class (e.g., Australian terrier). In the area of label-noise learning, we normally leverage the above two structures to generate simulated noise, and explore the root cause of why the proposed algorithms work on the simulated noise. Nonetheless, real-world scenarios are very complex, where the noise transition matrix may not follow structural rules (i.e., it is irregular). For example, \\textit{Clothing1M}~ is a Taobao clothing dataset, where mislabeled clothing images often share similar visual patterns. 
The noise structure of \\textit{Clothing1M} is irregularly asymmetric, which is hard to estimate.\nThe mathematical modeling of $T$ can be found in Section~\\ref{sec:thm:idata} (Eq.~\\eqref{eq:trans}),\nwhich has been widely studied to build statistically consistent classifiers.\nNormally, the clean class posterior can be inferred by using $T$ and the noisy class posterior, i.e., we have the important equation $p(\\bar{Y}|x)=T \\cdot p(Y|x)$, where $T$ is a bridge between clean and noisy information.\nAs the noisy class posterior can be estimated by exploiting the noisy training data, \nthe key step remains how to effectively estimate $T$ and leverage the estimated matrix to combat label noise. \nBased on this observation, there are three general ways, as described \nin Sections~\\ref{sec:data:ada}-\\ref{sec:data:pro}.", "id": "af17c79e-e64b-44a4-8276-3856c711a6d8", "level": "subsection", "origin_cites_number": 4, "parent_id": "75c16b96-db92-4306-8203-f235a688b5ca", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Data" ], [ "subsection", "Noise Transition Matrix" ] ], "subsections": [], "title": "Noise Transition Matrix" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:data:ada}\nDeep learning can be viewed as an end-to-end learning system. 
\nTherefore, the most intuitive way is to add an adaptation layer (Fig.~\\ref{adpt-layer}) to estimate the transition matrix.\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.35\\textwidth]{adpt-layer.pdf}\n\t\\vspace{-5px}\n\t\\caption{A general case of adaptation layer.}\n\t\\label{adpt-layer}\n\\end{figure}", "id": "f273f1da-19be-4d85-b744-80cb017f8b56", "level": "subsection", "origin_cites_number": 0, "parent_id": "75c16b96-db92-4306-8203-f235a688b5ca", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Data" ], [ "subsection", "Adaptation Layer" ] ], "subsections": [ "d7498655-7294-4dd3-bc96-852e85620e57", "5624754a-8cc8-4b12-8e92-638e4e750678" ], "title": "Adaptation Layer" }, { "cite_extract_rate": 1, "cites": [ 4136 ], "content": "To realize this adaptation layer, Sukhbaatar et al.~ proposed a constrained linear layer (i.e., constrained to be a probability matrix) inserted between the base network and cross-entropy loss layer. This linear adaptation layer is parameterized by $T$, which is equivalent to the function of $T$. \nBased on this idea, we can modify a classification model using a probability matrix $T$ that modifies its prediction to match the label distribution of the noisy data. \nThe training model consists of two independent parts: the base model parameterized by $\\omega$ and the noise model parameterized by $T$. Since the noise matrix $T$ has been modeled as a constrained linear layer, update of the $T$ matrix can be easily carried out by back-propagating the cross-entropy loss.\nHowever, it is hard to achieve the optimal $T$ via minimizing the cross-entropy loss, which is jointly parameterized by $\\omega$ and $T$. \nTo achieve the optimal $T$, Sukhbaatar et al.~ leveraged a regularizer on $T$, \ne.g., the trace norm or ridge regression, which forces it to approximate the optimal $T$. 
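The forward pass of such a linear noise layer can be sketched in a few lines of NumPy; the row-softmax parameterization of the probability matrix and the weight of the trace regularizer are our own illustrative choices, while in practice the layer is trained jointly with the base network by backpropagation:

```python
import numpy as np

def row_softmax(W):
    # Map an unconstrained matrix to a row-stochastic probability matrix,
    # so the adaptation layer always represents a valid transition matrix T.
    e = np.exp(W - W.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def adapted_loss(clean_probs, noisy_labels, T, lam=0.0):
    # Adaptation layer: p(Ybar = e_j | x) = sum_i p(Y = e_i | x) * T_ij,
    # then cross-entropy against the observed noisy labels, plus an
    # (illustrative) trace-based regularization term on T.
    noisy_probs = clean_probs @ T
    ce = -np.log(noisy_probs[np.arange(len(noisy_labels)), noisy_labels] + 1e-12)
    return ce.mean() + lam * np.trace(T)

# With T = identity the layer is a no-op and we recover plain cross-entropy.
clean_probs = np.array([[0.9, 0.1]])
loss = adapted_loss(clean_probs, np.array([0]), np.eye(2))
```

The design point this sketch highlights is the factorization: the base model keeps predicting clean-label probabilities, while the extra layer alone absorbs the noise pattern.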
\nThis work paves the way for deep learning with noisy labels, which directly motivates the following nonlinear case of the adaptation layer.", "id": "d7498655-7294-4dd3-bc96-852e85620e57", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "f273f1da-19be-4d85-b744-80cb017f8b56", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Data" ], [ "subsection", "Adaptation Layer" ], [ "subsubsection", "Linear Case" ] ], "subsections": [], "title": "Linear Case" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 4136 ], "content": "Following the linear case, Goldberger et al.~ proposed a non-linear layer inserted between the base network and cross-entropy loss layer to realize this adaptation layer.\nBeyond the linear case, the training model consists of two independent parts: the base model parameterized by $\\omega$ and the noise model/channel parameterized by $\\theta$ (equal to the function of the noise transition matrix). Since the latent outputs \nof the base model are hidden, they proposed to leverage the expectation-maximization (EM) algorithm~ to estimate the hidden outputs (E-step) and the current parameters (M-step). Different from the linear case, the nonlinear case is free of strong assumptions (see Section 3.2 in~).\nHowever, there are several potential drawbacks to the EM-based approach, such as local optimality and scalability. To address these issues, Goldberger et al.~ proposed two noise modeling variants: the c-model and s-model. Specifically, the c-model predicts the noisy label based on both the latent true label and the input features; while the s-model predicts the noisy label only based on the latent true label. Since the EM algorithm is equivalent to the s-model, they regarded both $\\omega$ and $\\theta$ as components of the same network and optimized them simultaneously. 
Moreover, the s-model is similar to the linear case proposed by Sukhbaatar et al.~, although they proposed a different learning procedure. Note that though the c-model depends on the input features, they still leverage network training in the M-step to update $\\theta$.", "id": "5624754a-8cc8-4b12-8e92-638e4e750678", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "f273f1da-19be-4d85-b744-80cb017f8b56", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Data" ], [ "subsection", "Adaptation Layer" ], [ "subsubsection", "Nonlinear Case" ] ], "subsections": [], "title": "Nonlinear Case" }, { "cite_extract_rate": 0, "cites": [], "content": "Another important branch is to conduct loss correction via leveraging $T$, \nwhich can also be integrated into deep learning systems. \nThe aim of loss correction is that training on noisy labels via the corrected loss should be approximately equal to training on clean labels via the original loss.\nNote that we also introduce a label smoothing method related to loss correction in Section~\\ref{label-smoothing}.", "id": "a2fe8b6a-bc6a-4d6f-a8b8-e7a1f65578df", "level": "subsection", "origin_cites_number": 0, "parent_id": "75c16b96-db92-4306-8203-f235a688b5ca", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Data" ], [ "subsection", "Loss Correction" ] ], "subsections": [ "99b54dad-21cd-4d87-a301-0394ba94b25b", "bd1d010d-b83f-416b-bcaa-fc753eb4b8be", "f1ec57da-c6a2-4d3c-92a4-af20b71b432f" ], "title": "Loss Correction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:bfcorr}\nPatrini et al.~ introduced two loss correction techniques, namely forward correction and backward correction. \nThe backward procedure corrects the loss by the inverse transition matrix $T^{-1}$, \nwhile the forward procedure corrects the network predictions by $T$. 
\nBoth corrections share the formal theoretical guarantees \\textit{w.r.t} the clean data distribution.\n\\begin{theorem}\n\\label{bc-theorem} \n(Backward Correction, Theorem 1 in~) Suppose that $T$ is non-singular, where $T_{ij} = p(\\bar{y} = j| y = i)$ given that corrupted label $\\bar{y} = j$ is flipped from clean label $y = i$. Given loss $\\ell$ and network function $f$, Backward Correction is defined as\n\\begin{equation}\\label{lemma-1-1}\n \\ell^{\\leftarrow}(f(x),\\bar{y}) = [T^{-1}\\ell_{y|f(x)}]_{\\bar{y}},\n\\end{equation}\nwhere $\\ell_{y|f(x)} = (\\ell(f(x),1),\\ldots,\\ell(f(x),k))$. Then, corrected loss $\\ell^{\\leftarrow}(f(x),\\bar{y})$ is unbiased, namely,\n\\begin{equation}\\label{lemma-1-2}\n \\mathbb{E}_{\\bar{y}|x}\\ell^{\\leftarrow}(f(x),\\bar{y}) = \\mathbb{E}_{y|x}\\ell(f(x), y), \\forall x.\n\\end{equation}\n\\end{theorem}\n\\begin{remark}\nBackward Correction is unbiased. The LHS of (\\ref{lemma-1-2}) is computed from the corrupted labels, and the RHS of (\\ref{lemma-1-2}) is computed from the clean labels. Note that the corrected loss is differentiable, but not always non-negative~.\n\\end{remark}\n\\begin{theorem}\\label{fw-theorem} (Forward Correction, Theorem 1 in~) Suppose that the label\ntransition matrix $T$ is non-singular, where $T_{ij} = p(\\bar{y} = j| y = i)$ given that corrupted label $\\bar{y} = j$ is flipped from clean label $y = i$. Given loss $\\ell$ and network function $f$, Forward Correction is defined as\n\\begin{equation}\\label{lemma-2-1}\n \\ell^{\\rightarrow}(f(x), \\bar{y}) = [\\ell_{y|T^{\\top}f(x)}]_{\\bar{y}},\n\\end{equation}\nwhere $\\ell_{y|T^{\\top}f(x)} = (\\ell(T^{\\top}f(x),1),\\ldots,\\ell(T^{\\top}f(x),k))$. 
Then, the minimizer of the corrected loss under the noisy distribution is the same as the minimizer of the original loss under the clean distribution, namely,\n\\begin{equation}\\label{lemma-2-2}\n \\arg\\min_f\\mathbb{E}_{x,\\bar{y}}\\ell^{\\rightarrow}(f(x), \\bar{y}) = \\arg\\min_f\\mathbb{E}_{x,y}\\ell(f(x), y).\n\\end{equation}\n\\end{theorem}\n\\begin{remark}\nFor Forward Correction, the LHS of (\\ref{lemma-2-2}) is computed from the corrupted labels, and the RHS of (\\ref{lemma-2-2}) is computed from the clean labels. Note that the property is weaker \nthan the unbiasedness in Theorem~\\ref{bc-theorem}, since the property is a necessary but not a sufficient condition of unbiasedness.\n\\end{remark}\nNormally, $T$ is unknown, which needs to be estimated (i.e., $\\widehat{T}$). Therefore, Patrini et al.~ proposed a robust two-stage training. First, they trained the network with $\\ell$ on noisy data, and obtained an estimate of $T$ via the softmax output. Then, they re-trained the network with the corrected loss by $\\widehat{T}$. Note that the estimation quality of $T$ directly determines the learning performance via the loss correction.", "id": "99b54dad-21cd-4d87-a301-0394ba94b25b", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "a2fe8b6a-bc6a-4d6f-a8b8-e7a1f65578df", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Data" ], [ "subsection", "Loss Correction" ], [ "subsubsection", "Backward/Forward Correction" ] ], "subsections": [], "title": "Backward/Forward Correction" }, { "cite_extract_rate": 1, "cites": [ 4145 ], "content": "Based on Forward Correction, \nHendrycks et al.~ proposed Gold Loss Correction to handle severe noise. When severe noise exists, the transition matrix cannot be estimated accurately from purely noisy data.
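To make the two corrections concrete, here is a minimal numpy sketch with toy values; the softmax output `probs` and the transition matrix `T` are hypothetical, and `T` is assumed known and non-singular:

```python
import numpy as np

def backward_loss(probs, noisy_label, T):
    """Backward correction: [T^{-1} * l_{y|f(x)}]_{noisy_label}."""
    losses = -np.log(probs)                 # per-class cross-entropy vector
    return np.linalg.inv(T).dot(losses)[noisy_label]

def forward_loss(probs, noisy_label, T):
    """Forward correction: cross-entropy of the T^T-corrected prediction."""
    return -np.log(T.T.dot(probs)[noisy_label])

probs = np.array([0.7, 0.2, 0.1])           # hypothetical softmax output f(x)
T = np.array([[0.8, 0.1, 0.1],              # hypothetical transition matrix
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
# With T = I both corrections recover the ordinary cross-entropy:
assert np.isclose(backward_loss(probs, 0, np.eye(3)), -np.log(0.7))
assert np.isclose(forward_loss(probs, 0, np.eye(3)), -np.log(0.7))
```

With a non-trivial $T$, the two corrected losses differ per sample but share the theoretical guarantees stated above.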
\nThe key motivation is to assume that a small subset of the training data is trusted and available.\nNormally, a large number of crowdsourced workers may produce an untrusted set $\\widetilde{D}$, \nwhile a small number of experts can produce a trusted set $D$. At a high level, Hendrycks et al.~ aimed to leverage $D$ to estimate $T$ accurately, \nand employed Forward Correction based on the estimated matrix. Then, they trained deep neural networks on $\\widetilde{D}$ via the corrected loss, \nwhile training on $D$ via the original loss. \nThis method is called Gold Loss Correction (GLC).\nThus, \nGLC's key step is to estimate $T$ accurately via a trusted set $D$. \nSpecifically, \nwe can estimate $T$ by $\\widehat{T}$ as follows.\n\\begin{equation*}\n\\widehat{T}_{ij} \n= \\frac{1}{|A_i|}\\sum\\nolimits_{x \\in A_i}\\widehat{p}(\\bar{Y} = e_j|Y = e_i,x),\n\\end{equation*}\nwhere $A_i$ is the subset of $x$ in $D$ with label $i$ and a classifier $\\widehat{p}(\\bar{y}|x)$ can be modeled by deep neural networks trained on $\\widetilde{D}$. Empirically, a better estimate $\\widehat{T}$ leads to better GLC performance.", "id": "bd1d010d-b83f-416b-bcaa-fc753eb4b8be", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "a2fe8b6a-bc6a-4d6f-a8b8-e7a1f65578df", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Data" ], [ "subsection", "Loss Correction" ], [ "subsubsection", "Gold Correction" ] ], "subsections": [], "title": "Gold Correction" }, { "cite_extract_rate": 0.5, "cites": [ 301, 4148 ], "content": "\\label{label-smoothing}\nThe technique of label smoothing smooths labels by mixing in a label vector whose distribution is uniform~, which is a means of regularization.
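Returning to GLC, its estimator of $\\widehat{T}$ can be sketched as follows; the `noisy_posterior` helper stands in for a network trained on the untrusted set, and the toy data are hypothetical:

```python
import numpy as np

def estimate_T(trusted_x, trusted_y, noisy_posterior, num_classes):
    """GLC-style estimate: T_hat[i, j] averages p_hat(y_bar = j | x)
    over trusted examples x whose clean label is i."""
    T_hat = np.zeros((num_classes, num_classes))
    for i in range(num_classes):
        A_i = trusted_x[trusted_y == i]
        T_hat[i] = noisy_posterior(A_i).mean(axis=0)
    return T_hat

# Toy check: if the noisy posterior returns exactly the row of a known
# transition matrix for each class, the estimate recovers that matrix.
T_true = np.array([[0.9, 0.1], [0.2, 0.8]])
xs = np.array([0, 0, 1, 1])               # features double as class ids here
ys = np.array([0, 0, 1, 1])
posterior = lambda batch: T_true[batch]   # hypothetical oracle classifier
assert np.allclose(estimate_T(xs, ys, posterior, 2), T_true)
```

In practice the posterior is far from an oracle, which is exactly why the trusted set matters: averaging over expert-labeled anchors keeps the row estimates unbiased even when the untrusted labels are severely corrupted.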
Lukasik et al.~ relates label smoothing to a general family of loss-correction techniques, \nwhich demonstrates that label smoothing significantly improves performance under label noise. \nIn general, and can be unified into a label smearing framework:\n\\begin{equation*}\n\\ell^{\\text{SM}}(f_{\\theta}(X),Y) \n= M \\cdot \\ell(f_{\\theta}(X),Y),\n\\end{equation*}\nwhere $M$ is a smearing matrix~. Such a matrix is used for bridging the original loss $\\ell$ and the smeared loss $\\ell^{\\text{SM}}$.\nTherefore, in this framework, there are three examples:\n\\begin{itemize}[leftmargin=*]\n\\item Standard training: suppose $M = I$, where $I$ is the identity matrix.\n\\item Label smoothing: suppose $M = (1-\\alpha) I + \\frac{\\alpha J}{L} $, where $J$ is the all-ones matrix and $L$ is the number of classes.\n\\item Backward correction under symmetric noise: suppose $M = \\frac{1}{1-\\alpha}\\cdot(I - \\frac{\\alpha J}{L})$, where $M = T^{-1}$ in Theorem~\\ref{bc-theorem}. \n\\end{itemize}\nWe can clearly see the close connection between label smoothing and backward correction. Actually, label smoothing can have a similar effect to \nshrinkage regularization~.", "id": "f1ec57da-c6a2-4d3c-92a4-af20b71b432f", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "a2fe8b6a-bc6a-4d6f-a8b8-e7a1f65578df", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Data" ], [ "subsection", "Loss Correction" ], [ "subsubsection", "Label Smoothing" ] ], "subsections": [], "title": "Label Smoothing" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:data:pro}\nAs mentioned before, \nmethods in this section solve the LNRL problem by estimating $T$. 
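The three smearing matrices above can be checked numerically; in particular, under symmetric noise the backward-correction matrix is exactly $T^{-1}$ (a small numpy sketch with a hypothetical $\\alpha$):

```python
import numpy as np

L, alpha = 3, 0.3
I, J = np.eye(L), np.ones((L, L))

M_standard = I                                  # standard training
M_smooth   = (1 - alpha) * I + alpha * J / L    # label smoothing
M_backward = (I - alpha * J / L) / (1 - alpha)  # backward correction, symmetric noise

# Under symmetric noise T = (1 - alpha) I + alpha J / L, backward
# correction uses M = T^{-1}; the closed form above matches the inverse:
T_sym = (1 - alpha) * I + alpha * J / L
assert np.allclose(M_backward, np.linalg.inv(T_sym))
assert np.allclose(M_smooth.sum(axis=1), 1.0)   # smoothing rows stay normalized
```

The check uses the identity $J^2 = L \cdot J$, from which the cross terms in $T_{\text{sym}} M_{\text{backward}}$ cancel and the product reduces to $I$.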
\nHowever, its accurate estimation can be hard in real-world scenarios, \nwhich motivates \nresearchers\nto incorporate prior knowledge for better estimation.", "id": "77ea0429-d362-45c0-928b-10900efd468b", "level": "subsection", "origin_cites_number": 0, "parent_id": "75c16b96-db92-4306-8203-f235a688b5ca", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Data" ], [ "subsection", "Prior Knowledge" ] ], "subsections": [ "322c9a77-495e-4564-a22f-7e01448a7b25", "9b02a7b8-a622-4f6f-a802-74327e756509" ], "title": "Prior Knowledge" }, { "cite_extract_rate": 0.5, "cites": [ 7773 ], "content": "Han et al.~ proposed a human-assisted approach called ``Masking'', which decouples the structure and value of the transition matrix. Specifically, the structure can be viewed as prior knowledge coming from human cognition, since humans can mask invalid class transitions (e.g., cat $\\nleftrightarrow$ car). Given the structure information, we can focus only on estimating the noise transition probability along the structure in an end-to-end system. \nTherefore, the estimation burden will be largely reduced. Actually, it makes sense that human cognition masks invalid class transitions and highlights valid class transitions, such as the column-diagonal, tri-diagonal and block-diagonal structures. Therefore, the remaining issue is how to incorporate such prior structure into an end-to-end learning system. The answer is a generative model.\nSpecifically, there are three steps in Masking. \nFirst, the latent ground-truth label is from $y\\sim p(y|x)$, \nwhere $p(y|x)$ is a categorical distribution.
Second, the noise transition matrix variable $t$ is from $t\\sim p(t)$ \nand its structure \nis generated as $t_o\\sim p(t_o)$, \nwhere $p(t)$ is an implicit distribution modeled by a neural network without its explicit form, structure prior $p(t_o)=p(t)\\frac{dt}{dt_o}\\big|_{t_o=f(t)}$ provided by human cognition, \nand $f(\\cdot)$ is the mapping function from $t$ to $t_o$. \nThird, the noisy label is from $\\bar{y}\\sim p(\\bar{y}|y,t)$, where $p(\\bar{y}|y,t)$ models the transition from $y$ to $\\bar{y}$ given $t$. Based on this generative process, \nwe can deduce the evidence lower bound~\nto approximate the log-likelihood of the noisy data.", "id": "322c9a77-495e-4564-a22f-7e01448a7b25", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "77ea0429-d362-45c0-928b-10900efd468b", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Data" ], [ "subsection", "Prior Knowledge" ], [ "subsubsection", "Human-in-the-Loop Estimation" ] ], "subsections": [], "title": "Human-in-the-Loop Estimation" }, { "cite_extract_rate": 1, "cites": [ 4152 ], "content": "Xia et al.~ introduced a transition-revision method to effectively learn the transition matrix, \nwhich is called Reweight T-Revision (Reweight-R). \nSpecifically, they first initialized the transition matrix by exploiting data points that are similar to anchor points, having high noisy class posterior probabilities. Thus, in the Reweighted-R method, the initialized transition matrix is viewed as a prior knowledge. Then, they fine-tuned the initialized matrix by adding a slack variable, which is then learned and validated together with the classifier by using noisy data.\nSpecifically, given noisy training sample \n$\\bar{\\mathcal{D}}^{\\text{tr}}$ and noisy validation set $\\bar{\\mathcal{D}}^{\\text{v}}$, \nthere are two stages in Reweight-R. 
In the first stage, the unweighted loss is minimized to learn $\\hat{p}(\\bar{Y}|X=x)$ without a noise adaptation layer. \nThen, the noise transition matrix $\\hat{T}$ is initialized, \nwhich can be viewed as prior knowledge for further fine-tuning. \nNamely, $\\hat{T}$ is initialized according to (1) in by using data with the highest $\\hat{p}(\\bar{Y} = e_i|X=x)$ as anchor points for the $i$-th class.\nIn the second stage, based on the prior $\\hat{T}$, the neural network is initialized by minimizing the weighted loss with a noise adaptation layer $\\hat{T}^\\top$. \nFurthermore, the weighted loss is minimized to learn classifier $f$ and incremental $\\Delta T$ with a noise adaptation layer $(\\hat{T} + \\Delta T)^\\top$. \nNamely, the second stage modifies $\\hat{T}$ gradually by adding a slack variable $\\Delta T$, \nand learns the classifier and $\\Delta T$ by minimizing the weighted loss. \nThe two stages alternate until convergence, namely until achieving minimum error on $\\bar{\\mathcal{D}}^{\\text{v}}$.", "id": "9b02a7b8-a622-4f6f-a802-74327e756509", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "77ea0429-d362-45c0-928b-10900efd468b", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Data" ], [ "subsection", "Prior Knowledge" ], [ "subsubsection", "Fine-tuning Revision" ] ], "subsections": [], "title": "Fine-tuning Revision" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:obj}\nMethods in this section solve the LNRL problem by modifying the objective function $\\tilde{\\ell}$ in \\eqref{eq:noisy-risk}, \nand such modification can be realized in \nthree different ways. \nThe first way is to directly augment the objective function by either explicit regularization or implicit regularization. \nNote that implicit regularization tends to operate at the algorithm level, \nwhich is equivalent to modifying the objective function.
\nThe second way is to assign dynamic weights to different objective sub-functions, \nand a larger weight means higher importance for the corresponding sub-function. The last way is to directly \nredesign new loss functions.", "id": "71193050-fa8c-4cca-acf2-cdfbe4b84da7", "level": "section", "origin_cites_number": 0, "parent_id": "8584763c-0388-415b-b9cf-b52c55e5eeb8", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Objective" ] ], "subsections": [ "3c156707-13c8-4d41-a27b-abbe2cd56ce7", "89d39c41-4b33-4f19-8ccf-5eb0810e7166", "f89f4416-15b0-4666-9c8a-3d22e8bc1664" ], "title": "Objective" }, { "cite_extract_rate": 0, "cites": [], "content": "Regularization is the most direct way to modify the objective function. \nMathematically, we add a regularization term to the original objective, i.e., $\\tilde{\\ell} = \\ell + r$. \nIn label-noise learning, the aim of regularization is to achieve better generalization, which avoids or alleviates overfitting noisy labels.\nThere are two intuitive ways as follows.", "id": "3c156707-13c8-4d41-a27b-abbe2cd56ce7", "level": "subsection", "origin_cites_number": 0, "parent_id": "71193050-fa8c-4cca-acf2-cdfbe4b84da7", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Objective" ], [ "subsection", "Regularization" ] ], "subsections": [ "9565da23-d5ac-44bf-b242-08852418038e", "88555417-b9ef-43dc-8f04-429b8079ed3c" ], "title": "Regularization" }, { "cite_extract_rate": 0.6000000000000001, "cites": [ 7778, 4183, 139 ], "content": "\\label{exp-reg}\nAzadi et al.~ proposed a novel regularizer $r = \\Omega_{\\text{aux}}(w)$ to exploit the data structure for combating label noise, \nwhere $\\Omega_{\\text{aux}}(w) = \\left\\lVert Fw\\right\\rVert_\\mathrm{g}$. 
\nNote that $\\left\\lVert \\cdot \\right\\rVert_\\mathrm{g}$ denotes the group norm \nand $F^{\\top} = [X_1,\\ldots,X_n]$ represents the set of diagonal matrices (e.g., $X_i$) consisting of the (e.g., $i$-th) data features,\nwhich induces a non-overlapping group sparsity\nthat encourages most coefficients to be zero. \nThis operation will encourage a small number of clean data to contribute to the learning of the model, while filtering mislabeled and non-relevant data. \nIn other words, this regularizer enforces the features of the good data to be used for model learning, while noisy additional activations will be disregarded. \nIt is worth noting that Berthelot et al.~ introduced MixMatch for semi-supervised learning (SSL), and the empirical performance of MixMatch reaches the state-of-the-art. Most importantly, one of the key components in MixMatch is Minimum Entropy Regularization (MER), which belongs to explicit regularization in LNRL. Specifically, MER was proposed by Grandvalet \\& Bengio~, and the key idea is to augment the cross-entropy loss with an explicit term encouraging the classifier to make predictions with high confidence on the unlabeled examples, namely minimizing the model entropy \nfor unlabeled data $x$. Similar to MER, the pseudo-label method conducts entropy minimization implicitly~, which generates hard labels from high-confidence predictions on unlabeled data for further training. In particular, the pseudo-label method (i.e., label guessing) first computes the average of the model’s predicted class distributions across all augmentations, and then applies a temperature sharpening function to reduce the entropy of the label distribution.\nMiyato et al.~ also explored smoothness for combating label noise, and they proposed the virtual adversarial loss, which is a new measure of local smoothness of the conditional label distribution given input.
Specifically, their method enforces the output distribution to be isotropically smooth around each input data \nvia selectively smoothing the model in its most anisotropic direction. The\nbenefit of such smoothness makes the model output insensitive to the input data. To realize such smoothness, they first design the virtual adversarial direction, which can greatly deviate from the current inferred output distribution, from the status quo without the label information. Based on such a direction, they defined local distributional smoothness, and then proposed a training method called virtual adversarial training (VAT).\nMathematically, \nthey first defined local distributional smoothness (LDS):\n\\begin{equation}\n\\text{LDS}(x^*,\\theta):=D[p(y|x^*,\\hat{\\theta}),p(y|x^*+r_{\\text{vadv}},\\theta)],\n\\end{equation}\nwhere\n$r_{\\text{vadv}}\n:= \\arg\\max\\nolimits_{\\left\\lVert r \\right\\rVert_2 \\leq \\epsilon} \nD[ p(y|x^*,\\hat{\\theta}),p(y|x^*+r) ]$,\n$D$ is a non-negative function that measures the distribution divergence, and\n$x^*$ represents either labeled or unlabeled features. We use $\\hat{\\theta}$ to\ndenote the vector of model parameters at a specific iteration step of the\ntraining process. Then, we have a regularization term:\n\\begin{equation*}\nR_{\\text{vadv}}(D_\\mathrm{l},D_\\mathrm{ul},\\theta)\n:=\\nicefrac{1}{N_l+N_{ul}}\n\\sum\\nolimits_{x^* \\in D_l,D_{ul}}\\text{LDS}(x^*,\\theta),\n\\end{equation*}\nwhere $D_l$ denotes the label dataset with size $N_l$; and $D_{ul}$ denotes the unlabeled dataset with size $N_{ul}$. 
\nTherefore, the full objective function of VAT is given by\n\\begin{equation*}\n\\ell(D_l,\\theta) + \\alpha R_{\\text{vadv}}(D_l,D_{ul},\\theta).\n\\end{equation*}", "id": "9565da23-d5ac-44bf-b242-08852418038e", "level": "subsubsection", "origin_cites_number": 5, "parent_id": "3c156707-13c8-4d41-a27b-abbe2cd56ce7", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Objective" ], [ "subsection", "Regularization" ], [ "subsubsection", "Explicit Regularization" ] ], "subsections": [], "title": "Explicit Regularization" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 4139, 3630, 4115, 4457, 7191 ], "content": "Recently, there are more and more implicit regularizers, which achieve the effect of regularization without an explicit form (cf.~Section~\\ref{exp-reg}).\nFor example,\n\\begin{itemize}[leftmargin=*]\n\\item Bootstrapping~: Reed et al.~\naugmented the prediction objective with a notion of consistency. At a\nhigh level, this provides the learner justification to ``disagree'' with a\nperceptually-inconsistent training label, and to effectively re-label the data.\nNamely, the learner bootstraps itself in this way, which uses a convex\ncombination of training labels and the current model’s predictions to generate\nthe training targets. Intuitively, as the learner improves over time, its\npredictions can be trusted more. Such bootstrapping can avoid modeling the noise\ndistribution. Specifically, Reed et al.~ proposed two ways\nto realize bootstrapping, namely soft and hard bootstrapping.
The soft version\nloss $\\ell_{\\text{soft}}$ uses predicted class probabilities $q$ directly to\ngenerate regression targets for each batch as follows:\n\\begin{equation*}\n\\ell_{\\text{soft}}(q,t) \n= \\sum\\nolimits_{k=1}^{L}[\\beta t_k + (1-\\beta)q_k]\\log(q_k),\n\\end{equation*}\nwhere $t$ denotes the training labels.\nThe parameter $\\beta$ balances the prediction $q$ and target $t$, which can be found via cross validation. The soft version loss is equivalent to softmax regression with minimum entropy regularization, which encourages the model to have a high confidence in predicting labels. Meanwhile, the hard version loss $\\ell_{\\text{hard}}$ modifies targets using a MAP estimate of $q$ given $\\mathbf{x}$ as follows:\n\\begin{equation*}\n\\ell_{\\text{hard}}(q,t) \n= \\sum\\nolimits_{k=1}^{L}[\\beta t_k + (1-\\beta)z_k]\\log(q_k),\n\\end{equation*}\nwhere $z_k = \\mathbf{1}[k=\\arg\\max_{i = 1,\\ldots,L} q_i]$ and $\\mathbf{1}[\\cdot]$ denotes the indicator function. To solve the hard version via stochastic gradient descent, an EM-like method will be employed. In the E-step, the approximate-truth confidence targets are estimated as a convex combination of training labels and model predictions. In the M-step, the model parameters are updated in order to predict those generated targets better.\n\\item Mixup~: Motivated by vicinal risk minimization~, Zhang et al.~ introduced a data-agnostic regularization method called Mixup, which constructs virtual training examples $(\\tilde{x},\\tilde{y})$ as follows.\n\\begin{equation}\\label{mixup}\n\\tilde{x} = \\lambda x_i + (1-\\lambda)x_j\n\\;\\text{and}\\;\n\\tilde{y} = \\lambda y_i + (1-\\lambda)y_j,\n\\end{equation}\nwhere $(x_i,y_i)$ and $(x_j,y_j)$ are two examples randomly drawn from the training data and $\\lambda \\in [0,1]$ is a balanced parameter between two examples.\nIntuitively, Mixup conducts virtual data augmentation, \nwhich dilutes the noise effects and smooths the data manifold. 
\nThis simple but effective idea can be used in not only noisy labels but also adversarial examples, \nsince the smoothness happens in both features and labels according to \\eqref{mixup}.\n\\item SIGUA~: It is noted that, given data with noisy labels, \nover-parameterized deep networks can gradually fit the whole data~.\nTo relieve this issue, \nHan et al.~ proposed a gradient-ascent method called stochastic integrated gradient underweighted ascent (SIGUA): \nin a mini-batch, \nwe adopt gradient descent on good data as usual, and learning-rate-reduced gradient ascent on bad data; \nthe proposal is a versatile approach where data goodness or badness is w.r.t. desired or undesired memorization given a base learning method.\nTechnically, SIGUA is a specially designed regularization by pulling optimization back for generalization when their goals conflict with each other.\nA key difference between SIGUA and parameter shrinkage like weight decay is that SIGUA pulls optimization back on some data but parameter shrinkage does the same on all data; philosophically, SIGUA shows that forgetting undesired memorization can reinforce desired one, which provides a novel viewpoint on the inductive bias of neural networks.\n\\end{itemize}", "id": "88555417-b9ef-43dc-8f04-429b8079ed3c", "level": "subsubsection", "origin_cites_number": 7, "parent_id": "3c156707-13c8-4d41-a27b-abbe2cd56ce7", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Objective" ], [ "subsection", "Regularization" ], [ "subsubsection", "Implicit Regularization" ] ], "subsections": [], "title": "Implicit Regularization" }, { "cite_extract_rate": 0, "cites": [], "content": "Instead of augmenting the objective via explicit/implicit regularization, \nthere is another typical solution to modify the objective function termed Reweighting. 
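Among the implicit regularizers above, Mixup is especially simple to sketch; a minimal numpy version of \eqref{mixup}, with hypothetical one-hot toy examples:

```python
import numpy as np

def mixup(x1, y1, x2, y2, rng, alpha=0.2):
    """Construct a virtual example as a convex combination of two
    randomly drawn training examples (labels taken as one-hot vectors)."""
    lam = rng.beta(alpha, alpha)     # Beta(alpha, alpha), as in the paper
    x = lam * x1 + (1 - lam) * x2
    y = lam * y1 + (1 - lam) * y2
    return x, y

rng = np.random.default_rng(0)
x1, y1 = np.array([1.0, 0.0]), np.array([1.0, 0.0])
x2, y2 = np.array([0.0, 1.0]), np.array([0.0, 1.0])
x, y = mixup(x1, y1, x2, y2, rng)
assert np.isclose(y.sum(), 1.0)      # mixed label is still a distribution
assert np.allclose(x, y)             # here features happen to mirror labels
```

Small values of `alpha` concentrate $\lambda$ near 0 or 1, so most virtual examples stay close to a real one; larger values interpolate more aggressively.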
\nAt a high level,\nReweighting assigns different weights to different sub-objective \nfunctions, where each sub-objective function corresponds to one training sample.\nThe weights should be learned, and larger weights bias the corresponding sub-objectives to better overcome label noise. \nThe procedure can be formulated as $\\tilde{\\ell} = \\sum_i w_i\\ell_i$. Here, we introduce several Reweighting approaches as follows.", "id": "89d39c41-4b33-4f19-8ccf-5eb0810e7166", "level": "subsection", "origin_cites_number": 0, "parent_id": "71193050-fa8c-4cca-acf2-cdfbe4b84da7", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Objective" ], [ "subsection", "Reweighting" ] ], "subsections": [ "b8aa75c2-9b46-4f2b-a640-7d5aa8301191", "d1e5a7e4-2062-42df-9878-dad524899233", "290b05af-9b42-4afd-b17c-d3830ca9ffec" ], "title": "Reweighting" }, { "cite_extract_rate": 0.5, "cites": [ 3453 ], "content": "Liu and Tao introduced importance reweighting from domain adaptation to label-noise learning by treating the noisy training data as the source domain and the clean test data as the target domain. The idea is to rewrite the risk w.r.t. the clean data by exploiting the noisy data.
Specifically, \n\\begin{align*}\n\\label{eq:importance}\n&R(f)=\\mathbb{E}_{(X,Y)\\sim D}[\\ell(f(X),Y)]\\\\\n&=\\int_{x}\\sum_ip_D(X=x,Y=i)\\ell(f(x),i)dx\\nonumber\\\\\n&=\\int_{x}\\sum_ip_{\\bar{D}}(X=x,\\bar{Y}=i)\\frac{p_D(X=x,\\bar{Y}=i)}{p_{\\bar{D}}(X=x,\\bar{Y}=i)}\\ell(f(x),i)dx\\nonumber\\\\\n&=\\int_{x}\\sum_ip_{\\bar{D}}(X=x,\\bar{Y}=i)\\frac{p_D(\\bar{Y}=i|X=x)}{p_{\\bar{D}}(\\bar{Y}=i|X=x)}\\ell(f(x),i)dx\\\\\n&=\\mathbb{E}_{(X,\\bar{Y})\\sim \\bar{D}}[\\beta(X,\\bar{Y}){\\ell}(f(X),\\bar{Y})],\\nonumber\n\\end{align*}\nwhere the second last equation holds because label noise is assumed to be independent of instances and $\\beta(X,\\bar{Y})=p_D(\\bar{Y}=i|X=x)/p_{\\bar{D}}(\\bar{Y}=i|X=x)$ denotes the weights which plays a core role in importance reweighting and can be learned by either exploiting the transition matrix or a small set of clean data.", "id": "b8aa75c2-9b46-4f2b-a640-7d5aa8301191", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "89d39c41-4b33-4f19-8ccf-5eb0810e7166", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Objective" ], [ "subsection", "Reweighting" ], [ "subsubsection", "Importance Reweighting" ] ], "subsections": [], "title": "Importance Reweighting" }, { "cite_extract_rate": 1, "cites": [ 4455, 4135 ], "content": "Wang et al.~ proposed reweighted probabilistic models (RPM) to combat label noise. \nThe idea is simple and intuitive: down-weighting on corrupted labels but up-weighting on clean labels, which brings us Bayesian data reweighting. The mathematical formulations include three steps:\n\\begin{itemize}[leftmargin=15px]\n\\item[1)] Define a probabilistic model $p_{\\beta}(\\beta)\\prod_{n=1}^{N}\\ell(y_n|\\beta)$.\n\\item[2)] Assign a positive latent weight $w_n$ to each likelihood $\\ell(\\cdot|\\beta)$,\nand choose a prior on the weights $p_w(w)$, where $w=(w_1,\\ldots, w_N)$. 
Thus, the RPM can be represented by\n\\begin{equation*}\np(y,\\beta,w)\n= \\nicefrac{1}{Z} \\cdot p_{\\beta}(\\beta)p_{w}(w)\n\\prod\\nolimits_{n=1}^{N}\\ell(y_n|\\beta)^{w_n},\n\\end{equation*}\nwhere $Z$ is the normalizing factor.\n\\item[3)] Infer the posterior of both the latent variables $\\beta$ and the weight $w$: $p(\\beta,w|y)$. The prior knowledge on the weights $p_w(w)$ trades off extremely low likelihood terms, where the options of $p_w(w)$ are a bank of Beta priors, a scaled Dirichlet prior and a bank of Gamma priors. Note that RPMs treat weights $w$ as latent variables, which are automatically inferred.\n\\end{itemize}\nArazo et al.~ introduced a two-component (clean-noisy) beta mixture model (BMM) for a mixture of clean and noisy data, which brings us a bootstrapping loss. Specifically, the posterior probabilities under BMM are leveraged to implement a dynamically-weighted bootstrapping loss, robustly dealing with noisy samples without discarding them. Mathematically, the probability density function \nof a mixture model of $K$ components \non the loss $\\ell$ is defined as\n\\begin{equation}\np(\\ell)\n= \\sum\\nolimits_{k=1}^K \\lambda_k p(\\ell|k),\n\\end{equation}\nwhere $K=2$ is our case, $\\lambda_k$ are mixing \nweights, and $p(\\ell|k)$ can be modeled by the beta distribution:\n\\begin{equation}\np(\\ell|\\alpha,\\beta)\n= \\frac{\\Gamma(\\alpha+\\beta)}{\\Gamma(\\alpha)\\Gamma(\\beta)} \\ell^{\\alpha-1} (1-\\ell)^{\\beta-1}.\n\\end{equation}\nThe above BMM can be solved by the EM procedure. Specifically, latent variables $\\gamma_k(\\ell) = p(k|\\ell)$ are introduced. In the E-step, $\\lambda_k$, $\\alpha_k$, $\\beta_k$ are fixed and $\\gamma_k(\\ell)$ is updated via the Bayes rule. In the M-step, given fixed $\\gamma_k(\\ell)$, $\\alpha_k$ and $\\beta_k$ are estimated using the weighted moments. Meanwhile, the dynamic weights are updated in an easy way: $\\lambda_k = \\frac{1}{N}\\sum_{i=1}^{N}\\gamma_k(\\ell_i)$. 
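The EM procedure for the two-component BMM can be sketched as follows; the synthetic losses and the moment-matching M-step are illustrative simplifications:

```python
import math
import numpy as np

def beta_pdf(x, a, b):
    logc = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return np.exp(logc + (a - 1) * np.log(x) + (b - 1) * np.log(1 - x))

def fit_bmm(losses, iters=20):
    """EM for a 2-component beta mixture over per-sample losses in (0, 1);
    the M-step fits (alpha_k, beta_k) by weighted moment matching."""
    lam = np.array([0.5, 0.5])
    params = [(2.0, 5.0), (5.0, 2.0)]     # init: low-loss vs high-loss mode
    for _ in range(iters):
        # E-step: responsibilities gamma_k(l)
        dens = np.stack([lam[k] * beta_pdf(losses, *params[k]) for k in range(2)])
        gamma = dens / dens.sum(axis=0, keepdims=True)
        # M-step: weighted moments -> beta parameters; lam_k = mean responsibility
        for k in range(2):
            w = gamma[k] / gamma[k].sum()
            mu = (w * losses).sum()
            var = (w * (losses - mu) ** 2).sum()
            common = mu * (1 - mu) / var - 1
            params[k] = (mu * common, (1 - mu) * common)
        lam = gamma.mean(axis=1)
    return lam, params, gamma

rng = np.random.default_rng(0)
losses = np.concatenate([rng.beta(2, 8, 50),    # "clean": small losses
                         rng.beta(8, 2, 50)])   # "noisy": large losses
lam, params, gamma = fit_bmm(losses)
assert gamma[0, :50].mean() > 0.8   # component 0 owns the low-loss samples
```

The posterior `gamma[0]` then plays the role of $p(k|\ell_i)$ in the dynamic bootstrapping weights.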
Based on this BMM model, they further proposed dynamic hard/soft bootstrapping losses, where the weight $w_i$ of each sample is dynamically set to $p(k=1|\\ell_i)$.", "id": "d1e5a7e4-2062-42df-9878-dad524899233", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "89d39c41-4b33-4f19-8ccf-5eb0810e7166", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Objective" ], [ "subsection", "Reweighting" ], [ "subsubsection", "Bayesian Method" ] ], "subsections": [], "title": "Bayesian Method" }, { "cite_extract_rate": 1, "cites": [ 7774 ], "content": "Shu et al.~ proposed Meta-Weight-Net (MW-Net), which can adaptively learn an explicit weighting function from data. \nAt a high level, \nthe weighting function is an MLP with one hidden layer, mapping a loss value to a weight. \nMathematically, the optimal parameter $w$ can be calculated by minimizing the weighted loss:\n\\begin{align*}\nw^*(\\theta) \n= \\arg\\min_{w}\\mathcal{L}^{\\text{tr}}(w;\\theta),\n\\;\\text{with}\\;\n\\mathcal{L}^{\\text{tr}}(w;\\theta)\n= \\nicefrac{1}{N}\\sum\\nolimits_{i=1}^{N}\\mathcal{V}(\\ell_i^{\\text{tr}}(w);\\theta)\\ell_i^{\\text{tr}}(w),\n\\end{align*}\nwhere $\\mathcal{V}(\\ell_i^{\\text{tr}}(w);\\theta)$ denotes MW-Net.\nHere, the parameters in MW-Net can be optimized by the meta learning idea.
Given a small amount of clean and balanced meta-data $\\{x_i^{(\\text{meta})},y_i^{(\\text{meta})}\\}_{i=1}^M$, the optimal parameter $\\theta$ can be obtained by minimizing the meta-loss:\n\\begin{align*}\n\\theta^* \n= \\arg\\min_{\\theta}\\ell^{\\text{meta}}(w^*(\\theta)),\n\\;\\text{with}\\;\n\\ell^{\\text{meta}}(w^*(\\theta))\n=\\nicefrac{1}{M}\\sum\\nolimits_{i=1}^{M}\\ell_i^{\\text{meta}}(w^*(\\theta)).\n\\end{align*}\nThen SGD is employed to update $w$ (parameters of classifier network) and $\\theta$ (parameters of MW-Net) iteratively.", "id": "290b05af-9b42-4afd-b17c-d3830ca9ffec", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "89d39c41-4b33-4f19-8ccf-5eb0810e7166", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Objective" ], [ "subsection", "Reweighting" ], [ "subsubsection", "Neural Networks" ] ], "subsections": [], "title": "Neural Networks" }, { "cite_extract_rate": 0, "cites": [], "content": "Besides augmenting and reweighting the objective function, there is another common solution called redesigning. \nAt a high level, redesigning\ngenerally replaces $\\tilde{\\ell}$ with a new loss $\\ell'$\ndifferent from $\\ell$,\nmotivated by various observations and principles.\nThus,\nthese methods differ substantially across scenarios. Here, we introduce several redesigning approaches as follows.", "id": "f89f4416-15b0-4666-9c8a-3d22e8bc1664", "level": "subsection", "origin_cites_number": 0, "parent_id": "71193050-fa8c-4cca-acf2-cdfbe4b84da7", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Objective" ], [ "subsection", "Redesigning" ] ], "subsections": [ "6b8e75d9-1c46-4d4a-9825-4e527b09ba1e", "4cdc34e5-5f24-43ec-806b-c7c7c4a2d158" ], "title": "Redesigning" }, { "cite_extract_rate": 0.625, "cites": [ 4130, 3340, 4182, 8630, 7781 ], "content": "In recent years, many \nnew losses have been proposed for combating label noise. 
Their designs are based on different principles, such as gradient clipping~ and curriculum learning~. \nHere, we choose several representative losses to explain.\n\\begin{itemize}[leftmargin=*]\n\\item Zhang et al.~ proposed a generalized cross-entropy loss called $\\ell_q$, which encompasses both the mean absolute error\n(MAE) and categorical cross entropy (CCE) loss. Since the $\\ell_q$ loss is a generalization of MAE and CCE, it enjoys the benefits of both the noise-robustness provided by MAE and the implicit weighting scheme of CCE, which can be well justified by theoretical analysis.\nMore importantly, it empirically works well for both closed-set noise~ and open-set noise~.\nMathematically, they used the negative Box-Cox transformation as the \n$\\ell_q$ loss function: \n\\begin{equation*}\n\\ell_q(f(x),e_j) = (1-f_j(x)^q) / q,\n\\end{equation*}\nwhere $q \\in (0,1]$ and $e_j$ is the one-hot vector of the $j$-th class. The $\\ell_q$ loss reduces to CCE in the limit\n$q\\rightarrow0$, and becomes MAE when $q = 1$. Since a\ntighter bound in $\\ell_q$ can bring stronger noise tolerance due to discarding many samples for training, they proposed the truncated $\\ell_q$ loss:\n\\begin{equation}\n\\ell_\\text{trunc}(f(x),e_j)\n=\n\\begin{cases}\n\\ell_q(k) & \\text{if} \\; f_j(x) \\leq k, \\\\\n\\ell_q(f(x),e_j) & \\text{otherwise}, \\\\\n\\end{cases}\n\\end{equation}\nwhere $0 < k < 1$, and $\\ell_q(k) = \\nicefrac{1-k^q}{q}$. When $k \\rightarrow 0$, the truncated $\\ell_q$ loss equals the original $\\ell_q$ loss.\n\\item\nCharoenphakdee et al.~ theoretically justified symmetric losses through the lens of theoretical tools, including the classification-calibration condition, excess risk bound, conditional risk minimizer and AUC-consistency.
The key idea is to design a loss that need not satisfy the symmetric condition (i.e., $\ell(z) + \ell(-z)$ is a constant for every $z \in \mathbb{R}$) everywhere, but instead gives a high penalty in the non-symmetric area. Motivated by this idea, they introduced the barrier hinge loss, which satisfies the symmetric condition not everywhere but gives a large penalty once $z$ is outside of the symmetric interval, which incentivizes learning a prediction function inside the symmetric interval. 
Mathematically, the barrier hinge loss is defined as follows.
\begin{equation}
\ell(z) = \max(-b(r+z)+r,\max(b(z-r),r-z)),
\end{equation}
where $b > 1$ and $r > 0$.
\item
Thulasidasan et al.~ proposed to abstain on confusing examples during the training of deep networks. 
In practice, abstention is closely related to loss redesign. Based on abstention, they introduced the deep abstaining classifier (DAC). 
DAC has an additional output $p_{k+1}$, which indicates the probability of abstention.
The loss of DAC is as follows.
\begin{equation}
\ell(x_j) 
= - \tilde{p}_{k + 1} \sum\nolimits_{i=1}^k t_i\log( \nicefrac{p_i}{\tilde{p}_{k + 1}} )
- \alpha\log \tilde{p}_{k + 1}, 
\end{equation}
where $\tilde{p}_{k + 1} = 1-p_{k+1}$. If $\alpha$ is large, the penalty drives $p_{k+1}$ to zero, which leads the model not to abstain. If $\alpha$ is small, the classifier may abstain on everything. They further proposed an auto-tuning algorithm to find the optimal $\alpha$. Note that DAC can be used for both structured (e.g., asymmetric and pair-flipping) and unstructured (i.e., symmetric) label noise, where DAC works as a data cleaner.
\item
Aditya et al.~ leveraged gradient clipping to design a new loss. 
Intuitively, clipping the gradient prevents over-confident descent steps in the scenario of label noise. Motivated by gradient clipping, they proposed the partially Huberized loss
\begin{equation}
\tilde{\ell}_{\theta}(x,y)
\!
= \\!\n\\begin{cases}\n-\\tau p_{\\theta}(x,y)\n\\! + \\! \\log\\tau \\! + \\! 1 & \\text{if} \\; p_{\\theta}(x,y) \\leq \\frac{1}{\\tau}, \n\\\\\n-\\log p_{\\theta}(x,y) & \\text{otherwise}.\n\\end{cases}\n\\end{equation}\n\\item\nLyu et al.~ proposed the curriculum loss (CL), which is a tighter \nupper bound of the 0-1 loss compared to the conventional surrogate of the 0-1 loss (cf. Eq. (8) in ). Moreover, CL can adaptively select samples for stagewise training. In particular, giving any base loss function $\\ell(u) \\geq \\mathbf{1} (u < 0), u \\in \\mathbb{R}$, CL is defined as follows.\n\\begin{align*}\nQ(\\mathbf{u}) \n=\n\\max\n\\left( \n\\min\\nolimits_{\\mathbf{v}\\in\\{0,1\\}^n} f_1(\\mathbf{v})\n,\n\\min\\nolimits_{\\mathbf{v}\\in\\{0,1\\}^n} f_2(\\mathbf{v})\n\\right),\n\\end{align*}\nwhere\n\\begin{align*}\nf_1(\\mathbf{v}) \n\\! = \\! \\sum_{i=1}^n v_i\\ell(u_i)\n\\;\\text{and}\\;\nf_2(\\mathbf{v})\n\\! = \\! n \\! - \\! \\sum_{i=1}^n v_i \\! + \\! \\sum_{i=1}^n \\mathbf{1}(u_i \\! < \\! 0).\n\\end{align*}\nTo adapt CL for deep learning models, they further introduced the noise pruned CL.\n\\end{itemize}", "id": "6b8e75d9-1c46-4d4a-9825-4e527b09ba1e", "level": "subsubsection", "origin_cites_number": 8, "parent_id": "f89f4416-15b0-4666-9c8a-3d22e8bc1664", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Objective" ], [ "subsection", "Redesigning" ], [ "subsubsection", "Loss Redesign" ] ], "subsections": [], "title": "Loss Redesign" }, { "cite_extract_rate": 0.75, "cites": [ 3342, 7162, 199 ], "content": "\\begin{itemize}[leftmargin=*]\n\\item Laine and Aila~ introduced self-ensembling in semi-supervised learning, including the $\\pi$-model and temporal ensembling, which can also be used to purify the label noise. The key idea of self-ensembling is to form a consensus prediction of the unknown labels using the outputs of the network in training. 
Specifically, the $\\pi$-model encourages consistent network output between two realizations of the same input, under two different dropout conditions. \nBeyond the $\\pi$-model, temporal ensembling is considered for the network predictions over multiple previous training epochs. \nThe loss function of the $\\pi$-model is\n\\begin{equation}\n\\ell = -\\nicefrac{1}{B}\\sum\\nolimits_i\n\\log z_i[y_i] + \\nicefrac{w(t)}{C|B|}\n\\sum\\nolimits_i \\left\\|z_i-\\tilde{z}_i\\right\\|^2,\n\\end{equation}\nwhere the first term handles labeled data via the standard cross-entropy loss, namely, $\\log z_i[y_i]$ calculates the cross-entropy loss value between model prediction $z_i$ and label $y_i$, and the second term handles unlabeled data. Note that $(x_i,y_i)$ denotes the pair of input and label, $B$ denotes the mini-batch size, and $C$ denotes the number of different classes. Both $z_i$ and $\\tilde{z}_i$ are transformed from the same input $x_i$, namely two predictions via the same network with different dropout conditions. The second term is also scaled by time-dependent weighting function $w(t)$. Temporal ensembling goes beyond $\\pi$-model by aggregating the predictions of multiple previous network evaluations into an ensemble prediction. Namely, the main difference from $\\pi$-model is that the network and augmentations are evaluated only once per input in each epoch, and the target $\\tilde{z}$ is based on prior network evaluations. \nAfter every training epoch, \nthe network outputs $z_i$ are accumulated into ensemble outputs $Z_i$ by updating $Z_i \\leftarrow \\alpha Z_i + (1-\\alpha)z_i$ and $\\tilde{z} \\leftarrow \\nicefrac{Z_i}{(1-\\alpha^t)}$, where $\\alpha$ is a momentum term.\n\\item\nNguyen et al.~ proposed a self-ensemble label filtering (SELF) method to progressively filter out the wrong labels during training. 
\nIn high-level, they leveraged the knowledge provided in the network's output over different training iterations to form a consensus of predictions, which progressively identifies and filters out the noisy labels from the labeled data. In the filtering strategy, the model can determine the set of potentially corrected samples $L_i$ based on agreement between the label $y$ and its maximum likelihood prediction $\\hat{y}_x$ with $L_i = \\{(y,x)|\\hat{y}_x = y; \\forall (y,x) \\in L_0\\}$,\nwhere $L_0$ is the sample set \nin the beginning. In the self-ensemble strategy, they maintained the two-level ensemble. First, they leveraged the model ensemble with Mean Teacher~, \nnamely an exponential running average of model snapshots. Second, they employed a\nprediction ensemble by collecting the sample predictions over multiple training\nepochs: $\\bar{z}_j = \\alpha \\bar{z}_{j-1} + (1-\\alpha)\\hat{z}_j$, where\n$\\bar{z}_j$ depicts the moving-average prediction of sample $k$ at epoch $j$ and\n$\\hat{z}_j$ is the model prediction for sample $k$ at epoch $j$.\n\\item\nMa et al.~ \ninvestigated the dimensionality of the deep representation subspace of training samples. \nThen, they developed a dimensionality-driven learning strategy, \nwhich monitors the dimensionality of subspaces during training and adapts the loss function accordingly. The key idea is to leverage the local intrinsic dimensionality (LID), which discriminates clean labels and noisy labels during training deep networks.\nMathematically, the estimation of LID can be defined as\n\\begin{equation}\n\\widehat{\\text{LID}} \n= -\n\\big( \\nicefrac{1}{k}\\sum\\nolimits_{i=1}^k\\log\\nicefrac{r_i(x)}{r_{\\max}(x)}\\big)^{-1},\n\\end{equation}\nwhere $r_i(x)$ denotes the distance between $x$ and its $i$-th nearest neighbor, and $r_{\\max}(x)$ denotes the maximum of the neighbor distance. 
Specifically, when learning with clean labels, the LID score consistently decreases and the test accuracy increases as training proceeds. However, when learning with noisy labels, the LID score first decreases and then increases after a few epochs, while the test accuracy behaves in the opposite way: it first increases and then decreases. Based on the LID score, the training dynamics of deep networks can thus be monitored.
\end{itemize}
\label{sec:opt}
Methods in this section solve the LNRL problem by changing optimization policies, such as early stopping. The effectiveness of early stopping is due to memorization effects of deep neural networks, which avoid overfitting noisy labels to some degree. To combat noisy labels using memorization effects, there exists another and possibly better way, namely small-loss tricks.
The structure of this section is arranged as follows. First, we explain what 
memorization effects are and why this phenomenon is important. Then, we introduce
several common ways to leverage memorization effects for combating label
noise. The first common way is to self-train a single network robustly via
small-loss tricks, which brings us to MentorNet~ and Learning to
Reweight~. Furthermore, the second common way is to co-train two
networks robustly via small-loss tricks, which brings us to Co-teaching~ and Co-teaching+~. 
Lastly, there are several ways to
further improve the performance of Co-teaching, such as using cross-validation~,
automated machine learning~ and Gaussian mixture models~.
Arpit et al.~ presented an influential work called ``A
closer look at memorization in deep networks'', which shapes a new direction
towards solving LNRL. In general, memorization effects can be defined as the
behavior exhibited by deep networks trained on noisy data. Specifically, deep
networks tend to memorize and fit easy (clean) patterns, and gradually over-fit
hard (noisy) patterns~. Here, we empirically reproduce a simulated experiment to justify this hypothesis, and experimental details can be found in Appendix 3.
\begin{figure}
\begin{center}
\centerline{\includegraphics[width=0.35\textwidth]{noise_mnist.pdf}}
\vspace{-10px}
\caption{A simulated experiment based on different noise rates ($0\%$-$80\%$). We chose \textit{MNIST} with uniform noise as noisy data. The solid lines denote the training accuracy, while the dotted lines denote the validation accuracy.}
\label{sim-exp}
\end{center}
\end{figure}
In Fig.~\ref{sim-exp}, we used the MNIST dataset, and added random noise to its labels. 
The noise rate was chosen from the range between $0\%$ and $80\%$. We trained our deep networks on the corrupted training data. 
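The label-corruption step of this experiment can be sketched as follows (a minimal pure-Python version with our own naming; symmetric noise at rate $\tau$):

```python
import random

def corrupt_labels(labels, num_classes, tau, seed=0):
    """Symmetric (uniform) label noise: with probability tau, replace
    each label by a different class drawn uniformly at random."""
    rng = random.Random(seed)
    noisy = []
    for y in labels:
        if rng.random() < tau:
            y = rng.choice([c for c in range(num_classes) if c != y])
        noisy.append(y)
    return noisy
```

With a large enough sample, the fraction of flipped labels concentrates around $\tau$, matching the noise rates on the x-axis of the figure.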
\nThen, we tested the trained networks on both (noisy) training data and\n(clean) validation data. \nWe can clearly see two phenomena in the graph:\n\\begin{itemize}[leftmargin=*]\n \\item The training curve will reach or approximate 100\\% accuracy eventually. All curves will converge the same.\n \\item The validation curve will first reach a very high accuracy in \n the first few epochs, \n but drop gradually until convergence (after 40 epochs).\n\\end{itemize}\nSince deep networks tend to memorize and fit easy\n(clean) patterns in the corrupted data, the validation curve will first reach a peak.\nHowever, such overparameterized models will gradually over-fit hard (noisy)\npatterns. The validation curve will drop gradually, since the validation data is\nclean. This simple experiment not only justifies the hypothesis of memorization\neffects, but also opens a new door to the LNRL problem, namely small-loss\ntricks~.\nSpecifically, small-loss tricks mean deep networks regard small-loss examples as ``clean'' examples, \nand only back-propagate such examples to update the model parameters. \nMathematically, small-loss tricks are equivalent to constructing the restricted $\\tilde{\\ell}$,\nwhere $\\tilde{\\ell} = \\text{sort}(\\ell,1-\\tau)$. 
Namely, $\\tilde{\\ell}$ can be constructed by sorting $\\ell$ from small to large, and fetching $1-\\tau$ percentage of small loss ($\\tau$ is noise rate).", "id": "f18991bb-072a-4b3f-9199-093293d6c577", "level": "subsection", "origin_cites_number": 2, "parent_id": "6c23077d-879e-4355-a4b2-e8036af47caf", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Optimization Policy" ], [ "subsection", "Memorization Effects" ] ], "subsections": [], "title": "Memorization Effects" }, { "cite_extract_rate": 0, "cites": [], "content": "Based on small-loss tricks, the seminal works leverage self-training to improve the model robustness (left panel of\nFig.~\\ref{net-structure}). There are two works as follows.\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{net-structure}\n\\vspace{-10px}\n\\caption{Self-training (i.e., MentorNet, abbreviated as M-Net) vs. Co-training (i.e., Co-teaching and Co-teaching+).}\n\\label{net-structure}\n\\end{figure}", "id": "8c053acd-540b-4e8b-876b-8eead263245c", "level": "subsection", "origin_cites_number": 0, "parent_id": "6c23077d-879e-4355-a4b2-e8036af47caf", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Optimization Policy" ], [ "subsection", "Self-training" ] ], "subsections": [ "0f5926b1-e336-413e-bbd2-d849823893c2", "5c3eff96-4510-4c44-b427-200919a4f9f3" ], "title": "Self-training" }, { "cite_extract_rate": 0, "cites": [], "content": "Jiang et al.~ introduced MentorNet, which supervises the base deep network termed StudentNet. The key focus of MentorNet is to provide a curriculum for StudentNet. 
Instead of using a pre-defined curriculum, MentorNet learns a data-driven curriculum dynamically.
Mathematically, MentorNet $g_m$ can approximate a predefined curriculum via
minimizing the following objective:
\begin{equation*}
\arg\min_{\theta}
\sum\nolimits_{(x_i,y_i)\in \mathcal{D}}g_m(z_i;\theta)\ell_i + G(g_m(z_i;\theta);\lambda_1, \lambda_2),
\end{equation*}
where $\lambda_1, \lambda_2$ are regularization parameters. The first term denotes the curriculum-weighted loss and the second term
indicates the robust penalty $G$ (Eq.~(5) in ). 
After optimizing the above objective, we can get
the closed-form solution as follows.
\begin{equation*}
g_m(z_i;\theta) 
\! = \! \left\{
\begin{array}{ll}
 \mathbf{1}(\ell_i \leq \lambda_1) & \text{if} \, \lambda_2 = 0,\\
 \min(\max(0, 1 - \nicefrac{\ell_i - \lambda_1}{\lambda_2}),1) & \text{if} \, \lambda_2 \neq 0.\\
\end{array}
\right.
\end{equation*}
Intuitively, when $\lambda_2 = 0$, MentorNet only provides small-loss samples with $\ell_i \leq \lambda_1$. When $\lambda_2 \neq 0$, MentorNet will not provide big-loss samples, namely, samples with loss larger than ($\lambda_1 + \lambda_2$) will not be selected during training. Meanwhile, MentorNet can also discover new curricula from data directly, which is unrelated to small-loss tricks.
Ren et al.~ employed a meta-learning framework to assign different weights to the training examples based on their gradient directions. 
In general, small-loss examples are assigned larger weights, since they are more likely to be clean. Specifically, they posited that the best example weighting should minimize the loss of a set of unbiased clean validation examples. Namely, they performed validation at every training iteration to dynamically determine the example weights of the current batch. Mathematically, they hoped to learn a reweighting of the inputs via minimizing a weighted loss:
\begin{equation}
\theta^*(w) = \arg\min_{\theta}
\sum\nolimits_{i=1}^N w_i \ell_i(\theta),
\end{equation}
where the training loss $\ell_i$ is associated with a training set
$\{(x_i,y_i)\}_{i=1}^N$. Note that $w_i$ can be viewed as training hyperparameters, and the optimal selection of $w$ is based on its validation performance: 
\begin{equation}
w^* = \arg\min_{w}\nicefrac{1}{M}
\sum\nolimits_{i=1}^M \ell_i^v(\theta^*(w)),
\end{equation}
where the validation loss $\ell_i^v$ is associated with a small validation set $\{(x_i^v,y_i^v)\}_{i=1}^M$. To realize ``Learning to Reweight'', there are three technical steps. First, they ran forward and backward passes on noisy training examples via the training loss, which updates the model parameter $\theta$ and computes $\nabla\theta$. Second, $\nabla\theta$ affects the validation network, on which they ran forward and backward passes with clean validation examples via the validation loss. Lastly, the training network leverages meta-learning to update the example weights $w$ via backward-on-backward (i.e., taking a second-order gradient with respect to the example weights). 
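In the original algorithm, the per-example weights derived from the meta-gradient are also clamped at zero and normalized before the final update; a hedged sketch of that post-processing step (naming is ours, and `neg_grads` is assumed to hold the negated per-example meta-gradients):

```python
def rectify_weights(neg_grads):
    """Clamp the negated per-example meta-gradients at zero, then
    normalize the surviving weights so they sum to one."""
    clamped = [max(0.0, g) for g in neg_grads]
    total = sum(clamped)
    if total == 0.0:  # degenerate batch: no example gets a positive weight
        return [0.0] * len(clamped)
    return [g / total for g in clamped]
```

Clamping discards examples whose gradient direction conflicts with the clean validation set, which is how noisy examples end up with zero weight.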
Note that the same strategy can also be used for class-imbalance problems, in which big-loss tricks are preferred, since big-loss examples are more likely to come from the minority class~.
Although self-training works well, in the long term, the error will still accumulate, which motivates us to explore Co-training~ based methods (right panel of Fig.~\ref{net-structure}).
Han et al.~ proposed a new deep learning paradigm called ``Co-teaching''. In general, instead of training a single deep network, they trained two deep networks simultaneously, and let them teach each other given every mini-batch. 
Specifically, each network feeds forward all data and selects some data with possibly clean labels; 
then, the two networks communicate with each other about which data in this mini-batch should be used for training; 
lastly, each network back-propagates the data selected by its peer network and updates itself. 
The selection criterion is still based on the small-loss trick.\nIn MentorNet~, the error from one network will be directly transferred back to itself in the second mini-batch of data, and the error should be increasingly accumulated. However, in Co-teaching, since two networks have different learning abilities, they can filter different types of error introduced by noisy labels. Namely, in this exchange procedure, the error flows can be reduced by peer networks mutually. However, with the increase of training epochs, two networks will converge to a consensus and Co-teaching will reduce to the self-training MentorNet in function. Note that the principle of ensemble learning is to keep different classifiers diverged~.\nYu et al.~ introduced the ``update by disagreement''~ strategy to keep Co-teaching diverged and named their method Co-teaching+. \nIn general, Co-teaching+ consists of the disagreement-update step and cross-update step. In the disagreement-update step, two networks feed forward and predict all data first, and only keep prediction-disagreed data. This step indeed keeps two networks diverged. The cross-update step has been explored in Co-teaching. Note that both Co-teaching and Co-teaching+ share the same dropping rate for big-loss examples, which was hand-designed. 
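One mini-batch of this exchange can be sketched as follows (pure Python with our own naming; `losses_a` and `losses_b` are the per-sample losses of the two networks, and the returned index lists say which samples each network back-propagates):

```python
def coteaching_exchange(losses_a, losses_b, forget_rate):
    """Each network picks its (1 - forget_rate) small-loss samples and
    hands them to its peer for the parameter update."""
    n_keep = int(len(losses_a) * (1.0 - forget_rate))

    def small_loss_idx(losses):
        # indices of the n_keep smallest losses, in ascending index order
        return sorted(sorted(range(len(losses)), key=lambda i: losses[i])[:n_keep])

    idx_update_b = small_loss_idx(losses_a)  # selected by network A, trains B
    idx_update_a = small_loss_idx(losses_b)  # selected by network B, trains A
    return idx_update_a, idx_update_b
```

The cross-over of the two index lists is what distinguishes Co-teaching from self-training: each network is updated only on samples its peer considers clean.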
Via both methods, we summarize three key factors in this line of research: (1) using the small-loss trick; (2) cross-updating parameters of two networks; and (3) keeping two networks diverged.
After 2018, 
several important works have further improved Co-teaching. 
Here, we describe several representative works based on Co-teaching and beyond. 
\begin{itemize}[leftmargin=*]
\item 
Chen et al.~ used cross-validation to randomly split noisy datasets, which identifies most samples that have correct labels. In general, they designed the Iterative Noisy Cross-Validation (INCV) method to select a subset of samples, which has a much smaller noise ratio than the original dataset. Then, they leveraged Co-teaching for further training over the selected subset. Apart from selecting clean samples, INCV removes samples that have large losses at each iteration.
\item 
Yao et al. used automated machine learning (AutoML) to explore the memorization effect and thus improve Co-teaching. 
It is noted that both Co-teaching and Co-teaching+ share the same dropping rate for big-loss examples, 
which was hand-designed. However, such a rate is critical in training deep networks. 
Specifically, Yao et al.~ designed a domain-specific search and proposed
a novel Newton algorithm to solve the bi-level optimization problem efficiently. 
\nTo explore the optimal rate $R(\\cdot)$, they formulated the problem as\n\\begin{align*}\nR^* & = \\arg\\min\\nolimits_{R(\\cdot) \\in \\mathcal{F}} \\mathcal{L}_{\\text{val}}(f(w^*;R),\\mathcal{D}_{\\text{val}})\n\\\\\n\\text{s.t.} \\;\n& w^* = \\arg\\min\\nolimits_{w} \\mathcal{L}_{\\text{tr}}(f(w^*;R),\\mathcal{D}_{\\text{tr}}),\n\\end{align*}\nwhere $\\mathcal{F}$ is the search space of $R(\\cdot)$ exploring a general pattern of the memorization effect. \nSuch a prior on $\\mathcal{F}$ not only allows \nefficient bi-level optimization\nbut also boosts the final learning performance. \n\\item\nMotivated by MixMatch,\nLi et al.~ promoted a novel framework termed DivideMix by leveraging semi-supervised learning techniques. \nIn high-level, DivideMix used a Gaussian Mixture Model (GMM) to dynamically\ndivides the training data into two parts. The first part includes labeled data\nwith clean labels; while the second part includes unlabeled data with noisy\nlabels. During the semi-supervised learning phase, they leveraged variants of\nCo-training, such as Co-refinement~ on labeled data and\nCo-guessing~ on unlabeled data. Specifically, once the data is divided into labeled and unlabeled data, they conducted Co-refinement for labeled data, which linearly combines the ground-truth label with the network's prediction and sharpens the refined label. Then they conducted Co-guessing for unlabeled data, which averages the predictions from both networks. 
After Co-refinement and Co-guessing, they followed the MixMatch routine to mix the data, and updated the model parameters.
\end{itemize}
This section focuses on solving LNRL by leveraging 
the unique benefits of deep models, especially their memorization effects. However, besides memorization, there are two new branches based on deep models as follows.
In many CV and NLP applications, the pre-training paradigm has become commonplace, especially when data is scarce in the target domain. 
Hendrycks et al.~ demonstrated that pre-training can improve model robustness and uncertainty, 
including adversarial robustness and label corruption.
Normally, pre-training is carried out on a bigger dataset first, 
and fine-tuning is then applied to the pre-trained model on a smaller target dataset. 
For example, if we design an LNRL method for image classification with label noise, we pre-train a model on ImageNet via an LNRL method first. 
\nThen, we fine-tune the pre-trained model on the target dataset via an LNRL method. \nNote that the pre-training approach has been demonstrated in many robustness and uncertainty scenarios~,\nincluding label noise, adversarial examples, class imbalance, out-of-distribution detection and calibration.", "id": "b105c0e1-9d65-4188-8e44-a44d89df5e64", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "fc0f77d5-54e3-4ca1-8bd4-c3e6e527e054", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Optimization Policy" ], [ "subsection", "Beyond Memorization" ], [ "subsubsection", "Pre-training" ] ], "subsections": [], "title": "Pre-training" }, { "cite_extract_rate": 1, "cites": [ 4467 ], "content": "Bahri et al.~ proposed the deep k-NN method, which works on an intermediate layer \nof a preliminary deep model $\\mathcal{M}$ to filter mislabeled training data. In\nhigh-level, deep k-NN filtering consists of two steps. In the first step, a model\n$\\mathcal{M}$ with architecture $\\mathcal{A}$ is trained \nto filter noisy data $\\mathcal{D}_{\\text{noisy}}$ via the k-NN algorithm, which identifies and removes examples whose labels disagree with their neighbors. 
After filtering $\\mathcal{D}_{\\text{noisy}}$, \nin the second step, a final model with architecture $\\mathcal{A}$ is re-trained on $\\mathcal{D}_{\\text{filtered}} \\cup \\mathcal{D}_{\\text{clean}}$.", "id": "b55f4f67-0597-4eec-b1b4-790fddae9810", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "fc0f77d5-54e3-4ca1-8bd4-c3e6e527e054", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Optimization Policy" ], [ "subsection", "Beyond Memorization" ], [ "subsubsection", "Deep k-NN" ] ], "subsections": [], "title": "Deep k-NN" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:fuworks}\nStarting from 1988, label-noise learning has been investigated for more than three decades, evolving from statistical learning to representation learning. Especially from 2012, representation learning becomes increasingly important, which directly gives birth to the above LNRL methods. Similar to the other areas in machine learning, we hope to propose not only new methods, but also new research directions, which can broaden and boost LNRL research in both academia and industry.", "id": "e9c1d328-7cdf-466e-af77-cbf803cadee3", "level": "section", "origin_cites_number": 0, "parent_id": "8584763c-0388-415b-b9cf-b52c55e5eeb8", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Future Works" ] ], "subsections": [ "1624c670-ba44-403d-9ff0-b3cd971304c6", "c8b26ed4-c11f-4e4d-bd30-63ea4b72881f", "814911f7-6a07-43b3-b969-435dacd6bc5a", "2cf80ce6-cd98-479c-9b73-9939550e2a16" ], "title": "Future Works" }, { "cite_extract_rate": 0.5, "cites": [ 4456 ], "content": "In LNRL, the first thing we should do is to construct new datasets with real noise, which is critical to the rapid development of LNRL. 
To the best of our knowledge, 
most researchers test their LNRL methods on simulated datasets, 
such as \textit{MNIST} and \textit{CIFAR-10}. To make further breakthroughs, we should build new datasets with real noise, 
such as \textit{Clothing1M} .
Note that, 
similar to \textit{ImageNet}, 
many researchers train deep networks to overfit \textit{Clothing1M} via different tricks, 
which may not touch the core issues of LNRL.
This motivates us to rethink real datasets in LNRL. Five years after the birth of \textit{Clothing1M}, Jiang et al.~ proposed a new but realistic type of noisy dataset called ``web-label noise'' (or red noise), which enables us to conduct controlled experiments systematically in realistic scenarios. Another interesting point is that benchmark datasets with real noise mainly focus on image classification, instead of natural language, speech processing, and various data collected from real sensors. Obviously, these directions also involve label noise, which need to be addressed further.
To sum up, we should build new benchmark datasets with \textit{real noise}, not only for images but also for language, speech, and various sensor data. 
Better datasets will normally boost the rapid development of LNRL.
\label{sec:fu:idlnrl}
Previously in Section~\ref{sec:thm:idata},
we have seen that CCN is a popular assumption in LNRL.
However, the CCN model is only an approximation to the real-world noise, which may not always work well in practice.
To directly model real-world noise, we should consider features in the label corruption process. 
This motivates us to explore the instance-dependent noise (IDN) model, which is formulated as $p(\bar{Y}|Y,X)$~. 
Specifically, the IDN model considers a more general noise, in which the probability that an instance is mislabeled depends on both its class and features. Intuitively, this noise is quite realistic, as poor-quality or ambiguous instances are more likely to be mislabeled in real-world datasets. However, it is much more complex to formulate the IDN model, since the probability of a mislabeled instance is a function of not only the label space but also the input space, which can be very high-dimensional. Moreover, without some extra assumptions/information, IDN is unidentifiable~.
Towards the IDN model, there are several seminal explorations. For instance, Menon et al.~ proposed the boundary-consistent noise model, which considers stronger noise for samples closer to the decision boundary of the Bayes optimal classifier. However, this model is restricted to binary classification and cannot estimate noise functions. 
Cheng et al.~ recently studied a particular case of the IDN model, in which the noise functions are upper-bounded. Nonetheless, their method is limited to binary classification and has only been tested on small datasets. Berthon et al.~ proposed to tackle the IDN model from a new perspective, by assuming that confidence scores are available for the label of each instance. They term this new setting confidence-scored IDN (CSIDN). Based on CSIDN, they derived an instance-level forward correction algorithm. More recently, Xia et al.~, Cheng et al.~ and Zhu et al.~ have explored this direction in depth.
When discussing robustness, one may think of adversarial robustness~. However, adversarial robustness is obtained via adversarial training, in which the features are adversarially perturbed while the labels are clean. Is this the optimal way to formulate adversarial robustness? In other words, would it be more useful to consider the scenario in which the features are adversarially perturbed while the labels are noisy? We term this adversarial LNRL.
Towards adversarial LNRL, there are two seminal works. For example, Wang et al.~ proposed a new defense algorithm called misclassification aware adversarial training (MART), which explicitly differentiates the misclassified examples (i.e., label noise) and correctly classified examples during training.
To address this issue, MART introduces a misclassification aware regularizer, namely $1/n\sum_{i=1}^{n}\mathbf{1}(h_{\theta}(x_i)\neq h_{\theta}(\hat{x}'_i))\cdot\mathbf{1}(h_{\theta}(x_i)\neq y_i)$, where $h_{\theta}$ denotes a DNN classifier with model parameter $\theta$, $(x_i, y_i)$ denotes the $i$th pair of features and label, and $\hat{x}'_i$ denotes the adversarial example generated by (5) in . Intuitively, $\mathbf{1}(h_{\theta}(x_i)\neq y_i)$ indicates the misclassified examples, which can be closely connected with label noise. Meanwhile, Zhang et al.~ considered the same issue, namely misclassified examples in adversarial training. Specifically, they proposed friendly adversarial training (FAT), which trains deep networks using wrongly-predicted adversarial data that minimize the loss and correctly-predicted adversarial data that maximize the loss. To realize FAT, they introduced early-stopped PGD: once the adversarial data is misclassified by the current model, the PGD iterations are stopped early. At a high level, the objective of FAT is min-min optimization, instead of the min-max optimization in standard adversarial training.
We have envisioned three promising directions above, which belong to the vertical domain of LNRL. However, we hope to explore the horizontal domain more, namely, noisy data instead of only noisy labels.
Here, we summarize different formats of noisy data, and show some preliminary works.
\begin{itemize}[leftmargin=*]
\item 
\textit{Feature}:
Label noise naturally leads us to consider feature noise, of which adversarial examples are a special case~. The problem of feature noise is formulated as $p(\bar{X}|Y)$, in which the features are corrupted but the labels are intact. Therefore, adversarial training can be the main tool to defend against adversarial examples. Note that there exists another type of feature noise called random perturbation~. To address this issue, Zhang et al.~ proposed a robust ResNet, which is motivated by dynamical systems. Specifically, they characterized ResNet based on an explicit Euler method. This allows us to exploit the step factor in the Euler method to control the robustness of ResNet. They proved that a small step factor can benefit training and generalization robustness during backward and forward propagation. Namely, controlling the step factor robustifies deep networks, which can alleviate feature noise.
\item \textit{Preference}:
Han et al.~ and Pan et al.~ tried to address preference noise in ranking problems, which play an important role in our daily life, such as ordinal peer grading, online image rating and online product recommendation.
Specifically, Han et al.~ proposed the ROPAL model, which integrates the Plackett-Luce model with a denoising vector. Based on the Kendall-tau distance, this vector corrects $k$-ary noisy preferences with a certain probability. However, ROPAL cannot handle the dynamic length of $k$-ary noisy preferences, which motivated Pan et al.~ to propose COUPLE, which leverages stagewise learning to break the limit of fixed length. To update the parameters of both models, they used online Bayesian inference.
\item \textit{Domain}:
Domain adaptation (DA) is one of the fundamental problems in machine learning, arising when the data volume in the target domain is scarce.
Previous DA methods assume that labeled data in the source domain is clean. However, in practice, labeled data in the source domain may come from amateur annotators or the Internet due to its large volume. This brings us a new setting, in which labels in the source domain are noisy. We call this setting \textit{wild domain adaptation (WDA)}.
There are two seminal works. Specifically, to handle WDA, Liu et al.~ proposed the Butterfly framework, which maintains four deep networks simultaneously. Butterfly can obtain high-quality domain-invariant representations (DIR) and target-specific representations (TSR) in an iterative manner.
Meanwhile, Yu et al.~ proposed a novel Denoising Conditional Invariant Component (DCIC) framework, which provably extracts invariant representations and estimates the label distribution in the target domain without bias.
\item \textit{Similarity}:
Similarity-based learning~ is one of the emerging weakly-supervised problems, where similar data pairs (i.e., two examples belonging to the same class) and unlabeled data are available. Compared to class labels, similarity labels are easier to obtain, especially for sensitive issues, e.g., religion and politics. For example, on sensitive matters, people often hesitate to directly answer ``What is your opinion on issue A?'', but are more likely to answer ``With whom do you share the same opinion on issue A?''. However, in some cases people may not be willing to provide their real thoughts even when facing easy questions, so similarity labels can also be noisy. Learning from noisy-similarity-labeled data is therefore very challenging. Wu et al.~ employed a noise transition matrix to model similarity noise, which has been integrated into a deep learning system.
\item \textit{Graph}:
Graph neural networks (GNNs) have attracted intense interest in the machine learning community~. However, are GNNs robust to noise?
For example, once a node or edge is corrupted, the performance of GNNs will certainly deteriorate. Since GNNs are highly related to discrete and combinatorial optimization, LNRL methods cannot be directly deployed. Therefore, it is very meaningful to robustify GNNs under node or edge noise, where the noise can occur in both labels and features. Recently, Wang et al.~ proposed a robust and unsupervised embedding framework called Cross-Graph, which can handle structural corruption in attributed graphs. Meanwhile, since Hendrycks et al.~ discovered that pre-training can improve model robustness and uncertainty, we may leverage strategies for pre-training GNNs~ to overcome the issues of graph noise.
\item \textit{Demonstration}:
The goal of imitation learning (IL) is to learn a good policy from high-quality demonstrations~. However, in reality the quality of demonstrations is often diverse, since it is easier and cheaper to collect demonstrations from a mix of experts and amateurs. This brings us a new setting of IL called diverse-quality demonstrations, where low-quality demonstrations are highly noisy~. When experts provide additional information about the quality, learning from diverse-quality demonstrations becomes relatively easy, since the quality can be estimated by their confidence scores~, ranking scores~ and a small number of high-quality demonstrations~. However, without the availability of experts, these methods may not work well. Recently, Tangkaratt et al.~ pushed this line forward, and proposed to model the quality with a probabilistic graphical model termed VILD. Specifically, they estimated the quality along with a reward function that represents the intention of the experts' decision making.
Moreover, they used a variational approach to handle large state-action spaces, and employed importance sampling to improve data efficiency.
\end{itemize}
\label{sec:conclusions}
In this survey, we reviewed the history of label-noise representation learning (LNRL), and formally defined LNRL from the view of machine learning. Via the lens of representation learning theory and empirical experiments, we tried to understand the mechanism of deep networks under label noise. Based on the above analysis, we categorized LNRL methods into three perspectives, i.e., data, objective and model. Specifically, under such a taxonomy, we provided a thorough discussion of the pros and cons across the different categories. Moreover, we summarized the essential components of robust LNRL, which can enlighten new directions in LNRL. Lastly, we proposed four possible research directions. The first three directions mainly focus on pushing the knowledge boundary of LNRL, including building up new datasets, instance-dependent LNRL and adversarial LNRL. The last direction goes beyond LNRL, learning from various types of noisy data, such as preference noise, domain noise, similarity noise, graph noise and demonstration noise. Ultimately, we hope to uncover the secret of data-noise representation learning, and formulate a general framework in the near future.
\section*{Acknowledgments}
BH was supported by the RGC Early Career Scheme No. 22200720, NSFC Young Scientists Fund No.
62006202 and HKBU CSD Departmental Incentive Grant.
TLL was supported by Australian Research Council Project DE-190101473. IWT was supported by Australian Research Council under Grants DP180100106 and DP200101328. GN and MS were supported by JST AIP Acceleration Research Grant Number JPMJCR20U3, Japan. MS was also supported by the Institute for AI and Beyond, UTokyo.
\bibliographystyle{IEEEtran}
\bibliography{survey_paper}
\begin{IEEEbiography}[{\includegraphics[width = 1\textwidth,height = 0.12\textheight]{bhan.jpg}}]{Bo Han}
is an Assistant Professor of Computer Science at Hong Kong Baptist University, and a Visiting Scientist at RIKEN Center for Advanced Intelligence Project (RIKEN AIP). He was a Postdoc Fellow at RIKEN AIP (2019-2020). He received his Ph.D. degree in Computer Science from the University of Technology Sydney in 2019. He has served as an area chair of NeurIPS'20 and ICLR'21. He received the RIKEN BAIHO Award (2019) and the RGC Early Career Scheme (2020).
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width = 1\textwidth]{quanming.jpg}}]{Quanming Yao}
is a senior scientist at 4Paradigm. He obtained his Ph.D. degree at the Department of Computer Science and Engineering of the Hong Kong University of Science and Technology (HKUST) in 2018 and received his bachelor's degree from Huazhong University of Science and Technology (HUST) in 2013. He has served as an area chair of IJCAI'21. He is a recipient of the Wu Wenjun Prize for Excellent Youth in Artificial Intelligence (issued by CAAI) and a winner of the Google Fellowship (in machine learning).
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width = 1\textwidth]{Tongliang_Liu.jpg}}]{Tongliang Liu} is a Lecturer (Assistant Professor) with the School of Computer Science at the University of Sydney. He is also a Visiting Scientist at RIKEN AIP. He has served as an area chair of IJCAI'21.
He is a recipient of the Discovery Early Career Researcher Award (DECRA) from the Australian Research Council (ARC) and was shortlisted for the J. G. Russell Award by the Australian Academy of Science (AAS) in 2019.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width = 1\textwidth]{Gang.jpg}}]{Gang Niu}
is currently a research scientist (indefinite-term) at RIKEN Center for Advanced Intelligence Project.
He received his PhD degree in computer science from Tokyo Institute of Technology.
Before joining RIKEN as a research scientist, he was a senior software engineer at Baidu and then an assistant professor at the University of Tokyo. He has served as an area chair 10 times, including AISTATS'19, ICML'19-20, NeurIPS'19-20 and ICLR'21. He received the RIKEN BAIHO Award (2018).
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width = 1\textwidth, height = 0.14\textheight]{Ivor.png}}]{Ivor W. Tsang} is a Professor of Artificial Intelligence at the University of Technology Sydney. He is also the Research Director of the Australian Artificial Intelligence Institute. Prof. Tsang serves as a Senior Area Chair for Neural Information Processing Systems, an Area Chair for the International Conference on Machine Learning, and on the Editorial Boards of the Journal of Machine Learning Research, Machine Learning, the Journal of Artificial Intelligence Research and the IEEE Transactions on Pattern Analysis and Machine Intelligence. He received the Australian Research Council Future Fellowship, and was named an AI 2000 AAAI/IJCAI Most Influential Scholar in Australia.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width = 1\textwidth]{James.jpg}}]{James T. Kwok} (F'17) received the PhD degree in computer science from the Hong Kong University of Science and Technology, in 1996. He was
with the Department of Computer Science, Hong Kong Baptist University, Hong Kong, as an assistant professor.
He is currently a professor with the Department of Computer Science and Engineering, Hong Kong University of Science and Technology. He has been a program co-chair for a number of international conferences, and served as an associate editor for the IEEE Transactions on Neural Networks and Learning Systems and the Artificial Intelligence Journal. He has also served as (Senior) Area Chair of NeurIPS, ICML, ICLR, IJCAI, AAAI and ECML. He is a fellow of the IEEE.
\end{IEEEbiography}
\vspace{-20em}
\begin{IEEEbiography}[{\includegraphics[width = 1\textwidth]{Sugiyama.jpg}}]{Masashi Sugiyama}
is Director of RIKEN Center for Advanced Intelligence Project and Professor at the University of Tokyo. He received the PhD degree in computer science from Tokyo Institute of Technology. He served as Program Co-chair for the Neural Information Processing Systems Conference, the International Conference on Artificial Intelligence and Statistics, and the Asian Conference on Machine Learning. He serves as an Associate Editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence, and an Editorial Board Member for the Machine Learning Journal and Frontiers of Computer Science.
He received the Japan Academy Medal in 2017.
\end{IEEEbiography}
\clearpage
Before delving into label-noise representation learning, we briefly overview some milestone works in label-noise statistical learning. Starting from 1988, Angluin et al. proved that a learning algorithm can handle incorrect training examples robustly when the noise rate is less than one half under the random noise model~. Bylander further demonstrated that linear threshold functions are polynomially learnable in the presence of classification noise~. Lawrence and Sch\"olkopf constructed a kernel Fisher discriminant to formulate the label-noise problem as a probabilistic model~, which is solved by the Expectation-Maximization algorithm. Although the above works tackled noisy labels both theoretically and empirically, Bartlett et al. showed that most loss functions are not completely robust to label noise~.
This means that classifiers based on label-noise robust algorithms are still affected by label noise.
During this period, many works emerged and contributed to this area. For example, Crammer et al. proposed an online Passive-Aggressive perceptron algorithm to cope with label noise~. Dredze et al. proposed confidence weighted learning to weigh trusted labels more~. Freund proposed a boosting algorithm to combat random label noise~. To handle label noise theoretically, Cesa-Bianchi et al. proposed an online learning algorithm leveraging unbiased estimates of the gradient of the loss~. In 2013, Natarajan et al. formally formulated an unbiased risk estimator for binary classification with noisy labels~. This work is very important to the area, since it is the first to provide guarantees for risk minimization under random label noise. Moreover, it provides an easy way to suitably modify any given surrogate loss function for handling label noise.
Meanwhile, Scott et al. studied the classification problem under the class-conditional noise model, and proposed a way to handle asymmetric label noise~. In contrast, van Rooyen et al. proposed the unhinged loss to tackle symmetric label noise~. Liu and Tao proposed the method of anchor points to estimate the noise rate, and further leveraged importance reweighting to design surrogate loss functions for class-conditional label noise~. Instead of designing ad-hoc losses, Patrini et al. introduced linear-odd losses, which can be factorized into an even and an odd loss function~. More importantly, they estimated the mean operator from noisy data, and plugged this operator into linear-odd losses for empirical risk minimization, which is resistant to asymmetric label noise.
Note that the field moved from label-noise statistical learning to label-noise representation learning after 2015. There are two reasons behind this phenomenon.
First, label-noise statistical learning mainly focuses on designing theoretically-robust methods for small-scale noisy data. However, such methods do not empirically work well on large-scale noisy data in our daily life, such as Clothing1M~, which emerged in 2015. Second, label-noise statistical learning mainly applies to shallow and convex models, such as support vector machines. However, deep and non-convex models, such as convolutional and recurrent neural networks, have become mainstream due to their better empirical performance, not only in vision, but also in language, speech and video tasks. Therefore, it is urgent to design label-noise representation learning methods for robust training of deep models with noisy labels.
There are three seminal works in label-noise representation learning. For example, Sukhbaatar et al. introduced an extra but constrained linear ``noise'' layer on top of the softmax layer, which adapts the network outputs to model the noisy label distribution~. Reed et al. augmented the prediction objective with a notion of consistency via soft and hard bootstrapping~, where the soft version is equivalent to softmax regression with minimum entropy regularization and the hard version modifies the regression targets using the MAP estimate. Intuitively, this bootstrapping procedure allows the learner to disagree with an inconsistent training label, and to re-label the training data to improve its label quality.
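For concreteness, the soft and hard bootstrapping objectives of Reed et al. can be sketched numerically for a single example as below; this is our own minimal illustration (function names and the choices of $\beta$ are ours), not code from the original paper.

```python
import numpy as np

# Sketch of the soft bootstrapping objective of Reed et al.: the
# regression target is a convex combination of the (possibly noisy)
# observed label t and the model's current prediction q,
#   L_soft = -sum_k [beta * t_k + (1 - beta) * q_k] * log q_k,
# so beta = 1 recovers standard cross entropy on the given labels.

def soft_bootstrap_loss(q, t, beta=0.95, eps=1e-12):
    """q: predicted class probabilities; t: one-hot (noisy) label."""
    target = beta * t + (1.0 - beta) * q
    return -np.sum(target * np.log(q + eps))

def hard_bootstrap_loss(q, t, beta=0.8, eps=1e-12):
    """Hard variant: replace q by the one-hot MAP prediction z."""
    z = np.eye(len(q))[np.argmax(q)]
    target = beta * t + (1.0 - beta) * z
    return -np.sum(target * np.log(q + eps))

q = np.array([0.7, 0.2, 0.1])  # confident prediction for class 0
t = np.array([0.0, 1.0, 0.0])  # observed (possibly wrong) label: class 1
```

With $\beta < 1$, the learner partially "agrees with itself": the loss on a suspected wrong label is lower than under plain cross entropy, which is exactly the disagreement mechanism described above.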
Azadi et al. proposed an auxiliary image regularization technique~. The key idea is to exploit the mutual context information among training data, and encourage the model to select reliable labels.
Following these seminal works, Goldberger et al. introduced a nonlinear ``noise'' adaptation layer on top of the softmax layer~, which adapts to model the noisy label distribution. Patrini et al. proposed forward and backward loss correction approaches simultaneously~. Based on the corrected loss, they explored a robust two-stage training algorithm. Interestingly, both Wang et al. and Ren et al. leveraged the same philosophy, namely data reweighting, to learn with label noise; however, they tackled the problem from different perspectives. Specifically, Wang et al. took a Bayesian view and proposed robust probabilistic modeling~, where the posterior of the reweighted model identifies uncorrupted data but ignores corrupted data. Ren et al. took a meta-learning view~, assigning weights to training samples based on their gradient directions. Namely, their method performs a meta gradient descent step on the current mini-batch example weights (initialized from zero) to minimize the loss on a clean, unbiased validation set.
Besides the above works, many important works appeared in 2018, ranging over diverse directions. At a high level, there are several major directions, such as estimating the transition matrix, regularization, designing losses and small-loss tricks. Among them, small-loss tricks are inspired by the memorization effects of deep neural networks, where deep models fit easy (clean) patterns first but over-fit hard (noisy) patterns eventually. Namely, small-loss tricks regard small-loss samples as relatively ``clean'' samples, and back-propagate such samples to update the model parameters. For example, Jiang et al. were the first to leverage small-loss tricks to handle label noise~.
However, they train only a single network iteratively, which is similar to the self-training approach. Such an approach suffers from the accumulated error caused by sample-selection bias. To address this issue, Han et al. train two deep neural networks simultaneously, where each network back-propagates the data selected by its peer network and updates itself~.
In the context of representation learning, estimating the transition matrix, regularization and designing losses remain active directions for handling label noise. For instance, given that a small set of trusted examples is available, Hendrycks et al. proposed gold loss correction. Namely, they leveraged trusted examples to estimate the (gold) transition matrix perfectly~. Therefore, on noisy examples, they train deep models via forward correction (by the gold matrix); on trusted examples, they train deep models normally. Han et al. proposed a ``human-in-the-loop'' idea to easily estimate the transition matrix~. Specifically, they proposed a human-assisted approach called ``Masking'' that conveys human cognition of invalid class transitions and naturally speculates the structure of the noise transition matrix. Then, they regarded the matrix structure as prior knowledge, which is further incorporated into deep probabilistic modeling.
Moreover, Zhang et al. introduced an implicit regularization called mixup~, which constructs virtual training data by linear interpolations of features and labels in the training data. Mixup encourages the model to behave linearly in-between training examples, which reduces the amount of undesirable oscillations when predicting outside the training examples. Zhang et al. generalized both the categorical cross entropy loss and the mean absolute error loss by the negative Box-Cox transformation~. Their proposed $\mathcal{L}_q$ loss not only has theoretical justification, but also works for both closed-set and open-set noisy labels.
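As a hedged illustration of the negative Box-Cox idea (our own minimal sketch; the function name is ours, not code from the paper), the $\mathcal{L}_q$ loss on the predicted probability of the labeled class interpolates between cross entropy and mean absolute error:

```python
import numpy as np

# Sketch of the L_q ("generalized cross entropy") loss:
#   L_q(f(x), y) = (1 - f_y(x)^q) / q,  with q in (0, 1],
# where f_y(x) is the predicted probability of the labeled class y.
# As q -> 0, L_q recovers categorical cross entropy -log f_y(x);
# at q = 1, it reduces to 1 - f_y(x), a (scaled) mean absolute error
# that is more robust to label noise but harder to optimize.

def lq_loss(p_true, q=0.7):
    """p_true: predicted probability assigned to the (possibly noisy) label."""
    return (1.0 - p_true ** q) / q
```

Intermediate values of $q$ trade the noise-robustness of the MAE-like regime against the optimization-friendliness of the CE-like regime.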
Motivated by a dimensionality perspective, Ma et al. developed a dimensionality-driven learning strategy, which can effectively learn robust low-dimensional subspaces that capture the true data distribution.
Starting from 2019, label-noise representation learning has matured in top conference venues. Arazo et al. modeled clean and noisy samples as a two-component (clean-noisy) beta mixture model on the loss values~, where the posterior probabilities are then used to implement a dynamically weighted bootstrapping loss. To boost the performance of Co-teaching, Chen et al. introduced the Iterative Noisy Cross-Validation (INCV) method to select a subset of the most confident samples (with correct labels)~, while Yu et al. employed the ``Update by Disagreement'' strategy to keep the two networks diverged~. Hendrycks et al. empirically demonstrated that pre-training (i.e., the ``pre-train then tune'' paradigm) can improve model robustness against label corruption~, especially for large-scale noisy datasets.
Under the criteria of balanced error rate (BER) minimization and area under the curve (AUC) maximization, Charoenphakdee et al. found that symmetric losses have many merits in combating noisy labels, even without knowing the noise information. Based on this observation, they proposed the Barrier Hinge Loss~. In contrast to selecting samples via small-loss tricks, Thulasidasan et al.
introduced abstention-based training, which allows deep abstaining networks to abstain on confusing samples while learning on non-confusing samples~. Following the re-weighting strategy, Shu et al. parameterized the weighting function adaptively as a one-layer multilayer perceptron called Meta-Weight-Net~, which avoids manually pre-specifying the weighting function.
Entering 2020, Menon et al. mitigated the effects of label noise through an optimization lens, namely using composite loss-based gradient clipping, which naturally introduces the partially Huberised loss for training deep models~. Nguyen et al. proposed a self-ensemble label filtering method to progressively filter out wrong labels during training~. Li et al. modeled the per-sample loss distribution with a mixture model to dynamically divide the training data into a labeled set with clean samples and an unlabeled set with noisy samples~. Lyu et al. proposed a provable curriculum loss, which can adaptively select samples for robust stagewise training~. Han et al. proposed a versatile approach called scaled stochastic integrated gradient underweighted ascent (SIGUA)~. SIGUA uses gradient descent on good data, while using scaled stochastic gradient ascent on bad data rather than dropping those data. Five years after the birth of Clothing1M, Jiang et al.
proposed a new but realistic type of noisy dataset called ``web-label noise'' (or red noise)~, which enables us to conduct controlled experiments systematically in more realistic scenarios.
Leveraging the noise transition matrix is a typical method for solving the LNRL problem. First, we can insert an adaptation layer into the original network, so that this layer mimics the function of the noise transition matrix. Second, we may keep the original network, but correct the cross-entropy loss via the estimated transition matrix. Lastly, since accurate matrix estimation leads to better classification accuracy, we can use prior knowledge to ease the estimation burden.
Note that there are other related works from the data perspective.
For example,\nstructured noise modeling \ndemonstrated that the noise in human-centric annotations exhibits structure, which can be modeled by decoupling the human bias from the correct visually grounded label~;\nnoisy fine-grained recognition showed the potential to train effective models of fine-grained recognition using noisy data from the web only~;\ndistillation with side information built a unified distillation framework to use ``side'' information, including a small clean dataset and label relations in a knowledge graph, to combat noisy labels~;\nrank pruning addressed the fundamental problem of estimating the noise rates~;\nnegative learning trained deep networks using complementary labels, which decrease the risk of providing incorrect information~; \ncombinatorial inference reduced the noise level by simply constructing meta-classes and improved the accuracy via combinatorial inferences over multiple constituent classifiers~; robust generative adversarial networks (GANs) incorporated a noise transition model, which can learn a clean label conditional generative distribution even when training labels are noisy~;\nnoise-tolerant fairness enabled learning of fair classifiers given noisy sensitive features using the mean-difference score~; and latent class-conditional noise modeled the noise transition in a Bayesian form, namely projecting the noise transition in a Dirichlet-distributed space~.", "id": "69589617-d16d-41a6-a4de-6a9c30467048", "level": "subsection", "origin_cites_number": 9, "parent_id": "e3c66c9a-8067-4068-981d-ba74f39dbf1c", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Appendix 2: Discussion for Three Perspectives" ], [ "subsection", "Data Perspective" ] ], "subsections": [], "title": "Data Perspective" }, { "cite_extract_rate": 0.7142857142857141, "cites": [ 8743, 4166, 7775, 4151, 4131, 4477, 8630, 4184, 139, 4134 ], "content": "Modifying the objective function is 
another mainstream method to solve the LNRL problem. First, we can augment the objective via either explicit regularizer, e.g., \nMinimum Entropy Regularization~, or \nimplicit regularizer, e.g., \nVirtual Adversarial Training~. Second, instead of treating all sub-objective functions equally, we can leverage the reweighting strategy to assign different weights to sub-objective functions. The more weights we assign, the more importance these sub-objective functions have. We can realize the reweighting strategy via different ways, e.g., importance reweighting, a Bayesian method, a mixture model and neural networks. Lastly, we can modify the objective function via redesigning the loss function, e.g., $\\ell_q$, barrier hinge loss, partial Huberized loss and curriculum loss. Moreover, we can conduct the label ensemble, e.g., the temporal ensembling and self-ensemble filtering.\nNote that there are other related works from the objective perspective. For instance, online crowdsourcing greatly reduces the number of redundant annotations, when crowdsourcing annotations such as bounding boxes, parts, and class labels~; an undirected graphical model represents the relationship between noisy and clean labels, where the inference over latent clean labels is tractable and regularized using auxiliary information~; the active-bias method trains robust deep networks by emphasizing high variance samples~; model bootstrapped EM jointly models labels and worker quality from noisy crowdsourced data~; the joint optimization framework corrects labels during training by alternating update of network parameters and labels~; the iterative learning framework trains deep networks with open-set noisy labels~; deep bilevel learning is based on the principles of cross-validation, where a validation set is used to limit the model over-fitting~; symmetric cross entropy (CE) boosts CE symmetrically with a noise robust counterpart, Reverse Cross Entropy (RCE)~; the ubiquitous reweighting network 
learns a robust model from large-scale noisy web data, by considering five key challenges (i.e., imbalanced class sizes, high intra-class diversity and inter-class similarity, imprecise instances, insufficient representative instances, and ambiguous class labels) in image classification~; the information-theoretic loss is a generalized version of mutual information, which is provably robust to instance-independent label noise~; the peer loss enables learning from noisy labels without requiring a priori specification of the noise rates~; and the normalized loss theoretically demonstrates that a simple normalization can make any loss function robust to noisy labels~.", "id": "3f9b8f1b-7b6c-4e13-a71a-9ae8d923ab7d", "level": "subsection", "origin_cites_number": 14, "parent_id": "e3c66c9a-8067-4068-981d-ba74f39dbf1c", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Appendix 2: Discussion for Three Perspectives" ], [ "subsection", "Objective Perspective" ] ], "subsections": [], "title": "Objective Perspective" }, { "cite_extract_rate": 0.833333333333333, "cites": [ 4157, 4178, 4238, 4144, 4186, 309, 4137, 8742, 4188, 4159 ], "content": "Leveraging memorization effects is an emerging mainstream method to solve the LNRL problem. First, we can combine self-training with memorization effects, which brings us self-paced MentorNet and learning to reweight. Second, we can combine Co-training with memorization effects, which introduces Co-teaching, Co-teaching+, INCV Co-teaching and S2E. Lastly, we can combine Co-training with the SOTA semi-supervised learning MixMatch, which provides DivideMix. Meanwhile, besides memorization, pre-training and deep k-NN are new branches using overparameterized models.\nNote that there are other related works from the optimization policy perspective, namely changing training dynamics. 
For example, multi-task networks jointly learn to clean noisy annotations and accurately classify images~; the unified framework of random grouping and attention effectively reduces the negative impact of noisy web image annotations~; decoupling trains two deep networks simultaneously, and only updates parameters on examples, where there is a disagreement between the two classifiers~;\nCleanNet was designed to make label noise\ndetection and learning from noisy data with human supervision scalable through transfer learning~; CurriculumNet designs a training curriculum by measuring and ranking the complexity of data using its distribution in a feature space~; Co-mining combines Co-teaching with the Arcface loss~ for face recognition tasks~; O2U-Net only requires adjusting the hyper-parameters of deep networks to make their status transferred from overfitting to underfitting (O2U) cyclically~; deep self-learning is an iterative learning framework\nto relabel noisy samples and train deep networks on the real\nnoisy dataset, without using extra clean supervision~; the label-noise information strategy is a training method that controls memorization by regularizing label noise information in weights~; different from Co-teaching+, Co-regularization aims to reduce the diversity of two networks during training~; and the data coefficient method wisely takes advantage of a small trusted dataset to optimize exemplar weights and labels of mislabeled data, which distills effective supervision for robust training~.", "id": "d3cd8bd3-94a7-4a9d-a6ec-5d68717e41d2", "level": "subsection", "origin_cites_number": 12, "parent_id": "e3c66c9a-8067-4068-981d-ba74f39dbf1c", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Appendix 2: Discussion for Three Perspectives" ], [ "subsection", "Optimization Perspective" ] ], "subsections": [], "title": "Optimization Perspective" }, { "cite_extract_rate": 0, "cites": [], 
"content": "We conduct experiments on \\textit{MNIST} for both Figure~1 and Figure~5. Here, we provide experimental details.", "id": "5aa2deed-36b5-4ac8-be9c-4fe127b1b643", "level": "section", "origin_cites_number": 0, "parent_id": "8584763c-0388-415b-b9cf-b52c55e5eeb8", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Appendix 3: Experimental Details" ] ], "subsections": [ "7536cbcd-658f-4891-8cc0-58dab7f8ffac", "95e23931-ea9c-4bbf-a849-94b681781305", "b05514d6-a1be-4500-aaca-df07447780bf" ], "title": "Appendix 3: Experimental Details" }, { "cite_extract_rate": 1, "cites": [ 4115 ], "content": "We follow memorization effects~ to set up network structure as follows: Input $\\rightarrow$ Conv(200,5,5) $\\rightarrow$ BN $\\rightarrow$ ReLU $\\rightarrow$ MaxPool(3,3) $\\rightarrow$ Conv(200,5,5) $\\rightarrow$ BN $\\rightarrow$ ReLU $\\rightarrow$ MaxPool(3,3) $\\rightarrow$ FC(200,384) $\\rightarrow$ BN $\\rightarrow$ ReLU $\\rightarrow$ FC(384,192) $\\rightarrow$ BN $\\rightarrow$ ReLU $\\rightarrow$ FC(192,10) $\\rightarrow$ Softmax.", "id": "7536cbcd-658f-4891-8cc0-58dab7f8ffac", "level": "subsection", "origin_cites_number": 1, "parent_id": "5aa2deed-36b5-4ac8-be9c-4fe127b1b643", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Appendix 3: Experimental Details" ], [ "subsection", "Network Structure" ] ], "subsections": [], "title": "Network Structure" }, { "cite_extract_rate": 0, "cites": [], "content": "In Figure~1, we choose \\textit{MNIST} with $35\\%$ of symmetric noise (Figure~3) as noisy \\textit{MNIST}. We choose forward correction (Theorem~2) to correct original $\\ell$, which forms corrected $\\tilde{\\ell}$. 
Thus, we compare three approaches: 1) $\\ell$ on clean \\textit{MNIST}; 2) $\\tilde{\\ell}$ on noisy \\textit{MNIST}; and 3) $\\ell$ on noisy \\textit{MNIST}.", "id": "95e23931-ea9c-4bbf-a849-94b681781305", "level": "subsection", "origin_cites_number": 0, "parent_id": "5aa2deed-36b5-4ac8-be9c-4fe127b1b643", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Appendix 3: Experimental Details" ], [ "subsection", "Details of Figure 1" ] ], "subsections": [], "title": "Details of Figure 1" }, { "cite_extract_rate": 0, "cites": [], "content": "In Figure~5, we choose \\textit{MNIST} with $0\\%-80\\%$ of symmetric noise (Figure~3) as noisy \\textit{MNIST} with different noise rates. We choose original $\\ell$ to verify memorization effects in deep learning. The training curve will increase gradually. However, the validation curve will increase at the first few epochs, but drop gradually until the convergence.\n\\end{document}", "id": "b05514d6-a1be-4500-aaca-df07447780bf", "level": "subsection", "origin_cites_number": 0, "parent_id": "5aa2deed-36b5-4ac8-be9c-4fe127b1b643", "prefix_titles": [ [ "title", "A Survey of Label-noise Representation Learning: Past, Present and Future" ], [ "section", "Appendix 3: Experimental Details" ], [ "subsection", "Details of Figure 5" ] ], "subsections": [], "title": "Details of Figure 5" } ]
12
[ 3340, 4451, 4452, 4453, 4454, 3454, 8734, 3453, 4139, 4130, 7162, 4183, 4145, 4455, 7773, 4128, 7191, 4136, 4456, 4135, 4253, 4182, 3342, 7133, 7781, 4457, 7774, 4458, 895, 892, 4115, 4459, 8794, 7778, 4461, 4462, 4176, 4464, 4460, 4463, 139, 4120, 4465, 4152, 4119, 4466, 3630, 301, 4148, 8630, 199, 4140, 4126, 4137, 7707, 4467, 4469, 4470, 4468, 7823, 4471, 7824, 4473, 4475, 8795, 7007, 4474, 7335, 4472, 7825, 8696, 8796, 7783, 4116, 4174, 3345, 4177, 4476, 8743, 4166, 7775, 4151, 4131, 4477, 4184, 4134, 4157, 4178, 4238, 4144, 4186, 309, 8742, 4188, 4159 ]
0.826473
[ "Laura Swiler", "Mamikon Gulian", "Ari Frankel", "Cosmin Safta", "John Jakeman" ]
A Survey of Constrained Gaussian Process Regression: \\ Approaches and Implementation Challenges
2020
2020-06-16T17:03:36Z
cs.LG
Gaussian process regression is a popular Bayesian framework for surrogate modeling of expensive data sources. As part of a broader effort in scientific machine learning, many recent works have incorporated physical constraints or other a priori information within Gaussian process regression to supplement limited data and regularize the behavior of the model. We provide an overview and survey of several classes of Gaussian process constraints, including positivity or bound constraints, monotonicity and convexity constraints, differential equation constraints provided by linear PDEs, and boundary condition constraints. We compare the strategies behind each approach as well as the differences in implementation, concluding with a discussion of the computational challenges introduced by constraints.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "86a0553c-8edc-49a7-8382-330c5ec7b9b7", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ] ], "subsections": [ "edde07f6-ba19-4e5f-9251-7cbe2f890158", "afe884ae-3b96-4217-9d4f-3b051ad6d6c6", "348e936a-b8fa-4f3d-be3e-7cbf494cf9f7", "6b851013-5efe-40f6-a2a4-48c879d41f38", "47d03984-771a-4f08-895f-be7081965518", "fbf12205-4c0f-46ca-a510-8409ad72bca8", "65d986b6-b11e-421e-971a-bb294bcbf1db", "d8114b96-b4e0-4478-b43f-0ac0ea15b28a", "b29833f2-8093-4df3-938f-72d7f97ec09e" ], "title": "root" }, { "cite_extract_rate": 0.25, "cites": [ 8602, 7624, 2924, 2925, 8601, 7626, 8603, 7625, 2926 ], "content": "\\label{sec:intro}\nThere has been a tremendous surge in the development and application of machine learning models in recent years due to their flexibility and capability to represent trends in complex systems .\nThe parameters of a machine learning model can often be calibrated, with sufficient data, to give high fidelity representations of the underlying process \n.\nIt is now feasible to construct deep learning models over datasets of tens of thousands to millions of data points with modern computational resources . In many scientific applications, however, there may not be large amount\\REVISION{s} of data available for training. Unlike data from internet or text searches, computational and physical experiments are typically extremely expensive. 
Moreover, even if ample data exists, the machine learning model may yield behaviors that are inconsistent with what is expected physically when queried in an extrapolatory regime.\nTo aid and improve the process of building machine learning models for scientific applications, it is desirable to have a framework that allows the incorporation of physical principles and other a priori information to supplement the limited data and regularize the behavior of the model. Such a framework is often referred to as ``physics-constrained'' machine learning within the scientific computing community .\n provide a taxonomy for theory-guided data science, \nwith the goal of incorporating scientific\nconsistency in the learning of generalizable models. The information used to constrain models can be simple, such as known range or positivity constraints, shape constraints, or monotonicity constraints that the machine learning model must satisfy. The constraints can also be more complex; for example, they can encode knowledge of the underlying data-generating process in the form of a partial differential equation. \nSeveral recent conferences highlight the interest in ``physics-informed'' machine learning~. \nMuch of the existing research in physics-informed machine learning has focused on incorporating constraints in neural networks , often through the use of objective/loss functions which penalize constraint violation . Other works have focused on incorporating prior knowledge using Bayesian inference that expresses the data-generating process as dependent on a set of parameters, the initial distribution of which is determined by the available information, e.g., functional constraints . 
Unlike deterministic learning approaches, the predictions made using approximations trained with Bayesian inference are accompanied with probabilistic estimates of uncertainty/error.\nWithin the Bayesian regression framework, Gaussian processes (GPs) are popular for constructing ``surrogates'' or ``emulators'' of data sources that are very expensive to query. The use of GPs in a \nregression framework to predict a set of function values is called Gaussian Process Regression (GPR). \nAn accurate GPR can often be constructed using only a relatively small number of training data (e.g. tens to hundreds), which consists of pairs of input \nparameters and corresponding response values. Once constructed, the GPR can be thought of as a machine-learned metamodel and used to provide \nfast, cheap function evaluations for the purposes of prediction, sensitivity \nanalysis, uncertainty quantification, calibration, and optimization. \nGP regression models are constructed with data obtained from computational simulation~ or field data; in geostatistics, the process of applying Gaussian processes to field data has been used for decades and is frequently referred to as \\emph{kriging} . \nIn this survey we focus on the use of constrained GPRs\nthat honor or incorporate a wide variety of physical constraints~.\nSpecifically,\nwe focus on the following topics, after a short review of Gaussian process regression in Section \\ref{gpr}. \nSection \\ref{overview_of_constraints} presents an overview and a classification of constraints according to how the constraint is enforced during the construction of a GP. 
\nSection \\ref{sec:bound_constraints} discusses bound constraints, in which the GP prediction may be required to be positive, for example, or the prediction may be required to fall between upper and lower bounds.\nSection \\ref{sec:monotonicity_constraints} discusses monotonicity and related convexity constraints.\nConstraints may also be more \ntightly integrated with the underlying physics: the GP can be constrained to satisfy \nlinear operator constraints which represent physical laws expressed as partial differential equations (PDE). \nThis is discussed in Section~\\ref{sec:pde_constraints}. \nSection \\ref{sec:boundary_constraints} discusses intrinsic boundary condition constraints.\nWe review several different approaches for enforcing each of these constraint types. \nFinally, Section \\ref{sec:computation_considerations} is a compendium of computational details for implementing the constraints of Sections \\ref{sec:bound_constraints} -- \\ref{sec:boundary_constraints}, together with a summary of computational strategies for improving GPR and brief commentary about the challenges of applying these strategies for the constrained GPs considered here.\nThe taxonomy we present is formulated to enable practitioners to easily query this overview for information on the specific constraint(s) they may be interested in. \nFor approaches that enforce different constraints but have significant overlap in methodology, references are made between sections to the prerequisite subsection where the technical basis of an approach is first discussed in detail. This is done, for example, when discussing spline-based approaches which are used for both bound constraints in Section \\ref{sec:bound_constraints} and monotonicity constraints in Section \\ref{sec:monotonicity_constraints}.\nNot all physical constraints can be neatly divided into the categories that we focus on in Sections \\ref{sec:bound_constraints} -- \\ref{sec:boundary_constraints}. 
For example, with a view toward computer vision, considered GPR for pose estimation under rigid (constant angle and length) and non-rigid (constant length) constraints between points. They proved that linear equality constraints of the form $A\\mathbf{y} = \\mathbf{b}$, if satisfied by all the data vectors $\\mathbf{y}$, are satisfied by the posterior mean predictor of a GP. Then, at the cost of squaring the input dimension, they translated quadratic length constraints into such linear constraints for pairwise products of the input variables. \nIn another example, applied GPR to predict the behavior of hyperelastic materials, in which the stress-stretch constitutive relation naturally exhibits rotational invariance. The rotational invariance was enforced by deriving a finite expansion of the Cauchy stress tensor in powers of the Finger tensor that satisfies the rotational invariance by virtue of its structure, and GPR was performed for the coefficients of the expansion. \nWe mention these examples to illustrate that physical constraints are varied, and in some cases the method to enforce them can depend highly on the specific nature of the constraint.\nEven within the selected categories represented by Sections \\ref{sec:bound_constraints} -- \\ref{sec:boundary_constraints}, the literature on constrained Gaussian processes is extensive and expanding rapidly. Consequently, we cannot provide a complete survey of every instance of constrained GPR. \nRather, we strive to discuss main areas of research within the field. The goal is to aid readers in selecting methods appropriate for their applications and enable further exploration of the literature. We present selected implementation details and numerical examples, giving references to the original works for further details. Many of the authors of these works have developed codebases and released them publicly. 
Finally, we remark that we have adopted consistent notation (established in Section \\ref{gpr}) for GPR that does not always follow the notation of the original works exactly.", "id": "edde07f6-ba19-4e5f-9251-7cbe2f890158", "level": "section", "origin_cites_number": 36, "parent_id": "86a0553c-8edc-49a7-8382-330c5ec7b9b7", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Introduction" ] ], "subsections": [], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{gpr}\n\\label{sec:gpr}\nThis section provides an overview of unconstrained Gaussian process regression. \nAs mentioned previously, Gaussian process models, or simply Gaussian processes, are popular because they can be used in a regression framework to approximate \ncomplicated nonlinear functions with probabilistic estimates of the uncertainty. Seminal works discussing the use of GPs as surrogate models for computational science and engineering applications include the papers of and and the book by .\nA Gaussian process can be viewed as a distribution over a set of functions.\nA random draw or sample $f$ from a GP is a realization from the set of admissible functions. \nSpecifically, a Gaussian process is a collection of random variables\n$\\{f(\\mathbf{x}) \\ | \\ \\mathbf{x} \\in \\REVISION{\\Omega \\subset \\mathbb{R}^d} \\}$ for which, given any finite set of $N$ inputs $X = \\{\\mathbf{x}_1, \\mathbf{x}_2, ..., \\mathbf{x}_N\\}$, $\\mathbf{x}_i \\in \\REVISION{\\Omega}$, the collection $f(\\mathbf{x}_1),f(\\mathbf{x}_2),...,f(\\mathbf{x}_N)$ \nhas a joint multivariate Gaussian distribution. \nA GP is completely defined by its mean and covariance functions which generate the mean vectors and covariance matrices of these finite-dimensional multivariate normals. 
Assumptions such as smoothness of $f$, stationarity, and sparsity are used to construct the mean and covariance of the GP prior and then Bayes' rule is used to constrain the prior on observational/simulation data.\nThe prediction \n$\\mathbf{f}=[f(\\bold{x}_1),f(\\mathbf{x}_2),...f(\\mathbf{x}_N)]^\\top$ of a Gaussian process with mean function \n$m(\\mathbf{x})$\nand a covariance function $k(\\mathbf{x},\\mathbf{x}')$ is a random variable such that\n\\begin{equation}\np(\\bold{f}|X) = \\mathcal{N}(\\mathbf{f}; m(X), k(X,X)),\n\\label{eq:multivar_norm}\n\\end{equation}\nwhere $m(X)$ denotes the vector $[m(\\mathbf{x}_1), ..., m(\\mathbf{x}_N)]^\\top$ and $k(X,X)$ denotes the matrix with entries $[k(\\mathbf{x}_i,\\mathbf{x}_j)]_{1 \\le i,j \\le N}$. \nThe multivariate normal probability \\REVISION{density function}\n$\\mathcal{N}(\\mathbf{f}; \\mathbf{m}, K)$ with mean vector $\\mathbf{m}$ and covariance matrix $K$ has the form\n\\begin{equation}\\label{standard_mvn}\n\\mathcal{N}(\\mathbf{f}; \\mathbf{m}, K) = \\frac{1}{(2\\pi)^{N/2}|{K^{1/2}|}}\\exp\\left({-\\frac{1}{2}(\\mathbf{f}-\\mathbf{m})^\\top K^{-1}(\\mathbf{f}-\\mathbf{m})}\\right).\n\\end{equation}\nThe covariance kernel function $k$ of a Gaussian process must be symmetric and positive semidefinite. Denoting the individual $d$ components of the vector $\\mathbf{x}_{i}$ as ${x_i}^\\ell$, where $\\ell=1,...,d$, the squared exponential kernel\n\\begin{equation} \nk(\\bold{x}_{i},\\mathbf{x}_{j}) = \\eta^2\\exp\\left[-\\frac{1}{2}\\sum_{\\ell=1}^{d}\n\\left(\\frac{{x_i}^{\\ell}-{x_j}^{\\ell}}{\\rho_\\ell}\\right)^2\\right], \\quad \\REVISION{\\eta, \\rho_1, ..., \\rho_d \\in \\mathbb{R},}\n\\label{eq:gpkernel}\n\\end{equation}\nis popular, but many other covariance kernels are available. 
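As a concrete numerical illustration (our own sketch, not drawn from the surveyed works; it assumes NumPy, and the function name is ours), the squared exponential kernel \eqref{eq:gpkernel} can be evaluated for two sets of inputs in a few vectorized lines:

```python
import numpy as np

def sq_exp_kernel(X1, X2, eta=1.0, rho=1.0):
    """Squared-exponential kernel of Eq. (eq:gpkernel).

    X1: (N, d) and X2: (M, d) arrays of inputs; eta is the signal scale
    and rho the length scale (a scalar, or a length-d vector for
    per-dimension length scales). Returns the N x M matrix k(X1, X2)."""
    X1 = np.atleast_2d(X1)
    X2 = np.atleast_2d(X2)
    # Per-dimension scaled differences, then sum of squares over d.
    d2 = (((X1[:, None, :] - X2[None, :, :]) / rho) ** 2).sum(axis=-1)
    return eta**2 * np.exp(-0.5 * d2)
```

Note that $k(\mathbf{x},\mathbf{x}) = \eta^2$ on the diagonal and that the resulting matrix is symmetric and positive semidefinite, as required of a covariance kernel.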
The choice of a covariance kernel can have profound impact on the GP predictions , and several approaches to constraining GPs that we survey rely on construction\nof a covariance kernel specific to the constraint.\nThe \\REVISION{density} \\eqref{eq:multivar_norm}, determined by covariance kernel $k$ and the mean $m$, is referred to as a \\emph{prior} for the GP. If the error or noise relating the actual observations $\\mathbf{y} = [y(\\bold{x}_1),y(\\mathbf{x}_2),...,y(\\mathbf{x}_N)]^\\top$ collected at the set of inputs $X=\\{\\mathbf{x}_i\\}_{i=1}^N$ to the GP prediction $\\mathbf{f}$ is assumed to be Gaussian, then the probability of observing data \n$\\mathbf{y}$ given the GP prior is given by\n\\begin{equation}\\label{eq:gauss_like}\np(\\bold{y}|X,\\bold{f}) = \\mathcal{N}(\\REVISION{\\bold{y}}; \\mathbf{f},\\sigma^2 I_N), \\quad \\REVISION{\\sigma \\in \\mathbb{R}}.\n\\end{equation}\nHere, $I_N$ denotes the $N \\times N$ identity matrix. The \\REVISION{density} $p(\\bold{y}|X,\\bold{f})$ is referred to the \\emph{likelihood} of the GP, and the Gaussian likelihood \\eqref{eq:gauss_like} is by far the most common. As discussed in Section~\\ref{sec:likelihood_formulations}, specific non-Gaussian likelihood functions can be used to enforce certain types of constraints.\nThe parameters in the covariance kernel function of a GP are referred to as \\emph{hyperparameters} of the GP. We denote them by $\\bm{\\theta}$.\nFor the squared exponential kernel \\eqref{eq:gpkernel}, the aggregate vector of hyperparameters is \n$\\bm{\\theta}=[\\eta, \\rho_1, ..., \\rho_d, \\sigma]$, where we have included the likelihood/noise parameter $\\sigma$ from \\eqref{eq:gauss_like} as a hyperparameter. In general, finding the best hyperparameters to fit the data is an important step of GPR known as \\emph{training}. 
From now on, we explicitly denote the dependence on $\\bm{\\theta}$ of the likelihood $p(\\bold{y}|X,\\bold{f})$ in \\eqref{eq:gauss_like} and the prior $p(\\bold{f}|X)$ in \\eqref{eq:multivar_norm}, writing these as $p(\\bold{y}|X,\\bold{f}, \\bm{\\theta})$ and $p(\\bold{f}|X, \\bm{\\theta})$, respectively. \nThe marginal likelihood is given by \n\\begin{equation}\\label{eq:marginal-likelihood}\np(\\bold{y}|X,\\bm{\\theta})=\\int{p(\\bold{y}|X,\\bold{f},\\bm{\\theta})p(\\bold{f}|X,\\bm{\\theta}) d\\bold{f}}\n\\end{equation} \nand the log-marginal-likelihood for a GP with a zero-mean prior ($m \\equiv 0$) can be written as\n\\begin{equation}\n\\log{p(\\bold{y}|X,\\bm{\\theta)}}=-\\frac{1}{2}\\bold{y^\\top}\\big(K(X,X)\n+\\sigma^2 I_N\\big)^{-1}\\bold{y}-\\frac{1}{2}\\log\\left|K(X,X)\n+\\sigma^2 I_N\\right|-\\frac{N}{2}\\log{2\\pi}\n\\label{eq:log_like}\n\\end{equation}\nFormula \\eqref{eq:log_like}, derived from \\eqref{eq:marginal-likelihood}, \\eqref{eq:multivar_norm} and \\eqref{standard_mvn}, is a function of the hyperparameters $\\bm{\\theta}$ present in the kernel $k$, which can be optimized to give the most likely values of the hyperparameters given data. This is known as maximum likelihood estimation (MLE) of the hyperparameters. 
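As an illustrative sketch (ours, not part of the survey; it assumes NumPy and uses illustrative names), the log-marginal-likelihood \eqref{eq:log_like} is usually evaluated via a Cholesky factorization of $K(X,X)+\sigma^2 I_N$, which supplies both the linear solve and the log-determinant without forming an explicit inverse:

```python
import numpy as np

def log_marginal_likelihood(K, y, sigma):
    """Zero-mean GP log-marginal-likelihood, Eq. (eq:log_like).

    K: N x N kernel matrix K(X, X); y: length-N observation vector;
    sigma: noise standard deviation. The Cholesky factor L of
    A = K + sigma^2 I gives A^{-1} y by two triangular solves and
    log|A| = 2 * sum(log(diag(L)))."""
    N = len(y)
    L = np.linalg.cholesky(K + sigma**2 * np.eye(N))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # (K + s^2 I)^{-1} y
    log_det = 2.0 * np.log(np.diag(L)).sum()             # log |K + s^2 I|
    return -0.5 * y @ alpha - 0.5 * log_det - 0.5 * N * np.log(2.0 * np.pi)
```

MLE training then amounts to maximizing this quantity over the hyperparameters $\bm{\theta}$ that enter $K$ and $\sigma$.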
\nOnce the hyperparameters of the GPR have been chosen, the \\emph{posterior} of the GP is given by \nBayes' rule,\n\\begin{equation}\n\\label{eq:bayes}\np(\\bold{f} | X, \\bold{y}, \\bm{\\theta})\n=\n\\frac{p(\\bold{f} | X,\\bm{\\theta}) p(\\bold{y} | X, \\bold{f}, \\bm{\\theta}) }\n{p(\\bold{y} | X, \\bm{\\theta})}.\n\\end{equation}\nGiven the prior $p(\\bold{f} | X,\\bm{\\theta})$ \\eqref{eq:multivar_norm} and the Gaussian likelihood $p(\\bold{y} | X, \\bold{f}, \\bm{\\theta})$ \\eqref{eq:gauss_like}, the prediction $f^*$ of a GPR at a new point $\\bold{x^*}$ can be calculated~ as \n\\REVISION{\n\\begin{equation}\n{p({f^*} | \\bold{y},X,\\bold{x}^*,\\bm{\\theta})} =\n\\mathcal{N} (\\widehat{m}(\\bold{x}^*), \\hat{v}(\\bold{x}^*)),\n\\label{eq:gpreg}\n\\end{equation}\nwhere\n\\begin{align}\n\\begin{split}\\label{eq:post_mean_var}\n\\widehat{m}(\\bold{x}^*) &= k(\\bold{x}^*,X)(K(X,X)+\\sigma^2 I_N)^{-1}\\bold{y},\\\\\n\\hat{v}(\\bold{x}^*) & = k(\\bold{x}^*,\\bold{x}^*)-k(\\bold{x}^*,X)(K(X,X)+\\sigma^2 I_N)^{-1}\\left[k(\\bold{x}^*,X)\\right]^\\top.\n\\end{split}\n\\end{align}\nNote that the mean $\\widehat{m}(\\bold{x}^*)$ of this Gaussian posterior is the mean estimate $\\mathbb{E}[f(\\mathbf{x}^*)]$ of the predicted function value $f^*$ at $\\bold{x^*}$, and the variance $\\hat{v}(\\bold{x}^*)$ is the estimated prediction variance of the same quantity.}\nWe now preface some of the computational issues of inference in GPR that will be important for the constrained case. Firstly, when the GP likelihood is Gaussian, the posterior \\eqref{eq:gpreg} is also Gaussian, thus it can be computed exactly and sampling from the posterior is simple. This is generally not the case when the likelihood is not Gaussian. The same issue arises if the \\REVISION{density} \\eqref{eq:gpreg} is directly replaced by a non-Gaussian \\REVISION{density} in the course of enforcing constraints -- by truncation, for example. 
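As a concrete reference point for these computational issues (our own sketch with illustrative names, assuming NumPy), the posterior mean and variance \eqref{eq:post_mean_var} at a single test point reduce to linear solves against $K(X,X)+\sigma^2 I_N$:

```python
import numpy as np

def gp_posterior(K_XX, k_sX, k_ss, y, sigma):
    """Posterior mean and variance at one test point, Eq. (eq:post_mean_var).

    K_XX: N x N training covariance K(X, X); k_sX: length-N vector
    k(x*, X); k_ss: scalar k(x*, x*); y: training observations;
    sigma: noise standard deviation."""
    A = K_XX + sigma**2 * np.eye(len(y))
    mean = k_sX @ np.linalg.solve(A, y)                 # \hat{m}(x*)
    var = k_ss - k_sX @ np.linalg.solve(A, k_sX)        # \hat{v}(x*)
    return mean, var
```

A quick sanity check of the formulas: with $\sigma = 0$ and $\mathbf{x}^*$ equal to a training input, the mean interpolates the training value and the variance collapses to zero.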
Next, inversion of $(K(X,X)+\\sigma^2 I_N)$, which scales as $N^3$, is an omnipresent issue for inference. This poor scaling to large data is compounded by the fact that increased data tends to rapidly increase the condition number of $K(X,X)$ (see Section \\ref{sec:subset_of_data}). Finally, optimizing the hyperparameters of the GP involves the nonconvex objective function~\\eqref{eq:log_like}; both this function and its derivatives are potentially costly and unstable to compute for the reasons just mentioned. These issues arise for conventional GPR, but throughout sections \\ref{sec:bound_constraints} -- \\ref{sec:boundary_constraints} we shall see that constraining the GP can make them more severe. Therefore, we review potential strategies for dealing with them in Section \\ref{sec:computation_considerations}.", "id": "afe884ae-3b96-4217-9d4f-3b051ad6d6c6", "level": "section", "origin_cites_number": 5, "parent_id": "86a0553c-8edc-49a7-8382-330c5ec7b9b7", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Gaussian Process Regression" ] ], "subsections": [], "title": "Gaussian Process Regression" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{overview_of_constraints}\nThere are many ways to constrain a Gaussian process model.\nThe difficulty with applying constraints to a GP is that a constraint typically calls for a condition to hold \\emph{globally} -- that is, for\n\\emph{all} points $x$ in a \\REVISION{continuous domain} -- for \\REVISION{\\emph{all}} realizations or predictions of the process. \\emph{A priori}, this amounts to an infinite set of point constraints for an infinite dimensional sample space of functions. This raises a numerical feasibility issue, which each method circumvents in some way. 
Some methods relax the global constraints to constraints at a finite set of ``virtual'' points; others transform the output of the GP to guarantee the predictions satisfy constraints, or construct a sample space of predictions in which every realization satisfies the constraints. \nThis distinction should be kept in mind when surveying constrained GPs. For example, the methods in Sections \\ref{sec:transform_output}, \\ref{sec:splines}, and \\ref{transformed_covariance} enforce constraints globally. The methods in Sections \\ref{sec:daveiga} and \\ref{pde_constraints} enforce the constraint at scattered auxiliary data points, be this a result of introducing virtual data points for constraints, incomplete knowledge, or spatial variability.\nStrategies for enforcing constraints are apparent from the review of GPR in Section \\ref{gpr}, which covers posterior prediction for $\\mathbf{f}$, the likelihood function for observations $\\mathbf{y}$, the kernel prior $K$, and the data involved in GPR. \nSome methods, such as the warping method of Section \\ref{sec:transform_output}, simply \napply a transformation to the output $\\bold{f}$ of GPR, so the transformed output satisfies the constraint. \nThis transformation is essentially independent of the other components of GPR.\nOne can instead introduce the constraints at the prediction of $\\bold{f}$, replacing the \\REVISION{density} \\eqref{eq:gpreg}, by augmenting the data with a discrete set of virtual points in the domain and predicting $\\bold{f}$ from the GP given the data \\emph{and} knowledge that the constraint holds at the virtual points. An example of this is in Section \\ref{sec:daveiga}. \nNext, the likelihood $p(\\bold{y}|X,\\bold{f})$ provides another opportunity to enforce constraints. 
One can replace the Gaussian likelihood \\eqref{eq:gauss_like} by a likelihood function such that constraints are satisfied by $\\bold{y}$ regardless of the output $\\bold{f}$.\n\\REVISION{Hyperparameter optimization provides yet another opportunity, in which maximization of the marginal-log-likelihood \\eqref{eq:log_like} is augmented with constraints on the posterior predictions of the GP, as in Section \\ref{sec:nonneg}.}\nA different strategy is to design a covariance kernel for the prior \\eqref{eq:multivar_norm} of the Gaussian process which enforces the constraint.\nSeveral of the methods discussed in this survey involve regression with an appropriate joint GP, defined by the constraint, which uses a ``four block'' covariance kernel incorporating the constraint in some of the blocks.\nThis is the strategy used for the linear PDE constraints in Section \\ref{pde_constraints}.\nSuch methods are based on derivations of linear transformations of GPs. \nThese types of kernels can be combined with other strategies for constraints, such as for the monotonicity constraints of Section \\ref{constrained_likelihood_with_derivative_information} which use a four block covariance kernel (for $\\mathbf{f}$ and $\\mathbf{f}'$) within a likelihood approach.\nConsidering Gaussian processes as distributions over functions, another strategy is to consider a function space defined by a certain representation such that a global constraint can be translated into a finite set of constraints, e.g. on the coefficients of a spline expansion in Sections \\ref{sec:splines} and \\ref{sec:splines_monotonic}. Or a representation can be sought such that every element of the sample space satisfies the constraint before the Gaussian process (the distribution) is even introduced. The latter approach is taken in Sections \\ref{transformed_covariance} and \\ref{sec:boundary_constraints}; in these cases, this strategy amounts to deriving a specific kernel function related to the representation. 
\nFinally, data provides an opportunity to constrain Gaussian processes implicitly. Some approaches involve proving that, if the data fed into a GP through the posterior formula \\eqref{eq:bayes} satisfies the constraint, then the GP predictions satisfy the constraint -- either exactly, as for linear and quadratic equality constraints of , or within a certain error, as in linear PDE constraints discussed in Section \\ref{sec:empirical}. These results consider properties of GPs that form the basis of such algorithms.\n\\REVISION{We note that some of the methods we cover may result in a posterior distribution which is no longer Gaussian, unlike the standard GPR posterior \\eqref{eq:gpreg}. Thus, such ``constrained Gaussian processes'' no longer meet the definition of a ``Gaussian process'', rendering the former term a misnomer in this strict sense. Nevertheless, we refer to any method that uses the basic steps of GPR described in Section \\ref{sec:gpr} as a starting point for a constrained regression algorithm, as providing a ``constrained Gaussian process''.}", "id": "348e936a-b8fa-4f3d-be3e-7cbf494cf9f7", "level": "section", "origin_cites_number": 1, "parent_id": "86a0553c-8edc-49a7-8382-330c5ec7b9b7", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Strategies for Constraints" ] ], "subsections": [], "title": "Strategies for Constraints" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:bound_constraints}\nBound constraints of the form $a \\le f(\\mathbf{x}) \\le b$ over some region of interest arise naturally in many applications. For example, regression over chemical concentration data should enforce that predicted values lie between $0$ and $1$ . Bound constraints also include, as a special case, nonnegativity constraints $f \\ge 0$ ($a = 0, b = \\infty$). 
In this section we present four approaches for enforcing bound constraints.
The most direct way to impose bound constraints on a Gaussian process involves modifying the output of the regression. One way to do this is to transform the output $f$ of the GP using a ``warping'' function which satisfies the bounds. The second way is to replace the Gaussian likelihood \eqref{eq:gauss_like} by a non-Gaussian likelihood that satisfies the bounds, which is then used to obtain a posterior formula for predicting observations $y$ from $f$. The paper by provides an overview and comparison of these two methods; we review this below. For the subsequent discussion, we assume that we have a set of observations \REVISION{$y_i$} that satisfy the bound constraint: \REVISION{$a \le y_i \le b$}.
\n\\label{sec:transform_output}", "id": "9aa6a9be-e5fe-4c60-8c33-997a64297195", "level": "subsection", "origin_cites_number": 1, "parent_id": "6b851013-5efe-40f6-a2a4-48c879d41f38", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Bound Constraints" ], [ "subsection", "Transformed Output and Likelihood" ] ], "subsections": [ "e40dc2af-d473-4365-8032-8b6a52c05507", "b2d6bd6e-c0c9-451e-87bb-8f7cc2e33a3b" ], "title": "Transformed Output and Likelihood" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:warping_fn}\nWarping functions are used to transform bounded observations \\REVISION{$y_i$} to unbounded\nobservations $u_i$. The field $u$ together with the observations $u_i$ are then treated with a traditional GP model using the steps outlined in Section \\ref{gpr}. The probit function, which is the inverse cumulative distribution function of a standard normal random variable: $\\Phi^{-1}(\\cdot)$, is commonly used as a warping function~. \nThe probit function transforms bounded values \\REVISION{$y\\in[0,1]$} to unbounded values $u\\in(-\\infty,\\infty)$ via\n\\begin{equation}\\label{eq:warping_probit}\nu = \\Phi^{-1} \\REVISION{\\left(y\\right)}\n\\end{equation}\nThe probit function is popular when \\REVISION{$y_i$} is uniformly distributed in $[0,1]$ because the transformed values $u_i$ will be draws from a standard normal Gaussian with zero mean and unit variance. 
For a discussion of alternative warping functions we refer the reader to .", "id": "e40dc2af-d473-4365-8032-8b6a52c05507", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "9aa6a9be-e5fe-4c60-8c33-997a64297195", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Bound Constraints" ], [ "subsection", "Transformed Output and Likelihood" ], [ "subsubsection", "Warping Functions" ] ], "subsections": [], "title": "Warping Functions" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:likelihood_formulations}\nIn addition to using warping functions, bound constraints can also be enforced using non-Gaussian likelihood functions $p(\\bold{y}|X,\\bold{f},\\theta)$ that are constructed to produce GP observations which satisfy the constraints. Given a general non-Gaussian likelihood $p(\\bold{y} | X,\\bold{f},{\\theta})$, the posterior distribution of GPR predictions is given by \\eqref{eq:bayes}.\nUnlike the posterior in \\eqref{eq:gpreg}, the posterior in this case is no longer guaranteed to be Gaussian. There are a number of parametric distribution functions with finite support that can be used for the likelihood function to constrain the GP model. suggest either a truncated Gaussian (see Section \\ref{sec:mvn}) or the beta distribution scaled appropriately to the interval $[a,b]$. Their results show that the beta distribution generally performs better. \nUnlike the warping method of Section \\ref{sec:warping_fn}, with either a truncated Gaussian likelihood or a beta likelihood, the \nposterior \\eqref{eq:bayes} is not analytically tractable. compare two schemes for \napproximate inference and prediction using bounded likelihood functions: the Laplace approximation \nand expectation propagation. 
These approaches both use a multivariate Gaussian approximation of the 
posterior, but solve for the governing posterior distribution in different ways.
\label{sec:daveiga}
By noting that a Gaussian process \eqref{eq:multivar_norm} is always trained and evaluated at a finite set of points $X$, global constraints over \REVISION{a continuous domain $\Omega$ (such as an interval in one dimension)} can be approximated by constraints at a finite set of $N_c$ auxiliary or ``virtual'' points $\mathbf{x}_1, ..., \mathbf{x}_{N_c} \in \REVISION{\Omega}$. This approach, introduced by , requires constructing an unconstrained GP and then, over the virtual points, transforming this GP to \REVISION{the} \emph{truncated} multivariate \REVISION{normal density $\mathcal{TN}(\textbf{z}; \bm{\mu},\Sigma,\textbf{a},\textbf{b})$ as a postprocessing step.
The truncated multivariate normal is defined and discussed in detail in Section \ref{sec:mvn}.}
More specifically, one constructs an approximation which is conditioned on a truncated multivariate Gaussian distribution at the auxiliary points.
We point out how this approach \REVISION{affects} the mean posterior predictions of the GP.
The unconstrained mean predictor is conditioned on the data $(X, \mathbf{y})$: 
\begin{equation}\label{eq:unconstrained_mean}
\mathbb{E}\left[ f(\bold{x}^*) \ \big| \ f(X) = \bold{y} \right].
\end{equation}
This setup is augmented by a fixed, finite set of discrete points $\{\mathbf{x}_i\}_{i=1}^{N_c}$, and the predictor \eqref{eq:unconstrained_mean} is replaced by the predictor 
\begin{equation}\label{eq:constrained_mean}
\mathbb{E}\left[ f(\bold{x}^*) \ \big| \ f(X) = \bold{y} \text{ and } a \le f(\mathbf{x}_i) \le b \text{ for all $i = 1, 2, ... N_c$}\right].
\end{equation}
As $[f(\bold{x}_1), ..., f(\bold{x}_{N_c})]^\top$ is normally distributed in the unconstrained case \eqref{eq:unconstrained_mean}, in the constrained case \eqref{eq:constrained_mean} it is distributed according to the truncated multivariate normal. 
In a few special cases, the mean and covariance of the truncated \REVISION{normal} can be derived analytically. In one dimension, the mean at a 
single prediction point, $z_i$, is the unconstrained mean plus a factor which incorporates 
the change in the probability mass of the Gaussian distribution to reflect the truncation: 
\begin{equation}
\mathbb{E} \left({z_i} \mid a\leq z_i\leq b \right) = \mu + \sigma\frac{\phi(\alpha)-\phi(\beta)}{\Phi(\beta)-\Phi(\alpha)}
\end{equation}
where $\alpha = \frac{a - \mu}{\sigma}$, $\beta = \frac{b-\mu}{\sigma}$, and $\phi$ and $\Phi$ are the probability density function and cumulative density function of a univariate standard normal distribution, respectively.
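As a quick numerical check of this closed-form mean (arbitrary parameter values of our choosing), one can compare it against \texttt{scipy.stats.truncnorm}, which parameterizes the truncation bounds in standardized units:

```python
import numpy as np
from scipy.stats import norm, truncnorm

mu, sigma = 1.0, 2.0        # unconstrained mean and standard deviation
a, b = 0.0, 3.0             # truncation bounds

alpha, beta = (a - mu) / sigma, (b - mu) / sigma

# Mean of the truncated normal via the closed-form correction factor.
mean_trunc = mu + sigma * (norm.pdf(alpha) - norm.pdf(beta)) \
                        / (norm.cdf(beta) - norm.cdf(alpha))

# scipy parameterizes the truncated normal by the standardized bounds.
assert np.isclose(mean_trunc, truncnorm.mean(alpha, beta, loc=mu, scale=sigma))

# The correction shifts the mean into the interval (a, b).
assert a < mean_trunc < b
```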
In general, sampling and computing the moments of \REVISION{$\mathcal{TN}(\textbf{z}; \bm{\mu},\Sigma,\textbf{a},\textbf{b})$} is computationally demanding. estimate moments empirically using an expensive rejection sampling procedure, based on a modified Gibbs sampler, to generate samples that honor the truncation bounds. We discuss the computational challenge of estimating the moments further in Section \ref{sec:mvn}. 
In contrast to the warping approach (Section~\ref{sec:transform_output}) or the spline approach (Section~\ref{sec:splines}) which maintain a global enforcement of the constraints, the bounds in \eqref{eq:constrained_mean} can depend on the location: $a_i \le f(\mathbf{x}_i) \le b_i$, representing different bounds in different regions of $\Omega$ (see Section 4 of for an example).
A downside of using the approach described here is that it is unclear how many virtual points $\bold{x}_i$ are needed to approximately constrain the GP globally with a prespecified level of confidence; some studies with increasing $N_c$ are presented by . However, if the number of points can be chosen adequately, this approach can be used to enforce not only bound constraints but also monotonicity and convexity constraints~; see Section \ref{sec:monotonicity_constraints} for more details. These types of constraints can also include linear transformations of a Gaussian process~.
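A naive rejection sampler illustrates the empirical estimation of truncated moments (this is a simplified stand-in for the modified Gibbs sampler mentioned above; the toy mean, covariance, and box are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Bivariate normal to be truncated to the box [0, 2] x [0, 2].
mu = np.array([0.5, 1.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])
a, b = 0.0, 2.0

# Naive rejection sampling: draw from the untruncated normal and keep
# only samples falling inside the box.  (The acceptance rate degrades
# rapidly with dimension, which motivates the Gibbs samplers cited above.)
Z = rng.multivariate_normal(mu, Sigma, size=200_000)
accepted = Z[np.all((Z >= a) & (Z <= b), axis=1)]

# Empirical moments of the truncated distribution.
mean_tn = accepted.mean(axis=0)
cov_tn = np.cov(accepted.T)

assert np.all((mean_tn >= a) & (mean_tn <= b))
```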
\n\\REVISION{", "id": "b2219c1a-2c01-43ce-b260-76e3cc26abb8", "level": "subsection", "origin_cites_number": 2, "parent_id": "6b851013-5efe-40f6-a2a4-48c879d41f38", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Bound Constraints" ], [ "subsection", "Discrete Constraints using Truncated Gaussian Distributions" ] ], "subsections": [], "title": "Discrete Constraints using Truncated Gaussian Distributions" }, { "cite_extract_rate": 1, "cites": [ 2928 ], "content": "\\label{sec:nonneg}\nAnother option for handling bound constraints is to constrain the optimization of the log-marginal-likelihood \\eqref{eq:log_like}, so that hyperparameters are chosen to enforce bounds. \n introduced this approach to enforce nonnegativity constraints\nup to a small probability $0 < \\epsilon \\ll 1$ of violation \nat a finite set of constraint points $\\{\\mathbf{x}_i\\}_{i=1}^{N_c}$,\n\\begin{equation}\n\\label{eq:nonneg}\nP\\big( (f(\\bold{x}^*_i) \\ | \\ \\bold{y},X,\\bold{x}^*_i,\\bm{\\theta}) < 0 \\big) \\leq \\epsilon, \\quad i = 1, 2, ..., N_c. \n\\end{equation}\nFor a Gaussian likelihood, the unconstrained posterior $f^*$ follows a Gaussian distribution \\eqref{eq:gpreg}, and the probabilistic constraint \\eqref{eq:nonneg} can be written in terms of the posterior mean $\\widehat{m}(\\bold{x})$ and posterior standard deviation $s(\\bold{x}) = \\sqrt{\\hat{v}(\\bold{x})}$ given by \\eqref{eq:post_mean_var}, and probit function $\\Phi^{-1}$ (see Section \\ref{sec:warping_fn}): \n\\begin{equation}\n\\label{eq:nonneg2}\n\\widehat{m}(\\bold{x}^*_i) + \\Phi^{-1}(\\epsilon)s(\\bold{x}^*_i) \\geq 0, \\quad i = 1, 2, ..., N_c. \n\\end{equation}\n chose $\\epsilon = 2.3\\%$ so that $\\Phi^{-1}(\\epsilon)=-2$, i.e., the mean minus two standard deviations \nis nonnegative. 
With the condition that $f(\\bold{x}_j)$ be within $\\nu > 0$ of the observations $y_j$, $j = 1, ..., N$, the maximization of the log-marginal-likelihood then becomes\n\\begin{align}\n\\begin{split}\n\\label{eq:nnlik}\n\\text{Seek }& \\quad \\bm{\\theta}^* = \\underset{\\bm{\\theta}}{\\text{argmax}} \\log(p(\\bold{y}|X,\\bm{\\theta})) \\\\\n\\textrm{subject to }& \\quad 0 \\leq \\widehat{m}(\\bold{x}_i)-2s(\\bold{x}_i), \\quad i=1, ..., N_c \\\\ \n\\textrm{and }& \\quad 0 \\leq \\nu - | y_j - f(\\bold{x}_j)|, \\quad j=1, ..., N. \n\\end{split}\n\\end{align}\n solve the constrained optimization problem \\eqref{eq:nnlik} with $\\nu = 0.03$ using a nonlinear interior point solver, demonstrating that nonnegativity is enforced with high probability and also that posterior variance is significantly reduced. \nWhile this tends to be more expensive than a usual unconstrained optimization of the marginal log-marginal-likelihood, the effect on the posterior \\eqref{eq:gpreg} is to change the hyperparameters, while preserving the Gaussian form, so that more expensive inference methods such as MCMC are not required. In principle, two-sided bounds and other types of constraints can be treated in this fashion, although consider nonnegative constraints in their numerical examples. 
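The constrained optimization \eqref{eq:nnlik} can be sketched as follows (a toy example with our own data, a squared-exponential kernel, fixed noise variance, and \texttt{scipy}'s SLSQP in place of the nonlinear interior point solver used in the reference):

```python
import numpy as np
from scipy.optimize import minimize

# Toy nonnegative data and constraint points (all values are our own choices).
X = np.array([0.1, 0.4, 0.6, 0.9])
y = np.array([0.05, 0.40, 0.50, 0.10])
Xc = np.linspace(0.0, 1.0, 9)            # constraint points x_i^*
noise = 1e-4                             # fixed noise variance

def kern(a, b, ell, sig2):
    return sig2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def posterior(theta):
    """Posterior mean/std at Xc for log-hyperparameters theta = (log ell, log sig2)."""
    ell, sig2 = np.exp(theta)
    Kxx = kern(X, X, ell, sig2) + noise * np.eye(len(X))
    Kcx = kern(Xc, X, ell, sig2)
    L = np.linalg.cholesky(Kxx)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    m = Kcx @ alpha
    v = sig2 - np.sum(np.linalg.solve(L, Kcx.T)**2, axis=0)
    return m, np.sqrt(np.maximum(v, 0.0)), L, alpha

def neg_log_marginal(theta):
    _, _, L, alpha = posterior(theta)
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L)))

def prob_con(theta):
    # Probabilistic nonnegativity: mean minus two std devs >= 0 at each x_i^*.
    m, s, _, _ = posterior(theta)
    return m - 2.0 * s

res = minimize(neg_log_marginal, x0=np.log([0.3, 0.5]), method="SLSQP",
               bounds=[(-3.0, 2.0)] * 2,
               constraints=[{"type": "ineq", "fun": prob_con}])
m, s, _, _ = posterior(res.x)
```

Here only the hyperparameters change; the posterior retains the Gaussian form \eqref{eq:gpreg}, as noted above.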
\n}", "id": "8e0d497b-ef1e-4371-8df3-71cb0e9092b9", "level": "subsection", "origin_cites_number": 1, "parent_id": "6b851013-5efe-40f6-a2a4-48c879d41f38", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Bound Constraints" ], [ "subsection", "Constrained maximum likelihood optimization to enforce nonnegativity constraints" ] ], "subsections": [], "title": "Constrained maximum likelihood optimization to enforce nonnegativity constraints" }, { "cite_extract_rate": 1, "cites": [ 8604 ], "content": "\\label{sec:splines}\n present a constrained Gaussian process formulation involving splines, where they place a multivariate Gaussian prior on a class of spline functions. The constraints are incorporated through constraints on the coefficients of the spline functions. \nTo avoid the difficulty of enforcing a bound constraint $a \\le f(\\bold{x}) \\le b$ globally on \\REVISION{a continuous domain $\\Omega$} for all predictions, the approach\\REVISION{es} in Section\\REVISION{s} \\ref{sec:daveiga} \\REVISION{and \\ref{sec:nonneg}} enforced constraints only at a finite set \\REVISION{of} points. \nIn contrast, the approach taken by is to instead consider a spline interpolant whose finite set of knot values are governed by a GP. This reduces the infinite-dimensional GP to a finite-dimensional one, for which the distributions of the knot values (i.e., the coefficients of the spline expansion) must be inferred. By using a set of piecewise linear splines that form a partition of unity, this approach guarantees that the set of all values between neighboring knots are bounded between the values of the knots. Thus if the knot values satisfy prescribed bound or monotonicity constraints, then so must all values in between them; that is, the global constraints are satisfied if the finite-dimensional constraints are. 
The problem then reduces to sampling the knot values from a truncated multivariate normal.
We first discuss the spline formulation in one input dimension, and without loss of generality assume that the process being modeled is restricted to the domain [0,1]. Let $h(x)$ be the standard tent function, i.e., the piecewise linear spline function defined by
\begin{equation}
h(x) = \text{max}(1-|x|,0)
\end{equation}
and define the locations of the knots to be $x_i = i/M$ for $i=0,1,...,M$, with $M+1$ total spline functions. Then for any set of spline basis coefficients $\xi_i$, the function representation is given by
\begin{equation}\label{eq:spline_expansion}
f(x) = \sum_{i=0}^M \xi_i h(M(x-x_i)) = \sum_{i=0}^M \xi_i h_i(x).
\end{equation}
This function representation gives a $C^0$ piecewise linear interpolant of the point values $(x_i, \xi_i)$ for all $i=0,1,...,M$.
The crux of the spline approach to GPR lies in the following argument. Suppose we are given a set of $N$ data points at unique locations $(x_j,y_j)$. Define the $N \times (M+1)$ matrix $A$ such that
\begin{equation}
A_{ji} = h_i(x_j).
\end{equation}
Then any set of spline coefficients $\bm{\xi}$ that satisfy the equation 
\begin{equation}\label{eq:spline_system}
A\bm{\xi} = \bold{y}
\end{equation}
will interpolate the data exactly.
Clearly solutions to this system of equations will exist only if the rank of $A$ is equal to $N$ (i.e., $A$ has full row rank), which requires that any given spline basis spans no more than two data points. Intuitively, this is because a linear function is only guaranteed to interpolate two points locally. Supposing that we make $M$ large enough to satisfy this condition, we can find multiple solutions to the system \eqref{eq:spline_system}.
We now assume the knot values $\bm{\xi}$ to be governed by a Gaussian process with covariance function $K$. Because a linear function of a GP is also a GP, the values of $\bm{\xi}$ and $\mathbf{y}$ are governed jointly by a GP prior in the form 
\begin{equation}
\begin{bmatrix}
\mathbf{y} \\
\bm{\xi}
\end{bmatrix}
\sim \mathcal{N}\left(
\begin{bmatrix}
\mathbf{0}\\
\mathbf{0}
\end{bmatrix},
\begin{bmatrix}
AKA^\top & AK\\
KA^\top & K\\
\end{bmatrix}
\right)
\end{equation}
where each entry of the covariance matrix is understood to be a matrix. Upon observation of the data $\mathbf{y}$, the conditional distribution of the knot values subject to $\mathbf{y}=A\bm{\xi}$ is given by
\begin{equation}
p(\bm{\xi} \ \big| \ \mathbf{y}=A\bm{\xi}) = \mathcal{N}\Big(\bm{\xi}; KA^\top (AKA^\top)^{-1} \mathbf{y}, K - KA^\top(AKA^\top)^{-1}AK\Big)
\end{equation}
This formula is similar to that proposed by , in which a GP is interpolated to a regular grid design to take advantage of fast linear algebra.
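The conditional distribution of the knot values can be verified numerically; the following sketch (toy data and kernel of our own choosing) builds the tent-function matrix $A$, whose rows hold the basis values at the data locations, and checks that the conditional mean interpolates the data exactly:

```python
import numpy as np

# Tent-function basis h_i(x) = max(1 - |M x - i|, 0) with knots x_i = i/M.
M = 10
knots = np.arange(M + 1) / M

def basis(x):
    # (M+1)-vector of spline basis values at a scalar location x
    return np.maximum(1.0 - np.abs(M * x - np.arange(M + 1)), 0.0)

# Toy data; each row of A holds the basis values at one data location.
X = np.array([0.12, 0.37, 0.58, 0.83])
y = np.array([0.3, -0.1, 0.4, 0.2])
A = np.vstack([basis(xj) for xj in X])          # N x (M+1)

# Squared-exponential prior covariance over the knot values xi.
d = knots[:, None] - knots[None, :]
K = np.exp(-0.5 * d**2 / 0.2**2) + 1e-10 * np.eye(M + 1)

# Conditional mean and covariance of the knot values given y = A xi.
B = A @ K @ A.T
mu_xi = K @ A.T @ np.linalg.solve(B, y)
Sigma_xi = K - K @ A.T @ np.linalg.solve(B, A @ K)

# The conditional mean interpolates the data: A mu_xi = y (up to roundoff).
assert np.allclose(A @ mu_xi, y, atol=1e-8)
```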
In this case, we are now interested in evaluating the distribution further conditioned on the inequality constraints given by
\REVISION{\begin{equation}\label{eq:spline_TN_C}
p(\bm{\xi} \ \big| \ \mathbf{y}=A\bm{\xi}, \mathbf{a} \leq \bm{\xi} \leq \mathbf{b}) = \mathcal{TN}\Big(\bm{\xi}; KA^\top (AKA^\top)^{-1} \mathbf{y}, K - KA^\top(AKA^\top)^{-1}AK ,\mathbf{a}, \mathbf{b} \Big)
\end{equation}
where 
the truncated normal \REVISION{density} $\mathcal{TN}(\bm{\mu}, \Sigma, \mathbf{a}, \mathbf{b})$ is defined and discussed in Section \ref{sec:mvn}.} We illustrate bound constrained GPs using this approach in Figure \ref{fig:fn12}. We discuss monotonicity constraints using this approach in Section \ref{sec:splines_monotonic} and constrained MLE estimation of the hyperparameters in Section \ref{sec:mle}. Several constraint types can be combined in this approach, in which case the box constraint $\mathbf{a} \leq \bm{\xi} \leq \mathbf{b}$ in \eqref{eq:spline_TN_C} is replaced by a general convex set $\mathcal{C}$ defined by a set of linear inequalities in $\bm{\xi}$ .
\begin{figure}[htpb!]
\includegraphics[width=0.46\linewidth]{figs/Figure_1_cropped.pdf}
\includegraphics[width=0.45\linewidth]{figs/Figure_2_cropped.pdf}
\caption{
\emph{Left}: Comparison of a bound constrained GP with lower bound $a = -20$ and upper bound $b = 20$ versus an unconstrained GP.
\n\\emph{Right}: Comparison of a positivity constrained GP (lower bound $a = 0$) versus an unconstrained GP.\nThe data and hyperparameters are from .\nThe dotted lines are $\\mu \\pm 1\\sigma$ prediction intervals.\n}\\label{fig:fn12}\n\\end{figure}", "id": "59799be4-bba8-40c8-abc3-bc8dcbb00a5b", "level": "subsubsection", "origin_cites_number": 3, "parent_id": "fb32446a-d668-496a-9bc8-a088f5766b41", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Bound Constraints" ], [ "subsection", "Splines" ], [ "subsubsection", "GPR for spline coefficients" ] ], "subsections": [], "title": "GPR for spline coefficients" }, { "cite_extract_rate": 0.5, "cites": [ 8604 ], "content": "\\label{sec:spline_sampling}\nJust as for the discrete constraint method discussed in Section \\ref{sec:daveiga}, sampling from the truncated normal distribution for the spline coefficients $\\bm{\\xi}$ introduces a new computational challenge into the GPR framework. While we discuss this in more detail and for several dimensions in Section \\ref{sec:mvn}, we give a cursory discussion of this following . We consider one dimension and the one-sided constraint $f(x) \\ge b$ on $[0,1]$.\nThe original method of was to use a rejection sampling approach by sampling from the untruncated \ndistribution with a mean shifted to the mode (or maximum \\emph{a posteriori} point) of the true posterior. That is, one first solves the problem\n\\begin{equation}\\label{eq:posterior_mode}\n\\bm{\\xi}^* = \\underset{\\bm{\\xi}}{\\text{argmin}}(\\bm{\\xi}-\\bm{\\mu})^\\top\\Sigma^{-1}(\\bm{\\xi}-\\bm{\\mu})\n\\end{equation}\nsubject to the bound constraints $\\bm{\\xi} \\geq \\mathbf{b}$,\nwhere $\\bm{\\mu} = KA^\\top (AKA^\\top)^{-1} \\mathbf{y}$ and $\\Sigma = K-KA^\\top(AKA^\\top)^{-1}AK$. 
This is a convex quadratic program (assuming the covariance matrix is not too ill-conditioned) and may be solved efficiently.\nOne then draws samples from $\\mathcal{N}(\\bm{\\xi}^*,\\Sigma)$ and accepts or rejects the samples based on an inequality condition, described in more detail in . \nThis is a simple approach, but it does not perform well at larger scale. The probability of rejecting any sample increases exponentially with the number of splines $M$. Furthermore, imprecision in the mode evaluation from the optimization process can lead to a deterioration of acceptance (for example, if the computed mode only satisfies monotonicity constraints to within some solver tolerance). \nOther approaches to sampling from the multivariate normal rely on Markov chain Monte Carlo methods, and are discussed in Section \\ref{sec:mvn}.", "id": "c1c4f0fe-ff10-4a1c-8ef9-52bff2ae54d3", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "fb32446a-d668-496a-9bc8-a088f5766b41", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Bound Constraints" ], [ "subsection", "Splines" ], [ "subsubsection", "Sampling" ] ], "subsections": [], "title": "Sampling" }, { "cite_extract_rate": 0, "cites": [], "content": "The extension of the spline approach to higher dimensions is straightforward. The spline knots must be arranged in a regular grid with $M_j \\REVISION{+ 1}$ points in each dimension \\REVISION{$j$}, and the process is restricted to a hypercube domain of size $[0,1]^d$ for number of dimensions $d$. Under this restriction, the underlying function may be approximated with the \\REVISION{tensor-product spline expansion}\n\\begin{equation}\n\\REVISION{\nf(\\mathbf{x}) = \\sum_{i_1=0}^{M_{1}} \\sum_{i_2=0}^{M_{2}} ... 
\sum_{i_d=0}^{M_{d}} \xi_{i_1, i_2, ..., i_d} h_{i_1, i_2, ..., i_d} 
(x_{1}, x_{2}, ..., x_{d}),
}
\end{equation}
\REVISION{where, with $h^j_{i_j}(x_j) = h(M_j(x_j-x_{i_j}))$,
\begin{equation}
h_{i_1, i_2, ..., i_d}(\mathbf{x}) = h_{i_1}^1 \otimes h_{i_2}^2 \otimes ... \otimes h_{i_d}^d (\mathbf{x}) = h_{i_1}^1 (x_1) h_{i_2}^2(x_2) ... h_{i_d}^d(x_d),
\end{equation}}
for knot locations $(\REVISION{x_{i_1}, x_{i_2}, ..., x_{i_d}})$ and coefficients \REVISION{$\xi_{i_1, i_2, ..., i_d}$ for $0 \le i_j \le M_j$.} The inference process from this representation proceeds as before, as any set of observed data values can be expressed as $\bold{y}=A\bm{\xi}$ for the appropriately defined matrix $A$, and the coefficient values $\bm{\xi}$ may be inferred with a \REVISION{truncated multivariate normal distribution}. 
The primary issue with the multidimensional extension is the increase in cost. The spline approach suffers from the curse of dimensionality since the number of spline coefficients that must be inferred scales as $M^d$ with $M$ knots per dimension, leading to $O(M^{3d})$ scaling of the inference cost. This cost is further complicated by the fact that the spline formulation requires enough spline coefficients to guarantee interpolation through the data points in all dimensions, which means that $M\geq N$. Some potential methods for addressing computational complexity are discussed later in this work.
The need for efficient sampling schemes is also increased in the multidimensional setting as the acceptance ratio of a simple rejection sampler as discussed in Section \ref{sec:spline_sampling} decreases as the dimensionality (i.e. number of coefficients to infer) increases.
This is partially addressed by the Gibbs sampling schemes referred to above, but those schemes also begin to lose efficiency as the size of the problem increases; for other approaches, see Section \ref{sec:mvn}.
\label{sec:monotonicity_constraints}
Monotonicity constraints are an important class of ``shape constraints'' which are frequently required in a variety of applications. For example, applied monotonicity-constrained GPR for the output of the Los Alamos National Laboratory ``Lady Godiva'' nuclear reactor, which is known to be monotonic with respect to the density and radius of the spherical uranium core. considered monotonic Bayesian modeling of medical dose-response curves, as did for predicting sales from various prices of consumer goods. 
Roughly speaking, given a method to enforce bound constraints, monotonicity constraints can be enforced by utilizing this method to enforce 
$\bold{f}' \ge \bold{0}$ on the derivative of the Gaussian process in a ``co-kriging'' setup for the joint GP 
$[\bold{f};\bold{f}']$. Indeed, many of the works reviewed in Section \ref{sec:bound_constraints} considered bound, monotonicity, and convexity constraints under the general heading of ``linear inequality constraints'' .
As a result, some of the methods below are based on techniques reviewed in Section \ref{sec:bound_constraints}, and we frequently refer to that section.
\label{constrained_likelihood_with_derivative_information}
The work of enforces monotonicity of a Gaussian process using a probit model for the likelihood of the derivative observations.
Probit models are often used in classification problems or binary regression when one wants to predict a probability that a
particular sample belongs to a certain class (0 or 1)~. 
Here it is used to generate a probability that the 
derivative is positive (1) or not (0). Monotonicity is obtained when the derivatives at all the selected points share the same class label: all 1 (nondecreasing) or all 0 (nonincreasing). 
Using the probit model, the likelihood\footnote{This particular likelihood is the inverse of the probit function used for warping output in \eqref{eq:warping_probit}: 
it maps a value from $(-\infty,\infty)$ to $[0,1]$, representing the probability that the value is in class 1 (which translates to monotonicity for this application).} for a particular derivative observation is given by 
\begin{equation}
\Phi(z) = \int_{-\infty}^{z} \REVISION{\mathcal{N}(t;0,1)} dt
\end{equation}
where \REVISION{$\mathcal{N}(t;0,1)$} is the probability density function of the standard one-dimensional normal distribution \eqref{standard_mvn}.
This likelihood is used within an expanded GPR framework that incorporates derivatives and constraints.\nAs part of this formulation, the original $n \\times n$ GP covariance matrix, representing \nthe covariance between $n$ data points, is extended to a\n``four block'' covariance matrix. The full covariance \nmatrix is composed of matrices involving the covariance between\nfunction values, the covariance between derivative values, and the covariance\nbetween function values and derivative values. \nFollowing our goal is to enforce the $d_i$-th partial derivative of $f$ at $\\bold{x}_i$ to be nonnegative, i.e.\n\\begin{equation}\\label{eq:discrete_monotonicity}\n\\frac{\\partial f}{\\partial x_{d_i}} (\\bold{x}_i) \\ge 0, \n\\end{equation}\nat a \\REVISION{finite set of virtual points} $X_m=\\{\\bold{x}_i\\}_{i=1}^m$. \nUsing the shorthand notation\n\\begin{equation}\\label{eq:notation_f}\nf'_i = \\frac{\\partial f}{\\partial x_{d_i}} (\\bold{x}_i), \\quad\\text{and}\\quad\n\\bold{f}' =\n\\begin{bmatrix}\n\\frac{\\partial f}{\\partial x_{d_1}}(\\bold{x}_1) \n\\hdots\n\\frac{\\partial f}{\\partial x_{d_m}}(\\bold{x}_m) \t\n\\end{bmatrix}^\\top\n=\n\\begin{bmatrix}\nf'_1\n\\hdots\nf'_m\t\n\\end{bmatrix}^\\top\n\\end{equation}\nand denoting\\footnote{ use the notation of $m_{d_i}^i$ rather than $y'_i$ for observations of ${\\partial f}/{\\partial x_{d_i}} (\\bold{x}_i)$. \\REVISION{They also use the term ``operating points'' for the virtual points.}}\nan observation of $f'_i = {\\partial f}/{\\partial x_{d_i}} (\\bold{x}_i)$ by $y'_i$, we can write\n\\begin{equation}\\label{eq:riihimaki_likelihood}\np\\left(y'_i\\big|f'_i \\right) = \\Phi\\left(f'_i\\frac{1}{\\nu}\\right).\n\\end{equation}\nHere $\\Phi(z)$ is the cumulative distribution function of the standard normal distribution and \\eqref{eq:riihimaki_likelihood} approaches a step function as $\\nu \\rightarrow 0$. 
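A quick numerical check (input values of our own choosing) confirms that the probit likelihood with $\nu = 10^{-6}$ saturates at $0$ or $1$ away from zero while degrading smoothly near it:

```python
import numpy as np
from scipy.stats import norm

nu = 1e-6
fprime = np.array([-0.01, -1e-8, 0.0, 1e-8, 0.01])   # candidate derivative values
lik = norm.cdf(fprime / nu)                          # Phi(f' / nu)

# For |f'| well above nu the likelihood saturates at 0 or 1 ...
assert np.isclose(lik[0], 0.0) and np.isclose(lik[-1], 1.0)
# ... while near zero it transitions smoothly rather than as a hard step.
assert 0.0 < lik[1] < 0.5 < lik[3] < 1.0
assert np.isclose(lik[2], 0.5)
```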
\nNote that the likelihood function in~\eqref{eq:riihimaki_likelihood} is effectively zero (for non-monotonicity) or one (for monotonicity) in most cases. \REVISION{Therefore, by including observations of $y'_i = 1$ at the virtual points, the derivative is constrained to be positive}. The original authors point out that \eqref{eq:riihimaki_likelihood} is more tolerant of error than a step function, and use $\nu = 10^{-6}$ throughout their article. \nThe joint prior is now given by: \n\begin{equation} \np(\bold{f},\bold{f}'|X,X_m)=\mathcal{N}( \bold{f}_{\text{joint}} | \bm{0} , K_{\text{joint}} )\n\end{equation} \nwhere\n\begin{equation}\label{eq:likelihood_k_joint}\n\bold{f}_{\text{joint}}=\n\begin{bmatrix}\n\bold{f} \\\n\bold{f'}\n\end{bmatrix}\n\quad \text{and} \quad\nK_{\text{joint}}=\n\begin{bmatrix}\nK_{\bold{f,f}} & K_{\bold{f,f'}} \\\nK_{\bold{f',f}} & K_{\bold{f',f'}}\n\end{bmatrix}.\n\end{equation}\nHere, the $n \times n$ matrix $K_{\bold{f,f}}$ denotes the standard covariance matrix for the GP $\bold{f}$ assembled over the data locations $X$: $K_{\bold{f,f}} = k(X,X)$, as in Section \ref{sec:gpr}, where $k$ denotes the covariance function of $\bold{f}$. 
\nThe $m \\times m$ matrix $K_{\\bold{f',f'}}$ in \\eqref{eq:likelihood_k_joint} denotes the covariance matrix between the values of the specified partial derivatives of $\\bold{f}$ at the operational points \n$X_m$:\n\\begin{equation}\n\\left[K_{\\bold{f',f'}}\\right]_{ij} = \n\\left[\n\\text{cov}\\left(\nf'_i,\nf'_j\n\\right)\n\\right]\n=\n\\left[\n\\text{cov}\\left(\n\\frac{\\partial f}{\\partial x_{d_i}}(\\bold{x}_i),\n\\frac{\\partial f}{\\partial x_{d_j}}(\\bold{x}_j)\n\\right)\n\\right], \\quad\n1 \\le i,j \\le m.\n\\end{equation}\n show that $\\frac{\\partial f}{\\partial x_{d_i}}$ is a GP with covariance matrix \n\\begin{equation}\\label{eq:derivative_covariance}\n\\frac{\\partial}{\\partial x_{d_i}} \\frac{\\partial}{\\partial x'_{d_j}} k (\\bold{x}, \\bold{x}'),\n\\end{equation}\nso that\n\\begin{equation}\n\\left[K_{\\bold{f',f'}}\\right]_{ij} = \n\\frac{\\partial^2 k }{\\partial x_{d_i} \\partial x'_{d_j}}(\\bold{x}_i, \\bold{x}'_j), \\quad\n1 \\le i,j \\le m.\n\\end{equation}\nThis result is a special case of a linear transformation of a GP; see Section \\ref{pde_constraints} for more details. 
\nBy the same general derivation in that section, the $n \times m$ matrix $K_{\bold{f,f'}}$ represents the covariance between $\bold{f}$ and $\bold{f'}$, and is given by\n\begin{equation}\n\left[K_{\bold{f,f'}}\right]_{ij} = \n\frac{\partial k }{\partial x'_{d_j}}(\bold{x}_i, \bold{x}'_j),\n\quad\n1 \le i \le n, 1 \le j \le m,\n\end{equation}\nwhile the $m \times n$ matrix $K_{\bold{f',f}} = K_{\bold{f,f'}}^\top$ represents the covariance between $\bold{f}'$ and $\bold{f}$.\nPutting this all together, we have the posterior probability \REVISION{density} of the joint distribution incorporating the derivative information\REVISION{:}\n\begin{equation}\label{riihimaki_posterior}\np(\bold{f},\bold{f}'|\bold{y},\bold{y}')=\frac{1}{Z}p(\bold{f},\bold{f}'|X,X_m)p(\bold{y}|\bold{f})p(\bold{y}'|\bold{f}')\n\end{equation} \nwhere $1/Z$ is a normalizing constant. This \REVISION{density} is analytically intractable because of the non-Gaussian likelihood for the derivative components. The original work approximates the posterior \eqref{riihimaki_posterior} using expectation propagation. \nWe used an MCMC approach to \REVISION{sample} the posterior distribution. This approach is illustrated for an example in Figure \ref{fig:fn3}; this example is particularly challenging because the data is non-monotonic, but there is a requirement that the GP be monotonic. \n\begin{figure}\n\centering\n\includegraphics[width=0.85\linewidth]{figs/Figure_3_v2-crop.pdf}\n\caption{Comparison of monotonic GP using the constrained likelihood formulation (\emph{left}) and the unconstrained GP (\emph{right}). \REVISION{Observations are given at $\{0.1, 0.25, 0.3, 0.5, 0.7, 0.8, 0.95\}$, with monotonicity constraints at the virtual points $\{0.15, 0.35, 0.55, 0.75\}$.}}\n\label{fig:fn3}\n\end{figure}\nWe describe the MCMC approach that we used for \eqref{riihimaki_posterior}. 
\nAs before, $\\bold{f}^*$ and $\\bold{y}^*$ denote the estimates of these quantities at a new prediction point $\\bold{x}^*$. \n\\begin{equation}\np(\\mathbf{y}^*|\\mathbf{x}^*,\\mathbf{x},\\mathbf{y}) = \\int p(\\mathbf{y}^*|\\mathbf{f}^*) p(\\mathbf{f}^*|\\mathbf{x}^*,\\mathbf{x},\\mathbf{y}) \nd\\mathbf{f}^*,\n\\end{equation}\n\\begin{equation}\np(\\mathbf{f}^*|\\mathbf{x}^*,\\mathbf{x},\\mathbf{y}) = \\int \\int p(\\mathbf{f}^*|\\mathbf{x}^*,\\mathbf{f},\\mathbf{f}')p(\\mathbf{f},\\mathbf{f}'|\\mathbf{x},\\mathbf{y}) d\\mathbf{f} d\\mathbf{f}'.\n\\end{equation}\nSince $p(\\mathbf{f},\\mathbf{f}'|\\mathbf{x},\\mathbf{y})$ was computed as samples from MCMC, we can approximate the posterior of $\\mathbf{f}^*$ as\n\\begin{equation}\np(\\mathbf{f}^*|\\mathbf{x}^*,\\mathbf{x},\\mathbf{y}) = \n\\REVISION{ \\underset{\\mathbf{f},\\mathbf{f}' \\sim p(\\mathbf{f},\\mathbf{f}'|\\mathbf{x},\\mathbf{y})} {\\mathbb{E}}\n\\big[ p(\\mathbf{f}^*|\\mathbf{x}^*,\\mathbf{f},\\mathbf{f}') \\big]} \\approx \\frac{1}{N}\\sum_{i=1}^N p(\\mathbf{f}^*|f_i,f'_i).\n\\label{eq:mcmc_monot}\n\\end{equation}\nThe MCMC samples outlined in \\REVISION{\\eqref{eq:mcmc_monot}} are generated over the \nvector $[\\mathbf{f}; \\mathbf{f}']$. This could be a large vector, indicating a large latent function space, which may pose a challenge to MCMC. Our experience indicates that one must start the MCMC sampling at a good initial point, one that is obtained either by finding the maximum \\REVISION{\\textit{a posteriori}} point (MAP) or by using a surrogate or interpolation to find a feasible initial point for $[\\mathbf{f};\\mathbf{f}']$. \nNote that this approach leads to the question of how to select the number and placement of the operating points $X_m$. 
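A small one-dimensional instance of this sampling approach can be sketched as follows. The kernel choice, data, virtual points, proposal scale, and chain length below are all illustrative assumptions, not the implementation used for Figure \ref{fig:fn3}:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
ell, sigma_n, nu = 0.3, 0.1, 1e-3

X = np.array([0.1, 0.4, 0.6, 0.9])     # data locations
y = np.array([0.0, 0.2, 0.25, 1.0])    # noisy observations of f
Xm = np.array([0.25, 0.5, 0.75])       # virtual points where f' >= 0

def k(a, b):   # squared-exponential kernel on grids
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

def dk(a, b):  # d k / d x': entries of K_{f,f'}
    return (a[:, None] - b[None, :]) / ell ** 2 * k(a, b)

def d2k(a, b):  # d^2 k / (dx dx'): entries of K_{f',f'}
    r = a[:, None] - b[None, :]
    return (1 / ell ** 2 - r ** 2 / ell ** 4) * k(a, b)

# Four-block joint prior covariance over [f(X); f'(Xm)], with jitter.
K = np.block([[k(X, X), dk(X, Xm)], [dk(X, Xm).T, d2k(Xm, Xm)]])
Kinv = np.linalg.inv(K + 1e-8 * np.eye(len(X) + len(Xm)))

def log_post(z):
    f, fp = z[:len(X)], z[len(X):]
    lp = -0.5 * z @ Kinv @ z                          # joint GP prior
    lp += norm.logpdf(y, loc=f, scale=sigma_n).sum()  # Gaussian data likelihood
    lp += np.log(norm.cdf(fp / nu) + 1e-300).sum()    # probit monotonicity
    return lp

# Random-walk Metropolis, started from a feasible point (positive slopes).
z = np.concatenate([y, np.full(len(Xm), 0.5)])
samples = []
for _ in range(4000):
    prop = z + 0.05 * rng.standard_normal(z.size)
    if np.log(rng.uniform()) < log_post(prop) - log_post(z):
        z = prop
    samples.append(z.copy())
samples = np.asarray(samples)
mean_slopes = samples[2000:, len(X):].mean(axis=0)  # posterior-mean derivatives
```

As noted above, starting the chain at a feasible point matters: initializing any virtual-point slope at a negative value places the chain in a region where the probit term is effectively $-\infty$.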
The original authors point out that grid-based methods suffer from the curse of dimensionality, and that a more efficient strategy may be to successively add new operating points to $X_m$ by computing where derivatives of the GP are most likely to be negative for the current choice of $X_m$. \nWe did not find much discussion of the placement of virtual points for this method or for the discrete constraint method in Section~\ref{sec:daveiga}. The issue of optimal point placement for the virtual points could be addressed with some of the low-rank methods discussed in Section~\ref{sec:low_rank}.", "id": "0cb9a716-84ec-439d-8c95-906a1685fe29", "level": "subsection", "origin_cites_number": 2, "parent_id": "47d03984-771a-4f08-895f-be7081965518", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Monotonicity Constraints" ], [ "subsection", "Constrained Likelihood with Derivative Information" ] ], "subsections": [], "title": "Constrained Likelihood with Derivative Information" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{daveiga_monotonic}\nThe approach discussed in Section \ref{sec:daveiga} for bounding Gaussian processes at a finite set of virtual points can naturally be extended to enforce monotonicity constraints. 
Specifically, monotonicity constraints of the same form as \eqref{eq:discrete_monotonicity} can be enforced at a discrete set of $N_c$ virtual points, i.e.\n\begin{equation}\n\frac{\partial f}{\partial x_{d_i}} (\bold{x}_i) \ge 0,\quad i = 1, \ldots, N_c.\n\end{equation}\nThis is done by treating the partial derivatives ${\partial f}/{\partial x_{d_i}}$ as GPs with covariance kernel functions given by \eqref{eq:derivative_covariance} and using the joint Gaussian process $\mathbf{f}_{\text{joint}}$ with covariance matrix $\Sigma$ given by \eqref{eq:likelihood_k_joint}.\nThen, given data $(X,\bold{y})$ for $f$, the unconstrained predictor \eqref{eq:unconstrained_mean} is replaced by the predictor\n\begin{equation}\label{eq:derivative_daveiga_predictor}\n\mathbb{E}\left[ f(\bold{x}^*) \ \Bigg| \ f(X) = \bold{y} \text{ and } \n0 \le \frac{\partial f}{\partial x_{d_i}}(\bm{x}_i) \n\text{ for all $i = 1, 2, ... N_c$}\right].\n\end{equation}\nThis is analogous to the predictor \eqref{eq:constrained_mean} used for bound constraints. 
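Evaluating such a predictor requires sampling derivative values from a Gaussian truncated to the nonnegative orthant. A naive rejection-sampling sketch is below; the mean and covariance are made-up numbers standing in for the conditional distribution of the derivatives at $N_c = 2$ virtual points, and Section \ref{sec:mvn} discusses more efficient samplers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative conditional mean/covariance of derivative values at
# N_c = 2 virtual points (made-up numbers for this sketch).
mu = np.array([0.3, -0.1])
Sigma = np.array([[0.2, 0.05], [0.05, 0.1]])

# Naive rejection sampler for TN(mu, Sigma, 0, inf): draw from the
# unconstrained Gaussian and keep draws in the nonnegative orthant.
L = np.linalg.cholesky(Sigma)
draws = mu + rng.standard_normal((100000, 2)) @ L.T
accepted = draws[np.all(draws >= 0.0, axis=1)]

acceptance_rate = len(accepted) / len(draws)
truncated_mean = accepted.mean(axis=0)  # shifts upward relative to mu
```

The acceptance rate of this naive scheme degrades quickly as $N_c$ grows, which is one motivation for the specialized truncated-normal samplers discussed later.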
\nAs a postprocessing step, per \\eqref{eq:derivative_daveiga_predictor} the \\REVISION{density} $\\mathcal{N}(\\bm{\\mu},\\Sigma)$ for $\\bold{f}'$ over the virtual points $\\{\\mathbf{x}_i\\}_{i=1}^{N_c}$ is replaced by the \\REVISION{density} $\\mathcal{TN}(\\bm{\\mu},\\Sigma,\\bm{0},\\bm{\\infty})$; this \\REVISION{density} is discussed more in Section \\ref{sec:mvn}.", "id": "d1712d3f-2a33-40bd-9970-111e15bce37d", "level": "subsection", "origin_cites_number": 1, "parent_id": "47d03984-771a-4f08-895f-be7081965518", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Monotonicity Constraints" ], [ "subsection", "Monotonicity using Truncated Gaussian Distributions" ] ], "subsections": [], "title": "Monotonicity using Truncated Gaussian Distributions" }, { "cite_extract_rate": 0.5, "cites": [ 8604 ], "content": "\\label{sec:splines_monotonic}\nThe spline formulation, presented in Section \\ref{sec:splines} to globally enforce a bound constraint of the form $f \\ge a$ may be extended easily to enforce monotonicity constraints or other linear inequalities. 
For example, if $C$ is a first-order (backward or forward) finite difference matrix relating neighboring spline values, then monotonicity is enforced globally by sampling values of the knots $\\bm{\\xi}$ subject to the constraint\n\\begin{equation}\nC\\bm{\\xi} \\geq \\mathbf{0};\n\\end{equation}\nsee or .\nThis inequality is also used in the rejection sampler of Section \\ref{sec:spline_sampling} as a constraint to identify the MAP estimate to increase the sampling efficiency.\nBound and monotonicity constraints can be enforced simultaneously by requiring both $\\bm{\\xi} \\geq \\mathbf{b}$ and $C\\bm{\\xi} \\geq \\mathbf{0}$ in the sampling, though the acceptance ratio drops substantially with combined constraints.", "id": "a89a65e0-a175-468f-b05f-9de36c2b9479", "level": "subsection", "origin_cites_number": 2, "parent_id": "47d03984-771a-4f08-895f-be7081965518", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Monotonicity Constraints" ], [ "subsection", "Monotonic splines" ] ], "subsections": [], "title": "Monotonic splines" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 8604 ], "content": "\\label{sec:convexity}\nThe sections above illustrated how a method for bound constraints can be used with first derivatives of a Gaussian process $f$ to enforce \n\\REVISION{$\\partial f/ \\partial{x_{i}} \\ge 0$} and thereby monotonicity for the GP $f$, either globally as in Section \\ref{sec:splines_monotonic} or at a finite set of virtual points as in Sections \\ref{constrained_likelihood_with_derivative_information} and \\ref{daveiga_monotonic}. Similar nonnegativity constraints can be applied to higher derivatives of $f$ as well. 
In one dimension, this can be used to enforce convexity via the constraint\n\begin{equation}\label{eq:one_dimension_convexity}\n\frac{\partial^2 f}{\partial x^2} \ge 0,\n\end{equation}\ntreating the left-hand side as a GP with covariance kernel \n\begin{equation}\n\frac{\partial^4 k}{\partial x^2 \partial {x'}^2}(x,x').\n\end{equation}\nAlthough monotonicity can be enforced in arbitrary dimensions, convexity presents a challenge in dimensions greater than one, since it cannot be expressed as a simple linear inequality involving the derivatives of $f$ as in \eqref{eq:one_dimension_convexity}.\nEnforcing convexity in higher dimensions requires that \eqref{eq:one_dimension_convexity} be replaced by the condition that the Hessian of $f$ be positive semidefinite. Sylvester's criterion yields the equivalent condition that each leading principal minor determinant of the Hessian be positive. Such inequality constraints involve \emph{polynomials} in partial derivatives of $f$. As polynomial functions of GPs are no longer GPs, the bound constraint methods in Section \ref{sec:bound_constraints} no longer apply. \nWhile higher dimensional convexity constraints are outside the scope of this survey, several references we have mentioned discuss the implementation of convexity constrained Gaussian processes in greater detail.\nConvexity in one dimension of the form \eqref{eq:one_dimension_convexity} can be enforced at virtual points using the (partially) truncated multinormal distribution, in a way analogous to Section \ref{daveiga_monotonic}, while convexity in two dimensions can be enforced using the elliptically truncated multinormal distribution.\nFor the spline basis considered in Section \ref{sec:splines}, convexity in one dimension amounts to requiring that the successive differences of the values at the spline knots are increasing, i.e. 
\n\\begin{equation}\n\\xi_{k+1}-\\xi_{k} \\geq \\xi_{k}-\\xi_{k-1} \\text{ for all } k.\n\\end{equation}\nThis is equivalent to requiring that the second-order finite differences be positive. This can also easily be applied in higher dimensions to guarantee that the second partial derivatives are positive globally, although this does not imply convexity.", "id": "919a9a81-117f-4d5c-a5e0-4924bb0b524a", "level": "subsection", "origin_cites_number": 3, "parent_id": "47d03984-771a-4f08-895f-be7081965518", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Monotonicity Constraints" ], [ "subsection", "Convexity" ] ], "subsections": [], "title": "Convexity" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:pde_constraints}\nGaussian processes may be constrained to satisfy linear operator constraints of the form\n\\renewcommand\\vec{\\mathbf}\n\\begin{equation}\n\\label{eq:pde_constraint}\n\\mathcal{L} u = f\n\\end{equation}\ngiven data on $f$ and $u$. 
\nWhen $\\mathcal{L}$ is a linear partial differential operator of the form\n\\begin{equation}\n\\label{eq:lin_diff_op}\n\\mathcal{L} = \\sum_{\\bm{\\alpha}} C_{\\bm{\\alpha}}(\\mathbf{x}) \\frac{\\partial^{\\bm{\\alpha}}}{\\partial \\mathbf{x}^{\\bm{\\alpha}}}, \\quad\n\\bm{\\alpha} = (\\alpha_1, ..., \\alpha_d), \\quad\n\\frac{\\partial^{\\bm{\\alpha}}}{\\partial \\mathbf{x}^{\\bm{\\alpha}}} = \n\\frac{\\partial^{{\\alpha}_1}}{\\partial {x}_1^{{\\alpha}_1}}\n\\frac{\\partial^{{\\alpha}_2}}{\\partial {x}_2^{{\\alpha}_2}}\n...\n\\frac{\\partial^{{\\alpha}_d}}{\\partial {x}_{\\REVISION{d}}^{{\\alpha}_d}},\n\\end{equation}\nthe equation \\eqref{eq:pde_constraint} can be used to \nconstrain GP predictions to satisfy known physical laws expressed as linear partial differential equations.\nIn this section we survey methods to constraint GPs with PDE constraints of the form \\eqref{eq:pde_constraint}.", "id": "fbf12205-4c0f-46ca-a510-8409ad72bca8", "level": "section", "origin_cites_number": 0, "parent_id": "86a0553c-8edc-49a7-8382-330c5ec7b9b7", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Differential Equation Constraints" ] ], "subsections": [ "1196fbad-4be9-4538-a3fa-a23a66d55c86", "21a37702-e925-427b-ba46-b1ceb62f632d", "6ebc81ba-65e5-420a-bf4c-436a04c1fc48", "84bac780-15ce-4747-95da-3c67a3d3e503" ], "title": "Differential Equation Constraints" }, { "cite_extract_rate": 0.30000000000000004, "cites": [ 2929, 7626, 2930 ], "content": "\\label{pde_constraints}\n\\REVISION{The work of introduced a joint GPR approach using a four-block covariance kernel allowing observations of both the solution $u$ and the forcing $f$ to be utilized. 
The principle behind this approach} is that if $u(\mathbf{x})$ is a GP with mean function $m(\mathbf{x})$ and covariance kernel $k(\mathbf{x},\mathbf{x'})$, \n\begin{equation}\n\label{eq:gp_for_u}\nu \sim \mathcal{GP}(m(\mathbf{x}),k(\mathbf{x},\mathbf{x'}))\n\end{equation}\nand if $m(\cdot)$ and $k(\cdot, \mathbf{x'})$ belong to the domain of $\mathcal{L}$, then $\mathcal{L}_{\mathbf{x}} \mathcal{L}_{\mathbf{x'}} k(\mathbf{x},\mathbf{x'})$ defines a valid covariance kernel for a GP with mean function $\mathcal{L}_{\mathbf{x}} m(\mathbf{x})$. \nThis Gaussian process is denoted $\mathcal{L}u$:\n\begin{equation}\n\label{eq:gp_for_Lu}\n\mathcal{L}u \sim \mathcal{GP}(\mathcal{L}_{\mathbf{x}} m(\mathbf{x}), \mathcal{L}_{\mathbf{x}} \mathcal{L}_{\mathbf{x'}} k(\mathbf{x},\mathbf{x'})).\n\end{equation}\nNote from \eqref{eq:lin_diff_op} that the operator $\mathcal{L}$ takes as input a function of a single variable $\mathbf{x}$. When applying $\mathcal{L}$ to a function of two variables such as $k(\mathbf{x}, \mathbf{x'})$, we use a subscript as in \eqref{eq:gp_for_Lu} to denote the application of $\mathcal{L}$ in the indicated variable, i.e., considering the input to $\mathcal{L}$ as a function of the indicated variable only. Note that a special case of this, for $\mathcal{L} = \partial/\partial x_{d_i}$, appeared in Section \ref{constrained_likelihood_with_derivative_information}. The same formula \eqref{eq:gp_for_Lu} was utilized in earlier works on GPR with differential equation constraints. A similar result appeared in addressing the problem of identification of basis elements in the finite-dimensional approximation of PDE solutions. 
The latter work presents a Bayesian formulation of a numerical homogenization approach in which the optimal bases are shown to be polyharmonic splines and the optimal solution in the case of a white noise source term is a Gaussian field.\nThe notation ``$\\mathcal{L}u$'' for the GP \\eqref{eq:gp_for_Lu} is suggested by noting that if one could apply $\\mathcal{L}$ to the samples of the GP $u$, then the mean\nof the resulting stochastic process $\\mathcal{L}[u]$ would indeed be given by \n\\begin{align}\n\\text{mean}\\left(\\mathcal{L}[u](\\mathbf{x})\\right) &=\n\\mathbb{E} \\left[ \\mathcal{L} [u](\\mathbf{x}) \\right] = \\mathcal{L} \\mathbb{E}\\left[ u(\\mathbf{x}) \\right] = \\mathcal{L} m(\\mathbf{x})\n\\end{align}\nand the covariance by\n\\begin{align}\n\\begin{split}\n\\label{eq:cov_f_f}\n\\text{cov}\\left( \\mathcal{L}[u](\\mathbf{x}) , \\mathcal{L}[u](\\mathbf{x'}) \\right) &= \n\\mathbb{E} \\left[ \\mathcal{L}_{\\mathbf{x}} [u(\\mathbf{x})] \\mathcal{L}_{\\mathbf{x'}} [u (\\mathbf{x'})] \\right] =\n\\mathbb{E} \\left[ \\mathcal{L}_{\\mathbf{x}} \\mathcal{L}_{\\mathbf{x'}} \\left[ u(\\mathbf{x}) u(\\mathbf{x'}) \\right] \\right] \\\\\n\\qquad &= \\mathcal{L}_{\\mathbf{x}} \\mathbb{E} \\left[ \\mathcal{L}_{\\mathbf{x'}} \\left[ u(\\mathbf{x}) u(\\mathbf{x'}) \\right] \\right] \n= \\mathcal{L}_{\\mathbf{x}} \\mathcal{L}_{\\mathbf{x'}} \\mathbb{E} \\left[ u(\\mathbf{x}) u(\\mathbf{x'}) \\right] \\\\\n&= \\mathcal{L}_{\\mathbf{x}} \\mathcal{L}_{\\mathbf{x'}} \\left[ \\text{cov}\\left( u(\\mathbf{x}), u(\\mathbf{x'}) \\right) \\right] \n= \\mathcal{L}_{\\mathbf{x}} \\mathcal{L}_{\\mathbf{x'}} k(\\mathbf{x}, \\mathbf{x'}).\n\\end{split}\n\\end{align}\nThis justification is formal, as in general the samples of the process $\\mathcal{L}u$ defined by \\eqref{eq:gp_for_Lu} cannot be identified as $\\mathcal{L}$ applied to the samples of $u$ ; a rigorous interpretation involves the posterior predictions and reproducing kernel Hilbert spaces of the processes $u$ and 
$\\mathcal{L}u$ . \nIf scattered measurements\n$\\mathbf{y}_f$ \non the source term $f$ in \\eqref{eq:pde_constraint} are available at domain points $X_f$,\nthen this can be used to train and obtain predictions for $\\mathcal{L}u$ from the GP \\eqref{eq:gp_for_Lu} in the standard way. If, in addition, measurements $\\mathbf{y}_u$\nof $u$ are available at domain points $X_u$\na GP co-kriging procedure can be used. In this setting physics knowledge of the form \\eqref{eq:pde_constraint} enters via the data $(X_f, \\mathbf{y}_f)$ and can be used to improve prediction accuracy and reduce variance of the GPR of $u$. \nThe co-kriging procedure requires forming the joint Gaussian process $[u;f]$.\nSimilarly to the derivative case considered in Section \\ref{constrained_likelihood_with_derivative_information}, the covariance matrix of the resulting GP is a four block matrix assembled from\nthe covariance matrix of the GP \\eqref{eq:gp_for_u} for the solution $u$, the covariance of the GP \\eqref{eq:gp_for_Lu} for the forcing function, and the cross terms.\nGiven the covariance kernel $k(\\mathbf{x}, \\mathbf{x}')$ for $u$, the covariance kernel of this joint GP \nis\n\\begin{equation}\\label{eq:joint_covariance}\nk\n\\left(\n\\begin{bmatrix} \n\\mathbf{x}_1 \\\\ \n\\mathbf{x}_2 \n\\end{bmatrix}\n,\n\\begin{bmatrix} \n\\mathbf{x}'_1 \\\\ \n\\mathbf{x}'_2 \n\\end{bmatrix}\n\\right)\n=\n\\begin{bmatrix}\n\\phantom{\\mathcal{L}_{\\mathbf{x}}} k(\\mathbf{x}_1,\\mathbf{x}'_1) & \n\\phantom{\\mathcal{L}_{\\mathbf{x}}}\\mathcal{L}_{\\mathbf{x}'} k(\\mathbf{x}_1,\\mathbf{x}'_2)\\\\ \n\\mathcal{L}_{\\mathbf{x}} k (\\mathbf{x}_2,\\mathbf{x}'_1) \n& \\mathcal{L}_{\\mathbf{x}} \\mathcal{L}_{\\mathbf{x'}} k(\\mathbf{x}_2,\\mathbf{x}'_2)\n\\end{bmatrix}\n=\n\\begin{bmatrix}\nK_{11} & \nK_{12} \\\\ \nK_{21} &\nK_{22} \n\\end{bmatrix}.\n\\end{equation}\nThe covariance between $u(\\mathbf{x})$ and $f(\\mathbf{x'})$ is given by $\\mathcal{L}_{\\mathbf{x}'} k(\\mathbf{x}_1,\\mathbf{x}'_2)$ in 
the upper right block of the kernel and can be justified by a calculation similar to \\eqref{eq:cov_f_f}; see . Similarly the covariance between $u(\\mathbf{x}')$ and $f(\\mathbf{x})$ is represented by the bottom left block $\\mathcal{L}_{\\mathbf{x}} k(\\mathbf{x}_2,\\mathbf{x}'_1)$ of the kernel. \nIn this notation, the joint Gaussian process for $[u; f]$\nis then\n\\begin{equation}\\label{eq:joint_GP_uf}\n\\begin{bmatrix}\nu({X}_1) \\\\ \nf({X}_2)\n\\end{bmatrix}\n\\sim \\mathcal{GP}\\left( \n\\begin{bmatrix}\n\\phantom{\\mathcal{L}}\nm({X}_1) \\\\\n\\mathcal{L}\nm({X}_2)\n\\end{bmatrix},\n\\begin{bmatrix}\nK_{11}(X_1,X_1) & \nK_{12}(X_1,X_2) \\\\ \nK_{21}(X_2,X_1) &\nK_{22}(X_2,X_2)\n\\end{bmatrix}\n\\right),\n\\end{equation}\nwhere $K_{12}(X_1,X_2) = \\left[K_{21}(X_2,X_1)\\right]^\\top$.\nGiven data $({X}_u,\\mathbf{y}_u)$ and $({X}_f,\\mathbf{y}_f)$, the GP kernel hyperparameters \nmay be trained by assembling the four-block covariance matrix in \\eqref{eq:joint_GP_uf} with \n${X}_1 = {X}_u$, ${X}_2 = {X}_f$,\n\\begin{equation}\\label{eq:covariance_over_data}\nK_{\\text{data}}=\n\\begin{bmatrix}\nK_{11}({X}_u,{X}_u) & \nK_{12}({X}_u,{X}_f) \\\\ \nK_{21}({X}_f,{X}_u) &\nK_{22}({X}_f,{X}_f)\n\\end{bmatrix}\n\\end{equation}\nand minimizing the negative log-marginal-likelihood\n\\begin{multline}\n\\label{NLML}\n-\\log{p(\\bold{y}_u,\\bold{y}_f|X_u,X_f,\\bm{\\theta)}} = \n\\frac{1}{2} \\left(\n\\mathbf{y}-\\mathbf{m}\n\\right)^{\\top}\nK_{\\text{data}}^{-1}\n\\left(\n\\mathbf{y}-\\mathbf{m}\n\\right)\n + \\frac{1}{2}\\log |{K}_{\\text{data}}| + \\frac{N}{2} \\log (2\\pi), \\\\\n\\text{with } \n\\mathbf{y} \n= \n\\left[\n\\begin{array}{c}\n\\mathbf{y}_u \\\\ \n\\mathbf{y}_f\n\\end{array} \n\\right]\n\\text{ and }\n\\mathbf{m} \n=\n\\left[\n\\begin{array}{c}\n\\phantom{\\mathcal{L}} m({X}_u) \\\\ \n\\mathcal{L} m({X}_f)\n\\end{array} \n\\right].\n\\end{multline} \nIn the presence of noise on measurements of $u$ and $f$, a standard approach analogous to the Gaussian 
likelihood \\eqref{eq:gauss_like} is to introduce two noise hyperparameters $\\sigma_{u}$ and $\\sigma_{f}$ and replace the four-block covariance\nmatrix \\eqref{eq:covariance_over_data} by \n\\begin{equation}\n\\begin{bmatrix*}[l]\nK_{11}({X}_u,{X}_u) + \\sigma_{u}^2 {I}_{N_u}& \nK_{12}({X}_u,{X}_f)\\\\ \nK_{21}({X}_f,{X}_u)&\nK_{22}({X}_f,{X}_f) + \\sigma_{f}^2 {I}_{N_f}\n\\end{bmatrix*}\n\\end{equation}\nThe inclusion of the additional terms depending on $\\sigma_{u}^2$ and $\\sigma_{f}^2$ correspond to an assumption of uncorrelated \nwhite noise on the measurements ${Y}_u$ and ${Y}_f$, i.e., \n\\begin{equation}\n{Y}_u = u({X}_u) + \\bm{\\epsilon}_u, \\quad\n{Y}_f = f({X}_f) + \\bm{\\epsilon}_f,\n\\end{equation}\nwith $\\bm{\\epsilon}_u \\sim \\mathcal{N}(\\mathbf{0},\\sigma_{u}^2 {I}_{N_u})$ and independently $\\bm{\\epsilon}_f \\sim \\mathcal{N}(\\mathbf{0},\\sigma_{f}^2 {I}_{N_u})$, given $N_u$ data points for $u$ and $N_f$ data points for $f$. \nThe implementation of the constrained Gaussian process kernel \\eqref{eq:joint_covariance} for constraints of the form\n\\eqref{eq:pde_constraint} raises several computational problems.\nThe first is the computation of $\\mathcal{L}_{\\mathbf{x}}k$ and $\\mathcal{L}_{\\mathbf{x}} \\mathcal{L}_{\\mathbf{x'}}k$. The most ideal scenario is that in which $k$ has an analytical formula and $\\mathcal{L}$ is a linear differential operator so that these expressions be computed in closed form by hand or with a symbolic computational software such as Mathematica. This was the approach used for the examples in , and , including for the heat equation, Burgers' equation, Korteweg-de Vries Equation, and Navier-Stokes equations. The nonlinear PDEs listed here were treated using an appropriate linearization. An example of $k$ being parametrized by a neural network (which allows derivatives to be computed using backpropagation) was also considered in for the Burgers' equation. 
\nClosed form expressions for the covariance kernel \\eqref{eq:joint_covariance} greatly simplify the implementation compared to numerical approximation of $\\mathcal{L}_{\\mathbf{x}}k$ and $\\mathcal{L}_{\\mathbf{x}} \\mathcal{L}_{\\mathbf{x'}}k$ using finite-differences or series expansions. As the size of the dataset and therefore size of the covariance matrix \\eqref{eq:joint_covariance} increases, our numerical experiments suggest that any numerical errors in the approximation of the action of $\\mathcal{L}$ rapidly lead to ill-conditioning of the covariance matrix. This in turn can lead to artifacts in the predictions or failure of maximum likelihood estimation with the constrained GP. Ill-conditioning can be reduced by adding an ad-hoc regularization on the diagonal of \\eqref{eq:joint_covariance} at the cost of reducing the accuracy of the regression, potentially negating the benefit of the added constraint.\nFor more general constraints of the form \\eqref{eq:pde_constraint}, depending on the form of $k$ or $\\mathcal{L}$, numerical methods may be unavoidable. For example, in and , fractional-order PDE constraints (amounting to $\\mathcal{L}$ being a nonlocal integral operator with singular kernel) were considered. For these constraints, the kernel blocks $\\mathcal{L}_{\\mathbf{x}}k$ and $\\mathcal{L}_{\\mathbf{x}} \\mathcal{L}_{\\mathbf{x'}}k$ had no closed formula. To approximate these terms, a series expansion was used in , and in a numerical method was developed involving Fourier space representations of $\\mathcal{L}_{\\mathbf{x}}k$ and $\\mathcal{L}_{\\mathbf{x}} \\mathcal{L}_{\\mathbf{x'}}k$ with Gaussian quadrature for Fourier transform inversion. \nA second problem is that the formulation \\eqref{eq:joint_GP_uf} requires enforcing the constraint \\eqref{eq:pde_constraint} at discrete points of ${X}_f$. 
Therefore, even if we have complete knowledge of the constraining equation \\eqref{eq:pde_constraint} and the forcing term $f$, enhancing the GPR for $u$ by including a high number of virtual data points makes inference as well as maximum likelihood estimation computationally expensive and prone to ill-conditioning. In this regard, the computational approaches discussed in Section \\ref{sec:low_rank}, particularly the subset of data approaches in Section \\ref{sec:subset_of_data}, may be helpful.\nFigure \\ref{fig:pde_example_1} shows an example of a one-dimensional GP with squared-exponential kernel constrained to satisfy the differential equation $1 = d^2 u / dx^2$ on the interval $[0,1]$. Data is generated from sampling the solution $u = \\frac{1}{8} [(2x-1)^2-1]$ at 10 points between 0.2 and 0.8. Both the constrained and unconstrained GPs give a reasonably accurate reconstruction on $[0.2, 0.8]$, but the unconstrained GP has poor accuracy outside this subinterval. On the other hand, the constrained GP is augmented by data $f = d^2 u / dx^2 = 1$ at 10 additional points between $0$ and $1$, leading to an improved reconstruction of $u$ outside $[0.2,0.8]$. \n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{figs/pde_constrained_example_cropped.pdf}\n \\caption{Comparison of unconstrained and PDE constrained GP. \\emph{Left:} Reconstruction of $u$ (red line) with an unconstrained GP (black line) using 10 data points (red dots) in $[0.2, 0.8]$. \\emph{Center:} Reconstruction of $u$ (red line) with a PDE constrained GP (black line) using the same 10 data points (red dots) in $[0.2, 0.8]$. \\emph{Right:} Right-hand side $f$ of the PDE, with 10 additional data points in $[0,1]$ used for the PDE constraint. 
Note the improved accuracy of the constrained GP outside $[0.2, 0.8]$ due to this constraint data.}\n \label{fig:pde_example_1}\n\end{figure}", "id": "1196fbad-4be9-4538-a3fa-a23a66d55c86", "level": "subsection", "origin_cites_number": 10, "parent_id": "fbf12205-4c0f-46ca-a510-8409ad72bca8", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Differential Equation Constraints" ], [ "subsection", "Block Covariance Kernel" ] ], "subsections": [], "title": "Block Covariance Kernel" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{transformed_covariance}\nA different approach to constrain Gaussian processes by differential equations is to design a specialized covariance kernel such that the GP satisfies the constraint globally, rather than at a discrete set of auxiliary data points as in Section \ref{pde_constraints}. This method dates back to divergence-free kernels for vector-valued GPs. In addition to being a stronger enforcement of the constraint, this method also avoids the computational burden induced by the four-block covariance matrix. On the other hand, it has more limited applicability and specialized implementation, as it requires analytically solving for a kernel with the desired constraining property. The idea is to find a linear operator $\mathcal{G}$ which maps a certain class of functions (modeled by a GP) to the null space of the linear differential operator defining the constraint. The operator $\mathcal{G}$ can then be used to compute the constrained kernel as a transformation of a starting kernel. We now summarize this approach, provide some examples, and compare it in greater detail to other approaches. 
\nGiven a linear operator $\mathcal{L}_{\mathbf{x}}$ and a vector-valued GP $\mathbf{f}$ described using a matrix-valued covariance kernel function that encodes the covariance between the entries of the vector $\mathbf{f}$, the constraint\n\begin{equation}\label{transformed_constraint}\n\mathcal{L}_{\mathbf{x}}\mathbf{f} = 0\n\end{equation}\nis satisfied if $\mathbf{f}$ can be represented as\n\begin{equation}\label{f_from_g}\n\mathbf{f} = \mathcal{G}_{\mathbf{x}}\mathbf{g},\n\end{equation}\nfor a transformation $\mathcal{G}_{\mathbf{x}}$ such that\n\begin{equation}\label{operator_equation}\n\mathcal{L}_{\mathbf{x}} \mathcal{G}_{\mathbf{x}} = 0.\n\end{equation}\nIn other words, the range of the operator $\mathcal{G}_{\mathbf{x}}$ lies in the nullspace of the operator $\mathcal{L}_{\mathbf{x}}$.\nFurther, provided that $\mathcal{G}_{\mathbf{x}}$ is also a linear operator, if $\mathbf{g}$ is a GP with covariance kernel $k_{\mathbf{g}}$, then by \eqref{f_from_g} $\mathbf{f}$ is also a GP with covariance kernel\n\begin{equation}\label{transformed_kernel}\nk_{\mathbf{f}} = \mathcal{G}_{\mathbf{x}} k_{\mathbf{g}} \mathcal{G}_{\mathbf{x'}}^\top. \n\end{equation}\nAbove and throughout this section, we use the notation $\mathcal{G}_{\mathbf{x}} k_{\mathbf{g}} \mathcal{G}_{\mathbf{x'}}^\top$ for the matrix-valued function with $(i,j)$-entry $\REVISION{\sum_{k} \sum_{l}}\left[ \mathcal{G}_{\mathbf{x}} \right]_{ik}\n\left[ \mathcal{G}_{\mathbf{x'}} \right]_{jl} \left[ k_{\mathbf{g}}(\mathbf{x}, \mathbf{x}') \right]_{kl}$; \nnote that if $\mathbf{g}$ and therefore $k_{\mathbf{g}}$ are scalar valued, this reduces to \eqref{eq:cov_f_f}. \nIf the operator equation \eqref{operator_equation} can be solved, one can choose a kernel $k_{\mathbf{g}}$ and define a GP $\mathbf{f}$ using \eqref{transformed_kernel} which satisfies the constraint \eqref{transformed_constraint}. 
The constraint is satisfied globally by the structure of the covariance kernel; no data is required to enforce it. We refer to this as the \emph{transformed covariance kernel} approach. 
Prototypical examples applying the constraint \eqref{transformed_constraint} are divergence-free and curl-free constraints on the vector field $\mathbf{f}$; excellent illustrations of such GP vector fields exist in the literature.
As an example of how to apply \eqref{transformed_constraint}, consider the enforcement of the curl-free constraint 
\begin{equation}
\mathcal{L}_{\mathbf{x}} \mathbf{f} = 
\nabla \times \mathbf{f} = 0
\end{equation}
for a vector field $\mathbf{f}: \mathbb{R}^3 \rightarrow \mathbb{R}^3$. A curl-free vector field can be written
$\mathbf{f} = \nabla g$,
for a scalar function $g : \mathbb{R}^3 \rightarrow \mathbb{R}$. So, for $\mathcal{L}_{\mathbf{x}} = \nabla \times$, the choice 
\begin{equation}
\mathcal{G}_{\mathbf{x}} = \nabla, \quad \text{i.e.} \quad
\mathcal{G}_{\mathbf{x}} g = 
\begin{bmatrix}
\frac{\partial g}{\partial x_1} \\[6pt]
\frac{\partial g}{\partial x_2} \\[6pt]
\frac{\partial g}{\partial x_3}
\end{bmatrix}
=
\begin{bmatrix}
\frac{\partial}{\partial x_1} \\[6pt]
\frac{\partial}{\partial x_2} \\[6pt]
\frac{\partial}{\partial x_3}
\end{bmatrix}
g
\end{equation}
satisfies \eqref{operator_equation}. 
\nThus, placing a GP with scalar-valued covariance kernel $k_g(\\mathbf{x},\\mathbf{x}')$ on $g$ leads via \\eqref{transformed_kernel} to a $3 \\times 3$ matrix-valued covariance kernel \n\\begin{equation}\nk_{\\text{curl-free}}(\\mathbf{x},\\mathbf{x}') = \n\\mathcal{G}_{\\mathbf{x}} k_g \\mathcal{G}_{\\mathbf{x}}^\\top \n= \n\\begin{bmatrix*}\n\\frac{\\partial^2 }{\\partial x_1 \\partial x_1'}\n&\n\\frac{\\partial^2 }{\\partial x_1 \\partial x_2'}\n&\n\\frac{\\partial^2 }{\\partial x_1 \\partial x_3'}\n\\\\[6pt]\n\\frac{\\partial^2 }{\\partial x_2 \\partial x_1'}\n&\n\\frac{\\partial^2 }{\\partial x_2 \\partial x_2'}\n&\n\\frac{\\partial^2 }{\\partial x_2 \\partial x_3'}\n\\\\[6pt]\n\\frac{\\partial^2 }{\\partial x_3 \\partial x_1'}\n&\n\\frac{\\partial^2 }{\\partial x_3 \\partial x_2'}\n&\n\\frac{\\partial^2 }{\\partial x_3 \\partial x_3'}\n\\end{bmatrix*}\nk_g (\\mathbf{x}, \\mathbf{x}').\n\\end{equation}\nHere, $k_g$ is a scalar-valued kernel. \nIf the squared-exponential covariance kernel \n$k_g(\\mathbf{x},\\mathbf{x}') = \\gamma e^{-\\frac{|\\mathbf{x} - \\mathbf{x}'|^2}{2\\theta^2}}$ is used, this leads to a closed-form kernel\n\\begin{equation}\\label{curl_free_SE}\nk_{\\text{curl-free}}(\\mathbf{x},\\mathbf{x}') = \n\\frac{\\gamma^2}{\\theta^2} \ne^{-\\frac{|\\mathbf{x} - \\mathbf{x}'|^2}{2\\theta^2}}\n\\left(\nI_d - \n\\left( \\frac{\\mathbf{x} - \\mathbf{x}'}{\\theta} \\right)\n\\left( \\frac{\\mathbf{x} - \\mathbf{x}'}{\\theta} \\right)^\\top\n\\right); \n\\end{equation}\nsee or . We have derived this for dimension $d = 3$, but it is valid in any dimension $d \\ge 2$ . The specific covariance kernel \\eqref{curl_free_SE} was introduced by in the context of reproducing kernel Hilbert spaces, and was also used by . 
\nIn a similar way, one can enforce a divergence-free condition $\\nabla \\cdot \\mathbf{f} = 0$ for a vector-valued GP $\\mathbf{f}$ by writing $\\mathbf{f} = \\nabla \\times \\mathbf{g}$ and placing a GP prior on a vector field $\\mathbf{g}$, as $\\nabla \\cdot (\\nabla \\times \\mathbf{g}) = 0$ .\nModeling the components of $\\mathbf{g}$ as independent and placing a diagonal matrix-valued squared-exponential kernel on it leads to divergence-free covariance kernels for $\\mathbf{f}$ of the form\n\\begin{equation}\\label{div_free_SE}\nk_{\\text{div-free}}(\\mathbf{x},\\mathbf{x}') = \n\\frac{\\gamma^2}{\\theta^2} \ne^{-\\frac{|\\mathbf{x} - \\mathbf{x}'|^2}{2\\theta^2}}\n\\left(\n\\left( \\frac{\\mathbf{x} - \\mathbf{x}'}{\\theta} \\right)\n\\left( \\frac{\\mathbf{x} - \\mathbf{x}'}{\\theta} \\right)^\\top\n+\n\\left(\n(d-1) - \\frac{\\| \\mathbf{x} - \\mathbf{y} \\|^2}{\\theta^2}\n\\right) I_d\n\\right); \n\\end{equation}\nsee . \nThe specific divergence-free kernel \\eqref{div_free_SE} appears to have been introduced by .\nIn all these examples, solving the key operator equation \\eqref{operator_equation} has been an easy application of vector calculus identities. The work of proposes an approach for solving \\eqref{operator_equation} for general linear constraints involving first-order differential operators, which generalizes the curl-free and divergence-free examples. 
However, solving \\eqref{operator_equation} in general is difficult, depends dramatically on $\\mathcal{L}$, and may introduce significant computational challenges.\nFor example, to constrain a scalar function on the unit disk $D = \\{ z = (x_1,x_2) \\ : \\ |z| < 1 \\}$ in $\\mathbb{R}^2$\nto satisfy Poisson's equation\n\\begin{equation}\n\\Delta u = \\frac{\\partial^2 u}{\\partial x_1^2} + \\frac{\\partial^2 u}{\\partial x_2^2} = 0 \\quad \\text{ in } \\quad D\n\\end{equation}\ni.e., the constraint \\eqref{transformed_constraint} with $\\mathcal{L}_{\\mathbf{x}} = \\Delta$, one could exploit Poisson's kernel formula\n\\begin{equation}\\label{poisson_formula}\nu(r,\\theta) = \\mathcal{G} g = \\frac{1}{2\\pi} \\int_{-\\pi}^{\\pi} P_r(\\theta - t) g(e^{it}) dt \n\\end{equation}\nfor a boundary value $g$ defined on $\\partial D$. More precisely, $g \\in L^1(\\mathbb{T})$. Thus, one could model $g$ with an appropriate GP with covariance kernel $k_g$ and use $\\mathcal{G}$ in \\eqref{poisson_formula} to satisfy \\eqref{operator_equation} and then to define a kernel $k_u$ via \\eqref{transformed_kernel}. However, in this case $G$ is an integral operator which would make evaluation of \\eqref{transformed_kernel} more difficult than the vector calculus examples discussed above. This illustrates that imposing the constraint via solving the operator equation \\eqref{operator_equation} requires using analytical representations of the solution to the constraint equation \\eqref{transformed_constraint} that vary significantly from case to case. The same issue is illustrated by an example of a biharmonic constraint in one-dimension for a scalar GP in . 
This is in contrast to the block covariance method of Section \ref{pde_constraints}, which involves more straightforward computation of the kernel blocks in \eqref{eq:joint_covariance}.
\label{sec:empirical}
Given an ensemble of realizations of a random field $Y$ on a set of grid points and a smaller set of high-fidelity data on a subset of the low-fidelity grid points, one can build a Gaussian process for the unknown field over the unstructured grid which also passes through the high-fidelity data, at the same time ensuring that the GP satisfies the PDE used to generate the low-fidelity ensemble. The ensemble data may be obtained from a large number of simulations over a single unstructured grid of a deterministic solver for a linear PDE, sampling the stochastic parameters in the PDE according to some distribution. The high-fidelity data may consist of field data obtained through a costly experiment, a situation common in geostatistics.
The idea is to compute the mean and covariance function of the GP empirically from these realizations of the random field $Y$. This removes the need to infer the hyperparameters of a covariance function. Instead, one simply calculates the mean and covariance matrix from the random field realizations as follows. 
We assume that we have $M$ realizations $Y^m(\mathbf{x})$ of the output field $Y(\mathbf{x})$ for $\mathbf{x}$ in the $d$-dimensional grid 
$\{\mathbf{x}_i\}_{i=1}^N$ (the low-fidelity data). Then the mean and covariance kernel are respectively given by
\begin{equation}
\mu(\mathbf{x}) \approx \mu_{\text{MC}}(\mathbf{x})=\frac{1}{M}\sum_{m=1}^{M}Y^m(\mathbf{x})
\end{equation} 
and
\begin{equation}
k(\mathbf{x},\mathbf{x}') \approx k_{\text{MC}}(\mathbf{x},\mathbf{x}')=\frac{1}{M-1}\sum_{m=1}^{M}{(Y^m(\mathbf{x})-\mu_{\text{MC}}(\mathbf{x}))(Y^m(\mathbf{x}')-\mu_{\text{MC}}(\mathbf{x}'))}.
\end{equation}
Hence, with $\mathbf{Y}^m = [Y^m(\mathbf{x}_1), ..., Y^m(\mathbf{x}_N)]^\top$ and $\bm{\mu}_{\text{MC}} = [\mu_{\text{MC}}(\mathbf{x}_1), ..., \mu_{\text{MC}}(\mathbf{x}_N)]^\top$, the covariance matrix is approximated by
\begin{equation}
\mathbf{C} \approx \mathbf{C}_{\text{MC}}=\frac{1}{M-1}\sum_{m=1}^{M}{(\mathbf{Y}^m-\boldsymbol{\mu}_{\text{MC}})(\mathbf{Y}^m-\boldsymbol{\mu}_{\text{MC}})^\top}.
\end{equation}
The above formulas for the mean and covariance of the GP over the unstructured grid $\{\mathbf{x}_i\}_{i=1}^N$ can then be used in the usual prediction formula \eqref{eq:gpreg} for the posterior mean and variance at any point in the grid, conditioned on high-fidelity data at a \emph{subset} of points on the grid. 
It is important to note that this approach does not assume stationarity of the GP, nor does it 
assume a specific form of the covariance function. It has been shown that physical constraints in the form of a deterministic linear operator are guaranteed to be satisfied within a certain error in the resulting prediction when using this approach. The method has also been extended to model discrepancy between the low- and high-fidelity data, with an accompanying estimate of the error in preserving the physical constraints. 
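A minimal sketch of this workflow, using a toy ensemble of random mode combinations in place of actual low-fidelity PDE solves (the synthetic data and all names are ours, for illustration only):

```python
import numpy as np
rng = np.random.default_rng(0)

# toy "low-fidelity" ensemble: M realizations of a random field on N grid points
# (random combinations of smooth modes standing in for stochastic PDE solves)
N, M = 50, 500
xs = np.linspace(0.0, 1.0, N)
modes = np.stack([np.sin(np.pi * k * xs) for k in (1, 2, 3)])   # (3, N)
Y = rng.normal(size=(M, 3)) @ modes                             # (M, N) realizations

mu_mc = Y.mean(axis=0)                  # empirical mean  mu_MC(x_i)
C_mc = np.cov(Y, rowvar=False)          # empirical covariance  C_MC  (N x N)

# condition on "high-fidelity" observations at a subset of grid indices
idx = np.array([5, 25, 45])
y_hf = np.sin(np.pi * xs[idx]) + 0.5 * np.sin(2.0 * np.pi * xs[idx])
sigma2 = 1e-6
K = C_mc[np.ix_(idx, idx)] + sigma2 * np.eye(len(idx))
k_star = C_mc[:, idx]                   # cross-covariance, full grid vs. data sites
post_mean = mu_mc + k_star @ np.linalg.solve(K, y_hf - mu_mc[idx])
post_cov = C_mc - k_star @ np.linalg.solve(K, k_star.T)
```

The posterior mean interpolates the high-fidelity data at the conditioned grid points, and prediction is only available on the grid where the ensemble was computed, as noted below.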
However, as the method uses an empirical mean and covariance, it \emph{cannot} interpolate the field between the points where the stochastic realizations are available. The step of GPR for prediction at an arbitrary point $\mathbf{x}^*$, represented by \eqref{eq:gpreg}, is not available, as the covariance kernel function is bypassed entirely; the covariance is obtained directly in matrix form over the unstructured grid. 
\REVISION{
\label{sec:expansion}
An approach has been developed that is suited for GPR with linear PDE constraints of the form \eqref{eq:pde_constraint} in the case of vanishing or localized source terms $f$. In the latter case, the solution $u$ is represented as the sum of a solution to the homogeneous equation $\mathcal{L}u = 0$ and an inhomogeneous contribution obtained as a linear model over fundamental solutions corresponding to the point sources.
Focusing on the GPR for the solution $u$ to the homogeneous equation $\mathcal{L} u = 0$, a specialized kernel function $k$ is derived from $\mathcal{L}$ such that the GPR prediction satisfies the equation exactly. In this sense, the approach is similar to that of Section \ref{transformed_covariance}, although the kernel is not obtained from a transformation of a prior kernel but rather is constructed from solutions to the problem $\mathcal{L}u = 0$. 
It can be shown that such a covariance kernel must satisfy
\begin{equation}\label{eq:albert_condition}
\mathcal{L}_{\mathbf{x}} k(\mathbf{x}, \mathbf{x}')\mathcal{L}^\top_{\mathbf{x}'} = 0
\end{equation}
in the notation of Section \ref{transformed_covariance}; one then seeks kernels $k$ in the form of a Mercer series
\begin{equation}\label{eq:albert_kernel}
k(\mathbf{x}, \mathbf{x}') = \sum_{i,j} \phi_i(\mathbf{x}) \Sigma^{i,j}_p \phi_j(\mathbf{x}')
\end{equation}
for basis functions $\phi_i$ and matrix $\Sigma_p$. Convolution kernels can also be considered. This program has been carried out for the Laplace, heat, and Helmholtz equations, performing MLE and inferring the solution $u$ from the PDE constraint and scattered observations. Kernels of the form \eqref{eq:albert_kernel} satisfying \eqref{eq:albert_condition} are constructed by selecting $\{\phi_i\}$ to be an orthogonal basis of solutions to the corresponding equations. Although the resulting kernels are not stationary and require analytical construction, they result in improved reconstructions of solutions from the observations compared to squared-exponential kernels. We note that similar constructions of kernels -- as expansions in a suitable basis -- are utilized in the approach of the following section to enforce boundary conditions. 
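As a small illustration of \eqref{eq:albert_kernel} and \eqref{eq:albert_condition} for $\mathcal{L} = \Delta$, the sketch below builds a kernel from harmonic polynomials (solutions of Laplace's equation in 2D) with hypothetical geometrically decaying weights of our choosing, and checks by finite differences that $\mathbf{x} \mapsto k(\mathbf{x},\mathbf{x}')$ is harmonic:

```python
import numpy as np

def harmonic_basis(x, n_max=4):
    """phi_i: real and imaginary parts of (x_1 + i x_2)^n, i.e. harmonic
    polynomials -- an (incomplete) basis of 2D Laplace solutions."""
    z = complex(x[0], x[1])
    feats = [1.0]
    for n in range(1, n_max + 1):
        zn = z**n
        feats += [zn.real, zn.imag]
    return np.array(feats)

# Mercer-type kernel k(x,x') = sum_i w_i phi_i(x) phi_i(x'); since every phi_i
# is harmonic, Laplacian_x k = 0 in each argument (weights are illustrative)
weights = np.array([1.0] + [0.5**n for n in range(1, 5) for _ in (0, 1)])

def k_harmonic(x, xp):
    return float(np.sum(weights * harmonic_basis(x) * harmonic_basis(xp)))

# five-point finite-difference Laplacian of x -> k(x, x') at an interior point
x, xp, h = np.array([0.3, 0.2]), np.array([-0.1, 0.4]), 1e-4
lap = (k_harmonic(x + [h, 0.0], xp) + k_harmonic(x - [h, 0.0], xp)
       + k_harmonic(x + [0.0, h], xp) + k_harmonic(x - [0.0, h], xp)
       - 4.0 * k_harmonic(x, xp)) / h**2
```

The same stencil applied to a squared-exponential kernel would return a Laplacian of order one, not zero.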
\n}", "id": "84bac780-15ce-4747-95da-3c67a3d3e503", "level": "subsection", "origin_cites_number": 2, "parent_id": "fbf12205-4c0f-46ca-a510-8409ad72bca8", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Differential Equation Constraints" ], [ "subsection", "Specialized Kernel Construction" ] ], "subsections": [], "title": "Specialized Kernel Construction" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:boundary_constraints}\nBoundary conditions and values are yet another type of prior knowledge that may be incorporated into Gaussian process regression. \nIn many experimental setups, measurements can be taken at the boundaries of a system in a cheap and non-invasive way that permits nearly complete knowledge of the boundary values of an unknown field. In other cases, the boundary values may be fully known or controlled by the user, such as for a system in a heat bath. Theoretically, various boundary conditions are often needed to complete the description of a well-posed model. Thus, intrinsic boundary condition constraints on a GP, as opposed to the treatment of boundary measurements as scattered data, may be of interest in applications both for improved accuracy and to avoid the computational burden of an expanded dataset. For one-dimensional GPs, the enforcing Dirichlet boundary conditions is trivial; noiseless observations at the boundary can be used to produce a posterior mean and covariance that satisfy the boundary conditions exactly. In higher dimensions, however, it is nontrivial to constrain GPs to satisfy boundary conditions globally over a continuous boundary. \\REVISION{ constructed an example of GPR on the two-dimensional unit square $[0,1]^2$ with Dirichlet boundary conditions by writing the solution as a product of a factor represented by a GP and an analytic factor which was identically zero at the boundary. 
We discuss a more general approach based on spectral expansions below.}
\label{spectral_expansion_approach}
A more general method is based on the spectral expansion of a desired stationary isotropic covariance kernel 
\begin{equation}\label{e:stationary_isotropic_kernel}
k(\mathbf{x},\mathbf{x}') = k(|\mathbf{x}-\mathbf{x}'|)
\end{equation}
in eigenfunctions of the Laplacian. To enforce zero Dirichlet boundary values on a domain $\Omega$, one uses the \emph{spectral density} (Fourier transform) of the kernel \eqref{e:stationary_isotropic_kernel},
\begin{equation}
\label{e:spectral_density}
s(\bm{\omega}) = \int_{\mathbb{R}^d}
e^{-i \bm{\omega} \cdot \mathbf{x}} 
k(|\mathbf{x}|)
d\mathbf{x}. 
\end{equation}
This enters into the approximation of the kernel:
\begin{equation}\label{e:spectral_expansion_kernel}
k(\mathbf{x},\mathbf{x}') \approx \sum_{\ell=1}^m 
s(\lambda_\ell) \phi_\ell(\mathbf{x}) \phi_\ell(\mathbf{x}'),
\end{equation}
where $\lambda_\ell$ and $\phi_\ell$ are the Dirichlet eigenvalues and eigenfunctions, respectively, of the Laplacian on the domain $\Omega$.
In \eqref{e:spectral_expansion_kernel}, $s(\cdot)$ is thought of as a function of a scalar variable; since $k$ is isotropic in \eqref{e:stationary_isotropic_kernel}, so is the Fourier transform $s(\bm{\omega}) = s(|\bm{\omega}|)$. 
\nNote that the expansion \\eqref{e:spectral_expansion_kernel} yields a covariance that is zero when $\\mathbf{x} \\in \\partial\\Omega$ or $\\mathbf{x}' \\in \\partial\\Omega$. \nThus if the mean of the GP satisfies the zero boundary conditions, Gaussian process predictions using the series \\eqref{e:spectral_expansion_kernel} will satisfy the boundary condition as well.", "id": "a2e9aeb9-9082-41e6-90a9-b86682ec0062", "level": "subsection", "origin_cites_number": 1, "parent_id": "65d986b6-b11e-421e-971a-bb294bcbf1db", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Boundary Condition Constraints" ], [ "subsection", "Spectral Expansion Approach" ] ], "subsections": [], "title": "Spectral Expansion Approach" }, { "cite_extract_rate": 0.33333333333333304, "cites": [ 7628 ], "content": "The first implementation task that presents itself is computation of the Dirichlet spectrum $(\\lambda_\\ell, \\phi_\\ell)$ of the Laplacian\n\\begin{align}\n\\Delta \\phi_\\ell &= \\lambda_\\ell \\phi_\\ell \\quad\\text{ in } \\Omega \\\\\n\\phi_\\ell &= 0 \\quad\\text{ on } \\partial\\Omega\n\\end{align}\nFor basic domains, such as rectangles, cylinders, or spheres, this can be solved in closed form. For general domains, the problem must be discretized and an approximate spectrum computed. \n obtain an approximate spectrum by discretizing the Laplace operator with a finite difference formula and applying a correction factor to the eigenvalues of the resulting matrix. There are many other approaches for computing the spectrum of the Laplacian with various boundary conditions; see, e.g., for an approach using the spectral element method for calculating both Dirichlet and Neumann spectrum in complex geometries. 
\nEvaluation of $s(\\lambda_\\ell)$, where $s$ denotes the spectral density \\eqref{e:spectral_density} in \\eqref{e:spectral_expansion_kernel}, is typically not difficult since $s$ is available in closed form for many stationary kernels, such as the squared exponential (SE) and Mat\\'ern ($M_{\\nu}$) kernels:\n\\begin{align}\ns_{\\text{SE}}(|\\bm{\\omega}|; \\gamma, \\theta) &=\n\\gamma^2 (2\\pi\\theta^2)^{\\frac{d}{2}}e^{-\\frac{|\\bm{\\omega}|^2 \\theta^2}{2}},\\\\\ns_{M_{\\nu}}(|\\bm{\\omega}|; \\gamma, \\theta) &= \n\\gamma^2 \\frac{2^d \\pi^{\\frac{d}{2}} (2\\nu)^\\nu \\Gamma(\\nu + \\frac{d}{2})}\n{\\theta^{2\\nu}\\Gamma(\\nu) }\n\\left(\n\\frac{2\\nu}{\\ell^2} + |\\bm{\\omega}|^2\n\\right)^{-\\frac{2\\nu + d}{2}}. \n\\end{align}\nNext, we review from and how the formulas for Gaussian processes regression and training can be expressed using the the formulation \\eqref{e:spectral_expansion_kernel}. \nGiven $n$ data points $\\{(\\mathbf{x}_i, y_i)\\}_{i=1}^n$, \nthe covariance matrix is approximated\nusing \\eqref{e:spectral_expansion_kernel} as\n\\begin{equation}\nK_{ij} = k(\\mathbf{x}_i,\\mathbf{x}_j) \\approx \n\\sum_{\\ell=1}^m \n\\phi_{\\ell}(\\mathbf{x}_i) s(\\lambda_\\ell) \n\\phi_{\\ell}(\\mathbf{x}_j).\n\\end{equation}\nIntroducing the $n \\times m$ matrix $\\Phi$,\n\\begin{equation}\n{\\Phi}_{i \\ell} = \n\\phi_{\\ell}(\\mathbf{x}_i), \n\\quad 1 \\le i \\le n,\n\\quad 1 \\le \\ell \\le m, \n\\end{equation}\nand the $m \\times m$ matrix \n$\\Lambda = \\text{diag}(s(\\lambda_{\\ell})), 1 \\le \\ell \\le m,$\nthis can be written\n\\begin{equation}\\label{e:big_k_spectral_matrix}\n{K} \\approx {\\Phi} \\Lambda {\\Phi}^\\top. 
\n\\end{equation}\nThus, the covariance matrix $K$ is diagonalized and, for a point $\\mathbf{x}^*$, we can write the $n \\times 1$ vector\n\\begin{equation}\\label{e:little_k_spectral_matrix}\n\\mathbf{k}_* = \\left[k(\\mathbf{x}^*, \\mathbf{x}_i)\\right]_{i=1}^n \\approx \n\\left[\n\\sum_{\\ell=1}^m \n\\phi_{\\ell}(\\mathbf{x}_i) s(\\lambda_\\ell) \n\\phi_{\\ell}(\\mathbf{x}^*)\n\\right]_{i=1}^n\n=\n\\Phi \\Lambda \\bm{\\Phi}_*, \n\\end{equation}\nwhere the $m \\times 1$ vector $\\bm{\\Phi}_*$ is defined by\n\\begin{equation}\n\\left[\\bm{\\Phi}_*\\right]_{\\ell} = \\phi_\\ell(\\mathbf{x}^*),\n\\quad\n1 \\le \\ell \\le m.\n\\end{equation}\nThe Woodbury formula can be used to\nobtain the following expressions for the posterior mean and variance over\na point $\\mathbf{x}^*$ given a Gaussian likelihood \n$y_i = f(x_i)+\\epsilon_i, \\epsilon_i \\sim \\mathcal{N}(0,\\sigma^2)$ :\n\\begin{align}\\label{e:regression_formulas_bc}\n\\begin{split}\n\\mathbb{E}[f(\\mathbf{x}^*)] &= \\mathbf{k}_*^\\top \n(K + \\sigma^2 I)^{-1} \\mathbf{y} \\\\\n&=\n\\bm{\\Phi}_*^\\top \n(\\Phi^\\top \\Phi + \\sigma^2 \\Lambda^{-1} )^{-1}\n\\Phi^\\top \\mathbf{y}. \\\\\n\\mathbb{V}[f(\\mathbf{x}^*)] &= \nk(\\mathbf{x}^*,\\mathbf{x}^*) - \n\\mathbf{k}_*^\\top (K + \\sigma^2 I)^{-1} \\mathbf{k}_* \\\\\n&=\n\\sigma^2 \\bm{\\Phi}_*^\\top \n(\\Phi^\\top \\Phi + \\sigma^2 \\Lambda^{-1})^{-1} \\bm{\\Phi}_*.\n\\end{split}\n\\end{align}\nStrategies for using this method with non-Gaussian likelihoods are also discussed by , although we do not go over them here. 
\nFor use in hyperparameter training, the following formulas were derived in and for the \nnegative log-marginal-likelihood \n\\begin{align}\n\\label{e:nlml_bc}\n&\\phantom{\\partial}\\begin{multlined} \n-p(\\bold{y}|X,\\bm{\\theta}) =\n\\frac{n-m}{2} \\log \\sigma^2\n+\\frac{1}{2} \\sum_{\\ell = 1}^{m} \n{\\color{black}\\log\\Big(}\n\\Lambda_{\\ell,\\ell}\n{\\color{black}\\Big)}\n+\\frac{1}{2} \\log \\det \\left( \\sigma^2 \\Lambda^{-1} + \\Phi^\\top \\Phi \\right)\n+\\frac{n}{2} \\log(2\\pi) \\\\\n+\\frac{1}{2\\sigma^2}\\left[\n\\mathbf{y}^\\top \\mathbf{y} - \\mathbf{y}^\\top \\Phi\n\\left(\\sigma^2 \\Lambda^{-1} + \\Phi^\\top \\Phi \\right)^{-1} \\Phi^{\\color{black}\\top}\n \\mathbf{y}\n\\right],\n\\end{multlined}\n\\end{align}\nand in for its derivative:\n\\begin{align}\n\\label{e:nlml_bc_derivatives}\n\\begin{split}\n&\\begin{multlined}\n-\\frac{\\partial p(\\bold{y}|X,\\bm{\\theta})}{\\partial \\theta_k} = \n\\frac{1}{2} \\sum_{\\ell = 1}^m \\frac{1}{\\Lambda_{\\ell,\\ell}} \n\\frac{\\partial \\Lambda_{\\ell,\\ell}}{\\partial \\theta_k}\n-\n\\frac{\\sigma^2}{2} \\text{Tr}\\left(\n\\left(\\sigma^2 \\Lambda^{-1} + \\Phi^\\top \\Phi\\right)^{-1}\n\\Lambda^{-2} \\frac{\\partial \\Lambda}{\\partial \\theta_k}\n\\right) \\\\\n-\\mathbf{y}^\\top \\Phi \n\\left( \\sigma^2 \\Lambda^{-1} + \\Phi^\\top \\Phi \\right)^{-1}\n\\left( \\Lambda^{-2} \\frac{\\partial \\Lambda}{\\partial \\theta_k} \\right)\n\\left( \\sigma^2 \\Lambda^{-1} + \\Phi^\\top \\Phi \\right)^{-1}\n\\Phi^\\top \\mathbf{y}\n,\\end{multlined}\n\\\\\n&\\begin{multlined}\n-\\frac{\\partial p(\\bold{y}|X,\\bm{\\theta})}{\\partial \\sigma^2} = \n\\frac{n-m}{2\\sigma^2}\n+\n\\frac{1}{2}\n\\text{Tr}\n\\left(\n\\left( \\sigma^2 \\Lambda^{-1} + \\Phi^\\top \\Phi \\right)^{-1}\n\\Lambda^{-1}\n\\right)\n\\\\ + \\frac{1}{2\\sigma^2}\n\\mathbf{y}^\\top \\Phi \\left( \\sigma \\Lambda^{-1} + \\Phi^\\top \\Phi \\right)^{-1}\n\\Lambda^{-1}\n\\left( \\sigma \\Lambda^{-1} + \\Phi^\\top \\Phi \\right)^{-1}\n\\Phi^\\top 
\mathbf{y} \\
\REVISION{-
\frac{1}{2\sigma^4} \left[
\mathbf{y}^\top \mathbf{y} - \mathbf{y}^\top \Phi
\left(\sigma^2 \Lambda^{-1} + \Phi^\top \Phi \right)^{-1} \Phi^\top
 \mathbf{y}
\right].}
\end{multlined}
\end{split}
\end{align}
Note that $\Lambda$ is defined by the spectral density $s$ of the kernel $k$, which clearly depends on the kernel hyperparameters $\bm{\theta} = [\theta_i]$, whereas $\Phi$ does not. Typically, derivatives of $\Lambda$ with respect to $\theta_i$ can be computed in closed form, which, together with the formulas \eqref{e:nlml_bc} and \eqref{e:nlml_bc_derivatives}, enables accurate first-order optimization of the kernel hyperparameters.
The expansion was originally developed for the computational advantages of using a low-rank approximation to a kernel (see Section \ref{sec:spectral_low_rank} for a discussion of this aspect) rather than for boundary condition constraints. Consequently, existing discussions have focused only on periodic and zero Dirichlet boundary conditions. One possible way 
to constrain a Gaussian process $f$ to satisfy nonzero Dirichlet conditions would be to write $f = (f - g) + g$, where $g$ is a harmonic function that satisfies the given nonzero Dirichlet condition, and to model $f-g$ as a Gaussian process satisfying a zero Dirichlet condition using the above approach. 
It has been remarked that the method could also be extended to Neumann boundary conditions by using the Neumann eigenfunctions of the Laplacian, although no examples have been given. Another limitation is that spectral expansions have so far been considered only for isotropic kernels, though the approach can likely be extended to the nonisotropic case.
\label{sec:computation_considerations}
In this section, we describe methods that can help reduce the computational cost of constructing constrained GP models. Typically, building a constrained GP is significantly more expensive than training an unconstrained GP because of larger data sets representing derivative constraints, bounds, etc.\ at virtual points. Consequently, computationally efficient strategies for building constrained GPs are paramount.
In Section \ref{sec:mvn} we discuss the truncated multivariate normal distribution, which is a fundamental component of the approaches discussed in Sections \ref{sec:transform_output}, \ref{sec:daveiga}, \ref{sec:splines}, and \ref{daveiga_monotonic}. We then discuss the related problem of maximum likelihood estimation of the hyperparameters of constrained GPs constructed using the spline approach discussed in Sections \ref{sec:splines}, \ref{sec:splines_monotonic}, and \ref{sec:convexity}. 
The final subsection, \ref{sec:low_rank}, focuses on reducing the numerical linear algebra cost of inference using low-rank and Kronecker methods. 
The majority of approaches surveyed in these two sections were developed for unconstrained GPs; however, some methods have been applied in the constrained setting. Since such numerical recipes are the focus of much deeper survey articles, we have intentionally kept our discussion short, while providing references to applications in constrained GPR where available.
The \\REVISION{density $\\mathcal{TN}\\left(\\bm{\\mu}, \\Sigma, \\mathbf{a}, \\mathbf{b} \\right)$} of the truncated normal can be expressed as\n\\begin{equation}\\label{e:truncated_normal}\n\\REVISION{\\mathcal{TN}\\left(\\bold{x}; \\bm{\\mu}, \\Sigma, \\mathbf{a}, \\mathbf{b} \\right)\n=\n\\frac{\\mathds{1}_{\\{\\mathbf{a} \\le \\mathbf{x} \\le \\mathbf{b}\\}}(\\bold{x})}{C}\n\\mathcal{N}\\left(\\bold{x}; \\bm{\\mu}, \\Sigma \\right),}\n\\end{equation}\nwhere the normalization constant\n\\begin{align}\\label{e:truncated_normalization}\n\\begin{split}\nC\n&=\n\\int_{a_1}^{b_1}\n\\int_{a_2}^{b_2}\n...\n\\int_{a_d}^{b_d}\n\\mathcal{N}\\left(\\mathbf{x}; \\bm{\\mu}, \\Sigma\\right)\nd\\mathbf{x}_1\nd\\mathbf{x}_2\n...\nd\\mathbf{x}_d\\\\\n&=\n\\frac{1}{(2\\pi)^{\\frac{d}{2}}|\\Sigma|^{\\frac{1}{2}}}\n\\int_{a_1}^{b_1}\n\\int_{a_2}^{b_2}\n\\hdots\n\\int_{a_d}^{b_d}\n\\exp\\left({-\\frac{1}{2} (\\mathbf{x} - \\bm{\\mu})^\\top \\Sigma^{-1} (\\mathbf{x} - \\bm{\\mu})}\\right)\nd\\mathbf{x}_1\nd\\mathbf{x}_2\n...\nd\\mathbf{x}_d\n\\end{split}\n\\end{align}\nis the probability that a sample of $\\mathcal{N}(\\REVISION{\\bm{\\mu}}, \\Sigma)$ lies in \\REVISION{$\\{\\mathbf{a} \\le \\mathbf{x} \\le \\mathbf{b}\\}$}. \nFor general $\\Sigma$ and dimension $d$, computing the normalization constant and sampling from the truncated multinormal distribution \n\\eqref{e:truncated_normal} can be difficult and require specialized methods. Of course, from the definition \\eqref{e:truncated_normalization} these two problems are related. However, they appear in two different contexts. Calculating \\REVISION{integrals of the form \\eqref{e:truncated_normalization}, known as \\emph{Gaussian orthant probabilities},} is called for in constrained maximum likelihood estimation of the GPR hyperparameters, while sampling \\eqref{e:truncated_normal} is needed for posterior prediction in several approaches discussed above. 
Therefore, we discuss sampling first, and discuss evaluation of Gaussian orthant probabilities in the next section, Section \ref{sec:mle}. 
While there are several possible approaches to sampling from \eqref{e:truncated_normal}, simple Monte Carlo methods scale poorly to high dimensions. One such example -- rejection sampling from the mode -- was discussed in Section \ref{sec:spline_sampling}. 
In principle, it is possible to use a Metropolis-Hastings approach to sample the values of the knots, but the dimensionality of the chain for a large number of splines is likely to slow down its convergence.
Several Markov chain Monte Carlo (MCMC) methods have been studied for sampling the truncated multivariate normal posterior distribution that arises in the spline approach described in Section \ref{sec:splines}. 
Comparison of effective sample size metrics suggested that Hamiltonian Monte Carlo (HMC) is the most efficient sampler in that setting. A different approach for sampling \eqref{e:truncated_normal}, based upon elliptical slice sampling and the fast Fourier transform, has also been presented.
\label{sec:mle}
We now review maximum likelihood estimation of hyperparameters within the spline approach \REVISION{discussed in Sections \ref{sec:splines} and \ref{sec:splines_monotonic}}. 
\nThe starting point is the constrained log-marginal-likelihood function given the constraints $\\bm{\\xi} \\in \\mathcal{C}$, \\REVISION{where we have denoted $\\mathcal{C} = \\{\\mathbf{a} \\le \\bm{\\xi} \\le \\mathbf{b}\\}$}.\nThis is based on the posterior \\REVISION{density} $p_{\\bm{\\theta}}(\\mathbf{y} | \\bm{\\xi} \\in \\mathcal{C})$ of $\\mathbf{y}$ given the constraint $\\bm{\\xi} \\in \\mathcal{C}$, which by Bayes' rule can be expressed as\n\\begin{equation}\np_{\\bm{\\theta}}(\\mathbf{y} | \\bm{\\xi} \\in \\mathcal{C}) = \n\\frac{p_{\\bm{\\theta}}(\\mathbf{y}) P_{\\bm{\\theta}}(\\bm{\\xi} \\in \\mathcal{C} | \\Phi \\bm{\\xi} = \\mathbf{y})}{P_{\\bm{\\theta}}(\\bm{\\xi} \\in \\mathcal{C})}.\n\\end{equation}\nTaking the logarithm yields a constrained log-marginal-likelihood function:\n\\begin{align}\n\\begin{split}\n\\label{e:cmle}\n\\mathcal{L}_{\\text{cMLE}}\n&=\n\\log p_{\\bm{\\theta}}(\\mathbf{y} | \\bm{\\xi} \\in \\mathcal{C}) \\\\\n&= \n\\log p_{\\bm{\\theta}}(\\mathbf{y}) + \\log P_{\\bm{\\theta}}(\\bm{\\xi} \\in \\mathcal{C} | \\Phi \\bm{\\xi} = \\mathbf{y}) - \\log P_{\\bm{\\theta}}(\\bm{\\xi} \\in \\mathcal{C}) \\\\\n&=\\mathcal{L}_{\\text{MLE}} + \\log P_{\\bm{\\theta}}(\\bm{\\xi} \\in \\mathcal{C} | \\Phi \\bm{\\xi} = \\mathbf{y}) - \\log P_{\\bm{\\theta}}(\\bm{\\xi} \\in \\mathcal{C}).\n\\end{split}\n\\end{align}\nIn the first term, $p_{\\bm{\\theta}}(\\mathbf{y})$ refers to the probability density function of the random variable $\\mathbf{y}$ with hyperparameters $\\bm{\\theta}$; thus, the first term is simply the unconstrained log-marginal-likelihood \\eqref{eq:log_like} which we denote $\\mathcal{L}_{\\text{MLE}}$. In the second and third terms, $P_{\\bm{\\theta}}$ refers to the probability of the indicated events. 
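To make the last two terms concrete: both are probabilities that a Gaussian vector falls in $\mathcal{C}$, and in low dimensions they can be estimated by plain Monte Carlo. The following toy numpy sketch is our own illustration (the means, covariances, and the box are made up; realistic dimensions require the specialized quadratures discussed below):

```python
import numpy as np

def log_box_prob_mc(mu, Sigma, lo, hi, n=200_000, seed=0):
    """log P(lo <= xi <= hi) for xi ~ N(mu, Sigma), by plain Monte Carlo."""
    rng = np.random.default_rng(seed)
    x = rng.multivariate_normal(mu, Sigma, size=n)
    return np.log(np.all((x >= lo) & (x <= hi), axis=1).mean())

# Toy 2-d stand-ins for the prior xi ~ N(0, K) and the conditional
# xi | {Phi xi = y} ~ N(m_post, K_post); C is the positive orthant.
K = np.array([[1.0, 0.3], [0.3, 1.0]])
m_post = np.array([0.8, 0.5])
K_post = 0.2 * K
lo, hi = np.zeros(2), np.full(2, np.inf)

# L_cMLE = L_MLE + log P(xi in C | Phi xi = y) - log P(xi in C)
correction = (log_box_prob_mc(m_post, K_post, lo, hi)
              - log_box_prob_mc(np.zeros(2), K, lo, hi, seed=1))
# positive in this toy case: the conditional places more mass inside C than the prior
```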
\nAs $\\bm{\\xi}$ and $\\bm{\\xi} | \\{\\Phi \\bm{\\xi} = \\mathbf{y}\\}$ are both normally distributed by equations \\eqref{eq:multivar_norm} and \\eqref{eq:gpreg}, respectively, the last two terms in \\eqref{e:cmle} can be expressed as integrals of a normal \\REVISION{density} over \n$\\mathcal{C}$, just like the normalization constant \\eqref{e:truncated_normalization}.\n\\REVISION{Such integrals can be reduced to integrals over orthants, so the last two terms in \\eqref{e:cmle} are referred to as Gaussian orthant probabilities.}\nUnlike the sampling of \\eqref{e:truncated_normal}, for which computing such integrals can be avoided with MCMC, calculation of Gaussian orthant probabilities is unavoidable if the user wants to train the kernel hyperparameters using the constrained objective function \\eqref{e:cmle}, which we refer to as cMLE. A thorough discussion of numerical approaches to truncated Gaussian integrals is available in the literature. One such approach, the minimax exponential tilting method, reported to be feasible for quadrature of Gaussian integrals in dimensions as high as 100, has been used to compute the Gaussian orthant probabilities in \\eqref{e:cmle} and to compare cMLE with MLE. Another current drawback of cMLE is that the gradient of $\\mathcal{L}_{\\text{cMLE}}$ is not available in closed form, unlike the gradient of \n$\\mathcal{L}_{\\text{MLE}}$. \nIn that comparison, MLE was performed using an L-BFGS optimizer, while cMLE was performed using the method of moving asymptotes. This involved a numerical approximation to the gradient of $\\mathcal{L}_{\\text{cMLE}}$, which in our experience can impact the accuracy of the optimization. Although these numerical differences hamper direct comparison of MLE and cMLE, it was found that for the case of limited data, cMLE can provide more accurate estimation of hyperparameter values and confidence intervals than MLE. \nThe conditions under which MLE and cMLE yield consistent predictions of certain hyperparameters have also been studied. 
This was further studied in subsequent work, in which the authors perform an analysis of MLE and cMLE for the case of fixed-domain asymptotics, i.e., data in a fixed domain, as the number of data points tends to infinity. In this regime of dense data, the effect of constraints is expected to diminish. The authors show that MLE and cMLE yield consistent hyperparameters in this limit for the case of boundedness, monotonicity, and convexity constraints, and suggest quantitative tests to determine if the number of data points is sufficient to justify unconstrained MLE as opposed to the more expensive cMLE.", "id": "7de0b200-1765-49ff-92e9-88eb4c96cc6c", "level": "subsection", "origin_cites_number": 5, "parent_id": "d8114b96-b4e0-4478-b43f-0ac0ea15b28a", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Computational Considerations" ], [ "subsection", "Constrained Maximum Likelihood Estimation for Splines" ] ], "subsections": [], "title": "Constrained Maximum Likelihood Estimation for Splines" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:low_rank}\nAs pointed out in Section \\ref{sec:gpr}, inference in GPR using the entire training dataset (of size $N$) scales as $O(N^3)$ due to covariance matrix inversion. This is exacerbated by certain methods to enforce constraints, such as the linear PDE constraints in Section \\ref{pde_constraints}, which require the inclusion of ``virtual'' constraint points in the training data. There have been few studies on improving scalability of constrained GPs. Thus, in this section, we mention several promising approaches and possible applications to constrained GPs. \nSome strategies, including the subset of data approach, the inducing point approach, and the spectral expansion approach, are specific to covariance matrices of GPs. 
Other methods are based on general linear algebra techniques.", "id": "579208c0-c8b5-47c3-a5cd-26c05ee81d67", "level": "subsection", "origin_cites_number": 0, "parent_id": "d8114b96-b4e0-4478-b43f-0ac0ea15b28a", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Computational Considerations" ], [ "subsection", "Scalable Inference" ] ], "subsections": [ "f07febce-747a-463c-858b-d35bb8ab80f9", "ce5b8545-ac65-4894-acfa-6a272f238247", "cca02e94-8034-4279-83ba-da7319fef40b", "f0b4d828-b367-46b9-89ad-9acad5ae2be6" ], "title": "Scalable Inference" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:subset_of_data}\nOne notable feature of increasing the density of training data is that the covariance matrix tends to become more ill-conditioned, the result of partially redundant information being added to the matrix.\nIn such situations it is worthwhile to identify a \\emph{subset of data} that minimizes prediction error subject to a maximum dataset size constraint . \nGreedy methods involve sequentially choosing points in the domain that have the maximal predictive variance in order to reduce the uncertainty in the final GP.\nThis choice is natural and has connections to information-theoretic metrics;\nother metrics include cross-validation prediction error, the likelihood value, or Bayesian mean-square prediction error. \nRather than building a new covariance matrix and inverting it for each added point, one may take advantage of the Woodbury matrix inversion lemma and block-matrix inversions to efficiently compute the inverse of the covariance matrix.\nOther methods for performing subset selection are based on \\emph{local approximation}. 
\nFrequently, the function values far away from a point of interest may have little influence on the function value there.\nA simple strategy based on this idea is to select the nearest neighbors to the target point to form the prediction.\nThe local approximation GP approach combines such local approximation with a greedy search heuristic to identify a better set of points to minimize the mean-squared prediction error at the location of interest.\nUsing a subset of points to form the GP corresponds to selecting a subset of the rows/columns of a full covariance matrix to represent the dataset. This can be generalized to a broad set of low-rank approximations to the full covariance matrix based on \\emph{inducing points}. In these methods, a subset (size $m$) of the data is used to form an approximate likelihood or prior for the entire dataset; all of the data is used, but most of the data is modeled as being conditionally dependent on a few inducing points. This reduces the cost of inference from $O(N^3)$ to $O(Nm^2)$. The greedy methods discussed above may be applied to identify an optimal set of inducing points. \nSuch methods may be especially helpful for selection and placement of virtual points for enforcing constraints. However, to our knowledge, there have not been any studies of this. 
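The greedy maximum-variance heuristic mentioned above is simple to sketch. The following numpy snippet is our own minimal illustration (a unit-variance RBF kernel and a naive matrix inversion each round, rather than the incremental Woodbury updates described earlier):

```python
import numpy as np

def rbf(X, Y, ell=1.0):
    """Unit-variance squared-exponential kernel matrix."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def greedy_max_variance(X, m, noise=1e-6):
    """Sequentially add the candidate where the GP conditioned on the
    points chosen so far has the largest predictive variance."""
    chosen = [0]
    for _ in range(m - 1):
        S = X[chosen]
        K_SS = rbf(S, S) + noise * np.eye(len(chosen))
        K_xS = rbf(X, S)
        var = 1.0 - np.einsum("ij,jk,ik->i", K_xS, np.linalg.inv(K_SS), K_xS)
        var[chosen] = -np.inf  # never re-select a chosen point
        chosen.append(int(np.argmax(var)))
    return chosen

X = np.linspace(0.0, 4.0, 41)[:, None]
idx = greedy_max_variance(X, m=5)
# the selected points spread across the domain instead of clustering
```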
An important question is how to treat the two sets of data -- training and virtual -- using these approaches.", "id": "f07febce-747a-463c-858b-d35bb8ab80f9", "level": "subsubsection", "origin_cites_number": 4, "parent_id": "579208c0-c8b5-47c3-a5cd-26c05ee81d67", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Computational Considerations" ], [ "subsection", "Scalable Inference" ], [ "subsubsection", "Subset of data \\& Inducing point methods" ] ], "subsections": [], "title": "Subset of data \\& Inducing point methods" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:spectral_low_rank}\nThe approach of Section \\ref{spectral_expansion_approach} for boundary condition constraints can also be used for reduced rank GPR .\nAn expansion of a covariance kernel in terms of the eigenvalues of the Laplacian with periodic boundary values in an artificial box containing the data is used to approximate the covariance kernel, as in \\eqref{e:spectral_expansion_kernel}. The error of approximation should be small if the boundaries of the box are sufficiently far from the data locations. \nWith $m$ basis functions in the expansion \\eqref{e:spectral_expansion_kernel}, the formula \\eqref{e:regression_formulas_bc} implies that inverses are required only of matrices of size $m$. Therefore, inversion scales as $O(m^3)$, while multiplication for inference in \\eqref{e:regression_formulas_bc} scales as $O(m^2 N)$. 
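As a concrete 1-d illustration of this expansion (our own numpy sketch, not code from the referenced work: it uses the eigenfunctions of the Laplacian with zero boundary values on an artificial interval $[-L, L]$ together with the spectral density of the unit-variance RBF kernel; all variable names are ours):

```python
import numpy as np

# Rank-m approximation of the RBF kernel on an artificial box [-L, L]:
# expand in Laplacian eigenfunctions, weighted by the kernel's spectral density.
L, ell, m = 4.0, 1.0, 64
j = np.arange(1, m + 1)
omega = np.pi * j / (2 * L)          # square roots of the Laplacian eigenvalues
S = np.sqrt(2 * np.pi) * ell * np.exp(-0.5 * (ell * omega) ** 2)  # RBF spectral density

def Phi(x):
    """Eigenfunctions of the Laplacian on [-L, L] with zero boundary values."""
    return np.sin(omega * (x[:, None] + L)) / np.sqrt(L)

x = np.linspace(-1.0, 1.0, 25)       # data well inside the box
K_rank_m = (Phi(x) * S) @ Phi(x).T   # m-term approximation of K(x, x)
K_exact = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)
err = np.abs(K_rank_m - K_exact).max()   # tiny while the data stays far from +-L
```

Only the $N \times m$ basis matrix and the $m$ spectral weights are ever formed, matching the $O(m^3)$ inversion and $O(m^2 N)$ multiplication costs quoted above.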
\nMoreover, formulas \\eqref{e:nlml_bc} and \\eqref{e:nlml_bc_derivatives} and the fact that $\\Phi$ does not depend on the kernel hyperparameters imply that the same reduced-rank advantage is present in hyperparameter training via MLE.", "id": "ce5b8545-ac65-4894-acfa-6a272f238247", "level": "subsubsection", "origin_cites_number": 2, "parent_id": "579208c0-c8b5-47c3-a5cd-26c05ee81d67", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Computational Considerations" ], [ "subsection", "Scalable Inference" ], [ "subsubsection", "Spectral expansion approximation" ] ], "subsections": [], "title": "Spectral expansion approximation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:hierarchical_matrices}\nRather than trying to reduce the effective size of the training and virtual constraint points, it is possible to simply approximate covariance matrix inversion using more general numerical linear algebra techniques.\nWe expect such methods to extend more readily to constrained GPs than the methods of Section \\ref{sec:subset_of_data}, although they may be less optimal and may not inform the placement of virtual constraint points. \nThe pseudo-inverse is often used to avoid the small eigenvalues that can corrupt predictions, although the singular value decomposition or eigenvalue decomposition are both computationally expensive as well.\nHierarchical matrices are an efficient way of approximating the full covariance matrix in a manner amenable to fast inversion. Sub-blocks of the matrix are replaced with fixed-rank approximations using a computational tree to organize the hierarchy of the matrix, and operations on the full matrix such as multiplication or inversion can be accomplished efficiently by operating on each of the individual ``leaves'' of the tree. Such hierarchical matrix methods have also been applied to maximum likelihood estimation for Gaussian processes. 
\nAn alternative to inverting the covariance matrix is to\nset up an optimization problem for a matrix such that the error between the product of the matrix with the covariance matrix and the identity matrix is minimized. The solution to this optimization problem is, of course, the precision matrix,\nbut by adding an $L_1$ penalty term on the entries of the matrix to the objective function as in LASSO regression, sparsity will be induced in the result. This estimator is referred to as CLIME.\nAnother popular approach for modeling sparsity in random processes is to use a Gaussian Markov random field (GMRF). In a GMRF, the data may be seen as forming an undirected graph where points close to each other in data-space are connected by an edge, and points far from each other are not. Thus, while all points are correlated with each other, most pairs are conditionally independent; this translates to a sparse precision matrix whose only off-diagonal entries correspond to points that are connected to each other. Different choices of the covariance widths or kernels, such as truncated or tapered kernels, yield different levels of sparsity in the final precision matrix.", "id": "cca02e94-8034-4279-83ba-da7319fef40b", "level": "subsubsection", "origin_cites_number": 9, "parent_id": "579208c0-c8b5-47c3-a5cd-26c05ee81d67", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Computational Considerations" ], [ "subsection", "Scalable Inference" ], [ "subsubsection", "Linear Algebra Techniques" ] ], "subsections": [], "title": "Linear Algebra Techniques" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{kronecker}\nConstraints that rely on non-Gaussian likelihood were reviewed in Sections \\ref{sec:likelihood_formulations} and \\ref{constrained_likelihood_with_derivative_information}. 
Recent work focuses on scalable inference with non-Gaussian likelihoods on dense multi-dimensional grids.\nThe key assumption enabling the use of Kronecker formulas is that \nthe inputs $X$ are on a multi-dimensional Cartesian grid\n\\begin{equation}\nX=X_1\\otimes X_2\\otimes\\ldots\\otimes X_d\n\\end{equation}\nand the GP\nkernel in \\eqref{eq:gpkernel} is formed as a product of kernels across \ninput dimensions \n\\begin{equation}\nK(X,X)=K_1(X_1)\\otimes K_2(X_2)\\otimes\\ldots\\otimes K_d(X_d).\n\\end{equation}\nUnder these conditions the storage requirements are reduced from $O(N^2)$ to $O(dN^{2/d})$ and the complexity of inversion is reduced from $O(N^3)$ to $O(dN^{(d+1)/d})$, where $N$ is the cardinality of the full tensor\ngrid, i.e., the number of data points, and $N^{1/d}$ is the number of input points in each dimension. \nWe review the key Kronecker algebra results, including efficient matrix-vector \nmultiplication and eigendecomposition. For matrix-vector operations,\n\\[\n(A\\otimes B)\\,\\mathrm{vec}(X)=\\mathrm{vec}\\left(B\\,X\\,A^\\top\\right),\n\\]\nwhere $v = \\mathrm{vec}(V)$ converts column-major formatted matrices to vectors.\nFor higher dimensions, the expression above is applied recursively. In this \napproach, the full matrix is never formed, and individual steps rely only on\noperations with individual kernels $K_i$. 
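Both the matrix-vector identity above and the eigendecomposition-based inverse derived next are easy to verify numerically. A small numpy check (our own illustration, with the factor kernels replaced by random symmetric positive definite matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
K1 = rng.standard_normal((3, 3)); K1 = K1 @ K1.T   # SPD stand-in for a factor kernel
K2 = rng.standard_normal((4, 4)); K2 = K2 @ K2.T
sigma2 = 0.1

# Matrix-vector product without forming the full Kronecker matrix:
# (A kron B) vec(X) = vec(B X A^T), with column-major vec.
X = rng.standard_normal((4, 3))
lhs = np.kron(K1, K2) @ X.flatten(order="F")
rhs = (K2 @ X @ K1.T).flatten(order="F")

# Inverse of K + sigma^2 I from per-factor eigendecompositions K_i = Q_i L_i Q_i^T.
l1, Q1 = np.linalg.eigh(K1)
l2, Q2 = np.linalg.eigh(K2)
Q = np.kron(Q1, Q2)
inv_via_factors = Q @ np.diag(1.0 / (np.kron(l1, l2) + sigma2)) @ Q.T
inv_direct = np.linalg.inv(np.kron(K1, K2) + sigma2 * np.eye(12))
```

Note that only the small factor matrices are ever decomposed; forming `np.kron(...)` here is purely for verification.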
To compute the inverse \n$(K(X,X)+\\sigma^2 I)^{-1}$ in \\eqref{eq:gpreg}, we use the eigendecomposition\nfor each kernel $K_i=Q_i^\\top\\Lambda_i Q_i$, which results in\n\\[\nK+\\sigma^2 I =(Q_1^\\top\\otimes Q_2^\\top\\otimes\\ldots \\otimes Q_d^\\top)(\\Lambda_1\\otimes\\Lambda_2\\otimes\\ldots \\otimes \\Lambda_d\n+\\sigma^2 I )(Q_1\\otimes Q_2\\otimes\\ldots \\otimes Q_d).\n\\]\nThe inverse is evaluated as\n\\[\n(K+\\sigma^2 I)^{-1}=(Q_1^\\top\\otimes Q_2^\\top\\otimes\\ldots \\otimes Q_d^\\top)\n(\\Lambda_1\\otimes\\Lambda_2\\otimes\\ldots \\otimes \\Lambda_d +\\sigma^2 I )^{-1}(Q_1\\otimes Q_2\\otimes \\ldots \\otimes Q_d).\n\\]\nIn this framework, the inverse of the full matrix now consists of eigendecompositions\nof smaller matrices.", "id": "f0b4d828-b367-46b9-89ad-9acad5ae2be6", "level": "subsubsection", "origin_cites_number": 1, "parent_id": "579208c0-c8b5-47c3-a5cd-26c05ee81d67", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Computational Considerations" ], [ "subsection", "Scalable Inference" ], [ "subsubsection", "Hierarchical Decomposition for non-Gaussian likelihoods" ] ], "subsections": [], "title": "Hierarchical Decomposition for non-Gaussian likelihoods" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:conclusion}\nInterest in machine learning for scientific applications has intensified in recent \nyears, in part due to advances in algorithms, data storage, and computational \nanalysis capabilities . Fundamental challenges still remain when developing \nand using a machine learning model for scientific applications which must satisfy physical \nprinciples. Enforcing such principles as constraints helps ensure the behavior of the \nmodel is consistent with prior physical knowledge when queried in an extrapolatory \nregion. 
In other words, in addition to supplementing limited or expensive scientific data, \nconstraints help improve the generalizability of the model in ways that simply increasing dataset size may not. \nMany approaches have been developed to perform ``physics-informed machine learning.'' \nIn this survey, we have focused on constraint implementation in Gaussian processes, \nwhich are popular as machine-learned metamodels or emulators for computational simulations. \nOur survey focused on several important classes of constraints for Gaussian processes.\nThese included positivity or bound constraints on the Gaussian processes in Section \\ref{sec:bound_constraints}.\nWhen positivity constraints are applied to the derivatives of a Gaussian process, they lead to monotonicity and convexity constraints as in Section \\ref{sec:monotonicity_constraints}. This is a special example of regression with a linear transformation of a Gaussian process, which is the basis of Gaussian processes constrained by linear differential equations reviewed in Section \\ref{sec:pde_constraints}. We discuss boundary value constrained Gaussian processes in Section \\ref{sec:boundary_constraints}. \nThroughout, we see that constraints can be enforced in an implicit way through data that satisfies the constraint, by construction of a tailored sample space, by derivation of a constrained covariance kernel, or by modifying the output or likelihood of the Gaussian process. The constraints may be enforced in a ``global sense'', at a finite set of ``virtual'' or ``auxiliary'' points, or only in an approximate sense. We have pointed to these aspects as key features distinguishing the constraints in this survey. \nConstraints introduce new practical challenges into the GPR framework. 
These include: the analytical construction of sample spaces, transformations, or covariance kernels that inherently provide constraints; the sampling of truncated multivariate normals or intractable posterior distributions that arise when using non-Gaussian likelihoods; increased data and covariance matrix size when enforcing constraints with ``virtual'' data that leads to expanded ``four-block'' covariance; calculation of eigenvalues/eigenfunctions in bounded domains with complex geometry; the placement of virtual points or construction of spline grids in higher dimensions; and maximum likelihood training (optimization) of the hyperparameters of constrained Gaussian processes. Numerical issues are the focus of Section \\ref{sec:computation_considerations}. In that section, we have also reviewed established numerical strategies for accelerating GPR. Some of these techniques have been applied to constrained Gaussian processes in the literature, while others have not. In general, the adaptation of computational strategies to constrained GPR is a relatively new field, and best practices have not yet been established. Moreover, while several codebases have been developed for constrained GPR, such as \\texttt{GPStuff} for non-Gaussian likelihoods and the \\texttt{lineqGPR} package for the spline approach including constrained MLE, constraints have not made their way into the most widely used production codes for GPR. Furthering these computational aspects of constrained GPR remains a promising area for future work. \nThe field of constrained Gaussian processes has made significant advances over the past decade, and we expect significant development to continue. 
The purpose of this survey, while non-exhaustive, has been to catalog and investigate some of the more common approaches, guide the practitioner to identify which strategies are most appropriate for his or her needs, and point out the new computational challenges of constrained Gaussian processes and how they can be approached. \n\\section*{Acknowledgements}\nThis work was completed with funding granted under Sandia's Laboratory Directed \nResearch and Development program.\nSandia National Laboratories is a multi-mission laboratory managed and operated\nby National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's \nNational Nuclear Security Administration under contract {DE-NA0003525}. \nThis paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. Report number: SAND2020-6086J. \n\\bibliographystyle{plainnat}\n\\bibliography{gpldrd} \n\\end{document}", "id": "b29833f2-8093-4df3-938f-72d7f97ec09e", "level": "section", "origin_cites_number": 4, "parent_id": "86a0553c-8edc-49a7-8382-330c5ec7b9b7", "prefix_titles": [ [ "title", "A Survey of Constrained Gaussian Process Regression: \\\\ Approaches and Implementation Challenges" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
13
[ 8602, 7624, 2924, 2925, 8601, 7626, 8603, 7625, 2926, 2927, 2928, 8604, 7627, 2929, 2930, 7628, 8605 ]
0.713923
[ "Nils Barlaug", "Jon Atle Gulla" ]
Neural Networks for Entity Matching: A Survey
2020
2020-10-21T15:36:03Z
cs.DB
Entity matching is the problem of identifying which records refer to the same real-world entity. It has been actively researched for decades, and a variety of different approaches have been developed. Even today, it remains a challenging problem, and there is still generous room for improvement. In recent years we have seen new methods based upon deep learning techniques for natural language processing emerge. In this survey, we present how neural networks have been used for entity matching. Specifically, we identify which steps of the entity matching process existing work have targeted using neural networks, and provide an overview of the different techniques used at each step. We also discuss contributions from deep learning in entity matching compared to traditional methods, and propose a taxonomy of deep neural networks for entity matching.
[ { "cite_extract_rate": 0, "cites": [], "content": "", "id": "87eb62ae-9acb-4832-a104-61ba9ef70d9a", "level": "root", "origin_cites_number": 0, "parent_id": null, "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ] ], "subsections": [ "a19edbb8-0e15-4ba1-8c4d-64ad0c17496c", "1dd05a75-3b06-4b4f-9d39-e341e53b4276", "8c2fe9d4-101e-4ae1-962e-0c7abef458fc", "707fddc2-c0f3-4aad-a358-d6ad6a50dcfd", "1d86a0ee-e68c-446d-a9f8-bf0bc4cdd113", "1bfbeb8a-a767-4676-8769-a5b1beda49f5", "a8c2927a-9391-4fe5-bfbd-005c3a30f0af", "d304df7c-b0c7-4edd-a412-1e2739d185fb", "19acfdb5-671d-4d5c-949d-07b6acad746e" ], "title": "root" }, { "cite_extract_rate": 0, "cites": [], "content": "Our world is becoming increasingly digitalized.\nWhile this opens up a number of new, exciting opportunities, it also introduces challenges along the way.\nA substantial amount of the value to be harvested from increased digitalization depends on integrating different data sources.\nUnfortunately, many of the existing data sources one wishes to integrate do not share a common frame of reference.\nFor example, let us say a factory wants to use statistics from equipment maintenance logs to decide which equipment to prioritize for upgrades.\nCurrently, at this factory, equipment inventory is kept in one system, while maintenance logs are kept in a separate system.\nSadly, these two systems do not refer to equipment in the same way -- i.e., there are no common identifiers or names across the two systems.\nWhile it is possible for a human to identify which maintenance logs belong to which equipment in the inventory system, there is no simple, automatic way to tie the maintenance logs to the inventory records.\nEntity matching is the field of research dedicated to solving the problem of identifying which records refer to the same real-world entity.\nIt is an important data integration task that often arises when data originate from different sources.\nThe records are usually assumed to 
either be from two different data sources without duplicates or from the same data source with duplicates.\nIt is not a new problem.\nA group of similar problems has been studied for a long time in a variety of fields under different names (see Section \\ref{sec:background}).\nDespite having been researched for decades, entity matching remains a challenging problem in practice.\nThere are several factors that make it difficult in general:\n\\begin{itemize}\n \\item \\textbf{\n Poor data quality\n }:\n Real-world data is seldom completely clean, structured, and homogeneous.\n Data originating from manual insertion can contain typos, alternative spellings, or fail to comply with the schema (e.g., mixing first and last name).\n Automatic processes extracting information from unstructured sources might not always be accurate on the scope of attributes (e.g., \\texttt{\\{firstName: \"John Smith\", lastName: \"Programmer\"\\}}).\n Furthermore, some data might simply be missing.\n Data in entity matching is often assumed to be structured in records.\n However, it is not unusual that these records are in practice semi-structured because of certain unstructured string attributes -- opening up a world of possible inconsistencies --\n for example, a \\texttt{name} attribute (\\texttt{\"John Smith\"}, \\texttt{\"Smith, John\"}, \\texttt{\"John R. 
Smith\"}, \\texttt{\"John Richard Smith\"}) or an \\texttt{adress} attribute.\n In addition, we cannot always expect different data sources to follow the same schema, format, and syntactic conventions.\n \\item \\textbf{The large number of possible matches}:\n Given $|A|$ records from one data source and $|B| \\in \\Theta(|A|)$ from another, there are $\\Theta(|A|^2)$ possible matches.\n We would normally expect the number of positive matches to be $O(|A|)$.\n This has two important implications.\n First, it is infeasible to explicitly compare all possible pairs for any nontrivial number of records.\n Second, there is an extreme imbalance between positive and negative matches;\n more specifically, there are $\\Omega(|A|)$ times as many negative as positive matches.\n The potential for false positives is inherently greater.\n If one wants to use a learning-based approach,\n it can be difficult to label enough positive examples, since they occur in an ocean of negative examples.\n \\item \\textbf{Dependency on external human knowledge and interaction}:\n The space of potential entity matching problem instances is unbounded and offers great variety.\n While a substantial part of the instances can of course, in theory, be solved automatically,\n in many real-world instances,\n it is either unrealistic or impossible to perform matching as an automatic, isolated process, as the data sources simply do not contain all necessary information.\n Moreover, to perform matching, our solution has to interact with human experts and make use of their knowledge.\n Human interaction is in itself a complex domain.\n\\end{itemize}\nDeep learning has in recent years become an essential part of multiple research fields,\nmost notably in fields such as computer vision and natural language processing, which are concerned with unstructured data.\nIts most prominent advantage over earlier approaches is its ability to learn features instead of relying on carefully handcrafted features 
.\nResearchers have already realized the potential advantage of deep learning for entity matching \\cite[e.g.,][]{Ebraheem2018-ws, Mudgal2018-cx}.\nIn this survey, we aim to summarize the work done so far in the use of neural networks for entity matching.", "id": "a19edbb8-0e15-4ba1-8c4d-64ad0c17496c", "level": "section", "origin_cites_number": 0, "parent_id": "87eb62ae-9acb-4832-a104-61ba9ef70d9a", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Introduction" ] ], "subsections": [ "d94fc646-a2ad-4c21-b8e4-322b9e206df8", "b63933f0-bd68-446a-9c07-e659c4581a8a", "27382712-d7a8-41b3-a2bb-0af735897a5b" ], "title": "Introduction" }, { "cite_extract_rate": 0, "cites": [], "content": "One of the challenges of comparing how neural networks are used in entity matching is that published methods often do not address the exact same problem.\nThey tend to cover somewhat different aspects of entity matching.\nWith this is in mind, we formulate the following research questions:\n\\begin{itemize}\n \\item How do methods using neural networks for entity matching differ in what they solve, and how do the methods that address the same aspects differ in their approaches?\n \\item What benefits and opportunities does deep learning provide for entity matching, and what challenges does it pose?\n \\item How can we categorize the different deep neural networks used for entity matching?\n\\end{itemize}", "id": "d94fc646-a2ad-4c21-b8e4-322b9e206df8", "level": "subsection", "origin_cites_number": 0, "parent_id": "a19edbb8-0e15-4ba1-8c4d-64ad0c17496c", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Introduction" ], [ "subsection", "Research questions" ] ], "subsections": [], "title": "Research questions" }, { "cite_extract_rate": 0, "cites": [], "content": "To answer our research questions, we provide the following main contributions:\n\\begin{itemize}\n \\item We use a reference model of the 
traditional entity matching process to identify which steps of the process existing work has targeted using neural networks\n and provide an overview of the different techniques that are used for each step.\n \\item We discuss the contributions of deep learning to entity matching compared to traditional approaches using a proposed reference model for a deep learning-based entity matching process.\n \\item We propose a taxonomy of deep neural networks for entity matching.\n \\item We discuss challenges and propose potential future work for deep learning in entity matching understood in the context of our reference entity matching process and deep network taxonomy.\n\\end{itemize}", "id": "b63933f0-bd68-446a-9c07-e659c4581a8a", "level": "subsection", "origin_cites_number": 0, "parent_id": "a19edbb8-0e15-4ba1-8c4d-64ad0c17496c", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Introduction" ], [ "subsection", "Main contributions" ] ], "subsections": [], "title": "Main contributions" }, { "cite_extract_rate": 0, "cites": [], "content": "First, as necessary background information, Section~\\ref{sec:background} will introduce the problem definition and give a brief introduction to neural networks.\nSection~\\ref{sec:related-work} mentions related work --- both publications that survey or summarize similar topics and problems that are similar to entity matching.\nWe then provide an overview of the surveyed methods using a reference model of the entity matching process as a framework in Section~\\ref{sec:process},\nbefore we in Section~\\ref{sec:contributions-from-deep-learning} take a step back and discuss contributions from deep learning to entity matching compared to more traditional approaches.\nWith those contributions in mind, we introduce a taxonomy of deep neural networks for entity matching in Section~\\ref{sec:taxonomy}.\nSection~\\ref{sec:evaluation} provides a brief overview of how evaluation is performed, along with 
reported comparative evaluations between deep learning approaches and traditional methods.\nFinally, we discuss challenges and opportunities for deep learning in entity matching in Section~\\ref{sec:challenges}.", "id": "27382712-d7a8-41b3-a2bb-0af735897a5b", "level": "subsection", "origin_cites_number": 0, "parent_id": "a19edbb8-0e15-4ba1-8c4d-64ad0c17496c", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Introduction" ], [ "subsection", "Outline" ] ], "subsections": [], "title": "Outline" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:background}\nThis section introduces the entity matching problem definition and its many names and variations.\nWhat follows is a brief introduction to neural networks and deep learning and how they are used with text.", "id": "1dd05a75-3b06-4b4f-9d39-e341e53b4276", "level": "section", "origin_cites_number": 0, "parent_id": "87eb62ae-9acb-4832-a104-61ba9ef70d9a", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Background" ] ], "subsections": [ "20afc726-9b7e-4d9d-a323-7586728c44b6", "bf1c3598-f0a8-4628-97d6-349499224757" ], "title": "Background" }, { "cite_extract_rate": 0, "cites": [], "content": "Let $A$ and $B$ be two data sources.\n$A$ has the attributes $(A_1, A_2, ..., A_n)$, and we denote records as $a = (a_1, a_2, ..., a_n) \\in A$.\nSimilarly, $B$ has the attributes $(B_1, B_2, ..., B_m)$, and we denote records as $b = (b_1, b_2, ..., b_m) \\in B$.\nA data source is a set of records,\nand a record is a tuple following a specific schema of attributes.\nAn attribute is defined by the intended semantics of its values.\nSo $A_i = B_j$ if and only if values $a_i$ of $A_i$ are intended to carry the same information as values $b_j$ of $B_j$,\nand the specific syntactics of the attribute values are irrelevant.\nAttributes can also have metadata (like a name) associated with them, but this does not affect the equality 
between them.\nWe call the tuple of attributes $(A_1, A_2, ..., A_n)$ the schema of data source $A$, and correspondingly for $B$.\nThe goal of entity matching is to find the largest possible binary relation $M \\subseteq A \\times B$ such that $a$ and $b$ refer to the same entity for all $(a,b) \\in M$.\nIn other words, we would like to find all record pairs across data sources that refer to the same entity.\nWe define an entity to be something of unique existence\\footnote{An entity does not have to be a physical object, but can also be abstract or conceptual -- e.g., a company or an event.}.\nAttribute values are often assumed to be strings, but that is not always the case.\nIt is important to note that the two record sets need not necessarily have the same schema.\nAspects beyond what the surveyed methods cover have been intentionally left out.\nFor example, we make no matches within each data source, only across the two.\nThat is not to say there cannot be duplicates within a data source.\nHowever, in this problem definition, we assume that we are not interested in finding them.\nIn practice, it is quite common to assume no duplicates within the data sources.\nIf we are explicitly interested in finding duplicates within a single data source, we can, as will be mentioned below, address duplicates in this formulation of the problem by simply having $A = B$.\nIn addition, there is also a more subtle assumption in this problem definition:\nThe record sets $A$ and $B$ are assumed to operate with the same taxonomic granularity.\nThis is not necessarily always the case.\nOne data source might refer to households; the other, to individuals,\nor two data sources could refer to street-level addresses and postal code areas, respectively.\nIn many cases, it would still make sense to match records that do not strictly refer to the same entity,\nbut rather refer to entities with some defined taxonomic closeness.\nWe leave this out of the definition for simplicity, as it 
does not affect our analysis of the surveyed methods.\nSomewhat ironically, as often pointed out, entity matching itself suffers from the problem of being referenced by many different names,\nsome referring to the exact same problem, while others are slight variations, specializations, or generalizations.\nIn addition, the names are not used completely consistently.\nTable~\\ref{tab:em-names} lists a selection of these names.\nWe will comment on a few.\n\\begin{table}[]\n \\centering\n \\caption{\n Some of the many names that are used for entity matching or similar variations of it.\n }\n \\begin{tabular}{l l l}\n Entity matching &\n Entity resolution &\n Record linkage \\\\\n Data matching &\n Data linkage &\n Reference reconciliation \\\\\n String matching &\n Approximate string matching &\n Fuzzy matching \\\\\n Fuzzy join &\n Similarity join &\n Deduplication \\\\\n Duplicate detection &\n Merge-purge &\n Object identification \\\\\n Re-identification &\n &\n \\end{tabular}\n \\label{tab:em-names}\n\\end{table}\n\\textit{Entity resolution}, \\textit{record linkage}, and \\textit{data matching} are frequently used for more or less the same problem as we defined above.\nIt is not unusual that $A$ and $B$ are assumed to have the same schema --- either because the schemas are, in fact, equal, or because some kind of schema matching has already been performed as a separate step.\nSometimes, fusing the matching pairs to one representation is considered a final step of the problem.\nIf we also have duplicates within each data source,\nit might be necessary to cluster and fuse more than two records at a time.\nIn this article, we will stick to the more narrow definition laid out above.\n\\textit{Deduplication} or \\textit{duplicate detection} is the problem of identifying which records in the same data source refer to the same entity,\nand can be seen as the special case $A = B$.\n\\textit{String matching} attempts to find strings that refer to the same entity\nand can 
be regarded as the special case $n = m = 1$, if strings are interpreted as single-attribute records.", "id": "20afc726-9b7e-4d9d-a323-7586728c44b6", "level": "subsection", "origin_cites_number": 0, "parent_id": "1dd05a75-3b06-4b4f-9d39-e341e53b4276", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Background" ], [ "subsection", "Problem definition" ] ], "subsections": [], "title": "Problem definition" }, { "cite_extract_rate": 0, "cites": [], "content": "We provide a brief and simplified description of neural networks and deep learning,\nfollowed by a short introduction to how deep learning is used in natural language processing.\nA comprehensive introduction to these topics is outside the scope of this survey.\nSee instead, for example, general introductions to deep learning, from which we will adapt some of our notation in the following paragraphs, and introductions to deep learning for natural language processing.\nA \\textit{neural network} is a machine learning model.\nWe wish to approximate some unknown function $f^*(\\bm{x})=\\bm{y}$ that can map from some interesting input $\\bm{x}$ to some desired output $\\bm{y}$.\nUsually, we will have some examples $D = \\{(\\bm{x}^{(j)}, \\bm{y}^{(j)}) | 1 \\leq j \\leq m \\}$, which are known to be such that $f^*(\\bm{x}^{(j)}) \\approx \\bm{y}^{(j)}$ for all $j$, to help guide us.\nTo approximate $f^*$ we define a function $f(\\bm{x};\\bm{\\theta})$ parameterized by $\\bm{\\theta}$,\nand then try to learn what $\\bm{\\theta}$ should be using the examples $D$.\nThis function $f$ is the neural network.\nEven though there are no strict requirements for what constitutes a neural network,\nthey usually follow a common recipe.\nGenerally, we let $f$ consist of one or more nested functions $f(\\bm{x}) = f_L(f_{L-1}(...f_1(\\bm{x})))$.\nEach such function $f_l$ would normally be a linear operation, like matrix multiplication, using the parameters $\\bm{\\theta}$ and 
then nested by a nonlinear element-wise operation.\nFor example, $f_l(\\bm{x}) = \\max(0, W \\bm{x} + \\bm{b})$, where both $W$ and $\\bm{b}$ are part of $\\bm{\\theta}$ and $\\max$ is element-wise.\nWe call these nested functions \\textit{layers},\nand $L$ is the \\textit{depth} of the network.\nWhen a neural network has several layers (no clear threshold), we call it a deep neural network.\nGiven a suitable network architecture $f$,\nwe try to find parameters $\\bm{\\theta}$ that will make it behave close to the examples $D$.\nWe first define a loss function $\\mathcal{L}(\\bm{y}, \\hat{\\bm{y}})$ quantifying how wrong a prediction $\\hat{\\bm{y}} = f(\\bm{x};\\bm{\\theta})$ is compared to the correct $\\bm{y}$.\nThen we randomly initialize $\\bm{\\theta}$ and perform some variant or descendant of stochastic gradient descent (SGD) with mini-batches:\n\\begin{equation*}\n \\bm{\\theta}_{t+1} = \\bm{\\theta}_t - \\alpha \\frac{1}{|\\widetilde{D}|} \\sum_{(\\bm{x}, \\bm{y}) \\in \\widetilde{D}} \\nabla_{\\bm{\\theta}_t} \\mathcal{L}(\\bm{y}, f(\\bm{x};\\bm{\\theta}_t))\n\\end{equation*}\nwhere $\\alpha$ is the learning rate,\nand $\\widetilde{D} \\subset D$ is a random mini-batch.\nThe stopping criterion and other details vary between methods.\nThis procedure is expensive, because it needs to evaluate $f$ and differentiate $\\mathcal{L}$ with respect to $\\bm{\\theta}$.\nTo make it efficient,\nwe make sure to choose $|\\widetilde{D}| \\ll |D|$ and also differentiate with the backpropagation algorithm.\nGenerally, we can interpret $f$ as a directed acyclic computational graph.\nThe backpropagation algorithm simply applies dynamic programming using the chain rule over this computational graph.\nThe real strength of deep learning is its ability to do hierarchical representation learning.\nWith modern techniques,\nmultilayered networks are able to learn useful features from relatively unstructured input data .\nThis is especially valuable for data such as images and 
text, which are notoriously hard to extract good features from with manually crafted procedures.", "id": "bf1c3598-f0a8-4628-97d6-349499224757", "level": "subsection", "origin_cites_number": 0, "parent_id": "1dd05a75-3b06-4b4f-9d39-e341e53b4276", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Background" ], [ "subsection", "Neural networks and deep learning" ] ], "subsections": [ "c38a38a1-5cfe-4e6f-aa9f-42d4da1bb299" ], "title": "Neural networks and deep learning" }, { "cite_extract_rate": 0, "cites": [], "content": "Many state-of-the-art methods for natural language processing are deep learning models \\cite[e.g.,][]{Vaswani2017-kk, Devlin2018-sz,Yang2019-hf}.\nCentral to all these methods is how text is transformed to a numerical format suitable for a neural network.\nThis is done through embeddings,\nwhich are translations from textual units to a vector space -- traditionally available in a lookup table.\nThe textual units will usually be characters or words.\nAn embeddings lookup table can be seen as parameters to the network and be learned together with the rest of the network end-to-end.\nThat way the network is able to learn good distributed character or word representations for the task at hand.\nThe words used in a data set are often not unique to that data set, but rather just typical words from some language.\nTherefore one can often get a head start by using pretrained word embeddings like word2vec , GloVe or fastText ,\nwhich have been trained on enormous general corpora.\nFollowing a rather recent trend,\nlarge pretrained networks that can produce contextualized word embeddings that take into account the surrounding words are also available .\nText is naturally interpreted as a sequence.\nIt is therefore perhaps not so surprising that neural networks designed for sequences are often used.\nOne way to model sequences is to use Convolutional Neural Networks (CNNs) --- first popularized by computer vision 
applications --- which have received considerable attention within the natural language processing community .\nHowever, the more prominent sequence models have been Recurrent Neural Networks (RNNs) and their variants.\nRNNs are constructed by repeating the same layer multiple times.\nEach layer takes both the output from the previous layer as well as some part of an input sequence.\nSo assuming the input to be a sequence $\\bm{x_1}, ..., \\bm{x_L}$,\nwe nest layers recursively as $\\bm{h_l} = f_l(\\bm{h_{l-1}}, \\bm{x_l}; \\bm{\\theta})$,\nwhere $\\bm{h}_l$ is called the hidden state.\nLayers share the same parameters,\nand the number of layers can therefore be dynamically adjusted to the length of the input sequence.\nThe last hidden state will, in theory, contain information about the whole input sequence.\nAdditional layers can be appended to further process this feature vector and produce some desired output.\nOutput sequences can be generated in a number of ways by setting the initial hidden state and then extracting the hidden state from different layers.\nRNNs themselves consist of a (dynamic) number of layers,\nbut it is also possible to nest several RNNs.\nWe then get what is called stacked RNNs.\nRNNs are relatively deep networks and are therefore prone to what is called vanishing gradients.\nThe gradients from the early layers become so small that they are ineffective in gradient descent.\nIn other words, the first parts of the input sequence have too little influence over the end result.\nTherefore, variants of RNNs such as Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU) are often used in practice.\nThey make sure that hidden states are more easily able to flow through the subsequent layers undisturbed,\nso that gradients will remain strong when backpropagated through many layers.\nDespite this improvement, the networks will still tend to be influenced more by the end of the input sequence than the beginning.\nIt has become quite common to have 
bidirectional RNNs ,\nwhich can be seen as combining two RNNs, where one of them processes the input sequence backward.\nAnother popular way to address the issue of skewed influence across sequences is to use attention mechanisms .\nThe idea is to let the network itself choose what parts of the input to focus on,\npotentially for several iterations.\nThis is typically achieved in a network by producing some normalized attention vector that is multiplied with the vector of interest.\n\\begin{figure}\n \\centering\n \\includegraphics{RNN-Transformer}\n \\caption{\n Illustration of the architecture for a two-stack uni-directional RNN encoder and a three-layer BERT-style encoder for natural language processing.\n Let $(x_1, x_2, \\dots, x_l, \\dots, x_L)$ be the input sequence, and $e_l$ be an embedding for $x_l$.\n Both a standard RNN and LSTM block are illustrated for the RNN architecture.\n Notice the additional \\textit{context} state $C_l$ for LSTM, which can more easily carry gradients.\n Inspired by illustrations in .\n }\n \\label{fig:rnn-transformer}\n\\end{figure}\nWhile initially used as an enhancement to RNNs,\nnetworks based almost solely on attention have recently started to proliferate \nand are currently considered state-of-the-art for many, if not most, natural language processing tasks.\nWe call these Transformer-based networks --- as originally named in the work that introduced them for machine translation.\nIn contrast to RNN-based networks, they are not sequential with respect to the input sequence.\nSee Figure~\\ref{fig:rnn-transformer} for an illustration of an RNN and Transformer encoder.\nThis makes them more parallel, which again makes it easier to leverage modern, highly parallel hardware.\nIn addition, one avoids prohibitively deep networks (due to vanishing gradients) for long input sequences. 
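As a concrete illustration (a minimal sketch, not taken from any surveyed method), the scaled dot-product attention at the core of these networks can be written in a few lines of Python with NumPy. Here \texttt{Q}, \texttt{K}, and \texttt{V} are assumed to be query, key, and value matrices derived from the input embeddings:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention.

    Q: (L, d_k) queries, K: (L, d_k) keys, V: (L, d_v) values.
    Every position attends to every other position in a single
    matrix operation, so there are no long recurrent paths
    between distant positions.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (L, L) pairwise scores
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted mix of values
```

In a full Transformer layer, \texttt{Q}, \texttt{K}, and \texttt{V} are linear projections of the same input sequence (self-attention), and several such attention heads are computed in parallel and concatenated.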
\nEach layer performs self-attention over the whole input sequence, effectively removing the long paths between cells of RNNs that make it so hard to learn long-range dependencies.\nSince transformer networks are architecturally agnostic to the input sequence order,\nthey are instead fed positional information through the input as positional embeddings.\nOne particularly influential recent trend has been the ability to leverage huge pretrained models that have been trained unsupervised for language modeling on massive text corpora --- similar to what the computer vision community has done for a while.\nThey produce contextualized word embeddings that take into account the surrounding words.\nThe embeddings can be used as a much more powerful variant of the classical word embeddings,\nbut as popularized by BERT , one can also fine-tune the network to the task at hand.\nTake BERT as an example.\nIt is pretrained jointly on masked language modeling and next sentence prediction.\nThe input during training is a special [CLS] token, followed by the two sentences, each terminated by a special [SEP] token.\nThe [CLS] token's output from the network is used for the next sentence classification.\nEach token's embedding is augmented with a positional embedding and a segment embedding indicating which sentence it belongs to.\nThis setup makes the network suitable for fine-tuning on both sequence labeling tasks as well as pair labeling tasks (such as question answering or entity matching).", "id": "c38a38a1-5cfe-4e6f-aa9f-42d4da1bb299", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "bf1c3598-f0a8-4628-97d6-349499224757", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Background" ], [ "subsection", "Neural networks and deep learning" ], [ "subsubsection", "Deep learning for natural language processing" ] ], "subsections": [], "title": "Deep learning for natural language processing" }, { "cite_extract_rate": 0, "cites": 
[], "content": "\\label{sec:related-work}", "id": "8c2fe9d4-101e-4ae1-962e-0c7abef458fc", "level": "section", "origin_cites_number": 0, "parent_id": "87eb62ae-9acb-4832-a104-61ba9ef70d9a", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Related work" ] ], "subsections": [ "23fb9017-950f-48bc-8f25-9d82c3df7591", "da081ecb-cb52-4a0d-988c-943c832c60a0" ], "title": "Related work" }, { "cite_extract_rate": 0, "cites": [], "content": "Given entity matching's long history, it is no surprise that it has been surveyed before in various ways,\ncovering entity matching as a whole and narrower aspects.\nFirst, there are several books that provide an overview.\n is a dedicated and comprehensive source on entity matching,\n specifically cover the slightly more specialized problem of duplicate detection, and\n all introduce entity matching in the context of data quality and integration.\nSecond, the workshop tutorials by serve as introductory summaries.\nThird, present a literature analysis.\nOther sources cover narrower aspects of entity matching -- such as specific techniques or subtasks.\nQuite early on, statisticians dominated the field of entity matching.\nProbabilistic methods were developed by and given a solid theoretical framework by .\nThese probabilistic methods are summarized by .\nBlocking, which is surveyed by , is considered an important subtask of entity matching, meant to tackle the quadratic complexity of potential matches.\n specifically review entity matching techniques in the context of big data.\nThere has been an uptick in interest in both machine learning and crowdsourcing as solutions to entity matching in recent years.\nAs part of a larger survey on crowdsourced data management, cover crowdsourced entity matching.\n summarize the use of machine learning, while present an overview of crowdsourcing, active learning, and deep learning for entity matching.\nWhile earlier works mention or cover neural 
networks for entity matching to various degrees,\nwe are, to the best of our knowledge, the first to present a dedicated, complete, and up-to-date survey.", "id": "23fb9017-950f-48bc-8f25-9d82c3df7591", "level": "subsection", "origin_cites_number": 0, "parent_id": "8c2fe9d4-101e-4ae1-962e-0c7abef458fc", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Related work" ], [ "subsection", "Other surveys and extensive overviews" ] ], "subsections": [], "title": "Other surveys and extensive overviews" }, { "cite_extract_rate": 0, "cites": [], "content": "Entity matching can be seen as part of a larger group of tasks with roots in natural language processing that solve similar, but distinct, matching problems.\nInterestingly, but perhaps not surprisingly, deep learning-based methods have become state-of-the-art in all these tasks.\nWe will briefly mention some of the most prominent ones.\n\\begin{itemize}\n \\item \\textbf{Coreference resolution}:\n Given a text, find all mentions of entities and determine which mentions corefer.\n Two entity mentions corefer if they refer to the same entity .\n In contrast, entity matching is concerned with more structured data with clearly distinct units of data (records).\n Importantly, entity matching does not have to take into account a larger textual context,\n which is necessary in coreference resolution to find coreferring mentions across multiple sentences.\n State-of-the-art methods are able to perform the whole task end-to-end using a deep network without detecting and disambiguating mentions in two separate steps .\n \\item \\textbf{Entity alignment}:\n Given two knowledge bases, find which entries across the two refer to the same entity.\n Knowledge bases, in contrast to record sets in entity matching, have relations between entries.\n Leveraging these relations is central to the task.\n The way most neural-based methods do this is by producing so-called knowledge graph 
embeddings ,\n embeddings of entries which incorporate information about their relationship to other entries.\n As a slightly specialized variant, user identity linkage is the problem of identifying which users across two social networks are the same .\n \\item \\textbf{Entity linking}:\n Given a text, find all mentions of entities and link them to entries in a knowledge base.\n One example of a heavily used knowledge base would be Wikipedia.\n In some ways,\n one can see entity linking as a hybrid between coreference resolution and entity alignment,\n and it differs from entity matching in the same ways.\n Neural-based methods are considered state-of-the-art .\n \\item \\textbf{Paraphrase identification}:\n Given two texts, determine if they are semantically equivalent -- i.e., if they carry the same meaning.\n This can be seen as a generalization of string matching, if one interprets strings referring to the same entity as implying that the strings are also semantically similar.\n Nonetheless,\n we still consider figuring out which texts convey the same meaning in general to be a distinct problem from entity matching.\n First, entity matching deals with more structured data.\n Second and most importantly, in entity matching, all records refer to an entity, and we are only concerned with which specific real-world entity a record is referring to.\n Any excess meaning carried by a record does not impact matching.\n Finally, there are also semantic textual similarity and textual entailment, which are closely related to paraphrase identification.\n Semantic textual similarity is concerned with the degree to which two texts are semantically similar,\n while textual entailment is about finding out whether one text semantically entails, contradicts, or is neutral to a second text.\n Additionally, in the case of multiple choice, question answering can also be seen as a matching problem.\n State-of-the-art approaches for most of these matching problems rely on rather generic, but 
powerful, language understanding models .\n\\end{itemize}\nIn a broader sense,\nsimilar problems are also studied in the context of information retrieval .\nNeural networks not only provide effective techniques for retrieving unstructured text but also for data formats that have traditionally been less accessible such as images --- and even across modals .", "id": "da081ecb-cb52-4a0d-988c-943c832c60a0", "level": "subsection", "origin_cites_number": 0, "parent_id": "8c2fe9d4-101e-4ae1-962e-0c7abef458fc", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Related work" ], [ "subsection", "Related problems" ] ], "subsections": [], "title": "Related problems" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:process}\nTraditionally, entity matching is often thought of as a process consisting of multiple steps,\neven though there is no generally agreed upon list of specific steps.\nIt is useful to compare methods in light of how they relate to this abstract process.\nTo this end, we introduce a high-level reference model describing the entity matching process as five distinct steps.\nThese steps can also be viewed as a chain of the subtasks or subproblems that make up entity matching.\nInspired by processes and figures such as those in , Figure~\\ref{fig:process} depicts this reference model of the traditional entity matching process.\nWe will use the model to frame the discussion of different methods using neural networks.\n\\begin{figure}\n \\centering\n \\includegraphics{process}\n \\caption{\n Illustration of the reference model for a traditional entity matching process and its five steps.\n Human-in-the-loop aspects are not considered.\n }\n \\label{fig:process}\n\\end{figure}\nThe process adheres to the problem definition introduced above.\nIt assumes two data sources as input.\nIn theory, it could be generalized to multiple sources, but this is seldom done in the literature.\nA single source, as 
previously mentioned, can simply be seen as a special case.\nAt the end of the process, the result is simply matches.\nSince this is an abstract process extracted from the literature,\nit is not necessarily followed step by step.\nThe order might not be completely strict, and steps might be intermingled or skipped -- as will be clear when we look at specific methods.\nWe also note that this process is machine-oriented\nand does not highlight any iterative human interactions or feedback loops.\nSignificant research has gone into both crowdsourcing and active learning .\nInterestingly, use a deep neural network in their active learning approach.\nSuch human-in-the-loop factors are often crucial for entity matching in practice .\nWe do not consider our proposed process to be in conflict with these aspects,\nbut rather mostly orthogonal.\nEmpirically, based on the surveyed methods, we do not find neural networks to be very tightly coupled to any human-in-the-loop techniques.\nWe therefore focus on the machine-oriented aspects.", "id": "707fddc2-c0f3-4aad-a358-d6ad6a50dcfd", "level": "section", "origin_cites_number": 0, "parent_id": "87eb62ae-9acb-4832-a104-61ba9ef70d9a", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ] ], "subsections": [ "9912a7af-3935-48a4-9f24-c1d06c0de7b6", "710a2af2-21d5-4009-85a7-4471a2f8a6c0", "52d137f0-7ff3-45b1-a028-f6ac97a8dd8c", "c17fd354-169c-4d2b-ba99-90779ea98a76", "327bfc6d-0aee-4056-a896-67c1a1d12b94", "d709375a-1094-4101-82cc-477d5288c1d8" ], "title": "The entity matching process" }, { "cite_extract_rate": 0, "cites": [], "content": "The first step in the process is data preprocessing, which is usually a crucial step in many data integration tasks .\nThe goal is to get both data sources into consistent and similar formats better suited for downstream tasks.\nTypical transformations may involve removing excess punctuation, lowercasing all letters, normalizing 
values, and tokenizing.\nSometimes, one might also view this step as feature extraction, where records are transformed to a feature space.\nPreprocessing is, of course, very dependent on the domain and the specific data sources.", "id": "9912a7af-3935-48a4-9f24-c1d06c0de7b6", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "707fddc2-c0f3-4aad-a358-d6ad6a50dcfd", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsubsection", "Data preprocessing" ] ], "subsections": [ "fc374fdd-a8ab-4091-9ac4-dd77f93e6106", "8d0a07c8-7269-4315-bc51-51ccf8bf2c61", "c4efafe0-8e43-4973-93f7-5ceab1b33929", "dcadb480-904c-4bf8-bf00-d4024359e8f1", "2d04a6e6-5bac-4112-b4e5-c04db4f64540" ], "title": "Data preprocessing" }, { "cite_extract_rate": 0, "cites": [], "content": "After preprocessing we perform schema matching,\nwhere the objective is to find out which attributes should be compared to one another,\nessentially identifying semantically related attributes.\nThis will enable downstream steps to compare records across the two sources.\nEven though schema matching is often considered a separate problem to be solved before performing entity matching \\cite[e.g.,][]{Christen2012-xr},\nwe choose to include it both because deep learning-based methods have the potential to perform it jointly with other steps (as a surveyed method shows ) and because it is frequently an unavoidable problem in real-world use cases for entity matching.\nIn practice,\nthis step is often performed manually as part of the preprocessing step,\nsimply making sure to transform both data sources into the same schema format.\nTraditional techniques for schema matching span a wide range of solutions.\nThey can use both schema metadata and actual attribute values.\nSome are supervised learning methods, while others are unsupervised.\nImportantly, most of them are completely independent of downstream tasks in the 
process,\nthough most techniques are actually not developed specifically for the purpose of entity matching.\nFor more in-depth coverage of schema matching, see .", "id": "fc374fdd-a8ab-4091-9ac4-dd77f93e6106", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "9912a7af-3935-48a4-9f24-c1d06c0de7b6", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsubsection", "Data preprocessing" ], [ "subsubsection", "Schema matching" ] ], "subsections": [], "title": "Schema matching" }, { "cite_extract_rate": 0, "cites": [], "content": "Since the number of potential matches grows quadratically,\nit is common to pick out candidate pairs $C \\subseteq A \\times B$ in a separate step before any records are compared directly.\nWe call this step \\textit{blocking}, and the goal is to bring down the number of potential matches $|C| \\ll |A \\times B|$ to a level where record-to-record comparison is feasible.\nNote that in the literature, blocking is sometimes used as a name for only one of the possible strategies for avoiding the quadratic complexity \\cite[e.g.,][]{Christophides2019-de}.\nFor simplicity, we refer to any effort to make record comparison feasible as blocking.\nOne can think of the blocking step as doing implicit comparison of records,\nwhile the comparison step described below is doing explicit comparison between pairs of records.\nThere is often no way around performing at least some explicit pairwise comparison, since implicit comparison cannot offer the necessary precision.\nIn certain cases, when the comparison lends itself to indexing, it is possible to fuse record pair comparison and blocking into one step.\nUsually, however, the explicit comparison is nontrivial, infeasible, or impossible to speed up by indexing --\ncreating a need to prune away obvious nonmatches in a separate blocking step.\nThis is possible because implicit comparison will typically have 
lower precision, but can be done more efficiently.\nAt the same time, explicit comparison will typically have higher precision but has inherent quadratic complexity.\nBy constructing a high-recall implicit comparison step to filter away obvious nonmatches first,\nwe can make it feasible to use a more powerful high-precision explicit comparison afterward.\nTypical techniques are based on hashing, sorting, or various ways of indexing.\nSome work completely independently of the downstream steps,\nwhile others are more coupled with the record pair comparison and classification steps.\nFor example,\nif matches are decided based on thresholds of string similarity measures,\nit is often possible to specifically index attribute values to prune away according to that criterion .\nMost techniques rely heavily on syntactic similarity,\nincluding those based on supervised machine learning.\nSee for extensive reviews on blocking techniques.\nIn practice,\nit is not uncommon that blocking involves quite a bit of manual feature selection,\npicking out which attributes should be used and which technique to apply.", "id": "8d0a07c8-7269-4315-bc51-51ccf8bf2c61", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "9912a7af-3935-48a4-9f24-c1d06c0de7b6", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsubsection", "Data preprocessing" ], [ "subsubsection", "Blocking" ] ], "subsections": [], "title": "Blocking" }, { "cite_extract_rate": 0, "cites": [], "content": "When the number of candidate pairs $|C|$ has been reduced to a manageable amount, we can compare individual records $(a,b) \\in C$.\nThe pairwise comparison results in a similarity vector $S$, consisting of one or more numerical values indicating how similar or dissimilar the two records are.\nTraditionally,\nsuch comparisons have mostly been made using string similarity measures.\nThese measures typically quantify a very specific 
syntactic similarity,\nand therefore differ in what clues for matching strings they are able to pick up.\nSome are, for example, good at adjusting for spelling errors or OCR errors.\nString similarity measures have been extensively covered before .\nIt is possible to incorporate domain knowledge in a string similarity measure to also perform semantic comparison instead of just syntactic ,\nbut it is less common and introduces the extra challenge of acquiring such materialized domain knowledge.\nString similarity measures are made to compare two strings and cannot be directly applied to a pair of records.\nNormally, one will compare those attributes which were found to be semantically similar in the schema matching step,\n thereby getting multiple measurements to include in $S$.\nAlso, since the similarity measures are almost always static and only cover a narrow syntactic similarity,\none can use multiple measures and offload the job of figuring out which ones to emphasize to the downstream classification step.", "id": "c4efafe0-8e43-4973-93f7-5ceab1b33929", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "9912a7af-3935-48a4-9f24-c1d06c0de7b6", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsubsection", "Data preprocessing" ], [ "subsubsection", "Record pair comparison" ] ], "subsections": [], "title": "Record pair comparison" }, { "cite_extract_rate": 0, "cites": [], "content": "Lastly, the objective of the classification step is to classify each candidate pair as either match or nonmatch based on the similarity vector.\nIn cases where $|S|=1$, simple thresholding might be enough,\nwhile when $|S|>1$, one needs more elaborate solutions.\nEarly efforts in entity matching were focused on unsupervised probabilistic methods for doing classification.\nInitially developed by and later formalized by ,\nthe idea is that, given certain assumptions,\none can calculate the 
optimal matching choice according to some bounds on false positives and negatives.\nIt can be seen as very close to a naïve Bayes classifier,\nclassifying record pairs as either match, nonmatch, or uncertain -- where the uncertain matches must go through manual review.\nThe motivation is that common attribute values that agree (for example, a very common first name) are less significant than rare attribute values that agree.\nSee for a complete introduction to probabilistic approaches.\nRecently, methods based on rules or machine learning have been more prominent.\nRules are predicates over the similarity vector $S$ that flag a pair as match or nonmatch.\nThey can be constructed manually, making them a powerful and highly explainable way of explicitly incorporating domain knowledge into the classification.\nManually constructing rules requires a lot of expert labor,\nso significant work has been put into automatically learning rules from examples.\nOther efforts in leveraging learning have used off-the-shelf classification models such as decision trees and support vector machines.\nThese machine learning models are then trained on examples of $S$ for which it is known whether they represent a matching or nonmatching record pair.\nBoth rule-based and machine learning approaches are covered extensively in the literature .", "id": "dcadb480-904c-4bf8-bf00-d4024359e8f1", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "9912a7af-3935-48a4-9f24-c1d06c0de7b6", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsubsection", "Data preprocessing" ], [ "subsubsection", "Classification" ] ], "subsections": [], "title": "Classification" }, { "cite_extract_rate": 0, "cites": [], "content": "Table~\\ref{tab:method-process-matrix} lists, to the best of our knowledge, all methods that use neural networks for entity matching and which steps of the process they tackle using neural 
networks.\nWe will in the subsequent subsections take a closer look at each step\nand see how different methods use neural networks to handle them.\n\\begin{table}\n \\centering\n \\caption{\n Overview of which steps of the entity matching process reference model different methods tackle with neural networks.\n }\n \\small\n \\begin{tabular}{l|M{17mm}|M{13mm}|M{13mm}|M{17mm}|M{17mm}}\n Method & Data \\newline preprocessing & Schema matching & Blocking & Record pair \\newline comparison & Classification \\\\\n \\hline\n SEMINT & & $\\bullet$ & & & \\\\ \\hline\n SMDD & & $\\bullet$ & & & \\\\ \\hline\n & & $\\bullet$ & & $\\sim$ & \\\\ \\hline\n & & & & & $\\bullet$ \\\\ \\hline\n & & & & & $\\bullet$ \\\\ \\hline\n & & & & & $\\bullet$ \\\\ \\hline\n NNSM & & $\\bullet$ & & & \\\\ \\hline\n & $\\bullet$ & & & & $\\bullet$ \\\\ \\hline\n . & & & & & $\\bullet$ \\\\ \\hline\n & $\\bullet$ & & & $\\bullet$ & $\\bullet$ \\\\ \\hline\n DeepMatcher & $\\bullet$ & & & $\\bullet$ & $\\bullet$ \\\\ \\hline\n & $\\bullet$ & & & $\\bullet$ & \\\\ \\hline\n DeepER & $\\bullet$ & & $\\bullet$ & $\\bullet$ & $\\sim$ \\\\ \\hline\n MPM & $\\bullet$ & & & $\\bullet$ & $\\bullet$ \\\\ \\hline\n & $\\bullet$ & & & $\\bullet$ & $\\bullet$ \\\\ \\hline\n Seq2SeqMatcher & $\\bullet$ & $\\bullet$ & & $\\bullet$ & $\\bullet$ \\\\ \\hline\n & $\\bullet$ & $\\bullet$ & & & \\\\ \\hline\n AutoBlock & $\\bullet$ & & $\\bullet$ & & \\\\ \\hline \n Hi-EM & $\\bullet$ & & & $\\bullet$ & $\\bullet$ \\\\ \\hline\n & $\\bullet$ & $\\bullet$ & & $\\bullet$ & $\\bullet$ \\\\ \\hline\n Ditto & $\\bullet$ & $\\bullet$ & & $\\bullet$ & $\\bullet$\n \\end{tabular}\n \\label{tab:method-process-matrix}\n\\end{table}", "id": "2d04a6e6-5bac-4112-b4e5-c04db4f64540", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "9912a7af-3935-48a4-9f24-c1d06c0de7b6", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ 
"subsubsection", "Data preprocessing" ], [ "subsubsection", "Outline" ] ], "subsections": [], "title": "Outline" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:data-preprocessing}\nDeep neural networks are good at doing representation learning.\nAs we will see, they can therefore effectively learn to do some of the data preprocessing we would traditionally do manually.\nWhen we explore how the different methods do this,\nwe will focus on two aspects:\nHow embeddings are used to get records in a suitable input format, and how the networks' hierarchical representations are structured.\n\\begin{table}\n \\centering\n \\caption{\n Overview of how the surveyed methods use embeddings,\n specifically at what granularity, if they use pretrained embeddings, and whether they fine-tune embeddings.\n Surveyed methods not using embeddings at all are left out.\n $^+$Other options were tried, but this was found to be most preferential.\n $^*$The method uses pretrained embeddings for the attribute value text,\n but standard embeddings trained from scratch for attribute labels.\n }\n \\small\n \\begin{tabular}{llm{15mm}m{15mm}}\n \\toprule\n Method & Embedding granularity & Pretrained \\newline embeddings & Fine-tuned \\newline embeddings \\\\ \\midrule\n & Word & No & - \\\\ \n & Word \\& Character N-gram & Yes & No \\\\ \n DeepMatcher & Word \\& Character N-gram$^+$ & Yes$^+$ & No \\\\ \n & Character & No & - \\\\ \n DeepER & Word & Yes & Yes$^+$ \\\\ \n MPM & Word \\& Character N-gram & Yes & No \\\\ \n & Word \\& Character N-gram & Yes & No \\\\ \n Seq2SeqMatcher & Word \\& Character N-gram & Both$^*$ & No \\\\ \n & Word & Yes & No \\\\ \n AutoBlock & Word \\& Character N-gram & Yes & No \\\\ \n Hi-EM & Character & No & - \\\\\n & Character N-gram & Yes & Yes \\\\\n Ditto & Character N-gram & Yes & Yes \\\\\n \\bottomrule\n \\end{tabular}\n \\label{tab:feature-extraction}\n\\end{table}", "id": "710a2af2-21d5-4009-85a7-4471a2f8a6c0", "level": "subsection", 
"origin_cites_number": 0, "parent_id": "707fddc2-c0f3-4aad-a358-d6ad6a50dcfd", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsection", "Data preprocessing" ] ], "subsections": [ "e7c85881-0e0e-4cff-aff2-3e7a4eab27d3", "e9f463eb-eff4-4d6d-9684-6294a3da6a15" ], "title": "Data preprocessing" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:embeddings}\nNeural networks alone only work with numerical data,\nso an important enabling factor in letting networks learn representations is how textual records are transformed into a numerical format.\nIn practice, this is done using embeddings, as explained in Section~\\ref{sec:background}.\nNote that while some embedding models, like GloVe , are not neural networks,\nwe still consider them a crucial component for neural networks and how they are able to replace manual feature extraction (see Section~\\ref{sec:contributions-from-deep-learning}).\nThey perform and enable powerful representation learning on text.\nOther embedding models, like word2vec, can be seen as a simple neural network.\nEven though the embeddings are later used in a lookup table, they were trained using this simple network.\nOne interesting use of word2vec is that of .\nThey do not use the word embeddings as input to a neural network,\nbut use them as is in a simple aggregation and comparison scheme to do schema matching (details in Section \\ref{sec:schema-matching-attribute-embeddings}).", "id": "e7c85881-0e0e-4cff-aff2-3e7a4eab27d3", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "710a2af2-21d5-4009-85a7-4471a2f8a6c0", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsection", "Data preprocessing" ], [ "subsubsection", "Embeddings" ] ], "subsections": [ "f7198fa4-ba0d-46b1-8208-f778473e8288", "6c7f89d8-5043-42db-8925-1fcc25b419a3", 
"e2338fc8-a8ee-4706-b8a4-d428f3e32ee9" ], "title": "Embeddings" }, { "cite_extract_rate": 0, "cites": [], "content": "Embeddings can be used at different granularities, usually at word- or character-level.\nThe second column of Table~\\ref{tab:feature-extraction} shows which methods use which granularity.\nWord embeddings significantly reduce the length of the sequences to be processed compared to character embeddings\nbut come at the expense of increasing the number of unique values that have to be represented by many orders of magnitude.\nThis often makes solutions relying on word embeddings more vulnerable to out-of-vocabulary (OOV) words --\ni.e., words that were not present in the training data.\nWord-based embedding models usually handle unknown words by assigning the same embedding to all unknown words, making no distinction between them.\nWhen embeddings are pretrained on large general corpora (as will be discussed next),\nbut the data sources at hand contain domain-specific words that are otherwise rare,\nthey will naturally tend to be less useful.\nIn addition, the data sources at hand might have low data quality and contain typos or small spelling variations that are not common in the training data -- thus effectively making those words out-of-vocabulary.\nMotivated by these concerns, the majority of the methods use fastText embeddings (or similarly, Wordpiece/SentencePiece/Byte-Pair-Encoding for the transformer networks).\nFastText combines embeddings for both the word itself and all character N-grams of certain lengths,\noften making it possible to find a suitable representation for an OOV word, since the word most likely has known N-grams.\nUsing N-grams in this way is basically a way of approximately incorporating a morpheme granularity level to word-level embeddings .\nDoes the choice of embeddings matter?\n compare fastText to (the purely word-based) GloVe\nand find fastText to have an edge when the data sources contain domain-specific words that 
are OOV and otherwise comparable.\n, meanwhile, compare fastText without N-grams, GloVe, and word2vec , reaching the conclusion that there is no significant difference.\nThe combined results might indicate that the embedding granularity is more important than which particular embedding is used.", "id": "f7198fa4-ba0d-46b1-8208-f778473e8288", "level": "paragraph", "origin_cites_number": 0, "parent_id": "e7c85881-0e0e-4cff-aff2-3e7a4eab27d3", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsection", "Data preprocessing" ], [ "subsubsection", "Embeddings" ], [ "paragraph", "Granularity" ] ], "subsections": [], "title": "Granularity" }, { "cite_extract_rate": 0, "cites": [], "content": "One of the benefits of popular word embedding models like word2vec, GloVe, and fastText is that you can get pretrained embeddings.\nThey have been trained on enormous general corpora, have vocabularies of significant size, and are often available for different languages.\nPretrained character embeddings are not nearly as common,\nthough are, in essence, pretraining the entity matching model on large amounts of training data and can in a way be thought of as having pretrained character embeddings.\nThe third column in Table~\\ref{tab:feature-extraction} shows which methods use pretrained embeddings.\nUsing pretrained embeddings is essentially a way of doing transfer learning for feature extraction.\nSince embeddings can be trained unsupervised,\nthere is generally a substantial amount of training data available.\nThis can be very helpful if it manages to reduce the necessary amount of labeled training data for the downstream entity matching task at hand.\n found that learning embeddings from scratch instead of using pretrained embeddings can be favorable to highly specialized data sources,\nwhile for other data sources,\npretrained embeddings either outperformed or were comparable to learning from 
scratch.", "id": "6c7f89d8-5043-42db-8925-1fcc25b419a3", "level": "paragraph", "origin_cites_number": 0, "parent_id": "e7c85881-0e0e-4cff-aff2-3e7a4eab27d3", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsection", "Data preprocessing" ], [ "subsubsection", "Embeddings" ], [ "paragraph", "Pretraining" ] ], "subsections": [], "title": "Pretraining" }, { "cite_extract_rate": 0, "cites": [], "content": "Even when embeddings have been pretrained on some large text collection,\none still has the opportunity to continue adjusting them when doing the task-specific training together with all other weights.\nWe refer to this as \\textit{fine-tuning} the embeddings -- the opposite of \\textit{freezing} them.\nThe fourth column in Table~\\ref{tab:feature-extraction} shows which methods fine-tune their embeddings, which currently is only .\nThey found fine-tuning to help on hard data sets --- i.e., those that are very challenging or impossible to get close to perfect $F_1$ score on.", "id": "e2338fc8-a8ee-4706-b8a4-d428f3e32ee9", "level": "paragraph", "origin_cites_number": 0, "parent_id": "e7c85881-0e0e-4cff-aff2-3e7a4eab27d3", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsection", "Data preprocessing" ], [ "subsubsection", "Embeddings" ], [ "paragraph", "Fine-tuning" ] ], "subsections": [], "title": "Fine-tuning" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:representation-levels}\nEmbeddings offer neural networks an initial mapping from the actual input to a suitable numeric representation.\nBut as mentioned earlier, the strength of deep learning's use of neural networks is really its ability to do hierarchical representation learning,\nwhich is achieved using multiple layers, learning increasingly abstract features .\nThe first layers of deep networks will typically be designed to 
enable building a good representation of the input, and then let the last layers focus on producing the desired output.\nIt is nontrivial to figure out what each layer actually learns.\nWhen we compare the surveyed methods,\nwe instead focus on explicit levels of representation.\n\\begin{table}\n \\centering\n \\caption{\n Overview of which explicit representation levels the surveyed methods make use of and which kinds of network layers are used to build representation from the level below.\n Methods in the upper half use independent record representations,\n and those in the bottom half use interdependent record representations.\n Self-attention is any attention mechanism that only uses elements within the same record,\n while cross-attention refers to any attention mechanism looking across two records.\n $^+$Other options were explored, but the representational power was similar or lower.\n $^*$Multiple options in use at the same time.\n }\n \\footnotesize\n \\begin{tabular}{lm{15mm}m{20mm}m{25mm}m{25mm}}\n \\toprule\n Method & Character & Word & Attribute & Record \\\\ \\midrule \n & & Standard \\newline embeddings & & 1 Convolutional \\\\ \\addlinespace[0.5em] \n & Standard \\newline embeddings & & & 1 BiLSTM, 2 FC \\\\ \\addlinespace[0.5em] \n DeepER & & GloVe$^+$ & & 1 BiLSTM$^+$ \\\\ \\addlinespace[0.5em] \n MPM & $^*$ & fastText$^*$ & $^*$ & \\\\ \\addlinespace[0.5em] \n & & fastText & 1 BiGRU & \\\\ \\addlinespace[0.5em] \n & & word2vec & Sum, average & \\\\ \\addlinespace[0.5em] \n AutoBlock & & fastText & 1 BiLSTM$^+$, \\newline self-attention & Weighted sum \\\\ \\midrule \n DeepMatcher & & fastText$^+$ & Cross-attention, \\newline 1 BiGRU$^+$ & \\\\ \\addlinespace[0.5em] \n Seq2SeqMatcher & & Standard \\newline embeddings \\& \\newline fastText & & Cross-attention \\\\ \\addlinespace[0.5em] \n Hi-EM & Standard \\newline embeddings & 1 BiGRU, \\newline cross-attention, \\newline self-attention & 1 BiGRU, \\newline cross-attention, \\newline self-attention & 
Concatenation \\\\ \\addlinespace[0.5em]\n & \\multicolumn{2}{m{35mm}}{Byte-pair encoding$^+$, \\newline 12 transformer layers \\newline (self- and cross-attention)$^+$} & & \\\\ \\addlinespace[0.5em]\n Ditto & \\multicolumn{2}{m{35mm}}{Byte-pair encoding$^+$, \\newline 12 transformer layers \\newline (self- and cross-attention)$^+$} & & \\\\\n \\bottomrule\n \\end{tabular}\n \\label{tab:representation-levels}\n\\end{table}\nTable~\\ref{tab:representation-levels} highlights how each method does representation learning by listing which explicit representation levels are used,\nand which techniques are used to build representation from the level below,\nwhere the explicit level units are character, word, attribute, and record.\nIt is important to note that the table does not reflect the entire neural network of each method,\nbut rather only the beginning layers that are to be considered as the feature extraction part of the network.\nWe consider a representation level as used if you can simply pick out a vector representation for units of that level after some layer.\nA vector is considered to represent a unit if its calculations only rely on that unit or other input through an attention mechanism.\nImportantly,\na vector that relies on two records through something other than an attention mechanism is not considered a representation,\nbut rather a comparison (see Section~\\ref{sec:comparison}).\nFigure~\\ref{fig:representation-comparison} illustrates the difference in terms of computational graphs.\nOf course, with neural networks, the actual line between the initial feature extraction part and the rest is an artificial one and not necessarily indicative of how the networks actually learn and work.\nBut these levels do reflect design decisions to a certain degree and help us compare the methods in that regard.\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{representation-comparison}\n \\caption{\n Illustration of what is considered a vector representation 
(independent or interdependent) and what is considered a comparison in terms of computational graphs.\n Here, $a$ and $b$ are records and $\\alpha$ is an attention mechanism.\n }\n \\label{fig:representation-comparison}\n\\end{figure}\nWe see each method's first layer is (unsurprisingly) an embedding,\nproviding initial character or word vectors.\nSome use a specific embedding model, like fastText, while others just use standard lookup table embeddings that they train themselves.\nNext, we note the popularity of RNN-based models among the methods,\nwhich is in line with the widespread use of such sequence-aware models in natural language processing \\cite[e.g.,][]{Graves2013-uh, Sutskever2014-ya, Lample2016-xo}.\nAn interesting case is that of MPM , which actually combines two versions of DeepMatcher as well as classical similarity measures in its architecture.\nThe methods can be naturally divided into two categories when it comes to representation learning:\nindependent or interdependent representation.\nIf the highest representation level relies on a record pair instead of a single record,\nwe say it is an interdependent representation.\nOtherwise, it is an independent representation.\nSee again Figure~\\ref{fig:representation-comparison} for an illustration.\nThe methods in Table~\\ref{tab:representation-levels} have independent and interdependent representation at the top and bottom, respectively.\nInterdependent representations are, in essence, a way to incorporate record pair comparison into the feature extraction.\nThey have the benefit of being able to adapt based on what they will be compared to,\nwhile independent representations have the benefit of not relying on record pairs to be computed.\nThe latter will be important when we discuss blocking in Section~\\ref{sec:blocking}.", "id": "e9f463eb-eff4-4d6d-9684-6294a3da6a15", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "710a2af2-21d5-4009-85a7-4471a2f8a6c0", "prefix_titles": [ [ 
"title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsection", "Data preprocessing" ], [ "subsubsection", "Representation levels" ] ], "subsections": [ "416b5239-8b0a-41d5-9a9e-21c94a85c5fd", "e93e416e-4983-4d7b-beec-2557daa48afb" ], "title": "Representation levels" }, { "cite_extract_rate": 0, "cites": [], "content": "There is significant variation among the methods with independent representation.\n and mostly rely on word embeddings for the feature extraction part of the network.\n\\citeauthor{Kooli2018-sr} simply concatenate them before the next layers do comparison,\nand \\citeauthor{Nozaki2019-sg} aggregate them through summation and averaging.\n use bidirectional LSTM on character embeddings followed by dense layers to produce record-level representations.\nAs the only method, \\citeauthor{Wolcott2018-zb} actually go straight from characters to record representation.\n use bidirectional GRU on word embeddings to get an attribute-level representation.\nAs the only surveyed method, apply a simple convolutional layer to word embeddings.\nLastly, both DeepER and AutoBlock have networks solely aimed at finding good representations for use in blocking (see Section~\\ref{sec:blocking}).\nDeepER uses bidirectional LSTM on word embeddings to get a record-level representation (but the authors also show that a simple averaging approach is competitive).\nSomewhat differently,\nAutoBlock applies bidirectional LSTM and self-attention on word embeddings to get an attribute representation\nand represents records as a weighted sum of attributes.", "id": "416b5239-8b0a-41d5-9a9e-21c94a85c5fd", "level": "paragraph", "origin_cites_number": 0, "parent_id": "e9f463eb-eff4-4d6d-9684-6294a3da6a15", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsection", "Data preprocessing" ], [ "subsubsection", "Representation levels" ], [ "paragraph", "Independent 
representation" ] ], "subsections": [], "title": "Independent representation" }, { "cite_extract_rate": 0, "cites": [], "content": "DeepMatcher explores several ways of building attribute representation from word embeddings.\nThe one with the highest representational power, Hybrid, uses a combination of bidirectional GRU and decomposable attention across records.\nUnique among the surveyed methods,\nSeq2SeqMatcher structures records as sequences of \\texttt{(attribute, word)} pairs.\nThe embedding of such a pair is a concatenation of a custom embedding for the attribute and a fastText embedding for the word itself.\nThe record-level representation is produced through an attention technique between the sequences of two records.\n treat a record pair as a sequence of attribute value sub-word tokens,\nwhile Ditto models a record pair as a sequence of alternating attribute name and value sub-word tokens.\nBoth let each token keep its own representation throughout the representation-building layers.\nThese Transformer networks take interdependent representation to an extreme,\nas each token depends on all others in every Transformer layer.\nFinally, Hi-EM is the only method that uses all four explicit representation levels.\nIt applies a combination of bidirectional LSTM, self-attention within the record, and attention across records — both from its standard character embeddings to word vectors and from its word vectors to its attribute vectors.\nFor the record-level representation, it simply concatenates the attribute vectors.", "id": "e93e416e-4983-4d7b-beec-2557daa48afb", "level": "paragraph", "origin_cites_number": 0, "parent_id": "e9f463eb-eff4-4d6d-9684-6294a3da6a15", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsection", "Data preprocessing" ], [ "subsubsection", "Representation levels" ], [ "paragraph", "Interdependent representation" ] ], "subsections": [], "title": 
"Interdependent representation" }, { "cite_extract_rate": 0, "cites": [], "content": "Given two data sources $A$ and $B$, we divide the ways in which the schemas can be related into three:\n\\begin{itemize}\n \\item \\textbf{Aligned schemas}:\n Both data sources use the same schema.\n In other words, $\\forall_{i \\in \\{1, 2, ..., n\\}} (A_i = B_i)$ and $n = m$.\n \\item \\textbf{Misaligned schemas}:\n Both data sources have the same attributes, but not in the same order.\n In other words, there exists a bijective relation $H \\subset \\{A_i\\}_{i=1}^n \\times \\{B_j\\}_{j=1}^m$ such that $\\forall A_i B_j: ((A_i, B_j) \\in H) \\rightarrow (A_i = B_j)$.\n \\item \\textbf{Incompatible schemas}:\n There is no simple correspondence.\n In other words, there does not exist such a bijective relation as described above.\n\\end{itemize}\nWith aligned schemas,\nthere is no need for schema matching.\nFor misaligned schemas,\nfinding a one-to-one correspondence between attributes is sufficient,\nwhile in the general case of incompatible schemas,\nmore complex connections must be uncovered.\nMany schema matching techniques are concerned with the second case, misaligned schemas.\nFor entity matching,\nthe goal is usually to find out which attributes should be compared in the downstream task where records are compared;\nwe want to find the pairs $(A_i, B_j)$ of attributes that are semantically related.\nSo one attribute can be compared to several attributes from the other data source -- implying incompatible schemas.\nAn additional challenge that might occur is dirty attribute values -- values that should have been in another attribute .\nIn such cases,\nwe need to compare attributes that might not necessarily be semantically related in order to be robust to noise.\nThere are two sources of information when doing schema matching.\nThere are the actual attribute values in records,\nand there is the attribute metadata.\nAttribute metadata will often simply be a name (e.g., 
\\texttt{title}, \\texttt{author}, etc).\nWhen it comes to neural networks for schema matching,\nthere are essentially four approaches in the surveyed methods:", "id": "52d137f0-7ff3-45b1-a028-f6ac97a8dd8c", "level": "subsection", "origin_cites_number": 0, "parent_id": "707fddc2-c0f3-4aad-a358-d6ad6a50dcfd", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsection", "Schema matching" ] ], "subsections": [ "a8410dfb-5a99-44c2-a15d-9e9d6e3c3efc", "c0ec543f-1a0c-49eb-85ac-4bdd88dab152", "5db8ae30-9e9c-4869-8549-274f6c82fe3c", "1ba55203-9f37-45ed-948c-2fb8eae35f4f" ], "title": "Schema matching" }, { "cite_extract_rate": 0, "cites": [], "content": "SEMINT , SMDD , and NNSM specifically target schema matching.\nThey all first create training data by performing unsupervised clustering of attributes,\nand then use that data to train a multilayered perceptron\\footnote{A multilayered perceptron is a simple feedforward network using only fully connected layers.} (MLP).\nSEMINT uses a Self-Organizing Map to cluster the feature vectors of attributes in data source $A$ into categories.\nThe attribute features are handcrafted and are based on both schema metadata and attribute values.\nThe category clusters are then used as labeled data to train an MLP with one hidden layer.\nGiven an attribute feature vector,\nthe network scores its similarity to these cluster categories,\nand this is used to match the attributes of data source $B$ to the categories of $A$.\nSMDD follows a similar strategy, but uses the distribution of attribute values and a hierarchical clustering technique.\nSomewhat differently, NNSM clusters pairs of attributes into either being similar or dissimilar based on similarity scores of four traditional matchers.\nNext, they train an MLP with two hidden layers to classify a pair of attributes as either similar or dissimilar using the clusters as training data.\nAn interesting 
aspect of these schema matching methods is their lack of need for human-labeled training data.\nThe methods that learn from clusters generate training data by using more traditional unsupervised methods.\nAs explains it,\nthey are essentially using neural networks as a way to combine several traditional methods.", "id": "a8410dfb-5a99-44c2-a15d-9e9d6e3c3efc", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "52d137f0-7ff3-45b1-a028-f6ac97a8dd8c", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsection", "Schema matching" ], [ "subsubsection", "Learn attribute matching from clusters" ] ], "subsections": [], "title": "Learn attribute matching from clusters" }, { "cite_extract_rate": 0, "cites": [], "content": " translate records from source $A$ to records following the schema of $B$.\nThey train a network for each attribute in $B$,\nwhich can translate a record $a \\in A$ to a record of $B$'s schema.\nWorking only on purely numeric data, they are able to simply feed records from $A$ as input to a neural network and read off its output values as the translated record.\nThe network effectively transforms incompatible schemas into aligned schemas.\nThis approach can resolve the schema matching problem for downstream tasks but also does a lot of the heavy lifting of the record pair comparison by attempting to project records from $A$ to corresponding records in $B$.", "id": "c0ec543f-1a0c-49eb-85ac-4bdd88dab152", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "52d137f0-7ff3-45b1-a028-f6ac97a8dd8c", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsection", "Schema matching" ], [ "subsubsection", "Learn schema mapping" ] ], "subsections": [], "title": "Learn schema mapping" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:schema-matching-attribute-embeddings}\n 
do schema matching by thresholding the cosine distance between attribute vectors.\nThe attribute vectors are found by first summing up the pretrained word embeddings for each attribute in each record\nand then simply averaging per attribute across all records.\nEven though it relies on pretrained word embeddings,\nthe method itself is unsupervised.\nThe distance threshold was simply found experimentally.", "id": "5db8ae30-9e9c-4869-8549-274f6c82fe3c", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "52d137f0-7ff3-45b1-a028-f6ac97a8dd8c", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsection", "Schema matching" ], [ "subsubsection", "Compare attribute representations" ] ], "subsections": [], "title": "Compare attribute representations" }, { "cite_extract_rate": 0, "cites": [], "content": "While schema matching has traditionally been dealt with as a separate task,\nas with the methods above,\n, , and incorporate it as part of their deep learning approach for comparing and classifying record pairs.\nAs explained in the previous subsection, Seq2SeqMatcher structures records as sequences of \\texttt{(attribute, word)} tokens,\nand then solves entity matching as a sequence matching problem.\nThe embedding of such a token is a concatenation of a custom embedding for the attribute and a fastText embedding for the word itself.\nNotice that no attribute metadata is used.\nTreating the input in this way enables the neural network to learn how to compare values across attributes.\nSpecifically, the authors use a bidirectional attention mechanism between token embeddings from two records,\nand then use only the max $k$ attention scores to get the soft-attended representation of a token.\nUsing only the $k$ largest attention scores, effectively setting the rest to zero, helps the model compare only relevant tokens and ignore irrelevant tokens.\n preserve no information about the 
attributes other than the order (which, of course, may differ across schemas), and simply treat a record as a sequence of sub-word tokens.\nThey rely entirely on the powerful attention mechanism in the Transformer network to do the schema matching, using positional information provided through the input and whatever insight and correlation the attribute values provide.\nDitto , on the other hand, explicitly incorporates attribute name sub-word tokens in the input sequence,\nwhich gives the Transformer network more information with which to perform schema matching.\nIn contrast to Seq2SeqMatcher, Ditto uses the attribute name instead of a randomly initialized embedding\n--- enabling the network to exploit whatever knowledge from its language modeling pretraining the attribute name might signal.", "id": "1ba55203-9f37-45ed-948c-2fb8eae35f4f", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "52d137f0-7ff3-45b1-a028-f6ac97a8dd8c", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsection", "Schema matching" ], [ "subsubsection", "Learn jointly with comparison and classification" ] ], "subsections": [], "title": "Learn jointly with comparison and classification" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:blocking}\nFew methods try to use neural networks for blocking, as seen in Table~\\ref{fig:process}.\nThe only two methods, DeepER and AutoBlock ,\nembed records into a high-dimensional metric space and then do nearest neighbor search to filter down the Cartesian product $A \\times B$ to a candidate pair set $C$.\nThey both use cosine distance as a metric,\nand the networks are implicitly trained to produce record representations close to each other for matching records and far from each other for nonmatching records.\nFinding the exact nearest neighbors in high-dimensional spaces is computationally infeasible,\nso they instead perform approximate
nearest neighbor search.\nThe true nearest neighbors are then found only with high probability, not with certainty.\nBoth methods do this using locality sensitive hashing (LSH) , which is a well-studied technique .\nThe two methods follow the same high-level strategy, but they have some important differences.\nThe networks themselves that are responsible for building a good record representation are differentiated in Section~\\ref{sec:data-preprocessing}.\nDeepER trains its network end-to-end with comparison and classification of record pairs.\nThe record representations are compared using either elementwise subtraction or multiplication,\nand then a dense layer performs the classification.\nAutoBlock, in comparison, trains specifically for blocking with a custom loss applied directly to the cosine distance between records.\nFor the actual nearest neighbor pruning, they use two different LSH methods.\nDeepER uses hyperplane LSH , a well-studied method that is known to be easy to implement and often fast in practice.\nAutoBlock uses cross-polytope LSH , which has the benefit of theoretically optimal query running time while also being efficient in practice.\nBoth use multiprobing with distance 1.", "id": "c17fd354-169c-4d2b-ba99-90779ea98a76", "level": "subsection", "origin_cites_number": 0, "parent_id": "707fddc2-c0f3-4aad-a358-d6ad6a50dcfd", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsection", "Blocking" ] ], "subsections": [], "title": "Blocking" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:comparison}\nCentral to matching is to assess the similarity of two records, both syntactically and semantically.\nThe surveyed neural networks will generally produce some distributed representation at either attribute- or record-level and then compare the representations.\nWe consider the layers in a network that are responsible for reducing from representations per
record to representations across records appropriate for classification as record pair comparison layers.\nOr, to put it simply, those layers producing the similarity vector $S$ from per-record representations.\nWe will look at three central characteristics of how comparison is performed:\n\\begin{table}\n \\centering\n \\caption{\n Overview of how the surveyed methods perform record pair comparison.\n $^+$Other options were tried, but this was found to be preferable.\n $^*$The most expressive model (BiLSTM-based) does non-attribute-aligned comparison, while the simpler averaging model is attribute-aligned.\n }\n \\small\n \\begin{tabular}{lM{22mm}M{22mm}M{26mm}}\n \\toprule\n Method & Attribute-aligned & Distributed similarity & Attention-based \\\\ \\midrule\n & No & Yes & No \\\\ \n DeepMatcher & Yes & Yes$^+$ & Words$^+$ \\\\ \n & No & No & No \\\\ \n DeepER & No$^*$ & Yes & No \\\\ \n MPM & Yes & Both & (Partially) words \\\\ \n & Yes & Yes & No \\\\ \n Seq2SeqMatcher & No & Yes & Words \\\\ \n Hi-EM & Yes & Yes & Characters, words \\\\\n & No & Yes & Character N-grams \\\\\n Ditto & No & Yes & Character N-grams \\\\\n \\bottomrule\n \\end{tabular}\n \\label{tab:comparison}\n\\end{table}", "id": "327bfc6d-0aee-4056-a896-67c1a1d12b94", "level": "subsection", "origin_cites_number": 0, "parent_id": "707fddc2-c0f3-4aad-a358-d6ad6a50dcfd", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsection", "Record pair comparison" ] ], "subsections": [ "b5adc3d3-d589-49e8-b729-bfa353f9a509", "137c19ac-d629-4459-bcac-2edf5ef2daa1", "8472ae7e-63c1-4b80-8a37-c4781a8e6f24" ], "title": "Record pair comparison" }, { "cite_extract_rate": 0, "cites": [], "content": "If one assumes the two data sources $A$ and $B$ to have aligned schemas,\none can compare attributes in a one-to-one fashion.\nWe then say the comparison is attribute-aligned.\nThe alternative is to perform comparison at record
level,\nas one will be less dependent on the schemas being aligned.\nThe second column in Table~\\ref{tab:comparison} shows which of the surveyed methods do attribute-aligned comparison.\nDeepMatcher and both compare attributes one-to-one before they combine the similarity representation to record level.\nTo handle cases where non-attribute-aligned comparison is necessary because the data is dirty and values are partially put in the wrong attributes,\nDeepMatcher merges all attributes to one long string attribute -- essentially reducing the problem to a string matching problem.\nThis is, of course, something most attribute-aligned methods can do to overcome this restriction,\nbut then the information carried by the attribute separation is lost.\nHi-EM does not actually align the comparison of attribute representations,\nbut the attribute representations have been produced by implicitly comparing characters and words through an attention mechanism across aligned attributes.", "id": "b5adc3d3-d589-49e8-b729-bfa353f9a509", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "327bfc6d-0aee-4056-a896-67c1a1d12b94", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsection", "Record pair comparison" ], [ "subsubsection", "Attribute-aligned comparison" ] ], "subsections": [], "title": "Attribute-aligned comparison" }, { "cite_extract_rate": 0, "cites": [], "content": "When two distributed representations are compared, one can either produce another distributed representation of their similarity\nor reduce the representations down to a nondistributed similarity representation -- usually a scalar.\nThe third column in Table~\\ref{tab:comparison} shows which of the surveyed methods make distributed similarity representations.\nAs can be seen, the majority of the surveyed methods do.\nTypical ways of computing these similarities include vector difference, Hadamard product, or
concatenation.\nThe only method with nondistributed similarities, ,\nuses cosine distance to compute the similarity.\nNondistributed similarities have the benefit of reducing complexity and training time,\nbut at the cost of expressiveness.\nThe increased expressive power of distributed similarities has to be matched by a classifier able to use it.\n reported that distributed similarities outperform nondistributed.\nIn addition, they found vector difference to be significantly better than concatenation when used after representation layers that do not use cross-attention.\nMPM stands out since it combines both distributed and nondistributed similarity.\nIt uses multiple classical similarity measures and two versions of DeepMatcher in parallel and lets the network effectively choose a similarity representation through a softmax.", "id": "137c19ac-d629-4459-bcac-2edf5ef2daa1", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "327bfc6d-0aee-4056-a896-67c1a1d12b94", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsection", "Record pair comparison" ], [ "subsubsection", "Distributed similarity" ] ], "subsections": [], "title": "Distributed similarity" }, { "cite_extract_rate": 0, "cites": [], "content": "As we saw in Section~\\ref{sec:representation-levels},\nsome methods build distributed representations that are dependent on the record to be compared to through attention mechanisms.\nThey are essentially peeking at what is to come, which enables them to focus on what is important for the comparison.\nThe fourth column in Table~\\ref{tab:comparison} summarizes which methods use cross-attention and at which representation levels.\nDeepMatcher uses attention between words across records,\nwhile Hi-EM uses attention between both character- and word-level representations across records.\nBoth restrict their attention mechanism to within each attribute, since the comparison is
attribute-aligned.\nIn contrast, Seq2SeqMatcher , , and Ditto , are able to use attention between all sub-words/words across the compared records.\nThe Transformer networks take cross-attention all the way by relying almost exclusively on attention throughout the architecture and not making any distinction between self-attention and cross-attention.", "id": "8472ae7e-63c1-4b80-8a37-c4781a8e6f24", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "327bfc6d-0aee-4056-a896-67c1a1d12b94", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsection", "Record pair comparison" ], [ "subsubsection", "Cross-record attention" ] ], "subsections": [], "title": "Cross-record attention" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{table}\n \\centering\n \\caption{\n Overview of which neural network layers the surveyed methods use for classification.\n $^*$The authors emphasize that any machine learning classifier can be used and do not explicitly favor a neural network.\n }\n \\small\n \\begin{tabular}{lm{80mm}}\n \\toprule\n Method & Classification layers \\\\\n \\midrule \n & Two linear layers with custom sparsity pattern, threshold on ratio of match and mismatch score \\\\ \n & Single perceptron, threshold \\\\ \n & MLP, softmax \\\\ \n & Linear layer, softmax \\\\ \n & Two-layered MLP with custom sparsity pattern, threshold \\\\ \n & MLP, LSTM, or CNN \\\\ \n DeepMatcher & Two-layered MLP with Highway-connections , softmax \\\\ \n DeepER & Single dense layer$^*$ \\\\ \n MPM & Two-layered MLP with Highway-connections , softmax \\\\ \n & Two-layered MLP with Highway-connections , softmax \\\\ \n Seq2SeqMatcher & Two-layered MLP, softmax \\\\ \n Hi-EM & Single dense layer \\\\\n & Single dense layer, softmax \\\\\n Ditto & Single dense layer, softmax \\\\\n \\bottomrule\n \\end{tabular}\n \\label{tab:classification}\n\\end{table}\nCompared to the other
steps,\nthere is less variance in how the surveyed methods perform classification.\nGenerally,\nthey take in a similarity vector $S$ and do binary classification,\ndeeming a record pair matching or not.\nAs an exception to this approach,\n classify records $a \\in A$ directly to a corresponding record $b \\in B$,\ntreating the matching problem as a multiclass classification with $|B|$ classes.\n$S$ can be from a separate procedure, like string similarity measures, or upstream layers in the same neural network.\nThe former was more common in earlier methods,\nwhile the latter is common among newer deep learning methods.\nNonetheless,\nthe actual networks or layers used for classification are relatively similar.\nTable~\\ref{tab:classification} shows how each method's classification network is built -- either standalone or as the final layers of a larger network.\nWe see that most are variations of the same theme of MLP with softmax at the end.", "id": "d709375a-1094-4101-82cc-477d5288c1d8", "level": "subsection", "origin_cites_number": 0, "parent_id": "707fddc2-c0f3-4aad-a358-d6ad6a50dcfd", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "The entity matching process" ], [ "subsection", "Classification" ] ], "subsections": [], "title": "Classification" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:contributions-from-deep-learning}\nIn the previous section we dissected the use of neural networks in all the surveyed methods using our process reference model.\nWe now take a step back and summarize which contributions deep learning provides to entity matching.\nInitially, when neural networks were applied for entity matching, they were used merely as a classifier over feature vectors,\neither for schema matching or for determining whether a pair of records matched or not.\nIn the past few years, following the rise of deep learning,\nwe have seen not only an increase in the use of neural networks,\nbut also a broadening of
the role they play in the entity matching process.", "id": "1d86a0ee-e68c-446d-a9f8-bf0bc4cdd113", "level": "section", "origin_cites_number": 0, "parent_id": "87eb62ae-9acb-4832-a104-61ba9ef70d9a", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Contributions from deep learning" ] ], "subsections": [ "54fb2d6b-cdfc-4dd9-b45e-edff7291d469", "c4f3f922-48a5-4721-b699-cfc047a0fe69" ], "title": "Contributions from deep learning" }, { "cite_extract_rate": 0, "cites": [], "content": "Traditionally, as part of the preprocessing in entity matching,\nit is common to transform the data through a range of handcrafted procedures to a format more suitable for comparison.\nImportant features are made more prominent and accessible to the steps downstream,\nwhile less important features are filtered away.\nExamples include phonetic encoding, removing punctuation, stemming, and expanding common domain-specific abbreviations.\nThe problem is that these procedures are highly dependent on the data sources,\nand an expert has to decide which features are essential and how to extract them.\nSuch customization makes it harder to scale a solution across data sources and use cases.\nIn contrast, driven by the advances of deep learning for natural language processing,\nmore recent methods are able to learn feature extraction from less preprocessed records.\nAs mentioned in Section~\\ref{sec:embeddings}, embeddings are used as a powerful gateway to letting neural networks work with text,\nwhile hierarchical representation learning and sequence models make it possible to make semantically rich distributed representations of attribute values and records.\nThis alleviates the need for much of the ad-hoc handcrafted feature extraction procedures,\nleaving mostly standardized feature extraction steps such as simple tokenization.\nThis is in line with the development in other domains using deep learning, where deep neural networks have been 
increasingly able to replace complex handtuned feature engineering .\nOf course, it does not replace all forms of preprocessing when data sources in different formats and of different origins are to be matched.\nOne might, for example, need to extract records from some nonstandard XML or JSON format.\nWhile current deep learning methods do not remove the need for such handcrafted ad-hoc preprocessing,\nfeature extraction can still be a very labor-intensive part of the preprocessing,\nand so deep learning has the potential to lighten the manual load drastically.\nIn addition to feature extraction,\na point of significant handcrafted complexity is that of string similarity measures.\nTraditionally,\none would use several string similarity measures between two records to produce a similarity feature vector\nand then use a neural network, some other classification model, or rules to classify the pair as matching or nonmatching based on this similarity vector.\nDoing comparison together with feature extraction in a deep neural network,\nwe are able to effectively learn how to do comparison of records.\nThe network will learn to extract features that are suitable for comparing records,\neither for static comparison or for downstream layers in the network.\nIn cases where cross-attention is used,\nthe network is even able to learn to extract features tailored to be suitable for comparison against a specific record.\nCompared to traditional approaches,\nlearning comparison in this way can remove the need for these complex handcrafted similarity measures.\nAlso,\nsince deep networks can produce powerful distributed representations,\nthis approach opens up a straightforward way to perform semantic comparison.\nThis means one can depend less on syntactic similarity, which is hard to do with handcrafted methods.\nTransfer learning has been a catalyst to being able to train more powerful deep learning models with fewer labeled examples for the task at hand in both natural 
language processing and computer vision (where starting out with large networks \\cite[e.g.,][]{He2015-iu} pretrained on enormous data sets like ImageNet is common).\nDeep learning methods for entity matching have also started to use such opportunities.\nUsing some of the readily available pretrained word embeddings is popular,\nand is used as an efficient way of transfer learning from large general corpora.\n outline different ways to transfer learn between entity matching tasks using weighted sums of word embeddings as distributed record representations.\nSome methods also use transfer learning beyond word embeddings and pretrain models specifically for entity matching .\nFurthermore, most recently, we have also seen the use of large, powerful, general pretrained language models that are fine-tuned to entity matching .\nCompare this to traditional methods, which do not typically incorporate transfer learning.\nWhile one can argue that this is to be expected, since deep learning methods will usually require more training examples,\nthe use of deep networks makes it possible to realize very powerful transfer learning scenarios,\nwhich are hard to pull off with traditional methods.", "id": "54fb2d6b-cdfc-4dd9-b45e-edff7291d469", "level": "subsection", "origin_cites_number": 0, "parent_id": "1d86a0ee-e68c-446d-a9f8-bf0bc4cdd113", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Contributions from deep learning" ], [ "subsection", "Learned feature extraction and comparison" ] ], "subsections": [], "title": "Learned feature extraction and comparison" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{figure}\n \\centering\n \\includegraphics{process-contributions}\n \\caption{\n Illustration of the reference model for a deep learning entity matching process (bottom) together with the reference model for the traditional entity matching process (top)\n and how they relate to each other.\n }\n 
\\label{fig:deep-learning-process}\n\\end{figure}\nIn Section~\\ref{sec:process},\nwe introduced the reference model of the traditional entity matching process.\nTo support our discussion, we introduce a reference model for a deep learning-based entity matching process.\nFigure~\\ref{fig:deep-learning-process} depicts this reference model together with the traditional variant while highlighting how steps correspond between them.\nOne should note that none of the surveyed deep learning methods actually follow this exact process.\nIt is the essence of all the methods summed up in one view.\nWhat all surveyed deep learning methods have in common is that they have fused together feature extraction, record pair comparison, and classification in a single step as a neural network.\nThe need for data processing as a separate step still exists, but with less focus on feature extraction.\nIn some ways, this development comes naturally.\nThe steps are carried out in more or less the same order, and one can (to a certain degree) distinguish between them in the neural network architecture.\nBut as we saw in Section~\\ref{sec:process},\nsome methods have also explored using neural networks for schema matching and blocking.\n is the first deep learning method to incorporate schema matching in an end-to-end fashion together with record pair comparison and classification.\nThey essentially construct the feature extraction and record pair comparison part of the network in such a manner that it is possible to learn schema matching --\nreducing the step to a built-in property of the network --\nin effect, jointly training a model for both matching schemas and records.\nCompared to solving schema matching as a separate task upfront,\nthis has the potential benefit of being able to adapt how the schemas are matched to how they are used downstream.\nOther deep learning methods assume the schema to be the same for both data sources,\nbut to what degree and how this assumption is 
manifested in the method differ (see Section~\\ref{sec:taxonomy}).\nThe surveyed methods tackling blocking with deep learning rely on making a distributed representation of a record with a neural network,\nand then finding the candidate pairs with approximate nearest neighbor search.\nIn other words, the network produces indexable feature vectors,\nand through the use of a suitable index,\nwe find similar records in subquadratic time.\nWhile it does not remove the blocking step,\nit does reduce the core blocking mechanism to an indexing problem over some standard metric.\nThe network will learn how to do blocking by learning how to represent similar records close to each other in the metric space,\nwhile the nearest neighbor search will always remain the same.\nIf one uses the same distributed representations downstream for record pair comparison, it also serves as a good way to align how blocking and record pair comparison evaluate similarity.\nIt is not unusual in traditional methods that these two steps, which both at their core do record comparison, have two considerably different ways of measuring similarity.\nIn computer vision and natural language processing,\none has traditionally operated on large, complex, and heavily engineered pipelines.\nThey are often divided into distinct steps that have been worked on separately.\nIn contrast,\ndeep learning methods in these fields have evolved to do what previously was done in several distinct steps in one go with a deep network \\cite[e.g.,][]{Sutskever2014-ya, Ren2015-it},\nmaking it possible to train on tasks end-to-end.\nIn a similar fashion, we can see deep learning increasingly reshaping the entity matching process by coalescing traditional steps.\nThis has at least two immediate benefits.\nFirst, it effectively reduces the number of steps in the process,\nand second, it makes it possible to train an increasing part of the process end-to-end.\nThe main enabling factor is the representation learning in modern
deep learning techniques for natural language processing,\nwhich is able to tie all of these together.\nAs we have seen, powerful feature extraction can remove much of the need for manual feature engineering in schema matching, blocking, and record pair comparison.\nWhen in addition these steps are able to share the feature extraction and become part of an end-to-end network covering a large portion of the entity matching process,\nthis can translate into a potentially great reduction in complexity when building entity matching pipelines,\nbecause the complexity of the process lies not only in the individual steps, but also in the interaction between them.\nTuning and getting multiple heavily engineered steps to work smoothly together can be difficult.", "id": "c4f3f922-48a5-4721-b699-cfc047a0fe69", "level": "subsection", "origin_cites_number": 0, "parent_id": "1d86a0ee-e68c-446d-a9f8-bf0bc4cdd113", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Contributions from deep learning" ], [ "subsection", "Coalescing the entity matching process" ] ], "subsections": [], "title": "Coalescing the entity matching process" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:taxonomy}\nWe have seen how neural networks serve different purposes in the entity matching process and vary in how they do so.\nDeep learning has led to an increase in the use of neural networks for entity matching in the past few years,\nmaking deep networks the norm.\nAll of these deep learning methods have in common that they reduce the need for tedious handcrafting of features, but how the networks are structured at a high level differs in important ways, which impact their ability to interact with other steps in the entity matching process.\nThe schema matching and blocking steps are especially interesting in this regard,\nsince they have traditionally been solved with specialized methods separate from record pair comparison and
classification.\nWith this in mind,\nwe propose a taxonomy of deep neural networks for entity matching consisting of four categories.\nThere is no concrete definition of what constitutes a \\textit{deep} neural network,\nso for the purpose of this taxonomy we only consider networks that do feature extraction\\footnote{Note we also exclude , since the method does not represent a neural network itself.} (basically those that are marked for the data preprocessing step in Table~\\ref{tab:method-process-matrix}).\nThe categories are decided by two binary properties that were introduced in Section~\\ref{sec:process}:\n1) Whether comparison is attribute-aligned or not, and 2) whether representations used for comparison are independent or interdependent.\nFigure~\\ref{fig:taxonomy} shows the taxonomy and which category each surveyed deep neural network falls into.\nIn addition,\nit highlights how schema matching and blocking are related to the categories and which of the methods leverage their neural networks to tackle these steps.\nThe two properties reflect not only the high-level structural properties of the network,\nbut also the assumptions about the problem to be solved.\nTo help illustrate the taxonomy, we also show one representative deep learning network architecture for each category in Figure~\\ref{fig:architectures}.\n\\begin{figure}\n \\centering\n \\includegraphics{taxonomy}\n \\caption{\n Taxonomy of deep neural networks for entity matching.\n Split into four categories along two axes that represent two binary properties.\n Methods within the blue and red colored areas are methods that address blocking and schema matching, respectively.\n }\n \\label{fig:taxonomy}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\includegraphics{Architectures}\n \\caption{\n High-level illustration of four deep learning architectures --- one for each category in the taxonomy.\n Note that the input is assumed to have already been replaced with its embedding.\n }\n 
\\label{fig:architectures}\n\\end{figure}", "id": "1bfbeb8a-a767-4676-8769-a5b1beda49f5", "level": "section", "origin_cites_number": 0, "parent_id": "87eb62ae-9acb-4832-a104-61ba9ef70d9a", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Taxonomy of deep neural networks for entity matching" ] ], "subsections": [ "0a106b6b-be01-48c2-88bc-ac8773d2c5ab", "290d604a-b42c-483e-b137-5c8c4414d185", "6cacd1f8-bf25-4cb5-8e41-309a7ca7b53b" ], "title": "Taxonomy of deep neural networks for entity matching" }, { "cite_extract_rate": 0, "cites": [], "content": "If the network is structured in a way that assumes records from the two data sources to have aligned schemas and constrains record comparison to one-to-one on attributes,\nwe say it has attribute-aligned comparison.\nIf it does not, we say it has non-attribute-aligned comparison.\nNotice that comparison does not necessarily need to be explicit record pair comparison,\nbut can also be implicit comparison through, for example, indexing of distributed representations.\nAttribute-aligned comparison is a powerful way to incorporate prior knowledge into a method,\npotentially making training easier and more efficient.\nAt the same time,\nit prevents the possibility of performing schema matching.\nNon-attribute-aligned comparison does not necessarily imply the method is being used for schema matching,\nbut it is by nature more compatible with it and should be easier to adapt to performing it.", "id": "0a106b6b-be01-48c2-88bc-ac8773d2c5ab", "level": "subsection", "origin_cites_number": 0, "parent_id": "1bfbeb8a-a767-4676-8769-a5b1beda49f5", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Taxonomy of deep neural networks for entity matching" ], [ "subsection", "Attribute-aligned or non-attribute-aligned comparison" ] ], "subsections": [], "title": "Attribute-aligned or non-attribute-aligned comparison" }, { "cite_extract_rate": 0, "cites":
[], "content": "If the network relies on seeing a pair of records that are to be compared to produce representations of the records,\nwe say it generates interdependent representations.\nOtherwise, we say it generates independent representations.\nInterdependent representations have the benefit of being able to observe what they are being compared to.\nIntuitively, it is easier to compare two things if we can look at both at the same time.\nIf we are only allowed to look at one thing at a time,\nthen it is harder to know what to focus on.\nOn the other hand,\nby depending on seeing the record to be compared against in order to produce a representation,\none has essentially made it impossible to compare a large number of records in any subquadratic way.\nFor independent representations,\nit is possible to generate a representation of each record once and implicitly compare them through a form of indexing,\neffectively opening up the possibility of doing blocking with the representations made from the network.", "id": "290d604a-b42c-483e-b137-5c8c4414d185", "level": "subsection", "origin_cites_number": 0, "parent_id": "1bfbeb8a-a767-4676-8769-a5b1beda49f5", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Taxonomy of deep neural networks for entity matching" ], [ "subsection", "Independent or interdependent representation" ] ], "subsections": [], "title": "Independent or interdependent representation" }, { "cite_extract_rate": 0, "cites": [], "content": "\\begin{itemize}\n \\item \\textbf{Rigid encoders} are networks that produce independent attribute representations and then perform comparison.\n Even though none of the surveyed methods do,\n it is possible to use such independent attribute representations for blocking purposes.\n \\item \\textbf{Flexible encoders} are networks that produce independent record representations and then perform comparison.\n It is possible to perform blocking using these independent representations,\n 
as shown by DeepER and AutoBlock .\n If the representations can be built from records with two different schemas,\n one can also effectively perform schema matching,\n but none of the current methods do.\n \\item \\textbf{Attribute comparators} are networks that use cross-attention only within the same attribute.\n They are inherently focused on doing aligned attribute-to-attribute comparison.\n DeepMatcher has attribute-to-attribute comparison explicitly designed into the network architecture.\n Hi-EM takes it a step further by training individual networks per attribute.\n There is no easy way to perform blocking or schema matching with such a network architecture.\n \\item \\textbf{Record comparators} are networks that use cross-attention at record level without being constrained by the boundaries of attributes.\n They can learn how to compare across attributes,\n making it possible to overcome misaligned schemas and even incompatible schemas, depending on the network structure.\n Seq2SeqMatcher was the first of the surveyed methods in this category,\n and it is able to handle incompatible schemas.\n Transformer-based networks, which can naturally work as record comparators, have later been shown to provide state-of-the-art performance .\n\\end{itemize}", "id": "6cacd1f8-bf25-4cb5-8e41-309a7ca7b53b", "level": "subsection", "origin_cites_number": 0, "parent_id": "1bfbeb8a-a767-4676-8769-a5b1beda49f5", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Taxonomy of deep neural networks for entity matching" ], [ "subsection", "The four categories" ] ], "subsections": [], "title": "The four categories" }, { "cite_extract_rate": 0, "cites": [], "content": "\\label{sec:evaluation}", "id": "a8c2927a-9391-4fe5-bfbd-005c3a30f0af", "level": "section", "origin_cites_number": 0, "parent_id": "87eb62ae-9acb-4832-a104-61ba9ef70d9a", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Evaluation" ]
], "subsections": [ "5747f0ab-ed9e-4d82-a79d-d86da09e62b3", "c2d5549a-5161-442f-b0da-e3872f937dfe", "d01f029d-62b7-4df0-a20e-18b57dcb305e" ], "title": "Evaluation" }, { "cite_extract_rate": 0, "cites": [], "content": "An entity matching system can be evaluated in several ways.\nSince it is fundamentally a very skewed binary classification problem with few positives compared to negatives,\nit is natural to use precision/recall measures -- popular metrics from information retrieval and machine learning classification.\nAnd while precision and recall (as well as accuracy) are sometimes reported,\nthe most prominent is to report and evaluate the matches using the $F_1$ measure.\nOf course, while simple, this metric only measure one aspect of an entity matching system.\nDifferent authors have therefore used additional metrics.\n focus on the important issue of how many training examples a model needs,\nand measure $F_1$ for different amounts of provided training examples.\nFocused on scalability,\n report wall-clock running time for different sizes of the data sources.\nIt is also possible to evaluate intermediate steps in isolation in addition to the end-to-end result.\nSome methods specifically target blocking,\nand so specifically measure the outcome of that step.\nOne could evaluate blocking the same way as the end result,\nusing $F_1$.\nBut remember that for blocking we are mainly interesting in getting high recall,\nand less interesting in precision as long as the number of candidate pairs $|C|$ is sufficiently lower than the size of the Cartesian product $|A \\times B|$.\nSo the surveyed methods report both recall and some variant of reduction ratio from data sources to candidate set.\n report $RR = \\frac{|C|}{|A \\times B|}$,\nwhile report $P/E = \\frac{|C|}{|A|}$.", "id": "5747f0ab-ed9e-4d82-a79d-d86da09e62b3", "level": "subsection", "origin_cites_number": 0, "parent_id": "a8c2927a-9391-4fe5-bfbd-005c3a30f0af", "prefix_titles": [ [ "title", "Neural Networks 
for Entity Matching: A Survey" ], [ "section", "Evaluation" ], [ "subsection", "Metrics" ] ], "subsections": [], "title": "Metrics" }, { "cite_extract_rate": 0, "cites": [], "content": "The early approaches to entity matching were mostly concerned with matching personal records (census data or medical records).\nSuch datasets are usually not publicly available due to privacy concerns.\nToday,\nmethods are evaluated on data from different domains.\nA substantial amount of reported results are from closed datasets,\nbut we are seeing more and more open datasets being used.\nSome of the most popular open datasets cover domains such as publications, restaurants, products, songs, and companies .\nSee Table~\ref{tab:public-datasets} for an overview of the most popular public datasets among the surveyed methods.\nIn order to test certain scenarios,\nsome authors also artificially construct new datasets based on existing ones,\neither by reducing the data quality \cite[e.g.,][]{Mudgal2018-cx} or constructing new records \cite[e.g.,][]{Ioannou2013-fi}.\n\begin{table}\n \centering\n \caption{\n Overview of public datasets that have been used by at least two of the surveyed publications.\n \# Records lists the number of records in each data source,\n \# Matches denotes the number of matches between them,\n and \# Pos / \# Candidates denotes the number of positive examples among all the record pair candidates in an agreed-upon subset of the potential matches.\n The latter is used when blocking is not part of the experiment.\n }\n \small\n \begin{tabular}{llcccM{17mm}}\n \toprule\n Dataset & Domain & \# Records & \# Attributes & \# Matches & \# Pos / \newline \# Candidates \\\n \midrule\n Abt-Buy & Product & 1081 - 1092 & 3 & 1096 & 1028/9575 \\ \n Amazon-Google & Software & 1363 - 3226 & 3 & 1300 & 1167/11460 \\ \n Beer & Beer & 3274 - 4345 & 4 & & 68/450 \\ \n DBLP-ACM & Citation & 2616 - 2294 & 4 & 2224 & 2220/12363 \\ \n DBLP-Scholar & 
Citation & 2616 - 64263 & 4 & 5347 & 5347/28707 \\ \n Fodors-Zagats & Restaurant & 533 - 331 & 6 & 112 & 110/946 \\ \n iTunes-Amazon & Music & 4875 - 5619 & 8 & & 132/539 \\ \n Walmart-Amazon & Electronics & 2554 - 22074 & 5 & 1154 & 962/10242 \\ \n \bottomrule\n \end{tabular}\n \label{tab:public-datasets}\n\end{table}", "id": "c2d5549a-5161-442f-b0da-e3872f937dfe", "level": "subsection", "origin_cites_number": 0, "parent_id": "a8c2927a-9391-4fe5-bfbd-005c3a30f0af", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Evaluation" ], [ "subsection", "Datasets" ] ], "subsections": [], "title": "Datasets" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "d01f029d-62b7-4df0-a20e-18b57dcb305e", "level": "subsection", "origin_cites_number": 0, "parent_id": "a8c2927a-9391-4fe5-bfbd-005c3a30f0af", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Evaluation" ], [ "subsection", "Experimental results" ] ], "subsections": [ "a9127df2-d130-4fba-b7a7-51134839e4ce", "1268ca02-5ffd-4712-95c6-14fd93d3cdcf" ], "title": "Experimental results" }, { "cite_extract_rate": 0, "cites": [], "content": "It has historically been hard to directly compare reported experimental results.\nThis is either because they do not target and measure the same aspect of the entity matching process, do not use the same data, or do not evaluate in the same way.\nFor example, several methods have been evaluated on some of the same datasets but used completely different train-test splits (both the actual selection and train-test ratio).\nThere is a general lack of widespread and agreed-upon benchmarks that would make comparison across many methods straightforward --- like we have seen in core computer vision and NLP tasks.\nTo a large degree, one has relied on authors to reimplement and run other methods within their own evaluation setup.\nSo for the majority of the surveyed methods, we cannot make any 
comparison using reported experimental data.\nIt is, however, possible to partially compare some of the more recent methods.\nFigure~\ref{fig:compare-experiments} shows which surveyed methods have experimental results that allow comparison with other methods.\nThis is mainly due to the adoption of some of the experiments done by as a benchmark,\nbut also due to authors of a method running new experiments on earlier methods in their evaluation setup.\nThe benchmark lets us compare (at least partially) several methods at once.\nTable~\ref{tab:mudgal-benchmark} presents an overview of all available experimental results on those specific benchmark experiments.\nWe observe that the Transformer-based networks, especially Ditto , seem to be the current state-of-the-art.\nDitto does use some domain-specific optimizations, but the authors report strong results for a baseline without those optimizations.\nNote that this benchmark only addresses certain aspects of the entity matching process.\nComparable results for aspects such as training time, prediction latency, and label efficiency cannot be collected in the same way for several methods --- \neven though some results and comparisons between pairs of surveyed methods have been reported.\nAnd neural methods addressing blocking, using active learning, or using transfer learning are not in a state where any significant overview can be provided.\n\begin{figure}\n \centering\n \includegraphics{Compare-Experiments.pdf}\n \caption{\n Overview of which of the surveyed methods have been compared experimentally to each other and which have at least partially been subject to the same benchmarks.\n Arrows indicate that the method at the head was compared to the method at the tail in the latter's publication.\n Note that they are not necessarily transitive, and they do not all test the same task (i.e., some compare blocking, others only matching after blocking).\n The blue area marks methods that have been tested on at least a subset 
of the public benchmark from .\n $^*$As pointed out by , the authors do model selection using the test set --- effectively leaking from the test set and making the results slightly unfit for comparison with others.\n }\n \label{fig:compare-experiments}\n\end{figure}\n\begin{table}[]\n \centering\n \caption{\n Overview of reported results from the surveyed methods on the public part of the benchmark from .\n We have also included reported results from Magellan (MG) as a state-of-the-art classical non-neural method.\n All experiments use the post-blocking record pair candidates described in Table~\ref{tab:public-datasets}.\n The \textit{structured} versions are the unaltered datasets where attributes are aligned,\n while in the \textit{dirty} variant other attributes are moved to the \texttt{title} attribute with a 50\% probability.\n For the \textit{textual} dataset, Abt-Buy, all attributes are long textual descriptions.\n $^+$We only report the Hybrid model, which is generally considered the strongest.\n $^\dagger$The results from are not comparable to the others due to a different setup, so reproduced results from are used instead.\n Note that those results do not involve the blocking method from .\n $^*$Not entirely fit for comparison since the authors do model selection using the test set.\n }\n \scriptsize\n \begin{tabular}{lcccccccc}\n \toprule\n Dataset & MG & DM$^+$ & DeepER$^\dagger$ & MPM & Kasai & S2SM & B\&S$^*$ & Ditto \\\n \midrule\n \textbf{Structured} & & & & & & & \\\n Amazon-Google & 49.1 & 69.3 & 56.08 & 70.7 & & & & 75.58 \\\n Beer & 78.8 & 72.7 & 50.00 & & & & & 94.37 \\\n DBLP-ACM & 98.4 & 98.4 & 97.63 & & 98.45 & 98.9 & & 98.99 \\\n DBLP-Scholar & 92.3 & 94.7 & 90.82 & & 92.94 & 95.3 & & 95.60 \\\n Fodors-Zagats & 100.0 & 100.0 & 97.67 & & & & & 100.0 \\\n iTunes-Amazon & 91.2 & 88.0 & 72.46 & & & & & 97.06 \\\n Walmart-Amazon & 71.9 & 66.9 & 50.62 & 73.6 & & 78.2 & & 86.76 \\\n \midrule \n \textbf{Dirty} & & 
& & & & & \\\n DBLP-ACM & 91.9 & 98.1 & 89.62 & & & 98.4 & 98.9 & 99.03 \\\n DBLP-Scholar & 82.5 & 93.8 & 86.07 & & & 94.1 & 95.6 & 95.75 \\\n iTunes-Amazon & 46.8 & 74.5 & 67.80 & & & & 94.2 & 95.65 \\\n Walmart-Amazon & 37.4 & 46.0 & 36.44 & & & 68.3 & 85.5 & 85.69 \\\n \midrule \n \textbf{Textual} & & & & & & & & \\\n Abt-Buy & 43.6 & 62.8 & 42.99 & & & & 90.9 & 89.33 \\\n \bottomrule\n \end{tabular}\n \label{tab:mudgal-benchmark}\n\end{table}", "id": "a9127df2-d130-4fba-b7a7-51134839e4ce", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "d01f029d-62b7-4df0-a20e-18b57dcb305e", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Evaluation" ], [ "subsection", "Experimental results" ], [ "subsubsection", "Comparing surveyed methods" ] ], "subsections": [], "title": "Comparing surveyed methods" }, { "cite_extract_rate": 0, "cites": [], "content": "Several of the surveyed methods have evaluated deep learning approaches against more traditional approaches .\nThe results are generally promising for deep learning approaches,\nbut traditional methods are often competitive.\nMost methods compare themselves to Magellan ,\nwhich is considered a state-of-the-art traditional entity matching system,\nusing $F_1$ scores.\n report that DeepMatcher mostly outperforms Magellan with a few exceptions (see parts of the results in Table~\ref{tab:mudgal-benchmark}).\nThe strength of DeepMatcher relative to the traditional method increases as the data quality decreases,\nand the same is reported about Seq2SeqMatcher .\nAutoBlock is found to be especially strong against traditional blocking techniques for dirty data.\nThis may suggest that deep learning approaches' strength is most clearly seen when the data is noisy and heterogeneous.\n investigate the use of transfer learning and active learning for their deep learning model.\nThey find Magellan to significantly outperform their model when given few 
labeled training examples,\nbut they were able to make it substantially more competitive using transfer learning and active learning.\nThe model still performs favorably in comparison to the traditional methods when given enough examples.", "id": "1268ca02-5ffd-4712-95c6-14fd93d3cdcf", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "d01f029d-62b7-4df0-a20e-18b57dcb305e", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Evaluation" ], [ "subsection", "Experimental results" ], [ "subsubsection", "Deep learning vs. traditional methods" ] ], "subsections": [], "title": "Deep learning vs. traditional methods" }, { "cite_extract_rate": 0, "cites": [], "content": "\label{sec:challenges}\nIn Section~\ref{sec:contributions-from-deep-learning}, we saw what contributions deep learning has made to entity matching.\nWe will now take a look at both the challenges and opportunities for deep neural networks in entity matching,\nboth of which represent potential directions for future research.", "id": "d304df7c-b0c7-4edd-a412-1e2739d185fb", "level": "section", "origin_cites_number": 0, "parent_id": "87eb62ae-9acb-4832-a104-61ba9ef70d9a", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Future research" ] ], "subsections": [ "32b04565-c736-41ab-81c2-600d40923327", "d499f831-6603-42f7-bfab-6f8be47bb1dd" ], "title": "Future research" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "32b04565-c736-41ab-81c2-600d40923327", "level": "subsection", "origin_cites_number": 0, "parent_id": "d304df7c-b0c7-4edd-a412-1e2739d185fb", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Future research" ], [ "subsection", "Challenges" ] ], "subsections": [ "a3de250c-9b8b-4389-958e-3125aafea6a5", "b035910a-4f11-4918-a7f5-34c02d1f2c64", "48a48485-152e-47ec-93bb-d3348ab82d77" ], "title": "Challenges" }, { "cite_extract_rate": 0, 
"cites": [], "content": "Entity matching, being a central data integration task,\nis constructing data models that are consumed by people or machines for downstream tasks.\nFor many applications, it is crucial to trust the data source,\nand being able to understand why something does not work is key.\nUnfortunately, deep learning models are notoriously hard to interpret.\nAs steps in the entity matching process increasingly coalesce into a large neural network, as illustrated in Figure~\\ref{fig:deep-learning-process},\nwe get fewer checkpoints along the way in the process that can easily be inspected.\nWe cannot see the output from each step in the same way anymore.\nTherefore, figuring out why two records where matched or not matched is usually nontrivial.\nThere are a few techniques that are already used, such as looking at alignment scores,\nbut we are still far away from a comprehensive way of debugging neural networks for entity matching.", "id": "a3de250c-9b8b-4389-958e-3125aafea6a5", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "32b04565-c736-41ab-81c2-600d40923327", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Future research" ], [ "subsection", "Challenges" ], [ "subsubsection", "Explainability and ease of debugging" ] ], "subsections": [], "title": "Explainability and ease of debugging" }, { "cite_extract_rate": 0, "cites": [], "content": "Human interaction is considered an important factor in entity matching .\nUsers cannot generally be expected to wait for very long when they are supposed to do interactive work,\nlimiting the potential applications of deep learning for entity matching in interactive settings, since the running time for both training and prediction is high -- especially compared to cleverly engineered traditional pipelines with good blocking.", "id": "b035910a-4f11-4918-a7f5-34c02d1f2c64", "level": "subsubsection", "origin_cites_number": 0, "parent_id": 
"32b04565-c736-41ab-81c2-600d40923327", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Future research" ], [ "subsection", "Challenges" ], [ "subsubsection", "Running time in interactive settings" ] ], "subsections": [], "title": "Running time in interactive settings" }, { "cite_extract_rate": 0, "cites": [], "content": "As more steps in the entity matching process are addressed by deep learning-based techniques,\nthe more different steps rely on more training data.\nDeep learning models are hungry for training examples,\nand there is not always an abundance of them readily available.\nEven while the use of deep learning can potentially reduce the need for manual labor in the form of feature engineering,\nit might still be necessary to do expensive manual labeling.\nWhile transfer learning and active learning could help,\nthere are not any pretrained entity matching models publicly available to train from.", "id": "48a48485-152e-47ec-93bb-d3348ab82d77", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "32b04565-c736-41ab-81c2-600d40923327", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Future research" ], [ "subsection", "Challenges" ], [ "subsubsection", "Number of training examples" ] ], "subsections": [], "title": "Number of training examples" }, { "cite_extract_rate": 0, "cites": [], "content": "", "id": "d499f831-6603-42f7-bfab-6f8be47bb1dd", "level": "subsection", "origin_cites_number": 0, "parent_id": "d304df7c-b0c7-4edd-a412-1e2739d185fb", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Future research" ], [ "subsection", "Opportunities" ] ], "subsections": [ "f504e097-d9ae-4128-99a2-cd5e3717a027", "e31a2af8-9093-4152-a862-fbb3388d68c7", "acf7bfce-6d1e-4d05-8a1a-e13b30e897cb", "f9fbd766-53a2-4492-95d7-05aff810878c" ], "title": "Opportunities" }, { "cite_extract_rate": 0, "cites": [], "content": "As 
we have seen,\ndeep neural networks are increasingly able to take over steps in the entity matching process.\nBut we have still not seen a method doing both schema matching and blocking together with record pair comparison and classification in an end-to-end solution with a neural network --\nin other words,\nan approach that implements the whole deep learning entity matching reference model in Figure~\ref{fig:deep-learning-process}.\nThe network would presumably be a \textit{flexible encoder},\nbelonging in the upper-right corner of our taxonomy.\nIt would have independent representations to make efficient blocking possible and non-attribute-aligned comparison to make schema matching possible.\nThis could be an important step toward tackling the entity matching problem in a streamlined end-to-end fashion.", "id": "f504e097-d9ae-4128-99a2-cd5e3717a027", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "d499f831-6603-42f7-bfab-6f8be47bb1dd", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Future research" ], [ "subsection", "Opportunities" ], [ "subsubsection", "End-to-end approach with schema matching and blocking" ] ], "subsections": [], "title": "End-to-end approach with schema matching and blocking" }, { "cite_extract_rate": 0, "cites": [], "content": "One of the fundamental challenges when trying to develop an entity matching method that will work across many datasets is the huge variation between domains and data sources.\nWhile several open datasets are available,\nwe would like to see significantly more.\nIt is especially important to increase the diversity of the domains represented.\nFor example, many datasets are related to either research publications or consumer-focused products/services -- there are no industrial datasets.\nThis would not only enable more complete evaluations, but also provide data suitable for transfer learning.", "id": "e31a2af8-9093-4152-a862-fbb3388d68c7", "level": 
"subsubsection", "origin_cites_number": 0, "parent_id": "d499f831-6603-42f7-bfab-6f8be47bb1dd", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Future research" ], [ "subsection", "Opportunities" ], [ "subsubsection", "More open datasets" ] ], "subsections": [], "title": "More open datasets" }, { "cite_extract_rate": 0, "cites": [], "content": "It is not always easy to compare the different methods since they do not necessarily evaluate on the same data in the same way.\nWe have seen how standard open datasets with corresponding standard evaluation metrics have been a catalyst for advances in machine learning for fields such as computer vision , recommender systems and natural language processing .\nWe think there is great potential in similar agreed-upon benchmarks for entity matching.\nThis need not only be restricted to traditional precision/recall measures for the resulting matches.\nIt would also be of interest to standardize the evaluation of blocking techniques, transfer learning techniques, active learning techniques, and computational performance and efficiency.", "id": "acf7bfce-6d1e-4d05-8a1a-e13b30e897cb", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "d499f831-6603-42f7-bfab-6f8be47bb1dd", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Future research" ], [ "subsection", "Opportunities" ], [ "subsubsection", "Standardized benchmarks" ] ], "subsections": [], "title": "Standardized benchmarks" }, { "cite_extract_rate": 0, "cites": [], "content": "While there has been work on transfer learning ,\nthere are no pretrained models publicly available specifically for entity matching.\nHaving pretrained models to fine-tune could potentially speed up training and reduce the amount of necessary training examples.\nThis is challenging, because, as mentioned above, there is a huge variation across domains and data sources.\nEach data source is 
different.\nBuilding pretrained models that can be fine-tuned for a broad range of datasets and making them publicly available would be of huge benefit to the field.\nThis would, of course, benefit from the previous point about more open datasets.\nAn interesting approach for those networks with attribute-aligned comparison,\nexplored by ,\nis to have pretrained models for different types of attributes (e.g., names, addresses, organizations) in addition to generic ones.\nThen one can mix and match models for each attribute one is about to match.\nThis, of course, only works for aligned schemas.\nIt can be even more interesting to look at networks with non-attribute-aligned comparison for such pretrained models,\nsince they can potentially offer pretrained schema matching,\nwhich will be crucial to handle the large variety of data source schemas out there.\nFor this, Transformer-based networks have considerable potential since they are already built on top of heavily pretrained language models.\nPretrained models can belong to any of the four categories of our taxonomy.\nNonetheless, we find pretrained flexible encoders that can support both blocking and schema matching to be the most significant opportunity,\nspecifically because they would enable transfer learning jointly on the largest possible portion of the entity matching process.", "id": "f9fbd766-53a2-4492-95d7-05aff810878c", "level": "subsubsection", "origin_cites_number": 0, "parent_id": "d499f831-6603-42f7-bfab-6f8be47bb1dd", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Future research" ], [ "subsection", "Opportunities" ], [ "subsubsection", "Publicly available pretrained models" ] ], "subsections": [], "title": "Publicly available pretrained models" }, { "cite_extract_rate": 0, "cites": [], "content": "We have seen how existing work has used neural networks in entity matching.\nBy using a reference model of the traditional entity matching process,\nwe 
have shown how the surveyed methods address different steps of the process and which techniques have been used at each step.\nMore recently, approaches based on deep learning techniques for natural language processing have emerged.\nWe looked at what such deep learning methods contribute to the entity matching task.\nThe central contribution is powerful hierarchical representation learning from text.\nAs we have seen,\nthis can alleviate most of the handcrafted feature engineering necessary in the data preprocessing,\nwhich can ease the burden of manually tailored procedures in downstream steps such as record pair comparison.\nFurthermore,\nit is a driver in increasingly coalescing steps of the entity matching process into end-to-end neural networks performing several steps in one go,\neffectively reducing the number of steps and enabling end-to-end training.\nTo give a clear view of how the entity matching process changes with such an increasingly coalesced deep learning approach, we propose a reference model for entity matching processes using deep learning.\nTo differentiate the deep neural networks used in the surveyed methods,\nwe introduce a taxonomy of deep neural networks for entity matching.\nIt focuses on two properties that are important in regard to how easy it is to support schema matching and blocking.\nLastly, we looked at potential directions for future research by discussing challenges and opportunities.\nThe challenges are explainability, running time in interactive settings, and the large need for training examples.\nAmong the opportunities, we think it would be interesting to develop a complete end-to-end approach with both schema matching and blocking,\nexploring a new part of the deep neural network taxonomy.\nWe also see a lot of potential in trying to develop more open datasets, standardized benchmarks, and publicly available pretrained models for entity matching --- which have been important for other fields.\n\begin{acks}\nThis work is 
supported by \\grantsponsor{cognite}{Cognite}{https://cognite.com} and \\grantsponsor{fr}{the Research Council of Norway}{https://www.forskningsradet.no} under Project \\grantnum{fr}{298998}.\nWe thank the reviewers for valuable feedback and Carl F. Straumsheim for proofreading.\n\\end{acks}\n\\bibliographystyle{ACM-Reference-Format}\n\\bibliography{bib}\n\\end{document}\n\\endinput", "id": "19acfdb5-671d-4d5c-949d-07b6acad746e", "level": "section", "origin_cites_number": 0, "parent_id": "87eb62ae-9acb-4832-a104-61ba9ef70d9a", "prefix_titles": [ [ "title", "Neural Networks for Entity Matching: A Survey" ], [ "section", "Conclusion" ] ], "subsections": [], "title": "Conclusion" } ]
126
[]
null